Search Results

Search found 3942 results on 158 pages for 'logged'.


  • problem with logout script in php

    - by user225269
    I'm a beginner in PHP, and I am trying to create a login and logout. But I am having problems with logging out. My logout just calls the login form, which is this: <? session_start(); session_destroy(); ?> <table width="300" border="0" align="center" cellpadding="0" cellspacing="1" bgcolor="#CCCCCC"> <tr> <form name="form1" method="post" action="checklogin.php"> <td> <table width="100%" border="0" cellpadding="3" cellspacing="1" bgcolor="#FFFFFF"> <tr> <td colspan="3"><strong>Member Login </strong></td> </tr> <tr> <td width="78">Username</td> <td width="6">:</td> <td width="294"><input name="myusername" type="text" id="myusername"></td> </tr> <tr> <td>Password</td> <td>:</td> <td><input name="mypassword" type="text" id="mypassword"></td> </tr> <tr> <td>&nbsp;</td> <td>&nbsp;</td> <td><input type="submit" name="Submit" value="Login"></td> </tr> </table> </td> </form> </tr> </table> My problem is that when I press the back button in the browser, whoever is using the browser can still access pages that should not be accessible when no user is logged in. Do I need to add code on the user page? I have this code on the user page: <? session_start(); if(!session_is_registered(myusername)){ header("location:main_login.php"); } ?> What can you recommend so that the script prompts for the username and password again when a user clicks the back button?
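    One common fix, sketched below in PHP under a few assumptions (the login script stores the username in $_SESSION['myusername'], and main_login.php is the login form): destroy the session on logout, and have every protected page both send no-cache headers, so the back button cannot serve a cached copy, and check the session directly instead of using the deprecated session_is_registered().

      <?php
      // logout.php - minimal sketch; file and session key names are assumptions
      session_start();
      $_SESSION = array();   // clear all session data
      session_destroy();     // drop the session itself
      header("Location: main_login.php");
      exit;
      ?>

      <?php
      // top of every protected page: block cached copies and unauthenticated access
      session_start();
      header("Cache-Control: no-store, no-cache, must-revalidate");
      header("Pragma: no-cache");
      if (empty($_SESSION['myusername'])) {
          header("Location: main_login.php");
          exit;
      }
      ?>

    With this in place, pressing the back button after logging out re-requests the page, the session check fails, and the browser is redirected to the login form.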

    Read the article

  • NHibernate and MySql is inserting and Selecting, not updating

    - by Chris Brandsma
    Something strange is going on with NHibernate for me. I can select, and I can insert. But I can't do and update against MySql. Here is my domain class public class UserAccount { public virtual int Id { get; set; } public virtual string UserName { get; set; } public virtual string Password { get; set; } public virtual bool Enabled { get; set; } public virtual string FirstName { get; set; } public virtual string LastName { get; set; } public virtual string Phone { get; set; } public virtual DateTime? DeletedDate { get; set; } public virtual UserAccount DeletedBy { get; set; } } Fluent Mapping public class UserAccountMap : ClassMap<UserAccount> { public UserAccountMap() { Table("UserAccount"); Id(x => x.Id); Map(x => x.UserName); Map(x => x.Password); Map(x => x.FirstName); Map(x => x.LastName); Map(x => x.Phone); Map(x => x.DeletedDate); Map(x => x.Enabled); } } Here is how I'm creating my Session Factory var dbconfig = MySQLConfiguration .Standard .ShowSql() .ConnectionString(a => a.FromAppSetting("MySqlConnStr")); FluentConfiguration config = Fluently.Configure() .Database(dbconfig) .Mappings(m => { var mapping = m.FluentMappings.AddFromAssemblyOf<TransactionDetail>(); mapping.ExportTo(mappingdir); }); and this is my NHibernate code: using (var trans = Session.BeginTransaction()) { var user = GetById(userId); user.Enabled = false; user.DeletedDate = DateTime.Now; user.UserName = "deleted_" + user.UserName; user.Password = "--removed--"; Session.Update(user); trans.Commit(); } No exceptions are being thrown. No queries are being logged. Nothing.

    Read the article

  • Magento - How to select mysql rows by max value?

    - by Damodar Bashyal
    mysql> SELECT * FROM `log_customer` WHERE `customer_id` = 224 LIMIT 0, 30;
    +--------+------------+-------------+---------------------+-----------+----------+
    | log_id | visitor_id | customer_id | login_at            | logout_at | store_id |
    +--------+------------+-------------+---------------------+-----------+----------+
    |    817 |      50139 |         224 | 2011-03-21 23:56:56 | NULL      |        1 |
    |    830 |      52317 |         224 | 2011-03-27 23:43:54 | NULL      |        1 |
    |   1371 |     136549 |         224 | 2011-11-16 04:33:51 | NULL      |        1 |
    |   1495 |     164024 |         224 | 2012-02-08 01:05:48 | NULL      |        1 |
    |   2130 |     281854 |         224 | 2012-11-13 23:44:13 | NULL      |        1 |
    +--------+------------+-------------+---------------------+-----------+----------+
    5 rows in set (0.00 sec)
    mysql> SELECT * FROM `customer_entity` WHERE `entity_id` = 224;
    +-----------+----------------+---------------------------+----------+---------------------+---------------------+
    | entity_id | entity_type_id | email                     | group_id | created_at          | updated_at          |
    +-----------+----------------+---------------------------+----------+---------------------+---------------------+
    |       224 |              1 | [email protected]         |        3 | 2011-03-21 04:59:17 | 2012-11-13 23:46:23 |
    +-----------+----------------+---------------------------+----------+---------------------+---------------------+
    1 row in set (0.00 sec)
    How can I search for customers who haven't logged in for the last 10 months and whose account has not been updated for the last 10 months? I tried the code below but it failed.
    $collection = Mage::getModel('customer/customer')->getCollection();
    $collection->getSelect()->joinRight(array('l'=>'log_customer'), "customer_id=entity_id AND MAX(l.login_at) <= '" . date('Y-m-d H:i:s', strtotime('10 months ago')) . "'")->group('e.entity_id');
    $collection->addAttributeToSelect('*');
    $collection->addFieldToFilter('updated_at', array( 'lt' => date('Y-m-d H:i:s', strtotime('10 months ago')), 'datetime'=>true, ));
    $collection->addAttributeToFilter('group_id', array( 'neq' => 5, ));
    The tables above show one customer for reference. I have no idea how to use MAX() in joins. Thanks.
    UPDATE: The raw SQL below seems to return correct data, but I would like to do it the Magento way using a resource collection, so I don't need to load each customer again inside the loop.
    $read = Mage::getSingleton('core/resource')->getConnection('core_read');
    $sql = "select * from ( select e.*,l.login_at from customer_entity as e left join log_customer as l on l.customer_id=e.entity_id group by e.entity_id order by l.login_at desc ) as l where ( l.login_at <= '".date('Y-m-d H:i:s', strtotime('10 months ago'))."' or ( l.created_at <= '".date('Y-m-d H:i:s', strtotime('10 months ago'))."' and l.login_at is NULL ) ) and group_id != 5";
    $result = $read->fetchAll($sql);
    I have uploaded the full shell script to GitHub: https://github.com/dbashyal/Magento-ecommerce-Shell-Scripts/blob/master/shell/suspendCustomers.php
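    One way to express this with a resource collection, sketched below under some assumptions (Magento 1.x, the log_customer table name used literally rather than resolved via getTableName(), and group 5 excluded as in the original attempt): aggregate the log rows and filter on the aggregate with HAVING, since MAX() cannot appear in a JOIN condition, which is why the joinRight() attempt failed.

      <?php
      // Hedged sketch: customers with no login in 10 months (or none at all)
      // and no account update in 10 months.
      $tenMonthsAgo = date('Y-m-d H:i:s', strtotime('10 months ago'));

      $collection = Mage::getModel('customer/customer')->getCollection()
          ->addAttributeToSelect('*')
          ->addAttributeToFilter('group_id', array('neq' => 5))
          ->addAttributeToFilter('updated_at', array('lt' => $tenMonthsAgo, 'datetime' => true));

      $collection->getSelect()
          ->joinLeft(
              array('l' => 'log_customer'),          // assumption: raw table name
              'l.customer_id = e.entity_id',
              array('last_login' => 'MAX(l.login_at)')
          )
          ->group('e.entity_id')
          ->having('MAX(l.login_at) <= ? OR MAX(l.login_at) IS NULL', $tenMonthsAgo);

      foreach ($collection as $customer) {
          // each $customer is already populated, so no extra load() is needed in the loop
          echo $customer->getId() . ' ' . $customer->getEmail() . PHP_EOL;
      }

    The HAVING clause is the piece that replaces the MAX() in the join condition; whether to also compare created_at for customers who never logged in, as the raw SQL does, is a judgment call.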

    Read the article

  • Problem carrying Session over to other pages

    - by AAA
    I am able to login a user, but while processing to the next page (memebers area) I can't display any user info let alone print the $_SESSION[email]. I am not sure what's up. Below is the login code and the testing members are page. Login page: session_start(); //also in a real app you would get the id dynamically $sql = "select `email`, `password` from `accounts` where `email` = '$_POST[email]'"; $query = mysql_query($sql) or die ("Error: ".mysql_error()); while ($row = mysql_fetch_array($query)){ $email = $row['email']; $secret = $row['password']; //we will echo these into the proper fields } mysql_free_result($query); // Process the POST variables $email = $_POST["email"]; //Variables $_SESSION["email"] = $_POST["email"]; $secret = $info['password']; //Checks if there is a login cookie if(isset($_COOKIE['ID_my_site'])) //if there is, it logs you in and directes you to the members page { $email = $_COOKIE['ID_my_site']; $pass = $_COOKIE['Key_my_site']; $check = mysql_query("SELECT email, password FROM accounts WHERE email = '$email'")or die(mysql_error()); while($info = mysql_fetch_array( $check )) { if (@ $info['password'] != $pass) { } else { header("Location: home.php"); } } } //if the login form is submitted if (isset($_POST['submit'])) { // if form has been submitted // makes sure they filled it in if(!$_POST['email'] | !$_POST['password']) { die('You did not fill in a required field.'); } // checks it against the database if (!get_magic_quotes_gpc()) { $_POST['email'] = addslashes($_POST['email']); } $check = mysql_query("SELECT email,password FROM accounts WHERE email = '".$_POST['email']."'")or die(mysql_error()); //Gives error if user dosen't exist $check2 = mysql_num_rows($check); if ($check2 == 0) { die('That user does not exist in our database. <a href=add.php>Click Here to Register</a>'); } while($info = mysql_fetch_array( $check )) //gives error if the password is wrong if (@ $_POST['password'] != $info['password']) { die('Incorrect password, please try again'); } else { // if login is ok then we add a cookie $_POST['email'] = stripslashes($_POST['email']); $hour = time() + 3600; setcookie(ID_my_site, $_POST['email'], $hour); setcookie(Key_my_site, $_POST['password'], $hour); //then redirect them to the members area header("Location: home.php"); } } } else { // if they are not logged in ?> <?php } ?> home.php session_start(); if(!isset($_SESSION['email'])) { header('Location: login_test3.php'); die('<a href="login_test3.php">Login first!</a>'); } //Variables $_SESSION["email"] = $email; print $_SESSION['name']; UPDATE Just realized the existing code gets in to the home.php file but will not echo anything. But as soon as you hit refresh the session is gone.
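    A stripped-down sketch of the same flow is below, with the assumptions called out: the accounts table, plain-text password comparison, and the mysql_* functions come straight from the question (a real application should use password hashes and parameterized mysqli/PDO queries), the file names login_test3.php and home.php are the ones mentioned above, and the key stored at login ($_SESSION['email']) is exactly the key read on the next page. The posted home.php reassigns $_SESSION["email"] = $email with an undefined $email and prints $_SESSION['name'], which was never set; that reassignment is what blanks the session on the next refresh.

      <?php
      // login_test3.php - minimal sketch; session_start() must run before any output
      session_start();

      if (isset($_POST['submit'])) {
          $email    = mysql_real_escape_string($_POST['email']);
          $password = $_POST['password'];   // real apps: compare password hashes instead

          $result = mysql_query("SELECT email, password FROM accounts WHERE email = '$email'")
              or die(mysql_error());
          $info = mysql_fetch_array($result);

          if ($info && $info['password'] === $password) {
              $_SESSION['email'] = $info['email'];   // this exact key is read on home.php
              header("Location: home.php");
              exit;
          }
          die('Incorrect email or password.');
      }
      ?>

      <?php
      // home.php - read the session, never overwrite it here
      session_start();
      if (!isset($_SESSION['email'])) {
          header('Location: login_test3.php');
          exit;
      }
      echo 'Logged in as ' . htmlspecialchars($_SESSION['email']);
      ?>

    The exit after each header("Location: ...") also matters; without it the rest of the script keeps executing after the redirect is sent.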

    Read the article

  • best alternative to in-definition initialization of static class members? (for SVN keywords)

    - by Jeff
    I'm storing expanded SVN keyword literals for .cpp files in 'static char const *const' class members and want to store the .h descriptions as similarly as possible. In short, I need to guarantee single instantiation of a static member (presumably in a .cpp file) to an auto-generated non-integer literal living in a potentially shared .h file. Unfortunately the language makes no attempt to resolve multiple instantiations resulting from assignments made outside class definitions and explicitly forbids non-integer inits inside class definitions. My best attempt (using static-wrapping internal classes) is not too dirty, but I'd really like to do better. Does anyone have a way to template the wrapper below or have an altogether superior approach? // Foo.h: class with .h/.cpp SVN info stored and logged statically class Foo { static Logger const verLog; struct hInfoWrap; public: static hInfoWrap const hInfo; static char const *const cInfo; }; // Would like to eliminate this per-class boilerplate. struct Foo::hInfoWrap { hInfoWrapper() : text("$Id$") { } char const *const text; }; ... // Foo.cpp: static inits called here Foo::hInfoWrap const Foo::hInfo; char const *const Foo::cInfo = "$Id$"; Logger const Foo::verLog(Foo::cInfo, Foo::hInfo.text); ... // Helper.h: output on construction, with no subsequent activity or stored fields class Logger { Logger(char const *info1, char const *info2) { cout << info0 << endl << info1 << endl; } }; Is there a way to get around the static linkage address issue for templating the hInfoWrap class on string literals? Extern char pointers assigned outside class definitions are linguistically valid but fail in essentially the same manner as direct member initializations. I get why the language shirks the whole resolution issue, but it'd be very convenient if an inverted extern member qualifier were provided, where the definition code was visible in class definitions to any caller but only actually invoked at the point of a single special declaration elsewhere. Anyway, I digress. What's the best solution for the language we've got, template or otherwise? Thanks!

    Read the article

  • How To Start Your Own Professional Blog with WordPress

    - by Matthew Guay
    Would you like to start your own blog or website?  With a free WordPress  account, it’s free and easy to get started creating your own professional quality blog site. This is the first part in a series on how to create your own professional quality blog site. No, we’re not talking about some cheapo looking blog from Blogger or something on Facebook, but creating a quality blog you can be proud of and present to millions of readers online. WordPress is one of the most popular blogging platforms, powering hundreds of high-profile websites and blogs around the world.  It’s both powerful and easy to use, which makes it great whether you’re just starting out or are a blogging pro.  To start out with your blogging project WordPress is completely free, and you can use the online interface or install the WordPress software on your own server and blog from there. Getting Started You can start a blog in just a few minutes.  Head over to WordPress.com and click Sign up now on the right-hand side of the main page. Enter a username and password, check that you agree with the legal terms, select the “Gimme a blog” bullet, and click Next. WordPress may inform you that your username is already taken, simply choose a new one and try again. Next, choose a domain for your blog.  This will be the address for your site, and cannot be changed, so be sure to choose exactly what you want.  If you’d prefer your address to be yourname.com instead of yourname.wordpress.com, you can add your own domain for a fee after your blog is setup…but we’ll cover that later. Once you click signup, you will be sent a confirmation email.  While you wait for the email to arrive you can go ahead and enter in your name and a short bio about yourself. When you receive your confirmation email, click the link.  Congratulations; you now have your own blog! You can view your new blog immediately, though the default theme isn’t very interesting without your content and pictures. Back on the page you opened from the email, click Login to access your blog’s administration page and to start adding stuff to your blog.  You can also access your blog’s admin page anytime by from yourname.wordpress.com/admin, substituting your own blog name for yourname. Enter your username and password, then click Log in to get started. Adding Content to your WordPress.com Blog When you sign in to your WordPress blog, you’ll first see the WordPress Admin page.  Here you can see recent posts and comments, and you can see stats of how many people have visited your site.  You can also access all of your blog tools and settings right from this page. To add a new post to your blog, click the Posts link on the left, then click “Add New” either on the left menu or on the top of the Edit Posts page.  Or, if you want to edit the default first post, hover over it and select Edit. Or click the New Posts button on the top of the page.  This menu bar is always visible whenever you’re logged in, so it’s an easy way to add a post. The editor lets you easily write anything you want in a Microsoft Word-style editor.  You can format your text, add lists, links, quotes, and more.  When you’re ready to share your content with the world, click Publish on the right side. To add pictures or other files, click the picture icon beside “Upload/Insert”.  Your free blog account can store up to 3Gb of pictures and documents which will definitely give you a good start. Click Select Files, and then choose the pictures or documents you want to add to your post. 
When the pictures have uploaded, you can add a caption and choose how to position the picture.  When you’re finished, select “Insert into Post”.   Or, if you want to add a video, click the video button.  You have to add a paid upgrade to upload videos directly, but you can add YouTube and other online videos for free. Click the “From URL” tab, and then paste the link to the YouTube video and click Insert into post. If you’re a code geek, click the HTML tab in the editor and edit the HTML of your blog post the geeky way. Once you’ve added all your content and edited it the way you want, click the Publish button on the right of the editor.  Or, you can click Preview to make sure it looks right, and then click Publish. Here’s our blog with the new blog post containing a picture and video.  While you’re getting to know your way around the controls in WordPress, the Preview feature will be your best friend as you organize the content to your liking.   Conclusion It only takes a couple of minutes to get started blogging at WordPress.com. Whether you want to write about your daily life, share pictures of your children, or review the latest books and gadgets, WordPress.com is a great place to get started for free.  We’ve only covered a small portion of the WordPress features, but this should get you started. Check back for more WordPress and blogging coverage coming up soon! Links Signup for a free WordPress.com account

    Read the article

  • Generating a twitter OAuth access key - the semi-manual way

    - by Piet
    [UPDATE] Apparently someone at Twitter was listening, or I’m going senile/blind. Let’s call it a combination of both. Instead of following all the steps below, you could just login with the Twitter account you want to use on http://dev.twitter.com, register your application and then click ‘Edit Details’ on the application overview page at http://dev.twitter.com/apps. Next click the ‘Application detail’ button on the right, followed by the ‘My Access Token’ button in order to get your Access Token and Access Token Secret. This makes the old post below rather obsolete. Clearly a case of me thinking everything is a nail and ruby is a hammer (don’t they usually say this about java coders?) [ORIGINAL POST] OAuth is great! OAuth allows your application to use your user’s data without the need to ask for their password. So Twitter made the API much safer for their and your users. Hurray! Free pizza for everyone! Unless of course you’re using the Twitter API for your own needs like running your own bot and don’t need access to other user’s data. In such cases a simple username/password combination is more than enough. I can understand however that the Twitter guys don’t really care that much about these exceptions(?). Most such uses for the API are probably rather spammy in nature. !!! If you have a twitter app that uses the API to access external user’s data: look for another solution. This solution is ONLY meant when you ONLY need access to your own account(s) through the API. Other Solutions Mr Dallas Devries posted a solution here which involves requesting and scraping a one-time PIN. But: I like to minimize the amount of calls I make to twitter’s API or pages to lessen my chances of meeting the fail whale. Also, as soon as the pin isn’t included in a div called ‘oauth_pin’ anymore, this will fail. However, mr Devries’ post was a starting point for my solution, so I’m much obliged to him posting his findings. Authenticating with the Twitter API: old vs new Acessing The Twitter API the old way: require ‘twitter’ httpauth = Twitter::HTTPAuth.new('my_account','my_secret_password') client = Twitter::Base.new(httpauth) client.update(‘Hurray!’) The OAuth way: require 'twitter' oauth = Twitter::OAuth.new('ve4whatafuzzksaMQKjoI', 'KliketyklikspQ6qYALcuNandsomemored8pQ6qYALIG7mbEQY') oauth.authorize_from_access('123-owhfmeyAgfozdyt5hDeprSevsWmPo5rVeroGfsthis', 'fGiinCdqtehMeehiddenymDeAsasaawgGeryye8amh') client = Twitter::Base.new(oauth) client.update(‘Hurray!’) In the above case, ve4whatafuzzksaMQKjoI is the ‘consumer key’ (sometimes also referred to as ‘consumer token’) and KliketyklikspQ6qYALcuNandsomemored8pQ6qYALIG7mbEQY is the ‘consumer secret’. You’ll get these from Twitter when you register your app. 123-owhfmeyAgfozdyt5hDeprSevsWmPo5rVeroGfsthis is the ‘access token’ and fGiinCdqtehMeehiddenymDeAsasaawgGeryye8amh is the ‘access secret’. This combination gives the registered application access to your account. I’ll show you how to obtain these by following the steps below. (Basically you’ll need a bunch of keys and you’ll have to jump a bit through hoops to obtain them for your server/bot. ) How to get these keys 1. Surf to the twitter apps registration page go to http://dev.twitter.com/apps to register your app. Login with your twitter account. 2. Register your application Enter something for Application name, Description, website,… as I said: they make you jump through hoops. 
If you plan on using the API to post tweets, Your application name and website will be used in the ‘5 minutes ago via…’ line below your tweet. You could use the this to point to a page with info about your bot, or maybe it’s useful for SEO purposes. For application type I choose ‘browser’ and entered http://www.hadermann.be/callback as a ‘Callback URL’. This url returns a 404 error, which is ideal because after giving our account access to our ‘application’ (step 6), it will redirect to this url with an ‘oauth_token’ and ‘oauth_verifier’ in the url. We need to get these from the url. It doesn’t really matter what you enter here though, you could leave it blank because you need to explicitely specify it when generating a request token. You probably want read&write access so set this at ‘Default Access type’. 3. Get your consumer key and consumer secret On the next page, copy/paste your ‘consumer key’ and ‘consumer secret’. You’ll need these later on. You also need these as part of the authentication in your script later on: oauth = Twitter::OAuth.new([consumer key], [consumer secret]) 4. Obtain your request token run the following in IRB to obtain your ‘request token’ Replace my fake consumer key and consumer secret with the one you obtained in step 3. And use something else instead http://www.hadermann.be/callback: although this will only give a 404, you shouldn’t trust me. irb(main):001:0> require 'oauth' irb(main):002:0> c = OAuth::Consumer.new('ve4whatafuzzksaMQKjoI', 'KliketyklikspQ6qYALcuNandsomemored8pQ6qYALIG7mbEQY', {:site => 'http://twitter.com'}) irb(main):003:0> request_token = c.get_request_token(:oauth_callback => 'http://www.hadermann.be/callback') irb(main):004:0> request_token.token => "UrperqaukeWsWt3IAlfbxzyBUFpwWIcWkHP94QH2C1" This (UrperqaukeWsWt3IAlfbxzyBUFpwWIcWkHP94QH2C1) is the request token: Copy/paste this token, you will need this next. 5. Authorize your application surf to https://api.twitter.com/oauth/authorize?oauth_token=[the above token], for example: https://api.twitter.com/oauth/authorize?oauth_token=UrperqaukeWsWt3IAlfbxzyBUFpwWIcWkHP94QH2C1 This will bring you to the ‘An application would like to connect to your account’- screen on Twitter where you can grant access to the app you just registered. If you aren’t still logged in, you need to login first. Click ‘Allow’. Unless you don’t trust yourself. 6. Get your oauth_verifier from the redirected url Your browser will be redirected to your callback url, with an oauth_token and oauth_verifier parameter appended. You’ll need the oauth_verifier. In my case the browser redirected to: http://www.hadermann.be/callback?oauth_token=UrperqaukeWsWt3IAlfbxzyBUFpwWIcWkHP94QH2C1&oauth_verifier=waoOhKo8orpaqvQe6rVi5fti4ejr8hPeZrTewyeag Which returned a 404, giving me the chance to copy/paste my oauth_verifier: waoOhKo8orpaqvQe6rVi5fti4ejr8hPeZrTewyeag 7. Request an access token Back to irb, use the oauth_verifier to request an access token, as follows: irb(main):005:0> at = request_token.get_access_token(:oauth_verifier => 'waoOhKo8orpaqvQe6rVi5fti4ejr8hPeZrTewyeag') irb(main):006:0> at.params[:oauth_token] => "123-owhfmeyAgfozdyt5hDeprSevsWmPo5rVeroGfsthis" irb(main):007:0> at.params[:oauth_token_secret] => "fGiinCdqtehMeehiddenymDeAsasaawgGeryye8amh" We’re there! 123-owhfmeyAgfozdyt5hDeprSevsWmPo5rVeroGfsthis is the access token. fGiinCdqtehMeehiddenymDeAsasaawgGeryye8amh is the access secret. Try it! 
Try the following to post an update: require 'twitter' oauth = Twitter::OAuth.new('ve4whatafuzzksaMQKjoI', 'KliketyklikspQ6qYALcuNandsomemored8pQ6qYALIG7mbEQY') oauth.authorize_from_access('123-owhfmeyAgfozdyt5hDeprSevsWmPo5rVeroGfsthis', 'fGiinCdqtehMeehiddenymDeAsasaawgGeryye8amh') client = Twitter::Base.new(oauth) client.update(‘Cowabunga!’) Now you can go to your twitter page and delete the tweet if you want to.

    Read the article

  • Week in Geek: US Govt E-card Scam Siphons Confidential Data Edition

    - by Asian Angel
    This week we learned how to “back up photos to Flickr, automate repetitive tasks, & normalize MP3 volume”, enable “stereo mix” in Windows 7 to record audio, create custom papercraft toys, read up on three alternatives to Apple’s flaky iOS alarm clock, decorated our desktops & app docks with Google icon packs, and more. Photo by alexschlegel. Random Geek Links It has been a busy week on the security & malware fronts and we have a roundup of the latest news to help keep you updated. Photo by TopTechWriter.US. US govt e-card scam hits confidential data A fake U.S. government Christmas e-card has managed to siphon off gigabytes of sensitive data from a number of law enforcement and military staff who work on cybersecurity matters, many of whom are involved in computer crime investigations. Security tool uncovers multiple bugs in every browser Michal Zalewski reports that he discovered the vulnerability in Internet Explorer a while ago using his cross_fuzz fuzzing tool and reported it to Microsoft in July 2010. Zalewski also used cross_fuzz to discover bugs in other browsers, which he also reported to the relevant organisations. Microsoft to fix Windows holes, but not ones in IE Microsoft said that it will release two security bulletins next week fixing three holes in Windows, but it is still investigating or working on fixing holes in Internet Explorer that have been reportedly exploited in attacks. Microsoft warns of Windows flaw affecting image rendering Microsoft has warned of a Windows vulnerability that could allow an attacker to take control of a computer if the user is logged on with administrative rights. Windows 7 Not Affected by Critical 0-Day in the Windows Graphics Rendering Engine While confirming that details on a Critical zero-day vulnerability have made their way into the wild, Microsoft noted that customers running the latest iteration of Windows client and server platforms are not exposed to any risks. Microsoft warns of Office-related malware Microsoft’s Malware Protection Center issued a warning this week that it has spotted malicious code on the Internet that can take advantage of a flaw in Word and infect computers after a user does nothing more than read an e-mail. *Refers to a flaw that was addressed in the November security patch releases. Make sure you have all of the latest security updates installed. Unpatched hole in ImgBurn disk burning application According to security specialist Secunia, a highly critical vulnerability in ImgBurn, a lightweight disk burning application, can be used to remotely compromise a user’s system. Hole in VLC Media Player Virtual Security Research (VSR) has identified a vulnerability in VLC Media Player. In versions up to and including 1.1.5 of the VLC Media Player. Flash Player sandbox can be bypassed Flash applications run locally can read local files and send them to an online server – something which the sandbox is supposed to prevent. Chinese auction site touts hacked iTunes accounts Tens of thousands of reportedly hacked iTunes accounts have been found on Chinese auction site Taobao, but the company claims it is unable to take action unless there are direct complaints. What happened in the recent Hotmail outage Mike Schackwitz explains the cause of the recent Hotmail outage. DOJ sends order to Twitter for Wikileaks-related account info The U.S. 
Justice Department has obtained a court order directing Twitter to turn over information about the accounts of activists with ties to Wikileaks, including an Icelandic politician, a legendary Dutch hacker, and a U.S. computer programmer. Google gets court to block Microsoft Interior Department e-mail win The U.S. Federal Claims Court has temporarily blocked Microsoft from proceeding with the $49.3 million, five-year DOI contract that it won this past November. Google Apps customers get email lockdown Companies and organisations using Google Apps are now able to restrict the email access of selected users. LibreOffice Is the Default Office Suite for Ubuntu 11.04 Matthias Klose has announced some details regarding the replacement of the old OpenOffice.org 3.2.1 packages with the new LibreOffice 3.3 ones, starting with the upcoming Ubuntu 11.04 (Natty Narwhal) Alpha 2 release. Sysadmin Geek Tips Photo by Filomena Scalise. How to Setup Software RAID for a Simple File Server on Ubuntu Do you need a file server that is cheap and easy to setup, “rock solid” reliable, and has Email Alerting? This tutorial shows you how to use Ubuntu, software RAID, and SaMBa to accomplish just that. How to Control the Order of Startup Programs in Windows While you can specify the applications you want to launch when Windows starts, the ability to control the order in which they start is not available. However, there are a couple of ways you can easily overcome this limitation and control the startup order of applications. Random TinyHacker Links Using Opera Unite to Send Large Files A tutorial on using Opera Unite to easily send huge files from your computer. WorkFlowy is a Useful To-do List Tool A cool to-do list tool that lets you integrate multiple tasks in one single list easily. Playing Flash Videos on iOS Devices Yes, you can play flash videos on jailbroken iPhones. Here’s a tutorial. Clear Safari History and Cookies On iPhone A tutorial on clearing your browser history on iPhone and other iOS devices. Monitor Your Internet Usage Here’s a cool, cross-platform tool to monitor your internet bandwidth. Super User Questions See what the community had to say on these popular questions from Super User this week. Why is my upload speed much less than my download speed? Where should I find drivers for my laptop if it didn’t come with a driver disk? OEM Office 2010 without media – how to reinstall? Is there a point to using theft tracking software like Prey on my laptop, if you have login security? Moving an “all-in-one” PC when turned on/off How-To Geek Weekly Article Recap Get caught up on your HTG reading with our hottest articles from this past week. How to Combine Rescue Disks to Create the Ultimate Windows Repair Disk How To Boot 10 Different Live CDs From 1 USB Flash Drive What is Camera Raw, and Why Would a Professional Prefer it to JPG? Did You Know Facebook Has Built-In Shortcut Keys? The How-To Geek Guide to Audio Editing: The Basics One Year Ago on How-To Geek Enjoy looking through our latest gathering of retro article goodness. Learning Windows 7: Create a Homegroup & Join a New Computer To It How To Disconnect a Machine from a Homegroup Use Remote Desktop To Access Other Computers On a Small Office or Home Network How To Share Files and Printers Between Windows 7 and Vista Allow Users To Run Only Specified Programs in Windows 7 The Geek Note That is all we have for you this week and we hope your first week back at work or school has gone very well now that the holidays are over. Know a great tip? 
Send it in to us at [email protected]. Photo by Pamela Machado.

    Read the article

  • Security Trimmed Cross Site Collection Navigation

    - by Sahil Malik
    Ad:: SharePoint 2007 Training in .NET 3.5 technologies (more information). This article will serve as documentation of a fully functional codeplex project that I just created. This project will give you a WebPart that will give you security trimmed navigation across site collections. The first question is, why create such a project? In every single SharePoint project you will do, one question you will always be faced with is, what should the boundaries of sites be, and what should the boundaries of site collections be? There is no good or bad answer to this, because it really really depends on your needs. There are some factors in play here. Site Collections will allow you to scale, as a Site collection is the smallest entity you can put inside a content database Site collections will allow you to offer different levels of SLAs, because you put a site collection on a separate content database, and put that database on a separate server. Site collections are a security boundary – and they can be moved around at will without affecting other site collections. Site collections are also a branding boundary. They are also a feature deployment boundary, so you can have two site collections on the same web application with completely different nature of services. But site collections break navigation, i.e. a site collection at “/”, and a site collection at “/sites/mySiteCollection”, are completely independent of each other. If you have access to both, the navigation of / won’t show you a link to /sites/mySiteCollection. Some people refer to this as a huge issue in SharePoint. Luckily, some workarounds exist. A long time ago, I had blogged about “Implementing Consistent Navigation across Site Collections”. That approach was a no-code solution, it worked – it gave you a consistent navigation across site collections. But, it didn’t work in a security trimmed fashion! i.e., if I don’t have access to Site Collection ‘X’, it would still show me a link to ‘X’. Well this project gets around that issue. Simply deploy this project, and it’ll give you a WebPart. You can use that WebPart as either a webpart or as a server control dropped via SharePoint designer, and it will give you Security Trimmed Cross Site Collection Navigation. The code has been written for SP2010, but it will work in SP2007 with the help of http://spwcfsupport.codeplex.com . What do I need to do to make it work? I’m glad you asked! Simple! Deploy the .wsp (which you can download here). This will give you a site collection feature called “Winsmarts Cross Site Collection Navigation” as shown below. Go ahead and activate it, and this will give you a WebPart called “Winsmarts Navigation Web Part” as shown below: Just drop this WebPart on your page, and it will show you all site collections that the currently logged in user has access to. Really it’s that easy! This is shown as below - In the above example, I have two site collections that I created at /sites/SiteCollection1 and /sites/SiteCollection2. The navigation shows the titles. You see some extraneous crap as well, you might want to clean that – I’ll talk about that in a minute. What? You’re running into problems? If the problem you’re running into is that you are prompted to login three times, and then it shows a blank webpart that says “Loading your applications ..” and then craps out!, then most probably you’re using a different authentication scheme. Behind the scenes I use a custom WCF service to perform this job. 
OOTB, I’ve set it to work with NTLM, but if you need to make it work alternate authentications such as forms based auth, or client side certs, you will need to edit the %14%\ISAPI\Winsmarts.CrossSCNav\web.config file, specifically, this section - 1: <bindings> 2: <webHttpBinding> 3: <binding name="customWebHttpBinding"> 4: <security mode="TransportCredentialOnly"> 5: <transport clientCredentialType="Ntlm"/> 6: </security> 7: </binding> 8: </webHttpBinding> 9: </bindings> For Kerberos, change the “clientCredentialType” to “Windows” For Forms auth, remove that transport line For client certs – well that’s a bit more involved, but it’s just web.config changes – hit a good book on WCF or hire me for a billion trillion $. But fair warning, I might be too busy to help immediately. If you’re running into a different problem, please leave a comment below, but the code is pretty rock solid, so .. hmm .. check what you’re doing! BTW, I don’t  make any guarantee/warranty on this – if this code makes you sterile, unpopular, bad hairstyle, anything else, that is your problem! But, there are some known issues - I wrote this as a concept – you can easily extend it to be more flexible. Example, hierarchical nav, or, horizontal nav, jazzy effects with jquery or silverlight– all those are possible very very easily. This webpart is not smart enough to co-exist with another instance of itself on the same page. I can easily extend it to do so, which I will do in my spare(!?) time! Okay good! But that’s not all! As you can see, just dropping the WebPart may show you many extraneous site collections, or maybe you want to restrict which site collections are shown, or exclude a certain site collection to be shown from the navigation. To support that, I created a property on the WebPart called “UrlMatchPattern”, which is a regex expression you specify to trim the results :). So, just edit the WebPart, and specify a string property of “http://sp2010/sites/” as shown below. Note that you can put in whatever regex expression you want! So go crazy, I don’t care! And this gives you a cleaner look.   w00t! Enjoy! Comment on the article ....

    Read the article

  • Securing an ASP.NET MVC 2 Application

    - by rajbk
    This post attempts to look at some of the methods that can be used to secure an ASP.NET MVC 2 Application called Northwind Traders Human Resources.  The sample code for the project is attached at the bottom of this post. We are going to use a slightly modified Northwind database. The screen capture from SQL server management studio shows the change. I added a new column called Salary, inserted some random salaries for the employees and then turned off AllowNulls.   The reporting relationship for Northwind Employees is shown below.   The requirements for our application are as follows: Employees can see their LastName, FirstName, Title, Address and Salary Employees are allowed to edit only their Address information Employees can see the LastName, FirstName, Title, Address and Salary of their immediate reports Employees cannot see records of non immediate reports.  Employees are allowed to edit only the Salary and Title information of their immediate reports. Employees are not allowed to edit the Address of an immediate report Employees should be authenticated into the system. Employees by default get the “Employee” role. If a user has direct reports, they will also get assigned a “Manager” role. We use a very basic empId/pwd scheme of EmployeeID (1-9) and password test$1. You should never do this in an actual application. The application should protect from Cross Site Request Forgery (CSRF). For example, Michael could trick Steven, who is already logged on to the HR website, to load a page which contains a malicious request. where without Steven’s knowledge, a form on the site posts information back to the Northwind HR website using Steven’s credentials. Michael could use this technique to give himself a raise :-) UI Notes The layout of our app looks like so: When Nancy (EmpID 1) signs on, she sees the default page with her details and is allowed to edit her address. If Nancy attempts to view the record of employee Andrew who has an employeeID of 2 (Employees/Edit/2), she will get a “Not Authorized” error page. When Andrew (EmpID 2) signs on, he can edit the address field of his record and change the title and salary of employees that directly report to him. Implementation Notes All controllers inherit from a BaseController. The BaseController currently only has error handling code. When a user signs on, we check to see if they are in a Manager role. We then create a FormsAuthenticationTicket, encrypt it (including the roles that the employee belongs to) and add it to a cookie. private void SetAuthenticationCookie(int employeeID, List<string> roles) { HttpCookiesSection cookieSection = (HttpCookiesSection) ConfigurationManager.GetSection("system.web/httpCookies"); AuthenticationSection authenticationSection = (AuthenticationSection) ConfigurationManager.GetSection("system.web/authentication"); FormsAuthenticationTicket authTicket = new FormsAuthenticationTicket( 1, employeeID.ToString(), DateTime.Now, DateTime.Now.AddMinutes(authenticationSection.Forms.Timeout.TotalMinutes), false, string.Join("|", roles.ToArray())); String encryptedTicket = FormsAuthentication.Encrypt(authTicket); HttpCookie authCookie = new HttpCookie(FormsAuthentication.FormsCookieName, encryptedTicket); if (cookieSection.RequireSSL || authenticationSection.Forms.RequireSSL) { authCookie.Secure = true; } HttpContext.Current.Response.Cookies.Add(authCookie); } We read this cookie back in Global.asax and set the Context.User to be a new GenericPrincipal with the roles we assigned earlier. 
protected void Application_AuthenticateRequest(Object sender, EventArgs e){ if (Context.User != null) { string cookieName = FormsAuthentication.FormsCookieName; HttpCookie authCookie = Context.Request.Cookies[cookieName]; if (authCookie == null) return; FormsAuthenticationTicket authTicket = FormsAuthentication.Decrypt(authCookie.Value); string[] roles = authTicket.UserData.Split(new char[] { '|' }); FormsIdentity fi = (FormsIdentity)(Context.User.Identity); Context.User = new System.Security.Principal.GenericPrincipal(fi, roles); }} We ensure that a user has permissions to view a record by creating a custom attribute AuthorizeToViewID that inherits from ActionFilterAttribute. public class AuthorizeToViewIDAttribute : ActionFilterAttribute{ IEmployeeRepository employeeRepository = new EmployeeRepository(); public override void OnActionExecuting(ActionExecutingContext filterContext) { if (filterContext.ActionParameters.ContainsKey("id") && filterContext.ActionParameters["id"] != null) { if (employeeRepository.IsAuthorizedToView((int)filterContext.ActionParameters["id"])) { return; } } throw new UnauthorizedAccessException("The record does not exist or you do not have permission to access it"); }} We add the AuthorizeToView attribute to any Action method that requires authorization. [HttpPost][Authorize(Order = 1)]//To prevent CSRF[ValidateAntiForgeryToken(Salt = Globals.EditSalt, Order = 2)]//See AuthorizeToViewIDAttribute class[AuthorizeToViewID(Order = 3)] [ActionName("Edit")]public ActionResult Update(int id){ var employeeToEdit = employeeRepository.GetEmployee(id); if (employeeToEdit != null) { //Employees can edit only their address //A manager can edit the title and salary of their subordinate string[] whiteList = (employeeToEdit.IsSubordinate) ? new string[] { "Title", "Salary" } : new string[] { "Address" }; if (TryUpdateModel(employeeToEdit, whiteList)) { employeeRepository.Save(employeeToEdit); return RedirectToAction("Details", new { id = id }); } else { ModelState.AddModelError("", "Please correct the following errors."); } } return View(employeeToEdit);} The Authorize attribute is added to ensure that only authorized users can execute that Action. We use the TryUpdateModel with a white list to ensure that (a) an employee is able to edit only their Address and (b) that a manager is able to edit only the Title and Salary of a subordinate. This works in conjunction with the AuthorizeToViewIDAttribute. The ValidateAntiForgeryToken attribute is added (with a salt) to avoid CSRF. The Order on the attributes specify the order in which the attributes are executed. The Edit View uses the AntiForgeryToken helper to render the hidden token: ......<% using (Html.BeginForm()) {%><%=Html.AntiForgeryToken(NorthwindHR.Models.Globals.EditSalt)%><%= Html.ValidationSummary(true, "Please correct the errors and try again.") %><div class="editor-label"> <%= Html.LabelFor(model => model.LastName) %></div><div class="editor-field">...... The application uses View specific models for ease of model binding. 
public class EmployeeViewModel{ public int EmployeeID; [Required] [DisplayName("Last Name")] public string LastName { get; set; } [Required] [DisplayName("First Name")] public string FirstName { get; set; } [Required] [DisplayName("Title")] public string Title { get; set; } [Required] [DisplayName("Address")] public string Address { get; set; } [Required] [DisplayName("Salary")] [Range(500, double.MaxValue)] public decimal Salary { get; set; } public bool IsSubordinate { get; set; }} To help with displaying readonly/editable fields, we use a helper method. //Simple extension method to display a TextboxFor or DisplayFor based on the isEditable variablepublic static MvcHtmlString TextBoxOrLabelFor<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, bool isEditable){ if (isEditable) { return htmlHelper.TextBoxFor(expression); } else { return htmlHelper.DisplayFor(expression); }} The helper method is used in the view like so: <%=Html.TextBoxOrLabelFor(model => model.Title, Model.IsSubordinate)%> As mentioned in this post, there is a much easier way to update properties on an object. Download Demo Project VS 2008, ASP.NET MVC 2 RTM Remember to change the connectionString to point to your Northwind DB NorthwindHR.zip Feedback and bugs are always welcome :-)

    Read the article

  • Adding DTrace Probes to PHP Extensions

    - by cj
    The powerful DTrace tracing facility has some PHP-specific probes that can be enabled with --enable-dtrace. DTrace for Linux is being created by Oracle and is currently in tech preview. Currently it doesn't support userspace tracing so, in the meantime, Systemtap can be used to monitor the probes implemented in PHP. This was recently outlined in David Soria Parra's post Probing PHP with Systemtap on Linux. My post shows how DTrace probes can be added to PHP extensions and traced on Linux. I was using Oracle Linux 6.3. Not all Linux kernels are built with Systemtap, since this can impact stability. Check whether your running kernel (or others installed) have Systemtap enabled, and reboot with such a kernel: # grep CONFIG_UTRACE /boot/config-`uname -r` # grep CONFIG_UTRACE /boot/config-* When you install Systemtap itself, the package systemtap-sdt-devel is needed since it provides the sdt.h header file: # yum install systemtap-sdt-devel You can now install and build PHP as shown in David's article. Basically the build is with: $ cd ~/php-src $ ./configure --disable-all --enable-dtrace $ make (For me, running 'make' a second time failed with an error. The workaround is to do 'git checkout Zend/zend_dtrace.d' and then rerun 'make'. See PHP Bug 63704) David's article shows how to trace the probes already implemented in PHP. You can also use Systemtap to trace things like userspace PHP function calls. For example, create test.php: <?php $c = oci_connect('hr', 'welcome', 'localhost/orcl'); $s = oci_parse($c, "select dbms_xmlgen.getxml('select * from dual') xml from dual"); $r = oci_execute($s); $row = oci_fetch_array($s, OCI_NUM); $x = $row[0]->load(); $row[0]->free(); echo $x; ?> The normal output of this file is the XML form of Oracle's DUAL table: $ ./sapi/cli/php ~/test.php <?xml version="1.0"?> <ROWSET> <ROW> <DUMMY>X</DUMMY> </ROW> </ROWSET> To trace the PHP function calls, create the tracing file functrace.stp: probe process("sapi/cli/php").function("zif_*") { printf("Started function %s\n", probefunc()); } probe process("sapi/cli/php").function("zif_*").return { printf("Ended function %s\n", probefunc()); } This makes use of the way PHP userspace functions (not builtins) like oci_connect() map to C functions with a "zif_" prefix. Login as root, and run System tap on the PHP script: # cd ~cjones/php-src # stap -c 'sapi/cli/php ~cjones/test.php' ~cjones/functrace.stp Started function zif_oci_connect Ended function zif_oci_connect Started function zif_oci_parse Ended function zif_oci_parse Started function zif_oci_execute Ended function zif_oci_execute Started function zif_oci_fetch_array Ended function zif_oci_fetch_array Started function zif_oci_lob_load <?xml version="1.0"?> <ROWSET> <ROW> <DUMMY>X</DUMMY> </ROW> </ROWSET> Ended function zif_oci_lob_load Started function zif_oci_free_descriptor Ended function zif_oci_free_descriptor Each call and return is logged. The Systemtap scripting language allows complex scripts to be built. There are many examples on the web. To augment this generic capability and the PHP probes in PHP, other extensions can have probes too. Below are the steps I used to add probes to OCI8: I created a provider file ext/oci8/oci8_dtrace.d, enabling three probes. The first one will accept a parameter that runtime tracing can later display: provider php { probe oci8__connect(char *username); probe oci8__nls_start(); probe oci8__nls_done(); }; I updated ext/oci8/config.m4 with the PHP_INIT_DTRACE macro. The patch is at the end of config.m4. 
The macro takes the provider prototype file, a name of the header file that 'dtrace' will generate, and a list of sources files with probes. When --enable-dtrace is used during PHP configuration, then the outer $PHP_DTRACE check is true and my new probes will be enabled. I've chosen to define an OCI8 specific macro, HAVE_OCI8_DTRACE, which can be used in the OCI8 source code: diff --git a/ext/oci8/config.m4 b/ext/oci8/config.m4 index 34ae76c..f3e583d 100644 --- a/ext/oci8/config.m4 +++ b/ext/oci8/config.m4 @@ -341,4 +341,17 @@ if test "$PHP_OCI8" != "no"; then PHP_SUBST_OLD(OCI8_ORACLE_VERSION) fi + + if test "$PHP_DTRACE" = "yes"; then + AC_CHECK_HEADERS([sys/sdt.h], [ + PHP_INIT_DTRACE([ext/oci8/oci8_dtrace.d], + [ext/oci8/oci8_dtrace_gen.h],[ext/oci8/oci8.c]) + AC_DEFINE(HAVE_OCI8_DTRACE,1, + [Whether to enable DTrace support for OCI8 ]) + ], [ + AC_MSG_ERROR( + [Cannot find sys/sdt.h which is required for DTrace support]) + ]) + fi + fi In ext/oci8/oci8.c, I added the probes at, for this example, semi-arbitrary places: diff --git a/ext/oci8/oci8.c b/ext/oci8/oci8.c index e2241cf..ffa0168 100644 --- a/ext/oci8/oci8.c +++ b/ext/oci8/oci8.c @@ -1811,6 +1811,12 @@ php_oci_connection *php_oci_do_connect_ex(char *username, int username_len, char } } +#ifdef HAVE_OCI8_DTRACE + if (DTRACE_OCI8_CONNECT_ENABLED()) { + DTRACE_OCI8_CONNECT(username); + } +#endif + /* Initialize global handles if they weren't initialized before */ if (OCI_G(env) == NULL) { php_oci_init_global_handles(TSRMLS_C); @@ -1870,11 +1876,22 @@ php_oci_connection *php_oci_do_connect_ex(char *username, int username_len, char size_t rsize = 0; sword result; +#ifdef HAVE_OCI8_DTRACE + if (DTRACE_OCI8_NLS_START_ENABLED()) { + DTRACE_OCI8_NLS_START(); + } +#endif PHP_OCI_CALL_RETURN(result, OCINlsEnvironmentVariableGet, (&charsetid_nls_lang, 0, OCI_NLS_CHARSET_ID, 0, &rsize)); if (result != OCI_SUCCESS) { charsetid_nls_lang = 0; } smart_str_append_unsigned_ex(&hashed_details, charsetid_nls_lang, 0); + +#ifdef HAVE_OCI8_DTRACE + if (DTRACE_OCI8_NLS_DONE_ENABLED()) { + DTRACE_OCI8_NLS_DONE(); + } +#endif } timestamp = time(NULL); The oci_connect(), oci_pconnect() and oci_new_connect() calls all use php_oci_do_connect_ex() internally. The first probe simply records that the PHP application made a connection call. I already showed a way to do this without needing a probe, but adding a specific probe lets me record the username. The other two probes can be used to time how long the globalization initialization takes. The relationships between the oci8_dtrace.d names like oci8__connect, the probe guards like DTRACE_OCI8_CONNECT_ENABLED() and probe names like DTRACE_OCI8_CONNECT() are obvious after seeing the pattern of all three probes. I included the new header that will be automatically created by the dtrace tool when PHP is built. I did this in ext/oci8/php_oci8_int.h: diff --git a/ext/oci8/php_oci8_int.h b/ext/oci8/php_oci8_int.h index b0d6516..c81fc5a 100644 --- a/ext/oci8/php_oci8_int.h +++ b/ext/oci8/php_oci8_int.h @@ -44,6 +44,10 @@ # endif # endif /* osf alpha */ +#ifdef HAVE_OCI8_DTRACE +#include "oci8_dtrace_gen.h" +#endif + #if defined(min) #undef min #endif Now PHP can be rebuilt: $ cd ~/php-src $ rm configure && ./buildconf --force $ ./configure --disable-all --enable-dtrace \ --with-oci8=instantclient,/home/cjones/instantclient $ make If 'make' fails, do the 'git checkout Zend/zend_dtrace.d' trick I mentioned. 
The new probes can be seen by logging in as root and running: # stap -l 'process.provider("php").mark("oci8*")' -c 'sapi/cli/php -i' process("sapi/cli/php").provider("php").mark("oci8__connect") process("sapi/cli/php").provider("php").mark("oci8__nls_done") process("sapi/cli/php").provider("php").mark("oci8__nls_start") To test them out, create a new trace file, oci.stp: global numconnects; global start; global numcharlookups = 0; global tottime = 0; probe process.provider("php").mark("oci8-connect") { printf("Connected as %s\n", user_string($arg1)); numconnects += 1; } probe process.provider("php").mark("oci8-nls_start") { start = gettimeofday_us(); numcharlookups++; } probe process.provider("php").mark("oci8-nls_done") { tottime += gettimeofday_us() - start; } probe end { printf("Connects: %d, Charset lookups: %ld\n", numconnects, numcharlookups); printf("Total NLS charset initialization time: %ld usecs/connect\n", (numcharlookups > 0 ? tottime/numcharlookups : 0)); } This calculates the average time that the NLS character set lookup takes. It also prints out the username of each connection, as an example of using parameters. Log in as root and run Systemtap over the PHP script: # cd ~cjones/php-src # stap -c 'sapi/cli/php ~cjones/test.php' ~cjones/oci.stp Connected as cj <?xml version="1.0"?> <ROWSET> <ROW> <DUMMY>X</DUMMY> </ROW> </ROWSET> Connects: 1, Charset lookups: 1 Total NLS charset initialization time: 164 usecs/connect This shows the time penalty of making OCI8 look up the default character set. This time would be zero if a character set had been passed as the fourth argument to oci_connect() in test.php.
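    As a quick check of that last point, here is a hedged variant of test.php (the 'AL32UTF8' character set and the file name are just example values, not from the article): with the charset passed explicitly as the fourth argument to oci_connect(), OCI8 skips the environment lookup, so the oci8-nls_start/oci8-nls_done probes should report zero charset lookups for the run.

      <?php
      // test_charset.php - hypothetical variant of test.php with an explicit charset
      $c = oci_connect('hr', 'welcome', 'localhost/orcl', 'AL32UTF8');
      $s = oci_parse($c, "select dbms_xmlgen.getxml('select * from dual') xml from dual");
      oci_execute($s);
      $row = oci_fetch_array($s, OCI_NUM);
      echo $row[0]->load();   // the column comes back as a LOB descriptor, as in the original script
      $row[0]->free();
      ?>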

    Read the article

  • Tools to Help Post Content On Your WordPress Blog

    - by Matthew Guay
    Now that you’ve got a nice blog, you want to do more with it and start posting content.  Here we look at some tools that will allow you to post directly to your WordPress blog. Writing a new blog post is easy with WordPress as we saw in our previous post about Starting your own WordPress blog.  The web editor gives you a lot of features and even lets you edit your post’s source code if you enjoy hacking HTML.  There are other tools that will allow you to post content, here we look at how you can post with dedicated apps, browser plugins, and even by email. Windows Live Writer Windows Live Writer (part of the Windows Live Essentials Suite) is a great app for posting content to your blog.  This free program for Microsoft lets you post content to a variety of blogging services, including Blogger, Typepad, LiveJournal, and of course WordPress.  You can write blog posts directly from its Word-like editor, complete with pictures and advanced formatting.  Even if you’re offline, you can still write posts and save them for when you’re online again. For more information about installing Live writer, check out our article on how to Install Windows Live Essentials In Windows 7. Once Live Writer is installed, open it to add your blog.  If you already had Live Writer installed and configured for a blog, you can add your new blog, too.  Just click your blog’s name in the top right corner, and select “Add blog account”. Select “Other blog service” to add your WordPress blog to Writer, and click Next.   Enter your blog’s web address, and your username and password.  Check Remember my password so you don’t have to enter it every time you write something. Writer will analyze your blog and setup your account. During the setup process it may ask to post a temporary post.  This will let you preview blog posts using your blog’s real theme, which is helpful, so click Yes. Finally, add your Blog’s name, and click Finish. You can now use the rich editor to write and add content to a new blog post.   Select the Preview tab to see how your post will look on your blog… Or, if you’re a HTML geek, select the Source tab to edit the code of your blog post. From the bottom of the window, you can choose categories, insert tags, and even schedule the post to publish on a different day.  Live Writer is fully integrated with WordPress; you’re not missing anything by using the desktop editor. If you want to edit a post you’ve already published, click the Open button and select the post.  You can chose and edit any post, including ones you published via the web interface or other editors. Add Multimedia Content to your Posts with Live Writer Back in the Edit tab, you can add pictures, videos and more from the sidebar.  Select what you want to insert. Pictures If you insert a picture, you can add many nice borders and designs to it. Or, you can even add artistic effects from the Effects tab in the sidebar. Photo Gallery If you want to post several pictures, say some of your vacation shots, then inserting a picture gallery may be the best option.  Select Insert Photo Gallery in the sidebar, and then choose the pictures you want in the gallery. Once the gallery is inserted, you can choose from several styles to showcase your pictures. When you post the blog, you will be asked to sign in with your Windows Live ID as the gallery pictures will be stored in the free Skydrive storage service. 
Your blog readers can see the preview of your pictures directly on your blog, and then can view each individual picture, download them, or see a slideshow online via the link. Video If you want to add a video to your blog post, select Video from the sidebar as above.  You can select a video that's already online, or you can choose a new video from file and upload it via YouTube directly from Windows Live Writer.  Note that you will have to sign in with your YouTube account to upload videos to YouTube, so if you're not logged in you'll be prompted to do so when you click Insert. Geek Tip:  If you ever want to copy your Live Writer settings to another computer, check out our article on how to Backup Your Windows Live Writer Settings. Microsoft Office Word Word 2007 and 2010 also let you post content directly to your blog.  This is especially nice if you've already typed up a document and think it would be good on your blog as well.  Check out our in-depth tutorial on posting blog posts via Word 2007 using Word 2007 as a blogging tool. This works in Word 2010 too, except the Office Orb has been replaced by the new Backstage view.  So, in Word 2010, to start a new blog post, click File \ New then select Blog post.  Proceed as you would in Word 2007 to add your blog settings and post the content you want. Or, if you've already written a document and want to post it, select File \ Share (or Save and Send in the final version of Word 2010), and then click Publish as Blog Post.  If you haven't set up your blog account yet, set it up as shown in the Word 2007 article. Post Via Email Most of us use email daily, and already have our favorite email app or service.  Whether on your desktop or mobile phone, it's easy to create rich emails and add content.  WordPress lets you generate a unique email address that you can use to easily post content to your blog by email.  Just compose your email with the subject as the title of your post, and send it to this unique address; your new post will be up in minutes (see the short code sketch at the end of this article). To activate this feature, click the My Account button in the top menu bar in your WordPress.com account, and select My Blogs. Click the Enable button under Post by Email beside your blog's name.  Now you'll have a private email address you can use to post to your blog.  Anything you send to this email will be posted as a new post.  If you think your email may be compromised, click Regenerate to get a new publishing email address. Any email program or webapp now is a blog post editor.  Feel free to use rich formatting or insert pictures; it all comes through great.  This is also a great way to post to your blog from your mobile device.  Whether you're using webmail or a dedicated email client on your phone, you can now blog from anywhere.  Mobile Applications WordPress also offers dedicated applications for blogging directly from your mobile device.  You can write new posts, edit existing ones, and manage comments all from your smartphone.  Currently they offer apps for iPhone, Android, and Blackberry.  Check them out at the link below. Conclusion Whether you want to write from your browser or email a post to your blog, WordPress is flexible enough to work right along with your preferences.  However you post, you can be sure that it will look professional and be easily accessible with your WordPress blog. Download Windows Live Writer Download WordPress apps for your mobile device
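Since the Post by Email feature is just ordinary email with the subject as the post title, any environment that can send mail can publish to your blog. Here is a rough, illustrative C# sketch of that idea; the SMTP host, the credentials and the private posting address are placeholders for your own values, and this is plain email rather than any official WordPress API:

using System.Net;
using System.Net.Mail;

class PostByEmail
{
    static void Main()
    {
        // The subject becomes the post title, the body becomes the post content.
        var message = new MailMessage(
            "you@example.com",                    // your own address (placeholder)
            "secret123abc@post.wordpress.com",    // the private Post by Email address WordPress generated (placeholder)
            "My post title",
            "Hello from Post by Email!");

        // Send it through whatever SMTP server you normally use (placeholder values).
        var smtp = new SmtpClient("smtp.example.com", 587)
        {
            EnableSsl = true,
            Credentials = new NetworkCredential("you@example.com", "password")
        };
        smtp.Send(message);
    }
}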

    Read the article

  • Solaris 10 branded zone VM Templates for Solaris 11 on OTN

    - by jsavit
    Early this year I wrote the article Ours Goes To 11 which describes the ability to import Solaris 10 systems into a "Solaris 10 branded zone" under Oracle Solaris 11. I did this using Solaris 11 Express, and the capability remains in Solaris 11 with only slight changes. This important tool lets you painlessly inhaling a Solaris Container from Solaris 10 or entire Solaris 10 systems ("the global zone") into virtualized environments on a Solaris 11 OS. Just recently, Oracle provided Oracle VM Templates for Oracle Solaris 10 Zones to let you create Solaris 10 branded zones for Solaris 11 even if you don't currently have access to install media or a running Solaris 10 system. To use this, just download the Oracle VM Template for Oracle Solaris Zone 10 from OTN at http://www.oracle.com/technetwork/server-storage/solaris11/downloads/virtual-machines-1355605.html. This page contains images of Oracle Solaris 10 8/11 (the recent update to Solaris 10) in SPARC and x86 formats suitable for creating branded zones. The same page also has a VirtualBox image you can download for a complete Solaris 10 install in a guest virtual machine you can run on any host OS that supports VirtualBox. Both sets of downloads provide a quick - and extremely easy - way to set up a virtual Solaris 10 environment. In the case of the Oracle VM Templates, they illustrate several advanced features of Solaris 11. To start, just go to the above link, download the template for the hardware platform (SPARC or x86) you want, and download the README file also linked from that page. Install prerequisites The README file tells you to install the prerequisite Solaris 11 package that implements the Solaris 10 brand. Then you can install instances of zones with that brand. # pkg install pkg:/system/zones/brand/brand-solaris10 Packages to install: 1 Create boot environment: No Create backup boot environment: Yes DOWNLOAD PKGS FILES XFER (MB) Completed 1/1 44/44 0.4/0.4 PHASE ACTIONS Install Phase 74/74 PHASE ITEMS Package State Update Phase 1/1 Image State Update Phase 2/2 That took only a few minutes, and didn't require a reboot. Install the Solaris 10 zone Now it's time to run the downloaded template file. First make it executable via the chmod command, of course. I found that (unlike stated in the README) there was no need to rename the downloaded file to remove the .bin. When you run it you provide several parameters to describe the zone configuration: -a IP address - the IP address and optional netmask for the zone. This is the only mandatory parameter. -z zonename - the name of the zone you would like to create. -i interface - the package will create an exclusive-IP zone using a virtual NIC (vnic) based on this physical interface. In my case, I have a NIC called rge0. -p PATH - specifies the path in which you want the zoneroot to be placed. In my case, I have a ZFS dataset mounted at /zones, and this will create a zoneroot at /zones/s10u10. Kicking it off, you will see a copyright message, and then messages showing progress building the zone, which only takes a few minutes. # ./solaris-10u10-x86.bin -p /zones -a 192.168.1.100 -i rge0 -z s10u10 ... ... Checking disk-space for extraction Ok Extracting in /export/home/CDimages/s10zone/bootimage.ihaqvh ... 100% [===============================] Checking data integrity Ok Checking platform compatibility The host and the image do not have the same Solaris release: host Solaris release: 5.11 image Solaris release: 5.10 Will create a Solaris 10 branded zone. 
Warning: could not find a defaultrouter Zone won't have any defaultrouter configured IMAGE: ./solaris-10u10-x86.bin ZONE: s10u10 ZONEPATH: /zones/s10u10 INTERFACE: rge0 VNIC: vnicZBI13379 MAC ADDR: 2:8:20:5c:1a:cc IP ADDR: 192.168.1.100 NETMASK: 255.255.255.0 DEFROUTER: NONE TIMEZONE: US/Arizona Checking disk-space for installation Ok Installing in /zones/s10u10 ... 100% [===============================] Using a static exclusive-IP Attaching s10u10 Booting s10u10 Waiting for boot to complete booting... booting... booting... Zone s10u10 booted The zone's root password has been set using the root password of the local host. You can change the zone's root password to further harden the security of the zone: being root, log into the zone from the local host with the command 'zlogin s10u10'. Once logged in, change the root password with the command 'passwd'. The nifty part in my opinion (besides being so easy), is that the zone was created as an exclusive-IP zone on a virtual NIC. This network configuration lets you enforce traffic isolation from other zones, enforce network Quality of Service, and even let the zone set its own characteristics like IP address and packet size. Independence of the zone's network characteristics from the global zone is one of the enhancements in Solaris 10 that make it easier to consolidate zones while preserving their autonomy, yet provide control in a consolidated environment. Let's see what the virtual network environment looks like by issuing commands from the Solaris 11 global zone. First I'll use Old School ifconfig, and then I'll use the new ipadm and dladm commands. # ifconfig -a4 lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1 inet 127.0.0.1 netmask ff000000 rge0: flags=1004943<UP,BROADCAST,RUNNING,PROMISC,MULTICAST,DHCP,IPv4> mtu 1500 index 2 inet 192.168.1.3 netmask ffffff00 broadcast 192.168.1.255 ether 0:14:d1:18:ac:bc vboxnet0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 3 inet 192.168.56.1 netmask ffffff00 broadcast 192.168.56.255 ether 8:0:27:f8:62:1c # dladm show-phys LINK MEDIA STATE SPEED DUPLEX DEVICE yge0 Ethernet unknown 0 unknown yge0 yge1 Ethernet unknown 0 unknown yge1 rge0 Ethernet up 1000 full rge0 vboxnet0 Ethernet up 1000 full vboxnet0 # dladm show-link LINK CLASS MTU STATE OVER yge0 phys 1500 unknown -- yge1 phys 1500 unknown -- rge0 phys 1500 up -- vboxnet0 phys 1500 up -- vnicZBI13379 vnic 1500 up rge0 s10u10/vnicZBI13379 vnic 1500 up rge0 s10u10/net0 vnic 1500 up rge0 # dladm show-vnic LINK OVER SPEED MACADDRESS MACADDRTYPE VID vnicZBI13379 rge0 1000 2:8:20:5c:1a:cc random 0 s10u10/vnicZBI13379 rge0 1000 2:8:20:5c:1a:cc random 0 s10u10/net0 rge0 1000 2:8:20:9d:d0:79 random 0 # ipadm show-addr ADDROBJ TYPE STATE ADDR lo0/v4 static ok 127.0.0.1/8 rge0/_a dhcp ok 192.168.1.3/24 vboxnet0/_a static ok 192.168.56.1/24 lo0/v6 static ok ::1/128 Log into the zone The install step already booted the zone, so lets log into it. Notice how you have to be appropriately privileged to log into a zone. This is my home system so I'm being a bit cavalier, but in a production environment you can give granular control of who can login to which zones. Voila! a Solaris 10 environment under a Solaris 11 kernel. Notice the output from the uname -a and ifconfig commands, and output from a ping to a nearby host. 
$ zlogin s10u10 zlogin: You lack sufficient privilege to run this command (all privs required) savit@home:~$ sudo zlogin s10u10 Password: [Connected to zone 's10u10' pts/5] Oracle Corporation SunOS 5.10 Generic Patch January 2005 # uname -a SunOS s10u10 5.10 Generic_Virtual i86pc i386 i86pc # ifconfig -a4 lo0: flags=2001000849 mtu 8232 index 1 inet 127.0.0.1 netmask ff000000 vnicZBI13379: flags=1000843 mtu 1500 index 2 inet 192.168.1.100 netmask ffffff00 broadcast 192.168.1.255 ether 2:8:20:5c:1a:cc # bash bash-3.2# ifconfig -a lo0: flags=2001000849 mtu 8232 index 1 inet 127.0.0.1 netmask ff000000 vnicZBI13379: flags=1000843 mtu 1500 index 2 inet 192.168.1.100 netmask ffffff00 broadcast 192.168.1.255 ether 2:8:20:5c:1a:cc bash-3.2# ping 192.168.1.2 192.168.1.2 is alive For fun, I configured Apache (setting its configuration file in /etc/apache2) and brought it up. Easy - took just a few minutes. bash-3.2# svcs apache2 STATE STIME FMRI disabled 12:38:46 svc:/network/http:apache2 bash-3.2# svcadm enable apache2 Summary In just a few minutes, I built a functioning virtual Solaris 10 environment under my Solaris 11 system. It was... easy! While I can still do it the manual way (creating and using a system archive), this is a low-effort way to create a Solaris 10 zone on Solaris 11.

    Read the article

  • Michael Crump's notes for 70-563 PRO – Designing and Developing Windows Applications using .NET Framework 3.5

    - by mbcrump
    TIME TO GO PRO! This is my notes for 70-563 PRO – Designing and Developing Windows Applications using .NET Framework 3.5 I created it using several resources (various certification web sites, msdn, official ms 70-548 book). The reason that I created this review is because a) I am taking the exam. b) MS did not create a book for this exam. Use the(MS 70-548)book. c) To make sure I am familiar with each before the exam. I hope that it provides a good start for your own notes. I hope that someone finds this useful. At least, it will give you a starting point of what to expect to know on the PRO exam. Also, for those wondering, the PRO exam does contains very little code. It is basically all theory. 1. Validation Controls – How to prevent users from entering invalid data on forms. (MaskedTextBox control and RegEx) 2. ServiceController – used to start and control the behavior of existing services. 3. User Feedback (know winforms Status Bar, Tool Tips, Color, Error Provider, Context-Sensitive and Accessibility) 4. Specific (derived) exceptions must be handled before general (base class) exceptions. By moving the exception handling for the base type Exception to after exception handling of ArgumentNullException, all ArgumentNullException thrown by the Helper method will be caught and logged correctly. 5. A heartbeat method is a method exposed by a Web service that allows external applications to check on the status of the service. 6. New users must master key tasks quickly. Giving these tasks context and appropriate detail will help. However, advanced users will demand quicker paths. Shortcuts, accelerators, or toolbar buttons will speed things along for the advanced user. 7. MSBuild uses project files to instruct the build engine what to build and how to build it. MSBuild project files are XML files that adhere to the MSBuild XML schema. The MSBuild project files contain complete file, build action, and dependency information for each individual projects. 8. Evaluating whether or not to fix a bug involves a triage process. You must identify the bug's impact, set the priority, categorize it, and assign a developer. Many times the person doing the triage work will assign the bug to a developer for further investigation. In fact, the workflow for the bug work item inside of Team System supports this step. Developers are often asked to assess the impact of a given bug. This assessment helps the person doing the triage make a decision on how to proceed. When assessing the impact of a bug, you should consider time and resources to fix it, bug risk, and impacts of the bug. 9. In large projects it is generally impossible and unfeasible to fix all bugs because of the impact on schedule and budget. 10. Code reviews should be conducted by a technical lead or a technical peer. 11. Testing Applications 12. WCF Services – application state 13. SQL Server 2005 / 2008 Express Edition – reliable storage of data / Microsoft SQL Server 3.5 Compact Database– used for client computers to retrieve and save data from a shared location. 14. SQL Server 2008 Compact Edition – used for minimum possible memory and can synchronize data with a corporate SQL Server 2008 Database. Supports offline user and minimum dependency on external components. 15. MDI and SDI Forms (specifically IsMDIContainer) 16. GUID – in the case of data warehousing, it is important to define unique keys. 17. Encrypting / Security Data 18. Understanding of Isolated Storage/Proper location to store items 19. LINQ to SQL 20. Multithreaded access 21. 
ADO.NET Entity Framework model 22. Marshal.ReleaseComObject 23. Common User Interface Layout (ComboBox, ListBox, Listview, MaskedTextBox, TextBox, RichTextBox, SplitContainer, TableLayoutPanel, TabControl) 24. DataSets Class - http://msdn.microsoft.com/en-us/library/system.data.dataset%28VS.71%29.aspx 25. SQL Server 2008 Reporting Services (SSRS) 26. SystemIcons.Shield (Vista UAC) 27. Leveraging stored procedures to perform data manipulation for a database schema that can change. 28. DataContext 29. Microsoft Windows Installer Packages, ClickOnce (bootstrapping features), XCopy. 30. Client Application Services – will authenticate users by using the same data source as an ASP.NET web application. 31. SQL Server 2008 Caching 32. StringBuilder 33. Accessibility Guidelines for Windows Applications http://msdn.microsoft.com/en-us/library/ms228004.aspx 34. Logging errors 35. Testing performance related issues. 36. Role Based Security, GenericIdentity and GenericPrincipal (see the short sketch after this list) 37. System.Net.CookieContainer will store session data for webapps (see isolated storage for winforms) 38. .NET CLR Profiler tool will identify objects that cause performance issues. 39. ADO.NET Synchronization (SyncGroup) 40. Globalization - CultureInfo 41. IDisposable Interface - reports on several questions relating to this. 42. Adding timestamps to determine whether data has changed or not. 43. Converting applications to .NET Framework 3.5 44. MicrosoftReportViewer 45. Composite Controls 46. Windows Vista KNOWN folders. 47. Microsoft Sync Framework 48. TypeConverter - Provides a unified way of converting types of values to other types, as well as for accessing standard values and sub properties. http://msdn.microsoft.com/en-us/library/system.componentmodel.typeconverter.aspx 49. Concurrency control mechanisms The main categories of concurrency control mechanisms are: Optimistic - Delay the checking of whether a transaction meets the isolation rules (e.g., serializability and recoverability) until its end, without blocking any of its (read, write) operations, and then abort a transaction if the desired rules are violated. Pessimistic - Block operations of a transaction if they may cause violation of the rules. Semi-optimistic - Block operations in some situations, and do not block in other situations, while delaying rules checking to the transaction's end, as done with optimistic. 50. AutoResetEvent 51. Microsoft Messaging Queue (MSMQ) 4.0 52. Bulk imports 53. KeyDown event of controls 54. WPF UI components 55. UI process layer 56. GAC (installing, removing and queuing) 57. Use a local database cache to reduce the network bandwidth used by applications. 58. Sound can easily be annoying and distracting to users, so use it judiciously. Always give users the option to turn sound off. Because a user might have sound off, never convey important information through sound alone.
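To make item 36 a little more concrete, here is a minimal C# sketch of role-based security with GenericIdentity and GenericPrincipal; the user name and role names are made up for illustration, and a real application would pull them from its authentication store:

using System;
using System.Security.Principal;
using System.Threading;

class RoleBasedSecurityDemo
{
    static void Main()
    {
        // Wrap the authenticated user name in a GenericIdentity and attach
        // the roles looked up for that user (sample values only).
        var identity = new GenericIdentity("someuser");
        var principal = new GenericPrincipal(identity, new[] { "Manager", "User" });

        // Make the principal available to the rest of the application.
        Thread.CurrentPrincipal = principal;

        // Later, any code can make an authorization decision with IsInRole.
        if (Thread.CurrentPrincipal.IsInRole("Manager"))
        {
            Console.WriteLine("Access granted to the manager-only feature.");
        }
        else
        {
            Console.WriteLine("Access denied.");
        }
    }
}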

    Read the article

  • Quick guide to Oracle IRM 11g: Creating your first sealed document

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g indexThe previous articles in this guide have detailed how to install, configure and secure your Oracle IRM 11g service. This article walks you through the process of now creating your first context and securing a document against it. I should mention that it would be worth reviewing the following to ensure your installation is ready for that all important first document. Ensure you have correctly configured the keystore for the IRM wrapper keys. If this is not correctly configured, creating the context below will fail. Make sure the IRM server URL correctly resolves and uses the right protocol (HTTP or HTTPS) ContentsCreate the first contextInstall the Oracle IRM Desktop Seal your first document Create the first contextIn Oracle 11g there is a built in classification and rights system called the "standard rights model" which is based on 10 years of customer use cases and innovation. It is a system which enables IRM to scale massively whilst retaining the ability to balance security and usability and also separate duties by allowing contacts in the business to own classifications. The final article in this guide goes into detail on this inbuilt classification model, but for the purposes of this current article all we need to do is create at least one context to test our system out.With a new IRM server there are a set of predefined context templates and roles which again are setup in a way which reflects the most common use we've learned from our customers. We will use these out of the box configurations as they are to create the first context against which we will seal some content.First login to your Oracle IRM Management Website located at https://irm.company.com/irm_rights/. Currently the system is only configured to use the built in LDAP for users, so use the only account we have at the moment, which by default is weblogic. Once logged in switch to the Contexts tab. Click on the New Context icon () in the menu bar on the left. In the resulting dialog select the Standard context template and enter in a name for the context. Then just hit finish, the weblogic account will automatically be made the manager. You'll now see your brand new context ready for users to be assigned. Now click on the Assign Role icon () in the menu bar and in the resulting dialog search for your only user account, weblogic, and add to the list on the right. Now select a role for this user. Because we need to create a document with this user we must select contributor, as this is the only role which allows for the ability to seal. Finally hit next and then finish. We now have a context with a user that has the rights to create a document. The next step is to configure the IRM Desktop to get these rights from the server. Install the Oracle IRM Desktop Before we can seal a document we need the client software installed. Oracle IRM has a very small, lightweight client called the Oracle IRM Desktop which can be freely downloaded in 27 languages from here. Double click on the installer and click on next... Next again... And finally on install... Very easy. You may get a warning about closing Outlook, Word or another application and most of the time no reboots are required. Once it is installed you will see the IRM Desktop icon running in your tool tray, bottom right of the desktop. Seal your first document Finally the prize is within reach, creating your first sealed document. 
The server is running, we've got a context ready, a user assigned a role in the context but there is the simple and obvious hoop left to jump through. To seal a document we need to have the users rights cached to the local machine. For this to take place, the IRM Desktop needs to know where the Oracle IRM server is on the network so we can synchronize these rights and then be able to seal a document. The usual way for the IRM Desktop to know about the IRM server is it learns automatically when you open an existing piece of content that someone has sent you... ack. Bit of a chicken or the egg dilemma. The solution is to manually tell the IRM Desktop the location of the IRM Server and then force a synchronization of rights. Right click on the Oracle IRM Desktop icon in the system tray and select Options.... Then switch to the Servers tab in the resulting dialog. There are no servers in the list because you've never opened any content. This list is usually populated automatically but we are going to add a server manually, so click on New.... Into the dialog enter in the full URL to the IRM server. Note that this time you use the path /irm_desktop/ and not /irm_rights/. You can see an example from the image below. Click on the validate button and you'll be asked to authenticate. Enter in your weblogic username and password and also check the Remember my password check box. Click OK and the IRM Desktop will confirm a successful connection to the server. OK all the dialogs and we are ready to Synchronize this users rights to the desktop. Right click once more on the Oracle IRM Desktop icon in the system tray. Now the Synchronize menu option is available. Select this and the IRM Desktop will now talk to the IRM server, authenticate using your weblogic account and get your rights to the context we created. Because this is the first time this users has communicated with the IRM server the IRM Desktop presents a privacy policy dialog. This is a chance for the business to ask users to agree to any policy about the use of IRM before opening secured documents. In our guide we've not bothered to setup this URL so just click on the check box and hit Accept. The IRM Desktop will then talk to the server, get your rights and display a success dialog. Lets protect a documentNow we are ready to seal a piece of content. In my guide i'm going to protect a Microsoft Word document. This mean's I have to have copy of Office installed, in this guide i'm using Microsoft Office 2007. You could also seal a PDF document, you'll need to download and install Adobe Acrobat Reader. A very simple test could be to seal a GIF/JPG/PNG or piece of HTML because this is rendered using Internet Explorer. But as I say, i'm going to protect a Word document. The following example demonstrates choosing a file in Windows Explorer, there are many ways to seal a file and you can watch a few in this video.Open a copy of Windows Explorer and locate the file you wish to seal. Right click on the document and select Seal To -> Context You are now presented with the Select Context dialog. You'll now have a sealed copy of the document sat in the same location. Double click on this document and it will open, again using the credentials you've already provided. That is it, now you just need to add more users, more documents, more classifications and start exploring the different roles and experiment with different offline periods etc. 
You may wish to setup the server against an existing LDAP or Active Directory environment instead of using the built in WebLogic LDAP store. You can read how to use your corporate directory here. But before we finish this guide, there is one more article and arguably the most important article of all. Next I discuss the all important decision making surrounding the actually implementation of Oracle IRM inside your business. Who has rights to what? How do you map contexts to your existing business practices? It is the next article which actually ensures you deploy a successful IRM solution by looking at the business and understanding how they use your sensitive information and then configuring Oracle IRM to reflect their use.

    Read the article

  • dasBlog

    - by Daniel Moth
    Some people like blogging on a site that is completely managed by someone else (e.g. http://wordpress.com/) and others, like me, prefer hosting their own blog at their own domain. In the latter case you need to decide what blog engine to install on your web space to power your blog. There are many free blog engines to choose from (e.g. the one from http://wordpress.org/). If, like me, you want to use a blog engine that is based on the .NET platform you have many choices including BlogEngine.NET, Subtext and the one I picked: dasBlog. In this post I'll describe the steps I took to get going with the open source dasBlog (home page, source page). A. Installing First I installed dasBlog on my local Windows 7 machine where I have IIS7 installed. To install dasBlog, I started by clicking the "Install" button on its web gallery page. After that I went through configuration, theming and adding content as described below. Once I was happy that everything was working correctly on the local machine, I set this up on a hosting service. I went for a Windows IIS7 shared hosting 3 month Economy plan from GoDaddy. The dasBlog site lists a bunch of other hosts. You can read the installation instructions for dasBlog, and with GoDaddy I just had to click one button since it is available as part of their quick-install apps. With GoDaddy I had a previewdns option that allowed me to play around and preview my site before going live. B. Configuring After it was installed (on local machine and/or hosting provider), I followed the obvious steps to create an admin user and logged in. This displays an admin navigation bar with the following options: 1. Navigator Links: I decided I was not going to use this feature. I manage links on the side of my blog manually elsewhere as part of the theme. So, I deleted every entry on this page and ignored it thereafter. 2. Blogroll: Ditto - same comment as for Navigator Links. 3. Content Filters: I did not delete (or add) these, but I did ensure both checkboxes are not checked. I.e. I am not using this feature now, but I may return to it in the future. 4. Activity: This is a read-only view of various statistics. So nothing to configure here, but useful to come back to for complementary statistics to whatever other statistical package you use (e.g. free stats as part of the hosting and I also use feedburner for syndication stats). 5. Cross-posting: I did not need that, so I turned it off via the Configuration Settings discussed next. 6. Configuration Settings: This is where the bulk of the configuration for the blog takes place and they are stored in a single XML file: Site.Config file. There are truly self-explanatory options to pick for Basic Settings, Services Settings and Services to Ping, Syndication Settings (this is where you link to your feedburner name if you have one) and Mail to Weblog Settings (I keep this turned off). There are also "Xml Storage System Settings" (I keep this turned off), "OpenId Settings" (I allow OpenID commenters), "Spammer Settings" (Enable captcha, never show email addresses) and "Comment settings" (Enable comments, don't allow on older posts, don't allow html). There are also Appearance Settings (I checked the "Use Post Title for Permalink", replaced spaces with hyphen and unchecked the "Use Unique Title"). Finally, there are also Notification Settings, but they are a bit of hit and miss in my case, in that I don’t always get the emails (still investigating this). C. 
Adding Content You can add content via the "Add Entry" link on the admin navigation bar or by configuring the "Mail to Weblog" settings and sending email or, do what I've started doing, use Live Writer (also the team has a blog). Another way to add content is programmatically if, for example, you are migrating content from another blog (and I'll cover that in separate post sharing the code). What you should know is that all blog content (posts and comments) live in XML files in a folder called "content" under your dasBlog installation. D. Theming There is a very good guide about themes for dasBlog, there is also a similar guide with screenshots (scroll down to "So how do I create a theme") and the dasBlog macro reference. When you install dasBlog, there are many themes available; each theme is in its own folder (representing the folder name) under the themes folder. You may have noticed that you can switch between these via the "Appearance Settings" described above (look for the combobox after the Default Theme label). I created my own theme by copy-pasting an existing theme folder, renaming it and then switching to it as the default. I then opened the folder in Visual Studio and hacked around the HTML in the 3 files (itemTemplate, homeTemplate and dayTemplate). These files have a blogtemplate file extension, which I temporarily renamed to HTML as I was editing them. There is no more advice I can offer here as this is a matter of taste and the aforementioned links is all I used. Personally, I had salvaged the CSS (and structure) from my previous blog and wanted to make this one match it as closely as possible - I think I have succeeded. E. If you run into any issue with dasBlog... ...use your favorite search engine to find answers. Many bloggers have been using this engine for a while and have documented issues and workarounds over time. One such example is ScottHa's dasBlog category; another example is therightstuff where I "borrowed" the idea/macro for the outlook-style on-page navigation. If you don't find what you want through searching, try posting a question to the forums. Comments about this post welcome at the original blog.

    Read the article

  • OFM 11g: OAM SSO for Forms and ADF Faces

    - by olaf.heimburger
    In my blog entry OFM 11g: Implementing OAM SSO with Forms we set the foundation for providing a complete Single Sign-On solution based on Oracle Access Manager (OAM). This foundation should now be used to combine Forms 11g and ADF Faces 11g applications with a transparent login. The Beginning Before we start, let's reconsider the requirements to achieve the ultimate goal. These are:- Access to the Forms 11g Application must be authenticated by OAM (protected). Access to the ADF Faces 11g Application must be authenticated by OAM (protected). Switching from one application to the other should not result in a re-authentication (aka single sign-on). User identity should be available to the application without any extra work in the application code. All these are the common requirements for a single sign-on solution. The challenge here is that Forms relies on Oracle AS SSO (OSSO or "the old SSO") while ADF Faces is quite open and can be protected by Oracle AS SSO and Oracle Access Manager SSO (OAM SSO or "the modern SSO"). Both application types can use their own login mechanism. The Forms 11g Application To demonstrate the SSO functionality, we use the standard Forms test (/forms/frmservlet?form=test.fmx). Although this shows nothing specific in the Forms application, it is good enough to demonstrate that it is protected. The ADF Faces 11g Application With ADF 11g you can develop quite a number of useful Faces based applications. Among many features, it comes with the ADF Security feature that provides you with functionality to protect your pages, regions, and even TaskFlows from un-authenticated usage in a declarative way. To demonstrate that functionality a sample application with different access levels plus a login dialog is used. This application comes with a public page that has protected content (a button). Once you are authenticated for the application, the protected content and some personalisation (the user's name) is shown. Protecting Forms 11g As already explained in OFM 11g: Implementing OAM SSO with Forms, the easiest way to protect a Forms application is to configure it as an OSSO partner application, set up mod_osso, test it, migrate OSSO to OAM SSO with the Upgrade Agent, reconfigure mod_osso, and you are done. Sort of. By default the OAM is configured to run in co-exist mode. This means that a user has to re-authenticate to the Forms application when logged into an OAM SSO application before. To avoid this, you must disable the co-exist mode, for example by using WLST and issuing the disableCoexistMode command on the OAM server. Protecting ADF Faces 11g To protect an ADF Faces 11g application we have to consider two scenarios: Use a HTTPD server in front of WLS Use WLS without a HTTPD server Both scenarios have their pros and cons and we won't get into details and just describe how to configure both. Scenario 1: HTTPD Server with WLS In this scenario we have to set up the environment in some steps:- Configure a WebGate at OAM. This configuration can be done through the OAM console or by a script. No matter which way you choose, the WebGate configuration files will be created for you. Install the OAM WebGate into an HTTPD server. The type of WebGate you need to install depends on your HTTPD server. With Oracle HTTP Server 11g you can use the latest OAM 11g WebGate. With other HTTPD servers you must resort to OAM 10g WebGates. An OAM 11g WebGate can use the pre-created configuration files supplied during the WebGate configuration at OAM.
An OAM 10g WebGate asks for the specific configuration and verifies it during installation. Configure the WLS plugin to forward the requests to WLS. Again, depending on your HTTPD server you have different plugins to forward requests to WLS. With OHS 11g you can use the pre-installed mod_wl_ohs plugin. Its configuration is quite simple and straightforward. Configure an OAM SSPI Provider as an IdentityAsserter in WLS to retrieve the user identifier. This configuration is quite important as it retrieves the user identifier for the next step. If you have a SOA Suite installation within your OFM_HOME, the necessary software is already installed and you only need to set up your Security Realm within WLS. You can do this by pointing your browser to the WLS Console, logging in as administrator, selecting the Security Realm (usually myrealm), and selecting Providers. We add the OAMIdentityAsserter as the first SSPI Provider. It is important that the Control Flag is set to SUFFICIENT. Every other configuration can be left as is, no changes are necessary here. Configure an OAM Identity Provider to get the real user identity. In OFM 11g: Implementing OAM SSO with Forms we have configured an OID as Identity Store. To get the user identity we need to configure the same OID as an SSPI Provider for WLS. This will retrieve the real user information from OID and create the JAAS Subject and Principals to be used by any application within WLS. Again, you can do this by pointing your browser to the WLS Console, logging in as administrator, selecting the Security Realm (usually myrealm), and selecting Providers. Now add the OIDAuthenticator as the second SSPI Provider. It is important that the Control Flag is set to OPTIONAL. After we saved this setup, we need to configure this provider by setting the Provider Specific details to access OID. Scenario 2: WLS only This scenario is a bit easier but requires more work in the WLS setup:- Configure a WebGate at OAM. This configuration can be done through the OAM console or by a script. No matter which way you choose, the WebGate configuration files will be created for you. Configure the OAM SSPI Provider as IdentityAuthenticator to authenticate and set the user identifier. When using the OAM SSPI Provider as OAMAuthenticator we create it with the Control Flag as SUFFICIENT. After saving it, the Provider Specific settings must be configured to allow the OAM SSPI Provider to connect to the OAM Server. Configure an OAM Identity Provider to get the real user identity. Again, you can do this by pointing your browser to the WLS Console, logging in as administrator, selecting the Security Realm (usually myrealm), and selecting Providers. Now add the OIDAuthenticator as the second SSPI Provider. It is important that the Control Flag is set to OPTIONAL. After we saved this setup, we need to configure this provider by setting the Provider Specific details to access OID. Configure ADF 11g Application for OAM Actually, there are no changes to be made within the ADF application. We only need to add the value CLIENT_CERT to the <auth-method> tag in the <login-config> tag in the web.xml file. Testing To test the configuration, simply point your browser to either of the two application URLs. OAM should kick in and redirect you to the OAM Login page. After you have entered the correct credentials, access to the URLs is granted and you will see the application. Enjoy!

    Read the article

  • Win a place at a SQL Server Masterclass with Kimberly Tripp and Paul Randal

    - by Testas
    The top things YOU need to know about managing SQL Server - in one place, on one day - presented by two of the best SQL Server industry trainers!And you could be there courtesy of UK SQL Server User Group and SQL Server Magazine! This week the UK SQL Server User Group will provide you with details of how to win a place at this must see seminar   You can also register for the seminar yourself at:www.regonline.co.uk/kimtrippsql More information about the seminar   Where: Radisson Edwardian Heathrow Hotel, London When: Thursday 17th June 2010 This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the affect application design choices have on database performance. The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years combined experience working with SQL Server in the field, and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will:·         Debunk many of the ingrained misconceptions around SQL Server's behaviour   ·         Show you disaster recovery techniques critical to preserving your company's life-blood - the data   ·         Explain how a common application design pattern can wreak havoc in the database ·         Walk through the top-10 points to follow around operations and maintenance for a well-performing and available data tier! Please Note: Agenda may be subject to changeSessions AbstractsKEYNOTE: Bridging the Gap Between Development and Production  Applications are commonly developed with little regard for how design choices will affect performance in production. This is often because developers don't realize the implications of their design on how SQL Server will be able to handle a high workload (e.g. blocking, fragmentation) and/or because there's no full-time trained DBA that can recognize production problems and help educate developers. The keynote sets the stage for the rest of the day. Discussing some of the issues that can arise, explaining how some can be avoided and highlighting some of the features in SQL 2008 that can help developers and DBAs make better use of SQL Server, and troubleshoot when things go wrong.  SESSION ONE: SQL Server MythbustersIt's amazing how many myths and misconceptions have sprung up and persisted over the years about SQL Server - after many years helping people out on forums, newsgroups, and customer engagements, Paul and Kimberly have heard it all. Are there really non-logged operations? Can interrupting shrinks or rebuilds cause corruption? Can you override the server's MAXDOP setting? Will the server always do a table-scan to get a row count? Many myths lead to poor design choices and inappropriate maintenance practices so these are just a few of many, many myths that Paul and Kimberly will debunk in this fast-paced session on how SQL Server operates and should be managed and maintained. SESSION TWO: Database Recovery Techniques Demo-Fest Even if a company has a disaster recovery strategy in place, they need to practice to make sure that the plan will work when a disaster does strike. 
In this fast-paced demo session Paul and Kimberly will repeatedly do nasty things to databases and then show how they are recovered - demonstrating many techniques that can be used in production for disaster recovery. Not for the faint-hearted! SESSION THREE: GUIDs: Use, Abuse, and How To Move Forward Since the addition of the GUID (Microsoft’s implementation of the UUID), my life as a consultant and "tuner" has been busy. I’ve seen databases designed with GUID keys run fairly well with small workloads but completely fall over and fail because they just cannot scale. And, I know why GUIDs are chosen - it simplifies the handling of parent/child rows in your batches so you can reduce round-trips or avoid dealing with identity values. And, yes, sometimes it's even for distributed databases and/or security that GUIDs are chosen. I'm not entirely against ever using a GUID but overusing and abusing GUIDs just has to be stopped! Please, please, please let me give you better solutions and explanations on how to deal with your parent/child rows, round-trips and clustering keys! SESSION 4: Essential Database MaintenanceIn this session, Paul and Kimberly will run you through their top-ten database maintenance recommendations, with a lot of tips and tricks along the way. These are distilled from almost 30 years combined experience working with SQL Server customers and are geared towards making your databases more performant, more available, and more easily managed (to save you time!). Everything in this session will be practical and applicable to a wide variety of databases. Topics covered include: backups, shrinks, fragmentation, statistics, and much more! Focus will be on 2005 but we'll explain some of the key differences for 2000 and 2008 as well.    Speaker Biographies     Paul S.Randal  Kimberley L. Tripp Paul and Kimberly are a husband-and-wife team who own and run SQLskills.com, a world-renowned SQL Server consulting and training company. They are both SQL Server MVPs and Microsoft Regional Directors, with over 30 years of combined experience on SQL Server. Paul worked on the SQL Server team for nine years in development and management roles, writing many of the DBCC commands, and ultimately with responsibility for core Storage Engine for SQL Server 2008. Paul writes extensively on his blog (SQLskills.com/blogs/Paul) and for TechNet Magazine, for which he is also a Contributing Editor. Kimberly worked on the SQL Server team in the early 1990s as a tester and writer before leaving to found SQLskills and embrace her passion for teaching and consulting. Kimberly has been a staple at worldwide conferences since she first presented at TechEd in 1996, and she blogs at SQLskills.com/blogs/Kimberly. They have written Microsoft whitepapers and books for SQL Server 2000, 2005 and 2008, and are regular, top-rated presenters worldwide on database maintenance, high availability, disaster recovery, performance tuning, and SQL Server internals. Together they teach the SQL MCM certification and throughout Microsoft.In their spare time, they like to find frogfish in remote corners of the world.  

    Read the article

  • Oracle Solaris Cluster 4.2 Event and its SNMP Interface

    - by user12609115
    Background The cluster event SNMP interface was first introduced in the Oracle Solaris Cluster 3.2 release. The details of the SNMP interface are described in the Oracle Solaris Cluster System Administration Guide and the Cluster 3.2 SNMP blog. Prior to the Oracle Solaris Cluster 4.2 release, when the event SNMP interface was enabled, it would take effect on WARNING or higher severity events. The events with WARNING or higher severity are usually for the status change of a cluster component from ONLINE to OFFLINE. The interface worked like an alert/alarm interface when some components in the cluster were out of service (changed to OFFLINE). The consumers of this interface could not get notification for all status changes and configuration changes in the cluster. Cluster Event and its SNMP Interface in Oracle Solaris Cluster 4.2 The user model of the cluster event SNMP interface is the same as what was provided in the previous releases. The cluster event SNMP interface is not enabled by default on a freshly installed cluster; you can enable it by using the cluster event SNMP administration commands on any cluster nodes. Usually, you only need to enable it on one of the cluster nodes or a subset of the cluster nodes because all cluster nodes get the same cluster events. When it is enabled, it is responsible for two basic tasks. • Logs up to the 100 most recent NOTICE or higher severity events to the MIB. • Sends SNMP traps to the hosts that are configured to receive the above events. The changes in the Oracle Solaris Cluster 4.2 release are: 1) Introduction of the NOTICE severity for the cluster configuration and status change events. The NOTICE severity is introduced for the cluster event in the 4.2 release. It is the severity between the INFO and WARNING severity. Now all severities for the cluster events are (from low to high) • INFO (not exposed to the SNMP interface) • NOTICE (newly introduced in the 4.2 release) • WARNING • ERROR • CRITICAL • FATAL In the 4.2 release, the cluster event system is enhanced to make sure at least one event with the NOTICE or a higher severity will be generated when there is a configuration or status change from a cluster component instance. In other words, the cluster events from a cluster with the NOTICE or higher severities will cover all status and configuration changes in the cluster (including all component instances). The cluster component instance here refers to an instance of the following cluster components: node, quorum, resource group, resource, network interface, device group, disk, zone cluster and geo cluster heartbeat. For example, pnode1 is an instance of the cluster node component, and oracleRG is an instance of the cluster resource group. With the introduction of the NOTICE severity event, when the cluster event SNMP interface is enabled, the consumers of the SNMP interface will get notification for all status and configuration changes in the cluster. A third-party system management platform with the cluster SNMP interface integration can generate alarms and clear alarms programmatically, because it can get notifications for the status change from ONLINE to OFFLINE and also from OFFLINE to ONLINE. 2) Customization for the cluster event SNMP interface • The number of events logged to the MIB is 100. When the number of events stored in the MIB reaches 100 and a new qualified event arrives, the oldest event will be removed before storing the new event to the MIB (FIFO, first in, first out).
The 100 is the default and minimum value for the number of events stored in the MIB. It can be changed by setting the log_number property value using the clsnmpmib command. The maximum number that can be set for the property is 500. • The cluster event SNMP interface takes effect on the NOTICE or higher severity events. The NOTICE severity is also the default and lowest event severity for the SNMP interface. The SNMP interface can be configured to take effect on other higher severity events, such as WARNING or higher severity events, by setting the min_severity property to WARNING. When the min_severity property is set to WARNING, the cluster event SNMP interface behaves the same as in the previous releases (prior to the 4.2 release). Examples, • Set the number of events stored in the MIB to 200 # clsnmpmib set -p log_number=200 event • Set the interface to take effect on WARNING or higher severity events. # clsnmpmib set -p min_severity=WARNING event Administering the Cluster Event SNMP Interface Oracle Solaris Cluster provides the following three commands to administer the SNMP interface. • clsnmpmib: administer the SNMP interface, and the MIB configuration. • clsnmphost: administer hosts for the SNMP traps • clsnmpuser: administer SNMP users (specific to the SNMP v3 protocol) Only clsnmpmib is changed in the 4.2 release to support the aforementioned customization of the SNMP interface. Here are some simple examples using the commands. Examples: 1. Enable the cluster event SNMP interface on the local node # clsnmpmib enable event 2. Display the status of the cluster event SNMP interface on the local node # clsnmpmib show -v 3. Configure my_host to receive the cluster event SNMP traps. # clsnmphost add my_host The Cluster Event SNMP Interface uses the common agent container SNMP adaptor, which is based on the JDMK SNMP implementation, as its SNMP agent infrastructure. By default, the port number for the SNMP MIB is 11161, and the port number for the SNMP traps is 11162. The port numbers can be changed by using cacaoadm. For example, # cacaoadm list-params Print all changeable parameters. The output includes the snmp-adaptor-port and snmp-adaptor-trap-port properties. # cacaoadm set-param snmp-adaptor-port=1161 Set the SNMP MIB port number to 1161. # cacaoadm set-param snmp-adaptor-trap-port=1162 Set the SNMP trap port number to 1162. The cluster event SNMP MIB is defined in sun-cluster-event-mib.mib, which is located in the /usr/cluster/lib/mib directory. Its OID is 1.3.6.1.4.1.42.2.80, which can be used to walk through the MIB data. Again, for more detailed information about the cluster event SNMP interface, please see the Oracle Solaris Cluster 4.2 System Administration Guide. - Leland Chen

    Read the article

  • Integrating Flickr with ASP.Net application

    - by sreejukg
    Flickr is the popular photo management and sharing application offered by Yahoo. The services from Flickr allow you to store and share photos and videos online. Flickr offers strong API support for almost all services they provide. Using this API, developers can integrate photos into their public websites. Since 2005, developers have collaborated on top of Flickr's APIs to build fun, creative, and gorgeous experiences around photos that extend beyond Flickr. In this article I am going to demonstrate how easily you can bring the photos stored on Flickr to your website. Let me explain the scenario this article is trying to address. I have a Flickr account where I upload photos and share them in the many ways offered by Flickr. Now I have a public website; instead of re-uploading the photos to the public website, I want to display them from Flickr. Also, I need complete control over which photos to display. So I referred to the Flickr documentation, and there is API support ready to address my scenario (and more… ). Flickr API for ASP.Net To integrate Flickr with ASP.Net applications, there is a library available on CodePlex. You can find it here: http://flickrnet.codeplex.com/ Visit the URL and download the latest version. The download includes a Zip file; when you unzip it you will get a number of dlls. Since I am going to use an ASP.Net application, I need FlickrNet.dll. See the screenshot of all the dlls; there is also a help file (.chm) available in the download for your reference. Once you have the dll, you need to use the Flickr API from your website. I assume you have a Flickr account and you are familiar with Flickr services. Arrange your photos using Sets in Flickr In Flickr, you can define sets and add your uploaded photos to sets. You can compare a set to a photo album. A set is a logical collection of photos, which is an excellent option for you to categorize your photos. Typically you will have a number of sets, each set having a few photos. You can write an application that brings photos from sets to your website. For the purpose of this article I already created a set in Flickr and added some photos to it. Once you log in to Flickr, you can see the Sets under the Menu. In the Sets page, you will see all the sets you have created. As you notice, you can see certain sample images I have uploaded just to test the functionality. I'm afraid I couldn't create good photos, so please bear with me. I have created 2 photo sets named Blue Album and Red Album. Clicking on the image for a set will take you to the corresponding set page. In the set "Red Album" there are 4 photos, and the set has a unique ID (highlighted in the URL). You can simply retrieve the photos with the set id from your application. In this article I am going to retrieve the images from the Red Album in my ASP.Net page. For that, first I need to set up the Flickr API for my usage. Configure Flickr API Key As I mentioned, we are going to use the Flickr API to retrieve the photos stored in Flickr. In order to get access to the Flickr API, you need an API key. To create an API key, navigate to the URL http://www.flickr.com/services/apps/create/ and click on the Request an API key link. Now you need to tell Flickr whether your application is commercial or non-commercial. I have selected a non-commercial key. Now you need to enter certain information about your application. Once you enter the details, click on the Submit button. Now Flickr will create the API key for your application.
Generating a non-commercial API key is very easy; in a couple of steps the key will be generated and you can use it in your application immediately. ASP.Net application for retrieving photos Now we need to write an ASP.Net application that displays pictures from Flickr. Create an empty web application (I named this FlickerIntegration) and add a reference to FlickrNet.dll. Add a web form page to the application where you will retrieve and display photos (I have named this Gallery.aspx). After doing all this, the solution explorer will look similar to the following. I have used the following code in the Gallery.aspx page (a reconstructed sketch of it appears at the end of this article). I am going to explain the code line by line here. First it is adding a reference to the FlickrNet namespace. using FlickrNet; Then create a Flickr object by using your API key. Flickr f = new Flickr("<yourAPIKey>"); Now when you retrieve photos, you can decide which fields you need to retrieve from Flickr. Every photo in Flickr contains lots of information. Retrieving all of it will affect performance. For demonstration purposes, I have retrieved all the available fields as follows. PhotoSearchExtras.All But if you want to specify the fields you can use the logical OR operator (|). For example, the following statement will retrieve owner name and date taken. PhotoSearchExtras extraInfo = PhotoSearchExtras.OwnerName | PhotoSearchExtras.DateTaken; Then retrieve all the photos from a photo set using the PhotosetsGetPhotos method. I have passed the PhotoSearchExtras object created earlier. PhotosetPhotoCollection photos = f.PhotosetsGetPhotos("72157629872940852", extraInfo); The PhotosetsGetPhotos method will return a collection of Photo objects. You can just navigate through the collection using a foreach statement. foreach (Photo p in photos) {     //access each photo's properties } The Photo class has a lot of properties that map to the properties from Flickr. The .chm documentation that comes along with the CodePlex download is a great asset for you to understand the fields. In the above code I just used the following: p.LargeUrl – retrieves the large image URL for the photo. p.ThumbnailUrl – retrieves the thumbnail URL for the photo. p.Title – retrieves the title of the photo. p.DateUploaded – retrieves the date of upload. Visual Studio IntelliSense will show you all the properties, so it is easy to find the right ones you are looking for. Most of them are self-explanatory, so you can try retrieving the required properties. In the above code, I just pushed the photos to the page. In a real application you can use the retrieved photos along with jQuery libraries to create animated photo galleries, slideshows, etc. Configuration and Troubleshooting If you get an access denied error while executing the code, you need to disable caching in the Flickr API. FlickrNet caches the photos to your local disk when they are retrieved. You can specify a cache folder, where the application needs write permission. You can specify the cache folder in the code as follows. Flickr.CacheLocation = Server.MapPath("./FlickerCache/"); If the application doesn't have write permission to the cache folder, the application will throw an access denied error. If you cannot give write permission to the cache folder, then you must disable caching. You can do this from code as follows. Flickr.CacheDisabled = true; Disabling the cache will have an impact on performance. Take care!
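The Gallery.aspx listing itself appeared as a screenshot in the original post, so here is a minimal reconstruction based on the line-by-line explanation above. Treat it as an illustrative sketch rather than the author's exact code: the set id is the one quoted in the article, the API key is a placeholder, and the Response.Write loop is just one simple way to push the photos onto the page.

using System;
using FlickrNet;

public partial class Gallery : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Create the Flickr object with your API key (placeholder value).
        Flickr f = new Flickr("<yourAPIKey>");

        // Ask for all the extra fields, or OR together just the ones you need.
        PhotoSearchExtras extraInfo = PhotoSearchExtras.All;

        // Retrieve the photos of the "Red Album" set (set id taken from the article).
        PhotosetPhotoCollection photos = f.PhotosetsGetPhotos("72157629872940852", extraInfo);

        // Render each photo: thumbnail linked to the large image, plus title and upload date.
        foreach (Photo p in photos)
        {
            Response.Write(string.Format(
                "<div><a href='{0}'><img src='{1}' alt='{2}' /></a><br />{2} ({3:d})</div>",
                p.LargeUrl, p.ThumbnailUrl, p.Title, p.DateUploaded));
        }
    }
}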
    You can also define the Flickr settings in the web.config file; you can find the documentation here: http://flickrnet.codeplex.com/wikipage?title=ExampleConfigFile&ProjectName=flickrnet Flickr is a great place for storing and sharing photos, and the API access allows developers to do seamless integration with the photos uploaded on Flickr.
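For reference, here is a minimal sketch of what the Gallery.aspx code-behind described above could look like. The API key, the photoset ID, the cache folder and the use of Response.Write to emit the markup are illustrative assumptions rather than the article's exact code, so adapt them to your own account and page layout.

using System;
using FlickrNet;

public partial class Gallery : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Cache folder as discussed in the troubleshooting section (needs write permission)
        Flickr.CacheLocation = Server.MapPath("./FlickerCache/");

        // "your-api-key" and the set ID below are placeholders – substitute your own values
        Flickr f = new Flickr("your-api-key");

        PhotoSearchExtras extraInfo = PhotoSearchExtras.OwnerName | PhotoSearchExtras.DateTaken;
        PhotosetPhotoCollection photos = f.PhotosetsGetPhotos("72157629872940852", extraInfo);

        foreach (Photo p in photos)
        {
            // Push a thumbnail that links to the large image, plus the title and upload date
            Response.Write("<a href='" + p.LargeUrl + "'><img src='" + p.ThumbnailUrl +
                           "' alt='" + Server.HtmlEncode(p.Title) + "' /></a>");
            Response.Write("<p>" + Server.HtmlEncode(p.Title) + " – uploaded " + p.DateUploaded + "</p>");
        }
    }
}

In a real page you would more likely bind the collection to a Repeater or ListView control instead of calling Response.Write, but the Flickr calls stay the same.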

    Read the article

  • Randomly displayed flashing lines, no response to all shortcuts, just power off. [syslog included]

    - by B. Roland
    Hello! I have an old machine, and I want to use it to teach employees how to use Ubuntu and to make it easier for them to switch from Windows. I installed 10.04 and updated it, but this strange thing happened. The graphical installation failed with the same strange behaviour; with the alternate CD it worked. Sometimes when I boot up, a boot message is displayed: Keyboard failure... It is often displayed after a reboot, and after a shutdown when I haven't unplugged the machine from AC power. I have already replaced the keyboard, same failure... If I power off and unplug from AC, no keyboard problems are displayed at boot time. Details Configuration: Dell OptiPlex GX60 - in original cover, no changes. 256 MB DDR 166 MHz Intel® Celeron® Processor 2.40 GHz Dell 0C3207 Base Board I know that is not much, but I have three other NEC computers with nearly the same configuration, and they work well with 9.10, 10.04 and 10.10. Live CDs: I have tried with 10.04 and 10.10, but the problem appears there too. With 9.10 no strange things were displayed, but it froze during a simple apt-get install. Syslog: an error loop is logged, and I paste the whole startup and the error lines here. The flashing lines sometimes appear immediately after login, sometimes after 10 minutes, and once nothing happened at all. The strange thing displayed immediately after login: here. On another boot, after some minutes, strange lines and the loop appeared in the log: here. The loop looks like this: Jan 23 00:20:08 machine_name kernel: [ 46.782212] [drm:i915_gem_entervt_ioctl] *ERROR* Reenabling wedged hardware, good luck Jan 23 00:20:08 machine_name kernel: [ 47.100033] [drm:i915_hangcheck_elapsed] *ERROR* Hangcheck timer elapsed... GPU hung Jan 23 00:20:08 machine_name kernel: [ 47.100045] render error detected, EIR: 0x00000000 Jan 23 00:20:08 machine_name kernel: [ 47.101487] [drm:i915_do_wait_request] *ERROR* i915_do_wait_request returns -5 (awaiting 16 at 9) Jan 23 00:20:11 machine_name kernel: [ 49.152020] [drm:i915_gem_idle] *ERROR* hardware wedged Jan 23 00:20:11 machine_name gdm-simple-slave[1245]: WARNING: Unable to load file '/etc/gdm/custom.conf': No such file or directory Jan 23 00:20:11 machine_name acpid: client 1239[0:0] has disconnected Jan 23 00:20:11 machine_name acpid: client connected from 1247[0:0] Jan 23 00:20:11 machine_name acpid: 1 client rule loaded UPDATE: Added syslog entries: before the errors, the error loop, and the complete shutdown (after the big updates): Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Sucessfully called chroot. Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Sucessfully dropped privileges. Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Sucessfully limited resources. Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Running. Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Watchdog thread running. Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Canary thread running. Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Sucessfully made thread 1337 of process 1337 (n/a) owned by '1001' high priority at nice level -11. Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Supervising 1 threads of 1 processes of 1 users. Jan 28 20:40:32 machine_name rtkit-daemon[1339]: Sucessfully made thread 1345 of process 1337 (n/a) owned by '1001' RT at priority 5. Jan 28 20:40:32 machine_name rtkit-daemon[1339]: Supervising 2 threads of 1 processes of 1 users. Jan 28 20:40:32 machine_name rtkit-daemon[1339]: Sucessfully made thread 1349 of process 1337 (n/a) owned by '1001' RT at priority 5. 
Jan 28 20:40:32 machine_name rtkit-daemon[1339]: Supervising 3 threads of 1 processes of 1 users. Jan 28 20:40:37 machine_name pulseaudio[1337]: ratelimit.c: 2 events suppressed Jan 28 20:41:33 machine_name AptDaemon: INFO: Initializing daemon Jan 28 20:41:44 machine_name kernel: [ 167.691563] lo: Disabled Privacy Extensions Jan 28 20:47:33 machine_name AptDaemon: INFO: Quiting due to inactivity Jan 28 20:47:33 machine_name AptDaemon: INFO: Shutdown was requested Jan 28 20:59:50 machine_name kernel: [ 1253.840513] lo: Disabled Privacy Extensions Jan 28 21:17:02 machine_name CRON[1874]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jan 28 21:17:38 machine_name kernel: [ 2321.553239] lo: Disabled Privacy Extensions Jan 28 22:07:44 machine_name kernel: [ 5327.840254] lo: Disabled Privacy Extensions Jan 28 22:17:02 machine_name CRON[2665]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jan 28 22:32:38 machine_name sudo: pam_sm_authenticate: Called Jan 28 22:32:38 machine_name sudo: pam_sm_authenticate: username = [some_user] Jan 28 22:32:38 machine_name sudo: pam_sm_authenticate: /home/some_user is already mounted Jan 28 22:57:03 machine_name kernel: [ 8286.641472] lo: Disabled Privacy Extensions Jan 28 22:57:24 machine_name sudo: pam_sm_authenticate: Called Jan 28 22:57:24 machine_name sudo: pam_sm_authenticate: username = [some_user] Jan 28 22:57:24 machine_name sudo: pam_sm_authenticate: /home/some_user is already mounted Jan 28 23:07:42 machine_name kernel: [ 8925.272030] [drm:i915_hangcheck_elapsed] *ERROR* Hangcheck timer elapsed... GPU hung Jan 28 23:07:42 machine_name kernel: [ 8925.272048] render error detected, EIR: 0x00000000 Jan 28 23:07:42 machine_name kernel: [ 8925.272093] [drm:i915_do_wait_request] *ERROR* i915_do_wait_request returns -5 (awaiting 171453 at 171452) Jan 28 23:07:45 machine_name kernel: [ 8928.868041] [drm:i915_gem_idle] *ERROR* hardware wedged Jan 28 23:08:10 machine_name acpid: client 925[0:0] has disconnected Jan 28 23:08:10 machine_name acpid: client connected from 8127[0:0] Jan 28 23:08:10 machine_name acpid: 1 client rule loaded Jan 28 23:08:11 machine_name kernel: [ 8955.046248] [drm:i915_gem_entervt_ioctl] *ERROR* Reenabling wedged hardware, good luck Jan 28 23:08:12 machine_name kernel: [ 8955.364016] [drm:i915_hangcheck_elapsed] *ERROR* Hangcheck timer elapsed... GPU hung Jan 28 23:08:12 machine_name kernel: [ 8955.364027] render error detected, EIR: 0x00000000 Jan 28 23:08:12 machine_name kernel: [ 8955.364407] [drm:i915_do_wait_request] *ERROR* i915_do_wait_request returns -5 (awaiting 171457 at 171452) Jan 28 23:08:14 machine_name kernel: [ 8957.472025] [drm:i915_gem_idle] *ERROR* hardware wedged Jan 28 23:08:14 machine_name acpid: client 8127[0:0] has disconnected Jan 28 23:08:14 machine_name acpid: client connected from 8141[0:0] Jan 28 23:08:14 machine_name acpid: 1 client rule loaded Jan 28 23:08:15 machine_name kernel: [ 8958.671722] [drm:i915_gem_entervt_ioctl] *ERROR* Reenabling wedged hardware, good luck Jan 28 23:08:15 machine_name kernel: [ 8958.988015] [drm:i915_hangcheck_elapsed] *ERROR* Hangcheck timer elapsed... 
GPU hung Jan 28 23:08:15 machine_name kernel: [ 8958.988026] render error detected, EIR: 0x00000000 Jan 28 23:08:15 machine_name kernel: [ 8958.989400] [drm:i915_do_wait_request] *ERROR* i915_do_wait_request returns -5 (awaiting 171459 at 171452) Jan 28 23:08:16 machine_name init: tty4 main process (848) killed by TERM signal Jan 28 23:08:16 machine_name init: tty5 main process (856) killed by TERM signal Jan 28 23:08:16 machine_name NetworkManager: nm_signal_handler(): Caught signal 15, shutting down normally. Jan 28 23:08:16 machine_name init: tty2 main process (874) killed by TERM signal Jan 28 23:08:16 machine_name init: tty3 main process (875) killed by TERM signal Jan 28 23:08:16 machine_name init: tty6 main process (877) killed by TERM signal Jan 28 23:08:16 machine_name init: cron main process (890) killed by TERM signal Jan 28 23:08:16 machine_name init: tty1 main process (1146) killed by TERM signal Jan 28 23:08:16 machine_name avahi-daemon[644]: Got SIGTERM, quitting. Jan 28 23:08:16 machine_name avahi-daemon[644]: Leaving mDNS multicast group on interface eth0.IPv4 with address 10.238.11.134. Jan 28 23:08:16 machine_name acpid: exiting Jan 28 23:08:16 machine_name init: avahi-daemon main process (644) terminated with status 255 Jan 28 23:08:17 machine_name kernel: Kernel logging (proc) stopped. Jan 28 23:09:00 machine_name kernel: imklog 4.2.0, log source = /proc/kmsg started. Jan 28 23:09:00 machine_name rsyslogd: [origin software="rsyslogd" swVersion="4.2.0" x-pid="516" x-info="http://www.rsyslog.com"] (re)start Jan 28 23:09:00 machine_name rsyslogd: rsyslogd's groupid changed to 103 Jan 28 23:09:00 machine_name rsyslogd: rsyslogd's userid changed to 101 Jan 28 23:09:00 machine_name rsyslogd-2039: Could no open output file '/dev/xconsole' [try http://www.rsyslog.com/e/2039 ] When I hit the On/Off button, the system shuts down normally. May be it a hardware problem, but I don't know... Can you say something useful to solve my problem?

    Read the article

  • It's not just “Single Sign-on” by Steve Knott (aurionPro SENA)

    - by Greg Jensen
    It is true that Oracle Enterprise Single Sign-on (Oracle ESSO) started out as purely an application single sign-on tool, but as we have seen in the previous articles in this series the product has matured into a suite of tools that can do more than just automated single sign-on and can also provide a rapidly deployed, cost-effective solution to many demanding password management problems. In the last article of this series I would like to discuss three cases where customers faced password scenarios that required more than just single sign-on, and how some of the less well-known tools in the Oracle ESSO suite “kitbag” helped solve these challenges. Case #1 One of the issues often faced by our customers is how to keep their applications compliant. I had a client who liked the idea of automated single sign-on for most of his applications but had a key requirement to actually increase the security for one specific SOX application. For the SOX application he wanted to secure access by using two-factor authentication with a smartcard. The problem was that the application did not support two-factor authentication. The solution was to use a feature from the Oracle ESSO suite called authentication manager. This feature enables you to have multiple authentication methods for the same user, which in this case were a smartcard and the Windows password.  Within authentication manager each authenticator can be configured with a security grade, so we gave the smartcard a high grade and the Windows password a normal grade. Security grading in Oracle ESSO can be configured on a per-application basis, so we set the SOX application to require the higher-grade smartcard authenticator. The end result for the user was that they enjoyed automated single sign-on for most of the applications apart from the SOX application. When the SOX application was launched, the user was required by ESSO to present their smartcard before being given access to the application. Case #2 Another example of solving compliance issues was the case of a large energy company that had a number of core billing applications. New regulations required that users change their password regularly and use a complex password. The problem facing the customer was that the core billing applications did not have any native user password change functionality. The customer could not replace the core applications because of the cost and time required to re-develop them. With a reputation for innovation, aurionPro SENA were approached to provide a solution to this problem using Oracle ESSO. Oracle ESSO has a password expiry feature that can be triggered periodically based on the timestamp of the user's last password creation, so our strategy here was to leverage this feature to provide the password change experience. The trigger can launch an application change password event; however, in this scenario there was no native change password feature that could be launched, so a “dummy” change password screen was created that could imitate the missing change password function and connect to the application database on behalf of the user. Oracle ESSO was configured to trigger a change password event every 60 days. After this period, if the user launched the application, Oracle ESSO would detect the logon screen and invoke the password expiry feature. Oracle ESSO would trigger the “dummy” screen, detect it automatically as the application change password screen and insert a complex password on behalf of the user. 
After the password event had completed, the user was logged on to the application with their new password. All this was provided at a fraction of the cost of re-developing the core applications. Case #3 Recent popular initiatives such as BYOD and working-from-home schemes bring with them many challenges in administering “unmanaged machines” and sometimes “unmanageable users.” In a recent case, a client had a dispersed community of casual contractors who worked for the business using their own laptops to access applications. To improve security around password management, the goal was to provision the passwords directly to these contractors. In a previous article we saw how Oracle ESSO has the capability to provision passwords through Provisioning Gateway, but the challenge in this scenario was how to get the Oracle ESSO agent to the casual contractor on an unmanaged machine. The answer was to use another tool in the suite, Oracle ESSO Anywhere. This component can compile the normal Oracle ESSO functionality into a deployment package that can be made available from a website in a similar way to a streamed application. The ESSO Anywhere agent does not actually install into the registry or program files but runs in a folder within the user’s profile, so no local administrator rights are required for installation. The ESSO Anywhere package can also be configured to stay persistent or disable itself at the end of the user’s session. In this case the user just needed to be told where the website package was located and to download the package. Once the download was complete the agent started automatically and the user was provided with single sign-on to their applications without ever knowing the application passwords. Finally, as we have seen in this series, Oracle ESSO not only has great utilities in its own toolbox but also has direct integration with Oracle Privileged Account Manager, Oracle Identity Manager and Oracle Access Manager. Integrated together, these tools provide a complete and complementary platform to address even the most complex identity and access management requirements. So what next for Oracle ESSO? “Agentless ESSO available in the cloud” – but that will be a subject for a future Oracle ESSO series!

    Read the article

  • SQL Server Master class winner

    - by Testas
     The winner of the SQL Server MasterClass competition courtesy of the UK SQL Server User Group and SQL Server Magazine!    Steve Hindmarsh     There is still time to register for the seminar yourself at:  www.regonline.co.uk/kimtrippsql     More information about the seminar     Where: Radisson Edwardian Heathrow Hotel, London  When: Thursday 17th June 2010  This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the affect application design choices have on database performance. The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years combined experience working with SQL Server in the field, and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will: Debunk many of the ingrained misconceptions around SQL Server's behaviour    Show you disaster recovery techniques critical to preserving your company's life-blood - the data    Explain how a common application design pattern can wreak havoc in the database Walk through the top-10 points to follow around operations and maintenance for a well-performing and available data tier! Please Note: Agenda may be subject to change  Sessions Abstracts  KEYNOTE: Bridging the Gap Between Development and Production    Applications are commonly developed with little regard for how design choices will affect performance in production. This is often because developers don't realize the implications of their design on how SQL Server will be able to handle a high workload (e.g. blocking, fragmentation) and/or because there's no full-time trained DBA that can recognize production problems and help educate developers. The keynote sets the stage for the rest of the day. Discussing some of the issues that can arise, explaining how some can be avoided and highlighting some of the features in SQL 2008 that can help developers and DBAs make better use of SQL Server, and troubleshoot when things go wrong.   SESSION ONE: SQL Server Mythbusters  It's amazing how many myths and misconceptions have sprung up and persisted over the years about SQL Server - after many years helping people out on forums, newsgroups, and customer engagements, Paul and Kimberly have heard it all. Are there really non-logged operations? Can interrupting shrinks or rebuilds cause corruption? Can you override the server's MAXDOP setting? Will the server always do a table-scan to get a row count? Many myths lead to poor design choices and inappropriate maintenance practices so these are just a few of many, many myths that Paul and Kimberly will debunk in this fast-paced session on how SQL Server operates and should be managed and maintained.   SESSION TWO: Database Recovery Techniques Demo-Fest  Even if a company has a disaster recovery strategy in place, they need to practice to make sure that the plan will work when a disaster does strike. In this fast-paced demo session Paul and Kimberly will repeatedly do nasty things to databases and then show how they are recovered - demonstrating many techniques that can be used in production for disaster recovery. Not for the faint-hearted!   
SESSION THREE: GUIDs: Use, Abuse, and How To Move Forward   Since the addition of the GUID (Microsoft’s implementation of the UUID), my life as a consultant and "tuner" has been busy. I’ve seen databases designed with GUID keys run fairly well with small workloads but completely fall over and fail because they just cannot scale. And, I know why GUIDs are chosen - it simplifies the handling of parent/child rows in your batches so you can reduce round-trips or avoid dealing with identity values. And, yes, sometimes it's even for distributed databases and/or security that GUIDs are chosen. I'm not entirely against ever using a GUID but overusing and abusing GUIDs just has to be stopped! Please, please, please let me give you better solutions and explanations on how to deal with your parent/child rows, round-trips and clustering keys!   SESSION 4: Essential Database Maintenance  In this session, Paul and Kimberly will run you through their top-ten database maintenance recommendations, with a lot of tips and tricks along the way. These are distilled from almost 30 years combined experience working with SQL Server customers and are geared towards making your databases more performant, more available, and more easily managed (to save you time!). Everything in this session will be practical and applicable to a wide variety of databases. Topics covered include: backups, shrinks, fragmentation, statistics, and much more! Focus will be on 2005 but we'll explain some of the key differences for 2000 and 2008 as well. Speaker Biographies     Kimberley L. Tripp Paul and Kimberly are a husband-and-wife team who own and run SQLskills.com, a world-renowned SQL Server consulting and training company. They are both SQL Server MVPs and Microsoft Regional Directors, with over 30 years of combined experience on SQL Server. Paul worked on the SQL Server team for nine years in development and management roles, writing many of the DBCC commands, and ultimately with responsibility for core Storage Engine for SQL Server 2008. Paul writes extensively on his blog (SQLskills.com/blogs/Paul) and for TechNet Magazine, for which he is also a Contributing Editor. Kimberly worked on the SQL Server team in the early 1990s as a tester and writer before leaving to found SQLskills and embrace her passion for teaching and consulting. Kimberly has been a staple at worldwide conferences since she first presented at TechEd in 1996, and she blogs at SQLskills.com/blogs/Kimberly. They have written Microsoft whitepapers and books for SQL Server 2000, 2005 and 2008, and are regular, top-rated presenters worldwide on database maintenance, high availability, disaster recovery, performance tuning, and SQL Server internals. Together they teach the SQL MCM certification and throughout Microsoft.In their spare time, they like to find frogfish in remote corners of the world.   Speaker Testimonials  "To call them good trainers is an epic understatement. They know how to deliver technical material in ways that illustrate it well. I had to stop Paul at one point and ask him how long it took to build a particular slide because the animations were so good at conveying a hard-to-describe process." "These are not beginner presenters, and they put an extreme amount of preparation and attention to detail into everything that they do. Completely, utterly professional." "When it comes to the instructors themselves, Kimberly and Paul simply have no equal. 
Not only are they both ultimate authorities, but they have endless enthusiasm about the material, and spot on delivery. If either ever got tired they never showed it, even after going all day and all week. We witnessed countless demos over the course of the week, some extremely involved, multi-step processes, and I can’t recall one that didn’t go the way it was supposed to." "You might think that with this extreme level of skill comes extreme levels of egotism and lack of patience. Nothing could be further from the truth. ... They simply know how to teach, and are approachable, humble, and patient." "The experience Paul and Kimberly have had with real live customers yields a lot more information and things to watch out for than you'd ever get from documentation alone." “Kimberly, I just wanted to send you an email to let you know how awesome you are! I have applied some of your indexing strategies to our website’s homegrown CMS and we are experiencing a significant performance increase. WOW....amazing tips delivered in an exciting way!  Thanks again” 

    Read the article

  • Fragmented Log files could be slowing down your database

    - by Fatherjack
    Something that is sometimes forgotten by a lot of DBAs is the fact that database log files get fragmented in the same way that you get fragmentation in a data file. The cause is very different but the effect is the same – too much effort reading and writing data. Data files get fragmented as data is changed through normal system activity: INSERTs, UPDATEs and DELETEs cause fragmentation, and most experienced DBAs are monitoring their indexes for fragmentation and dealing with it accordingly. However, you don’t hear about so many working on their log files. How can a log file get fragmented? I’m glad you asked. When you create a database there are at least two files created on the disk storage: an mdf for the data and an ldf for the log file (you can also have ndf files for extra data storage but that’s off topic for now). It is wholly possible to have more than one log file, but in most cases there is little point in creating more than one, as the log file is written to in a ‘wrap-around’ method (more on that later). When a log file is created at the time that a database is created, the file is actually subdivided into a number of virtual log files (VLFs). The number and size of these VLFs depend on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto grow? Then you have potentially been introducing many VLFs into your log file. Let’s see how many VLFs we have in a brand new database. USE master GO CREATE DATABASE VLF_Test ON ( NAME = VLF_Test, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf', SIZE = 100, MAXSIZE = 500, FILEGROWTH = 50 ) LOG ON ( NAME = VLF_Test_Log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf', SIZE = 5MB, MAXSIZE = 250MB, FILEGROWTH = 5MB ); go USE VLF_Test go DBCC LOGINFO; The result of this is firstly that a new database is created with the specified file sizes, and then the DBCC LOGINFO results are returned to the script editor. The DBCC LOGINFO results have plenty of interesting information in them, but let’s first note that there are 4 rows of information; this relates to the fact that 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes; you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25 MB. Let’s alter the CREATE DATABASE script to create a log file that’s a bit bigger and see what happens. Alter the code above so that the log file details are replaced by LOG ON ( NAME = VLF_Test_Log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf', SIZE = 1GB, MAXSIZE = 25GB, FILEGROWTH = 1GB ); With a bigger log file specified we get more VLFs. What if we make it bigger again? LOG ON ( NAME = VLF_Test_Log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf', SIZE = 5GB, MAXSIZE = 250GB, FILEGROWTH = 5GB ); This time we see even more VLFs are created within our log file. We now have our 5GB log file comprised of 16 files of 320MB each. In fact these sizes fall into all the ranges that control the VLF creation criteria – what a coincidence! The rules that are followed when a log file is created or has its size increased are pretty basic. 
If the file growth is lower than 64MB then 4 VLFs are created If the growth is between 64MB and 1GB then 8 VLFs are created If the growth is greater than 1GB then 16 VLFs are created. Now the potential for chaos comes if the default values and settings for log file growth are used. By default a database log file gets a 1MB log file with unlimited growth in steps of 10%. The database we just created is 6 MB, let’s add some data and see what happens. USE vlf_test go -- we need somewhere to put the data so, a table is in order IF OBJECT_ID('A_Table') IS NOT NULL DROP TABLE A_Table go CREATE TABLE A_Table ( Col_A int IDENTITY, Col_B CHAR(8000) ) GO -- Let's check the state of the log file -- 4 VLFs found EXECUTE ('DBCC LOGINFO'); go -- We can go ahead and insert some data and then check the state of the log file again INSERT A_Table (col_b) SELECT TOP 500 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2 GO -- insert 500 rows and we get 22 VLFs EXECUTE ('DBCC LOGINFO'); go -- Let's insert more rows INSERT A_Table (col_b) SELECT TOP 2000 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2 GO 10 -- insert 2000 rows, in 10 batches and we suddenly have 107 VLFs EXECUTE ('DBCC LOGINFO'); Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions, I just chose to illustrate this with INSERTs. Having too many VLFs can cause performance degradation at times of database start up, log backup and log restore operations so it’s well worth keeping a check on this property. How do we prevent excessive VLF creation? Creating the database with larger files and also with larger growth steps and actively choosing to grow your databases rather than leaving it to the Auto Grow event can make sure that the growths are made with a size that is optimal. How do we resolve a situation of a database with too many VLFs? This process needs to be done when the database is under little or no stress so that you don’t affect system users. The steps are: BACKUP LOG YourDBName TO YourBackupDestinationOfChoice Shrink the log file to its smallest possible size DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY) * Re-size the log file to the size you want it to, taking in to account your expected needs for the coming months or year. ALTER DATABASE YourDBName MODIFY FILE ( NAME = FileNameOfTLogHere, SIZE = TheSizeYouWantItToBeIn_MB) * – If you don’t know the file name of your log file then run sp_helpfile while you are connected to the database that you want to work on and you will get the details you need. The resize step can take quite a while This is already detailed far better than I can explain it by Kimberley Tripp in her blog 8-Steps-to-better-Transaction-Log-throughput.aspx. The result of this will be a log file with a VLF count according to the bullet list above. Knowing when VLFs are being created By complete coincidence while I have been writing this blog (it’s been quite some time from it’s inception to going live) Jonathan Kehayias from SQLSkills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it as this is going to catch any sneaky auto grows that take place and let you know about them right away. 
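Coming back to the resolution steps listed above, a rough sketch of the whole check-and-fix routine is shown below. The database name (YourDBName), log file logical name (YourDBName_Log), backup path and target size are all placeholders, and the #vlf table matches the seven columns DBCC LOGINFO returns on SQL Server 2008 (later versions add a RecoveryUnitId column), so treat it as illustrative rather than copy-and-paste ready.

USE YourDBName;
GO
-- Step 0: count the VLFs we currently have
CREATE TABLE #vlf
(
    FileId      int,
    FileSize    bigint,
    StartOffset bigint,
    FSeqNo      int,
    Status      int,
    Parity      tinyint,
    CreateLSN   numeric(25,0)
);
INSERT #vlf EXEC ('DBCC LOGINFO');
SELECT COUNT(*) AS VLF_Count FROM #vlf;
DROP TABLE #vlf;
GO
-- Step 1: back up the log (assumes the database is in the FULL recovery model)
BACKUP LOG YourDBName TO DISK = 'X:\SQLBackups\YourDBName_log.trn';
GO
-- Step 2: shrink the log file to its smallest possible size
DBCC SHRINKFILE (YourDBName_Log, TRUNCATEONLY);
GO
-- Step 3: grow it back in one step to the size you expect to need
-- (a single growth of more than 1GB creates 16 VLFs, per the rules above)
ALTER DATABASE YourDBName MODIFY FILE (NAME = YourDBName_Log, SIZE = 4096MB);
GO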
Hassle-free monitoring of VLFs If you are lucky or wise enough to be using SQL Monitor or another monitoring tool that lets you write your own custom metrics, then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site, and there are some others there that are very useful, so take a moment or two to look around while you are there. Resources MSDN – http://msdn.microsoft.com/en-us/library/ms179355(v=sql.105).aspx Kimberly Tripp from SQLSkills.com – http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx Thomas LaRock at Simple-Talk.com – http://www.simple-talk.com/sql/database-administration/monitoring-sql-server-virtual-log-file-fragmentation/ Disclosure I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to make your order.

    Read the article

  • SQL SERVER – A Quick Look at Logging and Ideas around Logging

    - by pinaldave
    This blog post is written in response to the T-SQL Tuesday post on Logging. When someone talks about logging, personally I get lots of ideas about it. I have seen logging used as a very generic term. Let me ask you this question first before I continue writing about logging. What is the first thing that comes to your mind when you hear the word “Logging”? Now ask the same question to the person standing next to you. I am pretty confident that you will get a different answer from different people. I decided to do this activity and asked five SQL Server people the same question. Question: What is the first thing that comes to your mind when you hear the word “Logging”? Strangely enough, I got a different answer every single time. Let me just list the answers I got from my friends, and let us go over them one by one. Output Clause The very first person replied "output clause" – a pretty interesting answer to start with, and I can see exactly what he was thinking. SQL Server 2005 introduced a new OUTPUT clause. The OUTPUT clause has access to the inserted and deleted tables (virtual tables) just like triggers. The OUTPUT clause can be used to return values to the client, and it can be used with INSERT, UPDATE, or DELETE to identify the actual rows affected by these statements. Here are some references for the Output Clause: OUTPUT Clause Example and Explanation with INSERT, UPDATE, DELETE Reasons for Using Output Clause – Quiz Tips from the SQL Joes 2 Pros Development Series – Output Clause in Simple Examples Error Logs I was expecting someone to mention error logs when the topic is logging. The error log is the first place to look when there is any error, either with the application or with the operating system. I keep a policy of checking my server's error log every day. The reason is simple – often enough in my career I have found something in the error logs that I was not expecting. There are cases when I noticed errors in the error log and fixed them before the end users noticed them. Another common practice I always recommend to my DBA friends is that when any error happens they should find the relevant entries in the error logs and document them. It is quite possible that they will see the same error in the error log again and be able to fix it based on the knowledge base they have created. There are many different kinds of error log files in SQL Server as well – 1) SQL Server Error Logs 2) Windows Event Log 3) SQL Server Agent Log 4) SQL Server Profile Log 5) SQL Server Setup Log, etc. Here are some references for Error Logs: Recycle Error Log – Create New Log file without Server Restart SQL Error Messages Change Data Capture I was surprised by this answer – more than the answer itself, I was surprised by the person who gave it. I always thought he was an expert in HTML and JavaScript, but I guess one should never make assumptions about others. Indeed, one of the cool logging features is Change Data Capture. Change Data Capture records INSERTs, UPDATEs, and DELETEs applied to SQL Server tables, and makes a record available of what changed, where, and when, in simple relational ‘change tables’ rather than in an esoteric chopped salad of XML. These change tables contain columns that reflect the column structure of the source table you have chosen to track, along with the metadata needed to understand the changes that have been made. 
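As a quick, hedged illustration of how this is switched on (the database name and the dbo.Accounts table are invented purely for the example), enabling CDC boils down to two system procedure calls:

USE YourDatabase;   -- placeholder database name
GO
-- Enable CDC at the database level (SQL Server 2008 Enterprise, Developer or Evaluation edition)
EXEC sys.sp_cdc_enable_db;
GO
-- Enable CDC for one table; changes will be recorded in the change table cdc.dbo_Accounts_CT
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Accounts',   -- made-up table name
    @role_name     = NULL;          -- no gating role for this example
GO
-- Note: the capture and cleanup jobs require SQL Server Agent to be running.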
Here are some references for Change Data Capture: Introduction to Change Data Capture (CDC) in SQL Server 2008 Tuning the Performance of Change Data Capture in SQL Server 2008 Download Script of Change Data Capture (CDC) CDC and TRUNCATE – Cannot truncate table because it is published for replication or enabled for Change Data Capture Dynamic Management View (DMV) I like this answer. If asked, I would not have come up with DMVs right away, but in the spirit of the original question, I think DMVs do log data. DMVs log, store or record various data and activity on the SQL Server. Dynamic management views return server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance. One can get a plethora of information from DMVs – high availability status, query execution details, SQL Server resource status, etc. Here are some references for Dynamic Management Views (DMVs): SQL SERVER – Denali – DMV Enhancement – sys.dm_exec_query_stats – New Columns DMV – sys.dm_os_windows_info – Information about Operating System DMV – sys.dm_os_wait_stats Explanation – Wait Type – Day 3 of 28 DMV sys.dm_exec_describe_first_result_set_for_object – Describes the First Result Metadata for the Module Transaction Log Impact Detection Using DMV – dm_tran_database_transactions Log Files I almost flipped at this final answer from my friend. This should probably be the first answer. Yes, indeed, the log file logs the SQL Server activities. One can write infinite things about log files. SQL Server uses the log file, with the extension .ldf, to manage transactions and maintain database integrity. The log file ensures that valid data is written out to the database and the system is in a consistent state. Log files are extremely useful in case of database failures, as with the help of a full backup the database can be brought to the desired state (point-in-time recovery is also possible). A SQL Server database has three recovery models – 1) Simple, 2) Full and 3) Bulk Logged. Each of the models uses the .ldf file for performing various activities. It is very important to take backups of the log files (along with full backups), as one never knows when a log file backup will come into action and save the day! How to Stop Growing Log File Too Big Reduce the Virtual Log Files (VLFs) from LDF file Log File Growing for Model Database – model Database Log File Grew Too Big master Database Log File Grew Too Big SHRINKFILE and TRUNCATE Log File in SQL Server 2008 Can I just say I loved this month's T-SQL Tuesday question? It really provoked a very interesting conversation around me. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Optimization, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
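As a short postscript to the first answer above (Output Clause), here is a minimal, hedged example of logging the rows changed by an UPDATE with OUTPUT – the Accounts and AccountAudit tables are invented purely for illustration:

CREATE TABLE dbo.Accounts (AccountId int PRIMARY KEY, Balance decimal(18,2));
CREATE TABLE dbo.AccountAudit
(
    AccountId  int,
    OldBalance decimal(18,2),
    NewBalance decimal(18,2),
    ChangedAt  datetime NOT NULL DEFAULT GETDATE()
);

INSERT dbo.Accounts (AccountId, Balance) VALUES (1, 100.00), (2, 250.00);

-- OUTPUT exposes the deleted/inserted virtual tables, just like a trigger,
-- so the rows affected by the UPDATE are logged in the same statement
UPDATE dbo.Accounts
SET    Balance = Balance * 1.05
OUTPUT deleted.AccountId, deleted.Balance, inserted.Balance
INTO   dbo.AccountAudit (AccountId, OldBalance, NewBalance)
WHERE  Balance >= 100.00;

SELECT * FROM dbo.AccountAudit;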

    Read the article

< Previous Page | 145 146 147 148 149 150 151 152 153 154 155 156  | Next Page >