Search Results

Search found 6580 results on 264 pages for 'require'.


  • All is working except if($_POST['submit']=='Update')

    - by user1319909
    I have a working registration and login system. I am trying to create a form where a user can add product registration info (via a MySQL UPDATE). I can't seem to get the db to actually update the fields. What am I missing here?!?

        <?php
        define('INCLUDE_CHECK', true);

        require 'connect.php';
        require 'functions.php';
        // Those two files can be included only if INCLUDE_CHECK is defined

        session_name('tzLogin');                   // Starting the session
        session_set_cookie_params(2*7*24*60*60);   // Making the cookie live for 2 weeks
        session_start();

        if($_SESSION['id'] && !isset($_COOKIE['tzRemember']) && !$_SESSION['rememberMe'])
        {
            // If you are logged in, but you don't have the tzRemember cookie (browser restart)
            // and you have not checked the rememberMe checkbox: destroy the session
            $_SESSION = array();
            session_destroy();
        }

        if(isset($_GET['logoff']))
        {
            $_SESSION = array();
            session_destroy();
            header("Location: index_login3.php");
            exit;
        }

        if($_POST['submit']=='Login')
        {
            // Checking whether the Login form has been submitted
            $err = array();  // Will hold our errors

            if(!$_POST['username'] || !$_POST['password'])
                $err[] = 'All the fields must be filled in!';

            if(!count($err))
            {
                // Escaping all input data
                $_POST['username']   = mysql_real_escape_string($_POST['username']);
                $_POST['password']   = mysql_real_escape_string($_POST['password']);
                $_POST['rememberMe'] = (int)$_POST['rememberMe'];

                $row = mysql_fetch_assoc(mysql_query(
                    "SELECT * FROM electrix_users
                      WHERE usr='{$_POST['username']}' AND pass='".md5($_POST['password'])."'"));

                if($row['usr'])
                {
                    // If everything is OK, log in and store some data in the session
                    $_SESSION['usr']      = $row['usr'];
                    $_SESSION['id']       = $row['id'];
                    $_SESSION['email']    = $row['email'];
                    $_SESSION['first']    = $row['first'];
                    $_SESSION['last']     = $row['last'];
                    $_SESSION['address1'] = $row['address1'];
                    $_SESSION['address2'] = $row['address2'];
                    $_SESSION['city']     = $row['city'];
                    $_SESSION['state']    = $row['state'];
                    $_SESSION['zip']      = $row['zip'];
                    $_SESSION['country']  = $row['country'];
                    $_SESSION['product1'] = $row['product1'];
                    $_SESSION['serial1']  = $row['serial1'];
                    $_SESSION['product2'] = $row['product2'];
                    $_SESSION['serial2']  = $row['serial2'];
                    $_SESSION['product3'] = $row['product3'];
                    $_SESSION['serial3']  = $row['serial3'];
                    $_SESSION['rememberMe'] = $_POST['rememberMe'];

                    setcookie('tzRemember', $_POST['rememberMe']);
                }
                else $err[] = 'Wrong username and/or password!';
            }

            if($err)
                $_SESSION['msg']['login-err'] = implode('<br />', $err);
                // Save the error messages in the session

            header("Location: index_login3.php");
            exit;
        }
        else if($_POST['submit']=='Register')
        {
            // If the Register form has been submitted
            $err = array();

            if(strlen($_POST['username'])<4 || strlen($_POST['username'])>32)
            {
                $err[] = 'Your username must be between 3 and 32 characters!';
            }

            if(preg_match('/[^a-z0-9\-\_\.]+/i', $_POST['username']))
            {
                $err[] = 'Your username contains invalid characters!';
            }

            if(!checkEmail($_POST['email']))
            {
                $err[] = 'Your email is not valid!';
            }

            if(!count($err))
            {
                // If there are no errors, generate a random password
                $pass = substr(md5($_SERVER['REMOTE_ADDR'].microtime().rand(1,100000)), 0, 6);

                // Escape the input data
                $_POST['email']    = mysql_real_escape_string($_POST['email']);
                $_POST['username'] = mysql_real_escape_string($_POST['username']);
                $_POST['first']    = mysql_real_escape_string($_POST['first']);
                $_POST['last']     = mysql_real_escape_string($_POST['last']);
                $_POST['address1'] = mysql_real_escape_string($_POST['address1']);
                $_POST['address2'] = mysql_real_escape_string($_POST['address2']);
                $_POST['city']     = mysql_real_escape_string($_POST['city']);
                $_POST['state']    = mysql_real_escape_string($_POST['state']);
                $_POST['zip']      = mysql_real_escape_string($_POST['zip']);
                $_POST['country']  = mysql_real_escape_string($_POST['country']);

                mysql_query("INSERT INTO electrix_users(usr,pass,email,first,last,address1,address2,city,state,zip,country,regIP,dt)
                    VALUES(
                        '".$_POST['username']."',
                        '".md5($pass)."',
                        '".$_POST['email']."',
                        '".$_POST['first']."',
                        '".$_POST['last']."',
                        '".$_POST['address1']."',
                        '".$_POST['address2']."',
                        '".$_POST['city']."',
                        '".$_POST['state']."',
                        '".$_POST['zip']."',
                        '".$_POST['country']."',
                        '".$_SERVER['REMOTE_ADDR']."',
                        NOW()
                    )");

                if(mysql_affected_rows($link)==1)
                {
                    send_mail('[email protected]',
                              $_POST['email'],
                              'Your New Electrix User Password',
                              'Thank you for registering at www.electrixpro.com. Your password is: '.$pass);

                    $_SESSION['msg']['reg-success'] = 'We sent you an email with your new password!';
                }
                else $err[] = 'This username is already taken!';
            }

            if(count($err))
            {
                $_SESSION['msg']['reg-err'] = implode('<br />', $err);
            }

            header("Location: index_login3.php");
            exit;
        }

        if($_POST['submit']=='Update')
        {
            {
                mysql_query("UPDATE electrix_users(product1,serial1,product2,serial2,product3,serial3)
                    WHERE usr='{$_POST['username']}'
                    VALUES(
                        '".$_POST['product1']."',
                        '".$_POST['serial1']."',
                        '".$_POST['product2']."',
                        '".$_POST['serial2']."',
                        '".$_POST['product3']."',
                        '".$_POST['serial3']."',
                    )");

                if(mysql_affected_rows($link)==1)
                {
                    $_SESSION['msg']['upd-success'] = 'Thank you for registering your Electrix product';
                }
                else $err[] = 'So Sad!';
            }

            if(count($err))
            {
                $_SESSION['msg']['upd-err'] = implode('<br />', $err);
            }

            header("Location: index_login3.php");
            exit;
        }

        if($_SESSION['msg'])
        {
            // The script below shows the sliding panel on page load
            $script = '
            <script type="text/javascript">
                $(function(){
                    $("div#panel").show();
                    $("#toggle a").toggle();
                });
            </script>';
        }
        ?>
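    The UPDATE block above mixes INSERT and UPDATE syntax: MySQL's UPDATE statement takes a SET clause, not a column list followed by VALUES, and the trailing comma inside VALUES(...) would be a syntax error on its own. A minimal corrected sketch, kept in the question's mysql_* style (in new code, mysqli or PDO prepared statements would be the better tool):

        mysql_query("UPDATE electrix_users SET
                product1='".mysql_real_escape_string($_POST['product1'])."',
                serial1='".mysql_real_escape_string($_POST['serial1'])."',
                product2='".mysql_real_escape_string($_POST['product2'])."',
                serial2='".mysql_real_escape_string($_POST['serial2'])."',
                product3='".mysql_real_escape_string($_POST['product3'])."',
                serial3='".mysql_real_escape_string($_POST['serial3'])."'
            WHERE usr='".mysql_real_escape_string($_POST['username'])."'");

    Two smaller issues in that branch: $err is never initialized before $err[] = 'So Sad!', and mysql_affected_rows() returns 0 when the new values equal the old ones, so it is not a reliable success test for an UPDATE.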

    Read the article

  • MS Ajax Libraries and Configured Assemblies

    - by smehaffie
    Use Case
    You have brand new IIS servers that have .NET 3.5 installed and are migrating sites to them. In the process of migrating, you come across some sites that throw an error about the version of the AJAX libraries referenced in the web.config. All the entries in the web.config reference 1.0.61025.0, but that older version of the AJAX libraries is not installed on the new servers; only the latest version, which comes with .NET 3.5, is installed. So what are the options for fixing this issue?

    Solutions
    1) Install the older version of the AJAX libraries: Although this works, IMO it is never a great idea to install an older version of a library after a newer version has been installed. Plus, if all new applications use the latest version, is it worth the effort of installing the older version for a few legacy applications?
    2) Update the web.config files so all references use the latest version (3.5.0.0): This option is very time consuming and error prone. In addition, you will also have to update any pages that have a Register tag for the older libraries, which would require you to redeploy any application that has this issue.
    3) Use the Configured Assemblies capability of .NET (aka assembly bindings) to make any application that uses the older AJAX libraries use the new ones. IMO, this is the easiest, quickest, and least invasive way to fix the issue. Below are the steps to implement this fix.

    Solution #3
    Do the following steps on the IIS servers where the issue is occurring. The two assemblies that need assembly bindings created are System.Web.Extensions and System.Web.Extensions.Design.
    1) Go to Start -> All Programs -> Administrative Tools -> Microsoft .NET Framework 2.0 Configuration.
    2) Right-click on "Configured Assemblies" to view the list of configured assemblies.
    3) Left-click on the right pane to bring up the menu and choose "Add".
    4) Make sure "Choose an assembly from the assembly cache" is checked and click the "Choose Assembly" button.
    5) Choose System.Web.Extensions (it does not matter what version).
    6) Click the "Finish" button.
    7) On the Binding Policy tab:
       - Enter Requested Version = 1.0.61025.0
       - Enter New Version = 3.5.0.0
    8) Repeat steps 2-7 for the System.Web.Extensions.Design assembly.

    Note: If "Microsoft .NET Framework 2.0 Configuration" does not exist under Administrative Tools, use MMC to access it (see below).
    1) Start -> Run -> enter MMC.
    2) File -> Add/Remove Snap-in, then click the "Add" button.
    3) Choose ".NET 2.0 Configuration", then click the "Add" button and then the "Close" button.
    4) On the "Add/Remove Snap-in" window, click the "OK" button.
    5) Expand the tree on the right and start following the directions above for adding the configured assemblies.
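    For reference, the policy these steps create is equivalent to the following binding redirect, sketched here as it would appear in machine.config or an affected site's web.config (the publicKeyToken shown is the standard one for the System.Web.Extensions assemblies; verify it against your GAC before relying on it):

        <configuration>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <dependentAssembly>
                <assemblyIdentity name="System.Web.Extensions"
                                  publicKeyToken="31bf3856ad364e35" culture="neutral" />
                <bindingRedirect oldVersion="1.0.61025.0" newVersion="3.5.0.0" />
              </dependentAssembly>
              <dependentAssembly>
                <assemblyIdentity name="System.Web.Extensions.Design"
                                  publicKeyToken="31bf3856ad364e35" culture="neutral" />
                <bindingRedirect oldVersion="1.0.61025.0" newVersion="3.5.0.0" />
              </dependentAssembly>
            </assemblyBinding>
          </runtime>
        </configuration>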

    Read the article

  • Listen to Over 100,000 Radio Stations in Windows Media Center

    - by Mysticgeek
    A cool feature in Windows 7 Media Center is the ability to listen to local FM radio. But what if you don’t have a tuner card that supports a connected radio antenna? The RadioTime plugin solves the problem by allowing access to thousands of online radio stations. With the RadioTime plugin for Windows Media Center, you’ll have access to over 100,000 online radio stations from around the world. Their guide is broken down into different categories such as Talk Radio, Music Radio, Sports Radio, and more. It’s completely free, but it does require registration to save preset stations. RadioTime works with Media Center in XP, Vista, and Windows 7 (which we’re demonstrating here). When installing it for Windows 7, make sure to click the Installer link below the “Get It Now – Free” button, as that installer works best for the new OS. Installation is extremely quick and easy. Now when you open Windows 7 Media Center you’ll find it located in the Extras category on the main menu. After you launch it, you’re presented with the RadioTime guide, where you can browse through the different categories of stations. You’re shown various station suggestions each time you start it up. The main categories are broken down further so you can find the right genre of music you’re looking for.   World Radio offers you stations from all over the world, categorized into different regions. RadioTime does support local stations via an FM tuner, but if you don’t have one, you can still access local stations provided they broadcast online. One thing about listening to your local stations online is that the audio quality may not be as good as if you had a tuner connected. It provides information on most of the online stations. For example, here we look at Minnesota Public Radio and get a schedule of when certain programs are on, and then even more information about the topics on the shows. To use the Presets option you’ll need to log into your RadioTime account, or if you don’t have one, just click on the link to create a free one.   Creating a free account on their site is quick and simple. You aren’t required to have an account to use the RadioTime plugin; it’s only needed if you want the additional benefits. Conclusion: For this article we only tried it with Windows 7 Media Center, and sometimes the interface felt clunky when moving quickly through menus. Also, there isn’t a search feature from within Media Center; however, you can search stations from their site and add them to your presets. Despite a few shortcomings, this is a very cool way to get access to thousands of online radio stations through Windows Media Center. If you’re looking for a way to access thousands of radio stations through WMC, you might want to give RadioTime a try. 
Download RadioTime for Windows Media Center

    Read the article

  • What is the Oracle Utilities Application Framework?

    - by Anthony Shorten
    The Oracle Utilities Application Framework is a reusable, scalable, and flexible Java-based framework which allows other products to be built, configured, and implemented in a standard way. Note: Even though the Framework is built in Java, it can be integrated with COBOL-based extensions for backward compatibility. When Oracle Utilities Customer Care & Billing was migrated from V1 to V2, it was decided that the technical aspects of that product be separated to allow for reuse and independence from technical issues. The idea was that all the technical aspects would be concentrated in this separate product (i.e., a framework), allowing all products using the framework to concentrate on delivering superior functionality. The product was named the Oracle Utilities Application Framework (oufw is the product code). The technical components contained in the Oracle Utilities Application Framework can be summarized as follows:
    - Metadata - The Oracle Utilities Application Framework is responsible for defining and using the metadata to define the runtime behavior of the product. All the metadata definition and management is contained within the Oracle Utilities Application Framework.
    - UI Management - The Oracle Utilities Application Framework is responsible for defining and rendering the pages and for ensuring the pages are in the appropriate format for the locale.
    - Integration - The Oracle Utilities Application Framework is responsible for providing the integration points to the architecture. Refer to the Oracle Utilities Application Framework Integration Overview for more details.
    - Tools - The Oracle Utilities Application Framework provides a common set of facilities and tools that can be used across all products.
    - Technology - The Oracle Utilities Application Framework is responsible for all technology standards compliance, platform support, and integration.
    There are a number of products from the Tax and Utilities Global Business Unit as well as from the Financial Services Global Business Unit that are built upon the Oracle Utilities Application Framework. These products require the Oracle Utilities Application Framework to be installed first, with the product itself then installed onto the framework to complete the installation process. The Oracle Utilities Application Framework provides a number of key benefits to these products:
    - Common facilities - The Oracle Utilities Application Framework provides a standard set of technical facilities, meaning that products can concentrate on the unique aspects of their markets rather than making technical decisions.
    - Common methods of configuration - The Oracle Utilities Application Framework standardizes the technical configuration process for a product. Customers can effectively reuse the configuration process across products.
    - Multi-lingual and multi-platform - The Oracle Utilities Application Framework allows the products to be offered in more markets and across multiple platforms for maximized flexibility.
    - Common methods of implementation - The Oracle Utilities Application Framework standardizes the technical aspects of a product implementation. Customers can effectively reuse the technical implementation process across products.
    - Quicker adoption of new technologies - As new technologies and standards are identified as being important for the product line, they can be integrated centrally, benefiting multiple products.
    - Cross-product reuse - As enhancements to the Oracle Utilities Application Framework are identified by a particular product, all products can potentially benefit from the enhancement.
    Note: Use of the Oracle Utilities Application Framework does not preclude the introduction of product-specific technologies or facilities to satisfy market needs. The framework minimizes the need for, and assists in the quick integration of, a new product-specific piece of technology (if necessary). The Framework is not available as a product itself and is bundled with Tax and Utilities Global Business Unit products. At the present time the following products are on the Framework:
    - Oracle Utilities Customer Care and Billing (V2 and above)
    - Oracle Enterprise Taxation Management (V2 and above)
    - Oracle Utilities Business Intelligence (V2 and above)
    - Oracle Utilities Mobile Workforce Management (V2 and above)

    Read the article

  • Active Directory and Apple's Workgroup Manager

    - by qbn
    I thought I'd share my experiences here. I work for a small business with only ~20 users. I wanted the ability to use managed client preferences to assign things like the software update server. Basically the ability to manage my Macs easily and in a native way. At first I tried the magic triangle solution, but I found this to be very complicated. Not only does it require a Mac OS X Server, but it gives you two points of failure. Additionally each Mac workstation must be bound to both servers. Eventually I sucked it up and went with the schema changes documented here. I was hesitant at first, because the instructions require a lot of manual work. However it was fairly basic and only took me about an hour and a half. Below you'll find the schema changes file that was a result of my work. I followed the instructions exactly and double checked everything, after six months of having this in place things have been running great. Too good to not share. I hope I save someone a couple of hours. # ================================================================== # # This file should be imported with the following command: # ldifde -i -u -f Apple AD Schema Changes.ldf -s server:port -b username domain password -j . -c "cn=Configuration,dc=X" #configurationNamingContext # LDIFDE.EXE from AD/AM V1.0 or above must be used. # This LDIF file should be imported into AD or AD/AM. It may not work for other directories. # # ================================================================== # ================================================================== # Attributes # ================================================================== # Attribute: apple-category dn: cn=apple-category,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.10.4 ldapDisplayName: apple-category attributeSyntax: 2.5.5.12 adminDescription: Category for the computer or neighborhood oMSyntax: 64 systemOnly: FALSE # Attribute: apple-computeralias dn: cn=apple-computeralias,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.20.3 ldapDisplayName: apple-computeralias attributeSyntax: 2.5.5.12 adminDescription: XML plist referring to a computer record oMSyntax: 64 systemOnly: FALSE # Attribute: apple-computer-list-groups dn: cn=apple-computer-list-groups,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.11.4 ldapDisplayName: apple-computer-list-groups attributeSyntax: 2.5.5.12 adminDescription: groups oMSyntax: 64 systemOnly: FALSE # Attribute: apple-computers dn: cn=apple-computers,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.11.3 ldapDisplayName: apple-computers attributeSyntax: 2.5.5.12 adminDescription: computers oMSyntax: 64 systemOnly: FALSE # Attribute: apple-data-stamp dn: cn=apple-data-stamp,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.12.2 ldapDisplayName: apple-data-stamp attributeSyntax: 2.5.5.5 adminDescription: data stamp oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-dns-domain dn: cn=apple-dns-domain,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.18.1 ldapDisplayName: apple-dns-domain attributeSyntax: 2.5.5.12 adminDescription: DNS 
domain oMSyntax: 64 systemOnly: FALSE # Attribute: apple-dnsname dn: cn=apple-dnsname,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.19.4 ldapDisplayName: apple-dnsname attributeSyntax: 2.5.5.12 adminDescription: DNS name oMSyntax: 64 systemOnly: FALSE # Attribute: apple-dns-nameserver dn: cn=apple-dns-nameserver,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.18.2 ldapDisplayName: apple-dns-nameserver attributeSyntax: 2.5.5.12 adminDescription: DNS name server list oMSyntax: 64 systemOnly: FALSE # Attribute: apple-group-homeowner dn: cn=apple-group-homeowner,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.14.2 ldapDisplayName: apple-group-homeowner attributeSyntax: 2.5.5.5 adminDescription: group home owner settings oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-group-homeurl dn: cn=apple-group-homeurl,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.14.1 ldapDisplayName: apple-group-homeurl attributeSyntax: 2.5.5.5 adminDescription: group home url oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-imhandle dn: cn=apple-imhandle,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.21 ldapDisplayName: apple-imhandle attributeSyntax: 2.5.5.12 adminDescription: IM handle (service:account name) oMSyntax: 64 systemOnly: FALSE # Attribute: apple-keyword dn: cn=apple-keyword,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.19 ldapDisplayName: apple-keyword attributeSyntax: 2.5.5.12 adminDescription: keywords oMSyntax: 64 systemOnly: FALSE # Attribute: apple-mcxflags dn: cn=apple-mcxflags,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.10 ldapDisplayName: apple-mcxflags attributeSyntax: 2.5.5.12 adminDescription: mcx flags oMSyntax: 64 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-mcxsettings dn: cn=apple-mcxsettings,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.16 ldapDisplayName: apple-mcxsettings attributeSyntax: 2.5.5.12 adminDescription: mcx settings oMSyntax: 64 systemOnly: FALSE # Attribute: apple-neighborhoodalias dn: cn=apple-neighborhoodalias,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.20.2 ldapDisplayName: apple-neighborhoodalias attributeSyntax: 2.5.5.12 adminDescription: XML plist referring to another neighborhood record oMSyntax: 64 systemOnly: FALSE # Attribute: apple-networkview dn: cn=apple-networkview,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.10.3 ldapDisplayName: apple-networkview attributeSyntax: 2.5.5.12 adminDescription: Network view for the computer oMSyntax: 64 systemOnly: FALSE # Attribute: apple-nodepathxml dn: cn=apple-nodepathxml,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.20.1 ldapDisplayName: apple-nodepathxml attributeSyntax: 2.5.5.12 adminDescription: 
XML plist of directory node path oMSyntax: 64 systemOnly: FALSE # Attribute: apple-service-location dn: cn=apple-service-location,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.19.5 ldapDisplayName: apple-service-location attributeSyntax: 2.5.5.12 adminDescription: Service location oMSyntax: 64 systemOnly: FALSE # Attribute: apple-service-port dn: cn=apple-service-port,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.19.3 ldapDisplayName: apple-service-port attributeSyntax: 2.5.5.9 adminDescription: Service port number oMSyntax: 2 systemOnly: FALSE # Attribute: apple-service-type dn: cn=apple-service-type,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.19.1 ldapDisplayName: apple-service-type attributeSyntax: 2.5.5.5 adminDescription: type of service oMSyntax: 22 systemOnly: FALSE # Attribute: apple-service-url dn: cn=apple-service-url,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.19.2 ldapDisplayName: apple-service-url attributeSyntax: 2.5.5.5 adminDescription: URL of service oMSyntax: 22 systemOnly: FALSE # Attribute: apple-user-authenticationhint dn: cn=apple-user-authenticationhint,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.15 ldapDisplayName: apple-user-authenticationhint attributeSyntax: 2.5.5.12 adminDescription: password hint oMSyntax: 64 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-user-class dn: cn=apple-user-class,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.7 ldapDisplayName: apple-user-class attributeSyntax: 2.5.5.5 adminDescription: user class oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-user-homequota dn: cn=apple-user-homequota,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.8 ldapDisplayName: apple-user-homequota attributeSyntax: 2.5.5.5 adminDescription: home directory quota oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-user-homesoftquota dn: cn=apple-user-homesoftquota,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.17 ldapDisplayName: apple-user-homesoftquota attributeSyntax: 2.5.5.5 adminDescription: home directory soft quota oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-user-homeurl dn: cn=apple-user-homeurl,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.6 ldapDisplayName: apple-user-homeurl attributeSyntax: 2.5.5.5 adminDescription: home directory URL oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-user-mailattribute dn: cn=apple-user-mailattribute,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.9 ldapDisplayName: apple-user-mailattribute attributeSyntax: 2.5.5.12 adminDescription: mail attribute oMSyntax: 64 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-user-picture dn: cn=apple-user-picture,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd 
objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.12 ldapDisplayName: apple-user-picture attributeSyntax: 2.5.5.12 adminDescription: picture oMSyntax: 64 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-user-printattribute dn: cn=apple-user-printattribute,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.13 ldapDisplayName: apple-user-printattribute attributeSyntax: 2.5.5.12 adminDescription: print attribute oMSyntax: 64 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-webloguri dn: cn=apple-webloguri,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.22 ldapDisplayName: apple-webloguri attributeSyntax: 2.5.5.12 adminDescription: Weblog URI oMSyntax: 64 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-xmlplist dn: cn=apple-xmlplist,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.17.1 ldapDisplayName: apple-xmlplist attributeSyntax: 2.5.5.12 adminDescription: XML plist data oMSyntax: 64 isSingleValued: TRUE systemOnly: FALSE # Attribute: ipHostNumber dn: cn=ipHostNumber,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.1.1.1.19 ldapDisplayName: ipHostNumber attributeSyntax: 2.5.5.5 adminDescription: IP address oMSyntax: 22 systemOnly: FALSE rangeUpper: 128 # Attribute: macAddress dn: cn=macAddress,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.1.1.1.22 ldapDisplayName: macAddress attributeSyntax: 2.5.5.5 adminDescription: MAC address oMSyntax: 22 systemOnly: FALSE rangeUpper: 128 # Attribute: mountDirectory dn: cn=apple-mountDirectory,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.8.1 ldapDisplayName: mountDirectory attributeSyntax: 2.5.5.12 adminDescription: mount path oMSyntax: 64 isSingleValued: TRUE systemOnly: FALSE # Attribute: mountDumpFrequency dn: cn=apple-mountDumpFrequency,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.8.4 ldapDisplayName: mountDumpFrequency attributeSyntax: 2.5.5.5 adminDescription: mount dump frequency oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: mountOption dn: cn=apple-mountOption,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.8.3 ldapDisplayName: mountOption attributeSyntax: 2.5.5.5 adminDescription: mount options oMSyntax: 22 systemOnly: FALSE # Attribute: mountPassNo dn: cn=apple-mountPassNo,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.8.5 ldapDisplayName: mountPassNo attributeSyntax: 2.5.5.5 adminDescription: mount passno oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: mountType dn: cn=apple-mountType,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.8.2 ldapDisplayName: mountType attributeSyntax: 2.5.5.5 adminDescription: mount VFS type oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: ttl dn: cn=ttl,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.250.1.60 
ldapDisplayName: ttl attributeSyntax: 2.5.5.9 oMSyntax: 2 isSingleValued: TRUE systemOnly: FALSE dn: changetype: modify add: schemaUpdateNow schemaUpdateNow: 1 - # ================================================================== # Classes # ================================================================== # Class: apple-computer dn: cn=apple-computer,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.10 ldapDisplayName: apple-computer adminDescription: computer objectClassCategory: 3 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: apple-category mayContain: 1.3.6.1.4.1.63.1000.1.1.1.10.4 # mayContain: apple-computer-list-groups mayContain: 1.3.6.1.4.1.63.1000.1.1.1.11.4 # mayContain: apple-keyword mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.19 # mayContain: apple-mcxflags mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.10 # mayContain: apple-mcxsettings mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.16 # mayContain: apple-networkview mayContain: 1.3.6.1.4.1.63.1000.1.1.1.10.3 # mayContain: apple-service-url mayContain: 1.3.6.1.4.1.63.1000.1.1.1.19.2 # mayContain: apple-xmlplist mayContain: 1.3.6.1.4.1.63.1000.1.1.1.17.1 # mayContain: macAddress mayContain: 1.3.6.1.1.1.1.22 # mayContain: ttl mayContain: 1.3.6.1.4.1.250.1.60 # Class: apple-computer-list dn: cn=apple-computer-list,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.11 ldapDisplayName: apple-computer-list adminDescription: computer list objectClassCategory: 1 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: apple-computer-list-groups mayContain: 1.3.6.1.4.1.63.1000.1.1.1.11.4 # mayContain: apple-computers mayContain: 1.3.6.1.4.1.63.1000.1.1.1.11.3 # mayContain: apple-keyword mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.19 # mayContain: apple-mcxflags mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.10 # mayContain: apple-mcxsettings mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.16 possSuperiors: organizationalUnit possSuperiors: container # Class: apple-configuration dn: cn=apple-configuration,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.12 ldapDisplayName: apple-configuration adminDescription: configuration objectClassCategory: 3 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: apple-data-stamp mayContain: 1.3.6.1.4.1.63.1000.1.1.1.12.2 # mayContain: apple-keyword mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.19 # mayContain: apple-xmlplist mayContain: 1.3.6.1.4.1.63.1000.1.1.1.17.1 # mayContain: ttl mayContain: 1.3.6.1.4.1.250.1.60 possSuperiors: organizationalUnit possSuperiors: container # Class: apple-group dn: cn=apple-group,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.14 ldapDisplayName: apple-group adminDescription: group account objectClassCategory: 3 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: apple-group-homeowner mayContain: 1.3.6.1.4.1.63.1000.1.1.1.14.2 # mayContain: apple-group-homeurl mayContain: 1.3.6.1.4.1.63.1000.1.1.1.14.1 # mayContain: apple-keyword mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.19 # mayContain: apple-mcxflags mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.10 # mayContain: apple-mcxsettings mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.16 # mayContain: apple-user-picture mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.12 # 
mayContain: ttl mayContain: 1.3.6.1.4.1.250.1.60 # Class: apple-location dn: cn=apple-location,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.18 ldapDisplayName: apple-location objectClassCategory: 1 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: apple-dns-domain mayContain: 1.3.6.1.4.1.63.1000.1.1.1.18.1 # mayContain: apple-dns-nameserver mayContain: 1.3.6.1.4.1.63.1000.1.1.1.18.2 possSuperiors: organizationalUnit possSuperiors: container # Class: apple-neighborhood dn: cn=apple-neighborhood,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.20 ldapDisplayName: apple-neighborhood objectClassCategory: 1 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: apple-category mayContain: 1.3.6.1.4.1.63.1000.1.1.1.10.4 # mayContain: apple-computeralias mayContain: 1.3.6.1.4.1.63.1000.1.1.1.20.3 # mayContain: apple-keyword mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.19 # mayContain: apple-neighborhoodalias mayContain: 1.3.6.1.4.1.63.1000.1.1.1.20.2 # mayContain: apple-nodepathxml mayContain: 1.3.6.1.4.1.63.1000.1.1.1.20.1 # mayContain: apple-xmlplist mayContain: 1.3.6.1.4.1.63.1000.1.1.1.17.1 # mayContain: ttl mayContain: 1.3.6.1.4.1.250.1.60 possSuperiors: 2.5.6.5 possSuperiors: container # Class: apple-serverassistant-config dn: cn=apple-serverassistant-config,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.17 ldapDisplayName: apple-serverassistant-config objectClassCategory: 1 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: apple-xmlplist mayContain: 1.3.6.1.4.1.63.1000.1.1.1.17.1 possSuperiors: organizationalUnit possSuperiors: container # Class: apple-service dn: cn=apple-service,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.19 ldapDisplayName: apple-service objectClassCategory: 1 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mustContain: apple-service-type mustContain: 1.3.6.1.4.1.63.1000.1.1.1.19.1 # mayContain: apple-dnsname mayContain: 1.3.6.1.4.1.63.1000.1.1.1.19.4 # mayContain: apple-keyword mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.19 # mayContain: apple-service-location mayContain: 1.3.6.1.4.1.63.1000.1.1.1.19.5 # mayContain: apple-service-port mayContain: 1.3.6.1.4.1.63.1000.1.1.1.19.3 # mayContain: apple-service-url mayContain: 1.3.6.1.4.1.63.1000.1.1.1.19.2 # mayContain: ipHostNumber mayContain: 1.3.6.1.1.1.1.19 possSuperiors: organizationalUnit possSuperiors: container # Class: apple-user dn: cn=apple-user,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.1 ldapDisplayName: apple-user adminDescription: apple user account objectClassCategory: 3 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: apple-imhandle mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.21 # mayContain: apple-keyword mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.19 # mayContain: apple-mcxflags mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.10 # mayContain: apple-mcxsettings mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.16 # mayContain: apple-user-authenticationhint mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.15 # mayContain: apple-user-class mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.7 # mayContain: apple-user-homequota mayContain: 
1.3.6.1.4.1.63.1000.1.1.1.1.8 # mayContain: apple-user-homesoftquota mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.17 # mayContain: apple-user-homeurl mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.6 # mayContain: apple-user-mailattribute mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.9 # mayContain: apple-user-picture mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.12 # mayContain: apple-user-printattribute mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.13 # mayContain: apple-webloguri mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.22 # Class: mount dn: cn=apple-mount,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.8 ldapDisplayName: mount objectClassCategory: 1 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: mountDirectory mayContain: 1.3.6.1.4.1.63.1000.1.1.1.8.1 # mayContain: mountDumpFrequency mayContain: 1.3.6.1.4.1.63.1000.1.1.1.8.4 # mayContain: mountOption mayContain: 1.3.6.1.4.1.63.1000.1.1.1.8.3 # mayContain: mountPassNo mayContain: 1.3.6.1.4.1.63.1000.1.1.1.8.5 # mayContain: mountType mayContain: 1.3.6.1.4.1.63.1000.1.1.1.8.2 possSuperiors: 2.5.6.5 possSuperiors: container dn: changetype: modify add: schemaUpdateNow schemaUpdateNow: 1 - # ================================================================== # Updating present elements # ================================================================== # Add the new class to the user object dn: CN=User,CN=Schema,CN=Configuration,DC=X changetype: modify add: auxiliaryClass auxiliaryClass: apple-user - # Add the new class to the computer object dn: CN=Computer,CN=Schema,CN=Configuration,DC=X changetype: modify add: auxiliaryClass auxiliaryClass: apple-computer - # Add the new class to the group object dn: CN=Group,CN=Schema,CN=Configuration,DC=X changetype: modify add: auxiliaryClass auxiliaryClass: apple-group - # Add the new class to the configuration object dn: CN=Configuration,CN=Schema,CN=Configuration,DC=X changetype: modify add: auxiliaryClass auxiliaryClass: apple-configuration -
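    A note on running the import: since the file name above contains spaces, it must be quoted on the command line. A typical invocation against a domain controller might look like the following (the server, account, domain, and password values are placeholders; the -c option substitutes your real configuration naming context for dc=X, as the header comment describes):

        ldifde -i -u -f "Apple AD Schema Changes.ldf" -s dc01.example.com:389 -b adminuser example.com Passw0rd -j . -c "cn=Configuration,dc=X" #configurationNamingContext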

    Read the article

  • Designing an email system to guarantee delivery

    - by GlenH7
    We are looking to expand our use of email for notification purposes. We understand it will generate more inbox volume, but we are being selective about which events we fire notifications on in order to keep the signal-to-noise ratio high. The big question we are struggling with is designing a system that guarantees the email was delivered. If an email isn't delivered, we will consider that an exception event that needs to be investigated. In reality, I should say "almost guarantees," because there aren't any true guarantees with email. We're just looking for a practical solution for making sure the email got there, and for experiences others have had with the various approaches to guaranteeing delivery. For the TL;DR crowd: how do we go about designing a system to guarantee delivery of emails? What techniques should we consider so we know the emails were delivered? Our biggest area of concern is what techniques to use so that when a message is sent out, we know it either landed in an inbox or it failed and we need to do something else.
    Additional requirements:
    - We're not at the stage of including an escalation response, but we'll want that in the future, or so we think.
    - Most notifications will be internal to our enterprise, but we will have some notifications being sent to external clients.
    - Some of our application is in a hosted environment. We haven't determined whether those servers can access our corporate email servers for relaying or whether they'll be acting as their own mail servers.
    Base design / modules (at the moment):
    - A module to assign tracking identification
    - A module to send out emails
    - A module to receive delivery notifications (perhaps this is the same as the email module)
    - A module that checks sent messages against delivery notifications and alerts on undelivered email (see the sketch below)
    Some references:
    - Atwood: Send some email
    - Email Tracking
    Some approaches:
    - Request a response (aka read receipt or Message Disposition Notification). Seems prone to failure since we have cross-compatibility issues due to differing mail servers and software.
    - Return receipt (aka Delivery Status Notification). Not sure if all mail servers honor this request or not.
    - Require an action, thereby proving receipt. Seems burdensome to force the recipients to perform an additional task not related to resolving the issue. And no, we haven't come up with a way of linking getting the issue fixed to whether or not the email was received.
    - Force a click-through / other-site sign-in. Similar to requiring some sort of action, this seems like an additional burden and will annoy the users. On the other hand, it seems the most likely to guarantee someone received the notification.
    - Hidden image tracking. Not all email providers automatically load the image, and how would we associate the image(s) with the email tracking ID?
    - Outsource delivery. This gets us out of the email business, but goes back to how to guarantee the outsourcer's receipt and subsequent delivery to the end recipient.
    As a related concern, there will be an n:n relationship between issue notifications and recipients. The 1 issue : n recipients subset isn't as much of a concern, although if we had a delivery failure we would want to investigate and fix the core issue. Of bigger concern is n issues : 1 recipient; we're specifically concerned with making sure that all n issues were received by the recipient. How does forum software or issue-tracking software handle this requirement? If a tracking identifier is used, where is it placed in the email? In the Subject, or the Body?
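    One practical shape for the reconciliation module mentioned above, sketched in PHP: assign an opaque tracking ID at send time (a custom header such as X-Tracking-Id, or a VERP-style Return-Path address, survives forwarding and replies better than anything in the Subject or Body), record bounces and DSNs against that ID, and periodically flag anything with neither a delivery receipt nor a bounce. The table and column names here are assumptions, not an existing schema:

        <?php
        // Record an outbound message under a fresh tracking ID.
        function assign_tracking_id(PDO $db, $recipient, $issueId) {
            $trackingId = bin2hex(openssl_random_pseudo_bytes(16)); // opaque, unguessable
            $stmt = $db->prepare(
                'INSERT INTO sent_mail (tracking_id, recipient, issue_id, sent_at)
                 VALUES (?, ?, ?, NOW())');
            $stmt->execute(array($trackingId, $recipient, $issueId));
            return $trackingId; // embed as an X-Tracking-Id header on the outgoing mail
        }

        // Flag messages with no DSN/bounce recorded within the timeout window.
        function find_undelivered(PDO $db, $timeoutMinutes = 60) {
            $stmt = $db->prepare(
                'SELECT s.tracking_id, s.recipient, s.issue_id
                   FROM sent_mail s
              LEFT JOIN delivery_receipts r ON r.tracking_id = s.tracking_id
                  WHERE r.tracking_id IS NULL
                    AND s.sent_at < NOW() - INTERVAL ? MINUTE');
            $stmt->execute(array($timeoutMinutes));
            return $stmt->fetchAll(PDO::FETCH_ASSOC); // feed these to the alerting module
        }

    Because tracking is per message rather than per issue, the n issues : 1 recipient case falls out naturally: each notification gets its own tracking row and is reconciled independently.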

    Read the article

  • Performance and Optimization Isn’t Evil

    - by Reed
    Donald Knuth is a fairly amazing guy.  I consider him one of the most influential contributors to computer science of all time.  Unfortunately, most of the time I hear his name, I cringe.  This is because it’s typically somebody quoting a small portion of one of his famous statements on optimization: “premature optimization is the root of all evil.”

    I mention that this is only a portion of the entire quote, and, as such, I feel that Knuth is being quoted out of context.  Optimization is important.  It is a critical part of every software development effort, and should never be ignored.  A developer who ignores optimization is not a professional.  Every developer should understand optimization: know what to optimize, when to optimize it, and how to think about code in a way that is intelligent and productive from day one.

    I want to start by discussing my own, personal motivation here.  I recently wrote about a performance issue I ran across, and was slammed by multiple comments and emails that effectively boiled down to: “You’re an idiot.  Premature optimization is the root of all evil.  This doesn’t matter.”  It didn’t matter that I discovered this while measuring in a profiler, and that it was a portion of my code base that can take “many hours to complete.”  Even so, multiple people instantly jumped to “it’s premature – it doesn’t matter.”  This is a common thread I see.  For example, StackOverflow has many pages of posts with answers that boil down to (mis)quoting Knuth.  In fact, just about any question relating to a performance issue gets this quote thrown at it immediately – whether it deserves it or not.  That being said, I did receive some positive comments and emails as well.  Many people want to understand how to optimize their code, the approaches to take, the tools and techniques they can use, and any other advice they can discover.

    First, let’s get back to Knuth – I mentioned before that Knuth is being quoted out of context.  Let’s start by looking at the entire quote from his 1974 paper Structured Programming with go to Statements: “We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.”

    Ironically, if you read Knuth’s original paper, this statement was made in the middle of a discussion of how Knuth himself had changed how he approaches optimization.  It was never a statement saying “don’t optimize,” but rather, “optimizing intelligently provides huge advantages.”  His approach had three benefits: “a) it doesn’t take long” … “b) the payoff is real”, and c) you can “be less efficient in the other parts of my programs, which therefore are more readable and more easily written and debugged.”

    Looking at Knuth’s premise here, and reading that section of his paper, really leads to a few observations:
    - Optimization is important: “he will be wise to look carefully at the critical code”
    - Normally, 3% of your code – three lines out of every 100 you write – are “critical code” and will require some optimization: “we should not pass up our opportunities in that critical 3%”
    - Optimization, if done well, should not be time consuming: “it doesn’t take long”
    - Optimization, if done correctly, provides real benefits: “the payoff is real”

    None of this is new information.  People who care about optimization have been discussing this for years – for example, Rico Mariani’s Designing For Performance (a fantastic article) discusses many of the same issues very intelligently.  That being said, many developers seem unable or unwilling to consider optimization.  Many others don’t seem to know where to start.  As such, I’m going to spend some time writing about optimization – what it is, how we should think about it, and what we can do to improve our own code.
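    In the spirit of “look carefully at the critical code... but only after that code has been identified,” here is a minimal measure-first sketch (PHP, for consistency with the rest of this page; process_record() and $records are hypothetical stand-ins for a suspected hot path):

        <?php
        // Time the candidate hot spot in isolation before touching it.
        $start = microtime(true);
        for ($i = 0; $i < 10000; $i++) {
            process_record($records[$i % count($records)]); // hypothetical hot path
        }
        printf("10k iterations: %.3f s\n", microtime(true) - $start);
        // Only if a profiler (or a harness like this) shows this code inside
        // the critical 3% is it worth the optimization effort.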

    Read the article

  • How To Apply Online For New Passport Or Renewal Of Your Passport [Indian Websites]

    - by Gopinath
    Are you tired of wasting time and energy standing in lengthy queues at passport offices in India to apply for a new passport or renew one? The Indian Government Passport Office has an online portal that lets you apply for a new passport or renew your expiring passport by filling in the details online. By filling in the details online you can complete half of the required formalities sitting at home, and do the rest of the tasks, like submitting the required proofs and paying the fees, at your regional passport office. This saves a lot of time.

    Advantages of Applying For Passport Online
    Ask anyone who obtained a passport by visiting the passport office, and they will narrate stories of spending a long time in queues. At certain offices, the queues may require you to stand for 3 to 4 hours. And sometimes, by the time your turn comes, the officers may break for lunch, coffee, or the day if your luck is bad. The main advantage of applying for a passport using this online portal is that we can skip standing in long queues to obtain tokens, and we get a pre-booked appointment with the passport issuing officer for submitting the proofs and paying the fees. When you submit the application online, an appointment is booked automatically for submitting the required documents and fees, so you can just walk in to the passport office 15 minutes ahead of your appointment.

    List Of Passport Offices Accepting Online Application Forms
    I know that you are excited and all set to apply online, but hold on. Online passport application submission is supported in 37 regional passport offices across India as I write this post. Only if you are residing in one of these cities can you apply online: Ahmedabad, Amritsar, Bareilly, Bhopal, Bhubaneswar, Chennai, Cochin, Coimbatore, Dehradun, Delhi, Ghaziabad, Guwahati, Hyderabad, Jaipur, Jalandhar, Jammu, Kolkata, Kozhikode, Lucknow, Madurai, Malappuram, Mumbai, Nagpur, Panaji, Patna, Pune, Raipur, Ranchi, Shimla, Srinagar, Surat, Thane, Trichy, Trivandrum, Visakhapatnam. Others should approach the passport office directly. The government is trying to expand this to other locations, so please check whether your location accepts online registration by visiting the registration page (link given below).

    Types Of Applications Accepted Online
    The online system accepts the following types of passport applications:
    - Fresh Passport / Renewal
    - New Passport in lieu of Damaged/Lost Passport
    - Passport for Children up to 15 Years of Age
    - Re-issue of Passport / Additional Booklet

    Indian Govt. Passport Office Website And Online Application URL
    To apply for a passport online, visit the URL https://passport.gov.in/pms/Information.jsp using the Internet Explorer browser. The site may not work in Firefox, Chrome, or other browsers, as it requests users to use Internet Explorer. Here are a few other links that will give you more details on the passport application:
    - Govt. Of India Passport Office Website
    - Passport Application Fee Structure Information
    - Passport Application Filling Guidelines
    - Passport Application Check List

    URL For NRIs To Apply Online
    If you are an NRI, the above links and the list of supported passport offices are not for you. NRIs should use the URL http://passport.gov.in/nri/OnlineRegistration.jsp for applying for passport-related services online. For more details you can visit the special NRI section on the Passport website.

    Read the article

  • PHP PSR-0 + several namespaces in one file and autoload

    - by Nemoden
    I've been thinking for a while about defining several namespaces in one PHP file, and so having several classes inside this file. Suppose I want to implement something like Doctrine\ORM\Query\Expr:

        Expr.php
        Expr
        |-- Andx.php
        |-- Base.php
        |-- Comparison.php
        |-- Composite.php
        |-- From.php
        |-- Func.php
        |-- GroupBy.php
        |-- Join.php
        |-- Literal.php
        |-- Math.php
        |-- OrderBy.php
        |-- Orx.php
        `-- Select.php

    It would be nice if I had all of this in one file - Expr.php:

        namespace Doctrine\ORM\Query;
        class Expr
        {
            // code
        }

        namespace Doctrine\ORM\Query\Expr;
        class Func
        {
            // code
        }
        // etc...

    What I'm thinking of is a directory naming convention and, unlike PSR-0, having several classes and namespaces in one file. It's best explained by the code:

        ls Doctrine/orm/query
        Expr.php

    that's it - only Expr.php. Since Expr.php is somewhat of a "meta-namespace" for Expr\Func, it makes sense to place all the classes inside Expr.php (as shown above). So, the vendor name still starts with an uppercased letter (Doctrine) and the other parts of the namespace start with a lowercased letter. We can write an autoloader that respects this notion:

        function load_class($class)
        {
            if (class_exists($class)) {
                return true;
            }

            // split on both "_" and "\" (explode() does not accept an array of
            // delimiters, so normalize them to DIRECTORY_SEPARATOR first)
            $tokenized_path = explode(DIRECTORY_SEPARATOR,
                str_replace(array('_', '\\'), DIRECTORY_SEPARATOR, $class));
            // array('Doctrine', 'orm', 'query', 'Expr', 'Func');
            //                                    ^^^^
            // first, we are looking for the first uppercased namespace part after
            // the vendor, and if it's not last (not the class name), we use it as
            // a filename and wipe away the rest to compose a path to include
            // (find_meta_class() is a helper along those lines, omitted here)
            if (FALSE !== ($meta_class_index = find_meta_class($tokenized_path))) {
                // slice length is index + 1 so the meta class file itself is kept
                $new_tokenized_path = array_slice($tokenized_path, 0, $meta_class_index + 1);
                $path_to_class = implode(DIRECTORY_SEPARATOR, $new_tokenized_path);
            } else {
                // no meta class found
                $path_to_class = implode(DIRECTORY_SEPARATOR, $tokenized_path);
            }

            if (file_exists($path_to_class.'.php')) {
                require_once $path_to_class.'.php';
                return class_exists($class);
            }
            return false;
        }

    Another reason to do so is to reduce the number of PHP files scattered among directories. Usually you check file existence before you require a file, to fail gracefully:

        file_exists($path_to_class.'.php');

    If you take a look at the actual Doctrine\ORM\Query\Expr code, you'll see they use all of the "inner classes", so you actually do:

        file_exists("/path/to/Doctrine/ORM/Query/Expr.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/AndX.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Base.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Comparison.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Composite.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/From.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Func.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/GroupBy.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Join.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Literal.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Math.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/OrderBy.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Orx.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Select.php");

    in your autoloader, which causes quite a few I/O reads. Isn't it too much to check on each user's hit? I'm just putting this up for discussion. I want to hear from other PHP programmers what they think of it. And, of course, if you have a silver bullet addressing the problems I've described here, please share.
    I have also been wondering whether my vague question fits here, and according to the FAQ it seems like this question addresses a "software architecture" problem slash proposal. I'm sorry if my scribble may seem a bit clunky :) Thanks.
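    For what it's worth, a sketch of wiring the autoloader above into a request (assuming the question's hypothetical load_class() and one-file Expr.php layout are in place):

        spl_autoload_register('load_class');
        // One stat + one include pulls in Expr and every Expr\* class at once:
        $func = new \Doctrine\ORM\Query\Expr\Func('ABS', array('u.id'));

    Registered this way, the single Expr.php include satisfies every subsequent Expr\* reference via the class_exists() short-circuit, which is exactly the I/O saving being proposed.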

    Read the article

  • Romanian partner Omnilogic Delivers “No Limits” Scalability, Performance, Security, and Affordability through Next-Generation, Enterprise-Grade Engineered Systems

    - by swalker
    Omnilogic SRL is a leading technology and information systems provider in Romania and central and eastern Europe. An Oracle Value-Added Distributor Partner, Omnilogic resells Oracle software, hardware, and engineered systems to Oracle Partner Network members and provides specialized training, support, and testing facilities. Independent software vendors (ISVs) also use Omnilogic’s demonstration and testing facilities to upgrade the performance and efficiency of their solutions and those of their customers by migrating them from competitor technologies to Oracle platforms. Omnilogic also has a dedicated offering for ISV solutions, based on Oracle technology in a hosting service provider model. Omnilogic wanted to help Oracle Partners and ISVs migrate solutions to Oracle Exadata and sell Oracle Exadata to end customers. It installed Oracle Exadata Database Machine X2-2 Quarter Rack at its data center to create a demonstration and testing environment. Demonstrations proved that Oracle Exadata achieved processing speeds up to 100 times faster than competitor systems, cut typical back-up times from 6 hours to 20 minutes, and stored 10 times more data. Oracle Partners and ISVs learned that migrating solutions to Oracle Exadata’s preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, without business disruption, and with reduced ongoing operating costs.

    A word from Omnilogic: “Oracle Exadata is the new killer application—the smartest solution on the market. There is no competition.” – Sorin Dragomir, Chief Operating Officer, Omnilogic SRL

    Challenges
    - Enable Oracle Partners in Romania and central and eastern Europe to achieve Oracle Exadata Ready status by providing facilities to test and optimize existing applications and build real-life proofs of concept (POCs) for new solutions on Oracle Exadata Database Machine
    - Provide technical support and demonstration facilities for ISVs migrating their customers’ solutions from competitor technologies to Oracle Exadata to maximize performance, scalability, and security; optimize hardware and data center space; cut maintenance costs; and improve return on investment
    - Demonstrate the power of Oracle Exadata’s high-performance, high-capacity engineered systems for customer-facing businesses, such as government organizations, telecommunications, banking and insurance, and utility companies, which typically require continuous availability to support very large data volumes
    - Showcase Oracle Exadata’s unchallenged online transaction processing (OLTP) capabilities that cut application run times to provide unrivalled query turnaround and user response speeds while significantly reducing back-up times and eliminating the risk of unplanned outages
    - Capitalize on providing a world-class training and demonstration environment for Oracle Exadata to accelerate sales with Oracle Partners

    Solutions
    - Created a testing environment to enable Oracle Partners and ISVs to test their own solutions and those of their customers on Oracle Exadata running on Oracle Enterprise Linux or Oracle Solaris Express to benchmark performance prior to migration
    - Leveraged expertise on Oracle Exadata to offer Oracle Exadata training, migration, and support seminars and to showcase live demonstrations for Oracle Partners
    - Proved how Oracle Exadata’s pre-engineered systems, which come assembled, configured, and ready to run, reduce deployment time and cost, minimize risk, and help customers achieve their full performance potential immediately after go-live
    - Increased processing speeds 10-fold, with zero data loss, for a telecommunications provider’s client-facing customer relationship management solution
    - Achieved performance improvements of between 6 and 100 times for financial and utility company applications currently running on IBM, Microsoft, or SAP HANA platforms
    - Showed how daily closure procedures carried out overnight by banks, insurance companies, and other financial institutions to analyze each day’s business can typically be cut from around six hours to 20 minutes, some 18 times faster, when running on Oracle Exadata
    - Simulated concurrent back-ups while running applications under normal working conditions to prove that Oracle Exadata-based solutions can be backed up during business hours without causing bottlenecks or impacting the end-user experience
    - Demonstrated that Oracle Exadata’s built-in analytics, data mining, and OLTP capabilities make it the highest-performance, lowest-cost choice for large data warehousing operations
    - Showed how Oracle Exadata’s columnar compression and intelligent storage architecture allow 10 times more data to be stored than on competitor platforms
    - Demonstrated how Oracle Exadata cuts hardware requirements significantly by consolidating workloads onto fewer servers, delivering greater power efficiency and lower operating costs than competing systems from IBM and other manufacturers
    - Proved to ISVs that migrating solutions to Oracle Exadata’s preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, and with minimal business disruption
    - Demonstrated how storage servers, database servers, and network switches can be added incrementally and inexpensively to the Oracle Exadata platform to support business expansion
    - On track to grow revenues by 10% in year one and by 15% annually thereafter through increased business generated from Oracle Partners and ISVs

    Read the article

  • IntelliTrace As a Learning Tool for MVC2 in a VS2010 Project

    - by Sam Abraham
    IntelliTrace is a new feature in Visual Studio 2010 Ultimate Edition. I see this valuable tool as a “program execution recorder” that captures information about events and calls taking place as soon as we hit the VS2010 play (Start Debugging) button or the F5 key. Many online resources already discuss IntelliTrace and the benefits it brings to both developers and testers alike, so I see no value in just repeating this information.  In this brief blog entry, I would like to share with you how I will be using IntelliTrace in my upcoming talk at the Ft Lauderdale ArcSig .Net User Group Meeting on April 20th 2010 (check http://www.fladotnet.com for more information), as a learning tool to demonstrate the internals of the lifecycle of an MVC2 application.  I will also be providing some helpful links that cover IntelliTrace in more detail at the end of my article for reference. IntelliTrace is set up by default to only capture execution events. Microsoft did such a great job of optimizing its recording process that I haven’t felt the slightest performance hit with IntelliTrace running as I was debugging my solutions and projects.  For my purposes here, however, I needed to capture more information beyond execution events, so I turned on the option for capturing calls in addition to events, as shown in Figures 1 and 2. Changing capture options requires us to stop our debugging session and start over for the new settings to take effect. Figure 1 – Access IntelliTrace options via the Tools->Options menu items Figure 2 – Change IntelliTrace options to capture call information as well as events Notice the warning with regards to potentially degrading performance when selecting to capture call information in addition to the default events-only setting. I have found this warning to hold true: my subsequent tests showed slowness in page load times compared to rendering those same exact pages with the events-only option selected. Execution recording is auto-started along with the new debugging session of our project. At this point, we can simply interact with the application and continue executing normally until we decide to “play back” the code we have executed so far.  For code replay, the first step is to “break” the current execution, as shown in Figure 3.   Figure 3 – Break to replay recording A few tries later, I found a good process to quickly find and demonstrate the MVC2 page lifecycle. First off, we start with the events view, as shown in Figure 4, until we find an interesting event that needs further studying.  Figure 4 – Going through IntelliTrace’s events and picking a specific entry of interest We now can, for instance, study how the highlighted HTTP GET request is being handled by clicking on the “Calls View” for that particular event. Notice that IntelliTrace shows us all calls that took place in servicing that GET request. Double-clicking on any call takes us to a more granular view of the call stack within that clicked call, up until getting to a specific line of code where we can do a line-by-line replay of the execution from that point onwards using F10 or F11, just like our typical good old VS2008 debugging helped us accomplish. Figure 5 – Switching to call view on an event of interest Figure 6 – Double-clicking on a call shows a more granular view of the call stack. In conclusion, the introduction of IntelliTrace as a new addition to the VS developers’ tool arsenal enhances the development and debugging experience and effectively tackles the “no-repro” problem.
    It will also hopefully enhance my audience’s experience listening to me speak about the MVC2 page lifecycle, which I can now easily demonstrate visually, thereby improving the probability of keeping everybody awake a little longer. IntelliTrace references: http://msdn.microsoft.com/en-us/magazine/ee336126.aspx http://msdn.microsoft.com/en-us/library/dd264944(VS.100).aspx

    Read the article

  • Easily Add Facebook Chat to Pidgin

    - by Matthew Guay
    Want to keep in touch with your Facebook friends throughout the day?  Here we’ll show you how to easily add Facebook chat to the popular multi-protocol chat client Pidgin. Facebook has recently added support for XMPP chat, which means you can easily add it to popular chat clients such as Pidgin.  Previously you could only add Facebook chat to Pidgin through a plug-in that didn’t always work correctly.  Here we’ll walk you through setting up your Facebook account in Pidgin. Getting Started First, make sure you have a username for your Facebook account (link below).  This is a relatively new feature for Facebook, so if you’ve had your account for a while you may need to choose one.    If you already have one, you should see it listed instead. Now, open Pidgin, and click Manage Accounts. Click Add… Then select XMPP from the Protocol list. Now, enter your Facebook username without the facebook.com part (e.g. your.facebook.username, not http://www.facebook.com/your.facebook.username).  Then, enter chat.facebook.com for the Domain, and enter your standard Facebook password.  You can check the “Remember password” box if you’d like Pidgin to automatically sign in to Facebook chat. Now, click on the Advanced tab, and uncheck the “Require SSL/TLS” box.  Also, make sure the Connect port is 5222.  Click Add, and your Facebook account is added to Pidgin. Now Facebook will show up in your list of accounts, with the username your.facebook.username@chat.facebook.com. Your Facebook friends will show up directly in your Buddy List, complete with their full name and Facebook profile picture.  Any users that are not in a group will show under your standard list, while ones in a Facebook group will be shown in a separate group.  You can change which groups your Facebook friends show up in, just like you can with other chat contacts.   And no matter if your friend is logged in on the standard Facebook website or through another chat application, it will work the same as always.   This is a great way to keep in touch with your Facebook friends throughout the day.  If you like Facebook chat and already use Pidgin, now you can stop switching between programs and just chat with all your friends from a central location. Links: Download Pidgin Set your Facebook username

    Read the article

  • Making a Job Change That's Easy: Why Not Try a Career Change?

    - by david.talamelli
    A few nights ago I received a comment on one of our blog posts that reminded me of a statistic that I heard a while back. The statistic reflected the change in our views towards work and showed how, while people in past generations would stay in one role for their working career, now with so much choice people not only change jobs often but also change careers 4-5 times in their working life. To differentiate between a job change and a career change: when I say job change this could be an IT Sales person moving from one IT Sales role to another IT Sales role. A Career change, for example, would be that same IT Sales person moving from IT Sales to something outside the scope of their industry - maybe to something like an Engineer or Scuba Dive Instructor. The reasons for Career changes can be as varied as the people who make them. Someone's motivation could be to pursue a passion, or maybe there is a change in their personal circumstances forcing the change, or it could be any number of other reasons. I think it takes courage to make a Career change - it can be easy to stay in your comfort zone and do what you know, but to really push yourself sometimes you need to try something new; it is a matter of making that career transition as smooth as possible for yourself. The comment that was posted is here below (thanks Dean for the kind words, they are appreciated). Hi David, I just wanted to let you know that I work for a company called Milestone Search in Melbourne, Victoria Australia. (www.mstone.com.au) We subscribe to your feed on a daily basis and find your blogs both interesting and insightful. Not to mention extremely entertaining. I wonder if you have missed out on getting in journalism as this seems to be something you'd be great at ?: ) Anyways back to my point about changing careers. This could be anything from going from I.T. to Journalism, Engineering to Teaching or any combination of career you can think of. I don't think there ever has been a time where we have had so many opportunities to do so many different things in our working life. While this idea sounds great in theory, putting it into practice would be much harder to do, I think. First, in an increasingly competitive job market, employers tend to look for specialists in their field. You may want to make a change but your options may be limited by the number of employers willing to take a chance on someone new to an industry, who will likely require a significant investment in time to get brought up to speed. Also, using myself as an example, if I were given the opportunity to move into a Journalism/Communication/Marketing career from my career as an IT Recruiter, realistically I would have to take a significant pay cut to make this change, as my current salary reflects the expertise I have in my current career. I would not immediately be up to speed moving into a new career and would not be able to justify a similar salary. Yes, there are transferable skills in any career change, but even with transferable skills you must realise that you will also have a large amount of learning to do, which takes time. These are two initial hurdles that I immediately think of; there may be more, but nothing is insurmountable. If you work out what you want to do with your working career, whatever that may be, you then just need to work out the steps to get to your end goal. This is where utilising the power of your networks and using Social Media can come in handy.
    If you are interested in working somewhere, why not proactively take the opportunity to research the industry or company - find out who it is you need to speak to and get in touch with them. We spend so much time working, we should enjoy the work we do and not be afraid to try new things. Waiting for your dream job to fall into your lap or be handed to you on a silver platter is not likely to happen, so if there is something you do want to do, work out a plan to make it happen and chase after it. This article was originally posted on David Talamelli's Blog - David's Journal on Tap

    Read the article

  • I can't update my system properly, "no package header" error

    - by joel
    Every time I try to run sudo apt-get update or try running updates from the GUI interface, I run into the following problem or something similar:

    Reading package lists... Error!
    E: Encountered a section with no Package: header
    E: Problem with MergeList /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise_restricted_binary-i386_Packages
    E: The package lists or status file could not be parsed or opened.

    I've tried purging using sudo rm -rf <filename>, where <filename> is the file listed above, and then running sudo apt-get update to fix it (as listed elsewhere in this forum), and no luck; I just keep getting this message. I'm running Ubuntu 12.04 and this is getting really frustrating... I just want a system that runs smoothly and doesn't require its hand to be held when it comes to updates. Tried the solutions posted below and am still receiving the same errors; sample output:

    W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise_main_binary-amd64_Packages  Encountered a section with no Package: header
    W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages  Encountered a section with no Package: header
    W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise_restricted_binary-i386_Packages  Encountered a section with no Package: header
    W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise_universe_binary-i386_Packages  Encountered a section with no Package: header
    W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise_multiverse_binary-i386_Packages  Encountered a section with no Package: header
    W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-updates_universe_source_Sources  Encountered a section with no Package: header
    W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-updates_restricted_binary-i386_Packages  Encountered a section with no Package: header
    W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-updates_universe_binary-i386_Packages  Encountered a section with no Package: header
    W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-updates_multiverse_binary-i386_Packages  Encountered a section with no Package: header
    W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-backports_universe_binary-i386_Packages  Encountered a section with no Package: header
    W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-security_main_source_Sources  Encountered a section with no Package: header
    W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-security_universe_binary-amd64_Packages  Encountered a section with no Package: header
    W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-security_main_binary-i386_Packages  Encountered a section with no Package: header
    W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-security_universe_binary-i386_Packages  Encountered a section with no Package: header
    W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise_main_i18n_Translation-en%5fCA  Encountered a section with no Package: header
    W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-updates_main_i18n_Translation-en%5fCA  Encountered a section with no Package: header
    W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-updates_main_i18n_Translation-en  Encountered a section with no Package: header
    W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-updates_multiverse_i18n_Translation-en%5fCA  Encountered a section with no Package: header
    W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-updates_universe_i18n_Translation-en%5fCA  Encountered a section with no Package: header
    W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-backports_main_i18n_Translation-en%5fCA  Encountered a section with no Package: header
    W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-backports_multiverse_i18n_Translation-en%5fCA  Encountered a section with no Package: header
    W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-backports_universe_i18n_Translation-en%5fCA  Encountered a section with no Package: header
    W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-security_main_i18n_Translation-en%5fCA  Encountered a section with no Package: header
    W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-security_multiverse_i18n_Translation-en%5fCA  Encountered a section with no Package: header
    E: Some index files failed to download. They have been ignored, or old ones used instead.
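    The solutions I tried all boil down to variations of clearing the whole lists cache and rebuilding it, something like this (a generic sketch of that approach, which did not resolve it in my case):

    # Remove all cached package lists (they are re-downloaded on the next update)
    sudo rm -rf /var/lib/apt/lists/*
    # Rebuild the lists from the configured repositories
    sudo apt-get update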

    Read the article

  • Is Agile the new micromanagement?

    - by Smith James
    Hi, This question has been cooking in my head for a while, so I wanted to ask those who are following agile/scrum practices in their development environments. My company has finally ventured into incorporating agile practices and has started out with a team of 4 developers in an agile group on a trial basis. It has been 4 months with 3 iterations, and they continue to do it without going fully agile for the rest of us. This is due to the fact that management must meet business requirements that come with quite a bit of ad hoc type requests from high above. Recently, I talked to the developers who are part of this initiative; they tell me that it's not fun. They are not allowed to talk to other developers by their Scrum master and are not allowed to take any phone calls in the work area (which may be fine to an extent). For example, if I want to talk to my friend for kicks who is in the agile team, I am not allowed without the approval of the Scrum master, who is sitting right next to the agile team. The idea of all this, or of agile, is to provide a complete vacuum for agile developers from any interruptions and to have them put in a good 6+ productive hours. Well, guys, I am no agile guru, but from what I have read of the Yahoo agile rollout document and similar ones for other organizations, I get the feeling that agile is not cheap. It requires resources and budget to instill agile into the teams and correct issues as they arise to put them back on track. For starters, it requires training for developers and coaching for managers, etc., etc... The current Scrum master is a manager who took a couple of days of agile training classes paid for by the management and is now leading this agile team. I have also heard in meetings that the agile manifesto is not set in stone and is customized differently for each company. Well, it all sounds good and reasonable. In conclusion, I always thought agile was supposed to bring harmony to development teams, resulting in happy developers. However, I am getting the very opposite feeling when talking to the developers in the agile team. They are unhappy that they cannot talk about anything but work, sitting quietly all day just working, and they feel it's just another way for management to make them work more. Tell me please, is this one of those examples of good practices used for the purpose of selfish advantage for more dollars? Or maybe it's just us developers, like me and this agile team, who don't like to work in an environment where they only breathe work because they are at work. Thanks. Edit: It's a company in the healthcare domain that has offices across the US, but we're in Texas. It definitely feels like cowboy-style agile, which makes me really not want to go for agile at all, esp at my current company. All of it has to do with the management being completely cheap. Cutting out expensive coffee for a cheaper version, emphasis on savings, and being productive while staying as lean as possible. My feeling is that someone in management behind the door threw out this idea that agile makes you produce more, so we can show our bosses we're producing more with the same headcount. Or, maybe, it will allow us to reduce headcount if that's the case. EDITED: They are having their 5 min daily meeting. But not allowed to chat or talk with someone outside of their team. All focus is on work.

    Read the article

  • Ask the Readers: Backing Your Files Up – Local Storage versus the Cloud

    - by Asian Angel
    Backing up important files is something that all of us should do on a regular basis, but may not have given as much thought to as we should. This week we would like to know if you use local storage, cloud storage, or a combination of both to back your files up. Photo by camknows. For some people local storage media may be the most convenient and/or affordable way to back up their files. Having those files stored on media under your control can also provide a sense of security and peace of mind. But storing your files locally may also have drawbacks if something happens to your storage media. So how do you know whether the benefits outweigh the disadvantages or not? Here are some possible pros and cons that may affect your decision to use local storage to back up your files:

    Local Storage
    Pros
    - You are in control of your data
    - Your files are portable and can go with you when needed if using external or flash drives
    - Files are accessible without an internet connection
    - You can easily add more storage capacity as needed (additional drives, etc.)
    Cons
    - You need to arrange room for your storage media (if you have multiple external drives, etc.)
    - Possible hardware failure
    - No access to your files if you forget to bring your storage media with you or it is too bulky to bring along
    - Theft and/or loss of home with all contents due to circumstances like fire

    If you are someone who is always on the go and needs to travel as lightly as possible, cloud storage may be the perfect way for you to back up and access your files. Perhaps your laptop has a hard-drive failure or gets stolen… unhappy events to be sure, but you will still have a copy of your files available. Perhaps a company wants to make sure their records, files, and other information are backed up off site in case of a major hardware or system failure… expensive and/or frustrating to fix if it happens, but once again there is a nice backup ready to go once things are fixed. As with local storage, here are some possible pros and cons that may influence your choice of cloud storage to back up your files:

    Cloud Storage
    Pros
    - No need to carry around flash or bulky external drives
    - All of your files are accessible wherever there is an internet connection
    - No need to deal with local storage media (or its upkeep)
    - Your files are still safe if your home is broken into or other unfortunate circumstances occur
    Cons
    - Your files and data are not 100% under your control
    - Possible hardware failure or loss of files on the part of your cloud storage provider (this could include a disgruntled employee wreaking havoc)
    - No access to your files if you do not have an internet connection
    - The cloud storage provider may eventually shut down due to financial hardship or other unforeseen circumstances
    - The possibility of your files and data being stolen by hackers due to a security breach on the part of your cloud storage provider

    You may also prefer to try and cover all of the possibilities by using both local and cloud storage to back up your files. If something happens to one, you always have the other to fall back on. Need access to those files at or away from home? As long as you have access to either your storage media or an internet connection, you are good to go. Maybe you are getting ready to choose a backup solution but are not sure which one would work better for you. Here is your chance to ask your fellow HTG readers which one they would recommend. Got a great backup solution already in place? Then be sure to share it with your fellow readers!

    Read the article

  • How to Install Oracle Software on Remote Linux Server

    - by James Taylor
    It is becoming more common these days to install Oracle software on remote Linux servers. This issue has always existed, but was generally resolved either by silent installs or by someone physically going to the server to install the software. It is becoming more difficult with the popular virtualisation and cloud deployment strategies. This post provides the steps involved to install Oracle software using the GUI interface on a remote Linux server. There are many ways to achieve this; the way I resolve this issue is via Virtual Network Computing (VNC), as it is shipped with RedHat and OEL out of the box. For this post I’m using OEL 5 deployed on an OVM guest. If you have not already done so, download and install a client version of VNC so you can connect to the server. There are many out there; for the purpose of this post I use UltraVNC. You can download a free version from http://www.uvnc.com/download/index.html By default VNC Server is installed in your RedHat and OEL OS, but it is not configured. The way VNC works is that when started it creates a client instance for the user and binds it to a specific port. So if you have an account on the Linux box you can set up a VNC Server session for that user; you don’t need to be root. For the purpose of this document I’m going to use oracle as the user to set up a VNC session, as this is the user I want to use to install the software. However, to start the VNC service you must be root. As the root user, run the following command:

    service vncserver start
    Starting VNC server: no displays configured                [  OK  ]

    Login to the Linux box as the user you want to use to install the Oracle software:

    [oracle@lisa ~]$

    Run the command to create a new VNC server instance for the oracle user:

    vncserver

    You will be asked to supply password information. This is what you will enter when connecting from your desktop client. This password is also independent of the actual Linux user password; the VNC Server is acting as a proxy to this instance.

    You will require a password to access your desktops.
    Password:
    Verify:
    xauth:  creating new authority file /home/oracle/.Xauthority
    New 'lisa.nz.oracle.com:1 (oracle)' desktop is lisa.nz.oracle.com:1
    Creating default startup script /home/oracle/.vnc/xstartup
    Starting applications specified in /home/oracle/.vnc/xstartup
    Log file is /home/oracle/.vnc/lisa.nz.oracle.com:1.log

    As you can see, a new instance lisa.nz.oracle.com:1 has been created. If you were to run the vncserver command again, another instance lisa.nz.oracle.com:2 would be created. If you are going through a firewall, you will need to ensure that port 5901 (display 1) is open between your client desktop and the Linux server. Depending on the options chosen at install time, a firewall could be in place. The simplest way to disable it is with the following command (you will need to be root):

    service iptables stop

    This will stop the firewall while you install. If you just want to add a port to the accepted list, use the firewall UI (again as root):

    system-config-security-level

    Now you are ready to connect to the server via VNC. Using the software installed in step one, start the VNC client. You should be prompted for the server and port. If connectivity is established, you will be prompted for the password entered in step 5. You should now be presented with a terminal screen ready to install software. Go to the location of the Oracle install software and start the Oracle Universal Installer.

    Read the article

  • Redirect all access requests to a domain and subdomain(s) except from specific IP address? [closed]

    - by Christopher
    This is a self-answered question... After much wrangling I found the magic combination of mod_rewrite rules, so I'm posting here. My scenario is that I have two domains - domain1.com and domain2.com - both of which are currently serving identical content (by way of a global 301 redirect from domain1 to domain2). Domain1 was then chosen to be repurposed to be a 'portal' domain - with a corporate CMS-based site leading off from the front page, and the existing 'retail' domain (domain2) left to serve the main web site. In addition, a staging subdomain was created on domain1 in order to prepare the new corporate site without impinging on the root domain's existing operation. I contemplated just rewriting all requests to domain2 and setting up the new corporate site 'behind the scenes' without using a staging domain, but I usually use subdomains when setting up new sites. Finally, I required access to the 'actual' contents of the domains and subdomains - i.e., to not be redirected like all other visitors - in order that I can develop the new site and test it in the staging environment on the live server, as I'm not using a separate development webserver in this case. I also have another test subdomain on domain1 which needed to be preserved. The way I eventually set it up was as follows (10.2.2.1 would be my home WAN IP):

    .htaccess in root of domain1:

    RewriteEngine On
    RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1
    RewriteCond %{HTTP_HOST} !^staging.domain1.com$ [NC]
    RewriteCond %{HTTP_HOST} !^staging2.domain1.com$ [NC]
    RewriteRule ^(.*)$ http://domain2.com/$1 [R=301]

    .htaccess in staging subdomain on domain1:

    RewriteEngine On
    RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1
    RewriteCond %{HTTP_HOST} ^staging.domain1.com$ [NC]
    RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L]

    The multiple .htaccess files and multiple rulesets require more processing overhead and longer iteration, as the visitor is potentially redirected twice. However, I find it to be a more granular method of control, as I can selectively allow more than one IP address access to individual staging subdomain(s) without automatically granting them access to everything else. It also keeps the rulesets fairly simple and easy to read (or re-interpret, because I'm always forgetting how I put rules together!). If anybody can suggest a more efficient way of merging all these rules and conditions into just one main ruleset in the root of domain1, please post! I'm always keen to learn; this post is more my attempt to preserve this information for those who are looking to redirect entire domains for all visitors except themselves (for design/testing purposes) and not just denying specific file access for maintenance mode (there are many good examples of simple mod_rewrite rules for 'maintenance mode' style operation easily findable via Google). You can also extend the IP address detection - firstly by using wildcards, e.g. ^10\.2\.2\..*: the last octet's \..* denotes the usual "." and then "zero or more arbitrary characters", signified by the .* - so you can match entire subnets if you wish. Character classes help for single-digit variation, e.g. ^10\.2\.[1-9]?[0-9]\. (third octet 0-99) or ^10\.2\.1[0-1][0-9]\. (third octet 100-119); note that square brackets match a single character, so something like [1-255] will not match the numbers 1 through 255 - multi-digit octet ranges have to be built up from per-digit classes or alternation. The third way, if you wish to specify multiple discrete IP addresses, is to bracket them in the style of ^(1\.1\.1\.1|2\.2\.2\.2|3\.3\.3\.3)$, and you can of course use character classes to substitute octets or single digits again.
    NB: if you're using individual RewriteCond lines to specify multiple IPs / ranges, make sure to put [OR] at the end of each one, otherwise mod_rewrite will interpret them as "if IP address matches 1.1.1.1 AND if IP address matches 2.2.2.2"... which is of course impossible! However, as far as I'm aware this isn't necessary if you're using the ! negator to specify "and is not...". Kudos also to SE: this older question also came in useful when I was verifying my own knowledge prior to my futzing around with code. This page was helpful, as were the various other links posted below (can't hyperlink them all due to spam protection... other regex checkers are available). The AddedBytes cheat sheet's useful to pin up on your wall. Other referenced URLs: internetofficer.com/seo-tool/regex-tester/ fantomaster.com/faarticles/rewritingurls.txt addedbytes.com/cheat-sheets/mod_rewrite-cheat-sheet/
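    Addendum: picking up the [OR] note above, a minimal (untested) sketch of whitelisting two discrete developer IPs ahead of the domain-wide redirect - the addresses are hypothetical, and the passthrough rule simply stops rewrite processing for matching visitors:

    RewriteEngine On
    # Let either developer address through untouched
    RewriteCond %{REMOTE_ADDR} ^10\.2\.2\.1$ [OR]
    RewriteCond %{REMOTE_ADDR} ^10\.2\.2\.25$
    RewriteRule ^ - [L]
    # Everyone else gets the domain-wide redirect
    RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L]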

    Read the article

  • Tough Decisions

    - by Johnm
    There was once a thriving business that employed two Database Administrators, Sam and Jim. Both DBAs were certified, educated and highly talented in their skill sets. During lunch breaks these two DBAs were often found together discussing best practices, troubleshooting techniques and the latest release notes for the upcoming version of SQL Server. They genuinely loved what they did. The maintenance of the first database was the responsibility of Sam. He was the architect of this server's setup and he was very meticulous in its configuration. He regularly monitored the health of the database, validated backup files and consistently adhered to the best practices advocated by well-respected professionals. He was very proud of the fact that no database he managed had ever lost data or performed poorly. The maintenance of the second database was the responsibility of Jim. He too was the architect of this server's setup. At the time that he built this server, his understanding of the finer details of configuration was not as clear as it is today. The server was built on a shoestring budget and with very little time for testing and implementation. Jim often monitored the health of the database, but in more of a reactionary mode due to user complaints of slowness or failed transactions. Deadlocks abounded and the backup files were never validated. One day, the announcement was made that revealed that the business had hit hard times financially. Budgets were being cut, limitations on spending were implemented and a reduction in full-time staff was required. Since having two DBAs was regarded as a luxury by many, this meant that either Sam or Jim was about to find himself out of a job. Sam and Jim's boss, Frank, was faced with a very tough decision. Sam's performance was flawless. His techniques and practices were perfection. The databases he managed were reliable and efficient. His solutions are "by the book". When given a task it is certain that, while it may take a little longer, it will be done right the first time. Jim's techniques and practices were not perfect, but effective and responsive. He made mistakes regularly, but he showed that he learned from them, and they often resulted in innovative solutions. When given a task it is certain that, while the results may require some tweaking, it will be done on time and under budget. You are Frank's best friend. He approaches you and presents this scenario. He must lay off one of his valued DBAs the very next morning. Frank asks you: "All else being equal, who would you let go, and why?" Another pertinent question is raised: "Regardless of good times or bad, if you had to choose, which DBA would you want on your team when tough challenges arise?" Your response is... (This is where you enter a comment below.)

    Read the article

  • How to use caching to increase render performance?

    - by Christian Ivicevic
    First of all I am going to cover the basic design of my 2D tile-based engine written with SDL in C++, then I will point out what I am up to and where I need some hints.

    Concept of my engine

    My engine uses the concept of GameScreens which are stored on a stack in the main game class. The main methods of a screen are usually LoadContent, Render, Update and InitMultithreading. (I use the last one because I am using v8 as a JavaScript bridge to the engine.) The main game loop then renders the top screen on the stack (if there is one; otherwise, it exits the game) - actually it calls the render methods, but stores all items to be rendered in a list. After gathering all this information, methods like SDL_BlitSurface are called by my GameUIRenderer, which draws the enqueued content and then draws some overlay. The code looks like this:

    while(Game is running) {
        Handle input
        if(Screens on stack == 0)
            exit
        Update timer etc.
        Clear the screen
        Peek the screen on the stack and collect information on what to render
        Actually render the enqueued screen stuff and some overlay etc.
        Flip the screen
    }

    The GameUIRenderer uses, as hinted, a std::vector<std::shared_ptr<ImageToRender>> to hold all necessary information, described by this class:

    class ImageToRender {
    private:
        SDL_Surface* image;
        int x, y, w, h, xOffset, yOffset;
    };

    This bunch of attributes is usually needed if I have a texture atlas with all tiles in one SDL_Surface and the engine should then crop one specific area and draw it to the screen. The GameUIRenderer::Render() method then just iterates over all elements and renders them something like this:

    std::for_each(
        this->m_vImageVector.begin(),
        this->m_vImageVector.end(),
        [this](std::shared_ptr<ImageToRender> pCurrentImage) {
            SDL_Rect rc = { pCurrentImage->x, pCurrentImage->y, 0, 0 };
            // For the sake of simplicity ignore offsets...
            SDL_Rect srcRect = { 0, 0, pCurrentImage->w, pCurrentImage->h };
            SDL_BlitSurface(pCurrentImage->image, &srcRect, g_pFramework->GetScreen(), &rc);
        }
    );
    this->m_vImageVector.clear();

    Current ideas which need to be reviewed

    The specified approach works really well and IMHO it has a good structure; however, the performance could definitely be increased. I would like to know what you suggest on how to implement efficient caching of surfaces etc., so that there is no need to redraw the same scene over and over again. The map itself would be almost static; only when the player moves would we need to move the map. Furthermore, animated entities would either require updates of the whole map or updates of only the specific areas the entities are currently moving in. My first approach was to include a flag IsTainted which would be used by the GameUIRenderer to decide whether to redraw everything or use a cached version (or to not render anything, so that we do not have to clear the screen and the last frame persists). However, this seems to be quite messy if I have to manually handle in my screen class's Render method whether something has changed or not.
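    One direction for the caching question - a minimal sketch of the dirty-flag idea, assuming SDL 1.2; the class and all names here are hypothetical, not part of the engine above. The map layer is rendered once into an off-screen surface, and that single surface is re-blitted every frame until something taints it:

    #include <SDL.h>

    // Hypothetical cache for the static map layer: rebuild only when tainted.
    class MapLayerCache {
    private:
        SDL_Surface* m_pCache; // off-screen buffer holding the pre-rendered map
        bool m_bTainted;       // true when the cached image is stale
    public:
        MapLayerCache(int w, int h) : m_bTainted(true) {
            // Software surface; 32 bpp with default masks keeps the sketch simple.
            m_pCache = SDL_CreateRGBSurface(SDL_SWSURFACE, w, h, 32, 0, 0, 0, 0);
        }
        ~MapLayerCache() { SDL_FreeSurface(m_pCache); }

        // Call when the camera moves or a tile changes.
        void Taint() { m_bTainted = true; }

        template <typename RebuildFn>
        void Render(SDL_Surface* pScreen, RebuildFn rebuild) {
            if (m_bTainted) {
                rebuild(m_pCache); // blit all visible tiles into the cache once
                m_bTainted = false;
            }
            SDL_BlitSurface(m_pCache, NULL, pScreen, NULL); // one blit per frame afterwards
        }
    };

    Animated entities can then be drawn on top of the cached layer each frame, so only the regions they occupy change while the map itself costs a single blit.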

    Read the article

  • Knockout.js - Filtering, Sorting, and Paging

    - by jtimperley
    Originally posted on: http://geekswithblogs.net/jtimperley/archive/2013/07/28/knockout.js---filtering-sorting-and-paging.aspx

    Knockout.js is fantastic! Maybe I missed it, but it appears to be missing flexible filtering, sorting, and pagination of its grids. This is a summary of my attempt at creating this functionality, which has been working out amazingly well for my purposes. Before you continue, this post is not intended to teach you the basics of Knockout. They have already created a fantastic tutorial for this purpose. You'd be wise to review this before you continue. http://learn.knockoutjs.com/ Please view the full source code and functional example on jsFiddle. Below you will find a brief explanation of some of the components. http://jsfiddle.net/JTimperley/pyCTN/13/

    First we need to create a model to represent our records. This model is a simple container with defined and guaranteed members.

    function CustomerModel(data) {
        if (!data) {
            data = {};
        }
        var self = this;
        self.id = data.id;
        self.name = data.name;
        self.status = data.status;
    }

    Next we need a model to represent the page as a whole, with an array of the previously defined records. I have intentionally overlooked the filtering and sorting options for now. Note how the filtering, sorting, and pagination are chained together to accomplish all three goals. This strategy allows each of these pieces to be used selectively based on the page's needs. If you only need sorting, just sort, etc.

    function CustomerPageModel(data) {
        if (!data) {
            data = {};
        }
        var self = this;
        self.customers = ExtractModels(self, data.customers, CustomerModel);
        var filters = […];
        var sortOptions = […];
        self.filter = new FilterModel(filters, self.customers);
        self.sorter = new SorterModel(sortOptions, self.filter.filteredRecords);
        self.pager = new PagerModel(self.sorter.orderedRecords);
    }

    The code currently supports text box and drop-down filters. Text box filters require defining the current 'Value' and a 'RecordValue' function to retrieve the filterable value from the provided record. Drop-downs allow defining all possible values, the current option, and the 'RecordValue' as before. Once these filters are defined, they are automatically added to the screen, and any changes to their values will automatically update the results, causing their sort and pagination to be re-evaluated.

    var filters = [
        {
            Type: "text",
            Name: "Name",
            Value: ko.observable(""),
            RecordValue: function(record) { return record.name; }
        },
        {
            Type: "select",
            Name: "Status",
            Options: [
                GetOption("All", "All", null),
                GetOption("New", "New", true),
                GetOption("Recently Modified", "Recently Modified", false)
            ],
            CurrentOption: ko.observable(),
            RecordValue: function(record) { return record.status; }
        }
    ];

    Sort options are more simplistic and are also automatically added to the screen. Simply provide each option's name and value for the sort drop-down, as well as a function that defines how the records are compared. This mechanism can easily be adapted for using table headers as the sort triggers; that strategy hasn't crossed my functionality needs at this point.

    var sortOptions = [
        {
            Name: "Name",
            Value: "Name",
            Sort: function(left, right) {
                return CompareCaseInsensitive(left.name, right.name);
            }
        }
    ];

    Paging options are completely contained by the pager model. Because we will be chaining arrays between our filtering, sorting, and pagination models, the following utility method is used to prevent errors when handing an observable array to another observable array.

    function GetObservableArray(array) {
        if (typeof(array) == 'function') {
            return array;
        }
        return ko.observableArray(array);
    }
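    The full FilterModel lives in the jsFiddle above; as a rough illustration of the filtering link in the chain (my sketch, not the original code - it assumes GetOption produces objects with a Value property), it can be a computed that re-runs whenever a filter value or the source array changes:

    function FilterModel(filters, records) {
        var self = this;
        self.filters = filters;
        self.records = GetObservableArray(records);

        // Reading Value()/CurrentOption() subscribes this computed to every
        // filter, and records() subscribes it to the source array.
        self.filteredRecords = ko.computed(function() {
            return ko.utils.arrayFilter(self.records(), function(record) {
                for (var i = 0; i < self.filters.length; i++) {
                    var filter = self.filters[i];
                    if (filter.Type == "text") {
                        var term = (filter.Value() || "").toLowerCase();
                        if (term && String(filter.RecordValue(record)).toLowerCase().indexOf(term) < 0) {
                            return false;
                        }
                    } else if (filter.Type == "select") {
                        var option = filter.CurrentOption();
                        if (option && option.Value !== null && filter.RecordValue(record) !== option.Value) {
                            return false;
                        }
                    }
                }
                return true;
            });
        });
    }

    The sorter and pager can follow the same pattern, each wrapping the previous model's output in another computed.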

    Read the article

  • Keepin’ It Simple with StorageTek SL150

    - by Kristin Rose
    Are your customers’ archive and data protection environments getting out of hand?  Are they looking for a little simplicity in their lives? How about some scalability? Or are they looking for a way to save on capital and operational expenses? If you answered yes to any of these, then Oracle's new StorageTek SL150 Modular Tape Library is the product for you. It beats the competition in terms of simplicity, scalability and savings, and provides some seriously wallet-friendly revenue opportunities for you. If the long-term service annuities on the SL150 aren’t convincing enough, then the resale margins, rebates and follow-on revenue from modular upgrades will be!  The SL150 simplifies StorageTek’s tape portfolio by replacing three products with one scalable solution that provides an entry point for repeat business within accounts. The SL150 expands your potential storage customer base to smaller companies with low cost, simple upgrades and streamlined management that help alleviate key customer pain points. With the SL150, your customers will be able to simplify growth of their archive and data protection environments with small entry configurations and 10x growth, something that would require multiple box swaps across up to three product categories with competitive products. With the SL150, Oracle can help you provide greater customer satisfaction with Simplicity, Scalability and Savings! We know you’re probably wondering how you can get started and sell this new and magnificent product… Well, look no further, because the only thing you need to do is complete the SL150 Guided Learning Paths (GLPs). For some extra insight, watch the video below on the new StorageTek SL150 modular tape library, and don’t forget to ‘tweet’ this post and share it on Facebook to spread the good news! Wishing you Simplicity, Scalability and Savings, The OPN Communications Team

    Read the article

  • Changing the action of a hyperlink in a Silverlight RichTextArea

    - by Marc Schluper
    The title of this post could also have been "Move over Hyperlink, here comes Actionlink" or "Creating interactive text in Silverlight." But alas, there can be only one. Hyperlinks are very useful. However, they are also limited because their action is fixed: browse to a URL. This may have been adequate at the start of the Internet, but nowadays, in web applications, the one thing we do not want to happen is a complete change of context. In applications we typically like a hyperlink selection to initiate an action that updates a part of the screen. For instance, if my application has a map displayed with some text next to it, the map would react to a selection of a hyperlink in the text, e.g. by zooming in on a location and displaying additional locational information in a popup. In this way, the text becomes interactive text. It is quite common that one company creates and maintains websites for many client companies. To keep maintenance cost low, it is important that the content of these websites can be updated by the client companies themselves, without the need to involve a software engineer. To accommodate this scenario, we want the author of the interactive text to configure all hyperlinks (without writing any code). In a Silverlight RichTextArea, the default action of a Hyperlink is the same as a traditional hyperlink, but it can be changed: if the Command property has a value, then upon a click event this command is called with the value of the CommandParameter as parameter. How can we let the author of the text specify a command for each hyperlink in the text, and how can we let an application react properly to a hyperlink selection event? We are talking about any command here. Obviously, the application would recognize only a specific set of commands, with well defined parameters, but the approach we take here is generic in the sense that it pertains to the RichTextArea and any command. So what do we require? We wish that:

    1. As a text author, I can configure the action of a hyperlink in a (rich) text without writing code;
    2. As a text author, I can persist the action of a hyperlink with the text;
    3. As a reader of persisted text, I can click a hyperlink and the configured action will happen;
    4. As an application developer, I can configure a control to use my application specific commands.

    In an excellent introduction to the RichTextArea, John Papa shows (among other things) how to persist a text created using this control. To meet our requirements, we can create a subclass of RichTextArea that uses John's code and allows plugging in two command-specific components: one to prompt for a command definition, and one to execute the command. Since both of these plugins are application specific, our RichTextArea subclass should not assume anything about them except their interface.

    public interface IDefineCommand {
        void Prompt(string content,                   // the link content
                    Action<string, object> callback); // the method called to convey the link definition
    }

    public interface IPerformCommand : ICommand {}

    The IDefineCommand plugin receives the content of the link (the text visible to the reader) and displays some kind of control that allows the author to define the link. When that's done, this (possibly changed) content string is conveyed back to the RichTextArea, together with an object that defines the command to execute when the link is clicked by the reader of the published text. The IPerformCommand plugin simply implements System.Windows.Input.ICommand. Let's use MEF to load the proper plugins.
In the example solution there is a project that contains rudimentary implementations of these. The IDefineCommand plugin simply prompts for a command string (cf. a command line or query string), and the IPerformCommand plugin displays a MessageBox showing this command string. An actual application using this extended RichTextArea would have its own set of commands, each having their own parameters, and hence would provide more user friendly application specific plugins. Nonetheless, in any case a command can be persisted as a string and hence the two interfaces defined above suffice. For a Visual Studio 2010 solution, see my article on The Code Project.
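    As a rough sketch of the MEF wiring (my illustration rather than the article's code; the subclass name and the AllowDefault choice are hypothetical), the subclass can import both plugins and satisfy them at construction time:

    using System.ComponentModel.Composition;

    public class CommandRichTextArea : RichTextArea // hypothetical subclass name
    {
        [Import(typeof(IDefineCommand), AllowDefault = true)]
        public IDefineCommand CommandDefiner { get; set; }

        [Import(typeof(IPerformCommand), AllowDefault = true)]
        public IPerformCommand CommandPerformer { get; set; }

        public CommandRichTextArea()
        {
            // Silverlight's MEF entry point: fill the imports above from
            // whatever plugin assemblies the application has loaded.
            CompositionInitializer.SatisfyImports(this);
        }
    }

    Each Hyperlink in the text can then have its Command set to CommandPerformer and its CommandParameter set to the persisted command string, so a click routes through the application-specific plugin.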

    Read the article

  • How SQL Server 2014 impacts Red Gate’s SQL Compare

    - by Michelle Taylor
    SQL Compare 10.7 successfully connects to SQL Server 2014, but it doesn’t yet cover the SQL Server 2014 features that would require major changes to SQL Compare to support. In this post I’m going to talk about the SQL Server 2014 features we’ve already begun supporting, and which ones we’re working on for the next release of SQL Compare (v11). From SQL Compare’s perspective, the new memory-optimized table functionality (some might know it as ‘Hekaton’) has been the most important change. It can’t be described as its own object type, but the new functionality is split across two existing object types (three if you count indexes), as it also comes with native stored procedures and inline indexes. Along with connectivity support, the SQL Compare team has already implemented the first part of the puzzle – inline specification of indexes. These are essential for memory-optimized tables because it’s not possible to alter a memory-optimized table’s structure, and so indexes can’t be added after the fact without dropping the table. Books Online shows this in more detail in the table_index and column_index clauses of http://msdn.microsoft.com/en-us/library/ms174979(v=sql.120).aspx. SQL Compare 10.7 currently supports reading the new inline index specification from script folders and source control repositories, and will write out inline indexes where it’s necessary to do so (i.e. in UDDTs or when attempting to write projects compatible with the SSDT database project format). However, memory-optimized tables themselves are not yet supported in 10.7. The team is actively working on making them available in the v11 release, with full support later in the year and a beta version before that. Fortunately, SQL Compare already has some ways of handling tables that have to be dropped and created rather than altered, which are being adapted to handle this new kind of table. Because it’s one of the largest new database engine features, there’s an equally large Books Online section on memory-optimized tables, but for us the most important parts of the documentation are the normal table features that are changed or unsupported and the new syntax found in the T-SQL reference pages. We are treating SQL Compare’s support of Natively Compiled Stored Procedures as a separate unit of work, which will be available in a subsequent beta and also feed into the v11 release. This new type of stored procedure is designed to work with memory-optimized tables to maintain the performance improvements gained by them – but you can still also access memory-optimized tables from normal stored procedures and ad-hoc queries. To us, they’re essentially a limited-syntax stored procedure with a few extra options in the create statement, embodied in the updated CREATE PROCEDURE documentation and with detailed limitations. They should be easier to handle than memory-optimized tables, simply because the handling of stored procedures is less sensitive to dropping the object than the handling of tables. However, both share an incompatibility with DDL triggers and Event Notifications, which means we’ll need to temporarily disable these during the specific deployment operations that involve them – don’t worry, we’ll supply a warning if this is the case so that you can check that your auditing arrangements can handle the situation. There are also a handful of other improvements in SQL Server 2014 which affect SQL Compare and SQL Data Compare that are not connected to memory-optimized tables.
The largest of these are the improvements to columnstore indexes, with the capability to create clustered columnstore indexes and update columnstore tables through them – for more detail, take a look at the new syntax reference. There’s also a new index option for better compression of columnstores (COLUMNSTORE_ARCHIVE) and a new statistics option for incremental per-partition statistics, plus the 90 compatibility level is being retired. We’re planning to finish up these small clean-up features last, and be ready to release SQL Compare 11 with full SQL 2014 support early in Q3 this year. For a more thorough overview of what’s new in SQL Server 2014, Books Online’s What’s New section is a good place to start (although almost all the changes in this version are in the Database Engine).
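    For reference, a hedged example of the inline index specification discussed above (my own illustration with hypothetical table and index names, not taken from the Red Gate post); this is the kind of syntax SQL Compare 10.7 can already read from script folders:

    -- A memory-optimized table must declare its indexes inline, because the
    -- table's structure cannot be altered after creation.
    CREATE TABLE dbo.SessionCache
    (
        SessionId INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        UserId INT NOT NULL
            INDEX IX_SessionCache_UserId NONCLUSTERED,
        LastTouch DATETIME2 NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);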

    Read the article

< Previous Page | 183 184 185 186 187 188 189 190 191 192 193 194  | Next Page >