Search Results

Search found 2041 results on 82 pages for 'detailed'.


  • New Features and Changes in OIM11gR2

    - by Abhishek Tripathi
    Web Consoles in OIM 11gR2

    In 11gR1 there were three admin web consoles:
    · Self Service Console
    · Administration Console
    · Advanced Administration Console

    In OIM 11gR2, the Self Service and Administration consoles have been combined into a single console called the Identity Self Service Console (http://host:port/identity). It has three areas: managing your own profile (My Profile), managing requests such as requesting application instances and approving requests (Requests), and general administration tasks such as creating and managing users, roles, organizations, attestation, etc. (Administration).

    OIM 11gR2 also adds a new System Administration console for administrators (http://host:port/sysadmin), which includes some of the Design Console functions in addition to the general administration features.

    Application Instances

    An application instance is the object that is provisioned to a user. Application instances are published to the catalog, and users request them from there.
    · In OIM 11gR2, resources and entitlements are bundled into an application instance, which a user can select and request from the catalog.
    · An application instance is a combination of an IT Resource and a Resource Object (RO). You cannot create another application instance with the same RO and IT Resource if that pair already exists for some other application instance; at least one of the two must have a different name.
    · If you want the users of a particular organization to be able to request an application instance through the catalog, the application instance must be attached to that organization.
    · An application instance can be associated with multiple organizations.
    · An application instance can also have entitlements associated with it. Entitlements can include roles/groups or responsibilities.
    · Application instances are published to the catalog by the "Catalog Synchronization Job" scheduled task.
    · An application instance can have a child/parent relationship, where the child application instance inherits all attributes of the parent application instance.

    Important point to remember about application instances: if you delete an application instance in OIM 11gR2 and then create a new one with the same name, OIM will not allow it. It throws an error saying an application instance already exists with the same Resource Object and IT Resource. This happens because some references to the deleted application instance are not removed from OIM. To completely delete an application instance from OIM, you must:
    1. Delete the application instance from the sysadmin console.
    2. Run the Application Instance Post Delete Processing Job in Revoke/Delete mode.
    3. Run the Catalog Synchronization Job.
    Once done, you should be able to create a new application instance with the previous RO and IT Resource names.

    Catalog

    The catalog allows users to request roles, application instances, and entitlements in an application.
    · Catalog items – roles, application instances, and entitlements that can be requested via the catalog are called catalog items.
    · Detailed information – the attributes of a catalog item.
    · Category – each catalog item is associated with one and only one category. Catalog administrators can provide a value for each catalog item.
    · Tags – search keywords that help when searching the catalog. When users search the catalog, the search is performed against the tags. To define a tag, go to Catalog, search for the resource, select the resource, and update the Tag field with a custom search keyword.

    Tags are of three types:
    a) Auto-generated tags: the catalog synchronization process auto-tags the catalog item using the item type, item name, and item display name.
    b) User-defined tags: additional keywords entered by the catalog administrator.
    c) Arbitrary tags: if a metadata field is marked as searchable when it is defined, it also becomes part of the tags.

    Sandbox

    The sandbox is a new feature introduced in OIM 11gR2. It serves as a temporary development environment for UI customizations so that they do not affect other users before they are published and merged into the existing OIM UI. All UI customizations should be done inside a sandbox; this ensures that your changes do not affect other users until you have finalized them and the customization is complete. Once UI customization is complete, the sandbox must be published for the customizations to be merged into the existing UI and become available to other users. Creating and activating a sandbox is mandatory for customizing the UI; without an active sandbox, OIM does not allow you to customize any page.
    a) Before you perform any UI customization activity in OIM (such as creating/modifying forms, custom attributes, creating application instances, or adding roles/attributes to the catalog), you must create a sandbox and activate it.
    b) You can create multiple sandboxes in OIM, but only one sandbox can be active at any given time.
    c) You can export/import a sandbox to move the changes from one environment to another.

    Creating a sandbox: log in to Identity Self Service (/identity) or System Administration (/sysadmin), click the "Sandboxes" link at the top right, and then click Create Sandbox.

    Publishing a sandbox: before you publish a sandbox, it is recommended to back up MDS. Use Enterprise Manager (/em) to back up MDS by following the steps below.

    Creating an MDS backup:
    1. Log in to Oracle Enterprise Manager as the administrator.
    2. On the landing page, click oracle.iam.console.identity.self-service.ear(V2.0).
    3. From the Application Deployment menu at the top, select MDS configuration.
    4. Under Export, select the "Export metadata documents to an archive on the machine where this web browser is running" option, and then click Export. All the metadata is exported in a ZIP file.

    Creating Password Policies through the Admin Console

    In 11gR1 and previous versions, password policies could be created and applied only via the OIM Design Console. From OIM 11gR2 onwards, password policies can be created and assigned using the Admin Console as well.

    Read the article

  • How to Use the Signature Editor in Outlook 2013

    - by Lori Kaufman
    The Signature Editor in Outlook 2013 allows you to create a custom signature from text, graphics, or business cards. We will show you how to use the various features of the Signature Editor to customize your signatures.

    To open the Signature Editor, click the File tab and select Options on the left side of the Account Information screen. Then, click Mail on the left side of the Options dialog box and click the Signatures button. For more details, refer to one of the articles mentioned above.

    Changing the font for your signature is pretty self-explanatory. Select the text for which you want to change the font and select the desired font from the drop-down list. You can also set the justification (left, center, right) for each line of text separately. The drop-down list that reads Automatic by default allows you to change the color of the selected text. Click OK to accept your changes and close the Signatures and Stationery dialog box.

    To see your signature in an email, click Mail on the Navigation Bar. Click New Email on the Home tab. The Message window displays and your default signature is inserted into the body of the email.

    NOTE: You shouldn't use fonts that are not common in your signatures. In order for the recipient to see your signature as you intended, the font you choose also needs to be installed on the recipient's computer. If the font is not installed, the recipient would see a different font, the wrong characters, or even placeholder characters, which are empty square boxes.

    Close the Message window using the File tab or the X button in the upper right corner of the Message window. You can save it as a draft if you want, but it's not necessary.

    If you decide to use a font that is not common, a better way to do so would be to create a signature as an image, or logo. Create your image or logo in an image editing program, making it the exact size you want to use in your signature. Save the image in a file size as small as possible. The .jpg format works well for pictures, the .png format works well for detailed graphics, and the .gif format works well for simple graphics. The .gif format generally produces the smallest files.

    To insert an image in your signature, open the Signatures and Stationery dialog box again. Either delete the text currently in the editor, if any, or create a new signature. Then, click the image button on the editor's toolbar. On the Insert Picture dialog box, navigate to the location of your image, select the file, and click Insert.

    If you want to insert an image from the web, you must enter the full URL for the image in the File name edit box (instead of the local image filename). For example, http://www.somedomain.com/images/signaturepic.gif. If you want to link to the image at the specified URL, you must also select Link to File from the Insert drop-down list to maintain the URL reference.

    The image is inserted into the Edit signature box. Click OK to accept your changes and close the Signatures and Stationery dialog box. Create a new email message again. You'll notice the image you inserted into the signature displays in the body of the message. Close the Message window using the File tab or the X button in the upper right corner of the Message window.

    You may want to put a link to a webpage or an email link in your signature. To do this, open the Signatures and Stationery dialog box again. Enter the text to display for the link, highlight the text, and click the Hyperlink button on the editor's toolbar.

    On the Insert Hyperlink dialog box, select the type of link from the list on the left and enter the webpage, email, or other type of address in the Address edit box. You can change the text that will display in the signature for the link in the Text to display edit box. Click OK to accept your changes and close the dialog box.

    The link displays in the editor with the default blue, underlined text. Click OK to accept your changes and close the Signatures and Stationery dialog box. Here's an example of an email message with a link in the signature. Close the Message window using the File tab or the X button in the upper right corner of the Message window.

    You can also insert your contact information into your signature as a Business Card. To do so, click Business Card on the editor's toolbar. On the Insert Business Card dialog box, select the contact you want to insert as a Business Card. Select a size for the Business Card image from the Size drop-down list. Click OK. The Business Card image displays in the Signature Editor. Click OK to accept your changes and close the Signatures and Stationery dialog box.

    When you insert a Business Card into your signature, the Business Card image displays in the body of the email message and a .vcf file containing your contact information is attached to the email. This .vcf file can be imported into programs like Outlook that support this format. Close the Message window using the File tab or the X button in the upper right corner of the Message window.

    You can also insert your Business Card into your signature without the image or without the .vcf file attached. If you want to provide recipients your contact info in a .vcf file, but don't want to attach it to every email, you can upload the .vcf file to a location on the internet and add a link to the file, such as "Get my vCard," in your signature.

    NOTE: If you want to edit your business card, such as applying a different template to it, you must select a different View other than People for your Contacts folder so you can open the full contact editing window.

    Read the article

  • Integrating Oracle Hyperion Smart View Data Queries with MS Word and Power Point

    - by Andreea Vaduva
    Most Smart View users probably appreciate that they can use just one add-in to access data from the different sources they might work with, like Oracle Essbase, Oracle Hyperion Planning, Oracle Hyperion Financial Management and others. But not all of them are aware of the options to integrate data analyses not only in Excel, but also in MS Word or Power Point. While in the past, copying and pasting single numbers or tables from a recent analysis in Excel made the pasted content a static snapshot, copying so-called Data Points now creates dynamic, updateable references to the data source. It also provides additional nice features, which can make life easier and less stressful for Smart View users.

    So, how does this option work: after building an ad-hoc analysis with Smart View as usual in an Excel worksheet, any area including data cells/numbers from the database can be highlighted in order to copy data points – even single data cells only.

    TIP: It is not necessary to highlight and copy the row or column descriptions.

    Next, from the Smart View ribbon select Copy Data Point. Then switch to the Word or Power Point document into which the selected content should be copied. Note that in these Office programs you will find a menu item Smart View; from it select the Paste Data Point icon. The copied details from the Excel report will be pasted, but showing #NEED_REFRESH in the data cells instead of the original numbers. After clicking the Refresh icon on the Smart View menu the data will be retrieved and displayed. (Maybe at that moment a login window pops up and you need to provide your credentials.)

    It works in the same way if you just copy one single number without any row or column descriptions, for example in order to incorporate it into a continuous text. From now on, for any subsequent updates of the data shown in your documents you only need to refresh the data by clicking the Refresh button on the Smart View menu, without copying and pasting the content again.

    As you might realize when trying out this feature on your own, there won't be any Point of View shown in the Office document. Also, you have seen in the example where only a single data cell was copied that there aren't any member names or row/column descriptions copied, which are usually required in an ad-hoc report in order to exactly define where data comes from or how data is queried from the source. Well, these definitions are not visible, but they are transferred to the Word or Power Point document as well. They are stored in the background for each individual data cell copied and can be made visible by double-clicking the data cell.

    So for each cell/number the complete connection information is stored along with the exact member/cell intersection from the database. And that's not all: you now have the chance to exchange the members originally selected in the Point of View (POV) in the Excel report. By selecting the Manage POV option from the Smart View menu in Word or Power Point, the POV Manager – Queries window opens. You can now change your selection for each dimension from the original POV either by double-clicking the dimension member in the lower right box under POV, or by selecting the Member Selector icon on the top right hand side of the window.

    After confirming your changes you need to refresh your document again. Be aware that this will update all (!) numbers taken from one and the same original Excel sheet, even if they appear in different locations in your Office document, reflecting your recent changes in the POV.

    TIP: Build your original report in a way that dimensions you might want to change from within Word or Power Point are placed in the POV.

    And there is another really nice feature I wouldn't like to miss mentioning: using Dynamic Data Points in the way described above, you will never miss or need to search again for your original Excel sheet from which values were taken and copied as data points into an Office document, because from even only one single data cell Smart View is able to recreate the entire original report content with just a few clicks. Select one of the numbers from within your Word or Power Point document by double-clicking, then select the Visualize in Excel option from the Smart View menu. Excel will open and Smart View will rebuild the entire original report, including POV settings, and retrieve all data from the most recent actual state of the database. (It might be necessary to provide your credentials before data is displayed.) However, in order to make this work, an active online connection to your databases on the server is necessary, along with at least read access to the retrieved data. But apart from this, your newly built Excel report is fully functional for ad-hoc analysis and can be used in the common way for drilling, pivoting and all the other known functions and features.

    So much for embedding Dynamic Data Points into Office documents and linking them back into Excel worksheets. You can apply this in the described way with ad-hoc analyses directly on Essbase databases or using Hyperion Planning and Hyperion Financial Management ad-hoc web forms.

    If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning, stay tuned for coming articles or check our training courses and web presentations. You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of this page) or in the OU Learning Paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: [email protected].

    About the author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007, where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Read the article

  • CodePlex Daily Summary for Friday, August 22, 2014

    CodePlex Daily Summary for Friday, August 22, 2014

    Popular Releases

    QuickMon: Version 3.22: This release adds two important changes: 1. config variables at the monitor pack level (global to the entire monitor pack for all Collectors); 2. the QuickMon (Windows) service now automatically reloads monitor packs that have been changed since it was started, so you don't have to restart the service for changes to take effect.
    SSIS ReportGeneratorTask: ReportGenerator Task 1.8: New version of the SSIS Report Generator Task that supports SQL Server 2008, 2012 and 2014. In addition to minor bug fixes, Multi-Value Parameters and Execution Information were integrated. The complete variable and parameter assignment is now a string and can be set dynamically.
    Corefig for Windows Server 2012 Core and Hyper-V Server 2012: Corefig 1.1.2 ISO: Fixes: updated Hyper-V scripts to use version 2 of the WMI tree; updated the Hyper-V check for saved VMs to look for the proper identifier; fixed text issues with the licensing tab (thanks to briangw for rooting this problem out). Enhancements: new (and improved) version number in Corefig.psd1.
    Outlook 2013 Backup Add-In: Outlook Backup Add-In 1.3: Changelog for the new version: added a button in the config window to reset the last backup time (this will trigger the backup after closing Outlook); minimum interval set to 0 (backup at each closing of Outlook); catch exception when a data store entry is corrupt; added two parameters (prefix and suffix) to automatically rename the backup file; updated VSTO runtime to 10.0.50325; upgraded project to Visual Studio 2013; added optional command to run after backup (e.g. pack backup files, ...); Add...
    babelua: 1.6.7.0: V1.6.7.0 - 2014.8.21. New feature: adds a file search window (Ctrl+1 or Alt+L), like the file search in VC Assistant. Stability improvements: performance improvement when BabeLua loads/unloads; performance improvement when the debugger loads Lua files.
    File Explorer for WPF: FileExplorer3_20August2014: Please see the Aug 14 update.
    Open NFe: RDI Open NFe 3.0 (alpha): Update for the NFe 3.10 layout.
    ODBC Connect: v1.0: ODBC Connect executables for both 32-bit and 64-bit ODBC data sources.
    MSSQL Deployment Tool: Microsoft SQL Deploy Tool v1.3.1: MicrosoftSqlDeployTool v1.3.1.38348. What's changed? Updated namespace and assembly name; bug fixing.
    SharePoint 2013 Search Query Tool: SharePoint 2013 Search Query Tool v2.1: Layout improvements; bug fixes; stores auth method and user name; moved experimental settings to the Advanced box.
    CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.2.2.41183 Alpha: This alpha of the CtrlAltStudio Viewer provides some preliminary Oculus Rift DK2 support. For more details, see the release notes: http://ctrlaltstudio.com/viewer/release-notes/1-2-2-41183-alpha. Support info: http://ctrlaltstudio.com/viewer/support. Privacy policy: http://ctrlaltstudio.com/viewer/privacy. Disclaimer: this software is not provided or supported by Linden Lab, the makers of Second Life.
    HDD Guardian: HDD Guardian 0.6.1: New: package now includes smartctl 6.3. Removed: standard notification e-mail; now you have to set your mail server to send e-mail alerts. Bugfixes: USB detection error; custom e-mail server settings issue; bottom panel displays a wrong ATA error count.
    VG-Ripper & PG-Ripper: VG-Ripper 2.9.62: Changes. NEW: added support for 'MadImage.org', 'ImgSpot.org', 'ImgClick.net', 'Imaaage.com', 'Image-Bugs.com', 'Pictomania.org', 'ImgDap.com' and 'FileSpit.com' links. FIXED: 'ImgSee.me' links.
    Magick.NET: Magick.NET 7.0.0.0001: Magick.NET linked with ImageMagick 7-Beta.
    CMake Tools for Visual Studio: CMake Tools for Visual Studio 1.2: This release adds the following new features and bug fixes from CMake Tools for Visual Studio 1.1: added support for CMake 3.0; added support for word completion; added IntelliSense support for the CMAKE_HOST_SYSTEM_INFORMATION command; fixed syntax highlighting for tokens beginning with escape sequences; fixed an issue uninstalling CMake Tools for Visual Studio after Visual Studio has been uninstalled.
    GW2 Personal Assistant Overlay: GW2 Personal Assistant Overlay 1.1: Overview: 1.1 is the second 'stable' release of the GW2 Personal Assistant Overlay. This version includes just a couple of very minor features and some minor bug fixes. For details regarding installation, setup, and general use, see Documentation. Note: if you were using a previous version, you will probably want to copy over the following user settings files: GW2PAO.DungeonSettings.xml, GW2PAO.EventSettings.xml, GW2PAO.WvWSettings.xml, GW2PAO.ZoneCompletionSettings.xml. New features: added new "No...
    Fluentx: Fluentx v1.5.3: Added a few more extension methods.
    fastJSON: v2.1.2: 2.1.2 - bug fix for circular references.
    JPush.NET: JPush Server SDK 1.2.1 (For JPush V3): Assembly: 1.2.1.24728. JPush REST API version: v3. JPush documentation reference. .NET Framework: v4.0 or above. Sample class: JPushClientV3. August 15th, 2014.
    SEToolbox: SEToolbox 01.043.008 Release 1: Changed ship/station names to use the new DisplayName instead of Beacon/Antenna. Fixed issue with updated SE binaries 01.043.018 using new Voxel Material definitions.

    New Projects

    1thManage: GDT for everyone.
    CreateProjectOnCodePlex: This is the first project for CoderCamps.
    HEAD FIRST C# LAB 1: A DAY AT THE RACES: This has been provided for educational purposes and general discussion to improve coding practices associated with the resources detailed within Head First C#.
    Introduce Audit logging to your EF application using Repository & Unit of Work: Introduce auditing in your application that uses Entity Framework by utilizing the Repository and Unit of Work design patterns.
    License Registration (C++): Allows creating a demo version and activating or deactivating a module.
    MS Word SharepointWiki Plugin: The scope of the plugin is to enable posting to a SharePoint wiki from within MS Word with formatted text and images.
    Send My Zip: This app will help you send files that were zipped and then send an email with the password information. This project is currently in setup mode and only available
    winhttp: This is a project for HTTP/HTTPS download.
    Wix Builder: WixBuilder focuses on easily generating a WiX script from a project output, then compiling and linking it into an MSI installer using the WiX Toolset.
    XiamiSig

    Read the article

  • More Great Improvements to the Windows Azure Management Portal

    - by ScottGu
    Over the last 3 weeks we've released a number of enhancements to the new Windows Azure Management Portal.  These new capabilities include:
    · Localization support for 6 languages
    · Operation log support
    · Support for SQL Database metrics
    · Virtual Machine enhancements (quick create Windows + Linux VMs)
    · Web Site enhancements (support for creating sites in all regions, private GitHub repo deployment)
    · Cloud Service improvements (deploy from storage account, configuration support of dedicated cache)
    · Media Services enhancements (upload, encode, publish, stream all from within the portal)
    · Virtual networking usability enhancements
    · Custom CNAME support with Storage Accounts
    All of these improvements are now live in production and available to start using immediately.  Below are more details on them.

    Localization Support
    The Windows Azure Portal now supports 6 languages – English, German, Spanish, French, Italian and Japanese. You can easily switch between languages by clicking on the Avatar bar on the top right corner of the Portal. Selecting a different language will automatically refresh the UI within the portal in the selected language.

    Operation Log Support
    The Windows Azure Portal now supports the ability for administrators to review the "operation logs" of the services they manage – making it easy to see exactly what management operations were performed on them.  You can query for these by selecting the "Settings" tab within the Portal and then choosing the "Operation Logs" tab within it.  This displays a filter UI that enables you to query for operations by date and time. As of the most recent release we now show logs for all operations performed on Cloud Services and Storage Accounts.  You can click on any operation in the list and click the "Details" button in the command bar to retrieve detailed status about it.  This now makes it possible to retrieve details about every management operation performed. In future updates you'll see us extend the operation log capability to apply to all Windows Azure Services – which will enable great post-mortem and audit support.

    Support for SQL Database Metrics
    You can now monitor the number of successful connections, failed connections and deadlocks in your SQL databases using the new "Dashboard" view provided on each SQL Database resource. Additionally, if the database is added as a "linked resource" to a Web Site or Cloud Service, monitoring metrics for the linked SQL database are shown along with the Web Site or Cloud Service metrics in the dashboard. This helps with viewing and managing aggregated information across both resources in your application.

    Enhancements to Virtual Machines
    The most recent Windows Azure Portal release brings with it some nice usability improvements to Virtual Machines, including an integrated Quick Create experience for Windows and Linux VMs. Creating a new Windows or Linux VM is now easy using the new "Quick Create" experience in the Portal. In addition to Windows VM templates you can also now select Linux image templates in the quick create UI. This makes it incredibly easy to create a new Virtual Machine in only a few seconds.

    Enhancements to Web Sites
    Prior to this past month's release, users were forced to choose a single geographical region when creating their first site.  After that, subsequent sites could only be created in that same region. This restriction has now been removed, and you can now create sites in any region at any time and have up to 10 free sites in each supported region. One of the new regions we've recently opened up is the "East Asia" region.  This allows you to now deploy sites to North America, Europe and Asia simultaneously.

    Private GitHub Repository Support
    This past week we also enabled Git based continuous deployment support for Web Sites from private GitHub and BitBucket repositories (previous to this you could only enable this with public repositories).

    Enhancements to Cloud Services Experience
    The most recent Windows Azure Portal release brings with it some nice usability improvements to Cloud Services:

    Deploy a Cloud Service from a Windows Azure Storage Account
    The Windows Azure Portal now supports deploying an application package and configuration file stored in a blob container in Windows Azure Storage. The ability to upload an application package from storage is available when you custom create, or upload to, or update a cloud service deployment. To upload an application package and configuration, create a Cloud Service, then select the file upload dialog, and choose to upload from a Windows Azure Storage Account. To upload an application package from storage, click the "FROM STORAGE" button and select the application package and configuration file to use from the new blob storage explorer in the portal.

    Configure Windows Azure Caching in a caching enabled cloud service
    If you have deployed the new dedicated cache within a cloud service role, you can also now configure the cache settings in the portal by navigating to the configuration tab for your Cloud Service deployment. The configuration experience is similar to the one in Visual Studio when you create a cloud service and add a caching role.  The portal now allows you to add or remove named caches and change the settings for the named caches – all from within the Portal and without needing to redeploy your application.

    Enhancements to Media Services
    You can now upload, encode, publish, and play your video content directly from within the Windows Azure Portal.  This makes it incredibly easy to get started with Windows Azure Media Services and perform common tasks without having to write any code. Simply navigate to your media service and then click on the "Content" tab.  All of the media content within your media service account will be listed here. Clicking the "upload" button within the portal now allows you to upload a media file directly from your computer. This will cause the video file you chose from your local file-system to be uploaded into Windows Azure.  Once uploaded, you can select the file within the content tab of the Portal and click the "Encode" button to transcode it into different streaming formats. The portal includes a number of pre-set encoding formats that you can easily convert media content into. Once you select an encoding and click the ok button, Windows Azure Media Services will kick off an encoding job that will happen in the cloud (no need for you to stand up or configure a custom encoding server).  When it's finished, you can select the video in the "Content" tab and then click PUBLISH in the command bar to set up an origin streaming endpoint for it. Once the media file is published you can point apps against the public URL and play the content using Windows Azure Media Services – no need to set up or run your own streaming server. You can also now select the file and click the "Play" button in the command bar to play it using the streaming endpoint directly within the Portal. This makes it incredibly easy to try out and use Windows Azure Media Services and test out an end-to-end workflow without having to write any code.  Once you test things out you can of course automate it using script or code – providing you with an incredibly powerful Cloud Media platform that you can use.

    Enhancements to Virtual Network Experience
    Over the last few months, we have received feedback on the complexity of the Virtual Network creation experience. With these most recent Portal updates, we have added a Quick Create experience that makes the creation experience very simple. All that an administrator now needs to do is to provide a VNET name, choose an address space and the size of the VNET address space. They no longer need to understand the intricacies of the CIDR format or walk through a 4-page wizard to create a VNET / subnet. This makes creating virtual networks really simple. The portal also now has a "Register DNS Server" task that makes it easy to register DNS servers and associate them with a virtual network.

    Enhancements to Storage Experience
    The portal now lets you register custom domain names for your Windows Azure Storage Accounts.  To enable this, select a storage resource, go to the CONFIGURE tab for the storage account, and then click MANAGE DOMAIN on the command bar. Clicking "Manage Domain" will bring up a dialog that allows you to register any CNAME you want.

    Summary
    The above features are all now live in production and available to use immediately.  If you don't already have a Windows Azure account, you can sign up for a free trial and start using them today.  Visit the Windows Azure Developer Center to learn more about how to build apps with it. One of the other cool features that is now live within the portal is our new Windows Azure Store – which makes it incredibly easy to try and purchase developer services from a variety of partners.  It is an incredibly awesome new capability – and something I'll be doing a dedicated post about shortly.
    Hope this helps,
    Scott
    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Setting useLegacyV2RuntimeActivationPolicy At Runtime

    - by Reed
    Version 4.0 of the .NET Framework included a new CLR which is almost entirely backwards compatible with the 2.0 version of the CLR.  However, by default, mixed-mode assemblies targeting .NET 3.5sp1 and earlier will fail to load in a .NET 4 application.  Fixing this requires setting useLegacyV2RuntimeActivationPolicy in your app.config for the application.  While there are many good reasons for this decision, there are times when this is extremely frustrating, especially when writing a library.  As such, there are (rare) times when it would be beneficial to set this in code, at runtime, as well as verify that it's working correctly prior to receiving a FileLoadException.

    Typically, loading a pre-.NET 4 mixed mode assembly is handled simply by changing your app.config file, and including the relevant attribute in the startup element:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <startup useLegacyV2RuntimeActivationPolicy="true">
            <supportedRuntime version="v4.0"/>
          </startup>
        </configuration>

    This causes your application to run correctly, and load the older, mixed-mode assembly without issues. For full details on what's happening here and why, I recommend reading Mark Miller's detailed explanation of this attribute and the reasoning behind it.

    Before I show any code, let me say: I strongly recommend using the official approach of using app.config to set this policy. That being said, there are (rare) times when, for one reason or another, changing the application configuration file is less than ideal. While app.config is the supported approach to handling this issue, the CLR Hosting API includes a means of setting this programmatically via the ICLRRuntimeInfo interface.  Normally, this is used if you're hosting the CLR in a native application in order to set this, at runtime, prior to loading the assemblies.  However, the F# Samples include a nice trick showing how to load this API and bind this policy at runtime.  This was required in order to host the Managed DirectX API, which is built against an older version of the CLR.

    This is fairly easy to port to C#.  Instead of a direct port, I added a small addition: by trapping the COM exception received if the policy cannot be bound (which will occur if the 2.0 CLR is already bound), I also allow a runtime check of whether this property was set up properly:

        using System;
        using System.Runtime.CompilerServices;
        using System.Runtime.InteropServices;

        public static class RuntimePolicyHelper
        {
            public static bool LegacyV2RuntimeEnabledSuccessfully { get; private set; }

            static RuntimePolicyHelper()
            {
                ICLRRuntimeInfo clrRuntimeInfo =
                    (ICLRRuntimeInfo)RuntimeEnvironment.GetRuntimeInterfaceAsObject(
                        Guid.Empty,
                        typeof(ICLRRuntimeInfo).GUID);
                try
                {
                    clrRuntimeInfo.BindAsLegacyV2Runtime();
                    LegacyV2RuntimeEnabledSuccessfully = true;
                }
                catch (COMException)
                {
                    // This occurs with an HRESULT meaning
                    // "A different runtime was already bound to the legacy CLR version 2 activation policy."
                    LegacyV2RuntimeEnabledSuccessfully = false;
                }
            }

            [ComImport]
            [InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
            [Guid("BD39D1D2-BA2F-486A-89B0-B4B0CB466891")]
            private interface ICLRRuntimeInfo
            {
                void xGetVersionString();
                void xGetRuntimeDirectory();
                void xIsLoaded();
                void xIsLoadable();
                void xLoadErrorString();
                void xLoadLibrary();
                void xGetProcAddress();
                void xGetInterface();
                void xSetDefaultStartupFlags();
                void xGetDefaultStartupFlags();

                [MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
                void BindAsLegacyV2Runtime();
            }
        }

    Using this, it's possible to not only set this at runtime, but also verify, prior to loading your mixed mode assembly, whether this will succeed.

    In my case, this was quite useful – I am working on a library purely for internal use which uses a numerical package that is supplied with both a completely managed as well as a native solver.  The native solver uses a CLR 2 mixed-mode assembly, but is dramatically faster than the pure managed approach.  By checking RuntimePolicyHelper.LegacyV2RuntimeEnabledSuccessfully at runtime, I can decide whether to enable the native solver, and only do so if I successfully bound this policy.

    There are some tricks required here. To enable this sort of fallback behavior, you must make these checks in a type that doesn't cause the mixed mode assembly to be loaded.  In my case, this forced me to encapsulate the library I was using entirely in a separate class, perform the check, then pass through the required calls to that class.  Otherwise, the library will load before the hosting process gets enabled, which in turn will fail. This code will also, of course, try to enable the runtime policy before the first time you use this class – which typically means just before the first time you check the boolean value.  As a result, checking this early on in the application is more likely to allow it to work. Finally, if you're using a library, this has to be called prior to the 2.0 CLR loading.  This will cause it to fail if you try to use it to enable this policy in a plugin for most third party applications that don't have their app.config set up properly, as they will likely have already loaded the 2.0 runtime.

    As an example, take a simple audio player.  The code below shows how this can be used to properly, at runtime, only use the "native" API if this will succeed, and fall back (or raise a nicer exception) if it will fail:

        public class AudioPlayer
        {
            private IAudioEngine audioEngine;

            public AudioPlayer()
            {
                if (RuntimePolicyHelper.LegacyV2RuntimeEnabledSuccessfully)
                {
                    // This will load a CLR 2 mixed mode assembly
                    this.audioEngine = new AudioEngineNative();
                }
                else
                {
                    this.audioEngine = new AudioEngineManaged();
                }
            }

            public void Play(string filename)
            {
                this.audioEngine.Play(filename);
            }
        }

    Now, the warning: this approach works, but I would be very hesitant to use it in public facing production code, especially for anything other than initializing your own application.  While this should work in a library, using it has a very nasty side effect: you change the runtime policy of the executing application in a way that is very hidden and non-obvious.
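    To act on the advice above about checking the policy early, here is a minimal sketch (not from the original post) of forcing the check at application startup, before any code path can load a CLR 2 mixed-mode assembly. It assumes the RuntimePolicyHelper class shown earlier; the Program class, RunApplication method, and console warning are hypothetical illustrations.

        using System;

        static class Program
        {
            static void Main()
            {
                // Touching the property runs RuntimePolicyHelper's static constructor,
                // which attempts to bind the legacy v2 activation policy immediately.
                if (!RuntimePolicyHelper.LegacyV2RuntimeEnabledSuccessfully)
                {
                    // Hypothetical fallback: warn and continue with managed-only features.
                    Console.WriteLine(
                        "Legacy v2 runtime activation could not be enabled; " +
                        "CLR 2 mixed-mode assemblies will fail to load.");
                }

                RunApplication(); // hypothetical entry point for the rest of the application
            }

            static void RunApplication()
            {
                // Normal application startup continues here.
            }
        }

    Doing this in Main, rather than waiting until the first feature that needs the native assembly, minimizes the chance that some other code path binds the 2.0 CLR first.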

    Read the article

  • Are Chromebooks the New Netbooks, and What Does That Mean?

    - by Chris Hoffman
    Netbooks — small, cheap, slow laptops — were once very popular. They fell out of favor — people bought them because they seemed cheap and portable, but the actual experience was lackluster. Most netbooks now sit unused. Windows netbooks have vanished from stores today, but there's a new super-cheap laptop — the Chromebook. Chromebook sales numbers are impressive, but their usage statistics tell a different story. Are Chromebooks just the new netbook?

    The Problem With Netbooks
    Netbooks seemed appealing, especially in an age before tablets and lightweight ultrabooks. You could buy a netbook for $200 or so and have a portable device that let you get on the Internet. The name "netbook" spelled that out — it was a portable device for getting on the 'net. They weren't really that great. The original netbook was a lightweight Asus Eee PC that ran Linux alone and had a small amount of fast flash storage. Netbooks eventually ran heavier Windows XP operating systems — Windows Vista was out, but it was just too bloated to run on netbooks. Manufacturers added slow magnetic hard drives, bloatware, and even DVD drives! They couldn't run most Windows software very well. The build quality was poor and their keyboards were tiny and cramped. People liked the idea of a lightweight device that let them get on the Internet and loved the cheap price, but the actual experience wasn't great.

    Chromebook Sales
    Chromebook sales numbers seem surprisingly high. NPD reported that Chromebooks were 21% of all notebooks sold in the US in 2013. If you combine laptop and tablet sales into a single statistic, Chromebooks were 9.6% of all those devices sold. That's 2/3 as many Chromebooks sold as iPads in the US! Of Amazon's best-selling laptop computers, two of the top three are Chromebooks. These definitely look like successful products. Unlike netbooks, Chromebooks are taking off in a big way in the education market. Many schools are buying Chromebooks for their students instead of more expensive Windows laptops. They're easier to manage and lock down than Windows laptops, but — more importantly for cash-strapped schools — they're very cheap. Netbooks never had this sort of momentum in schools.

    Chromebook Usage Statistics
    Here's where the rosy picture of Chromebooks starts to become more realistic. StatCounter's browser usage statistics show how widely used different operating systems are. For example, Windows 7 has the highest share with 35.71% of web activity in April, 2014. The chart doesn't even show Chrome OS at all, although there is an "Other" number near the bottom. Click the Download Data link to download a CSV file and we can view more detailed information. Chrome OS only accounted for 0.38% of web usage in April, 2014. Desktop Linux, which people often shrug at, accounted for 1.52% in the same month. To its credit, Chrome OS usage has increased. Chromebooks were widely mocked back in November, 2013 when the sales numbers came out. After all, they only accounted for 0.11% of web usage globally in November, 2013! But Chrome OS numbers have been improving:
    Nov, 2013: 0.11%
    Dec, 2013: 0.22%
    Jan, 2014: 0.31%
    Feb, 2014: 0.35%
    Mar, 2014: 0.36%
    Apr, 2014: 0.38%
    Chrome OS is climbing, but it's definitely still in the "Other" category. It isn't as high as we'd expect to see it with those types of sales numbers.

    Chromebooks vs. Netbooks
    Chromebooks are more limited devices than traditional PCs. You can do quite a few things, but you have to do it all using Chrome or Chrome apps. Most people won't be enabling developer mode and installing a Linux desktop. You don't have access to the powerful desktop software available for Windows and even Mac OS X. On the other hand, these Chromebooks are less compromised than netbooks in many ways. They come with a lightweight operating system designed for portable, mobile devices. They don't come packed with any bloatware, like the bloatware you'll find on competing Windows PCs and the original netbooks. They're cheaper because the manufacturer doesn't have to pay for a Windows license. There's no need for antivirus software weighing the operating system down. They're larger than the original netbooks, with many of them being 11.6-inches instead of the original 8-inch bodies many older netbooks came with. They have larger, more comfortable keyboards and fast solid-state storage. Really, Chromebooks are what netbooks wanted to be. People didn't buy netbooks to use typical Windows software — they just wanted a lightweight PC. Of course, for many people, the real successor to netbooks is tablets. If all you want is a portable device to throw in a bag so you can get online, maybe a tablet is better.

    Where Does This Leave Chromebooks?
    So, are Chromebooks the new netbooks? It's a bit early to answer that question. Chromebooks are definitely not out of the competition — their sales look good and their usage share is increasing. On the other hand, Chrome OS is still pretty far behind. They're not catching fire like tablets did. Maybe netbooks were just before their time and Chromebooks were what they were always meant to be. Just as Microsoft's Windows XP tablets failed, Windows XP netbooks also failed. Tablets took off with a more refined operating system on better hardware years later. "Netbooks" — or Chromebooks — are now taking off with a more purpose-built operating system on better hardware, too. It's hard to count Chromebooks out because they provide a much better experience than netbooks ever did. If you're one of the people who wants to use old Windows desktop apps on your portable laptop, you may think netbooks were better — but most people don't want that. But maybe people either want a full desktop PC experience or a full mobile tablet experience. Is there a place for a laptop with a keyboard that can only view websites? We'll have to wait and see.

    Image Credit: Kevin Jarret on Flickr, Clive Darra on Flickr, Sean Freese on Flickr

    Read the article

  • Learnings from trying to write better software: Loud errors from the very start

    - by theo.spears
    Microsoft made a very small number of backwards incompatible changes between .NET 1.1 and 2.0, because they wanted to make it as easy and safe as possible to port applications to the new runtime. (Here's a list.) However, one thing they did change was what happens when a background thread fails with an unhandled exception: in .NET 1.1 nothing happened, the thread terminated, and the application continued oblivious. Try the same trick in .NET 2.0 and the entire application, including all threads, will rudely terminate. There are three reasons for this.

    Firstly, if a background thread has crashed, it may have left the entire application in an inconsistent state, in a way that will affect other threads. It's better to terminate the entire application than continue and have the application perform actions based on a broken state, for example take customer orders, or write corrupt files to disk.

    Secondly, during software development, it is far better for errors to be loud and obtrusive. Even if you have unit tests and integration tests (and you should), a key part of ensuring software works properly is to actually try using it, both through systematic testing and through the casual use all software gets from its developers. Subtle errors are easy to miss if you are not actually doing real work using the application; loud errors are obvious.

    Thirdly, and most importantly, even if catching and swallowing exceptions indiscriminately doesn't cause any problems in your application, the presence of unexpected exceptions shows you do not fully understand the behavior of your code. The currently released version of your application may be absolutely correct. However, because your mental model of the behavior is wrong, any future change you make to the program could and probably will introduce critical errors.

    This applies to more than just exceptions causing threads to exit; any unexpected state should make the application blow up in an un-ignorable way. The worst thing you can do is silently swallow errors and continue. And let's be clear, writing to a log file does not count as blowing up in an un-ignorable way.

    This is all simple as long as the call stack only contains your code, but when your functions start to be called by third party or .NET framework code, it's surprisingly easy for exceptions to start vanishing. Let's look at two examples.

    1. Windows Forms drag drop events
    Usually if you throw an exception from a WinForms event handler it will bring up the "application has crashed" dialog with abort and continue options. This is a good default behavior: the error is big and loud, but it is possible for the user to ignore the error and hopefully save their data, if somehow this bug makes it past testing. However, drag and drop are different: throw an exception from one of these handlers and it will just be silently swallowed with no explanation. By the way, it's not just drag and drop events; Timer events do it too. You can research how exceptions are treated in different handlers and code appropriately, but the safest and most user friendly approach is to always catch exceptions in your event handlers and show your own error message. I'll talk about one good approach to handling these exceptions at the end of this post.

    2. SSMS integration for SQL Tab Magic
    A while back I wrote an SSMS add-in called SQL Tab Magic (learn more about the process here). It works by listening to certain SSMS events and remembering what documents are opened and closed. I deployed it internally and it was used for a few months by a number of people without problems, so I was reasonably confident in its quality. Before releasing I made a few cleanups, including introducing error reporting. Bam. A few days later I was looking at over 1,000 error reports in my inbox. It turns out I wasn't handling table designers properly. The exceptions were there, but again SSMS was helpfully swallowing them all for me, so I was blissfully unaware. Had I made my errors loud from the start, I would have noticed these issues long before and fixed them.

    Handling exceptions
    Now that you are systematically catching exceptions throughout your application, you need to do something with them. I've tried three options: log them, alert the user, and automatically send them home.

    There are a few good options for logging in .NET. The most widespread is Apache log4net, which provides a very capable and configurable logging framework. There is also NLog, which has a compatible interface, with a greater emphasis on fluent rather than XML configuration.

    Alerting the user serves two purposes. Firstly, it means they understand their action has failed, so they don't just assume it worked (silent file copy failure is a problem if you then delete the originals) or that they should keep waiting for a background task to complete. Secondly, it means the users can report the bug to your support team, and then you can fix it. This means the message you show the user should contain the information you need as a developer to identify and fix it. And the user will probably just send you a screenshot of the dialog, so it shouldn't be hidden by scroll bars.

    This leads us to the third option, automatically sending error reports home. By automatic I mean with minimal effort on the part of the user, rather than doing it silently behind their backs. The advantage of this is you can send back far more detailed and precise information than you can expect a user to include in an email, and by making it easier to report errors, you make it more likely users will do so.

    We do this using a great tool called SmartAssembly (full disclosure: this is a product made by Red Gate). It captures complete stack traces including the values of all local variables and then allows the user to send all this information back with a single click. We also capture log files to help understand what led up to the error. We then use the free SmartAssembly Sync for Jira to dedupe these reports and raise them as bugs in our bug tracking system.

    The combined effect of loud errors during development and then automatic error reporting once software is deployed allows us to find and fix more bugs, correct misunderstandings of how our software works, and overall is a key piece in delivering higher quality software. However it is no substitute for having motivated, cunning testers in the building – and we're looking to hire more of those too.

    If you found this post interesting you should follow me on twitter.
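    As a rough illustration of the pattern described above (catch in the event handler, log the full details, and surface a loud error to the user), here is a minimal sketch. It is not code from the original post: the form, the ImportDroppedFiles method, and the dialog text are hypothetical, and log4net's ILog/LogManager API is used for the logging step.

        using System;
        using System.Windows.Forms;
        using log4net;

        public partial class MainForm : Form
        {
            // Hypothetical logger for this form (log4net).
            private static readonly ILog Log = LogManager.GetLogger(typeof(MainForm));

            // Drag-drop events swallow exceptions silently, so catch and report explicitly.
            private void MainForm_DragDrop(object sender, DragEventArgs e)
            {
                try
                {
                    ImportDroppedFiles(e); // hypothetical application logic that might throw
                }
                catch (Exception ex)
                {
                    // Log the full details for the support team...
                    Log.Error("Unhandled exception in DragDrop handler", ex);

                    // ...and make the failure loud and visible to the user, with enough
                    // detail that a screenshot of this dialog identifies the bug.
                    MessageBox.Show(this,
                        "Importing the dropped files failed:\n\n" + ex,
                        "Unexpected error",
                        MessageBoxButtons.OK,
                        MessageBoxIcon.Error);
                }
            }

            private void ImportDroppedFiles(DragEventArgs e)
            {
                // Hypothetical work with the dropped data would go here.
                throw new NotImplementedException();
            }
        }

    In a real form the handler would be wired up via this.DragDrop += MainForm_DragDrop with AllowDrop set to true, and the same try/catch wrapper applies equally to Timer.Tick and the other handlers where exceptions vanish.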

    Read the article

  • H1 Visa interview tips–What you must know before attending the interview?

    - by Gopinath
    USA’s H1 visa allows highly qualified professionals from other countries to work in America. Many IT professionals in India aspire to go to USA on H1 and work for their clients. Recently I had a chance to study H1 visa process to help one of my friends and I would like to share what I learned. With the assumption that your H1 petition is approved and you got an interview scheduled at US Embassy for your visa stamping, here are tips you must know before attending the interview Dress Code – Formals Say no to casuals or any fancy dress when you attend the interview. It’s not a party or friends home you are visiting. Consider H1 Visa interview as your job interview and dress up in formals. There is no option B for your, you must be in formals. A plain formal shirt with a matching pant is suggested for men. Tie and Suit would not be required, but if you are a professional at management level you can consider wearing suit. Women can wear either formal Salwar or formal pant-shirt. Avoid heavy jewellery, wear what is must as per your tradition or culture. Body Language -  Smile on your face Your body language reflects what you are and what’s going on in your mind. Don’t be nervous or restless, be relaxed and wear a beautiful smile on your face. A smile is a curve that sets everything straight. When you are called for the interview, greet the interviewer with a beautiful smile. Say Good Morning/Afternoon/Evening depending on time you are visiting them. Whenever appropriate say Thank You. Generally American professionals are very friendly people and they reciprocate for your greetings. Make sure that you make them comfortable to start the interview. Carry original documents in a separate folder I don’t want to talk much about the documents that are required for your H1B interview as it’s big subject on it’s own and it requires a separate post. I assume that your consultant or employer helped you in gathering all the required documents like – petition, DS 160 forms, education & job related documents, resume, interview call letters, client letters, etc. For all the documents you are going to submit at the interview make sure that you have originals in a separate folder.  If required interviewer may ask you show the originals of any of the document you submitted for visa processing. Don’t mix the original documents with the documents you need to submit for interview. Have a separate folder for them. For those who are going to stamping along with their spouse and children, they need to carry few extra original documents like – marriage certificate, marriage photos(30 numbers)/album, birth certificates, passports, education and profession related certificates of the spouse and children. Know your role & responsibilities The interviewer will ask you questions on your roles and responsibilities at client location. Be clear what is your day to day tasks at client place and prepared to face detailed questions on the same. When asked explain clearly and also make sure what you say is inline with what is mentioned in your petition and client invitation letter. At times they may ask you questions specific to the project/technology you are going to work. So doing some homework in this area will help you easily answer the questions. Failing to answer basic questions on your role & responsibilities may result in rejection. 
You work for your Employer at the Client location but NOT FOR THE CLIENT One of the important things to keep in mind is that you work for your employer and you are being deputed to the client location on a work visa.  Your employer is going to be solely responsible for your salary, work, promotion, pay hikes or whatsoever during your stay in the USA. Your client will not be responsible for anything. Let’s say you are employed with Company X in India and they are applying for an H1B for you to work at your client (ex: Microsoft) in the USA; you must keep in mind that Microsoft is not your employer. Microsoft will not pay your salary or be responsible for any employment related activities. Company X will be solely responsible for all your employment related activities. If you don’t get this right and tell the visa interviewer that your client is responsible, then you may get into trouble. Know your client It’s always good to know the clients with whom you are going to work in the USA and their business. If your client is a well-known organisation then you may not get many questions from the interviewer, else you need to be well prepared to provide details like – nature of business, location, size of the organisation, etc.  Get to know the basic details about your client and be confident while providing those details to the interviewer. Also make sure that you never talk about any confidential details of your client’s projects and business. Revealing confidential details of your client may put your job itself at risk. Make sure that your spouse is also in sync with you If you’ve applied for an H4 visa for your spouse along with your H1, make sure that your spouse is in sync with you. Your spouse should also know the basic details of your job, your employer, your client and the location where you will be travelling. Your spouse should also be prepared to answer questions related to marriage, their profession (if working), kids, education, etc. Interviewers will try to assess your spouse’s communication skills, their plans while staying in the USA, and whether they would prefer to work in the USA or not. On H4, which is a dependent visa, your spouse is not allowed to work in the USA and at no point should your spouse show an intention to search for work in the USA. Less luggage, more comfort You would have definitely heard that there are a lot of restrictions on what you can carry along with you to a US Embassy while attending the interview. To be frank, it is an understatement to say there are many restrictions; there are a huge number of them. There are unbelievable restrictions and they are for the safety of everyone. You are not allowed to carry mobile phones, CD/DVDs, USBs, bank cards, cameras, cosmetics, food (except baby food), water, wallets, backpacks, sealed covers, etc. Trust me, most of the things we carry with us regularly every day are not allowed inside. As there are hundreds of restrictions, it is easier to understand what you can carry along with you and carry just those items. Ask your employer/consultant to provide you a checklist of items that you can carry. Most of what you would require is: H1B related documents provided by the employer/consultant, photographs, all original documents supporting your H1B, passports, some cash for your travel expenses (avoid coins), and any important phone numbers/details written on paper (like your cab driver’s number, etc.). If you carry restricted items then you will be stopped at the security checks and will have to find someone who can safely keep all the restricted items for you. 
Due to heavy restrictions in and around the US Embassy you will not find any  place to keep your luggage. So just carry the bare minimum things required so that you feel more comfortable. Useful Links THE U.S. NON IMMIGRANT VISA APPLICATION PROCESS U.S VISA SECURITY REGULATIONS GENERAL FAQS Hope this information is helpful to you and best of luck for your interview. Creative commons Image credit: Flickr/ alexfrance, vinothchandar. hughelectronic, architratan, striatic

    Read the article

  • Not attending the LUGM mini-meetup - 05. Oct 2013

    Not attending a meeting of the LUGM can be fun, too. It's getting a bit of a habit that Ish is organising small gatherings, aka mini-meetups, of the Linux User Group Mauritius/Meta (LUGM) almost every Saturday. There they mainly discuss and talk about various elements of using Linux as one's main operating system and the possibilities you are going to have. On top of that, of course, some tips & tricks about mastering the command line and initial steps in scripting or even writing HTML. In general, it sounds like a good portion of fun and a great spirit of community. Unfortunately, I'm usually quite busy with private and family matters during the weekend and so I had already signalled that I wouldn't be around. Well, at least not physically... But this Saturday a couple of things worked out faster than expected and so I was hanging out on my machine. I made virtual contact via one of Pawan's messages over on Facebook... And somehow that kicked off some kind of online fun around the basic configuration of Apache HTTPd 2.2.x, PHP 5.x and how to improve the overall performance of a newly installed blog based on WordPress. Default configuration files Nitin's website finally came alive and despite the dark theme and the hidden Apple 'fanboy' advertisement I was more interested in the technical situation. As with any new installation there is usually quite some adjustment to be done. And Nitin's page was no exception. Unfortunately, out-of-the-box installations of Apache httpd and PHP are too verbose and expose too much information under the hood. You might think that this isn't really a problem at all; well, think about it again after completely reading this article. First, I checked the HTTP response headers - using either Chrome Developer Tools or the Firefox Web Developer extension - of Nitin's page and based on that I advised him to lower the noise levels a little bit. It's not really necessary that detailed information about the web server software and scripting language be published in every response made. Quite a number of script kiddies and exploits actually check for version specifics prior to an attack. So, removing at least the version details hardens the system a little bit. In particular, I'm talking about these response header values: Server and X-Powered-By. How to achieve that? By tweaking the configuration files... Namely, we are going to look into the following ones: apache2.conf, httpd.conf, .htaccess and php.ini. The above list contains some additional files that I'm talking about in the next paragraphs. Anyway, those are the ones involved. Tweaking Apache Open your favourite text editor and start to modify the apache2.conf. Eventually, you might like to have a quick peek at the file to see whether it is necessary to adjust it or not. Following is a handy combination of commands to get an overview of your active directives: # sudo grep -v '#' /etc/apache2/apache2.conf | grep -v '^$' | less There you keep an eye on those two Apache directives:
ServerSignature Off
ServerTokens Prod
If that's not the case, change them as highlighted above. In order to activate your modifications you have to restart the Apache httpd server. On Debian and Ubuntu you might use apache2ctl for that; on other distributions you might have to use service or run the init scripts again:
# sudo apache2ctl configtest
Syntax OK
# sudo apache2ctl restart
Refresh your website and check the HTTP response header. 
Tweaking PHP5 (a little bit) Next, check your php.ini file with the following statement: # sudo grep -v ';' /etc/php5/apache2/php.ini | grep -v '^$' | less And check the value of expose_php = Off Again, if it's not as highlighted, change it... Some more Apache love Okay, back to Apache: it might also be interesting to improve the situation regarding browser caching and to remove more obsolete information. When you run your website against the usual performance checks like Google Page Speed and Yahoo YSlow you might see those check points with bad grades on a standard, default configuration. Well, this can be fixed easily. Configure entity tags (ETags) ETags are only interesting when you run your websites on a farm of multiple web servers. Removing this data for your static resources is very simple in Apache. As we are going to deal with the HTTP response header information, you have to ensure that Apache is capable of manipulating it. First, check your enabled modules: # sudo ls -al /etc/apache2/mods-enabled/ | grep headers And in case the 'headers' module is not listed, you have to enable it from the available ones: # sudo a2enmod headers Second, check your httpd.conf file (in case it exists): # sudo grep -v '#' /etc/apache2/httpd.conf | grep -v '^$' | less In newer (better said: fresh) installations you might have to create a new configuration file below your conf.d folder with your favourite text editor like so: # sudo nano /etc/apache2/conf.d/headers.conf Then, in order to tweak your HTTP responses, either check for those lines or add them:
Header unset ETag
FileETag None
In case your file doesn't exist or those lines are missing, feel free to create/add them. Afterwards, check your Apache configuration syntax and restart your running instances as already shown above:
# sudo apache2ctl configtest
Syntax OK
# sudo apache2ctl restart
Add Expires headers To improve the loading performance of your website, you should take some care over properly configuring how to leverage the browser's ability to cache certain resources and files. This is done by adding an Expires: value to the HTTP response header. Generally speaking, it is advised that you specify a near-future expiry, read: 1 week or a little bit more, for your static content like JavaScript files or Cascading Style Sheets. One solution to adjust this is to put some instructions into the .htaccess file in the root folder of your web site. Of course, this could also be placed into a more generic location of your Apache installation but honestly, I'd like to keep this at the web site level. Following are some adjustments I'm currently using on this blog site:
# Turn on Expires and set default to 0
ExpiresActive On
ExpiresDefault A0
# Set up caching on media files for 1 year (forever?)
<FilesMatch "\.(flv|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav)$">
ExpiresDefault A29030400
Header append Cache-Control "public"
</FilesMatch>
# Set up caching on media files for 1 week
<FilesMatch "\.(js|css)$">
ExpiresDefault A604800
Header append Cache-Control "public"
</FilesMatch>
# Set up caching on media files for 31 days
<FilesMatch "\.(gif|jpg|jpeg|png|swf)$">
ExpiresDefault A2678400
Header append Cache-Control "public"
</FilesMatch>
As we are editing the .htaccess files, it is not necessary to restart Apache. 
In case your web site doesn't load anymore or you're experiencing an error while trying to restart your httpd, check that the 'expires' module is actually an enabled module:
# ls -al /etc/apache2/mods-enabled/ | grep expires
# sudo a2enmod expires
Of course, the instructions above are not feature complete but I hope that they might provide a better default configuration for your LAMP stack. Resume of the day Within a couple of hours, and while being occupied with an eLearning course on SQL Server 2012, I had some good fun in helping and assisting other LUGM members while they were some kilometers away at Bagatelle. According to other blog articles it seems that Nitin had quite some moments of desperation. Just for the record: at no time was it my intention to either kick his butt or pull his leg. I was simply providing some input based on the lessons I've learned over the last couple of years configuring Apache HTTPd and PHP. Check out the other blogs, too: LUGM mini-meetup... Epic! Superb Saturday Linux Meetup And last but not least, the man himself: The end of a new beginning Cheers, and happy community'ing! Updates Due to our weekly Code & Coffee sessions in the MSCC community, I had a chance to talk to Nitin directly and he showed me the problems on his machine. This led to updating this article, hence the paragraphs on enabling the modules 'headers' and 'expires'.

    Read the article

  • CodePlex Daily Summary for Monday, May 26, 2014

    CodePlex Daily Summary for Monday, May 26, 2014Popular ReleasesClosedXML - The easy way to OpenXML: ClosedXML 0.71.1: More performance improvements. It's faster and consumes less memory.Role Based Views in Microsoft Dynamics CRM 2011: Role Based Views in CRM 2011 and 2013 - 1.1.0.0: Issues fixed in this build: 1. Works for CRM 2013 2. Lookup view not getting blockedSimCityPak: SimCityPak 0.3.1.0: Main New Features: Fixed Importing of Instance Names (get rid of the Dutch translations) Added advanced editor for Decal Dictionaries Added possibility to import .PNG to generate new decals Added advanced editor for Path display entriesSimple Connect To Db: SimpleConnectToDb_v1: SimpleConnectToDb_v1CRM 2011 / CRM 2013 Form Helper: v2014.05.25: v2014.05.25 Added PhoneFormat & PhoneFormatAreaCode v2014.05.24 Initial ReleaseCreate Word documents without MS Word: Release 3.0: Add support for Sections, Sections Headers and Footers and right to left languages.Corporate News App for SharePoint 2013: CorporateNewsApp v1.6.2.0: Important note This version contains a major bug fix about the generic error "Request failed. Unexpected response data from server null" This error occurs on SharePoint Online only, following an update of the Javascript API after May 2014. If you have installed this application manually in your applications company catalog, you can download the CorporateNewsApp.app file in the zip archive and update it manually. If you have installed this application directly from the SharePoint Store, it ...DevOS: DevOS: Plugin-system added Including:DevOS.exe DevOS API.dll Files must be in the some folderTiny Deduplicator: Tiny Deduplicator 1.0.1.0: Increased version number to 1.0.1.0 Moved all options to a separate 'Options' dialog window. Allows the user to specify a selection strategy which will help when dealing with large numbers of duplicate files. Available options are "None," "Keep First," and "Keep Last"C64 Studio: 3.5: Add: BASIC renumber function Add: !PET pseudo op Add: elseif for !if, } else { pseudo op Add: !TRACE pseudo op Add: Watches are saved/restored with a solution Add: Ctrl-A works now in export assembly controls Add: Preliminary graphic import dialog (not fully functional yet) Add: range and block selection in sprite/charset editor (Shift-Click = range, Alt-Click = block) Fix: Expression evaluator could miscalculate when both division and multiplication were in an expression without parenthesisSEToolbox: SEToolbox 01.031.009 Release 1: Added mirroring of ConveyorTubeCurved. Updated Ship cube rotation to rotate ship back to original location (cubes are reoriented but ship appears no different to outsider), and to rotate Grouped items. Repair now fixes the loss of Grouped controls due to changes in Space Engineers 01.030. Added export asteroids. Rejoin ships will merge grouping and conveyor systems (even though broken ships currently only maintain the Grouping on one part of the ship). Installation of this version wi...Player Framework by Microsoft: Player Framework for Windows and WP v2.0: Support for new Universal and Windows Phone 8.1 projects for both Xaml and JavaScript projects. 
See a detailed list of improvements, breaking changes and a general overview of version 2 ADDITIONAL DOWNLOADSSmooth Streaming Client SDK for Windows 8 Applications Smooth Streaming Client SDK for Windows 8.1 Applications Smooth Streaming Client SDK for Windows Phone 8.1 Applications Microsoft PlayReady Client SDK for Windows 8 Applications Microsoft PlayReady Client SDK for Windows 8.1 Applicat...TerraMap (Terraria World Map Viewer): TerraMap 1.0.6: Added support for the new Terraria v1.2.4 update. New items, walls, and tiles Added the ability to select multiple highlighted block types. Added a dynamic, interactive highlight opacity slider, making it easier to find highlighted tiles with dark colors (and fixed blurriness from 1.0.5 alpha). Added ability to find Enchanted Swords (in the stone) and Water Bolt books Fixed Issue 35206: Hightlight/Find doesn't work for Demon Altars Fixed finding Demon Hearts/Shadow Orbs Fixed inst...DotNet.Highcharts: DotNet.Highcharts 4.0 with Examples: DotNet.Highcharts 4.0 Tested and adapted to the latest version of Highcharts 4.0.1 Added new chart type: Heatmap Added new type PointPlacement which represents enumeration or number for the padding of the X axis. Changed target framework from .NET Framework 4 to .NET Framework 4.5. Closed issues: 974: Add 'overflow' property to PlotOptionsColumnDataLabels class 997: Split container from JS 1006: Series/Categories with numeric names don't render DotNet.Highcharts.Samples Updated s...ConEmu - Windows console with tabs: ConEmu 140523 [Alpha]: ConEmu - developer build x86 and x64 versions. Written in C++, no additional packages required. Run "ConEmu.exe" or "ConEmu64.exe". Some useful information you may found: http://superuser.com/questions/tagged/conemu http://code.google.com/p/conemu-maximus5/wiki/ConEmuFAQ http://code.google.com/p/conemu-maximus5/wiki/TableOfContents If you want to use ConEmu in portable mode, just create empty "ConEmu.xml" file near to "ConEmu.exe" Aspose for Apache POI: Missing Features of Apache POI SL - v 1.1: Release contain the Missing Features in Apache POI SL SDK in Comparison with Aspose.Slides for dealing with Microsoft Power Point. What's New ?Following Examples: Managing Slide Transitions Manage Smart Art Adding Media Player Adding Audio Frame to Slide Feedback and Suggestions Many more examples are yet to come here. Keep visiting us. Raise your queries and suggest more examples via Aspose Forums or via this social coding site.PowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.1.3: Added CompressLogs option to the config file. Each Install / Uninstall creates a timestamped zip file with all MSI and PSAppDeployToolkit logs contained within Added variable expansion to all paths in the configuration file Added documentation for each of the Toolkit internal variables that can be used Changed Install-MSUpdates to continue if any errors are encountered when installing updates Implement /Force parameter on Update-GroupPolicy (ensure that any logoff message is ignored) ...WordMat: WordMat v. 1.07: A quick fix because scientific notation was broken in v. 1.06 read more at http://wordmat.blogspot.com????: 《????》: 《????》(c???)??“????”???????,???????????????C?????????。???????,???????????????????????. ??????????????????????????????????;????????????????????????????。Mini SQL Query: Mini SQL Query (1.0.72.457): Apologies for the previous update! 
FK issue fixed and also a template data cache issue.New ProjectsASP.Net MCV4 Simplified Code Samples: This project intended to simplify the same. In this project each task is implemented with minimum lines of code to reduces complicity.Calvin: net???CodeLatino by Latinosoft: A Modified version for codeShow -- Probably taking more than a month.freeasyBackup: A free and easy to use Backup Tool for everyone. Without any cloud restrictions. freeasyExplorer: A free and easy to use File Explorer for everyone.openPDFspeedreader: #spritz #pdfreader #speedreader PDF Editor to Edit PDF Files in your ASP.NET Applications: This sample application allows the users to edit PDF files online using Aspose.Pdf for .NET.SharePoint World Cup 2013: world cup 2014SSAS Long Running Query Performance Helper: This utility helps investigate long running multidimensional or mining queries in discovery, de-parameterization and re-parameterization back to source format.

    Read the article

  • BizTalk: Internals: the Partner Direct Ports and the Orchestration Chains

    - by Leonid Ganeline
    Partner Direct Port is one of the BizTalk hidden gems. It opens up simple ways to implement several messaging patterns. This article is based on Kevin Lam’s blog article. That article is pretty detailed but it still leaves several unclear pieces. So I have created a sample and will show how it works from different perspectives. Requirements We should create an orchestration chain where the messages are routed from the first stage to the second stage. The messages should not be modified. All messages have the same message type. Common artifacts Source code can be downloaded here. Interestingly, all orchestrations use only one port type. This is possible because all ports are one-way ports and use only one operation. I have added a B orchestration. It helps to test the sample, showing all test messages in the channel. The Receive shape Filter is empty. A Receive Port (R_Shema1Direct) is a plain Direct Port. As you can see, the subscription expression of this direct port has only one part, the MessageType for our test schema: The Filter is empty but, as you know, a link from the Receive shape to the Port creates this MessageType expression. I use only one Physical Receive File port to send a message to all processes. Each orchestration outputs a Trace.WriteLine(“<Orchestration Name>”). Forward Binding This sample has three orchestrations: A_1, A_21 and A_22. A_1 is a sender, A_21 and A_22 are receivers. Here is a subscription of the A_1 orchestration: It has two parts. A MessageType. The same was true for the B orchestration. A ReceivePortID. There was no such parameter for the B orchestration. It was created because I have bound the orchestration port with the Physical Receive File port. This binding means the PortID parameter is added to the subscription. How to set up the ports? All ports involved in the message exchange should be the same port type. This forces us to use the same operation and the same message type for the bound ports. This step is absolutely counter-intuitive. We have to choose a Partner Orchestration parameter for the sending orchestration, A_1. The first strange thing is that it is not a partner orchestration we have to choose but an orchestration port. But the strangest thing is that we have to choose exactly this orchestration and exactly this port. It is not a port from the partner, receiving orchestrations, A_21 or A_22, but the A_1 orchestration and the S_SentFromA_1 port. Now we have to choose a Partner Orchestration parameter for the receiving orchestrations, A_21 and A_22. Nothing strange here except the parameter name. We choose the port of the sender, the A_1 orchestration and the S_SentFromA_1 port. As you can see, the Partner Orchestration parameter for the sender and receiver orchestrations is the same. Testing I dropped a test file in a file folder. There we go: A dropped file was received by B and by A_1; A_1 sent a message forward; the message was received by B, A_21 and A_22. Let’s look at the context of the message sent by A_1 on the second step: A MessageType part. It is quite expected. A PartnerService, a PartnerPort, an Operation. All those parameters were set up in the Partner Orchestration parameter on both bound ports.     Now let’s see a subscription of the A_21 and A_22 orchestrations. Now it makes sense. That’s why we have chosen such a strange value for the Partner Orchestration parameter of the sending orchestration. Inverse Binding This sample has three orchestrations: A_11, A_12 and A_2. A_11 and A_12 are senders, A_2 is the receiver. How to set up the ports? 
All ports involved in the message exchange should be the same port type. This forces us to use the same operation and the same message type for the bound ports. This step is absolutely counter-intuitive. We have to choose a Partner Orchestration parameter for the receiving orchestration, A_2. The first strange thing is that it is not a partner orchestration we have to choose but an orchestration port. But the strangest thing is that we have to choose exactly this orchestration and exactly this port. It is not a port from the partner, sending orchestrations, A_11 or A_12, but the A_2 orchestration and the R_SentToA_2 port. Now we have to choose a Partner Orchestration parameter for the sending orchestrations, A_11 and A_12. Nothing strange here except the parameter name. We choose the port of the receiver, the A_2 orchestration and the R_SentToA_2 port. Testing I dropped a test file in a file folder. There we go: A dropped file was received by B, A_11 and A_12; A_11 and A_12 sent two messages forward; the messages were received by B and A_2. Let’s see what the context of a message sent by A_11 on the second step was: A MessageType part. It is quite expected. A PartnerService, a PartnerPort, an Operation. All those parameters were set up in the Partner Orchestration parameter on both bound ports. Here is a subscription of the A_2 orchestration. Models I had a hard time trying to explain the Partner Direct Ports in simple terms. I have finished with this model: Forward Binding Receivers know a Sender. Sender doesn’t know Receivers. Publishers know a Subscriber. Subscriber doesn’t know Publishers. 1 –> 1 1 –> M Inverse Binding Senders know a Receiver. Receiver doesn’t know Senders. Subscribers know a Publisher. Publisher doesn’t know Subscribers. 1 –> 1 M –> 1 Notes   Orchestration chain It’s worth noting that the Partner Direct Port Binding creates a chain opened from one side and closed from the other. The Forward Binding: A new Receiver can be added at run-time. The Sender cannot be changed without design-time changes in the Receivers. The Inverse Binding: A new Sender can be added at run-time. The Receiver cannot be changed without design-time changes in the Senders.

    Read the article

  • PASS: The Budget Process

    - by Bill Graziano
    Every fiscal year PASS creates a detailed budget.  This helps us set priorities and communicate to our members what we’re going to do in the upcoming year.  You can review the current budget on the PASS Governance page.  That page currently requires you to login but I’m talking with HQ to see if there are any legal issues with opening that up. The Accounting Team The PASS accounting team is two people.  The Executive Vice-President of Finance (“EVP”) and the PASS Accounting Manager.  Sandy Cherry is the accounting manager and works at PASS HQ.  Sandy has been with PASS since we switched management companies in 2007.  Throughout this document when I talk about any actual work related to the budget that’s all Sandy :)  She’s the glue that gets us through this process.  Last year we went through 32 iterations of the budget before the Board approved so it’s a pretty busy time for her us – well, mostly her. Fiscal Year The PASS fiscal year runs from July 1st through June 30th the following year.  Right now we’re in fiscal year 2011.  Our 2010 Summit actually occurred in FY2011.  We switched to this schedule from a calendar year in 2006.  Our goal was to have the Summit occur early in our fiscal year.  That gives us the rest of the year to handle any significant financial impact from the Summit.  If registrations are down we can reduce spending.  If registrations are up we can decide how much to increase our reserves and how much to spend.  Keep in mind that the Summit is budgeted to generate 82% of our revenue this year.  How it performs has a significant impact on our financials.  The other benefit of this fiscal year is that it matches the Microsoft fiscal year.  We sign an annual sponsorship agreement with Microsoft and it’s very helpful that our fiscal years match. This year our budget process will probably start in earnest in March or April.  I’d like to be done in early June so we can publish before July 1st.  I was late publishing it this year and I’m trying not to repeat that. Our Budget Our actual budget is an Excel spreadsheet with 36 sheets.  We remove some of those when we publish it since they include salary information.  The budget is broken up into various portfolios or departments.  We have 20 portfolios.  They include chapters, marketing, virtual chapters, marketing, etc.  Ideally each portfolio is assigned to a Board member.  Each portfolio also typically has a staff person assigned to it.  Portfolios that aren’t assigned to a Board member are monitored by HQ and the ExecVP-Finance (me).  These are typically smaller portfolios such as deferred membership or Summit futures.  (More on those in a later post.)  All portfolios are reviewed by all Board members during the budget approval process, when interim financials are released internally and at year-end. The Process Our first step is to budget revenues.  The Board determines a target attendee number.  We have formulas based on historical performance that convert that to an overall attendee revenue number.  Other revenue projections (such as vendor sponsorships) come from different parts of the organization.  I hope to have another post with more details on how we project revenues. The next step is to budget expenses.  Board members fill out a sample spreadsheet with their budget for the year.  They can add line items and notes describing what the amounts are for.  Each Board portfolio typically has from 10 to 30 line items.  Any new initiatives they want to pursue needs to be budgeted.  
The Summit operations budget is managed by HQ.  It includes the cost for food, electrical, internet, etc.  Most of these come from our estimate of attendees and our contract with the convention center.  During this process the Board can ask for more or less to be spent on various line items.  For example, if we weren’t happy with the Internet at the last Summit we can ask them to look into different options and/or increasing the budget.  HQ will also make adjustments to these numbers based on what they see at the events and the feedback we receive on the surveys. After we have all the initial estimates we start reviewing the entire budget.  It is sent out to the Board and we can see what each portfolio requested and what the overall profit and loss number is.  We usually start with too much in expenses and need to cut.  In years past the Board started haggling over these numbers as a group.  This past year they decided I should take a first cut and present them with a reasonable budget and a list of what I changed.  That worked well and I think we’ll continue to do that in the future. We go through a number of iterations on the budget.  If I remember correctly, we went through 32 iterations before we passed the budget.  At each iteration various revenue and expense numbers can change.  Keep in mind that the PASS budget has 200+ line items spread over 20 portfolios.  Many of these depend on other numbers.  For example, if we decide increase the projected attendees that cascades through our budget.  At each iteration we list what changed and the impact.  Ideally these discussions will take place at a face-to-face Board meeting.  Many of them also take place over the phone.  Board members explain any increase they are asking for while performing due diligence on other budget requests.  Eventually a budget emerges and is passed. Publishing After the budget is passed we create a version without the formulas and salaries for posting on the web site.  Sandy also creates some charts to help our members understand the budget.  The EVP writes a nice little letter describing some of the changes from last year’s budget.  You can see my letter and our budget on the PASS Governance page. And then, eight months later, we start all over again.

    Read the article

  • Finding the groups of a user in WLS with OPSS

    - by user12587121
    How do you find the group memberships for a user from a web application running in WebLogic Server? This is useful for building up the profile of the user for security purposes, for example. WLS as a container offers an identity store service which applications can access to query and manage identities known to the container. This article, for example, shows how to recover the groups of the current user, but how can we find the same information for an arbitrary user? It is the Oracle Platform Security Services (OPSS) that looks after the identity store in WLS, and so it is in the OPSS APIs that we can find the way to recover this information. This is explained in the following documents. Starting from the FMW 11.1.1.5 book list, with the Security Overview document we can see how WLS uses OPSS. Proceeding to the more detailed Application Security document, we find this list of useful references for security in FMW. We can follow on into the User/Role API javadoc. The Application Security document explains how to ensure that the identity store is configured appropriately to allow the OPSS APIs to work. We must verify that the jps-config.xml file where the application is deployed has its identity store configured; look for the following elements in that file:
<serviceProvider type="IDENTITY_STORE" name="idstore.ldap.provider" class="oracle.security.jps.internal.idstore.ldap.LdapIdentityStoreProvider">
    <description>LDAP-based IdentityStore Provider</description>
</serviceProvider>
<serviceInstance name="idstore.ldap" provider="idstore.ldap.provider">
    <property name="idstore.config.provider" value="oracle.security.jps.wls.internal.idstore.WlsLdapIdStoreConfigProvider"/>
    <property name="CONNECTION_POOL_CLASS" value="oracle.security.idm.providers.stdldap.JNDIPool"/>
</serviceInstance>
<serviceInstanceRef ref="idstore.ldap"/>
The document contains a code sample for using the identity store here. Once we have the identity store reference we can recover the user's group memberships using the RoleManager interface:
            RoleManager roleManager = idStore.getRoleManager();
            SearchResponse grantedRoles = null;
            try {
                System.out.println("Retrieving granted WLS roles for user " + userPrincipal.getName());
                grantedRoles = roleManager.getGrantedRoles(userPrincipal, false);
                while (grantedRoles.hasNext()) {
                    Identity id = grantedRoles.next();
                    System.out.println("  disp name=" + id.getDisplayName() +
                            " Name=" + id.getName() +
                            " Principal=" + id.getPrincipal() +
                            " Unique Name=" + id.getUniqueName());
                    // Here, we must use WLSGroupImpl() to build the Principal otherwise
                    // OES does not recognize it.
                    retSubject.getPrincipals().add(new WLSGroupImpl(id.getPrincipal().getName()));
                }
            } catch (Exception ex) {
                System.out.println("Error getting roles for user " + ex.getMessage());
                ex.printStackTrace();
            }
        } catch (Exception ex) {
            System.out.println("OESGateway: Got exception instantiating idstore reference");
        }
This small JDeveloper project has a simple servlet that executes a request for the user weblogic's roles on executing a GET on the default URL.
The full code to recover a user's groups is in the getSubjectWithRoles() method in the project.

    Read the article

  • Building an OpenStack Cloud for Solaris Engineering, Part 1

    - by Dave Miner
    One of the signature features of the recently-released Solaris 11.2 is the OpenStack cloud computing platform.  Over on the Solaris OpenStack blog the development team is publishing lots of details about our version of OpenStack Havana as well as some tips on specific features, and I highly recommend reading those to get a feel for how we've leveraged Solaris's features to build a top-notch cloud platform.  In this and some subsequent posts I'm going to look at it from a different perspective, which is that of the enterprise administrator deploying an OpenStack cloud.  But this won't be just a theoretical perspective: I've spent the past several months putting together a deployment of OpenStack for use by the Solaris engineering organization, and now that it's in production we'll share how we built it and what we've learned so far.In the Solaris engineering organization we've long had dedicated lab systems dispersed among our various sites and a home-grown reservation tool for developers to reserve those systems; various teams also have private systems for specific testing purposes.  But as a developer, it can still be difficult to find systems you need, especially since most Solaris changes require testing on both SPARC and x86 systems before they can be integrated.  We've added virtual resources over the years as well in the form of LDOMs and zones (both traditional non-global zones and the new kernel zones).  Fundamentally, though, these were all still deployed in the same model: our overworked lab administrators set up pre-configured resources and we then reserve them.  Sounds like pretty much every traditional IT shop, right?  Which means that there's a lot of opportunity for efficiencies from greater use of virtualization and the self-service style of cloud computing.  As we were well into development of OpenStack on Solaris, I was recruited to figure out how we could deploy it to both provide more (and more efficient) development and test resources for the organization as well as a test environment for Solaris OpenStack.At this point, let's acknowledge one fact: deploying OpenStack is hard.  It's a very complex piece of software that makes use of sophisticated networking features and runs as a ton of service daemons with myriad configuration files.  The web UI, Horizon, doesn't often do a good job of providing detailed errors.  Even the command-line clients are not as transparent as you'd like, though at least you can turn on verbose and debug messaging and often get some clues as to what to look for, though it helps if you're good at reading JSON structure dumps.  I'd already learned all of this in doing a single-system Grizzly-on-Linux deployment for the development team to reference when they were getting started so I at least came to this job with some appreciation for what I was taking on.  The good news is that both we and the community have done a lot to make deployment much easier in the last year; probably the easiest approach is to download the OpenStack Unified Archive from OTN to get your hands on a single-system demonstration environment.  I highly recommend getting started with something like it to get some understanding of OpenStack before you embark on a more complex deployment.  For some situations, it may in fact be all you ever need.  If so, you don't need to read the rest of this series of posts!In the Solaris engineering case, we need a lot more horsepower than a single-system cloud can provide.  
We need to support both SPARC and x86 VM's, and we have hundreds of developers so we want to be able to scale to support thousands of VM's, though we're going to build to that scale over time, not immediately.  We also want to be able to test both Solaris 11 updates and a release such as Solaris 12 that's under development so that we can work out any upgrade issues before release.  One thing we don't have is a requirement for extremely high availability, at least at this point.  We surely don't want a lot of down time, but we can tolerate scheduled outages and brief (as in an hour or so) unscheduled ones.  Thus I didn't need to spend effort on trying to get high availability everywhere.The diagram below shows our initial deployment design.  We're using six systems, most of which are x86 because we had more of those immediately available.  All of those systems reside on a management VLAN and are connected with a two-way link aggregation of 1 Gb links (we don't yet have 10 Gb switching infrastructure in place, but we'll get there).  A separate VLAN provides "public" (as in connected to the rest of Oracle's internal network) addresses, while we use VxLANs for the tenant networks. One system is more or less the control node, providing the MySQL database, RabbitMQ, Keystone, and the Nova API and scheduler as well as the Horizon console.  We're curious how this will perform and I anticipate eventually splitting at least the database off to another node to help simplify upgrades, but at our present scale this works.I had a couple of systems with lots of disk space, one of which was already configured as the Automated Installation server for the lab, so it's just providing the Glance image repository for OpenStack.  The other node with lots of disks provides Cinder block storage service; we also have a ZFS Storage Appliance that will help back-end Cinder in the near future, I just haven't had time to get it configured in yet.There's a separate system for Neutron, which is our Elastic Virtual Switch controller and handles the routing and NAT for the guests.  We don't have any need for firewalling in this deployment so we're not doing so.  We presently have only two tenants defined, one for the Solaris organization that's funding this cloud, and a separate tenant for other Oracle organizations that would like to try out OpenStack on Solaris.  Each tenant has one VxLAN defined initially, but we can of course add more.  Right now we have just a single /24 network for the floating IP's, once we get demand up to where we need more then we'll add them.Finally, we have started with just two compute nodes; one is an x86 system, the other is an LDOM on a SPARC T5-2.  We'll be adding more when demand reaches the level where we need them, but as we're still ramping up the user base it's less work to manage fewer nodes until then.My next post will delve into the details of building this OpenStack cloud's infrastructure, including how we're using various Solaris features such as Automated Installation, IPS packaging, SMF, and Puppet to deploy and manage the nodes.  After that we'll get into the specifics of configuring and running OpenStack itself.

    Read the article

  • Nginx Subdomain Problem

    - by user292299
    i can't access my subdomain on localhost. my localdomain is localhost.dev and it's work.but i want to auto subdomain for php script (username.localhost.dev) i try this server { listen 80 default_server; listen [::]:80 default_server ipv6only=on; access_log /var/www/access.log; error_log /var/www/error.log; root /var/www; index index.php index.html index.htm; # Make site accessible from http://localhost/ server_name localhost.dev ***.localhost.dev**; location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. try_files $uri $uri/ /index.html; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } location /f2/public/ { try_files $uri $uri/ /f2/public/index.php?$args; } location /doc/ { alias /usr/share/doc/; autoindex on; allow 127.0.0.1; allow ::1; deny all; } # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests #location /RequestDenied { # proxy_pass http://127.0.0.1:8080; #} #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/html; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { # fastcgi_split_path_info ^(.+\.php)(/.+)$; # # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # # # With php5-cgi alone: # fastcgi_pass 127.0.0.1:9000; # # With php5-fpm: # fastcgi_pass unix:/var/run/php5-fpm.sock; # fastcgi_index index.php; # include fastcgi_params; include /etc/nginx/fastcgi_params; try_files $uri =404; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } it's not working.i change server_name for testing server_name localhost.dev asd.localhost.dev; i can't access asd.localhost.dev and i try this double server{} section # You may add here your # server { # ... # } # statements for each of your virtual hosts to this file ## # You should look at the following URL's in order to grasp a solid understanding # of Nginx configuration files in order to fully unleash the power of Nginx. # http://wiki.nginx.org/Pitfalls # http://wiki.nginx.org/QuickStart # http://wiki.nginx.org/Configuration # # Generally, you will want to move this file somewhere, and start with a clean # file but keep this around for reference. Or just disable in sites-enabled. # # Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples. ## server { listen 80 default_server; listen [::]:80 default_server ipv6only=on; access_log /var/www/access.log; error_log /var/www/error.log; root /var/www; index index.php index.html index.htm; # Make site accessible from http://localhost/ server_name localhost.dev; location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. 
try_files $uri $uri/ /index.html; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } location /f2/public/ { try_files $uri $uri/ /f2/public/index.php?$args; } location /doc/ { alias /usr/share/doc/; autoindex on; allow 127.0.0.1; allow ::1; deny all; } # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests #location /RequestDenied { # proxy_pass http://127.0.0.1:8080; #} #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/html; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { # fastcgi_split_path_info ^(.+\.php)(/.+)$; # # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # # # With php5-cgi alone: # fastcgi_pass 127.0.0.1:9000; # # With php5-fpm: # fastcgi_pass unix:/var/run/php5-fpm.sock; # fastcgi_index index.php; # include fastcgi_params; include /etc/nginx/fastcgi_params; try_files $uri =404; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } ############################### server { access_log /var/www/access.log; error_log /var/www/error.log; root /var/www; index index.php index.html index.htm; # Make site accessible from http://localhost/ server_name asd.localhost.dev; location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. try_files $uri $uri/ /index.html; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } location /f2/public/ { try_files $uri $uri/ /f2/public/index.php?$args; } location /doc/ { alias /usr/share/doc/; autoindex on; allow 127.0.0.1; allow ::1; deny all; } # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests #location /RequestDenied { # proxy_pass http://127.0.0.1:8080; #} #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/html; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { # fastcgi_split_path_info ^(.+\.php)(/.+)$; # # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # # # With php5-cgi alone: # fastcgi_pass 127.0.0.1:9000; # # With php5-fpm: # fastcgi_pass unix:/var/run/php5-fpm.sock; # fastcgi_index index.php; # include fastcgi_params; include /etc/nginx/fastcgi_params; try_files $uri =404; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # root html; # index index.html index.htm; # # location / { # try_files $uri $uri/ =404; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # # root html; # index index.html index.htm; # # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # # ssl_session_timeout 5m; # # ssl_protocols SSLv3 TLSv1; # ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP; # ssl_prefer_server_ciphers on; # # location / { # try_files $uri 
$uri/ =404; # } #} i can't success
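For reference, the auto-subdomain behaviour the poster describes (username.localhost.dev handled by one PHP script) is normally configured with a single regex server_name rather than one server block per subdomain. Below is a minimal sketch; the paths, the PHP-FPM address and the SUBDOMAIN parameter name are assumptions for illustration and are not taken from the question, and wildcard DNS or hosts entries for *.localhost.dev are still required:
server {
    listen 80;
    # capture the subdomain part into $subdomain (user1.localhost.dev, user2.localhost.dev, ...)
    server_name ~^(?<subdomain>.+)\.localhost\.dev$;
    root /var/www;
    index index.php index.html;
    location / {
        try_files $uri $uri/ /index.php?$args;
    }
    location ~ \.php$ {
        include /etc/nginx/fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # hand the captured name to the PHP script
        fastcgi_param SUBDOMAIN $subdomain;
    }
}
Because nginx evaluates regex server names only after exact and wildcard names, the existing localhost.dev default_server block can stay as it is.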

    Read the article

  • Computer Networks UNISA - Chap 10 &ndash; In Depth TCP/IP Networking

    - by MarkPearl
    After reading this section you should be able to Understand methods of network design unique to TCP/IP networks, including subnetting, CIDR, and address translation Explain the differences between public and private TCP/IP networks Describe protocols used between mail clients and mail servers, including SMTP, POP3, and IMAP4 Employ multiple TCP/IP utilities for network discovery and troubleshooting Designing TCP/IP-Based Networks The following sections explain how network and host information in an IPv4 address can be manipulated to subdivide networks into smaller segments. Subnetting Subnetting separates a network into multiple logically defined segments, or subnets. Networks are commonly subnetted according to geographic locations, departmental boundaries, or technology types. A network administrator might separate traffic to accomplish the following… Enhance security Improve performance Simplify troubleshooting The challenges of Classful Addressing in IPv4 (No subnetting) The simplest type of IPv4 is known as classful addressing (which was the Class A, Class B & Class C network addresses). Classful addressing has the following limitations. Restriction in the number of usable IPv4 addresses (class C would be limited to 254 addresses) Difficult to separate traffic from various parts of a network Because of the above reasons, subnetting was introduced. IPv4 Subnet Masks Subnetting depends on the use of subnet masks to identify how a network is subdivided. A subnet mask indicates where network information is located in an IPv4 address. The 1 in a subnet mask indicates that corresponding bits in the IPv4 address contain network information (likewise 0 indicates the opposite) Each network class is associated with a default subnet mask… Class A = 255.0.0.0 Class B = 255.255.0.0 Class C = 255.255.255.0 An example of calculating  the network ID for a particular device with a subnet mask is shown below.. IP Address = 199.34.89.127 Subnet Mask = 255.255.255.0 Resultant Network ID = 199.34.89.0 IPv4 Subnetting Techniques Subnetting breaks the rules of classful IPv4 addressing. Read page 490 for a detailed explanation Calculating IPv4 Subnets Read page 491 – 494 for an explanation Important… Subnetting only applies to the devices internal to your network. Everything external looks at the class of the IP address instead of the subnet network ID. This way, traffic directed to your network externally still knows where to go, and once it has entered your internal network it can then be prioritized and segmented. CIDR (classless Interdomain Routing) CIDR is also known as classless routing or supernetting. In CIDR conventional network class distinctions do not exist, a subnet boundary can move to the left, therefore generating more usable IP addresses on your network. A subnet created by moving the subnet boundary to the left is known as a supernet. With CIDR also came new shorthand for denoting the position of subnet boundaries known as CIDR notation or slash notation. CIDR notation takes the form of the network ID followed by a forward slash (/) followed by the number of bits that are used for the extended network prefix. To take advantage of classless routing, your networks routers must be able to interpret IP addresses that don;t adhere to conventional network class parameters. Routers that rely on older routing protocols (i.e. RIP) are not capable of interpreting classless IP addresses. 
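To make the subnet-mask arithmetic and CIDR notation above concrete, here is a small sketch using Python's standard ipaddress module; the addresses come from the 199.34.89.127 example, everything else is illustrative:
import ipaddress

# The worked example above: host 199.34.89.127 with mask 255.255.255.0
host = ipaddress.ip_interface("199.34.89.127/255.255.255.0")
print(host.network)            # 199.34.89.0/24 -> network ID, shown in CIDR "slash" notation

# Supernetting: moving the boundary one bit to the left doubles the address space
classful = ipaddress.ip_network("199.34.88.0/24")
supernet = classful.supernet(prefixlen_diff=1)
print(supernet, supernet.num_addresses)   # 199.34.88.0/23 512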
Internet Gateways Gateways are a combination of software and hardware that enable two different network segments to exchange data. A gateway facilitates communication between different networks or subnets. Because one device cannot send data directly to a device on another subnet, a gateway must intercede and hand off the information. Every device on a TCP/IP based network has a default gateway (a gateway that first interprets its outbound requests to other subnets, and then interprets its inbound requests from other subnets). The internet contains a vast number of routers and gateways. If each gateway had to track addressing information for every other gateway on the Internet, it would be overtaxed. Instead, each handles only a relatively small amount of addressing information, which it uses to forward data to another gateway that knows more about the data’s destination. The gateways that make up the internet backbone are called core gateways. Address Translation An organization’s default gateway can also be used to “hide” the organization’s internal IP addresses and keep them from being recognized on a public network. A public network is one that any user may access with little or no restrictions. On private networks, hiding IP addresses allows network managers more flexibility in assigning addresses. Clients behind a gateway may use any IP addressing scheme, regardless of whether it is recognized as legitimate by the Internet authorities, but as soon as those devices need to go on the internet, they must have legitimate IP addresses to exchange data. When a client’s transmission reaches the default gateway, the gateway opens the IP datagram and replaces the client’s private IP address with an Internet recognized IP address. This process is known as NAT (Network Address Translation). TCP/IP Mail Services All Internet mail services rely on the same principles of mail delivery, storage, and pickup, though they may use different types of software to accomplish these functions. Email servers and clients communicate through special TCP/IP application layer protocols. These protocols, all of which operate on a variety of operating systems, are discussed below… SMTP (Simple Mail Transfer Protocol) The protocol responsible for moving messages from one mail server to another over TCP/IP based networks. SMTP belongs to the application layer of the OSI model and relies on TCP as its transport protocol. Operates from port 25 on the SMTP server Simple sub-protocol, incapable of doing anything more than transporting mail or holding it in a queue MIME (Multipurpose Internet Mail Extensions) The standard message format specified by SMTP allows for lines that contain no more than 1000 ASCII characters, meaning if you relied solely on SMTP you would have very short messages and nothing like pictures included in an email. MIME is a standard for encoding and interpreting binary files, images, video, and non-ASCII character sets within an email message. MIME identifies each element of a mail message according to content type. MIME does not replace SMTP but works in conjunction with it. 
Most modern email clients and servers support MIME POP (Post Office Protocol) POP is an application layer protocol used to retrieve messages from a mail server POP3 relies on TCP and operates over port 110 With POP3 mail is delivered and stored on a mail server until it is downloaded by a user Disadvantage of POP3 is that it typically does not allow users to save their messages on the server because of this IMAP is sometimes used IMAP (Internet Message Access Protocol) IMAP is a retrieval protocol that was developed as a more sophisticated alternative to POP3 The single biggest advantage IMAP4 has over POP3 is that users can store messages on the mail server, rather than having to continually download them Users can retrieve all or only a portion of any mail message Users can review their messages and delete them while the messages remain on the server Users can create sophisticated methods of organizing messages on the server Users can share a mailbox in a central location Disadvantages of IMAP are typically related to the fact that it requires more storage space on the server. Additional TCP/IP Utilities Nearly all TCP/IP utilities can be accessed from the command prompt on any type of server or client running TCP/IP. The syntaxt may differ depending on the OS of the client. Below is a list of additional TCP/IP utilities – research their use on your own! Ipconfig (Windows) & Ifconfig (Linux) Netstat Nbtstat Hostname, Host & Nslookup Dig (Linux) Whois (Linux) Traceroute (Tracert) Mtr (my traceroute) Route
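As a rough illustration of the mail protocols described above, the sketch below uses Python's standard smtplib, poplib and imaplib modules. The host name and credentials are placeholders, and the IMAP4 port (143) is the protocol default rather than something stated in the chapter:
import smtplib, poplib, imaplib

# SMTP (port 25): hand a message to the mail server for onward delivery
with smtplib.SMTP("mail.example.com", 25) as smtp:
    smtp.noop()                      # just confirm the server answers

# POP3 (port 110): messages are downloaded from the server
pop = poplib.POP3("mail.example.com", 110)
pop.user("alice")
pop.pass_("secret")
print(pop.stat())                    # (message count, mailbox size in bytes)
pop.quit()

# IMAP4 (port 143): messages stay on the server and can be organised there
imap = imaplib.IMAP4("mail.example.com", 143)
imap.login("alice", "secret")
print(imap.select("INBOX"))          # mailbox selected; data part holds the message count
imap.logout()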

    Read the article

  • PASS Summit 2010 BI Workshop Feedbacks

    - by Davide Mauri
    As many other speakers already did, I’d like to share with the SQL Community the feedback of my PASS Summit 2010 Workshop. For those who were not there, my workshop was “BI From A-Z” and the main objective of that workshop was to introduce people to the BI world not only from a technical point of view but also to insist a lot on the methodological and “engineered” approach. The will to put more engineering into IT (and especially into the BI field) is something that has been growing stronger and stronger in me every day over the last 5 years, since I simply envy the fact that Airbus, Fincantieri, BMW (just to name a few) can create very complex machines “just” by putting people together and giving them some rules to follow (of course this is an oversimplification but I think you get what I mean). The key point of engineering is that, after having defined the project blueprint, you have the possibility to give a huge number of people the rules to follow, the correct tools in order to implement the rules easily and semi-automatically, and a way to measure the quality of the results. Could this be done in IT? Very big question, so my scope is now limited to BI. So that’s the main point of my workshop: an entry-level approach to BI (the level was 200) in order to allow attendees to know the basics, to understand what tools they should use for which purpose and, above all, a set of rules and tools in order to make a BI solution scalable in terms of people working on it, while still maintaining a very good quality. All done by not focusing only on the practice but also explaining the theory behind it, to see how it can help *a lot* to build a correct solution regardless of the technology used to implement it. The idea is to reach a point where more than 70% of the work done to create a BI solution can be reused even if technologies change. This is a very demanding challenge nowadays with the coming of Denali and its column-aligned storage and the shiny new DAX language. As you may understand I was looking forward to getting the feedback, since you may have noticed that there’s a lot of “architectural” stuff in IT but really nothing on “engineering”. So how the session would be perceived by the attendees was really unknown to me. The feedback could also give a good indication of whether the need for more “engineering” is something only I feel or whether it is something more broad. I’m very happy to be able to say that the overall score of 4.75 put my workshop in the TOP 20 sessions (out of nearly 200 sessions)! Here are the detailed evaluations (responses by score): How would you rate the usefulness of the information presented in your day-to-day environment? 4.75 (3: 1, 4: 12, 5: 42) How would you rate the Speaker's presentation skills? 4.80 (3: 1, 4: 9, 5: 45) How would you rate the Speaker's knowledge of the subject? 4.95 (4: 3, 5: 52) How would you rate the accuracy of the session title, description and experience level to the actual session? 4.75 (3: 2, 4: 10, 5: 43) How would you rate the amount of time allocated to cover the topic/session? 4.44 (3: 7, 4: 17, 5: 31) How would you rate the quality of the presentation materials? 4.62 (4: 21, 5: 34) The comments were all very positive. 
Many of them asked for more time on the subject (or to shorten the very last topics). I'll treasure these comments and will review the content accordingly. We'll organize a two-day class on this topic, where more examples will be shown and some arguments will be explained more deeply. I'd just like to answer a comment that asks how much of what I showed is "universally applicable". I can tell you that all of our BI projects follow these rules and that they've been applied to different markets (Insurance, Fashion, GDO) with different people and different teams, and they have allowed us to be "adaptive" towards the customer. The better the rules are defined, and the more tools there are that support their implementation, the easier it is to add new people to the project and to add or change solution features. Think of a car: how come almost any mechanic can help you fix a problem? Because they know what to expect. Because there are rules that allow them to identify the problem without having to discover each time how the car has been built. And this is of course also true for car upgrades and improvements. Last but not least: thanks a lot to everyone for coming!

    Read the article

  • Oracle B2B - Synchronous Request Reply

    - by cdwright
    Introduction So first off, let me say I didn't create this demo (although I did modify it some). I got it from a member of the B2B development technical staff. Since it came with only a simple readme file, I thought I would take some time and write a more detailed explanation of how it works. Beginning with Oracle SOA Suite PS5 (11.1.1.6), B2B supports synchronous request/reply over HTTP using the b2b/syncreceiver servlet. I'm attaching the demo to this blog; it includes a SOA composite archive that needs to be deployed using JDeveloper, a B2B repository with two agreements that need to be deployed using the B2B console, and a test XML file that gets sent to the b2b/syncreceiver servlet using your favorite SOAP test tool (I'm using Firefox Poster here). You can download the zip file containing the demo here. The demo works by sending the sample XML request file (req.xml) to http://<b2bhost>:8001/b2b/syncreceiver using the SOAP test tool. The syncreceiver servlet keeps the socket connection open between itself and the test tool so that it can synchronously send the reply message back. When B2B receives the inbound request message, it is passed to the SOA composite through the default B2B Fabric binding. A simple reply is created in BPEL and returned to B2B, which then sends the message back to the test tool using that same socket connection. I'll show you the B2B configuration first, then we'll look at the SOA composite. Configuring B2B No additional configuration is necessary in order to use the syncreceiver servlet. It is already running when you start SOA. After importing the GC_SyncReqRep.zip repository file into B2B, you'll have the typical GlobalChips host trading partner and the Acme remote trading partner. Document Management The repository contains two very simple custom XML document definitions called Orders and OrdersResponse. In order to determine the trading partner agreement needed to process the inbound Orders document, you need to know two things about it: what it is and where it came from. So let's look at how B2B identifies the appropriate document definition for the message. The XSDs for these two document definitions themselves are not particularly interesting. Whenever you're dealing with custom XML documents, B2B identifies the appropriate document definition for each XML message using an XPath Identification Expression. The expression is entered for each of these document definitions under the document administration tab in the B2B console. The full XPath expression for the Orders document is //*[local-name()='shiporder']/*[local-name()='shipto']/*[local-name()='name']/text(). You can see this path in the XSD diagram below and how it uniquely identifies this message. The OrdersResponse document is identified in the same way. The XPath expression for it is //*[local-name()='Response']/*[local-name()='Status']/text(). You can see how its path differs, uniquely identifying the reply from the request. Trading Partner Profile The trading partner profiles are very simple too. For GlobalChips, a generic identifier is used to identify the sender of the response document using the host trading partner name. For Acme, a generic identifier is also used to identify the sender of the inbound request using the remote trading partner name. The document types are added for the remote trading partner as usual. So the remote trading partner Acme is the sender of the Orders document, and it is the receiver of the OrdersResponse document. 
For the remote trading partner only, there needs to be a dummy channel that gets used in the outbound response agreement. The channel is not actually used; it is just a necessary placeholder that needs to be there when creating the agreement. Trading Partner Agreement The agreements are equally simple. There is no validation, and translation is not an option for a custom XML document type. For the InboundAgreement (request) the document definition is set to OrdersDef. In the Agreement Parameters section the generic identifiers have been added for the host and remote trading partners. That's all that is needed for the inbound transaction. For the OutboundAgreement (response), the document definition is set to OrdersResponseDef and the generic identifiers for the two trading partners are added. The remote trading partner's dummy delivery channel is also added to the agreement. SOA Composite Import the SOA composite archive into JDeveloper as an EJB JAR file. Open the composite and you should have a project that looks like this. In the composite, open the b2bInboundSyncSvc exposed service and advance through the setup wizard. Select your Application Server Connection and advance to the Operations window. Notice here that the B2B binding is set to Receive; it is not set for Synchronous Request Reply. Continue advancing through the wizard as you normally would and select Finish at the end. Now open BPELProcess1 in the composite. The BPEL process is set up as a Synchronous Request Reply, as you can see below. The while loop is there just to give the process something to do. The actual reply message is prepared in the assignResponseValues assignment, followed by an Invoke of the B2B binding. Open the replyResponse Invoke and go to the Properties tab. You'll see that the fromTradingPartnerId, toTradingPartner, documentTypeName, and documentProtocolRevision properties have been set. Testing the Configuration To test the configuration, I used Firefox Poster. Enter the URL for the b2b/syncreceiver servlet and browse for the req.xml file that contains the test request message. In the Headers tab, add the property 'from' and give it the value 'Acme'. This is how B2B will know where the message is coming from, and it will use that information along with the document type name to find the right trading partner agreement. Now post the message. You should get back a response with a status of '200 OK'. That's all there is to it.
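To make the identification step from the Document Management section a little more concrete, here is a small sketch that is not part of the original demo. It shows how an XPath expression like the Orders one picks out the shipto name from a shiporder payload; the XML content below is invented purely for illustration, and only the XPath expression itself comes from the B2B repository.

    using System;
    using System.Xml.Linq;
    using System.Xml.XPath;

    class XPathIdentificationSketch
    {
        static void Main()
        {
            // Hypothetical Orders payload; the actual req.xml in the demo may differ.
            var orders = XDocument.Parse(
                "<shiporder><shipto><name>Acme Receiving Dock</name></shipto></shiporder>");

            // The identification expression configured for the Orders document definition.
            string ordersExpr =
                "//*[local-name()='shiporder']/*[local-name()='shipto']/*[local-name()='name']/text()";

            // B2B evaluates the expression against the inbound payload; a non-empty
            // result means the message matches this document definition.
            var result = (System.Collections.IEnumerable)orders.XPathEvaluate(ordersExpr);
            bool matches = result.GetEnumerator().MoveNext();
            Console.WriteLine(matches ? "Matches the Orders definition" : "No match");
        }
    }

And if you would rather script the test than use a GUI tool, something along these lines exercises the same servlet. This is only a sketch: the host name, port and file path are assumptions you will need to adapt, and the 'from' header plays exactly the same role as the header added in Firefox Poster above.

    using System;
    using System.IO;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class SyncReceiverTest
    {
        static async Task Main()
        {
            using var client = new HttpClient();

            var request = new HttpRequestMessage(
                HttpMethod.Post, "http://b2bhost:8001/b2b/syncreceiver");   // assumed host/port
            request.Content = new StringContent(
                File.ReadAllText(@"C:\demo\req.xml"), Encoding.UTF8, "application/xml");

            // Tells B2B which trading partner sent the message.
            request.Headers.TryAddWithoutValidation("from", "Acme");

            var response = await client.SendAsync(request);
            Console.WriteLine((int)response.StatusCode);                    // expect 200
            Console.WriteLine(await response.Content.ReadAsStringAsync());  // the synchronous reply
        }
    }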

    Read the article

  • collision detection problems - Javascript/canvas game

    - by Tom Burman
    Ok here is a more detailed version of my question. What i want to do: i simply want the have a 2d array to represent my game map. i want a player sprite and i want that sprite to be able to move around my map freely using the keyboard and also have collisions with certain tiles of my map array. i want to use very large maps so i need a viewport. What i have: I have a loop to load the tile images into an array: /Loop to load tile images into an array var mapTiles = []; for (x = 0; x <= 256; x++) { var imageObj = new Image(); // new instance for each image imageObj.src = "images/prototype/"+x+".jpg"; mapTiles.push(imageObj); } I have a 2d array for my game map: //Array to hold map data var board = [ [1,2,3,4,3,4,3,4,5,6,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [17,18,19,20,19,20,19,20,21,22,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [33,34,35,36,35,36,35,36,37,38,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [49,50,51,52,51,52,51,52,53,54,1,1,1,1,1,1,1,1,1,1,1,1,1,197,198,199,1,1,1,1], [65,66,67,68,146,147,67,68,69,70,1,1,1,1,1,1,1,1,216,217,1,1,1,213,214,215,1,1,1,1], [81,82,83,161,162,163,164,84,85,86,1,1,1,1,1,1,1,1,232,233,1,1,1,229,230,231,1,1,1,1], [97,98,99,177,178,179,180,100,101,102,1,1,1,1,59,1,1,1,248,249,1,1,1,245,246,247,1,1,1,1], [1,1,238,1,1,1,1,239,240,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [216,217,254,1,1,1,1,255,256,1,204,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [232,233,1,1,1,117,118,1,1,1,220,1,1,119,120,1,1,1,1,1,1,1,1,1,1,1,119,120,1,1], [248,249,1,1,1,133,134,1,1,1,1,1,1,135,136,1,1,1,1,1,1,59,1,1,1,1,135,136,1,1], [1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [1,1,216,217,1,1,1,1,1,1,60,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [1,1,232,233,1,1,1,1,1,1,1,1,1,1,1,1,1,1,204,1,1,1,1,1,1,1,1,1,1,1], [1,1,248,249,1,1,1,1,1,1,1,1,1,1,1,1,1,1,220,1,1,1,1,1,1,216,217,1,1,1], [1,1,1,1,1,1,1,1,1,1,1,1,149,150,151,1,1,1,1,1,1,1,1,1,1,232,233,1,1,1], [12,12,12,12,12,12,12,13,1,1,1,1,165,166,167,1,1,1,1,1,1,119,120,1,1,248,249,1,1,1], [28,28,28,28,28,28,28,29,1,1,1,1,181,182,183,1,1,1,1,1,1,135,136,1,1,1,1,1,1,1], [44,44,44,44,44,15,28,29,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [1,1,1,1,1,27,28,29,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [1,1,1,1,1,27,28,29,1,1,1,1,1,1,1,1,1,59,1,1,197,198,199,1,1,1,1,119,120,1], [1,1,1,1,1,27,28,29,1,1,216,217,1,1,1,1,1,1,1,1,213,214,215,1,1,1,1,135,136,1], [1,1,1,1,1,27,28,29,1,1,232,233,1,1,1,1,1,1,1,1,229,230,231,1,1,1,1,1,1,1], [1,1,1,1,1,27,28,29,1,1,248,249,1,1,1,1,1,1,1,1,245,246,247,1,1,1,1,1,1,1], [1,1,1,197,198,199,28,29,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [1,1,1,213,214,215,28,29,1,1,1,1,1,60,1,1,1,1,204,1,1,1,1,1,1,1,1,1,1,1], [1,1,1,229,230,231,28,29,1,1,1,1,1,1,1,1,1,1,220,1,1,1,1,119,120,1,1,1,1,1], [1,1,1,245,246,247,28,29,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,135,136,1,1,60,1,1], [1,1,1,1,1,27,28,29,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [1,1,1,1,1,27,28,29,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1] ]; I have my loop to place the correct tile sin the correct positions: //Loop to place tiles onto screen in correct position for (x = 0; x <= viewWidth; x++){ for (y = 0; y <= viewHeight; y++){ var width = 32; var height = 32; context.drawImage(mapTiles[board[y+viewY][x+viewX]],x*width, y*height); } } I Have my player object : //Place player object context.drawImage(playerImg, (playerX-viewX)*32,(playerY-viewY)*32, 32, 32); I have my viewport setup: //Set viewport pos viewX = playerX - Math.floor(0.5 * viewWidth); if (viewX < 0) viewX = 0; if (viewX+viewWidth > worldWidth) viewX = worldWidth - 
viewWidth; viewY = playerY - Math.floor(0.5 * viewHeight); if (viewY < 0) viewY = 0; if (viewY+viewHeight > worldHeight) viewY = worldHeight - viewHeight; I have my player movement: canvas.addEventListener('keydown', function(e) { console.log(e); var key = null; switch (e.which) { case 37: // Left if (playerY > 0) playerY--; break; case 38: // Up if (playerX > 0) playerX--; break; case 39: // Right if (playerY < worldWidth) playerY++; break; case 40: // Down if (playerX < worldHeight) playerX++; break; } }); My Problem: I have my map loading and it looks fine, but my player position thinks it's on a different tile from the one it is actually on. So for instance, I know that if my player moves left 1 tile, the value of that tile should be 2, but if I print out the value of the tile it should be moving to (2), it comes up with a different value. How I've tried to solve the problem: I have tried swapping the X and Y values for the initialization of my player, and for when my map prints. If I swap the x and y values in this part of my code: context.drawImage(mapTiles[board[y+viewY][x+viewX]],x*width, y*height); the map doesn't get drawn correctly at all and tiles are placed in random positions or orientations. If I swap the x and y values for my player in this line: context.drawImage(playerImg, (playerX-viewX)*32,(playerY-viewY)*32, 32, 32); the player's movements are inverted, so the up and down keys move my player left and right and vice versa. My question: Where am I going wrong in my code, and how do I solve it so that my map looks like it should, my player moves as it should, and my player returns the correct tileID it is standing on or moving to? Thanks again. ALSO here is a link to my whole code: prototype

    Read the article

  • Organization &amp; Architecture UNISA Studies &ndash; Chap 4

    - by MarkPearl
    Learning Outcomes Explain the characteristics of memory systems Describe the memory hierarchy Discuss cache memory principles Discuss issues relevant to cache design Describe the cache organization of the Pentium Computer Memory Systems There are key characteristics of memory… Location – internal or external Capacity – expressed in terms of bytes Unit of Transfer – the number of bits read out of or written into memory at a time Access Method – sequential, direct, random or associative From a user's perspective the two most important characteristics of memory are… Capacity Performance – access time, memory cycle time, transfer rate The trade-off for memory happens along three axes… Faster access time, greater cost per bit Greater capacity, smaller cost per bit Greater capacity, slower access time This leads to people using a tiered approach in their use of memory. As one goes down the hierarchy, the following occurs… Decreasing cost per bit Increasing capacity Increasing access time Decreasing frequency of access of the memory by the processor The use of two levels of memory to reduce average access time works in principle, but only if conditions 1 to 4 apply. A variety of technologies exist that allow us to accomplish this. Thus it is possible to organize data across the hierarchy such that the percentage of accesses to each successively lower level is substantially less than that of the level above. A portion of main memory can be used as a buffer to hold data temporarily that is to be read out to disk. This is sometimes referred to as a disk cache and improves performance in two ways… Disk writes are clustered. Instead of many small transfers of data, we have a few large transfers of data. This improves disk performance and minimizes processor involvement. Some data destined for write-out may be referenced by a program before the next dump to disk. In that case the data is retrieved rapidly from the software cache rather than slowly from disk. Cache Memory Principles Cache memory is substantially faster than main memory. A caching system works as follows… When a processor attempts to read a word of memory, a check is made to see if it is in cache memory. If it is, the data is supplied. If it is not in the cache, a block of main memory, consisting of a fixed number of words, is loaded into the cache. Because of the phenomenon of locality of reference, when a block of data is fetched into the cache, it is likely that there will be future references to that same memory location or to other words in the block. Elements of Cache Design While there are a large number of cache implementations, there are a few basic design elements that serve to classify and differentiate cache architectures… Cache Addresses Cache Size Mapping Function Replacement Algorithm Write Policy Line Size Number of Caches Cache Addresses Almost all non-embedded processors support virtual memory. Virtual memory in essence allows a program to address memory from a logical point of view without needing to worry about the amount of physical memory available. When virtual addresses are used, the designer may choose to place the cache between the MMU (memory management unit) and the processor or between the MMU and main memory. 
The disadvantage of virtual memory is that most virtual memory systems supply each application with the same virtual memory address space (each application sees virtual memory starting at memory address 0), which means the cache memory must be completely flushed with each application context switch, or extra bits must be added to each line of the cache to identify which virtual address space the address refers to. Cache Size We would like the size of the cache to be small enough so that the overall average cost per bit is close to that of main memory alone, and large enough so that the overall average access time is close to that of the cache alone. Also, larger caches are slightly slower than smaller ones. Mapping Function Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. The choice of mapping function dictates how the cache is organized. Three techniques can be used… Direct – the simplest technique, maps each block of main memory into only one possible cache line Associative – allows each main memory block to be loaded into any line of the cache Set Associative – exhibits the strengths of both the direct and associative approaches while reducing their disadvantages For detailed explanations of each approach, read the textbook (pages 148–154). Replacement Algorithm For associative and set-associative mapping, a replacement algorithm is needed to determine which of the existing blocks in the cache must be replaced by a new block. There are four common approaches… LRU (Least recently used) FIFO (First in first out) LFU (Least frequently used) Random selection Write Policy When a block resident in the cache is to be replaced, there are two cases to consider: if no writes to that block have happened in the cache, it can simply be discarded; if a write has occurred, the changes in the cache must be propagated back to main memory. There are several approaches to achieve this, including… Write Through – all writes to the cache are done to main memory as well at the point of the change Write Back – updates are made only in the cache; when a block is replaced, it is written back to main memory if it has been modified (its dirty bit is set) The problem is complicated when we have multiple caches; there are techniques to accommodate this, but I have not summarized them. Line Size When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality, which states that the data in the vicinity of a referenced word are likely to be referenced in the near future. As the block size increases, more useful data are brought into the cache. The hit ratio will begin to decrease as the block becomes even bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced. Two specific effects come into play… Larger blocks reduce the number of blocks that fit into a cache. Because each block fetch overwrites older cache contents, a small number of blocks results in data being overwritten shortly after it is fetched. As a block becomes larger, each additional word is farther from the requested word and therefore less likely to be needed in the near future. The relationship between block size and hit ratio is complex, and no single approach is judged to be the best in all circumstances.
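Going back to the mapping function for a moment, a rough sketch of my own (not from the textbook) may help: in a direct-mapped cache the address is simply split into an offset within the block, the cache line the block maps to, and a tag that is stored with the line and compared on every lookup. The sizes below are illustrative only.

    using System;

    class DirectMappedCacheSketch
    {
        const uint BlockSize = 64;    // bytes per block (illustrative)
        const uint LineCount = 1024;  // number of cache lines (illustrative)

        static void Main()
        {
            uint address = 0x00123ABC;

            uint offset = address % BlockSize;                // byte within the block
            uint line   = (address / BlockSize) % LineCount;  // cache line the block maps to
            uint tag    = address / (BlockSize * LineCount);  // tag stored with the line

            // A lookup hits when the line is valid and its stored tag equals 'tag'.
            Console.WriteLine($"offset={offset}, line={line}, tag=0x{tag:X}");
        }
    }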
Pentium 4 and ARM cache organizations The Pentium 4 processor core consists of four major components: Fetch/decode unit – fetches program instructions in order from the L2 cache, decodes these into a series of micro-operations, and stores the results in the L1 instruction cache Out-of-order execution logic – schedules execution of the micro-operations subject to data dependencies and resource availability; thus micro-operations may be scheduled for execution in a different order than they were fetched from the instruction stream. As time permits, this unit schedules speculative execution of micro-operations that may be required in the future Execution units – these units execute micro-operations, fetching the required data from the L1 data cache and temporarily storing results in registers Memory subsystem – this unit includes the L2 and L3 caches and the system bus, which is used to access main memory when the L1 and L2 caches have a cache miss and to access the system I/O resources
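As a closing footnote to the two-level memory discussion at the start of these notes (a worked example of my own, not from the study notes): the benefit of the hierarchy can be estimated from the hit ratio. If a cache access costs T1, a main memory access costs T2, and the hit ratio is H, the average access time is roughly H*T1 + (1-H)*(T1+T2), which is why even a modest hit ratio dramatically reduces the average.

    using System;

    class AverageAccessTimeSketch
    {
        static void Main()
        {
            double t1 = 0.01;  // cache access time in microseconds (illustrative)
            double t2 = 0.10;  // main memory access time in microseconds (illustrative)
            double h  = 0.95;  // hit ratio (illustrative)

            // On a hit we pay t1; on a miss we pay t1 (the failed lookup) plus t2.
            double average = h * t1 + (1 - h) * (t1 + t2);

            Console.WriteLine($"Average access time: {average:F4} microseconds");
        }
    }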

    Read the article

  • LINQ – SequenceEqual() method

    - by nmarun
    I have been looking at LINQ extension methods and have blogged about what I learned from them in my blog space. Next in line is the SequenceEqual() method. Here’s the description about this method: “Determines whether two sequences are equal by comparing the elements by using the default equality comparer for their type.” Let’s play with some code: 1: int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 }; 2: // int[] numbersCopy = numbers; 3: int[] numbersCopy = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 }; 4:  5: Console.WriteLine(numbers.SequenceEqual(numbersCopy)); This gives an output of ‘True’ – basically compares each of the elements in the two arrays and returns true in this case. The result is same even if you uncomment line 2 and comment line 3 (I didn’t need to say that now did I?). So then what happens for custom types? For this, I created a Product class with the following definition: 1: class Product 2: { 3: public int ProductId { get; set; } 4: public string Name { get; set; } 5: public string Category { get; set; } 6: public DateTime MfgDate { get; set; } 7: public Status Status { get; set; } 8: } 9:  10: public enum Status 11: { 12: Active = 1, 13: InActive = 2, 14: OffShelf = 3, 15: } In my calling code, I’m just adding a few product items: 1: private static List<Product> GetProducts() 2: { 3: return new List<Product> 4: { 5: new Product 6: { 7: ProductId = 1, 8: Name = "Laptop", 9: Category = "Computer", 10: MfgDate = new DateTime(2003, 4, 3), 11: Status = Status.Active, 12: }, 13: new Product 14: { 15: ProductId = 2, 16: Name = "Compact Disc", 17: Category = "Water Sport", 18: MfgDate = new DateTime(2009, 12, 3), 19: Status = Status.InActive, 20: }, 21: new Product 22: { 23: ProductId = 3, 24: Name = "Floppy", 25: Category = "Computer", 26: MfgDate = new DateTime(1993, 3, 7), 27: Status = Status.OffShelf, 28: }, 29: }; 30: } Now for the actual check: 1: List<Product> products1 = GetProducts(); 2: List<Product> products2 = GetProducts(); 3:  4: Console.WriteLine(products1.SequenceEqual(products2)); This one returns ‘False’ and the reason is simple – this one checks for reference equality and the products in the both the lists get different ‘memory addresses’ (sounds like I’m talking in ‘C’). In order to modify this behavior and return a ‘True’ result, we need to modify the Product class as follows: 1: class Product : IEquatable<Product> 2: { 3: public int ProductId { get; set; } 4: public string Name { get; set; } 5: public string Category { get; set; } 6: public DateTime MfgDate { get; set; } 7: public Status Status { get; set; } 8:  9: public override bool Equals(object obj) 10: { 11: return Equals(obj as Product); 12: } 13:  14: public bool Equals(Product other) 15: { 16: //Check whether the compared object is null. 17: if (ReferenceEquals(other, null)) return false; 18:  19: //Check whether the compared object references the same data. 20: if (ReferenceEquals(this, other)) return true; 21:  22: //Check whether the products' properties are equal. 23: return ProductId.Equals(other.ProductId) 24: && Name.Equals(other.Name) 25: && Category.Equals(other.Category) 26: && MfgDate.Equals(other.MfgDate) 27: && Status.Equals(other.Status); 28: } 29:  30: // If Equals() returns true for a pair of objects 31: // then GetHashCode() must return the same value for these objects. 
32: // read why in the following articles: 33: // http://geekswithblogs.net/akraus1/archive/2010/02/28/138234.aspx 34: // http://stackoverflow.com/questions/371328/why-is-it-important-to-override-gethashcode-when-equals-method-is-overriden-in-c 35: public override int GetHashCode() 36: { 37: //Get hash code for the ProductId field. 38: int hashProductId = ProductId.GetHashCode(); 39:  40: //Get hash code for the Name field if it is not null. 41: int hashName = Name == null ? 0 : Name.GetHashCode(); 42:  43: //Get hash code for the Category field. 44: int hashCategory = Category.GetHashCode(); 45:  46: //Get hash code for the MfgDate field. 47: int hashMfgDate = MfgDate.GetHashCode(); 48:  49: //Get hash code for the Status field. 50: int hashStatus = Status.GetHashCode(); 51: //Calculate the hash code for the product. 52: return hashProductId ^ hashName ^ hashCategory & hashMfgDate & hashStatus; 53: } 54:  55: public static bool operator ==(Product a, Product b) 56: { 57: // Enable a == b for null references to return the right value 58: if (ReferenceEquals(a, b)) 59: { 60: return true; 61: } 62: // If one is null and the other not. Remember a==null would lead to a stack overflow! 63: if (ReferenceEquals(a, null)) 64: { 65: return false; 66: } 67: return a.Equals((object)b); 68: } 69:  70: public static bool operator !=(Product a, Product b) 71: { 72: return !(a == b); 73: } 74: } Now THAT kinda looks overwhelming, but let's take it one simple step at a time. OK, the first thing you've noticed is that the class implements the IEquatable<Product> interface – the key step towards achieving our goal. This interface provides us with an 'Equals' method to perform the test for equality with another Product object, in this case. This method is called in the following situations: when you do a ProductInstance.Equals(AnotherProductInstance) and when you perform actions like Contains<T>, IndexOf() or Remove() on your collection. Coming to the Equals() method defined from line 14 onwards: the two 'if' blocks check for null and referential equality using the ReferenceEquals() method defined in the Object class. Line 23 is where I'm doing the actual check on the properties of the Product instances. This is what returns the 'True' for us when we run the application. I have also overridden the Object.Equals() method, which calls the Equals() method of the interface. One thing to remember is that any time you override the Equals() method, it's good practice to override the GetHashCode() method and overload the '==' and '!=' operators. For detailed information on this, please read this and this. Since we've overloaded the operators as well, we get 'True' when we do actions like: 1: Console.WriteLine(products1.Contains(products2[0])); 2: Console.WriteLine(products1[0] == products2[0]); This completes the full circle on the SequenceEqual() method. See the code used in the article here.
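One extra note that the article doesn't cover: if you can't (or don't want to) modify the type itself, SequenceEqual() also has an overload that takes an IEqualityComparer<T>, so the comparison logic can live outside the Product class. A minimal sketch, assuming the Product class defined above:

    using System.Collections.Generic;
    using System.Linq;   // for the SequenceEqual usage shown at the bottom

    class ProductComparer : IEqualityComparer<Product>
    {
        public bool Equals(Product x, Product y)
        {
            if (ReferenceEquals(x, y)) return true;
            if (x is null || y is null) return false;
            return x.ProductId == y.ProductId
                && x.Name == y.Name
                && x.Category == y.Category
                && x.MfgDate == y.MfgDate
                && x.Status == y.Status;
        }

        // Must be consistent with Equals(): equal products return equal hash codes.
        public int GetHashCode(Product p) =>
            p is null ? 0 : p.ProductId.GetHashCode() ^ (p.Name ?? string.Empty).GetHashCode();
    }

    // Usage - no changes to Product are required:
    // Console.WriteLine(products1.SequenceEqual(products2, new ProductComparer()));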

    Read the article

  • How to Configure Windows Machine to Allow File Sharing with DNS Alias

    - by Michael Ferrante
    I have not seen a single article posted anywhere online that brings together all the settings one would need to do to make this work properly on Windows, so I thought I would post it here. To facilitate failover schemes, a common technique is to use DNS CNAME records (DNS Aliases) for different machine roles. Then instead of changing the Windows computername of the actual machine name, one can switch a DNS record to point to a new host. This can work on Microsoft Windows machines, but to make it work with file sharing the following configuration steps need to be taken. Outline The Problem The Solution Allowing other machines to use filesharing via the DNS Alias (DisableStrictNameChecking) Allowing server machine to use filesharing with itself via the DNS Alias (BackConnectionHostNames) Providing browse capabilities for multiple NetBIOS names (OptionalNames) Register the Kerberos service principal names (SPNs) for other Windows functions like Printing (setspn) References 1. The Problem On Windows machines, file sharing can work via the computer name, with or without full qualification, or by the IP Address. By default, however, filesharing will not work with arbitrary DNS aliases. To enable filesharing and other Windows services to work with DNS aliases, you must make registry changes as detailed below and reboot the machine. 2. The Solution Allowing other machines to use filesharing via the DNS Alias (DisableStrictNameChecking) This change alone will allow other machines on the network to connect to the machine using any arbitrary hostname. (However this change will not allow a machine to connect to itself via a hostname, see BackConnectionHostNames below). Edit the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters and add a value DisableStrictNameChecking of type DWORD set to 1. Allowing server machine to use filesharing with itself via the DNS Alias (BackConnectionHostNames) This change is necessary for a DNS alias to work with filesharing from a machine to find itself. This creates the Local Security Authority host names that can be referenced in an NTLM authentication request. To do this, follow these steps for all the nodes on the client computer: To the registry subkey HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0, add new Multi-String Value BackConnectionHostNames In the Value data box, type the CNAME or the DNS alias, that is used for the local shares on the computer, and then click OK. Note: Type each host name on a separate line. Providing browse capabilities for multiple NetBIOS names (OptionalNames) Allows ability to see the network alias in the network browse list. Edit the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters and add a value OptionalNames of type Multi-String Add in a newline delimited list of names that should be registered under the NetBIOS browse entries Names should match NetBIOS conventions (i.e. not FQDN, just hostname) Register the Kerberos service principal names (SPNs) for other Windows functions like Printing (setspn) NOTE: Should not need to do this for basic functions to work, documented here for completeness. We had one situation in which the DNS alias was not working because there was an old SPN record interfering, so if other steps aren't working check if there are any stray SPN records. You must register the Kerberos service principal names (SPNs), the host name, and the fully-qualified domain name (FQDN) for all the new DNS alias (CNAME) records. 
If you do not do this, a Kerberos ticket request for a DNS alias (CNAME) record may fail and return the error code KDC_ERR_S_SPRINCIPAL_UNKNOWN. To view the Kerberos SPNs for the new DNS alias records, use the Setspn command-line tool (setspn.exe). The Setspn tool is included in Windows Server 2003 Support Tools. You can install Windows Server 2003 Support Tools from the Support\Tools folder of the Windows Server 2003 startup disk. How to use the tool to list all records for a computername: setspn -L computername To register the SPN for the DNS alias (CNAME) records, use the Setspn tool with the following syntax: setspn -A host/your_ALIAS_name computername setspn -A host/your_ALIAS_name.company.com computername 3. References All the Microsoft references work via: http://support.microsoft.com/kb/ Connecting to SMB share on a Windows 2000-based computer or a Windows Server 2003-based computer may not work with an alias name Covers the basics of making file sharing work properly with DNS alias records from other computers to the server computer. KB281308 Error message when you try to access a server locally by using its FQDN or its CNAME alias after you install Windows Server 2003 Service Pack 1: "Access denied" or "No network provider accepted the given network path" Covers how to make the DNS alias work with file sharing from the file server itself. KB926642 How to consolidate print servers by using DNS alias (CNAME) records in Windows Server 2003 and in Windows 2000 Server Covers more complex scenarios in which records in Active Directory may need to be updated for certain services to work properly and for browsing for such services to work properly, how to register the Kerberos service principal names (SPNs). KB870911 Distributed File System update to support consolidation roots in Windows Server 2003 Covers even more complex scenarios with DFS (discusses OptionalNames). KB829885
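For reference, the two registry changes described in section 2 can also be applied programmatically instead of through regedit. This is only a sketch using the .NET registry API: run it elevated, reboot afterwards as noted above, and replace the placeholder alias names with your own DNS aliases.

    using Microsoft.Win32;

    class DnsAliasRegistrySketch
    {
        static void Main()
        {
            // DisableStrictNameChecking: allow other machines to reach the shares via the DNS alias.
            using (var key = Registry.LocalMachine.CreateSubKey(
                @"SYSTEM\CurrentControlSet\Services\lanmanserver\parameters"))
            {
                key.SetValue("DisableStrictNameChecking", 1, RegistryValueKind.DWord);
            }

            // BackConnectionHostNames: allow the server to reach its own shares via the DNS alias.
            // Note: this overwrites any existing value, so merge with existing entries if present.
            using (var key = Registry.LocalMachine.CreateSubKey(
                @"SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0"))
            {
                key.SetValue("BackConnectionHostNames",
                    new[] { "myalias", "myalias.company.com" },   // placeholder alias names
                    RegistryValueKind.MultiString);
            }
        }
    }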

    Read the article

  • MySQL InnoDB Corruption after power outage, possible to recover?

    - by Tim Hackett
    Hey Guys, I recently started trying to get Redmine up and running after a power outage that seems to have corrupted our InnoDB database in MySQL. Redmine had an extensive set of documentation that I would like to get even if redmine isn't able to run. The service fails on startup. I have tried inserting innodb_force_recovery = 4 per the documentation from the url in the error log. (also tried 1 thru 6 as I have backed up all directories after the corruption) I have verified through "mysqld-nt --print-defaults" that it is starting with the recovery option in the params. The machine is running Windows Server 2003 SP2, Xeon E5335 with 2GB RAM, MySQL is not mirrored to another machine, nor is the machine a mirror. I do not have any backups because the previous person did not set them up. Here is the error log: InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 100308 14:50:01 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 100308 14:50:02 InnoDB: Error: page 7 log sequence number 0 935521175 InnoDB: is in the future! Current system log sequence number 0 933419020. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: for more information. 100308 14:50:02 InnoDB: Error: page 2 log sequence number 0 935517607 InnoDB: is in the future! Current system log sequence number 0 933419020. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: for more information. 100308 14:50:02 InnoDB: Error: page 11 log sequence number 0 935517607 InnoDB: is in the future! Current system log sequence number 0 933419020. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: for more information. 100308 14:50:02 InnoDB: Error: page 5 log sequence number 0 972973045 InnoDB: is in the future! Current system log sequence number 0 933419020. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: for more information. 100308 14:50:02 InnoDB: Error: page 6 log sequence number 0 972984051 InnoDB: is in the future! Current system log sequence number 0 933419020. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: for more information. 100308 14:50:02 InnoDB: Error: page 1577 log sequence number 0 972737368 InnoDB: is in the future! Current system log sequence number 0 933419020. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: for more information. InnoDB: Error: trying to access page number 4294965119 in space 0, InnoDB: space name .\ibdata1, InnoDB: which is outside the tablespace bounds. 
InnoDB: Byte offset 0, len 16384, i/o type 10. InnoDB: If you get this error at mysqld startup, please check that InnoDB: your my.cnf matches the ibdata files that you have in the InnoDB: MySQL server. 100308 14:50:02InnoDB: Assertion failure in thread 960 in file .\fil\fil0fil.c line 3959 InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: about forcing recovery. 100308 14:50:02 [ERROR] mysqld-nt: Got signal 11. Aborting! 100308 14:50:02 [ERROR] Aborting 100308 14:50:02 [Note] mysqld-nt: Shutdown complete

    Read the article

  • heimdal kerberos in openldap issue

    - by Brian
    I think I posted this on the wrong 'sister site', so here it is. I'm having a bit of trouble getting Kerberos (Heimdal version) to work nicely with OpenLDAP. The kerberos database is being stored in LDAP itself. The KDC uses SASL EXTERNAL authentication as root to access the container ou. I created the database in LDAP fine using kadmin -l, but it won't let me use kadmin without the -l flag: root@rds0:~# kadmin -l kadmin> list * krbtgt/REALM kadmin/changepw kadmin/admin changepw/kerberos kadmin/hprop WELLKNOWN/ANONYMOUS WELLKNOWN/org.h5l.fast-cookie@WELLKNOWN:ORG.H5L default brian.empson brian.empson/admin host/rds0.example.net ldap/rds0.example.net host/localhost kadmin> exit root@rds0:~# kadmin kadmin> list * brian.empson/admin@REALM's Password: <----- With right password kadmin: kadm5_get_principals: Key table entry not found kadmin> list * brian.empson/admin@REALM's Password: <------ With wrong password kadmin: kadm5_get_principals: Already tried ENC-TS-info, looping kadmin> I can get tickets without a problem: root@rds0:~# klist Credentials cache: FILE:/tmp/krb5cc_0 Principal: brian.empson@REALM Issued Expires Principal Nov 11 14:14:40 2012 Nov 12 00:14:37 2012 krbtgt/REALM@REALM Nov 11 14:40:35 2012 Nov 12 00:14:37 2012 ldap/rds0.example.net@REALM But I can't seem to change my own password without kadmin -l: root@rds0:~# kpasswd brian.empson@REALM's Password: <---- Right password New password: Verify password - New password: Auth error : Authentication failed root@rds0:~# kpasswd brian.empson@REALM's Password: <---- Wrong password kpasswd: krb5_get_init_creds: Already tried ENC-TS-info, looping kadmin's logs are not helpful at all: 2012-11-11T13:48:33 krb5_recvauth: Key table entry not found 2012-11-11T13:51:18 krb5_recvauth: Key table entry not found 2012-11-11T13:53:02 krb5_recvauth: Key table entry not found 2012-11-11T14:16:34 krb5_recvauth: Key table entry not found 2012-11-11T14:20:24 krb5_recvauth: Key table entry not found 2012-11-11T14:20:44 krb5_recvauth: Key table entry not found 2012-11-11T14:21:29 krb5_recvauth: Key table entry not found 2012-11-11T14:21:46 krb5_recvauth: Key table entry not found 2012-11-11T14:23:09 krb5_recvauth: Key table entry not found 2012-11-11T14:45:39 krb5_recvauth: Key table entry not found The KDC reports that both accounts succeed in authenticating: 2012-11-11T14:48:03 AS-REQ brian.empson@REALM from IPv4:192.168.72.10 for kadmin/changepw@REALM 2012-11-11T14:48:03 Client sent patypes: REQ-ENC-PA-REP 2012-11-11T14:48:03 Looking for PK-INIT(ietf) pa-data -- brian.empson@REALM 2012-11-11T14:48:03 Looking for PK-INIT(win2k) pa-data -- brian.empson@REALM 2012-11-11T14:48:03 Looking for ENC-TS pa-data -- brian.empson@REALM 2012-11-11T14:48:03 Need to use PA-ENC-TIMESTAMP/PA-PK-AS-REQ 2012-11-11T14:48:03 sending 294 bytes to IPv4:192.168.72.10 2012-11-11T14:48:03 AS-REQ brian.empson@REALM from IPv4:192.168.72.10 for kadmin/changepw@REALM 2012-11-11T14:48:03 Client sent patypes: ENC-TS, REQ-ENC-PA-REP 2012-11-11T14:48:03 Looking for PK-INIT(ietf) pa-data -- brian.empson@REALM 2012-11-11T14:48:03 Looking for PK-INIT(win2k) pa-data -- brian.empson@REALM 2012-11-11T14:48:03 Looking for ENC-TS pa-data -- brian.empson@REALM 2012-11-11T14:48:03 ENC-TS Pre-authentication succeeded -- brian.empson@REALM using aes256-cts-hmac-sha1-96 2012-11-11T14:48:03 ENC-TS pre-authentication succeeded -- brian.empson@REALM 2012-11-11T14:48:03 AS-REQ authtime: 2012-11-11T14:48:03 starttime: unset endtime: 2012-11-11T14:53:00 renew till: unset 2012-11-11T14:48:03 Client 
supported enctypes: aes256-cts-hmac-sha1-96, aes128-cts-hmac-sha1-96, des3-cbc-sha1, arcfour-hmac-md5, using aes256-cts-hmac-sha1-96/aes256-cts-hmac-sha1-96 2012-11-11T14:48:03 sending 704 bytes to IPv4:192.168.72.10 2012-11-11T14:45:39 AS-REQ brian.empson/admin@REALM from IPv4:192.168.72.10 for kadmin/admin@REALM 2012-11-11T14:45:39 Client sent patypes: REQ-ENC-PA-REP 2012-11-11T14:45:39 Looking for PK-INIT(ietf) pa-data -- brian.empson/admin@REALM 2012-11-11T14:45:39 Looking for PK-INIT(win2k) pa-data -- brian.empson/admin@REALM 2012-11-11T14:45:39 Looking for ENC-TS pa-data -- brian.empson/admin@REALM 2012-11-11T14:45:39 Need to use PA-ENC-TIMESTAMP/PA-PK-AS-REQ 2012-11-11T14:45:39 sending 303 bytes to IPv4:192.168.72.10 2012-11-11T14:45:39 AS-REQ brian.empson/admin@REALM from IPv4:192.168.72.10 for kadmin/admin@REALM 2012-11-11T14:45:39 Client sent patypes: ENC-TS, REQ-ENC-PA-REP 2012-11-11T14:45:39 Looking for PK-INIT(ietf) pa-data -- brian.empson/admin@REALM 2012-11-11T14:45:39 Looking for PK-INIT(win2k) pa-data -- brian.empson/admin@REALM 2012-11-11T14:45:39 Looking for ENC-TS pa-data -- brian.empson/admin@REALM 2012-11-11T14:45:39 ENC-TS Pre-authentication succeeded -- brian.empson/admin@REALM using aes256-cts-hmac-sha1-96 2012-11-11T14:45:39 ENC-TS pre-authentication succeeded -- brian.empson/admin@REALM 2012-11-11T14:45:39 AS-REQ authtime: 2012-11-11T14:45:39 starttime: unset endtime: 2012-11-11T15:45:39 renew till: unset 2012-11-11T14:45:39 Client supported enctypes: aes256-cts-hmac-sha1-96, aes128-cts-hmac-sha1-96, des3-cbc-sha1, arcfour-hmac-md5, using aes256-cts-hmac-sha1-96/aes256-cts-hmac-sha1-96 2012-11-11T14:45:39 sending 717 bytes to IPv4:192.168.72.10 I wish I had more detailed logging messages, running kadmind in debug mode seems to almost work but it just kicks me back to the shell when I type in the correct password. GSSAPI via LDAP doesn't work either, but I suspect it's because some parts of kerberos aren't working either: root@rds0:~# ldapsearch -Y GSSAPI -H ldaps:/// -b "o=mybase" o=mybase SASL/GSSAPI authentication started ldap_sasl_interactive_bind_s: Other (e.g., implementation specific) error (80) additional info: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information () root@rds0:~# ldapsearch -Y EXTERNAL -H ldapi:/// -b "o=mybase" o=mybase SASL/EXTERNAL authentication started SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth SASL SSF: 0 # extended LDIF <snip> Would anyone be able to point me in the right direction?

    Read the article
