Search Results

Search found 5059 results on 203 pages for 'james sun'.


  • CodePlex Daily Summary for Tuesday, December 04, 2012

    Popular Releases

    - AcDown????? v4.3.2: a C# download manager, built on .NET Framework 2.0, for video sites including Acfun, Bilibili and YouTube. This release fixes several Acfun and Bilibili download issues and runs on 32- and 64-bit Windows XP and later.
    - ExtJS based ASP.NET 2.0 Controls: FineUI v3.2.2: ASP.NET 2.0 controls based on ExtJS - no JavaScript, no CSS, no UpdatePanel, no ViewState, no WebServices. Supported browsers: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+. License: Apache License 2.0. Forum: http://fineui.com/bbs/ | Demo: http://fineui.com/demo/ | Docs: http://fineui.com/doc/ | Project: http://fineui.codeplex.com/. The 2012-12-03 v3.2.2 changes include a Plain style for Window, a Footer option for ToolbarPosition, and a new FooterBarAlign property.
    - Player Framework by Microsoft: Player Framework for Windows Phone 8: a brand new version of the Player Framework for Windows Phone, available exclusively for Windows Phone 8 and now based upon the Player Framework for Windows 8. While this new version is not backward compatible with Windows Phone 7 (that version remains at http://smf.codeplex.com/releases/view/88970), it offers the same great feature set plus dozens of new features such as advertising, localization support, and improved skinning.
    - SSH.NET Library 2012.12.3: new feature: SynchronizeDirectories.
    - menu4web 1.1: a free JavaScript menu, tested with all major browsers: Firefox, Chrome, IE, Opera and Safari. The minified m4w.js library is less than 9K and includes 22 menu examples of different styles. Can be freely distributed under the MIT License (MIT).
    - Quest 5.3 Beta: new features include a grid-based map, changeable POV, a game log and customisable object link colour (all sponsored by Phillip Zolla), more room description options (by James Gregory), more mathematical functions available to expressions, a Desktop Player that uses the same UI as WebPlayer (which will make customisation options much easier to implement), and new sorting functions such as ObjectListSort(list, ...).
    - Chinook Database 1.4: a sample database available in multiple formats: SQL scripts for multiple database vendors, embedded database files, and XML. ChinookDatabase1.4_CompleteVersion.zip is a complete package for all supported databases/data sources; packages for each specific data source are also available. Supported database servers: DB2, EffiProz, MySQL, Oracle, PostgreSQL, SQL Server, SQL Server Compact, SQLite.
    - RiP-Ripper & PG-Ripper: RiP-Ripper 2.9.34: fixed the Thanks function when "Download each post in its own folder" is disabled; fixed "PixHub.eu" links.
    - D3 Loot Tracker 1.5.6: updated to work with D3 version 1.0.6.13300.
    - ????????API for .Net SDK: SDK for .Net Release 5 (2012-11-30): OAuth connections can now be made through a web proxy:

        OAuth oauth = new OAuth("<AppKey>", "<AppSecret>", "<CallbackUrl>");
        WebProxy proxy = new WebProxy();
        proxy.Address = new Uri("http://proxy.domain.com:3128"); // proxy address
        proxy.Credentials = new NetworkCredential("<user>", "<password>"); // if the proxy requires credentials
        oauth.Proxy = proxy;

    - DirectQ: DirectQ II 2012-11-29: a (slightly) modernized port of Quake II to D3D9. You need SM3 or better hardware to run this - if you don't have it, then don't even bother. It should work on Windows Vista, 7 or 8; it may also work on XP but that hasn't been tested. Known bugs: some mods may not work, which is unfortunately due to the nature of Quake II's game DLLs; sometimes a recompile of the game DLL is all that's needed. In any event, ensure that the game DLL is compatible with the last release of Quake II first.
    - Magelia WebStore Open-source Ecommerce software: Magelia WebStore 2.2: new UI for the administration console; bug fixes and improvements (version 2.2.215.3).
    - JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.2.5: Handlebars template engine support - implement data manager applications with JayData using Handlebars.js for templating (include JayDataModules/handlebars.js); see the blog posts "Handlebars templates in JayData" and "Handlebars helpers and model driven commanding in JayData". Easy JayStorm cloud data management - manage cloud data using the same syntax and data management concepts as any other data source. For detailed release notes, check the release notes page.
    - nopCommerce. Open source shopping cart (ASP.NET MVC): nopCommerce 2.70: highlight features and improvements: performance optimization; search engine optimization (ID-less URLs for products, categories, and manufacturers); ACL support (access control list) on products and categories; minified and bundled JavaScript files; a store owner can decide which billing/shipping address fields are enabled/disabled/required (as already done for the registration page); moved to MVC 4 (.NET 4.5 is required); Visual Studio 2012 is now required to work with the source.
    - SQL Server Partition Management: Partition Management Release 3.0: adds support for SQL Server 2012 and is backward compatible with SQL Server 2008 and 2005. The release consists of a readme file, the executable, and the source code (Visual Studio project). Enhancements include support for columnstore indexes in SQL Server 2012, the ability to create TSQL scripts for staging table and index creation operations, full locale-independent support for global date and time formats, support for binary partitioning column types, and assorted fixes.
    - NHook - A debugger API: NHook 1.0: an x86 debugger that can resolve symbols from the MS public server, resolve RVAs from an executable's image, add breakpoints, and assemble/disassemble target process assembly. More information is on the project page; the unit tests double as real sample code.
    - PDF Library: PDFLib v2.0: this new version includes many bug fixes plus support for stream objects and cross-reference object streams. New feature: extract images from the PDF.
    - MCEBuddy 2.x: MCEBuddy 2.3.10: critical update to 2.3.9. Changelog for 2.3.10 (32- and 64-bit): AsfBin executable was missing from the build; removed extra references from the build to avoid conflicts; Showanalyzer installation is now checked on the remote engine machine. Changelog for 2.3.9 (32- and 64-bit): added support for a WTV output profile, minimizing MCEBuddy to the system tray, a custom archive folder, disabling subdirectory monitoring, and better TS fil...
    - DotNetNuke® Community Edition CMS 07.00.00: major highlights: fixed an issue that caused profiles of deleted users to be available; removed the postback after checkboxes are selected in Page Settings > Taxonomy; implemented the functionality required to edit security role names and social group names; fixed a JavaScript error when using a ";" semicolon as a profile property; fixed an issue when using DateTime properties in profiles; fixed a viewstate error when using Facebook authentication in conjunction with "require valid profile fo...
    - CODE Framework 4.0.21128.0: see the change notes in the documentation section for details on what's new.

    New Projects

    - .Net Assembly Checker: a tool to help solve issues like "exceptions on assembly loading."
    - Abide - Halo Map Editor: used for modification of Halo 2 maps and other Blam engine games.
    - AX-OData Queries: a utility for understanding all queries within an AX 2012 instance that can be OData feeds.
    - Channel 9 Apps: aims to bring Channel 9 content natively to all smart devices in the form of native applications.
    - Edutainment: a Windows app that aims to teach students a specific topic.
    - Generic Windows Service Host: (release coming soon) a plug-in based Windows Service that automatically executes assemblies found in a specified directory on start-up.
    - Good Diary: a diary application.
    - hedefgrup: source code for hedef-grup.
    - InIReader: a small and simple C#-based INI reader; read, load and write INI files easily.
    - jsGotoDefinition: a VS2010 plugin that, amongst other things, allows developers to right-click and navigate to a JavaScript function's definition, giving them the same sort of experience available with C#.
    - JSON-RPC RT: implements the bidirectional JSON-RPC protocol using WinRT asynchronous async/await methods.
    - LearnByteartRetail: hosted on CodePlex so the author can work on the same source code at home and at the company.
    - MIX++: a C++ implementation of an emulator for the MIX processor and a MIXAL assembler as defined in 'The Art of Computer Programming'.
    - Nant SVN Extension: extends NAnt with SVN client tasks.
    - POM Tools: project summary.
    - QuotesDaily: a website to display quotes.
    - RSS Feed Reader for Windows 8: implements an RSS reader for a single data source, integrated with Windows 8 services.
    - SharePoint 2010 App Model: a set of components that emulate new SharePoint 2013 functionality, the App Model, on SharePoint 2010.
    - SiLL: a framework for rapid development of custom CMS solutions, designed to offer unprecedented flexibility with none of the overhead present in other frameworks.
    - Solid Edge SDK: Solid Edge SDK.
    - TestForCodePlex: a personal project for testing CodePlex; public users should not join.
    - verse: a simple dynamic language and runtime with a focus on parallelism and pattern matching.
    - VivendoByte Toolkit: a toolkit for building Windows Store apps; contains helpers to bind commands in MVVM, user controls and converters. Built using Galasoft MVVM Light.
    - Windows 8 Store Video App Framework: demonstrates building your own video app for Windows 8 - local videos, streams from the web, and even YouTube are no problem.
    - WJ's Windows Phone User Controls: a list of commonly used Windows Phone user controls from the author's daily projects, built on Windows Phone OS 7.1.
    - WorkingWU: project for a photo designer web layout.
    - xGui: a Direct3D GUI supporting DirectX 9/10/11.
    - YDNoteOpenAPI4N: a .NET library for the ydNote open API.


  • Self-signed certificates for a known community

    - by costlow
    Recently announced changes scheduled for Java 7 update 51 (January 2014) have established that the default security slider will require code signatures and the Permissions manifest attribute. Code signatures are a common practice recommended in the industry because they help determine that the code your computer will run is the same code that the publisher created. This post is written to help users that need to use self-signed certificates without involving a public Certificate Authority.

    The role of self-signed certificates within a known community

    You may still use self-signed certificates within a known community. The difference between self-signed and purchased-from-CA is that your users must import your self-signed certificate to indicate that it is valid, whereas Certificate Authorities are already trusted by default. This works for known communities where people will trust that my certificate is mine, but does not scale widely where I cannot actually contact or know the systems that will need to trust my certificate. Public Certificate Authorities are widely trusted already because they abide by many different requirements and frequent checks. Examples of a known community would be students in a university class sharing their public certificates on a mailing list or web page, employees publishing on the intranet, or a system administrator rolling certificates out to end-users. Managed machines help because you can automate the rollout, but they are not required -- the major point is simply that people will trust and import your certificate.

    How to distribute self-signed certificates for a known community

    There are several steps required to distribute a self-signed certificate to users so that they will properly trust it:

    1. Create a public/private key pair for signing.
    2. Export your public certificate for others.
    3. Import your certificate onto machines that should trust you.
    4. Verify the work on a different machine.

    Creating a public/private key pair for signing

    Having a public/private key pair will give you the ability both to sign items yourself and to issue a Certificate Signing Request (CSR) to a certificate authority. Create your public/private key pair by following the instructions for creating key pairs. Every Certificate Authority that I looked at provided similar instructions, but for the sake of cohesiveness I will include the commands that I used here. Generate the key pair:

        keytool -genkeypair -alias erikcostlow -keyalg EC -keysize 571 -validity 730 -keystore javakeystore_keepsecret.jks

    Provide a good password for this file. The alias "erikcostlow" is my name and therefore easy to remember; substitute your name or something like "mykey." The keyalg of EC (Elliptic Curve) and keysize of 571 will give your key a good strong lifetime. All keys are set to expire; two years, or 730 days, is a reasonable compromise between not-long-enough and too-long. Most public Certificate Authorities will sign something for one to five years.

    You will be placing your keys in javakeystore_keepsecret.jks -- this file will contain private keys and therefore should not be shared. If someone else gets these private keys, they can impersonate your signature. Please be cautious about automated cloud backup systems and private key stores.

    Answer all the questions. It is important to provide good answers because you will stick with them for the "-validity" days that you specified above.

        What is your first and last name?  [Unknown]:  First Last
        What is the name of your organizational unit?  [Unknown]:  Line of Business
        What is the name of your organization?  [Unknown]:  MyCompany
        What is the name of your City or Locality?  [Unknown]:  City Name
        What is the name of your State or Province?  [Unknown]:  CA
        What is the two-letter country code for this unit?  [Unknown]:  US
        Is CN=First Last, OU=Line of Business, O=MyCompany, L=City, ST=CA, C=US correct?  [no]:  yes
        Enter key password for <erikcostlow>  (RETURN if same as keystore password):

    Verify your work:

        keytool -list -keystore javakeystore_keepsecret.jks

    You should see your new key pair.

    Exporting your public certificate for others

    Public Key Infrastructure relies on two simple concepts: the public key may be made public, and the private key must be private. By exporting your public certificate, you are able to share it with others, who can then import the certificate to trust you.

        keytool -exportcert -keystore javakeystore_keepsecret.jks -alias erikcostlow -file erikcostlow.cer

    To verify this, you can open the .cer file by double-clicking it on most operating systems. It should show the information that you entered during the creation prompts. This is the file that you will share with others; they will use it to prove that artifacts signed by this certificate came from you. If you do not manage machines directly, place the certificate file in an area that people within the known community trust, such as an intranet page.

    Import the certificate onto machines that should trust you

    In order to trust the certificate, people within your known network must import your certificate into their keystores. The first step is to verify that the certificate is actually yours, which can be done through any out-of-band channel: email, phone, in person, etc. Known communities can usually manage this. Then determine the right keystore:

    - For an individual user looking to trust another, the correct file is within that user's directory, e.g. USER_HOME\AppData\LocalLow\Sun\Java\Deployment\security\trusted.certs
    - For system-wide installations, Java's Certificate Authorities are in JAVA_HOME, e.g. C:\Program Files\Java\jre8\lib\security\cacerts

    File paths for Mac and Linux are included in the link above. Follow the instructions to import the certificate into the keystore:

        keytool -importcert -keystore THEKEYSTOREFROMABOVE -alias erikcostlow -file erikcostlow.cer

    In this case, I am still using my name for the alias because it's easy for me to remember. You may also use an alias of your company name.

    Scaling distribution of the import

    The easiest way to apply your certificate across many machines is to just push the .certs or cacerts file onto them. When doing this, watch out for any changes that people have made to this file on their machines:

    - trusted.certs: when publishing into user directories, your file will overwrite any keys that the user has added since the last update.
    - cacerts: it is best to re-run the import command with each installation rather than just overwriting the file. If you just keep the same cacerts file between upgrades, you will overwrite any CAs that have been added or removed. By re-importing, you stay up to date with changes.

    Verify work on a different machine

    Verification is a way of checking on the client machine to ensure that it properly trusts signed artifacts after you have added your signing certificate. Many people have started using deployment rule sets. You can validate a deployment rule set by:

    1. Creating and signing the deployment rule set on the computer that holds the private key.
    2. Copying the deployment rule set onto the different machine where you have imported the signing certificate.
    3. Verifying that the Java Control Panel's security tab shows your deployment rule set.

    Verifying an individual JAR file or multiple JAR files

    You can test a certificate chain by using the jarsigner command:

        jarsigner -verify filename.jar

    If the output does not say "jar verified", run the following command to see why:

        jarsigner -verify -verbose -certs filename.jar

    Check the output for the term "CertPath not validated."
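    One step the post assumes but does not show is the signing itself. A minimal sketch, using the keystore and alias created above (the signed output file name is illustrative):

        jarsigner -keystore javakeystore_keepsecret.jks -signedjar filename-signed.jar filename.jar erikcostlow

    You will be prompted for the keystore password, and the resulting signed JAR can then be checked with the jarsigner -verify commands shown above.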


  • Windows Phone Developer Spotlight: Nikolai Joukov

    - by Lori Lalonde
    Originally posted on: http://geekswithblogs.net/lorilalonde/archive/2014/06/04/windows-phone-developer-spotlight-nikolai-joukov.aspx

    As part of an ongoing series, I plan to include a spotlight post on people within the community who are stars in their field and area of expertise. For my first spotlight post, I interviewed Nikolai Joukov, who is a regular attendee at my local area .NET User Group (CTTDNUG) and has also participated in many of the Mobile and Cloud workshops we have hosted over the past few years. Nikolai stood out immediately because of his passion for developing mobile apps, his interest in continuous learning, and his drive to publish quality apps that people will find useful and entertaining.

    Background: Nikolai immigrated to Canada in 1995 and has been working in IT since 1997. He moved on to become an independent contractor in 2005, and has worked at various large-scale organizations over the course of his career, including BMO, Enbridge, Economical Insurance, Equitable Life, Manulife and Sun Life. Nikolai is an accomplished Windows Phone and Windows Store publisher, with 11 published Windows Phone apps and 8 published Windows Store apps. He has almost 6000 downloads and favourable reviews.

    Q & A with Nikolai

    How many years have you been developing Windows Phone apps?
    2 years.

    When did you develop your very first Windows Phone app, and what was it about?
    Actually, the very first app I wrote was for the Microsoft "Smart Phone" back in 2004. This phone was given to me by Microsoft during the Developers Days Conference in Toronto. It was some kind of experimental model named Smart Phone, but you had to use VB 3 to develop the applications. Needless to say, this was not very successful at that time. My app was a stock trades calculator - very primitive, but it worked for me. The phone was heavy and the battery barely lasted 4 hours. Microsoft stopped supporting it a few months later and the phone stopped working shortly after, but I still have it as a souvenir. For Windows Phone, my first app was "Trip Packing Assistant": a simple trip packing checklist that allows you to list items by category, set the required quantity of each item, and mark items off the list as they are packed. I designed it for me and my wife Galina, since we love to travel and this program helps manage our list for us.

    How did you get started in Windows Phone development?
    I have to say thanks to our .NET User Group for introducing me to Windows Phone development. I was intrigued and decided to give it a try. In October 2012, during a 2-day training event that ObjectSharp hosted in London, I met Bruce Johnson. On his advice, I registered for Developer Movement, and it was a good push to actually complete the apps that I started.

    You have a great series of travel guide apps, both for Windows Phone and Windows Store. Tell us how you came up with the idea to develop those apps and what process you went through to put it all together.
    Like I said earlier, my wife and I love to travel. Before I created Trip Packing Assistant, every time we were planning to travel somewhere new, Galina would spend 3-4 weeks doing research. She would create a Word document with all of the information. We didn't want to have to carry our laptop with us all the time, so we printed out the Word document she created and took it with us. After we returned from a trip, we would bring back tons of pictures and materials. Then our friends started to ask us about our materials before they planned their trips to the same places we had visited. So I decided to give it a try and started making apps for Windows Phone and for Windows 8. I hope these applications will help people who are planning to travel.

    So, all of the pictures used in the travel apps you created were actually taken by you during these amazing trips?
    Yes.

    Do you have another Windows Store/Windows Phone project in development right now? If so, can you give us a hint at what it will be about?
    I want to stay with travel apps for now. But this time I will try to write an app for us (Galina and I). Usually we go on the trip, and then I write the apps after we have all these beautiful pictures in our hands. We are planning a trip to Rome. This app will not have the pictures, but I want to add a map with points of interest and all the information that can be useful for us. Then we will go on our trip and test it on location. I am also planning to work on my existing apps to make them better.

    What learning resources would you recommend for other developers who want to get started in Windows Store and Windows Phone development?
    I would start with dev.windowsphone.com to get all the tools and samples, plus links to training materials. I like MVA (Microsoft Virtual Academy); their videos are really useful, and it is free. Pluralsight is good too, but it is not free and I do not have a subscription anymore. Our .NET User Group meetings give good insights too; I went to all the meetings and full-day training events. When you start to develop your app, you need to research the specific questions that arise during development. The Developer Portal and Nokia Developer are good resources too.

    Wrap Up

    Thanks Nikolai for participating in my first Spotlight blog post! Shown below are Nikolai's publisher page in the Windows Phone Store and his publisher page in the Windows Store. Simply click on either to check out his portfolio of apps. Be sure to download his apps and try them out! They are all free!

    Nikolai's Windows Phone apps
    Nikolai's Windows Store Apps


  • CodePlex Daily Summary for Saturday, December 08, 2012

    Popular Releases

    - Ynote Classic 1.0: a text editor made by SS Corporation that helps you write code, with support for creating HTML and batch files as well as C/C++/Java files. Available with different themes and skins, it can also compile *.bat files into an executable and has a built-in calculator. First version, released 6-12-12 by Samarjeet Singh. Please contact on http:...
    - Http Explorer httpExplorer-1.1: httpExplorer can now connect to HTTP servers via web proxies. The proxy may be explicitly specified by hostname or IP address, or via the Internet Options settings of Windows. You may also specify credentials to pass to the proxy if the proxy requires them; these credentials may be NTLM or basic authentication (clear-text username and password).
    - Bee OPOA Platform: Bee OPOA Demo V1.0.001: initial version.
    - Microsoft Ajax Minifier 4.78: fix for issue #18924 (using the -pretty option left in ///#DEBUG blocks); fix for issue #18980 (a bad += optimization caused a bug in the resulting code; the optimization has been removed pending further review).
    - Torrents-List Organizer v0.5.0.0.
    - fastJSON v2.0.11: bug fix for single-character number JSON; added a UseEscapedUnicode parameter for controlling \uxxxx string output in unicode/UTF-8 format; bug fixes for null and generic ToObject<>() and for List<> of custom types.
    - Media Companion MediaCompanion3.508b: recommended download - fixes an IMDB title scrape bug and several mc_com movie and actor cache bugs.
    - Measure It v0.2.1: updated with lots of bug fixes and support for interactive measurements in LinqPad.
    - Periodic.Net 0.8: what's new: a new Element Info dialog, a new Website menu item, and minor bug fixes, improvements and speed-ups.
    - Yahoo! UI Library: YUI Compressor for .Net version 2.2.0.0 - Epee: new Web Optimization package; cleaned up the NuGet packages; bug fix: minifying lots of files is now faster after a recent regression in some code (something was being instantiated far too many times).
    - DtPad - .NET Framework text editor 2.9.0.40: a new built-in editor for the management of CSV files, including editing cells, deleting and adding rows, replacement of the delimiter character and much more (issue #1137); the limit of rows allowed before decommissioning the side panel has been raised (new default: 1,000) (issue #1155, only partially solved); pressing CTRL+TAB now opens a screen that shows the list of opened tabs (issue #1143); ...
    - AvalonDock 2.0.1746: welcome to the new release of AvalonDock 2.0. This release contains a lot of bug fixes and some great improvements: views caching (the content of documents and anchorables is no longer recreated every time the user moves it); the autohide pane now opens really fast; and two new themes, Expression (Dark and Light) and Metro (both still experimental). If you already use AD 2.0 or plan to integrate it into future projects, the author is interested in your ideas for new features: http://avalondock...
    - AcDown????? v4.3.2, ExtJS based ASP.NET 2.0 Controls (FineUI v3.2.2), Player Framework for Windows Phone 8, SSH.NET Library 2012.12.3, Quest 5.3 Beta, Chinook Database 1.4, RiP-Ripper 2.9.34, and D3 Loot Tracker 1.5.6: the same releases as listed in the December 04 summary above.

    New Projects

    - Aqui Estoy (IP announcing Tool): once installed on a computer, announces its IP to another computer so it can be located; useful for finding computers that have dynamic IPs. Developed with VB.NET and sockets technology.
    - Bing Maps for Windows Store Apps Training Kit: a PowerPoint slide deck giving an overview of how to create a Windows Store app that uses Bing Maps.
    - BTFramework: Beauty Code Framework.
    - CAML Builder: a small and simple API that allows you to easily write CAML queries in a declarative way.
    - Easy sound: a .NET tool for managing audio streams in memory; joins existing WAV streams into a single one using different methods.
    - EBusiness: a little prototype of an education project.
    - Field Validator: a Silverlight 3 field validator.
    - finalproject: a Web 2.0 project about the Real Madrid community.
    - fossilGui: a GUI for fossil (http://www.fossil-scm.org).
    - HubLog: a tool for application administrators: collects and joins logs from multiple instances of an application and displays them. Example of usage in C#: the telnet protocol, dynamically compiled functions as elements of configuration.
    - Implementing Google Protocol Buffers using C#: a demonstration of implementing Google Protocol Buffers using C#.
    - Jarvis PSO Course Project: a project for the Programmazione di Sistema course taught at Politecnico di Torino.
    - L3374tw: translates text into L337.
    - log4net Dynamics CRM 2011 Appender: a log4net appender for Dynamics CRM 2011.
    - Note Garden: an extension of NodeGarden (http://alphalabs.codeplex.com) for Windows Phone 8, using Silverlight.
    - NV2: a library supporting playback of v2m files.
    - Paste As Plugin For Windows Live Writer: a simple plugin that streamlines pasting text and HTML into a WLW post.
    - PMS_LSI: PMS LSI.
    - Poker Calculator: Monte Poker, a poker utility that calculates probabilities of hands.
    - postleitzahlensuche: a C# WPF application for searching postal codes; search by postal code or place name and get back a list of matching places with their postal codes.
    - Pythagorean Theorem in WPF: shows off using the Pythagorean theorem in a simple WPF drawing application; measuring the distance between two objects in WPF can easily be achieved using this approach.
    - Resource management language: an attempt to create an automata-based resource management language in C#. Resource management here means working with computational resources (possibly threads, processors) in order to execute a program.
    - Resource-Based Economy: a .NET framework to help with the implementation of a global resource-based economy.
    - Sinbiota 2.0 prototype: an open-source biodiversity information system that allows the presentation of biodiversity data through an enhanced GIS framework.
    - Snapword: work in progress.
    - UISandbox: sample C# source code showing how to deal with plugins requiring a sandbox, when those plugins must interact with a WPF application interface (classically, displaying child controls inside the application window).
    - UnitFundProfitability: helps calculate the real profitability of your investments in unit funds; it knows about markdown rate, markup rate, taxes and other payments that decrease declared profitability.
    - Z3 Test Suite: a test suite for the Z3 theorem prover.


  • Securing WebSocket applications on Glassfish

    - by Pavel Bucek
    Today we are going to cover deploying secured WebSocket applications on Glassfish and accessing those services using the WebSocket Client API.

    WebSocket server application setup

    Our server endpoint might look as simple as this:

        @ServerEndpoint("/echo")
        public class EchoEndpoint {

          @OnMessage
          public String echo(String message) {
            return message + " (from your server)";
          }
        }

    Everything else must be configured at the container level. We can start with enabling SSL, which will require a web.xml to be added to your project. For starters, it might look like the following:

        <web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee">
          <security-constraint>
            <web-resource-collection>
              <web-resource-name>Protected resource</web-resource-name>
              <url-pattern>/*</url-pattern>
              <http-method>GET</http-method>
            </web-resource-collection>
            <!-- https -->
            <user-data-constraint>
              <transport-guarantee>CONFIDENTIAL</transport-guarantee>
            </user-data-constraint>
          </security-constraint>
        </web-app>

    This is a minimal web.xml for this task: web-resource-collection just defines the URL pattern and HTTP method(s) we want to put a constraint on, and user-data-constraint defines that constraint, which in our case is transport-guarantee. More information about these properties and security settings for web applications can be found in the Oracle Java EE 7 Tutorial.

    I have a simple webpage attached as well, so I can test my endpoint right away. You can find it (along with the complete project) in the Tyrus workspace: [webpage] [whole project]. After deploying this application to Glassfish Application Server, you should be able to hit it using your favorite browser. The URL where my application resides is https://localhost:8181/sample-echo-https/ (it may differ, depending on other configuration). My browser warns me about an untrusted certificate (I use what a freshly built Glassfish provides: self-signed certificates), and after adding an exception for this site, I can see my webpage and am able to securely connect to wss://localhost:8181/sample-echo-https/echo.

    WebSocket client

    The demo application mentioned above also contains a test client, but its execution is skipped for a normal build. The reason is that Glassfish uses these self-signed "random" untrusted certificates, and you are (in most cases) not able to connect to these services without additional settings. Creating a test WebSocket client is actually quite similar to the server side; the only difference is that you have to create a client container somewhere and invoke connect with some additional info. The Java API for WebSocket allows you to use annotated and programmatic ways to construct endpoints. The server side shows the annotated case, so let's see how the programmatic approach looks:

        final WebSocketContainer client = ContainerProvider.getWebSocketContainer();
        client.connectToServer(new Endpoint() {
          @Override
          public void onOpen(Session session, EndpointConfig endpointConfig) {
            try {
              // register a message handler - will just print out the
              // received message on standard output
              session.addMessageHandler(new MessageHandler.Whole<String>() {
                @Override
                public void onMessage(String message) {
                  System.out.println("### Received: " + message);
                }
              });
              // send a message
              session.getBasicRemote().sendText("Do or do not, there is no try.");
            } catch (IOException e) {
              // do nothing
            }
          }
        }, ClientEndpointConfig.Builder.create().build(),
           URI.create("wss://localhost:8181/sample-echo-https/echo"));

    This client should work with any secured endpoint that has a valid certificate signed by a trusted certificate authority (you can try that with wss://echo.websocket.org). Accessing our Glassfish instance will require some additional settings. You can tell Java which certificates you trust by adding the -Djavax.net.ssl.trustStore property (and a few others if you are using the linked sample). The complete command line when you are testing your service might need to look something like:

        mvn clean test -Djavax.net.ssl.trustStore=$AS_MAIN/domains/domain1/config/cacerts.jks \
          -Djavax.net.ssl.trustStorePassword=changeit -Dtyrus.test.host=localhost \
          -DskipTests=false

    where AS_MAIN points to your Glassfish instance.

    Note: you might need to set up the keyStore and trustStore per client instead of per JVM; there is a way to do it, but it is a Tyrus proprietary feature: http://tyrus.java.net/documentation/1.2.1/user-guide.html#d0e1128.

    And that's it! Now nobody is able to "hear" what you are sending to or receiving from your WebSocket endpoint. There is always room for improvement, so the next step you might want to take is introducing an authentication mechanism (like HTTP Basic or Digest). This topic is more about container configuration, so I'm not going to go into details, but there is one thing worth mentioning: to access services which require authorization, you might need to put this additional information into the HTTP headers of the first (Upgrade) request (there is not (yet) any direct support even for these fundamental mechanisms, so users need to register a Configurator and add headers in the beforeRequest method invocation). I filed the related feature request as TYRUS-228; feel free to comment/vote if you need this functionality.
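    As an aside to the trust-store discussion above: if passing -D flags on the command line is inconvenient (for example, when the client runs inside another harness), the same settings can be applied in code before the first connection is made. This is a minimal sketch using standard JSSE system properties, not anything Tyrus-specific, and the trust-store path is a placeholder:

        // A minimal sketch: point JSSE at the Glassfish-generated trust store
        // before any TLS connection is created. The path is a placeholder for
        // your own Glassfish installation.
        System.setProperty("javax.net.ssl.trustStore",
            "/path/to/glassfish/domains/domain1/config/cacerts.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");

        // then create the container and connect as shown above
        final WebSocketContainer client = ContainerProvider.getWebSocketContainer();

    These properties are read when the first TLS handshake occurs, so they must be set before any secure connection is attempted in the JVM.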


  • NUMA-aware placement of communication variables

    - by Dave
    For classic NUMA-aware programming I'm typically most concerned about simple cold, capacity and compulsory misses and whether we can satisfy the miss from locally connected memory or whether we have to pull the line from its home node over the coherent interconnect -- we'd like to minimize channel contention and conserve interconnect bandwidth. That is, for this style of programming we're quite aware of where memory is homed relative to the threads that will be accessing it. Ideally, a page is collocated on the node with the thread that's expected to most frequently access the page, as simple misses on the page can then be satisfied without resorting to transferring the line over the interconnect. The default "first touch" NUMA page placement policy tends to work reasonably well in this regard. When a virtual page is first accessed, the operating system will attempt to provision and map that virtual page to a physical page allocated from the node where the accessing thread is running. It's worth noting that the node-level memory interleaving granularity is usually a multiple of the page size, so we can say that a given page P resides on some node N; that is, the memory underlying a page resides on just one node.

    But when thinking about accesses to heavily-written communication variables we normally consider what caches the lines underlying such variables might be resident in, and in what states. We want to minimize coherence misses, cache probe activity and interconnect traffic in general. I don't usually give much thought to the location of the home NUMA node underlying such highly shared variables. On a SPARC T5440, for instance, which consists of 4 T2+ processors connected by a central coherence hub, the home node and placement of heavily accessed communication variables has very little impact on performance. The variables are frequently accessed, so the line is likely in M-state in some cache, and the location of the home node is of little consequence because a requester can use cache-to-cache transfers to get the line. Or at least that's what I thought.

    Recently, though, I was exploring a simple shared memory point-to-point communication model where a client writes a request into a request mailbox and then busy-waits on a response variable. It's a simple example of delegation based on message passing. The server polls the request mailbox and, having fetched a new request value, performs some operation and then writes a reply value into the response variable. As noted above, on a T5440 performance is insensitive to the placement of the communication variables -- the request and response mailbox words. But on a Sun/Oracle X4800 I noticed that was not the case, and that NUMA placement of the communication variables was actually quite important.

    For background, an X4800 system consists of 8 Intel X7560 Xeons. Each package (socket) has 8 cores with 2 contexts per core, so the system is 8x8x2. Each package is also a NUMA node and has locally attached memory. Every package has 3 point-to-point QPI links for cache coherence, and the system is configured with a twisted ladder "mobius" topology. The cache coherence fabric is glueless -- there's no central arbiter or coherence hub. The maximum distance between any two nodes is just 2 hops over the QPI links. For any given node, 3 other nodes are 1 hop distant and the remaining 4 nodes are 2 hops distant.

    Using a single request (client) thread and a single response (server) thread, a benchmark harness explored all permutations of NUMA placement for the two threads and the two communication variables, measuring the average round-trip time and throughput rate between the client and server. In this benchmark the server simply acts as a transponder, writing the request value plus 1 back into the reply field, so there's no particular computation phase and we're only measuring communication overheads. In addition to varying the placement of communication variables over pairs of nodes, we also explored variations where both variables were placed on one page (and thus on one node) -- either on the same cache line or different cache lines -- while varying the node where the variables reside along with the placement of the threads.

    The key observation was that if the client and server threads were on different nodes, the best placement of variables was to have the request variable (written by the client and read by the server) reside on the same node as the client thread, and to place the response variable (written by the server and read by the client) on the same node as the server. That is, if you have a variable that's to be written by one thread and read by another, it should be homed with the writer thread. For our simple client-server model, that means using split request and response communication variables with unidirectional message flow on a given page. This can yield up to twice the throughput of less favorable placement strategies.

    Our X4800 uses the QPI 1.0 protocol with source-based snooping. Briefly, when node A needs to probe a cache line it fires off snoop requests to all the nodes in the system. Those recipients then forward their response not to the original requester, but to the home node H of the cache line. H waits for and collects the responses, adjudicates and resolves conflicts, ensures memory-model ordering, and then sends a definitive reply back to the original requester A. If some node B needed to transfer the line to A, it will do so by cache-to-cache transfer and let H know about the disposition of the cache line. A needs to wait for the authoritative response from H. So if a thread on node A wants to write a value to be read by a thread on node B, the latency is dependent on the distances between A, B, and H. We observe the best performance when the written-to variable is co-homed with the writer A. That is, we want H and A to be the same node, as the writer doesn't need the home to respond over a QPI link when the writer and the home reside on the very same node. With architecturally informed placement of communication variables we eliminate at least one QPI hop from the critical path.

    Newer Intel processors use the QPI 1.1 coherence protocol with home-based snooping. As noted above, under source-snooping a requester broadcasts snoop requests to all nodes; those nodes send their response to the home node of the location, which provides memory ordering, reconciles conflicts, etc., and then posts a definitive reply to the requester. In home-based snooping the snoop probe goes directly to the home node and is not broadcast. The home node can consult snoop filters -- if present -- and send out requests to retrieve the line if necessary. The 3rd-party owner of the line, if any, can respond either to the home or the original requester (or even to both) according to the protocol policies. There are myriad variations that have been implemented, and unfortunately vendor terminology doesn't always agree between vendors or with the academic taxonomy papers. The key is that home-snooping enables the use of a snoop filter to reduce interconnect traffic. And while home-snooping might have a longer critical path (latency) than source-based snooping, it may also require fewer messages and less overall bandwidth. It'll be interesting to reprise these experiments on a platform with home-based snooping.

    While collecting data I also noticed that there are placement concerns even in the seemingly trivial case when both threads and both variables reside on a single node. Internally, the cores on each X7560 package are connected by an internal ring. (Actually, there are multiple contra-rotating rings.) And the last-level on-chip cache (LLC) is partitioned into banks or slices, with each slice associated with a core on the ring topology. A hardware hash function associates each physical address with a specific home bank. Thus we face distance and topology concerns even for intra-package communications, although the latencies are not nearly the magnitude we see inter-package. I've not seen such communication-distance artifacts on the T2+, where the cache banks are connected to the cores via a high-speed crossbar instead of a ring -- communication latencies seem more regular.
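    To make the placement rule concrete - home each mailbox with its writer - here is a minimal C sketch using libnuma. The node numbers and mailbox size are illustrative, thread creation and binding are elided, and error handling is minimal:

        /* build: cc -o mailbox mailbox.c -lnuma */
        #include <numa.h>
        #include <stdio.h>

        /* One cache line per mailbox so the two variables never share a line. */
        #define CACHE_LINE 64

        int main(void) {
            if (numa_available() < 0) {
                fprintf(stderr, "libnuma not available\n");
                return 1;
            }
            int client_node = 0, server_node = 1;  /* illustrative node choices */

            /* Request mailbox: written by the client, so home it on the client's node. */
            volatile long *request = numa_alloc_onnode(CACHE_LINE, client_node);
            /* Response mailbox: written by the server, so home it on the server's node. */
            volatile long *response = numa_alloc_onnode(CACHE_LINE, server_node);

            /* ... a client thread bound to client_node writes *request and
               busy-waits on *response; a server thread bound to server_node
               polls *request and writes *response ... */

            numa_free((void *)request, CACHE_LINE);
            numa_free((void *)response, CACHE_LINE);
            return 0;
        }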


  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to:

    · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data)
    · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog)

    The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel rather than sequentially. This delivers benefits to workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments.

    Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include:

    · Global Transaction Identifiers, coupled with MySQL utilities for automatic failover/switchover and slave promotion
    · Crash Safe Slaves and Binlog
    · Optimized Row Based Replication
    · Replication Event Checksums
    · Time Delayed Replication

    These and many more are discussed in the "MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services" Developer Zone article.

    Back to the benchmark - details are as follows.

    Environment

    The test environment consisted of two Linux servers: one running the replication master and one running the replication slave. Only the slave was involved in the actual measurements, and it was based on the following configuration:

    - Hardware: Oracle Sun Fire X4170 M2 Server
    - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz
    - OS: 64-bit Oracle Enterprise Linux 6.1
    - Memory: 48 GB

    Test Procedure

    Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table:

        CREATE TABLE `sbtest` (
           `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
           `k` int(10) unsigned NOT NULL DEFAULT '0',
           `c` char(120) NOT NULL DEFAULT '',
           `pad` char(60) NOT NULL DEFAULT '',
           PRIMARY KEY (`id`),
           KEY `k` (`k`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1

    10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog was noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master.
    Each sysbench client executed 10,000 "update key" statements:

        UPDATE sbtest SET k=k+1 WHERE id = <random row>

    In total, this generated 100,000 update statements to later replicate during the test itself.

    Test Methodology: The number of slave workers to test with was configured using:

        SET GLOBAL slave_parallel_workers=<workers>

    Then the slave IO thread was started, and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started, and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS).

    Test Reset: The test-reset cycle was implemented as follows:

    · the slave was stopped
    · the slave data directory was replaced with the previous backup
    · the slave was restarted with the slave threads' replication pointer repositioned to the point before the update queries in the binlog

    The test could then be repeated with an identical set of queries but a different number of slave worker threads, enabling a fair comparison. The test-reset cycle was repeated 3 times for 0-24 workers, and the QPS metric was calculated and averaged for each worker count.

    MySQL Configuration

    The relevant configuration settings used for MySQL are as follows:

        binlog-format=STATEMENT
        relay-log-info-repository=TABLE
        master-info-repository=TABLE

    As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is:

    0 worker threads:
       - current (i.e. single-threaded) sequential mode
       - 1 x IO thread and 1 x SQL thread
       - the SQL thread both reads and executes the events

    1 worker thread:
       - sequential mode
       - 1 x IO thread, 1 x coordinator SQL thread and 1 x worker thread
       - the coordinator reads each event and hands it to the worker, who executes it

    2+ worker threads:
       - parallel execution
       - 1 x IO thread, 1 x coordinator SQL thread and 2+ worker threads
       - the coordinator reads events and hands them to the workers, who execute them

    Results

    Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 schemas. This result is compared to the current replication implementation, which is based on a single SQL thread only (i.e. zero worker threads).

    Figure 1: 5x Higher Performance with Multi-Threaded Slaves

    The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post.

    Figure 2: Detailed Results

    As the results above show, the configuration does not scale noticeably from 5 to 9 worker threads. When configured with 10 worker threads, however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results:

    · Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution.
    · As expected, having more workers than schemas adds no visible benefit.
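    Putting the methodology together, one benchmark run on the slave might be sketched as follows. The binlog file name and position are hypothetical stand-ins for the coordinates noted during setup; in practice the harness timed the interval between starting the SQL thread and MASTER_POS_WAIT() returning:

        -- configure the worker count for this run (10 workers shown)
        STOP SLAVE;
        SET GLOBAL slave_parallel_workers = 10;

        -- fill the relay log with the 100k updates
        START SLAVE IO_THREAD;

        -- benchmark clock starts here
        START SLAVE SQL_THREAD;

        -- block until the slave has applied events up to the noted master
        -- binlog position (file name and position are hypothetical)
        SELECT MASTER_POS_WAIT('mysql-bin.000042', 107374182);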
    Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance:

        relay-log-info-repository=TABLE
        master-info-repository=TABLE

    For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE.

    Conclusion

    As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6, are fully available for you to download and evaluate now from the MySQL Developer site (select the Development Release tab). You can learn more about MySQL 5.6 from the documentation. Please don't hesitate to comment on this or other replication blogs with feedback and questions.

    Appendix – Detailed Results


  • Community Outreach - Where Should I Go

    - by Roger Brinkley
    A few days ago I was talking to a person new to community development, and they asked me what guidelines I use to determine the worthiness of a particular event. After our conversation was over, I thought about it a little more and figured out there are three ways to determine whether any event (be it a conference, blog, podcast or other social media) is worth doing: Transferability, Multiplication, and Impact.

    Transferability - Is what I have to say useful to the people who are going to hear it? For instance, consider a company that has a product offering that can connect using a number of languages like Scala, Groovy or Java. Sending a Scala expert to talk about Scala and the product is not transferable to a Java User Group, but a Java expert doing the same talk with a Java slant is. Similarly, talking about JavaFX to any Java User Group meeting in Brazil was pretty much a wasted effort until it was open sourced; once it was open sourced, it was well received. You can also look at transferability in relation to the subject matter you're dealing with. How transferable is a presentation that I create? Can I, or a technical writer on the staff, turn it into a technical document? Could it be converted into some type of screencast? If we have a regular podcast, can we reference the document, catch the high points, or turn it into an interview? Is there a way of using this in the sales group? In other words, is the document purely one-dimensional, or can it be re-purposed in other forms?

    Multiplication - On every trip I'm looking for 2 to 5 solid connections that I can make with developers. These are long-term connections, because I know that once a relationship is established it will lead to another 2-5 from that connection, and within a couple of years we're talking about some 100 connections from just one developer. For instance, when I was working on JavaHelp in 2000, I hired a science teacher with a programming background. We've developed a very tight relationship over the years, though we rarely see each other more than once a year. But at this JavaOne, one of his employees came up to me and said, "Richard (Rick Hard in Czech) told me to tell you that he couldn't make it to JavaOne this year, but if I saw you to tell you hi." Another example is from my Mobile & Embedded days in Brazil. On our very first FISL trip, about 5 years ago, there were two university students who had created a project called "Marge". Marge was a Bluetooth framework that made connecting Bluetooth devices easier. I invited them to a "Sun" dinner that evening. Originally they were planning on leaving that afternoon, but they changed their plans, recognizing the opportunity. Their eyes were as big as saucers when they realized the level of engineers at the meeting. They went home and started a JUG in Florianópolis that we've visited more than a couple of times. One of them went to work for a Brazilian government lab - something like Berkeley Lab, the MIT labs, the Johns Hopkins Applied Physics Lab or Lincoln Labs in the US. That presented us with an opportunity to show Embedded Java as a possibility for some of the work they were doing there.

    Impact - The final criterion is how life-changing what I'm going to say will be to the individuals I'm reaching. A t-shirt is just a token, but when I reach down and tug at their developer hearts, then I know I've succeeded. I'll never forget one time we flew all night to reach Joan Pasoa in Northern Brazil. We arrived at 2am, went immediately to our hotel, only to be woken at 6am to travel 2 hours by car to the presentation hall. When we arrived, we were totally exhausted. Outside the facility there were 500 people lined up to hear 6 speakers for the day. That itself was uplifting. I delivered one of my favorite talks, "I have passion" - a talk on golf and embedded Java development: find your passion. When we finished, a couple of first-year students came up to me and said how much my talk had inspired them.

    FISL is another great example. I had been about 4 years in a row. FISL is a very young group of developers, so capturing their attention is important. Several of the students will come back 2 or 3 years later and ask me questions about research or jobs. And then there's Louis. Louis is one of my favorite Brazilians. I can only describe him as a big Brazilian teddy bear. I see him every year at FISL. He works primarily in Java EE, but he's attended every single one of my talks over the last 4 years. I can't tell you why, but he always greets me and gives me a hug. For some reason I've had a real impact.

    And of course, when it comes to impact, you don't just measure a presentation but every single interaction you have at an event. It's the hallway conversations, the booth conversations, but more importantly it's the conversations at dinner tables or in the cars when you're being transported to an event. There's a good story that illustrates this. Last year in the spring I was traveling to Goiânia in Brazil. I've been there many times, and the leaders there know me well. One young man has picked me up at the airport on more than one occasion. We were going out to dinner one evening, and he brought his girlfriend along. One thing led to another, and I eventually asked him, in front of her, "Why haven't you asked her to marry you?" There were all kinds of excuses, and she just looked at him and smiled. When I came back in December for JavaOne, he sought me out. "I just want to tell you that I thought a lot about what you said, and I asked her to marry me. We're getting married next spring." Sometimes just one presentation is all it takes to make an impact; other times it takes years. Some impacts are directly related to the company and some are more personal in nature. It doesn't matter which it is, because it's having the impact that matters.

    Read the article

  • BI Applications overview

    - by sv744
Welcome to the Oracle BI applications blog! This blog will talk about various features, the general roadmap, descriptions of functionality, and implementation steps related to Oracle BI applications. In this first post we start with an overview of the BI apps and will delve deeper into some of the topics below in the upcoming weeks and months. If there are other topics you would like us to talk about, please feel free to provide feedback.

The Oracle BI applications are a set of pre-built applications that enable pervasive BI by providing role-based insight for each functional area, including sales, service, marketing, contact center, finance, supplier/supply chain, HR/workforce, and executive management. For example, Sales Analytics includes role-based applications for sales executives, sales management, as well as front-line sales reps, each of whom has different needs. The applications integrate and transform data from a range of enterprise sources (including Siebel, Oracle, PeopleSoft, SAP, and others) into actionable intelligence for each business function and user role. This post starts with the key benefits and characteristics of Oracle BI applications; in a series of subsequent posts, each of these points will be explained in detail.

Why BI apps?
Demonstrate the value of BI to a business user: show reports, dashboards, and models that can answer their business questions as part of the sales cycle.
Demonstrate the technical feasibility of a BI project, significantly lower risk, and improve success.
Build vs. buy benefit: you don't have to start with a blank sheet of paper.
Help consolidate disparate systems, and support data integration in M&A situations.
Insulate BI consumers from changes in the OLTP.
Present OLTP data and highlight issues of poor or missing data, improving data quality and accuracy.

Prebuilt Integrations
BI apps support prebuilt integrations against leading ERP sources: Fusion Applications, E-Business Suite, PeopleSoft, JD Edwards, Siebel, SAP.
Co-developed with input from functional experts on the BI and Applications teams.
Out-of-the-box mappings from the dimensional model to the source models.
Multi-source and multi-instance support.

Rich Data Model
BI apps have a very rich dimensional data model, built over 10 years, that incorporates best practices from a BI modeling perspective and reflects source-system complexities.
Conformed dimensional model across all business subject areas allows cross-functional reporting, e.g. customer/supplier 360.
Over 360 fact tables across 7 product areas: CRM – 145, SCM – 47, Financials – 28, Procurement – 20, HCM – 27, Projects – 18, Campus Solutions – 21, PLM – 56.
Supported by 300 physical dimensions.
Support for extensive calendars: Gregorian, enterprise, and ledger based.
Conformed data model and metrics for real-time vs. warehouse-based reporting.
Multi-tenant enabled.

Extensive BI-related transformations
BI apps ETL and data integration support the various transformations required for dimensional models and reporting requirements. All of these have been distilled into common patterns and abstracted logic which can be readily reused across different modules:
Slowly Changing Dimension support (see the SQL sketch after this post).
Hierarchy flattening support, including row/column hybrid hierarchy flattening.
As Is vs. As Was hierarchy support.
Currency conversion: support for 3 corporate currencies plus CRM, ledger, and transaction currencies.
UOM conversion.
Internationalization/localization and dynamic data translations.
Code standardization (domains).
Historical snapshots.
Cycle and process lifecycle computations.
Balance facts.
Equalization of GL accounting chartfields/segments.
Standardized values for categorizing GL accounts.
Reconciliation between GL and subledgers to track accounted/transferred/posted transactions to GL.
Materialization of data only available through costly and complex APIs, e.g. Fusion Payroll, EBS/Fusion accruals.
Complex event interpretation of source data, e.g.: what constitutes a transfer; deriving supervisors via the position hierarchy; deriving the primary assignment in PSFT; categorizing and transposing payroll balances into specific metrics to support side-by-side comparison of, for example, fixed salary, variable salary, tax, bonus, and overtime payments; counting of events, e.g. converting events to fact counters so that, for example, the number of hires can easily be added up and compared alongside total transfers and terminations.
Multi-pass processing of multiple sources, e.g. headcount, salary, promotion, and performance, to allow side-by-side comparison.
Adding value to data to aid analysis through banding and additional domain classifications and groupings, allowing higher-level analytical reporting and data discovery.
Calculation of complex measures, for example: COGS, DSO, DPO, inventory turns, etc.; transfers within a hierarchy, or out of/into a hierarchy, relative to the viewpoint in the hierarchy.

Configurability and Extensibility support
BI apps offer extensibility for various entities, either as automated extensibility or as part of the extension methodology:
Key flexfield and descriptive flexfield support.
Extensible attribute support (JDE).
Conformed domains.

ETL Architecture
BI apps offer a modular adapter architecture which allows multiple product lines to be supported in a single conformed model:
Multi-source, multi-technology support.
Orchestration – creates a load plan, taking into account task dependencies and the customer's deployment, to generate a plan from the customer's many complex ETL tasks.
Plan optimization allowing parallel ETL tasks.
Oracle: bitmap indexes and partition management.
High availability support.
Follow-the-sun support.

TCO
BI apps provide several utilities and capabilities that help with overall total cost of ownership and ensure a rapid implementation:
Improved cost of ownership – lower cost to deploy.
Ongoing support for new versions of the source application.
Task-based setup flows.
Data lineage.
Functional setup performed in a web UI by a functional person.
Configuration test-to-production support.

Security
BI apps support both data and object security, enabling implementations to quickly configure the application per the reporting security needs:
Fine-grained object security at the report/dashboard and presentation catalog level.
Data security integration with source systems.
Extensible to support external data security rules.

Extensive Set of KPIs
Over 7000 base and derived metrics across all modules.
Time series calculations (YoY, % growth, etc.).
Common currency and UOM reporting.
Cross-subject-area KPIs (analyzing HR vs. GL data, drill from GL to AP/AR, etc.).

Prebuilt reports and dashboards
3000+ prebuilt reports supporting a large number of industries.
Hundreds of role-based dashboards.
Dynamic currency conversion at the dashboard level.

Highly tuned Performance
The BI apps have been tuned over the years for both a very performant ETL and dashboard performance. The applications use best practices and advanced database features to enable the best possible performance:
Optimized data model for BI and analytic queries.
Prebuilt aggregates, and the ability for customers to easily create their own aggregates on warehouse facts, allow for scalable end-user performance.
Incremental extracts and loads.
Incremental aggregate build.
Automatic table index and statistics management.
Parallel ETL loads.
Source system deletes handling.
Low-latency extract with GoldenGate.
Micro-ETL support.
Bitmap indexes.
Partitioning support.
Modularized deployment: start small and add other subject areas seamlessly.

Source-Specific Staging and Real-Time Schema
Support for a source-specific operational reporting schema for EBS, PSFT, Siebel, and JDE.

Application Integrations
The BI apps also allow for integration with source systems as well as other applications, providing value-add through BI and enabling BI consumption during operational decision making:
Embedded dashboards for Fusion, EBS, and Siebel applications.
Action Link support.
Marketing Segmentation.
Sales Predictor Dashboard.
Territory Management.

External Integrations
The BI apps data integration choices include support for loading external data:
External data enrichment choices: UNSPSC, item class, etc.
Extensible spend classification.

Broad Deployment Choices
Exalytics support.
Databases: Oracle, Exadata, Teradata, DB2, MSSQL.
ETL tool of choice: ODI (coming), Informatica.

Extensible and Customizable
Extensible architecture and methodology to add custom and external content.
Upgradable across releases.

Thanks for reading a long post, and be on the lookout for future posts. We look forward to your valuable feedback on these topics, as well as suggestions on what other topics you would like us to cover.
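As a concrete illustration of the Slowly Changing Dimension support referenced above, here is a minimal Type 2 SCD sketch in SQL. The table, columns, sequence, and bind variables are invented for the example and are not the BI apps schema; the prebuilt ETL generates equivalent logic for you:

-- close out the current version of a changed customer row
UPDATE customer_d
   SET current_flg     = 'N',
       effective_to_dt = SYSDATE
 WHERE customer_num = :src_customer_num
   AND current_flg  = 'Y';

-- insert the new version, open-ended until the next change arrives
INSERT INTO customer_d
  (row_wid, customer_num, customer_name,
   current_flg, effective_from_dt, effective_to_dt)
VALUES
  (customer_d_seq.NEXTVAL, :src_customer_num, :src_customer_name,
   'Y', SYSDATE, DATE '4712-12-31');

Keeping every historical version as its own row is what enables the "As Was" style of reporting mentioned earlier: facts can join to the dimension version that was current when they occurred.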

    Read the article

  • OS8- AK8- The bad news...

    - by Steve Tunstall
Ok, I told you I would give you the bad news of AK8 to go along with all the cool new stuff, so here it is. It's not that bad, really, just things you need to be aware of. First, the 2013.1 code is being called OS8, AK8 and 2013.1 by different people. I mean different people INSIDE Oracle!! It was supposed to be easy, but it never is. So for the rest of this blog entry, I'm calling it AK8. AK8 is not compatible with the 7x10 series. Ever. The 7x10 series is not supported with AK8, and if you try to upgrade one, it will fail at the healthcheck. All 7x20 series, all of them regardless of age, are supported with AK8. Drive trays. Let's talk about drive trays and SAS cards. The older drive trays for the 7x20 series were called the "Riverwalk 2" or "DS2" trays. They were technically the "J4410" series JBODs that Sun used to sell a la carte before we stopped selling JBODs. Don't get me started on that, it still makes me mad. We used these for many years, and you can still buy them right now until December 15th, 2013, when they will no longer be sold. The DS2 tray only came as a 4U, 24-drive shelf. It held 3.5" drives, and you had a choice of 2TB, 3TB, 300GB or 600GB drives. The SAS HBA in the 7x20 series was called a "Thebe" card, with a part # of 7105394. The 7420, for example, came standard with two of these "Thebe" cards for connecting to the disk trays. Two Thebe cards could handle up to 12 trays, so one would add two more cards to go to 24 trays, or have up to six Thebe cards to handle 36 trays. This card was for external SAS only. It did not connect to the internal OS drives or the Readzillas, both of which used the internal SCSI controller of the server. These Riverwalk 2 trays ARE supported with AK8. You can upgrade your older 7420 or 7320, no problem, as-is. The much older Riverwalk 1 trays or J4400 trays are NOT supported by AK8. However, they were only used by the 7x10 series, and we already said that the 7x10 series was not supported. Here's where it gets tricky. Since last January, we have been selling the new-style disk trays. We call them the "DE2-24P" and the "DE2-24C" trays. The "C" tray is for capacity drives, which are 3.5" 3TB or 4TB drives. The "P" trays are for performance drives, which are 2.5" 300GB and 900GB drives. These trays are NOT Riverwalk 2 trays, even though the "C" series may kind of look like it. Different manufacturer and different firmware. They are not new. Like I said, we've been selling them with the 7x20 series since last January. They are the only disk trays we will be selling going forward. Of course, AK8 supports them. So what's the problem? The problem is going to be for people who have to mix drive trays. Remember, your older 7x20 series has Thebe SAS2 HBAs. These have 2 SAS ports per card. The new ZS3-2 and ZS3-4 systems, however, have the new "Thebe2" SAS2 HBAs. These Thebe2 cards have 4 ports per card. This is very cool, as we can now do more SAS channels with fewer cards. Instead of needing 4 SAS cards to grow to 24 trays like we did with the old Thebe cards, I can now do 24 trays with only 2 Thebe2 cards. This means more IO slots for fun things like InfiniBand and 10G. So far, so good, right? These Thebe2 cards work with any disk tray. You can even mix older DS2 trays with the newer DE2 trays in the same system, as long as you have Thebe2 cards. Ah, there's your problem. You don't have Thebe2 cards in your old 7420, do you? Well, I told you the bad news wasn't that bad, right? We can take out your Thebe cards and replace them with Thebe2.
You can then plug your older DS2 trays right back in, and also now get newer DE2 trays going forward. However, it's important that the trays are on different SAS channels. You can mix them in the same system, but not on the same channel. Ask your local SC if you need help with the new cable layout. By the way, the new ZS3-2 and ZS3-4 systems also include a new IO card called the "Erie" card. These are for INTERNAL SAS to the OS drives and the Readzillas. So those are now SAS2 instead of SATA like the older models. Yes, the Erie card uses an IO slot, but that's OK, because the Thebe2 cards allow us to use fewer SAS HBAs to grow the system, right? That's it. Not too much bad news and really not that bad. AK8 does not support the 7x10 series, and you may need new Thebe2 cards in your older systems if you want to add on newer DE2 trays. I think we can all agree that there are worse things out there. Like our Congress. Next up.... more good news and cool AK8 tricks, such as virtual NICs.

    Read the article

  • Java EE 7 Survey Results!

    - by reza_rahman
On November 8th, the Java EE EG posted a survey to gather broad community feedback on a number of critical open issues. For reference, you can find the original survey here. We kept the survey open for about three weeks, until November 30th. To our delight, over 1100 developers took time out of their busy lives to let their voices be heard! The results of the survey were sent to the EG on December 12th. The subsequent EG discussion is available here. The exact summary sent to the EG is available here. We would like to take this opportunity to thank each and every one of the individuals who took the survey. It is very much appreciated, encouraging, and worth its weight in gold. In particular, I tried to capture just some of the high-quality, intelligent, thoughtful and professional comments in the summary to the EG. I highly encourage you to continue to stay involved, perhaps through the Adopt-a-JSR program. We would also like to sincerely thank java.net, JavaLobby, TSS and InfoQ for helping spread the word about the survey. Below is a brief summary of the results...

APIs to Add to Java EE 7 Full/Web Profile
The first question asked which of the four new candidate APIs (WebSocket, JSON-P, JBatch and JCache) should be added to the Java EE 7 Full and Web Profiles respectively. As the following graph shows, there was significant support for adding all the new APIs to the Full Profile. Support is relatively the weakest for Batch 1.0, but still good. A lot of folks saw WebSocket 1.0 as a critical technology, with comments such as this one: "A modern web application needs Web Sockets as first class citizens". While it is clearly seen as being important, a number of commenters expressed dissatisfaction with the lack of a higher-level JSON data binding API, as illustrated by this comment: "How come we don't have a Data Binding API for JSON". JCache was also seen as being very important, as expressed with comments like: "JCache should really be that foundational technology on which other specs have no fear to depend on". The results for the Web Profile are not surprising. While there is strong support for adding WebSocket 1.0 and JSON-P 1.0 to the Web Profile, support for adding JCache 1.0 and Batch 1.0 is relatively weak. There was actually significant opposition to adding Batch 1.0 (with 51.8% casting a 'No' vote).

Enabling CDI by Default
The second question asked whether CDI should be enabled in Java EE environments by default. A significant majority of 73.3% of developers supported enabling CDI; only 13.8% opposed. Comments such as these two reflect strong general support for CDI as well as a desire for better Java EE alignment with CDI: "CDI makes Java EE quite valuable!" "Would prefer to unify EJB, CDI and JSF lifecycles". There is, however, a palpable concern around the performance impact of enabling CDI by default, as exemplified by this comment: "Java EE projects in most cases use CDI, hence it is sensible to enable CDI by default when creating a Java EE application. However, there are several issues if CDI is enabled by default: scanning can be slow - not all libs use CDI (hence, scanning is not needed)". Another significant concern appears to be around backwards compatibility and conflict with other JSR 330 implementations like Spring: "I am leaning towards yes, however can easily imagine situations where errors would be caused by automatically activating CDI, especially in cases of backward compatibility where another DI engine (such as Spring and the like) happens to use the same mechanics to inject dependencies and in that case there would be an overlap in injections and probably an uncertain outcome". Some commenters, such as this one, attempt to suggest solutions to these potential issues: "If you have Spring in use and use javax.inject.Inject then you might get some unexpected behavior that could be equally confusing. I guess there will be a way to switch CDI off. I'm tempted to say yes but am cautious for this reason".

Consistent Usage of @Inject
The third question was around using CDI/JSR 330 @Inject consistently vs. allowing JSRs to create their own injection annotations. A slight majority of 53.3% of developers supported using @Inject consistently across JSRs. 28.8% said using custom injection annotations is OK, while 18.0% were not sure. The vast majority of commenters were strongly supportive of CDI and of general Java EE alignment with CDI, as illustrated by these comments: "Dependency Injection should be standard from now on in EE. It should use CDI as that is the DI mechanism in EE and is quite powerful. Having a new JSR specific DI mechanism to deal with just means more reflection, more proxies. JSRs should also be constructed to allow some of their objects Injectable. @Inject @TransactionalCache or @Inject @JMXBean etc...they should define the annotations and stereotypes to make their code less procedural. Dog food it. If there is a shortcoming in CDI for a JSR fix it and we will all be grateful" "We're trying to make this a comprehensive platform, right? Injection should be a fundamental part of the platform; everything else should build on the same common infrastructure. Each-having-their-own is just a recipe for chaos and having to learn the same thing 10 different ways"

Expanding the Use of @Stereotype
The fourth question was about expanding CDI @Stereotype to cover annotations across Java EE, beyond just CDI. A significant majority of 62.3% of developers supported expanding the use of @Stereotype; only 13.3% opposed. A majority of commenters supported the idea, as well as the theme of general CDI/Java EE alignment, as expressed in these examples: "Just like defining new types for (compositions of) existing classes, stereotypes can help make software development easier" "This is especially important if many EJB services are decoupled from the EJB component model and can be applied via individual annotations to Java EE components. @Stateless is a nicely compact annotation. Code will not improve if that will have to be applied in the future as @Transactional, @Pooled, @Secured, @Singlethreaded, @...." Some, however, expressed concerns around increased complexity, such as this commenter: "Could be very convenient, but I'm afraid if it wouldn't make some important class annotations less visible"

Expanding Interceptor Use
The final set of questions was about expanding interceptors further across Java EE. A very solid 96.3% of developers wanted to expand interceptor use to all Java EE components.
35.7% even wanted to expand interceptors to other Java EE managed classes. Most developers (54.9%) were not sure if there is any place where injection is supported that should not support interceptors. 32.8% thought any place that supports injection should also support interceptors. Only 12.2% were certain that there are places where injection should be supported but not interceptors. The comments reflected the diversity of opinions, and were generally supportive of interceptors: "I think interceptors are as fundamental as injection and should be available anywhere in the platform" "The whole usage of interceptors still needs to take hold in Java programming, but it is a powerful technology that needs some time in the Sun. Basically it should become part of Java SE, maybe the next step after lambdas?" A distinct chain of thought separated interceptors from filters and listeners: "I think that the Servlet API already provides a rich set of possibilities to hook yourself into different Servlet container events. I don't find a need to 'pollute' the Servlet model with the Interceptors API"
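To make the @Inject and @Stereotype discussion concrete, here is a minimal CDI sketch. The annotation and bean names are invented for illustration; the APIs used (@Stereotype, @ApplicationScoped, @Inject) are the standard CDI/JSR 330 ones:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Stereotype;
import javax.inject.Inject;

// A stereotype bundles a set of recurring annotations under one name.
@Stereotype
@ApplicationScoped
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface BusinessService {}

// Applying the stereotype applies everything it bundles, and the
// collaborator is injected via the one standard @Inject annotation.
@BusinessService
class OrderService {
    @Inject
    AuditLogger audit; // illustrative bean, not a real Java EE API
}

@ApplicationScoped
class AuditLogger {
    void log(String message) { System.out.println(message); }
}

Expanding @Stereotype across Java EE, as the survey asks, would let a single annotation like @BusinessService also bundle behavior (transactions, security, pooling) defined by other specifications.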

    Read the article

  • Vlc And other Multi Media Problems

    - by MrMagu
I've tried numerous free multimedia programs on Ubuntu and I get the same result: when I go to watch one of my movies off my other hard drive, it always ends up full-screening with a stuck image in the middle, and VLC ends up looking pitch black. I've looked all over the web for this issue and I'm wondering if anyone is experiencing the same problem. I don't know if it has to do with the dual monitors. I've tried adding and re-adding the repositories, but it has been doing the same repetitive thing for over a month now. Do I need to reinstall the whole system, or what? Any help would be much appreciated. Ty, MrMagu

Xorg conf file:

# nvidia-settings: X configuration file generated by nvidia-settings
# nvidia-settings: version 304.37 (buildd@allspice) Sun Sep 9 05:59:26 UTC 2012

Section "ServerLayout"
    Identifier "Layout0"
    Screen 0 "Screen0" 0 0
    InputDevice "Keyboard0" "CoreKeyboard"
    InputDevice "Mouse0" "CorePointer"
    Option "Xinerama" "0"
EndSection

Section "Files"
EndSection

Section "InputDevice"
    # generated from default
    Identifier "Mouse0"
    Driver "mouse"
    Option "Protocol" "auto"
    Option "Device" "/dev/psaux"
    Option "Emulate3Buttons" "no"
    Option "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
    # generated from default
    Identifier "Keyboard0"
    Driver "kbd"
EndSection

Section "Monitor"
    # HorizSync source: edid, VertRefresh source: edid
    Identifier "Monitor0"
    VendorName "Unknown"
    ModelName "Ancor Communications Inc VE247"
    HorizSync 30.0 - 83.0
    VertRefresh 50.0 - 76.0
    Option "DPMS"
EndSection

Section "Monitor"
    Identifier "Monitor1"
    VendorName "Unknown"
    ModelName "Ancor Communications Inc VE247"
    HorizSync 30.0 - 83.0
    VertRefresh 50.0 - 76.0
EndSection

Section "Device"
    Identifier "Device0"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
    BoardName "Quadro FX 1500"
EndSection

Section "Device"
    Identifier "Device1"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
    BoardName "Quadro FX 1500"
    BusID "PCI:1:0:0"
    Screen 1
EndSection

Section "Screen"
    # Removed Option "metamodes" "DFP-0: nvidia-auto-select +0+0, DFP-1: nvidia-auto-select +1920+0; DFP-0: 1920x1080 +0+0, DFP-1: 1920x1080 +1920+0"
    # Removed Option "metamodes" "DFP-0: nvidia-auto-select +0+0; DFP-0: 1920x1080 +0+0; DFP-0: 1920x1080_60 +0+0; DFP-0: 1680x1050 +0+0; DFP-0: 1680x1050_60 +0+0; DFP-0: 1440x900 +0+0; DFP-0: 1440x900_60 +0+0; DFP-0: 1280x1024 +0+0; DFP-0: 1280x1024_75 +0+0; DFP-0: 1280x1024_60 +0+0; DFP-0: 1280x960 +0+0; DFP-0: 1280x960_60 +0+0; DFP-0: 1152x864 +0+0; DFP-0: 1152x864_75 +0+0; DFP-0: 1024x768 +0+0; DFP-0: 1024x768_75 +0+0; DFP-0: 1024x768_70 +0+0; DFP-0: 1024x768_60 +0+0; DFP-0: 800x600 +0+0; DFP-0: 800x600_75 +0+0; DFP-0: 800x600_72 +0+0; DFP-0: 800x600_60 +0+0; DFP-0: 800x600_56 +0+0; DFP-0: 640x480 +0+0; DFP-0: 640x480_75 +0+0; DFP-0: 640x480_72 +0+0; DFP-0: 640x480_60 +0+0; DFP-0: nvidia-auto-select @1920x720 +0+0"
    # Removed Option "metamodes" "DFP-0: nvidia-auto-select +0+0; DFP-0: 1920x1080 +0+0; DFP-0: 1920x1080_60 +0+0; DFP-0: 1680x1050 +0+0; DFP-0: 1680x1050_60 +0+0; DFP-0: 1440x900 +0+0; DFP-0: 1440x900_60 +0+0; DFP-0: 1280x1024 +0+0; DFP-0: 1280x1024_75 +0+0; DFP-0: 1280x1024_60 +0+0; DFP-0: 1280x960 +0+0; DFP-0: 1280x960_60 +0+0; DFP-0: 1152x864 +0+0; DFP-0: 1152x864_75 +0+0; DFP-0: 1024x768 +0+0; DFP-0: 1024x768_75 +0+0; DFP-0: 1024x768_70 +0+0; DFP-0: 1024x768_60 +0+0; DFP-0: 800x600 +0+0; DFP-0: 800x600_75 +0+0; DFP-0: 800x600_72 +0+0; DFP-0: 800x600_60 +0+0; DFP-0: 800x600_56 +0+0; DFP-0: 640x480 +0+0; DFP-0: 640x480_75 +0+0; DFP-0: 640x480_72 +0+0; DFP-0: 640x480_60 +0+0"
    Identifier "Screen0"
    Device "Device0"
    Monitor "Monitor0"
    DefaultDepth 24
    Option "TwinView" "1"
    Option "TwinViewXineramaInfoOrder" "DFP-0"
    Option "Stereo" "0"
    Option "nvidiaXineramaInfoOrder" "DFP-0"
    Option "metamodes" "DFP-0: nvidia-auto-select +0+0, DFP-1: 1920x1080 +1920+0; DFP-0: 1920x1080 +0+0, DFP-1: nvidia-auto-select +1920+0; DFP-0: 1920x1080_60 +0+0; DFP-0: 1680x1050 +0+0; DFP-0: 1680x1050_60 +0+0; DFP-0: 1440x900 +0+0; DFP-0: 1440x900_60 +0+0; DFP-0: 1280x1024 +0+0; DFP-0: 1280x1024_75 +0+0; DFP-0: 1280x1024_60 +0+0; DFP-0: 1280x960 +0+0; DFP-0: 1280x960_60 +0+0; DFP-0: 1152x864 +0+0; DFP-0: 1152x864_75 +0+0; DFP-0: 1024x768 +0+0; DFP-0: 1024x768_75 +0+0; DFP-0: 1024x768_70 +0+0; DFP-0: 1024x768_60 +0+0; DFP-0: 800x600 +0+0; DFP-0: 800x600_75 +0+0; DFP-0: 800x600_72 +0+0; DFP-0: 800x600_60 +0+0; DFP-0: 800x600_56 +0+0; DFP-0: 640x480 +0+0; DFP-0: 640x480_75 +0+0; DFP-0: 640x480_72 +0+0; DFP-0: 640x480_60 +0+0"
    SubSection "Display"
        Depth 24
    EndSubSection
EndSection

Section "Screen"
    Identifier "Screen1"
    Device "Device1"
    Monitor "Monitor1"
    DefaultDepth 24
    Option "Stereo" "0"
    Option "metamodes" "DFP-1: 1920x1080 +0+0"
    SubSection "Display"
        Depth 24
    EndSubSection
EndSection

Section "InputClass"
    Identifier "Mouse Remap"
    MatchProduct "Saitek Cyborg R.A.T.7 Mouse"
    MatchDevicePath "/dev/input/event*"
    Option "ButtonMapping" "1 2 3 4 5 6 7 8 9 0 0 0 0 0 0"
EndSection

Pictures: http://tinypic.com/r/34o7j1h/6 http://tinypic.com/r/3rn20/6 Ty so much, MrMagu

    Read the article

  • Oracle Database 12c: Information Lifecycle Management (ILM) Storage Enhancements

    - by Liu Maclean
Oracle Database 12c introduces a number of storage enhancements under the Information Lifecycle Management (ILM) umbrella. The core of the ILM enhancements is Automatic Data Placement (ADP). 12c also adds Online Move Datafile: a datafile can now be relocated while it remains online and accessible, which was not possible in earlier releases.

In the initial release (12.1.0.1), Automatic Data Optimization (ADO) and heat map have the following restrictions:

Container databases (CDB) do not support Automatic Data Optimization and heat map.
Row-level policies for ADO are not supported for Temporal Validity. Partition-level ADO and compression are supported if partitioned on the end-time columns.
Row-level policies for ADO are not supported for in-database archiving. Partition-level ADO and compression are supported if partitioned on the ORA_ARCHIVE_STATE column.
Custom policies (user-defined functions) for ADO are not supported if the policies default at the tablespace level.
ADO does not perform checks for storage space in a target tablespace when using storage tiering.
ADO is not supported on tables with object types or materialized views.
ADO concurrency (the number of simultaneous policy jobs for ADO) depends on the concurrency of the Oracle scheduler.
If a policy job for ADO fails more than two times, then the job is marked disabled and the job must be manually enabled later.
Policies for ADO are only run in the Oracle Scheduler maintenance windows. Outside of the maintenance windows all policies are stopped. The only exceptions are those jobs for rebuilding indexes in ADO offline mode.
ADO has restrictions related to moving tables and table partitions.

ADO policies can be specified at the row or segment level and are added through the ILM clause of CREATE TABLE or ALTER TABLE. A policy names a compression or storage-tiering action and the condition (for example, a period of no access) that triggers it. For example:

CREATE TABLE sales_ado
(PROD_ID NUMBER NOT NULL,
 CUST_ID NUMBER NOT NULL,
 TIME_ID DATE NOT NULL,
 CHANNEL_ID NUMBER NOT NULL,
 PROMO_ID NUMBER NOT NULL,
 QUANTITY_SOLD NUMBER(10,2) NOT NULL,
 AMOUNT_SOLD NUMBER(10,2) NOT NULL)
ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT AFTER 6 MONTHS OF NO ACCESS;

SQL> SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled
  2  FROM USER_ILMPOLICIES;

POLICY_NAME          POLICY_TYPE    ENABLED
-------------------- -------------- --------
P41                  DATA MOVEMENT  YES

ALTER TABLE sales MODIFY PARTITION sales_1995
ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT AFTER 6 MONTHS OF NO ACCESS;

SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled
FROM USER_ILMPOLICIES;

POLICY_NAME              POLICY_TYPE   ENABLE
------------------------ ------------- ------
P1                       DATA MOVEMENT YES
P2                       DATA MOVEMENT YES

/* You can disable an ADO policy with the following */
ALTER TABLE sales_ado ILM DISABLE POLICY P1;
/* You can delete an ADO policy with the following */
ALTER TABLE sales_ado ILM DELETE POLICY P1;
/* You can disable all ADO policies with the following */
ALTER TABLE sales_ado ILM DISABLE_ALL;
/* You can delete all ADO policies with the following */
ALTER TABLE sales_ado ILM DELETE_ALL;
/* You can disable an ADO policy in a partition with the following */
ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DISABLE POLICY P2;
/* You can delete an ADO policy in a partition with the following */
ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DELETE POLICY P2;

The prerequisite for ADO is activity tracking, which comes in two granularities:

SEGMENT-LEVEL tracking records the most recent access and modification of a segment.
ROW-LEVEL tracking records modification times at row granularity.

Examples:

1. Enable segment-level activity tracking:
ALTER TABLE interval_sales ILM ENABLE ACTIVITY TRACKING SEGMENT ACCESS;
This enables segment-level activity tracking on INTERVAL_SALES, so the most recent access of the segment is recorded.

2. Track creation and write times:
ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING (CREATE TIME , WRITE TIME);

3. Track read times:
ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING (READ TIME);

In 12.1.0.1.0 the tracking is controlled by the HEAT_MAP parameter, which can be set at the system or session level. To enable heat map tracking:

ALTER SYSTEM SET HEAT_MAP = ON;

When heat map is enabled, object access is tracked automatically; objects in the SYSTEM and SYSAUX tablespaces are not tracked. To disable the tracking:

ALTER SYSTEM SET HEAT_MAP = OFF;

The HEAT_MAP parameter also governs Automatic Data Optimization: for ADO to work, heat map must be enabled. Real-time heat map data can be queried through V$HEAT_MAP_SEGMENT:

SQL> select * from V$heat_map_segment;
no rows selected

SQL> alter session set heat_map=on;
Session altered.

SQL> select * from scott.emp;

EMPNO ENAME  JOB       MGR  HIREDATE  SAL  COMM DEPTNO
----- ------ --------- ---- --------- ---- ---- ------
7369  SMITH  CLERK     7902 17-DEC-80  800        20
7499  ALLEN  SALESMAN  7698 20-FEB-81 1600  300   30
7521  WARD   SALESMAN  7698 22-FEB-81 1250  500   30
7566  JONES  MANAGER   7839 02-APR-81 2975        20
7654  MARTIN SALESMAN  7698 28-SEP-81 1250 1400   30
7698  BLAKE  MANAGER   7839 01-MAY-81 2850        30
7782  CLARK  MANAGER   7839 09-JUN-81 2450        10
7788  SCOTT  ANALYST   7566 19-APR-87 3000        20
7839  KING   PRESIDENT      17-NOV-81 5000        10
7844  TURNER SALESMAN  7698 08-SEP-81 1500    0   30
7876  ADAMS  CLERK     7788 23-MAY-87 1100        20
7900  JAMES  CLERK     7698 03-DEC-81  950        30
7902  FORD   ANALYST   7566 03-DEC-81 3000        20
7934  MILLER CLERK     7782 23-JAN-82 1300        10

14 rows selected.

SQL> select * from v$heat_map_segment;

OBJECT_NAME SUBOBJECT_NAME OBJ#  DATAOBJ# TRACK_TIM SEG SEG FUL LOO CON_ID
----------- -------------- ----- -------- --------- --- --- --- --- ------
EMP                        92997 92997    23-JUL-13 NO  NO  YES NO  0

V$HEAT_MAP_SEGMENT is built on the fixed table X$HEATMAPSEGMENT. V$HEAT_MAP_SEGMENT displays real-time segment access information; its columns are:

OBJECT_NAME VARCHAR2(128) - Name of the object
SUBOBJECT_NAME VARCHAR2(128) - Name of the subobject
OBJ# NUMBER - Object number
DATAOBJ# NUMBER - Data object number
TRACK_TIME DATE - Timestamp of current activity tracking
SEGMENT_WRITE VARCHAR2(3) - Indicates whether the segment has write access (YES or NO)
SEGMENT_READ VARCHAR2(3) - Indicates whether the segment has read access (YES or NO)
FULL_SCAN VARCHAR2(3) - Indicates whether the segment has full table scan (YES or NO)
LOOKUP_SCAN VARCHAR2(3) - Indicates whether the segment has lookup scan (YES or NO)
CON_ID NUMBER - The ID of the container to which the data pertains. Possible values include: 0, used for rows containing data that pertain to the entire CDB (also used for rows in non-CDBs); 1, used for rows containing data that pertain to only the root; n, where n is the applicable container ID for the rows containing data. The Heat Map feature is not supported in CDBs in Oracle Database 12c, so the value in this column can be ignored.

Historical heat map data, flushed from memory by background processes, can be viewed through DBA_HEAT_MAP_SEGMENT and is persisted in the HEAT_MAP_STAT$ base table.

A worked Automatic Data Optimization example (scenario 1):

SQL> alter system set heat_map=on;
System altered.

Set up the demo environment and the scott schema (script at http://www.askmaclean.com/archives/scott-schema-script.html):

SQL> grant all on dbms_lock to scott;
Grant succeeded.
SQL> grant dba to scott;
Grant succeeded.

@ilm_setup_basic C:\APP\XIANGBLI\ORADATA\MACLEAN\ilm.dbf
@tktgilm_demo_env_setup

SQL> connect scott/tiger ;
Connected.

SQL> select count(*) from scott.employee;

  COUNT(*)
----------
      3072

1 row selected.

SQL> set serveroutput on
SQL> exec print_compression_stats('SCOTT','EMPLOYEE');
Compression Stats
------------------
Uncmpressed : 3072
Adv/basic compressed : 0
Others : 0
PL/SQL procedure successfully completed.

All 3072 rows are uncompressed. Now add a row-level ADO compression policy:

alter table employee ilm add policy row store compress advanced row after 3 days of no modification
/

SQL> set serveroutput on
SQL> execute list_ilm_policies;
--------------------------------------------------
Policies defined for SCOTT
--------------------------------------------------
Object Name------ : EMPLOYEE
Subobject Name--- :
Object Type------ : TABLE
Inherited from--- : POLICY NOT INHERITED
Policy Name------ : P1
Action Type------ : COMPRESSION
Scope------------ : ROW
Compression level : ADVANCED
Tier Tablespace-- :
Condition type--- : LAST MODIFICATION TIME
Condition days--- : 3
Enabled---------- : YES
--------------------------------------------------
PL/SQL procedure successfully completed.

SQL> select sysdate from dual;

SYSDATE
--------------
29-JUL-13

SQL> execute set_back_chktime(get_policy_name('EMPLOYEE',null,'COMPRESSION','ROW','ADVANCED',3,null,null),'EMPLOYEE',null,6);
Object check time reset ...
--------------------------------------
Object Name : EMPLOYEE
Object Number : 93123
D.Object Numbr : 93123
Policy Number : 1
Object chktime : 23-JUL-13 08.13.42.000000 AM
Distnt chktime : 0
--------------------------------------
PL/SQL procedure successfully completed.

The policy check time is wound back by 6 days: set_back_chktime is a demo helper that rewinds the object's check time so the "3 days of no modification" condition is met without actually waiting. Then flush the caches:

alter system flush buffer_cache;
alter system flush buffer_cache;
alter system flush shared_pool;
alter system flush shared_pool;

SQL> execute set_window('MONDAY_WINDOW','OPEN');
Set Maint. Window OPEN
-----------------------------
Window Name : MONDAY_WINDOW
Enabled? : TRUE
Active? : TRUE
-----------------------------
PL/SQL procedure successfully completed.

SQL> exec dbms_lock.sleep(60) ;
PL/SQL procedure successfully completed.

SQL> exec print_compression_stats('SCOTT', 'EMPLOYEE');
Compression Stats
------------------
Uncmpressed : 338
Adv/basic compressed : 2734
Others : 0
PL/SQL procedure successfully completed.

Once the maintenance window opens, the policy job runs and 2734 rows end up compressed.

SQL> col object_name for a20
SQL> select object_id,object_name from dba_objects where object_name='EMPLOYEE';

 OBJECT_ID OBJECT_NAME
---------- --------------------
     93123 EMPLOYEE

SQL> execute list_ilm_policy_executions ;
--------------------------------------------------
Policies execution details for SCOTT
--------------------------------------------------
Policy Name------ : P22
Job Name--------- : ILMJOB48
Start time------- : 29-JUL-13 08.37.45.061000 AM
End time--------- : 29-JUL-13 08.37.48.629000 AM
-----------------
Object Name------ : EMPLOYEE
Sub_obj Name----- :
Obj Type--------- : TABLE
-----------------
Exec-state------- : SELECTED FOR EXECUTION
Job state-------- : COMPLETED SUCCESSFULLY
Exec comments---- :
Results comments- : ---
--------------------------------------------------
PL/SQL procedure successfully completed.

ILMJOB48 is the scheduler job that executed the policy. The evaluation itself is done by MMON slave (M00x) processes, which check the policies roughly every 15 minutes:

select sample_time,program,module,action from v$active_session_history where action ='KDILM background EXEcution' order by sample_time;

29-JUL-13 08.16.38.369000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
29-JUL-13 08.17.38.388000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
29-JUL-13 08.17.39.390000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
29-JUL-13 08.23.38.681000000 AM ORACLE.EXE (M002) MMON_SLAVE KDILM background EXEcution
29-JUL-13 08.32.38.968000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
29-JUL-13 08.33.39.993000000 AM ORACLE.EXE (M003) MMON_SLAVE KDILM background EXEcution
29-JUL-13 08.33.40.993000000 AM ORACLE.EXE (M003) MMON_SLAVE KDILM background EXEcution
29-JUL-13 08.36.40.066000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
29-JUL-13 08.37.42.258000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
29-JUL-13 08.37.43.258000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
29-JUL-13 08.37.44.258000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
29-JUL-13 08.38.42.386000000 AM ORACLE.EXE (M001) MMON_SLAVE KDILM background EXEcution

select distinct action from v$active_session_history where action like 'KDILM%'

KDILM background CLeaNup
KDILM background EXEcution

SQL> execute set_window('MONDAY_WINDOW','CLOSE');
Set Maint. Window CLOSE
-----------------------------
Window Name : MONDAY_WINDOW
Enabled? : TRUE
Active? : FALSE
-----------------------------
PL/SQL procedure successfully completed.

SQL> drop table employee purge ;
Table dropped.

Finally, clean up the demo environment:

spool ilm_usecase_1_cleanup.lst
@ilm_demo_cleanup ;
spool off

    Read the article

  • Cannot determine ethernet address for proxy ARP on PPTP

    - by Linux Intel
I installed a PPTP server on a CentOS 6 64-bit server.

PPTP server IP: 55.66.77.10
PPTP local IP: 10.0.0.1
Client1 IP: 10.0.0.60 (CentOS 5 64-bit)
Client2 IP: 10.0.0.61 (CentOS 5 64-bit)

The PPTP server can ping Client1, and Client1 can ping the PPTP server. The PPTP server can ping Client2, and Client2 can ping the PPTP server. The problem is that Client1 cannot ping Client2, and I also get this error in the PPTP server's error log:

Cannot determine ethernet address for proxy ARP

Ping from Client2 to Client1:

PING 10.0.0.60 (10.0.0.60) 56(84) bytes of data.
--- 10.0.0.60 ping statistics ---
6 packets transmitted, 0 received, 100% packet loss, time 5000ms

route -n on the PPTP server:

Destination Gateway Genmask Flags Metric Ref Use Iface
10.0.0.60 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0
10.0.0.61 0.0.0.0 255.255.255.255 UH 0 0 0 ppp1
55.66.77.10 0.0.0.0 255.255.255.248 U 0 0 0 eth0
10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth0
0.0.0.0 55.66.77.19 0.0.0.0 UG 0 0 0 eth0

route -n on Client1:

Destination Gateway Genmask Flags Metric Ref Use Iface
10.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0
55.66.77.10 70.14.13.19 255.255.255.255 UGH 0 0 0 eth0
10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth1
0.0.0.0 70.14.13.19 0.0.0.0 UG 0 0 0 eth0

route -n on Client2:

Destination Gateway Genmask Flags Metric Ref Use Iface
10.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0
55.66.77.10 84.56.120.60 255.255.255.255 UGH 0 0 0 eth1
10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth0
0.0.0.0 84.56.120.60 0.0.0.0 UG 0 0 0 eth1

cat /etc/ppp/options.pptpd on the PPTP server:

###############################################################################
# $Id: options.pptpd,v 1.11 2005/12/29 01:21:09 quozl Exp $
#
# Sample Poptop PPP options file /etc/ppp/options.pptpd
# Options used by PPP when a connection arrives from a client.
# This file is pointed to by /etc/pptpd.conf option keyword.
# Changes are effective on the next connection. See "man pppd".
#
# You are expected to change this file to suit your system. As
# packaged, it requires PPP 2.4.2 and the kernel MPPE module.
###############################################################################

# Authentication
# Name of the local system for authentication purposes
# (must match the second field in /etc/ppp/chap-secrets entries)
name pptpd
# Strip the domain prefix from the username before authentication.
# (applies if you use pppd with chapms-strip-domain patch)
#chapms-strip-domain

# Encryption
# (There have been multiple versions of PPP with encryption support,
# choose which of the following sections you will use.)

# BSD licensed ppp-2.4.2 upstream with MPPE only, kernel module ppp_mppe.o
# {{{
refuse-pap
refuse-chap
refuse-mschap
# Require the peer to authenticate itself using MS-CHAPv2 [Microsoft
# Challenge Handshake Authentication Protocol, Version 2] authentication.
require-mschap-v2
# Require MPPE 128-bit encryption
# (note that MPPE requires the use of MSCHAP-V2 during authentication)
require-mppe-128
# }}}

# OpenSSL licensed ppp-2.4.1 fork with MPPE only, kernel module mppe.o
# {{{
#-chap
#-chapms
# Require the peer to authenticate itself using MS-CHAPv2 [Microsoft
# Challenge Handshake Authentication Protocol, Version 2] authentication.
#+chapms-v2
# Require MPPE encryption
# (note that MPPE requires the use of MSCHAP-V2 during authentication)
#mppe-40 # enable either 40-bit or 128-bit, not both
#mppe-128
#mppe-stateless
# }}}

# Network and Routing
# If pppd is acting as a server for Microsoft Windows clients, this
# option allows pppd to supply one or two DNS (Domain Name Server)
# addresses to the clients. The first instance of this option
# specifies the primary DNS address; the second instance (if given)
# specifies the secondary DNS address.
#ms-dns 10.0.0.1
#ms-dns 10.0.0.2
# If pppd is acting as a server for Microsoft Windows or "Samba"
# clients, this option allows pppd to supply one or two WINS (Windows
# Internet Name Services) server addresses to the clients. The first
# instance of this option specifies the primary WINS address; the
# second instance (if given) specifies the secondary WINS address.
#ms-wins 10.0.0.3
#ms-wins 10.0.0.4
# Add an entry to this system's ARP [Address Resolution Protocol]
# table with the IP address of the peer and the Ethernet address of this
# system. This will have the effect of making the peer appear to other
# systems to be on the local ethernet.
# (you do not need this if your PPTP server is responsible for routing
# packets to the clients -- James Cameron)
proxyarp
# Normally pptpd passes the IP address to pppd, but if pptpd has been
# given the delegate option in pptpd.conf or the --delegate command line
# option, then pppd will use chap-secrets or radius to allocate the
# client IP address. The default local IP address used at the server
# end is often the same as the address of the server. To override this,
# specify the local IP address here.
# (you must not use this unless you have used the delegate option)
#10.8.0.100

# Logging
# Enable connection debugging facilities.
# (see your syslog configuration for where pppd sends to)
debug
# Print out all the option values which have been set.
# (often requested by mailing list to verify options)
#dump

# Miscellaneous
# Create a UUCP-style lock file for the pseudo-tty to ensure exclusive
# access.
lock
# Disable BSD-Compress compression
nobsdcomp
# Disable Van Jacobson compression
# (needed on some networks with Windows 9x/ME/XP clients, see posting to
# poptop-server on 14th April 2005 by Pawel Pokrywka and followups,
# http://marc.theaimsgroup.com/?t=111343175400006&r=1&w=2 )
novj
novjccomp
# turn off logging to stderr, since this may be redirected to pptpd,
# which may trigger a loopback
nologfd
# put plugins here
# (putting them higher up may cause them to send messages to the pty)

cat /etc/ppp/options.pptp on Client1 and Client2:

###############################################################################
# $Id: options.pptp,v 1.3 2006/03/26 23:11:05 quozl Exp $
#
# Sample PPTP PPP options file /etc/ppp/options.pptp
# Options used by PPP when a connection is made by a PPTP client.
# This file can be referred to by an /etc/ppp/peers file for the tunnel.
# Changes are effective on the next connection. See "man pppd".
#
# You are expected to change this file to suit your system. As
# packaged, it requires PPP 2.4.2 or later from http://ppp.samba.org/
# and the kernel MPPE module available from the CVS repository also on
# http://ppp.samba.org/, which is packaged for DKMS as kernel_ppp_mppe.
###############################################################################

# Lock the port
lock

# Authentication
# We don't need the tunnel server to authenticate itself
noauth
# We won't do PAP, EAP, CHAP, or MSCHAP, but we will accept MSCHAP-V2
# (you may need to remove these refusals if the server is not using MPPE)
refuse-pap
refuse-eap
refuse-chap
refuse-mschap

# Compression
# Turn off compression protocols we know won't be used
nobsdcomp
nodeflate

# Encryption
# (There have been multiple versions of PPP with encryption support,
# choose which of the following sections you will use. Note that MPPE
# requires the use of MSCHAP-V2 during authentication)
#
# Note that using PPTP with MPPE and MSCHAP-V2 should be considered
# insecure:
# http://marc.info/?l=pptpclient-devel&m=134372640219039&w=2
# https://github.com/moxie0/chapcrack/blob/master/README.md
# http://technet.microsoft.com/en-us/security/advisory/2743314

# http://ppp.samba.org/ the PPP project version of PPP by Paul Mackarras
# ppp-2.4.2 or later with MPPE only, kernel module ppp_mppe.o
# If the kernel is booted in FIPS mode (fips=1), the ppp_mppe.ko module
# is not allowed and PPTP-MPPE is not available.
# {{{
# Require MPPE 128-bit encryption
#require-mppe-128
# }}}

# http://mppe-mppc.alphacron.de/ fork from PPP project by Jan Dubiec
# ppp-2.4.2 or later with MPPE and MPPC, kernel module ppp_mppe_mppc.o
# {{{
# Require MPPE 128-bit encryption
#mppe required,stateless
# }}}

iptables is stopped on the clients and the server, and net.ipv4.ip_forward = 1 is enabled on the PPTP server. How can I solve this problem?
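One direction worth checking, offered as a sketch rather than a confirmed fix: for Client1 to reach Client2, the server has to forward packets between its two ppp interfaces, so confirm that nothing blocks ppp-to-ppp forwarding and watch where the packets die. The interface names below match the routing tables above; the iptables rule only matters once iptables is running again:

# on the PPTP server
sysctl -w net.ipv4.ip_forward=1                # already enabled per the question
iptables -A FORWARD -i ppp+ -o ppp+ -j ACCEPT  # permit forwarding between PPP peers
tcpdump -n -i ppp1 icmp                        # watch while Client1 pings 10.0.0.61

If the echo requests appear on ppp1 but no replies come back, the problem is on Client2's side; if they never leave ppp0, the server is not forwarding them. Note that the "Cannot determine ethernet address for proxy ARP" message is often harmless in this layout: as the comment above the proxyarp option itself says, proxy ARP is not needed when the PPTP server is responsible for routing packets to the clients.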

    Read the article

  • PPTP ping client to client error

    - by Linux Intel
I installed a PPTP server on a CentOS 6 64-bit server.

PPTP server IP: 55.66.77.10
PPTP local IP: 10.0.0.1
Client1 IP: 10.0.0.60 (CentOS 5 64-bit)
Client2 IP: 10.0.0.61 (CentOS 5 64-bit)

The PPTP server can ping Client1, and Client1 can ping the PPTP server. The PPTP server can ping Client2, and Client2 can ping the PPTP server. The problem is that Client1 cannot ping Client2.

route -n on the PPTP server:

Destination Gateway Genmask Flags Metric Ref Use Iface
10.0.0.60 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0
10.0.0.61 0.0.0.0 255.255.255.255 UH 0 0 0 ppp1
55.66.77.10 0.0.0.0 255.255.255.248 U 0 0 0 eth0
10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth0
0.0.0.0 55.66.77.19 0.0.0.0 UG 0 0 0 eth0

route -n on Client1:

Destination Gateway Genmask Flags Metric Ref Use Iface
10.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0
55.66.77.10 70.14.13.19 255.255.255.255 UGH 0 0 0 eth0
10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth1
0.0.0.0 70.14.13.19 0.0.0.0 UG 0 0 0 eth0

route -n on Client2:

Destination Gateway Genmask Flags Metric Ref Use Iface
10.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0
55.66.77.10 84.56.120.60 255.255.255.255 UGH 0 0 0 eth1
10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth0
0.0.0.0 84.56.120.60 0.0.0.0 UG 0 0 0 eth1

cat /etc/ppp/options.pptpd on the PPTP server:

###############################################################################
# $Id: options.pptpd,v 1.11 2005/12/29 01:21:09 quozl Exp $
#
# Sample Poptop PPP options file /etc/ppp/options.pptpd
# Options used by PPP when a connection arrives from a client.
# This file is pointed to by /etc/pptpd.conf option keyword.
# Changes are effective on the next connection. See "man pppd".
#
# You are expected to change this file to suit your system. As
# packaged, it requires PPP 2.4.2 and the kernel MPPE module.
###############################################################################

# Authentication
# Name of the local system for authentication purposes
# (must match the second field in /etc/ppp/chap-secrets entries)
name pptpd
# Strip the domain prefix from the username before authentication.
# (applies if you use pppd with chapms-strip-domain patch)
#chapms-strip-domain

# Encryption
# (There have been multiple versions of PPP with encryption support,
# choose which of the following sections you will use.)

# BSD licensed ppp-2.4.2 upstream with MPPE only, kernel module ppp_mppe.o
# {{{
refuse-pap
refuse-chap
refuse-mschap
# Require the peer to authenticate itself using MS-CHAPv2 [Microsoft
# Challenge Handshake Authentication Protocol, Version 2] authentication.
require-mschap-v2
# Require MPPE 128-bit encryption
# (note that MPPE requires the use of MSCHAP-V2 during authentication)
require-mppe-128
# }}}

# OpenSSL licensed ppp-2.4.1 fork with MPPE only, kernel module mppe.o
# {{{
#-chap
#-chapms
# Require the peer to authenticate itself using MS-CHAPv2 [Microsoft
# Challenge Handshake Authentication Protocol, Version 2] authentication.
#+chapms-v2
# Require MPPE encryption
# (note that MPPE requires the use of MSCHAP-V2 during authentication)
#mppe-40 # enable either 40-bit or 128-bit, not both
#mppe-128
#mppe-stateless
# }}}

# Network and Routing
# If pppd is acting as a server for Microsoft Windows clients, this
# option allows pppd to supply one or two DNS (Domain Name Server)
# addresses to the clients. The first instance of this option
# specifies the primary DNS address; the second instance (if given)
# specifies the secondary DNS address.
#ms-dns 10.0.0.1
#ms-dns 10.0.0.2
# If pppd is acting as a server for Microsoft Windows or "Samba"
# clients, this option allows pppd to supply one or two WINS (Windows
# Internet Name Services) server addresses to the clients. The first
# instance of this option specifies the primary WINS address; the
# second instance (if given) specifies the secondary WINS address.
#ms-wins 10.0.0.3
#ms-wins 10.0.0.4
# Add an entry to this system's ARP [Address Resolution Protocol]
# table with the IP address of the peer and the Ethernet address of this
# system. This will have the effect of making the peer appear to other
# systems to be on the local ethernet.
# (you do not need this if your PPTP server is responsible for routing
# packets to the clients -- James Cameron)
proxyarp
# Normally pptpd passes the IP address to pppd, but if pptpd has been
# given the delegate option in pptpd.conf or the --delegate command line
# option, then pppd will use chap-secrets or radius to allocate the
# client IP address. The default local IP address used at the server
# end is often the same as the address of the server. To override this,
# specify the local IP address here.
# (you must not use this unless you have used the delegate option)
#10.8.0.100

# Logging
# Enable connection debugging facilities.
# (see your syslog configuration for where pppd sends to)
debug
# Print out all the option values which have been set.
# (often requested by mailing list to verify options)
#dump

# Miscellaneous
# Create a UUCP-style lock file for the pseudo-tty to ensure exclusive
# access.
lock
# Disable BSD-Compress compression
nobsdcomp
# Disable Van Jacobson compression
# (needed on some networks with Windows 9x/ME/XP clients, see posting to
# poptop-server on 14th April 2005 by Pawel Pokrywka and followups,
# http://marc.theaimsgroup.com/?t=111343175400006&r=1&w=2 )
novj
novjccomp
# turn off logging to stderr, since this may be redirected to pptpd,
# which may trigger a loopback
nologfd
# put plugins here
# (putting them higher up may cause them to send messages to the pty)

cat /etc/ppp/options.pptp on Client1 and Client2:

###############################################################################
# $Id: options.pptp,v 1.3 2006/03/26 23:11:05 quozl Exp $
#
# Sample PPTP PPP options file /etc/ppp/options.pptp
# Options used by PPP when a connection is made by a PPTP client.
# This file can be referred to by an /etc/ppp/peers file for the tunnel.
# Changes are effective on the next connection. See "man pppd".
#
# You are expected to change this file to suit your system. As
# packaged, it requires PPP 2.4.2 or later from http://ppp.samba.org/
# and the kernel MPPE module available from the CVS repository also on
# http://ppp.samba.org/, which is packaged for DKMS as kernel_ppp_mppe.
###############################################################################

# Lock the port
lock

# Authentication
# We don't need the tunnel server to authenticate itself
noauth
# We won't do PAP, EAP, CHAP, or MSCHAP, but we will accept MSCHAP-V2
# (you may need to remove these refusals if the server is not using MPPE)
refuse-pap
refuse-eap
refuse-chap
refuse-mschap

# Compression
# Turn off compression protocols we know won't be used
nobsdcomp
nodeflate

# Encryption
# (There have been multiple versions of PPP with encryption support,
# choose which of the following sections you will use. Note that MPPE
# requires the use of MSCHAP-V2 during authentication)
#
# Note that using PPTP with MPPE and MSCHAP-V2 should be considered
# insecure:
# http://marc.info/?l=pptpclient-devel&m=134372640219039&w=2
# https://github.com/moxie0/chapcrack/blob/master/README.md
# http://technet.microsoft.com/en-us/security/advisory/2743314

# http://ppp.samba.org/ the PPP project version of PPP by Paul Mackarras
# ppp-2.4.2 or later with MPPE only, kernel module ppp_mppe.o
# If the kernel is booted in FIPS mode (fips=1), the ppp_mppe.ko module
# is not allowed and PPTP-MPPE is not available.
# {{{
# Require MPPE 128-bit encryption
#require-mppe-128
# }}}

# http://mppe-mppc.alphacron.de/ fork from PPP project by Jan Dubiec
# ppp-2.4.2 or later with MPPE and MPPC, kernel module ppp_mppe_mppc.o
# {{{
# Require MPPE 128-bit encryption
#mppe required,stateless
# }}}

iptables is stopped on the clients and the server, and net.ipv4.ip_forward = 1 is enabled on the PPTP server. How can I solve this problem?

    Read the article

  • Added splash screen code to my package

    - by Youssef
Please, I need support to add splash screen code to my package.

/*
 * T24_Transformer_FormView.java
 */
package t24_transformer_form;

import org.jdesktop.application.Action;
import org.jdesktop.application.ResourceMap;
import org.jdesktop.application.SingleFrameApplication;
import org.jdesktop.application.FrameView;
import org.jdesktop.application.TaskMonitor;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.filechooser.FileNameExtensionFilter;
import javax.swing.filechooser.FileFilter;
// old T24 Transformer imports
import java.io.File;
import java.io.FileWriter;
import java.io.StringWriter;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.util.HashMap;
import java.util.Iterator;
//import java.util.Properties;
import java.util.StringTokenizer;
import javax.swing.*;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Result;
import javax.xml.transform.Source;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;
import org.w3c.dom.Document;
import org.w3c.dom.DocumentFragment;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import com.ejada.alinma.edh.xsdtransform.util.ConfigKeys;
import com.ejada.alinma.edh.xsdtransform.util.XSDElement;
import com.sun.org.apache.xml.internal.serialize.OutputFormat;
import com.sun.org.apache.xml.internal.serialize.XMLSerializer;

/*
 * The application's main frame.
 */
public class T24_Transformer_FormView extends FrameView {

    /**
     * static holders for application-level utilities
     */
    //private static Properties appProps;
    private static Logger appLogger;

    private StringBuffer columnsCSV = null;
    private ArrayList<String> singleValueTableColumns = null;
    private HashMap<String, String> multiValueTablesSQL = null;
    private HashMap<Object, HashMap<String, Object>> groupAttrs = null;
    private ArrayList<XSDElement> xsdElementsList = null;

    /**
     * initialization
     */
    private void init() /*throws Exception*/ {
        // init the properties object
        //FileReader in = new FileReader(appConfigPropsPath);
        //appProps.load(in);

        // log4j.properties constant
        String PROP_LOG4J_CONFIG_FILE = "log4j.properties";

        // init the logger
        if ((PROP_LOG4J_CONFIG_FILE != null) && (!PROP_LOG4J_CONFIG_FILE.equals(""))) {
            PropertyConfigurator.configure(PROP_LOG4J_CONFIG_FILE);
            if (appLogger == null) {
                appLogger = Logger.getLogger(T24_Transformer_FormView.class.getName());
            }
            appLogger.info("Application initialization successful.");
        }

        columnsCSV = new StringBuffer(ConfigKeys.FIELD_TAG + "," + ConfigKeys.FIELD_NUMBER + ","
                + ConfigKeys.FIELD_DATA_TYPE + "," + ConfigKeys.FIELD_FMT + "," + ConfigKeys.FIELD_LEN + ","
                + ConfigKeys.FIELD_INPUT_LEN + "," + ConfigKeys.FIELD_GROUP_NUMBER + ","
                + ConfigKeys.FIELD_MV_GROUP_NUMBER + "," + ConfigKeys.FIELD_SHORT_NAME + ","
                + ConfigKeys.FIELD_NAME + "," + ConfigKeys.FIELD_COLUMN_NAME + "," + ConfigKeys.FIELD_GROUP_NAME + ","
                + ConfigKeys.FIELD_MV_GROUP_NAME + "," + ConfigKeys.FIELD_JUSTIFICATION + "," + ConfigKeys.FIELD_TYPE
                + "," + ConfigKeys.FIELD_SINGLE_OR_MULTI + System.getProperty("line.separator"));

        singleValueTableColumns = new ArrayList<String>();
        singleValueTableColumns.add(ConfigKeys.COLUMN_XPK_ROW + ConfigKeys.DELIMITER_COLUMN_TYPE
                + ConfigKeys.DATA_TYPE_XSD_NUMERIC);
        multiValueTablesSQL = new HashMap<String, String>();
        groupAttrs = new HashMap<Object, HashMap<String, Object>>();
        xsdElementsList = new ArrayList<XSDElement>();
    }

    /**
     * initialize the <code>DocumentBuilder</code> and read the XSD file
     *
     * @param docPath
     * @return the <code>Document</code> object representing the read XSD file
     */
    private Document retrieveDoc(String docPath) {
        Document xsdDoc = null;
        File file = new File(docPath);
        try {
            DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
            xsdDoc = builder.parse(file);
        } catch (Exception e) {
            appLogger.error(e.getMessage());
        }
        return xsdDoc;
    }

    /**
     * perform the iteration/modification on the document:
     * iterate to the level which contains all the elements (single-value and groups) and start processing each
     *
     * @param xsdDoc
     * @return
     */
    private Document processDoc(Document xsdDoc) {
        ArrayList<Object> newElementsList = new ArrayList<Object>();
        HashMap<String, Object> docAttrMap = new HashMap<String, Object>();
        Element sequenceElement = null;
        Element schemaElement = null;

        // get document's root element
        NodeList nodes = xsdDoc.getChildNodes();
        for (int i = 0; i < nodes.getLength(); i++) {
            if (ConfigKeys.TAG_SCHEMA.equals(nodes.item(i).getNodeName())) {
                schemaElement = (Element) nodes.item(i);
                break;
            }
        }

        // process the document (change single-value elements, collect list of new elements to be added)
        for (int i1 = 0; i1 < schemaElement.getChildNodes().getLength(); i1++) {
            Node childLevel1 = (Node) schemaElement.getChildNodes().item(i1);
            // <ComplexType> element
            if (childLevel1.getNodeName().equals(ConfigKeys.TAG_COMPLEX_TYPE)) {
                // first, get the main attributes and put them in the csv file
                for (int i6 = 0; i6 < childLevel1.getChildNodes().getLength(); i6++) {
                    Node child6 = childLevel1.getChildNodes().item(i6);
                    if (ConfigKeys.TAG_ATTRIBUTE.equals(child6.getNodeName())) {
                        if (child6.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME) != null) {
                            String attrName = child6.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME).getNodeValue();
                            if (((Element) child6).getElementsByTagName(ConfigKeys.TAG_SIMPLE_TYPE).getLength() != 0) {
                                Node simpleTypeElement = ((Element) child6).getElementsByTagName(ConfigKeys.TAG_SIMPLE_TYPE).item(0);
                                if (((Element) simpleTypeElement).getElementsByTagName(ConfigKeys.TAG_RESTRICTION).getLength() != 0) {
                                    Node restrictionElement = ((Element) simpleTypeElement).getElementsByTagName(ConfigKeys.TAG_RESTRICTION).item(0);
                                    if (((Element) restrictionElement).getElementsByTagName(ConfigKeys.TAG_MAX_LENGTH).getLength() != 0) {
                                        Node maxLengthElement = ((Element) restrictionElement).getElementsByTagName(ConfigKeys.TAG_MAX_LENGTH).item(0);
                                        HashMap<String, String> elementProperties = new HashMap<String, String>();
                                        elementProperties.put(ConfigKeys.FIELD_TAG, attrName);
                                        elementProperties.put(ConfigKeys.FIELD_NUMBER, "0");
                                        elementProperties.put(ConfigKeys.FIELD_DATA_TYPE, ConfigKeys.DATA_TYPE_XSD_STRING);
                                        elementProperties.put(ConfigKeys.FIELD_FMT, "");
                                        elementProperties.put(ConfigKeys.FIELD_NAME, attrName);
                                        elementProperties.put(ConfigKeys.FIELD_SHORT_NAME, attrName);
                                        elementProperties.put(ConfigKeys.FIELD_COLUMN_NAME, attrName);
                                        elementProperties.put(ConfigKeys.FIELD_SINGLE_OR_MULTI, "S");
                                        elementProperties.put(ConfigKeys.FIELD_LEN, maxLengthElement.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE).getNodeValue());
                                        elementProperties.put(ConfigKeys.FIELD_INPUT_LEN, maxLengthElement.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE).getNodeValue());
                                        constructElementRow(elementProperties);
                                        // add the attribute as a column in the single-value table
                                        singleValueTableColumns.add(attrName + ConfigKeys.DELIMITER_COLUMN_TYPE
                                                + ConfigKeys.DATA_TYPE_XSD_STRING + ConfigKeys.DELIMITER_COLUMN_TYPE
                                                + maxLengthElement.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE).getNodeValue());
                                        // add the attribute as an element in the elements list
                                        addToElementsList(attrName, attrName);
                                        appLogger.debug("added attribute: " + attrName);
                                    }
                                }
                            }
                        }
                    }
                }

                // now, loop on the elements and process them
                for (int i2 = 0; i2 < childLevel1.getChildNodes().getLength(); i2++) {
                    Node childLevel2 = (Node) childLevel1.getChildNodes().item(i2);
                    // <Sequence> element
                    if (childLevel2.getNodeName().equals(ConfigKeys.TAG_SEQUENCE)) {
                        sequenceElement = (Element) childLevel2;
                        for (int i3 = 0; i3 < childLevel2.getChildNodes().getLength(); i3++) {
                            Node childLevel3 = (Node) childLevel2.getChildNodes().item(i3);
                            // <Element> element
                            if (childLevel3.getNodeName().equals(ConfigKeys.TAG_ELEMENT)) {
                                // check if single element or group
                                if (isGroup(childLevel3)) {
                                    processGroup(childLevel3, true, null, null, docAttrMap, xsdDoc, newElementsList);
                                    // insert a new comment node with the contents of the group tag
                                    sequenceElement.insertBefore(xsdDoc.createComment(serialize(childLevel3)), childLevel3);
                                    // remove the group tag
                                    sequenceElement.removeChild(childLevel3);
                                } else {
                                    processElement(childLevel3);
                                }
                            }
                        }
                    }
                }
            }
        }

        // add new elements
        // this step should be after finishing processing the whole document. when you add new elements to the document
        // while you are working on it, those new elements will be included in the processing. We don't need that!
        for (int i = 0; i < newElementsList.size(); i++) {
            sequenceElement.appendChild((Element) newElementsList.get(i));
        }

        // write the new required attributes to the schema element
        Iterator<String> attrIter = docAttrMap.keySet().iterator();
        while (attrIter.hasNext()) {
            Element attr = (Element) docAttrMap.get(attrIter.next());
            Element newAttrElement = xsdDoc.createElement(ConfigKeys.TAG_ATTRIBUTE);
            appLogger.debug("appending attr. [" + attr.getAttribute(ConfigKeys.ATTR_NAME) + "]...");
            newAttrElement.setAttribute(ConfigKeys.ATTR_NAME, attr.getAttribute(ConfigKeys.ATTR_NAME));
            newAttrElement.setAttribute(ConfigKeys.ATTR_TYPE, attr.getAttribute(ConfigKeys.ATTR_TYPE));
            schemaElement.appendChild(newAttrElement);
        }

        return xsdDoc;
    }

    /**
     * add a new <code>XSDElement</code> with the given <code>name</code> and <code>businessName</code> to
     * the elements list
     *
     * @param name
     * @param businessName
     */
    private void addToElementsList(String name, String businessName) {
        xsdElementsList.add(new XSDElement(name, businessName));
    }

    /**
     * add the given <code>XSDElement</code> to the elements list
     *
     * @param element
     */
    private void addToElementsList(XSDElement element) {
        xsdElementsList.add(element);
    }

    /**
     * check if the <code>element</code> sent is single-value element or group
     * element. the comparison depends on the children of the element.
if found one of type * <code>ComplexType</code> then it's a group element, and if of type * <code>SimpleType</code> then it's a single-value element * * @param element * @return <code>true</code> if the element is a group element, * <code>false</code> otherwise */ private boolean isGroup(Node element) { for (int i = 0; i < element.getChildNodes().getLength(); i++) { Node child = (Node) element.getChildNodes().item(i); if (child.getNodeName().equals(ConfigKeys.TAG_COMPLEX_TYPE)) { // found a ComplexType child (Group element) return true; } else if (child.getNodeName().equals(ConfigKeys.TAG_SIMPLE_TYPE)) { // found a SimpleType child (Single-Value element) return false; } } return false; /* String attrName = null; if (element.getAttributes() != null) { Node attribute = element.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME); if (attribute != null) { attrName = attribute.getNodeValue(); } } if (attrName.startsWith("g")) { // group element return true; } else { // single element return false; } */ } /** * process a group element. recursively, process groups till no more group elements are found * * @param element * @param isFirstLevelGroup * @param attrMap * @param docAttrMap * @param xsdDoc * @param newElementsList */ private void processGroup(Node element, boolean isFirstLevelGroup, Node parentGroup, XSDElement parentGroupElement, HashMap<String, Object> docAttrMap, Document xsdDoc, ArrayList<Object> newElementsList) { String elementName = null; HashMap<String, Object> groupAttrMap = new HashMap<String, Object>(); HashMap<String, Object> parentGroupAttrMap = new HashMap<String, Object>(); XSDElement groupElement = null; if (element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME) != null) { elementName = element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME).getNodeValue(); } appLogger.debug("processing group [" + elementName + "]..."); groupElement = new XSDElement(elementName, elementName); // get the attributes if a non-first-level-group // attributes are: groups's own attributes + parent group's attributes if (!isFirstLevelGroup) { // get the current element (group) attributes for (int i1 = 0; i1 < element.getChildNodes().getLength(); i1++) { if (ConfigKeys.TAG_COMPLEX_TYPE.equals(element.getChildNodes().item(i1).getNodeName())) { Node complexTypeNode = element.getChildNodes().item(i1); for (int i2 = 0; i2 < complexTypeNode.getChildNodes().getLength(); i2++) { if (ConfigKeys.TAG_ATTRIBUTE.equals(complexTypeNode.getChildNodes().item(i2).getNodeName())) { appLogger.debug("add group attr: " + ((Element) complexTypeNode.getChildNodes().item(i2)).getAttribute(ConfigKeys.ATTR_NAME)); groupAttrMap.put(((Element) complexTypeNode.getChildNodes().item(i2)).getAttribute(ConfigKeys.ATTR_NAME), complexTypeNode.getChildNodes().item(i2)); docAttrMap.put(((Element) complexTypeNode.getChildNodes().item(i2)).getAttribute(ConfigKeys.ATTR_NAME), complexTypeNode.getChildNodes().item(i2)); } } } } // now, get the parent's attributes parentGroupAttrMap = groupAttrs.get(parentGroup); if (parentGroupAttrMap != null) { Iterator<String> iter = parentGroupAttrMap.keySet().iterator(); while (iter.hasNext()) { String attrName = iter.next(); groupAttrMap.put(attrName, parentGroupAttrMap.get(attrName)); } } // add the attributes to the group element that will be added to the elements list Iterator<String> itr = groupAttrMap.keySet().iterator(); while(itr.hasNext()) { groupElement.addAttribute(itr.next()); } // put the attributes in the attributes map groupAttrs.put(element, groupAttrMap); } for (int i = 0; 
i < element.getChildNodes().getLength(); i++) { Node childLevel1 = (Node) element.getChildNodes().item(i); if (childLevel1.getNodeName().equals(ConfigKeys.TAG_COMPLEX_TYPE)) { for (int j = 0; j < childLevel1.getChildNodes().getLength(); j++) { Node childLevel2 = (Node) childLevel1.getChildNodes().item(j); if (childLevel2.getNodeName().equals(ConfigKeys.TAG_SEQUENCE)) { for (int k = 0; k < childLevel2.getChildNodes().getLength(); k++) { Node childLevel3 = (Node) childLevel2.getChildNodes().item(k); if (childLevel3.getNodeName().equals(ConfigKeys.TAG_ELEMENT)) { // check if single element or group if (isGroup(childLevel3)) { // another group element.. // unfortunately, a recursion is // needed here!!! :-( processGroup(childLevel3, false, element, groupElement, docAttrMap, xsdDoc, newElementsList); } else { // reached a single-value element.. copy it under the // main sequence and apply the name<>shorname replacement processGroupElement(childLevel3, element, groupElement, isFirstLevelGroup, xsdDoc, newElementsList); } } } } } } } if (isFirstLevelGroup) { addToElementsList(groupElement); } else { parentGroupElement.addChild(groupElement); } appLogger.debug("finished processing group [" + elementName + "]."); } /** * process the sent <code>element</code> to extract/modify required * information: * 1. replace the <code>name</code> attribute with the <code>shortname</code>. * * @param element */ private void processElement(Node element) { String fieldShortName = null; String fieldColumnName = null; String fieldDataType = null; String fieldFormat = null; String fieldInputLength = null; String elementName = null; HashMap<String, String> elementProperties = new HashMap<String, String>(); if (element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME) != null) { elementName = element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME).getNodeValue(); } appLogger.debug("processing element [" + elementName + "]..."); for (int i = 0; i < element.getChildNodes().getLength(); i++) { Node childLevel1 = (Node) element.getChildNodes().item(i); if (childLevel1.getNodeName().equals(ConfigKeys.TAG_ANNOTATION)) { for (int j = 0; j < childLevel1.getChildNodes().getLength(); j++) { Node childLevel2 = (Node) childLevel1.getChildNodes().item(j); if (childLevel2.getNodeName().equals(ConfigKeys.TAG_APP_INFO)) { for (int k = 0; k < childLevel2.getChildNodes().getLength(); k++) { Node childLevel3 = (Node) childLevel2.getChildNodes().item(k); if (childLevel3.getNodeName().equals(ConfigKeys.TAG_HAS_PROPERTY)) { if (childLevel3.getAttributes() != null) { String attrName = null; Node attribute = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME); if (attribute != null) { attrName = attribute.getNodeValue(); elementProperties.put(attrName, childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue()); if (attrName.equals(ConfigKeys.FIELD_SHORT_NAME)) { fieldShortName = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_COLUMN_NAME)) { fieldColumnName = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_DATA_TYPE)) { fieldDataType = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_FMT)) { fieldFormat = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_INPUT_LEN)) { fieldInputLength = 
childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } } } } } } } } } // replace the name attribute with the shortname if (element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME) != null) { element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME).setNodeValue(fieldShortName); } elementProperties.put(ConfigKeys.FIELD_SINGLE_OR_MULTI, "S"); constructElementRow(elementProperties); singleValueTableColumns.add(fieldShortName + ConfigKeys.DELIMITER_COLUMN_TYPE + fieldDataType + fieldFormat + ConfigKeys.DELIMITER_COLUMN_TYPE + fieldInputLength); // add the element to elements list addToElementsList(fieldShortName, fieldColumnName); appLogger.debug("finished processing element [" + elementName + "]."); } /** * process the sent <code>element</code> to extract/modify required * information: * 1. copy the element under the main sequence * 2. replace the <code>name</code> attribute with the <code>shortname</code>. * 3. add the attributes of the parent groups (if non-first-level-group) * * @param element */ private void processGroupElement(Node element, Node parentGroup, XSDElement parentGroupElement, boolean isFirstLevelGroup, Document xsdDoc, ArrayList<Object> newElementsList) { String fieldShortName = null; String fieldColumnName = null; String fieldDataType = null; String fieldFormat = null; String fieldInputLength = null; String elementName = null; Element newElement = null; HashMap<String, String> elementProperties = new HashMap<String, String>(); ArrayList<String> tableColumns = new ArrayList<String>(); HashMap<String, Object> groupAttrMap = null; if (element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME) != null) { elementName = element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME).getNodeValue(); } appLogger.debug("processing element [" + elementName + "]..."); // 1. copy the element newElement = (Element) element.cloneNode(true); newElement.setAttribute(ConfigKeys.ATTR_MAX_OCCURS, "unbounded"); // 2. if non-first-level-group, replace the element's SimpleType tag with a ComplexType tag if (!isFirstLevelGroup) { if (((Element) newElement).getElementsByTagName(ConfigKeys.TAG_SIMPLE_TYPE).getLength() != 0) { // there should be only one tag of SimpleType Node simpleTypeNode = ((Element) newElement).getElementsByTagName(ConfigKeys.TAG_SIMPLE_TYPE).item(0); // create the new ComplexType element Element complexTypeNode = xsdDoc.createElement(ConfigKeys.TAG_COMPLEX_TYPE); complexTypeNode.setAttribute(ConfigKeys.ATTR_MIXED, "true"); // get the list of attributes for the parent group groupAttrMap = groupAttrs.get(parentGroup); Iterator<String> attrIter = groupAttrMap.keySet().iterator(); while(attrIter.hasNext()) { Element attr = (Element) groupAttrMap.get(attrIter.next()); Element newAttrElement = xsdDoc.createElement(ConfigKeys.TAG_ATTRIBUTE); appLogger.debug("adding attr. [" + attr.getAttribute(ConfigKeys.ATTR_NAME) + "]..."); newAttrElement.setAttribute(ConfigKeys.ATTR_REF, attr.getAttribute(ConfigKeys.ATTR_NAME)); newAttrElement.setAttribute(ConfigKeys.ATTR_USE, "optional"); complexTypeNode.appendChild(newAttrElement); } // replace the old SimpleType node with the new ComplexType node newElement.replaceChild(complexTypeNode, simpleTypeNode); } } // 3. 
replace the name with the shortname in the new element for (int i = 0; i < newElement.getChildNodes().getLength(); i++) { Node childLevel1 = (Node) newElement.getChildNodes().item(i); if (childLevel1.getNodeName().equals(ConfigKeys.TAG_ANNOTATION)) { for (int j = 0; j < childLevel1.getChildNodes().getLength(); j++) { Node childLevel2 = (Node) childLevel1.getChildNodes().item(j); if (childLevel2.getNodeName().equals(ConfigKeys.TAG_APP_INFO)) { for (int k = 0; k < childLevel2.getChildNodes().getLength(); k++) { Node childLevel3 = (Node) childLevel2.getChildNodes().item(k); if (childLevel3.getNodeName().equals(ConfigKeys.TAG_HAS_PROPERTY)) { if (childLevel3.getAttributes() != null) { String attrName = null; Node attribute = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME); if (attribute != null) { attrName = attribute.getNodeValue(); elementProperties.put(attrName, childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue()); if (attrName.equals(ConfigKeys.FIELD_SHORT_NAME)) { fieldShortName = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_COLUMN_NAME)) { fieldColumnName = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_DATA_TYPE)) { fieldDataType = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_FMT)) { fieldFormat = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_INPUT_LEN)) { fieldInputLength = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } } } } } } } } } if (newElement.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME) != null) { newElement.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME).setNodeValue(fieldShortName); } // 4. save the new element to be added to the sequence list newElementsList.add(newElement); elementProperties.put(ConfigKeys.FIELD_SINGLE_OR_MULTI, "M"); constructElementRow(elementProperties); // create the MULTI-VALUE table // 0. Primary Key tableColumns.add(ConfigKeys.COLUMN_XPK_ROW + ConfigKeys.DELIMITER_COLUMN_TYPE + ConfigKeys.DATA_TYPE_XSD_STRING + ConfigKeys.DELIMITER_COLUMN_TYPE + ConfigKeys.COLUMN_XPK_ROW_LENGTH); // 1. foreign key tableColumns.add(ConfigKeys.COLUMN_FK_ROW + ConfigKeys.DELIMITER_COLUMN_TYPE + ConfigKeys.DATA_TYPE_XSD_NUMERIC); // 2. field value tableColumns.add(fieldShortName + ConfigKeys.DELIMITER_COLUMN_TYPE + fieldDataType + fieldFormat + ConfigKeys.DELIMITER_COLUMN_TYPE + fieldInputLength); // 3. 
attributes if (groupAttrMap != null) { Iterator<String> attrIter = groupAttrMap.keySet().iterator(); while (attrIter.hasNext()) { Element attr = (Element) groupAttrMap.get(attrIter.next()); tableColumns.add(attr.getAttribute(ConfigKeys.ATTR_NAME) + ConfigKeys.DELIMITER_COLUMN_TYPE + ConfigKeys.DATA_TYPE_XSD_NUMERIC); } } multiValueTablesSQL.put(sub_table_prefix.getText() + fieldShortName, constructMultiValueTableSQL( sub_table_prefix.getText() + fieldShortName, tableColumns)); // add the element to it's parent group children parentGroupElement.addChild(new XSDElement(fieldShortName, fieldColumnName)); appLogger.debug("finished processing element [" + elementName + "]."); } /** * write resulted files * * @param xsdDoc * @param docPath */ private void writeResults(Document xsdDoc, String resultsDir, String newXSDFileName, String csvFileName) { String rsDir = resultsDir + File.separator + new SimpleDateFormat("yyyyMMdd-HHmm").format(new Date()); try { File resultsDirFile = new File(rsDir); if (!resultsDirFile.exists()) { resultsDirFile.mkdirs(); } // write the XSD doc appLogger.info("writing the transformed XSD..."); Source source = new DOMSource(xsdDoc); Result result = new StreamResult(rsDir + File.separator + newXSDFileName); Transformer xformer = TransformerFactory.newInstance().newTransformer(); // xformer.setOutputProperty("indent", "yes"); xformer.transform(source, result); appLogger.info("finished writing the transformed XSD."); // write the CSV columns file appLogger.info("writing the CSV file..."); FileWriter csvWriter = new FileWriter(rsDir + File.separator + csvFileName); csvWriter.write(columnsCSV.toString()); csvWriter.close(); appLogger.info("finished writing the CSV file."); // write the master single-value table appLogger.info("writing the creation script for master table (single-values)..."); FileWriter masterTableWriter = new FileWriter(rsDir + File.separator + main_edh_table_name.getText() + ".sql"); masterTableWriter.write(constructSingleValueTableSQL(main_edh_table_name.getText(), singleValueTableColumns)); masterTableWriter.close(); appLogger.info("finished writing the creation script for master table (single-values)."); // write the multi-value tables sql appLogger.info("writing the creation script for slave tables (multi-values)..."); Iterator<String> iter = multiValueTablesSQL.keySet().iterator(); while (iter.hasNext()) { String tableName = iter.next(); String sql = multiValueTablesSQL.get(tableName); FileWriter tableSQLWriter = new FileWriter(rsDir + File.separator + tableName + ".sql"); tableSQLWriter.write(sql); tableSQLWriter.close(); } appLogger.info("finished writing the creation script for slave tables (multi-values)."); // write the single-value view appLogger.info("writing the creation script for single-value selection view..."); FileWriter singleValueViewWriter = new FileWriter(rsDir + File.separator + view_name_single.getText() + ".sql"); singleValueViewWriter.write(constructViewSQL(ConfigKeys.SQL_VIEW_SINGLE)); singleValueViewWriter.close(); appLogger.info("finished writing the creation script for single-value selection view."); // debug for (int i = 0; i < xsdElementsList.size(); i++) { getMultiView(xsdElementsList.get(i)); /*// if (xsdElementsList.get(i).getAllDescendants() != null) { // for (int j = 0; j < xsdElementsList.get(i).getAllDescendants().size(); j++) { // appLogger.debug(main_edh_table_name.getText() + "." + ConfigKeys.COLUMN_XPK_ROW // + "=" + xsdElementsList.get(i).getAllDescendants().get(j).getName() + "." 
+ ConfigKeys.COLUMN_FK_ROW); // } // } */ } } catch (Exception e) { appLogger.error(e.getMessage()); } } private String getMultiView(XSDElement element)

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: You create a class that holds the configuration data that inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage. About once a week somebody asks me about JSON support, and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure, JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.
    Hard Link Issues in a Component Library
    Another reason I've been hesitant is that I really didn't want to pull in a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere, I don't want to force a user into a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5, you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.
    Enter Dynamic Loading
    So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future. But there are also a couple of downsides:
    - No assembly reference means only dynamic access - no compiler type checking or Intellisense
    - The host application is required to have a reference to JSON.NET or else it gets runtime errors
    The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings, JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already. So there are a few things that are needed to make this work:
    - Dynamically create an instance and optionally attempt to load an Assembly (if not loaded)
    - Load types into dynamic variables
    - Use Reflection for a few tasks like statics/enums
    The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection. 
Specifically, it doesn't deal with object activation, truly dynamic (string based) member activation, or accessing of non-instance members, so there's still a little bit of work left to do with Reflection.
    Dynamic Object Instantiation
    The first step in getting the process rolling is to instantiate the type you need to work with. This might be a two-step process - loading the instance from a string value, since we don't have a hard type reference, and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that assembly might not have been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable projects, assemblies are just-in-time loaded only when they are accessed. Instantiating a type is a two-step process: finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this: /// <summary> /// Creates an instance of a type based on a string. Assumes that the type's /// </summary> /// <param name="typeName">Common name of the type</param> /// <param name="args">Any constructor parameters</param> /// <returns></returns> public static object CreateInstanceFromString(string typeName, params object[] args) { object instance = null; Type type = null; try { type = GetTypeFromName(typeName); if (type == null) return null; instance = Activator.CreateInstance(type, args); } catch { return null; } return instance; } /// <summary> /// Helper routine that looks up a type name and tries to retrieve the /// full type reference in the actively executing assemblies. /// </summary> /// <param name="typeName"></param> /// <returns></returns> public static Type GetTypeFromName(string typeName) { Type type = null; // Let default name binding find it type = Type.GetType(typeName, false); if (type != null) return type; // look through assembly list var assemblies = AppDomain.CurrentDomain.GetAssemblies(); // try to find manually foreach (Assembly asm in assemblies) { type = asm.GetType(typeName, false); if (type != null) break; } return type; } 
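    As a quick illustration, these two helpers can be exercised with any type name before wiring them up to JSON.NET. This is a minimal sketch - the StringBuilder type name below is purely illustrative (use it inside any method, with using System; in scope):

    // resolve and instantiate a type purely from its string name
    object sb = ReflectionUtils.CreateInstanceFromString("System.Text.StringBuilder");
    Type type = ReflectionUtils.GetTypeFromName("System.Text.StringBuilder");
    Console.WriteLine(sb != null && type != null);   // prints True when the type resolves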
    To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails, since that's the main reason the load would fail. Finally, it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached). Here's what the factory function looks like in JsonSerializationUtils: /// <summary> /// Dynamically creates an instance of JSON.NET /// </summary> /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param> /// <returns>Dynamic JsonSerializer instance</returns> public static dynamic CreateJsonNet(bool throwExceptions = true) { if (JsonNet != null) return JsonNet; lock (SyncLock) { if (JsonNet != null) return JsonNet; // Try to create instance dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); if (json == null) { try { var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json"); json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); } catch (Exception ex) { if (throwExceptions) throw; return null; } } if (json == null) return null; json.ReferenceLoopHandling = (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore"); // Enums as strings in JSON dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter"); json.Converters.Add(enumConverter); JsonNet = json; } return JsonNet; } This code's purpose is to return a fully configured JsonSerializer instance. As you can see, the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading. Once the instance is loaded, some configuration occurs on it. Specifically, I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config settings that might be useful to set, but the defaults seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling, which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property. Another feature I need is to have Enum values serialized as strings rather than numeric values, which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection. As you can see, there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.
    Doing the actual JSON Conversion
    Finally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files, so I created four methods that handle these tasks - two each for serialization and deserialization, for string and file. Here's what the File Serialization looks like: /// <summary> /// Serializes an object instance to a JSON file. 
/// </summary> /// <param name="value">the value to serialize</param> /// <param name="fileName">Full path to the file to write out with JSON.</param> /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param> /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param> /// <returns>true or false</returns> public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false) { dynamic writer = null; FileStream fs = null; try { Type type = value.GetType(); var json = CreateJsonNet(throwExceptions); if (json == null) return false; fs = new FileStream(fileName, FileMode.Create); var sw = new StreamWriter(fs, Encoding.UTF8); writer = Activator.CreateInstance(JsonTextWriterType, sw); if (formatJsonOutput) writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented"); writer.QuoteChar = '"'; json.Serialize(writer, value); } catch (Exception ex) { Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message); if (throwExceptions) throw; return false; } finally { if (writer != null) writer.Close(); if (fs != null) fs.Close(); } return true; } You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier, which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic, it's still fairly short and readable. For full circle operation here's the DeserializeFromFile() version: /// <summary> /// Deserializes an object from file and returns a reference. /// </summary> /// <param name="fileName">name of the file to deserialize from</param> /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param> /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param> /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns> public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false) { dynamic json = CreateJsonNet(throwExceptions); if (json == null) return null; object result = null; dynamic reader = null; FileStream fs = null; try { fs = new FileStream(fileName, FileMode.Open, FileAccess.Read); var sr = new StreamReader(fs, Encoding.UTF8); reader = Activator.CreateInstance(JsonTextReaderType, sr); result = json.Deserialize(reader, objectType); reader.Close(); } catch (Exception ex) { Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message); if (throwExceptions) throw; return null; } finally { if (reader != null) reader.Close(); if (fs != null) fs.Close(); } return result; } This code is a little more compact since there are no prettifying options to set. 
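    The JsonTextWriterType, JsonTextReaderType and FormattingType fields used in the two methods above aren't shown in the excerpts, and neither is ReflectionUtils.GetStaticProperty(). Here is a minimal sketch of what they might look like, assuming the GetTypeFromName() helper shown earlier and the Type.InvokeMember() approach the text describes - an illustration, not the library's actual source:

    using System;
    using System.Reflection;

    internal static class ReflectionSketch
    {
        // Cached Type references for the dynamically loaded JSON.NET types,
        // resolved once via the GetTypeFromName() helper shown earlier.
        // The field names mirror the ones used above but are assumptions.
        internal static readonly Type JsonTextWriterType =
            ReflectionUtils.GetTypeFromName("Newtonsoft.Json.JsonTextWriter");
        internal static readonly Type JsonTextReaderType =
            ReflectionUtils.GetTypeFromName("Newtonsoft.Json.JsonTextReader");
        internal static readonly Type FormattingType =
            ReflectionUtils.GetTypeFromName("Newtonsoft.Json.Formatting");

        // Reads a public static property or field by name. Enum values like
        // ReferenceLoopHandling.Ignore are static fields, hence GetField too.
        internal static object GetStaticProperty(string typeName, string memberName)
        {
            Type type = ReflectionUtils.GetTypeFromName(typeName);
            if (type == null)
                return null;
            try
            {
                return type.InvokeMember(memberName,
                    BindingFlags.Static | BindingFlags.Public |
                    BindingFlags.GetProperty | BindingFlags.GetField,
                    null, null, null);
            }
            catch
            {
                return null;
            }
        }
    }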
    In the DeserializeFromFile() code, the JsonTextReader is created dynamically and receives the output from the Deserialize() operation on the serializer. You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive. These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.
    Using the JsonSerializationUtils Wrapper
    The final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider that is responsible for handling reading and writing JSON values to and from files. The provider is simply a small wrapper around the SerializationUtils component, and there's very little code to make this work now. The whole provider looks like this: /// <summary> /// Reads and Writes configuration settings in .NET config files and /// sections. Allows reading and writing to default or external files /// and specification of the configuration section that settings are /// applied to. /// </summary> public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration> where TAppConfiguration: AppConfiguration, new() { /// <summary> /// Optional - the Configuration file where configuration settings are /// stored in. If not specified uses the default Configuration Manager /// and its default store. /// </summary> public string JsonConfigurationFile { get { return _JsonConfigurationFile; } set { _JsonConfigurationFile = value; } } private string _JsonConfigurationFile = string.Empty; public override bool Read(AppConfiguration config) { var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration; if (newConfig == null) { if(Write(config)) return true; return false; } DecryptFields(newConfig); DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage"); return true; } /// <summary> /// Return /// </summary> /// <typeparam name="TAppConfig"></typeparam> /// <returns></returns> public override TAppConfig Read<TAppConfig>() { var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig; if (result != null) DecryptFields(result); return result; } /// <summary> /// Write configuration to JsonConfigurationFile location /// </summary> /// <param name="config"></param> /// <returns></returns> public override bool Write(AppConfiguration config) { EncryptFields(config); bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile, false, true); // Have to decrypt again to make sure the properties are readable afterwards DecryptFields(config); return result; } } This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases. Note this code doesn't have any dynamic dependencies - all that's abstracted away in JsonSerializationUtils. From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class. Already, there are several other places in some other tools where I use JSON serialization, and this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use, replacing quite a bit of code that was previously in use. 
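    To see the wrapper in action, here's a small usage sketch based on the SerializeToFile() and DeserializeFromFile() signatures shown above. The ServerSettings class and file name are made up for illustration, and the calls assume Newtonsoft.Json is resolvable at runtime:

    using System;

    // hypothetical settings POCO - any plain class works
    public class ServerSettings
    {
        public string ServerName { get; set; }
        public int Port { get; set; }
    }

    public class JsonUtilsDemo
    {
        public static void Main()
        {
            var settings = new ServerSettings { ServerName = "prod01", Port = 443 };

            // write indented JSON to disk (throwExceptions: false, formatJsonOutput: true)
            bool ok = JsonSerializationUtils.SerializeToFile(settings, "settings.json", false, true);
            Console.WriteLine("saved: " + ok);

            // read it back - the result must be cast to the target type
            var loaded = JsonSerializationUtils.DeserializeFromFile("settings.json",
                             typeof(ServerSettings)) as ServerSettings;
            Console.WriteLine(loaded != null ? loaded.ServerName : "load failed");
        }
    }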
There are already several other places in other tools where I use JSON serialization, and this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use, replacing quite a bit of code that was previously in use. And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob'-like document storage) this is also going to be handy.

Performance?

Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In some informal testing it looks like the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test, I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. This will change depending on the size of the objects serialized - the larger the object, the more processing time is spent inside the actual dynamically activated components and the smaller the difference will be. Dynamic code is always slower, but how much it really affects your application depends primarily on how frequently the dynamic code is called in relation to the non-dynamic code executing. In most situations where dynamic code is used 'to get the process rolling', as I do here, the overhead is small enough not to matter.

All that said, though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy performance. For the configuration component speed is not that important, because both read and write operations typically happen once on first access and then only every once in a while. But for other operations - say, a serializer handling AJAX requests on a web server - one would be well served to create a hard dependency.

Dynamic Loading - Worth it?

Dynamic loading is not something you need very often, but on occasion it makes sense. There's a price to be paid in added code and a performance hit that depends on how frequently the dynamic code is accessed. But for operations that are not pivotal to a component or application and are only used under certain circumstances, dynamic loading can be beneficial: it avoids shipping extra files, adding dependencies, and loading down distributions. These days, when a new Visual Studio project can reference 30 assemblies before you even add your own code, trying to keep file counts under control seems like a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful option in your toolset…

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in .NET C#

    Read the article

  • Mozilla Weave can't sync Firefox. What's wrong?

    - by Mehper C. Palavuzlar
    For the last few days, Mozilla Weave can't sync. Below is the activity log. Any ideas? 2010-05-02 20:47:15 Service.Main WARN Unknown error while downloading metadata record. Aborting sync. 2010-05-02 20:47:15 Service.Main CONFIG Starting backoff, next sync at:Sun May 02 2010 21:16:09 GMT+0300 (GTB Yaz Saati) 2010-05-02 20:47:15 Service.Main DEBUG Exception: aborting sync, remote setup failed No traceback available 2010-05-02 21:16:09 Service.Main DEBUG Idle timer created for sync, will sync after 5 seconds of inactivity. 2010-05-02 21:16:30 Net.Resource DEBUG GET success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/meta/global 2010-05-02 21:16:30 Service.Main DEBUG Weave Version: 1.2.3 Local Storage: 2 Remote Storage: 2 2010-05-02 21:26:50 Net.Resource DEBUG GET success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/info/collections 2010-05-02 21:26:50 Engine.Clients INFO 0 outgoing items pre-reconciliation 2010-05-02 21:26:50 Engine.Clients INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:26:50 Engine.Clients DEBUG Total (ms): sync 6, processIncoming 3, uploadOutgoing 0, syncStartup 3, syncFinish 0 2010-05-02 21:26:50 Engine.Bookmarks INFO 0 outgoing items pre-reconciliation 2010-05-02 21:26:50 Engine.Bookmarks INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:26:50 Engine.Bookmarks DEBUG Total (ms): sync 13, processIncoming 5, uploadOutgoing 0, syncStartup 3, syncFinish 3 2010-05-02 21:26:50 Engine.Forms INFO 1 outgoing items pre-reconciliation 2010-05-02 21:26:50 Engine.Forms INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:26:50 Engine.Forms INFO Uploading all of 1 records 2010-05-02 21:26:50 Collection DEBUG POST Length: 388 2010-05-02 21:27:06 Collection DEBUG POST success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/forms 2010-05-02 21:27:06 Engine.Forms DEBUG Total (ms): sync 15924, processIncoming 3, uploadOutgoing 15918, syncStartup 3, syncFinish 0, createRecord 1 2010-05-02 21:27:06 Engine.History INFO 55 outgoing items pre-reconciliation 2010-05-02 21:27:06 Engine.History INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:27:09 Engine.History INFO Uploading all of 55 records 2010-05-02 21:27:09 Collection DEBUG POST Length: 35337 2010-05-02 21:27:32 Collection DEBUG POST success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/history 2010-05-02 21:27:32 Engine.History DEBUG Total (ms): sync 25588, processIncoming 4, uploadOutgoing 25580, syncStartup 3, syncFinish 0, createRecord 2540 2010-05-02 21:27:32 Engine.Passwords INFO 0 outgoing items pre-reconciliation 2010-05-02 21:27:32 Engine.Passwords INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:27:32 Engine.Passwords DEBUG Total (ms): sync 8, processIncoming 4, uploadOutgoing 0, syncStartup 4, syncFinish 0 2010-05-02 21:27:32 Engine.Prefs INFO 0 outgoing items pre-reconciliation 2010-05-02 21:27:32 Engine.Prefs INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:27:32 Engine.Prefs DEBUG Total (ms): sync 8, processIncoming 3, uploadOutgoing 0, syncStartup 4, syncFinish 0 2010-05-02 21:27:32 Engine.Tabs INFO 1 outgoing items pre-reconciliation 2010-05-02 21:27:32 Engine.Tabs INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:27:32 Engine.Tabs INFO Uploading all of 1 records 2010-05-02 21:27:32 Collection DEBUG POST Length: 393 2010-05-02 21:27:54 Collection DEBUG POST success 200 
https://sj-weave03.services.mozilla.com/1.0/mehper/storage/tabs 2010-05-02 21:27:54 Engine.Tabs DEBUG Total (ms): sync 21943, processIncoming 3, uploadOutgoing 21936, syncStartup 3, syncFinish 0, createRecord 8 2010-05-02 21:27:54 Service.Main INFO Sync completed successfully 2010-05-02 22:27:53 Service.Main DEBUG Idle timer created for sync, will sync after 5 seconds of inactivity. 2010-05-02 22:28:14 Net.Resource DEBUG GET success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/meta/global 2010-05-02 22:28:14 Service.Main DEBUG Weave Version: 1.2.3 Local Storage: 2 Remote Storage: 2 2010-05-02 22:28:16 Net.Resource DEBUG GET fail 503 https://sj-weave03.services.mozilla.com/1.0/mehper/info/collections 2010-05-02 22:28:16 Service.Main DEBUG Exception: aborting sync, failed to get collections No traceback available 2010-05-02 23:28:15 Service.Main DEBUG Idle timer created for sync, will sync after 5 seconds of inactivity. 2010-05-03 00:26:42 Service.Main DEBUG Exception: Could not acquire lock No traceback available 2010-05-03 00:31:03 RecordMgr DEBUG Failed to import record: App. Quitting JS Stack trace: Res__request(...)@resource.js:208 < Res_get()@resource.js:271 < RecordMgr_import("https://sj-weave03.services.mozilla.com/1.0/mehper/storage/meta/global")@wbo.js:119 < WeaveSvc__remoteSetup()@service.js:824 < ()@service.js:1187 < WrappedNotify()@util.js:114 < WrappedLock()@util.js:86 < WrappedCatch()@util.js:65 < sync(false)@service.js:1146 < ([object Object])@service.js:414 < notify([object XPCWrappedNative_NoHelper])@util.js:629 2010-05-03 00:31:03 Service.Main DEBUG Weave Version: 1.2.3 Local Storage: 2 Remote Storage: 2010-05-03 00:31:03 Service.Main WARN Unknown error while downloading metadata record. Aborting sync. 2010-05-03 00:31:03 Service.Main DEBUG Exception: aborting sync, remote setup failed No traceback available 2010-05-03 17:26:25 Service.Main INFO Loading Weave 1.2.3 2010-05-03 17:26:25 Engine.Bookmarks DEBUG Engine initialized 2010-05-03 17:26:25 Engine.Forms DEBUG Engine initialized 2010-05-03 17:26:25 Engine.History DEBUG Engine initialized 2010-05-03 17:26:25 Engine.Passwords DEBUG Engine initialized 2010-05-03 17:26:25 Engine.Prefs DEBUG Engine initialized 2010-05-03 17:26:25 Engine.Tabs DEBUG Engine initialized 2010-05-03 17:26:25 Engine.Tabs DEBUG Resetting tabs last sync time 2010-05-03 17:26:25 Service.Main INFO Mozilla/5.0 (Windows; U; Windows NT 6.1; tr; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729) 2010-05-03 17:26:26 Service.Main DEBUG Caching URLs under storage user base: https://sj-weave03.services.mozilla.com/1.0/mehper/ 2010-05-03 17:26:30 Service.Main DEBUG Autoconnecting in 3 seconds 2010-05-03 17:26:36 Service.Main INFO Logging in user mehper 2010-05-03 17:45:46 Service.Main DEBUG Exception: Could not acquire lock No traceback available 2010-05-03 17:53:18 Service.Main DEBUG Exception: Could not acquire lock No traceback available

    Read the article

  • Windows 7 client can't connect to CentOS PPTP VPN

    - by Chris
    Have a Macintosh (10.8.2) that connects just fine to a CentOS 6.0 virtual private server (OpenVZ, with PPP added by the host) via PPTP. A Windows 7 Home Premium client (virtualized in Sun's Virtual Box), on the same computer, using the same Ethernet connection, cannot connect to the Linux VPN server. I have iptables disabled (for testing) on the Linux box. I have the Windows firewall turned off. /var/log/messages looks like this, for a Windows connection: Oct 12 18:44:30 production pptpd[1880]: CTRL: Client 66.104.246.168 control connection started Oct 12 18:44:30 production pptpd[1880]: CTRL: Starting call (launching pppd, opening GRE) Oct 12 18:44:30 production pppd[1881]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded. Oct 12 18:44:30 production pppd[1881]: pptpd-logwtmp: $Version$ Oct 12 18:44:30 production pppd[1881]: pppd options in effect: Oct 12 18:44:30 production pppd[1881]: debug#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: nologfd#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: dump#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: plugin /usr/lib/pptpd/pptpd-logwtmp.so#011#011# (from command line) Oct 12 18:44:30 production pppd[1881]: require-mschap-v2#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: refuse-pap#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: refuse-chap#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: refuse-mschap#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: name pptpd#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: pptpd-original-ip 66.104.246.168#011#011# (from command line) Oct 12 18:44:30 production pppd[1881]: 115200#011#011# (from command line) Oct 12 18:44:30 production pppd[1881]: lock#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: local#011#011# (from command line) Oct 12 18:44:30 production pppd[1881]: novj#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: novjccomp#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: ipparam 66.104.246.168#011#011# (from command line) Oct 12 18:44:30 production pppd[1881]: proxyarp#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: 192.168.97.1:192.168.97.10#011#011# (from command line) Oct 12 18:44:30 production pppd[1881]: nobsdcomp#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: require-mppe-128#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: mppe-stateful#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: pppd 2.4.5 started by root, uid 0 Oct 12 18:44:30 production pppd[1881]: Using interface ppp0 Oct 12 18:44:30 production pppd[1881]: Connect: ppp0 <--> /dev/pts/1 (At this point the Windows machine displays a dialog, reading: "Verifying user name and password...") Oct 12 18:45:00 production pppd[1881]: LCP: timeout sending Config-Requests Oct 12 18:45:00 production pppd[1881]: Connection terminated. Oct 12 18:45:00 production pppd[1881]: Modem hangup Oct 12 18:45:00 production pppd[1881]: Exit. 
Oct 12 18:45:00 production pptpd[1880]: GRE: read(fd=6,buffer=8059660,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs Oct 12 18:45:00 production pptpd[1880]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7) Oct 12 18:45:00 production pptpd[1880]: CTRL: Client 66.104.246.168 control connection finished The Macintosh connecting looks like this in /var/log/messages: Oct 12 18:50:49 production pptpd[1920]: CTRL: Client 66.104.246.168 control connection started Oct 12 18:50:49 production pptpd[1920]: CTRL: Starting call (launching pppd, opening GRE) Oct 12 18:50:49 production pppd[1921]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded. Oct 12 18:50:49 production pppd[1921]: pptpd-logwtmp: $Version$ Oct 12 18:50:49 production pppd[1921]: pppd options in effect: Oct 12 18:50:49 production pppd[1921]: debug#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: nologfd#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: dump#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: plugin /usr/lib/pptpd/pptpd-logwtmp.so#011#011# (from command line) Oct 12 18:50:49 production pppd[1921]: require-mschap-v2#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: refuse-pap#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: refuse-chap#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: refuse-mschap#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: name pptpd#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: pptpd-original-ip 66.104.246.168#011#011# (from command line) Oct 12 18:50:49 production pppd[1921]: 115200#011#011# (from command line) Oct 12 18:50:49 production pppd[1921]: lock#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: local#011#011# (from command line) Oct 12 18:50:49 production pppd[1921]: novj#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: novjccomp#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: ipparam 66.104.246.168#011#011# (from command line) Oct 12 18:50:49 production pppd[1921]: proxyarp#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: 192.168.97.1:192.168.97.10#011#011# (from command line) Oct 12 18:50:49 production pppd[1921]: nobsdcomp#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: require-mppe-128#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: mppe-stateful#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: pppd 2.4.5 started by root, uid 0 Oct 12 18:50:49 production pppd[1921]: Using interface ppp0 Oct 12 18:50:49 production pppd[1921]: Connect: ppp0 <--> /dev/pts/1 Oct 12 18:50:52 production pppd[1921]: MPPE 128-bit stateless compression enabled Oct 12 18:50:52 production pppd[1921]: Unsupported protocol 'IPv6 Control Protocol' (0x8057) received Oct 12 18:50:52 production pppd[1921]: Unsupported protocol 'Apple Client Server Protocol Control' (0x8235) received Oct 12 18:50:52 production pppd[1921]: Cannot determine ethernet address for proxy ARP Oct 12 18:50:52 production pppd[1921]: local IP address 192.168.97.1 Oct 12 18:50:52 production pppd[1921]: remote IP address 192.168.97.10 Oct 12 18:50:52 production pppd[1921]: pptpd-logwtmp.so ip-up ppp0 chris 66.104.246.168 
I'm baffled...

    Read the article

  • Centos 6.3 vsftp unable to upload file to apache webserver

    - by user148648
I am new to CentOS; I have worked with Sun Solaris and uploaded files to an Apache web server before. I created an end-user account and managed to FTP to the server from the command prompt, but I get the error message '226 Transfer Done (but failed to open directory)'. The content of my vsftpd.conf is below:

# Example config file /etc/vsftpd/vsftpd.conf
#
# The default compiled in settings are fairly paranoid. This sample file
# loosens things up a bit, to make the ftp daemon more usable.
# Please see vsftpd.conf.5 for all compiled in defaults.
#
# READ THIS: This example file is NOT an exhaustive list of vsftpd options.
# Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's
# capabilities.
#
# Allow anonymous FTP? (Beware - allowed by default if you comment this out).
anonymous_enable=YES
# ** may need to comment it back
#
# Uncomment this to allow local users to log in.
local_enable=YES
#
# Uncomment this to enable any form of FTP write command.
write_enable=YES
#
# Default umask for local users is 077. You may wish to change this to 022,
# if your users expect that (022 is used by most other ftpd's)
#local_umask=022
local_umask=077
#
# Uncomment this to allow the anonymous FTP user to upload files. This only
# has an effect if the above global write enable is activated. Also, you will
# obviously need to create a directory writable by the FTP user.
anon_upload_enable=YES
# *** maybe to comment it back!!!
#
# Uncomment this if you want the anonymous FTP user to be able to create
# new directories.
anon_mkdir_write_enable=YES
# ** may need to comment it back!!!
#
# Activate directory messages - messages given to remote users when they
# go into a certain directory.
dirmessage_enable=YES
#
# The target log file can be vsftpd_log_file or xferlog_file.
# This depends on setting xferlog_std_format parameter
xferlog_enable=YES
#
# Make sure PORT transfer connections originate from port 20 (ftp-data).
connect_from_port_20=YES
#
# If you want, you can arrange for uploaded anonymous files to be owned by
# a different user. Note! Using "root" for uploaded files is not
# recommended!
#chown_uploads=YES
#chown_username=whoever
#
# The name of log file when xferlog_enable=YES and xferlog_std_format=YES
# WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log
xferlog_file=/var/log/xferlog
#
# Switches between logging into vsftpd_log_file and xferlog_file files.
# NO writes to vsftpd_log_file, YES to xferlog_file
xferlog_std_format=YES
#
# You may change the default value for timing out an idle session.
#idle_session_timeout=600
#
# You may change the default value for timing out a data connection.
#data_connection_timeout=120
#
# It is recommended that you define on your system a unique user which the
# ftp server can use as a totally isolated and unprivileged user.
#nopriv_user=ftpsecure
#
# Enable this and the server will recognise asynchronous ABOR requests. Not
# recommended for security (the code is non-trivial). Not enabling it,
# however, may confuse older FTP clients.
#async_abor_enable=YES
#
# By default the server will pretend to allow ASCII mode but in fact ignore
# the request. Turn on the below options to have the server actually do ASCII
# mangling on files when in ASCII mode.
# Beware that on some FTP servers, ASCII support allows a denial of service
# attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd
# predicted this attack and has always been safe, reporting the size of the
# raw file.
# ASCII mangling is a horrible feature of the protocol.
ascii_upload_enable=YES
ascii_download_enable=YES
#
# You may fully customise the login banner string:
ftpd_banner=Warning, only for authorize login.
#
# You may specify a file of disallowed anonymous e-mail addresses. Apparently
# useful for combatting certain DoS attacks.
#deny_email_enable=YES
# (default follows)
#banned_email_file=/etc/vsftpd/banned_emails
#
# You may specify an explicit list of local users to chroot() to their home
# directory. If chroot_local_user is YES, then this list becomes a list of
# users to NOT chroot().
chroot_local_user=YES
chroot_list_enable=YES
# (default follows)
#chroot_list_file=/etc/vsftpd/chroot_list
local_root=/var/www
#
# You may activate the "-R" option to the builtin ls. This is disabled by
# default to avoid remote users being able to cause excessive I/O on large
# sites. However, some broken FTP clients such as "ncftp" and "mirror" assume
# the presence of the "-R" option, so there is a strong case for enabling it.
ls_recurse_enable=YES
#
# When "listen" directive is enabled, vsftpd runs in standalone mode and
# listens on IPv4 sockets. This directive cannot be used in conjunction
# with the listen_ipv6 directive.
listen=YES
#
# This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6
# sockets, you must run two copies of vsftpd with two configuration files.
# Make sure, that one of the listen options is commented !!
#listen_ipv6=YES
pam_service_name=vsftpd
userlist_enable=YES
tcp_wrappers=YES

    Read the article

  • Skipping scheduled self-tests and predicting drive EOL

    - by Steve Madsen
    For a few weeks now, smartd has been reporting that it is skipping some of its scheduled self-tests on the weekends: Apr 24 18:29:32 calvin smartd[4758]: Device: /dev/sda, skip scheduled Offline Immediate Test; 40% remaining of current Self-Test. Apr 24 18:29:33 calvin smartd[4758]: Device: /dev/sdb, skip scheduled Offline Immediate Test; 50% remaining of current Self-Test. The drives in this RAID-1 array are set to run an offline test four times a day, a short self-test at 2am every day, and a long self-test on Saturdays at 2am. For some reason, it looks like the long self-test is taking longer, causing the other scheduled tests to be skipped. First question: is this a sign of likely drive failure? Then today, smartd reported that a self-test failed. Here is the output of smartctl -a /dev/sdb: smartctl version 5.38 [i686-pc-linux-gnu] Copyright (C) 2002-8 Bruce Allen Home page is http://smartmontools.sourceforge.net/ === START OF INFORMATION SECTION === Model Family: Seagate Barracuda 7200.8 family Device Model: ST3250823AS Serial Number: 3ND1GNBC Firmware Version: 3.03 User Capacity: 250,059,350,016 bytes Device is: In smartctl database [for details use: -P show] ATA Version is: 7 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Sun Apr 25 13:15:34 2010 EDT SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: ( 430) seconds. Offline data collection capabilities: (0x5b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 1) minutes. Extended self-test routine recommended polling time: ( 84) minutes. 
SMART Attributes Data Structure revision number: 10 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 047 039 006 Pre-fail Always - 168450357 3 Spin_Up_Time 0x0003 098 098 000 Pre-fail Always - 0 4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 33 5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 9 7 Seek_Error_Rate 0x000f 087 060 030 Pre-fail Always - 654745480 9 Power_On_Hours 0x0032 055 055 000 Old_age Always - 40141 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 51 194 Temperature_Celsius 0x0022 037 062 000 Old_age Always - 37 (0 17 0 0) 195 Hardware_ECC_Recovered 0x001a 047 039 000 Old_age Always - 168450357 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0 200 Multi_Zone_Error_Rate 0x0000 100 253 000 Old_age Offline - 0 202 TA_Increase_Count 0x0032 100 253 000 Old_age Always - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Completed without error 00% 40131 - # 2 Extended offline Completed: read failure 30% 40129 379795511 # 3 Short offline Completed without error 00% 40084 - # 4 Short offline Completed without error 00% 40060 - # 5 Short offline Completed without error 00% 40036 - # 6 Short offline Completed without error 00% 40013 - # 7 Short offline Completed without error 00% 39990 - # 8 Extended offline Completed without error 00% 39977 - # 9 Short offline Completed without error 00% 39919 - #10 Short offline Completed without error 00% 39895 - #11 Short offline Completed without error 00% 39872 - #12 Short offline Completed without error 00% 39848 - #13 Short offline Completed without error 00% 39824 - #14 Short offline Completed without error 00% 39801 - #15 Extended offline Completed without error 00% 39789 - #16 Short offline Completed without error 00% 39754 - #17 Short offline Completed without error 00% 39732 - #18 Short offline Completed without error 00% 39707 - #19 Short offline Completed without error 00% 39683 - #20 Short offline Completed without error 00% 39660 - #21 Short offline Completed without error 00% 39636 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay. Given that this drive is about 4.5 years old, I am probably tempting fate by keeping it in service. SMART doesn't seem to get much respect as a reliable way to predict drive failure. What else can I use to get an early indication of drive failure?

    Read the article

  • ssh login successful, but scp password gives me "Permission denied"

    - by YANewb
    I'm trying to get some blogging software up on an organizational remote server. I tried to set up a SSH Key but was having problems and decided that getting the blog up and running was more important than dealing with the SSH Key issue, so I ssh-keygen -R remoteserver.com. Now I can successfully login with ssh -v [email protected] and the correct password. Once logged in I can move around and read any file and directory that I should be able to read. But when I try to edit an existing -rw-r--r-- file with VIM, it shows up as read-only, if I try to edit permissions I get chmod: file.ext: Operation not permitted, and if I try to scp a new file from my local machine I'm prompted for the remote user's password, and then get scp: /home/path/to/file.ext: Permission denied. Since I didn't have any of these problems before I tried to set up the ssh key, I suspect these anomalies are a side effect of that, but I don't know how to troubleshoot this. So what does a foolish server-newb, such as myself, need to do to get edit capability back as a remote user? Addendum 1: My userids are different between my local machine and the remote server. For ssh I ssh -v [email protected]. if I whoami I get remoteuser For scp I scp file.ext [email protected]:/path/to/file.ext from the local directory with file.ext while logged in as the local user. if I whoami I get localuser The ls -l for two different files I've tried scp: -rw-r--r--@ 1 localuser localgroup 20 Feb 11 21:03 phpinfo.php -rw-r--r-- 1 root localgroup 4 Feb 11 22:32 test.txt The ls -l for the file I've tried to VIM: -rw-r--r-- 1 remoteuser remotegroup 76 Jul 27 2009 info.txt Addendum 2: In the past I've set up ssh-keys for git repositories. I don't want to completely destroy them, so in an attempt to follow a deer's train of thinking I renamed my ~/.ssh/ to ~/.ssh-bak/, then tested the different types of access. The abridged version of the terminal commands and results is below; I think everything is working until the 8th line from the end. localcomputer:~ localuser$ ssh -v [email protected] OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009 debug1: Reading configuration data /etc/ssh_config debug1: Connecting to remoteserver.com [###.###.###.###] port 22. debug1: Connection established. debug1: identity file /Users/localuser/.ssh/identity type -1 debug1: identity file /Users/localuser/.ssh/id_rsa type -1 debug1: identity file /Users/localuser/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p2 FreeBSD-20110503 debug1: match: OpenSSH_5.8p2 FreeBSD-20110503 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.2 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY The authenticity of host 'remoteserver.com (###.###.###.###)' can't be established. RSA key fingerprint is ##:##:##:##:##:##:##:##:##:##:##:##:##:##:##:##. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'remoteserver.com,###.###.###.###' (RSA) to the list of known hosts. 
debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Trying private key: /Users/localuser/.ssh/identity debug1: Trying private key: /Users/localuser/.ssh/id_rsa debug1: Trying private key: /Users/localuser/.ssh/id_dsa debug1: Next authentication method: password [email protected]'s password: debug1: Authentication succeeded (password). debug1: channel 0: new [client-session] debug1: Requesting [email protected] debug1: Entering interactive session. Last login: Sun Feb 12 18:00:54 2012 from 68.69.164.123 FreeBSD 6.4-RELEASE-p8 (VKERN) #1 r101746: Mon Aug 30 10:34:40 MDT 2010 [remoteuser@remoteserver /home]$ ls -l total ### -rw-r--r-- 1 remoteuser remotegroup 76 Aug 12 2009 info.txt [remoteuser@remoteserver /home]$ vim info.txt ~ {at the bottom of the VIM screen it tells me it's [read only]} [remoteuser@remoteserver /home]$ whoami remoteuser [remoteuser@remoteserver /home]$ logout debug1: client_input_channel_req: channel 0 rtype exit-status reply 0 debug1: client_input_channel_req: channel 0 rtype [email protected] reply 0 debug1: channel 0: free: client-session, nchannels 1 Connection to remoteserver.com closed. Transferred: sent 3872, received 12496 bytes, in 107.4 seconds Bytes per second: sent 36.1, received 116.4 debug1: Exit status 0 localcomputer:localdirectory name$ scp -v phpinfo.php [email protected]:/home/www/remotedirectory/phpinfo.php Executing: program /usr/bin/ssh host remoteserver.com, user remoteuser, command scp -v -t /home/www/remotedirectory/phpinfo.php OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009 debug1: Reading configuration data /etc/ssh_config debug1: Connecting to remoteserver.com [###.###.###.###] port 22. debug1: Connection established. debug1: identity file /Users/localuser/.ssh/identity type -1 debug1: identity file /Users/localuser/.ssh/id_rsa type -1 debug1: identity file /Users/localuser/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p2 FreeBSD-20110503 debug1: match: OpenSSH_5.8p2 FreeBSD-20110503 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.2 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host 'remoteserver.com' is known and matches the RSA host key. debug1: Found key in /Users/localuser/.ssh/known_hosts:1 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Trying private key: /Users/localuser/.ssh/identity debug1: Trying private key: /Users/localuser/.ssh/id_rsa debug1: Trying private key: /Users/localuser/.ssh/id_dsa debug1: Next authentication method: password [email protected]'s password: debug1: Authentication succeeded (password). 
debug1: channel 0: new [client-session] debug1: Requesting [email protected] debug1: Entering interactive session. debug1: Sending command: scp -v -t /home/www/remotedirectory/phpinfo.php Sending file modes: C0644 20 phpinfo.php Sink: C0644 20 phpinfo.php scp: /home/www/remotedirectory/phpinfo.php: Permission denied debug1: client_input_channel_req: channel 0 rtype exit-status reply 0 debug1: channel 0: free: client-session, nchannels 1 debug1: fd 0 clearing O_NONBLOCK debug1: fd 1 clearing O_NONBLOCK Transferred: sent 1456, received 2160 bytes, in 0.6 seconds Bytes per second: sent 2322.3, received 3445.1 debug1: Exit status 1

    Read the article

  • Cross-platform, human-readable, du on root partition that truly ignores other filesystems

    - by nice_line
    I hate this so much: Linux builtsowell 2.6.18-274.7.1.el5 #1 SMP Mon Oct 17 11:57:14 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux df -kh Filesystem Size Used Avail Use% Mounted on /dev/mapper/mpath0p2 8.8G 8.7G 90M 99% / /dev/mapper/mpath0p6 2.0G 37M 1.9G 2% /tmp /dev/mapper/mpath0p3 5.9G 670M 4.9G 12% /var /dev/mapper/mpath0p1 494M 86M 384M 19% /boot /dev/mapper/mpath0p7 7.3G 187M 6.7G 3% /home tmpfs 48G 6.2G 42G 14% /dev/shm /dev/mapper/o10g.bin 25G 7.4G 17G 32% /app/SIP/logs /dev/mapper/o11g.bin 25G 11G 14G 43% /o11g tmpfs 4.0K 0 4.0K 0% /dev/vx lunmonster1q:/vol/oradb_backup/epmxs1q1 686G 507G 180G 74% /rpmqa/backup lunmonster1q:/vol/oradb_redo/bisxs1q1 4.0G 1.6G 2.5G 38% /bisxs1q/rdoctl1 lunmonster1q:/vol/oradb_backup/bisxs1q1 686G 507G 180G 74% /bisxs1q/backup lunmonster1q:/vol/oradb_exp/bisxs1q1 2.0T 1.1T 984G 52% /bisxs1q/exp lunmonster2q:/vol/oradb_home/bisxs1q1 10G 174M 9.9G 2% /bisxs1q/home lunmonster2q:/vol/oradb_data/bisxs1q1 52G 5.2G 47G 10% /bisxs1q/oradata lunmonster1q:/vol/oradb_redo/bisxs1q2 4.0G 1.6G 2.5G 38% /bisxs1q/rdoctl2 ip-address1:/vol/oradb_home/cspxs1q1 10G 184M 9.9G 2% /cspxs1q/home ip-address2:/vol/oradb_backup/cspxs1q1 674G 314G 360G 47% /cspxs1q/backup ip-address2:/vol/oradb_redo/cspxs1q1 4.0G 1.5G 2.6G 37% /cspxs1q/rdoctl1 ip-address2:/vol/oradb_exp/cspxs1q1 4.1T 1.5T 2.6T 37% /cspxs1q/exp ip-address2:/vol/oradb_redo/cspxs1q2 4.0G 1.5G 2.6G 37% /cspxs1q/rdoctl2 ip-address1:/vol/oradb_data/cspxs1q1 160G 23G 138G 15% /cspxs1q/oradata lunmonster1q:/vol/oradb_exp/epmxs1q1 2.0T 1.1T 984G 52% /epmxs1q/exp lunmonster2q:/vol/oradb_home/epmxs1q1 10G 80M 10G 1% /epmxs1q/home lunmonster2q:/vol/oradb_data/epmxs1q1 330G 249G 82G 76% /epmxs1q/oradata lunmonster1q:/vol/oradb_redo/epmxs1q2 5.0G 609M 4.5G 12% /epmxs1q/rdoctl2 lunmonster1q:/vol/oradb_redo/epmxs1q1 5.0G 609M 4.5G 12% /epmxs1q/rdoctl1 /dev/vx/dsk/slaxs1q/slaxs1q-vol1 183G 17G 157G 10% /slaxs1q/backup /dev/vx/dsk/slaxs1q/slaxs1q-vol4 173G 58G 106G 36% /slaxs1q/oradata /dev/vx/dsk/slaxs1q/slaxs1q-vol5 75G 952M 71G 2% /slaxs1q/exp /dev/vx/dsk/slaxs1q/slaxs1q-vol2 9.8G 381M 8.9G 5% /slaxs1q/home /dev/vx/dsk/slaxs1q/slaxs1q-vol6 4.0G 1.6G 2.2G 42% /slaxs1q/rdoctl1 /dev/vx/dsk/slaxs1q/slaxs1q-vol3 4.0G 1.6G 2.2G 42% /slaxs1q/rdoctl2 /dev/mapper/appoem 30G 1.3G 27G 5% /app/em Yet, I equally, if not quite a bit more, also hate this: SunOS solarious 5.10 Generic_147440-19 sun4u sparc SUNW,SPARC-Enterprise Filesystem size used avail capacity Mounted on kiddie001Q_rpool/ROOT/s10s_u8wos_08a 8G 7.7G 1.3G 96% / /devices 0K 0K 0K 0% /devices ctfs 0K 0K 0K 0% /system/contract proc 0K 0K 0K 0% /proc mnttab 0K 0K 0K 0% /etc/mnttab swap 15G 1.8M 15G 1% /etc/svc/volatile objfs 0K 0K 0K 0% /system/object sharefs 0K 0K 0K 0% /etc/dfs/sharetab fd 0K 0K 0K 0% /dev/fd kiddie001Q_rpool/ROOT/s10s_u8wos_08a/var 31G 8.3G 6.6G 56% /var swap 512M 4.6M 507M 1% /tmp swap 15G 88K 15G 1% /var/run swap 15G 0K 15G 0% /dev/vx/dmp swap 15G 0K 15G 0% /dev/vx/rdmp /dev/dsk/c3t4d4s0 3 20G 279G 41G 88% /fs_storage /dev/vx/dsk/oracle/ora10g-vol1 292G 214G 73G 75% /o10g /dev/vx/dsk/oec/oec-vol1 64G 33G 31G 52% /oec/runway /dev/vx/dsk/oracle/ora9i-vol1 64G 33G 31G 59% /o9i /dev/vx/dsk/home 23G 18G 4.7G 80% /export/home /dev/vx/dsk/dbwork/dbwork-vol1 292G 214G 73G 92% /db03/wk01 /dev/vx/dsk/oradg/ebusredovol 2.0G 475M 1.5G 24% /u21 /dev/vx/dsk/oradg/ebusbckupvol 200G 32G 166G 17% /u31 /dev/vx/dsk/oradg/ebuscrtlvol 2.0G 475M 1.5G 24% /u20 kiddie001Q_rpool 31G 97K 6.6G 1% /kiddie001Q_rpool monsterfiler002q:/vol/ebiz_patches_nfs/NSA0304 203G 173G 29G 86% 
/oracle/patches /dev/odm 0K 0K 0K 0% /dev/odm

The people with the authority don't rotate logs or delete packages after install in my environment. Standards, remediation, cohesion...all fancy foreign words to me.

==============

How am I supposed to deal with / filesystem full issues across multiple platforms that have a devastating number of mounts? On Red Hat el5, du -x apparently avoids traversal into other filesystems. While this may be so, it does not appear to do anything if run from the / directory. On Solaris 10, the equivalent flag is du -d, which apparently packs no surprises, allowing Sun to uphold its legacy of inconvenience effortlessly. (I'm hoping I've just been doing it wrong.)

I offer up for sacrifice my Frankenstein's monster. Tell me how ugly it is. Tell me I should download forbidden 3rd party software. Tell me I should perform unauthorized coreutils updates, piecemeal, across 2000 systems, with no single sign-on, no authorized keys, and no network update capability. Then, please help me make this bastard better:

pwd
/
du * | egrep -v "$(echo $(df | awk '{print $1 "\n" $5 "\n" $6}' | \
cut -d\/ -f2-5 | egrep -v "[0-9]|^$|Filesystem|Use|Available|Mounted|blocks|vol|swap")| \
sed 's/ /\|/g')" | egrep -v "proc|sys|media|selinux|dev|platform|system|tmp|tmpfs|mnt|kernel" | \
cut -d\/ -f1-2 | sort -k2 -k1,1nr | uniq -f1 | sort -k1,1n | cut -f2 | xargs du -shx | \
egrep "G|[5-9][0-9]M|[1-9][0-9][0-9]M"

My biggest failure and regret is that it still requires a single character edit for Solaris:

pwd
/
du * | egrep -v "$(echo $(df | awk '{print $1 "\n" $5 "\n" $6}' | \
cut -d\/ -f2-5 | egrep -v "[0-9]|^$|Filesystem|Use|Available|Mounted|blocks|vol|swap")| \
sed 's/ /\|/g')" | egrep -v "proc|sys|media|selinux|dev|platform|system|tmp|tmpfs|mnt|kernel" | \
cut -d\/ -f1-2 | sort -k2 -k1,1nr | uniq -f1 | sort -k1,1n | cut -f2 | xargs du -shd | \
egrep "G|[5-9][0-9]M|[1-9][0-9][0-9]M"

This will exclude all non / filesystems in a du search from the / directory by basically munging an egrepped df from a second pipe-delimited egrep regex subshell exclusion that is naturally further excluded upon by a third egrep in what I would like to refer to as "the whale." The munge-fest frantically escalates into some xargs du recycling where -x/-d is actually useful, and a final, gratuitous egrep spits out a list of directories that almost feels like an accomplishment:

Linux:
54M etc/gconf
61M opt/quest
77M opt
118M usr/ ##===\
149M etc
154M root
303M lib/modules
313M usr/java ##====\
331M lib
357M usr/lib64 ##=====\
433M usr/lib ##========\
1.1G usr/share ##=======\
3.2G usr/local ##========\
5.4G usr ##<=============Ascending order to parent
94M app/SIP ##<==\
94M app ##<=======Were reported as 7gb and then corrected by second du with -x.

Solaris:
63M etc
490M bb
570M root/cores.ric.20100415
1.7G oec/archive
1.1G root/packages
2.2G root
1.7G oec

Guess what? It's really slow. Edit: Are there any bash one-liner heroes out there that can turn my bloated abomination into divine intervention, or at least something resembling gingerly copypasta?

    Read the article

  • Mac won't boot into safe mode

    - by Stephen
    Mac boots fine normally, except when in safe mode. Holding down shift when booting gets me to the progress bar on the grey screen. Progress bar gets about half way before mac reboots. I modified nvram boot-args to get a better look: sudo nvram boot-args="-x -v" It definitely gets through fsck, skips loading kernel extensions (since it's in safe mode), does something with the network interfaces, then this is the last thing it wips through... Aug 22 11:56:21 Crockpot com.apple.SecurityServer[15]: Succeeded authorizing right 'com.apple.ServiceManagement.daemons.modify' by client '/usr/libexec/UserEventAgent' [10] for authorization created by '/usr/libexec/UserEventAgent' [10] (100012,0) Aug 22 11:56:22 Crockpot fseventsd[37]: event logs in /.fseventsd out of sync with volume. destroying old logs. (1 174 330) Aug 22 11:56:22 Crockpot fseventsd[37]: log dir: /.fseventsd getting new uuid: 5C379650-26FA-428F-B81F-4FE4349D50B3 Aug 22 11:56:23 Crockpot mDNSResponder[39]: mDNSResponder mDNSResponder-379.27 (Jun 20 2012 15:40:55) starting OSXVers 12 Aug 22 11:56:23 Crockpot systemkeychain[35]: done file: /var/run/systemkeychaincheck.done Aug 22 11:56:23 Crockpot configd[17]: network changed: DNS* Aug 22 11:56:24 --- last message repeated 1 time --- Aug 22 11:56:24 Crockpot mDNSResponder[39]: D2D_IPC: Loaded Aug 22 11:56:24 Crockpot mDNSResponder[39]: D2DInitialize succeeded Aug 22 11:56:24 Crockpot mDNSResponder[39]: Adding registration domain 273025955.members.btmm.icloud.com. Aug 22 11:56:24 Crockpot kernel[0]: MacAuthEvent en1 Auth result for: 00:23:69:35:dc:fe MAC AUTH succeeded Aug 22 11:56:24 Crockpot kernel[0]: MacAuthEvent en1 Auth result for: 00:23:69:35:dc:fe Unsolicited Auth Aug 22 11:56:24 Crockpot kernel[0]: wlEvent: en1 en1 Link UP virtIf = 0 Aug 22 11:56:24 Crockpot kernel[0]: AirPort: Link Up on en1 Aug 22 11:56:24 Crockpot kernel[0]: en1: BSSID changed to 00:23:69:35:dc:fe Aug 22 11:56:24 Crockpot kernel[0]: en1::IO80211Interface::postMessage bssid changed Aug 22 11:56:24 Crockpot kernel[0]: AirPort: RSN handshake complete on en1 Aug 22 11:56:25 Crockpot cfprefsd[19]: CFPreferences failed to read preferences data. Errno was 21 Aug 22 11:56:25 --- last message repeated 1 time --- Aug 22 11:56:25 Crockpot airportd[30]: _doAutoJoin: Already associated to “burnum”. Bailing on auto-join. Aug 22 11:56:25 Crockpot com.apple.kextd[11]: Can't load IOBluetoothSerialManager.kext - ineligible during safe boot. Aug 22 11:56:25 Crockpot com.apple.kextd[11]: Load com.apple.iokit.IOBluetoothSerialManager failed; removing personalities from kernel. Aug 22 11:56:25 Crockpot cfprefsd[19]: CFPreferences: error renaming file blued.plist.HXuEmQn to blued.plist. Aug 22 11:56:27 Crockpot awacsd[52]: Starting awacsd connectivity-77 (Jun 20 2012 15:40:49) Aug 22 11:56:27 Crockpot com.apple.SecurityServer[15]: Succeeded authorizing right 'system.services.systemconfiguration.network' by client '/System/Library/Frameworks/SystemConfiguration.framework/Versions/A/Resources/SCHelper' [54] for authorization created by '/usr/sbin/awacsd' [52] (100003,0) Aug 22 11:56:27 --- last message repeated 1 time --- Aug 22 11:56:27 Crockpot awacsd[52]: Configuring lazy AWACS client: 273025955.p04.members.btmm.icloud.com. 
Aug 22 11:56:28 Crockpot apsd[55]: CGSLookupServerRootPort: Failed to look up the port for "com.apple.windowserver.active" (1102) Aug 22 11:56:32 --- last message repeated 1 time --- Aug 22 11:56:32 Crockpot awacsd[52]: KV HTTP 0 Aug 22 11:56:38 --- last message repeated 1 time --- Aug 22 11:56:38 Crockpot apsd[55]: CGSLookupServerRootPort: Failed to look up the port for "com.apple.windowserver.active" (1102) Aug 22 11:56:47 Crockpot awacsd[52]: KV HTTP 0 Aug 22 11:56:49 Crockpot configd[17]: subnet_route: write routing socket failed, Network is unreachable Aug 22 11:56:51 Crockpot configd[17]: network changed: v4(en1+:169.254.80.161) DNS* Proxy+ SMB Aug 22 11:56:51 Crockpot UserEventAgent[10]: Captive: en1: Not probing 'burnum' (protected network) Aug 22 11:56:51 Crockpot configd[17]: network changed: v4(en1:169.254.80.161) DNS Proxy SMB Aug 22 11:57:07 Crockpot awacsd[52]: KV HTTP 0 Aug 22 11:57:23 Crockpot fseventsd[37]: Logging disabled completely for device:1: /Volumes/Recovery HD Aug 22 11:57:25 Crockpot kernel[0]: Kext loading now disabled. Aug 22 11:57:25 Crockpot kernel[0]: Kext unloading now disabled. Aug 22 11:57:25 Crockpot mDNSResponder[39]: mDNSResponder mDNSResponder-379.27 (Jun 20 2012 15:40:55) stopping Aug 22 11:57:25 Crockpot com.apple.SecurityServer[15]: Killing auth hosts Aug 22 11:57:25 Crockpot UserEventAgent[10]: dnssd_clientstub DNSServiceProcessResult called with DNSServiceRef with no ProcessReply function Aug 22 11:57:25 Crockpot configd[17]: dnssd_clientstub read_all(26) failed 0/28 0 Aug 22 11:57:25 Crockpot configd[17]: [0x7fb025119ff0] SCNetworkReachability _llq_callback w/error=-65563 Aug 22 11:57:25 Crockpot UserEventAgent[10]: dnssd_clientstub DNSServiceProcessResult called with DNSServiceRef with no ProcessReply function Aug 22 11:57:25 Crockpot mDNSResponder[39]: D2D_IPC: Terminated Aug 22 11:57:25 Crockpot mDNSResponder[39]: D2DTerminate succeeded Aug 22 11:57:25 Crockpot awacsd[52]: dnssd_clientstub read_all(4) failed 0/28 0 Aug 22 11:57:25 Crockpot UserEventAgent[10]: dnssd_clientstub DNSServiceProcessResult called with DNSServiceRef with no ProcessReply function Aug 22 11:57:25 --- last message repeated 2 times --- Aug 22 11:57:25 Crockpot apsd[55]: dnssd_clientstub read_all(4) failed 0/28 0 Aug 22 11:57:25 Crockpot configd[17]: SCNC: stop, triggered by configd, type PPPSerial, reason Terminated All Aug 22 11:57:25 Crockpot configd[17]: _d2dCallback: D2D connection to mDNSResponder lost Aug 22 11:57:25 Crockpot UserEventAgent[10]: dnssd_clientstub DNSServiceProcessResult called with DNSServiceRef with no ProcessReply function Aug 22 11:57:25 --- last message repeated 4 times --- Aug 22 11:57:25 Crockpot kernel[0]: Kext autounloading now disabled. Aug 22 11:57:25 Crockpot kernel[0]: Kernel requests now disabled. ... before rebooting in the middle of the safe mode startup sequence. Aug 22 12:01:10 localhost bootlog[0]: BOOT_TIME 1345662070 0 Aug 22 12:01:32 localhost kernel[0]: PMAP: PCID enabled Aug 22 12:01:32 localhost kernel[0]: Darwin Kernel Version 12.0.0: Sun Jun 24 23:00:16 PDT 2012; root:xnu-2050.7.9~1/RELEASE_X86_64 Any ideas what's causing the safe mode boot to fail? System Info MacBook Pro 8,2 2.2 Ghz Core i7 4 GM Ram Mountain Lion 10.8 500GB TOSHIBA MK5065GSXF Serial-ATA rotational disk

    Read the article
