Search Results

Search found 10033 results on 402 pages for 'fuzz testing'.


  • Create Auto Customization Criteria OAF Search Page

    - by PRajkumar
    1. Create a New Workspace and Project Right click Workspaces and click create new OAworkspace and name it as PRajkumarCustSearch. Automatically a new OA Project will also be created. Name the project as CustSearchDemo and package as prajkumar.oracle.apps.fnd.custsearchdemo   2. Create a New Application Module (AM) Right Click on CustSearchDemo > New > ADF Business Components > Application Module Name -- CustSearchAM Package -- prajkumar.oracle.apps.fnd.custsearchdemo.server   3. Enable Passivation for the Root UI Application Module (AM) Right Click on CustSearchAM > Edit SearchAM > Custom Properties > Name – RETENTION_LEVEL Value – MANAGE_STATE Click add > Apply > OK   4. Create Test Table and insert data some data in it (For Testing Purpose)   CREATE TABLE xx_custsearch_demo (   -- ---------------------     -- Data Columns     -- ---------------------     column1                  VARCHAR2(100),     column2                  VARCHAR2(100),     column3                  VARCHAR2(100),     column4                  VARCHAR2(100),     -- ---------------------     -- Who Columns     -- ---------------------     last_update_date    DATE         NOT NULL,     last_updated_by     NUMBER   NOT NULL,     creation_date          DATE         NOT NULL,     created_by               NUMBER   NOT NULL,     last_update_login   NUMBER  );   INSERT INTO xx_custsearch_demo VALUES('v1','v2','v3','v4',SYSDATE,0,SYSDATE,0,0); INSERT INTO xx_custsearch_demo VALUES('v1','v3','v4','v5',SYSDATE,0,SYSDATE,0,0); INSERT INTO xx_custsearch_demo VALUES('v2','v3','v4','v5',SYSDATE,0,SYSDATE,0,0); INSERT INTO xx_custsearch_demo VALUES('v3','v4','v5','v6',SYSDATE,0,SYSDATE,0,0); Now we have 4 records in our custom table   5. Create a New Entity Object (EO) Right click on SearchDemo > New > ADF Business Components > Entity Object Name – CustSearchEO Package -- prajkumar.oracle.apps.fnd.custsearchdemo.schema.server Database Objects -- XX_CUSTSEARCH_DEMO   Note – By default ROWID will be the primary key if we will not make any column to be primary key   Check the Accessors, Create Method, Validation Method and Remove Method   6. Create a New View Object (VO) Right click on CustSearchDemo > New > ADF Business Components > View Object Name -- CustSearchVO Package -- prajkumar.oracle.apps.fnd.custsearchdemo.server   In Step2 in Entity Page select CustSearchEO and shuttle them to selected list   In Step3 in Attributes Window select columns Column1, Column2, Column3, Column4, and shuttle them to selected list   In Java page deselect Generate Java file for View Object Class: CustSearchVOImpl and Select Generate Java File for View Row Class: CustSearchVORowImpl   7. Add Your View Object to Root UI Application Module Select Right click on CustSearchAM > Application Modules > Data Model Select CustSearchVO and shuttle to Data Model list   8. Create a New Page Right click on CustSearchDemo > New > Web Tier > OA Components > Page Name -- CustSearchPG Package -- prajkumar.oracle.apps.fnd.custsearchdemo.webui   9. Select the CustSearchPG and go to the strcuture pane where a default region has been created   10. Select region1 and set the following properties: ID -- PageLayoutRN Region Style -- PageLayout AM Definition -- prajkumar.oracle.apps.fnd.custsearchdemo.server.CustSearchAM Window Title – AutoCustomize Search Page Window Title – AutoCustomization Search Page Auto Footer -- True   11. 
Add a Query Bean to Your Page Right click on PageLayoutRN > New > Region Select new region region1 and set following properties ID – QueryRN Region Style – query Construction Mode – autoCustomizationCriteria Include Simple Panel – False Include Views Panel – False Include Advanced Panel – False   12. Create a New Region of style table Right Click on QueryRN > New > Region Using Wizard Application Module – prajkumar.oracle.apps.fnd.custsearchdemo.server.CustSearchAM Available View Usages – CustSearchVO1   In Step2 in Region Properties set following properties Region ID – CustSearchTable Region Style – Table   In Step3 in View Attributes shuttle all the items (Column1, Column2, Column3, Column4) available in “Available View Attributes” to Selected View Attributes: In Step4 in Region Items page set style to “messageStyledText” for all items   13. Select CustSearchTable in Structure Panel and set property Width to 100%   14. Include Simple Search Panel Right Click on QueryRN > New > simpleSearchPanel Automatically region2 (header Region) and region1 (MessageComponentLayout Region) created Set Following Properties for region2 Id – SimpleSearchHeader Text -- Simple Search   15. Now right click on message Component Layout Region (SimpleSearchMappings) and create two message text input beans and set the below properties to each   Message TextInputBean1 Id – SearchColumn1 Search Allowed – True Data Type – VARCHAR2 Maximum Length – CSS Class – OraFieldText Prompt – Column1   Message TextInputBean2 Id – SearchColumn2 Search Allowed -- True Data Type – VARCHAR2 Maximum Length – 100 CSS Class – OraFieldText Prompt – Column2   16. Now Right Click on query Components and create simple Search Mappings. Then automatically SimpleSearchMappings and QueryCriteriaMap1 created   17.  Now select the QueryCriteriaMap1 and set the below properties Id – SearchColumn1Map Search Item – SearchColumn1 Result Item – Column1   18. Now again right click on simpleSearchMappings -> New -> queryCriteriaMap, and then set the below properties Id – SearchColumn2Map Search Item – SearchColumn2 Result Item – Column2   19. Congratulation you have successfully finished Auto Customization Search page. Run Your CustSearchPG page and Test Your Work            
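A couple of details worth checking at the end: the test rows inserted in step 4 need a COMMIT before any other database session (such as the page's application module) can see them, and the view object built in step 6 should end up issuing a query roughly like the sketch below. This is a hedged illustration only; the exact statement and aliasing that JDeveloper generates may differ.

    COMMIT;

    SELECT CustSearchEO.column1,
           CustSearchEO.column2,
           CustSearchEO.column3,
           CustSearchEO.column4
    FROM   xx_custsearch_demo CustSearchEO;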


  • Who could ask for more with LESS CSS? (Part 3 of 3 – Clrizr)

    - by ToString(theory);
    Welcome back!  In the first two posts in this series, I covered some of the awesome features in CSS precompilers such as SASS and LESS, as well as how to get an initial project setup up and running in ASP.Net MVC 4. In this post, I will cover an actual advanced example of using LESS in a project, and show some of the great productivity features we gain from its usage. Introduction In the first post, I mentioned two subjects that I will be using in this example – constants, and color functions.  I’ve always enjoyed using online color scheme utilities such as Adobe Kuler or Color Scheme Designer to come up with a scheme based off of one primary color.  Using these tools, and requesting a complementary scheme you can get a couple of shades of your primary color, and a couple of shades of a complementary/accent color to display. Because there is no way in regular css to do color operations or store variables, there was no way to accomplish something like defining a primary color, and have a site theme cascade off of that.  However with tools such as LESS, that impossibility becomes a reality!  So, if you haven’t guessed it by now, this post is on the creation of a plugin/module/less file to drop into your project, plugin one color, and have your primary theme cascade from it.  I only went through the trouble of creating a module for getting Complementary colors.  However, it wouldn’t be too much trouble to go through other options such as Triad or Monochromatic to get a module that you could use off of that. Step 1 – Analysis I decided to mimic Adobe Kuler’s Complementary theme algorithm as I liked its simplicity and aesthetics.  Color Scheme Designer is great, but I do believe it can give you too many color options, which can lead to chaos and overload.  The first thing I had to check was if the complementary values for the color schemes were actually hues rotated by 180 degrees at all times – they aren’t.  Apparently Adobe applies some variance to the complementary colors to get colors that are actually more aesthetically appealing to users.  So, I opened up Excel and began to plot complementary hues based on rotation in increments of 10: Long story short, I completed the same calculations for Hue, Saturation, and Lightness.  For Hue, I only had to record the Complementary hue values, however for saturation and lightness, I had to record the values for ALL of the shades.  Since the functions were too complicated to put into LESS since they aren’t constant/linear, but rather interval functions, I instead opted to extrapolate the HSL values using the trendline function for each major interval, onto intervals of spacing 1. For example, using the hue extraction, I got the following values: Interval Function 0-60 60-140 140-270 270-360 Saturation and Lightness were much worse, but in the end, I finally had functions for all of the intervals, and then went the route of just grabbing each shades value in intervals of 1.  Step 2 – Mapping I declared variable names for each of these sections as something that shouldn’t ever conflict with a variable someone would define in their own file.  
After I had each of the values, I extracted the values and put them into files of their own for hue variables, saturation variables, and lightness variables…  Example: /*HUE CONVERSIONS*/@clrizr-hue-source-0deg: 133.43;@clrizr-hue-source-1deg: 135.601;@clrizr-hue-source-2deg: 137.772;@clrizr-hue-source-3deg: 139.943;@clrizr-hue-source-4deg: 142.114;.../*SATURATION CONVERSIONS*/@clrizr-saturation-s2SV0px: 0;@clrizr-saturation-s2SV1px: 0;@clrizr-saturation-s2SV2px: 0;@clrizr-saturation-s2SV3px: 0;@clrizr-saturation-s2SV4px: 0;.../*LIGHTNESS CONVERSIONS*/@clrizr-lightness-s2LV0px: 30;@clrizr-lightness-s2LV1px: 31;@clrizr-lightness-s2LV2px: 32;@clrizr-lightness-s2LV3px: 33;@clrizr-lightness-s2LV4px: 34;...   In the end, I have 973 lines of mapping/conversion from source HSL to shade HSL for two extra primary shades, and two complementary shades. The last bit of the work was the file to compose each of the shades from these mappings. Step 3 – Clrizr Mapper The final step was the hardest to overcome as I was still trying to understand LESS to its fullest extent.  Imports As mentioned previously, I had separated the HSL mappings into different files, so the first necessary step is to import those for use into the Clrizr plugin: @import url("hue.less");@import url("saturation.less");@import url("lightness.less"); Extract Component Values For Each Shade Next, I extracted the necessary information for each shade HSL before shade composition: @clrizr-input-saturation: 1px+floor(saturation(@clrizr-input))-1;@clrizr-input-lightness: 1px+floor(lightness(@clrizr-input))-1; @clrizr-complementary-hue: formatstring("clrizr-hue-source-{0}", ceil(hue(@clrizr-input))); @clrizr-primary-2-saturation: formatstring("clrizr-saturation-s2SV{0}",@clrizr-input-saturation);@clrizr-primary-1-saturation: formatstring("clrizr-saturation-s1SV{0}",@clrizr-input-saturation);@clrizr-complementary-1-saturation: formatstring("clrizr-saturation-c1SV{0}",@clrizr-input-saturation); @clrizr-primary-2-lightness: formatstring("clrizr-lightness-s2LV{0}",@clrizr-input-lightness);@clrizr-primary-1-lightness: formatstring("clrizr-lightness-s1LV{0}",@clrizr-input-lightness);@clrizr-complementary-1-lightness: formatstring("clrizr-lightness-c1LV{0}",@clrizr-input-lightness); Here, you can see a couple of odd things…  On the first line, I am using operations to add units to the saturation and lightness.  This is due to some limitations in the operations that would give me saturation or lightness in %, which can’t be in a variable name.  So, I use first add 1px to it, which casts the result of the following functions as px instead of %, and then at the end, I remove that pixel.  You can also see here the formatstring method which is exactly what it sounds like – something like String.Format(string str, params object[] obj). Get Primary & Complementary Shades Now that I have components for each of the different shades, I can now compose them into each of their pieces.  
For this, I use the @@ operator which will look for a variable with the name specified in a string, and then call that variable: @clrizr-primary-2: hsl(hue(@clrizr-input), @@clrizr-primary-2-saturation, @@clrizr-primary-2-lightness);@clrizr-primary-1: hsl(hue(@clrizr-input), @@clrizr-primary-1-saturation, @@clrizr-primary-1-lightness);@clrizr-primary: @clrizr-input;@clrizr-complementary-1: hsl(@@clrizr-complementary-hue, @@clrizr-complementary-1-saturation, @@clrizr-complementary-1-lightness);@clrizr-complementary-2: hsl(@@clrizr-complementary-hue, saturation(@clrizr-input), lightness(@clrizr-input)); That’s is it, for the most part.  These variables now hold the theme for the one input color – @clrizr-input.  However, I have one last addition… Perceptive Luminance Well, after I got the colors, I decided I wanted to also get the best font color that would go on top of it.  Black or white depending on light or dark color.  Now I couldn’t just go with checking the lightness, as that is half the story.  You see, the human eye doesn’t see ALL colors equally well but rather has more cells for interpreting green light compared to blue or red.  So, using the ratio, we can calculate the perceptive luminance of each of the shades, and get the font color that best matches it! @clrizr-perceptive-luminance-ps2: round(1 - ( (0.299 * red(@clrizr-primary-2) ) + ( 0.587 * green(@clrizr-primary-2) ) + (0.114 * blue(@clrizr-primary-2)))/255)*255;@clrizr-perceptive-luminance-ps1: round(1 - ( (0.299 * red(@clrizr-primary-1) ) + ( 0.587 * green(@clrizr-primary-1) ) + (0.114 * blue(@clrizr-primary-1)))/255)*255;@clrizr-perceptive-luminance-ps: round(1 - ( (0.299 * red(@clrizr-primary) ) + ( 0.587 * green(@clrizr-primary) ) + (0.114 * blue(@clrizr-primary)))/255)*255;@clrizr-perceptive-luminance-pc1: round(1 - ( (0.299 * red(@clrizr-complementary-1)) + ( 0.587 * green(@clrizr-complementary-1)) + (0.114 * blue(@clrizr-complementary-1)))/255)*255;@clrizr-perceptive-luminance-pc2: round(1 - ( (0.299 * red(@clrizr-complementary-2)) + ( 0.587 * green(@clrizr-complementary-2)) + (0.114 * blue(@clrizr-complementary-2)))/255)*255; @clrizr-col-font-on-primary-2: rgb(@clrizr-perceptive-luminance-ps2, @clrizr-perceptive-luminance-ps2, @clrizr-perceptive-luminance-ps2);@clrizr-col-font-on-primary-1: rgb(@clrizr-perceptive-luminance-ps1, @clrizr-perceptive-luminance-ps1, @clrizr-perceptive-luminance-ps1);@clrizr-col-font-on-primary: rgb(@clrizr-perceptive-luminance-ps, @clrizr-perceptive-luminance-ps, @clrizr-perceptive-luminance-ps);@clrizr-col-font-on-complementary-1: rgb(@clrizr-perceptive-luminance-pc1, @clrizr-perceptive-luminance-pc1, @clrizr-perceptive-luminance-pc1);@clrizr-col-font-on-complementary-2: rgb(@clrizr-perceptive-luminance-pc2, @clrizr-perceptive-luminance-pc2, @clrizr-perceptive-luminance-pc2); Conclusion That’s it!  I have posted a project on clrizr.codePlex.com for this, and included a testing page for you to test out how it works.  Feel free to use it in your own project, and if you have any questions, comments or suggestions, please feel free to leave them here as a comment, or on the contact page!
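For anyone wondering how the module is meant to be consumed, a minimal usage sketch might look like the following. The file name clrizr.less and the selectors are assumptions; the @clrizr-* variables are the ones defined above, and variable-ordering behaviour can vary between LESS compiler versions.

    @import "clrizr.less";

    /* the single input colour the whole theme cascades from */
    @clrizr-input: #336699;

    .site-header {
        background-color: @clrizr-primary;
        color: @clrizr-col-font-on-primary;
    }

    .site-header .accent {
        background-color: @clrizr-complementary-1;
        color: @clrizr-col-font-on-complementary-1;
    }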


  • SQL Server Developer Tools – Codename Juneau vs. Red-Gate SQL Source Control

    - by Ajarn Mark Caldwell
    So how do the new SQL Server Developer Tools (previously code-named Juneau) stack up against SQL Source Control?  Read on to find out. At the PASS Community Summit a couple of weeks ago, it was announced that the previously code-named Juneau software would be released under the name of SQL Server Developer Tools with the release of SQL Server 2012.  This replacement for Database Projects in Visual Studio (also known in a former life as Data Dude) has some great new features.  I won’t attempt to describe them all here, but I will applaud Microsoft for making major improvements.  One of my favorite changes is the way database elements are broken down.  Previously every little thing was in its own file.  For example, indexes were each in their own file.  I always hated that.  Now, SSDT uses a pattern similar to Red-Gate’s and puts the indexes and keys into the same file as the overall table definition. Of course there are really cool features to keep your database model in sync with the actual source scripts, and the rename refactoring feature is now touted as being more than just a search and replace, but rather a “semantic-aware” search and replace.  Funny, it reminds me of SQL Prompt’s Smart Rename feature.  But I’m not writing this just to criticize Microsoft and argue that they are late to the party with this feature set.  Instead, I do see it as a viable alternative for folks who want all of their source code to be version controlled, but there are a couple of key trade-offs that you need to know about when you choose which tool set to use. First, the basics Both tool sets integrate with a wide variety of source control systems including the most popular: Subversion, GIT, Vault, and Team Foundation Server.  Both tools have integrated functionality to produce objects to upgrade your target database when you are ready (DACPACs in SSDT, integration with SQL Compare for SQL Source Control).  If you regularly live in Visual Studio or the Business Intelligence Development Studio (BIDS) then SSDT will likely be comfortable for you.  Like BIDS, SSDT is a Visual Studio Project Type that comes with SQL Server, and if you don’t already have Visual Studio installed, it will install the shell for you.  If you already have Visual Studio 2010 installed, then it will just add this as an available project type.  On the other hand, if you regularly live in SQL Server Management Studio (SSMS) then you will really enjoy the SQL Source Control integration from within SSMS.  Both tool sets store their database model in script files.  In SSDT, these are on your file system like other source files; in SQL Source Control, these are stored in the folder structure in your source control system, and you can always GET them to your file system if you want to browse them directly. For me, the key differentiating factors are 1) a single, unified check-in, and 2) migration scripts.  How you value those two features will likely make your decision for you. Unified Check-In If you do a continuous-integration (CI) style of development that triggers an automated build with unit testing on every check-in of source code, and you use Visual Studio for the rest of your development, then you will want to really consider SSDT.  Because it is just another project in Visual Studio, it can be added to your existing Solution, and you can then do a complete, or unified single check-in of all changes whether they are application or database changes.  
This is simply not possible with SQL Source Control because it is in a different development tool (SSMS instead of Visual Studio) and there is no way to do one unified check-in between the two.  You CAN do really fast back-to-back check-ins, but there is the possibility that the automated build that is triggered from the first check-in will cause your unit tests to fail and the CI tool to report that you broke the build.  Of course, the automated build that is triggered from the second check-in which contains the “other half” of your changes should pass and so the amount of time that the build was broken may be very, very short, but if that is very, very important to you, then SQL Source Control just won’t work; you’ll have to use SSDT. Refactoring and Migrations If you work on a mature system, or on a not-so-mature but also not-so-well-designed system, where you want to refactor the database schema as you go along, but you can’t have data suddenly disappearing from your target system, then you’ll probably want to go with SQL Source Control.  As I wrote previously, there are a number of changes which you can make to your database that the comparison tools (both from Microsoft and Red Gate) simply cannot handle without the possibility (or probability) of data loss.  Currently, SSDT only offers you the ability to inject PRE and POST custom deployment scripts.  There is no way to insert your own script in the middle to override the default behavior of the tool.  In version 3.0 of SQL Source Control (Early Access version now available) you have that ability to create your own custom migration script to take the place of the commands that the tool would have done, and ensure the preservation of your data.  Or, even if the default tool behavior would have worked, but you simply know a better way then you can take control and do things your way instead of theirs. You Decide In the environment I work in, our automated builds are not triggered off of check-ins, but off of the clock (currently once per night) and so there is no point at which the automated build and unit tests will be triggered without having both sides of the development effort already checked-in.  Therefore having a unified check-in, while handy, is not critical for us.  As for migration scripts, these are critically important to us.  We do a lot of new development on systems that have already been in production for years, and it is not uncommon for us to need to do a refactoring of the database.  Because of the maturity of the existing system, that often involves data migrations or other additional SQL tasks that the comparison tools just can’t detect on their own.  Therefore, the ability to create a custom migration script to override the tool’s default behavior is very important to us.  And so, you can see why we will continue to use Red Gate SQL Source Control for the foreseeable future.
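To make the migration-script trade-off concrete, here is the sort of hand-written step the article is talking about, sketched in T-SQL against a hypothetical Customer table. A comparison tool that only sees the before and after schemas would typically script a column rename as a drop plus an add and lose the data, whereas a custom migration preserves it.

    -- Hypothetical example: rename dbo.Customer.Phone to CustomerTel without data loss.
    -- Guarded so the script is safe to re-run if the rename has already happened.
    IF COL_LENGTH('dbo.Customer', 'Phone') IS NOT NULL
    BEGIN
        EXEC sp_rename 'dbo.Customer.Phone', 'CustomerTel', 'COLUMN';
    END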


  • CodePlex Daily Summary for Monday, September 03, 2012

    CodePlex Daily Summary for Monday, September 03, 2012Popular ReleasesMetodología General Ajustada - MGA: 03.01.03: Cambios Aury: Ajuste del margen del reporte. Visualización de la columna de Supuestos en la parte del módulo de Decisión. Cambios John: Integración de código con cambios enviados por Aury Niño. Generación de instaladores. Soporte técnico por correo electrónico y telefónico.Iveely Search Engine: Iveely Search Engine (0.2.0): ????ISE?0.1.0??,?????,ISE?0.2.0?????????,???????,????????20???follow?ISE,????,??ISE??????????,??????????,?????????,?????????0.2.0??????,??????????。 Iveely Search Engine ?0.2.0?????????“??????????”,??????,?????????,???????,???????????????????,????、????????????。???0.1.0????????????: 1. ??“????” ??。??????????,?????????,???????????????????。??:????????,????????????,??????????????????。??????。 2. ??“????”??。?0.1.0??????,???????,???????????????,?????????????,????????,?0.2.0?,???????...Thisismyusername's codeplex page.: HTML5 Mulititouch Fruit Ninja Proof of Concept: This is an example of how you could create a game such as Fruit Ninja using HTML5's multitouch capabilities. Sorry this example doesn't have great graphics. If I had my own webpage, I could store some graphics and upload the game there and it might look halfway decent, but since I'm only using a Codeplex page and most mobile devices can't open .zip files, the fruits are just circles. I hope you enjoy reading the source code anyway.GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail BugfixSmart Data Access layer: Smart Data access Layer Ver 3: In this version support executing inline query is added. Check Documentation section for detail.TSQL Code Smells Finder: POC 1.01: Proof of concept 1.01 TSQLDomTest.ps1 and Errors.Txt are requiredConfuser: Confuser build 76542: This is a build of changeset 76542.Reactive State Machine: ReactiveStateMachine-beta: TouchStateMachine now supports Microsoft Surface 2.0 SDK. The TouchStateMachine is an extension to the Reactive State Machine. Reactive State Machine uses NuGet for dependency managementSharePoint Column & View Permission: SharePoint Column and View Permission v1.2: Version 1.2 of this project. If you will find any bugs please let me know at enti@zoznam.sk or post your findings in Issue TrackerMihmojsos OS: Mihmojsos OS 3 (Smart Rabbit): !Mihmojsos OS 3 Smart Rabbit Mihmojsos Smart Rabbit is now availableDotNetNuke Translator: 01.00.00 Beta: First release of the project.YNA: YNA 0.2 alpha: Wath's new since 0.1 alpha ? A lot of changes but there are the most interresting : StateManager is now better and faster Mouse events for all YnObjects (Sprites, Images, texts) A really big improvement for YnGroup Gamepad support And the news : Tiled Map support (need refactoring) Isometric tiled map support (need refactoring) Transition effect like "FadeIn" and "FadeOut" (YnTransition) Timers (YnTimer) Path management (YnPath, need more refactoring) Downloads All downloads...Audio Pitch & Shift: Audio Pitch And Shift 5.1.0.2: fixed several issues with streaming modeUrlPager: UrlPager 1.2: Fixed bug in which url parameters will lost after paging; ????????url???bug;Sofire Suite: Sofire v1.5.0.0: Sofire v1.5.0.0 ?? ???????? ?????: 1、?? 2、????EntLib.com????????: EntLib.com???????? 
v3.0: EntLib eCommerce Solution ???Microsoft .Net Framework?????????????????????。Coevery - Free CRM: Coevery 1.0.0.24: Add a sample database, and installation instructions.Math.NET Numerics: Math.NET Numerics v2.2.1: Major linear algebra rework since v2.1, now available on Codeplex as well (previous versions were only available via NuGet). Since v2.2.0: Student-T density more robust for very large degrees of freedom Sparse Kronecker product much more efficient (now leverages sparsity) Direct access to raw matrix storage implementations for advanced extensibility Now also separate package for signed core library with a strong name (we dropped strong names in v2.2.0) Also available as NuGet packages...Microsoft SQL Server Product Samples: Database: AdventureWorks Databases – 2012, 2008R2 and 2008: About this release This release consolidates AdventureWorks databases for SQL Server 2012, 2008R2 and 2008 versions to one page. Each zip file contains an mdf database file and ldf log file. This should make it easier to find and download AdventureWorks databases since all OLTP versions are on one page. There are no database schema changes. For each release of the product, there is a light-weight and full version of the AdventureWorks sample database. The light-weight version is denoted by ...Christoc's DotNetNuke Module Development Template: DotNetNuke Project Templates V1.1 for VS2012: This release is specifically for Visual Studio 2012 Support, distributed through the Visual Studio Extensions gallery at http://visualstudiogallery.msdn.microsoft.com/ After you build in Release mode the installable packages (source/install) can be found in the INSTALL folder now, within your module's folder, not the packages folder anymore Check out the blog post for all of the details about this release. http://www.dotnetnuke.com/Resources/Blogs/EntryId/3471/New-Visual-Studio-2012-Projec...New ProjectsBPVote4PPT: BPVote For PowerPointCosmo OS: La semplicità in un OSFinancial Analytic Tools: C#.Net Financial Analytic ToolsGeminiMVC: An Open Source CMS written in ASP.net MVC 4 with speed, extensibility, and ease-of-us in mind.JQuery SharePoint Autocomplete People Picker: This JQUery bundle provides an autocomplete people picker based on SharePoint profiles. It can be hosted on the SharePoint itself or on remote applications.Kerbal Space Program PartModule Library: This project is designed to add various functionalities to custom parts for the space program simulation game Kerbal Space Program.KeyboardRemapper: This tool to remaps keys in the keyboard. If you have more than one keyboard or an additional keypad, you can remap the keys of the each keyboard independentlyKHStudent: ??????Localized DataAnnotations with T4 templates: Simplified DataAnnotations localization using T4 templates.MfcLightToolkit: Supports development for small and simple MFC application. 
Provides asynchronous programming model like .NET, file download, easy control resizing, and so on.Müslüm ÖZTÜRK Code Lib: Test amaçli olusturulan projemdirPolska: Testproject in how a polish grammerprogram can look like.QueueLessApp: Here is the codeRusIS.CMS: aaaSGPS: Projeto de controle de produtos e serviçosStemmersNet: Stemmers pack for .Net FrameworkTrabajo Final de Ingenieria - Javier Vallejos: Tesis Final de la carrera de Ingenieria - Universidad Abierta Interamericana.TSQL Code Smells Finder: TSQL 'smells' findersXNA and Data Driven Design: This project includes links for XNA and Data Driven DesignXNA and System Testing: This project includes code for XNA and System TestingYUGI-AR Project: an open source project for yugioh based augmented reality???????? ? ?????????????: ???? ??????? ??????? ?????????????? ??????????? ?????????? ??? ? ????? ?????? ? ? ??? ??? ????? ? ??? ?????????? ????????????.


  • Nginx Subdomain Problem

    - by user292299
    i can't access my subdomain on localhost. my localdomain is localhost.dev and it's work.but i want to auto subdomain for php script (username.localhost.dev) i try this server { listen 80 default_server; listen [::]:80 default_server ipv6only=on; access_log /var/www/access.log; error_log /var/www/error.log; root /var/www; index index.php index.html index.htm; # Make site accessible from http://localhost/ server_name localhost.dev ***.localhost.dev**; location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. try_files $uri $uri/ /index.html; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } location /f2/public/ { try_files $uri $uri/ /f2/public/index.php?$args; } location /doc/ { alias /usr/share/doc/; autoindex on; allow 127.0.0.1; allow ::1; deny all; } # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests #location /RequestDenied { # proxy_pass http://127.0.0.1:8080; #} #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/html; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { # fastcgi_split_path_info ^(.+\.php)(/.+)$; # # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # # # With php5-cgi alone: # fastcgi_pass 127.0.0.1:9000; # # With php5-fpm: # fastcgi_pass unix:/var/run/php5-fpm.sock; # fastcgi_index index.php; # include fastcgi_params; include /etc/nginx/fastcgi_params; try_files $uri =404; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } it's not working.i change server_name for testing server_name localhost.dev asd.localhost.dev; i can't access asd.localhost.dev and i try this double server{} section # You may add here your # server { # ... # } # statements for each of your virtual hosts to this file ## # You should look at the following URL's in order to grasp a solid understanding # of Nginx configuration files in order to fully unleash the power of Nginx. # http://wiki.nginx.org/Pitfalls # http://wiki.nginx.org/QuickStart # http://wiki.nginx.org/Configuration # # Generally, you will want to move this file somewhere, and start with a clean # file but keep this around for reference. Or just disable in sites-enabled. # # Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples. ## server { listen 80 default_server; listen [::]:80 default_server ipv6only=on; access_log /var/www/access.log; error_log /var/www/error.log; root /var/www; index index.php index.html index.htm; # Make site accessible from http://localhost/ server_name localhost.dev; location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. 
try_files $uri $uri/ /index.html; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } location /f2/public/ { try_files $uri $uri/ /f2/public/index.php?$args; } location /doc/ { alias /usr/share/doc/; autoindex on; allow 127.0.0.1; allow ::1; deny all; } # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests #location /RequestDenied { # proxy_pass http://127.0.0.1:8080; #} #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/html; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { # fastcgi_split_path_info ^(.+\.php)(/.+)$; # # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # # # With php5-cgi alone: # fastcgi_pass 127.0.0.1:9000; # # With php5-fpm: # fastcgi_pass unix:/var/run/php5-fpm.sock; # fastcgi_index index.php; # include fastcgi_params; include /etc/nginx/fastcgi_params; try_files $uri =404; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } ############################### server { access_log /var/www/access.log; error_log /var/www/error.log; root /var/www; index index.php index.html index.htm; # Make site accessible from http://localhost/ server_name asd.localhost.dev; location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. try_files $uri $uri/ /index.html; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } location /f2/public/ { try_files $uri $uri/ /f2/public/index.php?$args; } location /doc/ { alias /usr/share/doc/; autoindex on; allow 127.0.0.1; allow ::1; deny all; } # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests #location /RequestDenied { # proxy_pass http://127.0.0.1:8080; #} #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/html; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { # fastcgi_split_path_info ^(.+\.php)(/.+)$; # # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # # # With php5-cgi alone: # fastcgi_pass 127.0.0.1:9000; # # With php5-fpm: # fastcgi_pass unix:/var/run/php5-fpm.sock; # fastcgi_index index.php; # include fastcgi_params; include /etc/nginx/fastcgi_params; try_files $uri =404; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # root html; # index index.html index.htm; # # location / { # try_files $uri $uri/ =404; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # # root html; # index index.html index.htm; # # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # # ssl_session_timeout 5m; # # ssl_protocols SSLv3 TLSv1; # ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP; # ssl_prefer_server_ciphers on; # # location / { # try_files $uri 
$uri/ =404; # } #} With the double server{} configuration I still can't reach asd.localhost.dev.
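For what it's worth, a minimal sketch of a single server block that handles the bare domain plus any subdomain is shown below. Assumptions: php5-fpm is listening on 127.0.0.1:9000 as in the config above, and every test hostname such as asd.localhost.dev actually resolves (for example via its own /etc/hosts entry, since hosts files do not support wildcards; that is often the real culprit in setups like this).

    server {
        listen 80;

        # matches localhost.dev and any subdomain; the captured name is in $subdomain
        server_name ~^(?:(?<subdomain>.+)\.)?localhost\.dev$;

        root /var/www;
        index index.php index.html index.htm;

        location / {
            try_files $uri $uri/ /index.html;
        }

        location ~ \.php$ {
            include /etc/nginx/fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_index index.php;
            fastcgi_pass 127.0.0.1:9000;
        }
    }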


  • How to follow object on CatmullRomSplines at constant speed (e.g. train and train carriage)?

    - by Simon
    I have a CatmullRomSpline, and using the very good example at https://github.com/libgdx/libgdx/wiki/Path-interface-%26-Splines I have my object moving at an even pace over the spline. Using a simple train and carriage example, I now want to have the carriage follow the train at the same speed as the train (not jolting along as it does with my code below). This leads into my main questions: How can I make the carriage have the same constant speed as the train and make it non jerky (it has something to do with the derivative I think, I don't understand how that part works)? Why do I need to divide by the line length to convert to metres per second, and is that correct? It wasn't done in the linked examples? I have used the example I linked to above, and modified for my specific example: private void process(CatmullRomSpline catmullRomSpline) { // Render path with precision of 1000 points renderPath(catmullRomSpline, 1000); float length = catmullRomSpline.approxLength(catmullRomSpline.spanCount * 1000); // Render the "train" Vector2 trainDerivative = new Vector2(); Vector2 trainLocation = new Vector2(); catmullRomSpline.derivativeAt(trainDerivative, current); // For some reason need to divide by length to convert from pixel speed to metres per second but I do not // really understand why I need it, it wasn't done in the examples??????? current += (Gdx.graphics.getDeltaTime() * speed / length) / trainDerivative.len(); catmullRomSpline.valueAt(trainLocation, current); renderCircleAtLocation(trainLocation); if (current >= 1) { current -= 1; } // Render the "carriage" Vector2 carriageLocation = new Vector2(); float carriagePercentageCovered = (((current * length) - 1f) / length); // I would like it to follow at 1 metre behind carriagePercentageCovered = Math.max(carriagePercentageCovered, 0); catmullRomSpline.valueAt(carriageLocation, carriagePercentageCovered); renderCircleAtLocation(carriageLocation); } private void renderPath(CatmullRomSpline catmullRomSpline, int k) { // catMulPoints would normally be cached when initialising, but for sake of example... Vector2[] catMulPoints = new Vector2[k]; for (int i = 0; i < k; ++i) { catMulPoints[i] = new Vector2(); catmullRomSpline.valueAt(catMulPoints[i], ((float) i) / ((float) k - 1)); } SHAPE_RENDERER.begin(ShapeRenderer.ShapeType.Line); SHAPE_RENDERER.setColor(Color.NAVY); for (int i = 0; i < k - 1; ++i) { SHAPE_RENDERER.line((Vector2) catMulPoints[i], (Vector2) catMulPoints[i + 1]); } SHAPE_RENDERER.end(); } private void renderCircleAtLocation(Vector2 location) { SHAPE_RENDERER.begin(ShapeRenderer.ShapeType.Filled); SHAPE_RENDERER.setColor(Color.YELLOW); SHAPE_RENDERER.circle(location.x, location.y, .5f); SHAPE_RENDERER.end(); } To create a decent sized CatmullRomSpline for testing this out: Vector2[] controlPoints = makeControlPointsArray(); CatmullRomSpline myCatmull = new CatmullRomSpline(controlPoints, false); .... 
private Vector2[] makeControlPointsArray() { Vector2[] pointsArray = new Vector2[78]; pointsArray[0] = new Vector2(1.681817f, 10.379999f); pointsArray[1] = new Vector2(2.045455f, 10.379999f); pointsArray[2] = new Vector2(2.663636f, 10.479999f); pointsArray[3] = new Vector2(3.027272f, 10.700000f); pointsArray[4] = new Vector2(3.663636f, 10.939999f); pointsArray[5] = new Vector2(4.245455f, 10.899999f); pointsArray[6] = new Vector2(4.736363f, 10.720000f); pointsArray[7] = new Vector2(4.754545f, 10.339999f); pointsArray[8] = new Vector2(4.518181f, 9.860000f); pointsArray[9] = new Vector2(3.790908f, 9.340000f); pointsArray[10] = new Vector2(3.172727f, 8.739999f); pointsArray[11] = new Vector2(3.300000f, 8.340000f); pointsArray[12] = new Vector2(3.700000f, 8.159999f); pointsArray[13] = new Vector2(4.227272f, 8.520000f); pointsArray[14] = new Vector2(4.681818f, 8.819999f); pointsArray[15] = new Vector2(5.081817f, 9.200000f); pointsArray[16] = new Vector2(5.463636f, 9.460000f); pointsArray[17] = new Vector2(5.972727f, 9.300000f); pointsArray[18] = new Vector2(6.063636f, 8.780000f); pointsArray[19] = new Vector2(6.027272f, 8.259999f); pointsArray[20] = new Vector2(5.700000f, 7.739999f); pointsArray[21] = new Vector2(5.300000f, 7.440000f); pointsArray[22] = new Vector2(4.645454f, 7.179999f); pointsArray[23] = new Vector2(4.136363f, 6.940000f); pointsArray[24] = new Vector2(3.427272f, 6.720000f); pointsArray[25] = new Vector2(2.572727f, 6.559999f); pointsArray[26] = new Vector2(1.900000f, 7.100000f); pointsArray[27] = new Vector2(2.336362f, 7.440000f); pointsArray[28] = new Vector2(2.590908f, 7.940000f); pointsArray[29] = new Vector2(2.318181f, 8.500000f); pointsArray[30] = new Vector2(1.663636f, 8.599999f); pointsArray[31] = new Vector2(1.209090f, 8.299999f); pointsArray[32] = new Vector2(1.118181f, 7.700000f); pointsArray[33] = new Vector2(1.045455f, 6.880000f); pointsArray[34] = new Vector2(1.154545f, 6.100000f); pointsArray[35] = new Vector2(1.281817f, 5.580000f); pointsArray[36] = new Vector2(1.700000f, 5.320000f); pointsArray[37] = new Vector2(2.190908f, 5.199999f); pointsArray[38] = new Vector2(2.900000f, 5.100000f); pointsArray[39] = new Vector2(3.700000f, 5.100000f); pointsArray[40] = new Vector2(4.372727f, 5.220000f); pointsArray[41] = new Vector2(4.827272f, 5.220000f); pointsArray[42] = new Vector2(5.463636f, 5.160000f); pointsArray[43] = new Vector2(5.554545f, 4.700000f); pointsArray[44] = new Vector2(5.245453f, 4.340000f); pointsArray[45] = new Vector2(4.445455f, 4.280000f); pointsArray[46] = new Vector2(3.609091f, 4.260000f); pointsArray[47] = new Vector2(2.718181f, 4.160000f); pointsArray[48] = new Vector2(1.990908f, 4.140000f); pointsArray[49] = new Vector2(1.427272f, 3.980000f); pointsArray[50] = new Vector2(1.609090f, 3.580000f); pointsArray[51] = new Vector2(2.136363f, 3.440000f); pointsArray[52] = new Vector2(3.227272f, 3.280000f); pointsArray[53] = new Vector2(3.972727f, 3.340000f); pointsArray[54] = new Vector2(5.027272f, 3.360000f); pointsArray[55] = new Vector2(5.718181f, 3.460000f); pointsArray[56] = new Vector2(6.100000f, 4.240000f); pointsArray[57] = new Vector2(6.209091f, 4.500000f); pointsArray[58] = new Vector2(6.118181f, 5.320000f); pointsArray[59] = new Vector2(5.772727f, 5.920000f); pointsArray[60] = new Vector2(4.881817f, 6.140000f); pointsArray[61] = new Vector2(5.318181f, 6.580000f); pointsArray[62] = new Vector2(6.263636f, 7.020000f); pointsArray[63] = new Vector2(6.645453f, 7.420000f); pointsArray[64] = new Vector2(6.681817f, 8.179999f); pointsArray[65] = new 
Vector2(6.627272f, 9.080000f); pointsArray[66] = new Vector2(6.572727f, 9.699999f); pointsArray[67] = new Vector2(6.263636f, 10.820000f); pointsArray[68] = new Vector2(5.754546f, 11.479999f); pointsArray[69] = new Vector2(4.536363f, 11.599998f); pointsArray[70] = new Vector2(3.572727f, 11.700000f); pointsArray[71] = new Vector2(2.809090f, 11.660000f); pointsArray[72] = new Vector2(1.445455f, 11.559999f); pointsArray[73] = new Vector2(0.936363f, 11.280000f); pointsArray[74] = new Vector2(0.754545f, 10.879999f); pointsArray[75] = new Vector2(0.700000f, 9.939999f); pointsArray[76] = new Vector2(0.918181f, 9.620000f); pointsArray[77] = new Vector2(1.463636f, 9.600000f); return pointsArray; } Disclaimer: My math is very rusty, so please explain in lay mans terms....
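Not an authoritative answer, but one way to keep the carriage moving at the same world-space speed is to give it its own spline parameter, seed that parameter roughly one metre behind the train, and advance it each frame using the derivative at the carriage's own position instead of deriving it from the train's parameter. A sketch follows; it assumes the same speed field, the length computed in process(), and the renderCircleAtLocation() helper shown above.

    // Hypothetical helper: the carriage keeps its own parameter and advances with the
    // same conversion the train uses, so both move at the same rate along the path.
    private float carriageCurrent;   // seed this roughly (1f / length) behind 'current'

    private void advanceCarriage(CatmullRomSpline<Vector2> spline, float length, float delta) {
        Vector2 derivative = new Vector2();
        spline.derivativeAt(derivative, carriageCurrent);

        // Same conversion as the train, evaluated at the carriage's own position.
        carriageCurrent += (delta * speed / length) / derivative.len();
        if (carriageCurrent >= 1f) {
            carriageCurrent -= 1f;
        }

        Vector2 carriageLocation = new Vector2();
        spline.valueAt(carriageLocation, carriageCurrent);
        renderCircleAtLocation(carriageLocation);
    }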


  • Notifications for Expiring DBSNMP Passwords

    - by Courtney Llamas
    Most user accounts these days have a password profile on them that automatically expires the password after a set number of days. Depending on your company’s security requirements, this may be as little as 30 days or as long as 365 days, although typically it falls between 60 and 90 days. For a normal user this causes a small interruption in your day, as you have to go get your password reset by an admin. When this happens to privileged accounts, such as the DBSNMP account that is responsible for monitoring database availability, it can cause bigger problems. In Oracle Enterprise Manager 12c you may notice the error message “ORA-28002: the password will expire within 5 days” when you connect to a target, or worse you may get “ORA-28001: the password has expired”. If you wait too long, your monitoring will fail because the password is locked out. Wouldn’t it be nice if we could get an alert 10 days before our DBSNMP password expired? Thanks to Oracle Enterprise Manager 12c Metric Extensions (ME), you can! See the Oracle Enterprise Manager Cloud Control Administrator’s Guide for more information on Metric Extensions.

To create a metric extension, select Enterprise / Monitoring / Metric Extensions, and then click on Create. On the General Properties screen select either Cluster Database or Database Instance, depending on which target you need to monitor. If you have both RAC and single instance you may need to create one for each. In this example we will create a Cluster Database metric. Enter a Name for the ME and a Display Name, then select SQL for the Adapter. Adjust the Collection Schedule as desired; for this example we will collect this metric every 1 day. Notice that for metrics collected every day, we can determine the exact time we want to collect.

On the Adapter page, enter the query that you wish to execute. In this example we will use the query below, which specifically checks for the DBSNMP user that is expiring within 10 days. Of course, you can adjust this query to alert for any user that can cause an outage, such as an application account or a service account such as RMAN.

    select username, account_status, trunc(expiry_date - sysdate) days_to_expire
    from   dba_users
    where  username = 'DBSNMP'
    and    expiry_date is not null;

The next step is to create the columns to store the data returned from the query. Click Add and add a column for each of the fields in the same order that the data is returned. The table below will help you complete the column additions.

    Name          | Display Name          | Column Type | Value Type | Metric Category | Unit
    Username      | User Name             | Key         | String     | Security        |
    AccountStatus | Account Status        | Data        | String     | Security        |
    DaysToExpire  | Days Until Expiration | Data        | Number     | Security        | Days

When creating the DaysToExpire column, you can add a default threshold here for Warning and Critical (say < 10 and 5). When all columns have been added, click Next. On the Credentials page, you can choose to use the default monitoring credentials or specify new credentials. We will use the default credentials established for our target (dbsnmp).

The next step is to test your Metric Extension. Click on Add to select a target for testing, then click Select. Now click the button Run Test to execute the test against the selected target(s). We can see in the example below that the Metric Extension has executed and returned a value of 68 days to expire. Click Next to proceed. Review the metric extension in the final screen and click Finish. The metric will be created in Editable status.
Select the metric, click Actions and select Deployable Draft. You can do this once more to move to Published. Finally, we want to apply this metric to a target. When managing many targets, it’s best to add your metric to a template, for details on adding a Metric Extension to a template see the Administrator’s Guide. For this example, we will deploy this to a target directly. Select Actions / Deploy to Targets. Click Add and select the target you wish to deploy to and click Submit.  Once deployment is complete, we can go to the target and view the Metric & Collection Settings to see the new metric and its thresholds.   After some time, you will find the metric has collected and the days to expiration for DBSNMP user can be seen in the All Metrics view.   For metrics collected once per day, you may have to wait up to 24 hours to see the metric and current severity. In the example below, the current severity is Clear (green check) as it is not scheduled to expire within 10 days. To test the notification, we can edit the thresholds for the new metric so they trigger an alert.  Our password expires in 139 days, so we’ll change our Warning to 140 and leave Critical at 5, in our example we also changed the collection time to every 5 minutes.  At the next collection, you’ll find that the current severity changes to a Warning and any related Incident Rules would be triggered to create an Incident or Notification as desired. Now that you get a notification that your DBSNMP passwords is about to expire, you can use OEM Command Line Interface (EM CLI) verb update_db_password to change it at both the database target and the OEM target in one step.  The caveat is you must know the existing password to use the update_db_password command.  To learn more about EM CLI, see the Oracle Enterprise Manager Command Line Interface Guide.  Below is an example of changing the password with the update_db_password verb.  $ ./emcli update_db_password -target_name=emrep -target_type=oracle_database -user_name=dbsnmp -change_at_target=yes -change_all_references=yes Enter value for old_password :Enter value for new_password :Enter value for retype_new_password :Successfully submitted a job to change the password in Enterprise Manager and on the target database: "emrep"Execute "emcli get_jobs -job_id=FA66C1C4D663297FE0437656F20ACC84" to check the status of the job.Search for job name "CHANGE_PWD_JOB_FA66C1C4D662297FE0437656F20ACC84" on the Jobs home page to check job execution details. The subsequent job created will typically run quickly enough that a blackout is not needed, however if you submit a script with many targets to change, your job may run slower so adding a blackout to the script is recommended. $ ./emcli get_jobs -job_id=FA66C1C4D663297FE0437656F20ACC84 Name Type Job ID Execution ID Scheduled Completed TZ Offset Status Status ID Owner Target Type Target Name CHANGE_PWD_JOB_FA66C1C4D662297FE0437656F20ACC84 ChangePassword FA66C1C4D663297FE0437656F20ACC84 FA66C1C4D665297FE0437656F20ACC84 2014-05-28 09:39:12 2014-05-28 09:39:18 GMT-07:00 Succeeded 5 SYSMAN oracle_database emrep After implementing the above Metric Extension and using the EM CLI update_db_password verb, you will be able to stay on top of your DBSNMP password changes without experiencing an unplanned monitoring outage.  
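As a footnote to the advice about adjusting the query, a variation that watches several privileged accounts at once would look like the sketch below; the account list is only an example and should be tailored to your environment.

    select username,
           account_status,
           trunc(expiry_date - sysdate) days_to_expire
    from   dba_users
    where  username in ('DBSNMP', 'SYSMAN', 'RMAN')
    and    expiry_date is not null;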


  • ANTS CLR and Memory Profiler In Depth Review (Part 2 of 2 – Memory Profiler)

    - by ToStringTheory
    One of the things that people might not know about me, is my obsession to make my code as efficient as possible. Many people might not realize how much of a task or undertaking that this might be, but it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind… In trying to make code efficient, there are many different factors that play a part – size of project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code – the list can go on for quite some time. I spend quite a bit of time when developing trying to determine what is the best way to implement a feature to accomplish the efficiency that I look to achieve. One program that I have recently come to learn about – Red Gate ANTS Performance (CLR) and Memory profiler gives me tools to accomplish that job more efficiently as well. In this review, I am going to cover some of the features of the ANTS memory profiler set by compiling some hideous example code to test against. Notice As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products, in exchange for a free license to the program. I have not let this affect my opinions of the product in any way, and Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way. Introduction – Part 2 In my last post, I reviewed the feature packed Red Gate ANTS Performance Profiler.  Separate from the Red Gate Performance Profiler is the Red Gate ANTS Memory Profiler – a simple, easy to use utility for checking how your application is handling memory management…  A tool that I wish I had had many times in the past.  This post will be focusing on the ANTS Memory Profiler and its tool set. The memory profiler has a large assortment of features just like the Performance Profiler, with the new session looking nearly exactly alike: ANTS Memory Profiler Memory profiling is not something that I have to do very often…  In the past, the few cases I’ve had to find a memory leak in an application I have usually just had to trace the code of the operations being performed to look for oddities…  Sadly, I have come across more undisposed/non-using’ed IDisposable objects, usually from ADO.Net than I would like to ever see.  Support is not fun, however using ANTS Memory Profiler makes this task easier.  For this round of testing, I am going to use the same code from my previous example, using the WPF application. This time, I will choose the ‘Profile Memory’ option from the ANTS menu in Visual Studio, which launches the solution in its currently configured state/start-up project, and then launches the ANTS Memory Profiler to help.  It prepopulates all of the fields with the current project information, and all I have to do is select the ‘Start Profiling’ option. When the window comes up, it is actually quite barren, just giving ideas on how to work the profiler.  You start by getting to the point in your application that you want to profile, and then taking a ‘Memory Snapshot’.  This performs a full garbage collection, and snapshots the managed heap.  Using the same WPF app as before, I will go ahead and take a snapshot now. As you can see, ANTS is already giving me lots of information regarding the snapshot, however this is just a snapshot.  
The whole point of the profiler is to perform an action, usually one where a memory problem is being noticed, and then take another snapshot and perform a diff between them to see what has changed.  I am going to go ahead and generate 5000 primes, and then take another snapshot: As you can see, ANTS is already giving me a lot of new information about this snapshot compared to the last.  Information such as difference in memory usage, fragmentation, class usage, etc…  If you take more snapshots, you can use the dropdown at the top to set your actual comparison snapshots. If you beneath the timeline, you will see a breadcrumb trail showing how best to approach profiling memory using ANTS.  When you first do the comparison, you start on the Summary screen.  You can either use the charts at the bottom, or switch to the class list screen to get to the next step.  Here is the class list screen: As you can see, it lists information about all of the instances between the snapshots, as well as at the bottom giving you a way to filter by telling ANTS what your problem is.  I am going to go ahead and select the Int16[] to look at the Instance Categorizer Using the instance categorizer, you can travel backwards to see where all of the instances are coming from.  It may be hard to see in this image, but hopefully the lightbox (click on it) will help: I can see that all of these instances are rooted to the application through the UI TextBlock control.  This image will probably be even harder to see, however using the ‘Instance Retention Graph’, you can trace an objects memory inheritance up the chain to see its roots as well.  This is a simple example, as this is simply a known element.  Usually you would be profiling an actual problem, and comparing those differences.  I know in the past, I have spotted a problem where a new context was created per page load, and it was rooted into the application through an event.  As the application began to grow, performance and reliability problems started to emerge.  A tool like this would have been a great way to identify the problem quickly. Overview Overall, I think that the Red Gate ANTS Memory Profiler is a great utility for debugging those pesky leaks.  3 Biggest Pros: Easy to use interface with lots of options for configuring profiling session Intuitive and helpful interface for drilling down from summary, to instance, to root graphs ANTS provides an API for controlling the profiler. Not many options, but still helpful. 2 Biggest Cons: Inability to automatically snapshot the memory by interval Lack of complete integration with Visual Studio via an extension panel Ratings Ease of Use (9/10) – I really do believe that they have brought simplicity to the once difficult task of memory profiling.  I especially liked how it stepped you further into the drilldown by directing you towards the best options. Effectiveness (10/10) – I believe that the profiler does EXACTLY what it purports to do.  Features (7/10) – A really great set of features all around in the application, however, I would like to see some ability for automatically triggering snapshots based on intervals or framework level items such as events. Customer Service (10/10) – My entire experience with Red Gate personnel has been nothing but good.  their people are friendly, helpful, and happy! UI / UX (9/10) – The interface is very easy to get around, and all of the options are easy to find.  With a little bit of poking around, you’ll be optimizing Hello World in no time flat! 
Overall (9/10) – Overall, I am happy with the Memory Profiler and its features, as well as with the service I received when working with the Red Gate personnel.  Thank you for reading up to here, or skipping ahead – I told you it would be shorter!  Please, if you do try the product, drop me a message and let me know what you think!  I would love to hear any opinions you may have on the product. Code Feel free to download the code I used above – download via DropBox
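Since the review calls out undisposed ADO.Net objects as the most common leak it runs into, here is a minimal, hedged sketch of the using pattern that avoids them. It is plain .NET and not tied to ANTS in any way; the connection string and the Orders table are placeholders of my own.

using System;
using System.Data.SqlClient;

class ConnectionExample
{
    // Wrapping IDisposable ADO.Net objects in using blocks guarantees Dispose()
    // runs even when an exception is thrown, so connections are returned to the
    // pool promptly instead of lingering until the finalizer eventually runs.
    static int CountOrders(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
        // Both objects are disposed here, whether or not ExecuteScalar succeeded.
    }
}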

    Read the article

  • Getting Started With Tailoring Business Processes

    - by Richard Bingham
    In this article, and for the sake of simplicity, we will use the term "On-Premise" to mean a deployment where you have design-time development access to the instance, including administration of the technology components, the applications filesystem, and the database. In reality this might be a local development instance that is then supported by a team who can deploy your customizations to the restricted production instance equivalents.
Tools Overview
Firstly let's look at the design-time tools within JDeveloper for customizing and extending the artifacts of a Business Process. In essence this falls into two buckets: the SOA Composite Editor for working with BPEL processes, and BPM Studio.
The SOA Composite Editor
As a standard extension to JDeveloper, this graphical design tool should be familiar to anyone who has previously worked with Oracle SOA Server. With easy-to-use modeling capability, backed up by a full (read-only) XML source view, it provides everything that is needed to implement the technical design. In simple terms, once deployed to the remote SOA Server, the composite components (like Mediator) leverage the Event Delivery Network (EDN) for interaction with the application logic. If you are customizing an existing Fusion Applications BPEL process, be aware that it does support MDS-based customization layers, just like Page Composer, where different customizations are used based on the run-time context, such as a specific Product or Business Unit. This also makes them safe from patching and upgrades, although only a single active version of the composite is available at run-time. This is defined by a field on the composite record, available in Enterprise Manager. Obviously, if you wish to fire different activities and tasks based on the user context, then you should include switches to fork the flows in your custom BPEL process.
Figure 1 – A BPEL process in Composite Editor
The following describes the simplified steps for making customizations to BPEL processes. This is the most common method of changing the business processes of Fusion Applications, as over 400 BPEL-based composite applications are provided out-of-the-box.
1. Set up your local Fusion Applications JDeveloper environment. The SOA Composite Editor should be installed as part of the Fusion Applications extension. If there are problems you can also find it under the 'Check for Updates' help menu option.
2. Since SOA Server is not part of the JDeveloper integrated WebLogic Server, set up a standalone WebLogic environment for deploying and testing. Obviously you might use a Fusion Applications development instance also.
3. Package the existing standard Fusion Applications SOA Composite using Enterprise Manager and export it as a complete SOA Archive (SAR) file, resulting in a local .jar file. You may need to ask your system administrator for this.
4. Import the exported SAR .jar file into JDeveloper using the File menu, under the option 'SOA Archive into SOA Project'.
5. In JDeveloper set the appropriate customization layer values, and then change from the default role to the Fusion Applications Customization Developer role.
6. Make the customizations and save the application project.
7. Finally, redeploy the composite application, either to a direct Application Server connection, or as a fresh SAR (jar) file that can then be re-imported and deployed via Enterprise Manager.
The Business Process Management (BPM) Suite
In addition to the relatively low-level development environment associated with BPEL process creation, Oracle provides a suite of products that allow business process adjustments to be made without the need for some of the programming skills. The aim is to abstract much of the technical implementation and to provide Business Analysts with tools for immediately implementing organization changes. Obviously there are some limitations on what they can do; however, the BPM Suite functionality increases with each release, and in the majority of cases the tooling remains as applicable as its developer-oriented sister. At the current time business processes must be explicitly coded to support just one of these use cases: either BPEL for developer use or BPM for business analyst use. That said, they both run on the same SOA Server in much the same way. The components bundled in each SOA Composite Application can be verified by inspection through Enterprise Manager.
Figure 2 – A BPM Process in JDeveloper BPM Suite.
BPM processes are written in a standard notation (BPMN) and the modeling tools are very similar to those for BPEL. The steps to deploy a custom BPM process are also essentially the same, since the BPM process is bundled into a SOA Composite just like a BPEL process. As such, the SOA Composite Editor actually has support for both artifacts and even allows them to be used together, such as calling a BPM process as a partner link from a BPEL process. For more details see the references below.
Business Analyst Tooling
In addition to using JDeveloper extensions for BPM development, there are run-time tools that Business Analysts can use to make adjustments, so that the system can be tuned to match changes to the business operation without the high costs of an IT project. The first tool to consider is the BPM Composer, deployed with the middleware SOA Server and accessible online; for Fusion Applications it is under the Business Process icon on the homepage of the Application Composer.
Figure 3 – Business Process Composer showing a CRM process flow.
The key difference between this and using JDeveloper is that the BPM Composer has a Business Catalog prepopulated with features and functions that can be used, mostly through registered WebServices. This means no coding or complex interface development is required, simply drag-drop-configure. The items in the Business Catalog are seeded either by Oracle (as a BPM Template) or added to by your own custom development. You cannot create or generate catalog content from BPM Composer directly. As per the screenshot, you can see the Business Catalog content in the BPM Project browser region. In addition, other online tools for use by Business Analysts include the BPM Worklist application for editing business rules and approval management configuration, plus the SOA Composer, which focuses on non-approval business rules and domain value maps. At the current time there are only a handful of BPM processes shipped with Fusion Applications HCM and CRM, including on-boarding workers and processing customer registrations. This also means a limited number of associated BPM Templates provided out-of-the-box, and therefore a limited Business Catalog. That said, BPM-based extension is a powerful capability to leverage and will most likely develop going forwards, especially for use in SaaS deployments where full design-time JDeveloper access is not available.
Further Reading For BPEL – Fusion Applications Extensibility Guide – Section 12 For BPM – Fusion Applications Extensibility Guide – Section 7 The product-specific documentation and implementation guides for Fusion Applications Fusion Middleware Developers Guide for SOA Suite Modeling and Implementation Guide for Oracle Business Process Management User’s Guide for Oracle Business Process Composer Oracle University courses on BPM Suite and SOA Development

    Read the article

  • Oracle B2B - Synchronous Request Reply

    - by cdwright
    Introduction
So first off, let me say I didn't create this demo (although I did modify it some). I got it from a member of the B2B development technical staff. Since it came with only a simple readme file, I thought I would take some time and write a more detailed explanation of how it works. Beginning with Oracle SOA Suite PS5 (11.1.1.6), B2B supports synchronous request/reply over HTTP using the b2b/syncreceiver servlet. I'm attaching the demo to this blog; it includes a SOA composite archive that needs to be deployed using JDeveloper, a B2B repository with two agreements that need to be deployed using the B2B console, and a test xml file that gets sent to the b2b/syncreceiver servlet using your favorite SOAP test tool (I'm using Firefox Poster here). You can download the zip file containing the demo here. The demo works by sending the sample xml request file (req.xml) to http://<b2bhost>:8001/b2b/syncreceiver using the SOAP test tool. The syncreceiver servlet keeps the socket connection open between itself and the test tool so that it can synchronously send the reply message back. When B2B receives the inbound request message, it is passed to the SOA composite through the default B2B Fabric binding. A simple reply is created in BPEL and returned to B2B, which then sends the message back to the test tool using that same socket connection. I'll show you the B2B configuration first, then we'll look at the SOA composite.
Configuring B2B
No additional configuration is necessary in order to use the syncreceiver servlet; it is already running when you start SOA. After importing the GC_SyncReqRep.zip repository file into B2B, you'll have the typical GlobalChips host trading partner and the Acme remote trading partner.
Document Management
The repository contains two very simple custom XML document definitions called Orders and OrdersResponse. In order to determine the trading partner agreement needed to process the inbound Orders document, you need to know two things about it: what it is and where it came from. So let's look at how B2B identifies the appropriate document definition for the message. The XSDs for these two document definitions are not themselves particularly interesting. Whenever you're dealing with custom XML documents, B2B identifies the appropriate document definition for each XML message using an XPath Identification Expression. The expression is entered for each of these document definitions under the document administration tab in the B2B console. The full XPath expression for the Orders document is //*[local-name()='shiporder']/*[local-name()='shipto']/*[local-name()='name']/text(). You can see this path in the XSD diagram below and how it uniquely identifies this message. The OrdersResponse document is identified in the same way. The XPath expression for it is //*[local-name()='Response']/*[local-name()='Status']/text(). You can see how its path differs, uniquely identifying the reply from the request. (A small stand-alone check of these expressions appears at the end of this post.)
Trading Partner Profile
The trading partner profiles are very simple too. For GlobalChips, a generic identifier is being used to identify the sender of the response document using the host trading partner name. For Acme, a generic identifier is also being used to identify the sender of the inbound request using the remote trading partner name. The document types are added for the remote trading partner as usual. So the remote trading partner Acme is the sender of the Orders document, and it is the receiver of the OrdersResponse document.
For the remote trading partner only, there needs to be a dummy channel which gets used in the outbound response agreement. The channel is not actually used. It is just a necessary place holder that needs to be there when creating the agreement. Trading Partner Agreement The agreements are equally simple. There is no validation and translation is not an option for a custom XML document type. For the InboundAgreement (request) the document definition is set to OrdersDef. In the Agreement Parameters section the generic identifiers have been added for the host and remote trading partners. That’s all that is needed for the inbound transaction. For the OutboundAgreement (response), the document definition is set to OrdersResponseDef and the generic identifiers for the two trading partners are added. The remote trading partner dummy delivery channel is also added to the agreement. SOA Composite Import the SOA composite archive into JDeveloper as an EJB JAR file. Open the composite and you should have a project that looks like this. In the composite, open the b2bInboundSyncSvc exposed service and advance through the setup wizard. Select your Application Server Connection and advance to the Operations window. Notice here that the B2B binding is set to Receive. It is not set for Synchronous Request Reply. Continue advancing through the wizard as you normally would and select finish at the end. Now open BPELProcess1 in the composite. The BPEL process is set as a Synchronous Request Reply as you can see below. The while loop is there just to give the process something to do. The actual reply message is prepared in the assignResponseValues assignment followed by an Invoke of the B2B binding. Open the replyResponse Invoke and go to the properties tab. You’ll see that the fromTradingPartnerId, toTradingPartner, documentTypeName, and documentProtocolRevision properties have been set. Testing the Configuration To test the configuration, I used Firefox Poster. Enter the URL for the b2b/syncreceiver servlet and browse for the req.xml file that contains the test request message. In the Headers tab, add the property ‘from’ and give it the value ‘Acme’. This is how B2B will know where the message is coming from and it will use that information along with the document type name to find the right trading partner agreement. Now post the message. You should get back a response with a status of ‘200 OK’. That’s all there is to it.
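As a quick, hedged way to sanity-check the XPath identification expressions described above outside of B2B, the C# sketch below evaluates the Orders expression against a tiny hand-made shiporder payload. Only the XPath itself comes from the post; the sample XML, its namespace and the class name are my own inventions.

using System;
using System.Xml;

class XPathIdentificationCheck
{
    static void Main()
    {
        // Minimal stand-in for an Orders payload; the element names follow the
        // shiporder/shipto/name path that the identification expression targets.
        const string ordersXml =
            "<shiporder xmlns=\"urn:example:orders\">" +
            "<shipto><name>GlobalChips</name></shipto>" +
            "</shiporder>";

        var doc = new XmlDocument();
        doc.LoadXml(ordersXml);

        // The exact expression quoted above for the Orders document definition.
        const string ordersXPath =
            "//*[local-name()='shiporder']/*[local-name()='shipto']" +
            "/*[local-name()='name']/text()";

        XmlNode match = doc.SelectSingleNode(ordersXPath);
        Console.WriteLine(match != null
            ? "Identified as Orders, name = " + match.Value
            : "No match - this payload would not be identified as an Orders document");
    }
}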

    Read the article

  • Clouds Everywhere But not a Drop of Rain – Part 3

    - by sxkumar
    I was sharing with you how a broad-based transformation such as cloud will increase the agility and efficiency of an organization if process re-engineering is part of the plan. I have also stressed the key enterprise requirements such as "broad and deep solutions", "running your mission critical applications" and "automated and integrated set of capabilities". Let me walk you through some key cloud attributes such as "elasticity" and "self-service" and what they mean for an enterprise-class cloud. I will also talk about how we at Oracle have taken a very enterprise-centric view to developing cloud solutions and how our products have been specifically engineered to address enterprise cloud needs.
Cloud Elasticity and Enterprise Applications Requirements
Easy and quick scalability for a short period of time is the signature of cloud-based solutions. It is this elasticity that allows you to dynamically redistribute your resources according to business priorities, helps increase your overall resource utilization, and reduces operational costs by allowing you to get the most out of your existing investment. Most public clouds offer an instant provisioning mechanism for compute power (CPU, RAM, disk); customers pay for the instance-hours (and bandwidth) they use, adding computing resources at peak times and removing them when they are no longer needed. This type of "just-in-time" serving of compute resources is well understood for mid-tier "stateless" servers such as web application servers and web servers that just need another machine to start and run on, but what does it really mean for an enterprise application and its underlying data? Most enterprise applications are not quite as "stateless", and justifiably so. As such, how do you take advantage of cloud elasticity and make it relevant for your enterprise apps? This is where Cloud meets Grid Computing. At Oracle, we have invested an enormous amount of time, energy and resources in creating enterprise grid solutions. All our technology products offer built-in elasticity via clustering and dynamic scaling. With products like Real Application Clusters (RAC), Automatic Storage Management, WebLogic Clustering, and Coherence In-Memory Grid, we allow all your enterprise applications to benefit from cloud elasticity – both vertically and horizontally – without requiring any application changes. A number of technology vendors take the rather simplistic route of starting up additional VMs, or removing unneeded ones, as the "Cloud Scale-Out" solution. While this may work for stateless mid-tier servers, where load balancers can handle the addition and removal of instances transparently, following a similar approach for the database tier – often called "database sharding" – requires significant application modification and typically does not work with off-the-shelf packaged applications. Technologies like Oracle Database Real Application Clusters and Automatic Storage Management, on the other hand, bring the benefits of incremental scalability and on-demand elasticity to ANY application by providing a simplified abstraction layer where the application does not need to deal with data spread over multiple database instances. Rather, it just talks to a single database and the database software takes care of aggregating resources across multiple hardware components. It is technologies like these that truly make a cloud solution relevant for enterprises.
For customers who are looking for a next-generation hardware consolidation platform, our engineered systems (e.g. Exadata, Exalogic) not only provide an incredible amount of performance and capacity, they also reduce data center complexity and simplify operations.
Assemble, Deploy and Manage Enterprise Applications for Cloud
Products like Oracle Virtual Assembly Builder (OVAB) resolve the complex problem of bringing cloud speed to complex multi-tier applications. With assemblies, you can not only provision all components of a multi-tier application and wire them together at the push of a button; other aspects of the application lifecycle, such as real-time application testing, scale-up/scale-down, and performance and availability monitoring, are also automated using Oracle Enterprise Manager. An essential criterion for an enterprise cloud to succeed is the ability to ensure business service levels, especially when business users have full visibility of the usage cost with a "showback" or a "chargeback". With Oracle Enterprise Manager 12c, we have created the most comprehensive cloud management solution in the industry, capable of managing business service levels "applications-to-disk" in an enterprise private cloud – all from a single console. It is the only cloud management platform in the industry that allows you to deliver infrastructure, platform and application cloud services out of the box. Moreover, it offers integrated and complete lifecycle management of the cloud – including planning and set-up, service delivery, operations management, metering and chargeback, etc. Sounds unbelievable? Well, just watch this space for more details on how Oracle Enterprise Manager 12c is the nerve center of Oracle Cloud! Our cloud solution portfolio is also the broadest and deepest in the industry – covering public, private and hybrid clouds across infrastructure, platform and applications. It is no coincidence, therefore, that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry. And in large part, this has been made possible thanks to our years of investment in creating cloud-enabling technologies.
Summary
But the intent of this blog post isn't to dwell on how great our solutions are (these are just some examples to illustrate how we at Oracle have approached this problem space). Rather, it is to help you ask the right questions before you embark on your cloud journey. So to summarize, here are the key takeaways.
It is critical that you are clear on why you are building the cloud. Successful organizations keep business benefits as the first and foremost cloud objective. On the other hand, those who approach this purely as a technology project are more likely to fail.
Think about where you want to be in 3-5 years before you get started. Your long-term objectives should determine what your first step ought to be. As obvious as it may seem, more people than not make the first move without knowing where they are headed.
Don't make the mistake of equating cloud to virtualization and Infrastructure-as-a-Service (IaaS). Spinning up a VM on demand will give some short-term relief to your IT staff but is unlikely to solve your larger business problems. As such, even if IaaS is your first step towards a more comprehensive cloud, plan the roadmap around those higher-level services before you begin. And ask your vendors how they are going to be your partners in this journey.
Capabilities like self-service access and chargeback/showback are absolutely critical if you really expect your cloud to be transformational. Your business won't see the full benefits of the cloud until it empowers them with same kind of control and transparency that they are used to while using a public cloud service.  Evaluate the benefits of integration, as opposed to blindly following the best-of-breed strategy. Integration is a huge challenge and more so in a cloud environment. There are enormous costs associated with stitching a solution out of disparate components and even more in maintaining it. Hope you found these ideas helpful. Looking forward to hearing your thoughts and experiences.

    Read the article

  • Box2D blocky map. Body, Fixtures a huge map and performance

    - by Solom
    Right now I'm still in the planning phase of my very first game. I'm creating a "Minecraft"-like game in 2D that features blocks that can be destroyed as well as players moving around the map. For creating the map I chose a 2D array of integers that represent the block ID. For testing purposes I created a huge map (16348 * 256), and in my prototype that didn't use Box2D everything worked like a charm. I only rendered those blocks that were within the bounds of my camera and got a straight 60 fps. The problem started when I decided to use an existing physics solution rather than implementing my own. What I had was basically simple hitboxes around the blocks, and then I had to manually check if the player collided with any of those in his neighborhood. For more advanced physics as well as the collision detection I want to switch over to Box2D. The problem I have right now is ... how to go about the bodies? I mean, the blocks are of a static body type. They don't move on their own; they are just there to be collided with. But as far as I can see it, every block needs its own body with a rectangular fixture attached to it, so as to be destroyable. But for a huge map such as mine, this turns out to be a real performance bottleneck. (In fact even a rather small map [compared to the other] of 1024*256 is unplayable.) I mean, I create thousands upon thousands of blocks. Even if I just render those that are in my immediate neighborhood, there are hundreds of them and (at least with the debugRenderer) I drop to 1 fps really quickly (on my own "monster machine"). I thought about strategies like creating just one body, attaching multiple fixtures and, only if a fixture got hit, separating it from the body, creating a new one and destroying it, but this didn't turn out quite as successful as hoped. (In fact the core just dumps. Ah, hello C!
I really missed you :X) Here is the code: public class Box2DGameScreen implements Screen { private World world; private Box2DDebugRenderer debugRenderer; private OrthographicCamera camera; private final float TIMESTEP = 1 / 60f; // 1/60 of a second -> 1 frame per second private final int VELOCITYITERATIONS = 8; private final int POSITIONITERATIONS = 3; private Map map; private BodyDef blockBodyDef; private FixtureDef blockFixtureDef; private BodyDef groundDef; private Body ground; private PolygonShape rectangleShape; @Override public void show() { world = new World(new Vector2(0, -9.81f), true); debugRenderer = new Box2DDebugRenderer(); camera = new OrthographicCamera(); // Pixel:Meter = 16:1 // Body definition BodyDef ballDef = new BodyDef(); ballDef.type = BodyDef.BodyType.DynamicBody; ballDef.position.set(0, 1); // Fixture definition FixtureDef ballFixtureDef = new FixtureDef(); ballFixtureDef.shape = new CircleShape(); ballFixtureDef.shape.setRadius(.5f); // 0,5 meter ballFixtureDef.restitution = 0.75f; // between 0 (not jumping up at all) and 1 (jumping up the same amount as it fell down) ballFixtureDef.density = 2.5f; // kg / m² ballFixtureDef.friction = 0.25f; // between 0 (sliding like ice) and 1 (not sliding) // world.createBody(ballDef).createFixture(ballFixtureDef); groundDef = new BodyDef(); groundDef.type = BodyDef.BodyType.StaticBody; groundDef.position.set(0, 0); ground = world.createBody(groundDef); this.map = new Map(20, 20); rectangleShape = new PolygonShape(); // rectangleShape.setAsBox(1, 1); blockFixtureDef = new FixtureDef(); // blockFixtureDef.shape = rectangleShape; blockFixtureDef.restitution = 0.1f; blockFixtureDef.density = 10f; blockFixtureDef.friction = 0.9f; } @Override public void render(float delta) { Gdx.gl.glClearColor(1, 1, 1, 1); Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT); debugRenderer.render(world, camera.combined); drawMap(); world.step(TIMESTEP, VELOCITYITERATIONS, POSITIONITERATIONS); } private void drawMap() { for(int a = 0; a < map.getHeight(); a++) { /* if(camera.position.y - (camera.viewportHeight/2) > a) continue; if(camera.position.y - (camera.viewportHeight/2) < a) break; */ for(int b = 0; b < map.getWidth(); b++) { /* if(camera.position.x - (camera.viewportWidth/2) > b) continue; if(camera.position.x - (camera.viewportWidth/2) < b) break; */ /* blockBodyDef = new BodyDef(); blockBodyDef.type = BodyDef.BodyType.StaticBody; blockBodyDef.position.set(b, a); world.createBody(blockBodyDef).createFixture(blockFixtureDef); */ PolygonShape rectangleShape = new PolygonShape(); rectangleShape.setAsBox(1, 1, new Vector2(b, a), 0); blockFixtureDef.shape = rectangleShape; ground.createFixture(blockFixtureDef); rectangleShape.dispose(); } } } @Override public void resize(int width, int height) { camera.viewportWidth = width / 16; camera.viewportHeight = height / 16; camera.update(); } @Override public void hide() { dispose(); } @Override public void pause() { } @Override public void resume() { } @Override public void dispose() { world.dispose(); debugRenderer.dispose(); } } As you can see I'm facing multiple problems here. I'm not quite sure how to check for the bounds but also if the map is bigger than 24*24 like 1024*256 Java just crashes -.-. And with 24*24 I get like 9 fps. So I'm doing something really terrible here, it seems and I assume that there most be a (much more performant) way, even with Box2D's awesome physics. Any other ideas? Thanks in advance!

    Read the article

  • SSIS - Range lookups

    - by Repieter
      When developing an ETL solution in SSIS we sometimes need to do range lookups in SSIS. Several solutions for this can be found on the internet, but now we have built another solution which I would like to share, since it's pretty easy to implement and the performance is fast.   You can download the sample package to see how it works. Make sure you have the AdventureWorks2008R2 and AdventureWorksDW2008R2 databases installed. (Apologies for the layout of this blog, I don't do this too often :))   To give a little bit more information about the example, this is basically what is does: we load a facttable and do an SCD type 2 lookup operation of the Product dimension. This is done with a script component.   First we query the Data warehouse to create the lookup dataset. The query that is used for that is:   SELECT     [ProductKey]     ,[ProductAlternateKey]     ,[StartDate]     ,ISNULL([EndDate], '9999-01-01') AS EndDate FROM [DimProduct]     The output of this query is stored in a DataTable:     string lookupQuery = @"                         SELECT                             [ProductKey]                             ,[ProductAlternateKey]                             ,[StartDate]                             ,ISNULL([EndDate], '9999-01-01') AS EndDate                         FROM [DimProduct]";           OleDbCommand oleDbCommand = new OleDbCommand(lookupQuery, _oleDbConnection);         OleDbDataAdapter adapter = new OleDbDataAdapter(oleDbCommand);           _dataTable = new DataTable();         adapter.Fill(_dataTable);     Now that the dimension data is stored in the DataTable we use the following method to do the actual lookup:   public int RangeLookup(string businessKey, DateTime lookupDate)     {         // set default return value (Unknown)         int result = -1;           DataRow[] filteredRows;         filteredRows = _dataTable.Select(string.Format("ProductAlternateKey = '{0}'", businessKey));           for (int i = 0; i < filteredRows.Length; i++)         {             // check if the lookupdate is found between the startdate and enddate of any of the records             if (lookupDate >= (DateTime)filteredRows[i][2] && lookupDate < (DateTime)filteredRows[i][3])             {                 result = (filteredRows[i][0] == null) ? -1 : (int)filteredRows[i][0];                 break;             }         }           filteredRows = null;           return result;     }       This method is executed for every row that passes the script component. This is implemented in the ProcessInputRow method   public override void Input0_ProcessInputRow(Input0Buffer Row)     {         // Perform the lookup operation on the current row and put the value in the Surrogate Key Attribute         Row.ProductKey = RangeLookup(Row.ProductNumber, Row.OrderDate);     }   Now what actually happens?!   1. Every record passes the business key and the orderdate to the RangeLookup method. 2. The DataTable is then filtered on the business key of the current record. The output is stored in a DataRow [] object. 3. We loop over the DataRow[] object to see where the orderdate meets the following expression: (lookupDate >= (DateTime)filteredRows[i][2] && lookupDate < (DateTime)filteredRows[i][3]) 4. When the expression returns true (so where the data is between the Startdate and the EndDate), the surrogate key of the dimension record is returned   We have done some testing with this solution and it works great for us. Hope others can use this example to do their range lookups.
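One hedged variation on the lookup above: the date comparison can also be pushed into the DataTable.Select filter string itself, so the explicit loop disappears. This is only a sketch that assumes the same script component class, fields and column names as the post, and it relies on the DataColumn expression syntax for date literals (values wrapped in '#'); I have not measured whether it is actually faster than the loop.

    public int RangeLookup(string businessKey, DateTime lookupDate)
    {
        // Default return value (Unknown member).
        int result = -1;

        // DataColumn expression syntax accepts date literals wrapped in '#',
        // so the start/end range test can live in the filter string itself.
        // Assumes day-level granularity for the SCD2 start/end dates.
        string filter = string.Format(
            "ProductAlternateKey = '{0}' AND StartDate <= #{1}# AND EndDate > #{1}#",
            businessKey,
            lookupDate.ToString("MM/dd/yyyy"));

        DataRow[] filteredRows = _dataTable.Select(filter);

        if (filteredRows.Length > 0 && filteredRows[0][0] != DBNull.Value)
        {
            result = (int)filteredRows[0][0];
        }

        return result;
    }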

    Read the article

  • Chart Filtering

    - by Tim Dexter
    Interesting question from a colleague this week: can you add a filter to a chart to just show a specific set of data? In an RTF template, you need to do a little finagling in the chart definition. In an online template, a couple of clicks and you're done.
RTF
Build your chart as you would normally, to include all the data to start with. Now flip to the Advanced tab to see the code behind the chart. It's not very pretty, but with a little effort you can get it looking a little more friendly. Here's my chart showing employees and their salaries.
<Graph depthAngle="50" depthRadius="8" seriesEffect="SE_AUTO_GRADIENT"> <LegendArea visible="true"/> <Title text="Executive Department Only" visible="true" horizontalAlignment="CENTER"/> <LocalGridData colCount="{count(.//G_2)}" rowCount="1"> <RowLabels> <Label>SALARY</Label> </RowLabels> <ColLabels> <xsl:for-each select=".//G_2"> <Label><xsl:value-of select="EMP_NAME"/></Label> </xsl:for-each> </ColLabels> <DataValues> <RowData> <xsl:for-each select=".//G_2"> <Cell><xsl:value-of select="SALARY"/></Cell> </xsl:for-each> </RowData> </DataValues> </LocalGridData> </Graph>
Note the emboldened text. It's currently grabbing all values at the G_2 level of the data. We can use an XPath expression to filter the data to the set we want to see. In my case I want to see only the employees that are in the Executive department. My data is structured thus:
<DATA_DS> <G_1> <DEPARTMENT_NAME>Accounting</DEPARTMENT_NAME> <G_2> <MANAGER>Higgins</MANAGER> <EMPLOYEE_ID>206</EMPLOYEE_ID> <HIRE_DATE>2002-06-07T00:00:00.000-04:00</HIRE_DATE> <SALARY>8300</SALARY> <JOB_TITLE>Public Accountant</JOB_TITLE> <PARAS>11000</PARAS> <EMP_NAME>William Gietz</EMP_NAME> </G_2>
So the XPath expression I'm going to use to limit the data to the Executive department would be .//G_2[../DEPARTMENT_NAME='Executive']. Note the ../ moves the parser up the XML tree to be able to test the DEPARTMENT_NAME value. I added this XPath expression to the three instances that need it: colCount, ColLabels and RowData. It's simple enough to do. Testing your XPath expression is easier to do using a table of data. Please note that as soon as you make changes to the chart code and go back to the Builder tab, you'll find that everything is grayed out. I recommend you make all the changes you can via the chart dialog before updating the code.
Online Template
Implementing the filter is much simpler: there is a dialog box to help you out. Add your chart and fill out the various data points you want to show, then hit the Filter item in the ribbon above the chart. That will pop the filter dialog box where you can then add a filter to the chart. You can add multiple filters if needed and of course you can use the Manage Filters button to re-open and edit the filters. Pretty straightforward stuff!
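Tim suggests testing the XPath against a table of data first; a hedged alternative is to test the predicate completely outside the template. The sketch below runs the same filter against a trimmed copy of the sample data above. The second, Executive G_1 group is invented purely so the filter has something to keep, and the class name is my own.

using System;
using System.Xml.Linq;
using System.Xml.XPath;

class ChartFilterCheck
{
    static void Main()
    {
        // Cut-down version of the DATA_DS structure shown above, with one
        // Accounting row from the post and one made-up Executive row for contrast.
        var data = XElement.Parse(@"
            <DATA_DS>
              <G_1>
                <DEPARTMENT_NAME>Accounting</DEPARTMENT_NAME>
                <G_2><EMP_NAME>William Gietz</EMP_NAME><SALARY>8300</SALARY></G_2>
              </G_1>
              <G_1>
                <DEPARTMENT_NAME>Executive</DEPARTMENT_NAME>
                <G_2><EMP_NAME>Steven King</EMP_NAME><SALARY>24000</SALARY></G_2>
              </G_1>
            </DATA_DS>");

        // The filter expression from the post: only G_2 rows whose parent
        // group has DEPARTMENT_NAME = 'Executive'.
        foreach (var emp in data.XPathSelectElements(".//G_2[../DEPARTMENT_NAME='Executive']"))
        {
            Console.WriteLine("{0} - {1}",
                (string)emp.Element("EMP_NAME"),
                (string)emp.Element("SALARY"));
        }
        // Expected output: only "Steven King - 24000".
    }
}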

    Read the article

  • Nokia Windows Phone 8 App Collection

    - by Tim Murphy
    I recently upgraded to a Nokia Lumia 920.  Along with it came the availability of a number of Nokia developed apps or apps that Nokia has made available from other developers.  Below is a summary of some of the ones that I have used to this point.  There are quite a few of them so I won’t be covering everything that is available. Nokia Maps I am quite pleased with the accuracy of Nokia Maps and not having to tap the screen for each turn any more.  The information on the screen is quite good as well.  The couple of improvements I would like to see are for the voice directions to include which street or exit you need to use and improve the search accuracy.  Bing maps had much better search results in my opinion. Nokia Drive This one really had me confused when I first setup the phone.  I was driving down the road and suddenly I am getting notification tones, but there were no visual notifications on the phone.  It seems that in their infinite wisdom Nokia thinks I don’t know when I am going over the speed limit and need to be told. ESPN I really liked my ESPN app on Windows Phone 7.5, but I am not getting the type of experience I was looking for out of this app.  While it allows me to pick my favorite teams, but there isn’t a pivot page or panorama page that shows a summary of my favorite teams.  I have also found that the live tile don’t update very often.  Over all I am rather disappointed compared app produced by ESPN. Smart Shoot I really need to get the kids to let me use this on.  I like the concept, but I need to spend more time with it.  The idea how running the camera through a continuous shooting mode and then picking the best is something that I have done with my DSLR and am glad to see it available here. Cinemagraph Here is a fun filter.  It doesn’t have the most accurate editing features, but it is fun to stop certain parts of a scene and let other parts move.  As a test I stopped the traffic on the highway and let the traffic on the frontage road flow.  It makes for a fun effect.  If nothing else it could be great for sending prank animations to your friends. YouSendIt I have only briefly touched this application.  What I don’t understand is why it is needed.  Most of the functionality seems to be similar to SkyDrive and it gives you less storage.  They only feature that seems to differentiate the app is the signature capability. Creative Studio This app has some nice quick edits, but it is not very comprehensive.  I am also not to thrilled with the user experience.  It puts you though an initial color cast series that I’m not sure why it is there.  Discovery of the remaining adjustments isn’t that great.  In the end I found myself wanting Thumbia back. Panorama This is one of the apps that I like.  I found it easy to use as it guides you with a target circle that you center for it to take the next pictures.  It also stitches the images with amazing speed.  The one thing I wish it had was the capability to turn the phone into portrait orientation and do a taller panorama.  Perhaps we will see this in the future. Nokia Music After getting over the missing album art I found that there were a number of missing features with this app as well.  I have a Zune HD and I am used to being able to go through my collection and adding songs, albums or artists to my now playing.  There also doesn’t seem to be a way to manage playlists that I have seen yet.  
Other than that the UI is familiar and it give Nokia City Lens Augmented reality is a cool concept, but I still haven’t seen it implemented in a compelling fashion beyond a demo at TED a couple of years ago.  The app still leaves me wanting as well.  It does give an interesting toy.  It gives you the ability to look for general categories and see general direction and clusters of locations.  I think as this concept is better thought out it will become more compelling. Nokia Trailers I don’t know how often I will use this app, but I do like being able to see what movies are being promoted.  I can’t wait for The Hobbit to come out and the trailer was just what the doctor ordered.  I can see coming back to this app from time to time. PhotoBeamer PhotoBeamer is a strange beast that needs a better instruction manual.  It seems a lot like magic but very confusing.  I need some more testing, but I don’t think this is something that most people are going to understand quickly and may give up before getting it to work.  I may put an update here after playing with it further. Ringtone Maker The app was just published and it didn’t work very well for me. It couldn’t find 95% of the songs that Nokia Music was playing for me and crashed several times.  It also had songs named wrong that when I checked them in Nokia Music they were fine.  This app looks like it has a long way to go. Summary In all I think that Nokia is offering a well rounded set of initial applications that can get any new owner started.  There is definitely room for improvement in all of these apps.  The main need is usability upgrades.  I would guess that with feedback from users they will come up to acceptable levels.  Try them out and see if you agree. del.icio.us Tags: Windows Phone,Nokia,Lumia,Nokia Apps,ESPN,PhotoBeamer,City Lens,YouSendIt,Drive,Maps

    Read the article

  • FOUR questions to ask if you are implementing DATABASE-AS-A-SERVICE

    - by Sudip Datta
    During my ongoing tenure at Oracle, I have met all types of DBAs. Happy DBAs, unhappy DBAs, proud DBAs, risk-loving DBAs, cautious DBAs. These days, as Database-as-a-Service (DBaaS) becomes more mainstream, I find some complacent DBAs who are basking in their achievement of having implemented DBaaS. Some others, however, are not that happy. They grudgingly complain that they did not have much of a say in the implementation, they simply had to follow what their cloud architects (mostly infrastructure admins) offered them. In most cases it would be a database wrapped inside a VM that would be labeled as “Database as a Service”. In other cases, it would be existing brute-force automation simply exposed in a portal. As much as I think that there is more to DBaaS than those approaches and often get tempted to propose Enterprise Manager 12c, I try to be objective. Neither do I want to dampen the spirit of the happy ones, nor do I want to stoke the pain of the unhappy ones. As I mentioned in my previous post, I don’t deny vanilla automation could be useful. I like virtualization too for what it has helped us accomplish in terms of resource management, but we need to scrutinize its merit on a case-by-case basis and apply it meaningfully. For DBAs who either claim to have implemented DBaaS or are planning to do so, I simply want to provide four key questions to ponder about: 1. Does it make life easier for your end users? Database-as-a-Service can have several types of end users. Junior DBAs, QA Engineers, Developers- each having their own skillset. The objective of DBaaS is to make their life simple, so that they can focus on their core responsibilities without having to worry about additional stuff. For example, if you are a Developer using Oracle Application Express (APEX), you want to deal with schema, objects and PL/SQL code and not with datafiles or listener configuration. If you are a QA Engineer needing database copies for functional testing, you do not want to deal with underlying operating system patching and compliance issues. The question to ask, therefore, is, whether DBaaS makes life easier for those users. It is often convenient to give them VM shells to deal with a la Amazon EC2 IaaS, but is that what they really want? Is it a productive use of a developer's time if he needs to apply RPM errata to his Linux operating system. Asking him to keep the underlying operating system current is like making a guest responsible for a restaurant's decor. 2. Does it make life easier for your administrators? Cloud, in general, is supposed to free administrators from attending to mundane tasks like provisioning services for every single end user request. It is supposed to enable a readily consumable platform and enforce standardization in the process. For example, if a Service Catalog exposes DBaaS of specific database versions and configurations, it, by its very nature, enforces certain discipline and standardization within the IT environment. What if, instead of specific database configurations, cloud allowed each end user to create databases of their liking resulting in hundreds of version and patch levels and thousands of individual databases. Therefore the right question to ask is whether the unwanted consequence of DBaaS is OS and database sprawl. And if so, who is responsible for tracking them, backing them up, administering them? Studies have shown that these administrative overheads increase exponentially with new targets, and it could result in a management nightmare. 
That leads us to our next question. 3. Does it satisfy your Security Officers and Compliance Auditors? Compliance Auditors need to know who did what and when. They also want the cloud platform to be secure, so that end users have little freedom in tampering with it. Dealing with VM sprawl is not the easiest of challenges, let alone dealing with them as they keep getting reconfigured and moved around. This leads to the proverbial needle in the haystack problem, and all it needs is one needle to cause a serious compliance issue in the enterprise. Bottomline is, flexibility and agility should not come at the expense of compliance and it is very important to get the balance right. Can we have security and isolation without creating compliance challenges? Instead of a ‘one size fits all approach’ i.e. OS level isolation, can we think smartly about database isolation or schema based isolation? This is where the appropriate resource modeling needs to be applied. The usual systems management vendors out there with heterogeneous common-denominator approach have compromised on these semantics. If you follow Enterprise Manager’s DBaaS solution, you will see that we have considered different models, not precluding virtualization, for different customer use cases. The judgment to use virtual assemblies versus databases on physical RAC versus Schema-as-a-Service in a single database, should be governed by the need of the applications and not by putting compliance considerations in the backburner. 4. Does it satisfy your CIO? Finally, does it satisfy your higher ups? As the sponsor of cloud initiative, the CIO is expected to lead an IT transformation project, not merely a run-of-the-mill IT operations. Simply virtualizing server resources and delivering them through self-service is a good start, but hardly transformational. CIOs may appreciate the instant benefit from server consolidation, but studies have revealed that the ROI from consolidation would flatten out at 20-25%. The question would be: what next? As we go higher up in the stack, the need to virtualize, segregate and optimize shifts to those layers that are more palpable to the business users. As Sushil Kumar noted in his blog post, " the most important thing to note here is the enterprise private cloud is not just an IT project, rather it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment." Business users could not care less about infrastructure consolidation or virtualization - they care about business agility and service level assurance. Last but not the least, lot of CIOs get miffed if we ask them to throw away their existing hardware investments for implementing DBaaS. In Oracle, we always emphasize on freedom of choosing a platform; hence Enterprise Manager’s DBaaS solution is platform neutral. It can work on any Operating System (that the agent is certified on) Oracle’s hardware as well as 3rd party hardware. As a parting note, I urge you to remember these 4 questions. Remember that your satisfaction as an implementer lies in the satisfaction of others.

    Read the article

  • ATI propriatery drivers install latest 12.8, broke my kernel. Stuck on kernel 3.2.0-26

    - by user66987
    I messed up a bit. Hoping some here can help me. I tried to install the newest catalyst 12.8. Sadly, this broke my system. I was stuck in low graphics mode. I finally managed to restore the proprietary drivers, and get into ubuntu again. But now I am stuck on kernel 3.2.0.26. I had installed kernel 3.2.0-30, but the system no longer sees it. I have kernel 3.2.0-29 too, but the system cannot see that as well. In the grub menu. When I use sudo update-grub, they are both listed. Here are the output I get: Searching for GRUB installation directory ... found: /boot/grub Cannot determine root device. Assuming /dev/hda1 This error is probably caused by an invalid /etc/fstab Searching for default file ... found: /boot/grub/default Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst Searching for splash image ... none found, skipping ... Found kernel: /boot/vmlinuz-3.2.0-30-generic Found kernel: /boot/vmlinuz-3.2.0-29-generic Found kernel: /boot/vmlinuz-3.2.0-27-generic Found kernel: /boot/vmlinuz-3.2.0-26-generic Found GRUB 2: /boot/grub/core.img Found kernel: /boot/memtest86+.bin Updating /boot/grub/menu.lst ... done I have searched everywhere to find a solution to my problem, but can't find any solutions. If you need any log outputs to figure out the problem, please let me know which ones. Update: here is the output for grub.cfg # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(hd2,msdos1)' search --no-floppy --fs-uuid --set=root 270c7c58-06d8-4e6b-b9bb-8d92f46adc0b if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=auto load_video insmod gfxterm insmod part_msdos insmod ext2 set root='(hd2,msdos1)' search --no-floppy --fs-uuid --set=root 270c7c58-06d8-4e6b-b9bb-8d92f46adc0b set locale_dir=($root)/boot/grub/locale set lang=nb_NO insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="${1}" if [ "${1}" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ "${recordfail}" != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "${linux_gfx_mode}" != "text" ]; then load_video; fi menuentry 'Ubuntu, med Linux 3.2.0-26-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode 
$linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd2,msdos1)' search --no-floppy --fs-uuid --set=root 270c7c58-06d8-4e6b-b9bb-8d92f46adc0b linux /boot/vmlinuz-3.2.0-26-generic root=UUID=270c7c58-06d8-4e6b-b9bb-8d92f46adc0b ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-26-generic } menuentry 'Ubuntu, med Linux 3.2.0-26-generic (gjenopprettelsesmodus)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd2,msdos1)' search --no-floppy --fs-uuid --set=root 270c7c58-06d8-4e6b-b9bb-8d92f46adc0b echo 'Laster Linux 3.2.0-26-generic ...' linux /boot/vmlinuz-3.2.0-26-generic root=UUID=270c7c58-06d8-4e6b-b9bb-8d92f46adc0b ro recovery nomodeset echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.2.0-26-generic } submenu "Previous Linux versions" { menuentry 'Ubuntu, med Linux 3.2.0-25-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd2,msdos1)' search --no-floppy --fs-uuid --set=root 270c7c58-06d8-4e6b-b9bb-8d92f46adc0b linux /boot/vmlinuz-3.2.0-25-generic root=UUID=270c7c58-06d8-4e6b-b9bb-8d92f46adc0b ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-25-generic } menuentry 'Ubuntu, med Linux 3.2.0-25-generic (gjenopprettelsesmodus)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd2,msdos1)' search --no-floppy --fs-uuid --set=root 270c7c58-06d8-4e6b-b9bb-8d92f46adc0b echo 'Laster Linux 3.2.0-25-generic ...' linux /boot/vmlinuz-3.2.0-25-generic root=UUID=270c7c58-06d8-4e6b-b9bb-8d92f46adc0b ro recovery nomodeset echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.2.0-25-generic } menuentry 'Ubuntu, med Linux 3.2.0-24-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd2,msdos1)' search --no-floppy --fs-uuid --set=root 270c7c58-06d8-4e6b-b9bb-8d92f46adc0b linux /boot/vmlinuz-3.2.0-24-generic root=UUID=270c7c58-06d8-4e6b-b9bb-8d92f46adc0b ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-24-generic } menuentry 'Ubuntu, med Linux 3.2.0-24-generic (gjenopprettelsesmodus)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd2,msdos1)' search --no-floppy --fs-uuid --set=root 270c7c58-06d8-4e6b-b9bb-8d92f46adc0b echo 'Laster Linux 3.2.0-24-generic ...' linux /boot/vmlinuz-3.2.0-24-generic root=UUID=270c7c58-06d8-4e6b-b9bb-8d92f46adc0b ro recovery nomodeset echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.2.0-24-generic } menuentry 'Ubuntu, med Linux 3.2.0-23-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd2,msdos1)' search --no-floppy --fs-uuid --set=root 270c7c58-06d8-4e6b-b9bb-8d92f46adc0b linux /boot/vmlinuz-3.2.0-23-generic root=UUID=270c7c58-06d8-4e6b-b9bb-8d92f46adc0b ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-23-generic } menuentry 'Ubuntu, med Linux 3.2.0-23-generic (gjenopprettelsesmodus)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd2,msdos1)' search --no-floppy --fs-uuid --set=root 270c7c58-06d8-4e6b-b9bb-8d92f46adc0b echo 'Laster Linux 3.2.0-23-generic ...' 
linux /boot/vmlinuz-3.2.0-23-generic root=UUID=270c7c58-06d8-4e6b-b9bb-8d92f46adc0b ro recovery nomodeset echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.2.0-23-generic } } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(hd2,msdos1)' search --no-floppy --fs-uuid --set=root 270c7c58-06d8-4e6b-b9bb-8d92f46adc0b linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(hd2,msdos1)' search --no-floppy --fs-uuid --set=root 270c7c58-06d8-4e6b-b9bb-8d92f46adc0b linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry "Windows 7 (loader) (on /dev/sdb1)" --class windows --class os { insmod part_msdos insmod ntfs set root='(hd1,msdos1)' search --no-floppy --fs-uuid --set=root 448AF3CE8AF3BA8E chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### How can I set kernel 3.2.0.30 as the default kernel? According to this file, kernel 3.2.0-30 does not exist.

    Read the article

  • Silverlight 4 Twitter Client &ndash; Part 3

    - by Max
    Finally Silverlight 4 RC is released, and Windows Phone 7 Series will rely heavily on the Silverlight platform for apps. It's really good news for Silverlight developers and designers. More information on this here. You can use SL 4 RC with VS 2010. SL 4 RC does not come with VS 2010; you need to download it separately and install it. So for the next part, be ready with VS 2010 and SL 4 RC; we will start using them. With this momentum, let us go to the next part of our twitter client tutorial. This tutorial will cover setting your status in Twitter and also retrieving your friends timeline.
1) As everything in Silverlight is asynchronous, we need to have some visual representation showing that something is going on in the background. So what I did was to create a progress bar with indeterminate animation. The XAML is here below. <ProgressBar Maximum="100" Width="300" Height="50" Margin="20" Visibility="Collapsed" IsIndeterminate="True" Name="progressBar1" VerticalAlignment="Center" HorizontalAlignment="Center" />
2) I will be toggling this progress bar to show the background work. So I thought of writing this small method, which I use to toggle the visibility of the progress bar. Just pass a bool to this method and it will toggle the bar based on its current visibility status. public void toggleProgressBar(bool Option){ if (Option) { if (progressBar1.Visibility == System.Windows.Visibility.Collapsed) progressBar1.Visibility = System.Windows.Visibility.Visible; } else { if (progressBar1.Visibility == System.Windows.Visibility.Visible) progressBar1.Visibility = System.Windows.Visibility.Collapsed; }}
3) Now let us create a grid to hold a textbox and an update button. The XAML will look something like below. <Grid HorizontalAlignment="Center"> <Grid.RowDefinitions> <RowDefinition Height="50"></RowDefinition> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="400"></ColumnDefinition> <ColumnDefinition Width="200"></ColumnDefinition> </Grid.ColumnDefinitions> <TextBox Name="TwitterStatus" Width="380" Height="50"></TextBox> <Button Name="UpdateStatus" Content="Update" Grid.Row="1" Grid.Column="2" Width="200" Height="50" Click="UpdateStatus_Click"></Button></Grid>
4) The click handler for this update button will again use the WebClient to post values. The code is: private void UpdateStatus_Click(object sender, RoutedEventArgs e){ toggleProgressBar(true); string statusupdate = "status=" + TwitterStatus.Text; WebRequest.RegisterPrefix("https://", System.Net.Browser.WebRequestCreator.ClientHttp);  WebClient myService = new WebClient(); myService.AllowReadStreamBuffering = true; myService.UseDefaultCredentials = false; myService.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword());  myService.UploadStringCompleted += new UploadStringCompletedEventHandler(myService_UploadStringCompleted); myService.UploadStringAsync(new Uri("https://twitter.com/statuses/update.xml"), statusupdate);  this.Dispatcher.BeginInvoke(() => ClearTextBoxValue());}
5) In the above code, we have an event handler which will be fired once the request is completed – !! Remember, SL is async !! So in myService_UploadStringCompleted, we will just toggle the progress bar and change some status text to say that it's done. The code for this is below; StatusMessage is just another textblock conveniently positioned in the page.
void myService_UploadStringCompleted(object sender, UploadStringCompletedEventArgs e){ if (e.Error != null) { StatusMessage.Text = "Status Update Failed: " + e.Error.Message.ToString(); } else { toggleProgressBar(false); TwitterCredentialsSubmit(); }} 6) Now let us look at fetching the friends updates of the logged in user and displaying it in a datagrid. So just define a data grid and set its autogenerate columns as true. 7) Let us first create a data structure for use with fetching the friends timeline. The code is something like below: namespace MaxTwitter.Classes{ public class Status { public Status() {} public string ID { get; set; } public string Text { get; set; } public string Source { get; set; } public string UserID { get; set; } public string UserName { get; set; } }} You can add as many fields as you want, for the list of fields, have a look at here. It will ask for your Twitter username and password, just provide them and this will display the xml file. Go through them pick and choose your desired fields and include in your Data Structure. 8) Now the web client request for this is similar to the one we saw in step 4. Just change the uri in the last but one step to https://twitter.com/statuses/friends_timeline.xml Be sure to change the event handler to something else and within that we will use XLINQ to fetch the required details for us. Now let us how this event handler fetches details. public void parseXML(string text){ XDocument xdoc; if(text.Length> 0) xdoc = XDocument.Parse(text); else xdoc = XDocument.Parse(@"I USED MY OWN LOCAL COPY OF XML FILE HERE FOR OFFLINE TESTING"); statusList = new List<Status>(); statusList = (from status in xdoc.Descendants("status") select new Status { ID = status.Element("id").Value, Text = status.Element("text").Value, Source = status.Element("source").Value, UserID = status.Element("user").Element("id").Value, UserName = status.Element("user").Element("screen_name").Value, }).ToList(); //MessageBox.Show(text); //this.Dispatcher.BeginInvoke(() => CallDatabindMethod(StatusCollection)); //MessageBox.Show(statusList.Count.ToString()); DataGridStatus.ItemsSource = statusList; StatusMessage.Text = "Datagrid refreshed."; toggleProgressBar(false);} in the event handler, we call this method with e.Result.ToString() Parsing XML files using LINQ is super cool, I love it.   I am stopping it here for  this post. Will post the completed files in next post, as I’ve worked on a few more features in this page and don’t want to confuse you. See you soon in my next post where will play with Twitter lists. Have a nice day! Technorati Tags: Silverlight,LINQ,XLINQ,Twitter API,Twitter,Network Credentials
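For quick testing outside Silverlight, the same two requests the post builds with WebClient can be sketched with curl against the basic-auth endpoints used above (username and password are placeholders, and these v1 endpoints were later retired by Twitter):

curl -u username:password -d "status=hello from curl" https://twitter.com/statuses/update.xml   # what UpdateStatus_Click posts
curl -u username:password https://twitter.com/statuses/friends_timeline.xml                      # the XML that parseXML() consumes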

    Read the article

  • ant ftp doesn't download files in subdirectories

    - by Kristof Neirynck
    Hello, I'm trying to download files in subdirectories from an ftp server with ant. The exact set of files is known. Some of them are in subdirectories. Ant only seems to download the ones in the root directory. It does work if I download all files without listing them. The first ftp action should do the exact same thing as the second. Instead it complains about "Hidden files" and seems to prefix the paths with "\\". Does anyone know what's wrong here? Is this a bug in commons-net? build.xml <?xml version="1.0" encoding="utf-8"?> <project name="example" default="example" basedir="."> <taskdef name="ftp" classname="org.apache.tools.ant.taskdefs.optional.net.FTP" /> <target name="example"> <!-- 2 files retrieved --> <ftp action="get" verbose="true" server="localhost" userid="example" password="example"> <fileset dir="downloads" casesensitive="false" includes="root1.txt,root2.txt,a/a.txt,a/b/ab.txt,c/c.txt" /> </ftp> <!-- 5 files retrieved --> <ftp action="get" verbose="true" server="localhost" userid="example" password="example"> <fileset dir="downloads" casesensitive="false" includes="**/*" /> </ftp> </target> </project> run_ant.bat @ECHO OFF PUSHD %~dp0 SET CLASSPATH= SET ANT_HOME=C:\apache-ant-1.8.0 SET ant=%ANT_HOME%\bin\ant.bat SET antoptions=-nouserlib -noclasspath -d SET ftpjars=^ -lib lib\jakarta-oro-2.0.8.jar ^ -lib lib\commons-net-2.0.jar CALL %ant% %antoptions% %ftpjars% > output.txt POPD output.txt Unable to locate tools.jar. Expected to find it in C:\PROGRA~1\Java\jre6\lib\tools.jar Apache Ant version 1.8.0 compiled on February 1 2010 Trying the default build file: build.xml Buildfile: G:\ftp\build.xml Adding reference: ant.PropertyHelper Detected Java version: 1.6 in: C:\PROGRA~1\Java\jre6 Detected OS: Windows 7 Adding reference: ant.ComponentHelper Setting ro project property: ant.file -> G:\ftp\build.xml Setting ro project property: ant.file.type -> file Adding reference: ant.projectHelper Adding reference: ant.parsing.context Adding reference: ant.targets parsing buildfile G:\ftp\build.xml with URI = file:/G:/ftp/build.xml Setting ro project property: ant.project.name -> example Adding reference: example Setting ro project property: ant.project.default-target -> example Setting ro project property: ant.file.example -> G:\ftp\build.xml Setting ro project property: ant.file.type.example -> file Project base dir set to: G:\ftp +Target: +Target: example Adding reference: ant.LocalProperties parsing buildfile jar:file:/C:/apache-ant-1.8.0/lib/ant.jar!/org/apache/tools/ant/antlib.xml with URI = jar:file:/C:/apache-ant-1.8.0/lib/ant.jar!/org/apache/tools/ant/antlib.xml from a zip file Class org.apache.tools.ant.taskdefs.optional.net.FTP loaded from parent loader (parentFirst) Setting ro project property: ant.project.invoked-targets -> example Attempting to create object of type org.apache.tools.ant.helper.DefaultExecutor Adding reference: ant.executor Build sequence for target(s) `example' is [example] Complete build sequence is [example, ] example: [ftp] Opening FTP connection to localhost [ftp] connected [ftp] logging in to FTP server [ftp] login succeeded [ftp] getting files fileset: Setup scanner in dir G:\ftp\downloads with patternSet{ includes: [root1.txt, root2.txt, a/a.txt, a/b/ab.txt] excludes: [] } will try to cd to A where a directory called a exists testing case sensitivity, attempting to cd to A remote system is case sensitive : false [ftp] Hidden file \\a\b\ assumed to not be a symlink. 
filelist map used in listing files filelist map used in listing files [ftp] Hidden file \\a\b\ assumed to not be a symlink. filelist map used in listing files filelist map used in listing files filelist map used in listing files filelist map used in listing files filelist map used in listing files [ftp] Hidden file \\a\a.txt assumed to not be a symlink. filelist map used in listing files [ftp] Hidden file \\a\a.txt assumed to not be a symlink. filelist map used in listing files filelist map used in listing files filelist map used in listing files [ftp] transferring root1.txt to G:\ftp\downloads\root1.txt [ftp] File G:\ftp\downloads\root1.txt copied from localhost [ftp] transferring root2.txt to G:\ftp\downloads\root2.txt [ftp] File G:\ftp\downloads\root2.txt copied from localhost [ftp] 2 files retrieved [ftp] disconnecting [ftp] Opening FTP connection to localhost [ftp] connected [ftp] logging in to FTP server [ftp] login succeeded [ftp] getting files fileset: Setup scanner in dir G:\ftp\downloads with patternSet{ includes: [**/*] excludes: [] } will try to cd to A where a directory called a exists testing case sensitivity, attempting to cd to A remote system is case sensitive : false [ftp] transferring a\a.txt to G:\ftp\downloads\a\a.txt [ftp] File G:\ftp\downloads\a\a.txt copied from localhost [ftp] transferring a\b\ab.txt to G:\ftp\downloads\a\b\ab.txt [ftp] File G:\ftp\downloads\a\b\ab.txt copied from localhost [ftp] transferring c\c.txt to G:\ftp\downloads\c\c.txt [ftp] File G:\ftp\downloads\c\c.txt copied from localhost [ftp] transferring root1.txt to G:\ftp\downloads\root1.txt [ftp] File G:\ftp\downloads\root1.txt copied from localhost [ftp] transferring root2.txt to G:\ftp\downloads\root2.txt [ftp] File G:\ftp\downloads\root2.txt copied from localhost [ftp] 5 files retrieved [ftp] disconnecting BUILD SUCCESSFUL Total time: 0 seconds server log (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> Connected, sending welcome message... (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 220-FileZilla Server version 0.9.34 beta (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 220-written by Tim Kosse ([email protected]) (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 220 Please visit http://sourceforge.net/projects/filezilla/ (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> USER example (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 331 Password required for example (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> PASS ******* (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 230 Logged on (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> TYPE I (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Type set to I (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> SYST (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 215 UNIX emulated by FileZilla (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,232 (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for directory list. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. 
(000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD A (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/A" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD a (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD b (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a/b" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD //a/b (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a/b" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/a/b" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,233 (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for directory list. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,234 (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for directory list. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD //\\a\b\ (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a/b" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/a/b" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD //\\a\b\ (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a/b" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/a/b" is current directory. 
(000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD a (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD //a (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/a" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,235 (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for directory list. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. 
(000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,236 (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> RETR root1.txt (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,237 (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> RETR root2.txt (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> QUIT (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 221 Goodbye (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> disconnected. (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> Connected, sending welcome message... (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 220-FileZilla Server version 0.9.34 beta (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 220-written by Tim Kosse ([email protected]) (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 220 Please visit http://sourceforge.net/projects/filezilla/ (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> USER example (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 331 Password required for example (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> PASS ******* (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 230 Logged on (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> TYPE I (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Type set to I (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> SYST (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 215 UNIX emulated by FileZilla (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,239 (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for directory list. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD A (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/A" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. 
(000154) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,240 (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for directory list. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD / (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD a (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/a" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,241 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> LIST (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for directory list. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD //a/ (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/a" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD b (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/a/b" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,242 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> LIST (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for directory list. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CDUP (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 CDUP successful. "/a" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CDUP (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 CDUP successful. "/" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD / (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD c (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/c" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,243 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> LIST (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for directory list. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CDUP (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 CDUP successful. "/" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CDUP (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 CDUP successful. "/" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD / (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. 
(000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,244 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> RETR a/a.txt (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,245 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> RETR a/b/ab.txt (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,246 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> RETR c/c.txt (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,247 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> RETR root1.txt (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,248 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> RETR root2.txt (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> QUIT (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 221 Goodbye (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> disconnected.

    Read the article

  • How to set up spf records to send mail from google hosted apps to gmail addresses

    - by Chris Adams
    Hi there, I'm trying to work out why email I send from one domain I own is rejected by another that I own, and while I think it may be related to how I've setup spf records, I'm not sure what steps I need to take to fix it. Here's the error message I receive: Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550-Verification failed for <[email protected]> 550-No Such User Here 550 Sender verify failed (state 14). Here's the response from [email protected] Delivered-To: [email protected] Received: by 10.86.92.9 with SMTP id p9cs85371fgb; Wed, 2 Sep 2009 22:33:32 -0700 (PDT) Received: by 10.90.205.4 with SMTP id c4mr2406190agg.29.1251956007562; Wed, 02 Sep 2009 22:33:27 -0700 (PDT) Return-Path: <[email protected]> Received: from verifier.port25.com (207-36-201-235.ptr.primarydns.com [207.36.201.235]) by mx.google.com with ESMTP id 26si831174aga.24.2009.09.02.22.33.25; Wed, 02 Sep 2009 22:33:26 -0700 (PDT) Received-SPF: pass (google.com: domain of [email protected] designates 207.36.201.235 as permitted sender) client-ip=207.36.201.235; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 207.36.201.235 as permitted sender) [email protected]; dkim=pass [email protected] DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; s=auth; d=port25.com; h=Date:From:To:Subject:Message-Id:In-Reply-To; [email protected]; bh=GRMrcnoucTl4upzqJYTG5sOZMLU=; b=uk6TjADEyZVRkceQGjH94ZzfVeRTsiZPzbXuhlqDt1m+kh1zmdUEoiTOzd89ryCHMbVcnG1JajBj 5vOMKYtA3g== DomainKey-Signature: a=rsa-sha1; c=nofws; q=dns; s=auth; d=port25.com; b=NqKCPK00Xt49lbeO009xy4ZRgMGpghvcgfhjNy7+qI89XKTzi6IUW0hYqCQyHkd2p5a1Zjez2ZMC l0u9CpZD3Q==; Received: from verifier.port25.com (127.0.0.1) by verifier.port25.com (PowerMTA(TM) v3.6a1) id hjt9pq0hse8u for <[email protected]>; Thu, 3 Sep 2009 01:26:52 -0400 (envelope-from <[email protected]>) Date: Thu, 3 Sep 2009 01:26:52 -0400 From: [email protected] To: [email protected] Subject: Authentication Report Message-Id: <[email protected]> Precedence: junk (auto_reply) In-Reply-To: <[email protected]> This message is an automatic response from Port25's authentication verifier service at verifier.port25.com. The service allows email senders to perform a simple check of various sender authentication mechanisms. It is provided free of charge, in the hope that it is useful to the email community. While it is not officially supported, we welcome any feedback you may have at <[email protected]>. Thank you for using the verifier, The Port25 Solutions, Inc. team ========================================================== Summary of Results ========================================================== SPF check: pass DomainKeys check: neutral DKIM check: neutral Sender-ID check: pass SpamAssassin check: ham ========================================================== Details: ========================================================== HELO hostname: fg-out-1718.google.com Source IP: 72.14.220.158 mail-from: [email protected] ---------------------------------------------------------- SPF check details: ---------------------------------------------------------- Result: pass ID(s) verified: [email protected] DNS record(s): stemcel.co.uk. 14400 IN TXT "v=spf1 include:aspmx.googlemail.com ~all" aspmx.googlemail.com. 
7200 IN TXT "v=spf1 redirect=_spf.google.com" _spf.google.com. 300 IN TXT "v=spf1 ip4:216.239.32.0/19 ip4:64.233.160.0/19 ip4:66.249.80.0/20 ip4:72.14.192.0/18 ip4:209.85.128.0/17 ip4:66.102.0.0/20 ip4:74.125.0.0/16 ip4:64.18.0.0/20 ip4:207.126.144.0/20 ?all" ---------------------------------------------------------- DomainKeys check details: ---------------------------------------------------------- Result: neutral (message not signed) ID(s) verified: [email protected] DNS record(s): ---------------------------------------------------------- DKIM check details: ---------------------------------------------------------- Result: neutral (message not signed) ID(s) verified: NOTE: DKIM checking has been performed based on the latest DKIM specs (RFC 4871 or draft-ietf-dkim-base-10) and verification may fail for older versions. If you are using Port25's PowerMTA, you need to use version 3.2r11 or later to get a compatible version of DKIM. ---------------------------------------------------------- Sender-ID check details: ---------------------------------------------------------- Result: pass ID(s) verified: [email protected] DNS record(s): stemcel.co.uk. 14400 IN TXT "v=spf1 include:aspmx.googlemail.com ~all" aspmx.googlemail.com. 7200 IN TXT "v=spf1 redirect=_spf.google.com" _spf.google.com. 300 IN TXT "v=spf1 ip4:216.239.32.0/19 ip4:64.233.160.0/19 ip4:66.249.80.0/20 ip4:72.14.192.0/18 ip4:209.85.128.0/17 ip4:66.102.0.0/20 ip4:74.125.0.0/16 ip4:64.18.0.0/20 ip4:207.126.144.0/20 ?all" ---------------------------------------------------------- SpamAssassin check details: ---------------------------------------------------------- SpamAssassin v3.2.5 (2008-06-10) Result: ham (-2.6 points, 5.0 required) pts rule name description ---- ---------------------- -------------------------------------------------- -0.0 SPF_PASS SPF: sender matches SPF record -2.6 BAYES_00 BODY: Bayesian spam probability is 0 to 1% [score: 0.0000] 0.0 HTML_MESSAGE BODY: HTML included in message I've registered the spf records for my domain, as advised here Both domains pass validate according to Kitterman's spf record testing tools, so I'm somewhat confused about this. I also have the catchall address set up on the stemcel.co.uk domain here, but I don't have one setup for chrisadams.me.uk. Instead, we have the following forwarders setup [email protected] to [email protected] [email protected] to [email protected] [email protected] to [email protected] [email protected] to [email protected] Any ideas how to get this working? I'm not sure what I should be looking for here.
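One quick check worth doing here, sketched with the domains from the report above, is to compare the TXT records each domain actually publishes against what the verifier saw:

dig +short TXT stemcel.co.uk       # should show the v=spf1 include:aspmx.googlemail.com record above
dig +short TXT chrisadams.me.uk    # the receiving domain's SPF record, if any
dig +short TXT _spf.google.com     # the Google ranges the include ultimately resolves to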

    Read the article

  • Juniper SSG-5 subinterface vlan routing to the internet

    - by catfish
    I'm unable to get a brand new Juniper SSG-5 with latest 6.3.0r05 firmware routing to the internet from a subinterface I created on bgroup0 setup as vlan2 (bgroup0.1 on "wifi" zone). When connected on the default vlan it gets on the internet just fine. When I switch to vlan2 I'm unable to get to the internet. I am able to get the correct ip address (10.150.0.0/24) from dhcp, able to get to the juniper management page, etc but nothing past the firewall, can't ping 4.2.2.2 or the internet gateway. Even setting up logging on the wifi-to-untrust policy and it does shows the attempts (it's it's timeouts). 172.31.16.0/24 is the untrusted lan, it's already nat'ed but works fine for testing. Can ping this ip from the default vlan but not from vlan2 192.168.1.0/24 is the trusted main lan 10.150.0.0/24 is the wifi isolated lan on vlan2 The idea is to setup an AP with lan and guest access (AP supports multiple ssid's on different vlans). I know I can setup the juniper to use different ports for the wifi lan and use their procurve switch to do the vlan separation, but I never used vlan'ing on a Juniper firewall and I would like to try it out this way. Here is the complete config file: unset key protection enable set clock timezone -5 set vrouter trust-vr sharable set vrouter "untrust-vr" exit set vrouter "trust-vr" unset auto-route-export exit set alg appleichat enable unset alg appleichat re-assembly enable set alg sctp enable set auth-server "Local" id 0 set auth-server "Local" server-name "Local" set auth default auth server "Local" set auth radius accounting port 1646 set admin name "netscreen" set admin password "xxxxxxxxxxxxxxxx" set admin auth web timeout 10 set admin auth dial-in timeout 3 set admin auth server "Local" set admin format dos set zone "Trust" vrouter "trust-vr" set zone "Untrust" vrouter "trust-vr" set zone "DMZ" vrouter "trust-vr" set zone "VLAN" vrouter "trust-vr" set zone id 100 "Wifi" set zone "Untrust-Tun" vrouter "trust-vr" set zone "Trust" tcp-rst set zone "Untrust" block unset zone "Untrust" tcp-rst set zone "MGT" block unset zone "V1-Trust" tcp-rst unset zone "V1-Untrust" tcp-rst set zone "DMZ" tcp-rst unset zone "V1-DMZ" tcp-rst unset zone "VLAN" tcp-rst unset zone "Wifi" tcp-rst set zone "Untrust" screen tear-drop set zone "Untrust" screen syn-flood set zone "Untrust" screen ping-death set zone "Untrust" screen ip-filter-src set zone "Untrust" screen land set zone "V1-Untrust" screen tear-drop set zone "V1-Untrust" screen syn-flood set zone "V1-Untrust" screen ping-death set zone "V1-Untrust" screen ip-filter-src set zone "V1-Untrust" screen land set interface "ethernet0/0" zone "Untrust" set interface "ethernet0/1" zone "Untrust" set interface "bgroup0" zone "Trust" set interface "bgroup0.1" tag 2 zone "Wifi" set interface "bgroup1" zone "DMZ" set interface bgroup0 port ethernet0/2 set interface bgroup0 port ethernet0/3 set interface bgroup0 port ethernet0/4 set interface bgroup0 port ethernet0/5 set interface bgroup0 port ethernet0/6 unset interface vlan1 ip set interface ethernet0/0 ip 172.31.16.243/24 set interface ethernet0/0 route set interface bgroup0 ip 192.168.1.1/24 set interface bgroup0 nat set interface bgroup0.1 ip 10.150.0.1/24 set interface bgroup0.1 nat set interface bgroup0.1 mtu 1500 unset interface vlan1 bypass-others-ipsec unset interface vlan1 bypass-non-ip set interface ethernet0/0 ip manageable set interface bgroup0 ip manageable set interface bgroup0.1 ip manageable set interface ethernet0/0 manage ping set interface ethernet0/1 manage ping 
set interface bgroup0.1 manage ping set interface bgroup0.1 manage telnet set interface bgroup0.1 manage web unset interface bgroup1 manage ping set interface bgroup0 dhcp server service set interface bgroup0.1 dhcp server service set interface bgroup0 dhcp server auto set interface bgroup0.1 dhcp server enable set interface bgroup0 dhcp server option gateway 192.168.1.1 set interface bgroup0 dhcp server option netmask 255.255.255.0 set interface bgroup0 dhcp server option dns1 8.8.8.8 set interface bgroup0.1 dhcp server option lease 1440 set interface bgroup0.1 dhcp server option gateway 10.150.0.1 set interface bgroup0.1 dhcp server option netmask 255.255.255.0 set interface bgroup0.1 dhcp server option dns1 8.8.8.8 set interface bgroup0 dhcp server ip 192.168.1.33 to 192.168.1.126 set interface bgroup0.1 dhcp server ip 10.150.0.50 to 10.150.0.100 unset interface bgroup0 dhcp server config next-server-ip unset interface bgroup0.1 dhcp server config next-server-ip set interface "serial0/0" modem settings "USR" init "AT&F" set interface "serial0/0" modem settings "USR" active set interface "serial0/0" modem speed 115200 set interface "serial0/0" modem retry 3 set interface "serial0/0" modem interval 10 set interface "serial0/0" modem idle-time 10 set flow tcp-mss unset flow no-tcp-seq-check set flow tcp-syn-check unset flow tcp-syn-bit-check set flow reverse-route clear-text prefer set flow reverse-route tunnel always set pki authority default scep mode "auto" set pki x509 default cert-path partial set crypto-policy exit set ike respond-bad-spi 1 set ike ikev2 ike-sa-soft-lifetime 60 unset ike ikeid-enumeration unset ike dos-protection unset ipsec access-session enable set ipsec access-session maximum 5000 set ipsec access-session upper-threshold 0 set ipsec access-session lower-threshold 0 set ipsec access-session dead-p2-sa-timeout 0 unset ipsec access-session log-error unset ipsec access-session info-exch-connected unset ipsec access-session use-error-log set url protocol websense exit set policy id 1 from "Trust" to "Untrust" "Any" "Any" "ANY" permit set policy id 1 exit set policy id 2 from "Wifi" to "Untrust" "Any" "Any" "ANY" permit log set policy id 2 exit set nsmgmt bulkcli reboot-timeout 60 set ssh version v2 set config lock timeout 5 unset license-key auto-update set telnet client enable set snmp port listen 161 set snmp port trap 162 set snmpv3 local-engine id "0162122009006149" set vrouter "untrust-vr" exit set vrouter "trust-vr" unset add-default-route set route 0.0.0.0/0 interface ethernet0/0 gateway 172.31.16.1 exit set vrouter "untrust-vr" exit set vrouter "trust-vr" exit
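A few client-side checks, sketched on the assumption that they are run from a laptop that picked up a 10.150.0.0/24 address on vlan2, can at least show how far packets get before dying (all addresses are taken from the config above):

ping 10.150.0.1      # the bgroup0.1 gateway itself, which the question says is reachable
ping 172.31.16.1     # the upstream gateway on the untrust side
ping 4.2.2.2         # a known external address
traceroute 4.2.2.2   # see at which hop the path stops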

    Read the article

  • What does it mean when ARP shows <incomplete> on eth1

    - by Geoff Dalgas
    We have been using HAProxy along with heartbeat from the Linux-HA project. We are using two linux instances to provide a failover. Each server has with their own public IP and a single IP which is shared between the two using a virtual interface (eth1:1) at IP: 69.59.196.211 The virtual interface (eth1:1) IP 69.59.196.211 is configured as the gateway for the windows servers behind them and we use ip_forwarding to route traffic. We are experiencing an occasional network outage on one of our windows servers behind our linux gateways. HAProxy will detect the server is offline which we can verify by remoting to the failed server and attempting to ping the gateway: Pinging 69.59.196.211 with 32 bytes of data: Reply from 69.59.196.220: Destination host unreachable. Running arp -a on this failed server shows that there is no entry for the gateway address (69.59.196.211): Interface: 69.59.196.220 --- 0xa Internet Address Physical Address Type 69.59.196.161 00-26-88-63-c7-80 dynamic 69.59.196.210 00-15-5d-0a-3e-0e dynamic 69.59.196.212 00-21-5e-4d-45-c9 dynamic 69.59.196.213 00-15-5d-00-b2-0d dynamic 69.59.196.215 00-21-5e-4d-61-1a dynamic 69.59.196.217 00-21-5e-4d-2c-e8 dynamic 69.59.196.219 00-21-5e-4d-38-e5 dynamic 69.59.196.221 00-15-5d-00-b2-0d dynamic 69.59.196.222 00-15-5d-0a-3e-09 dynamic 69.59.196.223 ff-ff-ff-ff-ff-ff static 224.0.0.22 01-00-5e-00-00-16 static 224.0.0.252 01-00-5e-00-00-fc static 225.0.0.1 01-00-5e-00-00-01 static On our linux gateway instances arp -a shows: peak-colo-196-220.peak.org (69.59.196.220) at <incomplete> on eth1 stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1 peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1 peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1 peak-colo-196-222.peak.org (69.59.196.222) at 00:15:5d:0a:3e:09 [ether] on eth1 peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1 peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1 Why would arp occasionally set the entry for this failed server as <incomplete>? Should we be defining our arp entries statically? I've always left arp alone since it works 99% of the time, but in this one instance it appears to be failing. Are there any additional troubleshooting steps we can take help resolve this issue? THINGS WE HAVE TRIED I added a static arp entry for testing on one of the linux gateways which still didn't help. root@haproxy2:~# arp -a peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1 peak-colo-196-221.peak.org (69.59.196.221) at 00:15:5d:00:b2:0d [ether] on eth1 stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1 peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1 peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1 peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1 peak-colo-196-220.peak.org (69.59.196.220) at 00:21:5e:4d:30:8d [ether] PERM on eth1 root@haproxy2:~# arp -i eth1 -s 69.59.196.220 00:21:5e:4d:30:8d root@haproxy2:~# ping 69.59.196.220 PING 69.59.196.220 (69.59.196.220) 56(84) bytes of data. --- 69.59.196.220 ping statistics --- 7 packets transmitted, 0 received, 100% packet loss, time 6006ms Rebooting the windows web server solves this issue temporarily with no other changes to the network but our experience shows this issue will come back. 
Swapping network cards and switches
I noticed the link light on the port of the switch for the failed windows server was running at 100Mb instead of 1Gb on the failed interface. I moved the cable to several other open ports and the link indicated 100Mb for each port that I tried. I also swapped the cable with the same result. I tried changing the properties of the network card in windows and the server locked up and required a hard reset after clicking apply. This windows server has two physical network interfaces so I have swapped the cables and network settings on the two interfaces to see if the problem follows the interface. If the public interface goes down again we will know that it is not an issue with the network card. (We also tried another switch we have on hand, no change)
Changing network hardware driver versions
We've had the same problem with the latest Broadcom driver, as well as the built-in driver that ships in Windows Server 2008 R2.
Replacing network cables
As a last ditch effort we remembered another change that occurred was the replacement of all of the patch cords between our servers / switch. We had purchased two sets, one green of lengths 1ft - 3ft for the private interfaces and another set of red cables for the public interfaces. We swapped out all of the public interface patch cables with a different brand and ran our servers without issue for a full week ... aaaaaand then the problem recurred.
Disable checksum offload, remove TProxy
We also tried disabling TCP/IP checksum offload in the driver, no change. We're now pulling out TProxy and moving to a more traditional x-forwarded-for network arrangement without any fancy IP address rewriting. We'll see if that helps.
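Two more data points that are cheap to collect, sketched here for the Linux gateways with the failing host's IP taken from the question: watch whether ARP requests for the Windows box ever get answered, and confirm what speed/duplex eth1 itself negotiated.

tcpdump -n -i eth1 arp and host 69.59.196.220   # do requests go out, and does the host ever reply?
ethtool eth1                                    # negotiated speed/duplex on the gateway side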

    Read the article

  • Emails forwarded via postfix get flagged as spam and forged in Gmail

    - by Kendall Hopkins
    I'm trying to setup a forwarding only email server. I'm running into the problem where all messages forwarded via postfix are getting put into gmail's spam folder and getting flagged as forged. I'm testing a very similar setup on a cpanel box and their forwarded emails make it through without any problem. Things I've done: Setup reverse dns on forwarding box Setup SPF record for forwarding box domain CPanel route (not flagged as spam): [email protected] - [email protected] - [email protected] AWS postfix route (flagged as spam): [email protected] - [email protected] - [email protected] Gmail error message: /etc/postfix/main.cf myhostname = sputnik.*domain*.com smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no append_dot_mydomain = no readme_directory = no myorigin = /etc/mailname mydestination = sputnik.*domain*.com, localhost.*domain*.com, , localhost relayhost = mynetworks = 127.0.0.0/8 10.0.0.0/24 [::1]/128 [fe80::%eth0]/64 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all inet_protocols = all virtual_alias_maps = hash:/etc/postfix/virtual Email forwarded by CPanel (doesn't get marked as spam): Delivered-To: *personaluser*@gmail.com Received: by 10.182.144.98 with SMTP id sl2csp14396obb; Wed, 9 May 2012 09:18:36 -0700 (PDT) Received: by 10.182.52.38 with SMTP id q6mr1137571obo.8.1336580316700; Wed, 09 May 2012 09:18:36 -0700 (PDT) Return-Path: <mail@*personaldomain*.com> Received: from web6.*domain*.com (173.193.55.66-static.reverse.softlayer.com. [173.193.55.66]) by mx.google.com with ESMTPS id ec7si1845451obc.67.2012.05.09.09.18.36 (version=TLSv1/SSLv3 cipher=OTHER); Wed, 09 May 2012 09:18:36 -0700 (PDT) Received-SPF: neutral (google.com: 173.193.55.66 is neither permitted nor denied by best guess record for domain of mail@*personaldomain*.com) client-ip=173.193.55.66; Authentication-Results: mx.google.com; spf=neutral (google.com: 173.193.55.66 is neither permitted nor denied by best guess record for domain of mail@*personaldomain*.com) smtp.mail=mail@*personaldomain*.com Received: from mail-vb0-f43.google.com ([209.85.212.43]:56152) by web6.*domain*.com with esmtps (TLSv1:RC4-SHA:128) (Exim 4.77) (envelope-from <mail@*personaldomain*.com>) id 1SS9b2-0007J9-LK for mail@kendall.*domain*.com; Wed, 09 May 2012 12:18:36 -0400 Received: by vbbfq11 with SMTP id fq11so599132vbb.2 for <mail@kendall.*domain*.com>; Wed, 09 May 2012 09:18:35 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=mime-version:x-originating-ip:date:message-id:subject:from:to :content-type:x-gm-message-state; bh=Hr0AH40uUtx/w/u9hltbrhHJhRaD5ubKmz2gGg44VLs=; b=IBKi6Xalr9XVFYwdkWxn9PLRB69qqJ9AjUPdvGh8VxMNW4S+hF6r4GJcGOvkDn2drO kw5r4iOpGuWUQPEMHRPyO4+Ozc9SE9s4Px2oVpadR6v3hO+utvFGoj7UuchsXzHqPVZ8 A9FS4cKiE0E0zurTjR7pfQtZT64goeEJoI/CtvcoTXj/Mdrj36gZ2FYtO8Qj4dFXpfu9 uGAKa4jYfx9zwdvhLzQ3mouWwQtzssKUD+IvyuRppLwI2WFb9mWxHg9n8y9u5IaduLn7 7TvLIyiBtS3DgqSKQy18POVYgnUFilcDorJs30hxFxJhzfTFW1Gdhrwjvz0MTYDSRiGQ P4aw== MIME-Version: 1.0 Received: by 10.52.173.209 with SMTP id bm17mr326586vdc.54.1336580315681; Wed, 09 May 2012 09:18:35 -0700 (PDT) Received: by 10.220.191.134 with HTTP; Wed, 9 May 2012 09:18:35 -0700 (PDT) X-Originating-IP: [99.50.225.7] Date: Wed, 9 May 2012 12:18:35 -0400 Message-ID: <CA+tP6Viyn0ms5RJoqtd20ms3pmQCgyU0yy7GBiaALEACcDBC2g@mail.gmail.com> Subject: test5 From: Kendall Hopkins <mail@*personaldomain*.com> To: mail@kendall.*domain*.com Content-Type: multipart/alternative; boundary=bcaec51b9bf5ee11c004bf9cda9c 
X-Gm-Message-State: ALoCoQm3t1Hohu7fEr5zxQZsC8FQocg662Jv5MXlPXBnPnx2AiQrbLsNQNknLy39Su45xBMCM47K X-AntiAbuse: This header was added to track abuse, please include it with any abuse report X-AntiAbuse: Primary Hostname - web6.*domain*.com X-AntiAbuse: Original Domain - kendall.*domain*.com X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12] X-AntiAbuse: Sender Address Domain - *personaldomain*.com X-Source: X-Source-Args: X-Source-Dir: --bcaec51b9bf5ee11c004bf9cda9c Content-Type: text/plain; charset=ISO-8859-1 test5 --bcaec51b9bf5ee11c004bf9cda9c Content-Type: text/html; charset=ISO-8859-1 test5 --bcaec51b9bf5ee11c004bf9cda9c-- Email forwarded via AWS postfix box (marked as spam): Delivered-To: *personaluser*@gmail.com Received: by 10.182.144.98 with SMTP id sl2csp14350obb; Wed, 9 May 2012 09:17:46 -0700 (PDT) Received: by 10.229.137.143 with SMTP id w15mr389471qct.37.1336580266237; Wed, 09 May 2012 09:17:46 -0700 (PDT) Return-Path: <mail@*personaldomain*.com> Received: from sputnik.*domain*.com (sputnik.*domain*.com. [107.21.39.201]) by mx.google.com with ESMTP id o8si1330855qct.115.2012.05.09.09.17.46; Wed, 09 May 2012 09:17:46 -0700 (PDT) Received-SPF: neutral (google.com: 107.21.39.201 is neither permitted nor denied by best guess record for domain of mail@*personaldomain*.com) client-ip=107.21.39.201; Authentication-Results: mx.google.com; spf=neutral (google.com: 107.21.39.201 is neither permitted nor denied by best guess record for domain of mail@*personaldomain*.com) smtp.mail=mail@*personaldomain*.com Received: from mail-vb0-f52.google.com (mail-vb0-f52.google.com [209.85.212.52]) by sputnik.*domain*.com (Postfix) with ESMTP id A308122AD6 for <mail@*personaldomain2*.com>; Wed, 9 May 2012 16:17:45 +0000 (UTC) Received: by vbzb23 with SMTP id b23so448664vbz.25 for <mail@*personaldomain2*.com>; Wed, 09 May 2012 09:17:45 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=mime-version:x-originating-ip:date:message-id:subject:from:to :content-type:x-gm-message-state; bh=XAzjH9tUXn6SbadVSLwJs2JVbyY4arosdTuV8Nv+ARI=; b=U8gIgHd6mhWYqPU4MH/eyvo3kyZsDn/GiYwZj5CLbs6Zz/ZOXQkenRi7zW3ewVFi/9 uAFylT8SQ+Wjw2l6OgAioCTojfZ58s4H/JW+1bu460KAP9aeOTcZDNSsHlsj0wvH5XRV 4DQJa11kz+WFVtVVcFuB33WVUPAgJfXzY+pSTe+FWsrZyrrwL7/Vm9TSKI5PBwRN9i4g zAZabgkmw1o2THT3kbJi6vAbPzlqK2LVbgt82PP0emHdto7jl4iD5F6lVix4U0dsrtRv xuGUE0gDyIwJuR4Q5YTkNubwGH/Y2bFBtpx2q1IORANrolWxIGaZSceUWawABkBGPABX 1/eg== MIME-Version: 1.0 Received: by 10.52.96.169 with SMTP id dt9mr282954vdb.107.1336580265812; Wed, 09 May 2012 09:17:45 -0700 (PDT) Received: by 10.220.191.134 with HTTP; Wed, 9 May 2012 09:17:45 -0700 (PDT) X-Originating-IP: [99.50.225.7] Date: Wed, 9 May 2012 12:17:45 -0400 Message-ID: <CA+tP6VgqZrdxP543Y28d1eMwJAs4DxkS4EE6bvRL8nFoMkgnQQ@mail.gmail.com> Subject: test4 From: Kendall Hopkins <mail@*personaldomain*.com> To: mail@*personaldomain2*.com Content-Type: multipart/alternative; boundary=20cf307f37f6f521b304bf9cd79d X-Gm-Message-State: ALoCoQkrNcfSTWz9t6Ir87KEYyM+zJM4y1AbwP86NMXlk8B3ALhnis+olFCKdgPnwH/sIdzF3+Nh --20cf307f37f6f521b304bf9cd79d Content-Type: text/plain; charset=ISO-8859-1 test4 --20cf307f37f6f521b304bf9cd79d Content-Type: text/html; charset=ISO-8859-1 test4 --20cf307f37f6f521b304bf9cd79d--
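One mitigation often suggested for exactly this forwarding case is rewriting the envelope sender before relaying, so SPF is evaluated against the forwarding host's own domain rather than the original sender's. As a sketch only, assuming postsrsd is installed and listening on its default local ports, the postfix side would look roughly like:

# /etc/postfix/main.cf additions for postsrsd (assumed defaults)
sender_canonical_maps = tcp:localhost:10001
sender_canonical_classes = envelope_sender
recipient_canonical_maps = tcp:localhost:10002
recipient_canonical_classes = envelope_recipient,header_recipient

Publishing an SPF record for sputnik's own domain, alongside the reverse DNS already in place, is the other half of the usual advice.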

    Read the article

  • Need help diagnosing network performance issues

    - by tokes
    I am currently working in a developing country as a system analyst for a government department. My area of expertise is software projects, but I've come across a few issues with the network setup in my office. (Unfortunately, being a developing country, there's not a lot of professional help available for this sort of thing.) Most recently, I am trying to diagnose a problem with slowness on the network. Our office is connected to the internet via an ADSL wireless modem/router (called Router). The modem is connected via ethernet to a switch (called Switch). The modem also acts as a wireless access point (called Wireless1), though because it is in a room at the end of the floor, it's range is limited. There are ethernet ports installed around the office. The cables of these all lead back to the same switch. In closer vicinity to the bulk of the client computers, there is another wireless router that acts as an access point for those clients (called Wireless2). That router is connected via ethernet to a wall port, and therefore to Switch. There is also a Windows server which acts as a DNS server (called DNSBox) which is located in the same room and is connected directly to Switch. ---Internet----------| Router/Wireless1 192.168.10.1 ---------------| |----|=========| DNSBox | |-------------------- 192.168.10.4 --------------------| Switch |---Other clients---- | |-------------------- |----|=========| Wireless2 ------------------| 192.168.10.198 One final thing to mention about the network setup. All clients are configured with manual IP addresses. Their router/gateway is set to the IP address of Router, and their DNS server is set to the IP address of DNSBox (with a secondary IP set to an external IP - that of our ISP's DNS server). Here are the symptoms we are experiencing: Clients connected to Wireless2 AP experience slow and unstable connections to the internet. (Slow here is defined as speeds of ~1KB/s, though ping response times seem to be as normal.) Clients connected via ethernet to Switch also experience the same slowness. Clients connected to Wireless1 AP (i.e. connecting via wireless directly to the ADSL modem) experience normal connections to the internet. Clients connected via ethernet to Router (i.e. connecting via ethernet directly to the ADSL modem) also experience normal connections to the internet. I also tried to gauge the connection performance between two machines on the network via ethernet: A file transfer between two clients who were both directly connected to Switch was the fastest; A file transfer between one client directly connected to Switch, and one client directly connected to Router (which is directly connected to Switch) performed much slower; A file transfer between two clients directly connected to Router also performed slowly. Things I have attempted to diagnose the problem: Restarted Switch -- no change. We tried unplugging ethernet jacks from Switch 4 at a time and testing the internet connection. The thought here was that perhaps a client on the network has contracted a virus, and is possibly spamming the network with traffic? (Not very technical, I know.) Unfortunately we couldn't get any significant increases in performance using this method. There were a couple of times when it seemed to be better, but then the connection speed quickly dropped back to slow/dead pace. I didn't want to unplug all jacks from Switch because I was concerned that users might be affected or that I would re-plug in the jacks incorrectly (should I even be worried about that? 
a port is a port on a switch, right?) I tried swapping the ethernet cable used to connect Router to Switch -- no change in performance. I tried swapping the port used on Switch for Router -- no change in performance. Anyone got any ideas on what this could be? Should I be mentioning specific brand names/models of the hardware used? Virus outbreaks are common in this country/office -- what could I be doing to figure out if a virus is at fault? If it is a virus, it doesn't seem to be generating a lot of traffic to/from the internet, because a) I can still get a good speed if I am directly connected to Router / Wireless1 and b) our ISP data usage has not risen suspiciously. Thanks for your help!
Update #1: Here are the specs of some of the hardware: Switch is an Edimax ES3132RL 32-Port 10/100 Rackmount Switch; Router is a D-Link DSL-G604T.
Update #2: I just tried unplugging everything except a laptop and Router from Switch. Speeds are still slow. I guess that means that Router / Switch are not being flooded? It seems more and more likely that the cause is something to do with the interaction between Router and Switch. However, I still can't find any useful resources on setting the LAN speed for either (and I'm not well-versed in these advanced networking configurations).
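Two measurements that might narrow this down, sketched with the addresses from the diagram above: check what speed/duplex a port actually negotiated, and measure raw LAN throughput between two hosts so the ADSL line is out of the picture.

ethtool eth0            # on any Linux host on the LAN: negotiated speed/duplex
iperf -s                # run on one machine, for example the DNS box at 192.168.10.4 (Windows iperf build)
iperf -c 192.168.10.4   # run on a client; reports TCP throughput across Switch only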

    Read the article

  • Can't start httpd 2.4.9 with self-signed SSL certificate

    - by Smollet
    I cannot start httpd 2.4.9 (tried 2.4.x too) on CentOS 6.5 with the simplest SSL config possible. The openssl version installed on the machine is OpenSSL 1.0.1e-fips 11 Feb 2013 (I've upgraded it using 'yum update' to the latest patched version as well). I have compiled and installed httpd 2.4.9 using the following commands:
        ./configure --enable-ssl --with-ssl=/usr/local/ssl/ --enable-proxy=shared --enable-proxy_wstunnel=shared --with-apr=apr-1.5.1/ --with-apr-util=apr-util-1.5.3/
        make
        make install
    Now I'm generating the default self-signed certificate as described in the CentOS HowTo:
        openssl genrsa -out ca.key 2048
        openssl req -new -key ca.key -out ca.csr
        openssl x509 -req -days 365 -in ca.csr -signkey ca.key -out ca.crt
        cp ca.crt /etc/pki/tls/certs
        cp ca.key /etc/pki/tls/private/ca.key
        cp ca.csr /etc/pki/tls/private/ca.csr
    Here is my httpd-ssl.conf file:
        Listen 443
        SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5
        SSLPassPhraseDialog builtin
        SSLSessionCache "shmcb:/usr/local/apache2/logs/ssl_scache(512000)"
        SSLSessionCacheTimeout 300
        <VirtualHost *:443>
            SSLEngine on
            SSLCertificateFile /etc/pki/tls/certs/ca.crt
            SSLCertificateKeyFile /etc/pki/tls/private/ca.key
            <FilesMatch "\.(cgi|shtml|phtml|php)$">
                SSLOptions +StdEnvVars
            </FilesMatch>
            <Directory "/usr/local/apache2/cgi-bin">
                SSLOptions +StdEnvVars
            </Directory>
            BrowserMatch "MSIE [2-5]" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            CustomLog "/usr/local/apache2/logs/ssl_request_log" \
                "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
        </VirtualHost>
    When I start httpd using bin/apachectl -k start, I get the following errors in the error_log:
        [Wed Jun 04 00:29:27.995654 2014] [ssl:info] [pid 24021:tid 139640404293376] AH01887: Init: Initializing (virtual) servers for SSL
        [Wed Jun 04 00:29:27.995726 2014] [ssl:info] [pid 24021:tid 139640404293376] AH01914: Configuring server 192.168.9.128:443 for SSL protocol
        [Wed Jun 04 00:29:27.995863 2014] [ssl:debug] [pid 24021:tid 139640404293376] ssl_engine_init.c(312): AH01893: Configuring TLS extension handling
        [Wed Jun 04 00:29:27.996111 2014] [ssl:debug] [pid 24021:tid 139640404293376] ssl_util_ssl.c(343): AH02412: [192.168.9.128:443] Cert matches for name '192.168.9.128' [subject: CN=192.168.9.128,OU=XXX,O=XXXX,L=XXXX,ST=NRW,C=DE / issuer: CN=192.168.9.128,OU=XXX,O=XXXX,L=XXXX,ST=NRW,C=DE / serial: AF04AF31799B7695 / notbefore: Jun 3 22:26:45 2014 GMT / notafter: Jun 3 22:26:45 2015 GMT]
        [Wed Jun 04 00:29:27.996122 2014] [ssl:info] [pid 24021:tid 139640404293376] AH02568: Certificate and private key 192.168.9.128:443:0 configured from /etc/pki/tls/certs/ca.crt and /etc/pki/tls/private/ca.key
        [Wed Jun 04 00:29:27.996209 2014] [ssl:info] [pid 24021:tid 139640404293376] AH01914: Configuring server 192.168.9.128:443 for SSL protocol
        [Wed Jun 04 00:29:27.996280 2014] [ssl:debug] [pid 24021:tid 139640404293376] ssl_engine_init.c(312): AH01893: Configuring TLS extension handling
        [Wed Jun 04 00:29:27.996295 2014] [ssl:emerg] [pid 24021:tid 139640404293376] AH02572: Failed to configure at least one certificate and key for 192.168.9.128:443
        [Wed Jun 04 00:29:27.996303 2014] [ssl:emerg] [pid 24021:tid 139640404293376] SSL Library Error: error:0906D06C:PEM routines:PEM_read_bio:no start line (Expecting: DH PARAMETERS) -- Bad file contents or format - or even just a forgotten SSLCertificateKeyFile?
        [Wed Jun 04 00:29:27.996308 2014] [ssl:emerg] [pid 24021:tid 139640404293376] SSL Library Error: error:0906D06C:PEM routines:PEM_read_bio:no start line (Expecting: EC PARAMETERS) -- Bad file contents or format - or even just a forgotten SSLCertificateKeyFile?
        [Wed Jun 04 00:29:27.996318 2014] [ssl:emerg] [pid 24021:tid 139640404293376] SSL Library Error: error:140A80B1:SSL routines:SSL_CTX_check_private_key:no certificate assigned
        [Wed Jun 04 00:29:27.996321 2014] [ssl:emerg] [pid 24021:tid 139640404293376] AH02312: Fatal error initialising mod_ssl, exiting.
        AH00016: Configuration Failed
    I then try to generate the missing DH PARAMETERS and EC PARAMETERS:
        openssl dhparam -outform PEM -out dhparam.pem 2048
        openssl ecparam -out ec_param.pem -name prime256v1
        cat dhparam.pem ec_param.pem >> /etc/pki/tls/certs/ca.crt
    That mitigates the error, but the next one comes up:
        [Wed Jun 04 00:34:05.021438 2014] [ssl:info] [pid 24089:tid 140719371077376] AH01887: Init: Initializing (virtual) servers for SSL
        [Wed Jun 04 00:34:05.021487 2014] [ssl:info] [pid 24089:tid 140719371077376] AH01914: Configuring server 192.168.9.128:443 for SSL protocol
        [Wed Jun 04 00:34:05.021874 2014] [ssl:debug] [pid 24089:tid 140719371077376] ssl_engine_init.c(312): AH01893: Configuring TLS extension handling
        [Wed Jun 04 00:34:05.022050 2014] [ssl:debug] [pid 24089:tid 140719371077376] ssl_util_ssl.c(343): AH02412: [192.168.9.128:443] Cert matches for name '192.168.9.128' [subject: CN=192.168.9.128,OU=XXX,O=XXXX,L=XXXX,ST=NRW,C=DE / issuer: CN=192.168.9.128,OU=XXX,O=XXXX,L=XXXX,ST=NRW,C=DE / serial: AF04AF31799B7695 / notbefore: Jun 3 22:26:45 2014 GMT / notafter: Jun 3 22:26:45 2015 GMT]
        [Wed Jun 04 00:34:05.022066 2014] [ssl:info] [pid 24089:tid 140719371077376] AH02568: Certificate and private key 192.168.9.128:443:0 configured from /etc/pki/tls/certs/ca.crt and /etc/pki/tls/private/ca.key
        [Wed Jun 04 00:34:05.022285 2014] [ssl:debug] [pid 24089:tid 140719371077376] ssl_engine_init.c(1016): AH02540: Custom DH parameters (2048 bits) for 192.168.9.128:443 loaded from /etc/pki/tls/certs/ca.crt
        [Wed Jun 04 00:34:05.022389 2014] [ssl:debug] [pid 24089:tid 140719371077376] ssl_engine_init.c(1030): AH02541: ECDH curve prime256v1 for 192.168.9.128:443 specified in /etc/pki/tls/certs/ca.crt
        [Wed Jun 04 00:34:05.022397 2014] [ssl:info] [pid 24089:tid 140719371077376] AH01914: Configuring server 192.168.9.128:443 for SSL protocol
        [Wed Jun 04 00:34:05.022464 2014] [ssl:debug] [pid 24089:tid 140719371077376] ssl_engine_init.c(312): AH01893: Configuring TLS extension handling
        [Wed Jun 04 00:34:05.022478 2014] [ssl:emerg] [pid 24089:tid 140719371077376] AH02572: Failed to configure at least one certificate and key for 192.168.9.128:443
        [Wed Jun 04 00:34:05.022488 2014] [ssl:emerg] [pid 24089:tid 140719371077376] SSL Library Error: error:140A80B1:SSL routines:SSL_CTX_check_private_key:no certificate assigned
        [Wed Jun 04 00:34:05.022491 2014] [ssl:emerg] [pid 24089:tid 140719371077376] AH02312: Fatal error initialising mod_ssl, exiting.
        AH00016: Configuration Failed
    I have tried to generate the simple certificate/key pair exactly as described in the httpd docs. Unfortunately, I still get the exact same errors as above. I've seen a bug report with a similar issue: https://issues.apache.org/bugzilla/show_bug.cgi?id=56410 -- but the openssl version I have is reported as working there. I've also tried applying the patch from the report, as well as building the latest 2.4.x branch, with no success; I get the same errors as above.
    I have also tried to create a short chain of certificates and set the root CA certificate using the SSLCertificateChainFile directive. That didn't help either; I get the exact same errors as above. I'm not interested in setting up hardened security, etc. The only thing I need is to start httpd with the simplest SSL config possible so I can continue testing the proxy config for mod_proxy_wstunnel. Has anybody encountered and solved this issue? Is my sequence for creating a self-signed certificate incorrect? I'd appreciate any help very much!
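    One sanity check worth trying (a suggestion only, not a confirmed fix for the bug report linked above) is to regenerate the key and certificate in a single step, so that the key is unencrypted, the certificate file contains exactly one CERTIFICATE block with no DH/EC parameters appended, and the two files provably match; then point SSLCertificateFile and SSLCertificateKeyFile at the fresh files. The file names and subject fields below are placeholders.

        # Create an unencrypted 2048-bit key and a self-signed certificate in one step
        # (-nodes leaves the key unencrypted, -subj skips the interactive prompts)
        openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
            -keyout /etc/pki/tls/private/test.key \
            -out /etc/pki/tls/certs/test.crt \
            -subj "/C=DE/ST=NRW/O=Test/CN=192.168.9.128"

        # Verify the certificate parses cleanly and that cert and key belong together
        # (the two modulus digests must be identical)
        openssl x509 -in /etc/pki/tls/certs/test.crt -noout -subject -dates
        openssl x509 -noout -modulus -in /etc/pki/tls/certs/test.crt | openssl md5
        openssl rsa  -noout -modulus -in /etc/pki/tls/private/test.key | openssl md5

    If httpd still fails with AH02572 and "no certificate assigned" against a clean pair like this, that points back at the mod_ssl/OpenSSL combination from the bug report rather than at the certificate generation sequence itself.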

    Read the article
