Search Results

Search found 7217 results on 289 pages for 'nice'.


  • CodePlex Daily Summary for Saturday, January 22, 2011

    CodePlex Daily Summary for Saturday, January 22, 2011Popular ReleasesMinecraft Tools: Minecraft Topographical Survey 1.3: MTS requires version 4 of the .NET Framework - you must download it from Microsoft if you have not previously installed it. This version of MTS adds automatic block list updates, so MTS will recognize blocks added in game updates properly rather than drawing them in bright pink. New in this version of MTS: Support for all new blocks added since the Halloween update Auto-update of blockcolors.xml to support future game updates A splash screen that shows while the program searches for upd...StyleCop for ReSharper: StyleCop for ReSharper 5.1.14996.000: New Features: ============= This release is just compiled against the latest release of JetBrains ReSharper 5.1.1766.4 Previous release: A considerable amount of work has gone into this release: Huge focus on performance around the violation scanning subsystem: - caching added to reduce IO operations around reading and merging of settings files - caching added to reduce creation of expensive objects Users should notice condsiderable perf boost and a decrease in memory usage. Bug Fixes...TweetSharp: TweetSharp v2.0.0.0 - Preview 9: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: This code is currently preview quality. Preview 9 ChangesAdded support for lists and suggested users Fixes based on user feedback Third Party Library VersionsHammock v1.1.6: http://hammock.codeplex.com Json.NET 4.0 Release 1: http://json.codeplex.comjqGrid ASP.Net MVC Control: Version 1.2.0.0: jqGrid 3.8 support jquery 1.4 support New and exciting features Many bugfixes Complete separation from the jquery, & jqgrid codeMediaScout: MediaScout 3.0 Preview 4: Update ReleaseCoding4Fun Tools: Coding4Fun.Phone.Toolkit v1: Coding4Fun.Phone.Toolkit v1MFCMAPI: January 2011 Release: Build: 6.0.0.1024 Full release notes at SGriffin's blog. If you just want to run the tool, get the executable. If you want to debug it, get the symbol file and the source. The 64 bit build will only work on a machine with Outlook 2010 64 bit installed. All other machines should use the 32 bit build, regardless of the operating system. Facebook BadgeAutoLoL: AutoLoL v1.5.4: Added champion: Renekton Removed automatic file association Fix: The recent files combobox didn't always open a file when an item was selected Fix: Removing a recently opened file caused an errorDotNetNuke® Community Edition: 05.06.01: Major Highlights Fixed issue to remove preCondition checks when upgrading to .Net 4.0 Fixed issue where some valid domains were failing email validation checks. Fixed issue where editing Host menu page settings assigns the page to a Portal. Fixed issue which caused XHTML validation problems in 5.6.0 Fixed issue where an aspx page in any subfolder was inaccessible. Fixed issue where Config.Touch method signature had an unintentional breaking change in 5.6.0 Fixed issue which caused...MiniTwitter: 1.65: MiniTwitter 1.65 ???? ?? List ????? in-reply-to ???????? ????????????????????????? ?? OAuth ????????????????????????????ASP.net Ribbon: Version 2.1: Tadaaa... So Version 2.1 brings a lot of things... Have a look at the homepage to see what's new. Also, I wanted to (really) improve the Designer. I wanted to add great things... but... it took to much time. And as some of you were waiting for fixes, I decided just to fix bugs and add some features. 
So have a look at the demo app to see new features. Thanks ! (You can expect some realeses if bugs are not fixed correctly... 2.2, 2.3, 2.4....)iTracker Asp.Net Starter Kit: Version 3.0.0: This is the inital release of the version 3.0.0 Visual Studio 2010 (.Net 4.0) remake of the ITracker application. I connsider this a working, stable application but since there are still some features missing to make it "complete" I'm leaving it listed as a "beta" release. I am hoping to make it feature complete for v3.1.0 but anything is possible.ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.6.1: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager changes: RenderView controller extension works for razor also live demo switched to razorBloodSim: BloodSim - 1.3.3.1: - Priority update to resolve a bug that was causing Boss damage to ignore Blood Shields entirelyRawr: Rawr 4.0.16 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of this release, you can now also begin using the new Downloadable WPF version of Rawr!This is a pre-alpha release of the WPF version, there are likely to be a lot of issues. If you have a problem, please follow the Posting Guidelines and put it into the Issue Tracker. W...MvcContrib: an Outer Curve Foundation project: MVC 3 - 3.0.51.0: Please see the Change Log for a complete list of changes. MVC BootCamp Description of the releases: MvcContrib.Release.zip MvcContrib.dll MvcContrib.TestHelper.dll MvcContrib.Extras.Release.zip T4MVC. The extra view engines / controller factories and other functionality which is in the project. This file includes the main MvcContrib assembly. Samples are included in the release. You do not need MvcContrib if you download the Extras.Yahoo! UI Library: YUI Compressor for .Net: Version 1.5.0.0 - Jalthi: Updated solution to VS2010. New: Work Item #4450 - Optional MSBuild task parameter :: Do not error if no files were found. Fixed: Work Item #5028 - Output file encoding is the same as the optional MSBuild task encoding argument. Fixed: Work Item #5824 - MSBuilds where slow, after the first build due to the Current Thread being forced to en-gb, on none en-gb systems. Changed: Work Item #6873 - Project license changed from MS-PL to GPLv2. New: Added all the unit tests from the Java YU...N2 CMS: 2.1.1: N2 is a lightweight CMS framework for ASP.NET. It helps you build great web sites that anyone can update. 2.1.1 Maintenance release List of changes 2.1 Major Changes Support for auto-implemented properties ({get;set;}, based on contribution by And Poulsen) File manager improvements (multiple file upload, resize images to fit) New image gallery Infinite scroll paging on news Content templates First time with N2? Try the demo site Download one of the template packs (above) and open...DNN Menu Provider: 01.00.00 (DNN 5.X and newer): DNN Menu Provider v1.0.0Our first release build is a stable release. It requires at least DotNetNuke 5.1.0. Previous DNN versions are not suported. Major Highlights: Easy to use template system build on a selfmade and powerful engine. Ready for multilingual websites. 
A flexible translation integration based on the ASP.NET Provider Model as well as an build in provider for DNN Localization (DNN 5.5.X > required) and an provider for EALO Translation API. Advanced in-memory caching function...VidCoder: 0.8.1: Adds ability to choose an arbitrary range (in seconds or frames) to encode. Adds ability to override the title number in the output file name when enqueing multiple titles. Updated presets: Added iPhone 4, Apple TV 2, fixed some existing presets that should have had weightp=0 or trellis=0 on them. Added {parent} option to auto-name format. Use {parent:2} to refer to a folder 2 levels above the input file. Added {title:2} option to auto-name format. Adds leading zeroes to reach the sp...New Projectsagap: Système de gestion d'application.Blog in Silverlight With BlogEngine: I love silverlight and want to build my blog in silverlight base on BlogEngine.NETCavemanTools: Easy to use toolkit with extension methods and special goodies for faster development, written in C# on .Net 3.5. CloudCompanion: Windows Azure performance measurement & analytics tools.Delivery Madness: XNA Game Development ExampledFactor: Visual Studio Add-in. Provides extended refactoring options.GpSpotMe: El objetivo final de GPSmapPoint es mostrar la posición del usuario sobre un mapa topográfico, pero eso no es mas que una escusa para aprender y divertirse programando. Esta desarrollado en C# sobre .NET Compact Framework 3.5 y puede ser usado en teléfonos con Windows Mobile 6.xInvestment Calculator: Allows you to evaluate how much money you could have today if you invested to one of the predefined investment opportunities in the database at a given date.jbucknor: jbucknorLoad Generation: Generates load according to a given pattern over a long period of time for the given number of users. It can vary the load in different patters such as linear, exponential etc. Load can be generated on custom or standard protocol Orchard Profile Module: Provides user profilesOrchard Super Classic Theme: This Orchard Theme is used to demonstrate the dynamic themes capabilities offered by Orchard. Based on the Classic Theme for Orchard, you can chose among different variations from the Dashboard.Orchard Youtube Field Module: Youtube field for orchard.Osac DataLayer: Database layer to simplify the use of database connection, sql queries and commands. Powerd by OSAC. http://www.osac.nlPascal Design: Pascal Design. pascal design is for users writing pascal application or to students learning pascal and don't need to workout the design and want to make same fast "screens". it's developed in C#PathEditor: This tool simplified the access to the environment path variable. No long line, a nice grid for editing easy and secure with directory checking. PLG Graphic File Analyzer: PLG Analyzer is an application that loads a graphic scene described in PLG format, parses the file and displays the information in it on the screen in 3D coordinates using OpenTK for C#. This project is a university project (homework, not final project) so please bear in mind :)Read Application Cofiguration in Javascript: Would it be good , if "AppSettings" is accessed with "Javascript.Configuration.ConfigurationManager.AppSettings["ServiceUrl"]"?Route#: A Microsoft .NET 4.0 C#/F# application to load routes and find minimal paths. The application is meant to let user create a network of nodes and connections representing a web to route. 
Starting from a point A the application will find the minimal path to B.Send SMS: Send SMS allows you to send SMS (Short Messaging Service) to all mobile operators across India. It allows you to send SMS through popular service providers like Way2SMS, Full On Sms, Tezsms, Sms Inside and more.SocialMe: This is a simple Facebook, MySpace & Twitter client, but small!StockViewer: This is an interactive way to browse stocks.Thailand IC2011 Training: Using for IC2011 training at KU. Topic. Windows Presentation Foundation Silverlight Windows Phone 7 TipTip: Online oblozuvanjeWeb Content Compressor: The objective of this project is to compress any Javascript (js file or inline script at some web files) and Cascading Style Sheets to an efficient level that works exactly as the original source, before it was minified. The library also is integrated with MSBuildwuyuhu: wuyuhuXaml Mvp: With Xaml Mvp you can create Windows Phone 7 applications based on the MVP pattern. By seperating Data from Behaviour you still get the benefit of MVVM without your View Model becoming a fat-controller. Silverlight and WPF coming soon! Developed using C#XnaWavPack: Native C# wavpack decoder, intended for use with the dynamic sound API in XNA4.


  • Working with Tile Notifications in Windows 8 Store Apps – Part I

    - by dwahlin
    One of the features that really makes Windows 8 apps stand out from others is the tile functionality on the start screen. While icons only allow a user to start an application, tiles provide a more engaging way to draw the user into an application. Examples of “live” tiles on part of my current start screen are shown next. I’ll admit that if you get enough of these tiles going, the start screen can actually be a bit distracting. Fortunately, a user can easily disable a live tile by right-clicking on it (or pressing and holding it on a touch device) and then selecting Turn live tile off from the AppBar. They can also make a wide tile smaller (into a square tile) or make a square tile bigger, assuming the application supports both squares and rectangles. In this post I’ll walk through how to add tile notification functionality into an application. Both XAML/C# and HTML/JavaScript apps support live tiles, and I’ll show the code for both options.

    Understanding Tile Templates

    The first thing you need to know if you want to add custom tile functionality (live tiles) to your application is that there is a collection of tile templates available out-of-the-box. Each tile template has XML associated with it that you need to load, update with your custom data, and then feed into a tile update manager. By doing that you can control what shows in your app’s tile on the Windows 8 start screen. So how do you learn more about the different tile templates and their respective XML? Fortunately, Microsoft has a nice documentation page in the Windows 8 Store SDK. Visit http://msdn.microsoft.com/en-us/library/windows/apps/hh761491.aspx to see a complete list of square and wide/rectangular tile templates that you can use. Looking through the templates you’ll see that each one has an XML template associated with it. The TileSquareBlock template, for example, has the following XML:

      <tile>
        <visual>
          <binding template="TileSquareBlock">
            <text id="1">Text Field 1</text>
            <text id="2">Text Field 2</text>
          </binding>
        </visual>
      </tile>

    An example of a wide/rectangular tile template is shown next:

      <tile>
        <visual>
          <binding template="TileWideImageAndText01">
            <image id="1" src="image1.png" alt="alt text"/>
            <text id="1">Text Field 1</text>
          </binding>
        </visual>
      </tile>

    To use these tile templates (or others you find interesting), update their content, and get them to show for your app’s tile on the Windows 8 start screen, you’ll need to perform the following steps:

      1. Define the tile template to use in your app
      2. Load the tile template’s XML into memory
      3. Modify the children of the <binding> tag
      4. Feed the modified tile XML into a new TileNotification instance
      5. Feed the TileNotification instance into the Update() method of the TileUpdateManager

    In the remainder of the post I’ll walk through each of the steps listed above to provide wide and square tile notifications for an application. The wide tile that’s shown will display an image and text, while the square tile will only show text. If you’re going to provide custom tile notifications, it’s recommended that you provide both wide and square tiles since users can switch between the two of them directly on the start screen.

    Note: When working with tile notifications it’s possible to manipulate and update a tile’s XML template without having to know XML parsing techniques. This can be accomplished using some C# notification extension classes that are available. In this post I’m going to focus on working with tile notifications using an XML parser, so that the emphasis stays on the steps required to add notifications to the Windows 8 start screen rather than on external extension classes. You can access the extension classes in the Windows 8 samples gallery if you’re interested.

    Steps to Create Custom App Tile Notifications

    Step 1: Define the tile template to use in your app

    Although you can cut-and-paste a tile template’s XML directly into your C# or HTML/JavaScript Windows Store app and then parse it using an XML parser, it’s easier to use the built-in TileTemplateType enumeration from the Windows.UI.Notifications namespace. It provides direct access to the XML for the various templates, so once you locate a template you like in the documentation (mentioned above), simply reference it:

    HTML/JavaScript

      var notifications = Windows.UI.Notifications;
      var template = notifications.TileTemplateType.tileWideImageAndText01;

    XAML/C#

      var template = TileTemplateType.TileWideImageAndText01;

    Step 2: Load the tile template’s XML into memory

    Once the target template’s XML is identified, load it into memory using the TileUpdateManager’s GetTemplateContent() method. This method parses the template XML and returns an XmlDocument object:

    HTML/JavaScript

      var tileXml = notifications.TileUpdateManager.getTemplateContent(template);

    XAML/C#

      var tileXml = TileUpdateManager.GetTemplateContent(template);

    Step 3: Modify the children of the <binding> tag

    Once the XML for a given template is loaded into memory, you need to locate the appropriate <image> and/or <text> elements in the XML and update them with your app data. This can be done using standard XML DOM manipulation techniques. The example code below locates the <image> element and points its src attribute at an image file located in the project. The code also creates a square tile that consists of text, updates its <text> element, and then imports and appends it into the wide tile’s XML.

    HTML/JavaScript

      var image = tileXml.selectSingleNode('//image[@id="1"]');
      image.setAttribute('src', 'ms-appx:///images/' + imageFile);
      image.setAttribute('alt', 'Live Tile');

      var squareTemplate = notifications.TileTemplateType.tileSquareText04;
      var squareTileXml = notifications.TileUpdateManager.getTemplateContent(squareTemplate);
      var squareTileTextAttributes = squareTileXml.selectSingleNode('//text[@id="1"]');
      squareTileTextAttributes.appendChild(squareTileXml.createTextNode(content));
      var node = tileXml.importNode(squareTileXml.selectSingleNode('//binding'), true);
      tileXml.selectSingleNode('//visual').appendChild(node);

    XAML/C#

      var tileXml = TileUpdateManager.GetTemplateContent(template);
      var text = tileXml.SelectSingleNode("//text[@id='1']");
      text.AppendChild(tileXml.CreateTextNode(content));
      var image = (XmlElement)tileXml.SelectSingleNode("//image[@id='1']");
      image.SetAttribute("src", "ms-appx:///Assets/" + imageFile);
      image.SetAttribute("alt", "Live Tile");
      Debug.WriteLine(image.GetXml());

      var squareTemplate = TileTemplateType.TileSquareText04;
      var squareTileXml = TileUpdateManager.GetTemplateContent(squareTemplate);
      var squareTileTextAttributes = squareTileXml.SelectSingleNode("//text[@id='1']");
      squareTileTextAttributes.AppendChild(squareTileXml.CreateTextNode(content));
      var node = tileXml.ImportNode(squareTileXml.SelectSingleNode("//binding"), true);
      tileXml.SelectSingleNode("//visual").AppendChild(node);

    Step 4: Feed the modified tile XML into a new TileNotification instance

    Now that the XML data has been updated with the desired text and images, it’s time to load the XmlDocument object into a new TileNotification instance:

    HTML/JavaScript

      var tileNotification = new notifications.TileNotification(tileXml);

    XAML/C#

      var tileNotification = new TileNotification(tileXml);

    Step 5: Feed the TileNotification instance into the Update() method of the TileUpdateManager

    Once the TileNotification instance has been created and the XmlDocument has been passed to its constructor, it needs to be passed to the Update() method of a TileUpdater in order to be shown on the Windows 8 start screen:

    HTML/JavaScript

      notifications.TileUpdateManager.createTileUpdaterForApplication().update(tileNotification);

    XAML/C#

      TileUpdateManager.CreateTileUpdaterForApplication().Update(tileNotification);

    Once the tile notification is updated it’ll show up on the start screen. An example of the wide and square tiles created with the included demo code is shown next. Download the HTML/JavaScript and XAML/C# sample application here. In the next post in this series I’ll walk through how to queue multiple tiles and clear a queue.
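    Putting the pieces together, here is a minimal XAML/C# sketch that combines the five steps into a single helper method so the whole flow can be seen in one place. The class name TileHelper, the method name UpdateLiveTile, and the Assets path are illustrative assumptions only.

      using Windows.Data.Xml.Dom;
      using Windows.UI.Notifications;

      public static class TileHelper
      {
          // Sketch only: performs steps 1-5 from the post in one call.
          // The wide tile shows an image plus text; the square tile shows text only.
          public static void UpdateLiveTile(string content, string imageFile)
          {
              // Step 1: pick a wide template from the TileTemplateType enumeration.
              var template = TileTemplateType.TileWideImageAndText01;

              // Step 2: load the template's XML into an XmlDocument.
              XmlDocument tileXml = TileUpdateManager.GetTemplateContent(template);

              // Step 3: fill in the <text> and <image> elements of the wide tile.
              tileXml.SelectSingleNode("//text[@id='1']")
                     .AppendChild(tileXml.CreateTextNode(content));
              var image = (XmlElement)tileXml.SelectSingleNode("//image[@id='1']");
              image.SetAttribute("src", "ms-appx:///Assets/" + imageFile);
              image.SetAttribute("alt", "Live Tile");

              // Build a square tile as well and append its <binding> so both sizes update.
              XmlDocument squareXml =
                  TileUpdateManager.GetTemplateContent(TileTemplateType.TileSquareText04);
              squareXml.SelectSingleNode("//text[@id='1']")
                       .AppendChild(squareXml.CreateTextNode(content));
              var binding = tileXml.ImportNode(squareXml.SelectSingleNode("//binding"), true);
              tileXml.SelectSingleNode("//visual").AppendChild(binding);

              // Steps 4 and 5: wrap the XML in a TileNotification and hand it to the updater.
              var notification = new TileNotification(tileXml);
              TileUpdateManager.CreateTileUpdaterForApplication().Update(notification);
          }
      }

    A call such as TileHelper.UpdateLiveTile("Sales are up 15%", "logo.png") could then be made wherever the app refreshes its data.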


  • Camera doesn't move in OpenGL with Qt

    - by hugo
    Here is my code, as my subject indicates i have implemented a camera but i couldnt make it move,Thanks in advance. #define PI_OVER_180 0.0174532925f define GL_CLAMP_TO_EDGE 0x812F include "metinalifeyyaz.h" include include include include include include include metinalifeyyaz::metinalifeyyaz(QWidget *parent) : QGLWidget(parent) { this->setFocusPolicy(Qt:: StrongFocus); time = QTime::currentTime(); timer = new QTimer(this); timer->setSingleShot(true); connect(timer, SIGNAL(timeout()), this, SLOT(updateGL())); xpos = yrot = zpos = 0; walkbias = walkbiasangle = lookupdown = 0.0f; keyUp = keyDown = keyLeft = keyRight = keyPageUp = keyPageDown = false; } void metinalifeyyaz::drawBall() { //glTranslatef(6,0,4); glutSolidSphere(0.10005,300,30); } metinalifeyyaz:: ~metinalifeyyaz(){ glDeleteTextures(1,texture); } void metinalifeyyaz::initializeGL(){ glShadeModel(GL_SMOOTH); glClearColor(1.0,1.0,1.0,0.5); glClearDepth(1.0f); glEnable(GL_DEPTH_TEST); glEnable(GL_TEXTURE_2D); glDepthFunc(GL_LEQUAL); glClearColor(1.0,1.0,1.0,1.0); glShadeModel(GL_SMOOTH); GLfloat mat_specular[]={1.0,1.0,1.0,1.0}; GLfloat mat_shininess []={30.0}; GLfloat light_position[]={1.0,1.0,1.0}; glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular); glMaterialfv(GL_FRONT,GL_SHININESS,mat_shininess); glLightfv(GL_LIGHT0, GL_POSITION, light_position); glEnable(GL_LIGHT0); glEnable(GL_LIGHTING); QImage img1 = convertToGLFormat(QImage(":/new/prefix1/halisaha2.bmp")); QImage img2 = convertToGLFormat(QImage(":/new/prefix1/white.bmp")); glGenTextures(2,texture); glBindTexture(GL_TEXTURE_2D, texture[0]); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, img1.width(), img1.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, img1.bits()); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glBindTexture(GL_TEXTURE_2D, texture[1]); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, img2.width(), img2.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, img2.bits()); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); // Really nice perspective calculations } void metinalifeyyaz::resizeGL(int w, int h){ if(h==0) h=1; glViewport(0,0,w,h); glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(45.0f, static_cast<GLfloat>(w)/h,0.1f,100.0f); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); } void metinalifeyyaz::paintGL(){ movePlayer(); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glLoadIdentity(); GLfloat xtrans = -xpos; GLfloat ytrans = -walkbias - 0.50f; GLfloat ztrans = -zpos; GLfloat sceneroty = 360.0f - yrot; glLoadIdentity(); glRotatef(lookupdown, 1.0f, 0.0f, 0.0f); glRotatef(sceneroty, 0.0f, 1.0f, 0.0f); glTranslatef(xtrans, ytrans+50, ztrans-130); glLoadIdentity(); glTranslatef(1.0f,0.0f,-18.0f); glRotatef(45,1,0,0); drawScene(); int delay = time.msecsTo(QTime::currentTime()); if (delay == 0) delay = 1; time = QTime::currentTime(); timer->start(qMax(0,10 - delay)); } void metinalifeyyaz::movePlayer() { if (keyUp) { xpos -= sin(yrot * PI_OVER_180) * 0.5f; zpos -= cos(yrot * PI_OVER_180) * 0.5f; if (walkbiasangle >= 360.0f) walkbiasangle = 0.0f; else walkbiasangle += 7.0f; walkbias = sin(walkbiasangle * 
PI_OVER_180) / 10.0f; } else if (keyDown) { xpos += sin(yrot * PI_OVER_180)*0.5f; zpos += cos(yrot * PI_OVER_180)*0.5f ; if (walkbiasangle <= 7.0f) walkbiasangle = 360.0f; else walkbiasangle -= 7.0f; walkbias = sin(walkbiasangle * PI_OVER_180) / 10.0f; } if (keyLeft) yrot += 0.5f; else if (keyRight) yrot -= 0.5f; if (keyPageUp) lookupdown -= 0.5; else if (keyPageDown) lookupdown += 0.5; } void metinalifeyyaz::keyPressEvent(QKeyEvent *event) { switch (event->key()) { case Qt::Key_Escape: close(); break; case Qt::Key_F1: setWindowState(windowState() ^ Qt::WindowFullScreen); break; default: QGLWidget::keyPressEvent(event); case Qt::Key_PageUp: keyPageUp = true; break; case Qt::Key_PageDown: keyPageDown = true; break; case Qt::Key_Left: keyLeft = true; break; case Qt::Key_Right: keyRight = true; break; case Qt::Key_Up: keyUp = true; break; case Qt::Key_Down: keyDown = true; break; } } void metinalifeyyaz::changeEvent(QEvent *event) { switch (event->type()) { case QEvent::WindowStateChange: if (windowState() == Qt::WindowFullScreen) setCursor(Qt::BlankCursor); else unsetCursor(); break; default: break; } } void metinalifeyyaz::keyReleaseEvent(QKeyEvent *event) { switch (event->key()) { case Qt::Key_PageUp: keyPageUp = false; break; case Qt::Key_PageDown: keyPageDown = false; break; case Qt::Key_Left: keyLeft = false; break; case Qt::Key_Right: keyRight = false; break; case Qt::Key_Up: keyUp = false; break; case Qt::Key_Down: keyDown = false; break; default: QGLWidget::keyReleaseEvent(event); } } void metinalifeyyaz::drawScene(){ glBegin(GL_QUADS); glNormal3f(0.0f,0.0f,1.0f); // glColor3f(0,0,1); //back glVertex3f(-6,0,-4); glVertex3f(-6,-0.5,-4); glVertex3f(6,-0.5,-4); glVertex3f(6,0,-4); glEnd(); glBegin(GL_QUADS); glNormal3f(0.0f,0.0f,-1.0f); //front glVertex3f(6,0,4); glVertex3f(6,-0.5,4); glVertex3f(-6,-0.5,4); glVertex3f(-6,0,4); glEnd(); glBegin(GL_QUADS); glNormal3f(-1.0f,0.0f,0.0f); // glColor3f(0,0,1); //left glVertex3f(-6,0,4); glVertex3f(-6,-0.5,4); glVertex3f(-6,-0.5,-4); glVertex3f(-6,0,-4); glEnd(); glBegin(GL_QUADS); glNormal3f(1.0f,0.0f,0.0f); // glColor3f(0,0,1); //right glVertex3f(6,0,-4); glVertex3f(6,-0.5,-4); glVertex3f(6,-0.5,4); glVertex3f(6,0,4); glEnd(); glBindTexture(GL_TEXTURE_2D, texture[0]); glBegin(GL_QUADS); glNormal3f(0.0f,1.0f,0.0f);//top glTexCoord2f(1.0f,0.0f); glVertex3f(6,0,-4); glTexCoord2f(1.0f,1.0f); glVertex3f(6,0,4); glTexCoord2f(0.0f,1.0f); glVertex3f(-6,0,4); glTexCoord2f(0.0f,0.0f); glVertex3f(-6,0,-4); glEnd(); glBegin(GL_QUADS); glNormal3f(0.0f,-1.0f,0.0f); //glColor3f(0,0,1); //bottom glVertex3f(6,-0.5,-4); glVertex3f(6,-0.5,4); glVertex3f(-6,-0.5,4); glVertex3f(-6,-0.5,-4); glEnd(); // glPushMatrix(); glBindTexture(GL_TEXTURE_2D, texture[1]); glBegin(GL_QUADS); glNormal3f(1.0f,0.0f,0.0f); glTexCoord2f(1.0f,0.0f); //right far goal post front face glVertex3f(5,0.5,-0.95); glTexCoord2f(1.0f,1.0f); glVertex3f(5,0,-0.95); glTexCoord2f(0.0f,1.0f); glVertex3f(5,0,-1); glTexCoord2f(0.0f,0.0f); glVertex3f(5, 0.5, -1); glColor3f(1,1,1); //right far goal post back face glVertex3f(5.05,0.5,-0.95); glVertex3f(5.05,0,-0.95); glVertex3f(5.05,0,-1); glVertex3f(5.05, 0.5, -1); glColor3f(1,1,1); //right far goal post left face glVertex3f(5,0.5,-1); glVertex3f(5,0,-1); glVertex3f(5.05,0,-1); glVertex3f(5.05, 0.5, -1); glColor3f(1,1,1); //right far goal post right face glVertex3f(5.05,0.5,-0.95); glVertex3f(5.05,0,-0.95); glVertex3f(5,0,-0.95); glVertex3f(5, 0.5, -0.95); glColor3f(1,1,1); //right near goal post front face glVertex3f(5,0.5,0.95); 
glVertex3f(5,0,0.95); glVertex3f(5,0,1); glVertex3f(5,0.5, 1); glColor3f(1,1,1); //right near goal post back face glVertex3f(5.05,0.5,0.95); glVertex3f(5.05,0,0.95); glVertex3f(5.05,0,1); glVertex3f(5.05,0.5, 1); glColor3f(1,1,1); //right near goal post left face glVertex3f(5,0.5,1); glVertex3f(5,0,1); glVertex3f(5.05,0,1); glVertex3f(5.05,0.5, 1); glColor3f(1,1,1); //right near goal post right face glVertex3f(5.05,0.5,0.95); glVertex3f(5.05,0,0.95); glVertex3f(5,0,0.95); glVertex3f(5,0.5, 0.95); glColor3f(1,1,1); //right crossbar front face glVertex3f(5,0.55,-1); glVertex3f(5,0.55,1); glVertex3f(5,0.5,1); glVertex3f(5,0.5,-1); glColor3f(1,1,1); //right crossbar back face glVertex3f(5.05,0.55,-1); glVertex3f(5.05,0.55,1); glVertex3f(5.05,0.5,1); glVertex3f(5.05,0.5,-1); glColor3f(1,1,1); //right crossbar bottom face glVertex3f(5.05,0.5,-1); glVertex3f(5.05,0.5,1); glVertex3f(5,0.5,1); glVertex3f(5,0.5,-1); glColor3f(1,1,1); //right crossbar top face glVertex3f(5.05,0.55,-1); glVertex3f(5.05,0.55,1); glVertex3f(5,0.55,1); glVertex3f(5,0.55,-1); glColor3f(1,1,1); //left far goal post front face glVertex3f(-5,0.5,-0.95); glVertex3f(-5,0,-0.95); glVertex3f(-5,0,-1); glVertex3f(-5, 0.5, -1); glColor3f(1,1,1); //right far goal post back face glVertex3f(-5.05,0.5,-0.95); glVertex3f(-5.05,0,-0.95); glVertex3f(-5.05,0,-1); glVertex3f(-5.05, 0.5, -1); glColor3f(1,1,1); //right far goal post left face glVertex3f(-5,0.5,-1); glVertex3f(-5,0,-1); glVertex3f(-5.05,0,-1); glVertex3f(-5.05, 0.5, -1); glColor3f(1,1,1); //right far goal post right face glVertex3f(-5.05,0.5,-0.95); glVertex3f(-5.05,0,-0.95); glVertex3f(-5,0,-0.95); glVertex3f(-5, 0.5, -0.95); glColor3f(1,1,1); //left near goal post front face glVertex3f(-5,0.5,0.95); glVertex3f(-5,0,0.95); glVertex3f(-5,0,1); glVertex3f(-5,0.5, 1); glColor3f(1,1,1); //right near goal post back face glVertex3f(-5.05,0.5,0.95); glVertex3f(-5.05,0,0.95); glVertex3f(-5.05,0,1); glVertex3f(-5.05,0.5, 1); glColor3f(1,1,1); //right near goal post left face glVertex3f(-5,0.5,1); glVertex3f(-5,0,1); glVertex3f(-5.05,0,1); glVertex3f(-5.05,0.5, 1); glColor3f(1,1,1); //right near goal post right face glVertex3f(-5.05,0.5,0.95); glVertex3f(-5.05,0,0.95); glVertex3f(-5,0,0.95); glVertex3f(-5,0.5, 0.95); glColor3f(1,1,1); //left crossbar front face glVertex3f(-5,0.55,-1); glVertex3f(-5,0.55,1); glVertex3f(-5,0.5,1); glVertex3f(-5,0.5,-1); glColor3f(1,1,1); //right crossbar back face glVertex3f(-5.05,0.55,-1); glVertex3f(-5.05,0.55,1); glVertex3f(-5.05,0.5,1); glVertex3f(-5.05,0.5,-1); glColor3f(1,1,1); //right crossbar bottom face glVertex3f(-5.05,0.5,-1); glVertex3f(-5.05,0.5,1); glVertex3f(-5,0.5,1); glVertex3f(-5,0.5,-1); glColor3f(1,1,1); //right crossbar top face glVertex3f(-5.05,0.55,-1); glVertex3f(-5.05,0.55,1); glVertex3f(-5,0.55,1); glVertex3f(-5,0.55,-1); glEnd(); // glPopMatrix(); // glPushMatrix(); // glTranslatef(0,0,0); // glutSolidSphere(0.10005,500,30); // glPopMatrix(); }


  • Simple GET operation with JSON data in ADF Mobile

    - by PadmajaBhat
    Use case: This sample uses a RESTful service that, among other methods, contains a GET method that fetches the details of an employee with a given employee ID. The data is fetched in JSON format. This RESTful service is then invoked via ADF Mobile, and the JSON data thus obtained is parsed and rendered on the mobile device in a table.

    Prerequisite: Download JDev build JDEVADF_11.1.2.4.0_GENERIC_130421.1600.6436.1 or higher with mobile support.

    Steps: Run EmployeeService.java in JSONService.zip. This is a simple service with a method, getEmpById(id), that takes an employee ID as a parameter and produces the employee details in JSON format. Copy the target URL generated on running this service. The target URL will be as shown below:

      http://127.0.0.1:7101/JSONService-Project1-context-root/jersey/project1

    Now, let us invoke this service in our mobile application. For this, create an ADF Mobile application. Name the application JSON_SearchByEmpID and finish the wizard. Now, let us create a connection to our service. To do this, we create a URL Connection. Invoke the New Gallery wizard on the ApplicationController project and select the URL Connection option. In the Create URL Connection window, enter the connection name as ‘conn’. For the URL endpoint, supply the URL you copied earlier on running the service. Remember to use your system IP instead of localhost. Test the connection and click OK. At this point, a connection to the REST service has been created.

    Since JSON data is not supported directly in the Web Service Data Control (WSDC) wizard, we need to invoke the operation through Java code using RestServiceAdapter. For this, in the ApplicationController project, create a Java class called ‘EmployeeDC’. We will be creating a data control (DC) from this class. Add the following code to the newly created class to invoke the getEmpById method.

       1  public Employee fetchEmpDetails(){
       2      RestServiceAdapter restServiceAdapter = Model.createRestServiceAdapter();
       3      restServiceAdapter.clearRequestProperties();
       4      restServiceAdapter.setConnectionName("conn"); //URL connection created with this name
       5      restServiceAdapter.setRequestType(RestServiceAdapter.REQUEST_TYPE_GET);
       6      restServiceAdapter.addRequestProperty("Content-Type", "application/json");
       7      restServiceAdapter.addRequestProperty("Accept", "application/json; charset=UTF-8");
       8      restServiceAdapter.setRetryLimit(0);
       9      restServiceAdapter.setRequestURI("/getById/"+inputEmpID);
      10      String response = "";
      11      JSONBeanSerializationHelper jsonHelper = new JSONBeanSerializationHelper();
      12      try {
      13          response = restServiceAdapter.send(""); //Invoke the GET operation
      14          System.out.println("Response received!");
      15          Employee responseObject = (Employee) jsonHelper.fromJSON(Employee.class, response);
      16          return responseObject;
      17      } catch (Exception e) {
      18      }
      19      return null;
      20  }

    Here, in lines 2 to 9, we create the RestServiceAdapter and set the various properties required to invoke the web service. At line 4, we point to the connection ‘conn’ created previously. Since we want to invoke the getEmpById method of the service, which is defined by the URL http://IP:7101/REST_Sanity_JSON-Project1-context-root/resources/project1/getById/{id}, we update the request URI to point to this URI at line 9. inputEmpID is a variable that will hold the value input by the user for the employee ID; we will be creating it in a while. As the method we are invoking is a GET operation that consumes JSON data, these properties are set in lines 5 through 7. Finally, we send the request in line 13. In line 15, we use jsonHelper.fromJSON to convert the received JSON data to a Java object. The structure of the required Java object is defined in the class Employee.java, which is provided later. Since the response from our service is a simple one consisting of attributes like employee ID, name, designation, etc., we just return this parsed response (line 16) and use it to create the DC.

    As mentioned previously, we would like the user to input the employee ID for which to perform the search. So, in the same class, define a variable inputEmpID that will hold the value input by the user, and generate accessors for it.

    Lastly, we need to create the Employee class. The Employee class defines how we want to structure the JSON object received from the service. To design the Employee class, run the service's method in the browser or via the analyzer using 1 as the path parameter. This will give you the output JSON structure. Ours is a simple service that returns a JSONObject with a set of data. Hence, the Employee class will just contain this set of data defined with the proper data types. Create Employee.java in the same project as EmployeeDC.java and write the below code:

      package application;

      import oracle.adfmf.java.beans.PropertyChangeListener;
      import oracle.adfmf.java.beans.PropertyChangeSupport;

      public class Employee {
          private String dept;
          private String desig;
          private int id;
          private String name;
          private int salary;
          private PropertyChangeSupport propertyChangeSupport = new PropertyChangeSupport(this);

          public void setDept(String dept) {
              String oldDept = this.dept;
              this.dept = dept;
              propertyChangeSupport.firePropertyChange("dept", oldDept, dept);
          }

          public String getDept() {
              return dept;
          }

          public void setDesig(String desig) {
              String oldDesig = this.desig;
              this.desig = desig;
              propertyChangeSupport.firePropertyChange("desig", oldDesig, desig);
          }

          public String getDesig() {
              return desig;
          }

          public void setId(int id) {
              int oldId = this.id;
              this.id = id;
              propertyChangeSupport.firePropertyChange("id", oldId, id);
          }

          public int getId() {
              return id;
          }

          public void setName(String name) {
              String oldName = this.name;
              this.name = name;
              propertyChangeSupport.firePropertyChange("name", oldName, name);
          }

          public String getName() {
              return name;
          }

          public void setSalary(int salary) {
              int oldSalary = this.salary;
              this.salary = salary;
              propertyChangeSupport.firePropertyChange("salary", oldSalary, salary);
          }

          public int getSalary() {
              return salary;
          }

          public void addPropertyChangeListener(PropertyChangeListener l) {
              propertyChangeSupport.addPropertyChangeListener(l);
          }

          public void removePropertyChangeListener(PropertyChangeListener l) {
              propertyChangeSupport.removePropertyChangeListener(l);
          }
      }

    Now, let us create a DC out of EmployeeDC.java. The DC shown below is created. Now you can design the mobile page as usual and invoke the operation of the service. To design the page, go to the ViewController project and locate adfmf-feature.xml. Create a new feature called ‘SearchFeature’ by clicking the plus icon. Go to the Content tab and add an AMX page; call it SearchPage.amx. Remove the primary and secondary buttons, as we don’t need them, and rename the header. Drag and drop inputEmpID from the DC palette onto the Panel Page in the structure pane as an Input Text with Label. Next, drop the fetchEmpDetails method as an ADF Button. For a change, let us display the output in a table component instead of the usual form.
    However, you will notice that if you drag and drop Employee onto the structure pane, there is no option for an ADF Mobile Table. Hence, we will need to create the table on our own. To do this, let us first drop Employee as an ADF Read-Only Form. This step is needed to get the required bindings; we will be deleting this form in a while. Now, from the Component palette, search for ‘Table Layout’ and drag and drop it below the command button. Within the Table Layout, insert ‘Row Layout’ and ‘Cell Format’ components. The final table structure should be as shown below. Here, we have also defined some inline styling to render the UI in a nice manner.

      <amx:tableLayout id="tl1" borderWidth="2" halign="center"
                       inlineStyle="vertical-align:middle;" width="100%" cellPadding="10">
        <amx:rowLayout id="rl1">
          <amx:cellFormat id="cf1" width="30%">
            <amx:outputText value="#{bindings.dept.hints.label}" id="ot7"
                            inlineStyle="color:rgb(0,148,231);"/>
          </amx:cellFormat>
          <amx:cellFormat id="cf2">
            <amx:outputText value="#{bindings.dept.inputValue}" id="ot8"/>
          </amx:cellFormat>
        </amx:rowLayout>
        <amx:rowLayout id="rl2">
          <amx:cellFormat id="cf3" width="30%">
            <amx:outputText value="#{bindings.desig.hints.label}" id="ot9"
                            inlineStyle="color:rgb(0,148,231);"/>
          </amx:cellFormat>
          <amx:cellFormat id="cf4">
            <amx:outputText value="#{bindings.desig.inputValue}" id="ot10"/>
          </amx:cellFormat>
        </amx:rowLayout>
        <amx:rowLayout id="rl3">
          <amx:cellFormat id="cf5" width="30%">
            <amx:outputText value="#{bindings.id.hints.label}" id="ot11"
                            inlineStyle="color:rgb(0,148,231);"/>
          </amx:cellFormat>
          <amx:cellFormat id="cf6">
            <amx:outputText value="#{bindings.id.inputValue}" id="ot12"/>
          </amx:cellFormat>
        </amx:rowLayout>
        <amx:rowLayout id="rl4">
          <amx:cellFormat id="cf7" width="30%">
            <amx:outputText value="#{bindings.name.hints.label}" id="ot13"
                            inlineStyle="color:rgb(0,148,231);"/>
          </amx:cellFormat>
          <amx:cellFormat id="cf8">
            <amx:outputText value="#{bindings.name.inputValue}" id="ot14"/>
          </amx:cellFormat>
        </amx:rowLayout>
        <amx:rowLayout id="rl5">
          <amx:cellFormat id="cf9" width="30%">
            <amx:outputText value="#{bindings.salary.hints.label}" id="ot15"
                            inlineStyle="color:rgb(0,148,231);"/>
          </amx:cellFormat>
          <amx:cellFormat id="cf10">
            <amx:outputText value="#{bindings.salary.inputValue}" id="ot16"/>
          </amx:cellFormat>
        </amx:rowLayout>
      </amx:tableLayout>

    The values used in the output text of the table come from the bindings obtained from the ADF form created earlier. As we have used the bindings and don’t need the form anymore, let us delete the form.

    One last thing before we deploy: when the user changes the employee ID, we want to clear the table contents. For this, we associate a value change listener with the input text box. Click New in the resulting dialog to create a managed bean. Next, we create a method within the managed bean; for this, click the New button associated with the method and call the method ‘empIDChange’. Open myClass.java and write the below code in empIDChange():

      public void empIDChange(ValueChangeEvent valueChangeEvent) {
          // Add event code here...
          // Reset the values to blanks when the employee id changes
          AdfELContext adfELContext = AdfmfJavaUtilities.getAdfELContext();
          ValueExpression ve = AdfmfJavaUtilities.getValueExpression("#{bindings.dept.inputValue}", String.class);
          ve.setValue(adfELContext, "");
          ve = AdfmfJavaUtilities.getValueExpression("#{bindings.desig.inputValue}", String.class);
          ve.setValue(adfELContext, "");
          ve = AdfmfJavaUtilities.getValueExpression("#{bindings.id.inputValue}", int.class);
          ve.setValue(adfELContext, "");
          ve = AdfmfJavaUtilities.getValueExpression("#{bindings.name.inputValue}", String.class);
          ve.setValue(adfELContext, "");
          ve = AdfmfJavaUtilities.getValueExpression("#{bindings.salary.inputValue}", int.class);
          ve.setValue(adfELContext, "");
      }

    That’s it. Deploy the application to an Android emulator or device. Some snippets from the app are shown next.
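    As a rough sketch, a Jersey (JAX-RS) resource matching the URL pattern above might look like the following; the package name, the hard-coded sample data, and returning the JSON as a plain string are assumptions for illustration, and the actual EmployeeService.java is the one shipped in JSONService.zip.

      package project1;

      import javax.ws.rs.GET;
      import javax.ws.rs.Path;
      import javax.ws.rs.PathParam;
      import javax.ws.rs.Produces;

      // Hypothetical sketch of the GET resource invoked by the ADF Mobile app.
      @Path("project1")
      public class EmployeeService {

          @GET
          @Path("getById/{id}")
          @Produces("application/json")
          public String getEmpById(@PathParam("id") int id) {
              // A real service would look the employee up in a data store;
              // hard-coded values are used here purely for illustration.
              return "{\"id\":" + id + ",\"name\":\"John Doe\",\"desig\":\"Developer\"," +
                     "\"dept\":\"ST\",\"salary\":10000}";
          }
      }

    The attribute names in the JSON (id, name, desig, dept, salary) line up with the fields of the Employee class above, which is what lets JSONBeanSerializationHelper map the response onto it.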


  • 12c - Utl_Call_Stack...

    - by noreply(at)blogger.com (Thomas Kyte)
    Over the next couple of months, I'll be writing about some cool new little features of Oracle Database 12c - things that might not make the front page of Oracle.com. I'm going to start with a new package - UTL_CALL_STACK.

    In the past, developers have had access to three functions to try to figure out "where the heck am I in my code":

    dbms_utility.format_call_stack
    dbms_utility.format_error_backtrace
    dbms_utility.format_error_stack

    These routines, while useful, were somewhat limited. Let's look at the format_call_stack routine for a reason why. Here is a procedure that will just print out the current call stack for us:

      ops$tkyte%ORA12CR1> create or replace
        2  procedure Print_Call_Stack
        3  is
        4  begin
        5    DBMS_Output.Put_Line(DBMS_Utility.Format_Call_Stack());
        6  end;
        7  /

      Procedure created.

    Now, if we have a package - with nested functions and even duplicated function names:

      ops$tkyte%ORA12CR1> create or replace
        2  package body Pkg is
        3    procedure p
        4    is
        5      procedure q
        6      is
        7        procedure r
        8        is
        9          procedure p is
       10          begin
       11            Print_Call_Stack();
       12            raise program_error;
       13          end p;
       14        begin
       15          p();
       16        end r;
       17      begin
       18        r();
       19      end q;
       20    begin
       21      q();
       22    end p;
       23  end Pkg;
       24  /

      Package body created.

    When we execute the procedure PKG.P - we'll see as a result:

      ops$tkyte%ORA12CR1> exec pkg.p
      ----- PL/SQL Call Stack -----
        object      line  object
        handle    number  name
      0x6e891528         4  procedure OPS$TKYTE.PRINT_CALL_STACK
      0x6ec4a7c0        10  package body OPS$TKYTE.PKG
      0x6ec4a7c0        14  package body OPS$TKYTE.PKG
      0x6ec4a7c0        17  package body OPS$TKYTE.PKG
      0x6ec4a7c0        20  package body OPS$TKYTE.PKG
      0x76439070         1  anonymous block
      BEGIN pkg.p; END;

      *
      ERROR at line 1:
      ORA-06501: PL/SQL: program error
      ORA-06512: at "OPS$TKYTE.PKG", line 11
      ORA-06512: at "OPS$TKYTE.PKG", line 14
      ORA-06512: at "OPS$TKYTE.PKG", line 17
      ORA-06512: at "OPS$TKYTE.PKG", line 20
      ORA-06512: at line 1

    The first part (the PL/SQL Call Stack) is the output from format_call_stack, whereas the rest is the error message returned to the client application (it would also be available to you via the format_error_backtrace API call). As you can see, it contains useful information, but to use it you would need to parse it - and that can be trickier than it seems. The format of those strings is not set in stone; they have changed over the years (I wrote the "who_am_i" and "who_called_me" functions by parsing these strings - trust me, they change over time!).

    Starting in 12c, we'll have structured access to the call stack and a series of API calls to interrogate this structure.
    I'm going to rewrite the print_call_stack function as follows:

      ops$tkyte%ORA12CR1> create or replace
        2  procedure Print_Call_Stack
        3  as
        4    Depth pls_integer := UTL_Call_Stack.Dynamic_Depth();
        5
        6    procedure headers
        7    is
        8    begin
        9        dbms_output.put_line( 'Lexical   Depth   Line    Name' );
       10        dbms_output.put_line( 'Depth             Number      ' );
       11        dbms_output.put_line( '-------   -----   ----    ----' );
       12    end headers;
       13    procedure print
       14    is
       15    begin
       16        headers;
       17        for j in reverse 1..Depth loop
       18          DBMS_Output.Put_Line(
       19            rpad( utl_call_stack.lexical_depth(j), 10 ) ||
       20            rpad( j, 7) ||
       21            rpad( To_Char(UTL_Call_Stack.Unit_Line(j), '99'), 9 ) ||
       22            UTL_Call_Stack.Concatenate_Subprogram
       23                       (UTL_Call_Stack.Subprogram(j)));
       24        end loop;
       25    end;
       26  begin
       27    print;
       28  end;
       29  /

    Here we are able to figure out what 'depth' we are in the code (utl_call_stack.dynamic_depth) and then walk up the stack using a loop. We will print out the lexical_depth, along with the line number within the unit we were executing, plus the unit name. And not just any unit name, but the fully qualified name, all of the way down to the subprogram name within a package. Not only that - but down to the subprogram name within a subprogram name within a subprogram name. For example, running the PKG.P procedure again results in:

      ops$tkyte%ORA12CR1> exec pkg.p
      Lexical   Depth   Line    Name
      Depth             Number
      -------   -----   ----    ----
      1         6       20      PKG.P
      2         5       17      PKG.P.Q
      3         4       14      PKG.P.Q.R
      4         3       10      PKG.P.Q.R.P
      0         2       26      PRINT_CALL_STACK
      1         1       17      PRINT_CALL_STACK.PRINT
      BEGIN pkg.p; END;

      *
      ERROR at line 1:
      ORA-06501: PL/SQL: program error
      ORA-06512: at "OPS$TKYTE.PKG", line 11
      ORA-06512: at "OPS$TKYTE.PKG", line 14
      ORA-06512: at "OPS$TKYTE.PKG", line 17
      ORA-06512: at "OPS$TKYTE.PKG", line 20
      ORA-06512: at line 1

    This time we get much more than just a line number and a package name, as we did previously with format_call_stack. We not only got the line number and package (unit) name - we got the names of the subprograms, and we can see that P called Q called R called P as nested subprograms. Also note that we can see a 'truer' calling level with the lexical depth: we "stepped" out of the package to call print_call_stack, and that in turn called another nested subprogram.

    This new package will be a nice addition to everyone's error logging packages. Of course there are other functions in there to get owner names, the edition in effect when the code was executed, and more. See UTL_CALL_STACK for all of the details.
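    As a sketch of how this could feed an error logging routine, the following procedure (an illustration, not from the post) uses the error-stack and backtrace functions in UTL_CALL_STACK to record what failed and where. The error_log table and its columns are assumed to exist for the example.

      create or replace procedure log_error
      as
        pragma autonomous_transaction;
      begin
        -- Error stack: the ORA- codes and messages currently in effect.
        for i in 1 .. utl_call_stack.error_depth loop
          insert into error_log ( logged_at, entry_type, position, detail )
          values ( systimestamp, 'ERROR', i,
                   utl_call_stack.error_number(i) || ': ' || utl_call_stack.error_msg(i) );
        end loop;

        -- Backtrace: the unit and line each error propagated through.
        for i in 1 .. utl_call_stack.backtrace_depth loop
          insert into error_log ( logged_at, entry_type, position, detail )
          values ( systimestamp, 'BACKTRACE', i,
                   utl_call_stack.backtrace_unit(i) || ' line ' || utl_call_stack.backtrace_line(i) );
        end loop;

        commit;
      end;
      /

    Such a procedure would typically be called from a WHEN OTHERS handler before re-raising the exception, so the structured stack is captured at the point of failure.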


  • CodePlex Daily Summary for Thursday, October 18, 2012

    CodePlex Daily Summary for Thursday, October 18, 2012Popular ReleasesGac Library -- C++ Utilities for GPU Accelerated GUI and Script: Gaclib 0.4.0.0: Gaclib.zip contains the following content GacUIDemo Demo solution and projects Public Source GacUI library Document HTML document. Please start at reference_gacui.html Content Necessary CSS/JPG files for document. Improvements to the previous release Added Windows 8 theme (except tab control, will be provided for the next release) GacUI will choose to use Windows 7 theme or Windows 8 theme as default theme according to OS version Added 1 new demos Template.Window.CustomizedBorder ...Yahoo! UI Library: YUI Compressor for .Net: Version 2.1.1.0 - Sartha (BugFix): - Revered back the embedding of the 2x assemblies.Visual Studio Team Foundation Server Branching and Merging Guide: v2.1 - Visual Studio 2012: Welcome to the Branching and Merging Guide What is new? The Version Control specific discussions have been moved from the Branching and Merging Guide to the new Advanced Version Control Guide. The Branching and Merging Guide and the Advanced Version Control Guide have been ported to the new document style. See http://blogs.msdn.com/b/willy-peter_schaub/archive/2012/10/17/alm-rangers-raising-the-quality-bar-for-documentation-part-2.aspx for more information. Quality-Bar Details Documentatio...D3 Loot Tracker: 1.5.5: Compatible with 1.05.Write Once, Play Everywhere: MonoGame 3.0 (BETA): This is a beta release of the up coming MonoGame 3.0. It contains an Installer which will install a binary release of MonoGame on windows boxes with the following platforms. Windows, Linux, Android and Windows 8. If you need to build for iOS or Mac you will need to get the source code at this time as the installers for those platforms are not available yet. The installer will also install a bunch of Project templates for Visual Studio 2010 , 2012 and MonoDevleop. For those of you wish...Windawesome: Windawesome v1.4.1 x64: Fixed switching of applications across monitors Changed window flashing API (fix your config files) Added NetworkMonitorWidget (thanks to weiwen) Any issues/recommendations/requests for future versions? This is the 64-bit version of the release. Be sure to use that if you are on a 64-bit Windows. Works with "Required DLLs v3".CODE Framework: 4.0.21017.0: See change log in the Documentation section for details.Global Stock Exchange (Hobby Project): Global Stock Exchange - Invst Banking (Hobby Proj): Initial VersionMagelia WebStore Open-source Ecommerce software: Magelia WebStore 2.1: Add support for .net 4.0 to Magelia.Webstore.Client and StarterSite version 2.1.254.3 Scheduler Import & Export feature (for Professional and Entreprise Editions) UTC datetime and timezone support .net 4.5 and Visual Studio 2012 migration client magelia global refactoring release of a nugget package to help developers speed up development http://nuget.org/packages/Magelia.Webstore.Client optimization of the data update mechanism (a.k.a. "burst") Performance improvment of the d...JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.2.2: JayData is a unified data access library for JavaScript to CRUD + Query data from different sources like OData, MongoDB, WebSQL, SqLite, HTML5 localStorage, Facebook or YQL. The library can be integrated with Knockout.js or Sencha Touch 2 and can be used on Node.js as well. See it in action in this 6 minutes video Sencha Touch 2 example app using JayData: Netflix browser. 
What's new in JayData 1.2.2 For detailed release notes check the release notes. Revitalized IndexedDB providerNow you c...JaySvcUtil - generate JavaScript context from OData metadata: JaySvcUtil 1.2.2: You will need the JayData library to use contexts generated with JaySvcUtil! Get it from here: http://jaydata.codeplex.comVFPX: FoxcodePlus: FoxcodePlus - Visual Studio like extensions to Visual FoxPro IntelliSense.Droid Explorer: Droid Explorer 0.8.8.8 Beta: fixed the icon for packages on the desktop fixed the install dialog closing right when it starts removed the link to "set up the sdk for me" as this is no longer supported. fixed bug where the device selection dialog would show, even if there was only one device connected. fixed toolbar from having "gap" between other toolbar removed main menu items that do not have any menus Fiskalizacija za developere: FiskalizacijaDev 1.0: Prva verzija ovog projekta, još je uvijek oznacena kao BETA - ovo znaci da su naša testiranja prošla uspješno :) No, kako mi ne proizvodimo neki software za blagajne, tako sve ovo nije niti isprobano u "realnim" uvjetima - svaka je sugestija, primjedba ili prijava bug-a je dobrodošla. Za sve ovo koristite, molimo, Discussions ili Issue Tracker. U ovom trenutku runtime binary je raspoloživ kao Any CPU za .NET verzije 2.0. Javite ukoliko trebaju i verzije buildane za 32-bit/64-bit kao i za .N...Squiggle - A free open source LAN Messenger: Squiggle 3.2 (Development): NOTE: This is development release and not recommended for production use. This release is mainly for enabling extensibility and interoperability with other platforms. Support for plugins Support for extensions Communication layer and protocol is platform independent (ZeroMQ, ProtocolBuffers) Bug fixes New /invite command Edit the sent message Disable update check NOTE: This is development release and not recommended for production use.AcDown????? - AcDown Downloader Framework: AcDown????? v4.2: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown??????????????????,????????????????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ???? 32??64? ???Linux ????(1)????????Windows XP???,????????.NET Framework 2.0???(x86),?????"?????????"??? (2)???????????Linux???,????????Mono?? ??2...PHPExcel: PHPExcel 1.7.8: See Change Log for details of the new features and bugfixes included in this release, and methods that are now deprecated. Note changes to the PDF Writer: tcPDF is no longer bundled with PHPExcel, but should be installed separately if you wish to use that 3rd-Party library with PHPExcel. Alternatively, you can choose to use mPDF or DomPDF as PDF Rendering libraries instead: PHPExcel now provides a configurable wrapper allowing you a choice of PDF renderer. See the documentation, or the PDF s...DirectX Tool Kit: October 12, 2012: October 12, 2012 Added PrimitiveBatch for drawing user primitives Debug object names for all D3D resources (for PIX and debug layer leak reporting)Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.70: Fixed issue described in discussion #399087: variable references within case values weren't getting resolved.GoogleMap Control: GoogleMap Control 6.1: Some important bug fixes and couple of new features were added. There are no major changes to the sample website. 
Source code could be downloaded from the Source Code section selecting branch release-6.1. Thus just builds of GoogleMap Control are issued here in this release. Update 14.Oct.2012 - Client side access fixed NuGet Package GoogleMap Control 6.1 NuGet Package FeaturesBounds property to provide ability to create a map by center and bounds as well; Setting in markup <artem:Goog...New ProjectsAnonymous InfoPath Browser Forms Web Part: Web part for SharePoint 2007 and 2010 that enables the anonymous submission of InfoPath forms.ath sem5 gr3 proj1: just a simple app to learn how to work in groups, just a start in our programming careerAtif M DNNTaskManager: This is a simple task manager AWS for .NET Sample (Amazon EC2, S3, SQS, DynamoDB): Amazon Web Services (AWS) Sample for .NET (C#) with Asp.NET MVC and Web API. Including S3, DynamoDB, Elastic Beanstalk and SQSCodingWheels.DataTypes: DataTypes tries to make it easier for developers to have concrete typesafe objects for working with many common forms of data. Many times these data objects are just doubles or ints floating through your code with abbreviations on them describing what they represent.Display attachments (list view) SP 2010: Display attachments (SP 2010) is a field for all type lists without types: Document library, Image library, Links, Surveys.Donation Tracker: This is a school projectEveHQ : The Internet Spaceship Toolkit: Multi-faceted character application for Eve-Online. Includes pilot monitoring, skill queue planning, ship fitting, industry and more.EventConni: EventConniFileCloud.Net: FileCloud.NET is a .Net wrapper for http://filecloud.io/ service API.Foodies: FoodiesForce PowerPivot Refresh: This project contains an c# utility class (CustomDataRefreshUtils) to refresh a PowerPivot file in SharePoint 2010. Geometric Modeling: REnder a mesh with Visual Studio - C#Heavysoft Mince: Heavysoft Mince is an opportunity to enhance my skills and to show how free software can be used in enterprise-level systems.HP iLO Management Utility: A small utility to provide a nice interface for managing multiple servers over the iLO interface.iMeeting: iMeeting is a simple application built in Java. iTask: iTask for Windows 8 LamWebOS: this is test project.MackProjects_Musicallis: Projeto da disciplina de Técnicas de Programação Aplicada II da Universidade Presbiteriana Mackenzie - SP.Merge PDF: MergePDF is an easy and simple tool which could help you batch merge PDF files (any number you want) into one PDF file quickly. It is completely free. With MerNetGen: O NetGen é um projeto de geração automatizada de interface Web em ASP.NET a partir de modelo de classes do LINQ-TO-SQL, estando disponível como um Addin para o Visual Studio 2008.Other Ways: ?? ! ????!!!!!Physical Therapy Timer: A simple Windows application that makes it easy to set a countdown timer for a required time period, and keep track of how many times the timer has be run.PicoMax: This version of PicoMax has everything you need to use the Seeed OLED without the MS Graphics class. It can be used in Gadgeteer or a regular NETMF project.post tracker: ASP, WEB API, WINDOWS PhoneRandomEngine: Open Source Game Engine.Resident Raver: This is Windows 8 version of Resident Raver which I wrote for my Intro To HTML5 Game Dev book. It requires the Impact JavaScript game framework.RogueTS: RogueTS is a Roguelike engine written in TypeScript. 
It features randomly generated maps, basic lighting effects, collision detection and additional stub code.RonYeeStores: An online store project supporting C2C and B2C.ShangHaiTunnel monitor: shang hai tunnel projectSharePoint List Security: Free, open source set of features for SharePoint 2010 to enhance the OOTB list permissions by adding column level permissions for users and groups.Skywave Code Lines Counter: It is a tiny program which will count real written lines of code. Real lines means that we removed autogenerated lines, empty lines and single '{' or '}' lines (in C#) and ... Currently supports .cs and .vb files (for C# and VB). It will say how much you worked ;-)switchcontroles: HAT Switch control and tracking.Syscore: Steps: 1 - Definitions 2 - Requirements 3 - Projecting 4 - Delegation 5 - Prototyping 6 - Testing/Analyzing 7 - Finalizing 8 - Distribution 9 - Congratulationstarikscodeplexproject: tarikscodeplexprojectTaskManager04: This project is an effort to implement the coding examples given in the TASK MANAGER video series into a working DotNetNuke program.testc: this is a test ,heihei,yongshengtesttom10172012git01: fds fs fs fs tsUnit - TypeScript Unit Testing Framework: tsUnit is a unit testing framework for TypeScript, written in TypeScript. It allows you to encapsulate your test functions in classes and modules.WebGrease: WebGrease is a suite of tools for optimizing JavaScript, CSS files and images.

    Read the article

  • Using CSS3 media queries in HTML 5 pages

    - by nikolaosk
    This is going to be the seventh post in a series of posts regarding HTML 5. You can find the other posts here, here, here, here, here and here. In this post I will provide a hands-on example on how to use CSS 3 Media Queries in HTML 5 pages. This is a very important feature since nowadays lots of users view websites through their mobile devices. Web designers have been able to define media-specific style sheets for quite a while, but they were limited to the type of output. The output could only be Screen or Print. The way we used to do things before CSS 3 was to have separate CSS files and let the browser decide which style sheet to use. Please have a look at the snippet below - HTML 4 media queries <link rel="stylesheet" type="text/css" media="screen" href="styles.css"> <link rel="stylesheet" type="text/css" media="print" href="print-styles.css"> The browser determines which style to use. With CSS 3 we can have all media queries in one stylesheet. Media queries can determine the resolution of the device, the orientation of the device, the width and height of the device and the width and height of the browser window. We can also include CSS 3 media queries in separate stylesheets. To be absolutely clear, this is not (and could not be) a detailed tutorial on HTML 5. There are other great resources for that. Navigate to the excellent interactive tutorials of W3School. Another excellent resource is HTML 5 Doctor. Two very nice sites that show you what features and specifications are implemented by various browsers and their versions are http://caniuse.com/ and http://html5test.com/. At this time Chrome seems to support most of the HTML 5 specifications. Another excellent way to find out if the browser supports HTML 5 and CSS 3 features is to use the lightweight JavaScript library Modernizr. In this hands-on example I will be using Expression Web 4.0. This application is not free; you can use any HTML editor you like, such as Visual Studio 2012 Express edition. You can download it here. Before I go on with the actual demo I will use http://www.caniuse.com to check the support for CSS 3 Media Queries in the latest versions of modern browsers. Please have a look at the picture below. We see that all the latest versions of modern browsers support this feature, even IE 9. Let's move on with the actual demo. This is going to be a rather simple demo. I create a simple HTML 5 page. The markup follows and it is very easy to understand. This is a page with a 2-column layout. <!DOCTYPE html><html lang="en"> <head> <title>HTML 5, CSS3 and JQuery</title> <meta http-equiv="Content-Type" content="text/html;charset=utf-8" > <link rel="stylesheet" type="text/css" href="style.css"> </head> <body> <div id="header"> <h1>Learn cutting edge technologies</h1> <p>HTML 5, JQuery, CSS3</p> </div> <div id="main"> <div id="mainnews"> <div> <h2>HTML 5</h2> </div> <div> <p> HTML5 is the latest version of HTML and XHTML. The HTML standard defines a single language that can be written in HTML and XML. It attempts to solve issues found in previous iterations of HTML and addresses the needs of Web Applications, an area previously not adequately covered by HTML.
</p>          <div class="quote">            <h4>Do More with Less</h4>            <p>             jQuery is a fast and concise JavaScript Library that simplifies HTML document traversing, event handling, animating, and Ajax interactions for rapid web development.             </p>            </div>          <p>            The HTML5 test(html5test.com) score is an indication of how well your browser supports the upcoming HTML5 standard and related specifications. Even though the specification isn't finalized yet, all major browser manufacturers are making sure their browser is ready for the future. Find out which parts of HTML5 are already supported by your browser today and compare the results with other browsers.                      The HTML5 test does not try to test all of the new features offered by HTML5, nor does it try to test the functionality of each feature it does detect. Despite these shortcomings we hope that by quantifying the level of support users and web developers will get an idea of how hard the browser manufacturers work on improving their browsers and the web as a development platform.</p>        </div>      </div>              <div id="CSS">        <div>          <h2>CSS 3 Intro</h2>        </div>        <div>          <p>          Cascading Style Sheets (CSS) is a style sheet language used for describing the presentation semantics (the look and formatting) of a document written in a markup language. Its most common application is to style web pages written in HTML and XHTML, but the language can also be applied to any kind of XML document, including plain XML, SVG and XUL.          </p>        </div>      </div>            <div id="CSSmore">        <div>          <h2>CSS 3 Purpose</h2>        </div>        <div>          <p>            CSS is designed primarily to enable the separation of document content (written in HTML or a similar markup language) from document presentation, including elements such as the layout, colors, and fonts.[1] This separation can improve content accessibility, provide more flexibility and control in the specification of presentation characteristics, enable multiple pages to share formatting, and reduce complexity and repetition in the structural content (such as by allowing for tableless web design).          
</p>        </div>      </div>                </div>    <div id="footer">        <p>Feel free to google more about the subject</p>      </div>     </body>  </html>    The CSS code (style.css) follows  body{        line-height: 30px;        width: 1024px;        background-color:#eee;      }            p{        font-size:17px;    font-family:"Comic Sans MS"      }      p,h2,h3,h4{        margin: 0 0 20px 0;      }            #main, #header, #footer{        width: 100%;        margin: 0px auto;        display:block;      }            #header{        text-align: center;         border-bottom: 1px solid #000;         margin-bottom: 30px;      }            #footer{        text-align: center;         border-top: 1px solid #000;         margin-bottom: 30px;      }            .quote{        width: 200px;       margin-left: 10px;       padding: 5px;       float: right;       border: 2px solid #000;       background-color:#F9ACAE;      }            .quote :last-child{        margin-bottom: 0;      }            #main{        column-count:2;        column-gap:20px;        column-rule: 1px solid #000;        -moz-column-count: 2;        -webkit-column-count: 2;        -moz-column-gap: 20px;        -webkit-column-gap: 20px;        -moz-column-rule: 1px solid #000;        -webkit-column-rule: 1px solid #000;      } Now I view the page in the browser.Now I am going to write a media query and add some more rules in the .css file in order to change the layout of the page when the page is viewed by mobile devices. @media only screen and (max-width: 480px) {          body{            width: 480px;          }          #main{            -moz-column-count: 1;            -webkit-column-count: 1;          }        }   I am specifying that this media query applies only to screen and a max width of 480 px. If this condition is true, then I add new rules for the body element. I change the number of columns to one. This rule will not be applied unless the maximum width is 480px or less.  As I decrease the size-width of the browser window I see no change in the column's layout. Have a look at the picture below. When I resize the window and the width of the browser so the width is less than 480px, the media query and its respective rules take effect.We can scroll vertically to view the content which is a more optimised viewing experience for mobile devices. Have a look at the picture below Hope it helps!!!!

    Read the article

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB-Disk: kaefert@blechmobil:~$ lsusb -s 2:3 Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC As can be seen in this dmesg output, there is some problem that prevents that disk from beeing mounted: kaefert@blechmobil:~$ dmesg ... [ 113.084079] usb 2-1: new high-speed USB device number 3 using ehci_hcd [ 113.217783] usb 2-1: New USB device found, idVendor=0bc2, idProduct=3320 [ 113.217787] usb 2-1: New USB device strings: Mfr=2, Product=3, SerialNumber=1 [ 113.217790] usb 2-1: Product: Expansion Desk [ 113.217792] usb 2-1: Manufacturer: Seagate [ 113.217794] usb 2-1: SerialNumber: NA4J4N6K [ 113.435404] usbcore: registered new interface driver uas [ 113.455315] Initializing USB Mass Storage driver... [ 113.468051] scsi5 : usb-storage 2-1:1.0 [ 113.468180] usbcore: registered new interface driver usb-storage [ 113.468182] USB Mass Storage support registered. [ 114.473105] scsi 5:0:0:0: Direct-Access Seagate Expansion Desk 070B PQ: 0 ANSI: 6 [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00 [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.501649] sdb: sdb1 [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519) [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted! ... So I went and fired up my favorite partition manager - gparted, and told it to verify and repair the partition sdb1. 
This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)) e2fsck -f -y -v /dev/sdb1 Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bugreport https://bugzilla.gnome.org/show_bug.cgi?id=467925 ) I started this whole thing on Sunday (2012-11-04_2200) evening, so about 48 hours ago, this is what htop says about it now (2012-11-06-1900): PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 3704 root 39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1 Now I found a few posts on the internet that discuss e2fsck running slow, for example: http://gparted-forum.surf4.info/viewtopic.php?id=13613 where they write that its a good idea to see if the disk is just that slow because maybe its damaged, and I think these outputs tell me that this is not the case in my case: kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec kaefert@blechmobil:~$ sudo hdparm /dev/sdb /dev/sdb: multcount = 0 (off) readonly = 0 (off) readahead = 256 (on) geometry = 364801/255/63, sectors = 5860533160, start = 0 However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop or this: kaefert@blechmobil:~$ iostat -x Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 14,24 47,81 14,63 0,95 0,00 22,37 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0,59 8,29 2,42 5,14 43,17 160,17 53,75 0,30 39,80 8,72 54,42 3,95 2,99 sdb 137,54 5,48 9,23 0,20 587,07 22,73 129,35 0,07 7,70 7,51 16,18 2,17 2,04 Now I researched a little bit on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this: kaefert@blechmobil:~$ sudo strace -p3704 lseek(4, 41026998272, SEEK_SET) = 41026998272 write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096 lseek(4, 48404766720, SEEK_SET) = 48404766720 read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096 lseek(4, 41027002368, SEEK_SET) = 41027002368 write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096 lseek(4, 48404770816, SEEK_SET) = 48404770816 read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096 lseek(4, 41027006464, SEEK_SET) = 41027006464 write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096 lseek(4, 48404774912, SEEK_SET) = 48404774912 read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096 ^CProcess 3704 detached around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot.. And finally, my question: Will this process ever finish? If those numbers from fseek (48404774912) represent bytes, that would be something like 45 gigabytes, with this beeing a 3 terrabyte disk, which would give me 134 days to go, if the speed stays constant, and e2fsck scans the disk like this completly and only once. Do you have some advice for me? 
I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it to this disk, so I would prefer to getting this disk up and running again, without formatting it anew. I don't think that the hardware is damaged since the disk is only a few months and since I can't see any I/O errors in the dmesg output. UPDATE: I just looked at the strace output again (2012-11-06_2300), now it looks like this: lseek(4, 1419860611072, SEEK_SET) = 1419860611072 read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096 lseek(4, 43018145792, SEEK_SET) = 43018145792 write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096 lseek(4, 1419860615168, SEEK_SET) = 1419860615168 read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096 lseek(4, 43018149888, SEEK_SET) = 43018149888 write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096 lseek(4, 1419860619264, SEEK_SET) = 1419860619264 read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096 lseek(4, 43018153984, SEEK_SET) = 43018153984 write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096 So the numbers in the lseek lines before the reads, like 1419860619264 are already a lot bigger, standing for 1.29 terabytes if those numbers are bytes, so it doesn't seem to be a linear progress on a big scale, maybe there are only some areas that need work, that have big gaps in between them. UPDATE2: Okey, big disappointment, the numbers are back to very small again (2012-11-07_0720) lseek(4, 52174548992, SEEK_SET) = 52174548992 read(4, "\374\312\22\\\325\215\213\23\0357U\222\246\370v^f(\312|f\212\362\343\375\373\342\4\204mU6"..., 4096) = 4096 lseek(4, 46603526144, SEEK_SET) = 46603526144 write(4, "\370\261\223\227\23?\4\4\217\264\320_Am\246CQ\313^\203U\253\274\204\277\2564n\227\177\267\343"..., 4096) = 4096 so either e2fsck goes over the data multiple times, or it just hops back and forth multiple times. Or my assumption that those numbers are bytes is wrong. UPDATE3: Since it's mentioned here http://forums.fedoraforum.org/showthread.php?t=282125&page=2 that you can testisk while e2fsck is running, i tried that, though not with a lot of success. When asking testdisk to display the data of my partition, this is what I get: TestDisk 6.13, Data Recovery Utility, November 2011 Christophe GRENIER <[email protected]> http://www.cgsecurity.org 1 P Linux 0 4 5 45600 40 8 732566272 Can't open filesystem. Filesystem seems damaged. And this is what strace currently gives me (2012-11-07_1030) lseek(4, 212460343296, SEEK_SET) = 212460343296 read(4, "\315Mb\265v\377Gn \24\f\205EHh\2349~\330\273\203\3375\206\10\r3=W\210\372\352"..., 4096) = 4096 lseek(4, 47347830784, SEEK_SET) = 47347830784 write(4, "]\204\223\300I\357\4\26\33+\243\312G\230\250\371*m2U\t_\215\265J \252\342Pm\360D"..., 4096) = 4096 (times are in CET)
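    As a rough illustration of the arithmetic above (this sketch is not part of the original question), the poster's estimate can be reproduced from the strace offsets in a few lines of Python. It assumes that the lseek() arguments are byte positions on the partition and that e2fsck makes a single, roughly linear pass over the disk, both of which the later updates call into question.
      # Rough ETA for a long-running e2fsck, from the byte offsets seen in strace.
      # Assumptions (both uncertain, as the updates show): the lseek() offsets are
      # byte positions on the partition, and e2fsck walks the disk linearly, once.
      DISK_BYTES = 3 * 10**12          # ~3 TB external disk
      ELAPSED_HOURS = 48               # how long e2fsck has been running so far
      CURRENT_OFFSET = 48_404_774_912  # largest offset seen in the strace output

      fraction_done = CURRENT_OFFSET / DISK_BYTES
      total_hours = ELAPSED_HOURS / fraction_done
      remaining_days = (total_hours - ELAPSED_HOURS) / 24
      print(f"~{fraction_done:.1%} done, roughly {remaining_days:.0f} days to go")
      # With these numbers: ~1.6% done, roughly 122 days to go -- the same order
      # of magnitude as the ~134 days estimated in the question.
    The later updates, where the offsets jump to around 1.3 TB and then fall back to around 50 GB, are exactly the kind of non-linear access pattern that makes this estimate unreliable.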

    Read the article

  • Camera doesn't move

    - by hugo
    Here is my code, as my subject indicates i have implemented a camera but I couldn't make it move. #define PI_OVER_180 0.0174532925f #define GL_CLAMP_TO_EDGE 0x812F #include "metinalifeyyaz.h" #include <GL/glu.h> #include <GL/glut.h> #include <QTimer> #include <cmath> #include <QKeyEvent> #include <QWidget> #include <QDebug> metinalifeyyaz::metinalifeyyaz(QWidget *parent) : QGLWidget(parent) { this->setFocusPolicy(Qt:: StrongFocus); time = QTime::currentTime(); timer = new QTimer(this); timer->setSingleShot(true); connect(timer, SIGNAL(timeout()), this, SLOT(updateGL())); xpos = yrot = zpos = 0; walkbias = walkbiasangle = lookupdown = 0.0f; keyUp = keyDown = keyLeft = keyRight = keyPageUp = keyPageDown = false; } void metinalifeyyaz::drawBall() { //glTranslatef(6,0,4); glutSolidSphere(0.10005,300,30); } metinalifeyyaz:: ~metinalifeyyaz(){ glDeleteTextures(1,texture); } void metinalifeyyaz::initializeGL(){ glShadeModel(GL_SMOOTH); glClearColor(1.0,1.0,1.0,0.5); glClearDepth(1.0f); glEnable(GL_DEPTH_TEST); glEnable(GL_TEXTURE_2D); glDepthFunc(GL_LEQUAL); glClearColor(1.0,1.0,1.0,1.0); glShadeModel(GL_SMOOTH); GLfloat mat_specular[]={1.0,1.0,1.0,1.0}; GLfloat mat_shininess []={30.0}; GLfloat light_position[]={1.0,1.0,1.0}; glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular); glMaterialfv(GL_FRONT,GL_SHININESS,mat_shininess); glLightfv(GL_LIGHT0, GL_POSITION, light_position); glEnable(GL_LIGHT0); glEnable(GL_LIGHTING); QImage img1 = convertToGLFormat(QImage(":/new/prefix1/halisaha2.bmp")); QImage img2 = convertToGLFormat(QImage(":/new/prefix1/white.bmp")); glGenTextures(2,texture); glBindTexture(GL_TEXTURE_2D, texture[0]); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, img1.width(), img1.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, img1.bits()); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glBindTexture(GL_TEXTURE_2D, texture[1]); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, img2.width(), img2.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, img2.bits()); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); // Really nice perspective calculations } void metinalifeyyaz::resizeGL(int w, int h){ if(h==0) h=1; glViewport(0,0,w,h); glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(45.0f, static_cast<GLfloat>(w)/h,0.1f,100.0f); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); } void metinalifeyyaz::paintGL(){ movePlayer(); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glLoadIdentity(); GLfloat xtrans = -xpos; GLfloat ytrans = -walkbias - 0.50f; GLfloat ztrans = -zpos; GLfloat sceneroty = 360.0f - yrot; glLoadIdentity(); glRotatef(lookupdown, 1.0f, 0.0f, 0.0f); glRotatef(sceneroty, 0.0f, 1.0f, 0.0f); glTranslatef(xtrans, ytrans+50, ztrans-130); glLoadIdentity(); glTranslatef(1.0f,0.0f,-18.0f); glRotatef(45,1,0,0); drawScene(); int delay = time.msecsTo(QTime::currentTime()); if (delay == 0) delay = 1; time = QTime::currentTime(); timer->start(qMax(0,10 - delay)); } void metinalifeyyaz::movePlayer() { if (keyUp) { xpos -= sin(yrot * PI_OVER_180) * 0.5f; zpos -= cos(yrot * PI_OVER_180) * 0.5f; if (walkbiasangle >= 360.0f) walkbiasangle = 
0.0f; else walkbiasangle += 7.0f; walkbias = sin(walkbiasangle * PI_OVER_180) / 10.0f; } else if (keyDown) { xpos += sin(yrot * PI_OVER_180)*0.5f; zpos += cos(yrot * PI_OVER_180)*0.5f ; if (walkbiasangle <= 7.0f) walkbiasangle = 360.0f; else walkbiasangle -= 7.0f; walkbias = sin(walkbiasangle * PI_OVER_180) / 10.0f; } if (keyLeft) yrot += 0.5f; else if (keyRight) yrot -= 0.5f; if (keyPageUp) lookupdown -= 0.5; else if (keyPageDown) lookupdown += 0.5; } void metinalifeyyaz::keyPressEvent(QKeyEvent *event) { switch (event->key()) { case Qt::Key_Escape: close(); break; case Qt::Key_F1: setWindowState(windowState() ^ Qt::WindowFullScreen); break; default: QGLWidget::keyPressEvent(event); case Qt::Key_PageUp: keyPageUp = true; break; case Qt::Key_PageDown: keyPageDown = true; break; case Qt::Key_Left: keyLeft = true; break; case Qt::Key_Right: keyRight = true; break; case Qt::Key_Up: keyUp = true; break; case Qt::Key_Down: keyDown = true; break; } } void metinalifeyyaz::changeEvent(QEvent *event) { switch (event->type()) { case QEvent::WindowStateChange: if (windowState() == Qt::WindowFullScreen) setCursor(Qt::BlankCursor); else unsetCursor(); break; default: break; } } void metinalifeyyaz::keyReleaseEvent(QKeyEvent *event) { switch (event->key()) { case Qt::Key_PageUp: keyPageUp = false; break; case Qt::Key_PageDown: keyPageDown = false; break; case Qt::Key_Left: keyLeft = false; break; case Qt::Key_Right: keyRight = false; break; case Qt::Key_Up: keyUp = false; break; case Qt::Key_Down: keyDown = false; break; default: QGLWidget::keyReleaseEvent(event); } } void metinalifeyyaz::drawScene(){ glBegin(GL_QUADS); glNormal3f(0.0f,0.0f,1.0f); // glColor3f(0,0,1); //back glVertex3f(-6,0,-4); glVertex3f(-6,-0.5,-4); glVertex3f(6,-0.5,-4); glVertex3f(6,0,-4); glEnd(); glBegin(GL_QUADS); glNormal3f(0.0f,0.0f,-1.0f); //front glVertex3f(6,0,4); glVertex3f(6,-0.5,4); glVertex3f(-6,-0.5,4); glVertex3f(-6,0,4); glEnd(); glBegin(GL_QUADS); glNormal3f(-1.0f,0.0f,0.0f); // glColor3f(0,0,1); //left glVertex3f(-6,0,4); glVertex3f(-6,-0.5,4); glVertex3f(-6,-0.5,-4); glVertex3f(-6,0,-4); glEnd(); glBegin(GL_QUADS); glNormal3f(1.0f,0.0f,0.0f); // glColor3f(0,0,1); //right glVertex3f(6,0,-4); glVertex3f(6,-0.5,-4); glVertex3f(6,-0.5,4); glVertex3f(6,0,4); glEnd(); glBindTexture(GL_TEXTURE_2D, texture[0]); glBegin(GL_QUADS); glNormal3f(0.0f,1.0f,0.0f);//top glTexCoord2f(1.0f,0.0f); glVertex3f(6,0,-4); glTexCoord2f(1.0f,1.0f); glVertex3f(6,0,4); glTexCoord2f(0.0f,1.0f); glVertex3f(-6,0,4); glTexCoord2f(0.0f,0.0f); glVertex3f(-6,0,-4); glEnd(); glBegin(GL_QUADS); glNormal3f(0.0f,-1.0f,0.0f); //glColor3f(0,0,1); //bottom glVertex3f(6,-0.5,-4); glVertex3f(6,-0.5,4); glVertex3f(-6,-0.5,4); glVertex3f(-6,-0.5,-4); glEnd(); // glPushMatrix(); glBindTexture(GL_TEXTURE_2D, texture[1]); glBegin(GL_QUADS); glNormal3f(1.0f,0.0f,0.0f); glTexCoord2f(1.0f,0.0f); //right far goal post front face glVertex3f(5,0.5,-0.95); glTexCoord2f(1.0f,1.0f); glVertex3f(5,0,-0.95); glTexCoord2f(0.0f,1.0f); glVertex3f(5,0,-1); glTexCoord2f(0.0f,0.0f); glVertex3f(5, 0.5, -1); glColor3f(1,1,1); //right far goal post back face glVertex3f(5.05,0.5,-0.95); glVertex3f(5.05,0,-0.95); glVertex3f(5.05,0,-1); glVertex3f(5.05, 0.5, -1); glColor3f(1,1,1); //right far goal post left face glVertex3f(5,0.5,-1); glVertex3f(5,0,-1); glVertex3f(5.05,0,-1); glVertex3f(5.05, 0.5, -1); glColor3f(1,1,1); //right far goal post right face glVertex3f(5.05,0.5,-0.95); glVertex3f(5.05,0,-0.95); glVertex3f(5,0,-0.95); glVertex3f(5, 0.5, -0.95); glColor3f(1,1,1); //right near 
goal post front face glVertex3f(5,0.5,0.95); glVertex3f(5,0,0.95); glVertex3f(5,0,1); glVertex3f(5,0.5, 1); glColor3f(1,1,1); //right near goal post back face glVertex3f(5.05,0.5,0.95); glVertex3f(5.05,0,0.95); glVertex3f(5.05,0,1); glVertex3f(5.05,0.5, 1); glColor3f(1,1,1); //right near goal post left face glVertex3f(5,0.5,1); glVertex3f(5,0,1); glVertex3f(5.05,0,1); glVertex3f(5.05,0.5, 1); glColor3f(1,1,1); //right near goal post right face glVertex3f(5.05,0.5,0.95); glVertex3f(5.05,0,0.95); glVertex3f(5,0,0.95); glVertex3f(5,0.5, 0.95); glColor3f(1,1,1); //right crossbar front face glVertex3f(5,0.55,-1); glVertex3f(5,0.55,1); glVertex3f(5,0.5,1); glVertex3f(5,0.5,-1); glColor3f(1,1,1); //right crossbar back face glVertex3f(5.05,0.55,-1); glVertex3f(5.05,0.55,1); glVertex3f(5.05,0.5,1); glVertex3f(5.05,0.5,-1); glColor3f(1,1,1); //right crossbar bottom face glVertex3f(5.05,0.5,-1); glVertex3f(5.05,0.5,1); glVertex3f(5,0.5,1); glVertex3f(5,0.5,-1); glColor3f(1,1,1); //right crossbar top face glVertex3f(5.05,0.55,-1); glVertex3f(5.05,0.55,1); glVertex3f(5,0.55,1); glVertex3f(5,0.55,-1); glColor3f(1,1,1); //left far goal post front face glVertex3f(-5,0.5,-0.95); glVertex3f(-5,0,-0.95); glVertex3f(-5,0,-1); glVertex3f(-5, 0.5, -1); glColor3f(1,1,1); //right far goal post back face glVertex3f(-5.05,0.5,-0.95); glVertex3f(-5.05,0,-0.95); glVertex3f(-5.05,0,-1); glVertex3f(-5.05, 0.5, -1); glColor3f(1,1,1); //right far goal post left face glVertex3f(-5,0.5,-1); glVertex3f(-5,0,-1); glVertex3f(-5.05,0,-1); glVertex3f(-5.05, 0.5, -1); glColor3f(1,1,1); //right far goal post right face glVertex3f(-5.05,0.5,-0.95); glVertex3f(-5.05,0,-0.95); glVertex3f(-5,0,-0.95); glVertex3f(-5, 0.5, -0.95); glColor3f(1,1,1); //left near goal post front face glVertex3f(-5,0.5,0.95); glVertex3f(-5,0,0.95); glVertex3f(-5,0,1); glVertex3f(-5,0.5, 1); glColor3f(1,1,1); //right near goal post back face glVertex3f(-5.05,0.5,0.95); glVertex3f(-5.05,0,0.95); glVertex3f(-5.05,0,1); glVertex3f(-5.05,0.5, 1); glColor3f(1,1,1); //right near goal post left face glVertex3f(-5,0.5,1); glVertex3f(-5,0,1); glVertex3f(-5.05,0,1); glVertex3f(-5.05,0.5, 1); glColor3f(1,1,1); //right near goal post right face glVertex3f(-5.05,0.5,0.95); glVertex3f(-5.05,0,0.95); glVertex3f(-5,0,0.95); glVertex3f(-5,0.5, 0.95); glColor3f(1,1,1); //left crossbar front face glVertex3f(-5,0.55,-1); glVertex3f(-5,0.55,1); glVertex3f(-5,0.5,1); glVertex3f(-5,0.5,-1); glColor3f(1,1,1); //right crossbar back face glVertex3f(-5.05,0.55,-1); glVertex3f(-5.05,0.55,1); glVertex3f(-5.05,0.5,1); glVertex3f(-5.05,0.5,-1); glColor3f(1,1,1); //right crossbar bottom face glVertex3f(-5.05,0.5,-1); glVertex3f(-5.05,0.5,1); glVertex3f(-5,0.5,1); glVertex3f(-5,0.5,-1); glColor3f(1,1,1); //right crossbar top face glVertex3f(-5.05,0.55,-1); glVertex3f(-5.05,0.55,1); glVertex3f(-5,0.55,1); glVertex3f(-5,0.55,-1); glEnd(); // glPopMatrix(); // glPushMatrix(); // glTranslatef(0,0,0); // glutSolidSphere(0.10005,500,30); // glPopMatrix(); }

    Read the article

  • Why is Varnish not caching?

    - by Justin
    I am troubleshooting the setup of Varnish 3.x on my Ubuntu server. I'm running Drupal 7 on two sites set up on the box, via named-based vhosts. Before trying to get Varnish to play nice with Drupal I'm trying to just get Varnish to a PNG from cache. Here are the headers I get from a curl -I request of the PNG file: HTTP/1.1 200 OK Server: Apache/2.2.22 (Ubuntu) Last-Modified: Sun, 07 Oct 2012 21:18:59 GMT ETag: "a57c2-3850-4cb7ea73db6c0" Accept-Ranges: bytes Content-Length: 14416 Cache-Control: max-age=1209600 Expires: Thu, 25 Oct 2012 22:55:14 GMT Content-Type: image/png Accept-Ranges: bytes Date: Thu, 11 Oct 2012 22:55:14 GMT X-Varnish: 1766703058 Age: 0 Via: 1.1 varnish Connection: keep-alive X-Varnish-Cache: MISS Here is the Varnish VCL file I'm using (It's a default VCL configuration designed for Drupal): # Default backend definition. Set this to point to your content # server. # backend default { .host = "127.0.0.1"; .port = "8080"; } # Respond to incoming requests. sub vcl_recv { # Use anonymous, cached pages if all backends are down. if (!req.backend.healthy) { unset req.http.Cookie; } # Allow the backend to serve up stale content if it is responding slowly. set req.grace = 6h; # Pipe these paths directly to Apache for streaming. #if (req.url ~ "^/admin/content/backup_migrate/export") { # return (pipe); #} # Do not cache these paths. if (req.url ~ "^/status\.php$" || req.url ~ "^/update\.php$" || req.url ~ "^/admin$" || req.url ~ "^/admin/.*$" || req.url ~ "^/flag/.*$" || req.url ~ "^.*/ajax/.*$" || req.url ~ "^.*/ahah/.*$") { return (pass); } # Do not allow outside access to cron.php or install.php. #if (req.url ~ "^/(cron|install)\.php$" && !client.ip ~ internal) { # Have Varnish throw the error directly. # error 404 "Page not found."; # Use a custom error page that you've defined in Drupal at the path "404". # set req.url = "/404"; #} # Always cache the following file types for all users. This list of extensions # appears twice, once here and again in vcl_fetch so make sure you edit both # and keep them equal. if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") { unset req.http.Cookie; } # Remove all cookies that Drupal doesn't need to know about. We explicitly # list the ones that Drupal does need, the SESS and NO_CACHE. If, after # running this code we find that either of these two cookies remains, we # will pass as the page cannot be cached. if (req.http.Cookie) { # 1. Append a semi-colon to the front of the cookie string. # 2. Remove all spaces that appear after semi-colons. # 3. Match the cookies we want to keep, adding the space we removed # previously back. (\1) is first matching group in the regsuball. # 4. Remove all other cookies, identifying them by the fact that they have # no space after the preceding semi-colon. # 5. Remove all spaces and semi-colons from the beginning and end of the # cookie string. set req.http.Cookie = ";" + req.http.Cookie; set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";"); set req.http.Cookie = regsuball(req.http.Cookie, ";(SESS[a-z0-9]+|SSESS[a-z0-9]+|NO_CACHE)=", "; \1="); set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", ""); set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", ""); if (req.http.Cookie == "") { # If there are no remaining cookies, remove the cookie header. If there # aren't any cookie headers, Varnish's default behavior will be to cache # the page. 
unset req.http.Cookie; } else { # If there is any cookies left (a session or NO_CACHE cookie), do not # cache the page. Pass it on to Apache directly. return (pass); } } } # Set a header to track a cache HIT/MISS. sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Varnish-Cache = "HIT"; } else { set resp.http.X-Varnish-Cache = "MISS"; } } # Code determining what to do when serving items from the Apache servers. # beresp == Back-end response from the web server. sub vcl_fetch { # We need this to cache 404s, 301s, 500s. Otherwise, depending on backend but # definitely in Drupal's case these responses are not cacheable by default. if (beresp.status == 404 || beresp.status == 301 || beresp.status == 500) { set beresp.ttl = 10m; } # Don't allow static files to set cookies. # (?i) denotes case insensitive in PCRE (perl compatible regular expressions). # This list of extensions appears twice, once here and again in vcl_recv so # make sure you edit both and keep them equal. if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") { unset beresp.http.set-cookie; } # Allow items to be stale if needed. set beresp.grace = 6h; } # In the event of an error, show friendlier messages. sub vcl_error { # Redirect to some other URL in the case of a homepage failure. #if (req.url ~ "^/?$") { # set obj.status = 302; # set obj.http.Location = "http://backup.example.com/"; #} # Otherwise redirect to the homepage, which will likely be in the cache. set obj.http.Content-Type = "text/html; charset=utf-8"; synthetic {" <html> <head> <title>Page Unavailable</title> <style> body { background: #303030; text-align: center; color: white; } #page { border: 1px solid #CCC; width: 500px; margin: 100px auto 0; padding: 30px; background: #323232; } a, a:link, a:visited { color: #CCC; } .error { color: #222; } </style> </head> <body onload="setTimeout(function() { window.location = '/' }, 5000)"> <div id="page"> <h1 class="title">Page Unavailable</h1> <p>The page you requested is temporarily unavailable.</p> <p>We're redirecting you to the <a href="/">homepage</a> in 5 seconds.</p> <div class="error">(Error "} + obj.status + " " + obj.response + {")</div> </div> </body> </html> "}; return (deliver); } I'm getting a MISS and age 0 every time. If I'm understanding correctly, this means the file isn't being returned from Varnish's cache. Is there a problem with my Varnish config?
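    One quick way to confirm what the headers above are showing is to request the same object twice and compare the cache-related headers of the two responses; with this VCL a cached object should normally come back with Age greater than zero and X-Varnish-Cache: HIT on the second request. The following stand-alone Python sketch is illustrative only and is not from the original question; the URL is a placeholder for the PNG being tested.
      # Request the same URL twice through Varnish and print the headers that
      # reveal whether the second response came from cache.
      import urllib.request

      URL = "http://example.com/misc/feed.png"  # placeholder: the PNG under test

      def cache_headers(url):
          """HEAD the URL and return the Varnish-related response headers."""
          req = urllib.request.Request(url, method="HEAD")
          with urllib.request.urlopen(req) as resp:
              h = resp.headers  # case-insensitive header mapping
              return h.get("Age"), h.get("X-Varnish"), h.get("X-Varnish-Cache")

      for attempt in (1, 2):
          age, xvarnish, cache = cache_headers(URL)
          print(f"request {attempt}: Age={age}  X-Varnish={xvarnish}  "
                f"X-Varnish-Cache={cache}")
      # On a working cache the second request should show Age > 0 and
      # X-Varnish-Cache: HIT. Two MISSes in a row mean Varnish is deciding not to
      # store the object at all (common causes: a Set-Cookie or Vary header on the
      # backend response, or beresp.ttl ending up at zero in vcl_fetch).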

    Read the article

  • Error 2013: Lost connection to MySQL server during query when executing CHECK TABLE FOR UPGRADE

    - by Dean Richardson
    I just upgraded Ubuntu from 11.10 to 12.04. My rails app now returns the (passenger) error "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111) (Mysql2::Error)". I get a similar error when I try to access mysql at the command line on my Ubuntu server using mysql -u root -p. I have mysql-server 5.5 installed. I've checked and mysql is not running. When I try to restart it, it fails. Here are some key lines from the tail of /var/log/syslog after an attempted restart: dean@dgwjasonfried:/etc/mysql$ tail -f /var/log/syslog Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: /usr/bin/mysqlcheck: Got error: 2013: Lost connection to MySQL server during query when executing 'CHECK TABLE ... FOR UPGRADE' Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: FATAL ERROR: Upgrade failed Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: molex_app_development.assets OK Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: molex_app_development.ecd_types OK Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5124]: Checking for insecure root accounts. Mar 7 08:55:27 dgwjasonfried kernel: [ 7551.769657] init: mysql main process (5064) terminated with status 1 Mar 7 08:55:27 dgwjasonfried kernel: [ 7551.769697] init: mysql respawning too fast, stopped Here is most of /etc/mysql/my.cnf: Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock Here is entries for some specific programs The following values assume you have at least 32M ram This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] Basic Settings user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp lc-messages-dir = /usr/share/mysql skip-external-locking Instead of skip-networking the default is now to listen only on localhost which is more compatible and is not less secure. bind-address = 127.0.0.1 And here are permissions for var/run/mysqld/mysqld.sock: srwxrwxrwx 1 mysql mysql 0 Mar 7 09:18 mysqld.sock I'd be grateful for any suggestions the community might have. I reviewed the related questions here and attempted some of the fixes offered but to no avail. Thanks! Dean Richardson Update: Thanks to quanta's suggestion, I looked at the /var/log/mysql/error.log file. I found error messages relating to pointers, fatal signals, and more stuff that I really couldn't make much sense of. I also found mysql man page references, however. One suggested that I try starting mysqld with the --innodb_force_recovery=# option, then attempt to dump (or drop) the offending/corrupted database or table. 
I worked through the escalating option levels one-by-one (innodb_force_recovery=1, innodb_force_recovery=2, etc.) This allowed me to successfully run mysql -u root -p from the command line and execute several commands. I was able to run queries on my production database, but any attempt to query, dump, or even drop my development database raised an error and led to me losing the connection to mysql. So I've made progress, but until I'm somehow able to drop or repair my development db I'm still unable to get my app to load. Any further advice or suggestions? Thanks! Dean Update: Right after running sudo mysqld --innodb_force_recover=1 from the command line, the error.log contains this: Right after retrying sudo mysqld --innodb_force_recover=1, The error.log file shows this: 130308 4:55:39 [Note] Plugin 'FEDERATED' is disabled. 130308 4:55:39 InnoDB: The InnoDB memory heap is disabled 130308 4:55:39 InnoDB: Mutexes and rw_locks use GCC atomic builtins 130308 4:55:39 InnoDB: Compressed tables use zlib 1.2.3.4 130308 4:55:39 InnoDB: Initializing buffer pool, size = 128.0M 130308 4:55:39 InnoDB: Completed initialization of buffer pool 130308 4:55:39 InnoDB: highest supported file format is Barracuda. InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 130308 4:55:39 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 130308 4:55:40 InnoDB: Waiting for the background threads to start 130308 4:55:41 InnoDB: 1.1.8 started; log sequence number 10259220 130308 4:55:41 InnoDB: !!! innodb_force_recovery is set to 1 !!! 130308 4:55:41 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306 130308 4:55:41 [Note] - '127.0.0.1' resolves to '127.0.0.1'; 130308 4:55:41 [Note] Server socket created on IP: '127.0.0.1'. 130308 4:55:41 [Note] Event Scheduler: Loaded 0 events 130308 4:55:41 [Note] mysqld: ready for connections. Version: '5.5.29-0ubuntu0.12.04.2' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) Then after mysql -u root -p and mysql> drop database molex_app_development; ERROR 2013 (HY000): Lost connection to MySQL server during query mysql> the error.log contains: dean@dgwjasonfried:/var/log/mysql$ tail -f error.log /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f6a3ff9ecbd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort. Query (7f6a1c004bd8): is an invalid pointer Connection ID (thread ID): 1 Status: NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. 130308 4:55:39 [Note] Plugin 'FEDERATED' is disabled. 130308 4:55:39 InnoDB: The InnoDB memory heap is disabled 130308 4:55:39 InnoDB: Mutexes and rw_locks use GCC atomic builtins 130308 4:55:39 InnoDB: Compressed tables use zlib 1.2.3.4 130308 4:55:39 InnoDB: Initializing buffer pool, size = 128.0M 130308 4:55:39 InnoDB: Completed initialization of buffer pool 130308 4:55:39 InnoDB: highest supported file format is Barracuda. InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 130308 4:55:39 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... 
InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 130308 4:55:40 InnoDB: Waiting for the background threads to start 130308 4:55:41 InnoDB: 1.1.8 started; log sequence number 10259220 130308 4:55:41 InnoDB: !!! innodb_force_recovery is set to 1 !!! 130308 4:55:41 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306 130308 4:55:41 [Note] - '127.0.0.1' resolves to '127.0.0.1'; 130308 4:55:41 [Note] Server socket created on IP: '127.0.0.1'. 130308 4:55:41 [Note] Event Scheduler: Loaded 0 events 130308 4:55:41 [Note] mysqld: ready for connections. Version: '5.5.29-0ubuntu0.12.04.2' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) 130308 4:58:23 [ERROR] Incorrect definition of table mysql.proc: expected column 'comment' at position 15 to have type text, found type char(64). 130308 4:58:23 InnoDB: Assertion failure in thread 140168992810752 in file fsp0fsp.c line 3639 InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html InnoDB: about forcing recovery. 10:58:23 UTC - mysqld got signal 6 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=1 max_threads=151 thread_count=1 connection_count=1 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 346681 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. Thread pointer: 0x7f7ba4f6c2f0 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 7f7ba3065e60 thread_stack 0x30000 mysqld(my_print_stacktrace+0x29)[0x7f7ba3609039] mysqld(handle_fatal_signal+0x483)[0x7f7ba34cf9c3] /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f7ba2220cb0] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35)[0x7f7ba188c425] /lib/x86_64-linux-gnu/libc.so.6(abort+0x17b)[0x7f7ba188fb8b] mysqld(+0x65e0fc)[0x7f7ba37160fc] mysqld(+0x602be6)[0x7f7ba36babe6] mysqld(+0x635006)[0x7f7ba36ed006] mysqld(+0x5d7072)[0x7f7ba368f072] mysqld(+0x5d7b9c)[0x7f7ba368fb9c] mysqld(+0x6a3348)[0x7f7ba375b348] mysqld(+0x6a3887)[0x7f7ba375b887] mysqld(+0x5c6a86)[0x7f7ba367ea86] mysqld(+0x5ae3a7)[0x7f7ba36663a7] mysqld(_Z15ha_delete_tableP3THDP10handlertonPKcS4_S4_b+0x16d)[0x7f7ba34d3ffd] mysqld(_Z23mysql_rm_table_no_locksP3THDP10TABLE_LISTbbbb+0x568)[0x7f7ba3417f78] mysqld(_Z11mysql_rm_dbP3THDPcbb+0x8aa)[0x7f7ba339780a] mysqld(_Z21mysql_execute_commandP3THD+0x394c)[0x7f7ba33b886c] mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x10f)[0x7f7ba33bb28f] mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1380)[0x7f7ba33bc6e0] mysqld(_Z24do_handle_one_connectionP3THD+0x1bd)[0x7f7ba346119d] mysqld(handle_one_connection+0x50)[0x7f7ba3461200] /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f7ba2218e9a] /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f7ba1949cbd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort. Query (7f7b7c004b60): is an invalid pointer Connection ID (thread ID): 1 Status: NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. --Dean
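    Before digging further into InnoDB recovery, it can help to confirm whether anything is actually answering on the socket named in the error message, since error 111 is a plain connection refusal. The check below is a minimal, illustrative Python sketch, not from the original thread; the socket path is the one quoted in the question and in my.cnf.
      # Check whether a mysqld process is accepting connections on the Unix socket.
      # Error (111) from the client usually means nothing is listening -- consistent
      # with the "mysql respawning too fast, stopped" lines in syslog above.
      import os
      import socket

      SOCKET_PATH = "/var/run/mysqld/mysqld.sock"  # path from the question / my.cnf

      if not os.path.exists(SOCKET_PATH):
          print("socket file is missing -- mysqld never created it")
      else:
          s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
          s.settimeout(2)
          try:
              s.connect(SOCKET_PATH)
              greeting = s.recv(128)  # mysqld sends a handshake packet on connect
              print("mysqld is listening; first handshake bytes:", greeting[:32])
          except OSError as exc:
              print("socket exists but nothing usable is listening:", exc)
          finally:
              s.close()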

    Read the article

  • mysqld crashes on any statement

    - by ??iu
    I restarted my slave to change configuration settings to skip reverse hostname lookup on connecting and to enable the slow query log. I edited /etc/my.cnf making only these changes, then restarted mysqld with /etc/init.d/mysql restart All appeared to be well but when I connect to msyqld remotely or locally though it connects okay a slight problem is that mysqld crashes whenever you try to issue any kind of statement. The client looks like: Reading table information for completion of table and column names You can turn off this feature to get a quicker startup with -A Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 3 Server version: 5.1.31-1ubuntu2-log Type 'help;' or '\h' for help. Type '\c' to clear the buffer. mysql> show tables; ERROR 2006 (HY000): MySQL server has gone away No connection. Trying to reconnect... Connection id: 1 Current database: mydb ERROR 2006 (HY000): MySQL server has gone away No connection. Trying to reconnect... ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61) ERROR: Can't connect to the server ERROR 2006 (HY000): MySQL server has gone away No connection. Trying to reconnect... ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61) ERROR: Can't connect to the server ERROR 2006 (HY000): MySQL server has gone away Bus error The mysqld error log looks like: 101210 16:35:51 InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY). 101210 16:35:51 InnoDB: Assertion failure in thread 140245598570832 in file handler/ha_innodb.cc line 2595 InnoDB: Failing assertion: error == DB_SUCCESS InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: about forcing recovery. 101210 16:35:51 - mysqld got signal 6 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=3 max_threads=600 threads_connected=3 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x18209220 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 0x7f8d791580d0 thread_stack 0x20000 /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89] /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03] /lib/libpthread.so.0 [0x7f902a76a080] /lib/libc.so.6(gsignal+0x35) [0x7f90291f8fb5] /lib/libc.so.6(abort+0x183) [0x7f90291fabc3] /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b] /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f] /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a] /usr/sbin/mysqld [0x63f281] /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16] /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb] /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e] /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292] /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d] /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8] /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426] /lib/libpthread.so.0 [0x7f902a7623ba] /lib/libc.so.6(clone+0x6d) [0x7f90292abfcd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort... thd->query at 0x18213c70 = thd->thread_id=3 thd->killed=NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. 101210 16:35:51 mysqld_safe Number of processes running now: 0 101210 16:35:51 mysqld_safe mysqld restarted InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 101210 16:35:54 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 101210 16:35:56 InnoDB: Started; log sequence number 456 143528628 101210 16:35:56 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode. 101210 16:35:56 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem. 101210 16:35:56 [Note] Event Scheduler: Loaded 0 events 101210 16:35:56 [Note] /usr/sbin/mysqld: ready for connections. Version: '5.1.31-1ubuntu2-log' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) 101210 16:36:11 InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY). 101210 16:36:11 InnoDB: Assertion failure in thread 139955151501648 in file handler/ha_innodb.cc line 2595 InnoDB: Failing assertion: error == DB_SUCCESS InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: about forcing recovery. 101210 16:36:11 - mysqld got signal 6 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. 
This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=1 max_threads=600 threads_connected=1 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x18588720 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... stack_bottom = 0x7f49d916f0d0 thread_stack 0x20000 /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89] /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03] /lib/libpthread.so.0 [0x7f4c8a73f080] /lib/libc.so.6(gsignal+0x35) [0x7f4c891cdfb5] /lib/libc.so.6(abort+0x183) [0x7f4c891cfbc3] /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b] /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f] /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a] /usr/sbin/mysqld [0x63f281] /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16] /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb] /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e] /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292] /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d] /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8] /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426] /lib/libpthread.so.0 [0x7f4c8a7373ba] /lib/libc.so.6(clone+0x6d) [0x7f4c89280fcd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort... thd->query at 0x18599950 = thd->thread_id=1 thd->killed=NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. 101210 16:36:11 mysqld_safe Number of processes running now: 0 101210 16:36:11 mysqld_safe mysqld restarted The config is [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] innodb_file_per_table innodb_buffer_pool_size=10G innodb_log_buffer_size=4M innodb_flush_log_at_trx_commit=2 innodb_thread_concurrency=8 skip-slave-start server-id=3 # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /DB2/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. 
#bind-address = 127.0.0.1 # # * Fine Tuning # key_buffer = 16M max_allowed_packet = 16M thread_stack = 128K thread_cache_size = 8 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP max_connections = 600 #table_cache = 64 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M query_cache_size = 32M # skip-federated slow-query-log skip-name-resolve Update: I followed the instructions as per http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html and set innodb_force_recovery = 4 and the logs are showing a different error but the behavior is still the same: 101210 19:14:15 mysqld_safe mysqld restarted 101210 19:14:19 InnoDB: Started; log sequence number 456 143528628 InnoDB: !!! innodb_force_recovery is set to 4 !!! 101210 19:14:19 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode. 101210 19:14:19 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem. 101210 19:14:19 [Note] Event Scheduler: Loaded 0 events 101210 19:14:19 [Note] /usr/sbin/mysqld: ready for connections. Version: '5.1.31-1ubuntu2-log' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) 101210 19:14:32 InnoDB: error: space object of table mydb/__twitter_friend, InnoDB: space id 1602 did not exist in memory. Retrying an open. 101210 19:14:32 InnoDB: error: space object of table mydb/access_request, InnoDB: space id 1318 did not exist in memory. Retrying an open. 101210 19:14:32 InnoDB: error: space object of table mydb/activity, InnoDB: space id 1595 did not exist in memory. Retrying an open. 101210 19:14:32 - mysqld got signal 11 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=1 max_threads=600 threads_connected=1 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x1753c070 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 0x7f7a0b5800d0 thread_stack 0x20000 /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89] /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03] /lib/libpthread.so.0 [0x7f7cbc350080] /usr/sbin/mysqld(ha_innobase::innobase_get_index(unsigned int)+0x46) [0x77c516] /usr/sbin/mysqld(ha_innobase::innobase_initialize_autoinc()+0x40) [0x77c640] /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x3f3) [0x781f23] /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f] /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a] /usr/sbin/mysqld [0x63f281] /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16] /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb] /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e] /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292] /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d] /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8] /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426] /lib/libpthread.so.0 [0x7f7cbc3483ba] /lib/libc.so.6(clone+0x6d) [0x7f7cbae91fcd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort... thd->query at 0x1754d690 = thd->thread_id=1 thd->killed=NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash.
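
    A common way forward once the server only stays up under innodb_force_recovery is to dump whatever is still readable and rebuild the InnoDB files from that dump. A minimal sketch, assuming the datadir /DB2/mysql from the config above and a scratch directory /DB2/backup (the backup path, and the exact ib_logfile names, are placeholders to adapt):

        # keep innodb_force_recovery = 4 in [mysqld] just long enough to dump;
        # raise it step by step towards 6 only if the dump itself still crashes
        mysqldump --all-databases > /DB2/backup/all-databases.sql

        # stop mysqld and move the shared tablespace and redo logs aside
        # (with innodb_file_per_table, corrupt per-table .ibd files may need the same)
        /etc/init.d/mysql stop
        mv /DB2/mysql/ibdata1 /DB2/mysql/ib_logfile0 /DB2/mysql/ib_logfile1 /DB2/backup/

        # remove innodb_force_recovery from my.cnf, start with a fresh tablespace,
        # then reload the dump
        /etc/init.d/mysql start
        mysql < /DB2/backup/all-databases.sql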

    Read the article

  • Converting Lighttpd config to NginX with php-fpm

    - by Le Dude
    Having so much issue with NginX configuration since I'm new with NginX. Been using Lighttpd for quite sometime. Here are the base info. New Machine - CentOS 6.3 64 Bit - NginX 1.2.4-1.e16.ngx - Php-FPM 5.3.18-1.e16.remi Old Machine - CentOS 6.2 64Bit - Lighttpd 1.4.25-3.e16 Original Lighttpd config file: ####################################################################### ## ## /etc/lighttpd/lighttpd.conf ## ## check /etc/lighttpd/conf.d/*.conf for the configuration of modules. ## ####################################################################### ####################################################################### ## ## Some Variable definition which will make chrooting easier. ## ## if you add a variable here. Add the corresponding variable in the ## chroot example aswell. ## var.log_root = "/var/log/lighttpd" var.server_root = "/var/www" var.state_dir = "/var/run" var.home_dir = "/var/lib/lighttpd" var.conf_dir = "/etc/lighttpd" ## ## run the server chrooted. ## ## This requires root permissions during startup. ## ## If you run Chrooted set the the variables to directories relative to ## the chroot dir. ## ## example chroot configuration: ## #var.log_root = "/logs" #var.server_root = "/" #var.state_dir = "/run" #var.home_dir = "/lib/lighttpd" #var.vhosts_dir = "/vhosts" #var.conf_dir = "/etc" # #server.chroot = "/srv/www" ## ## Some additional variables to make the configuration easier ## ## ## Base directory for all virtual hosts ## ## used in: ## conf.d/evhost.conf ## conf.d/simple_vhost.conf ## vhosts.d/vhosts.template ## var.vhosts_dir = server_root + "/vhosts" ## ## Cache for mod_compress ## ## used in: ## conf.d/compress.conf ## var.cache_dir = "/var/cache/lighttpd" ## ## Base directory for sockets. ## ## used in: ## conf.d/fastcgi.conf ## conf.d/scgi.conf ## var.socket_dir = home_dir + "/sockets" ## ####################################################################### ####################################################################### ## ## Load the modules. include "modules.conf" ## ####################################################################### ####################################################################### ## ## Basic Configuration ## --------------------- ## server.port = 80 ## ## Use IPv6? ## #server.use-ipv6 = "enable" ## ## bind to a specific IP ## #server.bind = "localhost" ## ## Run as a different username/groupname. ## This requires root permissions during startup. ## server.username = "lighttpd" server.groupname = "lighttpd" ## ## enable core files. ## #server.core-files = "disable" ## ## Document root ## server.document-root = server_root + "/lighttpd" ## ## The value for the "Server:" response field. ## ## It would be nice to keep it at "lighttpd". ## #server.tag = "lighttpd" ## ## store a pid file ## server.pid-file = state_dir + "/lighttpd.pid" ## ####################################################################### ####################################################################### ## ## Logging Options ## ------------------ ## ## all logging options can be overwritten per vhost. ## ## Path to the error log file ## server.errorlog = log_root + "/error.log" ## ## If you want to log to syslog you have to unset the ## server.errorlog setting and uncomment the next line. ## #server.errorlog-use-syslog = "enable" ## ## Access log config ## include "conf.d/access_log.conf" ## ## The debug options are moved into their own file. ## see conf.d/debug.conf for various options for request debugging. 
## include "conf.d/debug.conf" ## ####################################################################### ####################################################################### ## ## Tuning/Performance ## -------------------- ## ## corresponding documentation: ## http://www.lighttpd.net/documentation/performance.html ## ## set the event-handler (read the performance section in the manual) ## ## possible options on linux are: ## ## select ## poll ## linux-sysepoll ## ## linux-sysepoll is recommended on kernel 2.6. ## server.event-handler = "linux-sysepoll" ## ## The basic network interface for all platforms at the syscalls read() ## and write(). Every modern OS provides its own syscall to help network ## servers transfer files as fast as possible ## ## linux-sendfile - is recommended for small files. ## writev - is recommended for sending many large files ## server.network-backend = "linux-sendfile" ## ## As lighttpd is a single-threaded server, its main resource limit is ## the number of file descriptors, which is set to 1024 by default (on ## most systems). ## ## If you are running a high-traffic site you might want to increase this ## limit by setting server.max-fds. ## ## Changing this setting requires root permissions on startup. see ## server.username/server.groupname. ## ## By default lighttpd would not change the operation system default. ## But setting it to 2048 is a better default for busy servers. ## ## With SELinux enabled, this is denied by default and needs to be allowed ## by running the following once : setsebool -P httpd_setrlimit on server.max-fds = 2048 ## ## Stat() call caching. ## ## lighttpd can utilize FAM/Gamin to cache stat call. ## ## possible values are: ## disable, simple or fam. ## server.stat-cache-engine = "simple" ## ## Fine tuning for the request handling ## ## max-connections == max-fds/2 (maybe /3) ## means the other file handles are used for fastcgi/files ## server.max-connections = 1024 ## ## How many seconds to keep a keep-alive connection open, ## until we consider it idle. ## ## Default: 5 ## #server.max-keep-alive-idle = 5 ## ## How many keep-alive requests until closing the connection. ## ## Default: 16 ## #server.max-keep-alive-requests = 18 ## ## Maximum size of a request in kilobytes. ## By default it is unlimited (0). ## ## Uploads to your server cant be larger than this value. ## #server.max-request-size = 0 ## ## Time to read from a socket before we consider it idle. ## ## Default: 60 ## #server.max-read-idle = 60 ## ## Time to write to a socket before we consider it idle. ## ## Default: 360 ## #server.max-write-idle = 360 ## ## Traffic Shaping ## ----------------- ## ## see /usr/share/doc/lighttpd/traffic-shaping.txt ## ## Values are in kilobyte per second. ## ## Keep in mind that a limit below 32kB/s might actually limit the ## traffic to 32kB/s. This is caused by the size of the TCP send ## buffer. ## ## per server: ## #server.kbytes-per-second = 128 ## ## per connection: ## #connection.kbytes-per-second = 32 ## ####################################################################### ####################################################################### ## ## Filename/File handling ## ------------------------ ## ## files to check for if .../ is requested ## index-file.names = ( "index.php", "index.rb", "index.html", ## "index.htm", "default.htm" ) ## index-file.names += ( "index.xhtml", "index.html", "index.htm", "default.htm", "index.php" ) ## ## deny access the file-extensions ## ## ~ is for backupfiles from vi, emacs, joe, ... 
## .inc is often used for code includes which should in general not be part ## of the document-root url.access-deny = ( "~", ".inc" ) ## ## disable range requests for pdf files ## workaround for a bug in the Acrobat Reader plugin. ## $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" } ## ## url handling modules (rewrite, redirect) ## #url.rewrite = ( "^/$" => "/server-status" ) #url.redirect = ( "^/wishlist/(.+)" => "http://www.example.com/$1" ) ## ## both rewrite/redirect support back reference to regex conditional using %n ## #$HTTP["host"] =~ "^www\.(.*)" { # url.redirect = ( "^/(.*)" => "http://%1/$1" ) #} ## ## which extensions should not be handle via static-file transfer ## ## .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi ## static-file.exclude-extensions = ( ".php", ".pl", ".fcgi", ".scgi" ) ## ## error-handler for status 404 ## #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## ## Format: <errorfile-prefix><status-code>.html ## -> ..../status-404.html for 'File not found' ## #server.errorfile-prefix = "/srv/www/htdocs/errors/status-" ## ## mimetype mapping ## include "conf.d/mime.conf" ## ## directory listing configuration ## include "conf.d/dirlisting.conf" ## ## Should lighttpd follow symlinks? ## server.follow-symlink = "enable" ## ## force all filenames to be lowercase? ## #server.force-lowercase-filenames = "disable" ## ## defaults to /var/tmp as we assume it is a local harddisk ## server.upload-dirs = ( "/var/tmp" ) ## ####################################################################### ####################################################################### ## ## SSL Support ## ------------- ## ## To enable SSL for the whole server you have to provide a valid ## certificate and have to enable the SSL engine.:: ## ## ssl.engine = "enable" ## ssl.pemfile = "/path/to/server.pem" ## ## The HTTPS protocol does not allow you to use name-based virtual ## hosting with SSL. If you want to run multiple SSL servers with ## one lighttpd instance you must use IP-based virtual hosting: :: ## ## $SERVER["socket"] == "10.0.0.1:443" { ## ssl.engine = "enable" ## ssl.pemfile = "/etc/ssl/private/www.example.com.pem" ## server.name = "www.example.com" ## ## server.document-root = "/srv/www/vhosts/example.com/www/" ## } ## ## If you have a .crt and a .key file, cat them together into a ## single PEM file: ## $ cat /etc/ssl/private/lighttpd.key /etc/ssl/certs/lighttpd.crt \ ## > /etc/ssl/private/lighttpd.pem ## #ssl.pemfile = "/etc/ssl/private/lighttpd.pem" ## ## optionally pass the CA certificate here. ## ## #ssl.ca-file = "" ## ####################################################################### ####################################################################### ## ## custom includes like vhosts. 
## #include "conf.d/config.conf" #include_shell "cat /etc/lighttpd/vhosts.d/*.conf" ## ####################################################################### ####################################################################### ### Custom Added by me #url.rewrite-once = (".*\.(js|ico|gif|jpg|png|css|jar|class)$" => "$0", "" => "/index.php") url.rewrite-once = ( ".*\?(.*)$" => "/index.php?$1", "^/js/.*$" => "$0", "^.*\.(js|ico|gif|jpg|png|css|swf |jar|class)$" => "$0", "" => "/index.php" ) # expire.url = ( "" => "access 1 days" ) include "myvhost-vhosts.conf" ####################################################################### Here is my Vhost file for lighttpd $HTTP["host"] =~ "192.168.8.35$" { server.document-root = "/var/www/lighttpd/qc41022012/public" server.errorlog = "/var/log/lighttpd/error.log" accesslog.filename = "/var/log/lighttpd/access.log" server.error-handler-404 = "/e404.php" } and here is my nginx.conf file user nginx; worker_processes 5; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/testsite/logs/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; # include /etc/nginx/conf.d/*.conf; ## I added this ## include /etc/nginx/sites-available/*; } Here is my NginX Vhost file server { server_name 192.168.8.91; access_log /var/log/nginx/myapps/logs/access.log; error_log /var/log/nginx/myapps/logs/error.log; root /var/www/html/myapps/public; location / { index index.html index.htm index.php; } location = /favicon.ico { return 204; access_log off; log_not_found off; } # location ~ \.php$ { # try_files $uri /index.php; # include /etc/nginx/fastcgi_params; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; # fastcgi_param SCRIPT_NAME $fastcgi_script_name; location ~ \.php.*$ { rewrite ^(.*.php)/ $1 last; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; # fastcgi_intercept_errors on; # fastcgi_param SCRIPT_FILENAME $document_root/index.php; # fastcgi_param PATH_INFO $uri; # fastcgi_pass 127.0.0.1:9000; # include fastcgi_params; } } We have a custom apps that we created that works great with lighttpd. I went through some headache also when we were trying to figure out how to make it work with lighttpd. this is the line that helps make it work in lighttpd. url.rewrite-once = ( ".*\?(.*)$" => "/index.php?$1", "^/js/.*$" => "$0", "^.*\.(js|ico|gif|jpg|png|css|swf |jar|class)$" => "$0", "" => "/index.php" ) but I couldn't figure out how to make it works in NginX. The webserver run just fine when we use the phpinfo.php test file. However as soon as I point it to my apps, nothing comes up. Check the error.log file and there's no error. Very mind boggling. I spent over 1 week trying to figure it out with no luck.. Please help?
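
    The lighttpd rule quoted twice above is a front-controller catch-all: real static files are served as-is and everything else, query string included, is handed to /index.php. In nginx that is normally written with try_files rather than rewrite. A sketch of how the vhost might look, assuming php-fpm really is listening on 127.0.0.1:9000 as in the config above (root and server_name copied from the posted vhost):

        server {
            server_name 192.168.8.91;
            root        /var/www/html/myapps/public;
            index       index.php index.html index.htm;

            # existing files and directories are served directly; everything else
            # falls through to the front controller with the query string preserved
            location / {
                try_files $uri $uri/ /index.php?$args;
            }

            location = /favicon.ico { return 204; access_log off; log_not_found off; }

            # only /index.php is ever handed to php-fpm
            location = /index.php {
                include        fastcgi_params;
                fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass   127.0.0.1:9000;
            }
        }

    If the application also expects PATH_INFO-style URLs (/index.php/foo/bar) the PHP location has to be widened again, but for a pure ?query front controller like the lighttpd rule above, try_files is usually all that is needed.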

    Read the article

  • Need help setting up lighttpd on Ubuntu 9.10

    - by hap497
    Hi, I am trying to run lighttpd on Ubuntu 9.10. I get the conf file from the doc directory of lighttpd source. $ sudo ./lighttpd -f lighttpd.conf $ ps -ef | grep lighttpd root 2094 1 0 19:40 ? 00:00:00 ./lighttpd -f lighttpd.conf This is my lighttpd.conf: $ more lighttpd.conf # lighttpd configuration file # # use it as a base for lighttpd 1.0.0 and above # # $Id: lighttpd.conf,v 1.7 2004/11/03 22:26:05 weigon Exp $ ############ Options you really have to take care of #################### ## modules to load # at least mod_access and mod_accesslog should be loaded # all other module should only be loaded if really neccesary # - saves some time # - saves memory server.modules = ( # "mod_rewrite", # "mod_redirect", # "mod_alias", "mod_access", # "mod_trigger_b4_dl", # "mod_auth", # "mod_status", # "mod_setenv", # "mod_fastcgi", # "mod_proxy", # "mod_simple_vhost", # "mod_evhost", # "mod_userdir", # "mod_cgi", # "mod_compress", # "mod_ssi", # "mod_usertrack", # "mod_expire", # "mod_secdownload", # "mod_rrdtool", "mod_accesslog" ) ## A static document-root. For virtual hosting take a look at the ## mod_simple_vhost module. server.document-root = "/srv/www/htdocs/" ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" # files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm" ) ## set the event-handler (read the performance section in the manual) # server.event-handler = "freebsd-kqueue" # needed on OS X # mimetype mapping mimetype.assign = ( ".pdf" => "application/pdf", ".sig" => "application/pgp-signature", ".spl" => "application/futuresplash", ".class" => "application/octet-stream", ".ps" => "application/postscript", ".torrent" => "application/x-bittorrent", ".dvi" => "application/x-dvi", ".gz" => "application/x-gzip", ".pac" => "application/x-ns-proxy-autoconfig", ".swf" => "application/x-shockwave-flash", ".tar.gz" => "application/x-tgz", ".tgz" => "application/x-tgz", ".tar" => "application/x-tar", ".zip" => "application/zip", ".mp3" => "audio/mpeg", ".m3u" => "audio/x-mpegurl", ".wma" => "audio/x-ms-wma", ".wax" => "audio/x-ms-wax", ".ogg" => "application/ogg", ".wav" => "audio/x-wav", ".gif" => "image/gif", ".jar" => "application/x-java-archive", ".jpg" => "image/jpeg", ".jpeg" => "image/jpeg", ".png" => "image/png", ".xbm" => "image/x-xbitmap", ".xpm" => "image/x-xpixmap", ".xwd" => "image/x-xwindowdump", ".css" => "text/css", ".html" => "text/html", ".htm" => "text/html", ".js" => "text/javascript", ".asc" => "text/plain", ".c" => "text/plain", ".cpp" => "text/plain", ".log" => "text/plain", ".conf" => "text/plain", ".text" => "text/plain", ".txt" => "text/plain", ".dtd" => "text/xml", ".xml" => "text/xml", ".mpeg" => "video/mpeg", ".mpg" => "video/mpeg", ".mov" => "video/quicktime", ".qt" => "video/quicktime", ".avi" => "video/x-msvideo", ".asf" => "video/x-ms-asf", ".asx" => "video/x-ms-asf", ".wmv" => "video/x-ms-wmv", ".bz2" => "application/x-bzip", ".tbz" => "application/x-bzip-compressed-tar", ".tar.bz2" => "application/x-bzip-compressed-tar", # default mime type "" => "application/octet-stream", ) # Use the "Content-Type" extended attribute to obtain mime type if possible #mimetype.use-xattr = "enable" ## send a different Server: header ## be nice and keep it at lighttpd # server.tag = "lighttpd" #### accesslog module accesslog.filename = "/var/log/lighttpd/access.log" ## deny access the file-extensions # # ~ is for backupfiles from vi, emacs, joe, ... 
# .inc is often used for code includes which should in general not be part # of the document-root url.access-deny = ( "~", ".inc" ) $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" } ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## bind to port (default: 80) #server.port = 81 ## bind to localhost (default: all interfaces) #server.bind = "127.0.0.1" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts #server.pid-file = "/var/run/lighttpd.pid" ###### virtual hosts ## ## If you want name-based virtual hosting add the next three settings and load ## mod_simple_vhost ## ## document-root = ## virtual-server-root + virtual-server-default-host + virtual-server-docroot ## or ## virtual-server-root + http-host + virtual-server-docroot ## #simple-vhost.server-root = "/srv/www/vhosts/" #simple-vhost.default-host = "www.example.org" #simple-vhost.document-root = "/htdocs/" ## ## Format: <errorfile-prefix><status-code>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/usr/share/lighttpd/errors/status-" #server.errorfile-prefix = "/srv/www/errors/status-" ## virtual directory listings #dir-listing.activate = "enable" ## select encoding for directory listings #dir-listing.encoding = "utf-8" ## enable debugging #debug.log-request-header = "enable" #debug.log-response-header = "enable" #debug.log-request-handling = "enable" #debug.log-file-not-found = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't care) #server.username = "wwwrun" ## change uid to <uid> (default: don't care) #server.groupname = "wwwrun" #### compress module #compress.cache-dir = "/var/cache/lighttpd/compress/" #compress.filetype = ("text/plain", "text/html") #### proxy module ## read proxy.txt for more info #proxy.server = ( ".php" => # ( "localhost" => # ( # "host" => "192.168.0.101", # "port" => 80 # ) # ) # ) #### fastcgi module ## read fastcgi.txt for more info ## for PHP don't forget to set cgi.fix_pathinfo = 1 in the php.ini #fastcgi.server = ( ".php" => # ( "localhost" => # ( # "socket" => "/var/run/lighttpd/php-fastcgi.s ocket", # "bin-path" => "/usr/local/bin/php-cgi" # ) # ) # ) #### CGI module #cgi.assign = ( ".pl" => "/usr/bin/perl", # ".cgi" => "/usr/bin/perl" ) # #### SSL engine #ssl.engine = "enable" #ssl.pemfile = "/etc/ssl/private/lighttpd.pem" #### status module #status.status-url = "/server-status" #status.config-url = "/server-config" #### auth module ## read authentication.txt for more info #auth.backend = "plain" #auth.backend.plain.userfile = "lighttpd.user" #auth.backend.plain.groupfile = "lighttpd.group" #auth.backend.ldap.hostname = "localhost" #auth.backend.ldap.base-dn = "dc=my-domain,dc=com" #auth.backend.ldap.filter = "(uid=$)" #auth.require = ( "/server-status" => # ( # "method" => "digest", # "realm" => "download archiv", # "require" => "user=jan" # ), # "/server-config" => # ( # "method" => "digest", # "realm" => "download archiv", # "require" => "valid-user" # ) # ) #### url handling modules (rewrite, redirect, access) #url.rewrite = ( "^/$" => "/server-status" ) #url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) 
#### both rewrite/redirect support back reference to regex conditional using %n #$HTTP["host"] =~ "^www\.(.*)" { # url.redirect = ( "^/(.*)" => "http://%1/$1" ) #} # # define a pattern for the host url finding # %% => % sign # %0 => domain name + tld # %1 => tld # %2 => domain name without tld # %3 => subdomain 1 name # %4 => subdomain 2 name # #evhost.path-pattern = "/srv/www/vhosts/%3/htdocs/" #### expire module #expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "ac cess plus 1 seconds 2 minutes") #### ssi #ssi.extension = ( ".shtml" ) #### rrdtool #rrdtool.binary = "/usr/bin/rrdtool" #rrdtool.db-name = "/var/lib/lighttpd/lighttpd.rrd" #### setenv #setenv.add-request-header = ( "TRAV_ENV" => "mysql://user@host/db" ) #setenv.add-response-header = ( "X-Secret-Message" => "42" ) ## for mod_trigger_b4_dl # trigger-before-download.gdbm-filename = "/var/lib/lighttpd/trigger.db" # trigger-before-download.memcache-hosts = ( "127.0.0.1:11211" ) # trigger-before-download.trigger-url = "^/trigger/" # trigger-before-download.download-url = "^/download/" # trigger-before-download.deny-url = "http://127.0.0.1/index.html" # trigger-before-download.trigger-timeout = 10 #### variable usage: ## variable name without "." is auto prefixed by "var." and becomes "var.bar" #bar = 1 #var.mystring = "foo" ## integer add #bar += 1 ## string concat, with integer cast as string, result: "www.foo1.com" #server.name = "www." + mystring + var.bar + ".com" ## array merge #index-file.names = (foo + ".php") + index-file.names #index-file.names += (foo + ".php") #### include #include /etc/lighttpd/lighttpd-inc.conf ## same as above if you run: "lighttpd -f /etc/lighttpd/lighttpd.conf" #include "lighttpd-inc.conf" #### include_shell #include_shell "echo var.a=1" ## the above is same as: #var.a=1 When I go to browser and hit 'http://127.0.0.1', I get link not found. Any idea?
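
    With that configuration lighttpd listens on the default port 80 (server.port is commented out) and resolves everything under /srv/www/htdocs/, so a "not found" in the browser usually means either nothing is answering on the port being requested or the document root has no file matching index-file.names. A quick check, assuming the paths from the config above:

        # confirm which port lighttpd is actually bound to
        sudo netstat -tlnp | grep lighttpd

        # make sure the document root exists and contains an index file
        sudo mkdir -p /srv/www/htdocs
        echo '<html><body>it works</body></html>' | sudo tee /srv/www/htdocs/index.html

        # the error log normally names the real problem (404, permission denied, ...)
        sudo tail /var/log/lighttpd/error.log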

    Read the article

  • How to configure fastcgi to work with lighttpd in Ubuntu

    - by michael
    I am able to run lighttpd on ubuntu 9.10. But when i tried to setup fastcgi with lighttpd by putting this in the ligttpd.conf file: #### fastcgi module fastcgi.server = ( "/fastcgi_scripts/" => (( "host" => "127.0.0.1", "port" => "9098", "check-local" => "disable", "bin-path" => "/usr/local/bin/cgi-fcgi", "docroot" => "/" # remote server may use # it's own docroot )) ) This is what I get in the error.log in ligttpd: 2010-03-07 21:00:11: (log.c.166) server started 2010-03-07 21:00:11: (mod_fastcgi.c.1104) the fastcgi-backend /usr/local/bin/cgi-fcgi failed to start: 2010-03-07 21:00:11: (mod_fastcgi.c.1108) child exited with status 1 /usr/local/bin/cgi-fcgi 2010-03-07 21:00:11: (mod_fastcgi.c.1111) If you're trying to run your app as a FastCGI backend, make sure you're using the FastCGI-enabled version. If this is PHP on Gentoo, add 'fastcgi' to the USE flags. 2010-03-07 21:00:11: (mod_fastcgi.c.1399) [ERROR]: spawning fcgi failed. 2010-03-07 21:00:11: (server.c.931) Configuration of plugins failed. Going down. I do have cgi-fcgi in /usr/local/bin: $ which cgi-fcgi /usr/local/bin/cgi-fcgi '/usr/local/bin/cgi-fcgi' is the executable after I download and compile fast-cgi. Here is my lighttpd conf file: $ more lighttpd.conf # lighttpd configuration file # # use it as a base for lighttpd 1.0.0 and above # # $Id: lighttpd.conf,v 1.7 2004/11/03 22:26:05 weigon Exp $ ############ Options you really have to take care of #################### ## modules to load # at least mod_access and mod_accesslog should be loaded # all other module should only be loaded if really neccesary # - saves some time # - saves memory server.modules = ( # "mod_rewrite", # "mod_redirect", # "mod_alias", "mod_access", # "mod_trigger_b4_dl", # "mod_auth", # "mod_status", # "mod_setenv", "mod_fastcgi", # "mod_proxy", # "mod_simple_vhost", # "mod_evhost", # "mod_userdir", # "mod_cgi", # "mod_compress", # "mod_ssi", # "mod_usertrack", # "mod_expire", # "mod_secdownload", # "mod_rrdtool", "mod_accesslog" ) ## A static document-root. For virtual hosting take a look at the ## mod_simple_vhost module. 
server.document-root = "/srv/www/htdocs/" ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" # files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm" ) ## set the event-handler (read the performance section in the manual) # server.event-handler = "freebsd-kqueue" # needed on OS X # mimetype mapping mimetype.assign = ( ".pdf" => "application/pdf", ".sig" => "application/pgp-signature", ".spl" => "application/futuresplash", ".class" => "application/octet-stream", ".ps" => "application/postscript", ".torrent" => "application/x-bittorrent", ".dvi" => "application/x-dvi", ".gz" => "application/x-gzip", ".pac" => "application/x-ns-proxy-autoconfig", ".swf" => "application/x-shockwave-flash", ".tar.gz" => "application/x-tgz", ".tgz" => "application/x-tgz", ".tar" => "application/x-tar", ".zip" => "application/zip", ".mp3" => "audio/mpeg", ".m3u" => "audio/x-mpegurl", ".wma" => "audio/x-ms-wma", ".wax" => "audio/x-ms-wax", ".ogg" => "application/ogg", ".wav" => "audio/x-wav", ".gif" => "image/gif", ".jar" => "application/x-java-archive", ".jpg" => "image/jpeg", ".jpeg" => "image/jpeg", ".png" => "image/png", ".xbm" => "image/x-xbitmap", ".xpm" => "image/x-xpixmap", ".xwd" => "image/x-xwindowdump", ".css" => "text/css", ".html" => "text/html", ".htm" => "text/html", ".js" => "text/javascript", ".asc" => "text/plain", ".c" => "text/plain", ".cpp" => "text/plain", ".log" => "text/plain", ".conf" => "text/plain", ".text" => "text/plain", ".txt" => "text/plain", ".dtd" => "text/xml", ".xml" => "text/xml", ".mpeg" => "video/mpeg", ".mpg" => "video/mpeg", ".mov" => "video/quicktime", ".qt" => "video/quicktime", ".avi" => "video/x-msvideo", ".asf" => "video/x-ms-asf", ".asx" => "video/x-ms-asf", ".wmv" => "video/x-ms-wmv", ".bz2" => "application/x-bzip", ".tbz" => "application/x-bzip-compressed-tar", ".tar.bz2" => "application/x-bzip-compressed-tar", # default mime type "" => "application/octet-stream", ) # Use the "Content-Type" extended attribute to obtain mime type if possible #mimetype.use-xattr = "enable" ## send a different Server: header ## be nice and keep it at lighttpd # server.tag = "lighttpd" #### accesslog module accesslog.filename = "/var/log/lighttpd/access.log" ## deny access the file-extensions # # ~ is for backupfiles from vi, emacs, joe, ... 
# .inc is often used for code includes which should in general not be part # of the document-root url.access-deny = ( "~", ".inc" ) $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" } ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## bind to port (default: 80) server.port = 9090 ## bind to localhost (default: all interfaces) server.bind = "127.0.0.1" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts #server.pid-file = "/var/run/lighttpd.pid" ###### virtual hosts ## ## If you want name-based virtual hosting add the next three settings and load ## mod_simple_vhost ## ## document-root = ## virtual-server-root + virtual-server-default-host + virtual-server-docroot ## or ## virtual-server-root + http-host + virtual-server-docroot ## #simple-vhost.server-root = "/srv/www/vhosts/" #simple-vhost.default-host = "www.example.org" #simple-vhost.document-root = "/htdocs/" ## ## Format: <errorfile-prefix><status-code>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/usr/share/lighttpd/errors/status-" #server.errorfile-prefix = "/srv/www/errors/status-" ## virtual directory listings #dir-listing.activate = "enable" ## select encoding for directory listings #dir-listing.encoding = "utf-8" ## enable debugging #debug.log-request-header = "enable" #debug.log-response-header = "enable" #debug.log-request-handling = "enable" #debug.log-file-not-found = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't care) #server.username = "wwwrun" ## change uid to <uid> (default: don't care) #server.groupname = "wwwrun" #### compress module #compress.cache-dir = "/var/cache/lighttpd/compress/" #compress.filetype = ("text/plain", "text/html") #### proxy module ## read proxy.txt for more info #proxy.server = ( ".php" => # ( "localhost" => # ( # "host" => "192.168.0.101", # "port" => 80 # ) # ) # ) #### fastcgi module fastcgi.server = ( "/fastcgi_scripts/" => (( "host" => "127.0.0.1", "port" => 1026, "check-local" => "disable", "bin-path" => "/usr/local/bin/cgi-fcgi", #"docroot" => "/" # remote server may use # it's own docroot )) ) ## read fastcgi.txt for more info ## for PHP don't forget to set cgi.fix_pathinfo = 1 in the php.ini #fastcgi.server = ( ".php" => # ( "localhost" => # ( # "socket" => "/var/run/lighttpd/php-fastcgi.s ocket", # "bin-path" => "/usr/local/bin/php-cgi" # ) # ) # ) #### CGI module #cgi.assign = ( ".pl" => "/usr/bin/perl", # ".cgi" => "/usr/bin/perl" ) # #### SSL engine #ssl.engine = "enable" #ssl.pemfile = "/etc/ssl/private/lighttpd.pem" #### status module #status.status-url = "/server-status" #status.config-url = "/server-config" #### auth module ## read authentication.txt for more info #auth.backend = "plain" #auth.backend.plain.userfile = "lighttpd.user" #auth.backend.plain.groupfile = "lighttpd.group" #auth.backend.ldap.hostname = "localhost" #auth.backend.ldap.base-dn = "dc=my-domain,dc=com" #auth.backend.ldap.filter = "(uid=$)" #auth.require = ( "/server-status" => # ( # "method" => "digest", # "realm" => "download archiv", # "require" => "user=jan" # ), # "/server-config" => # ( # "method" => "digest", # "realm" => 
"download archiv", # "require" => "valid-user" # ) # ) #### url handling modules (rewrite, redirect, access) #url.rewrite = ( "^/$" => "/server-status" ) #url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) #### both rewrite/redirect support back reference to regex conditional using %n #$HTTP["host"] =~ "^www\.(.*)" { # url.redirect = ( "^/(.*)" => "http://%1/$1" ) #} # # define a pattern for the host url finding # %% => % sign # %0 => domain name + tld # %1 => tld # %2 => domain name without tld # %3 => subdomain 1 name # %4 => subdomain 2 name # #evhost.path-pattern = "/srv/www/vhosts/%3/htdocs/" #### expire module #expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "ac cess plus 1 seconds 2 minutes") #### ssi #ssi.extension = ( ".shtml" ) #### rrdtool #rrdtool.binary = "/usr/bin/rrdtool" #rrdtool.db-name = "/var/lib/lighttpd/lighttpd.rrd" #### setenv #setenv.add-request-header = ( "TRAV_ENV" => "mysql://user@host/db" ) #setenv.add-response-header = ( "X-Secret-Message" => "42" ) ## for mod_trigger_b4_dl # trigger-before-download.gdbm-filename = "/var/lib/lighttpd/trigger.db" # trigger-before-download.memcache-hosts = ( "127.0.0.1:11211" ) # trigger-before-download.trigger-url = "^/trigger/" # trigger-before-download.download-url = "^/download/" # trigger-before-download.deny-url = "http://127.0.0.1/index.html" # trigger-before-download.trigger-timeout = 10 #### variable usage: ## variable name without "." is auto prefixed by "var." and becomes "var.bar" #bar = 1 #var.mystring = "foo" ## integer add #bar += 1 ## string concat, with integer cast as string, result: "www.foo1.com" #server.name = "www." + mystring + var.bar + ".com" ## array merge #index-file.names = (foo + ".php") + index-file.names #index-file.names += (foo + ".php") #### include #include /etc/lighttpd/lighttpd-inc.conf ## same as above if you run: "lighttpd -f /etc/lighttpd/lighttpd.conf" #include "lighttpd-inc.conf" #### include_shell #include_shell "echo var.a=1" ## the above is same as: #var.a=1 Thank you for your help.

    Read the article

  • Unicenter Software Delivery 4 not able to connect to MS SQL 2000 Database after W2003 SP2 upgrade

    - by grub
    Hello Everyone Yesterday I installed the Windows Server 2003 Service Pack 2 on a Windows Server 2003 which has Unicenter Software Delivery 4 installed. Prior to the installation I disabled every CA service on the server (Brightstor, SDO , RCO, TNG) and the MS SQL 2000 service. After the installation of the SP2 I enabled the services again but the Unicenter Service is not able to connect to the MS SQL 2000 Database anymore. The database itself is up and running and I can connect to it with the Enterprise Manager. A dbcc checkdb doesnt return any errors on the Unicenter database. The Unicenter service throws the following error messages during startup: IM[1] 27/05 10:38:31,272 Installation Manager in init phase IM[1] 27/05 10:38:31,694 Process IM(L) - [004152] failed to open database SDDATA. dbopen() call failed. IM[1] 27/05 10:38:31,694 sqls error details: IM[1] 27/05 10:38:31,694 (null) IM[1] 27/05 10:38:32,069 ##EXCEPTION## TableError T@:PS_SQLS\isam_db.cxx:744. IM[1] 27/05 10:38:32,069 ##EXCEPTION## TableError C@:TaskmgrL\ASMTML.CXX:596. IM[1] 27/05 10:38:32,069 ##EXCEPTION## ErrorCode: 4711 in SDDATA:Isam::Isam. Process IM(L) - [004152] failed to open database SDDATA. dbopen() call failed. IM[1] 27/05 10:38:32,069 sqls error details: IM[1] 27/05 10:38:32,069 (null) IM[1] 27/05 10:38:32,069 returned 0. IM[1] 27/05 10:38:32,084 Persistent Storage could not be opened. Error cause is found in the ASM Event Log. Restart Task Manager. IM[1] 27/05 10:38:32,084 Failed to open database. IM[1] 27/05 10:38:32,084 Installation Manager ends> If I check the Unicenter configutation with *chkmib_l* the tool throws an exception and creates a small dump file. An Exception Occurred: Time: 27/05 09:49:38,928 Reason: ChkMIB_l.exe caused an UNKNOWN_EXCEPTION in module kernel32.dll at 7C82001B:77E4BEE7 Registers: EAX=0012F908 EBX=00000000 ECX=00000000 EDX=02410004 ESI=0012F998 EDI=0012F998 EBP=0012F958 ESP=0012F904 EIP=77E4BEE7 FLG=00000206 CS =7C82001B DS =B90023 SS =120023 ES =120023 FS =7C82003B GS =3F0000 Call Stack: 7C82001B:77E4BEE7 (0xE06D7363 0x00000001 0x00000003 0x0012F98C) kernel32.dll 7C82001B:77BB3259 (0x0012F9B8 0x2B017C50 0x2B024404 0x00B68C98) MSVCRT.dll 7C82001B:2B010C42 (0x00020003 0x010C00FE 0x003F0190 0x00B69050) PS.dll << SOFTWARE DELIVERY INSTANCE INFO >> TRIGGER 0(1) instances: JCE 0(1) instances: TM 0(1) instances: IM 0(1) instances: DM 0(1) instances: DPU 0(71) instances: NATF 0(1) instances: MIBCONV 0(0) instances: API 0(4) instances: DTSFT 0(0) instances: TNGPOP 0(0) instances: DGATE 0(0) instances: << FLUSHING MEMORY TRACES >> << STOP FLUSHING MEMORY TRACES >> I compared the configuration of the SDO service and the system configuration with another server on which the Windows Server 2003 SP2 is installed and SDO is working. The configuration is the same and the same driver and software versions are used. Do you have any idea what causes the connection issue? Should I deinstall the unicenter service and make a fresh installation on the server or should I remove the Windows Server 2003 SP2? I don't want to remove the SP2 because it's a requirement for WSUS3 SP2 and I really don't want to know how many possible exploits are possible in such an old system ;-) Thank you very much and have a nice day. Below you can find more detailed information about the system and the SDO service. 
psinfo output (system information) System information for \\CZZAAS1003: Uptime: 0 days 14 hours 38 minutes 50 seconds Kernel version: Microsoft Windows Server 2003, Multiprocessor Free Product type: Standard Edition Product version: 5.2 Service pack: 2 Kernel build number: 3790 Install date: 23.9.2004, 11:16:11s IE version: 6.0000 System root: C:\WINDOWS Processors: 2 Processor speed: 2.3 GHz Processor type: Intel(R) Xeon(TM) CPU Physical memory: 1024 MB Video driver: RAGE XL PCI Family (Microsoft Corporation) sdver output (Unicenter Software delivery version) Unicenter Software Delivery 4.0 SP1 I2 ENU [2901] Copyright 2004 Computer Associates International, Incorporated ms sql 2000 version and odbc driver version MS SQL 2000 Server Standard Edition Product Version: 8.00.760 (SP3) ODBC Driver: SQL Server - Version 2000.86.3959.00 complete Unicenter Software delivery service log file TRIGGER[1] 27/05 10:38:28,366 SD Trigger Agent has started NATF[1] 27/05 10:38:28,928 Initiation phase finished IM[1] 27/05 10:38:31,272 Installation Manager in init phase IM[1] 27/05 10:38:31,694 Process IM(L) - [004152] failed to open database SDDATA. dbopen() call failed. IM[1] 27/05 10:38:31,694 sqls error details: IM[1] 27/05 10:38:31,694 (null) IM[1] 27/05 10:38:32,069 ##EXCEPTION## TableError T@:PS_SQLS\isam_db.cxx:744. IM[1] 27/05 10:38:32,069 ##EXCEPTION## TableError C@:TaskmgrL\ASMTML.CXX:596. IM[1] 27/05 10:38:32,069 ##EXCEPTION## ErrorCode: 4711 in SDDATA:Isam::Isam. Process IM(L) - [004152] failed to open database SDDATA. dbopen() call failed. IM[1] 27/05 10:38:32,069 sqls error details: IM[1] 27/05 10:38:32,069 (null) IM[1] 27/05 10:38:32,069 returned 0. IM[1] 27/05 10:38:32,084 Persistent Storage could not be opened. Error cause is found in the ASM Event Log. Restart Task Manager. IM[1] 27/05 10:38:32,084 Failed to open database. IM[1] 27/05 10:38:32,084 Installation Manager ends TM[1] 27/05 10:38:32,116 Task Manager in init phase TM[1] 27/05 10:38:32,334 Process TM(L) - [006132] failed to open database SDDATA. dbopen() call failed. TM[1] 27/05 10:38:32,334 sqls error details: TM[1] 27/05 10:38:32,334 (null) TM[1] 27/05 10:38:32,381 ##EXCEPTION## TableError T@:PS_SQLS\isam_db.cxx:744. TM[1] 27/05 10:38:32,381 ##EXCEPTION## TableError C@:TaskmgrL\ASMTML.CXX:596. TM[1] 27/05 10:38:32,381 ##EXCEPTION## ErrorCode: 4711 in SDDATA:Isam::Isam. Process TM(L) - [006132] failed to open database SDDATA. dbopen() call failed. TM[1] 27/05 10:38:32,381 sqls error details: TM[1] 27/05 10:38:32,381 (null) TM[1] 27/05 10:38:32,381 returned 0. TM[1] 27/05 10:38:32,381 Persistent Storage could not be opened. Error cause is found in the ASM Event Log. Restart Task Manager. TM[1] 27/05 10:38:32,381 Failed to open database. TM[1] 27/05 10:38:32,381 Task Manager ends DM[1] 27/05 10:38:33,272 Dialogue Manager is now active API[1] 27/05 10:38:34,397 API Server Process in init phase API[1] 27/05 10:38:34,397 API - SDNLS_Init API[1] 27/05 10:38:34,397 API - connectEM API[1] 27/05 10:38:34,412 API - apiServ.init DM[1] 27/05 10:38:34,678 **AND** 1 Agents triggered API[1] 27/05 10:38:34,709 Process API(L) - [005680] failed to open database SDDATA. dbopen() call failed. API[1] 27/05 10:38:34,709 sqls error details: API[1] 27/05 10:38:34,709 (null) API[1] 27/05 10:38:34,756 ##EXCEPTION## TableError T@:PS_SQLS\isam_db.cxx:744. API[1] 27/05 10:38:34,756 ##EXCEPTION## TableError C@:MainAPIL\APISERVL.CXX:246. API[1] 27/05 10:38:34,756 ##EXCEPTION## ErrorCode: 4711 in SDDATA:Isam::Isam. 
Process API(L) - [005680] failed to open database SDDATA. dbopen() call failed. API[1] 27/05 10:38:34,756 sqls error details: API[1] 27/05 10:38:34,756 (null) API[1] 27/05 10:38:34,756 returned 0. API[1] 27/05 10:38:34,756 Open of the database failed. API[1] 27/05 10:38:34,756 API - apiServ.init complete API[1] 27/05 10:38:34,756 API - start_APIServer DM[1] 27/05 10:38:34,803 CZZAAR1037 DPU[1:CZZAAR1037] 27/05 10:38:35,772 DPU in init phase DPU[1:CZZAAR1037] 27/05 10:38:36,100 >> GetManagerData DPU[1:CZZAAR1037] 27/05 10:38:36,287 >> SetCompInfo DPU[1:CZZAAR1037] 27/05 10:38:36,334 >> GetContainerList DPU[1:CZZAAR1037] 27/05 10:38:36,350 getJobState 3 from 5b6ad DPU[1:CZZAAR1037] 27/05 10:38:36,350 getJobState 3 from 5b6ad DPU[1:CZZAAR1037] 27/05 10:38:36,350 getJobState 3 from 5b6b7 DPU[1:CZZAAR1037] 27/05 10:38:36,350 getJobState 3 from 5b6b7 DPU[1:CZZAAR1037] 27/05 10:38:36,350 getJobState 3 from 5b6c1 DPU[1:CZZAAR1037] 27/05 10:38:36,350 getJobState 3 from 5b6c1 DPU[1:CZZAAR1037] 27/05 10:38:36,366 getJobState 3 from 5b6cb DPU[1:CZZAAR1037] 27/05 10:38:36,366 getJobState 3 from 5b6cb DPU[1:CZZAAR1037] 27/05 10:38:36,366 getJobState 3 from 5b6f9 DPU[1:CZZAAR1037] 27/05 10:38:36,366 getJobState 3 from 5b6f9 DPU[1:CZZAAR1037] 27/05 10:38:36,366 getJobState 3 from 5b71a DPU[1:CZZAAR1037] 27/05 10:38:36,366 getJobState 3 from 5b71a DPU[1:CZZAAR1037] 27/05 10:38:36,366 getJobState 3 from 5b724 DPU[1:CZZAAR1037] 27/05 10:38:36,381 getJobState 3 from 5b724 DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b72e DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b72e DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b738 DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b738 DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b742 DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b742 DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b74c DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b74c DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b756 DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b756 DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b78a DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b78a DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b7af DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b7af DPU[1:CZZAAR1037] 27/05 10:38:36,522 >> SetCompAttr DPU[1:CZZAAR1037] 27/05 10:38:36,569 >> SetDetected DPU[1:CZZAAR1037] 27/05 10:38:36,584 disconnect DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b6ad DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b6b7 DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b6c1 DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b6cb DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b6f9 DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b71a DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b724 DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b72e DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b738 DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b742 DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b74c DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b756 DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b78a DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b7af DPU[1:CZZAAR1037] 27/05 10:38:36,584 DPU ends DM[1] 27/05 10:38:38,006 **AND** 0 Agents triggered JCE[1] 27/05 10:38:38,053 JCE starts DM[1] 27/05 10:38:38,287 CZZAAS1003 DPU[2:CZZAAS1003] 
27/05 10:38:38,412 DPU in init phase DPU[2:CZZAAS1003] 27/05 10:38:38,647 >> GetManagerData DPU[2:CZZAAS1003] 27/05 10:38:38,756 >> SetCompInfo DPU[2:CZZAAS1003] 27/05 10:38:38,787 >> GetContainerList DM[1] 27/05 10:38:38,850 **AND** 1 Agents triggered DM[1] 27/05 10:38:38,928 CZZAAR1124 DPU[3:CZZAAR1124] 27/05 10:38:39,053 DPU in init phase DPU[3:CZZAAR1124] 27/05 10:38:39,272 >> GetManagerData DM[1] 27/05 10:38:39,334 **AND** 1 Agents triggered DPU[3:CZZAAR1124] 27/05 10:38:39,381 >> SetCompInfo DPU[3:CZZAAR1124] 27/05 10:38:39,412 >> GetContainerList DM[1] 27/05 10:38:39,412 CZZAAR1125 DPU[3:CZZAAR1124] 27/05 10:38:39,428 getJobState 3 from 5b88e DPU[3:CZZAAR1124] 27/05 10:38:39,428 getJobState 3 from 5b88e DPU[2:CZZAAS1003] 27/05 10:38:39,491 >> SetCompAttr DPU[3:CZZAAR1124] 27/05 10:38:39,522 >> SetCompAttr DPU[4:CZZAAR1125] 27/05 10:38:39,522 DPU in init phase DPU[3:CZZAAR1124] 27/05 10:38:39,584 >> SetDetected DPU[2:CZZAAS1003] 27/05 10:38:39,584 >> SetDetected DPU[3:CZZAAR1124] 27/05 10:38:39,584 disconnect DPU[3:CZZAAR1124] 27/05 10:38:39,600 getJobState 3 from 5b88e DPU[3:CZZAAR1124] 27/05 10:38:39,600 DPU ends DPU[2:CZZAAS1003] 27/05 10:38:39,631 disconnect DPU[2:CZZAAS1003] 27/05 10:38:39,631 DPU ends DPU[4:CZZAAR1125] 27/05 10:38:39,756 >> GetManagerData DPU[4:CZZAAR1125] 27/05 10:38:39,850 >> SetCompInfo DPU[4:CZZAAR1125] 27/05 10:38:39,881 >> GetContainerList DPU[4:CZZAAR1125] 27/05 10:38:39,897 getJobState 3 from 5b8a9 DPU[4:CZZAAR1125] 27/05 10:38:39,897 getJobState 3 from 5b8a9 DPU[4:CZZAAR1125] 27/05 10:38:39,991 >> SetCompAttr DPU[4:CZZAAR1125] 27/05 10:38:40,100 >> SetDetected DPU[4:CZZAAR1125] 27/05 10:38:40,116 disconnect DPU[4:CZZAAR1125] 27/05 10:38:40,116 getJobState 3 from 5b8a9 DPU[4:CZZAAR1125] 27/05 10:38:40,116 DPU ends DM[1] 27/05 10:38:40,741 **AND** 0 Agents triggered JCE[1] 27/05 10:38:42,756 JCE ends DM[1] 27/05 10:38:47,475 **AND** 0 Agents triggered DM[1] 27/05 10:38:54,241 **AND** 0 Agents triggered
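
    Since the Enterprise Manager can still open the database, the failing dbopen() points at the path the Unicenter services use to reach SQL Server rather than at SDDATA itself (SP2 tightens several network and service defaults). One way to test that same path is to run osql on the SDO box under the account the services run as; the server and database names below are taken from the logs above, while the SQL login is a placeholder:

        REM trusted connection, run as the Unicenter service account
        osql -S CZZAAS1003 -d SDDATA -E -Q "SELECT @@VERSION"

        REM or with the SQL login the SDO installation was configured to use
        osql -S CZZAAS1003 -d SDDATA -U sdo_login -P ******** -Q "SELECT 1"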

    Read the article

  • ServerRoot in my lighttpd.conf

    - by michael
    Hi, I have use the following example lighttpd.conf to launch my lighttpd. Can you please tell me where is my 'ServerRoot'? # lighttpd configuration file # # use it as a base for lighttpd 1.0.0 and above # # $Id: lighttpd.conf,v 1.7 2004/11/03 22:26:05 weigon Exp $ ############ Options you really have to take care of #################### ## modules to load # at least mod_access and mod_accesslog should be loaded # all other module should only be loaded if really neccesary # - saves some time # - saves memory server.modules = ( # "mod_rewrite", # "mod_redirect", # "mod_alias", "mod_access", # "mod_trigger_b4_dl", # "mod_auth", # "mod_status", # "mod_setenv", "mod_fastcgi", # "mod_proxy", # "mod_simple_vhost", # "mod_evhost", # "mod_userdir", # "mod_cgi", # "mod_compress", # "mod_ssi", # "mod_usertrack", # "mod_expire", # "mod_secdownload", # "mod_rrdtool", "mod_accesslog" ) ## A static document-root. For virtual hosting take a look at the ## mod_simple_vhost module. server.document-root = "/srv/www/htdocs/" ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" # files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm" ) ## set the event-handler (read the performance section in the manual) # server.event-handler = "freebsd-kqueue" # needed on OS X # mimetype mapping mimetype.assign = ( ".pdf" => "application/pdf", ".sig" => "application/pgp-signature", ".spl" => "application/futuresplash", ".class" => "application/octet-stream", ".ps" => "application/postscript", ".torrent" => "application/x-bittorrent", ".dvi" => "application/x-dvi", ".gz" => "application/x-gzip", ".pac" => "application/x-ns-proxy-autoconfig", ".swf" => "application/x-shockwave-flash", ".tar.gz" => "application/x-tgz", ".tgz" => "application/x-tgz", ".tar" => "application/x-tar", ".zip" => "application/zip", ".mp3" => "audio/mpeg", ".m3u" => "audio/x-mpegurl", ".wma" => "audio/x-ms-wma", ".wax" => "audio/x-ms-wax", ".ogg" => "application/ogg", ".wav" => "audio/x-wav", ".gif" => "image/gif", ".jar" => "application/x-java-archive", ".jpg" => "image/jpeg", ".jpeg" => "image/jpeg", ".png" => "image/png", ".xbm" => "image/x-xbitmap", ".xpm" => "image/x-xpixmap", ".xwd" => "image/x-xwindowdump", ".css" => "text/css", ".html" => "text/html", ".htm" => "text/html", ".js" => "text/javascript", ".asc" => "text/plain", ".c" => "text/plain", ".cpp" => "text/plain", ".log" => "text/plain", ".conf" => "text/plain", ".text" => "text/plain", ".txt" => "text/plain", ".dtd" => "text/xml", ".xml" => "text/xml", ".mpeg" => "video/mpeg", ".mpg" => "video/mpeg", ".mov" => "video/quicktime", ".qt" => "video/quicktime", ".avi" => "video/x-msvideo", ".asf" => "video/x-ms-asf", ".asx" => "video/x-ms-asf", ".wmv" => "video/x-ms-wmv", ".bz2" => "application/x-bzip", ".tbz" => "application/x-bzip-compressed-tar", ".tar.bz2" => "application/x-bzip-compressed-tar", # default mime type "" => "application/octet-stream", ) # Use the "Content-Type" extended attribute to obtain mime type if possible #mimetype.use-xattr = "enable" ## send a different Server: header ## be nice and keep it at lighttpd # server.tag = "lighttpd" #### accesslog module accesslog.filename = "/var/log/lighttpd/access.log" ## deny access the file-extensions # # ~ is for backupfiles from vi, emacs, joe, ... 
# .inc is often used for code includes which should in general not be part # of the document-root url.access-deny = ( "~", ".inc" ) $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" } ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## bind to port (default: 80) server.port = 9090 ## bind to localhost (default: all interfaces) server.bind = "127.0.0.1" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts #server.pid-file = "/var/run/lighttpd.pid" ###### virtual hosts ## ## If you want name-based virtual hosting add the next three settings and load ## mod_simple_vhost ## ## document-root = ## virtual-server-root + virtual-server-default-host + virtual-server-docroot ## or ## virtual-server-root + http-host + virtual-server-docroot ## #simple-vhost.server-root = "/srv/www/vhosts/" #simple-vhost.default-host = "www.example.org" #simple-vhost.document-root = "/htdocs/" ## ## Format: <errorfile-prefix><status-code>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/usr/share/lighttpd/errors/status-" #server.errorfile-prefix = "/srv/www/errors/status-" ## virtual directory listings #dir-listing.activate = "enable" ## select encoding for directory listings #dir-listing.encoding = "utf-8" ## enable debugging #debug.log-request-header = "enable" #debug.log-response-header = "enable" #debug.log-request-handling = "enable" #debug.log-file-not-found = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't care) #server.username = "wwwrun" ## change uid to <uid> (default: don't care) #server.groupname = "wwwrun" #### compress module #compress.cache-dir = "/var/cache/lighttpd/compress/" #compress.filetype = ("text/plain", "text/html") #### proxy module ## read proxy.txt for more info #proxy.server = ( ".php" => # ( "localhost" => # ( # "host" => "192.168.0.101", # "port" => 80 # ) # ) # ) #### fastcgi module fastcgi.server = ( "/fastcgi_scripts/" => (( "host" => "127.0.0.1", "port" => 1026, "check-local" => "disable", "bin-path" => "/usr/local/bin/cgi-fcgi", #"docroot" => "/" # remote server may use # it's own docroot )) ) ## read fastcgi.txt for more info ## for PHP don't forget to set cgi.fix_pathinfo = 1 in the php.ini #fastcgi.server = ( ".php" => # ( "localhost" => # ( # "socket" => "/var/run/lighttpd/php-fastcgi.socket", # "bin-path" => "/usr/local/bin/php-cgi" # ) # ) # ) #### CGI module #cgi.assign = ( ".pl" => "/usr/bin/perl", # ".cgi" => "/usr/bin/perl" ) # #### SSL engine #ssl.engine = "enable" #ssl.pemfile = "/etc/ssl/private/lighttpd.pem" #### status module #status.status-url = "/server-status" #status.config-url = "/server-config" #### auth module ## read authentication.txt for more info #auth.backend = "plain" #auth.backend.plain.userfile = "lighttpd.user" #auth.backend.plain.groupfile = "lighttpd.group" #auth.backend.ldap.hostname = "localhost" #auth.backend.ldap.base-dn = "dc=my-domain,dc=com" #auth.backend.ldap.filter = "(uid=$)" #auth.require = ( "/server-status" => # ( # "method" => "digest", # "realm" => "download archiv", # "require" => "user=jan" # ), # "/server-config" => # ( # "method" => "digest", # "realm" => 
"download archiv", # "require" => "valid-user" # ) # ) #### url handling modules (rewrite, redirect, access) #url.rewrite = ( "^/$" => "/server-status" ) #url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) #### both rewrite/redirect support back reference to regex conditional using %n #$HTTP["host"] =~ "^www\.(.*)" { # url.redirect = ( "^/(.*)" => "http://%1/$1" ) #} # # define a pattern for the host url finding # %% => % sign # %0 => domain name + tld # %1 => tld # %2 => domain name without tld # %3 => subdomain 1 name # %4 => subdomain 2 name # #evhost.path-pattern = "/srv/www/vhosts/%3/htdocs/" #### expire module #expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "access plus 1 seconds 2 minutes") #### ssi #ssi.extension = ( ".shtml" ) #### rrdtool #rrdtool.binary = "/usr/bin/rrdtool" #rrdtool.db-name = "/var/lib/lighttpd/lighttpd.rrd" #### setenv #setenv.add-request-header = ( "TRAV_ENV" => "mysql://user@host/db" ) #setenv.add-response-header = ( "X-Secret-Message" => "42" ) ## for mod_trigger_b4_dl # trigger-before-download.gdbm-filename = "/var/lib/lighttpd/trigger.db" # trigger-before-download.memcache-hosts = ( "127.0.0.1:11211" ) # trigger-before-download.trigger-url = "^/trigger/" # trigger-before-download.download-url = "^/download/" # trigger-before-download.deny-url = "http://127.0.0.1/index.html" # trigger-before-download.trigger-timeout = 10 #### variable usage: ## variable name without "." is auto prefixed by "var." and becomes "var.bar" #bar = 1 #var.mystring = "foo" ## integer add #bar += 1 ## string concat, with integer cast as string, result: "www.foo1.com" #server.name = "www." + mystring + var.bar + ".com" ## array merge #index-file.names = (foo + ".php") + index-file.names #index-file.names += (foo + ".php") #### include #include /etc/lighttpd/lighttpd-inc.conf ## same as above if you run: "lighttpd -f /etc/lighttpd/lighttpd.conf" #include "lighttpd-inc.conf" #### include_shell #include_shell "echo var.a=1" ## the above is same as: #var.a=1 Thank you.

    Read the article

  • Upgrading Redmine, activerecord-mysql2-adapter not recognized

    - by David Kaczynski
    For upgrading Redmine from 1.0.1 to 2.1.2, I need to execute the command: rake db:migrate RAILS_ENV=production However, doing so produces the following error: rake aborted! Please install the mysql2 adapter: gem install activerecord-mysql2-adapter (mysql2 is not part of the bundle. Add it to Gemfile.) I have ran gem install activerecord-mysql2-adapter, but I still get the same error when I try to run the rake ... command. How do I get my RoR app to recognize that I have the mysql2 adapter installed already? or Is there something wrong with my activerecord-mysql2-adapter installation? Results of sudo bundle install: Using rake (10.0.0) Using i18n (0.6.1) Using multi_json (1.3.7) Using activesupport (3.2.8) Using builder (3.0.0) Using activemodel (3.2.8) Using erubis (2.7.0) Using journey (1.0.4) Using rack (1.4.1) Using rack-cache (1.2) Using rack-test (0.6.2) Using hike (1.2.1) Using tilt (1.3.3) Using sprockets (2.1.3) Using actionpack (3.2.8) Using mime-types (1.19) Using polyglot (0.3.3) Using treetop (1.4.12) Using mail (2.4.4) Using actionmailer (3.2.8) Using arel (3.0.2) Using tzinfo (0.3.35) Using activerecord (3.2.8) Using activeresource (3.2.8) Using coderay (1.0.8) Using fastercsv (1.5.5) Using rack-ssl (1.3.2) Using json (1.7.5) Using rdoc (3.12) Using thor (0.16.0) Using railties (3.2.8) Using jquery-rails (2.0.3) Using metaclass (0.0.1) Using mocha (0.12.3) Using mysql (2.8.1) Using net-ldap (0.3.1) Using pg (0.14.1) Using ruby-openid (2.1.8) Using rack-openid (1.3.1) Using bundler (1.2.1) Using rails (3.2.8) Using rmagick (2.13.1) Using shoulda (2.11.3) Using sqlite3 (1.3.6) Using yard (0.8.3) [32mYour bundle is complete! Use `bundle show [gemname]` to see where a bundled gem is installed.[0m Results of sudo find / -name "*mysql2*": /var/lib/gems/1.8/doc/mysql2-0.3.11 /var/lib/gems/1.8/doc/activerecord-3.2.9/ri/ActiveRecord/Base/mysql2_connection-c.ri /var/lib/gems/1.8/doc/activerecord-mysql2-adapter-0.0.3 /var/lib/gems/1.8/doc/activerecord-mysql2-adapter-0.0.3/ri/ActiveRecord/Base/em_mysql2_connection-c.ri /var/lib/gems/1.8/doc/activerecord-mysql2-adapter-0.0.3/ri/ActiveRecord/Base/mysql2_connection-c.ri /var/lib/gems/1.8/gems/mysql2-0.3.11 /var/lib/gems/1.8/gems/mysql2-0.3.11/spec/mysql2 /var/lib/gems/1.8/gems/mysql2-0.3.11/mysql2.gemspec /var/lib/gems/1.8/gems/mysql2-0.3.11/lib/mysql2.rb /var/lib/gems/1.8/gems/mysql2-0.3.11/lib/mysql2 /var/lib/gems/1.8/gems/mysql2-0.3.11/lib/mysql2/mysql2.so /var/lib/gems/1.8/gems/mysql2-0.3.11/ext/mysql2 /var/lib/gems/1.8/gems/mysql2-0.3.11/ext/mysql2/mysql2.so /var/lib/gems/1.8/gems/mysql2-0.3.11/ext/mysql2/mysql2_ext.c /var/lib/gems/1.8/gems/mysql2-0.3.11/ext/mysql2/mysql2_ext.h /var/lib/gems/1.8/gems/mysql2-0.3.11/ext/mysql2/mysql2_ext.o /var/lib/gems/1.8/gems/activerecord-3.2.9/lib/active_record/connection_adapters/mysql2_adapter.rb /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3 /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3/activerecord-mysql2-adapter.gemspec /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3/lib/arel/engines/sql/compilers/mysql2_compiler.rb /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3/lib/activerecord-mysql2-adapter.rb /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3/lib/activerecord-mysql2-adapter /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3/lib/active_record/connection_adapters/em_mysql2_adapter.rb /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3/lib/active_record/connection_adapters/mysql2_adapter.rb 
/var/lib/gems/1.8/gems/activerecord-3.2.8/lib/active_record/connection_adapters/mysql2_adapter.rb /var/lib/gems/1.8/cache/mysql2-0.3.11.gem /var/lib/gems/1.8/cache/activerecord-mysql2-adapter-0.0.3.gem /var/lib/gems/1.8/specifications/activerecord-mysql2-adapter-0.0.3.gemspec /var/lib/gems/1.8/specifications/mysql2-0.3.11.gemspec Contents of /usr/share/redmine/Gemfile: source 'http://rubygems.org' gem 'rails', '3.2.8' gem "jquery-rails", "~> 2.0.2" gem "i18n", "~> 0.6.0" gem "coderay", "~> 1.0.6" gem "fastercsv", "~> 1.5.0", :platforms => [:mri_18, :mingw_18, :jruby] gem "builder", "3.0.0" # Optional gem for LDAP authentication group :ldap do gem "net-ldap", "~> 0.3.1" end # Optional gem for OpenID authentication group :openid do gem "ruby-openid", "~> 2.1.4", :require => "openid" gem "rack-openid" end # Optional gem for exporting the gantt to a PNG file, not supported with jruby platforms :mri, :mingw do group :rmagick do # RMagick 2 supports ruby 1.9 # RMagick 1 would be fine for ruby 1.8 but Bundler does not support # different requirements for the same gem on different platforms gem "rmagick", ">= 2.0.0" end end # Database gems platforms :mri, :mingw do group :postgresql do gem "pg", ">= 0.11.0" end group :sqlite do gem "sqlite3" end end platforms :mri_18, :mingw_18 do group :mysql do gem "mysql" end end platforms :mri_19, :mingw_19 do group :mysql do gem "mysql2", "~> 0.3.11" end end platforms :jruby do gem "jruby-openssl" group :mysql do gem "activerecord-jdbcmysql-adapter" end group :postgresql do gem "activerecord-jdbcpostgresql-adapter" end group :sqlite do gem "activerecord-jdbcsqlite3-adapter" end end group :development do gem "rdoc", ">= 2.4.2" gem "yard" end group :test do gem "shoulda", "~> 2.11" # Shoulda does not work nice on Ruby 1.9.3 and seems to need test-unit explicitely. gem "test-unit", :platforms => [:mri_19] gem "mocha", "0.12.3" end local_gemfile = File.join(File.dirname(__FILE__), "Gemfile.local") if File.exists?(local_gemfile) puts "Loading Gemfile.local ..." if $DEBUG # `ruby -d` or `bundle -v` instance_eval File.read(local_gemfile) end # Load plugins' Gemfiles Dir.glob File.expand_path("../plugins/*/Gemfile", __FILE__) do |file| puts "Loading #{file} ..." 
if $DEBUG # `ruby -d` or `bundle -v` instance_eval File.read(file) end Contents of /usr/share/redmine/Gemfile.lock: GEM remote: http://rubygems.org/ specs: actionmailer (3.2.8) actionpack (= 3.2.8) mail (~> 2.4.4) actionpack (3.2.8) activemodel (= 3.2.8) activesupport (= 3.2.8) builder (~> 3.0.0) erubis (~> 2.7.0) journey (~> 1.0.4) rack (~> 1.4.0) rack-cache (~> 1.2) rack-test (~> 0.6.1) sprockets (~> 2.1.3) activemodel (3.2.8) activesupport (= 3.2.8) builder (~> 3.0.0) activerecord (3.2.8) activemodel (= 3.2.8) activesupport (= 3.2.8) arel (~> 3.0.2) tzinfo (~> 0.3.29) activeresource (3.2.8) activemodel (= 3.2.8) activesupport (= 3.2.8) activesupport (3.2.8) i18n (~> 0.6) multi_json (~> 1.0) arel (3.0.2) builder (3.0.0) coderay (1.0.8) erubis (2.7.0) fastercsv (1.5.5) hike (1.2.1) i18n (0.6.1) journey (1.0.4) jquery-rails (2.0.3) railties (>= 3.1.0, < 5.0) thor (~> 0.14) json (1.7.5) mail (2.4.4) i18n (>= 0.4.0) mime-types (~> 1.16) treetop (~> 1.4.8) metaclass (0.0.1) mime-types (1.19) mocha (0.12.3) metaclass (~> 0.0.1) multi_json (1.3.7) mysql (2.8.1) mysql2 (0.3.11) net-ldap (0.3.1) pg (0.14.1) polyglot (0.3.3) rack (1.4.1) rack-cache (1.2) rack (>= 0.4) rack-openid (1.3.1) rack (>= 1.1.0) ruby-openid (>= 2.1.8) rack-ssl (1.3.2) rack rack-test (0.6.2) rack (>= 1.0) rails (3.2.8) actionmailer (= 3.2.8) actionpack (= 3.2.8) activerecord (= 3.2.8) activeresource (= 3.2.8) activesupport (= 3.2.8) bundler (~> 1.0) railties (= 3.2.8) railties (3.2.8) actionpack (= 3.2.8) activesupport (= 3.2.8) rack-ssl (~> 1.3.2) rake (>= 0.8.7) rdoc (~> 3.4) thor (>= 0.14.6, < 2.0) rake (10.0.0) rdoc (3.12) json (~> 1.4) rmagick (2.13.1) ruby-openid (2.1.8) shoulda (2.11.3) sprockets (2.1.3) hike (~> 1.2) rack (~> 1.0) tilt (~> 1.1, != 1.3.0) sqlite3 (1.3.6) test-unit (2.5.2) thor (0.16.0) tilt (1.3.3) treetop (1.4.12) polyglot polyglot (>= 0.3.1) tzinfo (0.3.35) yard (0.8.3) PLATFORMS ruby DEPENDENCIES activerecord-jdbcmysql-adapter activerecord-jdbcpostgresql-adapter activerecord-jdbcsqlite3-adapter builder (= 3.0.0) coderay (~> 1.0.6) fastercsv (~> 1.5.0) i18n (~> 0.6.0) jquery-rails (~> 2.0.2) jruby-openssl mocha (= 0.12.3) mysql mysql2 (~> 0.3.11) net-ldap (~> 0.3.1) pg (>= 0.11.0) rack-openid rails (= 3.2.8) rdoc (>= 2.4.2) rmagick (>= 2.0.0) ruby-openid (~> 2.1.4) shoulda (~> 2.11) sqlite3 test-unit yard Results of gem list: actionmailer (3.2.9, 3.2.8) actionpack (3.2.9, 3.2.8) activemodel (3.2.9, 3.2.8) activerecord (3.2.9, 3.2.8) activerecord-mysql2-adapter (0.0.3) activeresource (3.2.9, 3.2.8) activesupport (3.2.9, 3.2.8) arel (3.0.2) builder (3.0.0) bundler (1.2.1) coderay (1.0.8) erubis (2.7.0) fastercsv (1.5.5) hike (1.2.1) i18n (0.6.1) journey (1.0.4) jquery-rails (2.0.3) json (1.7.5) mail (2.4.4) metaclass (0.0.1) mime-types (1.19) mocha (0.12.3) multi_json (1.3.7) mysql (2.8.1) mysql2 (0.3.11) net-ldap (0.3.1) pg (0.14.1) polyglot (0.3.3) rack (1.4.1) rack-cache (1.2) rack-openid (1.3.1) rack-ssl (1.3.2) rack-test (0.6.2) rails (3.2.9, 3.2.8) railties (3.2.9, 3.2.8) rake (10.0.0) rdoc (3.12) rmagick (2.13.1) ruby-openid (2.1.8) shoulda (2.11.3) sprockets (2.2.1, 2.1.3) sqlite3 (1.3.6) thor (0.16.0) tilt (1.3.3) treetop (1.4.12) tzinfo (0.3.35) yard (0.8.3) Results of 'bundle show`: Gems included by the bundle: * actionmailer (3.2.8) * actionpack (3.2.8) * activemodel (3.2.8) * activerecord (3.2.8) * activeresource (3.2.8) * activesupport (3.2.8) * arel (3.0.2) * builder (3.0.0) * bundler (1.2.1) * coderay (1.0.8) * erubis (2.7.0) * fastercsv (1.5.5) * hike (1.2.1) * i18n (0.6.1) * journey 
(1.0.4) * jquery-rails (2.0.3) * json (1.7.5) * mail (2.4.4) * metaclass (0.0.1) * mime-types (1.19) * mocha (0.12.3) * multi_json (1.3.7) * mysql (2.8.1) * net-ldap (0.3.1) * pg (0.14.1) * polyglot (0.3.3) * rack (1.4.1) * rack-cache (1.2) * rack-openid (1.3.1) * rack-ssl (1.3.2) * rack-test (0.6.2) * rails (3.2.8) * railties (3.2.8) * rake (10.0.0) * rdoc (3.12) * rmagick (2.13.1) * ruby-openid (2.1.8) * shoulda (2.11.3) * sprockets (2.1.3) * sqlite3 (1.3.6) * thor (0.16.0) * tilt (1.3.3) * treetop (1.4.12) * tzinfo (0.3.35) * yard (0.8.3)
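The error message claims mysql2 "is not part of the bundle" even though the gem shows up in gem list, so the first thing worth confirming is what Bundler itself resolved. bundle show mysql2 or a plain grep over Gemfile and Gemfile.lock answers that; the sketch below is only an illustrative Python equivalent of that grep, with the paths taken from the post.

    from pathlib import Path

    # Paths taken from the question; adjust if Redmine is installed elsewhere.
    FILES = [Path("/usr/share/redmine/Gemfile"),
             Path("/usr/share/redmine/Gemfile.lock")]

    for path in FILES:
        print(f"== {path} ==")
        try:
            lines = path.read_text().splitlines()
        except OSError as exc:
            print(f"  could not read: {exc}")
            continue
        for lineno, line in enumerate(lines, start=1):
            if "mysql2" in line:
                print(f"  {lineno:4}: {line.strip()}")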

    Read the article

  • Set up a Linux box for secure local hosting A-Z

    - by microchasm
    I am in the process of reinstalling the OS on a machine that will be used to host a couple of apps for our business. The apps will be local only; access from external clients will be via vpn only. The prior setup used a hosting control panel (Plesk) for most of the admin, and I was looking at using another similar piece of software for the reinstall - but I figured I should finally learn how it all works. I can do most of the things the software would do for me, but am unclear on the symbiosis of it all. This is all an attempt to further distance myself from the land of Configuration Programmer/Programmer, if at all possible. I can't find a full walkthrough anywhere for what I'm looking for, so I thought I'd put up this question, and if people can help me on the way I will edit this with the answers, and document my progress/pitfalls. Hopefully someday this will help someone down the line. The details: CentOS 5.5 x86_64 httpd: Apache/2.2.3 mysql: 5.0.77 (to be upgraded) php: 5.1 (to be upgraded) The requirements: SECURITY!! Secure file transfer Secure client access (SSL Certs and CA) Secure data storage Virtualhosts/multiple subdomains Local email would be nice, but not critical The Steps: Download latest CentOS DVD-iso (torrent worked great for me). Install CentOS: While going through the install, I checked the Server Components option thinking I was going to be using another Plesk-like admin. In hindsight, considering I've decided to try to go my own way, this probably wasn't the best idea. Basic config: Setup users, networking/ip address etc. Yum update/upgrade. Upgrade PHP/MySQL: To upgrade PHP and MySQL to the latest versions, I had to look to another repo outside CentOS. IUS looks great and I'm happy I found it! Add IUS repository to our package manager cd /tmp wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm rpm -Uvh epel-release-1-1.ius.el5.noarch.rpm wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-4.ius.el5.noarch.rpm rpm -Uvh ius-release-1-4.ius.el5.noarch.rpm yum list | grep -w \.ius\. # list all the packages in the IUS repository; use this to find PHP/MySQL version and libraries you want to install Remove old version of PHP and install newer version from IUS rpm -qa | grep php # to list all of the installed php packages we want to remove yum shell # open an interactive yum shell remove php-common php-mysql php-cli #remove installed PHP components install php53 php53-mysql php53-cli php53-common #add packages you want transaction solve #important!! checks for dependencies transaction run #important!! does the actual installation of packages. [control+d] #exit yum shell php -v PHP 5.3.2 (cli) (built: Apr 6 2010 18:13:45) Upgrade MySQL from IUS repository /etc/init.d/mysqld stop rpm -qa | grep mysql # to see installed mysql packages yum shell remove mysql mysql-server #remove installed MySQL components install mysql51 mysql51-server mysql51-devel transaction solve #important!! checks for dependencies transaction run #important!! does the actual installation of packages. 
[control+d] #exit yum shell service mysqld start mysql -v Server version: 5.1.42-ius Distributed by The IUS Community Project Upgrade instructions courtesy of IUS wiki: http://wiki.iuscommunity.org/Doc/ClientUsageGuide Install rssh (restricted shell) to provide scp and sftp access, without allowing ssh login cd /tmp wget http://dag.wieers.com/rpm/packages/rssh/rssh-2.3.2-1.2.el5.rf.x86_64.rpm rpm -ivh rssh-2.3.2-1.2.el5.rf.x86_64.rpm useradd -m -d /home/dev -s /usr/bin/rssh dev passwd dev Edit /etc/rssh.conf to grant access to SFTP to rssh users. vi /etc/rssh.conf Uncomment or add: allowscp allowsftp This allows me to connect to the machine via SFTP protocol in Transmit (my FTP program of choice; I'm sure it's similar with other FTP apps). rssh instructions appropriated (with appreciation!) from http://www.cyberciti.biz/tips/linux-unix-restrict-shell-access-with-rssh.html Set up virtual interfaces ifconfig eth1:1 192.168.1.3 up #start up the virtual interface cd /etc/sysconfig/network-scripts/ cp ifcfg-eth1 ifcfg-eth1:1 #copy default script and match name to our virtual interface vi ifcfg-eth1:1 #modify eth1:1 script #ifcfg-eth1:1 | modify so it looks like this: DEVICE=eth1:1 IPADDR=192.168.1.3 NETMASK=255.255.255.0 NETWORK=192.168.1.0 ONBOOT=yes NAME=eth1:1 Add more Virtual interfaces as needed by repeating. Because of the ONBOOT=yes line in the ifcfg-eth1:1 file, this interface will be brought up when the system boots, or the network starts/restarts. service network restart Shutting down interface eth0: [ OK ] Shutting down interface eth1: [ OK ] Shutting down loopback interface: [ OK ] Bringing up loopback interface: [ OK ] Bringing up interface eth0: [ OK ] Bringing up interface eth1: [ OK ] ping 192.168.1.3 64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.105 ms Virtualhosts In the rssh section above I added a user to use for SFTP. In this users' home directory, I created a folder called 'https'. This is where the documents for this site will live, so I need to add a virtualhost that will point to it. I will use the above virtual interface for this site (herein called dev.site.local). vi /etc/http/conf/httpd.conf Add the following to the end of httpd.conf: <VirtualHost 192.168.1.3:80> ServerAdmin [email protected] DocumentRoot /home/dev/https ServerName dev.site.local ErrorLog /home/dev/logs/error_log TransferLog /home/dev/logs/access_log </VirtualHost> I put a dummy index.html file in the https directory just to check everything out. I tried browsing to it, and was met with permission denied errors. The logs only gave an obscure reference to what was going on: [Mon May 17 14:57:11 2010] [error] [client 192.168.1.100] (13)Permission denied: access to /index.html denied I tried chmod 777 et. al., but to no avail. Turns out, I needed to chmod+x the https directory and its' parent directories. chmod +x /home chmod +x /home/dev chmod +x /home/dev/https This solved that problem. DNS I'm handling DNS via our local Windows Server 2003 box. 
However, the CentOS documentation for BIND can be found here: http://www.centos.org/docs/5/html/Deployment_Guide-en-US/ch-bind.html SSL To get SSL working, I changed the following in httpd.conf: NameVirtualHost 192.168.1.3:443 #make sure this line is in httpd.conf <VirtualHost 192.168.1.3:443> #change port to 443 ServerAdmin [email protected] DocumentRoot /home/dev/https ServerName dev.site.local ErrorLog /home/dev/logs/error_log TransferLog /home/dev/logs/access_log </VirtualHost> Unfortunately, I keep getting (Error code: ssl_error_rx_record_too_long) errors when trying to access a page with SSL. As JamesHannah gracefully pointed out below, I had not set up the locations of the certs in httpd.conf, and thusly was getting the page thrown at the broswer as the cert making the browser balk. So first, I needed to set up a CA and make certificate files. I found a great (if old) walkthrough on the process here: http://www.debian-administration.org/articles/284. Here are the relevant steps I took from that article: mkdir /home/CA cd /home/CA/ mkdir newcerts private echo '01' > serial touch index.txt #this and the above command are for the database that will keep track of certs Create an openssl.cnf file in the /home/CA/ dir and edit it per the walkthrough linked above. (For reference, my finished openssl.cnf file looked like this: http://pastebin.com/raw.php?i=hnZDij4T) openssl req -new -x509 -extensions v3_ca -keyout private/cakey.pem -out cacert.pem -days 3650 -config ./openssl.cnf #this creates the cacert.pem which gets distributed and imported to the browser(s) Modified openssl.cnf again per walkthrough instructions. openssl req -new -nodes -out dev.req.pem -config ./openssl.cnf #generates certificate request, and key.pem which I renamed dev.key.pem. Modified openssl.cnf again per walkthrough instructions. openssl ca -out dev.cert.pem -config ./openssl.cnf -infiles dev.req.pem #create and sign certificate. cp dev.cert.pem /home/dev/certs/cert.pem cp dev.key.pem /home/certs/key.pem I updated httpd.conf to reflect the certs and turn SSLEngine on: NameVirtualHost 192.168.1.3:443 <VirtualHost 192.168.1.3:443> ServerAdmin [email protected] DocumentRoot /home/dev/https SSLEngine on SSLCertificateFile /home/dev/certs/cert.pem SSLCertificateKeyFile /home/dev/certs/key.pem ServerName dev.site.local ErrorLog /home/dev/logs/error_log TransferLog /home/dev/logs/access_log </VirtualHost> Put the CA cert.pem in a web-accessible place, and downloaded/imported it into my browser. Now I can visit https://dev.site.local with no errors or warnings. And this is where I'm at. I will keep editing this as I make progress. Any tips on how to configure SSL email would be appreciated.
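Once the CA certificate exists, the TLS setup can also be verified from any client without a browser. The following Python sketch is not part of the walkthrough; it reuses the names from it (dev.site.local and the cacert.pem generated in /home/CA/) and simply opens a TLS connection that trusts only that CA, then prints the served certificate's subject and validity dates. Adjust host, port, and CA path for your environment.

    import socket
    import ssl

    HOST = "dev.site.local"          # virtual host name from the walkthrough
    PORT = 443
    CA_FILE = "cacert.pem"           # CA certificate created in /home/CA/

    # Trust only the local CA, mirroring the cacert.pem import into the browser.
    context = ssl.create_default_context(cafile=CA_FILE)

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()
            print("TLS version:", tls.version())
            print("Subject    :", dict(item[0] for item in cert["subject"]))
            print("Not before :", cert["notBefore"])
            print("Not after  :", cert["notAfter"])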

    Read the article

  • How does the Windows 7 DNS client work?

    - by Mark Allison
    I am using a local DHCP and DNS server on my home network on a linux machine. It is running CentOS 6.3 with dnsmasq 2.48. It's all working fine except for local DNS lookups for Windows machines only. I have a mix of Ubuntu, CentOS and Windows machines on the network, some virtual, some physical. I have a machine called boron and the domain is called localdomain If I ping boron from any linux machine, I get [root@lithium lists]# ping -c3 boron PING boron.localdomain (10.0.0.5) 56(84) bytes of data. 64 bytes from boron.localdomain (10.0.0.5): icmp_seq=1 ttl=64 time=0.740 ms 64 bytes from boron.localdomain (10.0.0.5): icmp_seq=2 ttl=64 time=0.478 ms 64 bytes from boron.localdomain (10.0.0.5): icmp_seq=3 ttl=64 time=0.458 ms --- boron.localdomain ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2000ms rtt min/avg/max/mdev = 0.458/0.558/0.740/0.131 ms If I do it from my Windows 7 machine, I get: Ping request could not find host boron. Please check the name and try again. If I try ping boron.localdomain I get: Pinging boron.localdomain [67.215.65.132] with 32 bytes of data: Reply from 67.215.65.132: bytes=32 time=16ms TTL=57 Reply from 67.215.65.132: bytes=32 time=188ms TTL=57 Reply from 67.215.65.132: bytes=32 time=15ms TTL=57 Reply from 67.215.65.132: bytes=32 time=14ms TTL=57 Ping statistics for 67.215.65.132: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 14ms, Maximum = 188ms, Average = 58ms which is clearly wrong. Why is it going out to the internet? Why can't my windows machine resolve the boron hostname to a FQDN? My Windows machines and linux machines get their network config from DHCP. UPDATE If I do ipconfig /all in Windows, it looks as I would expect: Windows IP Configuration Host Name . . . . . . . . . . . . : lanthanum Primary Dns Suffix . . . . . . . : Node Type . . . . . . . . . . . . : Hybrid IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No DNS Suffix Search List. . . . . . : .localdomain Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : .localdomain Description . . . . . . . . . . . : Realtek PCIe GBE Family Controller Physical Address. . . . . . . . . : 50-E5-49-38-FC-A2 DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes IPv4 Address. . . . . . . . . . . : 10.0.0.57(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Lease Obtained. . . . . . . . . . : 23 August 2012 13:58:45 Lease Expires . . . . . . . . . . : 24 August 2012 07:58:48 Default Gateway . . . . . . . . . : 10.0.0.6 DHCP Server . . . . . . . . . . . : 10.0.0.6 DNS Servers . . . . . . . . . . . : 10.0.0.6 208.67.222.222 208.67.220.220 NetBIOS over Tcpip. . . . . . . . 
: Enabled When I do an nslookup I get: Server: carbon.localdomain Address: 10.0.0.6 *** carbon.localdomain can't find boron: Unspecified error However if I do ifconfig -a in Linux I get: [root@nitrogen ~]# ifconfig -a eth0 Link encap:Ethernet HWaddr 00:0C:29:AF:EC:2A inet addr:10.0.0.7 Bcast:10.0.0.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:187687 errors:0 dropped:0 overruns:0 frame:0 TX packets:5857 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:23910700 (22.8 MiB) TX bytes:712964 (696.2 KiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:329894 errors:0 dropped:0 overruns:0 frame:0 TX packets:329894 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:67153143 (64.0 MiB) TX bytes:67153143 (64.0 MiB) and nslookup: [root@nitrogen ~]# nslookup boron Server: 10.0.0.6 Address: 10.0.0.6#53 Name: boron Address: 10.0.0.5 Both machines are on the same network using the same DHCP server. UPDATE 2 I thought the issue was resolved but I am getting intermittent DNS resolving issues but only on my Windows 7 machine. All my linux boxes are fine. This is what happens when I ping and nslookup from Windows to a Windows 2008 Server: C:\Users\mark>nslookup magnesium Server: carbon.localdomain Address: 10.0.0.6 Name: magnesium.localdomain Address: 10.0.0.12 C:\Users\mark>ping magnesium Pinging magnesium.localdomain [67.215.65.132] with 32 bytes of data: Reply from 67.215.65.132: bytes=32 time=267ms TTL=57 Reply from 67.215.65.132: bytes=32 time=162ms TTL=57 Reply from 67.215.65.132: bytes=32 time=510ms TTL=57 Reply from 67.215.65.132: bytes=32 time=146ms TTL=57 Ping statistics for 67.215.65.132: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 146ms, Maximum = 510ms, Average = 271ms And from Linux: [root@beryllium ~]# ping -c4 magnesium PING magnesium.localdomain (10.0.0.12) 56(84) bytes of data. 64 bytes from magnesium.localdomain (10.0.0.12): icmp_seq=1 ttl=128 time=0.176 ms 64 bytes from magnesium.localdomain (10.0.0.12): icmp_seq=2 ttl=128 time=0.634 ms 64 bytes from magnesium.localdomain (10.0.0.12): icmp_seq=3 ttl=128 time=0.685 ms 64 bytes from magnesium.localdomain (10.0.0.12): icmp_seq=4 ttl=128 time=0.263 ms --- magnesium.localdomain ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3002ms rtt min/avg/max/mdev = 0.176/0.439/0.685/0.223 ms [root@beryllium ~]# nslookup magnesium Server: 10.0.0.6 Address: 10.0.0.6#53 Name: magnesium.localdomain Address: 10.0.0.12 UPDATE 3 I stopped the Windows DNS client on my Windows 7 machine with net stop dnscache and it is now working fine. It would be nice to get DNS working with the DNS client on, but I might be OK without it, what do you think?
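One more way to compare the two resolvers, independent of ping and nslookup, is to ask the operating system's resolver directly from a script. The sketch below is Python and is not from the original post; run on the Windows box it shows what the short names and the fully qualified names resolve to, which makes it obvious when an answer such as 67.215.65.132 is coming from an upstream server (the OpenDNS addresses listed by ipconfig) rather than from the local dnsmasq at 10.0.0.6.

    import socket

    # Hostnames taken from the question; the short forms depend on the
    # DNS suffix search list, the long forms are fully qualified.
    NAMES = ["boron", "boron.localdomain",
             "magnesium", "magnesium.localdomain"]

    for name in NAMES:
        try:
            infos = socket.getaddrinfo(name, None, family=socket.AF_INET)
            addresses = sorted({info[4][0] for info in infos})
            print(f"{name:25} -> {', '.join(addresses)}")
        except socket.gaierror as exc:
            print(f"{name:25} -> lookup failed: {exc}")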

    Read the article

  • mysqld crashes on any statement

    - by ??iu
    I restarted my slave to change configuration settings to skip reverse hostname lookup on connecting and to enable the slow query log. I edited /etc/my.cnf making only these changes, then restarted mysqld with /etc/init.d/mysql restart All appeared to be well but when I connect to msyqld remotely or locally though it connects okay a slight problem is that mysqld crashes whenever you try to issue any kind of statement. The client looks like: Reading table information for completion of table and column names You can turn off this feature to get a quicker startup with -A Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 3 Server version: 5.1.31-1ubuntu2-log Type 'help;' or '\h' for help. Type '\c' to clear the buffer. mysql> show tables; ERROR 2006 (HY000): MySQL server has gone away No connection. Trying to reconnect... Connection id: 1 Current database: mydb ERROR 2006 (HY000): MySQL server has gone away No connection. Trying to reconnect... ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61) ERROR: Can't connect to the server ERROR 2006 (HY000): MySQL server has gone away No connection. Trying to reconnect... ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61) ERROR: Can't connect to the server ERROR 2006 (HY000): MySQL server has gone away Bus error The mysqld error log looks like: 101210 16:35:51 InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY). 101210 16:35:51 InnoDB: Assertion failure in thread 140245598570832 in file handler/ha_innodb.cc line 2595 InnoDB: Failing assertion: error == DB_SUCCESS InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: about forcing recovery. 101210 16:35:51 - mysqld got signal 6 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=3 max_threads=600 threads_connected=3 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x18209220 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 0x7f8d791580d0 thread_stack 0x20000 /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89] /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03] /lib/libpthread.so.0 [0x7f902a76a080] /lib/libc.so.6(gsignal+0x35) [0x7f90291f8fb5] /lib/libc.so.6(abort+0x183) [0x7f90291fabc3] /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b] /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f] /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a] /usr/sbin/mysqld [0x63f281] /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16] /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb] /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e] /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292] /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d] /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8] /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426] /lib/libpthread.so.0 [0x7f902a7623ba] /lib/libc.so.6(clone+0x6d) [0x7f90292abfcd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort... thd->query at 0x18213c70 = thd->thread_id=3 thd->killed=NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. 101210 16:35:51 mysqld_safe Number of processes running now: 0 101210 16:35:51 mysqld_safe mysqld restarted InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 101210 16:35:54 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 101210 16:35:56 InnoDB: Started; log sequence number 456 143528628 101210 16:35:56 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode. 101210 16:35:56 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem. 101210 16:35:56 [Note] Event Scheduler: Loaded 0 events 101210 16:35:56 [Note] /usr/sbin/mysqld: ready for connections. Version: '5.1.31-1ubuntu2-log' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) 101210 16:36:11 InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY). 101210 16:36:11 InnoDB: Assertion failure in thread 139955151501648 in file handler/ha_innodb.cc line 2595 InnoDB: Failing assertion: error == DB_SUCCESS InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: about forcing recovery. 101210 16:36:11 - mysqld got signal 6 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. 
This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=1 max_threads=600 threads_connected=1 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x18588720 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... stack_bottom = 0x7f49d916f0d0 thread_stack 0x20000 /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89] /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03] /lib/libpthread.so.0 [0x7f4c8a73f080] /lib/libc.so.6(gsignal+0x35) [0x7f4c891cdfb5] /lib/libc.so.6(abort+0x183) [0x7f4c891cfbc3] /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b] /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f] /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a] /usr/sbin/mysqld [0x63f281] /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16] /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb] /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e] /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292] /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d] /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8] /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426] /lib/libpthread.so.0 [0x7f4c8a7373ba] /lib/libc.so.6(clone+0x6d) [0x7f4c89280fcd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort... thd->query at 0x18599950 = thd->thread_id=1 thd->killed=NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. 101210 16:36:11 mysqld_safe Number of processes running now: 0 101210 16:36:11 mysqld_safe mysqld restarted The config is [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] innodb_file_per_table innodb_buffer_pool_size=10G innodb_log_buffer_size=4M innodb_flush_log_at_trx_commit=2 innodb_thread_concurrency=8 skip-slave-start server-id=3 # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /DB2/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. 
#bind-address = 127.0.0.1 # # * Fine Tuning # key_buffer = 16M max_allowed_packet = 16M thread_stack = 128K thread_cache_size = 8 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP max_connections = 600 #table_cache = 64 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M query_cache_size = 32M # skip-federated slow-query-log skip-name-resolve Update: I followed the instructions as per http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html and set innodb_force_recovery = 4 and the logs are showing a different error but the behavior is still the same: 101210 19:14:15 mysqld_safe mysqld restarted 101210 19:14:19 InnoDB: Started; log sequence number 456 143528628 InnoDB: !!! innodb_force_recovery is set to 4 !!! 101210 19:14:19 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode. 101210 19:14:19 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem. 101210 19:14:19 [Note] Event Scheduler: Loaded 0 events 101210 19:14:19 [Note] /usr/sbin/mysqld: ready for connections. Version: '5.1.31-1ubuntu2-log' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) 101210 19:14:32 InnoDB: error: space object of table mydb/__twitter_friend, InnoDB: space id 1602 did not exist in memory. Retrying an open. 101210 19:14:32 InnoDB: error: space object of table mydb/access_request, InnoDB: space id 1318 did not exist in memory. Retrying an open. 101210 19:14:32 InnoDB: error: space object of table mydb/activity, InnoDB: space id 1595 did not exist in memory. Retrying an open. 101210 19:14:32 - mysqld got signal 11 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=1 max_threads=600 threads_connected=1 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x1753c070 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 0x7f7a0b5800d0 thread_stack 0x20000 /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89] /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03] /lib/libpthread.so.0 [0x7f7cbc350080] /usr/sbin/mysqld(ha_innobase::innobase_get_index(unsigned int)+0x46) [0x77c516] /usr/sbin/mysqld(ha_innobase::innobase_initialize_autoinc()+0x40) [0x77c640] /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x3f3) [0x781f23] /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f] /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a] /usr/sbin/mysqld [0x63f281] /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16] /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb] /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e] /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292] /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d] /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8] /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426] /lib/libpthread.so.0 [0x7f7cbc3483ba] /lib/libc.so.6(clone+0x6d) [0x7f7cbae91fcd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort... thd->query at 0x1754d690 = thd->thread_id=1 thd->killed=NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash.

    Read the article

  • A free standing ASP.NET Pager Web Control

    - by Rick Strahl
    Paging in ASP.NET has been relatively easy with stock controls supporting basic paging functionality. However, recently I built an MVC application and one of the things I ran into was that I HAD TO build manual paging support into a few of my pages. Dealing with list controls and rendering markup is easy enough, but doing paging is a little more involved. I ended up with a small but flexible component that can be dropped anywhere. As it turns out the task of creating a semi-generic Pager control for MVC was fairly easily. Now I’m back to working in Web Forms and thought to myself that the way I created the pager in MVC actually would also work in ASP.NET – in fact quite a bit easier since the whole thing can be conveniently wrapped up into an easily reusable control. A standalone pager would provider easier reuse in various pages and a more consistent pager display regardless of what kind of 'control’ the pager is associated with. Why a Pager Control? At first blush it might sound silly to create a new pager control – after all Web Forms has pretty decent paging support, doesn’t it? Well, sort of. Yes the GridView control has automatic paging built in and the ListView control has the related DataPager control. The built in ASP.NET paging has several issues though: Postback and JavaScript requirements If you look at paging links in ASP.NET they are always postback links with javascript:__doPostback() calls that go back to the server. While that works fine and actually has some benefit like the fact that paging saves changes to the page and post them back, it’s not very SEO friendly. Basically if you use javascript based navigation nosearch engine will follow the paging links which effectively cuts off list content on the first page. The DataPager control does support GET based links via the QueryStringParameter property, but the control is effectively tied to the ListView control (which is the only control that implements IPageableItemContainer). DataSource Controls required for Efficient Data Paging Retrieval The only way you can get paging to work efficiently where only the few records you display on the page are queried for and retrieved from the database you have to use a DataSource control - only the Linq and Entity DataSource controls  support this natively. While you can retrieve this data yourself manually, there’s no way to just assign the page number and render the pager based on this custom subset. Other than that default paging requires a full resultset for ASP.NET to filter the data and display only a subset which can be very resource intensive and wasteful if you’re dealing with largish resultsets (although I’m a firm believer in returning actually usable sets :-}). If you use your own business layer that doesn’t fit an ObjectDataSource you’re SOL. That’s a real shame too because with LINQ based querying it’s real easy to retrieve a subset of data that is just the data you want to display but the native Pager functionality doesn’t support just setting properties to display just the subset AFAIK. DataPager is not Free Standing The DataPager control is the closest thing to a decent Pager implementation that ASP.NET has, but alas it’s not a free standing component – it works off a related control and the only one that it effectively supports from the stock ASP.NET controls is the ListView control. This means you can’t use the same data pager formatting for a grid and a list view or vice versa and you’re always tied to the control. 
Paging Events In order to handle paging you have to deal with paging events. The events fire at specific time instances in the page pipeline and because of this you often have to handle data binding in a way to work around the paging events or else end up double binding your data sources based on paging. Yuk. Styling The GridView pager is a royal pain to beat into submission for styled rendering. The DataPager control has many more options and template layout and it renders somewhat cleaner, but it too is not exactly easy to get a decent display for. Not a Generic Solution The problem with the ASP.NET controls too is that it’s not generic. GridView, DataGrid use their own internal paging, ListView can use a DataPager and if you want to manually create data layout – well you’re on your own. IOW, depending on what you use you likely have very different looking Paging experiences. So, I figured I’ve struggled with this once too many and finally sat down and built a Pager control. The Pager Control My goal was to create a totally free standing control that has no dependencies on other controls and certainly no requirements for using DataSource controls. The idea is that you should be able to use this pager control without any sort of data requirements at all – you should just be able to set properties and be able to display a pager. The Pager control I ended up with has the following features: Completely free standing Pager control – no control or data dependencies Complete manual control – Pager can render without any data dependency Easy to use: Only need to set PageSize, ActivePage and TotalItems Supports optional filtering of IQueryable for efficient queries and Pager rendering Supports optional full set filtering of IEnumerable<T> and DataTable Page links are plain HTTP GET href Links Control automatically picks up Page links on the URL and assigns them (automatic page detection no page index changing events to hookup) Full CSS Styling support On the downside there’s no templating support for the control so the layout of the pager is relatively fixed. All elements however are stylable and there are options to control the text, and layout options such as whether to display first and last pages and the previous/next buttons and so on. To give you an idea what the pager looks like, here are two differently styled examples (all via CSS):   The markup for these two pagers looks like this: <ww:Pager runat="server" id="ItemPager" PageSize="5" PageLinkCssClass="gridpagerbutton" SelectedPageCssClass="gridpagerbutton-selected" PagesTextCssClass="gridpagertext" CssClass="gridpager" RenderContainerDiv="true" ContainerDivCssClass="gridpagercontainer" MaxPagesToDisplay="6" PagesText="Item Pages:" NextText="next" PreviousText="previous" /> <ww:Pager runat="server" id="ItemPager2" PageSize="5" RenderContainerDiv="true" MaxPagesToDisplay="6" /> The latter example uses default style settings so it there’s not much to set. The first example on the other hand explicitly assigns custom styles and overrides a few of the formatting options. Styling The styling is based on a number of CSS classes of which the the main pager, pagerbutton and pagerbutton-selected classes are the important ones. Other styles like pagerbutton-next/prev/first/last are based on the pagerbutton style. 
The default styling shown for the red outlined pager looks like this: .pagercontainer { margin: 20px 0; background: whitesmoke; padding: 5px; } .pager { float: right; font-size: 10pt; text-align: left; } .pagerbutton,.pagerbutton-selected,.pagertext { display: block; float: left; text-align: center; border: solid 2px maroon; min-width: 18px; margin-left: 3px; text-decoration: none; padding: 4px; } .pagerbutton-selected { font-size: 130%; font-weight: bold; color: maroon; border-width: 0px; background: khaki; } .pagerbutton-first { margin-right: 12px; } .pagerbutton-last,.pagerbutton-prev { margin-left: 12px; } .pagertext { border: none; margin-left: 30px; font-weight: bold; } .pagerbutton a { text-decoration: none; } .pagerbutton:hover { background-color: maroon; color: cornsilk; } .pagerbutton-prev { background-image: url(images/prev.png); background-position: 2px center; background-repeat: no-repeat; width: 35px; padding-left: 20px; } .pagerbutton-next { background-image: url(images/next.png); background-position: 40px center; background-repeat: no-repeat; width: 35px; padding-right: 20px; margin-right: 0px; } Yup that’s a lot of styling settings although not all of them are required. The key ones are pagerbutton, pager and pager selection. The others (which are implicitly created by the control based on the pagerbutton style) are for custom markup of the ‘special’ buttons. In my apps I tend to have two kinds of pages: Those that are associated with typical ‘grid’ displays that display purely tabular data and those that have a more looser list like layout. The two pagers shown above represent these two views and the pager and gridpager styles in my standard style sheet reflect these two styles. Configuring the Pager with Code Finally lets look at what it takes to hook up the pager. As mentioned in the highlights the Pager control is completely independent of other controls so if you just want to display a pager on its own it’s as simple as dropping the control and assigning the PageSize, ActivePage and either TotalPages or TotalItems. So for this markup: <ww:Pager runat="server" id="ItemPagerManual" PageSize="5" MaxPagesToDisplay="6" /> I can use code as simple as: ItemPagerManual.PageSize = 3; ItemPagerManual.ActivePage = 4;ItemPagerManual.TotalItems = 20; Note that ActivePage is not required - it will automatically use any Page=x query string value and assign it, although you can override it as I did above. TotalItems can be any value that you retrieve from a result set or manually assign as I did above. A more realistic scenario based on a LINQ to SQL IQueryable result is even easier. In this example, I have a UserControl that contains a ListView control that renders IQueryable data. I use a User Control here because there are different views the user can choose from with each view being a different user control. This incidentally also highlights one of the nice features of the pager: Because the pager is independent of the control I can put the pager on the host page instead of into each of the user controls. IOW, there’s only one Pager control, but there are potentially many user controls/listviews that hold the actual display data. The following code demonstrates how to use the Pager with an IQueryable that loads only the records it displays: protected voidPage_Load(objectsender, EventArgs e) {     Category = Request.Params["Category"] ?? 
string.Empty;     IQueryable<wws_Item> ItemList = ItemRepository.GetItemsByCategory(Category);     // Update the page and filter the list down     ItemList = ItemPager.FilterIQueryable<wws_Item>(ItemList); // Render user control with a list view Control ulItemList = LoadControl("~/usercontrols/" + App.Configuration.ItemListType + ".ascx"); ((IInventoryItemListControl)ulItemList).InventoryItemList = ItemList; phItemList.Controls.Add(ulItemList); // placeholder } The code uses a business object to retrieve Items by category as an IQueryable which means that the result is only an expression tree that hasn’t execute SQL yet and can be further filtered. I then pass this IQueryable to the FilterIQueryable() helper method of the control which does two main things: Filters the IQueryable to retrieve only the data displayed on the active page Sets the Totaltems property and calculates TotalPages on the Pager and that’s it! When the Pager renders it uses those values, plus the PageSize and ActivePage properties to render the Pager. In addition to IQueryable there are also filter methods for IEnumerable<T> and DataTable, but these versions just filter the data by removing rows/items from the entire already retrieved data. Output Generated and Paging Links The output generated creates pager links as plain href links. Here’s what the output looks like: <div id="ItemPager" class="pagercontainer"> <div class="pager"> <span class="pagertext">Pages: </span><a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=1" class="pagerbutton" />1</a> <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=2" class="pagerbutton" />2</a> <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=3" class="pagerbutton" />3</a> <span class="pagerbutton-selected">4</span> <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=5" class="pagerbutton" />5</a> <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=6" class="pagerbutton" />6</a> <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=20" class="pagerbutton pagerbutton-last" />20</a>&nbsp;<a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=3" class="pagerbutton pagerbutton-prev" />Prev</a>&nbsp;<a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=5" class="pagerbutton pagerbutton-next" />Next</a></div> <br clear="all" /> </div> </div> The links point back to the current page and simply append a Page= page link into the page. When the page gets reloaded with the new page number the pager automatically detects the page number and automatically assigns the ActivePage property which results in the appropriate page to be displayed. The code shown in the previous section is all that’s needed to handle paging. Note that HTTP GET based paging is different than the Postback paging ASP.NET uses by default. Postback paging preserves modified page content when clicking on pager buttons, but this control will simply load a new page – no page preservation at this time. The advantage of not using Postback paging is that the URLs generated are plain HTML links that a search engine can follow where __doPostback() links are not. Pager with a Grid The pager also works in combination with grid controls so it’s easy to bypass the grid control’s paging features if desired. In the following example I use a gridView control and binds it to a DataTable result which is also filterable by the Pager control. 
The very basic plain vanilla ASP.NET grid markup looks like this: <div style="width: 600px; margin: 0 auto;padding: 20px; "> <asp:DataGrid runat="server" AutoGenerateColumns="True" ID="gdItems" CssClass="blackborder" style="width: 600px;"> <AlternatingItemStyle CssClass="gridalternate" /> <HeaderStyle CssClass="gridheader" /> </asp:DataGrid> <ww:Pager runat="server" ID="Pager" CssClass="gridpager" ContainerDivCssClass="gridpagercontainer" PageLinkCssClass="gridpagerbutton" SelectedPageCssClass="gridpagerbutton-selected" PageSize="8" RenderContainerDiv="true" MaxPagesToDisplay="6" /> </div> and looks like this when rendered: using custom set of CSS styles. The code behind for this code is also very simple: protected void Page_Load(object sender, EventArgs e) { string category = Request.Params["category"] ?? ""; busItem itemRep = WebStoreFactory.GetItem(); var items = itemRep.GetItemsByCategory(category) .Select(itm => new {Sku = itm.Sku, Description = itm.Description}); // run query into a DataTable for demonstration DataTable dt = itemRep.Converter.ToDataTable(items,"TItems"); // Remove all items not on the current page dt = Pager.FilterDataTable(dt,0); // bind and display gdItems.DataSource = dt; gdItems.DataBind(); } A little contrived I suppose since the list could already be bound from the list of elements, but this is to demonstrate that you can also bind against a DataTable if your business layer returns those. Unfortunately there’s no way to filter a DataReader as it’s a one way forward only reader and the reader is required by the DataSource to perform the bindings.  However, you can still use a DataReader as long as your business logic filters the data prior to rendering and provides a total item count (most likely as a second query). Control Creation The control itself is a pretty brute force ASP.NET control. Nothing clever about this other than some basic rendering logic and some simple calculations and update routines to determine which buttons need to be shown. You can take a look at the full code from the West Wind Web Toolkit’s Repository (note there are a few dependencies). 
To give you an idea how the control works here is the Render() method: /// <summary> /// overridden to handle custom pager rendering for runtime and design time /// </summary> /// <param name="writer"></param> protected override void Render(HtmlTextWriter writer) { base.Render(writer); if (TotalPages == 0 && TotalItems > 0) TotalPages = CalculateTotalPagesFromTotalItems(); if (DesignMode) TotalPages = 10; // don't render pager if there's only one page if (TotalPages < 2) return; if (RenderContainerDiv) { if (!string.IsNullOrEmpty(ContainerDivCssClass)) writer.AddAttribute("class", ContainerDivCssClass); writer.RenderBeginTag("div"); } // main pager wrapper writer.WriteBeginTag("div"); writer.AddAttribute("id", this.ClientID); if (!string.IsNullOrEmpty(CssClass)) writer.WriteAttribute("class", this.CssClass); writer.Write(HtmlTextWriter.TagRightChar + "\r\n"); // Pages Text writer.WriteBeginTag("span"); if (!string.IsNullOrEmpty(PagesTextCssClass)) writer.WriteAttribute("class", PagesTextCssClass); writer.Write(HtmlTextWriter.TagRightChar); writer.Write(this.PagesText); writer.WriteEndTag("span"); // if the base url is empty use the current URL FixupBaseUrl(); // set _startPage and _endPage ConfigurePagesToRender(); // write out first page link if (ShowFirstAndLastPageLinks && _startPage != 1) { writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (1).ToString()); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-first"); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write("1"); writer.WriteEndTag("a"); writer.Write("&nbsp;"); } // write out all the page links for (int i = _startPage; i < _endPage + 1; i++) { if (i == ActivePage) { writer.WriteBeginTag("span"); if (!string.IsNullOrEmpty(SelectedPageCssClass)) writer.WriteAttribute("class", SelectedPageCssClass); writer.Write(HtmlTextWriter.TagRightChar); writer.Write(i.ToString()); writer.WriteEndTag("span"); } else { writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, i.ToString()).TrimEnd('&'); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write(i.ToString()); writer.WriteEndTag("a"); } writer.Write("\r\n"); } // write out last page link if (ShowFirstAndLastPageLinks && _endPage < TotalPages) { writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, TotalPages.ToString()); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-last"); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write(TotalPages.ToString()); writer.WriteEndTag("a"); } // Previous link if (ShowPreviousNextLinks && !string.IsNullOrEmpty(PreviousText) && ActivePage > 1) { writer.Write("&nbsp;"); writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (ActivePage - 1).ToString()); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-prev"); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write(PreviousText); writer.WriteEndTag("a"); } // Next link if (ShowPreviousNextLinks && 
        !string.IsNullOrEmpty(NextText) && ActivePage < TotalPages)
    {
        writer.Write("&nbsp;");
        writer.WriteBeginTag("a");
        string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (ActivePage + 1).ToString());
        writer.WriteAttribute("href", pageUrl);
        if (!string.IsNullOrEmpty(PageLinkCssClass))
            writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-next");
        writer.Write(HtmlTextWriter.SelfClosingTagEnd);
        writer.Write(NextText);
        writer.WriteEndTag("a");
    }

    writer.WriteEndTag("div");

    if (RenderContainerDiv)
    {
        if (RenderContainerDivBreak)
            writer.Write("<br clear=\"all\" />\r\n");
        writer.WriteEndTag("div");
    }
}

As I said, it's pretty much brute-force rendering based on the control's property settings, of which there are quite a few. You can also see the pager in the designer above. Unfortunately the VS designer (both 2010 and 2008) fails to render the float: left CSS styles properly and starts wrapping after margins are applied to the special buttons. Not a big deal, since VS does at least respect the spacing (the floated elements overlay). Then again, I'm not using the designer anyway :-}.

Filtering Data

What makes the Pager easy to use is the filter methods built into the control. While this functionality is clearly not the most politically correct design choice, as it violates separation of concerns, it's very useful for typical pager operation. While I actually have filter methods that do something similar in my business layer, having them exposed on the control makes the control a lot more useful for typical databinding scenarios. Of course these methods are optional - if you have a business layer that can provide filtered page queries for you, you can use that instead and assign the TotalItems property manually.

There are three filter method types available - for IQueryable, IEnumerable and DataTable - which tend to be the most common use cases in my apps, old and new. The IQueryable version is pretty simple, as it can rely on .Skip() and .Take() with LINQ:

/// <summary>
/// Queries the database for the ActivePage applied manually
/// or from the Request["page"] variable. This routine
/// figures out and sets TotalPages, ActivePage and
/// returns a filtered subset IQueryable that contains
/// only the items from the ActivePage.
/// </summary>
/// <param name="query"></param>
/// <param name="activePage">
/// The page you want to display. Sets the ActivePage property when passed.
/// Pass 0 or smaller to use ActivePage setting.
/// </param>
/// <returns></returns>
public IQueryable<T> FilterIQueryable<T>(IQueryable<T> query, int activePage)
    where T : class, new()
{
    ActivePage = activePage < 1 ? ActivePage : activePage;
    if (ActivePage < 1)
        ActivePage = 1;

    TotalItems = query.Count();

    if (TotalItems <= PageSize)
    {
        ActivePage = 1;
        TotalPages = 1;
        return query;
    }

    int skip = ActivePage - 1;
    if (skip > 0)
        query = query.Skip(skip * PageSize);

    _TotalPages = CalculateTotalPagesFromTotalItems();

    return query.Take(PageSize);
}
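As a quick illustration (not from the original post), a page could hand the pager an unfiltered query and let FilterIQueryable() do the paging. The sketch below assumes the repository's GetItemsByCategory() returns IQueryable<Item> for a hypothetical Item entity class that satisfies the where T : class, new() constraint:

protected void Page_Load(object sender, EventArgs e)
{
    string category = Request.Params["category"] ?? "";

    busItem itemRep = WebStoreFactory.GetItem();

    // Hypothetical: the repository returns an unfiltered IQueryable<Item>
    IQueryable<Item> query = itemRep.GetItemsByCategory(category);

    // Pass 0 so the control uses its ActivePage setting (which it resolves
    // from Request["page"]). TotalItems and TotalPages are set as a side
    // effect, and only one page worth of rows is pulled when the query runs.
    var pageOfItems = Pager.FilterIQueryable(query, 0).ToList();

    gdItems.DataSource = pageOfItems;
    gdItems.DataBind();
}

This mirrors the DataTable example earlier, but the Skip()/Take() calls keep the database query limited to a single page instead of trimming rows after the fact.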
The IEnumerable<T> version simply converts the IEnumerable to an IQueryable and calls back into this method for filtering. The DataTable version requires a little more work to manually parse and filter records (I didn't want to add the LINQ DataSetExtensions assembly just for this):

/// <summary>
/// Filters a data table for an ActivePage.
///
/// Note: Modifies the data set permanently by removing DataRows.
/// </summary>
/// <param name="dt">Full result DataTable</param>
/// <param name="activePage">Page to display. 0 to use ActivePage property</param>
/// <returns></returns>
public DataTable FilterDataTable(DataTable dt, int activePage)
{
    ActivePage = activePage < 1 ? ActivePage : activePage;
    if (ActivePage < 1)
        ActivePage = 1;

    TotalItems = dt.Rows.Count;

    if (TotalItems <= PageSize)
    {
        ActivePage = 1;
        TotalPages = 1;
        return dt;
    }

    int skip = ActivePage - 1;
    if (skip > 0)
    {
        for (int i = 0; i < skip * PageSize; i++)
            dt.Rows.RemoveAt(0);
    }

    while (dt.Rows.Count > PageSize)
        dt.Rows.RemoveAt(PageSize);

    return dt;
}

Using the Pager Control

The pager as it stands is a first cut that I built a couple of weeks ago and have since been tweaking a little as part of an internal project I'm working on. I've replaced a bunch of pagers on various older pages with this pager without any issues and now have what feels like a more consistent user interface, where paging looks and feels the same across different controls. As a bonus, I'm only loading the data from the database that I need to display a single page. With the preset class tags applied, adding a pager is now as easy as dropping the control and adding the stylesheet so the styling stays consistent - no fuss, no muss. Schweet. Hopefully some of you may find this as useful as I have, or at least as a baseline to build on top of…

Resources

The Pager is part of the West Wind Web & Ajax Toolkit
Pager.cs Source Code (some toolkit dependencies)
Westwind.css base stylesheet with .pager and .gridpager styles
Pager Example Page

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET  

    Read the article

  • SOA PARTNER COMMUNITY NEWSLETTER JULY 2012

    - by mseika
SOA PARTNER COMMUNITY NEWSLETTER JULY 2012

Dear SOA partner community member,

To provide our community members the best of our knowledge, we want your feedback on our SOA Partner Community, so we are organizing the SOA Partner Community Survey 2012. We ask you to participate in the survey and give your valuable feedback on various areas of marketing, sales and education. To continue our successful BPM Suite, Oracle is launching the Process Accelerators initiative together with you. It's your opportunity to co-develop and market predefined processes. Oracle Fusion Applications Design Patterns are a great tool to develop your SOA or BPM solution or process accelerators. To promote your SOA & BPM Specialization we continue to offer several benefits. This month we would like to highlight our Specialization Plaques - make sure you request one for your office! Our Fusion Middleware Summer Camps are booked out; if you could not get a seat, you can attend the SOA & BPM track at Virtual Developer Day: Oracle Fusion Development. The Oracle demo systems offer two new demos: Business Driven Development based on BPM Suite & SOA Lifecycle Management.

Jürgen Kress, Oracle SOA & BPM Partner Adoption EMEA

NEW CONTENT
Community Survey
Process Accelerators Kit
Plaques SOA & BPM Specialized
SOA & BPM at Virtual Developer Day
News from our Partners & Community
Overview of SOA Diagnostics in 11.1.1.6
Business driven development (BDD) demo now available!
SOA Lifecycle Management
Oracle Fusion applications design patterns
Updated material by Oracle

Connect and Network
SOA Blogs
SOA on Facebook
SOA on LinkedIn
SOA on Twitter
Mix SOA Forum

COMMUNITY SURVEY
Like every year, we would like to get your feedback in our SOA Partner Community Survey 2012. Make sure that you attend to further develop our community and support our planning! It is key for us to get your feedback to prepare for the next fiscal year. Back to top

PROCESS ACCELERATORS KIT
Oracle is very interested to co-develop and market pre-defined processes for BPM Suite with you, our partners. I am very happy to announce a new program called "Oracle BPM Partner Solution Catalog". This program will provide a one-stop shop for our customers looking for Oracle BPM partner solutions available in the market today. The Oracle BPM Solution Catalog will be hosted on our very popular Oracle Technology Network (OTN). To give you an idea of the scale of customer visibility, OTN today receives over 1 million hits per day from our business and developer community. We would like to invite you to list your Oracle BPM 11g solutions available today. In order to participate in this program, you need to do the following: Fill in the attached slide templates - #3 and #4 for each Oracle BPM 11g solution you would like to list on OTN. Please add links to whitepapers, videos and references for the specific solution in the template slide. We recommend that you create a landing page on your website for these linked artifacts and just point to it from within the PowerPoint template. This will give you the flexibility to update the information as frequently as needed. If you have the particular solution in production or a reference available, please list them as well. Send the PowerPoint template slides (1 set of slides for each Oracle BPM solution) to [email protected]. In addition to having the opportunity to list your solutions on OTN for Oracle customers, you will have the chance to advertise your new wins/implementations/solutions in an Oracle-sponsored PM Webinar held every quarter. 
This program is targeted to go live by the end of summer 2012. At this point we are targeting a soft launch at the end of July 2012, so send in your BPM solution information as soon as possible. We would love to have your solution(s) listed in the "Oracle BPM Partner Solution Catalog" at the time of the launch. This will be a live repository, so you can keep adding more solutions as they become available. If you have any questions, please feel free to contact us: [email protected], Product Strategy Director, Oracle BPM, Phone +1 650.506.5486. Thank you, and we look forward to hearing from you. Oracle BPM team

Process Accelerators Overview.pdf
ProcessAcceleratorsDataSheet.pdf
Demos draUPK.zip & trmUPK.zip
BPM Solution repository slides.ppt

Additional BPM material:
BPM Process Development Lifecycle - document that describes the recommended approach to collaborative process modeling across business and IT tools
ADF 11g PS5 Application with Customized BPM Worklist Task Flow (MDS Seeded Customization) by Andrejus Baranovskis
BPMN process editor problems in 11.1.1.6 by Mark Nelson
BPM – Disable DBMS job to refresh B2B Materialized View by Mark Nelson

For the complete kit please visit the BPM folder at our SOA Community Workspace (SOA Community membership required). For the complete presentation please visit our SOA Community Workspace (SOA Community membership required). Information is Oracle and Partner confidential! Back to top

PLAQUES SOA & BPM SPECIALIZED
We continue to offer you a nice SOA & BPM Specialization plaque with your logo to prove your success. If you are a SOA or BPM Specialized partner and would like to request the plaque, please send Brigitte an e-mail with the following information:
Partner Name
Partner logo (preferred eps file)
Partner Status gold or platinum
Your shipping address
Your Specialization: SOA or BPM
We recommend mounting the plaque at your office reception; in addition, you can use the SOA Specialization logos on your website - download Logo: Gold & Platinum or the BPM logos Gold & Platinum. Back to top

SOA & BPM AT VIRTUAL DEVELOPER DAY
Register now for this FREE hands-on online workshop. Get up to date and learn everything you wanted to know about Oracle ADF & Fusion Development, plus live Q&A chats with Oracle technical staff. Oracle Application Development Framework (ADF) is the standards-based, strategic framework for Oracle Fusion Applications and Oracle Fusion Middleware. Oracle ADF's integration with the Oracle SOA Suite, Oracle WebCenter and Oracle BI creates a complete, productive development platform for your custom applications. Join us at this FREE virtual event and learn the latest in Fusion Development, including:
Is Oracle ADF development faster and simpler than Forms, Apex or .Net?
Mobile Application Development with ADF Mobile
Oracle ADF development with Eclipse
Oracle WebCenter Portal and ADF Development
Application Lifecycle Management with ADF
Building Process Centric Applications with ADF and BPM
Oracle Business Intelligence and ADF Integration
Live Q&A chats with Oracle technical staff
Developer lead, manager or architect - this event has something for everyone. Don't miss this opportunity. Tuesday, July 10, 2012: 9:00 a.m. - 1:00 p.m. PT / 11:00 a.m. - 3:00 p.m. CT / 12:00 p.m. - 4:00 p.m. ET / 1:00 p.m. - 5:00 p.m. BRT. Register online now! for this FREE event. 
Agenda: 09:00 am Opening 09:30 am Keynote: Oracle Fusion Development Track1Introduction to Fusion Development Track2What's New in Fusion Development Track3Fusion Development in the Enterprise 10:00 am Is Oracle ADF Development Faster and Simpler than Oracle Forms, APEX or .Net? Mobile Application Development with ADF Mobile Oracle WebCenter Portal and ADF Development 11:00 am Rich Web UI made simple - an ADF Faces Overview Oracle Enterprise Pack for Eclipse - ADF Development Building Process Centric Applications with ADF and BPM 12:00 noon Next Generation Controller for JSF Application Lifecycle Management for ADF Oracle Business Intelligence and ADF Integration *Hands On Lab – WebCenter and ADF Lab w/ JDeveloper - Lab materials will be provided ahead of the event to give you ample time to work through the lab and increase the productivity of the live chat sessions the day of the event. Sessions abstractsRegister online now! for this FREE event Read more on Community Events and post your comment here. Back to top NEWS FROM OUR PARTNERS AND COMMUNITY Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity JDeveloper & ADF?Troubleshooting BPMN process editor problems in 11.1.1.6http://dlvr.it/1p0FfS SOA Community?SOA & BPM @ Virtual Developer Day: Oracle Fusion Development - July 10th 2012https://soacommunity.wordpress.com/2012/07/02/soa-bpm-virtual-developer-day- oracle-fusion-developmentjuly-10th-2012/#soacommunity #soa #bom #education orclateamsoa ?A-Team Blog #ateam: BAM design pointers - In working recently with a large Oracle customer on SOA and BAM, I discove.http://ow.ly/1kYqES SOA CommunitySOA Community Newsletter June 2012http://wp.me/p10C8u-qw SOA CommunityBPMN process editor problems in 11.1.1.6 by Mark Nelsonhttp://redstack.wordpress.com/2012/06/27/ bpmn-process-editor-problems-in-11-1-1-6 #soacommunity #bpm OTNArchBeat ?SOA Learning Library: free short, topic-focused training on Oracle SOA & BPM products | @SOACommunity http://pub.vitrue.com/NE1G Andrejus Baranovskis ?ADF 11g PS5 Application with Customized BPM Worklist Task Flow (MDS Seeded Customization)http://fb.me/1coX4r1X1 SOA CommunitySOA Learning Library provides a comprehensive curriculum for the SOA and BPM product suites https://soacommunity.wordpress.com/2012/06/27/soa-learning-library #soacommunity #soa #bpm OTNArchBeat ?A Universal JMX Client for Weblogic - Part 1: Monitoring BPEL Thread Pools in SOA 11g | Stefan Koserhttp://pub.vitrue.com/mQVZ OTNArchBeat ?BPM - Disable DBMS job to refresh B2B Materialized View | Mark Nelson http://pub.vitrue.com/3PR0Oracle SOA ?Learn how Choice Hotels Implements Innovative Google Maps Solution with #OracleSOA http://bit.ly/MTwIJ3 SOA Communitytop Tweets SOA Partner Community - June 2012 Send your tweets @soacommunity #soacommunity https://soacommunity.wordpress.com/2012/06/25/top-tweets-soa-partner-community-june-2012 Torsten Winterberg#OPITZ is pushing Oracle commitment to the next level: New Specializations done: ADF, BPM, WLS, Exadatahttp://bit.ly/KX1WVS ServiceTechSymposium ?Only 8 more days left until Super Early Bird Registration Discount expires! http://www.servicetechsymposium.com OracleBlogsSOA Management in 3 minutes - Video explainerhttp://ow.ly/1kN5pn SOA Community ?SOA, Cloud & Service Technology Symposium 2012 London - Enter Promo Code: Djmxz370https://soacommunity.wordpress.com/2012/06/22/soa-cloud-service-technology-symposium-2012-london #soasymposium #soacommunity #soa Heidi BuelowGreat course! 
w David Read RT @soacommunity: product management ADF for BPM training 5 seats left https://soacommunity.wordpress.com/2012/06/12/fusion-middleware-summer-campsadvanced-partner-trainings/ #bpm #soacommunity SOA Community ?product management ADF for BPM training 5 seats lefthttps://soacommunity.wordpress.com/2012/06/12/fusion-middleware-summer-campsadvanced-partner-trainings/ #bpm #soacommunity OTNArchBeat ?Oacle Fusion Applications Design Patterns Now Available For Developers | Ultan O'Broinhttp://pub.vitrue.com/UEiF OTNArchBeat ?SOA, Cloud & Service Technology Symposium 2012London - Special Oracle Discounthttp://pub.vitrue.com/8E0J SOA CommunityBecome a facebook fan of soacommunity http://www.facebook.com/soacommunity #soacommunity SOA Community ?SOA Suite HealthCare Integration Architecture https://blogs.oracle.com/SOAForHealthcare/entry/soa_suite_healthcare_integration_architecture #soacommunity #soa Andrejus Baranovskis ?Running Pre-built Virtual Machine for SOA Suite and BPM Suite 11g PS5 on Mac OS X Snow Leopard (10.6http://fb.me/vB8nO0Vg OracleBlogsPrinciples of Service-Oriented Architecture by Douwe P. van den Bos http://ow.ly/1kIcOP OTNArchBeatOracle Public Cloud Architecture | @TylerJewell http://ow.ly/bHAcL The SOA Network ?Business Process Management, Service-Oriented Architecture, and Web 2.0: Business Transformation or.http://bit.ly/LBgREL #ITNews #SOA OracleBlogs ?Oracle SOA Foundation Practitioner Certificationhttp://ow.ly/1kGYYg Frank Nimphius ?Learn Advanced ADF. ORACLE Fusion Middleware Summer Camps in Lisbon - July 9th - 13thhttp://bit.ly/KGCl3i SOA CommunityTransform Your Application Integration with Best Practices from Oracle Customershttps://blogs.oracle.com/SOA/entry/transform_your_application_integration_with #soacommunity #soa #bpm Simone GeibWhat you always wanted to know about #oraclesoa diagnostics: Shawn Bailey, Overview of SOA Diagnostics in 11.1.1.6,http://ow.ly/bxK0M Oracle SOA ?Save the date: Jun 21 10AM, SOA & BPM Customer Insight Series. Hear how Choice Hotels went from legacy to #oraclesoa http://bit.ly/LsNDGl OTNArchBeat ?New VirtualBox images for Oracle SOA Suite & Oracle BPM Suite 11.1.1.6.0http://ow.ly/bwDAl OracleBlogs ?Process development lifecycle in Oracle BPM 11g http://ow.ly/1ktesY Daniel AmadeiNew post: Oracle BPEL 11g Message Delivery & Recovery.http://amadei.com.br/blog/index.php /oracle-bpel-11g-message-delivery SOA Community ?Sending out the June edition of the #soacommunity newsletter - read it or become a member http://www.oracle.com/goto/emea/soa!#soa #bpm Arun Pareek ?For the past six months Ahmed Aboulnaga and me have been working on Oracle SOA Suite 11g Administrator's Handbook.http://lnkd.in/CAvpUQ SOA CommunitySun shine all day no clouds - solar eclipse is over... #sunshine #cloud http://www.infoq.com/presentations/Swarm-Computing Michel SchildmeijerWatch my blog Oracle Service Bus 11g: listing projects and services with WLST - part 1 http://lnkd.in/B7f3GQ @TITAN_GS @wlscommunity OTNArchBeatBook Review: Oracle Application Integration Architecture (AIA) Foundation Pack 11gR1: Essentials | Rajesh Rahejahttp://ow.ly/bn2cc OTNArchBeat ?Driving from Business Architecture to Business Process Services | @vghariharan http://ow.ly/bn5UB OTNArchBeat ?SOA Analysis within the Department of Defense Architecture Framework (DoDAF) 2.0 - Part II | Dawit Lessanu http://ow.ly/bn6sX Simone Geib ?Contact me directly for ideas how to improvehttp://bit.ly/advancedsoasuite and additional posts, presentations, white papers, ... 
#soasuite Simone Geib ?#soasuite advanced OTN page has become too cluttered. Broke it into separate pages to start with. http://bit.ly/advancedsoasuite OracleBlogs ?June Webcast: SOA Gateway Implementation and Troubleshooting (2 sessions) http://ow.ly/1kbRFA ServiceTechSymposium ?New session just posted to calendar: "NoSQL for Data Services, Data Virtualization & Big Data" by Guido Schmutz, Trivadis AG ://ow.ly/bjjOeDebra Lilley ?looks good - real proof people are using the apps ! RT @fteter: Very cool Fusion Applications Help site: http://bit.ly/L3nvOR #FusionApps demed ?rapid proliferation of cloud computing will drive convergence of SOA and cloud paradigms" http://ovum.com/2012/05/18/soa-paves-the-way-for-cloud/ SOA CommunityMiddleware Oracle Excellence Awards 2012-HAPPY NEW YEAR! https://soacommunity.wordpress.com/2012/05/31/middleware-oracle-excellence-awards-2012happy-new-year/ #soacommunity #opn #opnaward #specialization #oracle SOA CommunityHappy New Year #soacommunity thanks for the business! Time for a drink http://pic.twitter.com/zkK08KWB OTNArchBeat ?Who should ‘own’ the Enterprise Architecture? | Michael Glas http://bit.ly/K0ge0Q SOA Communitytop Tweets SOA Partner Community &ndash; May 2012 http://wp.me/p10C8u-pP ServiceTechSymposiumNew session just posted to Symposium calendar: "Elastic SOA in the Cloud" by Steve Millidge, C2B2 Consulting http://www.servicetechsymposium.com/agenda2012.php #elastic_soa_in_the_cloud orclateamsoa ?A-Team Blog #ateam: How to Set JVM Parameters in Oracle SOA 11Ghttp://ow.ly/1k2cnl ServiceTechSymposium ?New session just posted to Symposium calendar: "SOA Governance at EDP: A Global Energy Company" by Manuel Rosa, Linkhttp://www.servicetechsymposium.com/agenda2012.php#soa_governance_at_edp SOA Community ?VirtualBox image SOA Suite & BPM Suite 11.1.1.6.0&ndash;Your feedback?http://wp.me/p10C8u-qh Oracle MiddlewareSave the date: Jun 21 10AM, SOA & BPM Customer Insight Series. Hear how Choice Hotels went from legacy to#oraclesoa http://bit.ly/LU1y5N OTNArchBeat ?Goodbye, Silos. Hello SOA. | @stephanieoverbyhttp://pub.vitrue.com/NJJO SOA CommunityBPM Standard Edition - to start your BPM project http://wp.me/p10C8u-qj Please feel free to send us your news! And add your blog to our SOA blog wiki. Back to top OVERVIEW OF SOA DIAGNOSTICS IN 11.1.1.6 What tools are available for diagnosing SOA Suite issues? There are a variety of tools available to help you and Support diagnose SOA Suite issues in 11g but it can be confusing as to which tool is appropriate for a particular situation and what their relationships are. This blog post will introduce the various tools and attempt to clarify what each is for and how they are related. Let's first list the tools we'll be addressing: RDA: Remote Diagnostic Agent DFW: Diagnostic Framework Selective Tracing DMS: Dynamic Monitoring Service ODL: Oracle Diagnostic Logging ADR: Automatic Diagnostics Repository ADRCI: Automatic Diagnostics Repository Command Interpreter WLDF: WebLogic Diagnostic Framework This overview is not mean to be a comprehensive guide on using all of these tools, however, extensive reference materials are included that will provide many more details on their execution. Another point to note is that all of these tools are applicable for Fusion Middleware as a whole but specific products may or may not have implemented features to leverage them. A couple of the tools have a WebLogic Scripting Tool or 'WLST' interface. 
WLST is a command interface for executing pre-built functions and custom scripts against a domain. A detailed WLST tutorial is beyond the scope of this post, but you can find general information here. There are more specific resources in the sections below. In this post, when we refer to 'Enterprise Manager' or 'EM' we are referring to Enterprise Manager Fusion Middleware Control. Read the full blog post here. Read more on Oracle and post your comment here. Back to top

BUSINESS DRIVEN DEVELOPMENT (BDD) DEMO NOW AVAILABLE!
For access to the Oracle demo systems please visit OPN and talk to your Partner Expert. DSS is pleased to announce the availability of the demo "Business Driven Development". This innovative demonstration uses a case-study approach to show business users how they can easily streamline their business processes - delivering greater efficiency, agility, visibility and collaboration with Oracle BPM and WebCenter. The BDD demonstration uses a case study-based approach to highlight a business problem at a fictional company, Avitek Corporation, and uses Oracle BPM and Oracle WebCenter to solve the business problem. This holistic approach has specifically been used to appeal to a non-technical business analyst user. This demo is NOT focused on product features, but aims to guide users through a complete BPM lifecycle. The scenario is based on improving a simple order process (scenario details are in the demo script). Avitek Corporation is suffering from a manual, email-driven ordering process. Sales reps don't know where the customer orders are stuck (no visibility) and finance users are unable to manually approve every order (no automation). There are several areas where this process can be improved with Business Process Management technology. This demo shows how improving the following areas will significantly help resolve the business problems Avitek Corporation is facing. Areas for improvement include:
Utilizing BPM for process management, rather than an unregulated, email-based process.
Utilizing automated services, rather than requiring a human to key into a system. For example, Finance checking the customer's credit rating is something that could be automated.
Centralizing business rules that can be integrated into a business process, rather than requiring a human to process them. For example, Finance must determine when orders can be automatically approved.
Providing insight and visibility into the process. For example, a Sales Rep needs to know the status of their customers' orders.
The BDD Demo uses the following products:
Oracle BPM Suite 11g PS4FP
Oracle WebCenter 11g PS4FP (for Process Spaces)
Oracle Business Activity Monitoring 11g
Oracle Database 11g
Back to top

SOA LIFECYCLE MANAGEMENT
For access to the Oracle demo systems please visit OPN and talk to your Partner Expert. We are pleased to announce the availability of the SOA Management demo that showcases some of the key provisioning and lifecycle management capabilities of SOA Management Pack Enterprise Edition (EE). This demo specifically focuses on some of the lifecycle management solutions for Oracle SOA Suite and Oracle Service Bus (OSB). Demo highlights - the demo showcases the following capabilities:
Provisioning of SOA Composites
Provisioning of OSB Projects
Provisioning of SOA and OSB artifacts in a future maintenance window
Back to top

ORACLE FUSION APPLICATIONS DESIGN PATTERNS
The Oracle Fusion Applications user experience design patterns are published! 
These new, reusable usability solutions and best-practices, which will join the Oracle dashboard patterns and guidelines that are already available online, are used by Oracle to artfully bring to life a new standard in the user experience, or UX, of enterprise applications. Now, the Oracle applications development community can benefit from the science behind the Oracle Fusion Applications user experience, too. These Oracle Fusion Applications UX Design Patterns, or blueprints, enable Oracle applications developers and system implementers everywhere to leverage professional usability insight when: tailoring an Oracle Fusion application, creating coexistence solutions that existing users will be delighted with, thus enabling graceful user transitions to Oracle Fusion Applications down the road, or designing exciting, new, highly usable applications in the cloud or on-premise. Based on the Oracle Application Development Framework (ADF) components, the Oracle Fusion Applications patterns and guidelines are proven with real users and in the Applications UX usability labs, so you can get right to work coding productivity-enhancing designs that provide an advantage for your entire business. What’s the best way to get started? We’ve made that easy, too. The Design Filter Tool (DeFT) selects the best pattern for your user type and task. Simply adapt your selection for your own task flow and content, and you’re on your way to a really great applications user experience. More Oracle applications design patterns and training are coming your way in the future. To provide feedback on the sets that are currently available, let me know in the comments! Read more on Fusionapps and post your comment here. Back to top UPDATED ORACLE MATERIAL Integrated SOA Gateway Documentation - Implementation Guide | Developer’s Guide Webcast Series: Oracle’s SOA and Oracle Business Process Management Solutions (Choice Hotels, Eaton, Farmers Insurance) BAM design pointers By Kavitha Srinivasan Seeking Oracle Fusion Middleware Go Live StoriesOracle Fusion Middleware product management is looking for recent go live stories to share with the Oracle sales team, sales consulting, product management and other internal groups. Customer contact details may remain anonymous. Your successful implementation will be featured in a quarterly report. The chance to present on an internal webcast is also available. Contact Maria Forney ([email protected]) if you have a noteworthy implementation success story. This is a good opportunity for partners interested in showcasing Oracle Fusion Middleware implementations, and gaining more exposure within Oracle. Performance tuning resources. All in one: docs, blogs, WPs, ppts: http://bit.ly/soa_resources Back to top HAVE YOU MISSED OUR LAST SOA PARTNER COMMUNITY WEBCASTS? UPK Webcast Business Driven Application Management & BPM11g & Application Grid & GoldenGate & Fusion Middleware Pricing & OC4J to WebLogic & Next Generation SOA & Fusion Middleware in Utility & Fusion Middleware in Communications & Fusion Middleware in Public Services & Fusion Middleware in Financial Services Please check your local OPN trainings calendar for additional training dates and locations. 
Back to top

SOA PARTNER COMMUNITY CALENDAR

On-Demand Trainings:
Event Name | Language | Type
SOA Virtual Developers Day | English | Tech

In-Class Trainings:
Date | Event name | Location / Country | Contact person | Type
09-13.07.2012 | BPM Suite 11g advanced training by David Read | Lisbon, Portugal | Jürgen Kress | Tech
09-13.07.2012 | ADF 11g advanced training by Grant Ronald and Frank Nimphius | Lisbon, Portugal | Jürgen Kress | Tech
09-13.07.2012 | WebCenter Portal advanced training by Stefan Krantz and Angelo Santagata | Lisbon, Portugal | Jürgen Kress | Tech
10.07.2012 | Fusion Middleware Virtual Developer Day | Online | OTN | Tech
10-12.07.2012 | WebLogic 12c training by Cosmin Tudor | Lisbon, Portugal | Jürgen Kress | Tech
16-18.07.2012 | SOA Suite 11g advanced training by Niall Commiskey | Munich, Germany | Jürgen Kress | Tech
16-18.07.2012 | ADF for BPM Suite 11g advanced training by David Read | Munich, Germany | Jürgen Kress | Tech
16-18.07.2012 | WebCenter Sites 11g advanced training by Product Management | Munich, Germany | Jürgen Kress | Tech
17-20.07.2012 | Oracle BPM 11g Implementation Bootcamp | Live Virtual Class | Oracle University | Tech
23-26.07.2012 | Oracle BPM 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech
29-31.08.2012 | Oracle BPM 11g Implementation Bootcamp | Live Virtual Class | Oracle University | Tech
02-05.10.2012 | Oracle BPM 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech
15-18.10.2012 | Oracle BPM 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech
28-30.11.2012 | Oracle AIA 11g Implementation Bootcamp | Live Virtual Class | Oracle University | Tech
11-14.12.2012 | Oracle BPM 11g Implementation Bootcamp | Live Virtual Class | Oracle University | Tech
20-22.2.2013 | Oracle AIA 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech
14-17.1.2013 | Oracle BPM 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech
15-18.3.2013 | Oracle BPM 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech

Please check your local OPN Training Calendar for additional training and locations here. Back to top

SOASCHOOL.COM - SOA CERTIFIED PROFESSIONAL (SOACP) PROGRAM
The SOASchool.com - SOA Certified Professional (SOACP) program is dedicated to excellence in the field of SOA and service-oriented computing. Through a series of seasoned course modules and exams, IT professionals have the opportunity to obtain a number of different certifications to recognize their accomplishment of gaining "project ready" SOA proficiency. This comprehensive and strictly vendor-neutral program was developed in cooperation with best-selling SOA author Thomas Erl and several major SOA organizations and academic institutions. Through the involvement of the SOA Education Committee, course contents and certification requirements are constantly reviewed and revised to stay current with developments in the service-oriented computing industry. The program is currently comprised of 12 course modules and 5 certifications and is expanding to 18 course modules and 8 certifications throughout 2009. For more information, visit www.soaschool.com and www.soacp.com. Blog Twitter LinkedIn Mix Forum Wiki Back to top

YOUR CONTENT ON THE NEWSLETTER AND ON THE SOA COMMUNITY PORTAL
Publishing Your Stories: We would like to invite our partners to publish information in the newsletter or on our SOA Community portal. Especially we are looking for your real-life experience with our SOA technology. Please send your documents to Jürgen Kress. We look forward to getting your suggestions! 
Back to top

SOA DISCUSSION FORUM BECOMES INTERACTIVE AT THE SOA COMMUNITY!
Do you want to chat with experts, including partners and Oracle SOA Product Development? Do you want to get the latest information about our SOA solutions and events? Attend our private online SOA Discussion Forum at OTN. Please send your OTN forums user name to Brigitte Felisaz. You must be a registered user to access the SOA Discussion Forum. Back to top

INVITE YOUR COLLEAGUES TO JOIN THE SOA COMMUNITY
Please feel free to invite your colleagues to join the SOA Community and to participate in the SOA Assessment tests. For registration, please log in to the Oracle PartnerNetwork and go to: www.oracle.com/goto/emea/soa For any questions on the above or concerning SOA and Oracle in general, please contact the Oracle EMEA Alliances & Channels SOA Team.

Best regards,
Oracle EMEA SOA Team
Jürgen Kress, SOA Partner Adoption EMEA
Tel. +49 89 1430 1479
E-Mail: [email protected]

    Read the article

  • 26 Days: Countdown to Oracle OpenWorld 2012

    - by Michael Snow
Welcome to our countdown to Oracle OpenWorld! Oracle OpenWorld 2012 is just around the corner. In less than 26 days, San Francisco will be invaded by an expected 50,000 people from all over the world. Here on the Oracle WebCenter team, we've all been working to help make the experience a great one for all our WebCenter customers. For a sneak peek, we'll be spending this week giving you a teaser of what to look forward to if you are joining us in San Francisco from September 30th through October 4th. We have Oracle WebCenter sessions covering all topics imaginable. Take a look and use the tools we provide to build out your schedule in advance and reserve your seats in your favorite sessions. That gives you plenty of time to plan for your week with us in San Francisco. If, unfortunately, your boss denied your request to attend, there are still some ways that you can join in the experience virtually On-Demand. This year we are expanding even more up north of Market Street and will be taking over Union Square as well. Check out this map of San Francisco to get a sense of how much of a footprint Oracle OpenWorld has grown to this year. With so much to see and so many sessions to learn from, it's no wonder that people get excited. Add to that a good mix of fun and all of the possible WebCenter sessions you could attend - you won't want to sleep at all to take full advantage of such an opportunity. We'll also have our annual WebCenter Customer Appreciation reception - stay tuned this week for some more info on registration to make sure you'll be able to join us. If you've been following the America's Cup at all and believe in EXTREME PERFORMANCE, you'll definitely want to take a look at this video from last year's OpenWorld Keynote.

Important OpenWorld Links:
Attendee / Presenters Toolkit
Oracle Schedule Builder
WebCenter Sessions (listed in the catalog under Fusion Middleware as "Portals, Sites, Content, and Collaboration")
Oracle Music Festival - AMAZING line up!!
Oracle Customer Appreciation Night - LOOK HERE!!
Oracle OpenWorld LIVE On-Demand

Here are all the WebCenter sessions broken down by day for your viewing pleasure.

Monday, October 1st

CON8885 - Simplify CRM Engagement with Contextual Collaboration
Are your sales teams disconnected and disengaged? Do you want a tool for easily connecting expertise across your organization and providing visibility into the complete sales process? Do you want a way to enhance and retain organization knowledge? Oracle Social Network is the answer. Attend this session to learn how to make CRM easy, effective, and efficient for use across virtual sales teams. 
Also learn how Oracle Social Network can drive sales force collaboration with natural conversations throughout the sales cycle, promote sales team productivity through purposeful social networking without the noise, and build cross-team knowledge by integrating conversations with CRM and other business applications. CON8268 - Oracle WebCenter Strategy: Engaging Your Customers. Empowering Your Business Oracle WebCenter is a user engagement platform for social business, connecting people and information. Attend this session to learn about the Oracle WebCenter strategy, and understand where Oracle is taking the platform to help companies engage customers, empower employees, and enable partners. Business success starts with ensuring that everyone is engaged with the right people and the right information and can access what they need through the channel of their choice—Web, mobile, or social. Are you giving customers, employees, and partners the best-possible experience? Come learn how you can! ¶ HOL10208 - Add Social Capabilities to Your Enterprise Applications Oracle Social Network enables you to add real-time collaboration capabilities into your enterprise applications, so that conversations can happen directly within your business systems. In this hands-on lab, you will try out the Oracle Social Network product to collaborate with other attendees, using real-time conversations with document sharing capabilities. Next you will embed social capabilities into a sample Web-based enterprise application, using embedded UI components. Experts will also write simple REST-based integrations, using the Oracle Social Network API to programmatically create social interactions. ¶ CON8893 - Improve Employee Productivity with Intuitive and Social Work Environments Social technologies have already transformed the ways customers, employees, partners, and suppliers communicate and stay informed. Forward-thinking organizations today need technologies and infrastructures to help them advance to the next level and integrate social activities with business applications to deliver a user experience that simplifies business processes and enterprise application engagement. Attend this session to hear from an innovative Oracle Social Network customer and learn how you can improve productivity with intuitive and social work environments and empower your employees with innovative social tools to enable contextual access to content and dynamic personalization of solutions. ¶ CON8270 - Oracle WebCenter Content Strategy and Vision Oracle WebCenter provides a strategic content infrastructure for managing documents, images, e-mails, and rich media files. With a single repository, organizations can address any content use case, such as accounts payable, HR onboarding, document management, compliance, records management, digital asset management, or Website management. In this session, learn about future plans for how Oracle WebCenter will address new use cases as well as new integrations with Oracle Fusion Middleware and Oracle Applications, leveraging your investments by making your users more productive and error-free. ¶ CON8269 - Oracle WebCenter Sites Strategy and Vision Oracle’s Web experience management solution, Oracle WebCenter Sites, enables organizations to use the online channel to drive customer acquisition and brand loyalty. It helps marketers and business users easily create and manage contextually relevant, social, interactive online experiences across multiple channels on a global scale. 
In this session, learn about future plans for how Oracle WebCenter Sites will provide you with the tools, capabilities, and integrations you need in order to continue to address your customers’ evolving requirements for engaging online experiences and keep moving your business forward. ¶ CON8896 - Living with SharePoint SharePoint is a popular platform, but it’s not always the best fit for Oracle customers. In this session, you’ll discover the technical and nontechnical limitations and pitfalls of SharePoint and learn about Oracle alternatives for collaboration, portals, enterprise and Web content management, social computing, and application integration. The presentation shows you how to integrate with SharePoint when business or IT requirements dictate and covers cloud-based (Office 365) and on-premises versions of SharePoint. Presented by a former Microsoft director of SharePoint product management and backed by independent customer research, this session will prepare you to answer the question “Why don’t we just use SharePoint for that?’ the next time it comes up in your organization. ¶ CON7843 - Content-Enabling Enterprise Processes with Oracle WebCenter Organizations today continually strive to automate business processes, reduce costs, and improve efficiency. Many business processes are content-intensive and unstructured, requiring ad hoc collaboration, and distributed in nature, requiring many approvals and generating huge volumes of paper. In this session, learn how Oracle and SYSTIME have partnered to help a customer content-enable its enterprise with Oracle WebCenter Content and Oracle WebCenter Imaging 11g and integrate them with Oracle Applications. ¶ CON6114 - Tape Robotics’ Newest Superhero: Now Fueled by Oracle Software For small, midsize, and rapidly growing businesses that want the most energy-efficient, scalable storage infrastructure to meet their rapidly growing data demands, Oracle’s most recent addition to its award-winning tape portfolio leverages several pieces of Oracle software. With Oracle Linux, Oracle WebLogic, and Oracle Fusion Middleware tools, the library achieves a higher level of usability than previous products while offering customers a familiar interface for management, plus ease of use. This session examines the competitive advantages of the tape library and how Oracle software raises customer satisfaction. Learn how the combination of Oracle engineered systems, Oracle Secure Backup, and Oracle’s StorageTek tape libraries provide end-to-end coverage of your data. ¶ CON9437 - Mobile Access Management With more than five billion mobile devices on the planet and an increasing number of users using their own devices to access corporate data and applications, securely extending identity management to mobile devices has become a hot topic. This session focuses on how to extend your existing identity management infrastructure and policies to securely and seamlessly enable mobile user access. CON7815 - Customer Experience Online in Cloud: Oracle WebCenter Sites, Oracle ATG Apps, Oracle Exalogic Oracle WebCenter Sites and Oracle’s ATG product line together can provide a compelling marketing and e-commerce experience. When you couple them with the extreme performance of Oracle Exalogic, you’ll see unmatched scalability that provides you with a true cloud-based solution. In this session, you’ll learn how running Oracle WebCenter Sites and ATG applications on Oracle Exalogic delivers both a private and a public cloud experience. 
Find out what it takes to get these systems working together and delivering engaging Web experiences. Even if you aren’t considering Oracle Exalogic today, the rich Web experience of Oracle WebCenter, paired with the depth of the ATG product line, can provide your business full support, from merchandising through sale completion. ¶ CON8271 - Oracle WebCenter Portal Strategy and Vision To innovate and keep a competitive edge, organizations need to leverage the power of agile and responsive Web applications. Oracle WebCenter Portal enables you to do just that, by delivering intuitive user experiences for enterprise applications to drive innovation with composite applications and mashups. Attend this session to learn firsthand from customers how Oracle WebCenter Portal extends the value of existing enterprise applications, business processes, and content; delivers a superior business user experience; and maximizes limited IT resources. ¶ CON8880 - The Connected Customer Experience Begins with the Online Channel There’s a lot of talk these days about how to connect the customer journey across various touchpoints—from Websites and e-commerce to call centers and in-store—to provide experiences that are more relevant and engaging and ultimately gain competitive edge. Doing it all at once isn’t a realistic objective, so where do you start? Come to this session, and hear about three steps you can take that can help you begin your journey toward delivering the connected customer experience. You’ll hear how Oracle now has an integrated digital marketing platform for your corporate Website, your e-commerce site, your self-service portal, and your marketing and loyalty campaigns, and you’ll learn what you can do today to begin executing on your customer experience initiatives. ¶ GEN11451 - General Session: Building Mobile Applications with Oracle Cloud With the prevalence of smart mobile devices, companies are facing an increased demand to provide access to data and applications from new channels. However, developing applications for mobile devices poses some unique challenges. Come to this session to learn how Oracle addresses these challenges, offering a simpler way to develop and deploy cross-device mobile applications. See how Oracle Cloud enables you to access applications, data, and services from mobile channels in an easier way.  CON8272 - Oracle Social Network Strategy and Vision One key way of increasing employee productivity is by bringing people, processes, and information together—providing new social capabilities to enable business users to quickly correspond and collaborate on business activities. Oracle WebCenter provides a user engagement platform with social and collaborative technologies to empower business users to focus on their key business processes, applications, and content in the context of their role and process. Attend this session to hear how the latest social capabilities in Oracle Social Network are enabling organizations to transform themselves into social businesses.  --- Tuesday, October 2nd HOL10194 - Enterprise Content Management Simplified: Oracle WebCenter Content’s Next-Generation UI Regardless of the nature of your business, unstructured content underpins many of its daily functions. Whether you are working with traditional presentations, spreadsheets, or text documents—or even with digital assets such as images and multimedia files—your content needs to be accessible and manageable in convenient and intuitive ways to make working with the content easier. 
Additionally, you need the ability to easily share documents with coworkers to facilitate a collaborative working environment. Come to this session to see how Oracle WebCenter Content’s next-generation user interface helps modern knowledge workers easily manage personal and enterprise documents in a collaborative environment.¶ CON8877 - Develop a Mobile Strategy with Oracle WebCenter: Engage Customers, Employees, and Partners Mobile technology has gone from nice-to-have to a cornerstone of user engagement. Mobile access enables users to have information available at their fingertips, enabling them to take action the moment they make a decision, interact in the moment of convenience, and take advantage of new service offerings in their preferred channels. All your employees have your mobile applications in their pocket; now what are you going to do? It is a critical step for companies to think through what their employees, customers, and partners really need on their devices. Attend this session to see how Oracle WebCenter enables you to better engage your customers, employees, and partners by providing a unified experience across multiple channels. ¶ CON9447 - Enabling Access for Hundreds of Millions of Users How do you grow your business by identifying, authenticating, authorizing, and federating users on the Web, leveraging social identity and the open source OAuth protocol? How do you scale your access management solution to support hundreds of millions of users? With social identity support out of the box, Oracle’s access management solution is also benchmarked for 250-million-user deployment according to real-world customer scenarios. In this session, you will learn about the social identity capability and the 250-million-user benchmark testing of Oracle Access Manager and Oracle Adaptive Access Manager running on Oracle Exalogic and Oracle Exadata. ¶ HOL10207 - Build an Intranet Portal with Oracle WebCenter In this hands-on lab, you’ll work with Oracle WebCenter Portal and Oracle WebCenter Content to build out an enterprise portal that maximizes the productivity of teams and individual contributors. Using browser-based tools, you’ll manage site resources such as page styles, templates, and navigation. You’ll edit content stored in Oracle WebCenter Content directly from your portal. You’ll also experience the latest features that promote collaboration, social networking, and personal productivity. ¶ CON2906 - Get Proactive: Best Practices for Maintaining Oracle Fusion Middleware You chose Oracle Fusion Middleware products to help your organization deliver superior business results. Now learn how to take full advantage of your software with all the great tools, resources, and product updates you’re entitled to through Oracle Support. In this session, Oracle product experts provide proven best practices to help you work more efficiently, plan and prepare for upgrades and patching more effectively, and manage risk. Topics include configuration management tools, remote diagnostics, My Oracle Support Community, and My Oracle Support Lifecycle Advisors. New users and Oracle Fusion Middleware experts alike are guaranteed to leave with fresh ideas and practical, easy-to-implement next steps. ¶ CON8878 - Oracle WebCenter’s Cloud Strategy: From Social and Platform Services to Mashups Cloud computing represents a paradigm shift in how we build applications, automate processes, collaborate, and share and in how we secure our enterprise. 
Additionally, as you adopt cloud-based services in your organization, it’s likely that you will still have many critical on-premises applications running. With these mixed environments, multiple user interfaces, different security, and multiple data sources and content sources, how do you start evolving your strategy to account for these challenges? Oracle WebCenter offers a complete array of technologies enabling you to solve these challenges and prepare you for the cloud. Attend this session to learn how you can use Oracle WebCenter in the cloud as well as create on-premises and cloud application mash-ups.

CON8901 - Optimize Enterprise Business Processes with Oracle WebCenter and Oracle BPM
Do you have business processes that span multiple applications? Are you grappling with how to have visibility across these business processes; how to manage content that is associated with these processes; and, most importantly, how to model and optimize these business processes? Attend this session to hear how Oracle WebCenter and Oracle Business Process Management provide a unique set of integrated solutions that deliver a composite application dashboard across these business processes and offer a solution for content-centric business processes.

CON8883 - Deliver Engaging Interfaces to Oracle Applications with Oracle WebCenter
Critical business processes live within enterprise applications, and application users need to manage and execute these processes as effectively as possible. Oracle provides a comprehensive user engagement platform to increase user productivity and optimize overall processes within Oracle Applications—Oracle E-Business Suite and Oracle’s Siebel, PeopleSoft, and JD Edwards product families—and third-party applications. Attend this session to learn how you can integrate these applications with Oracle WebCenter to deliver composite application dashboards to your end users—whether they are your customers, partners, or employees—for enhanced usability and Web 2.0–enabled enterprise portals.

Wednesday, October 3rd

CON8895 - Future-Ready Intranets: How Aramark Re-engineered the Application Landscape
There are essential techniques and technologies you can use to deliver employee portals that garner higher productivity, improve business efficiency, and increase user engagement. Attend this session to learn how you can leverage Oracle WebCenter Portal as a user engagement platform for bringing together business process management, enterprise content management, and business intelligence into a highly relevant and integrated experience. Hear how Aramark has leveraged Oracle WebCenter Portal and Oracle WebCenter Content to deliver a unified workspace providing simpler navigation and processing, consolidation of tools, easy access to information, integrated search, and single sign-on.

CON8886 - Content Consolidation: Save Money, Increase Efficiency, and Eliminate Silos
Organizations are looking for ways to save money and be more efficient. With content in many different places, it’s difficult to know where to look for a document and whether the document is the most current version. With Oracle WebCenter, content can be consolidated into one best-of-breed repository that is secure, scalable, and integrated with your business processes and applications. Users can find the content they need, where they need it, and ensure that it is the right content. This session covers content challenges that affect your business; content consolidation that can lead to savings in storage and administration costs and can lower risks; and how companies are realizing savings.

CON8911 - Improve Online Experiences for Customers and Partners with Self-Service Portals
Are you able to provide your customers and partners an easy-to-use online self-service experience? Are you processing high-volume transactions and struggling with call center bottlenecks or back-end systems that won’t integrate, causing order delays and customer frustration? Are you looking to target content such as product and service offerings to your end users? This session shares approaches to providing targeted delivery as well as strategies and best practices for transforming your business by providing an intuitive user experience for your customers and partners.

CON6156 - Top 10 Ways to Integrate Oracle WebCenter Content
This session covers 10 common ways to integrate Oracle WebCenter Content with other enterprise applications and middleware. It discusses out-of-the-box modules that provide expanded features in Oracle WebCenter Content—such as enterprise search, SOA, and BPEL—as well as developer tools you can use to create custom integrations. The presentation also gives guidance on which integration option may work best in your environment.

HOL10207 - Build an Intranet Portal with Oracle WebCenter
In this hands-on lab, you’ll work with Oracle WebCenter Portal and Oracle WebCenter Content to build out an enterprise portal that maximizes the productivity of teams and individual contributors. Using browser-based tools, you’ll manage site resources such as page styles, templates, and navigation. You’ll edit content stored in Oracle WebCenter Content directly from your portal. You’ll also experience the latest features that promote collaboration, social networking, and personal productivity.

CON7817 - Migration to Oracle WebCenter Imaging 11g
Customers today continually strive to automate business processes, reduce costs, and improve efficiency. The accounts payable process—which is often distributed in nature, requires many approvals, and generates huge volumes of paper invoices—is automated by many customers. In this session, learn how Oracle and SYSTIME have partnered to help a customer migrate its existing Oracle Imaging and Process Management Release 7.6 to the latest Oracle WebCenter Imaging 11g and integrate it with Oracle’s JD Edwards family of products.

CON8910 - How to Engage Customers Across Web, Mobile, and Social Channels
Whether on desktops at the office, on tablets at home, or on mobile phones when on the go, today’s customers are always connected. To engage today’s customers, you need to make the online customer experience connected and consistent across a host of devices and multiple channels, including Web, mobile, and social networks. Managing this multichannel environment can result in lots of headaches without the right tools. Attend this session to learn how Oracle WebCenter Sites solves the challenge of multichannel customer engagement.

HOL10206 - Oracle WebCenter Sites 11g: Transforming the Content Contributor Experience
Oracle WebCenter Sites 11g makes it easy for marketers and business users to contribute to and manage Websites with the new visual, contextual, and intuitive Web authoring interface. In this hands-on lab, you will create and manage content for a sports-themed Website, using many of the new and enhanced features of the 11g release.

CON8900 - Building Next-Generation Portals: An Interactive Customer Panel Discussion
Social and collaborative technologies have changed how people interact, learn, and collaborate, and providing a modern, social Web presence is imperative to remain competitive in today’s market. Can your business benefit from a more collaborative and interactive portal environment for employees, customers, and partners? Attend this session to hear from Oracle WebCenter Portal customers as they share their strategies and best practices for providing users with a modern experience that adapts to their needs and includes personalized access to content in context. The panel also addresses how customers have benefited from creating next-generation portals by migrating from older portal technologies to Oracle WebCenter Portal.

CON9625 - Taking Control of Oracle WebCenter Security
Organizations are increasingly looking to extend their Oracle WebCenter portal for social business, to serve external users and provide seamless access to the right information. In particular, many organizations are extending Oracle WebCenter in a business-to-business scenario requiring secure identification and authorization of business partners and their users. This session focuses on how customers are leveraging, securing, and providing access control to Oracle WebCenter portal and mobile solutions. You will learn best practices and hear real-world examples of how to provide flexible and granular access control for Oracle WebCenter deployments, using Oracle Platform Security Services and Oracle Access Management Suite product offerings.

CON8891 - Extending Social into Enterprise Applications and Business Processes
Oracle Social Network is an extensible social platform that enables contextual collaboration within enterprise applications and business processes, providing relevant data from across various enterprise systems in one place. Attend this session to see how an Oracle Social Network customer is integrating multiple applications—such as CRM, HCM, and business processes—into Oracle Social Network and Oracle WebCenter to enable individuals and teams to solve complex cross-organizational business problems more effectively by utilizing the social enterprise.

Thursday, October 4th

CON8899 - Becoming a Social Business: Stories from the Front Lines of Change
What does it really mean to be a social business? How can you change your organization to embrace social approaches? What pitfalls do you need to avoid? In this lively panel discussion, customer and industry thought leaders in social business explore these topics and more as they share their stories of the good, the bad, and the ugly that can happen when embracing social methods and technologies to improve business success. Using moderated questions and open Q&A from the audience, the panel discusses vital topics such as the critical factors for success, the major issues to avoid, how to gain senior executive support for social efforts, how to handle undesired behavior, and how to measure business impact. It takes a thought-provoking look at becoming a social business from the inside.

CON6851 - Oracle WebCenter and Oracle Business Intelligence Enterprise Edition to Create Vendor Portals
Large manufacturers of grocery items routinely find themselves depending on the inventory management expertise of their wholesalers and distributors. Inventory costs can be managed more efficiently by the manufacturers if they have better insight into the inventory levels of items carried by their distributors. This creates a unique opportunity for distributors and wholesalers to leverage this knowledge into a revenue-generating subscription service. Oracle Business Intelligence Enterprise Edition and Oracle WebCenter Portal play a key part in enabling creation of business-managed business intelligence portals for vendors. This session discusses one customer that implemented this by leveraging Oracle WebCenter and Oracle Business Intelligence Enterprise Edition.

CON8879 - Provide a Personalized and Consistent Customer Experience in Your Websites and Portals
Your customers engage with your company online in different ways throughout their journey—from prospecting by acquiring information on your corporate Website to transacting through self-service applications on your customer portal—and then the cycle begins again when they look for new products and services. Ensuring that the customer experience is consistent and personalized across online properties—from branding and content to interactions and transactions—can be a daunting task. Oracle WebCenter enables you to speak and interact with your customers with one voice across your Websites and portals by providing an integrated platform for delivery of self-service and engagement that unifies and personalizes the online experience. Learn more in this session.

CON8898 - Land Mines, Potholes, and Dirt Roads: Navigating the Way to ECM Nirvana
Ten years ago, people were predicting that by this time in history, we’d be some kind of utopian paperless society. As we all know, we’re not there yet, but are we getting closer? What is keeping companies from driving down the road to enterprise content management bliss? Most people understand that using ECM as a central platform enables organizations to expedite document-centric processes, but most business processes in organizations are still heavily paper-based. Many of these processes could be automated and improved with an ECM platform infrastructure. In this panel discussion, you’ll hear from Oracle WebCenter customers that have already solved some of these challenges as they share their strategies for success and roads to avoid along your journey.

CON8908 - Oracle WebCenter Portal: Creating and Using Content Presenter Templates
Oracle WebCenter Portal applications use task flows to display and integrate content stored in the Oracle WebCenter Content server. Among the most flexible task flows is Content Presenter, which renders various types of content on an Oracle WebCenter Portal page. Although Oracle WebCenter Portal comes with a set of predefined Content Presenter templates, developers can create their own templates for specific rendering needs. This session shows the lifecycle of developing Content Presenter task flows, including how to create, package, import, modify at runtime, and use such templates. In addition to simple examples with Oracle Application Development Framework (Oracle ADF) UI elements to render the content, it shows how to use other UI technologies, CSS files, and JavaScript libraries.

CON8897 - Using Web Experience Management to Drive Online Marketing Success
Every year, the online channel becomes more imperative for driving organizational top-line revenue, but for many companies, mastering how to best market their products and services in a fast-evolving online world with high customer expectations for personalized experiences can be a complex proposition. Come to this panel discussion, and hear directly from online marketers how they are succeeding today by using Web experience management to drive marketing success, using capabilities such as targeting and optimization, user-generated content, mobile site publishing, and site visitor personalization to deliver engaging online experiences.

CON8892 - Oracle’s Journey to Social Business
Social business is a revolution, one that is causing rapidly accelerating change in how companies and customers engage with one another and how employees work together. Oracle’s goal in becoming a social business is to create a socially connected organization in which working collaboratively across geographical locations, lines of business, and management chains is second nature, enabling innovative solutions to business challenges. We can achieve this by connecting the right people, finding the right content, communicating with the right people, collaborating at the right time, and building the right communities in the right context—all ready in the CLOUD. Attend this session to see how Oracle is transforming itself into a social business.

------------

If you've read all the way to the end here - we are REALLY looking forward to seeing you in San Francisco.


  • Using the West Wind Web Toolkit to set up AJAX and REST Services

    - by Rick Strahl
    I frequently get questions about which option to use for creating AJAX and REST backends for ASP.NET applications. There are actually many solutions out there to do this, but when I have a choice - not surprisingly - I fall back on my own tools in the West Wind Web Toolkit. I've talked a bunch about the 'in-the-box' solutions in the past, so for a change in this post I'll talk about the tools I use in my own and customer applications to handle AJAX and REST based access to service resources with the West Wind Web Toolkit.

    Let me preface this by saying that I like things to be easy. Yes, flexibility is very important as well, but not at the expense of over-complexity. The goal I've had with my tools is to make them drop-dead easy to use, with good performance, while providing the core features that I'm after, which are:

    - Easy AJAX/JSON callbacks
    - Ability to return any kind of non-JSON content (string, stream, byte[], images)
    - Ability to work with both XML and JSON interchangeably for input/output
    - Access endpoints via POST data, RPC JSON calls, GET query string values or the Routing interface
    - Easy to use generic JavaScript client to make RPC calls (same syntax, just what you need)
    - Ability to create clean URLs with Routing
    - Ability to use the standard ASP.NET HTTP stack for HTTP semantics

    It's all about options! In this post I'll demonstrate most of these features (except XML) in a few simple and short samples which you can download. So let's take a look and see how you can build an AJAX callback solution with the West Wind Web Toolkit.

    Installing the Toolkit Assemblies

    The easiest and leanest way of using the Toolkit in your Web project is to grab it via NuGet: West Wind Web and AJAX Utilities (Westwind.Web). Right-click your project and choose Manage NuGet Packages to add it. When done, NuGet has added two assemblies - Westwind.Web and Westwind.Utilities - and the client ww.jquery.js library to the project. It also added a couple of entries to web.config: the default namespaces so they can be accessed in pages and views, and a ScriptCompressionModule that the toolkit optionally uses to compress script resources served from within the assembly (namely ww.jquery.js and optionally jquery.js).

    Creating a new Service

    The West Wind Web Toolkit supports several ways of creating and accessing AJAX services, but for this post I'll stick to the lower-level approach that works from any plain HTML page or, of course, MVC, WebForms and WebPages. There's also a WebForms-specific control that makes this even easier, but I'll leave that for another post. So, to create a new standalone AJAX/REST service we can create a new HttpHandler in the project, either as a pure class-based handler or as a generic .ASHX handler. Both work equally well, but generic handlers don't require any web.config configuration, so I'll use that here. In the root of the project add a Generic Handler. I'm going to call this one SampleService.ashx. Once the handler has been created, edit the code and remove all of the handler body code. Then change the base class to CallbackHandler and add methods that have a [CallbackMethod] attribute.
    Here's what the modified handler implementation looks like with an added HelloWorld method:

        using System;
        using Westwind.Web;

        namespace WestWindWebAjax
        {
            /// <summary>
            /// Handler implements CallbackHandler to provide REST/AJAX services
            /// </summary>
            public class SampleService : CallbackHandler
            {
                [CallbackMethod]
                public string HelloWorld(string name)
                {
                    return "Hello " + name + ". Time is: " + DateTime.Now.ToString();
                }
            }
        }

    Notice that the class inherits from CallbackHandler and that the HelloWorld service method is marked up with [CallbackMethod]. We're done here.

    Service URL-based Syntax

    Once you compile, the 'service' is live and can respond to requests. All CallbackHandlers support input in GET and POST formats, and can return results as JSON or XML. To check our fancy HelloWorld method we can now access the service like this:

        http://localhost/WestWindWebAjax/SampleService.ashx?Method=HelloWorld&name=Rick

    which produces a default JSON response - in this case a string (wrapped in quotes, since it's JSON). Note that by default most browsers download JSON rather than display it; various options are available to view JSON right in the browser.

    If I want to return the same data as XML I can tack a &format=xml onto the end of the query string, which produces:

        <string>Hello Rick. Time is: 11/1/2011 12:11:13 PM</string>

    Cleaner URLs with Routing Syntax

    If you want cleaner URLs for each operation you can also configure custom routes on a per-URL basis, similar to the way WCF REST does. To do this you need to add a new RouteHandler to your application's startup code in global.asax.cs, one for each CallbackHandler based service you create:

        protected void Application_Start(object sender, EventArgs e)
        {
            CallbackHandlerRouteHandler.RegisterRoutes<SampleService>(RouteTable.Routes);
        }

    With this code in place you can now add RouteUrl properties to any of your service methods. For the HelloWorld method that doesn't make a ton of sense, but here is what a routed clean URL might look like in definition:

        [CallbackMethod(RouteUrl="stocks/HelloWorld/{name}")]
        public string HelloWorld(string name)
        {
            return "Hello " + name + ". Time is: " + DateTime.Now.ToString();
        }

    The same URL I previously used now becomes a bit shorter and more readable with:

        http://localhost/WestWindWebAjax/stocks/HelloWorld/Rick

    It's an easy way to create cleaner URLs and still get the same functionality.

    Calling the Service with $.getJSON()

    Since the result produced is JSON you can easily consume this data using jQuery's getJSON method. First we need a couple of scripts - jquery.js and ww.jquery.js - in the page:

        <!DOCTYPE html>
        <html>
        <head>
            <link href="Css/Westwind.css" rel="stylesheet" type="text/css" />
            <script src="scripts/jquery.min.js" type="text/javascript"></script>
            <script src="scripts/ww.jquery.min.js" type="text/javascript"></script>
        </head>
        <body>

    Next let's add a small HelloWorld example form (what else) that has a single textbox to type a name, a button and a div tag to receive the result:

        <fieldset>
            <legend>Hello World</legend>
            Please enter a name:
            <input type="text" name="txtHello" id="txtHello" value="" />
            <input type="button" id="btnSayHello" value="Say Hello (POST)" />
            <input type="button" id="btnSayHelloGet" value="Say Hello (GET)" />
            <div id="divHelloMessage" class="errordisplay" style="display:none;width: 450px;"></div>
        </fieldset>

    Then, to call the HelloWorld method, a little jQuery is used to hook the document startup and the button click, followed by the $.getJSON call to retrieve the data from the server.
        <script type="text/javascript">
            $(document).ready(function () {
                $("#btnSayHelloGet").click(function () {
                    $.getJSON("SampleService.ashx",
                        { Method: "HelloWorld", name: $("#txtHello").val() },
                        function (result) {
                            $("#divHelloMessage")
                                .text(result)
                                .fadeIn(1000);
                        });
                });
            });
        </script>

    $.getJSON() expects a full URL to the endpoint of our service, which is the ASHX file. We can either provide a full URL (SampleService.ashx?Method=HelloWorld&name=Rick) or we can just provide the base URL and an object that encodes the query string parameters for us, using an object map with a property matching each parameter of the server method. We could also use the clean URL routing syntax, but using the object parameter encoding is actually safer, as the parameters get properly encoded by jQuery.

    The result returned is whatever the result of the server method is - in this case a string. The string is applied to the divHelloMessage element and we're done. Obviously this is a trivial example, but it demonstrates the basics of getting a JSON response back to the browser.

    AJAX POST Syntax - using ajaxCallMethod()

    The previous example gives you basic control over the data that you send to the server via query string parameters. This works OK for simple values like short strings, numbers and boolean values, but doesn't really work if you need to pass something more complex like an object or an array back up to the server. To handle traditional RPC-style messaging, where the idea is to map server-side functions and results to a client-side invocation, POST operations can be used. The easiest way to use this functionality is to use ww.jquery.js and the ajaxCallMethod() function. ww.jquery wraps jQuery's AJAX functions and knows implicitly how to call a CallbackHandler method with parameters and parse the result.

    Let's look at another simple example that posts a simple value but returns something more interesting. Let's start with the service method:

        [CallbackMethod(RouteUrl="stocks/{symbol}")]
        public StockQuote GetStockQuote(string symbol)
        {
            Response.Cache.SetExpires(DateTime.UtcNow.Add(new TimeSpan(0, 2, 0)));

            StockServer server = new StockServer();
            var quote = server.GetStockQuote(symbol);
            if (quote == null)
                throw new ApplicationException("Invalid Symbol passed.");

            return quote;
        }

    This sample utilizes a small StockServer helper class (included in the sample) that downloads a stock quote from Yahoo's financial site via plain HTTP GET requests and formats it into a StockQuote object.
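    For reference, the JSON payload shown a little later in this post implies a quote object shaped roughly like the following. This is only a sketch inferred from that output - the actual StockQuote class ships in the sample download and may differ in member types and details:

        using System;

        namespace WestWindWebAjax
        {
            // Hypothetical sketch of the quote object, inferred from the JSON
            // output shown further below; not the actual class from the sample download.
            public class StockQuote
            {
                public string Symbol { get; set; }
                public string Company { get; set; }
                public decimal OpenPrice { get; set; }
                public decimal LastPrice { get; set; }
                public decimal NetChange { get; set; }
                public DateTime LastQuoteTime { get; set; }
                public string LastQuoteTimeString { get; set; }
            }
        }

    A plain object like this is all CallbackHandler needs in order to serialize the method result to JSON or XML for the client.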
    Let's create a small HTML block that lets us query for the quote and display it:

        <fieldset>
            <legend>Single Stock Quote</legend>
            Please enter a stock symbol:
            <input type="text" name="txtSymbol" id="txtSymbol" value="msft" />
            <input type="button" id="btnStockQuote" value="Get Quote" />
            <div id="divStockDisplay" class="errordisplay" style="display:none; width: 450px;">
                <div class="label-left">Company:</div>
                <div id="stockCompany"></div>
                <div class="label-left">Last Price:</div>
                <div id="stockLastPrice"></div>
                <div class="label-left">Quote Time:</div>
                <div id="stockQuoteTime"></div>
            </div>
        </fieldset>

    Let's hook up the button handler to fire the request and fill in the data as shown:

        $("#btnStockQuote").click(function () {
            ajaxCallMethod("SampleService.ashx", "GetStockQuote",
                [$("#txtSymbol").val()],
                function (quote) {
                    $("#divStockDisplay").show().fadeIn(1000);
                    $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")");
                    $("#stockLastPrice").text(quote.LastPrice);
                    $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, HH:mm EST"));
                }, onPageError);
        });

    So we point at SampleService.ashx and the GetStockQuote method, passing a single parameter of the input symbol value. Then there are two handlers for the success and failure callbacks. The success handler is the interesting part - it receives the stock quote as a result and assigns its values to various 'holes' in the stock display elements. The data that comes back over the wire is JSON and it looks like this:

        {
            "Symbol":"MSFT",
            "Company":"Microsoft Corpora",
            "OpenPrice":26.11,
            "LastPrice":26.01,
            "NetChange":0.02,
            "LastQuoteTime":"2011-11-03T02:00:00Z",
            "LastQuoteTimeString":"Nov. 11, 2011 4:20pm"
        }

    which is an object representation of the data. JavaScript can evaluate this JSON string back into an object easily, and that's the result that gets passed to the success function. The quote data is then applied to existing page content by manually selecting items and applying them. There are other ways to do this more elegantly, like using templates, but here we're only interested in seeing how the data is returned.

    The data in the object is typed - LastPrice is a number and LastQuoteTime is a date. A note about the date value: JavaScript doesn't have a date literal, although the ISO string format embedded in the JSON above ("2011-11-03T02:00:00Z") is becoming fairly standard for JSON serializers. However, JSON parsers don't deserialize dates by default and return them as strings. This is why the StockQuote also returns a string value of LastQuoteTimeString for the same date. ajaxCallMethod() always converts dates properly into 'real' dates, and the example above uses the real date value along with a .formatDate() date extension (also in ww.jquery.js) to display the raw date properly.

    Errors and Exceptions

    So what happens if your code fails? For example, if I pass an invalid stock symbol to the GetStockQuote() method, you'll notice that the code does this:

        if (quote == null)
            throw new ApplicationException("Invalid Symbol passed.");

    CallbackHandler automatically pushes the exception message back to the client, so it's easy to pick up the error message. Regardless of what kind of error occurs - server side, client side or protocol errors - any error fires the failure handler with an error object parameter. The error is returned to the client via a JSON response in the error callback.
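    To make that shape concrete, the error object described in the next paragraph carries roughly the fields sketched below. This is a hypothetical illustration only - the property names come from the description that follows, but the actual serialized type in the toolkit may be named, cased and structured differently:

        // Hypothetical sketch of the error result CallbackHandler serializes back to the client.
        // The JavaScript error callback sees these as isCallbackError, message and stackTrace.
        public class CallbackErrorResponseSketch
        {
            public bool IsCallbackError { get; set; }   // marks the JSON response as an error result
            public string Message { get; set; }         // e.g. "Invalid Symbol passed."
            public string StackTrace { get; set; }      // only populated when running in Debug mode
        }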
    In the previous examples I called onPageError, which is a generic routine in ww.jquery that displays a status message at the bottom of the screen. But of course you can also take over the error handling yourself:

        $("#btnStockQuote").click(function () {
            ajaxCallMethod("SampleService.ashx", "GetStockQuote",
                [$("#txtSymbol").val()],
                function (quote) {
                    $("#divStockDisplay").fadeIn(1000);
                    $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")");
                    $("#stockLastPrice").text(quote.LastPrice);
                    $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, hh:mmt"));
                },
                function (error, xhr) {
                    $("#divErrorDisplay").text(error.message).fadeIn(1000);
                });
        });

    The error object has isCallbackError, message and stackTrace properties, the latter of which is only populated when running in Debug mode, and this object is returned for all errors: client side, transport and server side errors. Regardless of which type of error you get, the same object is passed (along with the XHR instance, optionally), which makes for a consistent error retrieval mechanism.

    Specifying HTTP Verbs

    You can also specify which HTTP verbs are allowed using the AllowedHttpVerbs option on the CallbackMethod attribute:

        [CallbackMethod(AllowedHttpVerbs=HttpVerbs.GET | HttpVerbs.POST)]
        public string HelloWorld(string name)
        { … }

    If you're building REST-style APIs this might be useful to force certain request semantics onto the calling client. For the above, a call with a non-allowed HTTP verb returns a 405 error response along with a JSON (or XML) error object result. The default behavior is to allow all verbs access (HttpVerbs.All).

    Passing in Object Parameters

    Up to now the parameters I passed were very simple. But what if you need to send something more complex like an object or an array? Let's look at another example that passes an object from the client to the server. Keeping with the stock theme, let's add a method called BuyStock that lets us buy some shares of a stock. Consider the following service method that receives a StockBuyOrder object as a parameter:

        [CallbackMethod]
        public string BuyStock(StockBuyOrder buyOrder)
        {
            var server = new StockServer();
            var quote = server.GetStockQuote(buyOrder.Symbol);
            if (quote == null)
                throw new ApplicationException("Invalid or missing stock symbol.");

            return string.Format("You're buying {0} shares of {1} ({2}) stock at {3} for a total of {4} on {5}.",
                                 buyOrder.Quantity,
                                 quote.Company,
                                 quote.Symbol,
                                 quote.LastPrice.ToString("c"),
                                 (quote.LastPrice * buyOrder.Quantity).ToString("c"),
                                 buyOrder.BuyOn.ToString("MMM d"));
        }

        public class StockBuyOrder
        {
            public string Symbol { get; set; }
            public int Quantity { get; set; }
            public DateTime BuyOn { get; set; }

            public StockBuyOrder()
            {
                BuyOn = DateTime.Now;
            }
        }

    This is a contrived do-nothing example that simply echoes back what was passed in, but it demonstrates how you can pass complex data to a callback method.
    On the client side we now have a very simple form that captures the three values:

        <fieldset>
            <legend>Post a Stock Buy Order</legend>
            Enter a symbol:
            <input type="text" name="txtBuySymbol" id="txtBuySymbol" value="GLD" />&nbsp;&nbsp;
            Qty:
            <input type="text" name="txtBuyQty" id="txtBuyQty" value="10" style="width: 50px" />&nbsp;&nbsp;
            Buy on:
            <input type="text" name="txtBuyOn" id="txtBuyOn" value="<%= DateTime.Now.ToString("d") %>" style="width: 70px;" />
            <input type="button" id="btnBuyStock" value="Buy Stock" />
            <div id="divStockBuyMessage" class="errordisplay" style="display:none"></div>
        </fieldset>

    The client-side code that picks up the input values, assigns them to object properties and sends the AJAX request looks like this:

        $("#btnBuyStock").click(function () {
            // create an object map that matches the StockBuyOrder signature
            var buyOrder = {
                Symbol: $("#txtBuySymbol").val(),
                Quantity: $("#txtBuyQty").val() * 1,  // number
                BuyOn: new Date()
            }
            ajaxCallMethod("SampleService.ashx", "BuyStock",
                [buyOrder],
                function (result) {
                    $("#divStockBuyMessage").text(result).fadeIn(1000);
                }, onPageError);
        });

    The code creates an object and attaches the properties that match the server-side object passed to the BuyStock method. Each property that you want to update needs to be included and the type must match (i.e. string, number, date in this case). Any missing properties will not be set, but also will not cause any errors.

    Pass POST Data instead of Objects

    In the last example I collected a bunch of values from form variables and stuffed them into object properties in JavaScript code. While that works, often this isn't really helping - I end up converting my types on the client and then doing another conversion on the server. If lots of input controls are on a page and you just want to pick up the values on the server via plain POST variables, that can be done too - and it makes sense especially if you're creating and filling the client-side object only to push data to the server. Let's add another method to the server that once again lets us buy a stock, but this time doesn't accept a parameter and instead reads POST data. Here's the server method receiving POST data:

        [CallbackMethod]
        public string BuyStockPost()
        {
            StockBuyOrder buyOrder = new StockBuyOrder();
            buyOrder.Symbol = Request.Form["txtBuySymbol"];

            int qty;
            int.TryParse(Request.Form["txtBuyQuantity"], out qty);
            buyOrder.Quantity = qty;

            DateTime time;
            DateTime.TryParse(Request.Form["txtBuyBuyOn"], out time);
            buyOrder.BuyOn = time;

            // Or easier yet:
            //FormVariableBinder.Unbind(buyOrder,null,"txtBuy");

            var server = new StockServer();
            var quote = server.GetStockQuote(buyOrder.Symbol);
            if (quote == null)
                throw new ApplicationException("Invalid or missing stock symbol.");

            return string.Format("You're buying {0} shares of {1} ({2}) stock at {3} for a total of {4} on {5}.",
                                 buyOrder.Quantity,
                                 quote.Company,
                                 quote.Symbol,
                                 quote.LastPrice.ToString("c"),
                                 (quote.LastPrice * buyOrder.Quantity).ToString("c"),
                                 buyOrder.BuyOn.ToString("MMM d"));
        }

    Clearly this server method takes more code than it did with the object parameter - we've basically moved the parameter assignment logic from the client to the server. As a result, the client code to call this method is now a bit shorter, since there's no client-side shuffling of values from the controls into an object.
$("#btnBuyStockPost").click(function () { ajaxCallMethod("SampleService.ashx", "BuyStockPost", [], // Note: No parameters - function (result) { $("#divStockBuyMessage").text(result).fadeIn(1000); }, onPageError, // Force all page Form Variables to be posted { postbackMode: "Post" }); }); The client simply calls the BuyStockQuote method and pushes all the form variables from the page up to the server which parses them instead. The feature that makes this work is one of the options you can pass to the ajaxCallMethod() function: { postbackMode: "Post" }); which directs the function to include form variable POST data when making the service call. Other options include PostNoViewState (for WebForms to strip out WebForms crap vars), PostParametersOnly (default), None. If you pass parameters those are always posted to the server except when None is set. The above code can be simplified a bit by using the FormVariableBinder helper, which can unbind form variables directly into an object: FormVariableBinder.Unbind(buyOrder,null,"txtBuy"); which replaces the manual Request.Form[] reading code. It receives the object to unbind into, a string of properties to skip, and an optional prefix which is stripped off form variables to match property names. The component is similar to the MVC model binder but it's independent of MVC. Returning non-JSON Data CallbackHandler also supports returning non-JSON/XML data via special return types. You can return raw non-JSON encoded strings like this: [CallbackMethod(ReturnAsRawString=true,ContentType="text/plain")] public string HelloWorldNoJSON(string name) { return "Hello " + name + ". Time is: " + DateTime.Now.ToString(); } Calling this method results in just a plain string - no JSON encoding with quotes around the result. This can be useful if your server handling code needs to return a string or HTML result that doesn't fit well for a page or other UI component. Any string output can be returned. You can also return binary data. Stream, byte[] and Bitmap/Image results are automatically streamed back to the client. Notice that you should set the ContentType of the request either on the CallbackMethod attribute or using Response.ContentType. This ensures the Web Server knows how to display your binary response. Using a stream response makes it possible to return any of data. Streamed data can be pretty handy to return bitmap data from a method. The following is a method that returns a stock history graph for a particular stock over a provided number of years: [CallbackMethod(ContentType="image/png",RouteUrl="stocks/history/graph/{symbol}/{years}")] public Stream GetStockHistoryGraph(string symbol, int years = 2,int width = 500, int height=350) { if (width == 0) width = 500; if (height == 0) height = 350; StockServer server = new StockServer(); return server.GetStockHistoryGraph(symbol,"Stock History for " + symbol,width,height,years); } I can now hook this up into the JavaScript code when I get a stock quote. At the end of the process I can assign the URL to the service that returns the image into the src property and so force the image to display. 
    Here's the changed code:

        $("#btnStockQuote").click(function () {
            var symbol = $("#txtSymbol").val();
            ajaxCallMethod("SampleService.ashx", "GetStockQuote",
                [symbol],
                function (quote) {
                    $("#divStockDisplay").fadeIn(1000);
                    $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")");
                    $("#stockLastPrice").text(quote.LastPrice);
                    $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, hh:mmt"));

                    // display a stock chart
                    $("#imgStockHistory").attr("src", "stocks/history/graph/" + symbol + "/2");
                }, onPageError);
        });

    The page now displays a two-year history chart alongside the quote data. The charting code uses the ASP.NET 4.0 Chart components to render a bar chart of the two-year stock data as part of the StockServer class, which you can find in the sample download. The ability to return arbitrary data from a service is useful, as you can see - in this case the chart is clearly associated with the service, and it's nice that the graph generation can happen off a handler rather than through a page. Images are common resources, but output can also be PDF reports, zip files for download and so on - content that is increasingly returned from REST endpoints and other applications.

    Why reinvent?

    Obviously the examples I've shown here are pretty basic in terms of functionality, but I hope they demonstrate the core AJAX callback features you need in most applications: returning data, sending data back up and retrieving data in various formats. While there are other solutions for making AJAX callbacks and servicing REST-like requests, I like the flexibility my home-grown solution provides. Simply put, it's still the easiest solution I've found that addresses my common use cases:

    - AJAX JSON RPC-style callbacks
    - URL-based access
    - XML and JSON output from a single method endpoint
    - XML and JSON POST support, query string input, routing parameter mapping
    - UrlEncoded POST data support on callbacks
    - Ability to return stream/raw string data
    - Essentially the ability to return anything from a service and pass anything to it

    All these features are available in various solutions, but not together in one place. I've been using this code base for over four years now in a number of projects, both for myself and in commercial work, and it's served me extremely well. Besides the AJAX functionality CallbackHandler provides, it's also an easy way to create any kind of output endpoint I need. Need a few simple routines that spit back some data, but don't want to create a Page, View or full-blown handler for it? Create a CallbackHandler, add a method or two, and you have your generic endpoints. It's a quick and easy way to add small pieces of code that are pretty efficient, as they run through a fairly small handler implementation. I can have this up and running in a couple of minutes, literally without any setup, returning just about any kind of data.

    Resources

    - Download the Sample
    - NuGet: Westwind Web and AJAX Utilities (Westwind.Web)
    - ajaxCallMethod() Documentation
    - Using the AjaxMethodCallback WebForms Control
    - West Wind Web Toolkit Home Page
    - West Wind Web Toolkit Source Code

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in ASP.NET, jQuery, AJAX


< Previous Page | 283 284 285 286 287 288 289  | Next Page >