Search Results

Search found 6353 results on 255 pages for 'font face'.


  • JQGrid PDF Export

    - by thanigai
    Originally posted on: http://geekswithblogs.net/thanigai/archive/2013/06/17/jqgrdi-pdf-export.aspx

    JQGrid PDF Export. The aim of this article is to address PDF export from client-side grid frameworks. The solution is built with ASP.NET MVC 4 and Visual Studio 2012, and the article assumes the developer has a fair amount of knowledge of ASP.NET MVC and C#. Tools used: Visual Studio 2012, ASP.NET MVC 4, NuGet Package Manager. JQGrid is one of the client-side grid frameworks built on top of the jQuery framework. It helps in building a beautiful grid with paging, sorting and editing options. There are also other features available as extension plugins, and developers can write their own if needed. You can download JQGrid from the JQGrid homepage or as a NuGet package. It can be installed through the Package Manager Console: from the Tools menu select "Library Package Manager" and then select "Package Manager Console". The install command pulls down the latest JQGrid package and adds the scripts to the Scripts folder. Once the script is downloaded and referenced in the project, update the bundle configuration to add the script reference to the pages. BundleConfig can be found in the App_Start folder of the project:

    bundles.Add(new StyleBundle("~/Content/jqgrid").Include("~/Content/ui.jqgrid.css"));
    bundles.Add(new ScriptBundle("~/bundles/jquerygrid").Include("~/Scripts/jqGrid/jquery.jqGrid*"));

    Once the bundles are configured, reference them in Views/Shared/LayoutPage.cshtml. Add the following line to the head section of the page:

    @Styles.Render("~/Content/jqgrid")

    Add the following lines to the end of the page, before the closing html tags:

    @Scripts.Render("~/bundles/jquery")
    @Scripts.Render("~/bundles/jqueryui")
    @Scripts.Render("~/bundles/jquerygrid")

    That's all that needs to be done from the view perspective. Once these steps are complete, the developer can start coding against JQGrid. In this example we will modify the HomeController for the demo. The Index action will be the default action; we add a nullable bool argument to it, just to mark a PDF request. In Index.cshtml we add a table tag with the id "gridTable" and use this table for building the grid. Since JQGrid is a jQuery extension, we initialize the grid settings in the scripts section of the page. This section is placed at the end of the page, just below the bundle references for jQuery and jQuery UI, which is one of the performance recommendations from Yahoo's "why slow" (YSlow) guidance.

    <table id="gridTable" class="scroll"></table>
    <input type="button" value="Export PDF" onclick="exportPDF();" />

    @section scripts {
    <script type="text/javascript">
        $(document).ready(function () {
            $("#gridTable").jqGrid({
                datatype: "json",
                url: '@Url.Action("GetCustomerDetails")',
                mtype: 'GET',
                colNames: ["CustomerID", "CustomerName", "Location", "PrimaryBusiness"],
                colModel: [
                    { name: "CustomerID", width: 40, index: "CustomerID", align: "center" },
                    { name: "CustomerName", width: 40, index: "CustomerName", align: "center" },
                    { name: "Location", width: 40, index: "Location", align: "center" },
                    { name: "PrimaryBusiness", width: 40, index: "PrimaryBusiness", align: "center" }
                ],
                height: 250,
                autowidth: true,
                sortorder: "asc",
                rowNum: 10,
                rowList: [5, 10, 15, 20],
                sortname: "CustomerID",
                viewrecords: true
            });
        });

        function exportPDF() {
            document.location = '@Url.Action("Index")?pdf=true';
        }
    </script>
    }

    The exportPDF method just sets the document location to the Index action method with the pdf flag set to true, to mark the request as a PDF download. An in-memory list collection is used for demo purposes. GetCustomerDetails is the server-side action method that provides the data as a JSON list:

    [HttpGet]
    public JsonResult GetCustomerDetails()
    {
        var result = new
        {
            total = 1,
            page = 1,
            records = customerList.Count(),
            rows = (customerList.Select(e => new
            {
                id = e.CustomerID,
                cell = new string[]
                {
                    e.CustomerID.ToString(),
                    e.CustomerName,
                    e.Location,
                    e.PrimaryBusiness
                }
            })).ToArray()
        };
        return Json(result, JsonRequestBehavior.AllowGet);
    }

    JQGrid expects the response data from the server in a certain format, and the server method shown above takes care of formatting the response so that JQGrid understands the data properly. The response should contain the total pages, the current page, the full record count, and the rows of data, each with an id and the remaining columns as a string array. The response is built using an anonymous object and sent as an MVC JsonResult. Since we are serving an HTTP GET, the action is marked with the HttpGet attribute and JsonRequestBehavior is set to AllowGet. The in-memory list is initialized in the HomeController constructor:

    public class HomeController : Controller
    {
        private readonly IList<CustomerViewModel> customerList;

        public HomeController()
        {
            customerList = new List<CustomerViewModel>()
            {
                new CustomerViewModel { CustomerID = 100, CustomerName = "Sundar", Location = "Chennai", PrimaryBusiness = "Teaching" },
                new CustomerViewModel { CustomerID = 101, CustomerName = "Sudhagar", Location = "Chennai", PrimaryBusiness = "Software" },
                new CustomerViewModel { CustomerID = 102, CustomerName = "Thivagar", Location = "China", PrimaryBusiness = "SAP" },
            };
        }

        public ActionResult Index(bool? pdf)
        {
            if (!pdf.HasValue)
            {
                return View(customerList);
            }
            else
            {
                string filePath = Server.MapPath("Content") + "Sample.pdf";
                ExportPDF(customerList, new string[] { "CustomerID", "CustomerName", "Location", "PrimaryBusiness" }, filePath);
                return File(filePath, "application/pdf", "list.pdf");
            }
        }

    The Index action method has a nullable Boolean argument named "pdf", used to indicate a PDF download. When the application starts, this method is hit first for the initial page request. For the PDF operation a file name is generated and passed to the ExportPDF method, which takes care of generating the PDF from the data source. The ExportPDF method is listed below:

    private static void ExportPDF<TSource>(IList<TSource> customerList, string[] columns, string filePath)
    {
        Font headerFont = FontFactory.GetFont("Verdana", 10, Color.WHITE);
        Font rowfont = FontFactory.GetFont("Verdana", 10, Color.BLUE);
        Document document = new Document(PageSize.A4);
        PdfWriter writer = PdfWriter.GetInstance(document, new FileStream(filePath, FileMode.OpenOrCreate));
        document.Open();
        PdfPTable table = new PdfPTable(columns.Length);
        foreach (var column in columns)
        {
            PdfPCell cell = new PdfPCell(new Phrase(column, headerFont));
            cell.BackgroundColor = Color.BLACK;
            table.AddCell(cell);
        }
        foreach (var item in customerList)
        {
            foreach (var column in columns)
            {
                string value = item.GetType().GetProperty(column).GetValue(item).ToString();
                PdfPCell cell5 = new PdfPCell(new Phrase(value, rowfont));
                table.AddCell(cell5);
            }
        }
        document.Add(table);
        document.Close();
    }

    iTextSharp is one of the pioneers in PDF export. It's an open-source library readily available as a NuGet package.
    This command will pull down the latest available library. I am using version 4.1.2.0; the latest version may have changed. There are three main things in this library.

    Document: the Document class takes care of creating the document sheet with a particular size. We have used A4; there is also an option to define a rectangle size. This document instance is used by the other classes for reference.

    PdfWriter: PdfWriter takes the file name and the document as references. This class enables the Document class to generate the PDF content and save it to a file.

    Font: using the Font class the developer can control the font features. Since I need a nice-looking font I am using Verdana.

    Following this, PdfPTable and PdfPCell are used for generating the normal table layout. We have created two fonts, one for the header and one for the rows:

    Font headerFont = FontFactory.GetFont("Verdana", 10, Color.WHITE);
    Font rowfont = FontFactory.GetFont("Verdana", 10, Color.BLUE);

    We receive the header columns as a string array. The columns argument is looped over and the header is generated using headerFont:

    PdfWriter writer = PdfWriter.GetInstance(document, new FileStream(filePath, FileMode.OpenOrCreate));
    document.Open();
    PdfPTable table = new PdfPTable(columns.Length);
    foreach (var column in columns)
    {
        PdfPCell cell = new PdfPCell(new Phrase(column, headerFont));
        cell.BackgroundColor = Color.BLACK;
        table.AddCell(cell);
    }

    Then reflection is used to generate the row-wise details and fill the grid:

    foreach (var item in customerList)
    {
        foreach (var column in columns)
        {
            string value = item.GetType().GetProperty(column).GetValue(item).ToString();
            PdfPCell cell5 = new PdfPCell(new Phrase(value, rowfont));
            table.AddCell(cell5);
        }
    }
    document.Add(table);
    document.Close();

    Once the process is done, the PDF table is added to the document and the document is closed to write all the changes to the given file path. Control then moves back to the controller, which takes care of sending the response as a file result with a file name. If the file name is not given, the PDF opens in the same page; otherwise a popup opens asking whether to save or open the file.

    return File(filePath, "application/pdf", "list.pdf");

    The final result screen and the generated PDF output are shown as screenshots in the original post. Conclusion: this is how PDF export is done for JQGrid. The problem addressed here is that client-side grid frameworks don't support PDF export natively; in that case it's better to have fine-grained control over the data and the generated PDF. iTextSharp has helped us achieve our goal.
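    The article references a CustomerViewModel class but never lists it. Here is a minimal sketch of what that view model could look like, inferred from the properties used in the grid and in the PDF export; the exact class shape is an assumption, not taken from the original post:

    public class CustomerViewModel
    {
        // Properties inferred from the jqGrid colModel and the ExportPDF column list.
        public int CustomerID { get; set; }
        public string CustomerName { get; set; }
        public string Location { get; set; }
        public string PrimaryBusiness { get; set; }
    }

    Because ExportPDF reads these values through reflection (GetType().GetProperty(column)), the property names must match the column names passed into the method exactly.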

    Read the article

  • Facebook doesn't work on computer, but works on mobile device, both using the same router

    - by sasa
    I have a very strange problem and I'm thinking it could be a problem with DNS or something similar, but I'm not sure and don't know how to solve it. My computer is connected to a router and every site works fine except Facebook (Chrome and Firefox). Chrome shows "Error 101 (net::ERR_CONNECTION_RESET): The connection was reset." But on a mobile device which is connected to the same router, Facebook works fine (the Fb application and the Dolphin browser). Pinging facebook works fine. Clearing cookies and cache didn't help. I also performed antivirus and antimalware scans and there is nothing. What can the problem be? Update: I also connected a notebook to that wifi router, and on it Facebook works fine.

    nslookup facebook.com
    Server: UnKnown
    Address: 192.168.1.1
    Non-authoritative answer:
    Name: facebook.com
    Addresses: 2a03:2880:2110:3f01:face:b00c::
               2a03:2880:10:1f02:face:b00c:0:25
               2a03:2880:10:8f01:face:b00c:0:25
               69.171.224.37
               69.171.229.11
               69.171.242.11
               66.220.149.11
               66.220.158.11

    Read the article

  • OpenGL polygons stuck at top left?

    - by user146780
    I'm using the Allegro library and, through it, OpenGL. I enable depth testing, then upon resize I do:

    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(0, event.display.width, event.display.height, 0, 1, 300);

    Then my drawing looks like:

    float BOX_SIZE = 200.0f;
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glRotatef(0.5, 0, 0.0f, 1.0f);
    glBegin(GL_QUADS);
    //Top face
    glColor3f(1.0f, 1.0f, 0.0f);
    glNormal3f(0.0, 1.0f, 0.0f);
    glVertex3f(500 + -BOX_SIZE / 2, BOX_SIZE / 2, -BOX_SIZE / 2);
    glVertex3f(500 + -BOX_SIZE / 2, BOX_SIZE / 2, BOX_SIZE / 2);
    glVertex3f(500 + BOX_SIZE / 2, BOX_SIZE / 2, BOX_SIZE / 2);
    glVertex3f(500 + BOX_SIZE / 2, BOX_SIZE / 2, -BOX_SIZE / 2);
    //Bottom face
    glColor3f(1.0f, 0.0f, 1.0f);
    glNormal3f(0.0, -1.0f, 0.0f);
    glVertex3f(-BOX_SIZE / 2, -BOX_SIZE / 2, -BOX_SIZE / 2);
    glVertex3f(BOX_SIZE / 2, -BOX_SIZE / 2, -BOX_SIZE / 2);
    glVertex3f(BOX_SIZE / 2, -BOX_SIZE / 2, BOX_SIZE / 2);
    glVertex3f(-BOX_SIZE / 2, -BOX_SIZE / 2, BOX_SIZE / 2);
    //Left face
    glNormal3f(-1.0, 0.0f, 0.0f);
    glColor3f(0.0f, 1.0f, 1.0f);
    glVertex3f(-BOX_SIZE / 2, -BOX_SIZE / 2, -BOX_SIZE / 2);
    glVertex3f(-BOX_SIZE / 2, -BOX_SIZE / 2, BOX_SIZE / 2);
    glColor3f(0.0f, 0.0f, 1.0f);
    glVertex3f(-BOX_SIZE / 2, BOX_SIZE / 2, BOX_SIZE / 2);
    glVertex3f(-BOX_SIZE / 2, BOX_SIZE / 2, -BOX_SIZE / 2);
    //Right face
    glNormal3f(1.0, 0.0f, 0.0f);
    glColor3f(1.0f, 0.0f, 0.0f);
    glVertex3f(BOX_SIZE / 2, -BOX_SIZE / 2, -BOX_SIZE / 2);
    glVertex3f(BOX_SIZE / 2, BOX_SIZE / 2, -BOX_SIZE / 2);
    glColor3f(0.0f, 1.0f, 0.0f);
    glVertex3f(BOX_SIZE / 2, BOX_SIZE / 2, BOX_SIZE / 2);
    glVertex3f(BOX_SIZE / 2, -BOX_SIZE / 2, BOX_SIZE / 2);
    glEnd();
    glEnable(GL_TEXTURE_2D);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glColor3f(1.0f, 1.0f, 1.0f);
    glBegin(GL_QUADS);
    //Front face
    glNormal3f(0.0, 0.0f, 1.0f);
    glTexCoord2f(0.0f, 0.0f);
    glVertex3f(-BOX_SIZE / 2, -BOX_SIZE / 2, BOX_SIZE / 2);
    glTexCoord2f(1.0f, 0.0f);
    glVertex3f(BOX_SIZE / 2, -BOX_SIZE / 2, BOX_SIZE / 2);
    glTexCoord2f(1.0f, 1.0f);
    glVertex3f(BOX_SIZE / 2, BOX_SIZE / 2, BOX_SIZE / 2);
    glTexCoord2f(0.0f, 1.0f);
    glVertex3f(-BOX_SIZE / 2, BOX_SIZE / 2, BOX_SIZE / 2);
    //Back face
    glNormal3f(0.0, 0.0f, -1.0f);
    glTexCoord2f(0.0f, 0.0f);
    glVertex3f(-BOX_SIZE / 2, -BOX_SIZE / 2, -BOX_SIZE / 2);
    glTexCoord2f(1.0f, 0.0f);
    glVertex3f(-BOX_SIZE / 2, BOX_SIZE / 2, -BOX_SIZE / 2);
    glTexCoord2f(1.0f, 1.0f);
    glVertex3f(BOX_SIZE / 2, BOX_SIZE / 2, -BOX_SIZE / 2);
    glTexCoord2f(0.0f, 1.0f);
    glVertex3f(BOX_SIZE / 2, -BOX_SIZE / 2, -BOX_SIZE / 2);
    glEnd();
    glDisable(GL_TEXTURE_2D);
    //code goes here
    al_flip_display();

    For some reason, no matter how I translate the cube it seems to extend from 0,0 and there's always a vertex at 0,0. What have I done wrong? Thanks

    Read the article

  • Make Your Clock Creates a Custom Clock for your Android Homescreen

    - by ETC
    If you'd like to create a custom clock face for your Android homescreen, Make Your Clock makes it easy to create a clock face with customized colors, font, display style, and more. You can create a clock that looks like a digital watch face, an old-fashioned flip clock, a combination of digital output and date, and other variations. You can also adjust the size of the clock to anywhere between 1×1 and 4×2. Currently the app is limited to displaying the time and date; future releases are slated to include weather and lunar phases in addition to the time. Check out the video below to see the app in action: Make Your Clock [AppBrain via Yahoo!]

    Read the article

  • IIM Calcutta – EPBM 14 – Campus Visit – Day 1 – Registration & Beginning

    - by Ram Shankar Yadav
    Hey Guys! I'm back with the updates. It was an awesome Monday morning; for me it started when the sun came on my face, and the time was 5:30 AM~~ I was amazed that this part of the country gets the sunrise quite early, but I ignored the sunlight for a while by covering my face, but…finally the door knocked….~ It was Mukesh, and the time was 6 AM, so I thought let's get rid of laziness and start my day~ After having my brush and bath, I shaved and we headed for breakfast~ We quickly had our bread-butter-jam combo and left for the auditorium for registration~ We searched for our names, signed the registration paper and got a cool IIM C bag, with the following in it:
    - an IIMC notepad
    - a Cello X Caliber pen
    - a book "What the Best MBAs Know", and
    - reading material for the campus sessions
    Today we had lectures on "Evolution of Indian Corporate Sector" (2 sessions of 1.5 hrs each) and "Indian Economy: Crisis & Response" (2 sessions of 1.5 hrs). "Evolution of Indian Corporate Sector" was by Prof. Raghabendra Chattopadhyay and was one of the best lectures I've ever attended in my life. He started with a question, saying that "The Indian capitalists didn't want the economy to open up till the economic reforms occurred?" He is one of the best storytellers I've ever met; he started with ancient European and Indian history and linked trade & economics with it, simply amazing~ I can't believe I didn't get bored even after a 2-hour-long session…awesome~~ Afterward we had our lunch break; we had our lunch in the "New Hostel" building and got back for the "Indian Economy" sessions. The Indian Economy session was taken by Sudip Chaudhuri, for us a well-known face as we have already attended his sessions on Macroeconomics~ It was an interactive, easy-going session with plenty of laughs, and we discussed some serious issues as well. After the class got over we went out and got a few T-shirts and mugs for ourselves, and yep, not to forget, it "rained" in Kolkata today~~ We got back, had our dinner and finally dispersed… I loved this amazing Monday, and hope the spirit continues till Saturday~ I'm feeling the enrichment in my thoughts and perceptions~ I'm lovin' it~~ ram :)

    Read the article

  • How to create per-vertex normals when reusing vertex data?

    - by Chris Smith
    I am displaying a cube using a vertex buffer object (gl.ELEMENT_ARRAY_BUFFER). This allows me to specify vertex indices rather than having duplicate vertices. In the case of displaying a simple cube, this means I only need eight vertices total, as opposed to needing three vertices per triangle, times two triangles per face, times six faces. Sound correct so far? My question is, how do I now deal with vertex attribute data such as color, texture coordinates, and normals when reusing vertices with the vertex buffer object? If I am reusing the same vertex data in my indexed vertex buffer, how can I differentiate when vertex X is used as part of the cube's front face versus the cube's left face? In both cases I would like the surface normal and texture coordinates to be different. I understand I could average the surface normals, however I would like to render a cube, and that still doesn't work for texture coordinates. Is there a way to save memory using a vertex buffer object while being able to provide different vertex attribute data based on context? (Per-triangle would be ideal.) Or should I just duplicate each vertex for each context in which it gets rendered, so there is a one-to-one mapping between vertex, normal, color, etc.? Note: I'm using OpenGL ES.

    Read the article

  • Models with more than one mesh in JMonkeyEngine

    - by Andrea Tucci
    I'm a new jMonkeyEngine developer and I'm beginning to import models. I tried to import simple models and no problems appeared, but when I export some obj models having more than one mesh in the OgreXML format, Blender saves multiple meshes with their own materials (e.g. one mesh for the face, another for the body, etc.). Can I export all the meshes as one? I've tried to join all the meshes into a major one with Blender (face joined to body), but when I export the model and then create the Spatial in jME (loading the path of the "merged" mesh), all the meshes that are joined to the major one don't have their materials! To give a clearer example: I have an .obj model with 3 meshes and I export it. I get mesh1.mesh.xml, mesh2.mesh.xml, mesh3.mesh.xml and their materials mesh1.material, mesh2.material, mesh3.material. So I import the folder into Assets/Models/Test and now I have to create something like:

    Spatial head = assetManager.loadModel( [path] );
    Spatial face = assetManager.loadModel( [path] );

    one for each mesh, and then attach them to a common node. I think there is a way to merge those meshes while maintaining their materials! What do you think? Thanks

    Read the article

  • Jittery Movement, Uncontrollably Rotating + Front of Sprite?

    - by Vipar
    So I've been looking around to try and figure out how to make my sprite face my mouse. So far the sprite moves to where my mouse is by some vector math. Now I'd like it to rotate and face the mouse as it moves. From what I've found, this calculation seems to be what keeps reappearing:

    Sprite Rotation = Atan2(Direction Vector's Y Position, Direction Vector's X Position)

    I express it like so:

    sp.Rotation = (float)Math.Atan2(directionV.Y, directionV.X);

    If I just go with the above, the sprite seems to jitter left and right ever so slightly but never rotates out of that position. Seeing as Atan2 returns the rotation in radians, I found another piece of calculation to add to the above which turns it into degrees:

    sp.Rotation = (float)Math.Atan2(directionV.Y, directionV.X) * 180 / PI;

    Now the sprite rotates. The problem is that it spins uncontrollably the closer it comes to the mouse. One of the problems with the above calculation is that it assumes that +y goes up rather than down on the screen. As I recorded in two videos, the first part shows the slightly jittery movement (a lot more visible when not recording) and the second the added rotation. So my questions are: How do I fix that weird jittery movement when the sprite stands still? Some have suggested making some kind of "snap" where I set the position of the sprite directly to the mouse position when it's really close, but no matter what I do the snapping is noticeable. How do I make the sprite stop spinning uncontrollably? Is it possible to simply define the front of the sprite and use that to make it "face" the right way?
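    For reference, here is a minimal sketch of the usual facing computation in an XNA/MonoGame-style C# setup. The SpriteObject type, its field names, and the dead-zone threshold are assumptions for illustration, not taken from the question. In that API, sprite rotation is expressed in radians and screen-space Y grows downward, so the Atan2 result is used directly without any degree conversion:

    using System;
    using Microsoft.Xna.Framework;

    public class SpriteObject
    {
        public Vector2 Position;   // current sprite position in screen space
        public float Rotation;     // rotation in radians, as SpriteBatch expects

        public void FaceTarget(Vector2 target)
        {
            Vector2 direction = target - Position;

            // Ignore tiny direction vectors: when the sprite is almost on top of the
            // target, the angle becomes unstable and causes jitter and spinning.
            if (direction.LengthSquared() < 4f)
                return;

            // Screen space has +Y pointing down, so Atan2(Y, X) already gives the
            // on-screen facing angle; no conversion to degrees is needed.
            Rotation = (float)Math.Atan2(direction.Y, direction.X);
        }
    }

    If the sprite artwork points up rather than to the right, a fixed offset (for example MathHelper.PiOver2) can be added to the computed angle to define its "front".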

    Read the article

  • How to set up dual Quadro cards on RHEL 5.5?

    - by Alex J. Roberts
    I have a RHEL 5 workstation with 2 nvidia Quadro FX4500 cards, with one display attached to each card. After doing a clean install of RHEL 5.5, the second display doesn't work (it worked OK in RHEL 5.2). Neither separate X screens nor Xinerama is working. The kernel version is 2.6.18-194.el5. I've tried nvidia drivers 185.18.36 (the ones I was using on 5.2) and the latest 260.19.36, and neither works. My xorg.conf is as follows:

    # nvidia-settings: X configuration file generated by nvidia-settings
    # nvidia-settings: version 1.0 (buildmeister@builder58) Fri Aug 14 18:34:43 PDT 2009

    Section "ServerLayout"
        Identifier "Layout0"
        Screen 0 "Screen0" 0 0
        Screen 1 "Screen1" RightOf "Screen0"
        InputDevice "Keyboard0" "CoreKeyboard"
        InputDevice "Mouse0" "CorePointer"
    EndSection

    Section "Files"
        FontPath "unix/:7100"
    EndSection

    Section "ServerFlags"
        Option "Xinerama" "1"
    EndSection

    Section "InputDevice"
        # generated from default
        Identifier "Mouse0"
        Driver "mouse"
        Option "Protocol" "auto"
        Option "Device" "/dev/input/mice"
        Option "Emulate3Buttons" "no"
        Option "ZAxisMapping" "4 5"
    EndSection

    Section "InputDevice"
        # generated from data in "/etc/sysconfig/keyboard"
        Identifier "Keyboard0"
        Driver "kbd"
        Option "XkbLayout" "us"
        Option "XkbModel" "pc105"
    EndSection

    Section "Monitor"
        # HorizSync source: edid, VertRefresh source: edid
        Identifier "Monitor0"
        VendorName "Unknown"
        ModelName "DELL 3007WFP"
        HorizSync 49.3 - 98.5
        VertRefresh 60.0
        Option "DPMS"
    EndSection

    Section "Monitor"
        # HorizSync source: edid, VertRefresh source: edid
        Identifier "Monitor1"
        VendorName "Unknown"
        ModelName "DELL 3007WFP"
        HorizSync 49.3 - 98.5
        VertRefresh 60.0
        Option "DPMS"
    EndSection

    Section "Device"
        Identifier "Device0"
        Driver "nvidia"
        VendorName "NVIDIA Corporation"
        BoardName "Quadro FX 4500"
        BusID "PCI:10:0:0"
    EndSection

    Section "Device"
        Identifier "Device1"
        Driver "nvidia"
        VendorName "NVIDIA Corporation"
        BoardName "Quadro FX 4500"
        BusID "PCI:129:0:0"
    EndSection

    Section "Screen"
        Identifier "Screen0"
        Device "Device0"
        Monitor "Monitor0"
        DefaultDepth 24
        Option "TwinView" "0"
        Option "metamodes" "nvidia-auto-select +0+0"
        SubSection "Display"
            Depth 24
        EndSubSection
    EndSection

    Section "Screen"
        Identifier "Screen1"
        Device "Device1"
        Monitor "Monitor1"
        DefaultDepth 24
        Option "TwinView" "0"
        Option "metamodes" "nvidia-auto-select +0+0"
        SubSection "Display"
            Depth 24
        EndSubSection
    EndSection

    And the Xorg log: X Window System Version 7.1.1 Release Date: 12 May 2006 X Protocol Version 11, Revision 0, Release 7.1.1 Build Operating System: Linux 2.6.18-164.11.1.el5 x86_64 Red Hat, Inc. Current Operating System: Linux blur.svsdsde 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 Build Date: 06 March 2010 Build ID: xorg-x11-server 1.1.1-48.76.el5 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Module Loader present Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. 
(==) Log file: "/var/log/Xorg.0.log", Time: Fri Feb 18 09:52:08 2011 (==) Using config file: "/etc/X11/xorg.conf" (==) ServerLayout "Layout0" (**) |-->Screen "Screen0" (0) (**) | |-->Monitor "Monitor0" (**) | |-->Device "Device0" (**) |-->Screen "Screen1" (1) (**) | |-->Monitor "Monitor1" (**) | |-->Device "Device1" (**) |-->Input Device "Keyboard0" (**) |-->Input Device "Mouse0" (**) FontPath set to: unix/:7100 (==) RgbPath set to "/usr/share/X11/rgb" (==) ModulePath set to "/usr/lib64/xorg/modules" (**) Option "Xinerama" "1" (**) Xinerama: enabled (==) Max clients allowed: 512, resource mask: 0xfffff (II) Open ACPI successful (/var/run/acpid.socket) (II) Module ABI versions: X.Org ANSI C Emulation: 0.3 X.Org Video Driver: 1.0 X.Org XInput driver : 0.6 X.Org Server Extension : 0.3 X.Org Font Renderer : 0.5 (II) Loader running on linux (II) LoadModule: "bitmap" (II) Loading /usr/lib64/xorg/modules/fonts/libbitmap.so (II) Module bitmap: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 Module class: X.Org Font Renderer ABI class: X.Org Font Renderer, version 0.5 (II) Loading font Bitmap (II) LoadModule: "pcidata" (II) Loading /usr/lib64/xorg/modules/libpcidata.so (II) Module pcidata: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 ABI class: X.Org Video Driver, version 1.0 (++) using VT number 7 (II) PCI: PCI scan (all values are in hex) (II) PCI: 00:00:0: chip 10de,005e card 103c,1500 rev a3 class 05,80,00 hdr 00 (II) PCI: 00:01:0: chip 10de,0051 card 103c,1500 rev a3 class 06,01,00 hdr 80 (II) PCI: 00:01:1: chip 10de,0052 card 103c,1500 rev a2 class 0c,05,00 hdr 80 (II) PCI: 00:02:0: chip 10de,005a card 103c,1500 rev a2 class 0c,03,10 hdr 80 (II) PCI: 00:02:1: chip 10de,005b card 103c,1500 rev a3 class 0c,03,20 hdr 80 (II) PCI: 00:04:0: chip 10de,0059 card 103c,1500 rev a2 class 04,01,00 hdr 00 (II) PCI: 00:06:0: chip 10de,0053 card 103c,1500 rev f2 class 01,01,8a hdr 00 (II) PCI: 00:07:0: chip 10de,0054 card 103c,1500 rev f3 class 01,01,85 hdr 00 (II) PCI: 00:08:0: chip 10de,0055 card 103c,1500 rev f3 class 01,01,85 hdr 00 (II) PCI: 00:09:0: chip 10de,005c card 0000,0000 rev a2 class 06,04,01 hdr 01 (II) PCI: 00:0a:0: chip 10de,0057 card 103c,1500 rev a3 class 06,80,00 hdr 00 (II) PCI: 00:0e:0: chip 10de,005d card 0000,0000 rev a3 class 06,04,00 hdr 01 (II) PCI: 00:18:0: chip 1022,1100 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:18:1: chip 1022,1101 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:18:2: chip 1022,1102 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:18:3: chip 1022,1103 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:19:0: chip 1022,1100 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:19:1: chip 1022,1101 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:19:2: chip 1022,1102 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:19:3: chip 1022,1103 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 05:05:0: chip 104c,8023 card 103c,1500 rev 00 class 0c,00,10 hdr 00 (II) PCI: 0a:00:0: chip 10de,009d card 10de,02af rev a1 class 03,00,00 hdr 00 (II) PCI: End of PCI scan (II) PCI-to-ISA bridge: (II) Bus -1: bridge is at (0:1:0), (0,-1,-1), BCTRL: 0x0008 (VGA_EN is set) (II) Subtractive PCI-to-PCI bridge: (II) Bus 5: bridge is at (0:9:0), (0,5,5), BCTRL: 0x0206 (VGA_EN is cleared) (II) Bus 5 non-prefetchable memory range: [0] -1 0 0xf5000000 - 0xf50fffff (0x100000) MX[B] (II) PCI-to-PCI bridge: (II) Bus 10: bridge is at (0:14:0), (0,10,10), BCTRL: 0x000a (VGA_EN is set) (II) 
Bus 10 I/O range: [0] -1 0 0x00003000 - 0x00003fff (0x1000) IX[B] (II) Bus 10 non-prefetchable memory range: [0] -1 0 0xf3000000 - 0xf4ffffff (0x2000000) MX[B] (II) Bus 10 prefetchable memory range: [0] -1 0 0xc0000000 - 0xcfffffff (0x10000000) MX[B] (II) Host-to-PCI bridge: (II) Bus 0: bridge is at (0:24:0), (0,0,10), BCTRL: 0x0008 (VGA_EN is set) (II) Bus 0 I/O range: [0] -1 0 0x00000000 - 0x0000ffff (0x10000) IX[B] (II) Bus 0 non-prefetchable memory range: [0] -1 0 0x00000000 - 0xffffffff (0x100000000) MX[B] (II) Bus 0 prefetchable memory range: [0] -1 0 0x00000000 - 0xffffffff (0x100000000) MX[B] (--) PCI:*(10:0:0) nVidia Corporation Quadro FX 4500 rev 161, Mem @ 0xf3000000/24, 0xc0000000/28, 0xf4000000/24, I/O @ 0x3000/7 (II) Addressable bus resource ranges are [0] -1 0 0x00000000 - 0xffffffff (0x100000000) MX[B] [1] -1 0 0x00000000 - 0x0000ffff (0x10000) IX[B] (II) OS-reported resource ranges: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [5] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] (II) Active PCI resource ranges: [0] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [1] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [...snipped... post too long] [28] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [29] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) (II) Active PCI resource ranges after removing overlaps: [0] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [1] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [...snipped... post too long] [28] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [29] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) (II) OS-reported resource ranges after removing overlaps with PCI: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [5] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] (II) All system resource ranges: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [5] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [6] -1 0 0xf5104000 - 0xf5104fff (0x1000) MX[B] [7] -1 0 0xf5103000 - 0xf5103fff (0x1000) MX[B] [8] -1 0 0xf5102000 - 0xf5102fff (0x1000) MX[B] [9] -1 0 0xf5101000 - 0xf5101fff (0x1000) MX[B] [10] -1 0 0xfebf0000 - 0xfebf00ff (0x100) MX[B] [11] -1 0 0xf5100000 - 0xf5100fff (0x1000) MX[B] [12] -1 0 0xf4000000 - 0xf4ffffff (0x1000000) MX[B](B) [13] -1 0 0xc0000000 - 0xcfffffff (0x10000000) MX[B](B) [14] -1 0 0xf3000000 - 0xf3ffffff (0x1000000) MX[B](B) [15] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [16] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] [17] -1 0 0x000048f0 - 0x000048f7 (0x8) IX[B] [18] -1 0 0x000048c0 - 0x000048cf (0x10) IX[B] [19] -1 0 0x00004c04 - 0x00004c07 (0x4) IX[B] [20] -1 0 0x000048e8 - 0x000048ef (0x8) IX[B] [21] -1 0 0x00004c00 - 0x00004c03 (0x4) IX[B] [22] -1 0 0x000048e0 - 0x000048e7 (0x8) IX[B] [23] -1 0 0x000048b0 - 0x000048bf (0x10) IX[B] [24] -1 0 0x000048fc - 0x000048ff (0x4) IX[B] [25] -1 0 0x000048d8 - 0x000048df (0x8) IX[B] [26] -1 0 0x000048f8 - 0x000048fb (0x4) IX[B] [27] -1 0 0x000048d0 - 0x000048d7 (0x8) IX[B] [28] -1 0 0x000048a0 - 0x000048af (0x10) IX[B] [29] -1 0 
0x00004400 - 0x000044ff (0x100) IX[B] [30] -1 0 0x00004000 - 0x000040ff (0x100) IX[B] [31] -1 0 0x00004840 - 0x0000487f (0x40) IX[B] [32] -1 0 0x00004800 - 0x0000483f (0x40) IX[B] [33] -1 0 0x00004880 - 0x0000489f (0x20) IX[B] [34] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [35] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) (II) LoadModule: "extmod" (II) Loading /usr/lib64/xorg/modules/extensions/libextmod.so (II) Module extmod: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 0.3 (II) Loading extension SHAPE (II) Loading extension MIT-SUNDRY-NONSTANDARD (II) Loading extension BIG-REQUESTS (II) Loading extension SYNC (II) Loading extension MIT-SCREEN-SAVER (II) Loading extension XC-MISC (II) Loading extension XFree86-VidModeExtension (II) Loading extension XFree86-Misc (II) Loading extension XFree86-DGA (II) Loading extension DPMS (II) Loading extension TOG-CUP (II) Loading extension Extended-Visual-Information (II) Loading extension XVideo (II) Loading extension XVideo-MotionCompensation (II) Loading extension X-Resource (II) LoadModule: "dbe" (II) Loading /usr/lib64/xorg/modules/extensions/libdbe.so (II) Module dbe: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 0.3 (II) Loading extension DOUBLE-BUFFER (II) LoadModule: "glx" (II) Loading /usr/lib64/xorg/modules/extensions/libglx.so (II) Module glx: vendor="NVIDIA Corporation" compiled for 4.0.2, module version = 1.0.0 Module class: X.Org Server Extension (II) NVIDIA GLX Module 185.18.36 Fri Aug 14 18:27:24 PDT 2009 (II) Loading extension GLX (II) LoadModule: "freetype" (II) Loading /usr/lib64/xorg/modules/fonts/libfreetype.so (II) Module freetype: vendor="X.Org Foundation & the After X-TT Project" compiled for 7.1.1, module version = 2.1.0 Module class: X.Org Font Renderer ABI class: X.Org Font Renderer, version 0.5 (II) Loading font FreeType (II) LoadModule: "type1" (II) Loading /usr/lib64/xorg/modules/fonts/libtype1.so (II) Module type1: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.2 Module class: X.Org Font Renderer ABI class: X.Org Font Renderer, version 0.5 (II) Loading font Type1 (II) LoadModule: "record" (II) Loading /usr/lib64/xorg/modules/extensions/librecord.so (II) Module record: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.13.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 0.3 (II) Loading extension RECORD (II) LoadModule: "dri" (II) Loading /usr/lib64/xorg/modules/extensions/libdri.so (II) Module dri: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 ABI class: X.Org Server Extension, version 0.3 (II) Loading sub module "drm" (II) LoadModule: "drm" (II) Loading /usr/lib64/xorg/modules/linux/libdrm.so (II) Module drm: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 ABI class: X.Org Server Extension, version 0.3 (II) Loading extension XFree86-DRI (II) LoadModule: "nvidia" (II) Loading /usr/lib64/xorg/modules/drivers/nvidia_drv.so (II) Module nvidia: vendor="NVIDIA Corporation" compiled for 4.0.2, module version = 1.0.0 Module class: X.Org Video Driver (II) LoadModule: "kbd" (II) Loading /usr/lib64/xorg/modules/input/kbd_drv.so (II) Module kbd: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.1.0 Module class: X.Org XInput Driver ABI class: X.Org XInput driver, version 0.6 (II) LoadModule: "mouse" 
(II) Loading /usr/lib64/xorg/modules/input/mouse_drv.so (II) Module mouse: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.1.1 Module class: X.Org XInput Driver ABI class: X.Org XInput driver, version 0.6 (II) NVIDIA dlloader X Driver 185.18.36 Fri Aug 14 17:51:02 PDT 2009 (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs (II) Primary Device is: PCI 0a:00:0 (--) Chipset NVIDIA GPU found (II) Loading sub module "fb" (II) LoadModule: "fb" (II) Loading /usr/lib64/xorg/modules/libfb.so (II) Module fb: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 ABI class: X.Org ANSI C Emulation, version 0.3 (II) Loading sub module "wfb" (II) LoadModule: "wfb" (II) Loading /usr/lib64/xorg/modules/libwfb.so (II) Module wfb: vendor="NVIDIA Corporation" compiled for 7.1.99.2, module version = 1.0.0 (II) Loading sub module "ramdac" (II) LoadModule: "ramdac" (II) Loading /usr/lib64/xorg/modules/libramdac.so (II) Module ramdac: vendor="X.Org Foundation" compiled for 7.1.1, module version = 0.1.0 ABI class: X.Org Video Driver, version 1.0 (II) resource ranges after xf86ClaimFixedResources() call: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [5] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [6] -1 0 0xf5104000 - 0xf5104fff (0x1000) MX[B] [7] -1 0 0xf5103000 - 0xf5103fff (0x1000) MX[B] [8] -1 0 0xf5102000 - 0xf5102fff (0x1000) MX[B] [9] -1 0 0xf5101000 - 0xf5101fff (0x1000) MX[B] [10] -1 0 0xfebf0000 - 0xfebf00ff (0x100) MX[B] [11] -1 0 0xf5100000 - 0xf5100fff (0x1000) MX[B] [12] -1 0 0xf4000000 - 0xf4ffffff (0x1000000) MX[B](B) [13] -1 0 0xc0000000 - 0xcfffffff (0x10000000) MX[B](B) [14] -1 0 0xf3000000 - 0xf3ffffff (0x1000000) MX[B](B) [15] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [16] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] [17] -1 0 0x000048f0 - 0x000048f7 (0x8) IX[B] [18] -1 0 0x000048c0 - 0x000048cf (0x10) IX[B] [19] -1 0 0x00004c04 - 0x00004c07 (0x4) IX[B] [20] -1 0 0x000048e8 - 0x000048ef (0x8) IX[B] [21] -1 0 0x00004c00 - 0x00004c03 (0x4) IX[B] [22] -1 0 0x000048e0 - 0x000048e7 (0x8) IX[B] [23] -1 0 0x000048b0 - 0x000048bf (0x10) IX[B] [24] -1 0 0x000048fc - 0x000048ff (0x4) IX[B] [25] -1 0 0x000048d8 - 0x000048df (0x8) IX[B] [26] -1 0 0x000048f8 - 0x000048fb (0x4) IX[B] [27] -1 0 0x000048d0 - 0x000048d7 (0x8) IX[B] [28] -1 0 0x000048a0 - 0x000048af (0x10) IX[B] [29] -1 0 0x00004400 - 0x000044ff (0x100) IX[B] [30] -1 0 0x00004000 - 0x000040ff (0x100) IX[B] [31] -1 0 0x00004840 - 0x0000487f (0x40) IX[B] [32] -1 0 0x00004800 - 0x0000483f (0x40) IX[B] [33] -1 0 0x00004880 - 0x0000489f (0x20) IX[B] [34] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [35] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) (II) resource ranges after probing: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [5] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [6] -1 0 0xf5104000 - 0xf5104fff (0x1000) MX[B] [7] -1 0 0xf5103000 - 0xf5103fff (0x1000) MX[B] [8] -1 0 0xf5102000 - 0xf5102fff (0x1000) MX[B] [9] -1 0 0xf5101000 - 0xf5101fff (0x1000) MX[B] [10] -1 0 0xfebf0000 - 0xfebf00ff (0x100) MX[B] [11] -1 0 0xf5100000 - 0xf5100fff (0x1000) MX[B] [12] -1 0 0xf4000000 - 0xf4ffffff (0x1000000) MX[B](B) [13] 
-1 0 0xc0000000 - 0xcfffffff (0x10000000) MX[B](B) [14] -1 0 0xf3000000 - 0xf3ffffff (0x1000000) MX[B](B) [15] 0 0 0x000a0000 - 0x000affff (0x10000) MS[B] [16] 0 0 0x000b0000 - 0x000b7fff (0x8000) MS[B] [17] 0 0 0x000b8000 - 0x000bffff (0x8000) MS[B] [18] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [19] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] [20] -1 0 0x000048f0 - 0x000048f7 (0x8) IX[B] [21] -1 0 0x000048c0 - 0x000048cf (0x10) IX[B] [22] -1 0 0x00004c04 - 0x00004c07 (0x4) IX[B] [23] -1 0 0x000048e8 - 0x000048ef (0x8) IX[B] [24] -1 0 0x00004c00 - 0x00004c03 (0x4) IX[B] [25] -1 0 0x000048e0 - 0x000048e7 (0x8) IX[B] [26] -1 0 0x000048b0 - 0x000048bf (0x10) IX[B] [27] -1 0 0x000048fc - 0x000048ff (0x4) IX[B] [28] -1 0 0x000048d8 - 0x000048df (0x8) IX[B] [29] -1 0 0x000048f8 - 0x000048fb (0x4) IX[B] [30] -1 0 0x000048d0 - 0x000048d7 (0x8) IX[B] [31] -1 0 0x000048a0 - 0x000048af (0x10) IX[B] [32] -1 0 0x00004400 - 0x000044ff (0x100) IX[B] [33] -1 0 0x00004000 - 0x000040ff (0x100) IX[B] [34] -1 0 0x00004840 - 0x0000487f (0x40) IX[B] [35] -1 0 0x00004800 - 0x0000483f (0x40) IX[B] [36] -1 0 0x00004880 - 0x0000489f (0x20) IX[B] [37] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [38] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) [39] 0 0 0x000003b0 - 0x000003bb (0xc) IS[B] [40] 0 0 0x000003c0 - 0x000003df (0x20) IS[B] (II) Setting vga for screen 0. (**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32 (==) NVIDIA(0): RGB weight 888 (==) NVIDIA(0): Default visual is TrueColor (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0) (**) NVIDIA(0): Option "TwinView" "0" (**) NVIDIA(0): Option "MetaModes" "nvidia-auto-select +0+0" (**) NVIDIA(0): Enabling RENDER acceleration (II) NVIDIA(0): Support for GLX with the Damage and Composite X extensions is (II) NVIDIA(0): enabled. (II) NVIDIA(0): NVIDIA GPU Quadro FX 4500 (G70GL) at PCI:10:0:0 (GPU-0) (--) NVIDIA(0): Memory: 524288 kBytes (--) NVIDIA(0): VideoBIOS: 05.70.02.41.01 (II) NVIDIA(0): Detected PCI Express Link width: 16X (--) NVIDIA(0): Interlaced video modes are supported on this GPU (--) NVIDIA(0): Connected display device(s) on Quadro FX 4500 at PCI:10:0:0: (--) NVIDIA(0): DELL 3007WFP (DFP-0) (--) NVIDIA(0): DELL 3007WFP (DFP-0): 310.0 MHz maximum pixel clock (--) NVIDIA(0): DELL 3007WFP (DFP-0): Internal Dual Link TMDS (II) NVIDIA(0): Assigned Display Device: DFP-0 (II) NVIDIA(0): Validated modes: (II) NVIDIA(0): "nvidia-auto-select+0+0" (II) NVIDIA(0): Virtual screen size determined to be 2560 x 1600 (--) NVIDIA(0): DPI set to (101, 101); computed from "UseEdidDpi" X config (--) NVIDIA(0): option (WW) NVIDIA(0): UBB is incompatible with the Composite extension. Disabling (WW) NVIDIA(0): UBB. (==) NVIDIA(0): Disabling 32-bit ARGB GLX visuals. (--) Depth 24 pixmap format is 32 bpp (II) do I need RAC? No, I don't. 
(II) resource ranges after preInit: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [5] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [6] -1 0 0xf5104000 - 0xf5104fff (0x1000) MX[B] [7] -1 0 0xf5103000 - 0xf5103fff (0x1000) MX[B] [8] -1 0 0xf5102000 - 0xf5102fff (0x1000) MX[B] [9] -1 0 0xf5101000 - 0xf5101fff (0x1000) MX[B] [10] -1 0 0xfebf0000 - 0xfebf00ff (0x100) MX[B] [11] -1 0 0xf5100000 - 0xf5100fff (0x1000) MX[B] [12] -1 0 0xf4000000 - 0xf4ffffff (0x1000000) MX[B](B) [13] -1 0 0xc0000000 - 0xcfffffff (0x10000000) MX[B](B) [14] -1 0 0xf3000000 - 0xf3ffffff (0x1000000) MX[B](B) [15] 0 0 0x000a0000 - 0x000affff (0x10000) MS[B] [16] 0 0 0x000b0000 - 0x000b7fff (0x8000) MS[B] [17] 0 0 0x000b8000 - 0x000bffff (0x8000) MS[B] [18] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [19] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] [20] -1 0 0x000048f0 - 0x000048f7 (0x8) IX[B] [21] -1 0 0x000048c0 - 0x000048cf (0x10) IX[B] [22] -1 0 0x00004c04 - 0x00004c07 (0x4) IX[B] [23] -1 0 0x000048e8 - 0x000048ef (0x8) IX[B] [24] -1 0 0x00004c00 - 0x00004c03 (0x4) IX[B] [25] -1 0 0x000048e0 - 0x000048e7 (0x8) IX[B] [26] -1 0 0x000048b0 - 0x000048bf (0x10) IX[B] [27] -1 0 0x000048fc - 0x000048ff (0x4) IX[B] [28] -1 0 0x000048d8 - 0x000048df (0x8) IX[B] [29] -1 0 0x000048f8 - 0x000048fb (0x4) IX[B] [30] -1 0 0x000048d0 - 0x000048d7 (0x8) IX[B] [31] -1 0 0x000048a0 - 0x000048af (0x10) IX[B] [32] -1 0 0x00004400 - 0x000044ff (0x100) IX[B] [33] -1 0 0x00004000 - 0x000040ff (0x100) IX[B] [34] -1 0 0x00004840 - 0x0000487f (0x40) IX[B] [35] -1 0 0x00004800 - 0x0000483f (0x40) IX[B] [36] -1 0 0x00004880 - 0x0000489f (0x20) IX[B] [37] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [38] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) [39] 0 0 0x000003b0 - 0x000003bb (0xc) IS[B] [40] 0 0 0x000003c0 - 0x000003df (0x20) IS[B] (II) NVIDIA(GPU-1): NVIDIA GPU Quadro FX 4500 (G70GL) at PCI:129:0:0 (GPU-1) (--) NVIDIA(GPU-1): Memory: 524288 kBytes (--) NVIDIA(GPU-1): VideoBIOS: 05.70.02.41.01 (II) NVIDIA(GPU-1): Detected PCI Express Link width: 16X (--) NVIDIA(GPU-1): Interlaced video modes are supported on this GPU (--) NVIDIA(GPU-1): Connected display device(s) on Quadro FX 4500 at PCI:129:0:0: (--) NVIDIA(GPU-1): DELL 3007WFP (DFP-0) (--) NVIDIA(GPU-1): DELL 3007WFP (DFP-0): 310.0 MHz maximum pixel clock (--) NVIDIA(GPU-1): DELL 3007WFP (DFP-0): Internal Dual Link TMDS (II) NVIDIA(0): Initialized GPU GART. (II) NVIDIA(0): Setting mode "nvidia-auto-select+0+0" (II) Loading extension NV-GLX (II) NVIDIA(0): NVIDIA 3D Acceleration Architecture Initialized (==) NVIDIA(0): Disabling shared memory pixmaps (II) NVIDIA(0): Using the NVIDIA 2D acceleration architecture (==) NVIDIA(0): Backing store disabled (==) NVIDIA(0): Silken mouse enabled (**) Option "dpms" (**) NVIDIA(0): DPMS enabled (II) Loading extension NV-CONTROL (==) RandR enabled (II) Setting vga for screen 0. 
(II) Initializing built-in extension MIT-SHM (II) Initializing built-in extension XInputExtension (II) Initializing built-in extension XTEST (II) Initializing built-in extension XKEYBOARD (II) Initializing built-in extension XC-APPGROUP (II) Initializing built-in extension SECURITY (II) Initializing built-in extension XINERAMA (II) Initializing built-in extension XFIXES (II) Initializing built-in extension XFree86-Bigfont (II) Initializing built-in extension RENDER (II) Initializing built-in extension RANDR (II) Initializing built-in extension COMPOSITE (II) Initializing built-in extension DAMAGE (II) Initializing built-in extension XEVIE (II) Initializing extension GLX (WW) Disabling Composite since Xinerama is enabled (**) Option "CoreKeyboard" (**) Keyboard0: Core Keyboard (**) Option "Protocol" "standard" (**) Keyboard0: Protocol: standard (**) Option "AutoRepeat" "500 30" (**) Option "XkbRules" "xorg" (**) Keyboard0: XkbRules: "xorg" (**) Option "XkbModel" "pc105" (**) Keyboard0: XkbModel: "pc105" (**) Option "XkbLayout" "us" (**) Keyboard0: XkbLayout: "us" (**) Option "CustomKeycodes" "off" (**) Keyboard0: CustomKeycodes disabled (**) Option "Protocol" "auto" (**) Mouse0: Device: "/dev/input/mice" (**) Mouse0: Protocol: "auto" (**) Option "CorePointer" (**) Mouse0: Core Pointer (**) Option "Device" "/dev/input/mice" (**) Option "Emulate3Buttons" "no" (**) Option "ZAxisMapping" "4 5" (**) Mouse0: ZAxisMapping: buttons 4 and 5 (**) Mouse0: Buttons: 9 (II) XINPUT: Adding extended input device "Mouse0" (type: MOUSE) (II) XINPUT: Adding extended input device "Keyboard0" (type: KEYBOARD) (--) Mouse0: PnP-detected protocol: "ExplorerPS/2" (II) Mouse0: ps2EnableDataReporting: succeeded (II) Open ACPI successful (/var/run/acpid.socket) (II) NVIDIA(0): Setting mode "nvidia-auto-select+0+0" (II) Mouse0: ps2EnableDataReporting: succeeded (the snipped part can be changed if necessary) Any help at all would be appreciated. Cheers, Alex

    Read the article

  • Dependency Injection in ASP.NET Web API using Autofac

    - by shiju
    In this post, I will demonstrate how to use dependency injection in ASP.NET Web API using Autofac in an ASP.NET MVC 4 app. The new ASP.NET Web API is a great framework for building HTTP services, and the Autofac IoC container provides good integration with ASP.NET Web API for applying dependency injection. The NuGet package Autofac.WebApi provides the dependency injection support for ASP.NET Web API services.

    Using Autofac in ASP.NET Web API
    The following command in the Package Manager Console will install the Autofac.WebApi package into your ASP.NET Web API application:

    PM> Install-Package Autofac.WebApi

    The following code block imports the necessary namespaces for using Autofac.WebApi:

    using Autofac;
    using Autofac.Integration.WebApi;

    The following code in the Bootstrapper class configures Autofac:

    public static class Bootstrapper
    {
        public static void Run()
        {
            SetAutofacWebAPI();
        }

        private static void SetAutofacWebAPI()
        {
            var configuration = GlobalConfiguration.Configuration;
            var builder = new ContainerBuilder();
            // Configure the container
            builder.ConfigureWebApi(configuration);
            // Register API controllers using assembly scanning.
            builder.RegisterApiControllers(Assembly.GetExecutingAssembly());
            builder.RegisterType<DefaultCommandBus>().As<ICommandBus>()
                .InstancePerApiRequest();
            builder.RegisterType<UnitOfWork>().As<IUnitOfWork>()
                .InstancePerApiRequest();
            builder.RegisterType<DatabaseFactory>().As<IDatabaseFactory>()
                .InstancePerApiRequest();
            builder.RegisterAssemblyTypes(typeof(CategoryRepository)
                .Assembly).Where(t => t.Name.EndsWith("Repository"))
                .AsImplementedInterfaces().InstancePerApiRequest();
            var services = Assembly.Load("EFMVC.Domain");
            builder.RegisterAssemblyTypes(services)
                .AsClosedTypesOf(typeof(ICommandHandler<>))
                .InstancePerApiRequest();
            builder.RegisterAssemblyTypes(services)
                .AsClosedTypesOf(typeof(IValidationHandler<>))
                .InstancePerApiRequest();
            var container = builder.Build();
            // Set the WebApi dependency resolver.
            var resolver = new AutofacWebApiDependencyResolver(container);
            configuration.ServiceResolver.SetResolver(resolver);
        }
    }

    The RegisterApiControllers method scans the given assembly and registers all ApiController classes; it looks for types that derive from IHttpController with names ending in "Controller". The InstancePerApiRequest method sets the lifetime of the component to once per API controller invocation. GlobalConfiguration.Configuration provides a ServiceResolver which can be used to set the dependency resolver for ASP.NET Web API. In our example, we are using the AutofacWebApiDependencyResolver class provided by Autofac.WebApi to set the dependency resolver. The Run method of the Bootstrapper class is called from the Application_Start method of Global.asax.cs:

    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        RegisterGlobalFilters(GlobalFilters.Filters);
        RegisterRoutes(RouteTable.Routes);
        BundleTable.Bundles.RegisterTemplateBundles();
        // Call Autofac DI configurations
        Bootstrapper.Run();
    }

    Autofac.Mvc4
    The Autofac framework's integration with ASP.NET MVC has been updated for ASP.NET MVC 4. The NuGet package Autofac.Mvc4 provides the dependency injection support for ASP.NET MVC 4. There is no syntax change between Autofac.Mvc3 and Autofac.Mvc4.

    Source Code
    I have updated my EFMVC app with Autofac.WebApi, applying dependency injection to its ASP.NET Web API services. The EFMVC app has also been updated to Autofac.Mvc4 for its ASP.NET MVC 4 web app. The above code samples are taken from the EFMVC app. You can download the source code of the EFMVC app from http://efmvc.codeplex.com/
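    The post wires up the container but does not show a consuming controller. Below is a minimal sketch of an ApiController that Autofac would resolve through constructor injection under this setup; the CustomersController name, the CreateCustomerCommand type, and the Submit call are placeholders based on the command-bus pattern, not necessarily EFMVC's exact API:

    using System.Net;
    using System.Net.Http;
    using System.Web.Http;

    public class CustomersController : ApiController
    {
        private readonly ICommandBus commandBus;

        // RegisterApiControllers picks this controller up by naming convention, and
        // the Autofac dependency resolver supplies ICommandBus once per API request.
        public CustomersController(ICommandBus commandBus)
        {
            this.commandBus = commandBus;
        }

        public HttpResponseMessage Post(CreateCustomerCommand command)
        {
            // Hypothetical command type and bus method: the point is only that the
            // injected ICommandBus is available without any manual service location.
            var result = commandBus.Submit(command);
            return Request.CreateResponse(HttpStatusCode.Created, result);
        }
    }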

    Read the article

  • Display System Information on Your Desktop with Desktop Info

    - by Asian Angel
    Do you like to monitor your system but do not want a complicated app to do it with? If you love simplicity and easy configuration then join us as we look at Desktop Info. Desktop Info in Action: Desktop Info comes in a zip file format, so you will need to unzip the app, place it into an appropriate "Program Files" folder, and create a shortcut. Do NOT delete the "Read Me" file…it will be extremely useful to you when you make changes to the configuration file. Once you have everything set up you are ready to start Desktop Info. The first screenshot shows the default layout and set of listings displayed when you start Desktop Info for the first time. The font colors will be a mix of colors as seen there and the font size will perhaps be a bit small, but those are very easy to change if desired. You can access the context menu directly over the information area…so no need to look for it in the system tray. Notice that you can easily access that important "Read Me" file from here… The full contents of the configuration file (.ini file) show exactly what kind of information can be displayed using the default listings. The first section is "Options"…you will most likely want to increase the font size while you are here. Then "Items"…if you are unhappy with any of the font colors in the information area, this is where you can make the changes, and you can turn information display items on or off here. And finally "Files, Registry, & Event Logs". The last screenshot shows our displayed information after a few tweaks in the configuration file. Very nice. Conclusion: if you have been looking for a system information app that is simple and easy to set up then you should definitely give Desktop Info a try. Links: Download Desktop Info

    Read the article

  • How Mary Meeker’s Latest Findings May Make You Re-Imagine Commerce

    - by Brenna Johnson-Oracle
    Today, Mary Meeker released her highly anticipated annual "Internet Trends" presentation for 2014. All 164 slides are jam-packed with pretty much everything you need to know about the state of the Internet. And as luck would have it, Oracle is staying ahead of these trends (but we'll talk about that later). There were a few surprises, some stats to solidify what you likely already know, and Meeker's novel observations about where we are all going. What interested me the most is not only how people are engaging in their personal lives, but how they engage with brands. As you could probably predict, Internet usage growth is slowing while tablet user and mobile data traffic growth continue their meteoric rise around the globe, with tremendous growth in underpenetrated markets like China, India, Brazil and Indonesia. Now hold the "Internet is dead" comments. Keep in mind there's still plenty of room to grow, and a multiscreen model is Meeker's vision for our future. Despite 1.5x YOY growth for mobile traffic, mobile still only makes up about 23% of all traffic today. With tablet shipments easily outpacing figures for PCs even at their height (in 2007), mobile will only continue on its path, but won't be everything to everyone. Mobile won't replace every touchpoint; it has just created our shorter attention spans and demand for simpler, more personal experiences. As Meeker points out, TVs, tablets, PCs, and smartphones are used for different activities at present, but the lines will blur (for example, 84% of smartphone owners use their device while watching TV). Day-to-day activities are being re-imagined through simple, beautiful user experiences. It seems like every day I discover a new way a brand/site/app has made the most mundane or mounting task enjoyable and frictionless, and I'm not alone. Meeker points out that the evolution of how we do everything, from how we communicate, get information, use money, meet someone, get places, order a meal, and consume media, now runs through new user interfaces that make day-to-day tasks simpler. This movement has caused just about everyone's patience for a poor UX to take a nosedive. And it's not just the digital user experience; technology is making a lot of people's offline lives easier, and less expensive. Today 47% of online shopping utilizes free shipping, nearly half. And Meeker predicts same-day local delivery will be the "next big thing" (and you can take a guess at who will own that). Content, community and commerce create the "Internet Trifecta." Meeker pointed out that when content, communities and commerce occur in a single experience it's embraced by consumers, which translates to big dollars for brands. The magic happens when consumers can get inspired, research, and buy in a single experience.
    As the buying cycle has changed and touchpoints (Web, mobile, social, store) are no longer tied to "roles" or steps in the customer journey, brands must make all experiences (content and commerce) available in a single, adaptable experience. (We at Oracle Commerce have a lot to say on this topic – stay tuned!) And in what Meeker calls the "biggest re-imagination of all": consumers enabled with smartphones and sensors are creating troves of findable and sharable data, which she says is in the early stages but growing rapidly. She notes that the transparency and patterns of consumers with this hardware (FYI, there are up to 10 sensors embedded in smartphones now) have created a Big Data treasure chest to be mined to improve business and the life of the consumer. The opportunities are endless. So what does it all mean for a company doing business online? Start thinking about how you can:
    Re-imagine your experience. Not your online experience and your mobile experience and your social experience, but your overall experience. When consumers can research, buy, and advocate from anywhere (and their attention spans are at an all-time low), channels don't exist. Enable simple and beautiful interactions informed by all of the online and offline data you leverage across your enterprise. Ethically leverage the endless supply of data (user-generated content, clicks, purchases, in-store behavior, social activity) to make experiences more beautiful, more accurate, and more personalized (not to mention, more lucrative for you).
    Re-imagine content and commerce. Content and commerce must co-exist in a single destination where shoppers can get inspired, explore, research, share, and purchase in a collective experience. Think of how you can deliver an experience where all types of experiences (brand stories and commerce) adapt to every customer need. (Look for more on this topic coming soon.)
    Re-imagine your reach. Look to Meeker's findings to see how the global appetite for digital experiences is growing, but under-served in many places (i.e.: India, Mexico, Indonesia, Brazil, Philippines, etc.). Growing your online business to a new geography doesn't have to mean starting from scratch or having an entirely new team manage the new endeavor. Expand using what you've already built in a multisite framework, with global language support. And of course, make sure it's optimized for mobile!
    Re-imagine the possible. After every Meeker report, I'm always left with the thought "we are just at the beginning." Every day there is more data, more possibilities, more online consumers, and more opportunities to use the latest technology to get closer to your customers and be more successful. There's a lot going on in our Product Development and Product Innovations groups to automate innovation for our customers, so that they can continue to stay ahead of these trends without disrupting their business. Check out a recent interview with our Innovations Team on some of these new possibilities. Staying on track despite the seemingly endless possibilities out there is the hard part. Prioritizing where you will focus based on your unique brand promise, customers and goals is what you do best. To learn how Oracle Commerce can help your business achieve your goals check out oracle.com/commerce. Check out Meeker's entire report here.

    Read the article

  • Visual Studio 2013, ASP.NET MVC 5 Scaffolded Controls, and Bootstrap

    - by plitwin
    A few days ago, I created an ASP.NET MVC 5 project in the brand new Visual Studio 2013. I added some model classes and then proceeded to scaffold a controller class and views using the Entity Framework. Scaffolding Some Views Visual Studio 2013, by default, uses the Bootstrap 3 responsive CSS framework. Great; after all, we all want our web sites to be responsive and work well on mobile devices. Here’s an example of a scaffolded Create view as shown in the Google Chrome browser.   Looks pretty good. Okay, so let’s increase the width of the Title, Description, Address, and Date/Time textboxes, and decrease the width of the State and MaxActors textbox controls. Can’t be that hard… Digging Into the Code Let’s take a look at the scaffolded Create.cshtml file. Here’s a snippet of code behind the Create view. Pretty simple stuff. @using (Html.BeginForm()) { @Html.AntiForgeryToken() <div class="form-horizontal"> <h4>RandomAct</h4> <hr /> @Html.ValidationSummary(true) <div class="form-group"> @Html.LabelFor(model => model.Title, new { @class = "control-label col-md-2" }) <div class="col-md-10"> @Html.EditorFor(model => model.Title) @Html.ValidationMessageFor(model => model.Title) </div> </div> <div class="form-group"> @Html.LabelFor(model => model.Description, new { @class = "control-label col-md-2" }) <div class="col-md-10"> @Html.EditorFor(model => model.Description) @Html.ValidationMessageFor(model => model.Description) </div> </div> A little more digging and I discover that there are three CSS files of importance in how the page is rendered: bootstrap.css (and its minimized cohort) and Site.css, as shown below.   The Root of the Problem And here’s the root of the problem, which is the following CSS you’ll find in Site.css: /* Set width on the form input elements since they're 100% wide by default */ input, select, textarea { max-width: 280px; } Yes, Microsoft is for some reason setting the maximum width of all input, select, and textarea controls to 280 pixels. Not sure of the motivation behind this, but until you change this or override it by assigning the form controls to some other CSS class, your controls will never be able to be wider than 280px. 
The Fix Okay, so here’s the deal: I hope to become very competent in all things Bootstrap in the near future, but I don’t think you should have to become a Bootstrap guru in order to modify some scaffolded control widths. And you don’t. Here is the solution I came up with: Find the aforementioned CSS code in Site.css and change it to something more tenable, such as: /* Set width on the form input elements since they're 100% wide by default */ input, select, textarea { max-width: 600px; } Because the @Html.EditorFor HTML helper doesn’t support the passing of HTML attributes, you will need to replace any @Html.EditorFor() helpers with @Html.TextBoxFor(), @Html.TextAreaFor(), @Html.CheckBoxFor(), etc. helpers, and then add a custom width attribute to each control you wish to modify. Thus, the earlier stretch of code might end up looking like this: @using (Html.BeginForm()) { @Html.AntiForgeryToken() <div class="form-horizontal"> <h4>Random Act</h4> <hr /> @Html.ValidationSummary(true) <div class="form-group"> @Html.LabelFor(model => model.Title, new { @class = "control-label col-md-2" }) <div class="col-md-10"> @Html.TextBoxFor(model => model.Title, new { style = "width: 400px" }) @Html.ValidationMessageFor(model => model.Title) </div> </div> <div class="form-group"> @Html.LabelFor(model => model.Description, new { @class = "control-label col-md-2" }) <div class="col-md-10"> @Html.TextAreaFor(model => model.Description, new { style = "width: 400px" }) @Html.ValidationMessageFor(model => model.Description) </div> </div> Resulting Form Here’s what the page looks like after the fix: Technorati Tags: ASP.NET MVC, ASP.NET MVC 5, Bootstrap
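A possible alternative, if you would rather not swap out each @Html.EditorFor() call: leave the helpers alone and override the Site.css rule with a more specific selector of your own. The sketch below is only illustrative; the wide-input class name is invented here and is not part of the scaffolded markup.

/* Site.css - opt-in override; more specific than the default input/select/textarea rule */
.form-horizontal .wide-input input,
.form-horizontal .wide-input select,
.form-horizontal .wide-input textarea { max-width: 600px; }

You would then add class="wide-input" to the <div class="col-md-10"> wrappers whose controls should be wider, leaving the 280px default in place everywhere else.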

    Read the article

  • Diving into OpenStack Network Architecture - Part 1

    - by Ronen Kofman
    Before we begin: OpenStack networking has very powerful capabilities but at the same time it is quite complicated. In this blog series we will review an existing OpenStack setup using the Oracle OpenStack Tech Preview and explain the different network components through use cases and examples. The goal is to show how the different pieces come together and provide a bigger picture view of the network architecture in OpenStack. This can be very helpful to users making their first steps in OpenStack or anyone who wishes to understand how networking works in this environment. We will go through the basics first and build the examples as we go. According to the recent Icehouse user survey and the one before it, Neutron with the Open vSwitch plug-in is the most widely used network setup both in production and in POCs (in terms of number of customers), and so in this blog series we will analyze this specific OpenStack networking setup. As we know there are many options to set up OpenStack networking, and while Neutron + Open vSwitch is the most popular setup there is no claim that it is either the best or the most efficient option. Neutron + Open vSwitch is an example, one which provides a good starting point for anyone interested in understanding OpenStack networking. Even if you are using a different kind of network setup, such as a different Neutron plug-in, or are not using Neutron at all, this will still be a good starting point to understand the network architecture in OpenStack. The setup we are using for the examples is the one used in the Oracle OpenStack Tech Preview. Installing it is simple and it would be helpful to have it as a reference. In this setup we use eth2 on all servers for the VM network; all VM traffic will be flowing through this interface. The Oracle OpenStack Tech Preview is using VLANs for L2 isolation to provide tenant and network isolation. The following diagram shows how we have configured our deployment: This first post is a bit long and will focus on some basic concepts in OpenStack networking. The components we will be discussing are Open vSwitch, network namespaces, Linux bridge and veth pairs. Note that this is not meant to be a comprehensive review of these components; it is meant to describe each component only as much as needed to understand OpenStack network architecture. All the components described here can be further explored using other resources. 
Open vSwitch (OVS) In the Oracle OpenStack Tech Preview OVS is used to connect virtual machines to the physical port (in our case eth2) as shown in the deployment diagram. OVS contains bridges and ports, the OVS bridges are different from the Linux bridge (controlled by the brctl command) which are also used in this setup. To get started let’s view the OVS structure, use the following command: # ovs-vsctl show 7ec51567-ab42-49e8-906d-b854309c9edf     Bridge br-int         Port br-int             Interface br-int type: internal         Port "int-br-eth2"             Interface "int-br-eth2"     Bridge "br-eth2"         Port "br-eth2"             Interface "br-eth2" type: internal         Port "eth2"             Interface "eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2" ovs_version: "1.11.0" We see a standard post deployment OVS on a compute node with two bridges and several ports hanging off of each of them. The example above is a compute node without any VMs, we can see that the physical port eth2 is connected to a bridge called “br-eth2”. We also see two ports "int-br-eth2" and "phy-br-eth2" which are actually a veth pair and form virtual wire between the two bridges, veth pairs are discussed later in this post. When a virtual machine is created a port is created on one the br-int bridge and this port is eventually connected to the virtual machine (we will discuss the exact connectivity later in the series). Here is how OVS looks after a VM was launched: # ovs-vsctl show efd98c87-dc62-422d-8f73-a68c2a14e73d     Bridge br-int         Port "int-br-eth2"             Interface "int-br-eth2"         Port br-int             Interface br-int type: internal         Port "qvocb64ea96-9f" tag: 1             Interface "qvocb64ea96-9f"     Bridge "br-eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2"         Port "br-eth2"             Interface "br-eth2" type: internal         Port "eth2"             Interface "eth2" ovs_version: "1.11.0" Bridge "br-int" now has a new port "qvocb64ea96-9f" which connects to the VM and tagged with VLAN 1. Every VM which will be launched will add a port on the “br-int” bridge for every network interface the VM has. Another useful command on OVS is dump-flows for example: # ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4): cookie=0x0, duration=735.544s, table=0, n_packets=70, n_bytes=9976, idle_age=17, priority=3,in_port=1,dl_vlan=1000 actions=mod_vlan_vid:1,NORMAL cookie=0x0, duration=76679.786s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=1 actions=drop cookie=0x0, duration=76681.36s, table=0, n_packets=68, n_bytes=7950, idle_age=17, hard_age=65534, priority=1 actions=NORMAL As we see the port which is connected to the VM has the VLAN tag 1. However the port on the VM network (eth2) will be using tag 1000. OVS is modifying the vlan as the packet flow from the VM to the physical interface. In OpenStack the Open vSwitch agent takes care of programming the flows in Open vSwitch so the users do not have to deal with this at all. If you wish to learn more about how to program the Open vSwitch you can read more about it at http://openvswitch.org looking at the documentation describing the ovs-ofctl command. Network Namespaces (netns) Network namespaces is a very cool Linux feature can be used for many purposes and is heavily used in OpenStack networking. Network namespaces are isolated containers which can hold a network configuration and is not seen from outside of the namespace. 
A network namespace can be used to encapsulate specific network functionality or provide a network service in isolation as well as simply help to organize a complicated network setup. Using the Oracle OpenStack Tech Preview we are using the latest Unbreakable Enterprise Kernel R3 (UEK3), this kernel provides a complete support for netns. Let's see how namespaces work through couple of examples to control network namespaces we use the ip netns command: Defining a new namespace: # ip netns add my-ns # ip netns list my-ns As mentioned the namespace is an isolated container, we can perform all the normal actions in the namespace context using the exec command for example running the ifconfig command: # ip netns exec my-ns ifconfig -a lo        Link encap:Local Loopback           LOOPBACK  MTU:16436 Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b) We can run every command in the namespace context, this is especially useful for debug using tcpdump command, we can ping or ssh or define iptables all within the namespace. Connecting the namespace to the outside world: There are various ways to connect into a namespaces and between namespaces we will focus on how this is done in OpenStack. OpenStack uses a combination of Open vSwitch and network namespaces. OVS defines the interfaces and then we can add those interfaces to namespace. So first let's add a bridge to OVS: # ovs-vsctl add-br my-bridge Now let's add a port on the OVS and make it internal: # ovs-vsctl add-port my-bridge my-port # ovs-vsctl set Interface my-port type=internal And let's connect it into the namespace: # ip link set my-port netns my-ns Looking inside the namespace: # ip netns exec my-ns ifconfig -a lo        Link encap:Local Loopback           LOOPBACK  MTU:65536 Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b) my-port   Link encap:Ethernet HWaddr 22:04:45:E2:85:21           BROADCAST  MTU:1500 Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b) Now we can add more ports to the OVS bridge and connect it to other namespaces or other device like physical interfaces. Neutron is using network namespaces to implement network services such as DCHP, routing, gateway, firewall, load balance and more. In the next post we will go into this in further details. Linux Bridge and veth pairs Linux bridge is used to connect the port from OVS to the VM. Every port goes from the OVS bridge to a Linux bridge and from there to the VM. The reason for using regular Linux bridges is for security groups’ enforcement. Security groups are implemented using iptables and iptables can only be applied to Linux bridges and not to OVS bridges. Veth pairs are used extensively throughout the network setup in OpenStack and are also a good tool to debug a network problem. Veth pairs are simply a virtual wire and so veths always come in pairs. Typically one side of the veth pair will connect to a bridge and the other side to another bridge or simply left as a usable interface. In this example we will create some veth pairs, connect them to bridges and test connectivity. 
This example is using regular Linux server and not an OpenStack node: Creating a veth pair, note that we define names for both ends: # ip link add veth0 type veth peer name veth1 # ifconfig -a . . veth0     Link encap:Ethernet HWaddr 5E:2C:E6:03:D0:17           BROADCAST MULTICAST  MTU:1500 Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b) veth1     Link encap:Ethernet HWaddr E6:B6:E2:6D:42:B8           BROADCAST MULTICAST  MTU:1500 Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b) . . To make the example more meaningful this we will create the following setup: veth0 => veth1 => br-eth3 => eth3 ======> eth2 on another Linux server br-eth3 – a regular Linux bridge which will be connected to veth1 and eth3 eth3 – a physical interface with no IP on it, connected to a private network eth2 – a physical interface on the remote Linux box connected to the private network and configured with the IP of 50.50.50.1 Once we create the setup we will ping 50.50.50.1 (the remote IP) through veth0 to test that the connection is up: # brctl addbr br-eth3 # brctl addif br-eth3 eth3 # brctl addif br-eth3 veth1 # brctl show bridge name     bridge id               STP enabled     interfaces br-eth3         8000.00505682e7f6       no              eth3                                                         veth1 # ifconfig veth0 50.50.50.50 # ping -I veth0 50.50.50.51 PING 50.50.50.51 (50.50.50.51) from 50.50.50.50 veth0: 56(84) bytes of data. 64 bytes from 50.50.50.51: icmp_seq=1 ttl=64 time=0.454 ms 64 bytes from 50.50.50.51: icmp_seq=2 ttl=64 time=0.298 ms When the naming is not as obvious as the previous example and we don't know who are the paired veth interfaces we can use the ethtool command to figure this out. The ethtool command returns an index we can look up using ip link command, for example: # ethtool -S veth1 NIC statistics: peer_ifindex: 12 # ip link . . 12: veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 Summary That’s all for now, we quickly reviewed OVS, network namespaces, Linux bridges and veth pairs. These components are heavily used in the OpenStack network architecture we are exploring and understanding them well will be very useful when reviewing the different use cases. In the next post we will look at how the OpenStack network is laid out connecting the virtual machines to each other and to the external world. @RonenKofman

    Read the article

  • Overwriting TFS Web Services

    - by javarg
    In this blog I will share a technique I used to intercept TFS Web Services calls. This technique is a very invasive one and requires you to overwrite the default TFS Web Services behavior. I only recommend taking such an approach when other means of TFS extensibility fail to provide the same functionality (this is not a supported TFS extensibility point). For instance, intercepting and aborting a Work Item change operation could be implemented using this approach (consider TFS Subscribers functionality before taking this approach; check Martin’s post about subscribers). So let’s get started. The technique consists of versioning the TFS Web Services .asmx service classes. If you look into TFS’s ASMX services you will notice that versioning is supported by creating a class hierarchy between different product versions. For instance, let’s take the Work Item management service .asmx. Check the following .asmx file located at: %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\_tfs_resources\WorkItemTracking\v3.0\ClientService.asmx The .asmx references the class Microsoft.TeamFoundation.WorkItemTracking.Server.ClientService3: <%-- Copyright (c) Microsoft Corporation. All rights reserved. --%> <%@ webservice language="C#" Class="Microsoft.TeamFoundation.WorkItemTracking.Server.ClientService3" %> The inheritance hierarchy for this service class follows: Note the naming convention used for service versioning (ClientService3, ClientService2, ClientService). We will need to overwrite the latest service version provided by the product (in this case ClientService3 for TFS 2010). The following example intercepts and analyzes WorkItem fields. Suppose we need to validate state changes with more advanced logic than the validations/constraints provided by the process template. Important: Back up the original .asmx file and create one of your own. 
1. Create a Visual Studio Web App Project and include a new ASMX Web Service in the project. 2. Add the following references to the project (check the folder %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin\): Microsoft.TeamFoundation.Framework.Server.dll Microsoft.TeamFoundation.Server.dll Microsoft.TeamFoundation.WorkItemTracking.Client.QueryLanguage.dll Microsoft.TeamFoundation.WorkItemTracking.Server.DataAccessLayer.dll Microsoft.TeamFoundation.WorkItemTracking.Server.DataServices.dll 3. Replace the default service implementation with something similar to the following code: Code Snippet /// <summary> /// Inherit from ClientService3 to overwrite the default implementation /// </summary> [WebService(Namespace = "http://schemas.microsoft.com/TeamFoundation/2005/06/WorkItemTracking/ClientServices/03", Description = "Custom Team Foundation WorkItemTracking ClientService Web Service")] public class CustomTfsClientService : ClientService3 {     [WebMethod, SoapHeader("requestHeader", Direction = SoapHeaderDirection.In)]     public override bool BulkUpdate(         XmlElement package,         out XmlElement result,         MetadataTableHaveEntry[] metadataHave,         out string dbStamp,         out Payload metadata)     {         var xe = XElement.Parse(package.OuterXml);         // We only intercept WorkItem updates (we can easily extend this sample to capture any operation).         var wit = xe.Element("UpdateWorkItem");         if (wit != null)         {             if (wit.Attribute("WorkItemID") != null)             {                 int witId = (int)wit.Attribute("WorkItemID");                 // With this Id we can query TFS for more detailed information, using the TFS Client API (assuming the WIT already exists).                 var stateChanged =                     wit.Element("Columns").Elements("Column").FirstOrDefault(c => (string)c.Attribute("Column") == "System.State");                 if (stateChanged != null)                 {                     var newStateName = stateChanged.Element("Value").Value;                     if (newStateName == "Resolved")                     {                         throw new Exception("Cannot change state to Resolved!");                     }                 }             }         }         // Finally, we call the base method implementation         return base.BulkUpdate(package, out result, metadataHave, out dbStamp, out metadata);     } } 4. Build your solution and overwrite the original .asmx with the new implementation referencing our new service version (don’t forget to back it up first). 5. Copy your project’s .dll into the following path: %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin 6. Try saving a WorkItem into the Resolved state. Enjoy!
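To wire the override in, the ClientService.asmx file you copy over ends up pointing at the custom class instead of ClientService3. A minimal sketch of the directive, assuming the class above is compiled into a namespace called MyCompany.TfsExtensions (the namespace is purely illustrative): <%@ webservice language="C#" Class="MyCompany.TfsExtensions.CustomTfsClientService" %> The shape of the directive mirrors the original ClientService.asmx shown earlier; only the Class attribute changes.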

    Read the article

  • September Independent Oracle User Group (IOUG) Regional Events:

    - by Mandy Ho
    September 5, 2012 – Denver, CO Oracle 11g Database Upgrade Seminar Join Roy Swonger, Senior Director of Software Development at Oracle, to learn about upgrading to Oracle Database 11g. Topics include: All the required preparatory steps Database upgrade strategies Post-upgrade performance analysis Helpful tips and common pitfalls to watch out for http://www.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=152242&src=7598177&src=7598177&Act=4 September 6, 2012 – Salt Lake City, UT Fall Symposium 2012 Plan to join us for our annual fall event on Sept 6. The day will be filled with learning and networking, with tracks focused on Applications, APEX, BI, Development and DBA topics. This event is free for UTOUG members to attend, but please register. http://www.utoug.org/apex/f?p=972:2:6686308836668467::::P2_EVENT_ID:121 September 6, 2012 – Portland, OR Oracle’s Hands-on Workshop Series, focused on providing Defense-in-Depth solutions to secure data at the source, reduce risk and simplify compliance. The Oracle Database Security Workshop is a one-day hands-on session for IT Managers, IT Security Architects and Oracle DBAs who are looking for solutions to address their information protection, privacy, and accountability challenges within their Oracle database environment. Most security programs offered today fail to adequately address database security. Customers continue to be challenged to secure information against loss and protect the integrity of sensitive information like critical financial data, personally identifiable information (PII) and credit card data for PCI compliance. http://nwoug.org/content.aspx?page_id=87&club_id=165905&item_id=241082 September 11, 2012 – Montreal, QC APEXposed! For APEX aficionados – join ODTUG in Montreal, September 11-12 for APEXposed! Topics will include Dynamic Actions, Plug-ins, Tuning, and Building Mobile Apps. The cost is $399 US and early registration ends August 15th. For more information: http://www.odtugapextraining.com  September 11, 2012 – Philadelphia, PA Big Data & What Are We Still Doing Wrong, with Tom Kyte Tom Kyte is a Senior Technical Architect in Oracle's Server Technology Division. Tom is the Tom behind the AskTom column in Oracle Magazine and is also the author of Expert Oracle Database Architecture (Apress, 2005/2009), among other books. Abstract: Big Data The term "big data" draws a lot of attention, but behind the hype there's a simple story. For decades, companies have been making business decisions based on transactional data stored in relational databases. However, beyond that critical data is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and photographs that can be mined for useful information. This presentation will take a look at what Big Data is and means - and Oracle's strategy for handling it. Abstract: What are we still doing wrong? I've given many best practices presentations in the last 10 years. I've given many worst practices presentations in the last 10 years. 
I've seen some things change over the last ten years and many other things stay exactly the same. In this talk - we'll be taking a look at the good and the bad - what we do right and what we continue to do wrong over and over again. We'll look at why "Why" is probably the right initial answer to most any question. We'll look at how we get to "Know what we Know", and why that can be both a help and a hindrance. We'll peek at "Best Practices" and tie them into what I term "Worst Practices". In short, a talk on the good and the bad. http://ioug.itconvergence.com/pls/apex/f?p=207:27:3669516430980563::NO September 12, 2012 – New York, NY NYOUG Fall General Meeting “Trends in Database Administration and Why the Future of Database Administration is the Vdba” http://www.nyoug.org/upcoming_events.htm#General_Meeting1 September 21, 2012 – Cleveland, OH Oracle Database 11g for Developers: What You need to know or Oracle Database 11g New Features for Developers Attendees are introduced to the new and improved features of Oracle 11g (both Oracle 11g R1 and Oracle 11g R2) that directly impact application development. Special emphasis is placed on features that reduce development time, make development simpler, improve performance, or speed deployment. Specific topics include: New SQL functions, virtual columns, result caching, XML improvements, pivot statements, JDBC improvements, and PL/SQL enhancements such as compound triggers. http://www.neooug.org/ September 24, 2012 – Ottawa, ON Introduction to Oracle Spatial The free Oracle Locator functionality, and the Oracle Spatial option which dramatically extends Locator, are very useful, but poorly understood capabilities of the database. In the afternoon we will extend into additional areas selected from: storage and performance; answering business problems with spatial queries; using Oracle Maps in OBIEE; an overview and capabilities of Oracle Topology; under the covers with GeoCoding. http://www.oug-ottawa.org/pls/htmldb/f?p=327:27:4209274028390246::NO

    Read the article

  • Who is Jeremiah Owyang?

    - by Michael Hylton
    Q: What’s your current role and what career path brought you here? J.O.: I'm currently a partner and one of the founding team members at Altimeter Group. I'm the Research Director, and I also wear the hat of Industry Analyst. Prior to joining Altimeter, I was an Industry Analyst at Forrester covering Social Computing, and before that, deployed and managed the social media program at Hitachi Data Systems in Santa Clara. Around that time, I started a career blog called Web Strategy which focused on how companies were using the web to connect with customers --and never looked back. Q: As an industry analyst, what are you focused on these days? J.O.: There are three trends that I'm focusing my research on at this time: 1) The Dynamic Customer Journey: Individuals (both b2c and b2b) are given so many options in their sources of data, channels to choose from and screens to consume them on that we've found that at each given touchpoint there are 75 potential permutations. Companies that can map this, then deliver information to individuals when they need it, will have a competitive advantage, and we want to find out who's doing this. 2) One of the sub-themes that supports this trend is Social Performance. Yesterday's social web was disparate engagement of humans, but the next phase will be data driven, and soon new technologies will emerge to help all those that are consuming, publishing, and engaging on the social web to be more efficient with their time through forms of automation. As you might expect, this comes with upsides and downsides. 3) The Sentient World is our research theme that looks out the furthest, as the world around us (even inanimate objects) becomes 'self aware' and able to talk back to us via digital devices and beyond. Big data, the internet of things, and mobile devices will all be part of this next set. Q: People cite that the line between work and life is getting more and more blurred. Do you see your personal life influencing your professional work? J.O.: The lines between our work and personal lives are dissolving, and this leads to a greater upside of being always connected and having deeper relationships with those that are not. It also means a downside of societal expectations that we're always around and available for colleagues, customers, and beyond. In the future, a balance will be sought as we seek to achieve the goals of family, friends, work, and our own personal desires. All of this is being, ironically, written at 4:30 am on a Sunday. Q: How can people keep up with what you’re working on? J.O.: A great question, thanks. There are a few sources of information to find out; I'll lead with the first, which is my blog at web-strategist.com. A few times a week I'll publish my industry insights (hires, trends, forces, funding, M&A, business needs) as well as on Twitter, where I'll point to all the news that's fit to print @jowyang.  
As my research reports go live (we publish them for all to read --called Open Research-- at no cost) they'll emerge on my blog, or check out the research tab to find out more now. http://www.web-strategist.com/blog/research/ Q: Recently, you’ve been working with us here at Oracle on something exciting coming up later this week. What’s on the horizon?  J.O.: Absolutely! This coming Thursday, September 13th, I’m doing a webcast with Oracle on “Managing Social Relationships for the Enterprise”. This is going to be a great discussion with Reggie Bradford, Senior Vice President of Product Development at Oracle, and Christian Finn, Senior Director of Product Management for Oracle WebCenter. I’m looking forward to a great discussion around all those issues that so many companies are struggling with these days as they realize how much social media is impacting their business. It’s changing the way your customers and employees interact with your brand. Today it’s no longer a matter of when to become a social-enabled enterprise, but how to become a successful one. Q: You’ve been very actively pursued for media interviews and conference and company speaking engagements – anything you’d like to share to give us a sneak peek of what to expect on Thursday’s webcast?  J.O.: Below is a 15-minute video which encapsulates Altimeter’s themes on the Dynamic Customer Journey and the Sentient World. I’m really proud to have taken an active role in the first ever LeWeb outside of Paris. This one, which was featured in downtown London across the street from Westminster Abbey, was sold out. If you’ve not heard of LeWeb, this is a global Internet conference hosted by Loic and Geraldine Le Meur, a power couple that stem from Paris but are also living in Silicon Valley. This is one of my favorite conferences to connect with brands, technology innovators, investors and friends. Altimeter was able to play a minor role in suggesting the theme for the event, “Faster Than Real Time”, which stems off previous LeWebs that focused on the “Real time web”. In this radical state, companies are able to anticipate the needs of their customers by using data, technology, and devices, and deliver meaningful experiences before customers even know they need it. I explore two of three of Altimeter’s research themes, the Dynamic Customer Journey and the Sentient World, in my speech, but due to time, did not focus on the Adaptive Organization.

    Read the article

  • MySQL Cluster 7.2: Over 8x Higher Performance than Cluster 7.1

    - by Mat Keep
    Summary: The scalability enhancements delivered by extensions to the multi-threaded data nodes enable MySQL Cluster 7.2 to deliver over 8x higher performance than the previous MySQL Cluster 7.1 release on a recent benchmark. What’s New in MySQL Cluster 7.2 MySQL Cluster 7.2 was released as GA (Generally Available) in February 2012, delivering many enhancements to performance on complex queries, a new NoSQL Key / Value API, cross-data center replication and ease-of-use. These enhancements are summarized in the Figure below, and detailed in the MySQL Cluster New Features whitepaper. Figure 1: Next Generation Web Services, Cross Data Center Replication and Ease-of-Use One of the key enhancements delivered in MySQL Cluster 7.2 is the set of extensions made to the multi-threading processes of the data nodes. Multi-Threaded Data Node Extensions The MySQL Cluster 7.2 data node is now functionally divided into seven thread types: 1) Local Data Manager threads (ldm). Note – these are sometimes also called LQH threads. 2) Transaction Coordinator threads (tc) 3) Asynchronous Replication threads (rep) 4) Schema Management threads (main) 5) Network receiver threads (recv) 6) Network send threads (send) 7) IO threads Each of these thread types is discussed in more detail below. MySQL Cluster 7.2 increases the maximum number of LDM threads from 4 to 16. The LDM contains the actual data, which means that when using 16 threads the data is more heavily partitioned (this is automatic in MySQL Cluster). Each LDM thread maintains its own set of data partitions, index partitions and REDO log. The number of LDM partitions per data node is not dynamically configurable, but it is possible to map more than one partition onto each LDM thread, providing flexibility in modifying the number of LDM threads. The TC domain stores the state of in-flight transactions. This means that every new transaction can easily be assigned to a new TC thread. Testing has shown that in most cases 1 TC thread per 2 LDM threads is sufficient, and in many cases even 1 TC thread per 4 LDM threads is also acceptable. Testing also demonstrated that in some instances, where the workload needed to sustain very high update loads, it is necessary to configure 3 to 4 TC threads per 4 LDM threads. In the previous MySQL Cluster 7.1 release, only one TC thread was available. This limit has been increased to 16 TC threads in MySQL Cluster 7.2. The TC domain also manages the Adaptive Query Localization functionality introduced in MySQL Cluster 7.2 that significantly enhanced complex query performance by pushing JOIN operations down to the data nodes. Asynchronous Replication was separated into its own thread with the release of MySQL Cluster 7.1, and has not been modified in the latest 7.2 release. To scale the number of TC threads, it was necessary to separate the Schema Management domain from the TC domain. 
The schema management thread has little load, so is implemented with a single thread. The Network receiver domain was bound to 1 thread in MySQL Cluster 7.1. With the increase of threads in MySQL Cluster 7.2 it is also necessary to increase the number of recv threads to 8. This enables each receive thread to service one or more sockets used to communicate with other nodes the Cluster. The Network send thread is a new thread type introduced in MySQL Cluster 7.2. Previously other threads handled the sending operations themselves, which can provide for lower latency. To achieve highest throughput however, it has been necessary to create dedicated send threads, of which 8 can be configured. It is still possible to configure MySQL Cluster 7.2 to a legacy mode that does not use any of the send threads – useful for those workloads that are most sensitive to latency. The IO Thread is the final thread type and there have been no changes to this domain in MySQL Cluster 7.2. Multiple IO threads were already available, which could be configured to either one thread per open file, or to a fixed number of IO threads that handle the IO traffic. Except when using compression on disk, the IO threads typically have a very light load. Benchmarking the Scalability Enhancements The scalability enhancements discussed above have made it possible to scale CPU usage of each data node to more than 5x of that possible in MySQL Cluster 7.1. In addition, a number of bottlenecks have been removed, making it possible to scale data node performance by even more than 5x. Figure 2: MySQL Cluster 7.2 Delivers 8.4x Higher Performance than 7.1 The flexAsynch benchmark was used to compare MySQL Cluster 7.2 performance to 7.1 across an 8-node Intel Xeon x5670-based cluster of dual socket commodity servers (6 cores each). As the results demonstrate, MySQL Cluster 7.2 delivers over 8x higher performance per data nodes than MySQL Cluster 7.1. More details of this and other benchmarks will be published in a new whitepaper – coming soon, so stay tuned! In a following blog post, I’ll provide recommendations on optimum thread configurations for different types of server processor. You can also learn more from the Best Practices Guide to Optimizing Performance of MySQL Cluster Conclusion MySQL Cluster has achieved a range of impressive benchmark results, and set in context with the previous 7.1 release, is able to deliver over 8x higher performance per node. As a result, the multi-threaded data node extensions not only serve to increase performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs.
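For readers who want to experiment with the thread types described above, the counts are set in the cluster's config.ini for the multi-threaded data node (ndbmtd), either via the simple MaxNoOfExecutionThreads parameter or the finer-grained ThreadConfig parameter. A minimal sketch is below; the counts are placeholders to show the shape of the setting, not a tuning recommendation (the follow-up post mentioned above will cover recommended configurations).

[ndbd default]
# illustrative only: explicit counts for the ldm, tc, send and recv thread types
ThreadConfig=ldm={count=4},tc={count=2},send={count=1},recv={count=1}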

    Read the article

  • MySQL Cluster 7.3 Labs Release – Foreign Keys Are In!

    - by Mat Keep
    Summary (aka TL/DR): Support for Foreign Key constraints has been one of the most requested feature enhancements for MySQL Cluster. We are therefore extremely excited to announce that Foreign Keys are part of the first Labs Release of MySQL Cluster 7.3 – available for download, evaluation and feedback now! (Select the mysql-cluster-7.3-labs-June-2012 build.) In this blog, I will attempt to discuss the design rationale, implementation, configuration and steps to get started in evaluating the first MySQL Cluster 7.3 Labs Release. Pace of Innovation It was only a couple of months ago that we announced the General Availability (GA) of MySQL Cluster 7.2, delivering 1 billion Queries per Minute, with 70x higher cross-shard JOIN performance, the Memcached NoSQL key-value API and cross-data center replication. This release has been a huge hit, with downloads and deployments quickly reaching record levels. The announcement of the first MySQL Cluster 7.3 Early Access lab release at today's MySQL Innovation Day event demonstrates the continued pace of Cluster development, and provides an opportunity for the community to evaluate and give feedback on the new features they want to see. What’s the Plan for MySQL Cluster 7.3? Well, Foreign Keys, as you may have gathered by now (!), and this is the focus of this first Labs Release. As with MySQL Cluster 7.2, we plan to publish a series of preview releases for 7.3 that will incrementally add new candidate features for a final GA release (subject to the usual safe harbor statement below*), including: - New NoSQL APIs; - Features to automate the configuration and provisioning of multi-node clusters, on premise or in the cloud; - Performance and scalability enhancements; - Taking advantage of features in the latest MySQL 5.x Server GA. Design Rationale MySQL Cluster is designed as a “Not-Only-SQL” database. It combines attributes that enable users to blend the best of both relational and NoSQL technologies into solutions that deliver web scalability with 99.999% availability and real-time performance, including: Concurrent NoSQL and SQL access to the database; Auto-sharding with simple scale-out across commodity hardware; Multi-master replication with failover and recovery both within and across data centers; Shared-nothing architecture with no single point of failure; Online scaling and schema changes; ACID compliance and support for complex queries, across shards. Native support for Foreign Key constraints enables users to extend the benefits of MySQL Cluster into a broader range of use-cases, including: - Packaged applications in areas such as eCommerce and Web Content Management that prescribe databases with Foreign Key support. - In-house developments benefiting from Foreign Key constraints to simplify data models and eliminate the additional application logic needed to maintain data consistency and integrity between tables. 
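To make that concrete, here is a minimal sketch of what a Foreign Key between two NDB tables looks like; the table and column names are invented purely for illustration:

CREATE TABLE authors (
  id INT NOT NULL PRIMARY KEY,
  name VARCHAR(100)
) ENGINE=NDB;

CREATE TABLE books (
  id INT NOT NULL PRIMARY KEY,
  author_id INT NOT NULL,
  title VARCHAR(200),
  INDEX author_idx (author_id),
  FOREIGN KEY (author_id) REFERENCES authors (id) ON DELETE CASCADE ON UPDATE RESTRICT
) ENGINE=NDB;

With ON DELETE CASCADE, removing a row from authors also removes that author's rows from books; the full set of supported referential actions is covered in the Implementation section that follows.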
Implementation

The Foreign Key functionality is implemented directly within MySQL Cluster's data nodes, allowing any client API accessing the cluster to benefit from it – whether using SQL or one of the NoSQL interfaces (Memcached, C++, Java, JPA or HTTP/REST). The core referential actions defined in the SQL:2003 standard are implemented:
- CASCADE
- RESTRICT
- NO ACTION
- SET NULL

In addition, the MySQL Cluster implementation supports the online adding and dropping of Foreign Keys, ensuring the Cluster continues to serve both read and write requests during the operation.

An important difference to note from the Foreign Key implementation in InnoDB is that MySQL Cluster does not support the updating of Primary Keys from within the Data Nodes themselves - instead the UPDATE is emulated with a DELETE followed by an INSERT operation. Therefore an UPDATE operation will return an error if the parent reference is using a Primary Key, unless the CASCADE action is specified, in which case the delete operation will result in the corresponding rows in the child table being deleted. The Engineering team plans to change this behavior in a subsequent preview release. Also note that in InnoDB "NO ACTION" is identical to "RESTRICT". In the case of MySQL Cluster, "NO ACTION" means "deferred check", i.e. the constraint is checked before commit, allowing user-defined triggers to automatically make changes in order to satisfy the Foreign Key constraints.

Configuration

There is nothing special you have to do here – Foreign Key constraint checking is enabled by default. If you intend to migrate existing tables from another database or storage engine, for example from InnoDB, there are a couple of best practices to observe:
1. Analyze the structure of the Foreign Key graph and run the ALTER TABLE ... ENGINE=NDB statements in the correct sequence to ensure constraints are enforced.
2. Alternatively, drop the Foreign Key constraints prior to the import process and then recreate them when complete.
(A minimal, hypothetical end-to-end sketch is included at the end of this post.)

Getting Started

Read this blog for a demonstration of using Foreign Keys with MySQL Cluster. You can download MySQL Cluster 7.3 Labs Release with Foreign Keys today - (select the mysql-cluster-7.3-labs-June-2012 build). If you are new to MySQL Cluster, the Getting Started guide will walk you through installing an evaluation cluster on a single host (these guides reflect MySQL Cluster 7.2, but apply equally well to 7.3). Post any questions to the MySQL Cluster forum where our Engineering team will attempt to assist you. Post any bugs you find to the MySQL bug tracking system (select MySQL Cluster from the Category drop-down menu). And if you have any feedback, please post it to the Comments section of this blog.

Summary

MySQL Cluster 7.2 is the GA, production-ready release of MySQL Cluster. This first Labs Release of MySQL Cluster 7.3 gives you the opportunity to preview and evaluate future developments in the MySQL Cluster database, and we are very excited to be able to share that with you. Let us know how you get along with MySQL Cluster 7.3, and which other features you want to see in future releases.

* Safe Harbor Statement

This information is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions.
The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.
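
To make the above concrete, here is a minimal sketch of what evaluating the new Foreign Key support might look like from a client. It assumes an SQL node reachable on 127.0.0.1 and a test schema; the table and column names are hypothetical, and Python with the MySQL Connector/Python driver is used purely for illustration; any SQL client would do the same:

import mysql.connector

# Hypothetical connection settings - point these at your own SQL node.
cnx = mysql.connector.connect(host="127.0.0.1", user="root", password="", database="test")
cur = cnx.cursor()

# Parent/child tables on the NDB engine; the child declares a named Foreign Key
# with an ON DELETE CASCADE referential action.
cur.execute("CREATE TABLE parent (id INT NOT NULL PRIMARY KEY, name VARCHAR(40)) ENGINE=NDB")
cur.execute("""CREATE TABLE child (
                   id INT NOT NULL PRIMARY KEY,
                   parent_id INT,
                   CONSTRAINT fk_child_parent FOREIGN KEY (parent_id)
                       REFERENCES parent (id) ON DELETE CASCADE
               ) ENGINE=NDB""")

cur.execute("INSERT INTO parent VALUES (1, 'p1')")
cur.execute("INSERT INTO child VALUES (10, 1)")
cnx.commit()

# The constraint is enforced: an orphaned child row is rejected.
try:
    cur.execute("INSERT INTO child VALUES (11, 999)")
except mysql.connector.Error as err:
    print("constraint enforced:", err)

# CASCADE: deleting the parent removes the referencing child rows.
cur.execute("DELETE FROM parent WHERE id = 1")
cnx.commit()

# Foreign Keys can also be added and dropped online on existing NDB tables.
cur.execute("ALTER TABLE child DROP FOREIGN KEY fk_child_parent")

cur.close()
cnx.close()

The two migration approaches described under Configuration are issued the same way: either a sequence of ALTER TABLE ... ENGINE=NDB statements in the appropriate order, or dropping the constraints, importing, and recreating them afterwards.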

    Read the article

  • Big Data Matters with ODI12c

    - by Madhu Nair
    contributed by Mike Eisterer

On October 17th, 2013, Oracle announced the release of Oracle Data Integrator 12c (ODI12c). This release signifies improvements to Oracle's Data Integration portfolio of solutions, particularly Big Data integration.

Why Big Data = Big Business

Organizations are gaining greater insights and actionability through the increased storage, processing and analytical benefits offered by Big Data solutions. New technologies and frameworks like HDFS, NoSQL, Hive and MapReduce support these benefits now. As further data is collected, analytical requirements increase, the complexity of managing transformations and aggregations of data compounds, and organizations are in need of scalable Data Integration solutions.

ODI12c provides enterprise solutions for the movement, translation and transformation of information and data heterogeneously and in Big Data environments through:
- The ability for existing ODI and SQL developers to leverage new Big Data technologies.
- A metadata focused approach for cataloging, defining and reusing Big Data technologies, mappings and process executions.
- Integration between many heterogeneous environments and technologies such as HDFS and Hive.
- Generation of Hive Query Language.

Working with Big Data using Knowledge Modules

ODI12c provides developers with the ability to define sources and targets and visually develop mappings to effect the movement and transformation of data. As the mappings are created, ODI12c leverages a rich library of prebuilt integrations, known as Knowledge Modules (KMs). These KMs are contextual to the technologies and platforms to be integrated. Steps and actions needed to manage the data integration are pre-built and configured within the KMs. The Oracle Data Integrator Application Adapter for Hadoop provides a series of KMs, specifically designed to integrate with Big Data technologies. The Big Data KMs include:
- Check Knowledge Module
- Reverse Engineer Knowledge Module
- Hive Transform Knowledge Module
- Hive Control Append Knowledge Module
- File to Hive (LOAD DATA) Knowledge Module
- File-Hive to Oracle (OLH-OSCH) Knowledge Module

Nothing to beat an example: to demonstrate the use of the KMs which are part of the ODI Application Adapter for Hadoop, a mapping may be defined to move data between files and Hive targets. The mapping is defined by dragging the source and target into the mapping, performing the attribute (column) mapping (see Figure 1) and then selecting the KM which will govern the process. In this mapping example, movie data is being moved from an HDFS source into a Hive table. Some of the attributes, such as "CUSTID to custid", have been mapped over.

Figure 1  Defining the Mapping

Before the proper KM can be assigned to define the technology for the mapping, it needs to be added to the ODI project. The Big Data KMs have been made available to the project through the KM import process. Generally, this is done prior to defining the mapping.

Figure 2  Importing the Big Data Knowledge Modules

Following the import, the KMs are available in the Designer Navigator.
Figure 3  The Project View in Designer, Showing Installed IKMs

Once the KM is imported, it may be assigned to the mapping target. This is done by selecting the Physical View of the mapping and examining the Properties of the Target. In this case MOVIAPP_LOG_STAGE is the target of our mapping.

Figure 4  Physical View of the Mapping and Assigning the Big Data Knowledge Module to the Target

Alternative KMs may have been selected as well, providing flexibility and abstracting the logical mapping from the physical implementation. Our mapping may be applied to other technologies as well. The mapping is now complete and is ready to run. We will see more in a future blog about running a mapping to load Hive.

To complete the quick ODI for Big Data overview, let us take a closer look at what the IKM File to Hive is doing for us. ODI provides differentiated capabilities by defining the process and steps which normally would have to be manually developed, tested and implemented into the KM. As shown in Figure 5, the KM is preparing the Hive session, managing the Hive tables, performing the initial load from HDFS and then performing the insert into Hive (a rough sketch of the equivalent HiveQL appears at the end of this post). HDFS and Hive options are selected graphically, as shown in the properties in Figure 4.

Figure 5  Process and Steps Managed by the KM

What's Next

Big Data, being the shape-shifting business challenge that it is, is fast evolving into the deciding factor between market leaders and the rest. Now that an introduction to ODI and Big Data has been provided, look for additional blogs coming soon using the Knowledge Modules which make up the Oracle Data Integrator Application Adapter for Hadoop:
- Importing Big Data Metadata into ODI, Testing Data Stores and Loading Hive Targets
- Generating Transformations using Hive Query Language
- Loading Oracle from Hadoop Sources

For more information now, please visit the Oracle Data Integrator Application Adapter for Hadoop web site, http://www.oracle.com/us/products/middleware/data-integration/hadoop/overview/index.html

Do not forget to tune in to the ODI12c Executive Launch webcast on the 12th to hear more about ODI12c and GG12c.
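
For readers who want a feel for what the File to Hive (LOAD DATA) KM does behind the scenes, here is a rough, hedged sketch of the kind of HiveQL the steps in Figure 5 boil down to: managing the Hive target table and loading the HDFS file. ODI generates and executes the real statements for you; the host, table name, columns and HDFS path below are hypothetical, and the PyHive client is used only to keep the sketch self-contained.

from pyhive import hive  # assumes a reachable HiveServer2 endpoint

conn = hive.Connection(host="bigdata-demo-host", port=10000, username="odi")
cur = conn.cursor()

# Manage the Hive target table (hypothetical columns echoing the movie example).
cur.execute("""CREATE TABLE IF NOT EXISTS moviapp_log_stage (
                   custid INT,
                   movieid INT,
                   activity STRING
               )
               ROW FORMAT DELIMITED FIELDS TERMINATED BY ','""")

# Initial load from HDFS into Hive - the LOAD DATA step of the KM.
cur.execute("LOAD DATA INPATH '/user/odi/demo/movie_activity' INTO TABLE moviapp_log_stage")

cur.close()
conn.close()

The value of the KM is that these steps, plus session preparation, error handling and the follow-on insert, come pre-built and tested, exposed as graphical options rather than hand-written scripts.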

    Read the article

  • Managing Social Relationships for the Enterprise – Part 1

    - by Michael Hylton
    Reggie Bradford, Senior Vice President, Oracle

Today, Mark Hurd, President of Oracle, Thomas Kurian, Executive Vice President of Oracle and I discussed the strategic importance of how social media is impacting the enterprise and how it is changing the way customers, prospects, employees and investors interact with brands worldwide. Oracle understands that the consumer is in control and as such, brands must evolve and change to meet growing needs. In addition, according to social media thought leader and Analyst from Altimeter Group, Jeremiah Owyang, companies now average 178 corporate-owned social media accounts. When Oracle added leading social marketing, listening analytics and development tools from Vitrue, Collective Intellect and Involver to Oracle's Cloud Services Suite, we went beyond providing a single set of tools. We developed an entire framework to include a comprehensive social relationship management suite to help companies move beyond the social enterprise and achieve the social-enabled enterprise. The fundamental shift from transaction to engagement means that enterprises need not only a social strategy, but should also ensure that the information and data received from social initiatives flow back to marketing, sales, support and service. Doing so enables companies to deliver a proactive and compelling experience and provides analytics to turn engagement into opportunity – and ultimately that opportunity into revenue.

On September 13, 2012, I am delighted to sit down with Jeremiah to further the discussion about how enterprises are addressing social media strategies and managing content. In addition, we will be taking your questions after the webinar via Twitter (@Oracle, @ReggieBradford, @cwfinn, @jowyang). Use #oracle and #socbiz to submit questions and follow the conversation. I look forward to speaking with you and answering your questions online.

For more information about becoming a social-enabled enterprise, visit www.oracle.com/social. And don’t miss the insights of other social business thought leaders at www.oracle.com/goto/socialbusiness.

    Read the article

  • Changing CSS with jQuery syntax in Silverlight using jLight

    - by Timmy Kokke
    Lately I've run into situations where I had to change elements or had to request a value in the DOM from Silverlight. jLight, which was introduced in an earlier article, can help with that. jQuery offers great ways to change CSS at runtime. Silverlight can access the DOM, but it isn't as easy as jQuery. All examples shown in this article can be looked at in this online demo. The code can be downloaded here.

Part 1: The easy stuff

Selecting and changing properties is pretty straightforward. Setting the text color in all <B></B> elements can be done using the following code:

jQuery.Select("b").Css("color", "red");

The Css() method is an extension method on jQueryObject, which is returned by the jQuery.Select() method. The Css() method takes two parameters. The first is the CSS style property; all properties used in CSS can be entered in this string. The second parameter is the value you want to give the property. In this case the property is "color" and it is changed to "red". To specify which element you want to select you can add a :selector to the Select() method as shown in the next example.

jQuery.Select("b:first").Css("font-family", "sans-serif");

The ":first" pseudo-class selector selects only the first element. This example changes the "font-family" property of the first <B></B> element to "sans-serif". To make use of IntelliSense in Visual Studio I've added extension methods to help with the pseudo-classes. In the example below the "font-weight" of every even <LI></LI> element is set to "bold".

jQuery.Select("li".Even()).Css("font-weight", "bold");

Because the Css() extension method returns a jQueryObject it is possible to chain calls to Css(). The following example shows setting the "color", "background-color" and the "font-size" of all headers in one go.

jQuery.Select(":header").Css("color", "#12FF70")
      .Css("background-color", "yellow")
      .Css("font-size", "25px");

Part 2: More complex stuff

Only in a few cases do you need to change just one style property. More often you want to change an entire set of style properties all in one go. You could chain a lot of Css() methods together. A better way is to add a class to a stylesheet and define all the properties in there. With the AddClass() method you can apply a style class to a set of elements. This example shows how to add the "demostyle" class to all <B></B> elements in the document.

jQuery.Select("b").AddClass("demostyle");

Removing the class works in the same way:

jQuery.Select("b").RemoveClass("demostyle");

jLight is built for interacting with the DOM from Silverlight using jQuery. A jQueryObjectCss object can be used to define different sets of style properties in Silverlight. Over 60 of the most common CSS style properties are defined in the jQueryObjectCss class. A string indexer can be used to access all style properties (CssObject1["background-color"] equals CssObject1.BackgroundColor). In the code below, two jQueryObjectCss objects are defined and instantiated.
private jQueryObjectCss CssObject1;
private jQueryObjectCss CssObject2;

public Demo2()
{
    CssObject1 = new jQueryObjectCss
    {
        BackgroundColor = "Lime", Color = "Black", FontSize = "12pt",
        FontFamily = "sans-serif", FontWeight = "bold", MarginLeft = 150,
        LineHeight = "28px", Border = "Solid 1px #880000"
    };
    CssObject2 = new jQueryObjectCss { FontStyle = "Italic", FontSize = "48", Color = "#225522" };
    InitializeComponent();
}

Now instead of chaining to set all the different properties you can just pass one of the jQueryObjectCss objects to the Css() method. In this case all <LI></LI> elements are set to match this object.

jQuery.Select("li").Css(CssObject1);

When using the jQueryObjectCss objects chaining is still possible. In the following example all headers are given a blue background color and the last is set to match CssObject2.

jQuery.Select(":header").Css(new jQueryObjectCss { BackgroundColor = "Blue" })
      .Eq(-1).Css(CssObject2);

Part 3: The fun stuff

Having Silverlight call JavaScript and then having JavaScript call Silverlight requires a lot of plumbing code. Everything has to be registered and strings are passed back and forth to execute the JavaScript. jLight makes this kind of stuff so easy, it becomes fun to use. In a lot of situations jQuery can call a function to decide what to do, setting a style class based on complex expressions for example. jLight can do the same, but the callback methods are defined in Silverlight. This example calls the function() method for each <LI></LI> element. The callback method has to take a jQueryObject, an integer and a string as parameters. In this case jLight differs a bit from the actual jQuery implementation. jQuery uses only the index and the className parameters. A jQueryObject is added to make it simpler to access the attributes and properties of the element. If the text of the list item starts with a 'D' or an 'M' the class is set. Otherwise null is returned and nothing happens.

private void button1_Click(object sender, RoutedEventArgs e)
{
    jQuery.Select("li").AddClass(function);
}

private string function(jQueryObject obj, int index, string className)
{
    if (obj.Text[0] == 'D' || obj.Text[0] == 'M')
        return "demostyle";
    return null;
}

The last thing I would like to demonstrate uses even more Silverlight and less jLight, but demonstrates the power of the combination: animating a style property using a Storyboard with easing functions. First a dependency property is defined. In this case it is a double named Intensity. By handling the changed event the color is set using jQuery.

public double Intensity
{
    get { return (double)GetValue(IntensityProperty); }
    set { SetValue(IntensityProperty, value); }
}

public static readonly DependencyProperty IntensityProperty =
    DependencyProperty.Register("Intensity", typeof(double), typeof(Demo3),
                                new PropertyMetadata(0.0, IntensityChanged));

private static void IntensityChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
    var i = (byte)(double)e.NewValue;
    jQuery.Select("span").Css("color", string.Format("#{0:X2}{0:X2}{0:X2}", i));
}

An animation has to be created. This code defines a Storyboard with one keyframe that uses a bounce ease as an easing function. The animation is set to target the Intensity dependency property defined earlier.
private Storyboard CreateAnimation(double value)
{
    Storyboard storyboard = new Storyboard();
    var da = new DoubleAnimationUsingKeyFrames();
    var d = new EasingDoubleKeyFrame
    {
        EasingFunction = new BounceEase(),
        KeyTime = KeyTime.FromTimeSpan(TimeSpan.FromSeconds(1.0)),
        Value = value
    };
    da.KeyFrames.Add(d);
    Storyboard.SetTarget(da, this);
    Storyboard.SetTargetProperty(da, new PropertyPath(Demo3.IntensityProperty));
    storyboard.Children.Add(da);
    return storyboard;
}

Initially the Intensity is set to 128, which results in a gray color. When one of the buttons is pressed, a new animation is created and played: one to animate to black, and one to animate to white.

public Demo3()
{
    InitializeComponent();
    Intensity = 128;
}

private void button2_Click(object sender, RoutedEventArgs e)
{
    CreateAnimation(255).Begin();
}

private void button3_Click(object sender, RoutedEventArgs e)
{
    CreateAnimation(0).Begin();
}

Conclusion

As you can see, jLight can make the life of a Silverlight developer a lot easier when accessing the DOM. Almost all jQuery functions that are defined in jLight use the same constructions as described above. I've tried to stay as close as possible to the real jQuery. Having JavaScript perform callbacks to Silverlight using jLight will be described in more detail in a future tutorial about AJAX or eventing.

    Read the article

  • Firefox Fonts Blurry (Web Content ONLY, **NOT** Window Decorations or Other Programs)

    - by Lambert
    A picture is worth a thousand words... so does anyone know how to fix this font blurriness in Firefox? (You'll need to right-click the picture below and go to View Image to see it full-size; it's too small to make anything out here.) Note: my other applications (and the Firefox non-client area, as you can see in the screenshot) are completely fine, so obviously going to System > Appearance and changing the font settings isn't fixing the situation.

    Read the article

  • Guide to MySQL & NoSQL, Webinar Q&A

    - by Mat Keep
    Yesterday we ran a webinar discussing the demands of next generation web services and how blending the best of relational and NoSQL technologies enables developers and architects to deliver the agility, performance and availability needed to be successful. Attendees posted a number of great questions to the MySQL developers, serving to provide additional insights into areas like auto-sharding and cross-shard JOINs, replication, performance, client libraries, etc. So I thought it would be useful to post those below, for the benefit of those unable to attend the webinar.

Before getting to the Q&A, there are a couple of other resources that may be useful to those looking at NoSQL capabilities within MySQL:
- On-Demand webinar (coming soon!)
- Slides used during the webinar
- Guide to MySQL and NoSQL whitepaper
- MySQL Cluster demo, including NoSQL interfaces, auto-sharding, high availability, etc.

So here is the Q&A from the event.

Q. Where does MySQL Cluster fit into the CAP theorem?
A. MySQL Cluster is flexible. A single Cluster will prefer consistency over availability in the presence of network partitions. A pair of Clusters can be configured to prefer availability over consistency. A full explanation can be found on the MySQL Cluster & CAP Theorem blog post.

Q. Can you configure the number of replicas? (the slide used a replication factor of 1)
A. Yes. A cluster is configured by an .ini file. The option NoOfReplicas sets the number of originals and replicas: 1 = no data redundancy, 2 = one copy, etc. Usually there's no benefit in setting it >2.

Q. Interestingly, most (if not all) of the NoSQL databases recommend having 3 copies of data (the replication factor).
A. Yes, with configurable quorum-based reads and writes. MySQL Cluster does not need a quorum of replicas online to provide service. Systems that require a quorum need >2 replicas to be able to tolerate a single failure. Additionally, many NoSQL systems take liberal inspiration from the original GFS paper, which described a 3-replica configuration. MySQL Cluster avoids the need for a quorum by using a lightweight arbitrator. You can configure more than 2 replicas, but this is a tradeoff between incrementally improved availability and linearly increased cost.

Q. Can you have cross node group JOINs? Wouldn't that run the risk of flooding the network?
A. MySQL Cluster 7.2 supports cross-nodegroup joins. A full cross-join can require a large amount of data transfer, which may bottleneck on network bandwidth. However, for more selective joins, typically seen with OLTP and light analytic applications, cross node-group joins give a great performance boost and network bandwidth saving over having the MySQL Server perform the join.

Q. Are the details of the benchmark available anywhere? According to my calculations it results in approx.
350k ops/sec per processor, which is the largest number I've seen lately.
A. The details are linked from Mikael Ronstrom's blog. The benchmark uses a benchmarking tool we call flexAsynch, which runs parallel asynchronous transactions. It involved 100 byte reads, of 25 columns each. Regarding the per-processor ops/s, MySQL Cluster is particularly efficient in terms of throughput/node. It uses lock-free, minimal-copy message passing internally, and maximizes ID cache reuse. Note also that these are in-memory tables; there is no need to read anything from disk.

Q. Is access control (like table) planned to be supported for NoSQL access mode?
A. Currently we have not seen much need for full SQL-like access control (which has always been overkill for web apps and telco apps). So we have no plans, though especially with memcached it is certainly possible to turn on connection-level access control. But specifically table-level controls are not planned.

Q. How is the performance of the memcached API with MySQL against memcached+MySQL or any other object cache like Ehcache with MySQL DB?
A. With the memcache API we generally see a memcached response in less than 1 ms, and a small cluster with one memcached server can handle tens of thousands of operations per second.

Q. Can .NET access the memcached API?
A. Yes, just use a .NET memcache client such as the enyim or BeIT memcache libraries.

Q. Is row level locking applicable when you update a column through the memcached API?
A. An update that comes through memcached uses a row lock and then releases it immediately. Memcached operations like "INCREMENT" are actually pushed down to the data nodes. In most cases the locks are not even held long enough for a network round trip.

Q. Has anyone published an example using something like PHP? I am assuming that you just use the PHP memcached extension to hook into the memcached API. Is that correct?
A. Not that I'm aware of, but absolutely you can use it with PHP or any of the other drivers (for illustration, a minimal Python sketch is included at the end of this post).

Q. For beginners, we need more examples.
A. Take a look here for a fully worked example.

Q. Can I access MySQL using Cobol (Open Cobol) or C, and if so where can I find the coding libraries etc.?
A. There is a Cobol implementation that works well with MySQL, but I do not think it is Open Cobol. Also, there is a MySQL C client library that is a standard part of every MySQL distribution.

Q. Is there a place to go to find help when testing and/or implementing the NoSQL access?
A. If using Cluster then you can use the [email protected] alias or post on the MySQL Cluster forum.

Q. Are there any white papers on this?
A. Yes - there is more detail in the MySQL Guide to NoSQL whitepaper.

If you have further questions, please don’t hesitate to use the comments below!
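
As a quick, hedged illustration of the memcached API discussion above, here is a minimal sketch in Python using the pymemcache client against the memcached server bundled with MySQL Cluster. The host, port and key names are assumptions; any standard memcached client (PHP, .NET, Java, etc.) follows the same pattern.

from pymemcache.client.base import Client

# Connect to the memcached front end of the cluster (assumed host and default port).
client = Client(("127.0.0.1", 11211))

# Reads and writes are routed through to the data nodes (or cached locally,
# depending on the server-side container/policy configuration).
client.set("town:maidenhead", "SL6")
print(client.get("town:maidenhead"))   # b'SL6'

# Operations such as INCREMENT are pushed down to the data nodes.
client.set("counter:pageviews", "1")
client.incr("counter:pageviews", 1)

Depending on how the containers are configured on the server side, the same data is then also visible through SQL and the other NoSQL interfaces.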

    Read the article
