Search Results


  • Is there any kind of established architecture for browser based MMO games?

    - by black_puppydog
    I am beginning the development of a browser-based game in which players take certain actions at any point in time. Big parts of gameplay will happen in real life and just have to be entered into the system. I believe a good comparison might be a platform for managing fantasy football, although I have virtually no experience playing that, so please correct me if I am mistaken here.

    The point is that some events happen in the program (i.e. on the server, out of reach for the players), like pulling new results from some data source or the starting of a new round by a game master. Other events happen in real life (two players closing a deal on the transfer of some team member or whatnot; again, I have never played fantasy football) and have to be entered into the system. The first part is pretty easy, since the game masters will be "staff" and thus can be trusted to a certain degree not to mess with the system. But the second part bothers me quite a lot, especially since the actions may involve multiple steps and interactions with different players, like registering a deal with the system that then has to be approved by the other party, or denied and passed on to a game master to decide.

    I would of course like to separate the game logic as far as possible from the presentation and basic form validation, but I am unsure how to do this in a clean fashion. Of course I could (and will) put some effort into making my own architectural decisions and prototype different ideas. But I am bound to make some stupid mistakes at some point, so I would like to avoid some of that by getting a little "book smart" beforehand. So the question is: are there any architectural works that I can read up on? Papers, blogs, maybe design documents or even source code? Writing this down, it seems more like a business application with business rules, workflows and such... Any good entry points for that?
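    One way to picture the multi-step deal workflow described above is as an explicit state machine in the domain layer, kept away from controllers and views. The sketch below is only an illustration of that idea; all names (Deal, DealState, the player parameters) are hypothetical, and C# is just an illustrative choice:

        // Hypothetical sketch: a player-entered deal modeled as an explicit workflow.
        // All transition rules live here, so the presentation layer cannot bypass them.
        public enum DealState { Proposed, Approved, Escalated, Denied }

        public class Deal
        {
            public DealState State { get; private set; }
            public string ProposedBy { get; private set; }
            public string Counterparty { get; private set; }

            public Deal(string proposedBy, string counterparty)
            {
                ProposedBy = proposedBy;
                Counterparty = counterparty;
                State = DealState.Proposed;
            }

            // Only the counterparty may approve a proposed deal.
            public void Approve(string actingPlayer)
            {
                RequireTransition(State == DealState.Proposed && actingPlayer == Counterparty);
                State = DealState.Approved;
            }

            // A denied deal is passed on to a game master, as described above.
            public void Deny(string actingPlayer)
            {
                RequireTransition(State == DealState.Proposed && actingPlayer == Counterparty);
                State = DealState.Escalated;
            }

            // A game master settles escalated deals.
            public void Settle(bool gameMasterApproves)
            {
                RequireTransition(State == DealState.Escalated);
                State = gameMasterApproves ? DealState.Approved : DealState.Denied;
            }

            private static void RequireTransition(bool allowed)
            {
                if (!allowed)
                    throw new System.InvalidOperationException("Illegal workflow transition.");
            }
        }

    Keeping the transitions in one place like this means the web layer only translates requests into calls such as deal.Approve(currentPlayer), which is also roughly how business workflow engines model the problem.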


  • ALPS touchpad on DELL Inspiron I15RN-3647BK with Ubuntu 11.10 x64

    - by Miguel
    I can't get the multitouch features of the ALPS touchpad working on my Dell Inspiron I15RN-3647BK with Ubuntu 11.10 x64, kernel 3.0.0-17-generic. I have tried this driver, but it doesn't work: http://people.canonical.com/~sforshee/alps-touchpad/psmouse-alps-0.10/psmouse-alps-dkms_0.10_all.deb. This is the output of the following command:

        user@laptop:~$ xinput list
        ⎡ Virtual core pointer                     id=2    [master pointer  (3)]
        ⎜   ↳ Virtual core XTEST pointer           id=4    [slave  pointer  (2)]
        ⎜   ↳ PS/2 Generic Mouse                   id=12   [slave  pointer  (2)]
        ⎣ Virtual core keyboard                    id=3    [master keyboard (2)]
            ↳ Virtual core XTEST keyboard          id=5    [slave  keyboard (3)]
            ↳ Power Button                         id=6    [slave  keyboard (3)]
            ↳ Video Bus                            id=7    [slave  keyboard (3)]
            ↳ Power Button                         id=8    [slave  keyboard (3)]
            ↳ Sleep Button                         id=9    [slave  keyboard (3)]
            ↳ Laptop_Integrated_Webcam_HD          id=10   [slave  keyboard (3)]
            ↳ AT Translated Set 2 keyboard         id=11   [slave  keyboard (3)]
            ↳ Dell WMI hotkeys

    The output is the same with or without the ALPS driver from the Canonical repository, and the touchpad just works like a PS/2 generic mouse. Any ideas? Thanks in advance.


  • Disable touchpad tap-to-click on Oneiric Ocelot

    - by AWE
    You've heard this a million times, but the "tap to click" is a pain in the behind and I want to disable it. There is no touchpad tab in gpointing-device-settings, and none in Mouse and Touchpad in System Settings either. I've tried some commands in the terminal but it's all crap, and dconf-editor doesn't react. How about solving this once and for all? The output of xinput list:

        ⎡ Virtual core pointer                     id=2    [master pointer  (3)]
        ⎜   ↳ Virtual core XTEST pointer           id=4    [slave  pointer  (2)]
        ⎜   ↳ PS/2 Generic Mouse                   id=13   [slave  pointer  (2)]
        ⎣ Virtual core keyboard                    id=3    [master keyboard (2)]
            ↳ Virtual core XTEST keyboard          id=5    [slave  keyboard (3)]
            ↳ Power Button                         id=6    [slave  keyboard (3)]
            ↳ Video Bus                            id=7    [slave  keyboard (3)]
            ↳ Video Bus                            id=8    [slave  keyboard (3)]
            ↳ Power Button                         id=9    [slave  keyboard (3)]
            ↳ Sleep Button                         id=10   [slave  keyboard (3)]
            ↳ Laptop_Integrated_Webcam_HD          id=11   [slave  keyboard (3)]
            ↳ AT Translated Set 2 keyboard         id=12   [slave  keyboard (3)]
            ↳ Dell WMI hotkeys                     id=14   [slave  keyboard (3)]


  • WIFI/LAN not working after Installing Ubuntu 13.10

    - by user183025
    I can't connect to my WiFi after installing Ubuntu 13.10. I tried re-installing it three times (thinking that I did something wrong during the installation), but I still can't connect. It just gives me the message that I'm offline and can't connect to my WiFi. I also tried connecting to the internet using a LAN cable, with the same result. I tried Google, but it seems there's no solution to this yet. Does anybody know how to solve this? Thanks!

    FYI, the hardware details:

        H/W path     Device    Class        Description
        ======================================================
                               system       RC530/RC730 (To be filled by O.E.M.)
        /0                     bus          RC530/RC730
        /0/0                   memory       64KiB BIOS
        /0/4                   processor    Intel(R) Core(TM) i5-2410M CPU @ 2.30GHz
        /0/4/5                 memory       32KiB L1 cache
        /0/41                  memory       4GiB System Memory
        /0/41/0                memory       DIMM [empty]
        /0/41/1                memory       4GiB SODIMM DDR3 Synchronous 1333 MHz (0.8 ns)
        /0/100                 bridge       2nd Generation Core Processor Family DRAM Controller

        02:00.0 Network controller: Intel Corporation Centrino Wireless-N 130 (rev 34)
            Subsystem: Intel Corporation Centrino Wireless-N 130 BGN
            Flags: bus master, fast devsel, latency 0, IRQ 44
            Memory at f7200000 (64-bit, non-prefetchable) [size=8K]
            Capabilities: <access denied>
            Kernel driver in use: iwlwifi
            Kernel modules: iwlwifi

        03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 06)
            Subsystem: Samsung Electronics Co Ltd Device c0c1
            Flags: bus master, fast devsel, latency 0, IRQ 41
            I/O ports at b000 [size=256]
            Memory at f2104000 (64-bit, prefetchable) [size=4K]
            Memory at f2100000 (64-bit, prefetchable) [size=16K]
            Capabilities: <access denied>
            Kernel driver in use: r8169
            Kernel modules: r8169


  • Git Branch Model for iOS projects with one developer

    - by glenwayguy
    I'm using git for an iOS project, and so far have the following branch model:

        feature_branch (usually multiple) -> development -> testing -> master

    Feature branches are short-lived, used just to add a feature or bug fix, then merged back into development and deleted. Development is fairly stable, but not ready for production. Testing is for when we have a stable version with enough features for a new update, which we ship to beta testers. Once testing is finished, the code can be moved back into development or advanced into master.

    The problem, however, lies in the fact that we can't deploy instantly. On iOS, it can be several weeks between the time a build is released and when it actually hits users. I always want to have a version of the code that is currently on the market in my repo, but I also have to have a place to keep the current stable code to be sent for release. So:

    - where should I keep stable code,
    - where should I keep the code currently on the market, and
    - where should I keep the code that is in review with Apple and will (hopefully) be put on the market soon?

    Also, this is a one-developer team, so collaboration is not strictly necessary, but preferred because there may be more members in the future.


  • Looking to Implement/Upgrade Your MDM Solution? OOW Has the Session For You

    - by Mala Narasimharajan
    By Bala Mahalingam
    Hurray! Oracle Open World is next week. Oh my God! I need to plan my calendar for the MDM-focused sessions.

    The implementation or upgrade of an Oracle Master Data Management solution is art and science combined. This year at Open World, we have a dedicated session focused on sharing two great implementation stories of Oracle Customer Hub. You will also hear from Oracle about the implementation and upgrade approach and methodology for Oracle Master Data Management and Data Quality applications. If you are in the process of implementing or upgrading an MDM solution, or evaluating your options, and would like to hear directly from T-Mobile and Sony on their roadmap and implementation experience, then I highly recommend this session.

    Hope to see you at Oracle Open World 2012, and stay in touch via our future blogs. Look here for a list of all the MDM sessions at OpenWorld.


  • Blank screen during boot after clean Ubuntu 11.10 install (Intel N10 graphics)

    - by Coen
    After a clean install of Ubuntu 11.10 on my Asus eee PC 1005p, Ubuntu seems to boot correctly, except for initialization of the LCD screen. What I observe:

    1. I choose Ubuntu 11.10 in the GRUB 2 menu.
    2. A blank screen with a blinking cursor in the top left of the screen, for 15-20 seconds.
    3. The Ubuntu logo with 5 red dots in the center of the screen, for 1 second.
    4. The LCD screen is entirely blank.
    5. The startup sound plays (Ubuntu is configured to auto-login).
    6. Still, the LCD screen is entirely blank.

    When I press Fn-F8 (the switch between the LCD screen and external VGA), the LCD screen shows my desktop correctly and everything seems to work fine, except for the adjust-brightness buttons (Fn-F5 and Fn-F6): these seem to cycle through random brightness modes, something like 0% - 50% - 20% - 0% - 20% - 0%. Any ideas what's causing this or how to solve it?

        coen@elpicu:~$ lspci -v
        00:02.0 VGA compatible controller: Intel Corporation N10 Family Integrated Graphics Controller (prog-if 00 [VGA controller])
            Subsystem: ASUSTeK Computer Inc. Device 83ac
            Flags: bus master, fast devsel, latency 0, IRQ 44
            Memory at f7e00000 (32-bit, non-prefetchable) [size=512K]
            I/O ports at dc00 [size=8]
            Memory at d0000000 (32-bit, prefetchable) [size=256M]
            Memory at f7d00000 (32-bit, non-prefetchable) [size=1M]
            Expansion ROM at <unassigned> [disabled]
            Capabilities: <access denied>
            Kernel driver in use: i915
            Kernel modules: i915

        00:02.1 Display controller: Intel Corporation N10 Family Integrated Graphics Controller
            Subsystem: ASUSTeK Computer Inc. Device 83ac
            Flags: bus master, fast devsel, latency 0
            Memory at f7e80000 (32-bit, non-prefetchable) [size=512K]
            Capabilities: <access denied>


  • How can I configure the touchpad and keyboard settings on a Dell Inspiron 5110?

    - by Robik
    I am using Ubuntu 11.10 on a Dell Inspiron 5110 laptop and want the following three things:

    1. There is a button at the top right corner of the laptop which can be used to turn the screen off. It works in Windows but not in Ubuntu 11.10; even the laptop's manual says the button is supported only in Windows. Is there a way to activate it in Ubuntu 11.10?
    2. Some of the keys, like "break", are missing. Can I use other keys (or combinations of other keys) to function as those missing keys?
    3. In the "mouse and touchpad" program, there is no tab for the touchpad. I want to enable vertical and horizontal scrolling. How do I do that?

    The command xinput list shows:

        ⎡ Virtual core pointer                     id=2    [master pointer  (3)]
        ⎜   ↳ Virtual core XTEST pointer           id=4    [slave  pointer  (2)]
        ⎜   ↳ PS/2 Generic Mouse                   id=13   [slave  pointer  (2)]
        ⎣ Virtual core keyboard                    id=3    [master keyboard (2)]
            ↳ Virtual core XTEST keyboard          id=5    [slave  keyboard (3)]
            ↳ Power Button                         id=6    [slave  keyboard (3)]
            ↳ Video Bus                            id=7    [slave  keyboard (3)]
            ↳ Video Bus                            id=8    [slave  keyboard (3)]
            ↳ Power Button                         id=9    [slave  keyboard (3)]
            ↳ Sleep Button                         id=10   [slave  keyboard (3)]
            ↳ Laptop_Integrated_Webcam_HD          id=11   [slave  keyboard (3)]
            ↳ AT Translated Set 2 keyboard         id=12   [slave  keyboard (3)]
            ↳ Dell WMI hotkeys

    Please help!!


  • Value my C++ knowledge

    - by PirateOwh
    I have only followed antiRTFM's tutorials and read two books. I'll list the things I know best:

    - basic input/output, and all the variable types: integers (signed and unsigned), float, double, char, arrays
    - if, for, while, switch
    - functions, passing variables to functions, and return types
    - classes and the concept of OOP, with separating declaration and definition between the header and the source
    - pointers

    This, and some more I think, is all I know of C++. I need some exercises to test my knowledge, because I want to move on to the SDL library, and I don't know if I should feel ready to move on to something totally different. I feel I should know the basics well at least. So the questions are:

    - How can I value my C++ knowledge? Are there any online tests?
    - Is there any GDD (Game Design Document) that is free to use, so I can see if I can manage to implement it and "pass"? (I mention a GDD since I'll move on to SDL and try to make my own game.)
    - When should I move to SDL? What are ALL the things I should "master" ("master" is a big word, but you understand what I mean) before moving on?

    I'm really in need of expert advice. I think my question is detailed, so I hope you understand what I mean and can give me a good reply. Thanks for the help!


  • After installing Ubuntu my computer boots into GRUB rescue mode after displaying 'no such partition' error message.

    - by VictorVictor5
    I wanted Ubuntu on my Gateway Solo laptop. It had Win98 on C:\ and WinXP on D:\. Before the install, I could not boot from CD, but I could boot from floppy. I installed Ubuntu to run alongside Windows XP, as that was one of the options. It didn't detect Win98, but I swear it's on there. The install seemed fine, and then it asked me to reboot. When I rebooted, I got the message "error: no such partition" and the grub rescue> command prompt. I've looked around, but some commands, like sudo, don't work. One command I did get to work was set root=(hd0,0), if that helps. I'm a noob, and it was a pain installing Win98 and XP since this system is so old. I don't want to wipe my drive and start all over!

    Additional details copied from comments: I restored my master boot record via my Win98 boot floppy by typing fdisk /mbr. But I'd still like to get Ubuntu - any help? And if I restored my master boot record, did I delete Ubuntu?


  • IE7 doesn't render part of the page until the window is resized or I switch between tabs

    - by BlackMael
    I have a problem with IE7. I have a fixed layout that keeps the header and a side panel fixed on the page, leaving only the "main content" area, which can happily scroll its content. This layout works perfectly fine in IE6 and IE8, but sometimes one page may start "hiding" the content that should be showing in the "main content" area. The page finishes loading just fine. For a split second IE7 will render the main content correctly, and then it will immediately hide it from view... somewhere. It would also seem that it only experiences this problem when there is enough content to force the "main content" area to scroll. Resizing the window, or switching to another open tab and back again, will cause IE7 to show the page as it was intended. Note that the same problem occurs in IE8 in compatibility mode, but the page is rendered correctly in IE8 mode. If need be I can attach the basic CSS styling I use, but I first want to see if this is a known issue with IE7. Does IE7 have issues with positioned layout and overflow scrolling such that it sometimes forgets to finish rendering the page until some window redraw event forces it to? Please remember, this exact same layout is used across multiple pages in the site, as it is set up in a master page. It is just (in this case) one page that is experiencing this problem. Other pages with the exact same layout do render correctly, even if the main content is full enough to scroll.

    UPDATE: A related question which doesn't have an answer at this point.

    LATE UPDATE: Adding the example master page and CSS below. Please note this same layout is the same for all the pages in the application. My problem with IE7 only occurs on one such page; all other pages happily render correctly in IE7. Just one page, using the exact same layout, has issues where it sometimes hides the content in the "work-space" div.
    The master page:

        <%@ Master Language="VB" CodeFile="MasterPage.master.vb" Inherits="shared_templates_MasterPage" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
            <link rel="Stylesheet" type="text/css" href="~/common/yui/2.7.0/build/reset-fonts/reset-fonts.css" runat="server" />
            <link rel="Stylesheet" type="text/css" href="~/shared/css/layout.css" runat="server" />
            <asp:ContentPlaceHolder ID="head" runat="server" />
        </head>
        <body>
            <form id="form1" runat="server">
                <asp:ScriptManager ID="ScriptManager1" runat="server" />
                <div id="app-header">
                </div>
                <div id="side-panel">
                </div>
                <div id="work-space">
                    <asp:ContentPlaceHolder ID="WorkSpaceContentPlaceHolder" runat="server" />
                </div>
                <div id="status-bar">
                    <asp:ContentPlaceHolder ID="StatusBarContentPlaceHolder" runat="server" />
                </div>
            </form>
        </body>
        </html>

    The layout.css:

        html { overflow: hidden; }
        body {
            overflow: hidden;
            padding: 0;
            margin: 0;
            width: 100%;
            height: 100%;
            background-color: white;
        }
        body, table, td, th, select, textarea, input {
            font-family: Tahoma, Arial, Sans-Serif;
            font-size: 9pt;
        }
        p { padding-left: 1em; margin-bottom: 1em; }
        #app-header {
            position: absolute;
            top: 0; left: 0;
            width: 100%;
            height: 80px;
            background-color: #dcdcdc;
            border-bottom: solid 4px #000;
        }
        #side-panel {
            position: absolute;
            top: 84px; left: 0px; bottom: 0px;
            overflow: auto;
            padding: 0; margin: 0;
            width: 227px;
            background-color: #AABCCA;
            border-right: solid 1px black;
            background-repeat: repeat-x;
            padding-top: 5px;
        }
        #work-space {
            position: absolute;
            top: 84px; left: 232px; right: 0px; bottom: 22px;
            padding: 0; margin: 0;
            overflow: auto;
            background-color: White;
        }
        #status-bar {
            position: absolute;
            height: 20px;
            left: 228px; right: 0px; bottom: 0px;
            padding: 0; margin: 0;
            border-top: solid 1px #c0c0c0;
            background-color: #f0f0f0;
        }

    The Default.aspx:

        <%@ Page Title="Test" Language="VB" MasterPageFile="~/shared/templates/MasterPage.master"
            AutoEventWireup="false" CodeFile="Default.aspx.vb" Inherits="_Default" %>
        <asp:Content ID="WorkspaceContent" ContentPlaceHolderID="WorkSpaceContentPlaceHolder" Runat="Server">
            Workspace
            <asp:ListView ID="DemoListView" runat="server" DataSourceID="DemoObjectDataSource" ItemPlaceholderID="DemoPlaceHolder">
                <LayoutTemplate>
                    <table style="border: 1px solid #a0a0a0; width: 600px">
                        <colgroup>
                            <col width="80" />
                            <col />
                            <col width="80" />
                            <col width="120" />
                        </colgroup>
                        <tbody>
                            <asp:PlaceHolder ID="DemoPlaceHolder" runat="server" />
                        </tbody>
                    </table>
                </LayoutTemplate>
                <ItemTemplate>
                    <tr>
                        <th><%#Eval("ID")%></th>
                        <td><%#Eval("Name")%></td>
                        <td><%#Eval("Size")%></td>
                        <td><%#Eval("CreatedOn", "{0:yyyy-MM-dd HH:mm:ss}")%></td>
                    </tr>
                </ItemTemplate>
            </asp:ListView>
            <asp:ObjectDataSource ID="DemoObjectDataSource" runat="server"
                OldValuesParameterFormatString="original_{0}" SelectMethod="GetData" TypeName="DemoLogic">
                <SelectParameters>
                    <asp:Parameter Name="path" Type="String" />
                </SelectParameters>
            </asp:ObjectDataSource>
        </asp:Content>
        <asp:Content ID="StatusContent" ContentPlaceHolderID="StatusBarContentPlaceHolder" Runat="Server">
            Ready OK.
        </asp:Content>


  • What is wrong with my XSLT for the XML File?

    - by atrueguy
    Actually my XML file has SVG info, and my Project lead wants me to develop an XSLT for the XMl file to convert it in to a PDF file. But when I try to do so I am failing to convert the XML file to PDF, can anyone help me out in this....... My Sample XML file <?xml version="1.0" encoding="ISO-8859-1"?> <!--<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">--> <!-- Generator: Arbortext IsoDraw 7.0 --> <svg width="100%" height="100%" viewBox="0 0 214.819 278.002"> <g id="Standard_x0020_layer"/> <g id="Catalog"> <line stroke-width="0.353" stroke-linecap="butt" x1="5.839" y1="262.185" x2="209.039" y2="262.185"/> <text transform="matrix(0.984 0 0 0.93 183.515 265.271)" stroke="none" fill="#000000" font-family="'Helvetica'" font-size="3.174">© 2009 k Co.</text> <text transform="matrix(0.994 0 0 0.93 7.235 265.3)" stroke="none" fill="#000000" font-family="'Helvetica'" font-size="3.174">087156-8-</text> <text transform="matrix(0.995 0 0 0.93 21.708 265.357)" stroke="none" fill="#000000" font-family="'Helvetica'" font-size="3.174" font-weight="bold">AB</text> <text x="103.292" y="265.298" stroke="none" fill="#000000" font-family="'Helvetica'" font-size="3.174">P. 1/1</text> <g id="IC_TextBlock.1"> <g> <text transform="matrix(0.994 0 0 0.93 192.812 8.076)" stroke="none" fill="#000000" font-family="'Helvetica'" font-size="4.586" font-weight="bold">Fittings</text> <text transform="matrix(0.994 0 0 0.93 188.492 13.323)" stroke="none" fill="#000000" font-family="'Helvetica'" font-size="4.586" font-weight="bold">Raccords</text> <text transform="matrix(0.994 0 0 0.93 183.431 18.571)" stroke="none" fill="#000000" font-family="'Helvetica'" font-size="4.586" font-weight="bold">Conexiones</text> </g> </g> <g> <path stroke="none" fill="#000000" d="M26.507 12.628L26.507 4.977 28.599 4.977 28.599 10.673 30.946 10.673 30.946 12.628 26.507 12.628z"/> <path stroke="none" fill="#000000" d="M19.693 12.628L19.693 4.977 21.785 4.977 21.785 7.66 23.893 7.66 23.893 4.977 25.986 4.977 25.986 12.628 23.893 12.628 23.893 9.782 21.785 9.782 21.785 12.628 19.693 12.628z"/> <path stroke="none" fill="#000000" d="M12.587 4.977L9.566 8.621 13.019 12.631 10.25 12.63 7.905 9.9 7.9 9.9 7.9 12.628 5.81 12.628 5.81 4.977 7.9 4.977 7.9 7.267 7.884 7.27 9.875 4.977 12.587 4.977z"/> <path stroke="none" fill="#000000" d="M11.455 8.739C11.455 6.538 13.221 4.753 15.4 4.753L15.4 6.775C14.419 6.775 13.625 7.653 13.625 8.737 13.625 9.821 14.419 10.699 15.4 10.699 16.382 10.699 17.176 9.821 17.176 8.737 17.176 7.653 16.382 6.775 15.4 6.775L15.4 4.753C17.579 4.753 19.346 6.538 19.346 8.739 19.346 10.941 17.579 12.724 15.4 12.724 13.221 12.724 11.455 10.941 11.455 8.739z"/> <path stroke="none" fill="#000000" d="M33.472 4.977L35.621 4.977 35.621 6.74 33.521 6.743 33.515 7.952 35.454 7.952 35.454 9.664 33.518 9.664 33.518 10.833 35.64 10.833 35.64 12.628 33.491 12.628 31.376 12.628 31.376 4.977 33.472 4.977z"/> <path stroke="none" fill="#000000" d="M39.97 9.57L42.146 12.631 39.862 12.628 38.156 10.279 38.156 12.622 36.107 12.622 36.107 4.974 38.728 4.974 38.741 6.75 38.149 6.75 38.149 8.221 38.741 8.223C39.149 8.223 39.478 7.894 39.478 7.487 39.478 7.08 39.149 6.75 38.741 6.75L38.728 4.974C41.036 4.974 42.5 7.867 39.97 9.57z"/> <path stroke="none" fill="#000000" d="M42.415 12.205C42.415 11.82 42.72 11.512 43.106 11.512 43.49 11.512 43.796 11.82 43.796 12.205 43.796 12.586 43.49 12.894 43.106 12.894L43.106 12.73C43.402 12.73 43.631 12.51 43.631 12.205 43.631 11.894 43.402 11.676 43.106 11.676L43.179 
11.837C43.344 11.837 43.457 11.868 43.457 12.057 43.457 12.189 43.39 12.243 43.262 12.252L43.436 12.554 43.262 12.554 43.103 12.252 42.99 12.252 42.99 12.143 43.182 12.143C43.262 12.143 43.308 12.127 43.308 12.035 43.308 11.962 43.216 11.962 43.146 11.962L42.99 11.962 42.99 12.143 42.99 12.252 42.99 12.554 42.832 12.554 42.832 11.837 43.179 11.837 43.106 11.676C42.804 11.676 42.579 11.894 42.579 12.205 42.579 12.51 42.804 12.73 43.106 12.73L43.106 12.894C42.72 12.894 42.415 12.586 42.415 12.205z"/> <g> <path stroke="none" fill="#000000" d="M8.837 17.466L8.599 17.466 8.554 16.832 8.544 16.832C8.31 17.329 7.843 17.539 7.339 17.539 6.243 17.539 5.697 16.675 5.697 15.724 5.697 14.774 6.243 13.91 7.339 13.91 8.071 13.91 8.666 14.305 8.794 15.067L8.461 15.067C8.417 14.666 8.003 14.194 7.339 14.194 6.418 14.194 6.027 14.964 6.027 15.724 6.027 16.486 6.418 17.257 7.339 17.257 8.111 17.257 8.56 16.716 8.544 15.978L7.36 15.978 7.36 15.695 8.837 15.695 8.837 17.466z"/> <path stroke="none" fill="#000000" d="M9.477 13.984L11.881 13.984 11.881 14.266 9.807 14.266 9.807 15.525 11.749 15.525 11.749 15.807 9.807 15.807 9.807 17.182 11.906 17.182 11.906 17.466 9.477 17.466 9.477 13.984z"/> <path stroke="none" fill="#000000" d="M12.364 13.984L12.734 13.984 14.763 16.929 14.772 16.929 14.772 13.984 15.105 13.984 15.105 17.466 14.734 17.466 12.705 14.521 12.695 14.521 12.695 17.466 12.364 17.466 12.364 13.984z"/> <path stroke="none" fill="#000000" d="M15.768 13.984L16.1 13.984 16.1 16.14C16.094 16.949 16.48 17.257 17.118 17.257 17.763 17.257 18.147 16.949 18.143 16.14L18.143 13.984 18.475 13.984 18.475 16.213C18.475 16.929 18.089 17.539 17.118 17.539 16.153 17.539 15.768 16.929 15.768 16.213L15.768 13.984z"/> <path stroke="none" fill="#000000" d="M19.167 13.984L19.498 13.984 19.498 17.466 19.167 17.466 19.167 13.984z"/> <path stroke="none" fill="#000000" d="M20.221 13.984L20.591 13.984 22.62 16.929 22.629 16.929 22.629 13.984 22.961 13.984 22.961 17.466 22.591 17.466 20.562 14.521 20.553 14.521 20.553 17.466 20.221 17.466 20.221 13.984z"/> <path stroke="none" fill="#000000" d="M23.658 13.984L26.064 13.984 26.064 14.266 23.99 14.266 23.99 15.525 25.931 15.525 25.931 15.807 23.99 15.807 23.99 17.182 26.088 17.182 26.088 17.466 23.658 17.466 23.658 13.984z"/> <path stroke="none" fill="#000000" d="M27.908 13.984L29.452 13.984C30.077 13.984 30.487 14.349 30.487 14.978 30.487 15.608 30.077 15.974 29.452 15.974L28.239 15.974 28.239 15.691 29.379 15.691C29.838 15.691 30.155 15.457 30.155 14.978 30.155 14.5 29.838 14.266 29.379 14.266L28.239 14.266 28.239 15.691 28.239 15.974 28.239 17.466 27.908 17.466 27.908 13.984z"/> <path stroke="none" fill="#000000" d="M31.643 13.984L32.014 13.984 33.38 17.466 33.024 17.466 32.598 16.384 31.013 16.384 31.117 16.1 32.487 16.1 31.814 14.314 31.117 16.1 31.013 16.384 30.594 17.466 30.239 17.466 31.643 13.984z"/> <path stroke="none" fill="#000000" d="M33.695 13.984L35.292 13.984C35.866 13.984 36.35 14.262 36.35 14.891 36.35 15.33 36.121 15.691 35.671 15.778L35.671 15.788C36.125 15.846 36.256 16.16 36.28 16.574 36.296 16.812 36.296 17.291 36.442 17.466L36.076 17.466C35.993 17.329 35.993 17.071 35.984 16.925 35.954 16.437 35.915 15.896 35.286 15.919L34.029 15.919 34.029 15.637 35.267 15.637C35.671 15.637 36.018 15.384 36.018 14.96 36.018 14.535 35.765 14.266 35.267 14.266L34.029 14.266 34.029 15.637 34.029 15.919 34.029 17.466 33.695 17.466 33.695 13.984z"/> <path stroke="none" fill="#000000" d="M36.603 13.984L39.363 13.984 39.363 14.266 38.149 14.266 38.149 17.466 37.817 17.466 37.817 
14.266 36.603 13.984z"/>
    <path stroke="none" fill="#000000" d="M39.847 16.32C39.832 17.038 40.348 17.257 40.982 17.257 41.348 17.257 41.905 17.056 41.905 16.548 41.905 16.155 41.509 15.997 41.188 15.919L40.411 15.73C40.003 15.628 39.627 15.432 39.627 14.891 39.627 14.55 39.847 13.91 40.826 13.91 41.515 13.91 42.118 14.281 42.115 14.993L41.783 14.993C41.762 14.461 41.325 14.194 40.832 14.194 40.378 14.194 39.959 14.368 39.959 14.885 39.959 15.212 40.203 15.349 40.485 15.417L41.335 15.628C41.826 15.759 42.237 15.974 42.237 16.545 42.237 16.783 42.139 17.539 40.905 17.539 40.081 17.539 39.475 17.169 39.515 16.32L39.847 16.32z"/>
    </g> </g> </g> </svg>

    My Sample XSLT File

        <?xml version="1.0" encoding="ISO-8859-1"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:svg="http://www.w3.org/2000/svg">
          <xsl:template match="/">
            <fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format">
              <fo:layout-master-set>
                <fo:simple-page-master master-name="simple" page-height="11in" page-width="8.5in">
                  <fo:region-body margin="0.7in" margin-top="1.15in" margin-left=".8in"/>
                  <fo:region-before extent="1.5in"/>
                  <fo:region-after extent="1.5in"/>
                  <fo:region-start extent="1.5in"/>
                  <fo:region-end extent="1.5in"/>
                </fo:simple-page-master>
              </fo:layout-master-set>
              <fo:page-sequence master-reference="simple">
                <fo:flow flow-name="xsl-region-body">
                  <fo:block>
                    <fo:instream-foreign-object xmlns:svg="http://www.w3.org/2000/svg">
                      <svg:svg height="100%" width="100%" viewBox="0 0 214.819 278.002">
                        <!-- Caution: translate(@d,' ','') strips every space from the path data; if the source coordinates are space-separated, as in the sample XML above, this merges adjacent numbers. -->
                        <xsl:for-each select="svg/g/path">
                          <svg:g style="stroke:none;fill:#000000;stroke:black;">
                            <svg:path>
                              <xsl:variable name="s">
                                <xsl:value-of select="translate(@d,' ','')"/>
                              </xsl:variable>
                              <xsl:attribute name="d"><xsl:value-of select="translate($s,',',' ')"/></xsl:attribute>
                            </svg:path>
                          </svg:g>
                        </xsl:for-each>
                        <!-- Note: text() selects text nodes, not the <text> elements; select="svg/g/text" would copy the element content instead. -->
                        <xsl:for-each select="svg/g/text()">
                          <xsl:value-of select="."/>
                        </xsl:for-each>
                        <xsl:for-each select="svg/g/g/path">
                          <svg:g style="stroke:none;fill:#000000;stroke:black;">
                            <svg:path>
                              <xsl:variable name="s">
                                <xsl:value-of select="translate(@d,' ','')"/>
                              </xsl:variable>
                              <xsl:attribute name="d"><xsl:value-of select="translate($s,',',' ')"/></xsl:attribute>
                            </svg:path>
                          </svg:g>
                        </xsl:for-each>
                        <xsl:for-each select="svg/g/g/g/path">
                          <svg:g style="stroke:none;fill:#000000;">
                            <svg:path>
                              <xsl:variable name="s1">
                                <xsl:value-of select="translate(@d,' ','')"/>
                              </xsl:variable>
                              <xsl:attribute name="d"><xsl:value-of select="translate($s1,',',' ')"/></xsl:attribute>
                            </svg:path>
                          </svg:g>
                        </xsl:for-each>
                        <xsl:for-each select="svg/g/line">
                          <svg:g style="stroke-linecap:butt;">
                            <xsl:variable name="x1"><xsl:value-of select="@x1"/></xsl:variable>
                            <xsl:variable name="y1"><xsl:value-of select="@y1"/></xsl:variable>
                            <xsl:variable name="x2"><xsl:value-of select="@x2"/></xsl:variable>
                            <xsl:variable name="y2"><xsl:value-of select="@y2"/></xsl:variable>
                            <xsl:variable name="stroke-width"><xsl:value-of select="@stroke-width"/></xsl:variable>
                            <!-- The curly braces make these attribute value templates; without them the literal string "$x1" is written into the output. -->
                            <svg:line x1="{$x1}" y1="{$y1}" x2="{$x2}" y2="{$y2}" stroke-width="{$stroke-width}" stroke="black"/>
                          </svg:g>
                        </xsl:for-each>
                      </svg:svg>
                    </fo:instream-foreign-object>
                  </fo:block>
                </fo:flow>
              </fo:page-sequence>
            </fo:root>
          </xsl:template>
        </xsl:stylesheet>

    My Question

    I have developed the XSLT file for the XML and need to produce PDF output after processing it, but I am not able to get the XML data into my PDF. Please ask me if the information I have provided is not sufficient, as I am a bit new to Stack Overflow.


  • Creating packages in code – Execute SQL Task

    The Execute SQL Task is, for obvious reasons, very well used, so I thought that if you are building packages in code, the chances are you will be using it. The basic features of the task are quite straightforward to use: add the task and set some properties, just like any other. When you start interacting with variables, though, it can be a little harder to grasp, so these samples should see you through. Some of these more advanced features are explained in much more detail in our ever popular post The Execute SQL Task; here I'll just be showing you how to implement them in code.

    The abbreviated code blocks below demonstrate the different features of the task. The complete code has been encapsulated into a sample class which you can download (ExecSqlPackage.cs). Each feature described has its own method in the sample class, which is mentioned after the code block. This first sample just shows adding the task and setting the basic properties for a connection and, of course, an SQL statement.

        Package package = new Package();

        // Add the SQL OLE-DB connection
        ConnectionManager sqlConnection = AddSqlConnection(package, "localhost", "master");

        // Add the SQL Task
        package.Executables.Add("STOCK:SQLTask");

        // Get the task host wrapper
        TaskHost taskHost = package.Executables[0] as TaskHost;

        // Set required properties
        taskHost.Properties["Connection"].SetValue(taskHost, sqlConnection.ID);
        taskHost.Properties["SqlStatementSource"].SetValue(taskHost, "SELECT * FROM sysobjects");

    For the full version of this code, see the CreatePackage method in the sample class. The AddSqlConnection method is a helper method that adds an OLE-DB connection to the package; it is of course in the sample class file too.

    Returning a single value with a Result Set

    The following sample takes a different approach, getting a reference to the ExecuteSQLTask object itself, rather than just using the non-specific TaskHost as above. Whilst it means we need to add an extra reference to our project (Microsoft.SqlServer.SQLTask), it makes coding much easier, as we get compile-time validation of any properties and types we use. For the more complex properties that is very valuable and saves a lot of time during development. The query has also been changed to return a single value: one row and one column. The sample shows how we can return that value into a variable, which we also add to our package in the code. To do this manually you would set the Result Set property on the General page to Single Row and map the variable on the Result Set page in the editor.

        Package package = new Package();

        // Add the SQL OLE-DB connection
        ConnectionManager sqlConnection = AddSqlConnection(package, "localhost", "master");

        // Add the SQL Task
        package.Executables.Add("STOCK:SQLTask");

        // Get the task host wrapper
        TaskHost taskHost = package.Executables[0] as TaskHost;

        // Add variable to hold result value
        package.Variables.Add("Variable", false, "User", 0);

        // Get the task object
        ExecuteSQLTask task = taskHost.InnerObject as ExecuteSQLTask;

        // Set core properties
        task.Connection = sqlConnection.Name;
        task.SqlStatementSource = "SELECT id FROM sysobjects WHERE name = 'sysrowsets'";

        // Set single row result set
        task.ResultSetType = ResultSetType.ResultSetType_SingleRow;

        // Add result set binding, map the id column to the variable
        task.ResultSetBindings.Add();
        IDTSResultBinding resultBinding = task.ResultSetBindings.GetBinding(0);
        resultBinding.ResultName = "id";
        resultBinding.DtsVariableName = "User::Variable";

    For the full version of this code, see the CreatePackageResultVariable method in the sample class. The other types of Result Set behaviour are just a variation on this theme: set the property and map the result binding as required.

    Parameter Mapping for SQL Statements

    This final example uses a parameterised SQL statement, with the parameter value coming from a variable. The syntax varies slightly between connection types, as explained in the Working with Parameters and Return Codes in the Execute SQL Task help topic, but OLE-DB is the most commonly used, for which a question mark is the parameter value placeholder.

        Package package = new Package();

        // Add the SQL OLE-DB connection
        ConnectionManager sqlConnection = AddSqlConnection(package, ".", "master");

        // Add the SQL Task
        package.Executables.Add("STOCK:SQLTask");

        // Get the task host wrapper
        TaskHost taskHost = package.Executables[0] as TaskHost;

        // Get the task object
        ExecuteSQLTask task = taskHost.InnerObject as ExecuteSQLTask;

        // Set core properties
        task.Connection = sqlConnection.Name;
        task.SqlStatementSource = "SELECT id FROM sysobjects WHERE name = ?";

        // Add variable to hold parameter value
        package.Variables.Add("Variable", false, "User", "sysrowsets");

        // Add input parameter binding
        task.ParameterBindings.Add();
        IDTSParameterBinding parameterBinding = task.ParameterBindings.GetBinding(0);
        parameterBinding.DtsVariableName = "User::Variable";
        parameterBinding.ParameterDirection = ParameterDirections.Input;
        parameterBinding.DataType = (int)OleDBDataTypes.VARCHAR;
        parameterBinding.ParameterName = "0";
        parameterBinding.ParameterSize = 255;

    For the full version of this code, see the CreatePackageParameterVariable method in the sample class. You'll notice the data type has to be specified for the parameter via the IDTSParameterBinding.DataType property, and these type codes are connection specific too. The enumeration I wrote several years ago, shown below, was probably done by reverse engineering a package and the API header file, but I recently found a very handy post that covers more connections as well, for exactly this: Setting the DataType of IDTSParameterBinding objects (Execute SQL Task).

        /// <summary>
        /// Enumeration of OLE-DB types, used when mapping OLE-DB parameters.
        /// </summary>
        private enum OleDBDataTypes
        {
            BYTE = 0x11,
            CURRENCY = 6,
            DATE = 7,
            DB_VARNUMERIC = 0x8b,
            DBDATE = 0x85,
            DBTIME = 0x86,
            DBTIMESTAMP = 0x87,
            DECIMAL = 14,
            DOUBLE = 5,
            FILETIME = 0x40,
            FLOAT = 4,
            GUID = 0x48,
            LARGE_INTEGER = 20,
            LONG = 3,
            NULL = 1,
            NUMERIC = 0x83,
            NVARCHAR = 130,
            SHORT = 2,
            SIGNEDCHAR = 0x10,
            ULARGE_INTEGER = 0x15,
            ULONG = 0x13,
            USHORT = 0x12,
            VARCHAR = 0x81,
            VARIANT_BOOL = 11
        }

    Download Sample code: ExecSqlPackage.cs (10KB)
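    Two small, hedged additions to the samples above. First, a sketch of what the AddSqlConnection helper referred to throughout might look like; the actual implementation ships in the downloadable ExecSqlPackage.cs, so treat this as an illustration under assumptions rather than the author's code:

        // Sketch only: add an OLE-DB connection manager to the package.
        private static ConnectionManager AddSqlConnection(Package package, string server, string database)
        {
            // "OLEDB" is the stock creation name for OLE-DB connection managers.
            ConnectionManager connection = package.Connections.Add("OLEDB");
            connection.Name = string.Format("SQL {0}.{1}", server, database);
            connection.ConnectionString = string.Format(
                "Provider=SQLOLEDB.1;Data Source={0};Initial Catalog={1};Integrated Security=SSPI;",
                server, database);
            return connection;
        }

    Second, one concrete instance of the "variation on this theme" mentioned for the other result set types: a full result set must be received by a variable of type Object and bound by the ordinal name "0". Roughly, following the same pattern as the single-row sample:

        // Sketch only: full result set variation.
        package.Variables.Add("RowsVariable", false, "User", new object()); // Object-typed variable
        task.SqlStatementSource = "SELECT id, name FROM sysobjects";
        task.ResultSetType = ResultSetType.ResultSetType_Rowset;
        task.ResultSetBindings.Add();
        IDTSResultBinding rowsBinding = task.ResultSetBindings.GetBinding(0);
        rowsBinding.ResultName = "0"; // full result sets bind as ordinal 0, not by column name
        rowsBinding.DtsVariableName = "User::RowsVariable";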


  • SQL SERVER – Example of Performance Tuning for Advanced Users with DB Optimizer

    - by Pinal Dave
    Performance tuning is a subject that everyone wants to master. In the beginning everybody is at a novice level and spends lots of time learning the art of performance tuning. However, as we progress further, tuning the system keeps getting more difficult. I understood early in my career that there should be no need for ego in the technology field. There are always better solutions and better ideas out there, and we should not resist them. Instead of resisting the change and the new wave, I personally adopt it.

    Here is a similar example: as I progress toward the master level of performance tuning, I find it is getting harder to come up with optimal solutions. In such scenarios I rely on various tools to teach me how I can do things better. Once I learn about the tools, I am often able to come up with better solutions the next time I face a similar situation.

    A few days ago I received a query where the user wanted to tune it further to get the maximum performance out of it. I have re-written a similar query with the help of the AdventureWorks sample database.

        SELECT *
        FROM HumanResources.Employee e
        INNER JOIN HumanResources.EmployeeDepartmentHistory edh ON e.BusinessEntityID = edh.BusinessEntityID
        INNER JOIN HumanResources.Shift s ON edh.ShiftID = s.ShiftID;

    The user had a query similar to the one above; it was used in a very critical report, and he wanted to get the best out of it. When I looked at the query, here were my initial thoughts:

    - Use only the columns in the SELECT statement that the application actually needs.
    - Look at the query pattern and data workload, and find the optimal index for it.

    Before I could give further suggestions, I was told by the user that they need all the columns from all the tables, and creating an index was not allowed in their system. He could only re-write queries or use hints to tune this query further. Now I was in a constrained box: I believe * was not a great idea, but if they want all the columns, we can't do much besides using *. Additionally, if I cannot create an index, I must come up with some creative way to write this query. I personally do not like to use hints in my applications, but there are cases when hints work out magically and give optimal solutions. Finally, I decided to use Embarcadero's DB Optimizer. It is a fantastic tool and very helpful for performance tuning. I have previously explained how it works over here.

    First open DB Optimizer and create a Tuning Job from File >> New >> Tuning Job. Once the Tuning Job opens, paste the original script into Step 1 (New SQL Text), enable Step 2 to generate the various cases, Step 3 for detailed analysis and Step 4 to execute each generated case, and finally click Analysis in Step 5, which generates the detailed analysis report in the result pane.

    The analysis generates various cases of T-SQL based on the original query. It applies the available hints to the query, generates the various execution plans, and displays them in the result pane. You can clearly notice that the original query had a cost of 0.0841 and logical reads of about 607 pages, whereas the various options following it have different execution costs and logical reads. There are a few cases with higher logical reads, and a few cases with very low logical reads.

    If we pay attention, the very next row after the original query has Merge_Join_Query in its description, with the lowest execution cost (0.044) and the lowest logical reads (29). This row contains the query which is the most optimal re-write of the original query. Let us double click on it. Here is the query:

        SELECT *
        FROM HumanResources.Employee e
        INNER JOIN HumanResources.EmployeeDepartmentHistory edh ON e.BusinessEntityID = edh.BusinessEntityID
        INNER JOIN HumanResources.Shift s ON edh.ShiftID = s.ShiftID
        OPTION (MERGE JOIN)

    Notice that the query above has the additional hint MERGE JOIN. With the help of this query hint, the query now performs much better than before. The entire process takes less than 60 seconds.

    Please note that the MERGE JOIN hint was optimal for this query, but it is not necessarily helpful in all queries. Additionally, if the workload or data pattern changes, the merge join hint may no longer be optimal. In that case, we will have to redo the entire exercise once again. This is the reason I do not like to use hints in my queries, and I discourage all of my users from using them. However, if you look at this example, it is a great case where hints are optimizing the performance of the query.

    It is not humanly possible to test the various query hints and index options against a query to figure out the most optimal solution. Sometimes we need to depend on efficiency tools like DB Optimizer to guide us and then select the best option from the suggestions provided. Let me know what you think of this article, as well as your experience with DB Optimizer. Please leave a comment.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Joins, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • The Complementary Roles of PLM and PIM

    - by Ulf Köster
    Oracle Product Value Chain Solutions (aka Enterprise PLM Solutions) are a comprehensive set of product management solutions that work together to provide Oracle customers with a broad array of capabilities to manage all aspects of product life: innovation, design, launch, and supply chain / commercialization processes beyond the capabilities and boundaries of traditional engineering-focused Product Lifecycle Management applications. They support companies with an integrated managed view across the product value chain: From Lab to Launch, From Farm to Fork, From Concept to Product to Customer, From Product Innovation to Product Design and Product Commercialization. Product Lifecycle Management (PLM) represents a broad suite of software solutions to improve product-oriented business processes and data. PLM success stories prove that PLM helps companies improve time to market, increase product-related revenue, reduce product costs, reduce internal costs and improve product quality. As a maturing suite of enterprise solutions, PLM is still evolving to realize the promise it can provide across all facets of a business and all phases of the product lifecycle. The vision for PLM includes everything from gathering early requirements for a product through multiple stages of the product lifecycle from product design, through commercialization and eventual product retirement or replacement. In discrete or process industries, PLM is typically more focused on Product Definition as items with respect to the technical view of a material or part, including specifications, bills of material and manufacturing data. With Agile PLM, this is specifically related to capabilities addressing Product Collaboration, Governance and Compliance, Product Quality Management, Product Cost Management and Engineering Collaboration. PLM today is mainly addressing key requirements in the early product lifecycle, in engineering changes or in the “innovation cycle”, and primarily adds value related to product design, development, launch and engineering change process. In short, PLM is the master for Product Definition, wherever manufacturing takes place. Product Information Management (PIM) is a product suite that has evolved in parallel to PLM. Product Information Management (PIM) can extend the value of PLM implementations by providing complementary tools and capabilities. More relevant in the area of Product Commercialization, the vision for PIM is to manage product information throughout an enterprise and supply chain to improve product-related knowledge management, information sharing and synchronization from multiple data sources. PIM success stories have shown the ability to provide multiple benefits, with particular emphasis on reducing information complexity and information management costs. Product Information in PIM is typically treated as the commercial view of a material or part, including sales and marketing information and categorization. PIM collects information from multiple manufacturing sites and multiple suppliers into its repository, but also provides integration tools to push the information back out to the other systems, serving as an active central repository with the aim to provide a holistic view on any product sold by a company (hence the name “Product Hub”). In short, PIM is the master of commercial Product Information. 
    So PIM is quickly becoming mandatory because of its value in optimizing multichannel selling processes and relationships with customers, as you can see from the following comparison:

    Product Lifecycle
      PLM current state: primarily R&D; the front-end innovation cycle; the change process.
      PIM: primarily the commercial / transactional state of the lifecycle.
      Key benefit PIM adds to PLM: provides a seamless information flow from design and manufacturing through the ultimate selling and servicing of products.

    Data
      PLM current state: primarily focused on "item" vs. "product" data; product structures; specifications; technical information.
      PIM: repository for all product information; reaches out to the entire enterprise and its various silos of product information and descriptions.
      Key benefit PIM adds to PLM: provides a "trusted source" of accurate product information to the internal organization and trading partners.

    Data Lifecycle
      PLM current state: repository for all design iterations; historical information.
      PIM: released, current information, with version management and time stamping.
      Key benefit PIM adds to PLM: provides a single location to track and audit historical product information.

    Communication
      PLM current state: PLM releases the finished product to ERP; PLM is the master for Product Definition.
      PIM: captures information from disparate sources, including in-house data stores; recognizes the reality of today's data "mess" across information silos.
      Key benefit PIM adds to PLM: provides the ability to package product information for its audience in the desired, relevant format to meet their exacting business requirements.

    Departmental
      PLM current state: R&D; manufacturing; quality; compliance; procurement; strategic marketing.
      PIM: focus on marketing and sales; gathering information from other departments, multiple sites, multiple suppliers.
      Key benefit PIM adds to PLM: a singular enterprise solution that leverages existing information silos and data stores.

    Supply Chain
      PLM current state: multi-site internal collaboration; supplier collaboration; customer collaboration.
      PIM: works with customers, exchanges / data pools, and trading partners to provide relevant product information packaged the way the customer desires.
      Key benefit PIM adds to PLM: the ability to provide trading partners and internal customers with information in the manner they desire, continuously.

    Tools
      PLM current state: data management; collaboration; innovation management.
      PIM: cleansing; synchronization; hub functions.
      Key benefit PIM adds to PLM: consistent, clean and complete commercial product information.

    The goals of both PLM and PIM, put simply, are to help companies make more profit from their products. PLM and PIM solutions can easily be added together, as they share some of the same goals while coming from two different perspectives: the definition of the product and the commercialization of the product. Both can serve as a form of product "system of record", but they take different approaches to delivering value. Oracle Product Value Chain solutions offer rich new strategies for executives to collectively leverage Agile PLM and Product Data Hub, together with Enterprise Data Quality for Products and other industry-leading Oracle applications like Oracle Innovation Management, to achieve further incremental value. This is unique on the market today.


  • SharePoint 2010 Hosting :: How to Customize SharePoint 2010 Global Navigation

    - by mbridge
    Requirements - SharePoint Foundation or SharePoint Server 2010 site - SharePoint Designer 2010 Steps 1. The first step in my process was to download from codeplex a starter masterpage http://startermasterpages.codeplex.com/ . 2. Once you downloaded the starter master page, open up your SharePoint site in SharePoint Designer 2010 and on the left in the “Site Objects “ area click on the folder “All Files” and drill down to catalogs >> masterpages . Once you are in the Masterpage folder copy and paste the _starter.master into this folder. 3. The first step in the customization process is to create your custom style sheet. To create your custom style sheet, click on the “all Files” folder and click on “Style Library.” Right click in the style library section and choose Style sheet. Once the style sheet is created, rename it style.css. Now open the style sheet you created in SharePoint Designer. 4. In this next step you will copy and paste the SharePoint core styles for the global navigation into your custom style sheet. Copy and paste the css below into the style sheet and save file .s4-tn{ padding:0px; margin:0px; } .s4-tn ul.static{ white-space:nowrap; } .s4-tn li.static > .menu-item{ /* [ReplaceColor(themeColor:"Dark2")] */ color:#3b4f65; white-space:nowrap; border:1px solid transparent; padding:4px 10px; display:inline-block; height:15px; vertical-align:middle; } .s4-tn ul.dynamic{ /* [ReplaceColor(themeColor:"Light2")] */ background-color:white; /* [ReplaceColor(themeColor:"Dark2-Lighter")] */ border:1px solid #D9D9D9; } .s4-tn li.dynamic > .menu-item{ display:block; padding:3px 10px; white-space:nowrap; font-weight:normal; } .s4-tn li.dynamic > a:hover{ font-weight:normal; /* [ReplaceColor(themeColor:"Light2-Lighter")] */ background-color:#D9D9D9; } .s4-tn li.static > a:hover { /* [ReplaceColor(themeColor:"Accent1")] */ color:#44aff6; text-decoration:underline; } 5. Once you created the style sheet, go back to the masterpage folder and open the _starter.master file and in the Customization category click edit file. 6. Next, when the edit file opens make sure you view it in split view. Now you are going to search for the reference to our custom masterpage in the code. Make sure you are scrolled to the top in the code section and press “ctrl f” on the key board. This will pop up the find and replace tool. In the” find what field”, copy and paste and then click find next. 7. Now, in the code replace You have now referenced your custom style sheet in your masterpage. 8. The next step is to locate your Global Navigation control, make sure you are scrolled to the top in the code section and press “ctrl f” on the key board. This will pop up the find and replace tool. In the” find what field”, copy and paste ID="TopNavigationMenuV4” and then click find next. Once you find ID="TopNavigationMenuV4” , you should see the following block of code which is the global navigation control: ID="TopNavigationMenuV4" Runat="server" EnableViewState="false" DataSourceID="topSiteMap" AccessKey="" UseSimpleRendering="true" UseSeparateCss="false" Orientation="Horizontal" StaticDisplayLevels="1" MaximumDynamicDisplayLevels="1" SkipLinkText="" CssClass="s4-tn" 9. In the global navigation code above you should see CssClass="s4-tn" . As an additional step you can replace "s4-tn" your own custom name like CssClass="MyNav" . 
If you change the name of the CSS class, make sure you update your custom style sheet with the new name, as in the example below: .MyNav{ padding:0px; margin:0px; } .MyNav ul.static{ white-space:nowrap; } 10. At this point you are ready to brand your global navigation. The next step is to modify your style.css with your customizations to the default SharePoint styles. Have fun styling, and make sure you save your work often. Hope it helps!
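As a side note, if you would rather not edit the master page markup by hand, the same stylesheet can also be registered from code. Below is a minimal sketch assuming the SharePoint 2010 server object model; the page class name is hypothetical, and this is an illustration rather than part of the steps above.

using System;
using Microsoft.SharePoint.WebControls;

// Hypothetical application page; CssRegistration.Register queues the
// stylesheet so SharePoint emits the corresponding link element in the
// page head at render time.
public class BrandedNavPage : LayoutsPageBase
{
    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);
        CssRegistration.Register("/Style Library/Scripts/style.css");
    }
}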

    Read the article

  • Html.RenderAction Failed when Validation Failed

    - by Shaun
The RenderAction method had been introduced in the MvcFuture assembly when ASP.NET MVC 1.0 was released, and was then finally announced along with ASP.NET MVC 2.0. Similar to RenderPartial, RenderAction can display HTML markup defined in a partial view inside any parent view. But RenderAction gives us the ability to populate the data from an action which may differ from the action populating the main view. For example, in Home/Index.aspx we can invoke Html.RenderPartial("MyPartialView"), but the data of MyPartialView must be populated by the Index action of the Home controller. If we need MyPartialView to be shown in Product/Create.aspx, we have to copy (or invoke) the relevant code from the Index action in the Home controller to the Create action in the Product controller, which is painful. But if we are using Html.RenderAction we can tell ASP.NET MVC from which action/controller the data should be populated. That way, in the Home/Index.aspx and Product/Create.aspx views we just need to call Html.RenderAction("CreateMyPartialView", "MyPartialView"), and it will invoke the CreateMyPartialView action in the MyPartialView controller regardless of the main view. But in my current project we found a bug when I implemented a RenderAction method in the master page to show something that needs to connect to the backend data center: it failed whenever the validation logic failed on some pages. I created a sample application below.   Demo application I created an ASP.NET MVC 2 application where I need to display the current date and time on the master page. I created an action in the Home controller named TimeSlot and stored the current date into ViewData. This method was marked as HttpGet, as it just retrieves some data instead of changing anything. [HttpGet] public ActionResult TimeSlot() { ViewData["timeslot"] = DateTime.Now; return View("TimeSlot"); } Next, I created a partial view under the Shared folder to display the date and time string. <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<dynamic>" %> <span>Now: <%: ViewData["timeslot"].ToString() %></span> Then on the master page I used Html.RenderAction to display it in front of the logon link. <div id="logindisplay"> <% Html.RenderAction("TimeSlot", "Home"); %> <% Html.RenderPartial("LogOnUserControl"); %> </div> It's fairly simple and works well when I navigate to any page. But when I moved to the logon page and clicked the LogOn button without entering anything into username and password, the validation failed and my website crashed with the beautiful yellow page. (I really like its color style and fonts…)   How ASP.NET MVC executes Html.RenderAction In this example all other pages rendered successfully, which means ASP.NET MVC found the TimeSlot action under the Home controller, except in this situation. The only difference is that when I clicked the LogOn button the browser sent an HTTP POST request to the server. Is that the cause of this bug? I created another action in the Home controller with the same action name but for HttpPost. [HttpPost] [ActionName("TimeSlot")] public ActionResult TimeSlot(object dummy) { return TimeSlot(); } Or, I can use the AcceptVerbsAttribute on the TimeSlot action to let it allow both HttpGet and HttpPost. [AcceptVerbs("GET", "POST")] public ActionResult TimeSlot() { ViewData["timeslot"] = DateTime.Now; return View("TimeSlot"); } Then I repeated what I did before, and this time it worked well. 
Why do we need the HttpPost action here when it's just retrieving data? That is because of how ASP.NET MVC executes the RenderAction method. In the ASP.NET MVC source code we can see that, when performing RenderAction, ASP.NET MVC creates a RequestContext instance from the current RequestContext and a ChildActionMvcHandler instance, which inherits from the MvcHandler class. Then ASP.NET MVC processes the handler through the HttpContext.Server.Execute method. That means it performs the action as a stand-alone request and flushes the result into the TextWriter which is being used to render the current page. Since the request was an HTTP POST when I clicked LogOn, when ASP.NET MVC processed the ChildActionMvcHandler it looked for an action that allows the current request method, which is HttpPost. Our HttpGet-only TimeSlot method would therefore not be matched.   Summary In this post I described a bug in my current project regarding the new Html.RenderAction method provided with ASP.NET MVC 2 when processing an HTTP POST request. In the ASP.NET MVC world the underlying HTTP information has become more important than it was in the ASP.NET WebForms world. We need to pay more attention to which kind of request is currently being made and how ASP.NET MVC processes it.   Hope this helps, Shaun   All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
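For completeness, here is a minimal sketch of the same fix using the HttpVerbs flags overload of AcceptVerbsAttribute rather than the string overload shown above; it drops into the same Home controller and is simply an equivalent way to let one action answer both verbs.

// Equivalent to [AcceptVerbs("GET", "POST")]: the flags enum overload
// lets the single TimeSlot action match the child request spawned by
// Html.RenderAction from either a GET or a POST parent request.
[AcceptVerbs(HttpVerbs.Get | HttpVerbs.Post)]
public ActionResult TimeSlot()
{
    ViewData["timeslot"] = DateTime.Now;
    return View("TimeSlot");
}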

    Read the article

  • Repeated disconnects on WPA PEAP network

    - by exasperated
    My school has a WPA PEAP network with GTC inner authentication. I am able to connect to the network, but once I load a website or two, the network become unresponsive (i.e. in Chromium, it gets stuck at "Sending request"), and I'm eventually disconnected. Any help will be greatly appreciated. Here's some log output. I can provide more if needed: Ubuntu 13.04 3.8.0-32-generic x86_64 lsusb: 03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6235 (rev 24) lsmod: iwldvm                241872  0  mac80211              606457  1 iwldvm iwlwifi               173516  1 iwldvm cfg80211              511019  3 iwlwifi,mac80211,iwldvm dmesg: [    3.501227] iwlwifi 0000:03:00.0: irq 46 for MSI/MSI-X [    3.503541] iwlwifi 0000:03:00.0: loaded firmware version 18.168.6.1 [    3.527153] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEBUG disabled [    3.527162] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEBUGFS enabled [    3.527170] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEVICE_TRACING enabled [    3.527178] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEVICE_TESTMODE enabled [    3.527186] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_P2P disabled [    3.527192] iwlwifi 0000:03:00.0: Detected Intel(R) Centrino(R) Advanced-N 6235 AGN, REV=0xB0 [    3.527240] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [    3.551049] ieee80211 phy0: Selected rate control algorithm 'iwl-agn-rs' [  375.153065] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [  375.159727] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 [  375.553201] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [  375.559871] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 [ 1892.110738] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 1892.117357] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 [ 5227.235372] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 5227.242122] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 [ 5817.817954] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 5817.824560] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 [ 5824.571917] iwlwifi 0000:03:00.0 wlan0: disabling HT/VHT due to WEP/TKIP use [ 5824.571929] iwlwifi 0000:03:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP [ 5824.571935] iwlwifi 0000:03:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP [ 6956.290061] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 6956.296671] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 [ 6963.080560] iwlwifi 0000:03:00.0 wlan0: disabling HT/VHT due to WEP/TKIP use [ 6963.080566] iwlwifi 0000:03:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP [ 6963.080570] iwlwifi 0000:03:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP [ 7613.469241] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 7613.475870] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 [ 7620.201265] iwlwifi 0000:03:00.0 wlan0: disabling HT/VHT due to WEP/TKIP use [ 7620.201278] iwlwifi 0000:03:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP [ 7620.201285] iwlwifi 0000:03:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP [ 8232.762453] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 8232.769065] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 [ 8239.581772] iwlwifi 0000:03:00.0 wlan0: disabling HT/VHT due to WEP/TKIP use [ 8239.581784] iwlwifi 0000:03:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP [ 8239.581792] iwlwifi 0000:03:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP [13763.634808] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [13763.641427] iwlwifi 0000:03:00.0: Radio 
type=0x2-0x1-0x0 [16955.598953] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [16955.605574] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 lshw:    *-network        description: Wireless interface        product: Centrino Advanced-N 6235        vendor: Intel Corporation        physical id: 0        bus info: pci@0000:03:00.0        logical name: wlan0        version: 24        serial: b4:b6:76:a0:4b:3c        width: 64 bits        clock: 33MHz        capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless        configuration: broadcast=yes driver=iwlwifi driverversion=3.8.0-32-generic firmware=18.168.6.1 ip=10.250.169.96 latency=0 link=yes multicast=yes wireless=IEEE 802.11abgn        resources: irq:46 memory:f7c00000-f7c01fff iwlist scan: Cell 02 - Address: 24:DE:C6:B0:C7:D9                     Channel:36                     Frequency:5.18 GHz (Channel 36)                     Quality=29/70  Signal level=-81 dBm                       Encryption key:on                     ESSID:"CatChat2x"                     Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s                               36 Mb/s; 48 Mb/s; 54 Mb/s                     Mode:Master                     Extra:tsf=0000004ff3fe419b                     Extra: Last beacon: 27820ms ago                     IE: Unknown: 0009436174436861743278                     IE: Unknown: 01088C129824B048606C                     IE: Unknown: 030124                     IE: IEEE 802.11i/WPA2 Version 1                         Group Cipher : CCMP                         Pairwise Ciphers (1) : CCMP                         Authentication Suites (1) : 802.1x                     IE: Unknown: 2D1ACC011BFFFF000000000000000000000000000000000000000000                     IE: Unknown: 3D1624001B000000FF000000000000000000000000000000                     IE: Unknown: DD180050F2020101800003A4000027A4000042435E0062322F00                     IE: Unknown: DD1E00904C33CC011BFFFF000000000000000000000000000000000000000000                     IE: Unknown: DD1A00904C3424001B000000FF000000000000000000000000000000           Cell 04 - Address: 24:DE:C6:B0:C3:E9                     Channel:149                     Frequency:5.745 GHz                     Quality=28/70  Signal level=-82 dBm                       Encryption key:on                     ESSID:"CatChat2x"                     Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s                               36 Mb/s; 48 Mb/s; 54 Mb/s                     Mode:Master                     Extra:tsf=000000181f60e19c                     Extra: Last beacon: 28680ms ago                     IE: Unknown: 0009436174436861743278                     IE: Unknown: 01088C129824B048606C                     IE: Unknown: 030195                     IE: Unknown: 050400010000                     IE: IEEE 802.11i/WPA2 Version 1                         Group Cipher : CCMP                         Pairwise Ciphers (1) : CCMP                         Authentication Suites (1) : 802.1x                     IE: Unknown: 2D1ACC011BFFFF000000000000000000000000000000000000000000                     IE: Unknown: 3D1695001B000000FF000000000000000000000000000000                     IE: Unknown: DD180050F2020101800003A4000027A4000042435E0062322F00                     IE: Unknown: DD1E00904C33CC011BFFFF000000000000000000000000000000000000000000                     IE: Unknown: DD1A00904C3495001B000000FF000000000000000000000000000000                     IE: Unknown: DD07000B8601040817                     IE: Unknown: 
DD0E000B860103006170313930333032           Cell 09 - Address: 24:DE:C6:B0:C0:29                     Channel:149                     Frequency:5.745 GHz                     Quality=39/70  Signal level=-71 dBm                       Encryption key:on                     ESSID:"CatChat2x"                     Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s                               36 Mb/s; 48 Mb/s; 54 Mb/s                     Mode:Master                     Extra:tsf=00000112fb688ede                     Extra: Last beacon: 27716ms ago ifconfig (while connected): wlan0     Link encap:Ethernet  HWaddr b4:b6:76:a0:4b:3c             inet addr:10.250.16.220  Bcast:10.250.31.255  Mask:255.255.240.0           inet6 addr: fe80::b6b6:76ff:fea0:4b3c/64 Scope:Link           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1           RX packets:230023 errors:0 dropped:0 overruns:0 frame:0           TX packets:130970 errors:0 dropped:0 overruns:0 carrier:0           collisions:0 txqueuelen:1000            RX bytes:255999759 (255.9 MB)  TX bytes:16652605 (16.6 MB) iwconfig (while connected): wlan0     IEEE 802.11abgn  ESSID:"CatChat2x"             Mode:Managed  Frequency:5.745 GHz  Access Point: 24:DE:C6:B0:C0:29              Bit Rate=6 Mb/s   Tx-Power=15 dBm              Retry  long limit:7   RTS thr:off   Fragment thr:off           Power Management:off           Link Quality=36/70  Signal level=-74 dBm             Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0           Tx excessive retries:0  Invalid misc:3   Missed beacon:0

    Read the article

  • jQuery with SharePoint solutions

    - by KunaalKapoor
For me jQuery is the 'Plan B' for everything, and most of my projects include the use of jQuery for something or other, so I decided to write a small note on what works best when using jQuery along with SharePoint. I prefer to use the jQuery JavaScript library, which is far more robust, easier to use, and allows for plugins. Follow the steps below to add jQuery to your master page. For Office 365, the preferred location to add jQuery files is the "Site Assets" library.

Deployment Best Practices
- Scripts are only as good as the context in which they are referenced. In other words, take your environment into account before applying them.
- Script your deployment options: a folder in SPD, the file system, or external references. The jQuery library is on the Microsoft Ajax Content Delivery Network. You may even choose to publish to and from the document library (there are pros and cons to this approach).
- Reference options when referencing the script:
  - ScriptLink will make sure it's loaded at the top of the page and only loaded once. You need Visual Studio or SPD.
  - Content Editor Web Part (CEWP): drop it on the page and it's there. Easy, but dangerous.
  - Custom Actions: great for global deployments of jQuery. Loads it on every page, and it also works in sandbox installations.

Deployment Maintenance Don'ts
- Don't add scripts directly to your master page. That's way too much effort, because the pages are hard to maintain.
- Don't add scripts directly to the CEWP. Use a content link instead; that allows for reuse, and if you or someone else deletes the CEWP you won't lose the code in the web part.
- Security: any script runs with the same privileges as the current user. In other words, you can't get into trouble.

Development Best Practices
- Don't abuse the DOM. There are better options to load the DOM without hitting it 1,000 times.
- Use other performance boosters.
- Try other libraries. Try some custom code.
- Avoid string conversion.
- Minify your files.
- Use CAML to reduce the number of returned rows.
- Only update your jQuery library AFTER RIGOROUS REGRESSION TESTING.
- CRUD operations can come with some fun. SPServices wraps SharePoint's web services for execution.
- The Bing SDK is pretty easy to use. You can add it to your page with a script, put it into a content editor web part and connect it from the address parameters in a list.

Steps
1. Go to jquery.com and download the latest jQuery library to your desktop. You want to get the compressed production version, not the development version.
2. Open SharePoint Designer (SPD) and connect to the root level of your site's site collection. In SPD, open the "Style Library" folder. Create a folder named "Scripts" inside the Style Library. Drag the jQuery library JavaScript file from your desktop into the Scripts folder. In the Scripts folder, create a new JavaScript file and name it (e.g. "actions.js").
3. If you are using Visual Studio, add a folder for the js files. You can create a new folder at the root level or, if you prefer cleaner solutions like me, you can use the layouts folder, which cleans out on deactivation/uninstall.
4. Within the <head> tag of the master page, add a script reference to the jQuery library just above the content placeholder named "PlaceHolderAdditionalPageHead" (and above your custom CSS references, if applicable) as follows: <script src="/Style%20Library/Scripts/{jquery library file}.js" type="text/javascript"></script> Immediately after the jQuery library reference, add a script reference to your custom scripts file as follows: <script src="/Style%20Library/Scripts/actions.js" type="text/javascript"></script> Inside your script tag, you can test whether jQuery is already defined and, if not, add it to the page: <script type='text/javascript'> if (typeof jQuery == 'undefined') document.write('<scr'+'ipt type="text/javascript" src="http://code.jquery.com/jquery-1.6.1.min.js"></sc'+'ript>'); </script>

For the inquisitive few... read on if you'd like :)

Why jQuery on SharePoint is Awesome
It's all about that visual wow factor. You can get past the "But it looks like SharePoint" reaction: take a long list view, put it into jQuery with pagination, etc., and you are the hero. It's also about new controls you get with jQuery that you couldn't build before.

Why jQuery with SharePoint can be Awful
Although it's fairly easy to get jQuery up and running, copy/paste can cause problems. If you don't understand what it's doing in the Client Object Model and the Document Object Model, it will do things on your site that are completely unexpected. Many blogs will note workarounds they employed on their sites.
- Why it's not working: debugging "sucks". You need to develop small blocks of functionality and test them by putting in some alerts and console.log. Set breakpoints and monitor the DOM via Firebug and some IE development tools.
- Performance: it happens all the time, but you should look at the tradeoffs. More time may give you more functionality.
- Consistency: "But it works fine on my computer." So test on many browsers, and take client resources into account.
- Harm the farm: you need to code wisely and test negatively. Don't be the cause of a DoS attack that's really jQuery asking for a resource over and over and over again. So code wisely, do negative testing, and monitor server resources. They also did a demo where jQuery ran an endless loop to pull data from a list. It's a poor decision but also an easy mistake: they spiked their server resources within a couple of seconds and had to shut down the call before it brought the farm down.

Conclusion
jQuery is now another tool in your toolkit. You don't have to use it; use it where it makes sense and where it helps you get your job done. Don't abuse it, or you will pay for it later. It will add to page bloat, so take that into account, and it can slow your performance.

    Read the article

  • SQL Monitor’s data repository: Alerts

    - by Chris Lambrou
In my previous post, I introduced the SQL Monitor data repository, and described how the monitored objects are stored in a hierarchy in the data schema, in a series of tables with a _Keys suffix. In this post I had planned to describe how the actual data for the monitored objects is stored in corresponding tables with _StableSamples and _UnstableSamples suffixes. However, I'm going to postpone that until my next post, as I've had a request from a SQL Monitor user to explain how alerts are stored. In the SQL Monitor data repository, alerts are stored in tables belonging to the alert schema, which contains the following five tables: alert.Alert alert.Alert_Cleared alert.Alert_Comment alert.Alert_Severity alert.Alert_Type In this post, I'm only going to cover the alert.Alert and alert.Alert_Type tables. I may cover the other three tables in a later post. The most important table in this schema is alert.Alert, as each row in this table corresponds to a single alert. So let's have a look at it. SELECT TOP 100 AlertId, AlertType, TargetObject, [Read], SubType FROM alert.Alert ORDER BY AlertId DESC;  AlertIdAlertTypeTargetObjectReadSubType 165550397:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:,10 265549387:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:,10 365548187:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 465547157:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 565546147:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 665545187:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 765544157:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 865543147:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 965542187:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 1065541147:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 11…     So what are we seeing here, then? Well, AlertId is an auto-incrementing identity column, so ORDER BY AlertId DESC ensures that we see the most recent alerts first. AlertType indicates the type of each alert, such as Job failed (6), Backup overdue (14) or Long-running query (12). The TargetObject column indicates which monitored object the alert is associated with. The Read column acts as a flag to indicate whether or not the alert has been read. And finally the SubType column is used in the case of a Custom metric (40) alert, to indicate which custom metric the alert pertains to. Okay, now let's look at some of those columns in more detail. The AlertType column is an easy one to start with, and it brings us nicely to the next table, alert.Alert_Type. Let's have a look at what's in this table: SELECT AlertType, Event, Monitoring, Name, Description FROM alert.Alert_Type ORDER BY AlertType;  AlertTypeEventMonitoringNameDescription 1100Processor utilizationProcessor utilization (CPU) on a host machine stays above a threshold percentage for longer than a specified duration 2210SQL Server error log entryAn error is written to the SQL Server error log with a severity level above a specified value. 3310Cluster failoverThe active cluster node fails, causing the SQL Server instance to switch nodes. 4410DeadlockSQL deadlock occurs. 
5500Processor under-utilizationProcessor utilization (CPU) on a host machine remains below a threshold percentage for longer than a specified duration 6610Job failedA job does not complete successfully (the job returns an error code). 7700Machine unreachableHost machine (Windows server) cannot be contacted on the network. 8800SQL Server instance unreachableThe SQL Server instance is not running or cannot be contacted on the network. 9900Disk spaceDisk space used on a logical disk drive is above a defined threshold for longer than a specified duration. 101000Physical memoryPhysical memory (RAM) used on the host machine stays above a threshold percentage for longer than a specified duration. 111100Blocked processSQL process is blocked for longer than a specified duration. 121200Long-running queryA SQL query runs for longer than a specified duration. 131400Backup overdueNo full backup exists, or the last full backup is older than a specified time. 141500Log backup overdueNo log backup exists, or the last log backup is older than a specified time. 151600Database unavailableDatabase changes from Online to any other state. 161700Page verificationTorn Page Detection or Page Checksum is not enabled for a database. 171800Integrity check overdueNo entry for an integrity check (DBCC DBINFO returns no date for dbi_dbccLastKnownGood field), or the last check is older than a specified time. 181900Fragmented indexesFragmentation level of one or more indexes is above a threshold percentage. 192400Job duration unusualThe duration of a SQL job duration deviates from its baseline duration by more than a threshold percentage. 202501Clock skewSystem clock time on the Base Monitor computer differs from the system clock time on a monitored SQL Server host machine by a specified number of seconds. 212700SQL Server Agent Service statusThe SQL Server Agent Service status matches the status specified. 222800SQL Server Reporting Service statusThe SQL Server Reporting Service status matches the status specified. 232900SQL Server Full Text Search Service statusThe SQL Server Full Text Search Service status matches the status specified. 243000SQL Server Analysis Service statusThe SQL Server Analysis Service status matches the status specified. 253100SQL Server Integration Service statusThe SQL Server Integration Service status matches the status specified. 263300SQL Server Browser Service statusThe SQL Server Browser Service status matches the status specified. 273400SQL Server VSS Writer Service statusThe SQL Server VSS Writer status matches the status specified. 283501Deadlock trace flag disabledThe monitored SQL Server’s trace flag cannot be enabled. 293600Monitoring stopped (host machine credentials)SQL Monitor cannot contact the host machine because authentication failed. 303700Monitoring stopped (SQL Server credentials)SQL Monitor cannot contact the SQL Server instance because authentication failed. 313800Monitoring error (host machine data collection)SQL Monitor cannot collect data from the host machine. 323900Monitoring error (SQL Server data collection)SQL Monitor cannot collect data from the SQL Server instance. 334000Custom metricThe custom metric value has passed an alert threshold. 344100Custom metric collection errorSQL Monitor cannot collect custom metric data from the target object. 
Basically, alert.Alert_Type is just a big reference table containing information about the 34 different alert types supported by SQL Monitor (note that the largest id is 41, not 34 – some alert types have been retired since SQL Monitor was first developed). The Name and Description columns are self-evident, and I'm going to skip over the Event and Monitoring columns as they're not very interesting. The AlertType column is the primary key, and is referenced by AlertType in the alert.Alert table. As such, we can rewrite our earlier query to join these two tables, in order to provide a more readable view of the alerts: SELECT TOP 100 AlertId, Name, TargetObject, [Read], SubType FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType ORDER BY AlertId DESC;  AlertIdNameTargetObjectReadSubType 165550Monitoring error (SQL Server data collection)7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:,00 265549Monitoring error (host machine data collection)7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:,00 365548Integrity check overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 465547Log backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 565546Backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 665545Integrity check overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 765544Log backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 865543Backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 965542Integrity check overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 1065541Backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 Okay, the next column to discuss in the alert.Alert table is TargetObject. Oh boy, this one's a bit tricky! The TargetObject of an alert is a serialized string representation of the position in the monitored object hierarchy of the object to which the alert pertains. The serialization format is somewhat convenient for parsing in the C# source code of SQL Monitor, and has some helpful characteristics, but it's probably very awkward to manipulate in T-SQL. I could document the serialization format here, but it would be very dry reading, so perhaps it's best to consider an example from the table above. Have a look at the alert with an AlertId of 65543. It's a Backup overdue alert for the SqlMonitorData database running on the default instance of granger, my laptop. Each different alert type is associated with a specific type of monitored object in the object hierarchy (I described the hierarchy in my previous post). The Backup overdue alert is associated with databases, whose position in the object hierarchy is root → Cluster → SqlServer → Database. The TargetObject value identifies the target object by specifying the key properties at each level in the hierarchy, thus: Cluster: Name = "granger" SqlServer: Name = "" (an empty string, denoting the default instance) Database: Name = "SqlMonitorData" Well, look at the actual TargetObject value for this alert: "7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,". 
It is indeed composed of three parts, one for each level in the hierarchy: Cluster: "7:Cluster,1,4:Name,s7:granger," SqlServer: "9:SqlServer,1,4:Name,s0:," Database: "8:Database,1,4:Name,s14:SqlMonitorData," Each part is handled in exactly the same way, so let's concentrate on the first part, "7:Cluster,1,4:Name,s7:granger,". It comprises the following: "7:Cluster," – This identifies the level in the hierarchy. "1," – This indicates how many different key properties there are to uniquely identify a cluster (we saw in my last post that each cluster is identified by a single property, its Name). "4:Name,s7:granger," – This represents the Name property, and its corresponding value, granger. It's split up like this: "4:Name," – Indicates the name of the key property. "s" – Indicates the type of the key property, in this case, it's a string. "7:granger," – Indicates the value of the property. At this point, you might be wondering about the format of some of these strings. Why is the string "Cluster" stored as "7:Cluster,"? Well, an encoding scheme is used, which consists of the following: "7" – This is the length of the string "Cluster" ":" – This is a delimiter between the length of the string and the actual string's contents. "Cluster" – This is the string itself. 7 characters. "," – This is a final terminating character that indicates the end of the encoded string. You can see that "4:Name,", "8:Database," and "14:SqlMonitorData," also conform to the same encoding scheme. In the example above, the "s" character is used to indicate that the value of the Name property is a string. If you explore the TargetObject property of alerts in your own SQL Monitor data repository, you might find other characters used for other non-string key property values. The different value types you might possibly encounter are as follows: "I" – Denotes a bigint value. For example, "I65432,". "g" – Denotes a GUID value. For example, "g32116732-63ae-4ab5-bd34-7dfdfb084c18,". "d" – Denotes a datetime value. For example, "d634815384796832438,". The value is stored as a bigint, rather than a native SQL datetime value. I'll describe how datetime values are handled in the SQL Monitor data repository in a future post. I suggest you have a look at the alerts in your own SQL Monitor data repository for further examples, so you can see how the TargetObject values are composed for each of the different types of alert. Let me give one further example, though, that represents a Custom metric alert, as this will help in describing the final column of interest in the alert.Alert table, SubType. Let me show you the alert I'm interested in: SELECT AlertId, a.AlertType, Name, TargetObject, [Read], SubType FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType WHERE AlertId = 65769;  AlertIdAlertTypeNameTargetObjectReadSubType 16576940Custom metric7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2,02 An AlertType value of 40 corresponds to the Custom metric alert type. The Name taken from the alert.Alert_Type table is simply Custom metric, but this doesn't tell us anything about the specific custom metric that this alert pertains to. That's where the SubType value comes in. For custom metric alerts, this provides us with the Id of the specific custom alert definition that can be found in the settings.CustomAlertDefinitions table. 
I don’t really want to delve into custom alert definitions yet (maybe in a later post), but an extra join in the previous query shows us that this alert pertains to the CPU pressure (avg runnable task count) custom metric alert. SELECT AlertId, a.AlertType, at.Name, cad.Name AS CustomAlertName, TargetObject, [Read], SubType FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType JOIN settings.CustomAlertDefinitions cad ON a.SubType = cad.Id WHERE AlertId = 65769;  AlertIdAlertTypeNameCustomAlertNameTargetObjectReadSubType 16576940Custom metricCPU pressure (avg runnable task count)7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2,02 The TargetObject value in this case breaks down like this: "7:Cluster,1,4:Name,s7:granger," – Cluster named "granger". "9:SqlServer,1,4:Name,s0:," – SqlServer named "" (the default instance). "8:Database,1,4:Name,s6:master," – Database named "master". "12:CustomMetric,1,8:MetricId,I2," – Custom metric with an Id of 2. Note that the hierarchy for a custom metric is slightly different compared to the earlier Backup overdue alert. It’s root → Cluster → SqlServer → Database → CustomMetric. Also notice that, unlike Cluster, SqlServer and Database, the key property for CustomMetric is called MetricId (not Name), and the value is a bigint (not a string). Finally, delving into the custom metric tables is beyond the scope of this post, but for the sake of avoiding any future confusion, I’d like to point out that whilst the SubType references a custom alert definition, the MetricID value embedded in the TargetObject value references a custom metric definition. Although in this case both the custom metric definition and custom alert definition share the same Id value of 2, this is not generally the case. Okay, that’s enough for now, not least because as I’m typing this, it’s almost 2am, I have to go to work tomorrow, and my alarm is set for 6am – eek! In my next post, I’ll either cover the remaining three tables in the alert schema, or I’ll delve into the way SQL Monitor stores its monitoring data, as I’d originally planned to cover in this post.
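To make the encoding scheme concrete, here is a minimal C# sketch of a decoder for one level of a TargetObject value, based purely on the format described above. The class and method names are hypothetical (SQL Monitor's real parser is internal), and only string ('s') values are handled; the 'I', 'g' and 'd' types would need their own readers.

using System;

static class TargetObjectDecoder
{
    // Decodes one hierarchy level, e.g. "7:Cluster,1,4:Name,s7:granger,"
    // prints: Cluster.Name = granger
    public static void DecodeLevel(string s, ref int pos)
    {
        string level = ReadString(s, ref pos);     // e.g. "Cluster"
        int keyCount = ReadInt(s, ref pos);        // number of key properties
        for (int i = 0; i < keyCount; i++)
        {
            string name = ReadString(s, ref pos);  // e.g. "Name"
            char type = s[pos++];                  // 's' = string value
            if (type != 's')
                throw new NotSupportedException("Only string values handled here.");
            string value = ReadString(s, ref pos); // e.g. "granger"
            Console.WriteLine("{0}.{1} = {2}", level, name, value);
        }
    }

    // Reads a length-prefixed token: "<length>:<content>,"
    static string ReadString(string s, ref int pos)
    {
        int colon = s.IndexOf(':', pos);
        int len = int.Parse(s.Substring(pos, colon - pos));
        string value = s.Substring(colon + 1, len);
        pos = colon + 1 + len + 1;                 // skip content and trailing ','
        return value;
    }

    // Reads a bare integer token: "<digits>,"
    static int ReadInt(string s, ref int pos)
    {
        int comma = s.IndexOf(',', pos);
        int value = int.Parse(s.Substring(pos, comma - pos));
        pos = comma + 1;
        return value;
    }
}

Calling DecodeLevel three times over the Backup overdue example, "7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,", prints the cluster, instance and database key values in turn.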

    Read the article

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
A few days ago, Sandra Mueller (twitter | blog) asked a question using twitter’s #sqlhelp hash tag: “Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn’t referenced in the query?” Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven’t asked for.  In general, that’s quite correct; however there are cases where SQL Server might sneakily retrieve a LOB column… Example Table Here’s a T-SQL script to create the test table and populate it with 1,000 rows: CREATE TABLE dbo.LOBtest ( pk INTEGER IDENTITY NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( some_value, lob_data ) SELECT TOP (1000) N.n, @Data FROM Numbers N WHERE N.n <= 1000; Test 1: A Simple Update Let’s run a query to subtract one from every value in the some_value column: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; As you might expect, modifying this integer column in 1,000 rows doesn’t take very long, or use many resources.  The STATISTICS IO and TIME output shows a total of 9 logical reads, and 25ms elapsed time.  The query plan is also very simple: Looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan: The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed.  The some_value column is used by the Compute Scalar to calculate the new value.  (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT). Test 2: Simple Update with an Index Now let’s create a nonclustered index keyed on the some_value column, with lob_data as an included column: CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); This is not a useful index for our simple update query; imagine that someone else created it for a different purpose.  Let’s run our update query again: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; We find that it now requires 4,014 logical reads and the elapsed query time has increased to around 100ms.  The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index. The query plan is very similar to before (click to enlarge): The Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update.  SQL Server may be able to skip maintaining the nonclustered index if the value hasn’t changed (see my previous post on non-updating updates for details).  Our simple query does change the value of some_value in every row, so this optimization doesn’t add any value in this specific case. The output list of columns from the Clustered Index Scan hasn’t changed from the one shown previously: SQL Server still just reads the pk and some_value columns.  Cool. 
Overall then, adding the nonclustered index hasn’t had any startling effects, and the LOB column data still isn’t being read from the table.  Let’s see what happens if we make the nonclustered index unique. Test 3: Simple Update with a Unique Index Here’s the script to create a new unique index, and drop the old one: CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); GO DROP INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest; Remember that SQL Server only enforces uniqueness on index keys (the some_value column).  The lob_data column is simply stored at the leaf-level of the non-clustered index.  With that in mind, we might expect this change to make very little difference.  Let’s see: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; Whoa!  Now look at the elapsed time and logical reads: Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   CPU time = 172 ms, elapsed time = 16172 ms. Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column? The Query Plan The query plan for test 3 looks a bit more complex than before: In fact, the bottom level is exactly the same as we saw with the non-unique index.  The top level has heaps of new stuff though, which I’ll come to in a moment. You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason).  After all, we need to explain where all the LOB logical reads are coming from.  Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns – so what’s doing the LOB reads? Updates that Sneakily Read Data We have to go as far as the Clustered Index Update operator before we see LOB data in the output list: [Expr1020] is a bit flag added by an earlier Compute Scalar.  It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier). The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD.  The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column.  At this point, the clustered index has already been updated with the new value, but we haven’t touched the nonclustered index yet. An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation.  SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update.  The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then.  Sneaky! Why the LOB Data Is Needed This is all very interesting (I hope), but why is SQL Server reading the LOB data?  For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update? The answer relates to the top row of the query plan for test 3.  
I’ll reproduce it here for convenience: Notice that this is a wide (per-index) update plan.  SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too.  I’ll talk more about this difference shortly. The Split/Sort/Collapse combination is an optimization, which aims to make per-index update plans more efficient.  It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index. Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3.  If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4.  The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4. By applying net changes, SQL Server can also avoid false unique-key violations.  If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3 of course) and the query would fail.  You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down.  That’s fine, but it’s not possible to generalize this logic to work with every possible update query. SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations.  It’s worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits.  As a result, you will often receive a wide update plan, even when you can see that no violations are possible. Another benefit of this optimization is that it includes a sort on the index key as part of its work.  Processing the index changes in index key order promotes sequential I/O against the nonclustered index. A side-effect of all this is that the net changes might include one or more inserts.  In order to insert a new row in the index, SQL Server obviously needs all the columns – the key column and the included LOB column.  This is the reason SQL Server reads the LOB data as part of the Clustered Index Update. In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs).  In order to generate the correct index key delete operation, it needs the old key value. The irony is that in this case the Split/Sort/Collapse optimization is anything but.  Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it. Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates. Beating the Set-Based Update with a Cursor One situation where SQL Server can see that false unique-key violations aren’t possible is where it can guarantee that only one row is being updated.  
Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data: SET NOCOUNT ON; SET STATISTICS XML, IO, TIME OFF;   DECLARE @PK INTEGER, @StartTime DATETIME; SET @StartTime = GETUTCDATE();   DECLARE curUpdate CURSOR LOCAL FORWARD_ONLY KEYSET SCROLL_LOCKS FOR SELECT L.pk FROM LOBtest L ORDER BY L.pk ASC;   OPEN curUpdate;   WHILE (1 = 1) BEGIN FETCH NEXT FROM curUpdate INTO @PK;   IF @@FETCH_STATUS = -1 BREAK; IF @@FETCH_STATUS = -2 CONTINUE;   UPDATE dbo.LOBtest SET some_value = some_value - 1 WHERE CURRENT OF curUpdate; END;   CLOSE curUpdate; DEALLOCATE curUpdate;   SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE()); That completes the update in 1280 milliseconds (remember test 3 took over 16 seconds!) I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it.  One could just as well use a WHERE clause that specified the primary key value instead. Clustered Indexes A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index.  Let’s re-create the test table and data with an updatable primary key, and without any non-clustered indexes: IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL DROP TABLE dbo.LOBtest; GO CREATE TABLE dbo.LOBtest ( pk INTEGER NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( pk, some_value, lob_data ) SELECT TOP (1000) N.n, N.n, @Data FROM Numbers N WHERE N.n <= 1000; Now here’s a query to modify the cluster keys: UPDATE dbo.LOBtest SET pk = pk + 1; The query plan is: As you can see, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection.  In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan: The performance is not great, as you might expect (even though there is no non-clustered index to maintain): Table 'LOBtest'. Scan count 1, logical reads 2011, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   Table 'Worktable'. Scan count 1, logical reads 2040, physical reads 0, read-ahead reads 0, lob logical reads 34000, lob physical reads 0, lob read-ahead reads 8000.   SQL Server Execution Times: CPU time = 483 ms, elapsed time = 17884 ms. Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool. If you try the same test with a non-unique clustered index (rather than a primary key), you’ll get a much more efficient plan that just passes the cluster key (including uniqueifier) around (no LOB data or other non-key columns): A unique non-clustered index (on a heap) works well too: Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads.  (In fact the heap is rather more efficient). There are lots more fun combinations to try that I don’t have space for here. Final Thoughts The behaviour shown in this post is not limited to LOB data by any means.  
If the conditions are met, any unique index that has included columns can produce similar behaviour – something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps. Paul White Email: [email protected] Twitter: @PaulWhiteNZ

    Read the article

  • CodePlex Daily Summary for Sunday, November 20, 2011

    CodePlex Daily Summary for Sunday, November 20, 2011Popular ReleasesFree SharePoint 2010 Sites Templates: SharePoint Server 2010 Sites Templates: here is the list of sites templates to be downloadednopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.30: Highlight features & improvements: • Performance optimization. • Back in stock notifications. • Product special price support. • Catalog mode (based on customer role) To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).Cetacean Monitoring: Cetacean Monitoring Project Release V 0.1: This is a zip with a working executable for evaluation purposes.WPF Converters: WPF Converters V1.2.0.0: support for enumerations, value types, and reference types in the expression converter's equality operators the expression converter now handles DependencyProperty.UnsetValue as argument values correctly (#4062) StyleCop conformance (more or less)Json.NET: Json.NET 4.0 Release 4: Change - JsonTextReader.Culture is now CultureInfo.InvariantCulture by default Change - KeyValurPairConverter no longer cares about the order of the key and value properties Change - Time zone conversions now use new TimeZoneInfo instead of TimeZone Fix - Fixed boolean values sometimes being capitalized when converting to XML Fix - Fixed error when deserializing ConcurrentDictionary Fix - Fixed serializing some Uris returning the incorrect value Fix - Fixed occasional error when...Media Companion: MC 3.423b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) Replaced 'Rebuild' with 'Refresh' throughout entire code. Rebuild will now be known as Refresh. mc_com.exe has been fully updated TV Show Resolutions... Resolved issue #206 - having to hit save twice when updating runtime manually Shrunk cache size and lowered loading times f...Delta Engine: Delta Engine Beta Preview v0.9.1: v0.9.1 beta release with lots of refactoring, fixes, new samples and support for iOS, Android and WP7 (you need a Marketplace account however). If you want a binary release for the games (like v0.9.0), just say so in the Forum or here and we will quickly prepare one. It is just not much different from v0.9.0, so I left it out this time. See http://DeltaEngine.net/Wiki.Roadmap for details.ASP.net Awesome Samples (Web-Forms): 1.0 samples: Full Demo VS2008 Very Simple Demo VS2010 (demos for the ASP.net Awesome jQuery Ajax Controls)SharpMap - Geospatial Application Framework for the CLR: SharpMap-0.9-AnyCPU-Trunk-2011.11.17: This is a build of SharpMap from the 0.9 development trunk as per 2011-11-17 For most applications the AnyCPU release is the recommended, but in case you need an x86 build that is included to. For some dataproviders (GDAL/OGR, SqLite, PostGis) you need to also referense the SharpMap.Extensions assembly For SqlServer Spatial you need to reference the SharpMap.SqlServerSpatial assemblySQL Monitor - tracking sql server activities: SQLMon 4.1 alpha 5: 1. added basic schema support 2. added server instance name and process id 3. fixed problem with object search index out of range 4. improved version comparison with previous/next difference navigation 5. 
remember main window splitter and object explorer splitter position

AJAX Control Toolkit: November 2011 Release: AJAX Control Toolkit Release Notes - November 2011 Release, Version 51116. November 2011 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4 - Binary – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 - Binary – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Notes: the current version of the AJAX Control Toolkit is not compatible with ASP.NET 2.0. The latest version that is compatible with ASP.NET 2.0 can be found h...
MVC Controls Toolkit: Mvc Controls Toolkit 1.5.5: Added: the DateRangeAttribute now accepts complex expressions containing "Now" and "Today" as static minimum and maximum. Menu and MenuFor helpers capable of handling a "currently selected element"; the developer can choose between a standard nested menu based on the standard SimpleMenuItem class or an item template based on a custom class. Also added helpers to build the tree structure containing all the data items the menu takes its info from. Improved the pager; now the developer ...
SharpCompress - a fully native C# library for RAR, 7Zip, Zip, Tar, GZip, BZip2: SharpCompress 0.7: Reworked the API to be more consistent (see the supported formats table). Added some more helper methods, e.g. OpenEntryStream (RarArchive/RarReader does not support this). Fixed up tests.
Silverlight Toolkit: Windows Phone Toolkit - Nov 2011 (7.1 SDK): This release is coming soon! What's new: ListPicker once again works in a ScrollViewer. LongListSelector bug fixes around OutOfRange exceptions, wrong ordering of items, grouping issues, and scrolling events. ItemTuple is now refactored into the public type LongListSelectorItem to give users better access to the values in selection-changed handlers. PerformanceProgressBar binding fix for IsIndeterminate (item 9767 and others). There is no longer a GestureListener dependency with the C...
DotNetNuke® Community Edition: 06.01.01: Major highlights: fixed problem with the core skin object rendering CSS above the other framework-inserted files, which caused problems when using core style skin objects; fixed issue with iFrames getting removed when content is saved; fixed issue with the HTML module removing styling and scripts from the content; fixed issue with inserting the link to jQuery after the header of the page. Security fixes: none. Updated modules/providers: Modules: HTML version 6.1.0; Providers: none.
DotNetNuke Performance Settings: 01.00.00: First release of DotNetNuke SQL update queries to set the DNN installation for optimal performance. Please review and rate this release (stars are welcome).
SCCM Client Actions Tool: SCCM Client Actions Tool v0.8: SCCM Client Actions Tool v0.8 is currently the latest version. It comes with the following changes since the last version: added "Wake On LAN" action (WOL.EXE is now included); added new action "Get all active advertisements" to list all machine-based advertisements on remote computers; added new action "Get all active user advertisements" to list all user-based advertisements for logged-on users on remote computers; added config.ini setting "enablePingTest" to control whether ping test is ru...
C.B.R. : Comic Book Reader: CBR 0.3: New features: add magnifier size and scale; new file info view in the backstage; add dynamic properties on book and settings; sorting and grouping in the explorer with new design; rework on conversion: Images, PDF, Cbr/rar, Cbz/zip, Xps to the destination formats Images, Cbz and XPS. Improvements: suppress MainViewModel and ExplorerViewModel dependencies; add view notifications and Messages from MVVM Light for ViewModel=>View notifications; make threading better on open catalog, no more UI freeze, less t...
Desktop Google Reader: 1.4.2: This release removes the like and broadcast buttons, as Google Reader stopped supporting them (no, we don't like this decision...). Additionally, and to have at least a small plus: the login window now automatically logs you in if you stored username and password (no more extra click needed). Finally, added WebKit .NET to the about window and removed Awesomium. MD5 hash: 5fccf25a2fb4fecc1dc77ebabc8d3897. SHA hash: d44ff788b123bd33596ad1a75f3b9fa74a862fdb
RDRemote: Remote Desktop remote configurator V 1.0.0: Remote Desktop remote configurator V 1.0.0

New Projects:

Autonomous Robot Combat: The project is about a robotic game arena where 6-8 robots will engage in team combat using infrared guns. The infrared guns will also have red LEDs to simulate muzzle flash. Once the project finishes, it will be displayed at the University of Plymouth.
B2BPro: B2BPro, best solution for business.
Battlestar Galactica Fighters: Battlestar Galactica Fighters is a 3D vertical-scrolling shoot 'em up. It's developed in C# and F# for the XNA Programming exam at the Master in Computer Game Development 2011/2012 in Verona, Italy.
Content Compiler 3 - Compile your XNA media outside Visual Studio!: The Content Compiler helps you compile your media files for use with XNA without using Visual Studio. After almost three years of development, the third version of the CCompiler is nearly finished and we decided to put it up on CodePlex to "keep it alive".
CreaMotion NHibernate Class Builder: NHibernate class builder in C# and WPF. Supports all type relations; supports MsSql and MySql. Specially developed for NHibernate learners.
ecs_tqdt_kd: Electronic customs system for electronic business customs clearance.
IMDb Helper: IMDb Helper is a C# library that provides access to information on the IMDb website. IMDb Helper uses web requests to access the IMDb website and regular expressions to parse the responses (it doesn't use any external library, only pure .NET).
Lion Solution: An open source accounting project for small business usage. Developed by Sleiman Jneidi and Hussein Zawawi.
Management of Master Lists in SharePoint with Choice Filter Lookup column: Management of master lists in SharePoint with a Choice Filter Lookup column. You can view the details of the project at http://sharepointarrow.blogspot.com/2011/11/management-of-master-lists-in.html
MRDS FezMini Robot Brick: An attempt to write services for MRDS to control a FezMini robot with a wireless connection attached to COM2 on the FezMini board.
MVC Route Unit Tester: Provides convenient, easy-to-use methods that let you unit test the route table in your ASP.NET MVC application. Unlike many libraries, this lets you test routes both ways, incoming and outgoing. You can specify an incoming request and assert that it matches a given route (or that there are no matches). You can also specify route data and assert that a given URL will be generated by your application.
MyWalk: MyWalk (codename: MyLife) is a novel health application that makes tracking walking a part of daily life.
NGeo: NGeo makes it easier for users of geographic data to invoke GeoNames and Yahoo! GeoPlanet / PlaceFinder services. You'll no longer have to write your own GeoNames, GeoPlanet, or PlaceFinder clients. It's developed in ASP.NET 4.0 and uses WCF ServiceModel libraries to deserialize JSON data into plain old C# objects.
Octopus Tools: Octopus is an automated deployment server for .NET applications, powered by NuGet. OctopusTools is a set of useful command-line and MSBuild tasks designed for automating Octopus.
PDV Moveis: Point of sale for a furniture store.
Reddit#: Reddit# is a Reddit library for C# or other .NET languages.
Rubik: Rubik is, simply, a stab at creating a decent implementation of a Rubik's cube in WPF, and in the process applying MVVM to the 3D game milieu.
Sample Service-Oriented Architecture: Sample of service-oriented architecture using WCF.
SFinger: SFinger adds two-finger scrolling to Synaptics touchpads on Windows.
SigemFinal: Final version of the design project.
SlResource: Silverlight resource management.
Sonce - Simple ON-line Circuit Editor: Circuit editor in Silverlight; an unfinished student project written in C#.
Tricofil: Tricofil site with a content administrator.
Twincat Ads .Net Client: The client implementation of the Ads/Ams protocol created by Beckhoff (I'm not affiliated with Beckhoff). This implementation is in C# and does not depend on other libraries, which means it can be used in Silverlight and Windows Phone projects. This project is not finished yet!
WomanMagazine: An online magazine for women with lots of info and entertainment resources.

    Read the article

  • ASP.NET MVC 2 disable cache for browser back button in partial views

    - by brainnovative
    I am using Html.RenderAction<CartController>(c => c.Show()); on my master page to display the cart on all pages. The problem is when I add an item to the cart and then hit the browser back button: it shows the old cart (from cache) until I hit the refresh button or navigate to another page. I've tried this and it works perfectly, but it disables the cache globally for the whole page and for all pages in my site (since this action method is used on the master page). I need to keep caching enabled for several other partial views (action methods) for performance reasons. I wouldn't like to use client-side script with AJAX to refresh the cart (and login view) on page load, but that's the only solution I can think of right now. Does anyone know better?
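
    One possible answer: instead of a global no-cache directive, emit no-store headers from inside the cart child action itself. A child action writes to the response of the page that hosts it, so only pages that actually render the cart are excluded from the browser cache, and the back button re-requests them; server-side output caching of other partials is a separate mechanism and stays in place. This is a minimal sketch, not tested against MVC 2 specifically; the class and action names follow the question, while the view name and everything else are assumptions:

        using System;
        using System.Web;
        using System.Web.Mvc;

        public class CartController : Controller
        {
            // Rendered from the master page via Html.RenderAction.
            public ActionResult Show()
            {
                // These headers land on the hosting page's response, so the
                // browser will not serve this page from cache on Back.
                Response.Cache.SetCacheability(HttpCacheability.NoCache);
                Response.Cache.SetNoStore();
                return PartialView();
            }
        }

    The trade-off is that every page hosting the cart loses browser caching of the full page, which is usually acceptable when the cart must always show current state.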

    Read the article

  • Unexpected start of already-primary server processes when heartbeat on secondary is stopped.

    - by vorik
    Hi, I've got an active-passive Heartbeat cluster with Apache, MySQL, ActiveMQ and DRBD. Today I wanted to perform hardware maintenance on the secondary node (node04), so I stopped the heartbeat service before shutting it down. The primary node (node03) then received a shutdown notice from the secondary node (node04). This logging comes from the primary node:

        node03 heartbeat[4458]: 2010/03/08_08:52:56 info: Received shutdown notice from 'node04.companydomain.nl'.
        heartbeat[4458]: 2010/03/08_08:52:56 info: Resources being acquired from node04.companydomain.nl.
        harc[27522]: 2010/03/08_08:52:56 info: Running /etc/ha.d/rc.d/status status
        heartbeat[27523]: 2010/03/08_08:52:56 info: Local Resource acquisition completed.
        mach_down[27567]: 2010/03/08_08:52:56 info: /usr/share/heartbeat/mach_down: nice_failback: foreign resources acquired
        mach_down[27567]: 2010/03/08_08:52:56 info: mach_down takeover complete for node node04.companydomain.nl.
        heartbeat[4458]: 2010/03/08_08:52:56 info: mach_down takeover complete.
        harc[27620]: 2010/03/08_08:52:56 info: Running /etc/ha.d/rc.d/ip-request-resp ip-request-resp
        ip-request-resp[27620]: 2010/03/08_08:52:56 received ip-request-resp drbddisk OK yes
        ResourceManager[27645]: 2010/03/08_08:52:56 info: Acquiring resource group: node03.companydomain.nl drbddisk Filesystem::/dev/drbd0::/data::ext3 mysql apache::/etc/httpd/conf/httpd.conf LVSSyncDaemonSwap::master monitor activemq tivoli-cluster MailTo::[email protected]::DRBDFailureDrisAcc MailTo::[email protected]::DRBDFailureDrisAcc 1.2.3.212
        ResourceManager[27645]: 2010/03/08_08:52:56 info: Running /etc/ha.d/resource.d/drbddisk start
        Filesystem[27700]: 2010/03/08_08:52:57 INFO: Running OK
        ResourceManager[27645]: 2010/03/08_08:52:57 info: Running /etc/ha.d/resource.d/mysql start
        mysql[27783]: 2010/03/08_08:52:57 Starting MySQL[ OK ]
        apache[27853]: 2010/03/08_08:52:57 INFO: Running OK
        ResourceManager[27645]: 2010/03/08_08:52:57 info: Running /etc/ha.d/resource.d/monitor start
        monitor[28160]: 2010/03/08_08:52:58
        ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/activemq start
        activemq[28210]: 2010/03/08_08:52:58 Starting ActiveMQ Broker... ActiveMQ Broker is already running.
        ResourceManager[27645]: 2010/03/08_08:52:58 ERROR: Return code 1 from /etc/ha.d/resource.d/activemq
        ResourceManager[27645]: 2010/03/08_08:52:58 CRIT: Giving up resources due to failure of activemq
        ResourceManager[27645]: 2010/03/08_08:52:58 info: Releasing resource group: node03.companydomain.nl drbddisk Filesystem::/dev/drbd0::/data::ext3 mysql apache::/etc/httpd/conf/httpd.conf LVSSyncDaemonSwap::master monitor activemq tivoli-cluster MailTo::[email protected]::DRBDFailureDrisAcc MailTo::[email protected]::DRBDFailureDrisAcc 1.2.3.212
        ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/IPaddr 1.2.3.212 stop
        IPaddr[28329]: 2010/03/08_08:52:58 INFO: ifconfig eth0:0 down
        IPaddr[28312]: 2010/03/08_08:52:58 INFO: Success
        ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/MailTo [email protected] DRBDFailureDrisAcc stop
        MailTo[28378]: 2010/03/08_08:52:58 INFO: Success
        ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/MailTo [email protected] DRBDFailureDrisAcc stop
        MailTo[28433]: 2010/03/08_08:52:58 INFO: Success
        ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/tivoli-cluster stop
        ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/activemq stop
        activemq[28503]: 2010/03/08_08:53:01 Stopping ActiveMQ Broker... Stopped ActiveMQ Broker.
        ResourceManager[27645]: 2010/03/08_08:53:01 info: Running /etc/ha.d/resource.d/monitor stop
        monitor[28681]: 2010/03/08_08:53:01
        ResourceManager[27645]: 2010/03/08_08:53:01 info: Running /etc/ha.d/resource.d/LVSSyncDaemonSwap master stop
        LVSSyncDaemonSwap[28714]: 2010/03/08_08:53:02 info: ipvs_syncmaster down
        LVSSyncDaemonSwap[28714]: 2010/03/08_08:53:02 info: ipvs_syncbackup up
        LVSSyncDaemonSwap[28714]: 2010/03/08_08:53:02 info: ipvs_syncmaster released
        ResourceManager[27645]: 2010/03/08_08:53:02 info: Running /etc/ha.d/resource.d/apache /etc/httpd/conf/httpd.conf stop
        apache[28782]: 2010/03/08_08:53:03 INFO: Killing apache PID 18390
        apache[28782]: 2010/03/08_08:53:03 INFO: apache stopped.
        apache[28771]: 2010/03/08_08:53:03 INFO: Success
        ResourceManager[27645]: 2010/03/08_08:53:03 info: Running /etc/ha.d/resource.d/mysql stop
        mysql[28851]: 2010/03/08_08:53:24 Shutting down MySQL.....................[ OK ]
        ResourceManager[27645]: 2010/03/08_08:53:24 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd0 /data ext3 stop
        Filesystem[29010]: 2010/03/08_08:53:25 INFO: Running stop for /dev/drbd0 on /data
        Filesystem[29010]: 2010/03/08_08:53:25 INFO: Trying to unmount /data
        Filesystem[29010]: 2010/03/08_08:53:25 ERROR: Couldn't unmount /data; trying cleanup with SIGTERM
        Filesystem[29010]: 2010/03/08_08:53:25 INFO: Some processes on /data were signalled
        Filesystem[29010]: 2010/03/08_08:53:27 INFO: unmounted /data successfully
        Filesystem[28999]: 2010/03/08_08:53:27 INFO: Success
        ResourceManager[27645]: 2010/03/08_08:53:27 info: Running /etc/ha.d/resource.d/drbddisk stop
        heartbeat[4458]: 2010/03/08_08:53:29 WARN: node node04.companydomain.nl: is dead
        heartbeat[4458]: 2010/03/08_08:53:29 info: Dead node node04.companydomain.nl gave up resources.
        heartbeat[4458]: 2010/03/08_08:53:29 info: Link node04.companydomain.nl:eth0 dead.
        heartbeat[4458]: 2010/03/08_08:53:29 info: Link node04.companydomain.nl:eth1 dead.
        hb_standby[29193]: 2010/03/08_08:53:57 Going standby [foreign].
        heartbeat[4458]: 2010/03/08_08:53:57 info: node03.companydomain.nl wants to go standby [foreign]

    So... what just happened here? Heartbeat on node04 stopped and told node03, which was the active node at the time. Somehow, node03 decided to start the cluster processes that were already running. (For the processes that are not critical, I always return 0 from the startup script, so a failure of a non-essential part does not stop the entire cluster.) When starting ActiveMQ, it returned status 1 because it was already running. This failed the node and shut everything down. As heartbeat was not running on the secondary node, it could not fail over to it. When I tried to run ha_takeover to restart the resources, absolutely nothing happened. Only after I restarted heartbeat on the primary node could the resources be started (after a delay of 2 minutes).

    These are my questions:
    1. Why does heartbeat on the primary node try to start the cluster processes again?
    2. Why did ha_takeover not work?
    3. What can I do to prevent this from happening?

    Server configuration:

    DRBD:
        version: 8.3.7 (api:88/proto:86-91)
        GIT-hash: ea9e28dbff98e331a62bcbcc63a6135808fe2917 build by [email protected], 2010-01-20 09:14:48
        0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate B r----
        ns:0 nr:6459432 dw:6459432 dr:0 al:0 bm:301 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0

    uname -a:
        Linux node04 2.6.18-164.11.1.el5 #1 SMP Wed Jan 6 13:26:04 EST 2010 x86_64 x86_64 x86_64 GNU/Linux

    haresources:
        node03.companydomain.nl \
        drbddisk \
        Filesystem::/dev/drbd0::/data::ext3 \
        mysql \
        apache::/etc/httpd/conf/httpd.conf \
        LVSSyncDaemonSwap::master \
        monitor \
        activemq \
        tivoli-cluster \
        MailTo::[email protected]::DRBDFailureDrisAcc \
        MailTo::[email protected]::DRBDFailureDrisAcc \
        1.2.3.212

    ha.cf:
        debugfile /var/log/ha-debug
        logfile /var/log/ha-log
        keepalive 500ms
        deadtime 30
        warntime 10
        initdead 120
        udpport 694
        mcast eth0 225.0.0.3 694 1 0
        mcast eth1 225.0.0.4 694 1 0
        auto_failback off
        node node03.companydomain.nl
        node node04.companydomain.nl
        respawn hacluster /usr/lib64/heartbeat/dopd
        apiauth dopd gid=haclient uid=hacluster

    Thank you very much in advance, Ger Apeldoorn
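
    For the third question, one common mitigation is to make each heartbeat resource script idempotent, so that "start" against an already-running service reports success instead of return code 1 and ResourceManager never gives up the whole resource group. Below is a hypothetical wrapper sketch, not the poster's actual script; the /etc/init.d/activemq path is an assumption:

        #!/bin/sh
        # /etc/ha.d/resource.d/activemq -- hypothetical idempotent wrapper.
        case "$1" in
          start)
            # "start" on a running service must succeed, not return 1.
            if /etc/init.d/activemq status >/dev/null 2>&1; then
              exit 0
            fi
            exec /etc/init.d/activemq start
            ;;
          stop)
            # Likewise, stopping an already-stopped service is treated as success.
            /etc/init.d/activemq stop
            exit 0
            ;;
          status)
            exec /etc/init.d/activemq status
            ;;
          *)
            echo "Usage: $0 {start|stop|status}" >&2
            exit 1
            ;;
        esac

    This matches the convention the poster already applies to non-critical resources (always returning 0 from the start script), extended so that a redundant start during a takeover cannot fail the node.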

    Read the article

< Previous Page | 61 62 63 64 65 66 67 68 69 70 71 72  | Next Page >