Search Results

Search found 4685 results on 188 pages for 'proper'.

  • apache 2.2: how can I set up a VirtualHost inside the RootDirectory?

    - by redraw
    I want to set up a VirtualHost inside the RootDirectory. For example, my project is in C:/myproject and I want to access it at http://localhost/myproject. EDIT: I've made an alias inside httpd-vhosts.conf; however, I don't have permissions. <VirtualHost *:80> DocumentRoot "C:/apache-2.2/htdocs" ServerName localhost Alias /test "D:\arbol\documentos\test" </VirtualHost> Is the code below the proper way to give permissions? <VirtualHost *:80> DocumentRoot "C:/apache-2.2/htdocs" ServerName localhost Alias /test "D:\arbol\documentos\test" <Directory "D:\arbol\documentos\test"> allow from all order allow,deny AllowOverride All </Directory> </VirtualHost>

    Read the article

  • SQL SERVER – Four Posts on Removing the Bookmark Lookup – Key Lookup

    - by pinaldave
    In recent times I have observed that not many people have a proper understanding of what a bookmark lookup or key lookup is. An increasing number of questions tells me that this is something developers encounter every single day but have no idea how to deal with. I have previously written three articles on this subject, and I want to point all of you looking for further information to those posts. SQL SERVER – Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup SQL SERVER – Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 2 SQL SERVER – Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 3 SQL SERVER – Interesting Observation – Execution Plan and Results of Aggregate Concatenation Queries In one of my recent classes we had an in-depth conversation about the alternatives to creating covering indexes to remove the bookmark lookup. I really want to open this question to all of you and see what the community thinks about it. Is there any other way than creating a covering index or included index to remove this expensive key lookup? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Backup and Restore, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • SQLAuthority News – A Successful Performance Tuning Seminar at Pune – Dec 4-5, 2010

    - by pinaldave
    This is a report on my third very successful seminar event on SQL Server Performance Tuning. The SQL Server Performance Tuning Seminar in Colombo was oversubscribed with a total of 35 attendees. You can read the details over here: SQLAuthority News – SQL Server Performance Optimizations Seminar – Grand Success – Colombo, Sri Lanka – Oct 4 – 5, 2010. The SQL Server Performance Tuning Seminar in Hyderabad was oversubscribed with a total of 25 attendees. You can read the details over here: SQL SERVER – A Successful Performance Tuning Seminar – Hyderabad – Nov 27-28, 2010. The same seminar was offered in Pune on December 4-5, 2010. We had another successful seminar with lots of performance talk, attended by 30 attendees. The best part of the seminar was that along with our agenda, we talked about the following very interesting concepts: Deadlocks Detection and Removal Dynamic SQL and Inline Code SQL Optimizations Multiple OR conditions and performance tuning Dynamic Search Condition Building and Improvement Memory Cache and Improvement Bottleneck Detections – Memory, CPU and IO Beginning Performance Tuning on Production Parametrization Improving already Super Fast Queries Convenience vs. Performance Proper way to create Indexes Hints and Disadvantages. I had a great time doing the seminar and sharing my performance tricks with all. The highlight of this seminar was that I explained to the attendees how I begin performance tuning when I go for Performance Tuning Consultations. (Photos: Pinal Dave at the SQL Server Performance Tuning Seminar.) This seminar series is 100% demo oriented, with none of the usual PowerPoint talk. It is built from my experience of performance tuning at various organizations. I am not planning any more seminars this year; it went great, but I am currently booked for the next 60 days at various performance tuning engagements. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Technology

    Read the article

  • Sharing a texture resource from DX11 to DX9 to WPF, need to wait for DeviceContext.Flush() to finish

    - by Rei Miyasaka
    I'm following these instructions on TheCodeProject for rendering from DirectX to WPF using D3DImage. The trouble is that I now have no swap chain to call Present() on -- which according to the article shouldn't be a problem, but my back buffer definitely wasn't being copied. An additional step that I have to take before I can copy the texture to WPF is to share it with a second D3D9Ex device, since D3DImage only works with DX9 (which is understandable, as WPF is built on DX9). To that end, I've modified some SlimDX code to work with DirectX 11. I tried calling DeviceContext.Flush() (the immediate one) at the end of each render cycle, which kind of works -- most of the time it'll show my renderings, but for maybe 3 or 4 out of 60 frames each second it'll draw my clear color instead. This makes sense -- Flush() is non-blocking; it doesn't wait for the GPU to do its thing the way SwapChain.Present() does. Any idea what the proper solution is? I have a feeling it has something to do with my texture parameters for the back buffer, but I don't know.
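
    One commonly suggested approach, shown here only as a rough sketch (the SlimDX type and method names used below, such as Query, QueryDescription, DeviceContext.End and DeviceContext.IsDataAvailable, are assumptions to verify against the SlimDX build in use), is to issue a D3D11 event query after rendering and block until the GPU signals completion before the D3D9/WPF side reads the shared texture:

    using System.Threading;
    using SlimDX.Direct3D11;

    static class GpuSync
    {
        // Rough sketch (SlimDX API names assumed): block until the GPU has finished
        // writing the shared texture before the D3D9/WPF side reads it.
        public static void WaitForGpuCompletion(Device device)
        {
            DeviceContext context = device.ImmediateContext;
            using (var query = new Query(device, new QueryDescription { Type = QueryType.Event }))
            {
                context.End(query);   // mark the end of the rendering we care about
                context.Flush();      // submit the command buffer to the GPU
                // Spin until the event query reports all prior GPU work as complete.
                while (!context.IsDataAvailable(query))
                    Thread.Sleep(0);
            }
        }
    }

    Another option described in the D3D documentation is to create the shared texture with a keyed mutex so the two devices synchronize access explicitly; either way, the point is that something must wait for the GPU before the copy.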

    Read the article

  • CodePlex Daily Summary for Monday, March 01, 2010

    CodePlex Daily Summary for Monday, March 01, 2010New ProjectsActiveWorlds World Server Admin PowerShell SnapIn: The purpose of this PowerShell SnapIn is to provide a set of tools to administer the world server from PowerShell. It leverages the ActiveWorlds S...AWS SimpleDB Browser: A basic GUI browser tool for inspection and querying of a SimpleDB.Desktop Dimmer: A simple application for dimming the desktop around windows, videos, or other media.Disk Defuzzer: Compare chaos of files and folders with customizable SQL queries. This little application scans files in any two folders, generates data in an A...Dynamic Configuration: Dynamic configuration is a (very) small library to provide an API compatible replacement for the System.Configuration.ConfigurationManager class so...Expression Encoder 3 Visual Basic Samples: Visual Basic Sample code that calls the Expression Encoder 3 object model.Extended Character Keyboard: An lightweight onscreen keyboard that allows you to enter special characters like "á" and "û". Also supports adding of 7 custom buttons.FileHasher: This project provides a simple tool for generating and verifying file hashes. I created this to help the QA team I work with. The project is all C#...Fluent Assertions: Fluent interface for writing more natural specifying assertions with more clarity than the traditional assertion syntax such as offered by MSTest, ...Foursquare BlogEngine Widget: A Basic Widget for BlogEngine which displays the last foursquare Check-insGraffiti CMS Events Plugin: Plugin for Graffiti CMS that allows creating Event posts and rendering an Event CalendarHeadCounter: HeadCounter is a raid attendance and loot tracking application for World of Warcraft.HRM Core (QL Nhan Su): This is software about Human Resource Management in Viet Nam ------------ Đây là phần mềm Quản lý nhân sự tiền lương ở Việt Nam (Nghiệp vụ ở Việt Nam)IronPython Silverlight Sharpdevelop Template: This IronPython Silverlight SharpDevelop Template makes it easier for you to make Silverlight applications in IronPython with Sharpdevelop.kingbox: my test code for study vs 2005link_attraente: Projeto Conclusão de CursoORMSharp.Net: ORMSharp.Net https://code.google.com/p/ormsharp/ http://www.sqlite.org/ http://sqlite.phxsoftware.com/ http://sourceforge.net/projects/sqlite-dotnet2/Orz framework: Orz framework is more like a helpful library, if you are develop with DotNet framework 3.0, it will be very useful to you. Orz framework encapsul...OTManager: OTManagerSharePoint URL Ping Tool: The Url Ping Tool is a farm feature in SharePoint that provide additional performance and tracing information that can be used to troubleshoot issu...SunShine: SunShine ProjectToolSuite.ValidationExpression: assembly with regular expression for the RegularExpressionValidator controlTwitual Studio: A Visual Studio 2010 based Twitter client. Now you have one less reason for pressing Alt+Tab. Plus you still look like you're working!Velocity Hosting Tool: A program designed to aid a HT Velocity host in hosting and recording tournaments.Watermarker: Adds watermark on pictures to prevent copy. Icon taken from PICOL. Can work with packs of images.Zack's Fiasco - ASP.NET Script Includer: Script includer to * include scripts (JS or CSS) once and only once. * include the correct format by differentiating between release and build. 
Th...New ReleasesAll-In-One Code Framework: All-In-One Code Framework 2010-02-28: Improved and Newly Added Examples:For an up-to-date list, please refer to All-In-One Code Framework Sample Catalog. Samples for ASP.NET Name ...All-In-One Code Framework (简体中文): All-In-One Code Framework 2010-02-28: Improved and Newly Added Examples:For an up-to-date list, please refer to All-In-One Code Framework Sample Catalog. Latest Download Link: http://c...AWS SimpleDB Browser: SimpleDbBrowser.zip Initial Release: The initial release of the SimpleDbBrowser. Unzip the file in the archive and place them all in a folder, then run the .exe. No installer is used...BattLineSvc: V1: First release of BattLineSvcCC.Votd Screen Saver: CC.Votd 1.0.10.301: More bug fixes and minor enhancements. Note: Only download the (Screen Saver) version if you plan to manually install. For most users the (Install...Dynamic Configuration: DynamicConfiguration Release 1: Dynamic Configuration DLL release.eIDPT - Cartão de Cidadão .NET Wrapper: EIDPT VB6 Demo Program: Cartão de Cidadão Middleware Application installation (v1.21 or 1.22) is required for proper use of the eID Lib.eIDPT - Cartão de Cidadão .NET Wrapper: eIDPT VB6 Demo Program Source: Cartão de Cidadão Middleware Application installation (v1.21 or 1.22) is required for proper use of the eID Lib.ESPEHA: Espeha 10: 1. Help available on F1 and via context menu '?' 2. Width of categiries view is preserved througb app starts 3. Drag'nd'drop for tasks view allows ...Extended Character Keyboard: OnscreenSCK Beta V1.0: OnscreenSCK Beta Version 1.0Extended Character Keyboard: OnscreenSCK Beta V1.0 Source: OnscreenSCK Beta Version 1.0 Source CodeFileHasher: Console Version v 0.5: This release provides a very basic and minimal command-line utility for generating and validating file hashes. The supported command-line paramete...Furcadia Framework for Third Party Programs: 0.2.3 Epic Wrench: Warning: Untested on Linux.FurcadiaLib\Net\NetProxy.cs: Fixed a bug I made before update. FurcadiaFramework_Example\Demo\IDemo.cs: Ignore me. F...Graffiti CMS Events Plugin: Version 1.0: Initial Release of Events PluginHeadCounter: HeadCounter 1.2.3 'Razorgore': Added "Raider Post" feature for posting details of a particular raider. Added Default Period option to allow selection of Short, Long or Lifetime...Home Access Plus+: v3.0.0.0: Version 3.0.0.0 Release Change Log: Reconfiguration of the web.config Ability to add additional links to homepage via web.config Ability to add...Home Access Plus+: v3.0.1.0: Version 3.0.1.0 Release Change Log: Fixed problem with moving File Changes: ~/bin/chs extranet.dll ~/bin/chs extranet.pdbHome Access Plus+: v3.0.2.0: Version 3.0.2.0 Release Change Log: Fixed problem with stylesheet File Changes: ~/chs.masterHRM Core (QL Nhan Su): HRMCore_src: Source of HRMCoreIRC4N00bz: IRC4N00bz v1.0.0.2: There wasn't much updated this weekend. I updated 2 'raw' events. One is all raw messages and the other is events that arn't caught in the dll. 
...IronPython Silverlight Sharpdevelop Template: Version 1 Template: Just unzip it into the Sharpdevelop python templates folder For example: C:\Program Files\SharpDevelop\3.0\AddIns\AddIns\BackendBindings\PythonBi...MDownloader: MDownloader-0.15.4.56156: Fixed handling exceptions; previous handling could lead to freezing items state; Fixed validating uploading.com links;OTManager: Activity Log: 2010.02.28 >> Thread Reopened 2010.02.28 >> Re-organized WBD Features/WMBD Features 2010.02.28 >> Project status is active againPicasa Downloader: PicasaDownloader (41175): NOTE: The previous release was accidently the same as the one before that (forgot to rebuild the installer). Changelog: Fixed workitem 10296 (Sav...PicNet Html Table Filter: Version 2.0: Testing w/ JQuery 1.3.2Program Scheduler: Program Scheduler 1.1.4: Release Note: *Bug fix : If the log window is docked and user moves the log window , main window will move too. *Added menu to log window to clear...QueryToGrid Module for DotNetNuke®: QueryToGrid Module version 01.00.00: This is the initial release of this module. Remember... This is just a proof of concept to add AJAX functionality to your DotNetNuke modules.Rainweaver Framework: February 2010 Release: Code drop including an Alpha release of the Entity System. See more information in the Documentation page.RapidWebDev - .NET Enterprise Software Development Infrastructure: ProductManagement Quick Sample 0.1: This is a sample product management application to demonstrate how to develop enterprise software in RapidWebDev. The glossary of the system are ro...Team Foundation Server Revision Labeller for CruiseControl.NET: TFS Labeller for CruiseControl.NET - TFS 2008: ReleaseFirst release of the Team Foundation Server Labeller for CruiseControl.NET. This specific version is bound to TFS 2008 DLLs.ToolSuite.ValidationExpression: 01.00.01.000: first release of the time validation class; the assembly file is ready to use, the documentation ist not complete;VCC: Latest build, v2.1.30228.0: Automatic drop of latest buildWatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.7.00: Whats New FileBrowser: Non Admin Users will only see a User Sub folder (..\Portals\0\userfiles\UserName) CKFinder: Non Admin Users will only see ...Watermarker: Watermarker: first public version. can build watermark only in left top corner on one image at once.While You Were Away - WPF Screensaver: Initial Release: This is the code released when the article went live.Most Popular ProjectsMetaSharpRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)Microsoft SQL Server Community & SamplesASP.NETDotNetNuke® Community EditionMost Active ProjectsRawrBlogEngine.NETMapWindow GISCommon Context Adapterspatterns & practices – Enterprise LibrarySharpMap - Geospatial Application Framework for the CLRSLARToolkit - Silverlight Augmented Reality ToolkitDiffPlex - a .NET Diff GeneratorRapid Entity Framework. (ORM). CTP 2jQuery Library for SharePoint Web Services

    Read the article

  • Parallelism in .NET – Introduction

    - by Reed
    Parallel programming is something that every professional developer should understand, but it is rarely discussed or taught in detail in a formal manner. Software users are no longer content with applications that lock up the user interface regularly, or take large amounts of time to process data unnecessarily. Modern development requires the use of parallelism. There are no longer any excuses for us as developers. Learning to write parallel software is challenging. It requires more than reading that one chapter on parallelism in our programming language book of choice… Today's systems are no longer getting faster with each generation; in many cases, newer computers are actually slower than previous generation systems. Modern hardware is shifting towards conservation of power, with processing scalability coming from having multiple computer cores, not faster and faster CPUs. Our CPU frequencies no longer double on a regular basis, but Moore's Law is still holding strong. Now, however, instead of scaling transistors in order to make processors faster, hardware manufacturers are scaling the transistors in order to add more discrete hardware processing threads to the system. This changes how we should think about software. In order to take advantage of modern systems, we need to redesign and rewrite our algorithms to work in parallel. As with any design domain, it helps tremendously to have a common language, as well as a common set of patterns and tools. For .NET developers, this is an exciting time for parallel programming. Version 4 of the .NET Framework is adding the Task Parallel Library. This has been back-ported to .NET 3.5sp1 as part of the Reactive Extensions for .NET, and is available for use today in both .NET 3.5 and .NET 4.0 beta. In order to fully utilize the Task Parallel Library and parallelism, both in .NET 4 and previous versions, we need to understand the proper terminology. For this series, I will provide an introduction to some of the basic concepts in parallelism, and relate them to the tools available in .NET.
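
    To make the idea concrete, here is a minimal sketch of the kind of data-parallel loop the Task Parallel Library enables; ProcessImage and the file list are hypothetical stand-ins for real per-item, CPU-bound work:

    using System.Threading.Tasks;

    static class ParallelSketch
    {
        // Hypothetical per-item, CPU-bound work standing in for a real workload.
        static void ProcessImage(string path)
        {
            /* CPU-bound work elided */
        }

        static void ProcessAll(string[] files)
        {
            // Sequential version:
            // for (int i = 0; i < files.Length; i++) ProcessImage(files[i]);

            // Data-parallel version: iterations are distributed across available cores.
            Parallel.For(0, files.Length, i =>
            {
                ProcessImage(files[i]);
            });
        }

        static void Main()
        {
            ProcessAll(new[] { "a.png", "b.png", "c.png" });
        }
    }

    The commented sequential loop and the Parallel.For call express the same algorithm; the difference is that the parallel version lets the runtime spread the iterations across the available cores.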

    Read the article

  • Implementing Scrolling Background in LibGDX game

    - by Vishal Kumar
    I am making a game in LibGDX. After working on it for a whole day, I could not come up with a solution for a scrolling background. My screen width and height are 960 x 540. I have a png image of 1024 x 540. I want to scroll the background in such a way that it continuously moves back along with the camera's x position. I tried many alternatives: drawing the image twice, doing a lot of calculations, and many others, but I finally ended up with this dirty code:
    if(bg_x2 >= -Assets.bg.getRegionWidth()) {
        // calculated it to position bg .. camera was initially at 15
        bg_x2 = (16 - 4*camera.position.x);
        bg_x1 = bg_x2 + Assets.bg.getRegionWidth();
    } else {
        bg_x1 = (16 - 4*camera.position.x) % 224; // this 16 is not proper // I think there can be other ways
        bg_x2 = bg_x1 + Assets.bg.getRegionWidth();
    }
    // drawing twice
    batch.draw(Assets.bg, bg_x2, bg_y);
    batch.draw(Assets.bg, bg_x1, bg_y);
    The simple idea is SCROLLING A BACKGROUND WITH A SIZE SOMEWHAT LARGER THAN THE SCREEN SIZE SO THAT IT LOOKS SMOOTH. Even after a lot of searching, I didn't find an online solution. Please help.

    Read the article

  • error X3501: 'main': entrypoint not found

    - by Pasha
    I am trying to learn DX10 by following this tutorial. However, my shader won't compile. Below is the detailed error message. Build started 9/10/2012 10:22:46 PM. 1>Project "D:\code\dx\Engine\Engine\Engine.vcxproj" on node 2 (Build target(s)). C:\Program Files (x86)\Windows Kits\8.0\bin\x86\fxc.exe /nologo /E"main" /Fo "D:\code\dx\Engine\Debug\color.cso" /Od /Zi color.fx 1>FXC : error X3501: 'main': entrypoint not found compilation failed; no code produced 1>Done Building Project "D:\code\dx\Engine\Engine\Engine.vcxproj" (Build target(s)) -- FAILED. Build FAILED. Time Elapsed 00:00:00.05 I can easily compile the downloaded code, but I want to know how to fix this error myself. My color.fx looks like this //////////////////////////////////////////////////////////////////////////////// // Filename: color.fx //////////////////////////////////////////////////////////////////////////////// ///////////// // GLOBALS // ///////////// matrix worldMatrix; matrix viewMatrix; matrix projectionMatrix; ////////////// // TYPEDEFS // ////////////// struct VertexInputType { float4 position : POSITION; float4 color : COLOR; }; struct PixelInputType { float4 position : SV_POSITION; float4 color : COLOR; }; //////////////////////////////////////////////////////////////////////////////// // Vertex Shader //////////////////////////////////////////////////////////////////////////////// PixelInputType ColorVertexShader(VertexInputType input) { PixelInputType output; // Change the position vector to be 4 units for proper matrix calculations. input.position.w = 1.0f; // Calculate the position of the vertex against the world, view, and projection matrices. output.position = mul(input.position, worldMatrix); output.position = mul(output.position, viewMatrix); output.position = mul(output.position, projectionMatrix); // Store the input color for the pixel shader to use. output.color = input.color; return output; } //////////////////////////////////////////////////////////////////////////////// // Pixel Shader //////////////////////////////////////////////////////////////////////////////// float4 ColorPixelShader(PixelInputType input) : SV_Target { return input.color; } //////////////////////////////////////////////////////////////////////////////// // Technique //////////////////////////////////////////////////////////////////////////////// technique10 ColorTechnique { pass pass0 { SetVertexShader(CompileShader(vs_4_0, ColorVertexShader())); SetPixelShader(CompileShader(ps_4_0, ColorPixelShader())); SetGeometryShader(NULL); } }

    Read the article

  • Start small, grow fast your SOA footprint by Edwin Biemond, Ronald van Luttikhuizen and Demed L’Her

    - by JuergenKress
    A set of pragmatic best practices for deploying a simple and sound SOA footprint that can grow with business demand. The paper contains details about administrative, infrastructure, development and architectural considerations. Edwin Biemond Ronald van Luttikhuizen Demed L’Her We are very interested in publishing papers jointly with our partner community. Here is a list of possible SOA whitepapers that I am very interested in seeing published (note that the list is not exhaustive and I welcome any other topic you would like to volunteer). The format for these whitepapers would ideally be a 5 to 12 page document, possibly with a companion sample (to be hosted on http://java.net/projects/oraclesoasuite11g ). It is not marketing material. We will get them published on OTN, with proper credits, and use social media (Twitter, Facebook, etc.) to promote them. For information, the “quickstart guide” was downloaded more than 11,000 times over just 2 months, following a similar approach. These papers are a great way to get exposure and build your resume. We would prefer to have 2 people collaborate on each paper (ideally 1 partner or customer and 1 Oracle person). This guarantees some level of peer review and gives greater legitimacy to the paper. Interested? Please contact Demed L’Her. Thank you! SOA Partner Community For regular information on Oracle SOA Suite become a member of the SOA Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: Start small grow fast,Edwin Biemond,Ronald van Luttikhuizen,Demed L’Her,SOA Suite,Oracle,OTN,SOA Partner Community,Jürgen Kress

    Read the article

  • Windows Phone 7 ActiveSync error 86000C09 (My First Post!)

    - by Chris Heacock
    Hello fellow geeks! I'm kicking off this new blog with an issue that was a real nuisance, but was relatively easy to fix. During a recent Exchange 2003 to 2010 migration, one of the users was getting an error on his Windows Phone 7 device. The error code that popped up on the phone on every sync attempt was 86000C09. We tested the following: Different user on the same device: WORKED. Problem user on a different device: FAILED. This seemed to point (conclusively) at the user's account as the crux of the issue. This error can come up if a user has too many devices syncing, but he had no other phones. We verified that using the following command: Get-ActiveSyncDeviceStatistics -Identity USERID It turns out it was the old familiar inheritable permissions issue in Active Directory. :-/ This user was not an admin, nor had he ever been one. HOWEVER, his account was cloned from an ex-admin user, so the unchecked box stayed unchecked. We checked the box and voila, data started flowing to his device(s). Here's a refresher on enabling inheritable permissions: Open ADUC and enable Advanced Features. Then open properties and go to the Security tab for the user in question. Click on Advanced, and in the dialog that pops up verify that "Include inheritable permissions from this object's parent" is *checked*. You will notice that for certain users, this box keeps getting unchecked. This is normal behavior due to the inbuilt security of Active Directory. People that are in the following groups will have this flag altered by AD: Account Operators Administrators Backup Operators Domain Admins Domain Controllers Enterprise Admins Print Operators Read-Only Domain Controllers Replicator Schema Admins Server Operators Once the box is checked, permissions will flow and the user will be set correctly. Even if the box later becomes unchecked, the user will function normally, as they now have the proper permissions configured. You need to perform this same exercise when enabling users for Lync, but that's another blog. :-)  -Chris

    Read the article

  • SQLAuthority Book Review – DBA Survivor: Become a Rock Star DBA

    - by pinaldave
    DBA Survivor: Become a Rock Star DBA – Thomas LaRock Link to Amazon Link to Flipkart First of all, I thank all my readers who, when I wrote that I could not get this book in any local book store, offered to send me a copy of this good book. A very special mention goes to Sripada and Jayesh, who put so much effort into finding my home address and sending me the hard copy. Before, I did not have a copy of the book, but now I already have two! It surprises me how my readers were able to find my home address, which I have not publicly shared. Quick Review: This is indeed an easy-to-read and fun book. We all work day and night with technology, yet we should not forget to show our love and care for our family at home. For our souls that starve for peace and guidance, this is the “it” book for all technology enthusiasts. Though this book was specifically written for DBAs, its reach is not limited to DBAs only, because the lessons incorporated in it actually apply to all. This is one of the most motivating technical books I have read. Detailed Review: Let us go over a few questions first: Who wants to be as famous as a rockstar in the field of Database Administration? How can one learn what it takes to become a top notch software developer? If you are a beginner in your field, how will you go to the next level? Your boss may be very kind or like Dilbert’s boss; what will you do? How do you keep growing when the ecosystem around you does not support you? You are almost at the top but there is someone else at the TOP; what do you do and how do you avoid office politics? As a database developer, what should be your basic responsibility? And many more… I was able to read the book completely in one sitting and I loved it. Before I continue with my opinion, I want to echo the opinion of Kevin Kline, who has written the Foreword of the book. He has truly suggested that “You hold in your hands a collection of insights and wisdom on the topic of database administration gained through many years of hard-won experience, long nights of study, and direct mentorship under some of the industry’s most talented database professionals and information technology (IT) experts.” Today, the IT field is getting bigger and better, while talking about terabytes of database becomes “more” normal every single day. The gods and demigods of database professionals are taking care of these large scale databases and are carefully maintaining them. In this world, there are only a few just beginning on the first step. There are many experts in different technology fields who are asked to address issues with databases. And there are YOU and ME, who are just new to this work. So we ask ourselves WHERE to begin and HOW to begin. We adore and follow the religion of our rockstars, but oftentimes we really have no idea about their background and their struggles. Every rockstar has a success story which needs to be digested before learning his tricks and tips. This book starts on the same note and teaches the two most important lessons for anybody who wants to be a DBA rockstar – to focus on the single goal of learning and to excel at the technology. The story starts with three simple guidelines – Get Prepared, Get Trained, Get Certified. Once a person learns the skills, it is about time to enrich or improve them. I am sure the right opportunity will then come finding them, and they will not have to run after it.
However, the real challenge for any person is the first day or first week. A new employee, no matter how experienced he is, sometimes has no clue about what one should do at a new job. Chapters 2 and 3 talk precisely about what one should do as soon as the new job begins. They are also written keeping in focus the fact that each job can be very different, yet a few infrastructure setups and programming concepts stay the same. Learning the basics of the database was really interesting. I like to focus on the roots of any technology. It is important to understand the structure of the database before suggesting which indexes need to be created, and in the same way this book covers the most essential knowledge that most database developers must learn. I think the title of the fourth chapter is my favorite sentence in this book. I can see that I will be saying this again and again in the future – “A Development Server Is a Production Server to a Developer”. I have worked in the software industry for almost 8 years now, and I have seen so many developers sitting in their chairs waiting for instructions from their lead about how to improve the code or what to do next. When I talk to them, I suggest that they experiment with their server and try various techniques. I think they should all understand that, for them, a development server is their production server, and they need to pay proper attention to the code from the beginning. There should be NO inappropriate code from the beginning. One has to fully focus and give one's best; if they are not sure, they should ask, but they should do something and stay active. Chapters 5 and 6 talk about two essential skills for any developer and database administrator – the ethics of developers working with a production server, and how to support software which is running on the production server. I have met many people who know the theory by heart, but when put in front of a keyboard they do not know where to start. The first thing they do is open the browser and search online, instead of opening SQL Server Management Studio. This can very well happen to anybody, experienced as well. Chapters 5 and 6 address that situation and also include handy scripts which can solve almost all the basic troubleshooting issues. “Where’s the Buffet?” By far, this is the best chapter in this book. If you have ever met me, you would know that I love food. After reading this chapter, I felt Thomas had written it just keeping me in mind. I think there will be many other people who feel the same way, too. Even my wife, who read this chapter, thought it was specifically written for me. I will not talk any more about this chapter, as it is a must-read chapter. And of course it is about real ‘FOOD‘. I am an SQL Server Trainer and Consultant, and I totally agree with the point made in chapter 8 of this book. Yes, it talks about why it is necessary to train employees and people. Millions of dollars worth of labor is continuously done in the world with faults and errors. Once something goes wrong, a very expensive consultant comes in and fixes the problem. This whole cycle can be stopped and improved if proper training is done. There is plenty of free training available as well, if one cannot afford paid training. “Connect. Learn. Share” – I think this is a great summary and bird’s eye view of this book. Networking is the key.
Everything discussed in this book can be taken to the next level if one properly uses these tips and continuously grows with them. Connecting with others, helping each other learn, and building a good knowledge-sharing environment should be everyone's goal. Before I end the review I want to share a real experience. I have personally met one DBA who had worked in a single department of a company for so long that when he was moved to a different department because his own was closed, he could not adjust and quit the job, despite having the same people and company around him. Adjusting to a new environment gets much tougher as a person gets more and more experienced. This book precisely addresses this issue along with its solutions. I just cannot stop comparing the book with my personal journey. I found so many things in the book that are, coincidentally, written just as we developers and DBAs think. I must express special thanks to Thomas for taking time out of his personal life to write this book for us. This book is indeed a book for everybody who wants to grow healthily in a tough and competitive environment. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • Why do XSLT editors insert tab or space characters into XSLT to format it?

    - by pgfearo
    All XSLT editors I've tried till now add tab or space characters to the XSLT to indent it for formatting. This is done even in places within the XSLT where these characters are significant to the XSLT processor. XSLT modified for formatting in this way can produce output very different to that of the original XSLT if it had no formatting. To prevent this, xsl:text elements or other XSLT must be added to a sequence constructor to help separate formatting from content, this additional XSLT impacts on maintainability. Formatting characters also adversely impact on general usability of the tool in a number of ways (this is why word-processors don't use them I guess) and add to the size of the file. As part of a larger project I've had to develop a light-weight XSLT editor, it's designed to format XSLT properly, but without tab or space characters, just a dynamic left-margin for each new line. The XSLT therefore doesn't need additional elements to separate formatting tab or space characters from content. The problem with this is that if XSLT from this editor is opened in other XSLT editors, characters will be added for formatting reasons and the XSLT may therefore no longer behave as intended. Why then do existing XSLT editors use tabs or spaces for formatting in the first place? I feel there must be valid reasons, perhaps historical, perhaps practical. An answer will help me understand whether I need to put compatibility options in place in my XSLT editor somehow, whether I should simply revert to using tabs or spaces for both XSLT content and formatting (though this seems like a backwards step to me), or even whether enough XSLT users might be able to persuade their tools vendors to include alternative formatting methods to tabs or spaces. Note: I provided an XSLT sample demonstrating formatting differences in this answer to the question: Tabs versus spaces—what is the proper indentation character for everything, in every situation, ever?

    Read the article

  • Microsoft Visual Studio Team Explorer 2010 codename “Eaglestone”

    - by HosamKamel
    Microsoft has released the beta of Microsoft Visual Studio Team Explorer 2010 codename “Eaglestone”, the Eclipse plugin and cross-platform command line assets that were acquired from Teamprise back in November. You can download the bits here, and participate in the associated Microsoft Connect community here. Changes in this release: All of the architectural changes in TFS 2010 have been reacted to, which primarily shows up in the support for Team Project Collections, but it also means that the Eclipse plug-in supports all the possible configurations for the project portal and reporting services (including not having any configured at all). Added the enhanced work item linking and hierarchy capabilities: you can now define typed links, query for work items based on links, and work with work item hierarchies. Added support for the new WF-based team build. Reacted to a lot of underlying changes in the source control version model with respect to how branching, merging, and renames happen: history now follows branches and merges, and branches are proper first class citizens in the source control explorer. You can check a detailed post written by bharry here. Microsoft Visual Studio Team Explorer 2010 codename “Eaglestone”

    Read the article

  • Inside Amazon’s Warehouses

    - by Jason Fitzpatrick
    If you’re expecting the inside of Amazon’s warehouses to be some sort of rigidly organized, robot-filled warehouse of tomorrow, you’ll be quite surprised to find that the storage technique they employ is called “chaotic storage”. International Business Times paid a visit to a major Amazon warehouse and took a tour. Rather than finding robots, they found: Amazon must rely on barcodes and human hands to find the ordered items and drop them into the proper bins — without robots, Amazon utilizes a system known as “chaotic storage,” where products are essentially shelved at random. By storing items randomly instead of categorically, the warehouse has a much better flow of material. Even without robots or automation, Amazon can compile a “picking list” where each item needs to be taken off the shelf and scanned again before it can be shipped. The real advantage to chaotic storage is that it’s significantly more flexible than conventional storage systems. If there are big changes in a product range, the company doesn’t need to plan for more space, because the products or their sales volumes don’t need to be known or planned in advance if they’re simply being stored at random.

    Read the article

  • Would you re-design completely under .Net?

    - by dboarman
    A very extensive application began as an Access-based system (for database storage). Forms were written in VB5 and/or VB6. As .Net became a fixture in the development community, certain modules have been rewritten. This seems very awkward and potentially costly just to maintain because of the cross-technologies and extra work to keep the two technologies happy with each other. Of course, the application uses a mix of ODBC OleDb and MySql. Would you think spending the time and resources to completely re-develop the application under .Net would be more cost effective? In an effort to develop a more stable application, wouldn't it make sense to use .Net? Or continue chasing Access bugs, adding new features in .Net (which may or may not create new bugs between .Net and Access), and rewriting old Access modules into .Net modules under time constraints that prevent proper design and development? Update The application uses OleDb and MySql - I corrected my previous statement. Also, to lend further support to rewriting: I have since found out that when the "porting" to .Net began, the VBA/VB6 code that existed was basically translated to the .Net equivalent. From my understanding, nothing was done to improve performance, or take advantage of new libraries or technologies. In my opinion, this creates a very fragile and unstable application. With every new update, this becomes more and more visible. As a help desk technician, I have noticed an increase in problems reported. The customers using the software have noticed an increase in problems and are commenting on it.

    Read the article

  • Enabling 32-Bit Applications on IIS7 (also affects 32-bit oledb or odbc drivers) [Solved]

    - by Humprey Cogay, C|EH
    We just bought a new web server. After installing Windows 2008 R2 (which is a 64-bit OS) with IIS7, SQL Server 2008 R2 Standard, and IBM Client Access for V5R3 with its .NET data providers, I tried deploying our new project, which is fully functional on an IIS6-based web server, and I encountered this error: The 'IBMDA400.DataSource.1' provider is not registered on the local machine. To remove the doubt that I still lacked some software prerequisites or had version conflicts (since I encountered some errors while installing my IBM Client Access), I created a connection tester, a Windows app that accepts a connection string as a parameter and verifies whether that parameter is valid. After entering the proper connection string I tried hitting the button and the test was successful. So now I trimmed my suspects down to my web app and IIS7. After Googling around I found this post by Rakki Muthukumar (Microsoft Developer Support Engineer for ASP.NET and IIS7): http://blogs.msdn.com/b/rakkimk/archive/2007/11/03/iis7-running-32-bit-and-64-bit-asp-net-versions-at-the-same-time-on-different-worker-processes.aspx So I tried scouting IIS7's management console and found this little tweak under the application pool of which my app is a member. After changing this parameter to TRUE, Yahoo! (although I'm a Google kind of person), the web app works.

    Read the article

  • SQL SERVER – 2008 – Missing Index Script – Download

    - by pinaldave
    Download Missing Index Script with Unused Index Script Performance tuning is quite interesting, and indexes play a vital role in it. A proper index can improve performance and a bad index can hamper performance. Here is the script from my script bank which I use to identify missing indexes on any database. Please note that you should not create all the missing indexes this script suggests; this is just for guidance. You should not create more than 5-10 indexes per table. Additionally, this script sometimes does not give accurate information, so use your common sense. Anyway, the script is a good starting point. You should pay attention to Avg_Estimated_Impact when you are going to create an index. The index creation script is also provided in the last column. Download Missing Index Script with Unused Index Script -- Missing Index Script -- Original Author: Pinal Dave (C) 2011 SELECT TOP 25 dm_mid.database_id AS DatabaseID, dm_migs.avg_user_impact*(dm_migs.user_seeks+dm_migs.user_scans) Avg_Estimated_Impact, dm_migs.last_user_seek AS Last_User_Seek, OBJECT_NAME(dm_mid.OBJECT_ID,dm_mid.database_id) AS [TableName], 'CREATE INDEX [IX_' + OBJECT_NAME(dm_mid.OBJECT_ID,dm_mid.database_id) + '_' + REPLACE(REPLACE(REPLACE(ISNULL(dm_mid.equality_columns,''),', ','_'),'[',''),']','') + CASE WHEN dm_mid.equality_columns IS NOT NULL AND dm_mid.inequality_columns IS NOT NULL THEN '_' ELSE '' END + REPLACE(REPLACE(REPLACE(ISNULL(dm_mid.inequality_columns,''),', ','_'),'[',''),']','') + ']' + ' ON ' + dm_mid.statement + ' (' + ISNULL (dm_mid.equality_columns,'') + CASE WHEN dm_mid.equality_columns IS NOT NULL AND dm_mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END + ISNULL (dm_mid.inequality_columns, '') + ')' + ISNULL (' INCLUDE (' + dm_mid.included_columns + ')', '') AS Create_Statement FROM sys.dm_db_missing_index_groups dm_mig INNER JOIN sys.dm_db_missing_index_group_stats dm_migs ON dm_migs.group_handle = dm_mig.index_group_handle INNER JOIN sys.dm_db_missing_index_details dm_mid ON dm_mig.index_handle = dm_mid.index_handle WHERE dm_mid.database_ID = DB_ID() ORDER BY Avg_Estimated_Impact DESC GO Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Download, SQL Index, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQLAuthority News – Download Whitepaper – A Case Study on “Hekaton” against RPM – SQL Server 2014 CTP1

    - by Pinal Dave
    In this new world of social media, apps and mobile devices, we are all now getting impatient. Automatic updates have spoiled a few of our habits. When a new feature is released, everybody wants to adopt it immediately and start using it. Though this is true in the world of apps and smart phones, it is still not possible in the developer's world. When new features are around, before we start using them, we need to spend quite a lot of time understanding and testing them. Once we are sold on the feature, we refer it to our manager, and eventually the entire organization makes a decision on upgrading to use the new feature. Similarly, when the new feature of In-Memory OLTP was announced, pretty much every SQL Server DBA wanted to implement it on their server. Though the implementation of the feature is not hard, it is not that easy either. One has to do proper research about their own environment and workload before implementing this feature. Microsoft has recently released a case study on the In-Memory OLTP feature. Here is the abstract from the white paper itself: I/O latch can cause session delays that impact application performance. This white paper describes the procedures and common I/O latch issues when migrating to Hekaton in SQL Server 2014. It also includes challenges that occurred during the migration and the performance analysis at different stages. If you are going to implement an In-Memory OLTP database, this is a good case study to refer to. Download the white paper from here. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL

    Read the article

  • Professionalism of online username / handle

    - by Thanatos
    I have in the past, and continue currently, used the handle "thanatos" on a lot of Internet sites, and if that isn't available (which happens ~50% of the time), "deathanatos". "Thanatos" is the name of the Greek god or personification of death (not to be confused with Hades, the Greek god of the underworld). "Dea" is a natural play-on-words to make the handle work in situations where the preferred handle has already been taken, without having to resort to numbers and remaining pronounceable. I adopted the handle many years ago — at the time, I was reading Edith Hamilton's Mythology, and Piers Anthony's On a Pale Horse, both still favorites of mine, and the name was born out of that. When I created the handle, I was fairly young, and valued privacy while online, not giving out my name. As I've become a more competent programmer, I'm starting to want to release some of my private works under FOSS licenses and such, and sometimes under my own name. This has started to tie this handle with my real name. I've become increasingly aware of my "web image" in the last few years, as I've been job hunting. As a programmer, I have a larger-than-average web presence, and I've started to wonder: Is this handle name professional? Does a handle name matter in a professional sense? Should I "rebrand"? (While one obviously wants to avoid hateful or otherwise distasteful names, is a topic such as "death" (to which my name is tied) proper? What could be frowned upon?) To try to make this a bit more programmer specific: Programmers are online — a lot — and some of us (and some who are not us) tend to put emphasis on a "web presence". I would argue that a prudent programmer (or anyone in an occupation that interacts online a lot) would be aware of their web presence. While not strictly limited to just programmers, for better or worse, it is a part of our world.

    Read the article

  • SQL – NuoDB and Third Party Explorer – SQuirreL SQL Client, SQL Workbench/J and DbVisualizer

    - by Pinal Dave
    I recently wrote a four-part series on how I started to learn about and begin my journey with NuoDB. Big Data is indeed a big world and the learning of the Big Data is like spaghetti – no one knows in reality where to start, so I decided to learn it with the help of NuoDB. You can download NuoDB and continue your journey with me as well. Part 1 – Install NuoDB in 90 Seconds Part 2 – Manage NuoDB Installation Part 3 – Explore NuoDB Database Part 4 – Migrate from SQL Server to NuoDB …and in this blog post we will try to answer the most asked question about NuoDB. “I like the NuoDB Explorer but can I connect to NuoDB from my preferred Graphical User Interface?” Honestly, I did not expect this question to be asked of me so many times but from the question it is clear that we developers absolutely want to learn new things and along with that we do want to continue to use our most efficient developer tools. Now here is the answer to the question: “Absolutely, you can continue to use any of the following most popular SQL clients.” NuoDB supports the three most popular 3rd-party SQL clients. In all the leading development environments there are always more than one database installed and managing each of them with a different tool is often a very difficult task. Developers like to use one tool, which can control most of the databases. Once developers are familiar with one database tool it is very difficult for them to switch to another tool. This is particularly difficult when we developers find that tool to be the key reason for our efficiency. Let us see how to install each of the NuoDB supported 3rd party tools along with a quick tutorial on how to go about using them. SQuirreL SQL Client First download SQuirreL Universal SQL client. On the Windows platform you can double-click on the file and it will install the SQuirrel client. Once it is installed, open the application and it will bring up the following screen. Now go to the Drivers tab on the left side and scroll it down. You will find NuoDB mentioned there. Now right click over it and click on Modify Driver. Now here is where you need to make sure that you make proper entries or your client will not work with the database. Enter following values: Name: NuoDB Example URL: jdbc:com:nuodb://localhost:48004/test Website URL: http://www.nuodb.com Now click on the Extra Class Path tab and Add the location of the nuodbjdbc.jar file. If you are following my blog posts and have installed NuoDB in the default location, you will find the default path as C:\Program Files\NuoDB\jar\nuodbjdbc.jar. The class name of the driver is automatically populated. Once you click OK you will see that there is a small icon displayed to the left of NuoDB, which shows that you have successfully configured and installed the NuoDB driver. Now click on the tab of Alias tab and you can notice that it is empty. Now click on the big Plus icon and it will open screen of adding an alias. “Alias” means nothing more than adding a database to your system. The database name of the original installation can be anything and, if you wish, you can register the database with any other alternative name. Here are the details you should fill into the Alias screen below. 
Name: Test (or your preferred alias) Driver: NuoDB URL: jdbc:com:nuodb://localhost:48004/test (This is for test database) User Name: dba (This is the username which I entered for test Database) Password: goalie (This is the password which I entered for test Database) Check Auto Logon and Connect at Startup and click on OK. That’s it! You are done. On the right side you will see a table name and on the left side you will see various tabs with all the relevant details from respective table. You can see various metadata, schemas, data types and other information in the table. In addition, you can also generate script and do various important tasks related to database. You can see how easy it is to configure NuoDB with the SQuirreL Client and get going with it immediately. SQL Workbench/J This is another wonderful client tool, which works very well with NuoDB. The best part is that in the Driver dropdown you will see NuoDB being mentioned there. Click here to download  SQL Workbench/J Universal SQL client. The download process is straight forward and the installation is a very easy process for SQL Workbench/J. As soon as you open the client, you will notice on following screen the NuoDB driver when selecting a New Connection Profile. Select NuoDB from the drop down and click on OK. In the driver information, enter following details: Driver: NuoDB (com.nuodb.jdbc.Driver) URL: jdbc:com.nuodb://localhost/test Username: dba Password: goalie While clicking on OK, it will bring up the following pop-up. Click Yes to edit the driver information. Click on OK and it will bring you to following screen. This is the screen where you can perform various tasks. You can write any SQL query you want and it will instantly show you the results. Now click on the database icon, which you see right on the left side of the word User=dba.  Once you click on Database Explorer, you can perform various database related tasks. As a developer, one of my favorite tasks is to look at the source of the table as it gives me a proper view of the structure of the database. I find SQL Workbench/J very efficient in doing the same. DbVisualizer DBVisualizer is another great tool, which helps you to connect to NuoDB and retrieve database information in your desired format. A developer who is familiar with DBVisualizer will find this client to be very easy to work with. The installation of the DBVisualizer is very pretty straight forward. When we open the client, it will bring us to the following screen. As a first step we need to set up the driver. Go to Tools >> Driver Manager. It will bring up following screen where we set up the diver. Click on Create Driver and it will open up the driver settings on the right side. On the right side of the area where it displays Driver Settings please enter the following values- Name: NuoDB URL Format: jdbc:com.nuodb://localhost:48004/test Now under the driver path, click on the folder icon and it will ask for the location of the jar file. Provide the path as a C:\Program Files\NuoDB\jar\nuodbjdbc.jar and click OK. You will notice there is a green button displayed at the bottom right corner. This means the driver is configured properly. Once driver is configured properly, we can go to Create Database Connection and create a database. If the pop up show up for the Wizard. Click on No Wizard and continue to enter the settings manually. Here is the Database Connection screen. This screen can be bit tricky. Here are the settings you need to remember to enter. 
Name: NuoDB Database Type: Generic Driver: NuoDB Database URL: jdbc:com.nuodb://localhost:48004/test Database Userid: dba Database Password: goalie Once you enter the values, click on Connect. Once Connect is pressed, it will change the button value to Reconnect if the connection is successfully established and it will show the connection details on lthe eft side. When we further explore the NuoDB, we can see various tables created in our test application. We can further click on the right side screen and see various details on the table. If you click on the Data Tab, it will display the entire data of the table. The Tools menu also has some very interesting and cool features like Driver Manager, Data Monitor and SQL History. Summary Well, this was a relatively long post but I find it is extremely essential to cover all the three important clients, which we developers use in our daily database development. Here is my question to you? Which one of the following is your favorite NuoDB 3rd-Party Database Client? (Pick One) SQuirreL SQL Client SQL Workbench/J DbVisualizer I will be very much eager to read your experience about NuoDB. You can download NuoDB from here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • Can't run init script on boot after another init script

    - by Colin McQueen
I have three init scripts and the Broker init script runs fine, but when I try to run the Consumer init script and then the Data Collector init script, the only process that is running is the Broker. I added the symbolic links to the run levels using update-rc.d for each script, and I also changed the number prefixes in the symbolic links to try to run the scripts in the proper order, but that did not work. I am able to run the scripts from the terminal and they work fine, but they all need to be started on boot. Any ideas as to why my other scripts are not running? Inside my Consumer and Data Collector scripts I am running su user1 -c 'java -jar foo.jar' to start the services. Also, the Consumer Java class sits and waits for a message from the queue, so the Java code does not stop until I specify the stop argument for the init script. The Broker has to start first, then the Consumer, then the Data Collector. Adding the symbolic links for the runlevels:
sudo update-rc.d Broker defaults 10 90
sudo update-rc.d Consumer defaults 15 85
sudo update-rc.d DataCollector defaults 20 80

    Read the article

  • Overriding GetHashCode in a mutable struct - What NOT to do?

    - by Kyle Baran
I am using the XNA Framework to make a learning project. It has a Point struct which exposes an X and Y value; for the purpose of optimization, it breaks the rules for proper struct design, since it's a mutable struct. As Marc Gravell, Jon Skeet, and Eric Lippert point out in their respective posts about GetHashCode() (which Point overrides), this is a rather bad thing, since if an object's values change while it's contained in a hashmap (i.e., LINQ queries), it can become "lost". However, I am making my own Point3D struct, following the design of Point as a guideline. Thus, it too is a mutable struct which overrides GetHashCode(). The only difference is that mine exposes an int for X, Y, and Z values, but it is fundamentally the same. The signatures are below:
public struct Point3D : IEquatable<Point3D>
{
    public int X;
    public int Y;
    public int Z;
    public static bool operator !=(Point3D a, Point3D b) { }
    public static bool operator ==(Point3D a, Point3D b) { }
    public Point3D Zero { get; }
    public override int GetHashCode() { }
    public override bool Equals(object obj) { }
    public bool Equals(Point3D other) { }
    public override string ToString() { }
}
I have tried to break my struct in the way they describe, namely by storing it in a List<Point3D>, as well as changing the value via a method using ref, but I did not encounter the behavior they warn about (maybe a pointer might allow me to break it?). Am I being too cautious in my approach, or should I be okay to use it as is?
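Worth noting when reading the question: a List<Point3D> never calls GetHashCode(), so storing the struct in a List cannot reproduce the "lost key" effect; only hash-based containers (Dictionary, HashSet, or LINQ operators that hash, such as Distinct or GroupBy) can. As a rough illustration of the underlying mechanism (sketched here in Java with a HashSet rather than C#, using a hypothetical MutablePoint class purely for demonstration), mutating a field that participates in the hash code makes the stored element unreachable:

import java.util.HashSet;
import java.util.Set;

// Hypothetical mutable type whose hashCode() depends on mutable fields,
// mirroring the concern about a mutable Point3D overriding GetHashCode()
final class MutablePoint {
    int x, y, z;
    MutablePoint(int x, int y, int z) { this.x = x; this.y = y; this.z = z; }
    @Override
    public boolean equals(Object o) {
        if (!(o instanceof MutablePoint)) return false;
        MutablePoint p = (MutablePoint) o;
        return x == p.x && y == p.y && z == p.z;
    }
    @Override
    public int hashCode() { return 31 * (31 * x + y) + z; }
}

public class LostKeyDemo {
    public static void main(String[] args) {
        Set<MutablePoint> set = new HashSet<>();
        MutablePoint p = new MutablePoint(1, 2, 3);
        set.add(p);      // stored in the bucket computed from hashCode(1, 2, 3)
        p.x = 42;        // mutate a field that participates in hashCode()
        System.out.println(set.contains(p)); // false: lookup probes the wrong bucket
        System.out.println(set.size());      // still 1, the element is effectively "lost"
    }
}

The same reasoning is behind the usual advice from the authors quoted in the question: either keep the type immutable, or guarantee it is never mutated while it is being used as a hash key.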

    Read the article

  • SQL SERVER – 2008 – Unused Index Script – Download

    - by pinaldave
Download Missing Index Script with Unused Index Script
Performance Tuning is quite interesting, and indexes play a vital role in it. A proper index can improve performance and a bad index can hamper performance. Here is the script from my script bank which I use to identify unused indexes on any database. Please note that you should not drop all the unused indexes this script suggests; it is just for guidance. You should not create more than 5-10 indexes per table. Additionally, this script sometimes does not give accurate information, so use your common sense. Anyway, the script is a good starting point. You should pay attention to User Scans, User Lookups and User Updates when you are going to drop an index. The general understanding is that if these values are all high and User Seeks is low, the index needs tuning. The index drop script is also provided in the last column. Download Missing Index Script with Unused Index Script
-- Unused Index Script
-- Original Author: Pinal Dave (C) 2011
SELECT TOP 25
    o.name AS ObjectName
    , i.name AS IndexName
    , i.index_id AS IndexID
    , dm_ius.user_seeks AS UserSeek
    , dm_ius.user_scans AS UserScans
    , dm_ius.user_lookups AS UserLookups
    , dm_ius.user_updates AS UserUpdates
    , p.TableRows
    , 'DROP INDEX ' + QUOTENAME(i.name) + ' ON ' + QUOTENAME(s.name) + '.' + QUOTENAME(OBJECT_NAME(dm_ius.OBJECT_ID)) AS 'drop statement'
FROM sys.dm_db_index_usage_stats dm_ius
INNER JOIN sys.indexes i ON i.index_id = dm_ius.index_id AND dm_ius.OBJECT_ID = i.OBJECT_ID
INNER JOIN sys.objects o ON dm_ius.OBJECT_ID = o.OBJECT_ID
INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
INNER JOIN (SELECT SUM(p.rows) TableRows, p.index_id, p.OBJECT_ID
            FROM sys.partitions p
            GROUP BY p.index_id, p.OBJECT_ID) p
    ON p.index_id = dm_ius.index_id AND dm_ius.OBJECT_ID = p.OBJECT_ID
WHERE OBJECTPROPERTY(dm_ius.OBJECT_ID, 'IsUserTable') = 1
    AND dm_ius.database_id = DB_ID()
    AND i.type_desc = 'nonclustered'
    AND i.is_primary_key = 0
    AND i.is_unique_constraint = 0
ORDER BY (dm_ius.user_seeks + dm_ius.user_scans + dm_ius.user_lookups) ASC
GO
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Download, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Wordpress Multisite- How to restrict a non-wordpress page only to one domain

    - by yibe
I am running three blogs from a WordPress multisite. Currently, I am developing a separate, non-WordPress application. I want to link this application with blog1. I put the application in a subfolder of my root directory (the WordPress multisite is installed in the root). I linked the application in the menu of blog1. So far so good. But it is possible to access the application from all three blogs. I want the application to be accessible only from blog1-domain/my-app/, but when I tried blog2-domain/my-app, it just opened normally. In general, the application is accessible from all three domains followed by /my-app. I don't think it is proper to go this way. So, what can I do to restrict my app so that it is accessible only from blog1-domain/my-app? Or am I not using a good blog structure? I am open to all kinds of suggestions. Thank you for your help.

    Read the article

  • Software Engineering Practices – Different Projects should have different maturity levels

    - by Dylan Smith
I’ve had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, whether what we are doing is appropriate, and what factors should be considered when determining which practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go: If you compare any two software projects (specifically comparing their codebases) you’ll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I’m specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, proper adherence to the SOLID principles, etc. are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access database, an Excel spreadsheet, or maybe some quick “drag-and-drop coding”. For this blog post I’m going to refer to this as the Software Engineering Maturity Spectrum (SEMS). I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over so I won’t bother going into them again here, but there are also (unnecessary) costs with over-engineering a solution. Sometimes putting in multiple layers, IoC containers, abstracting out the persistence, etc. is complete overkill if a one-time use Access database could solve the problem perfectly well. A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years, and then you’re stuck maintaining a big ball of mud. I think this is a cop-out. Sure you can’t always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn’t mean you can’t manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point…maybe even multiple times). My thoughts are that we should be making a conscious decision around the start of each project about approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors:
1. Importance – How important to the business is this application? What is the impact if the application were to suddenly stop working?
2. Complexity – How complex is the application functionality?
3. Life-Expectancy – How long is this application expected to be in use? Is this a one-time use application, does it fill a short-term need, or is it more strategic and expected to be in use for many years to come?
Of course this isn’t an exact science. You can’t say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at and then translate that into some prescriptive set of practices and techniques you should be using.
Rather, my point is that we need to be aware that there is a spectrum, that not everything is going to be (or should be) at the edges of that spectrum (indeed a large number of projects should probably fall somewhere in the middle), and that different projects should adopt different levels of software engineering practices and maturity based on the needs of that project. To give an example of this way of thinking from my day job: Every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc. The last time we arranged this event all these various pieces of data were tracked in separate spreadsheets, and reconciliation and cross-referencing of all the data was literally done by hand using printed copies of the spreadsheets and several people sitting around a table going down each list row by row. Obviously there is some room for improvement in how we are using software to manage the event’s logistics. The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so for example if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS, meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining its place within the SEMS we can see why:
Importance – If this application were to stop working the business doesn’t grind to a halt, revenue doesn’t stop, and in fact our customers wouldn’t even notice since it isn’t a customer facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don’t have any data loss).
Complexity – The use cases for this project are pretty straightforward. It simply needs to manage several lists of data, and link them together appropriately. Precisely the task that Access (and/or Excel) can do with minimal custom development required.
Life-Expectancy – For this specific project we’re only planning to create something to be used for the one event (we only hold these events every 2 years).
Let’s assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS, and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the Access database and re-writing it as an actual web or Windows application. In this case, the life-expectancy changed, but let’s assume the importance and complexity didn’t change all that much.
We can still probably get away with not adopting a lot of the so-called “best practices”. For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an Access database and continue using it; this is a decision that needs to be made on a case-by-case basis). At Anvil Digital we have aspirations to become a primarily product-based company. So let’s say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple of years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also, as we hold focus groups and gather feedback from customers and potential customers, there’s a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company. In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, a proper ORM and IoC container, etc. More likely in this example the jump along the SEMS would be so large we’d probably end up scrapping the current code and re-writing. Although, if it was a slow phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch. The key point I’m trying to get across is not that you should be throwing out your code and starting from scratch all the time, but rather that you should be aware of when and how the context and objectives around a project change, and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs. Note: There is also the idea of “spectrum decay”. Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won’t be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don’t touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay).
Developer Skills
Another important aspect to this whole discussion is around the skill sets of your architects and lead developers. When talking about the progression of a developer’s skills from junior->intermediate->senior->… they generally start by only being able to write code that belongs on the left side of the SEMS, and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS.
We all realize that the learning never stops, but eventually you’ll get to the point where you can comfortably develop at the right end of the SEMS (the exact practices and techniques that translates to are constantly changing, but that’s not the point here). A critical skill that I’d love to see more evidence of in our industry is the most senior guys not only being able to work at the right end of the SEMS, but more importantly being able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill would be the ability to make the conscious decision to move a project’s code further right on the SEMS (based on changing needs) and do so in an incremental manner without having to start from scratch. An exercise that I’m planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and for those that don’t match up (i.e. most of them) come up with a plan to improve the situation.

    Read the article

< Previous Page | 77 78 79 80 81 82 83 84 85 86 87 88  | Next Page >