Search Results

Search found 17593 results on 704 pages for 'wmi query'.


  • Show raw Text Code from a URL with CodePaste.NET

    - by Rick Strahl
    I introduced CodePaste.NET more than two years ago. In case you haven't checked it out, it's a code-sharing site where you can post some code, assign a title and syntax scheme to it, and then share it with others via a short URL. The idea is super simple and it's not the first time this has been done, but it's focused on Microsoft languages and caters to that crowd.

    Show your own code from the Web

    There's another feature that I tweeted about recently that's been there for some time, but is not used very much: CodePaste.NET has the ability to show raw, text-based code from a URL on the Web in syntax-colored format for any of the formats provided. I use this all the time with code links to my Subversion repository, which only displays code as plain text. Using CodePaste.NET allows me to show syntax-colored versions of the same code. For example, I can go from this URL:

        http://www.west-wind.com:8080/svn/WestwindWebToolkit/trunk/Westwind.Utilities/SupportClasses/PropertyBag.cs

    to a nicely colored source code view at this URL:

        http://codepaste.net/ShowUrl?url=http%3A%2F%2Fwww.west-wind.com%3A8080%2Fsvn%2FWestwindWebToolkit%2Ftrunk%2FWestwind.Utilities%2FSupportClasses%2FPropertyBag.cs&Language=C%23

    Use the form or access URLs directly

    To get there, navigate to the Web Code icon on the CodePaste.NET site, paste your original URL, and select a language to display. The form creates a link like the one shown above, which has two query string parameters:

        url      - The URL for the raw text on the Web
        language - The code language used for syntax highlighting

    Note that parameters must be URL encoded to work, especially the # in C#, because otherwise the # will be interpreted by the browser as a hash tag to jump to in the target URL. The URL must be Web accessible so that CodePaste can download it and then apply the syntax coloring; it doesn't work with localhost URLs, for example. The code must be returned as plain text - HTML-based text doesn't work.

    Hope some of you find this a useful feature. Enjoy…

    © Rick Strahl, West Wind Technologies, 2005-2011

    Read the article

  • Backup Meta-Data

    - by BuckWoody
    I'm working on a PowerShell script to show me the trending durations of my backup activities. The first thing I need is the data, so I looked at the Standard Reports in SQL Server Management Studio, found a report that suited my needs, and pulled out the script that it runs and modified it into this T-SQL script. A few words here - you need to be in the MSDB database for this to run, and you can add a WHERE clause to limit to a database, timeframe, type of backup, whatever. For that matter, I won't use all of the data in this query in my PowerShell script, but it gives me lots of avenues to graph:

        SELECT DISTINCT
             t1.name AS 'DatabaseName'
            ,(DATEDIFF(ss, t3.backup_start_date, t3.backup_finish_date)) AS 'DurationInSeconds'
            ,t3.user_name AS 'UserResponsible'
            ,t3.name AS backup_name
            ,t3.description
            ,t3.backup_start_date
            ,t3.backup_finish_date
            ,CASE WHEN t3.type = 'D' THEN 'Database'
                  WHEN t3.type = 'L' THEN 'Log'
                  WHEN t3.type = 'F' THEN 'FileOrFilegroup'
                  WHEN t3.type = 'G' THEN 'DifferentialFile'
                  WHEN t3.type = 'P' THEN 'Partial'
                  WHEN t3.type = 'Q' THEN 'DifferentialPartial'
             END AS 'BackupType'
            ,t3.backup_size AS 'BackupSizeKB'
            ,t6.physical_device_name
            ,CASE WHEN t6.device_type = 2   THEN 'Disk'
                  WHEN t6.device_type = 102 THEN 'Disk'
                  WHEN t6.device_type = 5   THEN 'Tape'
                  WHEN t6.device_type = 105 THEN 'Tape'
             END AS 'DeviceType'
            ,t3.recovery_model
        FROM sys.databases t1
        INNER JOIN backupset t3 ON (t3.database_name = t1.name)
        LEFT OUTER JOIN backupmediaset t5 ON (t3.media_set_id = t5.media_set_id)
        LEFT OUTER JOIN backupmediafamily t6 ON (t6.media_set_id = t5.media_set_id)
        ORDER BY backup_start_date DESC

    I'll munge this into my Excel PowerShell chart script tomorrow.

    Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It's just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.
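    As a quick illustration of the note above about adding a WHERE clause, here is a minimal sketch (the database name and 30-day window are my own assumptions, not part of the original script) that trims the result to full backups of one database over the last month; like the query above, run it in msdb:

        -- Hypothetical filter: full backups of one database from the last 30 days
        SELECT t3.database_name
              ,DATEDIFF(ss, t3.backup_start_date, t3.backup_finish_date) AS 'DurationInSeconds'
              ,t3.backup_start_date
        FROM backupset t3
        WHERE t3.database_name = 'AdventureWorks'   -- hypothetical database name
          AND t3.type = 'D'                         -- full database backups only
          AND t3.backup_start_date >= DATEADD(dd, -30, GETDATE())
        ORDER BY t3.backup_start_date DESC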

    Read the article

  • Configure clean URLs in Laravel using a rewrite rule to index.php

    - by yannis hristofakis
    Recently I've started learning Laravel; I have no prior experience with frameworks. I'm encountering the following problem: I'm trying to configure the .htaccess file so I can have clean URLs, but the only thing I get are 404 Not Found error pages. I have created a virtual host - you can see the configuration file below - and changed the .htaccess file in the public directory.

    /etc/apache2/sites-available:

        <VirtualHost *:80>
          ServerAdmin [email protected]
          ServerName laravel.lar
          DocumentRoot "/home/giannis/Desktop/laravel/public"
          <Directory "/home/giannis/Desktop/laravel/public">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
          </Directory>
          <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
          </Directory>
          ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
          <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
          </Directory>
          ErrorLog ${APACHE_LOG_DIR}/error.log
          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn
          CustomLog ${APACHE_LOG_DIR}/access.log combined
          Alias /doc/ "/usr/share/doc/"
          <Directory "/usr/share/doc/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
            Allow from 127.0.0.0/255.0.0.0 ::1/128
          </Directory>
        </VirtualHost>

    .htaccess file (laravel/public):

        # Apache configuration file
        # http://httpd.apache.org/docs/current/mod/quickreference.html

        # Note: ".htaccess" files are an overhead for each request. This logic should
        # be placed in your Apache config whenever possible.
        # http://httpd.apache.org/docs/current/howto/htaccess.html

        # Turning on the rewrite engine is necessary for the following rules and
        # features. "+FollowSymLinks" must be enabled for this to work symbolically.
        <IfModule mod_rewrite.c>
          Options +FollowSymLinks
          RewriteEngine On
        </IfModule>

        # For all files not found in the file system, reroute the request to the
        # "index.php" front controller, keeping the query string intact
        <IfModule mod_rewrite.c>
          RewriteEngine On
          RewriteBase /
          RewriteCond %{REQUEST_FILENAME} !-f
          RewriteCond %{REQUEST_FILENAME} !-d
          RewriteRule ^(.*)$ index.php?/$1 [L]
        </IfModule>

    In order to test it, I have created a view named about and made the proper routing. If I go to http://laravel.lar/index.php/about/ I'm routed to the about page, but if I go to http://laravel.lar/about/ I get a 404 Not Found error. I'm using a Debian-based system.

    Read the article

  • Truly understand the threshold for document set in document library in SharePoint

    - by ybbest
    Recently, I worked on an issue with the list view threshold. The problem was that when the user navigated to a view of the document library, it displayed the error message "list view threshold is exceeded" even though the view contained no data. The list view threshold limit is 5000 by default for non-admin users. This limit is not the number of items returned by your query; it is the total number of items the database needs to read to calculate the returned result set. So although the view returned no results, calculating that (empty) result required accessing more than 5000 items in the database. To fix the issue, you need to create an index for the column that you use in the filter for the view.

    Let's look at the problem in detail. You can download a solution to replicate this issue here.

    1. Go to Central Admin ==> Web Application Management ==> General Settings ==> click on Resource Throttling.
    2. Change the list view threshold for the web application from 5000 to 2000, so the problem can be shown without loading more than 5000 items into the list.
    3. Go to the page that displays the approved view of the Loan application document set. It displays the error message, although no data is returned for this view.
    4. To get around this, you need to create an index column. Go to list settings and click on Indexed columns.
    5. Click on the "Create a new index" link.
    6. Select the LoanStatus field, as this field is used as the filter to create the view.
    7. After the index is created, the approved view can now be accessed; as you can see, it does not return any data.

    Notes: List View Threshold: Specifies the maximum number of items that a database operation can involve at one time. Operations that exceed this limit are prohibited.

    References:
    SharePoint lists V: Techniques for managing large lists
    Manage large SharePoint lists for better performance
    http://blogs.technet.com/b/speschka/archive/2009/10/27/working-with-large-lists-in-sharepoint-2010-list-throttling.aspx

    Read the article

  • SQL Developer: Why Do You Require Semicolons When Executing SQL in the Worksheet?

    - by thatjeffsmith
    There are many database tools out there that support Oracle Database. Oracle SQL Developer just happens to be the one that is produced and shipped by the same folks that bring you the database product. Several other third-party tools allow you to have a collection of SQL statements in their editor and execute them without requiring a statement delimiter (usually a semicolon). Let's look at a quick example:

        select * from scott.emp

        select * from hr.employees

        delete from HR_COPY.BEER
         where HR_COPY.BEER.STATE like '%West Virginia%'

    In some tools, you can simply place your cursor on, say, the 2nd statement and ask to execute that statement. The tool assumes that the blank line between it and the next statement, a DELETE, serves as a statement delimiter. This is not bad in and of itself. However, it is very important to understand how your tools work. If you were to try the same trick by running the DELETE statement, it would empty my entire BEER table instead of just trimming out the breweries from my home state.

    SQL Developer only executes what you ask it to execute

    You can paste this same code into SQL Developer and run it without problems and without having to add semicolons to your statements. Highlight what you want executed, and hit Ctrl-Enter. If you don't highlight the text, the parser looks for a query and keeps going until the statement is terminated with a semicolon - UNLESS text is highlighted, in which case it assumes you only want to execute what is highlighted. In both cases you are being explicit about what is sent to the database.

    Again, there's not necessarily a 'right' or 'wrong' debate here. What you need to be aware of is the differences, and to learn new workflows if you are moving from other database tools to Oracle SQL Developer. I say, when in doubt, back away from the tool, especially if you're in production.

    Oh, and to answer the original question… Because we're trying to emulate SQL*Plus behavior. You end statements in SQL*Plus with delimiters, and the default delimiter is a semicolon.
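    For contrast, here is a minimal sketch of the same script written with explicit semicolon delimiters (schema and table names taken from the example above); written this way, it runs unambiguously in SQL Developer, SQL*Plus, and most other tools:

        select * from scott.emp;

        select * from hr.employees;

        -- The delimiter makes the statement boundary explicit, so there is no
        -- risk of the WHERE clause being dropped from the DELETE
        delete from HR_COPY.BEER
         where HR_COPY.BEER.STATE like '%West Virginia%';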

    Read the article

  • Oracle Coherence 3.5: Create Internet-scale applications using Oracle's high-performance data grid

    - by frederic.michiara
    Oracle Coherence

    Coherence provides replicated and distributed (partitioned) data management and caching services on top of a reliable, highly scalable peer-to-peer clustering protocol. Coherence has no single points of failure; it automatically and transparently fails over and redistributes its clustered data management services when a server becomes inoperative or is disconnected from the network. When a new server is added, or when a failed server is restarted, it automatically joins the cluster and Coherence fails back services to it, transparently redistributing the cluster load. Coherence includes network-level fault tolerance features and transparent soft restart capability to enable servers to self-heal.

    For those looking for an easy read and a good first approach to Oracle Coherence, I would recommend the following book, an overview of Oracle Coherence 3.5:

    - Build scalable web sites and enterprise applications using a market-leading data grid product
    - Design and implement your domain objects to work most effectively with Coherence, and apply Domain-Driven Design (DDD) to Coherence applications
    - Leverage Coherence events and continuous queries to provide real-time updates to client applications
    - Successfully integrate various persistence technologies, such as JDBC, Hibernate, or TopLink, with Coherence
    - Filled with numerous examples that provide best-practice guidance, and a number of classes you can readily reuse within your own applications

    This book is targeted at architects and developers, and as our team is more about solutions architects than developers, I found interest in this book as it helps in understanding Oracle Coherence and its value. The only point on which I may not agree with the authors is that Oracle Coherence is not an alternative to Oracle RAC for providing high availability; rather, combining Oracle RAC and Oracle Coherence will help architects and customers reach a higher level of service and high availability.

    The book is available at https://www.packtpub.com/oracle-coherence-3-5/book
    Table of contents: https://www.packtpub.com/toc/oracle-coherence-35-table-contents
    Sample chapter: https://www.packtpub.com/sites/default/files/6125_Oracle%20Coherence_SampleChapter.pdf

    You can also read articles from the authors on http://www.packtpub.com/:
    - Working with Aggregators in Oracle Coherence 3.5
    - Working with Value Extractors and Simplifying Queries in Oracle Coherence 3.5
    - Querying the Data Grid in Coherence 3.5: Obtaining Query Results and Using Indexes
    - Installing Coherence 3.5 and Accessing the Data Grid: Part 1
    - Installing Coherence 3.5 and Accessing the Data Grid: Part 2

    For more information on Oracle Coherence:
    - What Oracle Coherence Can Do for You: http://www.oracle.com/technology/products/coherence/coherencedatagrid/coherence_solutions.html
    - Oracle Coherence on OTN: http://www.oracle.com/technology/products/coherence/index.html
    - Oracle Coherence Knowledge Base: http://coherence.oracle.com/display/COH/Oracle+Coherence+Knowledge+Base+Home

    Read the article

  • Watching Green Day and discovering Sitecore, priceless.

    - by jonel
    I'm feeling inspired and I'd like to share a technique we've implemented in Sitecore to address a URL mapping from our legacy site that we wanted to carry over to the new, beautiful Littelfuse.com. The challenge is to carry over all of our series URLs that have been published in our datasheets; we currently have a lot of series, and creating a manual mapping for each could be really tedious. The URLs have the format http://www.littelfuse.com/series/series-name.html, for instance http://www.littelfuse.com/series/flnr.html. It would have been easier if our information architecture were defined like this, but that would have been too easy.

    I took a solution that is two-fold. First, I needed to create a URL rewrite rule using the IIS URL Rewrite Module 2.0. Second, we needed to implement a handler that takes care of the actual lookup of the series. It will be amazing once we've gone over the details.

    Let's start with the URL rewrite. Create a new blank rule; you can name it anything you wish. The key parts to talk about are the Pattern and the Action groups. The Pattern is nothing but regex; basically, I'm telling it to match the regex I have defined. In the Action group, I am telling it what to do - in this case, rewrite to the redirect.aspx webform. In this implementation, I use Rewrite instead of Redirect so the URL sticks in the browser. If you opt to use Redirect, then the URL bar will display the new URL our webform redirects to. Let me explain one small thing: the (\w+) in my Pattern group's regex translates to {R:1} in my Action group. This is where the magic begins.

    Now let's see what our Redirect.aspx contains. Remember our {R:1} above, which becomes the query string variable s? This is basic .NET code. The only thing you will probably ask about is the LFSearch class. It's our own implementation for finding items by using a field search: we supply the field name, the value of the field, the template name of the item we are after, and true or false depending on whether we want an exact search or not. If eureka, then redirect to that item's path (URL). If not, tell the user tough luck - here's the 404 page as a consolation.

    Amazing, ain't it?

    Read the article

  • Make the Firefox Awesome Bar Semi-Transparent Like Google Chrome

    - by Matthew Guay
    Would you like to make the Firefox Awesome Bar drop-down menu semi-transparent like in Google Chrome? Here's a quick trick that can make your Firefox Awesome Bar a bit more awesome.

    When you type an address or search query into the address bar in Google Chrome, the drop-down list of history and search suggestions that appears is slightly transparent. Nothing extreme, but it adds a nice touch. Firefox's Awesome Bar, on the other hand, is fully opaque by default.

    We can change that with a simple tweak. Exit Firefox, then open your Firefox profile folder by entering the following in the address bar in Explorer or in the Run command:

        %appdata%\Mozilla\Firefox\Profiles\

    Open the default folder, and then open the Chrome folder in it. Now, open the userChrome.css file in an editor such as Notepad. If you do not have a userChrome.css file, open the userChrome-example.css file instead. Add the following to the end of the file:

        #PopupAutoCompleteRichResult[type="autocomplete-richlistbox"] {
           opacity: 0.9 !important;
        }

    You can change the opacity value, but 0.9 seemed the closest to Chrome's transparency while keeping the text readable. Save the file as userChrome.css in that same folder. If you're editing with Notepad, make sure to select All Files when saving so the file won't be saved with a .txt extension.

    Open Firefox, and now your Awesome Bar's drop-down list will be transparent. Actually, it may look even more awesome than Google Chrome's address bar!

    Conclusion

    With this simple trick, you can make your Firefox Awesome Bar a bit more awesome. With tweaks like this, it's no wonder Firefox is still so popular. Special thanks to Daniel Spiewak for the tip!

    Read the article

  • SQL SERVER – Various Leap Year Logics

    - by pinaldave
    Earlier I wrote an article on Leap Year and created a video about Leap Year. My point was to demonstrate how we can use SQL Server 2012 features to identify a leap year. However, the post prompted some really good conversation. Here are updates for those who have missed reading the excellent comments on the blog.

    Incorrect Logic

    Many people still think a leap year is an event which consistently happens every four years, and that the way to find it is to divide the year by 4: if the remainder is 0, that year is a leap year. Well, that is not correct.

    Comment by David Bridge
    Check out this excerpt from the wikipedia page http://en.wikipedia.org/wiki/Leap_year
    "most years that are evenly divisible by 4 are leap years…"
    "…Some exceptions to this rule are required since the duration of a solar year is slightly less than 365.25 days. Years that are evenly divisible by 100 are not leap years, unless they are also evenly divisible by 400, in which case they are leap years. For example, 1600 and 2000 were leap years, but 1700, 1800 and 1900 were not. Similarly, 2100, 2200, 2300, 2500, 2600, 2700, 2900 and 3000 will not be leap years, but 2400 and 2800 will be."

    If you use the logic of "divide by 4 and the remainder is 0" to find a leap year, you may end up with inaccurate results. The correct way to identify a leap year is to figure out the number of days in February: if the count is 29, the year is for sure a leap year.

    Valid Alternate Solutions

    Comment by sainswor99insworth

        IIF((@Year%4=0 AND @Year%100 != 0) OR @Year%400=0, 1, 0)

    Comment by Madhivanan
    Madhivanan wrote a blog post about a year ago where he listed multiple ways to find a leap year.

    Comment by Jayan

        DECLARE @year INT
        SET @year = 2012
        IF (((@year % 4 = 0) AND (@year % 100 != 0)) OR (@year % 400 = 0))
            PRINT '1'
        ELSE
            PRINT '0'

    Comment by David

        DECLARE @Year INT = 2012
        SELECT ISDATE('2/29/' + CAST(@Year AS CHAR(4)))

    Comment by David Bridge
    Incidentally, another approach would be to take one day off March 1st and see if it is the 29th.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
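    Following the "count the days of February" advice above, here is a minimal sketch using the SQL Server 2012 functions DATEFROMPARTS and EOMONTH; this particular combination is my own illustration rather than one of the comments quoted above:

        DECLARE @Year INT = 2012;

        -- EOMONTH returns the last day of February: the 29th in a leap year, the 28th otherwise
        SELECT CASE WHEN DAY(EOMONTH(DATEFROMPARTS(@Year, 2, 1))) = 29
                    THEN 1 ELSE 0
               END AS IsLeapYear;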

    Read the article

  • What’s new in VS.10 & TFS.10?

    - by johndoucette
    Getting my geek on… I have decided to call the products VS.10 (Visual Studio 2010), TP.10 (Test Professional 2010), and TFS.10 (Team Foundation Server 2010). Thanks Neno Loje.

    What's new in Visual Studio & Team Foundation Server 2010? Focusing on the Visual Studio Team System (VSTS) ALM-related parts:

    Visual Studio Ultimate 2010
    - NEW: IntelliTrace® (aka the historical debugger)
    - NEW: Architecture Tools
      - New project type: Modeling Project
      - UML diagrams: Use Case, Class, Sequence (supports reverse engineering), Activity, Component
      - Layer Diagram (with Team Build integration for layer validation)
      - Architecture Explorer
      - Dependency visualization
      - DGML
    - Web & Load Tests

    Visual Studio Premium 2010
    - NEW: Architecture Tools
      - Read-only model viewer
    - Development Tools
      - Code Analysis: new rules like SQL injection detection; rule sets
      - Code Profiler: multi-tier profiling, JScript profiling, profiling applications on virtual machines in sampling mode
      - Code Metrics
    - Test Tools
      - Code Coverage
      - NEW: Test Impact Analysis
      - NEW: Coded UI Test
    - Database Tools (DB schema versioning & deployment)

    Visual Studio Professional 2010
    - Debugger
      - Mixed-mode debugging for 64-bit applications
      - Export/import of breakpoints and data tips

    Visual Studio Test Professional 2010
    - Microsoft Test Manager (MTM, formerly known as "Camano")
    - Fast Forward Testing

    Visual Studio Team Foundation Server 2010
    - Work Item Tracking and Project Management
      - New MSF templates for Agile and CMMI (v5.0)
      - Hierarchical work items
      - Custom work item link types
      - Ready-to-use Excel agile project management workbooks for managing your backlogs (including capacity planning)
      - Convert a work item query to an Excel report
      - MS Excel integration: support for work item hierarchies; formatting is preserved after doing a 'Refresh'
      - MS Project integration: hierarchy and successor/predecessor info is now synchronized
    - NEW: Test Case Management
    - Version Control
      - Public workspaces
      - Branch & merge visualization
      - Tracking of changesets & work items
      - Gated check-in
    - Team Build
      - Build controllers and agents
      - Workflow 4-based build process
    - NEW: Lab Management (only a pre-release is available at the moment!)
    - Project Portal & Reporting
      - Dashboards (on SharePoint Portal)
      - Burndown chart
      - TFS Web Parts (to show data from TFS)
    - Administration & Operations
      - Topology enhancements: application-tier network load balancing (NLB), SQL Server scale-out, improved SharePoint flexibility, Report Server flexibility, zone support, Kerberos support, separation of TFS and SQL administration
      - Setup: separate install from configure, improved installation wizards, optional components, simplified account requirements, improved Reporting Services configuration, setup consolidation, upgrading from previous TFS versions, improved IIS flexibility
      - Administration: consolidation of command line tools, user rename support
      - Project Collections: archive/restore individual project collections, move team project collections
      - Server consolidation: team project collection split, team project collection isolation, server request cancellation
    - Licensing: TFS server license included in MSDN subscriptions

    Removed features (former features not part of Visual Studio 2010):
    - Debug » Start With Application Verifier
    - Object Test Bench
    - IntelliSense for C++/CLI
    - Debugging support for SQL 2000

    Read the article

  • Music before bells and whistles

    - by Tony Davis
    Why is it that Windows has so much difficulty in finding content on its file system? This is not an insurmountable technical problem; on my laptop, I have a database within which I can instantly find text or names within millions of records, within 300 milliseconds. I have a copy of Google Desktop that can find phrases within emails or documents almost as quickly. It is an important, though mundane, part of an operating system to be able to find files.

    The first thing I notice within Windows is that the facility to find files or text within files is called 'search' rather than 'find'. Hmm. This doesn't bode well. What's this? It does a brute-force search for file names? Here we are in an age when we can breed mice that glow in the dark, and manufacture computers that fit in our shirt pockets, and we find an operating system that is still entirely innocent of managing and indexing content in hierarchical data. I can actually read the files of my PC into a database, mimic the directory/folder hierarchies and then find files in a flash; but when I do the same with Windows Vista, we are suddenly back in a 1960s time warp. Finding files based on their name is bad enough, but finding files based on the content that they contain is more or less asking for an opportunity to wait 20 minutes in order to see a "file not found" message.

    Sadly, with Windows 7, Microsoft seems to have fallen into the familiar trap of adding bells and whistles before finishing the song. It's certainly true that Microsoft has added new features and a certain polish to Windows Search 4.0, the latest incarnation. It works more like a web search and offers a new search syntax, called Advanced Query Syntax, which allows you to search on file author, file size, date ranges (e.g. date:>=7/4/09), and more. And yet, despite all this, it still does not work reliably. I've experienced first-hand its stubborn refusal, despite a full index, to acknowledge the existence of a file I know exists, based on a search for a specific term within that file that I know is in there somewhere; a file that Google Desktop search, or old wingrep, finds in seconds. When users hark back to the halcyon days of Windows XP search, you know something is seriously amiss.

    Shouldn't applications get the functionality right before applying animated menus and Teletubby graphics, or is advancing age making me grumpy? I'd be pleased to hear your views, as always.

    Cheers,
    Tony.

    Read the article

  • Some of my favourite Visual Studio 2012 things – Teams

    - by Aaron Kowall
    Getting the balance right for when and how many team projects to create has always been a bit of a balancing act. On large initiatives, there are often teams who work toward a common system. These teams often have quite a bit of autonomy, but need to roll up to some higher-level initiative. In TFS 2010, people were often tempted to create separate Team Projects for each of the sub-teams and then do some magic with reporting and cross-team queries to get the consolidated view. My recommendation was always to use Areas as a means of separating work across the team, but that always resulted in a large number of queries that needed to be maintained and just seemed confusing. When doing anything you had to remember to filter the query or view by Area in order to get correct results.

    Along with the awesome web access portal that comes in TFS 2012 (which I will cover in detail in another post), the product group has introduced the concept of Teams. A team is a sub-group within a TFS 2012 Team Project which allows us to more easily divide work along team boundaries.

    Technically, a Team is defined by an Area Path and a TFS Group, both of which could already be set up in TFS 2010. However, by allowing for the creation of a 'Team' in TFS 2012, the web portal is able to do a bunch of 'magic' for us. We can view the project site (backlog, taskboard, etc.) for the team, we can assign items to the team, and we can view the burndown for the team. Basically, all the stuff that we had to prepare manually is now created and managed for us, with a nice UI.

    When you create a Team Project in TFS 2012, a 'Default' team is created with the same name as the Team Project. So, if you only have one team working on the project, you are set. If you want to divide the work into additional teams, you can create teams by using the Team Web Client. Teams are created using the 'Administer Server' icon in the top right of the web site, and you can select a team site by using the team chooser. Once you have selected a team, the Product Backlog, Taskboard, Burndown Charts, etc. are all filtered to that team.

    NOTE: You always have the ability to choose the 'Default' team to see items for the entire project.

    PS: It's been a long while since I shared on this blog. To help with that, I'm in a blogging challenge with some other developer and agilist friends. Please check out their blogs as well:
    Steve Rogalsky: http://winnipegagilist.blogspot.ca
    Dylan Smith: http://www.geekswithblogs.net/optikal
    Tyler Doerkson: http://blog.tylerdoerksen.com
    David Alpert: http://www.spinthemoose.com
    Dave White: http://www.agileramblings.com

    Technorati Tags: TFS 2012, Agile, Team

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 24 (sys.dm_db_index_operational_stats)

    - by Tamarick Hill
    The sys.dm_db_index_operational_stats Dynamic Management Function returns information about the IO, locking, and access methods for the indexes that you currently have on your SQL Server instance. This function takes four input parameters: (1) database_id, (2) object_id, (3) index_id, and (4) partition_number. Let's have a look at the results from this function against our AdventureWorks2012 database. This function returns a ton of columns, so not only will I not attempt to describe each of the columns, I won't even attempt to display all of them here. My query below will give you a subset of the columns returned from this function.

        SELECT database_id, object_id, index_id, partition_number,
               leaf_insert_count, leaf_delete_count, leaf_update_count,
               leaf_ghost_count, nonleaf_insert_count, nonleaf_delete_count,
               nonleaf_update_count, range_scan_count, forwarded_fetch_count,
               row_lock_count, row_lock_wait_count, page_lock_count,
               page_lock_wait_count, index_lock_promotion_attempt_count,
               index_lock_promotion_count, page_compression_attempt_count,
               page_compression_success_count
        FROM sys.dm_db_index_operational_stats(DB_ID('AdventureWorks2012'), NULL, NULL, NULL)

    The first four columns in the result set represent the values that we passed in as our input parameters. If you use NULLs as I did, then you will see results for every index on your system. I specified a database_id, so my result set only shows records pertaining to my AdventureWorks2012 database. The next columns in the result set provide information on how many inserts, deletes, or updates have taken place on your leaf and nonleaf index levels; the nonleaf levels refer to the intermediate and root index levels. In the middle of these you see a leaf_ghost_count column, which represents the number of records that have been logically deleted, marked as "ghosted", and are waiting for the background ghost cleanup process to physically remove them.

    The range_scan_count column represents the number of range or table scans that have been performed against an index. The forwarded_fetch_count column represents the number of rows that were returned via a forwarding row pointer. The row_lock_count and row_lock_wait_count columns represent the number of row locks that have been requested for an index and the number of times SQL has had to wait on a row lock, respectively. The page_lock_count and page_lock_wait_count columns represent the number of page locks that have been requested for an index and the number of times SQL has had to wait on a page lock, respectively. The index_lock_promotion_attempt_count column represents the number of times the database engine has attempted to promote a lock to the index level, and the index_lock_promotion_count column displays how many times that index lock promotion was successful. Lastly, the page_compression_attempt_count and page_compression_success_count columns represent how many times page compression was attempted and how many times the attempt was successful.

    As you can see, there is a ton of information returned from this DMF. The DMV we reviewed yesterday (sys.dm_db_index_usage_stats) provided good information on when and how indexes have been used, but this DMF takes an even deeper dive into these statistics. If you are interested in performing a very detailed analysis of the operational stats of your indexes, this is not only a good place to start, but more than likely the best place.
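    As a quick illustration of the input parameters, here is a minimal sketch that narrows the function to the indexes of a single table; the table name is an assumption for demonstration, so substitute one of your own:

        -- Operational stats for every index on one (hypothetical) table only
        SELECT object_id, index_id, partition_number,
               range_scan_count, row_lock_wait_count, page_lock_wait_count
        FROM sys.dm_db_index_operational_stats(
                 DB_ID('AdventureWorks2012'),
                 OBJECT_ID('AdventureWorks2012.Sales.SalesOrderDetail'),
                 NULL, NULL);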
For more information on this Dynamic Management Function, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms174281.aspx Follow me on Twitter @PrimeTimeDBA

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 25 (sys.dm_db_missing_index_details)

    - by Tamarick Hill
    The sys.dm_db_missing_index_details Dynamic Management View is used to return information about missing indexes on your SQL Server instances. These indexes are ones that the optimizer has identified as indexes it would like to have used but did not have. You may also see these same indexes indicated in other tools, such as query execution plans or the Database Engine Tuning Advisor. Let's execute this DMV so we can review the information it provides. I do not have any missing index information for my AdventureWorks2012 database, but for the purposes of illustrating the result set of this DMV, I will present the results from my msdb database.

        SELECT * FROM sys.dm_db_missing_index_details

    The first column presented is the index_handle, which uniquely identifies a particular missing index. The next two columns represent the database_id and the object_id for the particular table in question. Next is the 'equality_columns' column, which gives you a comma-separated list of columns that would be beneficial to the optimizer for equality operations. By equality operation I mean any query that uses a filter or join condition such as WHERE A = B. The 'inequality_columns' column gives you a comma-separated list of columns that would be beneficial to the optimizer for inequality operations. An inequality operation is anything other than A = B; for example, "WHERE A != B", "WHERE A > B", "WHERE A < B", and "WHERE A <> B" would all qualify as inequality. Next is the 'included_columns' column, which lists all columns that would be beneficial to the optimizer for the purposes of providing a covering index and preventing key/bookmark lookups. Lastly is the 'statement' column, which lists the name of the table where the index is missing.

    This DMV can help you identify potential indexes that could be added to improve the performance of your system. However, I advise you not to just take the output of this DMV and create an index for everything you see. Everything listed here should be analyzed and then tested on a Development or Test system before being implemented in a Production environment.

    For more information on this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms345434.aspx

    Follow me on Twitter @PrimeTimeDBA
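    As a follow-up to the analysis advice above, one common way to prioritize these suggestions - a sketch of my own, not taken from the post - is to join this DMV to the related missing-index DMVs and rank by how often and how expensively the optimizer wanted each index:

        -- Rank missing-index suggestions by estimated usefulness
        SELECT TOP (10)
               d.statement, d.equality_columns, d.inequality_columns, d.included_columns,
               s.user_seeks, s.avg_total_user_cost, s.avg_user_impact
        FROM sys.dm_db_missing_index_details AS d
        JOIN sys.dm_db_missing_index_groups AS g
          ON g.index_handle = d.index_handle
        JOIN sys.dm_db_missing_index_group_stats AS s
          ON s.group_handle = g.index_group_handle
        ORDER BY s.user_seeks * s.avg_total_user_cost * s.avg_user_impact DESC;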

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 29 (sys.dm_os_buffer_descriptors)

    - by Tamarick Hill
    The sys.dm_os_buffer_descriptors Dynamic Management View gives you a look into the data pages that are currently in your SQL Server buffer pool. Just in case you are not familiar with some of the internals of SQL Server and how the engine works: SQL Server only works with objects that are in memory (the buffer pool). When an object such as a table needs to be read and it does not exist in the buffer pool, SQL Server will read (copy) the necessary data page(s) from disk into the buffer pool and cache them. Caching takes place so that pages can be reused, preventing the need for expensive physical reads. To better illustrate this DMV, let's query it against our AdventureWorks2012 database and view the result set.

        SELECT * FROM sys.dm_os_buffer_descriptors
        WHERE database_id = DB_ID('AdventureWorks2012')

    The first column returned from this result set is the database_id column, which identifies the specific database for a given row. The file_id column represents the file that a particular buffer descriptor belongs to. The page_id column represents the ID of the data page within the buffer. The page_level column represents the index level of the data page. Next we have the allocation_unit_id column, which identifies a unique allocation unit; an allocation unit is basically a set of data pages. The page_type column tells us exactly what type of page is in the buffer pool. In my result set I see three distinct types of pages in my buffer pool: Index, Data, and IAM pages. Index pages are pages that are used to build the root and intermediate levels of a B-tree. A Data page represents the actual leaf pages of a clustered index, which contain the actual data for the table. Without getting into too much detail, an IAM page is an Index Allocation Map page, which tracks GAM (Global Allocation Map) pages, which in turn track extents on your system. The row_count column details how many data rows are present on a given page. The free_space_in_bytes column tells you how much of a given data page is still available; remember, pages are 8K in size. The is_modified column signifies whether or not a page has been changed since it was read into memory, i.e. a dirty page. The numa_node column represents the Non-Uniform Memory Access node for the buffer. Lastly is the read_microsec column, which tells you how many microseconds it took for a data page to be read (copied) into the buffer pool.

    This is a great DMV for use when you are tracking down a memory issue or if you just want to have a look at what type of pages are currently in your buffer pool.

    For more information about this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms173442.aspx

    Follow me on Twitter @PrimeTimeDBA
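    Since each row in this DMV is one 8 KB page, a handy variation - my own sketch, not from the post - is to aggregate the view to see how much of the buffer pool each database is consuming:

        -- Buffer pool usage by database, in MB (8 KB per page)
        SELECT DB_NAME(database_id) AS database_name,
               COUNT(*) * 8 / 1024  AS buffer_mb
        FROM sys.dm_os_buffer_descriptors
        GROUP BY database_id
        ORDER BY buffer_mb DESC;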

    Read the article

  • Analysis Services Tabular books #ssas #tabular

    - by Marco Russo (SQLBI)
    Many people are looking for books about Analysis Services Tabular. Today there are two books available, and they complement each other:

    - Microsoft SQL Server 2012 Analysis Services: The BISM Tabular Model, by Marco Russo, Alberto Ferrari and Chris Webb
    - Applied Microsoft SQL Server 2012 Analysis Services: Tabular Modeling, by Teo Lachev

    The book I wrote with Alberto and Chris is a complete guide to creating tabular models and has good coverage of DAX, including how to use it to enrich a semantic model with calculated columns and measures and how to use it to query a Tabular model. In my experience, DAX as a query language is a very interesting option for custom analytical applications that require a fast calculation engine, or simply for standard reports running in Reporting Services and accessing a Tabular model. You can freely preview the table of contents and read some excerpts from the book on Safari Books Online. The book is at the printer and should ship by mid-July, so it will finally be very soon on the shelves of all the people who already preordered it!

    Teo Lachev's book covers the full spectrum of Tabular models provided by Microsoft. Starting with self-service BI, you have users creating a model with PowerPivot for Excel, publishing it to PowerPivot for SharePoint, and exploring data by using Power View; then, the PowerPivot for Excel model can be imported into a Tabular model and published in Analysis Services, adding more control over the model through row-level security and partitioning, for example. Teo's book follows a step-by-step approach describing each feature, which is very good for a beginner who is new to PowerPivot and/or to BISM Tabular. If you need to get the big picture and start using the products that are part of the new Microsoft wave of BI products, Teo's book is for you.

    After you read Teo's book, or if you already have a certain confidence with PowerPivot or BISM Tabular and you want to go deeper into internals, best practices, and design patterns in just BISM Tabular, then our book is a suggested read: it contains several chapters about DAX, includes discussions about new opportunities in data model design offered by Tabular models, and also provides examples of optimizations you can obtain in DAX and best practices in data modeling and queries.

    It might seem strange that an author writes a review of a book that might seem to compete with his own, but in reality these two books complement each other and are not alternatives. If you have any doubt, buy both: you will not be disappointed! Moreover, Amazon usually offers you a deal to buy three books, including Visualizing Data with Microsoft Power View, another good choice for getting all the details about Power View.

    Read the article

  • Why You Should Attend MySQL Connect, and Register Now

    - by Bertrand Matthelié
    MySQL Connect is taking place on September 29 and 30 in San Francisco. The early bird discount, enabling you to save US$500, is only running for a few more days, until July 13. Are you still wondering if you should sign up? Here are 10 reasons why you definitely should:

    1. Learn from other companies how they tackled challenges similar to the ones you're facing. Find out what they learned along the way, and how you can save time, money and a lot of trouble by avoiding their mistakes and applying the best practices they've developed. You'll get the chance to hear from organizations including PayPal, Verizon, Twitter, Facebook, Ticketmaster, Ning, Mozilla, CERN, Yahoo! and more!
    2. Don't miss this unique opportunity to meet the engineers developing and supporting the MySQL products in a single location. You'll be able to ask them all your questions, which can represent a huge time and money saver.
    3. Acquire detailed knowledge about InnoDB, the MySQL Optimizer, High Availability strategies, improving performance and scalability, enhancing security and numerous other topics. You'll hear it straight "from the horse's mouth" as well as from other MySQL experts in the ecosystem.
    4. Get a better understanding of Oracle's MySQL strategy and of the MySQL roadmap, so you can better plan where to use the MySQL database and MySQL Cluster for your next web, cloud-based and other applications.
    5. Get hands-on experience with improving performance using the MySQL Performance Schema, with MySQL Utilities, MySQL Cluster and a lot more, in eight different Hands-On Labs.
    6. Express your ideas, engage in discussions and help influence the MySQL roadmap during Birds-of-a-Feather sessions about replication, backup, query optimizations and other topics.
    7. Meet partners and learn about third-party tools that could be useful in your architecture.
    8. Immerse yourself in the MySQL universe and hang out with MySQL experts for two days. The discussions as well as the relationships you create can be priceless and help you execute your next projects in a much better and faster way.
    9. Register now to save US$500 by taking advantage of the early bird discount running until July 13. We'll have parallel tracks, so you should consider sending a few team members to make the most of the event. Are you attending or planning to attend Oracle OpenWorld or JavaOne? You can add MySQL Connect to your registration for only US$100!
    10. Finally, it's always a lot of fun to attend a MySQL conference. The passion and the energy are contagious… and you'll likely get plenty of new ideas.

    You will find all information about the program in the MySQL Connect Content Catalog. We look forward to seeing you there! You can also read interviews with Tomas Ulin and Ronald Bradford about MySQL Connect. Sponsorship and exhibit opportunities are still available for the conference. You will find more information here.

    Read the article

  • This Task Is Currently Locked by a Running Workflow and Cannot Be Edited

    - by Jayant Sharma
    Problem: In SharePoint workflows, "This task is currently locked by a running workflow and cannot be edited" is a common exception.

    Solution: Generally this exception occurs in two situations.

    1. When the number of items in the Task List gets high.
    This exception says that the workflow is not able to deliver all the events at a given time, so the tasks get locked. Out of the box, the default event delivery throttle value is 15. The event delivery throttle value specifies the number of workflows that can be processed at the same time across all front-end Web servers (see http://blogs.msdn.com/b/vincent_runge/archive/2008/09/16/about-the-workflow-eventdelivery-throttle-parameter.aspx). If the value returned by the query is greater than the throttle (15 by default), any new workflow event will not be processed immediately, so we need to change the throttle with an stsadm command like:

        stsadm -o setproperty -pn workflow-eventdelivery-throttle -pv "20"

    (http://technet.microsoft.com/en-us/library/cc287939(office.12).aspx)

    2. When we modify a workflow task from a custom task edit page.
    This exception occurs when we try to modify the workflow task outside the workflow's default page, such as from a custom workflow task edit page. Suppose we have a custom task edit page with a dropdown whose values are Submitted / In Progress / Completed, and we want to complete the task from there. The exception is thrown by the SPWorkflowTask.AlterTask method, which changes the task status. When I debugged to find the root cause, I actually found that the workflow was not locked: the InternalState flag of the workflow did not include the Locked flag bits (http://msdn.microsoft.com/en-us/library/dd928318(v=office.12).aspx).

    Then I found this link: http://geek.hubkey.com/2007/09/locked-workflow.html. It is exactly what I wanted. It says that the error occurs "when the WorkflowVersion of the task list item is not equal to 1". The solution proposed there works fantastically:

        if ((int)task[SPBuiltInFieldId.WorkflowVersion] != 1)
        {
            SPList parentList = task.ParentList.ParentWeb.Lists[new Guid(task[SPBuiltInFieldId.WorkflowListId].ToString())];
            SPListItem parentItem = parentList.Items.GetItemById((int)task[SPBuiltInFieldId.WorkflowItemId]);
            SPWorkflow workflow = parentItem.Workflows[new Guid(task[SPBuiltInFieldId.WorkflowInstanceID].ToString())];
            if (!workflow.IsLocked)
            {
                task[SPBuiltInFieldId.WorkflowVersion] = 1;
                task.SystemUpdate();
                break;
            }
        }

    It resets the workflow version to 1 again.

    Conclusion: This exception is thoroughly confusing. We first need to find out whether the workflow is really locked or not. If it is really locked, use the first method. If not, check the workflow version and set it back to 1.

    Jayant Sharma

    Read the article

  • Oracle Optimized Solutions at Oracle OpenWorld 2012

    - by ferhatSF
    Have you registered for Oracle OpenWorld 2012 in San Francisco, September 30 to October 4? Visit the Oracle OpenWorld 2012 site today for registration and more information. Come join us to hear how Oracle Optimized Solutions can help you save money, reduce integration risks, and improve user productivity.

    Oracle Optimized Solutions are designed, pre-tested, tuned and fully documented architectures for optimal performance and availability. They provide written guidelines to help size, configure, purchase and deploy enterprise solutions that address common IT problems. Built with flexibility in mind, Oracle Optimized Solutions can be deployed as complete solutions or easily tailored to meet your specific needs - they are proven to save money, reduce integration risks and improve user productivity.

    Here is a preview of the planned Oracle OpenWorld sessions(*) on Oracle Optimized Solutions.

    Monday, October 1, 2012
    | Time | Session ID | Title | Location |
    | 12:15 PM | CON7916 | Accelerate Oracle E-Business Suite Deployment with SPARC SuperCluster | Moscone West - 2001 |
    | 03:15 PM | GEN9691 | General Session: Accelerate Your Business with the Oracle Hardware Advantage | Moscone North - Hall D |
    | 04:45 PM | CON4821 | Building a Flexible Enterprise Cloud Infrastructure on Oracle SPARC Systems | Moscone West - 2001 |

    Tuesday, October 2, 2012
    | Time | Session ID | Title | Location |
    | 10:15 AM | CON4561 | Backup-and-Recovery Best Practices with Oracle Engineered Systems Products | Moscone South - 252 |
    | 11:45 AM | CON3851 | Optimizing JD Edwards EnterpriseOne on SPARC T4 Servers for Best Performance | Moscone West - 2000 |
    | 01:15 PM | GEN11472 | General Session: Breakthrough Efficiency in Private Cloud Infrastructure | Moscone West - 3014 |
    | 01:15 PM | CON4600 | Extreme Storage Scale and Efficiency: Lessons from a 100,000-Person Organization | Moscone South - 252 |
    | 05:00 PM | CON9465 | Next-Generation Directory: Oracle Unified Directory | Moscone West - 3008 |
    | 05:00 PM | CON4088 | Accelerate Your SAP Landscape with the Oracle SPARC SuperCluster | Moscone West - 2001 |
    | 05:00 PM | CON7743 | High-Performance Security for Oracle Applications Using SPARC T4 Systems | Moscone West - 2000 |
    | 05:00 PM | CON3857 | Archive Strategies for 100 Percent Data Availability | Moscone South - 270 |

    Wednesday, October 3, 2012
    | Time | Session ID | Title | Location |
    | 10:15 AM | CON6528 | Configure Oracle Hybrid Columnar Compression to Optimize Query Database Performance up to 10x | Moscone South - 252 |
    | 11:45 AM | CON2590 | Breakthrough in Private Cloud Management on SPARC T-Series Servers | Moscone South - 270 |
    | 01:15 PM | CON4289 | Oracle Optimized Solution for Siebel CRM at ACCOR | Moscone West - 2000 |
    | 05:00 PM | CON7570 | Improve PeopleSoft HCM Performance and Reliability with SPARC SuperCluster | Moscone South - 252 |

    * Schedule subject to change

    In addition, Oracle Optimized Solutions Hands-On-Labs sessions are planned. Please enroll ahead of time, as space is limited.

    Oracle Optimized Solutions: Hands-On Labs at Oracle OpenWorld
    Place: Marriott Marquis - Salon 14/15
    | Date and Time | Session ID | Title |
    | Monday, October 1, 2012, 01:45 PM | HOL9868 | Enterprise Cloud Infrastructure for SPARC with Oracle Enterprise Manager Ops Center 12c |
    | Monday, October 1, 2012, 03:15 PM | HOL9907 | Oracle Virtual Desktop Infrastructure Performance and Tablet Mobility |
    | Wednesday, October 3, 2012, 05:00 PM | HOL9870 | x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance |
    | Thursday, October 4, 2012, 11:15 AM | HOL9869 | 0 to Database Backup and Recovery in 60 Minutes |

    Oracle Optimized Solutions executives and experts will also be on hand for discussions and follow-ups.

    And don't forget to catch live demonstrations of our complete Oracle Optimized Solutions while at Oracle OpenWorld 2012 in San Francisco. We recommend the use of the Schedule Builder tool to plan your visit to the conference and for pre-enrollment in sessions of your interest. We hope to see you there!

    Read the article

  • Blank Cacti Graphs

    - by tortib
    I'm running Ubuntu 14.04 LTS and I'm having an issue with Cacti 0.8.8b not displaying any data in graphs. The graphs are being created and I see files in /var/lib/cacti/rra. My crontab entry for root is the following:

        */1 * * * * sudo -u www-data php -q /usr/share/cacti/site/poller.php > /dev/null

    The output of ls -la /var/lib/cacti/rra is the following:

        # ls -la /var/lib/cacti/rra/
        total 1008
        drwxrwx--- 2 www-data www-data  4096 Aug 20 19:27 .
        drwxr-xr-x 3 www-data www-data  4096 Aug 17 01:41 ..
        -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:23 tortib_com_cpu_nice_34.rrd
        -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:24 tortib_com_cpu_system_35.rrd
        -rw-rw-r-- 1 www-data www-data 47992 Aug 20 19:25 tortib_com_cpu_user_36.rrd
        -rw-r--r-- 1 www-data www-data 94816 Aug 20 19:27 tortib_com_hdd_used_43.rrd
        -rw-r--r-- 1 www-data www-data 94816 Aug 20 19:23 tortib_com_hdd_used_44.rrd
        -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:27 tortib_com_load_15min_38.rrd
        -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:26 tortib_com_load_1min_37.rrd
        -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:23 tortib_com_load_5min_39.rrd
        -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:24 tortib_com_mem_buffers_40.rrd
        -rw-rw-r-- 1 www-data www-data 47992 Aug 20 19:25 tortib_com_mem_cache_41.rrd
        -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:26 tortib_com_mem_free_42.rrd
        -rw-r--r-- 1 www-data www-data 94816 Aug 20 19:24 tortib_com_traffic_in_45.rrd
        -rw-rw-r-- 1 www-data www-data 94816 Aug 20 19:25 tortib_com_traffic_in_46.rrd
        -rw-r--r-- 1 www-data www-data 94816 Aug 20 19:26 tortib_com_traffic_in_47.rrd
        -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:27 tortib_com_users_48.rrd

    I tried to run the poller as root from the command line, but it doesn't output anything useful, nor does it graph any data. The device in Cacti shows that it's able to query SNMP and that ping is alive. The graphs are still empty, though. Running

        snmpwalk 127.0.0.1 -v2c -c public

    works as it should; it walks all MIBs. I'm quite perplexed as to why this isn't working any longer. It was graphing data, but then it just stopped. And when it was graphing data, it was graphing it intermittently. Thank you for reading this problem and helping.

    Read the article

  • Vote of Disconfidence to Entity Framework

    - by Ricardo Peres
    A friend of mine has found the following problem with Entity Framework 4:

    - Two simple classes and one association between them (one to many)
    - One condition to filter out soft-deleted entities (WHERE Deleted = 0)
    - 100 records in the database
    - A simple query:

        var l = ctx.Person.Include("Address").Where(x => (x.Address.Name == "317 Oak Blvd." && x.Address.Number == 926) || (x.Address.Name == "891 White Milton Drive" && x.Address.Number == 497));

    This will produce the following SQL:

        SELECT
            [Extent1].[Id] AS [Id],
            [Extent1].[FullName] AS [FullName],
            [Extent1].[AddressId] AS [AddressId],
            [Extent202].[Id] AS [Id1],
            [Extent202].[Name] AS [Name],
            [Extent202].[Number] AS [Number]
        FROM [dbo].[Person] AS [Extent1]
        LEFT OUTER JOIN [dbo].[Address] AS [Extent2] ON ([Extent2].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent2].[Id])
        LEFT OUTER JOIN [dbo].[Address] AS [Extent3] ON ([Extent3].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent3].[Id])
        LEFT OUTER JOIN [dbo].[Address] AS [Extent4] ON ([Extent4].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent4].[Id])
        LEFT OUTER JOIN [dbo].[Address] AS [Extent5] ON ([Extent5].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent5].[Id])
        LEFT OUTER JOIN [dbo].[Address] AS [Extent6] ON ([Extent6].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent6].[Id])
        ...
        WHERE ((N'317 Oak Blvd.' = [Extent2].[Name]) AND (926 = [Extent3].[Number]))
        ...

    And it will result in 680 MB of memory being taken! Now, Entity Framework has been historically known for producing less than optimal SQL, but 680 MB for 100 entities?!

    According to Microsoft, the problem will be addressed in the following version; there is a Connect issue open. There is even a whitepaper, Performance Considerations for Entity Framework 5, which talks about some of the changes and optimizations coming in version 5, but reading it got me even more concerned: "Once the cache contains a set number of entries (800), we start a timer that periodically (once-per-minute) sweeps the cache." Say what?! The next version of Entity Framework will spawn timer threads?!

    When Code First came along, I thought it was a step in the right direction. Sure, it didn't include some things that NHibernate has done for quite some time - for example, different strategies for Id generation that do not rely on IDENTITY columns, which makes INSERT batching impossible, or support for enumerated types - but I thought these would come with time. Now, enumerated types have, but so did… timer threads! I'm afraid Entity Framework is becoming a monster.
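    For comparison, here is a minimal sketch of the SQL one might write by hand for the same query - a single join to Address with the soft-delete filter and both address conditions (table and column names are taken from the generated SQL above):

        SELECT p.[Id], p.[FullName], p.[AddressId],
               a.[Id] AS [Id1], a.[Name], a.[Number]
        FROM [dbo].[Person] AS p
        LEFT OUTER JOIN [dbo].[Address] AS a
          ON a.[Deleted] = 0 AND p.[AddressId] = a.[Id]
        WHERE (a.[Name] = N'317 Oak Blvd.' AND a.[Number] = 926)
           OR (a.[Name] = N'891 White Milton Drive' AND a.[Number] = 497);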

    Read the article

  • How to Name Linked Servers

    - by Bill Graziano
    I did another SQL Server migration over the weekend that dealt with linked servers. I've seen all kinds of odd naming schemes, and there are a few I like and a few I suggest you avoid.

    Don't name your linked server for its IP address. At some point whatever is on the other end of that IP address will move. You'll probably need to point your linked server to a new IP address but not change the name of the linked server, and then you've completely lost any context around it. Bonus points if a new SQL Server eventually ends up at the old IP address, further adding confusion when you're trying to troubleshoot.

    Don't name your linked server based on its instance name. This one is less obvious. It sounds nice to have a linked server named [VSRV1\SQLTRAN01]. You know what it is and it's easy to use. It's less nice when you've got 200 stored procedures that all reference this linked server but the database they reference has moved to a new instance. Now when you query this you're actually querying a different instance. (Please note: I'm not saying it's a good idea to have 200 stored procedures that all reference a linked server. I'm just saying it's not all that uncommon.)

    Consider naming your linked server something that you can easily search on. See my note above. You can also get around this by always enclosing the name in brackets. That is harder to enforce unless you use some odd characters in it.

    Consider naming your linked server based on the function. For example, I've had some luck having a linked server named [DW] that points to our data warehouse server. That server can change names or physically move, and all I need to do is update the linked server to point to the new destination. The descriptive name of the linked server is still accurate. No code needs to change and people still know what it is just by looking at it.

    Consider naming your linked server for the database. I'm still thinking through this one. It may mean you have multiple linked servers that point to the same instance. I've found that database names rarely change. It also makes it easier to move individual databases to new servers.

    Consider pointing your linked servers to DNS entries and not IP addresses. I've done this for reporting databases and had some success, especially for read-only snapshots that can get created on the main database or on the mirror.

    What issues have you had with linked server names? What has worked for you? Where are the holes in my approach?

    Read the article

  • Rapid Planning: Next Generation MRP

    - by john.bermudez
    MRP has been a mainstay of manufacturing systems for 40 years. MRP evolved from simple inventory planning systems to become the heart of the MRPII systems, which eventually became ERP. While the applications surrounding it have become broader, more sophisticated, and web-based, MRP continues to operate in the loneliness of the Saturday night batch window, quietly exploding bills of materials and logging exceptions for hours.

    During those same 40 years, manufacturing business processes have seen countless changes and improvements, including JIT, TQM, Six Sigma, Flow Manufacturing, Lean Manufacturing, and Supply Chain Management. Although much logic has been added to MRP to deal with new manufacturing processes, it has not been able to keep up with the real-time pace of today's supply chain. As a result, planners have devised ingenious ways to trick MRP into handling new processes, but often need to dump the output into spreadsheets of their own design in the hope of wrestling thousands of exceptions to ground.

    Oracle's new Rapid Planning application is just what companies still running MRP have been waiting for! The newest member of the Value Chain Planning product line, Rapid Planning is designed to empower planners with comprehensive supply planning that runs online in minutes, not hours. It enables a planner to simulate the incremental impact of a new order or to re-run an entire plan in a separate sandbox. Rapid Planning does a complete multi-level bill of material explosion like MRP, but plans orders considering material and capacity constraints. Considering material and capacity constraints in planning can help you quickly reduce inventory and improve on-time shipments.

    Rapid Planning is an APS application that leverages years of Oracle development experience and customer feedback. Rather than rely exclusively on black-box heuristics, Rapid Planning is designed to give planners the computing power to use their industry experience and business knowledge to improve MRP. For example, Rapid Planning has a powerful worksheet user interface with built-in query capability that allows the planner to locate the orders she is interested in and use a mass-update function to make quick work of large changes. The planner can save these queries and their personalized user interface layout to tailor their planning environment.

    Most importantly, Rapid Planning is designed to do supply planning in today's dynamic supply chain environment. It can be used to supplement MRP or to replace MRP entirely. It generates plans that provide order-by-order details with aggregate key performance indicators that enable planners to quickly assess the overall business impact of a plan. To find out more about how Rapid Planning can help improve your MRP, please contact me at [email protected] or your Oracle Account Manager.

    Read the article

  • Twitter API for Java - Hello Twitter Servlet (TOTD #178)

    - by arungupta
    There are a few Twitter APIs for Java that allow you to integrate Twitter functionality in a Java application. This is yet another API, built using the JAX-RS and Jersey stack. I started this effort earlier this year and kept delaying sharing it because I wanted to provide a more comprehensive API. But I've delayed enough, and I'm releasing it as a work-in-progress. I'm happy to take contributions in order to evolve this API and make it complete, useful, and robust. Drop a comment on the blog if you are interested or ping me at @arungupta.

    How do you get started? Just add the following to your "pom.xml":

    <dependency>
        <groupId>org.glassfish.samples</groupId>
        <artifactId>twitter-api</artifactId>
        <version>1.0-SNAPSHOT</version>
    </dependency>

    The implementation of this API uses the Jersey OAuth filters for authentication with Twitter, so the following dependencies are required for any API that requires authentication – which is pretty much all the APIs ;-)

    <dependency>
        <groupId>com.sun.jersey.contribs.jersey-oauth</groupId>
        <artifactId>oauth-client</artifactId>
        <version>${jersey.version}</version>
    </dependency>
    <dependency>
        <groupId>com.sun.jersey.contribs.jersey-oauth</groupId>
        <artifactId>oauth-signature</artifactId>
        <version>${jersey.version}</version>
    </dependency>

    Once the dependencies are added to your project, inject the Twitter API in your Servlet (or any other Java EE component) as:

    @Inject Twitter twitter;

    Here is a simple non-secure invocation of the API to get you started:

    SearchResults result = twitter.search("glassfish", SearchResults.class);
    for (SearchResultsTweet t : result.getResults()) {
        out.println(t.getText() + "<br/>");
    }

    This code returns the tweets that match the query "glassfish". The source code for the complete project can be downloaded here. Download it, unzip it, and mvn package will build the .war file. You can then deploy it on GlassFish or any other Java EE 6 compliant application server! The source code for the API also acts as the javadocs and can be checked out from here. A more detailed sample using security and several other APIs from this library is coming soon!
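    As a quick recap, a complete "Hello Twitter" servlet along those lines might look like the sketch below. The Twitter, SearchResults, and SearchResultsTweet types come from the twitter-api artifact described above; the class name, URL pattern, and package layout are illustrative assumptions on my part, not part of the library.

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.inject.Inject;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // "Hello Twitter" servlet: searches for tweets matching "glassfish"
    // and writes the text of each result to the response.
    @WebServlet(urlPatterns = "/hello-twitter")
    public class HelloTwitterServlet extends HttpServlet {

        @Inject
        Twitter twitter; // provided by the twitter-api dependency via CDI

        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            response.setContentType("text/html");
            PrintWriter out = response.getWriter();
            // Non-authenticated search call, exactly as in the snippet above
            SearchResults result = twitter.search("glassfish", SearchResults.class);
            for (SearchResultsTweet t : result.getResults()) {
                out.println(t.getText() + "<br/>");
            }
        }
    }

    Note that CDI injection into a servlet in a Java EE 6 web application requires an (empty) WEB-INF/beans.xml; with that in place, deploying the .war and browsing to /hello-twitter should print the matching tweets.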

    Read the article

  • Space partitioning when everything is moving

    - by Roy T.
    Background

    Together with a friend I'm working on a 2D game that is set in space. To make it as immersive and interactive as possible, we want there to be thousands of objects freely floating around, some clustered together, others adrift in empty space.

    Challenge

    To unburden the rendering and physics engine we need to implement some sort of spatial partitioning. There are two challenges we have to overcome. The first challenge is that everything is moving, so reconstructing/updating the data structure has to be extremely cheap since it will have to be done every frame. The second challenge is the distribution of objects: as said before, there might be clusters of objects together and vast bits of empty space, and to make it even worse there is no boundary to space.

    Existing technologies

    I've looked at existing techniques like BSP-Trees, QuadTrees, kd-Trees and even R-Trees, but as far as I can tell these data structures aren't a perfect fit, since updating a lot of objects that have moved to other cells is relatively expensive.

    What I've tried

    I made the decision that I need a data structure that is more geared toward rapid insertion/update than toward returning the smallest possible number of hits for a query. For that purpose I made the cells implicit, so each object, given its position, can calculate in which cell(s) it should be. Then I use a HashMap that maps cell coordinates to an ArrayList (the contents of the cell). This works fairly well since there is no memory lost on 'empty' cells and it's easy to calculate which cells to inspect. However, creating all those ArrayLists (worst case N) is expensive, and so is growing the HashMap a lot of times (although that is slightly mitigated by giving it a large initial capacity).

    Problem

    OK, so this works, but still isn't very fast. Now I can try to micro-optimize the Java code. However, I'm not expecting too much of that, since the profiler tells me that most time is spent in creating all those objects that I use to store the cells. I'm hoping that there are some other tricks/algorithms out there that make this a lot faster, so here is what my ideal data structure looks like:

    - The number one priority is fast updating/reconstructing of the entire data structure.
    - It's less important to finely divide the objects into equally sized bins; we can draw a few extra objects and do a few extra collision checks if that means updating is a little bit faster.
    - Memory is not really important (PC game).
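    For illustration, here is a minimal Java sketch of the implicit-cell spatial hash described above. Two assumptions go beyond the question itself: cell coordinates are packed into a single long key (so no key object is allocated per lookup), and emptied cell lists are recycled through a pool each frame instead of being re-created, which targets exactly the allocation cost the profiler flagged. All names are hypothetical.

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Implicit-cell spatial hash. Cells exist only while occupied, and the
    // ArrayLists backing them are pooled so a full rebuild every frame
    // does not churn the garbage collector.
    final class SpatialHash<T> {

        private final float cellSize;
        private final Map<Long, List<T>> cells = new HashMap<Long, List<T>>(4096);
        private final ArrayDeque<List<T>> pool = new ArrayDeque<List<T>>();

        SpatialHash(float cellSize) {
            this.cellSize = cellSize;
        }

        // Pack the two 32-bit cell coordinates into one long key, so no
        // coordinate object has to be created per lookup.
        private static long key(int cx, int cy) {
            return ((long) cx << 32) | (cy & 0xFFFFFFFFL);
        }

        // Add an object to the cell containing (x, y), reusing a pooled
        // list when the cell is first seen this frame.
        void insert(T obj, float x, float y) {
            long k = key((int) Math.floor(x / cellSize), (int) Math.floor(y / cellSize));
            List<T> cell = cells.get(k);
            if (cell == null) {
                cell = pool.isEmpty() ? new ArrayList<T>(8) : pool.pop();
                cells.put(k, cell);
            }
            cell.add(obj);
        }

        // Return the objects sharing the cell that contains (x, y).
        List<T> query(float x, float y) {
            List<T> cell = cells.get(key((int) Math.floor(x / cellSize),
                                         (int) Math.floor(y / cellSize)));
            return cell != null ? cell : Collections.<T>emptyList();
        }

        // Called once per frame before re-inserting everything: cleared
        // lists go back to the pool instead of becoming garbage.
        void clear() {
            for (List<T> cell : cells.values()) {
                cell.clear();
                pool.push(cell);
            }
            cells.clear();
        }
    }

    After the first few frames the pool reaches a steady state and clear()/insert() allocate almost nothing; the one remaining allocation per lookup is the boxed Long key, which a primitive-long keyed map from a collections library could remove as well.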

    Read the article
