Search Results

Search found 7827 results on 314 pages for 'counter cache'.

Page 164/314

  • How does an ISP or DNS server find the nameserver? [on hold]

    - by IT researcher
    I have read some articles about how DNS propagation happens. I know that an ISP or public DNS server (such as Google Public DNS) caches the IP address of a website, which it uses to resolve the domain name to an IP address. My doubt is: how do these ISP or DNS servers know which nameserver to query for a particular domain name? For example, domain.com has two nameservers, ns1.domain.com and ns2.domain.com. How does the ISP or DNS server know that it has to send its request to these nameservers, and where is this record maintained?

    Read the article

  • Trace File Source Adapter

    The Trace File Source adapter is a useful addition to your SSIS toolbox. It allows you to read 2005 and 2008 Profiler traces stored as .trc files and bring them into the Data Flow, where you can perform filtering and analysis using the power of SSIS. There is no need for a SQL Server connection; the component works directly from the trace file. Example usages include cache warming for SQL Server Analysis Services, reading the flight recorder, finding the longest running queries on a server, and analyzing statements for CPU and memory by user or some other criteria you choose. Properties: the Trace File Source adapter has two properties, which together control the source trace file that is read at runtime. SQL Server 2005 and SQL Server 2008 trace files are supported for both the Database Engine (SQL Server) and Analysis Services. The properties are managed by the editor form or can be set directly from the Properties Grid in Visual Studio. AccessMode (enumeration) determines how the Filename property is interpreted; the available values are DirectInput and Variable. Filename (string) holds the path of the trace file to load (*.trc); the value is either a full path or the name of a variable which contains the full path to the trace file, depending on the AccessMode property. Trace Column Definition: hopefully the majority of you can skip this section entirely, but if you encounter problems processing a trace file it may explain them and allow you to fix the problem. The component is built upon the trace management API provided by Microsoft. Unfortunately, the API methods that expose the schema of a trace file have known issues and are unreliable; put simply, the data often differs from what was specified. To overcome these limitations the component uses some simple XML files, which allow the trace column data types and sizing attributes to be overridden. For example, SQL Server Profiler or TMO generated structures define EventClass as an integer, but the real value is a string. TraceDataColumnsSQL.xml covers the SQL Server Database Engine trace columns, and TraceDataColumnsAS.xml covers the SQL Server Analysis Services trace columns. The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g. "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml" and "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml". If at runtime the component encounters a type conversion or sizing error, it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered. Whilst the most common issues have already been fixed through these files, we have implemented specific exception traps to direct you to the files so that you can fix any further issues caused by usage or data scenarios we have not tested. An example error that you can fix through these files is shown below: Buffer exception writing value to column 'Column Name'. The string value is 999 characters in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML files in "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml". Installation: the component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and installs the assemblies in the Global Assembly Cache as per Microsoft's recommendations.
    You may need to restart the SQL Server Integration Services service, as it caches information about which components are installed, as well as restart any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window. This process is described in detail in the related FAQ entry How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server service pack to your SQL Server servers and workstations. Please note that the Microsoft trace classes used in the component are not supported on 64-bit platforms. To use the Trace File Source on a 64-bit host you need to ensure that the 32-bit (x86) tools are available and that the way you execute your package is set up to use them; please see the help topic 64-bit Considerations for Integration Services for more details. Downloads: Trace Sources for SQL Server 2005 -- Trace Sources for SQL Server 2008. Version History: SQL Server 2008 Version 2.0.0.382 - SQL Server 2008 public release (9 Apr 2009). SQL Server 2005 Version 1.0.0.321 - SQL Server 2005 public release (18 Nov 2008).
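    As an aside, the same .trc files that this component reads can also be queried directly in T-SQL with the built-in fn_trace_gettable function, which is a handy way to sanity-check the "longest running queries" scenario listed above without building a package. A minimal sketch only (the file path is hypothetical, and Duration is reported in microseconds for 2005/2008 traces):

    -- Top 20 longest statements from a trace file (illustrative only)
    SELECT TOP (20)
           TextData,
           Duration / 1000 AS duration_ms,   -- microseconds to milliseconds
           CPU,
           Reads,
           Writes,
           StartTime
    FROM fn_trace_gettable(N'C:\Traces\example.trc', DEFAULT)
    WHERE EventClass IN (10, 12)             -- RPC:Completed, SQL:BatchCompleted
    ORDER BY Duration DESC;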

    Read the article

  • SQL SERVER – PAGEIOLATCH_DT, PAGEIOLATCH_EX, PAGEIOLATCH_KP, PAGEIOLATCH_SH, PAGEIOLATCH_UP – Wait Type – Day 9 of 28

    - by pinaldave
    It is very easy to say that you should replace your hardware because it is not up to the mark; in reality, it is very difficult to do. It is really hard to convince an infrastructure team to change any hardware just because it is not performing at its best. I once had a nightmare over this issue while dealing with an infrastructure team, when I suggested that they replace their faulty hardware, because initially they would not accept that the fault lay with their hardware. It is easy to say "Trust me, I am correct", but it is equally important that you put some logical reasoning behind the statement. PAGEIOLATCH_XX is one of those wait stats that we like to blame directly on the underlying subsystem. Of course, most of the time it is correct – the underlying subsystem is usually the problem. From Book On-Line: PAGEIOLATCH_DT Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Destroy mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_EX Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Exclusive mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_KP Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Keep mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_SH Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Shared mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_UP Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Update mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_XX Explanation: simply put, this wait type occurs when a task is waiting for data to move from disk into the buffer cache. Reducing PAGEIOLATCH_XX waits: just like any other wait type, this is a very challenging and interesting subject to resolve. Here are a few things you can experiment with: Improve your IO subsystem speed (read the first paragraph of this article if you have not already; I repeat that a step like this is easier to say than to actually implement). This type of wait can also be caused by memory pressure or other memory issues; putting aside a faulty IO subsystem, this wait type warrants proper analysis of the memory counters, because if for any reason memory is not optimal and is unable to receive the IO data, this situation can create this kind of wait. Proper placement of files is very important. We should check the file system for the proper placement of files – LDF and MDF on separate drives, TempDB on a separate drive, hot-spot tables in a separate filegroup (and on a separate disk), etc. Check the file statistics and see if there are high IO read and IO write stalls (see SQL SERVER – Get File Statistics Using fn_virtualfilestats). It is quite possible that there are no proper indexes on the system and there are lots of table scans and heap scans; creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can significantly reduce CPU, memory and IO (considering that the covering index has far fewer columns than the clustered table, with all the usual "it depends" conditions).
    You can refer to the two articles below, previously written by me, that talk about how to optimize indexes: Create Missing Indexes and Drop Unused Indexes. Updating statistics can help the Query Optimizer to render an optimal plan, whether directly or indirectly. I have seen that updating statistics with a full scan (again, if your database is huge and you cannot do this – never mind!) can provide optimal information to the SQL Server optimizer, leading to an efficient plan. Checking memory-related Perfmon counters: SQLServer: Memory Manager\Memory Grants Pending (consistently higher value than 0-2), SQLServer: Memory Manager\Memory Grants Outstanding (consistently higher value; benchmark), SQLServer: Buffer Manager\Buffer Cache Hit Ratio (higher is better, greater than 90% for a usually smooth-running system), SQLServer: Buffer Manager\Page Life Expectancy (consistently lower value than 300 seconds), Memory: Available Mbytes (information only), Memory: Page Faults/sec (benchmark only), Memory: Pages/sec (benchmark only). Checking disk-related Perfmon counters: Average Disk sec/Read (a consistently higher value than 4-8 milliseconds is not good), Average Disk sec/Write (a consistently higher value than 4-8 milliseconds is not good), Average Disk Read/Write Queue Length (a consistently higher value than the benchmark is not good). Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All of the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
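    To put numbers behind the discussion above, a quick way to see whether PAGEIOLATCH waits and file-level IO stalls actually dominate on your instance is to query the wait and file statistics DMVs. This is a minimal sketch only; it uses sys.dm_io_virtual_file_stats, the DMV counterpart of the fn_virtualfilestats function referenced above:

    -- How much time has this instance spent on PAGEIOLATCH waits?
    SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type LIKE 'PAGEIOLATCH%'
    ORDER BY wait_time_ms DESC;

    -- Per-file read/write stalls since the last restart (all databases)
    SELECT DB_NAME(database_id) AS database_name,
           file_id,
           num_of_reads,
           io_stall_read_ms,
           num_of_writes,
           io_stall_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL)
    ORDER BY io_stall_read_ms + io_stall_write_ms DESC;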

    Read the article

  • Data Web Controls Enhancements in ASP.NET 4.0

    Traditionally, developers using Web controls enjoyed increased productivity but at the cost of control over the rendered markup. For instance, many ASP.NET controls automatically wrap their content in a <table> for layout or styling purposes. This behavior runs counter to the web standards that have evolved over the past several years, which favor cleaner, terser HTML; sparing use of tables; and Cascading Style Sheets (CSS) for layout and styling. Furthermore, the <table> elements and other automatically-added content make it harder both to style the Web controls using CSS and to work with the controls from client-side script. One of the aims of ASP.NET version 4.0 is to give Web Forms developers greater control over the markup rendered by Web controls. Last week's article, Take Control Of Web Control ClientID Values in ASP.NET 4.0, highlighted how new properties in ASP.NET 4.0 give the developer more say over how a Web control's ID property is translated into a client-side id attribute. In addition to these ClientID-related properties, many Web controls in ASP.NET 4.0 include properties that allow the page developer to instruct the control not to emit extraneous markup, or to use an HTML element other than <table>. This article explores a number of enhancements made to the data Web controls in ASP.NET 4.0. As you'll see, most of these enhancements give the developer greater control over the rendered markup. Read on to learn more! Read More >

    Read the article

  • FAQ and Best Tips Regarding Learning Databases?

    - by AdityaGameProgrammer
    For a programmer with no prior exposure to databases, what would be a good database to learn: Oracle vs SQL Server vs MySQL vs PostgreSQL? I have come across a lot of discussion of MySQL and PostgreSQL and frankly I am confused about which to start with. Are these very different, in the sense that if one had to switch, would the exposure to one be counter-productive to learning the other? Is working with databases heavily platform dependent? What exactly do people mean by database programming vs. administration? Do people choose databases based on the programming language used for the application being developed? In general, when working with databases, is it implicit that we work with some server? Does the choice of database differ when it comes to game development? If so, what factors does it differ by? What are the best tips that you have found to be useful when learning databases? Edit: Some FAQs I had, for which I found the same questions on Stack Overflow: What should every developer know about databases? Which database if learning from scratch in 2010? For a beginner, is there much difference between MySQL and PostgreSQL? What RDBMS should I learn/use? (MySql/SQL Server/Oracle etc.) To what extent should a developer learn database? How are database programmers different from other programmers? What kind of databases are used in games?

    Read the article

  • Ubuntu Sudo apt-get -f install

    - by Justin
    I was trying to install a program, and it said that my dependencies were unmet and that I should run sudo apt-get -f install. I have moved everything I didn't need in /etc/apt/sources.list.d/ into the trash. My sources.list is all Natty while I am running Oneiric, so maybe I need a new sources.list? Here is what I have: justin@justin-000:~$ sudo apt-get -f install [sudo] password for justin: Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: linux-image-3.0.0-13-generic Suggested packages: fdutils linux-doc-3.0.0 linux-source-3.0.0 linux-tools The following NEW packages will be installed: linux-image-3.0.0-13-generic 0 upgraded, 1 newly installed, 0 to remove and 3 not upgraded. 2 not fully installed or removed. Need to get 0 B/36.5 MB of archives. After this operation, 117 MB of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 270736 files and directories currently installed.) Unpacking linux-image-3.0.0-13-generic (from .../linux-image-3.0.0-13-generic_3.0.0-13.22_i386.deb) ... Done. dpkg: error processing /var/cache/apt/archives/linux-image-3.0.0-13-generic_3.0.0-13.22_i386.deb (--unpack): corrupted filesystem tarfile - corrupted package archive No apport report written because MaxReports is reached already dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.0.0-13-generic /boot/vmlinuz-3.0.0-13-generic run-parts: executing /etc/kernel/postrm.d/zz-extlinux 3.0.0-13-generic /boot/vmlinuz-3.0.0-13-generic P: Checking for EXTLINUX directory... found. P: Writing config for /boot/vmlinuz-3.0.0-12-generic... P: Writing config for /boot/vmlinuz-2.6.38-11-generic... P: Installing debian theme... done.
    run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.0.0-13-generic /boot/vmlinuz-3.0.0-13-generic Errors were encountered while processing: /var/cache/apt/archives/linux-image-3.0.0-13-generic_3.0.0-13.22_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) justin@justin-000:~$ sudo apt-get update justin@justin-000:~$ sudo apt-get update Ign http://dl.google.com stable InRelease Ign http://dl.google.com stable InRelease Get:1 http://dl.google.com stable Release.gpg [198 B] Ign http://us.archive.ubuntu.com oneiric InRelease Ign http://us.archive.ubuntu.com oneiric-security InRelease Ign http://us.archive.ubuntu.com oneiric-updates InRelease Get:2 http://dl.google.com stable Release.gpg [198 B] Get:3 http://dl.google.com stable Release [1,347 B] Get:4 http://dl.google.com stable Release [1,338 B] Hit http://us.archive.ubuntu.com oneiric Release.gpg Hit http://us.archive.ubuntu.com oneiric-security Release.gpg Get:5 http://dl.google.com stable/main i386 Packages [1,220 B] Hit http://us.archive.ubuntu.com oneiric-updates Release.gpg Ign http://dl.google.com stable/main TranslationIndex Get:6 http://dl.google.com stable/main i386 Packages [464 B] Ign http://ppa.launchpad.net oneiric InRelease Hit http://us.archive.ubuntu.com oneiric Release Ign http://ppa.launchpad.net oneiric InRelease Ign http://dl.google.com stable/main TranslationIndex Hit http://ppa.launchpad.net oneiric Release.gpg Hit http://us.archive.ubuntu.com oneiric-security Release Hit http://ppa.launchpad.net oneiric Release.gpg Hit http://us.archive.ubuntu.com oneiric-updates Release Hit http://us.archive.ubuntu.com oneiric/main Sources Hit http://us.archive.ubuntu.com oneiric/restricted Sources Hit http://us.archive.ubuntu.com oneiric/universe Sources Hit http://us.archive.ubuntu.com oneiric/multiverse Sources Hit http://us.archive.ubuntu.com oneiric/main i386 Packages Hit http://ppa.launchpad.net oneiric Release Hit http://us.archive.ubuntu.com oneiric/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric/multiverse i386 Packages Hit http://us.archive.ubuntu.com oneiric/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric/restricted TranslationIndex Hit http://us.archive.ubuntu.com oneiric/universe TranslationIndex Hit http://us.archive.ubuntu.com oneiric-security/main Sources Hit http://us.archive.ubuntu.com oneiric-security/restricted Sources Hit http://ppa.launchpad.net oneiric Release Hit http://us.archive.ubuntu.com oneiric-security/universe Sources Hit http://us.archive.ubuntu.com oneiric-security/multiverse Sources Hit http://us.archive.ubuntu.com oneiric-security/main i386 Packages Hit http://us.archive.ubuntu.com oneiric-security/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric-security/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric-security/multiverse i386 Packages Hit http://ppa.launchpad.net oneiric/main Sources Hit http://ppa.launchpad.net oneiric/main i386 Packages Ign http://ppa.launchpad.net oneiric/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric-security/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric-security/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric-security/restricted TranslationIndex Hit http://us.archive.ubuntu.com oneiric-security/universe TranslationIndex Ign http://dl.google.com stable/main Translation-en_US Hit http://us.archive.ubuntu.com oneiric-updates/main Sources Hit http://us.archive.ubuntu.com oneiric-updates/restricted Sources Hit http://us.archive.ubuntu.com oneiric-updates/universe Sources Hit http://us.archive.ubuntu.com oneiric-updates/multiverse Sources Hit http://us.archive.ubuntu.com oneiric-updates/main i386 Packages Ign http://dl.google.com stable/main Translation-en Hit http://ppa.launchpad.net oneiric/main Sources Hit http://ppa.launchpad.net oneiric/main i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/restricted i386 Packages Ign http://dl.google.com stable/main Translation-en_US Ign http://ppa.launchpad.net oneiric/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/multiverse i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/restricted TranslationIndex Ign http://dl.google.com stable/main Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/universe TranslationIndex Hit http://us.archive.ubuntu.com oneiric/main Translation-en Hit http://us.archive.ubuntu.com oneiric/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric/universe Translation-en Hit http://us.archive.ubuntu.com oneiric-security/main Translation-en Hit http://us.archive.ubuntu.com oneiric-security/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric-security/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric-security/universe Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/main Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/universe Translation-en Ign http://ppa.launchpad.net oneiric/main Translation-en_US Ign http://ppa.launchpad.net oneiric/main Translation-en Ign http://ppa.launchpad.net oneiric/main Translation-en_US Ign http://ppa.launchpad.net oneiric/main Translation-en Fetched 4,765 B in 2s (2,158 B/s) Reading package lists... Done justin@justin-000:~$

    Read the article

  • What is New in ASP.NET 4 Web Development Overview

    - by Aamir Hasan
    Microsoft recently introduced Visual Studio 2010, which has a number of new features; some of them are listed below. ASP.NET 4.0 has a focus on performance and Search Engine Optimization. I'll be taking a look at what I think are the most important new features in ASP.NET 4: output cache extensibility, session state compression, view state mode for individual controls, the Page.MetaKeywords and Page.MetaDescription properties, the Response.RedirectPermanent method, routing in ASP.NET, increased URL character length, new syntax for HTML encoding, predictable client IDs, web.config file refactoring, auto-start ASP.NET applications, and improvements to the Microsoft Ajax Library. Reference: ASP.NET 4 and Visual Studio 2010 Web Development Overview

    Read the article

  • Expressive C++ No. 1: Introduction, an article by Eric Niebler translated by Timothée Bernard

    Hidden inside C++ lies another language - countless other languages, in fact - each of them better than C++ at solving certain kinds of problems. These domain-specific languages (DSLs for short) are, for example, languages for linear algebra or query languages; they can do only one thing, but they do it well. You can create and use these languages directly in C++, using the power and flexibility of C++ to replace the general-purpose parts of the language with the parts specific to the domain you are working in. In this series of articles, Eric Niebler takes a close look at domain-specific languages, the areas in which they are useful, and how they can easily be implemented in C++ with the help of

    Read the article

  • CodePlex Daily Summary for Wednesday, June 09, 2010

    CodePlex Daily Summary for Wednesday, June 09, 2010New Projects.NET Transactional File Manager: Transactional File Manager is a .NET API that supports including file system operations such as file copy, move, delete in a transaction. It's an i...3D World Studio Content Pipeline for Windows Phone 7: This is a port of PhotonicGames' project: http://xna3dws.codeplex.com/releases/view/42994 for the Windows Phone 7 tools (XNA 4.0 CTP).Advanced Script Editor for 3D Rad: Advanced Script Editor makes it easier for 3D Rad coders to write scripts. Developed in C#, it features a functions list, a favourites list, object...Ajax ASP.Net Forum: A fast & lightweight open source free forum developed in ASP.Net 3.5, AJAX, CSS, SQL & Javascript Cache (filter-sort-move through table records at ...Axon: Axon is the home automation system that I will be running in my home. It will be a collection of different technologies and projects, often experim...BigBallz: Projeto de site de Bolões para campeonatos diversos. A princípio pensado para copa do mundo de futebol de 2010BigfootMVC: MVC Framework for DotNetNukeBigfootSQL: A StringBuilder for SQL. BigfootSQL was built with simplicity in mind. It assumes that you are comfortable writing SQL but dislike effort required ...Bxf (Basic XAML Framework): Basic Xaml Framework (Bxf) is a simple, streamlined set of UI components designed to demonstrate the minimum framework functionality required to ma...elZerf - elektronische Zeiterfassung: elektronisches Zeiterfassungsystem im Rahmen der Seminararbeit im Modul Web-Anwendungsprogrammierung.IntoFactories.Net - Samples: Project to host samples created by members of the IntoFactories.NET Team blog.Lanchonete: Sistema para controle de lanchonetes. Medieval Dynasties: Medieval Dynasties is a game written in C# 3.5 and XNA 3.1 at the moment. It is inspired by Crusader Kings, Total War and Civilization.PMMsg: A project to replace the standard messaging client on the Windows Mobile platform. Mainly geared towards Windows Mobile 6.5.3 VGA devices. Also an...PunkPong: PunkPong is an open source "Pong" alike game totally written in DHTML (JavaScript, CSS and HTML) that uses keyboard or mouse. This cross-platform a...Renegade Legion Fighter Calculator: In working on assigning fighters to squadrons, flights, and groups for a campaign, I was struck by the sheer amount of calculations I had to make. ...Sharpotify - Spotify .Net Library: Sharpotify is a Spotify library in C#. It is based in Jotify and SharPot projects. It is not a libspotify wrapper, It is a full .Net Spotify protoc...Silverlight load on demand with MEF: With MEF, a Silverlight control can be split in several packages(xap files). Each package can contain one or more pages and it will download on dem...SOLID by example: Source code examples to undestood solid design principles. Most of them were taken from http://www.lostechies.com/SQL Server 2008 Reporting Services RS.EXE Supporting Forms Authentication: A version of RS.EXE that you can use with Forms Authentication in Native Mode. Use the following arguments to specify credentials (just like Basic ...Stripper: Stripper Remove Diacritics and other unwanted caracter to fabric a more standardized file naming.study: studyUncoverPIC: UncoverPIC is a Silverlight Game strongly inspired to the famous Arcade Game "GalsPanic" (see http://en.wikipedia.org/wiki/Gals_Panic ). 
It was dev...Unity3D Untitled MMO: Unity3D Untitled MMO FrameworkUnnamedShop: UnnamedShopXBStudio.asp.net.automation: A Unit Testing Automation library for asp.netXBStudio.Web: XBStudio Web ApplicationNew Releases3D World Studio Content Pipeline for Windows Phone 7: Initial Release (0.1): This is the first release of the project, with plenty of hackery and kludges to go around, but it mostly works! Let me know if you hit any bugs.Acies: Acies - Alpha Build 0.0.10: Alpha release. Requires Microsoft XNA Framework Redistributable 3.1 (http://www.microsoft.com/downloads/details.aspx?FamilyID=53867a2a-e249-4560-...Advanced Script Editor for 3D Rad: Advanced Script Editor - Version 2.6: Despite various previous releases on the 3D Rad forum, this is the first release on CodePlex.Ajax ASP.Net Forum: First Release: First Release prior to CodePlex Publish (send to admins)So, it doesn't all finish VERSION: 0.1.2 FEATURES Main Home Where all the Forums (called ...Artist Follower for Microsoft Access: Artist Follower 0.5.1: Artists Follower changes: Just one form to manage artists and links!!!Artist Follower for Microsoft Access: Artists Follower 0.5.0: This is the first release of Artist Follower.ASP.NET MVC SiteMap provider - MvcSiteMapProvider: MvcSiteMapProvider 2.0.0 CTP1: This is a community technology preview of MvcSiteMapProvider version 2.0. It is not backwards compatible with older MvcSiteMapProvider versions. ...B&W Port Scanner: Black`n`White Port Scanner 4.0: Version 4 includes: - Improved vulnerability detection tools - Report Creation - Improved Stability - Much better port information database - Nume...BaseCalendar: BaseControls 1.1: BaseControls 1.1 contains the BaseCalendar ASP.NET control. Changes: Rendering TH by default inside THEAD. Added option (ShowMinNumWeeks) to r...BigfootSQL: BigfootSQL Source Code: BigfootSQL C# Version 01Commerce Server 2009 Orders using Pipelines in a Console Application: ConsoleApplication To PLace Orders: ConsoleApplication To PLace Orders with Commerce Server 2009 foundationCommunity Forums NNTP bridge: Community Forums NNTP Bridge V33: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...Community Forums NNTP bridge: Community Forums NNTP Bridge V34: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...ContainerOne - C# application server: V0.1.2.0: New minor release containing: Integration test component for runtime testing Refactored and cleaned solution files First unit testsExtend SmallBasic: Teaching Extensions v.020: Moved Tortoise.approve to ProgramWindow.TakeScreenShot()fleet It: v0.06 Alpha: v0.06 Alpha - Features Caching implemented for fleets Various Bug fixes Implemented Settings. Resolved logical issue with Getting fleets U...Frotz.NET: Frotz.NET B2: In addition to B1 changes: - Added ZTools to enable debugging view of zcode files - Added rudimentary scroll back buffer. 
B1 Changes: - Got Adapt...FsObserver: FsObserver 2.0: This is basically the same as FsObserver 1.0 but the "-help" documentation has been cleaned up somewhat and the code has been refactored so that it...GPdotNET - Genetic Programming Tool: GPdotNETv1.0: GPdotNET v.1.0 - more details on http.bhrnjica.wordpress.com/gpdotnetHERB.IQ: Alpha 0.1 Source code release 8: Alpha 0.1 Source code release 8imdb movie downloader: myImdb 0.9.3: myImdb 0.9.3imdb movie downloader: myImdb 0.9.4: myImdb 0.9.4jccc .NET smart framework: jccc .NET smart framework version 1.2010.06.07: jccc .NET smart framework version 1.2010.06.07 added oracle databases supportLongBar: LongBar 2.1 Build 313: - Fixed library and updates to work with updated live services - Options: You can disable shadow nowMDownloader: MDownloader-0.15.17.59623: Fixed FileFactory provider. Improvied postpone policies. Added network request limiter.MediaCoder.NET: MediaCoder.NET v1.0 beta 1.1: Installer for MediaCoder.NET v1.0 beta1.1. It can now convert files with spaces in the path or filename. I have also created filter for the SaveFil...MediaCoder.NET: MediaCoder.NET v1.0 beta 1.1 Source Code: Source Code for MediaCoder.NET v1.0 beta 1.1.mesoBoard: mesoBoard - 0.9.1 beta: Fixed file download permissions Released under the New BSD License.MPCLI: Alpha Release (0.1.0.0): This release has core functionality and is considered a potential candidate for a feature complete release of this library. However, suggestions fo...N2 CMS: 2.0: N2 is a lightweight CMS framework for ASP.NET. It helps professional developers build great web sites that anyone can update. Major Changes (1.5 -...NHTrace: NHTrace-47571: NHTrace-47571NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Class Libraries, version 1.0.1.125: The NodeXL class libraries can be used to display network graphs in .NET applications. To include a NodeXL network graph in a WPF desktop or Windo...NSoup: NSoup 0.2: NSoup 0.2 corresponds to jsoup version 1.1.1. List of changes can be viewed here.Opalis Community Releases: Integration Pack for Data Manipulation: The Integration Pack for Data Manipulation enables you to perform a wider variety of data manipulation tasks as well as aggregate data into common ...Performance Analysis of Logs (PAL) Tool: PAL v2.0 Beta 1: Fixed Counter Sorting: Fixed a minor bug where duplicate counter expression paths were not being removed. Analysis Added: Added LogicalDisk Read/...RoTwee: RoTwee (12.0.0.0): Trial version. 17925 Make it possible to change window sizeSharpotify - Spotify .Net Library: Sharpotify.Library 1.0: Sharpotify Library: Stable release. You can connect with spotify, search, browse (tracks, albums, artists), get a music stream, create and edit you...Silverlight load on demand with MEF: mal.Web.Silverlight.MEF 1.0.0.0: mal.Web.Silverlight.MEF 1.0.0.0sMAPtool: sMAPtool v0.7e (without Maps): + Added: color value expansion bar for hmap (right click to select color scheme) + Added: more complex hmap editing, uses now 4 point bounding rect...SQL Server 2008 Reporting Services RS.EXE Supporting Forms Authentication: Initial release: Enjoy!Stripper: Stripper 0.1.1 (CLi): Stripper Remove Diacritics and other unwanted caracters to fabric a more standardized file naming. 
Especially French caracter and maybe other lang...Unity3D Untitled MMO: v1: versionUrzaGatherer: UrzaGatherer 2.01a: New version with some minors bugs corrected.VCC: Latest build, v2.1.30608.0: Automatic drop of latest buildVCC: Latest build, v2.1.30608.1: Automatic drop of latest buildWatermarker.NET: 0.1.3811: A newer version with some improvements. I release this as a .zip archive, because settings are added here, so there will be .exe and .config files.Yet Another GPS: Alfa Release: Alfa working releaseMost Popular ProjectsDozer Enterprise Library for .NETEmployee Management SystemWiiMote PhysicsVisualStudio 2010 JavaScript OutliningSpider CompilerConcurrent CacheOil Slick Live FeedsCSUFVGDC Summer JamWinGetSiteMap Utility for DNN Blog ModuleMost Active ProjectsCommunity Forums NNTP bridgepatterns & practices – Enterprise LibraryRhyduino - Arduino and Managed CodejQuery Library for SharePoint Web ServicesRawrNB_Store - Free DotNetNuke Ecommerce Catalog ModuleAndrew's XNA HelpersBlogEngine.NETStyleCopCustomer Portal Accelerator for Microsoft Dynamics CRM

    Read the article

  • Getting a Web Resource Url in non WebForms Applications

    - by Rick Strahl
    WebResources in ASP.NET are a pretty useful feature. WebResources are resources that are embedded into a .NET assembly and can be loaded from the assembly via a special resource URL. WebForms includes a method on the ClientScriptManager (Page.ClientScript) and the ScriptManager object to retrieve URLs to these resources. For example you can do: ClientScript.GetWebResourceUrl(typeof(ControlResources), ControlResources.JQUERY_SCRIPT_RESOURCE); GetWebResourceUrl requires a type (which is used for the assembly lookup in which to find the resource) and the resource id to look up. GetWebResourceUrl() then returns a nasty old long URL like this: WebResource.axd?d=-b6oWzgbpGb8uTaHDrCMv59VSmGhilZP5_T_B8anpGx7X-PmW_1eu1KoHDvox-XHqA1EEb-Tl2YAP3bBeebGN65tv-7-yAimtG4ZnoWH633pExpJor8Qp1aKbk-KQWSoNfRC7rQJHXVP4tC0reYzVw2&t=634533278261362212 While lately excessive resource usage has been frowned upon, especially by MVC developers who tend to opt for content distributed as files, I still think that Web Resources have their place even in non-WebForms applications. Also, if you have existing assemblies that include resources like scripts and common image links, it sure would be nice to access them from non-WebForms pages like MVC views or even plain old Razor Web Pages. Where's my Page object, Dude? Unfortunately ASP.NET doesn't natively have a mechanism for retrieving WebResource Urls outside of the WebForms engine. It's a feature that's specifically baked into WebForms and that relies specifically on the Page HttpHandler implementation. Both Page.ClientScript (obviously) and ScriptManager rely on a hosting Page object in order to work, and the various methods off these objects require control instances to be passed. The reason for this is that the script managers can inject scripts and links into Page content (think RegisterXXXX methods) and for that a Page instance is required. However, for many other methods - like GetWebResourceUrl() - that simply return resources or resource links, the Page reference is really irrelevant. While there's a separate ClientScriptManager class, it's marked as sealed and doesn't have any public constructors, so you can't create your own instance (without Reflection). Even if it did, the internal constructor it does have requires a Page reference. No good… So, can we get access to a WebResourceUrl generically without running in a WebForms Page instance? We just have to create a Page instance ourselves and use it internally. There's nothing intrinsic about the use of the Page class in ClientScript, at least for retrieving resources and resource Urls, so it's easy to create an instance of a Page, for example in a static method. For our needs of retrieving ResourceUrls or even actually retrieving script resources, we can use a canned, non-configured Page instance we create on our own. The following works just fine: public static string GetWebResourceUrl(Type type, string resource ) { Page page = new Page(); return page.ClientScript.GetWebResourceUrl(type, resource); } A slight optimization for this might be to cache the created Page instance.
    Page tends to be a pretty heavy object to create each time a URL is required, so you might want to cache the instance: public class WebUtils { private static Page CachedPage { get { if (_CachedPage == null) _CachedPage = new Page(); return _CachedPage; } } private static Page _CachedPage; public static string GetWebResourceUrl(Type type, string resource) { return CachedPage.ClientScript.GetWebResourceUrl(type, resource); } } You can now use GetWebResourceUrl in a Razor page like this: <!DOCTYPE html> <html> <head> <script src="@WebUtils.GetWebResourceUrl(typeof(ControlResources),ControlResources.JQUERY_SCRIPT_RESOURCE)"> </script> </head> <body> <div class="errordisplay"> <img src="@WebUtils.GetWebResourceUrl(typeof(ControlResources),ControlResources.WARNING_ICON_RESOURCE)" /> This is only a Test! </div> </body> </html> And voila - there you have WebResources served from a non-Page based application. WebResources may be on the way out, but legacy apps have them embedded, and for some situations, like fallback scripts and some common image resources, I still like to use them. Being able to use them from non-WebForms applications should have been built into the core ASP.NET platform IMHO, but seeing that it's not, this workaround is easy enough to implement. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, MVC

    Read the article

  • SQL SERVER – PAGELATCH_DT, PAGELATCH_EX, PAGELATCH_KP, PAGELATCH_SH, PAGELATCH_UP – Wait Type – Day 12 of 28

    - by pinaldave
    This is another common wait type. However, I still frequently see people getting confused between the PAGEIOLATCH_X and PAGELATCH_X wait types. Actually, there is a big difference between the two. PAGEIOLATCH is related to IO issues, while PAGELATCH is not related to IO issues but is oftentimes linked to a buffer issue. Before we delve deeper into this interesting topic, let us first understand what a latch is. Latches are internal SQL Server locks which can be described as very lightweight and short-term synchronization objects. Latches are not primarily there to protect pages being read from disk into memory; a latch is a synchronization object for any in-memory access to any portion of a log or data file. [Updated based on a comment from Paul Randal] The difference between locks and latches is that locks seal all the involved resources throughout the duration of the transaction (and other processes will have no access to the object), whereas a latch locks the resource only during the time when the data is changed. This way, a latch is able to maintain the integrity of the data between the storage engine and the data cache. A latch is a short-lived lock that is put on resources in the buffer cache and on the physical disk when data is moved in either direction; as soon as the data is moved, the latch is released. Now, let us understand the wait stat types related to latches. From Book On-Line: PAGELATCH_DT Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Destroy mode. PAGELATCH_EX Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Exclusive mode. PAGELATCH_KP Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Keep mode. PAGELATCH_SH Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Shared mode. PAGELATCH_UP Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Update mode. PAGELATCH_X Explanation: this wait type shows up when there is contention for access to in-memory pages. It is quite possible that some of the pages in memory are in very high demand; for SQL Server to access them and put a latch on them, it will have to wait, and this wait type is usually created at that time. Additionally, it is commonly visible when TempDB has high contention as well. If there are indexes that are heavily used, contention can be created as well, leading to this wait type. Reducing PAGELATCH_X waits: the following counters are useful for understanding the status of PAGELATCH: Average Latch Wait Time (ms): the wait time for latch requests that have to wait. Latch Waits/sec: the number of latch requests that could not be granted immediately. Total Latch Wait Time (ms): the total latch wait time for latch requests in the last second. If there is TempDB contention, I suggest that you read the blog post of Robert Davis right away. He has written an excellent blog post about how to find TempDB contention, and the same post explains the terms used in the allocation of GAM, SGAM and PFS. If there is TempDB contention, Paul Randal explains the optimal settings for TempDB in his misconceptions series. Trace Flag 1118 can be useful, but use it very carefully.
    I totally understand that this blog post is not as clear as my other blog posts, so if this wait type is one of your higher wait types, do leave a comment or send me an email and I will get back to you with my solution for your situation. Looking at all the other wait stats and types together can be effective, as this wait type can help point to the proper bottleneck in your system. Read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
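    For reference, the Perfmon counters listed above are also exposed inside SQL Server, so you can check them together with the PAGELATCH wait numbers from a query window. A minimal sketch, not a tuning script, assuming the standard counter names:

    -- Latch counters, the DMV view of the Perfmon counters mentioned above
    SELECT counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE object_name LIKE '%Latches%'
      AND counter_name IN ('Average Latch Wait Time (ms)',
                           'Latch Waits/sec',
                           'Total Latch Wait Time (ms)');

    -- PAGELATCH contention as recorded in the wait statistics DMV
    SELECT wait_type, waiting_tasks_count, wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type LIKE 'PAGELATCH%'
    ORDER BY wait_time_ms DESC;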

    Read the article

  • Installing gtk-config and/or fsv in Ubuntu 10.10

    - by Wayne Werner
    Hi, I'm trying to install the File System Visualizer (think "It's a UNIX System! I know this!" from Jurassic Park) on Ubuntu 10.10. I've got the .tar.gz downloaded, and extracted. However, when I ./configure, I get this output: loading cache ./config.cache checking for a BSD compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking whether make sets ${MAKE}... yes checking for working aclocal... found checking for working autoconf... found checking for working automake... found checking for working autoheader... found checking for working makeinfo... missing checking for gcc... gcc checking whether the C compiler (gcc ) works... yes checking whether the C compiler (gcc ) is a cross-compiler... no checking whether we are using GNU C... yes checking whether gcc accepts -g... yes checking how to run the C preprocessor... gcc -E checking for ranlib... ranlib checking for POSIXized ISC... no checking for dirent.h that defines DIR... yes checking for opendir in -ldir... no checking for ANSI C header files... yes checking whether time.h and sys/time.h may both be included... yes checking for strings.h... yes checking for sys/time.h... yes checking for unistd.h... yes checking for working const... yes checking for mode_t... yes checking for uid_t in sys/types.h... yes checking for pid_t... yes checking for size_t... yes checking for comparison_fn_t... yes checking for st_blocks in struct stat... yes checking whether struct tm is in sys/time.h or time.h... time.h checking for working alloca.h... yes checking for alloca... yes checking for working fnmatch... yes checking for strftime... yes checking for getcwd... yes checking for gettimeofday... yes checking for mktime... yes checking for strcspn... yes checking for strdup... yes checking for strspn... yes checking for strtod... yes checking for strtoul... yes checking for scandir... yes checking for inline... inline checking for off_t... yes checking for unistd.h... (cached) yes checking for getpagesize... yes checking for working mmap... yes checking for argz.h... yes checking for limits.h... yes checking for locale.h... yes checking for nl_types.h... yes checking for malloc.h... yes checking for string.h... yes checking for unistd.h... (cached) yes checking for sys/param.h... yes checking for getcwd... (cached) yes checking for munmap... yes checking for putenv... yes checking for setenv... yes checking for setlocale... yes checking for strchr... yes checking for strcasecmp... yes checking for strdup... (cached) yes checking for __argz_count... yes checking for __argz_stringify... yes checking for __argz_next... yes checking for stpcpy... yes checking for LC_MESSAGES... yes checking whether NLS is requested... yes checking whether included gettext is requested... no checking for libintl.h... yes checking for gettext in libc... yes checking for msgfmt... /usr/bin/msgfmt checking for dcgettext... yes checking for gmsgfmt... /usr/bin/msgfmt checking for xgettext... /usr/bin/xgettext checking for gtk-config... no checking for GTK - version >= 1.2.1... no *** The gtk-config script installed by GTK could not be found *** If GTK was installed in PREFIX, make sure PREFIX/bin is in *** your path, or set the GTK_CONFIG environment variable to the *** full path to gtk-config. configure: error: Cannot find proper GTK+ version Obviously it's looking for gtk-config. However, apparently it doesn't exist in the repos anymore. Then this post mentioned that gtkglarea solved their problem, as mentioned in this file. 
Of course that poster neatly forgets to mention exactly what and how gtkglarea solved their problem, and Google is mostly devoid of information on the problem. So I come here asking for help! I would like to install fsv, but it tells me gtk-config doesn't exist. How can I fix this problem in Ubuntu 10.10? Thanks!

    Read the article

  • Cannot install any software from the Software Center due to ttf-mscorefonts-installer package error

    - by Dei
    When I try to install any software from the Ubuntu Software Center it fails with this error: An unhandled error occurred. There seems to be a programming error in aptdaemon. This is the software that allows you to install/remove software and to perform other package management related tasks. Details: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 961, in simulate trans.unauthenticated = self._simulate_helper(trans) File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1085, in _simulate_helper return depends, self._cache.required_download, \ File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 226, in required_download pm.get_archives(fetcher, self._list, self._records) SystemError: E:I wasn't able to locate file for the ttf-mscorefonts-installer package. This might mean you need to manually fix this package. Please help me!

    Read the article

  • Homebrew LEGO CD Duplicator Copies CDs On The Cheap

    - by Jason Fitzpatrick
    If you'd like to bulk copy CDs/DVDs without the sticker shock of a $500+ commercial duplicator, this DIY LEGO duplicator is a homebrew solution. Paul Rea wanted to rip and copy CDs and DVDs without shelling out for a commercial duplicator and without the hassle of being bound to that commercial duplicator's proprietary software. His homebrew solution, a combination of LEGO, a rotating base, an Arduino controller, and a little ingenuity, handles his ripping and copying needs with ease. Watch the video above to see it in action, then hit up the link below for the build log and Arduino code. CD Duplicator [PaulRea.net via Make]

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #033

    - by Pinal Dave
    Here is the list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane. 2007 Spatial Database Definition and Research Documents Here is the definition from Wikipedia about spatial databases: A spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types. Select Only Date Part From DateTime – Best Practice A very common question which I receive is how to get only the Date or Time part from a datetime value. In this blog post I explain the same in very simple words. T-SQL Paging Query Technique Comparison (OVER and ROW_NUMBER()) – CTE vs. Derived Table I have received a few emails and comments about my post SQL SERVER – T-SQL Paging Query Technique Comparison – SQL 2000 vs SQL 2005. The main question was: can this be done using a CTE? Absolutely! What about performance? It is identical! Please refer to the above-mentioned article for the history of paging. SQL SERVER – Cannot resolve collation conflict for equal to operation One of the very first errors I ever encountered in my career was resolving this conflict. I have blogged about it, and I have realized that there are many others like me who face this error. LEN and DATALENGTH of NULL Simple Example Here is the question for you: what is the LEN of a NULL value? Well, it is very easy – just read the blog. Recovery Models and Selection Very simple and easy explanation of the database backup recovery models and how to select the best option for you. Explanation SQL SERVER Hash Join A hash join gives the best performance when two or more tables are joined and at least one of them has no index or is not sorted. It is also expected that the smaller of the tables can be read into memory completely (though this is not necessary). Easy Sequence of SELECT FROM JOIN WHERE GROUP BY HAVING ORDER BY SELECT yourcolumns FROM tablenames JOIN tablenames WHERE condition GROUP BY yourcolumns HAVING aggregatecolumn condition ORDER BY yourcolumns NorthWind Database or AdventureWorks Database – Sample Databases In this blog post we learn how to install the Northwind database. I also shared the source where one can download this database, as it is used in many examples in the MSDN help files. sp_HelpText for sp_HelpText – Puzzle A simple quick puzzle – do you know the answer to it? If not, go ahead and read the blog. 2008 SQL SERVER – 2008 – Step By Step Installation Guide With Images When SQL Server 2008 was newly introduced, lots of people had no clue how to install SQL Server 2008 and the number of questions I used to receive was huge. I wrote this blog post in the spirit that it would help all the newbies to install SQL Server 2008 with the help of images. Even today this blog post has been the bible for all of the people who are confused by SQL Server installation. Inline Variable Assignment I loved this feature. I have always wanted this feature to be present in SQL Server. The last time I met developers from Microsoft SQL Server, I talked about this feature. I think this feature saves some time and makes the code more readable.
    Introduction to Policy Management – Enforcing Rules on SQL Server If our company policy is to create all stored procedures with the prefix 'usp', developers should simply be prevented from creating stored procedures with any other prefix. Let us see a small tutorial on how to create conditions and a policy which will prevent any future SP from being created with any other prefix. 2009 Performance Counters from System Views – By Kevin Mckenna Many of you are not aware of the fact that access to performance information is readily available in SQL Server, and that too without querying performance counters using a custom application or via Perfmon. Till now this fact has remained undisclosed, but through this post I would like to explain how you can easily access SQL Server performance counter information. Without putting in much effort you will come across the system view sys.dm_os_performance_counters. As the name suggests, this provides you easy access to the SQL Server performance counter information that is passed on to Perfmon, but you can get at it via T-SQL. Customize Toolbar – Remove Debug Button from Toolbar I was fond of the SQL Server Debugger feature in SQL Server 2000. To my utter disappointment, this feature was withdrawn from SQL Server 2005. The button of the debugger is similar to a play button and is used to run debugging commands of Visual Studio. For this reason, it gets very infuriating for developers when they are developing in both Visual Studio and SSMS. Let us now see how we can remove the debugging button from SQL Server Management Studio. Effect of Normalization on Index and Performance A very interesting conversation which started on Twitter. If you want to read one link, this is the link I encourage you to read. SSMS Feature – Multi-server Queries Using SQL Server Management Studio (SSMS) DBAs can now query multiple servers from one window. It is quite common for DBAs with a large number of servers to maintain to gather information from multiple SQL Servers and create reports. This feature is a blessing for the DBAs, as they can now assemble all the information instantaneously without going anywhere. Query Optimizer Hint ROBUST PLAN – Question to You "ROBUST PLAN" is a kind of query hint which works quite differently than other hints. It does not improve joins or force any indexes to be used; it just makes sure that a query does not crash due to an over-the-limit row size. Let me elaborate upon it in the blog post. 2010 Do you really know the difference between the various date functions available in SQL Server 2012? Here is a three part story where we explored the same with examples: Fastest Way to Restore the Database Difference Between DATETIME and DATETIME2 Difference Between DATETIME and DATETIME2 – WITH GETDATE Shrinking NDF and MDF Files – Readers' Opinion Shrinking a database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say Shrinking Database is bad and evil, here I am saying it first and loud. Now, the comment of Imran is written while keeping in mind only the process showing how the Shrinking Database operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran's comments, as there are a few corrections.
2011

Solution – Puzzle – SELECT * vs SELECT COUNT(*): This is a very interesting question and I am confident that not everyone knows the answer. Let me ask you again: which will be faster, SELECT * or SELECT COUNT(*), or do you think this is an apples-and-oranges comparison?

2012

Service Broker and CAP_CPU_PERCENT – Limiting SQL Server Instances to CPU Usage: In SQL Server 2012 there are a few enhancements to the SQL Server Resource Governor. One of the enhancements is how resources are allocated. Let me explain with the help of three different examples.

Finding Size of a Columnstore Index Using DMVs: One very common question I see is the need for a list of columnstore indexes along with their size and corresponding table name. I quickly wrote a script using the DMVs sys.indexes and sys.dm_db_partition_stats. This script gives the size of the columnstore index on disk only. I am sure there are more advanced scripts that retrieve details of the components associated with a columnstore index, but I believe the following script is sufficient to start getting an idea of columnstore index size.

Developer Training Resources and Summary Roundup

Developer Training – Importance and Significance – Part 1: In this part we discussed the importance of training in the real world. The most important and valuable resource any company has is its employees. Employees who have been well trained will be better at their jobs and produce a better product. An employee who is well trained obviously knows more about their job and all its technical aspects. I have a very high opinion of training employees, and it is the most important task.

Developer Training – Employee Morals and Ethics – Part 2: In this part we discussed the most crucial components of training. Often employees expect the company to pay for their training while the company expresses no interest in training the employee. Quite often training expenses are the real issue for both the employee and the employer.

Developer Training – Difficult Questions and Alternative Perspective – Part 3: This part was the most difficult to write, as I tried to address a few difficult questions and answers. Training is such a sensitive issue that many developers, when not given the chance for training, think about leaving the organization.

Developer Training – Various Options for Developer Training – Part 4: In this part I tried to explore a few methods and options for training. The general feedback I received was that this post was short and I should have explored each training subject in detail. I believe there are two big buckets of training: 1) instructor-led training and 2) self-led training.

Developer Training – A Conclusive Summary – Part 5: There is no better motivation than a personal desire to learn new technology. Honestly, there is nothing more personal than learning. “Change is the only constant” and “adapt and overcome” are essential lessons of life. One cannot stop learning or resist change. In the IT industry the “ego of knowing it all” and the “resistance to change” are the most challenging issues.

A Quick Look at Logging and Ideas around Logging: Question: what is the first thing that comes to your mind when you hear the word “logging”? Strangely enough, I got a different answer every single time. Let me just list the answers I got from my friends, and let us go over them one by one.
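The columnstore sizing script itself lives in the linked article, but a rough sketch built from the same two DMVs would look something like the following; the filter and the size calculation are my assumptions, not the original script:

    -- Approximate on-disk size of columnstore indexes per table (SQL Server 2012)
    SELECT  OBJECT_NAME(i.object_id)              AS table_name,
            i.name                                AS index_name,
            SUM(ps.used_page_count) * 8 / 1024.0  AS size_in_mb
    FROM    sys.indexes AS i
            INNER JOIN sys.dm_db_partition_stats AS ps
                ON  ps.object_id = i.object_id
                AND ps.index_id  = i.index_id
    WHERE   i.type_desc LIKE '%COLUMNSTORE%'
    GROUP BY i.object_id, i.name;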
Beginning Performance Tuning with SQL Server Execution Plan

Solution of Puzzle – Swap Value of Column Without Case Statement: Earlier this week I asked how to swap the values of a column without using a CASE statement. Read here: SQL SERVER – A Puzzle – Swap Value of Column Without Case Statement. I proposed 3 different solutions in the blog post itself. I had requested the community's help in coming up with alternate solutions, and honestly I am stunned and amazed by the quality of the entries.

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
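For anyone curious about the swap puzzle above, here is one of the arithmetic-style answers that avoids CASE; the table and column names are hypothetical, and this is only one of several approaches covered in the original post and its comments:

    -- If a column only ever holds the two values 1 and 2, (3 - value) flips them
    UPDATE dbo.SampleTable
    SET    FlagColumn = 3 - FlagColumn
    WHERE  FlagColumn IN (1, 2);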

    Read the article

  • Minidlna Directory Issues

    - by Somnambulist
    I've done my searching and can't find an answer to THIS specific issue. I have my minidlna set up and running - but it's not really done properly. First off, when I open the server on my bluray player, all of my movies are listed twice - when they are certainly not saved on my external twice. Second, when I open the server - rather than reading "Movies" "TV" "Music", etc - It just mashes all of my movies, tv, and some other folders all together with no real organization. I never had this problem when I had my Windows set up, so I know it's something configured improperly more-so than my external drive giving me gruff. Here's my minidlna.conf file: # This is the configuration file for the MiniDLNA daemon, a DLNA/UPnP-AV media # server. # # Unless otherwise noted, the commented out options show their default value. # # On Debian, you can also refer to the minidlna.conf(5) man page for # documentation about this file. media_dir=/media/somnambulist/Ghost In You # This option can be specified more than once if you want multiple directories # scanned. # # If you want to restrict a media_dir to a specific content type, you can # prepend the directory name with a letter representing the type (A, P or V), # followed by a comma, as so: # * "A" for audio (eg. media_dir=A,/var/lib/minidlna/music) # * "P" for pictures (eg. media_dir=P,/var/lib/minidlna/pictures) # * "V" for video (eg. media_dir=V,/var/lib/minidlna/videos) # # WARNING: After changing this option, you need to rebuild the database. Either # run minidlna with the '-R' option, or delete the 'files.db' file # from the db_dir directory (see below). # On Debian, you can run, as root, 'service minidlna force-reload' instead. #media_dir=/var/lib/minidlna media_dir=V,/media/somnambulist/Ghost In You/Movies media_dir=V,/media/somnambulist/Ghost In You/TV media_dir=P,/home/somnambulist/Pictures # Path to the directory that should hold the database and album art cache. db_dir=/home/somnambulist/serverart # Path to the directory that should hold the log file. log_dir=/home/somnambulist/serverlog # Minimum level of importance of messages to be logged. # Must be one of "off", "fatal", "error", "warn", "info" or "debug". # "off" turns of logging entirely, "fatal" is the highest level of importance # and "debug" the lowest. #log_level=warn # Use a different container as the root of the directory tree presented to # clients. The possible values are: # * "." - standard container # * "B" - "Browse Directory" # * "M" - "Music" # * "P" - "Pictures" # * "V" - "Video" # if you specify "B" and client device is audio-only then "Music/Folders" will be used as root root_container=B # Network interface(s) to bind to (e.g. eth0), comma delimited. #network_interface= # IPv4 address to listen on (e.g. 192.0.2.1). #listening_ip= # Port number for HTTP traffic (descriptions, SOAP, media transfer). port=8200 # URL presented to clients. # The default is the IP address of the server on port 80. #presentation_url=http://example.com:80 # Name that the DLNA server presents to clients. friendly_name=Somnambulist Media Server # Serial number the server reports to clients. serial=12345678 # Model name the server reports to clients. #model_name=Windows Media Connect compatible (MiniDLNA) # Model number the server reports to clients. model_number=1 # Automatic discovery of new files in the media_dir directory. #inotify=yes # List of file names to look for when searching for album art. Names should be # delimited with a forward slash ("/"). 
album_art_names=Cover.jpg/cover.jpg/AlbumArtSmall.jpg/albumartsmall.jpg/AlbumArt.jpg/albumart.jpg/Album.jpg/album.jpg/Folder.jpg/folder.jpg/Thumb.jpg/thumb.jpg # Strictly adhere to DLNA standards. # This allows server-side downscaling of very large JPEG images, which may # decrease JPEG serving performance on (at least) Sony DLNA products. #strict_dlna=no # Support for streaming .jpg and .mp3 files to a TiVo supporting HMO. #enable_tivo=no # Notify interval, in seconds. #notify_interval=895 # Path to the MiniSSDPd socket, for MiniSSDPd support. #minissdpdsocket=/run/minissdpd.sock` And here's the error I get in terminal when I run: sudo service minidlna restart sudo service minidlna force-reload Force restart error: Restarting DLNA/UPnP-AV media server minidlna [2013/08/12 21:19:27] minidlna.c:474: error: Media directory "/media/somnambulist/Ghost In You/Movies" not accessible! [Permission denied] [2013/08/12 21:19:27] minidlna.c:474: error: Media directory "/media/somnambulist/Ghost In You/TV" not accessible! [Permission denied] Force-reload error: Restarting DLNA/UPnP-AV media server minidlna [2013/08/12 21:19:46] minidlna.c:474: error: Media directory "/media/somnambulist/Ghost In You/Movies" not accessible! [Permission denied] [2013/08/12 21:19:46] minidlna.c:474: error: Media directory "/media/somnambulist/Ghost In You/TV" not accessible! [Permission denied] rm: cannot remove ‘/home/somnambulist/serverart/files.db’: Permission denied rm: cannot remove ‘/home/somnambulist/serverart/art_cache/media/somnambulist/Ghost In You/Movies/Slumdog Millionaire/Slumdog.Millionaire.Cover.jpg’: Permission denied rm: cannot remove ‘/home/somnambulist/serverart/art_cache/media/somnambulist/Ghost In You/Movies/Zack and Miri Make a Porno/ZackAndMiriMakeAPornoCover.jpg’: Permission denied [2013/08/12 21:19:46] minidlna.c:744: warn: Failed to clean old file cache. [ OK ] I've spent hours on this at this point, read through various files - and even had a friend who is relatively Ubuntu-savvy try to help me via chat - no such luck. Thanks in advance for any help.

    Read the article

  • repair broken packages-"dpkg: error: conflicting actions -f (--field) and -r (--remove)"

    - by yinon
    Ubuntu 12.04 LTS. if more information will be needed, tell me and'll give. the main problem is: tzach@tzach-pc:~$ sudo apt-get install docky [sudo] password for tzach: Reading package lists... Done Building dependency tree Reading state information... Done docky is already the newest version. You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: ca-certificates-java : Depends: openjdk-6-jre-headless (>= 6b16-1.6.1-2) but it is not going to be installed or java6-runtime-headless openjdk-7-jre-lib : Depends: openjdk-7-jre-headless (>= 7~b130~pre0) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). tzach@tzach-pc:~$ and also: tzach@tzach-pc:~$ sudo apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. **The following packages have unmet dependencies: ca-certificates-java : Depends: openjdk-6-jre-headless (>= 6b16-1.6.1-2) but it is not installed or java6-runtime-headless openjdk-7-jre-lib : Depends: openjdk-7-jre-headless (>= 7~b130~pre0) but it is not installed E: Unmet dependencies. Try using ******* so we tryied the guide here in messege #9: http://ubuntuforums.org/showthread.php?t=947124 we run all the first 4 commands and the last one-"sudo apt-get autoremove" gave us: tzach@tzach-pc:~$ sudo apt-get autoremove Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: **ca-certificates-java** : Depends: openjdk-6-jre-headless (>= 6b16-1.6.1-2) but it is not installed or java6-runtime-headless **openjdk-7-jre-lib** : Depends: openjdk-7-jre-headless (>= 7~b130~pre0) but it is not installed E: Unmet dependencies. Try using -f. so we run the last command twice: sudo dpkg --remove -force --force-remove-reinstreq ca-certificates-java and sudo dpkg --remove -force --force-remove-reinstreq openjdk-7-jre-lib but both of them gives: tzach@tzach-pc:~$ sudo dpkg --remove -force --force-remove-reinstreq ca-certificates-java [sudo] password for tzach: dpkg: error: conflicting actions -f (--field) and -r (--remove) Type dpkg --help for help about installing and deinstalling packages [*]; Use `dselect' or `aptitude' for user-friendly package management; Type dpkg -Dhelp for a list of dpkg debug flag values; Type dpkg --force-help for a list of forcing options; Type dpkg-deb --help for help about manipulating *.deb files; Options marked [*] produce a lot of output - pipe it through `less' or `more' ! EDIT FOR green7-output of "sudo apt-get -f install": tzach@tzach-pc:~$ sudo apt-get -f install [sudo] password for tzach: Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: icedtea-7-jre-cacao icedtea-7-jre-jamvm java-common openjdk-7-jre-headless tzdata-java Suggested packages: default-jre equivs sun-java6-fonts ttf-dejavu-extra fonts-ipafont-gothic fonts-ipafont-mincho ttf-telugu-fonts ttf-oriya-fonts ttf-kannada-fonts ttf-bengali-fonts The following packages will be REMOVED: ttf-mscorefonts-installer The following NEW packages will be installed: icedtea-7-jre-cacao icedtea-7-jre-jamvm java-common openjdk-7-jre-headless tzdata-java 0 upgraded, 5 newly installed, 1 to remove and 355 not upgraded. 
5 not fully installed or removed. Need to get 0 B/29.6 MB of archives. After this operation, 88.5 MB of additional disk space will be used. Do you want to continue [Y/n]? y debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: warning: there's no installed package matching ttf-mscorefonts-installer:amd64 Setting up tzdata (2012e-0ubuntu0.12.04) ... debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error processing tzdata (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: tzdata E: Sub-process /usr/bin/dpkg returned an error code (1) EDIT2 FOR green7: tzach@tzach-pc:~$ sudo apt-get remove --purge tzdata [sudo] password for tzach: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: ca-certificates-java : Depends: openjdk-6-jre-headless (>= 6b16-1.6.1-2) but it is not going to be installed or java6-runtime-headless libc6 : Depends: tzdata but it is not going to be installed libc6:i386 : Depends: tzdata:i386 libical0 : Depends: tzdata but it is not going to be installed openjdk-7-jre-lib : Depends: openjdk-7-jre-headless (>= 7~b130~pre0) but it is not going to be installed python-dateutil : Depends: tzdata but it is not going to be installed ubuntu-minimal : Depends: tzdata but it is not going to be installed util-linux : Depends: tzdata (>= 2006c-2) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). EDIT3 FOR green7: tzach@tzach-pc:~$ sudo apt-get install openjdk-7-jre-headless [sudo] password for tzach: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: openjdk-7-jre-headless : Depends: tzdata-java but it is not going to be installed Depends: java-common (>= 0.28) but it is not going to be installed Recommends: icedtea-7-jre-cacao (= 7~u3-2.1.1~pre1-1ubuntu3) but it is not going to be installed Recommends: icedtea-7-jre-jamvm (= 7~u3-2.1.1~pre1-1ubuntu3) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). some things in the text also supposed to be bolded. but not critic (: Thanks for the editing! Thanks a lot for your assistance.

    Read the article

  • Clean up after Visual Studio

    - by psheriff
As programmers we know that if we create a temporary file while our application is running, we need to make sure it is removed when the application or process is complete. We do this, but why can’t Microsoft do it? Visual Studio leaves tons of temporary files all over your hard drive. This is why, over time, your computer loses hard disk space. This blog post will show you some of the most common places where these files are left and which ones you can safely delete.

.NET Left Overs

Visual Studio is a great development environment for creating applications quickly. However, it will leave a lot of miscellaneous files all over your hard drive. There are a few locations on your hard drive that you should be checking to see if there are left-over folders or files that you can delete. I have attempted to gather as much data as I can about the various versions of .NET and operating systems. Of course, your mileage may vary on the folders and files I list here. In fact, this problem is so prevalent that PDSA has created a Computer Cleaner specifically for the Visual Studio developer. Instructions for downloading our PDSA Developer Utilities (of which Computer Cleaner is one) are at the end of this blog entry.

Each version of Visual Studio will create “temporary” files in different folders. The problem is that the files created are not always “temporary”. Most of the time these files do not get cleaned up like they should. Let’s look at some of the folders that you should periodically review and within which you can delete files.

Temporary ASP.NET Files

As you create and run ASP.NET applications from Visual Studio, temporary files are placed into the <sysdrive>:\Windows\Microsoft.NET\Framework[64]\<vernum>\Temporary ASP.NET Files folder. The folders and files under this folder can be removed with no harm to your development computer. Do not remove the "Temporary ASP.NET Files" folder itself, just the folders underneath it. If you use IIS for ASP.NET development, you may need to run the iisreset.exe utility from the command prompt prior to deleting any files/folders under this folder. IIS will sometimes keep files in this folder in use, and iisreset will release the locks so the files/folders can be deleted.

Website Cache

This folder is similar to the Temporary ASP.NET Files folder in that it contains files from ASP.NET applications run from Visual Studio. This folder is located in each user's local settings folder. The location will be a little different on each operating system. For example, on Windows Vista/Windows 7 the folder is located at <sysdrive>:\Users\<UserName>\AppData\Local\Microsoft\WebsiteCache. If you are running Windows XP this folder is located at <sysdrive>:\Documents and Settings\<UserName>\Local Settings\Application Data\Microsoft\WebsiteCache. Check these locations periodically and delete all files and folders under this directory.

Visual Studio Backup

This backup folder is used by Visual Studio to store temporary files while you develop in Visual Studio. This folder never gets cleaned out, so you should periodically delete all files and folders under this directory. On Windows XP, this folder is located at <sysdrive>:\Documents and Settings\<UserName>\My Documents\Visual Studio 200[5|8]\Backup Files. On Windows Vista/Windows 7 this folder is located at <sysdrive>:\Users\<UserName>\Documents\Visual Studio 200[5|8]\.

Assembly Cache

No, this is not the global assembly cache (GAC). It appears that this cache is only created when doing WPF or Silverlight development with Visual Studio 2008 or Visual Studio 2010. This folder is located at <sysdrive>:\Users\<UserName>\AppData\Local\assembly\dl3 on Windows Vista/Windows 7. On Windows XP this folder is located at <sysdrive>:\Documents and Settings\<UserName>\Local Settings\Application Data\assembly. If you have not done any WPF or Silverlight development, you may not find this particular folder on your machine.

Project Assemblies

This is yet another folder where Visual Studio stores temporary files. You will find a folder for each project you have opened and worked on. This folder is located at <sysdrive>:\Documents and Settings\<UserName>\Local Settings\Application Data\Microsoft\Visual Studio\[8|9].0\ProjectAssemblies on Windows XP. On Microsoft Vista/Windows 7 you will find this folder at <sysdrive>:\Users\<UserName>\AppData\Local\Microsoft\Visual Studio\[8|9].0\ProjectAssemblies.

Remember that not all of these folders will appear on your particular machine. Which ones show up will depend on what version of Visual Studio you are using, whether you are doing desktop or web development, and the operating system you are using.

Summary

Taking the time to periodically clean up after Visual Studio will aid in keeping your computer running quickly and increase the space on your hard drive. Another place to make sure you are cleaning up is your TEMP folder. Check your OS settings for the location of your particular TEMP folder and be sure to delete any files in there that are not in use. I routinely clean up the files and folders described in this blog post, and I find that I actually eliminate errors in Visual Studio and increase my hard disk space.

NEW! PDSA has just published a “pre-release” of our PDSA Developer Utilities at http://www.pdsa.com/DeveloperUtilities that contains a Computer Cleaner utility which will clean up the above-mentioned folders, as well as a lot of other miscellaneous folders that accumulate Visual Studio build-up. You can download a free trial at http://www.pdsa.com/DeveloperUtilities. If you wish to purchase our utilities through the month of November, 2011 you can use the RSVP code: DUNOV11 to get them for only $39. This is $40 off the regular price.

NOTE: You can download this article and many samples like the one shown in this blog entry at my website, http://www.pdsa.com/downloads. Select “Tips and Tricks”, then “Developer Machine Clean Up” from the drop down list.

Good Luck with your Coding,
Paul Sheriff

** SPECIAL OFFER FOR MY BLOG READERS **
We frequently offer a FREE gift for readers of my blog. Visit http://www.pdsa.com/Event/Blog for your FREE gift!

    Read the article

  • NoSQL Memcached API for MySQL: Latest Updates

    - by Mat Keep
    With data volumes exploding, it is vital to be able to ingest and query data at high speed. For this reason, MySQL has implemented NoSQL interfaces directly to the InnoDB and MySQL Cluster (NDB) storage engines, which bypass the SQL layer completely. Without SQL parsing and optimization, Key-Value data can be written directly to MySQL tables up to 9x faster, while maintaining ACID guarantees. In addition, users can continue to run complex queries with SQL across the same data set, providing real-time analytics to the business or anonymizing sensitive data before loading to big data platforms such as Hadoop, while still maintaining all of the advantages of their existing relational database infrastructure. This and more is discussed in the latest Guide to MySQL and NoSQL where you can learn more about using the APIs to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database The native Memcached API is part of the MySQL 5.6 Release Candidate, and is already available in the GA release of MySQL Cluster. By using the ubiquitous Memcached API for writing and reading data, developers can preserve their investments in Memcached infrastructure by re-using existing Memcached clients, while also eliminating the need for application changes. Speed, when combined with flexibility, is essential in the world of growing data volumes and variability. Complementing NoSQL access, support for on-line DDL (Data Definition Language) operations in MySQL 5.6 and MySQL Cluster enables DevOps teams to dynamically update their database schema to accommodate rapidly changing requirements, such as the need to capture additional data generated by their applications. These changes can be made without database downtime. Using the Memcached interface, developers do not need to define a schema at all when using MySQL Cluster. Lets look a little more closely at the Memcached implementations for both InnoDB and MySQL Cluster. Memcached Implementation for InnoDB The Memcached API for InnoDB is previewed as part of the MySQL 5.6 Release Candidate. As illustrated in the following figure, Memcached for InnoDB is implemented via a Memcached daemon plug-in to the mysqld process, with the Memcached protocol mapped to the native InnoDB API. Figure 1: Memcached API Implementation for InnoDB With the Memcached daemon running in the same process space, users get very low latency access to their data while also leveraging the scalability enhancements delivered with InnoDB and a simple deployment and management model. Multiple web / application servers can remotely access the Memcached / InnoDB server to get direct access to a shared data set. With simultaneous SQL access, users can maintain all the advanced functionality offered by InnoDB including support for Foreign Keys, XA transactions and complex JOIN operations. Benchmarks demonstrate that the NoSQL Memcached API for InnoDB delivers up to 9x higher performance than the SQL interface when inserting new key/value pairs, with a single low-end commodity server supporting nearly 70,000 Transactions per Second. Figure 2: Over 9x Faster INSERT Operations The delivered performance demonstrates MySQL with the native Memcached NoSQL interface is well suited for high-speed inserts with the added assurance of transactional guarantees. 
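As a small, hedged illustration of the dual access described above: the InnoDB Memcached plugin ships with a configuration script that creates a demo mapping (schema test, table demo_test, with c1 as the key column and c2 as the value column). Assuming that demo setup, data written through either interface is visible to the other:

    -- With the demo mapping in place, a Memcached client command such as
    --     set greeting 0 0 11
    --     hello world
    -- stores the pair in the mapped InnoDB table, where it is immediately
    -- visible to plain SQL (and rows changed through SQL are likewise
    -- readable back through the Memcached "get" command):
    SELECT c1 AS `key`, c2 AS `value`
    FROM   test.demo_test;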
You can check out the latest Memcached / InnoDB developments and benchmarks here You can learn how to configure the Memcached API for InnoDB here Memcached Implementation for MySQL Cluster Memcached API support for MySQL Cluster was introduced with General Availability (GA) of the 7.2 release, and joins an extensive range of NoSQL interfaces that are already available for MySQL Cluster Like Memcached, MySQL Cluster provides a distributed hash table with in-memory performance. MySQL Cluster extends Memcached functionality by adding support for write-intensive workloads, a full relational model with ACID compliance (including persistence), rich query support, auto-sharding and 99.999% availability, with extensive management and monitoring capabilities. All writes are committed directly to MySQL Cluster, eliminating cache invalidation and the overhead of data consistency checking to ensure complete synchronization between the database and cache. Figure 3: Memcached API Implementation with MySQL Cluster Implementation is simple: 1. The application sends reads and writes to the Memcached process (using the standard Memcached API). 2. This invokes the Memcached Driver for NDB (which is part of the same process) 3. The NDB API is called, providing for very quick access to the data held in MySQL Cluster’s data nodes. The solution has been designed to be very flexible, allowing the application architect to find a configuration that best fits their needs. It is possible to co-locate the Memcached API in either the data nodes or application nodes, or alternatively within a dedicated Memcached layer. The benefit of this flexible approach to deployment is that users can configure behavior on a per-key-prefix basis (through tables in MySQL Cluster) and the application doesn’t have to care – it just uses the Memcached API and relies on the software to store data in the right place(s) and to keep everything synchronized. Using Memcached for Schema-less Data By default, every Key / Value is written to the same table with each Key / Value pair stored in a single row – thus allowing schema-less data storage. Alternatively, the developer can define a key-prefix so that each value is linked to a pre-defined column in a specific table. Of course if the application needs to access the same data through SQL then developers can map key prefixes to existing table columns, enabling Memcached access to schema-structured data already stored in MySQL Cluster. Conclusion Download the Guide to MySQL and NoSQL to learn more about NoSQL APIs and how you can use them to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database See how to build a social app with MySQL Cluster and the Memcached API from our on-demand webinar or take a look at the docs Don't hesitate to use the comments section below for any questions you may have 

    Read the article

  • Tutorial: Creating a Component for UCM

    - by Denisd
    So you have already installed UCM by following the tutorial: http://blogs.oracle.com/ecmbrasil/2009/05/tutorial_de_instalao_do_ucm.html and you have also done the hands-on: http://blogs.oracle.com/ecmbrasil/2009/10/tutorial_de_ucm.html and now you want to go beyond the basics? You want to start building functionality for UCM? You want to become a UCM developer? You want to shape the Content Server in your own image?! Then today is your lucky day! In this tutorial we will learn how to create a component for the Content Server. Our first component, although not that simple, will be built using only Content Server resources. In a future tutorial we will learn how to use Java classes as part of our components. In this tutorial we will develop a Favorites feature, where users can mark certain documents as their favorites and later view those documents in a list. We will not build the component with all of its functionality, but with what you will see here it will be easy to improve this component, even for production environments.

The MyFavorites Component

Some characteristics of our Favorites component:
- For reasons of space, we will build this component in a "quick and dirty" way, that is, without necessarily following the best practices of component development. To better understand component development practices, I recommend reading the Working With Components guide.
- It will be developed only for Brazilian Portuguese. Other languages can be added later.
- It will add an "Add to Favorites" option to the "Content Actions" menu (Content Information page), so that the user can mark the file as one of their favorites.
- When clicking this link, the user will be taken to a page where they can type a comment about the favorite, to make it easier to read later.
- Favorites will be saved in a database table that we will create as part of the component.
- The "My Content Server" tray will have a new option called "My Favorites", which brings up a page that lists the favorites and allows the user to delete the links.
- Some features will be left out of this exercise, again for reasons of space, but we will list them at the end as complementary exercises.

Resources of our Component

The Favorites component will be built from a few resources. Let us take a closer look at what these resources are and what they do:
- Query: A query is any activity I need to run against the database, the famous CRUD: Create, Read, Update, Delete. There are different ways to call a query, depending on the purpose:
  - Select Query: runs a SQL command but discards the result. Used only to test whether the database connection is OK. It will not be used in our exercise.
  - Execute Query: runs a SQL command that changes data in the database. It can be an INSERT, UPDATE or DELETE, and it discards the results. We will use Execute Queries to create, update and delete favorites.
  - Select Cache Query: runs a SQL SELECT command and stores the results in a ResultSet. This ResultSet is returned as the result of the service and can be manipulated in IDOC, Java or other languages. We will use a Select Cache Query to return a user's list of favorites.
- Service: Services are responsible for executing the queries (or Java classes, but that is a topic for another tutorial...). A service receives the input parameters, executes the query and returns the ResultSet (in the case of a SELECT). Services can be invoked through templates, IDOC pages, other applications (through the API), or directly from the browser URL. In this exercise we will create services to create, edit, delete and list a user's favorites.
- Template: Templates are the graphical interfaces (pages) that will be presented to users. For example, before running the service that deletes a document from the favorites, I want the user to see a page with the document ID and a Confirm button, so that they are sure they are deleting the right record. This page can be created as a template. In this exercise we will build templates for the main services, as well as the page that lists all of the user's favorites and offers the edit and delete actions. Templates are nothing more than HTML pages with IDOC script.

Our sequence of activities for developing this component will be:
- Create the database table (a rough sketch follows at the end of this entry)
- Create the component using the Component Wizard
- Create the Queries to insert, edit, delete and list the favorites
- Create the Services that execute these Queries
- Create the templates, which are the pages that will interact with the users
- Create the links, on the content information page and in the My Content Server tray

Well then, let's get started! Check out the full tutorial by clicking this link: http://blogs.oracle.com/ecmbrasil/Tutorial_Componente_Banco.pdf   Happy coding!  :-)
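Since the first activity in the list above is creating the database table, here is a rough sketch of what the favorites table could look like; the table and column names are assumptions for illustration and may differ from the ones used in the full PDF tutorial:

    -- A possible shape for the MyFavorites component's table
    CREATE TABLE MyFavorites (
        fFavoriteID   INT           NOT NULL,   -- surrogate key for the favorite entry
        fUserName     VARCHAR(80)   NOT NULL,   -- Content Server user who saved it
        fDocName      VARCHAR(80)   NOT NULL,   -- dDocName of the favorited content item
        fComment      VARCHAR(255),             -- free-text note typed on the add page
        fDateAdded    DATE,                     -- when the favorite was created
        PRIMARY KEY (fFavoriteID)
    );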

    Read the article

  • Sorting: TransientVO Vs Query/EO based VO

    - by Vijay Mohan
    In ADF you can sort VO rows by invoking the setSortBy("VOAttrName") API, but the tricky part is that this API actually appends an ordering clause to the VO query at runtime, and the actual sorting is performed only after VO.executeQuery(). That works fine for a query/EO-based VO, but how about a transient VO, where the rows are populated programmatically? There is a way: you can specify the query mode on your transient VO so that the sorting happens on the already populated VO rows. Here are the steps to go about it:
// Populate your transient VO rows.
// VO.setSortBy("YourVOAttrName");
// VO.setQueryMode(ViewObject.QUERY_MODE_SCAN_VIEW_ROWS);
// VO.executeQuery();
So here executeQuery() is the trigger that actually sorts the VO rows, and the QUERY_MODE_SCAN_VIEW_ROWS flag makes sure the sorting is performed on the already populated VO cache.

    Read the article

  • First toe in the water with Object Databases : DB4O

    - by REA_ANDREW
    I have been wanting to have a play with Object Databases for a while now, and today I have done just that.  One of the obvious choices I had to make was which one to use.  My criteria for choosing one today was simple, I wanted one which I could literally wack in and start using, which means I wanted one which either had a .NET API or was designed/ported to .NET.  My decision was between two being: db4o MongoDb I went for db4o for the single reason that it looked like I could get it running and integrated the quickest.  I am making a Blogging application and front end as a project with which I can test and learn with these object databases.  Another requirement which I thought I would mention is that I also want to be able to use the said database in a shared hosting environment where I cannot install, run and maintain a server instance of said object database.  I can do exactly this with db4o. I have not tried to do this with MongoDb at time of writing.  There are quite a few in the industry now and you read an interesting post about different ones and how they are used with some of the heavy weights in the industry here : http://blog.marcua.net/post/442594842/notes-from-nosql-live-boston-2010 In the example which I am building I am using StructureMap as my IOC.  To inject the object for db4o I went with a Singleton instance scope as I am using a single file and I need this to be available to any thread on in the process as opposed to using the server implementation where I could open and close client connections with the server handling each one respectively.  Again I want to point out that I have chosen to stick with the non server implementation of db4o as I wanted to use this in a shared hosting environment where I cannot have such servers installed and run.     public static class Bootstrapper    {        public static void ConfigureStructureMap()        {            ObjectFactory.Initialize(x => x.AddRegistry(new MyApplicationRegistry()));        }    }    public class MyApplicationRegistry : Registry    {        public const string DB4O_FILENAME = "blog123";        public string DbPath        {            get            {                return Path.Combine(Path.GetDirectoryName(Assembly.GetAssembly(typeof(IBlogRepository)).Location), DB4O_FILENAME);            }        }        public MyApplicationRegistry()        {            For<IObjectContainer>().Singleton().Use(                () => Db4oEmbedded.OpenFile(Db4oEmbedded.NewConfiguration(), DbPath));            Scan(assemblyScanner =>            {                assemblyScanner.TheCallingAssembly();                assemblyScanner.WithDefaultConventions();            });        }    } So my code above is the structure map plumbing which I use for the application.  I am doing this simply as a quick scratch pad to play around with different things so I am simply segregating logical layers with folder structure as opposed to different assemblies.  It will be easy if I want to do this with any segment but for the purposes of example I have literally just wacked everything in the one assembly.  You can see an example file structure I have on the right.  
I am planning on testing out a few implementations of the object databases out there so I can program to an interface of IBlogRepository One of the things which I was unsure about was how it performed under a multi threaded environment which it will undoubtedly be used 9 times out of 10, and for the reason that I am using the db context as a singleton, I assumed that the library was of course thread safe but I did not know as I have not read any where in the documentation, again this is probably me not reading things correctly.  In short though I threw together a simple test where I simply iterate to a limit each time kicking a common task off with a thread from a thread pool.  This task simply created and added an random Post and added it to the storage. The execution of the threads I put inside the Setup of the Test and then simply ensure the number of posts committed to the database is equal to the number of iterations I made; here is the code I used to do the multi thread jobs: [TestInitialize] public void Setup() { var sw = new System.Diagnostics.Stopwatch(); sw.Start(); var resetEvent = new ManualResetEvent(false); ThreadPool.SetMaxThreads(20, 20); for (var i = 0; i < MAX_ITERATIONS; i++) { ThreadPool.QueueUserWorkItem(delegate(object state) { var eventToReset = (ManualResetEvent)state; var post = new Post { Author = MockUser, Content = "Mock Content", Title = "Title" }; Repository.Put(post); var counter = Interlocked.Decrement(ref _threadCounter); if (counter == 0) eventToReset.Set(); }, resetEvent); } WaitHandle.WaitAll(new[] { resetEvent }); sw.Stop(); Console.WriteLine("{0:00}.{1:00} seconds", sw.Elapsed.Seconds, sw.Elapsed.Milliseconds); }   I was not doing this to test out the speed performance of db4o but while I was doing this I could not help but put in a StopWatch and see out of sheer interest how fast it would take to insert a number of Posts.  I tested it out in this case with 10000 inserts of a small, simple POCO and it resulted in an average of:  899.36 object inserts / second.  Again this is just  simple crude test which came out of my curiosity at how it performed under many threads when using the non server implementation of db4o. The spec summary of the computer I used is as follows: With regards to the actual Repository implementation itself, it really is quite straight forward and I have to say I am very surprised at how easy it was to integrate and get up and running.  One thing I have noticed in the exposure I have had so far is that the Query returns IList<T> as opposed to IQueryable<T> but again I have not looked into this in depth and this could be there already and if not they have provided everything one needs to make there own repository.  An example of a couple of methods from by db4o implementation of the BlogRepository is below: public class BlogRepository : IBlogRepository { private readonly IObjectContainer _db; public BlogRepository(IObjectContainer db) { _db = db; } public void Put(DomainObject obj) { _db.Store(obj); } public void Delete(DomainObject obj) { _db.Delete(obj); } public Post GetByKey(object key) { return _db.Query<Post>(post => post.Key == key).FirstOrDefault(); } … Anyways I hope to get a few more implementations going of the object databases and literally just get familiarized with them and the concept of no sql databases. Cheers for now, Andrew

    Read the article

  • Reader for Android Updates; Now with Feed Widgets and More

    - by ETC
    Android phone owners rocking the official Google Reader app will be pleased to see the new update includes much requested features such as polished feed widgets, unread counter widgets, and a handy “mark previous as read” button. Widgets have long been one of the most requested features for Google Reader for Android. This update rolls them out in two forms. News ticker widgets show you current headlines for your Google Reader folders (as seen in the screenshot here); folder widgets function just as unread counters and only take up a 1×1 space. In addition to the widgets another much requested feature made an appearance. While scrolling through your feed you can now mark all the previous entries as read. Hit up the link below to read more or visit the Android Market on your phone to update the application. Updates to the Google Reader App for Android [The Official Google Reader Blog]

    Read the article

  • Is RAC One Node Certified for E-Business Suite?

    - by Steven Chan
    Oracle Real Application Clusters (RAC) is a cluster database with a shared cache architecture that supports the transparent deployment of a single database across a pool of servers.  RAC is certified with both Oracle E-Business Suite Release 11i and 12.  We publish best-practices documentation for specific combinations of EBS + RAC versions.  For example, if you were planning on implementing RAC for EBS 12, you would use this documentation:Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 12 (Note 823587.1)Many of the largest E-Business Suite users in the world run RAC today, including Oracle; see this Oracle R12 case study for details.A number of customers have recently asked whether RAC One Node can be used with the E-Business Suite.  From the RAC website:Oracle RAC One Node is a new option available with Oracle Database 11g Release 2. Oracle RAC One Node is a single instance of an Oracle RAC-enabled database running on one node in a cluster.

    Read the article

< Previous Page | 160 161 162 163 164 165 166 167 168 169 170 171  | Next Page >