Search Results

Search found 7271 results on 291 pages for 'master notes'.


  • How to: Check which table is the biggest, in SQL Server

    - by AngelEyes
    The company I work with had its DB double in size lately, so I needed to find out which tables were the biggest. I found this on the web and decided it's worth remembering! Taken from http://www.sqlteam.com/article/finding-the-biggest-tables-in-a-database; the code is from http://www.sqlteam.com/downloads/BigTables.sql

    /**************************************************************************************
    *
    *  BigTables.sql
    *  Bill Graziano (SQLTeam.com)
    *  [email protected]
    *  v1.1
    *
    **************************************************************************************/
    DECLARE @id INT
    DECLARE @type CHARACTER(2)
    DECLARE @pages INT
    DECLARE @dbname SYSNAME
    DECLARE @dbsize DEC(15, 0)
    DECLARE @bytesperpage DEC(15, 0)
    DECLARE @pagesperMB DEC(15, 0)

    CREATE TABLE #spt_space
      (
         objid    INT NULL,
         ROWS     INT NULL,
         reserved DEC(15) NULL,
         data     DEC(15) NULL,
         indexp   DEC(15) NULL,
         unused   DEC(15) NULL
      )

    SET NOCOUNT ON

    -- Create a cursor to loop through the user tables
    DECLARE c_tables CURSOR FOR
      SELECT id FROM sysobjects WHERE xtype = 'U'

    OPEN c_tables
    FETCH NEXT FROM c_tables INTO @id

    WHILE @@FETCH_STATUS = 0
      BEGIN
          /* Code from sp_spaceused */
          INSERT INTO #spt_space (objid, reserved)
          SELECT objid = @id, SUM(reserved)
          FROM   sysindexes
          WHERE  indid IN ( 0, 1, 255 ) AND id = @id

          SELECT @pages = SUM(dpages)
          FROM   sysindexes
          WHERE  indid < 2 AND id = @id

          SELECT @pages = @pages + ISNULL(SUM(used), 0)
          FROM   sysindexes
          WHERE  indid = 255 AND id = @id

          UPDATE #spt_space SET data = @pages WHERE objid = @id

          /* index: sum(used) where indid in (0, 1, 255) - data */
          UPDATE #spt_space
          SET    indexp = (SELECT SUM(used) FROM sysindexes
                           WHERE  indid IN ( 0, 1, 255 ) AND id = @id) - data
          WHERE  objid = @id

          /* unused: sum(reserved) - sum(used) where indid in (0, 1, 255) */
          UPDATE #spt_space
          SET    unused = reserved - (SELECT SUM(used) FROM sysindexes
                                      WHERE  indid IN ( 0, 1, 255 ) AND id = @id)
          WHERE  objid = @id

          UPDATE #spt_space
          SET    ROWS = i.ROWS
          FROM   sysindexes i
          WHERE  i.indid < 2 AND i.id = @id AND objid = @id

          FETCH NEXT FROM c_tables INTO @id
      END

    SELECT TOP 25
           table_name    = (SELECT LEFT(name, 25) FROM sysobjects WHERE id = objid),
           ROWS          = CONVERT(CHAR(11), ROWS),
           reserved_kb   = LTRIM(STR(reserved * d.low / 1024., 15, 0) + ' ' + 'KB'),
           data_kb       = LTRIM(STR(data * d.low / 1024., 15, 0) + ' ' + 'KB'),
           index_size_kb = LTRIM(STR(indexp * d.low / 1024., 15, 0) + ' ' + 'KB'),
           unused_kb     = LTRIM(STR(unused * d.low / 1024., 15, 0) + ' ' + 'KB')
    FROM   #spt_space,
           MASTER.dbo.spt_values d
    WHERE  d.NUMBER = 1 AND d.TYPE = 'E'
    ORDER  BY reserved DESC

    DROP TABLE #spt_space
    CLOSE c_tables
    DEALLOCATE c_tables
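    On SQL Server 2005 and later, roughly the same ranking can be produced without a cursor by querying the newer catalog views. A minimal sketch under that assumption (the column aliases are mine, and the row count is taken only from the heap or clustered index partitions):

    SELECT TOP 25
           s.name + '.' + t.name AS table_name,
           SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count,
           SUM(ps.reserved_page_count) * 8 AS reserved_kb,
           SUM(ps.used_page_count) * 8 AS used_kb
    FROM   sys.dm_db_partition_stats ps
           JOIN sys.tables t ON t.object_id = ps.object_id
           JOIN sys.schemas s ON s.schema_id = t.schema_id
    GROUP  BY s.name, t.name
    ORDER  BY reserved_kb DESC;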

    Read the article

  • Best of “The Moth” 2013

    - by Daniel Moth
    As previously (2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012), the time has come again to look back over the year’s activities on this blog, and as predicted there were 3 themes: 1. It has been just 15 months since I changed role from what at Microsoft we call an “Individual Contributor” (IC) to a managerial role where ICs report to me. Part of being a manager entails sharing career tips with your team, and some of those I have put up on my blog over the last year (and hope to continue next year): Effectiveness and Efficiency, Lead, Follow, or Get out of the way, and Perfect is the enemy of “Good Enough”. 2. It has also been 15 months since I joined the Visual Studio Diagnostics team, and we have shipped many capabilities in Visual Studio 2013. I helped the members of my team blog about every single one and create videos of many, and then I created a table of contents pointing to all of their blog posts, so if you are interested in what I have been working on over the last year please follow the links from the master blog post here: Visual Studio 2013 Diagnostics Investments. We are busy working on future Visual Studio releases/updates and I will link to those when we are ready… 3. Finally, I used some of my free time (which is becoming ever so scarce) to do some device development, and as part of that I shared a few thoughts and code: Debug.Assert replacement for Phone and Store apps, asynchrony is viral, and MyMessageBox for Phone and Store apps. To see what 2014 will bring to this blog, please subscribe using the link on the left… Happy New Year! Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • ASP.NET Web Forms Extensibility: Providers

    - by Ricardo Peres
    Introduction
    This will be the first of a number of posts on ASP.NET extensibility. At this moment I don't know exactly how many there will be, and I only know a couple of subjects I want to talk about, so more will come over the next days. I have the feeling that the providers offered by ASP.NET are not widely known: although everyone uses sessions, for example, they may not be aware of the extensibility points that Microsoft included. This post won't go into the details of how to configure and extend each of the providers, but will hopefully give some pointers in that direction.

    Canonical
    These are the most widely known and used providers, dating back to ASP.NET 1; chances are you have used them already. They have good support for being invoked client side, either from a .NET application or from JavaScript, and lots of server-side controls use them, such as the Login control.

    Membership
    The Membership provider is responsible for managing registered users, including creating new ones, authenticating them, changing passwords, etc. ASP.NET comes with two implementations, one that uses a SQL Server database and another that uses Active Directory. The base class is MembershipProvider, and new providers are registered in the membership section of the Web.config file, together with parameters for specifying minimum password lengths, complexity, maximum age, etc. One reason for creating a custom provider would be, for example, storing membership information in a different database engine.

    <membership defaultProvider="MyProvider">
      <providers>
        <add name="MyProvider" type="MyClass, MyAssembly"/>
      </providers>
    </membership>

    Role
    The Role provider assigns roles to authenticated users. The base class is RoleProvider and there are three out-of-the-box implementations: XML-based, SQL Server and Windows-based. It is also registered in Web.config, through the roleManager section, where you can also say whether your roles should be cached in a cookie. If you want your roles to come from a different place, implement a custom provider.

    <roleManager defaultProvider="MyProvider">
      <providers>
        <add name="MyProvider" type="MyClass, MyAssembly" />
      </providers>
    </roleManager>

    Profile
    The Profile provider allows defining a set of properties that are tied to and made available for authenticated users, or even anonymous ones, which must be tracked by using anonymous authentication. The base class is ProfileProvider and the only included implementation stores these settings in a SQL Server database. It is configured through the profile section, where you also specify the properties to make available; a custom provider would allow storing these properties in different locations.

    <profile defaultProvider="MyProvider">
      <providers>
        <add name="MyProvider" type="MyClass, MyAssembly"/>
      </providers>
    </profile>

    Basic
    OK, I didn't know what to call these, so Basic is probably as good a name as any. They are not supported client-side (it wouldn't even make sense).

    Session
    The Session provider allows storing data tied to the current "session", which is normally created when a user first accesses the site, even before being authenticated, and remains for the whole visit. The base class is SessionStateStoreProviderBase, and the built-in implementation can store data in one of three locations: in process memory (the default, not suitable for web farms or increased reliability); a SQL Server database (best for reliability and clustering); or the ASP.NET State Service, a Windows Service installed with the .NET Framework (OK for clustering). The configuration is made through the sessionState section. By adding a custom Session provider, you can store the data in different locations; think, for example, of a distributed cache.

    <sessionState customProvider="MyProvider">
      <providers>
        <add name="MyProvider" type="MyClass, MyAssembly" />
      </providers>
    </sessionState>

    Resource
    A not so well known provider, it allows you to change the origin of localized resource elements. By default, these come from RESX files and are used whenever you use the Resources expression builder or the GetGlobalResourceObject and GetLocalResourceObject methods, but if you implement a custom provider, you can have these elements come from somewhere else, such as a database. The base class is ResourceProviderFactory and there's only one internal implementation, which uses the RESX files. Configuration is through the globalization section.

    <globalization resourceProviderFactoryType="MyClass, MyAssembly" />

    Health Monitoring
    Health Monitoring is also probably not so well known, and actually not a good name for it. First, in order to understand what it does, you have to know that ASP.NET fires "events" at specific times and when specific things happen, such as when a user logs in or an exception is raised. These are not user interface events; you can create your own and fire them, and nothing will happen, but the Health Monitoring provider will detect them. You can configure it to do things when certain conditions are met, such as a number of events being fired in a certain amount of time. You define these rules and route them to a specific provider, which must inherit from WebEventProvider. Out-of-the-box implementations include sending mails, logging to a SQL Server database, writing to the Windows Event Log, Windows Management Instrumentation, the IIS 7 trace infrastructure or the debugger Trace. Its configuration is done through the healthMonitoring section, and a reason for implementing a custom provider would be, for example, locking down a web application if a significant number of failed login attempts occur in a small period of time.

    <healthMonitoring>
      <providers>
        <add name="MyProvider" type="MyClass, MyAssembly"/>
      </providers>
    </healthMonitoring>

    Sitemap
    The Sitemap provider allows defining the site's navigation structure, and the permissions required for each node, in a tree-like fashion. Usually this is statically defined, and the included provider allows it, by supplying this structure in a Web.sitemap XML file. The base class is SiteMapProvider and you can extend it in order to supply your own source for the site's structure, which may even be dynamic. Its configuration is done through the siteMap section.

    <siteMap defaultProvider="MyProvider">
      <providers>
        <add name="MyProvider" type="MyClass, MyAssembly" />
      </providers>
    </siteMap>

    Web Part Personalization
    Web Parts are better known to SharePoint users, but since ASP.NET 2.0 they have been included in the core Framework. Web Parts are server-side controls that offer visiting clients certain configuration possibilities on the page where they are located. The infrastructure handles this configuration per user, or globally for all users, and this provider is responsible for just that. The base class is PersonalizationProvider and the only included implementation stores settings in SQL Server. Add new providers through the personalization section.

    <webParts>
      <personalization defaultProvider="MyProvider">
        <providers>
          <add name="MyProvider" type="MyClass, MyAssembly"/>
        </providers>
      </personalization>
    </webParts>

    Build
    The Build provider is responsible for compiling whatever files are present in your web folder. There's a base class, BuildProvider, and, as can be expected, internal implementations for building pages (ASPX), master pages (Master), user web controls (ASCX), handlers (ASHX), themes (Skin), XML schemas (XSD), web services (ASMX, SVC), resources (RESX), browser capabilities files (Browser) and so on. You would write a build provider if you wanted to generate code from any kind of non-code file so that you have strong typing at development time. Configuration goes in the buildProviders section and it is per extension.

    <buildProviders>
      <add extension=".ext" type="MyClass, MyAssembly" />
    </buildProviders>

    New in ASP.NET 4
    Not exactly new, since they have existed since 2010, but in ASP.NET terms, still new.

    Output Cache
    The output cache for ASPX pages and ASCX user controls is now extensible, through the Output Cache provider, which means you can implement a custom mechanism for storing and retrieving cached data, for example, in a distributed fashion. The base class is OutputCacheProvider and the only included implementation is private to the framework. Configuration goes in the outputCache section, and on each page and web user control you can choose the provider you want to use.

    <caching>
      <outputCache defaultProvider="MyProvider">
        <providers>
          <add name="MyProvider" type="MyClass, MyAssembly"/>
        </providers>
      </outputCache>
    </caching>

    Request Validation
    A big change introduced in ASP.NET 4 (and refined in 4.5, by the way) is the introduction of extensible request validation, by means of a Request Validation provider. This means we are no longer limited to either enabling or disabling request validation for all pages or for a specific page; we now have fine control over each of the elements of the request, including cookies, headers, query string and form values. The base provider class is RequestValidator and the configuration goes in the httpRuntime section.

    <httpRuntime requestValidationType="MyClass, MyAssembly" />

    Browser Capabilities
    The Browser Capabilities provider is new in ASP.NET 4, although the concept has existed since ASP.NET 2. The idea is to map a browser brand and version to its supported capabilities, such as JavaScript version, Flash support, ActiveX support, and so on. Previously, this was all hardcoded in .browser files located in %WINDIR%\Microsoft.NET\Framework(64)\vXXXXX\Config\Browsers, but now you can have a class inherit from HttpCapabilitiesProvider and implement your own mechanism. Register it in the browserCaps section.

    <browserCaps provider="MyClass, MyAssembly" />

    Encoder
    The Encoder provider is responsible for encoding every string that is sent to the browser on a page or header. This includes, for example, converting special characters to their standard codes, and is implemented by the base class HttpEncoder. Another implementation takes care of Anti-Cross-Site Scripting (XSS) attacks. Build your own by inheriting from one of these classes if you want to add some additional processing to these strings. The configuration goes in the httpRuntime section.

    <httpRuntime encoderType="MyClass, MyAssembly" />

    Conclusion
    That's about it for ASP.NET providers. It was by no means a thorough description, but I hope I managed to raise your interest in this subject. There are lots of pointers on the Internet, so I only included direct references to the Framework classes and configuration sections. Stay tuned for more extensibility!
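    To make the provider model described above a little more concrete, here is a minimal sketch of a custom Output Cache provider. The class name and the in-memory dictionary are purely illustrative choices of mine; a real provider would typically use a distributed cache and would need proper expiry housekeeping. It would be registered exactly as in the outputCache snippet shown earlier.

    using System;
    using System.Collections.Concurrent;
    using System.Web.Caching;

    public class MyProvider : OutputCacheProvider
    {
        // Illustrative in-memory store: key -> (cached entry, UTC expiry)
        private static readonly ConcurrentDictionary<string, Tuple<object, DateTime>> store =
            new ConcurrentDictionary<string, Tuple<object, DateTime>>();

        public override object Get(string key)
        {
            Tuple<object, DateTime> entry;
            if (store.TryGetValue(key, out entry) && entry.Item2 > DateTime.UtcNow)
                return entry.Item1;
            return null;
        }

        public override object Add(string key, object entry, DateTime utcExpiry)
        {
            // Add must return any entry already cached under this key
            var existing = Get(key);
            if (existing != null)
                return existing;
            Set(key, entry, utcExpiry);
            return entry;
        }

        public override void Set(string key, object entry, DateTime utcExpiry)
        {
            store[key] = Tuple.Create(entry, utcExpiry);
        }

        public override void Remove(string key)
        {
            Tuple<object, DateTime> removed;
            store.TryRemove(key, out removed);
        }
    }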

    Read the article

  • Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E

    - by Sebastian Bugiu
    I had Ubuntu 11.10 and in the last few weeks I experienced an obscure problem: after the computer had been running for a few days I could no longer connect to google.com or anything related to Google. All sites worked in all browsers (Firefox, Chrome, Opera) except Google. It stayed in the connecting phase for a few minutes and either timed out or finally connected with this huge delay. Even if I visited other sites, such as this one, if the page had anything to do with Google, such as AdSense or gstatic or anything else with a 'g' in it, that site took a long time to load, stuck on "connecting to gstatic.com". Anything Google-related took minutes to work, but everything else worked instantly! I tried rebooting or using another machine (with Windows on it) and that worked, so it's not network related. But after a few days the problem came back... So I upgraded to Precise Pangolin hoping this behavior would go away. It didn't! After a few days I get the same behavior as in 11.10. What am I supposed to do? Reboot every other day? I didn't have this problem with either 10.10 or 11.04. I found the Realtek RTL8168/8111E issue with the r8169 driver, but this is not exactly the same card, so trying r8168 probably won't help.

    Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 02)
        Subsystem: Toshiba America Info Systems Device ff1c
        Flags: bus master, fast devsel, latency 0, IRQ 44
        I/O ports at 4000 [size=256]
        Memory at d0010000 (64-bit, prefetchable) [size=4K]
        Memory at d0000000 (64-bit, prefetchable) [size=64K]
        Capabilities: [40] Power Management version 7
        Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Capabilities: [70] Express Endpoint, MSI 01
        Capabilities: [ac] MSI-X: Enable- Count=2 Masked-
        Capabilities: [cc] Vital Product Data
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Virtual Channel
        Capabilities: [160] Device Serial Number 09-00-00-00-ff-ff-00-00
        Kernel driver in use: r8169
        Kernel modules: r8169

    Read the article

  • Photoshop Retro Vintage Design Tutorials

    - by Aditi
    Gone are the days when designers only wanted to create glossy, web 2.0, gradient-rich website designs. Nowadays designers are coming up with rugged, retro and vintage themes for their website designs. Colorful or subtle, with that worn-out look, the website seems like a masterpiece. It is not hard to pick up the Photoshop techniques needed to master the art of making themes that are retro and vibrant. We have compiled a list of tutorials you would like to learn from; the rest is in your hands and your creativity.

    Photochrom Vintage Postcard Effect: Turn your high-definition photos into vintage postcards and use them in your website concepts. Learn More

    Add a Retro Look to Your Images: Give that 1970s retro look to your images and web concepts. It's a very easy process using patterns, brushes, colors or gradients, layer modes and variable opacity. Learn More

    Brushed Metal Effect, Just Like World War Airplane Textures: A one-of-a-kind Photoshop tutorial that teaches how to use noise and blur filters to create a brushed metal effect, unlike other gradient-based effects. It also covers a few layer styles to create an airplane graphic. Learn More

    Transform a New Image into an Illustration, Retro Poster Style: With the help of this tutorial you can create brilliant poster-style or illustrative images and concepts for your new website. This tutorial is a superb example of image enhancement and creative use of blending options in Photoshop. Learn More

    Retro Neon Style Text Tutorial: Just like in the old days, the rainbow neon curvy text seen on many posters can now be made for use on your website. This tutorial gives you an easy step-by-step procedure. Learn More

    Retro Dotted Photo Tutorial: Find out how to make a dotted poster of your image, pure retro feel. Learn More

    Read the article

  • SQL Server – Learning SQL Server Performance: Indexing Basics – Interview of Vinod Kumar by Pinal Dave

    - by pinaldave
    Recently I wrote a blog post about Learning SQL Server Performance: Indexing Basics, and I received lots of requests asking if we could share some insight into the course. Every single time performance is discussed, indexes are mentioned along with it. In recent times, data and application complexity has been growing continuously. Organizations' demand for faster query response, performance, and scalability is increasing, and developers and DBAs now need to write efficient code to achieve this. When we developed the course, we made sure that it remained practical and demo-heavy instead of just theory on the subject. Vinod Kumar and I often thought about this and realized that a practical understanding of indexes is very important. One cannot master every single aspect of indexes; however, there is a minimum level of expertise one should gain if performance is a concern. Here is a 200-second interview of Vinod Kumar I took right after completing the course. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology, Video

    Read the article

  • virtualbox, MAAS: help needed

    - by Roberto Attias
    Ok, I made some progress with respect to the original question (still below). I found that /etc/maas/dhcpd.conf contained option domain-name-servers 10.0.3.15, and changed it to 192.168.0.11. After restarting the daemon, I now see "node" getting the right DNS; unfortunately this doesn't fix the main problem, which I believe is the reference to 169.254.169.254. It does introduce a new question: while the remaining information from /etc/maas/dhcpd.conf is present in the MAAS GUI, there is no field to enter the DNS address. Why? Anyway, my original problem still stands... Any idea? The original question follows.

    In VirtualBox, I have:

    master VM: ubuntu 12.04.3 server
      eth0: Internal Network, IP=192.168.0.11
      eth1: NAT, IP=10.0.3.15
      eth2: Host-only, IP=192.168.56.102
      running the MAAS region and cluster controller, with DHCP and DNS enabled

    node VM:
      eth0: Internal Network

    The node VM boots via PXE. DHCP succeeds and the boot process starts, but during boot I see some issues. One of them is "disk drive not ready yet or not present" for / and /tmp. I've googled this issue, and some people say it happens when the physical disk is an SSD, which is my case. Anyway, the system seems to recover from this eventually. Immediately after that, it starts printing a lot of messages of the form:

    2013-10-01 16:52:37,142 - url_helper.py[WARNING]: Calling 'http://169.254.168.254/2009-04-04/meta-data/instance-id failed [x/y]: url error [[Errno 113] No route to host]

    That IP address is clearly bogus; I'm not sure where it came from. Before that point, I had seen the following network configuration:

    address: 192.168.0.100
    broadcast: 192.168.0.255
    netmask: 255.255.255.0
    gateway: 192.168.0.1
    dns0: 10.0.3.15
    dns1: 0.0.0.0

    Not sure if this is related, but the DNS doesn't seem right, as the node doesn't have an interface that can reach 10.0.3.15. If that's the problem, what should I change to have the DNS point to 192.168.0.11? Thanks, Roberto

    Read the article

  • An end to the static &hellip;

    - by Dave Oliver
    Last October I learnt my company wanted to put together a new blog/social networking policy. I decided that out of respect for my employer I wouldn't blog until this was sorted out. This was perhaps an easy decision to make, as I was separating from my ex-wife at the time and frankly needed the time to concentrate on other things. So now the company has a brand new policy and I'm back in the dating game, I thought I would blow off the cobwebs and get back to what I enjoy doing. First and foremost, SQL Server 2008 R2 is almost here, and to mark that fact I will be in London on Thursday at the Microsoft UK Tech Days event. The subjects I most want to see are… PowerPivot – this is such an exciting technology! I've been a fan of QlikView for years, so it will be good to see how it compares. SQL Azure – cloud computing is big right now, so it will be interesting to see what the RTM product can do. I have a few ideas for its use and it will be interesting to see if SQL Azure is the right product… more on this in the next few weeks. Master Data Services – this is one of those technologies that Microsoft hasn't been making much noise about… and frankly should have, because it is a game changer. Hmmm, cue a future "What is… ?" post. StreamInsight – an exciting events technology; again, another "What is… ?" post is around the corner on that. So, you thought that SQL Server 2008 R2 was just a release to make sure the years between SQL Server 2008 and SQL Server 2010 weren't so long? I am, however, disappointed that clustering across subnets didn't make it, and I'm not sure if Control Points made it, but all will be revealed later this week. Till then I will have to wait! Technorati Tags: Microsoft, Techdays, SQL Server 2008 R2

    Read the article

  • Razor – Hiding a Section in a Layout

    - by João Angelo
    Layouts in Razor allow you to define placeholders, named sections, where content pages may insert custom content, much like the ContentPlaceHolder available in ASPX master pages. When you define a section in a Razor layout it's possible to specify whether the section must be defined in every content page that uses the layout, or whether its definition is optional, allowing a page not to provide any content for that section. For the latter case, it's also possible, using the IsSectionDefined method, to render default content when a page does not define the section. However, if you ever need to hide a given section from all pages based on some runtime condition, you might be tempted to conditionally define it in the layout, much like in the following code snippet:

    if(condition)
    {
        @RenderSection("ConditionalSection", false)
    }

    With this code you'll hit an error as soon as any content page provides content for the section, which makes sense: if a page inherits a layout, then it should only define sections that are also defined in it. To work around this scenario you have a couple of options. One is to make the given section optional and move the condition that enables or disables it into every content page. This leads to code duplication, and future pages may forget to only define the section based on that same condition. The other option is to conditionally define the section in the layout page using the following hack:

    @{
        if(condition)
        {
            @RenderSection("ConditionalSection", false)
        }
        else
        {
            RenderSection("ConditionalSection", false).WriteTo(TextWriter.Null);
        }
    }

    Hack inspired by a recent stackoverflow question.
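    For the optional-section case mentioned above, a minimal sketch of using IsSectionDefined in a layout to fall back to default content (the section name is just an example):

    @if (IsSectionDefined("ConditionalSection"))
    {
        @RenderSection("ConditionalSection")
    }
    else
    {
        <p>Default content rendered when the page does not define the section.</p>
    }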

    Read the article

  • Announcing: Great Improvements to Windows Azure Web Sites

    - by Leniel Macaferi
    I am excited to announce some great improvements to the Windows Azure Web Sites we introduced earlier this summer. Today's improvements include: a new low-cost, elastically scalable shared hosting option; custom domain support for websites hosted in shared or reserved mode using CNAME records and A-records (the latter enabling naked domains); continuous deployment support using both CodePlex and GitHub; and FastCGI extensibility. All of these improvements are now live in production and available to use immediately.

    New Scalable "Shared" Tier
    Windows Azure lets you deploy and host up to 10 websites in a free, shared multi-tenant environment. You can start developing and testing websites at no cost using this free shared mode. Shared mode supports running sites that serve up to 165 MB/day of content (5 GB/month). All of the capabilities we introduced in June with this free tier remain unchanged with today's update. Starting with today's release, you can now elastically scale your website beyond this capacity using a new low-cost "shared" option (which we are introducing today), as well as the "reserved instance" option we have supported since June. Scaling up to either of these modes is easy: simply click the "scale" tab of your website within the Windows Azure Portal, choose the hosting mode option you want to use with it, and click the "Save" button. Changes take only seconds to apply, require no code changes, and do not require the application to be redeployed. Below are more details about the new "shared" option, as well as the existing "reserved" option.

    Shared Mode
    With today's release we are introducing a new low-cost "shared" hosting mode for Windows Azure Web Sites. A website running in shared mode is deployed in a hosting environment shared with several other applications. Unlike the free mode option, a website in shared mode has no quota/cap on the amount of bandwidth it can serve. The first 5 GB/month of bandwidth you serve with a shared website is free, and you then pay the standard "pay as you go" rate for Windows Azure outbound bandwidth once outbound traffic exceeds 5 GB. A website running in shared mode now also supports mapping multiple custom DNS domain names, using both CNAMEs and A-records. The new A-record support we are introducing with today's release gives you the ability to support "naked domains" (that is, without the www prefix) with your websites (for example, http://microsoft.com in addition to http://www.microsoft.com). In the future we will also enable SNI-based SSL as a native feature of websites running in shared mode (this capability is not supported with today's release, but will arrive later this year for both the shared and reserved hosting options). You pay for a website in shared mode using the standard "pay as you go" model we support with other Windows Azure resources (that is, no upfront costs, and you only pay for the hours the resource is active). A website running in shared mode costs only 1.3 cents/hour during this preview period (which averages out to $9.36/month, or about R$ 19.00/month at an exchange rate of R$ 2.03 per dollar on 17 September 2012).

    Reserved Mode
    In addition to running sites in shared mode, we also support running them within a reserved instance. When running in reserved instance mode, your sites are guaranteed to run isolated within your own Small, Medium or Large VM (virtual machine), meaning no other Windows Azure customer will have applications running inside your VM; only your applications will. You can run any number of websites within a virtual machine, and there are no quotas on CPU or memory limits. You can run your sites using a single reserved-instance VM, or scale out to multiple instances (for example, 2 medium VMs, etc.). Scaling up or down is easy: simply select the "reserved" instance VM within the "scale" tab in the Windows Azure Portal, choose the VM size you want and the number of instances you want to run, and click Save. Changes take effect in seconds. Unlike shared mode, there is no per-site cost when running in reserved mode. Instead, you pay only for the reserved VM instances you use, and you can run any number of websites inside them at no additional cost (for example, you can run a single site within a reserved VM instance or 100 websites inside it for the same cost). Reserved instance VMs start at 8 cents/hour (about R$ 0.16/hour) for a small reserved VM.

    Elastic Scale Up/Down
    Windows Azure Web Sites let you scale your capacity up or down within seconds. This allows you to deploy a site using the shared mode option to begin with, and then dynamically scale up to the reserved mode option only when you need it, without having to change any code or redeploy your application. If your site's traffic drops, you can reduce the number of reserved instances you are using, or move back to the shared mode tier, all within seconds and without having to change code, redeploy the application, or adjust DNS mappings. You can also use the "Dashboard" within the Windows Azure Portal to easily monitor your site's load in real time (it shows not only requests/second and bandwidth consumed, but also statistics such as CPU and memory utilization). Because of Windows Azure's "pay as you go" pricing model, you only pay for the compute capacity you use in a given hour. So, if your site runs most of the month in shared mode (at 1.3 cents/hour, roughly R$ 0.0264/hour), but there is a weekend when it becomes very popular and you decide to scale up by putting it in reserved mode so it runs in its own dedicated VM (at 8 cents/hour, roughly R$ 0.16/hour), you only have to pay the additional cents/hour for the hours the site runs in reserved mode. You don't pay any upfront cost to enable this, and once you return your site to shared mode you go back to paying 1.3 cents/hour. This makes the option extremely flexible and low cost.

    Improved Custom Domain Support
    Websites running in "shared" or "reserved" mode support having custom host names associated with them (for example www.mysitename.com). You can associate multiple custom domains with each Windows Azure Web Site. With today's release we are introducing support for A-records (a much-requested feature). With A-record support, you can now associate 'naked' domains with your Windows Azure Web Site; that is, instead of having to use www.mysitename.com you can simply use mysitename.com (without the www prefix). Since you can map multiple domains to a single site, you can optionally enable both domains (the www and the 'naked' version) for a site (and then use a URL rewrite/redirect rule to avoid SEO problems). We have also improved the user interface for managing custom domains within the Windows Azure Portal as part of today's release. Clicking the "Manage Domains" button in the tray at the bottom of the portal now brings up a dedicated UI that makes it easy to manage/configure domains. As part of this update we have also made it significantly smoother/easier to validate ownership of custom domains, and made it easier to switch existing sites/domains over to Windows Azure Web Sites without the website going offline.

    Continuous Deployment Support with Git and CodePlex or GitHub
    One of the most popular features we released earlier this summer was support for publishing sites directly to Windows Azure using source control systems such as TFS and Git. This feature provides a very powerful way to manage application deployments using source control. It is really easy to enable from a website's dashboard page. The TFS option we released earlier this summer offers a very rich continuous deployment solution that lets you automate builds and run unit tests every time you update your website's repository, and then, if the tests pass, automatically publish/deploy the application to Windows Azure. With today's release, we are expanding our Git support to also enable continuous deployment scenarios by integrating it with projects hosted on CodePlex and GitHub. This support is enabled for all websites (including those using the "free" mode). Starting today, when you choose the "Set up Git publishing" link on a website's dashboard page, you will see two additional options once Git-based publishing is enabled for the website. You can click either the "Deploy from my CodePlex project" or the "Deploy from my GitHub project" link to follow a simple walkthrough that sets up a connection between your website and a code repository you host on CodePlex or GitHub. Once this connection is established, CodePlex or GitHub will automatically notify Windows Azure every time a check-in occurs. This causes Windows Azure to download the code and build/deploy the new version of your application automatically. The following two videos show how easy it is to enable this workflow by deploying an initial app and then making a change to it: Enabling Continuous Deployment with Windows Azure Websites and CodePlex (2 minutes), and Enabling Continuous Deployment with Windows Azure Websites and GitHub (2 minutes). This approach enables a really clean continuous deployment workflow, and makes it much easier to support a team development environment using Git. Note: today's release supports establishing connections with public GitHub/CodePlex repositories. Support for private repositories will be enabled in a few weeks.

    Support for Multiple Branches
    Previously, we only supported deploying code located in the 'master' branch of the Git repository. Often, though, developers want to deploy from alternative branches (for example, a test branch or a branch with a future version of the application). This is now a supported scenario, both with local Git-based projects and with projects connected to CodePlex or GitHub, and it enables a variety of useful scenarios. For example, you can now have two websites, one for "production" and one for "testing", both connected to the same repository on CodePlex or GitHub. You can configure one of the websites to always pull whatever is in the master branch, and the other to always pull whatever is in the test branch. This provides a very clean way to enable final testing of your site before it goes into production. A 1-minute video demonstrates how to configure which branch to use with a website.

    Summary
    The features above are now live in production and available to use immediately. If you don't already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about building applications for the cloud. We will have even more new features and enhancements coming over the next few weeks, including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5 next month). Keep an eye on my blog for details as these new features become available. Hope this helps, - Scott. P.S. In addition to the blog, I am also using Twitter for quick updates and to share links.
    Follow me at: twitter.com/ScottGu. Translated from the original post by Leniel Macaferi.

    Read the article

  • Boot failure on installation from a burned iso image

    - by jdamae
    I'm encountering boot failure while trying to install a Linux distro from a CD. I'm using an older PC; here are its specs: HP Pavilion a255c 2.66GHz CPU, 512MB RAM with a BIOS revision of 6/30/2003 I reclaimed an older drive (Seagate ST340810A) that seems to be working, as it's recognized in the BIOS (auto-detected). So this is not the original HDD, but a replacement. I downloaded a mini.iso of Ubuntu 10.10 that I want to install, and burned the image to a CD for install. My boot sequence is: First Boot Device [CDROM]. I disabled devices 2-4 so I can just force it to read first from the CD-ROM. This old PC also has a separate CD writer which is a Sec.Slave. The Sec.Master is the Toshiba DVD/ROM DSM-171 drive where I placed the burned Linux CD. With these settings I cannot get it to boot. I get the message "DISK BOOT FAILURE, INSERT SYSTEM DISK AND PRESS ENTER" when I start the pc with the cd (burned iso image). Would I be able to boot off a usb flash drive? Would that work?

    Read the article

  • Davicom Semiconductor, Inc. 21x4x DEC-Tulip not detected by Wireshark but IP operational

    - by deepsix86
    Recently flipped to Ubuntu 11.10 on a Dell 4300 (Intel). Getting IP address and no issues (ping/surf) but Wireshark unable to detect eth0 interface. I see references in forums to blacklist tulip but looks like I am running dmfe. Not sure if the blacklist is required and where to go from here. Maybe Driver update? Got a little lost looking in that area. Some h/w details below (IP/MAC/HOSTNAME removed) Linux xxxxxx 3.0.0-17-generic #30-Ubuntu SMP Thu Mar 8 17:34:21 UTC 2012 i686 i686 i386 GNU/Linux network-admin (HOSTS TAB) does not list eth0, only loopback and bunch of IPv6 interfaces ifconfig eth0 Link encap:Ethernet HWaddr xxxxxxxx inet addr:192.168.x.xx Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: xxxxxxxxxxx 64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:36662 errors:0 dropped:1 overruns:0 frame:0 TX packets:24975 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:42115779 (42.1 MB) TX bytes:3056435 (3.0 MB) Interrupt:18 Base address:0xe800 lspci 02:09.0 Ethernet controller: Davicom Semiconductor, Inc. 21x4x DEC-Tulip compatible 10/100 Ethernet (rev 31) Subsystem: Device 4554:434e Flags: bus master, medium devsel, latency 64, IRQ 18 I/O ports at e800 [size=256] Memory at fe1ffc00 (32-bit, non-prefetchable) [size=256] Expansion ROM at fe200000 [disabled] [size=256K] Capabilities: [50] Power Management version 2 Kernel driver in use: dmfe Kernel modules: dmfe hwinfo --netcard 20: PCI 209.0: 0200 Ethernet controller [Created at pci.318] Unique ID: rBUF.0NgK5ZS9c0D Parent ID: 6NW+.siohrLUzzI4 SysFS ID: /devices/pci0000:00/0000:00:1e.0/0000:02:09.0 SysFS BusID: 0000:02:09.0 Hardware Class: network Model: "Davicom 21x4x DEC-Tulip compatible 10/100 Ethernet" Vendor: pci 0x1282 "Davicom Semiconductor, Inc." Device: pci 0x9102 "21x4x DEC-Tulip compatible 10/100 Ethernet" SubVendor: pci 0x4554 SubDevice: pci 0x434e Revision: 0x31 Driver: "dmfe" Driver Modules: "dmfe" Device File: eth0 I/O Ports: 0xe800-0xe8ff (rw) Memory Range: 0xfe1ffc00-0xfe1ffcff (rw,non-prefetchable) Memory Range: 0xfe200000-0xfe23ffff (ro,non-prefetchable,disabled) IRQ: 18 (61379 events) HW Address: 00:08:a1:01:35:70 Link detected: yes Module Alias: "pci:v00001282d00009102sv00004554sd0000434Ebc02sc00i00" Driver Info #0: Driver Status: dmfe is active Driver Activation Cmd: "modprobe dmfe" Config Status: cfg=new, avail=yes, need=no, active=unknown Attached to: #11 (PCI bridge)

    Read the article

  • Ideas for my MSc project and Google Summer of Code 2011

    - by Chris Wilson
    I'm currently putting together ideas for my master's project, which I'll be working on over the summer, and I would like to use this time to help Ubuntu in some way. I have the freedom to come up with pretty much any project in the field of software development/engineering, provided it: is a substantial piece of software (for reference, I will be working on it for five full months), and solves a problem for more people than just myself. I was hoping to use this project as an opportunity to get some experience with the underbelly of Linux, so that I can mention on my CV that I have 'experience in developing for *NIX in C++', which I'm noticing more and more companies are looking for these days, probably because stuff's moving to cloud servers and that's where Linux rules the roost. My problem is that, since I don't have the experience to begin with, I'm not sure what to do for such a project, and I was wondering if anyone could help me with this. I've noticed from Daniel Holbach's blog that Ubuntu participated in the Google Summer of Code 2010, and that project ideas for that can be found here. However, I have not been able to find anything related to Ubuntu and GSoC 2011, but I have noticed from the GSoC timeline that the list of mentoring organisations will not be published until March 18th. I have two questions here. Has Ubuntu applied to be a part of Summer of Code 2011, and what is the status of the 2010 project list linked to earlier? Were they all implemented, or are there still some that can be picked up now, should I not participate in GSoC? I'd like to do something for Ubuntu, but I'd rather not spend my time reinventing the wheel.

    Read the article

  • Practical mysql schema advice for eCommerce store - Products & Attributes

    - by Gravy
    I am currently planning my first eCommerce application (MySQL & the Laravel framework). I have various products, which all have different attributes. Describing products very simply: some will have a manufacturer, some will not; some will have a diameter, others will have a width, height and depth, and others will have a volume. Option 1: Create a master products table, and separate tables for specific product types (polymorphic relations). That way, I will not have any unnecessary null fields in the products table. Option 2: Create a products table with all possible fields, despite the fact that there will be a lot of null values. Option 3: Normalise so that each attribute type has its own table. Option 4: Create an attributes table, as well as an attribute_values table with the value stored as varchar regardless of the actual data type. The products table would have a many:many relationship with the attributes table. Option 5: Put attributes common to all or most products in the products table, and attach attributes specific to a particular category of product to the categories table. My thoughts are that I would like to allow easy product filtering and sorting by these attributes. I would also want the frontend to be fast, with less concern over the performance of inserting and updating product records. I'm a bit overwhelmed by the vast implementation options and cannot find a suitable answer in terms of the best approach. Could somebody point me in the right direction? In an ideal world, I would like to offer functionality like http://www.glassesdirect.co.uk/products/ in my eCommerce store. As can be seen in the sidebar there, you can select an attribute of the glasses to filter them, e.g. male/female or plastic/metal/titanium, etc. Alternatively, should I just dump the MySQL relational database idea and learn MongoDB?
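    As an illustration of Option 4 (the generic attribute/value approach), a minimal MySQL sketch might look like this; all table and column names are only examples, not a recommendation of that option over the others:

    CREATE TABLE products (
        id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        name  VARCHAR(255) NOT NULL,
        price DECIMAL(10,2) NOT NULL
    );

    CREATE TABLE attributes (
        id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(100) NOT NULL   -- e.g. 'frame material', 'gender'
    );

    CREATE TABLE attribute_values (
        product_id   INT UNSIGNED NOT NULL,
        attribute_id INT UNSIGNED NOT NULL,
        value        VARCHAR(255) NOT NULL,  -- stored as text regardless of the real type
        PRIMARY KEY (product_id, attribute_id),
        FOREIGN KEY (product_id) REFERENCES products(id),
        FOREIGN KEY (attribute_id) REFERENCES attributes(id)
    );

    -- Filtering example: all products whose 'frame material' is 'titanium'
    SELECT p.*
    FROM products p
    JOIN attribute_values av ON av.product_id = p.id
    JOIN attributes a ON a.id = av.attribute_id
    WHERE a.name = 'frame material' AND av.value = 'titanium';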

    Read the article

  • Upgrade to 2008 R2

    - by DavidWimbush
    I don't like it, Carruthers. It's just too quiet. Well, I've done the pre-production server, the main live server and the Reporting/BI server with remarkably little trouble. Pre-production and live were rebuilds. I failed live over to our log shipping standby for the duration, which has a gotcha I blogged about before. When I failed back to the primary live server again, it was very quick to bring the databases online. I understand the databases don't actually get upgraded until you recover them, but there was no noticeable delay. It's gone from 2005 Workgroup - limited to 4GB of memory - to 2008 R2 Standard, so it can now use nearly all of the 30GB in the server. It's so much faster. The Reporting/BI server I upgraded in situ. This took a while but, again, went smoothly. Just watch out, because the master database was left at compatibility level 90. Also, the upgrade decided to use the reporting service's credentials for database access when running reports. It didn't preserve the existing credentials and I had to go into the Reporting Services Configuration Manager to put them back in. Make sure you know what credentials your server is using before you upgrade. All things considered, a fairly painless experience. Now I just have to upgrade and reset our log shipping standby server again!
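    For anyone who hits the same compatibility-level gotcha after an in-place upgrade, a quick check and fix look roughly like this (the database name is a placeholder; on SQL Server 2008/R2, level 100 is the current one):

    -- Check the compatibility level of every database
    SELECT name, compatibility_level FROM sys.databases;

    -- Raise a database that was left at level 90 (SQL Server 2005)
    ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 100;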

    Read the article

  • 4 Top Tips from the Exceptional DBA Award judges

    - by Rebecca Amos
    There's still time to celebrate your achievements as a DBA – or those of a DBA you know – by submitting a nomination for the Exceptional DBA Awards 2011. To help you get started, here are some top tips from the judges on what they're looking for from this year's winner [hint: it's very likely you're already exceptional!]: "An Exceptional DBA must be able to communicate effectively and clearly with both technical people and the client." Steve Jones. "Exceptional DBAs are like police officers: we're here to serve and protect. Both serving and protecting are vital parts of the job, and we can't just focus on one." Brent Ozar "DBA work can be routine. Exceptional DBAs are enthusiastic about their work and are rarely bored, as there is always something new to learn and master." Brad McGehee. "Remember that cost is an important factor for your company. The ability to save your company money with a different technical solution will make you an Exceptional DBA, and can make you exceptionally well liked." Rodney Landrum. So whether you've brought a team together for a project, taken steps to protect the security of your servers, or learnt a new topic to understand an element of your job better, it's likely you’re already taking the steps that make you the Exceptional DBA the judges are looking for. To get more insider info from the judges, download your free poster of their top tips, and then get started on your entry: www.exceptionaldba.com.

    Read the article

  • SQL SERVER – How to Set Variable and Use Variable in SQLCMD Mode

    - by Pinal Dave
    Here is a question I received the other day on the SQLAuthority Facebook page. Social media is a wonderful thing and I love the active conversation between blog readers and myself – actually I think social media adds a lot of human factor to any conversation. Here is the question: "I am using sqlcmd in SSMS – I am not sure how to declare a variable and pass it. For example, I have a database and it has a table; how can I make the table variable dynamic and pass a different value every time?" Fantastic question, and here is its very simple answer. First of all, enable SQLCMD mode in SQL Server Management Studio as described in the following image. Now in the query editor type the following SQL:

    :SETVAR DatabaseName "AdventureWorks2012"
    :SETVAR SchemaName "Person"
    :SETVAR TableName "EmailAddress"
    USE $(DatabaseName);
    SELECT * FROM $(SchemaName).$(TableName);

    Note that I have set the database, schema and table values as sqlcmd variables and I am executing the query using those parameters. Well, that was it; sqlcmd is a very simple language to master and it also aids in doing various tasks easily. If you have any other sqlcmd tips, please leave a comment and I will publish it with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology Tagged: sqlcmd
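    The same variables also work from the command line, where sqlcmd's -v switch supplies the values to a script that only references $(DatabaseName), $(SchemaName) and $(TableName); the server name and file name below are placeholders:

    sqlcmd -S MyServer -E -i query.sql -v DatabaseName="AdventureWorks2012" SchemaName="Person" TableName="EmailAddress"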

    Read the article

  • SQL SERVER – Disk Space Monitoring – Detecting Low Disk Space on Server

    - by Pinal Dave
    A very common question I often receive is how to detect if disk space is running low on SQL Server. There are two different ways to do this. I personally prefer method 2, as it is very easy to use and I can use it creatively along with the database name.

    Method 1:

    EXEC MASTER..xp_fixeddrives
    GO

    The above query will return two columns: drive name and MB free. If we want to use this data in our query, we will have to create a temporary table, insert the data from this stored procedure into the temporary table, and use it.

    Method 2:

    SELECT DISTINCT dovs.logical_volume_name AS LogicalName,
        dovs.volume_mount_point AS Drive,
        CONVERT(INT,dovs.available_bytes/1048576.0) AS FreeSpaceInMB
    FROM sys.master_files mf
    CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.FILE_ID) dovs
    ORDER BY FreeSpaceInMB ASC
    GO

    The above query will give us three columns: drive logical name, drive letter and free space in MB. We can further modify the above query to also include the database name.

    SELECT DISTINCT DB_NAME(dovs.database_id) DBName,
        dovs.logical_volume_name AS LogicalName,
        dovs.volume_mount_point AS Drive,
        CONVERT(INT,dovs.available_bytes/1048576.0) AS FreeSpaceInMB
    FROM sys.master_files mf
    CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.FILE_ID) dovs
    ORDER BY FreeSpaceInMB ASC
    GO

    This will give us additional data about which database is placed on which drive. If you see a database name multiple times, it is because your database has multiple files and they are on different drives. You can modify the above query one more time to also include the details of the actual file location.

    SELECT DISTINCT DB_NAME(dovs.database_id) DBName,
        mf.physical_name PhysicalFileLocation,
        dovs.logical_volume_name AS LogicalName,
        dovs.volume_mount_point AS Drive,
        CONVERT(INT,dovs.available_bytes/1048576.0) AS FreeSpaceInMB
    FROM sys.master_files mf
    CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.FILE_ID) dovs
    ORDER BY FreeSpaceInMB ASC
    GO

    The above query will now additionally include the physical file location. As I mentioned earlier, I prefer method 2, as I can use it creatively as per the business need. Let me know which method you are using on your production server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
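    For Method 1, the temporary-table step mentioned above looks roughly like this (the 10 GB threshold is only an example):

    CREATE TABLE #FixedDrives (Drive CHAR(1), MBFree INT);

    INSERT INTO #FixedDrives (Drive, MBFree)
    EXEC MASTER..xp_fixeddrives;

    -- The results can now be filtered or joined like any other table
    SELECT Drive, MBFree
    FROM #FixedDrives
    WHERE MBFree < 10240;

    DROP TABLE #FixedDrives;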

    Read the article

  • Learning about sparse columns and filtered indexes

    - by Christian
    I’ve been brushing up on sparse columns and filtered indexes recently, and two resources stood out for me as indispensable, so I thought I’d share them. Those of you studying for Microsoft Certified Master: SQL Server will no doubt have found the Readiness Videos published on TechNet; Kimberly Tripp's (Blog|Twitter) webcast in that series on sparse columns provides an excellent resource showing different schema designs and specifically where sparse columns fit in. MCM Readiness Video on Sparse Columns by Kimberly Tripp: http://technet.microsoft.com/en-us/sqlserver/ff977043 The second resource is a session from this year's PASS Summit (2010) by Don Vilen (Twitter) called Filtered Indexes, Sparse Columns: Together, Separately (AD203). I thought this session was great, and in combination with Kimberly's webcast it provides the perfect background for anyone wanting to learn this topic, especially those studying for the MCM knowledge exam. If you attended PASS you should have a login to stream Don's session, but if not you can buy the official DVDs from http://www.sqlpass.org. The DVDs are a worthy investment considering how much material you get access to!   Regards, Christian Christian Bolton - MCA, MCM, MVP Technical Director http://coeo.com - SQL Server Consulting & Managed Services twitter: @christianbolton
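    As a quick refresher on the two features themselves, a minimal sketch (table, column and index names are mine):

    -- A sparse column stores NULLs very cheaply, at a small extra cost for non-NULL values
    CREATE TABLE dbo.Products
    (
        ProductID        INT IDENTITY PRIMARY KEY,
        Name             NVARCHAR(100) NOT NULL,
        DiscontinuedDate DATETIME SPARSE NULL
    );

    -- A filtered index covers only the rows that actually have a value
    CREATE NONCLUSTERED INDEX IX_Products_Discontinued
        ON dbo.Products (DiscontinuedDate)
        WHERE DiscontinuedDate IS NOT NULL;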

    Read the article

  • Automated “ubuntu-12.04.1-server-amd64” OS installation on physical machine

    - by user285336
    We are using a physical server and are in the process of performing an automated "ubuntu-12.04.1-server-amd64" OS installation on it. There are two HDDs for the OS installation and they are in a RAID1 relationship, set up through the BIOS. The kickstart configuration file looks like this:

    #Generated by Kickstart Configurator
    #platform=AMD64 or Intel EM64T
    #System language
    lang en_US
    #Language modules to install
    langsupport en_US
    #System keyboard
    keyboard us
    #System mouse
    mouse
    #System timezone
    timezone Asia/Dili
    #Root password
    rootpw --iscrypted $1$Yl1QJyta$KzIT.kq3i9E5XaiQKcUJn/
    #Initial user
    user ankit --fullname "Ankit" --iscrypted --password $1$c6Yflpea$pi1QQ59/jgywmGwBv25z3/
    #Reboot after installation
    reboot
    #Use text mode install
    text
    #Install OS instead of upgrade
    install
    #Use Web installation
    url --url my_repo_location
    #System bootloader configuration
    bootloader --location=mbr
    #Clear the Master Boot Record
    zerombr yes
    #Partition clearing information
    clearpart --all --initlabel
    #Disk partitioning information
    part /boot --fstype ext4 --size 100 --ondisk sda
    part / --fstype ext4 --size 10000 --ondisk sda
    part /var --fstype ext4 --size 10000 --ondisk sda
    part swap --size 1024 --ondisk sdb
    #System authorization infomation
    auth --useshadow --enablemd5
    #Network information
    network --bootproto=dhcp --device=eth0
    #Firewall configuration
    firewall --enabled --trust=eth0 --http --ftp --ssh --telnet --smtp
    #X Window System configuration information
    xconfig --depth=8 --resolution=640x480 --defaultdesktop=GNOME

    But I am getting the error below:

    No root file system is defined

    Please advise. Do we need to make any modification to the kickstart configuration file? Any help in this regard will be very helpful for us. The automated Ubuntu OS installation is successful in a virtual machine (VM) with the above ks.cfg (kickstart configuration file), but it fails on the physical machine. Please advise, and if possible provide a new ks.cfg file to resolve the above problem. Thanks & Regards, Rajesh Prasad

    Read the article

  • SQL SERVER – Where Can YOU Get My Books – SQL Server Interview Question and Answers

    - by pinaldave
    Earlier this month I released my third book, SQL Server Interview Question and Answers. The focus of this book is 'master the basics'. If you rate yourself 10 out of 10 in SQL Server, this book is not for you; but if you want to learn the fundamentals, or want to refresh them, this book is for YOU. On release day I was overwhelmed by the love you all showed this book, which led our three-digit inventory to run out of stock. Read the detailed blog post about it here: A Real Story of Book Getting 'Out of Stock' to A 25% Discount Story Available. Well, we learned our lesson from the experience and have made sure that the inventory does not run out any more. Since then the book has become available through multiple outlets: pretty much anywhere in the USA and India, and additionally wherever Amazon ships internationally. I have created a dedicated page, Details of SQL Server Interview Question and Answers, listing where one can get this book. Even so, I keep getting the common question of where one can get this book. You can get this book from: USA: Amazon India: Flipkart | IndiaPlaza | Crossword In India you can now walk into any Crossword store and ask for this book; if they do not have it, you can ask them to get one for you. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Interview Questions and Answers, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority Book Review, SQLAuthority News, T SQL, Technology

    Read the article

  • SQL SERVER – Download Free eBook – Introducing Microsoft SQL Server 2012

    - by pinaldave
    Database Administration and Business Intelligence are indeed key areas of SQL Server. My very good friends Ross Mistry and Stacia Misner have recently written a book on SQL Server 2012. The best part is that the book is totally FREE! This book assumes that you have a certain level of SQL Server administration as well as Business Intelligence understanding, so if you are an absolute beginner I suggest you read Ross's other books as well as attend Stacia Misner's Pluralsight course. Personally, I read this book over the last 10 days and found it very easy to read and very comprehensive as well. Part I Database Administration (by Ross Mistry) 1. SQL Server 2012 Editions and Engine Enhancements 2. High-Availability and Disaster-Recovery Enhancements 3. Performance and Scalability 4. Security Enhancements 5. Programmability and Beyond-Relational Enhancements Part II Business Intelligence Development (by Stacia Misner) 6. Integration Services 7. Data Quality Services 8. Master Data Services 9. Analysis Services and PowerPivot 10. Reporting Services Here are the various versions of the eBook: PDF ePub Mobi Amazon Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Is there a Generic USB TouchScreen Driver 12.04?

    - by lbjoum
    Is there a Generic USB TouchScreen Driver 12.04? Device 03eb:201c I've been looking for 4 days solid (not very skilled) and can't find a solution. I have a generic tablet: C97- Atom N2600 9.7" 2GB 32GB Bluetooth WiFi WebCam Ext.3G Windows 7 Tablet PC Using 12.04 and cannot find a driver. I installed android and the touchscreen works but still lots of other bugs. Oh well, stuck with Windows 7 and not happy about it. Will keep trying, but too much time wasted already. If you have a solution I would love to try it. ubuntu@ubuntu:~$ lsusb Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 002: ID 0cf2:6238 ENE Technology, Inc. Bus 001 Device 003: ID 1a40:0101 Terminus Technology Inc. 4-Port HUB Bus 001 Device 005: ID 05e1:0100 Syntek Semiconductor Co., Ltd 802.11g + Bluetooth Wireless Adapter Bus 001 Device 006: ID 090c:3731 Silicon Motion, Inc. - Taiwan (formerly Feiya Technology Corp.) Bus 003 Device 002: ID 03eb:201c Atmel Corp. at90usbkey sample firmware (HID mouse) (from Windows: HID\VID_03EB&PID_201C\6&5F38127&0&0000 USB\VID_03EB&PID_201C\5&193ADADC&1&2 ) Bus 001 Device 007: ID 0518:0001 EzKEY Corp. USB to PS2 Adaptor v1.09 Bus 001 Device 008: ID 192f:0916 Avago Technologies, Pte. ubuntu@ubuntu:~$ sudo lsusb -v Bus 003 Device 002: ID 03eb:201c Atmel Corp. at90usbkey sample firmware (HID mouse) Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 32 idVendor 0x03eb Atmel Corp. idProduct 0x201c at90usbkey sample firmware (HID mouse) bcdDevice 45.a2 iManufacturer 1 CDT iProduct 2 9.75 iSerial 0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 34 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0x00 (Missing must-be-set bit!) (Bus Powered) MaxPower 100mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 0 No Subclass bInterfaceProtocol 0 None iInterface 0 HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.11 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 177 Report Descriptors: ** UNAVAILABLE ** Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0020 1x 32 bytes bInterval 5 Device Status: 0x00fb Self Powered Remote Wakeup Enabled Debug Mode ubuntu@ubuntu:~$ sudo lshw ubuntu description: Notebook product: To be filled by O.E.M. (To be filled by O.E.M.) vendor: To be filled by O.E.M. version: To be filled by O.E.M. serial: To be filled by O.E.M. width: 32 bits capabilities: smbios-2.7 dmi-2.7 smp-1.4 smp configuration: boot=normal chassis=notebook cpus=2 family=To be filled by O.E.M. sku=To be filled by O.E.M. uuid=00020003-0004-0005-0006-000700080009 *-core description: Motherboard product: Tiger Hill vendor: INTEL Corporation physical id: 0 version: To be filled by O.E.M. serial: To be filled by O.E.M. slot: To be filled by O.E.M. *-firmware description: BIOS vendor: American Megatrends Inc. 
physical id: 0 version: 4.6.5 date: 08/24/2012 size: 64KiB capacity: 960KiB capabilities: pci upgrade shadowing cdboot bootselect socketedrom edd int13floppy1200 int13floppy720 int13floppy2880 int5printscreen int9keyboard int14serial int17printer acpi usb biosbootspecification *-cpu:0 description: CPU product: Intel(R) Atom(TM) CPU N2600 @ 1.60GHz vendor: Intel Corp. physical id: 4 bus info: cpu@0 version: 6.6.1 serial: 0003-0661-0000-0000-0000-0000 slot: CPU 1 size: 1600MHz capacity: 1600MHz width: 64 bits clock: 400MHz capabilities: x86-64 boot fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx constant_tsc arch_perfmon pebs bts nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm movbe lahf_lm arat configuration: cores=2 enabledcores=1 id=2 threads=2 *-cache:0 description: L1 cache physical id: 5 slot: L1-Cache size: 24KiB capacity: 24KiB capabilities: internal write-back unified *-cache:1 description: L2 cache physical id: 6 slot: L2-Cache size: 512KiB capacity: 512KiB capabilities: internal varies unified *-logicalcpu:0 description: Logical CPU physical id: 2.1 width: 64 bits capabilities: logical *-logicalcpu:1 description: Logical CPU physical id: 2.2 width: 64 bits capabilities: logical *-logicalcpu:2 description: Logical CPU physical id: 2.3 width: 64 bits capabilities: logical *-logicalcpu:3 description: Logical CPU physical id: 2.4 width: 64 bits capabilities: logical *-memory description: System Memory physical id: 28 slot: System board or motherboard size: 2GiB *-bank:0 description: SODIMM [empty] product: [Empty] vendor: [Empty] physical id: 0 serial: [Empty] slot: DIMM0 *-bank:1 description: SODIMM DDR3 Synchronous 800 MHz (1.2 ns) vendor: 69 physical id: 1 serial: 00000210 slot: DIMM1 size: 2GiB width: 64 bits clock: 800MHz (1.2ns) *-cpu:1 physical id: 1 bus info: cpu@1 version: 6.6.1 serial: 0003-0661-0000-0000-0000-0000 size: 1600MHz capabilities: ht configuration: id=2 *-logicalcpu:0 description: Logical CPU physical id: 2.1 capabilities: logical *-logicalcpu:1 description: Logical CPU physical id: 2.2 capabilities: logical *-logicalcpu:2 description: Logical CPU physical id: 2.3 capabilities: logical *-logicalcpu:3 description: Logical CPU physical id: 2.4 capabilities: logical *-pci description: Host bridge product: Atom Processor D2xxx/N2xxx DRAM Controller vendor: Intel Corporation physical id: 100 bus info: pci@0000:00:00.0 version: 03 width: 32 bits clock: 33MHz *-display UNCLAIMED description: VGA compatible controller product: Atom Processor D2xxx/N2xxx Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 09 width: 32 bits clock: 33MHz capabilities: pm msi vga_controller bus_master cap_list configuration: latency=0 resources: memory:dfe00000-dfefffff ioport:f100(size=8) *-multimedia description: Audio device product: N10/ICH 7 Family High Definition Audio Controller vendor: Intel Corporation physical id: 1b bus info: pci@0000:00:1b.0 version: 02 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: driver=snd_hda_intel latency=0 resources: irq:42 memory:dff00000-dff03fff *-pci:0 description: PCI bridge product: N10/ICH 7 Family PCI Express Port 1 vendor: Intel Corporation physical id: 1c bus info: pci@0000:00:1c.0 version: 02 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport 
resources: irq:40 ioport:2000(size=4096) memory:80000000-801fffff ioport:80200000(size=2097152) *-usb:0 description: USB controller product: N10/ICH 7 Family USB UHCI Controller #1 vendor: Intel Corporation physical id: 1d bus info: pci@0000:00:1d.0 version: 02 width: 32 bits clock: 33MHz capabilities: uhci bus_master configuration: driver=uhci_hcd latency=0 resources: irq:23 ioport:f0a0(size=32) *-usb:1 description: USB controller product: N10/ICH 7 Family USB UHCI Controller #2 vendor: Intel Corporation physical id: 1d.1 bus info: pci@0000:00:1d.1 version: 02 width: 32 bits clock: 33MHz capabilities: uhci bus_master configuration: driver=uhci_hcd latency=0 resources: irq:19 ioport:f080(size=32) *-usb:2 description: USB controller product: N10/ICH 7 Family USB UHCI Controller #3 vendor: Intel Corporation physical id: 1d.2 bus info: pci@0000:00:1d.2 version: 02 width: 32 bits clock: 33MHz capabilities: uhci bus_master configuration: driver=uhci_hcd latency=0 resources: irq:18 ioport:f060(size=32) *-usb:3 description: USB controller product: N10/ICH 7 Family USB UHCI Controller #4 vendor: Intel Corporation physical id: 1d.3 bus info: pci@0000:00:1d.3 version: 02 width: 32 bits clock: 33MHz capabilities: uhci bus_master configuration: driver=uhci_hcd latency=0 resources: irq:16 ioport:f040(size=32) *-usb:4 description: USB controller product: N10/ICH 7 Family USB2 EHCI Controller vendor: Intel Corporation physical id: 1d.7 bus info: pci@0000:00:1d.7 version: 02 width: 32 bits clock: 33MHz capabilities: pm debug ehci bus_master cap_list configuration: driver=ehci_hcd latency=0 resources: irq:23 memory:dff05000-dff053ff *-pci:1 description: PCI bridge product: 82801 Mobile PCI Bridge vendor: Intel Corporation physical id: 1e bus info: pci@0000:00:1e.0 version: e2 width: 32 bits clock: 33MHz capabilities: pci subtractive_decode bus_master cap_list *-isa description: ISA bridge product: NM10 Family LPC Controller vendor: Intel Corporation physical id: 1f bus info: pci@0000:00:1f.0 version: 02 width: 32 bits clock: 33MHz capabilities: isa bus_master cap_list configuration: latency=0 *-storage description: SATA controller product: N10/ICH7 Family SATA Controller [AHCI mode] vendor: Intel Corporation physical id: 1f.2 bus info: pci@0000:00:1f.2 logical name: scsi0 version: 02 width: 32 bits clock: 66MHz capabilities: storage msi pm ahci_1.0 bus_master cap_list emulated configuration: driver=ahci latency=0 resources: irq:41 ioport:f0f0(size=8) ioport:f0e0(size=4) ioport:f0d0(size=8) ioport:f0c0(size=4) ioport:f020(size=16) memory:dff04000-dff043ff *-disk description: ATA Disk product: BIWIN SSD physical id: 0.0.0 bus info: scsi@0:0.0.0 logical name: /dev/sda version: 1206 serial: 123403501060 size: 29GiB (32GB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 signature=8fbe402b *-volume:0 description: Windows NTFS volume physical id: 1 bus info: scsi@0:0.0.0,1 logical name: /dev/sda1 version: 3.1 serial: 249bde5d-8246-9a40-88c7-2d5e3bcaf692 size: 19GiB capacity: 19GiB capabilities: primary bootable ntfs initialized configuration: clustersize=4096 created=2011-04-04 02:27:51 filesystem=ntfs state=clean *-volume:1 description: Windows NTFS volume physical id: 2 bus info: scsi@0:0.0.0,2 logical name: /dev/sda2 version: 3.1 serial: de12d40f-d5ca-8642-b306-acd9349fda1a size: 10231MiB capacity: 10GiB capabilities: primary ntfs initialized configuration: clustersize=4096 created=2011-04-04 01:52:26 filesystem=ntfs state=clean *-serial UNCLAIMED description: SMBus product: N10/ICH 7 
Family SMBus Controller vendor: Intel Corporation physical id: 1f.3 bus info: pci@0000:00:1f.3 version: 02 width: 32 bits clock: 33MHz configuration: latency=0 resources: ioport:f000(size=32) *-scsi:0 physical id: 2 bus info: usb@1:1 logical name: scsi4 capabilities: emulated scsi-host configuration: driver=usb-storage *-disk description: SCSI Disk physical id: 0.0.0 bus info: scsi@4:0.0.0 logical name: /dev/sdb size: 29GiB (31GB) capabilities: partitioned partitioned:dos configuration: signature=00017463 *-volume description: Windows FAT volume vendor: mkdosfs physical id: 1 bus info: scsi@4:0.0.0,1 logical name: /dev/sdb1 logical name: /cdrom version: FAT32 serial: 129b-4f87 size: 29GiB capacity: 29GiB capabilities: primary bootable fat initialized configuration: FATs=2 filesystem=fat mount.fstype=vfat mount.options=rw,relatime,fmask=0022,dmask=0022,codepage=cp437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro state=mounted *-scsi:1 physical id: 3 bus info: usb@1:3.1 logical name: scsi6 capabilities: emulated scsi-host configuration: driver=usb-storage *-disk description: SCSI Disk physical id: 0.0.0 bus info: scsi@6:0.0.0 logical name: /dev/sdc size: 7400MiB (7759MB) capabilities: partitioned partitioned:dos configuration: signature=c3072e18 *-volume description: Windows FAT volume vendor: mkdosfs physical id: 1 bus info: scsi@6:0.0.0,1 logical name: /dev/sdc1 logical name: /media/JOUM8G version: FAT32 serial: e676-9311 size: 7394MiB capacity: 7394MiB capabilities: primary bootable fat initialized configuration: FATs=2 filesystem=fat label=Android mount.fstype=vfat mount.options=rw,nosuid,nodev,relatime,uid=999,gid=999,fmask=0022,dmask=0077,codepage=cp437,iocharset=iso8859-1,shortname=mixed,showexec,utf8,flush,errors=remount-ro state=mounted ubuntu@ubuntu:~$ ubuntu@ubuntu:~$ xinput list ? Virtual core pointer id=2 [master pointer (3)] ? ? Virtual core XTEST pointer id=4 [slave pointer (2)] ? ? Plus More Enterprise LTD. USB-compliant keyboard id=10 [slave pointer (2)] ? ? USB Optical Mouse id=11 [slave pointer (2)] ? Virtual core keyboard id=3 [master keyboard (2)] ? Virtual core XTEST keyboard id=5 [slave keyboard (3)] ? Power Button id=6 [slave keyboard (3)] ? Power Button id=7 [slave keyboard (3)] ? Sleep Button id=8 [slave keyboard (3)] ? Plus More Enterprise LTD. USB-compliant keyboard id=9 [slave keyboard (3)] ? USB 2.0 Webcam - Front id=12 [slave keyboard (3)] ? AT Translated Set 2 keyboard id=13 [slave keyboard (3)] ubuntu@ubuntu:~$

    Read the article

  • T-SQL Tuesday #53-Matt's Making Me Do This!

    - by Most Valuable Yak (Rob Volk)
    Hello everyone! It's that time again, time for T-SQL Tuesday, the wonderful blog series started by Adam Machanic (b|t). This month we are hosted by Matt Velic (b|t) who asks the question, "Why So Serious?", in celebration of April Fool's Day. He asks the contributors for their dirty tricks. And for some reason that escapes me, he and Jeff Verheul (b|t) seem to think I might be able to write about those. Shocked, I am! Nah, not really. They're absolutely right, this one is gonna be fun! I took some inspiration from Matt's suggestions, namely Resource Governor and Login Triggers.  I've done some interesting login trigger stuff for a presentation, but nothing yet with Resource Governor. Best way to learn it! One of my oldest pet peeves is abuse of the sa login. Don't get me wrong, I use it too, but typically only as SQL Agent job owner. It's been a while since I've been stuck with it, but back when I started using SQL Server, EVERY application needed sa to function. It was hard-coded and couldn't be changed. (welllllll, that is if you didn't use a hex editor on the EXE file, but who would do such a thing?) My standard warning applies: don't run anything on this page in production. In fact, back up whatever server you're testing this on, including the master database. Snapshotting a VM is a good idea. Also make sure you have other sysadmin level logins on that server. So here's a standard template for a logon trigger to address those pesky sa users: CREATE TRIGGER SA_LOGIN_PRIORITY ON ALL SERVER WITH ENCRYPTION, EXECUTE AS N'sa' AFTER LOGON AS IF ORIGINAL_LOGIN()<>N'sa' OR APP_NAME() LIKE N'SQL Agent%' RETURN; -- interesting stuff goes here GO   What can you do for "interesting stuff"? Books Online limits itself to merely rolling back the logon, which will throw an error (and alert the person that the logon trigger fired).  That's a good use for logon triggers, but really not tricky enough for this blog.  Some of my suggestions are below: WAITFOR DELAY '23:59:59';   Or: EXEC sp_MSforeach_db 'EXEC sp_detach_db ''?'';'   Or: EXEC msdb.dbo.sp_add_job @job_name=N'`', @enabled=1, @start_step_id=1, @notify_level_eventlog=0, @delete_level=3; EXEC msdb.dbo.sp_add_jobserver @job_name=N'`', @server_name=@@SERVERNAME; EXEC msdb.dbo.sp_add_jobstep @job_name=N'`', @step_id=1, @step_name=N'`', @command=N'SHUTDOWN;'; EXEC msdb.dbo.sp_start_job @job_name=N'`';   Really, I don't want to spoil your own exploration, try it yourself!  The thing I really like about these is it lets me promote the idea that "sa is SLOW, sa is BUGGY, don't use sa!".  Before we get into Resource Governor, make sure to drop or disable that logon trigger. They don't work well in combination. (Had to redo all the following code when SSMS locked up) Resource Governor is a feature that lets you control how many resources a single session can consume. The main goal is to limit the damage from a runaway query. But we're not here to read about its main goal or normal usage! I'm trying to make people stop using sa BECAUSE IT'S SLOW! 
Here's how RG can do that: USE master; GO CREATE FUNCTION dbo.SA_LOGIN_PRIORITY() RETURNS sysname WITH SCHEMABINDING, ENCRYPTION AS BEGIN RETURN CASE WHEN ORIGINAL_LOGIN()=N'sa' AND APP_NAME() NOT LIKE N'SQL Agent%' THEN N'SA_LOGIN_PRIORITY' ELSE N'default' END END GO CREATE RESOURCE POOL SA_LOGIN_PRIORITY WITH ( MIN_CPU_PERCENT = 0 ,MAX_CPU_PERCENT = 1 ,CAP_CPU_PERCENT = 1 ,AFFINITY SCHEDULER = (0) ,MIN_MEMORY_PERCENT = 0 ,MAX_MEMORY_PERCENT = 1 -- ,MIN_IOPS_PER_VOLUME = 1 ,MAX_IOPS_PER_VOLUME = 1 -- uncomment for SQL Server 2014 ); CREATE WORKLOAD GROUP SA_LOGIN_PRIORITY WITH ( IMPORTANCE = LOW ,REQUEST_MAX_MEMORY_GRANT_PERCENT = 1 ,REQUEST_MAX_CPU_TIME_SEC = 1 ,REQUEST_MEMORY_GRANT_TIMEOUT_SEC = 1 ,MAX_DOP = 1 ,GROUP_MAX_REQUESTS = 1 ) USING SA_LOGIN_PRIORITY; ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION=dbo.SA_LOGIN_PRIORITY); ALTER RESOURCE GOVERNOR RECONFIGURE;   From top to bottom: Create a classifier function to determine which pool the session should go to. More info on classifier functions. Create the pool and provide a generous helping of resources for the sa login. Create the workload group and further prioritize those resources for the sa login. Apply the classifier function and reconfigure RG to use it. I have to say this one is a bit sneakier than the logon trigger, least of all you don't get any error messages.  I heartily recommend testing it in Management Studio, and click around the UI a lot, there's some fun behavior there. And DEFINITELY try it on SQL 2014 with the IO settings included!  You'll notice I made allowances for SQL Agent jobs owned by sa, they'll go into the default workload group.  You can add your own overrides to the classifier function if needed. Some interesting ideas I didn't have time for but expect you to get to before me: Set up different pools/workgroups with different settings and randomize which one the classifier chooses Do the same but base it on time of day (Books Online example covers this)... Or, which workstation it connects from. This can be modified for certain special people in your office who either don't listen, or are attracted (and attractive) to you. And if things go wrong you can always use the following from another sysadmin or Dedicated Admin connection: ALTER RESOURCE GOVERNOR DISABLE;   That will let you go in and either fix (or drop) the pools, workgroups and classifier function. So now that you know these types of things are possible, and if you are tired of your team using sa when they shouldn't, I expect you'll enjoy playing with these quite a bit! Unfortunately, the aforementioned Dedicated Admin Connection kinda poops on the party here.  Books Online for both topics will tell you that the DAC will not fire either feature. So if you have a crafty user who does their research, they can still sneak in with sa and do their bidding without being hampered. Of course, you can still detect their login via various methods, like a server trace, SQL Server Audit, extended events, and enabling "Audit Successful Logins" on the server.  These all have their downsides: traces take resources, extended events and SQL Audit can't fire off actions, and enabling successful logins will bloat your error log very quickly.  SQL Audit is also limited unless you have Enterprise Edition, and Resource Governor is Enterprise-only.  And WORST OF ALL, these features are all available and visible through the SSMS UI, so even a doofus developer or manager could find them. Fortunately there are Event Notifications! 
Event notifications are becoming one of my favorite features of SQL Server (keep an eye out for more blogs from me about them). They are practically unknown and heinously underutilized.  They are also a great gateway drug to using Service Broker, another great but underutilized feature. Hopefully this will get you to start using them, or at least your enemies in the office will once they read this, and then you'll have to learn them in order to fix things. So here's the setup: USE msdb; GO CREATE PROCEDURE dbo.SA_LOGIN_PRIORITY_act WITH ENCRYPTION AS DECLARE @x XML, @message nvarchar(max); RECEIVE @x=CAST(message_body AS XML) FROM SA_LOGIN_PRIORITY_q; IF @x.value('(//LoginName)[1]','sysname')=N'sa' AND @x.value('(//ApplicationName)[1]','sysname') NOT LIKE N'SQL Agent%' BEGIN -- interesting activation procedure stuff goes here END GO CREATE QUEUE SA_LOGIN_PRIORITY_q WITH STATUS=ON, RETENTION=OFF, ACTIVATION (PROCEDURE_NAME=dbo.SA_LOGIN_PRIORITY_act, MAX_QUEUE_READERS=1, EXECUTE AS OWNER); CREATE SERVICE SA_LOGIN_PRIORITY_s ON QUEUE SA_LOGIN_PRIORITY_q([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]); CREATE EVENT NOTIFICATION SA_LOGIN_PRIORITY_en ON SERVER WITH FAN_IN FOR AUDIT_LOGIN TO SERVICE N'SA_LOGIN_PRIORITY_s', N'current database' GO   From top to bottom: Create activation procedure for event notification queue. Create queue to accept messages from event notification, and activate the procedure to process those messages when received. Create service to send messages to that queue. Create event notification on AUDIT_LOGIN events that fire the service. I placed this in msdb as it is an available system database and already has Service Broker enabled by default. You should change this to another database if you can guarantee it won't get dropped. So what to put in place for "interesting activation procedure code"?  Hmmm, so far I haven't addressed Matt's suggestion of writing a lengthy script to send an annoying message: SET @[email protected]('(//HostName)[1]','sysname') + N' tried to log in to server ' + @x.value('(//ServerName)[1]','sysname') + N' as SA at ' + @x.value('(//StartTime)[1]','sysname') + N' using the ' + @x.value('(//ApplicationName)[1]','sysname') + N' program. That''s why you''re getting this message and the attached pornography which' + N' is bloating your inbox and violating company policy, among other things. If you know' + N' this person you can go to their desk and hit them, or use the following SQL to end their session: KILL ' + @x.value('(//SPID)[1]','sysname') + N'; Hopefully they''re in the middle of a huge query that they need to finish right away.' EXEC msdb.dbo.sp_send_dbmail @recipients=N'[email protected]', @subject=N'SA Login Alert', @query_result_width=32767, @body=@message, @query=N'EXEC sp_readerrorlog;', @attach_query_result_as_file=1, @query_attachment_filename=N'UtterlyGrossPorn_SeriouslyDontOpenIt.jpg' I'm not sure I'd call that a lengthy script, but the attachment should get pretty big, and I'm sure the email admins will love storing multiple copies of it.  The nice thing is that this also fires on Dedicated Admin connections! You can even identify DAC connections from the event data returned, I leave that as an exercise for you. You can use that info to change the action taken by the activation procedure, and since it's a stored procedure, it can pretty much do anything! Except KILL the SPID, or SHUTDOWN the server directly.  I'm still working on those.
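    If you want to reset things after experimenting (the post's caveat about backing up the server first still applies), here is a hedged cleanup sketch; it is not from Rob's post, but it reuses the object names created above together with standard DROP syntax:

    -- Remove the event notification plumbing created in msdb above.
    USE msdb;
    GO
    DROP EVENT NOTIFICATION SA_LOGIN_PRIORITY_en ON SERVER;
    DROP SERVICE SA_LOGIN_PRIORITY_s;
    DROP QUEUE SA_LOGIN_PRIORITY_q;
    DROP PROCEDURE dbo.SA_LOGIN_PRIORITY_act;
    GO

    -- Detach and drop the Resource Governor pieces from earlier in the post.
    USE master;
    GO
    ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = NULL);
    ALTER RESOURCE GOVERNOR RECONFIGURE;
    DROP WORKLOAD GROUP SA_LOGIN_PRIORITY;
    DROP RESOURCE POOL SA_LOGIN_PRIORITY;
    DROP FUNCTION dbo.SA_LOGIN_PRIORITY;
    ALTER RESOURCE GOVERNOR RECONFIGURE;
    GO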

    Read the article

  • Iterative Conversion

    - by stuart ramage
    Question Received: I am toying with the idea of migrating the current information first and the remainder of the history at a later date. I have heard that the conversion tool copes with this, but I haven't found any information on how it does so. Answer: The Toolkit will support iterative conversions as long as the original master data key tables (the CK_* tables) are not cleared down from Staging (the already converted Transactional Data would need to be cleared down) and the Production instance being migrated into is actually Production (we have migrated into a pre-prod instance in the past and then unloaded this and loaded it into the real PROD instance, but this will not work for your situation: you need to be migrating directly into your intended environment). In this case the migration tool will still know all about the original keys and the generated keys for the primary objects (Account, SA, etc.), and as such it will be able to link the data converted as part of a second pass onto these entities. It should be noted that this may result in the original opening balances potentially being displayed with an incorrect value (if we are talking about Financial Transactions) and also that care will have to be taken to ensure that all related objects are aligned (e.g. a Bill must have a set of bill segments, meter reads and financial transactions, and these entities cannot exist independently). It should also be noted that subsequent runs of the conversion tool would need to be 'trimmed' to ensure that they only do work on the objects affected. You would not want to revalidate and migrate all Person, Account, SA, SA/SP, SP and Premise details, since this information has already been processed, but you would definitely want to run the affected transactional record validation and keygen processes. There is no real "hard-and-fast" rule around this processing since it is specific to each implementation's needs, but the majority of the effort required should be detailed in the Conversion Tool section of the online help (under Administration / The Conversion Tool). The major rule is to ensure that you only run the steps and validation/keygen processes that you need and do not do a complete rerun for your subsequent conversion.
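    To make the key-table idea concrete: because the CK_* tables keep the mapping from legacy keys to generated keys, a second-pass load can resolve its parent objects against that preserved mapping. The check below is purely illustrative and uses hypothetical table and column names (STG_FT, CK_ACCT, LEGACY_ACCT_ID, NEW_ACCT_ID); the real toolkit tables will differ, so treat it as a sketch of the idea rather than working toolkit code:

    -- Hypothetical sanity check before a second conversion pass:
    -- count staged financial transactions whose legacy account key
    -- has no surviving mapping in the (preserved) key table.
    SELECT COUNT(*) AS unmapped_transactions
    FROM   STG_FT ft
    LEFT JOIN CK_ACCT ck
           ON ck.LEGACY_ACCT_ID = ft.LEGACY_ACCT_ID
    WHERE  ck.NEW_ACCT_ID IS NULL;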

    Read the article

< Previous Page | 184 185 186 187 188 189 190 191 192 193 194 195  | Next Page >