Search Results

Search found 89481 results on 3580 pages for 'new technology'.


  • New site not appearing in index after change of address, no feedback from Google Webmaster Tools

    - by Duffy
    Our change of address seems not to be taking effect. Here's the story so far: we're a web company and our product is called The New Hive. Our site used to be at thenewhive.com, but we decided to switch to newhive.com (drop the "the", it's cleaner). The timeline of what I've tried:

    July 29th: set up 301 redirects for all pages (e.g. thenewhive.com/tag/art -> newhive.com/tag/art). At this point we noticed that we had disappeared from search results when searching "The New Hive"; the front page used to be all links to our site plus a couple of news articles about the company.

    August 5th: verified the new domain in Webmaster Tools (the old domain was already verified) and submitted a change of address request via Webmaster Tools / Configuration / Change of Address.

    August 13th: went to Webmaster Tools / Health / Fetch as Google, fetched our homepage and a couple of sub-pages (all successfully), and clicked "Submit to Index" for the homepage.

    As of today (August 23rd) we're still not showing up in the index. We're getting no warnings or feedback of any kind from the dashboard, so I'm inclined to think something's broken with the dashboard rather than that something's wrong with our site from an SEO perspective. From the dashboard: no new messages or recent critical issues. Crawl Errors: no data available. From Health - Index Status: Total indexed 0, Ever crawled 42,490, Not selected 12, Blocked by robots 0. I'm really at a loss here, any help would be appreciated.
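    For reference, a host-level 301 of this kind can be done in one rewrite rule. This is only a hedged sketch, assuming an Apache host with mod_rewrite (the question doesn't say which web server is in use); the domain names are the ones from the question:

        RewriteEngine On
        # Match the old host (with or without www) and redirect permanently,
        # preserving the request path so deep links carry over.
        RewriteCond %{HTTP_HOST} ^(www\.)?thenewhive\.com$ [NC]
        RewriteRule ^(.*)$ http://newhive.com/$1 [R=301,L]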

    Read the article

  • New email alias for Ask ADR

    - by user12842161
    New email alias for Ask ADR! Please note that [email protected] will be decommissioned on the 4th of November, 2011. We advise you to start writing to [email protected] for all ADR-related queries and escalations going forward. All emails sent to the old alias will be automatically forwarded to the new alias until the 4th of November, 2011.

    Read the article

  • Implementing new required feature after software release

    - by TiagoBrenck
    Fake Scenario
    There is a software product that was released 1 year ago. The software maps and registers all kinds of animals on our planet. When the software was released, the client only needed to know the scientific name of the animal, a flag indicating whether it is at risk of extinction, and a scale of dangerousness (this is a fake product and specification, I don't want to discuss that here). There are already 100,000 animal records saved in the DB.

    New Feature
    One year later, the client wants a new feature. It is really important to him to know the animals' classes, and this is a required field. So he asks me to add an input field for the animal class, and this field is required. Or maybe where the animal was discovered.

    Problem
    I already have 100,000 recorded animals without a class or a discovery location, but I need to add a new column to store this information, and this column can't be null. I don't have a default value for this situation (there isn't a default animal class or default discovery location). I don't want to keep the requirement rule only in my software; my DB must have this requirement too (I like to keep business rules in the DB as well). What are the alternatives to solve this situation? I am in a situation where this new feature cannot be previewed or reviewed for the existing records. The time has already passed and I can't go back in time to get it.
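    One common alternative is a three-step migration: add the column as nullable, backfill the existing rows with an explicit sentinel value the application treats as "not yet classified", then enforce the constraint in the database. A hedged sketch in SQL Server syntax (the table, column, and sentinel names are illustrative, not from the question):

        -- 1. Add the column as nullable so the 100,000 existing rows stay valid.
        ALTER TABLE Animals ADD AnimalClass VARCHAR(100) NULL;
        GO
        -- 2. Backfill existing records with a sentinel that is visibly "unknown".
        UPDATE Animals SET AnimalClass = 'Unclassified' WHERE AnimalClass IS NULL;
        GO
        -- 3. Now the business rule can live in the database itself.
        ALTER TABLE Animals ALTER COLUMN AnimalClass VARCHAR(100) NOT NULL;
        GO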

    Read the article

  • New Write Flash SSDs and more disk trays

    - by Steve Tunstall
    In case you haven't heard, the write SSDs in the ZFSSA have been updated. Much faster now for the same price. Sweet. The new write-flash SSDs have a new part number of 7105026, so make sure you order the right ones. It's important to note that you MUST be on code level 2011.1.4.0 or higher to use these. They have increased in IOPS from 6,000 to 11,000, and increased throughput from 200MB/s to 350MB/s. Also, you can now add six SAS HBAs (up from 4) to the 7420, allowing one to have three SAS channels with 12 disk trays each, for a new total of 36 disk trays. With 3TB drives, that's 2.5 petabytes (36 trays x 24 drives per tray x 3TB). Is that enough for you? Make sure you add new cards to the correct slots. I've talked about this before, but here is the handy-dandy matrix again so you don't have to go find it. Remember the rules: you can have 6 of any one kind of card (like six 10GigE cards), but you only really get 8 slots, since you have two SAS cards no matter what. If you want more than 12 disk trays, you need two more SAS cards, so think about expansion later, too. In fact, if you are going to have two different speeds of drives, in other words if you want to mix 15K-speed and 7,200-speed drives in the same system, I would highly recommend two different SAS channels. So I would want four SAS cards in that system, no matter how many trays you have.

    Read the article

  • Sneak Preview - New CodePlex UI

    We have been busy the last several months working to improve the overall experience for the CodePlex community. We have been working through some of the top requested items, such as our big announcement last week enabling Git. Something that is not explicitly on the feature request list, though, is the steady stream of requests to update the web site look and user experience. As Brian Harry mentioned, the Future of CodePlex is Bright, so it is time to start brightening up the place.

    Goals
    As with any sizeable change, you need to decide the scope of changes you want to tackle. We decided that we would optimize for incremental improvements versus taking months to get a completely new experience released. Our goals with this user experience work are to refresh the look and feel of the site, introduce new visual elements, and set up the site for future structural changes. So this is not the end, just the beginning.

    Early Views
    I want to set a few expectations first: these screen shots are not final, and we are still working through the content and final element placement. Feedback is always welcome, just take that in mind as you review the images.

    New CodePlex Home
    The navigation changed a good bit on the home page, and we have moved the search to a more consistent location across the site.

    User Profile

    User's Home Page
    The goal was to make it easier to find and take action on common tasks, such as creating projects.

    Project Home

    Issue Tracker

    This should give you a taste of where we are going with the new user experience. As always, we love the feedback: either comment below, find us on Twitter @codeplex or @mgroves84, or create or vote up suggestions.

    Read the article

  • CSOM (Client Side Object Model) - What's new with SharePoint 2013

    - by KunaalKapoor
    SharePoint CSOM
    The Client-Side Object Model, or CSOM, came out with SharePoint 2010. CSOM is accessible through client.svc, but all client.svc calls must go through supported WCF entry points (the supported entry points are .NET, Silverlight and JavaScript). So a developer needs to use the client-side proxy objects exposed by either a .NET assembly or a JavaScript library.

    Changes with SharePoint 2013
    - REST capabilities - direct access to client.svc
    - New APIs - app model

    REST Capabilities
    One of the most important changes to the CSOM in SharePoint 2013 is that the web service entry point of client.svc has been extended to allow direct access via REST-based web service calls. This is a really critical change, since it's going to make the SharePoint platform accessible to any other platform, opening the horizons of integration and collaboration with other REST-based platforms and devices. OData (a really popular standard data access API for HTTP-based clients) is supported similarly to 2010, but will be a more important aspect of SharePoint 2013 development.

    New APIs
    CSOM for SharePoint 2013 has been buffed up with several new APIs, not only for SharePoint server functionality but also with an API for Windows Phone applications. In a SharePoint 2010 farm, most of the new APIs mentioned below are available only via server-side APIs:
    - Search
    - Taxonomy
    - Publishing
    - Workflow
    - User Profiles
    - E-Discovery
    - Analytics
    - Business Data
    - IRM
    - Feeds

    SharePoint 2013 remote APIs being accessible through both CSOM and REST is very important to the new app model, where developers can no longer run code in a SharePoint environment nor access the server-side APIs. So CSOM plays the savior here. Also, you can now substitute the alias '_api' in order to reference '_vti_bin/client.svc'.
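    As a hedged illustration of the REST entry point and the '_api' alias (a sketch, not from the original post; the site URL is a placeholder and authentication is omitted), reading the web title looks like this:

        // TypeScript sketch: GET the web title through the _api alias,
        // equivalent to calling /_vti_bin/client.svc/web/title.
        async function getWebTitle(siteUrl: string): Promise<string> {
          const response = await fetch(`${siteUrl}/_api/web/title`, {
            headers: { Accept: "application/json;odata=verbose" },
          });
          const data = await response.json();
          return data.d.Title; // the verbose OData format wraps results in "d"
        }

        getWebTitle("http://server/sites/dev").then(title => console.log(title));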

    Read the article

  • What is an effective way to familiarize yourself with a new application in a new language? [closed]

    - by codeninja
    Possible Duplicate: How do I pick up a new language quickly, given I know several others?

    I started a new job working on an application I'm only vaguely familiar with, and it's in Perl! I come from a PHP and Java background, so while I understand the basics, there are a lot of nuances in Perl that make it troublesome.

    Update: I'm supposed to be a UI developer, but the smallness of the office requires me to learn and do a lot more than just JavaScript. So that was slightly unexpected in some aspects, and I'm just thinking about what approach to take with this.

    So far I've been sifting through the code to understand what each part does, printing out copies of code, and trying to look up APIs I'm not familiar with. I don't know how effective this process is -- I feel like it's going to take some time -- and I don't want my new employers to feel like I'm not being productive. Does anyone have ideas or approaches for this kind of situation? I read some of the questions about learning new languages, but I'm curious to see if anyone's had experience with this with Perl.

    Read the article

  • Transferring users and search engines to a new domain

    - by eftpotrm
    I've been asked to take over the maintenance of an existing site that's being reworked. At present it serves localised content for several languages, but via a fairly unhelpful mechanism that means search engines essentially only have it indexed in English, and any deep links will de facto appear in English as well. So, new localised sites are being built under separate domains - not just for this reason, there are other benefits too. What we're then looking to do is redirect users correctly to the new site, where appropriate. For humans this isn't a problem: we can send them through a gateway page on their first site visit, grab their language preference and put it in a cookie, then redirect them to the new localised content as soon as it's available. For search engines, this isn't so good... In principle I'm happy to simply bypass the gateway page and redirect known spiders to the new site, but this means we're serving radically different content (a different URL, even!) to human and robot users. Won't this therefore be regarded as cloaking and cause us grief? Does anyone know a better way to handle this?

    Read the article

  • Mounting a new hard drive (sda1) to my existing filesystem

    - by shank22
    I tried to read some posts regarding mounting a new hard drive, but I am still facing a problem. My new hard drive is sda1. The output of sudo fdisk -l is:

        Disk /dev/sdb: 999.7 GB, 999653638144 bytes
        255 heads, 63 sectors/track, 121534 cylinders, total 1952448512 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00016485

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *        2048  1935822847   967910400   83  Linux
        /dev/sdb2      1935824894  1952446463     8310785    5  Extended
        /dev/sdb5      1935824896  1952446463     8310784   82  Linux swap / Solaris

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x78dbcdc1

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1            2048  1953521663   976759808    7  HPFS/NTFS/exFAT

    What should be done so that this new sda1 drive is mounted at boot? What should be added to the /etc/fstab file? I have not performed any partitioning on the new sda1 drive. I need help on how to proceed from scratch and can't afford to take any risk. Please help!
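    A hedged sketch of the usual steps (assuming you want to keep the existing NTFS partition rather than reformat it; the mount point name is illustrative, and the ntfs-3g driver must be installed):

        # Find the partition's UUID (more stable across reboots than /dev/sda1):
        sudo blkid /dev/sda1

        # Create a mount point and test-mount the NTFS partition:
        sudo mkdir -p /media/data
        sudo mount -t ntfs-3g /dev/sda1 /media/data

        # If the test works, unmount and add a line like this to /etc/fstab,
        # replacing the UUID with the value blkid printed:
        #   UUID=XXXXXXXXXXXXXXXX  /media/data  ntfs-3g  defaults  0  0

        # Then verify the fstab entry without rebooting:
        sudo mount -a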

    Read the article

  • Adding the New HTML Editor Extender to a Web Forms Application using NuGet

    - by Stephen Walther
    The July 2011 release of the Ajax Control Toolkit includes a new, lightweight, HTML5-compatible HTML Editor extender. In this blog entry, I explain how you can take advantage of NuGet to quickly add the new HTML Editor control extender to a new or existing ASP.NET Web Forms application.

    Installing the Latest Version of the Ajax Control Toolkit with NuGet
    NuGet is a package manager. It enables you to quickly install new software directly from within Visual Studio 2010. You can use NuGet to install additional software when building any type of .NET application, including ASP.NET Web Forms and ASP.NET MVC applications. If you have not already installed NuGet, then you can do so by navigating to the following address and clicking the giant install button: http://nuget.org/

    After you install NuGet, you can add the Ajax Control Toolkit to a new or existing ASP.NET Web Forms application by selecting the Visual Studio menu option Tools, Library Package Manager, Package Manager Console. Selecting this menu option opens the Package Manager Console. You can enter the command Install-Package AjaxControlToolkit in the console to install the Ajax Control Toolkit. After you install the Ajax Control Toolkit with NuGet, your application will include assembly references to the AjaxControlToolkit.dll and SanitizerProviders.dll assemblies. Furthermore, your Web.config file will be updated to contain a new tag prefix for the Ajax Control Toolkit controls:

        <configuration>
          <system.web>
            <compilation debug="true" targetFramework="4.0" />
            <pages>
              <controls>
                <add tagPrefix="ajaxToolkit"
                     assembly="AjaxControlToolkit"
                     namespace="AjaxControlToolkit" />
              </controls>
            </pages>
          </system.web>
        </configuration>

    The configuration file installed by NuGet adds the prefix ajaxToolkit for all of the Ajax Control Toolkit controls. You can type ajaxToolkit: in Source view to get auto-complete. You can, of course, change this prefix to anything you want.

    Using the HTML Editor Extender
    After you install the Ajax Control Toolkit, you can use the HTML Editor extender with the standard ASP.NET TextBox control to enable users to enter rich formatting such as bold, underline, italic, different fonts, and different background and foreground colors. For example, the following page can be used for entering comments. The page contains a standard ASP.NET TextBox, Button, and Label control. When you click the button, any text entered into the TextBox is displayed in the Label control. It is a pretty boring page. Let's make this page fancier by extending the standard ASP.NET TextBox with the HTML Editor extender control. Now the ASP.NET TextBox has a toolbar with buttons for performing various kinds of formatting. For example, you can change the size and font used for the text. You also can change the foreground and background color, and make many other formatting changes. You can customize the toolbar buttons which the HTML Editor extender displays.
    To learn how to customize the toolbar, see the HTML Editor Extender sample page here: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/HTMLEditorExtender/HTMLEditorExtender.aspx

    Here's the source code for the ASP.NET page:

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="WebApplication1.Default" %>

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title>Add Comments</title>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <ajaxToolkit:ToolkitScriptManager ID="TSM1" runat="server" />
                <asp:TextBox
                    ID="txtComments"
                    TextMode="MultiLine"
                    Columns="50"
                    Rows="8"
                    Runat="server" />
                <ajaxToolkit:HtmlEditorExtender
                    ID="hee"
                    TargetControlID="txtComments"
                    Runat="server" />
                <br /><br />
                <asp:Button
                    ID="btnSubmit"
                    Text="Add Comment"
                    Runat="server"
                    onclick="btnSubmit_Click" />
                <hr />
                <asp:Label
                    ID="lblComment"
                    Runat="server" />
            </div>
            </form>
        </body>
        </html>

    Notice that the page above contains 5 controls. The page contains a standard ASP.NET TextBox, Button, and Label control. However, the page also contains an Ajax Control Toolkit ToolkitScriptManager control and HtmlEditorExtender control. The HTML Editor extender control extends the standard ASP.NET TextBox control: its TargetControlID attribute points at the TextBox control.

    Here's the code-behind for the page above:

        using System;

        namespace WebApplication1
        {
            public partial class Default : System.Web.UI.Page
            {
                protected void btnSubmit_Click(object sender, EventArgs e)
                {
                    lblComment.Text = txtComments.Text;
                }
            }
        }

    Preventing XSS/JavaScript Injection Attacks
    If you use an HTML Editor -- any HTML Editor -- in a public-facing web page, then you are opening your website up to Cross-Site Scripting (XSS) attacks. An evil hacker could submit HTML using the HTML Editor which contains JavaScript that steals private information such as other users' passwords. Imagine, for example, that you create a web page which enables your customers to post comments about your website. Furthermore, imagine that you decide to redisplay the comments so every user can see them. In that case, a malicious user could submit JavaScript which displays a dialog asking for a user name and password. When an unsuspecting customer enters their secret password, the script could transfer the password to the hacker's website. So how do you accept HTML content without opening your website up to JavaScript injection attacks? The Ajax Control Toolkit HTML Editor supports the Anti-XSS library. You can use the Anti-XSS library to sanitize any HTML content. The Anti-XSS library, for example, strips away all JavaScript automatically. You can download the Anti-XSS library from NuGet: open the Package Manager Console and execute the command Install-Package AntiXSS. Adding the Anti-XSS library to your application adds two assemblies to your application named AntiXssLibrary.dll and HtmlSanitizationLibrary.dll.
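    You can also call the sanitizer directly. A hedged sketch (assuming the AntiXSS 4.x API, where HtmlSanitizationLibrary.dll exposes the Microsoft.Security.Application.Sanitizer class; the input string is illustrative):

        using System;
        using Microsoft.Security.Application; // from HtmlSanitizationLibrary.dll

        class SanitizerDemo
        {
            static void Main()
            {
                string dirty = "<b>Nice site!</b><script>alert('gotcha');</script>";

                // GetSafeHtmlFragment strips the script block but keeps benign markup.
                string clean = Sanitizer.GetSafeHtmlFragment(dirty);

                Console.WriteLine(clean); // expected: the <b> markup survives, the script does not
            }
        }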
    After you install the Anti-XSS library, you can configure the HTML Editor extender to use it in your application's web.config file:

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
          <configSections>
            <sectionGroup name="system.web">
              <section
                name="sanitizer"
                requirePermission="false"
                type="AjaxControlToolkit.Sanitizer.ProviderSanitizerSection, AjaxControlToolkit" />
            </sectionGroup>
          </configSections>
          <system.web>
            <sanitizer defaultProvider="AntiXssSanitizerProvider">
              <providers>
                <add name="AntiXssSanitizerProvider"
                     type="AjaxControlToolkit.Sanitizer.AntiXssSanitizerProvider"></add>
              </providers>
            </sanitizer>
            <compilation debug="true" targetFramework="4.0" />
            <pages>
              <controls>
                <add tagPrefix="ajaxToolkit"
                     assembly="AjaxControlToolkit"
                     namespace="AjaxControlToolkit" />
              </controls>
            </pages>
          </system.web>
        </configuration>

    Summary
    In this blog entry, I described how you can quickly get started using the new HTML Editor extender -- included with the July 2011 release of the Ajax Control Toolkit -- by installing the Ajax Control Toolkit with NuGet. If you want to learn more about the HTML Editor, then please take a look at the Ajax Control Toolkit sample site: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/HTMLEditorExtender/HTMLEditorExtender.aspx

    Read the article

  • usb mouse/keyboard doesn't work with 3.11.0-12-generic kernel

    - by x-yuri
    I can't use my usb keyboard/mouse after upgrade from raring to saucy. The keyboard works in grub menu and if I boot with the previous kernel version (3.8.0-31-generic). My new kernel version is 3.11.0-12-generic. I've got Mad Catz R.A.T.7 wired USB mouse, Canyon CNL-MBMSO02 wired usb mouse and Logitech diNovo Edge wireless keyboard, connected to computer through Logitech Unifying Receiver. Using PS/2 keyboard I've managed to get some information. dmesg says: [ 0.166273] ACPI: bus type USB registered [ 0.166273] usbcore: registered new interface driver usbfs [ 0.166273] usbcore: registered new interface driver hub [ 0.166273] usbcore: registered new device driver usb ... [ 3.534226] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver [ 3.534228] ehci-pci: EHCI PCI platform driver [ 3.534291] ehci-pci 0000:00:1a.7: setting latency timer to 64 [ 3.534299] ehci-pci 0000:00:1a.7: EHCI Host Controller [ 3.534304] ehci-pci 0000:00:1a.7: new USB bus registered, assigned bus number 1 [ 3.534315] ehci-pci 0000:00:1a.7: debug port 1 [ 3.538218] ehci-pci 0000:00:1a.7: cache line size of 64 is not supported [ 3.538231] ehci-pci 0000:00:1a.7: irq 18, io mem 0xd3325400 [ 3.548017] ehci-pci 0000:00:1a.7: USB 2.0 started, EHCI 1.00 [ 3.548042] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002 [ 3.548045] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 3.548048] usb usb1: Product: EHCI Host Controller [ 3.548050] usb usb1: Manufacturer: Linux 3.11.0-12-generic ehci_hcd [ 3.548053] usb usb1: SerialNumber: 0000:00:1a.7 [ 3.548155] hub 1-0:1.0: USB hub found [ 3.548159] hub 1-0:1.0: 6 ports detected [ 3.548311] ehci-pci 0000:00:1d.7: setting latency timer to 64 [ 3.548319] ehci-pci 0000:00:1d.7: EHCI Host Controller [ 3.548323] ehci-pci 0000:00:1d.7: new USB bus registered, assigned bus number 2 [ 3.548333] ehci-pci 0000:00:1d.7: debug port 1 [ 3.552228] ehci-pci 0000:00:1d.7: cache line size of 64 is not supported [ 3.552239] ehci-pci 0000:00:1d.7: irq 23, io mem 0xd3325000 [ 3.564014] ehci-pci 0000:00:1d.7: USB 2.0 started, EHCI 1.00 [ 3.564044] usb usb2: New USB device found, idVendor=1d6b, idProduct=0002 [ 3.564047] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 3.564050] usb usb2: Product: EHCI Host Controller [ 3.564052] usb usb2: Manufacturer: Linux 3.11.0-12-generic ehci_hcd [ 3.564056] usb usb2: SerialNumber: 0000:00:1d.7 [ 3.564163] hub 2-0:1.0: USB hub found [ 3.564167] hub 2-0:1.0: 6 ports detected [ 3.564274] ehci-platform: EHCI generic platform driver [ 3.564280] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver [ 3.564281] ohci-platform: OHCI generic platform driver [ 3.564287] uhci_hcd: USB Universal Host Controller Interface driver [ 3.564345] uhci_hcd 0000:00:1a.0: setting latency timer to 64 [ 3.564347] uhci_hcd 0000:00:1a.0: UHCI Host Controller [ 3.564352] uhci_hcd 0000:00:1a.0: new USB bus registered, assigned bus number 3 [ 3.564378] uhci_hcd 0000:00:1a.0: irq 16, io base 0x0000f0c0 [ 3.564402] usb usb3: New USB device found, idVendor=1d6b, idProduct=0001 [ 3.564404] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 3.564406] usb usb3: Product: UHCI Host Controller [ 3.564408] usb usb3: Manufacturer: Linux 3.11.0-12-generic uhci_hcd [ 3.564410] usb usb3: SerialNumber: 0000:00:1a.0 [ 3.564478] hub 3-0:1.0: USB hub found [ 3.564482] hub 3-0:1.0: 2 ports detected [ 3.564589] uhci_hcd 0000:00:1a.1: setting latency timer to 64 [ 3.564592] uhci_hcd 0000:00:1a.1: UHCI Host Controller [ 3.564597] uhci_hcd 
0000:00:1a.1: new USB bus registered, assigned bus number 4 [ 3.564623] uhci_hcd 0000:00:1a.1: irq 21, io base 0x0000f0a0 [ 3.564647] usb usb4: New USB device found, idVendor=1d6b, idProduct=0001 [ 3.564649] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 3.564651] usb usb4: Product: UHCI Host Controller [ 3.564653] usb usb4: Manufacturer: Linux 3.11.0-12-generic uhci_hcd [ 3.564654] usb usb4: SerialNumber: 0000:00:1a.1 [ 3.564727] hub 4-0:1.0: USB hub found [ 3.564730] hub 4-0:1.0: 2 ports detected [ 3.564834] uhci_hcd 0000:00:1a.2: setting latency timer to 64 [ 3.564837] uhci_hcd 0000:00:1a.2: UHCI Host Controller [ 3.564843] uhci_hcd 0000:00:1a.2: new USB bus registered, assigned bus number 5 [ 3.564863] uhci_hcd 0000:00:1a.2: irq 18, io base 0x0000f080 [ 3.564885] usb usb5: New USB device found, idVendor=1d6b, idProduct=0001 [ 3.564887] usb usb5: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 3.564889] usb usb5: Product: UHCI Host Controller [ 3.564891] usb usb5: Manufacturer: Linux 3.11.0-12-generic uhci_hcd [ 3.564892] usb usb5: SerialNumber: 0000:00:1a.2 [ 3.564962] hub 5-0:1.0: USB hub found [ 3.564966] hub 5-0:1.0: 2 ports detected [ 3.565073] uhci_hcd 0000:00:1d.0: setting latency timer to 64 [ 3.565076] uhci_hcd 0000:00:1d.0: UHCI Host Controller [ 3.565081] uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 6 [ 3.565101] uhci_hcd 0000:00:1d.0: irq 23, io base 0x0000f060 [ 3.565124] usb usb6: New USB device found, idVendor=1d6b, idProduct=0001 [ 3.565127] usb usb6: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 3.565128] usb usb6: Product: UHCI Host Controller [ 3.565130] usb usb6: Manufacturer: Linux 3.11.0-12-generic uhci_hcd [ 3.565132] usb usb6: SerialNumber: 0000:00:1d.0 [ 3.565195] hub 6-0:1.0: USB hub found [ 3.565198] hub 6-0:1.0: 2 ports detected [ 3.565303] uhci_hcd 0000:00:1d.1: setting latency timer to 64 [ 3.565306] uhci_hcd 0000:00:1d.1: UHCI Host Controller [ 3.565310] uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 7 [ 3.565329] uhci_hcd 0000:00:1d.1: irq 19, io base 0x0000f040 [ 3.565352] usb usb7: New USB device found, idVendor=1d6b, idProduct=0001 [ 3.565354] usb usb7: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 3.565356] usb usb7: Product: UHCI Host Controller [ 3.565358] usb usb7: Manufacturer: Linux 3.11.0-12-generic uhci_hcd [ 3.565359] usb usb7: SerialNumber: 0000:00:1d.1 [ 3.565424] hub 7-0:1.0: USB hub found [ 3.565427] hub 7-0:1.0: 2 ports detected [ 3.565534] uhci_hcd 0000:00:1d.2: setting latency timer to 64 [ 3.565537] uhci_hcd 0000:00:1d.2: UHCI Host Controller [ 3.565541] uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 8 [ 3.565560] uhci_hcd 0000:00:1d.2: irq 18, io base 0x0000f020 [ 3.565584] usb usb8: New USB device found, idVendor=1d6b, idProduct=0001 [ 3.565587] usb usb8: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 3.565588] usb usb8: Product: UHCI Host Controller [ 3.565590] usb usb8: Manufacturer: Linux 3.11.0-12-generic uhci_hcd [ 3.565592] usb usb8: SerialNumber: 0000:00:1d.2 [ 3.565658] hub 8-0:1.0: USB hub found [ 3.565661] hub 8-0:1.0: 2 ports detected ... [ 4.120014] usb 2-3: new high-speed USB device number 2 using ehci-pci ... [ 4.468908] usb 2-3: New USB device found, idVendor=046d, idProduct=0825 [ 4.468912] usb 2-3: New USB device strings: Mfr=0, Product=0, SerialNumber=2 [ 4.468914] usb 2-3: SerialNumber: AF582E10 ... 
[ 5.284019] usb 5-2: new full-speed USB device number 2 using uhci_hcd [ 5.465903] usb 5-2: New USB device found, idVendor=046d, idProduct=0b04 [ 5.465908] usb 5-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0 [ 5.465911] usb 5-2: Product: Logitech BT Mini-Receiver [ 5.465914] usb 5-2: Manufacturer: Logitech [ 5.468948] hub 5-2:1.0: USB hub found [ 5.470898] hub 5-2:1.0: 3 ports detected [ 5.476096] Switched to clocksource tsc [ 5.712099] usb 7-2: new full-speed USB device number 2 using uhci_hcd [ 5.896366] usb 7-2: New USB device found, idVendor=046d, idProduct=c52b [ 5.896370] usb 7-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0 [ 5.896372] usb 7-2: Product: USB Receiver [ 5.896374] usb 7-2: Manufacturer: Logitech [ 6.140016] usb 8-1: new full-speed USB device number 2 using uhci_hcd [ 6.324597] usb 8-1: New USB device found, idVendor=0738, idProduct=1708 [ 6.324603] usb 8-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0 [ 6.324605] usb 8-1: Product: Mad Catz R.A.T.7 Mouse [ 6.324608] usb 8-1: Manufacturer: Mad Catz [ 6.564012] usb 8-2: new low-speed USB device number 3 using uhci_hcd [ 6.746602] usb 8-2: New USB device found, idVendor=1d57, idProduct=0010 [ 6.746608] usb 8-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0 [ 6.746610] usb 8-2: Product: usb mouse with wheel [ 6.746613] usb 8-2: Manufacturer: HID-Compliant Mouse [ 7.337898] usb 5-2.2: new full-speed USB device number 3 using uhci_hcd [ 7.490902] usb 5-2.2: New USB device found, idVendor=046d, idProduct=c713 [ 7.490907] usb 5-2.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3 [ 7.490910] usb 5-2.2: Product: Logitech BT Mini-Receiver [ 7.490913] usb 5-2.2: Manufacturer: Logitech [ 7.490915] usb 5-2.2: SerialNumber: 001F203BD6A7 [ 7.569898] usb 5-2.3: new full-speed USB device number 4 using uhci_hcd [ 7.722901] usb 5-2.3: New USB device found, idVendor=046d, idProduct=c714 [ 7.722906] usb 5-2.3: New USB device strings: Mfr=1, Product=2, SerialNumber=3 [ 7.722909] usb 5-2.3: Product: Logitech BT Mini-Receiver [ 7.722911] usb 5-2.3: Manufacturer: Logitech [ 7.722913] usb 5-2.3: SerialNumber: 001F203BD6A7 lsusb (more output): Bus 002 Device 002: ID 046d:0825 Logitech, Inc. Webcam C270 Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 008 Device 003: ID 1d57:0010 Xenta Bus 008 Device 002: ID 0738:1708 Mad Catz, Inc. Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 007 Device 002: ID 046d:c52b Logitech, Inc. Unifying Receiver Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 005 Device 004: ID 046d:c714 Logitech, Inc. diNovo Edge Keyboard Bus 005 Device 003: ID 046d:c713 Logitech, Inc. Bus 005 Device 002: ID 046d:0b04 Logitech, Inc. Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub More background. Before that I had a problem with logging in to GNOME. During which I upgraded all the packages at one point (apt-get upgrade) and it stopped booting at all (it didn't get to login screen). Then I fixed PATH issue and now I've got this usb-not-working issue. I tried reinstalling kernel, to no effect. Is there anything else I can do to fix or diagnose the problem?

    Read the article

  • Using Telerik's new LINQ implementation to create OData feeds

    This week Telerik released a new LINQ implementation that is simple to use and produces domain models very fast. Built on top of the enterprise-grade OpenAccess ORM, it can connect to any database that OpenAccess can connect to, such as SQL Server, MySQL, Oracle, SQL Azure, VistaDB, etc. While this is a separate LINQ implementation from traditional OpenAccess Entities, you can use the visual designer without ever interacting with OpenAccess; however, you can always hook into the advanced ORM features like caching, fetch plan optimization, etc., if needed. Just to show off how easy our LINQ implementation is to use, I will walk you through building an OData feed using Data Services Update for .NET Framework 3.5 SP1. (Memo to Microsoft: P-L-E-A-S-E hire someone from Apple to name your products.) How easy is it? If you have a fast machine, are skilled with the mouse, and type fast, you can do this in about 60 seconds via three easy steps. (I promise that in about 2-3 weeks you will be able to do this in less than 30 seconds. Stay tuned for that.)

    Step 1 (15-20 seconds): Building your Domain Model
    In your web project in Visual Studio, right click on the project, select Add | New Item, and select Telerik OpenAccess Domain Model as your item template. Give the file a meaningful name as well. Select your database type (SQL Server, SQL Azure, Oracle, MySQL, VistaDB, etc.) and build the connection string. If you already have a Visual Studio connection string saved, this step is trivial. Then select your tables, enter a name for your model and click Finish. In this case I connected to Northwind and selected only Customers, Orders, and Order Details. I named my model NorthwindEntities and will use that in my DataService.

    Step 2 (20-25 seconds): Adding and Configuring your Data Service
    In your web project in Visual Studio, right click on the project, select Add | New Item, select ADO.NET Data Service as your item template, and name your service. In the code-behind for your Data Service you have to make three small changes. Add the name of your Telerik Domain Model (entered in Step 1) as the DataService name (shown on line 6 below as NorthwindEntities), uncomment line 11, and add a * to show all entities. Optionally, if you want to take advantage of the DataService 3.5 updates, add line 13 (and change IDataServiceConfiguration to DataServiceConfiguration on line 9).
        1: using System.Data.Services;
        2: using System.Data.Services.Common;
        3:
        4: namespace Telerik.RLINQ.Astoria.Web
        5: {
        6:     public class NorthwindService : DataService<NorthwindEntities>
        7:     {
        8:         // change the IDataServiceConfiguration to DataServiceConfiguration
        9:         public static void InitializeService(DataServiceConfiguration config)
        10:         {
        11:             config.SetEntitySetAccessRule("*", EntitySetRights.All);
        12:             // take advantage of the "Astoria 3.5 Update" features
        13:             config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        14:         }
        15:     }
        16: }

    Step 3 (~30 seconds): Adding the DataServiceKeys
    You now have to tell your data service what the primary keys of each entity are. To do this you have to create a new code file and create a few partial classes. If you type fast, use copy and paste from your first entity, and use a refactoring productivity tool, you can add these 6-8 lines of code or so in about 30 seconds. This is the most tedious step, but don't worry, I've bribed some of the developers and our next update will eliminate this step completely. Just create a partial class for each entity you have mapped and add the [DataServiceKey] attribute on top of it, along with the key's field name. If an entity has a composite key, pass an array of the key names, as I do on line 15. Create this as a separate file; don't manipulate the generated data access classes in case you want to regenerate them again later (even though that would be much faster).

        1: using System.Data.Services.Common;
        2:
        3: namespace Telerik.RLINQ.Astoria.Web
        4: {
        5:     [DataServiceKey("CustomerID")]
        6:     public partial class Customer
        7:     {
        8:     }
        9:
        10:     [DataServiceKey("OrderID")]
        11:     public partial class Order
        12:     {
        13:     }
        14:
        15:     [DataServiceKey(new string[] { "OrderID", "ProductID" })]
        16:     public partial class OrderDetail
        17:     {
        18:     }
        19:
        20: }

    Done! Time to run the service. Now, let's run the service! Select the svc file, right click, and say View in Browser. You will see your OData service and can interact with it in the browser. Now that you have an OData service set up, you can consume it in one of the many ways that OData is consumed: using LINQ, the Silverlight OData client, Excel PowerPivot, PHP, etc. Happy Data Servicing!
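    For instance, consuming the feed from .NET takes only a few lines. A hedged sketch (the service URL is a placeholder, and the Customer class here is a minimal hand-written client type rather than generated code):

        using System;
        using System.Data.Services.Client; // .NET Data Services ("Astoria") client

        // Minimal client-side type; property names must match the feed's.
        public class Customer
        {
            public string CustomerID { get; set; }
            public string CompanyName { get; set; }
        }

        class Program
        {
            static void Main()
            {
                // Placeholder address -- use wherever NorthwindService.svc is hosted.
                var ctx = new DataServiceContext(
                    new Uri("http://localhost:1234/NorthwindService.svc"));

                // Execute issues an HTTP GET against the Customers entity set.
                foreach (Customer c in ctx.Execute<Customer>(
                    new Uri("Customers", UriKind.Relative)))
                {
                    Console.WriteLine("{0}: {1}", c.CustomerID, c.CompanyName);
                }
            }
        }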

    Read the article

  • SQL SERVER – Shrinking Database is Bad – Increases Fragmentation – Reduces Performance

    - by pinaldave
    Earlier, I had written two articles related to shrinking a database, about why shrinking a database is not good:

    SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server
    SQL SERVER – What the Business Says Is Not What the Business Wants

    I received many comments on why database shrinking is bad. Today we will go over a very interesting example that I have created to demonstrate it. Here are the quick steps of the example:

    - Create a test database
    - Create two tables and populate them with data
    - Check the size of both tables (the size of the database is very low)
    - Check the fragmentation of one table (fragmentation will be very low)
    - Truncate the other table
    - Check the size of the table and the fragmentation of the one table (fragmentation will be very low)
    - SHRINK the database
    - Check the size of the table and the fragmentation of the one table (fragmentation will be very HIGH)
    - REBUILD the index on the one table
    - Check the size of the table (the size of the database is very HIGH) and the fragmentation of the one table (fragmentation will be very low)

    Here is the script for the same:

        USE MASTER
        GO
        CREATE DATABASE ShrinkIsBed
        GO
        USE ShrinkIsBed
        GO
        -- Name of the Database and Size
        SELECT name, (size*8) Size_KB
        FROM sys.database_files
        GO
        -- Create FirstTable
        CREATE TABLE FirstTable (ID INT,
        FirstName VARCHAR(100),
        LastName VARCHAR(100),
        City VARCHAR(100))
        GO
        -- Create Clustered Index on ID
        CREATE CLUSTERED INDEX [IX_FirstTable_ID] ON FirstTable
        (
        [ID] ASC
        ) ON [PRIMARY]
        GO
        -- Create SecondTable
        CREATE TABLE SecondTable (ID INT,
        FirstName VARCHAR(100),
        LastName VARCHAR(100),
        City VARCHAR(100))
        GO
        -- Create Clustered Index on ID
        CREATE CLUSTERED INDEX [IX_SecondTable_ID] ON SecondTable
        (
        [ID] ASC
        ) ON [PRIMARY]
        GO
        -- Insert One Hundred Thousand Records
        INSERT INTO FirstTable (ID, FirstName, LastName, City)
        SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
        'Bob',
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith'
        ELSE 'Brown' END,
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
        WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
        WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
        ELSE 'Houston' END
        FROM sys.all_objects a
        CROSS JOIN sys.all_objects b
        GO
        -- Name of the Database and Size
        SELECT name, (size*8) Size_KB
        FROM sys.database_files
        GO
        -- Insert One Hundred Thousand Records
        INSERT INTO SecondTable (ID, FirstName, LastName, City)
        SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
        'Bob',
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith'
        ELSE 'Brown' END,
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
        WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
        WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
        ELSE 'Houston' END
        FROM sys.all_objects a
        CROSS JOIN sys.all_objects b
        GO
        -- Name of the Database and Size
        SELECT name, (size*8) Size_KB
        FROM sys.database_files
        GO
        -- Check Fragmentations in the database
        SELECT avg_fragmentation_in_percent, fragment_count
        FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'),
        NULL, NULL, 'LIMITED')
        GO

    Let us check the table size and fragmentation. Now let us TRUNCATE the table, SHRINK, and check the size and fragmentation at each step.
        -- TRUNCATE the other table
        TRUNCATE TABLE FirstTable
        GO
        -- Name of the Database and Size
        SELECT name, (size*8) Size_KB
        FROM sys.database_files
        GO
        -- Check Fragmentations in the database
        SELECT avg_fragmentation_in_percent, fragment_count
        FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'),
        NULL, NULL, 'LIMITED')
        GO
        -- Shrink the Database
        DBCC SHRINKDATABASE (ShrinkIsBed);
        GO
        -- Name of the Database and Size
        SELECT name, (size*8) Size_KB
        FROM sys.database_files
        GO
        -- Check Fragmentations in the database
        SELECT avg_fragmentation_in_percent, fragment_count
        FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'),
        NULL, NULL, 'LIMITED')
        GO

    You can clearly see that after the TRUNCATE, the size of the database is not reduced; it is still the same as before the TRUNCATE operation. After the shrink operation, we were able to reduce the size of the database. If you notice the fragmentation, though, it is considerably high. The major problem with the shrink operation is that it increases the fragmentation of the database to a very high value. Higher fragmentation reduces the performance of the database, as reading from that particular table becomes very expensive. One of the ways to reduce the fragmentation is to rebuild the index. Let us rebuild the index and observe the fragmentation and database size.

        -- Rebuild Index on SecondTable
        ALTER INDEX IX_SecondTable_ID ON SecondTable REBUILD
        GO
        -- Name of the Database and Size
        SELECT name, (size*8) Size_KB
        FROM sys.database_files
        GO
        -- Check Fragmentations in the database
        SELECT avg_fragmentation_in_percent, fragment_count
        FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'),
        NULL, NULL, 'LIMITED')
        GO

    You can notice that after rebuilding, fragmentation reduces to a very low value (almost the same as the original value); however, the database size grows way higher than the original. Before rebuilding, the size of the database was 5 MB; after rebuilding, it is around 20 MB. A regular index rebuild happens in the same user database where the index is placed, and this usually increases the size of the database. Look at the irony of shrinking the database:
    One person shrinks the database to gain space (thinking it will help performance), which leads to an increase in fragmentation (reducing performance). To reduce the fragmentation, one rebuilds the index, which makes the database grow way beyond its original size (before shrinking). So, by shrinking, one usually did not gain what one was looking for. Rebuilding the index is not the best suggestion either, as that will make the database grow again. I have always remembered the excellent post from Paul Randal on why shrinking the database is bad, and I suggest everyone read it for accuracy and an interesting conversation. Let us run the following script, where we shrink the database and REORGANIZE:

        -- Name of the Database and Size
        SELECT name, (size*8) Size_KB
        FROM sys.database_files
        GO
        -- Check Fragmentations in the database
        SELECT avg_fragmentation_in_percent, fragment_count
        FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'),
        NULL, NULL, 'LIMITED')
        GO
        -- Shrink the Database
        DBCC SHRINKDATABASE (ShrinkIsBed);
        GO
        -- Name of the Database and Size
        SELECT name, (size*8) Size_KB
        FROM sys.database_files
        GO
        -- Check Fragmentations in the database
        SELECT avg_fragmentation_in_percent, fragment_count
        FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'),
        NULL, NULL, 'LIMITED')
        GO
        -- Reorganize Index on SecondTable
        ALTER INDEX IX_SecondTable_ID ON SecondTable REORGANIZE
        GO
        -- Name of the Database and Size
        SELECT name, (size*8) Size_KB
        FROM sys.database_files
        GO
        -- Check Fragmentations in the database
        SELECT avg_fragmentation_in_percent, fragment_count
        FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'),
        NULL, NULL, 'LIMITED')
        GO

    You can see that REORGANIZE does not increase the size of the database or remove the fragmentation. Again, I in no way suggest that REORGANIZE is the solution here; this is purely an observation using the demo. Read the blog post of Paul Randal. The following script will clean up the database:

        -- Clean up
        USE MASTER
        GO
        ALTER DATABASE ShrinkIsBed SET SINGLE_USER WITH ROLLBACK IMMEDIATE
        GO
        DROP DATABASE ShrinkIsBed
        GO

    There are a few valid cases for shrinking a database as well, but those are not covered in this blog post; we will cover that area some other time in the future. Additionally, one can rebuild the index in tempdb as well, and we will also talk about that in the future. Brent has written a good summary blog post as well. Are you shrinking your database? Well, when are you going to stop shrinking it?

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • DialogFX: A New Approach to JavaFX Dialogs

    - by HecklerMark
    How would you like a quick and easy drop-in dialog box capability for JavaFX? That's what I was thinking when a weekend presented itself. And never being one to waste a good weekend... :-)

    After doing some "roll-your-own" basic dialog building for a JavaFX app, I recently stumbled across Anton Smirnov's work on GitHub. It was a good start, but it wasn't exactly what I was after, and ideas just kept popping up of things I'd do differently. I wanted something a bit more streamlined, a bit easier to just "drop in and use". And so DialogFX was born.

    DialogFX wasn't intended to be overly fancy or overly clever - just useful and robust. Here were my goals:

    - Easy to use. A dialog "system" should be so simple to use that a new developer can drop it in quickly with nearly no learning curve. A seasoned developer shouldn't even have to think, just tap in a few lines and go. Why should dialogs slow "actual development"? :-)
    - Defaults. If you don't specify something (dialog type, buttons, etc.), a good dialog system should still work. It may not be pretty, but it shouldn't throw gears.
    - Sharable. It's all open source. Even the icons are in the commons, so they can be reused at will.

    Let's take a look at some screen captures and the code used to produce them.

    DialogFX INFO dialog (screen captures: Windows, Mac)

    Sample code:

        DialogFX dialog = new DialogFX();
        dialog.setTitleText("Info Dialog Box Example");
        dialog.setMessage("This is an example of an INFO dialog box, created using DialogFX.");
        dialog.showDialog();

    DialogFX ERROR dialog (screen captures: Windows, Mac)

    Sample code:

        DialogFX dialog = new DialogFX(Type.ERROR);
        dialog.setTitleText("Error Dialog Box Example");
        dialog.setMessage("This is an example of an ERROR dialog box, created using DialogFX.");
        dialog.showDialog();

    DialogFX ACCEPT dialog (screen captures: Windows, Mac)

    Sample code:

        DialogFX dialog = new DialogFX(Type.ACCEPT);
        dialog.setTitleText("Accept Dialog Box Example");
        dialog.setMessage("This is an example of an ACCEPT dialog box, created using DialogFX.");
        dialog.showDialog();

    DialogFX QUESTION dialog (Yes/No) (screen captures: Windows, Mac)

    Sample code:

        DialogFX dialog = new DialogFX(Type.QUESTION);
        dialog.setTitleText("Question Dialog Box Example");
        dialog.setMessage("This is an example of a QUESTION dialog box, created using DialogFX. Would you like to continue?");
        dialog.showDialog();

    DialogFX QUESTION dialog (custom buttons) (screen captures: Windows, Mac)

    Sample code:

        List<String> buttonLabels = new ArrayList<>(2);
        buttonLabels.add("Affirmative");
        buttonLabels.add("Negative");

        DialogFX dialog = new DialogFX(Type.QUESTION);
        dialog.setTitleText("Question Dialog Box Example");
        dialog.setMessage("This is an example of a QUESTION dialog box, created using DialogFX. This also demonstrates the automatic wrapping of text in DialogFX. Would you like to continue?");
        dialog.addButtons(buttonLabels, 0, 1);
        dialog.showDialog();

    A couple of things to note: you may have noticed the addButtons(buttonLabels, 0, 1) call in that last example. You can pass custom button labels in and designate the index of the default button (responding to the ENTER key) and the cancel button (for ESCAPE). Optional parameters, of course, but nice when you may want them. Also, the showDialog() method actually returns the index of the button pressed.
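    For instance, here is a hedged sketch of acting on that return value, reusing the custom-button example above (the printed messages are illustrative):

        // showDialog() blocks until the user picks a button and returns
        // that button's index (0 = "Affirmative", 1 = "Negative" here).
        int choice = dialog.showDialog();

        if (choice == 0) {
            System.out.println("User chose Affirmative - continuing.");
        } else {
            System.out.println("User chose Negative - cancelling.");
        }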
    Rather than create EventHandlers in the dialog that really have little to do with the dialog itself, you can respond to the user's choice within the calling object. Or not. Again, it's your choice. :-) And finally, I've Javadoc'ed the code in the main places. Hopefully, this will make it easy to get up and running quickly and with a minimum of fuss.

    How Do I Get (Git?) It?
    To try out DialogFX, just point your browser to the DialogFX GitHub repository and download away! Please take a look, try it out, and let me know what you think. All feedback is welcome!

    All the best,
    Mark

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #035

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes. Let me know which one of the following is your favorite article from memory lane.

    2007

    Row Overflow Data Explanation
    In SQL Server 2005, one table row can contain more than one varchar(8000) field. One more thing: the exclusions have exclusions as well; the limit of 8,000 bytes maximum width per individual column does not apply to varchar(max), nvarchar(max), varbinary(max), text, image or xml data type columns.

    Comparison: Index Fragmentation, Index De-Fragmentation, Index Rebuild – SQL SERVER 2000 and SQL SERVER 2005
    An old but gold article. It talks about lots of concepts related to indexes and the differences from the earlier version to the newer version. I strongly suggest that everyone read this article just to understand how SQL Server has moved forward with the technology.

    Improvements in TempDB
    SQL Server 2005 came up with quite a lot of improvements, and this blog post describes and explains them. If you ask me for my most favorite article from my early career, I must point to this one, as I personally learned a lot of new things when I wrote it.

    Recompile All The Stored Procedures on a Specific Table
    I prefer to recompile all the stored procedures on a table which has faced a mass insert or update. sp_recompile marks stored procedures to recompile the next time they execute. This blog post explains the same with the help of a script.

    2008

    SQLAuthority Download – SQL Server Cheatsheet
    You can download and print this cheat sheet and use it for your personal reference. If you have any suggestions, please let me know and I will see if I can update this SQL Server cheat sheet.

    Difference Between DBMS and RDBMS
    What is the difference between DBMS and RDBMS? DBMS – Data Base Management System; RDBMS – Relational Data Base Management System, or Relational DBMS.

    High Availability – Hot Add Memory
    Hot Add CPU and Hot Add Memory are extremely interesting features of SQL Server; however, I personally have not witnessed them being heavily used. These features also have a few restrictions. I blogged about them in detail.

    2009

    Delete Duplicate Rows
    I have demonstrated in this blog post how one can identify and delete duplicate rows.

    Interesting Observation of Logon Trigger On All Servers – Solution
    The question I put forth in my previous article was: in a single login, why does the trigger fire multiple times when it should be fired only once? I received numerous answers in the thread as well as in my MVP private news group. Now, let us discuss the answer. The answer is: it happens because multiple SQL Server services are running and intellisense is turned on. The blog post demonstrates the same with the help of SQL scripts.

    Management Studio New Features
    I have selected my favorite 5 features and blogged about them:
    - IntelliSense for Query Editing
    - Multi Server Query
    - Query Editor Regions
    - Object Explorer Enhancements
    - Activity Monitors

    Maximum Number of Indexes per Table
    One of the questions I asked in my user group was: what is the maximum number of indexes per table? I received lots of answers to this question, but only two answers were correct. Let us now take a look at them in this blog post.
    2010

    Default Statistics on Column – Automatic Statistics on Column
    The truth is, statistics can exist for a table even though there is no index on it. If you have the auto-create and/or auto-update statistics feature turned on for a SQL Server database, statistics will be automatically created on a column based on a few conditions. Please read my previously posted article, SQL SERVER – When are Statistics Updated – What triggers Statistics to Update, for the specific conditions under which statistics are updated.

    2011

    T-SQL Scripts to Find Maximum between Two Numbers
    In this blog post there are two different scripts listed which demonstrate ways to find the maximum of two numbers. I need your help: which one of the scripts do you think is the most accurate way to find the maximum number?

    Find Details for Statistics of Whole Database – DMV – T-SQL Script
    I was recently asked whether there is a single script which can provide all the necessary details about statistics for any database. This question made me write the following script. I was initially planning to use the sp_helpstats command, but I remembered that it is marked to be deprecated in the future.

    2012

    Introduction to Function SIGN
    The SIGN function is a very fundamental function. It returns the value 1, -1 or 0. If your value is negative it returns -1, and if it is positive it returns +1. Let us start with a simple small example.

    Template Browser – A Very Important and Useful Feature of SSMS
    Templates are like a quick cheat sheet or quick reference. Templates are available to create objects like databases, tables, views, indexes, stored procedures, triggers, statistics, and functions. Templates are also available for Analysis Services. The template scripts contain parameters to help you customize the code. You can use the Replace Template Parameters dialog box to insert values into the script.

    An invalid floating point operation occurred
    If you run any of the above functions, they will give you an error related to an invalid floating point operation. Honestly, there is no workaround except passing the function appropriate values. SQRT of a negative number would give you a result in imaginary numbers, which is not supported at this point of time, and LOG of a negative number is not possible (because a logarithm is the inverse of an exponential function, and an exponential function is NEVER negative).

    Validating Spatial Object with IsValidDetailed Function
    SQL Server 2012 has introduced the new function IsValidDetailed(). This function has made my life very easy. In simple words, this function will check whether the spatial object passed is valid or not. If it is valid, it will report that it is valid. If the spatial object is not valid, it will return the answer that it is not valid along with the reason. This makes it very easy to debug the issue and make the necessary correction.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • New Feature in ODI 11.1.1.6: ODI for Big Data

    - by Julien Testut
    By Ananth Tirupattur. Starting with Oracle Data Integrator 11.1.1.6.0, ODI offers a solution to process Big Data. This post provides an overview of this feature. With all the buzz around Big Data, and before getting into the details of ODI for Big Data, I will provide a brief introduction to Big Data and the Oracle solution for Big Data. So, what is Big Data? Big Data includes: structured data (data from relational data stores and XML data stores), semi-structured data (such as data from weblogs), and unstructured data (such as data from text blobs and images). Traditionally, business decisions are based on information gathered from transactional data. For example, transactional data from CRM applications is fed to a decision system for analysis and decision making. Products such as ODI play a key role in enabling decision systems. However, with the emergence of massive amounts of semi-structured and unstructured data, it is important for a decision system to include them in the analysis to achieve better decision-making capability. While there is an abundance of opportunities for businesses to gain competitive advantages, processing Big Data has its challenges: the volume of data, the velocity of data (the high rate at which data is generated), and the variety of data. In order to address these challenges and convert them into opportunities, we need an appropriate framework, platform and the right set of tools. Hadoop is an open source framework – a highly scalable, fault-tolerant system for the storage and processing of large amounts of data. Hadoop provides 2 key services: distributed and reliable storage, called the Hadoop Distributed File System (HDFS), and a framework for parallel data processing called Map-Reduce. Innovations in Hadoop and its related technology continue to evolve rapidly; it is therefore highly recommended to follow information on the web to keep up with the latest developments. Oracle's vision is to provide a comprehensive solution to address the challenges faced by Big Data. Oracle provides the necessary hardware, software and tools for processing Big Data. The Oracle solution includes: Big Data Appliance, Oracle NoSQL Database, Cloudera distribution for Hadoop, Oracle R Enterprise (R is a statistical package which is very popular among data scientists), the ODI solution for Big Data, and Oracle Loader for Hadoop for loading data from Hadoop to Oracle. Further details can be found here: http://www.oracle.com/us/products/database/big-data-appliance/overview/index.html ODI Solution for Big Data: ODI's goal is to minimize the need to understand the complexity of the Hadoop framework and to simplify the adoption of Big Data processing in an enterprise. ODI provides the capabilities for an integrated architecture for processing Big Data, including the capability to load data into Hadoop, process data in Hadoop, and load data from Hadoop into Oracle. ODI is expanding its support for Big Data by providing the following out-of-the-box Knowledge Modules (KMs):
IKM File to Hive (LOAD DATA) – Load unstructured data from a file (local file system or HDFS) into Hive
IKM Hive Control Append – Transform and validate structured data on Hive
IKM Hive Transform – Transform unstructured data on Hive
IKM File/Hive to Oracle (OLH) – Load processed data in Hive to Oracle
RKM Hive – Reverse-engineer Hive tables to generate models
Using the loading KM you can map files (local and HDFS files) to the corresponding Hive tables. For example, you can map weblog files categorized by date into a correspondingly partitioned Hive table schema. Using the Hive Control Append KM you can validate and transform data in Hive. In the example below, two source Hive tables are joined and mapped to a target Hive table. The Hive Transform KM facilitates the processing of semi-structured data in Hive. In the example below, the data from a weblog is processed using a Perl script and mapped to a target Hive table. Using the Oracle Loader for Hadoop (OLH) KM you can load data from a Hive table or HDFS to a corresponding table in Oracle. OLH is available as a standalone product. ODI greatly enhances OLH's capability by generating the configuration and mapping files for OLH based on the configuration provided in the interface and the KM options. ODI seamlessly invokes OLH when executing the scenario. In the example below, an HDFS file is mapped to a table in Oracle. Development and Deployment: The following diagram illustrates the development and deployment of the ODI solution for Big Data. Using ODI Studio on your development machine, create and develop the ODI solution for processing Big Data by connecting to a MySQL DB or Oracle database on a BDA machine or Hadoop cluster. Schedule the ODI scenarios to be executed on the ODI agent deployed on the BDA machine or Hadoop cluster. The ODI solution for Big Data provides several exciting new capabilities to facilitate the adoption of Big Data in an enterprise. You can find more information about the Oracle Big Data connectors on OTN.
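As a concrete illustration of the date-partitioned weblog mapping described above, a Hive target table might be declared like this – a hedged HiveQL sketch, with table and column names that are hypothetical rather than taken from the post:

    -- Hypothetical HiveQL target for date-partitioned weblog files
    CREATE EXTERNAL TABLE weblogs (
      client_ip  STRING,
      request    STRING,
      status     INT
    )
    PARTITIONED BY (log_date STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
    LOCATION '/data/weblogs';

The IKM File to Hive (LOAD DATA) module would then take care of loading each day's file into the matching partition.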
You can find an overview of all the new features introduced in ODI 11.1.1.6 in the following document: ODI 11.1.1.6 New Features Overview

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #052

    - by Pinal Dave
    Let us continue with the final episode of the Memory Lane Series. Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane.
2007
Set Server Level FILLFACTOR Using T-SQL Script – Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index page during index creation or alteration. fillfactor must be an integer value from 1 to 100. The default is 0.
Limitation of Online Index Rebuild Operation – Online operation means that while the operation is happening the database remains in normal operational condition: the processes participating in the online operation do not require exclusive access to the database.
Get Permissions of My Username / Userlogin on Server / Database – A few days ago, I was invited to one of the largest database companies. I was asked to review the database schema and propose changes to it. A special username/login was created for me so that I could review their database. I was very much interested to know what kind of permissions I had been assigned at the server level and the database level, and I did not feel like asking the Sr. DBA about permissions.
Simple Example of WHILE Loop With CONTINUE and BREAK Keywords – This question is one of those questions which is very simple and most users get it correct; however, a few users find it confusing the first time. I have tried to explain the usage of a simple WHILE loop in the first example. The BREAK keyword stops the WHILE loop, and control moves to the next statement after the loop. The CONTINUE keyword skips all the statements after it, and control is sent back to the first statement of the WHILE loop.
Forced Parameterization and Simple Parameterization – T-SQL and SSMS – When the PARAMETERIZATION option is set to FORCED, any literal value that appears in a SELECT, INSERT, UPDATE or DELETE statement is converted to a parameter during query compilation. When the PARAMETERIZATION database option is set to SIMPLE, the SQL Server query optimizer may choose to parameterize the queries.
2008
Transaction and Local Variables – Swap Variables – Update All At Once Concept – Summary: transactions have no effect on memory variables. When an UPDATE statement is applied to any table (physical or memory), all the updates are applied at one time, together, when the statement is committed. First of all, I suggest that you read the article listed above about the effect of transactions on local variables. As seen there, local variables are independent of any transaction effect.
Simulate INNER JOIN using LEFT JOIN statement – Performance Analysis – Just a day ago, while working with JOINs, I found one interesting observation, which prompted me to create the following example. Before we continue, let me make very clear that INNER JOIN should be used wherever it can be used, and simulating INNER JOIN using any other JOIN will degrade performance. If there is scope to convert any OUTER JOIN to an INNER JOIN, it should be done with priority.
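To make the BREAK/CONTINUE description above concrete, here is a minimal T-SQL sketch (the values are arbitrary):

    -- Counts 1..10 but skips 4 (CONTINUE) and stops entirely at 8 (BREAK)
    DECLARE @i INT = 0;
    WHILE @i < 10
    BEGIN
        SET @i = @i + 1;
        IF @i = 4 CONTINUE;   -- skip the rest of this iteration
        IF @i = 8 BREAK;      -- exit the loop completely
        PRINT @i;             -- prints 1, 2, 3, 5, 6, 7
    END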
2009
Introduction to Business Intelligence – Important Terms & Definitions – Business intelligence (BI) is a broad category of application programs and technologies for gathering, storing, analyzing, and providing access to data from various data sources, thus providing enterprise users with reliable and timely information and analysis for improved decision making.
Difference Between Candidate Keys and Primary Key – Candidate Key: a Candidate Key can be any column or a combination of columns that can qualify as a unique key in the database. There can be multiple Candidate Keys in one table, and each Candidate Key can qualify as the Primary Key. Primary Key: a Primary Key is a column or a combination of columns that uniquely identifies a record. Only one Candidate Key can be the Primary Key.
2010
Taking Multiple Backups of a Database in a Single Command – Mirrored Database Backup – I recently had a very interesting experience. In one of my recent consultancy engagements, I was told by our client that they were going to take a backup of the database and also take a copy of it at the same time. I said that was surely possible if they used the mirror command. In addition, they told me that whenever they take two copies of the database, the size of the database is always reduced. Now this was not clear to me; I said it was not possible, and so I asked them to show me the script.
Corrupted Backup File and Unsuccessful Restore – The CTO, who was also present at the location, got very upset with this situation. He then asked when the last successful restore test had been done. As expected, the answer was NEVER – no successful restore test had ever been done. During that time, I was present and could clearly see the stress, confusion, carelessness and anger around me. I did not appreciate the feeling, and I was pretty sure that no one there wanted that atmosphere either.
2011
TRACEWRITE – Wait Type – Wait Related to Buffer and Resolution – SQL Trace is a SQL Server database engine technology which monitors specific events generated when various actions occur in the database engine. When any event is fired, it goes through various stages as well as various routes. One of the routes is the Trace I/O Provider, which sends data to its final destination either as a file or a rowset.
DATEDIFF – Accuracy of Various Dateparts – If you want accuracy in seconds, you need to use a different approach. In the first example, the accurate method is to find the number of seconds first and then divide it by 60 to convert it into minutes.
Dedicated Access Control for SQL Server Express Edition – http://www.youtube.com/watch?v=1k00z82u4OI
Book Signing at SQLPASS
2012
Who I Am And How I Got Here – True Story as Blog Post – If there were a shortcut to success, I would want to know it. I learned SQL Server the hard way and I am still learning. There are so many things I have to learn, and there is not enough time to learn everything we want to learn. I am constantly working on it every day. I welcome you to join my journey as well. Please join me in my journey to learn SQL Server – the more the merrier.
Vacation, Travel and Study – A New Concept – Even those who have advanced degrees and went to college for years, or even decades, find studying hard. There is a difference between studying for a career and studying for a certification. At least to get a degree there is a variety of subjects, with labs, exams, and practice problems to make things more interesting.
Order By Numeric Values Formatted as String – We have a table which has a column containing alphanumeric data. The data always has an integer as the first part and a string as the later part. The business need is to order the data based on the first part of the alphanumeric value, which is an integer. Now the problem is that no matter how we use ORDER BY, the result is not produced as expected. Let us understand this with an example.
Resolving SQL Server Connection Errors – SQL in Sixty Seconds #030 – Video – One of the most famous errors related to SQL Server is about connecting to SQL Server itself. Here is how it goes: most of the time, developers have worked with SQL Server and know pretty much every error they face during development. However, they hardly ever install a fresh SQL Server. As the installation of SQL Server is a rare occasion – unless you are a DBA who is responsible for such instances – the errors faced during installation are encountered pretty rarely as well. http://www.youtube.com/watch?v=1k00z82u4OI Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
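Returning to the ORDER BY puzzle above, one common way to solve it is to extract the leading integer and order by that – a hedged T-SQL sketch, where the table and column names are hypothetical:

    -- Order alphanumeric values like '2-Apple', '10-Banana' by their numeric prefix
    SELECT Code
    FROM   dbo.Items
    ORDER BY CAST(LEFT(Code, PATINDEX('%[^0-9]%', Code + 'X') - 1) AS INT);

Appending 'X' guarantees that PATINDEX always finds a non-digit character, so purely numeric values do not break the expression.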

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #032

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane.
2007
Complete Series of Database Coding Standards and Guidelines – SQL SERVER Database Coding Standards and Guidelines – Introduction; SQL SERVER – Database Coding Standards and Guidelines – Part 1; SQL SERVER – Database Coding Standards and Guidelines – Part 2; SQL SERVER Database Coding Standards and Guidelines Complete List Download.
Explanation and Example – SELF JOIN – This applies when all of the data you require is contained within a single table, but the data you need to extract is related to other data in the same table. Examples of this type of data relate to employee information, where the table may have both an employee's ID number for each record and also a field that displays the ID number of the employee's supervisor or manager. To retrieve the data, the table is required to relate/join to itself.
Insert Multiple Records Using One Insert Statement – Use of UNION ALL – This is a very interesting question I received from a new developer: how can I insert multiple values in a table using only one insert? When multiple records are to be inserted in a table, the following is the common way to do it using T-SQL.
Function to Display Current Week Date and Day – Weekly Calendar – A straight blog post with a script to find the current week's dates and days based on the parameters passed to the function.
2008
In my early years, I had almost the same confusion as many developers have in their earlier years. Here are two of the interesting questions which I attempted to answer back then. Even if you are an experienced developer, you may still like to read the following two: Order Of Columns In Index; Order of Conditions in WHERE Clauses.
Example of DISTINCT in Aggregate Functions – Have you ever used DISTINCT with an aggregate function? Here is a simple example of how users can do it.
Create a Comma Delimited List Using SELECT Clause From Table Column – A straight-to-script example where I explain how to do something easily and quickly.
Compound Assignment Operators – SQL SERVER 2008 introduced the new concept of compound assignment operators, which have been available in many other programming languages for quite some time. A compound assignment operator is an operator where a variable is operated upon and assigned in the same statement.
PIVOT and UNPIVOT Table Examples – Here is a very interesting question – the answer can be both YES and NO: "If we PIVOT any table and UNPIVOT that table, do we get our original table?" Read the blog post for the explanation.
2009
What is an Interim Table – Simple Definition of Interim Table – The interim table is a table that is generated by joining two tables, and it is not the final result table. In other words, when two tables are joined they create an interim table as the resultset, but the resultset is not final yet. It may be possible that more tables are about to be joined onto the interim table, and more operations are still to be applied to it (e.g. ORDER BY, HAVING, etc.). Besides, it is possible that there is no interim table; sometimes the final table is what is generated when the query is run.
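As a refresher on the multi-row insert trick from the 2007 list above, here is a minimal T-SQL sketch (the table and values are hypothetical):

    -- Insert several rows with a single INSERT using UNION ALL
    INSERT INTO dbo.Fruits (Id, Name)
    SELECT 1, 'Apple'  UNION ALL
    SELECT 2, 'Banana' UNION ALL
    SELECT 3, 'Cherry';

From SQL Server 2008 onwards, the same can be written more compactly with the VALUES row constructor: INSERT INTO dbo.Fruits (Id, Name) VALUES (1, 'Apple'), (2, 'Banana'), (3, 'Cherry');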
2010
Stored Procedure and Transactions – If a stored procedure were transactional, it should roll back the complete transaction when it encounters any error. Well, that does not happen in this case, which proves that a stored procedure does not by itself provide a transactional feature to a batch of T-SQL.
Generate Database Script for SQL Azure – When talking about SQL Azure, the most common complaint I hear is that the script generated from a stand-alone SQL Server database is not compatible with SQL Azure. This was true for some time, for sure, but not anymore. If you have SQL Server 2008 R2 installed, you can follow the guideline below to generate a script which is compatible with SQL Azure.
Convert IN to EXISTS – Performance Talk – It is NOT necessarily the case that every time IN is replaced by EXISTS it gives better performance. However, in our case listed above, it does for sure. You can read about this subject in the associated blog post.
Subquery or Join – Various Options – SQL Server Engine Knows the Best – Every time there is a performance tuning exercise, I hear developers debate: some prefer subqueries and some prefer joins. In this two-part blog post, I explain the same in detail with examples. Part 1 | Part 2
Merge Operations – Insert, Update, Delete in Single Execution – MERGE is a new feature that provides an efficient way to perform multiple DML operations. In earlier versions of SQL Server, we had to write separate statements to INSERT, UPDATE, or DELETE data based on certain conditions; however, by using the MERGE statement, we can include the logic of such data changes in one statement that checks when the data is matched and updates it, and, when the data is unmatched, inserts it.
2011
Puzzle – Statistics are not Updated but are Created Once – Here is the quick scenario of my setup: create a table; insert 1,000 records; check the statistics; now insert 10 times more records (10,000); check the statistics – they will NOT be updated – WHY?
Question to You – When to use a Function and When to use a Stored Procedure – Personally, I believe that they are different things – they cannot be compared. I would say it is like comparing apples and oranges. Each has its own unique use. However, they can be used interchangeably many times, and in real life (i.e., production environments) I have personally seen both of these being used interchangeably many times. This is the precise reason for asking this question.
2012
In 2012 I had two interesting series running on the blog. If there is no fun in learning, the learning becomes a burden. For that reason, I decided to build a multi-part quiz around SEQUENCE. The quiz was to identify the next value of the sequence. I encourage all of you to take part in this fun quiz: Guess the Next Value – Puzzle 1; Guess the Next Value – Puzzle 2; Guess the Next Value – Puzzle 3; Guess the Next Value – Puzzle 4.
Simple Example to Configure Resource Governor – Introduction to Resource Governor – Resource Governor is a feature which can manage SQL Server workload and system resource consumption. We can limit the amount of CPU and memory consumption by limiting/governing/throttling resources on the SQL Server. If different workloads are running on SQL Server and each workload needs different resources, or when workloads compete for resources with each other and affect the performance of the whole server, Resource Governor becomes very important.
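Here is a minimal MERGE sketch along the lines described above (the table and column names are hypothetical):

    -- Upsert: update matched rows, insert unmatched ones, in a single statement
    MERGE dbo.TargetProducts AS t
    USING dbo.SourceProducts AS s
        ON t.ProductId = s.ProductId
    WHEN MATCHED THEN
        UPDATE SET t.Price = s.Price
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (ProductId, Price) VALUES (s.ProductId, s.Price);

A WHEN NOT MATCHED BY SOURCE THEN DELETE clause can be added to complete the insert/update/delete trio in one execution.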
Tricks to Replace SELECT * with Column Names – SQL in Sixty Seconds #017 – Video – Why avoid SELECT *:
- It retrieves unnecessary columns and increases network traffic
- When new columns are added, views need to be refreshed manually
- It leads to usage of a sub-optimal execution plan
- It uses the clustered index in most cases instead of the optimal index
- It is difficult to debug
SQL SERVER – Load Generator – Free Tool From CodePlex – The best part of this SQL Server Load Generator is that users can run multiple simultaneous queries against SQL Server using different login accounts and different application names. The interface of the tool is extremely easy to use and very intuitive as well.
A Puzzle – Swap Value of Column Without Case Statement – Let us assume there is a single column in the table called Gender. The challenge is to write a single UPDATE statement which will flip or swap the value in the column. For example, if the value in the gender column is 'male', swap it with 'female', and if the value is 'female', swap it with 'male'. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
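One possible approach to the swap puzzle above (a hedged sketch of my own, not necessarily the puzzle's official answer; the table name is hypothetical) concatenates both values and slices out the one that is not currently stored:

    -- 'malefemale' holds both values back to back:
    --   positions 1-4  = 'male', positions 5-10 = 'female'
    -- LEN(Gender) = 4 -> SUBSTRING(..., 5, 6) = 'female'
    -- LEN(Gender) = 6 -> SUBSTRING(..., 1, 4) = 'male'
    UPDATE dbo.People
    SET Gender = SUBSTRING('malefemale', 13 - 2 * LEN(Gender), 10 - LEN(Gender));

A single UPDATE flips every row, and no CASE expression is involved.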

    Read the article

  • VNIC - New feature of AK8 - Working with VNICs

    - by Steve Tunstall
    One of the important new features of the AK8 code is the ability to use multiple IP addresses on the same physical network port. This feature is called VNICs, or Virtual NICs. It means we no longer "burn" a whole port in a cluster when one cluster peer owns a network port. Traditionally, we have had to leave Net0 empty on controller 2, because it was used for managing controller 1, and vice versa for Net1 on controller 1. Then, if you had data going over 10GigE ports, you probably only had half of your ports running at any given time, while the partner 10GigE port on the other controller just sat there doing nothing, unless the first controller went down. What a waste. Those days are over.  I want to thank and give a big shout-out to our good partner, OnX Enterprise Solutions, for allowing me to come into their lab and play around with their 7320 to do this demo. They let me make a big mess of their lab for the day as I played around with VNICs. If you're looking for a partner who knows Oracle well and can also piece together a solution from multiple vendors to get you what you need, OnX is a good choice. If you would like to talk to your local OnX rep, you can contact Scott Gill at [email protected] and he can point you in the right direction for your area.  Here we go: Here is what your Datalinks window looks like BEFORE you upgrade to AK8. Here's what the same screen looks like after you upgrade. See the new box? So here is my current network setup. I have my 4 physical interfaces set up, each with an IP address. If I ping them, no problems.  So I can ping 180, 181, 251, and 252. However, if I try to ping 240, it does not work, as the 240 address is not being used by any of these interfaces, right? Let's change that. Here, I'm going to make a new Datalink by clicking the Datalink "plus sign" button. I will check the VNIC box and tell it to use igb2, even though another interface is already using it. Now, I will create a new Interface, and choose "v_dl2" for its datalink. My new network screen looks like this. A few things to take note of here. First, when I click the "igb2" device, it only highlights dl2 and int2. It does not highlight v_dl2 or v_int2. I think it should, but OK, it looks like VNICs don't highlight when you click the device. Second, note how the underscore character in v_dl2 and v_int2 does not seem to show on this screen. You can see it plainly if you go in and edit them, but from here it looks like a space instead of an underscore. Just a cosmetic bug, but something to be aware of. Now, if I click the VNIC datalink "v_dl2", on the other hand, it DOES highlight the device it belongs to, as it should. Seen here: Note that it did not, however, highlight int2 with it, even though int2 is connected to igb2. That's because we clicked v_dl2, which int2 has nothing to do with. So I'm OK with that. So let's try pinging 240 now. Of course, it works great.  So I now make another VNIC, and call it v_dl3 using igb3, and v_int3 with an address of 241. I then set up three shares, using ports 251, 240, and 241. Remember that IPs 251 and 240 are both using the same physical port, igb2, and IP 241 is using port igb3. Next, I copy a folder full of stuff over to all three shares at the same time. I have analytics going so I can see the traffic. My top chart is showing the logical interfaces, and the bottom chart is showing the physical ports. Sure enough, look at the igb2 and vnic1 interfaces: together they equal the traffic going over the igb2 physical port on the second chart.
VNIC2, on the other hand, gets igb3 all to itself. This would work the same way with 10Gig or InfiniBand ports. You can now have multiple IP addresses, and even completely different subnets, sharing the same physical ports. You may need to make route table entries for that. This allows us to use all of the ports you paid for, with no more waste.  Very, very cool.  One small "bug" I found when doing this. It's really not a bug – it was designed to do this when VNICs were not around. But now that we have VNIC capability, they should probably change this. I've alerted the engineering team about this and they're looking into it, so perhaps it will be fixed in a later code release. Here it is. Remember when we made the new VNIC datalink, I specifically said to click on the "plus sign" button to create it? I don't always do that. I really like to use the drag-and-drop method to create my datalinks in the network screen. HOWEVER, if you were to do that for building a VNIC, it will mess you up a little. Watch this. Here, I'm dragging igb3 over to make a new datalink. igb3 is already being used by dl3, but I'm going to make this a VNIC, so who cares, right? Well, the ZFSSA does not KNOW you are going to make it a VNIC, now does it? So... it works as designed and REMOVES the igb3 device from the current dl3 datalink in the background. See how it's now missing? At the same time, the dl3 datalink choice is missing from my list of possible VNICs for me to choose from!!!! Hey!!! I wanted to pick dl3. Why isn't it on the list??? Well, it can't be on this list because dl3 no longer has a device associated with it. Bummer for you. When you click cancel, the device is still missing from dl3. The fix is easy. Just edit dl3 by clicking the pencil button, do absolutely nothing, and click "Apply". The device will magically come back. Now, make the VNIC datalink by clicking the "plus sign" button. Sure enough, once you check the VNIC box, dl3 is a valid choice. No problem.  That's it for now. Have fun with VNICs.

    Read the article

  • MySQL 5.5 brings in new ways to authenticate users

    - by Georgi Kodinov
    Ever wanted to use your server's OS for authenticating MySQL users? Or the corporate LDAP repository? Unfortunately, options like the above are plentiful nowadays, and providing hard-coded support for protocol X or service Y is not the best possible idea. MySQL 5.5 has taken a step in the right direction by providing an infrastructure that allows one to make the server understand different authentication protocols by creating a set of simple plugins (one for the client and one for the server). So now you can easily extend MySQL to search for and authenticate users in your favorite user directory. In fact, the API supplied is so versatile that we took the opportunity to re-design the current "native" authentication mechanism into a built-in, always-on plugin! OK, let me give you an example: imagine we have a bunch of users defined in the OS, e.g. a user joro with his respective password, and we have a MySQL instance running on the same computer. It would not be unexpected to need to let joro access and/or modify MySQL data. The first step is to define him as a MySQL user, and there's a problem right there: MySQL's CREATE USER joro@localhost IDENTIFIED BY 'joros_password' statement needs a password. And this is a password in no way related to the password that joro has set up in the OS. What's worse: if joro changes his OS password, this will in no way be reflected in MySQL, so he'll need to change his MySQL password in a separate step. Not very convenient, especially when you have a lot of users. This is a laborious setup for joro's DBA as well: he'll have to disable joro's access in both MySQL and the OS should he decide that joro's out of the "nice" list. Now MySQL 5.5 to the rescue: imagine that the smart DBA has created a MySQL server plugin that checks whether the name of the user logging in is a valid and enabled OS name and whether the password supplied to the mysql client matches the OS, and has called this plugin 'auth_os'. Now all that's left to do is to define joro as a MySQL user that will be authenticated externally. This is done by the following command: CREATE USER 'joro'@'localhost' IDENTIFIED WITH 'auth_os'; Now joro can log in to MySQL using his current OS password. Note: joro is still a valid MySQL user, so you can grant privileges to him just like you would for all other users. What's better: you can have users that authenticate using different mechanisms in the same server, so you can e.g. safely experiment with external authentication for selected users while keeping your current user base operational. What happens under the hood when joro logs in? The server will find out from the user definition that it needs to use a non-default authentication method and will ask the client to "switch" to the appropriate client-side plugin (if, of course, the client is not already using it). If the client can't do this (e.g. because it's an old client or doesn't have the necessary plugin available), the server will reject the login. Otherwise, the server will let the server-side plugin decide (while possibly talking to the client-side plugin and the OS user directory) whether this is a valid login or not. If it is, the login process continues as usual; if it's not, the login is rejected. There's a lot more that MySQL 5.5 can do for you than just the simple case above.
Stay tuned for more advanced use cases like mapping groups of external users to a single MySQL user (so you won't have to have a 1-to-1 mapping between your external user directory and your MySQL user repository) or ways to control the process as a DBA. Or you can simply skip ahead and read the relevant topics from MySQL's excellent online documentation. Or take a look at the example plugins in plugin/auth. Or take a look at the test suite in mysql-test/t/plugin_auth.test. Changelog entry: http://dev.mysql.com/doc/refman/5.5/en/news-5-5-7.html
Primary new sections:
- Pluggable authentication
- Proxy users
- Client plugin C API functions
Revised sections:
- New PROXY privilege
- New proxies_priv grant table
- Passwords might be external
- New external_user and proxy_user system variables
- New --default-auth and --plugin-dir mysql options
- New MYSQL_DEFAULT_AUTH and MYSQL_PLUGIN_DIR options for mysql_options()
- CREATE USER has an IDENTIFIED WITH clause to specify an auth plugin
- GRANT has a PROXY privilege and an IDENTIFIED WITH clause to specify an auth plugin
- The data structure for writing client plugins
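As a taste of the proxy-user mapping teased above, here is a hedged SQL sketch (the plugin name and user names are hypothetical – the actual mapping of external names to the proxied user is decided by the server-side plugin):

    -- An externally-authenticated account that the plugin maps onto a proxy...
    CREATE USER 'employee_ext'@'localhost' IDENTIFIED WITH 'auth_ldap';
    -- ...and the MySQL user whose privileges the external users will actually get
    CREATE USER 'employee'@'localhost' IDENTIFIED BY 'internal_password';
    GRANT SELECT ON company.* TO 'employee'@'localhost';
    -- Allow the external account to act as a proxy for 'employee'
    GRANT PROXY ON 'employee'@'localhost' TO 'employee_ext'@'localhost';

With this in place, many external directory users can share the single 'employee' privilege set instead of needing one MySQL account each.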

    Read the article

  • New Article: SharePoint 2010 for Developers &ndash; Whats new?

    - by Sahil Malik
    SharePoint 2010 Training: more information This is a nice overview/beginner's article about what is new in SharePoint 2010 from a purely developer point of view. Excerpt - “In some ways SharePoint 2007 was a brand new incarnation of the SharePoint product. For the very first time, ASP.NET 2.0 was applied properly to the product. Things such as master pages, membership providers, sitemap providers etc. were used heavily in SharePoint. As a result, SharePoint 2007 got a whole new developer story to it. But in some ways it was a first version of a big product, so the development story left us wanting for more. Wanting for more because in some ways the API wasn’t ideal, and most certainly the development tools were somewhere between non-existent to bad. Diagnosing SharePoint errors was another frustrating story many have endured. What has changed in SharePoint 2010? Let’s find out.” Read full article ....

    Read the article

  • New .NET Library for Accessing the Survey Monkey API

    - by Ben Emmett
    I’ve used Survey Monkey’s API for a while, and though it’s pretty powerful, there’s a lot of boilerplate each time it’s used in a new project, and the json it returns needs a bunch of processing to be able to use the raw information. So I’ve finally got around to releasing a .NET library you can use to consume the API more easily. The main advantages are: Only ever deal with strongly-typed .NET objects, making everything much more robust and a lot faster to get going Automatically handles things like rate-limiting and paging through results Uses combinations of endpoints to get all relevant data for you, and processes raw response data to map responses to questions To start, either install it using NuGet with PM> Install-Package SurveyMonkeyApi (easier option), or grab the source from https://github.com/bcemmett/SurveyMonkeyApi if you prefer to build it yourself. You’ll also need to have signed up for a developer account with Survey Monkey, and have both your API key and an OAuth token. A simple usage would be something like:

    string apiKey = "KEY";
    string token = "TOKEN";
    var sm = new SurveyMonkeyApi(apiKey, token);
    List<Survey> surveys = sm.GetSurveyList();

The surveys object is now a list of surveys with all the information available from the /surveys/get_survey_list API endpoint, including the title, id, date it was created and last modified, language, number of questions / responses, and relevant urls. If there are more than 1000 surveys in your account, the library pages through the results for you, making multiple requests to get a complete list of surveys. All the filtering available in the API can be controlled using .NET objects. For example you might only want surveys created in the last year and containing “pineapple” in the title:

    var settings = new GetSurveyListSettings
    {
        Title = "pineapple",
        StartDate = DateTime.Now.AddYears(-1)
    };
    List<Survey> surveys = sm.GetSurveyList(settings);

By default, whenever optional fields can be requested with a response, they will all be fetched for you. You can change this behaviour if for some reason you explicitly don’t want the information, using

    var settings = new GetSurveyListSettings
    {
        OptionalData = new GetSurveyListSettingsOptionalData
        {
            DateCreated = false,
            AnalysisUrl = false
        }
    };

Survey Monkey’s 7 read-only endpoints are supported, and the other 4 which make modifications to data might be supported in the future. The endpoints are:

    Endpoint                        Method                 Object returned
    /surveys/get_survey_list        GetSurveyList()        List<Survey>
    /surveys/get_survey_details     GetSurveyDetails()     Survey
    /surveys/get_collector_list     GetCollectorList()     List<Collector>
    /surveys/get_respondent_list    GetRespondentList()    List<Respondent>
    /surveys/get_responses          GetResponses()         List<Response>
    /surveys/get_response_counts    GetResponseCounts()    Collector
    /user/get_user_details          GetUserDetails()       UserDetails
    /batch/create_flow              Not supported          Not supported
    /batch/send_flow                Not supported          Not supported
    /templates/get_template_list    Not supported          Not supported
    /collectors/create_collector    Not supported          Not supported

The hierarchy of objects the library can return is:

    Survey
      List<Page>
        List<Question>
          QuestionType
          List<Answer>
            List<Item>
      List<Collector>
        List<Response>
          Respondent
          List<ResponseQuestion>
            List<ResponseAnswer>

Each of these classes has properties which map directly to the names of properties returned by the API itself (though using PascalCasing which is more natural for .NET, rather than the snake_casing used by SurveyMonkey).
For most users, Survey Monkey imposes a rate limit of 2 requests per second, so by default the library leaves at least 500ms between requests. You can request higher limits from them, so if you want to change the delay between requests just use a different constructor:

    var sm = new SurveyMonkeyApi(apiKey, token, 200); // 200ms delay = 5 reqs per sec

There’s a separate cap of 1000 requests per day for each API key, which the library doesn’t currently enforce, so if you think you’ll be in danger of exceeding that you’ll need to handle it yourself for now.  To help, you can see how many requests the current instance of the SurveyMonkeyApi object has made by reading its RequestsMade property. If the library encounters any errors, including communicating with the API, it will throw a SurveyMonkeyException, so be sure to handle that sensibly any time you use it to make calls. Finally, if you have a survey (or list of surveys) obtained using GetSurveyList(), the library can automatically fill in all available information using

    sm.FillMissingSurveyInformation(surveys);

For each survey in the list, it uses the other endpoints to fill in the missing information about the survey’s question structure, respondents, and responses. This results in at least 5 API calls being made per survey, so be careful before passing it a large list. It also joins up the raw response information to the survey’s question structure, so that for each question in a respondent’s set of replies, you can access a ProcessedAnswer object. For example, a response to a dropdown question (from the /surveys/get_responses endpoint) might be represented in json as

    {
      "answers": [
        {
          "row": "9384627365"
        }
      ],
      "question_id": "615487516"
    }

Separately, the question’s structure (from the /surveys/get_survey_details endpoint) might have several possible answers, one of which might look like

    {
      "text": "Fourth item in dropdown list",
      "visible": true,
      "position": 4,
      "type": "row",
      "answer_id": "9384627365"
    }

The library understands how this mapping works, and uses that to give you the following ProcessedAnswer object, which first describes the family and type of question, and secondly gives you the respondent’s answers as they relate to the question. Survey Monkey has many different question types, with 11 distinct data structures, each of which are supported by the library. If you have suggestions or spot any bugs, let me know in the comments, or even better submit a pull request.

    Read the article

  • Using the new CSS Analyzer in JavaFX Scene Builder

    - by Jerome Cambon
    As you know, the JavaFX API provides many properties that you can set to customize or make your components behave as you want. For instance, for a Button, you can set its font, or its max size. Using Scene Builder, these properties can be explored and modified using the inspector. However, JavaFX also provides many other properties for fine-grained customization of your components: the css properties. These properties are typically set from a css stylesheet. For instance, you can set a background image on a Button, change the Button corners, etc... Using Scene Builder, until now, you could set a css property using the inspector Style and Stylesheet editors. But you had to go to the JavaFX css documentation to find out which css properties can be applied to a given component. Happily, Scene Builder 1.1 recently added a very interesting new feature: the CSS Analyzer. It allows you to explore all the css properties available for a JavaFX component, and helps you build your css rules. A very simple example: make a Button rounded. Let’s take a very simple example: you would like to customize your Buttons to make them rounded. First, enable the CSS Analyzer, using the ‘View->Show CSS Analyzer’ menu. Grow the main window, and the CSS Analyzer, to get more room: Then, drop a Button from the Library to the Content View: the CSS Analyzer now shows the Button css properties: As you can see, there is a ‘-fx-background-radius’ css property that allows you to define the radius of the background (note that you can get the associated css documentation by clicking on the property name). You can then experiment with this by setting the Button's Style property from the inspector: As you can see in the css doc, one can set the same radius for all 4 corners with a single number. Once the style value is applied, the Button is rounded, as expected. Look at the CSS Analyzer: the ‘-fx-background-radius’ property now has 2 entries: the default one, and the one we just entered from the Style property. The new value “wins”: it overrides the default one and becomes the actual value (to highlight this, the cell background becomes blue). Now, you will certainly prefer to apply this new style to all the Buttons of your FXML document, and have a css rule for this. To do this, save your document first, and create a css file in the same directory as the new document. Create an empty css file (e.g. test.css), and attach it to the root AnchorPane, by first selecting the AnchorPane, then using the Stylesheets editor in the inspector: Add the corresponding css rule to your new test.css file from your preferred editor (NetBeans for me ;-) and save it:

    .button {
        -fx-background-radius: 10px;
    }

Now, select your Button and have a look at the CSS Analyzer. As you can see, the Button inherits the css rule (since the Button is a child of the AnchorPane) and still has its inline Style. The inline Style “wins”, since it has precedence over the stylesheet. The CSS Analyzer columns are displayed in precedence order. Note the small right-arrow icons, which allow you to jump to the source of the value (either test.css, or the inspector in this case). Of course, unless you want to set a specific background radius for this particular Button, you can remove the inline Style from the inspector. Changing the color of a TitledPane arrow: in some cases, it can be useful to select the inner element you want to style directly from the Content View. Drop a TitledPane to the Content View.
Then select from the CSS Analyzer the CSS cursor (the other cursor on the left allows you to come back to ‘standard’ selection), which lets you select an inner element… and select the TitledPane arrow, which gets a yellow background… and the Styleable Path is updated. To define a new css rule, you can first copy the Styleable Path, then paste it in your test.css file. Then, add an entry to set the -fx-background-color to red. You should have something like:

    .titled-pane:expanded .title .arrow-button .arrow {
        -fx-background-color: red;
    }

As soon as test.css is saved, the change is taken into account in Scene Builder. You can also use the Styleable Path to discover all the inner elements of TitledPane, by clicking on the arrow icon. More details: you can see the CSS Analyzer in action (and many other features) in the JavaOne BOF: BOF4279 - In-Depth Layout and Styling with the JavaFX Scene Builder, presented by my colleague Jean-Francois Denise. On the right-hand side, click on the Media link to go to the video (streaming) of the presentation. The Scene Builder support of CSS starts at 9:20; the CSS Analyzer presentation starts at 12:50.

    Read the article

  • SQL SERVER – Top 10 “Ease of Use” Features of expressor Studio

    - by pinaldave
    expressor Studio is a new data integration platform that is being marketed as the easiest-to-use tool of its kind.  But “easy to use” can be a relative term – an expert can find a very complex system easy, but a beginner might be stumped.  A recent article online discussed exactly what makes expressor Studio so easy to use, and here is my view on this subject.
Simple Installation – There is one pop-up for one .exe file, and nothing more.  You can’t get much simpler than this.  It is also in the familiar Windows design, so there should be no surprises.
No 3rd-Party Software Dependency – Have you ever tried to download software, only to be slowed down by the need to download a compatible system to run the program, and another to read the user manual, and so on?  expressor Studio was designed specifically to avoid this problem.
Microsoft Office-Like Ribbon Bar and Menus – As mentioned before, everything is in the familiar Windows design, from the pop-up windows to the tool bars and menus.  There should be no learning curve for using this program, or even simply trying to navigate around a new system.
General Development Design Interface – This software has been designed to be simple and straightforward.  Projects can be arranged in a simple “tree” design that is totally collapsible and can easily be added to or “trimmed” with the click of a button.  It was meant to be logical and easy to follow.
Integrated Contextual Help – This is a fancy way of saying that you can practically yell “help!” if you do get stuck on something.  Solving a problem is as simple as highlighting and hitting F1 for contextual help.
Visual Indicators and Messages – Wouldn’t it be nice to know exactly where something has gone wrong before trying to complete a project?  expressor Studio has a built-in system to catch mistakes, highlight them in a bright color, flash a warning message, and even disable functions before you can continue – and possibly lose hours of work.
Property Inputs and Selectors – Every operator will have a list of requirements that need to be filled in.  But don’t worry; you won’t have to make stuff up to fill in the boxes.  Each one will have a drop-down menu with options to choose from – but not so many as to be confusing.
Connection Wizards – Configuring connections can be the hardest part of a project.  But not with the expressor Studio connection wizard.  A familiar, Windows-style menu will walk you through connections so quickly you’ll forget what trouble it used to be.
Templates – With large, complex projects, a majority of your time is often spent simply setting up the files and inputting data.  But expressor Studio allows you to create one file and then save it as a template, saving you hours of boring data input.
Extension Manager – Let’s say that you need a little more functionality or some new features in your program. A lot of software requires you to download complex plug-ins that need to be decompressed and installed.  However, expressor Studio has extended its system with an Extension Manager, which allows for quick and easy installation of the functionality you need, without the need to download and decompress.
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

  • SQL SERVER – Another lesser known feature of SQL Server Management Studio 2012 – Guest Post by Balmukund Lakhani

    - by Pinal Dave
    This is a fantastic blog post from my dear friend Balmukund ( blog | twitter | facebook ). He had presented a fantastic session in our last UG, and there were lots of requests from attendees that he blog about it. Well, here is the blog post about that very popular UG session. Let us read the entire blog post in the voice of Balmukund himself. In one of my previous guest blogs on SQL Authority, I wrote about the “Additional Connection Parameters” tab of the login screen in SQL Server Management Studio (a.k.a. SSMS). Along similar lines, this blog is going to show a little-known new feature of the main login screen (“Connect to Server”) of SSMS 2012. You might have seen the screen below countless times, and you might wonder what there is to blog about in this simple screen. Well, continue reading and you will get the answer. Many times, DBAs have to log in to a production server from a non-regular machine, maybe a developer’s workstation. Once you log in to SQL, you do your work and close Management Studio. Do you know that your server name is saved by Management Studio? Of course – a very useful feature, because you may not like to type the server name/IP address every time. Whichever servers you have connected to are stored by Management Studio. But sometimes it’s annoying! What would you do if you want SQL Server Management Studio to forget “all” the servers listed in the Server name drop-down? To do that, you need to know how and where the list is stored. You can use one of my favorite tools from Sysinternals, called Process Monitor (also known as ProcMon), and easily figure out that it is stored in a file under your Windows user profile. Below is the file for SQL 2008 R2 Management Studio: %appdata%\Microsoft\Microsoft SQL Server\100\Tools\Shell\SqlStudio.bin For SQL Server 2012, here is what we can see in ProcMon. So, the path is %appdata%\Microsoft\Microsoft SQL Server\110\Tools\Shell\SqlStudio.bin So far, you might wonder: where is the new feature? I have been asked by many users how to delete entries from the SSMS “Connect to Server” server name list. Well, unofficially, you can directly delete the file which we found via ProcMon. Note that deleting the file to get rid of the server list is not officially supported by Microsoft. A better way to achieve this is provided in SSMS 2012. To delete a server from the list, highlight the name we want to delete (via keyboard or mouse) and then press the Delete key. Multi-select is not possible; it has to be done one by one. We can delete as many entries as we want. I have deleted a few from the first screenshot taken, and here is the modified version. This is not available in SQL 2008 R2 and its previous versions. This came from feedback given to the SQL Server product group. Hope you have learned something new today! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

< Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >