Search Results

Search found 20360 results on 815 pages for 'capture output'.


  • ASP.NET MVC–How to show asterisk after required field label

    - by DigiMortal
    Usually we have some required fields on our forms, and it would be nice if ASP.NET MVC views could detect those fields automatically and display a nice red asterisk after the field label. As this functionality is not built in, I built my own solution based on data annotations. In this posting I will show you how to show a red asterisk after the label of required fields.

    Here are the main information sources I used when working out my own solution:

    - How can I modify LabelFor to display an asterisk on required fields? (stackoverflow)
    - ASP.NET MVC – Display visual hints for the required fields in your model (Radu Enuca)

    Although my code was first written for a completely different situation, I needed it later and modified it to work with models that use data annotations. If a data member of the model has the Required attribute set, an asterisk is rendered after the field. If the Required attribute is missing, there will be no asterisk. Here's my code. You can take just the LabelForRequired() methods and paste them into your own HTML extension class.

        public static class HtmlExtensions
        {
            [SuppressMessage("Microsoft.Design", "CA1006:DoNotNestGenericTypesInMemberSignatures", Justification = "This is an appropriate nesting of generic types")]
            public static MvcHtmlString LabelForRequired<TModel, TValue>(this HtmlHelper<TModel> html, Expression<Func<TModel, TValue>> expression, string labelText = "")
            {
                return LabelHelper(html,
                    ModelMetadata.FromLambdaExpression(expression, html.ViewData),
                    ExpressionHelper.GetExpressionText(expression), labelText);
            }

            private static MvcHtmlString LabelHelper(HtmlHelper html,
                ModelMetadata metadata, string htmlFieldName, string labelText)
            {
                if (string.IsNullOrEmpty(labelText))
                {
                    labelText = metadata.DisplayName ?? metadata.PropertyName ?? htmlFieldName.Split('.').Last();
                }

                if (string.IsNullOrEmpty(labelText))
                {
                    return MvcHtmlString.Empty;
                }

                bool isRequired = false;

                if (metadata.ContainerType != null)
                {
                    isRequired = metadata.ContainerType.GetProperty(metadata.PropertyName)
                                    .GetCustomAttributes(typeof(RequiredAttribute), false)
                                    .Length == 1;
                }

                TagBuilder tag = new TagBuilder("label");
                tag.Attributes.Add(
                    "for",
                    TagBuilder.CreateSanitizedId(
                        html.ViewContext.ViewData.TemplateInfo.GetFullHtmlFieldName(htmlFieldName)
                    )
                );

                if (isRequired)
                    tag.Attributes.Add("class", "label-required");

                tag.SetInnerText(labelText);

                var output = tag.ToString(TagRenderMode.Normal);

                if (isRequired)
                {
                    var asteriskTag = new TagBuilder("span");
                    asteriskTag.Attributes.Add("class", "required");
                    asteriskTag.SetInnerText("*");
                    output += asteriskTag.ToString(TagRenderMode.Normal);
                }
                return MvcHtmlString.Create(output);
            }
        }

    And here's how to use the LabelForRequired extension method in your view:

        <div class="field">
            @Html.LabelForRequired(m => m.Name)
            @Html.TextBoxFor(m => m.Name)
            @Html.ValidationMessageFor(m => m.Name)
        </div>

    After playing with a CSS style called .required, my example form shows red asterisks next to the required fields. These red asterisks are not part of the original view mark-up: the LabelForRequired method detected that these properties have the Required attribute set and rendered the asterisks after the field names. NB! By default the asterisks are not red. You have to define a CSS class called "required" to control how the asterisk looks and how it is positioned.

    Read the article

  • no volume in kubuntu 10.04

    - by neha
    Hello, I have both GNOME and KDE on my system. GNOME is working perfectly, but in KDE no sound is being generated. The output of the aplay -l and lspci commands is as follows:

        neha@neha-laptop:~$ aplay -l
        **** List of PLAYBACK Hardware Devices ****
        card 0: Intel [HDA Intel], device 0: STAC92xx Analog [STAC92xx Analog]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: Intel [HDA Intel], device 3: INTEL HDMI [INTEL HDMI]
          Subdevices: 1/1
          Subdevice #0: subdevice #0

    And the output of the lspci command:

        neha@neha-laptop:~$ lspci
        00:00.0 Host bridge: Intel Corporation Mobile PM965/GM965/GL960 Memory Controller Hub (rev 0c)
        00:02.0 VGA compatible controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (rev 0c)
        00:02.1 Display controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (rev 0c)
        00:1a.0 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #4 (rev 02)
        00:1a.1 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #5 (rev 02)
        00:1a.7 USB Controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #2 (rev 02)
        00:1b.0 Audio device: Intel Corporation 82801H (ICH8 Family) HD Audio Controller (rev 02)
        00:1c.0 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 1 (rev 02)
        00:1c.1 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 2 (rev 02)
        00:1c.4 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 5 (rev 02)
        00:1d.0 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #1 (rev 02)
        00:1d.1 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #2 (rev 02)
        00:1d.2 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #3 (rev 02)
        00:1d.7 USB Controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #1 (rev 02)
        00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev f2)
        00:1f.0 ISA bridge: Intel Corporation 82801HEM (ICH8M) LPC Interface Controller (rev 02)
        00:1f.1 IDE interface: Intel Corporation 82801HBM/HEM (ICH8M/ICH8M-E) IDE Controller (rev 02)
        00:1f.2 SATA controller: Intel Corporation 82801HBM/HEM (ICH8M/ICH8M-E) SATA AHCI Controller (rev 02)
        00:1f.3 SMBus: Intel Corporation 82801H (ICH8 Family) SMBus Controller (rev 02)
        02:09.0 FireWire (IEEE 1394): Ricoh Co Ltd R5C832 IEEE 1394 Controller (rev 05)
        02:09.1 SD Host controller: Ricoh Co Ltd R5C822 SD/SDIO/MMC/MS/MSPro Host Adapter (rev 22)
        02:09.2 System peripheral: Ricoh Co Ltd R5C843 MMC Host Controller (rev 12)
        02:09.3 System peripheral: Ricoh Co Ltd R5C592 Memory Stick Bus Host Adapter (rev 12)
        02:09.4 System peripheral: Ricoh Co Ltd xD-Picture Card Controller (rev ff)
        09:00.0 Ethernet controller: Marvell Technology Group Ltd. 88E8040 PCI-E Fast Ethernet Controller (rev 12)
        0b:00.0 Network controller: Broadcom Corporation BCM4312 802.11a/b/g (rev 01)

    Can anyone help me?
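    Since the hardware clearly works under GNOME, a first thing worth checking is whether the KDE session simply has the mixer muted. A hedged diagnostic sketch using standard ALSA tools (card and control names are the common defaults; adjust to yours):

        amixer -c 0 sget Master              # is Master muted or at 0%?
        amixer -c 0 sset Master 80% unmute   # unmute and raise it
        speaker-test -c 2 -t wav -l 1        # play a short test tone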

    Read the article

  • Mutt not working due to "gnutls_handshake: A TLS packet with unexpected length was received." error

    - by Vinit Kumar
    I am experiencing lots of problems trying to make mutt work in Ubuntu 12.04. Here is my .muttrc: http://paste.ubuntu.com/1273585/

    Here is the error I get when I try to connect:

        gnutls_handshake: A TLS packet with unexpected length was received.

    Does anyone know a workaround for this error? If so, please suggest it. Many thanks in advance! For debugging, here is the output of my mutt -v: http://paste.ubuntu.com/1273590/
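    This particular handshake error often shows up when a TLS-on-connect port is driven with STARTTLS settings, or vice versa. A hedged way to look at the raw handshake outside mutt (host and port below are illustrative; use the server and port from your .muttrc):

        gnutls-cli -p 993 imap.example.com                 # GnuTLS's view of the handshake
        openssl s_client -connect imap.example.com:993     # OpenSSL's view, for comparison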

    Read the article

  • gVim characters unreadable at random times

    - by Mussnoon
    Screenshot. Does anyone know what causes it and how to fix it? It only started happening today; I've been using gVim for a couple of months now. Update: output of locale:

        LANG=en_US.utf8
        LC_CTYPE="en_US.utf8"
        LC_NUMERIC="en_US.utf8"
        LC_TIME="en_US.utf8"
        LC_COLLATE="en_US.utf8"
        LC_MONETARY="en_US.utf8"
        LC_MESSAGES="en_US.utf8"
        LC_PAPER="en_US.utf8"
        LC_NAME="en_US.utf8"
        LC_ADDRESS="en_US.utf8"
        LC_TELEPHONE="en_US.utf8"
        LC_MEASUREMENT="en_US.utf8"
        LC_IDENTIFICATION="en_US.utf8"
        LC_ALL=

    Read the article

  • How to repair an external hard drive?

    - by dodohjk
    I would like to reformat my hard disk and, if possible, recover its (somewhat unimportant) contents. I have a Western Digital 1TB hard drive which had an NTFS partition. I unplugged the drive without safely removing it first. At first a pop-up asked me to use a Windows OS to run the chkdsk /f command; however, in the effort to keep using a Linux OS, I used the ntfsfix command in the Ubuntu terminal.

    Now, when I try to access the hard drive, it doesn't show up in Nautilus anymore. I tried reformatting it using Disk Utility, but it gives me an error message, and GParted hangs on the "Scanning devices" step indefinitely. Please comment with any output that you would like to see and I will add it to my question.

    EDIT: Disk Utility tells me it is on /dev/sdb. The command sudo fdisk -l gives:

        dodohjk@DodosPC:~$ sudo fdisk -l
        [sudo] password for dodohjk:

        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0006fa8c

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        4094   482344959   241170433    5  Extended
        /dev/sda2       482344960   488396799     3025920   82  Linux swap / Solaris
        /dev/sda5            4096    31461127    15728516   83  Linux
        /dev/sda6        31463424    52434943    10485760   83  Linux
        /dev/sda7        52436992    62923320     5243164+  83  Linux
        /dev/sda8        62924800   482344959   209710080   83  Linux

        Disk /dev/sdb: 1000.2 GB, 1000202043392 bytes
        255 heads, 63 sectors/track, 121600 cylinders, total 1953519616 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x6e697373

        This doesn't look like a partition table
        Probably you selected the wrong device.

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   ?    1936269394  3772285809   918008208   4f  QNX4.x 3rd part
        /dev/sdb2   ?    1917848077  2462285169   272218546+  73  Unknown
        /dev/sdb3   ?    1818575915  2362751050   272087568   2b  Unknown
        /dev/sdb4   ?    2844524554  2844579527       27487   61  SpeedStor

        Partition table entries are not in disk order

    Here is the output of sudo fsck /dev/sdb:

        dodohjk@DodosPC:~$ sudo fsck /dev/sdb
        fsck from util-linux 2.20.1
        e2fsck 1.42.5 (29-Jul-2012)
        ext2fs_open2: Bad magic number in super-block
        fsck.ext2: Superblock invalid, trying backup blocks...
        fsck.ext2: Bad magic number in super-block while trying to open /dev/sdb

        The superblock could not be read or does not describe a correct ext2
        filesystem. If the device is valid and it really contains an ext2
        filesystem (and not swap or ufs or something else), then the superblock
        is corrupt, and you might try running e2fsck with an alternate superblock:
            e2fsck -b 8193 <device>
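    Two notes on the output above. First, fsck here ran the ext2 checker against what was an NTFS disk, so the superblock complaints are expected noise rather than a diagnosis. Second, the garbage partition entries suggest the partition table itself is damaged. A hedged recovery sketch using standard tools:

        sudo apt-get install testdisk
        sudo testdisk /dev/sdb     # Analyse, then Quick/Deeper Search to find the lost NTFS partition
        # if recovery fails and a clean reformat is acceptable:
        sudo wipefs -a /dev/sdb    # clear stale signatures, then repartition with fdisk or GParted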

    Read the article

  • No sound in Ubuntu 12.04

    - by Mohd Arafat Hossain
    I installed Ubuntu 12.04 a month ago and have been using it since. I failed to notice that in all this time there has been no sound at all while running Ubuntu, even while playing a game in Wine. The weird thing is that only the startup sound plays when I log in (the Indian/African drum tone); then comes utter silence. I tested both Digital Output (S/PDIF) and the speakers in the sound settings but can hear nothing. Any help?
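    A quick sanity check worth running first, sketched with the stock ALSA and PulseAudio tools (the test clip ships with alsa-utils on 12.04):

        aplay /usr/share/sounds/alsa/Front_Center.wav   # does raw ALSA playback work?
        pactl list short sinks                          # is the expected output device listed?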

    Read the article

  • Blank Cacti Graphs

    - by tortib
    I'm running Ubuntu 14.04 LTS and I'm having an issue with Cacti 0.8.8b not displaying any data in graphs. The graphs are being created and I see files in /var/lib/cacti/rra. My crontab entry for root is the following:

        */1 * * * * sudo -u www-data php -q /usr/share/cacti/site/poller.php > /dev/null

    The output of ls -la /var/lib/cacti/rra is the following:

        # ls -la /var/lib/cacti/rra/
        total 1008
        drwxrwx--- 2 www-data www-data  4096 Aug 20 19:27 .
        drwxr-xr-x 3 www-data www-data  4096 Aug 17 01:41 ..
        -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:23 tortib_com_cpu_nice_34.rrd
        -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:24 tortib_com_cpu_system_35.rrd
        -rw-rw-r-- 1 www-data www-data 47992 Aug 20 19:25 tortib_com_cpu_user_36.rrd
        -rw-r--r-- 1 www-data www-data 94816 Aug 20 19:27 tortib_com_hdd_used_43.rrd
        -rw-r--r-- 1 www-data www-data 94816 Aug 20 19:23 tortib_com_hdd_used_44.rrd
        -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:27 tortib_com_load_15min_38.rrd
        -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:26 tortib_com_load_1min_37.rrd
        -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:23 tortib_com_load_5min_39.rrd
        -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:24 tortib_com_mem_buffers_40.rrd
        -rw-rw-r-- 1 www-data www-data 47992 Aug 20 19:25 tortib_com_mem_cache_41.rrd
        -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:26 tortib_com_mem_free_42.rrd
        -rw-r--r-- 1 www-data www-data 94816 Aug 20 19:24 tortib_com_traffic_in_45.rrd
        -rw-rw-r-- 1 www-data www-data 94816 Aug 20 19:25 tortib_com_traffic_in_46.rrd
        -rw-r--r-- 1 www-data www-data 94816 Aug 20 19:26 tortib_com_traffic_in_47.rrd
        -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:27 tortib_com_users_48.rrd

    I tried to run the poller as root from the command line, but it doesn't output anything useful, nor does it graph any data. The device in Cacti shows that it's able to query SNMP, and ping reports it alive. The graphs are still empty though. snmpwalk 127.0.0.1 -v2c -c public works as it should; it walks all MIBs. I'm quite perplexed as to why this isn't working any longer. It was graphing data but then it just stopped, and when it was graphing data it was graphing it intermittently. Thank you for reading this problem and helping.
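    One way to dig further: run a single polling cycle by hand with debug output, then ask an RRD file directly whether updates are landing. A hedged sketch (flag names are as commonly documented for 0.8.8's poller; the .rrd file name is taken from the listing above):

        sudo -u www-data php /usr/share/cacti/site/poller.php --force --debug
        rrdtool last  /var/lib/cacti/rra/tortib_com_load_1min_37.rrd          # timestamp of last update
        rrdtool fetch /var/lib/cacti/rra/tortib_com_load_1min_37.rrd AVERAGE | tail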

    Read the article

  • bash dirtrim produces strange results with ~/foo/bar/var directory

    - by queueoverflow
    In some of my projects, I keep a var or a lib folder for runtime output and external libraries. To keep my prompt rather short, I have the export PROMPT_DIRTRIM=3 option in my .bashrc. This works very well for most paths, but as soon as I have a /var in there, it goes nuts like this (for ~/Projects/someproject/var/gfx):

        ~/.../gfxr/gfxr/gfxr/gfxr/gfxr/gfx

    Interestingly, it works with /opt/lampp/lib. Is there some way to get around this?

    Update: my .bashrc, my .bash_functions
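    One possible workaround is to skip PROMPT_DIRTRIM entirely and trim the path by hand in PROMPT_COMMAND. A minimal sketch (it mirrors PROMPT_DIRTRIM=3 but builds a bare prompt; fold your usual user/host decorations back into PS1 as needed):

        _dirtrim() {
            local p=${PWD/#$HOME/\~} short
            # keep the first component plus the last three, with ... in between
            short=$(echo "$p" | awk -F/ 'NF>4 {print $1 "/.../" $(NF-2) "/" $(NF-1) "/" $NF; next} {print}')
            PS1="${short}\$ "
        }
        PROMPT_COMMAND=_dirtrim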

    Read the article

  • Establishing WebLogic Server HTTPS Trust of IIS Using a Microsoft Local Certificate Authority

    - by user647124
    Everyone agrees that self-signed and demo certificates for SSL and HTTPS should never be used in production, and it is preferred that they not be used elsewhere either. Most self-signed and demo certificates are provided by vendors with the intention that they be used only to integrate within the same environment. In a vendor's perfect world all application servers in a given enterprise are from the same vendor, which makes this lack of interoperability in a non-production environment an advantage. For those of us working in the real world, where we neither use a single vendor everywhere nor have anything better than self-signed certificates for all but production, testing HTTPS between an IIS ASP.NET service provider and a WebLogic J2EE consumer application can be very frustrating to set up. It was for me, especially having found many blogs and discussion threads where various solutions were described but did not quite work and were all mostly similar but just a little bit different. To save both you and my future self (who always seems to forget the hardest-won lessons) all of the pain and suffering, I am recording the steps that finally worked here, for reference and sanity.

    How You Know You Need This

    The first cold clutch of dread that tells you it is going to be a long day is when you attempt to consume a WSDL published by IIS in WebLogic over HTTPS and you see the following:

        <Jul 30, 2012 2:51:31 PM EDT> <Warning> <Security> <BEA-090477> <Certificate chain received from myserver.mydomain.com - 10.555.55.123 was not trusted causing SSL handshake failure.>
        weblogic.wsee.wsdl.WsdlException: Failed to read wsdl file from url due to -- javax.net.ssl.SSLKeyException: [Security:090477]Certificate chain received from myserver02.mydomain.com - 10.555.55.123 was not trusted causing SSL handshake failure.

    The above is what started a three-day sojourn into searching for a solution. Even people who had solved it before would tell me how they did, and then shrug when I demonstrated that the steps did not end in the success they claimed I would experience. Rather than torture you with the details of everything I did that did not work, here is what finally did work.

    Export the Certificates from IE

    First, take the offending WSDL URL and paste it into IE (if you have an internal Microsoft CA, you have IE, even if you don't use it in favor of some other browser). To state the semi-obvious: if you received the error above, there is a certificate configured for the IIS host of the service and the SSL port has been configured properly; otherwise there would be a different error, usually about the site not being found or the connection failing.

    Once the WSDL loads, to the right of the address bar there will be a lock icon. Click the lock and then click View Certificates in the resulting dialog (if you do not have a lock icon but do have a Certificate Error message, see http://support.microsoft.com/kb/931850 for steps to install the certificate, then continue from the point of finding the lock icon).

    Figure 1: View Certificates in IE

    Next, select the Details tab in the resulting dialog.

    Figure 2: Use Certificate Details to Export Certificate

    Click Copy to File, then Next, then select the Base-64 encoded option for the format.

    Figure 3: Select the Base-64 encoded option for the format

    For the sake of simplicity, I chose to save this to the root of the WebLogic domain. It will work from anywhere, but later you will need to type in the full path rather than just the certificate name if you save it elsewhere.
    Figure 4: Browse to Save Location
    Figure 5: Save the Certificate to the Domain Root for Convenience

    This is the point where I ran into some confusion. Some articles mentioned exporting the entire chain of certificates. This supposedly works for some types of certificates, or if you have a few other tools and the time to learn them. The SSL experts out there already have these tools, know how to use them well, and should not be wasting their time reading this article meant for folks who just want to get things wired up and back to unit testing and development. For the rest of us, the easiest way to make sure things will work is to just export all the links in the chain individually and let WebLogic Server worry about re-assembling them into a chain (which it does quite nicely). While perhaps not the most elegant solution, the multi-step process is easy to repeat and uses only tools that are immediately available and require no learning curve. So…

    Next, go to Tools, then Internet Options, then the Content tab, and click Certificates. Go to the Trusted Root Certification Authorities tab and find the certificate root for your Microsoft CA cert (look for the Issuer of the certificate you exported earlier).

    Figure 6: Trusted Root Certification Authorities Tab

    Export this one the same way as before, with a different name.

    Figure 7: Use a Unique Name for Each Certificate

    Repeat this once more on the Intermediate Certification Authorities tab.

    Import the Certificates to the WebLogic Domain

    Now, open a command prompt, navigate to [WEBLOGIC_DOMAIN_ROOT]\bin and execute setDomainEnv. You should then be in the root of the domain. If not, CD to the domain root. Assuming you saved the certificate in the domain root, execute the following:

        keytool -importcert -alias [ALIAS-1] -trustcacerts -file [FULL PATH TO .CER 1] -keystore truststore.jks -storepass [PASSWORD]

    An example with the variables filled in is:

        keytool -importcert -alias IIS-1 -trustcacerts -file microsftcert.cer -keystore truststore.jks -storepass password

    After several lines of output you will be prompted with:

        Trust this certificate? [no]:

    The correct answer is 'yes' (minus the quotes, of course). You'll know you were successful if the response is:

        Certificate was added to keystore

    If not, check your typing, as that is generally the source of an error at this point. Repeat this for all three of the certificates you exported, changing the [ALIAS-1] and [FULL PATH TO .CER 1] values each time. For example:

        keytool -importcert -alias IIS-1 -trustcacerts -file microsftcert.cer -keystore truststore.jks -storepass password
        keytool -importcert -alias IIS-2 -trustcacerts -file microsftcertRoot.cer -keystore truststore.jks -storepass password
        keytool -importcert -alias IIS-3 -trustcacerts -file microsftcertIntermediate.cer -keystore truststore.jks -storepass password

    In the above we created a new JKS keystore. You can re-use an existing one by changing the name of the JKS file to one you already have, and changing the password to the one that matches that JKS file. For the DemoTrust.jks that is included with WebLogic, the password is DemoTrustKeyStorePassPhrase.
    An example here would be:

        keytool -importcert -alias IIS-1 -trustcacerts -file microsoft.cer -keystore DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase
        keytool -importcert -alias IIS-2 -trustcacerts -file microsoftRoot.cer -keystore DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase
        keytool -importcert -alias IIS-3 -trustcacerts -file microsoftInter.cer -keystore DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase

    Whichever keystore you use, you can check your work with:

        keytool -list -keystore truststore.jks -storepass password

    where "truststore.jks" and "password" can be replaced appropriately if necessary. The output will look something like this:

    Figure 8: Output from keytool -list -keystore

    Update the WebLogic Keystore Configuration

    If you used an existing keystore rather than creating a new one, you can restart your WebLogic Server and skip the rest of this section. For those of us who created a new one because those were the instructions we found online…

    Next, we need to tell WebLogic to use the JKS file (truststore.jks) we just created. Log in to the WebLogic Server Administration Console and navigate to Servers > AdminServer > Configuration > Keystores. Scroll down to "Custom Trust Keystore:" and change the value to "truststore.jks", set "Custom Trust Keystore Passphrase:" and "Confirm Custom Trust Keystore Passphrase:" to the password you used earlier, then save your changes. You will get a nice message similar to the following:

    Figure 9: To Be Safe, Restart Anyways

    The "No restarts are necessary" is somewhat of an exaggeration. If you want to be able to use the keystore, you may need to restart the server(s). To save myself aggravation, I always do. Your mileage may vary.

    Conclusion

    That should get you there. If some steps above turn out to be unnecessary for your situation in particular, I offer a semi-apology: the process described above does not take long at all, and even if one step could be dropped from it, it is still much faster than trying to figure this out from other sources.
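    If you end up repeating the import often, the three commands can be scripted. A minimal sketch, assuming the certificate file names used above and a POSIX shell (Git Bash or Cygwin on a Windows box; otherwise translate to a batch file):

        # import all three exported certificates in one pass;
        # -noprompt skips the interactive "Trust this certificate?" question
        for cert in microsftcert.cer microsftcertRoot.cer microsftcertIntermediate.cer; do
            keytool -importcert -alias "IIS-$cert" -trustcacerts -file "$cert" \
                    -keystore truststore.jks -storepass password -noprompt
        done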

    Read the article

  • Understanding IIS Bindings

    - by OWScott
    Internet Information Services (IIS) uses four decision points for site bindings: the protocol, port, IP address and host header. This video lesson walks through the bindings and shows how each one is used. This is part 5 of a 52-week series on various topics for the web administrator. Other weeks include:

    - Week 1 – Ping and Tracert
    - Week 2 – Understanding DNS zone records
    - Week 3 – Nslookup – the Ultimate DNS Troubleshooting Tool
    - Week 4 – Three Tricks for Capturing Command Line Output

    Understanding IIS Bindings

    Read the article

  • HyperV integration services v3.4 for 12.10?

    - by nlee
    Networking is sloooow with v3.1. How do I upgrade to Integration Services v3.4 in 12.10? modinfo output in 12.10:

        filename:       /lib/modules/3.5.0-17-generic/kernel/drivers/hv/hv_vmbus.ko
        version:        3.1
        license:        GPL
        srcversion:     B1AA963EEFBAE322D970F14
        alias:          acpi*:VMBus:*
        alias:          acpi*:VMBUS:*
        depends:
        intree:         Y
        vermagic:       3.5.0-17-generic SMP mod_unload modversions

    Read the article

  • SQL SERVER – Simple Explanation and Puzzle with SOUNDEX Function and DIFFERENCE Function

    - by pinaldave
    Earlier this week I asked a question about how to swap the values of a column without using a CASE statement. Read here: A Puzzle – Swap Value of Column Without Case Statement. There were more than 50 solutions proposed in the comments, many of them creative. I have mentioned my personal favorites (different ones) here: Solution of Puzzle – Swap Value of Column Without Case Statement. However, I received lots of questions regarding one of the solutions, by SIJIN KUMAR V P, who used the SOUNDEX function. The request was to explain how SOUNDEX and DIFFERENCE work.

    Well, there is pretty decent documentation for the SOUNDEX function and DIFFERENCE over on MSDN, and if I attempted to explain these functions I would end up writing the same details that are available there. Instead of writing theory, we will try to learn the functions through a couple of simple puzzles. Try to solve the puzzles using MSDN and see if you can learn something very quickly.

    In simple words: SOUNDEX converts an alphanumeric string to a four-character code to find similar-sounding words or names. The first character of the code is the first character of character_expression, and the second through fourth characters of the code are numbers that represent the letters in the expression. Vowels in character_expression are ignored unless they are the first letter of the string. The DIFFERENCE function returns an integer value: the number of characters in the SOUNDEX values that are the same. The return value ranges from 0 through 4, where 0 indicates weak or no similarity and 4 indicates strong similarity or the same values.

    Learning Puzzle 1: Now let us run the following four queries and observe the output.

        SELECT SOUNDEX('SQLAuthority') SdxValue
        SELECT SOUNDEX('SLTR') SdxValue
        SELECT SOUNDEX('SaLaTaRa') SdxValue
        SELECT SOUNDEX('SaLaTaRaM') SdxValue

    When you look at the result set, all four values are the same. The reason is that to SQL Server's SOUNDEX function all four strings are similar-sounding strings.

    Learning Puzzle 2: Now let us run the following five queries and observe the output.

        SELECT DIFFERENCE (SOUNDEX('SLTR'),SOUNDEX('SQLAuthority'))
        SELECT DIFFERENCE (SOUNDEX('TH'),SOUNDEX('SQLAuthority'))
        SELECT DIFFERENCE ('SQLAuthority',SOUNDEX('SQLAuthority'))
        SELECT DIFFERENCE ('SLTR',SOUNDEX('SQLAuthority'))
        SELECT DIFFERENCE ('SLTR','SQLAuthority')

    When you look at the result set, you will get results in the range from 1 to 4. Remember how it works: a result of 0 means the values are not at all relevant to each other, while a result of 4 means they are strongly relevant.

    Have you ever used the above two functions for a business need or on a production server? If yes, please leave a comment with your use cases. I believe it will be beneficial to everyone.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service

    - by Elton Stoneman
    We're in the process of delivering an enabling project to expose on-premise WCF services securely to Internet consumers. The Azure Service Bus Relay does the clever stuff: we register our on-premise service with Azure, consumers call into our .servicebus.windows.net namespace, and their requests are relayed and serviced on-premise. In theory it's all wonderfully simple; by using the relay we get lots of protocol options, free HTTPS and load balancing, and by integrating with ACS we get plenty of security options. Part of our delivery is a suite of sample consumers for the service (.NET, jQuery, PHP), and this set of posts will cover setting up the service and the consumers.

    Part 1: Exposing the on-premise service

    In theory, this is ultra-straightforward. In practice, on a dev laptop it is, but in a corporate network with firewalls and proxies it isn't, so we'll walk through some of the pitfalls. Note that I'm using the "old" Azure portal which will soon be out of date, but the new shiny portal should have the same steps available and be easier to use.

    We start with a simple WCF service which takes a string as input, reverses the string and returns it. The Part 1 version of the code is on GitHub here: IPASBR Part 1.

    Configuring Azure Service Bus

    Start by logging into the Azure portal and registering a Service Bus namespace which will be our endpoint in the cloud. Give it a globally unique name, set it up somewhere near you (if you're in Europe, remember Europe (North) is Ireland and Europe (West) is the Netherlands), and enable ACS integration by ticking "Access Control" as a service.

    Authenticating and authorizing to ACS

    When we try to register our on-premise service as a listener for the Service Bus endpoint, we need to supply credentials, which means only trusted service providers can act as listeners. We can use the default "owner" credentials, but that has admin permissions, so a dedicated service account is better (Neil Mackenzie has a good post, On Not Using owner with the Azure AppFabric Service Bus, with lots of permission details). Click on "Access Control Service" for the namespace, navigate to Service Identities and add a new one. Give the new account a sensible name and description.

    Let ACS generate a symmetric key for you (this will be the shared secret we use in the on-premise service to authenticate as a listener), but be sure to set the expiration date to something usable. The portal defaults to expiring new identities after 1 year, but when your year is up *your identity will expire without warning* and everything will stop working. In production, you'll need governance to manage identity expiration and a process to make sure you renew identities and roll new keys regularly.

    The new service identity needs to be authorized to listen on the service bus endpoint. This is done through claim mapping in ACS: we'll set up a rule that says if the nameidentifier in the input claims has the value serviceProvider, the output will have an action claim with the value Listen. In the ACS portal you'll see that there is already a Relying Party Application set up for ServiceBus, which has a Default rule group.
    Edit the rule group and click Add to add this new rule. The values to use are:

        Issuer: Access Control Service
        Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier
        Input claim value: serviceProvider
        Output claim type: net.windows.servicebus.action
        Output claim value: Listen

    When your service namespace and identity are set up, open the Part 1 solution and put your own namespace, service identity name and secret key into the file AzureConnectionDetails.xml in Solution Items, e.g.:

        <azure namespace="sixeyed-ipasbr">
          <!-- ACS credentials for the listening service (Part1):-->
          <service identityName="serviceProvider"
                   symmetricKey="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/>
        </azure>

    Build the solution, and the T4 template will generate the Web.config for the service project with your Azure details in the transportClientEndpointBehavior:

        <behavior name="SharedSecret">
          <transportClientEndpointBehavior credentialType="SharedSecret">
            <clientCredentials>
              <sharedSecret issuerName="serviceProvider"
                            issuerSecret="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/>
            </clientCredentials>
          </transportClientEndpointBehavior>
        </behavior>

    and your service namespace in the Azure endpoint:

        <!-- Azure Service Bus endpoints -->
        <endpoint address="sb://sixeyed-ipasbr.servicebus.windows.net/net"
                  binding="netTcpRelayBinding"
                  contract="Sixeyed.Ipasbr.Services.IFormatService"
                  behaviorConfiguration="SharedSecret">
        </endpoint>

    The sample project is hosted in IIS, but it won't register with Azure until the service is activated. Typically you'd install AppFabric 1.1 for Windows Server and set the service to auto-start in IIS, but for dev just navigate to the local REST URL, which will activate the service and register it with Azure.

    Testing the service locally

    As well as an Azure endpoint, the service has a WebHttpBinding for local REST access:

        <!-- local REST endpoint for internal use -->
        <endpoint address="rest"
                  binding="webHttpBinding"
                  behaviorConfiguration="RESTBehavior"
                  contract="Sixeyed.Ipasbr.Services.IFormatService" />

    Build the service, then navigate to http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/reverse?string=abc123 and you should see the reversed string response. If your network allows it, you'll get the expected response as before, but in the background your service will also be listening in the cloud. Good stuff! Who needs network security? Onto the next post for consuming the service with the netTcpRelayBinding.

    Setting up network access to Azure

    But if you get an error, it's because your network is secured and it's doing something to stop the relay working. The Service Bus relay bindings try to use direct TCP connections to Azure, so if ports 9350-9354 are available *outbound*, the relay will run through them. If not, the binding steps down to standard HTTP and issues a CONNECT across port 443 or 80 to set up a tunnel for the relay.
    If your network security guys are doing their job, the first option will be blocked by the firewall and the second option will be blocked by the proxy, so you'll get this error:

        System.ServiceModel.CommunicationException: Unable to reach sixeyed-ipasbr.servicebus.windows.net via TCP (9351, 9352) or HTTP (80, 443)

    and that will probably be the start of lots of discussions. Network guys don't really like giving servers special permissions for the web proxy, and they really don't like opening ports, so they'll need to be convinced about this. The resolution in our case was to put up a dedicated box in a DMZ, tinker with the firewall and the proxy until we got a relay connection working, then run some traffic which the network guys monitored to do a security assessment afterwards. Along the way we hit a few more issues, diagnosed mainly with Fiddler and Wireshark:

        System.Net.ProtocolViolationException: Chunked encoding upload is not supported on the HTTP/1.0 protocol

    This means the TCP ports are not available, so Azure tries to relay messaging traffic across HTTP. The service can access the endpoint, but the proxy is downgrading traffic to HTTP 1.0, which does not support tunneling, so Azure can't make its connection. We were using the Squid proxy, version 2.6. The Squid project is incrementally adding HTTP 1.1 support, but there's no definitive list of what's supported in what version (here are some hints).

        System.ServiceModel.Security.SecurityNegotiationException: The X.509 certificate CN=servicebus.windows.net chain building failed. The certificate that was used has a trust chain that cannot be verified. Replace the certificate or change the certificateValidationMode. The revocation function was unable to check revocation because the revocation server was offline.

    By this point we'd given up on the HTTP proxy and opened the TCP ports. We got this error when the relay binding does its authentication hop to ACS. The messaging traffic is TCP, but the control traffic still goes over HTTP, and as part of the ACS authentication the process checks with a revocation server to see if Microsoft's ACS cert is still valid, so the proxy still needs some clearance. The service account (the IIS app pool identity) needs access to:

        www.public-trust.com
        mscrl.microsoft.com

    We still got this error periodically with different accounts running the app pool. We fixed that by ensuring the machine-wide proxy settings are set up, so every account uses the correct proxy:

        netsh winhttp set proxy proxy-server="http://proxy.x.y.z"

    and you might need to run this to clear out your credential cache:

        certutil -urlcache * delete

    If your network guys end up grudgingly opening ports, they can restrict connections to the IP address range for your chosen Azure datacentre, which might make them happier; see Windows Azure Datacenter IP Ranges. After all that you've hopefully got an on-premise service listening in the cloud, which you can consume from pretty much any technology.
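    To check quickly whether the relay's TCP ports are reachable outbound, a minimal sketch from any Unix-like box on the same network (the namespace is the sample one from this post; on Windows, telnet or an equivalent port probe does the same job):

        # probe each relay port with a 5-second timeout; -z = scan only, no data
        for port in 9350 9351 9352 9353 9354; do
            nc -zv -w 5 sixeyed-ipasbr.servicebus.windows.net "$port"
        done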

    Read the article

  • Learning PostgreSql: bulk loading data

    - by Alexander Kuznetsov
    In this post we shall start loading data in bulk. For better performance of inserts, we shall load data into a table without constraints and indexes. This sounds familiar: there is a bulk copy utility, and it is very easy to invoke from C#. The following code feeds the output from a T-SQL stored procedure into a PostgreSql table:

        using (var pgTableTarget = new PgTableTarget(PgConnString, "Data.MyPgTable", GetColumns()))
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open...(read more)
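    For comparison, PostgreSQL's native bulk-load path from the shell is COPY, which follows the same principle of loading into a bare table first. A minimal sketch (database, table and file names are illustrative):

        # stream a CSV file into an unindexed staging table via psql's \copy
        psql -d mydb -c "\copy data.my_pg_table FROM 'export.csv' WITH (FORMAT csv)"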

    Read the article

  • Nyquist won't play audio

    - by erjiang
    I downloaded Nyquist, and am having trouble playing sounds from it. If I run it normally, I get:

        Nyquist -- A Language for Sound Synthesis and Composition
        Copyright (c) 1991,1992,1995 by Roger B. Dannenberg
        Version 2.29
        > (play (osc 60))
        Saving sound file to ./eric-temp.wav
        error: snd_save -- could not open audio output
        >

    If I wrap it by running padsp ny, the sound plays fine for about half a second, and then I get garbage fed to my speakers. Any solutions?

    Read the article

  • Apache on Windows - splitting vHost logs

    - by Cylindric
    I have a Windows Server 2008 machine running Apache, and it will be hosting several virtual hosts. I'd rather not use the logrotate tool (|bin/logrotate), as it seems to create significant extra overhead with all the processes. Is there a simple Windows alternative to get the log entries from a combined log file split into several per-site files? Preferably with custom output directories, but that is optional.
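    One lightweight approach, assuming the first field of each combined-log line is the virtual host (log it by adding %v to the LogFormat), is to split the file after rotation with awk, which is available on Windows via GnuWin32 or similar tool collections. A sketch with illustrative paths:

        # append each line to a per-vhost file named after field 1
        awk '{ f = "C:/logs/" $1 ".access.log"; print >> f; close(f) }' access.log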

    Read the article

  • How can I automatically switch to USB headset when plugged in?

    - by d3vid
    Whenever I plugged in my old audio jack headset, sound was immediately diverted from my speakers to the headset speakers, and the microphone was immediately available. When I plug in my new USB headset, I have to open Sound Preferences and switch both input and output to the headset. Is there any way to make this happen automatically? I'm using a Fujitsu-Siemens Amilo Pi laptop, Maverick and a Logitech H330 USB headset.
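    One approach that may work, assuming PulseAudio (standard on Maverick): find the headset's sink and source names, then switch the defaults from a script that runs when the device appears (a udev rule or login script can trigger it). The device names below are illustrative; list yours first:

        pactl list short sinks       # find the headset's output (sink) name
        pactl list short sources     # and its microphone (source) name

        # switch both defaults to the headset (names are examples only)
        pactl set-default-sink   alsa_output.usb-Logitech_H330-00.analog-stereo
        pactl set-default-source alsa_input.usb-Logitech_H330-00.analog-mono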

    Read the article

  • Set a Video as Your Desktop Wallpaper with VLC

    - by DigitalGeekery
    Are you tired of static desktop wallpapers and want something a bit more entertaining? Today we'll take a look at setting a video as wallpaper in VLC media player.

    Download and install VLC player. Open VLC and select Tools > Preferences. In the Preferences window, select the Video button on the left. Under Video Settings, select DirectX video output from the Output dropdown list. Click Save before exiting and then restart VLC.

    Next, select a video and begin playing it with VLC. Right-click on the screen, select Video, then DirectX Wallpaper. You can achieve the same result by selecting Video from the menu and clicking DirectX Wallpaper.

    If you're using Windows Aero themes, you may get a warning message and your theme will switch automatically to a basic theme. After the wallpaper is enabled, minimize VLC player and enjoy the show as you work.

    When you are ready to switch back to your normal wallpaper, click Video, and then close out of VLC. Occasionally we had to manually change our wallpaper back to normal. You can do that by right-clicking on the desktop and selecting your theme.

    Conclusion

    This might not make the most productive desktop environment, but it is pretty cool. It's definitely not the same old boring wallpaper!
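    On builds where wallpaper mode is exposed on the command line, the same effect can be reached without the GUI. A sketch; the option name varies by VLC version, so it is worth confirming against your build first:

        vlc --longhelp 2>&1 | grep -i wallpaper    # confirm the option exists on this build
        vlc --video-wallpaper --loop myvideo.mp4   # play the clip directly on the desktop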

    Read the article

  • What package is meant to replace thinkfinger in 11.10?

    - by misterhaan
    My laptop has a fingerprint reader that I used via thinkfinger until 11.10. That package is no longer in the repositories, but I assume there's a different package meant to support fingerprint readers in thinkfinger's place. What is the recommended fingerprint reader package? Or should I just find thinkfinger on my own and use that instead? Here's my reader's lsusb output:

        Bus 003 Device 002: ID 0483:2016 SGS Thomson Microelectronics Fingerprint Reader

    Read the article

  • How (recipe) to build only one kernel module?

    - by Pro Backup
    I have a bug in a Linux kernel module that causes the stock Ubuntu 14.04 kernel to oops (crash). That is why I want to edit/patch the source of only that single kernel module, to add some extra debug output. The kernel module in question is mvsas and is not necessary to boot. For that reason I don't see any need to update any initrd images.

    I have read a lot of information (as shown below) and find the setup and build process confusing. I need two recipes:

    1. how to set up/configure the build environment, once
    2. the steps to run after editing any source file of this kernel module (.c and .h) to turn that edit into a new kernel module (.ko)

    The sources that have been used are:

    - build one kernel module - Google search
    - http://www.linuxquestions.org/questions/linux-kernel-70/rebuilding-a-single-kernel-module-595116/
    - http://stackoverflow.com/questions/8744087/how-to-recompile-just-a-single-kernel-module
    - http://www.pixelbeat.org/docs/rebuild_kernel_module.html
    - How do I build a single in-tree kernel module?
    - http://ubuntuforums.org/showthread.php?t=1153067
    - http://ubuntuforums.org/showthread.php?t=2112166
    - http://ubuntuforums.org/showthread.php?t=1115593
    - build one kernel module ubuntu - Google search
    - 'make +single +kernel +module' - Ask Ubuntu
    - 'make +kernel +module' - Ask Ubuntu
    - My makefile results in: No rule to make target `arch/x86/tools/relocs.c', needed
    - '"Invalid module format"' - Ask Ubuntu
    - Driver installation: compiling source code for newer kernel
    - Modprobe: 'Invalid module format', yet works after insmod
    - "Symbol version dump" "is missing" - Google search
    - http://stackoverflow.com/questions/9425523/should-i-care-that-the-symbol-version-dump-is-missing-how-do-i-get-one
    - Where can I find the corresponding Module.symvers and .config files for 12.04.3 i386 server?
    - "no symbol version for module_layout" when trying to load usbhid.ko
    - Broken links inside Linux header file folder
    - 'make modules_install' - Ask Ubuntu
    - 'modules_install' - Ask Ubuntu
    - Empty build directory in custom compiled kernel
    - Not able to see pr_info output
    - In which directory are the kernel source files and how can I recompile it?
    - How can I compile and install that patched libata-eh.c file?
    - 'modules_install +depmod' - Ask Ubuntu
    - modules_install depmod - Google search
    - "make modules_install" - Google search
    - http://www.csee.umbc.edu/courses/undergraduate/CMSC421/fall02/burt/projects/howto_build_kernel.html
    - http://unix.stackexchange.com/questions/20864/what-happens-in-each-step-of-the-linux-kernel-building-process
    - https://wiki.ubuntu.com/KernelCustomBuild
    - http://www.cyberciti.biz/tips/build-linux-kernel-module-against-installed-kernel-source-tree.html
    - http://www.linuxforums.org/forum/kernel/170617-solved-make-modules_install-different-path.html
    - "make prepare" - Google search
    - "make prepare" "scripts/kconfig/conf --silentoldconfig Kconfig" - Google search
    - http://ubuntuforums.org/showthread.php?t=1963515
    - ubuntu "make prepare" version - Google search
    - http://stackoverflow.com/questions/8276245/how-to-compile-a-kernel-module-against-a-new-source
    - https://help.ubuntu.com/community/Kernel/Compile
    - How do I compile a kernel module?
    - How to add a custom driver to my kernel?
    - Compile and loading kernel module without compiling the kernel
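    A minimal recipe sketch for this case, assuming the stock 14.04 kernel and that deb-src entries are enabled for apt-get source (paths follow the usual Ubuntu layout):

        # one-time setup: headers for the running kernel plus matching source
        sudo apt-get install linux-headers-$(uname -r) build-essential
        apt-get source linux-image-$(uname -r)    # requires deb-src lines in sources.list
        cd linux-*/drivers/scsi/mvsas             # edit the .c/.h files here

        # after every edit: rebuild only this directory against the kernel's build tree
        make -C /lib/modules/$(uname -r)/build M=$PWD modules

        # install the new module and reload it (no initrd update needed for mvsas)
        sudo cp mvsas.ko /lib/modules/$(uname -r)/kernel/drivers/scsi/mvsas/
        sudo depmod -a
        sudo modprobe -r mvsas && sudo modprobe mvsas   # works only if the module is not in use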

    Read the article

  • Turn off keyboard back-light Sony (VAIO SVF1521DCXW)

    - by KasiyA
    I have a Sony laptop and I want to turn the keyboard back-light off. It doesn't have a shortcut function key on the keyboard for doing this. I can turn it off with VAIO Control Center in Windows, but I don't know how to turn it off in Ubuntu 14.04.

    The usual interface isn't available to me: /sys/devices/platform/sony-laptop/kbd_backlight doesn't exist on my machine. I do have the folder /sys/devices/platform/sony-laptop/, and it contains a power directory, two symlinks (driver and subsystem) and five files: battery_care_health, battery_care_limiter, modalias, touchpad and event.

    This is the output of running sudo modinfo sony-laptop:

        filename:       /lib/modules/3.13.0-34-generic/kernel/drivers/platform/x86/sony-laptop.ko
        version:        0.6
        license:        GPL
        description:    Sony laptop extras driver (SPIC and SNC ACPI device)
        author:         Stelian Pop, Mattia Dongili
        srcversion:     5C6E050349475558A231C59
        alias:          acpi*:SNY6001:*
        alias:          acpi*:SNY5001:*
        depends:
        intree:         Y
        vermagic:       3.13.0-34-generic SMP mod_unload modversions
        signer:         Magrathea: Glacier signing key
        sig_key:        50:0B:C5:C8:7D:4B:11:5C:F3:C1:50:4F:7A:92:E2:33:C6:14:3D:58
        sig_hashalgo:   sha512
        parm:           debug:set this to 1 (and RTFM) if you want to help the development of this driver (int)
        parm:           no_spic:set this if you don't want to enable the SPIC device (int)
        parm:           compat:set this if you want to enable backward compatibility mode (int)
        parm:           mask:set this to the mask of event you want to enable (see doc) (ulong)
        parm:           camera:set this to 1 to enable Motion Eye camera controls (only use it if you have a C1VE or C1VN model) (int)
        parm:           minor:minor number of the misc device for the SPIC compatibility code, default is -1 (automatic) (int)
        parm:           kbd_backlight:set this to 0 to disable keyboard backlight, 1 to enable it (default: no change from current value) (int)
        parm:           kbd_backlight_timeout:meaningful values vary from 0 to 3 and their meaning depends on the model (default: no change from current value) (int)

    With the suggested commands:

        sudo modprobe -r sony_laptop
        sudo modprobe -v sony_laptop kbd_backlight=0

    the output was:

        insmod /lib/modules/3.13.0-34-generic/kernel/drivers/platform/x86/sony-laptop.ko kbd_backlight=0

    but it doesn't seem to affect the keyboard backlight. Trying this command:

        sudo modprobe -v sony_laptop kbd_backlight_timeout=3 kbd_backlight=0

    doesn't seem to affect the keyboard backlight either. I also tested after restarting the laptop and saw no effect.

    Important: by default the keyboard backlight is off; when I press a key it turns on, and after 15 seconds it turns off again. It's the same result on battery and AC power.

    I also followed http://ubuntuforums.org/showthread.php?t=2139597 and "Keyboard backlighting not working on a Vaio VPCSB11FX", but they didn't work either.
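    For completeness, module parameters passed on the modprobe command line are lost at reboot; the standard way to make them stick is a modprobe.d file. A sketch with the values attempted above (the file name is illustrative):

        echo "options sony_laptop kbd_backlight=0 kbd_backlight_timeout=0" | \
            sudo tee /etc/modprobe.d/sony-kbd-backlight.conf
        sudo modprobe -r sony_laptop && sudo modprobe sony_laptop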

    Read the article

  • Ubuntu 12.04 does not recognize Kingston DT108 16GB memory

    - by aceqott
    I bought a Kingston DT108 16GB and Ubuntu did not recognize it. What should I do? Output of dmesg | tail after connecting the pen drive:

        [ 1455.253010] CPU5: Package power limit notification (total events = 488)
        [ 1455.253013] CPU1: Package power limit notification (total events = 489)
        [ 1455.263966] CPU4: Package power limit normal
        [ 1455.263969] CPU2: Package power limit normal
        [ 1455.263972] CPU0: Package power limit normal
        [ 1455.263975] CPU6: Package power limit normal
        [ 1455.263998] CPU1: Package power limit normal
        [ 1455.264001] CPU5: Package power limit normal
        [ 1455.264004] CPU7: Package power limit normal
        [ 1455.264007] CPU3: Package power limit normal
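    Notably, the dmesg excerpt shows no USB events at all, which suggests the stick never enumerated. Some standard checks worth running (stock tools only):

        lsusb                                        # does the device enumerate at all?
        dmesg | grep -iE 'usb|sd[b-z]' | tail -n 20  # any USB/disk messages on insert?
        sudo fdisk -l                                # does a new /dev/sdX appear?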

    Read the article

  • Installing Yaws server on Ubuntu 12.04 (Using a cloud service)

    - by Lee Torres
    I'm trying to get a Yaws web server working on a cloud service (Amazon AWS). I've compiled and installed a local copy on the server. My problem is that I can't get Yaws to serve requests on either port 8000 or port 80. I have the following configuration in yaws.conf:

        port = 8000
        listen = 0.0.0.0
        docroot = /home/ubuntu/yaws/www/test
        dir_listings = true

    This produces the following successful launch:

        Eshell V5.8.5 (abort with ^G)
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Yaws: Using config file /home/ubuntu/yaws.conf
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Ctlfile : /home/ubuntu/.yaws/yaws/default/CTL
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Yaws: Listening to 0.0.0.0:8000 for <3> virtual servers:
         - http://domU-12-31-39-0B-1A-F6:8000 under /home/ubuntu/yaws/www/trial
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Yaws: Listening to 0.0.0.0:4443 for <1> virtual servers:

    When I try to access the URL (http://ec2-72-44-47-235.compute-1.amazonaws.com), it never connects. I've tried using paping (http://code.google.com/p/paping/) to check if port 80 or 8000 is open, and I get a "Host can not be resolved" error, so obviously something isn't working.

    I've also tried setting yaws.conf to port 80, like this:

        port = 80
        listen = 0.0.0.0
        docroot = /home/ubuntu/yaws/www/test
        dir_listings = true

    and I get the following error:

        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Yaws: Failed to listen 0.0.0.0:80 : {error,eacces}
        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Can't listen to socket: {error,eacces}
        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Top proc died, terminate gserv
        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Top proc died, terminate gserv
        =INFO REPORT==== 16-Sep-2012::17:24:47 ===
        application: yaws
        exited: {shutdown,{yaws_app,start,[normal,[]]}}
        type: permanent
        {"Kernel pid terminated",application_controller,"{application_start_failure,yaws,{shutdown,{yaws_app,start,[normal,[]]}}}"}

    I've also opened up port 80 using iptables. Running sudo iptables -L gives this output:

        Chain INPUT (policy ACCEPT)
        target     prot opt source                       destination
        ACCEPT     tcp  --  ip-192-168-2-0.ec2.internal  ip-192-168-2-16.ec2.internal  tcp dpt:http
        ACCEPT     tcp  --  0.0.0.0              anywhere             tcp dpt:http
        ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
        ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:http
        ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:http

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    In addition, I've gone to the security group panel in the Amazon AWS configuration area and added ports 80, 8000 and 8080 with IP source 0.0.0.0.

    Please note: if you try to access the URL of the virtual server now, it likely won't connect, because I'm not currently running the yaws daemon. I've tested it when running yaws either through yaws or yaws -i. Thanks for the patience.
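    The {error,eacces} on port 80 is the classic symptom of a non-root process trying to bind a privileged port (below 1024). Two common workarounds, sketched; the beam path varies per Erlang install, so locate yours before running the second:

        # option 1: keep Yaws on 8000 and redirect port 80 in the kernel
        sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8000

        # option 2: grant the Erlang VM the capability to bind low ports directly
        # (erts version/path below matches the Eshell V5.8.5 above; adjust to yours)
        sudo setcap 'cap_net_bind_service=+ep' /usr/local/lib/erlang/erts-5.8.5/bin/beam.smp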

    Read the article

  • SQL University: What and why of database testing

    - by Mladen Prajdic
    This is a post for a great idea called SQL University, started by Jorge Segarra, also famously known as SqlChicken on Twitter. It's a collection of blog posts on different database-related topics contributed by several smart people all over the world. This week is mine, and we'll be talking about database testing and refactoring. In 3 posts we'll cover:

    - SQLU part 1 - What and why of database testing
    - SQLU part 2 - What and why of database refactoring
    - SQLU part 3 - Tools of the trade

    With that out of the way, let us sharpen our pencils and get going.

    Why test a database

    The sad state of the industry today is that there is very little emphasis on testing in general. Test-driven development is still a small niche of the programming world, while refactoring is an even smaller one. The cause of this is the inability of developers to convince themselves and their managers that writing tests is beneficial. At the moment tests are mostly viewed as a waste of time. This is because the average person (let's not fool ourselves, we're all average) is unable to weigh lower future costs against a little more current work; it's orders of magnitude easier to see the current costs of the current amount of work. That's why programmers convince themselves testing is a waste of time.

    However, we have to ask ourselves what tests are really about. Maybe finding bugs? No, not really; if we introduce bugs, we're likely to write tests around those bugs too. Yes, we can find some bugs with tests, but the main point of tests is to have reproducible repeatability in our systems. With a code base largely covered by tests, we can know with better certainty what a small code change can break in other parts of the system. By having repeatability we can make code changes with confidence, since we know we'll see what breaks in other tests. And here comes the inability to estimate future costs: by spending just a few more hours writing those tests, we'd know instantly what broke where.

    Imagine we fix a reported bug. We check in the code, deploy it, and the users are happy, until we get a call 2 weeks later that a certain monthly process has stopped working. What we don't know is that this process was developed by a long-gone coworker and for some reason relied on that same bug we've happily fixed. There's no way we could've known that. We say OK, and go in and fix the monthly process. But what we have no clue about is that there's an ETL job that relies on data from that monthly process. Now that we've fixed the process, it's giving unexpected (yet correct, since we fixed it) data to the ETL job. So we have to fix that too. But there's a part of the app we coded that relies on data from that exact ETL job. And just like that we enter the "loop of maintenance horror". With the loop eventually comes blame. Here's a nice tip for all developers and DBAs out there: if you make a mistake, man up and admit to it.

    All of the above is valid for any kind of software development. Keeping this in mind, the database is nothing other than just a part of the application. But a big part! One reason why testing a database is even more important than testing an application is that one database is usually accessed from multiple applications and processes. This makes it the central and vital part of the enterprise software infrastructure. Knowing all this, can we really afford not to have tests?
    What to test in a database

    Now that we've decided we'll dive into this testing thing, we have to ask ourselves what needs to be tested. The short answer is: everything. The long answer is: read on!

    There are 2 main ways of doing tests: black box and white box testing. Black box testing means we have no idea how the system internals are built and we only have access to its inputs and outputs. With it we test that internal changes to the system haven't caused its input/output behavior to change. The most important things to test here are the edge conditions; it's where most programs break, and with good edge-condition tests we can be more confident that system changes won't break anything. White box testing has full knowledge of the system internals. With it we test internal system changes, different states of the application, etc. White and black box tests should be complementary to each other, as they are very much interconnected.

    Testing database routines includes testing stored procedures, views, user-defined functions and anything else you use to access the data. Database routines are your input/output interface to the database system, so they count as black box testing. We test them for 2 things: data and schema. When testing the schema we only care about the columns and the data types they're returning; after all, the schema is the contract to the outside systems, and if it changes we usually have to change the applications accessing it. One helpful T-SQL command when doing schema tests is SET FMTONLY ON. It tells SQL Server to return only empty result sets, which speeds up tests because no data is returned to the client. After we've validated the schema, we have to test the returned data. There is no other way to do this but to have the expected data known before the test executes and to compare that data to the database routine's output.

    Testing authentication and authorization helps us validate who has access to the SQL Server box (authentication) and who has access to certain database objects (authorization). For desktop applications and Windows authentication this works well. But the biggest problem here are web apps, which usually connect to the database as a single user. Please ensure that that user is not SA or an account with admin privileges. That is just bad.

    Load testing ensures that our database can handle peak loads. One often-overlooked tool for load testing is Microsoft's OSTRESS tool. It's part of the RML utilities (x86, x64) for SQL Server and can help determine if our database server can handle loads like 100 simultaneous users each doing 10 requests per second. SQL Profiler can also help us here by looking at why certain queries are slow and what to do to fix them.

    One particular problem to think about is how to begin testing existing databases. The first thing we have to do is get to know those databases; we can't test something when we don't know how it works. To do this we have to talk to the users of the applications accessing the database, run SQL Profiler to see what queries are being run, use existing documentation to decipher all the object relationships, etc. The way to approach this is to choose one part of the database (say a logical grouping of tables that go together) and filter our traces accordingly. Once we've done that, we move on to the next grouping, and so on until we've covered the whole database. Then we move on to the next one.
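    As a small illustration of the schema check described above, here is a minimal sketch from the command line (server, database and procedure names are made up):

        # SET FMTONLY ON makes SQL Server return only the empty result-set shape,
        # so the test sees column names and types without pulling any rows
        sqlcmd -S myserver -d MyDb -Q "SET FMTONLY ON; EXEC dbo.GetOrders;"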
    Database testing is a topic we could spend many hours discussing, but let this be a nice intro to the world of database testing. See you in the next post.

    Read the article

  • Asus N76VM: no sound from external subwoofer "Sonic Master"

    - by Willem
    A few weeks ago I bought an Asus N76VM notebook, looking forward to its "superior sound". The sound system comprises an external subwoofer which amplifies bass and is connected to a special output jack. Ubuntu 12.04, however, does not detect this subwoofer. How can this be solved? Any help would be greatly appreciated. http://www.asus.com/Notebooks/Multimedia_Entertainment/N76VM/#specifications

    Read the article
