Search Results

Search found 48797 results on 1952 pages for 'read write'.


  • Disable a control inside a gridview

    - by saeed talaee
    Hi, I want to hide LinkButton controls in a GridView based on a condition. For example, if the count for a row becomes 0, the LinkButton for that row should be invisible. What should I do, and where should I write the code? Here is the code I wrote in the GridView's RowCommand event, but it only works if I click the LinkButton! I want this to apply to my page before it loads. Please guide me.

        int idx = Convert.ToInt32(e.CommandArgument);
        idx = idx - (GridView1.PageSize * GridView1.PageIndex);
        int ID = (int)GridView1.DataKeys[idx].Value;
        string connStr = ConfigurationManager.ConnectionStrings["dbconn"].ConnectionString;
        SqlConnection sqlconn = new SqlConnection(connStr);
        SqlCommand sqlcmd = new SqlCommand("SELECT count(ID) FROM ReviwerArticle WHERE ArticleID=@ArticleID", sqlconn);
        sqlcmd.Parameters.AddWithValue("@ArticleID", ID);
        sqlconn.Open();
        int count = (int)sqlcmd.ExecuteScalar();
        sqlconn.Close();
        if (count == 0)
        {
            ((LinkButton)GridView1.Rows[idx].Cells[0].FindControl("LinkButton4")).Visible = false;
        }
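    A common way to apply this before the page renders (a hedged sketch, not from the original question; it assumes the bound data exposes the per-row count as a column, here hypothetically named "ReviewerCount", and reuses the "LinkButton4" control name from the question) is to hook the GridView's RowDataBound event, which fires once for every row as it is bound:

        protected void GridView1_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType == DataControlRowType.DataRow)
            {
                // Read the count from the bound data row instead of querying per click.
                int count = Convert.ToInt32(DataBinder.Eval(e.Row.DataItem, "ReviewerCount"));
                if (count == 0)
                {
                    // Hide the LinkButton for rows whose count is zero.
                    LinkButton lb = e.Row.FindControl("LinkButton4") as LinkButton;
                    if (lb != null)
                    {
                        lb.Visible = false;
                    }
                }
            }
        }

    If the count is not part of the bound data, the COUNT query from the question could be run inside the same event instead, at the cost of one query per row.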

    Read the article

  • Are there any simple ways to publish APKs on Google Play?

    - by StupidCodeGenerator
    I'm from China and we have only published APKs in the Chinese market. This is my first time trying to put our game on Google Play. In China, publishing is pretty simple: they give us a jar file, we just import it into our project, and we follow a small document. But reading Google Play's documentation, I think it's too complicated. There are so many documents, and whatever you read, the document always tells you that you need to read another document. Are there any simple ways to do this?

    Read the article

  • Java - Display % of upload done

    - by tr-raziel
    I have a Java applet for uploading files to a server. I want to display the % of data sent, but when I use ObjectOutputStream.write() it just writes to the buffer; it does not wait until the data has actually been sent. How can I achieve this? Perhaps I need to use thread synchronization or something. Any clues would be most helpful. This is the code I'm using right now:

        try {
            for (File file : ficheiros) {
                FileInputStream stream = new FileInputStream(file);
                int bytesRead1 = 0;
                int off1 = 0;
                int len1 = 100000;
                if (file.length() < 100000)
                    len1 = new Long(file.length()).intValue();
                byte[] bytes1 = new byte[len1];
                while (off1 < file.length()) {
                    bytes1 = new byte[len1];
                    if ((file.length() - off1) < len1) {
                        len1 = (new Long(file.length()).intValue() - off1);
                        bytes1 = new byte[len1];
                    }
                    if ((bytesRead1 = stream.read(bytes1)) != -1) {
                        // I want this to block until all data has been sent
                        outputToServlet.write(bytes1, 0, bytesRead1);
                        System.out.println("off1: " + off1);
                        off1 = off1 + len1;
                        outputToServlet.flush();
                    }
                    sent += len1;
                    if (sent > totalLength)
                        sent = (int) totalLength;
                    updateFeedback(sent, totalLength, false); // calls method to display %
                }
                updateFeedback(-1, -1, true);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }

    Thanks

    Read the article

  • Boot From a USB Drive Even if your BIOS Won’t Let You

    - by Trevor Bekolay
    You’ve always got a trusty bootable USB flash drive with you to solve computer problems, but what if a PC’s BIOS won’t let you boot from USB? We’ll show you how to make a CD or floppy disk that will let you boot from your USB drive.

    This boot menu, like many created before USB drives became cheap and commonplace, does not include an option to boot from a USB drive. A piece of freeware called PLoP Boot Manager solves this problem, offering an image that can be burned to a CD or put on a floppy disk, and enables you to boot to a variety of devices, including USB drives.

    Put PLoP on a CD

    PLoP comes as a zip file, which includes a variety of files. To put PLoP on a CD, you will need either plpbt.iso or plpbtnoemul.iso from that zip file. Either disc image should work on most computers, though if in doubt plpbtnoemul.iso should work “everywhere,” according to the readme included with PLoP Boot Manager. Burn plpbtnoemul.iso or plpbt.iso to a CD and then skip to the “Booting PLoP Boot Manager” section.

    Put PLoP on a Floppy Disk

    If your computer is old enough to still have a floppy drive, then you will need to put the contents of the plpbt.img image file found in PLoP’s zip file on a floppy disk. To do this, we’ll use a freeware utility called RawWrite for Windows. We aren’t fortunate enough to have a floppy drive installed, but if you do it should be listed in the Floppy drive drop-down box. Select your floppy drive, then click on the “…” button and browse to plpbt.img. Press the Write button to write PLoP Boot Manager to your floppy disk.

    Booting PLoP Boot Manager

    To boot PLoP, you will need to have your CD or floppy drive boot with higher precedence than your hard drive. In many cases, especially with floppy disks, this is done by default. If the CD or floppy drive is not set to boot first, then you will need to access your BIOS’s boot menu, or the setup menu. The exact steps to do this vary depending on your BIOS – to get a detailed description of the process, search for your motherboard’s manual (or your laptop’s manual if you’re working with a laptop). In general, however, as the computer boots up, some important keyboard strokes are noted somewhere prominent on the screen. In our case, they are at the bottom of the screen. Press Escape to bring up the Boot Menu. Previously, we burned a CD with PLoP Boot Manager on it, so we will select the CD-ROM Drive option and hit Enter. If your BIOS does not have a Boot Menu, then you will need to access the Setup menu and change the boot order to give the floppy disk or CD-ROM Drive higher precedence than the hard drive. Usually this setting is found in the “Boot” or “Advanced” section of the Setup menu. If done correctly, PLoP Boot Manager will load up, giving a number of boot options. Highlight USB and press Enter. PLoP begins loading from the USB drive. Despite our BIOS not having the option, we’re now booting using the USB drive, which in our case holds an Ubuntu Live CD! This is a pretty geeky way to get your PC to boot from a USB…provided your computer still has a floppy drive. Of course if your BIOS won’t boot from a USB it probably has one…or you really need to update it.
    Download PLoP Boot Manager
    Download RawWrite for Windows

    Read the article

  • How to prepare for a telephone interview: ‘Develop an Interview Cheat Sheet’

    - by Maria Sandu
    At Oracle we often do telephone interviews at different stages of the process with candidates, due to the fact that we hire native speakers into other countries. On this blog we already have an article with tips and tricks for phone interviews that can help you during telephone interviews. To help you prepare even better for a telephone interview, we would like to introduce you to the basics of developing a cheat sheet. The benefit of a telephone interview is that you will be sitting at home, at your table or desk, during the interview, and not in front of someone. So use this to your advantage. The Monster website has some useful and interesting tips and tricks for developing a cheat sheet. Carole Martin, who wrote this article, says that a cheat sheet will help you feel more prepared and confident when speaking to managers over the phone. It is important to keep in mind that you shouldn't memorise what's on the sheet or tick items off during the interview. Only use your cheat sheet to remind you of key facts. Here are some suggestions to include on it:

    • Divide a piece of paper in two by drawing a line. On one side of the paper, write a list of the requirements mentioned in the job description. On the other side, list your qualities that fulfil the requirements of the employer. This will help you answer questions about why you are the best candidate for the job and how you fit the role.

    • Do research on the company, the industry sector and the competitors, so you will get a feeling for the company’s business and can ask more in-depth questions.

    • Be prepared for the most common introductory question: “Tell me a bit about yourself”. Prepare a 60-second personal statement or pitch in which you summarise who you are and what you can offer, so you will be able to sell yourself from the very beginning.

    • Write down a minimum of 5 good examples to answer behavioural interview questions (“Tell me about a time when...” or “Give me an example of a time...”). These questions are used by interviewers to see how you have dealt with situations similar to those you might encounter in the job, because past behaviour is scientifically proven to be the best predictor of future behaviour.

    • List five questions to ask the interviewer about the job, the company and the industry to help you get a good understanding of whether the role and company really fit your needs and wants. To get some inspiration, check this article on inc.com.

    • Find out how much you are worth on the job market and determine your needs based on your living expenses, especially when moving abroad.

    • Ask for permission from the people you plan to use as references. Also make sure you have your CV at hand, as well as an overview of your grades.

    Feel free to comment on this article and let us know what your experience is with developing a cheat sheet for a telephone interview. Good luck with the preparation of your sheet.

    Read the article

  • SQL SERVER – Attach mdf file without ldf file in Database

    - by pinaldave
    Background Story:

    One of my friends recently called up and asked me if I had spare time to look at his database and give him some performance tuning advice. Because I had some free time to help him out, I said yes. I asked him to send me the details of his database structure and sample data. He said that since his database is at a very early stage and still small at the moment, he would like me to have the complete database. My response to him was, “Sure! In that case, take a backup of the database and send it to me. I will restore it onto my computer and play with it.” He did send me his database; however, his method made me write this quick note here. Instead of taking a full backup of the database and sending it to me, he sent me only the .mdf (primary database file). In fact, I had asked for a complete backup (I wanted to review file groups, files, as well as a few other details). Upon calling my friend, I found that he was not available. So he left me with only a .mdf file. As I had some extra time, I decided to check out his database structure and get back to him regarding the full backup whenever I could get in touch with him again.

    Technical Talk:

    If the database was shut down gracefully and there was no abrupt shutdown (power outages, pulled plugs, machine crashes or any other reason), it is possible (though there’s no guarantee) to attach only the .mdf file to the server. Please note that there can be many more reasons for a database not getting attached or restored. In my case, the database had a clean shutdown and there were no complex issues. I was able to recreate a transaction log file and attach the received .mdf file. There are multiple ways of doing this. I am listing all of them here. Before using any of them, please consult the domain expert in your company or industry. Also, never attempt this on a live/production server without the presence of a disaster recovery expert.

        USE [master]
        GO

        -- Method 1: I use this method
        EXEC sp_attach_single_file_db @dbname='TestDb',
        @physname=N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf'
        GO

        -- Method 2:
        CREATE DATABASE TestDb ON
        (FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf')
        FOR ATTACH_REBUILD_LOG
        GO

    With Method 2, if one or more log files are missing, they are recreated. There is one more method, which I am demonstrating here but have not used myself before. According to Books Online, it will work only if a single log file is missing. If more than one log file is involved, all of them are required to undergo the same procedure.

        -- Method 3:
        CREATE DATABASE TestDb ON
        (FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf')
        FOR ATTACH
        GO

    Please read Books Online in depth and consult DR experts before working on a production server. In my case, the above syntax just worked fine, as the database was clean when it was detached. Feel free to write about your opinions and experiences, as it will help the IT community learn more from your suggestions and skills.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Question, SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • WIF, ADFS 2 and WCF&ndash;Part 4: Service Client (using Service Metadata)

    - by Your DisplayName here!
    See parts 1, 2 and 3 first. In this part we will finally build a client for our federated service. There are basically two ways to accomplish this. You can use the WCF built-in tooling to generate a client and configuration via the service metadata (aka ‘Add Service Reference’). This requires no WIF on the client side. Another approach would be to use WIF’s WSTrustChannelFactory to manually talk to the ADFS 2 WS-Trust endpoints. This option gives you more flexibility, but means slightly more code to write. You also need WIF on the client, which implies that you need to run on a WIF-supported operating system – this rules out e.g. Windows XP clients. We’ll start with the metadata way. You simply create a new client project (e.g. a console app), call ‘Add Service Reference’ and point the dialog to your service endpoint. What happens then is that VS will contact your service and read its metadata. Inside there is also a link to the metadata endpoint of ADFS 2. This one will be contacted next to find out which WS-Trust endpoints are available. The end result will be a client-side proxy and a configuration file. Let’s first write some code to call the service and then have a closer look at the config file.

        var proxy = new ServiceClient();
        proxy.GetClaims().ForEach(c =>
            Console.WriteLine("{0}\n {1}\n  {2} ({3})\n",
                c.ClaimType,
                c.Value,
                c.Issuer,
                c.OriginalIssuer));

    That’s all. The magic is happening in the configuration file. When you inspect app.config, you can see the following general configuration hierarchy:

        <client /> element with service endpoint information
        federation binding and configuration containing
            ADFS 2 endpoint 1 (with binding and configuration)
            ...
            ADFS 2 endpoint n (with binding and configuration)

    (where ADFS 2 endpoints 1…n are the endpoints I talked about in part 1) You will see a number of <issuer /> elements in the binding configuration, where simply the first endpoint from the ADFS 2 metadata becomes the default endpoint and all other endpoints and their configuration are commented out. You now need to find the endpoint you want to use (based on trust version, credential type and security mode) and swap it in as the default endpoint. That’s it. When you call the WCF proxy, it will inspect the configuration, then first contact the selected ADFS 2 endpoint to request a token. This token will then be used to authenticate against the service. In the next post I will show you the more manual approach using the WIF APIs.

    Read the article

  • C# Proposal: Compile Time Static Checking Of Dynamic Objects

    - by Paulo Morgado
    C# 4.0 introduces a new type: dynamic. dynamic is a static type that bypasses static type checking. This new type comes in very handy when working with:

    - The new languages from the dynamic language runtime.
    - The HTML Document Object Model (DOM).
    - COM objects.
    - Duck typing
    - …

    Because static type checking is bypassed, this:

        dynamic dynamicValue = GetValue();
        dynamicValue.Method();

    is equivalent to this:

        object objectValue = GetValue();
        objectValue
            .GetType()
            .InvokeMember(
                "Method",
                BindingFlags.InvokeMethod,
                null,
                objectValue,
                null);

    Apart from caching the call site behind the scenes and some dynamic resolution, dynamic only looks better. Any typing error will only be caught at run time. In fact, if I’m writing the code, I know the contract of what I’m calling. Wouldn’t it be nice to have the compiler do some static type checking on the interactions with these dynamic objects? Imagine that the dynamic object I’m retrieving from the GetValue method, besides the parameterless method Method, also has a string read-only Property property. This means that, from the point of view of the code I’m writing, the contract that the dynamic object returned by GetValue implements is:

        string Property { get; }
        void Method();

    Since it’s a well defined contract, I could write an interface to represent it:

        interface IValue
        {
            string Property { get; }
            void Method();
        }

    If dynamic allowed specifying the contract in the form of dynamic(contract), I could write this:

        dynamic(IValue) dynamicValue = GetValue();
        dynamicValue.Method();

    This doesn’t mean that the value returned by GetValue has to implement the IValue interface. It just enables the compiler to verify that dynamicValue.Method() is a valid use of dynamicValue and dynamicValue.OtherMethod() isn’t. If the IValue interface already existed for any other reason, this would be fine. But having a type added to an assembly just for compile-time usage doesn’t seem right. So, dynamic could be another type construct. Something like this:

        dynamic DValue
        {
            string Property { get; }
            void Method();
        }

    The code could now be written like this:

        DValue dynamicValue = GetValue();
        dynamicValue.Method();

    The compiler would never generate any IL or metadata for this new type construct. It would only be used for compile-time static checking of dynamic objects. As a consequence, it makes no sense to have public accessibility, so it would not be allowed. Once again, if the IValue interface (or any other type definition) already exists, it can be used in the dynamic type definition:

        dynamic DValue : IValue, IEnumerable, SomeClass
        {
            string Property { get; }
            void Method();
        }

    Another added benefit would be IntelliSense. I’ve been getting mixed reactions to this proposal. What do you think? Would this be useful?
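    To make the run-time-only failure concrete, here is a small compilable sketch of the status quo (my example, not from the original post): the misspelled call compiles fine against dynamic and only fails when executed.

        using System;

        class Value
        {
            public string Property { get { return "some value"; } }
            public void Method() { }
        }

        class Program
        {
            static dynamic GetValue() { return new Value(); }

            static void Main()
            {
                dynamic dynamicValue = GetValue();
                dynamicValue.Method();      // resolved at run time and works

                // Compiles without complaint, but throws
                // Microsoft.CSharp.RuntimeBinder.RuntimeBinderException at run time.
                dynamicValue.OtherMethod();
            }
        }

    With the proposed dynamic(IValue) syntax, the last call would instead be rejected at compile time.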

    Read the article

  • ASP.NET Server-side comments

    - by nmarun
    I believe a good number of you know about server-side commenting. This blog post is just a quick refresher. When writing comments in .aspx/.ascx files, people usually write them as:

        1: <!-- This is a comment. -->

    To show that using the server-side commenting technique actually makes a difference, I’ve started a web application project and my default.aspx page looks like this:

        1: <%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="ServerSideComment._Default" %>
        2: <asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent">
        3: </asp:Content>
        4: <asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
        5: <h2>
        6: <!-- This is a comment -->
        7: Welcome to ASP.NET!
        8: </h2>
        9: <p>
        10: To learn more about ASP.NET visit <a href="http://www.asp.net" title="ASP.NET Website">www.asp.net</a>.
        11: </p>
        12: <p>
        13: You can also find <a href="http://go.microsoft.com/fwlink/?LinkID=152368&amp;clcid=0x409"
        14: title="MSDN ASP.NET Docs">documentation on ASP.NET at MSDN</a>.
        15: </p>
        16: </asp:Content>

    See the comment in line 6. When I run the app and view the source in the browser, it shows up as:

        1: <h2>
        2: <!-- This is a comment -->
        3: Welcome to ASP.NET!
        4: </h2>

    Using Fiddler shows the page size as: Let’s change the comment style and use the server-side commenting technique.

        1: <h2>
        2: <%-- This is a comment --%>
        3: Welcome to ASP.NET!
        4: </h2>

    Upon rendering, the view source looks like:

        1: <h2>
        2:
        3: Welcome to ASP.NET!
        4: </h2>

    Fiddler now shows the page size as: The difference is that client-side comments are ignored by the browser, but they are still sent down the pipe. With server-side comments, the compiler ignores everything inside the block. Visual Studio’s Text Editor toolbar also inserts comments as server-side ones. If you want to give it a shot, go to your design page and press Ctrl+K, Ctrl+C on some selected text, and you’ll see it commented in the server-side commenting style.

    Read the article

  • Installer Reboots at "Detecting hardware" (disks and other hardware) on all recent Server Installs

    - by Ryan Rosario
    I have a very frustrating problem with my PC. I cannot install any recent version of Ubuntu Server (or even Desktop) since 9.04, even using the text-based installer. I boot from a USB stick created by Unetbootin (I also tried other methods, such as Startup Disk Creator, with no difference). On the Server installer, it gets to "Detecting Hardware" (the second one, about disks and all other hardware, not network hardware) and then either hangs at 0% (waited 24 hours) or reboots after a minute or two.

    My system (late 2007):
    - ASUS P5NSLI motherboard
    - Intel Core 2 Duo E6600 2.4GHz
    - 2 x 1GB Corsair 667MHz RAM
    - nVidia GeForce 6600

    I have unplugged everything (including the only hard disk, CD-ROMs and floppy). I have only one stick of RAM (tried each one to no avail) and am booting the installer from a USB stick (booting from CD-ROM yields the same problem). I also tried several of the boot options (nomodeset, nousb, acpi=off, noapic, i915.modeset=1/0, xforcevesa) in all combinations, to no avail. The only active parts of my system are the video card, mouse, keyboard and USB stick. I have also updated the BIOS to the most recent version. (FWIW, on the Desktop installer, I get a black screen after hitting the Install option.) Even after removing "quiet" I am unable to see what kernel panic is occurring (or not occurring) to cause the install to crash. I am only able to save the debug logs via a simple webserver in the installer. After the last line (I repeatedly refreshed), the server stops responding and the installer hangs or reboots:

    Jan 2 01:04:03 main-menu[302]: INFO: Menu item 'disk-detect' selected
    Jan 2 01:04:04 kernel: [ 309.154372] sata_nv 0000:00:0e.0: version 3.5
    Jan 2 01:04:04 kernel: [ 309.154409] sata_nv 0000:00:0e.0: Using SWNCQ mode
    Jan 2 01:04:04 kernel: [ 309.154531] sata_nv 0000:00:0e.0: setting latency timer to 64
    Jan 2 01:04:04 kernel: [ 309.164442] scsi0 : sata_nv
    Jan 2 01:04:04 kernel: [ 309.167610] scsi1 : sata_nv
    Jan 2 01:04:04 kernel: [ 309.167762] ata1: SATA max UDMA/133 cmd 0x9f0 ctl 0xbf0 bmdma 0xd400 irq 10
    Jan 2 01:04:04 kernel: [ 309.167774] ata2: SATA max UDMA/133 cmd 0x970 ctl 0xb70 bmdma 0xd408 irq 10
    Jan 2 01:04:04 kernel: [ 309.167948] sata_nv 0000:00:0f.0: Using SWNCQ mode
    Jan 2 01:04:04 kernel: [ 309.168071] sata_nv 0000:00:0f.0: setting latency timer to 64
    Jan 2 01:04:04 kernel: [ 309.171931] scsi2 : sata_nv
    Jan 2 01:04:04 kernel: [ 309.173793] scsi3 : sata_nv
    Jan 2 01:04:04 kernel: [ 309.173943] ata3: SATA max UDMA/133 cmd 0x9e0 ctl 0xbe0 bmdma 0xe800 irq 11
    Jan 2 01:04:04 kernel: [ 309.173954] ata4: SATA max UDMA/133 cmd 0x960 ctl 0xb60 bmdma 0xe808 irq 11
    Jan 2 01:04:04 kernel: [ 309.174061] pata_amd 0000:00:0d.0: version 0.4.1
    Jan 2 01:04:04 kernel: [ 309.174160] pata_amd 0000:00:0d.0: setting latency timer to 64
    Jan 2 01:04:04 kernel: [ 309.177045] scsi4 : pata_amd
    Jan 2 01:04:04 kernel: [ 309.178628] scsi5 : pata_amd
    Jan 2 01:04:04 kernel: [ 309.178801] ata5: PATA max UDMA/133 cmd 0x1f0 ctl 0x3f6 bmdma 0xf000 irq 14
    Jan 2 01:04:04 kernel: [ 309.178811] ata6: PATA max UDMA/133 cmd 0x170 ctl 0x376 bmdma 0xf008 irq 15
    Jan 2 01:04:04 net/hw-detect.hotplug: Detected hotpluggable network interface eth0
    Jan 2 01:04:04 net/hw-detect.hotplug: Detected hotpluggable network interface lo
    Jan 2 01:04:04 kernel: [ 309.485062] ata3: SATA link down (SStatus 0 SControl 300)
    Jan 2 01:04:04 kernel: [ 309.633094] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    Jan 2 01:04:04 kernel: [ 309.641647] ata1.00: ATA-8: ST31000528AS, CC38, max UDMA/133
    Jan 2 01:04:04 kernel: [ 309.641658] ata1.00: 1953525168 sectors, multi 1: LBA48 NCQ (depth 31/32)
    Jan 2 01:04:04 kernel: [ 309.657614] ata1.00: configured for UDMA/133
    Jan 2 01:04:04 kernel: [ 309.657969] scsi 0:0:0:0: Direct-Access ATA ST31000528AS CC38 PQ: 0 ANSI: 5
    Jan 2 01:04:04 kernel: [ 309.658482] sd 0:0:0:0: Attached scsi generic sg0 type 0
    Jan 2 01:04:04 kernel: [ 309.658588] sd 0:0:0:0: [sda] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
    Jan 2 01:04:04 kernel: [ 309.658812] sd 0:0:0:0: [sda] Write Protect is off
    Jan 2 01:04:04 kernel: [ 309.658823] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
    Jan 2 01:04:04 kernel: [ 309.658918] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    Jan 2 01:04:04 kernel: [ 309.675630] sda: sda1 sda2
    Jan 2 01:04:04 kernel: [ 309.676440] sd 0:0:0:0: [sda] Attached SCSI disk
    Jan 2 01:04:05 kernel: [ 309.969102] ata2: SATA link down (SStatus 0 SControl 300)
    Jan 2 01:04:05 kernel: [ 310.281137] ata4: SATA link down (SStatus 0 SControl 300)

    Anybody have any additional ideas I could try? I am getting ready to just toss the motherboard.

    Read the article

  • Optional Parameters and Named Arguments in C# 4 (and a cool scenario w/ ASP.NET MVC 2)

    - by ScottGu
    [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    This is the seventeenth in a series of blog posts I’m doing on the upcoming VS 2010 and .NET 4 release. Today’s post covers two new language features being added to C# 4.0 – optional parameters and named arguments – as well as a cool way you can take advantage of optional parameters (both in VB and C#) with ASP.NET MVC 2.

    Optional Parameters in C# 4.0

    C# 4.0 now supports using optional parameters with methods, constructors, and indexers (note: VB has supported optional parameters for a while). Parameters are optional when a default value is specified as part of a declaration. For example, the method below takes two parameters – a “category” string parameter, and a “pageIndex” integer parameter. The “pageIndex” parameter has a default value of 0, and as such is an optional parameter. (The original post shows these snippets as code screenshots; a reconstruction appears at the end of this excerpt.) When calling the method we can explicitly pass two parameters to it, or we can omit passing the second optional parameter – in which case the default value of 0 will be passed. Note that VS 2010’s Intellisense indicates when a parameter is optional, as well as what its default value is, when statement completion is displayed.

    Named Arguments and Optional Parameters in C# 4.0

    C# 4.0 also now supports the concept of “named arguments”. This allows you to explicitly name an argument you are passing to a method – instead of just identifying it by argument position. For example, I could explicitly identify the second argument passed to the GetProductsByCategory method by name (making its usage a little more explicit). Named arguments come in very useful when a method supports multiple optional parameters and you want to specify which arguments you are passing. For example, given a method DoSomething that takes two optional parameters, we could use named arguments to call it in any of several ways. Because both parameters are optional, in cases where only one (or zero) arguments are specified, the default value for any non-specified arguments is passed.

    ASP.NET MVC 2 and Optional Parameters

    One nice usage scenario where we can now take advantage of the optional parameter support of VB and C# is with ASP.NET MVC 2’s input binding support for action methods on Controller classes. For example, consider a scenario where we want to map URLs like “Products/Browse/Beverages” or “Products/Browse/Desserts” to a controller action method. We could do this by writing a URL routing rule that maps the URLs to a method. We could then optionally use a “page” querystring value to indicate whether or not the results displayed by the Browse method should be paged – and, if so, which page of the results should be displayed. For example: /Products/Browse/Beverages?page=2.

    With ASP.NET MVC 1 you would typically handle this scenario by adding a “page” parameter to the action method and making it a nullable int (which means it will be null if the “page” querystring value is not present). You could then write code to convert the nullable int to an int – and assign it a default value if it was not present in the querystring. With ASP.NET MVC 2 you can now take advantage of the optional parameter support in VB and C# to express this behavior more concisely and clearly. Simply declare the action method parameter as an optional parameter with a default value. If the “page” value is present in the querystring (e.g. /Products/Browse/Beverages?page=22) then it will be passed to the action method as an integer. If the “page” value is not in the querystring (e.g. /Products/Browse/Beverages) then the default value of 0 will be passed to the action method. This makes the code a little more concise and readable.

    Summary

    There are a bunch of great new language features coming to both C# and VB with VS 2010. The above two (optional parameters and named arguments) are but two of them. I’ll blog about more in the weeks and months ahead. If you are looking for a good book that summarizes all the language features in C# (including C# 4.0), as well as providing a nice summary of the core .NET class libraries, you might also want to check out the newly released C# 4.0 in a Nutshell book from O’Reilly. It does a very nice job of packing a lot of content into a format that is easy to search and find samples in.

    Hope this helps,

    Scott
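    As noted above, the code in the original post appears only as screenshots. Here is a hedged reconstruction from the surrounding description (the Product and ProductsService type names and the method bodies are assumptions; only the parameter shapes are described in the text):

        using System.Collections.Generic;
        using System.Linq;

        class Product
        {
            public string Name { get; set; }
            public string Category { get; set; }
        }

        class ProductsService
        {
            private readonly List<Product> products = new List<Product>();

            // "pageIndex" has a default value of 0, and as such is optional.
            public List<Product> GetProductsByCategory(string category, int pageIndex = 0)
            {
                const int pageSize = 10;
                return products.Where(p => p.Category == category)
                               .Skip(pageIndex * pageSize)
                               .Take(pageSize)
                               .ToList();
            }
        }

        class Program
        {
            static void Main()
            {
                var service = new ProductsService();
                service.GetProductsByCategory("Beverages", 2);            // both arguments passed explicitly
                service.GetProductsByCategory("Beverages");               // optional parameter omitted -> 0
                service.GetProductsByCategory("Beverages", pageIndex: 2); // named argument
            }
        }

    And the MVC 2 action method described at the end, again a reconstruction, shown as a fragment that belongs inside a Controller class:

        public ActionResult Browse(string category, int page = 0)
        {
            // "page" is 0 unless the querystring supplies ?page=n
            return View();
        }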

    Read the article

  • Having Fun with Coding4Fun&rsquo;s Windows Phone 7 Controls

    - by mbcrump
    I’m a big believer in having a hobby project, as you can probably tell from the first sentence in my “personal webpage using Silverlight” article. One of my current hobby projects is to re-do my current WP7 application in the marketplace. I knew up front that I needed a “Loading” animation and a better “About” box. After starting to develop my own, I noticed a great set of WP7 controls by Coding4Fun and decided to use them in my new application. Before I go any further: they are FREE and open source. It is really simple to get started; just go to the CodePlex site and click the download button. After you have downloaded it, extract it to a folder and you will have 4 DLL files. Now create a Windows Phone 7 project and add references to the DLLs by right-clicking on the References folder and clicking “Add references”. After adding the references, we can get started.

    I needed a ProgressOverlay animation, or “Loading screen”, while my RSS feed is downloading. Basically, you just need to add the following namespace to whatever page you want the control on:

        xmlns:Controls="clr-namespace:Coding4Fun.Phone.Controls;assembly=Coding4Fun.Phone.Controls"

    And then the code inside your Grid, or wherever you want the loading screen placed:

        <Controls:ProgressOverlay Name="progressOverlay">
            <Controls:ProgressOverlay.Content>
                <TextBlock>Loading</TextBlock>
            </Controls:ProgressOverlay.Content>
        </Controls:ProgressOverlay>

    Bam, you now have a great looking loading screen. Of course, inside the ProgressOverlay you may want to add a Visibility property to turn it off after your data loads, if you are using MVVM or a similar pattern.

    Next up, I needed a nice clean “About” box that looks good but is also functional, meaning that clicking my twitter name, web address or email launches the appropriate task. Again, this is only a few lines of code:

        var p = new AboutPrompt();
        p.VersionNumber = "2.0";
        p.Show("Michael Crump", "@mbcrump", "[email protected]", @"http://michaelcrump.net");

    A nice clean “About” box with just a few lines of code! I’m all for code that I don’t have to write. It also comes with a pretty sweet InputPrompt for grabbing info from a user:

        InputPrompt input = new InputPrompt();
        input.Completed += (s, e) => { MessageBox.Show(e.Result.ToString()); };
        input.Title = "Input Box";
        input.Message = "What does a \"Developer Large\" T-Shirt Mean? ";
        input.Show();

    I also enjoyed the PhoneHelper, which allows you to get data out of the WMAppManifest file very easily. So, for example, if I wanted the Version info from the WMAppManifest file, I could write one line and get it:

        PhoneHelper.GetAppAttribute("Version")

    Of course, you would want to make sure you add the following using statement:

        using Coding4Fun.Phone.Controls.Data;

    You can’t have all these cool controls without a great set of converters. The included BooleanToVisibility converter will convert a Boolean to and from a Visibility value. This is excellent when using something like a CheckBox to display a TextBlock when it's checked. The code is below:

        <phone:PhoneApplicationPage.Resources>
            <Converters:BooleanToVisibilityConverter x:Key="BooleanToVisibilityConverter"/>
        </phone:PhoneApplicationPage.Resources>

        <CheckBox x:Name="checkBox"/>
        <TextBlock Text="Display Text" Visibility="{Binding ElementName=checkBox, Path=IsChecked, Converter={StaticResource BooleanToVisibilityConverter} }"/>

    That’s not all the goodies included. They also provide a RoundedButton, a TimePicker and several other converters. The documentation is great, and I would recommend you give them a shot if you need any of this functionality. Btw, thanks to Brian Peek for his awesome work on Coding4Fun!

    Read the article

  • “I could use a little help here” or “I can do it myself, thank you” for Cloud Projects

    - by BuckWoody
    Windows Azure allows you to write code in languages within the .NET stack; you can also use Java, C++, PHP, NodeJS and others. Code is code - other than keeping things stateless, using a Web or Worker Role in Azure is not all that different from working with an on-premises system. However… working in a scalable, component-based, stateless architecture that can use federated security is not all that common for many developers. Some are used to owning the server, scaling up, and stateful paradigms that have a single security domain. Making the transition whilst trying to create a new software application, or even port a previous one, can be daunting. Sure, we have absolutely tons of free training, kits, videos, online books and more to learn on your own, but some things like architecture can be pivotal as you move along. So the question is, should you just strike out on your own for a Cloud project, or get Microsoft Consulting Services or another partner to work with you on your first one? I use a few decision points to help guide the projects I assist in. Note: I’m a huge fan of having help that ends up giving you training and leaves you in charge. If you do engage with someone to help you, make sure you keep this clear and take more and more ownership yourself as the project progresses.

    How much time do you have? Usually the first thing I ask is about the timeline for the project. It doesn’t matter how skilled you are: if you have a short window to get things done, it’s better to get help - especially if this is your first cloud project. Having someone that knows the platform well can save you amazing amounts of time. If you have longer, then start with the training in the link above and, once you feel confident, jump in.

    How complex is the project? If there are a lot of moving parts, it’s best to engage a partner. The reason is that certain interactions - particularly things like Service Bus or Data Integration - can be quite different than what you may have encountered before.

    How many people do you have? I have a “pizza rule” about projects that I’ve used in my career - if it takes over two pizzas to feed everyone on the project, it’s too big and will fail. That being said, one developer and a one-week deadline does not a good project make, usually. It’s best to have at least one architect (or someone in that role) guiding the project along, and at least two developers to work on a cloud project. That’s a generalization of course, since I’ve seen great software on Azure with one developer writing code all by herself, but for more complex projects, more (to a point) is better. The nice thing about bringing on a partner is that you don’t have to hire them full time - they help you and then they go away.

    How critical is the project? There’s no shame in using some help. If the platform is new, if the project is large and complex, and if it is critical to the business, you should engage a partner. That’s regardless of Cloud or anything else - get some help. You don’t want to hit your company’s bottom line in a negative way, but you have to innovate and get them a competitive advantage. Do your research, make sure the partner is qualified to help you, and get it done.

    Don’t let these questions scare you off. There are lots of projects you can implement on Windows and SQL Azure with nothing other than the Software Development Kit (SDK) that you get for free with Windows Azure. And assistance comes in many forms - sometimes just phone support, a friend you can ask, Microsoft Consulting Services, or any of our great partners. You can get help on just the architecture piece or have them show you how to write the code. They’ll get involved as little or as much as you like.

    Read the article

  • ASP.NET Error Handling: Creating an extension method to send error email

    - by Jalpesh P. Vadgama
    Error handling in ASP.NET is required to handle any kind of error that occurs. We all use it in one scenario or another. But some errors occur only in specific scenarios in a production environment, in which case we can't show our programming errors to the end user. So we put an error page there, or whatever best suits our requirement. But as programmers we should know about that error, so we can track the scenario and solve or handle the error. In this kind of situation an error email comes in handy: whenever an error occurs in the system, it sends the error to our email. Here I am going to write an extension method which will send errors in email. From version 3.5 onwards, the .NET Framework provides a unique way to extend your classes. Here you can find more information about extension methods. So let's create an extension method by implementing a static class like the following. I am going to use the same code for sending email via my Gmail account from here. Following is the code for that.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Net.Mail;

        namespace Experiement
        {
            public static class MyExtension
            {
                public static void SendErrorEmail(this Exception ex)
                {
                    MailMessage mailMessage = new MailMessage(new MailAddress("[email protected]"),
                        new MailAddress("[email protected]"));
                    mailMessage.Subject = "Exception Occured in your site";
                    mailMessage.IsBodyHtml = true;

                    System.Text.StringBuilder errorMessage = new System.Text.StringBuilder();
                    errorMessage.AppendLine(string.Format("<B>{0}</B>:{1}<BR/>", "Exception", ex.Message));
                    errorMessage.AppendLine(string.Format("<B>{0}</B>:{1}<BR/>", "Stack Trace", ex.StackTrace));
                    if (ex.InnerException != null)
                    {
                        errorMessage.AppendLine(string.Format("<B>{0}</B>:{1}<BR/>", "Inner Exception", ex.InnerException.Message));
                        errorMessage.AppendLine(string.Format("<B>{0}</B>:{1}<BR/>", "Inner Stack Trace", ex.InnerException.StackTrace));
                    }
                    mailMessage.Body = errorMessage.ToString();

                    System.Net.NetworkCredential networkCredentials = new System.Net.NetworkCredential("[email protected]", "password");
                    SmtpClient smtpClient = new SmtpClient();
                    smtpClient.EnableSsl = true;
                    smtpClient.UseDefaultCredentials = false;
                    smtpClient.Credentials = networkCredentials;
                    smtpClient.Host = "smtp.gmail.com";
                    smtpClient.Port = 587;
                    smtpClient.Send(mailMessage);
                }
            }
        }

    After creating the extension method, let us use it to handle an error, like the following in the Page_Load event of a page.

        using System;

        namespace Experiement
        {
            public partial class WebForm1 : System.Web.UI.Page
            {
                protected void Page_Load(object sender, System.EventArgs e)
                {
                    try
                    {
                        throw new Exception("My custom Exception");
                    }
                    catch (Exception ex)
                    {
                        ex.SendErrorEmail();
                        Response.Write(ex.Message);
                    }
                }
            }
        }

    In the above code I have generated a custom exception as an example, but in production it can be any exception. And you can see I have used the ex.SendErrorEmail() function in the catch block to send the email. That's it. Now it will throw the exception and you will get an email in your inbox like below.

    That's it. It's so simple… Stay tuned for more. Happy programming.

    Technorati Tags: Exception,Extension Method,Error Handling,ASP.NET
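    In a real application you would rarely wrap every page in try/catch just to send the email. A common pattern (a sketch of mine, not from the original post, reusing the Experiement namespace and the SendErrorEmail extension method above) is to call it from Application_Error in Global.asax, which catches any unhandled exception in the site:

        using System;
        using System.Web;

        namespace Experiement
        {
            public class Global : HttpApplication
            {
                protected void Application_Error(object sender, EventArgs e)
                {
                    // Server.GetLastError() returns the last unhandled exception.
                    Exception ex = Server.GetLastError();
                    if (ex != null)
                    {
                        ex.SendErrorEmail();
                    }
                }
            }
        }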

    Read the article

  • SQL SERVER – 3 Challenges for DBA and Smart Solutions

    - by Pinal Dave
    A developer’s life is never easy. A DBA’s life is even crazier.

    DBA’s Life

    When a developer wakes up in the morning, most of the time they have no idea what different challenges they are going to face that day. Of course, most developers know the project and roadmap they are working on. However, developers have no clue what coding challenges they are going to face that day. A DBA’s life is even crazier. When a DBA wakes up in the morning, they are often thankful that they were not disturbed during the night by server issues. The very next thing they wish is that they will not face a challenge they can’t solve that day. The problems DBAs face every single day are mostly unpredictable, and they just have to solve them as they come during the day. The life of a DBA is not always bad, though. There are always ways and methods to overcome various challenges. Let us see three of these challenges and how a DBA can use various tools to overcome them.

    Challenge #1: Synchronize Data Across Servers

    A very common challenge DBAs receive is that they have to synchronize data across servers. If you try to write that up manually, it may take forever to accomplish the task, and it is nearly impossible to do with T-SQL alone. However, thankfully there are tools like dbForge Studio which can save the day and synchronize data across servers. Read my detailed blog post about the same over here: SQL SERVER – Synchronize Data Exclusively with T-SQL.

    Challenge #2: SQL Report Builder

    DBAs are often asked to build reports on the go. It really annoys DBAs, but people hardly care about it. No matter how busy a DBA is, they are just called upon to build reports on things at very short notice. I personally like to avoid any task which is given to me accidentally, and building reports can be boring. I would rather spend time on high availability, disaster recovery, and performance tuning than building reports. I use a third party tool when I have to work with SQL reports; some of these tools have extended reporting capabilities. The latter group of products includes the SQL report builder built into dbForge Studio for SQL Server. I have blogged about this earlier over here: SQL SERVER – SQL Report Builder in dbForge Studio for SQL Server.

    Challenge #3: Work with the OTHER Database

    The manager does not understand that MySQL is different from SQL Server and SQL Server is different from Oracle. For them everything is the same. Hundreds of times in my career I have faced a situation where I am given a database to manage, or some task to do, when the regular DBA is on vacation or leave. When I try to explain that I do not understand the underlying technology, I am usually told that my manager trusts me and I can do anything. Honestly, I can’t, but I hardly dare to argue. I fall back on a third party tool to manage a database when it is not in my comfort zone. For example, I was once given a MySQL performance tuning task (at that time I did not know MySQL so well). To simplify the search for a problem query, we can use the MySQL Profiler in dbForge Studio for MySQL. It provides such commands as Query Profiling Mode and Generate Execution Plan. Here is the blog post discussing the same: MySQL – Profiler: A Simple and Convenient Tool for Profiling SQL Queries.

    Well, that’s it! There were many other such occasions when I have been saved by a tool. Maybe some other day I will write part 2 of this blog post.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL Tagged: Devart, SQL Tool

    Read the article

  • Using NServiceBus behind a custom web service

    - by Michael Stephenson
    In this post I'd like to talk about an architecture scenario we had recently and how we were able to utilise NServiceBus to help us address the problem.

    Scenario

    Cognos is a reporting system used by one of my clients. A while back we developed a web service façade to allow line-of-business applications to access reports from Cognos to support their various functions. The service was intended to provide access to quick-running reports or pre-generated reports which could be accessed in real time, on demand. One of the key aims of the web service was to provide a simple, generic interface to allow applications to get any report without needing to worry about the complex .NET SDK for Cognos. The web service also supported multi-hop Kerberos delegation so that report data could be accessed under the context of the end user. This service was working well for a period of time.

    The Problem

    The problem we encountered was that reports were now also required to be available to batch processes. The original design was optimised for low latency so users would enjoy a positive experience; however, when the batch processes started to request 250+ concurrent reports over an extended period of time, you can begin to imagine the sorts of problems that come into play. The key problems this new scenario caused are:

    - Users may be affected, and the latency of on-demand reports was significantly slower
    - The Cognos infrastructure was not scaled sufficiently to be able to cope with these long peaks of load
    - From a cost perspective it just isn't feasible to scale the Cognos infrastructure to be able to handle the load when it is only for a couple-of-hour window each night

    We really needed to introduce a second pattern for accessing this service which would support high-throughput scenarios. We also had little control over the batch process in terms of being able to throttle its load. We could, however, make some changes to the way it accessed the reports.

    The Approach

    My idea was to introduce a throttling mechanism between the web service façade and Cognos. This would allow the batch processes to push report requests hard at the web service, which we were confident the web service could handle. The web service would then queue these requests, process them behind the scenes, and make a call back to the batch application to provide the report once it had been accessed. In terms of technology we had some limitations because we were not able to use WCF or IIS7, where the MSMQ-activated WCF services could have helped, but we did have MSMQ as an option, and I thought NServiceBus could do just the job to help us here. The flow of how this would work was as follows:

    1. The batch applications send a request for a report to the web service
    2. The web service uses NServiceBus to send the message to a queue
    3. The NServiceBus generic host runs as a Windows service with a message handler which subscribes to these messages
    4. The message handler gets the message and accesses the report from Cognos
    5. The message handler calls back to the original batch application; this is decoupled because the calling application provides a callback URL
    6. The report gets into the batch application and is processed as normal

    This approach looks something like the below diagram. The key point is that an application wanting to take advantage of the batch-driven reports needs to do the following:

    - Implement our callback contract
    - Make a call to the service providing a callback URL
    - Provide a correlation ID so it knows how to tie each response back to its request

    What does NServiceBus offer in this solution?

    So this scenario is not the typical messaging/service-bus type of solution people implement with NServiceBus, but it did offer the following:

    - Simplified interaction with MSMQ
    - The ability to configure the number of processes working through the queue, so we could find a balance between load on Cognos and the application's end-to-end processing time
    - Retries and a way to manage failed messages
    - A high-availability setup

    The simple thing is that NServiceBus gave us the platform to build the solution on. We just implemented a message handler which functionally processed a message (see the sketch at the end of this post), and we could rely on NServiceBus to do all of the hard work around managing the queues and all of the lower-level things that would have taken ages to write to any kind of robust level.

    Conclusion

    With this approach we were able to deal with a fairly significant performance issue without too much rework. Hopefully this write-up gives people some insight into ideas on how to leverage the excellent NServiceBus framework to help solve integration and high-throughput scenarios.
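    For illustration, here is a minimal sketch of the kind of message handler described above (my reconstruction, not the author's code: the ReportRequest message shape and the Cognos/callback helpers are assumed names, and the handler signature follows the NServiceBus 2.x-era IHandleMessages<T> interface):

        using NServiceBus;

        // Assumed message contract sent by the web service facade.
        public class ReportRequest : IMessage
        {
            public string ReportName { get; set; }
            public string CorrelationId { get; set; }
            public string CallbackUrl { get; set; }
        }

        // The generic host discovers this handler and invokes it for each
        // message taken from the MSMQ queue; the number of worker processes
        // is configurable, which is what provides the throttling.
        public class ReportRequestHandler : IHandleMessages<ReportRequest>
        {
            public void Handle(ReportRequest message)
            {
                // Access the report from Cognos (hypothetical helper).
                byte[] report = CognosGateway.GetReport(message.ReportName);

                // Call back to the batch application with the correlation ID
                // so it can tie the response to its request (hypothetical helper).
                CallbackClient.Send(message.CallbackUrl, message.CorrelationId, report);
            }
        }

        // Stubs so the sketch compiles; real implementations would use the
        // Cognos SDK and an HTTP/WCF call to the callback URL.
        static class CognosGateway
        {
            public static byte[] GetReport(string reportName) { return new byte[0]; }
        }

        static class CallbackClient
        {
            public static void Send(string url, string correlationId, byte[] report) { }
        }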

    Read the article

  • Book Reviews: Art of Community and Eyetracking Web Usability

    - by ultan o'broin
    Holiday time offers a chance to catch up on some user experience and user assistance related material. So, two short book reviews (which I considered using my new Tumblr blog for; more about that another time) coming up.

    The Art of Community by Jono Bacon

    An excellent starting point for anyone wanting to get going in the community software (FLOSS, for example) space or understand how to set up, manage, and leverage the collective intelligence of communities for whatever ends. The book is a little too long in my opinion, and of course, usage of what Jono is recommending needs to be nuanced and adapted for the enterprise applications space (hardly surprising there is a lot about Ubuntu, LugRadio, and so on, given Jono's interests). Shame there wasn't more information on international, non-English community considerations too. Still, some great ideas and insight into setting up and managing communities that I will leverage (watch out for the results on this blog, later in 2011). One section, on collaborative writing, really jumped out. It reinforced the whole idea that successful community initiatives are based on instigators knowing what makes the community tick in the first place. How about this for insight into user profiles for people who write community user assistance (OK then, "doc") and what tools they might use (in this case, we're talking about Jokosher):

    "Most people who write documentation for open source software projects would fall into the category of power user. They are technology enthusiasts who are not interested in the super-technical avenues of programming, but want to help out. Many of these people have good writing skills and a good knowledge of using the software, so the documentation fit is natural. With Jokosher we wanted to acknowledge this profile of user. As such, instead of focussing on complex text processing tools, we encouraged our documentation contributors to use a wiki."

    The book is available for free here, as well as being available from the usual sources.

    Eyetracking Web Usability by Jakob Nielsen and Kara Pernice

    Another fine book by established experts. I have some field experience of eyetracking studies myself -- in the user assistance for enterprise applications space -- though Jakob and Kara concentrate on websites for their research here. I would be cautious about how much of what applies to websites transfers easily to the applications space, especially enterprise applications, as is claimed in the book. However, Jakob and Kara do make the case very well that understanding design goals (for example, productivity improvement in the case of applications) and the context of the software use is critical. Executing a study using eyetracking technology requires that you know what you want to test, can set up realistic tasks for testing by representative testers, and can then analyze the results. Be precise, as lots of data will be generated (I think the authors underplay the effort in analyzing data too). What I found disappointing was the lack of emphasis on eyetracking as only part of the usability solution. It's really for fine-tuning designs, in my opinion, and should be used after other design reviews. I also wasn't that crazy about the level of disengagement between the qualitative and quantitative sides of this kind of testing that the book indicated. I think it is useful to have testers verbalize their thoughts and for test engineers to prompt, intervene, or guide as necessary. More on cultural or international aspects of usability testing might have been included too (websites are available to everyone).

    To conclude, I enjoyed the book, took on board some key takeaways about methodologies, and found the recommendations sensible and easy to follow (for example, about form layouts). Applying enterprise applications requirements such as those relating to user profiles, design goals, and overall context of use in conjunction with what's in this book would be the way to go here. It also made me think how interesting it would be to compare eyetracking findings between website and enterprise applications usage.

    Read the article

  • Creating an HttpHandler to handle request of your own extension

    - by Jalpesh P. Vadgama
    I have already posted about HTTP handlers in detail some time back here. Now let’s create an HTTP handler which will handle my own custom extension. For that we need to create a handler class which implements IHttpHandler. As we are implementing IHttpHandler, we need to implement one method called ProcessRequest and an IsReusable property. The ProcessRequest function will handle all the requests for my custom extension. So here is the code for my HTTP handler class:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.UI;

        namespace Experiement
        {
            public class MyExtensionHandler : IHttpHandler
            {
                public MyExtensionHandler()
                {
                    //Implement initialization here
                }

                bool IHttpHandler.IsReusable
                {
                    get { return true; }
                }

                void IHttpHandler.ProcessRequest(HttpContext context)
                {
                    string executablePath = context.Request.AppRelativeCurrentExecutionFilePath;
                    if (executablePath.Contains("HelloWorld.dotnetjalps"))
                    {
                        Page page = new HelloWorld();
                        page.AppRelativeVirtualPath = context.Request.AppRelativeCurrentExecutionFilePath;
                        page.ProcessRequest(context);
                    }
                }
            }
        }

    Here in the above code you can see that in the ProcessRequest function I get the current executable path and then process that page. Now let’s create a page with the extension .dotnetjalps and then process it with the handler created above. Let’s write something in its Page_Load event, like the following:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        namespace Experiement
        {
            public partial class HelloWorld : System.Web.UI.Page
            {
                protected void Page_Load(object sender, EventArgs e)
                {
                    Response.Write("Hello World");
                }
            }
        }

    Now we have to tell our web server that we want to process requests for the .dotnetjalps extension through our custom HTTP handler. For that we need to add a tag in the httpHandlers section of web.config, like the following:

        <configuration>
          <system.web>
            <compilation debug="true" targetFramework="4.0" />
            <httpHandlers>
              <add verb="*" path="*.dotnetjalps" type="Experiement.MyExtensionHandler,Experiement"/>
            </httpHandlers>
          </system.web>
        </configuration>

    That’s it. Now run that page in the browser and it will execute just like any other page. Isn’t it cool? Stay tuned for more. Happy programming.

    Technorati Tags: HttpHandler,ASP.NET,Extension
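    For comparison, a handler does not have to route the request through a Page. A minimal sketch (mine, not from the original post) that writes the response directly is the more common pattern when the custom extension does not map to an .aspx page:

        using System.Web;

        namespace Experiement
        {
            public class SimpleExtensionHandler : IHttpHandler
            {
                // Safe to reuse across requests because the handler keeps no state.
                public bool IsReusable
                {
                    get { return true; }
                }

                public void ProcessRequest(HttpContext context)
                {
                    // Write the response directly instead of instantiating a Page.
                    context.Response.ContentType = "text/plain";
                    context.Response.Write("Served from " + context.Request.AppRelativeCurrentExecutionFilePath);
                }
            }
        }

    Registering it uses the same httpHandlers entry as above, with only the type name changed.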

    Read the article

  • How SQL Server 2014 impacts Red Gate’s SQL Compare

    - by Michelle Taylor
    SQL Compare 10.7 successfully connects to SQL Server 2014, but it doesn't yet cover the SQL Server 2014 features that would require major changes to SQL Compare to support. In this post I'm going to talk about the SQL Server 2014 features we've already begun supporting, and which ones we're working on for the next release of SQL Compare (v11).

From SQL Compare's perspective, the new memory-optimized table functionality (some might know it as 'Hekaton') has been the most important change. It can't be described as its own object type, but the new functionality is split across two existing object types (three if you count indexes), as it also comes with natively compiled stored procedures and inline indexes.

Along with connectivity support, the SQL Compare team has already implemented the first part of the puzzle: inline specification of indexes. These are essential for memory-optimized tables because it's not possible to alter a memory-optimized table's structure, and so indexes can't be added after the fact without dropping the table. Books Online shows this in more detail in the table_index and column_index clauses of http://msdn.microsoft.com/en-us/library/ms174979(v=sql.120).aspx.

SQL Compare 10.7 currently supports reading the new inline index specification from script folders and source control repositories, and will write out inline indexes where it's necessary to do so (i.e. in UDDTs or when attempting to write projects compatible with the SSDT database project format). However, memory-optimized tables themselves are not yet supported in 10.7. The team is actively working on making them available in the v11 release, with full support later in the year, and in a beta version before that. Fortunately, SQL Compare already has some ways of handling tables that have to be dropped and created rather than altered, and these are being adapted to handle this new kind of table. Because it's one of the largest new database engine features, there's an equally large Books Online section on memory-optimized tables, but for us the most important parts of the documentation are the normal table features that are changed or unsupported, and the new syntax found in the T-SQL reference pages.

We are treating SQL Compare's support of natively compiled stored procedures as a separate unit of work, which will be available in a subsequent beta and will also feed into the v11 release. This new type of stored procedure is designed to work with memory-optimized tables to maintain the performance improvements gained by them -- but you can still also access memory-optimized tables from normal stored procedures and ad-hoc queries. To us, they're essentially a limited-syntax stored procedure with a few extra options in the create statement, embodied in the updated CREATE PROCEDURE documentation and its detailed limitations. They should be easier to handle than memory-optimized tables, simply because the handling of stored procedures is less sensitive to dropping the object than the handling of tables. However, both share an incompatibility with DDL triggers and Event Notifications, which means we'll need to temporarily disable these during the specific deployment operations that involve them -- don't worry, we'll supply a warning if this is the case so that you can check that your auditing arrangements can handle the situation. There are also a handful of other improvements in SQL Server 2014 which affect SQL Compare and SQL Data Compare that are not connected to memory-optimized tables.
The largest of these are the improvements to columnstore indexes, with the capability to create clustered columnstore indexes and update columnstore tables through them – for more detail, take a look at the new syntax reference. There’s also a new index option for better compression of columnstores (COLUMNSTORE_ARCHIVE) and a new statistics option for incremental per-partition statistics, plus the 90 compatibility level is being retired. We’re planning to finish up these small clean-up features last, and be ready to release SQL Compare 11 with full SQL 2014 support early in Q3 this year. For a more thorough overview of what’s new in SQL Server 2014, Books Online’s What’s New section is a good place to start (although almost all the changes in this version are in the Database Engine).
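To make the inline index and natively compiled procedure syntax discussed above concrete, here is a minimal sketch. The object names are invented for illustration and this is not taken from the SQL Compare codebase; it also assumes a database that already has a MEMORY_OPTIMIZED_DATA filegroup, which SQL Server 2014 requires before any memory-optimized table can be created.

        -- Illustrative only: assumes the database already has a
        -- MEMORY_OPTIMIZED_DATA filegroup (required for in-memory OLTP).
        CREATE TABLE dbo.SessionCache
        (
            SessionId INT NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            UserId INT NOT NULL
                INDEX IX_SessionCache_UserId NONCLUSTERED, -- inline index: cannot be added later
            LastTouched DATETIME2 NOT NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
        GO

        -- A natively compiled stored procedure working against the table above.
        CREATE PROCEDURE dbo.usp_TouchSession @SessionId INT
        WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
        AS
        BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
            -- Update via the hash primary key defined inline above
            UPDATE dbo.SessionCache
            SET LastTouched = SYSUTCDATETIME()
            WHERE SessionId = @SessionId;
        END;
        GO

The comparison problem described in the post is visible even in this small script: the inline indexes and the table form one inseparable unit, so any change to IX_SessionCache_UserId means dropping and recreating dbo.SessionCache rather than issuing an ALTER.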

    Read the article

  • Using the Katana Authentication handlers with NancyFx

    - by cibrax
    Once you write an OWIN middleware service, it can be reused everywhere as long as OWIN is supported. In my last post, I discussed how you could write an authentication handler in Katana for Hawk (HMAC authentication). The good news is that NancyFx can run as an OWIN handler, so you can use many existing middleware services, including the ones that ship with Katana.

Running NancyFx as an OWIN handler is pretty straightforward, and is discussed in detail as part of the NancyFx documentation here. After running the steps described there, and once you have the application working, only a few more steps are required to register the additional middleware services. The example below shows how the Startup class is modified to include Hawk authentication:

        using Owin;
        // plus the namespace of the Hawk middleware from the previous post

        public class Startup
        {
            public void Configuration(IAppBuilder app)
            {
                // Register the Hawk handler before Nancy, so requests
                // are authenticated before Nancy sees them.
                app.UseHawkAuthentication(new HawkAuthenticationOptions
                {
                    Credentials = (id) =>
                    {
                        return new HawkCredential
                        {
                            Id = "dh37fgj492je",
                            Key = "werxhqb98rpaxn39848xrunpaw3489ruxnpa98w4rxn",
                            Algorithm = "hmacsha256",
                            User = "steve"
                        };
                    }
                });

                app.UseNancy();
            }
        }

This code registers the Hawk authentication handler on top of the OWIN pipeline, so it will try to authenticate the calls before the request messages are passed over to NancyFx. The authentication handlers in Katana set the user principal in the OWIN environment using the key "server.User". The following code shows how you can get that principal in a NancyFx module:

        using System.Collections.Generic;
        using System.Security.Claims;
        using Nancy;
        using Nancy.Owin;

        public class HomeModule : NancyModule
        {
            public HomeModule()
            {
                Get["/"] = x =>
                {
                    // Pull the raw OWIN environment out of the Nancy context
                    var env = (IDictionary<string, object>)Context.Items[NancyOwinHost.RequestEnvironmentKey];

                    if (!env.ContainsKey("server.User") || env["server.User"] == null)
                    {
                        return HttpStatusCode.Unauthorized;
                    }

                    var identity = (ClaimsPrincipal)env["server.User"];
                    return "Hello " + identity.Identity.Name;
                };
            }
        }

Thanks to OWIN, you don't need to know any details of how these cross-cutting concerns are implemented in every possible web application framework.

    Read the article

  • Instant Rename and Rename Refactoring

    - by Petr
    During the last few weeks I have received a few questions about rename refactoring, and some users have also complained to me that the refactoring in NetBeans 6.x was much faster. So I would like to explain the situation.

For those who don't know, the Instant Rename action and Rename Refactoring can look like one action. But that's not true, even though both actions use the same shortcut (CTRL+R). NetBeans 6.x contained only the Instant Rename action (speaking about PHP support), which we can describe as a very simple rename refactoring within one file. From NetBeans 7.0 the Instant Rename action works only in a "non-public" context. This means the action is used for quickly renaming variables that have a local context, like inside a method, or for renaming private methods and fields that cannot be used outside the scope where they are declared.

From the user's point of view these two actions can be easily recognized. When CTRL+R invokes the Instant Rename action, the identifier is surrounded with a rectangle and you can rename it directly in the file. It's fast and simple, and the usages of the identifier are renamed at the same time as you type. The picture below shows the Instant Rename action for the $message identifier, which is visible only in the print_test method, and that is why CTRL+R calls Instant Rename.

NetBeans 7.0 added Rename Refactoring, which is called for public identifiers -- identifiers that could be used in other files. If you press the CTRL+R shortcut when the caret is inside the $hello identifier from the picture above, NetBeans recognizes that $hello is declared / used in a global context and calls the Rename Refactoring, which brings up a dialog to change the name of the identifier. From this dialog you have to preview the suggested changes, by pressing the Preview button, and then execute the refactoring through the Do Refactoring button. Yes, it's more complicated from the user's point of view than Instant Rename, but with Rename Refactoring NetBeans can change more files at once. It should be the developer's responsibility to decide whether the suggested changes are right and the refactoring can be executed, or whether in some files the original name should be kept.

Someone could argue that he doesn't use the $hello variable in any other file, so Instant Rename could be used in such a case. Yes, that's true, but in that case NetBeans would have to know all usages of all identifiers and keep this information up to date while you edit a file. I'm sure that this is not possible, due to performance problems, mainly for big projects. So the usages are computed after pressing the Preview button.

And why is the Refactor button always disabled in the Rename dialog, so that the user always has to go through the preview phase? NetBeans has an API and SPI for implementing refactoring actions, and this dialog is part of that infrastructure. If you rename an identifier in, for example, Java, the Refactor button is enabled, but Java is a strongly typed language and you can be almost 99% sure that the IDE will suggest the right results. In PHP, as a dynamic language, we cannot be sure; what NetBeans finds is only a "guess". This is why NetBeans pushes developers to preview the changes for a PHP rename.

I hope that I have explained it clearly. I'm open to any discussion. What I have described above is the situation in NetBeans 7.0 and 7.0.1, and probably it will also be in NetBeans 7.1, because there is no plan to change it. Please write your opinion here.

    Read the article

  • Best Practices for Handing over Legacy Code

    - by PersonalNexus
    In a couple of months a colleague will be moving on to a new project and I will be inheriting one of his projects. To prepare, I have already ordered Michael Feathers' Working Effectively with Legacy Code. But this book, as well as most questions on legacy code I have found so far, is concerned with the case of inheriting code as-is. In this case I actually have access to the original developer and we do have some time for an orderly hand-over.

Some background on the piece of code I will be inheriting:

- It's functioning: There are no known bugs, but as performance requirements keep going up, some optimizations will become necessary in the not too distant future.
- Undocumented: There is pretty much zero documentation at the method and class level. What the code is supposed to do at a higher level, though, is well understood, because I have been writing against its API (as a black box) for years.
- Only higher-level integration tests: There are only integration tests testing proper interaction with other components via the API (again, black box).
- Very low-level, optimized for speed: Because this code is central to an entire system of applications, a lot of it has been optimized several times over the years and is extremely low-level (one part has its own memory manager for certain structs/records).
- Concurrent and lock-free: While I am very familiar with concurrent and lock-free programming and have actually contributed a few pieces to this code, this adds another layer of complexity.
- Large codebase: This particular project is more than ten thousand lines of code, so there is no way I will be able to have everything explained to me.
- Written in Delphi: I'm just going to put this out there, although I don't believe the language to be germane to the question, as I believe this type of problem to be language-agnostic.

I was wondering how the time until his departure would best be spent. Here are a couple of ideas:

- Get everything to build on my machine: Even though everything should be checked into source code control, who hasn't forgotten to check in a file once in a while? This should probably be the first order of business.
- More tests: While I would like more class-level unit tests, so that any bugs I introduce when making changes can be caught early on, the code as it is now is not testable (huge classes, long methods, too many mutual dependencies).
- What to document: I think for starters it would be best to focus documentation on those areas in the code that would otherwise be difficult to understand, e.g. because of their low-level/highly optimized nature. I am afraid there are a couple of things in there that might look ugly and in need of refactoring/rewriting, but are actually optimizations that have been put in there for a good reason that I might miss (cf. Joel Spolsky, Things You Should Never Do, Part I).
- How to document: I think some class diagrams of the architecture and sequence diagrams of critical functions, accompanied by some prose, would be best.
- Who should document: I was wondering what would be better: to have him write the documentation, or to have him explain it to me so that I can write it. I am afraid that things that are obvious to him but not to me would otherwise not be covered properly.
- Refactoring using pair programming: This might not be possible due to time constraints, but maybe I could refactor some of his code to make it more maintainable while he is still around to provide input on why things are the way they are.

Please comment on and add to this. Since there isn't enough time to do all of this, I am particularly interested in how you would prioritize.

    Read the article

  • Monitoring the Application alongside SQL Server

    - by Tony Davis
    Sometimes, on Simple-Talk, it takes a while to spot strange and unexpected patterns of user activity, or small bugs. For example, one morning we spotted that an article's comment count had leapt to 1485, but that only four were displayed. With some rooting around in Google Analytics, and the endlessly annoying Community Server admin interface, we were able to work out that a few days previously the article had been subject to a spam attack, and that the comment count was for some reason including both accepted and unaccepted comments (which in turn uncovered a bug in the SQL).

This sort of incident made us a lot keener on monitoring Simple-Talk website usage more effectively. However, the metrics we wanted are troublesome, because they are far too specific for Google Analytics to measure, and the SQL Server backend doesn't keep sufficient information to enable us to plot trends. The latter could provide, for example, the total number of comments made on, or votes cast for, articles over all time, but not the number that occur by hour over a set period. We lacked a baseline, in other words. We couldn't alter the database, as it is a bought-in package, and we had neither the resources nor the inclination to build in dedicated application monitoring.

Possibly, we could have investigated a third-party tool to do the job; but then it occurred to us that we were already using a monitoring tool (SQL Monitor) to keep an eye on the database. It stored data, made graphs and sent alerts. Could we get it to monitor some aspects of the application as well? Of course, SQL Monitor's single purpose is to check and monitor SQL Server, over time, rather than to monitor applications that use SQL Server. However, how different is the business of gathering and plotting SQL Server wait stats from gathering and plotting various aspects of user activity on the site? Not a lot, it turns out. The latest version allows us to write our own custom monitoring scripts, meaning that we could now monitor any metric in the application that returns an integer.

It took little time to write a simple SQL query that collects basic metrics: the total number of subscribers, votes cast, comments made, or views of articles, over time. The SQL Monitor database polls Simple-Talk every second or so in order to get the latest totals, and can then store and plot this information, or even correlate SQL Server usage to application usage. You can see the live data by visiting monitor.red-gate.com. Click the "Analysis" tab, select one of the "Simple-talk:" entries in the "Show" box, and choose an appropriate date range (e.g. last 30 days).

It's nascent, and we're still working on it, but it's already given us more confidence that we'll quickly spot trends, bugs, or bursts of 'abnormal' activity. If there is a sudden rise in comments, we get an alert, and if it's due to a spam attack, we can moderate or ban the perpetrator very quickly. We've often argued that a tool should perform a single job well rather than turn into a Swiss-army knife, but ironically we've rather appreciated being able to make best use of what's there anyway for a slightly different purpose. Is this a good or common practice? What do you think?

Cheers, Tony.
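As a postscript for the curious: a custom metric of the kind described above is just a query that returns a single integer for SQL Monitor to sample and plot. The sketch below is illustrative only; the table and column names (dbo.ArticleComments, IsApproved) are hypothetical stand-ins for the Community Server schema, not the actual query we run.

        -- A minimal custom-metric query: one integer out, nothing else.
        -- SQL Monitor samples this on a schedule and plots the values over time.
        SELECT COUNT(*) AS ApprovedCommentTotal
        FROM dbo.ArticleComments
        WHERE IsApproved = 1;

Tracking approved and unapproved comments as two separate metrics would, incidentally, have made the spam attack described at the start of this post visible almost immediately.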

    Read the article

  • SQLAuthority News – Follow up on – Replace a Column Name in Multiple Stored Procedure all together

    - by pinaldave
    Last month I had a fantastic time with lots of puzzles and brain teasers; the amount of participation I received on the blog is indeed inspiring me to write more. One of the blog posts was about how to replace a column name in all the stored procedures. The article had a very interesting conversation as a follow-up. Please read the original article, Replace a Column Name in Multiple Stored Procedure all together, before reading this blog further, as they are connected.

Let us start with a few of the interesting comments.

SQL Server expert Imran Mohammed left a wonderful and excellent first note. I suggest all of you read it. Imran stresses data modelling and logical as well as physical design. Developers must create a logical design and get approval on naming conventions, data types, references, constraints, indexes, etc. He further suggested that one should not cut steps but must follow all the industry standards and guidelines. He extended my blog post with the following note:

"Extending Pinal's answer, what you can do is go to database properties, all tasks, scripts objects, in scripting wizard select all the stored procedure for which you want to change column name, export the query to a new window and then do find and replace, all in once window and execute the script. But make sure you check what you are replacing, sometimes column names are also used in table names, for ex: Table Name: Product and Column Name: ProductId, ProductName."

Thanks Imran, great points!

Gatej Alexandru suggested that it is not a good idea to DROP or CREATE, but rather to use ALTER, as it is quite possible there may be permission issues as well. Very good point; let me see if I can write a blog post about it.

Vinay Kumar and SQLStudent144 proposed another method to achieve the same. I am combining their solutions and writing them here.

Step 1. Press Ctrl+T or change to "Results to Text" mode.

Step 2. Execute the command below, where schema.objectname is the object or table you are searching for:

        SELECT 'EXEC sp_helptext [' + referencing_schema_name + '.'
               + referencing_entity_name + ']'
        FROM sys.dm_sql_referencing_entities('schema.objectname', 'OBJECT')

Step 3. Copy the result and paste it into a new window. Again press Ctrl+T or change to "Results to Text" mode.

Step 4. Execute the query and copy the result.

Step 5. Paste the result into a new window.

Step 6. Now find the text you are searching for in the script, make the necessary changes, and execute the script. Do not forget to remove any lines generated in the result set that are not valid T-SQL.

Digitqr suggests we can do this for other objects besides stored procedures as well. Iosif suggests using the SQL Search tool from Red Gate.

I guess this sums it up well. We have an alternative perspective on our original issue of replacing a column name in multiple stored procedures.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
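As a postscript: before running any of the generated scripts, it can help to list every module whose definition mentions the column in the first place. The sketch below uses sys.sql_modules for that; the column name OldColumnName is a placeholder, and this is offered as a complementary check rather than as part of the method above.

        -- Find every stored procedure, function, view, or trigger whose
        -- definition contains the column name we intend to replace.
        SELECT OBJECT_SCHEMA_NAME(object_id) + '.' + OBJECT_NAME(object_id) AS ModuleName
        FROM sys.sql_modules
        WHERE definition LIKE '%OldColumnName%';

Keep in mind that a plain LIKE match will also pick up comments and similarly named identifiers, which is exactly the kind of trap Imran warns about above, so review the matches before replacing anything.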

    Read the article

  • SQL SERVER – A Puzzle – Swap Value of Column Without Case Statement

    - by pinaldave
    For the last few weeks, I have been doing Friday Puzzles and I am really loving it. Yesterday I received a very interesting question from Navneet Chaurasia on my Facebook Page. He was asked this question in a job interview. Please read the original thread for a complete idea of the conversation. I am presenting the same question here.

Puzzle

Let us assume there is a single column in the table called Gender. The challenge is to write a single update statement which will flip or swap the value in the column. For example, if the value in the gender column is 'male', swap it with 'female', and if the value is 'female', swap it with 'male'.

Here is the quick setup script for the puzzle:

        USE tempdb
        GO
        CREATE TABLE SimpleTable (ID INT, Gender VARCHAR(10))
        GO
        INSERT INTO SimpleTable (ID, Gender)
        SELECT 1, 'female'
        UNION ALL
        SELECT 2, 'male'
        UNION ALL
        SELECT 3, 'male'
        GO
        SELECT * FROM SimpleTable
        GO

The above query will return the following result set. The puzzle is to write a single UPDATE statement which will generate the following result set.

There are multiple answers to this simple puzzle. Let me show you three different ways. I am assuming that the column will have either the value 'male' or 'female' only.

Method 1: Using a CASE Statement

I believe this is going to be the most popular solution, as we are all familiar with the CASE statement.

        UPDATE SimpleTable
        SET Gender = CASE Gender WHEN 'male' THEN 'female' ELSE 'male' END
        GO
        SELECT * FROM SimpleTable
        GO

Method 2: Using the REPLACE Function

I totally understand it is not the cleanest solution, but it will certainly work in the given situation.

        UPDATE SimpleTable
        SET Gender = REPLACE(('fe' + Gender), 'fefe', '')
        GO
        SELECT * FROM SimpleTable
        GO

Method 3: Using IIF in SQL Server 2012

If you are using SQL Server 2012, you can use IIF and get the same effect as a CASE statement.

        UPDATE SimpleTable
        SET Gender = IIF(Gender = 'male', 'female', 'male')
        GO
        SELECT * FROM SimpleTable
        GO

You can read my article series on various SQL Server 2012 functions over here:

SQL SERVER – Denali – Logical Function – IIF() – A Quick Introduction
SQL SERVER – Detecting Leap Year in T-SQL using SQL Server 2012 – IIF, EOMONTH and CONCAT Function

Let us clean up:

        DROP TABLE SimpleTable
        GO

Question to you: I came up with three simple tricks where a single UPDATE statement swaps the values in the column. Do you know any other simple trick? If yes, please post it here in the comments. I will pick two random winners from all the valid answers. Winners will get 1) a print copy of SQL Server Interview Questions and Answers and 2) a free learning code for online video courses. I will announce the winners on the coming Monday.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: CodeProject, PostADay, SQL, SQL Authority, SQL Interview Questions and Answers, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
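As a postscript, in the spirit of the question above, here is one more variation: a sentinel-based triple REPLACE, which works because 'female' contains 'male'. This is an editorial sketch, not one of the original puzzle answers, and the '#' sentinel assumes that character never appears in the column.

        -- Swap via a sentinel: 'male' -> '#', then 'fe#' -> 'male', then '#' -> 'female'.
        UPDATE SimpleTable
        SET Gender = REPLACE(REPLACE(REPLACE(Gender, 'male', '#'), 'fe#', 'male'), '#', 'female')
        GO
        SELECT * FROM SimpleTable
        GO

Tracing it by hand: 'male' becomes '#', is untouched by the second REPLACE, and ends as 'female'; 'female' becomes 'fe#', then 'male', and the final REPLACE leaves it alone.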

    Read the article

< Previous Page | 462 463 464 465 466 467 468 469 470 471 472 473  | Next Page >