Search Results

Search found 574 results on 23 pages for 'iniquities of evil men'.

Page 22 of 23

  • File copying utility like rsync with error handling like ddrescue, for data recovery from a hard drive with bad sectors or hardware failure

    - by purefusion
    I have a hard drive with either bad blocks or sectors that are failing to read due to potential mechanical issues, such as a bad disk head, bad motor, or some other issue that is causing the hard drive to read data excruciatingly slowly and with lots of read errors. I'm seeing an average of 50 KB/sec, with some reads dropping below 10 KB/sec, and frequently it gets stuck on a file or sector altogether, usually for quite a long time—from 2-10 minutes or more (when using rsync, before it times out). Speed seems to vary wildly, and it gets stuck on files a lot, and when it finally gets "unstuck" it only seems to last for a short burst before it gets stuck again. The drive is also very quiet with only an occasional sound of files copying (usually when it gets stuck/unstuck for a brief time, before getting stuck again). Thus, there are none of those evil sounds that are normally associated with HDD death. Someone suggested that the problems sounded like they might be caused by a misaligned disk head, which requires a lot of re-reads before it finally reads data with success. Sounds plausible, but I digress...

    Anyway, the problem with rsync is that it seems to have no decent error handling support. Obviously, it wasn't meant for use in recovering data from failing hard drives, but all the so-called "data recovery" utilities out there that are meant for such use usually focus on recovery of deleted files or messed up partitions, rather than copying files off dying hard drives. Deleted file recovery is not what I need, obviously, so perhaps you can understand my disappointment in not being able to find what I'm after yet.

    Naturally, this is where you'd probably say "You should use ddrescue!" Well, that's all fine and dandy, but I've already got most of the data backed up, so I just want to recover certain files. I'm not concerned with trying to recover a full partition block-by-block as ddrescue does. I am only interested in rescuing just specific files and directories.

    Ideally, what I'd like is some sort of cross between rsync and ddrescue: something that lets me specify source and destination as directories of normal files like rsync (rather than two full partitions as ddrescue requires), with a way to skip files with errors in an initial run, and then allows me to attempt recovery of those files with errors in a later run (with a slightly altered command, of course), perhaps even offering an option to specify the number of retry attempts ...just like how ddrescue works with blocks, only I want a utility that works with specific files/directories like rsync does.

    So am I daydreaming here, or does something out there exist that can do this? Or, maybe even a way to make rsync or ddrescue work in such a way? I'm really open to whatever solutions might work, so long as they let me choose which files I want to "rescue", and can skip files with errors in the initial run, and try/retry those errors again later.

    So far I've tried rsync with the following options, but it often gets stuck on a file for longer than the timeout, and ideally I'd just like it to move on to the next file and come back later to the files it gets stuck on. I don't think that's possible though. Anyway, here's what I've been using up till now:

    rsync -avP --stats --block-size=512 --timeout=600 /path/to/source/* /path/to/destination/
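    For what it's worth, one way to approximate this behavior is to drive GNU ddrescue per file from a small shell loop, so each file gets its own map file and only the bad spots are retried on later passes. A rough sketch, assuming GNU ddrescue is installed and with illustrative paths:

    #!/bin/bash
    # First pass: copy every file with no retries (-r0); ddrescue records
    # unreadable areas in a per-file map so a later pass can resume there.
    src=/path/to/source
    dst=/path/to/destination

    find "$src" -type f -print0 | while IFS= read -r -d '' f; do
        rel=${f#"$src"/}
        mkdir -p "$dst/$(dirname "$rel")"
        ddrescue -r0 "$f" "$dst/$rel" "$dst/$rel.map"
    done

    A second pass over the same map files with, say, -r3 retries only the areas that failed, instead of re-copying everything.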

    Read the article

  • Adding the New HTML Editor Extender to a Web Forms Application using NuGet

    - by Stephen Walther
    The July 2011 release of the Ajax Control Toolkit includes a new, lightweight, HTML5 compatible HTML Editor extender. In this blog entry, I explain how you can take advantage of NuGet to quickly add the new HTML Editor control extender to a new or existing ASP.NET Web Forms application.

    Installing the Latest Version of the Ajax Control Toolkit with NuGet

    NuGet is a package manager. It enables you to quickly install new software directly from within Visual Studio 2010. You can use NuGet to install additional software when building any type of .NET application including ASP.NET Web Forms and ASP.NET MVC applications. If you have not already installed NuGet, you can do so by navigating to the following address and clicking the giant install button: http://nuget.org/

    After you install NuGet, you can add the Ajax Control Toolkit to a new or existing ASP.NET Web Forms application by selecting the Visual Studio menu option Tools, Library Package Manager, Package Manager Console. Selecting this menu option opens the Package Manager Console. You can enter the command Install-Package AjaxControlToolkit in the console to install the Ajax Control Toolkit. After you install the Ajax Control Toolkit with NuGet, your application will include an assembly reference to the AjaxControlToolkit.dll and SanitizerProviders.dll assemblies.

    Furthermore, your Web.config file will be updated to contain a new tag prefix for the Ajax Control Toolkit controls:

    <configuration>
      <system.web>
        <compilation debug="true" targetFramework="4.0" />
        <pages>
          <controls>
            <add tagPrefix="ajaxToolkit" assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" />
          </controls>
        </pages>
      </system.web>
    </configuration>

    The configuration file installed by NuGet adds the prefix ajaxToolkit for all of the Ajax Control Toolkit controls. You can type ajaxToolkit: in Source view to get auto-complete. You can, of course, change this prefix to anything you want.

    Using the HTML Editor Extender

    After you install the Ajax Control Toolkit, you can use the HTML Editor Extender with the standard ASP.NET TextBox control to enable users to enter rich formatting such as bold, underline, italic, different fonts, and different background and foreground colors. For example, the following page can be used for entering comments. The page contains a standard ASP.NET TextBox, Button, and Label control. When you click the button, any text entered into the TextBox is displayed in the Label control. It is a pretty boring page. Let's make this page fancier by extending the standard ASP.NET TextBox with the HTML Editor extender control. Notice that the ASP.NET TextBox now has a toolbar which includes buttons for performing various kinds of formatting. For example, you can change the size and font used for the text. You also can change the foreground and background color – and make many other formatting changes. You can customize the toolbar buttons which the HTML Editor extender displays.
    To learn how to customize the toolbar, see the HTML Editor Extender sample page here: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/HTMLEditorExtender/HTMLEditorExtender.aspx

    Here's the source code for the ASP.NET page:

    <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="WebApplication1.Default" %>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head runat="server">
        <title>Add Comments</title>
    </head>
    <body>
        <form id="form1" runat="server">
        <div>
            <ajaxToolkit:ToolkitScriptManager ID="TSM1" runat="server" />
            <asp:TextBox ID="txtComments" TextMode="MultiLine" Columns="50" Rows="8" Runat="server" />
            <ajaxToolkit:HtmlEditorExtender ID="hee" TargetControlID="txtComments" Runat="server" />
            <br /><br />
            <asp:Button ID="btnSubmit" Text="Add Comment" Runat="server" onclick="btnSubmit_Click" />
            <hr />
            <asp:Label ID="lblComment" Runat="server" />
        </div>
        </form>
    </body>
    </html>

    Notice that the page above contains 5 controls: a standard ASP.NET TextBox, Button, and Label control, plus an Ajax Control Toolkit ToolkitScriptManager control and an HtmlEditorExtender control. The HTML Editor extender control extends the standard ASP.NET TextBox control; its TargetControlID attribute points at the TextBox control.

    Here's the code-behind for the page above:

    using System;

    namespace WebApplication1
    {
        public partial class Default : System.Web.UI.Page
        {
            protected void btnSubmit_Click(object sender, EventArgs e)
            {
                lblComment.Text = txtComments.Text;
            }
        }
    }

    Preventing XSS/JavaScript Injection Attacks

    If you use an HTML Editor -- any HTML Editor -- in a public facing web page then you are opening your website up to Cross-Site Scripting (XSS) attacks. An evil hacker could submit HTML using the HTML Editor which contains JavaScript that steals private information such as other users' passwords. Imagine, for example, that you create a web page which enables your customers to post comments about your website. Furthermore, imagine that you decide to redisplay the comments so every user can see them. In that case, a malicious user could submit JavaScript which displays a dialog asking for a user name and password. When an unsuspecting customer enters their secret password, the script could transfer the password to the hacker's website.

    So how do you accept HTML content without opening your website up to JavaScript injection attacks? The Ajax Control Toolkit HTML Editor supports the Anti-XSS library. You can use the Anti-XSS library to sanitize any HTML content. The Anti-XSS library, for example, strips away all JavaScript automatically. You can download the Anti-XSS library from NuGet. Open the Package Manager Console and execute the command Install-Package AntiXSS. Adding the Anti-XSS library to your application adds two assemblies to your application named AntiXssLibrary.dll and HtmlSanitizationLibrary.dll.
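    If you also persist the submitted HTML, it can be worth sanitizing it in code as well; a minimal sketch using the Anti-XSS library's Sanitizer class (namespace Microsoft.Security.Application, from HtmlSanitizationLibrary.dll), offered as an illustration rather than the post's own code:

    using System;
    using Microsoft.Security.Application;

    namespace WebApplication1
    {
        public partial class Default : System.Web.UI.Page
        {
            protected void btnSubmit_Click(object sender, EventArgs e)
            {
                // Strips <script> tags, event handler attributes and other
                // dangerous markup before the input is redisplayed or stored.
                lblComment.Text = Sanitizer.GetSafeHtmlFragment(txtComments.Text);
            }
        }
    }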
    After you install the Anti-XSS library, you can configure the HTML Editor extender to use it in your application's web.config file:

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <configSections>
        <sectionGroup name="system.web">
          <section name="sanitizer" requirePermission="false" type="AjaxControlToolkit.Sanitizer.ProviderSanitizerSection, AjaxControlToolkit"/>
        </sectionGroup>
      </configSections>
      <system.web>
        <sanitizer defaultProvider="AntiXssSanitizerProvider">
          <providers>
            <add name="AntiXssSanitizerProvider" type="AjaxControlToolkit.Sanitizer.AntiXssSanitizerProvider"></add>
          </providers>
        </sanitizer>
        <compilation debug="true" targetFramework="4.0" />
        <pages>
          <controls>
            <add tagPrefix="ajaxToolkit" assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" />
          </controls>
        </pages>
      </system.web>
    </configuration>

    Summary

    In this blog entry, I described how you can quickly get started using the new HTML Editor extender – included with the July 2011 release of the Ajax Control Toolkit – by installing the Ajax Control Toolkit with NuGet. If you want to learn more about the HTML Editor then please take a look at the Ajax Control Toolkit sample site: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/HTMLEditorExtender/HTMLEditorExtender.aspx

    Read the article

  • HttpContext.Items and Server.Transfer/Execute

    - by Rick Strahl
    A few days ago my buddy Ben Jones pointed out that he ran into a bug in the ScriptContainer control in the West Wind Web and Ajax Toolkit. The problem was basically that when a Server.Transfer call was applied, the script container (and also various ClientScriptProxy script embedding routines) would potentially fail to load the specified scripts. It turns out the problem is due to the fact that the various components in the toolkit use request specific singletons via a Current property. I use a static Current property tied to a Context.Items[] entry to handle this type of operation, which looks something like this:

    /// <summary>
    /// Current instance of this class which should always be used to
    /// access this object. There are no public constructors to
    /// ensure the reference is used as a Singleton to further
    /// ensure that all scripts are written to the same clientscript
    /// manager.
    /// </summary>
    public static ClientScriptProxy Current
    {
        get
        {
            if (HttpContext.Current == null)
                return new ClientScriptProxy();

            ClientScriptProxy proxy = null;
            if (HttpContext.Current.Items.Contains(STR_CONTEXTID))
                proxy = HttpContext.Current.Items[STR_CONTEXTID] as ClientScriptProxy;
            else
            {
                proxy = new ClientScriptProxy();
                HttpContext.Current.Items[STR_CONTEXTID] = proxy;
            }

            return proxy;
        }
    }

    The proxy is attached to a Context.Items[] item which makes the instance Request specific. This works perfectly fine in most situations EXCEPT when you're dealing with Server.Transfer/Execute requests. Server.Transfer doesn't cause Context.Items to be cleared, so both the current transferred request and the original request's Context.Items collection apply. For the ClientScriptProxy this causes a problem because script references are tracked on a per request basis in Context.Items to check for script duplication. Once a script is rendered, an ID is written into the Context collection and it is considered 'rendered':

    // No dupes - ref script include only once
    if (HttpContext.Current.Items.Contains( STR_SCRIPTITEM_IDENTITIFIER + fileId ) )
        return;

    HttpContext.Current.Items.Add(STR_SCRIPTITEM_IDENTITIFIER + fileId, string.Empty);

    where the fileId is the script name or unique identifier. The problem is that on the transferred page the item will already exist in Context and so fail to render, because it thinks the script has already rendered based on the Context item. Bummer.

    The workaround for this is simple once you know what's going on, but in this case it was a bitch to track down because the context items are used in many places throughout this class. The trick is to determine when a request is transferred and then remove the specific keys. The first step is to determine if a script is in a Transfer or Execute call:

    if (HttpContext.Current.CurrentHandler != HttpContext.Current.Handler)

    Context.Handler is the original handler and CurrentHandler is the actual currently executing handler that is running when a Transfer/Execute is active. You can also use Context.PreviousHandler to get the last handler, and chain through the whole list of handlers applied if Transfer calls are nested (dog help us all for the person debugging that).

    For the ClientScriptProxy the full logic to check for a transfer and remove the code looks like this:

    /// <summary>
    /// Clears all the request specific context items which are script references
    /// and the script placement index.
    /// </summary>
    public void ClearContextItemsOnTransfer()
    {
        if (HttpContext.Current != null)
        {
            // Check for Server.Transfer/Execute calls - we need to clear out Context.Items
            if (HttpContext.Current.CurrentHandler != HttpContext.Current.Handler)
            {
                List<string> Keys = HttpContext.Current.Items.Keys
                    .Cast<string>()
                    .Where(s => s.StartsWith(STR_SCRIPTITEM_IDENTITIFIER) ||
                                s == STR_ScriptResourceIndex)
                    .ToList();

                foreach (string key in Keys)
                {
                    HttpContext.Current.Items.Remove(key);
                }
            }
        }
    }

    along with a small update to the Current property getter that sets a global flag to indicate whether the request was transferred:

    if (!proxy.IsTransferred &&
        HttpContext.Current.Handler != HttpContext.Current.CurrentHandler)
    {
        proxy.ClearContextItemsOnTransfer();
        proxy.IsTransferred = true;
    }

    return proxy;

    I know this is pretty ugly, but it works and it's actually minimal fuss without affecting the behavior of the rest of the class. Ben had a different solution that involved explicitly clearing out the Context items and replacing the collection with a manually maintained list of items, which also works but required changes throughout the code. In hindsight, it would have been better to use a single object that encapsulates all the 'persisted' values and store that object in Context instead of all these individual small morsels. Hindsight is always 20/20 though :-}.

    If possible use Page.Items

    ClientScriptProxy is a generic component that can be used from anywhere in ASP.NET, so there are various methods that are not Page specific on this component, which is why I used Context.Items rather than the Page.Items collection. Page.Items would be a better choice since it sidesteps the above Server.Transfer nightmares: the Page is reloaded completely, so any new Page gets a new Items collection. No fuss there.

    So for the ScriptContainer control, which has to live on the page, the behavior is a little different. It is attached to Page.Items (since it's a control):

    /// <summary>
    /// Returns a current instance of this control if an instance
    /// is already loaded on the page. Otherwise a new instance is
    /// created, added to the Form and returned.
    ///
    /// It's important this function is not called too early in the
    /// page cycle - it should not be called before Page.OnInit().
    ///
    /// This property is the preferred way to get a reference to a
    /// ScriptContainer control that is either already on a page
    /// or needs to be created. Controls in particular should always
    /// use this property.
    /// </summary>
    public static ScriptContainer Current
    {
        get
        {
            // We need a context for this to work!
            if (HttpContext.Current == null)
                return null;

            Page page = HttpContext.Current.CurrentHandler as Page;
            if (page == null)
                throw new InvalidOperationException(Resources.ERROR_ScriptContainer_OnlyWorks_With_PageBasedHandlers);

            ScriptContainer ctl = null;

            // Retrieve the current instance
            ctl = page.Items[STR_CONTEXTID] as ScriptContainer;
            if (ctl != null)
                return ctl;

            ctl = new ScriptContainer();
            page.Form.Controls.Add(ctl);
            return ctl;
        }
    }

    The biggest issue with this approach is that you have to explicitly retrieve the page in the static Current property. Notice again the use of CurrentHandler (rather than Handler, which was my original implementation) to ensure you get the latest page, including the one that Server.Transfer fired.

    Server.Transfer and Server.Execute are Evil

    All that said – this fix is probably for the 2 people who are crazy enough to rely on Server.Transfer/Execute. :-}
    There are so many weird behavior problems with these commands that I avoid them at all costs. I don't think I have a single application that uses either of these commands…

    Related Resources: Full source of ClientScriptProxy.cs (repository) | Part of the West Wind Web Toolkit | Static Singletons for ASP.NET Controls

    Post © Rick Strahl, West Wind Technologies, 2005-2010 | Posted in ASP.NET

    Read the article

  • I’m sorry RPGs, it’s not you, it’s me: The birth of my game idea

    - by George Clingerman
    One of the things I’ve had to give up in order to have some development time at night is gaming. It’s something I refused to admit for years but I’ve just had to face the facts. I’m no longer a gamer. I just don’t have hours and hours of free time to pour into gaming, and when I do have hours and hours of free time I want to pour them into game development. That doesn’t mean I don’t game at all! I play games pretty much every day. It just means I’ve moved more into the casual game realm. It’s all I have time for when juggling priorities in my life. That means that games like Gears of War 2 sit shrink wrapped on my shelf, and although I popped Dragon Age into my Xbox 360 one time, I barely made it through the opening sequence and haven’t had time to sit down and play again. Instead I’m playing short games like Jamestown, Atom Zombie Smasher, Fortix, or if I have time to jump in and play a few rounds, maybe some Monday Night Combat or Team Fortress 2. These are games I can instantly get into and play for just a short period of time and then walk away.

    Breath of Death VII saved my life:

    Back in the day (way, way back in the day) I used to be a pretty big RPG fan. Not big by a lot of RPG gamers' standards (most of the RPGs RPG fans rave about I’ve never heard of) but I used to LOVE to play them on the NES, SNES and Genesis and considered that my genre. Final Fantasy, Shining in the Darkness, Bard’s Tale, Faxanadu, Shadowrun, Ultima, Dragon Warrior, Chrono Trigger, Phantasy Star, Shining Force, and well, the list could go on but those are the ones I remember off the top of my head. I loved playing RPGs and they were my games of choice. After my first son was born (this was just about 12 years ago), I tried to continue playing RPGs and purchased games like Baldur’s Gate I & II, Neverwinter Nights, Fable, then a few of the Final Fantasys, then Kingdom Hearts. I kept buying these games and then only playing for about fifteen minutes and never getting back to them. I still loved RPGs but they just no longer fit into my life (I still haven’t accepted that, since I still purchased Dragon Age II for some reason and convinced myself I’d find the time). Adding three more sons to the mix (that’s 4 total) didn’t help much in finding more RPG time (except for Breath of Death VII and other XBLIG RPG titles, thanks guys!)

    All work and no RPG:

    A few months ago I was sitting thinking about the lack of RPGs in my life and talking to my wife about why I wish RPGs were different and easier for a dad like me to get into. She seemed like she was listening, so I started listing all the things that made them impossible for me to play. Here’s a short list I came up with:

    • They take 15 billion hours to complete.
    • I have a few minutes at a time I can grab to play them if I want to have time to code. At that rate it would take me 9 trillion years to beat just one RPG.
    • There’s such long spans of time between when I can play them that I forget what I was even doing, so I have to spend most of the playtime I have just figuring that out and then my play time is over. Repeat.
    • I’ll never finish one, and since it takes so long to get to the fun part in an RPG, I’m never having fun.
    • RPGs aren’t fun if you don’t have hours to play them at a time.

    As you can see based on my science and math, RPGs aren’t fun for me any more. From there my brain started toying around with ideas of RPGs that would work for me. It would have to be a short RPG, you know, one you could beat in a single play session. A dad-sized play session.
    I started thinking, wouldn’t it be awesome if there was a fifteen minute RPG? That got me laughing, and I took that as a good sign that it sounded fun, so I thought about it a little more. I immediately discarded the idea of doing a real RPG. I’m sure a short RPG like that could be done, but it wasn’t the vibe that I had in my head. No, this was going to be something that just had the core essence of an RPG. In reality what I’d be making would be more of an arcade style game. One with high scores and lots of crazy action on the screen. And that’s when it hit me. It would be a speed run RPG. That’s the basics of the game I’m working on.

    The Elevator Pitch:

    It’s a 2D top down RPG themed arcade game focused on speed. It sounds like an RPG, smells like an RPG, but it’s merely emulating an RPG. The game is focused on fun and mayhem in RPG form, with players leveling up in seconds instead of hours and rushing to finish quests as quickly as possible because they’ve only got fifteen minutes before EVIL overtakes the world. If the player takes longer than fifteen minutes, it’s game over man. One to four player co-operative play to really see just how fast players can level up and beat the game. Gamers will compete on leaderboards for bragging rights for fastest 1, 2, 3, and 4 player speed runs, lowest leveled characters to beat the game, highest leveled characters to beat the game, and so on. Times will be tracked for everything from how long a player sat distributing stats, equipping items, and talking to NPCs, to running around the level. These stats will be shown at the end of each quest/level so the players can work on improving their speed run for that part of the game next time around. It’s the perfect RPG for those of us who only have fifteen minutes of game time!

    Where I’m at:

    I’m still at the prototyping stage, attempting to put all the basic framework pieces in place that will at minimum give me one level to rush through. I’ve been working on this prototype for about a month now though, so I’m going to have to step it up a bit or I’m not going to get finished in time (remember I’ve only got 85 days left!) Lots of the game code is in place (although pretty sloppy), but I still can’t play through that first quest/level just yet. That’s my goal to finish up by the end of next Sunday (3/25/2012). You can all hold me to that and cheer me on or heckle me throughout the week. Either way that should help me stay a bit more motivated and focused. In my head this feels like it’s going to be a fun game, so I’m looking forward to seeing how it actually plays!

    Read the article

  • Cutting edge technology, a lone Movember ranger and a 5-a-side football club ...meet the team at Oracle’s Belfast Offices.

    - by user10729410
    By Olivia O’Connell

    To see what’s in store at Oracle’s next Open Day, which comes to Belfast this week, I visited the offices with some colleagues to meet the team and get a feel for what‘s in store on November 29th. After being warmly greeted by Frances and Francesca, who make sure Front of House and Facilities run smoothly, we embarked on a quick tour of the two floors Oracle occupies, led by VP Bo. Then it was time to seek out some willing volunteers to be interviewed and photographed - what a shy bunch! A bit of coaxing from the social media team was needed here!

    In a male-dominated environment, the few women on the team caught my eye immediately. I got chatting to Susan, a business analyst, and Bronagh, a tech writer. It becomes clear during our chat that the male/female divide is not an issue. “Everyone here just gets on with the job,” says Susan. “We’re all around the same age and have similar priorities, and luckily everyone is really friendly so there are no problems.” A graduate of Queen’s University in Belfast majoring in maths & computer science, Susan works closely with product management and the development teams to ensure that the final project delivered to clients meets and exceeds their expectations. Bronagh, who joined us after working for a tech company in Montreal and gaining her post-grad degree at University of Ulster, agrees that the work is challenging but “the environment is so relaxed and friendly”.
    Software developer David is taking the Movember challenge for the first time to raise vital funds and awareness for men’s health. Like other colleagues in the office, he is a University of Ulster graduate and works on Reference applications and Merchandising Tools which enable customers to establish e-shops using Oracle technologies. The social activities are headed up by Gordon, a software engineer on the commerce team who joined 4 years ago after graduating from the University of Strathclyde in Glasgow with a degree in Computer Science.

    Everyone is unanimous that the best things about working at Oracle’s Belfast offices are the casual, friendly environment and the opportunity to be at the cutting edge of technology. We’re looking forward to our next trip to Belfast for some cool demos and to meet candidates. And as for the camera-shyness? Look who came out to have their picture taken at the end of the day!

    The Oracle offices in Belfast are located on the 6th floor, Victoria House, Gloucester Street, Belfast BT1 4LS, UK.

    Open Day takes place on Thursday, 29th November, 4pm – 8pm. Visit the 5 Demo Stations to find out more about each team’s activities and projects to date.
    See live demos of the "Engaging the Customer", "Managing Your Store", "Helping the Customer", "Shopping on-line" and "The Commerce Experience" processes. The "Working @Oracle" stand will give you the chance to connect with our recruitment team and get information about the recruitment process and building your career path at Oracle. Register here.

    Read the article

  • Wishful Thinking: Why can't HTML fix Script Attacks at the Source?

    - by Rick Strahl
    The Web can be an evil place, especially if you're a Web Developer blissfully unaware of Cross Site Script Attacks (XSS). Even if you are aware of XSS in all of its insidious forms, it's extremely complex to deal with all the issues if you're taking user input and you're actually allowing users to post raw HTML into an application. I'm dealing with this again today in a Web application where legacy data contains raw HTML that has to be displayed and users ask for the ability to use raw HTML as input for listings.

    The first line of defense of course is: Just say no to HTML input from users. If you don't allow HTML input directly and use HTML Encoding (HttpUtility.HtmlEncode() in .NET, or the standard ASP.NET MVC output @Model.Content) you're fairly safe, at least from the HTML input provided. Both WebForms and Razor support HtmlEncoded content, although Razor makes it the default. In Razor the default @ expression syntax @Model.UserContent automatically produces HTML encoded content (safe by default) - you actually have to go out of your way to create raw HTML content using @Html.Raw() or the HtmlString class. In Web Forms (V4) you can use <%: Model.UserContent %> or, if you're using a version prior to 4.0, <%= HttpUtility.HtmlEncode(Model.UserContent) %>.

    This works great as a hedge against embedded <script> tags and HTML markup, as any HTML is turned into text that displays as HTML but doesn't render as HTML. But it turns any embedded HTML markup tags into plain text. If you need to display HTML in raw form with the markup tags rendering based on user input, this approach is worthless. If you do accept HTML input and need to echo the rendered HTML input back, cleaning up that HTML is a complex task.

    In the projects I work on, customers ask for the ability to post raw HTML quite frequently. Almost every app I've built where there's document content from users starts out with text-only input - possibly using something like MarkDown - but inevitably users want to just post plain old HTML they created in some other rich editing application. I see this a lot with realtors especially, who often want to reuse their postings easily in multiple places. In my work this is a common problem I need to deal with, and I've tried dozens of different methods, from sanitizing and simple rejection of input to custom markup schemes, none of which have ever felt comfortable to me. They work in a half assed, hacked together sort of way, but I always live in fear of missing something vital, which is *really easy to do*.

    My Wishlist Item: A <restricted> tag in HTML

    Let me dream here for a second on how to address this problem. It seems to me the easiest place where this can be fixed is: in the browser. Browsers are actually executing script code so they have a lot of control over the script code that resides in a page. What if there was a way to specify that you want to turn off script code for a block of HTML? The main issue when dealing with raw HTML input isn't that we as developers are unaware of the implications of user input, but the fact that we sometimes have to display raw HTML input the user provides. So the problem markup is usually isolated in only a very specific part of the document. So, what if we had a way to specify that in any given HTML block, no script code could execute, by wrapping it into a tag that disables all script functionality in the browser? This would include <script> tags and any document script attributes like onclick, onfocus etc.,
    and potentially also disallow things like iFrames that can be scripted from within the iFrame's target. I'd like to see something along these lines:

    <article>
        <restricted allowscripts="no" allowiframes="no">
            <div>Some content</div>
            <script>alert('go ahead make my day, punk!');</script>
            <div onfocus="$.getJson('http://evilsite.com/')">more content</div>
        </restricted>
    </article>

    A tag like this would basically disallow all script code from firing from any HTML that's rendered within it. You'd use this only on code that you actually render from your data, and only if you are dealing with custom data. So something like this:

    <article>
        <restricted>
            @Html.Raw(Model.UserContent)
        </restricted>
    </article>

    For browsers this would actually be easy to intercept. They render the DOM and control loading and execution of scripts that are loaded through it. All the browser would have to do is suspend execution of <script> tags and not hook up any event handlers defined via markup in this block. Given all the crazy XSS attacks that exist and the prevalence of this problem, this would go a long way towards preventing at least coded script attacks in the DOM. And it seems like a totally doable solution that wouldn't be very difficult for vendors to implement. There would also need to be some logic in the parser to not allow a </restricted> or <restricted> tag into the content, which would short-circuit the restricted section (per James Hart's comment). I'm sure there are other issues to consider as well that I didn't think of in my off-the-back-of-a-napkin concept here, but the idea overall seems worth consideration I think.

    Without code running in a user supplied HTML block it'd be pretty hard to compromise a local HTML document and pass information like Cookies to a server. Or even send data to a server, period. Short of an iFrame that can access the parent frame (which is another restriction that should be available on this <restricted> tag) that could potentially communicate back, there's not a lot a malicious site could do. The HTML could still 'phone home' via image links and href links, and basically say this site was accessed, but without the ability to run script code it would be pretty tough to pass along critical information to the server beyond that.

    Ahhhh… one can dream… Not holding my breath of course. The design by committee that is the W3C can't agree on anything in timeframes measured in less than decades, but maybe this is one place where browser vendors can actually step up the pressure. This is something in their best interest: reducing the attack surface for vulnerabilities on their browser platforms significantly. Several people commented on Twitter today that there isn't enough discussion on issues like this that address serious needs in the web browser space. Realistically, security has to be a number one concern with Web applications in general - there isn't a Web app out there that is not vulnerable. And yet nothing has been done to address these security issues, even though there might be relatively easy solutions to make this happen.
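    The closest mechanism browsers do ship is the HTML5 iframe sandbox attribute: scripts inside a sandboxed frame don't run unless allow-scripts is explicitly granted. It isn't the inline <restricted> block wished for here, but it is the same idea in frame form - a quick illustration:

    <article>
        <!-- No allow-scripts token on the sandbox, so the alert never fires. -->
        <iframe sandbox
                srcdoc="<div>Some content</div><script>alert('blocked')</script>">
        </iframe>
    </article>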
    It'll take time, and it's probably not going to happen in our lifetime, but maybe this rambling thought sparks some ideas on how this sort of restriction can get into browsers in some way in the future.

    © Rick Strahl, West Wind Technologies, 2005-2012 | Posted in ASP.NET, HTML5, HTML, Security

    Read the article

  • CodePlex Daily Summary for Friday, September 07, 2012

    Popular Releases

    - Umbraco CMS: Umbraco 4.9.0: What's new: The media section has been overhauled to support HTML5 uploads, just drag and drop files in, even multiple files are supported on any HTML5 capable browser. The folder content overview is also much improved, allowing you to filter it and perform common actions on your media items. The Rich Text Editor's "Media" button now uses an embedder based on the open oEmbed standard (if you're upgrading, enable the media button in the Rich Text Editor datatype settings and set TidyEditorConten...
    - menu4web: menu4web 0.4.1 - javascript menu for web sites: This release is for those who believe that global variables are evil. menu4web has been wrapped into the m4w singleton object. Added "Vertical Tabs" example which illustrates object notation.
    - WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.1: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For the compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit. Features: AsyncUI extensions, Controls and control extensions, Converters, Debugging helpers, Imaging, IO helpers, VisualTree helpers, Samples. Recent changes NOTE: Namespace changes DebugConsol...
    - iPDC - Free Phasor Data Concentrator: iPDC-v1.3.1: iPDC suite version 1.3.1, modifications and bugs fixed (from v1.3.0). New User Manual for iPDC-v1.3.1 available on the websites. Bug resolved: PMU Simulator TCP connection error and hung connection for the client (PDC). Now the PMU Simulator (server) can communicate with more than one PDC (clients) over TCP and UDP in parallel. The PMU Simulator now sends the exact data frames at the data rate set by the user. The PMU Simulator data rate has been verified by iPDC database entries and PMU Connection Tes...
    - Microsoft SQL Server Product Samples: Database: AdventureWorks OData Feed: The AdventureWorks OData service exposes resources based on specific SQL views. The SQL views are a limited subset of the AdventureWorks database that results in several consuming scenarios: CompanySales, Documents, ManufacturingInstructions, ProductCatalog, TerritorySalesDrilldown, WorkOrderRouting. How to install the sample: You can consume the AdventureWorks OData feed from http://services.odata.org/AdventureWorksV3/AdventureWorks.svc. You can also consume the AdventureWorks OData fe...
    - Desktop Google Reader: 1.4.6: Sorting feeds alphabetically is now optional (see preferences window)
    - DotNetNuke® Community Edition CMS: 06.02.03: Major Highlights: Fixed issue where mailto: links were not working when sending bulk email. Fixed issue where users did not see friendship relationships (problem is in 6.2, which does not show in the Versions Affected list above). Fixed the issue with cascade deletes in comments in CoreMessaging_Notification. Fixed UI issue when using a date field as a required profile property during user registration. Fixed error when running the product in debug mode. Fixed visibility issue when...
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.65: Fixed null-reference error in the build task constructor.
    - B INI Sharp Library: B INI Sharp Library v1.0.0.0 Final Realsed: The first release.
    - Active Social Migrator: ActiveSocialMigrator 1.0.0 Beta: Beta release for the Active Social Migration tool.
    - EntLib.com????????: ??????demo??-For SQL 2005-2008: EntLibShopping ???v3.0 - ??????demo??,?????SQL SERVER 2005/2008/2008 R2/2012 ??????。 ??(??)??????。 THANKS.
    - Sistem LPK Pemkot Semarang: Panduan Penggunaan Sistem LPK: A guide to using the LPK system application of the Semarang City Development Division.
    - Active Forums for DotNetNuke CMS: Active Forums 5.0.0 RC: RC release of Active Forums 5.0.
    - Droid Explorer: Droid Explorer 0.8.8.7 Beta: Bug in the display icon for apk's, will fix with next release. Added fallback icon if unable to get the image/icon from the Cloud Service. Removed some stale plugins that were either outdated or incomplete. Added handler for *.ab files for restoring backups. Added plugin to create device backups. Backups stored in %USERPROFILE%\Android Backups\%DEVICE_ID%\. Added custom folder icon for the android backups directory. Better error handling for installing an apk. Bug fixes for the Runn...
    - The Visual Guide for Building Team Foundation Server 2012 Environments: Version 1: --
    - Nearforums - ASP.NET MVC forum engine: Nearforums v8.5: Version 8.5 of Nearforums, the ASP.NET MVC Forum Engine. New features include: built-in search engine using Lucene.NET; flood control improvements; notifications improvements: sync option and mail body. View Roadmap for more details. webdeploy package sha1 checksum: 961aff884a9187b6e8a86d68913cdd31f8deaf83
    - WiX Toolset: WiX Toolset v3.6: WiX Toolset v3.6 introduces the Burn bootstrapper/chaining engine and support for Visual Studio 2012 and .NET Framework 4.5. Other minor functionality includes: WixDependencyExtension supports dependency checking among MSI packages; WixFirewallExtension supports more features of Windows Firewall; WixTagExtension supports Software Id Tagging; WixUtilExtension now supports recursive directory deletion; Melt simplifies pure-WiX patching by extracting .msi package content and updating .w...
    - Iveely Search Engine: Iveely Search Engine (0.2.0): ????ISE?0.1.0??,?????,ISE?0.2.0?????????,???????,????????20???follow?ISE,????,??ISE??????????,??????????,?????????,?????????0.2.0??????,??????????。 Iveely Search Engine ?0.2.0?????????“??????????”,??????,?????????,???????,???????????????????,????、????????????。???0.1.0????????????: 1. ??“????” ??。??????????,?????????,???????????????????。??:????????,????????????,??????????????????。??????。 2. ??“????”??。?0.1.0??????,???????,???????????????,?????????????,????????,?0.2.0?,???????...
    - GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail. Bugfix.
    - Smart Data Access layer: Smart Data Access Layer Ver 3: In this version support for executing inline queries is added. Check the Documentation section for details.

    New Projects

    - Adding 2013 Jewish Holidays for Outlook2003: Instruction: Copy the outlook.hol file to your computer where Outlook2003 is installed. Double click the file, choose "Israel" and continue. That's it.
    - Agilcont System: An accounting system for private companies, primarily for cash desks that work with cash in soles and dollars, and with gold.
    - ARB (A Request Broker): The idea is something like a Request Broker, but with some additional functionality.
    - BATTLE.NET - SDK: This SDK provides the ability to use the Battle.net (Blizzard) services for all supported games such as Diablo 3 and World of Warcraft.
    - Container Terminal System: Summary
    - Declarative UX Streaming Data Language for the Cloud: Bringing a better communication paradigm for media and data.
    - Get User Profile Information from SharePoint UserProfile Service: Used the SharePoint object model to get the user profile information from the User Profile Service.
    - Guess The City & State Windows 8 Source Code: Source code for the Guess The City & State in Malaysia Windows 8 app.
    - Jquery Tree: This project is to demonstrate basic tree functionality.
    - MCEBuddy 2.x: Convert and remove commercials for your Windows Media Center.
    - MvcDesign: MvcDesign engine implementation project.
    - My Task Manager: This is a task manager module for DotNetNuke. I am using it to get started developing modules.
    - MyAppwithbranches: MyAppwithbranches
    - Projecte prova: r
    - pyGEO: pyGEO is a python package capable of parsing microarray data files. It also has a primitive plotting function.
    - Scarlet Road: Scarlet Road is a top-down shooter. It's you against an unending horde of monsters.
    - simplecounter: A simple counter, click and counter.
    - SiteEmpires: ????????
    - Soundcloud Loader: Simple tool for downloading tracks from Soundcloud.
    - Windows Phone Samples: Windows Phone code samples.

    Read the article

  • How accurate is "Business logic should be in a service, not in a model"?

    - by Jeroen Vannevel
    Situation

    Earlier this evening I gave an answer to a question on StackOverflow. The question: Editing of an existing object should be done in the repository layer or in a service? For example, if I have a User that has debt and I want to change his debt, should I do it in UserRepository or in a service, for example BuyingService, by getting the object, editing it and saving it?

    My answer: You should leave the responsibility of mutating an object to that same object and use the repository to retrieve this object. Example situation:

    class User {
        private int debt; // debt in cents
        private string name;

        // getters

        public void makePayment(int cents) {
            debt -= cents;
        }
    }

    class UserRepository {
        public User GetUserByName(string name) {
            // Get appropriate user from database
        }
    }

    A comment I received: "Business logic should really be in a service. Not in a model."

    What does the internet say?

    So, this got me searching, since I've never really (consciously) used a service layer. I started reading up on the Service Layer pattern and the Unit Of Work pattern, but so far I can't say I'm convinced a service layer has to be used. Take for example this article by Martin Fowler on the anti-pattern of an Anemic Domain Model: There are objects, many named after the nouns in the domain space, and these objects are connected with the rich relationships and structure that true domain models have. The catch comes when you look at the behavior, and you realize that there is hardly any behavior on these objects, making them little more than bags of getters and setters. Indeed often these models come with design rules that say that you are not to put any domain logic in the domain objects. Instead there are a set of service objects which capture all the domain logic. These services live on top of the domain model and use the domain model for data. (...) The logic that should be in a domain object is domain logic - validations, calculations, business rules - whatever you like to call it.

    To me, this seemed exactly what the situation was about: I advocated the manipulation of an object's data by introducing methods inside that class that do just that. However, I realize that this should be a given either way, and it probably has more to do with how these methods are invoked (using a repository). I also had the feeling that in that article (see below), a Service Layer is considered more a façade that delegates work to the underlying model than an actual work-intensive layer: Application Layer [his name for Service Layer]: Defines the jobs the software is supposed to do and directs the expressive domain objects to work out problems. The tasks this layer is responsible for are meaningful to the business or necessary for interaction with the application layers of other systems. This layer is kept thin. It does not contain business rules or knowledge, but only coordinates tasks and delegates work to collaborations of domain objects in the next layer down. It does not have state reflecting the business situation, but it can have state that reflects the progress of a task for the user or the program.

    Which is reinforced here: Service interfaces. Services expose a service interface to which all inbound messages are sent. You can think of a service interface as a façade that exposes the business logic implemented in the application (typically, logic in the business layer) to potential consumers.

    And here: The service layer should be devoid of any application or business logic and should focus primarily on a few concerns.
    It should wrap Business Layer calls, translate your Domain in a common language that your clients can understand, and handle the communication medium between server and requesting client.

    This is a serious contrast to other resources that talk about the Service Layer: The service layer should consist of classes with methods that are units of work with actions that belong in the same transaction. Or the second answer to a question I've already linked: At some point, your application will want some business logic. Also, you might want to validate the input to make sure that there isn't something evil or nonperforming being requested. This logic belongs in your service layer.

    "Solution"?

    Following the guidelines in this answer, I came up with the following approach that uses a Service Layer:

    class UserController : Controller {
        private UserService _userService;

        public UserController(UserService userService) {
            _userService = userService;
        }

        public ActionResult MakeHimPay(string username, int amount) {
            _userService.MakeHimPay(username, amount);
            return RedirectToAction("ShowUserOverview");
        }

        public ActionResult ShowUserOverview() {
            return View();
        }
    }

    class UserService {
        private IUserRepository _userRepository;

        public UserService(IUserRepository userRepository) {
            _userRepository = userRepository;
        }

        public void MakeHimPay(string username, int amount) {
            _userRepository.GetUserByName(username).makePayment(amount);
        }
    }

    class UserRepository {
        public User GetUserByName(string name) {
            // Get appropriate user from database
        }
    }

    class User {
        private int debt; // debt in cents
        private string name;

        // getters

        public void makePayment(int cents) {
            debt -= cents;
        }
    }

    Conclusion

    All together, not much has changed here: code from the controller has moved to the service layer (which is a good thing, so there is an upside to this approach). However, this doesn't look like it has anything to do with my original answer. I realize design patterns are guidelines, not rules set in stone to be implemented whenever possible. Yet I have not found a definitive explanation of the service layer and how it should be regarded. Is it a means to simply extract logic from the controller and put it inside a service instead? Is it supposed to form a contract between the controller and the domain? Should there be a layer between the domain and the service layer? And, last but not least, following the original comment "Business logic should really be in a service. Not in a model.": is this correct? How would I introduce my business logic in a service instead of the model?

    Read the article

  • How to implement jQuery and MooTools together?

    - by Avi Kumar Manku
    I am developing a website in which I am implementing two image gallery sliders, one with jQuery and one with MooTools. But there is a problem: when I use both together, the jQuery slider doesn't work while the MooTools slider does. The jQuery slider works if I remove MooTools. What should I do to make both sliders work together? Any suggestions will be helpful.

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Tresmode | Footwear &amp; Accessories</title> <script type="text/javascript" src="js/jquery-1.5.min.js"></script> <script src="js/jquery.easing.1.3.js" type="text/javascript"></script> <script src="js/jquery.slideviewer.1.2.js" type="text/javascript"></script> <!-- Syntax hl --> <script src="js/jquery.syntax.min.js" type="text/javascript" charset="utf-8"></script> <script type="text/javascript"> $(window).bind("load", function() { $("div#mygaltop").slideView({toolTip: true, ttOpacity: 0.5}); $("div#mygalone").slideView(); //if leaved blank performs the default kind of animation (easeInOutExpo, 750) $("div#mygaltwo").slideView({ easeFunc: "easeInOutBounce", easeTime: 2200, toolTip: true }); $("div#mygalthree").slideView({ easeFunc: "easeInOutSine", easeTime: 100, uiBefore: true, ttOpacity: 0.5, toolTip: true }); }); $(function(){ $.syntax({root: 'http://www.gcmingati.net/wordpress/wp-content/themes/giancarlo-mingati/js/jquery-syntax/'}); }); </script> <link href="css/style.css" rel="stylesheet" type="text/css" /> <link href="css/product.css" rel="stylesheet" type="text/css" /> <link href="css/scroll.css" rel="stylesheet" type="text/css" /> <!--[if lte IE 8]> <link href="css/ieonly.css" rel="stylesheet" type="text/css" /> <![endif]--> <script language="javascript" type="text/javascript" src="js/mootools-1.2-core.js"></script> <script language="javascript" type="text/javascript" src="js/mootools-1.2-more.js"></script> <script language="javascript" type="text/javascript" src="js/SlideItMoo.js"></script> <script language="javascript" type="text/javascript"> window.addEvent('domready', function(){ /* thumbnails example , links only */ new SlideItMoo({itemsVisible:5, // the number of thumbnails that are visible currentElement: 0, // the current element. starts from 0. If you want to start the display with a specific thumbnail, change this thumbsContainer: 'thumbs', elementScrolled: 'thumb_container', overallContainer: 'gallery_container'}); /* thumbnails example , div containers */ new SlideItMoo({itemsVisible:5, // the number of thumbnails that are visible currentElement: 0, // the current element. starts from 0. If you want to start the display with a specific thumbnail, change this thumbsContainer: 'thumbs2', elementScrolled: 'thumb_container2', overallContainer: 'gallery_container2'}); /* banner rotator example */ new SlideItMoo({itemsVisible:1, // the number of thumbnails that are visible showControls:0, // show the next-previous buttons autoSlide:2500, // insert interval in milliseconds currentElement: 0, // the current element. starts from 0.
If you want to start the display with a specific thumbnail, change this transition: Fx.Transitions.Bounce.easeOut, thumbsContainer: 'banners', elementScrolled: 'banner_container', overallContainer: 'banners_container'}); }); </script> </head> <body> <div id="landing"> <!-- landing page menu --> <div id="landing_menu"> <ul> <li><a class="active" href="#">SPECIALS</a></li> <li><a href="#">SHOP MEN'S</a></li> <li class="none"><a class="none" href="#">SHOP WOMEN'S</a></li> </ul> </div> <!-- landing page menu --> <!-- loading container menu --> <div id="container_part"> <div id="big_image_slider"> <!-- <img src="images/briteloves.png" alt="Britelove" /> --> <div id="mygaltop" class="svw"> <ul> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/briteloves.png" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/1.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/2.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/3.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/4.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/5.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/6.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/7.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/8.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/9.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/10.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/11.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/12.jpg" /></li> </ul> </div> </div> <div class="new_style_banner"><img src="images/new_styles.png" alt="new style" /></div> <div class="new_style_banner"><img src="images/ford-super-models.png" alt="ford super models" /></div> </div> <!--- loading container menu --> <!-- footer scrool ---> <div id="footer_scroll"> <!--thumbnails slideshow begin--> <div id="gallery_container"> <div id="thumb_container"> <div id="thumbs"> <a href="gallery/full/DC080302018.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/1.jpg"/></a> <a href="gallery/full/DC080302028.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/2.jpg" /></a> <a href="gallery/full/DC080302030.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/3.jpg"/></a> <a href="gallery/full/DC080302018.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/4.jpg" /></a> <a href="gallery/full/DC080302028.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/5.jpg" /></a> <a href="gallery/full/DC080302030.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/6.jpg"/></a> <a href="gallery/full/DC080302018.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/1.jpg"/></a> <a href="gallery/full/DC080302028.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/2.jpg" /></a> <a href="gallery/full/DC080302030.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/7.jpg"/></a> <a href="gallery/full/DC080302018.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/8.jpg" /></a> <a href="gallery/full/DC080302028.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/9.jpg" /></a> <a href="gallery/full/DC080302030.jpg" rel="lightbox[galerie]" target="_blank"><img 
src="gallery/thumb/10.jpg"/></a> </div> </div> </div> <!--thumbnails slideshow end--> </div> <!-- foooter scrooll --> </div> <script type="text/javascript"> var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www."); document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E")); </script> <script type="text/javascript"> var pageTracker = _gat._getTracker("UA-2064812-2"); pageTracker._initData(); pageTracker._trackPageview(); </script> </body> </html>

    Read the article

  • How-to delete a tree node using the context menu

    - by frank.nimphius
    Hierarchical trees in Oracle ADF make use of View Accessors, which means that only the top level node needs to be exposed as a View Object instance on the ADF Business Components Data Model. This also means that only the top level node has a representation in the PageDef file as a tree binding and iterator binding reference. Detail nodes are accessed through tree rule definitions that use the accessor mentioned above (or nested collections in the case of POJO or EJB business services). The tree component is configured for single node selection, though this can be changed declaratively so that users can press the ctrl key and select multiple nodes. In the following, I explain how to create a context menu on the tree for users to delete the selected tree nodes. For this, the context menu item will access a managed bean, which then determines the selected node(s), the internal ADF node bindings and the rows they represent. As mentioned, the ADF Business Components Data Model only needs to expose the top level node data sources, which in this example is an instance of the Locations View Object. For the tree to work, you need to have associations defined between entities, which usually is done for you by Oracle JDeveloper if the database tables have foreign keys defined. Note: As a general hint of best practice, and to simplify your life: make sure your database schema is well defined and designed before starting your development project. Don't treat the database as something organic that grows and changes with the requirements as you proceed in your project. Business service refactoring in response to database changes is possible, but should be treated as an exception, not the rule. Good database design is a necessity – even for application developers – and nothing evil. To create the tree component, expand the Data Controls panel and drag the View Object collection to the view. From the context menu, select the tree component entry and continue with defining the tree rules that make up the hierarchical structure. As you can see when pressing the green plus icon in the Edit Tree Binding dialog, the data structure – Locations – Departments – Employees in my sample – shows without you having created a View Object instance for each of the nodes in the ADF Business Components Data Model. After you have configured the tree structure in the Edit Tree Binding dialog, press OK and the tree is created. Select the tree in the page editor and open the Structure Window (ctrl+shift+S). In the Structure window, expand the tree node to access the contextMenu facet. Use the right mouse button to insert a Popup into the facet. Repeat the same steps to insert a Menu and a Menu Item into the Popup you created. The Menu Item text should be changed to something meaningful like "Delete". Note that the custom menu item is later added to the context menu together with the default context menu options like expand and expand all. To define the action that is executed when the menu item is clicked, select the Action Listener property in the Property Inspector and click the arrow icon followed by the Edit menu option. Create or select a managed bean and define a method name for the action handler. Next, select the tree component and browse to its binding property in the Property Inspector. Again, use the arrow icon | Edit option to create a component binding in the same managed bean that has the action listener defined. 
    The tree handle is used in the action listener code, which is shown below:

    public void onTreeNodeDelete(ActionEvent actionEvent) {
      //access the tree from the JSF component reference created
      //using the af:tree "binding" property. The "binding" property
      //creates a pair of set/get methods to access the RichTree instance
      RichTree tree = this.getTreeHandler();
      //get the list of selected row keys
      RowKeySet rks = tree.getSelectedRowKeys();
      //access the iterator to loop over selected nodes
      Iterator rksIterator = rks.iterator();
      //The CollectionModel represents the tree model and is
      //accessed from the tree "value" property
      CollectionModel model = (CollectionModel) tree.getValue();
      //The CollectionModel is a wrapper for the ADF tree binding
      //class, which is JUCtrlHierBinding
      JUCtrlHierBinding treeBinding =
                     (JUCtrlHierBinding) model.getWrappedData();
      //loop over the selected nodes and delete the rows they
      //represent
      while (rksIterator.hasNext()) {
        List nodeKey = (List) rksIterator.next();
        //find the ADF node binding using the node key
        JUCtrlHierNodeBinding node =
                          treeBinding.findNodeByKeyPath(nodeKey);
        //delete the row
        Row rw = node.getRow();
        rw.remove();
      }
      //only refresh the tree if tree nodes have been selected
      if (rks.size() > 0) {
        AdfFacesContext adfFacesContext =
                             AdfFacesContext.getCurrentInstance();
        adfFacesContext.addPartialTarget(tree);
      }
    }

    Note: To enable multi node selection for a tree, select the tree and change the row selection setting from "single" to "multiple". Note: A fully pictured version of this post will become available at the end of the month in a PDF summary on ADF Code Corner: http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html 

    Read the article

  • CodePlex Daily Summary for Thursday, September 06, 2012

    CodePlex Daily Summary for Thursday, September 06, 2012

    Popular Releases

    menu4web: menu4web 0.4.1 - javascript menu for web sites: This release is for those who believe that global variables are evil. menu4web has been wrapped into the m4w singleton object. Added "Vertical Tabs" example which illustrates object notation.
    WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.1: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For the compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features: AsyncUI extensions, Controls and control extensions, Converters, Debugging helpers, Imaging, IO helpers, VisualTree helpers, Samples. Recent changes NOTE: Namespace changes DebugConsol...
    iPDC - Free Phasor Data Concentrator: iPDC-v1.3.1: iPDC suite version 1.3.1, modifications and bugs fixed (from v1.3.0). New User Manual for iPDC-v1.3.1 available on websites. Bug resolved: PMU Simulator TCP connection error and hung connection for client (PDC). Now the PMU Simulator (server) can communicate with more than one PDC (client) over TCP and UDP in parallel. The PMU Simulator now sends exactly the data frames specified in the data rate by the user. The PMU Simulator data rate has been verified by iPDC database entries and PMU Connection Tes...
    Microsoft SQL Server Product Samples: Database: AdventureWorks OData Feed: The AdventureWorks OData service exposes resources based on specific SQL views. The SQL views are a limited subset of the AdventureWorks database that results in several consuming scenarios: CompanySales, Documents, ManufacturingInstructions, ProductCatalog, TerritorySalesDrilldown, WorkOrderRouting. How to install the sample: You can consume the AdventureWorks OData feed from http://services.odata.org/AdventureWorksV3/AdventureWorks.svc. You can also consume the AdventureWorks OData fe...
    Desktop Google Reader: 1.4.6: Sorting feeds alphabetically is now optional (see preferences window)
    DotNetNuke® Community Edition CMS: 06.02.03: Major Highlights: Fixed issue where mailto: links were not working when sending bulk email. Fixed issue where users did not see friendship relationships (problem is in 6.2, which does not show in the Versions Affected list above). Fixed the issue with cascade deletes in comments in CoreMessaging_Notification. Fixed UI issue when using a date field as a required profile property during user registration. Fixed error when running the product in debug mode. Fixed visibility issue when...
    Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.65: Fixed null-reference error in the build task constructor.
    Active Forums for DotNetNuke CMS: Active Forums 5.0.0 RC: RC release of Active Forums 5.0.
    Droid Explorer: Droid Explorer 0.8.8.7 Beta: Bug in the display icon for apk's, will fix with next release. Added fallback icon if unable to get the image/icon from the Cloud Service. Removed some stale plugins that were either outdated or incomplete. Added handler for *.ab files for restoring backups. Added plugin to create device backups. Backups stored in %USERPROFILE%\Android Backups\%DEVICE_ID%\. Added custom folder icon for the android backups directory. Better error handling for installing an apk. Bug fixes for the Runn...
    BI System Monitor: v2.1: Data Audits report and supporting SQL, and SSIS package. Environment Overview report enhancements, improving the appearance, addition of data audit finding indicators. Note: SQL 2012 version coming soon.
    Hidden Capture (HC): Hidden Capture 1.1: Hidden Capture 1.1 by Mohsen E.Dawatgar http://Hidden-Capture.blogfa.com
    Ext Spec: Ext Spec 0.2.1: Refined examples and improved distribution options.
    The Visual Guide for Building Team Foundation Server 2012 Environments: Version 1: --
    Nearforums - ASP.NET MVC forum engine: Nearforums v8.5: Version 8.5 of Nearforums, the ASP.NET MVC Forum Engine. New features include: Built-in search engine using Lucene.NET. Flood control improvements. Notifications improvements: sync option and mail body. View Roadmap for more details. webdeploy package sha1 checksum: 961aff884a9187b6e8a86d68913cdd31f8deaf83
    WiX Toolset: WiX Toolset v3.6: WiX Toolset v3.6 introduces the Burn bootstrapper/chaining engine and support for Visual Studio 2012 and .NET Framework 4.5. Other minor functionality includes: WixDependencyExtension supports dependency checking among MSI packages. WixFirewallExtension supports more features of Windows Firewall. WixTagExtension supports Software Id Tagging. WixUtilExtension now supports recursive directory deletion. Melt simplifies pure-WiX patching by extracting .msi package content and updating .w...
    Iveely Search Engine: Iveely Search Engine (0.2.0)
    GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail Bugfix
    Smart Data Access layer: Smart Data Access Layer Ver 3: In this version, support for executing inline queries is added. Check the Documentation section for details.
    DotNetNuke® Form and List: 06.00.04: DotNetNuke Form and List 06.00.04. Don't forget to back up your installation before upgrading. Changes in 06.00.04 Fix: SQL scripts for 6.003 missed object qualifiers within stored procedures. Fix: added missing resource "cmdCancel.Text" in form.ascx.resx. Changes in 06.00.03 Fix: MakeThumbnail was broken if the application pool was configured to .NET 4. Change: Data is now stored in nvarchar(max) instead of ntext. Changes in 06.00.02 The scripts are now compatible with SQL Azure, tested in a ne...
    Coevery - Free CRM: Coevery 1.0.0.24: Add a sample database, and installation instructions.

    New Projects

    Any-Service: AnyService is a .NET 4.0 Windows service shell. It hosts any windows application in non-GUI mode to run as a service.
    BabyCloudDrives - the multi cloud drive desktop's application: wpf
    BLACK ORANGE: Download The HPAD TEXT EDITOR and use it Wisely..
    CodePlex New Release Checker: CodePlex New Release Checker is a small library that makes it easy to add "New Version Available!" functionality to your CodePlex project.
    Collect
    CSVManager
    Exam Project: My Exam Project. Computer Vision, C and OpenCV
    FTP: Hey guys thanks for checking out my ftp!
    Haushaltsbuch: 1
    ModMaker.Lua: ModMaker.Lua is an open source .NET library that parses and executes Lua code.
    MyJabbr: MyJabbr
    netduinoscope: Design shield and software to use netduino as oscilloscope
    NetSurveillance Web Application: Net Surveillance Web Application
    NiconicoApiHelper
    OStega: A simple library to encrypt text into a bmp or png image.
    OURORM: orm
    TFS Cloud Deployment Toolkit: The TFS Cloud Deployment Toolkit is a set of tools that integrate with TFS 2010 to help manage configuration and deployment to various remote environments.
    The Visual Guide for Building Team Foundation Server 2012 Environments: A step-by-step guide for building Team Foundation Server 2012 environments that include SharePoint Server 2010, SQL Server 2012, Windows Server 2012 and more!
    WinRT LineChart: An attempt at creating a usable LineChart for everyone to use in his/her own Windows 8 Apps

    Read the article

  • Conducting Effective Web Meetings

    - by BuckWoody
    There are several forms of corporate communication. From immediate, rich communications like phones and IM messaging to historical transactions like e-mail, there are a lot of ways to get information to one or more people. From time to time, it's even useful to have a meeting. (This is where a witty picture of a guy sleeping in a meeting goes. I won't bother actually putting one here; you're already envisioning it in your mind) Most meetings are pointless, and a complete waste of time. This is the fault, completely and solely, of the organizer. It's because he or she hasn't thought things through enough to think about alternate forms of information passing. Here's the criterion for a good meeting - whether in-person or over the web: 100% of the content of a meeting should require the participation of 100% of the attendees for 100% of the time. It doesn't get any simpler than that. If it doesn't meet that criterion, then don't invite that person to that meeting. If you're just conveying information and no one has the need for immediate interaction with that information (like telling you something that modifies the message), then send an e-mail. If you're a manager, and you need to get status from lots of people, pick up the phone. If you need a quick answer, use IM. I once had a high-level manager who called frequent meetings. His real need was status updates on various processes, so 50 of us would sit in a room while he asked each one of us questions. He believed this larger meeting helped us "cross-pollinate ideas". In fact, it was a complete waste of time for most everyone, except in the one or two moments that they interacted with him. So I wrote some code for a Palm Pilot (which was a kind of SmartPhone but with no phone and no real graphics, but this was in the days when we had just discovered fire and the wheel, although the order of those things is still in debate) that took an average of the salaries of the people in the room (I guessed at it) and ran a timer which multiplied the number of people against the salaries. I left that running in plain sight for him, and when he asked about it, I explained how much the meetings were really costing the company. We had far fewer meetings after. Meetings are now web-enabled. I believe that's largely a good thing, since it saves on travel time and allows more people to participate, but I think the rule above still holds. And in fact, there are some other rules that you should follow to have a great meeting - and fewer of them. Be Clear About the Goal This is important in any meeting, but all of us have probably gotten an invite with a web link and an ambiguous title. Then you get to the meeting, and it's a 500-level deep-dive on something everyone expects you to know. This is unfair to the "expert" and to the participants. I always tell people who invite me to a meeting that I will be as detailed as I can - but the more detail they can tell me about the questions, the more detailed I can be in my responses. Granted, there are times when you don't know what you don't know, but the more you can say about the topic the better. There's another point here - and it's that you should have a clearly defined "win" for the meeting. When the meeting is over, and everyone goes back to work, what were you expecting them to do with the information? Have that clearly defined in your head, and in the meeting invite. Understand the Technology There are several web-meeting clients out there. I use them all, since I meet with clients all over the world. 
    They all work differently - so I take a few moments and read up on the different clients and find out how I can use the tools properly. I do this with the technology I use for everything else, and it's important to understand it if the meeting is to be a success. If you're running the meeting, know the tools. I don't care if you like the tools or not, learn them anyway. Don't waste everyone else's time just because you're too bitter/snarky/lazy to spend a few minutes reading. Check your phone or mic. Check your video size. Install (and learn to use) ZoomIT (http://technet.microsoft.com/en-us/sysinternals/bb897434.aspx). Format your slides or screen or output correctly. Learn to use the voting features of the meeting software, and especially its whiteboard features. Figure out how multiple monitors work. Try a quick meeting with someone to test all this. Do this *before* you invite lots of other people to your meeting. Use a WebCam I'm not a pretty man. I have a face fit for radio. But after attending a meeting with clients where one Microsoft person used a webcam and another did not, I'm convinced that people pay more attention when a face is involved. There are tons of studies around this, or you can take my word for it, but toss a shirt on over those pajamas and turn the webcam on. Set Up Early Whether you're attending or leading the meeting, don't wait to sign on to the meeting at the time when it starts. I can almost plan that a 10:00 meeting will actually start at 10:10 because the participants or leader are just then installing the web client for the meeting at 10:00. Sign on early, go on mute, and then wait for everyone to arrive. Mute When Not Talking No one wants to hear your screaming offspring / yappy dog / other cubicle conversations / car wind noise (are you driving in a desert storm or something?) while the person leading the meeting is trying to talk. I use the Lync software from Microsoft for my meetings, and I mute everyone by default, and then tell them to un-mute to talk to the group. Share Collateral If you have a PowerPoint deck, mail it out in case you have a tech failure. If you have a document, share it as an attachment to the meeting. Don't make people ask you for the information - that's why you're there to begin with. Even better, send it out early. "But", you say, "then no one will come to the meeting if they have the deck first!" Uhm, then don't have a meeting. Send out the deck and a quick e-mail and let everyone get on with their productive day. Set Actions At the Meeting A meeting should have some sort of outcome (see point one). That means there are actions to take, a follow up, or some deliverable. Otherwise, it's an e-mail. At the meeting, decide who will do what, when things are needed, and so on. And avoid, if at all possible, setting up another meeting, unless absolutely necessary. So there you have it. Whether it's on-premises or on the web, meetings are a necessary evil, and should be treated that way. Like politicians, you should have as few of them as are necessary to keep the roads paved and public libraries open.
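    The salary timer from the anecdote above is a minute's worth of arithmetic: average hourly cost, times headcount, times elapsed time. A throwaway sketch of the same gag (Python, with a made-up $100,000 average salary and 2,000 work hours per year — tune both to your own guesses):

        import time

        def meeting_cost_timer(attendees, avg_salary=100000.0, work_hours_per_year=2000):
            # dollars burned per second, for all attendees combined
            rate = (avg_salary / work_hours_per_year / 3600.0) * attendees
            start = time.time()
            while True:  # stop with Ctrl+C when the meeting (mercifully) ends
                print("Meeting cost so far: $%.2f" % ((time.time() - start) * rate))
                time.sleep(60)  # update once a minute

        meeting_cost_timer(50)  # the 50-person status meeting from the story

    Left running in plain sight, it makes the same point the Palm Pilot did.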

    Read the article

  • socket operation on nonsocket or bad file descriptor

    - by Magn3s1um
    I'm writing a pthread server which takes requests from clients and sends them back a bunch of .ppm files. Everything seems to go well, but sometimes when I have just one client connected, trying to read from the file descriptor (for the file) fails with Bad file descriptor. This doesn't make sense, since my int fd isn't -1, and the file most certainly exists. Other times, I get this "Socket operation on nonsocket" error. This is weird because at other times it doesn't give me this error and everything works fine. When trying to connect multiple clients, for some reason it will only send correctly to one, and then the other client gets the bad file descriptor or "nonsocket" error, even though both threads are processing the same messages and do the same routines. Anyone have an idea why? Here's the code that is giving me that error:

    while(mqueue.head != mqueue.tail && count < dis_m){
        printf("Sending to client %s: %s\n", pointer->id, pointer->message);
        int fd;
        fd = open(pointer->message, O_RDONLY);
        char buf[58368];
        int bytesRead;
        printf("This is fd %d\n", fd);
        bytesRead = read(fd, buf, 58368);
        send(pointer->socket, buf, bytesRead, 0);
        perror("Error:\n");
        fflush(stdout);
        close(fd);
        mqueue.mcount--;
        mqueue.head = mqueue.head->next;
        free(pointer->message);
        free(pointer);
        pointer = mqueue.head;
        count++;
    }
    printf("Sending %s\n", pointer->message);
    int fd;
    fd = open(pointer->message, O_RDONLY);
    printf("This is fd %d\n", fd);
    printf("I am hhere2\n");
    char buf[58368];
    int bytesRead;
    bytesRead = read(fd, buf, 58368);
    send(pointer->socket, buf, bytesRead, 0);
    perror("Error:\n");
    close(fd);
    mqueue.mcount--;
    if(mqueue.head != mqueue.tail){
        mqueue.head = mqueue.head->next;
    }
    else{
        mqueue.head->next = malloc(sizeof(struct message));
        mqueue.head = mqueue.head->next;
        mqueue.head->next = malloc(sizeof(struct message));
        mqueue.tail = mqueue.head->next;
        mqueue.head->message = NULL;
    }
    free(pointer->message);
    free(pointer);
    pthread_mutex_unlock(&numm);
    pthread_mutex_unlock(&circ);
    pthread_mutex_unlock(&slots);

    The messages for both threads are the same, being of the form ./path/imageXX.ppm, where XX is the number that should go to the client. The file size of each image is 58368 bytes. Sometimes this code hangs on the read and stops execution. I don't know why that would be either, because the file descriptor comes back as valid. Thanks in advance. Edit: Here's some sample output:

    Sending to client a: ./support/images/sw90.ppm
    This is fd 4
    Error:
    : Socket operation on non-socket
    Sending to client a: ./support/images/sw91.ppm
    This is fd 4
    Error:
    : Socket operation on non-socket
    Sending ./support/images/sw92.ppm
    This is fd 4
    I am hhere2
    Error:
    : Socket operation on non-socket
    My dispatcher has defeated evil

    Sample with 2 clients (client b was serviced first):

    Sending to client b: ./support/images/sw87.ppm
    This is fd 6
    Error:
    : Success
    Sending to client b: ./support/images/sw88.ppm
    This is fd 6
    Error:
    : Success
    Sending to client b: ./support/images/sw89.ppm
    This is fd 6
    Error:
    : Success
    This is fd 6
    Error:
    : Bad file descriptor
    Sending to client a: ./support/images/sw85.ppm
    This is fd 6
    Error:

    As you can see, whoever is serviced first in this instance can open the files, but the second client cannot. Edit2: Full code. Sorry, it's pretty long and terribly formatted. 
#include <netinet/in.h> #include <netinet/in.h> #include <netdb.h> #include <arpa/inet.h> #include <sys/types.h> #include <sys/socket.h> #include <errno.h> #include <stdio.h> #include <unistd.h> #include <pthread.h> #include <stdlib.h> #include <string.h> #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> #include "ring.h" /* Version 1 Here is what is implemented so far: The threads are created from the arguments specified (number of threads that is) The server will lock and update variables based on how many clients are in the system and such. The socket that is opened when a new client connects, must be passed to the threads. To do this, we need some sort of global array. I did this by specifying an int client and main_pool_busy, and two pointers poolsockets and nonpoolsockets. My thinking on this was that when a new client enters the system, the server thread increments the variable client. When a thread is finished with this client (after it sends it the data), the thread will decrement client and close the socket. HTTP servers act this way sometimes (they terminate the socket as soon as one transmission is sent). *Note down at bottom After the server portion increments the client counter, we must open up a new socket (denoted by new_sd) and get this value to the appropriate thread. To do this, I created global array poolsockets, which will hold all the socket descriptors for our pooled threads. The server portion gets the new socket descriptor, and places the value in the first spot of the array that has a 0. We only place a value in this array IF: 1. The variable main_pool_busy < worknum (If we have more clients in the system than in our pool, it doesn't mean we should always create a new thread. At the end of this, the server signals on the condition variable clientin that a new client has arrived. In our pooled thread, we then must walk this array and check the array until we hit our first non-zero value. This is the socket we will give to that thread. The thread then changes the array to have a zero here. What if our all threads in our pool our busy? If this is the case, then we will know it because our threads in this pool will increment main_pool_busy by one when they are working on a request and decrement it when they are done. If main_pool_busy >= worknum, then we must dynamically create a new thread. Then, we must realloc the size of our nonpoolsockets array by 1 int. We then add the new socket descriptor to our pool. Here's what we need to figure out: NOTE* Each worker should generate 100 messages which specify the worker thread ID, client socket descriptor and a copy of the client message. Additionally, each message should include a message number, starting from 0 and incrementing for each subsequent message sent to the same client. I don't know how to keep track of how many messages were to the same client. Maybe we shouldn't close the socket descriptor, but rather keep an array of structs for each socket that includes how many messages they have been sent. Then, the server adds the struct, the threads remove it, then the threads add it back once they've serviced one request (unless the count is 100). ------------------------------------------------------------- CHANGES Version 1 ---------- NONE: this is the first version. 
*/ #define MAXSLOTS 30 #define dis_m 15 //problems with dis_m ==1 //Function prototypes void inc_clients(); void init_mutex_stuff(pthread_t*, pthread_t*); void *threadpool(void *); void server(int); void add_to_socket_pool(int); void inc_busy(); void dec_busy(); void *dispatcher(); void create_message(long, int, int, char *, char *); void init_ring(); void add_to_ring(char *, char *, int, int, int); int socket_from_string(char *); void add_to_head(char *); void add_to_tail(char *); struct message * reorder(struct message *, struct message *, int); int get_threadid(char *); void delete_socket_messages(int); struct message * merge(struct message *, struct message *, int); int get_request(char *, char *, char*); ///////////////////// //Global mutexes and condition variables pthread_mutex_t startservice; pthread_mutex_t numclients; pthread_mutex_t pool_sockets; pthread_mutex_t nonpool_sockets; pthread_mutex_t m_pool_busy; pthread_mutex_t slots; pthread_mutex_t numm; pthread_mutex_t circ; pthread_cond_t clientin; pthread_cond_t m; /////////////////////////////////////// //Global variables int clients; int main_pool_busy; int * poolsockets, nonpoolsockets; int worknum; struct ring mqueue; /////////////////////////////////////// int main(int argc, char ** argv){ //error handling if not enough arguments to program if(argc != 3){ printf("Not enough arguments to server: ./server portnum NumThreadsinPool\n"); _exit(-1); } //Convert arguments from strings to integer values int port = atoi(argv[1]); worknum = atoi(argv[2]); //Start server portion server(port); } /////////////////////////////////////////////////////////////////////////////////////////////// //The listen server thread///////////////////////////////////////////////////////////////////// /////////////////////////////////////////////////////////////////////////////////////////////// void server(int port){ int sd, new_sd; struct sockaddr_in name, cli_name; int sock_opt_val = 1; int cli_len; pthread_t threads[worknum]; //create our pthread id array pthread_t dis[1]; //create our dispatcher array (necessary to create thread) init_mutex_stuff(threads, dis); //initialize mutexes and stuff //Server setup /////////////////////////////////////////////////////// if ((sd = socket (AF_INET, SOCK_STREAM, 0)) < 0) { perror("(servConn): socket() error"); _exit (-1); } if (setsockopt (sd, SOL_SOCKET, SO_REUSEADDR, (char *) &sock_opt_val, sizeof(sock_opt_val)) < 0) { perror ("(servConn): Failed to set SO_REUSEADDR on INET socket"); _exit (-1); } name.sin_family = AF_INET; name.sin_port = htons (port); name.sin_addr.s_addr = htonl(INADDR_ANY); if (bind (sd, (struct sockaddr *)&name, sizeof(name)) < 0) { perror ("(servConn): bind() error"); _exit (-1); } listen (sd, 5); //End of server Setup ////////////////////////////////////////////////// for (;;) { cli_len = sizeof (cli_name); new_sd = accept (sd, (struct sockaddr *) &cli_name, &cli_len); printf ("Assigning new socket descriptor: %d\n", new_sd); inc_clients(); //New client has come in, increment clients add_to_socket_pool(new_sd); //Add client to the pool of sockets if (new_sd < 0) { perror ("(servConn): accept() error"); _exit (-1); } } pthread_exit(NULL); //Quit } //Adds the new socket to the array designated for pthreads in the pool void add_to_socket_pool(int socket){ pthread_mutex_lock(&m_pool_busy); //Lock so that we can check main_pool_busy int i; //If not all our main pool is busy, then allocate to one of them if(main_pool_busy < worknum){ pthread_mutex_unlock(&m_pool_busy); //unlock busy, we no 
longer need to hold it pthread_mutex_lock(&pool_sockets); //Lock the socket pool array so that we can edit it without worry for(i = 0; i < worknum; i++){ //Find a poolsocket that is -1; then we should put the real socket there. This value will be changed back to -1 when the thread grabs the sockfd if(poolsockets[i] == -1){ poolsockets[i] = socket; pthread_mutex_unlock(&pool_sockets); //unlock our pool array, we don't need it anymore inc_busy(); //Incrememnt busy (locks the mutex itself) pthread_cond_signal(&clientin); //Signal first thread waiting on a client that a client needs to be serviced break; } } } else{ //Dynamic thread creation goes here pthread_mutex_unlock(&m_pool_busy); } } //Increments the client number. If client number goes over worknum, we must dynamically create new pthreads void inc_clients(){ pthread_mutex_lock(&numclients); clients++; pthread_mutex_unlock(&numclients); } //Increments busy void inc_busy(){ pthread_mutex_lock(&m_pool_busy); main_pool_busy++; pthread_mutex_unlock(&m_pool_busy); } //Initialize all of our mutexes at the beginning and create our pthreads void init_mutex_stuff(pthread_t * threads, pthread_t * dis){ pthread_mutex_init(&startservice, NULL); pthread_mutex_init(&numclients, NULL); pthread_mutex_init(&pool_sockets, NULL); pthread_mutex_init(&nonpool_sockets, NULL); pthread_mutex_init(&m_pool_busy, NULL); pthread_mutex_init(&circ, NULL); pthread_cond_init (&clientin, NULL); main_pool_busy = 0; poolsockets = malloc(sizeof(int)*worknum); int threadreturn; //error checking variables long i = 0; //Loop and create pthreads for(i; i < worknum; i++){ threadreturn = pthread_create(&threads[i], NULL, threadpool, (void *) i); poolsockets[i] = -1; if(threadreturn){ perror("Thread pool created unsuccessfully"); _exit(-1); } } pthread_create(&dis[0], NULL, dispatcher, NULL); } ////////////////////////////////////////////////////////////////////////////////////////// /////////Main pool routines ///////////////////////////////////////////////////////////////////////////////////////// void dec_busy(){ pthread_mutex_lock(&m_pool_busy); main_pool_busy--; pthread_mutex_unlock(&m_pool_busy); } void dec_clients(){ pthread_mutex_lock(&numclients); clients--; pthread_mutex_unlock(&numclients); } //This is what our threadpool pthreads will be running. void *threadpool(void * threadid){ long id = (long) threadid; //Id of this thread int i; int socket; int counter = 0; //Try and gain access to the next client that comes in and wait until server signals that a client as arrived while(1){ pthread_mutex_lock(&startservice); //lock start service (required for cond wait) pthread_cond_wait(&clientin, &startservice); //wait for signal from server that client exists pthread_mutex_unlock(&startservice); //unlock mutex. 
pthread_mutex_lock(&pool_sockets); //Lock the pool socket so we can get the socket fd unhindered/interrupted for(i = 0; i < worknum; i++){ if(poolsockets[i] != -1){ socket = poolsockets[i]; poolsockets[i] = -1; pthread_mutex_unlock(&pool_sockets); } } printf("Thread #%d is past getting the socket\n", id); int incoming = 1; while(counter < 100 && incoming != 0){ char buffer[512]; bzero(buffer,512); int startcounter = 0; incoming = read(socket, buffer, 512); if(buffer[0] != 0){ //client ID:priority:request:arguments char id[100]; long prior; char request[100]; char arg1[100]; char message[100]; char arg2[100]; char * point; point = strtok(buffer, ":"); strcpy(id, point); point = strtok(NULL, ":"); prior = atoi(point); point = strtok(NULL, ":"); strcpy(request, point); point = strtok(NULL, ":"); strcpy(arg1, point); point = strtok(NULL, ":"); if(point != NULL){ strcpy(arg2, point); } int fd; if(strcmp(request, "start_movie") == 0){ int count = 1; while(count <= 100){ char temp[10]; snprintf(temp, 50, "%d\0", count); strcpy(message, "./support/images/"); strcat(message, arg1); strcat(message, temp); strcat(message, ".ppm"); printf("This is message %s to %s\n", message, id); count++; add_to_ring(message, id, prior, counter, socket); //Adds our created message to the ring counter++; } printf("I'm out of the loop\n"); } else if(strcmp(request, "seek_movie") == 0){ int count = atoi(arg2); while(count <= 100){ char temp[10]; snprintf(temp, 10, "%d\0", count); strcpy(message, "./support/images/"); strcat(message, arg1); strcat(message, temp); strcat(message, ".ppm"); printf("This is message %s\n", message); count++; } } //create_message(id, socket, counter, buffer, message); //Creates our message from the input from the client. Stores it in buffer } else{ delete_socket_messages(socket); break; } } counter = 0; close(socket);//Zero out counter again } dec_clients(); //client serviced, decrement clients dec_busy(); //thread finished, decrement busy } //Creates a message void create_message(long threadid, int socket, int counter, char * buffer, char * message){ snprintf(message, strlen(buffer)+15, "%d:%d:%d:%s", threadid, socket, counter, buffer); } //Gets the socket from the message string (maybe I should just pass in the socket to another method) int socket_from_string(char * message){ char * substr1 = strstr(message, ":"); char * substr2 = substr1; substr2++; int occurance = strcspn(substr2, ":"); char sock[10]; strncpy(sock, substr2, occurance); return atoi(sock); } //Adds message to our ring buffer's head void add_to_head(char * message){ printf("Adding to head of ring\n"); mqueue.head->message = malloc(strlen(message)+1); //Allocate space for message strcpy(mqueue.head->message, message); //copy bytes into allocated space } //Adds our message to our ring buffer's tail void add_to_tail(char * message){ printf("Adding to tail of ring\n"); mqueue.tail->message = malloc(strlen(message)+1); //allocate space for message strcpy(mqueue.tail->message, message); //copy bytes into allocated space mqueue.tail->next = malloc(sizeof(struct message)); //allocate space for the next message struct } //Adds a message to our ring void add_to_ring(char * message, char * id, int prior, int mnum, int socket){ //printf("This is message %s:" , message); pthread_mutex_lock(&circ); //Lock the ring buffer pthread_mutex_lock(&numm); //Lock the message count (will need this to make sure we can't fill the buffer over the max slots) if(mqueue.head->message == NULL){ add_to_head(message); //Adds it to head mqueue.head->socket = 
socket; //Set message socket mqueue.head->priority = prior; //Set its priority (thread id) mqueue.head->mnum = mnum; //Set its message number (used for sorting) mqueue.head->id = malloc(sizeof(id)); strcpy(mqueue.head->id, id); } else if(mqueue.tail->message == NULL){ //This is the problem for dis_m 1 I'm pretty sure add_to_tail(message); mqueue.tail->socket = socket; mqueue.tail->priority = prior; mqueue.tail->mnum = mnum; mqueue.tail->id = malloc(sizeof(id)); strcpy(mqueue.tail->id, id); } else{ mqueue.tail->next = malloc(sizeof(struct message)); mqueue.tail = mqueue.tail->next; add_to_tail(message); mqueue.tail->socket = socket; mqueue.tail->priority = prior; mqueue.tail->mnum = mnum; mqueue.tail->id = malloc(sizeof(id)); strcpy(mqueue.tail->id, id); } mqueue.mcount++; pthread_mutex_unlock(&circ); if(mqueue.mcount >= dis_m){ pthread_mutex_unlock(&numm); pthread_cond_signal(&m); } else{ pthread_mutex_unlock(&numm); } printf("out of add to ring\n"); fflush(stdout); } ////////////////////////////////// //Dispatcher routines ///////////////////////////////// void *dispatcher(){ init_ring(); while(1){ pthread_mutex_lock(&slots); pthread_cond_wait(&m, &slots); pthread_mutex_lock(&numm); pthread_mutex_lock(&circ); printf("Dispatcher to the rescue!\n"); mqueue.head = reorder(mqueue.head, mqueue.tail, mqueue.mcount); //printf("This is the head %s\n", mqueue.head->message); //printf("This is the tail %s\n", mqueue.head->message); fflush(stdout); struct message * pointer = mqueue.head; int count = 0; while(mqueue.head != mqueue.tail && count < dis_m){ printf("Sending to client %s: %s\n", pointer->id, pointer->message); int fd; fd = open(pointer->message, O_RDONLY); char buf[58368]; int bytesRead; printf("This is fd %d\n", fd); bytesRead=read(fd,buf,58368); send(pointer->socket,buf,bytesRead,0); perror("Error:\n"); fflush(stdout); close(fd); mqueue.mcount--; mqueue.head = mqueue.head->next; free(pointer->message); free(pointer); pointer = mqueue.head; count++; } printf("Sending %s\n", pointer->message); int fd; fd = open(pointer->message, O_RDONLY); printf("This is fd %d\n", fd); printf("I am hhere2\n"); char buf[58368]; int bytesRead; bytesRead=read(fd,buf,58368); send(pointer->socket,buf,bytesRead,0); perror("Error:\n"); close(fd); mqueue.mcount--; if(mqueue.head != mqueue.tail){ mqueue.head = mqueue.head->next; } else{ mqueue.head->next = malloc(sizeof(struct message)); mqueue.head = mqueue.head->next; mqueue.head->next = malloc(sizeof(struct message)); mqueue.tail = mqueue.head->next; mqueue.head->message = NULL; } free(pointer->message); free(pointer); pthread_mutex_unlock(&numm); pthread_mutex_unlock(&circ); pthread_mutex_unlock(&slots); printf("My dispatcher has defeated evil\n"); } } void init_ring(){ mqueue.head = malloc(sizeof(struct message)); mqueue.head->next = malloc(sizeof(struct message)); mqueue.tail = mqueue.head->next; mqueue.mcount = 0; } struct message * reorder(struct message * begin, struct message * end, int num){ //printf("I am reordering for size %d\n", num); fflush(stdout); int i; if(num == 1){ //printf("Begin: %s\n", begin->message); begin->next = NULL; return begin; } else{ struct message * left = begin; struct message * right; int middle = num/2; for(i = 1; i < middle; i++){ left = left->next; } right = left -> next; left -> next = NULL; //printf("Begin: %s\nLeft: %s\nright: %s\nend:%s\n", begin->message, left->message, right->message, end->message); left = reorder(begin, left, middle); if(num%2 != 0){ right = reorder(right, end, middle+1); } else{ right = 
reorder(right, end, middle); } return merge(left, right, num); } } struct message * merge(struct message * left, struct message * right, int num){ //printf("I am merginging! left: %s %d, right: %s %dnum: %d\n", left->message,left->priority, right->message, right->priority, num); struct message * start, * point; int lenL= 0; int lenR = 0; int flagL = 0; int flagR = 0; int count = 0; int middle1 = num/2; int middle2; if(num%2 != 0){ middle2 = middle1+1; } else{ middle2 = middle1; } while(lenL < middle1 && lenR < middle2){ count++; //printf("In here for count %d\n", count); if(lenL == 0 && lenR == 0){ if(left->priority < right->priority){ start = left; //Set the start point point = left; //set our enum; left = left->next; //move the left pointer point->next = NULL; //Set the next node to NULL lenL++; } else if(left->priority > right->priority){ start = right; point = right; right = right->next; point->next = NULL; lenR++; } else{ if(left->mnum < right->mnum){ ////printf("This is where we are\n"); start = left; //Set the start point point = left; //set our enum; left = left->next; //move the left pointer point->next = NULL; //Set the next node to NULL lenL++; } else{ start = right; point = right; right = right->next; point->next = NULL; lenR++; } } } else{ if(left->priority < right->priority){ point->next = left; left = left->next; //move the left pointer point = point->next; point->next = NULL; //Set the next node to NULL lenL++; } else if(left->priority > right->priority){ point->next = right; right = right->next; point = point->next; point->next = NULL; lenR++; } else{ if(left->mnum < right->mnum){ point->next = left; //set our enum; left = left->next; point = point->next;//move the left pointer point->next = NULL; //Set the next node to NULL lenL++; } else{ point->next = right; right = right->next; point = point->next; point->next = NULL; lenR++; } } } if(lenL == middle1){ flagL = 1; break; } if(lenR == middle2){ flagR = 1; break; } } if(flagL == 1){ point->next = right; point = point->next; for(lenR; lenR< middle2-1; lenR++){ point = point->next; } point->next = NULL; mqueue.tail = point; } else{ point->next = left; point = point->next; for(lenL; lenL< middle1-1; lenL++){ point = point->next; } point->next = NULL; mqueue.tail = point; } //printf("This is the start %s\n", start->message); //printf("This is mqueue.tail %s\n", mqueue.tail->message); return start; } void delete_socket_messages(int a){ }
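    One detail worth noting about the output above: perror("Error:\n") runs unconditionally after every send(), and errno is only meaningful after a call has actually failed — which is why the log shows "Error: : Success" on the lines that worked. A sketch of the dispatcher's send step with the missing return-value checks added (buffer size and field names taken from the question; this is a diagnostic rewrite, not the poster's code):

        int fd = open(pointer->message, O_RDONLY);
        if (fd < 0) {                          /* open() failed: report and skip; never read() fd -1 */
            perror(pointer->message);
        } else {
            char buf[58368];
            ssize_t bytesRead = read(fd, buf, sizeof buf);
            if (bytesRead < 0) {
                perror("read");
            } else if (send(pointer->socket, buf, (size_t)bytesRead, 0) < 0) {
                perror("send");                /* e.g. ENOTSOCK if pointer->socket holds a stale/garbage value */
            }
            close(fd);
        }

    Checking each call separately makes it immediately visible whether the bad descriptor is the file's or the socket's, which is the first step toward finding where the queue nodes are handing threads a stale socket value.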

    Read the article

  • Problem with jQuery plugin TinySort

    - by Volmar
    I'm trying to sort a list with the help of jQuery and the TinySort plugin, and it works well, but one thing is not working as I want. My code is: <!doctype html> <html> <head> <meta charset="UTF-8" /> <title>TinySort problem</title> <script type="text/javascript" src="http://so.volmar.se/www/js/jquery.js"></script> <script type="text/javascript" src="http://so.volmar.se/www/js/jquery.tinysort.min.js"></script> <script type="text/javascript"> function pktsort(way){ if($("div#paket>ul>li.sortdiv>a#s_abc").text() == "A-S"){ $("div#paket>ul>li.sortdiv>a#s_abc").text("S-A"); $("div#paket ul li.sortable").tsort("",{place:"org",returns:true,order:"desc"}); }else{ $("div#paket>ul>li.sortdiv>a#s_abc").text("A-S"); $("div#paket ul li.sortable").tsort("",{place:"org",returns:true,order:"asc"}); } } </script> </head> <body> <div id="paket" title="Paket"> <ul class="rounded"> <li class="sortdiv">Sort: <a href="#" onclick="pktsort();" class="active_sort" id="s_abc">A-S</a></li> <li class="sortable">Almost Famous</li> <li class="sortable">Children of Men</li> <li class="sortable">Coeurs</li> <li class="sortable">Colossal Youth</li> <li class="sortable">Demonlover</li> <li class="sortable">Femme Fatale</li> <li class="sortable">I'm Not There</li> <li class="sortable">In the City of Sylvia</li> <li class="sortable">Into the Wild</li> <li class="sortable">Je rentre à la maison</li> <li class="sortable">King Kong</li> <li class="sortable">Little Miss Sunshine</li> <li class="sortable">Man on Wire</li> <li class="sortable">Milk</li> <li class="sortable">Monsters Inc.</li> <li class="sortable">My Winnipeg</li> <li class="sortable">Ne touchez pas la hache</li> <li class="sortable">Nói albinói</li> <li class="sortable">Regular Lovers</li> <li class="sortable">Shaun of the Dead</li> <li class="sortable">Silent Light</li> <li class="addmore"><b>This text is not supposed to move</b></li> </ul> </div> </body> </html> You can try it out at: http://www.volmar.se/list-prob.html MY PROBLEM IS: I don't want the <li class="addmore"> to move above all the <li class="sortable"> elements when I press the sort link; I want it to always stay at the bottom. You can find documentation for the TinySort plugin here. I've tried loads of combinations of the place and returns properties but I just can't get it right.
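    One simple way to sidestep the plugin options entirely: pull the fixed row out of the list before sorting and re-append it afterwards, so TinySort never touches it. A sketch against the markup above, assuming jQuery 1.4 or later (where detach() exists and preserves the element's data and handlers):

        function pktsort(){
            var label = $("div#paket>ul>li.sortdiv>a#s_abc");
            var order = (label.text() == "A-S") ? "desc" : "asc";
            label.text(order == "desc" ? "S-A" : "A-S");
            var footer = $("div#paket ul li.addmore").detach();  // take the fixed row out
            $("div#paket ul li.sortable").tsort("", {place: "org", returns: true, order: order});
            $("div#paket ul.rounded").append(footer);            // put it back at the bottom
        }

    Since only li.sortable items remain in the list while tsort() runs, the addmore row can't be reordered above them.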

    Read the article

  • Dialog created after first becomes unresponsive unless created first?

    - by Justin Sterling
    After creating the initial dialog box, which works perfectly fine, I create another dialog box when the Join Game button is pressed. The dialog box is created and shown successfully; however, I am unable to type in the edit box, press any buttons, or even close the dialog. Does anyone understand how to fix this or why it happens? I made sure the dialog box itself was not the problem by creating and displaying it from the main loop of the application. It worked fine when I created it that way. So why does it fail when created from another dialog? My code is below. This code is for the DLGPROC function that each dialog uses.

    #define WIN32_LEAN_AND_MEAN
    #include "Windows.h"
    #include ".\Controllers\Menu\MenuSystem.h"
    #include ".\Controllers\Game Controller\GameManager.h"
    #include ".\Controllers\Network\Network.h"
    #include "resource.h"
    #include "main.h"

    using namespace std;

    extern GameManager g;
    extern bool men;
    NET_Socket server;
    extern HWND d;
    HWND joinDlg;
    char ip[64];

    void JoinMenu(){
        joinDlg = CreateDialog(g_hInstance, MAKEINTRESOURCE(IDD_GETADDRESSINFO), NULL, (DLGPROC)GameJoinDialogPrompt);
        SetFocus(joinDlg);
        // ShowWindow(joinDlg, SW_SHOW);
        ShowWindow(d, SW_HIDE);
    }

    LRESULT CALLBACK GameJoinDialogPrompt(HWND Dialogwindow, UINT Message, WPARAM wParam, LPARAM lParam){
        switch(Message){
        case WM_COMMAND:{
            switch(LOWORD(wParam)){
            case IDCONNECT:{
                GetDlgItemText(joinDlg, IDC_IP, ip, 63);
                if(server.ConnectToServer(ip, 7890, NET_UDP) == NET_INVALID_SOCKET){
                    LogString("Failed to connect to server! IP: %s", ip);
                    MessageBox(NULL, "Failed to connect!", "Error", MB_OK);
                    ShowWindow(joinDlg, SW_SHOW);
                    break;
                }
            }
            LogString("Connected!");
            break;
            case IDCANCEL:
                ShowWindow(d, SW_SHOW);
                ShowWindow(joinDlg, SW_HIDE);
                break;
            }
            break;
        }
        case WM_CLOSE:
            PostQuitMessage(0);
            break;
        }
        return 0;
    }

    LRESULT CALLBACK GameMainDialogPrompt(HWND Dialogwindow, UINT Message, WPARAM wParam, LPARAM lParam){
        switch(Message){
        case WM_PAINT:{
            PAINTSTRUCT ps;
            RECT rect;
            HDC hdc = GetDC(Dialogwindow);
            hdc = BeginPaint(Dialogwindow, &ps);
            GetClientRect(Dialogwindow, &rect);
            FillRect(hdc, &rect, CreateSolidBrush(RGB(0, 0, 0)));
            EndPaint(Dialogwindow, &ps);
            break;
        }
        case WM_COMMAND:{
            switch(LOWORD(wParam)){
            case IDC_HOST:
                if(!NET_Initialize()){
                    break;
                }
                if(server.CreateServer(7890, NET_UDP) != 0){
                    MessageBox(NULL, "Failed to create server.", "Error!", MB_OK);
                    PostQuitMessage(0);
                    return -1;
                }
                ShowWindow(d, SW_HIDE);
                break;
            case IDC_JOIN:{
                JoinMenu();
            }
            break;
            case IDC_EXIT:
                PostQuitMessage(0);
                break;
            default:
                break;
            }
            break;
        }
        return 0;
        }
    }

    I call the first dialog using the code below:

    void EnterMenu(){
        // joinDlg = CreateDialog(g_hInstance, MAKEINTRESOURCE(IDD_GETADDRESSINFO), g_hWnd, (DLGPROC)GameJoinDialogPrompt);//
        d = CreateDialog(g_hInstance, MAKEINTRESOURCE(IDD_SELECTMENU), g_hWnd, (DLGPROC)GameMainDialogPrompt);
    }

    The dialog boxes are not DISABLED by default, and they are visible by default. Everything is set to be active on creation, and no code deactivates the items on the dialog or the dialog itself.
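    With modeless dialogs created via CreateDialog, the usual culprit is the application's message pump: unless the loop routes messages through IsDialogMessage(), a modeless dialog never receives keyboard, tab, or edit-control input, and whichever dialog the loop doesn't know about behaves exactly as described. A sketch of the kind of loop that would be needed, assuming the pump lives in the main application and joinDlg and d are the handles above:

        MSG msg;
        while (GetMessage(&msg, NULL, 0, 0) > 0) {
            // Give each live modeless dialog first crack at its own input.
            if (joinDlg && IsDialogMessage(joinDlg, &msg))
                continue;
            if (d && IsDialogMessage(d, &msg))
                continue;
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }

    A separate detail worth checking: a dialog procedure is conventionally declared to return INT_PTR (TRUE if the message was handled, FALSE otherwise) rather than LRESULT, since window-procedure semantics don't apply to a DLGPROC.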

    Read the article

  • WPF Combobox binding: can't change selection.

    - by SteveCav
    After wasting hours on this, following on the heels of my Last Problem, I'm starting to feel that Framework 4 is a master of subtle evil, or my PC is haunted. I have three comboboxes and a textbox on a WPF form, and I have an out-of-the-box Subsonic 3 ActiveRecord DAL. When I load this "edit record" form, the comboboxes fill correctly, they select the correct items, and the textbox has the correct text. I can change the TextBox text and save the record just fine, but the comboboxes CANNOT BE CHANGED. The lists drop down and highlight, but when you click on an item, the selected item stays the same. Here's my XAML:

    <StackPanel Orientation="Horizontal" Margin="10,10,0,0">
        <TextBlock Width="80">Asset</TextBlock>
        <ComboBox Name="cboAsset" Width="180" DisplayMemberPath="AssetName" SelectedValuePath="AssetID" SelectedValue="{Binding AssetID}" ></ComboBox>
    </StackPanel>
    <StackPanel Orientation="Horizontal" Margin="10,10,0,0">
        <TextBlock Width="80">Status</TextBlock>
        <ComboBox Name="cboStatus" Width="180" DisplayMemberPath="JobStatusDesc" SelectedValuePath="JobStatusID" SelectedValue="{Binding JobStatusID}" ></ComboBox>
    </StackPanel>
    <StackPanel Orientation="Horizontal" Margin="10,10,0,0">
        <TextBlock Width="80">Category</TextBlock>
        <ComboBox Name="cboCategories" Width="180" DisplayMemberPath="CategoryName" SelectedValuePath="JobCategoryID" SelectedValue="{Binding JobCategoryID}" ></ComboBox>
    </StackPanel>
    <StackPanel Orientation="Horizontal" Margin="10,10,0,0">
        <TextBlock Width="80">Reason</TextBlock>
        <TextBox Name="txtReason" Width="380" Text="{Binding Reason}"/>
    </StackPanel>

    Here are the relevant snips of my code (intJobID is passed in):

    SvcMgrDAL.Job oJob;
    IQueryable<SvcMgrDAL.JobCategory> oCategories = SvcMgrDAL.JobCategory.All().OrderBy(x => x.CategoryName);
    IQueryable<SvcMgrDAL.Asset> oAssets = SvcMgrDAL.Asset.All().OrderBy(x => x.AssetName);
    IQueryable<SvcMgrDAL.JobStatus> oStatus = SvcMgrDAL.JobStatus.All();
    cboCategories.ItemsSource = oCategories;
    cboStatus.ItemsSource = oStatus;
    cboAsset.ItemsSource = oAssets;
    this.JobID = intJobID;
    oJob = SvcMgrDAL.Job.SingleOrDefault(x => x.JobID == intJobID);
    this.DataContext = oJob;

    Things I've tried:
    - Explicitly setting IsReadOnly="false" and IsSynchronizedWithCurrentItem="True"
    - Changing the combobox ItemsSources from IQueryables to Lists.
    - Building my own Job object (plain vanilla entity class).
    - Every binding mode for the comboboxes.

    The Subsonic DAL doesn't implement INotifyPropertyChanged, but I don't see why it'd need to for simple binding like this. I just want to be able to pick something from the dropdown and save it. Comparing it with my last problem (link at the top of this message), I seem to have something really weird going on with data sources. Maybe it's a Subsonic thing?
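    One experiment that would help isolate this: since WPF relies on change notification to keep SelectedValue and its source in sync, bind the form to a thin wrapper that does implement INotifyPropertyChanged instead of the raw Subsonic entity; if selection then sticks, the missing notification is the culprit. A minimal, hypothetical sketch — JobViewModel is my own name, not Subsonic API, and only AssetID is shown (JobStatusID and JobCategoryID would follow the same pattern):

        using System.ComponentModel;

        // Hand-written wrapper around the Subsonic Job entity, for binding experiments only.
        public class JobViewModel : INotifyPropertyChanged
        {
            private readonly SvcMgrDAL.Job _job;
            public JobViewModel(SvcMgrDAL.Job job) { _job = job; }

            public int AssetID
            {
                get { return _job.AssetID; }
                set { _job.AssetID = value; OnPropertyChanged("AssetID"); }
            }

            public event PropertyChangedEventHandler PropertyChanged;
            private void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }

    Usage would be this.DataContext = new JobViewModel(oJob); with the XAML unchanged, since the wrapper exposes the same property names the bindings already use.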

    Read the article

  • Future of VB.NET? [closed]

    - by Alex Yeung
    Hi all, I worked with C# for years. Last year I changed jobs, and the new company uses VB.NET. Of course, theoretically C# and VB.NET are very similar, and I adapted easily. However, having now worked with VB.NET for a year, I cannot see any future for VB.NET. As a programming language, it is poor. Here is a list of what C# can do but VB.NET cannot:

    Case-insensitive variables. How do I think up a new variable name? If I have a property called FolderPath, I need to create another private variable called _folderPath or m_folderPath. In C#, FolderPath and folderPath are two variables. Moreover, you get a compile error if a variable name is the same as a class name, for example Dim guid = Guid.NewGuid(). (What the...) Again, I need to think up a new variable name.

    Ad-hoc scope. In C#, we can use {...} to create an ad-hoc scope, and the resources inside {...} do not affect the code outside. There is no such syntax in VB.NET. I can only use If True Then and End If to make a local scope, which is unclear.

    In-method regions. Sometimes it is unavoidable to have a long, long method. VB.NET does not support regions inside a method. I always need to scroll through 1000 lines, which wastes my time.

    No multi-line string definition. In C#, we can write var s = @"..." to define a multi-line string. In VB.NET there is no direct way to do that; the indirect way is to use an XML-literal string, Dim s = <![CDATA[...]]>.Value, which is unclear.

    No block comment. In C#, we have the line comment // and the block comment /* ... */. VB.NET only has line comments, which is a very big problem for me.

    No statement end symbol. Statements are separated by line breaks in VB.NET, while statements are separated by ; in C#.

    Underscore. I think many people know that underscore _ is the line-continuation symbol. I really dislike it. I know the MS VB.NET language team is going to remove the underscore requirement from VB.NET, but what can we do now? And even if the underscore is removed in the future, what is the advantage of that? I cannot see any advantage!

    With scope. With is an evil scope. Although it allows shorter statements, it is hard to trace.

    Default namespace at the project level. It is a nightmare for me.

    The only advantage of VB.NET is property initialization. I think C# cannot do that (correct me if I am wrong): Public Property ThisIsMyProperty As String = "MyValue"

    Remarks: I don't think optional method parameters are an advantage of OOP. Given those disadvantages, I cannot see the future of VB.NET. Does anyone see a future for VB.NET?
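    To make the ad-hoc scope complaint above concrete, here is the workaround the post refers to, sketched in VB.NET (ComputeSomething and Process are placeholder names, not real APIs):

        ' VB.NET has no bare { ... } block, so a throwaway If is the usual stand-in:
        If True Then
            Dim temp = ComputeSomething() ' temp exists only inside this If block
            Process(temp)
        End If
        ' temp is out of scope here, just as it would be after a C# { ... } block

    It compiles and scopes correctly, but as the post says, it reads as a conditional rather than as a deliberate scope, which is the whole objection.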

    Read the article

  • Ajax Control Toolkit July 2011 Release and the New HTML Editor Extender

    - by Stephen Walther
    I’m happy to announce the July 2011 release of the Ajax Control Toolkit which includes important bug fixes and a completely new HTML Editor Extender control. You can download the July 2011 Release by visiting the Ajax Control Toolkit CodePlex site at: http://AjaxControlToolkit.CodePlex.com Using the New HTML Editor Extender Control You can use the new HTML Editor Extender to extend any standard ASP.NET TextBox control so that it supports rich formatting such as bold, italics, bulleted lists, numbered lists, typefaces and different foreground and background colors. The following code illustrates how you can extend a standard ASP.NET TextBox control with the HtmlEditorExtender: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Simple.aspx.cs" Inherits="WebApplication1.Simple" %> <%@ Register TagPrefix="asp" Namespace="AjaxControlToolkit" Assembly="AjaxControlToolkit" %> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Simple</title> </head> <body> <form id="form1" runat="server"> <asp:ToolkitScriptManager runat="Server" /> <asp:TextBox ID="txtComments" TextMode="MultiLine" Columns="60" Rows="8" runat="server" /> <asp:HtmlEditorExtender TargetControlID="txtComments" runat="server" /> </form> </body> </html> This page has the following three controls: ToolkitScriptManager – The ToolkitScriptManager renders all of the scripts required by the Ajax Control Toolkit. TextBox – The TextBox control is a standard ASP.NET TextBox which is set to display multiple lines (a TextArea instead of an Input element). HtmlEditorExtender – The HtmlEditorExtender is set to extend the TextBox control. You can use the standard TextBox Text property to read the rich text entered into the TextBox control on the server. Lightweight and HTML5 The HTML Editor Extender works on all modern browsers including the most recent versions of Mozilla Firefox (Firefox 5), Google Chrome (Chrome 12), and Apple Safari (Safari 5). Furthermore, the HTML Editor Extender is compatible with Microsoft Internet Explorer 6 and newer. The HTML Editor Extender is very lightweight. It takes advantage of the HTML5 ContentEditable attribute so it does not require an iframe or complex browser workarounds. If you select View Source in your browser while using the HTML Editor Extender, we hope that you will be pleasantly surprised by how little markup and script is generated by the HTML Editor Extender. Customizable Toolbar Buttons Depending on the web application that you are building, you will want to display different toolbar buttons with the HTML Editor Extender. One of the design goals of the HTML Editor Extender was to make it very easy for you to customize the toolbar buttons. Imagine, for example, that you want to use the HTML Editor Extender when accepting comments on blog posts. In that case, you might want to restrict the type of formatting that a user can display. You might want to enable a user to format text as bold or italic but you do not want the user to make any other formatting changes. 
The following page illustrates how you can customize the HTML Editor Extender toolbar: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="CustomToolbar.aspx.cs" Inherits="WebApplication1.CustomToolbar" %> <%@ Register TagPrefix="asp" Namespace="AjaxControlToolkit" Assembly="AjaxControlToolkit" %> <html> <head runat="server"> <title>Custom Toolbar</title> </head> <body> <form id="form1" runat="server"> <asp:ToolkitScriptManager Runat="server" /> <asp:TextBox ID="txtComments" TextMode="MultiLine" Columns="50" Rows="10" Text="Hello <b>world!</b>" Runat="server" /> <asp:HtmlEditorExtender TargetControlID="txtComments" runat="server"> <Toolbar> <asp:Bold /> <asp:Italic /> </Toolbar> </asp:HtmlEditorExtender> </form> </body> </html> Notice that the HTML Editor Extender in the page above has a Toolbar subtag. You can list the toolbar buttons which you want to appear within the subtag. In the case above, only Bold and Italic buttons are displayed. Here is a complete list of the Toolbar buttons currently supported by the HTML Editor Extender: Undo Redo Bold Italic Underline StrikeThrough Subscript Superscript JustifyLeft JustifyCenter JustifyRight JustifyFull InsertOrderedList InsertUnorderedList CreateLink UnLink RemoveFormat SelectAll UnSelect Delete Cut Copy Paste BackgroundColorSelector ForeColorSelector FontNameSelector FontSizeSelector Indent Outdent InsertHorizontalRule HorizontalSeparator Of course the HTML Editor Extender was designed to be extensible. You can create your own buttons and add them to the control. Compatible with the AntiXSS Library When using the HTML Editor Extender on a public facing website, we strongly recommend that you use the HTML Editor Extender with the AntiXSS Library. If you allow users to submit arbitrary HTML, and you don’t take any action to strip out malicious markup, then you are opening your website to Cross-Site Scripting Attacks (XSS attacks). The HTML Editor Extender uses the Provider Model to support different Sanitizer Providers. The July 2011 release of the Ajax Control Toolkit ships with a single Sanitizer Provider which uses the AntiXSS library (see http://AntiXss.CodePlex.com ). A Sanitizer Provider is responsible for sanitizing HTML markup by removing any malicious elements, attributes, and attribute values. For example, the AntiXss Sanitizer Provider will take the following block of HTML: <b><a href=""javascript:doEvil()"">Visit Grandma</a></b> <script>doEvil()</script> And return the following sanitized block of HTML: <b><a href="">Visit Grandma</a></b> Notice that the JavaScript href and <SCRIPT> tag are both stripped out. Be aware that there are a depressingly large number of ways to sneak evil markup into your HTML. You definitely want a Sanitizer as a safety net. Before you can use the AntiXSS Sanitizer Provider, you must add three assemblies to your web application: AntiXSSLibrary.dll, HtmlSanitizationLibrary.dll, and SanitizerProviders.dll. All three assemblies are included with the CodePlex download of the Ajax Control Toolkit in the SanitizerProviders folder. 
Here’s how you modify your web.config file to use the AntiXSS Sanitizer Provider: <configuration> <configSections> <sectionGroup name="system.web"> <section name="sanitizer" requirePermission="false" type="AjaxControlToolkit.Sanitizer.ProviderSanitizerSection, AjaxControlToolkit"/> </sectionGroup> </configSections> <system.web> <compilation targetFramework="4.0" debug="true"/> <sanitizer defaultProvider="AntiXssSanitizerProvider"> <providers> <add name="AntiXssSanitizerProvider" type="AjaxControlToolkit.Sanitizer.AntiXssSanitizerProvider"></add> </providers> </sanitizer> </system.web> </configuration> You can detect whether the HTML Editor Extender is using the AntiXSS Sanitizer Provider by checking the HtmlEditorExtender SanitizerProvider property like this: if (MyHtmlEditorExtender.SanitizerProvider == null) { throw new Exception("Please enable the AntiXss Sanitizer!"); } When the SanitizerProvider property has the value null, you know that a Sanitizer Provider has not been configured in the web.config file. Because the AntiXSS library requires Full Trust, you cannot use the AntiXSS Sanitizer Provider with most shared website hosting providers. Because most shared hosting providers only support Medium Trust and not Full Trust, we do not recommend using the HTML Editor Extender with a public website hosted with a shared hosting provider. Why a New HTML Editor Control? The Ajax Control Toolkit now includes two HTML Editor controls. Why did we introduce a new HTML Editor control when there was already an existing HTML Editor? We think you will like the new HTML Editor much more than the previous one. We had several goals with the new HTML Editor Extender: Lightweight – We wanted to leverage HTML5 to create a lightweight HTML Editor. The new HTML Editor generates much less markup and script than the previous HTML Editor. Secure – We wanted to make it easy to integrate the AntiXSS library with the HTML Editor. If you are creating a public facing website, we strongly recommend that you use the AntiXSS Provider. Customizable – We wanted to make it easy for users to customize the toolbar buttons displayed by the HTML Editor. Compatibility – We wanted to ensure that the HTML Editor will work with the latest versions of the most popular browsers (including Internet Explorer 6 and higher). The old HTML Editor control is still included in the Ajax Control Toolkit and continues to live in the AjaxControlToolkit.HTMLEditor namespace. We have not modified the control and you can continue to use the control in the same way as you have used it in the past. However, we hope that you will consider migrating to the new HTML Editor Extender for the reasons listed above. Summary We’ve introduced a new Ajax Control Toolkit control with this release. I want to thank the developers and testers on the Superexpert team for the huge amount of work which they put into this control. It was a non-trivial task to build an entirely new control which has the complexity of the HTML Editor in less than 6 weeks. Please let us know what you think! We want to hear your feedback. If you discover issues with the new HTML Editor Extender control, or you have questions about the control, or you have ideas for how it can be improved, then please post them to this blog. Tomorrow starts a new sprint
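    Under the covers, the AntiXSS Sanitizer Provider delegates to the AntiXSS sanitization library. As a rough sketch (assuming a reference to HtmlSanitizationLibrary.dll, which exposes the Microsoft.Security.Application namespace), you can call the sanitizer directly to see the same stripping behavior shown above:

        using System;
        using Microsoft.Security.Application;

        class SanitizerDemo
        {
            static void Main()
            {
                string evil = "<b><a href=\"javascript:doEvil()\">Visit Grandma</a></b> <script>doEvil()</script>";
                // GetSafeHtmlFragment removes the javascript: href value and the <script> block,
                // leaving only harmless formatting markup
                string safe = Sanitizer.GetSafeHtmlFragment(evil);
                Console.WriteLine(safe);  // expected: <b><a href="">Visit Grandma</a></b>
            }
        }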

    Read the article

  • SQL SERVER – Shrinking NDF and MDF Files – Readers’ Opinion

    - by pinaldave
    Previously, I had written a blog post about SQL SERVER – Shrinking NDF and MDF Files – A Safe Operation. After that, I wrote the following blog post, which talks about the advantages and disadvantages of shrinking and why one should not shrink a file: SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008. On this subject, SQL Server expert Imran Mohammed left an excellent comment. I just feel that his comment is worth a big article in itself. So that everybody can read his wonderful explanation, I am posting it in this blog post. Thanks Imran! Shrinking a database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say Shrinking a Database is bad and evil, here I am saying it first and loud. Now, Imran's comment is written keeping in mind only the process of how the Shrink Database operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran's comments, as there are a few corrections. Comments from Imran - Before I explain the concept of Shrink Database, let us understand the concept of Database Files. When we create a new database in SQL Server, SQL Server typically creates two physical files in the Operating System: one with the .MDF extension, and another with the .LDF extension. The .MDF file is called the Primary Data File. The .LDF file is called the Transaction Log File. If you add one or more data files to a database, the physical file created in the Operating System will have the .NDF extension, and it is called a Secondary Data File; whereas, when you add one or more log files to a database, the physical file created in the Operating System will have the same .LDF extension. The questions now are, "Why does a new data file have a different extension (.NDF)?", "Why is it called a secondary data file?" and, "Why is the .MDF file called a primary data file?" Answers: Note: The following explanation is based on my limited knowledge of SQL Server, so experts please do comment. A data file with the .MDF extension is called a Primary Data File, and the reason behind it is that it contains Database Catalogs. Catalogs mean Meta Data. Meta Data is "Data about Data". An example of Meta Data is the set of system objects that store information about other objects, as opposed to the data stored by users. sysobjects stores information about all objects in that database. sysindexes stores information about all indexes and rows of every table in that database. syscolumns stores information about all columns that each table has in that database. sysusers stores how many users that database has. Although Meta Data stores information about other objects, it is not the transactional data that a user enters; rather, it's system data about the data. Because the Primary Data File (.MDF) contains important information about the database, it is treated as a special file. It is given the name Primary Data File because it contains the Database Catalogs. This file is present in the Primary File Group. You can always create additional objects (tables, indexes etc.) in the Primary Data File (this file is present in the Primary File Group), by specifying that you want to create the object under the Primary File Group.
    Any additional data file that you add to the database will have only transactional data but no Meta Data, which is why it is called a Secondary Data File. It is given the extension .NDF so that the user can easily identify whether a specific data file is a Primary Data File or a Secondary Data File. There are many advantages to storing data in different files under different file groups. You can put your read-only tables in one file (file group) and your read-write tables in another file (file group), and take a backup of only the file group that has the read-write data, so that you can avoid backing up read-only data that cannot be altered. Creating additional files on different physical hard disks also improves I/O performance. A real-world scenario where we use files could be this one: Let's say you have created a database called MYDB on the D-Drive, which has 50 GB of space. You also have 1 database file (.MDF) and 1 log file on the D-Drive, and suppose that all of that 50 GB of space has been used up and you do not have any free space left, but you still want to add additional space to the database. One easy option would be to add one more physical hard disk to the server, add a new data file to the MYDB database and create this new data file on the new hard disk, then move some of the objects from one file to another, and make the file group under which you added the new file the default file group, so that any new object that is created goes into the new files, unless specified otherwise. Now that we have a basic idea of what data files are, what type of data they store and why they are named the way they are, let's move on to the next topic, Shrinking. First of all, I disagree with the Microsoft terminology for naming this feature "Shrinking". Shrinking, in regular terms, means to reduce the size of a file by means of compressing it. BUT in SQL Server, Shrinking DOES NOT mean compressing. Shrinking in SQL Server means removing empty space from database files and releasing the empty space either to the Operating System or to SQL Server. Let's examine this through an example. Let's say you have a database "MYDB" with a size of 50 GB that has about 20 GB of free space, which means 30 GB of the database is filled with data and 20 GB of space is free in the database because it is not currently utilized by SQL Server (the database); it is reserved and not yet in use. If you choose to shrink the database and release the empty space to the Operating System, then (MIND YOU) you can only shrink the database down to 30 GB (in our example). You cannot shrink the database to a size less than what is filled with data. So, if you have a database that is full and has no empty space in the data file and log file (and you don't have extra disk space to set the Auto Growth option ON), YOU CANNOT issue the SHRINK Database/File command, for two reasons: There is no empty space to be released, because the Shrink command does not compress the database; it only removes the empty space from the database files, and there is no empty space. Remember, the Shrink command is a logged operation. When we perform the Shrink operation, this information is logged in the log file. If there is no empty space in the log file, SQL Server cannot write to the log file and you cannot shrink the database. Now answering your questions: (1) Q: What are the USEDPAGES & ESTIMATEDPAGES that appear on the Results Pane after using DBCC SHRINKDATABASE (NorthWind, 10)?
    A: According to Books Online (for SQL Server 2000): UsedPages: the number of 8-KB pages currently used by the file. EstimatedPages: the number of 8-KB pages that SQL Server estimates the file could be shrunk down to. Important Note: Before asking any question, make sure you go through Books Online or do a quick search on Google first. Doing so has many advantages: 1. If someone else has already had this question, the chances that it has already been answered are better than 50%. 2. It reduces your waiting time for the answer. (2) Q: What is the difference between shrinking the database using a DBCC command like the one above and shrinking it from the Enterprise Manager console by right-clicking the database, going to TASKS and then selecting the SHRINK option, in a SQL Server 2000 environment? A: As far as my knowledge goes, there is no difference; both work the same way. One advantage of running the command from Query Analyzer is that your console won't freeze, so you can carry on with your regular activities in Enterprise Manager. (3) Q: What is this .NDF file that is discussed above? I have never heard of it. What is it used for? Is it used by end-users, DBAs or the SERVER/SYSTEM itself? A: The .NDF file is a secondary data file. You have never heard of it because, by default, SQL Server creates a database with only 1 data file (.MDF) and 1 log file (.LDF), or however your model database has been set up (the model database is a template used every time you create a new database using the CREATE DATABASE command). Unless you have added an extra data file, you will not see one. This file is used by SQL Server to store data saved by users. Hope this information helps. I would like to ask the experts to please comment if what I understand is not what the Microsoft guys meant. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
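    To tie Imran's explanation back to the DBCC command in the question, here is a minimal C# sketch (the connection string is an assumption for illustration; adjust it for your environment) that issues the shrink from client code and reads back the UsedPages and EstimatedPages columns described above:

        using System;
        using System.Data.SqlClient;

        class ShrinkDemo
        {
            static void Main()
            {
                // assumption: local server with Windows authentication
                using (var conn = new SqlConnection("Server=.;Database=NorthWind;Integrated Security=true"))
                {
                    conn.Open();
                    // shrink NorthWind, leaving 10 percent free space, as in the question above
                    using (var cmd = new SqlCommand("DBCC SHRINKDATABASE (NorthWind, 10)", conn))
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // UsedPages and EstimatedPages are counts of 8-KB pages
                            Console.WriteLine("File {0}: UsedPages={1}, EstimatedPages={2}",
                                reader["FileId"], reader["UsedPages"], reader["EstimatedPages"]);
                        }
                    }
                }
            }
        }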

    Read the article

  • HTML5 Form Validation

    - by Stephen.Walther
    The latest versions of Google Chrome (16+), Mozilla Firefox (8+), and Internet Explorer (10+) all support HTML5 client-side validation. It is time to take HTML5 validation seriously. The purpose of this blog post is to describe how you can take advantage of HTML5 client-side validation regardless of the type of application that you are building. You learn how to use the HTML5 validation attributes, how to perform custom validation using the JavaScript validation constraint API, and how to simulate HTML5 validation on older browsers by taking advantage of a jQuery plugin. Finally, we discuss the security issues related to using client-side validation. Using Client-Side Validation Attributes The HTML5 specification discusses several attributes which you can use with INPUT elements to perform client-side validation, including the required, pattern, min, max, step, and maxlength attributes. For example, you use the required attribute to require a user to enter a value for an INPUT element. The following form demonstrates how you can make the firstName and lastName form fields required: <!DOCTYPE html> <html> <head> <title>Required Demo</title> </head> <body> <form> <label> First Name: <input required title="First Name is Required!" /> </label> <label> Last Name: <input required title="Last Name is Required!" /> </label> <button>Register</button> </form> </body> </html> If you attempt to submit this form without entering a value for firstName or lastName then you get a validation error message. Notice that the value of the title attribute is used to display the validation error message "First Name is Required!". The title attribute does not work this way with the current version of Firefox. If you want to display a custom validation error message with Firefox then you need to include an x-moz-errormessage attribute like this: <input required title="First Name is Required!" x-moz-errormessage="First Name is Required!" /> The pattern attribute enables you to validate the value of an INPUT element against a regular expression. For example, the following form includes a social security number field which includes a pattern attribute: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Pattern</title> </head> <body> <form> <label> Social Security Number: <input required pattern="^\d{3}-\d{2}-\d{4}$" title="###-##-####" /> </label> <button>Register</button> </form> </body> </html> The regular expression in the form above requires the social security number to match the pattern ###-##-####. Notice that the input field includes both a pattern and a required validation attribute. If you don't enter a value then the regular expression is never triggered. You need to include the required attribute to force a user to enter a value and cause the value to be validated against the regular expression. Custom Validation You can take advantage of the HTML5 constraint validation API to perform custom validation. You can perform any custom validation that you need. The only requirement is that you write a JavaScript function.
For example, when booking a hotel room, you might want to validate that the Arrival Date is in the future instead of the past: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Constraint Validation API</title> </head> <body> <form> <label> Arrival Date: <input id="arrivalDate" type="date" required /> </label> <button>Submit Reservation</button> </form> <script type="text/javascript"> var arrivalDate = document.getElementById("arrivalDate"); arrivalDate.addEventListener("input", function() { var value = new Date(arrivalDate.value); if (value < new Date()) { arrivalDate.setCustomValidity("Arrival date must be after now!"); } else { arrivalDate.setCustomValidity(""); } }); </script> </body> </html> The form above contains an input field named arrivalDate. Entering a value into the arrivalDate field triggers the input event. The JavaScript code adds an event listener for the input event and checks whether the date entered is greater than the current date. If validation fails then the validation error message “Arrival date must be after now!” is assigned to the arrivalDate input field by calling the setCustomValidity() method of the validation constraint API. Otherwise, the validation error message is cleared by calling setCustomValidity() with an empty string. HTML5 Validation and Older Browsers But what about older browsers? For example, what about Apple Safari and versions of Microsoft Internet Explorer older than Internet Explorer 10? What the world really needs is a jQuery plugin which provides backwards compatibility for the HTML5 validation attributes. If a browser supports the HTML5 validation attributes then the plugin would do nothing. Otherwise, the plugin would add support for the attributes. Unfortunately, as far as I know, this plugin does not exist. I have not been able to find any plugin which supports both the required and pattern attributes for older browsers, but does not get in the way of these attributes in the case of newer browsers. There are several jQuery plugins which provide partial support for the HTML5 validation attributes including: · jQuery Validation — http://docs.jquery.com/Plugins/Validation · html5Form — http://www.matiasmancini.com.ar/jquery-plugin-ajax-form-validation-html5.html · h5Validate — http://ericleads.com/h5validate/ The jQuery Validation plugin – the most popular JavaScript validation library – supports the HTML5 required attribute, but it does not support the HTML5 pattern attribute. Likewise, the html5Form plugin does not support the pattern attribute. The h5Validate plugin provides the best support for the HTML5 validation attributes. 
The following page illustrates how this plugin supports both the required and pattern attributes: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>h5Validate</title> <style type="text/css"> .validationError { border: solid 2px red; } .validationValid { border: solid 2px green; } </style> </head> <body> <form id="customerForm"> <label> First Name: <input id="firstName" required /> </label> <label> Social Security Number: <input id="ssn" required pattern="^\d{3}-\d{2}-\d{4}$" title="Expected pattern is ###-##-####" /> </label> <input type="submit" /> </form> <script type="text/javascript" src="Scripts/jquery-1.4.4.min.js"></script> <script type="text/javascript" src="Scripts/jquery.h5validate.js"></script> <script type="text/javascript"> // Enable h5Validate plugin $("#customerForm").h5Validate({ errorClass: "validationError", validClass: "validationValid" }); // Prevent form submission when errors $("#customerForm").submit(function (evt) { if ($("#customerForm").h5Validate("allValid") === false) { evt.preventDefault(); } }); </script> </body> </html> When an input field fails validation, the validationError CSS class is applied to the field and the field appears with a red border. When an input field passes validation, the validationValid CSS class is applied to the field and the field appears with a green border. From the perspective of HTML5 validation, the h5Validate plugin is the best of the plugins. It adds support for the required and pattern attributes to browsers which do not natively support these attributes such as IE9. However, this plugin does not include everything in my wish list for a perfect HTML5 validation plugin. Here’s my wish list for the perfect back-compat HTML5 validation plugin: 1. The plugin would disable itself when used with a browser which natively supports HTML5 validation attributes. The plugin should not be too greedy – it should not handle validation when a browser could do the work itself. 2. The plugin should simulate the same user interface for displaying validation error messages as the user interface displayed by browsers which natively support HTML5 validation. Chrome, Firefox, and Internet Explorer all display validation errors in a popup. The perfect plugin would also display a popup. 3. Finally, the plugin would add support for the setCustomValidity() method and the other methods of the HTML5 validation constraint API. That way, you could implement custom validation in a standards compatible way and you would know that it worked across all browsers both old and new. Security It would be irresponsible of me to end this blog post without mentioning the issue of security. It is important to remember that any client-side validation — including HTML5 validation — can be bypassed. You should use client-side validation with the intention to create a better user experience. Client validation is great for providing a user with immediate feedback when the user is in the process of completing a form. However, client-side validation cannot prevent an evil hacker from submitting unexpected form data to your web server. You should always enforce your validation rules on the server. The only way to ensure that a required field has a value is to verify that the required field has a value on the server. The HTML5 required attribute does not guarantee anything. Summary The goal of this blog post was to describe the support for validation contained in the HTML5 standard. 
You learned how to use both the required and the pattern attributes in an HTML5 form. We also discussed how you can implement custom validation by taking advantage of the setCustomValidity() method. Finally, I discussed the available jQuery plugins for adding support for the HTML5 validation attributes to older browsers. Unfortunately, I am unaware of any jQuery plugin which provides a perfect solution to the problem of backwards compatibility.
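    Since the security section above is the part most worth acting on, here is a minimal server-side counterpart in ASP.NET (a sketch of my own; the field names are borrowed from the examples above, while the page and handler names are hypothetical) that re-applies the required and pattern rules on the server:

        using System;
        using System.Text.RegularExpressions;
        using System.Web.UI;

        public partial class Register : Page   // hypothetical code-behind class
        {
            protected void btnRegister_Click(object sender, EventArgs e)
            {
                string firstName = Request.Form["firstName"];
                string ssn = Request.Form["ssn"];

                // re-apply the client-side rules: required -> non-empty,
                // pattern -> the ###-##-#### regular expression
                bool valid = !String.IsNullOrEmpty(firstName)
                          && !String.IsNullOrEmpty(ssn)
                          && Regex.IsMatch(ssn, @"^\d{3}-\d{2}-\d{4}$");

                if (!valid)
                {
                    Response.StatusCode = 400;  // never trust that the browser enforced anything
                    return;
                }
                // process the registration...
            }
        }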

    Read the article

  • NINE Questions with Michelle Juett

    - by NINEQuestions
    Michelle Juett is one of the more interesting people I know, even though we’ve never met face to face. She’s part artist, part techie and all cool. We “met” via my good buddy George Clingerman and have been plotting to take over the world, errr… I mean “collaborating”, ever since. If you happen to live in the Seattle area, you can catch her and her work at Sakura Con on April 2-4, 2010 and various other gamer and art cons throughout the year. You can also find her on Twitter as @Shelldragon. Now that you know a little bit, I’ll let her tell you the rest of the story in these NINE Questions: 1. Where are you from? I was born in Clearwater, Florida. I like to tell people I'm from the Bermuda Triangle, it just makes explaining myself so much easier. My family moved to Washington when I was 5 and I've been in the Pacific Northwest ever since. We like to QQ about the rain but we really love the green trees and clean water. 2. What do you do? I fight evil by moonlight and win love by daylight... or something like that.  I’ve been in quality assurance for games during the day since January 2008 and an artist for life. I currently work in QA for a really awesome game company in Bellevue.  At home, I work on personal digital art, making game assets as well as other random freelance projects as they pop up. 3. How did you get to where you are now? I'm still not where I want to be but I'm getting closer. The biggest piece of advice I can give is to work hard and never settle for the minimum required. I tend to overwork myself but I've never regretted it. You can want something really badly, but if you aren't willing to work for it, then you can't expect it to just happen. I've always drawn and had an unhealthy love for video games that I was told I’d grow out of.  I knew I would not ‘grow out’ of games, and that real adults make them and I could too. After I graduated, while searching for jobs, I discovered game testing. I figured this would be a good way to get my foot in the door and start networking. I’ve worked with consoles, websites and now, PC games.  I stuck with my journey, although it has been a rocky one, daylighting as a tester and moonlighting as an artist. I'm still on that journey but I wouldn't have it any other way. Test has given me a perspective that is difficult, if not impossible, to obtain any other way. It gives an unconditional respect for other hard-working testers and an insight into creative problem solving. 4. So video game testing probably sounds WAY cooler than the reality. What's it like? What's a typical day like for you? Game testers don't get a lot of respect because of the stigmas attached and the fact that most people don't actually know what we do.  People hear about us opening and closing disc trays all day. Many places do treat their testers like numbers. It all depends on where you work and how awesome your company is. I've had to deal with a lot of bad work situations to get to a really good one. QA exists to ensure the game is as flawless and enjoyable as it can be by the time it has to leave the nest and go out into the world. This includes everything from the obvious: “can I beat the level and save the princess?” to the more obscure: “What happens when I lose internet connection while trying to save right before falling into a pit to my death while holding the jump key, then my cat pulls out my memory card and hides it in her litter box?” On the dev side, testers can be very scary people. Especially when the test team is not in house and you can’t see each other’s faces.  
I've seen both sides. We don't mean to hurt your feelings. We really DO love you and want your game to be the best it can be! It can be some serious tough love. 5. You are also an accomplished artist. Got any major projects right now you'd like to talk about? LOL, I don't know if I’d say I'm an accomplished artist just yet. I’m still a long way from where I want to be. I figure that’s what makes you grow though: the desire to never stop improving. I like QA but I want to be a full time artist. I was lucky enough to register for a table at Sakura Con in the 11 second window that the tables sold out. As such, I’ll be selling my wares in the Artist Alley April 2-4th. Part of preparing for this is actually making the art to be sold there. Anime is a fun pass time but I don’t draw a whole lot of it so I’m making up for lost time. As I seem to enjoy burying myself in work, I’m an art lead for a secret project that’s so secret I might be killed tonight for even mentioning it. I also take on various freelance projects and do what I can to help out indie games. I discovered the XNA community a year and a half ago and developed a love for Indies when I was writing a weekly newsletter on XBLA news. I’m a little late to the party but I find myself in a unique position where I am an artist and also have technical skills in games. While not programmer myself, I have a lot of game sense and experience. I hope to make some awesome happen. Lastly, I have an ongoing web comic Shell’s Angels) that tends to get neglected when I get busy. I still love drawing comics and keep a little book with me to sketch down ideas as they pop into my head. I may pick it back up again as a larger project sometime in the future. 6. Can you talk about any of the other freelance projects you're doing or are you sworn to secrecy on those too? We wouldn't want a team of game developer ninjas to take you out or anything. All my projects are currently 2d. I have personal projects such as the ongoing comic as well as a graphic novel I've been picking at here and there. My main focus until April is Sakura Con, Sakura Con, Sakura Con.  I see it as a great way to get exposure and convention experience. I found out I love conventions a couple years ago and I want to get more involved in them. 7. As an artist, what is your weapon of choice? What do you use to get most of your stuff done? I am a Photoshop Hero and I have the hoodie to prove it. (http://www.pennyarcademerch.com/pah090011.html) I've dabbled in other paint programs but I always gravitate back to Photoshop. She is my one true love. I'd like to learn programs like Flash or Anime Studio when I get a bit more time because of their animation abilities. I've worked on frame by frame animation forever but I would love to learn 2d rigging. Still, nothing can compare to a simple sketchpad and a pencil. I always have one on me in case I come across or think of something interesting and can't get to a computer. If the Courier ever comes to exist it will be an ideal weapon for me. 8. You did some videos too, depicting the art creation process. What was the motivation behind those? The creative process is just as important as the final product, if not more so.  I've always loved watching speed paint videos and wanted to try it out myself. Turns out it's a lot of work and time but it's definitely fun to go back and rewatch them. Art isn't always the end result and is more often the process itself. 9. Got any interesting tattoos? Designed any for yourself or other people? 
Not yet, but not for lack of desire. I've toiled over what and where for years. Last year, I finally decided the back of my shoulders would be the place. Like anything permanent, I want it to have meaning. I thought of somehow incorporating games but I couldn't find something I felt would stand the test of time even with all the classic sprite games. I'm very picky so we'll see if I can get something solid decided. Come see me at Sakura Con April 2 -4!!!

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #033

    - by Pinal Dave
    Here is the list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below them. Let me know which of the following is your favorite article from memory lane. 2007 Spatial Database Definition and Research Documents Here is the definition of a spatial database from Wikipedia: A spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types. Select Only Date Part From DateTime – Best Practice A very common question which I receive is how to get only the Date or Time part from a datetime value. In this blog post I explain the same in very simple words. T-SQL Paging Query Technique Comparison (OVER and ROW_NUMBER()) – CTE vs. Derived Table I have received a few emails and comments about my post SQL SERVER – T-SQL Paging Query Technique Comparison – SQL 2000 vs SQL 2005. The main question was: can this be done using a CTE? Absolutely! What about performance? It is identical! Please refer to the above-mentioned article for the history of paging. SQL SERVER – Cannot resolve collation conflict for equal to operation One of the very first errors I ever encountered in my career was this conflict. I have blogged about it, and I have realized that many others, like me, are facing this error. LEN and DATALENGTH of NULL Simple Example Here is a question for you: what is the LEN of a NULL value? Well, it is very easy – just read the blog. Recovery Models and Selection A very simple and easy explanation of the Database Backup Recovery Model and how to select the best option for you. Explanation SQL SERVER Hash Join Hash join gives the best performance when two or more tables are joined and at least one of them has no index or is not sorted. It is also expected that the smaller of the tables can be read into memory completely (though this is not necessary). Easy Sequence of SELECT FROM JOIN WHERE GROUP BY HAVING ORDER BY SELECT yourcolumns FROM tablenames JOIN tablenames WHERE condition GROUP BY yourcolumns HAVING aggregatecolumn condition ORDER BY yourcolumns NorthWind Database or AdventureWorks Database – Samples Databases In this blog post we learn how to install the Northwind database. I also shared the source where one can download this database, as it is used in many examples in the MSDN help files. sp_HelpText for sp_HelpText – Puzzle A simple quick puzzle – do you know the answer? If not, go ahead and read the blog. 2008 SQL SERVER – 2008 – Step By Step Installation Guide With Images When SQL Server 2008 was newly introduced, lots of people had no clue how to install it, and I used to receive a huge number of questions about it. I wrote this blog post in the spirit that it would help all the newbies install SQL Server 2008 with the help of images. Still today this blog post has been the bible for all the people who are confused by SQL Server installation. Inline Variable Assignment I loved this feature. I have always wanted this feature to be present in SQL Server. The last time I met developers from the Microsoft SQL Server team, I talked about this feature. I think this feature saves some time and makes the code more readable. 
    Introduction to Policy Management – Enforcing Rules on SQL Server If our company policy is to create all stored procedures with the prefix ‘usp’, developers should simply be prevented from creating stored procedures with any other prefix. Let us see a small tutorial on how to create conditions and a policy which will prevent any future SP from being created with any other prefix. 2009 Performance Counters from System Views – By Kevin Mckenna Many of you are not aware that access to performance information is readily available in SQL Server, and that too without querying performance counters using a custom application or via perfmon. Till now, this fact has remained undisclosed, but through this post I would like to explain how you can easily access SQL Server performance counter information. Without putting in much effort you will come across the system view sys.dm_os_performance_counters. As the name suggests, this provides you easy access to the SQL Server performance counter information that is passed on to perfmon, but you can get at it via T-SQL. Customize Toolbar – Remove Debug Button from Toolbar I was fond of the SQL Server Debugger feature in SQL Server 2000. To my utter disappointment, this feature was withdrawn from SQL Server 2005. The button of the debugger is similar to a play button and is used to run debugging commands of Visual Studio. For this reason, it gets quite infuriating for developers when they are developing in both Visual Studio and SSMS. Let us now see how we can remove the debugging button from SQL Server Management Studio. Effect of Normalization on Index and Performance A very interesting conversation which started on Twitter. If you read only one link, this is the one I encourage you to read. SSMS Feature – Multi-server Queries Using SQL Server Management Studio (SSMS), DBAs can now query multiple servers from one window. It is quite common for DBAs with a large number of servers to maintain to gather information from multiple SQL Servers and create reports. This feature is a blessing for the DBAs, as they can now assemble all the information instantaneously without going anywhere. Query Optimizer Hint ROBUST PLAN – Question to You “ROBUST PLAN” is a kind of query hint which works quite differently than other hints. It does not improve joins or force any indexes to be used; it just makes sure that a query does not fail due to an over-the-limit row size. Let me elaborate upon it in the blog post. 2010 Do you really know the difference between the various date functions available in SQL Server 2012? Here is a three-part story where we explored the same with examples: Fastest Way to Restore the Database Difference Between DATETIME and DATETIME2 Difference Between DATETIME and DATETIME2 – WITH GETDATE Shrinking NDF and MDF Files – Readers’ Opinion Shrinking a database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say Shrinking a Database is bad and evil, here I am saying it first and loud. Now, the comment of Imran is written while keeping in mind only the process showing how the Shrinking Database Operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran’s comments, as there are a few corrections. 
    2011 Solution – Puzzle – SELECT * vs SELECT COUNT(*) This is a very interesting question and I am very confident that not everyone knows the answer. Let me ask you again – which will be faster, SELECT * or SELECT COUNT(*), or do you think this is an apples-and-oranges comparison? 2012 Service Broker and CAP_CPU_PERCENT – Limiting SQL Server Instances to CPU Usage In SQL Server 2012 there are a few enhancements with regards to the SQL Server Resource Governor. One of the enhancements is how the resources are allocated. Let me explain with examples. Let us understand the entire discussion with the help of three different examples. Finding Size of a Columnstore Index Using DMVs A very common question I often see is the need for a list of columnstore indexes along with their size and corresponding table name. I quickly re-wrote a script using the DMVs sys.indexes and sys.dm_db_partition_stats. This script gives the size of the columnstore index on disk only. I am sure there are more advanced scripts to retrieve details related to the components associated with a columnstore index. However, I believe the following script is sufficient to start getting an idea of columnstore index size. Developer Training Resources and Summary Roundup Developer Training - Importance and Significance - Part 1 In this part we discussed the importance of training in the real world. The most important and valuable resource any company has is its employees. Employees who have been well-trained will be better at their jobs and produce a better product.  An employee who is well trained obviously knows more about their job and all the technical aspects. I have a very high opinion of training employees, and it is the most important task. Developer Training – Employee Morals and Ethics – Part 2 In this part we discussed the most crucial components of training. Often employees expect the company to pay for their training while the company expresses no interest in training the employee. Quite often training expenses are the real issue for both the employee and employer. Developer Training – Difficult Questions and Alternative Perspective - Part 3 This part was the most difficult to write, as I tried to address a few difficult questions and answers. Training is such a sensitive issue that many developers who do not receive a chance for training think about leaving the organization. Developer Training – Various Options for Developer Training – Part 4 In this part I tried to explore a few methods and options for training. The general feedback I received on this blog post was that it was short and that I should have explored each training subject in detail. I believe there are two big buckets of training: 1) Instructor-Led Training and 2) Self-Led Training. Developer Training – A Conclusive Summary - Part 5 There is no better motivation than a personal desire to learn new technology. Honestly, there is nothing more personal than learning. “Change is the only constant” and “adapt & overcome” are essential lessons of life. One cannot stop learning or resist change. In the IT industry, the “ego of knowing it all” and “resistance to change” are the most challenging issues. A Quick Look at Logging and Ideas around Logging Question: What is the first thing that comes to your mind when you hear the word “Logging”? Strangely enough, I got a different answer every single time. Let me just list the answers I got from my friends. Let us go over them one by one. 
    Beginning Performance Tuning with SQL Server Execution Plan Solution of Puzzle – Swap Value of Column Without Case Statement Earlier this week I asked a question about how to swap values of a column without using a CASE statement. Read here: SQL SERVER – A Puzzle – Swap Value of Column Without Case Statement. I proposed 3 different solutions in the blog post itself. I had requested the help of the community to come up with alternate solutions, and honestly I am stunned and amazed by the quality of the entries. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • VS 2012 Code Review &ndash; Before Check In OR After Check In?

    - by Tarun Arora
    “Is Code Review Important and Effective?” There is a consensus across the industry that code review is an effective and practical way to catch code inconsistency and possible defects early in the software development life cycle. Among others, some of the advantages of code reviews are:
    - Bugs are found faster
    - Forces developers to write readable code (code that can be read without explanation or introduction!)
    - Optimization methods/tricks/productive programs spread faster
    - Programmers as specialists "evolve" faster
    - It's fun
    “Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections.” Wikipedia. Nowhere does the definition mention whether it's better to review code before the code has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check-in and after check-in. Let's weigh the pros and cons of the approaches independently. Code Review Before Check In or Code Review After Check In? Approach 1 – Code Review before Check in. The developer completes the code and feels the code quality is appropriate for check-in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, whether it will result in any defects due to common coding mistakes, and whether any optimizations can be made to improve the code quality. [Image 1 – code review before check in]
    Pros:
    - Everything that gets committed to source control is reviewed.
    - Minimizes the chances of smelly code making its way into the code base.
    - Decreases the cost of fixing bugs; remember, the earlier you find them, the less painful they are to fix.
    Cons:
    - Development code freeze – since the changes aren't in source control yet, further development can only be done offline.
    - The changes have not been through a CI build, so it is hard to say whether the code abides by all build quality standards.
    - Inconsistent! Cumbersome to track the actual code review process.
    - Not every change to the code base is worth reviewing; a lot of effort is invested for very little gain.
    Approach 2 – Code Review after Check in. The developer checks in; random code reviews are performed on the checked-in code. [Image 2 – Code review after check in]
    Pros:
    - The code has already passed the CI build and run through any code analysis plug-ins you may have running on the build server. Instruct the developer to ensure ZERO FxCop, StyleCop and static code analysis issues before check-in. Code is cleaner and smell-free even before the code review.
    - No offline development; developers can continue to develop against source control.
    Cons:
    - Bad code can easily make its way into the code base.
    - Since the review takes place much later in the cycle, the cost of fixing issues can prove to be much higher.
    Approach 3 – Hybrid Approach. The community advocates a more hybrid approach, a blend of tooling and the human accountability quotient. [Image 3 – Hybrid Approach] 1. Code review high-impact check-ins. 
It is not possible to review everything; by setting up code review check-in policies you can end up slowing down your team. Moreover, the code that you are reviewing before check-in hasn't even been through a green CI build either. 2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug-ins on the build agent, you can identify the real issues that, in my opinion, can't possibly be identified using human reviews. Configure the tooling to report back the top 10 issues every day. Mandate the manual code review of individuals who keep making it onto this list of shame most often. 3. During Merge. I would prefer eliminating some of the other code issues during the merge from the Main branch to the release branch. In a scrum project this is still easier because cherry-picking the merges is a possibility and the size of code being reviewed is still limited. Let the tooling work for you; if someone breaks the CI build often, put them on a gated check-in build course until you see improvement. If someone appears on the top 10 list of shame generated via the build, then ensure that all their code is reviewed till you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check-in, you force the developer to work offline or stay put till the review is complete. What do the experts say? So I asked a few experts what they thought of a “Code Review quality gate before checking in code?" Terje Sandstrom | Microsoft ALM MVP You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, and not even having been through a CI build, with a green CI build being the main criterion for going further, e.g. to the review state. I would not like code lying around with no check-ins. With a requirement that code is checked in in small pieces, 4-8 hours of work max, and AT LEAST daily check-ins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release.  But that would all be on checked-in code.  Branching is absolutely one way to ease the pain.   Another way we are using is automatic quality builds, running metrics, coverage, static code analysis.  Unfortunately it takes some time – it would be great to have it on the CIs, but… – so it's scheduled every night. Based on this we get, among other stuff, top 10 lists of suspicious code, which is then subjected to reviews.  If a person seems to be very popular on these top 10 lists, we subject every check-in from that person to a review for a period. That normally helps.   None of the clients I have can afford to have every check-in reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises. David V. Corbin | Visual Studio ALM Ranger I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having “bad” code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the “official” branch until after the review. I advocate both, depending on circumstance (especially team dynamics).   - The “pre-checkin” is usually for elements that may impact the project as a whole. Think of it as another “gate” along with passing unit tests. 
- The “post-checkin” may very well not be at the changeset level, but correlates to a review at the “user story” level.   Again, this depends on the team dynamics in play…. Robert MacLean | Microsoft ALM MVP I do not think there is a right answer for the industry as a whole. In short, the question is: why do you do reviews? Your question implies risk mitigation, so in low-risk areas you can get away with it after check-in, while in high-risk areas you need to do it before check-in. An example is that those new to a team, or juniors, need it much earlier (maybe that is before check-in, maybe that is soon after) than seniors who have shipped twenty sprints on the team. Abhimanyu Singhal | Visual Studio ALM Ranger Depends on a per-scenario basis. We recommend post-check-in reviews when: 1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all. 2. We need to trace all changes and track history. 3. We have a code promotion strategy/process in place. For risk mitigation, post-check-in code can be promoted to Accepted branches. Or it can be rejected. Pre-check-in reviews are used when: 1. There is a high risk factor associated. 2. Reviewers generally (most of the time) have immediate availability. 3. The team does not have strict tracking needs. Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project. Thomas Schissler | Visual Studio ALM Ranger This is an interesting discussion; I’m right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well, I’d like to point out. 1.) If you do reviews per check-in, this is not very practical as a hard rule because it will disturb the flow of the team very often, or it will lead to reduced check-in frequency by the devs, which I would not accept. 2.) If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associated with the PBI, but then you might review code which has been changed by a later check-in and the dev may have already fixed the issue. Or you review the diff of the latest changeset of the PBI with the first, but then you might also review changes of other PBIs. Jakob Leander | Sr. Director, Avanade In my experience, manual code review: 1. Does not get done, and at the very least does not get redone after changes (regardless of intentions at the start of the project). 2. When a project actually does it, they often do not do it right away = errors pile up. 3. Requires a lot of time discussing/defining the standard and for the team to learn it. However, code review is very important, since e.g. even small memory leaks in a high-volume web solution have big consequences. In the last years I have advocated the following approach for code review: - Architects up front do “at least one best practice example” of each type of component and tell the team. Copy from this one. This should include error handling, logging, security etc. - The dev lead on the project continuously browses the code to validate that the best practices are used. Especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated, it is unlikely to “go bad” even during later code changes. Agree with the customer to rely on static code analysis from Visual Studio as the one and only coding standard. 
This has HUUGE benefits: - You can easily tweak it to reach the level you desire together with the customer. - It is easy to measure for both developers and management. - It is 100% consistent across the code base. - It gets validated all the time, so you never end up getting hammered by a customer review in the end. - It is easy to tell the developer that you do not want code back unless it has zero errors = minimized communication. You need to track this at least during nightly builds and make sure the team sees the total # of issues. Do not allow #issues to grow uncontrolled. On the project I run, I require code analysis to have run on code before check-in (check-in rule). This means: - You have to have a clean compile (or CA won't run), so this is an extra benefit = very few broken builds. - You can change a few of the rules to compile as errors instead of warnings. I often do this for “missing dispose” issues, which you REALLY do not want in your app. Tip: Place your custom CA rules files as part of the solution. That way it works when you do branching etc. (the path to the CA file is relative in VS). Some may argue that CA is not as good as manual inspection. But since manual inspection in reality suffers from the 3 issues listed at the start, it is IMO a MUCH better (and much cheaper) approach from a helicopter perspective. Tirthankar Dutta | Director, Avanade I think code review should be run both before and after check-ins. There are some code metrics that are meant to be run on the entire codebase … Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code… scales very well with the need to review less by containment and imposing architectural restrictions to emphasise the design. Bruno Capuano | Microsoft ALM MVP For code reviews (meaning peer reviews) in distributed teams I use http://www.vsanywhere.com/default.aspx  David Jobling | Global Sr. Director, Avanade Peer review is the only way to scale, and it's a great practice for all in the team to learn to perform and accept. In my experience you soon learn whose code to watch more than others and tune the attention. Mikkel Toudal Kristiansen | Manager, Avanade If you have several branches in your code base, you will need to merge often. This requires manual merging when a file has been changed in both branches. It offers a good opportunity to actually review the changed code. So my advice is: merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third-party tool exists: NDepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution. Jesse Houwing | Visual Studio ALM Ranger I gave a presentation on this subject at the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html  I’d like to add a few more points: - Before/after check-in is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talks/sits together to peer review, there’s no need to enforce a before-check-in policy. Peer programming and regular feedback during development can take care of most of the review requirements, as long as the team isn’t under stress. 
- Under stress, enforce pre-check-in reviews. It might sound strange if you're already under time or budgetary constraints, but it is under such conditions that most real issues are created or pile up.
- Use tools to catch the most common errors. Code Analysis/FxCop was already mentioned; HP Fortify, ReSharper, CodeRush, etc. can help you there. There are also a lot of third-party rules you can add to Code Analysis. I've written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule; it's much easier. But more importantly, make sure you have a good help page explaining *WHY* it's wrong.
- If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It's still better to do peer reviews and pair programming, but the most important thing is that bad-quality code doesn't make it into the important branch.

So my philosophy:
- Use tooling as much as possible.
- Make sure the team understands the tooling and the importance of the things it flags. It's too easy to just click “suppress all” and ignore the warnings.
- Under stress, tighten the process; it's under stress that the problems of late reviews will really surface.
- Most importantly, if you do reviews, do them as early as possible, but never later than needed.

In other words, pre-check-in vs. post-check-in doesn't really matter, as long as the review is done before the code is released. It'll just be much more expensive to fix any review outcomes the later you find them.

---

I would love to hear what you think!
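As a footnote to the Code Analysis tips above (compiling selected rules as errors, keeping the rules file inside the solution), here is a minimal sketch of what such a .ruleset file can look like for Visual Studio 2010 Code Analysis. The file name, description, and rule choices are illustrative, not prescriptive; CA2000 (“Dispose objects before losing scope”) is the rule behind most “missing dispose” warnings:

    <?xml version="1.0" encoding="utf-8"?>
    <RuleSet Name="Team Rules" Description="CA findings the team treats as errors" ToolsVersion="10.0">
      <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis" RuleNamespace="Microsoft.Rules.Managed">
        <!-- CA2000: Dispose objects before losing scope. Fail the build on "missing dispose". -->
        <Rule Id="CA2000" Action="Error" />
        <!-- Rules the team has consciously opted out of can be switched off here. -->
        <Rule Id="CA1709" Action="None" />
      </Rules>
    </RuleSet>

A project opts in by pointing its CodeAnalysisRuleSet MSBuild property at this file with a relative path, which is why checking the file in next to the solution keeps working across branches.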

    Read the article

  • VSNewFile: A Visual Studio Addin to More Easily Add New Items to a Project

    - by InfinitiesLoop
My first Visual Studio add-in! Creating add-ins is pretty simple once you get used to the CommandBar model it uses, which is apparently a general Office suite extensibility mechanism. Anyway, let me first explain my motivation for this. It started out as an academic exercise, as I have always wanted to dip my feet in a little VS extensibility. But then I thought of a legitimate need for an add-in, at least in my personal experience, so it took on new life. I figured I can't be the only one who has felt this way, so I decided to publish the add-in and host it on GitHub (VSNewFile on GitHub), hoping to spur contributions.

Adding Files the Built-in Way
Here's the problem I wanted to solve. You're working on a project, and it's time to add a new file to the project. Whatever it is - a class, script, HTML page, ASPX page, or what-have-you - you go through a menu or keyboard shortcut to get to the “Add New Item” dialog. Typically, you do it by right-clicking the location where you want the file (the project or a folder of it). This brings up a dialog that contains, well, every conceivable type of item you might want to add. It's all the available item templates, which can result in anywhere from a ton to a veritable sea of choices. To be fair, this dialog has been revamped in Visual Studio 2010, which organizes it a little better than Visual Studio 2008 and adds a search box. It also loads noticeably faster.

To me, this dialog is just getting in my way. If I want to add a JavaScript script to my project, I don't want to have to hunt for the script template in this dialog. Yes, it is categorized, and yes, it now has a search box. But still, it's a lot of UI to swim through when all I need is a new file in the project. I will name it. I will provide the content. I don't even need a “template”. VS kind of realizes this: in the Add menu of a class library project, for example, there is an “Add Class…” choice. But all this really does is select that project item in the dialog by default. You still must wait for the dialog, see it, and type in a name for the file. How is that really any different from hitting F2 on an existing item? It isn't.

Adding Files the Hack Way
What I often find myself doing, just to avoid going through this dialog, is to copy and paste an existing file, rename it, then “CTRL-A, DEL” the content. In a few short keystrokes I've got my new file. Even if the original file wasn't the right type, it doesn't matter, since I will rename it anyway, including the extension. It works well enough if the place I am adding the file to doesn't have much in it already. But if there are a lot of files at that level, it sucks, because the new file will have the name “Copy of xyz”, causing it to be moved into the ‘C’ section of the alphabetically sorted items, which might be far, far away from the original file (and so I tend to try and copy a file that starts with ‘C’ *evil grin*).

Using ‘Export Template’
To be completely fair, I should at least mention this feature. I'm not even sure if it is new in VS 2010 or not (I think so). It allows you to export a project item or items, including any project references they require. The export then becomes a new item in the available ‘installed templates’. No doubt this is useful to help bootstrap new projects, but it still requires you to go through the ‘New Item’ dialog.
Adding Files with VSNewFile
So hopefully I have sufficiently defined the problem and got a few of you to think, “Yeah, me too!”… What VSNewFile does is let you skip the dialog entirely by adding project items directly to the context menu. But it does a bit more than that, so do read on. For example, to add a new class, you can right-click the location and pick that option. A new .cs file is instantly added to the project, and the new item is selected and put into ‘rename’ mode immediately.

The default items available are listed later in this post, but you can customize them, and you can also customize the content of each template. To do so, you create a directory in your documents folder named ‘VSNewFile Templates’. In there, you drop the templates you want to use, but you name them in a particular way. For example, a template can add a new menu item named “Add TITLE” that creates a project item named “SOMEFILE.foo” (or ‘SOMEFILE1.foo’ if that exists, etc.). The format of the file name is:

<ORDER>_<KEY>_<BASE FILENAME>_<ICON ID>_<TITLE>.<EXTENSION>

Where:
- <ORDER> is a number that lets you determine the order of the items in the menu (relative to each other).
- <KEY> is a case-sensitive identifier, different for each template item. More on that later.
- <BASE FILENAME> is the default name of the file, which doesn't matter that much, since you will be renaming it anyway.
- <ICON ID> is a number that dictates the icon used for the menu item. There is a huge number of built-in choices. More on that later.
- <TITLE> is the string that will appear in the menu.

The contents of the file are the default content for the item (the ‘template’). The content of the file can contain anything you want, of course, but it also supports two tokens, %NAMESPACE% and %FILENAME%, which will be replaced with the corresponding values. Here is the content of this sample:

    testing
    Namespace = %NAMESPACE%
    Filename = %FILENAME%

I kind of went back and forth on this. I could have made it so there'd be an XML or JSON file that defines the templates, instead of cramming all this data into the filename itself. But I like the simplicity of this better. It makes it easy to customize, since you can literally just throw these files around, copy them from someone else, etc., without worrying about merging data into a central description file in whatever format.

Practical Use
One immediate thing I am using this for is to make it easier to add very commonly used scripts to my web projects. For example, uh, say, jQuery? :) All I need to do is drop jQuery-1.4.2.js and jQuery-1.4.2.min.js into the templates folder, provide the order, title, etc., and then instantly I can add jQuery to any project I have without even thinking about “where is jQuery? Can I copy it from that other project?”

Using the KEY
There are two reasons for the ‘key’ portion of the item name. First, it allows you to turn off the built-in, default templates, which are:
- FILE = Add File (generic, empty file)
- VB = Add VB Class
- CS = Add C# Class (includes some basic usings)
- HTML = Add HTML page (includes basic structure, doctype, etc.)
- JS = Add Script (includes an immediately-invoking function closure)

To turn one off, just include a file with the name “_<KEY>” (the worked example below shows this alongside custom templates). The other reason for the key is that a new Visual Studio command is created for each template. This makes it possible to bind a keyboard shortcut to one of them.
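To make the naming scheme concrete, here is what a hypothetical ‘VSNewFile Templates’ folder might look like. Every file name, key, and icon ID below is invented for illustration; icon IDs come from the built-in Office icon set discussed in the next section:

    VSNewFile Templates\
        10_JQ_jquery_1025_Add jQuery.js         <- order 10, key JQ, default file jquery.js, icon 1025, menu title "Add jQuery"
        20_SQL_Query_601_Add SQL Script.sql     <- order 20, key SQL, adds Query.sql with this file's content as the template
        _VB                                     <- disables the built-in "Add VB Class" item
        _HTML                                   <- disables the built-in "Add HTML page" item

With a folder like this, the context menu would offer the two custom items plus the remaining built-ins, and the keys JQ and SQL would each get their own Visual Studio command, which is what makes the keyboard bindings described next possible.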
So you could, for example, have a keyboard combination that adds a new web page to your website, or a new C# class to your class library, etc. Our sample item shows up in the keyboard bindings options just like any other command. Even though the contents of the template directory may change from one launch of Visual Studio to the next, the bindings remain attached to any item with a particular key, because the add-in takes care not to lose keyboard bindings even though the commands are completely recreated each time.

The Icon Face ID
Visual Studio uses a Microsoft Office-style add-in mechanism, I gather. There is a predetermined set of built-in icons available. You can use your own icons when developing add-ins, of course, but I'm no designer. I just wanted to find appropriate-ish icons for the built-in templates and let you choose from the existing built-in icons for your own. Unfortunately, there isn't a lot out there on the interwebs that helps you figure out what the built-in icons are. There's an MSDN article that describes at length a way to write a program that lists all the icons. But I don't want to write a program to figure them out! Just show them to me! Sheesh :) Thankfully, someone out there felt the same way and used a novel hack to get the icons to show up in an Outlook toolbar. He then painstakingly took screenshots of them, one group at a time. It isn't complete, though; there are tens of thousands of icons. But it's good enough. If anyone has an exhaustive list, please let me, and the rest of the add-in community, know: Icon Face ID Reference.

Installing the Add-in
It works with Visual Studio 2008 and Visual Studio 2010. Just unzip the release into your Documents\Visual Studio 20xx\Addins folder. It contains the binary and the Visual Studio “.addin” file. For example, the path to mine is:

    C:\Users\InfinitiesLoop\Documents\Visual Studio 2010\Addins

Conclusion
So that's it! I hope you find it as useful as I have. It's on GitHub, so if you're into this kind of thing, please do fork it and improve it!

Reference:
- VSNewFile on GitHub
- VSNewFile release on GitHub
- Icon Face ID Reference

    Read the article

  • HTML5 Form Validation

    - by Stephen.Walther
The latest versions of Google Chrome (16+), Mozilla Firefox (8+), and Internet Explorer (10+) all support HTML5 client-side validation. It is time to take HTML5 validation seriously. The purpose of this blog post is to describe how you can take advantage of HTML5 client-side validation regardless of the type of application that you are building. You learn how to use the HTML5 validation attributes, how to perform custom validation using the JavaScript validation constraint API, and how to simulate HTML5 validation on older browsers by taking advantage of a jQuery plugin. Finally, we discuss the security issues related to using client-side validation.

Using Client-Side Validation Attributes
The HTML5 specification discusses several attributes which you can use with INPUT elements to perform client-side validation, including the required, pattern, min, max, step, and maxlength attributes. For example, you use the required attribute to require a user to enter a value for an INPUT element. The following form demonstrates how you can make the firstName and lastName form fields required (the name attributes are added here so the markup matches the field names in the prose):

    <!DOCTYPE html>
    <html>
    <head>
        <title>Required Demo</title>
    </head>
    <body>
        <form>
            <label>
                First Name:
                <input name="firstName" required title="First Name is Required!" />
            </label>
            <label>
                Last Name:
                <input name="lastName" required title="Last Name is Required!" />
            </label>
            <button>Register</button>
        </form>
    </body>
    </html>

If you attempt to submit this form without entering a value for firstName or lastName, then you get a validation error message. Notice that the value of the title attribute is used to display the validation error message “First Name is Required!”. The title attribute does not work this way with the current version of Firefox. If you want to display a custom validation error message in Firefox, then you need to include an x-moz-errormessage attribute, like this:

    <input name="firstName" required title="First Name is Required!" x-moz-errormessage="First Name is Required!" />

The pattern attribute enables you to validate the value of an INPUT element against a regular expression. For example, the following form includes a social security number field with a pattern attribute:

    <!DOCTYPE html>
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
        <title>Pattern</title>
    </head>
    <body>
        <form>
            <label>
                Social Security Number:
                <input name="ssn" required pattern="^\d{3}-\d{2}-\d{4}$" title="###-##-####" />
            </label>
            <button>Register</button>
        </form>
    </body>
    </html>

The regular expression in the form above requires the social security number to match the pattern ###-##-####. Notice that the input field includes both a pattern and a required validation attribute. If you don't enter a value, then the regular expression is never triggered; you need to include the required attribute to force the user to enter a value and cause that value to be validated against the regular expression.

Custom Validation
You can take advantage of the HTML5 constraint validation API to perform custom validation. You can perform any custom validation that you need. The only requirement is that you write a JavaScript function.
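That function talks to a small API surface on each input element. A minimal sketch of the calls involved (the ssn field is borrowed from the pattern example above; all of these members are part of the constraint validation API in supporting browsers):

    var input = document.querySelector("input[name=ssn]");

    // Run the browser's built-in checks (required, pattern, etc.); returns true/false.
    var ok = input.checkValidity();

    // The validity object breaks the result down per constraint.
    input.validity.valueMissing;    // true when a required field is empty
    input.validity.patternMismatch; // true when the value fails the pattern attribute

    // Mark the field invalid with your own message, or clear it with "".
    input.setCustomValidity("Something is wrong with this value.");
    input.setCustomValidity("");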
For example, when booking a hotel room, you might want to validate that the arrival date is in the future instead of the past:

    <!DOCTYPE html>
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
        <title>Constraint Validation API</title>
    </head>
    <body>
        <form>
            <label>
                Arrival Date:
                <input id="arrivalDate" type="date" required />
            </label>
            <button>Submit Reservation</button>
        </form>
        <script type="text/javascript">
            var arrivalDate = document.getElementById("arrivalDate");
            arrivalDate.addEventListener("input", function () {
                var value = new Date(arrivalDate.value);
                if (value < new Date()) {
                    arrivalDate.setCustomValidity("Arrival date must be after now!");
                } else {
                    arrivalDate.setCustomValidity("");
                }
            });
        </script>
    </body>
    </html>

The form above contains an input field named arrivalDate. Entering a value into the arrivalDate field triggers the input event. The JavaScript code adds an event listener for the input event and checks whether the date entered is greater than the current date. If validation fails, then the validation error message “Arrival date must be after now!” is assigned to the arrivalDate input field by calling the setCustomValidity() method of the validation constraint API. Otherwise, the validation error message is cleared by calling setCustomValidity() with an empty string.

HTML5 Validation and Older Browsers
But what about older browsers? For example, what about Apple Safari and versions of Microsoft Internet Explorer older than Internet Explorer 10? What the world really needs is a jQuery plugin which provides backwards compatibility for the HTML5 validation attributes. If a browser supports the HTML5 validation attributes, then the plugin would do nothing; otherwise, the plugin would add support for the attributes. Unfortunately, as far as I know, this plugin does not exist. I have not been able to find any plugin which supports both the required and pattern attributes on older browsers but does not get in the way of these attributes on newer browsers. There are several jQuery plugins which provide partial support for the HTML5 validation attributes, including:
- jQuery Validation: http://docs.jquery.com/Plugins/Validation
- html5Form: http://www.matiasmancini.com.ar/jquery-plugin-ajax-form-validation-html5.html
- h5Validate: http://ericleads.com/h5validate/

The jQuery Validation plugin, the most popular JavaScript validation library, supports the HTML5 required attribute, but it does not support the HTML5 pattern attribute. Likewise, the html5Form plugin does not support the pattern attribute. The h5Validate plugin provides the best support for the HTML5 validation attributes.
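As an aside, the “plugin would do nothing on newer browsers” half of that wish is straightforward to express: native support for a given validation attribute can be feature-detected by checking for the corresponding property on a freshly created input element. A minimal sketch:

    // True if the browser natively supports the given validation
    // attribute (for example 'required' or 'pattern').
    function supportsValidationAttribute(name) {
        return name in document.createElement("input");
    }

    // A back-compat plugin could bail out early on modern browsers:
    if (supportsValidationAttribute("required") && supportsValidationAttribute("pattern")) {
        // Native HTML5 validation is available; let the browser do the work.
    }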
The following page illustrates how the h5Validate plugin supports both the required and pattern attributes:

    <!DOCTYPE html>
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
        <title>h5Validate</title>
        <style type="text/css">
            .validationError { border: solid 2px red; }
            .validationValid { border: solid 2px green; }
        </style>
    </head>
    <body>
        <form id="customerForm">
            <label>
                First Name:
                <input id="firstName" required />
            </label>
            <label>
                Social Security Number:
                <input id="ssn" required pattern="^\d{3}-\d{2}-\d{4}$" title="Expected pattern is ###-##-####" />
            </label>
            <input type="submit" />
        </form>
        <script type="text/javascript" src="Scripts/jquery-1.4.4.min.js"></script>
        <script type="text/javascript" src="Scripts/jquery.h5validate.js"></script>
        <script type="text/javascript">
            // Enable h5Validate plugin
            $("#customerForm").h5Validate({
                errorClass: "validationError",
                validClass: "validationValid"
            });

            // Prevent form submission when there are errors
            $("#customerForm").submit(function (evt) {
                if ($("#customerForm").h5Validate("allValid") === false) {
                    evt.preventDefault();
                }
            });
        </script>
    </body>
    </html>

When an input field fails validation, the validationError CSS class is applied to the field and the field appears with a red border. When an input field passes validation, the validationValid CSS class is applied and the field appears with a green border. From the perspective of HTML5 validation, the h5Validate plugin is the best of the plugins. It adds support for the required and pattern attributes to browsers which do not natively support these attributes, such as IE9. However, this plugin does not include everything on my wish list for a perfect HTML5 validation plugin. Here's my wish list for the perfect back-compat HTML5 validation plugin:
1. The plugin would disable itself when used with a browser which natively supports the HTML5 validation attributes. The plugin should not be too greedy; it should not handle validation when the browser can do the work itself.
2. The plugin would simulate the same user interface for displaying validation error messages as the user interface displayed by browsers which natively support HTML5 validation. Chrome, Firefox, and Internet Explorer all display validation errors in a popup, so the perfect plugin would also display a popup.
3. Finally, the plugin would add support for the setCustomValidity() method and the other methods of the HTML5 validation constraint API. That way, you could implement custom validation in a standards-compatible way and know that it worked across all browsers, both old and new.

Security
It would be irresponsible of me to end this blog post without mentioning the issue of security. It is important to remember that any client-side validation, including HTML5 validation, can be bypassed. You should use client-side validation with the intention of creating a better user experience. Client validation is great for providing a user with immediate feedback while the user is completing a form. However, client-side validation cannot prevent an evil hacker from submitting unexpected form data to your web server. You should always enforce your validation rules on the server. The only way to ensure that a required field has a value is to verify on the server that the required field has a value. The HTML5 required attribute does not guarantee anything.

Summary
The goal of this blog post was to describe the support for validation contained in the HTML5 standard.
You learned how to use both the required and the pattern attributes in an HTML5 form. We also discussed how you can implement custom validation by taking advantage of the setCustomValidity() method. Finally, I discussed the available jQuery plugins for adding support for the HTML5 validation attributes to older browsers. Unfortunately, I am unaware of any jQuery plugin which provides a perfect solution to the problem of backwards compatibility.

    Read the article
