Search Results

Search found 5422 results on 217 pages for 'coding convention'.


  • Problems with MGCP proxy creation

    - by Popof
    Hi, I'm trying to bypass my ISP router with my FreeBSD server (I have an optical connection, so an RJ45 cable is used to connect the box to the WAN). Internet and TV are working fine (using igmpproxy to forward the TV stream), but I have a problem with the phone. The ISP's box is connected to the server, which gives it a LAN address. The problem is that when the box builds MGCP packets (and especially SDP ones) it uses its LAN address. So I have thought of writing a UDP proxy to handle MGCP and SDP packets, replacing the LAN address with the server's WAN address and then forwarding the packets to the WAN. Before starting to code, I captured stream packets using my server as a bridge between the WAN connection and the ISP's box. And, in order to see if my solution is viable, I tried to send those packets to the box using nemesis. I tried to send a packet (found in the capture) containing an endpoint audit: AUEP 1447 aaln/[email protected] MGCP 1.0 F: A In the Wireshark capture the box replied: 200 1447 OK A: a:PCMU;PCMA;G726-16;G726-24;G726-32;G726-40;G.723.1-5.3;G.723.1-6.3;G729;TELEPHONE-EVENT, fmtp:"TELEPHONE-EVENT 0-15,144,149,159", p:10-30, b:4-40, e:on, t:00, s:on, v:L;M;G;D, m:sendonly;recvonly;sendrecv;inactive;confrnce;replcate;netwtest;netwloop, dq-gi But when I use nemesis, I get an ICMP error: Port unreachable (Type 3, Code 3). To build this packet, the WAN source address from the capture is replaced with my server's LAN address, using the mgcp-callagent port (2727), and the packet is sent to the LAN address of the box at the mgcp-gateway port (2427). The command I use is nemesis udp -S 192.168.2.1 -D 192.168.2.2 -x 2727 -y 2427 -P packet_to_send. I also tried a UDP scan of the box on the callagent and gateway ports: PORT STATE SERVICE 2727/udp open|filtered unknown 2427/udp closed unknown I find those results a little strange because port 2427 should be the one open, as it was in the capture: Internet Protocol, Src: <ISP MGCP Server>, Dst: <My WAN Address> User Datagram Protocol, Src Port: mgcp-callagent (2727), Dst Port: mgcp-gateway (2427) Does anyone have any idea how to get my box to respond to my requests? Thanks in advance, and sorry for my English.
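    For what it's worth, a minimal sketch of the address-rewriting relay idea described above could look like the following. It is written in C# purely to illustrate the technique (the addresses, ports, and class name are placeholders taken from or invented around the question, not a tested implementation), and it only handles the box-to-call-agent direction; replies would need the reverse substitution.

    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    class MgcpRewriteProxy
    {
        // Placeholder addresses based on the question; adjust to the real topology.
        const string LanAddress = "192.168.2.2";    // address the box writes into MGCP/SDP
        const string WanAddress = "203.0.113.10";   // server's WAN address (hypothetical)
        static readonly IPEndPoint AgentEndpoint =
            new IPEndPoint(IPAddress.Parse("203.0.113.1"), 2727); // ISP call agent (hypothetical)

        static void Main()
        {
            // Listen where the box expects its call agent to be: the server's LAN address, port 2727.
            var listener = new UdpClient(new IPEndPoint(IPAddress.Parse("192.168.2.1"), 2727));
            var upstream = new UdpClient();

            while (true)
            {
                var from = new IPEndPoint(IPAddress.Any, 0);
                byte[] data = listener.Receive(ref from);

                // MGCP and SDP are plain text, so the naive fix is a string substitution
                // of the LAN address with the WAN address before forwarding upstream.
                string text = Encoding.ASCII.GetString(data).Replace(LanAddress, WanAddress);
                byte[] rewritten = Encoding.ASCII.GetBytes(text);

                upstream.Send(rewritten, rewritten.Length, AgentEndpoint);
            }
        }
    }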

    Read the article

  • Windows 7 & Virtual PC and Internet (gateway) problems on host PC

    - by Mufasa
    I upgraded to Windows 7 on a PC that is a few years old. The CPU was one revision away from having Hyper-V on it. So, I had to install Microsoft Virtual PC 2007 (v6.0.156.0) to run full XP instances instead of the seamless XP virtualization that is advertised so much. That's fine though; the 'older' version is useful since I use it to run different versions of the whole XP/IE stack for testing. (I'm a web developer.) ...And for the one 16-bit application we still use at the office for scheduling. * sigh * The virtual instances work fine, including networking. My issue is that after a reboot or coming out of sleep mode, my host Windows 7 won't connect to the Internet. It will connect to the local network fine. If I disable the "Virtual Machine Network Services" item (I'll call it "VMNS" from here on) in the LAN Connection properties box, it starts working. But then the Virtual PC instances lose their network connectivity. If I re-enable VMNS again in the same instance, everything works (Internet on the host and in the virtualized instances). But after the next reboot/sleep cycle this starts over. The route table gave me a clue though. When doing a cycle with VMNS enabled: IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 On-link 10.0.3.51 20 0.0.0.0 0.0.0.0 10.0.10.10 10.0.3.51 276 ... After VMNS is disabled, the first route goes away. I assume that route is there so VMNS can intercept the virtualized instances' network connections and forward them correctly? Just a guess though. More info: I checked my Firewall settings and Services (because I'm sort of a control nazi and turn off a lot) but couldn't find anything that made sense and, if turned on, changed anything. So it might be something there I'm missing, but I don't know what. My current hacked solution: So, I figured I'd mess with the routes myself to see if that helped, and it did. If I run a route delete 0.0.0.0 on the universal (0.0.0.0) gateway routes, and add back in just the 2nd line with route add 0.0.0.0 mask 0.0.0.0 10.0.10.10 (the one that points to my actual gateway, 10.0.10.10), then I don't have to mess with the disable/enable cycle of VMNS, and everything works. Running those two commands is faster than bringing up connection options and disabling and re-enabling VMNS, but I still don't want to have to use that hack script every boot either. (Oh, and I also tried messing with hard-coding TCP/IP settings in my network adapter, including setting high metrics, etc., but that didn't help either.) Any suggestions on the right way to fix this?

    Read the article

  • bluetooth connection using pybluez

    - by srj0408
    I am working on Bluetooth, not exactly on Bluetooth stack development, but to use Bluetooth in one of my projects. I had done all that before using some of the BlueZ commands like hciconfig and hcitool scan, then simple-agent, and using the serial module inside Python. But that was quite random. We were able to connect only one specific device based on its Bluetooth address, and there was no facility for reconnection once the devices were disconnected. Now I want to try this stuff out in a sequential manner like this (I am doing all of this on an RPi, and at present on Ubuntu 12.04): i) Store some names in a file along with some other information with respect to each device. ii) Run a script to find the devices in the locality with those names and, if any one is found, report that. For this step, I took a reference from BTBook, made available from MIT. Below is the script for the same, but that script only searches for a single name. from bluetooth import * target_name = "XT1033" target_address = None nearby_devices = discover_devices() for address in nearby_devices: if target_name == lookup_name( address ): target_address = address break if target_address is not None: print "found target bluetooth device with address ", target_address connect_socket(target_address); else: print "could not find target bluetooth device nearby" iii) Connect the device using a client socket. But I don't have any device on which I can write a simple Python script. My client can be any device that will be publishing data. Now I came across a script in the same book that actually connects to a client requesting permission to connect to the server. from bluetooth import * port = 1 server_sock=BluetoothSocket( RFCOMM ) server_sock.bind(("",port)) server_sock.listen(1) client_sock, client_info = server_sock.accept() print "Accepted connection from ", client_info data = client_sock.recv(1024) print "received [%s]" % data client_sock.close() server_sock.close() Here client_sock, client_info = server_sock.accept() provides the client address and the port requested for the connection. Can I pass the address obtained from the earlier script to this, so that it connects the server to the client? iv) Then, if the client gets disconnected, re-connect (simple polling can be used). All this stuff can be done using bash and PyBluez functions, but I want to do it in a sequential manner. I am not a master in Python but I can do some small stuff. Can anyone guide me on this, or direct me to a more useful resource through which I can continue my coding after finding the "X", "Y" named devices?

    Read the article

  • Complete Guide to Networking Windows 7 with XP and Vista

    - by Mysticgeek
    Since there are three versions of Windows out in the field these days, chances are you need to share data between them. Today we show how to get each version to share files and printers with the others. In a perfect world, getting your computers with different Microsoft operating systems to network would be as easy as clicking a button. With the Windows 7 Homegroup feature, it’s almost that easy. However, getting all three of them to communicate with each other can be a bit of a challenge. Today we’ve put together a guide that will help you share files and printers in whatever scenario of the three versions you might encounter on your home network. Sharing Between Windows 7 and XP The most common scenario you’re probably going to run into is sharing between Windows 7 and XP. Essentially you’ll want to make sure both machines are part of the same workgroup, set up the correct sharing settings, and make sure network discovery is enabled on Windows 7. The biggest problem you may run into is finding the correct printer drivers for both versions of Windows. Share Files and Printers Between Windows 7 & XP Map a Network Drive Another method of sharing data between XP and Windows 7 is mapping a network drive. If you don’t need to share a printer and only want to share a drive, then you can just map an XP drive to Windows 7. Although it might sound complicated, the process is not bad. The trickiest part is making sure you add the appropriate local user. This will allow you to share the contents of an XP drive to your Windows 7 computer. Map a Network Drive from XP to Windows 7 Sharing between Vista and Windows 7 Another scenario you might run into is having to share files and printers between a Vista and Windows 7 machine. The process is a bit easier than sharing between XP and Windows 7, but takes a bit of work. The Homegroup feature isn’t compatible with Vista, so we need to go through a few different steps. Depending on what your printer is, sharing it should be easier as Vista and Windows 7 do a much better job of automatically locating the drivers. How to Share Files and Printers Between Windows 7 and Vista Sharing between Vista and XP When Windows Vista came out, hardware requirements were intensive, drivers weren’t ready, and sharing between them was complicated due to the new Vista structure. The sharing process is pretty straight-forward if you’re not using password protection…as you just need to drop what you want to share into the Vista Public folder. On the other hand, sharing with password protection becomes a bit more difficult. Basically you need to add a user and set up sharing on the XP machine. But once again, we have a complete tutorial for that situation. Share Files and Folders Between Vista and XP Machines Sharing Between Windows 7 with Homegroup If you have one or more Windows 7 machines, sharing files and devices becomes extremely easy with the Homegroup feature. It’s as simple as creating a Homegroup on one machine then joining the other to it. It allows you to stream media, control what data is shared, and can also be password protected. If you don’t want to make your Windows 7 machines part of the same Homegroup, you can still share files through the Public Folder, and set up a printer to be shared as well.
Use the Homegroup Feature in Windows 7 to Share Printers and Files Create a Homegroup & Join a New Computer To It Change which Files are Shared in a Homegroup Windows Home Server If you want an ultimate setup that creates a centralized location to share files between all systems on your home network, regardless of the operating system, then set up a Windows Home Server. It allows you to centralize your important documents and digital media files on one box and provides easy access to data and the ability to stream media to other machines on your network. Not only that, but it provides easy backup of all your machines to the server, in case disaster strikes. How to Install and Setup Windows Home Server How to Manage Shared Folders on Windows Home Server Conclusion The biggest annoyance is dealing with printers that have a different set of drivers for each OS. There is no real easy way to solve this problem. Our best advice is to try to connect it to one machine, and if the drivers won’t work, hook it up to the other computer and see if that works. Each printer manufacturer is different, and Windows doesn’t always automatically install the correct drivers for the device. We hope this guide helps you share your data between whichever Microsoft OS scenario you might run into! Here are some other articles that will help you accomplish your home networking needs: Share a Printer on a Home Network from Vista or XP to Windows 7 How to Share a Folder the XP Way in Windows Vista

    Read the article

  • Making Sense of ASP.NET Paths

    - by Rick Strahl
    ASP.NET includes quite a plethora of properties to retrieve path information about the current request, control and application. There’s a ton of information available about paths on the Request object, some of it appearing to overlap and some of it buried several levels down, and it can be confusing to find just the right path that you are looking for. To keep things straight I thought it a good idea to summarize the path options along with descriptions and example paths. I wrote a post about this a long time ago in 2004 and I find myself frequently going back to that page to quickly figure out which path I’m looking for in processing the current URL. Apparently a lot of people must be doing the same, because the original post is the second most visited even to this date on this blog to the tune of nearly 500 hits per day. So, I decided to update and expand a bit on the original post with a little more information and clarification based on the original comments. Request Object Paths Available Here's a list of the path-related properties on the Request object (and the Page object). Assume a path like http://www.west-wind.com/webstore/admin/paths.aspx for the paths below, where webstore is the name of the virtual directory. Request Property Description and Value ApplicationPath Returns the web root-relative logical path to the virtual root of this app. /webstore/ PhysicalApplicationPath Returns the local file system path of the virtual root for this app. c:\inetpub\wwwroot\webstore PhysicalPath Returns the local file system path to the current script or path. c:\inetpub\wwwroot\webstore\admin\paths.aspx Path FilePath CurrentExecutionFilePath All of these return the full root relative logical path to the script page including path and scriptname. CurrentExecutionFilePath will return the ‘current’ request path after a Transfer/Execute call while FilePath will always return the original request’s path. /webstore/admin/paths.aspx AppRelativeCurrentExecutionFilePath Returns an ASP.NET root relative virtual path to the script or path for the current request. If in a Transfer/Execute call the transferred path is returned. ~/admin/paths.aspx PathInfo Returns any extra path following the script name (the /ExtraPathInfo portion of the example below); string.Empty if no PathInfo is available. /webstore/admin/paths.aspx/ExtraPathInfo RawUrl Returns the full root relative URL including querystring and extra path as a string. /webstore/admin/paths.aspx?sku=wwhelp40 Url Returns a fully qualified URL including querystring and extra path. Note this is a Uri instance rather than string. http://www.west-wind.com/webstore/admin/paths.aspx?sku=wwhelp40 UrlReferrer The fully qualified URL of the page that sent the request. This is also a Uri instance, and this value is null if the page was accessed directly (for example by typing the URL into the address bar) or if the client did not send a Referrer HTTP header. http://www.west-wind.com/webstore/default.aspx?Info Control.TemplateSourceDirectory Returns the logical path to the folder of the page, master or user control on which it is called. This is useful if you need to know the path only to a Page or control from within the control. For non-file controls this returns the Page path.
/webstore/admin/ As you can see there’s a ton of information available there for each of the three common path formats: Physical Path is an OS-style path that points to a path or file on disk. Logical Path is a Web path that is relative to the Web server’s root. It includes the virtual plus the application relative path. ~/ (Root-relative) Path is an ASP.NET specific path that includes ~/ to indicate the virtual root Web path. ASP.NET can convert virtual paths into either logical paths using Control.ResolveUrl(), or physical paths using Server.MapPath(). Root relative paths are useful for specifying portable URLs that don’t rely on relative directory structures and are very useful from within control or component code. You should be able to get any necessary format from ASP.NET from just about any path or script using these mechanisms. ~/ Root Relative Paths and ResolveUrl() and ResolveClientUrl() ASP.NET supports root-relative virtual path syntax in most of its URL properties in Web Forms. So you can easily specify a root relative path in a control rather than a location relative path: <asp:Image runat="server" ID="imgHelp" ImageUrl="~/images/help.gif" /> ASP.NET internally resolves this URL by using ResolveUrl("~/images/help.gif") to arrive at the root-relative URL of /webstore/images/help.gif, which uses the Request.ApplicationPath as the basepath to replace the ~. By convention any custom Web controls also should use ResolveUrl() on URL properties to provide the same functionality. In your own code you can use Page.ResolveUrl() or Control.ResolveUrl() to accomplish the same thing: string imgPath = this.ResolveUrl("~/images/help.gif"); imgHelp.ImageUrl = imgPath; Unfortunately ResolveUrl() is limited to WebForm pages, so if you’re in an HttpHandler or Module it’s not available. ASP.NET MVC also has its own more generic version of ResolveUrl in Url.Content: <script src="<%= Url.Content("~/scripts/new.js") %>" type="text/javascript"></script> which is part of the UrlHelper class. In ASP.NET MVC the above sort of syntax is actually even more crucial than in WebForms due to the fact that views are not referencing specific pages but rather are often path based, which can lead to various variations on how a particular view is referenced. In Module or Handler code Control.ResolveUrl() unfortunately is not available, which in retrospect seems like an odd design choice – URL resolution really should happen on a Request basis, not as part of the Page framework. Luckily you can also rely on the static VirtualPathUtility class: string path = VirtualPathUtility.ToAbsolute("~/admin/paths.aspx"); VirtualPathUtility also has many other quite useful methods for dealing with paths and converting between the various kinds of paths supported. One thing to watch out for is that ToAbsolute() will throw an exception if a query string is provided and doesn’t work on fully qualified URLs. I wrote about this topic with a custom solution that works with fully qualified URLs and query strings here (check the comments for some interesting discussions too). Similar to ResolveUrl() is ResolveClientUrl() which creates a fully qualified HTTP path that includes the protocol and domain name. It’s rare that this full resolution is needed but can be useful in some scenarios. Mapping Virtual Paths to Physical Paths with Server.MapPath() If you need to map root relative or current folder relative URLs to physical paths you can use HttpContext.Current.Server.MapPath().
Inside of a Page you can do the following: string physicalPath = Server.MapPath("~/scripts/ww.jquery.js"); MapPath is pretty flexible and it understands both ASP.NET style virtual paths as well as plain relative paths, so the following also works: string physicalPath = Server.MapPath("scripts/silverlight.js"); as well as dot relative syntax: string physicalPath = Server.MapPath("../scripts/jquery.js"); Once you have the physical path you can perform standard System.IO Path and File operations on the file. Remember with physical paths and IO or copy operations you need to make sure you have permissions to access files and folders based on the Web server user account that is active (NETWORK SERVICE, ASPNET typically). Note that Server.MapPath will not map up beyond the virtual root of the application for security reasons. Server and Host Information Between these settings you can get all the information you may need to figure out where you are and to build new URLs if necessary. If you need to build a URL completely from scratch you can get access to information about the server you are accessing: Server Variable Function and Example SERVER_NAME The name of the domain or the IP address. www.west-wind.com or 127.0.0.1 SERVER_PORT The port that the request runs under. 80 SERVER_PORT_SECURE Determines whether https: was used. 0 or 1 APPL_MD_PATH ADSI DirectoryServices path to the virtual root directory. Note that LM typically doesn’t work for ADSI access so you should replace that with LOCALHOST or the machine’s NetBios name. /LM/W3SVC/1/ROOT/webstore Request.Url and Uri Parsing If you still need more control over the current request URL or you need to create new URLs from an existing one, the current Request.Url Uri property offers a lot of control. Using the Uri class and UriBuilder makes it easy to retrieve parts of a URL and create new URLs based on an existing URL. The UriBuilder class is the preferred way to create URLs – much preferable over creating URIs via string concatenation. Uri Property Function Scheme The URL scheme or protocol prefix. http or https Port The port if specifically specified. DnsSafeHost The domain name or local host NetBios machine name. www.west-wind.com or rasnote LocalPath The full path of the URL including script name and extra PathInfo. /webstore/admin/paths.aspx Query The query string if any. ?id=1 The Uri class itself is great for retrieving Uri parts, but most of the properties are read-only. If you need to modify a URL in order to change it, you can use the UriBuilder class to load up an existing URL and modify it to create a new one. Here are a few common operations I’ve needed to do to get specific URLs: Convert the Request URL to an SSL/HTTPS link For example, taking the current request URL and converting it to a secure URL can be done like this: UriBuilder build = new UriBuilder(Request.Url); build.Scheme = "https"; build.Port = -1; // don't inject port Uri newUri = build.Uri; string newUrl = build.ToString(); Retrieve the fully qualified URL without a QueryString AFAIK, there’s no native routine to retrieve the current request URL without the query string. It’s easy to do with UriBuilder however: UriBuilder builder = new UriBuilder(Request.Url); builder.Query = ""; string logicalPathWithoutQuery = builder.ToString(); What else? I took a look through the old post’s comments and addressed as many of the questions and comments that came up in there. With a few small and silly exceptions this update post handles most of these.
But I’m sure there are more things that could go in here. What else would be useful to put onto this post so it serves as a nice all-in-one place to go for path references? If you think of something, leave a comment and I’ll try to update the post with it in the future. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET
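Since the article points out that VirtualPathUtility.ToAbsolute() throws when the virtual path carries a query string, here is a minimal sketch of one way to work around that limitation by splitting the query string off before resolving. The PathHelper class and method names are hypothetical (they are not part of ASP.NET), and this is only an illustration, not the custom solution the author links to.

    using System.Web;

    public static class PathHelper
    {
        // Hypothetical helper: resolve "~/..." paths that may carry a query string,
        // which VirtualPathUtility.ToAbsolute() by itself does not accept.
        public static string ToAbsolute(string virtualPath)
        {
            int queryIndex = virtualPath.IndexOf('?');
            if (queryIndex < 0)
                return VirtualPathUtility.ToAbsolute(virtualPath);

            string path = virtualPath.Substring(0, queryIndex);
            string query = virtualPath.Substring(queryIndex); // keeps the leading '?'
            return VirtualPathUtility.ToAbsolute(path) + query;
        }
    }

    // Usage from a handler or module where Control.ResolveUrl() isn't available:
    //   string url = PathHelper.ToAbsolute("~/admin/paths.aspx?sku=wwhelp40");
    //   -> "/webstore/admin/paths.aspx?sku=wwhelp40"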

    Read the article

  • Creating packages in code – Execute SQL Task

    The Execute SQL Task is for obvious reasons very well used, so I thought if you are building packages in code the chances are you will be using it. The basic features of the task are quite straightforward to use: add the task and set some properties, just like any other. When you start interacting with variables, though, it can be a little harder to grasp, so these samples should see you through. Some of these more advanced features are explained in much more detail in our ever popular post The Execute SQL Task; here I’ll just be showing you how to implement them in code. The abbreviated code blocks below demonstrate the different features of the task. The complete code has been encapsulated into a sample class which you can download (ExecSqlPackage.cs). Each feature described has its own method in the sample class, which is mentioned after the code block. This first sample just shows adding the task, setting the basic properties for a connection and of course an SQL statement. Package package = new Package(); // Add the SQL OLE-DB connection ConnectionManager sqlConnection = AddSqlConnection(package, "localhost", "master"); // Add the SQL Task package.Executables.Add("STOCK:SQLTask"); // Get the task host wrapper TaskHost taskHost = package.Executables[0] as TaskHost; // Set required properties taskHost.Properties["Connection"].SetValue(taskHost, sqlConnection.ID); taskHost.Properties["SqlStatementSource"].SetValue(taskHost, "SELECT * FROM sysobjects"); For the full version of this code, see the CreatePackage method in the sample class. The AddSqlConnection method is a helper method that adds an OLE-DB connection to the package; it is of course in the sample class file too. Returning a single value with a Result Set The following sample takes a different approach, getting a reference to the ExecuteSQLTask object itself, rather than just using the non-specific TaskHost as above. Whilst it means we need to add an extra reference to our project (Microsoft.SqlServer.SQLTask) it makes coding much easier, as we have compile-time validation of any properties and types we use. For the more complex properties that is very valuable and saves a lot of time during development. The query has also been changed to return a single value, one row and one column. The sample shows how we can return that value into a variable, which we also add to our package in the code. To do this manually you would set the Result Set property on the General page to Single Row and map the variable on the Result Set page in the editor.
Package package = new Package(); // Add the SQL OLE-DB connection ConnectionManager sqlConnection = AddSqlConnection(package, "localhost", "master"); // Add the SQL Task package.Executables.Add("STOCK:SQLTask"); // Get the task host wrapper TaskHost taskHost = package.Executables[0] as TaskHost; // Add variable to hold result value package.Variables.Add("Variable", false, "User", 0); // Get the task object ExecuteSQLTask task = taskHost.InnerObject as ExecuteSQLTask; // Set core properties task.Connection = sqlConnection.Name; task.SqlStatementSource = "SELECT id FROM sysobjects WHERE name = 'sysrowsets'"; // Set single row result set task.ResultSetType = ResultSetType.ResultSetType_SingleRow; // Add result set binding, map the id column to variable task.ResultSetBindings.Add(); IDTSResultBinding resultBinding = task.ResultSetBindings.GetBinding(0); resultBinding.ResultName = "id"; resultBinding.DtsVariableName = "User::Variable"; For the full version of this code, see the CreatePackageResultVariable method in the sample class. The other types of Result Set behaviour are just a variation on this theme; set the property and map the result binding as required. Parameter Mapping for SQL Statements This final example uses a parameterised SQL statement, with the parameter value coming from a variable. The syntax varies slightly between connection types, as explained in the Working with Parameters and Return Codes in the Execute SQL Task help topic, but OLE-DB is the most commonly used, for which a question mark is the parameter value placeholder. Package package = new Package(); // Add the SQL OLE-DB connection ConnectionManager sqlConnection = AddSqlConnection(package, ".", "master"); // Add the SQL Task package.Executables.Add("STOCK:SQLTask"); // Get the task host wrapper TaskHost taskHost = package.Executables[0] as TaskHost; // Get the task object ExecuteSQLTask task = taskHost.InnerObject as ExecuteSQLTask; // Set core properties task.Connection = sqlConnection.Name; task.SqlStatementSource = "SELECT id FROM sysobjects WHERE name = ?"; // Add variable to hold parameter value package.Variables.Add("Variable", false, "User", "sysrowsets"); // Add input parameter binding task.ParameterBindings.Add(); IDTSParameterBinding parameterBinding = task.ParameterBindings.GetBinding(0); parameterBinding.DtsVariableName = "User::Variable"; parameterBinding.ParameterDirection = ParameterDirections.Input; parameterBinding.DataType = (int)OleDBDataTypes.VARCHAR; parameterBinding.ParameterName = "0"; parameterBinding.ParameterSize = 255; For the full version of this code, see the CreatePackageParameterVariable method in the sample class. You’ll notice the data type has to be specified for the parameter via the IDTSParameterBinding.DataType property, and these type codes are connection specific too. The enumeration I wrote several years ago, shown below, was probably done by reverse engineering a package and the API header file, but I recently found a very handy post that covers more connections as well for exactly this, Setting the DataType of IDTSParameterBinding objects (Execute SQL Task). /// <summary> /// Enumeration of OLE-DB types, used when mapping OLE-DB parameters.
/// </summary> private enum OleDBDataTypes { BYTE = 0x11, CURRENCY = 6, DATE = 7, DB_VARNUMERIC = 0x8b, DBDATE = 0x85, DBTIME = 0x86, DBTIMESTAMP = 0x87, DECIMAL = 14, DOUBLE = 5, FILETIME = 0x40, FLOAT = 4, GUID = 0x48, LARGE_INTEGER = 20, LONG = 3, NULL = 1, NUMERIC = 0x83, NVARCHAR = 130, SHORT = 2, SIGNEDCHAR = 0x10, ULARGE_INTEGER = 0x15, ULONG = 0x13, USHORT = 0x12, VARCHAR = 0x81, VARIANT_BOOL = 11 } Download Sample code ExecSqlPackage.cs (10KB)
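Once a package has been built in code you will typically want to save it out as a .dtsx file or execute it. The short sketch below shows one way to do both; it assumes a reference to the SSIS managed runtime (the Microsoft.SqlServer.Dts.Runtime namespace) and calls the CreatePackage method from the downloadable sample class as if it were static, which may not match the exact signature in ExecSqlPackage.cs.

    using System;
    using Microsoft.SqlServer.Dts.Runtime;

    class PackageRunner
    {
        static void Main()
        {
            // Build the package with one of the sample methods, e.g. CreatePackage()
            // from ExecSqlPackage.cs (assumed callable like this for illustration).
            Package package = ExecSqlPackage.CreatePackage();

            // Persist the package to disk as a .dtsx file...
            Application application = new Application();
            application.SaveToXml(@"C:\Temp\ExecSqlSample.dtsx", package, null);

            // ...or execute it in-process and inspect the result.
            DTSExecResult result = package.Execute();
            Console.WriteLine("Execution result: {0}", result);
        }
    }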

    Read the article

  • Entity Association Mapping with Code First Part 1 : Mapping Complex Types

    - by mortezam
    Last week the CTP5 build of the new Entity Framework Code First was released by the data team at Microsoft. Entity Framework Code First provides a pretty powerful code-centric way to work with databases. When it comes to associations, it brings ultimate flexibility. I’m a big fan of the EF Code First approach and am planning to explain association mapping with Code First in a series of blog posts, and this one is dedicated to Complex Types. If you are new to the Code First approach, you can find a great walkthrough here. In order to build a solid foundation for our discussion, we will start by learning about some of the core concepts around relationship mapping.   What is Mapping? Mapping is the act of determining how objects and their relationships are persisted in permanent data storage, in our case, relational databases. What is Relationship Mapping? A mapping that describes how to persist a relationship (association, aggregation, or composition) between two or more objects. Types of Relationships There are two categories of object relationships that we need to be concerned with when mapping associations. The first category is based on multiplicity and it includes three types: One-to-one relationships: This is a relationship where the maximum of each of its multiplicities is one. One-to-many relationships: Also known as a many-to-one relationship, this occurs when the maximum of one multiplicity is one and the other is greater than one. Many-to-many relationships: This is a relationship where the maximum of both multiplicities is greater than one. The second category is based on directionality and it contains two types: Uni-directional relationships: when an object knows about the object(s) it is related to but the other object(s) do not know of the original object. To put this in EF terminology, when a navigation property exists only on one of the association ends and not on both. Bi-directional relationships: when the objects on both ends of the relationship know of each other (i.e. a navigation property defined on both ends). How Object Relationships Are Implemented in POCO Domain Models? When the multiplicity is one (e.g. 0..1 or 1) the relationship is implemented by defining a navigation property that references the other object (e.g. an Address property on the User class). When the multiplicity is many (e.g. 0..*, 1..*) the relationship is implemented via an ICollection of the type of the other object. How Relational Database Relationships Are Implemented? Relationships in relational databases are maintained through the use of Foreign Keys. A foreign key is a data attribute(s) that appears in one table and must be the primary key or other candidate key in another table. With a one-to-one relationship the foreign key needs to be implemented by one of the tables. To implement a one-to-many relationship we implement a foreign key from the “one table” to the “many table”. We could also choose to implement a one-to-many relationship via an associative table (aka Join table), effectively making it a many-to-many relationship. Introducing the Model Now, let's review the model that we are going to use in order to implement a Complex Type with Code First. It's a simple object model which consists of two classes: User and Address. Each user could have one billing address. The Address information of a User is modeled as a separate class as you can see in the UML model below: In object-modeling terms, this association is a kind of aggregation—a part-of relationship.
Aggregation is a strong form of association; it has some additional semantics with regard to the lifecycle of objects. In this case, we have an even stronger form, composition, where the lifecycle of the part is fully dependent upon the lifecycle of the whole. Fine-grained Domain Models The motivation behind this design was to achieve fine-grained domain models. In crude terms, fine-grained means “more classes than tables”. For example, a user may have both a billing address and a home address. In the database, you may have a single User table with the columns BillingStreet, BillingCity, and BillingPostalCode along with HomeStreet, HomeCity, and HomePostalCode. There are good reasons to use this somewhat denormalized relational model (performance, for one). In our object model, we can use the same approach, representing the two addresses as six string-valued properties of the User class. But it’s much better to model this using an Address class, where User has the BillingAddress and HomeAddress properties. This object model achieves improved cohesion and greater code reuse and is more understandable. Complex Types: Splitting a Table Across Multiple Types Back to our model: there is no difference between this composition and other weaker styles of association when it comes to the actual C# implementation. But in the context of ORM, there is a big difference: a composed class is often a candidate Complex Type. But C# has no concept of composition—a class or property can’t be marked as a composition. The only difference is the object identifier: a complex type has no individual identity (i.e. no AddressId defined on the Address class), which makes sense because when it comes to the database everything is going to be saved into one single table. How to Implement a Complex Type with Code First Code First has a concept of Complex Type Discovery that works based on a set of Conventions. The convention is that if Code First discovers a class where a primary key cannot be inferred, and no primary key is registered through Data Annotations or the fluent API, then the type will be automatically registered as a complex type. Complex type detection also requires that the type does not have properties that reference entity types (i.e. all the properties must be scalar types) and is not referenced from a collection property on another type. Here is the implementation: public class User { public int UserId { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public string Username { get; set; } public Address Address { get; set; } } public class Address { public string Street { get; set; } public string City { get; set; } public string PostalCode { get; set; } } public class EntityMappingContext : DbContext { public DbSet<User> Users { get; set; } } With Code First, this is all of the code we need to write to create a complex type; we do not need to configure any additional database schema mapping information through Data Annotations or the fluent API. Database Schema The mapping result for this object model is a single table that contains both the User columns and the Address columns. Limitations of this Mapping There are two important limitations to classes mapped as Complex Types: Shared references are not possible: The Address Complex Type doesn’t have its own database identity (primary key) and so can’t be referred to by any object other than the containing instance of User (e.g. a Shipping class that also needs to reference the same User Address).
No elegant way to represent a null reference: There is no elegant way to represent a null reference to an Address. When reading from the database, EF Code First always initializes the Address object, even if the values in all mapped columns of the complex type are null. This means that if you store a complex type object with all null property values, EF Code First returns an initialized complex type when the owning entity object is retrieved from the database. Summary In this post we learned about fine-grained domain models, of which the complex type is just one example. Fine-grained modeling is fully supported by EF Code First and is known as the most important requirement for a rich domain model. A complex type is usually the simplest way to represent a one-to-one relationship, and because the lifecycle is almost always dependent in such a case, it’s either an aggregation or a composition in UML. In the next posts we will revisit the same domain model and will learn about other ways to map a one-to-one association that do not have the limitations of complex types. References ADO.NET team blog Mapping Objects to Relational Databases Java Persistence with Hibernate
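As a side note, if you would rather not rely on the automatic complex type discovery convention described above, the same mapping can be declared explicitly. The minimal sketch below uses the ComplexType<T>() registration on the model builder as it appears in later EF Code First builds; in CTP5 the builder class was still called ModelBuilder, so the exact type name may differ from what is shown here. It extends the EntityMappingContext class from the article.

    using System.Data.Entity;

    public class EntityMappingContext : DbContext
    {
        public DbSet<User> Users { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Explicitly register Address as a complex type instead of relying on
            // automatic discovery (no inferable key, scalar properties only, etc.).
            modelBuilder.ComplexType<Address>();
        }
    }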

    Read the article

  • Adobe Photoshop Vs Lightroom Vs Aperture

    - by Aditi
    Adobe Photoshop is the standard choice for photographers, graphic artists and Web designers. Adobe Photoshop Lightroom and Apple’s Aperture are also in the same league, but the usage is vastly different. Although Photoshop is the most popular and widely used by photographers, in many ways it’s less relevant to photographers than ever before, as Lightroom and Aperture are aimed squarely at photographers for photo processing. With this write-up we are going to help you choose what is right for you and why. Adobe Photoshop Adobe Photoshop is the most liked tool for detailed photo editing and design work. Photoshop provides great features for rollovers and image slicing. Adobe Photoshop includes comprehensive optimization features for producing the highest quality Web graphics with the smallest possible file sizes. You can also create startling animations with it. Designers and editors know how important precise masking is; Photoshop lets you do that with various detailing tools. The art history brush, contact sheets, and history palette are some of the smart features which add to its viability. Download Whether you’re producing printed pages or moving images, you can work more efficiently and produce better results because of its smooth integration across other Adobe applications. By supporting layer effects, it allows you to quickly add drop shadows, inner and outer glows, bevels, and embossing to layers. It also provides a seamless Web graphics workflow. Photoshop is hands-down the BEST for editing. Photoshop Cons: • Slower, less precise editing features in Bridge • Processing lots of images requires actions and can be slower than exporting images from Lightroom • Much slower with editing and processing a large number of images Aperture Apple Aperture is aimed at the professional photographer who shoots predominantly raw files. It helps them to manage their workflow and perform their initial Raw conversion in a better way. Aperture provides adjustment tools such as the Histogram to modify color and white balance, but most of the editing of photos is left for Photoshop. It gives users the option of seeing their photographs laid out like slides or negatives on a light table. It boasts stars, color-coding and easy techniques for filtering and picking images. Aperture has moved a few steps ahead of Photoshop here, but most of the editing work is still left to Photoshop, as it features seamless Photoshop integration. Aperture Pros: Aperture is a step up from the iPhoto software that comes with every Mac, and fairly easy to learn. Adjustments are made in a logical order from top to bottom of the menu. You can store the images in a library or any folder you choose. Aperture also works really well with direct Canon files. It is just $79 if you buy it through Apple’s App Store. Moving forward, it will run on the iPad, and possibly the iPhone – Adobe products like Lightroom and Photoshop may never offer these options. It has a much nicer and simpler user interface. Lightroom Lightroom does a smashing job of basic fixing and editing. It is a more advanced tool for photographers. They can use it to achieve striking photographic effects. Lightroom has many advanced features, which make it one of the best tools for photographers and put it far ahead of the other two. They are: Nondestructive editing. Nothing is actually changed in an image until the photo is exported. Better controls over organizing your photos. Lightroom helps to gather a group of photos to use in a slideshow.
Lightroom has larger Compare and Survey views of images. Quickly customizable interface. Simple keystrokes allow you to perform different tasks. All Lightroom controls are kept available in panels right next to the photos. Always-available History palette; it doesn’t go away when you close Lightroom. You gain more colors to work with compared to Photoshop, and with more precise control. Local control, or adjusting small parts of a photo without affecting anything else, has long been an important part of photography. In Lightroom 2, you can darken, lighten, and affect color and change sharpness and other aspects of specific areas in the photo simply by brushing your cursor across the areas. Photoshop has far more power in its Cloning and Healing Brush tools than Lightroom, but Lightroom offers simple cloning and healing that’s nondestructive. Lightroom supports the RAW formats of more cameras than Aperture. Lightroom provides the option of storing images outside the application in the file system. It costs less than Photoshop. Download Why Photoshop is more advanced than Lightroom There are countless image processing plug-ins on the market for doing specialized processing in Photoshop. For example, if your image needs sophisticated noise reduction, you can use the Noiseware plug-in with Photoshop to do a much better job of noise removal than Lightroom can do. Lightroom’s advantages over Aperture 3 It will always have better integration with Photoshop. Lightroom is backed by a bigger and more active user community (so tutorials and the like are abundantly available). Better noise reduction tool. Especially for photographers, the lens-distortion correction tool is perfect. Lightroom Cons: • Have to import images to work on them • Slows down with over 10,000 images in the catalog • For processing just one or two images this is a slower workflow Photoshop Pros: • ACR has the same RAW processing controls as Lightroom • ACR Histogram is specialized to the chosen color space (Lightroom is locked into the ProPhoto RGB color space with an sRGB tone curve) • Don’t have to import images to open in Bridge or ACR • Ability to customize processing of RAW images with Photoshop Actions Pricing and Availability Get Lightroom Get Photoshop The latest version of Photoshop can be purchased from the Adobe store or an Adobe authorized reseller and costs US$999. The latest version of Aperture can be bought for US$199 from the Apple Online Store or the Mac App Store. You can buy the latest version of Lightroom from the Adobe Store or an Adobe authorized reseller for US$299.

    Read the article

  • WPF ListView as a DataGrid – Part 3

    - by psheriff
    I have had a lot of great feedback on the blog post about turning the ListView into a DataGrid by creating GridViewColumn objects on the fly. So, in the last 2 parts, I showed a couple of different methods for accomplishing this. Let’s now look at one more, and that is using Reflection to extract the properties from a Product, Customer, or Employee object to create the columns. Yes, Reflection is a slower approach, but you could create the columns one time then cache the View object for re-use. Another potential drawback is you may have columns in your object that you do not wish to display on your ListView. But, just because so many people asked, here is how to accomplish this using Reflection.   Figure 1: Use Reflection to create GridViewColumns. Using Reflection to gather property names is actually quite simple. First you need to pass any type (Product, Customer, Employee, etc.) to a method like I did in my last two blog posts on this subject. Below is the method that I created in the WPFListViewCommon class that now uses reflection. C#public static GridView CreateGridViewColumns(Type anyType){  // Create the GridView  GridView gv = new GridView();  GridViewColumn gvc;   // Get the public properties.  PropertyInfo[] propInfo =          anyType.GetProperties(BindingFlags.Public |                                BindingFlags.Instance);   foreach (PropertyInfo item in propInfo)  {    gvc = new GridViewColumn();    gvc.DisplayMemberBinding = new Binding(item.Name);    gvc.Header = item.Name;    gvc.Width = Double.NaN;    gv.Columns.Add(gvc);  }   return gv;} VB.NETPublic Shared Function CreateGridViewColumns( _  ByVal anyType As Type) As GridView  ' Create the GridView   Dim gv As New GridView()  Dim gvc As GridViewColumn   ' Get the public properties.   Dim propInfo As PropertyInfo() = _    anyType.GetProperties(BindingFlags.Public Or _                          BindingFlags.Instance)   For Each item As PropertyInfo In propInfo    gvc = New GridViewColumn()    gvc.DisplayMemberBinding = New Binding(item.Name)    gvc.Header = item.Name    gvc.Width = [Double].NaN    gv.Columns.Add(gvc)  Next   Return gvEnd Function The key to using Reflection is the GetProperties method on the type you pass in. When you pass in the Product type, you can then use the GetProperties method and specify, via flags, which properties you wish to return. In the code that I wrote, I am just retrieving the Public properties and only those that are Instance properties. I do not want any static/Shared properties or private properties. GetProperties returns an array of PropertyInfo objects. You can loop through this array and build your GridViewColumn objects by reading the Name property from the PropertyInfo object. Build the Product Screen To populate the ListView shown in Figure 1, you might write code like the following: C#private void CollectionSample(){  Product prod = new Product();   // Setup the GridView Columns  lstData.View =      WPFListViewCommon.CreateGridViewColumns(typeof(Product));  lstData.DataContext = prod.GetProducts();} VB.NETPrivate Sub CollectionSample()  Dim prod As New Product()   ' Setup the GridView Columns  lstData.View = WPFListViewCommon.CreateGridViewColumns( _       GetType(Product))  lstData.DataContext = prod.GetProducts()End Sub All you need to do now is to pass in a Type object for your Product class, which you can get by using the typeof operator in C# or the GetType() function in VB. That’s all there is to it!
Summary There are so many different ways to approach the same problem in programming. That is what makes programming so much fun! In this blog post I showed you how to create ListView columns on the fly using Reflection. This gives you a lot of flexibility without having to write extra code as was done previously. NOTE: You can download the complete sample code (in both VB and C#) at my website. http://www.pdsa.com/downloads. Choose Tips & Tricks, then "WPF ListView as a DataGrid – Part 3" from the drop-down. Good Luck with your Coding, Paul Sheriff ** SPECIAL OFFER FOR MY BLOG READERS ** Visit http://www.pdsa.com/Event/Blog for a free eBook on "Fundamentals of N-Tier".
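The post mentions that the Reflection cost can be offset by creating the columns one time and caching the View object for re-use; a minimal sketch of that idea is shown below. The GridViewCache class and method names are mine, invented purely for illustration, and the sketch reuses the CreateGridViewColumns helper from the article.

    using System;
    using System.Collections.Generic;
    using System.Windows.Controls;

    public static class GridViewCache
    {
        // One GridView per element type; built on first use, reused afterwards.
        private static readonly Dictionary<Type, GridView> _cache =
            new Dictionary<Type, GridView>();

        public static GridView GetGridView(Type anyType)
        {
            GridView view;
            if (!_cache.TryGetValue(anyType, out view))
            {
                // Reflection only runs the first time a given type is seen.
                view = WPFListViewCommon.CreateGridViewColumns(anyType);
                _cache[anyType] = view;
            }
            return view;
        }
    }

    // Usage:
    //   lstData.View = GridViewCache.GetGridView(typeof(Product));
    //   lstData.DataContext = new Product().GetProducts();

Keep in mind that a single GridView instance can only be attached to one ListView at a time, so a per-type cache like this is most useful when the same ListView is re-bound to different collections.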

    Read the article

  • XNA Notes 006

    - by George Clingerman
    If you used to think the XNA community was small and inactive, hopefully these XNA Notes are opening your eyes. And I honestly feel like I’m still only catching the tail end of everything that’s going on. It’s a large and active community and you can be so mired down in one part of it you miss all sorts of cool stuff another part is doing. XNA is many things to a lot of people and that makes for a lot of really awesome things going on. So here’s what I saw going on this last week! Time Critical XNA News: XNA Team - Peer Review now closes for XNA 3.1 games http://blogs.msdn.com/b/xna/archive/2011/02/08/peer-review-pipeline-closed-for-new-xna-gs-3-1-games-or-updates-on-app-hub.aspx http://twitter.com/XNACommunity/statuses/34649816529256448 The XNA Team posts about a meet-up with Microsoft for Creators at GDC, March 3rd at the Lobby Bar http://on.fb.me/fZungJ XNA Team: @mklucher is busy playing Bubblegum on WP7, made by a member of the XNA team (although reportedly made in Silverlight? Crazy! ;) ) http://twitter.com/mklucher/statuses/34645662737895426 http://bubblegum.me Shawn Hargreaves posts multiple posts (is this a sign that something new is coming from the XNA team? Usually when Shawn has time to post, something has just wrapped up…) Random Shuffle http://blogs.msdn.com/b/shawnhar/archive/2011/02/09/random-shuffle.aspx Doing the right thing: resume, rewind or skip ahead http://blogs.msdn.com/b/shawnhar/archive/2011/02/10/doing-the-right-thing-resume-rewind-or-skip-ahead.aspx XNA Developers: Andrew Russel was on .NET Rocks recently talking with Carl and Richard about developing games for Xbox, iPhone and Android http://www.dotnetrocks.com/default.aspx?ShowNum=635 Eric W. releases the Fishing Girl source code into the wild http://ericw.ca/blog/posts/fishing-girl-now-open-source/ http://forums.create.msdn.com/forums/p/74642/454512.aspx#454512 BinaryTweedDeej reminds the XNA community that Indie City wants you involved http://twitter.com/BinaryTweedDeej/statuses/34596114028044288 http://www.indiecity.com Mike McLaughlin (@mikebmcl) releases his first two XNA articles on the TechNet wiki http://social.technet.microsoft.com/wiki/contents/articles/xna-framework-overview.aspx http://social.technet.microsoft.com/wiki/contents/articles/content-pipeline-overview.aspx John Watte plays around with the Content Pipeline and Music Visualization exploring just what can be done. 
http://www.enchantedage.com/xna-content-pipeline-fft-song-analysis http://www.enchantedage.com/fft-in-xna-content-pipeline-for-beat-detection-for-the-win Simon Stevens writes up his talk on Vector Collision Physics http://www.simonpstevens.com/News/VectorCollisionPhysics @domipheus puts together an XNA Task Manager http://www.flickr.com/photos/domipheus/5405603197/ MadNinjaSkillz releases his fork of Nick's Easy Storage component on CodePlex http://twitter.com/MadNinjaSkillz/statuses/34739039068229634 http://ezstorage.codeplex.com @ActiveNick was interviewed by Rob Cameron and discusses Windows Phone 7, Bing Maps and XNA http://twitter.com/ActiveNick/statuses/35348548526546944 http://msdn.microsoft.com/en-us/cc537546 Radiangames (Luke Schneider) posts about converting his games from XNA to Unity http://radiangames.com/?p=592 UberMonkey (@ElementCy) posts about a new project in the works, CubeTest a Minecraft style terrain http://www.ubergamermonkey.com/personal-projects/new-project-in-the-works/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Ubergamermonkey+%28UberGamerMonkey%29 Xbox LIVE Indie Games (XBLIG): VideoGamer Rob review Bonded Realities http://videogamerrob.wordpress.com/2011/02/05/xblig-review-bonded-realities/ XBLIG Round Up on Gamergeddon http://www.gamergeddon.com/2011/02/06/xbox-indie-game-round-up-february-6th/ Are gamers still rating Indie Games after the Xbox Dashboard update? http://www.gamemarx.com/news/2011/02/06/are-gamers-still-rating-indie-games-after-the-xbox-dashboard-update.aspx Joystiq - Xbox Live Indie Gems: Corrupted http://www.joystiq.com/2011/02/04/xbox-live-indie-gems-corrupted/ Raymond Matthews of DarkStarMatryx reviews (Almost) Total Mayhem and Aban Hawkins & the 1000 Spikes http://www.darkstarmatryx.com/?p=225 http://www.darkstarmatryx.com/?p=229 8 Bit Horse reviews Aban Hawkins & the 1000 spikes http://8bithorse.blogspot.com/2011/01/aban-hawkins-1000-spikes-xbl-indie.html 2010 wrap-up for FunInfused Games http://www.krissteele.net/blogdetails.aspx?id=245 NeoGaf roundup of January's XBLIGs http://www.neogaf.com/forum/showthread.php?t=420528 Armless Ocotopus interviews Michael Ventnor creator of Bonded Realities http://www.armlessoctopus.com/2011/02/07/interview-michael-ventnor-of-red-crest-studios/ @recharge_media posts about the new city music for Woodvale in Sin Rising http://rechargemedia.com/2011/02/08/new-city-theme-woodvale/ @DrMisty posts some footage of YoYoYo in action http://www.mstargames.co.uk/mistryblogmain/54-yoyoyoblogs/184-video-update.html Xona Games - Decimation X3 on Reviews on the Run http://video.citytv.com/video/detail/782443063001.000000/reviews-on-the-run--february-8-2011/g4/ @benkane gives an early peek at his action RPG coming to XBLIG http://www.youtube.com/watch?v=bDF_PrvtwU8 Rock, Paper Shotgun talks to Zeboyd games about bringing Cthulhu Saves the World to PC http://www.rockpapershotgun.com/2011/02/11/summoning-cthulhu-natter-with-zeboyd/ Xbox LIVE Indieverse interviews the creator of Bonded Realities http://xbl-indieverse.blogspot.com/2011/02/xbl-indieverse-interview-red-crest.html XNA Game Development: Dream-In-Code posts about an upcoming XNA Challenge/Coding contest http://www.dreamincode.net/forums/blog/1385/entry-3192-xna-challengecontest/ Sgt.Conker covers Fishing Girl and IndieFreaks Game Framework release http://www.sgtconker.com/2011/02/fishing-girl-did-not-sell-a-single-copy/ http://www.sgtconker.com/2011/02/indiefreaks-game-framework-v0-2-0-0/ @slyprid releases Transmute v0.40a with lots of new 
features and fixes http://twitter.com/slyprid/statuses/34125423067533312 http://twitter.com/slyprid/statuses/35326876243337216 http://forgottenstarstudios.com/ Jeff Brown writes an XNA 4.0 tutorial on Saving/Loading on the Xbox 360 http://www.robotfootgames.com/xna-tutorials/92-xna-tutorial-savingloading-on-xbox-360-40 XNA for Silverlight Developers: Part 3- Animation http://www.silverlightshow.net/items/XNA-for-Silverlight-developers-Part-3-Animation-transforms.aspx?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+xna-connection-twitter-specific-stream+%28XNA+Connection%27s+Twitter+specific+stream%29 The news from Nokia is definitely something XNA developers will want to keep their eye on http://blogs.forum.nokia.com/blog/nokia-developer-news/2011/02/11/letter-to-developers?sf1066337=1

    Read the article

  • E-Business Suite Technology Sessions at OAUG Collaborate 12

    - by Max Arderius
    Members of our E-Business Suite Applications Technology Group will be at the OAUG Collaborate 12 conference at the Mandalay Bay Convention Center in Las Vegas, Nevada on April 22 to 26, 2012.  Please drop by any of our sessions to hear the latest news and meet up with us. Speaker Sessions Session 9675Planning Your Oracle E-Business Suite Upgrade from Release 11i to 12.1 and BeyondAnne Carlson, Senior Director, Applications Technology Group, OracleSunday, April 22, 2:00 pm - 3:00 pmLocation: Jasmine B Attend this session to hear the latest Oracle E-Business Suite Release 12.1 upgrade planning tips gleaned from customers who have already performed the upgrade. Youll get specific, cross-product advice on how to decide your project's scope, understand the factors that affect your project's duration, develop a robust testing strategy, leverage Oracle Support resources, and more. In a nutshell, this session tells you things you need to know before embarking upon your Release 12.1 upgrade project. Session 9401Minimizing Oracle E-Business Suite Maintenance DowntimesElke Phelps, Principal Product Manager, Applications Technology Group, OracleKevin Hudson, Sr. Director, Applications Technology Group, OracleSunday, April 22, 2:10 pm - 3:10 pmLocation: South Seas EThis session starts with an architecture review of Oracle E-Business Suite fundamentals and then moves to a practical view of the different tools and approaches for downtimes. Topics include patching shortcuts, merging patches, distributing worker processes across multiple servers, running ADPatch in no-interactive mode, staged APPL_TOPs, shared file systems, deferring system-wide database tasks, avoiding resource bottlenecks etc... This session also describes the online patching capabilities coming in Release 12.2. Session 9368Oracle E-Business Suite Technology: Latest Features and RoadmapLisa Parekh, Vice President, Applications Technology Group, Oracle Sunday, April 22, 4:30 pm - 5:30 pmLocation: South Seas EThis session provides an overview of Oracle E-Business Suite technology strategy, the capabilities and associated business benefits of recent releases, as well as a review of the product roadmap. As a cornerstone session for Oracle E-Business Suite technology, come hear about the latest usability enhancements, systems administration and configuration management tools, security-related updates, and tools and options for extending, customizing, and integrating the Oracle E-Business Suite with other applications. Session 10709Oracle E-Business Suite Applications Strategy and General Manager UpdateCliff Godwin, Sr. VP, Application Development, OracleMonday, April 23, 2:30 pm - 3:30 pmLocation: Mandalay Bay DIn this session, hear from Oracle E-Business Suite General Manager Cliff Godwin as he delivers an update on the Oracle E-Business Suite product line. The session covers the value delivered by the current release of Oracle E-Business Suite applications, the momentum, and how Oracle E-Business Suite applications integrate into Oracle’s overall applications strategy. You will come away with an understanding of the value Oracle E-Business Suite applications deliver now and in the future. 
Session 9398How to Reduce TCO Using Oracle Application Management Suite for Oracle E-Business SuiteAngelo Rosado, Principal Product Manager, Applications Technology Group, OracleKenneth Baxter, Principal Product Strategy Manager, Management Pack Fusion Middleware Management, OracleTuesday, April 24, 8:00 am - 9:00 amLocation: Breakers GThis session covers the methods and tools you can use to gain insights into your end users, troubleshoot performance problems, define service-level objectives, and proactively monitor your end-to-end Oracle E-Business Suite environment to meet your availability and performance targets. Come hear how you can manage, diagnose, and monitor the Oracle E-Business Suite environment from a single console by using Oracle Enterprise Manager together with the Oracle Application Management Suite for Oracle E-Business Suite. Session 9370 Coexistence of Oracle E-Business Suite and Oracle Fusion Applications: Platform Perspective Nadia Bendjedou, Senior Director, Product Strategy, Oracle Tuesday, April 24, 2:00 pm - 3:00 pm Location: South Seas E Join us at this session if you are wondering which tools to integrate your data, your processes and your User Interface. Or what tools to customize and extend your screens and reports (OAF, Forms, ADF, Oracle Reports, BI etc....), what tools to secure, protect and manage your Oracle E-Business Suite etc... Or simply if you are looking for a technical roadmap for your Oracle E-Business Suite infrastructure to CO-EXIST with the rest of your enterprise applications including Oracle Fusion Applications. Session 9375 Oracle E-Business Suite Directions: Deployment and System AdministrationMax Arderius, Manager, Applications Development Group, OracleTuesday, April 24, 4:30 pm - 5:30 pmLocation: Breakers GWhat's coming in the next major version of Oracle E-Business Suite 12? This session covers the latest technology stack, including the use of Oracle WebLogic Server and Oracle Database 11g Release 2. Topics include an architectural overview, installation and upgrade options, new configuration options, and new tools for hot-cloning and automated "lights out" cloning. Learn about how online patching will reduce your database patching downtimes to the time it takes to bounce your database server.Session 9369Oracle E-Business Suite Technology Certification Primer and RoadmapSteven Chan, Sr. Director, Applications Technology Group, Oracle Wednesday, April 25, 8:15 am - 9:15 amLocation: South Seas FThis Oracle Development session summarizes the latest certifications and roadmap for the Oracle E-Business Suite technology stack, including database releases/options, Java, Oracle Forms, Oracle Containers for J2EE, desktop OS, browsers, JRE releases, Office/OpenOffice, development and Web authoring tools, user authentication and management, BI, security options, clouds, Oracle VM etc.... It also covers the most-commonly-asked questions about technology stack component support dates and upgrade implications. Session 9407The Latest Oracle E-Business Suite Release User Interface and Usability EnhancementsGustavo Jimenez, Sr. Manager, Applications Technology Group, Oracle Wednesday, April 25, 1:00 pm - 2:00 pmLocation: South Seas GIn this session, developers will get a detailed look at new features designed to enhance usability, offer more capabilities for personalization and extensions, and support the development and use of dashboards and Web services. 
Topics include rich new UI capabilities such as new home page features, Navigator and Favorites pull-down menus, Oracle ADF task flows etc.... In addition, we will cover the personalization/extensibility enhancements, business layer extensions, Oracle ADF integration and much more. Session 9374Best Practices for Oracle E-Business Suite Performance Tuning and Upgrade OptimizationIsam Alyousfi, Senior Director, Applications Performance, OracleUdayan Parvate, Director, Release Engineering, Quality and Release Management, Oracle Thursday, April 26, 8:30 am - 9:30 amLocation: South Seas FThis presentation will offer tips and techniques on tuning all the layers of the Oracle E-Business Suite stack including the various tiers of the Oracle E-Business Suite environment. You will learn about tuning Oracle Forms, Concurrent Manager, Apache, and Oracle Discoverer. Track down memory leaks and other issues on the Java and Java Virtual Machine layers. The session also covers Oracle E-Business Suite product-level tuning, including Oracle Workflow, Oracle Order Management, Oracle Payroll, and other modules.Session 9412 Oracle E-Business Suite 12.1 Desktop Integration: Beyond Oracle Applications Desktop IntegratorGustavo Jimenez, Sr. Manager, Applications Technology Group, OracleThursday, April 26, 8:30 am - 9:30 amLocation: Breakers GThis session describes the new expanded functionality in Oracle Web Applications Desktop Integrator, Oracle Report Manager, and dedicated integrators. You have more options for desktop integration now, not fewer. Topics include an overview of prepackaged solutions for integrating Oracle E-Business Suite with desktop applications such as Microsoft Excel, Word, and Projects. The session also discusses how you can use the Desktop Integration Framework feature to create your own integrators quickly and easily.Session 9533 Upgrading your Customizations to Oracle E-Business Suite Release 12.1Sara Woodhull, Principal Product Manager, Applications Technology Group, Oracle Thursday, April 26, 11:00 am - 12:00 pmLocation: South Seas FHave you personalized Forms or OA Framework screens? Have you used mod_plsql or Applications Express to tailor your Release 11i functionality? Have you extended or customized your Release 11i environment using other tools? This session will help you understand customization scenarios, use cases, tools, and technologies for ensuring that your Oracle E-Business Suite Release 12.1 environment fits your users' needs closely and that any future customizations will be easy to upgrade. Special Interest Groups (SIG) Session 10535OAUG Database SIG- Part IMichael Brown, Colibri Limited Company Sunday, April 22, 3:20 pm - 4:20 pmLocation: South Seas FThis is the annual meeting of the Database SIG at Collaborate. The call for candidates for the chair will be closed at the meeting. Plans include a speaker from Oracle and a presentation on applications performance. The details of the meeting will be posted on http://www.dbsig.com. Guest Presentation: Oracle E-Business Suite Database PerformanceIsam Alyousfi, Senior Director, Applications Performance, Oracle Session 10720OAUG EBS Applications Technology SIG- Part ISrini Chaval, Cummins Monday, April 23, 2:30 pm - 3:30 pmLocation: South Seas F Guest Presentation:Oracle E-Business Suite Technology Certification RoadmapSteven Chan, Sr. 
Director, Applications Technology Group, Oracle Session 10510OAUG EBS Applications Technology SIG- Part IISrini Chaval, CumminsMonday, April 23, 3:45 pm - 4:45 pmLocation: South Seas F Guest Presentation:Oracle E-Business Suite 12.2 Online Patching Kevin Hudson, Sr. Director, Applications Technology Group, Oracle Session 10522 OAUG Upgrade SIG- Part IISandra Vucinic, VLAD Group, Inc. Wednesday, April 25, 3:00 pm - 4:00 pmLocation: South Seas FUpgrade SIG will host a business meeting followed by panel (Q&A) related to EBS Upgrade topics and Oracle presentation. Guest Presentation:Upgrading E-Business Suite Amrita Mehrok, Director, Financials Product Strategy, Oracle Nadia Bendjedou, Senior Director, Product Strategy, Oracle Session 10722OAUG Upgrade SIG- Part IISandra Vucinic, VLAD Group, Inc. Wednesday, April 25, 4:15 pm - 5:15 pmLocation: South Seas FUpgrade SIG will host a business meeting followed by panel (Q&A) related to EBS Upgrade topics and Oracle presentation. Guest Presentation:Tuning the Oracle E-Business Suite Upgrade Isam Alyousfi, Senior Director, Applications Performance, Oracle Panels Session 9360Oracle E-Business Suite Cloning PanelSandra Vucinic, VLAD Group, Inc. Guest Speaker: Max Arderius, Manager, Applications Technology Group, OracleWednesday, April 25, 9:30 am - 10:30 amLocation: South Seas FThis panel will discuss differences between available release 11i, R12 and R12.1 cloning methods. Advantages and disadvantages of each cloning method will be discussed in depth. This panel of experienced database administrators will lead a discussion focusing on the questions such as “which cloning method is best to use in your particular environment”. Attendees will gain practical knowledge, tips and tricks to assist with cloning of Oracle E-Business Suite release 11i, R12 and R12.1 environments. Session 10022Oracle Applications Tuning PanelMark Farnham, Rightsizing, Inc.Guest Speaker: Isam Alyousfi, Senior Director, Applications Performance, OracleThursday, April 26, 09:45 am - 10:45 amLocation: South Seas FThis applications performance panel session, sponsored by the OAUG Database SIG, provides a Q&A forum focused on helping you address your Oracle Applications (Oracle E-Business Suite and Oracle's PeopleSoft Enterprise and Siebel applications) performance- and scalability-related issues. The panel comprises several well-known Oracle Applications performance experts. Topic areas include Oracle Database; the network; and the applications tier, including patching and upgrade performance. For complete listing of all speaker sessions and other activities, please visit the OAUG Collaborate Web Site.

    Read the article

  • Adventures in MVVM &ndash; My ViewModel Base

    - by Brian Genisio's House Of Bilz
    More Adventures in MVVM First, I’d like to say: THIS IS NOT A NEW MVVM FRAMEWORK. I tend to believe that MVVM support code should be specific to the system you are building and the developers working on it.  I have yet to find an MVVM framework that does everything I want it to without doing too much.  Don’t get me wrong… there are some good frameworks out there.  I just like to pick and choose things that make sense for me.  I’d also like to add that some of these features only work in WPF.  As of Silveright 4, they don’t support binding to dynamic properties, so some of the capabilities are lost. That being said, I want to share my ViewModel base class with the world.  I have had several conversations with people about the problems I have solved using this ViewModel base.  A while back, I posted an article about some experiments with a “Rails Inspired ViewModel”.  What followed from those ideas was a ViewModel base class that I take with me and use in my projects.  It has a lot of features, all designed to reduce the friction in writing view models. I have put the code out on Codeplex under the project: ViewModelSupport. Finally, this article focuses on the ViewModel and only glosses over the View and the Model.  Without all three, you don’t have MVVM.  But this base class is for the ViewModel, so that is what I am focusing on. Features: Automatic Command Plumbing Property Change Notification Strongly Typed Property Getter/Setters Dynamic Properties Default Property values Derived Properties Automatic Method Execution Command CanExecute Change Notification Design-Time Detection What about Silverlight? Automatic Command Plumbing This feature takes the plumbing out of creating commands.  The common pattern for commands in a ViewModel is to have an Execute method as well as an optional CanExecute method.  To plumb that together, you create an ICommand Property, and set it in the constructor like so: Before public class AutomaticCommandViewModel { public AutomaticCommandViewModel() { MyCommand = new DelegateCommand(Execute_MyCommand, CanExecute_MyCommand); } public void Execute_MyCommand() { // Do something } public bool CanExecute_MyCommand() { // Are we in a state to do something? return true; } public DelegateCommand MyCommand { get; private set; } } With the base class, this plumbing is automatic and the property (MyCommand of type ICommand) is created for you.  The base class uses the convention that methods be prefixed with Execute_ and CanExecute_ in order to be plumbed into commands with the property name after the prefix.  You are left to be expressive with your behavior without the plumbing.  If you are wondering how CanExecuteChanged is raised, see the later section “Command CanExecute Change Notification”. After public class AutomaticCommandViewModel : ViewModelBase { public void Execute_MyCommand() { // Do something } public bool CanExecute_MyCommand() { // Are we in a state to do something? return true; } }   Property Change Notification One thing that always kills me when implementing ViewModels is how to make properties that notify when they change (via the INotifyPropertyChanged interface).  There have been many attempts to make this more automatic.  My base class includes one option.  There are others, but I feel like this works best for me. The common pattern (without my base class) is to create a private backing store for the variable and specify a getter that returns the private field.  
The setter will set the private field and fire an event that notifies the change, only if the value has changed. Before public class PropertyHelpersViewModel : INotifyPropertyChanged { private string text; public string Text { get { return text; } set { if(text != value) { text = value; RaisePropertyChanged("Text"); } } } protected void RaisePropertyChanged(string propertyName) { var handlers = PropertyChanged; if(handlers != null) handlers(this, new PropertyChangedEventArgs(propertyName)); } public event PropertyChangedEventHandler PropertyChanged; } This way of defining properties is error-prone and tedious.  Too much plumbing.  My base class eliminates much of that plumbing with the same functionality: After public class PropertyHelpersViewModel : ViewModelBase { public string Text { get { return Get<string>("Text"); } set { Set("Text", value);} } }   Strongly Typed Property Getters/Setters It turns out that we can do better than that.  We are using a strongly typed language where the use of “Magic Strings” is often frowned upon.  Lets make the names in the getters and setters strongly typed: A refinement public class PropertyHelpersViewModel : ViewModelBase { public string Text { get { return Get(() => Text); } set { Set(() => Text, value); } } }   Dynamic Properties In C# 4.0, we have the ability to program statically OR dynamically.  This base class lets us leverage the powerful dynamic capabilities in our ecosystem. (This is how the automatic commands are implemented, BTW)  By calling Set(“Foo”, 1), you have now created a dynamic property called Foo.  It can be bound against like any static property.  The opportunities are endless.  One great way to exploit this behavior is if you have a customizable view engine with templates that bind to properties defined by the user.  The base class just needs to create the dynamic properties at runtime from information in the model, and the custom template can bind even though the static properties do not exist. All dynamic properties still benefit from the notifiable capabilities that static properties do. For any nay-sayers out there that don’t like using the dynamic features of C#, just remember this: the act of binding the View to a ViewModel is dynamic already.  Why not exploit it?  Get over it :) Just declare the property dynamically public class DynamicPropertyViewModel : ViewModelBase { public DynamicPropertyViewModel() { Set("Foo", "Bar"); } } Then reference it normally <TextBlock Text="{Binding Foo}" />   Default Property Values The Get() method also allows for default properties to be set.  Don’t set them in the constructor.  Set them in the property and keep the related code together: public string Text { get { return Get(() => Text, "This is the default value"); } set { Set(() => Text, value);} }   Derived Properties This is something I blogged about a while back in more detail.  This feature came from the chaining of property notifications when one property affects the results of another, like this: Before public class DependantPropertiesViewModel : ViewModelBase { public double Score { get { return Get(() => Score); } set { Set(() => Score, value); RaisePropertyChanged("Percentage"); RaisePropertyChanged("Output"); } } public int Percentage { get { return (int)(100 * Score); } } public string Output { get { return "You scored " + Percentage + "%."; } } } The problem is: The setter for Score has to be responsible for notifying the world that Percentage and Output have also changed.  This, to me, is backwards.    
It certainly violates the “Single Responsibility Principle.” I have been bitten in the rear more than once by problems created from code like this.  What we really want to do is invert the dependency.  Let the Percentage property declare that it changes when the Score Property changes. After public class DependantPropertiesViewModel : ViewModelBase { public double Score { get { return Get(() => Score); } set { Set(() => Score, value); } } [DependsUpon("Score")] public int Percentage { get { return (int)(100 * Score); } } [DependsUpon("Percentage")] public string Output { get { return "You scored " + Percentage + "%."; } } }   Automatic Method Execution This one is extremely similar to the previous, but it deals with method execution as opposed to property.  When you want to execute a method triggered by property changes, let the method declare the dependency instead of the other way around. Before public class DependantMethodsViewModel : ViewModelBase { public double Score { get { return Get(() => Score); } set { Set(() => Score, value); WhenScoreChanges(); } } public void WhenScoreChanges() { // Handle this case } } After public class DependantMethodsViewModel : ViewModelBase { public double Score { get { return Get(() => Score); } set { Set(() => Score, value); } } [DependsUpon("Score")] public void WhenScoreChanges() { // Handle this case } }   Command CanExecute Change Notification Back to Commands.  One of the responsibilities of commands that implement ICommand – it must fire an event declaring that CanExecute() needs to be re-evaluated.  I wanted to wait until we got past a few concepts before explaining this behavior.  You can use the same mechanism here to fire off the change.  In the CanExecute_ method, declare the property that it depends upon.  When that property changes, the command will fire a CanExecuteChanged event, telling the View to re-evaluate the state of the command.  The View will make appropriate adjustments, like disabling the button. DependsUpon works on CanExecute methods as well public class CanExecuteViewModel : ViewModelBase { public void Execute_MakeLower() { Output = Input.ToLower(); } [DependsUpon("Input")] public bool CanExecute_MakeLower() { return !string.IsNullOrWhiteSpace(Input); } public string Input { get { return Get(() => Input); } set { Set(() => Input, value);} } public string Output { get { return Get(() => Output); } set { Set(() => Output, value); } } }   Design-Time Detection If you want to add design-time data to your ViewModel, the base class has a property that lets you ask if you are in the designer.  You can then set some default values that let your designer see what things might look like in runtime. Use the IsInDesignMode property public DependantPropertiesViewModel() { if(IsInDesignMode) { Score = .5; } }   What About Silverlight? Some of the features in this base class only work in WPF.  As of version 4, Silverlight does not support binding to dynamic properties.  This, in my opinion, is a HUGE limitation.  Not only does it keep you from using many of the features in this ViewModel, it also keeps you from binding to ViewModels designed in IronRuby.  Does this mean that the base class will not work in Silverlight?  No.  Many of the features outlined in this article WILL work.  All of the property abstractions are functional, as long as you refer to them statically in the View.  This, of course, means that the automatic command hook-up doesn’t work in Silverlight.  
You need to plumb it to a static property in order for the Silverlight View to bind to it.  Can I has a dynamic property in SL5?     Good to go? So, that concludes the feature explanation of my ViewModel base class.  Feel free to take it, fork it, whatever.  It is hosted on CodePlex.  When I find other useful additions, I will add them to the public repository.  I use this base class every day.  It is mature, and well tested.  If, however, you find any problems with it, please let me know!  Also, feel free to suggest patches to me via the CodePlex site.  :)
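For readers wondering roughly how the Get/Set helpers used throughout these samples might be backed, here is a stripped-down sketch. It is an assumption about the internals, not the actual ViewModelSupport code from CodePlex (which also handles dynamic properties, DependsUpon wiring and command plumbing); values simply live in a dictionary keyed by property name, and Set raises PropertyChanged only when the value actually changes.

C#
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq.Expressions;

public abstract class SimpleViewModelBase : INotifyPropertyChanged
{
    // Backing store for all properties, keyed by property name.
    private readonly Dictionary<string, object> _values =
        new Dictionary<string, object>();

    public event PropertyChangedEventHandler PropertyChanged;

    protected T Get<T>(string name, T defaultValue = default(T))
    {
        object value;
        return _values.TryGetValue(name, out value) ? (T)value : defaultValue;
    }

    protected T Get<T>(Expression<Func<T>> property, T defaultValue = default(T))
    {
        return Get(PropertyName(property), defaultValue);
    }

    protected void Set<T>(string name, T value)
    {
        object existing;
        if (_values.TryGetValue(name, out existing) && Equals(existing, value))
            return; // unchanged, so no notification

        _values[name] = value;

        var handlers = PropertyChanged;
        if (handlers != null)
            handlers(this, new PropertyChangedEventArgs(name));
    }

    protected void Set<T>(Expression<Func<T>> property, T value)
    {
        Set(PropertyName(property), value);
    }

    // Pulls the property name out of an expression like () => Text.
    private static string PropertyName<T>(Expression<Func<T>> property)
    {
        return ((MemberExpression)property.Body).Member.Name;
    }
}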

    Read the article

  • Building the Elusive Windows Phone Panorama Control

    When the Windows Phone 7 Developer SDK was released a couple of weeks ago at MIX10, many people noticed the SDK doesn't include a template for a Panorama control. Here at Clarity we decided to build our own Panorama control for use in some of our prototypes, and I figured I would share what we came up with. There have been a couple of implementations of the Panorama control making their way through the interwebs, but I didn't think any of them really nailed the experience that is shown in the simulation videos.

One of the key design principles in the UX Guide for Windows Phone 7 is the use of motion. The WP7 OS is fairly stripped of extraneous design elements and makes heavy use of typography and motion to give users the necessary visual cues. Subtle animations and wide layouts help give the user a sense of fluidity and consistency across the phone experience. When building the panorama control I was fairly meticulous in recreating the motion as shown in the videos. The effect that is shown in the application hubs of the phone is known as a Parallax Scrolling effect. This pseudo-3D technique has been around in the computer graphics world for quite some time. In essence, the background images move slower than foreground images, creating an illusion of depth in 2D. Here is an example of the traditional use: http://www.mauriciostudio.com/. One of the animation gems I've learned while building interactive software is the follow animation. The premise is straightforward: instead of translating content 1:1 with the interaction point, let the content catch up to the mouse or finger. The difference is subtle, but the impact on the smoothness of the interaction is huge. That said, it became the foundation of how I achieved the effect shown below.

Source Code Available HERE

Below I briefly describe the approach I took in creating this control, and I'll add some **asterisks** to the code as my coding skills aren't up to snuff with the rest of my colleagues. This code is meant to be an interpretation of the WP7 panorama control and is not intended to be used in a production application.

1. Layout the XAML. The UI consists of three main components: the background image, the Title, and the Content. You can imagine each of these UI elements existing on its own plane with a corresponding TranslateTransform to create the Parallax effect.

2. Storyboards + Procedural Animations = Sexy. As I mentioned above, creating a fluid experience was at the top of my priorities while building this control. To recreate the smooth scroll effect shown in the video we need to add some placeholder storyboards that we can manipulate in code to simulate the inertia and snapping. Using the easing functions built into Silverlight helps create a very pleasant interaction.

3. Handle the Manipulation Events. With Silverlight 3 we have some new touch event handlers. The new Manipulation events make handling the interactivity pretty straightforward. There are two event handlers that need to be hooked up to enable the dragging and motion effects. First, the ManipulationDelta event (the most relevant code is highlighted in pink): here we are doing some simple math with the manipulation deltas and setting the To values of the animations appropriately. Modifying the storyboards dynamically in code helps to create a natural feel, something that can't easily be done with storyboards alone.
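The original post shows this handler as a screenshot, so here is a heavily simplified sketch of the parallax idea only. It is not Erik's actual implementation: instead of driving the placeholder storyboards described above, it sets TranslateTransforms directly to keep it short, the element names BackgroundLayer, TitleLayer and ContentLayer are assumptions standing in for elements defined in the control's XAML, and the 0.5/0.25 factors are arbitrary values chosen only to show the title and background trailing the content.

C#
using System.Windows.Controls;
using System.Windows.Input;
using System.Windows.Media;

public partial class PanoramaSketch : UserControl
{
    private readonly TranslateTransform _backgroundTransform = new TranslateTransform();
    private readonly TranslateTransform _titleTransform = new TranslateTransform();
    private readonly TranslateTransform _contentTransform = new TranslateTransform();

    private double _offset;

    public PanoramaSketch()
    {
        InitializeComponent();

        // BackgroundLayer, TitleLayer and ContentLayer are assumed to be
        // elements defined in this control's XAML.
        BackgroundLayer.RenderTransform = _backgroundTransform;
        TitleLayer.RenderTransform = _titleTransform;
        ContentLayer.RenderTransform = _contentTransform;

        ManipulationDelta += OnManipulationDelta;
    }

    private void OnManipulationDelta(object sender, ManipulationDeltaEventArgs e)
    {
        // The content tracks the finger 1:1; the title and background lag
        // behind it, which is what produces the parallax illusion of depth.
        _offset += e.DeltaManipulation.Translation.X;

        _contentTransform.X    = _offset;
        _titleTransform.X      = _offset * 0.5;
        _backgroundTransform.X = _offset * 0.25;
    }
}

The real control goes further by feeding these deltas into storyboards with easing, and by using the final velocity from ManipulationCompleted (described next) to pick the duration of the snap animation.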
And secondly, the ManipulationCompleted event: here we take the final velocities from the ManipulationCompleted event and apply them to the storyboards to create the snapping and scrolling effects. Most of this code is determining what the next position of the viewport will be. The interesting part (shown in pink) is determining the duration of the animation based on the calculated velocity of the flick gesture. By using velocity as a variable in determining the duration of the animation we can produce a slow animation for a soft flick and a fast animation for a strong flick.

Challenges to the Reader

There are a couple of things I didn't have time to implement in this control, and I would love to see other WPF/Silverlight approaches.

1. A good mechanism for deciphering when the user is manipulating the content within the panorama control versus the panorama itself. In other words, being able to accurately determine what is a flick and what is a click.

2. Dynamically sizing the panorama control based on the width of its content. Right now each control panel is 400px; ideally the panel items would be measured and then the panorama control would update its size accordingly.

3. Background and content wrapping. The WP7 UX guidelines specify that the content and background should wrap at the end of the list. In my code I restrict the drag at the ends of the list (like the iPhone). It would be interesting to see how this would affect the scroll experience.

Well, it's been fun building this control, and if you use it I'd love to know what you think. You can download the source HERE or from the Expression Gallery.

Erik Klimczak | [email protected] | twitter.com/eklimcz

    Read the article

  • NINE Questions with Michelle Juett

    - by NINEQuestions
    Michelle Juett is one of the more interesting people I know, even though we’ve never met face to face. She’s part artist, part techie and all cool. We “met” via my good buddy George Clingerman and have plotting to take over the world, errr… I mean “collaborating” ever since. If you happen to live in the Seattle area, you can catch her and her work at Sakura Con on April 2-4, 2010 and various other gamer and art cons throughout the year. You can also find her on Twitter as @Shelldragon. Now that you know a little bit, I’ll let her tell you the rest of the story in these NINE Questions: 1. Where are you from? I was born in Clearwater, Florida. I like to tell people I'm from the Bermuda Triangle, it just makes explaining myself so much easier. My family moved to Washington when I was 5 and I've been in the Pacific Northwest ever since. We like to QQ about the rain but we really love the green trees and clean water. 2. What do you do? I fight evil by moonlight and win love by daylight.. or something like that.  I’ve been in quality assurance for games during the day since January 2008 and an artist for life. I currently work in QA for a really awesome game company in Bellevue.  At home, I work on personal digital art, making game assets as well as other random freelance projects as they pop up. 3. How did you get to where you are now? I'm still not where I want to be but I'm getting closer. The biggest piece of advice I can give is to work hard and never settle for the minimum required. I tend to overwork myself but I've never regretted it. You can want something really bad but if you aren't willing to work for it, then you can't expect it to just happen. I've always drawn and had an unhealthy love for video games that I was told I’d grow out of.  I knew I would not ‘grow out’ of games and that real adults make them and I could too. After I graduated, in searching for jobs, I discovered game testing. I figured this would be a good way to get my foot in the door and start networking. I’ve worked with consoles, websites and now, PC games.  I stuck with my journey, although it has been a rocky one, daylighting as a tester and moonlighting as an artist. I'm still on that journey but I wouldn't have it any other way. Test has given me a perspective that is difficult, if not impossible, to obtain any other way. It gives an unconditional respect for other hard working testers and an insight into creative problem solving. 4. So video game testing probably sounds WAY cooler than the reality. What's it like? What's a given day for you? Game testers don't get a lot of respect because of their stigmas and the fact most people don't actually know what we do.  People hear about the opening and closing disc trays all day. Many places do treat their testers like numbers. It all depends on where you work and how awesome your company is. I've had to deal with a lot of bad work situations to get to a really good one. QA exists to ensure the game is as flawless and enjoyable as it can be by the time it has to leave the nest and go out into the world. This includes everything obvious: “can I beat the level and save the princess?” to the more obscure: ‘What happens when I lose internet connection while trying to save right before falling into a pit to my death while holding the jump key then my cat pulls out my memory card and hides it in her litter box?” On the dev side, for developers, testers can be very scary people. Especially when the test team is not in house and you can’t see each other’s faces.  
I've seen both sides. We don't mean to hurt your feelings. We really DO love you and want your game to be the best it can be! It can be some serious tough love. 5. You are also an accomplished artist. Got any major projects right now you'd like to talk about? LOL, I don't know if I’d say I'm an accomplished artist just yet. I’m still a long way from where I want to be. I figure that’s what makes you grow though: the desire to never stop improving. I like QA but I want to be a full time artist. I was lucky enough to register for a table at Sakura Con in the 11 second window that the tables sold out. As such, I’ll be selling my wares in the Artist Alley April 2-4th. Part of preparing for this is actually making the art to be sold there. Anime is a fun pass time but I don’t draw a whole lot of it so I’m making up for lost time. As I seem to enjoy burying myself in work, I’m an art lead for a secret project that’s so secret I might be killed tonight for even mentioning it. I also take on various freelance projects and do what I can to help out indie games. I discovered the XNA community a year and a half ago and developed a love for Indies when I was writing a weekly newsletter on XBLA news. I’m a little late to the party but I find myself in a unique position where I am an artist and also have technical skills in games. While not programmer myself, I have a lot of game sense and experience. I hope to make some awesome happen. Lastly, I have an ongoing web comic Shell’s Angels) that tends to get neglected when I get busy. I still love drawing comics and keep a little book with me to sketch down ideas as they pop into my head. I may pick it back up again as a larger project sometime in the future. 6. Can you talk about any of the other freelance projects you're doing or are you sworn to secrecy on those too? We wouldn't want a team of game developer ninjas to take you out or anything. All my projects are currently 2d. I have personal projects such as the ongoing comic as well as a graphic novel I've been picking at here and there. My main focus until April is Sakura Con, Sakura Con, Sakura Con.  I see it as a great way to get exposure and convention experience. I found out I love conventions a couple years ago and I want to get more involved in them. 7. As an artist, what is your weapon of choice? What do you use to get most of your stuff done? I am a Photoshop Hero and I have the hoodie to prove it. (http://www.pennyarcademerch.com/pah090011.html) I've dabbled in other paint programs but I always gravitate back to Photoshop. She is my one true love. I'd like to learn programs like Flash or Anime Studio when I get a bit more time because of their animation abilities. I've worked on frame by frame animation forever but I would love to learn 2d rigging. Still, nothing can compare to a simple sketchpad and a pencil. I always have one on me in case I come across or think of something interesting and can't get to a computer. If the Courier ever comes to exist it will be an ideal weapon for me. 8. You did some videos too, depicting the art creation process. What was the motivation behind those? The creative process is just as important as the final product, if not more so.  I've always loved watching speed paint videos and wanted to try it out myself. Turns out it's a lot of work and time but it's definitely fun to go back and rewatch them. Art isn't always the end result and is more often the process itself. 9. Got any interesting tattoos? Designed any for yourself or other people? 
Not yet, but not for lack of desire. I've toiled over what and where for years. Last year, I finally decided the back of my shoulders would be the place. Like anything permanent, I want it to have meaning. I thought of somehow incorporating games but I couldn't find something I felt would stand the test of time even with all the classic sprite games. I'm very picky so we'll see if I can get something solid decided. Come see me at Sakura Con April 2 -4!!!

    Read the article

  • Using the “Settings.settings” functionalities in VB.NET can be tricky…

    - by Vincent Grondin
    Sometime you’re searching for something forever and when you find it, you realize it was right under your nose.  Maybe you were distracted by other things around… or maybe that thing right under your nose was so well hidden that it deserves a blog post…   That happened to me a few days ago while using the “Settings.settings” functionalities in my VB.NET application…  I thought it was a cool feature and I decided to use it…  So there I am adding new settings with “USER” scope and StringCollection as the data type, testing my application and everything works perfectly fine...  That was before I decided to modify the “Value” of one of my settings…  After changing the value of one of my settings, I start my application again and, to my surprise, my new values aren’t showing!  Hmmm… That’s odd…  My setting was a pretty long list of strings so I was rather angry at myself for not saving my work after I was done…  So I open up the Settings.setting in the designer and click the ellipsis symbol to enter my string collection again, but to my great pleasure (and disbelief) my strings are there!!!  Alright, you rock VB.NET!  You’ve just save me a bunch of typing time and I’m thinking it’s just a simple Visual Studio glitch…  I hit “Save” then “Save All” (just in case) and finally I rebuild everything and fire up my app once again.  Huh?  Where are my darn strings????????  Ok there’s a bug there…  I open up the app.config and my new strings are there!!!  Alright, let’s recap…  My new strings are in the app.config, they show correctly in the Settings.settings designer UI but they aren’t showing at runtime…  Hmmmm?  Let’s try something else…  Let’s start the application but outside Visual Studio this time… I fire up the exe and BAM!  My strings where there!  I “alt-tab” and hit “F5” and BOOM, no strings!  So it’s a bug in the Visual Studio environment… or could it be a FEATURE?  I must admit that I’m a little confused over what’s a bug and what’s a feature in Visual Studio… lol!   Finally I found out there’s a “cache” for your Visual Studio located here:  C:\Users\<your username>\AppData\Local\Microsoft\<your app name and a very weird temp ID>\<your app version>\user.config When using the “Settings.settings” with a setting of scope “user”, this file is out of sync with your app.config until you manually decide to update it… The button is right there… under your nose… at the top left corner of your screen in the settings designer…  See the big “Synchronize” button there?  Yep…  Now that’s user friendly isn’t it?  Oh, and wait until you see what it does when you click it…  It prompts you and basically says:  “Would you like your settings to start working inside Visual Studio now that you found out that I exist?” and of course the right answer is yes… or rather “OK”…  Unfortunately, you have to do this every time you edit a value… On the other hand, adding and removing settings seem to work flawlessly without having to click this magical button… go figure!  Oh and I almost forgot… this great “feature” is only available for VB.NET…  A project in C# using Settings.settings will work perfectly EVEN when editing values… Here’s a screenshot that shows this important button: Button Using other data types appears to work perfectly well…   Maybe it’s simply related to the StringCollection data type?  If you are a VB.NET programmer, you should pay attention to this when you plan on using the settings functionalities and your scope is “user” and your data type is StringCollection… Happy coding all!

    Read the article

  • Test-Drive ASP.NET MVC Review

    - by Ben Griswold
    A few years back I started dallying with test-driven development, but I never fully committed to the practice. This wasn’t because I didn’t believe in the value of TDD; it was more a matter of not completely understanding how to incorporate “test first” into my everyday development. Back in my web forms days, I could point fingers at the framework for my ignorance and laziness. After all, web forms weren’t exactly designed for testability so who could blame me for not embracing TDD in those conditions, right? But when I switched to ASP.NET MVC and quickly found myself fresh out of excuses and it became instantly clear that it was time to get my head around red-green-refactor once and for all or I would regretfully miss out on one of the biggest selling points the new framework had to offer. I have previously written about how I learned ASP.NET MVC. It was primarily hands on learning but I did read a couple of ASP.NET MVC books along the way. The books I read dedicated a chapter or two to TDD and they certainly addressed the benefits of TDD and how MVC was designed with testability in mind, but TDD was merely an afterthought compared to, well, teaching one how to code the model, view and controller. This approach made some sense, and I learned a bunch about MVC from those books, but when it came to TDD the books were just a teaser and an opportunity missed.  But then I got lucky – Jonathan McCracken contacted me and asked if I’d review his book, Test-Drive ASP.NET MVC, and it was just what I needed to get over the TDD hump. As the title suggests, Test-Drive ASP.NET MVC takes a different approach to learning MVC as it focuses on testing right from the very start. McCracken wastes no time and swiftly familiarizes us with the framework by building out a trivial Quote-O-Matic application and then dedicates the better part of his book to testing first – first by explaining TDD and then coding a full-featured Getting Organized application inspired by David Allen’s popular book, Getting Things Done. If you are a learn-by-example kind of coder (like me), you will instantly appreciate and enjoy McCracken’s style – its fast-moving, pragmatic and focused on only the most relevant information required to get you going with ASP.NET MVC and TDD. The book continues with the test-first theme but McCracken moves away from the sample application and incorporates other practical skills like persisting models with NHibernate, leveraging Inversion of Control with the IControllerFactory and building a RESTful web service. What I most appreciated about this section was McCracken’s use of and praise for open source libraries like Rhino Mocks, SQLite and StructureMap (to name just a few) and productivity tools like ReSharper, Web Platform Installer and ASP.NET SQL Server Setup Wizard.  McCracken’s emphasis on real world, pragmatic development was clearly demonstrated in every tool choice, straight-forward code block and developer tip. Whether one is already familiar with the tools/tips or not, McCracken’s thought process is easily understood and appreciated. The final section of the book walks the reader through security and deployment – everything from error handling and logging with ELMAH, to ASP.NET Health Monitoring, to using MSBuild with automated builds, to the deployment  of ASP.NET MVC to various web environments. These chapters, like those prior, offer enough information and explanation to simply help you get the job done.  
Do I believe Test-Drive ASP.NET MVC will turn you into an expert MVC developer overnight?  Well, no.  I don’t think any book can make that claim.  If that were possible, I think book list prices would skyrocket!  That said, Test-Drive ASP.NET MVC provides a solid foundation and a unique (and dare I say necessary) approach to learning ASP.NET MVC.  Along the way McCracken shares loads of very practical software development tips and references numerous tools and libraries. The bottom line is it’s a great ASP.NET MVC primer – if you’re new to ASP.NET MVC it’s just what you need to get started.  Do I believe Test-Drive ASP.NET MVC will give you everything you need to start employing TDD in your everyday development?  Well, I used to think that learning TDD required a lot of practice and, if you’re lucky enough, the guidance of a mentor or coach.  I used to think that one couldn’t learn TDD from a book alone. Well, I’m still no pro, but I’m testing first now and Jonathan McCracken and his book, Test-Drive ASP.NET MVC, played a big part in making this happen.  If you are an MVC developer and a TDD newb, Test-Drive ASP.NET MVC is just the book for you.

    Read the article

  • Performance considerations for common SQL queries

    - by Jim Giercyk
    Originally posted on: http://geekswithblogs.net/NibblesAndBits/archive/2013/10/16/performance-considerations-for-common-sql-queries.aspxSQL offers many different methods to produce the same results.  There is a never-ending debate between SQL developers as to the “best way” or the “most efficient way” to render a result set.  Sometimes these disputes even come to blows….well, I am a lover, not a fighter, so I decided to collect some data that will prove which way is the best and most efficient.  For the queries below, I downloaded the test database from SQLSkills:  http://www.sqlskills.com/sql-server-resources/sql-server-demos/.  There isn’t a lot of data, but enough to prove my point: dbo.member has 10,000 records, and dbo.payment has 15,554.  Our result set contains 6,706 records. The following queries produce an identical result set; the result set contains aggregate payment information for each member who has made more than 1 payment from the dbo.payment table and the first and last name of the member from the dbo.member table.   /*************/ /* Sub Query  */ /*************/ SELECT  a.[Member Number] ,         m.lastname ,         m.firstname ,         a.[Number Of Payments] ,         a.[Average Payment] ,         a.[Total Paid] FROM    ( SELECT    member_no 'Member Number' ,                     AVG(payment_amt) 'Average Payment' ,                     SUM(payment_amt) 'Total Paid' ,                     COUNT(Payment_No) 'Number Of Payments'           FROM      dbo.payment           GROUP BY  member_no           HAVING    COUNT(Payment_No) > 1         ) a         JOIN dbo.member m ON a.[Member Number] = m.member_no         /***************/ /* Cross Apply  */ /***************/ SELECT  ca.[Member Number] ,         m.lastname ,         m.firstname ,         ca.[Number Of Payments] ,         ca.[Average Payment] ,         ca.[Total Paid] FROM    dbo.member m         CROSS APPLY ( SELECT    member_no 'Member Number' ,                                 AVG(payment_amt) 'Average Payment' ,                                 SUM(payment_amt) 'Total Paid' ,                                 COUNT(Payment_No) 'Number Of Payments'                       FROM      dbo.payment                       WHERE     member_no = m.member_no                       GROUP BY  member_no                       HAVING    COUNT(Payment_No) > 1                     ) ca /********/                    /* CTEs  */ /********/ ; WITH    Payments           AS ( SELECT   member_no 'Member Number' ,                         AVG(payment_amt) 'Average Payment' ,                         SUM(payment_amt) 'Total Paid' ,                         COUNT(Payment_No) 'Number Of Payments'                FROM     dbo.payment                GROUP BY member_no                HAVING   COUNT(Payment_No) > 1              ),         MemberInfo           AS ( SELECT   p.[Member Number] ,                         m.lastname ,                         m.firstname ,                         p.[Number Of Payments] ,                         p.[Average Payment] ,                         p.[Total Paid]                FROM     dbo.member m                         JOIN Payments p ON m.member_no = p.[Member Number]              )     SELECT  *     FROM    MemberInfo /************************/ /* SELECT with Grouping   */ /************************/ SELECT  p.member_no 'Member Number' ,         m.lastname ,         m.firstname ,         COUNT(Payment_No) 'Number Of Payments' ,         AVG(payment_amt) 'Average Payment' ,         SUM(payment_amt) 'Total Paid' 
FROM    dbo.payment p         JOIN dbo.member m ON m.member_no = p.member_no GROUP BY p.member_no ,         m.lastname ,         m.firstname HAVING  COUNT(Payment_No) > 1   We can see what is going on in SQL’s brain by looking at the execution plan.  The Execution Plan will demonstrate which steps and in what order SQL executes those steps, and what percentage of batch time each query takes.  SO….if I execute all 4 of these queries in a single batch, I will get an idea of the relative time SQL takes to execute them, and how it renders the Execution Plan.  We can settle this once and for all.  Here is what SQL did with these queries:   Not only did the queries take the same amount of time to execute, SQL generated the same Execution Plan for each of them.  Everybody is right…..I guess we can all finally go to lunch together!  But wait a second, I may not be a fighter, but I AM an instigator.     Let’s see how a table variable stacks up.  Here is the code I executed: /********************/ /*  Table Variable  */ /********************/ DECLARE @AggregateTable TABLE     (       member_no INT ,       AveragePayment MONEY ,       TotalPaid MONEY ,       NumberOfPayments MONEY     ) INSERT  @AggregateTable         SELECT  member_no 'Member Number' ,                 AVG(payment_amt) 'Average Payment' ,                 SUM(payment_amt) 'Total Paid' ,                 COUNT(Payment_No) 'Number Of Payments'         FROM    dbo.payment         GROUP BY member_no         HAVING  COUNT(Payment_No) > 1   SELECT  at.member_no 'Member Number' ,         m.lastname ,         m.firstname ,         at.NumberOfPayments 'Number Of Payments' ,         at.AveragePayment 'Average Payment' ,         at.TotalPaid 'Total Paid' FROM    @AggregateTable at         JOIN dbo.member m ON m.member_no = at.member_no In the interest of keeping things in groupings of 4, I removed the last query from the previous batch and added the table variable query.  Here’s what I got:     Since we first insert into the table variable, then we read from it, the Execution Plan renders 2 steps.  BUT, the combination of the 2 steps is only 22% of the batch.  It is actually faster than the other methods even though it is treated as 2 separate queries in the Execution Plan.  The argument I often hear against Table Variables is that SQL only estimates 1 row for the table size in the Execution Plan.  While this is true, the estimate does not come in to play until you read from the table variable.  In this case, the table variable had 6,706 rows, but it still outperformed the other queries.  People argue that table variables should only be used for hash or lookup tables.  The fact is, you have control of what you put IN to the variable, so as long as you keep it within reason, these results suggest that a table variable is a viable alternative to sub-queries. If anyone does volume testing on this theory, I would be interested in the results.  My suspicion is that there is a breaking point where efficiency goes down the tubes immediately, and it would be interesting to see where the threshold is. Coding SQL is a matter of style.  If you’ve been around since they introduced DB2, you were probably taught a little differently than a recent computer science graduate.  If you have a company standard, I strongly recommend you follow it.    If you do not have a standard, generally speaking, there is no right or wrong answer when talking about the efficiency of these types of queries, and certainly no hard-and-fast rule.  
Volume and infrastructure will dictate a lot when it comes to performance, so your results may vary in your environment.  Download the database and try it!
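For anyone who wants to take up the volume-testing idea, a minimal sketch is shown below. It simply inflates the payment data into a larger staging table and re-runs one of the aggregations with timing statistics turned on. The table name dbo.payment_big is made up purely for this illustration, and the multiplier is arbitrary, so treat this as a starting point rather than a finished benchmark.

-- Inflate dbo.payment into a larger staging table (dbo.payment_big is hypothetical).
IF OBJECT_ID('dbo.payment_big') IS NOT NULL
    DROP TABLE dbo.payment_big;

SELECT  p.member_no ,
        p.payment_no ,
        p.payment_amt
INTO    dbo.payment_big
FROM    dbo.payment p
        CROSS JOIN ( SELECT TOP ( 100 ) 1 AS n FROM dbo.payment ) x;  -- roughly 1.5 million rows

-- Compare the approaches at this volume; repeat with different multipliers.
SET STATISTICS TIME ON;

SELECT  p.member_no 'Member Number' ,
        COUNT(Payment_No) 'Number Of Payments' ,
        AVG(payment_amt) 'Average Payment' ,
        SUM(payment_amt) 'Total Paid'
FROM    dbo.payment_big p
GROUP BY p.member_no
HAVING  COUNT(Payment_No) > 1;

SET STATISTICS TIME OFF;

Run the table variable version and the other three queries against the same staging table and compare the elapsed times and plans as the row count grows.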

    Read the article

  • Windows Azure Use Case: Hybrid Applications

    - by BuckWoody
This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

Description: Organizations see the need for computing infrastructures that they can “rent” or pay for only when they need them. They also understand the benefits of distributed computing, but do not want to create this infrastructure themselves. However, they may have considerations that prevent them from moving all of their current IT investment to a distributed environment:

- Private data (do not want to send or store sensitive data off-site)
- High dollar investment in current infrastructure
- Applications currently running well, but may need additional periodic capacity
- Current applications not designed in a stateless fashion

In these situations, a “hybrid” approach works best. In fact, with Windows Azure, a hybrid approach is an optimal way to implement distributed computing even when the stipulations above do not apply. Keeping a majority of the computing function local to the organization while exploring and expanding that footprint into Windows and SQL Azure is a good migration or expansion strategy. A “hybrid” architecture merely means that part of a computing cycle is shared between two architectures. For instance, some level of computing might be done in a Windows Azure web-based application, while the data is stored locally at the organization.

Implementation: There are multiple methods for implementing a hybrid architecture, on a spectrum from very little interaction from the local infrastructure to Windows or SQL Azure. The patterns fall into two broad schemas, and even these can be mixed.

1. Client-Centric Hybrid Patterns

In this pattern, programs are coded such that the client system sends queries or compute requests to multiple systems. The “client” in this case might be a web-based codeset actually stored on another system (which acts as a client, the user’s device serving as the presentation layer) or a compiled program. In either case, the code on the client requestor carries the burden of defining the layout of the requests. While this pattern is often the easiest to code, it’s the most brittle. Any change in the architecture must be reflected on each client, but this can be mitigated by using a centralized system as the client such as in the web scenario.

2. System-Centric Hybrid Patterns

Another approach is to create a distributed architecture by turning on-site systems into “services” that can be called from Windows Azure using the Service Bus or the Access Control Services (ACS) capabilities, so that the calls arrive as if from a series of in-process client applications. In this pattern you move the “client” interface into the server application logic. If you do not wish to change the application itself, you can “layer” the results of the code return using a product (such as Microsoft BizTalk) that exposes a Web Services Definition Language (WSDL) endpoint to Windows Azure using the Application Fabric. In effect, this is similar to creating a Service Oriented Architecture (SOA) environment, and has the advantage of de-coupling your computing architecture. If each system offers a “service” of the results of some software processing, the operating system or platform becomes immaterial, assuming it adheres to a service contract.
There are important considerations when you federate a system, whether to Windows or SQL Azure or any other distributed architecture. While these considerations are consistent with coding any application for distributed computing, they are especially important for a hybrid application.

- Connection resiliency - Applications on-premise normally have low latency and good connection properties, something you’re not always guaranteed in a distributed and hybrid application. Whether a centralized client or a distributed one, the code should be able to handle extended retry logic (a minimal sketch of this follows the resource links below).
- Authorization and Access - In a single authorization environment like an Active Directory domain, security is handled at a user-password level. In a distributed computing environment, you have more options. You can address this by using the Windows Azure Application Fabric feature of ACS to make the Azure application aware of the App Fabric as an ADFS provider. However, a claims-based authentication structure is often a superior choice.
- Consistency and Concurrency - When you have a Relational Database Management System (RDBMS), Consistency and Concurrency are part of the design. In a Service Architecture, you need to plan for sequential message handling and lifecycle.

Resources:
How to Build a Hybrid On-Premise/In Cloud Application: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/09/how-to-build-a-hybrid-on-premise-in-cloud-application.aspx
General Architecture guidance: http://blogs.msdn.com/b/buckwoody/archive/2010/12/21/windows-azure-learning-plan-architecture.aspx
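As a rough illustration of the retry point above, here is a minimal C# sketch. The class name, the exception type being caught and the retry count are illustrative assumptions rather than part of any Azure SDK; production code would normally lean on a library-provided retry policy instead.

using System;
using System.Threading;

public static class RetryHelper
{
    // Wraps a call that crosses the on-premise/Azure boundary with simple
    // exponential backoff.  TimeoutException stands in for whatever transient
    // failure the real transport surfaces.
    public static T ExecuteWithRetry<T>(Func<T> operation, int maxAttempts)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (TimeoutException)
            {
                if (attempt >= maxAttempts)
                    throw;
                Thread.Sleep(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
            }
        }
    }
}

A caller would wrap the remote call itself, for example RetryHelper.ExecuteWithRetry(() => proxy.GetCustomerData(id), 5), where proxy is whatever service client the hybrid application uses.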

    Read the article

  • BizTalk 2009 - Naming Guidelines

    - by StuartBrierley
The following is effectively a repost of the BizTalk 2004 naming guidelines that I have previously detailed.  I have posted these again for completeness under BizTalk 2009 and to allow an element of separation in case I find some reason to amend these for BizTalk 2009. These guidelines should be universal across any version of BizTalk you may wish to apply them to.

General Rules

All names should be named with a Pascal convention.

Project Namespaces

For message schemas: [CompanyName].XML.Schemas.[FunctionalName]*
Examples: ABC.XML.Schemas.Underwriting, DEF.XML.Schemas.MarshmellowTradingExchange
* Denotes potential for multiple levels of functional name, such as Underwriting.Dictionary.Valuation

For web services: [CompanyName].Web.Services.[FunctionalName]
Examples: ABC.Web.Services.OrderJellyBeans

For the main BizTalk Projects: [CompanyName].BizTalk.[AssemblyType].[FunctionalName]*
Examples: ABC.BizTalk.Mappings.Underwriting, ABC.BizTalk.Orchestrations.Underwriting
* Denotes potential for multiple levels of functional name, such as Mappings.Underwriting.Valuations

Assemblies

BizTalk Assembly names should match the associated Project Namespace, such as ABC.BizTalk.Mappings.Underwriting. This pertains to the formal assembly name and the DLL name. The Solution name should take the name of the main project within the solution, and also therefore the namespace for that project. Although long names such as this can be unwieldy to work with, the benefits of having the full scope available when the assemblies are installed on the target server are generally judged to outweigh this inconvenience.

Messaging Artifacts

Artifact | Standard | Notes | Example
Schema | <DescriptiveName>.xsd | .NET Type name should match, without file extension. .NET Namespace will likely match assembly name. | PurchaseOrderAcknowledge_FF.xsd or FNMA100330_FF.xsd
Property Schema | <DescriptiveName>.xsd | Should be named to reflect possible common usage across multiple schemas. | IspecMessagePropertySchema.xsd, UnderwritingOrchestrationKeys.xsd
Map | <SourceSchema>2<DestinationSchema>.btm | Exceptions to this may be made where the source and destination schemas share the majority of the name, such as in mainframe web service maps. | InstructionResponse2CustomEmailRequest.btm (exception example), AccountCustomerAddressSummaryRequest2MainframeRequest.btm
Orchestration | <DescriptiveName>.odx | | GetValuationReports.odx, SendMTEDecisionResponse.odx
Send/Receive Pipeline | <DescriptiveName>.btp | | ValidatingXMLReceivePipeline.btp, FlatFileAssembler.btp
Receive Port | A plainly worded phrase that will clearly explain the function. | | FraudPreventionServices, LetterProcessing
Receive Location | A plainly worded phrase that will clearly explain the function. | ? Do we want to include the transport type here ? | Arrears Web Service
Send Port Group | A plainly worded phrase that will clearly explain the function. | | Customer Updates
Send Port | A plainly worded phrase that will clearly explain the function. | | ABCProductUpdater, LogLendingPolicyOutput
Parties | A meaningful name for a Trading Partner. If dealing with multiple entities within a Trading Partner organization, the Organization name could be used as a prefix. | |
Roles | A meaningful name for the role that a Trading Partner plays. | |

Orchestration Workflow Shapes

Shape | Standard | Notes | Example
Scopes | <DescriptionOfContainedWork> or <DescOfContainedWork><TxType> | Including info about transaction type may be appropriate in some situations where it adds significant documentation value to the diagram. | HandleReportResponse
Receive | Receive<MessageName> | Typically, MessageName will be the same as the name of the message variable that is being received “into”. | ReceiveReportResponse
Send | Send<MessageName> | Typically, MessageName will be the same as the name of the message variable that is being sent. | SendValuationDetailsRequest
Expression | <DescriptionOfEffect> | Expression shapes should be named to describe the net effect of the expression, similar to naming a method. The exception to this is the case where the expression is interacting with an external .NET component to perform a function that overlaps with existing BizTalk functionality – use the closest BizTalk shape for this case. | CreatePrintXML
Decide | <DescriptionOfDecision> | A description of what will be decided in the “if” branch. | Report Type? Perform MF Save?
If-Branch | <DescriptionOfDecision> | A (potentially abbreviated) description of what is being decided. | Mortgage Valuation Yes
Else-Branch | Else | Else-branch shapes should always be named “Else”. | Else
Construct Message (Assign) | Create<Message> (for the Construct), <ExpressionDescription> (for the expression) | If a Construct shape contains a message assignment, it should be prefixed with “Create” followed by an abbreviated name of the message being assigned. The actual message assignment shape contained should be named to describe the expression that is contained. | CreateReportDataMV, which contains the expression ExtractReportData
Construct Message (Transform) | Create<Message> (for the Construct), <SourceSchema>2<DestSchema> (for the transform) | If a Construct shape contains a message transform, it should be prefixed with “Create” followed by an abbreviated name of the message being assigned. The actual message transform shape contained should generally be named the same as the called map. | CreateReportDataMV, which contains the transform ReportDataMV2ReportDataMV
Construct Message (containing multiple shapes) | | If a Construct Message shape uses multiple assignments or transforms, the overall shape should be named to communicate the net effect, using no prefix. |
Call/Start Orchestration | Call<OrchestrationName>, Start<OrchestrationName> | |
Throw | Throw<ExceptionType> | The corresponding variable name for the exception type should (often) be the same name as the exception type, only camel-cased. | ThrowRuleException, which references the “ruleException” variable
Parallel | <DescriptionOfParallelWork> | Parallel shapes should be named by a description of what work will be done in parallel. |
Delay | <DescriptionOfWhatWaitingFor> | Delay shapes should be named by a description of what is being waited for. | POAcknowledgeTimeout
Listen | <DescriptionOfOutcomes> | Listen shapes should be named by a description that captures (to the degree possible) all the branches of the Listen shape. | POAckOrTimeout, FirstShippingBid
Loop | <DescriptionOfLoop> | A (potentially abbreviated) description of what the loop is. | ForEachValuationReport, WhileErrorFlagTrue
Role Link | | See “Roles” in the messaging naming conventions above. |
Suspend | <ReasonDescription> | Describe what action an administrator must take to resume the orchestration. More detail can be passed to the error property – and should include what should be done by the administrator before resuming the orchestration. | ReEstablishCreditLink
Terminate | <ReasonDescription> | Describe why the orchestration terminated. More detail can be passed to the error property. | TimeoutsExpired
Call Rules | Call<PolicyName> | The policy name may need to be abbreviated. | CallLendingPolicy
Compensate | Compensate or Compensate<TxName> | If the shape compensates nested transactions, names should be suffixed with the name of the nested transaction – otherwise it should simply be Compensate. | CompensateTransferFunds

Orchestration Types

Type | Standard | Notes | Example
Multi-Part Message Types | <LogicalDocumentType> | Multi-part types encapsulate multiple parts. The WSDL spec indicates “parts are a flexible mechanism for describing the logical abstract content of a message.” The name of the multi-part type should correspond to the “logical” document type, i.e. what the sum of the parts describes. | InvoiceReceipt (which might encapsulate an invoice acknowledgement and a payment voucher)
Multi-Part Message Part | <SchemaNameOfPart> | Should be named (most often) simply for the schema (or simple type) associated with the part. | InvoiceHeader
Messages | <SchemaName> or <MultiPartMessageTypeName> | Should be named based on the corresponding schema type or multi-part message type. If there is more than one variable of a type, name for its use within the orchestration. | ReportDataMV, UpdatedReportDataMV
Variables | <DescriptiveName> | | TargetFilePath, StringProcessor
Port Types | <FunctionDescription>PortType | Should be named to suggest the nature of an endpoint, with Pascal casing and suffixed with “PortType”. If there will be more than one Port for a Port Type, the Port Type should be named according to the abstract service supplied. The WSDL spec indicates port types are “a named set of abstract operations and the abstract messages involved” that also encapsulates the message pattern (i.e. one-way, request-response, solicit-response) that all operations on the port type adhere to. | ReceiveReportResponsePortType or CallEAEPortType (this is a two-way port, so Receive or Send alone would not be appropriate; it could have been ProcessEAERequestPortType, etc.)
Ports | <FunctionDescription>Port | Should be named to suggest a grouping of functionality, with Pascal casing and suffixed with “Port.” | ReceiveReportResponsePort, CallEAEPort
Correlation types | <DescriptiveName> | Should be named based on the logical name of what is being used to correlate. | PurchaseOrderNumber
Correlation sets | <DescriptiveName> | Should be named based on the corresponding correlation type. If there is more than one, it should be named to reflect its specific purpose within the orchestration. | PurchaseOrderNumber
Orchestration parameters | <DescriptiveName> | Should be named to match the caller’s names for the corresponding variables where appropriate. |

    Read the article

  • How to get validation messages from MongoMapper using the Rails console?

    - by Alex
Hi, I am basically teaching myself how to use RoR and MongoDB at the same time. I am following the very good book / tutorial: http://railstutorial.org/ I decided to replace SQLite3 with MongoDB using the MongoMapper gem. Everything works out alright, but I am having some non-blocking little issues that I truly wish I could get rid of.

In chapter 6, when working with validation, I got 2 issues:

- I don't know how to get the validation messages back like with SQLite3. The "standard" code is:

$ rails console --sandbox
>> user = User.new(:name => "", :email => "[email protected]")
>> user.save
=> false
>> user.valid?
=> false
>> user.errors.full_messages
=> ["Name can't be blank"]

but if I try to do the same with MongoMapper, it throws an error saying that errors is an undefined method. So does it mean that this is simply not implemented in MongoMapper / the Mongo driver? Or is there some other clever way to do this that I could not figure out?

Additionally, 2 things here:

- I followed the example in the book to the line, so I was expecting to be able to use the console in sandbox mode, but apparently that does not work either:

(...)ruby-1.9.2-p136@rails3/gems/railties-3.0.3/lib/rails/console/sandbox.rb:1:in `<top (required)>': uninitialized constant ActiveRecord (NameError)
from /Users/Alex/.rvm/gems/ruby-1.9.2-p136@rails3/gems/railties-3.0.3/lib/rails/application.rb:226:in `initialize_console'
from /Users/Alex/.rvm/gems/ruby-1.9.2-p136@rails3/gems/railties-3.0.3/lib/rails/application.rb:153:in `load_console'
from /Users/Alex/.rvm/gems/ruby-1.9.2-p136@rails3/gems/railties-3.0.3/lib/rails/commands/console.rb:26:in `start'
from /Users/Alex/.rvm/gems/ruby-1.9.2-p136@rails3/gems/railties-3.0.3/lib/rails/commands/console.rb:8:in `start'
from /Users/Alex/.rvm/gems/ruby-1.9.2-p136@rails3/gems/railties-3.0.3/lib/rails/commands.rb:23:in `<top (required)>'
from script/rails:6:in `require'
from script/rails:6:in `<main>'

- Also, in the book they call "user" but I need to call "User" (note the capital U). Why is that? Is it that MongoMapper does not follow the Ruby naming convention or something?

- And finally, I am trying to validate the email field with a regex as shown in the tutorial. It does not throw any errors in the code, but whenever I try to insert, it just won't ever accept it unless I comment out the :format option...

class User
  include MongoMapper::Document

  key :name, String, :required => true, :length => { :maximum => 50 }
  key :email, String, :required => true,
      # :format => { :with => email_regex },
      :uniqueness => { :case_sentitive => false}

  timestamps!
end

Any advice you can provide on those topics would help me a lot! Thanks, Alex

    Read the article

  • Clean up after Visual Studio

    - by psheriff
As programmers we know that if we create a temporary file during the running of our application we need to make sure it is removed when the application or process is complete. We do this, but why can’t Microsoft do it? Visual Studio leaves tons of temporary files all over your hard drive. This is why, over time, your computer loses hard disk space. This blog post will show you some of the most common places where these files are left and which ones you can safely delete.

.NET Left Overs

Visual Studio is a great development environment for creating applications quickly. However, it will leave a lot of miscellaneous files all over your hard drive. There are a few locations on your hard drive that you should be checking to see if there are left-over folders or files that you can delete. I have attempted to gather as much data as I can about the various versions of .NET and operating systems. Of course, your mileage may vary on the folders and files I list here. In fact, this problem is so prevalent that PDSA has created a Computer Cleaner specifically for the Visual Studio developer. Instructions for downloading our PDSA Developer Utilities (of which Computer Cleaner is one) are at the end of this blog entry.

Each version of Visual Studio will create “temporary” files in different folders. The problem is that the files created are not always “temporary”. Most of the time these files do not get cleaned up like they should. Let’s look at some of the folders that you should periodically review and delete files within.

Temporary ASP.NET Files

As you create and run ASP.NET applications from Visual Studio, temporary files are placed into the <sysdrive>:\Windows\Microsoft.NET\Framework[64]\<vernum>\Temporary ASP.NET Files folder. The folders and files under this folder can be removed with no harm to your development computer. Do not remove the "Temporary ASP.NET Files" folder itself, just the folders underneath this folder. If you use IIS for ASP.NET development, you may need to run the iisreset.exe utility from the command prompt prior to deleting any files/folders under this folder. IIS will sometimes keep files in use in this folder and iisreset will release the locks so the files/folders can be deleted.

Website Cache

This folder is similar to the ASP.NET Temporary Files folder in that it contains files from ASP.NET applications run from Visual Studio. This folder is located in each user's local settings folder. The location will be a little different on each operating system. For example, on Windows Vista/Windows 7, the folder is located at <sysdrive>:\Users\<UserName>\AppData\Local\Microsoft\WebsiteCache. If you are running Windows XP this folder is located at <sysdrive>:\Documents and Settings\<UserName>\Local Settings\Application Data\Microsoft\WebsiteCache. Check these locations periodically and delete all files and folders under this directory.

Visual Studio Backup

This backup folder is used by Visual Studio to store temporary files while you develop in Visual Studio. This folder never gets cleaned out, so you should periodically delete all files and folders under this directory. On Windows XP, this folder is located at <sysdrive>:\Documents and Settings\<UserName>\My Documents\Visual Studio 200[5|8]\Backup Files. On Windows Vista/Windows 7 this folder is located at <sysdrive>:\Users\<UserName>\Documents\Visual Studio 200[5|8]\.

Assembly Cache

No, this is not the global assembly cache (GAC). It appears that this cache is only created when doing WPF or Silverlight development with Visual Studio 2008 or Visual Studio 2010. This folder is located at <sysdrive>:\Users\<UserName>\AppData\Local\assembly\dl3 on Windows Vista/Windows 7. On Windows XP this folder is located at <sysdrive>:\Documents and Settings\<UserName>\Local Settings\Application Data\assembly. If you have not done any WPF or Silverlight development, you may not find this particular folder on your machine.

Project Assemblies

This is yet another folder where Visual Studio stores temporary files. You will find a folder for each project you have opened and worked on. This folder is located at <sysdrive>:\Documents and Settings\<UserName>\Local Settings\Application Data\Microsoft\Visual Studio\[8|9].0\ProjectAssemblies on Windows XP. On Windows Vista/Windows 7 you will find this folder at <sysdrive>:\Users\<UserName>\AppData\Local\Microsoft\Visual Studio\[8|9].0\ProjectAssemblies.

Remember not all of these folders will appear on your particular machine. Which ones do show up will depend on what version of Visual Studio you are using, whether or not you are doing desktop or web development, and the operating system you are using.

Summary

Taking the time to periodically clean up after Visual Studio will aid in keeping your computer running quickly and increase the space on your hard drive. Another place to make sure you are cleaning up is your TEMP folder. Check your OS settings for the location of your particular TEMP folder and be sure to delete any files in here that are not in use. I routinely clean up the files and folders described in this blog post and I find that I actually eliminate errors in Visual Studio and I increase my hard disk space.

NEW! PDSA has just published a “pre-release” of our PDSA Developer Utilities at http://www.pdsa.com/DeveloperUtilities that contains a Computer Cleaner utility which will clean up the above-mentioned folders, as well as a lot of other miscellaneous folders that get Visual Studio build-up. You can download a free trial at http://www.pdsa.com/DeveloperUtilities. If you wish to purchase our utilities through the month of November, 2011 you can use the RSVP code: DUNOV11 to get them for only $39. This is $40 off the regular price.

NOTE: You can download this article and many samples like the one shown in this blog entry at my website, http://www.pdsa.com/downloads. Select “Tips and Tricks”, then “Developer Machine Clean Up” from the drop down list.

Good Luck with your Coding,
Paul Sheriff

** SPECIAL OFFER FOR MY BLOG READERS **
We frequently offer a FREE gift for readers of my blog. Visit http://www.pdsa.com/Event/Blog for your FREE gift!
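For anyone who would rather script the manual cleanup described above instead of using a utility, a minimal C# sketch is shown below. The folder list mirrors the locations discussed in this post for a Windows Vista/Windows 7 machine; the .NET Framework and Visual Studio version numbers are assumptions you would adjust for your own setup, and you should close Visual Studio (and run iisreset if you use IIS) before trying it.

using System;
using System.IO;

class DevTempCleaner
{
    static void Main()
    {
        string windows      = Environment.GetFolderPath(Environment.SpecialFolder.Windows);
        string localAppData = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);
        string myDocs       = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);

        // Folders named in the post; version numbers here are examples only.
        string[] candidates =
        {
            Path.Combine(windows,      @"Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files"),
            Path.Combine(localAppData, @"Microsoft\WebsiteCache"),
            Path.Combine(myDocs,       @"Visual Studio 2008\Backup Files"),
            Path.Combine(localAppData, @"Microsoft\Visual Studio\9.0\ProjectAssemblies")
        };

        foreach (string folder in candidates)
        {
            if (!Directory.Exists(folder)) continue;

            // Delete the contents but keep the folder itself, as recommended above.
            foreach (string sub in Directory.GetDirectories(folder))
            {
                try { Directory.Delete(sub, true); }
                catch (IOException) { /* file still in use - skip it */ }
                catch (UnauthorizedAccessException) { /* no rights - skip it */ }
            }
            foreach (string file in Directory.GetFiles(folder))
            {
                try { File.Delete(file); }
                catch (IOException) { }
                catch (UnauthorizedAccessException) { }
            }
        }
    }
}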

    Read the article

  • Juniper Strategy, LLC is hiring SharePoint Developers&hellip;

    - by Mark Rackley
Isn’t everybody these days? It seems as though there are definitely more jobs than qualified devs these days, but yes, we are looking for a few good devs to help round out our burgeoning SharePoint team. Juniper Strategy is located in the DC area; however, we will consider remote devs for the right fit. This is your chance to get in on the ground floor of a bright company that truly “gets it” when it comes to SharePoint, Project Management, and Information Assurance. We need like-minded people who “get it”, enjoy it, and who are looking for more than just a job. We have government and commercial opportunities as well as our own internal product that has a bright future of its own. Our immediate needs are for SharePoint .NET developers, but feel free to submit your resume for us to keep on file as it looks as though we’ll need several people in the coming months. Please email us your resume and salary requirements to [email protected]. Below are our official job postings. Thanks for stopping by, we look forward to hearing from you.

Senior SharePoint .NET Developer

The senior developer will focus on design and coding of custom, end-to-end business process solutions within the SharePoint framework, with the ability to serve as a senior developer/mentor and manage day-to-day development tasks. Responsibilities include:
- Work with business consultants and clients to gather requirements to prepare business functional specifications.
- Analyze and recommend technical/development alternative paths based on business functional specifications.
- For the selected development path, prepare the technical specification and build the solution.
- Assist the project manager with defining the development task schedule and level-of-effort.
- Lead technical solution deployment.

Job Requirements
- Minimum of 7 years experience in agile development, with at least 3 years of SharePoint-related development experience (SPS, SharePoint 2007/2010, WSS2-4).
- Thorough understanding of and demonstrated experience in development under the SharePoint Object Model, with focus on the WSS 3.0 foundation (MOSS 2007 Standard/Enterprise, Project Server 2007).
- Experience with using multiple data sources/repositories for database CRUD activities, including relational databases, SAP, Oracle e-Business.
- Experience with designing and deploying performance-based solutions in SharePoint for business processes that involve a very large number of records.
- Experience designing dynamic dashboards and mashups with data from multiple sources (internal to SharePoint as well as from external sources).
- Experience designing custom forms to facilitate user data entry, both with and without leveraging Forms Services.
- Experience building custom web part solutions.
- Experience with designing custom solutions for processing underlying business logic requirements including, but not limited to, SQL stored procedures, C#/ASP.Net workflows/event handlers (including timer jobs) to support multi-tiered decision trees and associated computations.
- Ability to create complex solution packages for deployment (e.g., feature-stapled site definitions).
- Must have impeccable communication skills, both written and verbal.
- Seeking a “tinkerer”; proactive with a thirst for knowledge (and a sense of humor).
- A US Citizen is required, and needs to be able to pass NAC/E-Verify. An active Secret clearance is preferred.
- Applicants must pass a skills assessment test.
- MCP/MCTS or comparable certification preferred.

Salary & Travel: Negotiable

SharePoint Project Lead

Responsibilities include:
- Define the project task schedule, work breakdown structure and level-of-effort.
- Serve as principal liaison to the customer to manage deliverables and expectations.
- Day-to-day project and team management, including preparation and maintenance of project plans, budgets, and status reports.
- Prepare technical briefings and presentation decks, provide briefs to C-level stakeholders.
- Work with business consultants and clients to gather requirements to prepare business functional specifications.
- Analyze and recommend technical/development alternative paths based on business functional specifications.

The SharePoint Project Lead will be working with SharePoint architects and system owners to perform requirements/gap analysis and develop the underlying functional specifications. Once we have functional specifications as close to “final” as possible, the Project Lead will be responsible for preparation of the associated technical specification/development blueprint, along with assistance in preparing IV&V/test plan materials with support from other team members. This person will also be responsible for day-to-day management of “developers”, but is also expected to engage in development directly as needed.

Job Requirements
- Minimum 8 years of technology project management across the software development life-cycle, with a minimum of 3 years of project management relating specifically to SharePoint (SPS 2003, SharePoint 2007/2010) and/or Project Server.
- Thorough understanding of and demonstrated experience in development under the SharePoint Object Model, with focus on the WSS 3.0 foundation (MOSS 2007 Standard/Enterprise, Project Server 2007).
- Ability to interact and collaborate effectively with team members and stakeholders of different skill sets, personalities and needs.
- The general “development” skill set required is a fundamental understanding of MOSS 2007 Enterprise, SP1/SP2, from the top level of skinning to the core of the SharePoint object model.
- Impeccable communication skills, both written and verbal, and a sense of humor are required.
- The projects will require being at a client site at least 50% of the time in Washington DC (NW DC) and Maryland (near Suitland).
- A US Citizen is required, and needs to be able to pass NAC/E-Verify. An active Secret clearance is preferred.
- PMP certification, PgMP preferred.

Salary & Travel: Negotiable

    Read the article

  • Making Sense of ASP.NET Paths

    - by Renso
Making Sense of ASP.NET Paths

ASP.NET includes quite a plethora of properties to retrieve path information about the current request, control and application. There's a ton of information available about paths on the Request object, some of it appearing to overlap and some of it buried several levels down, and it can be confusing to find just the right path that you are looking for. To keep things straight I thought it a good idea to summarize the path options along with descriptions and example paths. I wrote a post about this a long time ago in 2004 and I find myself frequently going back to that page to quickly figure out which path I'm looking for in processing the current URL. Apparently a lot of people must be doing the same, because the original post is the second most visited even to this date on this blog to the tune of nearly 500 hits per day. So, I decided to update and expand a bit on the original post with a little more information and clarification based on the original comments.

Request Object Paths Available

Here's a list of the Path related properties on the Request object (and the Page object). Assume a path like http://www.west-wind.com/webstore/admin/paths.aspx for the paths below, where webstore is the name of the virtual.

Request Property | Description and Value
ApplicationPath | Returns the web root-relative logical path to the virtual root of this app. /webstore/
PhysicalApplicationPath | Returns the local file system path of the virtual root for this app. c:\inetpub\wwwroot\webstore
PhysicalPath | Returns the local file system path to the current script or path. c:\inetpub\wwwroot\webstore\admin\paths.aspx
Path, FilePath, CurrentExecutionFilePath | All of these return the full root relative logical path to the script page including path and script name. CurrentExecutionFilePath will return the ‘current’ request path after a Transfer/Execute call while FilePath will always return the original request’s path. /webstore/admin/paths.aspx
AppRelativeCurrentExecutionFilePath | Returns an ASP.NET root relative virtual path to the script or path for the current request. If in a Transfer/Execute call the transferred Path is returned. ~/admin/paths.aspx
PathInfo | Returns any extra path following the script name (the /ExtraPathInfo portion of the example). string.Empty if no PathInfo is available. /webstore/admin/paths.aspx/ExtraPathInfo
RawUrl | Returns the full root relative URL including querystring and extra path as a string. /webstore/admin/paths.aspx?sku=wwhelp40
Url | Returns a fully qualified URL including querystring and extra path. Note this is a Uri instance rather than a string. http://www.west-wind.com/webstore/admin/paths.aspx?sku=wwhelp40
UrlReferrer | The fully qualified URL of the page that sent the request. This is also a Uri instance, and this value is null if the page was directly accessed by typing into the address bar or using an HttpClient based Referrer client Http header. http://www.west-wind.com/webstore/default.aspx?Info
Control.TemplateSourceDirectory | Returns the logical path to the folder of the page, master or user control on which it is called. This is useful if you need to know the path only to a Page or control from within the control. For non-file controls this returns the Page path. /webstore/admin/

As you can see there’s a ton of information available there for each of the three common path formats:

- Physical Path is an OS type path that points to a path or file on disk.
- Logical Path is a Web path that is relative to the Web server’s root. It includes the virtual plus the application relative path.
- ~/ (Root-relative) Path is an ASP.NET specific path that includes ~/ to indicate the virtual root Web path.

ASP.NET can convert virtual paths into either logical paths using Control.ResolveUrl(), or physical paths using Server.MapPath(). Root relative paths are useful for specifying portable URLs that don’t rely on relative directory structures and are very useful from within control or component code. You should be able to get any necessary format from ASP.NET from just about any path or script using these mechanisms.

~/ Root Relative Paths and ResolveUrl() and ResolveClientUrl()

ASP.NET supports root-relative virtual path syntax in most of its URL properties in Web Forms. So you can easily specify a root relative path in a control rather than a location relative path:

<asp:Image runat="server" ID="imgHelp" ImageUrl="~/images/help.gif" />

ASP.NET internally resolves this URL by using ResolveUrl("~/images/help.gif") to arrive at the root-relative URL of /webstore/images/help.gif, which uses the Request.ApplicationPath as the base path to replace the ~. By convention any custom Web controls also should use ResolveUrl() on URL properties to provide the same functionality. In your own code you can use Page.ResolveUrl() or Control.ResolveUrl() to accomplish the same thing:

string imgPath = this.ResolveUrl("~/images/help.gif");
imgHelp.ImageUrl = imgPath;

Unfortunately ResolveUrl() is limited to WebForm pages, so if you’re in an HttpHandler or Module it’s not available. ASP.NET MVC also has its own more generic version of ResolveUrl in Url.Content:

<script src="<%= Url.Content("~/scripts/new.js") %>" type="text/javascript"></script>

which is part of the UrlHelper class. In ASP.NET MVC the above sort of syntax is actually even more crucial than in WebForms due to the fact that views are not referencing specific pages but rather are often path based, which can lead to various variations on how a particular view is referenced.

In a Module or Handler Control.ResolveUrl() unfortunately is not available, which in retrospect seems like an odd design choice – URL resolution really should happen on a Request basis, not as part of the Page framework. Luckily you can also rely on the static VirtualPathUtility class:

string path = VirtualPathUtility.ToAbsolute("~/admin/paths.aspx");

VirtualPathUtility also has many other quite useful methods for dealing with paths and converting between the various kinds of paths supported. One thing to watch out for is that ToAbsolute() will throw an exception if a query string is provided and doesn’t work on fully qualified URLs. I wrote about this topic with a custom solution that works with fully qualified URLs and query strings here (check comments for some interesting discussions too).

Similar to ResolveUrl() is ResolveClientUrl(), which creates a fully qualified HTTP path that includes the protocol and domain name. It’s rare that this full resolution is needed but can be useful in some scenarios.

Mapping Virtual Paths to Physical Paths with Server.MapPath()

If you need to map root relative or current folder relative URLs to physical URLs you can use HttpContext.Current.Server.MapPath(). Inside of a Page you can do the following:

string physicalPath = Server.MapPath("~/scripts/ww.jquery.js");

MapPath is pretty flexible and it understands both ASP.NET style virtual paths as well as plain relative paths, so the following also works:

string physicalPath = Server.MapPath("scripts/silverlight.js");

as well as dot relative syntax:

string physicalPath = Server.MapPath("../scripts/jquery.js");

Once you have the physical path you can perform standard System.IO Path and File operations on the file. Remember with physical paths and IO or copy operations you need to make sure you have permissions to access files and folders based on the Web server user account that is active (NETWORK SERVICE, ASPNET typically). Note that Server.MapPath will not map up beyond the virtual root of the application for security reasons.

Server and Host Information

Between these settings you can get all the information you may need to figure out where you are at and to build a new Url if necessary. If you need to build a URL completely from scratch you can get access to information about the server you are accessing:

Server Variable | Function and Example
SERVER_NAME | The name of the domain or IP Address. www.west-wind.com or 127.0.0.1
SERVER_PORT | The port that the request runs under. 80
SERVER_PORT_SECURE | Determines whether https: was used. 0 or 1
APPL_MD_PATH | ADSI DirectoryServices path to the virtual root directory. Note that LM typically doesn’t work for ADSI access so you should replace that with LOCALHOST or the machine’s NetBios name. /LM/W3SVC/1/ROOT/webstore

Request.Url and Uri Parsing

If you still need more control over the current request URL or you need to create new URLs from an existing one, the current Request.Url Uri property offers a lot of control. Using the Uri class and UriBuilder makes it easy to retrieve parts of a URL and create new URLs based on an existing URL. The UriBuilder class is the preferred way to create URLs – much preferable over creating URIs via string concatenation.

Uri Property | Function
Scheme | The URL scheme or protocol prefix. http or https
Port | The port if specifically specified.
DnsSafeHost | The domain name or local host NetBios machine name. www.west-wind.com or rasnote
LocalPath | The full path of the URL including script name and extra PathInfo. /webstore/admin/paths.aspx
Query | The query string if any. ?id=1

The Uri class itself is great for retrieving Uri parts, but most of the properties are read-only. If you need to modify a URL in order to change it, you can use the UriBuilder class to load up an existing URL and modify it to create a new one. Here are a few common operations I’ve needed to do to get specific URLs:

Convert the Request URL to an SSL/HTTPS link

For example, taking the current request URL and converting it to a secure URL can be done like this:

UriBuilder build = new UriBuilder(Request.Url);
build.Scheme = "https";
build.Port = -1;  // don't inject port
Uri newUri = build.Uri;
string newUrl = build.ToString();

Retrieve the fully qualified URL without a QueryString

AFAIK, there’s no native routine to retrieve the current request URL without the query string. It’s easy to do with UriBuilder however:

UriBuilder builder = new UriBuilder(Request.Url);
builder.Query = "";
string logicalPathWithoutQuery = builder.ToString();
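Since ResolveUrl() is not available from a handler or module, here is a small sketch of the VirtualPathUtility approach described above in that context. The handler class name and the script path are made up purely for illustration; register it in web.config (or as an .ashx) however your project normally wires up handlers.

using System.Web;

public class ScriptInfoHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Control.ResolveUrl() is not available here, so use VirtualPathUtility instead.
        string logicalPath = VirtualPathUtility.ToAbsolute("~/scripts/ww.jquery.js");

        // Server.MapPath still works from the context if the physical path is needed.
        string physicalPath = context.Server.MapPath(logicalPath);

        context.Response.ContentType = "text/plain";
        context.Response.Write(logicalPath + "\r\n" + physicalPath);
    }

    public bool IsReusable { get { return true; } }
}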

    Read the article

  • JavaScript Intellisense Improvements with VS 2010

    - by ScottGu
This is the twentieth in a series of blog posts I’m doing on the upcoming VS 2010 and .NET 4 release.  Today’s blog post covers some of the nice improvements coming with JavaScript intellisense with VS 2010 and the free Visual Web Developer 2010 Express.  You’ll find with VS 2010 that JavaScript Intellisense loads much faster for large script files and with large libraries, and that it now provides statement completion support for more advanced scenarios compared to previous versions of Visual Studio.

[In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

Improved JavaScript Intellisense

Providing Intellisense for a dynamic language like JavaScript is more involved than doing so with a statically typed language like VB or C#.  Correctly inferring the shape and structure of variables, methods, etc. is pretty much impossible without pseudo-executing the actual code itself – since JavaScript as a language is flexible enough to dynamically modify and morph these things at runtime.  VS 2010’s JavaScript code editor now has the smarts to perform this type of pseudo-code execution as you type – which is how its intellisense completion is kept accurate and complete.  Below is a simple walkthrough that shows off how rich and flexible it is with the final release.

Scenario 1: Basic Type Inference

When you declare a variable in JavaScript you do not have to declare its type.  Instead, the type of the variable is based on the value assigned to it.  Because VS 2010 pseudo-executes the code within the editor, it can dynamically infer the type of a variable, and provide the appropriate code intellisense based on the value assigned to a variable.  For example, notice below how VS 2010 provides statement completion for a string (because we assigned a string to the “foo” variable).  If we later assign a numeric value to “foo” the statement completion (after this assignment) automatically changes to provide intellisense for a number.

Scenario 2: Intellisense When Manipulating Browser Objects

It is pretty common with JavaScript to manipulate the DOM of a page, as well as work against browser objects available on the client.  Previous versions of Visual Studio would provide JavaScript statement completion against the standard browser objects – but didn’t provide much help with more advanced scenarios (like creating dynamic variables and methods).  VS 2010’s pseudo-execution of code within the editor now allows us to provide rich intellisense for a much broader set of scenarios.  For example, below we are using the browser’s window object to create a global variable named “bar”.  Notice how we can now get intellisense (with correct type inference for a string) with VS 2010 when we later try and use it.  When we assign the “bar” variable as a number (instead of as a string) the VS 2010 intellisense engine correctly infers its type and modifies statement completion appropriately to be that of a number instead.

Scenario 3: Showing Off

Because VS 2010 is pseudo-executing code within the editor, it is able to handle a bunch of scenarios (both practical and wacky) that you throw at it – and is still able to provide accurate type inference and intellisense.  For example, below we are using a for-loop and the browser’s window object to dynamically create and name multiple dynamic variables (bar1, bar2, bar3…bar9).
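(The original post illustrates each of these scenarios with editor screenshots, which are not reproduced here. The snippet below is only a rough reconstruction of the kind of code being described, with the variable names taken from the prose.)

// Dynamically create the globals bar1 ... bar9 through the browser's window object.
for (var i = 1; i < 10; i++) {
    window["bar" + i] = "value " + i;
}

// Because the editor pseudo-executes the loop, typing "bar1." afterwards brings up
// string members (length, charAt, ...) in the statement completion list.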
Notice how the editor’s intellisense engine identifies and provides statement completion for them.  Because variables added via the browser’s window object are also global variables, they also now show up in the global variable intellisense drop-down as well.

Better yet – type inference is still fully supported.  So if we assign a string to a dynamically named variable we will get type inference for a string.  If we assign a number we’ll get type inference for a number.  Just for fun (and to show off!) we could adjust our for-loop to assign a string for even numbered variables (bar2, bar4, bar6, etc) and assign a number for odd numbered variables (bar1, bar3, bar5, etc).  In that case we get statement completion for a string for the “bar2” variable, and for “bar1” we get statement completion for a number.

This isn’t just a cool pet trick

While the above example is a bit contrived, the approach of dynamically creating variables, methods and event handlers on the fly is pretty common with many JavaScript libraries.  Many of the more popular libraries use these techniques to keep the size of script library downloads as small as possible.  VS 2010’s support for parsing and pseudo-executing libraries that use these techniques ensures that you get better code Intellisense out of the box when programming against them.

Summary

Visual Studio 2010 (and the free Visual Web Developer 2010 Express) now provides much richer JavaScript intellisense support.  This support works with pretty much all popular JavaScript libraries.  It should help provide a much better development experience when coding client-side JavaScript and enabling AJAX scenarios within your ASP.NET applications.

Hope this helps,

Scott

P.S. You can read my previous blog post on VS 2008’s JavaScript Intellisense to learn more about our previous JavaScript intellisense (and some of the scenarios it supported).  VS 2010 obviously supports all of the scenarios previously enabled with VS 2008.

    Read the article
