Search Results

Search found 61242 results on 2450 pages for 'service name'.


  • Anatomy of a .NET Assembly - CLR metadata 1

    - by Simon Cooper
    Before we look at the bytes comprising the CLR-specific data inside an assembly, we first need to understand the logical format of the metadata (for this post I'll only be looking at simple pure-IL assemblies; mixed-mode assemblies & other things complicate matters quite a bit).
    Metadata streams
    Most of the CLR-specific data inside an assembly is inside one of 5 streams, which are analogous to the sections in a PE file. The name of each section in a PE file starts with a ., and the name of each stream in the CLR metadata starts with a #. All but one of the streams are heaps, which store unstructured binary data. The predefined streams are:
    #~ Also called the metadata stream, this stream stores all the information on the types, methods, fields, properties and events in the assembly. Unlike the other streams, the metadata stream has predefined contents & structure.
    #Strings This heap is where all the namespace, type & member names are stored. It is referenced extensively from the #~ stream, as we'll see later.
    #US Also known as the user string heap, this stream stores all the strings used in code directly. All the strings you embed in your source code end up in here. This stream is only referenced from method bodies.
    #GUID This heap exclusively stores GUIDs used throughout the assembly.
    #Blob This heap is for storing pure binary data - method signatures, generic instantiations, that sort of thing.
    Items inside the heaps (#Strings, #US, #GUID and #Blob) are indexed using a simple binary offset from the start of the heap. At that offset is a coded integer giving the length of that item, and the item's bytes immediately follow. The #GUID stream is slightly different, in that GUIDs are all 16 bytes long, so a length isn't required.
    Metadata tables
    The #~ stream contains all the assembly metadata. The metadata is organised into 45 tables, which are binary arrays of predefined structures containing information on various aspects of the metadata. Each entry in a table is called a row, and the rows are simply concatenated together in the file on disk. For example, each row in the TypeRef table contains, in that order: a reference to where the type is defined (most of the time, a row in the AssemblyRef table); an offset into the #Strings heap with the name of the type; and an offset into the #Strings heap with the namespace of the type.
    The important tables are (with their table number in hex):
    0x2: TypeDef, 0x4: FieldDef, 0x6: MethodDef, 0x14: EventDef, 0x17: PropertyDef - contain basic information on all the types, fields, methods, events and properties defined in the assembly.
    0x1: TypeRef - the details of all the referenced types defined in other assemblies.
    0xa: MemberRef - the details of all the referenced members of types defined in other assemblies.
    0x9: InterfaceImpl - links the types defined in the assembly with the interfaces that type implements.
    0xc: CustomAttribute - contains information on all the attributes applied to elements in this assembly, from method parameters to the assembly itself.
    0x18: MethodSemantics - links properties and events with the methods that comprise the get/set or add/remove methods of the property or event.
    0x1b: TypeSpec, 0x2b: MethodSpec - these tables provide instantiations of generic types and methods for each usage within the assembly.
    There are several ways to reference a single row within a table. The simplest is simply to specify the 1-based row index (RID). The indexes are 1-based so a value of 0 can represent 'null'.
    In this case, which table the row index refers to is inferred from the context. If the table can't be determined from the context, then a particular row is specified using a token. This is a 4-byte value with the most significant byte specifying the table, and the other 3 specifying the 1-based RID within that table. This is generally how a metadata table row is referenced from the instruction stream in method bodies. The third way is to use a coded token, which we will look at in the next post.
    So, back to the bytes
    Now that we've got a rough idea of how the metadata is logically arranged, we can look at the bytes comprising the start of the CLR data within an assembly. The first 8 bytes of the .text section are used by the CLR loader stub. After that, the CLR-specific data starts with the CLI header. I've highlighted the important bytes in the diagram. In order, they are:
    The size of the header. As the header is a fixed size, this is always 0x48.
    The CLR major version. This is always 2, even for .NET 4 assemblies.
    The CLR minor version. This is always 5, even for .NET 4 assemblies, and seems to be ignored by the runtime.
    The RVA and size of the metadata header. In the diagram, the RVA 0x20e4 corresponds to the file offset 0x2e4.
    Various flags specifying whether this assembly is pure-IL, whether it is strong name signed, and whether it should be run as 32-bit (this is how the CLR differentiates between x86 and AnyCPU assemblies).
    A token pointing to the entrypoint of the assembly. In this case, 06 (the last byte) refers to the MethodDef table, and 01 00 00 refers to the first row in that table.
    (After a gap) The RVA of the strong name signature hash, which comes straight after the CLI header. The RVA 0x2050 corresponds to file offset 0x250.
    The rest of the CLI header is mainly used in mixed-mode assemblies, and so is zeroed in this pure-IL assembly. After the CLI header comes the strong name hash, which is a SHA-1 hash of the assembly using the strong name key. After that come the bodies of all the methods in the assembly, concatenated together. Each method body starts off with a header, which I'll be looking at later. As you can see, this is a very small assembly with only 2 methods (an instance constructor and a Main method). After that, near the end of the .text section, comes the metadata, containing a metadata header and the 5 streams discussed above. We'll be looking at this in the next post.
    Conclusion
    The CLI header data doesn't have much to it, but we've covered some concepts that will be important in later posts - the logical structure of the CLR metadata and the overall layout of CLR data within the .text section. Next, I'll have a look at the contents of the #~ stream, and how the table data is arranged on disk.
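    To make the token format concrete, here is a minimal C# sketch (an illustration of the layout described above, not code from the post) that splits a metadata token such as the entrypoint token into its table number and RID:

    // Decode a 4-byte metadata token: high byte selects the table, low 3 bytes are the 1-based RID.
    // The entrypoint bytes above (01 00 00 06, little-endian) form the token 0x06000001.
    static class TokenDecoder
    {
        public static void Decode(uint token, out byte table, out int rid)
        {
            table = (byte)(token >> 24);        // 0x06 = MethodDef table
            rid = (int)(token & 0x00FFFFFF);    // 1 = first row; 0 would mean 'null'
        }
    }
    // TokenDecoder.Decode(0x06000001, out byte table, out int rid) gives table 0x06, rid 1,
    // i.e. row 1 of the MethodDef table, matching the entrypoint described above.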

    Read the article

  • April Oracle Database Events

    - by Mandy Ho
    April 17-18, 2012 – Moscow, Russia
    Oracle Develop Conference
    The Oracle database developer conference, Oracle Develop, will visit Moscow, Russia and Hyderabad, India this spring. Oracle Develop includes a database development track, which contains .NET sessions and a hands-on lab led by an Oracle .NET product manager. Register today before the event fills up. http://www.oracle.com/javaone/ru-en/index.html
    April 24 & 26, 2012 – San Diego, CA & San Jose, CA
    ISC2 Leadership Regional Event Series
    Oracle at the (ISC)2 Security Leadership Series: Herding Clouds -- Managing Cloud Security Concerns and Expectations. Join us for this interactive day-long session where industry leaders, including Oracle solution experts, will talk about how to: dispel the top Cloud security myths; minimize "rogue Cloud" implementations; identify potential compliance pitfalls and how to avoid them; develop contract language for your cloud providers; manage users across your fractured datacenter; and leverage existing technologies to protect data as it moves from the enterprise to the cloud. http://www.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=146972&src=7239493&src=7239493&Act=228
    April 22-26, 2012 – Las Vegas, NV
    IOUG COLLABORATE 12
    From April 22-26, 2012, Oracle takes Las Vegas. Thousands of Oracle professionals will descend upon the Mandalay Bay Convention Center for a week's worth of education sessions, networking opportunities and more, at the only user-driven and user-run Oracle conference - COLLABORATE 12. Your COLLABORATE 12 - IOUG Forum registration comes complete with a bonus: a full-day Deep Dive education program on Sunday! Choose from numerous hot topics, including Real World Performance, High Availability and more, presented by legendary and seasoned Oracle pros, including Tom Kyte and Craig Shallahamer.
    http://www.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=143637&src=7360364&src=7360364&Act=5
    Independent Oracle User Group (IOUG) Regional Events:
    April 16, 2012 – Columbus, Ohio
    Ohio Oracle Users Group
    Higher Performance PL/SQL and Oracle Database 11g PL/SQL New Features – Featuring Steven Feuerstein. http://www.ooug.org/
    April 26, 2012 – Irving, TX
    Dallas Oracle Users Group
    Oracle Database Forum: 5-7 PM, Oracle Corporation, 6031 Connection Drive, Irving, TX. http://memberservices.membee.com/538/irmevents.aspx?id=64
    April 30, 2012 – Los Angeles, CA
    Real World Performance Tour
    A full day of real-world database performance with Tom Kyte, author of the ever-popular AskTom blog; Andrew Holdsworth, head of Oracle's Real World Performance Team; and Graham Wood, renowned Oracle Database performance architect. Through discussion, debate and demos, they'll show you how to master performance engineering topics like: best practices for designing hardware architectures and how to spot and fix bad design; how to develop applications that deliver the fastest possible performance without sacrificing accuracy; and, new for 2012, updates on Enterprise Manager, Exadata, and what these technologies mean to your current systems. http://www.ioug.org/Events/ADayofRealWorldPerformance/tabid/194/Default.aspx

    Read the article

  • Pluggable Rules for Entity Framework Code First

    - by Ricardo Peres
    Suppose you want a system that lets you plug custom validation rules on your Entity Framework context. The rules would control whether an entity can be saved, updated or deleted, and would be implemented in plain .NET. Yes, I know I already talked about plugable validation in Entity Framework Code First, but this is a different approach. An example API is in order, first, a ruleset, which will hold the collection of rules: 1: public interface IRuleset : IDisposable 2: { 3: void AddRule<T>(IRule<T> rule); 4: IEnumerable<IRule<T>> GetRules<T>(); 5: } Next, a rule: 1: public interface IRule<T> 2: { 3: Boolean CanSave(T entity, DbContext ctx); 4: Boolean CanUpdate(T entity, DbContext ctx); 5: Boolean CanDelete(T entity, DbContext ctx); 6: String Name 7: { 8: get; 9: } 10: } Let’s analyze what we have, starting with the ruleset: Only has methods for adding a rule, specific to an entity type, and to list all rules of this entity type; By implementing IDisposable, we allow it to be cancelled, by disposing of it when we no longer want its rules to be applied. A rule, on the other hand: Has discrete methods for checking if a given entity can be saved, updated or deleted, which receive as parameters the entity itself and a pointer to the DbContext to which the ruleset was applied; Has a name property for helping us identifying what failed. A ruleset really doesn’t need a public implementation, all we need is its interface. The private (internal) implementation might look like this: 1: sealed class Ruleset : IRuleset 2: { 3: private readonly IDictionary<Type, HashSet<Object>> rules = new Dictionary<Type, HashSet<Object>>(); 4: private ObjectContext octx = null; 5:  6: internal Ruleset(ObjectContext octx) 7: { 8: this.octx = octx; 9: } 10:  11: public void AddRule<T>(IRule<T> rule) 12: { 13: if (this.rules.ContainsKey(typeof(T)) == false) 14: { 15: this.rules[typeof(T)] = new HashSet<Object>(); 16: } 17:  18: this.rules[typeof(T)].Add(rule); 19: } 20:  21: public IEnumerable<IRule<T>> GetRules<T>() 22: { 23: if (this.rules.ContainsKey(typeof(T)) == true) 24: { 25: foreach (IRule<T> rule in this.rules[typeof(T)]) 26: { 27: yield return (rule); 28: } 29: } 30: } 31:  32: public void Dispose() 33: { 34: this.octx.SavingChanges -= RulesExtensions.OnSaving; 35: RulesExtensions.rulesets.Remove(this.octx); 36: this.octx = null; 37:  38: this.rules.Clear(); 39: } 40: } Basically, this implementation: Stores the ObjectContext of the DbContext to which it was created for, this is so that later we can remove the association; Has a collection - a set, actually, which does not allow duplication - of rules indexed by the real Type of an entity (because of proxying, an entity may be of a type that inherits from the class that we declared); Has generic methods for adding and enumerating rules of a given type; Has a Dispose method for cancelling the enforcement of the rules. 
A (really dumb) rule applied to Product might look like this: 1: class ProductRule : IRule<Product> 2: { 3: #region IRule<Product> Members 4:  5: public String Name 6: { 7: get 8: { 9: return ("Rule 1"); 10: } 11: } 12:  13: public Boolean CanSave(Product entity, DbContext ctx) 14: { 15: return (entity.Price > 10000); 16: } 17:  18: public Boolean CanUpdate(Product entity, DbContext ctx) 19: { 20: return (true); 21: } 22:  23: public Boolean CanDelete(Product entity, DbContext ctx) 24: { 25: return (true); 26: } 27:  28: #endregion 29: } The DbContext is there because we may need to check something else in the database before deciding whether to allow an operation or not. And here’s how to apply this mechanism to any DbContext, without requiring the usage of a subclass, by means of an extension method: 1: public static class RulesExtensions 2: { 3: private static readonly MethodInfo getRulesMethod = typeof(IRuleset).GetMethod("GetRules"); 4: internal static readonly IDictionary<ObjectContext, Tuple<IRuleset, DbContext>> rulesets = new Dictionary<ObjectContext, Tuple<IRuleset, DbContext>>(); 5:  6: private static Type GetRealType(Object entity) 7: { 8: return (entity.GetType().Assembly.IsDynamic == true ? entity.GetType().BaseType : entity.GetType()); 9: } 10:  11: internal static void OnSaving(Object sender, EventArgs e) 12: { 13: ObjectContext octx = sender as ObjectContext; 14: IRuleset ruleset = rulesets[octx].Item1; 15: DbContext ctx = rulesets[octx].Item2; 16:  17: foreach (ObjectStateEntry entry in octx.ObjectStateManager.GetObjectStateEntries(EntityState.Added)) 18: { 19: Object entity = entry.Entity; 20: Type realType = GetRealType(entity); 21:  22: foreach (dynamic rule in (getRulesMethod.MakeGenericMethod(realType).Invoke(ruleset, null) as IEnumerable)) 23: { 24: if (rule.CanSave(entity, ctx) == false) 25: { 26: throw (new Exception(String.Format("Cannot save entity {0} due to rule {1}", entity, rule.Name))); 27: } 28: } 29: } 30:  31: foreach (ObjectStateEntry entry in octx.ObjectStateManager.GetObjectStateEntries(EntityState.Deleted)) 32: { 33: Object entity = entry.Entity; 34: Type realType = GetRealType(entity); 35:  36: foreach (dynamic rule in (getRulesMethod.MakeGenericMethod(realType).Invoke(ruleset, null) as IEnumerable)) 37: { 38: if (rule.CanDelete(entity, ctx) == false) 39: { 40: throw (new Exception(String.Format("Cannot delete entity {0} due to rule {1}", entity, rule.Name))); 41: } 42: } 43: } 44:  45: foreach (ObjectStateEntry entry in octx.ObjectStateManager.GetObjectStateEntries(EntityState.Modified)) 46: { 47: Object entity = entry.Entity; 48: Type realType = GetRealType(entity); 49:  50: foreach (dynamic rule in (getRulesMethod.MakeGenericMethod(realType).Invoke(ruleset, null) as IEnumerable)) 51: { 52: if (rule.CanUpdate(entity, ctx) == false) 53: { 54: throw (new Exception(String.Format("Cannot update entity {0} due to rule {1}", entity, rule.Name))); 55: } 56: } 57: } 58: } 59:  60: public static IRuleset CreateRuleset(this DbContext context) 61: { 62: Tuple<IRuleset, DbContext> ruleset = null; 63: ObjectContext octx = (context as IObjectContextAdapter).ObjectContext; 64:  65: if (rulesets.TryGetValue(octx, out ruleset) == false) 66: { 67: ruleset = rulesets[octx] = new Tuple<IRuleset, DbContext>(new Ruleset(octx), context); 68: 69: octx.SavingChanges += OnSaving; 70: } 71:  72: return (ruleset.Item1); 73: } 74: } It relies on the SavingChanges event of the ObjectContext to intercept the saving operations before they are actually issued. 
Yes, it uses a bit of dynamic magic! Very handy, by the way! So, let’s put it all together: 1: using (MyContext ctx = new MyContext()) 2: { 3: IRuleset rules = ctx.CreateRuleset(); 4: rules.AddRule(new ProductRule()); 5:  6: ctx.Products.Add(new Product() { Name = "xyz", Price = 50000 }); 7:  8: ctx.SaveChanges(); //an exception is fired here 9:  10: //when we no longer need to apply the rules 11: rules.Dispose(); 12: } Feel free to use it and extend it any way you like, and do give me your feedback! As a final note, this can be easily changed to support plain old Entity Framework (not Code First, that is), if that is what you are using.
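    On that note, here is a rough sketch of a related variation (purely an assumption on my part, using a hypothetical RuledContext class and a single entity set, and reusing the IRule<T> interface from above - not part of the implementation presented here): in Code First, the rules could also be run from an overridden SaveChanges instead of hooking the SavingChanges event.

    // Sketch: run rules from an overridden SaveChanges instead of the SavingChanges event.
    // Needs the System, System.Collections.Generic, System.Data, System.Data.Entity and System.Linq namespaces (EF 4.1+ DbContext API).
    public class RuledContext : DbContext
    {
        public DbSet<Product> Products { get; set; }

        private readonly List<IRule<Product>> productRules = new List<IRule<Product>>();

        public void AddProductRule(IRule<Product> rule)
        {
            this.productRules.Add(rule);
        }

        public override int SaveChanges()
        {
            // Validate newly added products against the registered rules before they reach the database.
            foreach (var entry in this.ChangeTracker.Entries<Product>()
                                      .Where(e => e.State == EntityState.Added))
            {
                foreach (var rule in this.productRules)
                {
                    if (rule.CanSave(entry.Entity, this) == false)
                    {
                        throw new InvalidOperationException(
                            String.Format("Cannot save entity {0} due to rule {1}", entry.Entity, rule.Name));
                    }
                }
            }

            return base.SaveChanges();
        }
    }

    The obvious trade-off is that this simplified version loses the generic dispatch by real entity type that the dynamic-based extension method above provides.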

    Read the article

  • Create and Backup Multiple Profiles in Google Chrome

    - by Asian Angel
    Other browsers such as Firefox and SeaMonkey allow you to have multiple profiles but not Chrome…at least not until now. If you want to use multiple profiles and create backups for them then join us as we look at Google Chrome Backup. Note: There is a paid version of this program available but we used the free version for our article. Google Chrome Backup in Action During the installation process you will run across this particular window. It will have a default user name filled in as shown here…you will not need to do anything except click on Next to continue installing the program. When you start the program for the first time this is what you will see. Your default Chrome Profile will already be visible in the window. A quick look at the Profile Menu… In the Tools Menu you can go ahead and disable the Start program at Windows Startup setting…the only time that you will need the program running is if you are creating or restoring a profile. When you create a new profile the process will start with this window. You can access an Advanced Options mode if desired but most likely you will not need it. Here is a look at the Advanced Options mode. It is mainly focused on adding Switches to the new Chrome Shortcut. The drop-down menu for the Switches available… To create your new profile you will need to choose: A profile location A profile name (as you type/create the profile name it will automatically be added to the Profile Path) Make certain that the Create a new shortcut to access new profile option is checked For our example we decided to try out the Disable plugins switch option… Click OK to create the new profile. Once you have created your new profile, you will find a new shortcut on the Desktop. Notice that the shortcut’s name will be Google Chrome + profile name that you chose. Note: On our system we were able to move the new shortcut to the “Start Menu” without problems. Clicking on our new profile’s shortcut opened up a fresh and clean looking instance of Chrome. Just out of curiosity we did decide to check the shortcut to see if the Switch set up correctly. Unfortunately it did not in this instance…so your mileage with the Switches may vary. This was just a minor quirk and nothing to get excited or upset over…especially considering that you can create multiple profiles so easily. After opening up our default profile of Chrome you can see the individual profile icons (New & Default in order) sitting in the Taskbar side-by-side. And our two profiles open at the same time on our Desktop… Backing Profiles Up For the next part of our tests we decided to create a backup for each of our profiles. Starting the wizard will allow you to choose between creating or restoring a profile. Note: To create or restore a backup click on Run Wizard. When you reach the second part of the process you can go with the Backup default profile option or choose a particular one from a drop-down list using the Select a profile to backup option. We chose to backup the Default Profile first… In the third part of the process you will need to select a location to save the profile to. Once you have selected the location you will see the Target Path as shown here. You can choose your own name for the backup file…we decided to go with the default name instead since it contained the backup’s calendar date. A very nice feature is the ability to have the cache cleared before creating the backup. We clicked on Yes…choose the option that best suits your needs. 
    Once you have chosen either Yes or No the backup will then be created. Click Finish to complete the process. The backup file for our Default Profile at 14.0 MB in size. And the backup file for our Chrome Fresh Profile…2.81 MB. Restoring Profiles For the final part of our tests we decided to do a Restore. Select Restore and click Next to get the process started. In the second step you will need to browse for the Profile Backup File (and select the desired profile if you have created multiples). For our example we decided to overwrite the original Default Profile with the Chrome Fresh Profile. The third step lets you choose where to restore the chosen profile to…you can go with the Default Profile or choose one from the drop-down list using the Restore to a selected profile option. The final step will get you on your way to restoring the chosen profile. The program will conduct a check regarding the previous/old profile and ask if you would like to proceed with overwriting it. Definitely nice in case you change your mind at the last moment. Clicking Yes will finish the restoration. The only other odd quirk that we noticed while using the program was that the Next Button did not function after restoring the profile. You can easily get around the problem by clicking to close the window. Which one is which? After the restore process we had identical twins. Conclusion If you have been looking for a way to create multiple profiles in Google Chrome, then you might want to add this program to your system. Links Download Google Chrome Backup

    Read the article

  • Design pattern for cost calculator app?

    - by Anders Svensson
    Hi, I have a problem that I’ve tried to get help for before, but I wasn’t able to solve it then, so I’m trying to simplify the problem now to see if I can get some more concrete help with this because it is driving me crazy… Basically, I have a working (more complex) version of this application, which is a project cost calculator. But because I am at the same time trying to learn to design my applications better, I would like some input on how I could improve this design. Basically the main thing I want is input on the conditionals that (here) appear repeated in two places. The suggestions I got before was to use the strategy pattern or factory pattern. I also know about the Martin Fowler book with the suggestion to Refactor conditional with polymorphism. I understand that principle in his simpler example. But how can I do either of these things here (if any would be suitable)? The way I see it, the calculation is dependent on a couple of conditions: 1. What kind of service is it, writing or analysis? 2. Is the project small, medium or large? (Please note that there may be other parameters as well, equally different, such as “are the products new or previously existing?” So such parameters should be possible to add, but I tried to keep the example simple with only two parameters to be able to get concrete help) So refactoring with polymorphism would imply creating a number of subclasses, which I already have for the first condition (type of service), and should I really create more subclasses for the second condition as well (size)? What would that become, AnalysisSmall, AnalysisMedium, AnalysisLarge, WritingSmall, etc…??? No, I know that’s not good, I just don’t see how to work with that pattern anyway else? I see the same problem basically for the suggestions of using the strategy pattern (and the factory pattern as I see it would just be a helper to achieve the polymorphism above). So please, if anyone has concrete suggestions as to how to design these classes the best way I would be really grateful! Please also consider whether I have chosen the objects correctly too, or if they need to be redesigned. (Responses like "you should consider the factory pattern" will obviously not be helpful... 
I've already been down that road and I'm stumped at precisely how in this case) Regards, Anders The code (very simplified, don’t mind the fact that I’m using strings instead of enums, not using a config file for data etc, that will be done as necessary in the real application once I get the hang of these design problems): public abstract class Service { protected Dictionary<string, int> _hours; protected const int SMALL = 2; protected const int MEDIUM = 8; public int NumberOfProducts { get; set; } public abstract int GetHours(); } public class Writing : Service { public Writing(int numberOfProducts) { NumberOfProducts = numberOfProducts; _hours = new Dictionary<string, int> { { "small", 125 }, { "medium", 100 }, { "large", 60 } }; } public override int GetHours() { if (NumberOfProducts <= SMALL) return _hours["small"] * NumberOfProducts; if (NumberOfProducts <= MEDIUM) return (_hours["small"] * SMALL) + (_hours["medium"] * (NumberOfProducts - SMALL)); return (_hours["small"] * SMALL) + (_hours["medium"] * (MEDIUM - SMALL)) + (_hours["large"] * (NumberOfProducts - MEDIUM)); } } public class Analysis : Service { public Analysis(int numberOfProducts) { NumberOfProducts = numberOfProducts; _hours = new Dictionary<string, int> { { "small", 56 }, { "medium", 104 }, { "large", 200 } }; } public override int GetHours() { if (NumberOfProducts <= SMALL) return _hours["small"]; if (NumberOfProducts <= MEDIUM) return _hours["medium"]; return _hours["large"]; } } public partial class Form1 : Form { public Form1() { InitializeComponent(); List<int> quantities = new List<int>(); for (int i = 0; i < 100; i++) { quantities.Add(i); } comboBoxNumberOfProducts.DataSource = quantities; } private void comboBoxNumberOfProducts_SelectedIndexChanged(object sender, EventArgs e) { Service writing = new Writing((int) comboBoxNumberOfProducts.SelectedItem); Service analysis = new Analysis((int) comboBoxNumberOfProducts.SelectedItem); labelWriterHours.Text = writing.GetHours().ToString(); labelAnalysisHours.Text = analysis.GetHours().ToString(); } }
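    To make the strategy suggestion concrete, here is a rough sketch of what I imagine it could look like (hypothetical names, just an illustration, not something I claim is the right answer): the size conditionals move into interchangeable strategy objects, and each service becomes data (a rate table) plus a strategy, so a new parameter means a new strategy or a new table entry rather than a subclass per combination.

    // Sketch only: strategy objects own the size logic; Service composes a rate table with one of them.
    // (Needs System.Collections.Generic.)
    public interface IHoursStrategy
    {
        int GetHours(int numberOfProducts, IDictionary<string, int> hours);
    }

    // Writing-style calculation: each band of products is charged at that band's rate.
    public class CumulativeTiersStrategy : IHoursStrategy
    {
        private const int SMALL = 2;
        private const int MEDIUM = 8;

        public int GetHours(int n, IDictionary<string, int> hours)
        {
            if (n <= SMALL) return hours["small"] * n;
            if (n <= MEDIUM) return (hours["small"] * SMALL) + (hours["medium"] * (n - SMALL));
            return (hours["small"] * SMALL) + (hours["medium"] * (MEDIUM - SMALL)) + (hours["large"] * (n - MEDIUM));
        }
    }

    // Analysis-style calculation: a flat figure depending on which band the project falls into.
    public class FlatTierStrategy : IHoursStrategy
    {
        private const int SMALL = 2;
        private const int MEDIUM = 8;

        public int GetHours(int n, IDictionary<string, int> hours)
        {
            if (n <= SMALL) return hours["small"];
            if (n <= MEDIUM) return hours["medium"];
            return hours["large"];
        }
    }

    public class Service
    {
        private readonly IDictionary<string, int> _hours;
        private readonly IHoursStrategy _strategy;

        public Service(IDictionary<string, int> hours, IHoursStrategy strategy)
        {
            _hours = hours;
            _strategy = strategy;
        }

        public int GetHours(int numberOfProducts)
        {
            return _strategy.GetHours(numberOfProducts, _hours);
        }
    }

    // Usage (rates taken from the classes above):
    // var writing  = new Service(new Dictionary<string, int> { { "small", 125 }, { "medium", 100 }, { "large", 60 } }, new CumulativeTiersStrategy());
    // var analysis = new Service(new Dictionary<string, int> { { "small", 56 }, { "medium", 104 }, { "large", 200 } }, new FlatTierStrategy());

    The conditionals don't disappear, but each one lives in exactly one strategy, which is the part I'm unsure is "good enough", so comments on this sketch are welcome too.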

    Read the article

  • Accelerated C++, problem 5-6 (copying values from inside a vector to the front)

    - by Darel
    Hello, I'm working through the exercises in Accelerated C++ and I'm stuck on question 5-6. Here's the problem description: (somewhat abbreviated, I've removed extraneous info.) 5-6. Write the extract_fails function so that it copies the records for the passing students to the beginning of students, and then uses the resize function to remove the extra elements from the end of students. (students is a vector of student structures. student structures contain an individual student's name and grades.) More specifically, I'm having trouble getting the vector.insert function to properly copy the passing student structures to the start of the vector students. Here's the extract_fails function as I have it so far (note it doesn't resize the vector yet, as directed by the problem description; that should be trivial once I get past my current issue.) // Extract the students who failed from the "students" vector. void extract_fails(vector<Student_info>& students) { typedef vector<Student_info>::size_type str_sz; typedef vector<Student_info>::iterator iter; iter it = students.begin(); str_sz i = 0, count = 0; while (it != students.end()) { // fgrade tests wether or not the student failed if (!fgrade(*it)) { // if student passed, copy to front of vector students.insert(students.begin(), it, it); // tracks of the number of passing students(so we can properly resize the array) count++; } cout << it->name << endl; // output to verify that each student is iterated to it++; } } The code compiles and runs, but the students vector isn't adding any student structures to its front. My program's output displays that the students vector is unchanged. Here's my complete source code, followed by a sample input file (I redirect input from the console by typing " < grades" after the compiled program name at the command prompt.) #include <iostream> #include <string> #include <algorithm> // to get the declaration of `sort' #include <stdexcept> // to get the declaration of `domain_error' #include <vector> // to get the declaration of `vector' //driver program for grade partitioning examples using std::cin; using std::cout; using std::endl; using std::string; using std::domain_error; using std::sort; using std::vector; using std::max; using std::istream; struct Student_info { std::string name; double midterm, final; std::vector<double> homework; }; bool compare(const Student_info&, const Student_info&); std::istream& read(std::istream&, Student_info&); std::istream& read_hw(std::istream&, std::vector<double>&); double median(std::vector<double>); double grade(double, double, double); double grade(double, double, const std::vector<double>&); double grade(const Student_info&); bool fgrade(const Student_info&); void extract_fails(vector<Student_info>& v); int main() { vector<Student_info> vs; Student_info s; string::size_type maxlen = 0; while (read(cin, s)) { maxlen = max(maxlen, s.name.size()); vs.push_back(s); } sort(vs.begin(), vs.end(), compare); extract_fails(vs); // display the new, modified vector - it should be larger than // the input vector, due to some student structures being // added to the front of the vector. cout << "count: " << vs.size() << endl << endl; vector<Student_info>::iterator it = vs.begin(); while (it != vs.end()) cout << it++->name << endl; return 0; } // Extract the students who failed from the "students" vector. 
void extract_fails(vector<Student_info>& students) { typedef vector<Student_info>::size_type str_sz; typedef vector<Student_info>::iterator iter; iter it = students.begin(); str_sz i = 0, count = 0; while (it != students.end()) { // fgrade tests wether or not the student failed if (!fgrade(*it)) { // if student passed, copy to front of vector students.insert(students.begin(), it, it); // tracks of the number of passing students(so we can properly resize the array) count++; } cout << it->name << endl; // output to verify that each student is iterated to it++; } } bool compare(const Student_info& x, const Student_info& y) { return x.name < y.name; } istream& read(istream& is, Student_info& s) { // read and store the student's name and midterm and final exam grades is >> s.name >> s.midterm >> s.final; read_hw(is, s.homework); // read and store all the student's homework grades return is; } // read homework grades from an input stream into a `vector<double>' istream& read_hw(istream& in, vector<double>& hw) { if (in) { // get rid of previous contents hw.clear(); // read homework grades double x; while (in >> x) hw.push_back(x); // clear the stream so that input will work for the next student in.clear(); } return in; } // compute the median of a `vector<double>' // note that calling this function copies the entire argument `vector' double median(vector<double> vec) { typedef vector<double>::size_type vec_sz; vec_sz size = vec.size(); if (size == 0) throw domain_error("median of an empty vector"); sort(vec.begin(), vec.end()); vec_sz mid = size/2; return size % 2 == 0 ? (vec[mid] + vec[mid-1]) / 2 : vec[mid]; } // compute a student's overall grade from midterm and final exam grades and homework grade double grade(double midterm, double final, double homework) { return 0.2 * midterm + 0.4 * final + 0.4 * homework; } // compute a student's overall grade from midterm and final exam grades // and vector of homework grades. // this function does not copy its argument, because `median' does so for us. double grade(double midterm, double final, const vector<double>& hw) { if (hw.size() == 0) throw domain_error("student has done no homework"); return grade(midterm, final, median(hw)); } double grade(const Student_info& s) { return grade(s.midterm, s.final, s.homework); } // predicate to determine whether a student failed bool fgrade(const Student_info& s) { return grade(s) < 60; } Sample input file: Moo 100 100 100 100 100 100 100 100 Fail1 45 55 65 80 90 70 65 60 Moore 75 85 77 59 0 85 75 89 Norman 57 78 73 66 78 70 88 89 Olson 89 86 70 90 55 73 80 84 Peerson 47 70 82 73 50 87 73 71 Baker 67 72 73 40 0 78 55 70 Davis 77 70 82 65 70 77 83 81 Edwards 77 72 73 80 90 93 75 90 Fail2 55 55 65 50 55 60 65 60 Thanks to anyone who takes the time to look at this!

    Read the article

  • Create Auto Customization Criteria OAF Search Page

    - by PRajkumar
    1. Create a New Workspace and Project Right click Workspaces and click create new OAworkspace and name it as PRajkumarCustSearch. Automatically a new OA Project will also be created. Name the project as CustSearchDemo and package as prajkumar.oracle.apps.fnd.custsearchdemo   2. Create a New Application Module (AM) Right Click on CustSearchDemo > New > ADF Business Components > Application Module Name -- CustSearchAM Package -- prajkumar.oracle.apps.fnd.custsearchdemo.server   3. Enable Passivation for the Root UI Application Module (AM) Right Click on CustSearchAM > Edit SearchAM > Custom Properties > Name – RETENTION_LEVEL Value – MANAGE_STATE Click add > Apply > OK   4. Create Test Table and insert data some data in it (For Testing Purpose)   CREATE TABLE xx_custsearch_demo (   -- ---------------------     -- Data Columns     -- ---------------------     column1                  VARCHAR2(100),     column2                  VARCHAR2(100),     column3                  VARCHAR2(100),     column4                  VARCHAR2(100),     -- ---------------------     -- Who Columns     -- ---------------------     last_update_date    DATE         NOT NULL,     last_updated_by     NUMBER   NOT NULL,     creation_date          DATE         NOT NULL,     created_by               NUMBER   NOT NULL,     last_update_login   NUMBER  );   INSERT INTO xx_custsearch_demo VALUES('v1','v2','v3','v4',SYSDATE,0,SYSDATE,0,0); INSERT INTO xx_custsearch_demo VALUES('v1','v3','v4','v5',SYSDATE,0,SYSDATE,0,0); INSERT INTO xx_custsearch_demo VALUES('v2','v3','v4','v5',SYSDATE,0,SYSDATE,0,0); INSERT INTO xx_custsearch_demo VALUES('v3','v4','v5','v6',SYSDATE,0,SYSDATE,0,0); Now we have 4 records in our custom table   5. Create a New Entity Object (EO) Right click on SearchDemo > New > ADF Business Components > Entity Object Name – CustSearchEO Package -- prajkumar.oracle.apps.fnd.custsearchdemo.schema.server Database Objects -- XX_CUSTSEARCH_DEMO   Note – By default ROWID will be the primary key if we will not make any column to be primary key   Check the Accessors, Create Method, Validation Method and Remove Method   6. Create a New View Object (VO) Right click on CustSearchDemo > New > ADF Business Components > View Object Name -- CustSearchVO Package -- prajkumar.oracle.apps.fnd.custsearchdemo.server   In Step2 in Entity Page select CustSearchEO and shuttle them to selected list   In Step3 in Attributes Window select columns Column1, Column2, Column3, Column4, and shuttle them to selected list   In Java page deselect Generate Java file for View Object Class: CustSearchVOImpl and Select Generate Java File for View Row Class: CustSearchVORowImpl   7. Add Your View Object to Root UI Application Module Select Right click on CustSearchAM > Application Modules > Data Model Select CustSearchVO and shuttle to Data Model list   8. Create a New Page Right click on CustSearchDemo > New > Web Tier > OA Components > Page Name -- CustSearchPG Package -- prajkumar.oracle.apps.fnd.custsearchdemo.webui   9. Select the CustSearchPG and go to the strcuture pane where a default region has been created   10. Select region1 and set the following properties: ID -- PageLayoutRN Region Style -- PageLayout AM Definition -- prajkumar.oracle.apps.fnd.custsearchdemo.server.CustSearchAM Window Title – AutoCustomize Search Page Window Title – AutoCustomization Search Page Auto Footer -- True   11. 
Add a Query Bean to Your Page Right click on PageLayoutRN > New > Region Select new region region1 and set following properties ID – QueryRN Region Style – query Construction Mode – autoCustomizationCriteria Include Simple Panel – False Include Views Panel – False Include Advanced Panel – False   12. Create a New Region of style table Right Click on QueryRN > New > Region Using Wizard Application Module – prajkumar.oracle.apps.fnd.custsearchdemo.server.CustSearchAM Available View Usages – CustSearchVO1   In Step2 in Region Properties set following properties Region ID – CustSearchTable Region Style – Table   In Step3 in View Attributes shuttle all the items (Column1, Column2, Column3, Column4) available in “Available View Attributes” to Selected View Attributes: In Step4 in Region Items page set style to “messageStyledText” for all items   13. Select CustSearchTable in Structure Panel and set property Width to 100%   14. Include Simple Search Panel Right Click on QueryRN > New > simpleSearchPanel Automatically region2 (header Region) and region1 (MessageComponentLayout Region) created Set Following Properties for region2 Id – SimpleSearchHeader Text -- Simple Search   15. Now right click on message Component Layout Region (SimpleSearchMappings) and create two message text input beans and set the below properties to each   Message TextInputBean1 Id – SearchColumn1 Search Allowed – True Data Type – VARCHAR2 Maximum Length – CSS Class – OraFieldText Prompt – Column1   Message TextInputBean2 Id – SearchColumn2 Search Allowed -- True Data Type – VARCHAR2 Maximum Length – 100 CSS Class – OraFieldText Prompt – Column2   16. Now Right Click on query Components and create simple Search Mappings. Then automatically SimpleSearchMappings and QueryCriteriaMap1 created   17.  Now select the QueryCriteriaMap1 and set the below properties Id – SearchColumn1Map Search Item – SearchColumn1 Result Item – Column1   18. Now again right click on simpleSearchMappings -> New -> queryCriteriaMap, and then set the below properties Id – SearchColumn2Map Search Item – SearchColumn2 Result Item – Column2   19. Congratulation you have successfully finished Auto Customization Search page. Run Your CustSearchPG page and Test Your Work            

    Read the article

  • &lt;%: %&gt;, HtmlEncode, IHtmlString and MvcHtmlString

    - by Shaun
    A colleague and friend of mine, Robin, is playing with (and struggling with) ASP.NET MVC 2 on a project these days, while I'm struggling with an annoying client. Since it's his first time using ASP.NET MVC he was running into a lot of problems, and I was very happy to share my experience with him. Yesterday he told me that when he attempted to insert a <br /> element into his page, the page was rendered badly: his <br /> was shown as part of the string rather than creating a new line. After checking his code a bit I found that it's because he utilized a new ASP.NET markup supported in .NET 4.0 – "<%: %>". If you have been using ASP.NET MVC 1, or the .NET 3.5 world in general, it would be very common to use <%= %> to show something on the page from the backend code. But when you do that you must ensure that the string that is going to be displayed is Html-safe, which means all the Html markup must be encoded; otherwise this might cause an XSS (cross-site scripting) problem. So you'd better use code like this below to display anything on the page. In .NET 4.0 Microsoft introduced a new markup to solve this problem, which is <%: %>. It encodes the content automatically, so you no longer need to check and verify your code manually for the XSS issue mentioned above. But this also means that it will encode everything, including the Html elements you want to be rendered. So I changed his code like this and it worked well. After helping him solve this problem, and finishing a spreadsheet for my boring project, I thought a bit more about <%: %>. Since it encodes everything, why does it render correctly when we use "<%: Html.TextBox("name") %>" to show a text box? As you know, Html.TextBox will render an "<input name="name" id="name" type="text"/>" element on the page. If <%: %> encoded everything, it should not display a text box. So I dug into the source code of MVC and found some comments in the class MvcHtmlString.
    1: // In ASP.NET 4, a new syntax <%: %> is being introduced in WebForms pages, where <%: expression %> is equivalent to
    2: // <%= HttpUtility.HtmlEncode(expression) %>. The intent of this is to reduce common causes of XSS vulnerabilities
    3: // in WebForms pages (WebForms views in the case of MVC). This involves the addition of an interface
    4: // System.Web.IHtmlString and a static method overload System.Web.HttpUtility::HtmlEncode(object). The interface
    5: // definition is roughly:
    6: // public interface IHtmlString {
    7: //     string ToHtmlString();
    8: // }
    9: // And the HtmlEncode(object) logic is roughly:
    10: // - If the input argument is an IHtmlString, return argument.ToHtmlString(),
    11: // - Otherwise, return HtmlEncode(Convert.ToString(argument)).
    12: //
    13: // Unfortunately this has the effect that calling <%: Html.SomeHelper() %> in an MVC application running on .NET 4
    14: // will end up encoding output that is already HTML-safe. As a result, we're changing our HTML helpers to return
    15: // MvcHtmlString where appropriate. <%= Html.SomeHelper() %> will continue to work in both .NET 3.5 and .NET 4, but
    16: // changing the return types to MvcHtmlString has the added benefit that <%: Html.SomeHelper() %> will also work
    17: // properly in .NET 4 rather than resulting in a double-encoded output. MVC developers in .NET 4 will then be able
    18: // to use the <%: %> syntax almost everywhere instead of having to remember where to use <%= %> and where to use
    19: // <%: %>. This should help developers craft more secure web applications by default.
    20: //
    21: // To create an MvcHtmlString, use the static Create() method instead of calling the protected constructor.
    The comment explains the encoding rule of <%: %>: if the content is an IHtmlString it will NOT be encoded, since IHtmlString indicates that it's already Html-safe; otherwise it will use HtmlEncode to encode the content. If we check the return type of the Html.TextBox method we will find that it's MvcHtmlString, which implements the IHtmlString interface. That is the reason why the "<input name="name" id="name" type="text"/>" was not encoded by <%: %>. So if we want to tell ASP.NET MVC - or I should say the ASP.NET runtime - that the content is Html-safe and should not be encoded, we can convert the content into an IHtmlString. So another resolution would be like this. We can also create an extension method for a better development experience.
    1: using System;
    2: using System.Collections.Generic;
    3: using System.Linq;
    4: using System.Web;
    5: using System.Web.Mvc;
    6:
    7: namespace ShaunXu.Blogs.IHtmlStringIssue
    8: {
    9:     public static class Helpers
    10:     {
    11:         public static MvcHtmlString IsHtmlSafe(this string content)
    12:         {
    13:             return MvcHtmlString.Create(content);
    14:         }
    15:     }
    16: }
    Then the view would be like this, and the page rendered correctly.
    Summary
    In this post I explained a bit about the new markup in .NET 4.0 – <%: %> and its usage. I also explained a bit about how to control whether page content should be encoded or not. We can see that ASP.NET MVC gives us more points at which to control the web pages.
    Hope this helps, Shaun
    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
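    For example (a sketch assuming a hypothetical view model with a string property named Content, and that the Helpers namespace above is imported into the view), the difference in the view looks roughly like this:

    <%-- Encoded automatically, so any markup in Content shows up as literal text --%>
    <%: Model.Content %>

    <%-- Wrapped as an IHtmlString via the IsHtmlSafe extension above, so the markup is rendered --%>
    <%: Model.Content.IsHtmlSafe() %>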

    Read the article

  • D2K to OA Framework Transition

    - by PRajkumar
    What is the difference between D2K form and OA Framework? It is a very innocent but important question for someone that desires to make transition from D2K to OA Framework. I hope you have already read and implemented OA Framework Getting Started. I will re-visit my own experience of implementing HelloWorld program in "OA Framework". When I implemented HelloWorld a year ago, I had no clue as to what I was doing & why I was doing those steps. I merely copied the steps from Oracle Tutorial without understanding them. Hence in this blog, I will try to explain in simple manner the meaning of OA Framework HelloWorld Program and compare the steps to D2K form [where possible]. To keep things simple, only basics will be discussed. Following key Steps were needed for HelloWorld Step 1 Create a new Workspace and a new Project as dictated by Oracle's tutorial. When defining project, you will specify a default package, which in this case was oracle.apps.ak.hello This means the following: - ak is the short name of the Application in Oracle           [means fnd_applications.short_name] hello is the name of your project Step 2 Next, you will create a OA Page within hello project Think OA Page as the fmx file itself in D2K. I am saying so because this page gets attached to the form function. This page will be created within hello project, hence the package name oracle.apps.ak.hello.webui Note the webui, it is a convention to have page in webui, means this page represents the Web User Interface You will assign the default AM [OAApplicationModule]. Think of AM "Connection Manager" and "Transaction State Manager" for your page          I can't co-relate this to anything in D2k, as there is no concept of Connection Pooling and that D2k is not stateless. Reason being that as soon as you kick off a D2K Form, it connects to a single session of Oracle and sticks to that single Oracle database session. So is not the case in OAF, hence AM is needed. Step 3 You create Region within the Page. ·         Region is what will store your fields. Text input fields will be of type messageTextInput. Think of Canvas in D2K. You can have nested regions. Stacked Canvas in D2K comes the closest to this component of OA Framework Step 4 Add a button to one of the nested regions The itemStyle should be submitButton, in case you want the page to be submitted when this button is clicked There is no WHEN-BUTTON-PRESSED trigger in OAF. In Framework, you will add a controller java code to handle events like Form Submit button clicks. JDeveloper generates the default code for you. Primarily two functions [should I call methods] will be created processRequest [for UI Rendering Handling] and processFormRequest          Think of processRequest as WHEN-NEW-FORM-INSTANCE, though processRequest is very restrictive. Note What is the difference between processRequest and processFormRequest? These two methods are available in the Default Controller class that gets created. processFormRequest This method is commonly used to react/respond to the event that has taken place, for example click of a button. Some examples are if(oapagecontext.getParameter("Cancel") != null) (Do your processing for Cancellation/ Rollback) if(oapagecontext.getParameter("Submit") != null) (Do your validations and commit here) if(oapagecontext.getParameter("Update") != null) (Do your validations and commit here) In the above three examples, you could be calling oapagecontext.forwardImmediately to re-direct the page navigation to some other page if needed. 
processRequest In this method, usually page rendering related code is written. Effectively, each GUI component is a bean that gets initialised during processRequest. Those who are familiar with D2K forms, something like pre-query may be written in this method. Step 5 In the controller to access the value in field "HelloName" the command is String userContent = pageContext.getParameter("HelloName"); In D2k, we used :block.field. In OAFramework, at submission of page, all the field values get passed into to OAPageContext object. Use getParameter to access the field value To set the value of the field, use OAMessageTextInputBean field HelloName = (OAMessageTextInputBean)webBean.findChildRecursive("HelloName"); fieldHelloName.setText(pageContext,"Setting the default value" ); Note when setting field value in controller: Note 1. Do not set the value in processFormRequest Note 2. If the field comes from View Object, then do not use setText in controller Note 3. For control fields [that are not based on View Objects], you can use setText to assign values in processRequest method Lets take some notes to expand beyond the HelloWorld Project Note 1 In D2K-forms we sort of created a Window, attached to Canvas, and then fields within that Canvas. However in OA Framework, think of Page being fmx/Window, think of Region being a Canvas, and fields being within Regions. This is not a formal/accurate understanding of analogy between D2k and Framework, but is close to being logical. Note 2 In D2k, your Forms fmb file was compiled to fmx. It was fmx file that was deployed on mid-tier. In case of OAF, your OA Page is nothing but a XML file. We call this MDS [meta data]. Whatever name you give to "Page" in OAF, an XML file of the same name gets created. This xml file must then be loaded into database by using XML Importer command. Note 3 Apart from MDS XML file, almost everything else is merely deployed to your mid-tier. Usually this is underneath $JAVA_TOP/oracle/apps/../.. All java files will go underneath java top/oracle/apps/../.. etc. Note 4 When building tutorial, ignore the steps for setting "Attribute Sets". These are not mandatory. Oracle might just have developed their tutorials without including these. Think of these like Visual Attributes of D2K forms Note 5 Controller is where you will write any java code in OA Framework. You can create a Controller per Page or have a different Controller for each of the Regions with the same Page. Note 6 In the method processFormRequest of the Controller, you can access the values of the page by using notation pageContext.getParameter("<fieldname here>"). This method processFormRequest is executed when the OAF Screen/Page is submitted by click of a button. Note 7 Inside the controller, all the Database Related interactions for example interaction with View Objects happen via Application Module. But why so? Because Application Module Manages the transaction state of the Application. OAApplicationModuleImpl oaapplicationmoduleimpl = OAApplicationModuleImpl)oapagecontext.getApplicationModule(oawebbean); OADBTransaction oadbtransaction = OADBTransaction)oaapplicationmoduleimpl.getDBTransaction(); Note 8 In D2K, we have control block or a block based on database view. Similarly, in OA Framework, if the field does not have view Object attached, then it is like a control field. Hence in HelloWorld example, field HelloName is a control field [in D2K terminology]. A view Object can either be based on a view/table, synonym or on a SQL statement. 
Note 9 I wish to access the fields in multi record block that is based on view Object. Can I do this in Controller? Sure you can. To traverse through those records, do the below ·         Get the reference to the View Object using (OAViewObject)oapagecontext.getApplicationModule(oawebbean).findViewObject("VO Name Here") ·         Loop through the records in View Objects using count returned from oaviewobject.getFetchedRowCount() ·         For each record, fetch the value of the fields within the loop as oracle.jbo.Row row = oaviewobject.getRowAtRangeIndex(loop index here); (String)row.getAttribute("Column name of VO here ");

    Read the article

  • Finding the groups of a user in WLS with OPSS

    - by user12587121
    How to find the group memberships for a user from a web application running in Weblogic server ?  This is useful for building up the profile of the user for security purposes for example. WLS as a container offers an identity store service which applications can access to query and manage identities known to the container.  This article for example shows how to recover the groups of the current user, but how can we find the same information for an arbitrary user ? It is the Oracle Platform for Securtiy Services (OPSS) that looks after the identity store in WLS and so it is in the OPSS APIs that we can find the way to recover this information. This is explained in the following documents.  Starting from the FMW 11.1.1.5 book list, with the Security Overview document we can see how WLS uses OPSS: Proceeding to the more detailed Application Security document, we find this list of useful references for security in FMW. We can follow on into the User/Role API javadoc. The Application Security document explains how to ensure that the identity store is configured appropriately to allow the OPSS APIs to work.  We must verify that the jps-config.xml file where the application  is deployed has it's identity store configured--look for the following elements in that file: <serviceProvider type="IDENTITY_STORE" name="idstore.ldap.provider" class="oracle.security.jps.internal.idstore.ldap.LdapIdentityStoreProvider">             <description>LDAP-based IdentityStore Provider</description>  </serviceProvider> <serviceInstance name="idstore.ldap" provider="idstore.ldap.provider">             <property name="idstore.config.provider" value="oracle.security.jps.wls.internal.idstore.WlsLdapIdStoreConfigProvider"/>             <property name="CONNECTION_POOL_CLASS" value="oracle.security.idm.providers.stdldap.JNDIPool"/></serviceInstance> <serviceInstanceRef ref="idstore.ldap"/> The document contains a code sample for using the identity store here. Once we have the identity store reference we can recover the user's group memberships using the RoleManager interface:             RoleManager roleManager = idStore.getRoleManager();            SearchResponse grantedRoles = null;            try{                System.out.println("Retrieving granted WLS roles for user " + userPrincipal.getName());                grantedRoles = roleManager.getGrantedRoles(userPrincipal, false);                while( grantedRoles.hasNext()){                      Identity id = grantedRoles.next();                      System.out.println("  disp name=" + id.getDisplayName() +                                  " Name=" + id.getName() +                                  " Principal=" + id.getPrincipal() +                                  "Unique Name=" + id.getUniqueName());                     // Here, we must use WLSGroupImpl() to build the Principal otherwise                     // OES does not recognize it.                      retSubject.getPrincipals().add(new WLSGroupImpl(id.getPrincipal().getName()));                 }            }catch(Exception ex) {                System.out.println("Error getting roles for user " + ex.getMessage());                ex.printStackTrace();            }        }catch(Exception ex) {            System.out.println("OESGateway: Got exception instantiating idstore reference");        } This small JDeveloper project has a simple servlet that executes a request for the user weblogic's roles on executing a get on the default URL.  
The full code to recover a user's groups is in the getSubjectWithRoles() method in the project.
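For reference, obtaining the identity store reference that the snippet above relies on typically looks something like the following minimal sketch. It assumes the standard OPSS JpsContext/IdentityStoreService APIs and the jps-config.xml shown above; the method name lookupUserPrincipal is purely illustrative and is not part of the original project.

import java.security.Principal;
import oracle.security.idm.IdentityStore;
import oracle.security.idm.User;
import oracle.security.jps.JpsContext;
import oracle.security.jps.JpsContextFactory;
import oracle.security.jps.service.idstore.IdentityStoreService;

// Sketch: resolve the OPSS identity store and find the Principal for an arbitrary user name.
public static Principal lookupUserPrincipal(String userName) throws Exception {
    // Reads the jps-config.xml of the current deployment and returns the default context
    JpsContext ctx = JpsContextFactory.getContextFactory().getContext();
    // The identity store service configured by <serviceInstanceRef ref="idstore.ldap"/>
    IdentityStoreService storeService = ctx.getServiceInstance(IdentityStoreService.class);
    // Drop down to the User/Role API store that exposes getRoleManager()
    IdentityStore idStore = storeService.getIdmStore();
    // Look the user up by name; the resulting principal is what the RoleManager
    // code above passes to roleManager.getGrantedRoles(userPrincipal, false)
    User user = idStore.searchUser(userName);
    return user.getPrincipal();
}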

    Read the article

  • SharePoint logging to a list

    - by Norgean
    I recently worked in an environment with several servers. Locating the correct SharePoint log file for error messages, or development trace calls, is cumbersome. And once the solution hit the cloud, it got even worse, as we had no access to the log files at all. Obviously we are not the only ones with this problem, and the current trend seems to be to log to a list. This had become an off-hour project, so rather than do the sensible thing and find a ready-made solution, I decided to do it the hard way. So! Fire up Visual Studio, create yet another empty SharePoint solution, and start to think of some requirements.
Easy on/off: I want to be able to turn list-logging on and off.
Easy logging: For me, this means being able to use string.Format.
Easy filtering: Let's have the possibility to add some filtering columns; category and severity, where severity can be "verbose", "warning" or "error".
Easy on/off
Well, that's easy. Create a new web feature. Add an event receiver, and create the list on activation of the feature. Tear the list down on de-activation. I chose not to create a new content type; I did not feel that it would give me anything extra. I based the list on the generic list - I think a better choice would have been the announcement type. Approximately:

public void CreateLog(SPWeb web)
{
    var list = web.Lists.TryGetList(LogListName);
    if (list == null)
    {
        var listGuid = web.Lists.Add(LogListName, "Logging for the masses", SPListTemplateType.GenericList);
        list = web.Lists[listGuid];
        list.Title = LogListTitle;
        list.Update();
        list.Fields.Add(Category, SPFieldType.Text, false);
        var stringColl = new StringCollection();
        stringColl.AddRange(new[] { Error, Information, Verbose });
        list.Fields.Add(Severity, SPFieldType.Choice, true, false, stringColl);
        ModifyDefaultView(list);
    }
}

Should be self explanatory, but: only create the list if it does not already exist (d'oh). Best practice: create it with a Url-friendly name, and, if necessary, give it a better title. ...because otherwise you'll have to look for a list with a name like "Simple_x0020_Log". I've added a couple of fields; a field for category, and a 'severity'. Both to make it easier to find relevant log messages. Notice that I don't have to call list.Update() after adding the fields - this would cause a nasty error (something along the lines of "List locked by another user"). The function for deleting the log is exactly as onerous as you'd expect:

public void DeleteLog(SPWeb web)
{
    var list = web.Lists.TryGetList(LogListTitle);
    if (list != null)
    {
        list.Delete();
    }
}

So! "All" that remains is to log. Also known as adding items to a list. Lots of different methods with different signatures end up calling the same function. For example, LogVerbose(web, message) calls LogVerbose(web, null, message) which again calls another method which calls:

private static void Log(SPWeb web, string category, string severity, string textformat, params object[] texts)
{
    if (web != null)
    {
        var list = web.Lists.TryGetList(LogListTitle);
        if (list != null)
        {
            var item = list.AddItem(); // NOTE! NOT list.Items.Add… just don't, mkay?
            var text = string.Format(textformat, texts);
            if (text.Length > 255) // because the title field only holds so many chars. Sigh.
                text = text.Substring(0, 254);
            item[SPBuiltInFieldId.Title] = text;
            item[Severity] = severity;
            item[Category] = category;
            item.Update();
        }
    } // omitted: Also log to SharePoint log.
}

By adding a params parameter I can call it as if I was doing a Console.WriteLine: LogVerbose(web, "demo", "{0} {1}{2}", "hello", "world", '!'); OK, that was a silly example; a better one might be: LogError(web, LogCategory, "Exception caught when updating {0}. exception: {1}", listItem.Title, ex); For performance reasons I use list.AddItem rather than list.Items.Add. For completeness' sake, let us include the "ModifyDefaultView" function that I deliberately skipped earlier.

private void ModifyDefaultView(SPList list)
{
    // Add fields to default view
    var defaultView = list.DefaultView;
    var exists = defaultView.ViewFields.Cast<string>().Any(field => String.CompareOrdinal(field, Severity) == 0);

    if (!exists)
    {
        var field = list.Fields.GetFieldByInternalName(Severity);
        if (field != null)
            defaultView.ViewFields.Add(field);
        field = list.Fields.GetFieldByInternalName(Category);
        if (field != null)
            defaultView.ViewFields.Add(field);
        defaultView.Update();

        var sortDoc = new XmlDocument();
        sortDoc.LoadXml(string.Format("<Query>{0}</Query>", defaultView.Query));
        var orderBy = (XmlElement)sortDoc.SelectSingleNode("//OrderBy");
        if (orderBy != null && sortDoc.DocumentElement != null)
            sortDoc.DocumentElement.RemoveChild(orderBy);
        orderBy = sortDoc.CreateElement("OrderBy");
        sortDoc.DocumentElement.AppendChild(orderBy);
        field = list.Fields[SPBuiltInFieldId.Modified];
        var fieldRef = sortDoc.CreateElement("FieldRef");
        fieldRef.SetAttribute("Name", field.InternalName);
        fieldRef.SetAttribute("Ascending", "FALSE");
        orderBy.AppendChild(fieldRef);

        fieldRef = sortDoc.CreateElement("FieldRef");
        field = list.Fields[SPBuiltInFieldId.ID];
        fieldRef.SetAttribute("Name", field.InternalName);
        fieldRef.SetAttribute("Ascending", "FALSE");
        orderBy.AppendChild(fieldRef);
        defaultView.Query = sortDoc.DocumentElement.InnerXml;
        //defaultView.Query = "<OrderBy><FieldRef Name='Modified' Ascending='FALSE' /><FieldRef Name='ID' Ascending='FALSE' /></OrderBy>";
        defaultView.Update();
    }
}

The first two lines are easy - see if the default view includes the "Severity" column. If it does - quit; our job here is done. Adding "Severity" and "Category" to the view is not exactly rocket science. But then? Then we build the sort order query. Through XML. The lines are numerous, but boring. All to achieve the CAML query which is commented out. The major benefit of using the DOM to build XML is that you may get compile time errors for spelling mistakes.
I say 'may', because although the compiler will not let you forget to close a tag, it will cheerfully let you spell "Name" as "Naem". Whichever you prefer, at the end of the day the view will sort by modified date and ID, both descending. I added the ID as there may be several items with the same time stamp. So! Simple logging to a list, with a sensible view, and with normal functionality for creating your own filters. I should probably have added some more views in code, ready filtered for "only errors", "errors and warnings" etc. And it would be nice to block verbose logging completely, but I'm not happy with the alternatives. (yetanotherfeature or an admin page seem like overkill - perhaps just removing it as one of the choices, and not logging if it isn't there?) Before you comment - yes, try-catches have been removed for clarity. There is nothing worse than having a logging function that breaks your site!
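The feature event receiver that creates and tears down the list is only described in prose above; a minimal sketch of what it could look like is below. It assumes the CreateLog/DeleteLog methods live on a helper class, here called ListLogger, and that the feature is web-scoped - both the class name and the receiver name are illustrative, not taken from the original solution.

using Microsoft.SharePoint;

// Sketch: web-scoped feature receiver that creates the log list on activation
// and removes it again on deactivation, as described in the post.
public class ListLoggerFeatureReceiver : SPFeatureReceiver
{
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        var web = properties.Feature.Parent as SPWeb; // web-scoped, so Parent is an SPWeb
        if (web != null)
        {
            new ListLogger().CreateLog(web);
        }
    }

    public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
    {
        var web = properties.Feature.Parent as SPWeb;
        if (web != null)
        {
            new ListLogger().DeleteLog(web);
        }
    }
}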

    Read the article

  • Maven artifacts could not be resolved

    - by Adam Fisher
    I added the spring and jboss repositories to my pom.xml like below: <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <name>MyProject</name> <url>http://www.myproject.com</url> <modelVersion>4.0.0</modelVersion> <groupId>com.myproject</groupId> <artifactId>myproject</artifactId> <version>1.0-SNAPSHOT</version> <packaging>war</packaging> <dependencies> <dependency> <groupId>com.sun.faces</groupId> <artifactId>jsf-api</artifactId> <version>2.1.3-b02</version> <scope>provided</scope> </dependency> <dependency> <groupId>com.sun.faces</groupId> <artifactId>jsf-impl</artifactId> <version>2.1.3_01</version> <scope>provided</scope> </dependency> <dependency> <groupId>javax.servlet</groupId> <artifactId>jstl</artifactId> <version>1.0.2</version> <scope>runtime</scope> </dependency> <dependency> <groupId>javax.servlet</groupId> <artifactId>servlet-api</artifactId> <version>3.0-alpha-1</version> <scope>runtime</scope> </dependency> <dependency> <groupId>taglibs</groupId> <artifactId>standard</artifactId> <version>1.1.2</version> <scope>runtime</scope> </dependency> <!-- SPRING DEPENDENCIES --> <dependency> <groupId>org.springframework</groupId> <artifactId>spring</artifactId> <version>3.0.6.RELEASE</version> </dependency> <!-- HIBERNATE DEPENDENCIES --> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate</artifactId> <version>3.5.4-Final</version> </dependency> <!-- PRIMEFACES --> <dependency> <groupId>org.primefaces</groupId> <artifactId>primefaces</artifactId> <version>3.0.M4</version> </dependency> <dependency> <groupId>org.primefaces.themes</groupId> <artifactId>aristo</artifactId> <version>1.0.1</version> </dependency> <!-- OTHER DEPENDENCIES --> <dependency> <groupId>org.jsoup</groupId> <artifactId>jsoup</artifactId> <version>1.5.2</version> </dependency> <dependency> <groupId>commons-codec</groupId> <artifactId>commons-codec</artifactId> <version>1.5</version> </dependency> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>5.1.18</version> <scope>provided</scope> </dependency> <dependency> <groupId>net.authorize</groupId> <artifactId>java-anet-sdk</artifactId> <version>1.4.2</version> </dependency> <dependency> <groupId>com.amazonaws</groupId> <artifactId>aws-java-sdk</artifactId> <version>1.2.12</version> </dependency> <dependency> <groupId>com.ocpsoft</groupId> <artifactId>prettyfaces-jsf2</artifactId> <version>3.3.2</version> </dependency> <dependency> <groupId>javax</groupId> <artifactId>javaee-web-api</artifactId> <version>6.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.8.1</version> <scope>test</scope> </dependency> </dependencies> <properties> <endorsed.dir>${project.build.directory}/endorsed</endorsed.dir> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <netbeans.hint.j2eeVersion>1.6</netbeans.hint.j2eeVersion> <netbeans.hint.deploy.server>gfv3ee6</netbeans.hint.deploy.server> </properties> <repositories> <repository> <id>jsf20</id> <name>Repository for library Library[jsf20]</name> <url>http://download.java.net/maven/2/</url> <layout>default</layout> </repository> <repository> <id>prime-repo</id> <name>PrimeFaces Maven Repository</name> <url>http://repository.primefaces.org</url> <layout>default</layout> </repository> <repository> <id>jboss-public-repository-group</id> 
<name>JBoss Public Maven Repository Group</name> <url>https://repository.jboss.org/nexus/content/repositories/releases/</url> <layout>default</layout> </repository> <repository> <id>spring-release</id> <name>Spring Release Repository</name> <url>http://maven.springframework.org/release</url> <layout>default</layout> </repository> </repositories> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>2.3.2</version> <configuration> <source>1.6</source> <target>1.6</target> <compilerArguments> <endorseddirs>${endorsed.dir}</endorseddirs> </compilerArguments> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <version>2.1</version> <configuration> <failOnMissingWebXml>false</failOnMissingWebXml> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-dependency-plugin</artifactId> <version>2.1</version> <executions> <execution> <phase>validate</phase> <goals> <goal>copy</goal> </goals> <configuration> <outputDirectory>${endorsed.dir}</outputDirectory> <silent>true</silent> <artifactItems> <artifactItem> <groupId>javax</groupId> <artifactId>javaee-endorsed-api</artifactId> <version>6.0</version> <type>jar</type> </artifactItem> </artifactItems> </configuration> </execution> </executions> </plugin> </plugins> <finalName>${project.artifactId}</finalName> </build> <!--pluginRepositories> <pluginRepository> <id>caucho</id> <name>Caucho</name> <url>http://caucho.com/m2</url> </pluginRepository> </pluginRepositories--> </project> But when I build, I get an error: The following artifacts could not be resolved: org.springframework:spring:jar:3.0.6.RELEASE, org.hibernate:hibernate:jar:3.5.4-Final: Could not find artifact org.springframework:spring:jar:3.0.6.RELEASE in jsf20 (http://download.java.net/maven/2/) -> [Help 1] It's like maven only looks at the first repository and not the ones defined for spring and hibernate.

    Read the article

  • XSLT templates and recursion

    - by user333411
    Hi All, Im new to XSLT and am having some problems trying to format an XML document which has recursive nodes. My XML Code: Hopefully my XML shows: All <item> are nested with <items> An item can have either just attributes, or sub nodes The level to which <item> nodes are nested can be infinently deep <?xml version="1.0" encoding="utf-8" ?> - <items> <item groupID="1" name="Home" url="//" /> - <item groupID="2" name="Guides" url="/Guides/"> - <items> - <item groupID="26" name="Online-Poker-Guide" url="/Guides/Online-Poker-Guide/"> - <items> - <item> <id>107</id> - <title> - <![CDATA[ Poker Betting - Online Poker Betting Structures ]]> </title> - <url> - <![CDATA[ /Guides/Online-Poker-Guide/online-poker-betting-structures ]]> </url> </item> - <item> <id>114</id> - <title> - <![CDATA[ Beginners&#39; Poker - Poker Hand Ranking ]]> </title> - <url> - <![CDATA[ /Guides/Online-Poker-Guide/online-poker-hand-ranking ]]> </url> </item> - <item> <id>115</id> - <title> - <![CDATA[ Poker Terms - 4th Street and 5th Street ]]> </title> - <url> - <![CDATA[ /Guides/Online-Poker-Guide/online-poker-poker-terms ]]> </url> </item> - <item> <id>116</id> - <title> - <![CDATA[ Popular Poker - The Popularity of Texas Hold&#39;em ]]> </title> - <url> - <![CDATA[ /Guides/Online-Poker-Guide/online-poker-popularity-texas-holdem ]]> </url> </item> - <item> <id>364</id> - <title> - <![CDATA[ The Impact of Traditional Poker on Online Poker (and vice versa) ]]> </title> - <url> - <![CDATA[ /Guides/Online-Poker-Guide/online-poker-tradional-vs-online ]]> </url> </item> - <item> <id>365</id> - <title> - <![CDATA[ The Ultimate, Absolute Online Poker Scandal ]]> </title> - <url> - <![CDATA[ /Guides/Online-Poker-Guide/online-poker-scandal ]]> </url> </item> </items> - <items> - <item groupID="27" name="Beginners-Poker" url="/Guides/Online-Poker-Guide/Beginners-Poker/"> - <items> + <item> <id>101</id> - <title> - <![CDATA[ Poker Betting - All-in On the Flop ]]> </title> - <url> - <![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/poker-betting-all-in-on-the-flop ]]> </url> </item> + <item> <id>102</id> - <title> - <![CDATA[ Beginners&#39; Poker - Choosing an Online Poker Room ]]> </title> - <url> - <![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/beginners-poker-choosing-a-room ]]> </url> </item> + <item> <id>105</id> - <title> - <![CDATA[ Beginners&#39; Poker - Choosing What Type of Poker to Play ]]> </title> - <url> - <![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/beginners-poker-choosing-type-to-play ]]> </url> </item> + <item> <id>106</id> - <title> - <![CDATA[ Online Poker - Different Types of Online Poker ]]> </title> - <url> - <![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/online-poker ]]> </url> </item> + <item> <id>109</id> - <title> - <![CDATA[ Online Poker - Opening an Account at an Online Poker Site ]]> </title> - <url> - <![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/online-poker-opening-an-account ]]> </url> </item> + <item> <id>111</id> - <title> - <![CDATA[ Beginners&#39; Poker - Poker Glossary ]]> </title> - <url> - <![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/beginners-poker-glossary ]]> </url> </item> + <item> <id>117</id> - <title> - <![CDATA[ Poker Betting - What is a Blind? ]]> </title> - <url> - <![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/poker-betting-what-is-a-blind ]]> </url> </item> - <item> <id>118</id> - <title> - <![CDATA[ Poker Betting - What is an Ante? 
]]> </title> - <url> - <![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/poker-betting-what-is-an-ante ]]> </url> </item> + <item> <id>119</id> - <title> - <![CDATA[ Beginners Poker - What is Bluffing? ]]> </title> - <url> - <![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/online-poker-what-is-bluffing ]]> </url> </item> - <item> <id>120</id> - <title> - <![CDATA[ Poker Games - What is Community Card Poker? ]]> </title> - <url> - <![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/online-poker-what-is-community-card-poker ]]> </url> </item> - <item> <id>121</id> - <title> - <![CDATA[ Online Poker - What is Online Poker? ]]> </title> - <url> - <![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/online-poker-what-is-online-poker ]]> </url> </item> </items> </item> </items> </item> </items> </item> </items> The XSL code: <?xml version="1.0" encoding="ISO-8859-1"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output method="html" indent="yes"/> <xsl:template name="loop"> <xsl:for-each select="items/item"> <ul> <li><xsl:value-of select="@name" /></li> <xsl:if test="@name and child::node()"> <ul> <xsl:for-each select="items/item"> <li><xsl:value-of select="@name" />test</li> </xsl:for-each> </ul> <xsl:call-template name="loop" /> </xsl:if> <xsl:if test="child::node() and not(@name)"> <xsl:for-each select="/items"> <li><xsl:value-of select="id" /></li> </xsl:for-each> </xsl:if> </ul> </xsl:for-each> <xsl:for-each select="item/items/item"> <li>hi</li> </xsl:for-each> </xsl:template> <xsl:template match="/" name="test"> <xsl:call-template name="loop" /> </xsl:template> </xsl:stylesheet> Im trying to write the XSL so that every <items> node will render a <ul> and every <items> node will render an <li>. The XSL needs to be recursive because i cant tell how deep the nested nodes will go. Can anyone help? Regards, Al

    Read the article

  • PHP use of undefined constant error

    - by user272899
    Using a great script to grab details from imdb, I would like to thank Fabian Beiner. Just one error i have encountered with it is: Use of undefined constant sys_get_temp_dir assumed 'sys_get_temp_dir' in '/path/to/directory' on line 49 This is the complete script <?php /** * IMDB PHP Parser * * This class can be used to retrieve data from IMDB.com with PHP. This script will fail once in * a while, when IMDB changes *anything* on their HTML. Guys, it's time to provide an API! * * @link http://fabian-beiner.de * @copyright 2010 Fabian Beiner * @author Fabian Beiner (mail [AT] fabian-beiner [DOT] de) * @license MIT License * * @version 4.1 (February 1st, 2010) * */ class IMDB { private $_sHeader = null; private $_sSource = null; private $_sUrl = null; private $_sId = null; public $_bFound = false; private $_oCookie = '/tmp/imdb-grabber-fb.tmp'; const IMDB_CAST = '#<a href="/name/(\w+)/" onclick="\(new Image\(\)\)\.src=\'/rg/castlist/position-(\d|\d\d)/images/b\.gif\?link=/name/(\w+)/\';">(.*)</a>#Ui'; const IMDB_COUNTRY = '#<a href="/Sections/Countries/(\w+)/">#Ui'; const IMDB_DIRECTOR = '#<a href="/name/(\w+)/" onclick="\(new Image\(\)\)\.src=\'/rg/directorlist/position-(\d|\d\d)/images/b.gif\?link=name/(\w+)/\';">(.*)</a><br/>#Ui'; const IMDB_GENRE = '#<a href="/Sections/Genres/(\w+|\w+\-\w+)/">(\w+|\w+\-\w+)</a>#Ui'; const IMDB_MPAA = '#<h5><a href="/mpaa">MPAA</a>:</h5>\s*<div class="info-content">\s*(.*)\s*</div>#Ui'; const IMDB_PLOT = '#<h5>Plot:</h5>\s*<div class="info-content">\s*(.*)\s*<a#Ui'; const IMDB_POSTER = '#<a name="poster" href="(.*)" title="(.*)"><img border="0" alt="(.*)" title="(.*)" src="(.*)" /></a>#Ui'; const IMDB_RATING = '#<b>(\d\.\d/10)</b>#Ui'; const IMDB_RELEASE_DATE = '#<h5>Release Date:</h5>\s*\s*<div class="info-content">\s*(.*) \((.*)\)#Ui'; const IMDB_RUNTIME = '#<h5>Runtime:</h5>\s*<div class="info-content">\s*(.*)\s*</div>#Ui'; const IMDB_SEARCH = '#<b>Media from&nbsp;<a href="/title/tt(\d+)/"#i'; const IMDB_TAGLINE = '#<h5>Tagline:</h5>\s*<div class="info-content">\s*(.*)\s*</div>#Ui'; const IMDB_TITLE = '#<title>(.*) \((.*)\)</title>#Ui'; const IMDB_URL = '#http://(.*\.|.*)imdb.com/(t|T)itle(\?|/)(..\d+)#i'; const IMDB_VOTES = '#&nbsp;&nbsp;<a href="ratings" class="tn15more">(.*) votes</a>#Ui'; const IMDB_WRITER = '#<a href="/name/(\w+)/" onclick="\(new Image\(\)\)\.src=\'/rg/writerlist/position-(\d|\d\d)/images/b\.gif\?link=name/(\w+)/\';">(.*)</a>#Ui'; const IMDB_REDIRECT = '#Location: (.*)#'; /** * Public constructor. * * @param string $sSearch */ public function __construct($sSearch) { if (function_exists(sys_get_temp_dir)) { $this->_oCookie = tempnam(sys_get_temp_dir(), 'imdb'); } $sUrl = $this->findUrl($sSearch); if ($sUrl) { $bFetch = $this->fetchUrl($this->_sUrl); $this->_bFound = true; } } /** * Little REGEX helper. * * @param string $sRegex * @param string $sContent * @param int $iIndex; */ private function getMatch($sRegex, $sContent, $iIndex = 1) { preg_match($sRegex, $sContent, $aMatches); if ($iIndex > count($aMatches)) return; if ($iIndex == null) { return $aMatches; } return $aMatches[(int)$iIndex]; } /** * Little REGEX helper, I should find one that works for both... ;/ * * @param string $sRegex * @param int $iIndex; */ private function getMatches($sRegex, $iIndex = null) { preg_match_all($sRegex, $this->_sSource, $aMatches); if ((int)$iIndex) return $aMatches[$iIndex]; return $aMatches; } /** * Save an image. * * @param string $sUrl */ private function saveImage($sUrl) { $sUrl = trim($sUrl); $bolDir = false; if (!is_dir(getcwd() . 
'/posters')) { if (mkdir(getcwd() . '/posters', 0777)) { $bolDir = true; } } $sFilename = getcwd() . '/posters/' . preg_replace("#[^0-9]#", "", basename($sUrl)) . '.jpg'; if (file_exists($sFilename)) { return 'posters/' . basename($sFilename); } if (is_dir(getcwd() . '/posters') OR $bolDir) { if (function_exists('curl_init')) { $oCurl = curl_init($sUrl); curl_setopt_array($oCurl, array ( CURLOPT_VERBOSE => 0, CURLOPT_HEADER => 0, CURLOPT_RETURNTRANSFER => 1, CURLOPT_TIMEOUT => 5, CURLOPT_CONNECTTIMEOUT => 5, CURLOPT_REFERER => $sUrl, CURLOPT_BINARYTRANSFER => 1)); $sOutput = curl_exec($oCurl); curl_close($oCurl); $oFile = fopen($sFilename, 'x'); fwrite($oFile, $sOutput); fclose($oFile); return 'posters/' . basename($sFilename); } else { $oImg = imagecreatefromjpeg($sUrl); imagejpeg($oImg, $sFilename); return 'posters/' . basename($sFilename); } return false; } return false; } /** * Find a valid Url out of the passed argument. * * @param string $sSearch */ private function findUrl($sSearch) { $sSearch = trim($sSearch); if ($aUrl = $this->getMatch(self::IMDB_URL, $sSearch, 4)) { $this->_sId = 'tt' . preg_replace('[^0-9]', '', $aUrl); $this->_sUrl = 'http://www.imdb.com/title/' . $this->_sId .'/'; return true; } else { $sTemp = 'http://www.imdb.com/find?s=all&q=' . str_replace(' ', '+', $sSearch) . '&x=0&y=0'; $bFetch = $this->fetchUrl($sTemp); if( $this->isRedirect() ) { return true; } else if ($bFetch) { if ($strMatch = $this->getMatch(self::IMDB_SEARCH, $this->_sSource)) { $this->_sUrl = 'http://www.imdb.com/title/tt' . $strMatch . '/'; unset($this->_sSource); return true; } } } return false; } /** * Find if result is redirected directly to exact movie. */ private function isRedirect() { if ($strMatch = $this->getMatch(self::IMDB_REDIRECT, $this->_sHeader)) { $this->_sUrl = $strMatch; unset($this->_sSource); unset($this->_sHeader); return true; } return false; } /** * Fetch data from given Url. * Uses cURL if installed, otherwise falls back to file_get_contents. * * @param string $sUrl * @param int $iTimeout; */ private function fetchUrl($sUrl, $iTimeout = 15) { $sUrl = trim($sUrl); if (function_exists('curl_init')) { $oCurl = curl_init($sUrl); curl_setopt_array($oCurl, array ( CURLOPT_VERBOSE => 0, CURLOPT_HEADER => 1, CURLOPT_FRESH_CONNECT => true, CURLOPT_RETURNTRANSFER => 1, CURLOPT_TIMEOUT => (int)$iTimeout, CURLOPT_CONNECTTIMEOUT => (int)$iTimeout, CURLOPT_REFERER => $sUrl, CURLOPT_FOLLOWLOCATION => 0, CURLOPT_COOKIEFILE => $this->_oCookie, CURLOPT_COOKIEJAR => $this->_oCookie, CURLOPT_USERAGENT => 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6' )); $sOutput = curl_exec($oCurl); if ($sOutput === false) { return false; } $aInfo = curl_getinfo($oCurl); if ($aInfo['http_code'] != 200 && $aInfo['http_code'] != 302) { return false; } $sTmpHeader = strpos($sOutput, "\r\n\r\n"); $this->_sHeader = substr($sOutput, 0, $sTmpHeader); $this->_sSource = str_replace("\n", '', substr($sOutput, $sTmpHeader+1)); curl_close($oCurl); return true; } else { $sOutput = @file_get_contents($sUrl, 0); if (strpos($http_response_header[0], '200') === false){ return false; } $this->_sSource = str_replace("\n", '', (string)$sOutput); return true; } return false; } /** * Returns the cast. 
*/ public function getCast($iOutput = null, $bMore = true) { if ($this->_sSource) { $sReturned = $this->getMatches(self::IMDB_CAST, 4); if (is_array($sReturned)) { if ($iOutput) { foreach ($sReturned as $i => $sName) { if ($i >= $iOutput) break; $sReturn[] = $sName; } return implode(' / ', $sReturn) . (($bMore) ? '&hellip;' : ''); } return implode(' / ', $sReturned); } return $sReturned; } return 'n/A'; } /** * Returns the cast as links. */ public function getCastAsUrl($iOutput = null, $bMore = true) { if ($this->_sSource) { $sReturned1 = $this->getMatches(self::IMDB_CAST, 4); $sReturned2 = $this->getMatches(self::IMDB_CAST, 3); if (is_array($sReturned1)) { if ($iOutput) { foreach ($sReturned1 as $i => $sName) { if ($i >= $iOutput) break; $aReturn[] = '<a href="http://www.imdb.com/name/' . $sReturned2[$i] . '/">' . $sName . '</a>';; } return implode(' / ', $aReturn) . (($bMore) ? '&hellip;' : ''); } return implode(' / ', $sReturned); } return '<a href="http://www.imdb.com/name/' . $sReturned2 . '/">' . $sReturned1 . '</a>';; } return 'n/A'; } /** * Returns the countr(y|ies). */ public function getCountry() { if ($this->_sSource) { $sReturned = $this->getMatches(self::IMDB_COUNTRY, 1); if (is_array($sReturned)) { return implode(' / ', $sReturned); } return $sReturned; } return 'n/A'; } /** * Returns the countr(y|ies) as link(s). */ public function getCountryAsUrl() { if ($this->_sSource) { $sReturned = $this->getMatches(self::IMDB_COUNTRY, 1); if (is_array($sReturned)) { foreach ($sReturned as $sCountry) { $aReturn[] = '<a href="http://www.imdb.com/Sections/Countries/' . $sCountry . '/">' . $sCountry . '</a>'; } return implode(' / ', $aReturn); } return '<a href="http://www.imdb.com/Sections/Countries/' . $sReturned . '/">' . $sReturned . '</a>'; } return 'n/A'; } /** * Returns the director(s). */ public function getDirector() { if ($this->_sSource) { $sReturned = $this->getMatches(self::IMDB_DIRECTOR, 4); if (is_array($sReturned)) { return implode(' / ', $sReturned); } return $sReturned; } return 'n/A'; } /** * Returns the director(s) as link(s). */ public function getDirectorAsUrl() { if ($this->_sSource) { $sReturned1 = $this->getMatches(self::IMDB_DIRECTOR, 4); $sReturned2 = $this->getMatches(self::IMDB_DIRECTOR, 1); if (is_array($sReturned1)) { foreach ($sReturned1 as $i => $sDirector) { $aReturn[] = '<a href="http://www.imdb.com/name/' . $sReturned2[$i] . '/">' . $sDirector . '</a>'; } return implode(' / ', $aReturn); } return '<a href="http://www.imdb.com/name/' . $sReturned2 . '/">' . $sReturned1 . '</a>'; } return 'n/A'; } /** * Returns the genre(s). */ public function getGenre() { if ($this->_sSource) { $sReturned = $this->getMatches(self::IMDB_GENRE, 1); if (is_array($sReturned)) { return implode(' / ', $sReturned); } return $sReturned; } return 'n/A'; } /** * Returns the genre(s) as link(s). */ public function getGenreAsUrl() { if ($this->_sSource) { $sReturned = $this->getMatches(self::IMDB_GENRE, 1); if (is_array($sReturned)) { foreach ($sReturned as $i => $sGenre) { $aReturn[] = '<a href="http://www.imdb.com/Sections/Genres/' . $sGenre . '/">' . $sGenre . '</a>'; } return implode(' / ', $aReturn); } return '<a href="http://www.imdb.com/Sections/Genres/' . $sReturned . '/">' . $sReturned . '</a>'; } return 'n/A'; } /** * Returns the mpaa. */ public function getMpaa() { if ($this->_sSource) { return implode('' , $this->getMatches(self::IMDB_MPAA, 1)); } return 'n/A'; } /** * Returns the plot. 
*/ public function getPlot() { if ($this->_sSource) { return implode('' , $this->getMatches(self::IMDB_PLOT, 1)); } return 'n/A'; } /** * Download the poster, cache it and return the local path to the image. */ public function getPoster() { if ($this->_sSource) { if ($sPoster = $this->saveImage(implode("", $this->getMatches(self::IMDB_POSTER, 5)), 'poster.jpg')) { return $sPoster; } return implode('', $this->getMatches(self::IMDB_POSTER, 5)); } return 'n/A'; } /** * Returns the rating. */ public function getRating() { if ($this->_sSource) { return implode('', $this->getMatches(self::IMDB_RATING, 1)); } return 'n/A'; } /** * Returns the release date. */ public function getReleaseDate() { if ($this->_sSource) { return implode('', $this->getMatches(self::IMDB_RELEASE_DATE, 1)); } return 'n/A'; } /** * Returns the runtime of the current movie. */ public function getRuntime() { if ($this->_sSource) { return implode('', $this->getMatches(self::IMDB_RUNTIME, 1)); } return 'n/A'; } /** * Returns the tagline. */ public function getTagline() { if ($this->_sSource) { return implode('', $this->getMatches(self::IMDB_TAGLINE, 1)); } return 'n/A'; } /** * Get the release date of the current movie. */ public function getTitle() { if ($this->_sSource) { return implode('', $this->getMatches(self::IMDB_TITLE, 1)); } return 'n/A'; } /** * Returns the url. */ public function getUrl() { return $this->_sUrl; } /** * Get the votes of the current movie. */ public function getVotes() { if ($this->_sSource) { return implode('', $this->getMatches(self::IMDB_VOTES, 1)); } return 'n/A'; } /** * Get the year of the current movie. */ public function getYear() { if ($this->_sSource) { return implode('', $this->getMatches(self::IMDB_TITLE, 2)); } return 'n/A'; } /** * Returns the writer(s). */ public function getWriter() { if ($this->_sSource) { $sReturned = $this->getMatches(self::IMDB_WRITER, 4); if (is_array($sReturned)) { return implode(' / ', $sReturned); } return $sReturned; } return 'n/A'; } /** * Returns the writer(s) as link(s). */ public function getWriterAsUrl() { if ($this->_sSource) { $sReturned1 = $this->getMatches(self::IMDB_WRITER, 4); $sReturned2 = $this->getMatches(self::IMDB_WRITER, 1); if (is_array($sReturned1)) { foreach ($sReturned1 as $i => $sWriter) { $aReturn[] = '<a href="http://www.imdb.com/name/' . $sReturned2[$i] . '/">' . $sWriter . '</a>'; } return implode(' / ', $aReturn); } return '<a href="http://www.imdb.com/name/' . $sReturned2 . '/">' . $sReturned1 . '</a>'; } return 'n/A'; } } ?>

    Read the article

  • Integrating HTML into Silverlight Applications

    - by dwahlin
    Looking for a way to display HTML content within a Silverlight application? If you haven’t tried doing that before it can be challenging at first until you know a few tricks of the trade.  Being able to display HTML is especially handy when you’re required to display RSS feeds (with embedded HTML), SQL Server Reporting Services reports, PDF files (not actually HTML – but the techniques discussed will work), or other HTML content.  In this post I'll discuss three options for displaying HTML content in Silverlight applications and describe how my company is using these techniques in client applications. Displaying HTML Overlays If you need to display HTML over a Silverlight application (such as an RSS feed containing HTML data in it) you’ll need to set the Silverlight control’s windowless parameter to true. This can be done using the object tag as shown next: <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%"> <param name="source" value="ClientBin/HTMLAndSilverlight.xap"/> <param name="onError" value="onSilverlightError" /> <param name="background" value="white" /> <param name="minRuntimeVersion" value="4.0.50401.0" /> <param name="autoUpgrade" value="true" /> <param name="windowless" value="true" /> <a href="http://go.microsoft.com/fwlink/?LinkID=149156&v=4.0.50401.0" style="text-decoration:none"> <img src="http://go.microsoft.com/fwlink/?LinkId=161376" alt="Get Microsoft Silverlight" style="border-style:none"/> </a> </object> By setting the control to “windowless” you can overlay HTML objects by using absolute positioning and other CSS techniques. Keep in mind that on Windows machines the windowless setting can result in a performance hit when complex animations or HD video are running since the plug-in content is displayed directly by the browser window. It goes without saying that you should only set windowless to true when you really need the functionality it offers. For example, if I want to display my blog’s RSS content on top of a Silverlight application I could set windowless to true and create a user control that grabbed the content and output it using a DataList control: <style type="text/css"> a {text-decoration:none;font-weight:bold;font-size:14pt;} </style> <div style="margin-top:10px; margin-left:10px;margin-right:5px;"> <asp:DataList ID="RSSDataList" runat="server" DataSourceID="RSSDataSource"> <ItemTemplate> <a href='<%# XPath("link") %>'><%# XPath("title") %></a> <br /> <%# XPath("description") %> <br /> </ItemTemplate> </asp:DataList> <asp:XmlDataSource ID="RSSDataSource" DataFile="http://weblogs.asp.net/dwahlin/rss.aspx" XPath="rss/channel/item" CacheDuration="60" runat="server" /> </div> The user control can then be placed in the page hosting the Silverlight control as shown below. This example adds a Close button, additional content to display in the overlay window and the HTML generated from the user control. 
<div id="RSSDiv"> <div style="background-color:#484848;border:1px solid black;height:35px;width:100%;"> <img alt="Close Button" align="right" src="Images/Close.png" onclick="HideOverlay();" style="cursor:pointer;" /> </div> <div style="overflow:auto;width:800px;height:565px;"> <div style="float:left;width:100px;height:103px;margin-left:10px;margin-top:5px;"> <img src="http://weblogs.asp.net/blogs/dwahlin/dan2008.jpg" style="border:1px solid Gray" /> </div> <div style="float:left;width:300px;height:103px;margin-top:5px;"> <a href="http://weblogs.asp.net/dwahlin" style="margin-left:10px;font-size:20pt;">Dan Wahlin's Blog</a> </div> <br /><br /><br /> <div style="clear:both;margin-top:20px;"> <uc:BlogRoller ID="BlogRoller" runat="server" /> </div> </div> </div> Of course, we wouldn’t want the RSS HTML content to be shown until requested. Once it’s requested the absolute position of where it should show above the Silverlight control can be set using standard CSS styles. The following ID selector named #RSSDiv handles hiding the overlay div shown above and determines where it will be display on the screen. #RSSDiv { background-color:White; position:absolute; top:100px; left:300px; width:800px; height:600px; border:1px solid black; display:none; } Now that the HTML content to display above the Silverlight control is set, how can we show it as a user clicks a HyperlinkButton or other control in the application? Fortunately, Silverlight provides an excellent HTML bridge that allows direct access to content hosted within a page. The following code shows two JavaScript functions that can be called from Siverlight to handle showing or hiding HTML overlay content. The two functions rely on jQuery (http://www.jQuery.com) to make it easy to select HTML objects and manipulate their properties: function ShowOverlay() { rssDiv.css('display', 'block'); } function HideOverlay() { rssDiv.css('display', 'none'); } Calling the ShowOverlay function is as simple as adding the following code into the Silverlight application within a button’s Click event handler: private void OverlayHyperlinkButton_Click(object sender, RoutedEventArgs e) { HtmlPage.Window.Invoke("ShowOverlay"); } The result of setting the Silverlight control’s windowless parameter to true and showing the HTML overlay content is shown in the following screenshot:   Thinking Outside the Box to Show HTML Content Setting the windowless parameter to true may not be a viable option for some Silverlight applications or you may simply want to go about showing HTML content a different way. The next technique I’ll show takes advantage of simple HTML, CSS and JavaScript code to handle showing HTML content while a Silverlight application is running in the browser. Keep in mind that with Silverlight’s HTML bridge feature you can always pop-up HTML content in a new browser window using code similar to the following: System.Windows.Browser.HtmlPage.Window.Navigate( new Uri("http://silverlight.net"), "_blank"); For this example I’ll demonstrate how to hide the Silverlight application while maximizing a container div containing the HTML content to show. This allows HTML content to take up the full screen area of the browser without having to set windowless to true and when done right can make the user feel like they never left the Silverlight application. 
The following HTML shows several div elements that are used to display HTML within the same browser window as the Silverlight application: <div id="JobPlanDiv"> <div style="vertical-align:middle"> <img alt="Close Button" align="right" src="Images/Close.png" onclick="HideJobPlanIFrame();" style="cursor:pointer;" /> </div> <div id="JobPlan_IFrame_Container" style="height:95%;width:100%;margin-top:37px;"></div> </div> The JobPlanDiv element acts as a container for two other divs that handle showing a close button and hosting an iframe that will be added dynamically at runtime. JobPlanDiv isn’t visible when the Silverlight application loads due to the following ID selector added into the page: #JobPlanDiv { position:absolute; background-color:#484848; overflow:hidden; left:0; top:0; height:100%; width:100%; display:none; } When the HTML content needs to be shown or hidden the JavaScript functions shown next can be used: var jobPlanIFrameID = 'JobPlan_IFrame'; var slHost = null; var jobPlanContainer = null; var jobPlanIFrameContainer = null; var rssDiv = null; $(document).ready(function () { slHost = $('#silverlightControlHost'); jobPlanContainer = $('#JobPlanDiv'); jobPlanIFrameContainer = $('#JobPlan_IFrame_Container'); rssDiv = $('#RSSDiv'); }); function ShowJobPlanIFrame(url) { jobPlanContainer.css('display', 'block'); $('<iframe id="' + jobPlanIFrameID + '" src="' + url + '" style="height:100%;width:100%;" />') .appendTo(jobPlanIFrameContainer); slHost.css('width', '0%'); } function HideJobPlanIFrame() { jobPlanContainer.css('display', 'none'); $('#' + jobPlanIFrameID).remove(); slHost.css('width', '100%'); } ShowJobPlanIFrame() handles showing the JobPlanDiv div and adding an iframe into it dynamically. Once JobPlanDiv is shown, the Silverlight control host has its width set to a value of 0% to allow the control to stay alive while making it invisible to the user. I found that this technique works better across multiple browsers as opposed to manipulating the Silverlight control host div’s display or visibility properties. Now that you’ve seen the code to handle showing and hiding the HTML content area, let’s switch focus to the Silverlight application. As a user clicks on a link such as “View Report” the ShowJobPlanIFrame() JavaScript function needs to be called. The following code handles that task: private void ReportHyperlinkButton_Click(object sender, RoutedEventArgs e) { ShowBrowser(_BaseUrl + "/Report.aspx"); } public void ShowBrowser(string url) { HtmlPage.Window.Invoke("ShowJobPlanIFrame", url); } Any URL can be passed into the ShowBrowser() method which handles invoking the JavaScript function. This includes standard web pages or even PDF files. We’ve used this technique frequently with our SmartPrint control (http://www.smartwebcontrols.com) which converts Silverlight screens into PDF documents and displays them. Here’s an example of the content generated:   Silverlight 4’s WebBrowser Control Both techniques shown to this point work well when Silverlight is running in-browser but not so well when it’s running out-of-browser since there’s no host page that you can access using the HTML bridge. Fortunately, Silverlight 4 provides a WebBrowser control that can be used to perform the same functionality quite easily. We’re currently using it in client applications to display PDF documents, SSRS reports and standard HTML content. Using the WebBrowser control simplifies the application quite a bit since no JavaScript is required if the application only runs out-of-browser. 
Here’s a simple example of defining the WebBrowser control in XAML. I typically define it in MainPage.xaml when a Silverlight Navigation template is used to create the project so that I can re-use the functionality across multiple screens. <Grid x:Name="WebBrowserGrid" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" Visibility="Collapsed"> <StackPanel HorizontalAlignment="Stretch" VerticalAlignment="Stretch"> <Border Background="#484848" HorizontalAlignment="Stretch" Height="40"> <Image x:Name="WebBrowserImage" Width="100" Height="33" Cursor="Hand" HorizontalAlignment="Right" Source="/HTMLAndSilverlight;component/Assets/Images/Close.png" MouseLeftButtonDown="WebBrowserImage_MouseLeftButtonDown" /> </Border> <WebBrowser x:Name="JobPlanReportWebBrowser" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" /> </StackPanel> </Grid> Looking through the XAML you can see that a close image is defined along with the WebBrowser control. Because the URL that the WebBrowser should navigate to isn’t known at design time no value is assigned to the control’s Source property. If the XAML shown above is left “as is” you’ll find that any HTML content assigned to the WebBrowser doesn’t display properly. This is due to no height or width being set on the control. To handle this issue the following code is added into the XAML’s code-behind file to dynamically determine the height and width of the page and assign it to the WebBrowser. This is done by handling the SizeChanged event. void MainPage_SizeChanged(object sender, SizeChangedEventArgs e) { WebBrowserGrid.Height = JobPlanReportWebBrowser.Height = ActualHeight; WebBrowserGrid.Width = JobPlanReportWebBrowser.Width = ActualWidth; } When the user wants to view HTML content they click a button which executes the code shown in next: public void ShowBrowser(string url) { if (Application.Current.IsRunningOutOfBrowser) { JobPlanReportWebBrowser.NavigateToString("<html><body><iframe src='" + url + "' style='width:100%;height:97%;' /></body></html>"); WebBrowserGrid.Visibility = Visibility.Visible; } else { HtmlPage.Window.Invoke("ShowJobPlanIFrame", url); } } private void WebBrowserImage_MouseLeftButtonDown(object sender, MouseButtonEventArgs e) { WebBrowserGrid.Visibility = Visibility.Collapsed; }   Looking through the code you’ll see that it checks to see if the Silverlight application is running out-of-browser and then either displays the WebBrowser control or runs the JavaScript function discussed earlier. Although the WebBrowser control’s Source property could be assigned the URI of the page to navigate to, by assigning HTML content using the NavigateToString() method and adding an iframe, content can be shown from any site including cross-domain sites. This is especially handy when you need to grab a page from a reporting site that’s in a different domain than the Silverlight application. Here’s an example of viewing  PDF file inside of an out-of-browser application. The first image shows the application running out-of-browser before the user clicks a PDF HyperlinkButton.  The second image shows the PDF being displayed.   While there are certainly other techniques that can be used, the ones shown here have worked well for us in different applications and provide the ability to display HTML content in-browser or out-of-browser. Feel free to add a comment if you have another tip or trick you like to use when working with HTML content in Silverlight applications.   
Download Code Sample   For more information about onsite, online and video training, mentoring and consulting solutions for .NET, SharePoint or Silverlight please visit http://www.thewahlingroup.com.

    Read the article

  • PCI scan failure for SSL Certificate with Wrong Hostname?

    - by Rob Mangiafico
    A client had a PCI scan completed by SecurityMetrics, and it now says they failed due to the SSL certificate for the SMTP port 25 (and POP3s/IMAPS) not matching the domain scanned. Specifically:
Description: SSL Certificate with Wrong Hostname
Synopsis: The SSL certificate for this service is for a different host.
Impact: The commonName (CN) of the SSL certificate presented on this service is for a different machine.
The mail server uses sendmail (patched) and provides email service for a number of domains. The server itself has a valid SSL certificate, but it does not match each domain (as we add/remove domains all the time as clients move around). It seems SecurityMetrics is the only ASV that marks this as failing PCI. Trustwave, McAfee, etc... do not see this as failing PCI. Is this issue truly a PCI failure? Or is it just SecurityMetrics being wrong?

    Read the article

  • Upgrading from TFS 2010 RC to TFS 2010 RTM done

    - by Martin Hinshelwood
    Today is the big day: with the launch of Visual Studio 2010 already done in Asia, and rolling around the world towards us, we are getting ready for the RTM (Released). We have had TFS 2010 in production for nearly 6 months and have had only minimal problems. Update 12th April 2010 – Added Scott Hanselman's tweet about the MSDN download release time. SSW was the first company in the world outside of Microsoft to deploy Visual Studio 2010 Team Foundation Server to production, not once, but twice. I am hoping to make it 3 in a row, but with all the hype around the new version, and with it being a production release and not just a go-live, I think there will be a lot of competition.
"Developers: MSDN will be updated with #vs2010 downloads and details at 10am PST *today*!" @shanselman - Scott Hanselman
Same as before, we need to uninstall 2010 RC and install 2010 RTM. The installer will take care of all the complexity of actually upgrading any schema changes. If you are upgrading from TFS 2008 to TFS 2010 you can follow our Rules To Better TFS 2010 Migration and read my post on our successes. We run TFS 2010 in a Hyper-V virtual environment, so we have the advantage of taking a snapshot as well as taking a DB backup.
Done - Snapshot the Hyper-V server. Microsoft does not support taking a snapshot of a running server, for very good reason, and Brian Harry wrote a post after my last upgrade with the reason why you should never snapshot a running server.
Done - Uninstall Visual Studio Team Explorer 2010 RC. You will need to uninstall all of the Visual Studio 2010 RC client bits that you have on the server.
Done - Uninstall TFS 2010 RC
Done - Install TFS 2010 RTM
Done - Configure TFS 2010 RTM. Pick the Upgrade option and point it at your existing "tfs_Configuration" database to load all of the existing settings.
Done - Upgrade the SharePoint Extensions
Upgrade Build Servers (Pending)
Test the server
The back out plan, and you should always have one, is to restore the snapshot.
Upgrading to Team Foundation Server 2010 – Done
The first thing you need to do is turn off the TFS server and then log into the Hyper-V server and create a snapshot.
Figure: Make sure you turn the server off and delete all old snapshots before you take a new one.
I noticed that the snapshot that was taken before the Beta 2 to RC upgrade was still there. You should really delete old snapshots before you create a new one, but in this case the SysAdmin (who is currently tucked up in bed) asked me not to. I guess he is worried about a developer messing up his server. Turn your server on and wait for it to boot in anticipation of all the nice shiny RTM'ness that is coming next. The upgrade procedure for TFS 2010 is to uninstall the old version and install the new one.
Figure: Remove Visual Studio 2010 Team Foundation Server RC from the system.
Figure: Most of the heavy lifting is done by the uninstaller, but make sure you have removed any of the client bits first, specifically Visual Studio 2010 or Team Explorer 2010.
Once the uninstall is complete (this took around 5 minutes for me) you can begin the install of the RTM. Running the 64 bit OS will allow the application to use more than 2GB RAM, which while not common may be of use in heavy load situations.
Figure: It is always recommended to install the 64bit version of a server application where possible.
With SharePoint 2010, Exchange 2010 and even Windows Server 2008 R2 being 64 bit only, I do not think it is likely that there will be another release of a server app that is 32 bit. You then need to choose what it is you want to install. This depends on how you are running TFS and on how many servers. In our case we run TFS and the Team Foundation Build Service (controller only) on our TFS server along with Analysis Services and Reporting Services, but our SharePoint server lives elsewhere.
Figure: This always confuses people, but in reality it makes sense. Don't install what you do not need. Every extra you install has an impact on performance.
If you are integrating with SharePoint you will need to run this install on every front end server in your farm, and don't forget to upgrade your build servers and proxy servers later.
Figure: Selecting only Team Foundation Server (TFS) and Team Foundation Build Services (TFBS)
It is worth noting that if you have a lot of builds kicking off, and hence a lot of get operations against your TFS server, you can use a proxy server to cache the source control on another server in between your TFS server and your build servers.
Figure: Installing Microsoft .NET Framework 4 takes the most time.
Figure: Now run Windows Update, and SSW Diagnostic to make sure all your bits and bobs are up to date. Note: SSW Diagnostic will check your Power Tools, add-ons, check-in policies and other bits as well.
Configure Team Foundation Server 2010 – Done
Now you can configure the server. If you have no key you will need to pick "Install a Trial Licence", but it is only £500, or free with an MSDN subscription. Anyway, if you pick Trial you get 90 days to get your key.
Figure: You can pick trial and add your key later using the TFS Server Admin.
Here is where the real choices happen. We are doing an upgrade from a previous version, so I will pick Upgrade, the same as all you folks that are using the RC or TFS 2008.
Figure: The upgrade wizard takes your existing 2010 or 2008 databases and upgrades them to the release.
Once you have entered your database server name you can click "List available databases" and it will show what it can upgrade.
Figure: Select your database from the list and at this point, make sure you have a valid backup.
At this point you have not made ANY changes to the databases. The configuration wizard will load the configuration from your existing database if you have one. If you are upgrading TFS 2008 refer to Rules To Better TFS 2010 Migration. Mostly during the wizard the default values will suffice, but depending on the configuration you want you can pick different options.
Figure: Set the application tier account and authentication method to use. We use NTLM to keep things simple as we host our TFS server externally for our remote developers.
Figure: Setting your TFS server URLs to be the remote URLs allows the reports to be accessed without using VPN. Very handy for those remote developers.
Figure: Detected the existing Warehouse, no problem.
Figure: Again we love green ticks. It gives us a warm fuzzy feeling.
Figure: The username for connecting to Reporting Services should be a domain account (if you are on a domain, that is).
Figure: Set up the SharePoint integration to connect to your external SharePoint server. You can take the option to connect later.
You then need to run all of your readiness checks. These checks can save your life!
It will check all of the settings that you have entered, as well as checking that all the external services are configured and running properly. There are two reasons that TFS 2010 is so easy and painless to install where previous versions were not. First, Microsoft changed the install into two steps: install and configuration. The second reason is that they have pulled out all of the stops in making the install run all the checks necessary to make sure that once you start the install it will complete. If you find any errors I recommend that you report them on http://connect.microsoft.com so everyone can benefit from your misery.
Figure: Now we have everything set up, the configuration wizard can do its work.
Figure: Took a while on the "Web site" stage at one point, but zipped through after that.
Figure: The last wee bit. TFS needs to do a little tinkering with the data to complete the upgrade.
Figure: All upgraded. I am not worried about the yellow triangle, as SharePoint was being a little silly:
Exception Message: TF254021: The account name or password that you specified is not valid. (type TfsAdminException)
Exception Stack Trace:
   at Microsoft.TeamFoundation.Management.Controls.WizardCommon.AccountSelectionControl.TestLogon(String connectionString)
   at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)
[Info   @16:10:16.307] Benign exception caught as part of verify:
Exception Message: TF255329: The following site could not be accessed: http://projects.ssw.com.au/. The server that you specified did not return the expected response. Either you have not installed the Team Foundation Server Extensions for SharePoint Products on this server, or a firewall is blocking access to the specified site or the SharePoint Central Administration site. For more information, see the Microsoft Web site (http://go.microsoft.com/fwlink/?LinkId=161206). (type TeamFoundationServerException)
Exception Stack Trace:
   at Microsoft.TeamFoundation.Client.SharePoint.WssUtilities.VerifyTeamFoundationSharePointExtensions(ICredentials credentials, Uri url)
   at Microsoft.TeamFoundation.Admin.VerifySharePointSitesUrl.Verify()
Inner Exception Details:
Exception Message: TF249064: The following Web service returned an response that is not valid: http://projects.ssw.com.au/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. Either the extensions are not installed, the request resulted in HTML being returned, or there is a problem with the URL. Verify that the following URL points to a valid SharePoint Web application and that the application is available: http://projects.ssw.com.au. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application. (type TeamFoundationServerInvalidResponseException)
Exception Data Dictionary: ResponseStatusCode = InternalServerError
I'll look at SharePoint afterwards; probably the SharePoint box just needs a restart or a kick. If there is a problem with SharePoint it will come out in testing, but I will definitely be passing this on to Microsoft.
Upgrading the SharePoint connector to TFS 2010
You will need to upgrade the Extensions for SharePoint Products and Technologies on all of your SharePoint farm front end servers. To do this, uninstall the TFS 2010 RC from them in the same way as the server, and then install just the RTM Extensions.
Figure: Only install the SharePoint Extensions on your SharePoint front end servers. TFS 2010 supports both SharePoint 2007 and SharePoint 2010.   Figure: When you configure SharePoint it uploads all of the solutions and templates. Figure: Everything is uploaded successfully. Figure: TFS even remembered the settings from the previous installation, fantastic.   Upgrading the Team Foundation Build Servers to TFS 2010 Just like on the SharePoint servers, you will need to upgrade the build servers to the RTM. Just uninstall TFS 2010 RC and then install only the Team Foundation Build Services component. Unlike on the SharePoint server, you will probably have some version of Visual Studio installed. You will need to remove this as well. (Coming Soon) Connecting Visual Studio 2010 / 2008 / 2005 and Eclipse to TFS 2010 If you have developers still on Visual Studio 2005 or 2008 you will need to download the respective compatibility pack: Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010 Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010 If you are using Eclipse you can download the new Team Explorer Everywhere install for connecting to TFS. Get your developers to check that they have the latest versions of their applications with SSW Diagnostic, which will check for service packs and hotfixes to Visual Studio as well.   Technorati Tags: TFS,TFS2010,TFS 2010,Upgrade
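Coming back to the TF255329 / TF249064 verification errors logged above: once the extensions are reinstalled, a quick way to sanity-check them outside of the TFS wizard is to request the integration web service yourself and look at the HTTP status code. This is not part of the original walkthrough, just a rough diagnostic sketch; the URL is the one from the log and you would substitute your own SharePoint web application.

using System;
using System.Net;

class CheckTfsSharePointExtensions
{
    static void Main()
    {
        // The endpoint the TFS readiness check calls; substitute your own SharePoint web application.
        string url = "http://projects.ssw.com.au/_vti_bin/TeamFoundationIntegrationService.asmx";

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.UseDefaultCredentials = true; // authenticate as the current Windows account

        try
        {
            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            {
                // 200 OK suggests the extensions are installed and answering on this web application.
                Console.WriteLine("Status: {0}", response.StatusCode);
            }
        }
        catch (WebException ex)
        {
            // An InternalServerError here matches the ResponseStatusCode shown in the log above.
            HttpWebResponse response = ex.Response as HttpWebResponse;
            Console.WriteLine(response != null ? "Status: " + response.StatusCode : ex.Message);
        }
    }
}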

    Read the article

  • Solution: Testing Web Services with MSTest on Team Build

    - by Martin Hinshelwood
Guess what. About 20 minutes after I fixed the build, Allan broke it again! Update: 4th March 2010 – After having huge problems getting this working I read Billy Wang’s post which showed me the light. The problem here is that even though the test passes locally, it will not pass during an automated build. When you send your tests to the build server it does not understand that you want to spin up the web site and run tests against it! When you run the test in Visual Studio it spins up the web site anyway, but would you expect your test to pass if you told the website not to spin up? Of course not. So, when you send the code to the build server you need to tell it what to spin up. First, the best way to get the parameters you need is to right click on the method you want to test and select “Create Unit Tests…”. This will detect whether you are running in IIS, the ASP.NET Development Server or neither, and create the relevant attributes. Figure: Right clicking on “SaveDefaultProjectFile” will produce a context menu with “Create Unit Tests…” on it. If you use this option it will auto-detect most of the attributes that are required.

/// <summary>
///A test for SSW.SQLDeploy.SilverlightUI.Web.Services.IProfileService.SaveDefaultProjectFile
///</summary>
// TODO: Ensure that the UrlToTest attribute specifies a URL to an ASP.NET page (for example,
// http://.../Default.aspx). This is necessary for the unit test to be executed on the web server,
// whether you are testing a page, web service, or a WCF service.
[TestMethod()]
[HostType("ASP.NET")]
[AspNetDevelopmentServerHost("D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web", "/")]
[UrlToTest("http://localhost:3100/")]
[DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
public void SaveDefaultProjectFileTest()
{
    IProfileService target = new ProfileService(); // TODO: Initialize to an appropriate value
    string strComputerName = string.Empty; // TODO: Initialize to an appropriate value
    bool expected = false; // TODO: Initialize to an appropriate value
    bool actual;
    actual = target.SaveDefaultProjectFile(strComputerName);
    Assert.AreEqual(expected, actual);
    Assert.Inconclusive("Verify the correctness of this test method.");
}

Figure: Auto-created code that shows the attributes required to run correctly in IIS or, in this case, the ASP.NET Development Server. If you are a purist and don’t like creating unit tests like this, then you just need to add the three attributes manually (a hand-written sketch follows below).
HostType – This attribute specifies what host to use. It is an extensibility point, so you could write your own, or you could just use “ASP.NET”.
UrlToTest – This specifies the start URL. For most tests it does not matter which page you call, as long as it is a valid page; otherwise your test may not run on the server, but may pass anyway.
AspNetDevelopmentServerHost – This is a nasty one: it is only used if you are using the ASP.NET Development Server and is unnecessary if you are using IIS. It sets the host settings, and the first value MUST be the physical path to the root of your web application.
OK, so all that was rubbish and I could not get anything working using the MSDN documentation. Google provided very little help until I ran into Billy Wang’s post, and I heard that heavenly music that all developers hear when understanding dawns that what they have been doing up until now is just plain stupid. I am sure that the above will work when I am doing Web Unit Tests, but there is a much easier way when doing web services. 
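For completeness, before moving on to that easier approach, this is roughly what the hand-written version of the purist option above looks like. It is only a sketch: the port, physical path and expected result are the placeholders from the generated example, and the test name is mine.

[TestMethod]
[HostType("ASP.NET")]
[UrlToTest("http://localhost:3100/")]
[AspNetDevelopmentServerHost("D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web", "/")]
[DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
public void SaveDefaultProjectFile_HandWrittenAttributes_Test()
{
    // Same arrange/act/assert as the generated test; the expected value is a placeholder.
    IProfileService target = new ProfileService();
    bool actual = target.SaveDefaultProjectFile(string.Empty);
    Assert.IsFalse(actual);
}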
You need to add the AspNetDevelopmentServer attribute to your code. This will tell MSTest to spin up an ASP.NET Development Server to host the service. Specify the path to the web application you want to use.

[AspNetDevelopmentServer("WebApp1", "D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")]
[DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
[TestMethod]
public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True()
{
    ProfileServiceClient target = new ProfileServiceClient();
    bool isTrue = target.SaveDefaultProjectFile("Mav");
    Assert.AreEqual(true, isTrue);
}

Figure: The AspNetDevelopmentServer attribute will make sure that the specified web application is launched. Now we can run the test and have it pass, but if the dynamically assigned ASP.NET Development Server port changes, what happens to the details in your app.config that were generated when creating a reference to the web service? Well, they would be wrong and the test would fail. This is where Billy’s helper method comes in. Once you have created an instance of your service client and it has loaded the config, but before you make any calls to it, you need to go in and dynamically set the endpoint address to the same address as your dynamically hosted web application.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Reflection;
using System.ServiceModel.Description;
using System.ServiceModel;

namespace SSW.SQLDeploy.Test
{
    class WcfWebServiceHelper
    {
        public static bool TryUrlRedirection(object client, TestContext context, string identifier)
        {
            bool result = true;
            try
            {
                PropertyInfo property = client.GetType().GetProperty("Endpoint");
                string webServer = context.Properties[string.Format("AspNetDevelopmentServer.{0}", identifier)].ToString();
                Uri webServerUri = new Uri(webServer);
                ServiceEndpoint endpoint = (ServiceEndpoint)property.GetValue(client, null);
                EndpointAddressBuilder builder = new EndpointAddressBuilder(endpoint.Address);
                builder.Uri = new Uri(endpoint.Address.Uri.OriginalString.Replace(endpoint.Address.Uri.Authority, webServerUri.Authority));
                endpoint.Address = builder.ToEndpointAddress();
            }
            catch (Exception e)
            {
                context.WriteLine(e.Message);
                result = false;
            }
            return result;
        }
    }
}

Figure: This fixes the problem of the URL in your app.config not being the same as the dynamically hosted ASP.NET Development Server port. We can now add a call to this method after we create the proxy object and change the endpoint for the service to the correct one. This call is wrapped in an assert because if it fails there is no point in continuing.

[AspNetDevelopmentServer("WebApp1", "D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")]
[DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
[TestMethod]
public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True()
{
    ProfileServiceClient target = new ProfileServiceClient();
    Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1"));
    bool isTrue = target.SaveDefaultProjectFile("Mav");
    Assert.AreEqual(true, isTrue);
}

Figure: Editing the endpoint from the app.config on the fly to match the dynamically hosted ASP.NET Development Server URL and port is now easy. As you can imagine, AspNetDevelopmentServer poses some problems if you have multiple developers. What are the chances of everyone using the same location to store the source? 
What about if you are using a build server, how do you tell MSTest where to look for the files? To the rescue comes a property called “%PathToWebRoot%”, which is always right on the build server. It will always point to the build drop folder for your solution’s web sites, which will be “\\tfs.ssw.com.au\BuildDrop\[BuildName]\Debug\_PrecompiledWeb\” or whatever your build drop location is. So let’s change the code above to use this.

[AspNetDevelopmentServer("WebApp1", "%PathToWebRoot%\\SSW.SQLDeploy.SilverlightUI.Web")]
[DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
[TestMethod]
public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True()
{
    ProfileServiceClient target = new ProfileServiceClient();
    Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1"));
    bool isTrue = target.SaveDefaultProjectFile("Mav");
    Assert.AreEqual(true, isTrue);
}

Figure: Adding %PathToWebRoot% to the AspNetDevelopmentServer path makes it work everywhere. Now we have another problem… this will ONLY run on the build server and will fail locally, as %PathToWebRoot%’s default value is “C:\Users\[profile]\Documents\Visual Studio 2010\Projects”. Well, this sucks… How do we get the test to run on any build server and any developer laptop? Open “Tools | Options | Test Tools | Test Execution” in Visual Studio and you will see a field called “Web application root directory”. This is where you override that default. Figure: You can override the default website location for tests. In my case I would put in “D:\Workspaces\SSW\SSW\SqlDeploy\DEV\Main”, and all the developers working with this branch would put in the folder that they have mapped. Can you see a problem? What if I create a “$/SSW/SqlDeploy/DEV/34567” branch from Main and I want to run tests in there? Well… I would have to change the value above. This is not ideal, but as you can put your projects anywhere on a computer, it has to be done. Conclusion Although this looks convoluted and complicated, there are real problems being solved here that mean you have a test-anywhere solution: any build server, any developer workstation. Resources: http://billwg.blogspot.com/2009/06/testing-wcf-web-services.html http://tough-to-find.blogspot.com/2008/04/testing-asmx-web-services-in-visual.html http://msdn.microsoft.com/en-us/library/ms243399(VS.100).aspx http://blogs.msdn.com/dscruggs/archive/2008/09/29/web-tests-unit-tests-the-asp-net-development-server-and-code-coverage.aspx http://www.5z5.com/News/?543f8bc8b36b174f Technorati Tags: VS2010,MSTest,Team Build 2010,Team Build,Visual Studio,Visual Studio 2010,Visual Studio ALM,Team Test,Team Test 2010

    Read the article

  • NPS will not add Radius client

    - by Neobyte
    Hi all, I've just installed a fresh copy of NPS on a new 2008 R2 Std server. When I go to add a Radius client, I get "NPS Error: The service being accessed is licensed for a particular number of connections. No more connections can be made to the service at this time because there are already as many connections as the service can accept. (Exception from HRESULT: 0x80070573)". What do I do? This is the first Radius client I am installing (and the first change to the vanilla NPS since running the role installation wizard) so obviously I have not hit the 50 client max. Cheers

    Read the article

  • Error 0x6ba (RPC server is unavailable) when running sfc /scannow on Windows XP in Safe Mode

    - by leeand00
I think that my mup.sys file is corrupt. I received the following error when trying to access a network share that was located on my Windows 7 box, from my Windows XP box: No network provider accepted the given network path. After reading this I attempted to follow the directions by rebooting my computer into safe mode. After I run "sfc /scannow" I receive the following error message: The specific error code is 0x000006ba [The RPC server is unavailable]. When I go into Services, it says that the Remote Procedure Call (RPC) service is running but that the Remote Procedure Call (RPC) Locator is not running. When I try to start the Remote Procedure Call (RPC) Locator, it gives me an error saying: Error 1084: This service cannot be started in Safe Mode. What can I do about this, given that the RPC Locator service cannot be started in Safe Mode?

    Read the article

  • Installing missing package that provides Xm/Xm.h

    - by Nicholas Kinar
I'm compiling a software package that requires a header file. The header file is missing from my Ubuntu 11.10 (64-bit) installation. During the compilation using make, gcc and gfortran, I receive the following error message: XMstr.c:7:19: fatal error: Xm/Xm.h: No such file or directory Googling for an answer leads me to believe that a Mesa library needs to be installed on my system, but I can't find an exact match for the package name. What might be the name of the package that I need to install? Does the package have the same name on more recent Ubuntu distros?

    Read the article

  • CodePlex Daily Summary for Thursday, February 25, 2010

    CodePlex Daily Summary for Thursday, February 25, 2010New ProjectsAptusSoftware.Threading: AptusSoftware.Threading is a class library designed primarily to assist in the development of multi-threaded WinForm applications, although there i...AxiomGameDesigner: It is going to be a universal scene editor for Axiom 3D game engine. It is in pure C# and will be kept portable to MONO for compatibility with linu...Badger - Unity Productivity Extensions: A set of Microsoft Unity Extensions. Why Badger? Because I love badgers.Business & System Analysis Templates and Best Practices for Russian: http://saway.codeplex.com/Conectayas: Conectayas is an open source "Connect Four" alike game but transformable to "Tic-Tac-Toe" and to a lot of similar games that uses mouse. Written in...FastCode: .NET 3.5 Extensions set to increase coding speed.Hundiyas: Hundiyas is an open source "Battleship" alike game totally written in DHTML (JavaScript, CSS and HTML) that uses mouse. This cross-platform and cro...Icelandic Online Banking: Icelandic Online Banking is project defining a web service interface for online banking.IE8 AddOns XML Creator: Application that helps on creating the xml files for IE8 Accelerators, Search Providers and the markup for Web Slices.iKnowledge: a asp.net mvc demoLearn ASP.NET MVC: Learn ASP.NET MVC is a project for the members of the Peer Learning group in Silicon Valley. It contains the SportsStore solution from the Pro ASP...Live at Education Meta Web-Service: Live at Education Meta Web-Service is intended to abstract from several technologies that are included in Live@edu set of services. This web-ser...Low level wave sound output for VB.NET: Low level sound output class for VB.NET using platform invocation services to call winmm.dllMailQ: MailQ makes it easier for developers to send mail messages from an application. The system sends mails based on a database queue system (store, se...Managed DXGI: Managed DXGI library is Fully managed wrapper writen on C# for DXGI 1.0 and 1.1 technology. It makes easier to support DXGI in managed application....Multivalue AutoComplete WinForms TextBox in C#: This project is a sample application that demonstrates how to create a multivalue WinForms textbox in C# using .NET Framework 3.5.Nifty CSharp Tools: Nifty CSharp Tools, will contain various tools and snippets. IRCBot, splashscreens, linq, world of warcraft log parsing, screenshot uploaders, twi...PHP MPQ: A port of StormLib to PHP for handling Blizzard MPQ files.RedDevils strategy - Project Hoshimi Programming Battle: Source Code of RedDevils strategy. Imagine Cup 2008 - Project Hoshimi Programming Battle.RNUNIT: rNunit is a distributed Nunit project. Many application these days are client-server application, distributed application and regular unit testing ...Samar Solution: Samar Solutions is a business system for office automation.Silverlight OOMRPG Game Engine: Silverlight OOMRPG Game EngineSimulator: GPSSimulatorSLARToolkit - Silverlight Augmented Reality Toolkit: SLARToolkit is a flexible Augmented Reality library for Silverlight with the aim to make real time Augmented Reality applications with Silverlight ...Spiral Architecture Driven Development (SADD) for Russian: Это русская версия сайта sadd.codeplex.comSQLSnapshotManager: Easily manage SQL Server database snapshots in a easy to use visual interface.Twilio with VB.NET MVC: Twilio with VB.NET MVC is a sample application for developing with Twilio's REST based telephony API. 
It includes an XML Schema of the TwiML respon...Ultra Speed Dial: UltraSpeedDial.com - Online Speed Dial Page.Visual HTML Editor justHTML: justHTML - is simle windows-application WYSIWYG editor that allow everyone - without any knowledge of HTML - to create and edit web-pages. It supp...WinMTR.NET: .NET Clone of the popular Windows clone of the popular Linux Matt's TracerouteWPF Dialogs: "WPF Dialogs" is a library for different Dialogs in WPF (e.g. FolderBrowseDialog, SaveFileDialog, OpenFileDialog etc.). These Dialogs are written i...WPFLogin: A small Login window in WPF and C#XNA PerformanceTimers: CPU Timers for Windows and Xbox360. Can track multiple threads, and presents output as a log on-screen.New ReleasesAptusSoftware.Threading: 2.0.0: First public release. This release is in production as part of several commercial applications and is stable. The source code download includes a...BizTalk Software Factory: BizTalk Software Factory v2.1: This is a service release for the BizTalk Software Factory for BizTalk Server 2009, containing so far: Fix for x64: the SN.EXE tool is now locate...Business & System Analysis Templates and Best Practices for Russian: R00 The Place reserver: Just to reserve the place Will be filled out soonChronos WPF: Chronos v1.0 Beta 2: Added a new SplashScreen Added a new Login View and implemented Log Off Added a new PasswordBoxHelper (http://www.codeproject.com/Articles/371...dotNetTips: dotNetTips.Utility 3.5 R2: This is a new release (version 3.5.0.3) compatible with .NET 3.5. Lots of new classes/features!! Requires SP1 if using the Entity Framework extensi...fleXdoc: template-based server-side document generator (docx): fleXdoc 1.0 beta 3: The third and final beta of fleXdoc. fleXdoc consists of a webservice and a (test)client for the service. Make sure you also download the testclien...FluentPS: FluentPS v1.0: - FluentPS is moved from ASMX to WCF interface of the Project Server Interface (PSI) - Impersonation changes to work in compliance with WCF interfa...FolderSize: FolderSize.Win32.1.0.4.0: FolderSize.Win32.1.0.3.0 A simple utility intended to be used to scan harddrives for the folders that take most place and display this to the user...iTuner - The iTunes Companion: iTuner 1.1.3707 Beta 3: As promised, the iTuner Automated Librarian is now available. This automatically cleans an entire album of dead tracks and duplicates as tracks ar...Live at Education Meta Web-Service: LAEMWS v 1.0 beta: Release Candidate for LAEMWS.Macaw Reusable Code Library: LanguageConfigurationSolution: This Solution helps developing a multi language publishing web siteManaged DXGI: Initial Release.: Base declaration of interfaces, most of them untested yet.Math.NET Numerics: 2010.2.24.667 Build: Latest alpha buildMiniTwitter: 1.08.1: MiniTwitter 1.08.1 更新内容 変更 インクリメンタル検索時には大文字小文字の区別をしないように変更 クライアント名の表示を本家にあわせて from から via に変更 修正 公式 RT 時にステータスが上に表示されたり二重に表示されるバグを修正 自分が自分へ返信...Multivalue AutoComplete WinForms TextBox in C#: 1.0 First public release: Multivalue autocomplete textbox control and host application in this release are released in a single Visual Studio 2008 projects. See my related b...NMock3: NMock3 - Beta3, .NET 3.5: This release has some exciting new features. Please start providing feedback on the tutorials. The first several are complete and the rest are no...nxAjax - an asp.net ajax library using jQuery: nxAjax v3 codeplex 7: nxAjax v3 codeplex 7 binary and test website. 
Bug Fixed: ajax:Form control Add: Drag and drop Rewritten: DragnDropManager DragPanel DropPan...Office Apps: 0.8.7: whats new? Document.Editor and Document.Viewer now supports FlowDocument (.xaml) files bug fix'sPDF Rider: PDF Rider 0.3: Application PrerequisitesMicrosoft Windows Operating Systems (XP (tested) - Vista - 7) Microsoft .NET Framework 3.5 runtime A PDF rendering sof...ShellLight: ShellLight 0.1.0.1 Src: Codeplex project released. This is only a preview of the product. Until the first final release there will be many improvements.Silverlight OOMRPG Game Engine: SilverlightGameTutorialSolution v1.01: Please visit my blog for Silverlight OOMROG Game Tutorial: http://www.cnblogs.com/Jax/archive/2010/02/24/1673053.html.Simple Savant: Simple Savant v0.4: Added support for full-text indexing (See Full-Text Indexing) Added support for attribute spanning and compression for property values larger tha...Spiral Architecture Driven Development (SADD) for Russian: R00: R00 to reserve site nameTeamReview - TFS Code Review: Release 1.1.3: Release Features New expanded product positioning for capturing any targeted coding work as a trackable, assignable, reportable Work Item for any r...Text Designer Outline Text Library: 10th minor release: Version 0.3.1 (10th minor release)Fixed the gradient brush being too big for the text, resulting in not much gradient shown in the text. Gradient...TFS Workflow Control: TeamExplorer and TSWA control 1.0 for TFS 2010 RC: This is a special version for TFS 2010 RC. Use the RC version of the power tools to modify the layout of your work items (http://visualstudiogaller...thinktecture WSCF.blue: WSCF.blue V1 Update (1.0.7) - VS2010 RC Support: This update adds support for Visual Studio 2010 RC in addition to Visual Studio 2008. Please note that Visual Studio 2010 Beta 2 is NOT supported a...Tumblen3: tumblen3 Version 25Feb2010: ready for Twitter's xAuthUMD文本编辑器: UMDEditor文本编辑器V2.1.0: 2.1.0 (2010-02-24) 增加查找章节内指定文本内容的功能 2.0.4 (2010-02-06) 章节内容框增加右键菜单,包含编辑文本的基本操作 ------------------------------------------------------- 执行 reg.bat ...VCC: Latest build, v2.1.30224.0: Automatic drop of latest buildVisual HTML Editor justHTML: Latest binary: Latest buid here. Executable and mshtml.dll included in this archive. Ready to use ;)Visual HTML Editor justHTML: Source code for version 2.5: Visual studio 2008 project with full source code.VOB2MKV: vob2mkv-1.0.2: The release vob2mkv-1.0.2 is a feature update of the VOB2MKV project. It now includes a DirectShow source filter, MKVSOURCE. 
A source filter allo...WinMTR.NET: V 1.0: V 1.0WPF Dialogs: Version 0.1.0: Version 0.1.0 FolderBrowseDialog is implementet for more information look here Version 0.1.0 (german: Version 0.1.0 - Deutsch).WPF Dialogs: Version 0.1.1: Version 0.1.1 Features FolderBrowseDialog was extended / FolderBrowseDialog - Deutsch wurde erweitertXNA PerformanceTimers: XNA PerformanceTimers 0.1: Initial release.Zeta Resource Editor: Release 2010-02-24: Added HTTP proxy server support.Most Popular ProjectsASP.NET Ajax LibraryManaged Extensibility FrameworkWindows 7 USB/DVD Download ToolDotNetZip LibraryMDownloaderVirtual Router - Wifi Hot Spot for Windows 7 / 2008 R2MFCMAPIDroid ExplorerUseful Sharepoint Designer Custom Workflow ActivitiesOxiteMost Active ProjectsDinnerNow.netBlogEngine.NETRawrInfoServiceSLARToolkit - Silverlight Augmented Reality ToolkitNB_Store - Free DotNetNuke Ecommerce Catalog ModuleSharpMap - Geospatial Application Framework for the CLRjQuery Library for SharePoint Web ServicesRapid Entity Framework. (ORM). CTP 2Common Context Adapters

    Read the article

  • Direct Links To Download Microsoft Office 2007 Products From Microsoft Download Servers

    - by Damodhar
Downloading installers for Microsoft Office 2007 products from the Microsoft Office website is not an easy task. To download any of the Microsoft Office 2007 products you need to sign in using a Windows Live/Passport/Hotmail account, fill in a big profile form and accept their terms & conditions. However, even after going through all the steps, all you get is a link to download 60-day trial version installers. This is not cool! How about links that allow you to download the required installers directly from Microsoft servers?   Here are the links to download Microsoft Office 2007 applications & suites directly from Microsoft download servers:
Microsoft Office 2007 Product Installers: Microsoft Office Home and Student 2007, Microsoft Office Standard 2007, Microsoft Office Professional 2007, Microsoft Office Small Business 2007, Microsoft Office Enterprise 2007, Microsoft Office OneNote 2007, Microsoft Office Publisher 2007, Microsoft Office Visio Professional 2007
Microsoft Office 2007 Service Packs: Microsoft Office 2007 Service Pack 2, Microsoft Office 2007 Service Pack 1
Microsoft Office 2007 Viewers & Compatibility Packs: Microsoft Word Viewer 2007, Microsoft PowerPoint Viewer 2007, Microsoft Excel Viewer 2007, Microsoft Visio Viewer 2007, Microsoft Office 2007 Compatibility Pack for Word, Excel, and PowerPoint File Formats
CC image credit: flickr

    Read the article

  • Microsoft Visual Studio Release History/Timelines/Milestones

1975 – Bill Gates and Paul Allen write a version of Basic for the Altair 8080
1982 – IBM releases BASCOM 1.0 (developed by Microsoft)
1983 – Microsoft Basic Compiler System v5.35 for MS-DOS released
1984 – Microsoft Basic Compiler System v5.36 released
1985 – Microsoft QuickBASIC 1.0
1986 – Microsoft QuickBASIC 1.01, 1.02, 2.00
1987 – Microsoft QuickBASIC 2.01, 3.00, 4.00
1987 – Microsoft BASIC 6.0
1988 – Microsoft QuickBASIC 4.00, 4.00b, 4.50
1989 – Microsoft BASIC Professional Development System 7.0
1990 – Microsoft BASIC Professional Development System 7.1
1991 – Microsoft Visual Basic released May 20 at the Windows World Convention in Atlanta
1992 – Microsoft Visual Basic 2.0
1993 – Microsoft Visual Basic 3.0 in Standard and Professional versions
1995 – Microsoft Visual Basic 4.0 released; supported the new Windows 95
1997 – Microsoft Visual Basic 5.0 – introduction of IntelliSense
1998 – Microsoft Visual Studio 6.0, which included Visual Basic 6.0, released (the first Visual Studio)
2002 – Microsoft Visual Basic .NET 7.0
2002 – Visual Studio .NET
2003 – Microsoft Visual Basic .NET 7.1
2003 – Microsoft Visual Studio with IntelliSense
2003 – Visual Studio .NET 2003
2004 – Visual Studio 2005 (code name Whidbey) announced
2005 – Visual Studio 2005 released with extensibility
2005 – Visual Studio Express released
2006 – Expression tool set released – designers and developers work together
2006 – Visual Studio Team release – November 30th
2007 – Visual Studio 2008 (code name Orcas) ships in November, with the Visual Studio Shell
2010 – Visual Studio 2010 (code name Rosario)

    Read the article

  • Share a Printer on Your Network from Vista or XP to Windows 7

    - by Mysticgeek
The other day we looked at sharing a printer between Windows 7 machines, but you may only have one Windows 7 machine and the printer is connected to a Vista or XP computer. Today we show you how to share a printer from either Vista or XP to Windows 7. We previously showed you how to share files and printers between Windows 7 and XP. But what if you have a printer connected to an XP or Vista machine in another room, and you want to print to it from Windows 7? This guide will walk you through the process. Note: In these examples we’re using 32-bit versions of Windows 7, Vista, and XP on a basic home network. We are using an HP PSC 1500 printer, but keep in mind every printer is different, so finding and installing the correct drivers will vary. Share a Printer from Vista To share the printer on a Vista machine click on Start, enter printers into the search box and hit Enter. Right-click on the printer you want to share and select Sharing from the context menu. Now in Printer Properties, select the Sharing tab, mark the box next to Share this printer, and give the printer a name. Make sure the name is something simple with no spaces, then click OK. Share a Printer from XP To share a printer from XP click on Start then select Printers and Faxes. In the Printers and Faxes window right-click on the printer to share and select Sharing. In the Printer Properties window select the Sharing tab, select the radio button next to Share this printer, give it a short name with no spaces, then click OK. Add Printer to Windows 7 Now that we have the printer on Vista or XP set up to be shared, it’s time to add it to Windows 7. Open the Start Menu and click on Devices and Printers. In Devices and Printers click on Add a printer. Next click on Add a network, wireless or Bluetooth printer. Windows 7 will search for the printer on your network, and once it’s been found click Next. The printer has been successfully added…click Next. Now you can set it as the default printer and send a test page to verify everything works. If everything is successful, close out of the add printer screens and you should be good to go.   Alternate Method If the method above doesn’t work, you can try the following for either XP or Vista. In our example, when trying to add the printer connected to our XP machine, it wasn’t recognized automatically. If your search pulls up nothing, then click on The printer that I want isn’t listed. In the Add Printer window under Find a printer by name or TCP/IP address click the radio button next to Select a shared printer by name. You can either type in the path to the printer or click on Browse to find it. In this instance we decided to browse to it, and notice there are 5 computers found on the network. We want to be able to print to the XPMCE computer so we double-click on that. Type in the username and password for that computer… Now we see the printer and can select it. The path to the printer is put into the Select a shared printer by name field. Wait while Windows connects to the printer and installs it… It’s successfully added…click Next. Now you can set it as the default printer or not, and print a test page to make sure everything works successfully. Now when we go back to Devices and Printers under Printers and Faxes, we see the HP printer on XPMCE. Conclusion Sharing a printer from one machine to another can sometimes be tricky, but the method we used here in our setup worked well. Since the printer we used is fairly new, there wasn’t a problem with locating any drivers for it. 
Windows 7 includes a lot of device drivers already, so you may be surprised at what it’s able to install. Your results may vary depending on your type of printer, Windows version, and network setup. This should get you started configuring the machines on your network, hopefully with good results. If you have two Windows 7 computers, then sharing a printer or files is easy through the Homegroup feature. You can also share a printer between Windows 7 machines on the same network but not in the same Homegroup.
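If you want to double-check from the Windows 7 machine that the shared printer really was added, you can also list the installed printers in a few lines of C#. This is an optional extra rather than part of the walkthrough above, and the printer name shown in the comment is just an example.

using System;
using System.Drawing.Printing; // requires a reference to System.Drawing

class ListPrinters
{
    static void Main()
    {
        // Printers added from another machine appear as \\ComputerName\PrinterName,
        // for example \\XPMCE\HP PSC 1500 (the exact name will vary).
        foreach (string printer in PrinterSettings.InstalledPrinters)
        {
            Console.WriteLine(printer);
        }
    }
}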

    Read the article

< Previous Page | 523 524 525 526 527 528 529 530 531 532 533 534  | Next Page >