Search Results

Search found 45852 results on 1835 pages for 'event id 861'.


  • Android - HorizontalScrollView within ScrollView Touch Handling

    - by Joel
    Hi, I have a ScrollView that surrounds my entire layout so that the entire screen is scrollable. The first element in this ScrollView is a HorizontalScrollView block that has features that can be scrolled through horizontally. I've added an OnTouchListener to the HorizontalScrollView to handle touch events and force the view to "snap" to the closest image on the ACTION_UP event. The effect I'm going for is like the stock Android home screen, where you can scroll from one screen to the other and it snaps to one screen when you lift your finger.

    This all works great except for one problem: I need to swipe left to right almost perfectly horizontally for an ACTION_UP to ever register. If I swipe vertically in the least (which I think many people tend to do on their phones when swiping side to side), I receive an ACTION_CANCEL instead of an ACTION_UP. My theory is that this is because the HorizontalScrollView is within a ScrollView, and the ScrollView is hijacking the vertical touch to allow for vertical scrolling. How can I disable the touch events for the ScrollView from just within my HorizontalScrollView, but still allow for normal vertical scrolling elsewhere in the ScrollView?

    Here's a sample of my code:

    public class HomeFeatureLayout extends HorizontalScrollView {
        private ArrayList<ListItem> items = null;
        private GestureDetector gestureDetector;
        View.OnTouchListener gestureListener;
        private static final int SWIPE_MIN_DISTANCE = 5;
        private static final int SWIPE_THRESHOLD_VELOCITY = 300;
        private int activeFeature = 0;

        public HomeFeatureLayout(Context context, ArrayList<ListItem> items) {
            super(context);
            setLayoutParams(new LayoutParams(LayoutParams.FILL_PARENT, LayoutParams.WRAP_CONTENT));
            setFadingEdgeLength(0);
            this.setHorizontalScrollBarEnabled(false);
            this.setVerticalScrollBarEnabled(false);
            LinearLayout internalWrapper = new LinearLayout(context);
            internalWrapper.setLayoutParams(new LayoutParams(LayoutParams.FILL_PARENT, LayoutParams.FILL_PARENT));
            internalWrapper.setOrientation(LinearLayout.HORIZONTAL);
            addView(internalWrapper);
            this.items = items;
            for (int i = 0; i < items.size(); i++) {
                LinearLayout featureLayout = (LinearLayout) View.inflate(this.getContext(), R.layout.homefeature, null);
                TextView header = (TextView) featureLayout.findViewById(R.id.featureheader);
                ImageView image = (ImageView) featureLayout.findViewById(R.id.featureimage);
                TextView title = (TextView) featureLayout.findViewById(R.id.featuretitle);
                title.setTag(items.get(i).GetLinkURL());
                TextView date = (TextView) featureLayout.findViewById(R.id.featuredate);
                header.setText("FEATURED");
                Image cachedImage = new Image(this.getContext(), items.get(i).GetImageURL());
                image.setImageDrawable(cachedImage.getImage());
                title.setText(items.get(i).GetTitle());
                date.setText(items.get(i).GetDate());
                internalWrapper.addView(featureLayout);
            }
            gestureDetector = new GestureDetector(new MyGestureDetector());
            setOnTouchListener(new View.OnTouchListener() {
                @Override
                public boolean onTouch(View v, MotionEvent event) {
                    if (gestureDetector.onTouchEvent(event)) {
                        return true;
                    } else if (event.getAction() == MotionEvent.ACTION_UP || event.getAction() == MotionEvent.ACTION_CANCEL) {
                        int scrollX = getScrollX();
                        int featureWidth = getMeasuredWidth();
                        activeFeature = ((scrollX + (featureWidth / 2)) / featureWidth);
                        int scrollTo = activeFeature * featureWidth;
                        smoothScrollTo(scrollTo, 0);
                        return true;
                    } else {
                        return false;
                    }
                }
            });
        }

        class MyGestureDetector extends SimpleOnGestureListener {
            @Override
            public boolean onFling(MotionEvent e1, MotionEvent e2, float velocityX, float velocityY) {
                try {
                    // right to left
                    if (e1.getX() - e2.getX() > SWIPE_MIN_DISTANCE && Math.abs(velocityX) > SWIPE_THRESHOLD_VELOCITY) {
                        activeFeature = (activeFeature < (items.size() - 1)) ? activeFeature + 1 : items.size() - 1;
                        smoothScrollTo(activeFeature * getMeasuredWidth(), 0);
                        return true;
                    }
                    // left to right
                    else if (e2.getX() - e1.getX() > SWIPE_MIN_DISTANCE && Math.abs(velocityX) > SWIPE_THRESHOLD_VELOCITY) {
                        activeFeature = (activeFeature > 0) ? activeFeature - 1 : 0;
                        smoothScrollTo(activeFeature * getMeasuredWidth(), 0);
                        return true;
                    }
                } catch (Exception e) {
                    // nothing
                }
                return false;
            }
        }
    }

    Read the article

  • Windows 7 Phone Database – Querying with Views and Filters

    - by SeanMcAlinden
    I’ve just added a feature to Rapid Repository that greatly improves how the Windows 7 Phone Database is queried, for performance (this is in the trunk, not in Release V1.0). The main concept behind it is to create a View Model class which has only the minimum data you need for a page. This View Model is then stored and retrieved rather than the whole list of entities. Another feature of the views is that they can be pre-filtered to even further improve performance when querying. You can download the source from the Microsoft CodePlex site: http://rapidrepository.codeplex.com/

    Setting up a view

    Let's say you have an entity that stores lots of data about a game result, for example:

    GameScore entity

    public class GameScore : IRapidEntity
    {
        public Guid Id { get; set; }
        public string GamerId { get; set; }
        public string Name { get; set; }
        public Double Score { get; set; }
        public Byte[] ThumbnailAvatar { get; set; }
        public DateTime DateAdded { get; set; }
    }

    On your page you want to display a list of scores, but you only want to display the score and the date added, so you create a View Model for displaying just those properties.

    GameScoreView

    public class GameScoreView : IRapidView
    {
        public Guid Id { get; set; }
        public Double Score { get; set; }
        public DateTime DateAdded { get; set; }
    }

    Now that you have the view model, the first thing to do is set up the view at application start-up. This is done using the following syntax.

    View setup

    public MainPage()
    {
        RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score });
    }

    As you can see, using a little bit of lambda syntax, you put in the code for constructing a single view; this is used internally for mapping an entity to a view. *Note* you do not need to map the Id property; this is done automatically, and a view model id will always be the same as its corresponding entity's.

    Adding filters

    One of the cool features of the view is that you can add filters to limit the amount of data stored in the view, which will dramatically improve performance. You can add multiple filters using the fluent syntax if required. In this example, let's say that you will only ever show the scores for the last 10 days; you could add a filter like the following:

    Add single filter

    public MainPage()
    {
        RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })
            .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10));
    }

    If you wanted to further limit the data, you could also say only scores above 100:

    Add multiple filters

    public MainPage()
    {
        RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })
            .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10))
            .AddFilter(x => x.Score > 100);
    }

    Querying the view model

    The important part is how to query the data. This is done using the repository; there is a method called Query which accepts the type of view as a generic parameter (you can have multiple View Model types per entity type). You can either use the result of the Query method directly or perform further querying on the result if required.

    Querying the view

    public void DisplayScores()
    {
        RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
        List<GameScoreView> scores = repository.Query<GameScoreView>();

        // display logic
    }

    Further filtering

    public void TodaysScores()
    {
        RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
        List<GameScoreView> todaysScores = repository.Query<GameScoreView>().Where(x => x.DateAdded > DateTime.Now.AddDays(-1)).ToList();

        // display logic
    }

    Retrieving the actual entity

    Retrieving the actual entity can be done easily by using the GetById method on the repository. Say, for example, you allow the user to click on a specific score to get further information: you can take the Id populated in the returned GameScoreView and use it directly on the repository to retrieve the full entity.

    Get full entity

    public void GetFullEntity(Guid gameScoreViewId)
    {
        RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
        GameScore fullEntity = repository.GetById(gameScoreViewId);

        // display logic
    }

    Synchronising the view

    If you are upgrading from Rapid Repository V1.0 and are likely to have data in the repository already, you will need to perform a synchronisation to ensure the views and entities are fully in sync. You can either do this as a one-off during the application upgrade or, if you are a little more cautious, you could run this at each application start-up.

    Synchronise the view

    public void MyUpgradeTasks()
    {
        RapidRepository<GameScore>.SynchroniseView<GameScoreView>();
    }

    It's worth noting that in normal operation the view keeps itself in sync with the entities, so this is only really required if you are upgrading from V1.0 to V2.0 when it gets released shortly.

    Summary

    I really hope you like this feature. It will be great for performance, and I believe it supports good practice by promoting the use of View Models for specific pages. I'm hoping to produce a beta for this over the next few days; I just want to add some more tests and hopefully iron out any bugs. I would really appreciate any thoughts on this feature and would really love to know of any bugs you find. You can download the source from the following: http://rapidrepository.codeplex.com/

    Kind regards,
    Sean McAlinden.

    Read the article

  • How To Activate Your Free Office 2007 to 2010 Tech Guarantee Upgrade

    - by Matthew Guay
    Have you purchased Office 2007 since March 5th, 2010? If so, here's how you can activate and download your free upgrade to Office 2010!

    Microsoft Office 2010 has just been released, and today you can purchase upgrades from most retail stores or directly from Microsoft via download. But if you've purchased a new copy of Office 2007, or a new computer that came with Office 2007, since March 5th, 2010, then you're entitled to an absolutely free upgrade to Office 2010. You'll need to enter information about your Office 2007 and then download the upgrade, so we'll step you through the process.

    Getting Started

    First, if you've recently purchased Office 2007 but haven't installed it, you'll need to go ahead and install it before you can get your free Office 2010 upgrade. Install it as normal. Once Office 2007 is installed, run any of the Office programs. You'll be prompted to activate Office. Make sure you're connected to the internet, and then click Next to activate.

    Get your Free Upgrade to Office 2010

    Now you're ready to download your upgrade to Office 2010. Head to the Office Tech Guarantee site (link below), and click Upgrade now. You'll need to enter some information about your Office 2007. Check that you purchased your copy of Office 2007 after March 5th, select your computer manufacturer, and check that you agree to the terms.

    Now you're going to need the Product ID number from Office 2007. To find this, open Word or any other Office 2007 application. Click the Office Orb, and select Options at the bottom. Select the Resources button on the left, and then click About. Near the bottom of this dialog, you'll see your Product ID. This should be a number like: 12345-123-1234567-12345

    Go back to the Office Tech Guarantee signup page in your browser, and enter this Product ID. Select the language of your edition of Office 2007, enter the verification code, and then click Submit. It may take a few moments to validate your Product ID. When it is finished, you'll be taken to an order page that shows the edition of Office 2010 you're eligible to receive. The upgrade download is free, but if you'd like to purchase a backup DVD of Office 2010, you can add it to your order for $13.99. Otherwise, simply click Continue to accept.

    Do note that the edition of Office 2010 you receive may be different from the edition of Office 2007 you purchased, as the number of editions has been streamlined in the Office 2010 release. Here's a chart you can check to see what edition you'll receive. Note that you'll still be allowed to install Office on the same number of computers; for example, Office 2007 Home and Student allows you to install it on up to 3 computers in the same house, and your Office 2010 upgrade will allow the same.

    Office 2007 Edition                                                           Office 2010 Upgrade You'll Receive
    Office 2007 Home and Student                                                  Office Home and Student 2010
    Office Basic 2007, Office Standard 2007                                       Office Home and Business 2010
    Office Small Business 2007, Office Professional 2007, Office Ultimate 2007    Office Professional 2010
    Office Professional 2007 Academic, Office Ultimate 2007 Academic              Office Professional Academic 2010

    Sign in with your Windows Live ID, or create a new one if you don't already have one. Enter your name, select your country, and click Create My Account. Note that Office will send Office 2010 tips to your email address; if you don't wish to receive them, you can unsubscribe from the emails later. Finally, you're ready to download Office 2010!

    Click the Download Now link to start downloading Office 2010. Your Product Key will appear directly above the Download link, so you can copy it and then paste it into the installer when your download is finished. You will additionally receive an email with the download links and product key, so if your download fails you can always restart it from that link. If your edition of Office 2007 included the Office Business Contact Manager, you will be able to download it from the second Download link. And, of course, even if you didn't order a backup DVD, you can always burn the installers to a DVD for a backup.

    Install Office 2010

    Once you're finished downloading Office 2010, run the installer to get it installed on your computer. Enter your Product Key from the Tech Guarantee website as above, and click Continue. Accept the license agreement, and then click Upgrade to upgrade to the latest version of Office. The installer will remove all of your Office 2007 applications, and then install their 2010 counterparts. If you wish to keep some of your Office 2007 applications instead, click Customize and then select to either keep all previous versions or simply keep specific applications.

    By default, Office 2010 will try to activate online automatically. If it doesn't activate during the install, you'll need to activate it when you first run any of the Office 2010 apps.

    Conclusion

    The Tech Guarantee makes it easy to get the latest version of Office if you recently purchased Office 2007. The Tech Guarantee program is open through the end of September, so make sure to grab your upgrade during this time. Actually, if you find a great deal on Office 2007 from a major retailer between now and then, you could also take advantage of this program to get Office 2010 cheaper. And if you need help getting started with Office 2010, check out our articles that can help you get situated in your new version of Office!

    Link: Activate and Download Your free Office 2010 Tech Guarantee Upgrade

    Read the article

  • LINQ to SQL and missing Many to Many EntityRefs

    - by Rick Strahl
    Ran into an odd behavior today with a many-to-many mapping of one of my tables in LINQ to SQL. Many-to-many mappings aren't transparent in LINQ to SQL; it maps the link table the same way the SQL schema has it when creating one. In other words, LINQ to SQL isn't smart about many-to-many mappings and just treats it like the 3 underlying tables that make up the many-to-many relationship. Iain Galloway has a nice blog entry about many-to-many relationships in LINQ to SQL. I can live with that - it's not really difficult to deal with this arrangement once mapped, especially when reading data back. Writing is a little more difficult, as you do have to insert into two entities for new records, but that's nothing that can't be handled in a small business object method with a few lines of code.

    When I created a database I've been using to experiment with various different OR/Ms recently, I found that for some reason LINQ to SQL was completely failing to map even to the linking table. As it turns out there's a good reason why it fails - can you spot it below? (read on :-})

    Here is the original database layout: there's an Items table, a Category table and a link table that holds only the foreign keys to the Items and Category tables for a typical M->M relationship. When these three tables are imported into the model they *look* correct - I do get the relationships added (after modifying the entity names to strip the prefix). The relationship looks perfectly fine, both in the designer as well as in the XML document:

    <Table Name="dbo.wws_Item_Categories" Member="ItemCategories">
      <Type Name="ItemCategory">
        <Column Name="ItemId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" />
        <Column Name="CategoryId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" />
        <Association Name="ItemCategory_Category" Member="Categories" ThisKey="CategoryId" OtherKey="Id" Type="Category" />
        <Association Name="Item_ItemCategory" Member="Item" ThisKey="ItemId" OtherKey="Id" Type="Item" IsForeignKey="true" />
      </Type>
    </Table>
    <Table Name="dbo.wws_Categories" Member="Categories">
      <Type Name="Category">
        <Column Name="Id" Type="System.Guid" DbType="UniqueIdentifier NOT NULL" IsPrimaryKey="true" IsDbGenerated="true" CanBeNull="false" />
        <Column Name="ParentId" Type="System.Guid" DbType="UniqueIdentifier" CanBeNull="true" />
        <Column Name="CategoryName" Type="System.String" DbType="NVarChar(150)" CanBeNull="true" />
        <Column Name="CategoryDescription" Type="System.String" DbType="NVarChar(MAX)" CanBeNull="true" />
        <Column Name="tstamp" AccessModifier="Internal" Type="System.Data.Linq.Binary" DbType="rowversion" CanBeNull="true" IsVersion="true" />
        <Association Name="ItemCategory_Category" Member="ItemCategory" ThisKey="Id" OtherKey="CategoryId" Type="ItemCategory" IsForeignKey="true" />
      </Type>
    </Table>

    However, when looking at the generated code, these navigation properties (also on Item) are completely missing:

    [global::System.Data.Linq.Mapping.TableAttribute(Name="dbo.wws_Item_Categories")]
    [global::System.Runtime.Serialization.DataContractAttribute()]
    public partial class ItemCategory : Westwind.BusinessFramework.EntityBase
    {
        private System.Guid _ItemId;
        private System.Guid _CategoryId;

        public ItemCategory()
        { }

        [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_ItemId", DbType="uniqueidentifier NOT NULL")]
        [global::System.Runtime.Serialization.DataMemberAttribute(Order=1)]
        public System.Guid ItemId
        {
            get { return this._ItemId; }
            set
            {
                if ((this._ItemId != value))
                {
                    this._ItemId = value;
                }
            }
        }

        [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_CategoryId", DbType="uniqueidentifier NOT NULL")]
        [global::System.Runtime.Serialization.DataMemberAttribute(Order=2)]
        public System.Guid CategoryId
        {
            get { return this._CategoryId; }
            set
            {
                if ((this._CategoryId != value))
                {
                    this._CategoryId = value;
                }
            }
        }
    }

    Notice that the Item and Category association properties, which should be EntityRef properties, are completely missing. They're there in the model, but in the generated code - not so much.

    So what's the problem here? The problem - it appears - is that LINQ to SQL requires primary keys on all entities it tracks. In order to support tracking - even of the link table entity - the link table requires a primary key. Real obvious, ain't it, especially since the designer happily lets you import the table and even shows the relationship and implicitly the related properties. Adding an Id field as a PK to the database and then importing results in a model layout which properly generates the Item and Category properties into the link entity.

    It's ironic that LINQ to SQL *requires* the PK in the middle - the Entity Framework requires that a link table have *only* the two foreign key fields in order to recognize a many-to-many relation. EF actually handles the M->M relation directly without the intermediate link entity, unlike LINQ to SQL.

    [updated from comments - 12/24/2009] Another approach is to set up both ItemId and CategoryId as the primary key in the database. This also works in creating the Category and Item fields in the ItemCategory entity. Ultimately this is probably the best approach, as it also guarantees uniqueness of the keys and so helps with database integrity.
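    To make the composite-key approach concrete, here is a minimal hand-written sketch of what the link-table mapping could look like once both foreign key columns are part of the primary key. This is not the designer-generated code from the article; the class name is hypothetical, and only the standard System.Data.Linq.Mapping attributes are used:

    using System;
    using System.Data.Linq.Mapping;

    // Hypothetical hand-written equivalent of the link entity once both
    // ItemId and CategoryId form the primary key: with a PK present,
    // LINQ to SQL can track the entity and generate the EntityRefs.
    [Table(Name = "dbo.wws_Item_Categories")]
    public class ItemCategoryMapped
    {
        [Column(IsPrimaryKey = true, DbType = "uniqueidentifier NOT NULL")]
        public Guid ItemId { get; set; }

        [Column(IsPrimaryKey = true, DbType = "uniqueidentifier NOT NULL")]
        public Guid CategoryId { get; set; }
    }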
    It took me a while to figure out WTF was going on here - lulled by the designer into thinking that the properties should be there when they were not. It's actually a well-documented feature of L2S that each entity in the model requires a PK, but of course that's easy to miss when the model viewer shows it to you and even the underlying XML model shows the Associations properly. This is one of the issues with L2S of course - you have to play by its rules, and once you hit one of those rules there's no way around them - you're stuck with what it requires, which in this case meant changing the database.

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in ADO.NET  LINQ

    Read the article

  • These are a few objective-type questions for which I was not able to find the solutions [closed]

    - by Tarun
    1. Which of the following advantages does System.Collections.IDictionaryEnumerator provide over System.Collections.IEnumerator?
       a. It adds properties for direct access to both the Key and the Value
       b. It is optimized to handle the structure of a Dictionary.
       c. It provides properties to determine if the Dictionary is enumerated in Key or Value order
       d. It provides reverse lookup methods to distinguish a Key from a specific Value

    2. When implementing System.EnterpriseServices.ServicedComponent derived classes, which of the following statements are true?
       a. Enabling object pooling requires an attribute on the class and the enabling of pooling in the COM+ catalog.
       b. Methods can be configured to automatically mark a transaction as complete by the use of attributes.
       c. You can configure authentication using the AuthenticationOption when the ActivationMode is set to Library.
       d. You can control the lifecycle policy of an individual instance using the SetLifetimeService method.

    3. Which of the following are true regarding event declaration in the code below?
       class Sample { event MyEventHandlerType MyEvent; }
       a. MyEventHandlerType must be derived from System.EventHandler or System.EventHandler<TEventArgs>
       b. MyEventHandlerType must take two parameters, the first of the type Object, and the second of a class derived from System.EventArgs
       c. MyEventHandlerType may have a non-void return type
       d. If MyEventHandlerType is a generic type, event declaration must use a specialization of that type.
       e. MyEventHandlerType cannot be declared static

    4. Which of the following statements apply to developing .NET code, using .NET utilities that are available with the SDK or Visual Studio?
       a. Developers can create assemblies directly from the MSIL source code.
       b. Developers can examine PE header information in an assembly.
       c. Developers can generate XML schemas from class definitions contained within an assembly.
       d. Developers can strip all metadata from managed assemblies.
       e. Developers can split an assembly into multiple assemblies.

    5. Which of the following characteristics do classes in the System.Drawing namespace such as Brush, Font, Pen, and Icon share?
       a. They encapsulate native resources and must be properly disposed of to prevent potential exhaustion of resources.
       b. They are all MarshalByRef derived classes, but functionality across AppDomains has specific limitations.
       c. You can inherit from these classes to provide enhanced or customized functionality

    6. Which of the following are required to be true of objects which are going to be used as keys in a System.Collections.HashTable?
       a. They must handle case-sensitivity identically in both the GetHashCode() and Equals() methods.
       b. Key objects must be immutable for the duration they are used within a HashTable.
       c. GetHashCode() must be overridden to provide the same result, given the same parameters, regardless of reference equality, unless the HashTable constructor is provided with an IEqualityComparer parameter.
       d. Each element in a HashTable is stored as a Key/Value pair of the type System.Collections.DictionaryElement
       e. All of the above

    7. Which of the following are true about Nullable types?
       a. A Nullable type is a reference type.
       b. A Nullable type is a structure.
       c. An implicit conversion exists from any non-nullable value type to a nullable form of that type.
       d. An implicit conversion exists from any nullable value type to a non-nullable form of that type.
       e. A predefined conversion from the nullable type S? to the nullable type T? exists if there is a predefined conversion from the non-nullable type S to the non-nullable type T

    8. When using an automatic property, which of the following statements is true?
       a. The compiler generates a backing field that is completely inaccessible from the application code.
       b. The compiler generates a backing field that is a private instance member with a leading underscore that can be programmatically referenced.
       c. The compiler generates a backing field that is accessible via reflection
       d. The compiler generates code that will store the information separately from the instance to ensure its security.

    9. Which of the following does using initializer syntax with a collection, as shown below, require?
       CollectionClass numbers = new CollectionClass { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
       a. The collection class must implement System.Collections.Generic.ICollection<T>
       b. The collection class must implement System.Collections.Generic.IList<T>
       c. Each of the items in the initializer list will be passed to the Add<T>(T item) method
       d. The items in the initializer will be treated as an IEnumerable<T> and passed to the collection constructor

    10. What impact will using implicitly typed local variables, as in the following example, have?
        var sample = "Hello World";
        a. The actual type is determined at compilation time, and has no impact on the runtime
        b. The actual type is determined at runtime, and late binding takes effect
        c. The actual type is based on the native VARIANT concept, and no binding to a specific type takes place.
        d. "var" itself is a specific type defined by the framework, and no special binding takes place

    11. Which of the following is not supported by remoting object types?
        a. well-known singleton
        b. well-known single call
        c. client activated
        d. context-agile

    12. In which of the following ways do structs differ from classes?
        a. Structs can not implement interfaces
        b. Structs cannot inherit from a base struct
        c. Structs cannot have events interfaces
        d. Structs cannot have virtual methods

    13. Which of the following is not an unboxing conversion?
        a. void Sample1(object o) { int i = (int)o; }
        b. void Sample1(ValueType vt) { int i = (int)vt; }
        c. enum E { Hello, World } void Sample1(System.Enum et) { E e = (E)et; }
        d. interface I { int Value { get; set; } } void Sample1(I vt) { int i = vt.Value; }
        e. class C { public int Value { get; set; } } void Sample1(C vt) { int i = vt.Value; }

    14. Which of the following are characteristics of the System.Threading.Timer class?
        a. The method provided by the TimerCallback delegate will always be invoked on the thread which created the timer.
        b. The thread which creates the timer must have a message processing loop (i.e. be considered a UI thread)
        c. The class contains protection to prevent reentrancy to the method provided by the TimerCallback delegate
        d. You can receive notification of an instance being Disposed by calling an overload of the Dispose method.

    15. What is the proper declaration of a method which will handle the following event?
        class MyClass { public event EventHandler MyEvent; }
        a. public void A_MyEvent(object sender, MyArgs e) { }
        b. public void A_MyEvent(object sender, EventArgs e) { }
        c. public void A_MyEvent(MyArgs e) { }
        d. public void A_MyEvent(MyClass sender, EventArgs e) { }

    16. Which of the following scenarios are applicable to Windows Workflow Foundation?
        a. Document-centric workflows
        b. Human workflows
        c. User-interface page flows
        d. Built-in support for communications across multiple applications and/or platforms
        e. All of the above

    17. When using an automatic property, which of the following statements is true?
        a. The compiler generates a backing field that is completely inaccessible from the application code.
        b. The compiler generates a backing field that is a private instance member with a leading underscore that can be programmatically referenced.
        c. The compiler generates a backing field that is accessible via reflection
        d. The compiler generates code that will store the information separately from the instance to ensure its security.

    18. While using the capabilities supplied by the System.Messaging classes, which of the following are true?
        a. Information must be explicitly converted to/from a byte stream before it uses the MessageQueue class
        b. Invoking the MessageQueue.Send member defaults to using the System.Messaging.XmlMessageFormatter to serialize the object.
        c. Objects must be XML-serializable in order to be transferred over a MessageQueue instance.
        d. The first entry in a MessageQueue must be removed from the queue before the next entry can be accessed
        e. Entries removed from a MessageQueue within the scope of a transaction will be pushed back into the front of the queue if the transaction fails.

    19. Which of the following are true about declarative attributes?
        a. They must be inherited from System.Attribute.
        b. Attributes are instantiated at the same time as instances of the class to which they are applied.
        c. Attribute classes may be restricted to be applied only to particular application element types.
        d. By default, a given attribute may be applied multiple times to the same application element.

    20. When using version 3.5 of the framework in applications which emit dynamic code, which of the following are true?
        a. Partial trust code can not emit and execute code
        b. A partial trust application must have the SecurityCriticalAttribute attribute and have called Assert for ReflectionEmit permission
        c. The generated code has no more permissions than the assembly which emitted it.
        d. It can be executed by calling System.Reflection.Emit.DynamicMethod(string name, Type returnType, Type[] parameterTypes) without any special permissions

    Within Windows Workflow Foundation, Compensating Actions are used to:
        a. provide a means to roll back a failed transaction
        b. provide a means to undo a successfully committed transaction later
        c. provide a means to terminate an in-process transaction
        d. achieve load balancing by adapting to the current activity

    21. What is the proper declaration of a method which will handle the following event?
        class MyClass { public event EventHandler MyEvent; }
        a. public void A_MyEvent(object sender, MyArgs e) { }
        b. public void A_MyEvent(object sender, EventArgs e) { }
        c. public void A_MyEvent(MyArgs e) { }
        d. public void A_MyEvent(MyClass sender, EventArgs e) { }

    22. Which of the following controls allows the use of XSL to transform XML content into formatted content?
        a. System.Web.UI.WebControls.Xml
        b. System.Web.UI.WebControls.Xslt
        c. System.Web.UI.WebControls.Substitution
        d. System.Web.UI.WebControls.Transform

    23. To which of the following do automatic properties refer?
        a. You declare (explicitly or implicitly) the accessibility of the property and get and set accessors, but do not provide any implementation or backing field
        b. You attribute a member field so that the compiler will generate get and set accessors
        c. The compiler creates properties for your class based on class-level attributes
        d. They are properties which are automatically invoked as part of the object construction process

    24. Which of the following are true about Nullable types?
        a. A Nullable type is a reference type.
        b. An implicit conversion exists from any non-nullable value type to a nullable form of that type.
        c. A predefined conversion from the nullable type S? to the nullable type T? exists if there is a predefined conversion from the non-nullable type S to the non-nullable type T

    25. When using an automatic property, which of the following statements is true?
        a. The compiler generates a backing field that is completely inaccessible from the application code.
        b. The compiler generates a backing field that is accessible via reflection.
        c. The compiler generates code that will store the information separately from the instance to ensure its security.

    26. When using an implicitly typed array, which of the following is most appropriate?
        a. All elements in the initializer list must be of the same type.
        b. All elements in the initializer list must be implicitly convertible to a known type which is the actual type of at least one member in the initializer list
        c. All elements in the initializer list must be implicitly convertible to a common type which is a base type of the items actually in the list

    27. Which of the following is false about anonymous types?
        a. They can be derived from any reference type.
        b. Two anonymous types with the same named parameters in the same order declared in different classes have the same type.
        c. All properties of an anonymous type are read/write.

    28. Which of the following are true about extension methods?
        a. They can be declared either static or instance members
        b. They must be declared in the same assembly (but may be in different source files)
        c. Extension methods can be used to override existing instance methods
        d. Extension methods with the same signature for the same class may be declared in multiple namespaces without causing compilation errors

    Read the article

  • DropDownList and SelectListItem Array Item Updates in MVC

    - by Rick Strahl
    So I ran into an interesting behavior today as I deployed my first MVC 4 app tonight. I have a list form with a filter drop-down that allows selection of categories. This list is static and rarely changes, so rather than loading these items from the database each time, I load the items once and then cache the actual SelectListItem[] array in a static property. However, when we put the site online tonight we immediately noticed that the drop-down list was coming up with pre-set values that randomly changed. It didn't take me long to trace this back to the cached list of SelectListItem[]. Clearly the list was getting updated - apparently through the model binding process in the selection postback.

    To clarify the scenario, here's the drop-down list definition in the Razor View:

    @Html.DropDownListFor(mod => mod.QueryParameters.Category, Model.CategoryList, "All Categories")

    where Model.CategoryList gets set with:

    [HttpPost]
    [CompressContent]
    public ActionResult List(MessageListViewModel model)
    {
        InitializeViewModel(model);

        busEntry entryBus = new busEntry();
        var entries = entryBus.GetEntryList(model.QueryParameters);

        model.Entries = entries;
        model.DisplayMode = ApplicationDisplayModes.Standard;
        model.CategoryList = AppUtils.GetCachedCategoryList();

        return View(model);
    }

    The AppUtils.GetCachedCategoryList() method gets the cached list or loads the list on the first access. The code to load up the list is housed in a Web utility class. The method looks like this:

    /// <summary>
    /// Returns a static category list that is cached
    /// </summary>
    /// <returns></returns>
    public static SelectListItem[] GetCachedCategoryList()
    {
        if (_CategoryList != null)
            return _CategoryList;

        lock (_SyncLock)
        {
            if (_CategoryList != null)
                return _CategoryList;

            var catBus = new busCategory();
            var categories = catBus.GetCategories().ToList();

            // Turn list into a SelectItem list
            var catList = categories
                .Select(cat => new SelectListItem()
                {
                    Text = cat.Name,
                    Value = cat.Id.ToString()
                })
                .ToList();

            catList.Insert(0, new SelectListItem()
            {
                Value = ((int)SpecialCategories.AllCategoriesButRealEstate).ToString(),
                Text = "All Categories except Real Estate"
            });
            catList.Insert(1, new SelectListItem()
            {
                Value = "-1",
                Text = "--------------------------------"
            });

            _CategoryList = catList.ToArray();
        }

        return _CategoryList;
    }

    private static SelectListItem[] _CategoryList;

    This seemed normal enough to me - I've been doing stuff like this forever, caching smallish lists in memory to avoid an extra trip to the database. This list is used in various places throughout the application - for the list display and also when adding new items and setting up for notifications, etc.

    Watch that ModelBinder!

    However, it turns out that this code is clearly causing a problem. It appears that the model binder on the [HttpPost] method is actually updating the list that's bound to, changing the actual entry items in the list and setting their Selected values. If you look at the code above, I'm not setting the SelectListItem.Selected value anywhere - the only place this value can get set is through model binding. Sure enough, when stepping through the code I see that when an item is selected, the actual model - model.CategoryList[x].Selected - reflects that.

    This is bad on several levels: First, it's obviously affecting the application behavior - nobody wants to see their drop-down list values jump all over the place randomly. But it's also a problem because the array is getting updated by multiple ASP.NET threads, which would likely lead to odd crashes from time to time. Not good!

    In retrospect the model binding behavior makes perfect sense. The actual items and the Selected property are the ModelBinder's way of keeping track of one or more selected values. So while I assumed the list to be read-only, the ModelBinder is actually updating it on a postback, producing the rather surprising results. I totally missed this during testing, and it's another one of those little "Did you know?" moments.

    So, is there a way around this? Yes, but it's maybe not quite obvious. I can't change the behavior of the ModelBinder, but I can certainly change the way that the list is generated. Rather than returning the cached list, I can return a brand new cloned list built from the cached items, like this:

    /// <summary>
    /// Returns a static category list that is cached
    /// </summary>
    /// <returns></returns>
    public static SelectListItem[] GetCachedCategoryList()
    {
        if (_CategoryList != null)
        {
            // Have to create new instances via projection
            // to avoid ModelBinding updates to affect this
            // globally
            return _CategoryList
                .Select(cat => new SelectListItem()
                {
                    Value = cat.Value,
                    Text = cat.Text
                })
                .ToArray();
        }
        …
    }

    The key is that newly created instances of SelectListItem are returned, not just filtered instances of the original list. The key here is 'new instances', so that the model binding updates do not affect the actual static instances. The code above uses LINQ and a projection into new SelectListItem instances to create this array of fresh instances. And this code works correctly - no more cross-talk between users. Unfortunately this code is also less efficient - it has to reselect the items and uses extra memory for the new array.
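    As an aside, the same clone-on-read guard can be built on Lazy<T> (available since .NET 4) instead of hand-rolled double-checked locking. This is a minimal alternative sketch, not the article's code: busCategory is the article's own data-access class, and CategoryListCache is a hypothetical name.

    using System;
    using System.Linq;
    using System.Web.Mvc;

    public static class CategoryListCache
    {
        // Lazy<T> makes the one-time load thread-safe without an explicit lock.
        private static readonly Lazy<SelectListItem[]> _categoryList =
            new Lazy<SelectListItem[]>(LoadCategoryList, isThreadSafe: true);

        public static SelectListItem[] GetCachedCategoryList()
        {
            // Always hand out fresh instances so the ModelBinder can only
            // touch the copies, never the cached originals.
            return _categoryList.Value
                .Select(cat => new SelectListItem { Value = cat.Value, Text = cat.Text })
                .ToArray();
        }

        private static SelectListItem[] LoadCategoryList()
        {
            var categories = new busCategory().GetCategories().ToList();
            return categories
                .Select(cat => new SelectListItem { Text = cat.Name, Value = cat.Id.ToString() })
                .ToArray();
        }
    }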
    Knowing what I know now, I probably would not have cached the list and would have just taken the hit to read from the database. If there is even a possibility of thread clashes, I'm very wary of creating code like this. But since the method already exists and handles this load in one place, this fix was easy enough to put in. Live and learn. It's little things like this that can cause some interesting head-scratchers sometimes…

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in MVC  ASP.NET  .NET

    Read the article

  • Problems with Google Maps API v3 + jQuery UI Tabs

    - by Bears will eat you
    There are a number of problems, which seem to be fairly well-known, when using the Google Maps API to render a map within a jQuery UI tab. I've seen SO questions posted about similar issues (here and here, for example) but the solutions there only seem to work for v2 of the Maps API. Other references I checked out are here and here, along with pretty much everything I could dig up through Googling. I've been trying to stuff a map (using v3 of the API) into a jQuery tab with mixed results. I'm using the latest versions of everything (currently jQuery 1.3.2, jQuery UI 1.7.2, don't know about Maps).

    This is the markup & JavaScript:

    <body>
      <div id="dashtabs">
        <span class="logout">
          <a href="go away">Log out</a>
        </span>

        <!-- tabs -->
        <ul class="dashtabNavigation">
          <li><a href="#first_tab" >First</a></li>
          <li><a href="#second_tab" >Second</a></li>
          <li><a href="#map_tab" >Map</a></li>
        </ul>

        <!-- tab containers -->
        <div id="first_tab">This is my first tab</div>
        <div id="second_tab">This is my second tab</div>
        <div id="map_tab">
          <div id="map_canvas"></div>
        </div>
      </div>
    </body>

    and

    $(document).ready(function() {
        var map = null;
        $('#dashtabs').tabs();
        $('#dashtabs').bind('tabsshow', function(event, ui) {
            if (ui.panel.id == 'map_tab' && !map) {
                map = initializeMap();
                google.maps.event.trigger(map, 'resize');
            }
        });
    });

    function initializeMap() {
        // Just some canned map for now
        var latlng = new google.maps.LatLng(-34.397, 150.644);
        var myOptions = {
            zoom: 8,
            center: latlng,
            mapTypeId: google.maps.MapTypeId.ROADMAP
        };
        return new google.maps.Map($('#map_canvas')[0], myOptions);
    }

    And here's what I've found that does/doesn't work (for Maps API v3):

    1. Using the off-left technique as described in the jQuery UI Tabs documentation (and in the answers to the two questions I linked) doesn't work at all. In fact, the best-functioning code uses the CSS .ui-tabs .ui-tabs-hide { display: none; } instead.
    2. The only way to get a map to display in a tab at all is to set the CSS width and height of #map_canvas to absolute values. Changing the width and height to auto or 100% causes the map to not display at all, even if it's already been successfully rendered (using absolute width and height).
    3. I couldn't find it documented anywhere outside of the Maps API, but map.checkResize() won't work anymore. Instead, you have to fire a resize event by calling google.maps.event.trigger(map, 'resize').
    4. If the map is not initialized inside of a function bound to a tabsshow event, the map itself is rendered correctly but the controls are not - most are just plain missing.

    So, here are my questions:

    1. Does anyone else have experience accomplishing this same feat? If so, how did you figure out what would actually work, since the documented tricks don't work for Maps API v3?
    2. What about loading tab content using Ajax as per the jQuery UI docs? I haven't had a chance to play around with it, but my guess is that it's going to break Maps even more. What are the chances of getting it to work (or is it not worth trying)?
    3. How do I make the map fill the largest possible area? I'd like it to fill the tab and adapt to page resizes, much in the way that it's done over at maps.google.com. But, as I said, I appear to be stuck with applying only absolute width and height CSS to the map div.

    Sorry if this was long-winded, but this might be the only documentation for Maps API v3 + jQuery tabs. Cheers!

    Read the article

  • What's up with LDoms: Part 4 - Virtual Networking Explained

    - by Stefan Hinker
    I'm back from my summer break (and some pressing business that kept me away from this), ready to continue with Oracle VM Server for SPARC ;-) In this article, we'll have a closer look at virtual networking. Basic connectivity, as we've seen it in the first, simple example, is easy enough. But there are numerous options for the virtual switches and virtual network ports, which we will discuss in more detail now.

    In this section, we will concentrate on virtual networking - the capabilities of virtual switches and virtual network ports - only. Other options involving hardware assignment or redundancy will be covered in separate sections later on.

    There are two basic components involved in virtual networking for LDoms: virtual switches and virtual network devices. The virtual switch should be seen just like a real ethernet switch. It "runs" in the service domain and moves ethernet packets back and forth. A virtual network device is plumbed in the guest domain. It corresponds to a physical network device in the real world. There, you'd plug a cable into the network port, and plug the other end of that cable into a switch. In the virtual world, you do the same: you create a virtual network device for your guest and connect it to a virtual switch in a service domain. The result works just like in the physical world: the network device sends and receives ethernet packets, and the switch does all those things ethernet switches tend to do.

    If you look at the reference manual of Oracle VM Server for SPARC, there are numerous options for virtual switches and network devices. Don't be confused, it's rather straightforward, really. Let's start with the simple case and work our way to some more sophisticated options later on. In many cases, you'll want to have several guests that communicate with the outside world on the same ethernet segment. In the real world, you'd connect each of these systems to the same ethernet switch. So, let's do the same thing in the virtual world:

    root@sun # ldm add-vsw net-dev=nxge2 admin-vsw primary
    root@sun # ldm add-vnet admin-net admin-vsw mars
    root@sun # ldm add-vnet admin-net admin-vsw venus

    We've just created a virtual switch called "admin-vsw" and connected it to the physical device nxge2. In the physical world, we'd have powered up our ethernet switch and installed a cable between it and our big enterprise datacenter switch. We then created a virtual network interface for each one of the two guest systems "mars" and "venus" and connected both to that virtual switch. They can now communicate with each other and with any system reachable via nxge2. If primary were running Solaris 10, communication with the guests would not be possible. This is different with Solaris 11; please see the Admin Guide for details. Note that I've given both the vswitch and the vnet devices some sensible names, something I always recommend.

    Unless told otherwise, the LDoms Manager software will automatically assign MAC addresses to all network elements that need one. It will also make sure that these MAC addresses are unique and reuse MAC addresses to play nice with all those friendly DHCP servers out there. However, if we want to do this manually, we can also do that. (One reason might be firewall rules that work on MAC addresses.) So let's give mars a manually assigned MAC address:

    root@sun # ldm set-vnet mac-addr=0:14:4f:f9:c4:13 admin-net mars

    Within the guest, these virtual network devices have their own device driver. In Solaris 10, they'd appear as "vnet0". Solaris 11 would apply its usual vanity naming scheme. We can configure these interfaces just like any normal interface: give them an IP address and configure sophisticated routing rules, just like on bare metal.

    In many cases, using Jumbo Frames helps increase throughput performance. By default, these interfaces will run with the standard ethernet MTU of 1500 bytes. To change this, it is usually sufficient to set the desired MTU for the virtual switch. This will automatically set the same MTU for all vnet devices attached to that switch. Let's change the MTU size of our admin-vsw from the example above:

    root@sun # ldm set-vsw mtu=9000 admin-vsw primary

    Note that you can set the MTU to any value between 1500 and 16000. Of course, whatever you set needs to be supported by the physical network, too.

    Another very common area of network configuration is VLAN tagging. This can be a little confusing - my advice here is to be very clear on what you want, and perhaps draw a little diagram the first few times. As always, keeping a configuration simple will help avoid errors of all kinds. Nevertheless, VLAN tagging is very useful to consolidate different networks onto one physical cable. And as such, this concept needs to be carried over into the virtual world. Enough of the introduction; here's how VLANs work in LDoms:

    Let's remember that any VLANs not explicitly tagged have the default VLAN ID of 1. In this example, we have a vswitch connected to a physical network that carries untagged traffic (VLAN ID 1) as well as VLANs 11, 22, 33 and 44. There might also be other VLANs on the wire, but the vswitch will ignore all those packets. We also have two vnet devices, one for mars and one for venus. Venus will see traffic from VLANs 33 and 44 only. For VLAN 44, venus will need to configure a tagged interface "vnet44000". For VLAN 33, the vswitch will untag all incoming traffic for venus, so that venus will see this as "normal" or untagged ethernet traffic. This is very useful to simplify guest configuration and also allows venus to perform Jumpstart or AI installations over this network even if the Jumpstart or AI server is connected via VLAN 33. Mars, on the other hand, has full access to untagged traffic from the outside world, and also to VLANs 11, 22 and 33, but not 44. On the command line, we'd do this like this:

    root@sun # ldm add-vsw net-dev=nxge2 pvid=1 vid=11,22,33,44 admin-vsw primary
    root@sun # ldm add-vnet admin-net pvid=1 vid=11,22,33 admin-vsw mars
    root@sun # ldm add-vnet admin-net pvid=33 vid=44 admin-vsw venus

    Finally, I'd like to point to a neat little option that will make your life easier in all those cases where configurations tend to change over the life of a guest system. It's the "id=<somenumber>" option available for both vswitches and vnet devices. Normally, Solaris in the guest would enumerate network devices sequentially. However, it has ways of remembering this initial numbering. This is good in the physical world. In the virtual world, whenever you unbind (aka power off and disassemble) a guest system, remove and/or add network devices and bind the system again, chances are this numbering will change. Configuration confusion will follow suit. To avoid this, nail down the initial numbering by assigning each vnet device its device id explicitly:

    root@sun # ldm add-vnet admin-net id=1 admin-vsw venus

    Please consult the Admin Guide for details on this, and on how to decipher these network ids from Solaris running in the guest.

    Thanks for reading this far. Links for further reading are essentially only the Admin Guide and Reference Manual and can be found above. I hope this is useful and, as always, I welcome any comments.

    Read the article

  • Create PDF document using iTextSharp in ASP.Net 4.0 and MemoryMappedFile

    - by sreejukg
    In this article I am going to demonstrate how ASP.Net developers can programmatically create PDF documents using iTextSharp. iTextSharp is a software component that allows developers to programmatically create or manipulate PDF documents. This article also discusses the process of creating an in-memory file and reading/writing data from/to it, utilizing the new MemoryMappedFile feature.

    I have a database of users, and I need to send a notice to all my users as a PDF document. The sending-mail part of it is not covered in this article. The PDF document will contain the company letterhead, to make it more official. I have a list of users stored in a database table named "tblusers". For each user I need to send a customized message addressed to them personally. The database structure for the users is given below.

    id    Title    Full Name
    1     Mr.      Sreeju Nair K. G.
    2     Dr.      Alberto Mathews
    3     Prof.    Venketachalam

    Now I am going to generate the PDF document that contains some message to the user, in the following format:

    Dear <Title> <FullName>,
    The message for the user.
    Regards,
    Administrator

    Also I have an image, bg.jpg, that contains the background for the generated document. I have created a .Net 4.0 empty web application project named "iTextSharpSample". The first thing I need to do is to download the iTextSharp dll from SourceForge. You can find the URL for the download here: http://sourceforge.net/projects/itextsharp/files/

    I extracted the zip file and added itextsharp.dll as a reference to my project. I also added a web form named default.aspx to my project. In the default.aspx page, I inserted one GridView and associated it with a SQL data source control that binds data from tblusers. I added a button column in the grid view with the text "generate pdf".

    Now I am going to create a PDF document when the user clicks on the Generate PDF button. As I mentioned before, I am going to work with the file in memory; I am not going to create a file on disk. I added an event handler for the button by specifying an onrowcommand event handler. My GridView source looks like:

    <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False"
        DataSourceID="SqlDataSource1" Width="481px" CellPadding="4" ForeColor="#333333"
        GridLines="None" onrowcommand="Generate_PDF" >
    …
    </asp:GridView>

    In the code-behind, I wrote the corresponding event handler.

    protected void Generate_PDF(object sender, GridViewCommandEventArgs e)
    {
        // The button click event handler code.
        // I am going to explain the code for this section in the remaining part of the article
    }

    The Generate_PDF method is straightforward: it gets the title, full name and message into some variables, then creates the PDF using these variables. The code for getting data from the grid view is as follows:

    // get the row index stored in the CommandArgument property
    int index = Convert.ToInt32(e.CommandArgument);
    // get the GridViewRow where the command is raised
    GridViewRow selectedRow = ((GridView)e.CommandSource).Rows[index];
    string title = selectedRow.Cells[1].Text;
    string fullname = selectedRow.Cells[2].Text;
    string msg = @"There are some changes in the company policy, due to this matter you need to submit your latest address to us. Please update your contact details / personal details by visiting the member area of the website. ...................................";

    Since I don't want to save the file to disk, I am going to use a new feature introduced in .Net Framework 4, called Memory-Mapped Files. Using MemoryMappedFile, you can create non-persisted memory-mapped files that are not associated with a file on disk. So I am going to create a temporary file in memory, add the PDF content to it, then write it to the output stream. To read more about MemoryMappedFile, read this MSDN article: http://msdn.microsoft.com/en-us/library/dd997372.aspx

    The below portion of the code uses a MemoryMappedFile object to create a test PDF document in memory and perform read/write operations on the file. The CreateViewStream() method will give you a stream that can be used to read or write data to/from the file. The code is very straightforward and I have included comments so that you can understand it.

    using (MemoryMappedFile mmf = MemoryMappedFile.CreateNew("test1.pdf", 1000000))
    {
        // Create a new pdf document object using the constructor. The parameters passed
        // are document size, left margin, right margin, top margin and bottom margin.
        iTextSharp.text.Document d = new iTextSharp.text.Document(PageSize.A4, 72, 72, 172, 72);

        // get an instance of the memory mapped file as a stream object so that we can write to it
        using (MemoryMappedViewStream stream = mmf.CreateViewStream())
        {
            // associate the document with the stream.
            PdfWriter.GetInstance(d, stream);

            /* add an image as bg */
            iTextSharp.text.Image jpg = iTextSharp.text.Image.GetInstance(Server.MapPath("Image/bg.png"));
            jpg.Alignment = iTextSharp.text.Image.UNDERLYING;
            jpg.SetAbsolutePosition(0, 0);
            // this is the size of my background letterhead image, in points;
            // this will fit an A4 size document.
            jpg.ScaleToFit(595, 842);

            d.Open();
            d.Add(jpg);
            d.Add(new Paragraph(String.Format("Dear {0} {1},", title, fullname)));
            d.Add(new Paragraph("\n"));
            d.Add(new Paragraph(msg));
            d.Add(new Paragraph("\n"));
            d.Add(new Paragraph(String.Format("Administrator")));
            d.Close();
        }

        // read the file data back out of the memory mapped file
        byte[] b;
        using (MemoryMappedViewStream stream = mmf.CreateViewStream())
        {
            BinaryReader rdr = new BinaryReader(stream);
            b = new byte[stream.Length];
            rdr.Read(b, 0, (int)stream.Length);
        }

        Response.Clear();
        Response.ContentType = "Application/pdf";
        Response.BinaryWrite(b);
        Response.End();
    }

    Press Ctrl + F5 to run the application. First I get the user list; clicking on the generate pdf icon returns the created document to the browser.
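    As a side note, if you do not specifically need the MemoryMappedFile feature, a plain MemoryStream also works for building a PDF in memory with iTextSharp. This is a minimal alternative sketch under that assumption, not the article's approach; BuildNoticePdf is a hypothetical helper name, while Document, Paragraph and PdfWriter are the same iTextSharp types used above:

    using System;
    using System.IO;
    using iTextSharp.text;
    using iTextSharp.text.pdf;

    public static class PdfNoticeBuilder
    {
        // Builds the same kind of notice into a byte array via MemoryStream.
        public static byte[] BuildNoticePdf(string title, string fullname, string msg)
        {
            using (MemoryStream ms = new MemoryStream())
            {
                Document d = new Document(PageSize.A4, 72, 72, 172, 72);
                PdfWriter writer = PdfWriter.GetInstance(d, ms);
                // keep the stream open after d.Close() so its contents remain readable
                writer.CloseStream = false;

                d.Open();
                d.Add(new Paragraph(String.Format("Dear {0} {1},", title, fullname)));
                d.Add(new Paragraph(msg));
                d.Add(new Paragraph("Administrator"));
                d.Close();

                return ms.ToArray();
            }
        }
    }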
"; since I don’t want to save the file in the disk, I am going the new feature introduced in .Net framework 4, called Memory-Mapped Files. Using Memory-Mapped mapped file, you can created non-persisted memory mapped files, that are not associated with a file in a disk. So I am going to create a temporary file in memory, add the pdf content to it, then write it to the output stream. To read more about MemoryMappedFile, read this msdn article http://msdn.microsoft.com/en-us/library/dd997372.aspx The below portion of the code using MemoryMappedFile object to create a test pdf document in memory and perform read/write operation on file. The CreateViewStream() object will give you a stream that can be used to read or write data to/from file. The code is very straight forward and I included comment so that you can understand the code. using (MemoryMappedFile mmf = MemoryMappedFile.CreateNew("test1.pdf", 1000000)) { // Create a new pdf document object using the constructor. The parameters passed are document size, left margin, right margin, top margin and bottom margin. iTextSharp.text.Document d = new iTextSharp.text.Document(PageSize.A4, 72,72,172,72); //get an instance of the memory mapped file to stream object so that user can write to this using (MemoryMappedViewStream stream = mmf.CreateViewStream()) { // associate the document to the stream. PdfWriter.GetInstance(d, stream); /* add an image as bg*/ iTextSharp.text.Image jpg = iTextSharp.text.Image.GetInstance(Server.MapPath("Image/bg.png")); jpg.Alignment = iTextSharp.text.Image.UNDERLYING; jpg.SetAbsolutePosition(0, 0); //this is the size of my background letter head image. the size is in points. this will fit to A4 size document. jpg.ScaleToFit(595, 842); d.Open(); d.Add(jpg); d.Add(new Paragraph(String.Format("Dear {0} {1},", title, fullname))); d.Add(new Paragraph("\n")); d.Add(new Paragraph(msg)); d.Add(new Paragraph("\n")); d.Add(new Paragraph(String.Format("Administrator"))); d.Close(); } //read the file data byte[] b; using (MemoryMappedViewStream stream = mmf.CreateViewStream()) { BinaryReader rdr = new BinaryReader(stream); b = new byte[mmf.CreateViewStream().Length]; rdr.Read(b, 0, (int)mmf.CreateViewStream().Length); } Response.Clear(); Response.ContentType = "Application/pdf"; Response.BinaryWrite(b); Response.End(); } Press ctrl + f5 to run the application. First I got the user list. Click on the generate pdf icon. The created looks as follows. Summary: Creating pdf document using iTextSharp is easy. You will get lot of information while surfing the www. Some useful resources and references are mentioned below http://itextsharp.com/ http://www.mikesdotnetting.com/Article/82/iTextSharp-Adding-Text-with-Chunks-Phrases-and-Paragraphs http://somewebguy.wordpress.com/2009/05/08/itextsharp-simplify-your-html-to-pdf-creation/ Hope you enjoyed the article.

    Read the article

  • Easier ASP.NET MVC Routing

    - by Steve Wilkes
    I've recently refactored the way Routes are declared in an ASP.NET MVC application I'm working on, and I wanted to share part of the system I came up with; a really easy way to declare and keep track of ASP.NET MVC Routes, which then allows you to find the name of the Route which has been selected for the current request.

    Traditional MVC Route Declaration

    Traditionally, ASP.NET MVC Routes are added to the application's RouteCollection using overloads of the RouteCollection.MapRoute() method; for example, this is the standard way the default Route, which matches /controller/action URLs, is created:

    routes.MapRoute(     "Default",     "{controller}/{action}/{id}",     new { controller = "Home", action = "Index", id = UrlParameter.Optional });

    The first argument declares that this Route is to be named 'Default', the second specifies the Route's URL pattern, and the third contains the URL pattern segments' default values. To then write a link to a URL which matches the default Route in a View, you can use the HtmlHelper.RouteLink() method, like this:

    @this.Html.RouteLink("Default", new { controller = "Orders", action = "Index" })

    ...that substitutes 'Orders' into the {controller} segment of the default Route's URL pattern, and 'Index' into the {action} segment. The {Id} segment was declared optional and isn't specified here. That's about the most basic thing you can do with MVC routing, and I already have reservations:

    I've duplicated the magic string "Default" between the Route declaration and the use of RouteLink(). This isn't likely to cause a problem for the default Route, but once you get to dozens of Routes the duplication is a pain.
    There's no easy way to get from the RouteLink() method call to the declaration of the Route itself, so getting the names of the Route's URL parameters correct requires some effort.
    The call to MapRoute() is quite verbose; with dozens of Routes this gets pretty ugly.
    If at some point during a request I want to find out the name of the Route which has been matched... I can't.

    To get around these issues, I wanted to achieve the following:

    Make declaring a Route very easy, using as little code as possible.
    Introduce a direct link between where a Route is declared, where the Route is defined and where the Route's name is used, so I can use Visual Studio's Go To Definition to get from a call to RouteLink() to the declaration of the Route I'm using, making it easier to make sure I use the correct URL parameters.
    Create a way to access the currently-selected Route's name during the execution of a request.

    My first step was to come up with a quick and easy syntax for declaring Routes.

    1. An Easy Route Declaration Syntax

    I figured the easiest way of declaring a route was to put all the information in a single string with a special syntax. For example, the default MVC route would be declared like this:

    "{controller:Home}/{action:Index}/{Id}*"

    This contains the same information as the regular way of defining a Route, but is far more compact:

    The default values for each URL segment are specified in a colon-separated section after the segment name
    The {Id} segment is declared as optional simply by placing a * after it

    That's the default route - a pretty simple example - so how about this?
    routes.MapRoute(     "CustomerOrderList",     "Orders/{customerRef}/{pageNo}",     new { controller = "Orders", action = "List", pageNo = UrlParameter.Optional },     new { customerRef = "^[a-zA-Z0-9]+$", pageNo = "^[0-9]+$" });

    This maps URLs to the List action on the Orders controller which:

    Start with the string Orders/
    Then have a {customerRef} set of characters and numbers
    Then optionally a numeric {pageNo}.

    And again, it's quite verbose. Here's my alternative:

    "Orders/{customerRef:^[a-zA-Z0-9]+$}/{pageNo:^[0-9]+$}*->Orders/List"

    Quite a bit more brief, and again containing the same information as the regular way of declaring Routes:

    Regular expression constraints are declared after the colon separator, the same as default values
    The target controller and action are specified after the ->
    The {pageNo} is defined as optional by placing a * after it

    With an appropriate parser, that gave me a nice, compact and clear way to declare routes. Next I wanted to have a single place where Routes were declared and accessed.

    2. A Central Place to Declare and Access Routes

    I wanted all my Routes declared in one, dedicated place, which I would also use for Route names when calling RouteLink(). With this in mind I made a single class named Routes with a series of public, constant fields, each one relating to a particular Route. With this done, I figured a good place to actually declare each Route was in an attribute on the field defining the Route's name; the attribute would parse the Route definition string and make the resulting Route object available as a property. I then made the Routes class examine its own fields during its static setup, and cache all the attribute-created Route objects in an internal Dictionary. Finally I made Routes use that cache to register the Routes when requested, and to access them later when required. So the Routes class declares its named Routes like this:

    public static class Routes{     [RouteDefinition("Orders/{customerName}->Orders/Index")]     public const string OrdersCustomerIndex = "OrdersCustomerIndex";     [RouteDefinition("Orders/{customerName}/{orderId:^([0-9]+)$}->Orders/Details")]     public const string OrdersDetails = "OrdersDetails";     [RouteDefinition("{controller:Home}*/{action:Index}*")]     public const string Default = "Default"; }

    ...which are then used like this:

    @this.Html.RouteLink(Routes.Default, new { controller = "Orders", action = "Index" })

    Now that using Go To Definition on the Routes.Default constant takes me to where the Route is actually defined, it's nice and easy to quickly check on the parameter names when using RouteLink(). Finally, I wanted to be able to access the name of the current Route during a request.

    3. Recovering the Route Name

    The RouteDefinitionAttribute creates a NamedRoute class; a simple derivative of Route, but with a Name property. When the Routes class examines its fields and caches all the defined Routes, it has access to the name of the Route through the name of the field against which it is defined. It was therefore a pretty easy matter to have Routes give NamedRoute its name when it creates its cache of Routes. This means that the Route which is found in RequestContext.RouteData.Route is now a NamedRoute, and I can recover the Route's name during a request.
    For visibility, I made NamedRoute.ToString() return the Route name and URL pattern. The screenshot is from an example project I've made on bitbucket; it contains all the named route classes and an MVC 3 application which demonstrates their use. I've found this way of defining and using Routes much tidier than the default MVC system, and I hope you find it useful too.
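    If you want to sketch this out before digging into the example project, here is a minimal outline of the two classes described above - a Route derivative with a Name, and the attribute which parses the definition string. The member names and the RouteDefinitionParser are illustrative guesses, not necessarily what the bitbucket project uses:

    using System;
    using System.Web.Mvc;
    using System.Web.Routing;

    // A Route which knows its own name, so it can be recovered from
    // RequestContext.RouteData.Route during a request.
    public class NamedRoute : Route
    {
        public NamedRoute(string url, RouteValueDictionary defaults, RouteValueDictionary constraints)
            : base(url, defaults, constraints, new MvcRouteHandler())
        {
        }

        public string Name { get; internal set; }

        public override string ToString()
        {
            return Name + ": " + Url;
        }
    }

    // Placed on the constant fields of the Routes class; parses the compact
    // definition string into a NamedRoute.
    [AttributeUsage(AttributeTargets.Field)]
    public class RouteDefinitionAttribute : Attribute
    {
        public RouteDefinitionAttribute(string definition)
        {
            // RouteDefinitionParser is hypothetical here - the real parsing of
            // the "url->controller/action" syntax lives in the example project.
            Route = RouteDefinitionParser.Parse(definition);
        }

        public NamedRoute Route { get; private set; }
    }

    The static setup of the Routes class would then reflect over its own fields with GetCustomAttributes(), assign each NamedRoute's Name from the field name, and cache the results in its internal Dictionary.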

    Read the article

  • New OFM versions released SOA Suite 11.1.1.4 & BPM 11.1.1.4 & JDeveloper 11.1.1.4 WebLogic on JRockit 10.3.4 feedback from the community

    - by Jürgen Kress
    Oracle SOA Suite 11g Installations This is the latest release of the Oracle SOA Suite 11g. Please see the Documentation tab for Release Notes, Installation Guides and other release specific information. Please also see the List of New Features and Samples provided for this release. Release 11gR1 (11.1.1.4.0) Microsoft Windows (32-bit JVM) Linux (32-bit JVM) Generic Oracle JDeveloper 11g Rel 1 (11.1.1.x) (JDeveloper + ADF) Integrated development environment certified on Windows, Linux, and Macintosh. License is free (read the Pricing FAQ). Studio Edition for Windows (1.2 GB) | Studio Edition for Linux (1.3 GB) | See All See Additional Development Tools Oracle WebLogic Server 11g Rel 1 (10.3.4) Installers The WebLogic Server installers include Oracle Coherence and Oracle Enterprise Pack for Eclipse and support development with other Fusion Middleware products. The zip includes WebLogic Server only and is intended for WebLogic Server development only. Linux x86 (1.1 GB) | Windows x86 (1 GB) Zip for Windows x86, Linux x86, Mac OS X (316 MB) | See All Oracle WebLogic Server 11gR1 (10.3.4) on JRockit Virtual Edition Download For additional downloads please visit the Oracle Fusion Middleware Products Update Center. Share your feedback with the @soacommunity on twitter:

    SOASimone Simone Geib SOA Suite 11gR1 (11.1.1.4.0) has just been released: http://www.oracle.com/technetwork/middleware/soasuite/downloads/index.html
    gschmutz gschmutz My new blog post: WebLogic Server, JDev, SOA, BPM, OSB and CEP 11.1.1.4 (PS3) available! - http://tinyurl.com/4negnpn
    simon_haslam Simon Haslam I'm very pleased to see WLS 10.3.4 for JRockit VE launched at the same time as the rest of PS3 http://j.mp/gl1nQm (32bit anyway)
    lucasjellema Lucas Jellema See http://www.oracle.com/ocom/groups/public/@otn/documents/webcontent/156082.xml for PS3 extension downloads BPM, SOA Editor, WebCenter
    demed demed List of new features in @OracleSOA 11gR1 PS3: http://bit.ly/fVRwsP is not extremely long but huge release by # of bugs fixed. Go!
    biemond Edwin Biemond WebLogic 10.3.4 new features http://bit.ly/f7L1Eu Exalogic Elastic Cloud , JPA2 , Maven plugin, OWSM policies on WebLogic SCA applications
    JDeveloper JDeveloper & ADF JDeveloper and Oracle ADF 11g Release 1 Patch Set 3 (11.1.1.4.0): New Features and Bug Fixes http://bit.ly/feghnY
    simon_haslam Simon Haslam WebLogic Server 10.3.4 (i.e. 11gR1 PS3) available now too http://bit.ly/eeysZ2
    JDeveloper JDeveloper & ADF Share your impressions on the new JDeveloper 11g Patchset 3 release that came out today!
    Download it here: http://bit.ly/dogRN8
    VikasAatOracle Vikas Anand SOA Suite 11gR1PS3 is Hotpluggable ...see list of features that @Demed posted..#soa #soacommunity

    New versions of Oracle Fusion Middleware 11g R1 (11.1.1.4.x) include:

    Oracle WebLogic Server 11g R1 (10.3.4)
    Oracle SOA Suite 11g R1 (11.1.1.4.0)
    Oracle Business Process Management 11g R1 (11.1.1.4.0)
    Oracle Complex Event Processing 11g R1 (11.1.1.4.0)
    Oracle Application Integration Architecture Foundation Pack 11g R1 (11.1.1.4.0)
    Oracle Service Bus 11g R1 (11.1.1.4.0)
    Oracle Enterprise Repository 11g R1 (11.1.1.4.0)
    Oracle Identity Management 11g R1 (11.1.1.4.0)
    Oracle Enterprise Content Management 11g R1 (11.1.1.4.0)
    Oracle WebCenter 11g R1 (11.1.1.4.0) - coming soon
    Oracle Forms, Reports, Portal & Discoverer 11g R1 (11.1.1.4.0)
    Oracle Repository Creation Utility 11g R1 (11.1.1.4.0)
    Oracle JDeveloper & Application Development Runtime 11g R1 (11.1.1.4.0)

    Resources: Download (OTN) | Certification | Documentation

    New Features in Oracle SOA Suite 11g Release 1 (11.1.1.4.0) Updated: January, 2011 Go to Oracle SOA Suite 11g Doc

    Introduction Oracle SOA Suite 11gR1 (11.1.1.4.0) includes both bug fixes and the new features listed below - click on the title of each feature for more details. Downloads, documentation links and more information on Oracle SOA Suite are available on the SOA Suite OTN page and, as always, we welcome your feedback on the SOA OTN forum.

    New in Oracle SOA Suite in this release:

    BPEL Component
    BPEL 2.0 support in JDeveloper - The BPEL editor in JDeveloper now generates BPEL 2.0 code and introduces several new activities.
    Augmented XML variable auto-initialization capabilities - The XML variable auto-initialization capabilities have been enhanced to support two additional use cases: initializing the to-spec node if it doesn't exist during the rule, and initializing array elements.
    New Assign Activity dialog - The new Assign Activity supports the same drag & drop paradigm used for the XSLT mapper, greatly streamlining the task of assigning multiple variables.

    Mediator Component
    Time window parameter for the resequencer - This new parameter lets users initiate a best-effort resequencing based on a time window rather than a number of messages.
    Support for attachments in the Mediator assign dialog - The Mediator assign dialog now supports attachments, enabling usage of the Mediator to transmit attachments even if source and target schemas are different.

    Adapters & Bindings
    ChunkSize property added to the File Adapter header properties - The ChunkSize property of the File Adapter is now available as a header property, allowing in-process modification of the value for this property.
    Improved support for distributed WLS JMS topics through automatic rebalancing of listeners - The JMS Adapter has been enhanced to subscribe to administrative events from WLS JMS. Based on these events, it dynamically rebalances listeners when there are changes to the members of a local or remote WLS JMS distributed destination.
    JDeveloper configuration wizard for custom JCA adapters - A new wizard is available in JDeveloper to configure custom-built adapters.

    Administration & Enterprise Manager
    Enhanced purging capabilities to manage database growth - Historical instance data can now be purged using three different strategies: batch script, scheduled batch script or data partitioning.
    Asynchronous bulk instance deletion in Enterprise Manager - Bulk deletion of instances in Enterprise Manager now executes as an asynchronous operation, returning control to the user as soon as the action has been submitted and acknowledged.

    B2B
    Ability to schedule partner downtime - This feature allows trading partners to notify each other about planned downtime and to delay delivery of messages during that period.
    Message sequencing - B2B now supports both inbound and outbound message sequencing.
    Simplified BAM integration with B2B - B2B ships with various pre-configured artifacts to simplify monitoring in BAM.
    Instance Message Java API for B2B - The new instance message Java API supports programmatic access to B2B instance message data.

    Oracle Service Bus (OSB)
    Certification of the File and FTP JCA Adapters - The File and FTP JCA adapters are now certified for use with Oracle Service Bus (in addition to the native transports).
    Security enhancements - Oracle Service Bus now supports SAML 2.0 as well as the OWSM authorization policies. Check the Oracle Service Bus 11.1.1.4 Release Notes for a complete list of new features.

    Installation, Hot-Pluggability & Certifications
    Ability to run Oracle SOA Suite on IBM WebSphere Application Server - Oracle SOA Suite can now be deployed on IBM WebSphere Application Server Network Deployment (ND) 7.0.11 and IBM WebSphere Application Server 7.0.11.
    Single JVM developer installation template - Oracle SOA Suite can now be targeted to the WebLogic admin server - there is no requirement to also have a managed server. This topology is intended to minimize the memory footprint of development environments. This is in addition to the list of supported browsers, operating systems and databases already certified in prior releases.

    Complex Event Processing (CEP)
    IDE enhancements - This release introduces several enhancements to the development IDE, such as adapter wizards and an event-type repository.
    CQL enhancements - CQL enhancements include JDBC data cartridges and parametrized queries.
    Tracing and injecting events in the Event Processing Network (EPN) - In the development environment you can now trace and inject events. Check the Oracle CEP 11.1.1.4 Release Notes for a complete list of new features.

    SOA Suite page on OTN For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Wiki Website Technorati Tags: SOA Suite 11.1.1.4,JDeveloper 11.1.1.4,WebLogic 10.3.4,JRockit 10.3.4,SOA Community,Oracle,OPN,SOA,Simone Geib,Guido Schmutz,Edwin Biemond,Lucas Jellema,Simon Haslam,Demed,Vikas Anand,Jürgen Kress

    Read the article

  • Process.Start() and ShellExecute() fails with URLs on Windows 8

    - by Rick Strahl
    Since I installed Windows 8 I've noticed that a number of my applications appear to have problems opening URLs. That is when I click on a link inside of a Windows application, either nothing happens or there's an error that occurs. It's happening both to my own applications and a host of Windows applications I'm running. At first I thought this was an issue with my default browser (Chrome) but after switching the default browser to a few others and experimenting a bit I noticed that the errors occur - oddly enough - only when I run an application as an Administrator. I also tried switching to FireFox and Opera as my default browser and saw exactly the same behavior. The scenario for this is a bit bizarre: Running on Windows 8 Call Process.Start() (or ShellExecute() in Win32 API) with a URL or an HTML file Run 'As Administrator' (works fine under non-elevated user account!) or with UAC off A browser other than Internet Explorer is set as your Default Web Browser Talk about a weird scenario: Something that doesn't work when you run as an Administrator which is supposed to have rights to everything on the system! Instead running under an Admin account - either elevated with a User Account Control prompt or even when running as a full Administrator fails. It appears that this problem does not occur for everyone, but when I looked for a solution to this, I saw quite a few posts in relation to this with no clear resolutions. I have three Windows 8 machines running here in the office and all three of them showed this behavior. Lest you think this is just a programmer's problem - this can affect any software running on your system that needs to run under administrative rights. Try it out Now, in order for this next example to fail, any browser but Internet Explorer has to be your default browser and even then it may not fail depending on how you installed your browser. To see if this is a problem create a small Console application and call Process.Start() with a URL in it:namespace Win8ShellBugConsole { class Program { static void Main(string[] args) { Console.WriteLine("Launching Url..."); Process.Start("http://microsoft.com"); Console.Write("Press any key to continue..."); Console.ReadKey(); Console.WriteLine("\r\n\r\nLaunching image..."); Process.Start(Path.GetFullPath(@"..\..\sailbig.jpg")); Console.Write("Press any key to continue..."); Console.ReadKey(); } } } Compile this code. Then execute the code from Explorer (not from Visual Studio because that may change the permissions). If you simply run the EXE and you're not running as an administrator, you'll see the Web page pop up in the browser as well as the image loading. Now run the same thing with Run As Administrator: Now when you run it you get a nice error when Process.Start() is fired: The same happens if you are running with User Account Control off altogether - ie. you are running as a full admin account. Now if you comment out the URL in the code above and just fire the image display - that works just fine in any user mode. As does opening any other local file type or even starting a new EXE locally (ie. Process.Start("c:\windows\notepad.exe"). All that works, EXCEPT for URLs. The code above uses Process.Start() in .NET but the same happens in Win32 Applications that use the ShellExecute API. In some of my older Fox apps ShellExecute returns an error code of 31 - which is No Shell Association found. What's the Deal? It turns out the problem has to do with the way browsers are registering themselves on Windows. 
    Internet Explorer - being a built-in application in Windows 8 - apparently does this correctly, but other browsers possibly don't, or at least didn't at the time I installed them. Even though Chrome, which continually updates itself, has a recent version that apparently has this registration issue fixed, I was unable to simply set IE as my default browser and then use Chrome's 'Set as Default Browser'. It still didn't work. Neither did using the Set Program Associations dialog, which lets you assign what extensions are mapped to by a given application. Each application provides a set of extension/moniker mappings that it supports, and this dialog lets you associate them on a system-wide basis. This also did not work for Chrome or any of the other browsers at first. However, after repeated retries I eventually did manage to get FireFox to work, but not any of the others.

    What Works? Reinstall the Browser

    In the end I decided on the hard-core pull-the-plug solution: totally uninstall and re-install Chrome, in this case. And lo and behold, after the reinstall everything was working fine. Now even removing the association for Chrome, switching to IE as the default browser and then back to Chrome works. Oddly, the version of Chrome I was running before uninstalling and reinstalling is the same one I'm running now after the reinstall - yet now it works. Of course I had to find out the hard way, before Richard commented with a note regarding what the issue is with Chrome at least: http://code.google.com/p/chromium/issues/detail?id=156400 As expected the issue is a registration issue - with keys not being registered at the machine level. Reading this I'm still not sure why this should be a problem - an elevated account still runs under the same user account (ie. I'm still rickstrahl even if I Run As Administrator), so why shouldn't an app be able to read my Current User registry hive? It also doesn't quite explain why it still failed when I registered the extensions by running Chrome as Administrator and using Set as Default Browser. But in the end it works…

    Not so fast

    It's now a couple of days later and still there are some oddball problems, although this time they appear to be purely Chrome issues. After the reinstall Chrome seems to pop up properly with ShellExecute() calls both in regular user and Admin mode. However, it now looks like Chrome is actually running two completely separate user profiles for each. For example, when I run Visual Studio in Admin mode and go to View in browser, Chrome complains that it was installed in Admin mode and can't launch (WTF?). Then you retry a few times later and it ends up working. When launched that way some of the plug-ins installed don't show up, with the effect that sometimes they're visible and sometimes they're not. Also Chrome seems to lose my configuration and Google sign-in between sessions now, presumably when switching user modes. Add-ins installed in admin mode don't show up in user mode and vice versa. Ah, this is lovely. Did I mention that I freaking hate UAC precisely because of this kind of bullshit. You can never tell exactly what account your app is running under, and apparently apps also have a hard time trying to put data into the right place that works for both scenarios. And as my recent post on using Windows Live accounts shows, it's yet another level of abstraction on top of the underlying system identity that can cause all sorts of small side-effect headaches like this.
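    If you want to check whether your machine shows the registration gap described above, a quick diagnostic sketch (my own addition, not from the Chromium report) is to compare the http protocol handler in the merged HKCR view against the machine-level hive:

    using System;
    using Microsoft.Win32;

    // Diagnostic sketch: inspect which command is registered for the http
    // protocol. If the machine-level key is missing while the merged view
    // has one, elevated ShellExecute calls may fail as described above.
    using (RegistryKey merged = Registry.ClassesRoot.OpenSubKey(@"http\shell\open\command"))
    using (RegistryKey machine = Registry.LocalMachine.OpenSubKey(@"Software\Classes\http\shell\open\command"))
    {
        Console.WriteLine("HKCR:  {0}", merged != null ? merged.GetValue(null) : "(missing)");
        Console.WriteLine("HKLM:  {0}", machine != null ? machine.GetValue(null) : "(missing)");
    }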
    Hopefully, most of you are skirting this issue altogether - having installed more recent versions of your favorite browsers. If not, hopefully this post will take you straight to reinstallation to fix this annoying issue. © Rick Strahl, West Wind Technologies, 2005-2012 Posted in Windows .NET
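    As a practical addendum: if your application can't assume every user's browser is correctly registered, a defensive fallback (an assumption on my part, not something from the post) is to catch the failure and launch the URL via explorer.exe, which resolves the default browser itself:

    using System;
    using System.ComponentModel;
    using System.Diagnostics;

    public static class ShellHelper
    {
        // Tries a normal shell launch first; if the association lookup fails
        // (as it can when running elevated on Windows 8), falls back to
        // handing the URL to explorer.exe.
        public static void OpenUrl(string url)
        {
            try
            {
                Process.Start(url);
            }
            catch (Win32Exception)
            {
                Process.Start("explorer.exe", url);
            }
        }
    }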

    Read the article

  • GoldenGate 12c Trail Encryption and Credentials with Oracle Wallet

    - by hamsun
    I have been asked more than once whether the Oracle Wallet supports GoldenGate trail encryption. Although GoldenGate has supported encryption with the ENCKEYS file for years, Oracle GoldenGate 12c now also supports encryption using the Oracle Wallet. This helps improve security and makes it easier to administer. Two types of wallets can be configured in Oracle GoldenGate 12c: The wallet that holds the master keys, used with trail or TCP/IP encryption and decryption, stored in the new 12c dirwlt/cwallet.sso file.   The wallet that holds the Oracle Database user IDs and passwords stored in the ‘credential store’ stored in the new 12c dircrd/cwallet.sso file.   A wallet can be created using a ‘create wallet’  command.  Adding a master key to an existing wallet is easy using ‘open wallet’ and ‘add masterkey’ commands.   GGSCI (EDLVC3R27P0) 42> open wallet Opened wallet at location 'dirwlt'. GGSCI (EDLVC3R27P0) 43> add masterkey Master key 'OGG_DEFAULT_MASTERKEY' added to wallet at location 'dirwlt'.   Existing GUI Wallet utilities that come with other products such as the Oracle Database “Oracle Wallet Manager” do not work on this version of the wallet. The default Oracle Wallet can be changed.   GGSCI (EDLVC3R27P0) 44> sh ls -ltr ./dirwlt/* -rw-r----- 1 oracle oinstall 685 May 30 05:24 ./dirwlt/cwallet.sso GGSCI (EDLVC3R27P0) 45> info masterkey Masterkey Name:                 OGG_DEFAULT_MASTERKEY Creation Date:                  Fri May 30 05:24:04 2014 Version:        Creation Date:                  Status: 1               Fri May 30 05:24:04 2014        Current   The second wallet file is used for the credential used to connect to a database, without exposing the user id or password. Once it is configured, this file can be copied so that credentials are available to connect to the source or target database.   GGSCI (EDLVC3R27P0) 48> sh cp ./dircrd/cwallet.sso $GG_EURO_HOME/dircrd GGSCI (EDLVC3R27P0) 49> sh ls -ltr ./dircrd/* -rw-r----- 1 oracle oinstall 709 May 28 05:39 ./dircrd/cwallet.sso   The encryption wallet file can also be copied to the target machine so the replicat has access to the master key to decrypt records that are encrypted in the trail. Similar to the old ENCKEYS file, the master keys wallet created on the source host must either be stored in a centrally available disk or copied to all GoldenGate target hosts. The wallet is in a platform-independent format, although it is not certified for the iSeries, z/OS, and NonStop platforms.   GGSCI (EDLVC3R27P0) 50> sh cp ./dirwlt/cwallet.sso $GG_EURO_HOME/dirwlt   The new 12c UserIdAlias parameter is used to locate the credential in the wallet so the source user id and password does not need to be stored as a parameter as long as it is in the wallet.   GGSCI (EDLVC3R27P0) 52> view param extwest extract extwest exttrail ./dirdat/ew useridalias gguamer table west.*; The EncryptTrail parameter is used to encrypt the trail using the Advanced Encryption Standard and can be used with a primary extract or pump extract. GGSCI (EDLVC3R27P0) 54> view param pwest extract pwest encrypttrail AES256 rmthost easthost, mgrport 15001 rmttrail ./dirdat/pe passthru table west.*;   Once the extracts are running, records can be encrypted using the wallet.   
GGSCI (EDLVC3R27P0) 60> info extract *west EXTRACT    EXTWEST   Last Started 2014-05-30 05:26   Status RUNNING Checkpoint Lag       00:00:17 (updated 00:00:01 ago) Process ID           24982 Log Read Checkpoint  Oracle Integrated Redo Logs                      2014-05-30 05:25:53                      SCN 0.0 (0) EXTRACT    PWEST     Last Started 2014-05-30 05:26   Status RUNNING Checkpoint Lag       24:02:32 (updated 00:00:05 ago) Process ID           24983 Log Read Checkpoint  File ./dirdat/ew000004                      2014-05-29 05:23:34.748949  RBA 1483   The ‘info masterkey’ command is used to confirm the wallet contains the key after copying it to the target machine. The key is needed to decrypt the data in the trail before the replicat applies the changes to the target database.   GGSCI (EDLVC3R27P0) 41> open wallet Opened wallet at location 'dirwlt'. GGSCI (EDLVC3R27P0) 42> info masterkey Masterkey Name:                 OGG_DEFAULT_MASTERKEY Creation Date:                  Fri May 30 05:24:04 2014 Version:        Creation Date:                  Status: 1               Fri May 30 05:24:04 2014        Current   Once the replicat is running, records can be decrypted using the wallet.   GGSCI (EDLVC3R27P0) 44> info reast REPLICAT   REAST     Last Started 2014-05-30 05:28   Status RUNNING INTEGRATED Checkpoint Lag       00:00:00 (updated 00:00:02 ago) Process ID           25057 Log Read Checkpoint  File ./dirdat/pe000004                      2014-05-30 05:28:16.000000  RBA 1546   There is no need for the DecryptTrail parameter when using the Oracle Wallet, unlike when using the ENCKEYS file.   GGSCI (EDLVC3R27P0) 45> view params reast replicat reast assumetargetdefs discardfile ./dirrpt/reast.dsc, purge useridalias ggueuro map west.*, target east.*;   Once a record is inserted into the source table and committed, the encryption can be verified using logdump and then querying the target table.   AMER_SQL>insert into west.branch values (50, 80071); 1 row created.   AMER_SQL>commit; Commit complete.   The following encrypted record can be found using logdump. Logdump 40 >n 2014/05/30 05:28:30.001.154 Insert               Len    28 RBA 1546 Name: WEST.BRANCH After  Image:                                             Partition 4   G  s    0a3e 1ba3 d924 5c02 eade db3f 61a9 164d 8b53 4331 | .>...$\....?a..M.SC1   554f e65a 5185 0257                               | UO.ZQ..W  Bad compressed block, found length of  7075 (x1ba3), RBA 1546   GGS tokens: TokenID x52 'R' ORAROWID         Info x00  Length   20  4141 4157 7649 4141 4741 4141 4144 7541 4170 0001 | AAAWvIAAGAAAADuAAp..  TokenID x4c 'L' LOGCSN           Info x00  Length    7  3231 3632 3934 33                                 | 2162943  TokenID x36 '6' TRANID           Info x00  Length   10  3130 2e31 372e 3135 3031                          | 10.17.1501  The replicat automatically decrypted this record from the trail and then inserted the row to the target table using the wallet. This select verifies the row was inserted into the target database and the data is not encrypted. 
EURO_SQL>select * from branch where branch_number=50; BRANCH_NUMBER                  BRANCH_ZIP -------------                                   ----------    50                                              80071   Book a seat in an upcoming Oracle GoldenGate 12c: Fundamentals for Oracle course now to learn more about GoldenGate 12c new features including how to use GoldenGate with the Oracle wallet, credentials, integrated extracts, integrated replicats, the Oracle Universal Installer, and other new features. Looking for another course? View all Oracle GoldenGate training.   Randy Richeson joined Oracle University as a Senior Principal Instructor in March 2005. He is an Oracle Certified Professional (10g-12c) and a GoldenGate Certified Implementation Specialist (10-11g). He has taught GoldenGate since 2010 and also has experience teaching other technical curriculums including GoldenGate Monitor, Veridata, JD Edwards, PeopleSoft, and the Oracle Application Server.
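    A postscript on the credential store used above (useridalias gguamer / ggueuro): it is populated with GGSCI commands along these lines - the user id, password and alias here are placeholders and the syntax is from memory, so check the GGSCI reference for your version:

    add credentialstore
    alter credentialstore add user ggadmin@amer password ******** alias gguamer
    info credentialstore

    Once the alias exists, parameter files can reference it with UserIdAlias, as shown in the extwest extract above, and no user id or password ever appears in plain text.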

    Read the article

  • Changing an HTML Form's Target with jQuery

    - by Rick Strahl
    This is a question that comes up quite frequently: I have a form with several submit or link buttons and one or more of the buttons needs to open a new Window. How do I get several buttons to all post to the right window?

    If you're building ASP.NET forms you probably know that by default the Web Forms engine sends button clicks back to the server as a POST operation. A server form has a <form> tag which expands to this:

    <form method="post" action="default.aspx" id="form1">

    Now you CAN change the target of the form and point it to a different window or frame, but the problem with that is that it still affects ALL submissions of the current form. If you have multiple buttons/links and they need to go to different target windows/frames, you can't do it easily through the <form runat="server"> tag. Although this discussion uses ASP.NET WebForms as an example, realistically this is a general HTML problem, although likely more common in WebForms due to the single form metaphor it uses. In ASP.NET MVC for example you'd have more options by breaking out each button into separate forms, each with its own distinct target tag. However, even with that option it's not always possible to break up forms - for example if multiple targets are required but all targets require the same form data to be posted. A common scenario here is that you might have a button (or link) that you click where you still want some server code to fire, but at the end of the request you actually want to display the content in a new window. A common operation where this happens is report generation: you click a button and the server generates a report, say in PDF format, and you then want to display the PDF result in a new window without killing the content in the current window. Assuming you have other buttons on the same Page that need to post to the base window, how do you get the button click to go to a new window?

    Can't you just use a LinkButton or other Link Control?

    At first glance you might think an easy way to do this is to use an ASP.NET LinkButton - after all a LinkButton creates a hyperlink that CAN accept a target and it also posts back to the server, right? However, there's no Target property, although you can set the target HTML attribute easily enough. Code like this looks reasonable:

    <asp:LinkButton runat="server" ID="btnNewTarget" Text="New Target" target="_blank" OnClick="bnNewTarget_Click" />

    But if you try this you'll find that it doesn't work. Why? Because ASP.NET creates postbacks with JavaScript code that operates on the current window/frame:

    <a id="btnNewTarget" target="_blank" href="javascript:__doPostBack(&#39;btnNewTarget&#39;,&#39;&#39;)">New Target</a>

    What happens with a target tag is that before the JavaScript actually executes, a new window is opened and the focus shifts to the new window. The new window of course is empty and has no __doPostBack() function nor access to the old document. So when you click the link a new window opens but the window remains blank without content - no server postback actually occurs. Natch that idea.

    Setting the Form Target for a Button Control or LinkButton

    So, in order to send Postback link controls and buttons to another window/frame, both require that the target of the form gets changed dynamically when the button or link is clicked. Luckily this is rather easy to do using a little bit of script code and jQuery.
Imagine you have two buttons like this that should go to another window: <asp:LinkButton runat="server" ID="btnNewTarget" Text="New Target" OnClick="ClickHandler" /> <asp:Button runat="server" ID="btnButtonNewTarget" Text="New Target Button" OnClick="ClickHandler" /> ClickHandler in this case is any routine that generates the output you want to display in the new window. Generally this output will not come from the current page markup but is generated externally - like a PDF report or some report generated by another application component or tool. The output generally will be either generated by hand or something that was generated to disk to be displayed with Response.Redirect() or Response.TransmitFile() etc. Here's the dummy handler that just generates some HTML by hand and displays it: protected void ClickHandler(object sender, EventArgs e) { // Perform some operation that generates HTML or Redirects somewhere else Response.Write("Some custom output would be generated here (PDF, non-Page HTML etc.)"); // Make sure this response doesn't display the page content // Call Response.End() or Response.Redirect() Response.End(); } To route this oh so sophisticated output to an alternate window for both the LinkButton and Button Controls, you can use the following simple script code: <script type="text/javascript"> $("#btnButtonNewTarget,#btnNewTarget").click(function () { $("form").attr("target", "_blank"); }); </script> So why does this work where the target attribute did not? The difference here is that the script fires BEFORE the target is changed to the new window. When you put a target attribute on a link or form the target is changed as the very first thing before the link actually executes. IOW, the link literally executes in the new window when it's done this way. By attaching a click handler, though we're not navigating yet so all the operations the script code performs (ie. __doPostBack()) and the collection of Form variables to post to the server all occurs in the current page. By changing the target from within script code the target change fires as part of the form submission process which means it runs in the correct context of the current page. IOW - the input for the POST is from the current page, but the output is routed to a new window/frame. Just what we want in this scenario. Voila you can dynamically route output to the appropriate window.© Rick Strahl, West Wind Technologies, 2005-2011Posted in ASP.NET  HTML  jQuery  
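    One refinement worth considering (my addition, not part of the original post): because the click handler above changes the form's target permanently, any later submission from a button without such a handler would also open a new window. Resetting the target once the current submission has fired avoids that:

    <script type="text/javascript">
        $("#btnButtonNewTarget,#btnNewTarget").click(function () {
            var form = $("form");
            form.attr("target", "_blank");
            // restore the default target after the queued postback has run,
            // so other buttons keep posting to the current window
            setTimeout(function () { form.removeAttr("target"); }, 0);
        });
    </script>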

    Read the article

  • How to use onSensorChanged sensor data in combination with OpenGL

    - by Sponge
    I have written a TestSuite to find out how to calculate the rotation angles from the data you get in SensorEventListener.onSensorChanged(). I really hope you can complete my solution to help people who will have the same problems as me. Here is the code; I think you will understand it after reading it. Feel free to change it - the main idea was to implement several methods to send the orientation angles to the OpenGL view or any other target which would need it. Methods 1 to 4 are working; they directly send the rotationMatrix to the OpenGL view. The other methods are not working or are buggy, and I hope someone knows how to get them working. I think the best method would be method 5 if it worked, because it would be the easiest to understand, but I'm not sure how efficient it is. The complete code isn't optimized, so I recommend not using it as-is in your project. Here it is: import java.nio.ByteBuffer; import java.nio.ByteOrder; import java.nio.FloatBuffer; import javax.microedition.khronos.egl.EGL10; import javax.microedition.khronos.egl.EGLConfig; import javax.microedition.khronos.opengles.GL10; import static javax.microedition.khronos.opengles.GL10.*; import android.app.Activity; import android.content.Context; import android.content.pm.ActivityInfo; import android.hardware.Sensor; import android.hardware.SensorEvent; import android.hardware.SensorEventListener; import android.hardware.SensorManager; import android.opengl.GLSurfaceView; import android.opengl.GLSurfaceView.Renderer; import android.os.Bundle; import android.util.Log; import android.view.WindowManager; /** * This class provides a basic demonstration of how to use the * {@link android.hardware.SensorManager SensorManager} API to draw a 3D * compass. */ public class SensorToOpenGlTests extends Activity implements Renderer, SensorEventListener { private static final boolean TRY_TRANSPOSED_VERSION = false; /* * MODUS overview: * * 1 - unbuffered data directly transferred from the rotation matrix to the * modelview matrix * * 2 - buffered version of 1 where both acceleration and magnetometer are * buffered * * 3 - buffered version of 1 where only the magnetometer is buffered * * 4 - buffered version of 1 where only the acceleration is buffered * * 5 - uses the orientation sensor and sets the angles for rotating the * camera with glRotatef() * * 6 - uses the rotation matrix to calculate the angles * * 7 to 12 - every possible way the rotationMatrix could be constructed * in SensorManager.getRotationMatrix (see * http://www.songho.ca/opengl/gl_anglestoaxes.html#anglestoaxes for all * possibilities) */ private static int MODUS = 2; private GLSurfaceView openglView; private FloatBuffer vertexBuffer; private ByteBuffer indexBuffer; private FloatBuffer colorBuffer; private SensorManager mSensorManager; private float[] rotationMatrix = new float[16]; private float[] accelGData = new float[3]; private float[] bufferedAccelGData = new float[3]; private float[] magnetData = new float[3]; private float[] bufferedMagnetData = new float[3]; private float[] orientationData = new float[3]; // private float[] mI = new float[16]; private float[] resultingAngles = new float[3]; private int mCount; final static float rad2deg = (float) (180.0f / Math.PI); private boolean mirrorOnBlueAxis = false; private boolean landscape; public SensorToOpenGlTests() { } /** Called when the activity is first created. 
*/ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); mSensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE); openglView = new GLSurfaceView(this); openglView.setRenderer(this); setContentView(openglView); } @Override protected void onResume() { // Ideally a game should implement onResume() and onPause() // to take appropriate action when the activity looses focus super.onResume(); openglView.onResume(); if (((WindowManager) getSystemService(WINDOW_SERVICE)) .getDefaultDisplay().getOrientation() == 1) { landscape = true; } else { landscape = false; } mSensorManager.registerListener(this, mSensorManager .getDefaultSensor(Sensor.TYPE_ACCELEROMETER), SensorManager.SENSOR_DELAY_GAME); mSensorManager.registerListener(this, mSensorManager .getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD), SensorManager.SENSOR_DELAY_GAME); mSensorManager.registerListener(this, mSensorManager .getDefaultSensor(Sensor.TYPE_ORIENTATION), SensorManager.SENSOR_DELAY_GAME); } @Override protected void onPause() { // Ideally a game should implement onResume() and onPause() // to take appropriate action when the activity looses focus super.onPause(); openglView.onPause(); mSensorManager.unregisterListener(this); } public int[] getConfigSpec() { // We want a depth buffer, don't care about the // details of the color buffer. int[] configSpec = { EGL10.EGL_DEPTH_SIZE, 16, EGL10.EGL_NONE }; return configSpec; } public void onDrawFrame(GL10 gl) { // clear screen and color buffer: gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); // set target matrix to modelview matrix: gl.glMatrixMode(GL10.GL_MODELVIEW); // init modelview matrix: gl.glLoadIdentity(); // move camera away a little bit: if ((MODUS == 1) || (MODUS == 2) || (MODUS == 3) || (MODUS == 4)) { if (landscape) { // in landscape mode first remap the rotationMatrix before using // it with glMultMatrixf: float[] result = new float[16]; SensorManager.remapCoordinateSystem(rotationMatrix, SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, result); gl.glMultMatrixf(result, 0); } else { gl.glMultMatrixf(rotationMatrix, 0); } } else { //in all other modes do the rotation by hand: gl.glRotatef(resultingAngles[1], 1, 0, 0); gl.glRotatef(resultingAngles[2], 0, 1, 0); gl.glRotatef(resultingAngles[0], 0, 0, 1); if (mirrorOnBlueAxis) { //this is needed for mode 6 to work gl.glScalef(1, 1, -1); } } //move the axis to simulate augmented behaviour: gl.glTranslatef(0, 2, 0); // draw the 3 axis on the screen: gl.glVertexPointer(3, GL_FLOAT, 0, vertexBuffer); gl.glColorPointer(4, GL_FLOAT, 0, colorBuffer); gl.glDrawElements(GL_LINES, 6, GL_UNSIGNED_BYTE, indexBuffer); } public void onSurfaceChanged(GL10 gl, int width, int height) { gl.glViewport(0, 0, width, height); float r = (float) width / height; gl.glMatrixMode(GL10.GL_PROJECTION); gl.glLoadIdentity(); gl.glFrustumf(-r, r, -1, 1, 1, 10); } public void onSurfaceCreated(GL10 gl, EGLConfig config) { gl.glDisable(GL10.GL_DITHER); gl.glClearColor(1, 1, 1, 1); gl.glEnable(GL10.GL_CULL_FACE); gl.glShadeModel(GL10.GL_SMOOTH); gl.glEnable(GL10.GL_DEPTH_TEST); gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_COLOR_ARRAY); // load the 3 axis and there colors: float vertices[] = { 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1 }; float colors[] = { 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1 }; byte indices[] = { 0, 1, 0, 2, 0, 3 }; ByteBuffer vbb; vbb = ByteBuffer.allocateDirect(vertices.length * 4); vbb.order(ByteOrder.nativeOrder()); vertexBuffer = 
vbb.asFloatBuffer(); vertexBuffer.put(vertices); vertexBuffer.position(0); vbb = ByteBuffer.allocateDirect(colors.length * 4); vbb.order(ByteOrder.nativeOrder()); colorBuffer = vbb.asFloatBuffer(); colorBuffer.put(colors); colorBuffer.position(0); indexBuffer = ByteBuffer.allocateDirect(indices.length); indexBuffer.put(indices); indexBuffer.position(0); } public void onAccuracyChanged(Sensor sensor, int accuracy) { } public void onSensorChanged(SensorEvent event) { // load the new values: loadNewSensorData(event); if (MODUS == 1) { SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData); } if (MODUS == 2) { rootMeanSquareBuffer(bufferedAccelGData, accelGData); rootMeanSquareBuffer(bufferedMagnetData, magnetData); SensorManager.getRotationMatrix(rotationMatrix, null, bufferedAccelGData, bufferedMagnetData); } if (MODUS == 3) { rootMeanSquareBuffer(bufferedMagnetData, magnetData); SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, bufferedMagnetData); } if (MODUS == 4) { rootMeanSquareBuffer(bufferedAccelGData, accelGData); SensorManager.getRotationMatrix(rotationMatrix, null, bufferedAccelGData, magnetData); } if (MODUS == 5) { // this mode uses the sensor data recieved from the orientation // sensor resultingAngles = orientationData.clone(); if ((-90 > resultingAngles[1]) || (resultingAngles[1] > 90)) { resultingAngles[1] = orientationData[0]; resultingAngles[2] = orientationData[1]; resultingAngles[0] = orientationData[2]; } } if (MODUS == 6) { SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData); final float[] anglesInRadians = new float[3]; SensorManager.getOrientation(rotationMatrix, anglesInRadians); if ((-90 < anglesInRadians[2] * rad2deg) && (anglesInRadians[2] * rad2deg < 90)) { // device camera is looking on the floor // this hemisphere is working fine mirrorOnBlueAxis = false; resultingAngles[0] = anglesInRadians[0] * rad2deg; resultingAngles[1] = anglesInRadians[1] * rad2deg; resultingAngles[2] = anglesInRadians[2] * -rad2deg; } else { mirrorOnBlueAxis = true; // device camera is looking in the sky // this hemisphere is mirrored at the blue axis resultingAngles[0] = (anglesInRadians[0] * rad2deg); resultingAngles[1] = (anglesInRadians[1] * rad2deg); resultingAngles[2] = (anglesInRadians[2] * rad2deg); } } if (MODUS == 7) { SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData); rotationMatrix = transpose(rotationMatrix); /* * this assumes that the rotation matrices are multiplied in x y z * order Rx*Ry*Rz */ resultingAngles[2] = (float) (Math.asin(rotationMatrix[2])); final float cosB = (float) Math.cos(resultingAngles[2]); resultingAngles[2] = resultingAngles[2] * rad2deg; resultingAngles[0] = -(float) (Math.acos(rotationMatrix[0] / cosB)) * rad2deg; resultingAngles[1] = (float) (Math.acos(rotationMatrix[10] / cosB)) * rad2deg; } if (MODUS == 8) { SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData); rotationMatrix = transpose(rotationMatrix); /* * this assumes that the rotation matrices are multiplied in z y x */ resultingAngles[2] = (float) (Math.asin(-rotationMatrix[8])); final float cosB = (float) Math.cos(resultingAngles[2]); resultingAngles[2] = resultingAngles[2] * rad2deg; resultingAngles[1] = (float) (Math.acos(rotationMatrix[9] / cosB)) * rad2deg; resultingAngles[0] = (float) (Math.asin(rotationMatrix[4] / cosB)) * rad2deg; } if (MODUS == 9) { SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData); rotationMatrix = 
transpose(rotationMatrix); /* * this assumes that the rotation matrices are multiplied in z x y * * note z axis looks good at this one */ resultingAngles[1] = (float) (Math.asin(rotationMatrix[9])); final float minusCosA = -(float) Math.cos(resultingAngles[1]); resultingAngles[1] = resultingAngles[1] * rad2deg; resultingAngles[2] = (float) (Math.asin(rotationMatrix[8] / minusCosA)) * rad2deg; resultingAngles[0] = (float) (Math.asin(rotationMatrix[1] / minusCosA)) * rad2deg; } if (MODUS == 10) { SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData); rotationMatrix = transpose(rotationMatrix); /* * this assumes that the rotation matrices are multiplied in y x z */ resultingAngles[1] = (float) (Math.asin(-rotationMatrix[6])); final float cosA = (float) Math.cos(resultingAngles[1]); resultingAngles[1] = resultingAngles[1] * rad2deg; resultingAngles[2] = (float) (Math.asin(rotationMatrix[2] / cosA)) * rad2deg; resultingAngles[0] = (float) (Math.acos(rotationMatrix[5] / cosA)) * rad2deg; } if (MODUS == 11) { SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData); rotationMatrix = transpose(rotationMatrix); /* * this assumes that the rotation matrices are multiplied in y z x */ resultingAngles[0] = (float) (Math.asin(rotationMatrix[4])); final float cosC = (float) Math.cos(resultingAngles[0]); resultingAngles[0] = resultingAngles[0] * rad2deg; resultingAngles[2] = (float) (Math.acos(rotationMatrix[0] / cosC)) * rad2deg; resultingAngles[1] = (float) (Math.acos(rotationMatrix[5] / cosC)) * rad2deg; } if (MODUS == 12) { SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData); rotationMatrix = transpose(rotationMatrix); /* * this assumes that the rotation matrices are multiplied in x z y */ resultingAngles[0] = (float) (Math.asin(-rotationMatrix[1])); final float cosC = (float) Math.cos(resultingAngles[0]); resultingAngles[0] = resultingAngles[0] * rad2deg; resultingAngles[2] = (float) (Math.acos(rotationMatrix[0] / cosC)) * rad2deg; resultingAngles[1] = (float) (Math.acos(rotationMatrix[5] / cosC)) * rad2deg; } logOutput(); } /** * transposes the matrix because it was transposted (inverted, but here its * the same, because its a rotation matrix) to be used for opengl * * @param source * @return */ private float[] transpose(float[] source) { final float[] result = source.clone(); if (TRY_TRANSPOSED_VERSION) { result[1] = source[4]; result[2] = source[8]; result[4] = source[1]; result[6] = source[9]; result[8] = source[2]; result[9] = source[6]; } // the other values in the matrix are not relevant for rotations return result; } private void rootMeanSquareBuffer(float[] target, float[] values) { final float amplification = 200.0f; float buffer = 20.0f; target[0] += amplification; target[1] += amplification; target[2] += amplification; values[0] += amplification; values[1] += amplification; values[2] += amplification; target[0] = (float) (Math .sqrt((target[0] * target[0] * buffer + values[0] * values[0]) / (1 + buffer))); target[1] = (float) (Math .sqrt((target[1] * target[1] * buffer + values[1] * values[1]) / (1 + buffer))); target[2] = (float) (Math .sqrt((target[2] * target[2] * buffer + values[2] * values[2]) / (1 + buffer))); target[0] -= amplification; target[1] -= amplification; target[2] -= amplification; values[0] -= amplification; values[1] -= amplification; values[2] -= amplification; } private void loadNewSensorData(SensorEvent event) { final int type = event.sensor.getType(); if (type == Sensor.TYPE_ACCELEROMETER) { 
accelGData = event.values.clone(); } if (type == Sensor.TYPE_MAGNETIC_FIELD) { magnetData = event.values.clone(); } if (type == Sensor.TYPE_ORIENTATION) { orientationData = event.values.clone(); } } private void logOutput() { if (mCount++ > 30) { mCount = 0; Log.d("Compass", "yaw0: " + (int) (resultingAngles[0]) + " pitch1: " + (int) (resultingAngles[1]) + " roll2: " + (int) (resultingAngles[2])); } } }
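    A note on the smoothing used above: rootMeanSquareBuffer() is one way to damp the sensor jitter; a more conventional alternative (my suggestion, not from the original code) is a simple exponential low-pass filter, which could replace it in modes 2 to 4:

    /**
     * Simple exponential low-pass filter as an alternative smoothing step.
     * alpha close to 0 means heavy smoothing, close to 1 means almost raw data.
     */
    private void lowPassFilter(float[] target, float[] values, float alpha) {
        for (int i = 0; i < values.length; i++) {
            // move the buffered value a fraction of the way toward the new sample
            target[i] = target[i] + alpha * (values[i] - target[i]);
        }
    }

    For example, lowPassFilter(bufferedAccelGData, accelGData, 0.1f) in place of rootMeanSquareBuffer(bufferedAccelGData, accelGData) gives comparable damping with less arithmetic per event.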

    Read the article

  • Recap: Oracle Fusion Middleware Strategies Driving Business Innovation

    - by Harish Gaur
    Hasan Rizvi, Executive Vice President of Oracle Fusion Middleware & Java, took the stage on Tuesday to discuss how Oracle Fusion Middleware helps enable business innovation. Through a series of product demos and customer showcases, Hasan demonstrated how Oracle Fusion Middleware is a complete platform to harness the latest technological innovations (cloud, mobile, social and Fast Data) throughout the application lifecycle.

    Fig 1: Oracle Fusion Middleware is the foundation of business innovation

    This session included four demonstrations to illustrate these strategies: 1. Build and deploy native mobile applications using Oracle ADF Mobile 2. Empower business users to model processes, design user interfaces and have a rich mobile experience for process interaction using Oracle BPM Suite PS6. 3. Create a collaborative user experience and integrate social sign-on using Oracle WebCenter Portal, Oracle WebCenter Content, Oracle Social Network & Oracle Identity Management 11g R2 4. Deploy and manage business applications on Oracle Exalogic

    Nike, the LA Department of Water & Power and Nintendo joined Hasan on stage to share how their organizations are leveraging Oracle Fusion Middleware to enable business innovation.

    Managing Performance in the World of Social and Mobile

    How do you provide predictable scalability and performance for an application that monitors the active lifestyle of 8 million users on a daily basis? Nike's answer is Oracle Coherence, a component of Oracle Fusion Middleware, together with Oracle Exadata.

    Fig 2: Oracle Coherence enabled data grid improves performance of Nike+ Digital Sports Platform

    Nicole Otto, Sr. Director of Consumer Digital Technology, discussed the vision of the Nike+ platform, a platform which represents a shift for NIKE from a "product" to a "product +" experience. There are currently nearly 8 million users in the Nike+ system who are using digitally-enabled Nike+ devices. Once data from a Nike+ device is transmitted to the Nike+ application, users access the Nike+ website or the Nike mobile application, seeing metrics around their daily active lifestyle and even engaging in socially compelling experiences to compare, compete or collaborate on their data with friends. Nike expects the number of users to grow significantly this year, which will drive an explosion of data and potential new experiences. To deal with this challenge, Nike envisioned building a shared platform that would drive a consumer-centric model for the company. Nike built this new platform using Oracle Coherence and Oracle Exadata. Using Coherence, Nike built a data grid tier as a distributed cache, thereby providing low-latency access to the most recent and relevant data for consumers. Nicole discussed how the Nike+ Digital Sports Platform is unique in the way that it utilizes the Coherence grid. Nike takes advantage of Coherence as a traditional cache using both cache-aside and cache-through patterns. This new tier has enabled Nike to create a horizontally scalable, distributed, event-driven processing architecture. Current data grid volume is approximately 150,000 requests per minute, with about 40 million objects on the grid at any given time.

    Improving Customer Experience Across Multiple Channels

    Customer experience is on top of every CIO's mind. Customer experience needs to be consistent and secure across the multiple devices consumers may use. This is the challenge Matt Lampe, CIO of the Los Angeles Department of Water & Power (LADWP), was faced with.
    Despite being the largest utilities company in the country, LADWP had been relying on a 38-year-old customer information system to serve its customers. Their prior system had been unable to keep up with growing customer demands. Last year, LADWP embarked on a journey to improve the customer experience for 1.6 million LADWP customers using the Oracle WebCenter platform.

    Figure 3: Multi-channel & multilingual LADWP.com built using the Oracle WebCenter & Oracle Identity Management platform

    Matt shed light on his efforts to drive customer self-service across three dimensions - a new website, a new IVR platform and a new bill payment service. LADWP has built a new portal to increase customer self-service while reducing transactions via IVR. LADWP's website is powered by Oracle WebCenter Portal and is accessible from desktop and mobile devices. By leveraging Oracle WebCenter, LADWP eliminated the need to build, format, and maintain individual mobile applications or websites for different devices. Their entire content is managed using Oracle WebCenter Content and secured using Oracle Identity Management. The new portal turned their paper-based processes into web-based workflows for customers. This includes automation of self-service implemented through My Account - like Bill Pay, Payment History, Bill History and Usage Analysis. LADWP's solution went live in April 2012. Matt indicated that LADWP's self-service portal has greatly improved customer satisfaction. In a JD Power Associates website satisfaction survey, results indicate rankings have climbed by 25+ points, marking a remarkable increase in user experience.

    Bolstering Performance and Simplifying Manageability of Business Applications

    Ingvar Petursson, Senior Vice President of IT at Nintendo America, joined Hasan on stage to discuss their choice of Exalogic. Nintendo had significant new requirements coming their way for business systems, both internal and external, in the years to come, especially with new products like the WiiU on the horizon this holiday season. Nintendo needed a platform that could give them performance, availability and ease of management as they deploy business systems. Ingvar selected Engineered Systems for two reasons: 1. High performance 2. Ease of management

    Figure 4: Nintendo relies on Oracle Exalogic to run ATG eCommerce, Oracle E-Business Suite and several business applications

    Nintendo made a decision to run their business applications (ATG eCommerce, E-Business Suite) and several Fusion Middleware components on the Exalogic platform. What impressed Ingvar were the "stress" testing results during evaluation. Oracle Exalogic could handle their 3-year load estimates for many functions, which was better than Nintendo expected, without any hardware expansion.

    Faster Processing of Big Data

    Middleware plays an increasingly important role in Big Data. Last year at OpenWorld we announced the introduction of Oracle Data Integrator for Hadoop and Oracle Loader for Hadoop, which help move, transform and load data to and from the Big Data Appliance and Exadata. This year, we've added new capabilities to find, filter, and focus data using Oracle Event Processing. This product can natively integrate with Big Data Appliance or run standalone. Hasan briefly discussed how NTT Docomo, the largest mobile operator in Japan, leverages Oracle Event Processing & Oracle Coherence to process mobile data (from 13 million smartphone users) at a speed of 700K events per second before feeding it to Hadoop for distributed processing of big data.
Figure 5: Mobile traffic data processing at NTT Docomo with Oracle Event Processing & Oracle Coherence    

    Read the article

  • How to use Azure storage for uploading and displaying pictures.

    - by Magnus Karlsson
Basic setup of Azure storage for local development and production. This builds on the following guide from http://www.windowsazure.com/en-us/develop/net/how-to-guides/blob-storage/ and adds a practical example that I believe is commonly used, i.e. uploading and presenting an image from a user. First we set up local storage, and then we configure it to work on a web role.

Steps: 1. Configure the connection string locally. 2. Configure the model, controllers and Razor views.

1. Set up the connection string
1.1 Right-click your web role and choose "Properties".
1.2 Click Settings.
1.3 Add a setting.
1.4 Name your setting. This will be the name of the connection string.
1.5 Click the ellipsis to the right (the ellipsis appears when you select the area).
1.6 In the window that appears, select "Windows Azure storage emulator" and click OK.

Now we have a connection string to use. To be able to use it we need to make sure we have the Windows Azure tools for storage.
2.1 Click Tools -> Library Package Manager -> Manage NuGet Packages for Solution.
2.2 This is what it looks like after the package has been added.

Now on to what the code should look like.
3.1 First we need a view which collects images to upload. Here, Index.cshtml:

@model List<string>

@{
    ViewBag.Title = "Index";
}

<h2>Index</h2>
<form action="@Url.Action("Upload")" method="post" enctype="multipart/form-data">
    <label for="file">Filename:</label>
    <input type="file" name="file" id="file1" />
    <br />
    <label for="file">Filename:</label>
    <input type="file" name="file" id="file2" />
    <br />
    <label for="file">Filename:</label>
    <input type="file" name="file" id="file3" />
    <br />
    <label for="file">Filename:</label>
    <input type="file" name="file" id="file4" />
    <br />
    <input type="submit" value="Submit" />
</form>

@foreach (var item in Model) {
    <img src="@item" alt="Alternate text"/>
}

3.2 We need a controller to receive the post. Notice the "containername" string I send to the BlobHandler; I use this as a folder for each user's pictures. If this is not a requirement you could just call it "container", or anything in lowercase characters, directly when creating the container.

public ActionResult Upload(IEnumerable<HttpPostedFileBase> file)
{
    BlobHandler bh = new BlobHandler("containername");
    bh.Upload(file);
    var blobUris = bh.GetBlobs();

    return RedirectToAction("Index", blobUris);
}

3.3 The handler model. I'll let the comments speak for themselves.

public class BlobHandler
{
    // Retrieve storage account from connection string.
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
        CloudConfigurationManager.GetSetting("StorageConnectionString"));

    private string imageDirecoryUrl;

    /// <summary>
    /// Receives the user's Id for where the pictures are and creates
    /// a blob container with that name if it does not exist.
    /// </summary>
    /// <param name="imageDirecoryUrl"></param>
    public BlobHandler(string imageDirecoryUrl)
    {
        this.imageDirecoryUrl = imageDirecoryUrl;

        // Create the blob client.
        CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

        // Retrieve a reference to a container.
        CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

        // Create the container if it doesn't already exist.
        container.CreateIfNotExists();

        // Make it available to everyone.
        container.SetPermissions(
            new BlobContainerPermissions
            {
                PublicAccess = BlobContainerPublicAccessType.Blob
            });
    }

    public void Upload(IEnumerable<HttpPostedFileBase> file)
    {
        // Create the blob client.
        CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

        // Retrieve a reference to a container.
        CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

        if (file != null)
        {
            foreach (var f in file)
            {
                if (f != null)
                {
                    CloudBlockBlob blockBlob = container.GetBlockBlobReference(f.FileName);
                    blockBlob.UploadFromStream(f.InputStream);
                }
            }
        }
    }

    public List<string> GetBlobs()
    {
        // Create the blob client.
        CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

        // Retrieve a reference to a previously created container.
        CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

        List<string> blobs = new List<string>();

        // Loop over blobs within the container and output the URI of each one.
        foreach (var blobItem in container.ListBlobs())
            blobs.Add(blobItem.Uri.ToString());

        return blobs;
    }
}

3.4 So, when the files have been uploaded we present them to our user on the index page. Pretty straightforward. In this example we only present the images by sending the URIs to the view. A better way would be to wrap them in a view model containing the URI, metadata, alternate text and other relevant information (a sketch of such a view model follows at the end of this post), but for this example this is all we need.

4. Now press F5 in your solution to try it out. You can see the storage emulator UI here:

4.1 If you get any exceptions or errors, I suggest first checking that the storage emulator service is running correctly. I had problems with this; they seemed related to the installation, and a reboot fixed them.

5. Set up cloud storage. To do this we need to add configuration for the cloud, just as we did for local storage in step one.
5.1 We need our keys to do this. Go to the Windows Azure management portal, select the storage icon to the right and click "Manage keys". (The image is from a different blog post, though.)
5.2 Do as in step 1, but replace step 1.6 with:
1.6 Choose "Manually entered credentials". Enter your account name.
1.7 Paste your Account Key from step 5.1 and click OK.
5.3 Save, publish and run! Please feel free to ask any questions using the comments form at the bottom of this page. I will get back to you to help you solve any questions. Our consultancy agency also provides services in the Nordic regions if you would like any further support.
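As promised in 3.4, here is a minimal sketch of what such a view model could look like, together with a handler method that fills it. The ImageViewModel type and the GetBlobViewModels name are hypothetical additions for illustration, not part of the original sample.

public class ImageViewModel
{
    public string Uri { get; set; }
    public string AltText { get; set; }
}

public List<ImageViewModel> GetBlobViewModels()
{
    // Same plumbing as GetBlobs(), but wrapping each URI with display metadata.
    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

    List<ImageViewModel> models = new List<ImageViewModel>();
    foreach (var blobItem in container.ListBlobs())
    {
        models.Add(new ImageViewModel
        {
            Uri = blobItem.Uri.ToString(),
            // Use the file name as a simple default for the alt attribute.
            AltText = System.IO.Path.GetFileName(blobItem.Uri.LocalPath)
        });
    }
    return models;
}

The view would then iterate over a List<ImageViewModel> instead of a List<string> and render <img src="@item.Uri" alt="@item.AltText" />.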

    Read the article

  • Thoughts on the Nomination Committee Campaign 2014

    - by Testas
Congratulations to Erin, Andy and Allen on making the Nomination Committee for 2014. As Mark Broadbent (@retracement) stated in his tweet, there's a great set of individuals for the Nom Com, and I could not agree more. I know Erin and Allen, and I know how much value they will bring to the process. I don't know Andy as well, but I am sure he will do a great job and I hope I can meet him at PASS soon. The final candidate appointed by the PASS board is Rick Bolesta, who brings a wealth of experience to the process. I also want to take the opportunity to thank all who have voted, not just for me, but for all the candidates during the election. Your contribution is greatly appreciated. Would I apply for the Nom Com again? Yes, I would. My first election has been a learning experience in itself, so I accept the result and look forward to applying next year.

Moving on from this, I do want to express my opinion about the lack of international representation in the election process. One of the tweets that I saw after the result was from Adam Machanic (@AdamMachanic), who commented on the lack of international members on the Nom Com. If truth be told, I was disappointed, when the candidate list was released, that for the second time in recent elections there was a lack of international candidates on the list. It feels as though only Brits and Americans partake in such elections. This is a real shame, and I can't help wondering why this is the case. Hugo Kornelis (@Hugo_Kornelis) wrote a blog here to express his thoughts, and he raised some valid points. I don't know why there is an absence of international candidates. I know that the team at PASS are looking to improve the situation, so I do not want to give the impression that PASS are doing nothing; for reference, please see Bill Graziano's article here on how PASS are addressing the situation. There is a clear direction to change the rules within PASS to give greater inclusion to international members. In addition to this, I wanted to explore a couple of potential approaches to address the situation. I am not saying that they are the right answer, but when I see challenges, I like to bring potential solutions to the table.

1. Use the PASS mission statement to define a tactical objective that engages community leaders in the election process. If you are not familiar with the PASS mission statement, here it is as laid out on the PASS website: "Empower data professionals who leverage Microsoft technologies to connect, share, and learn through networking, knowledge sharing, and peer-based learning." PASS fulfils this mission statement regularly, whether you attend SQL Saturday, SQLRally, or the SQLPASS and BA conferences themselves. The biggest value of PASS is its ability to bring our profession together, and the 24 hour hop allows you to learn from the comfort of your own office or home. This mission should be extended to define tactical objectives that bring greater networking and knowledge sharing between PASS Chapter Leaders/Regional Mentors and PASS HQ. It should help educate the leaders about the opportunities of elections and how leaders can become involved. I know PASS engage with Chapter Leaders on a regular basis to discuss community matters for the benefit of PASS members. How could this be achieved? Perhaps PASS could hold a quarterly virtual meeting that specifically looks at helping leaders become more involved with the election process.

2. Evolve the Global Growth Strategy into a Global Engagement Strategy. One of the remits of the PASS board over the last couple of years is the Global Growth strategy. This has been very successful, as we have seen massive growth of events across the world, and for that I congratulate the board. Perhaps the time is now right to look at solidifying this success through a Global Engagement Strategy that starts with the collaboration of Chapter Leaders, Regional Mentors and Evangelists in their respective countries or regions. The engagement strategy should look at increasing collaboration between community leaders for the benefit of their respective communities. It should also provide a channel for encouraging leaders to put themselves forward for the elections. How could this be achieved? In the UK, there had been big growth in PASS Chapters and SQL Server events that was approaching saturation point. The introduction of the Community Engagement Day, channelled through the SQLBits conference, has enabled Chapter Leaders to collaborate, connect and share with PASS, sponsors and Microsoft. It also provides the ability for Chapter Leaders to speak directly to the PASS representatives from PASS HQ, which gives PASS community evangelists a way to communicate PASS objectives. It has also been the event where we have found out about, and/or encouraged, Chapter Leaders putting themselves forward for elections. People like encouragement and validation when going for something like an election, and being able to discuss this with peers at a dedicated event provides a useful platform. PASS has the people in place already to facilitate such an event. Regional Mentors could potentially help organise such events on an annual basis, with PASS HQ providing support in the form of a room or Lync access for the event to take place. It would be really good if a PASS HQ representative could attend in person as well.

3. Restrict candidates to serving only a limited number of terms. A frequent comment I saw on social networks was that the elections can be seen by some as a popularity contest. Perhaps by limiting the number of terms that an individual can serve on either the Nom Com or the BoD, other candidates may be encouraged to become more actively involved in the PASS election process. I don't think that the current byelaws deal with this particular suggestion. I also saw a couple of tweets stating that more active community members did not apply for the Nom Com. I struggled to understand how the authors of those tweets measured "more active"; it only further underlined the subjective nature of elections. In the absence of a clearer process for how candidates are put forward for the elections, a restriction of terms extends the opportunity to others. How could this be achieved? Set a resolution that is put to a community vote on the viability of such a solution. For example, the questions for the vote could be: Should individuals on the Nom Com and BoD be limited to a certain number of terms? Yes/No. What is the maximum number of terms a candidate could serve? It would be simple to execute such a vote, and the community would have an opportunity to have a say in an important aspect of the PASS organisation. And if the change is successful, add it as a byelaw.

So there are some of my thoughts. I am not saying they are right or wrong, but I do hope that there is a concerted effort to encourage more candidates from other reaches of the globe to become involved with future elections. It would be good to hear your thoughts. Thanks, Chris

    Read the article

  • Rebuilding CoasterBuzz, Part III: The architecture using the "Web stack of love"

    - by Jeff
This is the third post in a series about rebuilding one of my Web sites, which has been around for 12 years. I hope to relaunch in the next month or two. More: Part I: Evolution, and death to WCF Part II: Hot data objects I finally hit a point in the re-do of CoasterBuzz where I feel like the major pieces are in place... rewritten, ported and what not, so that I can now focus on front-end design and more interesting creative problems. I've been asked on more than one occasion (OK, just twice) what's going on under the covers, so I figure this might be a good time to explain the overall architecture. As it turns out, I'm using a whole lot of the "Web stack of love," as Scott Hanselman likes to refer to it. Oh, that Hanselman. First off, at the center of it all, is BizTalk. Just kidding. That's "enterprise architecture" humor, where every discussion starts with how they'll use BizTalk. Here are the bigger moving parts: It's fairly straightforward. A common library lives in a number of Web apps, all of which are (or will be) powered by ASP.NET MVC 4. They all talk to the same database. There is the main Web site, which also has the endpoint for the Silverlight-based Feed app. The cstr.bz site handles redirects, which are generated when news items are published and sent to Twitter. Facebook publishing is handled via the RSS Graffiti Facebook app. The API site handles requests from the Windows Phone app. The main site depends very heavily on POP Forums, the open source, MVC-based forum I maintain. It serves a number of functions, primarily handling users. These user objects serve in non-forum roles to handle things like news and database contributions, maintaining track records (coaster nerd for "list of rides I've been on") and, perhaps most importantly, paid club memberships. Before I get into more specifics, note that the "glue" for everything is Ninject, the dependency injection framework. I actually prefer StructureMap these days, but I started with Ninject in POP Forums a long time ago. POP Forums has a static class, PopForumsActivation, that news up an instance of the container, and you can call it from wherever. The downside is that the forums require Ninject in your MVC app as the default dependency resolver. At some point, I'll decouple it, but for now it's not in the way. In the general sense, the entire set of apps follows a repository-service-controller-view pattern. Repos just do data access, service classes do business logic, controllers compose and route, views view. The forum also provides Scoring Game functionality. The Scoring Game is a reasonably abstract framework to award users points based on certain actions, and then award achievements when a certain number of point events happen. For example, the forum already awards a point when someone plus-one's a post you made. You can set up an achievement that says, "Give the user an award when they've had 100 posts plus'd." It also does zero-point entries into the ledger, so if you make a post, you could award an achievement based on 100 posts made. Wiring the Scoring Game into CoasterBuzz functionality is just a matter of going to the Ninject container, getting an instance of the event publisher, and passing it events. Forum adapters were introduced into POP Forums a few versions ago; they can intercept the model generated for forum topic lists and threads and designate an alternate view. These are used to make the "Day in Pictures" forum, where users can upload photos as frame-by-frame photo threads.
Another adapter adds an association UI, so users can associate specific amusement parks with their trip report posts. The Silverlight-based Feed app talks to a simple JSON endpoint in the main app. This uses an underlying library I wrote ages ago, simply called Feeds, that aggregates event information. You inherit from a base class that creates instances of a publisher interface, and then use that class to send it an event type and any number of data fields. Feeds has two publishers: one publishes to the database, which is used for the endpoint that talks to the Silverlight app, and the second publishes to Twitter, if the event is of the type "news." The wiring is a little strange, because for the new posts and topics events, I'm actually pulling the forum repository classes out of the Ninject container and replacing them with overridden methods that publish. I should probably be doing this at the service class level, but whatever. It's my mess. cstr.bz doesn't do anything interesting. It looks up the path, and if it has a match, does a 301 redirect to the long URL. The API site just serves up JSON for the Windows Phone app. The Windows Phone app is Silverlight, of course, and there isn't much to it. It does use the control toolkit, but beyond that, it relies on a simple class that creates a WebClient and calls the server for JSON to deserialize. The same class is now used by the Feed app, which used to use WCF. Simple is better. Data access in POP Forums is all straight SQL, because a lot of it was ported from the ASP.NET version. Most CoasterBuzz data access is handled by the Entity Framework, using the code-first model. The context class in this case does a lot of work to make sure that the table and key mapping works, since much of it breaks from the normal conventions of EF. One of the more powerful things you can do with EF, once you understand the little gotchas, is split tables by row into different entities (see the sketch at the end of this post). For example, a roller coaster photo has everything in the same row, including the metadata, the thumbnail bytes and the image itself. Obviously, if you want to get a list of photos to iterate over in a view, you don't want to pull the image data. The use of navigation properties makes it easier to get just what you want. The front end includes Razor views in MVC, and jQuery is used for client-side goodness. I'm also using jQuery UI in a few places, for tabs, a dialog box and autocomplete, and, tentatively, jQuery Mobile. I've already ported most forum views to Mobile, but they need some work as v1.1 isn't finished yet. I'm not sure if I'll ship CoasterBuzz with mobile views or not yet. It's on the radar, but not something in my delivery criteria. That covers all of the big frameworks in play. Next time I hope to talk more about the front-end experience, which to me is where most of the fun is these days. Hoping to launch in the next month or two. Getting tired of looking at the old site!
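Since the table-splitting trick is easy to get wrong, here is a minimal sketch of how it looks in EF code-first. The entity and property names are hypothetical stand-ins, not CoasterBuzz's actual types; the key idea is that both entities share a primary key and map to the same physical table.

using System.Data.Entity;

public class Photo
{
    public int PhotoId { get; set; }
    public string Title { get; set; }
    public virtual PhotoData Data { get; set; } // heavy columns, loaded on demand
}

public class PhotoData
{
    public int PhotoId { get; set; }
    public byte[] Thumbnail { get; set; }
    public byte[] Image { get; set; }
}

public class SiteContext : DbContext
{
    public DbSet<Photo> Photos { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Both entities share the same key and the same table, so the image
        // bytes are only fetched when the Data navigation property is used.
        modelBuilder.Entity<Photo>().HasKey(p => p.PhotoId);
        modelBuilder.Entity<PhotoData>().HasKey(p => p.PhotoId);
        modelBuilder.Entity<Photo>()
            .HasRequired(p => p.Data)
            .WithRequiredPrincipal();
        modelBuilder.Entity<Photo>().ToTable("Photos");
        modelBuilder.Entity<PhotoData>().ToTable("Photos");
    }
}

With this mapping, a query like context.Photos.Select(p => p.Title) never touches the blob columns, which is exactly what you want for list views.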

    Read the article

  • How to add correct cancellation when downloading a file with the example in the samples of the new P

    - by Mike
Hello everybody, I have downloaded the latest samples from the Parallel Programming team, and I haven't succeeded in correctly adding the ability to cancel the download of a file. Here is the code I ended up with: var wreq = (HttpWebRequest)WebRequest.Create(uri); // Fire start event DownloadStarted(this, new DownloadStartedEventArgs(remoteFilePath)); long totalBytes = 0; wreq.DownloadDataInFileAsync(tmpLocalFile, cancellationTokenSource.Token, allowResume, totalBytesAction => { totalBytes = totalBytesAction; }, readBytes => { Log.Debug("Progression : {0} / {1} => {2}%", readBytes, totalBytes, 100 * (double)readBytes / totalBytes); DownloadProgress(this, new DownloadProgressEventArgs(remoteFilePath, readBytes, totalBytes, (int)(100 * readBytes / totalBytes))); }) .ContinueWith( (antecedent ) => { if (antecedent.IsFaulted) Log.Debug(antecedent.Exception.Message); //Fire end event SetEndDownload(antecedent.IsCanceled, antecedent.Exception, tmpLocalFile, 0); }, cancellationTokenSource.Token); I want to fire an end event after the download is finished, hence the ContinueWith. I slightly changed the code of the samples to add the CancellationToken and the two delegates that get the size of the file to download and the progression of the download: return webRequest.GetResponseAsync() .ContinueWith(response => { if (totalBytesAction != null) totalBytesAction(response.Result.ContentLength); response.Result.GetResponseStream().WriteAllBytesAsync(filePath, ct, resumeDownload, progressAction).Wait(ct); }, ct); I had to add the call to the Wait function, because if I don't, the method exits and the end event is fired too early. Here are the modified method extensions (lots of code, apologies :p) public static Task WriteAllBytesAsync(this Stream stream, string filePath, CancellationToken ct, bool resumeDownload = false, Action<long> progressAction = null) { if (stream == null) throw new ArgumentNullException("stream"); // Copy from the source stream to the memory stream and return the copied data return stream.CopyStreamToFileAsync(filePath, ct, resumeDownload, progressAction); } public static Task CopyStreamToFileAsync(this Stream source, string destinationPath, CancellationToken ct, bool resumeDownload = false, Action<long> progressAction = null) { if (source == null) throw new ArgumentNullException("source"); if (destinationPath == null) throw new ArgumentNullException("destinationPath"); // Open the output file for writing var destinationStream = FileAsync.OpenWrite(destinationPath); // Copy the source to the destination stream, then close the output file. return CopyStreamToStreamAsync(source, destinationStream, ct, progressAction).ContinueWith(t => { var e = t.Exception; destinationStream.Close(); if (e != null) throw e; }, ct, TaskContinuationOptions.ExecuteSynchronously, TaskScheduler.Current); } public static Task CopyStreamToStreamAsync(this Stream source, Stream destination, CancellationToken ct, Action<long> progressAction = null) { if (source == null) throw new ArgumentNullException("source"); if (destination == null) throw new ArgumentNullException("destination"); return Task.Factory.Iterate(CopyStreamIterator(source, destination, ct, progressAction)); } private static IEnumerable<Task> CopyStreamIterator(Stream input, Stream output, CancellationToken ct, Action<long> progressAction = null) { // Create two buffers. One will be used for the current read operation and one for the current // write operation. We'll continually swap back and forth between them.
byte[][] buffers = new byte[2][] { new byte[BUFFER_SIZE], new byte[BUFFER_SIZE] }; int filledBufferNum = 0; Task writeTask = null; int readBytes = 0; // Until there's no more data to be read or cancellation while (true) { ct.ThrowIfCancellationRequested(); // Read from the input asynchronously var readTask = input.ReadAsync(buffers[filledBufferNum], 0, buffers[filledBufferNum].Length); // If we have no pending write operations, just yield until the read operation has // completed. If we have both a pending read and a pending write, yield until both the read // and the write have completed. yield return writeTask == null ? readTask : Task.Factory.ContinueWhenAll(new[] { readTask, writeTask }, tasks => tasks.PropagateExceptions()); // If no data was read, nothing more to do. if (readTask.Result <= 0) break; readBytes += readTask.Result; if (progressAction != null) progressAction(readBytes); // Otherwise, write the written data out to the file writeTask = output.WriteAsync(buffers[filledBufferNum], 0, readTask.Result); // Swap buffers filledBufferNum ^= 1; } } So basically, at the end of the chain of called methods, I let the CancellationToken throw an OperationCanceledException if a Cancel has been requested. What I hoped was to get IsFaulted == true in the calling code and to fire the end event with the cancellation flags and the correct exception. But what I get is an unhandled exception on the line response.Result.GetResponseStream().WriteAllBytesAsync(filePath, ct, resumeDownload, progressAction).Wait(ct); telling me that I don't catch an AggregateException. I've tried various things, but I haven't succeeded in making the whole thing work properly. Has anyone played enough with that library to be able to help me? Thanks in advance Mike
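For what it's worth, one common way to keep a blocking Wait from surfacing as an unhandled exception is to treat cancellation as an expected outcome. A minimal sketch, assuming the same ct token as above; note that Wait(ct) throws OperationCanceledException directly when the token fires, while a fault or cancellation inside the task surfaces as an AggregateException:

try
{
    response.Result.GetResponseStream()
        .WriteAllBytesAsync(filePath, ct, resumeDownload, progressAction)
        .Wait(ct);
}
catch (AggregateException ae)
{
    // Swallow cancellations; rethrow anything else.
    ae.Handle(inner => inner is OperationCanceledException);
}
catch (OperationCanceledException)
{
    // The wait itself was canceled before the task completed.
}

Also note that passing cancellationTokenSource.Token to the ContinueWith that fires the end event means that continuation is itself canceled when the token is signaled, so SetEndDownload never runs; dropping the token from that particular ContinueWith lets the end event fire even on cancellation.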

    Read the article

  • Possible to manipulate UI elements via dispatchEvent()?

    - by rinogo
    Hi all! I'm trying to manually dispatch events on a textfield so I can manipulate it indirectly via code (e.g. place cursor at a given set of x/y coordinates). However, my events seem to have no effect. I've written a test to experiment with this phenomenon: package sandbox { import flash.display.Sprite; import flash.events.MouseEvent; import flash.text.TextField; import flash.text.TextFieldType; import flash.text.TextFieldAutoSize; import flash.utils.setTimeout; public class Test extends Sprite { private var tf:TextField; private var tf2:TextField; public function Test() { super(); tf = new TextField(); tf.text = 'Interact here'; tf.type = TextFieldType.INPUT; addChild(tf); tf2 = new TextField(); tf2.text = 'Same events replayed with five second delay here'; tf2.autoSize = TextFieldAutoSize.LEFT; tf2.type = TextFieldType.INPUT; tf2.y = 30; addChild(tf2); tf.addEventListener(MouseEvent.CLICK, mouseListener); tf.addEventListener(MouseEvent.DOUBLE_CLICK, mouseListener); tf.addEventListener(MouseEvent.MOUSE_DOWN, mouseListener); tf.addEventListener(MouseEvent.MOUSE_MOVE, mouseListener); tf.addEventListener(MouseEvent.MOUSE_OUT, mouseListener); tf.addEventListener(MouseEvent.MOUSE_OVER, mouseListener); tf.addEventListener(MouseEvent.MOUSE_UP, mouseListener); tf.addEventListener(MouseEvent.MOUSE_WHEEL, mouseListener); tf.addEventListener(MouseEvent.ROLL_OUT, mouseListener); tf.addEventListener(MouseEvent.ROLL_OVER, mouseListener); } private function mouseListener(event:MouseEvent):void { //trace(event); setTimeout(function():void {trace(event); tf2.dispatchEvent(event);}, 5000); } } } Essentially, all this test does is to use setTimeout to effectively 'record' events on TextField tf and replay them five seconds later on TextField tf2. When an event is dispatched on tf2, it is traced to the console output. 
The console output upon running this program and clicking on tf is: [MouseEvent type="mouseMove" bubbles=true cancelable=false eventPhase=3 localX=0 localY=1 stageX=0 stageY=1 relatedObject=null ctrlKey=false altKey=false shiftKey=false delta=0] [MouseEvent type="rollOver" bubbles=false cancelable=false eventPhase=2 localX=0 localY=1 stageX=0 stageY=1 relatedObject=null ctrlKey=false altKey=false shiftKey=false delta=0] [MouseEvent type="mouseOver" bubbles=true cancelable=false eventPhase=3 localX=0 localY=1 stageX=0 stageY=1 relatedObject=null ctrlKey=false altKey=false shiftKey=false delta=0] [MouseEvent type="mouseMove" bubbles=true cancelable=false eventPhase=3 localX=2 localY=1 stageX=2 stageY=1 relatedObject=null ctrlKey=false altKey=false shiftKey=false delta=0] [MouseEvent type="mouseMove" bubbles=true cancelable=false eventPhase=3 localX=2 localY=2 stageX=2 stageY=2 relatedObject=null ctrlKey=false altKey=false shiftKey=false delta=0] [MouseEvent type="mouseMove" bubbles=true cancelable=false eventPhase=3 localX=2 localY=3 stageX=2 stageY=3 relatedObject=null ctrlKey=false altKey=false shiftKey=false delta=0] [MouseEvent type="mouseMove" bubbles=true cancelable=false eventPhase=3 localX=3 localY=3 stageX=3 stageY=3 relatedObject=null ctrlKey=false altKey=false shiftKey=false delta=0] [MouseEvent type="mouseMove" bubbles=true cancelable=false eventPhase=3 localX=5 localY=3 stageX=5 stageY=3 relatedObject=null ctrlKey=false altKey=false shiftKey=false delta=0] [MouseEvent type="mouseMove" bubbles=true cancelable=false eventPhase=3 localX=6 localY=5 stageX=6 stageY=5 relatedObject=null ctrlKey=false altKey=false shiftKey=false delta=0] [MouseEvent type="mouseMove" bubbles=true cancelable=false eventPhase=3 localX=7 localY=5 stageX=7 stageY=5 relatedObject=null ctrlKey=false altKey=false shiftKey=false delta=0] [MouseEvent type="mouseMove" bubbles=true cancelable=false eventPhase=3 localX=9 localY=5 stageX=9 stageY=5 relatedObject=null ctrlKey=false altKey=false shiftKey=false delta=0] [MouseEvent type="mouseMove" bubbles=true cancelable=false eventPhase=3 localX=10 localY=5 stageX=10 stageY=5 relatedObject=null ctrlKey=false altKey=false shiftKey=false delta=0] [MouseEvent type="mouseMove" bubbles=true cancelable=false eventPhase=3 localX=11 localY=5 stageX=11 stageY=5 relatedObject=null ctrlKey=false altKey=false shiftKey=false delta=0] [MouseEvent type="mouseMove" bubbles=true cancelable=false eventPhase=3 localX=12 localY=5 stageX=12 stageY=5 relatedObject=null ctrlKey=false altKey=false shiftKey=false delta=0] [MouseEvent type="mouseDown" bubbles=true cancelable=false eventPhase=3 localX=12 localY=5 stageX=12 stageY=5 relatedObject=null ctrlKey=false altKey=false shiftKey=false delta=0] [MouseEvent type="mouseUp" bubbles=true cancelable=false eventPhase=3 localX=12 localY=5 stageX=12 stageY=5 relatedObject=null ctrlKey=false altKey=false shiftKey=false delta=0] [MouseEvent type="click" bubbles=true cancelable=false eventPhase=3 localX=12 localY=5 stageX=12 stageY=5 relatedObject=null ctrlKey=false altKey=false shiftKey=false delta=0] [MouseEvent type="mouseMove" bubbles=true cancelable=false eventPhase=3 localX=10 localY=4 stageX=10 stageY=4 relatedObject=null ctrlKey=false altKey=false shiftKey=false delta=0] [MouseEvent type="mouseMove" bubbles=true cancelable=false eventPhase=3 localX=9 localY=2 stageX=9 stageY=2 relatedObject=null ctrlKey=false altKey=false shiftKey=false delta=0] [MouseEvent type="mouseMove" bubbles=true cancelable=false eventPhase=3 localX=9 localY=1 
stageX=9 stageY=1 relatedObject=null ctrlKey=false altKey=false shiftKey=false delta=0] [MouseEvent type="mouseOut" bubbles=true cancelable=false eventPhase=3 localX=-1 localY=-1 stageX=-1 stageY=-1 relatedObject=null ctrlKey=false altKey=false shiftKey=false delta=0] [MouseEvent type="rollOut" bubbles=false cancelable=false eventPhase=2 localX=-1 localY=-1 stageX=-1 stageY=-1 relatedObject=null ctrlKey=false altKey=false shiftKey=false delta=0] As we can see, the events are being captured and replayed successfully. However, no change occurs in tf2 - the mouse cursor does not appear in tf2 as we would expect. In fact, the cursor remains in tf even after the tf2 events are dispatched. Please help! Thanks, -Rich

    Read the article

  • BRE (Business Rules Engine) Data Services is out...!!!

    - by Vishal
A few months ago we at Tellago open sourced the BizTalk Data Services. We were meanwhile working on other artifacts that come along with BizTalk Server, like the "Business Rules Engine". We are happy to announce the first version of BRE Data Services. BRE Data Services follows the same concept we covered with BTS Data Services: providing a RESTful OData-based API to interact with the Business Rules Engine via HTTP, using the Atom Publishing Protocol or JSON as the encoding mechanism. In this first version we mainly focused on browsing, querying and searching BRE artifacts via a RESTful interface. Along with that, we provide the functionality to execute Business Rules by inserting the Facts for policies via the IUpdatable implementation of WCF Data Services. The BRE Data Services API provides a lightweight interface for managing Business Rules Engine artifacts such as Policies, Rules, Vocabularies, Conditions, Actions, Facts etc. The following examples detail some of the available features in the current version of the API. Basic Querying: Querying BRE Policies http://localhost/BREDataServices/BREMananagementService.svc/Policies Querying BRE Rules http://localhost/BREDataServices/BREMananagementService.svc/Rules Querying BRE Vocabularies http://localhost/BREDataServices/BREMananagementService.svc/Vocabularies Navigation: The BRE Data Services API also leverages WCF Data Services to enable navigation across related BRE objects. Querying a specific Policy http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName') Querying a specific Rule http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName') Querying all Rules under a Policy http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')/Rules Querying all Facts under a Policy http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')/Facts Querying all Actions for a specific Rule http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')/Actions Querying all Conditions for a specific Rule http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')/Conditions Querying a specific Vocabulary: http://localhost/BREDataServices/BREMananagementService.svc/Vocabularies('VocabName') Implementation: With BRE Data Services, we also provide the functionality of executing a particular policy via HTTP. There are a couple of ways you can do that through the API. Ø The first is through the Service Operations feature of WCF Data Services, in which you can execute the Facts by passing them in the URL itself. This is a very simple implementation of executing the policies, owing to the current limitations of WCF Data Services Service Operations (only primitive input parameter types can be passed). Below is a code sample. Below is a traced Request/Response message. Ø The second is through the IUpdatable interface of WCF Data Services. In this method, you first query the rule you want to execute and then insert Facts for that particular Rule; finally, when you perform the SaveChanges() call of the IUpdatable interface API, it executes the policy with the facts you inserted at runtime. Below is a sample of client-side code.
Due to the limitations of the current version of WCF Data Services, there is no way to return the updates happening on the service side back to the client via the SaveChanges() method. Here we are executing the rule by passing serialized XML as Facts, and no changes are made to any data that we could query back to fetch the results. This is overcome through the first way of executing the policies, namely as a Service Operation call. This actually generates an AtomPub message, shown below: POST /Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/$batch HTTP/1.1 User-Agent: Microsoft ADO.NET Data Services DataServiceVersion: 1.0;NetFx MaxDataServiceVersion: 2.0;NetFx Accept: application/atom+xml,application/xml Accept-Charset: UTF-8 Content-Type: multipart/mixed; boundary=batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7 Host: localhost:8080 Content-Length: 1481 Expect: 100-continue --batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7 Content-Type: multipart/mixed; boundary=changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf --changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf Content-Type: application/http Content-Transfer-Encoding: binary MERGE http://localhost:8080/Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/Facts('TestPolicy') HTTP/1.1 Content-ID: 4 Content-Type: application/atom+xml;type=entry Content-Length: 927 <?xml version="1.0" encoding="utf-8" standalone="yes"?> <entry xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom">   <category scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" term="Tellago.BRE.REST.Resources.Fact" />   <title />   <author>     <name />   </author>   <updated>2011-01-31T20:09:15.0023982Z</updated>   <id>http://localhost:8080/Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/Facts('TestPolicy')</id>   <content type="application/xml">     <m:properties>       <d:FactInstance>&lt;ns0:LoanStatus xmlns:ns0="http://tellago.com"&gt;&lt;Age&gt;10&lt;/Age&gt;&lt;Status&gt;true&lt;/Status&gt;&lt;/ns0:LoanStatus&gt;</d:FactInstance>       <d:FactType>TestSchema</d:FactType>       <d:ID>TestPolicy</d:ID>     </m:properties>   </content> </entry> --changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf-- --batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7— Installation: The installation of BRE Data Services is pretty straightforward. · Create a new IIS website, say BREDataServices. · Download the source code from Tellago's Codeplex and copy the content of Tellago.BRE.REST.ServiceHost to the physical location of the above-created website. · The appPool account running the website should have admin access to the BizTalkRuleEngineDb database. · Right-click the BREManagementService.svc in the IIS Content View for the website, and voilà... Conclusion: The BRE Data Services API is an experiment intended to bring the capabilities of RESTful/OData-based services to traditional BTS/BRE solutions. Future releases will target technologies like BAM and the ESB Toolkit. This version has been tested with various versions of BizTalk Server, and we have uploaded the source code to our Tellago DevLabs workspace at Codeplex. I hope you guys enjoy this release. Keep an eye on our new releases at Tellago's Codeplex. Till then, happy BizzRuling…!!! Thanks, Vishal Mody
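As a rough illustration of consuming the API from .NET, here is a minimal sketch using the WCF Data Services client. The Rule type below is a hypothetical stand-in for the proxy class you would normally generate against the service's metadata (e.g. with DataSvcUtil), so treat the property mapping as an assumption:

using System;
using System.Data.Services.Client;

public class Rule
{
    public string Name { get; set; }
}

public class BreClient
{
    public static void Main()
    {
        var context = new DataServiceContext(
            new Uri("http://localhost/BREDataServices/BREMananagementService.svc"));

        // Mirrors the browse URI .../Policies('PolicyName')/Rules shown above.
        foreach (var rule in context.Execute<Rule>(
            new Uri("Policies('PolicyName')/Rules", UriKind.Relative)))
        {
            Console.WriteLine(rule.Name);
        }
    }
}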

    Read the article

  • Community Outreach - Where Should I Go

    - by Roger Brinkley
A few days ago I was talking to someone new to community development, and they asked me what guidelines I used to determine the worthiness of a particular event. After our conversation was over I thought about it a little bit more and figured out there are three ways to determine if any event (be it a conference, blog, podcast or other social media) is worth doing: Transferability, Multiplication, and Impact.

Transferability - Is what I have to say useful to the people that are going to hear it? For instance, consider a company that has a product offering that can connect up using a number of languages like Scala, Groovy or Java. Sending a Scala expert to talk about Scala and the product is not transferable to a Java User Group, but a Java expert doing the same talk with a Java slant is. Similarly, talking about JavaFX to any Java User Group meeting in Brazil was pretty much a wasted effort until it was open sourced; once it was open sourced, it was well received. You can also look at transferability in relation to the subject matter that you're dealing with. How transferable is a presentation that I create? Can I, or a technical writer on the staff, turn it into a technical document? Could it be converted into some type of screencast? If we have a regular podcast, can we make a reference to the document, catch the high points or turn it into an interview? Is there a way of using this in the sales group? In other words, is the document purely one-dimensional, or can it be re-purposed in other forms?

Multiplication - On every trip I'm looking for 2 to 5 solid connections that I can make with developers. These are long-term connections, because I know that once that relationship is established it will lead to another 2 to 5 from that connection, and within a couple of years we're talking about some 100 connections from just one developer. For instance, when I was working on JavaHelp in 2000 I hired a science teacher with a programming background. We've developed a very tight relationship over the years, though we rarely see each other more than once a year. But at this JavaOne, one of his employees came up to me and said, "Richard (Rick Hard in Czech) told me to tell you that he couldn't make it to JavaOne this year but if I saw you to tell you hi". Another example is from my Mobile & Embedded days in Brazil. On our very first FISL trip about 5 years ago there were two university students who had created a project called "Marge". Marge was a Bluetooth framework that made connecting Bluetooth devices easier. I invited them to a "Sun" dinner that evening. Originally they were planning on leaving that afternoon, but they changed their plans, recognizing the opportunity. Their eyes were as big as saucers when they realized the level of engineers at the meeting. They went home and started a JUG in Florianópolis that we've visited more than a couple of times. One of them went to work for a Brazilian government lab like Berkeley Labs, the MIT labs, Johns Hopkins Applied Physics Lab or Lincoln Labs in the US. That presented us with an opportunity to show Embedded Java as a possibility for some of the work they were doing there.

Impact - The final criterion is how life-changing what I'm going to say will be to the individuals I'm reaching. A t-shirt is just a token, but when I reach down and tug at their developer hearts, then I know I've succeeded. I'll never forget one time we flew all night to reach João Pessoa in northern Brazil.
We arrived at 2 am, went immediately to our hotel, and were woken up at 6 am to travel two hours by car to the presentation hall. When we arrived we were totally exhausted. Outside the facility there were 500 people lined up to hear six speakers for the day. That in itself was uplifting. I delivered one of my favorite talks, "I have passion", a talk on golf and embedded Java development: find your passion. When we finished, a couple of first-year students came up to me and said how much my talk had inspired them. FISL is another great example; I had been there about four years in a row. FISL is a very young group of developers, so capturing their attention is important. Several of the students will come back 2 or 3 years later and ask me questions about research or jobs. And then there's Louis. Louis is one of my favorite Brazilians. I can only describe him as a big Brazilian teddy bear. I see him every year at FISL. He works primarily in Java EE, but he has attended every single one of my talks over the last four years. I can't tell you why, but he always greets me and gives me a hug. For some reason I've had a real impact. And of course when it comes to impact, you don't just measure a presentation but every single interaction you have at an event. It's the hallway conversations, the booth conversations, but more importantly the conversations at dinner tables or in the cars when you're being transported to an event. There's a good story that illustrates this. Last year in the spring I was traveling to Goiânia in Brazil. I've been there many times, and the leaders there know me well. One young man has picked me up at the airport on more than one occasion. We were going out to dinner one evening and he brought his girlfriend along. One thing led to another and I eventually asked him, in front of her, "Why haven't you asked her to marry you?" There were all kinds of excuses, and she just looked at him and smiled. When I came back in December for JavaOne, he sought me out. "I just want to tell you that I thought a lot about what you said, and I asked her to marry me. We're getting married next spring." Sometimes just one presentation is all it takes to make an impact; other times it takes years. Some impacts are directly related to the company and some are more personal in nature. It doesn't matter which it is, because it's having the impact that matters.

    Read the article

  • How do I do Collisions in my JavaScript Game Code Below?

    - by Henry
I'm trying to figure out how I would add collision detection to my code so that when the "Man" character touches the "RedHouse", the RedHouse disappears. Thanks. By the way, I'm new to how things are done on this site, so if there is anything else needed, let me know. <title>HMan</title> <body style="background:#808080;"> <br> <canvas id="canvasBg" width="800px" height="500px" style="display:block;background:#ffffff;margin:100px auto 0px;"></canvas> <canvas id="canvasRedHouse" width="800px" height="500px" style="display:block;margin:-500px auto 0px;"></canvas> <canvas id="canvasEnemy" width="800px" height="500px" style="display:block;margin:-500px auto 0px;"></canvas> <canvas id="canvasEnemy2" width="800px" height="500px" style="display:block;margin:-500px auto 0px;"></canvas> <canvas id="canvasMan" width="800px" height="500px" style="display:block;margin:-500px auto 0px;"></canvas> <script> var isPlaying = false; var requestAnimframe = window.requestAnimationFrame || window.webkitRequestAnimationFrame || window.mozRequestAnimationFrame || window.msRequestAnimationFrame || window.oRequestAnimationFrame; var canvasBg = document.getElementById('canvasBg'); var ctxBg = canvasBg.getContext('2d'); var canvasRedHouse = document.getElementById('canvasRedHouse'); var ctxRedHouse = canvasRedHouse.getContext('2d'); var House1; House1 = new RedHouse(); var canvasMan = document.getElementById('canvasMan'); var ctxMan = canvasMan.getContext('2d'); var Man1; Man1 = new Man(); var imgSprite = new Image(); imgSprite.src = 'SpritesI.png'; imgSprite.addEventListener('load',init,false); function init() { drawBg(); startLoop(); document.addEventListener('keydown',checkKeyDown,false); document.addEventListener('keyup',checkKeyUp,false); } function drawBg() { var SpriteSourceX = 0; var SpriteSourceY = 0; var drawManOnScreenX = 0; var drawManOnScreenY = 0; ctxBg.drawImage(imgSprite,SpriteSourceX,SpriteSourceY,800,500,drawManOnScreenX, drawManOnScreenY,800,500); } function clearctxBg() { ctxBg.clearRect(0,0,800,500); } function Man() { this.SpriteSourceX = 10; this.SpriteSourceY = 540; this.width = 40; this.height = 115; this.DrawManOnScreenX = 100; this.DrawManOnScreenY = 260; this.speed = 10; this.actualFrame = 1; this.speed = 2; this.isUpKey = false; this.isRightKey = false; this.isDownKey = false; this.isLeftKey = false; } Man.prototype.draw = function () { clearCtxMan(); this.updateCoors(); this.checkDirection(); ctxMan.drawImage(imgSprite,this.SpriteSourceX,this.SpriteSourceY+this.height* this.actualFrame, this.width,this.height,this.DrawManOnScreenX,this.DrawManOnScreenY, this.width,this.height); } Man.prototype.updateCoors = function(){ this.leftX = this.DrawManOnScreenX; this.rightX = this.DrawManOnScreenX + this.width; this.topY = this.DrawManOnScreenY; this.bottomY = this.DrawManOnScreenY + this.height; } Man.prototype.checkDirection = function () { if (this.isUpKey && this.topY > 240) { this.DrawManOnScreenY -= this.speed; } if (this.isRightKey && this.rightX < 800) { this.DrawManOnScreenX += this.speed; } if (this.isDownKey && this.bottomY < 500) { this.DrawManOnScreenY += this.speed; } if (this.isLeftKey && this.leftX > 0) { this.DrawManOnScreenX -= this.speed; } if (this.isRightKey && this.rightX < 800) { if (this.actualFrame > 0) { this.actualFrame = 0; } else { this.actualFrame++; } } if (this.isLeftKey) { if (this.actualFrame > 2) { this.actualFrame = 2; } } } function checkKeyDown(e) { var keyID = e.keyCode || e.which; if (keyID === 38) { Man1.isUpKey = true; e.preventDefault(); } if
(keyID === 39 ) { Man1.isRightKey = true; e.preventDefault(); } if (keyID === 40 ) { Man1.isDownKey = true; e.preventDefault(); } if (keyID === 37 ) { Man1.isLeftKey = true; e.preventDefault(); } } function checkKeyUp(e) { var keyID = e.keyCode || e.which; if (keyID === 38 || keyID === 87) { Man1.isUpKey = false; e.preventDefault(); } if (keyID === 39 || keyID === 68) { Man1.isRightKey = false; e.preventDefault(); } if (keyID === 40 || keyID === 83) { Man1.isDownKey = false; e.preventDefault(); } if (keyID === 37 || keyID === 65) { Man1.isLeftKey = false; e.preventDefault(); } } function clearCtxMan() { ctxMan.clearRect(0,0,800,500); } function RedHouse() { this.srcX = 135; this.srcY = 525; this.width = 265; this.height = 245; this.drawX = 480; this.drawY = 85; } RedHouse.prototype.draw = function () { clearCtxRedHouse(); ctxRedHouse.drawImage(imgSprite,this.srcX,this.srcY, this.width,this.height,this.drawX,this.drawY,this.width,this.height); }; function clearCtxRedHouse() { ctxRedHouse.clearRect(0,0,800,500); } function loop() { if (isPlaying === true){ Man1.draw(); House1.draw(); requestAnimframe(loop); } } function startLoop(){ isPlaying = true; loop(); } function stopLoop(){ isPlaying = false; } </script> <style> .top{ position: absolute; top: 4px; left: 10px; color:black; } .top2{ position: absolute; top: 60px; left: 10px; color:black; } </style> <div class="top"> <p><font face="arial" color="black" size="4"><b>HGame</b><font/><p/> <p><font face="arial" color="black" size="3"> My Game Here <font/><p/> </div> <div class="top2"> <p><font face="arial" color="black" size="3"> It will start now <font/><p/> </div>
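Since the question is about making the house disappear on contact, here is a minimal sketch of an AABB (axis-aligned bounding box) overlap test using the fields the code above already maintains (leftX/rightX/topY/bottomY on Man1 via updateCoors, and drawX/drawY/width/height on House1). The houseVisible flag and the function names are hypothetical additions:

var houseVisible = true;

// True when the two rectangles overlap on both axes.
function manTouchesHouse(man, house) {
  return man.rightX > house.drawX &&
         man.leftX < house.drawX + house.width &&
         man.bottomY > house.drawY &&
         man.topY < house.drawY + house.height;
}

// Call once per frame inside loop(), after Man1.draw().
function checkCollision() {
  if (houseVisible && manTouchesHouse(Man1, House1)) {
    houseVisible = false;
    clearCtxRedHouse(); // wipe the house's canvas layer
  }
}

In loop(), you would then guard the house drawing with the flag, e.g. if (houseVisible) House1.draw();, so the house stays gone once a collision has happened.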

    Read the article

Why can't I draw an ellipse with this code?

    - by bvivek88
package test; import java.awt.*; import java.awt.event.*; import java.awt.geom.Ellipse2D; import java.awt.image.BufferedImage; import javax.swing.*; public class test_bmp extends JPanel implements MouseListener,MouseMotionListener,ActionListener { static BufferedImage image; Color color; Point start=new Point(); Point end =new Point(); JButton elipse=new JButton("Elipse"); JButton rectangle=new JButton("Rectangle"); JButton line=new JButton("Line"); String selected; public test_bmp() { color = Color.black; setBorder(BorderFactory.createLineBorder(Color.black)); addMouseListener(this); addMouseMotionListener(this); } public void paintComponent(Graphics g) { //super.paintComponent(g); g.drawImage(image, 0, 0, this); Graphics2D g2 = (Graphics2D)g; g2.setPaint(Color.black); if(selected=="elipse") { g2.drawOval(start.x, start.y, (end.x-start.x),(end.y-start.y)); System.out.println("Start : "+start.x+","+start.y); System.out.println("End : "+end.x+","+end.y); } if(selected=="line") g2.drawLine(start.x,start.y,end.x,end.y); } //Draw on Buffered image public void draw() { Graphics2D g2 = image.createGraphics(); g2.setPaint(color); System.out.println("draw"); if(selected=="line") g2.drawLine(start.x, start.y, end.x, end.y); if(selected=="elipse") { g2.drawOval(start.x, start.y, (end.x-start.x),(end.y-start.y)); System.out.println("Start : "+start.x+","+start.y); System.out.println("End : "+end.x+","+end.y); } repaint(); g2.dispose(); } public JPanel addButtons() { JPanel buttonpanel=new JPanel(); buttonpanel.setBackground(color.lightGray); buttonpanel.setLayout(new BoxLayout(buttonpanel,BoxLayout.Y_AXIS)); elipse.addActionListener(this); rectangle.addActionListener(this); line.addActionListener(this); buttonpanel.add(elipse); buttonpanel.add(Box.createRigidArea(new Dimension(15,15))); buttonpanel.add(rectangle); buttonpanel.add(Box.createRigidArea(new Dimension(15,15))); buttonpanel.add(line); return buttonpanel; } public static void main(String args[]) { test_bmp application=new test_bmp(); //Main window JFrame frame=new JFrame("Whiteboard"); frame.setLayout(new BorderLayout()); frame.add(application.addButtons(),BorderLayout.WEST); frame.add(application); //size of the window frame.setSize(600,400); frame.setLocation(0,0); frame.setVisible(true); int w = frame.getWidth(); int h = frame.getHeight(); image = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB); Graphics2D g2 = image.createGraphics(); g2.setPaint(Color.white); g2.fillRect(0,0,w,h); g2.dispose(); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); } @Override public void mouseClicked(MouseEvent arg0) { // TODO Auto-generated method stub } @Override public void mouseEntered(MouseEvent arg0) { // TODO Auto-generated method stub } @Override public void mouseExited(MouseEvent arg0) { // TODO Auto-generated method stub } @Override public void mousePressed(MouseEvent event) { start = event.getPoint(); } @Override public void mouseReleased(MouseEvent event) { end = event.getPoint(); draw(); } @Override public void mouseDragged(MouseEvent e) { end=e.getPoint(); repaint(); } @Override public void mouseMoved(MouseEvent arg0) { // TODO Auto-generated method stub } @Override public void actionPerformed(ActionEvent e) { if(e.getSource()==elipse) selected="elipse"; if(e.getSource()==line) selected="line"; draw(); } } I need to create a paint application. When I draw an ellipse by dragging the mouse, it often displays nothing. Why? Should I use another function here?
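For what it's worth, one classic cause of an "invisible" oval in this kind of code is that Graphics.drawOval silently draws nothing when it is handed a negative width or height, which happens whenever the drag ends left of or above its starting point. A minimal sketch of a direction-independent draw, using the same start/end points as the code above:

import java.awt.Graphics2D;
import java.awt.Point;

public class OvalUtil {
    // Normalizes the drag rectangle so width and height are never negative,
    // then draws the oval; works for drags in any direction.
    public static void drawNormalizedOval(Graphics2D g2, Point start, Point end) {
        int x = Math.min(start.x, end.x);
        int y = Math.min(start.y, end.y);
        int w = Math.abs(end.x - start.x);
        int h = Math.abs(end.y - start.y);
        g2.drawOval(x, y, w, h);
    }
}

The same normalization applies to the preview drawn in paintComponent. Separately, comparing strings with == (selected=="elipse") is fragile in Java; "elipse".equals(selected) is the safer test.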

    Read the article

< Previous Page | 534 535 536 537 538 539 540 541 542 543 544 545  | Next Page >