Search Results

Search found 8397 results on 336 pages for 'implementation'.

  • Workshops, online content show how Oracle infuses simplicity, mobility, extensibility into user experience

    - by mvaughan
    By Kathy Miedema & Misha Vaughan, Oracle Applications User Experience Oracle has made a huge investment into the user experience of its many different software product families, and recent releases showcase big changes and features that aim to promote end user engagement and efficiency by streamlining navigation and simplifying the user interface. But making Oracle’s enterprise software great-looking and usable doesn’t stop when Oracle products go out the door. The Applications User Experience (UX) team recognizes that our customers may need to customize software to fit their work processes. And that’s why we provide tools such as user experience design patterns to help you maintain the Oracle user experience as you tailor your application to fit your business needs. Often, however, customers may need some context around user experience. How has the Oracle user experience been designed and constructed? Why is a good user experience important for users? How does understanding what goes into the user experience benefit the people who purchase the software for users? There’s a short answer to these questions, and you can read about it on Usable Apps. But truly understanding Oracle’s investment and seeing how it applies across product families occasionally requires a deeper dive into the Oracle user experience, especially if you’re an influencer or decision-maker about Oracle products. To help frame these decisions, the Communications & Outreach team has developed several targeted workshops that explore what Oracle means when it talks about user experience, and provides a roadmap into where the Oracle user experience is going. These workshops require non-disclosure agreements, and have been delivered to Oracle sales folks, Oracle partners, Oracle ACE Directors and ACEs, and a few customers. Some of these audience members have been developers or have a technical background; just as many did not. Here’s a breakdown of the kind of training you can get around the Oracle user experience from the OAUX Communications & Outreach team.For Partners: George Papazzian, Principal, Naviscent with Joyce Ohgi, Oracle Oracle Fusion Applications HCM Pre-Sales Seminar:  In concert with Worldwide Alliances  and  Channels under Applications Partner Enablement Director Jonathan Vinoskey’s guidance, the Applications User Experience team delivers a two-day workshop.  Day one focuses on Oracle Fusion Applications HCM and pre-sales strategy, and Day two focuses on positioning and leveraging Oracle’s investment in the Oracle Fusion Applications user experience.  The next workshops will occur on the following dates: December 4-5, 2013 @ Manchester, UK January 29-30, 2014 @ Reston, Virginia February 2014 @ Guadalajara, Mexico (email: Shannon Whiteman) March 11-12, 2014 @ Dubai, United Arab Emirates April 1-2, 2014 @ Chicago, Illinois Partner Advisory Board: A two-day board meeting in the U.S. and U.K. to discuss four main user experience areas for Oracle Fusion Applications: simplicity, visualization & analytics, mobility, & futures. This event is limited to Oracle Diamond Partners, UX bloggers, and key UX influencers and requires legal documentation.  We will be talking about the Oracle applications UX strategy and roadmap. 
    Partner Implementation Training on User Interface: How to Build Great-Looking, Usable Apps: In this two-day, hands-on workshop built around Oracle's Application Development Framework, learn how to build desktop and mobile user interfaces based on Oracle's experience with Fusion Applications. This workshop is for partners with a technology background who are looking for ways to tailor Fusion Applications using ADF, or who have built their own custom solutions using ADF. It includes an introduction to UX design patterns and provides tools to build usability-tested UX designs. Nov 5-6, 2013 @ Redwood Shores, CA, USA January 28-29, 2014 @ Reston, Virginia, USA February 25-26, 2014 @ Guadalajara, Mexico March 9-10, 2014 @ Dubai, United Arab Emirates To register, contact [email protected] Simplified UI Customization & Extensibility: Pilot workshop: We will be reviewing the proposed content for communicating the user experience toolkit available with the next release of Oracle Fusion Applications. Our core focus will be on the toolkit components our system implementors and independent software vendors will need to respond to customer demand, whether they are extending Fusion Applications or building custom applications that need to leverage the simplified UI. Dec 11, 2013 @ Reading, UK For information: contact [email protected] Private lab tour and demos: Interested in seeing what's going on in the Apps UX Labs? If you are headed to the San Francisco Bay Area, let us know. We can arrange a spin through our usability labs at headquarters. OAUX Expo: This open-house forum gives partners a look at what the UX team is working on, and showcases the next-generation user experiences in a demo environment where attendees can see and touch the applications. UX Direct: Use the same methods that Oracle uses to develop its own user experiences. We help you define your users and their needs, and then provide direction on how to tailor the best user experience you can for them. For Customers: Angela Johnston, Gozel Aamoth, Teena Singh, and Yen Chan, Oracle. Lab tours: See demos of soon-to-be-released products, and take a spin on usability research equipment such as our eye-tracker. Watch this video to get an idea of what you'll see. Get our newsletter: Learn about newly released products and see where you can meet us at user group conferences. Participate in a feedback session: Join a focus group or customer feedback session to get an early look at user experience designs for the next generation of software, and provide your thoughts on how well it will work. Join the OUAB: The Oracle Usability Advisory Board meets several times a year to discuss trends in the workforce and provide direction on user experience designs. UX Direct: Use the same methods that Oracle uses to develop its own user experiences. We help you define your users and their needs, and then provide direction on how to tailor the best user experience you can for them. For Developers (customers, partners, and consultants): Plinio Arbizu, SP Solutions; Richard Bingham, Oracle; Balaji Kamepalli, EiSTechnologies; Praveen Pillalamarri, EiSTechnologies. How to Build Great-Looking, Usable Apps: This workshop is for attendees with a strong technology background who are looking for ways to tailor customer software using ADF. It includes an introduction to UX design patterns and provides tools to build usability-tested UX designs. See above for dates and times.
    UX design patterns web site: Cut the length of your project down by months. Use these patterns to build out the task flow you need to develop for your users. The patterns have already been usability-tested and represent the best practices that the Oracle UX research team has found in its studies. UX Direct: Use the same methods that Oracle uses to develop its own user experiences. We help you define your users and their needs, and then provide direction on how to tailor the best user experience you can for them. For Oracle Sales: Mike Klein, Jeremy Ashley, Brent White, Oracle. Contact your local sales person for more information about the Oracle user experience and the training available from the Applications User Experience Communications & Outreach team. See customer-friendly user experience collateral ranging from the new simplified UI in Oracle Fusion Applications Release 7, to E-Business Suite user experience highlights, to Siebel, PeopleSoft, and JD Edwards user experience highlights. Receive access to the same pre-sales and implementation training we provide to partners. For Oracle Sales only: Oracle-only training on the Oracle Fusion Applications UX Innovation Sales Kit.

    Read the article

  • Service Broker, not ETL

    - by jamiet
    I have been very quiet on this blog of late and one reason for that is I have been very busy on a client project that I would like to talk about a little here. The client that I have been working for has a website that runs on a distributed architecture utilising a messaging infrastructure for communication between different endpoints. My brief was to build a system that could consume these messages and produce analytical information in near-real-time. More specifically, I basically had to deliver a data warehouse; however, it was the real-time aspect of the project that really intrigued me.

    This real-time requirement meant that using an Extract, Transform, Load (ETL) tool was out of the question and so I had no choice but to write T-SQL code (i.e. stored procedures) to process the incoming messages and load the data into the data warehouse. This concerned me though – I had no way to control the rate at which data would arrive into the system, yet we were going to have end-users querying the system at the same time that those messages were arriving; the potential for contention in such a scenario was pretty high and was something I wanted to minimise as much as possible. Moreover, I did not want the processing of data inside the data warehouse to have any impact on the customer-facing website.

    As you have probably guessed from the title of this blog post, this is where Service Broker stepped in! For those that have not heard of it, Service Broker is a queuing technology that has been built into SQL Server since SQL Server 2005. It provides a number of features; however, the one that was of interest to me was the fact that it facilitates asynchronous data processing which, in layman's terms, means the ability to process some data without requiring the system that supplied the data to wait for the response. That was a crucial feature because on this project the customer-facing website (in effect an OLTP system) would be calling one of our stored procedures with each message – we did not want to cause the OLTP system to wait on us every time we processed one of those messages. This asynchronous nature also helps to alleviate the contention problem because the asynchronous processing activity is handled just like any other task in the database engine and hence can wait on another task (such as an end-user query).

    Service Broker it was then! The stored procedure called by the OLTP system would simply put the message onto a queue and we would use a feature called activation to pick each message off the queue in turn and process it into the warehouse. At the time of writing the system is not yet up to full capacity but so far everything seems to be working OK (touch wood) and crucially our users are seeing data in near-real-time. By near-real-time I am talking about latencies of a few minutes at most, and to someone like me who is used to building systems that have overnight latencies that is a huge step forward!

    So then, am I advocating that you all go out and dump your ETL tools? Of course not, no! What this project has taught me though is that in certain scenarios there may be better ways to implement a data warehouse system than the traditional "load data in overnight" approach that we are all used to. Moreover, I have really enjoyed getting to grips with a new technology and even if you don't want to use Service Broker you might want to consider asynchronous messaging architectures for your BI/data warehousing solutions in the future.
    This has been a very high-level overview of my use of Service Broker and I have deliberately left out much of the minutiae of what has been a very challenging implementation. Nonetheless I hope I have caused you to reflect upon your own approaches to BI and question whether other approaches may be more tenable. All comments and questions gratefully received!

    Lastly, if you have never used Service Broker before and want to kick the tyres, I have provided below a very simple "Service Broker Hello World" script that will create all of the objects required to facilitate Service Broker communications and then send the message "Hello World" from one place to another! This doesn't represent a "proper" implementation per se because it doesn't close down conversation objects (which you should always do in a real-world scenario), but it's enough to demonstrate the capabilities!

    @Jamiet

    -----------------------------------------------------------------------------------------------

        /* This is a basic Service Broker Hello World app. Have fun! -Jamie */
        USE MASTER
        GO
        CREATE DATABASE SBTest
        GO

        --Turn Service Broker on!
        ALTER DATABASE SBTest SET ENABLE_BROKER
        GO
        USE SBTest
        GO

        -- 1) We need to create a message type. Note that our message type is
        -- very simple and allows any type of content
        CREATE MESSAGE TYPE HelloMessage VALIDATION = NONE
        GO

        -- 2) Once the message type has been created, we need to create a contract
        -- that specifies who can send what types of messages
        CREATE CONTRACT HelloContract (HelloMessage SENT BY INITIATOR)
        GO

        --We can query the metadata of the objects we just created
        SELECT * FROM sys.service_message_types WHERE name = 'HelloMessage';
        SELECT * FROM sys.service_contracts WHERE name = 'HelloContract';
        SELECT * FROM sys.service_contract_message_usages
        WHERE  service_contract_id IN (SELECT service_contract_id FROM sys.service_contracts WHERE name = 'HelloContract')
        AND    message_type_id IN (SELECT message_type_id FROM sys.service_message_types WHERE name = 'HelloMessage');

        -- 3) The communication is between two endpoints. Thus, we need two queues to
        -- hold messages
        CREATE QUEUE SenderQueue
        CREATE QUEUE ReceiverQueue
        GO

        --More querying metadata
        SELECT * FROM sys.service_queues WHERE name IN ('SenderQueue','ReceiverQueue');

        --We can also select from the queues as if they were tables
        SELECT * FROM SenderQueue
        SELECT * FROM ReceiverQueue

        -- 4) Create the required services and bind them to the above created queues
        CREATE SERVICE Sender   ON QUEUE SenderQueue
        CREATE SERVICE Receiver ON QUEUE ReceiverQueue (HelloContract)
        GO

        --More querying metadata
        SELECT * FROM sys.services WHERE name IN ('Receiver','Sender');

        -- 5) At this point, we can begin the conversation between the two services by
        -- sending messages
        DECLARE @conversationHandle UNIQUEIDENTIFIER
        DECLARE @message NVARCHAR(100)
        BEGIN
          BEGIN TRANSACTION;
          BEGIN DIALOG @conversationHandle
                FROM SERVICE Sender
                TO SERVICE 'Receiver'
                ON CONTRACT HelloContract WITH ENCRYPTION=OFF
          -- Send a message on the conversation
          SET @message = N'Hello, World';
          SEND  ON CONVERSATION @conversationHandle
                MESSAGE TYPE HelloMessage (@message)
          COMMIT TRANSACTION
        END
        GO

        --Check contents of queues
        SELECT * FROM SenderQueue
        SELECT * FROM ReceiverQueue
        GO

        -- Receive a message from the queue
        RECEIVE CONVERT(NVARCHAR(MAX), message_body) AS MESSAGE
        FROM ReceiverQueue
        GO

        --If no messages were received and/or you can't see anything on the queues,
        --you may wish to check the following for clues:
        SELECT * FROM sys.transmission_queue

        -- Cleanup
        DROP SERVICE Sender
        DROP SERVICE Receiver
        DROP QUEUE SenderQueue
        DROP QUEUE ReceiverQueue
        DROP CONTRACT HelloContract
        DROP MESSAGE TYPE HelloMessage
        GO
        USE MASTER
        GO
        DROP DATABASE SBTest
        GO
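    For context on the front end of that pipeline, here is a minimal C# sketch of how an OLTP website might call the enqueueing stored procedure with plain ADO.NET; the procedure name, parameter, and connection string are hypothetical, not the actual code from this project. Because the procedure only performs a SEND onto a Service Broker queue, the call returns almost immediately and the website is never blocked by the downstream warehouse processing.

        using System.Data;
        using System.Data.SqlClient;

        public class MessageForwarder
        {
            private readonly string connectionString;

            public MessageForwarder(string connectionString)
            {
                this.connectionString = connectionString;
            }

            // Hand the message off to the warehouse database and return immediately.
            // "dbo.EnqueueIncomingMessage" is an assumed procedure that simply does a
            // SEND ON CONVERSATION onto a Service Broker queue; activation picks it up later.
            public void Forward(string messageBody)
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand("dbo.EnqueueIncomingMessage", connection))
                {
                    command.CommandType = CommandType.StoredProcedure;
                    command.Parameters.AddWithValue("@MessageBody", messageBody);

                    connection.Open();
                    command.ExecuteNonQuery();
                }
            }
        }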

    Read the article

  • Movement prediction for non-shooters

    - by ShadowChaser
    I'm working on an isometric 2D game with moderate-scale multiplayer, approximately 20-30 players connected at once to a persistent server. I've had some difficulty getting a good movement prediction implementation in place.

    Physics/Movement
    The game doesn't have a true physics implementation, but uses the basic principles to implement movement. Rather than continually polling input, state changes (ie/ mouse down/up/move events) are used to change the state of the character entity the player is controlling. The player's direction (ie/ north-east) is combined with a constant speed and turned into a true 3D vector - the entity's velocity. In the main game loop, "Update" is called before "Draw". The update logic triggers a "physics update task" that tracks all entities with a non-zero velocity and uses very basic integration to change each entity's position. For example: entity.Position += entity.Velocity.Scale(ElapsedTime.Seconds) (where "Seconds" is a floating point value, but the same approach would work for millisecond integer values). The key point is that no interpolation is used for movement - the rudimentary physics engine has no concept of a "previous state" or "current state", only a position and velocity.

    State Change and Update Packets
    When the velocity of the character entity the player is controlling changes, a "move avatar" packet is sent to the server containing the entity's action type (stand, walk, run), direction (north-east), and current position. This is different from how 3D first person games work. In a 3D game the velocity (direction) can change frame to frame as the player moves around. Sending every state change would effectively transmit a packet per frame, which would be too expensive. Instead, 3D games seem to ignore state changes and send "state update" packets on a fixed interval - say, every 80-150ms. Since speed and direction updates occur much less frequently in my game, I can get away with sending every state change. Although all of the physics simulations occur at the same speed and are deterministic, latency is still an issue. For that reason, I send out routine position update packets (similar to a 3D game) but much less frequently - right now every 250ms, but I suspect with good prediction I can easily boost it towards 500ms. The biggest problem is that I've now deviated from the norm - all other documentation, guides, and samples online send routine updates and interpolate between the two states. It seems incompatible with my architecture, and I need to come up with a better movement prediction algorithm that is closer to a (very basic) "networked physics" architecture. The server then receives the packet and determines the player's speed from its movement type based on a script (Is the player able to run? Get the player's running speed). Once it has the speed, it combines it with the direction to get a vector - the entity's velocity. Some cheat detection and basic validation occurs, and the entity on the server side is updated with the current velocity, direction, and position. Basic throttling is also performed to prevent players from flooding the server with movement requests. After updating its own entity, the server broadcasts an "avatar position update" packet to all other players within range. The position update packet is used to update the client side physics simulations (world state) of the remote clients and perform prediction and lag compensation.

    Prediction and Lag Compensation
    As mentioned above, clients are authoritative for their own position. Except in cases of cheating or anomalies, the client's avatar will never be repositioned by the server. No extrapolation ("move now and correct later") is required for the client's avatar - what the player sees is correct. However, some sort of extrapolation or interpolation is required for all remote entities that are moving. Some sort of prediction and/or lag compensation is clearly required within the client's local simulation / physics engine.

    Problems
    I've been struggling with various algorithms, and have a number of questions and problems:
    1. Should I be extrapolating, interpolating, or both? My "gut feeling" is that I should be using pure extrapolation based on velocity. A state change is received by the client, the client computes a "predicted" velocity that compensates for lag, and the regular physics system does the rest. However, it feels at odds with all other sample code and articles - they all seem to store a number of states and perform interpolation without a physics engine. When a packet arrives, I've tried interpolating the packet's position with the packet's velocity over a fixed time period (say, 200ms). I then take the difference between the interpolated position and the current "error" position to compute a new vector and place that on the entity instead of the velocity that was sent. However, the assumption is that another packet will arrive in that time interval, and it's incredibly difficult to "guess" when the next packet will arrive - especially since they don't all arrive on fixed intervals (ie/ state changes as well). Is the concept fundamentally flawed, or is it correct but needs some fixes / adjustments?
    2. What happens when a remote player stops? I can immediately stop the entity, but it will be positioned in the "wrong" spot until it moves again. If I estimate a vector or try to interpolate, I have an issue because I don't store the previous state - the physics engine has no way to say "you need to stop after you reach position X". It simply understands a velocity, nothing more complex. I'm reluctant to add the "packet movement state" information to the entities or physics engine, since it violates basic design principles and bleeds network code across the rest of the game engine.
    3. What should happen when entities collide? There are three scenarios - the controlling player collides locally, two entities collide on the server during a position update, or a remote entity update collides on the local client. In all cases I'm uncertain how to handle the collision - aside from cheating, both states are "correct" but at different time periods. In the case of a remote entity it doesn't make sense to draw it walking through a wall, so I perform collision detection on the local client and cause it to "stop". Based on point #2 above, I might compute a "corrected vector" that continually tries to move the entity "through the wall" which will never succeed - the remote avatar is stuck there until the error gets too high and it "snaps" into position. How do games work around this?
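    As a rough sketch of the pure-extrapolation idea discussed above, here is one way a remote entity could fold an incoming packet into the velocity-only physics model; the type and member names are invented for illustration, System.Numerics is used only to get a Vector3 type, and the 200ms convergence window is just a tunable starting point, not a recommendation.

        using System.Numerics;   // only for Vector3; an XNA or custom vector type would work the same way

        public class RemoteEntity
        {
            public Vector3 Position;   // where this client is currently drawing the entity
            public Vector3 Velocity;   // consumed by the usual "Position += Velocity * dt" update task

            // Called whenever a state-change or routine position packet arrives for this entity.
            public void ApplyUpdate(Vector3 packetPosition, Vector3 packetVelocity,
                                    float estimatedLatencySeconds, float convergenceSeconds = 0.2f)
            {
                // Extrapolate where the sender probably is *right now*, compensating for lag.
                Vector3 positionNow = packetPosition + packetVelocity * estimatedLatencySeconds;

                // Where the sender should be at the end of the convergence window.
                Vector3 target = positionNow + packetVelocity * convergenceSeconds;

                // A single corrective velocity that steers our (possibly wrong) local position
                // onto that path over the convergence window; the plain physics task does the rest.
                Velocity = (target - Position) / convergenceSeconds;
            }

            // The regular per-frame physics step, unchanged from the description above.
            public void Integrate(float elapsedSeconds)
            {
                Position += Velocity * elapsedSeconds;
            }
        }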

    Read the article

  • Custom Lookup Provider For NetBeans Platform CRUD Tutorial

    - by Geertjan
    For a long time I've been planning to rewrite the second part of the NetBeans Platform CRUD Application Tutorial to integrate the loosely coupled capabilities introduced in a seperate series of articles based on articles by Antonio Vieiro (a great series, by the way). Nothing like getting into the Lookup stuff right from the get go (rather than as an afterthought)! The question, of course, is how to integrate the loosely coupled capabilities in a logical way within that tutorial. Today I worked through the tutorial from scratch, up until the point where the prototype is completed, i.e., there's a JTextArea displaying data pulled from a database. That brought me to the place where I needed to be. In fact, as soon as the prototype is completed, i.e., the database connection has been shown to work, the whole story about Lookup.Provider and InstanceContent should be introduced, so that all the subsequent sections, i.e., everything within "Integrating CRUD Functionality" will be done by adding new capabilities to the Lookup.Provider. However, before I perform open heart surgery on that tutorial, I'd like to run the scenario by all those reading this blog who understand what I'm trying to do! (I.e., probably anyone who has read this far into this blog entry.) So, this is what I propose should happen and in this order: Point out the fact that right now the database access code is found directly within our TopComponent. Not good. Because you're mixing view code with data code and, ideally, the developers creating the user interface wouldn't need to know anything about the data access layer. Better to separate out the data access code into a separate class, within the CustomerLibrary module, i.e., far away from the module providing the user interface, with this content: public class CustomerDataAccess { public List<Customer> getAllCustomers() { return Persistence.createEntityManagerFactory("CustomerLibraryPU"). createEntityManager().createNamedQuery("Customer.findAll").getResultList(); } } Point out the fact that there is a concept of "Lookup" (which readers of the tutorial should know about since they should have followed the NetBeans Platform Quick Start), which is a registry into which objects can be published and to which other objects can be listening. In the same way as a TopComponent provides a Lookup, as demonstrated in the NetBeans Platform Quick Start, your own object can also provide a Lookup. So, therefore, let's provide a Lookup for Customer objects.  import org.openide.util.Lookup; import org.openide.util.lookup.AbstractLookup; import org.openide.util.lookup.InstanceContent; public class CustomerLookupProvider implements Lookup.Provider { private Lookup lookup; private InstanceContent instanceContent; public CustomerLookupProvider() { // Create an InstanceContent to hold capabilities... instanceContent = new InstanceContent(); // Create an AbstractLookup to expose the InstanceContent... lookup = new AbstractLookup(instanceContent); // Add a "Read" capability to the Lookup of the provider: //...to come... // Add a "Update" capability to the Lookup of the provider: //...to come... // Add a "Create" capability to the Lookup of the provider: //...to come... // Add a "Delete" capability to the Lookup of the provider: //...to come... } @Override public Lookup getLookup() { return lookup; } } Point out the fact that, in the same way as we can publish an object into the Lookup of a TopComponent, we can now also publish an object into the Lookup of our CustomerLookupProvider. 
Instead of publishing a String, as in the NetBeans Platform Quick Start, we'll publish an instance of our own type. And here is the type: public interface ReadCapability { public void read() throws Exception; } And here is an implementation of our type added to our Lookup: public class CustomerLookupProvider implements Lookup.Provider { private Set<Customer> customerSet; private Lookup lookup; private InstanceContent instanceContent; public CustomerLookupProvider() { customerSet = new HashSet<Customer>(); // Create an InstanceContent to hold capabilities... instanceContent = new InstanceContent(); // Create an AbstractLookup to expose the InstanceContent... lookup = new AbstractLookup(instanceContent); // Add a "Read" capability to the Lookup of the provider: instanceContent.add(new ReadCapability() { @Override public void read() throws Exception { ProgressHandle handle = ProgressHandleFactory.createHandle("Loading..."); handle.start(); customerSet.addAll(new CustomerDataAccess().getAllCustomers()); handle.finish(); } }); // Add a "Update" capability to the Lookup of the provider: //...to come... // Add a "Create" capability to the Lookup of the provider: //...to come... // Add a "Delete" capability to the Lookup of the provider: //...to come... } @Override public Lookup getLookup() { return lookup; } public Set<Customer> getCustomers() { return customerSet; } } Point out that we can now create a new instance of our Lookup (in some other module, so long as it has a dependency on the module providing the CustomerLookupProvider and the ReadCapability), retrieve the ReadCapability, and then do something with the customers that are returned, here in the rewritten constructor of the TopComponent, without needing to know anything about how the database access is actually achieved since that is hidden in the implementation of our type, above: public CustomerViewerTopComponent() { initComponents(); setName(Bundle.CTL_CustomerViewerTopComponent()); setToolTipText(Bundle.HINT_CustomerViewerTopComponent()); // EntityManager entityManager = Persistence.createEntityManagerFactory("CustomerLibraryPU").createEntityManager(); // Query query = entityManager.createNamedQuery("Customer.findAll"); // List<Customer> resultList = query.getResultList(); // for (Customer c : resultList) { // jTextArea1.append(c.getName() + " (" + c.getCity() + ")" + "\n"); // } CustomerLookupProvider lookup = new CustomerLookupProvider(); ReadCapability rc = lookup.getLookup().lookup(ReadCapability.class); try { rc.read(); for (Customer c : lookup.getCustomers()) { jTextArea1.append(c.getName() + " (" + c.getCity() + ")" + "\n"); } } catch (Exception ex) { Exceptions.printStackTrace(ex); } } Does the above make as much sense to others as it does to me, including the naming of the classes? Feedback would be appreciated! Then I'll integrate into the tutorial and do the same for the other sections, i.e., "Create", "Update", and "Delete". (By the way, of course, the tutorial ends up showing that, rather than using a JTextArea to display data, you can use Nodes and explorer views to do so.)

    Read the article

  • Consuming the Amazon S3 service from a Win8 Metro Application

    - by cibrax
    As with many of the existing Http APIs for Cloud Services, AWS also provides a set of different platform SDKs for hiding many of the complexities present in the APIs. While there is a platform SDK for .NET, which is open source and available in C#, that SDK does not work in Win8 Metro Applications because of the changes introduced in WinRT. WinRT offers a completely different set of APIs for doing I/O operations such as making http calls or using cryptography for signing or encrypting data, two aspects that are absolutely necessary for consuming AWS. All the I/O APIs available as part of WinRT are asynchronous and use the TPL model for .NET applications (HTML and JavaScript Metro applications use a model based on promises, which is a similar concept). In the case of S3, the http Authorization header is used for two purposes: authenticating clients and making sure the messages were not altered while they were in transit. To do that, it uses a signature or hash of the message content and some of the headers, computed with a symmetric key (that's just one of the available mechanisms). Windows Azure, for example, also uses the same mechanism in many of its APIs. There are three challenges that any developer working for the first time in Metro will have to face to consume S3: the new WinRT APIs, their asynchronous nature, and the complexity introduced by generating the Authorization header. Having said that, I decided to write this post with some of the gotchas I found while trying to consume this Amazon service.

    1. Generating the signature for the Authorization header
    All the cryptography APIs in WinRT are available under the Windows.Security.Cryptography namespace. Many of the operations available in these APIs use the concept of buffers (IBuffer) for representing a chunk of binary data. As you will see in the example below, these buffers are mainly generated with the static methods of the WinRT class CryptographicBuffer, available as part of the namespace previously mentioned.

        private string DeriveAuthToken(string resource, string httpMethod, string timestamp)
        {
            var stringToSign = string.Format("{0}\n" +
                "\n" +
                "\n" +
                "\n" +
                "x-amz-date:{1}\n" +
                "/{2}/",
                httpMethod,
                timestamp,
                resource);

            var algorithm = MacAlgorithmProvider.OpenAlgorithm("HMAC_SHA1");
            var keyMaterial = CryptographicBuffer.CreateFromByteArray(Encoding.UTF8.GetBytes(this.secret));
            var hmacKey = algorithm.CreateKey(keyMaterial);
            var signature = CryptographicEngine.Sign(
                hmacKey,
                CryptographicBuffer.CreateFromByteArray(Encoding.UTF8.GetBytes(stringToSign))
            );

            return CryptographicBuffer.EncodeToBase64String(signature);
        }

    The algorithm that determines the information or content you need to use for generating the signature is very well described as part of the AWS documentation. In this case, the method is generating the signature required for creating a new bucket. An HMAC-SHA1 hash is computed using a secret or symmetric key provided by AWS in the management console.

    2. Sending an Http Request to the S3 service
    WinRT also ships with the System.Net.Http.HttpClient that was first introduced some months ago with ASP.NET Web API. This client provides a rich interface on top of the traditional HttpWebRequest class, and also solves some of the limitations found in that class. There are a few things that don't work with a raw HttpWebRequest, such as setting the Host header, which is absolutely required for consuming S3. Also, HttpClient is friendlier for unit tests, as it can receive an HttpMessageHandler as part of the constructor, which can be faked to emulate a real http call. This is how the code for consuming the service with HttpClient looks:

        public async Task<S3Response> CreateBucket(string name, string region = null, params string[] acl)
        {
            var timestamp = string.Format("{0:r}", DateTime.UtcNow);
            var auth = DeriveAuthToken(name, "PUT", timestamp);

            var request = new HttpRequestMessage(HttpMethod.Put, "http://s3.amazonaws.com/");
            request.Headers.Host = string.Format("{0}.s3.amazonaws.com", name);
            request.Headers.TryAddWithoutValidation("Authorization", "AWS " + this.key + ":" + auth);
            request.Headers.Add("x-amz-date", timestamp);

            var client = new HttpClient();
            var response = await client.SendAsync(request);

            return new S3Response
            {
                Succeed = response.StatusCode == HttpStatusCode.OK,
                Message = (response.Content != null) ? await response.Content.ReadAsStringAsync() : null
            };
        }

    You will notice a few additional things in this code. By default, HttpClient validates the values for some well-known headers, and Authorization is one of them. It won't allow you to set a value with ":" in it, which is something that S3 expects. However, that's not a problem at all, as you can skip the validation by using the TryAddWithoutValidation method. Also, the code relies heavily on the new async and await keywords to write the asynchronous calls as if they were synchronous. In case you want to unit test this code and fake the call to the real S3 service, you will have to modify it to inject a custom HttpMessageHandler into the HttpClient. The following implementation illustrates this concept:

        public class FakeHttpMessageHandler : HttpMessageHandler
        {
            HttpResponseMessage response;

            public FakeHttpMessageHandler(HttpResponseMessage response)
            {
                this.response = response;
            }

            protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
            {
                var tcs = new TaskCompletionSource<HttpResponseMessage>();
                tcs.SetResult(response);
                return tcs.Task;
            }
        }

    You can use this handler for injecting any response while you are unit testing the code.
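    As a follow-up to the handler above, here is a small, hypothetical usage sketch showing how it could be injected while unit testing; the S3Client class name and its constructor taking the access key, secret, and an HttpClient are assumptions, since the CreateBucket method shown earlier would first need to be modified to accept an externally supplied HttpClient.

        using System.Diagnostics;
        using System.Net;
        using System.Net.Http;
        using System.Threading.Tasks;

        public class CreateBucketTests
        {
            public async Task CreateBucket_ReturnsSuccess_WhenS3RespondsWithOk()
            {
                // Canned response that the fake handler will return instead of calling S3.
                var fakeResponse = new HttpResponseMessage(HttpStatusCode.OK)
                {
                    Content = new StringContent(string.Empty)
                };
                var fakeHandler = new FakeHttpMessageHandler(fakeResponse);
                var httpClient = new HttpClient(fakeHandler);

                // Hypothetical client type wrapping the CreateBucket method shown above.
                var client = new S3Client("my-access-key", "my-secret-key", httpClient);
                S3Response result = await client.CreateBucket("test-bucket");

                Debug.Assert(result.Succeed);
            }
        }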

    Read the article

  • BRE (Business Rules Engine) Data Services is out...!!!

    - by Vishal
    A few months ago we at Tellago open sourced the BizTalk Data Services. We were meanwhile working on other artifacts which come along with BizTalk Server, like the Business Rules Engine. We are happy to announce the first version of BRE Data Services. BRE Data Services follows the same concept we covered with BTS Data Services, providing a RESTful OData-based API to interact with the Business Rules Engine via HTTP, using the Atom Publishing Protocol or JSON as the encoding mechanism. In the first version release, we mainly focused on browsing, querying and searching BRE artifacts via a RESTful interface. Along with that, we also provide the functionality to execute Business Rules by inserting Facts for policies via the IUpdatable implementation of WCF Data Services. The BRE Data Services API provides a lightweight interface for managing Business Rules Engine artifacts such as Policies, Rules, Vocabularies, Conditions, Actions, Facts, etc. The following examples detail some of the available features in the current version of the API.

    Basic Querying:
    Querying BRE Policies: http://localhost/BREDataServices/BREMananagementService.svc/Policies
    Querying BRE Rules: http://localhost/BREDataServices/BREMananagementService.svc/Rules
    Querying BRE Vocabularies: http://localhost/BREDataServices/BREMananagementService.svc/Vocabularies

    Navigation: The BRE Data Services API also leverages WCF Data Services to enable navigation across different related BRE objects.
    Querying a specific Policy: http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')
    Querying a specific Rule: http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')
    Querying all Rules under a Policy: http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')/Rules
    Querying all Facts under a Policy: http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')/Facts
    Querying all Actions for a specific Rule: http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')/Actions
    Querying all Conditions for a specific Rule: http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')/Conditions
    Querying a specific Vocabulary: http://localhost/BREDataServices/BREMananagementService.svc/Vocabularies('VocabName')

    Implementation: With BRE Data Services, we also provide the functionality of executing a particular policy via HTTP. There are a couple of ways you can do that through the API.

    The first is through the Service Operations feature of WCF Data Services, in which you can execute the Facts by passing them in the URL itself. This is a very simple implementation of executing the policies, due to the current limitations and restrictions of the Service Operations feature of WCF Data Services (only primitive types can be passed as input parameters). A code sample and a traced request/response message illustrating this approach are included as screenshots in the original post.

    The second is through the IUpdatable interface of WCF Data Services. In this method, you first query the rule you want to execute, then insert Facts for that particular rule, and finally, when you perform the SaveChanges() call of the IUpdatable interface API, it executes the policy with the facts you inserted at runtime. A sample of the client-side code is shown in the original post.
    The current version of WCF Data Services has a limitation here: there is no way to return the updates that happen on the service side back to the client via the SaveChanges() method. We are executing the rule by passing a serialized XML document as the Facts, and no changes are made to any data that we could query back to fetch the results. This is overcome by the first approach to executing policies, i.e. executing it as a Service Operation call. The call actually generates an AtomPub message as shown below:

        POST /Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/$batch HTTP/1.1
        User-Agent: Microsoft ADO.NET Data Services
        DataServiceVersion: 1.0;NetFx
        MaxDataServiceVersion: 2.0;NetFx
        Accept: application/atom+xml,application/xml
        Accept-Charset: UTF-8
        Content-Type: multipart/mixed; boundary=batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7
        Host: localhost:8080
        Content-Length: 1481
        Expect: 100-continue

        --batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7
        Content-Type: multipart/mixed; boundary=changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf

        --changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf
        Content-Type: application/http
        Content-Transfer-Encoding: binary

        MERGE http://localhost:8080/Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/Facts('TestPolicy') HTTP/1.1
        Content-ID: 4
        Content-Type: application/atom+xml;type=entry
        Content-Length: 927

        <?xml version="1.0" encoding="utf-8" standalone="yes"?>
        <entry xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom">
          <category scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" term="Tellago.BRE.REST.Resources.Fact" />
          <title />
          <author>
            <name />
          </author>
          <updated>2011-01-31T20:09:15.0023982Z</updated>
          <id>http://localhost:8080/Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/Facts('TestPolicy')</id>
          <content type="application/xml">
            <m:properties>
              <d:FactInstance>&lt;ns0:LoanStatus xmlns:ns0="http://tellago.com"&gt;&lt;Age&gt;10&lt;/Age&gt;&lt;Status&gt;true&lt;/Status&gt;&lt;/ns0:LoanStatus&gt;</d:FactInstance>
              <d:FactType>TestSchema</d:FactType>
              <d:ID>TestPolicy</d:ID>
            </m:properties>
          </content>
        </entry>
        --changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf--
        --batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7--

    Installation: The installation of BRE Data Services is pretty straightforward.
    · Create a new IIS website, say BREDataServices.
    · Download the source code from the Tellago Codeplex workspace and copy the content of Tellago.BRE.REST.ServiceHost to the physical location of the website created above.
    · The appPool account running the website should have admin access to the BizTalkRuleEngineDb database.
    · Right-click the BREManagementService.svc in the IIS Content View for the website and voilà.

    Conclusion: The BRE Data Services API is an experiment intended to bring the capabilities of RESTful/OData-based services to traditional BTS/BRE solutions. Future releases will target technologies like BAM and the ESB Toolkit. This version has been tested with various versions of BizTalk Server and we have uploaded the source code to Tellago's DevLabs workspace at Codeplex. I hope you guys enjoy this release. Keep an eye on our new releases @ Tellago Codeplex. We are working on various other BizTalk artifacts like BAM and the ESB Toolkit.

    Till then, happy BizzRuling!!!

    Thanks,
    Vishal Mody
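    Since the client-side sample is only available as a screenshot, here is a rough, hypothetical C# sketch of what such a call could look like using the WCF Data Services client library; the hand-written Fact class and the "Facts" entity set name are assumptions taken from the AtomPub trace above, and the attach/update pattern simply mirrors the MERGE request shown there.

        using System;
        using System.Data.Services.Client;   // WCF Data Services client library

        // Hypothetical client-side proxy type; the property names mirror the d:ID, d:FactType
        // and d:FactInstance elements in the traced AtomPub payload.
        public class Fact
        {
            public string ID { get; set; }
            public string FactType { get; set; }
            public string FactInstance { get; set; }
        }

        public class BreClient
        {
            public void ExecuteTestPolicy()
            {
                var context = new DataServiceContext(
                    new Uri("http://localhost:8080/Tellago.BRE.REST.ServiceHost/BREMananagementService.svc"));

                var fact = new Fact
                {
                    ID = "TestPolicy",
                    FactType = "TestSchema",
                    FactInstance = "<ns0:LoanStatus xmlns:ns0=\"http://tellago.com\"><Age>10</Age><Status>true</Status></ns0:LoanStatus>"
                };

                // Attach the fact to the "Facts" set and mark it as updated so that SaveChanges()
                // issues a MERGE like the one traced above; the service's IUpdatable implementation
                // then executes the policy with the supplied fact.
                context.AttachTo("Facts", fact);
                context.UpdateObject(fact);
                context.SaveChanges();
            }
        }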

    Read the article

  • CodePlex Daily Summary for Saturday, June 16, 2012

    CodePlex Daily Summary for Saturday, June 16, 2012Popular ReleasesCosmos (C# Open Source Managed Operating System): Release 92560: Prerequisites Visual Studio 2010 - Any version including Express. Express users must also install Visual Studio 2010 Integrated Shell runtime VMWare - Cosmos can run on real hardware as well as other virtualization environments but our default debug setup is configured for VMWare. VMWare Player (Free). or Workstation VMWare VIX API 1.11AutoUpdaterdotNET : Autoupdate for VB.NET and C# Developer: AutoUpdater.NET 1.1: Release Notes *New feature added that allows user to select remind later interval.Sumzlib: API document: API documentMicrosoft SQL Server Product Samples: Database: AdventureWorks 2008 OLTP Script: Install AdventureWorks2008 OLTP database from script The AdventureWorks database can be created by running the instawdb.sql DDL script contained in the AdventureWorks 2008 OLTP Script.zip file. The instawdb.sql script depends on two path environment variables: SqlSamplesDatabasePath and SqlSamplesSourceDataPath. The SqlSamplesDatabasePath environment variable is set to the default Microsoft ® SQL Server 2008 path. You will need to change the SqlSamplesSourceDataPath environment variable to th...HigLabo: HigLabo_20120613: Bug fix HigLabo.Mail Decode header encoded by CP1252Jasc (just another script compressor): 1.3.1: Updated Ajax Minifier to 4.55.WipeTouch, a jQuery touch plugin: 1.2.0: Changes since 1.1.0: New: wipeMove event, triggered while moving the mouse/finger. New: added "source" to the result object. Bug fix: sometimes vertical wipe events would not trigger correctly. Bug fix: improved tapToClick handler. General code refactoring. Windows Phone 7 is not supported, yet! Its behaviour is completely broken and would require some special tricks to make it work. Maybe in the future...Phalanger - The PHP Language Compiler for the .NET Framework: 3.0.0.3026 (June 2012): Fixes: round( 0.0 ) local TimeZone name TimeZone search compiling multi-script-assemblies PhpString serialization DocDocument::loadHTMLFile() token_get_all() parse_url()BlackJumboDog: Ver5.6.4: 2012.06.13 Ver5.6.4  (1) Web???????、???POST??????????????????Yahoo! UI Library: YUI Compressor for .Net: Version 2.0.0.0 - Ferret: - Merging both 3.5 and 2.0 codebases to a single .NET 2.0 assembly. - MSBuild Task. - NAnt Task.Bumblebee: Version 0.3.1: Changed default config values to decent ones. Restricted visibility of Hive.fs to internal. Added some XML documentation. Added Array.shuffle utility. The dll is also available on NuGet My apologies, the initial source code referenced was missing one file which prevented it from building The source code contains two examples, one in C#, one in F#, illustrating the usage of the framework on the Travelling Salesman Problem: Source CodeSharePoint XSL Templates: SPXSLT 0.0.9: Added new template FixAmpersands. Fixed the contents of the MultiSelectValueCheck.xsl file, which was missing the stylesheet wrapper.ExcelFileEditor: .CS File: nothingBizTalk Scheduled Task Adapter: Release 4.0: Works with BizTalk Server 2010. Compiled in .NET Framework 4.0. In this new version are available small improvements compared to the current version (3.0). We can highlight the following improvements or changes: 24 hours support in “start time” property. Previous versions had an issue with setting the start time, as it shown 12 hours watch but no AM/PM. Daily scheduler review. 
Solved a small bug on Daily Properties: unable to switch between “Every day” and “on these days” Installation e...Weapsy - ASP.NET MVC CMS: 1.0.0 RC: - Upgrade to Entity Framework 4.3.1 - Added AutoMapper custom version (by nopCommerce Team) - Added missed model properties and localization resources of Plugin Definitions - Minor changes - Fixed some bugsXenta Framework - extensible enterprise n-tier application framework: Xenta Framework 1.8.0 Beta: Catalog and Publication reviews and ratings Store language packs in data base Improve reporting system Improve Import/Export system A lot of WebAdmin app UI improvements Initial implementation of the WebForum app DB indexes Improve and simplify architecture Less abstractions Modernize architecture Improve, simplify and unify API Simplify and improve testing A lot of new unit tests Codebase refactoring and ReSharpering Utilize Castle Windsor Utilize NHibernate ORM ...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.55: Properly handle IE extension to CSS3 grammar that allows for multiple parameters to functional pseudo-class selectors. add new switch -braces:(new|same) that affects where opening braces are placed in multi-line output. The default, "new" puts them on their own new line; "same" outputs them at the end of the previous line. add new optional values to the -inline switch: -inline:(force|noforce), which can be combined with the existing boolean value via comma-separators; value "force" (which...Microsoft Media Platform: Player Framework: MMP Player Framework 2.7 (Silverlight and WP7): Additional DownloadsSMFv2.7 Full Installer (MSI) - This will install everything you need in order to develop your own SMF player application, including the IIS Smooth Streaming Client. It only includes the assemblies. If you want the source code please follow the link above. Smooth Streaming Sample Player - This is a pre-built player that includes support for IIS Smooth Streaming. You can configure the player to playback your content by simplying editing a configuration file - no need to co...Liberty: v3.2.1.0 Release 10th June 2012: Change Log -Added -Liberty is now digitally signed! If the certificate on Liberty.exe is missing, invalid, or does not state that it was developed by "Xbox Chaos, Open Source Developer," your copy of Liberty may have been altered in some (possibly malicious) way. -Reach Mass biped max health and shield changer -Fixed -H3/ODST Fixed all of the glitches that users kept reporting (also reverted the changes made in 3.2.0.2) -Reach Made some tag names clearer and more consistent between m...Media Companion: Media Companion 3.503b: It has been a while, so it's about time we release another build! Major effort has been for fixing trailer downloads, plus a little bit of work for episode guide tag in TV show NFOs.New Projects.NinJa (dotNinja): An extensive JavaScript Framework revolving around principles found in .NET and aiming to integrate full Intellisense support. bab-rizg: solve unemployment problemBizTalk Multi-part Message Attachments Zipper Pipeline Component: This pipeline component replaces all attachments of a multi-part message, in a send pipeline, for its zipped equivalent.Boggle.Net: A basic implementation of Boggle for WPF.CFScript: CFScript is an ANT-like scripting system for Compact Framework. 
Tasks like copying files, setting registry values o install CAB files can be done with CFScript.Diablo3: Diablo3Dygraphs.NET: Dygraphs.NETDynamics CRM plugin for nopCommerce: This plugins is a bridge between nopCommerce and Dynamics CRM. nms.gaming: Place holderProject Bright Star: Project Bright Star. Deal with it.RDFSharp: RDFSharp is a library designed to ease the development of .NET applications based on the RDF and Semantic Web data model.SlamCMS: An application framework that allows you to build content managed sites leveraging SharePoint 2010 for publishing with tools to query and manifest your data.test02: no

    Read the article

  • Behavior Driven Development (BDD) and DevExpress XAF

    - by Patrick Liekhus
    So in my previous posts I showed you how I used EDMX to quickly build my business objects within XPO and XAF.  But how do you test whether your business objects are actually doing what you want and verify that your business logic is correct?  Well I was reading my monthly MSDN magazine last last year and came across an article about using SpecFlow and WatiN to build BDD tests.  So why not use these same techniques to write SpecFlow style scripts and have them generate EasyTest scripts for use with XAF.  Let me outline and show a few things below.  I plan on releasing this code in a short while, I just wanted to preview what I was thinking. Before we begin… First, if you have not read the article in MSDN, here is the link to the article that I found my inspiration.  It covers the overview of BDD vs. TDD, how to write some of the SpecFlow syntax and how use the “Steps” logic to create your own tests. Second, if you have not heard of EasyTest from DevExpress I strongly recommend you review it here.  It basically takes the power of XAF and the beauty of your application and allows you to create text based files to execute automated commands within your application. Why would we do this?  Because as you will see below, the cucumber syntax is easier for business analysts to interpret and digest the business rules from.  You can find most of the information you will need on Cucumber syntax within The Secret Ninja Cucumber Scrolls located here.  The basics of the syntax are that Given X When Y Then Z.  For example, Given I am at the login screen When I enter my login credentials Then I expect to see the home screen.  Pretty easy syntax to follow. Finally, we will need to download and install SpecFlow.  You can find it on their website here.  Once you have this installed then let’s write our first test. Let’s get started… So where to start.  Create a new testing project within your solution.  I typically call this with a similar naming convention as used by XAF, my project name .FunctionalTests (i.e.  AlbumManager.FunctionalTests).  Remove the basic test that is created for you.  We will not use the default test but rather create our own SpecFlow “Feature” files.  Add a new item to your project and select the SpecFlow Feature file under C#.  Name your feature file as you do your class files after the test they are performing. Now you can crack open your new feature file and write the actual test.  Make sure to have your Ninja Scrolls from above as it provides valuable resources on how to write your test syntax.  In this test below you can see how I defined the documentation in the Feature section.  This is strictly for our purposes of readability and do not effect the test.  The next section is the Scenario Outline which is considered a test template.  You can see the brackets <> around the fields that will be filled in for each test.  So in the example below you can see that Given I am starting a new test and the application is open.  This means I want a new EasyTest file and the windows application generated by XAF is open.  Next When I am at the Albums screen tells XAF to navigate to the Albums list view.  And I click the New:Album button, tells XAF to click the new button on the list grid.  And I enter the following information tells XAF which fields to complete with the mapped values.  And I click the Save and Close button causes the record to be saved and the detail form to be closed.  
    Then I verify results checks the input data against what is visible in the grid to ensure that the record was created. The Scenarios section gives each test a unique name and then fills in the values for each test, so you can use the same template to make multiple passes with different data. Almost there.  Now we save the feature file, and SpecFlow writes the BDD tests for us using standard unit test syntax; all we have to do is save the file.  What you will see in your Test List Editor is a unit test for each of the scenarios you just built. You can now use standard unit testing frameworks to execute the tests as you desire.  As you would expect, these BDD SpecFlow tests can be automated into your build process to ensure that your business requirements are satisfied each and every time. How does it work? We intercept the testing logic at runtime and translate the SpecFlow syntax into EasyTest syntax.  This is the basic set of StepDefinitions that we are working on now.  We expect to put these on CodePlex within the next few days.  You can always override them and write your own rules as you see fit for your project.  Follow the MSDN magazine article above to start your own.  You can see part of our implementation below. As you can gather from the MSDN article and the code sample below, we have created our own common rules to handle the syntax shown above. The implementation of these rules basically saves the information from the feature file into the EasyTest file format.  It then executes the EasyTest file and parses the XML results of the test.  If the test succeeds, the test passes.  If the test fails, the EasyTest failure message is logged and the screenshot (as captured by EasyTest) is saved for your review. Again, we are working on getting this code ready for mass consumption, but at this time it is not ready.  We will post another message when it is ready with all of the details about usage and setup. Thanks
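    Since the StepDefinitions code is not posted yet, here is a minimal sketch of what a SpecFlow binding class that emits an EasyTest script could look like. This is not the author's implementation; the class name, the step patterns, the application name, and the exact EasyTest command text are illustrative assumptions only.

        using System.Collections.Generic;
        using TechTalk.SpecFlow;

        [Binding]
        public class EasyTestSteps
        {
            // Accumulates the EasyTest commands generated from the SpecFlow steps.
            // The EasyTest command strings below are assumed for illustration.
            private readonly List<string> _script = new List<string>();

            [Given(@"I am starting a new test")]
            public void GivenIAmStartingANewTest()
            {
                _script.Clear();
            }

            [Given(@"the application is open")]
            public void GivenTheApplicationIsOpen()
            {
                _script.Add("#Application AlbumManager.Win"); // hypothetical application name
            }

            [When(@"I am at the (.*) screen")]
            public void WhenIAmAtTheScreen(string viewName)
            {
                _script.Add("*Action Navigate(" + viewName + ")");
            }

            [When(@"I click the (.*) button")]
            public void WhenIClickTheButton(string buttonName)
            {
                _script.Add("*Action " + buttonName);
            }

            [When(@"I enter the following information")]
            public void WhenIEnterTheFollowingInformation(Table fields)
            {
                foreach (var row in fields.Rows)
                {
                    _script.Add("*FillForm " + row["Field"] + " = " + row["Value"]);
                }
            }

            [Then(@"I verify results")]
            public void ThenIVerifyResults()
            {
                // In a real implementation this is where the generated script would be
                // written to disk, handed to the EasyTest runner, and the XML results parsed.
                System.IO.File.WriteAllLines("GeneratedTest.ets", _script.ToArray());
            }
        }

    The point is simply that each Given/When/Then binding maps one line of the cucumber-style scenario onto code that appends the equivalent EasyTest command, so the feature file stays readable for business analysts while the tooling still drives the XAF application.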

    Read the article

  • Combining Shared Secret and Username Token – Azure Service Bus

    - by Michael Stephenson
    As discussed in the introduction article this walkthrough will explain how you can implement WCF security with the Windows Azure Service Bus to ensure that you can protect your endpoint in the cloud with a shared secret but also flow through a username token so that in your listening WCF service you will be able to identify who sent the message. This could either be in the form of an application or a user depending on how you want to use your token. Prerequisites Before going into the walk through I want to explain a few assumptions about the scenario we are implementing but to keep the article shorter I am not going to walk through all of the steps in how to setup some of this. In the solution we have a simple console application which will represent the client application. There is also the services WCF application which contains the WCF service we will expose via the Windows Azure Service Bus. The WCF Service application in this example was hosted in IIS 7 on Windows 2008 R2 with AppFabric Server installed and configured to auto-start the WCF listening services. I am not going to go through significant detail around the IIS setup because it should not matter in relation to this article however if you want to understand more about how to configure WCF and IIS for such a scenario please refer to the following paper which goes into a lot of detail about how to configure this. The link is: http://tinyurl.com/8s5nwrz   The Service Component To begin with let's look at the service component and how it can be configured to listen to the service bus using a shared secret but to also accept a username token from the client. In the sample the service component is called Acme.Azure.ServiceBus.Poc.UN.Services. It has a single service which is the Visual Studio template for a WCF service when you add a new WCF Service Application so we have a service called Service1 with its Echo method. Nothing special so far!.... The next step is to look at the web.config file to see how we have configured the WCF service. In the services section of the WCF configuration you can see I have created my service and I have created a local endpoint which I simply used to do a little bit of diagnostics and to check it was working, but more importantly there is the Windows Azure endpoint which is using the ws2007HttpRelayBinding (note that this should also work just the same if your using netTcpRelayBinding). The key points to note on the above picture are the service behavior called MyServiceBehaviour and the service bus endpoints behavior called MyEndpointBehaviour. We will go into these in more detail later.   The Relay Binding The relay binding for the service has been configured to use the TransportWithMessageCredential security mode. This is the important bit where the transport security really relates to the interaction between the service and listening to the Azure Service Bus and the message credential is where we will use our username token like we have specified in the message/clientCrentialType attribute. Note also that we have left the relayClientAuthenticationType set to RelayAccessToken. This means that authentication will be made against ACS for accessing the service bus and messages will not be accepted from any sender who has not been authenticated by ACS.   
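    If you prefer to see the relay binding without the XML, the fragment below shows roughly how the same binding could be constructed in code. It is a sketch only, assuming the Microsoft.ServiceBus relay binding API of that SDK generation; the wrapper class is illustrative and is not part of the downloadable sample.

        using System.ServiceModel;
        using Microsoft.ServiceBus;

        static class RelayBindingFactory
        {
            // A minimal sketch of creating the relay binding in code rather than web.config.
            public static WS2007HttpRelayBinding CreateBinding()
            {
                var binding = new WS2007HttpRelayBinding(
                    EndToEndSecurityMode.TransportWithMessageCredential,
                    RelayClientAuthenticationType.RelayAccessToken);

                // The message credential is a username token, exactly as configured above.
                binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;
                return binding;
            }
        }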
The Endpoint Behaviour In the below picture you can see the endpoint behavior which is configured to use the shared secret client credential for accessing the service bus and also for diagnostic purposes I have included the service registry element. Hopefully if you are familiar with using Windows Azure Service Bus relay feature the above is very familiar to you and this is a very common setup for this section. There is nothing specific to the username token implementation here. The Service Behaviour Now we come to the bit with most of the username token bits in it. When you configure the service behavior I have included the serviceCredentials element and then setup to use userNameAuthentication and you can see that I have created my own custom username token validator.   This setup means that WCF will hand off to my class for validating the username token details. I have also added the serviceSecurityAudit element to give me a simple auditing of access capability. My UsernamePassword Validator The below picture shows you the details of the username password validator class I have implemented. WCF will hand off to this class when validating the token and give me a nice way to check the token credentials against an on-premise store. You have all of the validation features with a non-service bus WCF implementation available such as validating the username password against active directory or ASP.net membership features or as in my case above something much simpler.   The Client Now let's take a look at the client side of this solution and how we can configure the client to authenticate against ACS but also send a username token over to the service component so it can implement additional security checks on-premise. I have a console application and in the program class I want to use the proxy generated with Add Service Reference to send a message via the Azure Service Bus. You can see in my WCF client configuration below I have setup my details for the azure service bus url and am using the ws2007HttpRelayBinding. Next is my configuration for the relay binding. You can see below I have configured security to use TransportWithMessageCredential so we will flow the username token with the message and also the RelayAccessToken relayClientAuthenticationType which means the component will validate against ACS before being allowed to access the relay endpoint to send a message.     After the binding we need to configure the endpoint behavior like in the below picture. This is the normal configuration to use a shared secret for accessing a Service Bus endpoint.   Finally below we have the code of the client in the console application which will call the service bus. You can see that we have created our proxy and then made a normal call to a WCF service but this time we have also set the ClientCredentials to use the appropriate username and password which will be flown through the service bus and to our service which will validate them.     Conclusion As you can see from the above walkthrough it is not too difficult to configure a service to use both a shared secret and username token at the same time. This gives you the power and protection offered by the access control service in the cloud but also the ability to flow additional tokens to the on-premise component for additional security features to be implemented. Sample The sample used in this post is available at the following location: https://s3.amazonaws.com/CSCBlogSamples/Acme.Azure.ServiceBus.Poc.UN.zip
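    Going back to the custom username password validator described above, here is a minimal sketch of the shape such a validator typically takes in WCF. The class name, the hard-coded credentials, and the client snippet are assumptions for illustration; the downloadable sample contains the author's own implementation.

        using System.IdentityModel.Selectors;
        using System.IdentityModel.Tokens;

        // WCF hands the incoming username token to this class when
        // userNamePasswordValidationMode is set to Custom in the service behaviour.
        public class SimpleUsernameValidator : UserNamePasswordValidator
        {
            public override void Validate(string userName, string password)
            {
                // Replace this check with Active Directory, ASP.NET membership,
                // or whatever on-premise store you use.
                if (userName != "PocUser" || password != "PocPassword")
                {
                    throw new SecurityTokenException("Unknown username or password");
                }
            }
        }

        // On the client, the token is supplied through the proxy's credentials, e.g.:
        //   proxy.ClientCredentials.UserName.UserName = "PocUser";
        //   proxy.ClientCredentials.UserName.Password = "PocPassword";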

    Read the article

  • elffile: ELF Specific File Identification Utility

    - by user9154181
    Solaris 11 has a new standard user level command, /usr/bin/elffile. elffile is a variant of the file utility that is focused exclusively on linker related files: ELF objects, archives, and runtime linker configuration files. All other files are simply identified as "non-ELF". The primary advantage of elffile over the existing file utility is in the area of archives — elffile examines the archive members and can produce a summary of the contents, or per-member details. The impetus to add elffile to Solaris came from the effort to extend the format of Solaris archives so that they could grow beyond their previous 32-bit file limits. That work introduced a new archive symbol table format. Now that there was more than one possible format, I thought it would be useful if the file utility could identify which format a given archive is using, leading me to extend the file utility:

        % cc -c ~/hello.c
        % ar r foo.a hello.o
        % file foo.a
        foo.a: current ar archive, 32-bit symbol table
        % ar r -S foo.a hello.o
        % file foo.a
        foo.a: current ar archive, 64-bit symbol table

    In turn, this caused me to think about all the things that I would like the file utility to be able to tell me about an archive. In particular, I'd like to be able to know what's inside without having to unpack it. The end result of that train of thought was elffile. Much of the discussion in this article is adapted from the PSARC case I filed for elffile in December 2010: PSARC 2010/432 elffile.

    Why file Is No Good For Archives And Yet Should Not Be Fixed

    The standard /usr/bin/file utility is not very useful when applied to archives. When identifying an archive, a user typically wants to know 2 things:
    1. Is this an archive?
    2. Presupposing that the archive contains objects, which is by far the most common use for archives, what platform are the objects for? Are they for sparc or x86? 32 or 64-bit? Some confusing combination from varying platforms?

    The file utility provides a quick answer to question (1), as it identifies all archives as "current ar archive". It does nothing to answer the more interesting question (2). Answering that question requires a multi-step process:
    1. Extract all archive members.
    2. Use the file utility on the extracted files, examine the output for each file in turn, and compare the results to generate a suitable summary description.
    3. Remove the extracted files.

    It should be easier and more efficient to answer such an obvious question. It would be reasonable to extend the file utility to examine archive contents in place and produce a description. However, there are several reasons why I decided not to do so:
    - The correct design for this feature within the file utility would have file examine each archive member in turn, applying its full abilities to each member. This would be elegant, but also represents a rather dramatic redesign and re-implementation of file. Archives nearly always contain nothing but ELF objects for a single platform, so such generality in the file utility would be of little practical benefit.
    - It is best to avoid adding new options to standard utilities for which other implementations of interest exist. In the case of the file utility, one concern is that we might add an option which later appears in the GNU version of file with a different and incompatible meaning. Indeed, there have been discussions about replacing the Solaris file with the GNU version in the past. This may or may not be desirable, and may or may not ever happen. Either way, I don't want to preclude it.
    - Examining archive members is an O(n) operation, and can be relatively slow with large archives. The file utility is supposed to be a very fast operation.

    I decided that extending file in this way is overkill, and that an investment in the file utility for better archive support would not be worth the cost. A solution that is more narrowly focused on ELF and other linker related files is really all that we need. The necessary code for doing this already exists within libelf. All that is missing is a small user-level wrapper to make that functionality available at the command line. In that vein, I considered adding an option for this to the elfdump utility. I examined elfdump carefully, and even wrote a prototype implementation. The added code is small and simple, but the conceptual fit with the rest of elfdump is poor. The result complicates elfdump syntax and documentation, definite signs that this functionality does not belong there. And so, I added this functionality as a new user level command.

    The elffile Command

    The syntax for this new command is

        elffile [-s basic | detail | summary] filename...

    Please see the elffile(1) manpage for additional details. To demonstrate how output from elffile looks, I will use the following files:

        File         Description
        config       A runtime linker configuration file produced with crle
        dwarf.o      An ELF object
        /etc/passwd  A text file
        mixed.a      Archive containing a mixture of ELF and non-ELF members
        mixed_elf.a  Archive containing ELF objects for different machines
        not_elf.a    Archive containing no ELF objects
        same_elf.a   Archive containing a collection of ELF objects for the same machine. This is the most common type of archive.

    The file utility identifies these files as follows:

        % file config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a
        config:      Runtime Linking Configuration 64-bit MSB SPARCV9
        dwarf.o:     ELF 64-bit LSB relocatable AMD64 Version 1
        /etc/passwd: ascii text
        mixed.a:     current ar archive, 32-bit symbol table
        mixed_elf.a: current ar archive, 32-bit symbol table
        not_elf.a:   current ar archive
        same_elf.a:  current ar archive, 32-bit symbol table

    By default, elffile uses its "summary" output style. This output differs from the output from the file utility in 2 significant ways:
    1. Files that are not an ELF object, archive, or runtime linker configuration file are identified as "non-ELF", whereas the file utility attempts further identification for such files.
    2. When applied to an archive, the elffile output includes a description of the archive's contents, without requiring member extraction or other additional steps.

    Applying elffile to the above files:

        % elffile config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a
        config:      Runtime Linking Configuration 64-bit MSB SPARCV9
        dwarf.o:     ELF 64-bit LSB relocatable AMD64 Version 1
        /etc/passwd: non-ELF
        mixed.a:     current ar archive, 32-bit symbol table, mixed ELF and non-ELF content
        mixed_elf.a: current ar archive, 32-bit symbol table, mixed ELF content
        not_elf.a:   current ar archive, non-ELF content
        same_elf.a:  current ar archive, 32-bit symbol table, ELF 64-bit LSB relocatable AMD64 Version 1

    The output for same_elf.a is of particular interest: The vast majority of archives contain only ELF objects for a single platform, and in this case, the default output from elffile answers both of the questions about archives posed at the beginning of this discussion, in a single efficient step. This makes elffile considerably more useful than file, within the realm of linker-related files.
    elffile can produce output in two other styles, "basic" and "detail". The basic style produces output that is the same as that from 'file', for linker-related files. The detail style produces per-member identification of archive contents. This can be useful when the archive contents are not homogeneous ELF objects, and more information is desired than the summary output provides:

        % elffile -s detail mixed.a
        mixed.a: current ar archive, 32-bit symbol table
        mixed.a(dwarf.o): ELF 32-bit LSB relocatable 80386 Version 1
        mixed.a(main.c): non-ELF content
        mixed.a(main.o): ELF 64-bit LSB relocatable AMD64 Version 1 [SSE]

    Read the article

  • We've completed the first iteration

    - by CliveT
    There are a lot of features in C# that are implemented by the compiler and not by the underlying platform. One such feature is a lambda expression. Since local variables cannot be accessed once the current method activation finishes, the compiler has to go out of its way to generate a new class which acts as a home for any variable whose lifetime needs to be extended past the activation of the procedure. Take the following example:

        Random generator = new Random();
        Func<int> func = () => generator.Next(10);

    In this case, the compiler generates a new class called <>c__DisplayClass1 which is marked with the CompilerGenerated attribute.

        [CompilerGenerated]
        private sealed class <>c__DisplayClass1
        {
            // Fields
            public Random generator;

            // Methods
            public int b__0()
            {
                return this.generator.Next(10);
            }
        }

    Two quick comments on this:
    (i) A display was the means by which compilers for languages like Algol recorded the various lexical contours of the nested procedure activations on the stack. I imagine that this is what has led to the name.
    (ii) It is a shame that the same attribute is used to mark all compiler generated classes as it makes it hard to figure out what they are being used for. Indeed, you could imagine optimisations that the runtime could perform if it knew that classes corresponded to certain high level concepts.

    We can see that the local variable generator has been turned into a field in the class, and the body of the lambda expression has been turned into a method of the new class. The code that builds the Func<int> object simply constructs an instance of this class and initialises the fields to their initial values.

        <>c__DisplayClass1 class2 = new <>c__DisplayClass1();
        class2.generator = new Random();
        Func<int> func = new Func<int>(class2.b__0);

    Reflector already contains code to spot this pattern of code and reproduce the form containing the lambda expression, so this example is correctly decompiled. The use of compiler generated code is even more spectacular in the case of iterators. C# introduced the idea of a method that could automatically store its state between calls, so that it can pick up where it left off. The code can express the logical flow with yield return and yield break denoting places where the method should return a particular value and be prepared to resume.

        {
            yield return 1;
            yield return 2;
            yield return 3;
        }

    Of course, there was already a .NET pattern for expressing the idea of returning a sequence of values with the computation proceeding lazily (in the sense that the work for the next value is executed on demand). This is expressed by the IEnumerator<T> interface with its Current property for fetching the current value and the MoveNext method for forcing the computation of the next value. The sequence is terminated when this method returns false. The C# compiler links these two ideas together so that an IEnumerable<T>- or IEnumerator<T>-returning method using the yield keyword causes the compiler to produce the implementation of an Iterator. Take the following piece of code.

        IEnumerable<int> GetItems()
        {
            yield return 1;
            yield return 2;
            yield return 3;
        }

    The compiler implements this by defining a new class that implements a state machine. This has an integer state that records which yield point we should go to if we are resumed. It also has a field that records the Current value of the enumerator and a field for recording the thread. This latter value is used for optimising the creation of iterator instances.

        [CompilerGenerated]
        private sealed class <GetItems>d__0 : IEnumerable<int>, IEnumerable, IEnumerator<int>, IEnumerator, IDisposable
        {
            // Fields
            private int <>1__state;
            private int <>2__current;
            public Program <>4__this;
            private int <>l__initialThreadId;

    The body gets converted into the code to construct and initialize this new class.

        private IEnumerable<int> GetItems()
        {
            <GetItems>d__0 d__ = new <GetItems>d__0(-2);
            d__.<>4__this = this;
            return d__;
        }

    When the class is constructed we set the state, which was passed through as -2, and the current thread.

        public <GetItems>d__0(int <>1__state)
        {
            this.<>1__state = <>1__state;
            this.<>l__initialThreadId = Thread.CurrentThread.ManagedThreadId;
        }

    The state needs to be set to 0 to represent a valid enumerator, and this is done in the GetEnumerator method, which optimises for the usual case where the returned enumerator is only used once.

        IEnumerator<int> IEnumerable<int>.GetEnumerator()
        {
            if ((Thread.CurrentThread.ManagedThreadId == this.<>l__initialThreadId)
                && (this.<>1__state == -2))
            {
                this.<>1__state = 0;
                return this;
            }

    The state machine itself is implemented inside the MoveNext method.

        private bool MoveNext()
        {
            switch (this.<>1__state)
            {
                case 0:
                    this.<>1__state = -1;
                    this.<>2__current = 1;
                    this.<>1__state = 1;
                    return true;
                case 1:
                    this.<>1__state = -1;
                    this.<>2__current = 2;
                    this.<>1__state = 2;
                    return true;
                case 2:
                    this.<>1__state = -1;
                    this.<>2__current = 3;
                    this.<>1__state = 3;
                    return true;
                case 3:
                    this.<>1__state = -1;
                    break;
            }
            return false;
        }

    At each stage, the current value of the state is used to determine how far we got, and then we generate the next value, which we return after recording the next state. Finally we return false from MoveNext to signify the end of the sequence. Of course, that example was really simple. The original method body didn't have any local variables. Any local variables need to live between the calls to MoveNext and so they need to be transformed into fields in much the same way that we did in the case of the lambda expression. More complicated MoveNext methods are required to deal with resources that need to be disposed when the iterator finishes, and sometimes the compiler uses a temporary variable to hold the return value.
    Why all of this explanation? We've implemented the de-compilation of iterators in the current EAP version of Reflector (7). This contrasts with previous versions, where all you could do was look at the MoveNext method and try to figure out the control flow. There is a fair amount that we have to do. We have to spot the use of a CompilerGenerated class which implements the Enumerator pattern. We need to go to the class and figure out the fields corresponding to the local variables. We then need to go to the MoveNext method, try to break it into the various possible states, and spot the state transitions. We can then take these pieces and put them back together into an object model that uses yield return to show the transition points. After that Reflector can carry on optimising using its usual optimisations. The pattern matching is currently a little too sensitive to changes in the code generation, and we only do a limited analysis of the MoveNext method to determine use of the compiler generated fields.
    In some ways, it is a pity that iterators are compiled away and there is no metadata that reflects the original intent. Without it, we are always going to be dependent on our knowledge of the compiler's implementation. For example, we have noticed that the Async CTP changes the way that iterators are code generated, so we'll have to do some more work to support that. However, with that warning in place, we seem to do a reasonable job of decompiling the iterators that are built into the framework. Hopefully, the EAP will give us a chance to find examples where we don't spot the pattern correctly or regenerate the wrong code, and we can improve things. Please give it a go, and report any problems.
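    As a footnote to the point about local variables above, consider a small iterator that keeps a running total. The runnable method is ordinary C#; the commented fragment after it is a hand-written approximation, not compiler output, included only to illustrate how the local is lifted into a field of the generated state machine.

        using System.Collections.Generic;

        class Totals
        {
            // The local variable 'total' must survive between calls to MoveNext...
            public IEnumerable<int> RunningTotal(IEnumerable<int> source)
            {
                int total = 0;
                foreach (int item in source)
                {
                    total += item;
                    yield return total;
                }
            }
        }

        // ...so the compiler-generated state machine stores it as a field, roughly:
        //
        //   private sealed class RunningTotalIterator      // real name follows the <RunningTotal>d__… pattern
        //   {
        //       private int <>1__state;
        //       private int <>2__current;
        //       public IEnumerable<int> source;
        //       private IEnumerator<int> enumerator;       // the foreach enumerator is also lifted
        //       private int total;                         // the lifted local variable
        //       ...
        //   }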

    Read the article

  • BI Applications overview

    - by sv744
    Welcome to Oracle BI applications blog! This blog will talk about various features, general roadmap, description of functionality and implementation steps related to Oracle BI applications. In the first post we start with an overview of the BI apps and will delve deeper into some of the topics below in the upcoming weeks and months. If there are other topics you would like us to talk about, pl feel free to provide feedback on that. The Oracle BI applications are a set of pre-built applications that enable pervasive BI by providing role-based insight for each functional area, including sales, service, marketing, contact center, finance, supplier/supply chain, HR/workforce, and executive management. For example, Sales Analytics includes role-based applications for sales executives, sales management, as well as front-line sales reps, each of whom have different needs. The applications integrate and transform data from a range of enterprise sources—including Siebel, Oracle, PeopleSoft, SAP, and others—into actionable intelligence for each business function and user role. This blog  starts with the key benefits and characteristics of Oracle BI applications. In a series of subsequent blogs, each of these points will be explained in detail. Why BI apps? Demonstrate the value of BI to a business user, show reports / dashboards / model that can answer their business questions as part of the sales cycle. Demonstrate technical feasibility of BI project and significantly lower risk and improve success Build Vs Buy benefit Don’t have to start with a blank sheet of paper. Help consolidate disparate systems Data integration in M&A situations Insulate BI consumers from changes in the OLTP Present OLTP data and highlight issues of poor data / missing data – and improve data quality and accuracy Prebuilt Integrations BI apps support prebuilt integrations against leading ERP sources: Fusion Applications, E- Business Suite, Peoplesoft, JD Edwards, Siebel, SAP Co-developed with inputs from functional experts in BI and Applications teams. Out of the box dimensional model to source model mappings Multi source and Multi Instance support Rich Data Model    BI apps have a very rich dimensionsal data model built over 10 years that incorporates best practises from BI modeling perspective as well as reflect the source system complexities  Thanks for reading a long post, and be on the lookout for future posts.  We will look forward to your valuable feedback on these topics as well as suggestions on what other topics would you like us to cover. I Conformed dimensional model across all business subject areas allows cross functional reporting, e.g. customer / supplier 360 Over 360 fact tables across 7 product areas CRM – 145, SCM – 47, Financials – 28, Procurement – 20, HCM – 27, Projects – 18, Campus Solutions – 21, PLM - 56 Supported by 300 physical dimensions Support for extensive calendars; Gregorian, enterprise and ledger based Conformed data model and metrics for real time vs warehouse based reporting  Multi-tenant enabled Extensive BI related transformations BI apps ETL and data integration support various transformations required for dimensional models and reporting requirements. All these have been distilled into common patterns and abstracted logic which can be readily reused across different modules Slowly Changing Dimension support Hierarchy flattening support Row / Column Hybrid Hierarchy Flattening As Is vs. 
As Was hierarchy support Currency Conversion :-  Support for 3 corporate, CRM, ledger and transaction currencies UOM conversion Internationalization / Localization Dynamic Data translations Code standardization (Domains) Historical Snapshots Cycle and process lifecycle computations Balance Facts Equalization of GL accounting chartfields/segments Standardized values for categorizing GL accounts Reconciliation between GL and subledgers to track accounted/transferred/posted transactions to GL Materialization of data only available through costly and complex APIs e.g. Fusion Payroll, EBS / Fusion Accruals Complex event Interpretation of source data – E.g. o    What constitutes a transfer o    Deriving supervisors via position hierarchy o    Deriving primary assignment in PSFT o    Categorizing and transposition to measures of Payroll Balances to specific metrics to support side by side comparison of measures of for example Fixed Salary, Variable Salary, Tax, Bonus, Overtime Payments. o    Counting of Events – E.g. converting events to fact counters so that for example the number of hires can easily be added up and compared alongside the total transfers and terminations. Multi pass processing of multiple sources e.g. headcount, salary, promotion, performance to allow side to side comparison. Adding value to data to aid analysis through banding, additional domain classifications and groupings to allow higher level analytical reporting and data discovery Calculation of complex measures examples: o    COGs, DSO, DPO, Inventory turns  etc o    Transfers within a Hierarchy or out of / into a hierarchy relative to view point in hierarchy. Configurability and Extensibility support  BI apps offer support for extensibility for various entities as automated extensibility or part of extension methodology Key Flex fields and Descriptive Flex support  Extensible attribute support (JDE)  Conformed Domains ETL Architecture BI apps offer a modular adapter architecture which allows support of multiple product lines into a single conformed model Multi Source Multi Technology Orchestration – creates load plan taking into account task dependencies and customers deployment to generate a plan based on a customers of multiple complex etl tasks Plan optimization allowing parallel ETL tasks Oracle: Bit map indexes and partition management High availability support    Follow the sun support. 
TCO BI apps support several utilities / capabilities that help with overall total cost of ownership and ensure a rapid implementation Improved cost of ownership – lower cost to deploy On-going support for new versions of the source application Task based setups flows Data Lineage Functional setup performed in Web UI by Functional person Configuration Test to Production support Security BI apps support both data and object security enabling implementations to quickly configure the application as per the reporting security needs Fine grain object security at report / dashboard and presentation catalog level Data Security integration with source systems  Extensible to support external data security rules Extensive Set of KPIs Over 7000 base and derived metrics across all modules Time series calculations (YoY, % growth etc) Common Currency and UOM reporting Cross subject area KPIs (analyzing HR vs GL data, drill from GL to AP/AR, etc) Prebuilt reports and dashboards 3000+ prebuilt reports supporting a large number of industries Hundreds of role based dashboards Dynamic currency conversion at dashboard level Highly tuned Performance The BI apps have been tuned over the years for both a very performant ETL and dashboard performance. The applications use best practises and advanced database features to enable the best possible performance. Optimized data model for BI and analytic queries Prebuilt aggregates& the ability for customers to create their own aggregates easily on warehouse facts allows for scalable end user performance Incremental extracts and loads Incremental Aggregate build Automatic table index and statistics management Parallel ETL loads Source system deletes handling Low latency extract with Golden Gate Micro ETL support Bitmap Indexes Partitioning support Modularized deployment, start small and add other subject areas seamlessly Source Specfic Staging and Real Time Schema Support for source specific operational reporting schema for EBS, PSFT, Siebel and JDE Application Integrations The BI apps also allow for integration with source systems as well as other applications that provide value add through BI and enable BI consumption during operational decision making Embedded dashboards for Fusion, EBS and Siebel applications Action Link support Marketing Segmentation Sales Predictor Dashboard Territory Management External Integrations The BI apps data integration choices include support for loading extenral data External data enrichment choices : UNSPSC, Item class etc. Extensible Spend Classification Broad Deployment Choices Exalytics support Databases :  Oracle, Exadata, Teradata, DB2, MSSQL ETL tool of choice : ODI (coming), Informatica Extensible and Customizable Extensible architecture and Methodology to add custom and external content Upgradable across releases

    Read the article

  • Combining Shared Secret and Certificates

    - by Michael Stephenson
    As discussed in the introduction article this walkthrough will explain how you can implement WCF security with the Windows Azure Service Bus to ensure that you can protect your endpoint in the cloud with a shared secret but also combine this with certificates so that you can identify the sender of the message.   Prerequisites As in the previous article before going into the walk through I want to explain a few assumptions about the scenario we are implementing but to keep the article shorter I am not going to walk through all of the steps in how to setup some of this. In the solution we have a simple console application which will represent the client application. There is also the services WCF application which contains the WCF service we will expose via the Windows Azure Service Bus. The WCF Service application in this example was hosted in IIS 7 on Windows 2008 R2 with AppFabric Server installed and configured to auto-start the WCF listening services. I am not going to go through significant detail around the IIS setup because it should not matter in relation to this article however if you want to understand more about how to configure WCF and IIS for such a scenario please refer to the following paper which goes into a lot of detail about how to configure this. The link is: http://tinyurl.com/8s5nwrz   Setting up the Certificates To keep the post and sample simple I am going to use the local computer store for all certificates but this bit is really just the same as setting up certificates for an example where you are using WCF without using Windows Azure Service Bus. In the sample I have included two batch files which you can use to create the sample certificates or remove them. Basically you will end up with: A certificate called PocServerCert in the personal store for the local computer which will be used by the WCF Service component A certificate called PocClientCert in the personal store for the local computer which will be used by the client application A root certificate in the Root store called PocRootCA with its associated revocation list which is the root from which the client and server certificates were created   For the sample Im just using development certificates like you would normally, and you can see exactly how these are configured and placed in the stores from the batch files in the solution using makecert and certmgr.   The Service Component To begin with let's look at the service component and how it can be configured to listen to the service bus using a shared secret but to also accept a username token from the client. In the sample the service component is called Acme.Azure.ServiceBus.Poc.Cert.Services. It has a single service which is the Visual Studio template for a WCF service when you add a new WCF Service Application so we have a service called Service1 with its Echo method. Nothing special so far!.... The next step is to look at the web.config file to see how we have configured the WCF service. In the services section of the WCF configuration you can see I have created my service and I have created a local endpoint which I simply used to do a little bit of diagnostics and to check it was working, but more importantly there is the Windows Azure endpoint which is using the ws2007HttpRelayBinding (note that this should also work just the same if your using netTcpRelayBinding). The key points to note on the above picture are the service behavior called MyServiceBehaviour and the service bus endpoints behavior called MyEndpointBehaviour. 
We will go into these in more detail later.   The Relay Binding The relay binding for the service has been configured to use the TransportWithMessageCredential security mode. This is the important bit where the transport security really relates to the interaction between the service and listening to the Azure Service Bus and the message credential is where we will use our certificate like we have specified in the message/clientCrentialType attribute. Note also that we have left the relayClientAuthenticationType set to RelayAccessToken. This means that authentication will be made against ACS for accessing the service bus and messages will not be accepted from any sender who has not been authenticated by ACS.   The Endpoint Behaviour In the below picture you can see the endpoint behavior which is configured to use the shared secret client credential for accessing the service bus and also for diagnostic purposes I have included the service registry element.     Hopefully if you are familiar with using Windows Azure Service Bus relay feature the above is very familiar to you and this is a very common setup for this section. There is nothing specific to the username token implementation here. The Service Behaviour Now we come to the bit with most of the certificate stuff in it. When you configure the service behavior I have included the serviceCredentials element and then setup to use the clientCertificate check and also specifying the serviceCertificate with information on how to find the servers certificate in the store.     I have also added a serviceAuthorization section where I will implement my own authorization component to perform additional security checks after the service has validated that the message was signed with a good certificate. I also have the same serviceSecurityAudit configuration to log access to my service. My Authorization Manager The below picture shows you implementation of my authorization manager. WCF will eventually hand off the message to my authorization component before it calls the service code. This is where I can perform some logic to check if the identity is allowed to access resources. In this case I am simple rejecting messages from anyone except the PocClientCertificate.     The Client Now let's take a look at the client side of this solution and how we can configure the client to authenticate against ACS but also send a certificate over to the service component so it can implement additional security checks on-premise. I have a console application and in the program class I want to use the proxy generated with Add Service Reference to send a message via the Azure Service Bus. You can see in my WCF client configuration below I have setup my details for the azure service bus url and am using the ws2007HttpRelayBinding.   Next is my configuration for the relay binding. You can see below I have configured security to use TransportWithMessageCredential so we will flow the token from a certificate with the message and also the RelayAccessToken relayClientAuthenticationType which means the component will validate against ACS before being allowed to access the relay endpoint to send a message.     After the binding we need to configure the endpoint behavior like in the below picture. This contains the normal transportClientEndpointBehaviour to setup the ACS shared secret configuration but we have also configured the clientCertificate to look for the PocClientCert.     
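    For completeness, the clientCertificate element just mentioned can also be set up in code on the proxy's credentials before the call is made. This is a sketch of the standard WCF approach, assuming the certificate was installed into the local machine's personal store as described earlier; it is not part of the downloadable sample.

        using System.Security.Cryptography.X509Certificates;
        using System.ServiceModel.Description;

        static class ClientCertificateSetup
        {
            // Call this with proxy.ClientCredentials before invoking the service.
            public static void AttachPocClientCert(ClientCredentials credentials)
            {
                credentials.ClientCertificate.SetCertificate(
                    StoreLocation.LocalMachine,   // the sample certificates live in the local machine store
                    StoreName.My,
                    X509FindType.FindBySubjectName,
                    "PocClientCert");
            }
        }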
Finally below we have the code of the client in the console application which will call the service bus. You can see that we have created our proxy and then made a normal call to a WCF in exactly the normal way but the configuration will jump in and ensure that a token is passed representing the client certificate.     Conclusion As you can see from the above walkthrough it is not too difficult to configure a service to use both a shared secret and certificate based token at the same time. This gives you the power and protection offered by the access control service in the cloud but also the ability to flow additional tokens to the on-premise component for additional security features to be implemented. Sample The sample used in this post is available at the following location: https://s3.amazonaws.com/CSCBlogSamples/Acme.Azure.ServiceBus.Poc.Cert.zip
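    To give a feel for the custom authorization manager described above, here is a minimal sketch of a ServiceAuthorizationManager that only lets the PocClientCert identity through. The exact check performed in the article's sample may differ; treat the class name and the subject-name comparison as assumptions for illustration.

        using System.IdentityModel.Claims;
        using System.ServiceModel;

        public class PocCertificateAuthorizationManager : ServiceAuthorizationManager
        {
            protected override bool CheckAccessCore(OperationContext operationContext)
            {
                // Look through the claim sets produced by the message security layer and
                // allow the call only if it was signed with the PocClientCert certificate.
                foreach (ClaimSet claimSet in operationContext.ServiceSecurityContext.AuthorizationContext.ClaimSets)
                {
                    var certificateClaimSet = claimSet as X509CertificateClaimSet;
                    if (certificateClaimSet != null &&
                        certificateClaimSet.X509Certificate.Subject.Contains("CN=PocClientCert"))
                    {
                        return true;
                    }
                }

                return false;
            }
        }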

    Read the article

  • JPA 2, EJB 3.1, and JSF 2: Building Java EE 6 Applications on WebLogic Server 12c | WebLogic Channel | Oracle

    - by ???02
    ????????????????????????????????????????·???????????Java EE 6???????????????·????WebLogic Server 12c?(???)?????????Oracle Enterprise Pack for Eclipse 12c?????Java EE 6??????3???????????????????????JSF 2.0?????????????????????????JAX-RS????RESTful?Web???????????????(???)?????????????JSF 2.0???????????????? Java EE 6??????????????????????????????????????JSF(JavaServer Faces) 2.0??????????Java EE?????????????????????????????????Struts????????????????????????????????JSF 2.0?Java EE 6??????????????????????????????????????????????????JSP(JavaServer Pages)?JSF???????????????????????·???????????????????????Web???????????????????????????????????????????????????????????????????????????????? ???????????????????????????????EJB??????????????EMPLOYEES??????????????????????XHTML????????????????????????????????????????????????????????????ManagedBean????????????JSF 2.0????????????????????? ?????????Oracle Enterprise Pack for Eclipse(OEPE)?????????????????Eclipse(OEPE)???????·?????OOW?????????????????·???????????Properties?????????????????·???·????????????????????????????Project Facets????????????JavaServer Faces?????????????Apply?????????OK???????????? ???JSF????????????????????????????ManagedBean???IndexBean?????????????OOW??????????????????·???????????????NEW?-?Class??????New Java Class??????????????????????Package????managed???Name????IndexBean???????Finish???????????? ?????IndexBean??????·????????????????????????????????????????????IndexBean(IndexBean.java)?package managed;import java.util.ArrayList;import java.util.List;import javax.ejb.EJB;import javax.faces.bean.ManagedBean;import ejb.EmpLogic;import model.Employee;@ManagedBeanpublic class IndexBean {  @EJB  private EmpLogic empLogic;  private String keyword;  private List<Employee> results = new ArrayList<Employee>();  public String getKeyword() {    return keyword;  }  public void setKeyword(String keyword) {    this.keyword = keyword;  }  public List getResults() {    return results;  }  public void actionSearch() {    results.clear();    results.addAll(empLogic.getEmp(keyword));  }} ????????????????keyword?results??????????????????????????????Session Bean???EmpLogic?????????????????@EJB?????????????????????????????????????????????????????????????????????actionSearch??????????????EmpLogic?????????·????????????????????result???????? ???ManagedBean?????????????????????????????????????????·??????OOW??????????????WebContent???????index.xhtml????? ???????????index.xhtml????????????????????????????????????????????????(Index.xhtml)?<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"   "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html xmlns="http://www.w3.org/1999/xhtml"  xmlns:ui="http://java.sun.com/jsf/facelets"  xmlns:h="http://java.sun.com/jsf/html"  xmlns:f="http://java.sun.com/jsf/core"><h:head>  <title>Employee??????</title></h:head><h:body>  <h:form>    <h:inputText value="#{indexBean.keyword}" />    <h:commandButton action="#{indexBean.actionSearch}" value="??" 
/>    <h:dataTable value="#{indexBean.results}" var="emp" border="1">      <h:column>        <f:facet name="header">          <h:outputText value="employeeId" />        </f:facet>        <h:outputText value="#{emp.employeeId}" />      </h:column>      <h:column>        <f:facet name="header">          <h:outputText value="firstName" />        </f:facet>        <h:outputText value="#{emp.firstName}" />      </h:column>      <h:column>        <f:facet name="header">          <h:outputText value="lastName" />        </f:facet>        <h:outputText value="#{emp.lastName}" />      </h:column>      <h:column>        <f:facet name="header">          <h:outputText value="salary" />        </f:facet>        <h:outputText value="#{emp.salary}" />      </h:column>    </h:dataTable>  </h:form></h:body></html> index.xhtml???????????????????ManagedBean???IndexBean??????????????????????????????IndexBean?????actionSearch??????????h:commandButton???????????????????????????????????????? ???Web???????????????(web.xml)??????web.xml???????·?????OOW???????????WebContent?-?WEB-INF?????? ?????????????web-app??????????????welcome-file-list(????)?????????????Web???????????????(web.xml)?<?xml version="1.0" encoding="UTF-8"?><web-app xmlns:javaee="http://java.sun.com/xml/ns/javaee" xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" version="3.0">  <javaee:display-name>OOW</javaee:display-name>  <servlet>    <servlet-name>Faces Servlet</servlet-name>    <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>    <load-on-startup>1</load-on-startup>  </servlet>  <servlet-mapping>    <servlet-name>Faces Servlet</servlet-name>    <url-pattern>/faces/*</url-pattern>  </servlet-mapping>  <welcome-file-list>    <welcome-file>/faces/index.xhtml</welcome-file>  </welcome-file-list></web-app> ???JSF????????????????????????????? ??????Java EE 6?JPA 2.0?EJB 3.1?JSF 2.0????????????????????????????????????????????????????????????????·?????OOW???????????·???????????????Run As?-?Run on Server??????????????????????????????????????????????????????????Oracle WebLogic Server 12c(12.1.1)??????Next??????????????? ?????????????????????Domain Directory??????Browse????????????????????????C:\Oracle\Middleware\user_projects\domains\base_domain??????Finish???????????? ?????WebLogic Server?????????????????????????????????????????????????????????????????????OEPE??Servers???????Oracle WebLogic Server 12c???????????·???????????????Properties??????????????????????????????WebLogic?-?Publishing????????????Publish as an exploded archive??????????????????OK???????????? ???????????????????????????????????????????·?????OOW???????????·???????????????Run As?-?Run on Server??????????????????Finish???????????? ???????????????????????????????????????????????·??????????????????????????????????????????firstName?????????????????JAX-RS???RESTful?Web??????? ?????????JAX-RS????RESTful?Web??????????????? Java EE??????????Java EE 5???SOAP????Web??????????JAX-WS??????????Java EE 6????????JAX-RS?????????????RESTful?Web????????????·????????????????????????JAX-RS????????Session Bean??????·?????????Web???????????????????????????????????????????????JAX-RS?????????? 
?????????????????????????????JAX-RS???RESTful Web??????????????????????????·?????OOW???????????·???????????????Properties???????????????????????????Project Facets?????????????JAX-RS(Rest Web Services)???????????Further configuration required?????????????Modify Faceted Project???????????????JAX-RS??????·?????????????????JAX-RS Implementation Library??????Manage libraries????(???????????)?????????????? ??????Preference(Filtered)???????????????New????????????????New User Library????????????????User library name????JAX-RS???????OK???????????????????Preference(Filtered)?????????????Add JARs????????????????????????C:\Oracle\Middleware\modules \com.sun.jersey.core_1.1.0.0_1-9.jar??????OK???????????? ???Modify Faceted Project??????????JAX-RS Implementation Library????JAX-RS????????????????????JAX-RS servlet class name????com.sun.jersey.spi.container.servlet.ServletContainer???????OK?????????????Project Facets???????????????????OK?????????????????? ???RESTful Web??????????????????????????????????(???????EmpLogic?????????????)??RESTful Web?????????????EmpLogic(EmpLogic.java)?package ejb; import java.util.List; import javax.ejb.LocalBean; import javax.ejb.Stateless; import javax.persistence.EntityManager; import javax.persistence.PersistenceContext; import javax.ws.rs.GET;import javax.ws.rs.Path;import javax.ws.rs.PathParam;import javax.ws.rs.Produces;import model.Employee; @Stateless @LocalBean @Path("/emprest")public class EmpLogic {     @PersistenceContext(unitName = "OOW")     private EntityManager em;     public EmpLogic() {     }  @GET  @Path("/getname/{empno}")  // ?  @Produces("text/plain")  // ?  public String getEmpName(@PathParam("empno") long empno) {    Employee e = em.find(Employee.class, empno);    if (e == null) {      return "no data.";    } else {      return e.getFirstName();    }  }} ?????????????????????@Path("/emprest ")????????????RESTful Web????????????HTTP??????????????JAX-RS????????????????????????RESTful Web?????Web??????????????????@Produces???????(?)??????????????????????????text/plain????????????????????????????application/xml?????????XML???????????application/json?????JSON?????????????????? ???????????????Web???????????????????????????????????????·?????OOW???????????·???????????????Run As?-?Run on Server??????????????????Finish???????????????????Web??????http://localhost:7001/OOW/jaxrs/emprest/getname/186????????????????URL?????????(186)?employeeId?????????????firstName????????????????*    *    * ????????3??????WebLogic Server 12c?OEPE????Java EE 6?????????????????Java EE 6????????????????·????????????????????????????Java EE?????????????????????????????????????????????????????????????????????????????????

    Read the article

  • Activation Error while testing Exception Handling Application Block

    - by CletusLoomis
    I'm getting the following error while testing my EHAB implementation: {"Activation error occured while trying to get instance of type ExceptionPolicyImpl, key "LogPolicy""} System.Exception Stack Trace: StackTrace " at Microsoft.Practices.ServiceLocation.ServiceLocatorImplBase.GetInstance(Type serviceType, String key) in c:\Home\Chris\Projects\CommonServiceLocator\main\Microsoft.Practices.ServiceLocation\ServiceLocatorImplBase.cs:line 53 at Microsoft.Practices.ServiceLocation.ServiceLocatorImplBase.GetInstance[TService](String key) in c:\Home\Chris\Projects\CommonServiceLocator\main\Microsoft.Practices.ServiceLocation\ServiceLocatorImplBase.cs:line 103 at Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.GetExceptionPolicy(Exception exception, String policyName) in e:\Builds\EntLib\Latest\Source\Blocks\ExceptionHandling\Src\ExceptionHandling\ExceptionPolicy.cs:line 131 at Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.HandleException(Exception exceptionToHandle, String policyName) in e:\Builds\EntLib\Latest\Source\Blocks\ExceptionHandling\Src\ExceptionHandling\ExceptionPolicy.cs:line 55 at Blackbox.Exception.ExceptionMain.LogException(Exception pException) in C:_Work_Black Box\Blackbox.Exception\ExceptionMain.vb:line 14 at BlackBox.Business.BusinessMain.TestExceptionHandling() in C:_Work_Black Box\BlackBox.Business\BusinessMain.vb:line 16 at Blackbox.Service.Service1.TestExceptionHandling() in C:_Work_Black Box\Blackbox.Service\Service.svc.vb:line 43" String Inner Exception: InnerException {"Resolution of the dependency failed, type = "Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicyImpl", name = "LogPolicy". Exception occurred while: Calling constructor Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.FormattedEventLogTraceListener(System.String source, System.String log, System.String machineName, Microsoft.Practices.EnterpriseLibrary.Logging.Formatters.ILogFormatter formatter). 
Exception is: ArgumentException - Event log names must consist of printable characters and cannot contain \, *, ?, or spaces At the time of the exception, the container was: Resolving Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicyImpl,LogPolicy Resolving parameter "policyEntries" of constructor Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicyImpl(System.String policyName, System.Collections.Generic.IEnumerable1[[Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicyEntry, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]] policyEntries) Resolving Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicyEntry,LogPolicy.All Exceptions Resolving parameter "handlers" of constructor Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicyEntry(System.Type exceptionType, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.PostHandlingAction postHandlingAction, System.Collections.Generic.IEnumerable1[[Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.IExceptionHandler, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]] handlers, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Instrumentation.IExceptionHandlingInstrumentationProvider instrumentationProvider) Resolving Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging.LoggingExceptionHandler,LogPolicy.All Exceptions.Logging Exception Handler (mapped from Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.IExceptionHandler, LogPolicy.All Exceptions.Logging Exception Handler) Resolving parameter "writer" of constructor Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging.LoggingExceptionHandler(System.String logCategory, System.Int32 eventId, System.Diagnostics.TraceEventType severity, System.String title, System.Int32 priority, System.Type formatterType, Microsoft.Practices.EnterpriseLibrary.Logging.LogWriter writer) Resolving Microsoft.Practices.EnterpriseLibrary.Logging.LogWriterImpl,LogWriter.default (mapped from Microsoft.Practices.EnterpriseLibrary.Logging.LogWriter, (none)) Resolving parameter "structureHolder" of constructor Microsoft.Practices.EnterpriseLibrary.Logging.LogWriterImpl(Microsoft.Practices.EnterpriseLibrary.Logging.LogWriterStructureHolder structureHolder, Microsoft.Practices.EnterpriseLibrary.Logging.Instrumentation.ILoggingInstrumentationProvider instrumentationProvider, Microsoft.Practices.EnterpriseLibrary.Logging.ILoggingUpdateCoordinator updateCoordinator) Resolving Microsoft.Practices.EnterpriseLibrary.Logging.LogWriterStructureHolder,LogWriterStructureHolder.default (mapped from Microsoft.Practices.EnterpriseLibrary.Logging.LogWriterStructureHolder, (none)) Resolving parameter "traceSources" of constructor Microsoft.Practices.EnterpriseLibrary.Logging.LogWriterStructureHolder(System.Collections.Generic.IEnumerable1[[Microsoft.Practices.EnterpriseLibrary.Logging.Filters.ILogFilter, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]] filters, System.Collections.Generic.IEnumerable1[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]] traceSourceNames, System.Collections.Generic.IEnumerable1[[Microsoft.Practices.EnterpriseLibrary.Logging.LogSource, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, 
PublicKeyToken=31bf3856ad364e35]] traceSources, Microsoft.Practices.EnterpriseLibrary.Logging.LogSource allEventsTraceSource, Microsoft.Practices.EnterpriseLibrary.Logging.LogSource notProcessedTraceSource, Microsoft.Practices.EnterpriseLibrary.Logging.LogSource errorsTraceSource, System.String defaultCategory, System.Boolean tracingEnabled, System.Boolean logWarningsWhenNoCategoriesMatch, System.Boolean revertImpersonation) Resolving Microsoft.Practices.EnterpriseLibrary.Logging.LogSource,General Resolving parameter "traceListeners" of constructor Microsoft.Practices.EnterpriseLibrary.Logging.LogSource(System.String name, System.Collections.Generic.IEnumerable1[[System.Diagnostics.TraceListener, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]] traceListeners, System.Diagnostics.SourceLevels level, System.Boolean autoFlush, Microsoft.Practices.EnterpriseLibrary.Logging.Instrumentation.ILoggingInstrumentationProvider instrumentationProvider) Resolving Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.ReconfigurableTraceListenerWrapper,Event Log Listener (mapped from System.Diagnostics.TraceListener, Event Log Listener) Resolving parameter "wrappedTraceListener" of constructor Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.ReconfigurableTraceListenerWrapper(System.Diagnostics.TraceListener wrappedTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging.ILoggingUpdateCoordinator coordinator) Resolving Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.FormattedEventLogTraceListener,Event Log Listener?implementation (mapped from System.Diagnostics.TraceListener, Event Log Listener?implementation) Calling constructor Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.FormattedEventLogTraceListener(System.String source, System.String log, System.String machineName, Microsoft.Practices.EnterpriseLibrary.Logging.Formatters.ILogFormatter formatter) "} System.Exception My web.config is as follows: <?xml version="1.0"?> <configuration> <configSections> <section name="loggingConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.LoggingSettings, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="true" /> <section name="exceptionHandling" type="Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Configuration.ExceptionHandlingSettings, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="true" /> </configSections> <loggingConfiguration name="" tracingEnabled="true" defaultCategory="General"> <listeners> <add name="Event Log Listener" type="Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.FormattedEventLogTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.FormattedEventLogTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" source="Enterprise Library Logging" formatter="Text Formatter" log="C:\Blackbox.log" machineName="." 
traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, Callstack" /> </listeners> <formatters> <add type="Microsoft.Practices.EnterpriseLibrary.Logging.Formatters.TextFormatter, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" template="Timestamp: {timestamp}{newline}&#xA;Message: {message}{newline}&#xA;Category: {category}{newline}&#xA;Priority: {priority}{newline}&#xA;EventId: {eventid}{newline}&#xA;Severity: {severity}{newline}&#xA;Title:{title}{newline}&#xA;Machine: {localMachine}{newline}&#xA;App Domain: {localAppDomain}{newline}&#xA;ProcessId: {localProcessId}{newline}&#xA;Process Name: {localProcessName}{newline}&#xA;Thread Name: {threadName}{newline}&#xA;Win32 ThreadId:{win32ThreadId}{newline}&#xA;Extended Properties: {dictionary({key} - {value}{newline})}" name="Text Formatter" /> </formatters> <categorySources> <add switchValue="All" name="General"> <listeners> <add name="Event Log Listener" /> </listeners> </add> </categorySources> <specialSources> <allEvents switchValue="All" name="All Events" /> <notProcessed switchValue="All" name="Unprocessed Category" /> <errors switchValue="All" name="Logging Errors &amp; Warnings"> <listeners> <add name="Event Log Listener" /> </listeners> </errors> </specialSources> </loggingConfiguration> <exceptionHandling> <exceptionPolicies> <add name="LogPolicy"> <exceptionTypes> <add name="All Exceptions" type="System.Exception, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" postHandlingAction="NotifyRethrow"> <exceptionHandlers> <add name="Logging Exception Handler" type="Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging.LoggingExceptionHandler, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" logCategory="General" eventId="100" severity="Error" title="Enterprise Library Exception Handling" formatterType="Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.TextExceptionFormatter, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling" priority="0" /> </exceptionHandlers> </add> </exceptionTypes> </add> <add name="WcfExceptionShielding"> <exceptionTypes> <add name="InvalidOperationException" type="System.InvalidOperationException, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" postHandlingAction="ThrowNewException"> <exceptionHandlers> <add type="Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.WCF.FaultContractExceptionHandler, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.WCF, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" exceptionMessageResourceType="" exceptionMessageResourceName="This is the message" exceptionMessage="This is the exception" faultContractType="Blackbox.Service.WCFFault, Blackbox.Service, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" name="Fault Contract Exception Handler"> <mappings> <add source="{Guid}" name="Id" /> <add source="{Message}" name="MessageText" /> </mappings> </add> </exceptionHandlers> </add> </exceptionTypes> </add> </exceptionPolicies> </exceptionHandling> <connectionStrings> <add name="CompassEntities" connectionString="metadata=~\bin\CompassModel.csdl|~\bin\CompassModel.ssdl|~\bin\CompassModel.msl;provider=Devart.Data.Oracle;provider connection string=&quot;User Id=foo;Password=foo;Server=foo64mo;Home=OraClient11g_home1;Persist Security Info=True&quot;" providerName="System.Data.EntityClient" /> <add name="BlackboxEntities" 
connectionString="metadata=~\bin\BlackboxModel.csdl|~\bin\BlackboxModel.ssdl|~\bin\BlackboxModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=sqldev1\cps;Initial Catalog=FundServ;Integrated Security=True;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient" /> </connectionStrings> <system.web> <compilation debug="true" strict="false" explicit="true" targetFramework="4.0" /> </system.web> <system.serviceModel> <behaviors> <serviceBehaviors> <behavior> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="true"/> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="false"/> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> </system.serviceModel> <system.webServer> <modules runAllManagedModulesForAllRequests="true"/> </system.webServer> </configuration> My code is as follows: Public Shared Function LogException(ByVal pException As System.Exception) As Boolean Return ExceptionPolicy.HandleException(pException, "LogPolicy") End Function Any assistance is appreciated.

    Read the article

  • Keyboard navigation for jQuery Tabs

    - by Binyamin
    How to make Keyboard navigation left/up/right/down (like for photo gallery) feature for jQury Tabs with History? Demo without Keyboard feature in http://dl.dropbox.com/u/6594481/tabs/index.html Needed functions: 1. on keyboardtop/down make select and CSS showactivenested ajax tabs from 1-st to last level 2. on keyboardleft/right changeback/forwardcontent ofactivenested ajax tabs tab 3. an extra option, makeactivenested ajax tab on 'cursor-on' on concrete nested ajax tabs level Read more detailed question with example pictures in http://stackoverflow.com/questions/2975003/jquery-tools-to-make-keyboard-and-cookies-feature-for-ajaxed-tabs-with-history /** * @license * jQuery Tools @VERSION Tabs- The basics of UI design. * * NO COPYRIGHTS OR LICENSES. DO WHAT YOU LIKE. * * http://flowplayer.org/tools/tabs/ * * Since: November 2008 * Date: @DATE */ (function($) { // static constructs $.tools = $.tools || {version: '@VERSION'}; $.tools.tabs = { conf: { tabs: 'a', current: 'current', onBeforeClick: null, onClick: null, effect: 'default', initialIndex: 0, event: 'click', rotate: false, // 1.2 history: false }, addEffect: function(name, fn) { effects[name] = fn; } }; var effects = { // simple "toggle" effect 'default': function(i, done) { this.getPanes().hide().eq(i).show(); done.call(); }, /* configuration: - fadeOutSpeed (positive value does "crossfading") - fadeInSpeed */ fade: function(i, done) { var conf = this.getConf(), speed = conf.fadeOutSpeed, panes = this.getPanes(); if (speed) { panes.fadeOut(speed); } else { panes.hide(); } panes.eq(i).fadeIn(conf.fadeInSpeed, done); }, // for basic accordions slide: function(i, done) { this.getPanes().slideUp(200); this.getPanes().eq(i).slideDown(400, done); }, /** * AJAX effect */ ajax: function(i, done) { this.getPanes().eq(0).load(this.getTabs().eq(i).attr("href"), done); } }; var w; /** * Horizontal accordion * * @deprecated will be replaced with a more robust implementation */ $.tools.tabs.addEffect("horizontal", function(i, done) { // store original width of a pane into memory if (!w) { w = this.getPanes().eq(0).width(); } // set current pane's width to zero this.getCurrentPane().animate({width: 0}, function() { $(this).hide(); }); // grow opened pane to it's original width this.getPanes().eq(i).animate({width: w}, function() { $(this).show(); done.call(); }); }); function Tabs(root, paneSelector, conf) { var self = this, trigger = root.add(this), tabs = root.find(conf.tabs), panes = paneSelector.jquery ? 
paneSelector : root.children(paneSelector), current; // make sure tabs and panes are found if (!tabs.length) { tabs = root.children(); } if (!panes.length) { panes = root.parent().find(paneSelector); } if (!panes.length) { panes = $(paneSelector); } // public methods $.extend(this, { click: function(i, e) { var tab = tabs.eq(i); if (typeof i == 'string' && i.replace("#", "")) { tab = tabs.filter("[href*=" + i.replace("#", "") + "]"); i = Math.max(tabs.index(tab), 0); } if (conf.rotate) { var last = tabs.length -1; if (i < 0) { return self.click(last, e); } if (i > last) { return self.click(0, e); } } if (!tab.length) { if (current >= 0) { return self; } i = conf.initialIndex; tab = tabs.eq(i); } // current tab is being clicked if (i === current) { return self; } // possibility to cancel click action e = e || $.Event(); e.type = "onBeforeClick"; trigger.trigger(e, [i]); if (e.isDefaultPrevented()) { return; } // call the effect effects[conf.effect].call(self, i, function() { // onClick callback e.type = "onClick"; trigger.trigger(e, [i]); }); // default behaviour current = i; tabs.removeClass(conf.current); tab.addClass(conf.current); return self; }, getConf: function() { return conf; }, getTabs: function() { return tabs; }, getPanes: function() { return panes; }, getCurrentPane: function() { return panes.eq(current); }, getCurrentTab: function() { return tabs.eq(current); }, getIndex: function() { return current; }, next: function() { return self.click(current + 1); }, prev: function() { return self.click(current - 1); } }); // callbacks $.each("onBeforeClick,onClick".split(","), function(i, name) { // configuration if ($.isFunction(conf[name])) { $(self).bind(name, conf[name]); } // API self[name] = function(fn) { $(self).bind(name, fn); return self; }; }); if (conf.history && $.fn.history) { $.tools.history.init(tabs); conf.event = 'history'; } // setup click actions for each tab tabs.each(function(i) { $(this).bind(conf.event, function(e) { self.click(i, e); return e.preventDefault(); }); }); // cross tab anchor link panes.find("a[href^=#]").click(function(e) { self.click($(this).attr("href"), e); }); // open initial tab if (location.hash) { self.click(location.hash); } else { if (conf.initialIndex === 0 || conf.initialIndex > 0) { self.click(conf.initialIndex); } } } // jQuery plugin implementation $.fn.tabs = function(paneSelector, conf) { // return existing instance var el = this.data("tabs"); if (el) { return el; } if ($.isFunction(conf)) { conf = {onBeforeClick: conf}; } // setup conf conf = $.extend({}, $.tools.tabs.conf, conf); this.each(function() { el = new Tabs($(this), paneSelector, conf); $(this).data("tabs", el); }); return conf.api ? el: this; }; }) (jQuery); /** * @license * jQuery Tools @VERSION History "Back button for AJAX apps" * * NO COPYRIGHTS OR LICENSES. DO WHAT YOU LIKE. 
* * http://flowplayer.org/tools/toolbox/history.html * * Since: Mar 2010 * Date: @DATE */ (function($) { var hash, iframe, links, inited; $.tools = $.tools || {version: '@VERSION'}; $.tools.history = { init: function(els) { if (inited) { return; } // IE if ($.browser.msie && $.browser.version < '8') { // create iframe that is constantly checked for hash changes if (!iframe) { iframe = $("<iframe/>").attr("src", "javascript:false;").hide().get(0); $("body").append(iframe); setInterval(function() { var idoc = iframe.contentWindow.document, h = idoc.location.hash; if (hash !== h) { $.event.trigger("hash", h); } }, 100); setIframeLocation(location.hash || '#'); } // other browsers scans for location.hash changes directly without iframe hack } else { setInterval(function() { var h = location.hash; if (h !== hash) { $.event.trigger("hash", h); } }, 100); } links = !links ? els : links.add(els); els.click(function(e) { var href = $(this).attr("href"); if (iframe) { setIframeLocation(href); } // handle non-anchor links if (href.slice(0, 1) != "#") { location.href = "#" + href; return e.preventDefault(); } }); inited = true; } }; function setIframeLocation(h) { if (h) { var doc = iframe.contentWindow.document; doc.open().close(); doc.location.hash = h; } } // global histroy change listener $(window).bind("hash", function(e, h) { if (h) { links.filter(function() { var href = $(this).attr("href"); return href == h || href == h.replace("#", ""); }).trigger("history", [h]); } else { links.eq(0).trigger("history", [h]); } hash = h; window.location.hash = hash; }); // jQuery plugin implementation $.fn.history = function(fn) { $.tools.history.init(this); // return jQuery return this.bind("history", fn); }; })(jQuery); $(function() { $("#list").tabs("#content > div", {effect: 'ajax', history: true}); });
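    For the keyboard part of the question, one approach that fits the code above is a document-level keydown handler that drives the plugin through the API object it stores with $(this).data("tabs", el). A minimal sketch, assuming the #list initialisation shown at the end (nested levels, cookies and the "cursor-on" extra are not covered):

        $(document).keydown(function (e) {
            var api = $("#list").data("tabs");   // tabs instance stored by $.fn.tabs above
            if (!api) { return; }
            switch (e.keyCode) {
                case 37:                         // left arrow
                case 38:                         // up arrow
                    api.prev();                  // switch to the previous tab
                    return false;                // keep the page from scrolling
                case 39:                         // right arrow
                case 40:                         // down arrow
                    api.next();                  // switch to the next tab
                    return false;
            }
        });

    One caveat: api.prev() and api.next() switch panes directly, so with history: true the location.hash is not updated by keyboard moves. If back/forward support matters for those moves too, trigger a click on the adjacent tab anchor (via api.getTabs() and api.getIndex()) instead, so the history plugin sees the change.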

    Read the article

  • Can't build pyxpcom on OS X 10.6

    - by Gj
    I've been following these instructions at https://developer.mozilla.org/en/Building_PyXPCOM but getting this: $ make make export make[2]: Nothing to be done for `export'. make[4]: Nothing to be done for `export'. make[4]: Nothing to be done for `export'. /opt/local/bin/python2.5 ../../../src/config/nsinstall.py -L /usr/local/pyxpcom/build/xpcom/src -m 644 ../../../src/xpcom/src/PyXPCOM.h ../../dist/include make[3]: Nothing to be done for `export'. /opt/local/bin/python2.5 ../../../../src/config/nsinstall.py -D ../../../dist/idl /opt/local/bin/python2.5 ../../../../src/config/nsinstall.py -D ../../../dist/idl make[4]: *** No rule to make target `_xpidlgen/py_test_component.h', needed by `export'. Stop. make[3]: *** [export] Error 2 make[2]: *** [export] Error 2 make[1]: *** [export] Error 2 make: *** [default] Error 2 Any ideas? An interesting anomaly is that despite me setting the PYTHON env variable to Python 2.6, the configure and make both seem to go after the 2.5... Thanks for any advice! PS here's the configure output: $ ../src/configure --with-libxul-sdk=/Users/me/xulrunner-sdk/ loading cache ./config.cache checking host system type... i386-apple-darwin10.3.0 checking target system type... i386-apple-darwin10.3.0 checking build system type... i386-apple-darwin10.3.0 checking for mawk... (cached) gawk checking for perl5... (cached) /opt/local/bin/perl5 checking for gcc... (cached) gcc checking whether the C compiler (gcc ) works... yes checking whether the C compiler (gcc ) is a cross-compiler... no checking whether we are using GNU C... (cached) yes checking whether gcc accepts -g... (cached) yes checking for c++... (cached) c++ checking whether the C++ compiler (c++ ) works... yes checking whether the C++ compiler (c++ ) is a cross-compiler... no checking whether we are using GNU C++... (cached) yes checking whether c++ accepts -g... (cached) yes checking for ranlib... (cached) ranlib checking for as... (cached) /usr/bin/as checking for ar... (cached) ar checking for ld... (cached) ld checking for strip... (cached) strip checking for windres... no checking whether gcc and cc understand -c and -o together... (cached) yes checking how to run the C preprocessor... (cached) gcc -E checking how to run the C++ preprocessor... (cached) c++ -E checking for a BSD compatible install... (cached) /usr/bin/install -c checking whether ln -s works... (cached) yes checking for minimum required perl version >= 5.006... 5.008009 checking for full perl installation... yes checking for /opt/local/bin/python... (cached) /opt/local/bin/python2.5 checking for doxygen... (cached) : checking for whoami... (cached) /usr/bin/whoami checking for autoconf... (cached) /opt/local/bin/autoconf checking for unzip... (cached) /usr/bin/unzip checking for zip... (cached) /usr/bin/zip checking for makedepend... (cached) /opt/local/bin/makedepend checking for xargs... (cached) /usr/bin/xargs checking for pbbuild... (cached) /usr/bin/xcodebuild checking for sdp... (cached) /usr/bin/sdp checking for gmake... (cached) /opt/local/bin/gmake checking for X... (cached) no checking whether the compiler supports -Wno-invalid-offsetof... yes checking whether ld has archive extraction flags... (cached) no checking that static assertion macros used in autoconf tests work... (cached) yes checking for 64-bit OS... yes checking for minimum required Python version >= 2.4... yes checking for -dead_strip option to ld... yes checking for ANSI C header files... (cached) yes checking for working const... 
(cached) yes checking for mode_t... (cached) yes checking for off_t... (cached) yes checking for pid_t... (cached) yes checking for size_t... (cached) yes checking for st_blksize in struct stat... (cached) yes checking for siginfo_t... (cached) yes checking for int16_t... (cached) yes checking for int32_t... (cached) yes checking for int64_t... (cached) yes checking for int64... (cached) no checking for uint... (cached) yes checking for uint_t... (cached) no checking for uint16_t... (cached) no checking for uname.domainname... (cached) no checking for uname.__domainname... (cached) no checking for usable char16_t (2 bytes, unsigned)... (cached) no checking for usable wchar_t (2 bytes, unsigned)... (cached) no checking for compiler -fshort-wchar option... (cached) yes checking for visibility(hidden) attribute... (cached) yes checking for visibility(default) attribute... (cached) yes checking for visibility pragma support... (cached) yes checking For gcc visibility bug with class-level attributes (GCC bug 26905)... (cached) yes checking For x86_64 gcc visibility bug with builtins (GCC bug 20297)... (cached) no checking for dirent.h that defines DIR... (cached) yes checking for opendir in -ldir... (cached) no checking for sys/byteorder.h... (cached) no checking for compat.h... (cached) no checking for getopt.h... (cached) yes checking for sys/bitypes.h... (cached) no checking for memory.h... (cached) yes checking for unistd.h... (cached) yes checking for gnu/libc-version.h... (cached) no checking for nl_types.h... (cached) yes checking for malloc.h... (cached) no checking for X11/XKBlib.h... (cached) yes checking for io.h... (cached) no checking for sys/statvfs.h... (cached) yes checking for sys/statfs.h... (cached) no checking for sys/vfs.h... (cached) no checking for sys/mount.h... (cached) yes checking for sys/quota.h... (cached) yes checking for mmintrin.h... (cached) yes checking for new... (cached) yes checking for sys/cdefs.h... (cached) yes checking for gethostbyname_r in -lc_r... (cached) no checking for dladdr... (cached) yes checking for socket in -lsocket... (cached) no checking whether mmap() sees write()s... yes checking whether gcc needs -traditional... (cached) no checking for 8-bit clean memcmp... (cached) yes checking for random... (cached) yes checking for strerror... (cached) yes checking for lchown... (cached) yes checking for fchmod... (cached) yes checking for snprintf... (cached) yes checking for statvfs... (cached) yes checking for memmove... (cached) yes checking for rint... (cached) yes checking for stat64... (cached) yes checking for lstat64... (cached) yes checking for truncate64... (cached) no checking for statvfs64... (cached) no checking for setbuf... (cached) yes checking for isatty... (cached) yes checking for flockfile... (cached) yes checking for getpagesize... (cached) yes checking for localtime_r... (cached) yes checking for strtok_r... (cached) yes checking for wcrtomb... (cached) yes checking for mbrtowc... (cached) yes checking for res_ninit()... (cached) no checking for gnu_get_libc_version()... (cached) no ../src/configure: line 9881: AM_LANGINFO_CODESET: command not found checking for an implementation of va_copy()... (cached) yes checking for an implementation of __va_copy()... (cached) yes checking whether va_lists can be copied by value... (cached) no checking for C++ exceptions flag... (cached) -fno-exceptions checking for gcc 3.0 ABI... (cached) yes checking for C++ "explicit" keyword... (cached) yes checking for C++ "typename" keyword... 
(cached) yes checking for modern C++ template specialization syntax support... (cached) yes checking whether partial template specialization works... (cached) yes checking whether operators must be re-defined for templates derived from templates... (cached) no checking whether we need to cast a derived template to pass as its base class... (cached) no checking whether the compiler can resolve const ambiguities for templates... (cached) yes checking whether the C++ "using" keyword can change access... (cached) yes checking whether the C++ "using" keyword resolves ambiguity... (cached) yes checking for "std::" namespace... (cached) yes checking whether standard template operator!=() is ambiguous... (cached) unambiguous checking for C++ reinterpret_cast... (cached) yes checking for C++ dynamic_cast to void*... (cached) yes checking whether C++ requires implementation of unused virtual methods... (cached) yes checking for trouble comparing to zero near std::operator!=()... (cached) no checking for LC_MESSAGES... (cached) yes checking for tar archiver... checking for gnutar... (cached) gnutar gnutar checking for wget... checking for wget... (cached) wget wget checking for valid optimization flags... yes checking for gcc -pipe support... yes checking whether compiler supports -Wno-long-long... yes checking whether C compiler supports -fprofile-generate... yes checking for correct temporary object destruction order... yes checking for correct overload resolution with const and templates... no Building Python extensions using python-2.5 from /opt/local/Library/Frameworks/Python.framework/Versions/2.5 creating ./config.status creating config/autoconf.mk creating Makefile creating xpcom/Makefile creating xpcom/src/Makefile creating xpcom/src/loader/Makefile creating xpcom/src/module/Makefile creating xpcom/components/Makefile creating xpcom/test/Makefile creating xpcom/test/test_component/Makefile creating dom/Makefile creating dom/src/Makefile creating dom/test/Makefile creating dom/test/pyxultest/Makefile creating dom/nsdom/Makefile creating dom/nsdom/test/Makefile
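    One clue in the output above: configure starts with "loading cache ./config.cache" and reports the interpreter check as "(cached) /opt/local/bin/python2.5", so the PYTHON setting is probably being ignored in favour of answers cached by an earlier run. A sketch of what seems worth trying (paths are illustrative, and this is not verified on 10.6):

        # throw away the stale autoconf cache from the previous run
        rm -f config.cache
        # re-run configure with the interpreter you actually want
        PYTHON=/usr/bin/python2.6 ../src/configure --with-libxul-sdk=/Users/me/xulrunner-sdk/
        make

    If the build still stops at _xpidlgen/py_test_component.h after that, the Python mismatch was not the whole story and the xpidl step itself needs a closer look.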

    Read the article

  • How to display data stored in Core Data in a table view?

    - by Dipanjan Dutta
    Hello All, I have developed a core data model for my application. I need to display the saved data into a table view. For my app I have selected split view controller. I am writing down my codes below. Please help me in this regard and write me the code that needs to be added. This is very important as my continuation in my company depends on this. #import "RootViewController.h" #import "DetailViewController.h" #import "AddViewController.h" #import "EmployeeDetailsAppDelegate.h" /* This template does not ensure user interface consistency during editing operations in the table view. You must implement appropriate methods to provide the user experience you require. */ @interface RootViewController () - (void)configureCell:(UITableViewCell *)cell atIndexPath:(NSIndexPath *)indexPath; @end @implementation RootViewController @synthesize detailViewController, fetchedResultsController, managedObjectContext, results, empName; #pragma mark - #pragma mark View lifecycle - (void)viewDidLoad { results = [[NSMutableDictionary alloc]init]; [results setObject:empName.text forKey:@"EmployeeName"]; [self.tableView reloadData]; [super viewDidLoad]; } /* - (void)viewWillAppear:(BOOL)animated { [super viewWillAppear:animated]; } */ /* - (void)viewDidAppear:(BOOL)animated { [super viewDidAppear:animated]; } */ /* - (void)viewWillDisappear:(BOOL)animated { [super viewWillDisappear:animated]; } */ /* - (void)viewDidDisappear:(BOOL)animated { [super viewDidDisappear:animated]; } */ - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { // Ensure that the view controller supports rotation and that the split view can therefore show in both portrait and landscape. return YES; } - (void)configureCell:(UITableViewCell *)cell atIndexPath:(NSIndexPath *)indexPath { NSManagedObject *managedObject = [self.fetchedResultsController objectAtIndexPath:indexPath]; cell.textLabel.text = [[managedObject valueForKey:@"EmployeeName"] description]; } #pragma mark - #pragma mark Add a new object - (void)insertNewObject:(id)sender { AddViewController *add = [[AddViewController alloc]initWithNibName:@"AddViewController" bundle:nil]; self.modalPresentationStyle = UIModalPresentationFormSheet; add.wantsFullScreenLayout = NO; [self presentModalViewController:add animated:YES]; [add release]; } #pragma mark - #pragma mark Table view data source - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { return 1; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { return 1; } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; } // Configure the cell. NSManagedObject *managedObject = [self.fetchedResultsController objectAtIndexPath:indexPath]; cell.textLabel.text = [[managedObject valueForKey:@"EmployeeName"] description]; return cell; } - (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath { if (editingStyle == UITableViewCellEditingStyleDelete) { // Delete the managed object. 
NSManagedObject *objectToDelete = [self.fetchedResultsController objectAtIndexPath:indexPath]; if (self.detailViewController.detailItem == objectToDelete) { self.detailViewController.detailItem = nil; } NSManagedObjectContext *context = [self.fetchedResultsController managedObjectContext]; [context deleteObject:objectToDelete]; NSError *error; if (![context save:&error]) { /* Replace this implementation with code to handle the error appropriately. abort() causes the application to generate a crash log and terminate. You should not use this function in a shipping application, although it may be useful during development. If it is not possible to recover from the error, display an alert panel that instructs the user to quit the application by pressing the Home button. */ NSLog(@"Unresolved error %@, %@", error, [error userInfo]); abort(); } } } - (BOOL)tableView:(UITableView *)tableView canMoveRowAtIndexPath:(NSIndexPath *)indexPath { // The table view should not be re-orderable. return NO; } #pragma mark - #pragma mark Table view delegate - (void)tableView:(UITableView *)aTableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { // Set the detail item in the detail view controller. NSManagedObject *selectedObject = [self.fetchedResultsController objectAtIndexPath:indexPath]; self.detailViewController.detailItem = selectedObject; } #pragma mark - #pragma mark Fetched results controller - (NSFetchedResultsController *)fetchedResultsController { if (fetchedResultsController != nil) { return fetchedResultsController; } /* Set up the fetched results controller. */ // Create the fetch request for the entity. NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; // Edit the entity name as appropriate. NSEntityDescription *entity = [NSEntityDescription entityForName:@"Details" inManagedObjectContext:managedObjectContext]; [fetchRequest setEntity:entity]; // Set the batch size to a suitable number. [fetchRequest setFetchBatchSize:20]; // Edit the sort key as appropriate. NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"EmployeeName" ascending:NO]; NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil]; [fetchRequest setSortDescriptors:sortDescriptors]; // Edit the section name key path and cache name if appropriate. // nil for section name key path means "no sections". 
NSFetchedResultsController *aFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest managedObjectContext:managedObjectContext sectionNameKeyPath:nil cacheName:@"Root"]; aFetchedResultsController.delegate = self; self.fetchedResultsController = aFetchedResultsController; [aFetchedResultsController release]; [fetchRequest release]; [sortDescriptor release]; [sortDescriptors release]; return fetchedResultsController; } #pragma mark - #pragma mark Fetched results controller delegate - (void)controllerWillChangeContent:(NSFetchedResultsController *)controller { [self.tableView beginUpdates]; } - (void)controller:(NSFetchedResultsController *)controller didChangeSection:(id <NSFetchedResultsSectionInfo>)sectionInfo atIndex:(NSUInteger)sectionIndex forChangeType:(NSFetchedResultsChangeType)type { switch(type) { case NSFetchedResultsChangeInsert: [self.tableView insertSections:[NSIndexSet indexSetWithIndex:sectionIndex] withRowAnimation:UITableViewRowAnimationFade]; break; case NSFetchedResultsChangeDelete: [self.tableView deleteSections:[NSIndexSet indexSetWithIndex:sectionIndex] withRowAnimation:UITableViewRowAnimationFade]; break; } } - (void)controller:(NSFetchedResultsController *)controller didChangeObject:(id)anObject atIndexPath:(NSIndexPath *)indexPath forChangeType:(NSFetchedResultsChangeType)type newIndexPath:(NSIndexPath *)newIndexPath { UITableView *tableView = self.tableView; switch(type) { case NSFetchedResultsChangeInsert: [tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:newIndexPath] withRowAnimation:UITableViewRowAnimationFade]; break; case NSFetchedResultsChangeDelete: [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade]; break; case NSFetchedResultsChangeUpdate: [self configureCell:[tableView cellForRowAtIndexPath:indexPath] atIndexPath:indexPath]; break; case NSFetchedResultsChangeMove: [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade]; [tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:newIndexPath]withRowAnimation:UITableViewRowAnimationFade]; break; } } - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller { [self.tableView endUpdates]; } #pragma mark - #pragma mark Memory management - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Relinquish ownership any cached data, images, etc. that aren't in use. } - (void)viewDidUnload { // Relinquish ownership of anything that can be recreated in viewDidLoad or on demand. // For example: self.myOutlet = nil; } - (void)dealloc { [detailViewController release]; [fetchedResultsController release]; [managedObjectContext release]; [super dealloc]; } @end // // AddViewController.m // EmployeeDetails // // Created by Dipanjan on 15/02/11. // Copyright 2011 __MyCompanyName__. All rights reserved. // #import "AddViewController.h" #import "EmployeeDetailsAppDelegate.h" #import "RootViewController.h" @implementation AddViewController @synthesize empName; @synthesize empID; @synthesize empDepartment; @synthesize backButton; // The designated initializer. Override if you create the controller programmatically and want to perform customization that is not appropriate for viewDidLoad. 
/* - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil { self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil]; if (self) { // Custom initialization. } return self; } */ /* // Implement viewDidLoad to do additional setup after loading the view, typically from a nib. - (void)viewDidLoad { [super viewDidLoad]; } */ -(void)saveDetails{ EmployeeDetailsAppDelegate *appDelegate = [[UIApplication sharedApplication]delegate]; NSManagedObjectContext *context = [appDelegate managedObjectContext]; NSManagedObject *newDetails; newDetails = [NSEntityDescription insertNewObjectForEntityForName:@"Details" inManagedObjectContext:context]; [newDetails setValue:empID.text forKey:@"EmployeeID"]; [newDetails setValue:empName.text forKey:@"EmployeeName"]; [newDetails setValue:empDepartment.text forKey:@"EmployeeDepartment"]; empID.text = @""; empName.text = @""; empDepartment.text = @""; NSLog(@"%@........----->>>...", newDetails); NSError *error; [context save:&error]; [self dismissModalViewControllerAnimated:YES]; } -(void)findDetails { EmployeeDetailsAppDelegate *appDelegate = [[UIApplication sharedApplication]delegate]; NSManagedObjectContext *context = [appDelegate managedObjectContext]; NSEntityDescription *entityDesc = [NSEntityDescription entityForName:@"Details" inManagedObjectContext:context]; NSFetchRequest *request = [[NSFetchRequest alloc]init]; [request setEntity:entityDesc]; NSPredicate *pred = [NSPredicate predicateWithFormat:@"(EmployeeName = %@)", empName.text]; [request setPredicate:pred]; NSManagedObject *matches = nil; NSError *error; NSArray *objects = [context executeFetchRequest:request error:&error]; if ([objects count] == 0) { } else { matches = [objects objectAtIndex:0]; empID.text = [matches valueForKey:@"EmployeeID"]; empDepartment.text = [matches valueForKey:@"EmployeeDepartment"]; } [request release]; [self dismissModalViewControllerAnimated:YES]; } - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { // Overriden to allow any orientation. return YES; } - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Release any cached data, images, etc. that aren't in use. } - (void)viewDidUnload { self.empName = nil; self.empID = nil; self.empDepartment = nil; [super viewDidUnload]; // Release any retained subviews of the main view. // e.g. self.myOutlet = nil; } - (void)dealloc { [empID release]; [empName release]; [empDepartment release]; [super dealloc]; } @end Please let me know the answer as soon as possible. Thank you. Regards, Dipanjan
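    For what it's worth, the pasted RootViewController never calls performFetch: and hard-codes the table to a single row, so even correctly saved objects would never appear. A sketch of the usual fetched-results-driven data source, assuming the "Details" entity and "EmployeeName" key used above:

        - (void)viewDidLoad {
            [super viewDidLoad];
            NSError *error = nil;
            // run the fetch once so the controller actually has objects for the table
            if (![self.fetchedResultsController performFetch:&error]) {
                NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
            }
        }

        - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
            return [[self.fetchedResultsController sections] count];
        }

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            // ask the fetched results controller how many objects are in this section
            id <NSFetchedResultsSectionInfo> sectionInfo = [[self.fetchedResultsController sections] objectAtIndex:section];
            return [sectionInfo numberOfObjects];
        }

    The cellForRowAtIndexPath: method can stay as pasted, since it already reads EmployeeName from the managed object at the index path, and the NSFetchedResultsControllerDelegate callbacks that are already implemented should keep the table in sync after AddViewController saves a new record (assuming both controllers share the app delegate's managed object context).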

    Read the article

  • How to target SCOM 2007 R2 monitor to monitor only one server

    - by Trondh
    Hi, this might be basic, but hopefully someone can help me. We have a well-working SCOM 2007 R2 implementation monitoring our Microsoft infrastructure. Now, on one of these servers there is an event (written to the event log) that I need to be alerted on. I created a group and put this one Windows server in it. Then I created a monitor with simple event detection, entered the event ID, and used the group name as the "monitor target". This doesn't work: the monitor doesn't show up in Health Explorer at all. However, if I create the monitor with "Windows computers" as the target it works, but that means I'll have to disable the monitor and then enable it for the group, which is cumbersome and slightly illogical to me. Is this by design, or am I doing something wrong?

    Read the article

  • How to prevent the question mark cursor issue caused by the Insert key when doing VNC to a Mac?

    - by Sorin Sbarnea
    I found out that when I press the Insert key on the client, I block the OS X VNC server by putting it into a "help mode" where you get the question mark mouse cursor. The mouse still works, but I cannot use the keyboard anymore. Details: reconnecting over VNC does not help, and the keyboard works fine locally on the Mac. The only solution, other than logging in again, was to stop the VNC server on the Mac with: killall OSXvnc-server After a few seconds it restarts by itself and works again. I don't like the current workaround and am looking for something better. Tested with these versions of the VNC client, and both put the VNC server into the question mark mode, requiring a service restart: Ultr@VNC 1.0.8.2 and RealVNC 4.1.3. I know that the problem is caused by a different/bad implementation of the VNC protocol in the server, but do you know of a workaround?

    Read the article

  • Can you authenticate into SSAS with AD LDS (ADAM) accounts?

    - by Jaxidian
    I'm very new to AD LDS, and experienced but not expert with SSAS, so my apologies for my ignorance on both. We have a couple of implementations where we expose SSAS via an HTTPS proxy (msmdpump.dll), and currently a temporary domain handles the authentication (which means our end users have a second account and set of credentials to manage, which is non-ideal). I want to move us towards a more permanent solution, and I'm thinking of moving all authentication to AD LDS for our web apps, SSAS, and others. SSAS is the part I'm concerned about, though. I know SSAS requires Windows Authentication to play nicely, and that this ultimately means Active Directory will be involved. Is there a way to get this done with AD LDS instead of having to use a full AD DS implementation? If so, how? (Note: my question over at StackOverflow had a suggestion that I post this question here on ServerFault instead. My apologies if I'm not asking in the right forum.)

    Read the article

  • PostgreSQL 9.1 Database Replication Between Two Production Environments with Load Balancer

    - by littleK
    I'm investigating different solutions for database replication between two PostgreSQL 9.1 databases. The setup will include two production servers in the cloud (Amazon EC2 X-Large Instances) behind an elastic load balancer. What is the typical database implementation for this type of setup? Master-master replication (with Bucardo or rubyrep)? Or perhaps a single shared database between the two environments, with shared-disk failover? I've been getting some ideas from http://www.postgresql.org/docs/9.0/static/different-replication-solutions.html. Since I don't have a lot of experience with database replication, I figured I would ask the experts. What would you recommend for the described setup?
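    For reference, the built-in mechanism the linked documentation describes for 9.1 is streaming replication with hot standby, which gives one writable primary plus a read-only replica rather than master-master. A hedged sketch of the relevant settings (host names, counts and credentials are placeholders):

        # postgresql.conf on the primary
        wal_level = hot_standby
        max_wal_senders = 3
        wal_keep_segments = 128
        # (the primary's pg_hba.conf also needs a "replication" entry for the standby)

        # postgresql.conf on the standby
        hot_standby = on

        # recovery.conf on the standby
        standby_mode = 'on'
        primary_conninfo = 'host=primary.example.com port=5432 user=replicator password=secret'

    With this layout the load balancer can only spread read traffic, since all writes must go to the primary; Bucardo or rubyrep remain the options if genuine master-master writes from both environments are a requirement.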

    Read the article

  • SCCM 2007 managing hosts in non trusted forest

    - by BoxerBucks
    I have an implementation of SCCM 2007 in forest "A" that manages hosts in that Windows 2008 forest. There is another forest/domain, "B", which has no trust with "A" and in which I also need to manage hosts. I don't need to push out clients from the SCCM console; I am going to install them manually. I just need the hosts in domain "B" to connect back to forest/domain "A" for management purposes. To date, I have not added any AD objects to domain "B" for hosts to query for site, SLP, or management point info. I am installing the hosts with the command line: ccmsetup.exe /mp:SCCM_Server /site:mysite where SCCM_Server is the FQDN of my SCCM server (which is resolvable by the client). There are no ACLs between the two servers. From the logs, I can see the install complete; the client then tries to query its local AD for the site info for "mysite", can't find it, stops, and never connects. Can anyone give me some direction as to how this should be set up?
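    When the client has no way to read site information out of Active Directory (untrusted forest, schema not extended for it), the usual SCCM 2007 approach is to hand the site assignment and lookup servers to ccmsetup directly as client.msi properties instead of relying on an AD query. A sketch with placeholder values (XYZ stands in for the site code, and SCCM_Server is assumed to also host the server locator point and management point roles):

        ccmsetup.exe /mp:SCCM_Server SMSSITECODE=XYZ SMSMP=SCCM_Server SMSSLP=SCCM_Server

    The client and site server still need to resolve each other across the forests, and clients from an untrusted domain will typically need manual approval (or certificate-based approval in native mode) in the SCCM console.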

    Read the article

  • Problems with login scripts on Terminal Server 2008

    - by discovery
    We are having issues with login scripts not running on a Windows Server 2008 terminal server. This is a brand-new implementation and they have never worked. The test user in question has no problems running login scripts on their workstation. I have tried logging into the server directly with their account, but still no scripts run. I have set up a test account with Domain Admins rights in the same OU as theirs, and the scripts don't run for it either. I can run the scripts manually from the SYSVOL\somedomain.com\Policies folder and they run fine. The 2008 terminal server is in a mixed 2003/2008 domain. The user can run gpupdate on the server without error. I have also run Group Policy Results for this user and the terminal server, and everything looks good with no errors. Any suggestions?

    Read the article
