Search Results


  • Adaptive Layout for ADF Faces on Tablets

    - by Shay Shmeltzer
    In the 11.1.1.6 version of Oracle ADF we started adding specific features to the ADF Faces components so they'll work better on iPad tablets. In this entry I'm going to highlight some new capabilities that we have added to the 11.1.2.3 release. (Note: if you are still on the 11.1.1.* branch, you'll need to wait for 11.1.1.7 to get the features discussed here.) The two key additions in the 11.1.2.3 version, compared to the 11.1.1.6 features for iPad support, are pagination for tables and adaptive flow layout.

    The pagination for tables is self-explanatory: since iPads don't support scroll bars, we automatically switch the table component to render with a pagination toolbar that allows you to scroll through sets of records or jump directly to a specific set. See the image below.

    The adaptive flow layout takes a bit more explanation. On regular desktops, the UI you usually build for ADF Faces screens uses a stretch layout - meaning it stretches to fill the whole area of the browser window. If you resize the browser window, the ADF Faces page resizes with it. If your browser window is too small, scroll bars appear so you can scroll to areas that are "hidden". On an iPad, however, this is probably not the type of layout you want - you would rather have a flow layout that eliminates scroll bars and instead lets you scroll down the page. Basically, you want the page to be sized based on its content rather than on the browser window size. In ADF Faces terminology this is done by setting the dimensionsFrom property to "children".

    And here comes the tricky part: in the past (and also today), when you create an ADF Faces page and add a stretchable component to it, the dimensionsFrom property is set to "parent" by default. The same is true for other layout components you add. At this point you might be wondering, "Does this mean I'll need to go to each of the layout components in my page and modify the dimensionsFrom property value to be children?" ADF Faces to the rescue... To eliminate these tedious manual changes, we introduced a new web.xml parameter, oracle.adf.view.rich.geometry.DEFAULT_DIMENSIONS. You basically add the following to your web.xml:

    ```xml
    <context-param>
      <description>
        This parameter controls the default value for component geometry on the page.
        Supported values are:
          legacy - component attributes use the default values as specified for the
                   attributes in the tag documentation (default value)
          auto   - component attributes use the correct default value given the value
                   of their parent component. For example, with this setting, the
                   panelStretchLayout will use "auto" as the default value for its
                   "dimensionsFrom" attribute instead of "parent".
      </description>
      <param-name>oracle.adf.view.rich.geometry.DEFAULT_DIMENSIONS</param-name>
      <param-value>auto</param-value>
    </context-param>
    ```

    Once you set this parameter, you only need to set the dimensionsFrom attribute on the top-level layout component of your page, and the rest of the components will adjust accordingly. One trick that you can use, and that is used in the demo below, is to have the dimensionsFrom property depend on the type of client that accesses your application. This way you can switch between stretch and flow layout based on the device accessing your application.
    For example, I use the following in my page:

    ```xml
    <af:panelStretchLayout topHeight="70px" startWidth="0px" endWidth="0px"
        dimensionsFrom="#{adfFacesContext.agent.capabilities['touchScreen'] eq 'none' ? 'parent' : 'children'}">
    ```

    This results in a flow layout for iPads and a stretch layout for regular browsers. Check out the result in the demo below.

    Read the article

  • NDepend Evaluation: Part 3

    - by Anthony Trudeau
    NDepend is a Visual Studio add-in designed for intense code analysis with the goal of high code quality. NDepend uses a number of metrics and aggregates the data in pleasing static and active visual reports. My evaluation of NDepend will be broken up into several different parts. In the first part of the evaluation I looked at installing the add-in, and in the last part I went over my first impressions, including an overview of the features. In this installment I provide a little more detail on a few of the features that I really like.

    Dependency Matrix

    The dependency matrix is one of the rich visual components provided with NDepend. At a glance it lets you know where you have coupling problems, including cycles. It does this with numbers indicating the weight of the dependency and a color-coding that indicates the nature of the dependency. Green and blue cells are direct dependencies (the difference being whether the relationship is from row-to-column or column-to-row). Black cells are the ones that you really want to know about. These indicate that you have a cycle: type A refers to type B and type B also refers to type A.

    But that's not the end of the story. A handy pop-up appears when you hover over the cell in question. It explains the color, the dependency, and provides several interesting links that will teach you more than you want to know about the dependency. You can double-click the problem cells to explode the dependency. That will show the dependencies on a method-by-method basis, allowing you to more easily target and fix the problem. When you're done you can click the back button on the toolbar.

    Dependency Graph

    The dependency graph is another component provided. It's complementary to the dependency matrix, but it isn't as easy to identify dependency issues using the window. On a positive note, it does provide more information than the matrix.

    My biggest issue with the dependency graph is determining what is shown. This was not readily obvious. I ended up using the navigation buttons to get an acceptable view. I would have liked to choose what I see. Once you see the types you want, you can get a decent idea of coupling strength based on the width of the dependency lines. Double-arrowed lines are problematic and are shown in red. The size of the boxes is related to the metric being displayed. This is controlled using the Box Size drop-down in the toolbar. Personally, I don't find the size of the box to be helpful, so I change it to Constant Font.

    One nice thing about the display is that you can see the entire path of dependencies when you hover over a type. This is done by color-coding the dependencies and dependants. It would be nice if selecting the box for the type would lock the highlighting in place. I did find a perhaps unintended work-around to the color-coding: you can lock the color-coding in by hovering over the type, right-clicking, and then clicking on the canvas area to clear the pop-up menu. You can then do whatever you want with it, including saving it to an image file with the color-coding.

    CQL

    NDepend uses a code query language (CQL) to work with your code just like it was a database. CQL doesn't rival the robustness of T-SQL or even LINQ, but it represents an impressive attempt at providing an expressive way to enumerate and interrogate your code. There are two main windows you'll use when working with CQL.
    The CQL Query Explorer allows you to define which queries (rules) are run as part of a report - I immediately unselected rules that I don't want in my results. The CQL Query Edit window is where you can view or author your own rules. The explorer window is pretty self-explanatory, so I won't mention it further other than to say that any queries you author will appear in the custom group.

    Authoring your own queries is really hard to screw up. The Intellisense-like pop-ups tell you what you can do while making composition easy. I was able to create a query within two minutes of playing with the editor. My query warns if any types that are interfaces don't start with an "I":

    ```
    WARN IF Count > 0 IN SELECT TYPES WHERE IsInterface AND !NameLike "I"
    ```

    The results from the CQL Query Edit window are immediate. That fact makes it useful for ad hoc querying. It's worth mentioning two things that could make the experience smoother. First, out of habit from using Visual Studio, I expect to be able to scroll and press Tab to select an item in the list (like Intellisense); you have to press Enter when you scroll to the item you want. Second, the commands are case-sensitive. I don't see a really good reason to enforce that.

    CQL has a lot of potential, not just in enforcing code quality, but also in enforcing architectural constraints that your enterprise has defined.

    Up Next

    My next update will be the final part of the evaluation. I will summarize my experience and provide my conclusions on the NDepend add-in.

    ** View Part 1 of the Evaluation **
    ** View Part 2 of the Evaluation **

    Disclaimer: Patrick Smacchia contacted me about reviewing NDepend. I received a free license in return for sharing my experiences and talking about the capabilities of the add-in on this site. There is no expectation of a positive review elicited from the author of NDepend.

    Read the article

  • Namespaces are obsolete

    - by Bertrand Le Roy
    To those of us who have been around for a while, namespaces have been part of the landscape. One could even say that they have been defining the large-scale features of the landscape in question. However, something happened fairly recently that I think makes this venerable structure obsolete. Before I explain this development and why it's a superior concept to namespaces, let me recapitulate what namespaces are and why they've been so good to us over the years...

    Namespaces are used for a few different things:

    1. Scope: a namespace delimits the portion of code where a name (for a class, sub-namespace, etc.) has the specified meaning. Namespaces are usually the highest-level scoping structures in a software package.
    2. Collision prevention: name collisions are a universal problem. Some systems, such as jQuery, wave it away, but the problem remains. Namespaces provide a reasonable approach to global uniqueness (and in some implementations, such as XML, enforce it). In .NET, there are ways to relocate a namespace to avoid those rare collision cases.
    3. Hierarchy: programmers like neat little boxes, and especially boxes within boxes within boxes. For some reason. Regular human beings, on the other hand, tend to think linearly, which is why the Windows Explorer, for example, has tried in a few different ways to flatten the file system hierarchy for the user.

    #1 is clearly useful because we need to protect our code from bleeding effects from the rest of the application (and vice versa). A language with only global constructs may be what some of us started programming on, but it's not desirable in any way today.

    #2 may not always be reasonably worth the trouble (jQuery is doing fine with its global plug-in namespace), but we still need it in many cases. One should note, however, that globally unique names are not the only possible implementation. In fact, they are a rather extreme solution. What we really care about is collision prevention within our application. What happens outside is irrelevant.

    #3 is, more than anything, an aesthetic choice. A common convention has been to encode the whole pedigree of the code into the namespace. Come to think about it, we never think we need to import "Microsoft.SqlServer.Management.Smo.Agent" - and that would be very hard to remember anyway. What we want to do is bring nHibernate into our app.

    And this is precisely what you'll do with modern package managers and module loaders. I want to take the specific example of RequireJS, which is commonly used with Node. Here is how you import a module with RequireJS:

    ```javascript
    var http = require("http");
    ```

    This is of course importing an HTTP stack module into the code. There is no noise here. Let's break this down. Scope (#1) is provided by the one scoping mechanism in JavaScript: the closure surrounding the module's code. Whatever scoping mechanism is provided by the language would be fine here.
    Collision prevention (#2) is very elegantly handled. Whereas relocating is an afterthought, and an exceptional measure, with namespaces, it is here on the front line. You always relocate, using an extremely familiar pattern: variable assignment. We are very much used to managing our local variable names, and any possible collision will get solved very easily by picking a different name.

    Wait a minute, I hear some of you say. This is only taking care of collisions on the client side, on the left of that assignment. What if I have two libraries with the name "http"? Well, you can better qualify the path to the module, which is what the require parameter really is.

    As for hierarchical organization (#3), you don't really want that, do you?

    RequireJS' module pattern does elegantly cover the bases that namespaces used to cover, but it also promotes additional good practices. First, it promotes usage of self-contained, single-responsibility units of code through the closure-based, stricter scoping mechanism. Namespaces are somewhat more porous, as using/import statements can be used bi-directionally, which leads us to my second point... Sane dependency graphs are easier to achieve and sustain with such a structure. With namespaces, it is easy to construct dependency cycles (that's bad, mmkay?). With this pattern, the equivalent would be to build mega-components, which are an easier problem to spot than a decay into inter-dependent namespaces, for which you need specialized tools.

    I really like this pattern very much, and I would like to see more environments implement it. One could argue that dependency injection has some commonalities with this, for example. What do you think? This is the half-baked result of some morning shower reflections, and I'd love to read your thoughts about it. What am I missing?
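    (A footnote to the collision-prevention point: the .NET namespace "relocation" mentioned above is done with alias directives. A minimal sketch, with hypothetical vendor names of my own invention - the alias plays the same role as the left-hand side of the require assignment, resolving the collision at the point of consumption rather than through global uniqueness:)

    ```csharp
    using System;

    // Two hypothetical libraries that both define a type named Json.
    namespace VendorA.Text { public class Json { public string Name => "A.Json"; } }
    namespace VendorB.Text { public class Json { public string Name => "B.Json"; } }

    namespace MyApp
    {
        // Alias directives "relocate" the colliding names locally, per file -
        // an assignment-like rename, much like var http = require("http").
        using JsonA = VendorA.Text.Json;
        using JsonB = VendorB.Text.Json;

        public static class Demo
        {
            public static void Main()
            {
                Console.WriteLine(new JsonA().Name); // prints "A.Json"
                Console.WriteLine(new JsonB().Name); // prints "B.Json"
            }
        }
    }
    ```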

    Read the article

  • JavaDay Taipei 2014 Trip Report

    - by reza_rahman
    JavaDay Taipei 2014 was held at the Taipei International Convention Center on August 1st. Organized by Oracle University, it is one of the largest Java developer events in Taiwan. This was another successful year for JavaDay Taipei, with a fully sold out venue packed with youthful, energetic developers (this was my second time at the event and I have already been invited to speak again next year!). In addition to Oracle speakers like me, Steve Chin and Naveen Asrani, the event also featured a bevy of local speakers, including Taipei Java community leaders. Topics included Java SE, Java EE, JavaFX, cloud and Big Data.

    It was my pleasure and privilege to present one of the opening keynotes for the event. I presented my session on Java EE titled "JavaEE.Next(): Java EE 7, 8, and Beyond". I covered the changes in Java EE 7 as well as what's coming in Java EE 8. I demoed the Cargo Tracker Java EE BluePrints. I also briefly talked about Adopt-a-JSR for Java EE 8. The slides for the keynote are available (click here to download and view the PDF).

    In the afternoon I did my JavaScript + Java EE 7 talk titled "Using JavaScript/HTML5 Rich Clients with Java EE 7". This talk is basically about aligning EE 7 with the emerging JavaScript ecosystem (specifically AngularJS). The talk was completely packed. The slide deck for the talk is here: "JavaScript/HTML5 Rich Clients Using Java EE 7" from Reza Rahman. The demo application code is posted on GitHub. The code should be a helpful resource if this development model is something that interests you. Do let me know if you need help with it, but the instructions should be fairly self-explanatory. I am delivering this material at JavaOne 2014 as a two-hour tutorial. This should give me a little more bandwidth to dig a little deeper, especially on the JavaScript end.

    I finished off JavaDay Taipei with my talk titled "Using NoSQL with ~JPA, EclipseLink and Java EE" (this was the last session of the conference). The talk covers an interesting gap on which there is surprisingly little material out there. The talk has three parts: a birds-eye view of the NoSQL landscape; how to use NoSQL via a JPA-centric facade using EclipseLink NoSQL, Hibernate OGM, DataNucleus, Kundera, Easy-Cassandra, etc.; and how to use NoSQL native APIs in Java EE via CDI. The slides for the talk are here: "Using NoSQL with ~JPA, EclipseLink and Java EE" from Reza Rahman. The JPA-based demo is available here, while the CDI-based demo is available here. Both demos use MongoDB as the data store. Do let me know if you need help getting the demos up and running.

    After the event the Oracle University folks hosted a reception in the evening, which was very well attended by organizers, speakers and local Java community leaders.

    I am extremely saddened by the fact that this otherwise excellent trip was scarred by terrible tragedy. After the conference I joined a few folks for a hike on the Maokong Mountain on Saturday. The group included friends in the Taiwanese Java community, including Ian and Robbie Cheng. Without warning, fatal tragedy struck on a remote part of the trail. Despite best efforts by us, the excellent Taiwanese Emergency Rescue Team and world-class Taiwanese physicians, we were unable to save our friend Robbie Cheng's life. Robbie was just thirty-four years old and is survived by his younger brother, mother and father.
    Being the father of a young child myself, I can only imagine the deep sorrow that this senseless loss unleashes. Robbie was a key member of the Taiwanese Java community and a Java Evangelist at Sun at one point. Ironically, the only picture I was able to take of the trail was mere moments before tragedy. I thought I should place him in that picture in profoundly respectful memoriam.

    Perhaps there is some solace in the fact that there is something inherently honorable in living a bright life, dying young and meeting one's end on a beautiful remote mountain trail few venture to behold, let alone attempt to ascend, in a long and tired lifetime. Perhaps I'd even say it's a fate I would not entirely regret facing if it were my own. With that thought in mind, it seems appropriate to me to quote some lyrics from the song "Runes to My Memory" by legendary Swedish heavy metal band Amon Amarth, idealizing a fallen Viking warrior cut down in his prime:

    "Here I lie on wet sand
    I will not make it home
    I clench my sword in my hand
    Say farewell to those I love

    When I am dead
    Lay me in a mound
    Place my weapons by my side
    For the journey to Hall up high

    When I am dead
    Lay me in a mound
    Raise a stone for all to see
    Runes carved to my memory"

    I submit my deepest condolences to Robbie's family and hope my next trip to Taiwan ends on a less somber note.

    Read the article

  • Breaking 1NF to model subset constraints. Does this sound sane?

    - by Chris Travers
    My first question here. Apologies if it is in the wrong forum, but this seems pretty conceptual. I am looking at doing something that goes against conventional wisdom and want to get some feedback as to whether this is totally insane or will result in problems, so critique away! I am on PostgreSQL 9.1 but may be moving to 9.2 for this part of this project.

    To re-iterate: does it seem sane to break 1NF in this way? I am not looking for debugging code so much as for where people see problems that this might lead.

    The Problem

    In double-entry accounting, financial transactions are journal entries with an arbitrary number of lines. Each line has either a left value (debit) or a right value (credit), which can be modelled as a single value with negatives as debits and positives as credits, or vice versa. The sum of all debits and credits must equal zero (so if we go with a single amount field, sum(amount) must equal zero for each financial journal entry). SQL-based databases, pretty much required for this sort of work, have no way to express this sort of constraint natively, and so any approach to enforcing it in the database seems rather complex.

    The Write Model

    The journal entries are append-only. There is a possibility we will add a delete model, but it will be subject to a different set of restrictions and so is not applicable here. If and when we allow deletes, we will probably do them using a simple ON DELETE CASCADE designation on the foreign key, and require that deletes go through a dedicated stored procedure which can enforce the other constraints. So inserts and selects have to be accommodated, but updates and deletes do not for this task.

    My Proposed Solution

    My proposed solution is to break first normal form and model constraints on arrays of tuples, with a trigger that breaks the rows out into another table.
    ```sql
    CREATE TABLE journal_line (
        entry_id bigserial primary key,
        account_id int not null references account(id),
        journal_entry_id bigint not null, -- adding references later
        amount numeric not null
    );
    ```

    I would then add "table methods" to extract debits and credits for reporting purposes:

    ```sql
    CREATE OR REPLACE FUNCTION debits(journal_line) RETURNS numeric
    LANGUAGE sql IMMUTABLE AS $$
        SELECT CASE WHEN $1.amount < 0 THEN $1.amount * -1 ELSE NULL END;
    $$;

    CREATE OR REPLACE FUNCTION credits(journal_line) RETURNS numeric
    LANGUAGE sql IMMUTABLE AS $$
        SELECT CASE WHEN $1.amount > 0 THEN $1.amount ELSE NULL END;
    $$;
    ```

    Then the journal entry table (simplified for this example):

    ```sql
    CREATE TABLE journal_entry (
        entry_id bigserial primary key, -- no natural keys :-(
        journal_id int not null references journal(id),
        date_posted date not null,
        reference text not null,
        description text not null,
        journal_lines journal_line[] not null
    );
    ```

    Then a table method and check constraints:

    ```sql
    CREATE OR REPLACE FUNCTION running_total(journal_entry) RETURNS numeric
    LANGUAGE sql IMMUTABLE AS $$
        SELECT sum(amount) FROM unnest($1.journal_lines);
    $$;

    ALTER TABLE journal_entry
        ADD CONSTRAINT journal_entry_balanced CHECK ((journal_entry.running_total) = 0);

    ALTER TABLE journal_line
        ADD FOREIGN KEY (journal_entry_id) REFERENCES journal_entry(entry_id);
    ```

    And finally we'd have a breakout trigger:

    ```sql
    CREATE OR REPLACE FUNCTION je_breakout() RETURNS TRIGGER LANGUAGE plpgsql AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            INSERT INTO journal_line (journal_entry_id, account_id, amount)
            SELECT NEW.entry_id, account_id, amount
              FROM unnest(NEW.journal_lines);
            RETURN NEW;
        ELSE
            RAISE EXCEPTION 'Operation Not Allowed';
        END IF;
    END;
    $$;

    CREATE TRIGGER je_breakout_trigger
        AFTER INSERT OR UPDATE OR DELETE ON journal_entry
        FOR EACH ROW EXECUTE PROCEDURE je_breakout();
    ```

    Of course the example above is simplified. There will be a status table that will track approval status, allowing for separation of duties, etc. However, the goal here is to prevent unbalanced transactions. Any feedback? Does this sound entirely insane?

    Standard Solutions?

    In getting to this point, I have to say I have looked at four different current ERP solutions to this problem:

    1. Represent every line item as a debit and a credit against different accounts.
    2. Use of foreign keys against the line item table to enforce an eventual running total of 0.
    3. Use of constraint triggers in PostgreSQL.
    4. Forcing all validation here solely through the app logic.

    My concerns are that #1 is pretty limiting and very hard to audit internally. It's not programmer-transparent, and so it strikes me as being difficult to work with in the future. The second strikes me as being very complex and requires a series of constraints and foreign keys against self to make work, and therefore it strikes me as complex, hard to sort out at least in my mind, and thus hard to work with. The fourth could be done, as we force all access through stored procedures anyway, and this is the most common solution (have the app total things up and throw an error otherwise). However, I think proof that a constraint is followed is superior to test cases, and so the question becomes whether this in fact generates insert anomalies rather than solving them. If this is a solved problem, it isn't the case that everyone agrees on the solution...

    Read the article

  • Proxied calls not working as expected

    - by AndyH
    I have been modifying an application to have a cleaner client/server split to allow for load splitting and resource sharing, etc. Everything is written to an interface, so it was easy to add a remoting layer to the interface using a proxy. Everything worked fine. The next phase was to add a caching layer to the interface, and again this worked fine and speed was improved, but not as much as I would have expected. On inspection it became very clear what was going on. I feel sure that this behavior has been seen many times before and there is probably a design pattern to solve the problem, but it eludes me and I'm not even sure how to describe it. It is easiest explained with an example.

    Let's imagine the interface is:

    ```java
    interface IMyCode {
        List<IThing> getLots( List<String> ids );
        IThing getOne( String id );
    }
    ```

    The getLots() method calls getOne() and fills up the list before returning. The interface is implemented at the client, which is proxied to a remoting client, which then calls the remoting server, which in turn calls the implementation at the server. At the client and the server layers there is also a cache. So we have:

    Client interface | Client cache | Remote client | Remote server | Server cache | Server interface

    If we call getOne("A") at the client interface, the call is passed to the client cache, which faults. This then calls the remote client, which passes the call to the remote server. This then calls the server cache, which also faults, and so the call is eventually passed to the server interface, which actually gets the IThing. In turn the server cache is filled and finally the client cache also. If getOne("A") is again called at the client interface, the client cache has the data and it gets returned immediately. If a second client called getOne("B"), it would fill the server cache with "B" as well as its own client cache. Then, when the first client calls getOne("B"), the client cache faults but the server cache has the data. This is all as one would expect and works well.

    Now let's call getLots( [ "C", "D" ] ). This works as you would expect, by calling getOne() twice, but there is a subtlety here. The call to getLots() cannot directly make use of the cache. Therefore the sequence is to call the client interface, which in turn calls the remote client, then the remote server and eventually the server interface. This then calls getOne() to fill the list before returning. The problem is that the getOne() calls are being satisfied at the server when ideally they should be satisfied at the client. If you imagine that the client/server link is really slow, then it becomes clear why the client call is more efficient than the server call once the client cache has the data.

    This example is contrived to illustrate the point. The more general problem is that you cannot just keep adding proxied layers to an interface and expect it to work as you would imagine. As soon as the call goes 'through' the proxy, any subsequent calls are on the proxied side rather than the 'self' side. Have I failed to learn, or not learned something correctly?

    All this is implemented in Java and I haven't used EJBs. It seems that the example may be confusing. The problem is nothing to do with cache efficiencies. It is more to do with an illusion created by the use of proxies, or AOP techniques in general. When you have an object whose class implements an interface, there is an assumption that a call on that object might make further calls on that same object.
    For example:

    ```java
    public String getInternalString() {
        return InetAddress.getLocalHost().toString();
    }

    public String getString() {
        return getInternalString();
    }
    ```

    If you get an object and call getString(), the result depends on where the code is running. If you add a remoting proxy to the class, then the result could be different for calls to getString() and getInternalString() on the same object. This is because the initial call gets 'deproxied' before the actual method is called. I find this not only confusing, but I wonder how I can control this behavior, especially as the use of the proxy may be by a third party. The concept is fine, but the practice is certainly not what I expected. Have I missed the point somewhere? (A minimal sketch of the effect follows below.)
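    To make the effect concrete, here is a minimal, self-contained sketch of the same self-invocation pitfall. It uses C# and .NET's DispatchProxy rather than the Java proxies described above, and all the names (MyCodeImpl, TracingProxy) are hypothetical illustrations rather than code from the application in question, but the behavior is identical with Java dynamic proxies:

    ```csharp
    using System;
    using System.Reflection;

    public interface IMyCode
    {
        string GetOne(string id);
        string[] GetLots(string[] ids);
    }

    public class MyCodeImpl : IMyCode
    {
        public string GetOne(string id) => "thing-" + id;

        public string[] GetLots(string[] ids)
        {
            var results = new string[ids.Length];
            for (int i = 0; i < ids.Length; i++)
                results[i] = GetOne(ids[i]); // self-call: goes to 'this', not the proxy
            return results;
        }
    }

    // A proxy that logs every call it intercepts, then forwards to the target.
    public class TracingProxy<T> : DispatchProxy where T : class
    {
        private T target;

        protected override object Invoke(MethodInfo method, object[] args)
        {
            Console.WriteLine("proxy saw: " + method.Name);
            return method.Invoke(target, args);
        }

        public static T Wrap(T target)
        {
            T proxy = Create<T, TracingProxy<T>>();
            ((TracingProxy<T>)(object)proxy).target = target;
            return proxy;
        }
    }

    // IMyCode code = TracingProxy<IMyCode>.Wrap(new MyCodeImpl());
    // code.GetLots(new[] { "C", "D" });
    //
    // Output: only "proxy saw: GetLots". The two inner GetOne calls never cross
    // the proxy boundary, which is exactly why the cache layer described above
    // is bypassed for the per-item lookups inside getLots().
    ```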

    Read the article

  • Identity Globe Trotters (Sep Edition): The Social Customer

    - by Tanu Sood
    Welcome to the inaugural edition of our monthly series - Identity Globe Trotters. Starting today, the last Friday of every month, we will explore regional commentary on Identity Management. We will invite guest contributors from around the world to share their opinions and experiences around Identity Management and highlight regional nuances, specific drivers, solutions and more. Today's feature is contributed by Michael Krebs, Head of Business Development at esentri consulting GmbH, a (SOA) specialized Oracle Gold Partner based in Ettlingen, Germany. In his current role, Krebs is dealing with the latest developments in Enterprise Social Networking and the integration of social media within business processes.

    By Michael Krebs

    The relevance of "easy sign-on" in the age of the "Social Customer"

    With the growth of social networks, the time people spend within those closed "eco-systems" is growing year by year. With social networks looking to integrate search engines, as Facebook announced some weeks ago, their relevance will continue to grow in contrast to the more conventional search engines. This is one of the reasons why users' social network accounts are becoming more and more like a virtual fingerprint. With the growing relevance of social networks, the importance of a simple way for customers to get in touch with, say, customer care or contract departments will be crucial for sales processes in critical markets. Customers want to have one single point of contact and also an easy "login method" with no dedicated usernames, passwords or proprietary accounts. The golden rule in the future social-media-driven markets will be: the lower the complexity of the initial contact, the better a company can profit from social networks. If you, for example, can generate a smart way for an existing customer to use self-service portals, the cost of providing phone support can be lowered significantly.

    Recruiting and hiring of "digital natives"

    Another particular example is "social" recruiting processes. The so-called "digital natives" don't want to type their profile facts and CVs into proprietary systems. Why not use the actual LinkedIn profile? In the German-speaking region, the market in the area of professional social networks is dominated by XING, the equivalent of LinkedIn. A few weeks back, this network also opened up its interfaces for integrating social sign-ons or the usage of profile data for recruiting purposes. In the European (and especially the German) employment market, where the number of young candidates is shrinking because of the low birth rate in the region, it will become essential to use social-media-supported hiring processes to find and on-board the rare talents. In fact, you will see traditional recruiting websites integrated with social hiring to attract the best talents in the market, where the pool of potential candidates has decreased dramatically over the years.

    Identity Management as a key factor in the Customer Experience process

    To create the biggest value for customers and also future employees, companies need to connect their HCM or CRM systems with powerful Identity Management solutions. With the highly efficient Oracle (social and mobile enabling) Identity Management solution, enterprises can combine easy sign-on with secure connections to the backend infrastructure. This combination enables a "one-stop" service with personalized content for customers and talents. In addition, companies can collect valuable data for the enrichment of their CRM data.
    The goal is to enrich the so-called "Customer Experience" via all available customer channels and contact points. Those systems have already gained importance in B2C markets and will gradually spread out to B2B channels in the near future.

    Conclusion: Central and "social" Identity Management is key to Customer Experience Management and Talent Management

    For a seamless delivery of "Customer Experience Management" and a modern way of recruiting the best talent, companies need to integrate social sign-on capabilities with modern CX and talent management infrastructure. This lowers the barrier for existing and future customers or employees to get in touch with sales, support or human resources. Identity Management is the technology enabler and backbone for a modern Customer Experience infrastructure. Oracle Identity Management solutions provide the opportunity to secure social applications and connect them with modern CX solutions. In the end, companies benefit from "best of breed" processes and solutions for enriching customer experience without compromising security.

    About esentri: esentri is a provider of enterprise social networking and brings the benefits of social network communication into business environments. As one key strength, esentri uses Oracle Identity Management solutions for delivering social and mobile access for Oracle's CRM and HCM solutions.

    ...End Guest Post...

    With new and enhanced features optimized to secure the new digital experience, the recently announced Oracle Identity Management 11g Release 2 enables organizations to securely embrace cloud, mobile and social infrastructures and reach new user communities to help further expand and develop their businesses.

    Additional Resources:
    - Oracle Identity Management 11gR2 release
    - Oracle Identity Management website
    - Datasheet: Mobile and Social Access (pdf)
    - IDM at OOW: Focus on Identity Management
    - Facebook: OracleIDM
    - Twitter: OracleIDM

    We look forward to your feedback on this post and welcome your suggestions for topics to cover in Identity Globe Trotters. Last Friday, every month!

    Read the article

  • Examine your readiness for managing Enterprise Private Cloud

    - by Anand Akela
    Cloud computing promises to deliver greater agility to meet demanding business needs, operational efficiencies, and lower cost. However, these promises cannot be realized, and enterprises may not be able to get the best value out of their enterprise private cloud computing infrastructure, without a comprehensive cloud management solution.

    Take this new self-assessment quiz that measures the readiness of your enterprise private cloud. It scores your readiness in the following areas so you can discover where and how you can improve to gain total cloud control over your enterprise private cloud.

    Complete Cloud Lifecycle Solution

    Check if you are ready to manage all phases of building, managing, and consuming an enterprise cloud. You will learn how Oracle can help build and manage a rich catalog of cloud services - whether it is Infrastructure-as-a-Service, Database-as-a-Service, or Platform-as-a-Service - all from a single product.

    Integrated Cloud Stack Management

    Integrated management of the entire cloud stack - all the way from application to disk - is very important to eliminate the integration pains and costs that customers would otherwise have to incur by trying to create a cloud environment by integrating multiple point solutions.

    Business-Driven Clouds

    It is critical that an enterprise cloud platform is not only able to run applications but also has deep business insight and visibility. Oracle Enterprise Manager 12c enables creation of application-aware and business-driven clouds that have deep insight into applications, business services and transactions.
    As the leading provider of business applications and middleware, we are able to offer you a cloud solution that is optimized for business services.

    Proactive Management

    Integration of the enterprise cloud infrastructure with support can allow cloud administrators to benefit from Automatic Service Requests (ASR), proactive patch recommendations, health checks and end-of-life advisories for all of the technology deployed within the cloud.

    Learn more about solutions for Enterprise Cloud and cloud management by attending various sessions, demos and hands-on labs at Oracle OpenWorld 2012.

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Waiting for Windows 8: A Long, Hot Summer

    - by andrewbrust
    Microsoft has revealed some things about Windows 8, and revealed a part of the developer story for new Windows 8 "tailored," "immersive" applications. In retrospect, very little was shared. The bit that was revealed to us is that those applications can be developed using a combination of HTML 5 and JavaScript. Not much else was said, except that additional details would be revealed at Microsoft's //Build/ conference in Anaheim, California in September.

    This has left a lot of people in suspense, and it seems that suspended state is going to last all summer. The problem, of course, is that in the absence of hard information, people fill the void with Speculation, Rumor and Gloom. That's a bit like Fear, Uncertainty and Doubt, except that it's self-imposed by the Microsoft community and not planted by Microsoft's competitors.

    This is a less-than-perfect situation. Not only is it causing developers to worry about the value of their skill sets, but I am already hearing from consulting shops that customers are getting nervous too and, in extreme cases, opting for non-Microsoft tools for their projects as a result. I'm also hearing from dev tool ISVs that sales have suffered as a result. It's quite possible that the customers moving off .NET wanted to do so anyway, and it's also possible that dev tool ISVs are suffering slower sales this year due to a slowed rate of economic recovery. Without hard information, people tend to interpret things negatively. Actually, that's the major point in all of this: while there is a multitude of opinions about what the Windows 8 development platform will look like once fully revealed, there is an emerging consensus around one thing - it sure would help if Microsoft revealed more of its strategy... just enough to quash absurd rumors, stabilize the .NET ecosystem and get people to stay calm.

    We've had some reassurances thus far: there will be a Windows desktop mode; we'll still have Windows Explorer; we'll still run Office; we'll still have a task bar; and all the skills and tools we use now will still work there. But with reassurances like that... people still feel insecure. Because telling us that Windows 8 will have what is essentially a "classic" mode sure makes it sound like today's skill sets will soon be "classic" too... and then maybe they'll just become obsolete.

    Humans find change scary; it's natural. And when left alone with their fears - because no one is saying anything to dispel them - people can go from frightened to paranoid, and can start viewing things in a downright conspiratorial light. It would be great if Microsoft stepped into the void now and told us what is coming - especially because whatever they tell us is bound to be at least a little better than what people think they are going to hear.

    I don't know what the announcements will be, but I do have it on authority, from a number of sources, that Microsoft isn't going to talk until //Build/. That means no news until September 13th. Nothing until after Labor Day. You get zippo until after the Back-to-School sales are done.

    What to do? Try not to let the dark voices of gloom and doom fill your head. Even in the absence of answers, we still have some important facts:

    The .NET developer community is huge. Microsoft's customers have major investments in .NET, and in .NET skills. Political infighting in Redmond might make for irrational decisions, but ultimately public companies can't just alienate their advocates and piss off their customers.
    Spite doesn't trump fiduciary responsibility. The computing device markets are changing, software is changing, software business models are changing and developers are changing. Microsoft has to keep up. The HTML + JavaScript community is huge too, and it includes many of the "changed" developers. Public companies can't ignore new markets, nor the popular standards that can help them enter those new markets. Loyalty doesn't trump fiduciary responsibility either.

    If Microsoft can appeal to new developers, then it should. If Microsoft can keep catering to its existing developers and customers - not just through legacy support, but also through empowering futures - then it probably will. You don't have to shove your old friends out into the rain to make room for new ones; you can bring those new constituents in under a bigger tent. I hope Microsoft will enlarge the tent, and I have trouble imagining why it would not.

    Read the article

  • No More NCrunch For Me

    - by Steve Wilkes
    When I opened up Visual Studio this morning, I was greeted with a little popup. NCrunch is a Visual Studio add-in which runs your tests while you work so you know if and when you've broken anything, as well as providing coverage indicators in the IDE and coverage metrics on demand. It recently went commercial (which I thought was fair enough), and time is running out for the free version I've been using for the last couple of months. From my experiences using NCrunch, I'm going to let it expire and go about my business without it. Here's why.

    Before I start, let me say that I think NCrunch is a good product, which is to say it's had a positive impact on my programming. I've used it to help test-drive a library I'm making, right from the start of the project, and especially at the beginning it was very useful to have it run all my tests whenever I made a change. The first problem is that while that was cool to start with, it's recently become a bit of a chore.

    Problems Running Tests

    NCrunch has two 'engine modes' in which it can run tests for you - it can run all your tests when you make a change, or it can figure out which tests were impacted and only run those. Unfortunately, it became clear pretty early on that the second option (which is marked as 'experimental') wasn't really working for me, so I had to have it run everything. With a smallish number of tests, and while I was adding new features, that was great, but I've now got 445 tests (still not exactly loads) and am more in a 'clean and tidy' mode where I know that a change I'm making will probably only affect a particular subset of the tests. With that in mind, it's a bit of a drag sitting there after I make a change and having to wait for NCrunch to run everything. I could disable it and manually run the tests I know are impacted, but then what's the point of having NCrunch? If the 'impacted only' engine mode worked well, this problem would go away, but that's not what I found.

    Secondly, what's wrong with this picture: I've got 445 tests, and NCrunch has queued 455 tests to run. It's queued duplicate tests - in this quickly-screenshotted case 10, but I've seen the total queue get up over 600. If I'm already itchy waiting for it to run all my tests against a change I know only affects a few, I'm even itchier waiting for it to run a lot of them twice.

    Problems With Code Coverage

    NCrunch marks each line of code with a dot to say if it's covered by tests - a black dot says the line isn't covered, a red dot says it's covered but at least one of the covering tests is failing, and a green dot means all the covering tests pass. It also calculates coverage statistics for you. Unfortunately, there are a couple of flaws in the coverage.

    Firstly, it doesn't support ExcludeFromCodeCoverage attributes. This feature has been requested and I expect it will be included in a later release, but right now it doesn't. So a member explicitly excluded from coverage (a sketch of the idea appears at the end of this entry) is still counted as non-covered lines, and drags your coverage statistics down. Hmph.

    As well as that, coverage of certain types of code is missed. One such piece of code is definitely covered - I am 100% absolutely certain it is, by several tests - yet NCrunch doesn't pick it up, and down go my coverage statistics. I've had NCrunch find genuinely uncovered code which I've been able to remove, and that's great, but what's the coverage percentage on this project? Umm... I don't know.

    Conclusion

    None of these are major, tool-crippling problems, and I expect NCrunch to get much better in future releases.
    The current version has some great features, like this: given a line of code with a failing test covering it, NCrunch can run that failing test and take me to that line exquisitely easily. That's awesome! I'd happily pay for a tool that can do that. But here's the thing: NCrunch (currently) costs $159 (about £100) for a personal licence and $289 (about £180) for a commercial one. I'm not sure which one I'd need, as my project is a personal one which I'm intending to open-source, but I'm a professional, self-employed developer. In any case, that seems like a lot of money for an imperfect tool. If it did everything it's advertised to do more or less perfectly, I'd consider it, but it doesn't. So no more NCrunch for me.
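    For reference, since the coverage complaint above is hard to picture without the original screenshots, here is a minimal hypothetical sketch (the class and member names are mine, not from the post) of the kind of code in question - a member explicitly opted out of coverage metrics with the standard .NET attribute:

    ```csharp
    using System.Diagnostics.CodeAnalysis;

    public class Widget
    {
        // Trivial plumbing deliberately left untested. Coverage tools are
        // expected to skip members carrying this attribute; the complaint
        // above is that NCrunch counted them as uncovered lines anyway.
        [ExcludeFromCodeCoverage]
        public override string ToString()
        {
            return "Widget";
        }
    }
    ```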

    Read the article

  • How to Hashtag (Without Being #Annoying)

    - by Mike Stiles
    The right tool in the wrong hands can be a dangerous thing. Giving a chimpanzee a chain saw would not be a pretty picture. And putting Twitter hashtags in the hands of social marketers who were never really sure how to use them can be equally unattractive. Boiled down, hashtags are for search and organization of tweets. A notch up from that, they can also be used as part of a marketing strategy.

    In terms of search, if you're in the organic apple business, you want anyone who searches "organic" on Twitter to see your posts about your apples. It's keyword tactics not unlike web site keyword search tactics. So get a clear idea of what keywords are relevant for your tweet. It's reasonable to include #organic in your tweet. Is it fatal if you don't hashtag the word? It depends on the person searching. If they search "organic," your tweet's going to come up even if you didn't put the hashtag in front of it. If the searcher enters "#organic," your tweet needs the hashtag. Err on the side of caution and hashtag it so it comes up no matter how the searcher enters it.

    You'll also want to hashtag it for the second big reason people hashtag: organization. You can follow a hashtag. So can the rest of the Twitterverse. If you're that into organic munchies, you can set up a stream populated only with tweets hashtagged #organic. If you've established a hashtag for your brand, like #nobugsprayapples, you (and everyone else) can watch what people are tweeting about your company.

    So what kind of hashtags should you include? They should be directly related to the core message of your tweet. Ancillary or very loosely-related hashtags = annoying. Hashtagging your brand makes sense. Hashtagging your core area of interest makes sense. Creating a specific event or campaign hashtag you want others to include and spread makes sense (the burden is on you to promote it and get it going). Hashtagging nearly every word in the tweet is highly annoying. Far and away, the majority of hashtagged words in such tweets have no relevance, are not terms that would be searched, and are not terms needed for categorization. It looks desperate and spammy. Two is fine. One is better. And it is possible to tweet with --gasp-- no hashtags!

    Make your hashtags as short as you can. In fact, if your brand's name really is #nobugsprayapples, you're burning up valuable, limited characters and risking the inability of others to retweet with added comments. Also try to narrow your topic hashtag down. You'll find a lot of relevant users with #organic, but a lot of totally uninterested users with #food.

    Just as you can join online forums and gain credibility and a reputation by contributing regularly to that forum, you can follow hashtagged topics and gain the same kind of credibility in your area of expertise. Don't just parachute in for the occasional marketing message. And if you're constantly retweeting one particular person, stop it. It's kissing up and it's obvious.

    Which brings us to the king of hashtag annoyances, "hashjacking." This is when you see what terms are hot and include them in your marketing tweet as a hashtag, even though it's unrelated to your content. Justify it all you want, but #justinbieber has nothing to do with your organic apples. Equally annoying: piggybacking on a popular event's hashtag to tweet something not connected to the event. You're only fostering ill will and mistrust toward your account from the people you've tricked into seeing your tweet.
    Lastly, don't @ mention people just to make sure they see your tweet. If the tweet's not for them or about them, it's spammy. What I haven't covered is use of the hashtag for comedy's sake. You'll see this a lot and it's a matter of personal taste. No one will search these hashtagged terms or need to categorize them; they're just there for self-expression and laughs. Twitter is, after all, supposed to be fun.

    What are some of your biggest Twitter pet peeves? #blogsovernow

    Read the article

  • Increasing efficiency of N-Body gravity simulation

    - by Postman
    I'm making a space exploration type game; it will have many planets and other objects that will all have realistic gravity. I currently have a system in place that works, but if the number of planets goes above 70, the FPS decreases at a practically exponential rate. I'm making it in C# and XNA. My guess is that I should be able to do gravity calculations between 100 objects without this kind of strain, so clearly my method is not as efficient as it should be.

    I have two files, Gravity.cs and EntityEngine.cs. Gravity manages JUST the gravity calculations; EntityEngine creates an instance of Gravity and runs it, along with other entity-related methods.

    EntityEngine.cs (only the relevant piece of code, self-explanatory; when an instance of Gravity is made in EntityEngine, it passes itself (this) into it, so that Gravity can have access to entityEngine.Entities, a dictionary of all planet objects):

    ```csharp
    public void Update()
    {
        foreach (KeyValuePair<string, Entity> e in Entities)
        {
            e.Value.Update();
        }
        gravity.Update();
    }
    ```

    Gravity.cs:

    ```csharp
    namespace ExplorationEngine
    {
        public class Gravity
        {
            private EntityEngine entityEngine;
            private Vector2 Force;
            private Vector2 VecForce;
            private float distance;
            private float mult;

            public Gravity(EntityEngine e)
            {
                entityEngine = e;
            }

            public void Update()
            {
                // First loop
                foreach (KeyValuePair<string, Entity> e in entityEngine.Entities)
                {
                    // Reset the force vector
                    Force = new Vector2();

                    // Second loop
                    foreach (KeyValuePair<string, Entity> e2 in entityEngine.Entities)
                    {
                        // Make sure the second value is not the current value from the first loop
                        if (e2.Value != e.Value)
                        {
                            // Find the distance between the two objects. Because Fg = G * ((M1 * M2) / r^2),
                            // using Vector2.Distance() and then squaring it is pointless and inefficient,
                            // because Distance uses a sqrt and squaring the result simply cancels it.
                            distance = Vector2.DistanceSquared(e2.Value.Position, e.Value.Position);

                            // This makes sure that two planets do not attract each other if they are
                            // touching - completely unnecessary once I add collision. For now it just
                            // keeps the planets from glitching; performance is not significantly
                            // improved by removing this IF.
                            if (Math.Sqrt(distance) > (e.Value.Texture.Width / 2 + e2.Value.Texture.Width / 2))
                            {
                                // Calculate the magnitude of Fg (I'm using my own gravitational
                                // constant (G) for the sake of time - I know it's 1 at the moment,
                                // but I've been changing it).
                                mult = 1.0f * ((e.Value.Mass * e2.Value.Mass) / distance);

                                // Calculate the direction of the force. Simply subtracting the
                                // positions and normalizing works; this keeps diagonal vectors from
                                // having a larger value and basically makes VecForce a direction.
                                VecForce = e2.Value.Position - e.Value.Position;
                                VecForce.Normalize();

                                // Add the vector for each planet in the second loop to a force var.
                                // (I have tried Force += VecForce * mult and have not noticed much
                                // of an increase in speed.)
                                Force = Vector2.Add(Force, VecForce * mult);
                            }
                        }
                    }

                    // Add that force to the first loop's planet's position (later on I'll instead
                    // add to acceleration, to account for inertia).
                    e.Value.Position += Force;
                }
            }
        }
    }
    ```

    I have used various tips (about gravity optimizing, not threading) from THIS question (that I made yesterday). I've made this gravity method (Gravity.Update) as efficient as I know how to make it. This O(N^2) algorithm still seems to be eating up all of my CPU power, though.

    Here is a LINK (Google Drive; go to File > Download, keep the .exe with the content folder; you will need the XNA Framework 4.0 Redist if you don't already have it) to the current version of my game. Left click makes a planet, right click removes the last planet. Mouse moves the camera, scroll wheel zooms in and out. Watch the FPS and planet count to see what I mean about performance issues past 70 planets. (ALL 70 planets must be moving; I've had 100 stationary planets and only 5 or so moving ones while still having 300 fps. The issue arises when 70+ are moving around.)

    After 70 planets are made, performance tanks exponentially. With fewer than 70 planets I get 330 fps (I have it capped at 300). At 90 planets, the FPS is about 2; more than that and it sticks around at 0 FPS. Strangely enough, when all planets are stationary, the FPS climbs back up to around 300, but as soon as something moves, it goes right back down to what it was. I have no systems in place to make this happen; it just does.

    I considered multithreading, but that previous question I asked taught me a thing or two, and I see now that that's not a viable option. I've also thought maybe I could do the calculations on my GPU instead, though I don't think it should be necessary. I also do not know how to do this - it is not a simple concept and I want to avoid it unless someone knows a really noob-friendly, simple way to do it that will work for an n-body gravity calculation. (I have an NVidia GTX 660.)

    Lastly, I've considered using a quadtree-type system (a Barnes-Hut simulation). I've been told (in the previous question) that this is a good method that is commonly used, and it seems logical and straightforward; however, the implementation is way over my head and I haven't found a good tutorial for C# yet that explains it in a way I can understand, or uses code I can eventually figure out. (A rough sketch of the idea is below.)

    So my question is this: How can I make my gravity method more efficient, allowing me to use more than 100 objects (I can render 1000 planets with a constant 300+ FPS without gravity calculations), and if I can't do much to improve performance (including some kind of quadtree system), could I use my GPU to do the calculations?
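    For what it's worth, here is a rough, untested sketch of the Barnes-Hut idea in C#. All the names are mine (not from the game above); it assumes the game's Entity class exposes Position and Mass, and it ignores edge cases such as two bodies at exactly the same position. The idea: build a quadtree over all bodies each frame, cache the total mass and center of mass in each node, and treat any subtree that is far enough away as a single point mass, which turns the O(N^2) pairwise loop into roughly O(N log N):

    ```csharp
    using System;
    using Microsoft.Xna.Framework;

    public class BHNode
    {
        const float Theta = 0.7f;   // accuracy knob: smaller = more accurate but slower

        Vector2 center;             // center of this node's square region
        float halfSize;             // half of the region's side length
        float mass;                 // total mass in this subtree
        Vector2 massCenter;         // center of mass of this subtree
        BHNode[] children;          // null while this node is a leaf
        Entity body;                // the single body stored in a leaf

        public BHNode(Vector2 center, float halfSize)
        {
            this.center = center;
            this.halfSize = halfSize;
        }

        public void Insert(Entity e)
        {
            // Keep the aggregate mass and center of mass current on the way down.
            massCenter = (massCenter * mass + e.Position * e.Mass) / (mass + e.Mass);
            mass += e.Mass;

            if (children == null)
            {
                if (body == null) { body = e; return; }

                // Occupied leaf: split into quadrants and push the old body down.
                Subdivide();
                ChildFor(body.Position).Insert(body);
                body = null;
            }
            ChildFor(e.Position).Insert(e);
        }

        void Subdivide()
        {
            float h = halfSize / 2f;
            children = new BHNode[4];
            children[0] = new BHNode(center + new Vector2(-h, -h), h);
            children[1] = new BHNode(center + new Vector2( h, -h), h);
            children[2] = new BHNode(center + new Vector2(-h,  h), h);
            children[3] = new BHNode(center + new Vector2( h,  h), h);
        }

        BHNode ChildFor(Vector2 p)
        {
            return children[(p.X > center.X ? 1 : 0) + (p.Y > center.Y ? 2 : 0)];
        }

        public Vector2 ForceOn(Entity e, float g)
        {
            if (mass == 0 || body == e) return Vector2.Zero;

            Vector2 d = massCenter - e.Position;
            float distSq = d.LengthSquared();
            if (distSq < 1e-4f) return Vector2.Zero;   // guard against overlap

            float dist = (float)Math.Sqrt(distSq);

            // A leaf, or a subtree whose size/distance ratio is small enough,
            // is treated as one point mass; otherwise recurse into the quadrants.
            if (children == null || (halfSize * 2f) / dist < Theta)
                return (d / dist) * (g * e.Mass * mass / distSq);

            Vector2 total = Vector2.Zero;
            foreach (BHNode c in children) total += c.ForceOn(e, g);
            return total;
        }
    }
    ```

    Per frame you would build a root node over a square that contains every planet, Insert them all, and then replace the inner loop of Gravity.Update with a single root.ForceOn(e.Value, 1.0f) call per planet. Rebuilding the tree every frame sounds expensive, but it is cheap next to the N^2 force loop it replaces.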

    Read the article

  • Database Partitioning and Multiple Data Source Considerations

    - by Jeffrey McDaniel
    With the release of P6 Reporting Database 3.0, partitioning was added as a feature to help with performance and data management. Careful investigation of requirements should be conducted prior to installation; it helps improve overall performance throughout the lifecycle of the data warehouse and prevents future maintenance that would result in data loss. Before installation, try to determine how many data sources and partitions will be required, along with the ranges. In P6 Reporting Database 3.0, any adjustments outside of the defaults must be made in the scripts, and changes will require new ETL runs for each data source.

Considerations:

1. Standard Edition or Enterprise Edition of Oracle Database. If you aren't using Oracle Enterprise Edition Database, the partitioning feature is not available. Multiple data sources are only supported on Enterprise Edition of Oracle Database.

2. Number of data source IDs for partitioning during configuration. This setting specifies how many partitions will be allocated for tables containing data source information. It requires some evaluation prior to installation, as there are repercussions if you don't estimate correctly. For example, suppose you configured the software for only 2 data sources and the partition setting was set to 2, but then along came a 3rd data source. The necessary steps to accommodate this change are as follows:

a) By default, 3 partitions are configured in the Reporting Database scripts. Edit the create_star_tables_part.sql script located in <installation directory>\star\scripts and search for "partition". You'll see P1, P2, P3. Add additional partitions and sub-partitions for P4 and so on; these will appear in several areas. (See the P6 Reporting Database 3.0 Installation and Configuration guide for more information on this and on how to adjust partition ranges.)

b) Run starETL -r. This will recreate each table with the new partition key. The effect of this step is that all table data will be lost except for history-related tables.

c) Run starETL for each of the 3 data sources (with the data source number, e.g. starETL.bat "-s2", as defined in the P6 Reporting Database 3.0 Installation and Configuration guide).

The best strategy for this setting is to overestimate based on possible growth. If during implementation it is deemed that there are at least 2 data sources with the possibility of growth, it is a better idea to set this setting to 4 or 5, allowing room for the future and preventing a 'start over' scenario.

3. The Number of Partitions and the Number of Months per Partition are not specific to multi-data-source setups. These settings apply to sub-partitions of the larger tables with regard to time-related data, and they are dataset-specific optimizations. The number of months per partition is self-explanatory: optimally, the smaller the partition, the better the query performance, so if the dataset has an extremely large number of spread/history records, a lower number of months is optimal. Working in accordance with this setting is the number of partitions, which determines how many "buckets" will be created for the given number-of-months setting.
For example, if you kept the default of 3 for the number of partitions and selected 2 months for each partition, you would end up with:
- 1st partition: 2 months
- 2nd partition: 2 months
- 3rd partition: all the remaining records

Therefore, with regard to this setting, it is important to analyze your source database's spread ranges and history settings when determining the proper number of months per partition and number of partitions to optimize performance. Also be aware that the DBA will need to monitor when these partition ranges will fill up and when additional partitions will need to be added. If you get to the final range partition and there are no additional range partitions, all data will be included in the last partition.

    Read the article

  • Simple BizTalk Orchestration & Port Tutorial

    - by bosuch
    (This is a reference for a lunch & learn I'm giving at my company.) This demo will create a BizTalk process that monitors a directory for an XML file, loads it into an orchestration, and drops it into a different directory. There’s no real processing going on (other than moving the file from one location to another), but this will introduce you to Messages, Orchestrations and Ports.

To begin, create a new BizTalk Project named OrchestrationPortDemo. When the solution has been created, right-click the OrchestrationPortDemo solution name and select Add -> New Item. Add a BizTalk Orchestration named DemoOrchestration. Click Add and the orchestration will be created and displayed in the BizTalk Orchestration Designer. The designer allows you to visually create your business processes.

Next, you will add a message (the basic unit of communication) to the orchestration. In the Orchestration View, right-click Messages and select New Message. In the message properties window, enter DemoMessage as the Identifier (the name), and select .NET Classes -> System.Xml.XmlDocument for Message Type. This indicates that we’ll be passing a standard XML document in and out of the orchestration.

Next, you will add Send and Receive shapes to the orchestration. From the toolbox, drag a Receive shape onto the orchestration (where it says “Drop a shape from the toolbox here”). Next, drag a Send shape directly below the Receive shape. For the properties of both shapes, select DemoMessage for Message – this indicates we’ll be passing around the message we created earlier. The Operation box will have a red exclamation mark next to it because no port has been specified; we will do this in a minute. On the Receive shape properties, you must be sure to select True for Activate. This indicates that the orchestration will be started upon receipt of a message, rather than being called by another orchestration. If you leave it set to False, when you try to build the application you’ll receive the error “You must specify at least one already-initialized correlation set for a non-activation receive that is on a non self-correlating port.”

Now you’ll add ports to the orchestration. Ports specify how your orchestration will send and receive messages. Drag a port from the toolbox to the left-hand Port Surface, and the Port Configuration Wizard launches. For the first port (the receive port), enter the following information:

Name: ReceivePort
Select the port type to be used for this port: Create a new Port Type
Port Type Name: ReceivePortType
Port direction of communication: I’ll always be receiving <…>
Port binding: Specify later

By choosing “Specify later” you are choosing to bind the port (choose where and how it will send or receive its messages) at deployment time via the BizTalk Server Administration console. This allows you to change locations later without building and re-deploying the application.

Next, drag a port to the right-hand Port Surface; this will be your send port. Configure it as follows:

Name: SendPort
Select the port type to be used for this port: Create a new Port Type
Port Type Name: SendPortType
Port direction of communication: I’ll always be sending <…>
Port binding: Specify later

Finally, drag the green arrow on the ReceivePort to the Receive_1 shape, and the green arrow on the SendPort to the Send_1 shape. Your orchestration should look like this:

Now you have a couple of final steps before building and deploying the application. 
In the Solution Explorer, right-click on OrchestrationPortDemo and select Properties. On the Signing tab, click “Sign the assembly”, and choose <New…> from the drop-down. Enter DemoKey as the Key file name, and deselect “Protect my key file with a password”. This will create the file DemoKey.snk in your solution. Signing the assembly gives it a strong name so that it can be deployed into the global assembly cache (GAC). Next, click the Deployment tab, and enter OrchestrationPortDemo as the Application Name. Save your solution. Click “Build OrchestrationPortDemo”. Your solution should (hopefully!) build with no errors. Click “Deploy OrchestrationPortDemo”. (Note – If you’re running Server 2008, Vista or Win7, you may get an error message. If so, close Visual Studio and run it as an administrator) That’s it! Your application is ready to be configured and fired up in the BizTalk Server Administration console, so stay tuned!

    Read the article

  • UITableViewCell selected subview ghosts

    - by Jonathan Cohen
    Hi all, I'm learning about the iPhone SDK and have an interesting exception with UITableViewCell subview management when a finger is pressed on some rows. The table is used to assign sounds to hand gestures -- swiping the phone in one of 3 directions triggers the sound to play. Selecting a row displays an action sheet with 4 options for sound assignment: left, down, right, and cancel. Sounds can be mapped to one, two, or three directions so any cell can have one of seven states: left, down, right, left and down, left and right, down and right, or left down and right. If a row is mapped to any of these seven states, a corresponding arrow or arrows are displayed within the bounds of the row as a subview. Arrows come and go as they should in a given screen and when scrolling around. However, after scrolling to a new batch of rows, only when I press my finger down on some (but not all) rows, does an arrow magically appear in the selected state background. When I lift my finger off the row, and the action sheet appears, the arrow disappears. After pressing any of the four buttons, I can't replicate this anymore. But it's really disorienting and confusing to see this arrow flash on screen because the selected row isn't assigned to anything. What haven't I thought to look into here? All my table code is pasted below and this is a screencast of the problem: http://www.screencast.com/users/JonathanGCohen/folders/Jing/media/d483fe31-05b5-4c24-ab4d-70de4ff3a0bf Am I managing my subviews wrong or is there a selected state property I'm missing? Something else? Should I have included any more information in this post to make things clearer? Thank you!! #pragma mark - #pragma mark Table - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { return ([categories count] > 0) ? 
[categories count] : 1; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { if ([categories count] == 0) return 0; NSMutableString *key = [categories objectAtIndex:section]; NSMutableArray *nameSection = [categoriesSounds objectForKey:key]; return [nameSection count]; } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { NSUInteger section = [indexPath section]; NSUInteger row = [indexPath row]; NSString *key = [categories objectAtIndex:section]; NSArray *nameSection = [categoriesSounds objectForKey:key]; static NSString *SectionsTableIdentifier = @"SectionsTableIdentifier"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier: SectionsTableIdentifier]; NSArray *sound = [categoriesSounds objectForKey:key]; NSString *soundName = [[sound objectAtIndex: row] objectAtIndex: 0]; NSString *soundOfType = [[sound objectAtIndex: row] objectAtIndex: 1]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:SectionsTableIdentifier] autorelease]; } cell.textLabel.text = [[nameSection objectAtIndex:row] objectAtIndex: 0]; NSUInteger soundSection = [[[sound objectAtIndex: row] objectAtIndex: 2] integerValue]; NSUInteger soundRow = [[[sound objectAtIndex: row] objectAtIndex: 3] integerValue]; NSUInteger leftRow = [leftOldIndexPath row]; NSUInteger leftSection = [leftOldIndexPath section]; if (soundRow == leftRow && soundSection == leftSection && leftOldIndexPath !=nil){ [selectedSoundLeftAndDown removeFromSuperview]; [selectedSoundLeftAndRight removeFromSuperview]; [cell.contentView addSubview: selectedSoundLeft]; selectedSoundLeft.frame = CGRectMake(200,8,30,30); } else { [cell.contentView sendSubviewToBack: selectedSoundLeft]; } NSUInteger downRow = [downOldIndexPath row]; NSUInteger downSection = [downOldIndexPath section]; if (soundRow == downRow && soundSection == downSection && downOldIndexPath !=nil){ [selectedSoundLeftAndDown removeFromSuperview]; [selectedSoundDownAndRight removeFromSuperview]; [cell.contentView addSubview: selectedSoundDown]; selectedSoundDown.frame = CGRectMake(200,8,30,30); } else { [cell.contentView sendSubviewToBack: selectedSoundDown]; } NSUInteger rightRow = [rightOldIndexPath row]; NSUInteger rightSection = [rightOldIndexPath section]; if (soundRow == rightRow && soundSection == rightSection && rightOldIndexPath !=nil){ [selectedSoundDownAndRight removeFromSuperview]; [selectedSoundLeftAndRight removeFromSuperview]; [cell.contentView addSubview: selectedSoundRight]; selectedSoundRight.frame = CGRectMake(200,8,30,30); } else { [cell.contentView sendSubviewToBack: selectedSoundRight]; } // combos if (soundRow == leftRow && soundRow == downRow && soundSection == leftSection && soundSection == downSection){ [selectedSoundLeft removeFromSuperview]; [selectedSoundDown removeFromSuperview]; [selectedSoundLeftAndDownAndRight removeFromSuperview]; [cell.contentView addSubview: selectedSoundLeftAndDown]; selectedSoundLeftAndDown.frame = CGRectMake(200,8,30,30); } else { [cell.contentView sendSubviewToBack: selectedSoundLeftAndDown]; } if (soundRow == leftRow && soundRow == rightRow && soundSection == leftSection && soundSection == rightSection){ [selectedSoundLeft removeFromSuperview]; [selectedSoundRight removeFromSuperview]; [selectedSoundLeftAndDownAndRight removeFromSuperview]; [cell.contentView addSubview: selectedSoundLeftAndRight]; selectedSoundLeftAndRight.frame = CGRectMake(200,8,30,30); } else { 
[cell.contentView sendSubviewToBack: selectedSoundLeftAndRight]; } if (soundRow == downRow && soundRow == rightRow && soundSection == downSection && soundSection == rightSection){ [selectedSoundDown removeFromSuperview]; [selectedSoundRight removeFromSuperview]; [selectedSoundLeftAndDownAndRight removeFromSuperview]; [cell.contentView addSubview: selectedSoundDownAndRight]; selectedSoundDownAndRight.frame = CGRectMake(200,8,30,30); } else { [cell.contentView sendSubviewToBack: selectedSoundDownAndRight]; } if (soundRow == leftRow && soundRow == downRow && soundRow == rightRow && soundSection == leftSection && soundSection == downSection && soundSection == rightSection){ [selectedSoundLeftAndDown removeFromSuperview]; [selectedSoundLeftAndRight removeFromSuperview]; [selectedSoundDownAndRight removeFromSuperview]; [selectedSoundLeft removeFromSuperview]; [selectedSoundDown removeFromSuperview]; [selectedSoundRight removeFromSuperview]; [cell.contentView addSubview: selectedSoundLeftAndDownAndRight]; selectedSoundLeftAndDownAndRight.frame = CGRectMake(200,8,30,30); } else { [cell.contentView sendSubviewToBack: selectedSoundLeftAndDownAndRight]; } [indexPath retain]; return cell; } - (NSMutableString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)section { if ([categories count] == 0) return nil; NSMutableString *key = [categories objectAtIndex:section]; if (key == UITableViewIndexSearch) return nil; return key; } - (NSMutableArray *)sectionIndexTitlesForTableView:(UITableView *)tableView { if (isSearching) return nil; return categories; } - (NSIndexPath *)tableView:(UITableView *)tableView willSelectRowAtIndexPath:(NSIndexPath *)indexPath { [table reloadData]; [selectedSoundLeft removeFromSuperview]; [selectedSoundDown removeFromSuperview]; [selectedSoundRight removeFromSuperview]; [selectedSoundLeftAndDown removeFromSuperview]; [selectedSoundLeftAndRight removeFromSuperview]; [selectedSoundDownAndRight removeFromSuperview]; [selectedSoundLeftAndDownAndRight removeFromSuperview]; [search resignFirstResponder]; if (isSearching == YES && [search.text length] != 0 ){ searched = YES; } search.text = @""; isSearching = NO; [tableView reloadData]; [indexPath retain]; [indexPath retain]; return indexPath; } - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { [table reloadData]; selectedIndexPath = indexPath; [table reloadData]; NSInteger section = [indexPath section]; NSInteger row = [indexPath row]; NSString *key = [categories objectAtIndex:section]; NSArray *sound = [categoriesSounds objectForKey:key]; NSString *soundName = [[sound objectAtIndex: row] objectAtIndex: 0]; [indexPath retain]; [indexPath retain]; NSMutableString *title = [NSMutableString stringWithString: @"Assign Gesture for "]; NSMutableString *soundFeedback = [NSMutableString stringWithString: (@"%@", soundName)]; [title appendString: soundFeedback]; UIActionSheet *action = [[UIActionSheet alloc] initWithTitle:(@"%@", title) delegate:self cancelButtonTitle:@"Cancel" destructiveButtonTitle: nil otherButtonTitles:@"Left",@"Down",@"Right",nil]; action.actionSheetStyle = UIActionSheetStyleDefault; [action showInView:self.view]; [action release]; } - (void)actionSheet:(UIActionSheet *)actionSheet clickedButtonAtIndex:(NSInteger)buttonIndex{ NSInteger section = [selectedIndexPath section]; NSInteger row = [selectedIndexPath row]; NSString *key = [categories objectAtIndex:section]; NSArray *sound = [categoriesSounds objectForKey:key]; NSString *soundName = [[sound 
objectAtIndex: row] objectAtIndex: 0]; NSString *soundOfType = [[sound objectAtIndex: row] objectAtIndex: 1]; NSUInteger soundSection = [[[sound objectAtIndex: row] objectAtIndex: 2] integerValue]; NSUInteger soundRow = [[[sound objectAtIndex: row] objectAtIndex: 3] integerValue]; NSLog(@"sound row is %i", soundRow); NSLog(@"sound section is row is %i", soundSection); typedef enum { kLeftButton = 0, kDownButton, kRightButton, kCancelButton } gesture; switch (buttonIndex) { //Left case kLeftButton: showLeft.text = soundName; left = [[NSBundle mainBundle] pathForResource:(@"%@", soundName) ofType:(@"%@", soundOfType)]; AudioServicesCreateSystemSoundID((CFURLRef)[NSURL fileURLWithPath:left], &soundNegZ); AudioServicesPlaySystemSound (soundNegZ); [table deselectRowAtIndexPath:selectedIndexPath animated:YES]; leftIndexSection = [NSNumber numberWithInteger:section]; leftIndexRow = [NSNumber numberWithInteger:row]; NSInteger leftSection = [leftIndexSection integerValue]; NSInteger leftRow = [leftIndexRow integerValue]; NSString *leftKey = [categories objectAtIndex: leftSection]; NSArray *leftSound = [categoriesSounds objectForKey:leftKey]; NSInteger leftSoundSection = [[[leftSound objectAtIndex: leftRow] objectAtIndex: 2] integerValue]; NSInteger leftSoundRow = [[[leftSound objectAtIndex: leftRow] objectAtIndex: 3] integerValue]; leftOldIndexPath = [NSIndexPath indexPathForRow:leftSoundRow inSection:leftSoundSection]; break; //Down case kDownButton: showDown.text = soundName; down = [[NSBundle mainBundle] pathForResource:(@"%@", soundName) ofType:(@"%@", soundOfType)]; AudioServicesCreateSystemSoundID((CFURLRef)[NSURL fileURLWithPath:down], &soundNegX); AudioServicesPlaySystemSound (soundNegX); [table deselectRowAtIndexPath:selectedIndexPath animated:YES]; downIndexSection = [NSNumber numberWithInteger:section]; downIndexRow = [NSNumber numberWithInteger:row]; NSInteger downSection = [downIndexSection integerValue]; NSInteger downRow = [downIndexRow integerValue]; NSString *downKey = [categories objectAtIndex: downSection]; NSArray *downSound = [categoriesSounds objectForKey:downKey]; NSInteger downSoundSection = [[[downSound objectAtIndex: downRow] objectAtIndex: 2] integerValue]; NSInteger downSoundRow = [[[downSound objectAtIndex: downRow] objectAtIndex: 3] integerValue]; downOldIndexPath = [NSIndexPath indexPathForRow:downSoundRow inSection:downSoundSection]; break; //Right case kRightButton: showRight.text = soundName; right = [[NSBundle mainBundle] pathForResource:(@"%@", soundName) ofType:(@"%@", soundOfType)]; AudioServicesCreateSystemSoundID((CFURLRef)[NSURL fileURLWithPath:right], &soundPosX); AudioServicesPlaySystemSound (soundPosX); [table deselectRowAtIndexPath:selectedIndexPath animated:YES]; rightIndexSection = [NSNumber numberWithInteger:section]; rightIndexRow = [NSNumber numberWithInteger:row]; NSInteger rightSection = [rightIndexSection integerValue]; NSInteger rightRow = [rightIndexRow integerValue]; NSString *rightKey = [categories objectAtIndex: rightSection]; NSArray *rightSound = [categoriesSounds objectForKey:rightKey]; NSInteger rightSoundSection = [[[rightSound objectAtIndex: rightRow] objectAtIndex: 2] integerValue]; NSInteger rightSoundRow = [[[rightSound objectAtIndex: rightRow] objectAtIndex: 3] integerValue]; rightOldIndexPath = [NSIndexPath indexPathForRow:rightSoundRow inSection:rightSoundSection]; break; case kCancelButton: [table deselectRowAtIndexPath:selectedIndexPath animated:YES]; break; default: break; } UITableViewCell *viewCell = [table 
cellForRowAtIndexPath: selectedIndexPath]; NSArray *subviews = viewCell.subviews; for (UIView *cellView in subviews){ cellView.alpha = 1; } [table reloadData]; } - (NSInteger)tableView:(UITableView *)tableView sectionForSectionIndexTitle:(NSMutableString *)title atIndex:(NSInteger)index { NSMutableString *category = [categories objectAtIndex:index]; if (category == UITableViewIndexSearch) { [tableView setContentOffset:CGPointZero animated:NO]; return NSNotFound; } else return index; }

    Read the article

  • c# Truncate HTML safely for article summary

    - by WickedW
    Hi all, does anyone have a C# variation of this? It's so I can take some HTML and display it, without breaking it, as a summary lead-in to an article: http://stackoverflow.com/questions/1193500/php-truncate-html-ignoring-tags Save me from reinventing the wheel! Thank you very much. ---------- edit ------------------ Sorry, I'm new here, and you're right, I should have phrased the question better; here's a bit more info. I wish to take an HTML string and truncate it to a set number of words (or even char length) so I can then show the start of it as a summary (which then leads to the main article). I wish to preserve the HTML so I can show the links etc. in the preview. The main issue I have to solve is the fact that we may well end up with unclosed HTML tags if we truncate in the middle of 1 or more tags! The idea I have for a solution is to: a) truncate the HTML to N words (words better, but chars OK) first (being sure not to stop in the middle of a tag or truncate a required attribute); b) work through the opened HTML tags in this truncated string (maybe stick them on a stack as I go?); c) then work through the closing tags and ensure they match the ones on the stack as I pop them off; d) if any open tags are left on the stack after this, write them to the end of the truncated string, and the HTML should be good to go!!!! -- edit 12112009 Here is what I have bumbled together so far as a unit-test file in VS2008; this 'may' help someone in future. My hack attempts based on Jan's code are at the top, for the char version + word version (DISCLAIMER: this is dirty, rough code on my part!!). I assume I'm working with 'well-formed' HTML in all cases (but not necessarily a full document with a root node, as per the XML version). Abel's XML version is at the bottom, but I've not yet got round to fully getting the tests to run on it (plus I need to understand the code)... I will update when I get a chance to refine it. I'm having trouble with posting code; is there no upload facility on Stack? Thanks for all comments :) using System; using System.Collections.Generic; using System.Text.RegularExpressions; using System.Xml; using System.Xml.XPath; using Microsoft.VisualStudio.TestTools.UnitTesting; namespace PINET40TestProject { [TestClass] public class UtilityUnitTest { public static string TruncateHTMLSafeishChar(string text, int charCount) { bool inTag = false; int cntr = 0; int cntrContent = 0; // loop through html, counting only viewable content foreach (Char c in text) { if (cntrContent == charCount) break; cntr++; if (c == '<') { inTag = true; continue; } if (c == '>') { inTag = false; continue; } if (!inTag) cntrContent++; } string substr = text.Substring(0, cntr); //search for nonclosed tags MatchCollection openedTags = new Regex("<[^/](.|\n)*?>").Matches(substr); MatchCollection closedTags = new Regex("<[/](.|\n)*?>").Matches(substr); // create stack Stack<string> opentagsStack = new Stack<string>(); Stack<string> closedtagsStack = new Stack<string>(); // to be honest, this seemed like a good idea then I got lost along the way // so logic is probably hanging by a thread!! foreach (Match tag in openedTags) { string openedtag = tag.Value.Substring(1, tag.Value.Length - 2); // strip any attributes, sure we can use regex for this! 
if (openedtag.IndexOf(" ") >= 0) { openedtag = openedtag.Substring(0, openedtag.IndexOf(" ")); } // ignore brs as self-closed if (openedtag.Trim() != "br") { opentagsStack.Push(openedtag); } } foreach (Match tag in closedTags) { string closedtag = tag.Value.Substring(2, tag.Value.Length - 3); closedtagsStack.Push(closedtag); } if (closedtagsStack.Count < opentagsStack.Count) { while (opentagsStack.Count > 0) { string tagstr = opentagsStack.Pop(); if (closedtagsStack.Count == 0 || tagstr != closedtagsStack.Peek()) { substr += "</" + tagstr + ">"; } else { closedtagsStack.Pop(); } } } return substr; } public static string TruncateHTMLSafeishWord(string text, int wordCount) { bool inTag = false; int cntr = 0; int cntrWords = 0; Char lastc = ' '; // loop through html, counting only viewable content foreach (Char c in text) { if (cntrWords == wordCount) break; cntr++; if (c == '<') { inTag = true; continue; } if (c == '>') { inTag = false; continue; } if (!inTag) { // do not count double spaces, and a space not in a tag counts as a word if (c == 32 && lastc != 32) cntrWords++; } } string substr = text.Substring(0, cntr) + " ..."; //search for nonclosed tags MatchCollection openedTags = new Regex("<[^/](.|\n)*?>").Matches(substr); MatchCollection closedTags = new Regex("<[/](.|\n)*?>").Matches(substr); // create stack Stack<string> opentagsStack = new Stack<string>(); Stack<string> closedtagsStack = new Stack<string>(); foreach (Match tag in openedTags) { string openedtag = tag.Value.Substring(1, tag.Value.Length - 2); // strip any attributes, sure we can use regex for this! if (openedtag.IndexOf(" ") >= 0) { openedtag = openedtag.Substring(0, openedtag.IndexOf(" ")); } // ignore brs as self-closed if (openedtag.Trim() != "br") { opentagsStack.Push(openedtag); } } foreach (Match tag in closedTags) { string closedtag = tag.Value.Substring(2, tag.Value.Length - 3); closedtagsStack.Push(closedtag); } if (closedtagsStack.Count < opentagsStack.Count) { while (opentagsStack.Count > 0) { string tagstr = opentagsStack.Pop(); if (closedtagsStack.Count == 0 || tagstr != closedtagsStack.Peek()) { substr += "</" + tagstr + ">"; } else { closedtagsStack.Pop(); } } } return substr; } public static string TruncateHTMLSafeishCharXML(string text, int charCount) { // your data, probably comes from somewhere, or as params to a methodint XmlDocument xml = new XmlDocument(); xml.LoadXml(text); // create a navigator, this is our primary tool XPathNavigator navigator = xml.CreateNavigator(); XPathNavigator breakPoint = null; // find the text node we need: while (navigator.MoveToFollowing(XPathNodeType.Text)) { string lastText = navigator.Value.Substring(0, Math.Min(charCount, navigator.Value.Length)); charCount -= navigator.Value.Length; if (charCount <= 0) { // truncate the last text. 
Here goes your "search word boundary" code: navigator.SetValue(lastText); breakPoint = navigator.Clone(); break; } } // first remove text nodes, because Microsoft unfortunately merges them without asking while (navigator.MoveToFollowing(XPathNodeType.Text)) { if (navigator.ComparePosition(breakPoint) == XmlNodeOrder.After) { navigator.DeleteSelf(); } } // moves to parent, then move the rest navigator.MoveTo(breakPoint); while (navigator.MoveToFollowing(XPathNodeType.Element)) { if (navigator.ComparePosition(breakPoint) == XmlNodeOrder.After) { navigator.DeleteSelf(); } } // moves to parent // then remove *all* empty nodes to clean up (not necessary): // TODO, add empty elements like <br />, <img /> as exclusion navigator.MoveToRoot(); while (navigator.MoveToFollowing(XPathNodeType.Element)) { while (!navigator.HasChildren && (navigator.Value ?? "").Trim() == "") { navigator.DeleteSelf(); } } // moves to parent navigator.MoveToRoot(); return navigator.InnerXml; } [TestMethod] public void TestTruncateHTMLSafeish() { // Case where we just make it to start of HREF (so effectively an empty link) // 'simple' nested none attributed tags Assert.AreEqual(@"<h1>1234</h1><b><i>56789</i>012</b>", TruncateHTMLSafeishChar( @"<h1>1234</h1><b><i>56789</i>012345</b>", 12)); // In middle of a! Assert.AreEqual(@"<h1>1234</h1><a href=""testurl""><b>567</b></a>", TruncateHTMLSafeishChar( @"<h1>1234</h1><a href=""testurl""><b>5678</b></a><i><strong>some italic nested in string</strong></i>", 7)); // more Assert.AreEqual(@"<div><b><i><strong>1</strong></i></b></div>", TruncateHTMLSafeishChar( @"<div><b><i><strong>12</strong></i></b></div>", 1)); // br Assert.AreEqual(@"<h1>1 3 5</h1><br />6", TruncateHTMLSafeishChar( @"<h1>1 3 5</h1><br />678<br />", 6)); } [TestMethod] public void TestTruncateHTMLSafeishWord() { // zero case Assert.AreEqual(@" ...", TruncateHTMLSafeishWord( @"", 5)); // 'simple' nested none attributed tags Assert.AreEqual(@"<h1>one two <br /></h1><b><i>three ...</i></b>", TruncateHTMLSafeishWord( @"<h1>one two <br /></h1><b><i>three </i>four</b>", 3), "we have added ' ...' to end of summary"); // In middle of a! Assert.AreEqual(@"<h1>one two three </h1><a href=""testurl""><b class=""mrclass"">four ...</b></a>", TruncateHTMLSafeishWord( @"<h1>one two three </h1><a href=""testurl""><b class=""mrclass"">four five </b></a><i><strong>some italic nested in string</strong></i>", 4)); // start of h1 Assert.AreEqual(@"<h1>one two three ...</h1>", TruncateHTMLSafeishWord( @"<h1>one two three </h1><a href=""testurl""><b>four five </b></a><i><strong>some italic nested in string</strong></i>", 3)); // more than words available Assert.AreEqual(@"<h1>one two three </h1><a href=""testurl""><b>four five </b></a><i><strong>some italic nested in string</strong></i> ...", TruncateHTMLSafeishWord( @"<h1>one two three </h1><a href=""testurl""><b>four five </b></a><i><strong>some italic nested in string</strong></i>", 99)); } [TestMethod] public void TestTruncateHTMLSafeishWordXML() { // zero case Assert.AreEqual(@" ...", TruncateHTMLSafeishWord( @"", 5)); // 'simple' nested none attributed tags string output = TruncateHTMLSafeishCharXML( @"<body><h1>one two </h1><b><i>three </i>four</b></body>", 13); Assert.AreEqual(@"<body>\r\n <h1>one two </h1>\r\n <b>\r\n <i>three</i>\r\n </b>\r\n</body>", output, "XML version, no ... yet and addeds '\r\n + spaces?' to format document"); // In middle of a! 
Assert.AreEqual(@"<h1>one two three </h1><a href=""testurl""><b class=""mrclass"">four ...</b></a>", TruncateHTMLSafeishCharXML( @"<body><h1>one two three </h1><a href=""testurl""><b class=""mrclass"">four five </b></a><i><strong>some italic nested in string</strong></i></body>", 4)); // start of h1 Assert.AreEqual(@"<h1>one two three ...</h1>", TruncateHTMLSafeishCharXML( @"<h1>one two three </h1><a href=""testurl""><b>four five </b></a><i><strong>some italic nested in string</strong></i>", 3)); // more than words available Assert.AreEqual(@"<h1>one two three </h1><a href=""testurl""><b>four five </b></a><i><strong>some italic nested in string</strong></i> ...", TruncateHTMLSafeishCharXML( @"<h1>one two three </h1><a href=""testurl""><b>four five </b></a><i><strong>some italic nested in string</strong></i>", 99)); } } }

    Read the article

  • Problems with inheritance query view and one to many association in entity framework 4

    - by Kazys
    Hi, I have a situation that I'm stuck in and can't find a way out of. The problem is in my bigger model, but I have made a small example which shows the same problem. I have 4 tables; I called them SuperParent, NamedParent, TypedParent and ParentType. NamedParent and TypedParent derive from SuperParent. TypedParent has a one-to-many association with ParentType. I describe the mapping for the entities using QueryView. The problem is that when I want to get TypedParents and Include ParentType, I get the following exception: An error occurred while preparing the command definition. See the inner exception for details. --- System.ArgumentException: The ResultType of the specified expression is not compatible with the required type. The expression ResultType is 'Transient.reference[PasibandymaiModel.SuperParent]' but the required type is 'Transient.reference[PasibandymaiModel.TypedParent]'. Parameter name: arguments[1] To get TypedParents I use the following code: context.SuperParent.OfType<TypedParent>().Include("ParentType"); my edmx file: <edmx:Edmx Version="2.0" xmlns:edmx="http://schemas.microsoft.com/ado/2008/10/edmx"> <!-- EF Runtime content --> <edmx:Runtime> <!-- SSDL content --> <edmx:StorageModels> <Schema Namespace="PasibandymaiModel.Store" Alias="Self" Provider="System.Data.SqlClient" ProviderManifestToken="2005" xmlns:store="http://schemas.microsoft.com/ado/2007/12/edm/EntityStoreSchemaGenerator" xmlns="http://schemas.microsoft.com/ado/2009/02/edm/ssdl"> <EntityContainer Name="PasibandymaiModelStoreContainer"> <EntitySet Name="NamedParent" EntityType="PasibandymaiModel.Store.NamedParent" store:Type="Tables" Schema="dbo" /> <EntitySet Name="ParentType" EntityType="PasibandymaiModel.Store.ParentType" store:Type="Tables" Schema="dbo" /> <EntitySet Name="SuperParent" EntityType="PasibandymaiModel.Store.SuperParent" store:Type="Tables" Schema="dbo" /> <EntitySet Name="TypedParent" EntityType="PasibandymaiModel.Store.TypedParent" store:Type="Tables" Schema="dbo" /> <AssociationSet Name="fk_NamedParent_SuperParent" Association="PasibandymaiModel.Store.fk_NamedParent_SuperParent"> <End Role="SuperParent" EntitySet="SuperParent" /> <End Role="NamedParent" EntitySet="NamedParent" /> </AssociationSet> <AssociationSet Name="fk_TypedParent_ParentType" Association="PasibandymaiModel.Store.fk_TypedParent_ParentType"> <End Role="ParentType" EntitySet="ParentType" /> <End Role="TypedParent" EntitySet="TypedParent" /> </AssociationSet> <AssociationSet Name="fk_TypedParent_SuperParent" Association="PasibandymaiModel.Store.fk_TypedParent_SuperParent"> <End Role="SuperParent" EntitySet="SuperParent" /> <End Role="TypedParent" EntitySet="TypedParent" /> </AssociationSet> </EntityContainer> <EntityType Name="NamedParent"> <Key> <PropertyRef Name="ParentId" /> </Key> <Property Name="ParentId" Type="int" Nullable="false" /> <Property Name="Name" Type="nvarchar" Nullable="false" MaxLength="100" /> </EntityType> <EntityType Name="ParentType"> <Key> <PropertyRef Name="ParentTypeId" /> </Key> <Property Name="ParentTypeId" Type="int" Nullable="false" StoreGeneratedPattern="Identity" /> <Property Name="Name" Type="nvarchar" MaxLength="100" /> </EntityType> <EntityType Name="SuperParent"> <Key> <PropertyRef Name="ParentId" /> </Key> <Property Name="ParentId" Type="int" Nullable="false" StoreGeneratedPattern="Identity" /> <Property Name="SomeAttribute" Type="nvarchar" Nullable="false" MaxLength="100" /> </EntityType> <EntityType Name="TypedParent"> <Key> <PropertyRef Name="ParentId" /> </Key> <Property Name="ParentId" Type="int" Nullable="false" /> 
<Property Name="ParentTypeId" Type="int" Nullable="false"/> </EntityType> <Association Name="fk_NamedParent_SuperParent"> <End Role="SuperParent" Type="PasibandymaiModel.Store.SuperParent" Multiplicity="1" /> <End Role="NamedParent" Type="PasibandymaiModel.Store.NamedParent" Multiplicity="0..1" /> <ReferentialConstraint> <Principal Role="SuperParent"> <PropertyRef Name="ParentId" /> </Principal> <Dependent Role="NamedParent"> <PropertyRef Name="ParentId" /> </Dependent> </ReferentialConstraint> </Association> <Association Name="fk_TypedParent_ParentType"> <End Role="ParentType" Type="PasibandymaiModel.Store.ParentType" Multiplicity="1" /> <End Role="TypedParent" Type="PasibandymaiModel.Store.TypedParent" Multiplicity="*" /> <ReferentialConstraint> <Principal Role="ParentType"> <PropertyRef Name="ParentTypeId" /> </Principal> <Dependent Role="TypedParent"> <PropertyRef Name="ParentTypeId" /> </Dependent> </ReferentialConstraint> </Association> <Association Name="fk_TypedParent_SuperParent"> <End Role="SuperParent" Type="PasibandymaiModel.Store.SuperParent" Multiplicity="1" /> <End Role="TypedParent" Type="PasibandymaiModel.Store.TypedParent" Multiplicity="0..1" /> <ReferentialConstraint> <Principal Role="SuperParent"> <PropertyRef Name="ParentId" /> </Principal> <Dependent Role="TypedParent"> <PropertyRef Name="ParentId" /> </Dependent> </ReferentialConstraint> </Association> <Function Name="ChildDelete" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ChildId" Type="int" Mode="In" /> </Function> <Function Name="ChildInsert" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="Name" Type="nvarchar" Mode="In" /> <Parameter Name="ParentId" Type="int" Mode="In" /> </Function> <Function Name="ChildUpdate" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ChildId" Type="int" Mode="In" /> <Parameter Name="ParentId" Type="int" Mode="In" /> <Parameter Name="Name" Type="nvarchar" Mode="In" /> </Function> <Function Name="NamedParentDelete" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentId" Type="int" Mode="In" /> </Function> <Function Name="NamedParentInsert" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="Name" Type="nvarchar" Mode="In" /> <Parameter Name="SomeAttribute" Type="nvarchar" Mode="In" /> </Function> <Function Name="NamedParentUpdate" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentId" Type="int" Mode="In" /> <Parameter Name="SomeAttribute" Type="nvarchar" Mode="In" /> <Parameter Name="Name" Type="nvarchar" Mode="In" /> </Function> <Function Name="ParentTypeDelete" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentTypeId" Type="int" Mode="In" /> </Function> <Function Name="ParentTypeInsert" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" 
Schema="dbo"> <Parameter Name="Name" Type="nvarchar" Mode="In" /> </Function> <Function Name="ParentTypeUpdate" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentTypeId" Type="int" Mode="In" /> <Parameter Name="Name" Type="nvarchar" Mode="In" /> </Function> <Function Name="TypedParentDelete" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentId" Type="int" Mode="In" /> </Function> <Function Name="TypedParentInsert" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentTypeId" Type="int" Mode="In" /> <Parameter Name="SomeAttribute" Type="nvarchar" Mode="In" /> </Function> <Function Name="TypedParentUpdate" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo"> <Parameter Name="ParentId" Type="int" Mode="In" /> <Parameter Name="SomeAttribute" Type="nvarchar" Mode="In" /> <Parameter Name="ParentTypeId" Type="int" Mode="In" /> </Function> </Schema> </edmx:StorageModels> <!-- CSDL content --> <edmx:ConceptualModels> <Schema Namespace="PasibandymaiModel" Alias="Self" xmlns:annotation="http://schemas.microsoft.com/ado/2009/02/edm/annotation" xmlns="http://schemas.microsoft.com/ado/2008/09/edm"> <EntityContainer Name="PasibandymaiEntities" annotation:LazyLoadingEnabled="true"> <EntitySet Name="ParentType" EntityType="PasibandymaiModel.ParentType" /> <EntitySet Name="SuperParent" EntityType="PasibandymaiModel.SuperParent" /> <AssociationSet Name="ParentTypeTypedParent" Association="PasibandymaiModel.ParentTypeTypedParent"> <End Role="ParentType" EntitySet="ParentType" /> <End Role="TypedParent" EntitySet="SuperParent" /> </AssociationSet> </EntityContainer> <EntityType Name="NamedParent" BaseType="PasibandymaiModel.SuperParent"> <Property Type="String" Name="Name" Nullable="false" MaxLength="100" FixedLength="false" Unicode="true" /> </EntityType> <EntityType Name="ParentType"> <Key> <PropertyRef Name="ParentTypeId" /> </Key> <Property Type="Int32" Name="ParentTypeId" Nullable="false" annotation:StoreGeneratedPattern="Identity" /> <Property Type="String" Name="Name" MaxLength="100" FixedLength="false" Unicode="true" /> <NavigationProperty Name="TypedParent" Relationship="PasibandymaiModel.ParentTypeTypedParent" FromRole="ParentType" ToRole="TypedParent" /> </EntityType> <EntityType Name="SuperParent" Abstract="true"> <Key> <PropertyRef Name="ParentId" /> </Key> <Property Type="Int32" Name="ParentId" Nullable="false" annotation:StoreGeneratedPattern="Identity" /> <Property Type="String" Name="SomeAttribute" Nullable="false" MaxLength="100" FixedLength="false" Unicode="true" /> </EntityType> <EntityType Name="TypedParent" BaseType="PasibandymaiModel.SuperParent"> <NavigationProperty Name="ParentType" Relationship="PasibandymaiModel.ParentTypeTypedParent" FromRole="TypedParent" ToRole="ParentType" /> <Property Type="Int32" Name="ParentTypeId" Nullable="false" /> </EntityType> <Association Name="ParentTypeTypedParent"> <End Type="PasibandymaiModel.ParentType" Role="ParentType" Multiplicity="1" /> <End Type="PasibandymaiModel.TypedParent" Role="TypedParent" Multiplicity="*" /> <ReferentialConstraint> <Principal Role="ParentType"> <PropertyRef Name="ParentTypeId" /> </Principal> 
<Dependent Role="TypedParent"> <PropertyRef Name="ParentTypeId" /> </Dependent> </ReferentialConstraint> </Association> </Schema> </edmx:ConceptualModels> <!-- C-S mapping content --> <edmx:Mappings> <Mapping Space="C-S" xmlns="http://schemas.microsoft.com/ado/2008/09/mapping/cs"> <EntityContainerMapping StorageEntityContainer="PasibandymaiModelStoreContainer" CdmEntityContainer="PasibandymaiEntities"> <EntitySetMapping Name="ParentType"> <QueryView> SELECT VALUE PasibandymaiModel.ParentType(tp.ParentTypeId, tp.Name) FROM PasibandymaiModelStoreContainer.ParentType AS tp </QueryView> </EntitySetMapping> <EntitySetMapping Name="SuperParent"> <QueryView> SELECT VALUE CASE WHEN (np.ParentId IS NOT NULL) THEN PasibandymaiModel.NamedParent(sp.ParentId, sp.SomeAttribute, np.Name) WHEN (tp.ParentId IS NOT NULL) THEN PasibandymaiModel.TypedParent(sp.ParentId, sp.SomeAttribute, tp.ParentTypeId) END FROM PasibandymaiModelStoreContainer.SuperParent AS sp LEFT JOIN PasibandymaiModelStoreContainer.NamedParent AS np ON sp.ParentId = np.ParentId LEFT JOIN PasibandymaiModelStoreContainer.TypedParent AS tp ON sp.ParentId = tp.ParentId </QueryView> <QueryView TypeName="PasibandymaiModel.TypedParent"> SELECT VALUE PasibandymaiModel.TypedParent(sp.ParentId, sp.SomeAttribute, tp.ParentTypeId) FROM PasibandymaiModelStoreContainer.SuperParent AS sp INNER JOIN PasibandymaiModelStoreContainer.TypedParent AS tp ON sp.ParentId = tp.ParentId </QueryView> <QueryView TypeName="PasibandymaiModel.NamedParent"> SELECT VALUE PasibandymaiModel.NamedParent(sp.ParentId, sp.SomeAttribute, np.Name) FROM PasibandymaiModelStoreContainer.SuperParent AS sp INNER JOIN PasibandymaiModelStoreContainer.NamedParent AS np ON sp.ParentId = np.ParentId </QueryView> </EntitySetMapping> </EntityContainerMapping> </Mapping> </edmx:Mappings> </edmx:Runtime> </edmx:Edmx> I have tried using AssociationSetMapping instead of using Association with ReferentialConstraint. But then couldn't insert related entities at once, becouse entity framework didn't provided entity key of inserted entities for related entities. Thanks for any idea

    Read the article

  • SignalR Server Error "The ConnectionId is in the incorrect format." with SignalR-ObjC Library

    - by ozzotto
    Before asking a separate question, I did lots of googling about it and added a comment on the already-existing Stack Overflow question. I have a SignalR Hub (I tried both v1.1.3 and 2.0.0-rc) on my server with the code below:

[HubName("TestHub")]
public class TestHub : Hub
{
    [Authorize]
    public void TestMethod(string test)
    {
        //some stuff here
        Clients.Caller.NotifyOnTestCompleted();
    }
}

The problem persists if I remove the Authorize attribute. And in my iOS client I try to call it with the code below:

SRHubConnection *hubConnection = [SRHubConnection connectionWithURL:_baseURL];
SRHubProxy *hubProxy = [hubConnection createHubProxy:@"TestHub"];
[hubProxy on:@"NotifyOnTestCompleted" perform:self selector:@selector(stopConnection)];
hubConnection.started = ^{
    [hubProxy invoke:@"TestMethod" withArgs:@[@"test"]];
};
//received, error handling
[hubConnection start];

When the app starts, the user is not logged in and there is no open SignalR connection. The user logs in by calling a Login service on the server, which makes use of the WebSecurity.Login method. If the login service returns success, I then make the above call to the SignalR Hub and get server error 500 with the description "The ConnectionId is in the incorrect format.". The full server stack trace is the following: Exception information: Exception type: InvalidOperationException Exception message: The ConnectionId is in the incorrect format. at Microsoft.AspNet.SignalR.PersistentConnection.GetConnectionId(HostContext context, String connectionToken) at Microsoft.AspNet.SignalR.PersistentConnection.ProcessRequest(HostContext context) at Microsoft.AspNet.SignalR.Hubs.HubDispatcher.ProcessRequest(HostContext context) at Microsoft.AspNet.SignalR.PersistentConnection.ProcessRequest(IDictionary`2 environment) at Microsoft.Owin.Mapping.MapMiddleware.<Invoke>d__0.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at Microsoft.Owin.Host.SystemWeb.IntegratedPipeline.IntegratedPipelineContext.EndFinalWork(IAsyncResult ar) at System.Web.HttpApplication.AsyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) Request information: Request URL: http://myserverip/signalr/signalr/connect?transport=webSockets&connectionToken=axJs EQMZxpmUopL36owSUkdhNs85E0fyB2XvV5R5znZfXYI/CiPbTRQ3kASc3 mq60cLkZU7coYo1P fbC0U1LR2rI6WIvCNIMOmv/mHut/Unt9mX3XFkQb053DmWgCan5zHA==&connectionData=[{"Name":"testhub"}] Request path: /signalr/signalr/connect User host address: User: Is authenticated: False Authentication Type: Thread account name: IIS APPPOOL\DefaultAppPool Thread information: Thread ID: 14 Thread account name: IIS APPPOOL\DefaultAppPool Is impersonating: True Stack trace: at Microsoft.AspNet.SignalR.PersistentConnection.GetConnectionId(HostContext context, String connectionToken) at Microsoft.AspNet.SignalR.PersistentConnection.ProcessRequest(HostContext context) at Microsoft.AspNet.SignalR.Hubs.HubDispatcher.ProcessRequest(HostContext context) at Microsoft.AspNet.SignalR.PersistentConnection.ProcessRequest(IDictionary`2 environment) at Microsoft.Owin.Mapping.MapMiddleware.<Invoke>d__0.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at Microsoft.Owin.Host.SystemWeb.IntegratedPipeline.IntegratedPipelineContext.EndFinalWork(IAsyncResult ar) at 
System.Web.HttpApplication.AsyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) I understand this is some kind of authentication and user identity mismatch, but up to now I have found no way of solving it. All other questions suggest stopping the open connection when the user identity changes, but as I mentioned above, I have no open connection before the user logs in successfully. Any help would be much appreciated. Thank you.

    Read the article

  • WPF CommandParameter is NULL first time CanExecute is called

    - by Jonas Follesø
    I have run into an issue with WPF and Commands that are bound to a Button inside the DataTemplate of an ItemsControl. The scenario is quite straightforward. The ItemsControl is bound to a list of objects, and I want to be able to remove each object in the list by clicking a Button. The Button executes a Command, and the Command takes care of the deletion. The CommandParameter is bound to the object I want to delete; that way I know what the user clicked. A user should only be able to delete their "own" objects, so I need to do some checks in the "CanExecute" call of the Command to verify that the user has the right permissions. The problem is that the parameter passed to CanExecute is NULL the first time it's called, so I can't run the logic to enable/disable the command. However, if I make it always enabled and then click the button to execute the command, the CommandParameter is passed in correctly. So that means the binding against the CommandParameter is working. The XAML for the ItemsControl and the DataTemplate looks like this:

<ItemsControl x:Name="commentsList"
              ItemsSource="{Binding Path=SharedDataItemPM.Comments}"
              Width="Auto" Height="Auto">
    <ItemsControl.ItemTemplate>
        <DataTemplate>
            <StackPanel Orientation="Horizontal">
                <Button Content="Delete" FontSize="10"
                        Command="{Binding Path=DataContext.DeleteCommentCommand, ElementName=commentsList}"
                        CommandParameter="{Binding}" />
            </StackPanel>
        </DataTemplate>
    </ItemsControl.ItemTemplate>
</ItemsControl>

So as you can see, I have a list of Comments objects. I want the CommandParameter of the DeleteCommentCommand to be bound to the Comment object. So I guess my question is: has anyone experienced this problem before? CanExecute gets called on my Command, but the parameter is always NULL the first time; why is that?

Update: I was able to narrow the problem down a little. I added an empty Debug ValueConverter so that I could output a message when the CommandParameter is data-bound. It turns out the problem is that the CanExecute method is executed before the CommandParameter is bound to the button. I have tried to set the CommandParameter before the Command (as suggested), but it still doesn't work. Any tips on how to control this?

Update 2: Is there any way to detect when the binding is "done", so that I can force re-evaluation of the command? Also, is it a problem that I have multiple Buttons (one for each item in the ItemsControl) that bind to the same instance of a Command object?

Update 3: I have uploaded a reproduction of the bug to my SkyDrive: http://cid-1a08c11c407c0d8e.skydrive.live.com/self.aspx/Code%20samples/CommandParameterBinding.zip
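    Regarding Update 2: assuming the command's CanExecuteChanged is hooked up to CommandManager.RequerySuggested (RoutedCommand does this by default, and many RelayCommand implementations forward to it), a minimal sketch of forcing a re-query after the current binding/layout pass might look like this (the helper name is made up):

using System;
using System.Windows.Input;
using System.Windows.Threading;

static class CommandRequery
{
    // Queue a CanExecute re-query for after the current binding/layout pass,
    // so CommandParameter bindings have been applied before CanExecute runs.
    public static void RequeryAfterBinding()
    {
        Dispatcher.CurrentDispatcher.BeginInvoke(
            DispatcherPriority.Loaded,
            new Action(CommandManager.InvalidateRequerySuggested));
    }
}

Sharing one Command instance across all the rows should be fine in itself, as long as CanExecute depends only on the parameter it is handed.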

    Read the article

  • iPhone UIButton addTarget:action:forControlEvents: not working

    - by Aaron Vegh
    I see there are similar problems posted here, but none of the solutions work for me. Here goes: I have a UIButton instance inside a UIView frame, which is positioned within another UIView frame, which is positioned in the main window UIView. To wit: UIWindow --> UIView (searchView) --> UISearchBar (findField) --> UIView (prevButtonView) --> UIButton (prevButton) --> UIView (nextButtonView) --> UIButton (nextButton) So far, so good: everything is laid out as I want it. However, the buttons aren't accepting user input of any kind. I am using the UIButton method addTarget:action:forControlEvents: and to give you an idea of what I'm doing, here's my code for nextButton: nextButton = [UIButton buttonWithType:UIButtonTypeCustom]; [nextButton setImage:[UIImage imageNamed:@"find_next_on.png"] forState:UIControlStateNormal]; [nextButton setImage:[UIImage imageNamed:@"find_next_off.png"] forState:UIControlStateDisabled]; [nextButton setImage:[UIImage imageNamed:@"find_next_off.png"] forState:UIControlStateHighlighted]; [nextButton addTarget:self action:@selector(nextResult:) forControlEvents:UIControlEventTouchUpInside]; The method nextResult: never gets called, the state of the button doesn't change on touching, and there's not a damned thing I've been able to do to get it working. I assume that there's an issue with all these layers of views: maybe something is sitting on top of my button, but I'm not sure what it could be. In the interest of information overload, I found a bit of code that would print out all my view info. This is what I get for my entire view hierarchy: UIWindow {{0, 0}, {320, 480}} UILayoutContainerView {{0, 0}, {320, 480}} UINavigationTransitionView {{0, 0}, {320, 480}} UIViewControllerWrapperView {{0, 64}, {320, 416}} UIView {{0, 0}, {320, 416}} UIWebView {{0, 44}, {320, 416}} UIScroller {{0, 0}, {320, 416}} UIImageView {{0, 0}, {54, 54}} UIImageView {{0, 0}, {54, 54}} UIImageView {{0, 0}, {54, 54}} UIImageView {{0, 0}, {54, 54}} UIImageView {{-14.5, 14.5}, {30, 1}} UIImageView {{-14.5, 14.5}, {30, 1}} UIImageView {{0, 0}, {1, 30}} UIImageView {{0, 0}, {1, 30}} UIImageView {{0, 430}, {320, 30}} UIImageView {{0, 0}, {320, 30}} UIWebDocumentView {{0, 0}, {320, 21291}} UIView {{0, 0}, {320, 44}} UISearchBar {{10, 8}, {240, 30}} UISearchBarBackground {{0, 0}, {240, 30}} UISearchBarTextField {{5, -2}, {230, 31}} UITextFieldBorderView {{0, 0}, {230, 31}} UIPushButton {{205, 6}, {19, 19}} UIImageView {{10, 8}, {15, 15}} UILabel {{30, 7}, {163, 18}} UIView {{290, 15}, {23, 23}} UIButton {{0, 0}, {0, 0}} UIImageView {{-12, -12}, {23, 23}} UIView {{260, 15}, {23, 23}} UIButton {{0, 0}, {0, 0}} UIImageView {{-12, -12}, {23, 23}} UINavigationBar {{0, 20}, {320, 44}} UINavigationButton {{267, 7}, {48, 30}} UIImageView {{0, 0}, {48, 30}} UIButtonLabel {{11, 7}, {26, 15}} UINavigationItemView {{79, 8}, {180, 27}} UINavigationItemButtonView {{5, 7}, {66, 30}} MBProgressHUD {{0, 0}, {320, 480}} UIActivityIndicatorView {{141, 206}, {37, 37}} UILabel {{117, 247}, {86, 27}} The relevant part is noted above the UINavigationBar section. Anyone have any suggestions? I'm all out. Thanks for reading. Aaron.

    Read the article

  • System.ServiceModel.Syndication.SyndicationFeed throws when the RSS document includes a <script> block

    - by Cheeso
    The code: using (XmlReader xmlr = XmlReader.Create(new StringReader(allXml))) { var items = from item in SyndicationFeed.Load(xmlr).Items select item; } The exception: Exception: System.Xml.XmlException: Unexpected node type Element. ReadElementString method can only be called on elements with simple or empty content. Line 11, position 25. at System.Xml.XmlReader.ReadElementString() at System.ServiceModel.Syndication.Rss20FeedFormatter.ReadXml(XmlReader reader, SyndicationFeed result) at System.ServiceModel.Syndication.Rss20FeedFormatter.ReadFeed(XmlReader reader) at System.ServiceModel.Syndication.Rss20FeedFormatter.ReadFrom(XmlReader reader) at System.ServiceModel.Syndication.SyndicationFeed.Load[TSyndicationFeed](XmlReader reader) at System.ServiceModel.Syndication.SyndicationFeed.Load(XmlReader reader) at Ionic.ToolsAndTests.ReadRss.Run() in c:\dev\dotnet\ReadRss.cs:line 90 The XML content: <?xml version="1.0" encoding="utf-8"?> <?xml-stylesheet type="text/xsl" href="https://www.ibm.com/developerworks/mydeveloperworks/blogs/roller-ui/styles/rss.xsl" media="screen"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" > <channel> <title>Software architecture, software engineering, and Renaissance Jazz</title> <link>https://www.ibm.com/developerworks/mydeveloperworks/blogs/gradybooch</link> <atom:link rel="self" type="application/rss+xml" href="https://www.ibm.com/developerworks/mydeveloperworks/blogs/gradybooch/feed/entries/rss?lang=en" /> <description>Software architecture, software engineering, and Renaissance Jazz</description> <language>en-us</language> <copyright>Copyright <script type='text/javascript'> document.write(blogsDate.date.localize (1273534889181));</script></copyright> <lastBuildDate>Mon, 10 May 2010 19:41:29 -0400</lastBuildDate> As you can see, on line 11, at position 25, there's a script block inside the <copyright> element. Other people have reported similar errors with other XML documents. The way I worked around this was to do a StreamReader.ReadToEnd, then do Regex.Replace on the result of that to yank out the script block, before passing the modified string to XmlReader.Create(). Feels like a hack. Has anyone got a better approach? I don't like this because I have to read a 125k string into memory. Is it valid RSS to include "complex content" like that - a script block inside an element?
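    For reference, here is a minimal sketch of the Regex.Replace workaround described above (the method name is made up, and it shares the same drawback of holding the whole document in memory):

using System.IO;
using System.ServiceModel.Syndication;
using System.Text.RegularExpressions;
using System.Xml;

static class FeedLoader
{
    // Strip <script>...</script> blocks before handing the document to
    // Rss20FeedFormatter, which expects simple content in elements like <copyright>.
    public static SyndicationFeed LoadIgnoringScripts(string allXml)
    {
        string cleaned = Regex.Replace(
            allXml,
            @"<script[^>]*>.*?</script>",
            string.Empty,
            RegexOptions.Singleline | RegexOptions.IgnoreCase);

        using (XmlReader xmlr = XmlReader.Create(new StringReader(cleaned)))
        {
            return SyndicationFeed.Load(xmlr);
        }
    }
}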


  • Sunrise / set calculations

    - by dassouki
    I'm trying to calculate sunrise / sunset times in Python, based on the algorithm linked below. My results from both Excel and Python do not match the real values. Any ideas on what I could be doing wrong? My Excel sheet can be found at http://transpotools.com/sun_time.xls

        # Created on 2010-03-28
        # @author: dassouki
        # @source: http://williams.best.vwh.net/sunrise_sunset_algorithm.htm
        # @summary: this is based on the Nautical Almanac Office, United
        #   States Naval Observatory.

        import math, sys

        class TimeOfDay(object):

            def calculate_time(self, in_day, in_month, in_year, lat, long,
                               is_rise, utc_time_zone):
                # is_rise is a bool: True indicates rise, False indicates set

                # set zenith
                zenith = 96
                # official = 90 degrees 50'
                # civil = 96 degrees
                # nautical = 102 degrees
                # astronomical = 108 degrees

                # 1- calculate the day of year
                n1 = math.floor(275 * in_month / 9)
                n2 = math.floor((in_month + 9) / 12)
                n3 = (1 + math.floor(in_year - 4 * math.floor(in_year / 4) + 2) / 3)
                new_day = n1 - (n2 * n3) + in_day - 30
                print "new_day ", new_day

                # 2- calculate rising / setting time
                if is_rise:
                    rise_or_set_time = new_day + ((6 - (long / 15)) / 24)
                else:
                    rise_or_set_time = new_day + ((18 - (long / 15)) / 24)
                print "rise / set", rise_or_set_time

                # 3- calculate sun mean anomaly
                sun_mean_anomaly = (0.9856 * rise_or_set_time) - 3.289
                print "sun mean anomaly", sun_mean_anomaly

                # 4- calculate true longitude
                true_long = (sun_mean_anomaly
                             + (1.916 * math.sin(math.radians(sun_mean_anomaly)))
                             + (0.020 * math.sin(2 * math.radians(sun_mean_anomaly)))
                             + 282.634)
                print "true long ", true_long

                # make sure true_long is within 0, 360
                if true_long < 0:
                    true_long = true_long + 360
                elif true_long > 360:
                    true_long = true_long - 360
                print "true long (360 if) ", true_long

                # 5- calculate s_r_a (sun right ascension)
                s_r_a = math.degrees(math.atan(0.91764 * math.tan(math.radians(true_long))))
                print "s_r_a is ", s_r_a

                # make sure it's between 0 and 360
                if s_r_a < 0:
                    s_r_a = s_r_a + 360
                elif true_long > 360:
                    s_r_a = s_r_a - 360
                print "s_r_a (modified) is ", s_r_a

                # s_r_a has to be in the same quadrant as true_long
                true_long_quad = (math.floor(true_long / 90)) * 90
                s_r_a_quad = (math.floor(s_r_a / 90)) * 90
                s_r_a = s_r_a + (true_long_quad - s_r_a_quad)
                print "s_r_a (quadrant) is ", s_r_a

                # convert s_r_a to hours
                s_r_a = s_r_a / 15
                print "s_r_a (to hours) is ", s_r_a

                # 6- calculate sun declination in terms of cos and sin
                sin_declanation = 0.39782 * math.sin(math.radians(true_long))
                cos_declanation = math.cos(math.asin(sin_declanation))
                print " sin/cos declanations ", sin_declanation, ", ", cos_declanation

                # sun local hour
                cos_hour = (math.cos(math.radians(zenith))
                            - (sin_declanation * math.sin(math.radians(lat)))
                            / (cos_declanation * math.cos(math.radians(lat))))
                print "cos_hour ", cos_hour

                # extreme north / south
                if cos_hour > 1:
                    print "Sun Never Rises at this location on this date, exiting"
                    # sys.exit()
                elif cos_hour < -1:
                    print "Sun Never Sets at this location on this date, exiting"
                    # sys.exit()
                print "cos_hour (2)", cos_hour

                # 7- sunrise / sunset local time calculations
                if is_rise:
                    sun_local_hour = (360 - math.degrees(math.acos(cos_hour))) / 15
                else:
                    sun_local_hour = math.degrees(math.acos(cos_hour)) / 15
                print "sun local hour ", sun_local_hour

                sun_event_time = (sun_local_hour + s_r_a
                                  - (0.06571 * rise_or_set_time) - 6.622)
                print "sun event time ", sun_event_time

                # final result
                time_in_utc = sun_event_time - (long / 15) + utc_time_zone
                return time_in_utc

        # test through main
        def main():
            print "Time of day App "
            # test: fredericton, NB
            # answer: 7:34 am
            long = 66.6
            lat = -45.9
            utc_time = -4
            d = 3
            m = 3
            y = 2010
            is_rise = True
            tod = TimeOfDay()
            print "TOD is ", tod.calculate_time(d, m, y, lat, long, is_rise, utc_time)

        if __name__ == "__main__":
            main()
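    Three things in the code above look suspect (educated guesses from reading it against the cited algorithm, not verified against the spreadsheet). First, the cos_hour expression divides only the sin product by (cos_declanation * cos(lat)), whereas the algorithm divides the whole numerator. Second, the second clamp tests true_long where it presumably means s_r_a. Third, the Fredericton test coordinates look sign-swapped: the city is at roughly 45.9 N, 66.6 W, which in the algorithm's west-negative convention would be lat = 45.9, long = -66.6. A sketch of the suspected corrections:

        import math

        # 1. Parenthesize the numerator as in the USNO algorithm:
        #    cos H = (cos(zenith) - sin(D) * sin(lat)) / (cos(D) * cos(lat))
        def cos_hour_angle(zenith, sin_declanation, cos_declanation, lat):
            return ((math.cos(math.radians(zenith))
                     - sin_declanation * math.sin(math.radians(lat)))
                    / (cos_declanation * math.cos(math.radians(lat))))

        # 2. Clamp s_r_a on its own value, not true_long's:
        #    elif s_r_a > 360: s_r_a = s_r_a - 360

        # 3. Assumed test inputs for Fredericton, NB (west-negative):
        lat = 45.9
        long = -66.6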


  • Subclassed django models with integrated querysets

    - by outofculture
    Like in this question, except I want to be able to have querysets that return a mixed body of objects:

        >>> Product.objects.all()
        [<SimpleProduct: ...>, <OtherProduct: ...>, <BlueProduct: ...>, ...]

    I figured out that I can't just set Product.Meta.abstract to true, or otherwise just OR together querysets of differing objects. Fine, but these are all subclasses of a common class, so if I leave their superclass as non-abstract I should be happy, so long as I can get its manager to return objects of the proper class. The query code in django does its thing and just makes calls to Product(). Sounds easy enough, except it blows up when I override Product.__new__, I'm guessing because of the __metaclass__ in Model. Here's non-django code that behaves pretty much how I want it:

        class Top(object):
            _counter = 0
            def __init__(self, arg):
                Top._counter += 1
                print "Top#__init__(%s) called %d times" % (arg, Top._counter)

        class A(Top):
            def __new__(cls, *args, **kwargs):
                if cls is A and len(args) > 0:
                    if args[0] is B.fav:
                        return B(*args, **kwargs)
                    elif args[0] is C.fav:
                        return C(*args, **kwargs)
                    else:
                        print "PRETENDING TO BE ABSTRACT"
                        return None  # or raise?
                else:
                    return super(A).__new__(cls, *args, **kwargs)

        class B(A):
            fav = 1

        class C(A):
            fav = 2

        A(0)  # => None
        A(1)  # => <B object>
        A(2)  # => <C object>

    But that fails if I inherit from django.db.models.Model instead of object:

        File "/home/martin/beehive/apps/hello_world/models.py", line 50, in <module>
            A(0)
        TypeError: unbound method __new__() must be called with A instance as first argument (got ModelBase instance instead)

    Which is a notably crappy backtrace; I can't step into the frame of my __new__ code in the debugger, either. I have variously tried super(A, cls), Top, super(A, A), and all of the above in combination with passing cls in as the first argument to __new__, all to no avail. Why is this kicking me so hard? Do I have to figure out django's metaclasses to be able to fix this, or is there a better way to accomplish my ends?
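    A sketch of one way to sidestep fighting ModelBase in __new__ altogether: leave instantiation alone and downcast rows in a custom manager/queryset after they come back. The discriminator column (subtype), the SUBCLASSES map, and the proxy subclass are all made-up names here, and get_query_set is the Django 1.x-era spelling.

        from django.db import models
        from django.db.models.query import QuerySet

        class ProductQuerySet(QuerySet):
            def iterator(self):
                # Swap each row's class for the subclass named by the
                # hypothetical 'subtype' discriminator column.
                for obj in super(ProductQuerySet, self).iterator():
                    obj.__class__ = SUBCLASSES.get(obj.subtype, obj.__class__)
                    yield obj

        class ProductManager(models.Manager):
            def get_query_set(self):
                return ProductQuerySet(self.model)

        class Product(models.Model):
            subtype = models.CharField(max_length=32)
            objects = ProductManager()

        class SimpleProduct(Product):
            class Meta:
                proxy = True    # same table, different Python behavior

        SUBCLASSES = {'simple': SimpleProduct}

    With this, Product.objects.all() yields SimpleProduct instances for rows whose subtype is 'simple' and plain Products otherwise, which gives the mixed queryset shown at the top without touching __new__.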


  • Hook IDispatch v-table in C++

    - by monoceres
    I'm trying to modify the behavior of an IDispatch interface already present in the system. To do this, my plan was to hook into the object's v-table at runtime and modify the pointers so that an entry points to a custom hook method instead. If I can get this to work, I can add new methods and properties to already-existing objects. Nice.

    First I tried hooking into the v-table for IUnknown (from which IDispatch inherits) and that worked fine. However, trying to change entries in IDispatch doesn't work at all. Nothing happens: the code works just as it did without the hook. Here's the code; it's very simple, so it shouldn't be any problem to understand:

        #include <iostream>
        #include <windows.h>
        #include <Objbase.h>
        #pragma comment (lib,"Ole32.lib")
        using namespace std;

        HRESULT __stdcall typecount(IDispatch *self, UINT *u)
        {
            cout << "hook" << endl;
            *u = 1;
            return S_OK;
        }

        int main()
        {
            CoInitialize(NULL);

            // Get clsid from name
            CLSID clsid;
            CLSIDFromProgID(L"shell.application", &clsid);

            // Create instance
            IDispatch *obj = NULL;
            CoCreateInstance(clsid, NULL, CLSCTX_INPROC_SERVER,
                             __uuidof(IDispatch), (void**)&obj);

            // Get vtable and offset in vtable for IDispatch
            void* iunknown_vtable = (void*)*((unsigned int*)obj);

            // There are three entries in IUnknown, therefore add 12 to get to IDispatch
            void* idispatch_vtable = (void*)(((unsigned int)iunknown_vtable) + 12);

            // Get pointer to first entry in IDispatch vtable (GetTypeInfoCount)
            unsigned int* v1 = (unsigned int*)iunknown_vtable;

            // Change memory permissions so the address can be overwritten
            DWORD old;
            VirtualProtect(v1, 4, PAGE_EXECUTE_READWRITE, &old);

            // Override v-table pointer
            *v1 = (unsigned int)typecount;

            // Try calling GetTypeInfoCount; should now be hooked, but isn't. Works as usual
            UINT num = 0;
            obj->GetTypeInfoCount(&num);

            /*
            HRESULT hresult;
            OLECHAR FAR* szMember = (OLECHAR*)L"MinimizeAll";
            DISPID dispid;
            DISPPARAMS dispparamsNoArgs = {NULL, NULL, 0, 0};
            hresult = obj->GetIDsOfNames(IID_NULL, &szMember, 1,
                                         LOCALE_SYSTEM_DEFAULT, &dispid);
            hresult = obj->Invoke(dispid, IID_NULL, LOCALE_SYSTEM_DEFAULT,
                                  DISPATCH_METHOD, &dispparamsNoArgs,
                                  NULL, NULL, NULL);
            */
        }

    Thanks in advance!
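    One bug is visible in the listing itself: idispatch_vtable is computed but never used. v1 is taken from iunknown_vtable, which is slot 0 of the shared vtable (QueryInterface), so the code patches QueryInterface rather than GetTypeInfoCount at slot 3. A sketch of the suspected fix, assuming the same 32-bit layout as the original:

        // Point v1 at the IDispatch portion of the vtable: slots 0-2 are
        // IUnknown (QueryInterface, AddRef, Release), slot 3 is
        // GetTypeInfoCount.
        unsigned int* v1 = (unsigned int*)idispatch_vtable;

        DWORD old;
        VirtualProtect(v1, sizeof(void*), PAGE_EXECUTE_READWRITE, &old);
        *v1 = (unsigned int)typecount;
        VirtualProtect(v1, sizeof(void*), old, &old);   // restore protection

        UINT num = 0;
        obj->GetTypeInfoCount(&num);   // should print "hook" and set num to 1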


  • Benchmark Linq2SQL, Subsonic2, Subsonic3 - Any other ideas to make them faster?

    - by Aristos
    I have been working with Subsonic 2 for more than 3 years now. After Linq appeared, and then Subsonic 3, I started thinking about moving to the new Linq features that connect to SQL. I did port my Subsonic 2 code to SubSonic 3, and very soon discovered that it was so slow I couldn't believe it, which is what started all these tests. Then I tested Linq2Sql and saw a delay there too, compared with Subsonic 2.

    My question here, especially for linq2sql and the upcoming .NET version 4, is: what else can I do to speed it up? What else in the linq2sql settings or classes, beyond the code I have used for my measurements? I include here the project I used for the tests, along with screenshots of the results.

    How I make the tests, and the accuracy of my measurements: I use only Google Chrome for this question, because it is difficult to show here all the other measurements I have done with more complex programs. This is the simplest one; I just measure the data read. How can I prove that? I put in a simple Thread.Sleep of 10 seconds and check whether I see those 10 seconds in the Google Chrome measurement, and yes, I do. Here are more tests with this sleep thread to see what Chrome actually reports: 10 seconds delay, 100 ms delay, zero delay. There is only a small 15 ms that gets onto the measurement; it is so small compared with the rest of my tests that I do not care about it.

    So, what do I measure? Just the data read via each method. I did not count the data or database delay, or any disk read or anything like that. The image with the results shows that no disk activity exists during the measurements. See this image to see what I really measure and whether it is correct.

    Why I chose this kind of test: it's simple, it's real, and it's close to the real problem where I found the delay of Subsonic 3 in a real program with real data.

    Now let's test the DALs. Start by looking at this image. I make 4-5 calls with every method, one after the other. The results, for a loop of 100 times asking for 5 rows (one of which does not exist), are approximately:

        Simple ADO.NET:                          81 ms
        SubSonic 2:                             210 ms
        linq2sql:                              1.70 sec
        linq2sql using CompiledQuery.Compile:   239 ms
        Subsonic 3:                           15.00 sec (wow, extremely slow)

    The project: http://www.planethost.gr/DalSpeedTests.rar

    Can anyone confirm this benchmark, or suggest any optimizations to help me out?

    Other tests: someone posted here the link http://ormbattle.net/ (and then removed it, I don't know why). On that page you can find really useful advanced tests for everything except Subsonic 2 and Subsonic 3, which I have here!

    Optimizing: what I really ask here is whether anyone knows any trick to optimize the DALs, not by changing the test code, but by changing the code and the settings of each DAL.

    Optimizing Linq2SQL: I started searching for ways to optimize Linq2sql and found an article (and maybe more exist). I applied the tricks from that page and optimized the code using them all. The speed went from 1.70 sec to near 1.50 sec, a big improvement, but still slow. Then I found a different way (same idea, different article) and wow! The speed blew up: using this trick with CompiledQuery.Compile, the time went from 1.5 sec to 239 ms. Here is the code for the precompiled query:

        Func<DataClassesDataContext, int, IQueryable<Product>> compiledQuery =
            CompiledQuery.Compile((DataClassesDataContext meta, int IdToFind) =>
                (from myData in meta.Products
                 where myData.ProductID.Equals(IdToFind)
                 select myData));

        StringBuilder Test = new StringBuilder();
        int[] MiaSeira = { 5, 6, 10, 100, 7 };

        using (DataClassesDataContext context = new DataClassesDataContext())
        {
            context.ObjectTrackingEnabled = false;
            for (int i = 0; i < 100; i++)
            {
                foreach (int EnaID in MiaSeira)
                {
                    var oFindThat2P = compiledQuery(context, EnaID);
                    foreach (Product One in oFindThat2P)
                    {
                        Test.Append("<br />");
                        Test.Append(One.ProductName);
                    }
                }
            }
        }

    Optimizing SubSonic 3, and problems: I did a lot of performance profiling and changed one thing after another; the speed got better, but it is still far too slow. I posted the findings on the subsonic group, but they ignored the problem; they say everything is fast. Here are some captures of my profiling and the delay points inside the subsonic source code. I have concluded that subsonic 3 makes more calls about the structure of the database than about the data itself. It needs to reconsider the whole way of asking for data and follow the subsonic 2 idea, if that is possible. I tried to precompile in subsonic 3 the way I did in linq2Sql, but have failed so far.

    Optimizing SubSonic 2: after I discovered that subsonic 3 is extremely slow, I started checking subsonic 2, which I had never done before, believing it was fast (and it is). It turned out there are some points that can be made faster. For example, there are many loops that are slow because of string manipulation and string compares inside the loop. I must tell you that this code is called millions of times over a period of a few minutes of data requests from the program. With a small number of tables and fields maybe this is not a big thing for some people, but with a large number of tables the delay is even bigger.

    So I decided to optimize subsonic 2 myself, by replacing the string compares with number compares. Simple. I did that at almost every point the profiler said was slow. I also changed every small point that could be even a little faster and disabled some rarely used features. The results: 5% faster on the Northwind database, nearly 20% faster on my database with 250 tables. That counts as 500 ms less on a 10-second process on Northwind, and 100 ms faster on a 500 ms process on my database. I do not have captures to show you for these measurements because I made them with different code at a different time and tracked them on paper.

    Anyway, this is my story and my question about all this: what else do you know to make them even faster?

    For these measurements I used Subsonic 2.2 optimized by me, Subsonic 3.0.0.3 slightly optimized by me, and .NET 3.5.
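    The "string compares to number compares" change described above might look roughly like the following in a hot lookup loop. This is illustrative only; ColumnEntry and its NameHash field are hypothetical stand-ins, not actual SubSonic 2 internals.

        using System;

        public sealed class ColumnEntry
        {
            public string ColumnName;
            // Precomputed once, e.g. ColumnName.ToLowerInvariant().GetHashCode()
            public int NameHash;
        }

        public static class ColumnLookup
        {
            // Compare ints first; fall back to the case-insensitive string
            // compare only when the hashes match.
            public static ColumnEntry Find(ColumnEntry[] columns, string name)
            {
                int hash = name.ToLowerInvariant().GetHashCode();
                foreach (var column in columns)
                {
                    if (column.NameHash == hash &&
                        string.Equals(column.ColumnName, name,
                                      StringComparison.OrdinalIgnoreCase))
                        return column;
                }
                return null;
            }
        }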

