Search Results

Search found 2222 results on 89 pages for 'functional'.

  • Why are bugs responsible for big deficiencies in functionality given such low priority?

    - by keepitsimpleengineer
    Well, first of all, change is inevitable and mostly good. Furthermore, attempts at simplifying the user interface, such as Gnome 3 and Unity, to make Linux more inclusive hold much promise, even though they adversely affect my style of working. Additionally, though now retired, I have worked with computers for 47 years, and though I do nothing serious for others now, I still do heavy-duty things. 10.04 LTS is my big workstation, and I had three 10.10 systems for MythTV, one of which is further adapted for video and related work. The MythTV systems were on 10.10 because of a dormant bug affecting installation on 10.04. My workflow consistently uses dual monitors and the Compiz cube and 3D windows, with the computing horsepower to support them. Dual monitors with separate X screens have not been functional since 11.04, and the cube/3D windows are not functional in Unity and have diminished functionality in Gnome. There is a bug filed (after upgrading to 12.04 amd64, Gnome Classic does not properly draw the second screen). I have mitigated the situation somewhat by switching to Xubuntu and eschewing Unity. The question that comes to mind is why this bug is not given more attention, given that it nearly cuts functionality in half on more capable workstations. Sample workspace... Please know that I appreciate all the hard work and dedication required to pull off something as big as Ubuntu et al.

    Read the article

  • PeopleSoft Reconnect Conference

    - by Matthew Haavisto
    The PeopleSoft Reconnect Conference is coming in July.  This conference is run by Quest, and unlike other conferences, is focused specifically on PeopleSoft.  You can learn about the conference and register here. We have a lot of great sessions planned this year for both PeopleSoft applications and PeopleTools.  Since this is the Tech blog, I'll highlight some of the PeopleTools and related technology sessions: PeopleSoft Technology Roadmap:  Current Features and Future Plans PeopleTools Features for the Smart Functional User Mastering PeopleTools:  Using the Peoplesoft Integration Network Mastering PeopleTools:  Getting Started with PeopleSoft Update Manager Mastering PeopleTools:  Putting Dashboards and Workcenters to Work for You Mastering PeopleTools:  Exploiting PeopleTools Tips and Tricks PeopleSoft Administration Across the Enterprise As you can see from this list, we're covering a broad range of topics that will appeal to everyone from your technical staff to savvy functional experts.  And these are just the sessions that we in the Oracle/PeopleTools group are presenting.  There are also dozens of valuable and interesting sessions being presented by customers and partners.  You can view the entire program here. We hope to see you there!

    Read the article

  • Do you think Scala will be the dominant JVM language, i.e. be the next Java? [on hold]

    - by user1037729
    From what I've read about Scala so far, I think it has some nice features, but I do not think it should be "the next Java". It might end up being the next Java anyway (due to fashion rather than fact), but let's hope it does not... To me it adds a lot of complexity over Java, which is a simple and scalable language. Scala's pattern matching allows you to perform some type/value checking in a more concise way, but this is possible in Java, and Scala's pattern matching has its limits: you cannot continuously match deeper and deeper down the object graph, so why not just stick to Java and use decent invariants? Scala provides tuples, but these are easy enough to make in Java: create a static factory method and it all reads nicely too. Scala provides mixins; why not just use composition? I believe Scala's implicits are bad: they can lead to code becoming complex and hard to maintain, and explicitness is good. Scala provides closures, but they will be in Java 8 too. Scala has the lazy keyword for lazy instantiation, but this is easy enough to do in Java by calling a getter which creates the instance when needed; no hidden magic there. Scala can be used with Akka, but so can Java; there is a Java Akka implementation. Scala offers additional functional features, but these can all be created in Java, and there are many frameworks which have implemented functional features in Java. All in all, Scala seems to offer only additional complexity, and that's it...
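    For readers weighing the two languages, here is a minimal, hypothetical sketch of two of the Scala features mentioned above, case-class pattern matching and tuples (the Shape/Circle/Rect names are invented purely for illustration):

        // A small algebraic data type; the match below does the concise
        // type/value checking the question refers to.
        sealed trait Shape
        case class Circle(radius: Double) extends Shape
        case class Rect(width: Double, height: Double) extends Shape

        object ScalaFeaturesSketch {
          def area(s: Shape): Double = s match {
            case Circle(r)  => math.Pi * r * r
            case Rect(w, h) => w * h
          }

          // Tuples come built in; the Java alternative the question suggests
          // is a small class plus a static factory method.
          def minMax(xs: Seq[Int]): (Int, Int) = (xs.min, xs.max)

          def main(args: Array[String]): Unit = {
            println(area(Circle(1.0)))    // 3.141592653589793
            println(minMax(Seq(3, 1, 4))) // (1,4)
          }
        }

    Whether this buys enough over the equivalent Java is exactly the trade-off the question is weighing.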

    Read the article

  • How to REALLY start thinking in terms of objects?

    - by Mr Grieves
    I work with a team of developers who all have several years of experience with languages such as C# and Java. Most of them are young enough to have been taught OOP as the standard way to develop software at university and are very comfortable with concepts such as inheritance, abstraction, encapsulation and polymorphism. Yet many of them, and I have to include myself, still tend to create classes which are meant to be used in a very functional fashion. The resulting software often consists of several smaller classes which correctly represent business objects, which get passed through larger classes that only supply ways to modify and use those objects (functions). Large, complex, difficult-to-maintain classes named Manager are usually the result of such behaviour. I can see two theoretical reasons why people might write this type of code: it's easy to start thinking of everything in terms of the database, and deep down, for me, a computer handling a web request feels more like a functional operation than an object-oriented one when you think about request handlers, threads, processes, CPU cores and CPU operations... I want source code which is easy to read and easy to modify. I have seen excellent examples of OO code which meet these objectives. How can I start writing code like this? How can I really start thinking in an object-oriented fashion? How can I share such a mentality with my colleagues?
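    As one concrete illustration of the shift being asked about, here is a hypothetical sketch (written in Scala to keep all examples on this page in one language; the Order/LineItem names are invented) of behaviour living on the business object itself rather than in a separate Manager class that only operates on passive data:

        // The object answers questions about itself and returns new state,
        // instead of being pushed through a "Manager" full of functions.
        final case class LineItem(description: String, unitPrice: BigDecimal, quantity: Int) {
          def total: BigDecimal = unitPrice * BigDecimal(quantity)
        }

        final case class Order(items: List[LineItem]) {
          def total: BigDecimal = items.map(_.total).sum
          def add(item: LineItem): Order = copy(items = item :: items)
        }

        // The style the question describes would instead look like:
        //   object OrderManager { def calculateTotal(order: Order): BigDecimal = ... }

        object OrderSketch {
          def main(args: Array[String]): Unit = {
            val order = Order(Nil).add(LineItem("Widget", BigDecimal("9.99"), 3))
            println(order.total) // 29.97
          }
        }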

    Read the article

  • Herding Cats - That's My Job....

    - by user709270
    Written by Mike Schmitz, Sr. Director, Program Management, Oracle JD Edwards. I remember seeing a Super Bowl commercial several years ago showing some well-dressed people on the African savanna herding cats. I remember turning to the people I was watching the game with and telling them, "You just watched my job description." Releasing software is a multi-faceted undertaking. In addition to making sure the code changes are complete, you also need to make sure the other key parts of a release are ready. For example, when you have a question about the software, will the person on the other end of the phone be ready to answer your question? If you need training on that cool new piece of functionality, will there be an online training course ready for you to review? If you want to read about how the software is supposed to function, is there a user manual available? Putting all the release pieces together so they are available at the same time is what the JD Edwards Program Management team does. It is my team's job to work with all the different functional teams so that when a release is made generally available you have all the things you need to be successful. The JD Edwards Program Management team uses an internal planning tool called the Release Process Model (RPM) to ensure all deliverables are accounted for in a release. The RPM makes sure all the release deliverables are ready at the correct time and in the correct format. The RPM really helps all the functional teams in JD Edwards know what release deliverables they are accountable for and when they are to be delivered. It is my team's job to make sure everyone understands what they need to do and when they need to deliver. We then make sure they are all on track to deliver on time and in the right format. It is just that some days this feels like herding cats.

    Read the article

  • Using Clojure instead of Python for scalability (multi-core) reasons, good idea?

    - by Vandell
    After reading http://clojure.org/rationale and other performance comparisons between Clojure and many languages, I started to think that, apart from ease of use, I shouldn't be coding in Python anymore but in Clojure instead. Actually, I began to feel irresponsible for not learning Clojure, seeing its benefits. Does that make sense? Can't I make really efficient use of all cores using a more imperative language like Python rather than a Lisp dialect or other functional language? It seems that all the benefits come from using immutable data; can't I do just that in Python and have all the benefits? I once started to learn some Common Lisp, and read and did almost all the exercises from a book I borrowed from my university library (I found it to be pretty good, despite its low popularity on Amazon). But after a while I found myself struggling too much to do some simple things. I think there are some things that are more imperative in nature, which makes it difficult to model them in a functional way, I guess. The thing is, is Python as powerful as Clojure for building applications that take advantage of this new multi-core future? Note that I don't think that using semaphores, locks or other similar concurrency mechanisms are good alternatives to Clojure's 'automatic' parallelization.
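    To illustrate the point about immutable data, here is a deliberately tiny sketch (in Scala 2.12, where parallel collections ship with the standard library, rather than Clojure or Python, simply to keep all examples on this page in one language): because the collection is immutable, a parallel map needs no locks or semaphores.

        object ParallelMapSketch {
          def main(args: Array[String]): Unit = {
            val xs = Vector.range(1, 1000001)            // immutable, so safe to share between threads
            val squares = xs.par.map(x => x.toLong * x)  // .par spreads the map across available cores
            println(squares.sum)                         // 333333833333500000
          }
        }

    Clojure gets the same effect from persistent data structures plus pmap; whether CPython, with mutable defaults and the GIL, lets you get there as directly is exactly what the question is asking.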

    Read the article

  • Database Driven Web Application, C# Front-End and F# Back-End meaning

    - by user1473053
    Hi, I am an intern working with ASP.NET. My current task is to make a website which will incorporate some jQuery viewing features. This project seems to me like it will primarily deal with reading data from a database and making graphs out of it, which will require me to build custom queries from whatever the client is looking at. I think it is going to be what this guy calls an Ad Hoc Query tool. My plan is to make it a database-driven website so I can utilize jQuery's dynamic viewing capabilities. I stumbled upon the functional programming paradigm and found F#. I read that its functional nature makes it a good language for asynchronous work, and about how you can use it with LINQ to SQL and how easy it is to build queries without actually embedding the query language. I understand the concept of the MVC design pattern, but I don't understand what they mean about C# being the front-end and F# being the back-end. Can someone clarify this for me? Also, what are your thoughts about doing the project this way? Any comments and thoughts are greatly appreciated; I feel learning F# will be a great learning experience for me. My guess is that the F# back-end is the part that controls the calls to the database: F# is possibly the model part of the design pattern, C# is the controller, and the HTML, JavaScript and jQuery stuff will be my view. Is that right?

    Read the article

  • The idea of FunctionN in Scala / Functional Java

    - by Luke Murphy
    From brain driven development: "It turns out that every function you'll ever define in Scala will become an instance of an implementation featuring a certain Function trait. There is a whole bunch of these Function traits, ranging from Function1 up to Function22. Since functions are objects in Scala and Scala is a statically typed language, it has to provide an appropriate type for every function that comes with a different number of arguments. If you define a function with two arguments, the compiler picks Function2 as the underlying type." Also, from Michael Froh's blog: "You need to make FunctionN classes for each number of parameters that you want? Yes, but you define the classes once and then you use them forever, or ideally they're already defined in a library (e.g. Functional Java defines classes F, F2, ..., F8, and the Scala standard library defines classes Function1, ..., Function22)." So we have a list of function traits (Scala) and a list of interfaces (Functional Java) to enable us to have first-class functions. I am trying to understand exactly why this is the case. I know that in Java, for example, when I write a method such as public int add(int a, int b){ return a + b; } I cannot go ahead and write add(3,4,5); (the error would be something like: method add cannot be applied to given types). Do we simply have to define an interface/trait for functions with each different number of parameters because of static typing?
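    A tiny sketch of the point the quotes are making (ordinary Scala, nothing here is taken from the quoted posts): a two-argument function literal is just sugar for an instance of the Function2 trait, which is why a separate trait per arity is needed.

        object FunctionNSketch {
          // Inferred type: Function2[Int, Int, Int]
          val add: (Int, Int) => Int = (a, b) => a + b

          // Roughly what the compiler generates behind the scenes
          val addDesugared: Function2[Int, Int, Int] = new Function2[Int, Int, Int] {
            def apply(a: Int, b: Int): Int = a + b
          }

          def main(args: Array[String]): Unit = {
            println(add(3, 4))                // 7 -- really add.apply(3, 4)
            println(addDesugared.apply(3, 4)) // 7
            // add(3, 4, 5) does not compile: Function2 is fixed at two arguments,
            // which is exactly why the library needs Function1 ... Function22
            // (or F, F2, ..., F8 in Functional Java).
          }
        }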

    Read the article

  • F# – Converting your C# brain to the F# way

    - by MarkPearl
    My brain still thinks in C#!!! I have been looking at F# and trying to figure out the basics of it, but all the time in the back of my mind I am going: what is the C# equivalent to this or that? It's frustrating because I almost want an F#-to-C# dictionary the whole time, so I could simply translate my C# code to F#, which would negate the main motivation for learning F#, as I want to learn functional programming; if I was simply writing C# code in F# syntax I would be gaining nothing! So I am experiencing pain while my brain forms some new neural networks... but luckily I live in a country where we have 11 official spoken languages, and plenty more unofficial ones, so I have gone through the pain of learning to speak a new language before, and I am finding the process is almost identical when learning a programming language that promotes a different way of looking at problems (from object-oriented to functional). That being said, the first thing to learn is the basic syntax. I have searched the web for appropriate places to get a translation and have been quite disappointed with what is out there for F#. Luckily, OCaml came to the rescue. There are some really good tutorials on getting started with OCaml syntax; one in particular that stood out was the OCaml Tutorial. What I particularly like about it is that it draws comparisons between C-based languages and OCaml. Give it a read sometime; it's well worth it and has definitely helped me understand F# a little better.

    Read the article

  • Access a PLESK website before propagation?

    - by RCNeil
    My web host uses Plesk and I want to know if there is any way to access and view a website (with PHP and other processes being functional) before the domain name has propagated. I have found countless forums on this, but they are all pretty old (circa 01-04) and involve either tricking your localhost or SSH commands, and some even result in terrible security risks. I would like to access a web page directory through a browser and see its contents, with the PHP processes carried out, before I propagate its potential domain name. People claim this is pointless, but during a site migration why on earth would you not test a site before propagating it? I'm looking for something similar to what cPanel offers, i.e. http://IP.ADDRESS./~mydomain.com. The only solution I could think of is storing the site in a new directory of an already functional site, then setting up databases and testing the site once it's complete. Once tested and working, I should easily be able to migrate the files to the "new" domain name's root directory, set up new databases, and then propagate the domain name. I can't believe that Plesk v10+ still does not have a site preview method that includes PHP, JS, and Flash capability.

    Read the article

  • Ternary and Artificial Intelligence

    - by user2957844
    Not much of a programmer myself; however, I have been thinking about the future of AI. If a fully functional AI is programmed in a binary environment, as is used in current computing, would that create a bit of a black-and-white personality? As in just yes/no, on/off, 1/0? I will use the Skynet computer from the Terminator series as a (bad) analogy: it is brought online and comes to the conclusion that humanity should just be destroyed so the problem is resolved; basically its only options were fire the missiles or not. (The films do not really go into what its moves would be after doing such a thing, but that goes into the realms of AI evolution, so it does not really fit with this question.) It may also have been badly programmed. Now, the human mind has been likened to a ternary system, which allows our "out of the box" thinking along with all the other wonderful things our minds can do. So, would it not be more prudent to create a functional ternary system and program an AI using it, so the resulting personality would be able to benefit from the third "maybe" (so to speak) option? I understand that in binary there are ways to get around the whole yes/no way of things; however, the basic operations are still just 1s and 0s. Again using the bad Skynet analogy: if it could have had that third "maybe" option as part of its core system, it may have decided not to launch, being able to make sense of the intricacies of human nature and the politics of such a move. In effect, my question is: would an AI benefit more from ternary computing than from binary, due to the inclusion of -1, or 2, depending on the system ("maybe", as I call it)?

    Read the article

  • Why is a software development life-cycle so inefficient?

    - by user87166
    Currently, the software development lifecycle followed at the IT company I work at is: the "business" works with a solution manager to build a business requirements document; the solution manager works with the program manager to build a functional spec; the PM works with the engineering lead to develop a release plan, and with the engineering team to develop technical specifications. If any clarifications are required, developers contact the PM, who contacts the solution manager, who contacts the business, and all the way back, introducing a latency of nearly 24 hours and massive email chains for every clarification. By the time the tech spec is made, nearly one month has passed in back and forth. Then two weeks go to development while test writes test cases. Code is dropped formally to test, and test starts raising bugs. Even if there is one root cause behind 10 different issues, and it's an easily fixed one, developers are not allowed to give fresh code to test for the next week. After 2-3 such drops to test, the code is given to the ops team as a "golden drop" (two months having passed from the beginning). The ops team will now deploy the code to a staging environment; if it runs stably for a week, it is promoted to UAT, and after two weeks of that it is promoted to prod. If any bugs are found here, well, applying for a visa requires less paperwork. This entire process is followed even if a single SSRS report is to be released. How do other companies handle such requirements? I'm wondering why the business cannot just hand the requirements to developers, have the developers build and deploy to UAT themselves, expose it to the business, who raise functional bugs, and after fixing those promote to prod (even for more complex work).

    Read the article

  • Learning curve for web development

    - by refro
    At the moment our team has a huge challenge: we're being asked to deliver a new GUI for an embedded controller. The deadline is very tight and is set for April 2013. Our team is very diverse: some people are at the level of functional programming (mostly C), others (including myself) have mastered object-oriented programming (C++, C#). We built a prototype for Android; although it has its quirks, it is mostly just OO. For the future there is a wish to support multiple platforms (Windows, Android, iOS), and in my opinion an HTML5 app with a native app shell is the way to go. When gathering more information on the frameworks to use etc., it became obvious to me that a paradigm shift is needed. None of us have a web background, so we need to learn from the ground up. The shift from functional to OO took us about 6 months to become productive (and some of the early subsystems were rewritten because they were a total mess). Can we expect the learning curve to be similar? Can this be pulled off with a web app? (My feeling says it will already be hard to pull off as a native app, which is at the edge of our comfort zone.)

    Read the article

  • When decomposing a large function, how can I avoid the complexity from the extra subfunctions?

    - by missingno
    Say I have a large function like the following: function do_lots_of_stuff(){ { //subpart 1 ... } ... { //subpart N ... } } A common pattern is to decompose it into subfunctions: function do_lots_of_stuff(){ subpart_1(...) subpart_2(...) ... subpart_N(...) } I usually find that decomposition has two main advantages: the decomposed function becomes much smaller, which can help people read it without getting lost in the details, and parameters have to be explicitly passed to the underlying subfunctions instead of being implicitly available by just being in scope, which can help readability and modularity in some situations. However, I also find that decomposition has some disadvantages: there are no guarantees that the subfunctions "belong" to do_lots_of_stuff, so there is nothing stopping someone from accidentally calling them from the wrong place, and a module's complexity grows quadratically with the number of functions we add to it (there are more possible ways for things to call each other). Therefore: are there useful conventions or coding styles that help me balance the pros and cons of function decomposition, or should I just use an editor with code folding and call it a day? EDIT: This problem also applies to functional code (although in a less pressing manner). For example, in a functional setting we would have the subparts return values that are combined at the end, and the decomposition problem of having lots of subfunctions able to use each other is still present. We can't always assume that the problem domain can be modeled with just a few small, simple types and a few highly orthogonal functions. There will always be complicated algorithms or long lists of business rules that we still want to be able to deal with correctly. function do_lots_of_stuff(){ p1 = subpart_1() p2 = subpart_2() pN = subpart_N() return assembleStuff(p1, p2, ..., pN) }
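    One stylistic answer to the first disadvantage, sketched below as a hypothetical example (in Scala, to keep all code on this page in one language): make the subparts local to the function that owns them, so they cannot be called from the wrong place and do not add to module-level complexity.

        object DecompositionSketch {
          def doLotsOfStuff(input: Seq[Int]): String = {
            // Local helpers: visible only inside doLotsOfStuff, yet the outer
            // function still reads as a short list of named steps.
            def subpart1(xs: Seq[Int]): Int = xs.sum
            def subpart2(xs: Seq[Int]): Int = xs.max
            def assemble(total: Int, largest: Int): String = s"total=$total largest=$largest"

            assemble(subpart1(input), subpart2(input))
          }

          def main(args: Array[String]): Unit =
            println(doLotsOfStuff(Seq(1, 2, 3))) // total=6 largest=3
        }

    The trade-off is that locally scoped helpers cannot be unit tested or reused on their own, which is usually the signal that they deserve to be promoted to module level.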

    Read the article

  • Parallelism in .NET – Part 8, PLINQ’s ForAll Method

    - by Reed
    Parallel LINQ extends LINQ to Objects, and is typically very similar. However, as I previously discussed, there are some differences. Although the standard way to handle simple data parallelism is via Parallel.ForEach, it's possible to do the same thing via PLINQ. PLINQ adds a new method unavailable in standard LINQ which provides new functionality. LINQ is designed to provide a much simpler way of handling querying, including filtering, ordering, grouping, and many other benefits. Reading the description of LINQ to Objects on MSDN, it becomes clear that the thinking behind LINQ deals with retrieval of data. LINQ works by adding a functional programming style on top of .NET, allowing us to express filters in terms of predicate functions, for example. PLINQ is, generally, very similar. Typically, when using PLINQ, we write declarative statements to filter a dataset or perform an aggregation. However, PLINQ adds one new method which serves a very different purpose: ForAll. The ForAll method is defined on ParallelEnumerable, and will work upon any ParallelQuery<T>. Unlike the sequence operators in LINQ and PLINQ, ForAll is intended to cause side effects. It does not filter a collection, but rather invokes an action on each element of the collection. At first glance, this seems like a bad idea. For example, Eric Lippert clearly explained two philosophical objections to providing an IEnumerable<T>.ForEach extension method, one of which still applies when parallelized. The sole purpose of this method is to cause side effects, and as such, I agree that the ForAll method "violates the functional programming principles that all the other sequence operators are based upon", in exactly the same manner an IEnumerable<T>.ForEach extension method would violate these principles. Eric Lippert's second reason for disliking a ForEach extension method does not necessarily apply to ForAll: replacing ForAll with a call to Parallel.ForEach has the same closure semantics, so there is no loss there. Although ForAll may have philosophical issues, there is a pragmatic reason to include this method. Without ForAll, we would take a fairly serious performance hit in many situations. Often, we need to perform some filtering or grouping, then perform an action using the results of our filter. Using a standard foreach statement to perform our action would avoid this philosophical issue: // Filter our collection var filteredItems = collection.AsParallel().Where( i => i.SomePredicate() ); // Now perform an action foreach (var item in filteredItems) { // These will now run serially item.DoSomething(); } This would cause a loss in performance, since we lose the parallelism we had in place and cause all of our actions to be run serially.
We could easily use a Parallel.ForEach instead, which adds parallelism to the actions: // Filter our collection var filteredItems = collection.AsParallel().Where( i => i.SomePredicate() ); // Now perform an action once the filter completes Parallel.ForEach(filteredItems, item => { // These will now run in parallel item.DoSomething(); }); This is a noticeable improvement, since both our filtering and our actions run parallelized.  However, there is still a large bottleneck in place here.  The problem lies with my comment “perform an action once the filter completes”.  Here, we’re parallelizing the filter, then collecting all of the results, blocking until the filter completes.  Once the filtering of every element is completed, we then repartition the results of the filter, reschedule into multiple threads, and perform the action on each element.  By moving this into two separate statements, we potentially double our parallelization overhead, since we’re forcing the work to be partitioned and scheduled twice as many times. This is where the pragmatism comes into play.  By violating our functional principles, we gain the ability to avoid the overhead and cost of rescheduling the work: // Perform an action on the results of our filter collection .AsParallel() .Where( i => i.SomePredicate() ) .ForAll( i => i.DoSomething() ); The ability to avoid the scheduling overhead is a compelling reason to use ForAll.  This really goes back to one of the key points I discussed in data parallelism: Partition your problem in a way to place the most work possible into each task.  Here, this means leaving the statement attached to the expression, even though it causes side effects and is not standard usage for LINQ. This leads to my one guideline for using ForAll: The ForAll extension method should only be used to process the results of a parallel query, as returned by a PLINQ expression. Any other usage scenario should use Parallel.ForEach, instead.

    Read the article

  • Oracle Fusion Applications User Experience Design Patterns: Feeling the Love after Launch

    - by mvaughan
    By Misha Vaughan, Oracle Applications User Experience

    In the first video by the Oracle Applications User Experience team on the Oracle Partner Network, Vice President Jeremy Ashley said that Oracle is looking to expand the ecosystem of support for Oracle's applications customers as they begin to assess their investment in and adoption of Oracle Fusion Applications. Oracle has made a massive investment to maintain the benefits of the Fusion Applications User Experience. This summer, the Applications User Experience team released the Oracle Fusion Applications user experience design patterns. Design patterns help create consistent experiences across devices.

    The launch has been very well received. Angelo Santagata, Senior Principal Technologist and Fusion Middleware evangelist for Oracle, wrote this to the system integrator community: "The web site is the result of many years of Oracle R&D into user interface design for Fusion Applications and features a really cool web app which allows you to visualise the UI components in action." Grant Ronald, Director of Product Management, Application Development Framework (ADF), said: "It's a science I don't understand, but now I don't have to ... Now you can learn from the UX experience of Fusion Applications." Frank Nimphius, Senior Principal Product Manager, Oracle (ADF), wrote about the launch of the design patterns for the ADF Code Corner, and Jürgen Kress, Senior Manager EMEA Alliances & Channels for Fusion Middleware and Service Oriented Architecture (SOA), shared the news with his Partner Community.

    Oracle Twitter followers also helped spread the message about the design patterns launch. @bex (Brian Huff, founder and Chief Software Architect for Bezzotech, and Oracle ACE Director): "Nifty! The Oracle Fusion UX team just released new ADF design patterns." @maiko_rocha (Maiko Rocha, Oracle Consulting Solutions Architect and Oracle FMW engineer): "Haven't seen any other vendor offer such a comprehensive UX Design Patterns catalog for free!" @zirous_chad (Chad Thompson, Senior Solutions Architect for Zirous, Inc. and ADF Developer): "Wow - @ultan and company did a great job with the Fusion UX Patterns."

    What is a user experience design pattern? A user experience design pattern is a re-usable, usability-tested functional blueprint for a particular user experience. Some examples are guided processes, shopping carts, and search and search results. Ultan O'Broin discusses the top design patterns every developer should know. The patterns that were just released are based on thousands of hours of end-user field studies, state-of-the-art user interface assessments, and usability testing. To be clear, these are functional design patterns, not the technical design patterns that developers may be used to working with. Because we know there is a gap, we are putting together some training that will help close that gap.

    Who should care? This is an offering targeted primarily at Application Development Framework (ADF) developers. If you are faced with the following questions regarding Fusion Applications, you will want to know and learn more:
    • How do I build something that looks like Fusion Applications?
    • How do I build a next-generation application?
    • How do I extend a Fusion Application and maintain the user experience?
    • I don't want to re-invent the wheel on the user interface, so where do I start?
    • I need to build something that will eventually co-exist with Fusion Applications. How do I do that?
    These questions are relevant to partners with an ADF competency, individual practitioners or small consultancies with an ADF specialization, and customers who are trying to shift their IT staff over to supporting Fusion Applications.

    Where can you find out more? Online: our Fusion User Experience design patterns maven is Ultan O'Broin. The Oracle Partner Network is helping our team bring this first e-seminar to you in order to go into more detail on what this means and how to take advantage of it. Webinar: Build a Better User Experience with Oracle: Oracle Fusion Applications Functional Design Patterns, Sept 20, 2012, 10:30am-11:30am Pacific. Dial-in: 877-664-9137 / Passcode 102546; International: 706-634-9619; http://www.intercall.com/national/oracleuniversity/gdnam.html. Access the live event or, via web conference, access http://ouweb.webex.com and enter session number 598036234. At a user group event: the Fusion User Experience Advocates (FXA) are also going to be getting some deep-dive training on this content and can share it with local user groups.

    At OpenWorld: if you will be at OpenWorld this year, our own Ultan O'Broin will be visiting the ADF demopod to say hello, thanks to Shay Shmeltzer, Senior Group Manager for ADF outbound communication, and will be at the OTN lounge: Monday 10-10:45, Tuesday 2:15-2:45, Wednesday 2:15-3:30, Oracle JDeveloper and Oracle ADF, Moscone South, Right - S-207; "ADF Meet and Greet", OTN Lounge, Wednesday 4:30. And I cannot talk about OpenWorld and ADF without mentioning Chris Muir's ADF EMG event, the Year After the Year of the ADF Developer, Sunday, Sept 30 of OpenWorld. Chris has played host to Ultan and the Applications user experience message for his online community and is now a seasoned UX expert. Expect to see additional announcements about expanded training on similar topics in the future.

    Read the article

  • Oracle ATG Web Commerce 10 Implementation Developer Boot Camp - Reading (UK) - October 1-12, 2012

    - by Richard Lefebvre
    REGISTER NOW: Oracle ATG Web Commerce 10 Implementation Developer Boot Camp, Reading, UK, October 1-12, 2012! OPN invites you to join us for a 10-day implementation boot camp on Oracle ATG Web Commerce in Reading, UK from October 1-12, 2012. This 10-day boot camp is designed to provide partners with hands-on experience and technical training to successfully build and deploy Oracle ATG Web Commerce 10 applications. This particular boot camp is focused on helping partners develop the essential skills needed to implement every aspect of an ATG Commerce application from scratch (not CRS-based), with the specific goal of giving experienced Java/J2EE developers a path towards becoming functional, effective, and contributing members of an ATG implementation team. Built for both new and experienced ATG developers alike, the collaborative nature of this program and its exercises has proven to be highly effective and extremely valuable in learning the best practices for implementing ATG solutions. Though not required, this boot camp provides a structured path to earning a Certified Oracle ATG Web Commerce 10 Specialization!

    What Is Covered: This boot camp is for application developers and software architects wanting to gain valuable insight into ATG application development best practices, as well as relevant and applicable implementation experience on projects modeled after four of the most common types of applications built on the ATG platform. The following learning objectives are all critical, and are of equal priority in enabling this role to succeed. This boot camp will help with:
    • Building a basic, functional, transaction-ready ATG Web Commerce 10 application.
    • Utilizing ATG's platform features such as scenarios, slots, targeters, user profiles and segments to create a personalized user experience.
    • Building Nucleus components to support and/or extend application functionality.
    • Understanding the intricacies of ATG order checkout and fulfillment.
    • Specifying, designing and implementing new commerce features in ATG 10.
    • Building a functional commerce application modeled after four of the most common types of applications built on the ATG platform, within an agile-based project team environment and under simulated real-world project conditions.

    Duration: The Oracle ATG Web Commerce 10 Implementation Developer Boot Camp is an instructor-led workshop spanning 10 days.

    Audience: Application developers and software architects.

    Prerequisite Training and Environment Requirements: Programming and markup experience with Java/J2EE, JavaScript, XML, HTML and CSS; completion of the Oracle ATG Web Commerce 10 Implementation Specialist Development Guided Learning Path modules. Participants will be required to bring their own laptop that meets the minimum specifications: 64-bit PC and OS (e.g. Windows 7 64-bit), 4GB RAM or more, 40GB hard disk space. Laptops will require access to the Internet through Remote Desktop via Windows.

    Agenda Topics: Week 1 – Days 1 through 5: Build a Basic Commerce Application. In week one of the boot camp training, we will apply knowledge learned from the ATG Web Commerce 10 Implementation Developer Guided Learning Path modules towards building a basic transaction-ready commerce application. There will be few to no lectures delivered in this boot camp, as developers will be fully engaged in ATG application development activities and best practices.
    Developers will work independently on the following lab assignments from days 1 through 5:
    1. Environment Setup
    2. Build a Dynamic Home Page
    3. Site Authentication
    4. Build Customer Registration
    5. Display Top Level Categories
    6. Display Product Sub-Categories
    7. Display Product List Page
    8. Display Product Detail Page
    9. ATG Inventory
    10. Build "Add to Cart" Functionality
    11. Build Shopping Cart
    12. Build Checkout Page
    13. Build Checkout Review Page
    14. Create an Order and Build Order Confirmation Page
    15. Implement Slots and Targeters for Personalization
    16. Implement Pricing and Promotions
    17. Order Fulfillment

    Week 2 – Days 6 through 10: Team-based Case Project. In the second week of the boot camp training, participants will be asked to join a project team that will select a case project for the team to implement. Teams will be able to choose from four of the most common application types developed and deployed on the ATG platform: hard goods with physical fulfillment, soft goods with electronic fulfillment, a service or subscription case example, and a course/event registration case example. Team projects will have approximately 160 hours of use cases/stories for each team to build (40 hours per developer). Each day's use cases/stories build upon the prior day's work, and therefore must be fully completed by the end of each day. Please note that this boot camp intends to simulate real-world project conditions, and as such project teams may need to work beyond normal business hours. To promote further collaboration and group learning, each team will be asked to present their work and share the methodologies and solutions they have applied to their cases at the end of each day.

    Location: Oracle Reading CVC, TPC510 Room: Wraysbury, Reading, UK, 9:00 AM – 5:00 PM. Registration fee (10 days): US $3,375. Please click on the following link to REGISTER or visit the Oracle ATG Web Commerce 10 Implementation Developer Boot Camp page for more information. Questions: Patrick Ty, Partner Enablement, Oracle Commerce. Phone: 310.343.7687, Mobile: 310.633.1013, Email: [email protected]

    Read the article

  • Towards an F# .NET Reflector add-in

    - by CliveT
    When I had the opportunity to spent some time during Red Gate's recent "down tools" week on a project of my choice, the obvious project was an F# add-in for Reflector . To be honest, this was a bit of a misnomer as the amount of time in the designated week for coding was really less than three days, so it was always unlikely that very much progress would be made in such a small amount of time (and that certainly proved to be the case), but I did learn some things from the experiment. Like lots of problems, one useful technique is to take examples, get them to work, and then generalise to get something that works across the board. Unfortunately, I didn't have enough time to do the last stage. The obvious first step is to take a few function definitions, starting with the obvious hello world, moving on to a non-recursive function and finishing with the ubiquitous recursive Fibonacci function. let rec printMessage message  =     printfn  message let foo x  =    (x + 1) let rec fib x  =     if (x >= 2) then (fib (x - 1) + fib (x - 2)) else 1 The major problem in decompiling these simple functions is that Reflector has an in-memory object model that is designed to support object-oriented languages. In particular it has a return statement that allows function bodies to finish early. I used some of the in-built functionality to take the IL and produce an in-memory object model for the language, but then needed to write a transformer to push the return statements to the top of the tree to make it easy to render the code into a functional language. This tree transform works in some scenarios, but not in others where we simply regenerate code that looks more like CPS style. The next thing to get working was library level bindings of values where these values are calculated at runtime. let x = [1 ; 2 ; 3 ; 4] let y = List.map  (fun x -> foo x) x The way that this is translated into a set of classes for the underlying platform means that the code needs to follow references around, from the property exposing the calculated value to the class in which the code for generating the value is embedded. One of the strongest selling points of functional languages is the algebraic datatypes, which allow definitions via standard mathematical-style inductive definitions across the union cases. type Foo =     | Something of int     | Nothing type 'a Foo2 =     | Something2 of 'a     | Nothing2 Such a definition is compiled into a number of classes for the cases of the union, which all inherit from a class representing the type itself. It wasn't too hard to get such a de-compilation happening in the cases I tried. What did I learn from this? Firstly, that there are various bits of functionality inside Reflector that it would be useful for us to allow add-in writers to access. In particular, there are various implementations of the Visitor pattern which implement algorithms such as calculating the number of references for particular variables, and which perform various substitutions which could be more generally useful to add-in writers. I hope to do something about this at some point in the future. Secondly, when you transform a functional language into something that runs on top of an object-based platform, you lose some fidelity in the representation. The F# compiler leaves attributes in place so that tools can tell which classes represent classes from the source program and which are there for purposes of the implementation, allowing the decompiler to regenerate these constructs again. 
However, decompilation technology is a long way from being able to take unannotated IL and transform it into a program in a different language. For a simple function definition, like Fibonacci, I could write a simple static function and have it come out in F# as the same function, but it would be practically impossible to take a mass of class definitions and have a decompiler translate it automatically into an F# algebraic data type. What have we got out of this? Some data on the feasibility of implementing an F# decompiler inside Reflector, though it's hard at the moment to say how long this would take to do. The work we did is included the 6.5 EAP for Reflector that you can get from the EAP forum. All things considered though, it was a useful way to gain more familiarity with the process of writing an add-in and understand difficulties other add-in authors might experience. If you'd like to check out a video of Down Tools Week, click here.

    Read the article

  • Part 8: How to name EBS Customizations

    - by volker.eckardt(at)oracle.com
    You might wonder why I am discussing this here. The reason is simple: nearly every project has slightly different naming conventions, which always makes life a bit complicated (for developers, but also for those responsible for setup, and for consultants). Although we always create a document to describe the technical object naming conventions, I have rarely seen a dedicated document with functional naming conventions. To be precise, from my standpoint there should always be one global naming definition for an implementation! Let me discuss some related questions: What is the best convention for the customization reference? How should database objects (tables, packages etc.) be named? How should functional objects like value sets, concurrent programs, etc. be named? How best to separate customizations from standard objects? What is the best convention for the customization reference? The customization reference is the key you use to reference your customization from other lists, from the project plan etc. Usually it is something like XXHU_CONV_22 (HU=customer abbreviation, CONV=conversion object #22) or XXFA_DEPRN_RPT_02 (FA=Fixed Assets, DEPRN=short object group, here depreciation, RPT=report, 02=2nd report in this area). As this is just a reference (not an object name yet), I would prefer the second option: XX=customization, FA=main EBS module linked (you may sometimes have more, but FA is the main one), DEPRN_RPT=short name to specify the customization, 02=a unique number. The important point here is that the HU isn't used, because XX is enough to mark a custom object, and the 3rd and 4th characters can be used for the EBS module short name. How should database objects (tables, packages etc.) be named? I have led different developer teams, and I know that one common way is to take the customization reference and append more characters to classify the object (like _V for a view and _T1 for triggers etc.). The only concern I have with this approach is reusability. If you name your view XXFA_DEPRN_RPT_02_V, no one will reuse this nice view by choice, as it seems to be specific to this CEMLI. My suggestion is rather to name the view XXFA_DEPRN_PERIODS_V and thereby allow reusability for other CEMLIs (although the view will be deployed primarily with CEMLI package XXFA_DEPRN_RPT_02). (Check also one of the following blogs, where I will talk about deployment.) How should value sets, concurrent programs, etc. be named? For value sets I would go with the same convention as for database objects, starting with XX<Module>. For concurrent programs the situation is a bit different. This "object" is seen and used by a lot of users, and they will search for it. In many projects it is common to start again with the company short name, or with XX. My proposal would differ. If you have created your own report and you name it "XX: Invoice Report", the user has to remember that this report does not start with "I", it starts with X. Would you like typing an X when you are looking for an invoice report? No, you wouldn't! So my advice would be to name it "Invoice Report (XXAP)". We still know it is custom (because of the XXAP), but the end user will type the key "i" to get it (and will see similar reports also starting with "i"). I hope the general schema behind this has now become obvious. How best to separate customizations from standard objects? I would not have this section here if naming did not play an important role.
    In the file system structure we use our own $XXyy_TOP; in JAVA_TOP it is perhaps also "xx" in front. But in the database itself? Although there are different concepts in place, many implementations still use the standard "apps" approach, meaning custom objects are stored in the apps schema (which should not cause any trouble). Final advice: review the naming conventions regularly, once a month. You may have to add more! And publish them! To summarize: technical and functional customized objects should always follow a naming convention. This naming convention should be project-wide, and only one place should be used to maintain it (such as a wiki). If the name is for the end user, rather put a customization identifier at the end; if it is an internal name, start with XX…

    Read the article

  • CodePlex Daily Summary for Tuesday, May 18, 2010

    CodePlex Daily Summary for Tuesday, May 18, 2010New ProjectsCafeControl: Supports Remote LOGIN,Remote logout ,Account Creation ,Account LOGOUT ,Temporary Login ,SMS Reported and many other features Requires .net 4.0 Cloud & Contacts: Cloud Contacts makes it easier for share your contacts.Cow connect: Ziel des Projektes Cow connect, ist es ein Tool zu schrieben das Verschiedene Datenbanken, unterschiedlicher Herdenmanagement Tool z.b: Helm Multik...DNN Simple Blog: A simplified blog module that adheres to the DNN API and is designed for a single blog author per module instance. The module also makes use of Web...dotSpatial: dotSpatial is an open source project focused on developing a core set of GIS and mapping libraries that live together harmoniously in the System.Sp...Dynamic Survey Forms - SharePoint Web Part: Create manage dynamic survey forms as SharePoint web part. Record survey score for logged user or for someone else. This project has been designed ...EDXL Sharp: EDXLSharp is a C# / .NET 3.5 implementation of the OASIS Emergency Data Exchange Language (EDXL) family of standards. This set of libraries can be...EPiServer CMS 6 Visual Studio Project Template for VB w/ Public Templates: This is a Visual Studio 2008 Project Template with will allow the creation of a EPiServer CMS 6 project set up as and with all code in Visual Basic...Functional Command Toolbar for Windows: A floating window with a text box for typing functional command and executing it. The tool supports .NET addin, functional scripting and other feat...GameFX: Silverlight Game Development LibraryLightweight Fluent Workflow: ObjectFlow is a lightweight workflow engine for creating & executing workflows. The fluent interface makes it easy to define and understand workf...LinqSpecs: A toolset for use the specification pattern in linq queries.Money Watch: Personal Finances management system written in C#, NHibernate and SQL express.Multi-screen Viewer: This viewer allows to open and view pdf file (presentation) on multiple screens. There is no need to see the file in fullscreen on each screen (mon...neo-tsql: set of stored procedures and functions for sql serverNHTrace: NHTrace is a tool for tracing sql commands executed by NHibernate with syntax highlighting.NQueue: NQueue provides an enterprise level work scheduling and execution framework and toolset that contains no single point of failure. Using a farm of s...Online Cash Manager: Online Cash Manager based on ASP .NET MVC2 VS 2010 RTM MVC 2POCO Bridge: Bridging Silverlight and full .NET apps.REngine - game engine in Silverlight: REngine makes it easier for game developers to develop games in Microsoft Silverlight. RunAs Launcher: RunAs Launcher is a C# utility that provides a GUI for running applications under different credentials. It works in situations where the built-in ...secs4net: SECS-II/GEM/HSMS implementation on .NET. This library provide easy way to communicate with SEMI standard compatible device.SharePoint Admin Dashboard: SharePoint Dashboard for admins. Allows lightening fast multiple server management. RDP doesn't scale. Manage 10 servers easier than 1 with i...Silver spring: saltSocial Map: Social map is a social network based on geograpghical informationTweetZone: TweetZone is new type of twitter client application include DATABASE in it, and it shows you STATS. 
This Application's cache makes it faster to acc...Yet Another Database Viewer: Yet Another Database Viewer is developed for a basic database view and editing so you don't have to install anything. It's developed in c#.New Releases3FD - Framework For Fast Development: Alpha 1: The first test release. There is still some bugs, but it is functional. The garbage collector is showing memory leaking that must be corrected in t...Ajax Toolkit for ASP.NET MVC: MvcAjaxToolkit gridext with ContextMenu and Tmpl: MvcAjaxToolkit gridext with ContextMenu and Tmpl gridext is a extension for flexigridASP.NET MVC Extensions: SP1 Preview: SP1 Preview ========= 1. Autofac support added. 2. Changed Windsor Adapter. IWindsorInstaller is used instead of IModule.Book Cataloger: BookCataloger1.0.7a: New Features: Author editor form prototype Improvements: .NET Framework 4.0 required Input checking improved Comment edit loads and saves text...Braintree Client Library: Braintree-2.2.0: Prevent race condition when pulling back collection results -- search results represent the state of the data at the time the query was run Renam...CassiniDev - Cassini 3.5/4.0 Developers Edition: CassiniDev 3.5.1 and 4.0.1 beta 2: Documentation New in CassiniDev v3.5.1.0/v4.0.1.0 Added .Net 4 / VS10 build. Simplified test fixtures. Un-Refactored the not-so-simple MVP pa...dotNetTips: dotNetTips.Utility 3.5 R3: This is a new release (version 3.5.0.4) compatible with .NET 3.5. Requires SP1 if using the Entity Framework extensions. This is a minor update fro...Dynamic Survey Forms - SharePoint Web Part: Dynamic Survey forms for SharePoint. Alpha 1.0.1: Alpha release. Before running installer create database from script attached in zip file. In your web.config make sure your first connection strin...Event Scavenger: Viewer version 3.2.1: Added quick filters on event source and event id dialog boxes. Collector and Admin tool unaffected.Expression Blend Samples: Blend 4 Sketch Mockups Library: This library provides a set of commonly needed controls, icons and cursors to use in SketchFlow applications. After running the installer, create ...Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 1.3: Fluent Ribbon Control Suite 1.3(supports .NET 3.5 and .NET 4 RTM) Includes: Fluent.dll (with .pdb and .xml) Showcase Application Samples Found...Home Access Plus+: v4.2.2: Version 4.2.2 Change Log: Changes to how mycomputer handles NTFS permissions File Changes: ~/Bin/HAP.Web.dll ~/Bin/HAP.Web.pdbIdeaNMR: IdeaNMR Web App PreAlpha 0.1: This is the first release.IP Informer: Beta Release: V0.8.0.0 Beta.LinkedIn® for Windows Mobile: LinkedIn for Windows Mobile v0.9: Main updates for this release Fixed Status update. (updates where not correctly passed on to LinkedIn) Added landscape/GSensor support. (Currentl...LINQ to Twitter: LINQ to Twitter Beta v2.0.11: New items added since v1.1 include: Support for OAuth (via DotNetOpenAuth), secure communication via https, VB language support, serialization of ...LinqSpecs: Version 1.0 alpha: This is the alpha version of LinqSpecs.miniTodo: mini Todo version 0.2: ・デザインを透明主体に変更  -件数を表示している部分のドラッグでウィンドウ移動  -上記の部分右クリックで、「最前面に表示」、「全アイテム管理」 ・グラフを日/週/月単位の3種類に増やした ・新規作成、完了時にアニメーション追加。完了時にはサウンドも追加 ・テキスト未入力時は追加ボタンも非表示My Notepad: My Notepad (Beta): This is the Beta version of My Notepad. The software is stable enough for you to use. 
Enjoy the flexibility of docking and also the all new Syntax ...NHTrace: NHTrace-45713: NHTrace-45713Nito.KitchenSink: Version 8: New features (since Version 5) Fixed null reference bug in ExceptionExtensions. Added DynamicStaticTypeMembers and RefOutArg for dynamically (lat...Nito.LINQ: Beta (v0.5): Rx version The "with Rx" versions of Nito.LINQ are built against Rx 1.0.2521.102, released 2010-05-14. Breaking changes Corrected internal read-on...Object/Relational Mapper & Code Generator in Net 2.0 for Relational & XML Schema: 2.9: Work on UI templates for associative tables (2-column PK), using users/roles pages as an example. Added templates for Not-In-Group sql and cache-ba...patterns & practices - GAX Extensions Library: GEL for gax2010: This is the GEL for GAX 2010, support Visual Stuido 2010patterns & practices - Smart Client Guidance: Smart Client Software Factory 2010 Documentation: If the right-side pane of the chm file is not displayed correctly, do the following: 1) Download SCSF2010Guide.chm file. 2) Start the windows explo...patterns & practices - Windows Azure Guidance: WAAG - Part 1 - Release Candidate: "Release Candidate" for Part 1 of the Windows Azure Guide Highlights of this release are: Code samples complete. Fixed few bugs on "Dependency Ch...Rawr: Rawr 2.3.17: >Rawr3 Public Beta has been released! Click here for details.< - Lots of improvements to the default data files. There is a known issue with the s...RunAs Launcher: RunAs Launcher 1.2: This is the first version being released to CodePlex. Simply extract the file and run the executable. For those that wish to download the source c...Rx Contrib: V1.5: Bug fixsecs4net: Release 1.0: Notes: Runtime requirement: .Net framework 2.0 SP2 with System.Core(.NET 3.5), System.Threading(Rx for 3.5 SP1)SharePoint Admin Dashboard: SPDashboard v1.0: SPDashboard v1.0ShortURL Creator: ShortURL Creator 1.3.0.0: Added new provider u.nu and minimum UI changesStyleCop+: StyleCop+ 0.7: StyleCop+ is now fully compatible with StyleCop 4.4. The following entities were supported in Advanced Naming Rules: - Delegate - Event - Property...Value Injecter: an aspect oriented mapper: Value Injecter 1.2: ValueInjecter library, Sample solution that contains: web-forms sample project win-forms sample project unit tests samplesVCC: Latest build, v2.1.30517.0: Automatic drop of latest buildVCC: Latest build, v2.1.30517.1: Automatic drop of latest buildVidCoder: 0.4.1: Changes: Marks system as "working" to prevent computer from sleeping during an encode. CPU priority changed to BelowNormal during encodes. Enco...WSP Listener: WSP Listener version 2.0.0.0: This new version includes: All assemblies and required assets in one WSP Seperated code in library assembly Activate the WSP Listener with one...Yet Another Database Viewer: Beta: first release of the programYet another developer blog - Examples: Asynchronous TreeView in ASP.NET WebForms: This sample application shows how to use jQuery TreeView plugin for creating an asynchronous TreeView in ASP.NET WebForms. 
This application is acco...

    Read the article

  • ASP.NET AJAX Problem

    - by Rich Andrews
    I've updated some code to use the Ajax Control Toolkit 0911 beta, and for some reason code that dynamically added collapsible panel extenders in the code-behind now causes the following error in the client-side JScript...

    Microsoft JScript runtime error: Sys.ArgumentException: Value must not be null for Controls and Behaviors.
    Parameter name: element

    in...

    $create = Sys.Component.create = function Sys$Component$create(type, properties, events, references, element) {
        /// <summary locid="M:J#Sys.Component.create" />
        /// <param name="type" type="Type"></param>
        /// <param name="properties" optional="true" mayBeNull="true"></param>
        /// <param name="events" optional="true" mayBeNull="true"></param>
        /// <param name="references" optional="true" mayBeNull="true"></param>
        /// <param name="element" domElement="true" optional="true" mayBeNull="true"></param>
        /// <returns type="Object"></returns>
        var e = Function._validateParams(arguments, [
            {name: "type", type: Type},
            {name: "properties", mayBeNull: true, optional: true},
            {name: "events", mayBeNull: true, optional: true},
            {name: "references", mayBeNull: true, optional: true},
            {name: "element", mayBeNull: true, domElement: true, optional: true}
        ]);
        if (e) throw e;
        if (type.inheritsFrom(Sys.UI.Behavior) || type.inheritsFrom(Sys.UI.Control)) {
            if (!element) throw Error.argument('element', Sys.Res.createNoDom);
        }

    I accept that this is only a beta, but I'm unable to either find a workaround or even understand the reason why this pretty simple code no longer works.

    Code:

    private Panel GetReportPanel(DataRow dr, ReportParameter[] Params)
    {
        Panel pnlReport = new Panel();
        pnlReport.ID = Uri.EscapeDataString(dr["ReportName"].ToString()) + "_MainReportContainer";

        //Report Title Section
        var pnlReportTitle = new Panel();
        pnlReportTitle.CssClass = "ReportSectionTitle";
        var tblReportTitle = new Table();
        var trowReportTitle = new TableRow();
        var tcellReportTitle = new TableCell();
        var imgReportTitleExpand = new Image();
        imgReportTitleExpand.ID = Uri.EscapeDataString("img" + dr["ReportName"].ToString() + "DataExpand");
        tcellReportTitle.Controls.Add(imgReportTitleExpand);
        trowReportTitle.Controls.Add(tcellReportTitle);
        tcellReportTitle = new TableCell();
        var lblReportTitle = new Label();
        lblReportTitle.ID = Uri.EscapeDataString("lnk" + dr["ReportName"].ToString());
        lblReportTitle.Text = "Functional " + dr["ReportName"].ToString();
        tcellReportTitle.Controls.Add(lblReportTitle);
        trowReportTitle.Controls.Add(tcellReportTitle);
        tblReportTitle.Controls.Add(trowReportTitle);
        pnlReportTitle.Controls.Add(tblReportTitle);
        pnlReport.Controls.Add(pnlReportTitle);

        //Report Section
        var pnlReportSection = new Panel();
        pnlReportSection.ID = Uri.EscapeDataString("pnlReportSection" + dr["ReportName"].ToString());
        pnlReportSection.CssClass = "ReportSection";
        pnlReportSection.ScrollBars = ScrollBars.None;
        var pnlInnerReportSection = new Panel();
        pnlInnerReportSection.CssClass = "ReportSection";
        var rptControl = new ReportViewer();
        rptControl.ID = "rpt" + dr["ReportName"].ToString().Replace(' ', '_');
        rptControl.ProcessingMode = ProcessingMode.Remote;
        rptControl.Width = new Unit("100%");
        rptControl.ShowDocumentMapButton = false;
        rptControl.ShowParameterPrompts = false;
        rptControl.Visible = true;
        rptControl.Height = new Unit("500px");
        rptControl.AsyncRendering = (bool)dr["ASyncRenderingEnabled"];
        rptControl.ServerReport.ReportPath = dr["SSRSReportPath"].ToString();
        rptControl.ServerReport.ReportServerUrl = new Uri("http://horoap336/reportserver");
        rptControl.ServerReport.SetParameters(Params);
        pnlInnerReportSection.Controls.Add(rptControl);
        pnlReportSection.Controls.Add(pnlInnerReportSection);
        pnlReport.Controls.Add(pnlReportSection);

        //Collapsable Panel Extender
        var Extender = new AjaxControlToolkit.CollapsiblePanelExtender();
        Extender.TargetControlID = pnlReportSection.ID;
        Extender.ID = Uri.EscapeDataString(dr["ReportName"].ToString()) + "_Extender";
        Extender.CollapsedSize = 0;
        Extender.Collapsed = true;
        Extender.ExpandControlID = lblReportTitle.ID;
        Extender.CollapseControlID = lblReportTitle.ID;
        Extender.AutoCollapse = false;
        Extender.AutoExpand = false;
        Extender.ScrollContents = false;
        Extender.TextLabelID = lblReportTitle.ID;
        Extender.CollapsedText = "Functional " + dr["ReportName"].ToString() + " (Click To Show Details...)";
        Extender.ExpandedText = "Functional " + dr["ReportName"].ToString() + " (Click To Hide Details...)";
        Extender.ImageControlID = imgReportTitleExpand.ID;
        Extender.ExpandedImage = "~/images/collapse.jpg";
        Extender.CollapsedImage = "~/images/expand.jpg";
        Extender.ExpandDirection = AjaxControlToolkit.CollapsiblePanelExpandDirection.Vertical;
        pnlReport.Controls.Add(Extender);

        return pnlReport;
    }

    This panel is then added to a panel in the aspx file using...

    pnlContainer.Controls.Add(GetReportPanel(dr, Params));

    Aspx file...

    <%@ Page Title="Operations MI Dashboard - Functional Reporting" Language="C#" MasterPageFile="~/MasterPage.master" AutoEventWireup="true" CodeFile="FunctionalReport.aspx.cs" Inherits="TelephonyReport" %>
    <%@ Register Assembly="Microsoft.ReportViewer.WebForms, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" Namespace="Microsoft.Reporting.WebForms" TagPrefix="rsweb" %>
    <asp:Content ID="Content1" ContentPlaceHolderID="head" Runat="Server">
    </asp:Content>
    <asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server">
        <asp:Panel ID="pnlContainer" runat="server">
        </asp:Panel>
    </asp:Content>

    So, my questions are:
    1. Is there a problem with my code that is only evident in the later version of the toolkit?
    2. Does anyone know of a workaround that I can try?
    3. Can anyone explain why this problem happens only in the latest version?
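
    One thing worth checking (an editor's sketch, not part of the original question, and not a confirmed fix for the 0911 beta): the quoted $create code throws exactly when the element passed for a Behavior or Control is null, i.e. when the extender's target element cannot be found in the rendered page. It can therefore help to build the dynamic panels early in the page life cycle, so the extender targets are in the control tree before the toolkit registers its client-side $create calls, and to give each extender an explicit BehaviorID. A minimal sketch of that idea; the field names reportTable and reportParams are placeholders for however the page actually loads its data:

    using System;
    using System.Data;
    using System.Web.UI;
    using Microsoft.Reporting.WebForms;

    public partial class TelephonyReport : Page
    {
        // Placeholders: however the real page obtains its report rows and parameters.
        private DataTable reportTable;
        private ReportParameter[] reportParams;

        protected override void OnInit(EventArgs e)
        {
            base.OnInit(e);

            // Recreate the dynamic panels (and their extenders) on every request,
            // early enough that each extender's target control exists in the
            // control tree before the $create calls are emitted.
            foreach (DataRow dr in reportTable.Rows)
            {
                var report = GetReportPanel(dr, reportParams);
                pnlContainer.Controls.Add(report);
            }
        }

        // Inside GetReportPanel, an explicit BehaviorID (hypothetical addition,
        // unverified against the 0911 beta) gives the generated script a stable
        // name to bind to:
        //     Extender.BehaviorID = Extender.ID + "_Behavior";
    }

    If the error persists, comparing the $create calls rendered by the old and new toolkit versions (visible in the page source) usually shows which element ID the behavior is failing to resolve.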

    Read the article

  • SOAP client generator for ruby/ruby on rails

    - by Sohan
    I have been looking for a fully functional WSDL client generator for Ruby. I tried the one called wsdl2ruby and it didn't work; I think it has problems with detecting complex types correctly. Can someone point me to the right library if there is one? I am specifically looking to generate a fully functional client for the SOAP API provided by Jira. I looked into jira4r, but it seems to be dead now and not up to date. Any help is much appreciated. Thank you.

    Read the article

  • Model Based Testing Strategies

    - by Doubt
    What strategies have you used with Model Based Testing? Do you use it exclusively for integration testing, or branch it out to other areas (unit/functional/system/spec verification)? Do you build focused "sealed" models, or do you evolve complex omnibus models over time? When in the product cycle do you invest in creating MBTs? What sort of base test libraries do you exclusively create for MBTs? What do you do differently in your functional base test libraries to better support MBTs?
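
    For readers who have not used the technique, the core loop of a model-based test can be very small: a simplified model predicts what the system under test should do, a generator walks the model through random action sequences, and every step is checked against the implementation. The sketch below is an editor's illustration only, not from the original question; the BoundedCounter class and the plain console harness stand in for a real system and test framework:

    using System;

    // Implementation under test (stands in for real production code).
    class BoundedCounter
    {
        private readonly int max;
        private int value;
        public BoundedCounter(int max) { this.max = max; }
        public void Increment() { if (value < max) value++; }
        public void Reset() { value = 0; }
        public int Value { get { return value; } }
    }

    class ModelBasedTest
    {
        static void Main()
        {
            var rng = new Random(12345);       // fixed seed => reproducible walk
            var sut = new BoundedCounter(10);  // system under test
            int model = 0;                     // the "model": expected value

            // Random walk over the model's actions; after each step the
            // implementation must agree with the model.
            for (int step = 0; step < 1000; step++)
            {
                if (rng.Next(2) == 0)
                {
                    sut.Increment();
                    model = Math.Min(model + 1, 10);
                }
                else
                {
                    sut.Reset();
                    model = 0;
                }

                if (sut.Value != model)
                    throw new Exception(string.Format(
                        "Divergence at step {0}: model={1}, sut={2}", step, model, sut.Value));
            }

            Console.WriteLine("1000 random steps matched the model.");
        }
    }

    Real MBT tools generate the walk from an explicit model (a state machine or rule set) rather than an inline if/else, but the shape of the check against the implementation is the same.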

    Read the article

  • Which Programming Language Should I Learn?

    - by Esteban Araya
    I've decided, for educational purposes, that I want to learn a new language every 2 years or so. Which language should I learn first? Why? I'm proficient with C, C#, and Java. Other than that, I really haven't done much with any other languages. Thanks! Edit: Thanks to all of those who recommended functional languages. Making the mental switch to a functional language seems hard. How did you overcome your instinct to keep doing things in a procedural manner?
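
    As a concrete illustration of that mental switch (an editor's sketch, not part of the original question), here is the same small task written first in the procedural habit and then in a functional style using LINQ in C#, a language the asker already knows:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class FunctionalStyleDemo
    {
        static void Main()
        {
            int[] numbers = { 5, 12, 7, 20, 3, 9 };

            // Procedural habit: mutate an accumulator step by step.
            var proceduralResult = new List<int>();
            foreach (int n in numbers)
            {
                if (n > 5)
                    proceduralResult.Add(n * 2);
            }
            proceduralResult.Sort();

            // Functional habit: describe the result as a transformation of the input.
            var functionalResult = numbers
                .Where(n => n > 5)
                .Select(n => n * 2)
                .OrderBy(n => n)
                .ToList();

            Console.WriteLine(string.Join(", ", proceduralResult)); // 14, 18, 24, 40
            Console.WriteLine(string.Join(", ", functionalResult)); // 14, 18, 24, 40
        }
    }

    The functional version states what the result is rather than how to build it; practising that on small pipelines in a familiar language is one common way to ease the transition to a fully functional language.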

    Read the article
