Search Results

Search found 12670 results on 507 pages for 'ie tweaker plus'.


  • Book Review: Brownfield Application Development in .NET

    - by DotNetBlues
    I recently finished reading the book Brownfield Application Development in .NET by Kyle Baley and Donald Belcham.  The book is available from Manning.  First off, let me say that I'm a huge fan of Manning as a publisher.  I've found their books to be top-quality, over all.  As a Kindle owner, I also appreciate getting an ebook copy along with the dead tree copy.  I find ebooks to be much more convenient to read, but hard-copies are easier to reference. The book covers, surprisingly enough, working with brownfield applications.  Which is well and good, if that term has meaning to you.  It didn't for me.  Without retreading a chunk of the first chapter, the authors break code bases into three broad categories: greenfield, brownfield, and legacy.  Greenfield is, essentially, new development that hasn't had time to rust and is (hopefully) being approached with some discipline.  Legacy applications are those that are more or less stable and functional, that do not expect to see a lot of work done to them, and are more likely to be replaced than reworked. Brownfield code is the gray (brown?) area between the two and the authors argue, quite effectively, that it is the most likely state for an application to be in.  Brownfield code has, in some way, been allowed to tarnish around the edges and can be difficult to work with.  Although I hadn't realized it, most of the code I've worked on has been brownfield.  Sometimes, there's talk of scrapping and starting over.  Sometimes, the team dismisses increased discipline as ivory tower nonsense.  And, sometimes, I've been the ignorant culprit vexing my future self. The book is broken into two major sections, plus an introduction chapter and an appendix.  The first section covers what the authors refer to as "The Ecosystem" which consists of version control, build and integration, testing, metrics, and defect management.  The second section is on actually writing code for brownfield applications and discusses object-oriented principles, architecture, external dependencies, and, of course, how to deal with these when coming into an existing code base. The ecosystem section is just shy of 140 pages long and brings some real meat to the matter.  The focus on "pain points" immediately sets the tone as problem-solution, rather than academic.  The authors also approach some of the topics from a different angle than some essays I've read on similar topics.  For example, the chapter on automated testing is on just that -- automated testing.  It's all well and good to criticize a project as conflating integration tests with unit tests, but it really doesn't make anyone's life better.  The discussion on testing is more focused on the "right" level of testing for existing projects.  Sometimes, an integration test is the best you can do without gutting a section of functional code.  Even if you can sell other developers and/or management on doing so, it doesn't actually provide benefit to your customers to rewrite code that works.  This isn't to say the authors encourage sloppy coding.  Far from it.  Just that they point out the wisdom of ignoring the sleeping bear until after you deal with the snarling wolf. The other sections take a similarly real-world, workable approach to the pain points they address.  As the section moves from technical solutions like version control and continuous integration (CI) to the softer, process issues of metrics and defect tracking, the authors begin to gently suggest moving toward a zero defect count.  
While that really sounds like an unreasonable goal for a lot of ongoing projects, it's quite apparent that the authors have first-hand experience with taming some gruesome projects.  The suggestions are grounded and workable, and the difficulty of some situations is explicitly acknowledged. I have to admit that I started getting bored by the end of the ecosystem section.  No matter how valuable I think a good project manager or business analyst is to a successful ALM, at the end of the day, I'm a gear-head.  Also, while I agreed with a lot of the ecosystem ideas, in theory, I didn't necessarily feel that a lot of the single-developer projects that I'm often involved in really needed that level of rigor.  It's only after reading the sidebars and commentary in the coding section that I had the context for the arguments made in favor of a strong ecosystem supporting the development process.  That isn't to say that I didn't support good product management -- indeed, I've probably pushed too hard, on occasion, for a strong ALM outside of just development.  This book gave me deeper insight into why some corners shouldn't be cut and how damaging certain sins of omission can be. The code section, though, kept me engaged for its entirety.  Many technical books can be used as reference material from day one.  The authors were clear, however, that this book is not one of these.  The first chapter of the section (chapter seven, over all) addresses object oriented (OO) practices.  I've read any number of definitions, discussions, and treatises on OO.  None of the chapter was new to me, but it was a good review, and I'm of the opinion that it's good to review the foundations of what you do, from time to time, so I didn't mind. The remainder of the book is really just about how to apply OOP to existing code -- and, just because all your code exists in classes does not mean that it's object oriented.  That topic has the potential to be extremely condescending, but the authors miraculously managed to never once make me feel like a dolt or that they were wagging their finger at me for my prior sins.  Instead, they continue the "pain points" and problem-solution presentation to give concrete examples of how to apply some pretty academic-sounding ideas.  That's a point worth emphasizing, as my experience with most OO discussions is that they stay in the academic realm.  This book gives some very, very good explanations of why things like the Liskov Substitution Principle exist and why a corporate programmer should even care.  Even if you know, with absolute certainty, that you'll never have to work on an existing code-base, I would recommend this book just for the clarity it provides on OOP. This book goes beyond just theory, or even real-world application.  It presents some methods for fixing problems that any developer can, and probably will, encounter in the wild.  First, the authors address refactoring application layers and internal dependencies.  Then, they take you through those layers from the UI to the data access layer and external dependencies.  Finally, they come full circle to tie it all back to the overall process.  By the time the book is done, you're left with a lot of ideas, but also a reasonable plan to begin to improve an existing project structure. Throughout the book, it's apparent that the authors have their own preferred methodology (TDD and domain-driven design), as well as some preferred tools.  The "Our .NET Toolbox" is something of a neon sign pointing to that latter point.  
They do not beat the reader over the head with anything resembling a "One True Way" mentality.  Even for the most emphatic points, the tone is quite congenial and helpful.  With some of the near-theological divides that exist within the tech community, I found this to be one of the more remarkable characteristics of the book.  Although the authors favor tools that might be considered Alt.NET, there is no reason the advice and techniques given couldn't be quite successful in a pure Microsoft shop with Team Foundation Server.  For that matter, even though the book specifically addresses .NET, it could be applied to a Java and Oracle shop, as well.

    Read the article

  • Silverlight Cream Monday WP7 App Review # 2

    - by Dave Campbell
    Today's Review (alphabetic order): GooNews, Grocery Shopping List, Need for Speed, SurfCube, and United Nations News. I'm a day late if these are going to be 'Monday' posts, but there are lots of apps, lots of goodness, and lots of email, so I might try to do 2 a week, we'll see. So once again I've got a small review of 5 apps that are either on my phone or have been. Disclaimers at the end. In this Issue:   GooNews is a very cool app from Shawn Wildermuth (AgiliTrain). I don't know if he uses this as a demo during his instruction, but it definitely serves a purpose... wanna pick up the top news items from Google on a never-ending basis? ... this is it. You can add your own keyword searches, and send stories to InstaPaper or share via email. I like this because it brings me the news quickly and updated, and works great. GooNews is by AgiliTrain and is Free This was a request by the author, and actually surprised me. I'm a big one for lists, but I would have just done a OneNote list to SkyDrive and to my phone. This app is a lot more than that, but will take you some setup to make it be 'yours'. For obvious reasons, there are no unit prices on things, so you have to set that up to get some idea of the cost of what you're shopping for. But if you do that, you'll get a nice total. Lots of thought went into the various categories and you can add your own. There's a bit of animation on the category selection that's nice. He seems to have covered all the bases necessary to use this, even shopping 'plans' that can be saved, and emailing of lists. As I said, I'm more of a raw list person, but if you take the time to set this up, it should work very nicely for you. Grocery Shopping List is by Grocery Shopper and is $0.99 ($1.99 after Feb 1) with a free trial. This was my 2nd commercial game I bought, and the one I've played the most. I ran the trial, thought it worked great, and bought it. I've had a lot of fun with this... there's no gas pedal.. your foot is in the carbeurator from the GO!, and unless you wanna tap the screen and brake like a little girl, just hang onto the steering wheel (the phone), and guide your way through. Hours of fun and challenges here. I like this because it's got some challenge to it, and the cars seem to be very realistic in their reactions. Need for Speed Undercover is by Electronic Arts is $4.99 and has a free trial. SurfCube Browser is another app by the folks that did the GuitarTuner I reviewed on Monday. You have to see SurfCube to believe it. You've probably seen the YouTube video, if not check SilverlightCream number 1017. The app works very solid, and just as the video demonstrates. I downloaded and tried this, and it immediately did 2 things: bought it, and pinned it to my start page. I like this because it's fun to work with, and it works great as a browser. I'm about *this* close to replacing the IE tile on my front page with SurfCube. SurfCube Browser is by Kinabalu Innovation Limited and is $1.99 and has a free trial. Coming in with another News app is United Nations News by Justin Angel. This is definitely a news aggregator for 'grown ups'... news, photos, videos, and radio broadcsts from the international community all in one very slick app. This is an amazingly well thought-out and complete app. Even better yet, Justin has the code on CodePlex. A very well-done International news aggregator. United Nations News is by Justin Angel and is Free. A few disclaimers: Feel free to write me about your app and tell me about it. 
While it would be very cool to receive a whole bunch of xap files to review, at this point, for technical reasons, I'm unable to side-load my device. Since I plan on only doing this one day a week (twice if I find time), and only 5, I may never get caught up, so if you send me some info, be patient. Re: games ... remember I'm old... I'm from the era of Colossal Cave and Zork. Duke-Nukem 2D and Captain Comic were awesome. I don't own an XBOX or any other game system, so take game reviews from my perspective -- who knows, it may be refreshing :) I won't pay for an app or game just to try it. If you expect me to test-drive your app, it's going to have to have a Free Trial. I'm still playing with the format, comments are welcome. I decided I should alphabetize the list today... so there's no order implied Let me know what you think of the idea of doing reviews, or the layout/whatever, and Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • career advice for PhD scientist seeking to program?

    - by C SD
    I'm largely a self-taught programmer. In fact, I first started programming about half way through biophysics grad school, and even though I think I've done some pretty nice work, I've never worked as part of a 'serious' development team that had more than one or two other developers (and I wouldn't hesitate to call them equally inexperienced in software development as a profession). After finishing my PhD I applied to Google, on a lark, since I had some confidence in my abilities, if not necessarily my experience, and I was hoping to maybe slip in and absorb all the experience and talent I'd be surrounded with and become productive enough, quickly enough, that they wouldn't immediately regret their decision. I was excited to actually get invited to interview up at Mountain View (this was ~ mid 2008). Overall, my memory of the interview was very positive, but after close to a three month wait (is that normal?) they ended up turning me down. I wasn't too surprised or disappointed (aside from the uncomfortably long wait) given my unusual background and admitted lack of experience. I decided to continue as a postdoc, but focus on improving my skills rather than doing research. I've done about three years of that, and my honest assessment is that I've learned a ton more, but I really need more of a peer group to maintain or accelerate my growth. Google invited me to interview again about eight months ago, and the interview process went even better than the first time around (I thought), though they again declined to give me an offer. I have to admit this second rejection was much more discouraging. They had insisted I interview even after I mentioned to them that a move on my part was unlikely given that I had bought a house, gotten married, etc. since the first interview. I guess I was hoping they'd at least give me an offer that I could parlay into a more conventional, but still interesting, programming position close to home. So here I am, going on my third year out of grad school, a glorified postdoc and I'm starting to get pretty discouraged. Even though I could technically get 'back-on-track' for a career in science, I have been focusing the vast majority of this time on gaining programming experience rather than on research and publications. The problem is, whenever I look, most job listings have requirements that seem impossibly grandiose and I hesitate to apply. That, or the job/project seems incredibly dull. Ironically, applying to Google struck me as less intimidating. I suspect that either most people are just a lot less realistic than I am when it comes to assessing how long it will take for them to get up to speed, or they don't care; my fear is that I'm just woefully unqualified for any interesting, well paying work. IE: I'm confident I could switch fully back into C++ mode with a couple weeks work (I mostly use C,Python,C# daily) but I don't list myself as being 'proficient' in C++ on my CV, or applying for jobs that 'require' such knowledge. The few applications for which I did feel I was a legitimately good match have not elicited a response. I suspect the following things are potential problems with my application/CV and I would like feedback on: I don't have a CS degree. My BS was in biochemistry and molecular biology, my PhD in biophysics. I took a undergrad and grad level CS course at UCSD and completely killed them, but I don't know how to translate that to my CV effectively. I have a PhD, but it's not in CS... 
I've been debating whether I should remove it from my CV, and whether or not it would then be misleading to list at least some of those years as some kind of 'programming' job (in many respects it was). I think there are sometimes strong stigmas associated with 'self-taught' programmers. I am certainly one of those. I even recognize that some of those stigmas hold a hint of truth, but I really do want to be an asset to a team. How do I communicate that even though I have been largely self-directing for ~8 years I can still take marching orders when needed? Do I just say so outright? Should I just become a lot less scrupulous about the whole process? Anecdote: I have a friend who applied for positions where he completely fudged his qualifications to get past the first culling. He was much more honest and forthcoming about his actual qualifications when contacted, and he still managed to get invited to a couple of interviews and even got some offers. His balls are larger than mine, though.

    Read the article

  • Augmenting your Social Efforts via Data as a Service (DaaS)

    - by Mike Stiles
    The following is the 3rd in a series of posts on the value of leveraging social data across your enterprise by Oracle VP Product Development Don Springer and Oracle Cloud Data and Insight Service Sr. Director Product Management Niraj Deo. In this post, we will discuss the approach and value of integrating additional “public” data via a cloud-based Data-as-as-Service platform (or DaaS) to augment your Socially Enabled Big Data Analytics and CX Management. Let’s assume you have a functional Social-CRM platform in place. You are now successfully and continuously listening and learning from your customers and key constituents in Social Media, you are identifying relevant posts and following up with direct engagement where warranted (both 1:1, 1:community, 1:all), and you are starting to integrate signals for communication into your appropriate Customer Experience (CX) Management systems as well as insights for analysis in your business intelligence application. What is the next step? Augmenting Social Data with other Public Data for More Advanced Analytics When we say advanced analytics, we are talking about understanding causality and correlation from a wide variety, volume and velocity of data to Key Performance Indicators (KPI) to achieve and optimize business value. And in some cases, to predict future performance to make appropriate course corrections and change the outcome to your advantage while you can. The data to acquire, process and analyze this is very nuanced: It can vary across structured, semi-structured, and unstructured data It can span across content, profile, and communities of profiles data It is increasingly public, curated and user generated The key is not just getting the data, but making it value-added data and using it to help discover the insights to connect to and improve your KPIs. As we spend time working with our larger customers on advanced analytics, we have seen a need arise for more business applications to have the ability to ingest and use “quality” curated, social, transactional reference data and corresponding insights. The challenge for the enterprise has been getting this data inline into an easily accessible system and providing the contextual integration of the underlying data enriched with insights to be exported into the enterprise’s business applications. The following diagram shows the requirements for this next generation data and insights service or (DaaS): Some quick points on these requirements: Public Data, which in this context is about Common Business Entities, such as - Customers, Suppliers, Partners, Competitors (all are organizations) Contacts, Consumers, Employees (all are people) Products, Brands This data can be broadly categorized incrementally as - Base Utility data (address, industry classification) Public Master Reference data (trade style, hierarchy) Social/Web data (News, Feeds, Graph) Transactional Data generated by enterprise process, workflows etc. This Data has traits of high-volume, variety, velocity etc., and the technology needed to efficiently integrate this data for your needs includes - Change management of Public Reference Data across all categories Applied Big Data to extract statics as well as real-time insights Knowledge Diagnostics and Data Mining As you consider how to deploy this solution, many of our customers will be using an online “cloud” service that provides quality data and insights uniformly to all their necessary applications. 
In addition, they are requesting a service that is: Agile and Easy to Use: Applications integrated with the service can obtain data on-demand, quickly and simply Cost-effective: Pre-integrated into applications so customers don’t have to Has High Data Quality: Single point access to reference data for data quality and linkages to transactional, curated and social data Supports Data Governance: Becomes more manageable and cost-effective since control of data privacy and compliance can be enforced in a centralized place Data-as-a-Service (DaaS) Just as the cloud has transformed and now offers a better path for how an enterprise manages its IT from their infrastructure, platform, and software (IaaS, PaaS, and SaaS), the next step is data (DaaS). Over the last 3 years, we have seen the market begin to offer a cloud-based data service and gain initial traction. On one side of the DaaS continuum, we see an “appliance” type of service that provides a single, reliable source of accurate business data plus social information about accounts, leads, contacts, etc. On the other side of the continuum we see more of an online market “exchange” approach where ISVs and Data Publishers can publish and sell premium datasets within the exchange, with the exchange providing a rich set of web interfaces to improve the ease of data integration. Why the difference? It depends on the provider’s philosophy on how fast the rate of commoditization of certain data types will occur. How do you decide the best approach? Our perspective, as shown in the diagram below, is that the enterprise should develop an elastic schema to support multi-domain applicability. This allows the enterprise to take the most flexible approach to harness the speed and breadth of public data to achieve value. The key tenet of the proposed approach is that an enterprise carefully federates common utility, master reference data end points, mobility considerations and content processing, so that they are pervasively available. One way you may already be familiar with this approach is in how you do Address Verification treatments for accounts, contacts etc. If you design and revise this service in such a way that it is also easily available to social analytic needs, you could extend this to launch geo-location based social use cases (marketing, sales etc.). Our fundamental belief is that value-added data achieved through enrichment with specialized algorithms, as well as applying business “know-how” to weight-factor KPIs based on innovative combinations across an ever-increasing variety, volume and velocity of data, will be where real value is achieved. Essentially, Data-as-a-Service becomes a single entry point for the ever-increasing richness and volume of public data, with enrichment and combined capabilities to extract and integrate the right data from the right sources with the right factoring at the right time for faster decision-making and action within your core business applications. As more data becomes available (and in many cases commoditized), this value-added data processing approach will provide you with ongoing competitive advantage. Let’s look at a quick example of creating a master reference relationship that could be used as an input for a variety of your already existing business applications. In phase 1, a simple master relationship is achieved between a company (e.g. General Motors) and a variety of car brands’ social insights. 
The reference data allows for easy sort, export and integration into a set of CRM use cases for analytics, sales and marketing CRM. In phase 2, as you create more data relationships (e.g. competitors, contacts, other brands) to have broader and deeper references (social profiles, social meta-data) for more use cases across CRM, HCM, SRM, etc. This is just the tip of the iceberg, as the amount of master reference relationships is constrained only by your imagination and the availability of quality curated data you have to work with. DaaS is just now emerging onto the marketplace as the next step in cloud transformation. For some of you, this may be the first you have heard about it. Let us know if you have questions, or perspectives. In the meantime, we will continue to share insights as we can.Photo: Erik Araujo, stock.xchng

    Read the article

  • jQuery with SharePoint solutions

    - by KunaalKapoor
    For me, jQuery is the 'Plan B' for everything, and most of my projects include jQuery for something or other, so I decided to write a small note on what works best when using jQuery along with SharePoint. I prefer to use the jQuery JavaScript library, which is far more robust, easier to use, and allows for plugins. Follow the steps below to add jQuery to your master page. For Office 365, the preferred location to add jQuery files is the "Site Assets" library.

    Deployment Best Practices
    These are only as good as the context in which they are referenced. In other words, take your own environment into account before applying them.
    - Script your deployment options: a folder in SharePoint Designer (SPD), the file system, or external references. The jQuery library is on the Microsoft Ajax Content Delivery Network. You may even choose to publish to and from the document library (there are pros and cons to this approach).
    - Reference options when referencing the script:
      - ScriptLink will make sure it's loaded at the top of the page and only loaded once. You need Visual Studio or SPD.
      - Content Editor Web Part (CEWP): drop it on the page and it's there. Easy, but dangerous.
      - Custom Actions: great for global deployments of jQuery. Loads it on every page, and it also works in sandbox installations.

    Deployment Maintenance Don'ts
    - Don't add scripts directly to your master page. That's way too much effort because the pages are hard to maintain.
    - Don't add scripts directly to the CEWP. Use a content link instead. That allows for reuse, and if you or someone else deletes the CEWP you won't lose the code in the web part.
    - Security: any script runs with the same privileges as the current user. In other words, you can't get into trouble.

    Development Best Practices
    - Don't abuse the DOM. There are better options to load the DOM without hitting it 1,000 times.
    - Use other performance boosters: try other libraries, try some custom code, avoid string conversion, minify your files, and use CAML to reduce the number of returned rows.
    - Only update your jQuery library AFTER RIGOROUS REGRESSION TESTING.
    - CRUD operations can come with some fun. SPServices wraps SharePoint's web services for execution.
    - The Bing SDK is pretty easy to use. You can add it to your page with a script, put it into a Content Editor Web Part, and connect it from the address parameters in a list.

    Steps:
    1. Go to jquery.com and download the latest jQuery library to your desktop. You want the compressed production version, not the development version.
    2. Open SharePoint Designer (SPD) and connect to the root level of your site's site collection. In SPD, open the "Style Library" folder and create a folder named "Scripts" inside it. Drag the jQuery library JavaScript file from your desktop into the Scripts folder. In the Scripts folder, create a new JavaScript file and name it (e.g. "actions.js").
    3. If you are using Visual Studio, add a folder for js. You can create a new folder at the root level or, if like me you prefer cleaner solutions, you can use the layouts folder, which cleans out on deactivation/uninstall.
    4. Within the <head> tag of the master page, add a script reference to the jQuery library just above the content placeholder named "PlaceHolderAdditionalPageHead" (and above your custom CSS references, if applicable) as follows:
       <script src="/Style%20Library/Scripts/{jquery library file}.js" type="text/javascript"></script>
       Immediately after the jQuery library reference, add a script reference to your custom scripts file as follows:
       <script src="/Style%20Library/Scripts/actions.js" type="text/javascript"></script>
       Inside your script tag, you can test whether jQuery is already defined and, if not, add it to the page:
       <script type='text/javascript'>
         if (typeof jQuery == 'undefined')
           document.write('<scr'+'ipt type="text/javascript" src="http://code.jquery.com/jquery-1.6.1.min.js"></sc'+'ript>');
       </script>

    For the inquisitive few... read on if you'd like :)

    Why jQuery on SharePoint is Awesome
    It's all about that visual wow factor. You can get past the "But it looks like SharePoint" reaction: take a long list view, put it into jQuery with pagination, etc., and you are the hero. It's also about new controls you get with jQuery that you couldn't build before.

    Why jQuery with SharePoint can be Awful
    Although it's fairly easy to get jQuery up and running, copy/paste can cause problems. If you don't understand what it's doing in the Client Object Model and the Document Object Model, it will do things on your site that were completely unexpected. Many blogs will note workarounds they employed on their sites.
    - Why it's not working: debugging "sucks". You need to develop small blocks of functionality, test them by putting in some alerts and console.log, set breakpoints, and monitor the DOM via Firebug and the IE developer tools.
    - Performance: it happens all the time, but you should look at the tradeoffs. More time may give you more functionality.
    - Consistency: "But it works fine on my computer." So test on many browsers and take client resources into account.
    - Harm the farm: you need to code wisely and test negatively. Don't be the cause of a DoS attack that's really jQuery asking for a resource over and over and over again. So code wisely, do negative testing, and monitor server resources. They also did a demo where jQuery ran an endless loop to pull data from a list. It's a poor decision but also an easy mistake: they spiked their server resources within a couple of seconds and had to shut down the call before it brought the farm down.

    Conclusion
    jQuery is now another tool in your toolkit. You don't have to use it; use it where it makes sense and where it helps you get your job done. Don't abuse it, or you will pay for it later. It will add to page bloat, so take that into account, and it can slow your performance.
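    As a small illustration of the advice above (cache your selectors rather than hammering the DOM, and let CAML trim the rows SharePoint returns), here is a sketch of the kind of script that might live in actions.js. The list name, field names, and container id are made-up examples, and the SPServices options shown reflect its commonly documented GetListItems usage rather than anything prescribed by this post, so adjust them to your own site and SPServices version.

       // actions.js - a sketch; assumes the master page already loads jQuery and SPServices
       $(document).ready(function () {
           // Cache the selector once instead of querying the DOM on every iteration
           var $results = $("#customResults");    // hypothetical container on the page

           $().SPServices({
               operation: "GetListItems",
               async: true,
               listName: "Announcements",         // hypothetical list name
               CAMLViewFields: "<ViewFields><FieldRef Name='Title' /></ViewFields>",
               // Filtering in CAML reduces the number of rows the web service returns
               CAMLQuery: "<Query><Where><Eq><FieldRef Name='Status' /><Value Type='Text'>Active</Value></Eq></Where></Query>",
               completefunc: function (xData, Status) {
                   $(xData.responseXML).SPFilterNode("z:row").each(function () {
                       $results.append("<li>" + $(this).attr("ows_Title") + "</li>");
                   });
               }
           });
       });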

    Read the article

  • Stepping outside Visual Studio IDE [Part 1 of 2] with Eclipse

    - by mbcrump
    “If you're walking down the right path and you're willing to keep walking, eventually you'll make progress." – Barack Obama In my quest to become a better programmer, I’ve decided to start the process of learning Java. I will be primary using the Eclipse Language IDE. I will not bore you with the history just what is needed for a .NET developer to get up and running. I will provide links, screenshots and a few brief code tutorials. Links to documentation. The Official Eclipse FAQ’s Links to binaries. Eclipse IDE for Java EE Developers the Galileo Package (based on Eclipse 3.5 SR2)  Sun Developer Network – Java Eclipse officially recommends Java version 5 (also known as 1.5), although many Eclipse users use the newer version 6 (1.6). That's it, nothing more is required except to compile and run java. Installation Unzip the Eclipse IDE for Java EE Developers and double click the file named Eclipse.exe. You will probably want to create a link for it on your desktop. Once, it’s installed and launched you will have to select a workspace. Just accept the defaults and you will see the following: Lets go ahead and write a simple program. To write a "Hello World" program follow these steps: Start Eclipse. Create a new Java Project: File->New->Project. Select "Java" in the category list. Select "Java Project" in the project list. Click "Next". Enter a project name into the Project name field, for example, "HW Project". Click "Finish" Allow it to open the Java perspective Create a new Java class: Click the "Create a Java Class" button in the toolbar. (This is the icon below "Run" and "Window" with a tooltip that says "New Java Class.") Enter "HW" into the Name field. Click the checkbox indicating that you would like Eclipse to create a "public static void main(String[] args)" method. Click "Finish". A Java editor for HW.java will open. In the main method enter the following line.      System.out.println("This is my first java program and btw Hello World"); Save using ctrl-s. This automatically compiles HW.java. Click the "Run" button in the toolbar (looks like a VCR play button). You will be prompted to create a Launch configuration. Select "Java Application" and click "New". Click "Run" to run the Hello World program. The console will open and display "This is my first java program and btw Hello World". You now have your first java program, lets go ahead and make an applet. Since you already have the HW.java open, click inside the window and remove all code. Now copy/paste the following code snippet. Java Code Snippet for an applet. 1: import java.applet.Applet; 2: import java.awt.Graphics; 3: import java.awt.Color; 4:  5: @SuppressWarnings("serial") 6: public class HelloWorld extends Applet{ 7:  8: String text = "I'm a simple applet"; 9:  10: public void init() { 11: text = "I'm a simple applet"; 12: setBackground(Color.GREEN); 13: } 14:  15: public void start() { 16: System.out.println("starting..."); 17: } 18:  19: public void stop() { 20: System.out.println("stopping..."); 21: } 22:  23: public void destroy() { 24: System.out.println("preparing to unload..."); 25: } 26:  27: public void paint(Graphics g){ 28: System.out.println("Paint"); 29: g.setColor(Color.blue); 30: g.drawRect(0, 0, 31: getSize().width -1, 32: getSize().height -1); 33: g.setColor(Color.black); 34: g.drawString(text, 15, 25); 35: } 36: } The Eclipse IDE should look like Click "Run" to run the Hello World applet. Now, lets test our new java applet. 
So, navigate over to your workspace for example: “C:\Users\mbcrump\workspace\HW Project\bin” and you should see 2 files. HW.class java.policy.applet Create a HTML page with the following code: 1: <HTML> 2: <BODY> 3: <APPLET CODE=HW.class WIDTH=200 HEIGHT=100> 4: </APPLET> 5: </BODY> 6: </HTML> Open, the HTML page in Firefox or IE and you will see your applet running.  I hope this brief look at the Eclipse IDE helps someone get acquainted with Java Development. Even if your full time gig is with .NET, it will not hurt to have another language in your tool belt. As always, I welcome any suggestions or comments.

    Read the article

  • How To: Using SimpleMembershipProvider with MySql Connector/Net.

    - by Francisco Tirado
    Now, with Connector/Net 6.9, users have the ability to use the SimpleMembership provider with the MVC 4 templates. The configuration is very simple and is also compatible with OAuth. In this post we'll explain, step by step, how to configure it in an MVC 4 Web Application.

    Requirements
    The requirements to use SimpleMembership with Connector/Net are:
    - Install Connector/Net 6.9, or download the No Install version.
    - .NET Framework 4.0 or greater.
    - MVC 4.
    - Visual Studio 2012 or a newer version.

    Creating and configuring a new project
    In this example we'll use VS 2012 to create the project based on the Internet Application template, using Entity Framework to manage the User model. Open VS 2012 and create a new project: we'll create a new MVC 4 Web Application and configure the project to use .NET Framework 4.5. Type a name for the project and then click "Ok". In the next dialog, choose the "Internet Application" template and use Razor as the view engine, without creating a test project. Click "Ok" to continue. Now we have a new project with the templates necessary to run a Web Application with the default values. We'll use the current files to continue working. If you have installed Connector/Net you can skip this step; if you haven't installed it but are planning to do so, please install it and continue with the next step. If you're using the No Install version of Connector/Net, we'll need to add the references to our project. The assemblies needed are MySql.Data, MySql.Data.Entities and MySql.Web. Be sure that the assemblies chosen match the .NET Framework version used in our project and that MySql.Data.Entities is compatible with EF5 (EF5 is the default added by the project). Now open the "web.config" file and, under the <connectionStrings> node, add a connection string that points to a MySQL instance. We'll use the following connection configuration:

    <add name="MyConnection" connectionString="server=localhost;UserId=root;password=pass;database=MySqlSimpleMembership;" providerName="MySql.Data.MySqlClient"/>

    Under the <system.web> node we'll add the following configuration:

    <membership defaultProvider="MySqlSimpleMembershipProvider"><providers><clear/><add name="MySqlSimpleMembershipProvider" type="MySql.Web.Security.MySqlSimpleMembershipProvider,MySql.Web,Version=6.9.3.0,Culture=neutral,PublicKeyToken=c5687fc88969c44d" applicationName="MySqlSimpleMembershipTest" description="MySQLdefaultapplication" connectionStringName="MyConnection" userTableName="UserProfile" userIdColumn="UserId" userNameColumn="UserName" autoGenerateTables="True"/></providers></membership>

    In the previous configuration the mandatory properties are connectionStringName, userTableName, userIdColumn, userNameColumn and autoGenerateTables. If the other properties are not provided, default values are used, but if the mandatory properties are not set a ProviderException will be thrown. The valid properties for MySqlSimpleMembership are the same as those used for MySqlMembership, plus the mandatory fields:
    - userTableName: name of the table where the users will be stored. This table is independent from the schema generated by the provider and can be edited later by the user.
    - userIdColumn: name of the column that will store the id for the records in the userTableName.
    - userNameColumn: name of the column that will store the name/user for the records in the userTableName.
    The connectionStringName property must match a connection string defined in the web.config file.
    Once the configuration is done in web.config, we need to be sure that our database context for the users table points to the right connection string. In our case we just need to update the UsersContext class in the file AccountModel.cs in the Models folder. The file also contains the UserProfile class, which matches the configuration for our user table. Another class that needs to be updated is SimpleMembershipInitializer in the file InitializeSimpleMembershipAttribute.cs in the Filters folder. In that class we'll see a call to the method "WebSecurity.InitializeDatabaseConnection"; that call is where we need to update the parameters to match our configuration (a sketch of this call appears below). If the database that you configured in your connection string doesn't exist, you need to create it empty. Now we're ready to run our web application: press F5 or the Run button in the toolbar. You'll see the following screen. If you go to the database used by the application you'll see some tables created; we are now using SimpleMembership. Now create a user: click "Register" at the top right of the web page, type your user name and password, then click "Register". You'll be redirected to the home page and you'll see the name of your user at the top right of the page. If you take a look at the tables just created in your database you will find the data about the user you just registered. In our case the tables that contain the information are UserProfile and Webpages_Membership.

    Configuring OAuth
    Another option for accessing your website is OAuth, so you can validate a user using an external account like Facebook, Twitter, Google, etc. In this post we'll enable authentication for Google accounts in our application. Go to the class AuthConfig.cs in the App_Start folder. In the "RegisterAuth" method, uncomment the last line, which contains the call to the method "OAuthWebSecurity.RegisterGoogleClient". Run the application. Once the application is running, click "Login". You will see on the right side the option to log in using a Google account; click "Google". You will be asked for your Google credentials. If your login is successful you'll see a message asking for your approval to give your site permission to access your information. Click "Accept". Now a page to register your user will be shown; click "Register". Your new user is now logged in to your application. You can take a look at the user information created in the tables UserProfile and Webpages_OauthMembership. If you want to use another external option to authenticate users you must enable the client in the same class where we enabled Google authentication, but for other providers it is mandatory to register your application on their site. Once you have registered your application they will give you a token/key and an id for your application, and you will use that information to register the client. Thanks for reading.
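    For reference, here is a minimal sketch of what that SimpleMembershipInitializer call typically looks like once its parameters are lined up with the provider settings in web.config above. The class is abridged (the stock MVC 4 template wraps this call in additional error handling and an Entity Framework database check), so treat it as an illustration rather than the article's exact file.

       // InitializeSimpleMembershipAttribute.cs (abridged sketch) - assumes the stock MVC 4 template layout
       using WebMatrix.WebData;

       public class SimpleMembershipInitializer
       {
           public SimpleMembershipInitializer()
           {
               // Parameters mirror the membership provider settings in web.config:
               // connection string name, user table, id column, user name column,
               // and whether the membership tables should be created automatically.
               WebSecurity.InitializeDatabaseConnection(
                   "MyConnection",    // connectionStringName
                   "UserProfile",     // userTableName
                   "UserId",          // userIdColumn
                   "UserName",        // userNameColumn
                   autoCreateTables: true);
           }
       }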

    Read the article

  • Getting Started With Tailoring Business Processes

    - by Richard Bingham
    In this article, and for the sake of simplicity, we will use the term "On-Premise" to mean a deployment where you have design-time development access to the instance, including administration of the technology components, the applications filesystem, and the database. In reality this might be a local development instance that is then supported by a team who can deploy your customizations to the restricted production instance equivalents.

    Tools Overview
    First, let's look at the design-time tools within JDeveloper for customizing and extending the artifacts of a business process. In essence this falls into two buckets: the SOA Composite Editor for working with BPEL processes, and BPM Studio.

    The SOA Composite Editor
    As a standard extension to JDeveloper, this graphical design tool should be familiar to anyone who has previously worked with Oracle SOA Server. With easy-to-use modeling capability, backed up by a full (read-only) XML source view, it provides everything that is needed to implement the technical design. In simple terms, once deployed to the remote SOA Server, the composite components (like Mediator) leverage the Event Delivery Network (EDN) for interaction with the application logic. If you are customizing an existing Fusion Applications BPEL process, be aware that it does support MDS-based customization layers just like Page Composer, where different customizations are used based on the run-time context, such as a specific Product or Business Unit. This also makes them safe from patching and upgrades, although only a single active version of the composite is available at run-time. This is defined by a field on the composite record, available in Enterprise Manager. Obviously, if you wish to fire different activities and tasks based on the user context then you should include switches to fork the flows in your custom BPEL process.

    Figure 1 – A BPEL process in Composite Editor

    The following describes the simplified steps for making customizations to BPEL processes. This is the most common method of changing the business processes of Fusion Applications, as over 400 BPEL-based composite applications are provided out-of-the-box.
    1. Set up your local Fusion Applications JDeveloper environment. The SOA Composite Editor should be installed as part of the Fusion Applications extension. If there are problems you can also find it under the 'Check for Updates' help menu option.
    2. Since SOA Server is not part of the JDeveloper integrated WebLogic Server, set up a standalone WebLogic environment for deploying and testing. Obviously you might use a Fusion Applications development instance also.
    3. Package the existing standard Fusion Applications SOA Composite using Enterprise Manager and export it as a complete SOA Archive (SAR) file, resulting in a local .jar file. You may need to ask your system administrator for this.
    4. Import the exported SAR .jar file into JDeveloper using the File menu, under the option 'SOA Archive into SOA Project'.
    5. In JDeveloper, set the appropriate customization layer values, and then change from the default role to the Fusion Applications Customization Developer role.
    6. Make the customizations and save the application project.
    7. Finally, redeploy the composite application, either to a direct Application Server connection, or as a fresh SAR (jar) file that can then be re-imported and deployed via Enterprise Manager.
    The Business Process Management (BPM) Suite
    In addition to the relatively low-level development environment associated with BPEL process creation, Oracle provides a suite of products that allow business process adjustments to be made without needing as much programming skill. The aim is to abstract much of the technical implementation and to provide Business Analysts with tools for immediately implementing organizational changes. Obviously there are some limitations on what they can do; however, the BPM Suite functionality increases with each release, and in the majority of cases the tools remain as applicable as their developer-oriented sibling. At the current time, business processes must be explicitly coded to support just one of these use cases: either BPEL for developer use or BPM for business analyst use. That said, they both run on the same SOA Server in much the same way. The components bundled in each SOA Composite Application can be verified by inspection through Enterprise Manager.

    Figure 2 – A BPM Process in JDeveloper BPM Suite.

    BPM processes are written in a standard notation (BPMN) and the modeling tools are very similar to those for BPEL. The steps to deploy a custom BPM process are also essentially the same, since the BPM process is bundled into a SOA Composite just like a BPEL process. As such, the SOA Composite Editor actually has support for both artifacts and even allows use of them together, such as calling a BPM process as a partner link from a BPEL process. For more details see the references below.

    Business Analyst Tooling
    In addition to using JDeveloper extensions for BPM development, there are run-time tools that Business Analysts can use to make adjustments, so that the system can be tuned to match changes to the business operation without the high costs of an IT project. The first tool to consider is the BPM Composer, deployed with the middleware SOA Server and accessible online; for Fusion Applications it is under the Business Process icon on the homepage of the Application Composer.

    Figure 3 – Business Process Composer showing a CRM process flow.

    The key difference between this and using JDeveloper is that the BPM Composer has a Business Catalog prepopulated with features and functions that can be used, mostly through registered WebServices. This means no coding or complex interface development is required; simply drag, drop, and configure. The items in the business catalog are seeded either by Oracle (as a BPM Template) or added to by your own custom development. You cannot create or generate catalog content from BPM Composer directly. As per the screenshot, you can see the Business Catalog content in the BPM Project browser region. In addition, other online tools for use by Business Analysts include the BPM Worklist application for editing business rules and approval management configuration, plus the SOA Composer, which focuses on non-approval business rules and domain value maps. At the current time there are only a handful of BPM processes shipped with Fusion Applications HCM and CRM, including on-boarding workers and processing customer registrations. This also means a limited number of associated BPM Templates are provided out-of-the-box, and therefore a limited Business Catalog. That said, BPM-based extension is a powerful capability to leverage and will most likely develop going forward, especially for use in SaaS deployments where full design-time JDeveloper access is not available.
Further Reading For BPEL – Fusion Applications Extensibility Guide – Section 12 For BPM – Fusion Applications Extensibility Guide – Section 7 The product-specific documentation and implementation guides for Fusion Applications Fusion Middleware Developers Guide for SOA Suite Modeling and Implementation Guide for Oracle Business Process Management User’s Guide for Oracle Business Process Composer Oracle University courses on BPM Suite and SOA Development

    Read the article

  • How does gluLookAt work?

    - by Chan
    From my understanding, gluLookAt( eye_x, eye_y, eye_z, center_x, center_y, center_z, up_x, up_y, up_z ); is equivalent to: glRotatef(B, 0.0, 0.0, 1.0); glRotatef(A, wx, wy, wz); glTranslatef(-eye_x, -eye_y, -eye_z); But when I print out the ModelView matrix, the call to glTranslatef() doesn't seem to work properly. Here is the code snippet: #include <stdlib.h> #include <stdio.h> #include <GL/glut.h> #include <iomanip> #include <iostream> #include <string> using namespace std; static const int Rx = 0; static const int Ry = 1; static const int Rz = 2; static const int Ux = 4; static const int Uy = 5; static const int Uz = 6; static const int Ax = 8; static const int Ay = 9; static const int Az = 10; static const int Tx = 12; static const int Ty = 13; static const int Tz = 14; void init() { glClearColor(0.0, 0.0, 0.0, 0.0); glEnable(GL_DEPTH_TEST); glShadeModel(GL_SMOOTH); glEnable(GL_LIGHTING); glEnable(GL_LIGHT0); GLfloat lmodel_ambient[] = { 0.8, 0.0, 0.0, 0.0 }; glLightModelfv(GL_LIGHT_MODEL_AMBIENT, lmodel_ambient); } void displayModelviewMatrix(float MV[16]) { int SPACING = 12; cout << left; cout << "\tMODELVIEW MATRIX\n"; cout << "--------------------------------------------------" << endl; cout << setw(SPACING) << "R" << setw(SPACING) << "U" << setw(SPACING) << "A" << setw(SPACING) << "T" << endl; cout << "--------------------------------------------------" << endl; cout << setw(SPACING) << MV[Rx] << setw(SPACING) << MV[Ux] << setw(SPACING) << MV[Ax] << setw(SPACING) << MV[Tx] << endl; cout << setw(SPACING) << MV[Ry] << setw(SPACING) << MV[Uy] << setw(SPACING) << MV[Ay] << setw(SPACING) << MV[Ty] << endl; cout << setw(SPACING) << MV[Rz] << setw(SPACING) << MV[Uz] << setw(SPACING) << MV[Az] << setw(SPACING) << MV[Tz] << endl; cout << setw(SPACING) << MV[3] << setw(SPACING) << MV[7] << setw(SPACING) << MV[11] << setw(SPACING) << MV[15] << endl; cout << "--------------------------------------------------" << endl; cout << endl; } void reshape(int w, int h) { float ratio = static_cast<float>(w)/h; glViewport(0, 0, w, h); glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(45.0, ratio, 1.0, 425.0); } void draw() { float m[16]; glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glGetFloatv(GL_MODELVIEW_MATRIX, m); gluLookAt( 300.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f ); glColor3f(1.0, 0.0, 0.0); glutSolidCube(100.0); glGetFloatv(GL_MODELVIEW_MATRIX, m); displayModelviewMatrix(m); glutSwapBuffers(); } int main(int argc, char** argv) { glutInit(&argc, argv); glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH); glutInitWindowSize(400, 400); glutInitWindowPosition(100, 100); glutCreateWindow("Demo"); glutReshapeFunc(reshape); glutDisplayFunc(draw); init(); glutMainLoop(); return 0; } No matter what value I use for the eye vector: 300, 0, 0 or 0, 300, 0 or 0, 0, 300 the translation vector is the same, which doesn't make any sense because the order of code is in backward order so glTranslatef should run first, then the 2 rotations. Plus, the rotation matrix, is completely independent of the translation column (in the ModelView matrix), then what would cause this weird behavior? Here is the output with the eye vector is (0.0f, 300.0f, 0.0f) MODELVIEW MATRIX -------------------------------------------------- R U A T -------------------------------------------------- 0 0 0 0 0 0 0 0 0 1 0 -300 0 0 0 1 -------------------------------------------------- I would expect the T column to be (0, -300, 0)! 
So could anyone help me explain this? The implementation of gluLookAt from http://www.mesa3d.org void GLAPIENTRY gluLookAt(GLdouble eyex, GLdouble eyey, GLdouble eyez, GLdouble centerx, GLdouble centery, GLdouble centerz, GLdouble upx, GLdouble upy, GLdouble upz) { float forward[3], side[3], up[3]; GLfloat m[4][4]; forward[0] = centerx - eyex; forward[1] = centery - eyey; forward[2] = centerz - eyez; up[0] = upx; up[1] = upy; up[2] = upz; normalize(forward); /* Side = forward x up */ cross(forward, up, side); normalize(side); /* Recompute up as: up = side x forward */ cross(side, forward, up); __gluMakeIdentityf(&m[0][0]); m[0][0] = side[0]; m[1][0] = side[1]; m[2][0] = side[2]; m[0][1] = up[0]; m[1][1] = up[1]; m[2][1] = up[2]; m[0][2] = -forward[0]; m[1][2] = -forward[1]; m[2][2] = -forward[2]; glMultMatrixf(&m[0][0]); glTranslated(-eyex, -eyey, -eyez); }

    Read the article

  • Resolving data redundancy up front

    - by okeofs
    Introduction As all of us do when confronted with a problem, the resource of choice is to ‘Google it’. This is where the plot thickens. Recently I was asked to stage data from numerous databases which were to be loaded into a data warehouse. To make a long story short, I was looking for a manner in which to obtain the table names from each database, to ascertain potential overlap.   As the source data comes from a SQL database created from dumps of a third party product,  one could say that there were +/- 95 tables for each database.   Yes I know that first instinct is to use the system stored procedure “exec sp_msforeachdb 'select "?" AS db, * from [?].sys.tables'”. However, if one stops to think about this, it would be nice to have all the results in a temporary or disc based  table; which in itself , implies additional labour. This said,  I decided to ‘re-invent’ the wheel. The full code sample may be found at the bottom of this article.   Define a few temporary tables and variables   declare @SQL varchar(max); declare @databasename varchar(75) /* drop table ##rawdata3 drop table #rawdata1 drop table #rawdata11 */ -- A temp table to hold the names of my databases CREATE TABLE #rawdata1 (    database_name varchar(50) ,    database_size varchar(50),    remarks Varchar(50) )     --A temp table with the same database names as above, HOWEVER using an --Identity number (recNO) as a loop variable. --You will note below that I loop through until I reach 25 (see below) as at --that point the system databases, the reporting server database etc begin. --1- 24 are user databases. These are really what I was looking for. --Whilst NOT the best solution,it works and the code was meant as a quick --and dirty. CREATE TABLE #rawdata11 (    recNo int identity(1,1),    database_name varchar(50) ,    database_size varchar(50),    remarks Varchar(50) )   --My output table showing the database name and table name CREATE TABLE ##rawdata3 (    database_name varchar(75) ,    table_name varchar(75), )   Insert the database names into a temporary table I pull the database names using the system stored procedure sp_databases   INSERT INTO #rawdata1 EXEC sp_databases Go   Insert the results from #rawdata1 into a table containing a record number  #rawdata11 so that I can LOOP through the extract   INSERT into #rawdata11 select * from  #rawdata1   We now declare 3 more variables:  @kounter is used to keep track of our position within the loop. @databasename is used to keep track of the’ current ‘ database name being used in the current pass of the loop;  as inorder to obtain the tables for that database we  need to issue a ‘USE’ statement, an insert command and other related code parts. This is the challenging part. @sql is a varchar(max) variable used to contain the ‘USE’ statement PLUS the’ insert ‘ code statements. We now initalize @kounter to 1 .   declare @kounter int; declare @databasename varchar(75); declare @sql varchar(max); set @kounter = 1   The Loop The astute reader will remember that the temporary table #rawdata11 contains our  database names  and each ‘database row’ has a record number (recNo). I am only interested in record numbers under 25. I now set the value of the temporary variable @DatabaseName (see below) .Note that I used the row number as a part of the predicate. Now, knowing the database name, I can create dynamic T-SQL to be executed using the sp_sqlexec stored procedure (see the code in red below). 
Finally, after all the tables for that given database have been placed in temporary table ##rawdata3, I increment the counter and continue on. Note that I used a global temporary table to ensure that the result set persists after the termination of the run. At some stage, I plan to redo this part of the code, as global temporary tables are not really an ideal solution.    WHILE (@kounter < 25)  BEGIN  select @DatabaseName = database_name from #rawdata11 where recNo = @kounter  set @SQL = 'Use ' + @DatabaseName + ' Insert into ##rawdata3 ' + + ' SELECT table_catalog,Table_name FROM information_schema.tables' exec sp_sqlexec  @Sql  SET @kounter  = @kounter + 1  END   The full code extract   Here is the full code sample.   declare @SQL varchar(max); declare @databasename varchar(75) /* drop table ##rawdata3 drop table #rawdata1 drop table #rawdata11 */ CREATE TABLE #rawdata1 (    database_name varchar(50) ,    database_size varchar(50),    remarks Varchar(50) ) CREATE TABLE #rawdata11 (    recNo int identity(1,1),    database_name varchar(50) ,    database_size varchar(50),    remarks Varchar(50) ) CREATE TABLE ##rawdata3 (    database_name varchar(75) ,    table_name varchar(75), )   INSERT INTO #rawdata1 EXEC sp_databases go INSERT into #rawdata11 select * from  #rawdata1 declare @kounter int; declare @databasename varchar(75); declare @sql varchar(max); set @kounter = 1 WHILE (@kounter < 25)  BEGIN  select @databasename = database_name from #rawdata11 where recNo = @kounter  set @SQL = 'Use ' + @DatabaseName + ' Insert into ##rawdata3 ' + + ' SELECT table_catalog,Table_name FROM information_schema.tables' exec sp_sqlexec  @Sql  SET @kounter  = @kounter + 1  END    select * from ##rawdata3  where table_name like '%SalesOrderHeader%'
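    For comparison, the same inventory can be collected from client code if the staging ever moves out of pure T-SQL. The sketch below is only an illustration: the connection string, the database_id filter and the method name are my assumptions, and the dynamic T-SQL above remains the article's actual approach.

    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;

    static class CatalogWalker
    {
        // Returns (database, table) pairs for every user database on the instance.
        public static List<Tuple<string, string>> CollectTables(string connectionString)
        {
            var results = new List<Tuple<string, string>>();
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();

                // database_id > 4 skips master, tempdb, model and msdb, much like
                // the article stops the loop before the system databases.
                var databases = new List<string>();
                using (var cmd = new SqlCommand("select name from sys.databases where database_id > 4", conn))
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        databases.Add(reader.GetString(0));

                foreach (var db in databases)
                {
                    // Same query as the dynamic SQL, pointed at each database in turn.
                    var sql = "select table_catalog, table_name from [" + db + "].information_schema.tables";
                    using (var cmd = new SqlCommand(sql, conn))
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            results.Add(Tuple.Create(reader.GetString(0), reader.GetString(1)));
                }
            }
            return results;
        }
    }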

    Read the article

  • Notes on implementing Visual Studio 2010 Navigate To

    - by cyberycon
    One of the many neat functions added to Visual Studio in VS 2010 was the Navigate To feature. You can find it by clicking Edit, Navigate To, or by using the keyboard shortcut Ctrl, (yes, that's control plus the comma key). This pops up the Navigate To dialog that looks like this: As you type, Navigate To starts searching through a number of different search providers for your term. The entries in the list change as you type, with most providers doing some kind of fuzzy or at least substring matching. If you have C#, C++ or Visual Basic projects in your solution, all symbols defined in those projects are searched. There's also a file search provider, which displays all matching filenames from projects in the current solution as well. And, if you have a Visual Studio package of your own, you can implement a provider too. Micro Focus (where I work) provide the Visual COBOL language inside Visual Studio (http://visualstudiogallery.msdn.microsoft.com/ef9bc810-c133-4581-9429-b01420a9ea40 ), and we wanted to provide this functionality too. This post provides some notes on the things I discovered mainly through trial and error, but also with some kind help from devs inside Microsoft. The expectation of Navigate To is that it searches across the whole solution, not just the current project. So in our case, we wanted to search for all COBOL symbols inside all of our Visual COBOL projects inside the solution. So first of all, here's the Microsoft documentation on Navigate To: http://msdn.microsoft.com/en-us/library/ee844862.aspx . It's the reference information on the Microsoft.VisualStudio.Language.NavigateTo.Interfaces Namespace, and it lists all the interfaces you will need to implement to create your own Navigate To provider. Navigate To uses Visual Studio's latest mechanism for integrating external functionality and services, Managed Extensibility Framework (MEF). MEF components don't require any registration with COM or any other registry entries to be found by Visual Studio. Visual Studio looks in several well-known locations for manifest files (extension.vsixmanifest). It then uses reflection to scan for MEF attributes on classes in the assembly to determine which functionality the assembly provides. MEF itself is actually part of the .NET framework, and you can learn more about it here: http://mef.codeplex.com/. To get started with Visual Studio and MEF you could do worse than look at some of the editor examples on the VSX page http://archive.msdn.microsoft.com/vsx . I've also written a small application to help with switching between development and production MEF assemblies, which you can find on Codeproject: http://www.codeproject.com/KB/miscctrl/MEF_Switch.aspx. The Navigate To interfaces Back to Navigate To, and summarizing the MSDN reference documentation, you need to implement the following interfaces: INavigateToItemProviderFactoryThis is Visual Studio's entry point to your Navigate To implementation, and you must decorate your implementation with the following MEF export attribute: [Export(typeof(INavigateToItemProviderFactory))]  INavigateToItemProvider Your INavigateToItemProviderFactory needs to return your implementation of INavigateToItemProvider. This class implements StartSearch() and StopSearch(). StartSearch() is the guts of your provider, and we'll come back to it in a minute. This object also needs to implement IDisposeable(). INavigateToItemDisplayFactory Your INavigateToItemProvider hands back NavigateToItems to the NavigateTo framework. 
But to give you good control over what appears in the NavigateTo dialog box, these items will be handed back to your INavigateToItemDisplayFactory, which must create objects implementing INavigateToItemDisplay  INavigateToItemDisplay Each of these objects represents one result in the Navigate To dialog box. As well as providing the description and name of the item, this object also has a NavigateTo() method that should be capable of displaying the item in an editor when invoked. Carrying out the search The lifecycle of your INavigateToItemProvider is the same as that of the Navigate To dialog. This dialog is modal, which makes your implementation a little easier because you know that the user can't be changing things in editors and the IDE while this dialog is up. But the Navigate To dialog DOES NOT run on the main UI thread of the IDE – so you need to be aware of that if you want to interact with editors or other parts of the IDE UI. When the user invokes the Navigate To dialog, your INavigateToItemProvider gets sent a TryCreateNavigateToItemProvider() message. Instantiate your INavigateToItemProvider and hand this back. The sequence diagram below shows what happens next. Your INavigateToItemProvider will get called with StartSearch(), and passed an INavigateToCallback. StartSearch() is an asynchronous request – you must return from this method as soon as possible, and conduct your search on a separate thread. For each match to the search term, instantiate a NavigateToItem object and send it to INavigateToCallback.AddItem(). But as the user types in the Search Terms field, NavigateTo will invoke your StartSearch() method repeatedly with the changing search term. When you receive the next StartSearch() message, you have to abandon your current search, and start a new one. You can't rely on receiving a StopSearch() message every time. Finally, when the Navigate To dialog box is closed by the user, you will get a Dispose() message – that's your cue to abandon any uncompleted searches, and dispose any resources you might be using as part of your search. While you conduct your search invoke INavigateToCallback.ReportProgress() occasionally to provide feedback about how close you are to completing the search. There does not appear to be any particular requirement to how often you invoke ReportProgress(), and you report your progress as the ratio of two integers. In my implementation I report progress in terms of the number of symbols I've searched over the total number of symbols in my dictionary, and send a progress report every 16 symbols. Displaying the Results The Navigate to framework invokes INavigateToItemDisplayProvider.CreateItemDisplay() once for each result you passed to the INavigateToCallback. CreateItemDisplay() is passed the NavigateToItem you handed to the callback, and must return an INavigateToItemDisplay object. NavigateToItem is a sealed class which has a few properties, including the name of the symbol. It also has a Tag property, of type object. This enables you to stash away all the information you will need to create your INavigateToItemDisplay, which must implement an INavigateTo() method to display a symbol in an editor IDE when the user double-clicks an entry in the Navigate To dialog box. Since the tag is of type object, it is up to you, the implementor, to decide what kind of object you store in here, and how it enables the retrieval of other information which is not included in the NavigateToItem properties. 
    Some of the INavigateToItemDisplay properties are self-explanatory, but a couple of them are less obvious. Additional information: the string you return here is displayed inside brackets on the same line as the Name property. In English locales, Visual Studio includes the preposition "of". If you look at the first line in the Navigate To screenshot at the top of this article, Book_WebRole.Default is the additional information for textBookAuthor, and is the namespace-qualified type name the symbol appears in. For procedural COBOL code we display the Program Id as the additional information. DescriptionItems: you can use this property to return any textual description you want about the item currently selected. You return a collection of DescriptionItem objects, each of which has a category and a description, both collections of DescriptionRun objects. A DescriptionRun enables you to specify some text, and optional formatting, so you have some control over the appearance of the displayed text. The DescriptionItems property is displayed at the bottom of the Navigate To dialog box, with the Categories on the left and the Descriptions on the right. The Visual COBOL implementation uses it to display more information about the location of an item, making it easier for the user to disambiguate duplicate names (of which there can be a lot in large COBOL applications). Summary: I hope this article is useful for anyone implementing Navigate To. It is a fantastic navigation feature that Microsoft have added to Visual Studio, but at the moment there still don't seem to be any examples of how to implement it, and the reference information on MSDN is a little brief for anyone attempting an implementation.
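    As a starting point, here is a minimal, hedged sketch of the search side of such a provider. It follows the StartSearch()/StopSearch() lifecycle described above and only uses the AddItem() and ReportProgress() callbacks mentioned in the article; the method shapes follow the MSDN reference linked earlier, and the symbol list and CreateItem() are placeholders for whatever symbol store and NavigateToItem packaging your own package uses.

    using System;
    using System.Collections.Generic;
    using System.Threading;
    using Microsoft.VisualStudio.Language.NavigateTo.Interfaces;

    public class SymbolNavigateToItemProvider : INavigateToItemProvider
    {
        private readonly List<string> _symbols;   // placeholder for your real symbol store
        private CancellationTokenSource _cts;

        public SymbolNavigateToItemProvider(List<string> symbols)
        {
            _symbols = symbols;
        }

        public void StartSearch(INavigateToCallback callback, string searchValue)
        {
            // Called repeatedly as the user types: abandon the previous search
            // and start a new one. A StopSearch() is not guaranteed in between.
            if (_cts != null) _cts.Cancel();
            CancellationTokenSource cts = _cts = new CancellationTokenSource();

            // The dialog does not run on the IDE's main UI thread, and StartSearch
            // must return quickly, so the actual scan happens on a worker thread.
            ThreadPool.QueueUserWorkItem(delegate
            {
                for (int i = 0; i < _symbols.Count; i++)
                {
                    if (cts.IsCancellationRequested) return;   // superseded by a newer search or Dispose()

                    if (_symbols[i].IndexOf(searchValue, StringComparison.OrdinalIgnoreCase) >= 0)
                        callback.AddItem(CreateItem(_symbols[i]));

                    if (i % 16 == 0)
                        callback.ReportProgress(i, _symbols.Count);   // progress as a ratio of two integers
                }
                callback.ReportProgress(_symbols.Count, _symbols.Count);
            });
        }

        public void StopSearch()
        {
            if (_cts != null) _cts.Cancel();
        }

        public void Dispose()
        {
            if (_cts != null) _cts.Cancel();
        }

        // Placeholder: package the symbol, and anything NavigateTo() will need
        // later, into a NavigateToItem (typically via its Tag property).
        private NavigateToItem CreateItem(string symbol)
        {
            throw new NotImplementedException();
        }
    }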

    Read the article

  • Joining on NULLs

    - by Dave Ballantyne
    A problem I see on a fairly regular basis is that of dealing with NULL values.  Specifically here, where we are joining two tables on two columns, one of which is ‘optional’ ie is nullable.  So something like this: i.e. Lookup where all the columns are equal, even when NULL.   NULL’s are a tricky thing to initially wrap your mind around.  Statements like “NULL is not equal to NULL and neither is it not not equal to NULL, it’s NULL” can cause a serious brain freeze and leave you a gibbering wreck and needing your mummy. Before we plod on, time to setup some data to demo against. Create table #SourceTable ( Id integer not null, SubId integer null, AnotherCol char(255) not null ) go create unique clustered index idxSourceTable on #SourceTable(id,subID) go with cteNums as ( select top(1000) number from master..spt_values where type ='P' ) insert into #SourceTable select Num1.number,nullif(Num2.number,0),'SomeJunk' from cteNums num1 cross join cteNums num2 go Create table #LookupTable ( Id integer not null, SubID integer null ) go insert into #LookupTable Select top(100) id,subid from #SourceTable where subid is not null order by newid() go insert into #LookupTable Select top(3) id,subid from #SourceTable where subid is null order by newid() If that has run correctly, you will have 1 million rows in #SourceTable and 103 rows in #LookupTable.  We now want to join one to the other. First attempt – Lets just join select * from #SourceTable join #LookupTable on #LookupTable.id = #SourceTable.id and #LookupTable.SubID = #SourceTable.SubID OK, that’s a fail.  We had 100 rows back,  we didn’t correctly account for the 3 rows that have null values.  Remember NULL <> NULL and the join clause specifies SUBID=SUBID, which for those rows is not true. Second attempt – Lets deal with those pesky NULLS select * from #SourceTable join #LookupTable on #LookupTable.id = #SourceTable.id and isnull(#LookupTable.SubID,0) = isnull(#SourceTable.SubID,0) OK, that’s the right result, well done and 99.9% of the time that is where its left. It is a relatively trivial CPU overhead to wrap ISNULL around both columns and compare that result, so no problems.  But, although that’s true, this a relational database we are using here, not a procedural language.  SQL is a declarative language, we are making a request to the engine to get the results we want.  How we ask for them can make a ton of difference. Lets look at the plan for our second attempt, specifically the clustered index seek on the #SourceTable   There are 2 predicates. The ‘seek predicate’ and ‘predicate’.  The ‘seek predicate’ describes how SQLServer has been able to use an Index.  Here, it has been able to navigate the index to resolve where ID=ID.  So far so good, but what about the ‘predicate’ (aka residual probe) ? This is a row-by-row operation.  For each row found in the index matching the Seek Predicate, the leaf level nodes have been scanned and tested using this logical condition.  In this example [Expr1007] is the result of the IsNull operation on #LookupTable and that is tested for equality with the IsNull operation on #SourceTable.  This residual probe is quite a high overhead, if we can express our statement slightly differently to take full advantage of the index and make the test part of the ‘Seek Predicate’. 
Third attempt – X is null and Y is null. So, let's state the query in a slightly different manner: select * from #SourceTable join #LookupTable on #LookupTable.id = #SourceTable.id and ( #LookupTable.SubID = #SourceTable.SubID or (#LookupTable.SubID is null and #SourceTable.SubId is null) ) It is slightly wordier and may not be as clear in its intent to the human reader (that is what comments are for), but the key point is that it is now clearer to the query optimizer what our intention is. Let's look at the plan for that query, again specifically the index seek operation on #SourceTable. No ‘predicate’, just a ‘Seek Predicate’ against the index to resolve both ID and SubID.  A subtle difference that can be easily overlooked.  But has it made a difference to the performance? Well, yes, a perhaps surprisingly high one. Clever query optimizer, well done. If you are using a scalar function on a column, you are pretty much guaranteeing that a residual probe will be used.  By re-wording the query you may well be able to avoid this and use the index completely to resolve lookups. In terms of performance and scalability your system will be in a much better position if you can.

    Read the article

  • BIP 11g Dynamic SQL

    - by Tim Dexter
    Back in the 10g release, if you wanted something beyond the standard query for your report extract; you needed to break out your favorite text editor. You gotta love 'vi' and hate emacs, am I right? And get to building a data template, they were/are lovely to write, such fun ... not! Its not fun writing them by hand but, you do get to do some cool stuff around the data extract including dynamic SQL. By that I mean the ability to add content dynamically to your your query at runtime. With 11g, we spoiled you with a visual builder, no more vi or notepad sessions, a friendly drag and drop interface allowing you to build hierarchical data sets, calculated columns, summary columns, etc. You can still create the dynamic SQL statements, its not so well documented right now, in lieu of doc updates here's the skinny. If you check out the 10g process to create dynamic sql in the docs. You need to create a data trigger function where you assign the dynamic sql to a global variable that's matched in your report SQL. In 11g, the process is really the same, BI Publisher just provides a bit more help to define what trigger code needs to be called. You still need to create the function and place it inside a package in the db. Here's a simple plsql package with the 'beforedata' function trigger. Spec create or replace PACKAGE BIREPORTS AS whereCols varchar2(2000); FUNCTION beforeReportTrig return boolean; end BIREPORTS; Body create or replace PACKAGE BODY BIREPORTS AS   FUNCTION beforeReportTrig return boolean AS   BEGIN       whereCols := ' and d.department_id = 100';     RETURN true;   END beforeReportTrig; END BIREPORTS; you'll notice the additional where clause (whereCols - declared as a public variable) is hard coded. I'll cover parameterizing that in my next post. If you can not wait, check the 10g docs for an example. I have my package compiling successfully in the db. Now, onto the BIP data model definition. 1. Create a new data model and go ahead and create your query(s) as you would normally. 2. In the query dialog box, add in the variables you want replaced at runtime using an ampersand rather than a colon e.g. &whereCols.   select     d.DEPARTMENT_NAME, ...  from    "OE"."EMPLOYEES" e,     "OE"."DEPARTMENTS" d  where   d."DEPARTMENT_ID"= e."DEPARTMENT_ID" &whereCols   Note that 'whereCols' matches the global variable name in our package. When you click OK to clear the dialog, you'll be asked for a default value for the variable, just use ' and 1=1' That leading space is important to keep the SQL valid ie required whitespace. This value will be used for the where clause if case its not set by the function code. 3. Now click on the Event Triggers tree node and create a new trigger of the type Before Data. Type in the default package name, in my example, 'BIREPORTS'. Then hit the update button to get BIP to fetch the valid functions.In my case I get to see the following: Select the BEFOREREPORTTRIG function (or your name) and shuttle it across. 4. Save your data model and now test it. For now, you can update the where clause via the plsql package. Next time ... parametrizing the dynamic clause.

    Read the article

  • Right Hand Column Does Not Align Properly in IE6/7/8

    - by Kalpesh Vasta
    Hi Guys, I'm new to this but here goes. I have been developing this website http://www.panelmaster.co.uk and i have managed to solve the majority of design problems but one! If you take a look at the site in IE the right column seems to drop down and is not aligned with the right and centre column. This problem only occurs in IE as upon testing i found it was fine in firefox and safari. I have provided below the CSS for the website. I would appreciate if you guys can help me with the problem. Thanks in advance. :) ========================== body { margin: 0; padding: 0; line-height: 1.5em; font-family: Tahoma, Geneva, sans-serif; font-size: 12px; color: #666; background-image: url(images/templatemo_body_top.jpg); background-color: #90857c; background-repeat: repeat-x; background-position: top; text-align: left; } a:link, a:visited { color: #073475; text-decoration: none; font-weight: normal; } a:active, a:hover { color: #073475; text-decoration: underline; } h3 { color: #1e7da9; font-size: 16px; font-weight: bold; } h2 { color: #1e7da9; font-size: 16px; font-weight: bold; } h1 { color: #696969; font-size: 20px; font-weight: bold; } p { margin: 0px; padding: 0px; } img { margin: 0px; padding: 0px; border: none; } .cleaner { clear: both; width: 100%; height: 0px; font-size: 0px; } .cleaner_h30 { clear: both; width:100%; height: 30px; } .cleaner_h40 { clear: both; width:100%; height: 40px; } .float_l { float: left; } .float_r { float: right; } .margin_r20 { margin-right: 20px; } #templatemo_body_wrapper { width: 100%; background: url(images/templatemo_body_bottom.png) repeat-x bottom center; } #templatemo_wrapper { width: 970px; padding: 0 10px; margin: 0 auto; background: url(images/templatemo_wrapper_top.jpg) no-repeat top center; } /* header */ #templatemo_header { clear: both; width: 890px; height: 60px; padding: 20px 40px } #templatemo_header #site_title { float: left; padding-top: 15px; } #site_title a { font-size: 24px; color: #FFFFFF; font-weight: bold; text-decoration: none; } #site_title a:hover { font-weight: bold; text-decoration: none; } #site_title a span { display: block; margin-top: 5px; font-size: 14px; color: #fff; font-weight: bold; letter-spacing: 2px; } /* end of header */ /* menu */ #templatemo_menu { clear: both; width: 970px; height: 80px; background: url(images/templatemo_menubar.png) no-repeat; } #search_box { width: 990px; height: 35px; text-align: right; } #search_box form { margin: 0; padding: 5px 40px; } #search_box #input_field { height: 20px; width: 300px; color: #000000; font-size: 12px; font-variant: normal; line-height: normal; border: 1px solid #CCCCCC; background: #FFFFFF; } #search_box #submit_btn { height: 24px; width: 100px; cursor: pointer; font-size: 12px; text-align: center; vertical-align: bottom; white-space: pre; outline: none; color:#666666; border: 1px solid #CCCCCC; background: #FFFFFF; } #templatemo_menu ul { width: 890px; height: 35px; margin: 0; padding: 7px 40px; list-style: none; } #templatemo_menu ul li { padding: 0px; margin: 0px; display: inline; } #templatemo_menu ul li a { float: left; display: block; margin-right: 40px; font-size: 13px; text-decoration: none; color: #fff; font-weight: normal; outline: none; } #templatemo_menu ul li a:hover, #templatemo_menu ul .current { color: #162127; } /* end of menu */ /* contetnt */ #templatemo_content_wrapper { clear: both; padding: 0px 0; } #templatemo_content { float: left; margin-left: 10px; width: 550px; } #banner { margin: 0 0 10px 0; } #templatemo_content #content_top { width: 550px; 
height: 20px; background: url(images/templatemo_content_top.png) no-repeat; } #templatemo_content #content_bottom { width: 550px; height: 20px; background: url(images/templatemo_content_bottom.png) no-repeat; } #templatemo_content #content_middle { width: 510px; padding: 5px 20px 0px 20px; background: url(images/templatemo_content_middle.png) repeat-y; } #content_middle p { text-align: justify; } .templatemo_sidebar_wrapper { width: 200px; } .templatemo_sidebar { width: 197px; padding-right: 3px; background: url(images/templatemo_sidebar_middle.png) repeat-y; } .templatemo_sidebar_top { width: 200px; height: 20px; background: url(images/templatemo_sidebar_top.png) no-repeat; } .templatemo_sidebar_bottom { width: 200px; height: 20px; background: url(images/templatemo_sidebar_bottom.png) no-repeat; } .templatemo_sidebar .sidebar_box { clear: both; padding-bottom: 20px; } .sidebar_box1 { padding: 15px; } .sidebar_box h2 { color: #2d84ad; font-size: 16px; padding-left: 25px; font-weight: bold; margin: 0 0 10px 10px; background: url(images/templatemo_sidebar_h1.jpg) left center no-repeat; } .sidebar_box .sidebar_box_content { padding: 15px; background: url(images/templatemo_sidebar_box_top.png) top repeat-x; } .sidebar_box img { border: 1px solid #999; margin-bottom: 5px; } .sidebar_box .discount { margin: 5px 0 0 0; font-weight: bold; } .sidebar_box .discount span { color: #C00; } .left_sidebar_box .discount a { font-weight: bold; color: #000; } .sidebar_box .categories_list { margin: 0; padding: 0; list-style: none; } .categories_list li { padding: 0; margin: 0; } .categories_list li a { display: block; color: #201f1c; padding: 5px 0 5px 20px; background: url(images/list.png) center left no-repeat; } .categories_list li a:hover { color: #439ac3; text-decoration: none; } .news_box { clear: both; margin-bottom: 5px; padding-bottom: 5px; border-bottom: 1px solid #999; } .news_box h4 { padding: 2px 0; margin: 0; } .news_box h4 a { font-size: 12px; font-weight: normal; color: #1893f2; } #newsletter_box label { display: block; margin-bottom: 10px; } #newsletter_box .input_field { height: 20px; width: 155px; padding: 0 5px; margin-bottom: 10px; color: #000000; font-size: 12px; font-variant: normal; line-height: normal; } #newsletter_box .submit_btn { float: right; height: 30px; width: 80px; margin: 0px; padding: 3px 0 15px 0; cursor: pointer; font-size: 12px; text-align: center; vertical-align: bottom; white-space: pre; outline: none; } .product_box { float: left; width: 223px; padding: 10px; margin-bottom: 20px; border: 1px solid #CCC; text-align: center; } .product_box img { margin-bottom: 10px; } .product_box h3 { color: #2a2522; font-size: 12px; margin: 0 0 10px; } .product_box p { margin-bottom: 10px; } .product_box p span { color: #cf5902; font-size: 14px; font-weight: bold; } .product_box .detail { float: right; } .product_box .addtocard { float: left; font-weight: bold; padding-right: 20px; background: url(images/templatemo_shopping_cart.png) bottom right no-repeat; } /* end of content */ /* footer */ #templatemo_footer_wrapper { background: url(images/templatemo_footer.png) repeat-x; } #templatemo_footer { width: 910px; height: 85px; padding: 50px 40px 30px 40px; margin: 0 auto; text-align: center; color: #a9a098; } #templatemo_footer a { color: #d7d1cc; font-weight: normal; } #templatemo_footer a:hover { text-decoration: none; color: #FFFF33; } #templatemo_footer .footer_menu { margin: 0 0 30px 0; padding: 0px; list-style: none; } .footer_menu li { margin: 0px; padding: 0 20px; display: 
inline; border-right: 1px solid #d7d1cc; } .footer_menu li a { color: #d7d1cc; } .footer_menu .last_menu { border: none; } /* end of footer */ /*twitter*/ #twitter_div {border-top: 0px;} #twitter_div a {color: #0000ff !important;} #twitter_update_list {margin-left: -1em !important; margin-bottom: 0px !important;} #twitter_update_list li {list-style-type: none; padding-right: 5px; } #twitter_update_list li a {color: #0000ff; padding-right: 5px;} #twitter_div {border-bottom: 0px; padding-bottom: 10px; padding-top:6px; padding-right: 5px;} #twitter_div a, #twitter_update_list li a {text-decoration: none !important;} #twitter_div a:hover, #twitter_update_list li a:hover {text-decoration:underline !important;}

    Read the article

  • Passing array values using Ajax & JSP

    - by Maya
    This is my chart application... <script type="text/javascript" > function listbox_moveacross(sourceID, destID) { var src = document.getElementById(sourceID); var dest = document.getElementById(destID); for(var count=0; count < src.options.length; count++) { if(src.options[count].selected == true) { option = src.options[count]; newOption = document.createElement("option"); newOption.value = option.value; newOption.text = option.text; newOption.selected = true; try { dest.add(newOption,null); //Standard src.remove(count,null); alert("New Option Value: " + newOption.value); } catch(error) { dest.add(newOption); // IE only src.remove(count); alert("success IE User"); } count--; } } } function printValues(oSel) { len=oSel.options.length; for(var i=0;i<len;i++) { if(oSel.options[i].selected) { data+="\n"+ oSel.options[i].text + "["+ "\t" + oSel.options[i].value + "]"; } } type=document.getElementById("typeId"); type_text=type.options[type.selectedIndex].text; type_value=document.getElementById("typeId").value; } function GetSelectedItem() { len = document.chart.d.length; i = 0; chosen = ""; for (i = 0; i < len; i++) { if (document.chart.d[i].selected) { chosen = chosen + document.chart.d[i].value + "\n" } } return chosen } $(document).ready(function() { var d; var current_month; var month; var str; var w; var sel; var sel_data; var sel_data_value; $('.submit').click(function(){ // to get current month d=new Date(); month=new Array(12); month[0]="January"; month[1]="February"; month[2]="March"; month[3]="April"; month[4]="May"; month[5]="June"; month[6]="July"; month[7]="August"; month[8]="September"; month[9]="October"; month[10]="November"; month[11]="December"; current_month=d.getMonth(); str=month[d.getMonth()]; w=document.chart.periodId.selectedIndex; // to get selected index value.... sel=document.chart.periodId.options[w].text; // to get selected index value text... for(i=sel;i>=1;i--) { alert(month[i]); } sel_data=document.chart.d.selectedIndex; sel_data_value=document.chart.d.options[sel_data].text; var data_len=document.chart.d.length; var j=0; var chosen=""; for(j=0;j<data_len;j++) { if(document.chart.d.options[i].selected) { chosen=chosen+document.chart.d.options[i].value; } } chart = new Highcharts.Chart({ chart: { renderTo: 'container', defaultSeriesType: 'column' }, title: { text: document.chart.chartTitle.value }, subtitle: { text: 'Source: WorldClimate.com' }, xAxis: { categories: month }, yAxis: { min: 0, title: {text: 'Count' } }, legend: { layout: 'vertical', backgroundColor: '#FFFFFF', align: 'left', verticalAlign: 'top', x: 100, y: 70, floating: true, shadow: true }, tooltip: { formatter: function() { return ''+ this.x +': '+ this.y +' mm'; } }, plotOptions: { column: { pointPadding: 0.2, borderWidth: 0 } }, series: [{ name: sel_data_value, data: [50, 71.5, 106.4, 129.2, 144.0, 176.0, 135.6, 148.5, 216.4, 194.1, 95.6, 54.4] }, { name: 'New York', data: [83.6, 78.8, 98.5, 93.4, 106.0, 84.5, 105.0, 104.3, 91.2, 83.5, 106.6, 92.3] }, { name: 'London', data: [48.9, 38.8, 39.3, 41.4, 47.0, 48.3, 59.0, 59.6, 52.4, 65.2, 59.3, 51.2] }, { name: 'Berlin', data: [42.4, 33.2, 34.5, 39.7, 52.6, 75.5, 57.4, 60.4, 47.6, 39.1, 46.8, 51.1] }] }); }); }); </script> <%! 
Connection con = null; Statement stmt = null; ResultSet rs = null; String url = "jdbc:postgresql://192.168.1.196:5432/autocube3"; String user = "autocube"; String pass = "autocube"; String query = ""; int mid; %> <% ChartCategory chartCategory = new ChartCategory(); chartCategory.setBar_name("vehicle reporting"); chartCategory.setMonth("3"); chartCategory.setValue("1000"); if (request.getParameter("mid") != null) { mid = Integer.parseInt(request.getParameter("mid")); } else { mid = 0; } Class.forName("org.postgresql.Driver"); con = DriverManager.getConnection(url, user, pass); System.out.println("Connected to Database"); stmt = con.createStatement(); rs = stmt.executeQuery("select code,description from plant"); %> </head> <body> <form method="post" name="chart"> <fieldset> <legend>Chart Options</legend> <br /> <!-- Plant Select box --> <label for="hstate">Plant:</label> <select name="plantId" size="1" id="plantId" > <!--onchange="selectPlant(this)" --> <% while (rs.next()) { %> <option value="<%=rs.getString("code")%>"><%=rs.getString("description")%></option> <% } String plant = request.getParameter("hstate"); System.out.println("Selected Plant" + request.getParameterValues("plantId")); %> </select> <br /> <label for="hcountry">Period</label> <select name="periodId" id="periodId"> <option value="0">1</option> <option value="1">2</option> <option value="2">3</option> <option value="3">4</option> <option value="4">5</option> <option value="5">6</option> <option value="6">7</option> <option value="7">8</option> <option value="8">9</option> <option value="9">10</option> <option value="10">11</option> <option value="11">12</option> </select> <br/> <!--Interval --> <label for="hstate" >Interval</label> <select name="intervalId" id="intervalId"> <option value="day">Day</option> <option value="month" selected>Month</option> </select> </fieldset> <fieldset> <legend>Chart Data</legend> <br/> <br/> <table > <tbody> <tr> <td> &emsp;<select multiple name="data" size="5" id="s" style="width: 230px; height: 130px;" > <% String[] list = ReportField.getList(); for (int i = 0; i < list.length; i++) { String field = ReportField.getFieldName(list[i]); %> <option value="<%=field%>"><%=list[i]%></option> <% //System.out.println("Names :" + list[i]); //System.out.println("Field Names :" + field); } %> </select> </td> <td> <input type="button" value=">>" onclick="listbox_moveacross('s', 'd')" /><br/> <input type="button" value="<<" onclick="listbox_moveacross('d', 's')" /> &emsp; </td> <td> &emsp; <select name="selectedData" size="5" id="d" style="width: 230px; height: 130px;"> </select></td> <% for (int i = 0; i <= 4; i++) { String arr = request.getParameter("selectedData"); System.out.println("Arrya" + arr); } %> </tr> </tbody> </table> <br/> </fieldset> <fieldset> <legend>Chart Info</legend> <br/> <label for="hstate" >Type</label> <select name="typeId" id="typeId"> <option value="" selected>select...</option> <option value="bar">Bar</option> <option value="pie" >Pie</option> <option value="line" >Line</option> </select> <br/> <label for="uname" id="titleId">Title </label> <input class="text" type="text" name="chartTitle"/> <br /> <label for="uemail2">Pin to Dash board:</label> <input class="text" type="checkbox" id="pinId" name="pinId"/> </fieldset> <input class="submit" type="button" value="Submit" /> <!--onclick="printValues(s)"--> </form> <div id="container" style="width: 800px; height: 400px; margin: 0 auto"> </div> </body> </html> using javascript function, am storing the selected listbox values in 
'sel_data_value'. I need to pass these selected array values to the database to retrieve the values matching the selection. How can I do this using Ajax? I don't know how to pass array values with Ajax and retrieve the corresponding data from the database. Thanks.

    Read the article

  • data validation on wpf passwordbox:type password, re-type password

    - by black sensei
    Hello Experts !! I've built a wpf windows application in with there is a registration form. Using IDataErrorInfo i could successfully bind the field to a class property and with the help of styles display meaningful information about the error to users.About the submit button i use the MultiDataTrigger with conditions (Based on a post here on stackoverflow).All went well. Now i need to do the same for the passwordbox and apparently it's not as straight forward.I found on wpftutorial an article and gave it a try but for some reason it wasn't working. i've tried another one from functionalfun. And in this Functionalfun case the properties(databind,databound) are not recognized as dependencyproperty even after i've changed their name as suggested somewhere in the comments plus i don't have an idea whether it will work for a windows application, since it's designed for web. to give you an idea here is some code on textboxes <Window.Resources> <data:General x:Key="recharge" /> <Style x:Key="validButton" TargetType="{x:Type Button}" BasedOn="{StaticResource {x:Type Button}}" > <Setter Property="IsEnabled" Value="False"/> <Style.Triggers> <MultiDataTrigger> <MultiDataTrigger.Conditions> <Condition Binding="{Binding ElementName=txtRecharge, Path=(Validation.HasError)}" Value="false" /> </MultiDataTrigger.Conditions> <Setter Property="IsEnabled" Value="True" /> </MultiDataTrigger> </Style.Triggers> </Style> <Style x:Key="txtboxerrors" TargetType="{x:Type TextBox}" BasedOn="{StaticResource {x:Type TextBox}}"> <Style.Triggers> <Trigger Property="Validation.HasError" Value="true"> <Setter Property="ToolTip" Value="{Binding RelativeSource={RelativeSource Self}, Path=(Validation.Errors)[0].ErrorContent}"/> <Setter Property="Validation.ErrorTemplate"> <Setter.Value> <ControlTemplate> <DockPanel LastChildFill="True"> <TextBlock DockPanel.Dock="Bottom" FontSize="8" FontWeight="ExtraBold" Foreground="red" Padding="5 0 0 0" Text="{Binding ElementName=showerror, Path=AdornedElement.(Validation.Errors)[0].ErrorContent}"></TextBlock> <Border BorderBrush="Red" BorderThickness="2"> <AdornedElementPlaceholder Name="showerror" /> </Border> </DockPanel> </ControlTemplate> </Setter.Value> </Setter> </Trigger> </Style.Triggers> </Style> </Window.Resources> <TextBox Margin="12,69,12,70" Name="txtRecharge" Style="{StaticResource txtboxerrors}"> <TextBox.Text> <Binding Path="Field" Source="{StaticResource recharge}" ValidatesOnDataErrors="True" UpdateSourceTrigger="PropertyChanged"> <Binding.ValidationRules> <ExceptionValidationRule /> </Binding.ValidationRules> </Binding> </TextBox.Text> </TextBox> <Button Height="23" Margin="98,0,0,12" Name="btnRecharge" VerticalAlignment="Bottom" Click="btnRecharge_Click" HorizontalAlignment="Left" Width="75" Style="{StaticResource validButton}">Recharge</Button> some C# : class General : IDataErrorInfo { private string _field; public string this[string columnName] { get { string result = null; if(columnName == "Field") { if(Util.NullOrEmtpyCheck(this._field)) { result = "Field cannot be Empty"; } } return result; } } public string Error { get { return null; } } public string Field { get { return _field; } set { _field = value; } } } So what are suggestion you guys have for me? I mean how would you go about this? how do you do this since the databinding first purpose here is not to load data onto the fields they are just (for now) for data validation. thanks for reading this.
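    For what it is worth, the wpftutorial/functionalfun technique the question refers to boils down to an attached dependency property that shuttles the value in and out of PasswordBox.Password, because Password itself is not a dependency property and cannot be bound directly. Below is a stripped-down sketch of that idea; the class and property names are mine, and a production version also needs a guard against re-entrant updates.

    using System.Windows;
    using System.Windows.Controls;

    public static class PasswordBoxHelper
    {
        public static readonly DependencyProperty BoundPasswordProperty =
            DependencyProperty.RegisterAttached("BoundPassword", typeof(string), typeof(PasswordBoxHelper),
                new PropertyMetadata(string.Empty, OnBoundPasswordChanged));

        public static string GetBoundPassword(DependencyObject d)
        {
            return (string)d.GetValue(BoundPasswordProperty);
        }

        public static void SetBoundPassword(DependencyObject d, string value)
        {
            d.SetValue(BoundPasswordProperty, value);
        }

        private static void OnBoundPasswordChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
        {
            PasswordBox box = d as PasswordBox;
            if (box == null) return;

            // Push the bound value into the control, then re-hook the event so
            // user edits are echoed back into the binding source.
            box.PasswordChanged -= HandlePasswordChanged;
            string newValue = (string)e.NewValue ?? string.Empty;
            if (box.Password != newValue) box.Password = newValue;
            box.PasswordChanged += HandlePasswordChanged;
        }

        private static void HandlePasswordChanged(object sender, RoutedEventArgs e)
        {
            PasswordBox box = (PasswordBox)sender;
            SetBoundPassword(box, box.Password);
        }
    }

    In XAML the box would then be bound roughly as local:PasswordBoxHelper.BoundPassword="{Binding Password, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged, ValidatesOnDataErrors=True}", which lets the same IDataErrorInfo and MultiDataTrigger machinery already used for the TextBox drive the submit button.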

    Read the article

  • Entity Framework and multi-tenancy database design

    - by Junto
    I am looking at multi-tenancy database schema design for an SaaS concept. It will be ASP.NET MVC - EF, but that isn't so important. Below you can see an example database schema (the Tenant being the Company). The CompanyId is replicated throughout the schema and the primary key has been placed on both the natural key, plus the tenant Id. Plugging this schema into the Entity Framework gives the following errors when I add the tables into the Entity Model file (Model1.edmx): The relationship 'FK_Order_Customer' uses the set of foreign keys '{CustomerId, CompanyId}' that are partially contained in the set of primary keys '{OrderId, CompanyId}' of the table 'Order'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model. The relationship 'FK_OrderLine_Customer' uses the set of foreign keys '{CustomerId, CompanyId}' that are partially contained in the set of primary keys '{OrderLineId, CompanyId}' of the table 'OrderLine'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model. The relationship 'FK_OrderLine_Order' uses the set of foreign keys '{OrderId, CompanyId}' that are partially contained in the set of primary keys '{OrderLineId, CompanyId}' of the table 'OrderLine'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model. The relationship 'FK_Order_Customer' uses the set of foreign keys '{CustomerId, CompanyId}' that are partially contained in the set of primary keys '{OrderId, CompanyId}' of the table 'Order'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model. The relationship 'FK_OrderLine_Customer' uses the set of foreign keys '{CustomerId, CompanyId}' that are partially contained in the set of primary keys '{OrderLineId, CompanyId}' of the table 'OrderLine'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model. The relationship 'FK_OrderLine_Order' uses the set of foreign keys '{OrderId, CompanyId}' that are partially contained in the set of primary keys '{OrderLineId, CompanyId}' of the table 'OrderLine'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model. The relationship 'FK_OrderLine_Product' uses the set of foreign keys '{ProductId, CompanyId}' that are partially contained in the set of primary keys '{OrderLineId, CompanyId}' of the table 'OrderLine'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model. The question is in two parts: Is my database design incorrect? Should I refrain from these compound primary keys? I'm questioning my sanity regarding the fundamental schema design (frazzled brain syndrome). Please feel free to suggest the 'idealized' schema. Alternatively, if the database design is correct, then is EF unable to match the keys because it perceives these foreign keys as a potential mis-configured 1:1 relationships (incorrectly)? In which case, is this an EF bug and how can I work around it?
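    For readers trying to follow the error text, this is roughly the key structure the messages describe, written out as plain illustrative classes (names inferred from the errors, not actual mapping code): the tenant column sits in every primary key, while each foreign key reuses the parent's natural key plus that same tenant column, so the FK overlaps the child's PK on CompanyId alone.

    // Illustration only: the key shapes implied by the schema and the EF errors.
    public class Customer
    {
        public int CompanyId  { get; set; }  // tenant id, part of every primary key
        public int CustomerId { get; set; }  // PK: {CustomerId, CompanyId}
    }

    public class Order
    {
        public int CompanyId  { get; set; }
        public int OrderId    { get; set; }  // PK: {OrderId, CompanyId}
        public int CustomerId { get; set; }  // FK to Customer: {CustomerId, CompanyId};
                                             // CompanyId is in both the PK and the FK, CustomerId only
                                             // in the FK - the "partially contained" overlap EF objects to
    }

    public class OrderLine
    {
        public int CompanyId   { get; set; }
        public int OrderLineId { get; set; }  // PK: {OrderLineId, CompanyId}
        public int OrderId     { get; set; }  // FK to Order: {OrderId, CompanyId}
        public int CustomerId  { get; set; }  // FK to Customer: {CustomerId, CompanyId}
        public int ProductId   { get; set; }  // FK to Product: {ProductId, CompanyId}
    }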

    Read the article

  • MVC2 and MVC Futures causing RedirectToAction issues

    - by Darragh
    I've been trying to get the strongly typed version of RedirectToAction from the MVC Futures project to work, but I've been getting nowhere. Below are the steps I've followed, and the errors I've encountered. Any help is much appreciated. I created a new MVC2 app and changed the About action on the HomeController to redirect to the Index page. Return RedirectToAction("Index") However, I wanted to use the strongly typed extensions, so I downloaded the MVC Futures from CodePlex and added a reference to Microsoft.Web.Mvc to my project. I added the following "Imports" statement to the top of HomeController.vb Imports Microsoft.Web.Mvc I commented out the above RedirectToAction and added the following line: Return RedirectToAction(Of HomeController)(Function(c) c.Index()) So far, so good. However, I noticed that if I uncomment the first (non-generic) RedirectToAction, it now causes the following compile error: Error 1 Overload resolution failed because no accessible 'RedirectToAction' can be called with these arguments: Extension method 'Public Function RedirectToAction(Of TController)(action As System.Linq.Expressions.Expression(Of System.Action(Of TController))) As System.Web.Mvc.RedirectToRouteResult' defined in 'Microsoft.Web.Mvc.ControllerExtensions': Data type(s) of the type parameter(s) cannot be inferred from these arguments. Specifying the data type(s) explicitly might correct this error. Extension method 'Public Function RedirectToAction(action As System.Linq.Expressions.Expression(Of System.Action(Of HomeController))) As System.Web.Mvc.RedirectToRouteResult' defined in 'Microsoft.Web.Mvc.ControllerExtensions': Value of type 'String' cannot be converted to 'System.Linq.Expressions.Expression(Of System.Action(Of mvc2test1.HomeController))'. Even though IntelliSense was showing 8 overloads (the original 6 non-generic overloads, plus the 2 new overloads from the Futures assembly), it seems that when compiling the code, the compiler would only 'find' the 2 new extension methods from the Futures assembly. I thought this might be an issue with conflicting versions of the MVC2 assembly and the Futures assembly, so I added MvcDiagnostics.aspx from the Futures download to my project and everything looked correct: ASP.NET MVC Assembly Information (System.Web.Mvc.dll) Assembly version: ASP.NET MVC 2 RTM (2.0.50217.0) Full name: System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35 Code base: file:///C:/WINDOWS/assembly/GAC_MSIL/System.Web.Mvc/2.0.0.0__31bf3856ad364e35/System.Web.Mvc.dll Deployment: GAC-deployed ASP.NET MVC Futures Assembly Information (Microsoft.Web.Mvc.dll) Assembly version: ASP.NET MVC 2 RTM Futures (2.0.50217.0) Full name: Microsoft.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=null Code base: file:///xxxx/bin/Microsoft.Web.Mvc.DLL Deployment: bin-deployed This is driving me crazy! Because I thought this might be some VB issue, I created a new MVC2 project using C# and tried the same as above. 
I added the following "using" statement to the top of HomeController.cs using Microsoft.Web.Mvc; This time, in the About action method, I could only manage to call the generic RedirectToAction by typing the full command as follows: return Microsoft.Web.Mvc.ControllerExtensions.RedirectToAction<HomeController>(this, c => c.Index()); Even though I had a "using" statement at the top of the class, if I tried to call the generic RedirectToAction as follows: return RedirectToAction<HomeController>(c => c.Index()); I would get the following compile error: Error 1 The non-generic method 'System.Web.Mvc.Controller.RedirectToAction(string)' cannot be used with type arguments. What gives? It's not like I'm trying to do anything out of the ordinary. It's a simple vanilla MVC2 project with only a reference to the Futures assembly. I'm hoping that I've missed something obvious, but I've been scratching my head for too long, so I figured I'd seek some assistance. If anyone's managed to get this simple scenario working (in VB and/or C#), could they please let me know what, if anything, they did differently? Thanks!

    Read the article

  • Silverlight Confirm Dialog to Pause Thread

    - by AlishahNovin
    I'm trying to do a confirmation dialog using Silverlight's ChildWindow object. Ideally, I'd like it to work like MessageBox.Show(), where the entire application halts until an input is received from the user. For example: for(int i=0;i<5;i++) { if (i==3 && MessageBox.Show("Exit early?", "Iterator", MessageBoxButton.OKCancel) == MessageBoxResult.OK) { break; } } Would stop the iteration at 3 if the user hits OK... However, if I were to do something along the lines: ChildWindow confirm = new ChildWindow(); confirm.Title = "Iterator"; confirm.HasCloseButton = false; Grid container = new Grid(); Button closeBtn = new Button(); closeBtn.Content = "Exit early"; closeBtn.Click += delegate { confirm.DialogResult = true; confirm.Close(); }; container.Children.Add(closeBtn); Button continueBtn = new Button(); continueBtn.Content = "Continue!"; continueBtn.Click += delegate { confirm.DialogResult = false; confirm.Close(); }; container.Children.Add(continueBtn); confirm.Content = container; for(int i=0;i<5;i++) { if (i==3) { confirm.Show(); if (confirm.DialogResult.HasResult && (bool)confirm.DialogResult) { break; } } } This clearly would not work, as the thread isn't halted... confirm.DialogResult.HasResult would be false, and the loop would continue past 3. I'm just wondering, how I could go about this properly. Silverlight is single-threaded, so I can't just put the thread to sleep and then wake it up when I'm ready, so I'm just wondering if there's anything else that people could recommend? I've considered reversing the logic - ie, passing the actions I want to occur to the Yes/No events, but in my specific case this wouldn't quite work. Thanks in advance!
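    Since Show() cannot block the single UI thread, the usual shape of the "reversed logic" idea mentioned above is to hand the rest of the work to the ChildWindow's Closed event as a continuation. The sketch below is only illustrative and is built from the same members already used in the question (Title, HasCloseButton, Content, DialogResult, Close(), Show()); the Confirm helper name and the two Action callbacks are mine.

    // Lives in the page's code-behind; needs System and System.Windows.Controls.
    private void Confirm(string title, string message, Action onOk, Action onCancel)
    {
        ChildWindow dialog = new ChildWindow { Title = title, HasCloseButton = false };

        StackPanel panel = new StackPanel();
        panel.Children.Add(new TextBlock { Text = message });

        Button okBtn = new Button { Content = "Exit early" };
        okBtn.Click += delegate { dialog.DialogResult = true; dialog.Close(); };
        panel.Children.Add(okBtn);

        Button cancelBtn = new Button { Content = "Continue!" };
        cancelBtn.Click += delegate { dialog.DialogResult = false; dialog.Close(); };
        panel.Children.Add(cancelBtn);

        dialog.Content = panel;

        // The code that used to follow the blocking MessageBox call moves in here,
        // so nothing ever has to sleep on the UI thread.
        dialog.Closed += delegate
        {
            if (dialog.DialogResult == true) onOk(); else onCancel();
        };

        dialog.Show();
    }

    The loop is then split at the decision point: iterations up to 3 run before Confirm() is called, onOk simply stops, and the remaining iterations run inside the onCancel continuation rather than after Show() returns.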

    Read the article

  • Android custom ListView unable to click on items

    - by MattC
    So I have a custom ListView object. The list items have two textviews stacked on top of each other, plus a horizontal progress bar that I want to remain hidden until I actually do something. To the far right is a checkbox that I only want to display when the user needs to download updates to their database(s). When I disable the checkbox by setting the visibility to Visibility.GONE, I am able to click on the list items. When the checkbox is visible, I am unable to click on anything in the list except the checkboxes. I've done some searching but haven't found anything relevant to my current situation. I found this question but I'm using an overridden ArrayAdapter since I'm using ArrayLists to contain the list of databases internally. Do I just need to get the LinearLayout view and add an onClickListener like Tom did? I'm not sure. Here's the listview row layout XML: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="?android:attr/listPreferredItemHeight" android:padding="6dip"> <LinearLayout android:orientation="vertical" android:layout_width="0dip" android:layout_weight="1" android:layout_height="fill_parent"> <TextView android:id="@+id/UpdateNameText" android:layout_width="wrap_content" android:layout_height="0dip" android:layout_weight="1" android:textSize="18sp" android:gravity="center_vertical" /> <TextView android:layout_width="fill_parent" android:layout_height="0dip" android:layout_weight="1" android:id="@+id/UpdateStatusText" android:singleLine="true" android:ellipsize="marquee" /> <ProgressBar android:id="@+id/UpdateProgress" android:layout_width="fill_parent" android:layout_height="wrap_content" android:indeterminateOnly="false" android:progressDrawable="@android:drawable/progress_horizontal" android:indeterminateDrawable="@android:drawable/progress_indeterminate_horizontal" android:minHeight="10dip" android:maxHeight="10dip" /> </LinearLayout> <CheckBox android:text="" android:id="@+id/UpdateCheckBox" android:layout_width="wrap_content" android:layout_height="wrap_content" /> </LinearLayout> And here's the class that extends the ListActivity. Obviously it's still in development so forgive the things that are missing or might be left laying around: import java.util.List; import android.app.ListActivity; import android.content.Context; import android.os.Bundle; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; import android.widget.ArrayAdapter; import android.widget.Button; import android.widget.CheckBox; import android.widget.ListView; import android.widget.ProgressBar; import android.widget.TextView; import com.xxxx.android.R; import com.xxxx.android.DAO.AccountManager; import com.xxxx.android.model.UpdateItem; public class UpdateActivity extends ListActivity { AccountManager lookupDb; boolean allSelected; UpdateListAdapter list; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); lookupDb = new AccountManager(this); lookupDb.loadUpdates(); setContentView(R.layout.update); allSelected = false; list = new UpdateListAdapter(this, R.layout.update_row, lookupDb.getUpdateItems()); setListAdapter(list);

    Read the article

  • passing parameters to .aspx page using renderpartial

    - by dexter
    in my index.aspx page i want to render another module.aspx page using renderpartial which then render a .htm file depanding on which parameter is passed from index.aspx (it would be number ie 1,2 etc ,so as to call different different .htm file everytime depending on the parameter) 1). now i want Index.aspx page to render module.aspx and pass it a parameter(1,2,3,etc) [the parameters would be passed programatically (hardcoded)] and 2). mudule.aspx should catch the parameter and depending on it will call .htm file my index.aspx has <% ViewData["TemplateId"] = 1; %> <% Html.RenderPartial("/Views/Templates/MyModule.aspx", ViewData["TemplateId"]); %> and module.aspx contains <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage" %> <script type="text/javascript" src="/Scripts/jquery-1.3.2.js"></script> <script type="text/javascript" src="/Scripts/Service.js"></script> <script type="text/javascript"> debugger; var tid = '<%=ViewData["TemplateId"] %>'; $.get("/Templates/Select/" + tid, function(result) { $("#datashow").html(result); }); </script> <div id="datashow"></div> this is my controller which is called by $.get(....) (see code) public ActionResult Select(int id) { return File("/Views/Templates/HTML_Temp" +id.ToString()+".htm" , "text/html"); } and finally my .htm file <div id="divdata" class="sys-template"> <p>Event Title:<input id="title" size="150" type="text" style="background-color:yellow;font-size:25px;width: 637px;" readonly="readonly" value="{{title}}" /> </p> <p>Event Description:<input type="text" id="description" value="{{ description }}" readonly="readonly" style="width: 312px" /></p> <p>Event Date: <input type="text" id="date" value="{{ date }}" readonly="readonly" style="width: 251px"/></p> <p>Keywords : <input type="text" id="keywords" value="{{keywords}}" readonly="readonly" /></p> </div> <script type="text/javascript"> Sys.Application.add_init(appInit); function appInit() { start(); } </script> start() is javascript method which is in file Service.js when i run this programm it gives me error js runtime error: 'object expected' and debugger highlighted on <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/**xhtml**1-strict.dtd"> pls help me solve the problem
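    One small note on the hand-off itself: as far as I can tell, RenderPartial also has an overload that accepts its own ViewDataDictionary, so the id can be passed to the partial explicitly rather than by setting the parent page's ViewData first. A sketch using the same path and key as above (the rest of both pages stays as it is):

    In index.aspx:

    <% Html.RenderPartial("/Views/Templates/MyModule.aspx",
           new ViewDataDictionary { { "TemplateId", 1 } }); %>

    In MyModule.aspx:

    <% int templateId = Convert.ToInt32(ViewData["TemplateId"]); %>
    <script type="text/javascript">
        var tid = '<%= templateId %>';
        // the $.get("/Templates/Select/" + tid, ...) call stays unchanged
    </script>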

    Read the article

  • jQuery doesn't work in IE8?

    - by Wade D Ouellet
    Hi, I am working on a site here: mfm.treethink.net All the jquery works fine in Firefox, Chrome and Safari but on IE8 it gives me errors and the banner at the top doesn't work (which uses the crossSlide jQuery plugin) and as well the image rollovers don't work with the colour change. IE8 is telling me that the errors are on lines 53, 134 and 149 in the source, all of those lines are where the jquery function is declared. $(document).ready(function(){ I am running jquery 1.4. Oddly enough, the other piece of jQuery I have on that page works, the artist browse/select menu on the right. But the banner and image rollovers don't. Here are all the scripts I'm running: 1: the banner - doesn't work in IE8 <script type="text/javascript"> $(function() { $('#banner').crossSlide({ sleep: 5, fade: 1 }, [ <?php $pages = get_posts('numberposts=2000&post_type=artist&post_status=publish'); $i = 1; foreach( $pages as $page ) { $content = $page->post_title; if( empty($content) ) continue; $content = apply_filters('the_content', $content); ?> { src: '/wp-content/uploads/<?php echo $page->post_name ?>.jpg' }, <?php $i++; } ?> ]); }); </script> 2 - image rollovers - doesn't work in IE8 <script type="text/javascript"> $(function(){ $("ul#artists li").hover(function() { /* On hover */ var thumbOver = $(this).find("img").attr("src"); /* Find image source */ /* Swap background */ $(this).find("a.thumb").css({'background' : 'url(' + thumbOver + ') center bottom no-repeat'}); $(this).find("span").stop().fadeTo('fast', 0 , function() { $(this).hide() }); } , function() { $(this).find("span").stop().fadeTo('fast', 1).show(); }); }); </script> 3 - the artist select - works in IE 8 <script> $("#browse-select").change(function() { window.location.href = $(this).val(); }); </script> These scripts were done by referencing previously made scripts, like I said I'm still new to jQuery. The second works in IE8 and the first one is the one that doesn't. I noticed the third one, the only one working, is written differently than the first two non-working ones without a function declaration at the top. Could this have anything to do with it? Any help figuring out this problem would be so appreciated. Thanks a lot, Wade

    Read the article

  • How to support both HTTP and HTTPS channels in Flex/BlazeDS?

    - by digitalsanctum
    I've been trying to find the right configuration for supporting both http/s requests in a Flex app. I've read all the docs and they allude to doing something like the following: <default-channels> <channel ref="my-secure-amf"> <serialization> <log-property-errors>true</log-property-errors> </serialization> </channel> <channel ref="my-amf"> <serialization> <log-property-errors>true</log-property-errors> </serialization> </channel> This works great when hitting the app via https but get intermittent communication failures when hitting the same app via http. Here's an abbreviated services-config.xml: <channel-definition id="my-amf" class="mx.messaging.channels.AMFChannel"> <endpoint url="http://{server.name}:{server.port}/{context.root}/messagebroker/amf" class="flex.messaging.endpoints.AMFEndpoint"/> <properties> <!-- HTTPS requests don't work on IE when pragma "no-cache" headers are set so you need to set the add-no-cache-headers property to false --> <add-no-cache-headers>false</add-no-cache-headers> <!-- Use to limit the client channel's connect attempt to the specified time interval. --> <connect-timeout-seconds>10</connect-timeout-seconds> </properties> </channel-definition> <channel-definition id="my-secure-amf" class="mx.messaging.channels.SecureAMFChannel"> <!--<endpoint url="https://{server.name}:{server.port}/{context.root}/messagebroker/amfsecure" class="flex.messaging.endpoints.SecureAMFEndpoint"/>--> <endpoint url="https://{server.name}:{server.port}/{context.root}/messagebroker/amfsecure" class="flex.messaging.endpoints.AMFEndpoint"/> <properties> <add-no-cache-headers>false</add-no-cache-headers> <connect-timeout-seconds>10</connect-timeout-seconds> </properties> </channel-definition> I'm running with Tomcat 5.5.17 and Java 5. The BlazeDS docs say this is the best practice. Is there a better way? With this config, there seems to be 2-3 retries associated with each channel defined in the default-channels element so it always takes ~20s before the my-amf channel connects via a http request. Is there a way to override the 2-3 retries to say, 1 retry for each channel? Thanks in advance for answers.

    Read the article

  • Google slideshow shows a blank screen when calling from ajax

    - by ufk
    I'm having problems adding the Google slideshow (http://www.google.com/uds/solutions/slideshow/index.html) to my web application when the markup is pulled in with jQuery's load() function.

    index.html:

        <script type="text/javascript" src="jquery-1.3.2.js"></script>
        <div id="moshe"></div>
        <script type="text/javascript">
        $(document).ready(function(){
            $('#moshe').load('test.html');
        });
        </script>

    test.html:

        <script type="text/javascript">
        function load() {
            var samples = "http://dlc0421.googlepages.com/gfss.rss";
            var options = {
                displayTime: 2000,
                transistionTime: 600,
                linkTarget: google.feeds.LINK_TARGET_BLANK
            };
            new GFslideShow(samples, "slideshow", options);
        }
        google.load("feeds", "1");
        google.setOnLoadCallback(load);
        </script>
        <div id="slideshow" class="gslideshow" style="width:300px;height:300px;position:relative; border: 2px solid blue">Loading...</div>

    When I open test.html directly, the slideshow loads just fine. When I go through index.html, which uses jQuery's $.load() to pull the contents of test.html into the #moshe div, I can see the gallery start to load in that div, but as soon as it is about to show images the entire page clears and all I'm left with is a blank page. Any ideas?

    Here is a different version of index.html that doesn't use jQuery:

        <script type="text/javascript">
        function makeRequest(url) {
            var httpRequest;
            if (window.XMLHttpRequest) { // Mozilla, Safari, ...
                httpRequest = new XMLHttpRequest();
                if (httpRequest.overrideMimeType) {
                    httpRequest.overrideMimeType('text/xml'); // See note below about this line
                }
            } else if (window.ActiveXObject) { // IE
                try {
                    httpRequest = new ActiveXObject("Msxml2.XMLHTTP");
                } catch (e) {
                    try {
                        httpRequest = new ActiveXObject("Microsoft.XMLHTTP");
                    } catch (e) {}
                }
            }
            if (!httpRequest) {
                alert('Giving up :( Cannot create an XMLHTTP instance');
                return false;
            }
            httpRequest.onreadystatechange = function() {
                alertContents(httpRequest);
            };
            httpRequest.open('GET', url, true);
            httpRequest.send('');
        }

        function alertContents(httpRequest) {
            if (httpRequest.readyState == 4) {
                if (httpRequest.status == 200) {
                    document.getElementById('moshe').innerHTML = httpRequest.responseText;
                } else {
                    alert('There was a problem with the request.');
                }
            }
        }

        makeRequest('test.html');
        </script>
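    One guess I've been exploring (not verified): google.load() falls back to document.write() when it runs after the page has already finished loading, which is exactly what happens when the script arrives through $.load(), and document.write() on a finished page wipes it, which would explain the blank screen. The loader is supposed to use DOM script injection instead when you pass it a callback rather than relying on google.setOnLoadCallback(). A sketch of what the script in test.html could look like under that assumption (it still relies on the Google loader and slideshow scripts being available to the outer page):

        <script type="text/javascript">
        // Sketch: start the slideshow through the loader's callback option instead of
        // google.setOnLoadCallback(), so no document.write() happens after page load.
        // Assumes google.load and GFslideShow are already available on the outer page.
        function startSlideShow() {
            var samples = "http://dlc0421.googlepages.com/gfss.rss";
            var options = {
                displayTime: 2000,
                transistionTime: 600,
                linkTarget: google.feeds.LINK_TARGET_BLANK
            };
            new GFslideShow(samples, "slideshow", options);
        }
        google.load("feeds", "1", { callback: startSlideShow });
        </script>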

  • How can I best share Ant targets between projects?

    - by Rob Hruska
    Is there a well-established way to share Ant targets between projects? I have a solution at the moment, but it's a bit inelegant. Here's what I'm doing so far.

    I've got a file called ivy-tasks.xml hosted on a server on our network. Among other targets, this file contains boilerplate tasks for managing project dependencies with Ivy. For example:

        <project name="ant-ivy-tasks" default="init-ivy" xmlns:ivy="antlib:org.apache.ivy.ant">
            ...
            <target name="ivy-download" unless="skip.ivy.download">
                <mkdir dir="${ivy.jar.dir}"/>
                <echo message="Installing ivy..."/>
                <get src="http://repo1.maven.org/maven2/org/apache/ivy/ivy/${ivy.install.version}/ivy-${ivy.install.version}.jar"
                     dest="${ivy.jar.file}" usetimestamp="true"/>
            </target>

            <target name="ivy-init" depends="ivy-download"
                    description="-> Defines ivy tasks and loads global settings">
                <path id="ivy.lib.path">
                    <fileset dir="${ivy.jar.dir}" includes="*.jar"/>
                </path>
                <taskdef resource="org/apache/ivy/ant/antlib.xml"
                         uri="antlib:org.apache.ivy.ant"
                         classpathref="ivy.lib.path"/>
                <ivy:settings url="http://myserver/ivy/settings/ivysettings-user.xml"/>
            </target>
            ...
        </project>

    The file is hosted because I don't want to:

    - check the file into every project that needs it, which duplicates it and makes the targets harder to maintain, or
    - have my build.xml depend on checking out a project from source control, which adds more XML at the top level just to get at the file.

    What I do with this file in my projects' build.xml files is along the lines of:

        <property name="download.dir" location="download"/>
        <mkdir dir="${download.dir}"/>
        <echo message="Downloading import files to ${download.dir}"/>
        <get src="http://myserver/ivy/ivy-tasks.xml" dest="${download.dir}/ivy-tasks.xml" usetimestamp="true"/>
        <import file="${download.dir}/ivy-tasks.xml"/>

    The "dirty" part is that I have to do these steps outside of any target, because the import task must be at the top level. I also still have to copy this XML into every build.xml that needs it, so there is still some duplication. On top of that, there may be other common (non-Ivy) tasks I'd like to import the same way. If I provided those tasks through Ivy's dependency management I'd still have a problem, because by the time the dependencies were resolved I would already be inside a target in my build.xml and unable to import (due to the constraint mentioned above). Is there a better solution for what I'm trying to accomplish?
