Search Results

Search found 7312 results on 293 pages for 'render quality'.


  • SQLAuthority News – TechEd India – April 12-14, 2010 Bangalore – An Unforgettable Experience – An Op

    - by pinaldave
    TechEd India was one of the largest technology events in India led by Microsoft. It was attended by more than 3,000 technology enthusiasts, making it one of the most well-organized events of the year. Though I have attempted to attend almost all the technology events here, I have not seen any bigger or better event in the Indian subcontinent than this one. There were 21 technical tracks at TechEd India 2010, spanning more than 745 learning opportunities. I was fortunate enough to be part of the whole event, both as a speaker and as a delegate. TechEd India Speaker Badge and A Token of Lifetime
    Hotel Selection
    I presented three different sessions at TechEd India and was also part of a panel discussion. (The details of the sessions are given at the end of this blog post.) Due to extensive traveling, I am occasionally away from my family. For this reason, I took my wife Nupur and my daughter Shaivi (8 months old) to the event with me. We stayed at the same hotel where the event was organized, so as to maximize my time bonding with my family while still having time for networking with the technology community. The hotel Lalit Ashok is the largest and most luxurious venue one can find in Bangalore, located in the middle of the city. The hotel was a bit pricey, but looking at all the advantages, I decided to book there. Hotel Lalit Ashok
    Nupur Dave and Shaivi Dave
    Arrival Day – DAY 0 – April 11, 2010
    I reached the event a day early, and that was a wise decision, for I was able to relax a bit and go over my presentation for the next day's course. I am the kind of person who likes to get everything ready ahead of time. I was also able to enjoy a pleasant evening with several Microsoft employees and family friends, and I even checked out the location where I would be presenting the next day. I was fortunate enough to meet Bijoy Singhal from Microsoft, who helped me out with a few logistics issues that occurred the day before. I was not aware that the very next day he was going to be "The Man" of the TechEd 2010 event. Vinod Kumar from Microsoft was very kind and talked to me about my upcoming session; he gave me suggestions so helpful that I was able to incorporate them into my presentation. Finally, I was able to meet Abhishek Kant from Microsoft; his valuable suggestions and unlimited passion have inspired many people like me to work with the community. Pradipta from Microsoft was also around, extremely busy with logistics; even so, he found some spare time to chat with me and the other community leaders. I also met Harish Ranganathan and Sachin Rathi, both from Microsoft, and it was fascinating to listen to them talk about SharePoint. I have no words to express how overwhelmed I was by all these passionate young guys – Pradipta, Vinod, Bijoy, Harish, Sachin and Abhishek (of course!). Map of TechEd India 2010 Event
    Day 1 – April 12, 2010
    From morning until night, this was truly a very busy day for me. I had two presentations and one panel discussion for the day and, needless to say, a few meetings to attend as well. The day started with a keynote from S. Somasegar, where he announced the launch of Visual Studio 2010. The keynote area was really eye-catching because of the very large, bigger-than-life uniform screen. It was truly a sight to see.
    The title music of the keynote was very interesting, and it featured Bijoy Singhal as the model. It was fun to talk to him afterwards, when we laughed together at jokes about his modeling assignment. TechEd India Keynote Opening Featuring Bijoy
    TechEd India 2010 Keynote – S. Somasegar
    Time: 11:15am – 11:45am
    Session 1: True Lies of SQL Server – SQL Myth Buster
    Following the excellent keynote, I had my very first session, on the subject of SQL Server myth busting. At first I was a bit nervous: this was my very first session, it came right after the keynote, and during my presentation I saw lots of Microsoft Product Team members in the audience. Well, it went really well, and I had a very good discussion with the attendees. I felt that well begun was half done, and I regained my confidence. Right after the session, I met a few of my community friends and had meaningful discussions with them on many subjects. The abstract of the session is as follows: In this 30-minute demo session, I am going to briefly demonstrate a few SQL Server myths and their resolutions, backing them up with demos. This presentation is a must-attend for all developers and administrators who come to the event. It is going to be a very quick yet fun session. Pinal Presenting session at TechEd India 2010
    Time: 1:00 PM – 2:00 PM
    Lunch with Somasegar
    After the session I went to see my daughter, and then headed right away to lunch with S. Somasegar – the keynote speaker and senior vice president of the Developer Division at Microsoft. I really thank Abhishek, who made this possible; because of his efforts, all the MVPs had the opportunity to meet such a legendary person and talk with him about Microsoft technology. Though Somasegar holds a very high position at Microsoft, he is polite and a real gentleman; how I wish everybody in the industry were like him. Believe me, if you spread love and kindness, that is what you will receive back. As soon as lunch was over, I ran to the session hall, as my second presentation was about to start.
    Time: 2:30pm – 3:30pm
    Session 2: Master Data Services in Microsoft SQL Server 2008 R2
    Business Intelligence was a widely discussed subject at TechEd. Everybody was interested in it, and I was no exception. I consider myself fortunate to have been presenting on the subject of Master Data Services at TechEd. When I initially learned the subject, I was a bit confused about the usage of the tool. Later on, I decided to tackle how we developers and DBAs fail to understand something as simple as this and, even worse, create confusion about the technology. During system design, it is very important to have reference material or master lookup tables; I talked about exactly that and built the session around it. The session went very well and I received lots of interesting questions. I got many compliments for presenting the subject through real-life scenarios. I really thank Rushabh Mehta (CEO, Solid Quality Mentors India) for his supportive suggestions that helped me prepare the slide deck as well as the subject. Pinal Presenting session at TechEd India 2010
    The abstract of the session is as follows: SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft's platform appeal.
    This session provides an in-depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables a consistent decision-making process by allowing you to create, manage and propagate changes from a single master view of your business entities. The Master Data hub, a vital component of MDS, helps ensure consistent reporting across systems and delivers faster, more accurate results across the enterprise. We talked about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise. Pinal Presenting session at TechEd India 2010
    The day was still not over for me. I ran into several friends, and we could not keep our enthusiasm under control amid all the rumors that SQL Server 2008 R2 was about to be launched in the next day's keynote. I then ran to my third and final technical event of the day – a panel discussion with some of the top technologists of India.
    Time: 5:00pm – 6:00pm
    Panel Discussion: Harness the power of Web – SEO and Technical Blogging
    Having delivered two technical sessions by this time, I was a bit tired, but no less enthusiastic about discussing blogs and technology. We covered many different topics. I told them that the most important aspect of any blog is its content. We discussed in depth the problem of plagiarism and how to avoid it. Another topic was how we technology bloggers can create awareness in the community about what the right kind of blogging is and which acts are morally and technically wrong. A couple of questions were raised about what liberties a person has when writing a blog. It was generally agreed that a blog is mainly a representation of our ideas and thoughts and should not be governed by external entities; as long as one writes what one really wants to say, without providing incorrect information and without plagiarizing, a blogger should be allowed to express himself. The panel discussion was supposed to be over in an hour, but the interest of the participants was remarkable, so it was extended by 30 more minutes. Finally, we brought the discussion to a close and agreed to continue the topic next year. TechEd India Panel Discussion on Web, Technology and SEO
    Surprisingly, the day was only just beginning after all of this. By this time, I had met almost all the MVPs who had arrived at the event, as well as many Microsoft employees; there were lots of community folks present, too. I decided to go meet several friends from the community who communicate with me on SQLAuthority.com. I also met Abhishek Baxi and had a good talk with him regarding Windows Mobile and Twitter. He also took a very quick video of me in which I spoke in my mother tongue, Gujarati. It was funny: I had talked in Gujarati almost all day, but during the interview I could not find the right Gujarati words. I think we all think in English when we think about technology, so as to address universality. After meeting them, I headed towards the Speakers' Dinner.
    Time: 8:00 PM – onwards
    Speakers Dinner
    The Speakers' Dinner was indeed a wonderful opportunity for all the speakers to get together and relax. We talked about so many different things, from Xbox to Hindi movies, and from SQL to samosas. I just cannot express how much fun I had. After a long evening, when I returned to my room and saw Shaivi, I instantly felt relaxed.
    Kids are truly gifts from God. Today was a really long but exciting day. So many things happened in just one day: the Visual Studio launch, lunch with Somasegar, two technical sessions, one panel discussion, a community leaders meeting, the speakers dinner and, last but not least, playing with my child! A perfect day!
    Day 2 – April 13, 2010
    Today started with a bang with an excellent keynote by Kamal Hathi, who launched SQL Server 2008 R2 in India and demonstrated the power of PowerPivot to all of us. 101 million rows in Excel brought lots of applause from the audience. Kamal Hathi Presenting Keynote at TechEd India 2010
    The day was a bit easier for me: I had no sessions and no events planned, only a few meetings for the second day of the event. I sat in the speakers' lounge for half the day and met many people there. I attended nearly nine different meetings today, on very different subjects. Here is a list of the topics of the community-related meetings:
    SQL PASS and its involvement in India and the subcontinent
    How to start community blogging
    Forums and developing an aptitude for technology
    Ahmedabad/Gandhinagar User Groups and their development
    SharePoint and SQL
    Business meeting – a client meeting
    Business meeting – a potential performance tuning project
    Business meeting – Solid Quality Mentors (SolidQ)
    And family friends
    Pinal Dave at TechEd India
    The day passed by quickly amid these meetings. In the evening, I headed to the Partners Expo with friends and checked out a few of the booths. I really wanted to talk about some of the products, but because of the freebies there was such a crowd that I finally decided to just take the partners' contact details. I will now start sending them my queries and, hopefully, will have my questions answered. Nupur and Shaivi also had a meeting to attend, with our family friend Vijay Raj. Vijay is another person who loves technology, and loves it more than anybody. I see him growing and learning every day, yet still remaining 'human'. I believe that anyone else who acquired as much knowledge as he has would turn into either a computer or a cyborg; Vijay, though, is still a kind gentleman and remains our close family friend. Shaivi was really happy to play with Uncle Vijay. Pinal Dave and Vijay Raj
    Renuka Prasad, a Microsoft MVP, impressed me with his passion for and knowledge of SQL. He always gives me credit for his success, which shows how humble he is. He has far more certifications than I do and has worked with SQL for many more years. He is an excellent photographer as well – most of the photos in this blog post were taken by him. I told him that if he ever wants a part-time job, he could do photography very well. Pinal Dave and Renuka Prasad
    I also met L Srividya from Microsoft, whom I had been looking forward to meeting. She is a bundle of knowledge, and everyone would surely learn a lot from her. I was able to get a few minutes with her and, well, came away more confident. She enlightened me on SQL Server BI concepts, domain management, SQL Server security and a few other interesting details. I also had a wonderful time talking about SharePoint with fellow Solid Quality Mentor Joy Rathnayake. He is very passionate about SharePoint, but when you talk .NET and SQL with him, he is still overwhelmingly knowledgeable. In fact, while talking to him, I learned that the most recent training he delivered was on SQL Server 2008 R2.
    I joked with him that it hurts my ego that he is now more popular than I am in SQL training and consulting. I am sure you all agree that working with good people is a gift from God, and I am fortunate enough to work with the best of the best industry experts. It was a great pleasure to hang out with my community friends – Ahswin Kini, HimaBindu Vejella, Vasudev G, Suprotim Agrawal, Dhananjay, Vikram Pendse, Mahesh Dhola, Mahesh Mitkari, Manu Zacharia, Shobhan, Hardik Shah, Ashish Mohta, Manan, Subodh Sohani and Sanjay Shetty (of course!). (Please let me know if I met you at the event and forgot to list your name here.)
    Time: 8:00 PM – onwards
    Community Leaders Dinner
    After lots of meetings, I headed to the Community Leaders dinner and met almost all the folks I had met in the morning. The discussion was much the same, but the really good part was that we were all enjoying it, and the food was really good. Nupur was invited to the event, but Shaivi could not come. When Nupur tried to enter, she was stopped because Shaivi did not have a pass for the dinner. Nupur explained that Shaivi was only 8 months old, did not eat outside food, and could not stay by herself at that age, but the doorkeeper would not budge: without entry details, Shaivi could not go in, though Nupur could. Nupur called me for help. By the time I got outside, the organizer of the event had reached the door and happily approved Shaivi to join the party. Once inside, Shaivi had lots of fun meeting so many people. Shaivi Dave and Abhishek Kant
    Dean Guida (Infragistics President and CEO) and Pinal Dave (SQLAuthority.com)
    Day 3 – April 14, 2010
    Though it was the last day, I was very excited, as I was about to present my favorite session. Query Optimization and Performance Tuning is my domain expertise, and I make my living consulting and training on it. Today's session was on that subject with an additional twist: spatial databases. I have always been intrigued by spatial databases and have enjoyed learning about them; however, I had never thought about spatial indexing before it was decided that I would do this session. I really thank Solid Quality Mentor Dr. Greg Low for his assistance in preparing the slide deck and reviewing the content. Furthermore, today was what I call my 'learning day'. So far I had not attended any session at TechEd, and I felt a bit down about that: everybody spends valuable time and money to learn something new and exciting at TechEd, and here I was on the last day of the event without having attended a single session. I did have a plan for the day, though: before my spatial database session I attended two technical sessions, both by Vinod Kumar. Vinod is a natural storyteller, and there was no doubt that his sessions would be jam-packed; people attend his sessions simply because Vinod is the speaker. He has never once disappointed an audience; he truly is a good speaker and knows his stuff very well. I personally do not think anyone in India compares to him on SQL.
    Time: 12:30pm-1:30pm
    SQL Server Query Optimization, Execution and Debugging Query Performance
    I really had a fun time attending this session. Vinod made it very interactive; the entire audience got into the presentation and started participating.
    Vinod presented a small Query Tuning problem that any developer would have encountered, and solved it with the audience's help in such a way that each developer felt he or she had already resolved it. On one question, I was the only one ready to answer, and Vinod told me in a light tone that I was not allowed to answer it! The audience found this very amusing. There was a huge crowd around Vinod after the session. Vinod – A master storyteller!
    Time: 3:45pm-4:45pm
    Data Recovery / consistency with CheckDB
    This session was much heavier than the earlier one, and I must say it is the favorite session I have EVER attended in India. At this TechEd I attended only two sessions, but over my career I have attended numerous technical sessions, not only in India but all over the world, and this one took my breath away. One by one, Vinod took different databases and corrupted them in different ways, each database corrupted in its own unique way. Once that was done, Vinod brought out DBCC CHECKDB and demonstrated how it can solve the problem, finally fixing all the databases with this single tool. I have good knowledge of this subject, but let me honestly admit that I learned a lot from this session. I enjoyed and cheered along with the other attendees, and I had the satisfaction that, just like everyone else, I took advantage of the event and learned something. I am now TECHnically EDucated. Pinal Dave and Vinod Kumar
    After two very interactive and informative SQL sessions from Vinod Kumar, it was my turn to present on spatial databases and indexing. I once again got nervous, but Vinod told me to stay natural and just do my presentation. Well, once I got a huge stage with a total of four projectors and a large crowd, I felt better.
    Time: 5:00pm-6:00pm
    Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing
    Pinal Presenting session at TechEd India 2010
    Pinal Presenting session at TechEd India 2010
    I kicked off the session with Michael J Swart's beautiful spatial image. This was the last session of the day but, to my surprise, I had more than 200 attendees. Rain was slowly starting outside, and I worried that the hall would not fill; despite this, there was not a single seat available within the first five minutes of the session. Thanks to all of you for attending my presentation. I demonstrated the map of the world (and of India) and quickly explained what the geography and geometry data types in a spatial database are. The session included an interesting story about indexing and a comparison of how different traditional indexes are from spatial indexes. Pinal Presenting session at TechEd India 2010
    Due to the heavy rain during the event, the power went off for about 22 minutes (just an accident – nobody's fault). During those minutes, there was no audio, no video and no light. I continued to address the mass of 200+ people without any audio device or PowerPoint. I must thank the audience, because not a single person left the session. They all stayed in their places, and some moved closer to hear me properly. I noticed that the curiosity and eagerness to learn new things was at its peak, even though it was the very last session of TechEd; everybody wanted to get the maximum knowledge out of the whole event. I was touched by the support from the audience. They listened and participated in my session even without any kind of technology (no PPT, no mic, no AC, nothing).
    During those 22 minutes, I completed my theory verbally. Pinal Presenting session at TechEd India 2010
    After a while, we got the projector back online and continued with some exciting demos. Many thanks to the Microsoft people who worked energetically in the background to get backup power to the projector. I had a very interesting demo in which I overlaid Bangalore and Hyderabad on the India map and found the aerial distance between them. After finding the aerial distance, we browsed online and saw that SQL Server's estimate of the aerial distance between the two cities matched the factual distance very closely. There was huge applause from the crowd at the fact that SQL Server takes the curvature of the earth into account and finds precise distances from the details. While finding the distance, I demonstrated a few examples of indexes, showing how one can use them in these distance queries and how they improve the performance of similar queries. I also demonstrated a few examples showing for which data types the index is most useful. We finished the demos with a bit more on the internals. Pinal Presenting session at TechEd India 2010
    Despite all the issues, I was mostly satisfied with my presentation; I think it was the best session I have ever presented at any conference. There was no help from technology for a while, but I still got lots of appreciation at the end. When we ended the session, the applause from the audience was so loud that, for a moment, the rain was not audible. I was truly moved by the dedication of the technology enthusiasts. Pinal Dave After Presenting session at TechEd India 2010
    The abstract of the session is as follows: Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use the spatial functionality in the next version of SQL Server to build and optimize spatial queries. This session outlines the new geography data type for storing geodetic spatial data and performing operations on it; the new geometry data type for storing planar spatial data and performing operations on it; how to take advantage of the new spatial indexes for high-performance queries; the new spatial results tab for quickly and easily viewing spatial query results directly from within Management Studio; and how to extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications, and much more.
    Time: 8:00 PM – onwards
    Dinner by Sponsors
    After the lively sessions of the day, there was another dinner party, courtesy of one of the sponsors of TechEd. All the MVPs and several community leaders were present. I would like to express my gratitude to Abhishek Kant for organizing this wonderful event for us. It was a blast and really relaxing in every way. We all stayed for a long time and talked about our sweet and unforgettable memories of the event. Pinal Dave and Bijoy Singhal
    It really was a wonderful event. Even after writing this much, I have no words to express how much I enjoyed TechEd; in truth, I have shared with you only 1% of everything I did at the event. There were so many people I met who are not mentioned here, although I wanted to write their names, too.
    Anyway, I learned so many things, and I still have not gotten over all the fun I had at this event. Pinal Dave at TechEd India 2010
    The Next Days – April 15, 2010 – till today
    I am still not able to get my mind off the whole experience of TechEd India 2010. It was like a whole Microsoft family working together to celebrate a happy occasion. TechEd India – Truly An Unforgettable Experience!
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: About Me, MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, SQLServer, T SQL, Technology Tagged: TechEd, TechEdIn

    Read the article

  • UpdatePanel, JavaScript postback and changing querystring at same time in SharePoint Search Page

    - by Lee Dale
    Hi guys, I've been tearing my hair out with this one. Let me see if I can explain: I have a SharePoint results page on which I have a Search Results Core WebPart. I want to change a parameter in the querystring when I post back the page so that the WebPart returns different results for each parameter; e.g. the querystring interactivemap.aspx?k=Country:Romania will filter the results for Romania. The first issue is that I want to do this with JavaScript, so I call: document.getElementById('aspnetForm').action = "interactivemap.aspx?k=Country:" + country; Nothing special here, but the reason I need to call this from JavaScript is that there is also a Flash applet on the page, from which the JavaScript calls originate. When the JavaScript calls are made, the page needs to post back but not reload the Flash applet. I turned to ASP.NET AJAX for this, so I wrapped the search results webpart in an UpdatePanel. Now, if I use a button within the UpdatePanel to post back, the UpdatePanel behaves as expected and does a partial render of the search results webpart without reloading the Flash applet. The problem comes because I need to post back the page from JavaScript. I called __doPostBack(), as I have used it successfully in the past. It works on its own, but fails when I first call the JavaScript above before the __doPostBack() (I also tried calling click() on a hidden button); the code for the page is at the bottom. I think the problem is that the ScriptManager does not allow a partial render when the form's post action has changed. My questions are: A) Is there some other way to change the search results webpart parameter without using the querystring? Or B) Is there a way to change the querystring when doing an AJAX postback and still get a partial render?
    <asp:Content ContentPlaceHolderID="PlaceHolderFullContent" runat="server"> function update(country) { //__doPostBack('ContentUpdatePanel', ''); //document.getElementById('aspnetForm').action = "interactivemap.aspx?k=ArticleCountry:" + country; document.getElementById('ctl00_PlaceHolderFullContent_UpdateButton').click(); } Romania <div class="firstDataTitle"> <div class="datatabletitleOuterWrapper"> <div class="datatabletitle"> <span>Content</span></div> </div> <div class="datatableWrapper"> <div class="dataHolderWrapper"> <div class="datatable"> <div> <div class="searchMain"> <div class="searchZoneMain"> <asp:UpdatePanel runat="server" id="ContentUpdatePanel" UpdateMode="Conditional"> <ContentTemplate> <WebPartPages:webpartzone runat="server" AllowPersonalization="false" title="<%$Resources:sps,LayoutPageZone_BottomZone%>" id="BottomZone" orientation="Vertical" QuickAdd-GroupNames="Search" QuickAdd-ShowListsAndLibraries="false"><ZoneTemplate></ZoneTemplate></WebPartPages:webpartzone> <asp:Button id="UpdateButton" name="UpdateButton" runat="server" Text="Update"/> </ContentTemplate> </asp:UpdatePanel> </div> </div> </div> </div> </div> </div>
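    One direction worth sketching (my suggestion, not part of the original question, and the control IDs are the hypothetical ones from the markup above): leave the form action and querystring alone, carry the country value in a hidden field inside the UpdatePanel, and let the hidden button trigger the partial postback. The ScriptManager then sees no changed form action to object to.

        // Hypothetical client-side helper: store the parameter in a hidden
        // field instead of rewriting the form action, then post back.
        function update(country) {
            // assumes <asp:HiddenField ID="CountryField" runat="server" />
            // rendered inside the UpdatePanel
            document.getElementById('ctl00_PlaceHolderFullContent_CountryField').value = country;
            // trigger the partial postback via the existing hidden button
            document.getElementById('ctl00_PlaceHolderFullContent_UpdateButton').click();
        }

    Server-side, the button's Click handler could read the posted value and feed it to the web part (for example via CoreResultsWebPart.FixedQuery), letting the UpdatePanel re-render just that zone. Hedged, of course: I have not verified this against the SharePoint search web parts.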

    Read the article

  • Marking multi-level nested forms as "dirty" in Rails

    - by Charles Kihe
    I have a three-level multi-nested form in Rails. The setup is like this: Projects have many Milestones, and Milestones have many Notes. The goal is to have everything editable within the page with JavaScript, where we can add multiple new Milestones to a Project and add new Notes to new and existing Milestones. Everything works as expected, except that when I add new notes to an existing Milestone (new Milestones work fine when adding notes to them), the new notes won't save unless I edit one of the fields that actually belongs to the Milestone, to mark the form "dirty"/edited. Is there a way to flag the Milestone so that the newly added Notes will save? Edit: sorry, it's hard to paste in all of the code because there are so many parts, but here goes:
    Models class Project < ActiveRecord::Base has_many :notes, :dependent => :destroy has_many :milestones, :dependent => :destroy accepts_nested_attributes_for :milestones, :allow_destroy => true accepts_nested_attributes_for :notes, :allow_destroy => true, :reject_if => proc { |attributes| attributes['content'].blank? } end class Milestone < ActiveRecord::Base belongs_to :project has_many :notes, :dependent => :destroy accepts_nested_attributes_for :notes, :allow_destroy => true, :allow_destroy => true, :reject_if => proc { |attributes| attributes['content'].blank? } end class Note < ActiveRecord::Base belongs_to :milestone belongs_to :project scope :newest, lambda { |*args| order('created_at DESC').limit(*args.first || 3) } end
    I'm using a jQuery-based, unobtrusive version of Ryan Bates' combo helper/JS code to get this done. Application Helper def add_fields_for_association(f, association, partial) new_object = f.object.class.reflect_on_association(association).klass.new fields = f.fields_for(association, new_object, :child_index => "new_#{association}") do |builder| render(partial, :f => builder) end end
    I render the form for the association in a hidden div, and then use the following JavaScript to find it and add it as needed. JavaScript function addFields(link, association, content, func) { var newID = new Date().getTime(); var regexp = new RegExp("new_" + association, "g"); var form = content.replace(regexp, newID); var link = $(link).parent().next().before(form).prev(); if (func) { func.call(); } return link; }
    I'm guessing the only other relevant piece of code that I can think of would be the create method in the NotesController: def create respond_with(@note = @owner.notes.create(params[:note])) do |format| format.js { render :json => @owner.notes.newest(3).all.to_json } format.html { redirect_to((@milestone ? [@project, @milestone, @note] : [@project, @note]), :notice => 'Note was successfully created.') } end end The @owner ivar is created in the following before filter: def load_milestone @milestone = @project.milestones.find(params[:milestone_id]) if params[:milestone_id] end def determine_owner @owner = load_milestone @owner ||= @project end
    Thing is, all of this seems to work fine except when I'm adding new notes to existing milestones. The milestone has to be "touched" in order for new notes to save, or else Rails won't pay attention.
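    For what it's worth, here is a hedged sketch of one workaround (my suggestion, not from the question): force each existing milestone to register as dirty before the nested attributes are saved, so the save cascades down to the new notes. The *_will_change! methods come from ActiveRecord's dirty tracking.

        # Sketch of a controller-side workaround; the action shown is hypothetical.
        def update
          @project = Project.find(params[:id])
          # Mark each milestone dirty so Rails persists newly added nested
          # notes even when no milestone column itself changed.
          @project.milestones.each { |m| m.updated_at_will_change! }
          if @project.update_attributes(params[:project])
            redirect_to @project
          else
            render :action => 'edit'
          end
        end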

    Read the article

  • Clipplanes, vertex shaders and hardware vertex processing in Direct3D 9

    - by Igor
    Hi, I have an issue with clipplanes in my application that I can reproduce in a sample from DirectX SDK (February 2010). I added a clipplane to the HLSLwithoutEffects sample: ... D3DXPLANE g_Plane( 0.0f, 1.0f, 0.0f, 0.0f ); ... void SetupClipPlane(const D3DXMATRIXA16 & view, const D3DXMATRIXA16 & proj) { D3DXMATRIXA16 m = view * proj; D3DXMatrixInverse( &m, NULL, &m ); D3DXMatrixTranspose( &m, &m ); D3DXPLANE plane; D3DXPlaneNormalize( &plane, &g_Plane ); D3DXPLANE clipSpacePlane; D3DXPlaneTransform( &clipSpacePlane, &plane, &m ); DXUTGetD3D9Device()->SetClipPlane( 0, clipSpacePlane ); } void CALLBACK OnFrameMove( double fTime, float fElapsedTime, void* pUserContext ) { // Update the camera's position based on user input g_Camera.FrameMove( fElapsedTime ); // Set up the vertex shader constants D3DXMATRIXA16 mWorldViewProj; D3DXMATRIXA16 mWorld; D3DXMATRIXA16 mView; D3DXMATRIXA16 mProj; mWorld = *g_Camera.GetWorldMatrix(); mView = *g_Camera.GetViewMatrix(); mProj = *g_Camera.GetProjMatrix(); mWorldViewProj = mWorld * mView * mProj; g_pConstantTable->SetMatrix( DXUTGetD3D9Device(), "mWorldViewProj", &mWorldViewProj ); g_pConstantTable->SetFloat( DXUTGetD3D9Device(), "fTime", ( float )fTime ); SetupClipPlane( mView, mProj ); } void CALLBACK OnFrameRender( IDirect3DDevice9* pd3dDevice, double fTime, float fElapsedTime, void* pUserContext ) { // If the settings dialog is being shown, then // render it instead of rendering the app's scene if( g_SettingsDlg.IsActive() ) { g_SettingsDlg.OnRender( fElapsedTime ); return; } HRESULT hr; // Clear the render target and the zbuffer V( pd3dDevice->Clear( 0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, D3DCOLOR_ARGB( 0, 45, 50, 170 ), 1.0f, 0 ) ); // Render the scene if( SUCCEEDED( pd3dDevice->BeginScene() ) ) { pd3dDevice->SetVertexDeclaration( g_pVertexDeclaration ); pd3dDevice->SetVertexShader( g_pVertexShader ); pd3dDevice->SetStreamSource( 0, g_pVB, 0, sizeof( D3DXVECTOR2 ) ); pd3dDevice->SetIndices( g_pIB ); pd3dDevice->SetRenderState( D3DRS_CLIPPLANEENABLE, D3DCLIPPLANE0 ); V( pd3dDevice->DrawIndexedPrimitive( D3DPT_TRIANGLELIST, 0, 0, g_dwNumVertices, 0, g_dwNumIndices / 3 ) ); pd3dDevice->SetRenderState( D3DRS_CLIPPLANEENABLE, 0 ); RenderText(); V( g_HUD.OnRender( fElapsedTime ) ); V( pd3dDevice->EndScene() ); } } When I rotate the camera I have different visual results when using hardware and software vertex processing. In software vertex processing mode or when using the reference device the clipping plane works fine as expected. In hardware mode it seems to rotate with the camera. If I remove the call to RenderText(); from OnFrameRender then hardware rendering also works fine. Further debugging reveals that the problem is in ID3DXFont::DrawText. I have this issue in Windows Vista and Windows 7 but not in Windows XP. I tested the code with the latest NVidia and ATI drivers in all three OSes on different PCs. Is it a DirectX issue? Or incorrect usage of clipplanes? Thanks Igor
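    One workaround that might be worth trying (a sketch, not a confirmed fix): bracket the text rendering with a Direct3D 9 state block, so that whatever device state ID3DXFont::DrawText changes is restored before the next frame's clipped geometry is drawn.

        // Sketch: capture the device state, let ID3DXFont::DrawText run,
        // then restore the captured state so it cannot leak into the
        // clip-plane setup of subsequent draws.
        IDirect3DStateBlock9* pStateBlock = NULL;
        if (SUCCEEDED(pd3dDevice->CreateStateBlock(D3DSBT_ALL, &pStateBlock)))
        {
            RenderText();          // ID3DXFont::DrawText happens in here
            pStateBlock->Apply();  // reapply the state captured above
            pStateBlock->Release();
        }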

    Read the article

  • Error while rendering report (.rdl)

    - by Sushavon Banerjee
    Hi, I have generated a .rdl report file. Now, when I try to render the .rdl report file into PDF format, an exception is thrown. The exception is: "An error occurred during local report processing."
    The stack trace is as follows: " at Microsoft.Reporting.WebForms.LocalReport.InternalRender(String format, Boolean allowInternalRenderers, String deviceInfo, CreateAndRegisterStream createStreamCallback, Warning[]& warnings)\r\n at Microsoft.Reporting.WebForms.LocalReport.InternalRender(String format, Boolean allowInternalRenderers, String deviceInfo, String& mimeType, String& encoding, String& fileNameExtension, String[]& streams, Warning[]& warnings)\r\n at Microsoft.Reporting.WebForms.LocalReport.Render(String format, String deviceInfo, String& mimeType, String& encoding, String& fileNameExtension, String[]& streams, Warning[]& warnings)\r\n at SaltlakeSoft.APEX2.Controllers.TestPageController.RenderReport() in E:\Documents and Settings\Administrator\Desktop\afetbuild15thmayapex2\apex2\Controllers\TestPageController.cs:line 1626\r\n at lambda_method(ExecutionScope , ControllerBase , Object[] )\r\n at System.Web.Mvc.ActionMethodDispatcher.<c_DisplayClass1.b_0(ControllerBase controller, Object[] parameters)\r\n at System.Web.Mvc.ActionMethodDispatcher.Execute(ControllerBase controller, Object[] parameters)\r\n at System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary2 parameters)\r\n at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary2 parameters)\r\n at System.Web.Mvc.ControllerActionInvoker.<c_DisplayClassa.b_7()\r\n at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation)"
    My code is as follows: LocalReport report = new LocalReport(); report.ReportPath = @"E:\Report1.rdl"; List<Employee> employeeCollection = empRepository.FindAll().ToList(); ReportDataSource reportDataSource = new ReportDataSource("dataSource1",employeeCollection); report.DataSources.Clear(); report.DataSources.Add(reportDataSource); report.Refresh(); string reportType = "PDF"; string mimeType; string encoding; string fileNameExtension; string deviceInfo ="<DeviceInfo>" +"<OutputFormat>PDF</OutputFormat>" + "<PageWidth>8.5in</PageWidth>" + "<PageHeight>11in</PageHeight>" + "<MarginTop>0.5in</MarginTop>" +"<MarginLeft>1in</MarginLeft>" + "<MarginRight>1in</MarginRight>" +"<MarginBottom>0.5in</MarginBottom>" + "</DeviceInfo>"; Warning[] warnings; string[] streams; byte[] renderedBytes; renderedBytes = report.Render(reportType,deviceInfo,out mimeType,out encoding, out fileNameExtension, out streams, out warnings); Response.Clear(); Response.ContentType = mimeType; Response.AddHeader("content-disposition", "attachment; filename=foo." + fileNameExtension); Response.BinaryWrite(renderedBytes); Response.End();
    Help me please... Thanks, Sushavon
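    As an aside, the generic "An error occurred during local report processing" message usually wraps the real cause in nested inner exceptions (a mismatched data source name is a frequent culprit: the name passed to ReportDataSource must match the DataSet name inside the .rdl). A small sketch to surface the chain:

        // Sketch: walk the InnerException chain to find the root cause
        // of the LocalReport rendering failure.
        try
        {
            renderedBytes = report.Render(reportType, deviceInfo, out mimeType,
                out encoding, out fileNameExtension, out streams, out warnings);
        }
        catch (Exception ex)
        {
            for (Exception e = ex; e != null; e = e.InnerException)
            {
                System.Diagnostics.Debug.WriteLine(e.Message);
            }
            throw;
        }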

    Read the article

  • Does anyone know how to appropriately deal with user timezones in rails 2.3?

    - by Amazing Jay
    We're building a Rails app that needs to display dates (and more importantly, calculate them) in multiple timezones. Can anyone point me towards how to work with user timezones in Rails 2.3 (.5 or .8)? The most inclusive article I've seen detailing how user time zones are supposed to work is here: http://wiki.rubyonrails.org/howtos/time-zones... although it is unclear when this was written or for what version of Rails. Specifically it states that: "Time.zone - The time zone that is actually used for display purposes. This may be set manually to override config.time_zone on a per-request basis." Key terms being "display purposes" and "per-request basis". Locally on my machine, this is true. However on production, neither is true. Setting Time.zone persists past the end of the request (to all subsequent requests) and also affects the way AR saves to the DB (basically treating any date as if it were already in UTC even when it's not), thus saving completely inappropriate values. We run Ruby Enterprise Edition on production with Passenger. If this is my problem, do we need to switch to JRuby or something else? To illustrate the problem I put the following actions in my ApplicationController right now: def test p_time = Time.now.utc s_time = Time.utc(p_time.year, p_time.month, p_time.day, p_time.hour) logger.error "TIME.ZONE" + Time.zone.inspect logger.error ENV['TZ'].inspect logger.error p_time.inspect logger.error s_time.inspect jl = JunkLead.create! jl.date_at = s_time logger.error s_time.inspect logger.error jl.date_at.inspect jl.save! logger.error s_time.inspect logger.error jl.date_at.inspect render :nothing => true, :status => 200 end def test2 Time.zone = 'Mountain Time (US & Canada)' logger.error "TIME.ZONE" + Time.zone.inspect logger.error ENV['TZ'].inspect render :nothing => true, :status => 200 end def test3 Time.zone = 'UTC' logger.error "TIME.ZONE" + Time.zone.inspect logger.error ENV['TZ'].inspect render :nothing => true, :status => 200 end and they yield the following: Processing ApplicationController#test (for 98.202.196.203 at 2010-12-24 22:15:50) [GET] TIME.ZONE#<ActiveSupport::TimeZone:0x2c57a68 @tzinfo=#<TZInfo::DataTimezone: Etc/UTC>, @name="UTC", @utc_offset=0> nil Fri Dec 24 22:15:50 UTC 2010 Fri Dec 24 22:00:00 UTC 2010 Fri Dec 24 22:00:00 UTC 2010 Fri, 24 Dec 2010 22:00:00 UTC +00:00 Fri Dec 24 22:00:00 UTC 2010 Fri, 24 Dec 2010 22:00:00 UTC +00:00 Completed in 21ms (View: 0, DB: 4) | 200 OK [http://www.dealsthatmatter.com/test] Processing ApplicationController#test2 (for 98.202.196.203 at 2010-12-24 22:15:53) [GET] TIME.ZONE#<ActiveSupport::TimeZone:0x2c580a8 @tzinfo=#<TZInfo::DataTimezone: America/Denver>, @name="Mountain Time (US & Canada)", @utc_offset=-25200> nil Completed in 143ms (View: 1, DB: 3) | 200 OK [http://www.dealsthatmatter.com/test2] Processing ApplicationController#test (for 98.202.196.203 at 2010-12-24 22:15:59) [GET] TIME.ZONE#<ActiveSupport::TimeZone:0x2c580a8 @tzinfo=#<TZInfo::DataTimezone: America/Denver>, @name="Mountain Time (US & Canada)", @utc_offset=-25200> nil Fri Dec 24 22:15:59 UTC 2010 Fri Dec 24 22:00:00 UTC 2010 Fri Dec 24 22:00:00 UTC 2010 Fri, 24 Dec 2010 15:00:00 MST -07:00 Fri Dec 24 22:00:00 UTC 2010 Fri, 24 Dec 2010 15:00:00 MST -07:00 Completed in 20ms (View: 0, DB: 4) | 200 OK [http://www.dealsthatmatter.com/test] Processing ApplicationController#test3 (for 98.202.196.203 at 2010-12-24 22:16:03) [GET] TIME.ZONE#<ActiveSupport::TimeZone:0x2c57a68 @tzinfo=#<TZInfo::DataTimezone: Etc/UTC>, @name="UTC", @utc_offset=0> nil 
Completed in 17ms (View: 0, DB: 2) | 200 OK [http://www.dealsthatmatter.com/test3] Processing ApplicationController#test (for 98.202.196.203 at 2010-12-24 22:16:04) [GET] TIME.ZONE#<ActiveSupport::TimeZone:0x2c57a68 @tzinfo=#<TZInfo::DataTimezone: Etc/UTC>, @name="UTC", @utc_offset=0> nil Fri Dec 24 22:16:05 UTC 2010 Fri Dec 24 22:00:00 UTC 2010 Fri Dec 24 22:00:00 UTC 2010 Fri, 24 Dec 2010 22:00:00 UTC +00:00 Fri Dec 24 22:00:00 UTC 2010 Fri, 24 Dec 2010 22:00:00 UTC +00:00 Completed in 151ms (View: 0, DB: 4) | 200 OK [http://www.dealsthatmatter.com/test] It should be clear above that the 2nd call to /test shows Time.zone set to Mountain, even though it shouldn't. Additionally, checking the database reveals that the test action when run after test2 saved a JunkLead record with a date of 2010-12-22 15:00:00, which is clearly wrong.
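    For context, Time.zone is stored in Thread.current, and Passenger reuses interpreter processes (and their threads) across requests, which matches the "persists past the end of the request" symptom above. A sketch of the usual per-request reset; the logged_in?/current_user helpers are hypothetical:

        # Sketch: set Time.zone on every request and restore it afterwards,
        # since the value lives in Thread.current and threads are reused.
        class ApplicationController < ActionController::Base
          around_filter :set_time_zone

          private

          def set_time_zone
            old_zone = Time.zone
            Time.zone = logged_in? ? current_user.time_zone : 'UTC'
            yield
          ensure
            Time.zone = old_zone
          end
        end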

    Read the article

  • Disable antialiasing for a specific GDI device context

    - by Jacob Stanley
    I'm using a third-party library to render an image to a GDI DC, and I need to ensure that any text is rendered without smoothing/antialiasing so that I can convert the image to a predefined palette with indexed colors. The third-party library I'm using for rendering doesn't support this and just renders text as per the current Windows settings for font rendering. They've also said that it's unlikely they'll add the ability to switch antialiasing off any time soon. The best workaround I've found so far is to call the third-party library this way (error handling and prior settings checks omitted for brevity): private static void SetFontSmoothing(bool enabled) { int pv = 0; SystemParametersInfo(Spi.SetFontSmoothing, enabled ? 1 : 0, ref pv, Spif.None); } // snip Graphics graphics = Graphics.FromImage(bitmap) IntPtr deviceContext = graphics.GetHdc(); SetFontSmoothing(false); thirdPartyComponent.Render(deviceContext); SetFontSmoothing(true); This obviously has a horrible effect on the operating system: other applications flicker from ClearType enabled to disabled and back every time I render the image. So the question is, does anyone know how I can alter the font rendering settings for a specific DC? Even if I could just make the changes process- or thread-specific instead of affecting the whole operating system, that would be a big step forward! (It would give me the option of farming this rendering out to a separate process – the results are written to disk after rendering anyway.) EDIT: I'd like to add that I don't mind if the solution is more complex than just a few API calls. I'd even be happy with a solution that involved hooking system DLLs if it were only about a day's work. EDIT: Background information The third-party library renders using a palette of about 70 colors. After the image (which is actually a map tile) is rendered to the DC, I convert each pixel from its 32-bit color back to its palette index and store the result as an 8bpp greyscale image. This is uploaded to the video card as a texture. During rendering, I re-apply the palette (also stored as a texture) with a pixel shader executing on the video card. This allows me to switch and fade between different palettes instantaneously instead of needing to regenerate all the required tiles. It takes between 10 and 60 seconds to generate and upload all the tiles for a typical view of the world. EDIT: Renamed GraphicsDevice to Graphics The class GraphicsDevice in the previous version of this question is actually System.Drawing.Graphics. I had renamed it (using GraphicsDevice = ...) because the code in question is in the namespace MyCompany.Graphics and the compiler wasn't able to resolve it properly. EDIT: Success! I even managed to port the PatchIat function below to C# with the help of Marshal.GetFunctionPointerForDelegate. The .NET interop team really did a fantastic job! 
I'm now using the following syntax, where Patch is an extension method on System.Diagnostics.ProcessModule: module.Patch( "Gdi32.dll", "CreateFontIndirectA", (CreateFontIndirectA original) => font => { font->lfQuality = NONANTIALIASED_QUALITY; return original(font); }); private unsafe delegate IntPtr CreateFontIndirectA(LOGFONTA* lplf); private const int NONANTIALIASED_QUALITY = 3; [StructLayout(LayoutKind.Sequential)] private struct LOGFONTA { public int lfHeight; public int lfWidth; public int lfEscapement; public int lfOrientation; public int lfWeight; public byte lfItalic; public byte lfUnderline; public byte lfStrikeOut; public byte lfCharSet; public byte lfOutPrecision; public byte lfClipPrecision; public byte lfQuality; public byte lfPitchAndFamily; public unsafe fixed sbyte lfFaceName [32]; }

    Read the article

  • Problem using form builder & DOM manipulation in Rails with multiple levels of nested partials

    - by Chris Hart
    I'm having a problem using nested partials with the dynamic form builder code (from the "complex form example" code on GitHub) in Rails. I have my top-level view "new" (where I attempt to generate the template): <% form_for (@transaction_group) do |txngroup_form| %> <%= txngroup_form.error_messages %> <% content_for :jstemplates do -%> <%= "var transaction='#{generate_template(txngroup_form, :transactions)}'" %> <% end -%> <%= render :partial => 'transaction_group', :locals => { :f => txngroup_form, :txn_group => @transaction_group }%> <% end -%> This renders the transaction_group partial: <div class="content"> <% logger.debug "in partial, class name = " + txn_group.class.name %> <% f.fields_for txn_group.transactions do |txn_form| %> <table id="transactions" class="form"> <tr class="header"><td>Price</td><td>Quantity</td></tr> <%= render :partial => 'transaction', :locals => { :tf => txn_form } %> </table> <% end %> <div>&nbsp;</div><div id="container"> <%= link_to 'Add a transaction', '#transaction', :class => "add_nested_item", :rel => "transactions" %> </div> <div>&nbsp;</div> ... which in turn renders the transaction partial: <tr><td><%= tf.text_field :price, :size => 5 %></td> <td><%= tf.text_field :quantity, :size => 2 %></td></tr> The generate_template code looks like this: def generate_html(form_builder, method, options = {}) options[:object] ||= form_builder.object.class.reflect_on_association(method).klass.new options[:partial] ||= method.to_s.singularize options[:form_builder_local] ||= :f form_builder.fields_for(method, options[:object], :child_index => 'NEW_RECORD') do |f| render(:partial => options[:partial], :locals => { options[:form_builder_local] => f }) end end def generate_template(form_builder, method, options = {}) escape_javascript generate_html(form_builder, method, options) end (Obviously my code is not the most elegant - I was trying to get this nested partial thing worked out first.) My problem is that I get an undefined variable exception from the transaction partial when loading the view: /Users/chris/dev/ss/app/views/transaction_groups/_transaction.html.erb:2:in _run_erb_app47views47transaction_groups47_transaction46html46erb_locals_f_object_transaction' /Users/chris/dev/ss/app/helpers/customers_helper.rb:29:in generate_html' /Users/chris/dev/ss/app/helpers/customers_helper.rb:28:in generate_html' /Users/chris/dev/ss/app/helpers/customers_helper.rb:34:in generate_template' /Users/chris/dev/ss/app/views/transaction_groups/new.html.erb:4:in _run_erb_app47views47transaction_groups47new46html46erb' /Users/chris/dev/ss/app/views/transaction_groups/new.html.erb:3:in _run_erb_app47views47transaction_groups47new46html46erb' /Users/chris/dev/ss/app/views/transaction_groups/new.html.erb:1:in _run_erb_app47views47transaction_groups47new46html46erb' /Users/chris/dev/ss/app/controllers/transaction_groups_controller.rb:17:in new' I'm pretty sure this is because the do loop for form_for hasn't executed yet (?)... I'm not sure that my approach to this problem is the best, but I haven't been able to find a better solution for dynamically adding form partials to the DOM. Basically I need a way to add records to a has_many model dynamically on a nested form. Any recommendations on a way to fix this particular problem or (even better!) a cleaner solution are appreciated. Thanks in advance. Chris
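    A hedged guess at the immediate error, judging from the trace (_locals_f_object_transaction): generate_html defaults :form_builder_local to :f, but _transaction.html.erb reads its builder as tf, so the generated template receives a local named f and no tf at all. A sketch of the template call with the local name made explicit:

        <%# Sketch: pass the builder local under the name the partial expects %>
        <%= "var transaction='#{generate_template(txngroup_form, :transactions,
              :form_builder_local => :tf)}'" %>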

    Read the article

  • Uninitialized constant Item::Types

    - by Rasmus
    Hi! First off, I'm a newbie Ruby programmer, so please bear with me if this is a very dumb question. I get this uninitialized constant error when I submit my nested forms.
    order.rb class Order < ActiveRecord::Base has_many :items, :dependent => :destroy has_many :types, :through => :items accepts_nested_attributes_for :items accepts_nested_attributes_for :types validates_associated :items validates_associated :types end
    item.rb class Item < ActiveRecord::Base has_one :types belongs_to :order accepts_nested_attributes_for :types validates_associated :types end
    type.rb class Type < ActiveRecord::Base belongs_to :items belongs_to :orders end
    new.erb.html <% form_for @order do |f| %> <%= f.error_messages %> <% f.fields_for :items do |builder| %> <table border="0"> <th>Type</th> <th>Amount</th> <th>Text</th> <th>Price</th> <tr> <% f.fields_for :type do |m| %> <td> <%= m.collection_select :type, Type.find(:all, :order => "created_at DESC"), :id, :name, {:prompt => "Select a Type" }, {:id => "selector", :onchange => "type_change(this)"} %> </td> <% end %> <td> <%= f.text_field :amount, :id => "amountField", :onchange => "change_total_price()" %> </td> <td> <%= f.text_field :text, :id => "textField" %> </td> <td> <%= f.text_field :price, :class => "priceField", :onChange => "change_total_price()" %> </td> <td> <%= link_to_remove_fields "Remove Item", f %> </td> </tr> </table> <% end %> <p><%= link_to_add_fields "Add Item", f, :items %></p> <p> <%= f.label :total_price %><br /> <%= f.text_field :total_price, :class => "priceField", :id => "totalPrice" %> </p> <p><%= f.submit "Create"%></p> <% end %> <%= link_to 'Back', orders_path %>
    create method in orders_controller.rb def create @order = Order.new(params[:order]) respond_to do |format| if @order.save flash[:notice] = 'Post was successfully created.' format.html { redirect_to(@order) } format.xml { render :xml => @order, :status => :created, :location => @order } else format.html { render :action => "new" } format.xml { render :xml => @order.errors, :status => :unprocessable_entity } end end end
    Hopefully you can see what I can't.
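    A hedged reading of the error: has_one :types, belongs_to :items and belongs_to :orders use plural names, so Rails constantizes them to Item::Types (and friends), which do not exist; has_one and belongs_to expect singular names, with plurals reserved for has_many. A sketch of what the models were probably meant to look like (note that `type` is a risky name in ActiveRecord because the `type` column is reserved for single-table inheritance, so renaming the model, e.g. to ItemType, avoids surprises):

        # Sketch: singular names on the singular side of each association.
        class Item < ActiveRecord::Base
          belongs_to :order
          belongs_to :type   # was: has_one :types => "uninitialized constant Item::Types"
          accepts_nested_attributes_for :type
        end

        class Type < ActiveRecord::Base
          has_many :items    # plural only on the has_many side
        end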

    Read the article

  • A RenderTargetView cannot be created from a NULL Resource

    - by numerical25
    I am trying to create my render target view, but I get this error from DirectX: A RenderTargetView cannot be created from a NULL Resource. To my knowledge, it seems that I must fill the render target pointer with data before passing it, but I am having trouble figuring out how. Below are my declaration and implementation.
    declaration #pragma once #include "stdafx.h" #include "resource.h" #include "d3d10.h" #include "d3dx10.h" #include "dinput.h" #define MAX_LOADSTRING 100 class RenderEngine { protected: RECT m_screenRect; //direct3d Members ID3D10Device *m_pDevice; // The IDirect3DDevice10 // interface ID3D10Texture2D *m_pBackBuffer; // Pointer to the back buffer ID3D10RenderTargetView *m_pRenderTargetView; // Pointer to render target view IDXGISwapChain *m_pSwapChain; // Pointer to the swap chain RECT m_rcScreenRect; // The dimensions of the screen ID3DX10Font *m_pFont; // The font used for rendering text // Sprites used to hold font characters ID3DX10Sprite *m_pFontSprite; ATOM RegisterEngineClass(); void Present(); public: static HINSTANCE m_hInst; HWND m_hWnd; int m_nCmdShow; TCHAR m_szTitle[MAX_LOADSTRING]; // The title bar text TCHAR m_szWindowClass[MAX_LOADSTRING]; // the main window class name void DrawTextString(int x, int y, D3DXCOLOR color, const TCHAR *strOutput); //static functions static LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam); static INT_PTR CALLBACK About(HWND hDlg, UINT message, WPARAM wParam, LPARAM lParam); bool InitWindow(); bool InitDirectX(); bool InitInstance(); int Run(); RenderEngine() { m_screenRect.right = 800; m_screenRect.bottom = 600; } };
    my implementation bool RenderEngine::InitDirectX() { //potential error. You did not set to zero memory and you did not set the scaling property DXGI_MODE_DESC bd; bd.Width = m_screenRect.right; bd.Height = m_screenRect.bottom; bd.Format = DXGI_FORMAT_R8G8B8A8_UNORM; bd.RefreshRate.Numerator = 60; bd.RefreshRate.Denominator = 1; DXGI_SAMPLE_DESC sd; sd.Count = 1; sd.Quality = 0; DXGI_SWAP_CHAIN_DESC swapDesc; ZeroMemory(&swapDesc, sizeof(swapDesc)); swapDesc.BufferDesc = bd; swapDesc.SampleDesc = sd; swapDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; swapDesc.OutputWindow = m_hWnd; swapDesc.BufferCount = 1; swapDesc.SwapEffect = DXGI_SWAP_EFFECT_DISCARD, swapDesc.Windowed = true; swapDesc.Flags = 0; HRESULT hr; hr = D3D10CreateDeviceAndSwapChain(NULL, D3D10_DRIVER_TYPE_HARDWARE, NULL, D3D10_CREATE_DEVICE_DEBUG, D3D10_SDK_VERSION , &swapDesc, &m_pSwapChain, &m_pDevice); if(FAILED(hr)) return false; // Create a render target view hr = m_pDevice->CreateRenderTargetView( m_pBackBuffer, NULL, &m_pRenderTargetView); // FAILS RIGHT HERE // if(FAILED(hr)) return false; return true; }
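    For the record, in the listing above m_pBackBuffer is declared but never assigned, so CreateRenderTargetView really does receive a NULL resource. A sketch of the usual Direct3D 10 sequence, fetching the back buffer from the swap chain first:

        // Sketch: obtain the swap chain's back buffer before creating the view.
        hr = m_pSwapChain->GetBuffer(0, __uuidof(ID3D10Texture2D),
                                     (void**)&m_pBackBuffer);
        if (FAILED(hr)) return false;

        hr = m_pDevice->CreateRenderTargetView(m_pBackBuffer, NULL,
                                               &m_pRenderTargetView);
        // The view holds its own reference, so this pointer can be released.
        m_pBackBuffer->Release();
        if (FAILED(hr)) return false;

        // Bind the view so rendering actually targets the back buffer.
        m_pDevice->OMSetRenderTargets(1, &m_pRenderTargetView, NULL);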

    Read the article

  • Can't seem to redirect from a ViewScoped constructor.

    - by Andrew
    I'm having trouble redirecting from a view scoped bean in the case where we don't have the required info for the page in question. The log entry in the @PostConstruct is visible in the log right before an NPE relating to the view trying to render itself instead of following my redirect. Why is it ignoring the redirect? Here's my code: @ManagedBean public class WelcomeView { private String sParam; private String aParam; public WelcomeView() { super(); sParam = getURL_Param("surveyName"); aParam = getURL_Param("accountName"); project = fetchProject(sParam, aParam); } @PostConstruct public void redirectWithoutProject() { if (null == project) { try { logger.warn("NO project [" + sParam + "] for account [" + aParam + "]"); FacesContext fc = FacesContext.getCurrentInstance(); fc.getExternalContext().redirect("/errors/noSurvey.jsf"); return; } catch (Exception e) { e.printStackTrace(); } } } .... public boolean getAuthenticated() { if (project.getPasswordProtected()) { return enteredPassword.equals(project.getLoginPassword()); } else return true; } } Here's the stack trace: SEVERE: Error Rendering View[/participant/welcome.xhtml] javax.el.ELException: /templates/participant/welcome.xhtml @80,70 rendered="#{welcomeView.authenticated}": Error reading 'authenticated' on type com.MYCODE.general.controllers.participant.WelcomeView at com.sun.faces.facelets.el.TagValueExpression.getValue(TagValueExpression.java:107) at javax.faces.component.ComponentStateHelper.eval(ComponentStateHelper.java:190) at javax.faces.component.UIComponentBase.isRendered(UIComponentBase.java:416) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1607) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1616) at javax.faces.render.Renderer.encodeChildren(Renderer.java:168) at javax.faces.component.UIComponentBase.encodeChildren(UIComponentBase.java:848) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1613) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1616) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1616) at com.sun.faces.application.view.FaceletViewHandlingStrategy.renderView(FaceletViewHandlingStrategy.java:380) at com.sun.faces.application.view.MultiViewHandler.renderView(MultiViewHandler.java:126) at com.sun.faces.lifecycle.RenderResponsePhase.execute(RenderResponsePhase.java:127) at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101) at com.sun.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:139) at javax.faces.webapp.FacesServlet.service(FacesServlet.java:313) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.MYCODE.general.filters.StatsFilter.doFilter(StatsFilter.java:28) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:433) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:568) at 
org.apache.catalina.authenticator.SingleSignOn.invoke(SingleSignOn.java:421) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447) at java.lang.Thread.run(Thread.java:637) Caused by: java.lang.NullPointerException at com.MYCODE.general.controllers.participant.WelcomeView$$M$863c205f.getAuthenticated(WelcomeView.java:127) at com.MYCODE.general.controllers.participant.WelcomeView$$A$863c205f.getAuthenticated(<generated>) at com.MYCODE.general.controllers.participant.WelcomeView.getAuthenticated(WelcomeView.java:125) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at javax.el.BeanELResolver.getValue(BeanELResolver.java:62) at javax.el.CompositeELResolver.getValue(CompositeELResolver.java:53) at com.sun.faces.el.FacesCompositeELResolver.getValue(FacesCompositeELResolver.java:72) at org.apache.el.parser.AstValue.getValue(AstValue.java:118) at org.apache.el.ValueExpressionImpl.getValue(ValueExpressionImpl.java:186) at com.sun.faces.facelets.el.TagValueExpression.getValue(TagValueExpression.java:102) ... 33 more

    Read the article

  • Can I get the original page source (vs current DOM) with phantomjs/casperjs?

    - by supercoco
    I am trying to get the original source for a particular web page. The page executes some scripts that modify the DOM as soon as it loads. I would like to get the source before any script or user changes any object in the document. With Chrome or Firefox (and probably most browsers) I can either look at the DOM (debug utility F12) or look at the original source (right-click, view source). The latter is what I want to accomplish. Is it possible to do this with phantomjs/casperjs? Before getting to the page I have to log in. This is working fine with casperjs. If I browse to the page and render the results I know I am on the right page.

    casper.thenOpen('http://'+customUrl, function(response) {
        this.page.render('example.png');                             // *** Renders correct page (current DOM) ***
        console.log(this.page.content);                              // *** Gets current DOM ***
        casper.download('view-source:'+customUrl, 'b.html', 'GET');  // *** Blank page ***
        console.log(this.getHTML());                                 // *** Gets current DOM ***
        this.debugPage();                                            // *** Gets current DOM ***
        utils.dump(response);                                        // *** No BODY ***
        casper.download('http://'+customUrl, 'a.html', 'GET');       // *** Not logged in ?! ***
    });

    I've tried this.download(url, 'a.html'), but it doesn't seem to share the same context, since it returns HTML as if I were not logged in, even if I run with cookies: casperjs test.casper.js --cookies-file=cookies.txt. I believe I should keep analyzing this option. I have also tried casper.open('view-source:url') instead of casper.open('http://url'), but it seems it doesn't recognize the URL, since I just get a blank page. I have looked at the raw HTTP response I get from the server with a utility I have, and the body of this message (which is HTML) is what I need, but when the page loads in the browser the DOM has already been modified. I tried:

    casper.thenOpen('http://'+url, function(response) { ... }

    But the response object only contains the headers and some other information, not the body. Any ideas? Here is the full code:

    phantom.casperTest = true;
    phantom.cookiesEnabled = true;

    var utils = require('utils');
    var casper = require('casper').create({
        clientScripts: [],
        pageSettings: {
            loadImages: false,
            loadPlugins: false,
            javascriptEnabled: true,
            webSecurityEnabled: false
        },
        logLevel: "error",
        verbose: true
    });

    casper.userAgent('Mozilla/5.0 (Macintosh; Intel Mac OS X)');
    casper.start('http://www.xxxxxxx.xxx/login');

    casper.waitForSelector('input#login',
        function() {
            this.evaluate(function(customLogin, customPassword) {
                document.getElementById("login").value = customLogin;
                document.getElementById("password").value = customPassword;
                document.getElementById("button").click();
            }, {
                "customLogin": customLogin,
                "customPassword": customPassword
            });
        },
        function() { console.log('Can\'t login.'); },
        15000
    );

    casper.waitForSelector('div#home',
        function() { console.log('Login successful.'); },
        function() { console.log('Login failed.'); },
        15000
    );

    casper.thenOpen('http://'+customUrl, function(response) {
        this.page.render('example.png');                             // *** Renders correct page (current DOM) ***
        console.log(this.page.content);                              // *** Gets current DOM ***
        casper.download('view-source:'+customUrl, 'b.html', 'GET');  // *** Blank page ***
        console.log(this.getHTML());                                 // *** Gets current DOM ***
        this.debugPage();                                            // *** Gets current DOM ***
        utils.dump(response);                                        // *** No BODY ***
        casper.download('http://'+customUrl, 'a.html', 'GET');       // *** Not logged in ?! ***
    });
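    A possible workaround, offered here only as a sketch and an assumption rather than a verified answer: CasperJS's client-side utilities include __utils__.sendAJAX(url, method, data, async), a synchronous XHR helper that runs inside the page context and therefore shares the logged-in session's cookies. If the server returns the same markup for an XHR GET that it returns for a normal navigation (an assumption worth verifying), something along these lines could capture the source before any script runs on it:

    // Hedged sketch: fetch the raw, pre-JavaScript HTML over the logged-in session.
    // The XHR result is returned as a string and never parsed into a DOM,
    // so none of the page's scripts get a chance to modify it.
    casper.then(function() {
        var rawHtml = this.evaluate(function(url) {
            return __utils__.sendAJAX(url, 'GET', null, false); // synchronous GET
        }, 'http://' + customUrl);
        require('fs').write('original.html', rawHtml, 'w'); // save for inspection
    });

    Because the request is made by the page itself, this avoids the separate-context problem seen with casper.download() above.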

    Read the article

  • Iframe Javascript call to Flex

    - by Vince Lowe
    I have a flex application with an iframe layered on top. I want to make a call from the iframe to flex with javascript. So far I have tried this. This is the object containing the swf embed in the ROOT document:

    <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" id="IPRS_Dispatcher" width="1400" height="1000" codebase="http://fpdownload.macromedia.com/get/flashplayer/current/swflash.cab">
        <param name="movie" value="DispatcherMain.swf" />
        <param name="quality" value="high" />
        <!-- <param name="bgcolor" value="${bgcolor}" /> -->
        <param name="allowScriptAccess" value="sameDomain" />
        <param name='flashVars' value='strLang=english&strIPRSSrvHost=&strGPSSrvHost=192.168.1.130&strGPSSrvSoapPort=8081&strGPSSrvFwdPort=26000&strLoginMode=simple&strSOSSrvHost=192.168.1.80&strSOSSrvSoapPort=8082&strSOSSrvFwdPort=26001&strSOSLoginMode=simple&strUserSIP=&strUserPswd=&nDelayForMapReadySecs=10&nGPSUpdatesRateSecs=120&nGPSSubscriptionsIntervalMinutes=10&nLat=35.0&nLng=32.5&nZoomLevel=5&strClientServiceVersion=2.1.36.19&nPathDotsSize=1&nPathWidth=5&bHideAnnounce=false&bHideEmergencyPan=true&strMapMarkerLabelMode=name&key=ABQIAAAAYbXZyR09wFj6QsiYucHpGxQEO34WZEWuIFq1A7yobGXPE-K5exQV9ZYR6NIkF8LCR8wsYvlhOIYsfA' />
        <embed id="IPRS_Dispatcher2" src="DispatcherMain.swf" flashVars='strLang=english&strIPRSSrvHost=&strGPSSrvHost=192.168.1.130&strGPSSrvSoapPort=8081&strGPSSrvFwdPort=26000&strLoginMode=simple&strSOSSrvHost=192.168.1.80&strSOSSrvSoapPort=8082&strSOSSrvFwdPort=26001&strSOSLoginMode=simple&strUserSIP=&strUserPswd=&nDelayForMapReadySecs=10&nGPSUpdatesRateSecs=120&nGPSSubscriptionsIntervalMinutes=10&nLat=35.0&nLng=32.5&nZoomLevel=5&strClientServiceVersion=2.1.36.19&nPathDotsSize=1&nPathWidth=5&bHideAnnounce=false&bHideEmergencyPan=true&strMapMarkerLabelMode=name&key=ABQIAAAAYbXZyR09wFj6QsiYucHpGxQEO34WZEWuIFq1A7yobGXPE-K5exQV9ZYR6NIkF8LCR8wsYvlhOIYsfA' width="1400" height="1000" name="IPRS_Dispatcher" align="middle" play="true" loop="false" quality="high" allowScriptAccess="sameDomain" type="application/x-shockwave-flash" pluginspage="http://www.adobe.com/go/getflashplayer">
            <!-- bgcolor="${bgcolor}" -->
        </embed>
    </object>

    I have added an addCallback for the function I want to expose:

    ExternalInterface.addCallback("sendToFlash", callFromJavaScript);

    FYI:

    public function callFromJavaScript(str):void {
        LogAddItem( 30, str);
    }

    In my IFRAME I have added the function:

    function callToFlash(str) {
        var swf = parent.top.$("#IPRS_Dispatcher");
        var bool = swf.sendToFlash(str);
    }

    Now I am getting this error in Chrome: Uncaught TypeError: Object [object Object] has no method 'sendToFlash'

    UPDATE 25/06/2012 - output from console.log(swf):

    [ <embed src="DispatcherMain.swf" width="100%" height="100%" align="middle" id="IPRS_Dispatcher" quality="high" name="IPRS_Dispatcher" wmode="opaque" allowfullscreen="true" allowscriptaccess="always" pluginspage="http://www.adobe.com/go/getflashplayer" flashvars="strOEM=mt&strSplashImage=./assets/loadinglogo.jpg&strLang=english&strSelectableLangs=english,chinese,portuguese_brazil,german,french,spanish&strIPRSSrvHost=85.118.26.10&strGPSSrvHost=85.118.26.16&strGPSSrvSoapPort=8081&strGPSSrvFwdPort=26000&strLoginMode=simple&strUserSIP=&strUserPswd=&strSOSSrvHost=85.118.26.17&strSOSSrvSoapPort=8082&strSOSSrvFwdPort=26001&strClientServicePort=&strSOSLoginMode=simple&themeColor=a7c3e3&showRTTPriority=false&showGPSUpdateRate=true&nSamePosErrMeters=300&nDelayForMapReadySecs=10&nGPSUpdatesRateSecs=65535&nGPSSubscriptionsIntervalMinutes=10&nLat=48.311058&nLng=11.636753&nZoomLevel=13.0&strClientServiceVersion=2.1.36.04&bDispatcherEndsSessions=true&nSOSSubscriptionsIntervalMinutes=1&GPSKATime=20&SOSKATime=20&nPathDotsSize=2&nPathWidth=5&bHideAnnounce=false&bHideEmergencyPan=false&bHideDebugLog=false&showMutedColumn=false&strLogFilter=&strMapMarkerLabelMode=name&key=ABQIAAAAfJEcVYS6-jYp2UOUy8Wh5xSCeXAFBxztfWxjY5w1WzTnKjnSVRS7Uu5XoOIwTg2R_tq_c0QSCPxSHw" type="application/x-shockwave-flash"> ]
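    The Chrome error is consistent with calling the method on the jQuery wrapper instead of the plugin element: $("#IPRS_Dispatcher") returns a jQuery object, while the callback registered by ExternalInterface.addCallback lives on the underlying DOM element. A minimal sketch of one possible fix (the element id comes from the markup above; the rest is an assumption, not a confirmed answer):

    function callToFlash(str) {
        // Unwrap to the raw <object>/<embed> element; ExternalInterface
        // callbacks are not defined on the jQuery wrapper itself.
        var swf = parent.top.document.getElementById("IPRS_Dispatcher");
        if (swf && typeof swf.sendToFlash === "function") {
            return swf.sendToFlash(str);
        }
    }

    Equivalently, parent.top.$("#IPRS_Dispatcher")[0] would unwrap the same element.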

    Read the article

  • Running ASP.NET Webforms and ASP.NET MVC side by side

    - by rajbk
    One of the nice things about ASP.NET MVC and its older brother ASP.NET WebForms is that they are both built on top of the ASP.NET runtime environment. The advantage of this is that you can still run them side by side even though MVC and WebForms are different frameworks. Another point to note is that with the release of ASP.NET routing in .NET 3.5 SP1, we are able to create SEO friendly URLs that do not map to specific files on disk. The routing is part of the core runtime environment and therefore can be used by both WebForms and MVC. To run both frameworks side by side, we could easily create a separate folder in our MVC project for all our WebForm files and be good to go. What this post shows you instead is how to have an MVC application with WebForm pages that both use a common master page and common routing for SEO friendly URLs. A sample project that shows WebForms and MVC running side by side is attached at the bottom of this post. So why would we want to run WebForms and MVC in the same project? WebForms come with a lot of nice server controls that provide a lot of functionality. One example is the ReportViewer control. Using this control and client report definition files (RDLC), we can create rich interactive reports (with charting controls). I show you how to use the ReportViewer control in a WebForm project here: Creating an ASP.NET report using Visual Studio 2010. We can create even more advanced reports by using SQL Reporting Services that can also be rendered by the ReportViewer control. Now, consider the sample MVC application I blogged about called ASP.NET MVC Paging/Sorting/Filtering using the MVCContrib Grid and Pager. Assume you were given the requirement to add a UI to the MVC application where users could interact with a report and be given the option to export the report to Excel, PDF or Word. How do you go about doing it? This is a perfect scenario to use the ReportViewer control and RDLCs. As you saw in the post on creating the ASP.NET report, the ReportViewer control is a Web Control and is designed to be run in a WebForm project with dependencies on, amongst others, a ScriptManager control and the beloved ViewState. Since MVC and WebForms both run under the same runtime, the easiest thing to do is to add the WebForm application files (index.aspx, rdlc, related class files) into our MVC project. You can copy the files over from the WebForm project into the MVC project. Create a new folder in our MVC application called CommonReports. Add the index.aspx and rdlc file from the WebForm project. Right click on the Index.aspx file and convert it to a web application. This will add the index.aspx.designer.cs file (this step is not required if you are manually adding a WebForm aspx file into the MVC project). Verify that all the type names for the ObjectDataSources in the code behind point to the correct ProductRepository and fix any compiler errors. Right click on Index.aspx and select "View in browser". You should see a screen like the one below: There are two issues with our page. It does not use our site master page and the URL is not SEO friendly.

    Common Master Page

    The easiest way to use master pages with both MVC and WebForm pages is to have a common master page that each inherits from, as shown below. The reason for this is that most WebForm controls must be placed inside a Form control and require ControlState or ViewState.
    ViewMasterPages used in MVC, on the other hand, are designed to be used with content pages that derive from ViewPage with ViewState turned off. By having a separate master page for MVC and WebForm that inherit from the Root master page, we can set properties that are specific to each. For example, in the WebForm master, we can turn on ViewState, add a form tag, etc. Another point worth noting is that if you set a WebForm page to use an MVC site master page, you may run into errors like the following: A ViewMasterPage can be used only with content pages that derive from ViewPage or ViewPage<TViewItem> or Control 'MainContent_MyButton' of type 'Button' must be placed inside a form tag with runat=server. Since the ViewMasterPage inherits from MasterPage as seen below, we make our Root.master inherit from MasterPage, MVC.master inherit from ViewMasterPage and WebForm.master inherit from MasterPage. We define the attributes on the master pages like so:

    Root.master
    <%@ Master Inherits="System.Web.UI.MasterPage"  … %>

    MVC.master
    <%@ Master MasterPageFile="~/Views/Shared/Root.Master" Inherits="System.Web.Mvc.ViewMasterPage" … %>

    WebForm.master
    <%@ Master MasterPageFile="~/Views/Shared/Root.Master" Inherits="NorthwindSales.Views.Shared.Webform" %>

    Code behind:
    public partial class Webform : System.Web.UI.MasterPage {}

    We make changes to our reports aspx file to use the Webform.master. See the source of the master pages in the sample project for a better understanding of how they are connected.

    SEO friendly links

    We want to create SEO friendly links that point to our report. A request to /Reports/Products should render the report located in ~/CommonReports/Products.aspx. Similarly, to support future reports, a request to /Reports/Sales should render a report in ~/CommonReports/Sales.aspx. Let's start by renaming our index.aspx file to Products.aspx to be consistent with our routing criteria above. As mentioned earlier, since routing is part of the core runtime environment, we can easily create a custom route for our reports by adding an entry in Global.asax:

    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // Custom route for reports
        routes.MapPageRoute(
            "ReportRoute",                      // Route name
            "Reports/{reportname}",             // URL
            "~/CommonReports/{reportname}.aspx" // File
        );

        routes.MapRoute(
            "Default", // Route name
            "{controller}/{action}/{id}", // URL with parameters
            new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults
        );
    }

    With our custom route in place, a request to Reports/Employees will render the page at ~/CommonReports/Employees.aspx. We make this custom route the first entry since the routing system walks the table from top to bottom, and the first route to match wins. Note that it is highly recommended that you write unit tests for your routes to ensure that the mappings you defined are correct.

    Common Menu Structure

    The master page in our original MVC project had a menu structure like so:

    <ul id="menu">
        <li> <%=Html.ActionLink("Home", "Index", "Home") %></li>
        <li> <%=Html.ActionLink("Products", "Index", "Products") %></li>
        <li> <%=Html.ActionLink("Help", "Help", "Home") %></li>
    </ul>

    We want this menu structure to be common to all pages/views, and hence it should reside in Root.master. Unfortunately the Html.ActionLink helpers will not work, since Root.master inherits from MasterPage, which does not have the helper methods available. The quickest way to resolve this issue is to use RouteUrl expressions.
    Using RouteUrl expressions, we can programmatically generate URLs that are based on route definitions. By specifying parameter values and a route name if required, we get back a URL string that corresponds to a matching route. We move our menu structure to Root.master and change it to use RouteUrl expressions:

    <ul id="menu">
        <li> <asp:HyperLink ID="hypHome" runat="server" NavigateUrl="<%$RouteUrl:routename=default,controller=home,action=index%>">Home</asp:HyperLink></li>
        <li> <asp:HyperLink ID="hypProducts" runat="server" NavigateUrl="<%$RouteUrl:routename=default,controller=products,action=index%>">Products</asp:HyperLink></li>
        <li> <asp:HyperLink ID="hypReport" runat="server" NavigateUrl="<%$RouteUrl:routename=ReportRoute,reportname=products%>">Product Report</asp:HyperLink></li>
        <li> <asp:HyperLink ID="hypHelp" runat="server" NavigateUrl="<%$RouteUrl:routename=default,controller=home,action=help%>">Help</asp:HyperLink></li>
    </ul>

    We are done adding the common navigation to our application. The application now uses a common theme, routing and navigation structure.

    Conclusion

    We have seen how to do the following through this post:
    - Add a WebForm page from a WebForm project to an existing ASP.NET MVC application
    - Use a common master page for both WebForm and MVC pages
    - Use routing for SEO friendly links
    - Use a common menu structure for both WebForm and MVC.

    The sample project is attached below. Version: VS 2010 RTM. Remember to change your connection string to point to your Northwind database. NorthwindSalesMVCWebform.zip

    Read the article

  • jQuery Templates and Data Linking (and Microsoft contributing to jQuery)

    - by ScottGu
    The jQuery library has a passionate community of developers, and it is now the most widely used JavaScript library on the web today. Two years ago I announced that Microsoft would begin offering product support for jQuery, and that we'd be including it in new versions of Visual Studio going forward. By default, when you create new ASP.NET Web Forms and ASP.NET MVC projects with VS 2010 you'll find jQuery automatically added to your project. A few weeks ago during my second keynote at the MIX 2010 conference I announced that Microsoft would also begin contributing to the jQuery project. During the talk, John Resig – the creator of the jQuery library and leader of the jQuery developer team – talked a little about our participation and discussed an early prototype of a new client templating API for jQuery. In this blog post, I'm going to talk a little about how my team is starting to contribute to the jQuery project, and discuss some of the specific features that we are working on such as client-side templating and data linking (data-binding).

    Contributing to jQuery

    jQuery has a fantastic developer community, and a very open way to propose suggestions and make contributions. Microsoft is following the same process to contribute to jQuery as any other member of the community. As an example, when working with the jQuery community to improve support for templating in jQuery, my team followed these steps:
    - We created a proposal for templating and posted the proposal to the jQuery developer forum (http://forum.jquery.com/topic/jquery-templates-proposal and http://forum.jquery.com/topic/templating-syntax).
    - After receiving feedback on the forums, the jQuery team created a prototype for templating and posted the prototype at the Github code repository (http://github.com/jquery/jquery-tmpl).
    - We iterated on the prototype, creating a new fork on Github of the templating prototype, to suggest design improvements. Several other members of the community also provided design feedback by forking the templating code.

    There has been an amazing amount of participation by the jQuery community in response to the original templating proposal (over 100 posts in the jQuery forum), and the design of the templating proposal has evolved significantly based on community feedback. The jQuery team is the ultimate determiner on what happens with the templating proposal – they might include it in jQuery core, or make it an official plugin, or reject it entirely. My team is excited to be able to participate in the open source process, and make suggestions and contributions the same way as any other member of the community.

    jQuery Template Support

    Client-side templates enable jQuery developers to easily generate and render HTML UI on the client. Templates support a simple syntax that enables either developers or designers to declaratively specify the HTML they want to generate. Developers can then programmatically invoke the templates on the client, and pass JavaScript objects to them to make the content rendered completely data driven. These JavaScript objects can optionally be based on data retrieved from a server. Because the jQuery templating proposal is still evolving in response to community feedback, the final version might look very different than the version below.
    This blog post gives you a sense of how you can try out and use templating as it exists today (you can download the prototype by the jQuery core team at http://github.com/jquery/jquery-tmpl or the latest submission from my team at http://github.com/nje/jquery-tmpl).

    jQuery Client Templates

    You create client-side jQuery templates by embedding content within a <script type="text/html"> tag. For example, the HTML below contains a <div> template container, as well as a client-side jQuery "contactTemplate" template (within the <script type="text/html"> element) that can be used to dynamically display a list of contacts: The {{= name }} and {{= phone }} expressions are used within the contact template above to display the names and phone numbers of "contact" objects passed to the template. We can use the template to display either an array of JavaScript objects or a single object. The JavaScript code below demonstrates how you can render a JavaScript array of "contact" objects using the above template. The render() method renders the data into a string and appends the string to the "contactContainer" DIV element: When the page is loaded, the list of contacts is rendered by the template. All of this template rendering is happening on the client-side within the browser.

    Templating Commands and Conditional Display Logic

    The current templating proposal supports a small set of template commands - including if, else, and each statements. The number of template commands was deliberately kept small to encourage people to place more complicated logic outside of their templates. Even this small set of template commands is very useful though. Imagine, for example, that each contact can have zero or more phone numbers. The contacts could be represented by the JavaScript array below: The template below demonstrates how you can use the if and each template commands to conditionally display and loop the phone numbers for each contact: If a contact has one or more phone numbers then each of the phone numbers is displayed by iterating through the phone numbers with the each template command: The jQuery team designed the template commands so that they are extensible. If you have a need for a new template command then you can easily add new template commands to the default set of commands.

    Support for Client Data-Linking

    The ASP.NET team recently submitted another proposal and prototype to the jQuery forums (http://forum.jquery.com/topic/proposal-for-adding-data-linking-to-jquery). This proposal describes a new feature named data linking. Data Linking enables you to link a property of one object to a property of another object - so that when one property changes the other property changes. Data linking enables you to easily keep your UI and data objects synchronized within a page. If you are familiar with the concept of data-binding then you will be familiar with data linking (in the proposal, we call the feature data linking because jQuery already includes a bind() method that has nothing to do with data-binding). Imagine, for example, that you have a page with the following HTML <input> elements: The following JavaScript code links the two INPUT elements above to the properties of a JavaScript "contact" object that has a "name" and "phone" property: When you execute this code, the value of the first INPUT element (#name) is set to the value of the contact name property, and the value of the second INPUT element (#phone) is set to the value of the contact phone property.
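    The code in the original post appears only as screenshots, which are not reproduced here; the following is a rough reconstruction based purely on the prose above. The render(), linkFrom() and linkBoth() names come from the post, but the markup and exact signatures are guesses:

    <script type="text/html" id="contactTemplate">
        <li>{{= name }} ({{= phone }})</li>
    </script>
    <div id="contactContainer"></div>

    // Render an array of "contact" objects through the template
    // and append the resulting markup to the container.
    var contacts = [
        { name: "Scott", phone: "555-9999" },
        { name: "John", phone: "555-1212" }
    ];
    $("#contactTemplate").render(contacts).appendTo("#contactContainer");

    // Link the INPUT elements and a contact object's properties
    // (argument order and direction are assumptions, not the confirmed API).
    $("#name").linkFrom(contact, "name");
    $("#phone").linkFrom(contact, "phone");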
    The properties of the contact object and the properties of the INPUT elements are also linked – so that changes to one are also reflected in the other. Because the contact object is linked to the INPUT element, when you request the page, the values of the contact properties are displayed: More interestingly, the values of the linked INPUT elements will change automatically whenever you update the properties of the contact object they are linked to. For example, we could programmatically modify the properties of the "contact" object using the jQuery attr() method like below: Because our two INPUT elements are linked to the "contact" object, the INPUT element values will be updated automatically (without us having to write any code to modify the UI elements): Note that we updated the contact object above using the jQuery attr() method. In order for data linking to work, you must use jQuery methods to modify the property values.

    Two Way Linking

    The linkBoth() method enables two-way data linking. The contact object and INPUT elements are linked in both directions. When you modify the value of the INPUT element, the contact object is also updated automatically. For example, the following code adds a client-side JavaScript click handler to an HTML button element. When you click the button, the property values of the contact object are displayed using an alert() dialog: The following demonstrates what happens when you change the value of the Name INPUT element and click the Save button. Notice that the name property of the "contact" object that the INPUT element was linked to was updated automatically: The above example is obviously trivially simple. Instead of displaying the new values of the contact object with a JavaScript alert, you can imagine instead calling a web-service to save the object to a database. The benefit of data linking is that it enables you to focus on your data and frees you from the mechanics of keeping your UI and data in sync.

    Converters

    The current data linking proposal also supports a feature called converters. A converter enables you to easily convert the value of a property during data linking. For example, imagine that you want to represent phone numbers in a standard way with the "contact" object phone property. In particular, you don't want to include special characters such as ()- in the phone number - instead you only want digits and nothing else. In that case, you can wire-up a converter to convert the value of an INPUT element into this format using the code below: Notice above how a converter function is being passed to the linkFrom() method used to link the phone property of the "contact" object with the value of the phone INPUT element. This converter function strips any non-numeric characters from the INPUT element before updating the phone property. Now, if you enter the phone number (206) 555-9999 into the phone input field then the value 2065559999 is assigned to the phone property of the contact object: You can also use a converter in the opposite direction. For example, you can apply a standard phone format string when displaying a phone number from a phone property.

    Combining Templating and Data Linking

    Our goal in submitting these two proposals for templating and data linking is to make it easier to work with data when building websites and applications with jQuery. Templating makes it easier to display a list of database records retrieved from a database through an Ajax call.
    Data linking makes it easier to keep the data and user interface in sync for update scenarios. Currently, we are working on an extension of the data linking proposal to support declarative data linking. We want to make it easy to take advantage of data linking when using a template to display data. For example, imagine that you are using the following template to display an array of product objects: Notice the {{link name}} and {{link price}} expressions. These expressions enable declarative data linking between the SPAN elements and properties of the product objects. The current jQuery templating prototype supports extending its syntax with custom template commands. In this case, we are extending the default templating syntax with a custom template command named "link". The benefit of using data linking with the above template is that the SPAN elements will be automatically updated whenever the underlying "product" data is updated. Declarative data linking also makes it easier to create edit and insert forms. For example, you could create a form for editing a product by using declarative data linking like this: Whenever you change the value of the INPUT elements in a template that uses declarative data linking, the underlying JavaScript data object is automatically updated. Instead of needing to write code to scrape the HTML form to get updated values, you can instead work with the underlying data directly – making your client-side code much cleaner and simpler.

    Downloading Working Code Examples of the Above Scenarios

    You can download this .zip file to get working code examples of the above scenarios. The .zip file includes four static HTML pages:
    - Listing1_Templating.htm – Illustrates basic templating.
    - Listing2_TemplatingConditionals.htm – Illustrates templating with the use of the if and each template commands.
    - Listing3_DataLinking.htm – Illustrates data linking.
    - Listing4_Converters.htm – Illustrates using a converter with data linking.

    You can un-zip the file to the file-system and then run each page to see the concepts in action.

    Summary

    We are excited to be able to begin participating within the open-source jQuery project. We've received lots of encouraging feedback in response to our first two proposals, and we will continue to actively contribute going forward. These features will hopefully make it easier for all developers (including ASP.NET developers) to build great Ajax applications. Hope this helps, Scott P.S. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    Read the article

  • How about a new platform for your next API… a CMS?

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2014/05/22/how-about-a-new-platform-for-your-next-apihellip-a.aspx

    Say what? I'm seeing a type of API emerge which serves static or long-lived resources, which are mostly read-only and have a controlled process to update the data that gets served. Think of something like an app configuration API, where you want a central location for changeable settings. You could use this server side to store database connection strings and keep all your instances in sync, or it could be used client side to push changes out to all users (and potentially driving A/B or MVT testing). That's a good candidate for a RESTful API which makes proper use of HTTP expiration and validation caching to minimise traffic, but really you want a front end UI where you can edit the current config that the API returns and publish your changes. Sounds like a Content Management System would be a good fit? I've been looking at that and it's a great fit for this scenario. You get a lot of what you need out of the box, the amount of custom code you need to write is minimal, and you get a whole lot of extra stuff from using CMS which is very useful, but probably not something you'd build if you had to put together a quick UI over your API content (like a publish workflow, fine-grained security and an audit trail). You typically use a CMS for HTML resources, but it's simple to expose JSON instead – or to do content negotiation to support both, so you can open a resource in a browser and see a nice visual representation, or request it with Accept=application/json and get the same content rendered as JSON for the app to use.

    Enter Umbraco

    Umbraco is an open source .NET CMS that's been around for a while. It has very good adoption, a lively community and a good release cycle. It's easy to use, has all the functionality you need for a CMS-driven API, and it's scalable (although you won't necessarily put much scale on the CMS layer). In the rest of this post, I'll build out a simple app config API using Umbraco. We'll define the structure of the configuration resource by creating a new Document Type and setting custom properties; then we'll build a very simple Razor template to return configuration documents as JSON; then create a resource and see how it looks. And we'll look at how you could build this into a wider solution. If you want to try this for yourself, it's ultra easy – there's an Umbraco image in the Azure Website gallery, so all you need to do is create a new Website, select Umbraco from the image and complete the installation. It will create a SQL Azure database to store all the content, as well as a Website instance for editing and accessing content. They're standard Azure resources, so you can scale them as you need. The default install creates a starter site for some HTML content, which you can use to learn your way around (or just delete).

    1. Create Configuration Document Type

    In Umbraco you manage content by creating and modifying documents, and every document has a known type, defining what properties it holds. We'll create a new Document Type to describe some basic config settings. In the Settings section from the left navigation (spanner icon), expand Document Types and Master, hit the ellipsis and select to create a new Document Type: This will base your new type off the Master type, which gives you some existing properties that we'll use – like the Page Title, which will be the resource URL.
    In the Generic Properties tab for the new Document Type, you set the properties you'll be able to edit and return for the resource: Here I've added a text string where I'll set a default cache lifespan, an image which I can use for a banner display, and a date which could show the user when the next release is due. This is the sort of thing that sits nicely in an app config API. It's likely to change during the life of the product, but not very often, so it's good to have a centralised place where you can make and publish changes easily and safely. It also enables A/B and MVT testing, as you can change the response each client gets based on your set logic, and their apps will behave differently without needing a release.

    2. Define the response template

    Now that we've defined the structure of the resource (as a document), in Umbraco we can define a C# Razor template to say how that resource gets rendered to the client. If you only want to provide JSON, it's easy to render the content of the document by building each property in the response (Umbraco uses dynamic objects so you can specify document properties as object properties), or you can support content negotiation with very little effort. Here's a template to render the document as HTML or JSON depending on the Accept header, using JSON.NET for the API rendering:

    @inherits Umbraco.Web.Mvc.UmbracoTemplatePage
    @using Newtonsoft.Json
    @{
        Layout = null;
    }
    @if (UmbracoContext.HttpContext.Request.Headers["accept"] != null
         && UmbracoContext.HttpContext.Request.Headers["accept"] == "application/json")
    {
        Response.ContentType = "application/json";
        @Html.Raw(JsonConvert.SerializeObject(new {
            cacheLifespan = CurrentPage.cacheLifespan,
            bannerImageUrl = CurrentPage.bannerImage,
            nextReleaseDate = CurrentPage.nextReleaseDate
        }))
    }
    else
    {
        <h1>App configuration</h1>
        <p>Cache lifespan: <b>@CurrentPage.cacheLifespan</b></p>
        <p>Banner Image: </p>
        <img src="@CurrentPage.bannerImage">
        <p>Next Release Date: <b>@CurrentPage.nextReleaseDate</b></p>
    }

    That's a rough-and-ready example of what you can do. You could make it completely generic and just render all the document's properties as JSON, but having a specific template for each resource gives you control over what gets sent out. And the templates are evaluated at run-time, so if you need to change the output – or extend it, say to add caching response headers – you just edit the template and save, and the next client request gets rendered from the new template. No code to build and ship.

    3. Create the content

    With your document type created, in the Content pane you can create a new instance of that document, where Umbraco gives you a nice UI to input values for the properties we set up on the Document Type: Here I've set the cache lifespan to an xs:duration value, uploaded an image for the banner and specified a release date. Each property gets the appropriate input control – text box, file upload and date picker. At the top of the page is the name of the resource – myapp in this example. That specifies the URL for the resource, so if I had a DNS entry pointing to my Umbraco instance, I could access the config with a URL like http://static.x.y.z.com/config/myapp. The setup is all done now, so when we publish this resource it'll be available to access.

    4. Access the resource

    Now if you open that URL in the browser, you'll see the HTML version rendered - complete with the image and formatted date.
    Umbraco lets you save changes and preview them before publishing, so the HTML view could be a good way of showing editors their changes in a usable view, before they confirm them. If you browse the same URL from a REST client, specifying the Accept=application/json request header, you get this response: That's the exact same resource, with a managed UI to publish it, being accessed as HTML or JSON with a tiny amount of effort.

    5. The wider landscape

    If you have fairly stable content to expose as an API, I think this approach is really worth considering. Umbraco scales very nicely, but in a typical solution you probably wouldn't need it to. When you have additional requirements, like logging API access requests - but doing it out-of-band so clients aren't impacted - you can put a very thin API layer on top of Umbraco, and cache the CMS responses in your API layer: Here the API does a passthrough to the CMS, so the CMS still controls the content, but the API caches the response. If the response is cached for 1 minute, then Umbraco only needs to handle 1 request per minute (multiplied by the number of API instances), so if you need to support 1000s of requests per second, you're scaling a thin, simple API layer rather than having to scale the more complex CMS infrastructure (including the database). This diagram also shows an approach to logging, by asynchronously publishing a message to a queue (Redis in this case), which can be picked up later and persisted by a different process. Does it work? Beautifully. Using Azure, I spiked the solution above (including the Redis logging framework which I'll blog about later) in half a day. That included setting up different roles in Umbraco to demonstrate a managed workflow for publishing changes, and a couple of document types representing different resources. Is it maintainable? We have three moving parts, which are all managed resources in Azure – an Azure Website for Umbraco which may need a couple of instances for HA (or may not, depending on how long the content can be cached), a message queue (Redis is in preview in Azure, but you can easily use Service Bus Queues if performance is less of a concern), and the Web Role for the API. Two of the components are off-the-shelf, from open source projects, and the only custom code is the API, which is very simple. Does it scale? Pretty nicely. With a single Umbraco instance running as an Azure Website, and with 4x instances for my API layer (Standard sized Web Roles), I got just under 4,000 requests per second served reliably, with a Worker Role in the background saving the access logs. So we had a nice UI to publish app config changes, with a friendly Web preview and a publishing workflow, capable of supporting 14 million requests in an hour, with less than a day's effort. Worth considering if you're publishing long-lived resources through your API.

    Read the article

  • Routing to a Controller with no View in Angular

    - by Rick Strahl
    I've finally had some time to put Angular to use this week in a small project I'm working on for fun. Angular's routing is great and makes it real easy to map URL routes to controllers and model data into views. But what if you don't actually need a view, if you effectively need a headless controller that just runs code, but doesn't render a view?

    Preserve the View

    When Angular navigates a route and presents a new view, it loads the controller and then renders the view from scratch. Views are not cached or stored, but displayed and then removed. So if you have routes configured like this:

    'use strict';

    // Declare app level module which depends on filters, and services
    window.myApp = angular.module('myApp',
        ['myApp.filters', 'myApp.services', 'myApp.directives', 'myApp.controllers']).
        config(['$routeProvider', function($routeProvider) {
            $routeProvider.when('/map', {
                templateUrl: "partials/map.html",
                controller: 'mapController',
                reloadOnSearch: false,
                animation: 'slide'
            });
            …
            $routeProvider.otherwise({ redirectTo: '/map' });
        }]);

    Angular routes to the mapController and then re-renders the map.html template with the new data from the $scope filled in.

    But, but… I don't want a new View!

    Now in most cases this works just fine. If I'm rendering plain DOM content, or textboxes in a form interface, that is all fine and dandy - it's perfectly fine to completely re-render the UI. But in some cases, the UI that's being managed has state and shouldn't be redrawn. In this case the main page in question has a Google Map on it. The map is going to be manipulated throughout the lifetime of the application and the rest of the pages. In my application I have a toolbar on the bottom and the rest of the content is replaced/switched out by the Angular Views: The problem is that the map shouldn't be redrawn each time the Location view is activated. It should maintain its state, such as the current position selected (which can move), and shouldn't redraw due to the overhead of re-rendering the initial map. Originally I set up the map exactly like all my other views - as a partial that is rendered with a separate file - but that didn't work.

    The Workaround - Controller Only Routes

    The workaround for this goes decidedly against Angular's way of doing things:
    - Setting up a template-less route
    - In-lining the map view directly into the main page
    - Hiding and showing the map view manually

    Let's see how this works.

    Controller Only Route

    The template-less route is basically a route that doesn't have any template to render. This is not directly supported by Angular, but thankfully easy to fake. The end goal here is that I want to simply have the Controller fire and then have the controller manage the display of the already active view by hiding and showing the map and any other view content, in effect bypassing Angular's view display management. In short - I want a controller action, but no view rendering. The controller-only or template-less route looks like this:

    $routeProvider.when('/map', {
        template: " ",  // just fire controller
        controller: 'mapController',
        animation: 'slide'
    });

    Notice I'm using the template property rather than templateUrl (used in the first example above), which allows specifying a string template, and leaving it blank. The template property basically allows you to provide a templated string using Angular's Handlebars-like binding syntax, which can be useful at times. You can use plain strings or strings with template code in the template, or, as I'm doing here, a blank string to essentially fake 'just clear the view'.
    In-lined View

    So if there's no view, where does the HTML go? Because I don't want Angular to manage the view, the map markup is in-lined directly into the page. So instead of rendering the map into the Angular view container, the content is simply set up as inline HTML to display as a sibling to the view container.

    <div id="MapContent" data-icon="LocationIcon" ng-controller="mapController" style="display:none">
        <div class="headerbar">
            <div class="right-header" style="float:right">
                <a id="btnShowSaveLocationDialog" class="iconbutton btn btn-sm" href="#/saveLocation" style="margin-right: 2px;">
                    <i class="icon-ok icon-2x" style="color: lightgreen; "></i>
                    Save Location
                </a>
            </div>
            <div class="left-header">GeoCrumbs</div>
        </div>
        <div class="clearfix"></div>
        <div id="Message">
            <i id="MessageIcon"></i>
            <span id="MessageText"></span>
        </div>
        <div id="Map" class="content-area">
        </div>
    </div>
    <div id="ViewPlaceholder" ng-view></div>

    Note that there's the #MapContent element and the #ViewPlaceholder. The #MapContent is my static map view that is always 'live' and is initially hidden. It doesn't get made visible until the mapController activates it, which does the initial rendering of the map. After that the element is persisted with the map data already loaded, and any future access only updates the map with new locations/pins etc. Note that the default route is assigned to the mapController, which means that the mapController is fired right as the page loads, which is actually a good thing in this case, as the map is the cornerstone of this app that is manipulated by some of the other controllers/views.

    The Controller handles some UI

    Since there's effectively no view activation with the template-less route, the controller unfortunately has to take over some UI interaction directly. Specifically it has to swap the hidden state between the map and any of the other views. Here's what the controller looks like:

    myApp.controller('mapController', ["$scope", "$routeParams", "locationData",
        function($scope, $routeParams, locationData) {
            $scope.locationData = locationData.location;
            $scope.locationHistory = locationData.locationHistory;
            if ($routeParams.mode == "currentLocation") {
                bc.getCurrentLocation(false);
            }
            bc.showMap(false, "#LocationIcon");
        }]);

    bc.showMap is responsible for a couple of display tasks that hide/show the views/map and for activating/deactivating icons. The code looks like this:

    this.showMap = function (hide, selActiveIcon) {
        if (!hide)
            $("#MapContent").show();
        else {
            $("#MapContent").hide();
        }
        self.fitContent();
        if (selActiveIcon) {
            $(".iconbutton").removeClass("active");
            $(selActiveIcon).addClass("active");
        }
    };

    Each of the other controllers in the app also calls this function when they are activated, to basically hide the map and make the View Content area visible. The map controller makes the map. This is UI code, and calling this sort of thing from controllers is generally not recommended, but I couldn't figure out a way using directives to make this work any more easily than this. It'd be easy to hide and show the map and view container using a flag and ng-show, but it gets tricky because of scoping of the $scope. I would have to resort to storing this setting on the $rootScope, which I try to avoid. The same issue exists with the icons. It sure would be nice if Angular had a way to explicitly specify that a View shouldn't be destroyed when another view is activated, so currently this workaround is required.
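    For contrast, here is a rough sketch of the ng-show variant ruled out above. Everything in it is an assumption, and the $rootScope flag is exactly the dependency the post prefers to avoid:

    <!-- sketch: flag-driven visibility instead of manual show()/hide() calls -->
    <div id="MapContent" ng-show="mapVisible"> ... </div>
    <div id="ViewPlaceholder" ng-view ng-hide="mapVisible"></div>

    // A flag on $rootScope is visible to every controller's scope via inheritance.
    myApp.run(['$rootScope', function($rootScope) {
        $rootScope.mapVisible = false;
    }]);

    myApp.controller('mapController', ['$rootScope', '$scope',
        function($rootScope, $scope) {
            $rootScope.mapVisible = true; // map route: show the in-lined map
        }]);

    // Every other controller would set $rootScope.mapVisible = false instead.

    This keeps the visibility logic out of imperative jQuery calls, at the price of the global state the post objects to.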
    Searching around, I saw a number of whacky hacks to get around this, but the solution I'm using here seems much easier than anything I could dig up, even if it doesn't quite fit the 'Angular way'.

    Angular nice, until it's not

    Overall I really like Angular and the way it works, although it took me a bit of time to get my head around how all the pieces fit together. Once I got the idea of how the app/routes, the controllers and views snap together, putting together Angular pages becomes fairly straightforward. You can get quite a bit done never going beyond those basics. For most common things Angular's default routing and view presentation works very well. But, when you do something a bit more complex, where there are multiple dependencies or, as in this case, where Angular doesn't appear to support a feature that's absolutely necessary, you're on your own. Finding information on more advanced topics is not trivial, especially since versions are changing so rapidly and the low level behaviors are changing frequently, so finding something that works is often an exercise in trial and error. Not that this is surprising. Angular is a complex piece of kit, as are all the frameworks that try to hack JavaScript into submission to do something that it was really never designed to do. After all, everything about a framework like Angular is an elaborate hack. A lot of shit has to happen to make this all work together, and at that, Angular (and Ember, Durandal etc.) are pretty amazing pieces of JavaScript code. So no harm, no foul, but I just can't help feeling like I'm working in a toy sandbox at times :-)

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in Angular, JavaScript

    Read the article

  • My View on ASP.NET Web Forms versus MVC

    - by Ricardo Peres
    Introduction

    A lot has been said on Web Forms and MVC, but since I was recently asked about my opinion on the subject, here it is. First, I have to say that I really like both technologies and I don't think any is going away – just remember SharePoint, which is built on top of Web Forms. I see them as complementary, targeting different needs and leveraging different skills. Let's go through some of their differences.

    Rapid Application Development

    Rapid Application Development (RAD) is the development process by which you have an Integrated Development Environment (IDE), a visual design surface and a toolbox, and you drag components from the toolbox to the design surface and set their properties through a property inspector. It was introduced with some of the earliest Windows graphical IDEs such as Visual Basic and Delphi. With Web Forms you have RAD out of the box. Visual Studio offers a generally good (and extensible) designer for the layout of pages and web user controls. Designing a page may simply be about dragging controls from the toolbox, setting their properties and wiring up some events to event handlers, which are implemented in code behind .NET classes. Most people will be familiar with this kind of development and enjoy it. You can see what you are doing from the beginning. MVC also has designable pages – called views in MVC terminology – the problem is that they can be built using different technologies, some of which, at the moment (MVC 4), do not support RAD – Razor, for example. I believe it is just a matter of time for that to be implemented in Visual Studio, but it will mostly consist of HTML editing, and until that day comes, you have to live with source editing.

    Development Model

    Web Forms features the same development model that you are used to from Windows Forms and other similar technologies: events fired by controls and automatic persistence of their properties between postbacks. For that, it uses concepts such as view state, which some may love and others may hate, because it may be misused quite easily, but otherwise does its job well. Another fundamental concept is data binding, by which a collection of data can be fed to a control and have it render that data somehow – just think of the GridView control. The focus is on the page, that's where it all starts, and you can place everything in the same code behind class: data access, business logic, layout, etc. The controls take care of generating a great part of the HTML and JavaScript for you. With MVC there is no free lunch when it comes to data persistence between requests; you have to implement it yourself. As for event handling, that is at the core of MVC, in the form of controllers and action methods, you just don't think of them as event handlers. In MVC you need to think more in HTTP terms, so HTTP verbs such as POST and GET are relevant to you, and you may write actions to handle one or the other. Also of crucial importance is model binding: the way by which MVC converts your posted data into a .NET class. This is something that ASP.NET 4.5 Web Forms has introduced as well, but it is a cornerstone in MVC. MVC also has built-in validation of these .NET classes, which out of the box uses the Data Annotations API. You have full control of the generated HTML - except for that coming from the helper methods, usually small fragments - which requires a greater familiarity with the specifications.
    You normally rely much more on JavaScript APIs – they are even included in the Visual Studio template – because much less is done for you.

    Reuse

    It is difficult to accept a professional company/project that does not employ reuse. It can save a lot of time, thus cutting costs significantly. Code reused in several projects matures as time goes by and helps developers learn from past experiences. ASP.NET Web Forms was built with reuse in mind, in the form of controls. Controls encapsulate functionality and are generally portable from project to project (with the notable exception of web user controls, those with an associated .ASCX markup file). ASP.NET has dozens of controls and it is very easy to develop new ones, so I believe this is a great advantage. A control can inject JavaScript code and external references as well as generate HTML and CSS. MVC on the other hand does not use controls – it is possible to use them, with some view engines like ASPX, but it is just not advisable because it breaks the flow – where do Init, Load, PreRender, etc, fit? The most similar to controls is extension methods, or helpers. They serve the same purpose – generating HTML, CSS or JavaScript – and can be reused between different projects. What differentiates them from controls is that there is no inheritance and no context – an extension method is just a static method which doesn't know where it is being called from. You also have partial views, which you can reuse in the same project, but there is no inheritance as well. This, in my view, is a weakness of MVC.

    Architecture

    Both technologies are highly extensible. I have started writing a series of posts on ASP.NET Web Forms extensibility and will probably write another series on MVC extensibility as well. A number of scenarios are covered in any of these models, and some extensibility points apply to both, because, of course, both stand upon ASP.NET. With Web Forms, if you're like me, you start by defining your master pages, pages and controls, with some helper classes to glue everything. You may as well throw in some JavaScript, but probably your main work will be with plain old .NET code. The controls you define have the chance to inject JavaScript code and references, through either the ScriptManager or the page's ClientScript object, as well as generating HTML and CSS code. The master page and page model with code behind classes offer a number of "hooks" by which you can change the normal way of things; for example, in a page you can access any control on the master page, add script or stylesheet references to its head and even change the page's title. Also, with Web Forms, you typically have URLs in the form "/SomePath/SomePage.aspx?SomeParameter=SomeValue", which isn't really SEO friendly, not to mention the HTML that some controls produce, far from standards, optimization and best practices. In MVC, you also normally start by defining the master page (or layout) and views, which are the visible parts, and then define controllers in separate files. These controllers do not know anything about the views, except the names and types of the parameters that will be passed to and from them. The controller will be responsible for the data access and business logic, eventually relying on additional classes for this purpose. On a controller you only receive parameters and return a result, which may be a request for the rendering of a view, a redirection to another URL or a JSON object, to name just a few.
Architecture

Both technologies are highly extensible. I have started writing a series of posts on ASP.NET Web Forms extensibility and will probably write another series on MVC extensibility as well. A number of scenarios are covered in either of these models, and some extensibility points apply to both because, of course, both stand upon ASP.NET. With Web Forms, if you’re like me, you start by defining your master pages, pages and controls, with some helper classes to glue everything together. You may as well throw in some JavaScript, but probably your main work will be with plain old .NET code. The controls you define have the chance to inject JavaScript code and references, through either the ScriptManager or the page’s ClientScript object, as well as generating HTML and CSS code. The master page and page model with code-behind classes offer a number of “hooks” by which you can change the normal way of things; for example, in a page you can access any control on the master page, add script or stylesheet references to its head and even change the page’s title. Also, with Web Forms, you typically have URLs in the form “/SomePath/SomePage.aspx?SomeParameter=SomeValue”, which isn’t really SEO-friendly, not to mention the HTML that some controls produce, far from standards, optimization and best practices. In MVC, you also normally start by defining the master page (or layout) and views, which are the visible parts, and then define controllers in separate files. These controllers do not know anything about the views, except the names and types of the parameters that will be passed to and from them. The controller will be responsible for the data access and business logic, eventually relying on additional classes for this purpose. On a controller you only receive parameters and return a result, which may be a request for the rendering of a view, a redirection to another URL or a JSON object, to name just a few. The controller class does not know anything about the web, so you can effectively reuse it in a non-web project. This separation, together with the lack of programmatic access to the UI elements, makes it very difficult to implement, for example, something like SharePoint with MVC. OK, I know about Orchard, but it isn’t really a general-purpose development framework, but instead a CMS that happens to use MVC. Not having controls render HTML for you gives you in turn much more control over it – it is your responsibility to create it, which you can either consider a blessing or a curse; in the latter case, you probably shouldn’t be using MVC at all. Also, MVC URLs tend to be much more SEO-oriented, if you design your controllers and actions properly.

Testing

In a well-defined architecture, you should separate business logic, data access logic and presentation logic, because these are all different things and there may even be a need to switch one implementation for another: for example, you might design a system which includes a data access layer, a business logic layer and two presentation layers, one on top of ASP.NET and the other with WPF; and the data access layer might be implemented first using NHibernate and later on switched for Entity Framework Code First. These changes are not that rare, so care should be taken in designing the system to make them possible. Web Forms is difficult to test, because it relies on event handlers which are only fired in web contexts, when a form is submitted or a page is requested. You can call them with reflection, but you have to set up a number of mocking objects first, HttpContext.Current being the first that comes to mind. MVC, on the other hand, makes testing controllers a breeze, so much so that it even includes a template option for generating boilerplate unit test classes right from the start. A controller that is well designed from the unit testing point of view will receive everything it needs to work as parameters to its action methods, so you can pass whatever values you need very easily. That doesn’t mean, of course, that everything can be tested: views, for instance, are difficult to test without actually accessing the site, but MVC offers the possibility to compile views at build time, so that, at least, you know you don’t have syntax errors beforehand.
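For what such a controller test can look like, here is a minimal NUnit sketch; the CatalogController, repository interface and fake are hypothetical stand-ins kept tiny so the example is self-contained, while Controller and ViewResult are the real System.Web.Mvc types.

using NUnit.Framework;
using System.Web.Mvc;

// Hypothetical dependency, passed in as a constructor parameter so that
// no HttpContext mocking is needed at all.
public interface IProductRepository
{
    string FindName(int id);
}

public class FakeProductRepository : IProductRepository
{
    public string FindName(int id) { return "Sample product"; }
}

public class CatalogController : Controller
{
    private readonly IProductRepository repository;

    public CatalogController(IProductRepository repository)
    {
        this.repository = repository;
    }

    public ActionResult Details(int id)
    {
        // The cast picks the View(viewName, model) overload.
        return View("Details", (object)repository.FindName(id));
    }
}

[TestFixture]
public class CatalogControllerTests
{
    [Test]
    public void Details_passes_the_product_name_to_the_view()
    {
        var controller = new CatalogController(new FakeProductRepository());

        var result = controller.Details(42) as ViewResult;  // no web server involved

        Assert.IsNotNull(result);
        Assert.AreEqual("Sample product", result.Model);
    }
}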
Myths

Some popular but unfounded myths around MVC include:
- You cannot use controls in MVC: not true; actually, you can, at least with the Web Forms (ASPX) view engine, and the declaration and usage is exactly the same as with Web Forms;
- You cannot specify a base class for a view: with the ASPX view engine you can use the Inherits attribute of the @ Page directive, and with this and all the other view engines you can use the pageBaseType and userControlBaseType attributes of the <pages> element;
- MVC shields you from doing “bad things” on your views: well, you can place any code in a code block, at least with the ASPX view engine (you may be starting to see a pattern here), even data access code;
- The model is the entity model, tied to an O/RM: the model is actually any class that you use to pass values to a view, including (but generally not recommended) an entity model;
- Unit tests come with no cost: unit tests generally don’t cover the UI, although there are frameworks just for that (see WatiN, for example); also, for some tests, you will have to mock or replace either the HttpContext.Current property or the HttpContextBase class yourself;
- Everything is testable: views aren’t, without accessing the site;
- MVC relies on HTML5/some_cool_new_javascript_framework: there is no relation whatsoever; MVC renders whatever you want it to render and does not require any framework to be present. The thing is, the subsequent releases of MVC happened at a time when Microsoft had become much more involved in standards, so the files and technologies included in the Visual Studio templates reflect this, and it just happens to work well with jQuery, for example.

Conclusion

Well, this is how I see it. Some folks may think that I am being too harsh on MVC, and that I probably don’t like it, but that’s not true: like I said, I do like MVC and I am starting my new projects with it. I just don’t want to go along with those who say that MVC is much superior to Web Forms when, in fact, some things you can do much more easily with Web Forms than with MVC. I will be more than happy to hear what you think on this!

    Read the article

  • How to have other divs with a flash liquid layout that fits to the page?

    - by brybam
Basically the majority of my content is flash based. I designed it using Flash Builder (Flex) and it’s in a liquid layout (everything is in percents); if I’m JUST embedding the flash content it scales to the page fine, and I have the flash content set to have a padding of 50px. I put a header div in fine with no problems, but I have 2 problems. The first is that the footer div seems to cover up the bottom of the flash content in IE, but it looks just fine in Chrome. How can I solve this? I’m using the stock embed code that Flex provides. I tried to edit the CSS style for the div, which I think is #flashContent, and give it a min-width and min-height, but it didn’t seem to work; actually, anything I did to #flashContent didn’t seem to do anything, so maybe it’s not the div I need to be adding that attribute to... And my other problem is that I don’t even know where to start when it comes to placing a div that’s 280px wide by 600px high as a column on the right side of the flash content. If I could specify a size for the flash content, then float it left, float the column right, and clear it with the container div, I’d be just fine... But remember the flash content is set to 100% scale (well, technically 100%x80% because it looked better that way). Does anyone know how I can start to deal with creating more complex scalable flash layouts that include other divs, ALL WHILE MAINTAINING IE SUPPORT? IE is ruining my life. Here’s the code I’m using (or, if it will help you visualize what I’m trying to do, here’s the page where I’m working on setting this up: http://apumpkinpatch.com/textmashnew/):
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en"> <head> <title>TextMixup</title> <meta name="google" value="notranslate"> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <link href="css.css" rel="stylesheet" type="text/css" /> <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.5.2/jquery.min.js"></script> <script src="../appassets/scripts/jquery.titlealert.js"></script> <script type="text/javascript"> var _gaq = _gaq || []; _gaq.push(['_setAccount', 'UA-19768131-2']); _gaq.push(['_trackPageview']); (function() { var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; ga.src = ('https:' == document.location.protocol ? 
'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); })(); function tabNotification() { $.titleAlert('New Message!', {interval:200,requireBlur:true,stopOnFocus:true}); } function joinNotification() { $.titleAlert('Joined Chat!', {interval:200,requireBlur:true,stopOnFocus:true}); } </script> <!-- BEGIN Browser History required section --> <link rel="stylesheet" type="text/css" href="history/history.css" /> <script type="text/javascript" src="history/history.js"></script> <!-- END Browser History required section --> <script type="text/javascript" src="swfobject.js"></script> <script type="text/javascript"> var swfVersionStr = "10.2.0"; var xiSwfUrlStr = "playerProductInstall.swf"; var flashvars = {}; var params = {}; params.quality = "high"; params.bgcolor = "#ffffff"; params.allowscriptaccess = "sameDomain"; params.allowfullscreen = "true"; var attributes = {}; attributes.id = "TextMixup"; attributes.name = "TextMixup"; attributes.align = "middle"; swfobject.embedSWF( "TextMixup.swf", "flashContent", "100%", "80%", swfVersionStr, xiSwfUrlStr, flashvars, params, attributes); swfobject.createCSS("#flashContent", "display:block;text-align:left;"); </script> </head> <body> <div id="homebar"><a href="http://apumpkinpatch.com"><img src="../appassets/images/logo/logoHor_130_30.png" alt="APumpkinPatch HOME" width="130" height="30" hspace="10" vspace="3" border="0"/></a> </div> <div id="topad"> <script type="text/javascript"><!-- google_ad_client = "pub-5824388356626461"; /* 728x90, textmash */ google_ad_slot = "1114351240"; google_ad_width = 728; google_ad_height = 90; //--> </script> <script type="text/javascript" src="http://pagead2.googlesyndication.com/pagead/show_ads.js"> </script> </div> <div id="mainContainer"> <div id="flashContent"> <p> To view this page ensure that Adobe Flash Player version 10.2.0 or greater is installed. </p> <script type="text/javascript"> var pageHost = ((document.location.protocol == "https:") ? "https://" : "http://"); document.write("<a href='http://www.adobe.com/go/getflashplayer'><img src='" + pageHost + "www.adobe.com/images/shared/download_buttons/get_flash_player.gif' alt='Get Adobe Flash player' /></a>" ); </script> </div> <noscript> <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" width="100%" height="80%" id="TextMixup"> <param name="movie" value="TextMixup.swf" /> <param name="quality" value="high" /> <param name="bgcolor" value="#ffffff" /> <param name="allowScriptAccess" value="sameDomain" /> <param name="allowFullScreen" value="true" /> <!--[if !IE]>--> <object type="application/x-shockwave-flash" data="TextMixup.swf" width="100%" height="80%"> <param name="quality" value="high" /> <param name="bgcolor" value="#ffffff" /> <param name="allowScriptAccess" value="sameDomain" /> <param name="allowFullScreen" value="true" /> <!--<![endif]--> <!--[if gte IE 6]>--> <p> Either scripts and active content are not permitted to run or Adobe Flash Player version 10.2.0 or greater is not installed. 
</p> <!--<![endif]--> <a href="http://www.adobe.com/go/getflashplayer"> <img src="http://www.adobe.com/images/shared/download_buttons/get_flash_player.gif" alt="Get Adobe Flash Player" /> </a> <!--[if !IE]>--> </object> <!--<![endif]--> </object> </noscript> <div id="convosPreview">This is a div I would want to appear as a colum to the right of the flash content that can scale</div> <!---End mainContainer --> </div> <div id="footer"> <a href="../apps.html"><img src="../appassets/images/apps.png" hspace="5" vspace="5" alt="random chat app apumpkinpatch" width="228" height="40" border="0" /></a><a href="https://chrome.google.com/webstore/detail/hjmnobclpbhnjcpdnpdnkbgdkbfifbao?hl=en-US#"><img src="../appassets/images/chromeapp.png" alt="chrome app random video chat apumpkinpatch" width="115" height="40" vspace="5" border="0" /></a><br /><br /> <a href="http://spacebarup.com" target="_blank">©2011 Space Bar</a> | <a href="../tos.html">TOS & Privacy Policy</a> | <a href="../help.html">FAQ & Help</a> | <a href="../tips.html">Important online safety tips</a> | <a href="http://www.facebook.com/pages/APumpkinPatchcom/164279206963001?sk=app_2373072738" target="_blank">Discussion Boards</a><br /> <p>You must be at least 18 years of age to access this web site.<br />APumpkinPatch.com is not responsible for the actions of any visitors of this site.<br />APumpkinPatch.com does not endorse or claim ownership to any of the content that is broadcast through this site. </p><h2>A Pumpkin Patch is BRAND NEW and will be developed a lot over the next few months adding video chat games, chat rooms, and more! Check back often it's going to be a lot of fun!</h2> </div> </body> </html> myCSS: html, body { height:100%; } body { text-align:center; font-family: Helvetica, Arial, sans-serif; margin:0; padding:0; overflow:auto; text-align:center; background-color: #ffffff; } object:focus { outline:none; } #homebar { clear:both; text-align: left; width: 100%; height: 40px; background-color:#333333; color:#CCC; overflow:hidden; box-shadow: 0px 0px 14px rgba(0, 0, 0, 0.65); -moz-box-shadow: 0px 0px 14px rgba(0, 0, 0, 0.65); -webkit-box-shadow: 0px 0px 14px rgba(0, 0, 0, 0.65); margin-bottom: 10px; } #mainContainer { height:auto; width:auto; clear:both; } #flashContent { display:none; height:auto; float:left; min-height: 500px; min-width: 340px; } /**this is the div i want to appear as a column net to the scaleable flash content **/ #convosPreview { float:right; width:280px; height:600px; }

    Read the article

  • How To Make Hundreds of Complex Photo Edits in Seconds With Photoshop Actions

    - by Eric Z Goodnight
Have a huge folder of images needing tweaks? A few hundred adjustments may seem like a big, time-consuming job—but read on to see how Photoshop can do repetitive tasks automatically, even if you don’t know how to program! Photoshop Actions are a simple way to program simple routines in Photoshop, and are a great time saver, allowing you to re-perform tasks over and over, saving you minutes or hours, depending on the job you have to work on. See how any bunch of images and even some fairly complicated photo tweaking can be done automatically to even hundreds of images at once.

When Can I Use Photoshop Actions?

Photoshop actions are a way of recording the tools, menus, and keys pressed while using the program. Each time you use a tool, adjust a color, or use the brush, it can be recorded and played back over any file Photoshop can open. While it isn’t perfect and can get very confused if not set up correctly, it can automate editing hundreds of images, saving you hours and hours if you have big jobs with complex edits. The image illustrated above is a template for a polaroid-style picture frame. If you had several hundred images, it would actually be a simple matter to use Photoshop Actions to create hundreds of new images inside the frame in almost no time at all. Let’s take a look at how a simple folder of images and some image editing automation can turn lots of work into a simple and easy job.

Creating a New Action

Actions is a default part of the “Essentials” panel set Photoshop begins with. If you can’t see the panel button under the “History” button, you can find Actions by going to Window > Actions or pressing Alt + F9. Click the menu button in the Actions Panel, pictured in the previous illustration on the left. Choose to create a “New Set” in order to begin creating your own custom Actions. Name your action set whatever you want. Names are not relevant; you’ll simply want to make it obvious that you have created it. Click OK. Look back in the Actions panel. You’ll see your new Set of actions has been added to the list. Click it to highlight it before going on. Click the menu button again to create a “New Action” in your new set. If you care to name your action, go ahead. Name it after whatever it is you’re hoping to do—change the canvas size, tint all your pictures blue, send your image to the printer in high quality, or run multiple filters on images. The name is for your own usage, so do what suits you best. Note that you can simplify your process by creating shortcut keys for your actions. If you plan to do hundreds of edits with your actions, this might be a good idea. If you plan to record an action to use every time you use Photoshop, this might even be an invaluable step. When you create a new Action, Photoshop automatically begins recording everything you do. It does not record the time in between steps, but rather only the data from each step. So take your time when recording and make sure you create your actions the way you want them. The square button stops recording, and the circle button starts recording again. With these basics ready, we can take a look at a sample Action.

Recording a Sample Action

Photoshop will remember everything you input into it when it is recording, even specific photographs you open. So begin recording your action when your first photo is already open. Once your first image is open, click the record button. If you’re already recording, continue on. Using the File > Place command to insert the polaroid image can be easier for Actions to deal with. 
Photoshop can record with multiple open files, but it often gets confused when you try it. Keep your recordings as simple as possible to ensure your success. When the image is placed in, simply press enter to render it. Select your background layer in your layers panel. Your recording should be following along with no trouble. Double-click this layer. Double-clicking your background layer will create a new layer from it. Allow it to be renamed “Layer 0” and press OK. Move the “polaroid” layer to the bottom by selecting it and dragging it down below “Layer 0” in the layers panel. Right-click “Layer 0” and select “Create Clipping Mask.” The JPG image is cropped to the layer below it. Incidentally, all actions described here are being recorded perfectly, and are reproducible. Cursor actions, like the eraser, brush, or bucket fill, don’t record well, because the computer uses your mouse movements and coordinates, which may need to change from photo to photo. Click the blending mode drop-down to set your Photograph layer to a “Screen” blending mode. This will make the image disappear when it runs over the white parts of the polaroid image. With your image layer (Layer 0) still selected, navigate to Edit > Transform > Scale. You can use the mouse to resize your Layer 0, but Actions work better with absolute numbers. Visit the Width and Height adjustments in the top options panel. Click the chain icon to link them together, and adjust them numerically. Depending on your needs, you may need to use more or less than 30%. Your image will resize to your specifications. Press enter to render, or click the check box in the top right of your application. Ctrl + Click on your bottom layer, or “polaroid” in this case. This creates a selection of the bottom layer. Navigate to Image > Crop in order to crop down to your bottom layer selection. Your image is now resized to your bottommost layer, and Photoshop is still recording to that effect. For additional effect, we can navigate to Image > Image Rotation > Arbitrary to rotate our image by a small tilt. Choosing 3 degrees clockwise, we click OK to render our choice. Our image is rotated, and this step is recorded. Photoshop will even record when you save your files. With your recording still going, find File > Save As. You can easily tell Photoshop to save in a new folder, other than the one you have been working in, so that your files aren’t overwritten. Navigate to any folder you wish, but do not change the filename. If you change the filename, Photoshop will record that name, and save all your images under whatever you type. However, you can change your filetype without recording an absolute filename. Use the pulldown tab and select a different filetype—in this instance, PNG. Simply click “Save” to create a new PNG based on your actions. Photoshop will record the destination and the change in filetype. If you didn’t edit the name of your file, it will always use the variable filename of any image you open. (This is very important if you want to edit hundreds of images at once!) Click File > Close or the red “X” in the corner to close your file. Photoshop can record that as well. Since we have already saved our image as a JPG, click “NO” to not overwrite your original image. Photoshop will also record your choice of “NO” for subsequent images. In your Actions panel, click the stop button to complete your action. You can always click the record button to add more steps later, if you want. This is how your new action looks with its steps expanded. 
Curious how to put it into effect? Read on to see how simple it is to use that recording you just made.

Editing Lots of Images with Your New Action

Open a large number of images—as many as you care to work with. Your action should work immediately with every image on screen, although you may have to test and re-record, depending on how you did. Actions don’t require any programming knowledge, but often can get confused or work in a counter-intuitive way. Record your action until it is perfect. If it works once without errors, it’s likely to work again and again! Find the “Play” button in your Actions Panel. With your custom action selected, click “Play” and your routine will edit, save, and close each file for you. Keep bashing “Play” for each open file, and it will keep saving and creating new files until you run out of work you need to do. And in mere moments, a complicated stack of work is done. Photoshop actions can be very complicated, far beyond what is illustrated here, and can even be combined with scripts and other actions, enabling automated creation of potentially very complex files, or applying filters to an entire portfolio of digital photos. Have questions or comments concerning Graphics, Photos, Filetypes, or Photoshop? Send your questions to [email protected], and they may be featured in a future How-To Geek Graphics article. Image Credits: All images copyright Stephanie Pragnell and author Eric Z Goodnight, protected under Creative Commons.

    Read the article

  • Managed Service Architectures Part I

    - by barryoreilly
Instead of thinking about service oriented architecture, a concept that is continually defined, redefined, abused and mistreated, perhaps it is time to drop the acronym and consider what we actually need to get the job done. ‘Pure’ SOA involves the modeling of an organisation’s processes, the so-called ‘Top Down’ approach, followed by the implementation of these processes as services. Another approach, more commonly seen in the wild, is the bottom up approach. This usually involves services that simply start popping up in the organization, and SOA in this case is often just an attempt to rein in these services. Such projects, although described as SOA projects for a variety of reasons, have clearly little relation to process-driven architecture. Much has been written about these two approaches, with many deciding that a hybrid of both methods is needed to succeed with SOA. These hybrid methods are a sensible compromise, but one gets the feeling that there is too much focus on ‘Succeeding with SOA’. Organisations who focus too much on bottom up development, or who waste too much time and money on top down approaches that don’t produce results, are often recommended to attempt an ‘agile’ (Erl) or ‘middle-out’ (Microsoft) approach in order to succeed with SOA. The problem with recommending this approach is that, in most cases, succeeding with SOA isn’t the aim of the project. If a project is started with the simple aim of ‘Succeeding with SOA’ then the reasons for the project’s existence probably need to be questioned.

There are a number of things we can be sure of:
· An organisation will have a number of disparate IT systems
· Some of these systems will have redundant data and functionality
· Integration will give considerable ROI
· Integration will already be under way
· Services will already exist in the organisation
· These services will be inconsistent in their implementation and in their governance

So there are three goals here:
1. Alignment between the business and IT
2. Integration of disparate systems
3. Management of services

2 and 3 are going to happen; in fact they must happen if any degree of return is expected from the IT department. Ignoring 1 is considered a typical mistake in SOA implementations, as it ignores the business implications. However, the business implication of this approach is the money saved in more efficient IT processes. 2 and 3 are ongoing, and they will continue happening, even if a large project to produce a SOA metamodel is started. The result will then be an unstructured tangle of services, and a metamodel that is already going out of date. So we get stuck in and rebuild our services so that they match the metamodel, with all the far-reaching consequences this will have on our current LOB systems. Let’s imagine that this actually works (how often do we rip and replace working software because it doesn't fit a certain pattern? Never – that's the point of integration); we will now be working with a metamodel that is out of date, and most likely incomplete if the organisation is large. Accepting that an object can have more than one model over time, with perhaps more than one model being valid at any given time, will help us realise the limitations of the top down model. It is entirely normal, and perhaps necessary, for an organisation to be able to view an entity from different perspectives. 
So, instead of trying to constantly force these goals in a straight line, why not let them happen in parallel, and manage the changes in each layer? If company A has chosen to model their business processes and create a business architecture, there will be a reason behind this. Often the aim is to make the business more flexible and able to cope with change, through alignment between the business and the IT department. If company B’s IT department recognizes the problem of wild services springing up everywhere, and decides to do something about it by designing a platform and processes for the introduction of services, is this not a valid approach? With the hybrid approach, it is recommended that company A begin deploying services as quickly as possible, based on models that are clearly incomplete and which will therefore change rapidly and often in the near future. Natural business evolution will also mean that the models can be guaranteed to change in the not-so-near future. To ‘Succeed with SOA’, company B needs to go back to the drawing board and start modeling processes and objects. So, in effect, we are telling business analysts to start developing code based on a model they are unsure of, and telling programmers to ignore the obvious and growing problems in their IT department and start drawing lines and boxes. Could the problem be that there are two different problem domains? And the whole concept of SOA as it is being described by clever salespeople today creates an example of the oft-dreaded ‘tight coupling’ between these two domains? Could it be that we have taken two large problem areas, and bundled the solution together in order to create a magic bullet? And then convinced ourselves that the bullet actually exists? Company A wants to have a closer relationship between the business and its IT department, in order to become a more flexible organization. Company B wants to decrease the maintenance costs of its IT infrastructure. If both companies focus on succeeding with SOA, then they aren’t focusing on their actual goals. If Company A starts building services from incomplete models, without a gameplan, they will end up in the same situation as company B, with wild services. If company B focuses on modeling, they could easily end up with the same problems as company A. Now we have two companies who a short while ago had one problem each, and now have two problems each. This has happened because of a focus on ‘Succeeding with SOA’, rather than solving the problem at hand. This is not to suggest that the two problem domains are unrelated; a strategy that encompasses both will obviously be good for the organization. But only if the organization realizes this and can develop such a strategy. This strategy cannot be bought in a box. Anyone who has worked with SOA for a while will be used to analyzing the solutions to a problem and judging the solution’s level of coupling. If we have two applications that each perform separate functions, but need to communicate with each other, we create an integration layer between them, perhaps with a service, but we do all we can to reduce the dependency between the two systems. Using the same approach, we can separate the modeling (business architecture) and the service hosting (technical architecture). The business architecture describes the processes and business objects in the business domain. The technical architecture describes the hosting, management and implementation of services. 
The glue that binds these together, the integration layer in our analogy, is the service contract, where the operations map the processes to their technical implementation, and the messages map business concepts to software objects in the implementation. If we reduce the coupling between these layers, we should be able to allow developers to develop services, and business analysts to develop models, without the changes rippling through from one side to the other. This would allow company A to carry on modeling, and company B to develop a service platform, each achieving their intended goal, without necessarily creating the problems seen in pure top down or bottom up approaches. Company B could then at a later date map their service infrastructure to a unified model, and company A could carry on modeling, insulating deployed services from changes in the ongoing modeling. How do we do this? The concept of service virtualization has been around for a while, and is instantly realizable in Microsoft’s Managed Services Engine. Here we can create a layer of virtual services, which represent the business analyst’s view, presenting uniform contracts to the outside world. These services can then transform and route messages to the actual service implementations. I like to think of the virtual services with their beautifully modeled interfaces as ‘SOA services’, and the implementations as simple integration ‘adapter’ services providing an interface to a technical implementation. The Managed Services Engine also provides policy-based control over services, regardless of where they are deployed, simplifying the handling of security, logging, exception handling, etc. This solves a big problem. The pressure to deliver services quickly is always there in projects. It is very important to quickly show value when implementing service architectures. There is also pressure to deliver quality, and you can’t easily do both at the same time. This approach allows quick delivery with quality increasing over time, allowing modeling and service development to occur in parallel and independently of each other. The link between business modeling and service implementation is not one that is obvious to many organizations, and requires a certain maturity to realize and drive forward. It is also completely possible that a company can benefit from one without the other; even if this approach is frowned upon today, there are many companies doing so and seeing ROI. Of course there are disadvantages to this, the biggest one being the transformations necessary between the virtual interfaces and the service implementations. Bad choices in developing the services in the service implementation could mean that it is impossible to map the modeled processes to the implementation without redeveloping the service. In many cases the architect will not have a choice here anyway, as proprietary systems are often delivered with predeveloped services. The alternative is to wait until the model is finished and then build the service according to the model. However, if that approach worked we wouldn’t be having this discussion! And even when it does work, natural business evolution will mean that the two concepts (model and implementation) will immediately start to drift away from each other, so coupling them tightly together, so that they are forever bound to the model that only applied at the time of the modeling work, will not really achieve a great deal. Architecture is all about trade-offs, and here a choice has to be made. 
The choice is between something that will initially be of low quality but will work, or something that may well be impossible to achieve in most situations. In conclusion, top-down is a natural approach for business analysts, and bottom-up is a natural approach for developers. Instead of trying to force something on both that neither want, and which has not shown itself to be successful, why not let them get on with their jobs, and let an enterprise architect coordinate the processes?
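The service contract "glue" described above can be sketched in WCF terms. This is only an illustration of the shape such a virtual contract might take, not the Managed Services Engine API itself: the IOrderProcessing and CustomerOrder names are invented, while ServiceContract, OperationContract, DataContract and DataMember are the standard System.ServiceModel and System.Runtime.Serialization attributes.

using System.Runtime.Serialization;
using System.ServiceModel;

// Hypothetical message shape: maps a business concept to a software object.
[DataContract]
public class CustomerOrder
{
    [DataMember] public string CustomerId { get; set; }
    [DataMember] public decimal Amount { get; set; }
}

// Hypothetical virtual 'SOA service' contract: the operation is named after
// the modeled business process, while routing and transformation behind this
// facade can target whatever 'adapter' implementations exist today.
[ServiceContract]
public interface IOrderProcessing
{
    [OperationContract]
    string SubmitOrder(CustomerOrder order);
}

Because the virtual contract is all the business model sees, the implementation behind it can be replaced or remapped without rippling changes back into the model.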

    Read the article

  • How I can add JScroll bar to NavigableImagePanel which is an Image panel with an small navigation vi

    - by Sarah Kho
    Hi, I have the following NavigableImagePanel, it is under BSD license and I found it in the web. What I want to do with this panel is as follow: I want to add a JScrollPane to it in order to show images in their full size and let the users to re-center the image using the small navigation panel. Right now, the panel resize the images to fit them in the current panel size. I want it to load the image in its real size and let users to navigate to different parts of the image using the navigation panel. Source code for the panel: import java.awt.AWTEvent; import java.awt.BorderLayout; import java.awt.Color; import java.awt.Dimension; import java.awt.Graphics; import java.awt.Graphics2D; import java.awt.GraphicsEnvironment; import java.awt.Image; import java.awt.Point; import java.awt.Rectangle; import java.awt.RenderingHints; import java.awt.Toolkit; import java.awt.event.ComponentAdapter; import java.awt.event.ComponentEvent; import java.awt.event.MouseAdapter; import java.awt.event.MouseEvent; import java.awt.event.MouseMotionListener; import java.awt.event.MouseWheelEvent; import java.awt.event.MouseWheelListener; import java.awt.image.BufferedImage; import java.io.File; import java.io.IOException; import java.util.Arrays; import javax.imageio.ImageIO; import javax.swing.JFrame; import javax.swing.JOptionPane; import javax.swing.JPanel; import javax.swing.SwingUtilities; /** * @author pxt * */ public class NavigableImagePanel extends JPanel { /** * <p>Identifies a change to the zoom level.</p> */ public static final String ZOOM_LEVEL_CHANGED_PROPERTY = "zoomLevel"; /** * <p>Identifies a change to the zoom increment.</p> */ public static final String ZOOM_INCREMENT_CHANGED_PROPERTY = "zoomIncrement"; /** * <p>Identifies that the image in the panel has changed.</p> */ public static final String IMAGE_CHANGED_PROPERTY = "image"; private static final double SCREEN_NAV_IMAGE_FACTOR = 0.15; // 15% of panel's width private static final double NAV_IMAGE_FACTOR = 0.3; // 30% of panel's width private static final double HIGH_QUALITY_RENDERING_SCALE_THRESHOLD = 1.0; private static final Object INTERPOLATION_TYPE = RenderingHints.VALUE_INTERPOLATION_BILINEAR; private double zoomIncrement = 0.2; private double zoomFactor = 1.0 + zoomIncrement; private double navZoomFactor = 1.0 + zoomIncrement; private BufferedImage image; private BufferedImage navigationImage; private int navImageWidth; private int navImageHeight; private double initialScale = 0.0; private double scale = 0.0; private double navScale = 0.0; private int originX = 0; private int originY = 0; private Point mousePosition; private Dimension previousPanelSize; private boolean navigationImageEnabled = true; private boolean highQualityRenderingEnabled = true; private WheelZoomDevice wheelZoomDevice = null; private ButtonZoomDevice buttonZoomDevice = null; /** * <p>Defines zoom devices.</p> */ public static class ZoomDevice { /** * <p>Identifies that the panel does not implement zooming, * but the component using the panel does (programmatic zooming method).</p> */ public static final ZoomDevice NONE = new ZoomDevice("none"); /** * <p>Identifies the left and right mouse buttons as the zooming device.</p> */ public static final ZoomDevice MOUSE_BUTTON = new ZoomDevice("mouseButton"); /** * <p>Identifies the mouse scroll wheel as the zooming device.</p> */ public static final ZoomDevice MOUSE_WHEEL = new ZoomDevice("mouseWheel"); private String zoomDevice; private ZoomDevice(String zoomDevice) { this.zoomDevice = zoomDevice; } public String 
toString() { return zoomDevice; } } //This class is required for high precision image coordinates translation. private class Coords { public double x; public double y; public Coords(double x, double y) { this.x = x; this.y = y; } public int getIntX() { return (int)Math.round(x); } public int getIntY() { return (int)Math.round(y); } public String toString() { return "[Coords: x=" + x + ",y=" + y + "]"; } } private class WheelZoomDevice implements MouseWheelListener { public void mouseWheelMoved(MouseWheelEvent e) { Point p = e.getPoint(); boolean zoomIn = (e.getWheelRotation() < 0); if (isInNavigationImage(p)) { if (zoomIn) { navZoomFactor = 1.0 + zoomIncrement; } else { navZoomFactor = 1.0 - zoomIncrement; } zoomNavigationImage(); } else if (isInImage(p)) { if (zoomIn) { zoomFactor = 1.0 + zoomIncrement; } else { zoomFactor = 1.0 - zoomIncrement; } zoomImage(); } } } private class ButtonZoomDevice extends MouseAdapter { public void mouseClicked(MouseEvent e) { Point p = e.getPoint(); if (SwingUtilities.isRightMouseButton(e)) { if (isInNavigationImage(p)) { navZoomFactor = 1.0 - zoomIncrement; zoomNavigationImage(); } else if (isInImage(p)) { zoomFactor = 1.0 - zoomIncrement; zoomImage(); } } else { if (isInNavigationImage(p)) { navZoomFactor = 1.0 + zoomIncrement; zoomNavigationImage(); } else if (isInImage(p)) { zoomFactor = 1.0 + zoomIncrement; zoomImage(); } } } } /** * <p>Creates a new navigable image panel with no default image and * the mouse scroll wheel as the zooming device.</p> */ public NavigableImagePanel() { setOpaque(false); addComponentListener(new ComponentAdapter() { public void componentResized(ComponentEvent e) { if (scale > 0.0) { if (isFullImageInPanel()) { centerImage(); } else if (isImageEdgeInPanel()) { scaleOrigin(); } if (isNavigationImageEnabled()) { createNavigationImage(); } repaint(); } previousPanelSize = getSize(); } }); addMouseListener(new MouseAdapter() { public void mousePressed(MouseEvent e) { if (SwingUtilities.isLeftMouseButton(e)) { if (isInNavigationImage(e.getPoint())) { Point p = e.getPoint(); displayImageAt(p); } } } public void mouseClicked(MouseEvent e){ if (e.getClickCount() == 2) { resetImage(); } } }); addMouseMotionListener(new MouseMotionListener() { public void mouseDragged(MouseEvent e) { if (SwingUtilities.isLeftMouseButton(e) && !isInNavigationImage(e.getPoint())) { Point p = e.getPoint(); moveImage(p); } } public void mouseMoved(MouseEvent e) { //we need the mouse position so that after zooming //that position of the image is maintained mousePosition = e.getPoint(); } }); setZoomDevice(ZoomDevice.MOUSE_WHEEL); } /** * <p>Creates a new navigable image panel with the specified image * and the mouse scroll wheel as the zooming device.</p> */ public NavigableImagePanel(BufferedImage image) throws IOException { this(); setImage(image); } private void addWheelZoomDevice() { if (wheelZoomDevice == null) { wheelZoomDevice = new WheelZoomDevice(); addMouseWheelListener(wheelZoomDevice); } } private void addButtonZoomDevice() { if (buttonZoomDevice == null) { buttonZoomDevice = new ButtonZoomDevice(); addMouseListener(buttonZoomDevice); } } private void removeWheelZoomDevice() { if (wheelZoomDevice != null) { removeMouseWheelListener(wheelZoomDevice); wheelZoomDevice = null; } } private void removeButtonZoomDevice() { if (buttonZoomDevice != null) { removeMouseListener(buttonZoomDevice); buttonZoomDevice = null; } } /** * <p>Sets a new zoom device.</p> * * @param newZoomDevice specifies the type of a new zoom device. 
*/ public void setZoomDevice(ZoomDevice newZoomDevice) { if (newZoomDevice == ZoomDevice.NONE) { removeWheelZoomDevice(); removeButtonZoomDevice(); } else if (newZoomDevice == ZoomDevice.MOUSE_BUTTON) { removeWheelZoomDevice(); addButtonZoomDevice(); } else if (newZoomDevice == ZoomDevice.MOUSE_WHEEL) { removeButtonZoomDevice(); addWheelZoomDevice(); } } /** * <p>Gets the current zoom device.</p> */ public ZoomDevice getZoomDevice() { if (buttonZoomDevice != null) { return ZoomDevice.MOUSE_BUTTON; } else if (wheelZoomDevice != null) { return ZoomDevice.MOUSE_WHEEL; } else { return ZoomDevice.NONE; } } //Called from paintComponent() when a new image is set. private void initializeParams() { double xScale = (double)getWidth() / image.getWidth(); double yScale = (double)getHeight() / image.getHeight(); initialScale = Math.min(xScale, yScale); scale = initialScale; //An image is initially centered centerImage(); if (isNavigationImageEnabled()) { createNavigationImage(); } } //Centers the current image in the panel. private void centerImage() { originX = (int)(getWidth() - getScreenImageWidth()) / 2; originY = (int)(getHeight() - getScreenImageHeight()) / 2; } //Creates and renders the navigation image in the upper let corner of the panel. private void createNavigationImage() { //We keep the original navigation image larger than initially //displayed to allow for zooming into it without pixellation effect. navImageWidth = (int)(getWidth() * NAV_IMAGE_FACTOR); navImageHeight = navImageWidth * image.getHeight() / image.getWidth(); int scrNavImageWidth = (int)(getWidth() * SCREEN_NAV_IMAGE_FACTOR); int scrNavImageHeight = scrNavImageWidth * image.getHeight() / image.getWidth(); navScale = (double)scrNavImageWidth / navImageWidth; navigationImage = new BufferedImage(navImageWidth, navImageHeight, image.getType()); Graphics g = navigationImage.getGraphics(); g.drawImage(image, 0, 0, navImageWidth, navImageHeight, null); } /** * <p>Sets an image for display in the panel.</p> * * @param image an image to be set in the panel */ public void setImage(BufferedImage image) { BufferedImage oldImage = this.image; this.image = image; //Reset scale so that initializeParameters() is called in paintComponent() //for the new image. scale = 0.0; firePropertyChange(IMAGE_CHANGED_PROPERTY, (Image)oldImage, (Image)image); repaint(); } /** * <p>resets an image to the centre of the panel</p> * */ public void resetImage() { BufferedImage oldImage = this.image; this.image = image; //Reset scale so that initializeParameters() is called in paintComponent() //for the new image. 
scale = 0.0; firePropertyChange(IMAGE_CHANGED_PROPERTY, (Image)oldImage, (Image)image); repaint(); } /** * <p>Tests whether an image uses the standard RGB color space.</p> */ public static boolean isStandardRGBImage(BufferedImage bImage) { return bImage.getColorModel().getColorSpace().isCS_sRGB(); } //Converts this panel's coordinates into the original image coordinates private Coords panelToImageCoords(Point p) { return new Coords((p.x - originX) / scale, (p.y - originY) / scale); } //Converts the original image coordinates into this panel's coordinates private Coords imageToPanelCoords(Coords p) { return new Coords((p.x * scale) + originX, (p.y * scale) + originY); } //Converts the navigation image coordinates into the zoomed image coordinates private Point navToZoomedImageCoords(Point p) { int x = p.x * getScreenImageWidth() / getScreenNavImageWidth(); int y = p.y * getScreenImageHeight() / getScreenNavImageHeight(); return new Point(x, y); } //The user clicked within the navigation image and this part of the image //is displayed in the panel. //The clicked point of the image is centered in the panel. private void displayImageAt(Point p) { Point scrImagePoint = navToZoomedImageCoords(p); originX = -(scrImagePoint.x - getWidth() / 2); originY = -(scrImagePoint.y - getHeight() / 2); repaint(); } //Tests whether a given point in the panel falls within the image boundaries. private boolean isInImage(Point p) { Coords coords = panelToImageCoords(p); int x = coords.getIntX(); int y = coords.getIntY(); return (x >= 0 && x < image.getWidth() && y >= 0 && y < image.getHeight()); } //Tests whether a given point in the panel falls within the navigation image //boundaries. private boolean isInNavigationImage(Point p) { return (isNavigationImageEnabled() && p.x < getScreenNavImageWidth() && p.y < getScreenNavImageHeight()); } //Used when the image is resized. private boolean isImageEdgeInPanel() { if (previousPanelSize == null) { return false; } return (originX > 0 && originX < previousPanelSize.width || originY > 0 && originY < previousPanelSize.height); } //Tests whether the image is displayed in its entirety in the panel. private boolean isFullImageInPanel() { return (originX >= 0 && (originX + getScreenImageWidth()) < getWidth() && originY >= 0 && (originY + getScreenImageHeight()) < getHeight()); } /** * <p>Indicates whether the high quality rendering feature is enabled.</p> * * @return true if high quality rendering is enabled, false otherwise. */ public boolean isHighQualityRenderingEnabled() { return highQualityRenderingEnabled; } /** * <p>Enables/disables high quality rendering.</p> * * @param enabled enables/disables high quality rendering */ public void setHighQualityRenderingEnabled(boolean enabled) { highQualityRenderingEnabled = enabled; } //High quality rendering kicks in when when a scaled image is larger //than the original image. In other words, //when image decimation stops and interpolation starts. private boolean isHighQualityRendering() { return (highQualityRenderingEnabled && scale > HIGH_QUALITY_RENDERING_SCALE_THRESHOLD); } /** * <p>Indicates whether navigation image is enabled.<p> * * @return true when navigation image is enabled, false otherwise. */ public boolean isNavigationImageEnabled() { return navigationImageEnabled; } /** * <p>Enables/disables navigation with the navigation image.</p> * <p>Navigation image should be disabled when custom, programmatic navigation * is implemented.</p> * * @param enabled true when navigation image is enabled, false otherwise. 
*/ public void setNavigationImageEnabled(boolean enabled) { navigationImageEnabled = enabled; repaint(); } //Used when the panel is resized private void scaleOrigin() { originX = originX * getWidth() / previousPanelSize.width; originY = originY * getHeight() / previousPanelSize.height; repaint(); } //Converts the specified zoom level to scale. private double zoomToScale(double zoom) { return initialScale * zoom; } /** * <p>Gets the current zoom level.</p> * * @return the current zoom level */ public double getZoom() { return scale / initialScale; } /** * <p>Sets the zoom level used to display the image.</p> * <p>This method is used in programmatic zooming. The zooming center is * the point of the image closest to the center of the panel. * After a new zoom level is set the image is repainted.</p> * * @param newZoom the zoom level used to display this panel's image. */ public void setZoom(double newZoom) { Point zoomingCenter = new Point(getWidth() / 2, getHeight() / 2); setZoom(newZoom, zoomingCenter); } /** * <p>Sets the zoom level used to display the image, and the zooming center, * around which zooming is done.</p> * <p>This method is used in programmatic zooming. * After a new zoom level is set the image is repainted.</p> * * @param newZoom the zoom level used to display this panel's image. */ public void setZoom(double newZoom, Point zoomingCenter) { Coords imageP = panelToImageCoords(zoomingCenter); if (imageP.x < 0.0) { imageP.x = 0.0; } if (imageP.y < 0.0) { imageP.y = 0.0; } if (imageP.x >= image.getWidth()) { imageP.x = image.getWidth() - 1.0; } if (imageP.y >= image.getHeight()) { imageP.y = image.getHeight() - 1.0; } Coords correctedP = imageToPanelCoords(imageP); double oldZoom = getZoom(); scale = zoomToScale(newZoom); Coords panelP = imageToPanelCoords(imageP); originX += (correctedP.getIntX() - (int)panelP.x); originY += (correctedP.getIntY() - (int)panelP.y); firePropertyChange(ZOOM_LEVEL_CHANGED_PROPERTY, new Double(oldZoom), new Double(getZoom())); repaint(); } /** * <p>Gets the current zoom increment.</p> * * @return the current zoom increment */ public double getZoomIncrement() { return zoomIncrement; } /** * <p>Sets a new zoom increment value.</p> * * @param newZoomIncrement new zoom increment value */ public void setZoomIncrement(double newZoomIncrement) { double oldZoomIncrement = zoomIncrement; zoomIncrement = newZoomIncrement; firePropertyChange(ZOOM_INCREMENT_CHANGED_PROPERTY, new Double(oldZoomIncrement), new Double(zoomIncrement)); } //Zooms an image in the panel by repainting it at the new zoom level. //The current mouse position is the zooming center. private void zoomImage() { Coords imageP = panelToImageCoords(mousePosition); double oldZoom = getZoom(); scale *= zoomFactor; Coords panelP = imageToPanelCoords(imageP); originX += (mousePosition.x - (int)panelP.x); originY += (mousePosition.y - (int)panelP.y); firePropertyChange(ZOOM_LEVEL_CHANGED_PROPERTY, new Double(oldZoom), new Double(getZoom())); repaint(); } //Zooms the navigation image private void zoomNavigationImage() { navScale *= navZoomFactor; repaint(); } /** * <p>Gets the image origin.</p> * <p>Image origin is defined as the upper, left corner of the image in * the panel's coordinate system.</p> * @return the point of the upper, left corner of the image in the panel's coordinates * system. 
*/ public Point getImageOrigin() { return new Point(originX, originY); } /** * <p>Sets the image origin.</p> * <p>Image origin is defined as the upper, left corner of the image in * the panel's coordinate system. After a new origin is set, the image is repainted. * This method is used for programmatic image navigation.</p>

    Read the article

  • Initializing and drawing a mesh using OpenTK

    - by Boreal
I'm implementing a "Mesh" class to use in my OpenTK game. You pass in a vertex array and an index array, and then you can call Mesh.Draw() to draw it using a shader. I've heard VBOs and VAOs are the way to go for this approach, but nowhere have I found a guide that shows how to get Data → Video Memory → Shader. Can someone give me a quick rundown of how this works? EDIT: So far, I have this:
struct Vertex { public Vector3 position; public Vector3 normal; public Vector3 color; public static int memSize = 9 * sizeof(float); public static byte[] memOffset = { 0, 3 * sizeof(float), 6 * sizeof(float) }; } class Mesh { private uint vbo; private uint ibo; // stores the numbers of vertices and indices private int numVertices; private int numIndices; public Mesh(int numVertices, Vertex[] vertices, int numIndices, ushort[] indices) { // set numbers this.numVertices = numVertices; this.numIndices = numIndices; // generate buffers GL.GenBuffers(1, out vbo); GL.GenBuffers(1, out ibo); GL.BindBuffer(BufferTarget.ArrayBuffer, vbo); GL.BindBuffer(BufferTarget.ElementArrayBuffer, ibo); // send data to the buffers GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(Vertex.memSize * numVertices), vertices, BufferUsageHint.StaticDraw); GL.BufferData(BufferTarget.ElementArrayBuffer, new IntPtr(sizeof(ushort) * numIndices), indices, BufferUsageHint.StaticDraw); } public void Render() { // bind buffers GL.BindBuffer(BufferTarget.ArrayBuffer, vbo); GL.BindBuffer(BufferTarget.ElementArrayBuffer, ibo); // define offsets GL.VertexPointer(3, VertexPointerType.Float, Vertex.memSize, new IntPtr(Vertex.memOffset[0])); GL.NormalPointer(NormalPointerType.Float, Vertex.memSize, new IntPtr(Vertex.memOffset[1])); GL.ColorPointer(3, ColorPointerType.Float, Vertex.memSize, new IntPtr(Vertex.memOffset[2])); // draw GL.DrawElements(BeginMode.Triangles, numIndices, DrawElementsType.UnsignedInt, (IntPtr)0); } } class Application : GameWindow { Mesh triangle; protected override void OnLoad(EventArgs e) { base.OnLoad(e); GL.ClearColor(0.1f, 0.2f, 0.5f, 0.0f); GL.Enable(EnableCap.DepthTest); GL.Enable(EnableCap.VertexArray); GL.Enable(EnableCap.NormalArray); GL.Enable(EnableCap.ColorArray); Vertex v0 = new Vertex(); v0.position = new Vector3(-1.0f, -1.0f, 4.0f); v0.normal = new Vector3(0.0f, 0.0f, -1.0f); v0.color = new Vector3(1.0f, 1.0f, 0.0f); Vertex v1 = new Vertex(); v1.position = new Vector3(1.0f, -1.0f, 4.0f); v1.normal = new Vector3(0.0f, 0.0f, -1.0f); v1.color = new Vector3(1.0f, 0.0f, 0.0f); Vertex v2 = new Vertex(); v2.position = new Vector3(0.0f, 1.0f, 4.0f); v2.normal = new Vector3(0.0f, 0.0f, -1.0f); v2.color = new Vector3(0.2f, 0.9f, 1.0f); Vertex[] va = { v0, v1, v2 }; ushort[] ia = { 0, 1, 2 }; triangle = new Mesh(3, va, 3, ia); } protected override void OnRenderFrame(FrameEventArgs e) { base.OnRenderFrame(e); GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit); Matrix4 modelview = Matrix4.LookAt(Vector3.Zero, Vector3.UnitZ, Vector3.UnitY); GL.MatrixMode(MatrixMode.Modelview); GL.LoadMatrix(ref modelview); triangle.Render(); SwapBuffers(); } } It doesn't draw anything.
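One mismatch does stand out in the listing, offered as a hedged observation rather than a verified fix: the indices are uploaded as ushort values, but DrawElements declares them as UnsignedInt, which by itself would be enough to keep anything from rendering. The matching call, using the same OpenTK API the code already relies on, would be:

// The index buffer holds ushort values, so the element type must agree:
GL.DrawElements(BeginMode.Triangles, numIndices,
    DrawElementsType.UnsignedShort, (IntPtr)0);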

    Read the article

  • Convert YouTube Videos to MP3 with YouTube Downloader

    - by DigitalGeekery
Are you looking for a way to take the music videos you watch on YouTube and convert them to MP3? Today we take a look at an easy way to convert those YouTube videos to MP3 for free with YouTube Downloader. The YouTube Downloader functions in two steps. First, it downloads the video from YouTube in MP4 format, and then allows you to convert that MP4 file to MP3. Note: It also supports conversion to some other formats such as AVI video, MOV, iPhone, PSP, 3GP, and WMV.

Installation and usage

Download and install YouTube Downloader. (See download link below.) Open the YouTube Downloader by clicking on the desktop icon. Find a YouTube video you’d like to convert to MP3 and copy the URL. Paste the URL into the “Enter video URL” text box in YouTube Downloader. When you hover your mouse over the text box, the text box will auto-fill with the URL from your clipboard. Select the “Download video from YouTube” radio button and click “Ok.” Choose a location to download your YouTube video and click “Save.” The video is downloaded in MP4 format. Now wait while the video is downloaded to your hard drive. Select the “Convert video (previously downloaded) from file” radio button. Click the (…) button to the right of the “Select video file” text box to browse for and select the MP4 file you just downloaded. Then select “MPEG Audio Layer (MP3)” from the “Convert to” drop-down list. Select “OK” to begin the conversion. Choose the conversion quality by moving the slider to the right or left. The options are: Low (96kbps bit rate), Medium (128kbps bit rate), Optimal (192kbps bit rate), and High (256kbps bit rate). Here you can select the output volume as well. Click “OK” when finished. If there is a portion of the beginning or end of the video that you wish to cut out of the MP3, select the “Cut video” check box and choose a Start and End time. Click “OK” when finished. Note: The start and end time represent the audio portion of the MP3 you wish to keep. All portions before and after these times will be cut. The conversion process will begin and should only take a few moments. Times will vary depending on the size of the video you’re converting. Conversion was successful! The MP3 you converted will be in the same directory you downloaded the video to. Now you’re ready to listen to your MP3 or import it to your Zune, iTunes, or music library. You may also want to delete the MP4 files after the conversion if you will no longer need them.

Conclusion

YouTube Downloader features a very simple interface that’s user-friendly and easy to use. It comes in handy when you watch videos that look horrible, but the sound quality is good. Or if you just need to hear the audio of something posted and don’t need the video. It also allows you to download from Google Video, MySpace, and others. 
Download YouTube Downloader

    Read the article

  • Java Spotlight Episode 108: Patrick Curran and Heather VanCura on JCP.Next @jcp_org

    - by Roger Brinkley
    Interview with Patrick Curran and Heather VanCura on JCP.Next. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

Show Notes

News
- Welcome to the newly merged JCP EC!
- The November/December issue of Java Magazine is now out
- Red Hat announces intent to contribute to OpenJFX
- New OpenJDK JEPs: JEP 168: Network Discovery of Manageable Java Processes; JEP 169: Value Objects
- Java EE 7 Survey
- Latest Java EE 7 Status
- GlassFish 4.0 Embedded (via @agoncal)

Events
- Nov 13-17, Devoxx, Antwerp, Belgium
- Nov 20, JCP Public Meeting (see details below)
- Nov 20-22, DOAG 2012, Nuremberg, Germany
- Dec 3-5, jDays, Göteborg, Sweden
- Dec 4-6, JavaOne Latin America, São Paulo, Brazil
- Dec 14-15, IndicThreads, Pune, India

Feature Interview

Patrick Curran is Chair of the Java Community Process organization. In this role he oversees the activities of the JCP's Program Management Office, including evolving the process and the organization, managing its membership, guiding specification leads and experts through the process, chairing Executive Committee meetings, and managing the JCP.org web site.

Patrick has worked in the software industry for more than 25 years, and at Sun and then Oracle for 20 years. He has a long-standing record in conformance testing, and before joining the JCP he led the Java Conformance Engineering team in Sun's Client Software Group. He was also chair of Sun's Conformance Council, which was responsible for defining Sun's policies and strategies around Java conformance and compatibility.

Patrick has participated actively in several consortia and communities, including the W3C (as a member of the Quality Assurance Working Group and co-chair of the Quality Assurance Interest Group) and OASIS (as co-chair of the Test Assertions Guidelines Technical Committee). Patrick's blog is here.

Heather VanCura manages the JCP Program Office and is responsible for the day-to-day nurturing, support, and leadership of the community. She oversees the JCP.org web site, JSR management and posting, community building, events, marketing, communications, and growth of the membership through new members and renewals. Heather has a front-row seat for studying trends within the community and recommending changes; several changes to the program in recent years have included enabling broader participation and increasing transparency and agility in JSR development.

When Heather joined the PMO staff in a community-building marketing manager role for the JCP program, she was responsible for establishing the JCP brand logo programs, the JCP.org site, and engaging the community in online surveys and usability studies. She also developed marketing reward programs, campaigns, sponsorships, and events for the JCP program, including the community gathering at the annual JavaOne Conference.

Before arriving at the JCP community in 2000, Heather worked with various technology companies. Heather enjoys speaking at conferences such as Devoxx, Java Zone, and the JavaOne Conferences. She maintains the JCP Blog, Twitter feed (@jcp_org), and Facebook page. Heather resides in the San Francisco Bay Area, California, USA.
JCP Executive Committee Public Meeting Details

Date & Time: Tuesday November 20, 2012, 3:00 - 4:00 pm PST

Location: Teleconference
Dial-in: +1 (866) 682-4770, conference code 627-9803, security code 52732 ("JCPEC" on your phone handset); or +1 (408) 774-4073. For global access numbers see http://www.intercall.com/oracle/access_numbers.htm
WebEx: Browse for the meeting from https://jcp.webex.com. No registration required (enter your name and email address). Password: JCPEC

Agenda
- JSR 355 (the EC merge) implementation report
- JSR 358 (JCP.next.3) status report
- 2.8 status update and community audit program
- Discussion/Q&A

Note: The call will be recorded and the recording published on jcp.org, so those who are unable to join in real-time will still be able to participate.

Links: September 2012 EC meeting PMO report with JCP 2.8 statistics. JSR 358 Project page.

What's Cool
- Sweden: Hot Java in the Winter
- GE Energy using InvokeDynamic for embedded development

    Read the article

< Previous Page | 99 100 101 102 103 104 105 106 107 108 109 110  | Next Page >