Search Results

Search found 50879 results on 2036 pages for 'web services'.


  • Worthwhile websites that a web developer should visit daily?

    - by I Like PHP
    Hello all, this may not be the right place to ask this question, but I can get the best answers here, so I'm posting it here. I'm a web developer working with PHP, MySQL, JavaScript, jQuery, AJAX, CSS, HTML, and JSON. I visit a few websites about web development every day. I know there are a lot of websites with very good knowledge that we are not aware of, so I think we should share them with each other. I'm listing some useful links that are worth visiting daily to gain knowledge and to develop better and faster. Please also suggest some good links that you visit regularly and that are the best in their field. I regularly visit:
    - Stack Overflow // no doubt, it is the best
    - Delicious // best social bookmarking website
    - Smashing Magazine // best site for a web developer to improve their knowledge
    - Net Tuts // best tutorial website with full explanations
    - The official PHP site // nothing to mention about it (just superb)
    - JavaScript Debugger // you can check your JavaScript/jQuery code here
    - The official jQuery site // best place to learn jQuery
    I'm waiting for your great responses. I also need a good and trusted website on MySQL; I think the official MySQL website is very confusing - I had to search a lot to find a single thing. If you have anything good regarding MySQL, then please share it. Thanks always.


  • Keeping up with Technology

    - by kennedysteve
    If you're like me, you have a hard time keeping up with all the technologies out there. The reality is that there are too many new technologies (languages, methodologies, tools, etc.). One of the ways I try to keep up with everything is by using good ol' RSS feeds in conjunction with Google Reader. Google Reader is an online aggregator of RSS feeds, and it also has a good companion app on Google Android. The nicest part of Google Reader for me is the "All Listings" view, which gives me a reverse-chronological view of ALL articles (mixed together) regardless of the actual RSS feed. This way, I get to see the newest articles first. I can then choose to hide the articles I've viewed, etc. Here is a list of my RSS feeds. Admittedly, some of these are all over the spectrum, but you might find one or two interesting.
    - .NET Rocks! | RSS = http://feeds.feedburner.com/netRocksFullMp3Downloads | Main Web Site = http://www.dotnetrocks.com
    - Channel 9 | RSS = http://channel9.msdn.com/Feeds/RSS | Main Web Site = http://channel9.msdn.com/
    - CodePlex | RSS = http://www.codeplex.com/site/feeds/rss | Main Web Site = http://www.codeplex.com
    - Connected Show Developer Podcast! | RSS = http://feeds.connectedshow.com/ConnectedShow | Main Web Site = http://www.ConnectedShow.com/
    - dnrTV | RSS = http://feeds.feedburner.com/DnrtvWmv?format=xml | Main Web Site = http://dnrtv.com
    - ebookshare | RSS = http://www.ebookshare.net/feed/ | Main Web Site = http://www.ebookshare.net
    - Geekswithblogs.net | RSS = http://feeds.feedburner.com/geekswithblogs | Main Web Site = http://geekswithblogs.net/mainfeed.aspx
    - Gmail Blog | RSS = http://feeds.feedburner.com/OfficialGmailBlog?format=xml | Main Web Site = http://gmailblog.blogspot.com/
    - Google Mobile Blog | RSS = http://feeds.feedburner.com/OfficialGoogleMobileBlog | Main Web Site = http://googlemobile.blogspot.com/
    - Herding Code | RSS = http://feeds.feedburner.com/herdingcode | Main Web Site = http://herdingcode.com
    - LearnVisualStudio.NET Videos | RSS = http://www.learnvisualstudio.net/videos.rss | Main Web Site = http://www.learnvisualstudio.net/
    - Microsoft Learning Upcoming Titles | RSS = http://learning.microsoft.com/rss/en-US/upcomingtitles?brand=Learning | Main Web Site = http://learning.microsoft.com:80/rss/en-US/upcomingtitles?brand=Learning
    - MS On-demand Webcasts | RSS = http://www.microsoft.com/communities/rss.aspx?&Title=On-Demand+Webcasts&RssTitle=Microsoft+Webcasts%3A+On-Demand+Webcasts&CMTYSvcSource=MSCOMMedia&WebNewsURL=http%3A%2F%2Fwww.microsoft.com%2Fevents%2FEventDetails.aspx&CMTYRawShape=list&Params=+%0D%0A%09~CMTYDataSvcParams%5E%0D%0A%09~arg+Name%3D'EventType'+Value%3D'OnDemandWebcast'%2F%5E%0D%0A%09~arg+Name%3D'ProviderID'+Value%3D'A6B43178-497C-4225-BA42-DF595171F04C'%2F%5E%0D%0A%09~arg+Name%3D'StartDate'+Value%3D'06%2F30%2F2006'%2F%5E%0D%0A%09~arg+Name%3D'EndDate'+Value%3D'Now%2B0'%2F%5E%0D%0A%09~%2FCMTYDataSvcParams%5E+&NumberOfItems=100 | Main Web Site = http://www.microsoft.com/events/default.mspx
    - MS Podcasts for Devs | RSS = http://www.microsoft.com/events/podcasts/default.aspx?podcast=rss&audience=Audience-e5381407-359f-4922-97d0-0237af790eee&pageId=x40 | Main Web Site = http://www.microsoft.com/events/podcasts/default.aspx?audience=Audience-e5381407-359f-4922-97d0-0237af790eee&pageId=x40&WT.rss_ev=f
    - MSDN Blogs | RSS = http://blogs.msdn.com/b/mainfeed.aspx?Type=BlogsOnly | Main Web Site = http://blogs.msdn.com/b/
    - MSDN Radio | RSS = http://www.microsoft.com/events/podcasts/default.aspx?topic=&audience=&view=&pageId=x73&seriesID=Series-b9139976-8d48-4249-9b89-ccd17891de1e.xml&podcast=rss&type=wma | Main Web Site = http://www.microsoft.com/events/podcasts/default.aspx?seriesID=Series-b9139976-8d48-4249-9b89-ccd17891de1e.xml&pageId=x73&WT.rss_ev=f
    - O'Reilly Deal of the Day | RSS = http://feeds.feedburner.com/oreilly/ebookdealoftheday | Main Web Site = http://oreilly.com
    - O'Reilly New | RSS = http://feeds.feedburner.com/oreilly/newbooks | Main Web Site = http://oreilly.com/
    - Safari Books Online | RSS = http://my.safaribooksonline.com/rss | Main Web Site = http://my.safaribooksonline.com/
    - ScottGu's Blog | RSS = http://weblogs.asp.net/scottgu/rss.aspx | Main Web Site = http://weblogs.asp.net/scottgu/default.aspx
    - SourceForge Community Blog | RSS = http://sourceforge.net/blog/feed/ | Main Web Site = http://sourceforge.net/blog
    - Stack Overflow | RSS = http://blog.stackoverflow.com/feed/ | Main Web Site = http://blog.stackoverflow.com
    - Stepcase Lifehack | RSS = http://www.lifehack.org/feed/ | Main Web Site = http://www.lifehack.org
    - TechNet Radio | RSS = http://www.microsoft.com/events/podcasts/default.aspx?topic=&audience=&view=&pageId=x73&seriesID=Series-cc4e3db2-9212-43c5-a57b-d43fa31e6452.xml&podcast=rss&type=wma | Main Web Site = http://www.microsoft.com/events/podcasts/default.aspx?seriesID=Series-cc4e3db2-9212-43c5-a57b-d43fa31e6452.xml&pageId=x73&WT.rss_ev=f
    - Wrox All New Titles | RSS = http://www.wrox.com/WileyCDA/feed/RSS_WROX_ALLNEW.xml | Main Web Site = http://www.wrox.com


  • ASP.NET MVC, Web API, Razor, and Open Source

    - by Leniel Macaferi
    Microsoft has made the ASP.NET MVC source code available under an open source license since the first release, V1. We have also integrated a number of great open source technologies into the product, and we now ship jQuery, jQuery UI, jQuery Mobile, jQuery Validation, Modernizr.js, NuGet, Knockout.js, and JSON.NET as an integral part of ASP.NET MVC releases. I'm very excited to announce today that we will also release the source code for ASP.NET Web API and ASP.NET Web Pages (also known as Razor) under an open source license (Apache 2.0), and that we will increase the development transparency of all three projects by hosting their code repositories on CodePlex (using the new Git support announced last week). This will enable a more open development model, where the whole community will be able to participate and provide feedback on checkins, fix bugs, develop new features, and build and test the products daily using the most up-to-date version of the source code and tests. For the first time, we will also allow developers outside Microsoft to submit patches and code contributions that the Microsoft development team will review for potential inclusion in the products. We announced a similarly open development approach with the Windows Azure SDK last December, and we think this approach is a great way to build closer relationships, since it enables an excellent feedback loop with developers - and, ultimately, enables us to deliver even better products as a result. Very importantly: ASP.NET MVC, Web API, and Razor will continue to be fully supported Microsoft products, released both independently and as part of Visual Studio (exactly the same way as today). They will also continue to be developed by the same Microsoft developers who build them today (in fact, we now have many more Microsoft developers working on the ASP.NET team). Our goal with today's announcement is to further strengthen the feedback loop on the products, enabling us to deliver even better products. We are really excited about the improvements this will bring.
    Learn more: You can now browse, sync, and build the source tree of ASP.NET MVC, Web API, and Razor at http://aspnetwebstack.codeplex.com. The current Git repository on the site corresponds to the RC (release candidate) milestone development tree that the team has been working on over the last few weeks; this tree contains both the source code and the tests, and it can be built and tested by anyone. Because the binaries produced are bin-deployable (DLLs dropped directly into the bin folder with no further dependencies), you can compile your own builds and try out product updates as soon as they are added to the repository. You can now also contribute directly to the development of the products by reviewing and providing feedback on code checkins, submitting bugs and helping us verify fixes as soon as they are checked in, suggesting and giving feedback on new features as they are implemented, and submitting your own patches or code contributions.
    Note that all code submissions will be rigorously reviewed and tested by the ASP.NET MVC team, and only those that meet a high bar of quality and fit the roadmap defined for the upcoming releases will be merged into the product source code.
    Summary: Everyone on the team is really excited about today's announcement - it is something we have been working toward for many years. The closer relationship between the community and the developers will let us build even better products, taking ASP.NET to the next level in terms of innovation and customer focus. Thanks! Scott
    P.S. In addition to the blog, I use Twitter for quick posts and to share links. My Twitter handle is @scottgu.
    (Translated from the original post by Leniel Macaferi.)


  • Windows Azure Use Case: Hybrid Applications

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx
    Description: Organizations see the need for computing infrastructures that they can "rent" or pay for only when they need them. They also understand the benefits of distributed computing, but do not want to create this infrastructure themselves. However, they may have considerations that prevent them from moving all of their current IT investment to a distributed environment:
    - Private data (they do not want to send or store sensitive data off-site)
    - A high-dollar investment in current infrastructure
    - Applications that currently run well but may need additional periodic capacity
    - Current applications not designed in a stateless fashion
    In these situations, a "hybrid" approach works best. In fact, with Windows Azure, a hybrid approach is an optimal way to implement distributed computing even when the stipulations above do not apply. Keeping the majority of the computing function local to the organization while exploring and expanding that footprint into Windows and SQL Azure is a good migration or expansion strategy. A "hybrid" architecture merely means that part of a computing cycle is shared between two architectures. For instance, some level of computing might be done in a Windows Azure web-based application, while the data is stored locally at the organization.
    Implementation: There are multiple methods for implementing a hybrid architecture, on a spectrum from very little to very much interaction between the local infrastructure and Windows or SQL Azure. The patterns fall into two broad schemas, and even these can be mixed.
    1. Client-Centric Hybrid Patterns. In this pattern, programs are coded such that the client system sends queries or compute requests to multiple systems. The "client" in this case might be a web-based codeset actually stored on another system (which acts as a client, the user's device serving as the presentation layer) or a compiled program. In either case, the code on the requesting client carries the burden of defining the layout of the requests. While this pattern is often the easiest to code, it is also the most brittle: any change in the architecture must be reflected on each client, although this can be mitigated by using a centralized system as the client, as in the web scenario.
    2. System-Centric Hybrid Patterns. Another approach is to create a distributed architecture by turning on-site systems into "services" that can be called from Windows Azure using the Service Bus or the Access Control Service (ACS) capabilities, with code calling them from a series of in-process client applications. In this pattern you move the "client" interface into the server application logic. If you do not wish to change the application itself, you can "layer" the results of the code return using a product (such as Microsoft BizTalk) that exposes a Web Services Description Language (WSDL) endpoint to Windows Azure using the AppFabric. In effect, this is similar to creating a Service-Oriented Architecture (SOA) environment, and it has the advantage of de-coupling your computing architecture: if each system offers the results of its software processing as a "service", the operating system or platform becomes immaterial, as long as it adheres to a service contract.
    There are important considerations when you federate a system, whether to Windows or SQL Azure or to any other distributed architecture. While these considerations apply to coding any application for distributed computing, they are especially important for a hybrid application:
    - Connection resiliency: On-premise applications normally enjoy low latency and good connection properties, something you are not always guaranteed in a distributed, hybrid application. Whether the client is centralized or distributed, the code should be able to handle extended retry logic.
    - Authorization and access: In a single authorization environment such as an Active Directory domain, security is handled at a user-password level. In a distributed computing environment you have more options. You can mitigate this by using the Windows Azure AppFabric ACS feature to make the Azure application aware of AppFabric as an ADFS provider. However, a claims-based authentication structure is often a superior choice.
    - Consistency and concurrency: When you have a Relational Database Management System (RDBMS), consistency and concurrency are part of the design. In a service architecture, you need to plan for sequential message handling and lifecycle.
    Resources:
    - How to build a hybrid on-premise/in-cloud application: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/09/how-to-build-a-hybrid-on-premise-in-cloud-application.aspx
    - General architecture guidance: http://blogs.msdn.com/b/buckwoody/archive/2010/12/21/windows-azure-learning-plan-architecture.aspx
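
    The connection-resiliency point above is the one that most often needs actual code. Below is a minimal, generic sketch in Java of extended retry logic with exponential backoff; the class name, method signature, and backoff policy are my own assumptions for illustration, not part of the original post:

        import java.time.Duration;
        import java.util.function.Supplier;

        public final class RetryingCaller {

            private RetryingCaller() {}

            // Retries a remote call that may fail transiently, doubling the delay between attempts.
            public static <T> T callWithRetry(Supplier<T> call, int maxAttempts, Duration initialDelay)
                    throws InterruptedException {
                Duration delay = initialDelay;
                RuntimeException last = null;
                for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                    try {
                        return call.get(); // the cross-boundary request, e.g. over Service Bus
                    } catch (RuntimeException transientFailure) {
                        last = transientFailure; // a real implementation should inspect the failure type
                        if (attempt < maxAttempts) {
                            Thread.sleep(delay.toMillis()); // back off before the next attempt
                            delay = delay.multipliedBy(2);  // exponential backoff
                        }
                    }
                }
                throw last; // all attempts exhausted
            }
        }

    A caller would then wrap each cross-boundary request, for example: callWithRetry(() -> client.fetchOrders(), 5, Duration.ofMillis(200)), where client.fetchOrders() stands in for whatever remote call the hybrid application makes.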


  • Web.config WordPress rewrite rules next to Magento

    - by Flo
    I've installed Magento on IIS in the folder E:\mydomain\wwwroot (I already have it all running correctly). I have no deeper magento folder; I placed all files directly in the wwwroot folder, so:
    wwwroot\app
    wwwroot\downloader
    wwwroot\errors
    wwwroot\includes
    etc...
    UPDATE: since I'm on IIS, my .htaccess is ignored completely and my web.config rules are used instead. Here's my web.config in folder e:\mydomain\wwwroot:
        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Magento SEO: remove index.php from URL">
                  <match url="^(?!index.php)([^?#]*)(\\?([^#]*))?(#(.*))?" />
                  <conditions>
                    <add input="{URL}" pattern="^/(media|skin|js)/" ignoreCase="false" negate="true" />
                    <add input="{REQUEST_FILENAME}" matchType="IsFile" ignoreCase="false" negate="true" />
                    <add input="{REQUEST_FILENAME}" matchType="IsDirectory" ignoreCase="false" negate="true" />
                  </conditions>
                  <action type="Rewrite" url="index.php/{R:0}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>
    Next, I wanted to install WordPress. I unzipped all files into the folder e:\mydomain\wwwroot\wordpress, browsed to www.mydomain.com/wordpress/wp-admin/install.php, and configured everything for my database. Everything was installed correctly. I then navigated to http://www.mydomain.com/wordpress/wp-login.php, where I typed my credentials. I seem to be logged in and am redirected to http://www.mydomain.com/wordpress/wp-admin/, but there I receive an empty page. I enabled detailed error messages in IIS following this article: http://www.iis.net/learn/troubleshoot/diagnosing-http-errors/how-to-use-http-detailed-errors-in-iis. I also checked with Fiddler and see that I receive a 500 error:
        GET /wordpress/wp-admin/ HTTP/1.1
        Host: www.mydomain.com
        Connection: keep-alive
        Cache-Control: max-age=0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.76 Safari/537.36
        Referer: http://www.mydomain.com/wordpress/wp-login.php
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8,nl;q=0.6
        Cookie: wordpress_fabec4083cf12d8de89c98e8aef4b7e3=floran%7C1381236774%7C2d8edb4fc6618f290fadb49b035cad31; wordpress_test_cookie=WP+Cookie+check; wordpress_logged_in_fabec4083cf12d8de89c98e8aef4b7e3=floran%7C1381236774%7Cbf822163926b8b8df16d0f1fefb6e02e

        HTTP/1.1 500 Internal Server Error
        Content-Type: text/html
        Server: Microsoft-IIS/7.5
        X-Powered-By: PHP/5.4.14
        X-Powered-By: ASP.NET
        Date: Sun, 06 Oct 2013 12:56:03 GMT
        Content-Length: 0
    My WordPress web.config in folder e:\mydomain\wwwroot\wordpress contains:
        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="wordpress" patternSyntax="Wildcard">
                  <match url="*"/>
                  <conditions>
                    <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true"/>
                    <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true"/>
                  </conditions>
                  <action type="Rewrite" url="index.php"/>
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>
    I also want my WordPress articles to be available at www.mydomain.com/blog instead of www.mydomain.com/wordpress. Of course, my admin links for Magento and WordPress should also work. How can I configure my web.config files to achieve the above?
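
    [Editor's note] One possible direction for the /blog requirement, as an untested sketch rather than a verified fix: add a rule like the one below to the root web.config, placed before the Magento rule so it wins for blog URLs, using the same URL Rewrite syntax already shown above. The rule name and pattern are assumptions; WordPress would also need its site URL settings adjusted to match, and the 500 error itself is usually diagnosed separately via the PHP error log.
        <rule name="Blog alias" stopProcessing="true">
          <match url="^blog(/(.*))?$" />
          <action type="Rewrite" url="wordpress/{R:2}" />
        </rule>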


  • SSIS Dashboard v0.4

    - by Davide Mauri
    Following the post on the SSISDB script on Gist, I've been working on an HTML5 SSIS Dashboard, in order to have a nice-looking, user-friendly and, most of all, useful SSIS dashboard. Since this is a "spare-time" project, I decided to develop it using Python, since it's THE data language (R aside): it's beautiful and powerful, well established and well documented, and it has a rich ecosystem around it. Plus, it has full support in Visual Studio through the amazing Python Tools for Visual Studio plugin. I also decided to use Flask, a very good micro-framework for creating websites, and the SB Admin 2.0 Bootstrap admin template, since I'm anything but a web designer. The result is here: https://github.com/yorek/ssis-dashboard and I can say I'm pretty satisfied with the work done so far (I've worked on it for probably less than 24 hours). Though there are some features I'd like to add in the future (historical execution times, some charts, a connection to AzureML to predict expected execution times), it's already usable. Of course, I've tested it only on my development machine, so check twice before putting it into production. But given that there is virtually no installation needed (you only need to install Python), and that all queries are based on standard SSISDB objects, I expect no big problems. If you want to test it, contribute, and/or give feedback, please feel free to do so... I would really love to see this little project become a community project! Enjoy!


  • Basket Analysis with #dax in #powerpivot and #ssas #tabular

    - by Marco Russo (SQLBI)
    A few days ago I published a new article on the DAX Patterns web site describing how to implement Basket Analysis in DAX. This topic is a very classical one and is also covered in the many-to-many revolution white paper. It has also been discussed in several blog posts, listed here in historical order:
    - Simple Basket Analysis in DAX by Chris Webb
    - PowerPivot, basket analysis and the hidden many to many by Alberto Ferrari
    - Applied Basket Analysis in Power Pivot using DAX by Gerhard Brueckl
    As usual, in DAX Patterns we try to present the required DAX formulas in a way that is easy to adapt to specific models. We also try to show a good implementation from a performance point of view. Further optimizations are always possible in DAX; however, in order to keep the model simple to adapt to different scenarios, we avoid presenting optimizations that would require particular assumptions or restrictions on the data model. I hope you will find the Basket Analysis pattern useful. Even if you do not need it today, reading the DAX formula is a good exercise to check your knowledge of evaluation contexts in DAX. For example, explaining how the following expression works is not a trivial task!
        [Orders with Both Products] :=
        CALCULATE (
            DISTINCTCOUNT ( Sales[SalesOrderNumber] ),
            CALCULATETABLE (
                SUMMARIZE ( Sales, Sales[SalesOrderNumber] ),
                ALL ( Product ),
                USERELATIONSHIP ( Sales[ProductCode], 'Filter Product'[Filter ProductCode] )
            )
        )
    The good news is that you can use the patterns even if you do not really understand all the details of the DAX formulas you are using! Any feedback on this new pattern is very welcome.


  • Web Developer - How to enhance my skillset?

    - by atif089
    First of all, pardon my English; I am not a native English speaker. I have been a web developer for the past 4 years, and in these 4 years I have used the internet to learn things. My current skill set comprises HTML, CSS, PHP, MySQL, and jQuery (I would not say JS; rather, jQuery, because I am good at using jQuery and bad with plain JavaScript). The above things seemed like an easier part of my life, as I learned them quickly. But now I would really like to enhance my skill set, and I am pretty confused about which way to move ahead, considering that I have to learn things on my own using the web and references.
    Design
    My first option is design. Shall I get started with design and start using Adobe Illustrator, Photoshop, Flash, and Flex? Design along with my previous skills looks like a money-maker to me, as the two are closely related where web design is concerned. And it's easier to learn the first two, and I hope I can find tutorials for the last two as well.
    Marketing
    A lot of my existing clients have asked me if I do SEO, so this looks like a good field as well. I cannot estimate the scope of SEO, but I assume it has a long future. Since I am business-minded as well, and there are a lot of tutorials around, should I start with SEO, SEM, social media, PPC, or whatever else it consists of?
    Software Development
    The most complex and hardest path (perhaps), but the easiest way to find a decent job in my location. If I go for software development, what platform should I ideally go after? Should it be C# for Windows development, ASP.NET (once again enhancing my existing skill set), J2EE (there are a lot of jobs for J2EE developers here), or plain C and C++? Also, I think it is difficult to learn software languages right from "Hello World" using the internet. I have no clue how I learned PHP, but I am sort of a pro now, while these other languages seem like a disaster to me. I can't figure out whether that is because PHP is easier or because there were a lot of tutorials around for PHP. Anyway, is it also possible to learn software development right from "Hello World" using the web?
    Database / Server (Linux) / Network Administration
    This seems like a job with decent pay, but there are fewer jobs, and it is a bit harder to learn online (not sure).
    What should be the right track for me to move ahead? P.S. - Age is not a constraint for me, as I am between 20-21, and I come from an IT background. I know quite a few basics: C (up to structures), C++ (up to objects; I was not able to understand templates), core Java (some basics and OOP concepts), RDBMS, Visual Basic 6 (used to work with this long back), and UNIX (a bunch of commands like who, finger, chmod, ls, and a bit of #bash). Or is there anything else that I left out? Please give me your feedback and the reasons why I should select a particular field.


  • Is a Mission Oriented Architecture (MOA) a better way to describe things than SOA?

    - by Brian Langbecker
    I might sound like a troll, but I would like to seriously understand this more deeply. The place I work at has started to use the term MOA instead of SOA, as we believe it provides more clarity, and we want to compare it to the true goals of SOA. A Mission-Oriented Architecture is an approach whereby an application is broken down into various business mission elements, with the database, file assets, and batch and real-time functionality all tightly coupled in terms of delivering that piece of the functionality. The mission allows the developers to focus on a specific piece of functionality to get it right, and to build it so that the piece can scale as an independent entity within the overall application. By tightly coupling the data, file assets, and business logic, you achieve the goal of working on a very large problem in bite-size pieces. Some definitions of SOA confuse a true "service" with what is essentially a method call on a web service. As an architect, I have always found it fun getting everyone on the same page regarding SOA. Is it better to call it a "mission" rather than a "service"?


  • Apache config that uses two document roots based on whether the requested resource exists in the first [closed]

    - by mattalexx
    Background
    I have a client site that consists of a CakePHP installation and a Magento installation:
    /web/example.com/
    /web/example.com/app/ <== CakePHP
    /web/example.com/app/webroot/ <== DocumentRoot
    /web/example.com/app/webroot/store/ <== Magento
    /web/example.com/config/ <== Site-wide config
    /web/example.com/vendors/ <== Site-wide libraries
    The server runs Apache 2.2.3.
    The problem
    The whole company has FTP access and has gotten used to clogging up the /web/example.com/, /web/example.com/app/webroot/, and /web/example.com/app/webroot/store/ directories with their own files. Sometimes these files need HTTP access and sometimes they don't. In any case, this mess makes my job harder when it comes to maintaining the site: code merges, tarring the live code, etc., are very complicated and usually require a bunch of filters.
    Abandoned solution
    At first, I thought I would set up a new subdomain on the same server, move all of their files there, and change their FTP chroot. But that wouldn't work, for these reasons: Firstly, I have no idea (and neither do they remember) what marketing materials they've sent out that contain URLs to certain resources they've uploaded to the server, using the main domain and also using arbitrary subdomains that hit the main virtual host because it has ServerAlias *.example.com. So suddenly having them use only static.example.com isn't feasible. Secondly, the PHP scripts in their projects are potentially very non-portable. I want their files to stay in as similar an environment as the one they were built in. Also, I do not want to debug their code to make it portable.
    Half-baked solution
    After some thought, I decided to find a way to section off the actual website files into another directory that they would not touch. The company's uploaded files would stay where they were. This would ensure that I didn't break any of their projects that needed HTTP access. It would look something like this:
    /web/example.com/ <== A bunch of their files are in here
    /web/example.com/app/webroot/ <== 1st DocumentRoot; a bunch of their files are in here
    /web/example.com/app/webroot/store/ <== Some more are in here
    /web/example.com/site/ <== New dir; contains only site files
    /web/example.com/site/app/ <== CakePHP
    /web/example.com/site/app/webroot/ <== 2nd DocumentRoot
    /web/example.com/site/app/webroot/store/ <== Magento
    /web/example.com/site/config/ <== Site-wide config
    /web/example.com/site/vendors/ <== Site-wide libraries
    After I made this change, I would not need to pay attention to anything except the stuff within /web/example.com/site/, and my job would be a lot easier. I would be the only one changing stuff in there. So here's where the Apache magic would happen: I need an HTTP request to http://www.example.com/ to first use /web/example.com/app/webroot/ as the document root. If nothing is found there (no miscellaneous uploaded company projects are found), try finding something within /web/example.com/site/app/webroot/. Another thing to keep in mind: the site might have some problems if the $_SERVER['DOCUMENT_ROOT'] variable reads /web/example.com/app/webroot/ but the actual files are within /web/example.com/site/app/webroot/. It would be better if the DOCUMENT_ROOT environment variable could be /web/example.com/site/app/webroot/ for anything within the /web/example.com/site/app/webroot/ directory.
    Conclusion
    Is my half-baked solution possible with Apache 2.2.3? Is there a better way to solve this problem?
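
    [Editor's note] For what it's worth, one common way to express the "try the first root, fall back to the second" logic on Apache 2.2 is mod_rewrite with filesystem checks. The sketch below is an untested assumption for this exact layout, not a verified answer; it relies on mod_rewrite treating a substitution that starts with an existing filesystem path as a file path when used in virtual-host context:
        # In the VirtualHost for example.com (assumes mod_rewrite is loaded)
        DocumentRoot /web/example.com/site/app/webroot
        RewriteEngine On
        # If the request maps to a real file or directory in the legacy webroot, serve it from there
        RewriteCond /web/example.com/app/webroot%{REQUEST_URI} -f [OR]
        RewriteCond /web/example.com/app/webroot%{REQUEST_URI} -d
        RewriteRule ^/(.*)$ /web/example.com/app/webroot/$1 [L]
        # Otherwise the request falls through to the new DocumentRoot above
    A side effect of this arrangement: because DocumentRoot points at the new tree, $_SERVER['DOCUMENT_ROOT'] would read /web/example.com/site/app/webroot for the CakePHP/Magento code, which is what the question asks for, while the legacy uploads are rewritten to their real filesystem location.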


  • ADF Mobile - Update through Web Service (with ADF Business Components)

    - by Shay Shmeltzer
    In my previous blog entry I went over the basics of exposing ADF Business Components through service interfaces and developing a simple ADF Mobile application that accesses and fetches data from those services. In this entry we'll dive a bit deeper and address an update scenario through these web service interfaces. You can see the full demo video at the end of the post. In the first steps I show how to add an explicit method execution to fetch a specific record we want to update on the second page of a flow. For an update, you'll be invoking a service method and passing the record you want to update as a parameter. As in many other web services scenarios, we need to provide a complete object of a specific type to the method. The ADF Web service data control helps you here by offering an object of this type that you can drag and drop into your page. The next step is to make sure to fill that object with the values you want to update. In the demo we do this through coding in a backing bean, which shows how to use the AdfmfJavaUtilities utility. The code gets the value from one field, gets a pointer to the parallel update field, and then copies from one to the other. At the end of the bean we manually execute the call to the update method on the Web service. Here is the demo: (embedded video; see the original post). Here is the code used in the backing bean in the demo above:
        package a.mobile;

        import javax.el.MethodExpression;
        import javax.el.ValueExpression;

        import oracle.adfmf.amx.event.ActionEvent;
        import oracle.adfmf.framework.api.AdfmfJavaUtilities;
        import oracle.adfmf.framework.model.AdfELContext;

        public class backing {

            public backing() {
            }

            public void copyAndUpdate(ActionEvent actionEvent) {
                AdfELContext adfELContext = AdfmfJavaUtilities.getAdfELContext();
                // Copy each field from the display binding to the parallel update binding
                ValueExpression ve = AdfmfJavaUtilities.getValueExpression("#{bindings.DepartmentName.inputValue}", String.class);
                ValueExpression ve3 = AdfmfJavaUtilities.getValueExpression("#{bindings.DepartmentName1.inputValue}", String.class);
                ve3.setValue(adfELContext, ve.getValue(adfELContext));
                ve = AdfmfJavaUtilities.getValueExpression("#{bindings.DepartmentId.inputValue}", int.class);
                ve3 = AdfmfJavaUtilities.getValueExpression("#{bindings.DepartmentId1.inputValue}", int.class);
                ve3.setValue(adfELContext, ve.getValue(adfELContext));
                ve = AdfmfJavaUtilities.getValueExpression("#{bindings.ManagerId.inputValue}", int.class);
                ve3 = AdfmfJavaUtilities.getValueExpression("#{bindings.ManagerId1.inputValue}", int.class);
                ve3.setValue(adfELContext, ve.getValue(adfELContext));
                ve = AdfmfJavaUtilities.getValueExpression("#{bindings.LocationId.inputValue}", int.class);
                ve3 = AdfmfJavaUtilities.getValueExpression("#{bindings.LocationId1.inputValue}", int.class);
                ve3.setValue(adfELContext, ve.getValue(adfELContext));
                // Manually invoke the update method on the Web service
                MethodExpression me = AdfmfJavaUtilities.getMethodExpression("#{bindings.updateDepartmentsView1.execute}", Object.class, new Class[] {});
                me.invoke(adfELContext, new Object[] {});
            }
        }


  • How should a JEE application store credentials for logging in to an external system?

    - by FGreg
    I am in a situation where I have a Web Application (WAR) that is accessing a REST service provided by another application. The REST service uses Basic HTTP Authentication. So that means the application calling the REST service needs to store user credentials somehow. To further complicate things, this is an enterprise, so there are different 'regions' the application moves through which will have different credentials for the same service (think local development, development region, integration region, user test region, production, etc...) My first instinct is that the credentials should be stored by the JEE container and the application should ask the container for the credentials (probably via JNDI?). I'm beginning to read about Java Authentication and Authorization Service (JAAS) but I'm not sure if that is the appropriate solution to this problem. How should a JEE application store credentials for logging in to an external system? A few more details about my WAR. It is a Spring-Integration project that has no front-end. The container I am working with is Websphere. I am using JEE 5 and Spring 4.0.1. To this point I have not needed to consider spring-security... does this situation mean I should re-evaluate that decision?
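
    [Editor's note] As a concrete illustration of the JNDI instinct mentioned above, here is a minimal sketch, not an endorsed pattern: the credentials are defined per region in the container (for example as <env-entry> elements in web.xml, overridden per environment), and the application looks them up instead of hard-coding them. The JNDI names rest/user and rest/password are hypothetical:
        import javax.naming.InitialContext;
        import javax.naming.NamingException;
        import javax.xml.bind.DatatypeConverter;

        public final class RestCredentials {

            private RestCredentials() {}

            // Builds the value of the Authorization header for the Basic-auth REST call,
            // using credentials the container supplies for the current region.
            public static String basicAuthHeaderValue() throws NamingException {
                InitialContext ctx = new InitialContext();
                String user = (String) ctx.lookup("java:comp/env/rest/user");       // hypothetical name
                String password = (String) ctx.lookup("java:comp/env/rest/password"); // hypothetical name
                byte[] raw;
                try {
                    raw = (user + ":" + password).getBytes("UTF-8");
                } catch (java.io.UnsupportedEncodingException e) {
                    throw new IllegalStateException(e); // UTF-8 is always available
                }
                return "Basic " + DatatypeConverter.printBase64Binary(raw);
            }
        }
    On WebSphere specifically, a more managed alternative along the same lines is a J2C authentication data alias administered by the server, which keeps the secret out of deployable artifacts entirely.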


  • Using a random string to authenticate HMAC?

    - by mrwooster
    I am designing a simple web service and want to use HMAC for authentication to the service. For the purpose of this question we have:
    - a web service at example.com
    - a secret key shared between a user and the server [K]
    - a consumer ID which is known to the user and the server (but is not necessarily secret) [D]
    - a message which we wish to send to the server [M]
    The standard HMAC implementation would involve using the secret key [K] and the message [M] to create the hash [H], but I am running into issues with this. The message [M] can be quite long and tends to be read from a file. I have found it very difficult to produce a correct hash consistently across multiple operating systems and programming languages because of hidden characters which make it into various file formats. This is of course bad implementation on the client side (100%), but I would like this web service to be easily accessible and not have trouble with different file formats. I was thinking of an alternative which would allow the use of a short (5-10 char) random string [R] rather than the message for authentication, e.g. H = HMAC(K,R). The user then passes the random string to the server, and the server checks the HMAC server-side (using the random string + shared secret). As far as I can see, this produces the following issues:
    - There is no message integrity - this is OK; message integrity is not important for this service.
    - A user could re-use the hash with a different message - I can see two ways around this: combine the random string with a timestamp so the hash is only valid for a set period of time, or only allow each random string to be used once.
    - Since the client is in control of the random string, it is easier to look for collisions.
    I should point out that the principal reason for authentication is to implement rate limiting on the API service. There is zero need for message integrity, and it's not a big deal if someone can forge a single request (but it is if they can forge a very large number very quickly). I know that the correct answer is to make sure the message [M] is the same on all platforms/languages before hashing it. But, taking that out of the equation, is the above proposal an acceptable second best?
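
    [Editor's note] To make the proposal concrete, here is a minimal sketch in Java of H = HMAC(K, R + timestamp) using the standard javax.crypto API; the field delimiter, key length, and hex encoding are illustrative assumptions, not a standard:
        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;
        import java.nio.charset.StandardCharsets;
        import java.security.SecureRandom;

        public final class RequestSigner {

            private static final SecureRandom RANDOM = new SecureRandom();
            private static final String ALPHABET =
                "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";

            private RequestSigner() {}

            // Generates the short random string R (5-10 chars in the proposal).
            public static String randomString(int length) {
                StringBuilder sb = new StringBuilder(length);
                for (int i = 0; i < length; i++) {
                    sb.append(ALPHABET.charAt(RANDOM.nextInt(ALPHABET.length())));
                }
                return sb.toString();
            }

            // Computes hex(HMAC-SHA256(K, R + "|" + timestamp)).
            public static String sign(String secretKey, String r, long timestamp) throws Exception {
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
                byte[] h = mac.doFinal((r + "|" + timestamp).getBytes(StandardCharsets.UTF_8));
                StringBuilder hex = new StringBuilder();
                for (byte b : h) {
                    hex.append(String.format("%02x", b));
                }
                return hex.toString();
            }
        }
    The client would send D, R, the timestamp, and the resulting hash; the server looks up K from D, recomputes the same HMAC, and rejects requests whose timestamp is stale or whose R has already been seen, which covers both mitigations listed in the question.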


  • Our Look at Opera 10.50 Web Browser

    - by Asian Angel
    Everyone has been talking about the newest version of Opera recently, but perhaps you have not looked at it too closely yet. Today we will take a look at 10.50 and let you see what this "new browser" is all about.
    The New Engines
    - Carakan JavaScript Engine: runs web applications up to 7 times faster than its predecessor, Futhark
    - Vega Graphics Library: enables super-fast and smooth graphics on everything from tab switching to webpage animation
    - Presto 2.5: provides support for HTML5, CSS2.1, and the latest CSS3 standards
    A Look at the Features Available
    If you have installed or used older versions of Opera before, then the default look after a clean install will probably seem rather different. The main differences in appearance are located within the "glass border" areas of the browser. The "Speed Dial" setup looks and works just as well as in previous versions. You can set a favorite wallpaper or image as your background and choose the number of "dials" using the "Configure Speed Dial" command. One of the standout differences is the "O Button". All of the menus have been condensed into this single access point, but it only takes a few moments to find what you are looking for. If you have used this style before in earlier versions of Opera, some of the items have been moved around. For those who prefer the "Menu Bar", it can be easily restored using the "Show Menu Bar" command. If desired, you can actually extend the "Tab Bar" downwards to display thumbnails of your open tabs. Just use your mouse to grab the bottom of the "Tab Bar" and adjust it to suit your personal needs. The only problem with this feature is that it will quickly use up a good-sized portion of your available UI and browser window space. The "Password Manager" is ready to access when needed... the background of the button will turn a shiny metallic blue when you open a webpage that you have login information saved for. One of the new features is a small "Recycle Bin" button in the upper right corner. Clicking on this will display a list of recently closed tabs, giving you easy access to any tabs that you may have accidentally closed. This is definitely a great feature to have as an easy-access button. For those who were used to how the zoom feature looked before, it has a new look: instead of the pop-up menu-type listing of view sizes present before, you now have a slider button that you can use to adjust the zoom level. For our default setup here, the available sidebar panels were "Bookmarks, Widgets, Unite, Notes, Downloads, History, & Panels". Additional panels such as "Links, Windows, Search, Info, etc." are available if you want and/or need them (accessible using the "Panels" plus-sign button). The "Opera Link" button makes it easy for you to synchronize your "Speed Dial, Bookmarks, Personal Bar, Custom Searches, History & Notes". Note: "Opera Link" requires an account and can be signed up for using the link provided below. Want to share files with your family and friends? "Unite" allows you to do that and more. With "Unite" you can stream music, show photo galleries, share files and/or folders, and host webpages directly from your browser. We have a more in-depth look at "Unite" in our article here. Note: use of "Unite" requires an Opera account. Got a slow internet connection? "Opera Turbo" can help with that by running the web traffic through Opera's compression servers to speed up your web browsing.
    Keep in mind that "Opera Turbo" will not engage if you are accessing a secure website (e.g. your bank's website), thus preserving your security. Note: "Opera Turbo" can be set up to automatically detect slow internet connections (e.g. crowded Wi-Fi in a cafe). Opera now has a built-in private browsing mode for those who prefer anonymous browsing and want to keep the history records clean on their computer. To access it, go to "Tabs and windows" and select "New private tab" or "New private window" as desired. When you open your new private tab or window, you will see the following message with details on how Opera will handle browsing information, along with a large "door hanger" symbol. Notice that one tab is locked into private browsing mode while the others are still working in regular browsing mode. Very nice! A miniature version of the "door hanger" symbol will be present on any tab that is locked into private browsing mode. If you are using Windows 7, then you will love how things look from your Taskbar. Here you can see four very nice-looking thumbnails for the tabs that we had open. All that you have to do is click on the desired thumbnail... The context menu looks just as lovely as the thumbnails and definitely has some terrific functionality built into it.
    Add Enhanced Aero Capability
    If you love Aero and want more for your new Opera install, then we have the perfect theme for you. The theme's name is Z1-AV69, and once you have downloaded it, you will need to place it in the "Skins" subfolder in Opera's "Program Files" folder. Note: for our example we used version 1.10, but version 2.00 is now available (link provided below). Once you have restarted Opera, go to the "O Menu" and select "Appearance". When the "Appearance" window opens, click on "Z1-Glass Skin" and then click "OK". All of a sudden you will have more "Aero goodness" to enjoy. Compare this screenshot with the one at the top of this article... the only part that is not transparent now is the browser window area itself. Want even more "Aero goodness"? Right-click on the "Tab Bar" and set "Tab Bar Placement" to "Left". Note: you can achieve the same effect by setting the "Tab Bar Placement" to "Right". With the "Speed Dial" visible, you will be able to see your wallpaper with ease. While this is obviously not for everyone, it does make for a great visual trick.
    Portable Versions
    Perhaps you need this wonderful new version of Opera to go with you wherever you go during the day. Not a problem... just visit the Opera USB website to choose a version that works best for you. You can select from "Zip or Exe" setup files and, if needed, update an older portable version using a "Zipped Update Files Package". If you are updating an older version, keep in mind that you will need to delete the old OperaUSB.exe file due to changes in the new setup files. During our tests, updating older portable versions went well for the most part, but we did experience a few odd UI quirks here and there... so we recommend setting up a clean install if possible.
    Conclusion
    The new 10.50 release is a pleasure to use and is a recommended install for your system. Whether you are considering trying Opera for the first time or have been using it for a bit, we think that you will be pleased with everything that the 10.50 release has to offer. For those who would like to add User Scripts to Opera, be certain to look at our how-to article here.
    Links
    - Download Opera 10.50 for your location (Windows)
    - Get the latest snapshot versions for Linux & Mac
    - Sign up for an Opera Link account
    - View in-depth detail on Opera 10.50's features
    - Download the Z1-AV69 Aero theme
    - Download Portable Opera 10.50


  • CI Deployment Of Azure Web Roles Using TeamCity

    - by srkirkland
    After recently migrating an important new website to use Windows Azure "Web Roles", I wanted an easier way to deploy new versions to the Azure staging environment, as well as a reliable process to roll back deployments to a certain "known good" source control commit checkpoint. By configuring our JetBrains TeamCity CI server to use the Windows Azure PowerShell cmdlets to create new automated deployments, I'll show you how to take control of your Azure publish process.
    Step 0: Configuring your Azure Project in Visual Studio
    Before we can start looking at automating the deployment, we should make sure manual deployments from Visual Studio are working properly. Detailed information for setting up deployments can be found at http://msdn.microsoft.com/en-us/library/windowsazure/ff683672.aspx#PublishAzure or by doing some quick Googling, but the basics are as follows:
    1. Install the prerequisite Windows Azure SDK.
    2. Create an Azure project by right-clicking on your web project and choosing "Add Windows Azure Cloud Service Project" (or by manually adding that project type).
    3. Configure your Role and Service Configuration/Definition as desired.
    4. Right-click on your Azure project, choose "Publish", create a publish profile, and push to your web role.
    You don't actually have to do step #4 and create a publish profile, but it's a good exercise to make sure everything is working properly. Once your Windows Azure project is set up correctly, we are ready to move on to understanding the Azure publish process.
    Understanding the Azure Publish Process
    The actual Windows Azure project is fairly simple at its core: it builds your dependent roles (in our case, a web role) against a specific service and build configuration, and outputs two files:
    - ServiceConfiguration.Cloud.cscfg: the file containing your package configuration info, for example Instance Count, OsFamily, ConnectionString, and other setting information.
    - ProjectName.Azure.cspkg: the package file that contains the guts of your deployment, including all deployable files.
    When you package your Azure project, these two files will be created within the directory ./[ProjectName].Azure/bin/[ConfigName]/app.publish/. If you want to build your Azure project from the command line, it's as simple as calling MSBuild on the "Publish" target: msbuild.exe /target:Publish
    Windows Azure PowerShell Cmdlets
    The last pieces of the puzzle that make CI automation possible are the Azure PowerShell Cmdlets (http://msdn.microsoft.com/en-us/library/windowsazure/jj156055.aspx). These cmdlets are what will let us create deployments without Visual Studio or other user intervention.
    Preparing TeamCity for Azure Deployments
    Now we are ready to get our TeamCity server set up so it can build and deploy Windows Azure projects, which we now know requires the Azure SDK and the Windows Azure PowerShell Cmdlets. Installing the Azure SDK is easy enough: just go to https://www.windowsazure.com/en-us/develop/net/ and click "Install". Once this SDK is installed, I recommend running a test build to make sure your project is building correctly. You'll want to set up your build step using MSBuild with the "Publish" target against your solution file. Mine looks like this: Assuming the build was successful, you will now have the two *.cspkg and *.cscfg files within your build directory.
    If the build was red (failed), take a look at the build logs and keep an eye out for "unsupported project type" or other build errors, which will need to be addressed before the CI deployment can be completed. With a successful build, we are now ready to install and configure the Windows Azure PowerShell Cmdlets:
    1. Follow the instructions at http://msdn.microsoft.com/en-us/library/windowsazure/jj554332 to install the Cmdlets and configure PowerShell.
    2. After installing the Cmdlets, you'll need to get your Azure subscription info using the Get-AzurePublishSettingsFile command.
    3. Store the resulting *.publishsettings file somewhere you can get to easily, like C:\TeamCity, because you will need to reference it later from your deploy script.
    Scripting the CI Deploy Process
    Now that the cmdlets are installed on our TeamCity server, we are ready to script the actual deployment using a TeamCity "PowerShell" build runner. Before we look at any code, here's a breakdown of our deployment algorithm:
    1. Set up your variables, including the location of the *.cspkg and *.cscfg files produced in the earlier MSBuild step (remember, the folder is something like [ProjectName].Azure/bin/[ConfigName]/app.publish/).
    2. Import the Windows Azure PowerShell Cmdlets.
    3. Import and set your Azure subscription information (this is basically your authentication/authorization step, so protect your settings file).
    4. Look for a current deployment; if you find one, upgrade it, else create a new deployment.
    Pretty simple and straightforward. Now let's look at the code (also available as a gist here: https://gist.github.com/3694398):
        $subscription = "[Your Subscription Name]"
        $service = "[Your Azure Service Name]"
        $slot = "staging" #staging or production
        $package = "[ProjectName]\bin\[BuildConfigName]\app.publish\[ProjectName].cspkg"
        $configuration = "[ProjectName]\bin\[BuildConfigName]\app.publish\ServiceConfiguration.Cloud.cscfg"
        $timeStampFormat = "g"
        $deploymentLabel = "ContinuousDeploy to $service v%build.number%"

        Write-Output "Running Azure Imports"
        Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\*.psd1"
        Import-AzurePublishSettingsFile "C:\TeamCity\[PSFileName].publishsettings"
        Set-AzureSubscription -CurrentStorageAccount $service -SubscriptionName $subscription

        function Publish(){
            $deployment = Get-AzureDeployment -ServiceName $service -Slot $slot -ErrorVariable a -ErrorAction silentlycontinue

            if ($a[0] -ne $null) {
                Write-Output "$(Get-Date -f $timeStampFormat) - No deployment is detected. Creating a new deployment. "
            }
            if ($deployment.Name -ne $null) {
                #Update deployment in place (usually faster, cheaper, won't destroy VIP)
                Write-Output "$(Get-Date -f $timeStampFormat) - Deployment exists in $service. Upgrading deployment."
                UpgradeDeployment
            } else {
                CreateNewDeployment
            }
        }

        function CreateNewDeployment() {
            write-progress -id 3 -activity "Creating New Deployment" -Status "In progress"
            Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: In progress"

            $opstat = New-AzureDeployment -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service

            $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot
            $completeDeploymentID = $completeDeployment.deploymentid

            write-progress -id 3 -activity "Creating New Deployment" -completed -Status "Complete"
            Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: Complete, Deployment ID: $completeDeploymentID"
        }

        function UpgradeDeployment() {
            write-progress -id 3 -activity "Upgrading Deployment" -Status "In progress"
            Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: In progress"

            # perform Update-Deployment
            $setdeployment = Set-AzureDeployment -Upgrade -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service -Force

            $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot
            $completeDeploymentID = $completeDeployment.deploymentid

            write-progress -id 3 -activity "Upgrading Deployment" -completed -Status "Complete"
            Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: Complete, Deployment ID: $completeDeploymentID"
        }

        Write-Output "Create Azure Deployment"
        Publish
    Creating the TeamCity Build Step
    The only thing left is to create a second build step, after your MSBuild "Publish" step, with the build runner type "PowerShell". Then set your script to "Source Code", set the script execution mode to "Put script into PowerShell stdin with "-Command" arguments", and copy/paste in the above script (replacing the placeholder sections with your values). This should look like the following:
    Wrap Up
    After combining the MSBuild /target:Publish step (which creates the necessary Windows Azure *.cspkg and *.cscfg files) and a PowerShell script step which utilizes the Azure PowerShell Cmdlets, we have a fully deployable build configuration in TeamCity. You can configure this step to run whenever you'd like using build triggers - for example, you could even deploy whenever a new master branch commit comes in and passes all required tests. In the script I've hardcoded that every deployment goes to the staging environment on Azure, but you could deploy straight to production if you want to, or even set up a deployment configuration variable and set it as desired. After your TeamCity build configuration is complete, you'll see something that looks like this: Whenever you click the "Run" button, all of your code will be compiled, published, and deployed to Windows Azure! One additional enormous benefit of automating the process this way is that you can easily deploy any specific source control changeset by clicking the little ellipsis button next to "Run." This will bring up a dialog like the one below, where you can select the last change to use for your deployment. Since Azure Web Role deployments don't have any rollback functionality, this is a critical feature. Enjoy!


  • Learning content for MCSDs: Web Applications and Windows Store Apps using HTML5

    Recently, I started again to learn for various Microsoft certifications. First candidate on my way to the MCSD: Web Applications is Exam 70-480: Programming in HTML5 with JavaScript and CSS3.

    Motivation to go for a Microsoft exam

    I guess this is quite personal, but let me briefly describe my intentions for taking this exam. First, I have been doing web development since the 1990s. Working with HTML, CSS and JavaScript happens almost daily in my workspace. And honestly, I don't only do 'pure' web development but have already integrated several HTML/CSS/JavaScript frontend UIs into an existing desktop application (written in Visual FoxPro), including two-way communication and data exchange. Hm, might be an interesting topic for another blog article here... Second, this exam has a very interesting aspect which is listed at the bottom of the exam's details:

    Credit Toward Certification
    When you pass Exam 70-480: Programming in HTML5 with JavaScript and CSS3, you complete the requirements for the following certification(s): Programming in HTML5 with JavaScript and CSS3 Specialist
    Exam 70-480 also counts as credit toward the following certification(s): MCSD: Web Applications, MCSD: Windows Store Apps using HTML5

    So, passing one single exam earns you a Specialist certification straight away and opens the path to higher levels of certification.

    Preparations and learning path

    Well, a newsletter from Microsoft Learning (MSL) caught my interest in the circumstances and learning materials for this particular exam. As of writing this article there is a promotional voucher code available which enables you to register for this exam for free! Simply register yourself with or log into your existing account at Prometric, choose the exam at a testing facility near you and enter the voucher code HTMLJMP (available through 31.03.2013 or while supplies last). Hurry up, there are restrictions...

    As stated above, I'm already very familiar with web development and the programming flavours involved. But of course, it is always good to freshen up your knowledge and reflect on yourself. Microsoft is putting a lot of effort into attracting all kinds of developers to 'App Development', whether for the Windows 8 Store or the Windows Phone 8 Store. They simply need more apps. This demand for skilled developers also comes with a nice side-effect: lots and lots of material to study. During the first couple of hours, I could easily gather high-quality preparation material - again for free! The following is just a small list of starting points. If you have more resources, please drop me a message in the comment section, and I'll be glad to update this article accordingly.

    Developing HTML5 Apps Jump Start
    This is an accelerated jump start video course on development of HTML5 apps for Windows 8. There are six modules that are split into two video sessions per module. Very informative and intense course material. This is packed stuff taken from an official preparation course for exam 70-480.

    Developing Windows Store Apps with HTML5 Jump Start
    Again, an accelerated preparation video course on Windows 8 apps. There are six modules with two video sessions each which will catapult you to your exam. This also relates to preparation for exam 70-481.

    Programming Windows 8 Apps with HTML, CSS, and JavaScript
    Kraig Brockschmidt delves into the ups and downs of Windows 8 app development over 800+ pages. Great eBook to read, study, and practice the samples from - best of all, it's free.

    codeSHOW()
    This is a Windows 8 HTML/JS project with the express goal of demonstrating simple development concepts for the Windows 8 platform. Code, code and more code... absolutely great stuff to study and practice.

    Microsoft Virtual Academy
    I already wrote about the MVA in a previous article. Well, if you haven't registered yourself yet, now is the time.

    The list is surely not complete, but it might keep you busy for at least one or even two weeks to get through the material. Please don't hesitate to add more resources in the comment section. Right now, I'm already through all the videos once, and digging my way through chapter 4 of Kraig's book.

    Additional material - Pluralsight

    Apart from those free online resources, I'm also following some courses from the excellent library of Pluralsight. They already have their own section for Windows 8 development, and of course you get companion material about HTML5, CSS and JavaScript in other sections, too:

    Introduction to Building Windows 8 Applications
    Building Windows 8 Applications with JavaScript and HTML
    Selling Windows 8 Apps
    HTML5 Fundamentals
    Using HTML5 and CSS3
    HTML5 Advanced Topics
    CSS3
    etc...

    Interesting to see that Michael Palermo provides his course material on multiple platforms. Fantastic! You might also pay a visit to his personal blog. Hm, it just came to my mind that Aaron Skonnard of Pluralsight publishes so-called '24 hours Learning Paths' based on courses available in the course library. I would be interested to see a combination for Windows 8 app development using HTML5, CSS3 and JavaScript in the future.

    Recommended workspace environment

    Well, you might have guessed it, but this requires Windows 8, Visual Studio 2012 Express or another flavour, and a valid developer license. Thanks to an MSDN subscription I'm working on VS 2012 Premium with some additional tools by Telerik. Honestly, the fastest way to get up and running for Windows 8 app development is the source code archive of codeSHOW(). It does not only give you all the source code in general but contains a couple of SDKs like Bing Maps, Microsoft Advertising, Live ID, and Telerik Windows 8 controls... for free!

    Hint: Get the Windows Phone 8 SDK as well. Don't worry, while you are studying the material for Windows 8 you will be able to leverage this knowledge for development on the phone platform, too. It took roughly one to two hours to get my workspace and learning environment ready, at least that was my time frame due to a slow internet connection and an aged spare machine. ;-)

    Oh, before I forget to mention it: as soon as you're done, go quickly to the Windows Store and search for ClassBrowserPlus. You might not need it ad hoc for your development using HTML5, CSS and JavaScript, but I think it is a great developer's utility that enables you to view the properties, methods and events (along with help text) for all Windows 8 classes. It's always good to look behind the scenes and explore how things are made.

    Idea: Start/join a learning group

    The way you learn new things or intensify your knowledge in a certain technology is completely up to your personal preference. Back in my days at university, we used to meet once or twice a week in a small quiet room to exchange our progress, questions and problems we ran into. In general, I recommend that any software craftsman lift their butt and get out to exchange with other developers. Personally, I like this approach, as it gives you new points of view and an insight into others' own experience with certain techniques and how they managed to solve tricky issues. Just keep it relaxed and not too formal after all, and you might have a good time away from your dull office desk. Give your machine a break, too.

    Read the article

  • Set Context User Principal for Customized Authentication in SignalR

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/05/27/set-context-user-principal-for-customized-authentication-in-signalr.aspx

    Currently I'm working on a single page application project which is built on AngularJS and ASP.NET WebAPI. When I needed to implement some features that require real-time communication and push notifications from the server side, I decided to use SignalR. SignalR is a project currently developed by Microsoft to build web-based, real-time communication applications. You can find it here. With a lot of introductions and guides it's not a difficult task to use SignalR with ASP.NET WebAPI and AngularJS. I followed this and this, even though they are based on SignalR 1. But when I tried to implement the authentication for my SignalR I struggled for 2 days and finally worked out a solution myself. It might not be the best one, but it actually solved all my problems.

    In many articles it is said that you don't need to worry about the authentication of SignalR, since it uses the web application's authentication. For example, if your web application utilizes forms authentication, SignalR will use the user principal your web application's authentication module resolved, and check whether the principal exists and is authenticated. But in my solution my ASP.NET WebAPI, which is hosting SignalR as well, utilizes OAuth Bearer authentication. So when the SignalR connection was established the context user principal was empty, and I needed to authenticate and set the principal myself.

    Firstly, I created a class derived from "AuthorizeAttribute", which takes responsibility for authentication both when a SignalR connection is established and when any hub method is invoked.

        public class QueryStringBearerAuthorizeAttribute : AuthorizeAttribute
        {
            public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request)
            {
            }

            public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod)
            {
            }
        }

    The method "AuthorizeHubConnection" will be invoked when any SignalR connection is established. Here I retrieve the Bearer token from the query string and try to decrypt it to recover the logged-in user's claims.

        public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request)
        {
            var dataProtectionProvider = new DpapiDataProtectionProvider();
            var secureDataFormat = new TicketDataFormat(dataProtectionProvider.Create());
            // authenticate by using the bearer token in the query string
            var token = request.QueryString.Get(WebApiConfig.AuthenticationType);
            var ticket = secureDataFormat.Unprotect(token);
            if (ticket != null && ticket.Identity != null && ticket.Identity.IsAuthenticated)
            {
                // set the authenticated user principal into the environment
                // so that it can be used later
                request.Environment["server.User"] = new ClaimsPrincipal(ticket.Identity);
                return true;
            }
            else
            {
                return false;
            }
        }

    In the code above I created a "TicketDataFormat" instance, which must be the same as the one I used to generate the Bearer token when the user logged in. Then I retrieve the token from the request query string and unprotect it. If I get a valid ticket whose identity is authenticated, the token is valid, and I pass the user principal into the request's environment property, where it can be picked up later.

    Since my website is built in AngularJS, the SignalR client is pure JavaScript, and it is not possible to set customized HTTP headers in the SignalR JavaScript client, so I have to pass the Bearer token through the request query string. This is not a restriction of SignalR, but a restriction of WebSocket: for security reasons WebSocket doesn't allow the client to set customized HTTP headers from the browser.

    Next, I implemented the authentication logic in the method "AuthorizeHubMethodInvocation", which will be invoked whenever a SignalR method is invoked.

        public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod)
        {
            var connectionId = hubIncomingInvokerContext.Hub.Context.ConnectionId;
            // check the authenticated user principal from the environment
            var environment = hubIncomingInvokerContext.Hub.Context.Request.Environment;
            var principal = environment["server.User"] as ClaimsPrincipal;
            if (principal != null && principal.Identity != null && principal.Identity.IsAuthenticated)
            {
                // create a new HubCallerContext instance with the principal generated
                // from the token and replace the current context so that in hubs we
                // can retrieve the current user identity
                hubIncomingInvokerContext.Hub.Context = new HubCallerContext(new ServerRequest(environment), connectionId);
                return true;
            }
            else
            {
                return false;
            }
        }

    Since I had passed the user principal into the request environment in the previous method, I can simply check whether it exists and is valid. If so, all I need is to pass the principal into the context so that the SignalR hub can use it. Since the "User" property of "hubIncomingInvokerContext" is read-only, I have to create a new "ServerRequest" instance with the principal assigned, and set it to "hubIncomingInvokerContext.Hub.Context". After that, we can retrieve the principal in my hubs through "Context.User" as below.

        public class DefaultHub : Hub
        {
            public object Initialize(string host, string service, JObject payload)
            {
                var connectionId = Context.ConnectionId;
                ... ...
                var domain = string.Empty;
                var identity = Context.User.Identity as ClaimsIdentity;
                if (identity != null)
                {
                    var claim = identity.FindFirst("Domain");
                    if (claim != null)
                    {
                        domain = claim.Value;
                    }
                }
                ... ...
            }
        }

    Finally, I just need to add my "QueryStringBearerAuthorizeAttribute" into the SignalR pipeline.

        app.Map("/signalr", map =>
        {
            // Setup the CORS middleware to run before SignalR.
            // By default this will allow all origins. You can
            // configure the set of origins and/or http verbs by
            // providing a cors options with a different policy.
            map.UseCors(CorsOptions.AllowAll);
            var hubConfiguration = new HubConfiguration
            {
                // You can enable JSONP by uncommenting the line below.
                // JSONP requests are insecure but some older browsers (and some
                // versions of IE) require JSONP to work cross domain
                // EnableJSONP = true
                EnableJavaScriptProxies = false
            };
            // Require authentication for all hubs
            var authorizer = new QueryStringBearerAuthorizeAttribute();
            var module = new AuthorizeModule(authorizer, authorizer);
            GlobalHost.HubPipeline.AddModule(module);
            // Run the SignalR pipeline. We're not using MapSignalR
            // since this branch already runs under the "/signalr" path.
            map.RunSignalR(hubConfiguration);
        });

    On the client side I pass the Bearer token through the query string before starting the connection, as below.

        self.connection = $.hubConnection(signalrEndpoint);
        self.proxy = self.connection.createHubProxy(hubName);
        self.proxy.on(notifyEventName, function (event, payload) {
            options.handler(event, payload);
        });
        // add the authentication token to the query string
        // (we cannot use http headers since the web socket protocol doesn't support them)
        self.connection.qs = { Bearer: AuthService.getToken() };
        // connect to the hub
        self.connection.start();

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Accessing your web server via IPv6

    Being able to run your systems on IPv6, have automatic address assignment and the ability to resolve host names are the necessary building blocks in your IPv6 network infrastructure. Now that everything is in place, it is about time to enable another service to respond to IPv6 requests. The following article guides you through the steps to enable Apache2 httpd to listen and respond to incoming IPv6 requests.

    This is the fourth article in a series on IPv6 configuration:
    Configure IPv6 on your Linux system
    DHCPv6: Provide IPv6 information in your local network
    Enabling DNS for IPv6 infrastructure
    Accessing your web server via IPv6

    Piece of advice: This is based on my findings on the internet while reading other people's helpful articles and going through a couple of man pages on my local system.

    Surfing the web - IPv6 style

    Enabling IPv6 connections in Apache 2 is fairly simple. But first let's check whether your system has a running instance of Apache2 or not:

        $ service apache2 status
        Apache2 is running (pid 2680).

    In case you got a 'service unknown', you have to install Apache to proceed with the following steps:

        $ sudo apt-get install apache2

    Out of the box, Apache binds to all your available network interfaces and listens on TCP port 80. To check this, run the following command:

        $ sudo netstat -lnptu | grep "apache2\W*$"
        tcp6       0      0 :::80                   :::*                    LISTEN      28306/apache2

    In this case Apache2 is already binding to IPv6 (and implicitly to IPv4). If you only got a tcp output, then your httpd is not yet IPv6 enabled. Check your Listen directive; depending on your system this might be in a different location than the default in Ubuntu.

        $ sudo nano /etc/apache2/ports.conf

        # If you just change the port or add more ports here, you will likely also
        # have to change the VirtualHost statement in
        # /etc/apache2/sites-enabled/000-default
        # This is also true if you have upgraded from before 2.2.9-3 (i.e. from
        # Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and
        # README.Debian.gz
        NameVirtualHost *:80
        Listen 80

        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            Listen 443
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>

    Just in case you don't have a ports.conf file, look for it like so:

        $ cd /etc/apache2/
        $ fgrep -r -i 'listen' ./*

    And modify the related file instead of ports.conf - which most probably is either apache2.conf or httpd.conf anyway.

    Okay, please bear in mind that Apache can only bind once to the same interface and port. So, eventually, you might be interested in adding another port which explicitly listens to IPv6 only. In that case, you would add the following in your configuration file:

        Listen 80
        Listen [2001:db8:bad:a55::2]:8080

    But this is completely optional... Anyway, just to complete all steps, you save the file and then check the syntax like so:

        $ sudo apache2ctl configtest
        Syntax OK

    Okay, now let's apply the modifications to our running Apache2 instance:

        $ sudo service apache2 reload
         * Reloading web server config apache2   ...done.
        $ sudo netstat -lnptu | grep "apache2\W*$"
        tcp6       0      0 2001:db8:bad:a55::2:8080 :::*                   LISTEN      5922/apache2
        tcp6       0      0 :::80                   :::*                    LISTEN      5922/apache2

    There we have the daemon running and listening on two different TCP ports.

    Now that the basics are in place, it's time to prepare a website to respond to incoming requests on the IPv6 address. Open up any configuration file you have below your sites-enabled folder:

        $ ls -al /etc/apache2/sites-enabled/
        ...
        $ sudo nano /etc/apache2/sites-enabled/000-default

        <VirtualHost *:80 [2001:db8:bad:a55::2]:8080>
            ServerAdmin [email protected]
            ServerName server.ios.mu
            ServerAlias server

    Here, we have to check and modify the VirtualHost directive and enable it to respond to the IPv6 address and port our web server is listening on. Save your changes, run the configuration test and reload Apache2 in order to apply your modifications. After these steps succeed you can launch your favourite browser and navigate to your IPv6-enabled web server.

    Accessing an IPv6 address in the browser

    That looks like a successful surgery to me...

    Note: In case you receive a timeout, check whether your client is operating on IPv6, too.

    Read the article

  • Building an ASP.Net 4.5 Web forms application - part 5

    - by nikolaosk
    This is the fifth post in a series of posts on how to design and implement an ASP.Net 4.5 Web Forms store that sells posters on line. There are 4 more posts in this series. Please make sure you read them first. You can find the first post here. You can find the second post here. You can find the third post here. You can find the fourth here.

    In this new post we will build on the previous posts and demonstrate how to display the details of a poster when the user clicks on an individual poster photo/link. We will add a FormView control on a web form and bind data from the database. FormView is a great web server control for displaying the details of a single record.

    1) Launch Visual Studio and open the solution where your project lives.

    2) Add a new web form item to the project. Make sure you include the Master Page. Name it PosterDetails.aspx.

    3) Open the PosterDetails.aspx page. We will add some markup to this page. Have a look at the code below:

        <asp:Content ID="Content2" ContentPlaceHolderID="FeaturedContent" runat="server">
            <asp:FormView ID="posterDetails" runat="server" ItemType="PostersOnLine.DAL.Poster" SelectMethod="GetPosterDetails">
                <ItemTemplate>
                    <div>
                        <h1><%#:Item.PosterName %></h1>
                    </div>
                    <br />
                    <table>
                        <tr>
                            <td>
                                <img src="<%#:Item.PosterImgpath %>" border="1" alt="<%#:Item.PosterName %>" height="300" />
                            </td>
                            <td style="vertical-align: top">
                                <b>Description:</b><br /><%#:Item.PosterDescription %>
                                <br />
                                <span><b>Price:</b>&nbsp;<%#: String.Format("{0:c}", Item.PosterPrice) %></span>
                                <br />
                                <span><b>Poster Number:</b>&nbsp;<%#:Item.PosterID %></span>
                                <br />
                            </td>
                        </tr>
                    </table>
                </ItemTemplate>
            </asp:FormView>
        </asp:Content>

    I set the ItemType property to the Poster entity class and the SelectMethod to the GetPosterDetails method. The Item binding expression is available and we can retrieve properties of the Poster object. I retrieve the name, the image, the description and the price of each poster.

    4) Now we need to write the GetPosterDetails method. In the code-behind of the PosterDetails.aspx page we type:

        public IQueryable<Poster> GetPosterDetails([QueryString("PosterID")] int? posterid)
        {
            PosterContext ctx = new PosterContext();
            IQueryable<Poster> query = ctx.Posters;
            if (posterid.HasValue && posterid > 0)
            {
                query = query.Where(p => p.PosterID == posterid);
            }
            else
            {
                query = null;
            }
            return query;
        }

    I bind the value from the query string to the posterid parameter at run time. This is all possible due to the QueryStringAttribute class that lives inside System.Web.ModelBinding and gets the value of the query string variable PosterID. If there is a matching poster it is fetched from the database. If not, no data comes back from the database at all.
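    For context, the details page expects PosterID in the query string, so the link that brings the user here must carry it. A minimal sketch of such a link, as the poster list from the earlier posts might render it (the surrounding list markup is an assumption, not copied from parts 1-4):

        <a href="PosterDetails.aspx?PosterID=<%#: Item.PosterID %>">
            <%#: Item.PosterName %>
        </a>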
    5) Run the application and click on the "Midfielders" link. Then click on the first poster from the left (Kenny Dalglish) to see its details. Have a look at the picture below to see the results.

    You can see that now I have all the details of the poster in a new page. Make sure you place breakpoints in the code so you can see what is really going on. Hope it helps!!!

    Read the article

  • SharePoint Web Services. Using UserProfileService.GetUserProfileByName. After SP upgrade... failing.

    - by Steve
    The web services code below has worked properly for me for over a year. We have updated our SharePoint servers, and now the last line of this code throws an exception: "Object reference not set to an instance of an object".

        UserProfileWS.UserProfileService userProfileService = new UserProfileWS.UserProfileService();
        userProfileService.Credentials = System.Net.CredentialCache.DefaultNetworkCredentials;
        string serviceloc = "/_vti_bin/UserProfileService.asmx";
        userProfileService.Url = _webUrl + serviceloc;

        UserProfileWS.PropertyData[] info = userProfileService.GetUserProfileByName(null);

    EDIT: The service is still there. I can browse http:///_vti_bin/UserProfileService.asmx, and the information for the service is still there, including the full description of the GetUserProfileByName call.

    EDIT2: This does appear to be due to a change in SharePoint. I loaded a previous version of my software (known to be working), and it exhibits the same erroneous behavior.
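    One avenue worth checking, offered only as a hedged sketch rather than a confirmed fix: some SharePoint versions are stricter about the null argument (which previously meant "the current user"), so resolving the login name explicitly may sidestep the change. The account-name format below is an assumption; claims-based farms may need the claims-encoded form.

        // Hedged sketch: pass the caller's login explicitly instead of null.
        // WindowsIdentity.GetCurrent().Name assumes the client runs as the user
        // whose profile is wanted; adjust the format for claims authentication.
        string accountName = System.Security.Principal.WindowsIdentity.GetCurrent().Name;
        UserProfileWS.PropertyData[] info = userProfileService.GetUserProfileByName(accountName);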

    Read the article

  • How do I solve this error, "error while trying to deserialize parameter"

    - by Paul Rowland
    I have a web service that is working fine in one environment but not in another. The web service gets document metadata from SharePoint. It runs on a server where I can't debug, but with logging I confirmed that the method enters and exits successfully. What could be the reason for the errors? The error message is:

        The formatter threw an exception while trying to deserialize the message: There was an error while trying to deserialize parameter http://CompanyName.com.au/ProjectName:GetDocumentMetaDataResponse. The InnerException message was 'Error in line 1 position 388. 'Element' 'CustomFields' from namespace 'http://CompanyName.com.au/ProjectName' is not expected. Expecting element 'Id'.'. Please see InnerException for more details.

    The full exception was:

        System.ServiceModel.Dispatcher.NetDispatcherFaultException was caught
          Message="The formatter threw an exception while trying to deserialize the message: There was an error while trying to deserialize parameter http://CompanyName.com.au/ProjectName:GetDocumentMetaDataResponse. The InnerException message was 'Error in line 1 position 388. 'Element' 'CustomFields' from namespace 'http://CompanyName.com.au/ProjectName' is not expected. Expecting element 'Id'.'. Please see InnerException for more details."
          Source="mscorlib"
          Action="http://schemas.microsoft.com/net/2005/12/windowscommunicationfoundation/dispatcher/fault"
          StackTrace:
            Server stack trace:
               at System.ServiceModel.Dispatcher.DataContractSerializerOperationFormatter.DeserializeParameterPart(XmlDictionaryReader reader, PartInfo part, Boolean isRequest)
               at System.ServiceModel.Dispatcher.DataContractSerializerOperationFormatter.DeserializeParameter(XmlDictionaryReader reader, PartInfo part, Boolean isRequest)
               at System.ServiceModel.Dispatcher.DataContractSerializerOperationFormatter.DeserializeParameters(XmlDictionaryReader reader, PartInfo[] parts, Object[] parameters, Boolean isRequest)
               at System.ServiceModel.Dispatcher.DataContractSerializerOperationFormatter.DeserializeBody(XmlDictionaryReader reader, MessageVersion version, String action, MessageDescription messageDescription, Object[] parameters, Boolean isRequest)
               at System.ServiceModel.Dispatcher.OperationFormatter.DeserializeBodyContents(Message message, Object[] parameters, Boolean isRequest)
               at System.ServiceModel.Dispatcher.OperationFormatter.DeserializeReply(Message message, Object[] parameters)
               at System.ServiceModel.Dispatcher.ProxyOperationRuntime.AfterReply(ProxyRpc& rpc)
               at System.ServiceModel.Channels.ServiceChannel.HandleReply(ProxyOperationRuntime operation, ProxyRpc& rpc)
               at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
               at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs)
               at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
               at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)
            Exception rethrown at [0]:
               at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
               at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
               at CompanyName.ProjectName.External.Sharepoint.WebServiceProxies.SharepointProjectNameSiteService.ProjectNameSiteSoap.GetDocumentMetaData(GetDocumentMetaDataRequest request)
               at CompanyName.ProjectName.External.Sharepoint.WebServiceProxies.SharepointProjectNameSiteService.ProjectNameSiteSoapClient.CompanyName.ProjectName.External.Sharepoint.WebServiceProxies.SharepointProjectNameSiteService.ProjectNameSiteSoap.GetDocumentMetaData(GetDocumentMetaDataRequest request) in D:\Source\TFSRoot\ProjectName\trunk\CodeBase\External\CompanyName.ProjectName.External.Sharepoint.WebServiceProxies\Service References\SharepointProjectNameSiteService\Reference.cs:line 2141
               at CompanyName.ProjectName.External.Sharepoint.WebServiceProxies.SharepointProjectNameSiteService.ProjectNameSiteSoapClient.GetDocumentMetaData(ListSummaryDto listSummary, FileCriteriaDto criteria, List`1 customFields) in D:\Source\TFSRoot\ProjectName\trunk\CodeBase\External\CompanyName.ProjectName.External.Sharepoint.WebServiceProxies\Service References\SharepointProjectNameSiteService\Reference.cs:line 2150
               at CompanyName.ProjectName.Services.Shared.SharepointAdapter.GetDocumentMetaData(ListSummaryDto listSummary, FileCriteriaDto criteria, List`1 customFields) in D:\Source\TFSRoot\ProjectName\trunk\CodeBase\Services\CompanyName.ProjectName.Services\Shared\SharepointAdapter.cs:line 260
               at CompanyName.ProjectName.Services.Project.ProjectDocumentService.SetSharepointDocumentData(List`1 sourceDocuments) in D:\Source\TFSRoot\ProjectName\trunk\CodeBase\Services\CompanyName.ProjectName.Services\Project\ProjectDocumentService.cs:line 1963
               at CompanyName.ProjectName.Services.Project.ProjectDocumentService.GetProjectConversionDocumentsImplementation(Int32 projectId) in D:\Source\TFSRoot\ProjectName\trunk\CodeBase\Services\CompanyName.ProjectName.Services\Project\ProjectDocumentService.cs:line 3212
          InnerException: System.Runtime.Serialization.SerializationException
            Message="Error in line 1 position 388. 'Element' 'CustomFields' from namespace 'http://CompanyName.com.au/ProjectName' is not expected. Expecting element 'Id'."
            Source="System.Runtime.Serialization"
            StackTrace:
               at System.Runtime.Serialization.XmlObjectSerializerReadContext.ThrowRequiredMemberMissingException(XmlReaderDelegator xmlReader, Int32 memberIndex, Int32 requiredIndex, XmlDictionaryString[] memberNames)
               at System.Runtime.Serialization.XmlObjectSerializerReadContext.GetMemberIndexWithRequiredMembers(XmlReaderDelegator xmlReader, XmlDictionaryString[] memberNames, XmlDictionaryString[] memberNamespaces, Int32 memberIndex, Int32 requiredIndex, ExtensionDataObject extensionData)
               at ReadFileMetaDataDtoFromXml(XmlReaderDelegator , XmlObjectSerializerReadContext , XmlDictionaryString[] , XmlDictionaryString[] )
               at System.Runtime.Serialization.ClassDataContract.ReadXmlValue(XmlReaderDelegator xmlReader, XmlObjectSerializerReadContext context)
               at System.Runtime.Serialization.XmlObjectSerializerReadContext.ReadDataContractValue(DataContract dataContract, XmlReaderDelegator reader)
               at System.Runtime.Serialization.XmlObjectSerializerReadContext.InternalDeserialize(XmlReaderDelegator reader, String name, String ns, DataContract& dataContract)
               at System.Runtime.Serialization.XmlObjectSerializerReadContext.InternalDeserialize(XmlReaderDelegator xmlReader, Int32 id, RuntimeTypeHandle declaredTypeHandle, String name, String ns)
               at ReadArrayOfFileMetaDataDtoFromXml(XmlReaderDelegator , XmlObjectSerializerReadContext , XmlDictionaryString , XmlDictionaryString , CollectionDataContract )
               at System.Runtime.Serialization.CollectionDataContract.ReadXmlValue(XmlReaderDelegator xmlReader, XmlObjectSerializerReadContext context)
               at System.Runtime.Serialization.XmlObjectSerializerReadContext.ReadDataContractValue(DataContract dataContract, XmlReaderDelegator reader)
               at System.Runtime.Serialization.XmlObjectSerializerReadContext.InternalDeserialize(XmlReaderDelegator reader, String name, String ns, DataContract& dataContract)
               at System.Runtime.Serialization.XmlObjectSerializerReadContext.InternalDeserialize(XmlReaderDelegator xmlReader, Int32 id, RuntimeTypeHandle declaredTypeHandle, String name, String ns)
               at ReadMetaDataSearchResultsDtoFromXml(XmlReaderDelegator , XmlObjectSerializerReadContext , XmlDictionaryString[] , XmlDictionaryString[] )
               at System.Runtime.Serialization.ClassDataContract.ReadXmlValue(XmlReaderDelegator xmlReader, XmlObjectSerializerReadContext context)
               at System.Runtime.Serialization.XmlObjectSerializerReadContext.ReadDataContractValue(DataContract dataContract, XmlReaderDelegator reader)
               at System.Runtime.Serialization.XmlObjectSerializerReadContext.InternalDeserialize(XmlReaderDelegator reader, String name, String ns, DataContract& dataContract)
               at System.Runtime.Serialization.XmlObjectSerializerReadContext.InternalDeserialize(XmlReaderDelegator xmlReader, Int32 id, RuntimeTypeHandle declaredTypeHandle, String name, String ns)
               at ReadGetDocumentMetaDataResponseBodyFromXml(XmlReaderDelegator , XmlObjectSerializerReadContext , XmlDictionaryString[] , XmlDictionaryString[] )
               at System.Runtime.Serialization.ClassDataContract.ReadXmlValue(XmlReaderDelegator xmlReader, XmlObjectSerializerReadContext context)
               at System.Runtime.Serialization.XmlObjectSerializerReadContext.ReadDataContractValue(DataContract dataContract, XmlReaderDelegator reader)
               at System.Runtime.Serialization.XmlObjectSerializerReadContext.InternalDeserialize(XmlReaderDelegator reader, String name, String ns, DataContract& dataContract)
               at System.Runtime.Serialization.XmlObjectSerializerReadContext.InternalDeserialize(XmlReaderDelegator xmlReader, Type declaredType, DataContract dataContract, String name, String ns)
               at System.Runtime.Serialization.DataContractSerializer.InternalReadObject(XmlReaderDelegator xmlReader, Boolean verifyObjectName)
               at System.Runtime.Serialization.XmlObjectSerializer.ReadObjectHandleExceptions(XmlReaderDelegator reader, Boolean verifyObjectName)
               at System.Runtime.Serialization.DataContractSerializer.ReadObject(XmlDictionaryReader reader, Boolean verifyObjectName)
               at System.ServiceModel.Dispatcher.DataContractSerializerOperationFormatter.DeserializeParameterPart(XmlDictionaryReader reader, PartInfo part, Boolean isRequest)
            InnerException:
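    For what it's worth, this failure pattern usually means the client's data contract no longer matches the service's: DataContractSerializer reads members in a fixed order (alphabetical unless Order is set), so a stale proxy that expects 'Id' where the wire now carries 'CustomFields' fails exactly like this. Regenerating the service reference against the updated service is the usual first step; if the contracts are maintained by hand, pinning the member order is the defensive follow-up. A hedged sketch - the DTO name comes from the stack trace, but the member types are assumptions:

        using System.Collections.Generic;
        using System.Runtime.Serialization;

        // Hypothetical reconstruction of the DTO named in the stack trace.
        // Explicit Order values keep both sides reading and writing members
        // in the same sequence even as new members such as CustomFields appear.
        [DataContract(Namespace = "http://CompanyName.com.au/ProjectName")]
        public class FileMetaDataDto
        {
            [DataMember(Order = 1)]
            public int Id { get; set; }

            [DataMember(Order = 2)]
            public List<string> CustomFields { get; set; }
        }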

    Read the article

  • How to use delegate to perform callback between caller and web service helper class?

    - by codemonkie
    I have 2 classes A and B, where they belong to the same namespace but reside in separate files, namely a.cs and b.cs, and class B essentially is a helper wrapping a web service call as follows:

        public class A
        {
            public A() // constructor
            {
                protected static B b = new B();
            }

            private void processResult1(string result)
            {
                // come here when result is successful
            }

            private void processResult2(string result)
            {
                // come here when result is failed
            }

            static void main()
            {
                b.DoJobHelper(...);
            }
        }

        public class B
        {
            private com.nowhere.somewebservice ws;

            public B()
            {
                this.ws = new com.nowhere.somewebservice();
                ws.JobCompleted += new JobCompletedEventHandler(OnCompleted);
            }

            void OnCompleted(object sender, JobCompletedEventArgs e)
            {
                string s = e.Result;
                Guid taskID = (Guid)e.UserState;
                switch (s)
                {
                    case "Success":
                        // Call processResult1();
                        break;
                    case "Failed":
                        // Call processResult2();
                        break;
                    default:
                        break;
                }
            }

            public void DoJobHelper()
            {
                Object userState = Guid.NewGuid();
                ws.DoJob(..., userState);
            }
        }

    (1) I have seen texts on the net on using delegates for callbacks but failed to apply that to my case. All I want is to call the appropriate processResult() method upon the OnCompleted() event, but I don't know how and where to declare the delegate:

        public delegate void CallBack(string s);

    (2) There is a sender object passed into OnCompleted() but never used - did I miss anything there? Or how can I make good use of sender?

    Any help appreciated.
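    For reference, here is a minimal sketch of one way to wire this up (the delegate type, field names and constructor parameters are illustrative, not part of the original code):

        // Declared at namespace scope, next to A and B; Action<string> works equally well.
        public delegate void CallBack(string s);

        public class B
        {
            private readonly com.nowhere.somewebservice ws;
            private readonly CallBack onSuccess;
            private readonly CallBack onFailure;

            // The caller supplies both callbacks when constructing B.
            public B(CallBack onSuccess, CallBack onFailure)
            {
                this.onSuccess = onSuccess;
                this.onFailure = onFailure;
                this.ws = new com.nowhere.somewebservice();
                ws.JobCompleted += new JobCompletedEventHandler(OnCompleted);
            }

            void OnCompleted(object sender, JobCompletedEventArgs e)
            {
                // 'sender' is simply the proxy instance that raised the event;
                // it only matters when one handler serves several proxies.
                if (e.Result == "Success")
                    onSuccess(e.Result);
                else
                    onFailure(e.Result);
            }
        }

    In A, the callbacks are handed over at construction, e.g. b = new B(processResult1, processResult2) from an instance of A - which also means b cannot remain a static field initialized before any instance of A exists. That also answers (2): sender is safe to ignore here.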

    Read the article

  • ASMX Web Service only works when all of the code is in one file without code-behind

    - by Ben McCormack
    I have an ASMX web service that has its code entirely in a code-behind file, so that the entire contents of the .asmx file is:

        <%@ WebService Language="C#" CodeBehind="~/App_Code/AddressValidation.cs" Class="AddressValidation" %>

    On my test machine (Windows XP with IIS 5), I set up a virtual directory just for this ASP.NET 2.0 solution and everything works great. All my code is separated nicely and it just works.

    However, when we deployed this solution to our Windows Server 2003 development environment, we noticed that the code only compiled when all of the code was dropped directly into the .asmx file, meaning that the solution didn't work with code-behind. We can't figure out why this is happening.

    One thing that's different about our setup in our development environment is that instead of creating a separate virtual directory just for this solution, we dropped it into an existing directory that runs a classic ASP application. So here we have a folder with an ASP.NET 2.0 application within a directory that contains a classic ASP application. Granted, everything in the ASP.NET 2.0 application works if all of the code is within the .asmx file and not in code-behind, but we'd really like to know why it's not recognizing the code-behind files and compiling them correctly.
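    A likely explanation, though the question never confirms the IIS setup, is that ASP.NET only compiles the App_Code folder at the root of an IIS application. If the ASP.NET folder sits inside the classic ASP site without being marked as its own application in IIS, ~/App_Code is never compiled, while inline code in the .asmx still compiles on the fly - exactly the behavior described. Marking the folder as an application should restore code-behind; alternatively, precompile the class into a DLL in bin and reference the assembly from the directive (the assembly name below is hypothetical):

        <%@ WebService Language="C#" Class="AddressValidation, AddressValidation.Services" %>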

    Read the article

  • About web services: how to use Ajax to call a specific member function of a class?

    - by Liu chwen
    I'm trying to build a web service in PHP. In my case, I call getINFO(), but the return value on the client side is always null. I have no idea how to solve this problem. Here's the SoapServer code (WS.WEB_s.php):

        require("WEB_s.php");
        ini_set("soap.wsdl_cache_enabled", 0);
        $server = new SoapServer('wsdl/WEB_s.wsdl');
        $server->setClass("WEB_s");
        $server->handle();

    And the main class (WEB_s.php):

        final class WEB_s
        {
            public function getINFO(){
                $JsonOutput = '{"key":"value",...}';
                return $JsonOutput;
            }

            public function setWAN($setCommand,$newConfigfilePath){
                $bOutput;
                return $bOutput;
            }
        }

    And the client side:

        $(document).ready(function(){
            $('#qqq').button().click(function(){
                var soapMessage = LoginSoap($('#uid').val(),$('#pwd').val());
                alert('soapMessage');
                $.ajax({
                    //url: 'libraries/WS.WEB_s.php/WEB_s/getINFO', //success , return null
                    //url: 'libraries/WS.WEB_s.php/',              //success , return null
                    url: 'libraries/WS.WEB_s.php/getINFO',         //success , return null
                    type: 'GET',
                    timeout: (10* 1000),
                    contentType: "text/xml",
                    dataType: "xml",
                    success: function( data,textStatus,jqXHR){
                        alert('Server success(' + data+')('+ textStatus + ')(' + jqXHR + ')');
                    },
                    error: function (request, status, error) {
                        alert('Server Error(' + status+')->'+error);
                    },
                    complete: function (jqXHR, textStatus) {
                        alert('Server success(' + jqXHR+')('+ textStatus + ')');
                    }
                });
            });
        });

    The following is the corresponding WSDL file: http://codepaste.net/95wq9b

    Read the article
