Search Results

Search found 58546 results on 2342 pages for 'bpel http basic authentic'.

  • Google I/O 2012 - Getting Started with Google+ History API [CONF]

    Timothy Jordan, Daniel Dulitz. Google+ history presents new opportunities to increase traffic to your site and engagement with your content by allowing users to connect their Google profile to your site. This session will explore the value of Google+ history and review basic implementation. Special guests will be on hand to describe their early success with this new service. For all I/O 2012 sessions, go to developers.google.com. (Video: 33:56)

    Read the article

  • Noob-Friendly Guides to WSGI?

    - by Johnny McKenzie
    Hello world! I have recently been delving into server-side web development with Python, and I have hit a brick wall: I know little about server-side code and HTTP (other than the very basics with PHP, shudder), and all of the WSGI docs I have found seem to be written for people already well established in the field. Are there any noob-friendly guides to server-side scripting (the theory of it), or to WSGI, out there? Guides on HTTP would be helpful too, and video tutorials are also greatly appreciated. Thanks in advance.
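
    For a sense of how small the WSGI contract actually is, here is a minimal sketch using only the standard library: an application is just a callable that takes the request environ dict and a start_response function, and returns an iterable of byte strings.

        from wsgiref.simple_server import make_server

        def application(environ, start_response):
            # environ is a dict of CGI-style request variables (PATH_INFO, etc.)
            body = b"Hello from " + environ.get("PATH_INFO", "/").encode("utf-8")
            start_response("200 OK", [("Content-Type", "text/plain"),
                                      ("Content-Length", str(len(body)))])
            return [body]  # any iterable of byte strings works

        if __name__ == "__main__":
            # serve the app on http://localhost:8000/
            make_server("localhost", 8000, application).serve_forever()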

    Read the article

  • Should I implement slugs with my already fairly long URLs?

    - by Earlz
    I'm considering implementing slugs in my blog. My blog uses MongoDB, and one side effect of using MongoDB is that it uses relatively long hex-string IDs. Example before: http://lastyearswishes.com/blog/view/5070f025d1f1a5760fdfafac after: http://lastyearswishes.com/blog/view/5070f025d1f1a5760fdfafac/improvements-on-barelymvc Of course, that's a relatively short title; I have some longer ones, but I intend to cap slugs at some reasonable length. At what point does a URL become so long that it hurts SEO instead of improving it? In this case, should I leave my URLs alone, or add slugs?
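
    Generating the slug itself is a small transformation; a minimal sketch in Python of one way to do it (the 60-character cap is an arbitrary choice for illustration):

        import re

        def slugify(title, max_length=60):
            # lowercase, replace runs of non-alphanumerics with hyphens
            slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
            # cap the length, trimming back to a word boundary where possible
            if len(slug) > max_length:
                slug = slug[:max_length].rsplit("-", 1)[0]
            return slug

        print(slugify("Improvements on BarelyMVC"))  # improvements-on-barelymvc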

    Read the article

  • Google I/O 2010 - A beginner's guide to Android

    Android 101, presented by Reto Meier. This session will introduce some of the basic concepts involved in Android development. Starting with an overview of the SDK APIs available to developers, we will work through some simple code examples that explore some of the more common user features, including using sensors, maps, and geolocation. For all I/O 2010 sessions, please go to code.google.com. (Video: 54:34)

    Read the article

  • HTTP 303 redirection and robots.txt

    - by Ian Dickinson
    On a site I'm working on, we're using the HTTP 303 redirect pattern (see this article for background) to distinguish between information and non-information resources. So: some URLs under /id get redirected to dynamically created pages under /doc. These dynamic pages are built from a database and contain links to other /doc resources, so in general we don't want them to be crawled. Our robots.txt contains: Disallow: /doc However, we do want the non-redirected pages under /id to get indexed by Google et al.: Allow: /id So the question, which I can't find an answer to so far, is: if an allowed /id page 303-redirects to a /doc page, will the target still be blocked by robots.txt? If yes, we're OK; otherwise I'm going to disallow all /id resources in the robots file, since having the crawler hammer the database would be worse than losing search indexing for the /id pages.
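
    Putting the two rules together, the robots.txt in question looks something like this. For Google at least, robots.txt is evaluated against each URL the crawler actually fetches, so the /doc target of a 303 is checked on its own, independently of the allowed /id URL that redirected there, and is caught by the Disallow:

        User-agent: *
        Allow: /id
        Disallow: /doc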

    Read the article

  • Google crawler not found an error inside of the <head> tag

    - by inckka
    I've found a crawler error on my site, listed as a page-not-found (404) link. Here's the broken link: http://mydomain.com/blog/comments/feed/ I'm using Google Webmaster Tools, which reports that the broken link comes from my site pages' head tag. Here's the actual code where that link appears: <head> <link rel="alternate" type="application/rss+xml" title="My Domain Blog &raquo; Feed" href="http://www.my-domain.com/blog/feed/" /> </head> So Google reports this link as not found. The link target isn't an exact page or location, but it's essential for the blog feeds. Anyway, I have to fix this and get it removed from Google's crawler-errors list, but I haven't got any idea how, because I can't redirect or serve a 404 header for this link target. Has anyone got an idea for fixing this?
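
    If the comments-feed URL genuinely has nothing behind it, one low-effort option is to answer it with a permanent redirect to the feed that does exist. A hedged .htaccess sketch, assuming Apache with mod_alias and substituting the real domain:

        # point the missing comments feed at the existing main feed
        Redirect permanent /blog/comments/feed/ http://www.my-domain.com/blog/feed/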

    Read the article

  • "X-Robots-Tag: noindex" on an HTTP 301 response

    - by Peter O.
    I understand that a resource with X-Robots-Tag: noindex forces some search engines, including Google, not to index the resource further. I also understand that an HTTP 301 response causes search engines to use the redirected URL instead of the original URL to refer to the resource. But what happens if both "X-Robots-Tag: noindex" and status code 301 occur on the same response? It's likely that the original URL will no longer be indexed, but will that cause the redirected URL to no longer be indexed too? This possibility is not mentioned in the X-Robots-Tag specification.
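
    Concretely, the case in question is a single response carrying both signals, something like this (URLs are placeholders):

        HTTP/1.1 301 Moved Permanently
        Location: http://example.com/new-location
        X-Robots-Tag: noindex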

    Read the article

  • RequestContextHolder.currentRequestAttributes() and accessing HTTP Session

    - by Umesh Awasthi
    I need to access the HTTP session to fetch as well as store some information. I am using Spring MVC for my application, and I am separating some calculation logic out of the controller. I want to access session information in this new layer, and for that I have two options:
    1. Pass the Session or Request object into the method in the other layer and do my work there.
    2. Use RequestContextHolder.currentRequestAttributes() to access the request/session and do my work.
    I am not sure which is the right way to go. With the second approach, I can see that the method calls will be cleaner, since I don't need to pass the request/session in each time.

    Read the article

  • URL Rewrite http to https EXCEPT files in a specific subfolder

    - by BrettRobi
    I am trying to force all traffic on my web site to use HTTPS, using the URL Rewrite 2.0 module added to IIS 7.5. I got that working, and now I need to exclude a couple of pages from using SSL. So I need a rule that redirects every URL to HTTPS except those referencing this folder. I've been banging my head against the wall on this and am hoping someone can help. I tried creating a rule to match all URLs except those in a nossl subfolder, as in this example:

        <rule name="HTTP to HTTPS redirect" enabled="true" stopProcessing="true">
          <match url="(/nossl/.*)" negate="true" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{HTTPS}" pattern="off" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Found" />
        </rule>

    But this doesn't work. Can anyone help?
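
    For what it's worth, two common stumbling blocks with rules like this: URL Rewrite matches the pattern against the path without a leading slash, and a negated <match> captures nothing, so {R:1} stays empty. A hedged sketch of an alternative: match everything, and move the folder exclusion into a negated condition on {URL} (folder and rule names reuse the example above):

        <rule name="HTTP to HTTPS redirect" enabled="true" stopProcessing="true">
          <match url="(.*)" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{HTTPS}" pattern="off" />
            <!-- skip anything under the nossl folder -->
            <add input="{URL}" pattern="^/nossl/" negate="true" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Found" />
        </rule>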

    Read the article

  • Authorization pop-up requested by http://localhost:51675 every time I run Firefox

    - by user10711
    Using Ubuntu 10.04. Whenever I run Firefox I get a pop-up requesting authorisation. It says: 'A username and password are being requested by http://localhost:51675. The site says: "server"'. I have tried all the passwords I know and nothing is accepted. If I click 'cancel' it disappears, but it re-appears after about 5 minutes. This whole 'experience' is accompanied by a great deal of hard disc activity. Can anyone help with this?

    Read the article

  • VPS Server OS differences

    - by silvercover
    I have two VPS servers: one of them runs Linux and the other is a Windows one. I've uploaded the same file to both public_html folders and can see it in my browser via each server's static IP address, like http://178.63.165.178/getorder/file.xml and http://178.63.165.178/getorder/file.xml. On the other side there is a device called SMSPrinter that is configured to read those XML files over GPRS and needs a static IP address to reach the destination server. Unfortunately this device can only read the file from the Windows server and cannot reach the file on the Linux server. There is nothing in the device's manual requiring a Windows server or any specific OS! I've also set the file permissions on the Linux server to 777 to rule out any restriction. What could be the cause of our problem? Thanks.

    Read the article

  • SSL issue and redirects from https to http

    - by Asghar
    I have a site, www.example.com, for which I purchased and installed an SSL cert, and it was working fine. I also have a subdomain, app.example.com, which was not on SSL. Both www.example.com and app.example.com are on the same IP address. Later we decided to put SSL only on app.frostbox.com, so I configured SSL for app.frostbox.com and it worked fine. Now the issue is that Google is indexing my site as https://www.example.com/, and when users hit the site an invalid-security warning is issued; if a user clicks through the warning, they are shown my app.example.com contents. Note: my SSL configuration is in /etc/httpd/conf.d/ssl.conf; the contents of ssl.conf are here: http://pastebin.com/GCWhpQJq NOTE: I tried solutions in .htaccess, such as 301 redirects, but none of those worked.
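
    A minimal sketch of one direction to try, assuming Apache name-based virtual hosts: give www.example.com its own :443 vhost that only redirects back to plain HTTP, so the app contents are never served under the www name. The caveat is that the certificate check happens before the redirect runs, so browsers will still warn unless the certificate presented on that vhost actually covers www.example.com.

        <VirtualHost *:443>
            ServerName www.example.com
            SSLEngine on
            # cert/key directives are still required here; the warning only
            # goes away if this certificate is valid for www.example.com
            Redirect permanent / http://www.example.com/
        </VirtualHost>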

    Read the article

  • HTTP(S) based file server

    - by Michael
    I've got a server running Ubuntu 10.04. I've already got OpenSSH on it for ssh and sftp. I've been looking for a web-based (HTTP, or preferably HTTPS) file server, perhaps a web front-end to an (S)FTP server, that allows access to a specific folder and also allows uploads. It requires user authentication, preferably using PAM. This web-based solution is for users who are not allowed to use FTP software / browser extensions and don't have Flash / Java browser plugins within their corporate environments. So far I have looked into:
    Webmin: Includes a file manager, however it uses Java, and I'm looking for a plugin-free implementation.
    Apache2: I was able to set up HTTPS and PAM authentication, but the barebone implementation doesn't include file upload (as far as I'm aware; a possible workaround is sketched below).
    HFS: Haven't tried it, because it is for Windows/Wine only, and I don't want to run it under Wine.
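
    One way to close the upload gap on the Apache2 route is WebDAV, which adds HTTP uploads to a specific folder. A hedged sketch, assuming mod_dav/mod_dav_fs are enabled; the PAM hookup (e.g. swapping AuthUserFile for an authnz PAM module) is left as an assumption, and note that browsers need a WebDAV client or separate upload page to drive it:

        # WebDAV share under /files, Basic auth over SSL only
        DavLockDB /var/lock/apache2/DavLock
        Alias /files /srv/files
        <Location /files>
            DAV On
            SSLRequireSSL
            AuthType Basic
            AuthName "File server"
            # file-based users shown here; a PAM auth module could
            # replace AuthUserFile if PAM is a hard requirement
            AuthUserFile /etc/apache2/webdav.passwd
            Require valid-user
        </Location>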

    Read the article

  • How to change the wallpaper from the terminal when the image comes from HTTP (Ubuntu 12.04)

    - by Yan Fachmi
    I need to change the background of my desktop in Ubuntu 12.04 with a command in the terminal, in order to use it in a bash script. Does anyone know how to do it? I want the image to come from the internet. I know that with a local image it would look like this: gsettings set org.gnome.desktop.background picture-uri file:///home/icorner/wallpaper/curr.jpg But if I use something like this, it won't work: gsettings set org.gnome.desktop.background picture-uri http://www.sergiuhelldragoon.com/wp-content/uploads/2012/04/dota_2_wallpaper_1_1280x800_by_zadelim.jpg Can anyone help? Thanks & regards, Yan Fachmi
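
    Since picture-uri evidently only works with local file:// URIs (as the failing command above suggests), one workaround is to download the image first and then point gsettings at the local copy. A minimal sketch reusing the paths from the question:

        #!/bin/bash
        # fetch the remote wallpaper, then set it from the local copy
        wget -O /home/icorner/wallpaper/curr.jpg \
          "http://www.sergiuhelldragoon.com/wp-content/uploads/2012/04/dota_2_wallpaper_1_1280x800_by_zadelim.jpg"
        gsettings set org.gnome.desktop.background picture-uri \
          "file:///home/icorner/wallpaper/curr.jpg"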

    Read the article

  • Significant number of non-HTTP requests hitting my site

    - by Mark Westling
    I'm seeing a significant number of non-HTTP requests hitting a site I just launched. They show up in the server (nginx) logs as non-ASCII and get rejected (correctly) with a 400 status. Here are some lines from the log: 95.132.198.189 - - [09/Jan/2011:13:53:30 -0500] "œ$A\x10õœ²É9J" 400 173 "-" "-" 79.100.145.126 - - [09/Jan/2011:13:57:42 -0500] "#§i²¸oYi á¹„\x13VJ—x·—œ\x04N \x1DÔvbÛè½\x10§¬\x1E0œ_^¼+\x09ÜÅ\x08DÌÃiJeT€¿æ]œr\x1EëîyIÐ/ßýúê5Ǹ" 400 173 "-" "-" 79.100.145.126 - - [09/Jan/2011:13:58:33 -0500] "¯Ú%ø=Œ›D@\x12¼\x1C†ÄÀe\x015mˆàd˜Û%pÛÿ" 400 173 "-" "-" What should I make of this? Is this some sort of scripted attack? Or could these be correct requests that have somehow been garbled? They're not affecting the performance of the site and I'm not seeing any other signs of attacks (e.g., no strange POSTs) so at this point I'm more curious than afraid.

    Read the article

  • PHP - Internal APIs/Libraries - What makes sense?

    - by Mark Locker
    I've been having a discussion lately with some colleagues about the best way to approach a new project, and thought it'd be interesting to get some external thoughts thrown into the mix. Basically, we're redeveloping a fairly large site (written in PHP) and have differing opinions on how the platform should be set up.

    Requirements: The platform will need to support multiple internal websites, as well as external (non-PHP) projects, which at the moment consist of a mobile app and a toolbar. We have no plans/need in the foreseeable future to open up an API externally (for use in products other than our own).

    My opinion: We should have a library of well-documented native model classes which can be shared between projects. These models will represent everything in our database and can take advantage of object-oriented features such as inheritance, traits, magic methods, etc., as well as employing an ORM. We can then add an API layer on top of these models which can basically accept requests and route them to the appropriate methods, translating the response so that it can be used platform-independently. This routing for each method can be set up as and when it's required.

    Their opinion: We should have a single HTTP API which is used by all projects (internal PHP ones or otherwise).

    My thoughts: To me, there are a number of issues with using the sole HTTP API approach: It will be very expensive performance-wise; one page request will result in several additional HTTP requests (which, although local, are still ones that Apache will need to handle). You'll lose all of the best features PHP has for OO development, from simple inheritance to employing the likes of an ORM, which can save you writing a lot of code. For internal projects, the actual process makes me cringe: to get a user's name, for example, a request would go out of our box, over the LAN, back in, then run through a script which calls a method, JSON-encodes the output and feeds that back. That would then need to be JSON-decoded and presented as an array ready to use. Working with arrays, as opposed to objects, makes me sad in a modern PHP framework.

    Their thoughts (and my responses): Having one way of doing things keeps things simple (you'd only do things differently if you were using a different language anyway). It will become robust (seeing as the API will run off the library of models, I think my option would be just as robust).

    What do you think? I'd be really interested to hear the thoughts of others on this, especially as opinions on both sides are not founded on any past experience.

    Read the article

  • deny-uncovered-http-methods in Servlet 3.1

    - by reza_rahman
    Servlet 3.1 is a relatively minor release included in Java EE 7. However, the Java EE foundational API still contains some very important changes. One such set of features are the security enhancements done in Servlet 3.1 such as the new deny-uncovered-http-methods option. Servlet 3.1 co-spec lead Shing Wai Chan outlines the use case for the feature and shows you how to use it in a recent code example driven post. You can also check out the official specification yourself or try things out with the newly released Java EE 7 SDK.
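
    For reference, the option is a single element in a Servlet 3.1 web.xml. A minimal sketch (the /admin/* pattern and role name are placeholders): with deny-uncovered-http-methods in place, any HTTP method not listed in the constraint, such as PUT or DELETE, is denied outright instead of being left unprotected.

        <web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="3.1">
          <deny-uncovered-http-methods/>
          <security-constraint>
            <web-resource-collection>
              <web-resource-name>admin</web-resource-name>
              <url-pattern>/admin/*</url-pattern>
              <http-method>GET</http-method>
              <http-method>POST</http-method>
            </web-resource-collection>
            <auth-constraint>
              <role-name>admin</role-name>
            </auth-constraint>
          </security-constraint>
        </web-app>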

    Read the article

  • https (SSL) instead of http

    - by user1332729
    I am building myself a new website, out of privacy and security concerns I am contemplating trying to make it https only. It will be mobile-friendly using media queries but I am concerned--especially for mobile users--about the increased bandwidth. How much will doing so increase my bandwidth or slow load times? For pages where I'm not transferring sensitive information, should I leave external links (to a jQuery library, or a web font for instance) in http? Simply put, I have read articles saying the entire web would be more secure if everything was SSL but my actual knowledge of implementation is limited to payment gateways and log-in pages and such. I apologize for the open-ended nature of the question but anything, even just simple answers to the specific questions is welcomed.

    Read the article

  • How to disable proxy requests once a server has been added to spammers "open proxy" list?

    - by Matt
    Hello all, I've just started at a new company and have been going over the setup of their Apache webserver conf files... only to find that their Apache servers have been set up as open proxies, available to the whole world, for the last two months. I've already set ProxyRequests Off in httpd.conf and restarted the web server, but the access log file is still growing at a horrendous rate (about a gig a day). I noticed that another question was posted on here about this (http://serverfault.com/questions/63715/apache-hit-with-proxy-request), but their access log was supposedly returning 404 errors, while mine appears to be returning both 403 and 404 codes... Is this correct? Here are a few lines out of my access log:

      87.118.118.124 - - [16/Mar/2010:10:56:36 -0400] "GET http://www.c5interlude.ru/torrent/viewtopic.php?p=2501 HTTP/1.0" 404 219 "http://www.c5interlude.ru/torrent/viewtopic.php?p=2501" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322)"
      117.41.184.27 - - [16/Mar/2010:10:56:36 -0400] "GET http://ad.xtendmedia.com/st?ad_type=iframe&ad_size=300x250&section=790074 HTTP/1.0" 404 200 "http://www.newbiegamer.com" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; Alexa Toolbar)"
      122.224.55.222 - - [16/Mar/2010:10:56:36 -0400] "GET http://www.188woool.net/\xb4\xf3\xd4\xcb\xb4\xab\xca\xc0.rar HTTP/1.1" 403 214 "http://www.188woool.net/\xb4\xf3\xd4\xcb\xb4\xab\xca\xc0.rar" "Mozilla/4.0"
      58.55.21.40 - - [16/Mar/2010:10:56:36 -0400] "GET http://www.cpx24.com/ad1.js HTTP/1.0" 404 204 "http://thebighits.com/?id=aibux" "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)"
      122.226.223.188 - - [16/Mar/2010:10:56:36 -0400] "GET http://ad.reduxmedia.com/st?ad_type=iframe&ad_size=160x600&section=798636 HTTP/1.0" 404 200 "http://www.gvvu.com" "Mozilla/4.0 (compatible; MSIE 5.5; AOL 6.0; Windows 98; Win 9x 4.90)"
      84.51.109.31 - - [16/Mar/2010:10:56:36 -0400] "GET http://www.kslp.ru/forum/index.php HTTP/1.0" 404 213 "http://www.kslp.ru/forum/index.php" "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 6.0 ; .NET CLR 2.0.50215; SL Commerce Client v1.0; Tablet PC 2.0"
      122.224.48.49 - - [16/Mar/2010:10:56:36 -0400] "GET http://www1.vip218.com/\xb2\xca\xba\xe7\xb4\xab\xca\xc0.exe HTTP/1.1" 403 214 "http://www1.vip218.com/\xb2\xca\xba\xe7\xb4\xab\xca\xc0.exe" "Mozilla/4.0"
      117.41.184.27 - - [16/Mar/2010:10:56:36 -0400] "GET http://ad.xtendmedia.com/st?ad_type=iframe&ad_size=728x90&section=657624 HTTP/1.0" 404 200 "http://www.raiseanimals.com" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98; Alexa Toolbar)"

    And my corresponding error log entries:

      [Tue Mar 16 10:56:36 2010] [error] [client 87.118.118.124] File does not exist: C:/public_html/torrent, referer: http://www.c5interlude.ru/torrent/viewtopic.php?p=2501
      [Tue Mar 16 10:56:36 2010] [error] [client 117.41.184.27] File does not exist: C:/public_html/st, referer: http://www.newbiegamer.com
      [Tue Mar 16 10:56:36 2010] [error] [client 122.224.55.222] (22)Invalid argument: Cannot map GET http://www.188woool.net/\xb4\xf3\xd4\xcb\xb4\xab\xca\xc0.rar HTTP/1.1 to file, referer: http://www.188woool.net/\xb4\xf3\xd4\xcb\xb4\xab\xca\xc0.rar
      [Tue Mar 16 10:56:36 2010] [error] [client 58.55.21.40] File does not exist: C:/public_html/ad1.js, referer: http://thebighits.com/?id=aibux
      [Tue Mar 16 10:56:36 2010] [error] [client 122.226.223.188] File does not exist: C:/public_html/st, referer: http://www.gvvu.com
      [Tue Mar 16 10:56:36 2010] [error] [client 84.51.109.31] File does not exist: C:/public_html/forum, referer: http://www.kslp.ru/forum/index.php
      [Tue Mar 16 10:56:36 2010] [error] [client 122.224.48.49] (22)Invalid argument: Cannot map GET http://www1.vip218.com/\xb2\xca\xba\xe7\xb4\xab\xca\xc0.exe HTTP/1.1 to file, referer: http://www1.vip218.com/\xb2\xca\xba\xe7\xb4\xab\xca\xc0.exe
      [Tue Mar 16 10:56:36 2010] [error] [client 117.41.184.27] File does not exist: C:/public_html/st, referer: http://www.raiseanimals.com

    Does this in fact look like the server is blocking them correctly, and is there anything else I could do to cut down on my access log size? (Perhaps block these requests from the server completely?) Thanks! Matt
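
    The 403/404 responses and the "File does not exist" errors suggest the probes are now being mapped onto the local docroot and rejected rather than proxied. To go further and refuse them outright, a hedged mod_rewrite sketch keyed on the absolute http:// URL that proxy-style request lines carry (assumes mod_rewrite is loaded):

        RewriteEngine On
        # proxy-style requests put a full absolute URL in the request
        # line ("GET http://host/path HTTP/1.0"); refuse them with a 403
        RewriteCond %{THE_REQUEST} ^[A-Z]+\ http:// [NC]
        RewriteRule .* - [F]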

    Read the article

  • Microsoft Declares the Future of ASP.NET is Web API

    - by sbwalker
    Sitting on a plane on my way home from Tech Ed 2012 in Orlando, I thought it would be a good time to jot down some key takeaways from this year’s conference. Some of these items I have known since the Microsoft MVP Summit which occurred in Redmond in late February (but due to NDA restrictions I could not share them with the developer community at large) and some of them are a result of insightful conversations with a wide variety of industry insiders and Microsoft employees at the conference.

    First, let’s travel back in time 4 years to the Microsoft MVP Summit in 2008. Microsoft was facing some heat from market newcomer Ruby on Rails and responded with a new web development framework of its own, ASP.NET MVC. At the Summit they estimated that MVC would only be applicable for ~10% of all new web development projects. Based on that prediction I questioned why they were investing such considerable resources for such a relative edge case, but my guess is that they felt it was an important edge case at the time, as some of the more vocal .NET evangelists as well as some very high profile start-ups (i.e. Twitter) had publicly announced their intent to use Rails.

    Microsoft made a lot of noise about MVC. In fact, they focused so much of their messaging and marketing hype around MVC that it appeared that WebForms was essentially dead. Yes, it may have been true that Microsoft continued to invest in WebForms, but from an outside perspective it really appeared that MVC was the only framework getting any real attention. As a result, MVC started to gain market share. An inside source at Microsoft told me that MVC usage has grown at a rate of about 5% per year and now sits at ~30%. Essentially by focusing so much marketing effort on MVC, Microsoft actually created a larger market demand for it. This is because in the Microsoft ecosystem there is somewhat of a bandwagon mentality amongst developers. If Microsoft spends a lot of time talking about a specific technology, developers get the perception that it must be really important. So rather than choosing the right tool for the job, they often choose the tool with the most marketing hype and then try to sell it to the customer.

    In 2010, I blogged about the fact that MVC did not make any business sense for the DotNetNuke platform. This was because our ecosystem relied on third party extensions which were dependent on the WebForms model. If we migrated the core to MVC it would mean that all of the third party extensions would no longer be compatible, which would be an irresponsible business decision for us to make at the expense of our users and customers. However, this did not stop the debate from continuing to occur in our ecosystem. Clearly some developers had drunk Microsoft’s Kool-Aid about MVC and were of the mindset, to paraphrase an old Scottish saying, “If it’s not MVC, it’s crap”. Now, this is a rather ignorant position to take, as most of the benefits of MVC can be achieved in WebForms with solid architecture and responsible coding practices. Clean separation of concerns, unit testing, and direct control over page output are all possible in the WebForms model – it just requires diligence and discipline.

    So over the past few years some horror stories have begun to bubble to the surface of software development projects focused on ground-up rewrites of web applications for the sole purpose of migrating from WebForms to MVC. These large scale rewrites were typically initiated by engineering teams with only a single argument driving the business decision, that Microsoft was promoting MVC as “the future”. These ill-fated rewrites offered no benefit to end users or customers and in fact resulted in less stable, less scalable and more complicated systems – basically taking one step forward and two full steps back. A case in point is the announcement earlier this week that a popular open source .NET CMS provider has decided to pull the plug on their new MVC product, which has been under active development for more than 18 months, and revert back to WebForms.

    The availability of multiple server-side development models has deeply fragmented the Microsoft developer community. Some folks like to compare it to the age-old VB vs. C# language debate. However, the VB vs. C# language debate was ultimately more of a religious war, because at least the two dominant programming languages were compatible with one another and could be used interchangeably. The issue with WebForms vs. MVC is much more challenging. This is because the messaging from Microsoft has positioned the two solutions as being incompatible with one another, and as a result web developers feel like they are forced to choose one path or another. Yes, it is true that it has always been technically possible to use WebForms and MVC in the same project, but the tooling support has always made this feel “dirty”. The fragmentation has also made it difficult to attract newcomers, as the perceived barrier to entry for learning ASP.NET has become higher. As a result many new software developers entering the market are gravitating to environments where the development model seems simpler and more intuitive (i.e. PHP or Ruby).

    At the same time that the Web Platform team was busy promoting ASP.NET MVC, the Microsoft Office team has been promoting SharePoint as a platform for building internal enterprise web applications. SharePoint has great penetration in the enterprise and over time has been enhanced with improved extensibility capabilities for software developers. But, like many other mature enterprise ASP.NET web applications, it is built on the WebForms development model. Similar to DotNetNuke, SharePoint leverages a rich third party ecosystem for both generic web controls and more specialized WebParts – both of which rely on WebForms. So basically this resulted in a situation where the Web Platform group had headed off in one direction and the Office team had gone in another direction, and the end customer was stuck in the middle trying to figure out what to do with their existing investments in Microsoft technology. It really emphasized the perception that the left hand was not speaking to the right hand, as strategically speaking there did not seem to be any high level plan from Microsoft to ensure consistency and continuity across the different product lines.

    With the introduction of ASP.NET MVC, it also made some of the third party control vendors scratch their heads and wonder what the heck Microsoft was thinking. The original value proposition of ASP.NET over Classic ASP was the ability for web developers to emulate the highly productive desktop development model by using abstract components for creating rich, interactive web interfaces. Web control vendors like Telerik, Infragistics, DevExpress, and ComponentArt had all built sizable businesses offering powerful user interface components to WebForms developers. And even after MVC was introduced these vendors continued to improve their products, offering greater productivity and a superior user experience via AJAX to what was possible in MVC. And since many developers were comfortable and satisfied with these third party solutions, the demand remained strong and the third party web control market continued to prosper despite the availability of MVC.

    While all of this was going on in the Microsoft ecosystem, there has also been a fundamental shift in the general software development industry. Driven by the explosion of Internet-enabled devices, the focus has now centered on service-oriented architecture (SOA). Service-oriented architecture is all about defining a public API for your product that any client can consume; whether it’s a native application running on a smart phone or tablet, a web browser taking advantage of HTML5 and Javascript, or a rich desktop application running on a PC. REST-based services, which utilize the less verbose characteristics of JSON as a transport mechanism, have become the preferred approach over older, more bloated SOAP-based techniques. SOA also has the benefit of producing a cross-platform API, as every major technology stack is able to interact with standard REST-based web services. And for web applications, more and more developers are turning to robust Javascript libraries like jQuery and Knockout for browser-based client-side development techniques for calling web services and rendering content to end users. In fact, traditional server-side page rendering has largely fallen out of favor, resulting in decreased demand for server-side frameworks like Ruby on Rails, WebForms, and (gasp) MVC.

    In response to these new industry trends, Microsoft did what it always does – it immediately poured some resources into developing a solution which will ensure they remain relevant and competitive in the web space. This work culminated in a new framework which was branded as Web API. It is convention-based and designed to embrace native HTTP standards without copious layers of abstraction. This framework is designed to be the ultimate replacement for both the REST aspects of WCF and ASP.NET MVC Web Services. And since it was developed out of band with a dependency only on ASP.NET 4.0, it means that it can be used immediately in a variety of production scenarios.

    So at Tech Ed 2012 it was made abundantly clear in numerous sessions that Microsoft views Web API as the “Future of ASP.NET”. In fact, one Microsoft PM even went as far as to say that if we look 3-4 years into the future, all ASP.NET web applications will be developed using the Web API approach. This is a fairly bold prediction and clearly telegraphs where Microsoft plans to allocate its resources going forward. Currently Web API is being delivered as part of the MVC4 package, but this is only temporary for the sake of convenience. It also sounds like there are still internal discussions going on in terms of how to brand the various aspects of ASP.NET going forward – perhaps the moniker of “ASP.NET Web Stack”, coined a couple years ago by Scott Hanselman and utilized as part of the open source release of ASP.NET bits on Codeplex a few months back, will eventually stick.

    Web API is being positioned as the unification of ASP.NET – the glue that is able to pull this fragmented mess back together again. The “One ASP.NET” strategy will promote the use of all frameworks: WebForms, MVC, and Web API, even within the same web project. Basically the message is: utilize the appropriate aspects of each framework to solve your business problems. Instead of navigating developers to a fork in the road, the plan is to educate them that “hybrid” applications are a great strategy for delivering solutions to customers. In addition, the service-oriented approach coupled with client-side development promoted by Web API can effectively be used in both WebForms and MVC applications. So this means it is also relevant to application platforms like DotNetNuke and SharePoint, which means that it starts to create a unified development strategy across all ASP.NET product lines once again.

    And so what about MVC? There have actually been rumors floated that MVC has reached a stage of maturity where, similar to WebForms, it will be treated more as a maintenance product line going forward (MVC4 may in fact be the last significant iteration of this framework). This may sound alarming to some folks who have recently adopted MVC, but it really shouldn’t, as both WebForms and MVC will continue to play a vital role in delivering solutions to customers. They will just not be the primary area where Microsoft is spending the majority of its R&D resources. That distinction will obviously go to Web API. And when the question comes up of why not enhance MVC to make it work with Web API, you must take a step back and look at this from a higher level to see that it really makes no sense. MVC is a server-side page compositing framework, whereas Web API promotes client-side page compositing with a heavy focus on web services. Making MVC work well with Web API would require a complete rewrite of MVC, and at the end of the day there would be no upgrade path for existing MVC applications. So it really does not make much business sense.

    So what does this have to do with DotNetNuke? Well, around 8-12 months ago we recognized the software industry trends towards web services and client-side development. We decided to utilize a “hybrid” model which would provide compatibility for existing modules while at the same time providing a bridge for developers who wanted to utilize more modern web techniques. Customers who like the productivity and familiarity of WebForms can continue to build custom modules using the traditional approach. However, in DotNetNuke 6.2 we also introduced a new Services Framework which is actually built on top of MVC2 (we chose to leverage MVC because it had the most intuitive, light-weight REST implementation in the .NET stack). The Services Framework allowed us to build some rich interactive features in DotNetNuke 6.2, including the Messaging and Notification Center and Activity Feed. But based on where we know Microsoft is heading, it makes sense for the next major version of DotNetNuke (which is expected to be released in Q4 2012) to migrate from MVC2 to Web API. This will likely result in some breaking changes in the Services Framework, but we feel it is the best approach for ensuring the platform remains highly modern and relevant. The fact that our development strategy is perfectly aligned with the “One ASP.NET” strategy from Microsoft means that our customers and developer community can be confident in their current and future investments in the DotNetNuke platform.

    Read the article

  • How Can I Bypass the X-Frame-Options: SAMEORIGIN HTTP Header?

    - by Daniel Coffman
    I am developing a web page that needs to display, in an iframe, a report served by another company's SharePoint server. They are fine with this. The page we're trying to render in the iframe is giving us X-Robots-Options: SAMEORIGIN, wait, X-Frame-Options: SAMEORIGIN, which causes the browser (at least IE8) to refuse to render the content in a frame. First, is this something they can control, or is it something SharePoint just does by default? If I ask them to turn this off, could they even do it? Second, can I do something to tell the browser to ignore this HTTP header and just render the frame?

    Read the article

  • Linux Server hacked?

    - by user115848
    I'm trying to determine whether this Linux web server / Openfire server has been compromised by some form of malware or a hacker. Can you please help me determine if this server has been hacked? The snippet of logs below is from the Linux server running Apache. A few days ago the Moodle site, which is installed on the server, started to render the Apache default page. The access logs also show some activity I'm not sure of. Please see the logs below.

      85.190.0.3 - - [02/Apr/2012:13:31:01 -0600] "CONNECT 213.92.8.7:31204 HTTP/1.0" 405 303 "-" "-"
      85.190.0.3 - - [02/Apr/2012:13:31:01 -0600] "CONNECT 213.92.8.7:31204 HTTP/1.0" 405 303 "-" "-"
      99.41.69.92 - - [02/Apr/2012:13:33:35 -0600] "GET /files/externallibs.php HTTP/1.1" 404 306 "-" "curl/7.18.0 (x86_64-pc-linux-gnu) libcurl/7.18.0 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.1"
      212.34.151.92 - - [02/Apr/2012:14:01:46 -0600] "GET /phpmyadmin/scripts/setup.php HTTP/1.1" 404 305 "-" "Mozilla/4.0 (compatible; MSIE 6.0; MSIE 5.5; Windows NT 5.1) Opera 7.01 [en]"
      212.34.151.92 - - [02/Apr/2012:14:01:46 -0600] "POST /phpmyadmin/scripts/setup.php HTTP/1.1" 404 305 "http://173.164.35.181/phpmyadmin/scripts/setup.php\r" "Mozilla/4.0 (compatible; MSIE 6.0; MSIE 5.5; Windows NT 5.1) Opera 7.01 [en]"
      82.223.140.4 - - [02/Apr/2012:14:05:03 -0600] "GET /phpmyadmin/scripts/setup.php HTTP/1.1" 404 305 "-" "Mozilla/4.0 (compatible; MSIE 6.0; MSIE 5.5; Windows NT 5.1) Opera 7.01 [en]"
      82.223.140.4 - - [02/Apr/2012:14:05:04 -0600] "POST /phpmyadmin/scripts/setup.php HTTP/1.1" 404 305 "_http://173.164.35.181/phpmyadmin/scripts/setup.php\r" "Mozilla/4.0 (compatible; MSIE 6.0; MSIE 5.5; Windows NT 5.1) Opera 7.01 [en]"
      10.0.0.100 - - [02/Apr/2012:14:25:35 -0600] "GET / HTTP/1.1" 403 5043 "-" "Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.15) Gecko/20110330 CentOS/3.6-1.el5.centos Firefox/3.6.15"
      10.0.0.100 - - [02/Apr/2012:14:25:38 -0600] "GET /favicon.ico HTTP/1.1" 404 295 "-" "Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.15) Gecko/20110330 CentOS/3.6-1.el5.centos Firefox/3.6.15"
      50.17.41.60 - - [02/Apr/2012:14:27:29 -0600] "HEAD /icons/apache_pb.gif HTTP/1.0" 200 - "-" "Mozilla/5.0 (compatible; NetcraftSurveyAgent/1.0; [email protected])"
      85.190.0.3 - - [02/Apr/2012:14:42:33 -0600] "CONNECT 213.92.8.7:31204 HTTP/1.0" 405 303 "-" "-"
      85.190.0.3 - - [02/Apr/2012:14:42:33 -0600] "POST _http://vlad-tepes.bofh.it/freenode-proxy-checker.txt HTTP/1.0" 404 307 "-" "-"
      85.190.0.3 - - [02/Apr/2012:14:42:33 -0600] "GET _http://vlad-tepes.bofh.it/freenode-proxy-checker.txt HTTP/1.0" 404 307 "-" "-"
      85.190.0.3 - - [02/Apr/2012:14:42:36 -0600] "CONNECT 213.92.8.7:31204 HTTP/1.0" 405 303 "-" "-"
      85.190.0.3 - - [02/Apr/2012:15:03:48 -0600] "POST _http://vlad-tepes.bofh.it/freenode-proxy-checker.txt HTTP/1.0" 404 307 "-" "-"
      85.190.0.3 - - [02/Apr/2012:15:03:48 -0600] "GET _http://vlad-tepes.bofh.it/freenode-proxy-checker.txt HTTP/1.0" 404 307 "-" "-"
      85.190.0.3 - - [02/Apr/2012:15:03:48 -0600] "CONNECT 213.92.8.7:31204 HTTP/1.0" 405 303 "-" "-"
      85.190.0.3 - - [02/Apr/2012:15:03:48 -0600] "CONNECT 213.92.8.7:31204 HTTP/1.0" 405 303 "-" "-"
      66.233.63.54 - - [02/Apr/2012:15:12:19 -0600] "GET /files/externallibs.php HTTP/1.1" 404 306 "-" "Mozilla/5.0 (Windows NT 6.0; WOW64; rv:11.0) Gecko/20100101 Firefox/11.0"
      70.114.161.135 - - [02/Apr/2012:15:17:12 -0600] "GET /files/externallibs.php HTTP/1.1" 404 306 "-" "Mozilla/5.0 (Windows NT 5.1; rv:11.0) Gecko/20100101 Firefox/11.0"
      99.41.69.231 - - [02/Apr/2012:15:52:21 -0600] "GET /files/externallibs.php HTTP/1.1" 404 306 "-" "curl/7.18.0 (x86_64-pc-linux-gnu) libcurl/7.18.0 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.1"
      85.190.0.3 - - [02/Apr/2012:15:55:40 -0600] "GET _http://vlad-tepes.bofh.it/freenode-proxy-checker.txt HTTP/1.0" 404 307 "-" "-"
      85.190.0.3 - - [02/Apr/2012:15:55:40 -0600] "POST _http://vlad-tepes.bofh.it/freenode-proxy-checker.txt HTTP/1.0" 404 307 "-" "-"
      85.190.0.3 - - [02/Apr/2012:15:55:40 -0600] "CONNECT 213.92.8.7:31204 HTTP/1.0" 405 303 "-" "-"
      85.190.0.3 - - [02/Apr/2012:15:55:40 -0600] "CONNECT 213.92.8.7:31204 HTTP/1.0" 405 303 "-" "-"
      10.0.0.253 - - [02/Apr/2012:16:01:45 -0600] "GET / HTTP/1.1" 403 5043 "-" "WWW-Mechanize/1.0.0 (http://rubyforge.org/projects/mechanize/)"
      10.0.0.253 - - [02/Apr/2012:16:02:27 -0600] "GET / HTTP/1.1" 403 5043 "-" "WWW-Mechanize/1.0.0 (http://rubyforge.org/projects/mechanize/)"
      85.190.0.3 - - [02/Apr/2012:16:13:40 -0600] "POST _http://vlad-tepes.bofh.it/freenode-proxy-checker.txt HTTP/1.0" 404 307 "-" "-"
      85.190.0.3 - - [02/Apr/2012:16:13:40 -0600] "CONNECT 213.92.8.7:31204 HTTP/1.0" 405 303 "-" "-"
      85.190.0.3 - - [02/Apr/2012:16:13:40 -0600] "GET _http://vlad-tepes.bofh.it/freenode-proxy-checker.txt HTTP/1.0" 404 307 "-" "-"
      85.190.0.3 - - [02/Apr/2012:16:13:40 -0600] "CONNECT 213.92.8.7:31204 HTTP/1.0" 405 303 "-" "-"
      89.135.124.125 - - [02/Apr/2012:16:20:47 -0600] "GET /phpmyadmin/scripts/setup.php HTTP/1.1" 404 305 "_http://173.164.35.181/phpmyadmin/scripts/setup.php" "Opera"
      89.135.124.125 - - [02/Apr/2012:16:20:48 -0600] "POST /phpmyadmin/scripts/setup.php HTTP/1.1" 404 305 "_http://173.164.35.181/phpmyadmin/scripts/setup.php" "Opera"
      85.190.0.3 - - [02/Apr/2012:16:29:59 -0600] "CONNECT 213.92.8.7:31204 HTTP/1.0" 405 303 "-" "-"
      85.190.0.3 - - [02/Apr/2012:16:29:59 -0600] "GET http://vlad-tepes.bofh.it/freenode-proxy-checker.txt HTTP/1.0" 404 307 "-" "-"
      85.190.0.3 - - [02/Apr/2012:16:29:59 -0600] "CONNECT 213.92.8.7:31204 HTTP/1.0" 405 303 "-" "-"
      85.190.0.3 - - [02/Apr/2012:16:29:59 -0600] "POST http://vlad-tepes.bofh.it/freenode-proxy-checker.txt HTTP/1.0" 404 307 "-" "-"

    Read the article

  • NSURLConnection and Basic HTTP Authentication

    - by Justin Galzic
    I need to invoke an initial GET HTTP request with Basic Authentication. This would be the first time the request is sent to the server, and I already have the username & password, so there's no need for a challenge from the server for authorization. Two questions: 1) Does NSURLConnection have to be set as synchronous to do Basic Auth? According to the answer on this post, it seems that you can't do Basic Auth if you opt for the async route. 2) Does anyone know of any sample code that illustrates Basic Auth on a GET request without the need for a challenge response? Apple's documentation shows an example, but only after the server has issued the challenge request to the client. I'm kind of new to the networking portion of the SDK and I'm not sure which of the other classes I should use to get this working. (I see the NSURLCredential class, but it seems that it is used only with NSURLAuthenticationChallenge after the client has requested an authorized resource from the server.)
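
    At the protocol level, preemptive Basic Auth is nothing more than sending the Authorization header yourself: the value is "Basic " plus the base64 of "username:password", attached to the request before it goes out, with no challenge round-trip involved. A minimal sketch of the same idea in Python for illustration (credentials and URL are placeholders); in the NSURLConnection world this corresponds to setting the header on an NSMutableURLRequest rather than waiting for a challenge:

        import base64
        import urllib.request

        # build the header value by hand: "Basic " + base64("user:password")
        credentials = base64.b64encode(b"username:password").decode("ascii")
        request = urllib.request.Request("https://example.com/resource")
        request.add_header("Authorization", "Basic " + credentials)

        # the server sees valid credentials on the very first GET,
        # so no 401 challenge round-trip is needed
        with urllib.request.urlopen(request) as response:
            print(response.status)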

    Read the article

  • What is the fastest way to send 100,000 HTTP requests in Python?

    - by Igor G.
    Hello, I am opening a file which has 100,000 URLs. I need to send an HTTP request to each URL and print the status code. I am using Python 2.6, and so far I have looked at the many confusing ways Python implements threading/concurrency. I have even looked at the Python concurrence library, but cannot figure out how to write this program correctly. Has anyone come across a similar problem? I guess generally I need to know how to perform thousands of tasks in Python as fast as possible; I suppose that means 'concurrently'. Thank you, Igor
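
    A thread pool is the usual shape of the answer: the work is I/O-bound, so a few dozen threads pulling URLs off a shared list keeps the network busy without spawning 100,000 threads. A minimal sketch on a modern Python (the question's Python 2.6 would use the Queue and threading modules instead; HEAD is used to avoid downloading response bodies):

        import concurrent.futures
        import urllib.error
        import urllib.request

        def fetch_status(url):
            # HEAD avoids transferring the body; we only want the status
            req = urllib.request.Request(url, method="HEAD")
            try:
                with urllib.request.urlopen(req, timeout=10) as resp:
                    return url, resp.status
            except urllib.error.HTTPError as err:
                return url, err.code       # 4xx/5xx still carry a status
            except Exception as err:
                return url, repr(err)      # DNS failures, timeouts, ...

        with open("urls.txt") as f:
            urls = [line.strip() for line in f if line.strip()]

        # ~50 workers is plenty for I/O-bound work like this
        with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
            for url, status in pool.map(fetch_status, urls):
                print(status, url)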

    Read the article
