Search Results

Search found 2242 results on 90 pages for 'discussion'.

Page 69 of 90

  • Non-blocking I/O using Servlet 3.1: Scalable applications using Java EE 7 (TOTD #188)

    - by arungupta
    Servlet 3.0 allowed asynchronous request processing but only traditional I/O was permitted. This can restrict scalability of your applications. In a typical application, ServletInputStream is read in a while loop. public class TestServlet extends HttpServlet {    protected void doGet(HttpServletRequest request, HttpServletResponse response)         throws IOException, ServletException {     ServletInputStream input = request.getInputStream();       byte[] b = new byte[1024];       int len = -1;       while ((len = input.read(b)) != -1) {          . . .        }   }} If the incoming data is blocking or streamed slower than the server can read, then the server thread is waiting for that data. The same can happen if the data is written to ServletOutputStream. This is resolved in Servlet 3.1 (JSR 340, to be released as part of Java EE 7) by adding event listeners - the ReadListener and WriteListener interfaces. These are then registered using ServletInputStream.setReadListener and ServletOutputStream.setWriteListener. The listeners have callback methods that are invoked when the content is available to be read or can be written without blocking. The updated doGet in our case will look like: AsyncContext context = request.startAsync();ServletInputStream input = request.getInputStream();input.setReadListener(new MyReadListener(input, context)); Invoking the setXXXListener methods indicates that non-blocking I/O is used instead of the traditional I/O. At most one ReadListener can be registered on ServletInputStream and similarly at most one WriteListener can be registered on ServletOutputStream. ServletInputStream.isReady and ServletInputStream.isFinished are new methods to check the status of a non-blocking I/O read. ServletOutputStream.canWrite is a new method to check if data can be written without blocking.  The MyReadListener implementation looks like: @Overridepublic void onDataAvailable() { try { StringBuilder sb = new StringBuilder(); int len = -1; byte b[] = new byte[1024]; while (input.isReady() && (len = input.read(b)) != -1) { String data = new String(b, 0, len); System.out.println("--> " + data); } } catch (IOException ex) { Logger.getLogger(MyReadListener.class.getName()).log(Level.SEVERE, null, ex); }}@Overridepublic void onAllDataRead() { System.out.println("onAllDataRead"); context.complete();}@Overridepublic void onError(Throwable t) { t.printStackTrace(); context.complete();} This implementation has three callbacks: the onDataAvailable callback method is called whenever data can be read without blocking; the onAllDataRead callback method is invoked when data for the current request has been completely read; the onError callback is invoked if there is an error processing the request. Notice, context.complete() is called in onAllDataRead and onError to signal the completion of the data read. For now, the first chunk of available data needs to be read in the doGet or service method of the Servlet. The rest of the data can be read in a non-blocking way using ReadListener after that. This is going to get cleaned up so that all data reads can happen in the ReadListener only. The sample explained above can be downloaded from here and works with GlassFish 4.0 build 64 and onwards. The slides and a complete re-run of the What's new in Servlet 3.1: An Overview session at JavaOne are available here. Here are some more references for you: Java EE 7 Specification Status Servlet Specification Project JSR Expert Group Discussion Archive Servlet 3.1 Javadocs
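
    The post shows only the read side; the write side follows the same pattern. Below is a rough sketch (not taken from the post) of what a WriteListener counterpart could look like. It uses the onWritePossible and isReady names from the final Servlet 3.1 API, whereas the draft build referenced above exposes canWrite, so treat the exact method names as an assumption.

        // Sketch only: a WriteListener that streams queued chunks without blocking.
        import java.io.IOException;
        import java.util.Queue;
        import javax.servlet.AsyncContext;
        import javax.servlet.ServletOutputStream;
        import javax.servlet.WriteListener;

        public class MyWriteListener implements WriteListener {

            private final ServletOutputStream output;
            private final Queue<byte[]> chunks;   // data to send, prepared elsewhere
            private final AsyncContext context;

            public MyWriteListener(ServletOutputStream output, Queue<byte[]> chunks, AsyncContext context) {
                this.output = output;
                this.chunks = chunks;
                this.context = context;
            }

            @Override
            public void onWritePossible() throws IOException {
                // Keep writing while the container can accept data without blocking.
                while (output.isReady() && !chunks.isEmpty()) {
                    output.write(chunks.poll());
                }
                if (chunks.isEmpty()) {
                    context.complete();   // all data written, finish the async request
                }
                // If isReady() returned false, the container invokes onWritePossible()
                // again once the output becomes writable.
            }

            @Override
            public void onError(Throwable t) {
                t.printStackTrace();
                context.complete();
            }
        }

    It would be registered the same way as the read listener, e.g. response.getOutputStream().setWriteListener(new MyWriteListener(...)) after request.startAsync().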

    Read the article

  • Some PowerShell goodness

    - by KyleBurns
    Ever work somewhere where processes dump files into folders to maintain an archive?  Me too and Windows Explorer hates it.  Very often I find myself needing to organize these files into subfolders so that I can go after files without locking up Windows Explorer and my answer used to be to write a program in something like C# to do the job.  These programs will typically enumerate the files in a folder and move each file to a subdirectory named based on a datestamp.  The last such program I wrote had to use lower-level Win32 API calls to perform the enumeration because it appears the standard .Net calls make use of the same method of enumerating the directories that Windows Explorer chokes on when dealing with a large number of entries in a particular directory, so a simple task was accomplished with a lot of code. Of course, this little utility was just something I used to make my life easier and "not a production app", so it was in my local source folder and not source control when my hard drive died.  So... I was getting ready to re-create it and thought it might be a good idea to play with PowerShell a bit - something I had been wanting to do but had not yet met a requirement to make me do it.  The resulting script was amazingly succinct and even building the flexibility for parameterization and adding line breaks for readability was only about 25 lines long.  Here's the code with discussion following: param(     [Parameter(         Mandatory = $false,         Position = 0,         HelpMessage = "Root of the folders or share to archive.  Be sure to end with appropriate path separator"     )]     [String] $folderRoot="\\fileServer\pathToFolderWithLotsOfFiles\",       [Parameter(         Mandatory = $false,         Position = 1     )]     [int] $days = 1 ) dir $folderRoot|?{(!($_.PsIsContainer)) -and ((get-date) - $_.lastwritetime).totaldays -gt $days }|%{     [string]$year=$([string]$_.lastwritetime.year)     [string]$month=$_.lastwritetime.month     [string]$day=$_.lastwritetime.day     $dir=$folderRoot+$year+"\"+$month+"\"+$day     if(!(test-path $dir)){         new-item -type container $dir     }     Write-output $_     move-item $_.fullname $dir } The script starts by declaring two parameters.  The first parameter holds the path to the folder that I am going to be sorting into subdirectories.  The path separator is intended to be included in this argument because I didn't want to mess with determining whether this was local or UNC and picking the right separator in code, but this could be easily improved upon using Path.Combine since PowerShell has access to the full framework libraries.  The second parameter holds a minimum age in days for files to be removed from the root folder.  The script then pipes the dir command through a query to include only files (by excluding containers) and of those, only entries that meet the age requirement based on the last modified datestamp.  For each of those, the datestamp is used to construct a folder name in the format YYYY\MM\DD (if you're in an environment where even a day's worth of files need further divided, you could make this more granular) and the folder is created if it does not yet exist.  Finally, the file is moved into the directory. One of the things that was really cool about using PowerShell for this task is that the new-item command is smart enough to create the entire subdirectory structure with a single call.  
In previous code that I have written to do this kind of thing, I would have to test the entire tree leading down to the subfolder I want, leading to a lot of branching code that detracted from being able to quickly look at the code and understand the job it performs. Overall, I have to say I'm really pleased with what has been done making PowerShell powerful and useful.

    Read the article

  • The Growing Importance of Network Virtualization

    - by user12608550
    The Growing Importance of Network Virtualization We often focus on server virtualization when we discuss cloud computing, but just as often we neglect to consider some of the critical implications of that technology. The ability to create virtual environments (or VEs [1]) means that we can create, destroy, activate and deactivate, and more importantly, MOVE them around within the cloud infrastructure. This elasticity and mobility has profound implications for how network services are defined, managed, and used to provide cloud services. It's not just servers that benefit from virtualization, it's the network as well. Network virtualization is becoming a hot topic, and not just for discussion but for companies like Oracle and others who have recently acquired net virtualization companies [2,3]. But even before this topic became so prominent, Solaris engineers were working on technologies in Solaris 11 to virtualize network services, known as Project Crossbow [4]. And why is network virtualization so important? Because old assumptions about network devices, topology, and management must be re-examined in light of the self-service, elasticity, and resource sharing requirements of cloud computing infrastructures. Static, hierarchical network designs, and inter-system traffic flows, need to be reconsidered and quite likely re-architected to take advantage of new features like virtual NICs and switches, bandwidth control, load balancing, and traffic isolation. For example, traditional multi-tier Web services (Web server, App server, DB server) that share net traffic over Ethernet wires can now be virtualized and hosted on shared-resource systems that communicate within a larger server at system bus speeds, increasing performance and reducing wired network traffic. And virtualized traffic flows can be monitored and adjusted as needed to optimize network performance for dynamically changing cloud workloads. Additionally, as VEs come and go and move around in the cloud, static network configuration methods cannot easily accommodate the routing and addressing flexibility that VE mobility implies; virtualizing the network itself is a requirement. Oracle Solaris 11 [5] includes key network virtualization technologies needed to implement cloud computing infrastructures. It includes features for the creation and management of virtual NICs and switches, and for the allocation and control of the traffic flows among VEs [6]. Additionally it allows for both sharing and dedication of hardware components to network tasks, such as allocating specific CPUs and vNICs to VEs, and even protocol-specific management of traffic. So, have a look at your current network topology and management practices in view of evolving cloud computing technologies. And don't simply duplicate the physical architecture of servers and connections in a virtualized environment…rethink the traffic flows among VEs and how they can be optimized using Oracle Solaris 11 and other Oracle products and services. [1] I use the term "virtual environment" or VE here instead of the more commonly used "virtual machine" or VM, because not all virtualized operating system environments are full OS kernels under the control of a hypervisor…in other words, not all VEs are VMs. In particular, VEs include Oracle Solaris zones, as well as SPARC VMs (previously called LDoms), and x86-based Solaris and Linux VMs running under hypervisors such as OEL, Xen, KVM, or VMware. 
[2] Oracle follows VMware into network virtualization space with Xsigo purchase; http://www.mercurynews.com/business/ci_21191001/oracle-follows-vmware-into-network-virtualization-space-xsigo [3] Oracle Buys Xsigo; http://www.oracle.com/us/corporate/press/1721421 [4] Oracle Solaris 11 Networking Virtualization Technology, http://www.oracle.com/technetwork/server-storage/solaris11/technologies/networkvirtualization-312278.html [5] Oracle Solaris 11; http://www.oracle.com/us/products/servers-storage/solaris/solaris11/overview/index.html [6] For example, the Solaris 11 'dladm' command can be used to limit the bandwidth of a virtual NIC, as follows: dladm create-vnic -l net0 -p maxbw=100M vnic0

    Read the article

  • “Apparently, you signed a software services agreement without fully understanding it.”

    - by Dave Ballantyne
    I am not a lawyer. Let me say that again, I am not a lawyer. Today's Dilbert has prompted me to post about my recent experience with SQL Server licensing. I'm in the technical realm and rarely have much to do with purchasing and licensing.  I say “I need”; budget realities will dictate whether I actually get it.  However, I do keep my ear to the ground and due to my community involvement, I know, or at least have an understanding of, some licensing restrictions. Due to a misunderstanding, Microsoft Licensing stated that we needed licenses for our standby servers.  I knew that that was not the case, and a quick tweet confirmed this. So after composing an email stating exactly what the machines in question were used for, i.e. log shipped to and used in a disaster recovery scenario only, and posting several Technet articles to back this up, we saved 2 enterprise edition licences, a not inconsiderable cost. However, during this discussion, I was made aware of another ‘legalese’ document that could completely override the referenced articles, and anything I knew, or thought I knew, about SQL Server licensing. Personally, I had no knowledge of this.  The “Product Use Rights” agreement would appear to be the volume licensing equivalent of the “End User License Agreement”, the click-throughs we all know and ignore.  Here is a direct quote from Microsoft licensing, when asked for clarification. “Thanks for your email. Just to give some background on the Product Use Rights (PUR), licenses acquired through volume licensing are bound by the most recent PUR at the time of license acquisition. The link for the current PUR and PUR archive is http://www.microsoft.com/licensing/about-licensing/product-licensing.aspx. Further to this, products acquired through boxed product or pre-installed on hardware (OEM) are bound by the End User License Agreement (EULA). The PUR will explain limitations, license requirements and rulings on areas like multiplexing, virtualization, processor licensing, etc. When an article will appear on a Microsoft site or blog describing the licensing of a product, it will be using the PUR as a base. Due to the writing style or language used by the person writing areas of the website or technical blogs, the PUR is what you should use as a rule and not any of the other media. The PUR is updated quarterly and will reference every product available at that time working on the latest version unless otherwise stated. The crux of this is that the PUR is written after extensive discussions between the different branches of Microsoft (legal, technical, etc) and the wording is then approved. This is not always the case for some pages explaining licensing as they are merely intended to advise and not subject to the intense scrutiny as the PUR.” So, exactly what does that mean? My take: this is a living document, “updated quarterly”, though presumably this could be done on a whim and a fancy.  It could state that you are only licensed if, during install, you stand in a corner juggling and that photographic evidence is required. A plainly ridiculous demand, but what else could it override, or what new requirements could it state, that change your existing understanding of the product or your legal usage of it? As I say, I'm not a lawyer, but are you checking the PUR prior to purchase?

    Read the article

  • JTF Translation Festival 2011

    - by user13133135
    It's been a while since the last post... I have been working on machine translation (MT) and post editing (PE) for Japanese. Last year was my first step in the MT+PE area, and I would take this year as an advanced step. I plan to talk about Post Editing 2011 (Advanced Step) on November 29 at the JTF Translation Festival. (5 days before applications are due!) 21st JTF Translation Festival. Date: Nov 29, 2011 (Tuesday), 9:30-20:30 (gates open 9:00). Place: Arcadia Ichigaya, Tokyo. http://www.jtf.jp/jp/festival/festival_top.html In this session, I would like to expand the thought on "how to best utilize MT and PE" both from the view of the Client and of the Translator. I will show some examples of post editing as a guideline to know what is the best and most effective way to do post-editing for Japanese. Also, I will discuss what is the best practice for MT users (Clients). The session lasts 90 minutes... sounds a little long for me, but I want to spend more time on discussion than last year. It would be great to exchange thoughts or experiences about MT and PE. What are your concerns or problems in the daily work with MT? If you have some, please bring them to my session at the JTF Translation Festival. Here are my session details (in Japanese): http://www.jtf.jp/jp/festival/festival_program.html#koen_04 Here is the outline of my session: What is the advantage of MT? Does it solve all the problems about cost, resource, and quality? Well, it is not magic. So, you cannot expect all at once. When you have a problem, there are 3 options... 1. Be patient and wait until everything is ready, 2. Run a workaround using anything available now, 3. Find out something completely new and spend time and money. This time, I will focus on Option 2 - do something with what we already have. That is, I will discuss how we can best utilize MT in our daily business. My view is two ways: from the Client's point of view, and from the Translator's point of view. Looking forward to meeting many people and exchanging thoughts and information!

    Read the article

  • EMEA Analytics & Data Integration Oracle Partner Forum

    - by milomir.vojvodic
    MONDAY 12TH NOVEMBER, 2012 IN LONDON (UK) For Oracle Partners across Europe, Middle East and Africa: come to hear the latest news from Oracle OpenWorld about Oracle BI & Data Integration, and propel your business growth as an Oracle partner. This event should appeal to BI or Data Integration specialized partners, Executives, Sales, Pre-sales and Solution architects: with a choice of participation in the plenary day and then a set of special interest (technical) sessions. The follow-on breakout sessions from the 13th November provide deeper dives and technical training for those of you who wish to stay for more detailed and hands-on workshops. Keynote: Andrew Sutherland, SVP Oracle Technology. Hot agenda items will include: The Fusion Middleware Stack: Engineered to work together; A complete Analytics and Data Integration Solution Architecture: Big Data and Little Data combined; In-Memory Analytics for Extreme Insight; Latest Product Development Roadmap for Data Integration and Analytics. Venue: Oracle's London City Moorgate offices. Places are limited; register from this link. Note: Registration for the conference and the deeper dives and technical training is free of charge to OPN member Partners, but you will be responsible for your own travel and hotel expenses. Event Schedule: During this event you can learn about partner success stories, participate in an array of break-out sessions, exchange information with other partners and enjoy a vibrant panel discussion. Nov. 12th: Day 1 Main Plenary Session: Full day, starting 10.30 am. Oracle Hosted Dinner in the Evening. Nov. 13th onwards: Architecture Masterclass: IM Reference Architecture – Big Data and Little Data combined (1 day); BI-Apps Bootcamp (4 days); Oracle GoldenGate workshop (1 day); Oracle Data Integrator and Oracle Enterprise Data Quality workshop (1 day). For further information and detail, download the Agenda (pdf) or contact Michael Hallett at [email protected] and Milomir Vojvodic at [email protected]

    Read the article

  • Is Your Company Social on the Inside?

    - by Mike Stiles
    As we talk about the extension of social from an outbound-facing marketing tool to a platform that will reach across the entire enterprise, servicing multiple functions of that enterprise, it might be time to take a look at how social can be effectively employed for internal communications. Remember the printed company newsletter? Yeah, nobody reads it. Remember the emailed company newsletter? Yeah, nobody reads it. Why not? Shouldn’t your employees care about the company more than anything else in life and be voraciously hungry for any information related to it? The more realistic prospect is that a company’s employees don’t behave much differently at work where information is concerned than they do in their personal lives. They “tune in” to information that’s immediately relevant to them, that piques their interest, and/or that’s presented in a visually engaging way. That currently makes an internal social platform the ideal way to communicate within the organization. It not only facilitates more immediate, more targeted (and thus more relevant) messaging from the company out to employees, it sets a stage for employees to communicate with each other and efficiently get answers to questions from peers. It’s a collaboration tool on steroids. If you build such an internal social portal and you do it right, will employees use it? Considering social media has officially been declared more addictive than cigarettes, booze and sex…probably. But what does it mean to do an internal social platform “right”? The bar has been set pretty high. Your employees are used to Twitter and Facebook, and would roll their eyes at anything less simple or harder to navigate than those. All the Facebook best practices would apply to your internal social as well, including the importance of managing posting frequency, using photos and video, moderation & response, etc. And don’t worry, you won’t be the first to jump in. WPP's global digital agency Possible has its own social network called Colab. Nestle has “The Nest.” Red Robin’s got one. I myself got an in-depth look at McGraw-Hill’s internal social platform at Blogwell NYC. Some of these companies are building their own platforms, others are buying them off the shelf or customizing readymade solutions. But you won’t be the last either. Prescient Digital Media and the IABC learned that 39% of companies don’t offer employees any social tools. Not a social network, not discussion forums, not even IM. And a great many continue to ban the use of Facebook and Twitter on the premises. That’s pretty astonishing since social has become as essential a modern day communications tool as the telephone. But such holdouts will pay a big price for being mired in fear while competitors exploit social connections unchallenged. Fish where the fish are. If social has become the way people communicate and take in information, let that be the way communication is trafficked in the organization.

    Read the article

  • Where to place web.xml outside WAR file for secure redirect?

    - by Silverhalide
    I am running Tomcat 7 and am deploying a bunch of applications delivered to me by a third party as WAR files. I'd like to force some of those apps to always use SSL. (All the "SSL" apps are in one service; other apps outside this discussion are in another service.) I've figured out how to use conf\web.xml to redirect apps from HTTP to HTTPS, but that applies to all applications hosted by Tomcat. I've also figured out how to put web.xml in an unpacked app's web-inf directory; that does the trick for that specific app, but runs the risk of being overwritten if our vendor gives us a new war file to deploy. I've also tried placing the web.xml file in various places under conf\service\host, or under appbase, but none seem to work. Is it possible to redirect some apps to SSL without forcing all apps to redirect, or to put the web.xml file inside the extracted WAR file? Here's my server.xml: <Service name="secure"> <Connector port="80" connectionTimeout="20000" redirectPort="443" URIEncoding="UTF-8" enableLookups="false" compression="on" protocol="org.apache.coyote.http11.Http11Protocol" compressableMimeType="text/html,text/xml,text/plain,text/javascript,application/json,text/css"/> <Connector port="443" URIEncoding="UTF-8" enableLookups="false" compression="on" protocol="org.apache.coyote.http11.Http11Protocol" compressableMimeType="text/html,text/xml,text/plain,text/javascript,application/json,text/css" scheme="https" secure="true" SSLEnabled="true" sslProtocol="TLS" keystoreFile="..." keystorePass="..." keystoreType="PKCS12" truststoreFile="..." truststorePass="..." truststoreType="JKS" clientAuth="false" ciphers="SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_DSS_WITH_AES_128_CBC_SHA,SSL_RSA_WITH_AES_128_CBC_SHA"/> <Engine name="secure" defaultHost="localhost"> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <Host name="localhost" appBase="webapps" unpackWARs="false" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> </Host> </Engine> </Service> <Service name="mutual-secure"> ... </Service> The content of the web.xml files I'm playing with is: <web-app xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd" version="3.0" metadata-complete="true"> <security-constraint> <web-resource-collection> <web-resource-name>All applications</web-resource-name> <url-pattern>/*</url-pattern> </web-resource-collection> <user-data-constraint> <description>Redirect all requests to HTTPS</description> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </user-data-constraint> </security-constraint> </web-app> (For conf\web.xml the security-constraint is added just before the end of the existing file, rather than create a new file.) My webapps directory (currently) contains only the WAR files.

    Read the article

  • Using Content Analytics for More Effective Engagement

    - by Kellsey Ruppel
    Using Content Analytics for More Effective Engagement: Turning High-Volume Content into Templates for Success By Mitchell Palski, Oracle WebCenter Sales Consultant Many organizations use Oracle WebCenter Portal to develop these basic types of portals: Intranet portals used for collaboration, employee self-service, and company communication Extranet portals used by customers and partners for self-service and support Team collaboration portals that allow users to share documents and content, track activity, and engage in discussions Portals are intended to provide a personalized, single point of interaction with web-based applications and information. The user experiences that a Portal is capable of displaying should be relevant to an individual user or class of users (a group or role). The components of a Portal that would vary based on a user’s identity include: Web content such as images, news articles, and on-screen instruction Social tools such as threaded discussions, polls/surveys, and blogs Document management tools to upload, download, and edit files Web applications that present data visualizations and data entry modules These collections of content, tools, and applications make up valuable workspaces. The challenge that a development team may have is defining which combinations are the most effective for its users. No one wants to create and manage a workspace that goes un-used or (even worse) that is used but is ineffective. Oracle WebCenter Portal provides you with the capabilities to not only rapidly develop variations of portals, but also identify which portals are the most effective and should be re-used throughout an enterprise. Capturing Portal Analytics: Oracle WebCenter Portal provides an analytics service that allows administrators and business users to track and analyze portal usage. These analytics are captured in the form of: Usage tracking metrics Behavior tracking User Profile Correlation The out-of-the-box task reports that come with Oracle WebCenter Portal include: WebCenter Portal Traffic Page Traffic Login Metrics Portlet Traffic Portlet Response Time Portlet Instance Traffic Portlet Instance Response Time Search Metrics Document Metrics Wiki Metrics Blog Metrics Discussion Metrics Portal Traffic Portal Response Time By determining the usage and behavior tracking metrics that are associated with specific user profiles (including groups and roles), your administrators will be able to identify the components of your solution that are the most valuable.  Your first step as an administrator should be to identify the specific pages and/or components that are used the most frequently. Next, determine the user(s) or user-group(s) that are accessing those high-use elements of a portal. It is also important to determine patterns in high usage and see if they correlate to a specific schedule. One of the goals of any development team (especially those that are following Agile methodologies) should be to develop reusable web components to minimize redundant development. Oracle WebCenter Portal provides you the tools to capture the successful workspaces that have already been developed and identified so that they can be reused for similar user demographics. 
Re-using Successful Portals: When creating a new Portal in Oracle WebCenter, developers have the option to base that portal on a template that includes: Pre-seeded data such as pages, tools, user roles, and look-and-feel assets Specific sub-sets of page-layouts, tools, and other resources to standardize what is added to a Portal’s pages Any custom components that your team creates during development cycles Once you have identified a successful workspace and its most valuable components, leverage Oracle WebCenter’s ability to turn that custom portal into a portal template. By creating a template from your already successful portal, you are empowering your enterprise by providing a starting point for future initiatives. Your new projects, new teams, and new web pages can benefit from lessons learned and adjustments that have already been made to optimize user experiences instead of starting from scratch. For a complete explanation of how to work with Portal Templates, be sure to read the Fusion Middleware documentation available online.

    Read the article

  • What's new in Servlet 3.1 ? - Java EE 7 moving forward

    - by arungupta
    Servlet 3.0 was released as part of Java EE 6 and made huge changes focused on ease-of-use. The idea was to leverage the latest language features such as annotations and generics and modernize how Servlets can be written. The web.xml was made as optional as possible. Servlet 3.1 (JSR 340), scheduled to be part of Java EE 7, is an incremental release focusing on a couple of key features and some clarifications in the specification. The main features of Servlet 3.1 are explained below: Non-blocking I/O - Servlet 3.0 allowed asynchronous request processing but only traditional I/O was permitted. This can restrict scalability of your applications. Non-blocking I/O allows you to build scalable applications. TOTD #188 provides more details about how non-blocking I/O can be done using Servlet 3.1. HTTP protocol upgrade mechanism - Section 14.42 in the HTTP 1.1 specification (RFC 2616) defines an upgrade mechanism that allows a transition from HTTP 1.1 to some other, incompatible protocol. The capabilities and nature of the application-layer communication after the protocol change is entirely dependent upon the new protocol chosen. After an upgrade is negotiated between the client and the server, the subsequent requests use the new chosen protocol for message exchanges. A typical example is how the WebSocket protocol is upgraded from HTTP as described in the Opening Handshake section of RFC 6455. The decision to upgrade is made in the Servlet.service method. This is achieved by adding a new method: HttpServletRequest.upgrade and two new interfaces: javax.servlet.http.HttpUpgradeHandler and javax.servlet.http.WebConnection. TyrusHttpUpgradeHandler shows how the WebSocket protocol upgrade is done in Tyrus (Reference Implementation for Java API for WebSocket). Security enhancements - Applying run-as security roles to #init and #destroy methods; protection against session fixation attacks by adding HttpServletRequest.changeSessionId and a new interface, HttpSessionIdListener (you can listen for any session id changes using these methods); default security semantics for a non-specified HTTP method in <security-constraint>; clarifying the semantics if a parameter is specified in both the URI and payload. Miscellaneous - ServletResponse.reset clears any data that exists in the buffer as well as the status code and headers. In addition, Servlet 3.1 will also clear the state of calling getServletOutputStream or getWriter. ServletResponse.setCharacterEncoding: Sets the character encoding (MIME charset) of the response being sent to the client, for example, to UTF-8. A relative protocol URL can be specified in HttpServletResponse.sendRedirect. This will allow a URL to be specified without a scheme. That means instead of specifying "http://anotherhost.com/foo/bar.jsp" as a redirect address, "//anotherhost.com/foo/bar.jsp" can be specified. In this case the scheme of the corresponding request will be used. Clarification in HttpServletRequest.getPart and .getParts without multipart configuration. Clarification that ServletContainerInitializer is independent of metadata-complete and is instantiated per web application. A complete replay of What's New in Servlet 3.1: An Overview from JavaOne 2012 can be seen here (click on CON6793_mp4_6793_001 in Media). Each feature will be added to the JSR subject to EG approval. You can share your feedback at [email protected]. 
Here are some more references for you: Servlet 3.1 Public Review Candidate Downloads Servlet 3.1 PR Candidate Spec Servlet 3.1 PR Candidate Javadocs Servlet Specification Project JSR Expert Group Discussion Archive Java EE 7 Specification Status Several features have already been integrated in GlassFish 4 Promoted Builds. Have you tried any of them ? Here are some other Java EE 7 primers published so far: Concurrency Utilities for Java EE (JSR 236) Collaborative Whiteboard using WebSocket in GlassFish 4 (TOTD #189) Non-blocking I/O using Servlet 3.1 (TOTD #188) What's New in EJB 3.2 ? JPA 2.1 Schema Generation (TOTD #187) WebSocket Applications using Java (JSR 356) Jersey 2 in GlassFish 4 (TOTD #182) WebSocket and Java EE 7 (TOTD #181) Java API for JSON Processing (JSR 353) JMS 2.0 Early Draft (JSR 343) And of course, more on their way! Do you want to see any particular one first ?
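
    As a rough illustration of the protocol upgrade mechanism described above, here is a minimal sketch. Only the Servlet 3.1 types (HttpServletRequest.upgrade, HttpUpgradeHandler, WebConnection) are real; the servlet, the EchoHandler class and the "echo" protocol are invented for the example, and the 101 handshake details are deliberately left out.

        // Sketch: handing an HTTP/1.1 connection over to a custom protocol handler.
        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.annotation.WebServlet;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import javax.servlet.http.HttpUpgradeHandler;
        import javax.servlet.http.WebConnection;

        @WebServlet("/upgrade")
        public class UpgradeServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                if ("echo".equals(request.getHeader("Upgrade"))) {
                    // Negotiation details (101 status, Upgrade/Connection response headers)
                    // are elided here; see section 14.42 of RFC 2616 and your container's docs.
                    request.upgrade(EchoHandler.class);
                } else {
                    response.sendError(HttpServletResponse.SC_BAD_REQUEST, "Upgrade header expected");
                }
            }
        }

        class EchoHandler implements HttpUpgradeHandler {
            @Override
            public void init(WebConnection wc) {
                // HTTP processing is over; from here on the handler owns the raw connection
                // and talks the new protocol via wc.getInputStream() / wc.getOutputStream().
            }

            @Override
            public void destroy() {
                // Connection is being torn down; release anything held for it.
            }
        }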

    Read the article

  • Is Visual Source Safe (The latest Version) really that bad? Why? What's the Best Alternative? Why? [closed]

    - by hanzolo
    Over the years I've constantly heard horror stories, had people say "Real Programmers Don't Use VSS", and so on. BUT, then in the workplace I've worked at two companies, one, a very well known public facing high traffic website, and another high end Financial Services "Web-Based" hosted solution catering to some very large, very well known companies, which is where I currently reside and everything's working just fine (KNOCK KNOCK!!). I'm constantly interfacing with EXTREMELY Old technology with some of these financial institutions.. OLD LIKE YOU WOULDN'T BELIEVE.. which leads me to the conclusion that if it works "LEAVE IT", and that maybe there's some value in old technology? at least enough value to overrule a rewrite!? right?? Is there something fundamentally flawed with the underlying technology that VSS uses? I have a feeling that if I said "someone said VSS Sucks" they would beg to differ, most likely give me this 'I don't know' look, and I'd never gain back their respect and my credibility (well, that'll be hard to blow.. lol), BUT, give me an argument that I can take to someone who's been coding for 30 years, that builds Platforms that leverage current technology (.NET 3.5 / SQL 2008 R2), writes their own ORM with scaffolding and is able to provide a quality platform that supports thousands of concurrent users on a multi-tenant hosted solution, and does not agree with any benefits from having Source Control Integrated, and yet uses the Infamous Visual Source Safe. I have extensive experience with TFS up to 2010, and honestly I think it's great when a team (beyond developers) can embrace it. I've worked side by side with someone who's a die-hard SVN'er and from a purist standpoint, I see the beauty in it (I need a bit more, out of my SS, but it surely suffices). So, why are such smarties not running away from Visual Source Safe? Surely if it was so bad, it would've been realized by now, and I would not be sitting here with this simple old, Check In, Check Out, Version Resistant, Label Intensive system. But here I am... I would love to drop an argument that would be the end all argument, but if it's a matter of opinion and personal experience, there seems to be too much leeway for keeping VSS. UPDATE: I guess the best case is to have the VSS supporters check other people's experiences and draw from that until we (please no) experience the breaking factor ourselves. Until then, I won't be engaging in a discussion to migrate off of VSS.. UPDATE 11-2012: So I was able to convince everyone at my workplace that since MS is sunsetting Visual Source Safe it might be time to migrate over to TFS. I was able to convince them and have recently upgraded our team to Visual Studio 2012 and TFS 2012. The migration was fairly painless, had to run analyze.exe which found a bunch of errors (not sure they'll ever affect the project) and then manually run the VSSConverter.exe. Again, painless, except it took 16 hours to migrate 5 years worth of everything.. and now we're on TFS.. much more integrated.. much cooler.. so all in all, VSS served its purpose for years without a hiccup. There were no horror stories and Visual Source Safe as source control worked just fine. So to all the naysayers (me included): there's nothing wrong with using VSS. I wouldn't start a new project with it, and I would definitely consider migrating to TFS. (it's really not super difficult and a new "wizard" type converter is due out any day now so migrating should be painless). 
But from my experience, it worked just fine and got the job done.

    Read the article

  • Stateful vs. Stateless Webservices

    - by chrsk
    Imagine a more complex CRUD application which has a three-tier architecture and communicates over webservices. The client starts a conversation with the server and does some wizard-like stuff. To process the wizard, the client needs feedback given by the server. We started a discussion about stateful or stateless webservices for this approach. I did some research, combined with my own experience, which points me to the question mentioned later. Stateless webservices have the following properties (in our case): + high scalability + high availability + high speed + rapid testing - bloated contract - implementing more logic on server-side But we can cross out the first two points; our application doesn't need high scalability and availability. So we come to the stateful webservice. I've read a bunch of blogs and forum posts, and the most frequently mentioned points about implementing a stateful webservice were: + simplifies contract (protocol) - bad testing - runs counter to the basic architecture of http But don't almost all web applications have these bad points? Web applications use cookies, query strings, session ids, and all the stuff to get around the statelessness of http. So why is it that bad for webservices?
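
    To make the trade-off concrete, here is a minimal hypothetical sketch in Java (JAX-WS style; WizardService and WizardState are invented names) of the stateless approach: the wizard conversation travels with every call, which bloats the contract, but lets any server instance answer and keeps testing simple.

        // Hypothetical sketch of the stateless style: the wizard "conversation" travels
        // with every request instead of living on the server. In a real project these
        // classes would live in separate files.
        import java.util.ArrayList;
        import java.util.List;
        import javax.jws.WebMethod;
        import javax.jws.WebService;

        class WizardState {
            private List<String> answers = new ArrayList<>();   // everything entered so far
            private String feedback;                            // server feedback for the wizard UI

            public List<String> getAnswers() { return answers; }
            public void setAnswers(List<String> answers) { this.answers = answers; }
            public String getFeedback() { return feedback; }
            public void setFeedback(String feedback) { this.feedback = feedback; }
        }

        @WebService
        public class WizardService {

            // The accumulated state is part of the contract ("bloated contract"), but any
            // server instance can handle the call (scalability, availability, easy testing).
            @WebMethod
            public WizardState submitStep(WizardState stateSoFar, String answer) {
                stateSoFar.getAnswers().add(answer);
                stateSoFar.setFeedback(validate(stateSoFar));
                return stateSoFar;
            }

            private String validate(WizardState state) {
                // real validation logic would go here
                return "ok, proceed to step " + (state.getAnswers().size() + 1);
            }
        }

    The stateful alternative would keep WizardState on the server, keyed by a conversation id, which simplifies the contract but ties the conversation to server-side state, which is the trade-off discussed above.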

    Read the article

  • Do SEO-friendly URLs really affect a page's ranking?

    - by Lee Harold
    SEO-friendly URLs are all the rage these days. But do they actually have a meaningful impact on a page's ranking in Google and other search engines? If so, why? If not, why not? (Note that I would absolutely agree that SEO-friendly URLs are nicer to use for human beings. My question is whether they actually make a difference to the ranking algorithms.) Update: As it turns out, the Google post that endorphine points to here has caused tremendous confusion in the SEO community. For a sampling of the discussion, see here, here, and here. Part of the problem is that the Google post is addressing the worst case where URL rewriting is done poorly and so you'd be better off sticking with a dynamic URL rather than a mangled static "SEO-friendly" URL. There's no question dynamic URLs can be crawled by Google and can achieve high rankings. Maybe it would be easier to reframe the question more concretely: given 2 otherwise equivalent pages, which will rank higher for the search "do seo friendly urls really affect page ranking"? A) http://stackoverflow.com/questions/505793/do-seo-friendly-urls-really-affect-a-pages-ranking or B) http://stackoverflow.com?question=505793 (a fake URL for comparison only)

    Read the article

  • VS 2010 debugger not loading symbols when attaching to NUnit

    - by Dave Hanna
    (I just posted this in the NUnit discussion group on groups.google.com) Under VS 2008, I would run my tests under NUnit, and, if I needed to debug, I would attach the VS2008 debugger to the running NUnit process (Debug - Attach to Process), and set any breakpoints on code I wanted to examine. When I hit the Run button in NUnit, it would hit the breakpoint. (BTW, if it matters, this was running NUnit 2.5.2). I just upgraded to NUnit 2.5.4 and VS 2010. When I set a breakpoint, then attach to NUnit, I get a little warning symbol on the breakpoint dot, and hovering over it gives the tooltip "Breakpoint will not be hit. No symbols are currently loaded". Going to the Debug - Windows - Modules window shows a whole bunch of Windows and NUnit modules loaded, with the Symbol Status of "Skipped loading symbols", and then 1 module with a funny name that changes each time (r1euhmh5 right now), and Symbol Status of "No symbols loaded". (There is no trace of a module with a name remotely like my DLL under test). Right clicking the funny filename (assuming that to be some mapping from my DLL under test), and clicking Load Symbols From - Symbol Path, and navigating to the bin\debug folder, then clicking the pdb file of my DLL under test, I get the message "A matching symbol was not found in this folder". (The top of the Open dialog box has a line that says "Original location: r1euhmh5.pdb") So what's changed? And how do I go about debugging/breakpointing under VS 2010/NUnit 2.5.4 (Or is it possible I screwed something up when I decided to go through my VS2010 options and set some of them to more advanced levels than I knew what I was doing?) I appreciate any help.

    Read the article

  • How do you load and save content from a Silverlight 4 RichTextBox control?

    - by AnthonyWJones
    I've been reviewing the features of the RichTextBox control in Silverlight 4. What I've yet to find is any examples of Loading and Saving content in the RichTextBox. Anyone come across any or can shed some light on it? The control has a BlocksCollection in which I guess one could use the XamlReader to load a bunch of markup assuming that markup has a single top level node of type Block. Then add that block to the Blocks collection. Seems a shame that the RichTextBox bothers to have a "collection" in this case, why not simply a top-level Block item? Never-the-less that still leaves saving the content of a RichTextBox, I have no idea where to begin with that one? I'm sure I must be missing the obvious here but unless loading and saving data in to and from RichTextBox is at least possible if not easy I can't see how we can actually put it to use. Edit Thanks to DaveB's answer I found discussion of something called the DocumentPersister. However no reference to this class can be found in the MSDN documentation nor can I find it in the installed dlls via object browser search. Anyone, anyone at all?

    Read the article

  • ASP.NET MVC in a subfolder (only) on godaddy

    - by Anthony Potts
    Okay, I have read many of the routing posts concerning putting asp.net mvc on godaddy. However, I have not come to a solution to my current problem. I am trying to publish an ASP.NET MVC application to a subfolder on godaddy. I have upgraded the account to use IIS 7 and I have included the MVC dlls in \bin\ deployment method. However, I suspect that my route is not correct. Currently, my routes are set up with the standard out of the box route: public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = "" } // Parameter defaults ); } I have a subdomain set up so that it looks like office.domain.com. The subdomain is pointing at a folder "/office/" which is right off the root folder. (There is not an MVC application installed in the root folder). All of my application has been placed in this 'office' folder. When I hover over the links however, the 'office' portion shows up in the link as well. e.g. Hovering over a link to the customer controller, index action yields "office.domain.com/office/Customer" as the target. This link then gets a 404 when I attempt to go to it. What should my route be to fix this? Is there something I have neglected in setting up the subdomain in godaddy? Is this something I just can't do in godaddy's domain management "tool". Do I need to set up a virtual directory for this instead of just a directory? Update: I changed the IIS settings in godaddy to use integrated pipeline mode, per this discussion and I am no longer getting 404 errors. The application worked just fine as suggested it would.

    Read the article

  • Manually iterating over a selection of XML elements (C#, XDocument)

    - by user316117
    What is the “best practice” way of manually iterating (i.e., one at a time with a “next” button) over a set of XElements in my XDocument? Say I select the set of elements I want thusly: var elems = from XElement el in m_xDoc.Descendants() where (el.Name.LocalName.ToString() == "q_a") select el; I can use an IEnumerator to iterate over them, i.e., IEnumerator m_iter; But when I get to the end and I want to wrap around to the beginning if I call Reset() on it, it throws a NotSupportedException. That’s because, as the Microsoft C# 2.0 Specification under chapter 22 "Iterators" says "Note that enumerator objects do not support the IEnumerator.Reset method. Invoking this method causes a System.NotSupportedException to be thrown ." So what IS the right way of doing this? And what if I also want to have bidirectional iteration, i.e., a “back” button, too? Someone on a Microsoft discussion forum said I shouldn’t be using IEnumerable directly anyway. He said there was a way to do what I want with LINQ but I didn’t understand what. Someone else suggested dumping the XElements into a List with ToList(), which I think would work, but I wasn’t sure it was “best practice”. Thanks in advance for any suggestions!

    Read the article

  • Linking to MSVC DLL from MinGW

    - by IndigoFire
    I'm trying to link the LizardTech GeoExpress DSDK into my own application. I use gcc so that we can compile for all platforms. On Linux and Mac this works easily: they provide a static library (libltidsdk.a) and headers and all that we have to do is use them. Compiling for Windows isn't so easy. They've built the library using Microsoft Visual Studio, and we use MinGW. I've read the MinGW FAQ, and I'm running into the problems below. The library is all C++, so my first question: is this even possible? Just linking against the dll as provided yields "undefined reference" errors for all of the C++ calls (constructors, destructors, methods, etc). Based on the MinGW Wiki: http://www.mingw.org/wiki/MSVC%5Fand%5FMinGW%5FDLLs I should be able to use the utility reimp to convert a .lib into something useable. I've tried all of the .lib files provided by LizardTech, and they all give "invalid or corrupt import library". I've tried both version 0.4 and 0.3 of the reimp utility. Using the second method described in the wiki, I've run pexports and dlltool over the dll to get a .a archive, but that produces the same undefined references. BTW: I have read the discussion below. It left some ambiguity as to whether this is possible, and given the MinGW Wiki page it seems like this should be doable. If it is impossible, that's all I need to know. If it can be done, I'd like to know how I can get this to happen. stackoverflow.com/questions/1796209/how-to-link-to-vs2008-generated-libs-from-g Thanks!

    Read the article

  • Relation between [[Prototype]] and prototype in JavaScript

    - by Claudiu
    From http://www.jibbering.com/faq/faq_notes/closures.html : Note: ECMAScript defines an internal [[prototype]] property of the internal Object type. This property is not directly accessible with scripts, but it is the chain of objects referred to with the internal [[prototype]] property that is used in property accessor resolution; the object's prototype chain. A public prototype property exists to allow the assignment, definition and manipulation of prototypes in association with the internal [[prototype]] property. The details of the relationship between to two are described in ECMA 262 (3rd edition) and are beyond the scope of this discussion. What are the details of the relationship between the two? I've browsed through ECMA 262 and all I've read there is stuff like: The constructor’s associated prototype can be referenced by the program expression constructor.prototype, Native ECMAScript objects have an internal property called [[Prototype]]. The value of this property is either null or an object and is used for implementing inheritance. Every built-in function and every built-in constructor has the Function prototype object, which is the initial value of the expression Function.prototype Every built-in prototype object has the Object prototype object, which is the initial value of the expression Object.prototype (15.3.2.1), as the value of its internal [[Prototype]] property, except the Object prototype object itself. From this all I gather is that the [[Prototype]] property is equivalent to the prototype property for pretty much any object. Am I mistaken?

    Read the article

  • How to scrape a _private_ google group?

    - by John
    Hi there, I'd like to scrape the discussion list of a private google group. It's a multi-page list and I might have to do this later again so scripting sounds like the way to go. Since this is a private group, I need to login in my google account first. Unfortunately I can't manage to log in using wget or ruby Net::HTTP. Surprisingly google groups is not accessible with the Client Login interface, so all the code samples are useless. My ruby script is embedded at the end of the post. The response to the authentication query is a 200-OK but no cookies in the response headers and the body contains the message "Your browser's cookie functionality is turned off. Please turn it on." I got the same output with wget. See the bash script at the end of this message. I don't know how to work around this. Am I missing something? Any idea? Thanks in advance. John Here is the ruby script: # a ruby script require 'net/https' http = Net::HTTP.new('www.google.com', 443) http.use_ssl = true path = '/accounts/ServiceLoginAuth' email='[email protected]' password='topsecret' # form inputs from the login page data = "Email=#{email}&Passwd=#{password}&dsh=7379491738180116079&GALX=irvvmW0Z-zI" headers = { 'Content-Type' => 'application/x-www-form-urlencoded', 'user-agent' => "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/533.2 (KHTML, like Gecko) Chrome/6.0"} # Post the request and print out the response to retrieve our authentication token resp, data = http.post(path, data, headers) puts resp resp.each {|h, v| puts h+'='+v} #warning: peer certificate won't be verified in this SSL session Here is the bash script: # A bash script for wget CMD="" CMD="$CMD --keep-session-cookies --save-cookies cookies.tmp" CMD="$CMD --no-check-certificate" CMD="$CMD --post-data='[email protected]&Passwd=topsecret&dsh=-8408553335275857936&GALX=irvvmW0Z-zI'" CMD="$CMD --user-agent='Mozilla'" CMD="$CMD https://www.google.com/accounts/ServiceLoginAuth" echo $CMD wget $CMD wget --load-cookies="cookies.tmp" http://groups.google.com/group/mygroup/topics?tsc=2

    Read the article

  • Web crawler update strategy

    - by superb
    I want to crawl useful resources (like background pictures) from certain websites. It is not a hard job, especially with the help of some wonderful projects like scrapy. The problem here is that I don't just want to crawl this site ONE TIME. I also want to keep my crawl long-running and crawl the updated resources. So I want to know: is there any good strategy for a web crawler to get updated pages? Here's a coarse algorithm I've thought of. I divided the crawl process into rounds. In each round, the URL repository will give the crawler a certain number (say, 10,000) of URLs to crawl, and then the next round begins. The detailed steps are: the crawler adds start URLs to the URL repository; the crawler asks the URL repository for at most N URLs to crawl; the crawler fetches the URLs and updates certain information in the URL repository, like the page content, the fetch time and whether the content has changed; then just go back to step 2. To further specify that, I still need to solve the following question: how to decide the "refresh-ness" of a web page, which indicates the probability that this web page has been updated? Since that is an open question, hopefully it will bring some fruitful discussion here.
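
    To make the rounds idea concrete, here is a small sketch in Java (all class names are invented; this is not scrapy or any real library): each round pulls at most N URLs from the repository, refetches them, and records whether the content changed, so that the repository can later schedule frequently changing ("fresher") pages more often.

        import java.util.List;

        // Hypothetical sketch of the round-based recrawl loop described above.
        interface Fetcher {
            String fetch(String url);                         // download and return page content
        }

        interface UrlRepository {
            List<String> nextBatch(int n);                    // step 2: hand out at most n URLs
            PageRecord lastRecord(String url);                // previous fetch, or null if never crawled
            void update(String url, String content, long fetchTime, boolean changed);  // step 3
        }

        class PageRecord {
            final String contentHash;
            final long fetchTime;
            PageRecord(String contentHash, long fetchTime) {
                this.contentHash = contentHash;
                this.fetchTime = fetchTime;
            }
        }

        public class CrawlerLoop {
            private static final int BATCH_SIZE = 10_000;

            public static void run(UrlRepository repo, Fetcher fetcher) throws InterruptedException {
                while (true) {                                // one iteration == one round (step 4)
                    for (String url : repo.nextBatch(BATCH_SIZE)) {
                        String content = fetcher.fetch(url);
                        String hash = Integer.toHexString(content.hashCode());
                        PageRecord previous = repo.lastRecord(url);
                        boolean changed = previous == null || !previous.contentHash.equals(hash);
                        // The change history is what a scheduler can later use to estimate
                        // "refresh-ness": pages that change often get shorter recrawl intervals.
                        repo.update(url, content, System.currentTimeMillis(), changed);
                    }
                    Thread.sleep(60_000);                     // politeness pause between rounds
                }
            }
        }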

    Read the article

  • Mercurial hg Subrepository Problem - "abort: unknown revision'

    - by Tex
    Note: I asked this yesterday over at kiln.stackexchange.com, but haven't gotten an answer, and it's holding up my work. So I figured I'd give it a shot here. My main mercurial repository has a bunch of subrepositories in it. During initial setup, I made a mistake in my .hgsub. Namely, I pointed two subrepositories to the same directory. What I should have had: sites/1=sites/1 sites/2=sites/2 sites/3=sites/3 What I actually had: sites/1=sites/1 sites/2=sites/2 sites/2=sites/3 Stupid copy/paste error. I committed the incorrect .hgsub, not realizing my error. A few revisions later, while adding a some new subrespositories to .hgsub, I noticed the mistake and fixed it inside .hgsub. I committed and kept rolling along. I've committed a reasonable amount of work that I'd prefer not to redo since I 'fixed' the mistake in .hgsub. Now we come to the actual problem: I've made some changes inside the subrepository sites/3, and when I try to commit the main repository, I get the following error: abort: unknown revision 'LongGUIDLookingString' I found this discussion, which seems to address the same problem I'm having, but I can't quite work out how bos fixed it. What do I need to do in order to fix this?

    Read the article

  • Custom jQuery Gallery Thumbnail Behavior

    - by steve
    I recently got some help on here from SLaks (thank you) on the behavior of my custom gallery. I'm now trying to fix the way the thumbnails function. I've been toying with it for about an hour but I can't get it to work. Live version of the code: http://www.studioimbrue.com . Currently the code is as follows: $('.thumbscontainer ul li a').click(function() { var li_index = $(this).parents('ul').children('li').index($(this).parent("li")); $(this).parents('.thumbscontainer').parent().find('.captions ul li').fadeOut(); $(this).parents('.thumbscontainer').parent().find('.captions ul li:eq('+li_index+')').fadeIn(); }); $('.container .captions li').click(function() { var nextLi = $(this).fadeOut().next().fadeIn(); if (nextLi.length === 0) //If we're the last one, nextLi = $(this).siblings(':first-child').fadeIn(); }); The only problem is that when the gallery image is clicked, it goes to the next image in the series, yet the thumbnails do not change to the next one in the list. You can take a look at my previous question to see our discussion. Thanks

    Read the article

  • Threading 101: What is a Dispatcher?

    - by Water Cooler v2
    Once upon a time, I remembered this stuff by heart. Over time, my understanding has diluted and I mean to refresh it. As I recall, any so-called single-threaded application has two threads: a) the primary thread that has a pointer to the main or DllMain entry points; and b) For applications that have some UI, a UI thread, a.k.a. the secondary thread, on which the WndProc runs, i.e. the thread that executes the WndProc that receives messages that Windows posts to it. In short, the thread that executes the Windows message loop. For UI apps, the primary thread is in a blocking state waiting for messages from Windows. When it receives them, it queues them up and dispatches them to the message loop (WndProc) and the UI thread gets kick started. As per my understanding, the primary thread, which is in a blocking state, is this: C++ while (GetMessage(&msg, NULL, 0, 0)) { TranslateMessage(&msg); DispatchMessage(&msg); } C# or VB.NET WinForms apps: Application.Run( new System.Windows.Forms.Form() ); Is this what they call the Dispatcher? My questions are: a) Is my above understanding correct? b) What in the name of hell is the Dispatcher? c) Point me to a resource where I can get a better understanding of threads from a Windows/Win32 perspective and then tie it up with high level languages like C#. Petzold is sparing in his discussion on the subject in his epic work. Although I believe I have it somewhat right, a confirmation will be relieving.

    Read the article

  • Why isn't jQuery automatically appending the JSONP callback?

    - by Aseem Kishore
    The $.getJSON() documentation states: If the specified URL is on a remote server, the request is treated as JSONP instead. See the discussion of the jsonp data type in $.ajax() for more details. The $.ajax() documentation for the jsonp data type states (emphasis mine): Loads in a JSON block using JSONP. Will add an extra "?callback=?" to the end of your URL to specify the callback. So it seems that if I call $.getJSON() with a cross-domain URL, the extra "callback=?" parameter should automatically get added. (Other parts of the documentation support this interpretation.) However, I'm not seeing that behavior. If I don't add the "callback=?" explicitly, jQuery incorrectly makes an XMLHttpRequest (which returns null data since I can't read the response cross-domain). If I do add it explicitly, jQuery correctly makes a <script> request. Here's an example: var URL = "http://www.geonames.org/postalCodeLookupJSON" + "?postalcode=10504&country=US"; function alertResponse(data, status) { alert("data: " + data + ", status: " + status); } $.getJSON(URL, alertResponse); // alerts "data: null, status: success" $.getJSON(URL + "&callback=?", alertResponse); // alerts "data: [object Object], status: undefined" So what's going on? Am I misunderstanding the documentation or forgetting something? It goes without saying that this isn't a huge deal, but I'm creating a web API and I purposely set the callback parameter to "callback" in the hopes of tailoring it nicely to jQuery usage. Thanks!

    Read the article
