Search Results

Search found 26662 results on 1067 pages for 'dynamic content'.


  • How to calculate Content-Length for a file download within Kohana PHP?

    - by moritzd
    I'm trying to send a file from within a Kohana model to the browser, but as soon as I add a Content-Length header, the file doesn't start downloading right away. The problem seems to be that Kohana is already outputting buffers. An ob_clean at the beginning of the script doesn't help, though. Adding ob_get_length() to the Content-Length isn't helping either, since it just returns 0. The getFileSize() function returns the right number: if I run the script outside of Kohana, it works. I read that exit() still calls all destructors, and it might be that something is output by Kohana afterwards, but I can't find out what exactly. Hope someone can help me out here... This is the piece of code I'm using:

        public function download() {
            header("Expires: ".gmdate("D, d M Y H:i:s", time()+(3600*7))." GMT");
            header("Content-Type: ".$this->getFileType());
            header("Content-Transfer-Encoding: binary");
            header("Last-Modified: ".gmdate("D, d M Y H:i:s", $this->getCreateTime())." GMT");
            header("Content-Length: ".($this->getFileSize()+ob_get_length()));
            header('Content-Disposition: attachment; filename="'.basename($this->getFileName()).'"');
            ob_end_flush();
            readfile($this->getFilePath());
            exit();
        }

  • Plone site randomly serving wrong content

    - by Chris Miller
    I have a Plone site that has begun to randomly serve up the wrong content. Any given piece of content suddenly shows something else: sometimes a JPEG URL loads a stylesheet instead, or a stylesheet loads as a page, or a page as an image. The images move around; sometimes our site logo shows a bullet, or one of the other site images. Fiddler shows the wrong content in the response, and the Apache logs show the content type of the incorrect file (so if an image loads in place of a stylesheet, Apache shows that). We thought mod_proxy was the source of our grief, but we get the problem hitting Zope directly. I never get the wrong content using the Medusa Monitor to repeatedly hit the content. I do see ConflictErrors in the instance.log file, and they seem to be correlated to the problem, but not 100%:

        ZPublisher.Conflict ConflictError at \path\to\object: database conflict error (oid 0x3586, class BTrees._OIBTree.OIBTree, serial this txn started with blah, serial currently committed blah) (X conflicts (0 unresolved) since startup blah)

    I pulled that off the web - it's not from our logs, but it's the same message. This may be a red herring; it sounds like those messages are normal. We've updated to Plone 3.3.5, same problems. I'm at a loss. I'm wondering: is there a good way to intercept what is being served? Secondly, is there a way to increase the verbosity of the access log to include the content type? I've even seen the problem manifest in ZMI. It happens more often when we're authenticated. Sometimes it can take a thousand reloads to see the problem; other times it happens in different ways every time we reload. I believe we've seen this problem for a couple of years, but it was very intermittent - a page would show the content of a GIF, then after a reload it wouldn't happen again for a long time. Now it's a huge problem.

  • Supporting users if they're not on your site

    - by Roger Hart
    Have a look at this Read Write Web article, specifically the paragraph in bold and the comments. Have a wry chuckle, or maybe weep for the future of humanity - your call. Then pause, and worry about information architecture. The short story: Read Write Web bumps up the Google rankings for "Facebook login" at the same time as Facebook makes UI changes, and a few hundred users get confused and leave comments on Read Write Web complaining about not being able to log in to their Facebook accounts.* Blindly clicking the first Google result is not a navigation behaviour I'd anticipated for folks visiting big-name sites like Facebook. But then, I use Launchy and don't know where any of my files are, depend on Firefox auto-complete, view Facebook through my IM client, and don't need a map to find my backside with both hands. Not all our users behave in the same way, which means not all of our architecture is within our control, and people can get to your content in all sorts of ways. Even if the Read Write Web episode is a prank of some kind (there are, after all, plenty of folks who enjoy orchestrated trolling) it's still a useful reminder. Your users may take paths through and to your content you cannot control, and they are unlikely to deconstruct their assumptions along the way. I guess the meaningful question is: can you still support those users? If they get to you from Google instead of your front door, does what they find still make sense? Does your information architecture still work if your guests come in through the bathroom window? Ok, so here they broke into the house next door - you can't be expected to deal with that. But the rest is well worth thinking about.

    Other off-site interaction

    It's rarely going to be as funny as the comments at Read Write Web, but your users are going to do, say, and read things they think of as being about you and your products, in places you don't control. That's good. If you pay attention to it, you get data. Your users get a better experience. There are easy wins, too.

    Blogs, forums, social media &c.

    People may look for and find help with your product on blogs and forums, on Twitter, and what have you. They may learn about your brand in the same way. That's fine, it's an interaction you can be part of. It's time-consuming, certainly, but you have the option. You won't get a blogger to incorporate your site navigation just in case your users end up there, but you can be there when they do. Again, Anne Gentle, Gordon McLean and others have covered this in more depth than I could.

    Direct contact

    Sales people, customer care, support - they all talk to people. Are they sending links to your content? If so, which bits? Do they know about all of it? Do they have the content they need to support them - messaging that funnels sales, FAQ that are realistically frequent, detailed examples of things people want to do, that kind of thing? Are they sending links because users can't find the good stuff? Are they sending précis of your content, or re-writes, or brand new stuff? If so, does that mean your content isn't up to scratch, or that you've got content missing? Direct sales/care/support interactions are enormously valuable, and can help you know what content your users find useful. You can't have a table of contents or a "See also" in a phone call, but your content strategy can support more interactions than browsing.

    * Passing observation about Facebook: for plenty of folks, it is the internet. Its services are simple versions of what a lot of people use the internet for, and they're aggregated into one stop. Flickr, Vimeo, Wordpress, Twitter, LinkedIn, and all sorts of games have Facebook doppelgangers that are not only friendlier to entry-level users, they're right there, behind only one layer of authentication. As such, it could own a lot of interaction convention. Heavy users may well not be tech-savvy, and may be quite change-averse. That doesn't make this episode not dumb, but I'm happy to go easy on 'em.

  • IIS7 web farm - local or shared content?

    - by rbeier
    We're setting up an IIS7 web farm with two servers. Should each server have its own local copy of the content, or should they pull content directly from a UNC share? What are the pros and cons of each approach?

    We currently have a single live server WEB1, with content stored locally on a separate partition. A job periodically syncs WEB1 to a standby server WEB2, using robocopy for content and msdeploy for config. If WEB1 goes down, Nagios notifies us, and we manually run a script to move the IP addresses to WEB2's network interface. Both servers are actually VMs running on separate VMWare ESX 4 hosts. The servers are domain-joined. We have around 50-60 live sites on WEB1 - mostly ASP.NET, with a few that are just static HTML. Most are low-traffic "microsites". A few have moderate traffic, but none are massive.

    We'd like to change this so both WEB1 and WEB2 are actively serving content. This is mainly for reliability - if WEB1 goes down, we don't want to have to manually intervene to fail things over. Spreading the load is also nice, but the load is not high enough right now for us to need this. We're planning to configure our firewall to balance traffic across the two servers. It will detect when a server goes down and will send all the traffic to the remaining live server. We're planning to use sticky sessions for now... eventually we may move to SQL Server session state and stateless load balancing.

    But we need a way for the servers to share content. We were originally planning to move all the content to a UNC share. Our storage provider says they can set up a highly available SMB share for us, so if we go the UNC route, the storage shouldn't be a single point of failure. But we're wondering about the downsides to this approach:

    - We'll need to change the physical paths for each site and virtual directory. There are also some projects that have absolute paths in their web.config files - we'll have to update those as well.
    - We'll need to create a domain user for the web servers to access the share, and grant that user appropriate permissions. I haven't looked into this yet - I'm not sure if the application pool identity needs to be changed to this user, or if there's another way to tell IIS to use this account when connecting to the share.
    - Sites will no longer be able to access their content if there's ever an Active Directory problem.

    In general, it just seems a lot more complicated, with more moving parts that could break. Our storage provider would create a volume for us on their redundant SAN. If I understand correctly, this SAN volume would be mounted on a VM running in their redundant VMWare environment; this VM would then expose the SMB share to our web servers.

    On the other hand, a benefit of the shared content approach is that we'd only need to deploy code to one place, and there would never be a temporary inconsistency between multiple copies of the content. This thread is pretty interesting, though some of these people are working at a much larger scale.

    I've just been discussing content so far, but we also need to think about configuration. I don't know if we can just use DFS replication for the applicationHost.config and other files, or if it's best to use the shared configuration feature with the config on a UNC share. What do you think? Thanks for your help, Richard

  • How would you mask data returned in a Dynamic Data for Entities website?

    - by David Stratton
    I'm doing this in Visual Studio 2008, not 2010, in case there is a relevant difference between the two versions of the Dynamic Data websites. How would I mask data in the automatically generated tables in a Dynamic Data for Entities website? The scenario is we have one table where we want to allow users to ENTER sensitive data, but not VIEW sensitive data, so... (In the list below, I'm using "template" to mean "the web page generated automatically based on the schema and action". I'm sure that's the wrong terminology, but the meaning should be clear.)

    - The "Insert" template should have the field's textbox available for the user to type a value in.
    - The "Edit" template should have the field's textbox blanked out (empty string) regardless of what was in the field in the database in the first place, but the user should be able to type in new data and have it saved.
    - The "View" template should either have the data for this field masked, or non-visible.
    - The auto-generated table showing the list of records should also have this field masked or non-visible.

    I can do this easily with standard Web Forms, but I'm having a hard time figuring this out in the Dynamic Data site I'm working on. Masking data is such a common task, I have to believe Microsoft thought of this and provided a way to do it...
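
    A sketch of one possible approach (not verified against VS2008, and the entity/column names below are hypothetical): Dynamic Data lets a metadata "buddy" class route a column to a custom field template via [UIHint], and that template's code-behind decides per mode what to render. Something along these lines:

        using System.ComponentModel.DataAnnotations;

        // Buddy-class metadata: attach attributes to the generated entity class.
        // "Employee" and "Salary" are stand-ins for the real table and column.
        [MetadataType(typeof(EmployeeMetadata))]
        public partial class Employee { }

        public class EmployeeMetadata
        {
            // "MaskedText" names a custom field template pair you create under
            // DynamicData/FieldTemplates (MaskedText.ascx for read-only display,
            // MaskedText_Edit.ascx for insert/edit).
            [UIHint("MaskedText")]
            public object Salary { get; set; }
        }

        // Read-only template code-behind (sketch): never emit the stored value.
        public partial class MaskedTextField : System.Web.DynamicData.FieldTemplateUserControl
        {
            // Normally generated from the Literal declared in MaskedText.ascx.
            protected System.Web.UI.WebControls.Literal Literal1;

            protected override void OnDataBinding(System.EventArgs e)
            {
                base.OnDataBinding(e);
                Literal1.Text = "********"; // mask regardless of the field's value
            }
        }

    The matching MaskedText_Edit.ascx template would simply skip pre-populating its textbox, which would cover the "Edit shows a blank box" requirement, and the scaffolded list view picks up the same read-only template automatically.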

  • Flash CS 4 / AS 2 + Dynamic Text + Html + Embedded fonts...

    - by AndreMiranda
    I have this scenario:

    1) I have a dynamic text field that receives its data from an XML file.
    2) The text shown gets its style from a CSS file.
    3) My dynamic text has an HTML 'span' tag and is formatted according to the CSS class that is passed via the XML file.

    Ok... so far so good. It's something like:

        _root.txt.txtDica = "<span class='" + node.childNodes[_global.auxCont].attributes.myStyle + "'>" + node.childNodes[_global.auxCont].attributes.myText + "</span>";

    The problem is that the SWF renders the text with poor quality. So, I've been looking around and found some things about embedded fonts, Flash's TextField() and so on, but nothing seems to work. Does anyone know how I can generate this HTML tag in a dynamic text field with good quality? It's worth saying that I'm using regular fonts, such as Verdana and Arial. Thanks a lot!

  • What are some ways to accomplish a dynamic array?

    - by Ted
    I'm going to start working on a new game, and one of the things I'd like to accomplish is a dynamic-array sort of system that would hold map data. The game will be top-down 2D and made with XNA 4.0 and C#. You will begin in a randomized area which will essentially be tile-based. As such, a two-dimensional array would be one way to accomplish this, holding numerical values which correspond to a list of textures; that is how it would draw the randomly created map. The problem is I would kind of only like to create the area around where you start, and the player could venture in whichever direction they wanted to. This means I'd have to populate the map array with more randomized data in the direction they go. I could make a really large array, use the center of it, and leave the rest in anticipation of new content to be made, but that just seems very inefficient. I suppose when they start a new game I could have a one-time map creation process that would go through and create a large randomly generated map array, but holding all of it in memory at all times also seems inefficient. Perhaps there is a way to hold only parts of that map in memory at one time. In the end I only need to have a chunk of the map somewhat close to the player in memory, so perhaps some of you have suggestions on good ways to approach this kind of randomized-map and dynamic-array problem. It wouldn't need to be a dynamic array if I made it so that it pulled in nearby map data as needed, and then released that memory once it is off the screen and no longer needed; that way I wouldn't have a huge array taking up a bunch of memory.
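
    For what it's worth, a common pattern for exactly this - sketched below with assumed names and sizes, in modern C# syntax (in XNA-era C# you would key the dictionary on a small struct instead of a tuple) - is to drop the single giant array and store the world as fixed-size chunks in a dictionary keyed by chunk coordinates. Chunks are generated lazily when first touched, evicted when the player moves away, and seeded from their coordinates so a revisited chunk regenerates identically:

        using System;
        using System.Collections.Generic;

        public class ChunkedMap
        {
            public const int ChunkSize = 32;   // tiles per chunk side (assumption)
            private readonly int worldSeed;
            private readonly Dictionary<(int cx, int cy), int[,]> chunks =
                new Dictionary<(int cx, int cy), int[,]>();

            public ChunkedMap(int worldSeed) { this.worldSeed = worldSeed; }

            // Floor division so negative tile coordinates land in the right chunk.
            private static int FloorDiv(int a, int b) => (int)Math.Floor((double)a / b);
            private static int Mod(int a, int b) => ((a % b) + b) % b;

            public int GetTile(int tileX, int tileY)
            {
                var key = (cx: FloorDiv(tileX, ChunkSize), cy: FloorDiv(tileY, ChunkSize));
                if (!chunks.TryGetValue(key, out int[,] chunk))
                {
                    chunk = GenerateChunk(key.cx, key.cy); // lazy generation
                    chunks[key] = chunk;
                }
                return chunk[Mod(tileX, ChunkSize), Mod(tileY, ChunkSize)];
            }

            private int[,] GenerateChunk(int cx, int cy)
            {
                // Seed from the chunk coordinates so an evicted chunk regenerates
                // identically when the player comes back.
                var rng = new Random(worldSeed ^ (cx * 73856093) ^ (cy * 19349663));
                var tiles = new int[ChunkSize, ChunkSize];
                for (int x = 0; x < ChunkSize; x++)
                    for (int y = 0; y < ChunkSize; y++)
                        tiles[x, y] = rng.Next(4); // random texture index, 4 types assumed
                return tiles;
            }

            // Call occasionally: drop chunks more than `radius` chunks from the player.
            public void EvictFarChunks(int playerTileX, int playerTileY, int radius)
            {
                int pcx = FloorDiv(playerTileX, ChunkSize);
                int pcy = FloorDiv(playerTileY, ChunkSize);
                var far = new List<(int cx, int cy)>();
                foreach (var key in chunks.Keys)
                    if (Math.Abs(key.cx - pcx) > radius || Math.Abs(key.cy - pcy) > radius)
                        far.Add(key);
                foreach (var key in far)
                    chunks.Remove(key);
            }
        }

    Memory then stays proportional to the visible neighbourhood rather than the whole explored world, and the draw loop only ever asks for tiles near the camera.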

  • How to cache dynamic javascript/jquery/ajax/json content with Akamai

    - by Starfs
    Trying to wrap my head around how things are cached on a CDN, and it is new territory for me. In the document we received about sending in environment requests, it says "Dynamically-generated content will not benefit much from EdgeSuite". I feel like this is a simplified statement, and there has to be a way to cache dynamically generated content if the tools are configured correctly. The site we are working with runs off a WordPress database, and uses JavaScript and Ajax to build the pages, based on the JSON objects that PHP scripts have generated. The process: the user's browser hits the URL; the browser talks to the EdgeSuite tools, which will have cached certain pre-defined elements, and then requests from the host web server anything that is not cached; once EdgeSuite has compiled a combination of the two, it sends that information back to the browser. Can we not simply cache all JSON objects (and of course images, JS, CSS), so that the web browser never has to hit the host server's database - at which point, in essence, we have cached our dynamic content? Does anyone have any pointers on the most efficient configuration for this type of system - Akamai/CDN - to serve JavaScript/Ajax/JSON-generated pages that ideally already hit pre-cached JSON data? Any and all feedback is welcome!

  • Publish Static Content to WebLogic

    - by James Taylor
    Most people know WebLogic has a built-in web server. Typically this is not an issue, as you deploy Java applications and WebLogic publishes to the web. But what if you just want to display a simple static HTML page? In WebLogic you can develop a simple web application to display static HTML content. In this example I used WLS 10.3.3. I want to display 2 files: an HTML file, and an XSD for reference.

    1. Create a directory of your choice; this is what I will call the document root: mkdir /u01/oracle/doc_root
    2. Copy the static files to this directory.
    3. In the document root directory created in step 1, create the directory WEB-INF: mkdir WEB-INF
    4. In the WEB-INF directory, create a file called web.xml with the following content:

           <?xml version="1.0" encoding="UTF-8"?>
           <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/j2ee/dtds/web-app_2_3.dtd">
           <web-app>
           </web-app>

    5. Login to the WebLogic console to deploy the application.
    6. Click on Deployments.
    7. Click on Lock & Edit.
    8. Click Install and set the path to the directory created in step 1.
    9. Leave the default "Install this deployment as an application" and click Next.
    10. Select a Managed Server to deploy to and click Next.
    11. Accept the defaults and click Finish.
    12. Deployment completes successfully; now click Activate Changes.

    You should now see the application started in the deployments. You can now access your static content via the following URL: http://localhost:7001/doc_root/helloworld.html

  • Building a project in VS that depends on a static and dynamic library

    - by fg nu
    Noob noobin'. I would appreciate some very careful handholding in setting up an example in Visual Studio 2010 Professional where I am trying to build a project which links:

    - a previously built static library, for which the VS project folder is "C:\libjohnpaul\"
    - a previously built dynamic library, for which the VS project folder is "C:\libgeorgeringo\"

    These are listed as Recipes 1.11, 1.12 and 1.13 in the C++ Cookbook. The project fails to compile for me with unresolved dependencies (see details below), and I can't figure out why.

    Project 1: Static Library

    The following are the header and source files that were compiled in this project. I was able to compile this project fine in VS2010, to the library "libjohnpaul.lib", which lives in the folder "C:/libjohnpaul/Release/".

        // libjohnpaul/john.hpp
        #ifndef JOHN_HPP_INCLUDED
        #define JOHN_HPP_INCLUDED
        void john( ); // Prints "John, "
        #endif // JOHN_HPP_INCLUDED

        // libjohnpaul/john.cpp
        #include <iostream>
        #include "john.hpp"
        void john( ) { std::cout << "John, "; }

        // libjohnpaul/paul.hpp
        #ifndef PAUL_HPP_INCLUDED
        #define PAUL_HPP_INCLUDED
        void paul( ); // Prints " Paul, "
        #endif // PAUL_HPP_INCLUDED

        // libjohnpaul/paul.cpp
        #include <iostream>
        #include "paul.hpp"
        void paul( ) { std::cout << "Paul, "; }

        // libjohnpaul/johnpaul.hpp
        #ifndef JOHNPAUL_HPP_INCLUDED
        #define JOHNPAUL_HPP_INCLUDED
        void johnpaul( ); // Prints "John, Paul, "
        #endif // JOHNPAUL_HPP_INCLUDED

        // libjohnpaul/johnpaul.cpp
        #include "john.hpp"
        #include "paul.hpp"
        #include "johnpaul.hpp"
        void johnpaul( ) { john( ); paul( ); }

    Project 2: Dynamic Library

    Here are the header and source files for the second project, which also compiled fine with VS2010; the "libgeorgeringo.dll" file lives in the directory "C:\libgeorgeringo\Debug".

        // libgeorgeringo/george.hpp
        #ifndef GEORGE_HPP_INCLUDED
        #define GEORGE_HPP_INCLUDED
        void george( ); // Prints "George, "
        #endif // GEORGE_HPP_INCLUDED

        // libgeorgeringo/george.cpp
        #include <iostream>
        #include "george.hpp"
        void george( ) { std::cout << "George, "; }

        // libgeorgeringo/ringo.hpp
        #ifndef RINGO_HPP_INCLUDED
        #define RINGO_HPP_INCLUDED
        void ringo( ); // Prints "and Ringo\n"
        #endif // RINGO_HPP_INCLUDED

        // libgeorgeringo/ringo.cpp
        #include <iostream>
        #include "ringo.hpp"
        void ringo( ) { std::cout << "and Ringo\n"; }

        // libgeorgeringo/georgeringo.hpp
        #ifndef GEORGERINGO_HPP_INCLUDED
        #define GEORGERINGO_HPP_INCLUDED

        // define GEORGERINGO_DLL when building libgeorgeringo.dll
        #if defined(_WIN32) && !defined(__GNUC__)
        #  ifdef GEORGERINGO_DLL
        #    define GEORGERINGO_DECL __declspec(dllexport)
        #  else
        #    define GEORGERINGO_DECL __declspec(dllimport)
        #  endif
        #endif // WIN32

        #ifndef GEORGERINGO_DECL
        #  define GEORGERINGO_DECL
        #endif

        // Prints "George, and Ringo\n"
        #ifdef __MWERKS__
        #  pragma export on
        #endif
        GEORGERINGO_DECL void georgeringo( );
        #ifdef __MWERKS__
        #  pragma export off
        #endif

        #endif // GEORGERINGO_HPP_INCLUDED

        // libgeorgeringo/georgeringo.cpp
        #include "george.hpp"
        #include "ringo.hpp"
        #include "georgeringo.hpp"
        void georgeringo( ) { george( ); ringo( ); }

    Project 3: Executable that depends on the previous libraries

    Lastly, I try to link the previously compiled static and dynamic libraries into one project called "helloBeatlesII", which has the project directory "C:\helloBeatlesII" (note that this directory does not nest the other project directories). The linking process that I did is described below: to the "helloBeatlesII" solution I added the projects "libjohnpaul" and "libgeorgeringo"; then I changed the properties of the "helloBeatlesII" project to additionally point to the include directories of the other two projects on which it depends ("C:\libgeorgeringo\libgeorgeringo" & "C:\libjohnpaul\libjohnpaul"); added "libgeorgeringo" and "libjohnpaul" to the project dependencies of the "helloBeatlesII" project; and made sure that the "helloBeatlesII" project was built last. Trying to compile this project gives me the following unsuccessful build:

        1>------ Build started: Project: helloBeatlesII, Configuration: Debug Win32 ------
        1>Build started 10/13/2012 5:48:32 PM.
        1>InitializeBuildStatus:
        1>  Touching "Debug\helloBeatlesII.unsuccessfulbuild".
        1>ClCompile:
        1>  helloBeatles.cpp
        1>ManifestResourceCompile:
        1>  All outputs are up-to-date.
        1>helloBeatles.obj : error LNK2019: unresolved external symbol "void __cdecl georgeringo(void)" (?georgeringo@@YAXXZ) referenced in function _main
        1>helloBeatles.obj : error LNK2019: unresolved external symbol "void __cdecl johnpaul(void)" (?johnpaul@@YAXXZ) referenced in function _main
        1>E:\programming\cpp\vs-projects\cpp-cookbook\helloBeatlesII\Debug\helloBeatlesII.exe : fatal error LNK1120: 2 unresolved externals
        1>
        1>Build FAILED.
        1>
        1>Time Elapsed 00:00:01.34
        ========== Build: 0 succeeded, 1 failed, 2 up-to-date, 0 skipped ==========

    At this point I decided to call in the cavalry. I am new to VS2010, so in all likelihood I am missing something straightforward.

  • iFrame content pageviews not matching parent page pageviews

    - by surfbird0713
    I have a page with content hosted in an iFrame, both using the same GA account ID. When I look at the pages report, the parent page has about 9000 unique views, but the iFrame content only has 3700. Anyone have an idea what could cause that kind of discrepancy? My only guess is that it would be caused by people moving on before the iFrame content has a chance to load, but the average time on page for the host page is 56 seconds, so that doesn't seem possible. This is the page in question: http://cookware.lecreuset.com/cookware/content_le-creuset-lid_10151_-1_20002 The flipbook is hosted in the iFrame on a separate domain. I have each page of the flipbook triggering a virtual pageview to try to evaluate engagement with the book - when the flipbook loads, it fires a pageview for the page it is on, so that is the page I'm using for the 3700 number. I also looked at the source of the iFrame in the pages report, and that number just about matches the virtual pageviews so that piece is consistent. Any ideas on this are much appreciated. Thanks!

  • OOW content for Pattern Matching....

    - by KLaker
    If you missed my sessions at OpenWorld then don't worry - all the content we used for pattern matching (presentation and hands-on lab) is now available for download. My presentation "SQL: The Best Development Language for Big Data?" is available for download from the OOW Content Catalog, see here: https://oracleus.activeevents.com/2013/connect/sessionDetail.ww?SESSION_ID=9101 For the hands-on lab ("Pattern Matching at the Speed of Thought with Oracle Database 12c") we used the Oracle-By-Example content. The OOW hands-on lab uses Oracle Database 12c Release 1 (12.1) and uses the MATCH_RECOGNIZE clause to perform some basic pattern matching examples in SQL. This lab is broken down into four main steps:

    1. Logically partition and order the data that is used in the MATCH_RECOGNIZE clause with its PARTITION BY and ORDER BY clauses.
    2. Define patterns of rows to seek using the PATTERN clause of the MATCH_RECOGNIZE clause. These patterns use regular expression syntax, a powerful and expressive feature, applied to the pattern variables you define.
    3. Specify the logical conditions required to map a row to a row pattern variable in the DEFINE clause.
    4. Define measures, which are expressions usable in the MEASURES clause of the SQL query.

    You can download the setup files to build the ticker schema and the student notes from the Oracle Learning Library. The direct link to the example on using pattern matching is here: http://apex.oracle.com/pls/apex/f?p=44785:24:0::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:6781,2.

  • Tackling thin content on an images gallery

    - by Ted Wilmont
    We run an images gallery as part of our site; however, we have over 8,000 images, and every image has a separate HTML page of its own to display the image caption, related images and comments from users of the site. This seems to be a problem, especially with the Google Panda update, because these pages are technically "thin content". What would be the best way to tackle this? We'd love some feedback and advice regarding this scenario. We have a few options we thought of already but can't decide:

    - We could noindex the separate image pages and lose any image search listings we have for the images, in favour of removing these thin pages from the index.
    - We could 301 all of the individual image pages back to the image category listing, anchor each image (e.g. #img2122), and include all of the comments and descriptions on the category listing page itself.
    - If we were to simply list all of the images and content on the category pages themselves, what's the best method? We could add all of the content in the anchor tags and use jQuery to display them in a box when a user clicks on the image, or we could use Ajax to retrieve the information. However, what's the best Ajax method for SEO?

    Any ideas, suggestions, tips or advice is greatly appreciated, and thank you in advance for any help given.

  • Leading Analyst Firm Positions Oracle in Leaders Quadrant for Web Content Management

    - by Christie Flanagan
    Gartner, Inc. has named Oracle a Leader in its latest "Magic Quadrant for Web Content Management." Gartner's Magic Quadrants position vendors within a particular quadrant based on their completeness of vision and their ability to execute on that vision. According to Gartner, "WCM plays an increasingly important role in business performance. It has become the central point of coordination for initiatives involving the enterprise's online presence, and these initiatives have become more sophisticated and more important to enterprises' business strategies. Thus, WCM is key for organizations wishing to execute a strategy of OCO (online channel optimization) that embraces areas such as customer experience management, e-commerce, digital marketing, multichannel marketing and website consolidation." Gartner continued, "Leaders should drive market transformation. Leaders have the highest combined scores for Ability to Execute and Completeness of Vision. They are doing well and are prepared for the future with a clear vision and a thorough appreciation of the broader context of OCO. They have strong channel partners, a presence in multiple regions, consistent financial performance, broad platform support and good customer support. In addition, they dominate in one or more technologies or vertical markets. Leaders are aware of the ecosystem in which their offerings need to fit. Leaders can: demonstrate enterprise deployments; offer integration with other business applications and content repositories; provide a vertical-process or horizontal-solution focus." Oracle WebCenter, the engagement platform powering exceptional experiences for customers, employees and partners, connects people and information by bringing together the most complete portfolio of portal, Web experience management, content, social, and collaboration technologies into a single integrated product suite. Oracle WebCenter also provides the foundation for Oracle Fusion Middleware and Oracle Fusion Applications to deliver a next-generation user experience. To see the latest reports, webcasts and demonstrations about Oracle's web experience management solution, Oracle WebCenter Sites, please visit our Connected Customer Experience Resource Center.

  • JavaOne Content on Video

    - by Tori Wieldt
    JavaOne content is available on video in three sizes, depending on whether you want to have a sip, have a drink, or go to the proverbial firehose.

    Tall (Keynote Highlights): Go to the JavaOne playlist on the YouTube Java channel for highlights of the JavaOne keynotes.

    Grande (Keynotes in Full): Go to the Oracle Media Network JavaOne 2012 channel to view the keynotes in full (Community Keynote coming soon).

    Venti (All Sessions, BOFs and Tutorials): To view slides paired with audio of each session, go to the JavaOne content catalog (JavaOne homepage > Programs > Content Catalog) and select a session. If a video is available, you'll see "Media" in the right column. Look under "Presentation Download" to get the slides. Sessions are being made available as quickly as possible.

    "It's exciting to see Oracle take community stewardship so seriously," said Sharat Chander, Group Director for Java Technology Outreach. "Making all JavaOne sessions on video available online for free will help make the future of Java for everyone."

  • Add two extra divs to the side of the content

    - by Kreker
    Hi. I've read some posts about adding extra divs to both sides of the main content to act like sidebars, but I want to add 2 extra divs for some graphics, and the right one will sit to the right of the actual sidebar. I took a screenshot to explain myself better. I've added two extra divs before the pagewrap, and working with CSS I positioned the divs to the left and to the right. But there is a problem: if someone has a smaller resolution than mine, the divs start moving, messing up the layout. I want them to stay next to the content and the sidebar at all resolutions. I've tried working with CSS position and percentages, but with no results. Ok so, the cyan rectangles would be the extra divs, and the gray graphics their contents. I don't care if users with smaller resolutions see the graphics go off the screen, but the "roots" must stay fixed to the side of the main content ;) Hope I explained it well. This is the screenshot: http://cl.ly/0D2H1s0t3Y0p2A412L35/Schermata-2011-01-11-a-22.22.17.png Thanks for the help.

  • How to send Content-Disposition headers in apache for files?

    - by Rory McCann
    I have a directory of text files that I'm serving out with Apache 2. Normally when I (or any user) access the files, they see them in their browser. I want to 'force'* the web browser to pop up a 'Save as' dialog box. I know this is possible to do with the Content-Disposition header (more info). Is there some way to turn that on for each file? Ideally I'd like something like this:

        <Directory textfiles>
            AutoAddContentDispositionHeaders On
        </Directory>

    And then Apache would set the correct Content-Disposition header, including using the same filename. Something like this might be possible with the Apache Header directive. Bonus points if it's included as standard in Apache on Debian. I could do a simple PHP wrapper script that takes in a filename argument, makes the call to header(...) and then prints the file, but then I have to validate input etc. - that's work I'm trying to avoid.

    * I know you can't actually force things when it comes to the web

  • Content Length and Transfer Encoding Chunked nginx, node-http-proxy

    - by rampr
    I have the following setup: node-http-proxy acts as a reverse proxy, forwarding all requests to nginx or socket.io as necessary. My problem is this: when I send an HTTP DELETE request from the browser, node-http-proxy adds a "Transfer-Encoding: chunked" header, because the request from the browser had no Content-Length. The request from the browser had no Content-Length because it had no body. Nginx doesn't like the Transfer-Encoding: chunked header and throws a 411, asking for a Content-Length. The problem gets solved when I send dummy data as part of the DELETE request: then there is a Content-Length, node-http-proxy doesn't add the Transfer-Encoding: chunked header, and nginx is happy. I want to understand whether node-http-proxy isn't working as expected when it adds a Transfer-Encoding: chunked header just because the Content-Length is missing on a request with no body.

  • Power Dynamic Database-Driven Websites with MySQL & PHP

    - by Antoinette O'Sullivan
    Join major names among MySQL customers by learning to power dynamic, database-driven websites with MySQL and PHP. With the MySQL and PHP: Developing Dynamic Web Applications course, in 4 days you learn how to develop applications in PHP and how to use MySQL efficiently for those applications! Through a hands-on approach, this instructor-led course helps you improve your PHP skills and combine them with time-proven database management techniques to create best-of-breed web applications that are efficient, solid and secure. You can currently take this course as a:

    - Live Virtual Class (LVC): There are a number of events on the schedule to suit different timezones in January 2013 and March 2013. With an LVC, you get to follow this live instructor-led class from your own desk - so no travel expense or inconvenience.
    - In-Class Event: Travel to an education center to attend this class. Here are some events already on the schedule:

          Lisbon, Portugal   - 15 April 2013    - European Portuguese
          Porto, Portugal    - 15 April 2013    - European Portuguese
          Barcelona, Spain   - 28 February 2013 - Spanish
          Madrid, Spain      - 4 March 2013     - Spanish

    If you do not see an event that suits you, register your interest in an additional date/location/delivery language. If you want more in-depth knowledge on developing with MySQL and PHP, consider the MySQL for Developers course. For full details on these and all courses on the authentic MySQL curriculum, go to http://oracle.com/education/mysql.

  • Coarse Collision Detection in highly dynamic environment

    - by Millianz
    I'm currently working on a 3D space game with A LOT of dynamic objects that are all moving (there is pretty much no static environment). I have the collision detection and resolution working just fine, but I am now trying to optimize the collision detection, which is currently O(N^2) - a brute-force test of every pair. I thought about multiple options: a bounding volume hierarchy, a binary space partitioning tree, an octree or a grid. I however need some help with deciding what's best for my situation. A grid seems unfeasible simply due to the space requirements and cache coherence problems. Since everything is so dynamic, however, it seems that trees aren't ideal either, since they would have to be completely rebuilt every frame. I must admit I have never implemented a physics engine that required spatial partitioning: do I indeed need to rebuild the tree every frame (assuming that everything is constantly moving), or can I update the trees after integrating? Advice is much appreciated. To give some more background: you're flying a space ship in an asteroid field, and there are lots and lots of asteroids and some enemy ships, all of which shoot bullets. EDIT: I came across the "Sweep and Prune" algorithm, which seems like the right thing for my purposes. It appears to be the right mixture of fast building of the data structures involved and detailed-enough partitioning. This is the best resource I can find: http://www.codercorner.com/SAP.pdf If anyone has any suggestions whether or not I'm going in the right direction, please let me know.
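
    For reference, here is a minimal single-axis sweep-and-prune sketch (assumed types, and deliberately simplified - the implementation described in the PDF above keeps persistent endpoint lists and updates them incrementally rather than re-sorting a list of bodies). Frame-to-frame coherence keeps the list nearly sorted, which is why the in-place insertion sort stays close to O(n):

        using System.Collections.Generic;

        // Hypothetical body type: anything with an axis-aligned interval on the sweep axis.
        public class Body
        {
            public float MinX, MaxX;
        }

        public static class SweepAndPrune
        {
            // Returns candidate pairs whose X intervals overlap; only these need
            // the exact narrow-phase collision test.
            public static List<(Body A, Body B)> FindCandidates(List<Body> bodies)
            {
                // Insertion sort by MinX, in place: nearly-sorted input from the
                // previous frame makes this cheap, which is the core of the idea.
                for (int i = 1; i < bodies.Count; i++)
                {
                    Body b = bodies[i];
                    int j = i - 1;
                    while (j >= 0 && bodies[j].MinX > b.MinX)
                    {
                        bodies[j + 1] = bodies[j];
                        j--;
                    }
                    bodies[j + 1] = b;
                }

                var pairs = new List<(Body A, Body B)>();
                // Sweep: once bodies[j].MinX passes bodies[i].MaxX, nothing further
                // right can overlap bodies[i], so the inner loop terminates early.
                for (int i = 0; i < bodies.Count; i++)
                    for (int j = i + 1; j < bodies.Count && bodies[j].MinX <= bodies[i].MaxX; j++)
                        pairs.Add((bodies[i], bodies[j]));
                return pairs;
            }
        }

    Sweeping a second axis over the surviving pairs (or testing Y/Z overlap inline) prunes further before the narrow phase.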

  • How to get the binding for a tab in the Dynamic Tab Shell Template

    - by Frank Nimphius
    The Dynamic Tab Shell template does expose a method on the Tab.java class that allows you to get access to the ADF binding container for a tab. At least in theory, because in practice this call always returns a null value (a bug is filed for this). To work around the problem, you can use code similar to the following to get the ADF binding for a specific tab:

        DCBindingContainer currentBinding = (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
        DCBindingContainer templateBinding = (DCBindingContainer) currentBinding.get("ptb1");
        DCBindingContainer tabBinding = (DCBindingContainer) templateBinding.get("r" + 0);

    In the code above, the tabBinding variable will hold the binding reference to the first tab in the dynamic tab shell template. Note that the tab doesn't need to be visible for this (which has to do with how the template works). "ptb1" is the template reference name in the PageDef file (Executable section) of the template consumer view. Check this string in your page before using this code; if it differs, change it in the code above as well. "r0" is the binding reference of the first tab in the template. The last tab is referenced by "r14".

  • Dynamic Monitoring Service (DMS) Configuration Dumping and CPU Utilization

    - by ShawnBailey
    There was recently a report of CPU spikes on a system that were occurring at precise 3-hour intervals. Research revealed that the spikes were the result of the Dynamic Monitoring Service generating a metrics dump and writing it under the server 'logs' folder for every WLS server in the domain. This blog provides some information on what this is for and how to control it. The Dynamic Monitoring Service is a facility in FMw (JRF to be more precise) that collects runtime data on the components deployed to WebLogic. Each component is responsible for how much or how little it uses the service, and SOA collects a fair amount of information. To view what is collected on any running server, you can use the following URL, http://host:port/dms/Spy, and login with admin credentials. DMS is essentially always running and collecting this information in the runtime, and to protect against loss of this data it also runs automatic backups, by default at the 3-hour interval mentioned above. Most of the management options for DMS are exposed through WLST, but these settings are not, so we must open the dms_config.xml file, which can be found in DOMAIN_HOME/config/fmwconfig/servers/<server_name>/dms_config.xml. The contents are fairly short, and at the bottom you will find the following entry:

        <dumpConfiguration>
            <dump intervalSeconds="10800" maxSizeMBytes="75" enabled="true"/>
        </dumpConfiguration>

    The interval of 10800 seconds corresponds to the 3 hours, and the maximum size is 75MB. The file is written as an archive to DOMAIN_HOME/servers/<server_name>/logs/metrics. This archive contains the dump in XML format. You can disable the dumps altogether by simply setting the 'enabled' value to 'false', or of course you could modify the other parameters to suit your needs. Disabling the dumps will NOT impact DMS collection or display at runtime; it will only eliminate these periodic backups.

  • PHP Included files writing their own content from Importer values ...

    - by Adrian
    Hello, I have an index.php file that will include several external files:

        "content/templates/id1/template.php"
        "content/templates/id2/template.php"
        "content/templates/id3/template.php"
        etc.

    All these files are loaded dynamically into index.php (it reads all folders inside the "templates" directory and then includes every "template.php" file). I want "template.php" to have the same code in all the id1, id2, id3 folders, BUT to load values from index.php depending on which folder it sits in. How can I do that? Thank You!

  • WPF: display staggered content

    - by Chris Cap
    I am trying to display a rather dynamic list of data in WPF. I have essentially a LineItem class that contains a list of strings and a line type. The line type separates different categories of line items. All line items with the same type should be displayed the same, and their data should line up vertically. For example, this list will contain an order summary, and there will be a line type that represents something with a width and height. The width and height must line up vertically. However, there may be other line types that don't have to line up vertically. I want to produce a table similar to what you see below:

        ------------------------------------------------------------------
        | some content here | some more content here | last content here |
        |----------------------------------------------------------------|
        | some content here |                        | last content here |
        |----------------------------------------------------------------|
        | spanning content that is longer than most  | last content here |
        |----------------------------------------------------------------|
        | some content that can span a really long distance              |
        ------------------------------------------------------------------

    I attempted to do this by creating a ListView with a single column whose data template contains a grid with a fixed number of columns, and then binding to the ColumnSpan value. Unfortunately, this didn't work: I ended up with incorrect or overlapping content any time I tried to do a column span. Here's the XAML I was working with:

        <ListView ItemsSource="{Binding}">
            <ListView.View>
                <GridView>
                    <GridViewColumn Header="Content">
                        <GridViewColumn.CellTemplate>
                            <DataTemplate>
                                <Grid>
                                    <Grid.ColumnDefinitions>
                                        <ColumnDefinition />
                                        <ColumnDefinition />
                                        <ColumnDefinition />
                                    </Grid.ColumnDefinitions>
                                    <TextBlock Grid.Column="0" Grid.ColumnSpan="{Binding Path=Tokens[0].ColumnSpan}" Text="{Binding Path=Tokens[0].Content}"></TextBlock>
                                    <TextBlock Grid.Column="1" Grid.ColumnSpan="{Binding Path=Tokens[1].ColumnSpan}" Text="{Binding Path=Tokens[1].Content}"></TextBlock>
                                    <TextBlock Grid.Column="2" Text="{Binding Path=Tokens[2].Content}"></TextBlock>
                                </Grid>
                            </DataTemplate>
                        </GridViewColumn.CellTemplate>
                    </GridViewColumn>
                </GridView>
            </ListView.View>
        </ListView>

    And here are the classes I was binding to:

        public class DisplayLine
        {
            public LineType Linetype { get; set; }
            public List<Token> Tokens { get; set; }

            public DisplayLine()
            {
                Tokens = new List<Token>();
            }
        }

        public class Token
        {
            public string Content { get; set; }

            public bool IsEmpty
            {
                get { return string.IsNullOrEmpty(Content); }
            }

            public int ColumnSpan { get; set; }

            public Token()
            {
                ColumnSpan = 1;
            }
        }

    Does anyone have any suggestions of maybe a way of making this work? I may be taking the wrong approach. I'm trying to avoid any solutions where I explicitly build something in the code-behind, as I'm using the MVVM pattern, so it has to be something that I can bind to, exposed through the controller. My initial plan was to create a factory and separate classes that display the data differently based on type. However, I'm struggling to come up with a strategy for this using MVVM, as I really can't just build something and display it. I have toyed with the idea of making some kind of UI service class that is injected, but it would still require some pretty detailed UI information from the controller to do its work.
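
    One MVVM-friendly direction - a sketch, where the LineType.Dimension value is a hypothetical enum member standing in for "lines whose columns must line up" - is a DataTemplateSelector: expose one DataTemplate per line type as a resource, and let the selector route each DisplayLine to the right one. Alignment across items can then be handled inside the templates with WPF's Grid.IsSharedSizeScope / SharedSizeGroup rather than with ColumnSpan bindings:

        using System.Windows;
        using System.Windows.Controls;

        // Routes each DisplayLine to a template for its line type. The template
        // properties are assigned in XAML resources.
        public class LineTemplateSelector : DataTemplateSelector
        {
            public DataTemplate AlignedTemplate { get; set; }   // width/height style rows
            public DataTemplate SpanningTemplate { get; set; }  // free-form rows

            public override DataTemplate SelectTemplate(object item, DependencyObject container)
            {
                var line = item as DisplayLine;
                if (line == null)
                    return base.SelectTemplate(item, container);

                return line.Linetype == LineType.Dimension
                    ? AlignedTemplate
                    : SpanningTemplate;
            }
        }

    The list control would set its ItemTemplateSelector to an instance of this class, and each aligned template's Grid columns would share a SharedSizeGroup name so matching cells line up vertically across separate rows.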

  • ViewBag dynamic in ASP.NET MVC 3 - RC 2

    - by hajan
    Earlier today Scott Guthrie announced ASP.NET MVC 3 Release Candidate 2. I installed the new version right after the announcement since I was eager to see the new features. Among other cool features included in this release candidate, there is a new ViewBag dynamic which can be used to pass data from controllers to views, just as you would use the ViewData[] dictionary. What is great and nice about ViewBag (despite the name) is that it's a dynamic type, which means you can dynamically get/set values and add any number of additional fields without needing strongly-typed classes. In order to see the difference, please take a look at the following examples.

    Example - Using ViewData

    Controller:

        public ActionResult Index()
        {
            List<string> colors = new List<string>();
            colors.Add("red");
            colors.Add("green");
            colors.Add("blue");

            ViewData["listColors"] = colors;
            ViewData["dateNow"] = DateTime.Now;
            ViewData["name"] = "Hajan";
            ViewData["age"] = 25;
            return View();
        }

    View (ASPX View Engine):

        <p>
            My name is
            <b><%: ViewData["name"] %></b>,
            <b><%: ViewData["age"] %></b> years old.
            <br />
            I like the following colors:
        </p>
        <ul id="colors">
        <% foreach (var color in ViewData["listColors"] as List<string>) { %>
            <li>
                <font color="<%: color %>"><%: color %></font>
            </li>
        <% } %>
        </ul>
        <p>
            <%: ViewData["dateNow"] %>
        </p>

    (I know the code might look cleaner with the Razor view engine, but it doesn't matter, right? ;))

    Example - Using ViewBag

    Controller:

        public ActionResult Index()
        {
            List<string> colors = new List<string>();
            colors.Add("red");
            colors.Add("green");
            colors.Add("blue");

            ViewBag.ListColors = colors; // colors is a List
            ViewBag.DateNow = DateTime.Now;
            ViewBag.Name = "Hajan";
            ViewBag.Age = 25;
            return View();
        }

    You see the difference?

    View (ASPX View Engine):

        <p>
            My name is
            <b><%: ViewBag.Name %></b>,
            <b><%: ViewBag.Age %></b> years old.
            <br />
            I like the following colors:
        </p>
        <ul id="colors">
        <% foreach (var color in ViewBag.ListColors) { %>
            <li>
                <font color="<%: color %>"><%: color %></font>
            </li>
        <% } %>
        </ul>
        <p>
            <%: ViewBag.DateNow %>
        </p>

    In my example I now don't need to cast ViewBag.ListColors as List<string>, since ViewBag is a dynamic type, while ViewData["key"] is an object. I would like to note that if you use ViewData["ListColors"] = colors; in your controller, you can retrieve it in the view by using ViewBag.ListColors. The result is the same in both cases. Hope you like it! Regards, Hajan
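
    A quick sketch of that last point (not from the original post): ViewBag is a dynamic wrapper over the same dictionary that ViewData uses, so each API sees the other's writes:

        using System.Web.Mvc;

        public class DemoController : Controller
        {
            public ActionResult Demo()
            {
                ViewData["Message"] = "stored via ViewData";
                ViewBag.Count = 3;                    // stored via ViewBag

                string message = ViewBag.Message;     // reads the ViewData entry
                int count = (int)ViewData["Count"];   // reads the ViewBag entry

                return View();
            }
        }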
