Search Results

Search found 31630 results on 1266 pages for 'content management'.


  • Mass Metadata Updates with Folders

    - by Kyle Hatlestad
    With the release of WebCenter Content PS5, a new folder architecture called 'Framework Folders' was introduced. It is meant to replace the 'Folders_g' folder architecture. While the concepts of a folder structure and access to those folders through Desktop Integration Suite remain the same, the underlying architecture of the component has been completely rewritten. One of the main goals of the new folders is to scale better at large volumes and to remove the limit of 1000 content items or sub-folders within a folder. Along with the new architecture, it has a new look and a few additional features. One of those features is Query Folders. These are folders that are populated by a query rather than by literally placing items within them. This is something the Library has provided, but it always took an administrator to define them through the Web Layout Editor. Now users can quickly define query folders anywhere within the standard folder hierarchy.

    Within Framework Folders is the very handy ability to do metadata updates. It's similar to the Propagate feature in Folders_g, but there are some key differences that make it far more flexible and powerful:

      - It works within regular folders and Query Folders, so the content you're updating doesn't all have to be in the same folder...or in a folder at all.
      - The user decides what metadata to propagate. In Folders_g, the system administrator controls which fields will be propagated using a single administration page. In Framework Folders, the user decides at propagation time which fields to update.
      - You set the value you want on the propagation screen. Folders_g propagated the metadata defined on the parent folder. With Framework Folders, you supply the new metadata value when you select the fields you want to update; it does not have to be defined on the parent folder.

    Because of these differences, I think the new propagate method is much more useful. Instead of always having to rely on Archiver or a custom spreadsheet, you can quickly do mass metadata updates right within folders. Here are the basic steps to perform propagation:

      1. Create a folder for the propagation. A regular folder works, but so does a Query Folder.
      2. Go into the folder to get the results.
      3. In the Edit menu, select 'Propagate'.
      4. Select the check-box next to each field to update and enter the new value.
      5. Click the Propagate button. Once complete, a dialog will appear showing it is done.

    What's also nice is that the process happens asynchronously in the background, which means you can browse to other pages and do other things while it is still working. You aren't stuck on the page waiting for it to complete. In addition, you can add a configuration flag to the server to turn on a status indicator icon: set 'FldEnableInProcessIndicator=1' and it will show a working icon while it is doing the propagation.

    There is a caveat when using propagation on a Query Folder. While a propagation on a regular folder will update all of the items within that folder, a Query Folder propagation will only update the first 50 items. So you may need to run it multiple times depending on the size...and have the query exclude the items as they get updated.

    One extra note: Framework Folders is offered as the default folder architecture in the PS5 release of WebCenter Content. But if you are using WebCenter Content integrated with another product that makes use of folders (WebCenter Portal/Spaces, Fusion Applications, Primavera, etc.), you'll need to continue using Folders_g until they are updated to use the new folders.

    Read the article

  • A Year of Upheaval for Procurement Professionals-New Report & Webinar

    - by DanAshton
    2013 will see significant changes in priorities and initiatives among procurement professionals as they balance the needs of their enterprises with efforts to add capabilities for long-term procurement success. In response, procurement managers will expand their organization's spend influence via supplier relationship management, sourcing, and category management. These findings are part of the new report, "2013 Procurement Key Issues: Going Deeper and Broader to Deliver Borderless Procurement Services," by The Hackett Group. The authors say that compared to similar studies over the last five years, 2013 is registering the greatest year-over-year changes in priorities for both procurement performance and capability issues.

    Three Important Priorities

    The survey found that procurement professionals are focusing their attention in three key areas:

      - Cost reduction. Controlling expenses is always a high priority, but with 90 percent of the respondents now placing this at the top of their performance concerns, the Hackett analysts say this "clearly shows that, for better or worse, cost reduction is king" in 2013.
      - Technology innovation. Innovation has shot up significantly in the priority rankings and is now tied with spend influence for second among procurement professionals. Sixty-five percent of the survey participants said pursuing game-changing innovation and technology is a top procurement initiative.
      - Managing supply risk. This area registered a sharp rise in importance because of its role in protecting profits, Hackett says. Supplier compliance with performance milestones and regulatory requirements is receiving particular attention, with an emphasis on efficient management of cross-functional workflows. "These processes create headaches for suppliers and buyers alike, and can detract from strategic value creation when participants are bogged down in processing paper and spreadsheets," the report explains.

    For more insights into the current state of the procurement industry, download the full report, "2013 Procurement Key Issues: Going Deeper and Broader to Deliver Borderless Procurement Services," and watch a Webcast featuring Chris Sawchuk, Global Procurement Advisory Practice Leader for The Hackett Group, and Chris Nelms, Managing Supervisor of Supply Chain Processes and Systems for Ameren.

    Read the article

  • What is the role of traditional issue tracker when Scrum / Kanban board is used?

    - by Borek
    From a very high-level view, it seems to me there are generally two types of project management tools:

      - Traditional issue trackers like Fogbugz, JIRA, BugZilla, Trac, Redmine, etc.
      - Virtual card boards / agile project management tools like Pivotal Tracker, GreenHopper, AgileZen, Trello, etc.

    Sure, they overlap in one way or another (e.g. Pivotal Tracker tasks can be imported into JIRA, and GreenHopper itself is implemented on top of the JIRA issue base), but I think one can still see the difference in orientation between these two types of tools. Traditional issue trackers seem to be used even in companies otherwise doing agile project management. My question is: why do they do that? I also feel that we should use an issue tracker in my company, but when I think about it, I'm not actually sure why we would need one. For example, Trello development seems to be managed by using Trello itself (see this virtual wall) even though they have access to Fogbugz, one of the best issue trackers around. So maybe we don't need a traditional issue tracker when we'll be doing 100% of our work in an agile manner using one of the agile PM tools?

    Read the article

  • Notification framework for object lifecycle

    - by rlandster
    I am looking for an application, framework, or library that would help us with "object life-cycle management". There are many things that are created for users, departments, and services that, all too often, are left unmanaged. Some examples:

      - user accounts
      - groups
      - SSL certificates
      - access rights
      - databases
      - software license provisionings
      - storage
      - list-serve accounts

    These objects are created and managed by a wide variety of applications and systems. Typically, a user (person) requests (either explicitly or implicitly) one of these objects. A centralized management tool would help us with administration chores such as:

      - What objects does user X currently own/manage?
      - Move the ownership of object P to user X; move all objects owned by user X (who has just been fired) to user Y.
      - For all objects of type T that have expired, be sure the objects have been disabled or deleted by their provider.
      - How many active (expired, about-to-expire) objects of type P are there?
      - Send periodic notifications to all users who own active objects of type P reminding them of what they own.
      - There is a security alert for objects of type P; send a notification to all users who own these types of objects to take a specific remedial action.
      - Delete or disable a set of objects based on expiration (or some other criteria).

    These objects are directly managed through their own applications (Active Directory, MySQL, file systems, etc.) and may even have their own notification systems, but I want to centralize this into an "object management system". The OMS should allow:

      - association with an external identity provider that defines who the users and groups are (e.g., LDAP, Active Directory)
      - creation of objects
      - association of an object with a specific user and/or group
      - association with an expiration date
      - creation of flexible reporting, including letting users know what objects they currently own and their expiration dates
      - integration with an external object "provider" via a plug-in

    We could write something from scratch, but I am hoping there is something already out there that will help, either an entire application or a set of libraries that provide much of what is needed. Any ideas?
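    For illustration only, the kind of central record and queries described above might look something like the following minimal C# sketch. All type, property, and method names here are hypothetical, not taken from any existing product:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        enum ManagedObjectType { UserAccount, Group, SslCertificate, Database, License, Storage }

        class ManagedObject
        {
            public string Name { get; set; }
            public ManagedObjectType Type { get; set; }
            public string Owner { get; set; }       // resolved against the external identity provider (LDAP/AD)
            public DateTime? Expires { get; set; }
            public string Provider { get; set; }    // plug-in that actually creates/disables the object
        }

        class ObjectRegistry
        {
            private readonly List<ManagedObject> objects = new List<ManagedObject>();

            public void Register(ManagedObject obj) { objects.Add(obj); }

            // "What objects does user X currently own/manage?"
            public IEnumerable<ManagedObject> OwnedBy(string user)
            {
                return objects.Where(o => o.Owner == user);
            }

            // "Move all objects owned by user X to user Y."
            public void Reassign(string fromUser, string toUser)
            {
                foreach (var o in OwnedBy(fromUser).ToList())
                    o.Owner = toUser;
            }

            // "For all objects of type T that have expired..."
            public IEnumerable<ManagedObject> Expired(ManagedObjectType type, DateTime now)
            {
                return objects.Where(o => o.Type == type && o.Expires.HasValue && o.Expires.Value < now);
            }
        }

    The reporting and notification pieces would then just be consumers of queries like OwnedBy() and Expired(), with each Provider plug-in responsible for actually disabling or deleting the underlying object.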

    Read the article

  • Webcast: DB Enterprise User Security Integration with Oracle Directory Services

    - by B Shashikumar
    The typical enterprise has a large number of DBA (database administrator) accounts that are locally managed, which is often very costly, problematic, and error-prone. Databases are a crucial component of your enterprise IT infrastructure, housing sensitive corporate data as well as database user accounts and privileges. To ensure the integrity of your enterprise's data, it's imperative to have a well-managed identity management system. This begins with centralized management of user accounts and access rights. Enterprise User Security (EUS), an Oracle Database Enterprise Edition feature, combined with Oracle Identity Management, gives you the ability to manage database users and their authorizations in one central place. The cost of user provisioning and password resets is dramatically reduced. This technology is a must for new application development and should be considered for existing applications as well. Join Oracle Advisors for a live webcast on Jul 11 at 8am Pacific Time, where Oracle experts will briefly introduce EUS, followed by a detailed discussion of the various directory options that are supported, including integration with Microsoft Active Directory. We'll conclude with how to avoid common pitfalls when deploying EUS with directory services. To register for this event, click here.

    Read the article

  • PeopleSoft CRM 9.2 Release Value Proposition

    - by Race Bannon
    Oracle's PeopleSoft Customer Relationship Management (CRM) delivers solutions that have been tailored to fit your industry business processes, your customer strategies, and your success criteria. With PeopleSoft CRM 9.2, organizations will be able to deploy a solution that delivers built-in best practices specific to your industry on a highly configurable, tightly integrated platform, ensuring that solutions will be fast to implement. The result is less configuration, less customization, and less integration. PeopleSoft CRM is a world-class solution for organizations of every size, and Oracle's planned product roadmap for PeopleSoft applications is to deliver valuable, needed features for all of an organization's constituents along three design principles (Simplicity, Productivity, and Lower Total Cost of Ownership) as well as new application functionality as prioritized by our customers. The upcoming 9.2 release of PeopleSoft CRM focuses on these themes while also delivering robust new functionality to help your organization succeed. The recently published PeopleSoft CRM 9.2 Release Value Proposition provides overviews of the new features and enhancements planned for these applications in Release 9.2. This document offers customers a road map intended to help them assess the business benefits of upgrading to the 9.2 release while also helping them plan their IT projects and investments. (The link is to a My Oracle Support page, available to customers and partners.) Oracle continues to deliver enterprise-wide features that enhance the customer ownership experience and help customers run their businesses more efficiently and profitably. With the CRM 9.2 release, we continue to abide by this firm commitment we've made to our customers.

    Read the article

  • ASP.NET GZip Encoding Caveats

    - by Rick Strahl
    GZip encoding in ASP.NET is pretty easy to accomplish using the built-in GZipStream and DeflateStream classes and applying them to the Response.Filter property. While applying GZip and Deflate behavior is pretty easy, there are a few caveats that you have to watch out for, as I found out today for myself with an application that was throwing up some garbage data. But before looking at the caveats, let's review the GZip implementation for ASP.NET.

    ASP.NET GZip/Deflate Basics

    Response filters basically are applied to the Response.OutputStream and transform it as data is written to it through the ASP.NET Response object. So a Response.Write eventually gets written into the output stream which, if a filter is assigned, is also written through the filter stream's interface. To perform the actual GZip (and Deflate) encoding typically used by Web pages, .NET includes the GZipStream and DeflateStream stream classes, which can be readily assigned to the Response.OutputStream. With these two stream classes in place it's almost trivially easy to create a couple of reusable methods that allow you to compress your HTTP output. In my standard WebUtils utility class (from the West Wind Web Toolkit) I created two static utility methods - IsGZipSupported and GZipEncodePage - that check whether the client supports GZip encoding and then actually encode the current output (note that although the method includes 'Page' in its name, this code will work with any ASP.NET output).

        /// <summary>
        /// Determines if GZip is supported
        /// </summary>
        /// <returns></returns>
        public static bool IsGZipSupported()
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
            if (!string.IsNullOrEmpty(AcceptEncoding) &&
                (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
                return true;
            return false;
        }

        /// <summary>
        /// Sets up the current page or handler to use GZip through a Response.Filter
        /// IMPORTANT:
        /// You have to call this method before any output is generated!
        /// </summary>
        public static void GZipEncodePage()
        {
            HttpResponse Response = HttpContext.Current.Response;
            if (IsGZipSupported())
            {
                string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
                if (AcceptEncoding.Contains("deflate"))
                {
                    Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                          System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "deflate");
                }
                else
                {
                    Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                          System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "gzip");
                }

                // Allow proxy servers to cache encoded and unencoded versions separately
                Response.AppendHeader("Vary", "Content-Encoding");
            }
        }

    As you can see, the actual assignment of the filter is as simple as:

        Response.Filter = new DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress);

    which applies the filter to the OutputStream. You also need to ensure that your response reflects the new GZip or Deflate encoding and that any pages cached in proxy servers can differentiate between pages that were encoded with the various different encodings (or no encoding).
    Using this utility function is trivially easy: in any ASP.NET code that wants to compress its Response output, you simply call it before generating output:

        protected void Page_Load(object sender, EventArgs e)
        {
            WebUtils.GZipEncodePage();

            Entry = WebLogFactory.GetEntry();

            var entries = Entry.GetLastEntries(App.Configuration.ShowEntryCount,
                              "pk,Title,SafeTitle,Body,Entered,Feedback,Location,ShowTopAd", "TEntries");
            if (entries == null)
                throw new ApplicationException("Couldn't load WebLog Entries: " + Entry.ErrorMessage);

            this.repEntries.DataSource = entries;
            this.repEntries.DataBind();
        }

    Here I use an ASP.NET page, but the above WebUtils.GZipEncodePage() method call will work in any ASP.NET application type, including HTTP handlers. The only requirement is that the filter needs to be applied before any other output is sent to the OutputStream. For example, in my CallbackHandler service implementation, output over a certain size is GZip encoded by default. The output that is generated is JSON or XML, and if the output is over 5k in size I apply WebUtils.GZipEncodePage():

        if (sbOutput.Length > GZIP_ENCODE_TRESHOLD)
            WebUtils.GZipEncodePage();

        Response.ContentType = ControlResources.STR_JsonContentType;
        HttpContext.Current.Response.Write(sbOutput.ToString());

    Ok, so you probably get the idea: encoding GZip/Deflate content is pretty easy.

    Hold on there Hoss - Watch your Caching

    Or is it? There are a few caveats that you need to watch out for when dealing with GZip content. The first issue is that you need to deal with the fact that some clients don't support GZip or Deflate content. Most modern browsers support it, but if you have a programmatic HTTP client accessing your content, GZip/Deflate support is by no means guaranteed. For example, WinInet HTTP clients don't support GZip out of the box - it has to be explicitly implemented. Other low-level HTTP clients on other platforms don't support GZip out of the box either. The problem is that your application, your Web server, and proxy servers on the Internet might be caching your generated content. If you return content with GZip once and then again without, either caching is not applied or, worse, the wrong type of content is returned to the client from a cache or proxy. The result is an unreadable response for *some clients*, which is also very hard to debug and fix once in production.

    You already saw the issue of proxy servers addressed in the GZipEncodePage() function:

        // Allow proxy servers to cache encoded and unencoded versions separately
        Response.AppendHeader("Vary", "Content-Encoding");

    This ensures that any proxy servers also check the Content-Encoding HTTP header to cache their content - not just the URL. The same thing applies if you do OutputCaching in your own ASP.NET code. If you generate output for GZip on an OutputCached page, the GZipped content will be cached (either by ASP.NET's cache or in some cases by the IIS kernel cache). But what if the next client doesn't support GZip? She'll get served a cached GZip page that won't decode, and she'll get a page full of garbage. Wholly undesirable.
    To fix this you need to add a custom OutputCache rule by way of the GetVaryByCustomString() HttpApplication method in your Global.asax file:

        public override string GetVaryByCustomString(HttpContext context, string custom)
        {
            // Override Caching for compression
            if (custom == "GZIP")
            {
                string acceptEncoding = HttpContext.Current.Response.Headers["Content-Encoding"];

                if (string.IsNullOrEmpty(acceptEncoding))
                    return "";
                else if (acceptEncoding.Contains("gzip"))
                    return "GZIP";
                else if (acceptEncoding.Contains("deflate"))
                    return "DEFLATE";

                return "";
            }

            return base.GetVaryByCustomString(context, custom);
        }

    In a page that uses Output caching, you then specify the custom rule:

        <%@ OutputCache Duration="180" VaryByParam="none" VaryByCustom="GZIP" %>

    It's all Fun and Games until ASP.NET throws an Error

    Ok, so you're up and running with GZip, you have your caching squared away, and the pages you are applying it to are jamming along. Then BOOM, something strange happens and you get a lovely garbled page. What's happened here is that I have WebUtils.GZipEncodePage() applied to my page, but there's an error in the page. The error falls back to the ASP.NET error handler, and the error handler removes all existing output (good) and removes all the custom HTTP headers I've set manually (usually good, but very bad here). Since I applied the Response.Filter (via GZipEncodePage) the output is now GZip encoded, but ASP.NET has removed my Content-Encoding header, so the browser receives the GZip encoded content without a notification that it is encoded as GZip. The result is binary output. Here's what Fiddler says about the raw HTTP header output when an error occurs while GZip encoding was applied:

        HTTP/1.1 500 Internal Server Error
        Cache-Control: private
        Content-Type: text/html; charset=utf-8
        Date: Sat, 30 Apr 2011 22:21:08 GMT
        Content-Length: 2138
        Connection: close

        ?`I?%&/m?{J?J??t??`   ... binary output stripped here

    Notice: no Content-Encoding header, and that's why we're seeing this garbage. ASP.NET has stripped the Content-Encoding header but left our filter intact. So how do we fix this? In my applications I typically have a global Application_Error handler set up, and in this case I've been using that. One thing that you can do in the Application_Error handler is explicitly clear out the Response.Filter and set it to null at the top:

        protected void Application_Error(object sender, EventArgs e)
        {
            // Remove any special filtering especially GZip filtering
            Response.Filter = null;
            ...
        }

    And voila, I get my Yellow Screen of Death or my custom generated error output back via uncompressed content. BTW, the same is true for page-level errors handled in Page_Error or ASP.NET MVC error handling methods in a controller.
    Another and possibly even better solution is to check whether a filter is attached just before the headers are sent to the client, as pointed out by Adam Schroeder in the comments:

        protected void Application_PreSendRequestHeaders()
        {
            // ensure that if GZip/Deflate Encoding is applied that headers are set
            // also works when error occurs if filters are still active
            HttpResponse response = HttpContext.Current.Response;
            if (response.Filter is GZipStream && response.Headers["Content-encoding"] != "gzip")
                response.AppendHeader("Content-encoding", "gzip");
            else if (response.Filter is DeflateStream && response.Headers["Content-encoding"] != "deflate")
                response.AppendHeader("Content-encoding", "deflate");
        }

    This uses the Application_PreSendRequestHeaders() pipeline event to check for a compression encoding filter and adjusts the headers accordingly. This is actually a better solution since it is generic - it'll work regardless of how the content is cleaned up. For example, an error Response.Redirect() or a short error display might change the output without the filter being cleared, and this code handles that as well. Sweet, thanks Adam. It's unfortunate that ASP.NET doesn't natively clear out Response.Filter when an error occurs, just as it clears the Response and headers. I can't see where leaving a filter in place in an error situation would make any sense, but hey - this is what it is and it's easy enough to fix as long as you know where to look. Riiiight!

    IIS and GZip

    I should also mention that IIS 7 includes good support for compression natively. If you can defer encoding to let IIS perform it for you rather than doing it in your code, by all means you should do it! Especially any static or semi-dynamic content that can be made static should be using IIS built-in compression. Dynamic caching is also supported but is a bit more tricky to judge in terms of performance and footprint. John Forsyth has a great article on the benefits and drawbacks of IIS 7 compression which gives some detailed performance comparisons and impact reviews. I'll post another entry next with some more info on IIS compression since information on it seems to be a bit hard to come by.

    Related Content
      - Built-in GZip/Deflate Compression in IIS 7.x
      - HttpWebRequest and GZip Responses

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, IIS7.
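    On the client side of the caveat mentioned above, .NET's own HttpWebRequest also only handles GZip/Deflate when you opt in, via its AutomaticDecompression property. A minimal sketch of checking a compressed page from code follows; the URL is a placeholder, not from the article:

        using System;
        using System.IO;
        using System.Net;

        class CompressionCheck
        {
            static void Main()
            {
                var request = (HttpWebRequest)WebRequest.Create("http://localhost/MyApp/SomePage.aspx");

                // Opting in adds the Accept-Encoding header and transparently decompresses the response
                request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    // ContentEncoding may be empty once the framework has already decompressed the stream
                    Console.WriteLine("Content-Encoding: " + response.ContentEncoding);
                    Console.WriteLine("Body length after decompression: " + reader.ReadToEnd().Length);
                }
            }
        }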

    Read the article

  • The spork/platypus average: shameless self promotion

    - by Roger Hart
    This is the video of the presentation I gave at UA Europe and TCUK this year. The actual sub-title was "Content strategy at Red Gate Software", but this heading feels more honest. For anybody who missed it, or is just vaguely interested, here's a link to me talking about de-suckifying the web. You can find the slideshare deck here, too*

    Watching it back is more than a little embarrassing, and makes me really, really want to do a follow up, so I can do three things:

      - explain the rest of the big web project, now we've done it
      - give some data on the outcome of the content review
      - make a grovelling apology to our marketing guys, who I've been unfairly mean to in a childish effort to look cool

    There are a whole bunch of other TCUK presentations online, too. You can find them all here: http://tiny.cc/tcuk10_videos

    I'd particularly recommend Chris Atherton's "Everything you always wanted to know about psychology and technical communication" - it's full of cool stuff. You should probably also watch David Black's opening keynote, which managed to make my hour of precocious grandstanding look measured, meek, and helpful. He actually makes some interesting points, but you'd basically have to ship Richard Dawkins off to Utah if you wanted to go further out of your way to aggravate your audience. It does give an engaging account of running a large tech comms project, and raise some questions about how we propose to understand a world where increasing amounts of our stuff gets done by increasingly many, increasingly complicated tissues of APIs. Well, sort of. That's what all the notes I made were about, anyway.

    *Slideshare ate my fonts. Just so we're clear on this: I'd never use badly-kerned Arial in a presentation. Don't worry.

    Read the article

  • Stay Connected with Oracle Enterprise 2.0

    - by kellsey.ruppel(at)oracle.com
    We want to be sure you stay connected and updated with the latest in Oracle Content Management, Portal and Collaboration technologies. We invite you to follow us on Twitter, become our friends on Facebook, check our blog frequently, and subscribe to the Enterprise 2.0 newsletter!

      - Oracle Enterprise 2.0 Twitter
      - Oracle Enterprise 2.0 Facebook
      - Oracle Enterprise 2.0 Blog
      - Oracle Enterprise 2.0 Newsletter

    We look forward to staying connected with you in 2011!

    Read the article

  • XNA CustomModelAnimationSample problem

    - by Mentoliptus
    I downloaded the official tutorial from: CustomModelAnimationSample. It works fine, but when I try to replicate it in my project, it fails to load the Tag property in my model. I found that the problem is in this line:

        skinnedModel = Content.Load<Model>("DudeWalk");

    This line loads the model from the DudeWalk.fbx file with the custom SkinnedModelProcessor, and it loads the animation data into the model. After the line, the Tag property is populated. I stepped into the method and it went into the custom ModelData class. I then copied everything from the CustomModelAnimationWindows and CustomModelAnimationPipeline projects into my solution and set all the references. I tried the same line of code and couldn't step into the method. It called the default method or model constructor, and after the line the model's Tag property was null. I have to load the model through my custom SkinnedModelProcessor class, but how do I tell the game to use this class? In the tutorial CustomModelClass, the line is changed to:

        model = Content.Load<CustomModel>("tank");

    So I assumed that I have to set the generic type to a custom model class, but the first example works without it. If anyone has some useful advice or some other helpful link, I'll be happy to try it.

    Read the article

  • Procedural world generation oriented on gameplay features

    - by Richard Fabian
    In large procedural landscape games, the land seems dull, but that's probably because the real world is largely dull, with only limited places where the scenery is dramatic or tactical. Looking at world generation from this point of view, a landscape generator for a game (that is, not for the sake of scenery, but for the sake of gameplay) needs to follow not the rules of landscaping, but rules married to the expectations of the gamer. For example, there could be a choke point / route generator that creates hills, ravines, rivers and mountains between cities, rather than the natural way cities arise, scattered on the land based on resources or conditions generated by the mountains and rainfall patterns. Is there any existing work being done like this? Starting with cities or population centres and then adding in terrain afterwards? The reason I'm asking is that I'd previously pondered taking existing maps from fantasy fiction (my own and others'), putting the information into the system as a base point, and then generating a good world to play in from it. This seems covered by existing technology, that is, where the designer puts in all the necessary information such as the city populations, resources, biomes, road networks and rivers, and then lets the PCG fill in the gaps. But now I'm wondering if it may be possible to have a content generator generate the overall design as well: generate the cities and population centres, balancing them so that there is a natural-seeming need for commerce; then generate the positions and connectivity; then from the type of city produce the list of necessary resources that must be nearby; and only then, maybe given some rules on how to make the journey between cities both believable and interesting, generate the final content including the roads, the choke points, the bridges and tunnels, ferries, and the terrain including the biomes and coastline necessary. If this has been done before, I'd like to know, and would like to know what went wrong and what went right.
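    To make the inverted pipeline concrete, here is a rough C# sketch of the order of operations described in the question (cities first, then routes, then terrain derived from them). All names and heuristics are illustrative, not an existing algorithm:

        using System;
        using System.Collections.Generic;

        struct City { public float X, Y; public int Population; }

        class GameplayFirstGenerator
        {
            private readonly Random rng = new Random();

            // Step 1: scatter population centres first, balancing sizes for commerce.
            public List<City> PlaceCities(int count, float mapSize)
            {
                var cities = new List<City>();
                for (int i = 0; i < count; i++)
                    cities.Add(new City
                    {
                        X = (float)rng.NextDouble() * mapSize,
                        Y = (float)rng.NextDouble() * mapSize,
                        Population = rng.Next(1000, 100000)
                    });
                return cities;
            }

            // Step 2: connect each city to its nearest neighbour; these routes are where
            // the interesting terrain (choke points, bridges, ravines) will be placed later.
            public List<KeyValuePair<City, City>> ConnectCities(List<City> cities)
            {
                var routes = new List<KeyValuePair<City, City>>();
                for (int i = 0; i < cities.Count; i++)
                {
                    int nearest = -1;
                    double best = double.MaxValue;
                    for (int j = 0; j < cities.Count; j++)
                    {
                        if (i == j) continue;
                        double dx = cities[i].X - cities[j].X;
                        double dy = cities[i].Y - cities[j].Y;
                        double d = dx * dx + dy * dy;
                        if (d < best) { best = d; nearest = j; }
                    }
                    if (nearest >= 0)
                        routes.Add(new KeyValuePair<City, City>(cities[i], cities[nearest]));
                }
                return routes;
            }

            // Step 3: derive terrain from the routes rather than the other way around,
            // e.g. raise mountains away from routes and carve passes along them.
            public void GenerateTerrainAround(List<KeyValuePair<City, City>> routes)
            {
                // heightmap/biome generation driven by the route graph would go here
            }
        }

    The point of the sketch is only the ordering: terrain generation receives the route graph as input instead of producing it as a by-product.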

    Read the article

  • Non-dynamic CMS [closed]

    - by user20457
    Some of the web sites I visit every day (news, sports, etc.) have content that changes very often (several times per day), yet the URLs always have an .html extension, which makes me think that the content has been generated once and then published as a static page, rather than generated on every request or even cached in memory. For example, say the fictitious site "mysports.com" has a "futbol.html" page, and yesterday Messi got injured, so they have another item to put on that page. I presume they post the new item in their CMS system, and a publishing action is automatically triggered afterwards that recreates "futbol.html" on a CDN with the new item, probably discarding the oldest one. Then the ETag changes and clients will get the new page when they try to access it. (The site is fictitious, but this is what I believe happened yesterday on the sports site I read.) This would fit the CQRS approach, and I presume they get huge performance from it. I know lots of CMSs (WP, Drupal, BlogEngine.net, DNN, etc...), but I have never seen any able to do this, or at least I was not aware of this feature. What is this kind of distributed CMS called? Which are the most well known? Cheers.
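    For what it's worth, the publish-time regeneration being described can be sketched in a few lines of C#; this is only an illustration of the idea, with hypothetical names and a local folder standing in for the CDN origin:

        using System;
        using System.IO;
        using System.Linq;
        using System.Text;

        class StaticPublisher
        {
            // Folder that the web server / CDN origin serves directly (placeholder path).
            private const string OutputRoot = @"C:\www\static";

            // Hypothetical hook the CMS calls whenever an editor publishes a new item.
            public static void OnItemPublished(string section, string[] latestItems)
            {
                var html = new StringBuilder();
                html.Append("<html><body><h1>").Append(section).Append("</h1><ul>");
                foreach (var item in latestItems.Take(20))   // keep only the newest items
                    html.Append("<li>").Append(item).Append("</li>");
                html.Append("</ul></body></html>");

                // Overwrite the static page; nothing is generated per request,
                // and the file's ETag changes only when a new version is published.
                File.WriteAllText(Path.Combine(OutputRoot, section + ".html"), html.ToString());
            }
        }

    Whether a given CMS exposes such a publish hook is exactly the question above; the sketch only shows why the served pages can stay plain .html files.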

    Read the article

  • Can't load model using ContentTypeReader

    - by Xaosthetic
    I'm writing a game where I want to use a ContentTypeReader. While loading my model like this:

        terrain = Content.Load<Model>("Text/terrain");

    I get the following error:

        Error loading "Text\terrain". Cannot find ContentTypeReader
        AdventureGame.World.HeightMapInfoReader,AdventureGame,Version=1.0.0.0,Culture=neutral.

    I've read that this kind of error can be caused by spaces in the assembly name, so I've already removed them all, but the exception still occurs. This is my content class:

        [ContentTypeWriter]
        public class HeightMapInfoWriter : ContentTypeWriter<HeightmapInfo>
        {
            protected override void Write(ContentWriter output, HeightmapInfo value)
            {
                output.Write(value.getTerrainScale);
                output.Write(value.getHeight.GetLength(0));
                output.Write(value.getHeight.GetLength(1));
                foreach (float height in value.getHeight)
                {
                    output.Write(height);
                }
            }

            public override string GetRuntimeType(TargetPlatform targetPlatform)
            {
                return "AdventureGame.World.Heightmap,AdventureGame,Version=1.0.0.0,Culture=neutral";
            }

            public override string GetRuntimeReader(TargetPlatform targetPlatform)
            {
                return "AdventureGame.World.HeightMapInfoReader,AdventureGame,Version=1.0.0.0,Culture=neutral";
            }
        }

    Has anyone met this kind of error before?

    Read the article

  • Software to Stream Media Content from Dedicated Server [closed]

    - by Christian
    We have Windows 2008 R2 servers and we want to stream content (avi, wmv, mpeg, etc.) to Windows / Mac OS X / iOS devices. The visitor must be able to select the file (s)he wants to view within the library. We tried to accomplish this using:

      - VLC: we didn't find a solution to publish the content in a library
      - Windows Media Services (WMS): only supports WMV/WMA and needs Media Player
      - MediaPortal: it is not supported on a W2k8 R2 server

    Any suggestions? /chris

    Read the article

  • Difference between Content Protection and DRM

    - by BlueGene
    In this recent post about criticism of the built-in DRM in Intel's Sandy Bridge processors, Intel denies that there's any DRM in Sandy Bridge processors but goes on to say that Intel created Intel Insider, an extra layer of content protection:

        Think of it as an armoured truck carrying the movie from the Internet to your display, it keeps the data safe from pirates, but still lets you enjoy your legally acquired movie in the best possible quality

    I'm confused now. So far I was thinking DRM is content protection. Can someone shed light on this?

    Read the article

  • Is Flash a secure content delivery technology for password protected digital content?

    - by Merkel Fastia
    We are working on a project that would be a competitor to Yudu for online publishing, and what we are debating is whether to use Flash for content security protection as Yudu does. See, for example, "The Testicle Cookbook", for which a limited (3-frame) preview is available before a password is requested by the Flash application running in the browser. Do you see any problems with this approach, or could you recommend an alternative technology for password-protected digital content?

    Read the article

  • loading content from php file with jQuery

    - by Billa
    The following code fetches all data (by clicking the a.info link) from a PHP file, "info.php", and prints it in the #content div. The problem is that it prints everything from the info.php file. Can I select only some part of the data from info.php to load into #content? The reason for the question is that I want to load different data from the same PHP file for different links.

        $("a.info").click(function(){
            var id = $(this).attr("id");
            $("#box").slideDown("slow");
            $.ajax({
                type: "POST",
                data: "id=" + $(this).attr("id"),
                url: "info.php",
                success: function(html){
                    $("#content").html(html);
                }
            });
        });

    HTML where the content is loading:

        <div id="box">
            <div id="content"></div>
        </div>

    info.php:

        paragraph1.
        paragraph2.

    For example, in the above info.php file, I only want to load paragraph1 into #content. I hope my question is clear. Any help will be appreciated.

    Read the article

  • jQuery - Could use a little help with a content loader

    - by Kenny Bones
    Hi, I'm not very elite when it comes to JavaScript, especially the syntax. So I'm trying to learn. And in this process I'm trying to implement a content loader that basically removes all content from a div and inserts content from another div in a different document. I've tried to do this on this site: www.matkalenderen.no - check the butt ugly link there. See what happens? I've taken the example from this site: http://nettuts.s3.cdn.plus.org/011_jQuerySite/sample/index.html#index But I'm not sure this example actually works the way I think it does. I mean, if the code just wipes out existing content from a div and inserts content from another div, why do the other webpages in this example include doctype and heading etc.? Wouldn't you just need the div and its content? Without all the other stuff "around"? Maybe I don't get how this works though. I thought it worked mostly like include, really. This is my code however:

        $(document).ready(function() {

            var hash = window.location.hash.substr(1);
            var href = $('#dynloader a').each(function(){
                var href = $(this).attr('href');
                if (hash == href.substr(0, href.length - 5)) {
                    var toLoad = hash + '.html #container';
                    $('#container').load(toLoad)
                }
            });

            $('#dynloader a').click(function(){
                var toLoad = $(this).attr('href') + ' #container';
                $('#container').hide('fast', loadcontainer);
                $('#load').remove();
                $('#wrapper').append('<span id="load">LOADING...</span>');
                $('#load').fadeIn('normal');
                window.location.hash = $(this).attr('href').substr(0, $(this).attr('href').length - 5);

                function loadcontainer() {
                    $('#container').load(toLoad, '', showNewcontainer())
                }

                function showNewcontainer() {
                    $('#container').show('normal', hideLoader());
                }

                function hideLoader() {
                    $('#load').fadeOut('normal');
                }

                return false;
            });
        });

    Read the article

  • Jquery Cluetip - clean up between ajax loaded content

    - by ted776
    Hi, I'm using the jQuery cluetip plugin and trying to figure out how to remove any open cluetip dialogs once I load new content via ajax. I am either stuck with the dialog boxes still showing on top of new content, or the ways I've tried to fix this actually prevent all future cluetip dialogs from showing at all. Here's my code, thanks for any help. On DOM ready I instantiate cluetip as below:

        //activate cluetip
        $('a.jTip').cluetip({
            attribute: 'href',
            cluetipClass: 'jtip',
            arrows: true,
            activation: 'click',
            ajaxCache: false,
            dropShadow: true,
            sticky: true,
            mouseOutClose: false,
            closePosition: 'title'
        });

    When I'm loading new content, I have the following code. The problem I have is that $('.cluetip-jtip').empty() prevents dialog boxes from opening on any of the new content loaded in, while the destroy function doesn't remove any open dialog boxes, but just destroys the current object.

        $('.next a').live("click", function(){
            var toLoad = $(this).attr('href');
            var $data = $('#main_body #content');
            $.validationEngine.closePrompt('body'); //close any validation messages

            $data.fadeOut('fast', function(){
                $data.load(toLoad, function(){
                    $data.animate({ opacity: 'show' }, 'fast');

                    //reinitialise datepicker and tooltip
                    $(".date").date_input();
                    //JT_init();
                    $('.hidden').hide();

                    //scroll to top of form
                    $("html,body").animate({ "scrollTop": $('#content').offset().top + "px" });

                    //remove existing instance
                    //$('a.jTip').cluetip('destroy');

                    //remove any opened popups
                    $('.cluetip-jtip').empty();

                    //reinitialise cluetip
                    $('a.jTip').cluetip({
                        attribute: 'href',
                        cluetipClass: 'jtip',
                        arrows: true,
                        activation: 'click',
                        ajaxCache: false,
                        dropShadow: true,
                        sticky: true,
                        mouseOutClose: false,
                        closePosition: 'title'
                    });
                });
            });
            return false;
        });

    Read the article

  • django custom management command does not show up in production

    - by Tom Tom
    I wrote a custom management command for Django. Locally, with my dev settings, everything works fine. Now I have deployed my project onto the production server and the management command does not show up, i.e. it is not available. But I did not get an error message while deploying the project (syncdb). Any ideas where I should begin to look? Is there a special step needed so that all custom management commands are "autodiscovered"?

    Read the article

  • Drupal: Template Files, Modules and Content Types for Advanced Theme

    - by theandym
    Intro

    I am in the process of trying to convert my first HTML/CSS design into a theme for Drupal. I have used ModX for quite a few designs and appreciate the ability to create different page templates and custom variables to be assigned to those templates. However, I seem to be having some issues making the transition. The site I am working on theming in Drupal is for a real estate agent. Each page/section will have a different set of content associated with it and will need to display only that content. For example, there will be a page for current listings, each of which will be formatted by a custom content type. However, when I call the content on the home page (or on other pages) I do not want to see this listing data.

    Layout

    The layout of the site and the regions associated with each page/section are as follows:

      - Home: Spotlight, Featured 1, Featured 2
      - About: Spotlight; Bios - profiles of each agent (each a node with name, contact info, pic, etc.) listed on the page, multiple nodes listed; Sidebar
      - Listings: Spotlight; Listings - profiles of properties (each a node with locations, basic info, pic, etc.) listed on the page, multiple nodes listed; Sidebar
      - Services: Spotlight; Content - general paragraph text area; Sidebar
      - News/Blog: News/Blog Items - list of stories with summaries and links to full articles; Sidebar

    Each page/section will use the same header and footer.

    Issue

    I have done some reading on Drupal, custom content types (and CCK), Views, and Pathauto. However, I have not been able to get a clear picture of how to put it all together to accomplish what I am attempting. What I really would like to know is which modules to use, how best to use them, which elements I need to use where, and what template files I should be using to theme the elements I need to use. Any help or reference to useful resources would be much appreciated.

    Read the article

  • Compressing xls content with apache deflate module

    - by Clinton Bosch
    I am trying to compress an Excel spreadsheet being sent from my application using the Apache deflate module. I have added the following line to my sites-enabled file:

        AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javascript application/excel

    But it seems to make the response data bigger??? Using Firebug, without the module I downloaded the xls spreadsheet from the application and it downloaded 100Kb of data; the file size once on the filesystem was also 100Kb, as expected. Once I enabled the deflate module as described above and repeated the process, the amount of data downloaded was 295Kb?? but the file was still only 100Kb once saved on the filesystem. As an experiment I manually gzipped the saved xls file and it compressed to 20Kb. What am I doing wrong here?

    With deflate (Firebug output):

        200 OK  xxxxxxx.co.za  293 KB  4.43s
        Response Headers
        Date: Tue, 03 Nov 2009 13:01:43 GMT
        Server: Apache/2.2.4 (Ubuntu) mod_jk/1.2.23 PHP/5.2.3-1ubuntu6.4 mod_ssl/2.2.4 OpenSSL/0.9.8e
        Content-Disposition: attachment; filename="Employee List.xls"
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Type: application/excel

    Without deflate (Firebug output):

        200 OK  xxxxxxxx.co.za  100 KB  3.46s
        Response Headers
        Date: Tue, 03 Nov 2009 13:06:00 GMT
        Server: Apache/2.2.4 (Ubuntu) mod_jk/1.2.23 PHP/5.2.3-1ubuntu6.4 mod_ssl/2.2.4 OpenSSL/0.9.8e
        Content-Disposition: attachment; filename="Employee List.xls"
        Content-Length: 102912
        Content-Type: application/excel

    Read the article
