Search Results

Search found 28992 results on 1160 pages for 'content pages'.


  • A scheme for expiring downloaded content?

    - by Chad Johnson
    I am going to offer a web API service that allows users to download and "rent" content for a monthly subscription fee. The API will either be open to everyone or possibly just to select parties (not sure yet). Each developer must agree to a license and receives a personal developer key. Each software application will have its own key as well. End-users will then download the software, which will interact with my service's API, and each user will have a key for each application as well (probably using OAuth). Content will be cached on first download and accessible offline only via the third-party application that cached it. If a user cancels their subscription, I plan on doing the following: deactivate the user's OAuth key for all applications, and stop the user's account from downloading new content via the API (and subsequently any software that uses the API). Now, the big question is: how do I make content expire if they cancel their subscription? If they cancel, they should not have access to the content anymore. Here are ideas I've thought of (some of these are half-solutions, not yet fully fleshed out):

    1. Require that applications encrypt downloaded content using the user's OAuth key, making it available only to that application. This will prevent most users from going to the cache directory and just copying and keeping files.
    2. Update the user's key once a month, forcing content to re-cache on a monthly basis. Users could then access content for up to a month after they cancel their subscription.
    3. Require applications to "phone home" [to the service] periodically and check whether the user's subscription has terminated. If so, require in the API developer license that applications expire the cache. If it is found that applications do not comply, their keys (and possibly all of the developer's keys) are permanently deactivated as a consequence.

    One major worry is that some applications may blatantly ignore the constraints of the license. Is it generally acceptable to rely on applications abiding by the licensing constraints, or is that a bad idea? Any other ideas? Maybe a way to make content auto-expire after x days? Something else? I'm open to out-of-the-box ideas.
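    A rough sketch of how the "phone home" check (idea 3) might look inside a client application, written in C#. Everything service-side here is an assumption for illustration: the /subscription/status endpoint, the key-in-querystring convention, and the single cache directory are hypothetical, not a real API.

        // Hypothetical sketch of the "phone home" check (idea 3), run
        // periodically by the client application. The endpoint and cache
        // layout are illustrative assumptions, not a real API.
        using System.IO;
        using System.Net.Http;
        using System.Threading.Tasks;

        public static class SubscriptionGuard
        {
            private static readonly HttpClient Http = new HttpClient();

            public static async Task EnforceAsync(string apiBase, string oauthKey, string cacheDir)
            {
                // Ask the service whether this key still belongs to an active subscription.
                var response = await Http.GetAsync(apiBase + "/subscription/status?key=" + oauthKey);

                if (response.StatusCode == System.Net.HttpStatusCode.Unauthorized)
                {
                    // The key was deactivated on cancellation: expire the local
                    // cache, as the developer license would require.
                    if (Directory.Exists(cacheDir))
                        Directory.Delete(cacheDir, recursive: true);
                }
                // If the service is unreachable, a stricter variant could expire
                // the cache after N failed checks instead of immediately.
            }
        }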

  • access dropdown control from the master page to the content page using asp.net

    - by Isha Jain
    I have a master page and several content pages. In the master page I have a textbox and a dropdown, and the values in the dropdown vary with the content page. For example, for one content page the dropdown may contain BranchName, City, and Address, while for another content page under the same master page the dropdown may have values like ContactNumber, EmailID, etc. So please help me with how I can bind that dropdown from my content page. Thanks.
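    One common pattern (a sketch, assuming a master page class named SiteMaster whose markup contains a DropDownList called ddlOptions) is to expose the control through a property on the master page and bind it from each content page:

        using System;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        // Master page code-behind: expose the dropdown so content pages
        // can fill it with their own values. (ddlOptions is assumed to be
        // declared in the master page markup.)
        public partial class SiteMaster : MasterPage
        {
            public DropDownList Options
            {
                get { return ddlOptions; }
            }
        }

        // A content page binds whatever values it needs, e.g. in Page_Load.
        public partial class BranchPage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                if (!IsPostBack)
                {
                    var options = ((SiteMaster)Master).Options;
                    options.Items.Clear();
                    options.Items.Add("BranchName");
                    options.Items.Add("City");
                    options.Items.Add("Address");
                }
            }
        }

    Adding <%@ MasterType VirtualPath="~/Site.master" %> to a content page makes its Master property strongly typed, so the cast goes away.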

  • Silverlight layout hack: Centered content with fixed maxwidth

    - by brainbox
    Today we need to create centered content with a fixed MaxWidth. It is very easy to implement for a fixed Width, but it is not obvious how to achieve the same with MaxWidth. The solution to the problem is a Grid with 3 columns; the content goes into the middle column:

        <Grid>
            <Grid.ColumnDefinitions>
                <ColumnDefinition Width="0.01*" />
                <ColumnDefinition Width="0.98*" MaxWidth="1280" />
                <ColumnDefinition Width="0.01*" />
            </Grid.ColumnDefinitions>
            <!-- place the page content in the middle column -->
            <ContentControl Grid.Column="1" />
        </Grid>

    Huh... like HTML coding, XAML coding is still full of dirty tricks =)

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table.  Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult.  What these users really need is a point and click approach that minimizes the learning curve for the data integration tool.  This is what CSVexpress (www.CSVexpress.com) is all about!  It is based on expressor Studio, a data integration tool I’ve been reviewing over the last several months. With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators.   There’s no need to learn the intricacies of the data integration tool or to write code.  Let’s look at an example. Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary. EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY 102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000" 101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000" 103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000" 304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000" 333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000" 100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000" 334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000" 400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000" Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping.  In moving this data to a database table I want to express the dates using a format that includes the century since it’s obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character.  Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio.  Directives for these modifications are included in the description of the incoming data. Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored.  For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information.  With expressor Studio, I use wizards to create these artifacts. First click New Connection > File Connection in the Home tab of expressor Studio’s ribbon bar, which starts the File Connection wizard.  In the first window, I enter the path to the directory that contains the input file.  Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact. 
    To create the Database Connection artifact, I must know the location, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table.  To use expressor Studio's features to the fullest, this account should also have the authority to create a table.

    I click New Connection > Database Connection in the Home tab of expressor Studio's ribbon bar, which starts the Database Connection wizard.  expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the "Supplied database drivers" drop down control.  If my desired RDBMS isn't listed, I can optionally use an existing ODBC DSN by selecting the "Existing DSN" radio button. In the following window, I enter the connection details.  With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials.  After clicking Next, I enter a meaningful name for this connection artifact, and clicking Finish closes the wizard and saves the artifact.

    Now I create a schema artifact, which describes the structure of the file data.  When expressor reads a file, all data fields are typed as strings.  In some use cases this may be exactly what is needed and there is no need to edit the schema artifact.  But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and job IDs to integers, and convert the salary to a decimal value.

    Again a wizard is used to create the schema artifact.  I click New Schema > Delimited Schema in the Home tab of expressor Studio's ribbon bar, which starts the Delimited Schema wizard.  In the first window, I click Get Data from File, which then displays a listing of the file connections in the project.  When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file's content and confirm that the appropriate delimiter characters are selected in the "Field Delimiter" and "Record Delimiter" drop down controls; then I click Next. Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking "Set All Names from Selected Row."  Alternatively, I could enter a different identifier into the Field Details > Name text box.  I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact.

    Now I open the schema artifact in the schema editor.  When I first view the schema's content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file.  To change an attribute's name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor's ribbon bar.  This opens the Edit Attribute window; I can change the attribute name and select the desired type from the "Data type" drop down control.  In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).
Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type, for the StartingDate and TerminationDate attributes, I select Datetime as the data type, and for the FinalSalary attribute, I select the Decimal type. But I can do much more in the schema editor.  For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999.  I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date). As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file.  Note the use of Y01 as the syntax for the year.  This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century.  As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes. And now to the Salary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box.  This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed. And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” character text box. These entries allow the string to be properly converted into a decimal value. By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself. Assembling the data integration application is simple.  Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator. Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio.  For each property, I can select an appropriate entry from the corresponding drop down control.  Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file.  I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped. I then select the Write Table operator and in its Properties panel specify the database connection, normal for the “Mode,” and the “Truncate” and “Create Missing Table” options.  
    If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator.

    The last task needed to complete the application is to create the schema artifact used by the Write Table operator.  This is extremely easy, as another wizard is capable of using the schema artifact assigned to the Read File operator to create a schema artifact for the Write Table operator.  In the Write Table Properties panel, I click the drop down control to the right of the "Schema" property and select "New Table Schema from Upstream Output…" from the drop down menu.

    The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist.  The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection.  The fourth screen gives me the opportunity to fine tune the table's description.  In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the FinalSalary column.  I also provide the name for the table.

    This completes development of the application.  The entire application was created through the use of wizards, and the required data transformations were specified through simple constraints and specifications rather than through coding.  To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials.  expressor Studio is as close to a point and click data integration tool as one could want, and I urge you to try this product if you have a need to move data between files or from files to database tables.

    Check out CSVexpress in more detail.  It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
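    As an aside, the Y01 century rule described earlier amounts to a one-line pivot. A plain C# restatement of the semantics (an illustration only, not expressor's actual code):

        // Restates the Y01 pivot rule: two-digit years after the pivot (01)
        // fall in the 20th century, years at or before it in the 21st.
        public static int ExpandYear(int twoDigitYear)
        {
            const int pivot = 1; // the "01" in the Y01 format string
            return twoDigitYear > pivot
                ? 1900 + twoDigitYear   // e.g. 93 -> 1993
                : 2000 + twoDigitYear;  // e.g. 00 -> 2000, 01 -> 2001
        }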

  • My first Windows Phone 7 App: Getting SharePoint Content

    - by Jan Tielens
    Earlier this week at the Mix10 conference, Microsoft announced the developer story of the Windows Phone 7 Series. As expected, it's all about Silverlight! For all the details I highly recommend watching the recorded keynotes (day 1, day 2). Tonight I couldn't resist trying to build my very first Windows Phone 7 application; the traditional Hello World thingy. Because the developer tools (Visual Studio 2010 and the free Visual Studio 2010 Express) have pretty nice templates, that wasn't much of a challenge. So I tried to build something real: an application that can display SharePoint 2010 content, for example items from an announcements list. I had to work my way around some limitations because both SharePoint 2010 and the developer tools are still in beta and CTP, but finally I got it working! Because of the many workarounds, the code is not yet ready for publication, but I've created a small screencast so you can see the result. To be continued! :-) Windows Phone 7 POC: Getting SharePoint Data from Jan Tielens on Vimeo.

  • How do you author HDR content?

    - by Nathan Reed
    How do you make it easy for your artists to author content for an HDR renderer? What kinds of tools should you provide, and what workflows need to change, in going from LDR to HDR? Note that I'm not asking about the technical aspects of implementing an HDR renderer, but about best practices for creating materials and lighting in HDR. I've googled around a bit, but there doesn't seem to be much about this topic on the web. Can anyone point me to some good resources on this, or share their own experiences? Some specific points: Lighting - how can lighting artists pick HDR light colors? Do they have a standard LDR color picker and then a multiplier? Is the multiplier in gamma or linear space? Maybe instead of a multiplier it's a log-luminance? Or a physical brightness level, like the number of lumens? How will they know what multiplier/luminance/brightness is "correct" for a given light? Materials - how can texture artists make emissive color maps, such as neon signs, TV screens, skyboxes, etc? Can you paint one as a regular LDR (8-bit-per-channel) image and apply a multiplier (or log-luminance, etc.)? Are there cases where it's necessary to actually paint HDR images? If so, how do you go about this in Photoshop (or other software)?
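    For what it's worth, the "LDR color picker plus multiplier" option mentioned above might look like the following sketch (the 2.2 gamma decode and the power-of-two EV convention are assumptions; engines differ):

        using System;

        // Sketch: decode an 8-bit sRGB picker color to linear space, then
        // apply an exposure-style multiplier (2^EV) in linear space.
        public static class HdrLight
        {
            public static float[] ToLinearHdr(byte r, byte g, byte b, float ev)
            {
                float scale = (float)Math.Pow(2.0, ev);
                Func<byte, float> toLinear = c => (float)Math.Pow(c / 255.0, 2.2);
                return new[] { toLinear(r) * scale, toLinear(g) * scale, toLinear(b) * scale };
            }
        }

    Keeping the picker in LDR and the multiplier in EV keeps the artist-facing numbers small and comparable, which is part of what the question is getting at.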

  • Designing a Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've needed pseudo-ETL applications that can extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place them in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this:

        static void Main()
        {
            // Load parameters from app.config.
            // Get documents from queue.
            var files = someInterface.GetFiles(someFilterOrRegexPattern);
            foreach (var file in files)
            {
                // Extract metadata from the file.
                // Validate some attributes of the file; add any validation errors
                // to an in-memory structure (e.g. List<ValidationErrors>).
                if (isValid)
                {
                    var fileData = File.ReadAllBytes(file);
                    // Upload using some wrapper for an ORM or DAL.
                    someInterface.Upload(fileData, meta.Param1, meta.Param2, ...);
                }
                else
                {
                    // Bounce the file.
                }
            }
            // Report any validation errors (via message bus or SMTP or some such).
        }

    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters (see the sketch below). This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!
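    For reference, that "Worker" shape might look like the sketch below; the interface names are placeholders, not a real API. Constructor injection is arguably most of what a heavier ETL framework would buy here: Main stays trivial and the pipeline stays testable against fakes.

        using System.Collections.Generic;

        // Placeholder interfaces standing in for the queue and the
        // DbAmp-backed uploader described above.
        public interface IFileQueue
        {
            IEnumerable<string> GetFiles(string pattern);
        }

        public interface IContentUploader
        {
            void Upload(byte[] data, IDictionary<string, string> metadata);
        }

        public class EtlWorker
        {
            private readonly IFileQueue queue;
            private readonly IContentUploader uploader;

            public EtlWorker(IFileQueue queue, IContentUploader uploader)
            {
                this.queue = queue;
                this.uploader = uploader;
            }

            public void Run(string pattern)
            {
                foreach (var file in queue.GetFiles(pattern))
                {
                    // Extract, validate, and upload as in the outline above.
                }
            }
        }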

  • Similar domains using my business' content, and stealing SEO results

    - by Murciano
    I've been hired to create a website for a restaurant in my city, let's call it "Flying Dragon" Chinese restaurant. The restaurant has never had a website, though the business itself is about ten years old. However, if you Google the restaurant's name, the first site that comes up seems to be affiliated with the restaurant itself, even though it is not. This site - let's say, flyingdragonchinese.com - is also the one that Google has apparently selected, in its results, to be the official website of the restaurant - in essence, the first Google result is flyingdragonchinese.com, and directly beneath it, within the same entry, are the Google reviews and contact information. Upon visiting flyingdragonchinese.com (again, not the actual name), I see that the website has taken the menu content from the restaurant, in the same manner that Yelp does, but it also seems (to the untrained eye) to be the restaurant's official site. Basically, someone has created a fake website for the business (I am not sure why) using its actual menu and contact information, and is hogging the search results. The concept is similar to a "scraping site" except that the information seems to have been stolen manually. The main problem is that visitors to this site will have an inaccurate impression of the restaurant. I feel like the obvious solution is to register a new domain for my site, and simply beat out this competitor (or whatever it is) with smarter SEO and business verification with Google. However, the Conan-the-Barbarian-web-designer part of me wants to somehow bash this other site (deservedly?) into oblivion. But I don't know what I can really do, besides maybe issuing a cease-and-desist letter, or trying to contact the web host for the site, although there is no contact information available on this "fake" site for the site owner. Has anyone ever experienced something like this? Is there any solution?

  • Website directory structure regarding subdomains, www, and "global" content

    - by Pawnguy7
    I am trying to make a homemade HTTP server. It occurs to me, though, that I never fully understood what you might call "relativity" among web pages. I have come across the fact that www. is a subdomain, and I understand its original purpose. It sounds like, in general, you would redirect (is that 301 or 302?) it to a non-subdomain, sort of; as in, redirecting www.example.com to example.com. I am not entirely sure how to make this work when retrieving files for an HTTP server, though. I would assume that example.com would be the root, and www manifests as a folder within it, but I am unsure. There is also the question of multi-level subdomains, e.g. subdomain2.subdomain1.example.com. It seems to me they are structured "backwards", where you go from the root leftward in folder structure. In this situation, subdomain2 is a directory within subdomain1, which is a directory in the root. Finally, it occurs to me I might want a sort of global location. For example, maybe all subdomains still use an image as a logo. It makes more sense to me that there is one image, rather than each subdomain keeping a copy. In the same way, albeit more doubtfully, you might have global CSS (though that is a bit contrary to the idea of a subdomain in the first place), or a JavaScript file that is commonly used (more efficient than each having its own copy, and better for organization purposes). Finally, maybe you have a global 404 page. I think this might be the case where you have user-created subdomains (say bloggername.example.com), where example.com still has a default 404 when either a subdomain does not exist or a page does not exist under a valid blogger. I am confused about what the directory structure for this should be. To summarize: should I have global files that live outside any subdomain directory (and how); how should www. be handled (or how should a non-www or other subdomain be handled); and what is the pattern for root/subdomain, as well as subdomains within subdomains (order-wise)? Sorry this is multiple questions, but I feel at the root they are all related to the directory structure.
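    One way to picture the host-to-folder mapping the question is circling (a sketch; the one-directory-per-label, right-to-left convention is an illustrative choice, not a standard):

        using System.IO;

        // Resolve a Host header to a docroot folder, treating "www." as an
        // alias for the bare domain and nesting one folder per subdomain
        // label: subdomain2.subdomain1.example.com -> root/subdomain1/subdomain2.
        public static class HostMapper
        {
            public static string MapToDirectory(string rootDir, string host, string baseDomain)
            {
                if (host == baseDomain || host == "www." + baseDomain)
                    return rootDir; // serve www as the bare domain (or 301 to it)

                // Strip "." + baseDomain, then walk the labels right-to-left into folders.
                var labels = host.Substring(0, host.Length - baseDomain.Length - 1).Split('.');
                var path = rootDir;
                for (int i = labels.Length - 1; i >= 0; i--)
                    path = Path.Combine(path, labels[i]);
                return path;
            }
        }

    Global assets (the shared logo, CSS, a default 404 page) can then live in a folder outside every mapped root, so each subdomain references one copy rather than keeping its own.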

  • New Online Learning Library (OLL) content

    - by Irina
    Looking to brush up on OAM or OVD skills? Want some help with OIM? Well, have you checked our Online Learning Library (OLL) recently? OLL is a great way to pick up new skills in short blocks of time, and there is an enormous selection on a diverse set of products. Every month these trainings get hundreds or thousands of hits. It would be worth your while to spend some time just poking around the nooks and crannies for items that interest you.

    A smattering of new OBEs and other content has recently become available, and if you haven't already, you might want to check them out:

    - Identity Management: Business Scenarios
    - Business and IT – Collaborative Access Review Sign Off and Closed Loop Identity Certification
    - Oracle Identity Governance: End to End Integration From Oracle Identity Manager to a Target Webservice
    - Oracle Identity Manager: Configuring SOA Composite
    - Oracle Identity Manager: Web Services Connector - Overview
    - How to do a basic Oracle Virtual Directory (OVD) Setup?
    - How to setup a simple Oracle Virtual Directory (OVD) Join?
    - Installing Oracle Access Manager: Identity Server and WebPass

    Also new is an Oracle University 5-day class you might want to investigate: Oracle Access Manager R2: Administration Essentials. An OAM Advanced Administration class is in the works and should be available late summer or fall, so keep your calendar clear! Be sure to let us know in the Comments if there is a training you would find useful. Happy Trails :)

  • How to add event receiver to SharePoint2010 content type programmatically

    - by ybbest
    Today, I'd like to show how to add an event receiver to a SharePoint 2010 content type programmatically.

    1. Create an empty SharePoint project and add a class called ItemContentTypeEventReceiver that inherits from SPItemEventReceiver, and implement your logic as below:

        public class ItemContentTypeEventReceiver : SPItemEventReceiver
        {
            private bool eventFiringEnabledStatus;

            public override void ItemAdded(SPItemEventProperties properties)
            {
                base.ItemAdded(properties);
                UpdateTitle(properties);
            }

            // Append a marker to the title, with event firing disabled so the
            // update does not retrigger this receiver.
            private void UpdateTitle(SPItemEventProperties properties)
            {
                SPListItem addedItem = properties.ListItem;
                string enteredTitle = addedItem["Title"] as string;
                addedItem["Title"] = enteredTitle + " Updated";
                DisableItemEventsScope();
                addedItem.Update();
                EnableItemEventsScope();
            }

            public override void ItemUpdated(SPItemEventProperties properties)
            {
                base.ItemUpdated(properties);
                UpdateTitle(properties);
            }

            private void DisableItemEventsScope()
            {
                eventFiringEnabledStatus = EventFiringEnabled;
                EventFiringEnabled = false;
            }

            private void EnableItemEventsScope()
            {
                eventFiringEnabledStatus = EventFiringEnabled;
                EventFiringEnabled = true;
            }
        }

    2. Create a Site- or Web-scoped feature (depending on your requirements) and implement your feature event receiver as below:

        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            SPWeb web = GetFeatureWeb(properties);
            // http://karinebosch.wordpress.com/walkthroughs/event-receivers-theory/
            string assemblyName = System.Reflection.Assembly.GetExecutingAssembly().FullName;
            const string className = "YBBEST.AddEventReceiverToContentType.ItemContentTypeEventReceiver";
            SPContentType contentType = web.ContentTypes["Item"];
            AddEventReceiverToContentType(className, contentType, assemblyName,
                SPEventReceiverType.ItemAdded, SPEventReceiverSynchronization.Asynchronous);
            AddEventReceiverToContentType(className, contentType, assemblyName,
                SPEventReceiverType.ItemUpdated, SPEventReceiverSynchronization.Asynchronous);
            contentType.Update();
        }

        protected static void AddEventReceiverToContentType(string className, SPContentType contentType,
            string assemblyName, SPEventReceiverType eventReceiverType,
            SPEventReceiverSynchronization eventReceiverSynchronization)
        {
            if (className == null) throw new ArgumentNullException("className");
            if (contentType == null) throw new ArgumentNullException("contentType");
            if (assemblyName == null) throw new ArgumentNullException("assemblyName");

            SPEventReceiverDefinition eventReceiver = contentType.EventReceivers.Add();
            eventReceiver.Synchronization = eventReceiverSynchronization;
            eventReceiver.Type = eventReceiverType;
            eventReceiver.Assembly = assemblyName;
            eventReceiver.Class = className;
            eventReceiver.Update();
        }

    3. Deploy your solution, and you now have an event receiver attached to the Item content type. You can download the complete source code here. You can also check how to add an event receiver to a list using the SharePoint event receiver item in Visual Studio 2010 in my previous blog.

  • Maintaining Revision Levels

    - by kyle.hatlestad
    A question that came up on an earlier blog post was how to limit the number of revisions on a piece of content. UCM does not inherently enforce any sort of limit on how many revisions you can have. It's unlimited. In some cases, there may be content that goes through lots of changes, but there just simply isn't a need to keep all of its revisions around. Deleting those revisions through the content information screen can be very cumbersome. And going through the Repository Manager applet can take time as well to filter and find the revisions to get rid of. But there is an easier way through the Archiver.

    The Export Query criteria in Archiver includes a very handy field called 'Revision Rank'. With revision labels, they typically go up as new revisions come in (e.g. 1, 2, 3, 4, etc...). But you can't really use this field to tell it to keep the top 5 revisions. Those top 5 revision numbers are always going up. But revision rank goes the opposite direction. The very latest revision is always 0. The previous revision to that is 1. The previous revision to that is 2. And so on and so forth. With revision rank, you can set your query to look for any Revision Rank greater than or equal to 5. Now as older revisions move down the line, their revision rank gets higher and higher until they reach that threshold. Then when you run that archive export, you can choose to delete and remove those revisions.

    Running that export in Archiver is normally a manual process. But with Idc Command, you can script the process and have it run automatically from the server. Idc Command is a utility that allows you to run any of the content server services via the command line. You basically feed it a text file with the services and parameters defined along with the user to run it as. The Idc Command executable is located within the \bin\ directory:

        $ ./IdcCommand -f DeleteOlderRevisions.txt -u sysadmin -l delete_revisions.log

    In this example, our IdcCommand file to run the export and do the deletions would look like:

        IdcService=EXPORT_ARCHIVE
        aArchiveName=DeleteOlderRevisions
        aDoDelete=1
        IDC_Name=idc
        dataSource=RevisionIDs
        <<EOD>>

    You can then use automated scheduling routines in the OS to run the command and command file at the frequency needed. Remember that you are deleting the revisions from within UCM, but they are still getting placed within the archive. So you will need to delete those batches to have them fully removed (or re-import if you need to recover them). For more information about Idc Command, you can find that in the Idc Command Reference Guide.

  • Loading SpriteFont through a different class than Game.cs

    - by MintyAnt
    I am trying to load up a single SpriteFont to print some debug information. In our current game, we load up both textures and music through a ResourceManager. They are both loaded with a FileStream, and thus do not require Content.Load:

        SoundEffect soundEffect = SoundEffect.FromStream(fs);

    Since this ResourceManager does not inherit from Game and is not like Game.cs, I cannot use the usual method:

        SpriteFont spriteFont = Content.Load<SpriteFont>(resource.Key.Item2);

    Anyone have any idea how I can either:

    - Load the SpriteFont a different way
    - Create my own ContentManager
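    If the ResourceManager can be handed the game's service container, it can own a second ContentManager outright. A minimal sketch, assuming Game.Services and the content root are passed in at startup:

        using System;
        using Microsoft.Xna.Framework.Content;
        using Microsoft.Xna.Framework.Graphics;

        // A standalone ContentManager owned by the resource code. It only
        // needs an IServiceProvider that can supply IGraphicsDeviceService,
        // so it does not have to live inside Game.
        public class FontStore
        {
            private readonly ContentManager content;

            public FontStore(IServiceProvider services, string rootDirectory)
            {
                content = new ContentManager(services, rootDirectory);
            }

            public SpriteFont LoadFont(string assetName)
            {
                return content.Load<SpriteFont>(assetName);
            }
        }

    From Game, construction would look like: var fonts = new FontStore(Services, Content.RootDirectory);. Note that a SpriteFont still has to come through the content pipeline as a built .xnb asset, so unlike the SoundEffect case there is no FromStream-style escape hatch here.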

  • Oracle Tutor: Create Accessible Content for the Disabled Community

    - by emily.chorba(at)oracle.com
    For many reasons--legal, business, and ethical--Oracle recognizes the need for its applications, and our customers' and partners' products built with our tools, to be usable by the disabled community. The following features of Tutor Author and Publisher software facilitate the creation of accessible HTML content for the disabled community.

    Tables
    The following formatting guidelines will ensure that Tutor documents containing tables will be accessible once they are converted to HTML.
    • Determine whether a table is a "data table" or whether you are using a table simply for formatting. If it's a data table, you must use a heading for each column, and you should format this heading row as "table heading" style and select Table > Heading Rows Repeat.
    • For non-data tables, it is not necessary to include a heading row.

    Graphics
    To create accessible graphics, add a caption to the graphic. In Microsoft Office 2000 and greater, right-click on the graphic and select Format Picture > Web (tab) > Alternative Text, or select the graphic then Format > Picture > Web (tab) > Alternative Text. Enter the appropriate information in the dialog box. When a document containing a graphic with alternative text is converted to HTML by Tutor, the HTML document will contain the appropriate accessibility information.

    Javascript elements
    The tabbed format and other javascript elements in the HTML version of the Tutor documents may not be accessible to all users. A link to an accessible/printable version of the document is available in the upper right corner of all Tutor documents.

    Repetitive data
    If repetitive data such as the distribution section and the ownership section are causing accessibility issues with your Tutor documents, you can insert a bookmark in the appropriate location of the document, and, when the document is converted to HTML, the bookmark will be converted to an A NAME reference (also known as an internal link). With this reference, you can create a link in Header.txt that can be prepended to each Tutor document that allows the user to bypass repetitive sections.

    Tutor and Oracle Applications
    Regarding accessibility, please check Oracle's website on accessibility http://www.oracle.com/accessibility/ to find out what version of E-Business Suite is certified to work with screen readers. Oracle Tutor 11.5.6A and greater works with screen readers such as JAWS. There is no certification between Oracle Tutor and Oracle Applications because there are no related dependencies; it doesn't matter which version of Oracle Applications you are running. Therefore, it is possible to use Oracle Tutor with earlier versions of Oracle Applications.

    Oracle Business Process Converter and Oracle Applications
    Oracle Business Process Converter (OBPC) converts Visio, XPDL, and Tutor models to Oracle Business Process Architect and Oracle Business Process Management. The OBPC is one of a collection of plugins to Oracle JDeveloper. Please see the VPAT, as the same considerations apply.

    Learn More
    For more information about Tutor, visit Oracle.Com or the Tutor Blog. Post your questions at the Tutor Forum.

    Emily Chorba
    Principal Product Manager, Oracle Tutor & BPM

  • Single paging Meta Tag Duplication [duplicate]

    - by Dominic Reigns
    How can I make Google understand a single page's pagination issue? A single listing runs to more than 100 pages, with navigation like "1-10 of 199 | Go to Page 1 2 3 4 5 6 7 8 9 10 ...". Do I need to put fresh meta tags on each page? Help me out with this issue. (This duplicates the question "Should paginated content have unique meta tags?", which already has an answer.)

  • Content Based Routing with BRE and ESB

    - by Christopher House
    I've been working with BizTalk 2009 and the ESB toolkit for the past couple of days. This is actually my first exposure to ESB, and so far I'm pleased with how easy it is to work with. Initially we had planned to use UDDI for storing endpoint information. However, after discussing this with my client, we opted to look at BRE instead of UDDI, since we're already storing transforms in BRE. Fortunately, making the change to BRE from UDDI was quite simple. This solution of course has the added advantage of not needing to go through the convoluted process of registering our endpoints in UDDI.

    The first thing to remember if you want to do content based routing with BRE and ESB is that the pipelines included in the ESB toolkit don't include disassembler components. This means that you'll need to first create a custom receive pipeline with the necessary disassembler for your message type as well as the ESB components, itinerary selector and dispatcher.

    Next you need to create a BRE policy. The ESB.ContextInfo vocabulary contains vocabulary links for the various items in the ESB context dictionary. In this vocabulary, you'll find an item called Context Message Type; use this as the left-hand side of your condition. Set the right-hand side to your message type, something like http://your.message.namespace/#yourrootelement. Now find the ESB.EndPointInfo vocabulary. This contains links to all the properties related to endpoint information. Use the various set operators in your rule's action to configure your endpoint. In the example above, I'm using the WCF-SQL adapter.

    Now that the hard work is out of the way, you just need to configure the resolver in your itinerary. Nothing complicated here: just select BRE as your resolver implementation and select your policy from the drop-down list. Note that when you select a policy, the Version field will be automatically filled in with the version of your policy. If you leave this as-is, the resolver will always use that policy version. Alternatively, you can clear the version number and the resolver will use the highest deployed version.

  • Managing a Web Portal

    - by A competent translator
    After skimming almost all questions tagged "books" here at Pro Webmasters to find a book on managing web portals, I found only references to general webmastering. Are there books on managing and curating the content of public web portals, i.e. a governmental portal for a given department or agency? I need a reference on the day-to-day management and best practices of a portal and its content. References on managing multilingual portals are also welcome.

  • ??????????????????????????R-online"The Shop"?????

    - by mamoru.kobayashi
    Renown opened its e-commerce (EC) site R-online "The Shop" on April 20, 2010. The site was built as part of RMAP (RENOWN Make Again Plan), the turnaround plan the company launched in April 2009, and uses Oracle Universal Content Management as its Web content management system (CMS). (Translated and reconstructed from a partially garbled Japanese original.)

  • CNAME subdomain of subdomain with content on first subdomain

    - by BandonRandon
    Okay, so maybe that title is a bit confusing, but here is what I'm trying to do: I have a subdomain with content at my.example.com, and I want to set up something like sub.my.example.com. This does not seem to work if there is content on my.example.com. However, if there is no content on the first subdomain, it seems to be okay to CNAME the record to example2.org. This is my first real experience with CNAME, so I'm wondering about common errors to check for. my.example.com is an A record and sub.my.example.com is a CNAME. EDIT: I am managing the DNS through the Web Host Manager (WHM) DNS records. The content is at http://vbx.knowconceptdw.net, and I'm trying to set a voicemail CNAME at http://voicemail.vbx.knowconceptdw.net pointing to http://api.twillio.com.

  • Apache2 - mod_expire and mod_rewrite not working in httpd.conf - serving content from tomcat

    - by Ankit Agrawal
    Hi, I am using an Apache2 server running on Debian which forwards all HTTP requests to Tomcat installed on the same machine. I have two files under my /etc/apache2/ folder, apache2.conf and httpd.conf. I modified the httpd.conf file to look like the following:

        # forward all http requests on port 80 to tomcat
        ProxyPass / ajp://127.0.0.1:8009/
        ProxyPassReverse / ajp://127.0.0.1:8009/

        # gzip text content
        AddOutputFilterByType DEFLATE text/plain
        AddOutputFilterByType DEFLATE text/html
        AddOutputFilterByType DEFLATE text/xml
        AddOutputFilterByType DEFLATE text/css
        AddOutputFilterByType DEFLATE text/javascript
        AddOutputFilterByType DEFLATE application/xml
        AddOutputFilterByType DEFLATE application/xhtml+xml
        AddOutputFilterByType DEFLATE application/rss+xml
        AddOutputFilterByType DEFLATE application/javascript
        AddOutputFilterByType DEFLATE application/x-javascript
        DeflateCompressionLevel 9
        BrowserMatch ^Mozilla/4 gzip-only-text/html
        BrowserMatch ^Mozilla/4\.0[678] no-gzip
        BrowserMatch \bMSIE !no-gzip !gzip-only-text/html

        # Turn on Expires and mark all static content to expire in a week;
        # unset Last-Modified and ETag
        ExpiresActive On
        ExpiresDefault A0
        <FilesMatch "\.(jpg|jpeg|png|gif|js|css|ico)$">
            ExpiresDefault A604800
            Header unset Last-Modified
            Header unset ETag
            FileETag None
            Header append Cache-Control "max-age=604800, public"
        </FilesMatch>

        RewriteEngine On
        # rewrite all www.example.com/content/XXX-01.js and YYY-01.css files to XXX.js and YYY.css
        RewriteRule ^content/(js|css)/([a-z]+)-([0-9]+)\.(js|css)$ /content/$1/$2.$4
        # remove all query parameters from the URL after we are done with them
        RewriteCond %{THE_REQUEST} ^GET\ /.*\;.*\ HTTP/
        RewriteCond %{QUERY_STRING} !^$
        RewriteRule .* http://example.com%{REQUEST_URI}? [R=301,L]
        # rewrite all www.example.com to example.com
        RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]

    I want to achieve the following:

    1. forward all traffic to Tomcat
    2. gzip all the text content
    3. put a one-week expiry header on all static files and unset the ETag/Last-Modified headers
    4. rewrite all js and css files to a certain format
    5. remove all query parameters from URLs
    6. forward all www.example.com requests to example.com

    The problem is that only 1 and 2 are working. I tried a lot of combinations, but the expires and rewrite rules (3-6) do not work at all. I also tried moving these rules to the apache2.conf and .htaccess files, but that didn't work either. No errors are given; these rules are simply ignored. The expires and rewrite modules are ENABLED. Please let me know what I should do to fix this:

    1. Do I need to add something else in the httpd.conf file (like Options +FollowSymLinks) or something else?
    2. Do I need to add something in the apache2.conf file?
    3. Do I need to move these rules to a .htaccess file? If yes, what should I write in that file, and where should I keep it: in the /etc/apache2/ folder or the /var/www/ folder?
    4. Any other info to make this work?

    Thanks, Ankit

  • Safari taking too long to load web-pages.

    - by ayaz
    I am running the latest and greatest Snow Leopard on a MacBook. I have started to notice that Safari is taking too long to open properly most of the web-pages I routinely visit. Safari would first sit at "Connecting ..." status, and then on "Waiting ..." status, taking a lot of time to load the page. In contrast, Firefox (and Opera), for example, is loading pages quickly. I am not using any proxy settings on either. I have emptied the cache on Safari; also re-set Safari completely, but all in vain. I am now stumped about what I should try to locate what the problem is, and if possible, fix it. Any help from the community here will be very much appreciated!

  • make IIS 7.5 cache static content files over different pages

    - by Achilles
    On a Windows 2008 R2 machine, using DNS and IIS, I've established my development test server; i.e. I'll have a web application that I can browse on http://test.dev. I've moved all the static content files like images, js files and css files into another application which is visible on http://cdn.test.dev. test.dev uses cdn.test.dev URLs like http://cdn.test.dev/js/jquery.js to load js, css and images. When I first load "~/" of test.dev, all files will load with a response code of 200; when I press F5 in Firefox, all files, except "~/default.aspx", will load with a 304 response code; but pressing Ctrl+F5 loads them again with a 200 code. If I browse another URL like "~/pages/" in test.dev, all of those static files will reload with a 200 code... Is this normal or am I doing something wrong?

    Actually, I'm looking for a behavior like this: I want the client to load http://cdn.test.dev/js/jquery.js only once, and to use this jquery.js file, from cache, in all other pages of test.dev. Is this possible?

    This is the web.config file I have in the root directory of cdn.test.dev:

        <configuration>
          <system.webServer>
            <caching>
              <profiles>
                <add extension=".png" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
                <add extension=".gif" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
                <add extension=".jpg" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
                <add extension=".js" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
                <add extension=".css" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
                <add extension=".axd" kernelCachePolicy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
              </profiles>
            </caching>
            <httpProtocol allowKeepAlive="true">
              <customHeaders>
                <add name="Cache-Control" value="public, max-age=31536000" />
              </customHeaders>
            </httpProtocol>
            <validation validateIntegratedModeConfiguration="false" />
            <modules runAllManagedModulesForAllRequests="true">
              <remove name="RadUploadModule" />
              <remove name="RadCompression" />
              <add name="RadUploadModule" type="Telerik.Web.UI.RadUploadHttpModule" preCondition="integratedMode" />
              <add name="RadCompression" type="Telerik.Web.UI.RadCompression" preCondition="integratedMode" />
            </modules>
            <handlers>
              <remove name="ChartImage_axd" />
              <remove name="Telerik_Web_UI_SpellCheckHandler_axd" />
              <remove name="Telerik_Web_UI_DialogHandler_aspx" />
              <remove name="Telerik_RadUploadProgressHandler_ashx" />
              <remove name="Telerik_Web_UI_WebResource_axd" />
              <add name="ChartImage_axd" path="ChartImage.axd" type="Telerik.Web.UI.ChartHttpHandler" verb="*" preCondition="integratedMode" />
              <add name="Telerik_Web_UI_SpellCheckHandler_axd" path="Telerik.Web.UI.SpellCheckHandler.axd" type="Telerik.Web.UI.SpellCheckHandler" verb="*" preCondition="integratedMode" />
              <add name="Telerik_Web_UI_DialogHandler_aspx" path="Telerik.Web.UI.DialogHandler.aspx" type="Telerik.Web.UI.DialogHandler" verb="*" preCondition="integratedMode" />
              <add name="Telerik_RadUploadProgressHandler_ashx" path="Telerik.RadUploadProgressHandler.ashx" type="Telerik.Web.UI.RadUploadProgressHandler" verb="*" preCondition="integratedMode" />
              <add name="Telerik_Web_UI_WebResource_axd" path="Telerik.Web.UI.WebResource.axd" type="Telerik.Web.UI.WebResource" verb="*" preCondition="integratedMode" />
            </handlers>
            <security>
              <requestFiltering>
                <requestLimits maxAllowedContentLength="10485760" />
              </requestFiltering>
            </security>
            <staticContent>
              <clientCache cacheControlMode="UseExpires" httpExpires="Wed, 01 Jan 2020 00:00:00 GMT" />
            </staticContent>
          </system.webServer>
          <appSettings />
          <system.web>
            <compilation debug="false" targetFramework="4.0" />
            <pages>
              <controls>
                <add tagPrefix="telerik" namespace="Telerik.Web.UI" assembly="Telerik.Web.UI" />
              </controls>
            </pages>
            <httpHandlers>
              <add path="ChartImage.axd" type="Telerik.Web.UI.ChartHttpHandler" verb="*" validate="false" />
              <add path="Telerik.Web.UI.SpellCheckHandler.axd" type="Telerik.Web.UI.SpellCheckHandler" verb="*" validate="false" />
              <add path="Telerik.Web.UI.DialogHandler.aspx" type="Telerik.Web.UI.DialogHandler" verb="*" validate="false" />
              <add path="Telerik.RadUploadProgressHandler.ashx" type="Telerik.Web.UI.RadUploadProgressHandler" verb="*" validate="false" />
              <add path="Telerik.Web.UI.WebResource.axd" type="Telerik.Web.UI.WebResource" verb="*" validate="false" />
            </httpHandlers>
            <httpModules>
              <add name="RadUploadModule" type="Telerik.Web.UI.RadUploadHttpModule" />
              <add name="RadCompression" type="Telerik.Web.UI.RadCompression" />
            </httpModules>
            <httpRuntime maxRequestLength="10240" />
          </system.web>
        </configuration>

    And this is the resulting response header for http://cdn.test.dev/css/global.css:

        Cache-Control: private,public, max-age=31536000
        Content-Type: text/css
        Content-Encoding: gzip
        Expires: Wed, 01 Jan 2020 00:00:00 GMT
        Last-Modified: Mon, 06 Sep 2010 08:53:06 GMT
        Accept-Ranges: bytes
        Etag: "0454eca04dcb1:0"
        Vary: Accept-Encoding
        Server: Microsoft-IIS/7.5
        X-Powered-By: ASP.NET
        Date: Mon, 06 Sep 2010 14:57:08 GMT
        Content-Length: 4495

  • What Raid should I use for Website Static Files / Content

    - by Simon
    I'm building a web server (IIS7) and would like to know the best practice for storing static content and the uploaded files of the website's users (predominantly pictures, but also other documents like PDFs). I will keep the operating system on a RAID 1 array. Where should I be keeping the actual website's pages and files, its own static content, and that of its users? Should I be placing this content on a separate RAID array, and if so, which type? I was considering using SLC SSDs (such as Intel's X25-E), but the following issues came to light. Will SLC SSDs give any improvement over a 2.5" 15k SAS drive for this type of content? If I did use SSDs, I'm under the belief I would still need to use RAID for redundancy, yet I've heard Intel X25-Es don't support TRIM. Does this scrap them as a legitimate option?

  • Apache 2.2 Present rss http 410 pages as application/rss+xml content type

    - by Mark Bakker
    I have a problem sending HTTP 410 for very old RSS feeds. Functionally, this can happen in two cases:

    1. Very old RSS feeds where the content is not updated anymore and the subject could not move to another feed
    2. Migration from a 3rd-party site to our site where the RSS feed is no longer functionally supported

    I tried several things in my site config; see below:

        <VirtualHost *:80>
            DocumentRoot /opt/tomcat/webapps/ROOT/
            ErrorDocument 500 /error/static/error-500.html
            ErrorDocument 503 /error/static/error-500.html
            ErrorDocument 404 /error/static/rss/error-404.html
            ErrorDocument 410 /error/static/rss/error-410.html

            # When error pages need to be served by apache,
            # exclude the files to serve as below (in comment)
            SetEnvIf Request_URI "/error/static/*" no-jk

            # force all files to be image/gif:
            <Location *.rss>
            #<Location *>
            #ForceType application/rss+xml
            </Location>
            #AddType application/rss+xml .rss
            #AddType application/rss+xml .xml
            #AddType application/rss+xml .html

            JkMount /* rss;use_server_errors=402
            # JkMount /* rss

            JkMount /news.rss rss
            JkMount /documenten-en-publicaties.rss rss

            RewriteEngine on
            RewriteRule ^/news.rss$ - [NC,T=application/rss+xml,G,L]
            RewriteRule ^/documenten-en-publicaties.rss$ - [NC,G,L]

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            ErrorLog "|/usr/bin/logger -s -p local3.err -t 'Apache'"
            CustomLog "|/usr/bin/logger -s -p local2.info -t 'Apache'" combined
            ServerSignature Off
        </VirtualHost>

    The desired end result: on /news.rss and /documenten-en-publicaties.rss, a 410 response whose error page has content and is served with the content type 'application/rss+xml'.
