Search Results

Search found 25614 results on 1025 pages for 'content filter'.


  • Windows Azure Evolution – Preview Developer Portal

    - by Shaun
    With the MEET Windows Azure event on 7th June, there are many new features and updates in the Windows Azure platform. In the coming several posts I will try to cover some of them, and in this first post I would like to take a quick walkthrough of the new preview developer portal.

    History of the Developer Portal

    If you have been working with Windows Azure since 2009 or 2010, you should remember the first version of the developer portal. It was built in HTML with very limited features, and the layout was not very attractive. In November 2010, along with the SDK 1.3 release, the developer portal took a big jump: to provide more usability and features it was rebuilt in Silverlight, so it ran like a desktop application with many windows, lists, commands and context menus. From 2010 until now many features have been added to the portal, such as remote desktop, co-admin, Windows Azure Connect and the VM role, and the portal itself has become more and more complicated. But Silverlight brought some problems. The first is browser compatibility: as you know, most mobile and tablet browsers do not allow rich-content plugins such as Flash and Silverlight. This means people cannot open and configure their Azure services from an iPad, iPhone or Windows Phone, even when all they need is to restart a hosted service or view the status of their databases. Another problem is performance: Silverlight provides a rich experience to users, but it also needs more bandwidth. So in this upgrade the preview developer portal goes back to HTML, with JavaScript, as a mobile-friendly, cross-browser, interactive web site.

    Preview Portal vs. Silverlight Portal

    Before I start talking about the new preview portal, I should highlight that it is a PREVIEW version. Even though it covers almost all the features of the old portal, as well as some cool new ones I will mention in the coming posts, some things are still being developed and migrated, so sometimes you will need to switch back to the old portal. For example, the preview portal has no co-admin management and no remote desktop function, and the SQL database management function takes you back to the old SQL Azure management portal. But as Microsoft has said, these missing features will be moved into the preview portal over the next few months. Since the public URL of the developer portal, https://windows.azure.com/, has been changed to point to the preview portal, to get back to the old one you need to click the preview button at the top of the page and then the "Take me to the previous portal" link.

    Overview

    There are four parts in the preview portal. At the top is the header, which shows the account you are currently logged in with. If you click the header it shows the top menu of Windows Azure, where you can navigate to the Windows Azure home page, the pricing information page, community and account pages, etc. The navigation bar is on the left-hand side, with the following categories:

    ALL ITEMS: all items in your Windows Azure account, including web sites, services, databases, etc.
    WEB SITES: the web sites in your account. Only the web sites themselves are shown; their linked resources appear when you drill down into a web site.
    VIRTUAL MACHINES: the virtual machines you have deployed to Azure.
    CLOUD SERVICES: all Windows Azure hosted services in your account.
    SQL DATABASES: all SQL databases (SQL Azure) in your account.
    STORAGE: all Windows Azure storage services in your account.
    NETWORKS: the virtual networks (Windows Azure Connect) you have created.

    The available items are listed in the main part of the page, based on the category currently selected. If there are no items, a quick-create link is shown instead. At the bottom of the page is the command and information bar: based on what is selected and what the user is doing, it shows the related information and commands. For example, in the image below, while I was creating a new web site, the information bar told me that my web site was being provisioned, and there were two commands in the command bar. Once it was ready, the command bar showed the commands I could apply to my new web site. "Web Sites" is a new feature introduced with this upgrade; it gives us an easier and quicker way to establish a web site from scratch or from an existing gallery. I will introduce it in more detail in the next post. Also, in the command bar you can create a service by clicking the NEW button, which slides the creation panel up.

    Where's My Hosted Services?

    Windows Azure Hosted Services have been renamed Cloud Services. Creating a new service is very easy: just click the NEW button at the bottom of the page and select CLOUD SERVICE and QUICK CREATE. This creates a blank hosted service without a deployment or certificate; you only need to specify the service URL and the affinity group or region. The service will then be shown in the list, and if you click the item, all of its information appears in the main part of the page. Since no package has been deployed to the service yet, there is not much to see, but we can upload a package by using the command at the bottom. As you can see, we can also manage the configuration, instances and certificates, scale up and down (change the VM size), and scale in and out (decrease and increase the instance count). Assume I have created an ASP.NET MVC 3 web role project in Visual Studio and built the package. I can then click the UPLOAD button on this page to deploy it. In the pop-up window I just specify my deployment name, package file and configuration file. I can also check the box below so that it will NOT warn me if this deployment has only one instance. Once we click OK, the package is uploaded and provisioned by the platform, and after a while the information bar shows that the service is ready. The dashboard page gives the basic information about the service and deployment, for example the usage overview diagram, status, URL, public IP address, etc. On the configure page we can view and change the CSCFG content, such as the monitoring setting, connection strings and OS family. On the scale page we can increase and decrease the instance count, and on the instances page we can view the status of all instances. If your service uses SQL databases or storage accounts, they are shown under the linked resources page, and you can manage the service's certificates on the certificates page.

    How About My Storage Services?

    Storage services can be managed by clicking the STORAGE link in the navigation bar, and we can create a new storage service from the NEW button. After you specify the storage name and region, it is provisioned by the platform. If you want to copy or manage the storage keys, just click the Manage Keys button at the bottom, which is very easy. What I want to highlight here is that you can monitor your storage service by enabling the monitoring configuration: click the storage item in the list and navigate to the configure page. As you can see on that page, you can enable monitoring for blob, table and queue, and you can also enable logging of the requests that come to the storage. But as the tooltip on the page says, enabling monitoring and logging increases the usage of the storage account, which means a bigger bill, so make sure you enable them thoughtfully.

    And My SQL Databases (SQL Azure)?

    The last thing I want to introduce quickly is SQL databases, formerly named SQL Azure. You can create a new SQL database server and a new database by clicking the ADD button under the SQL DATABASES navigation item. In the pop-up window, just specify the database name, edition, size, collation and server. You can select an existing SQL database server if you have one, or create a new one. If you choose to create a new server, there is one more step, where you specify the server login, password and region. Once it is ready you can manage your databases, as well as the servers, in the portal; for a particular server you can update the firewall settings on its configure page.

    So, What Else?

    There are some other areas of the preview portal I did not cover, such as virtual machines, virtual networks and web sites. I will talk about virtual machines and web sites in separate future posts. The virtual network is the Windows Azure Connect we are familiar with. But as I mentioned at the beginning of this post, the preview portal is still under development and some features are not available here. For example, you cannot manage the co-admins of your subscriptions, you cannot open remote desktop to your hosted services, and you cannot navigate directly to Windows Azure Service Bus, Access Control and Caching, formerly named Windows Azure AppFabric. In these cases you need to go back to the old portal, so for the coming several months we may need to use both sites.

    Summary

    In this post I quickly introduced the new Windows Azure developer portal. Since things have been rearranged and renamed, I demonstrated some features that existed in the old portal, such as how to create and deploy a hosted service and how to provision a storage service and a SQL database. All features of the old portal have been, are being, or will be migrated into the new portal, but some of them live in a different category or page, which we need to figure out.

    Hope this helps, Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Eclipse making background video skip with almost every click of the mouse

    - by RustyH
    The problem is I like to listen to videos online (mostly or all Flash video) while programming, and I never had any problems until I started using Eclipse. Now, almost every time I do anything other than type, the video skips or freezes badly, which is really annoying. This never happened with Visual Studio, NetBeans, or Adobe Flash for that matter, unless I was compiling or doing something else that hogs the processor. Are there any settings that might fix this? It happens almost every time I even click "Find" in the Find pop-up window, and it is not as if it has to search a big file: the file is only 700 lines, and honestly only about half of those have content on them. Any ideas?

    Read the article

  • Oracle University has released “Oracle AIA Foundation Pack 11g: Developing Applications” in the Training on Demand format (TOD)

    - by Lionel Dubreuil
    In this course, you will learn how to quickly develop integrations using Application Integration Architecture (AIA) Foundation Pack 11g that run on Oracle Fusion Middleware. You'll learn to:

    Design and create Application Business Connector Services to integrate applications into AIA
    Create Enterprise Business Services to perform specific business activities
    Configure Guaranteed Message Delivery to ensure no loss of messages
    Extend Enterprise Business Objects and Application Business Connector Services to meet corporate requirements

    This course is available now in the Training on Demand format. Training on Demand features:

    Delivered by top instructors
    Video of classroom lecture, whiteboarding and labs
    Hands-on practice environment
    Ask your instructor
    Bonus material from product experts

    Why choose On Demand?

    Start training within 24 hours
    Get full classroom content online
    Customize your learning experience
    Eliminate travel-related expenses
    Access anytime, anywhere, 24/7

    You'll find more information here.

    Read the article

  • Postfix Not Sending Email to Some Addresses?

    - by Jake
    I'm using Jetpack on WordPress, and it wasn't working. I was getting the following error:

    Diagnostic-Code: X-Postfix; unknown user: "jake"
    --60FD1138CAD.1354039466/example.com
    Content-Description: Undelivered Message

    (example.com substituted for our domain) We set up a test mail function, and that wasn't sending either. We changed the email to an outside address and it worked. Any thoughts why it won't send to an email that is at the same domain? Or why it sends to some emails but not others? Upon running postconf -n, I get the following:

    alias_database = hash:/etc/aliases
    alias_maps = hash:/etc/aliases
    append_dot_mydomain = no
    biff = no
    config_directory = /etc/postfix
    inet_interfaces = all
    inet_protocols = all
    mailbox_size_limit = 0
    mydestination = example.com, Example, localhost.localdomain, localhost
    myhostname = example.com
    mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
    myorigin = /etc/mailname
    readme_directory = no
    recipient_delimiter = +
    relayhost =
    smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
    smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
    smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
    smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
    smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
    smtpd_use_tls = yes
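    One likely explanation given the output above (an assumption to verify, not a confirmed diagnosis): because example.com appears in mydestination, Postfix treats mail to anyone@example.com as local delivery and looks for a Unix account or alias named "jake", hence "unknown user". If that domain's mailboxes actually live elsewhere, a hedged sketch of the check and fix:

    # Confirm the domain is currently listed as local:
    postconf mydestination
    # If its mail should be delivered remotely via DNS/MX, remove it:
    sudo postconf -e "mydestination = localhost.localdomain, localhost"
    sudo postfix reload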

    Read the article

  • Extension Methods in Dot Net 2.0

    - by Tom Hines
    Not that anyone would still need this, but in case you have a situation where the code MUST be .NET 2.0 compliant and you want to use a cool feature like extension methods, there is a way. I saw this article when looking for ways to create extension methods in C++, C# and VB: http://msdn.microsoft.com/en-us/magazine/cc163317.aspx The author shows a simple way to declare/define the ExtensionAttribute so it's available to 2.0 .NET code. Please read the article to learn about the when and why, and use the content below to learn HOW. In the next post, I'll demonstrate cross-language calling of extension methods. Here is a version of it in C#. First, here's the project showing there's no VOODOO included:

    using System;

    namespace System.Runtime.CompilerServices
    {
       [
          AttributeUsage(
             AttributeTargets.Assembly
             | AttributeTargets.Class
             | AttributeTargets.Method,
          AllowMultiple = false, Inherited = false)
       ]
       class ExtensionAttribute : Attribute { }
    }

    namespace TestTwoDotExtensions
    {
       public static class Program
       {
          public static void DoThingCS(this string str)
          {
             Console.WriteLine("2.0\t{0:G}\t2.0", str);
          }

          static void Main(string[] args)
          {
             "asdf".DoThingCS();
          }
       }
    }

    Here is the C++ version:

    // TestTwoDotExtensions_CPP.h
    #pragma once
    using namespace System;

    namespace System {
       namespace Runtime {
          namespace CompilerServices {
             [
                AttributeUsage(
                   AttributeTargets::Assembly
                   | AttributeTargets::Class
                   | AttributeTargets::Method,
                AllowMultiple = false, Inherited = false)
             ]
             public ref class ExtensionAttribute : Attribute {};
          }
       }
    }

    using namespace System::Runtime::CompilerServices;

    namespace TestTwoDotExtensions_CPP {
       public ref class CTestTwoDotExtensions_CPP
       {
       public:
          [ExtensionAttribute] // or [Extension]
          static void DoThingCPP(String^ str)
          {
             Console::WriteLine("2.0\t{0:G}\t2.0", str);
          }
       };
    }

    Read the article

  • Mac OS X file recovery

    - by Daniel
    I thought that all operating systems would merge folder contents when a folder is moved onto one with the same name at the destination. Imagine my surprise when that didn't happen, and now hundreds, if not thousands, of files have gone missing and are nowhere to be found. Because they were not "deleted", they are not in the trash bin. I've tried to do some recovery using a program called Stellar Phoenix, but after about a 24-hour scan it didn't recognize any of the raw files (.dng, .arw) as image files, so I couldn't see whether they could be recovered. It also didn't show the directory structure, which would be handy. I tried a quick scan, but all it showed was files that were still on the HD; I'm not sure what the point of that is. I've used Recover 2000 on Windows and it does a good job. Does anyone know of anything that works quickly and reliably for this kind of file recovery? (I don't think I should have to do a sector-by-sector scan for this kind of file loss.)

    Read the article

  • Vim is spellchecking in XML files where I don't want it to, and only there

    - by Kazark
    I'm trying to use Vim's built-in spellchecking in some XML documents. The behavior comes merely from having the XML syntax loaded, as seen in the following minimal example (which reproduces what I also see in large XML documents): given two buffers with exactly the same content, when the filetype is text the spellchecking works; when it is xml, it does not. spell is set in both buffers. However, given this view of the top three lines of a large XML document, you can see that the spellchecking is certainly on, but it is only checking attributes. The nuisance is that none of the things it is actually flagging are misspelled, and it isn't finding any of the numerous misspellings in the body of the document. At a minimum I would like it to find the spelling errors in the body of the document, and being able to turn off the checking on attributes would be a nice option. I've searched for @NoSpell in the xml.vim file, but that returns no hits.
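    One possible workaround (a sketch, not a verified fix: whether spell applies inside syntax-highlighted buffers is controlled by the syntax file, and xml.vim's behavior varies between Vim versions):

    " In ~/.vimrc: ask Vim to spellcheck text that is not inside any
    " syntax item; in XML buffers without syntax folding this covers
    " the element content between the tags.
    autocmd FileType xml syntax spell toplevel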

    Read the article

  • San Joaquin County, California Wins AIIM 2012 Carl E. Nelson Best Practice Award

    - by Peggy Chen
    Last month, AIIM, the global community of information professionals, announced the winners of the 2012 Carl E. Nelson Best Practices Awards, and San Joaquin County, California won in the small company category for 1-100 employees. The Carl E. Nelson Best Practices Award was established to recognize excellence in the area of information management. "Best practice" denotes a standard of excellence that has been achieved within an organization, and refers to a process that can be quantified, adapted and repeated. Like many counties, San Joaquin County, California was faced with huge challenges due to decreasing funds and staff, including a decreased ability to build capability. It needed to streamline processes, cut costs per activity, modernize and strengthen the infrastructure, and adopt new technology and standards such as the National Information Exchange Model (NIEM). The Integrated Justice Information System (IJIS) provides a Web-based system to link more than 650,000 residents, 18 agencies countywide and other law enforcement systems nationwide. The county's modernization initiative focused on replacing its outdated warrant system, implementing service-oriented architecture (SOA) to simplify integration between county law and justice systems, and deploying Business Process Management (BPM) and case management with content management and Web technologies from Oracle. A critical part of the county's success has been the proper alignment of its strategic vision with the way the organization was enabled to plan and execute (and continues to execute) its modernization project. Congratulations to San Joaquin County!

    Read the article

  • Oracle’s AutoVue Enables Visual Decision Making

    - by Pam Petropoulos
    That old saying about a picture being worth a thousand words has never been truer.  Check out the latest reports from IDC Manufacturing Insights which highlight the importance of incorporating visual information in all facets of decision making and the role that Oracle’s AutoVue Enterprise Visualization solutions can play. Take a look at the excerpts below and be sure to click on the titles to read the full reports. Technology Spotlight: Optimizing the Product Life Cycle Through Visual Decision Making, August 2012 Manufacturers find it increasingly challenging to make effective product-related decisions as the result of expanded technical complexities, elongated supply chains, and a shortage of experienced workers. These factors challenge the traditional methodologies companies use to make critical decisions. However, companies can improve decision making by the use of visual decision making, which synthesizes information from multiple sources into highly usable visual context and integrates it with existing enterprise applications such as PLM and ERP systems. Product-related information presented in a visual form and shared across communities of practice with diverse roles, backgrounds, and job skills helps level the playing field for collaboration across business functions, technologies, and enterprises. Visual decision making can contribute to manufacturers making more effective product-related decisions throughout the complete product life cycle. This Technology Spotlight examines these trends and the role that Oracle's AutoVue and its Augmented Business Visualization (ABV) solution play in this strategic market. Analyst Connection: Using Visual Decision Making to Optimize Manufacturing Design and Development, September 2012 In today's environments, global manufacturers are managing a broad range of information. Data is often scattered across countless files throughout the product life cycle, generated by different applications and platforms. Organizations are struggling to utilize these multidisciplinary sources in an optimal way. Visual decision making is a strategy and technology that can address this challenge by integrating and widening access to digital information assets. Integrating with PLM and ERP tools across engineering, manufacturing, sales, and marketing, visual decision making makes digital content more accessible to employees and partners in the supply chain. The use of visual decision-making information rendered in the appropriate business context and shared across functional teams contributes to more effective product-related decision making and positively impacts business performance.

    Read the article

  • CodePlex Daily Summary for Friday, March 26, 2010

    CodePlex Daily Summary for Friday, March 26, 2010

    New Projects

    .NET settings class generator T4 templates: A couple of T4 templates to generate a Settings class for your .NET project. Allows you to define your application settings in an XML file and have...
    AlphaPagedList: AlphaPagedList makes it easier for .NET developers to write paging code. Based on PagedList, it allows you to take any List<T> and split it based on...
    C# Projects: C# Projects
    Chitme: Aenean feugiat pharetra enim rhoncus viverra. In at nunc nec sem varius bibendum. Aliquam erat volutpat. Nullam fringilla facilisis massa et eleife...
    CloudCache - Distributed Cache Tier with Azure: CloudCache makes it easier for you to manage and deploy a distributed caching tier to Windows Azure. Included is a web dashboard in MVC 2.0, Memcac...
    Composer: Composer is an extensible compositional architecture framework, providing a set of functionality such as an Inversion of Control (IoC) container, Depe...
    Data Connection Suite: Data Connection Suite is a set of easy-to-use data connection string builder dialogs & controls ready to be integrated in any .NET application.
    DatabaseHandler: Database Handler
    EPiServer Blog Page Provider: An example page provider implementation for EPiServer that supports external blog sources for pages, with Blogger and WordPress supported out of the box ...
    Extended MessageBox: ExtendedMessageBox makes it easier to display messages from your Windows applications. Based on the built-in .NET MessageBox class functionality, i...
    FluentPath: FluentPath implements a modern wrapper around System.IO, using modern patterns such as fluent APIs and lambdas. By using FluentPath instead of Syst...
    Halcyone : Silverlight without pain: Halcyone is an application framework for Silverlight that should make the lives of developers easier =)
    IlluminaRT: Real-time rendering
    me2: Mista Engine 2
    MessegeBox RightToLeft Lib: This is a really simple lib project for using RTL in the MessageBox class, just for shorter code and a default RTL option.
    MS Word Automation Service: An MS Word automation service that consumes a Word template and combines it with XML to produce a Word document. Currently in production. Must add some...
    SharePoint - Site Request InfoPath Form Template: This template allows a portal user to enter initial information when requesting the creation of a new SharePoint site.
    TextFlow - Text Editor: TextFlow is a fast and light text editor that simplifies day-to-day tasks. You can create letters and documents through TextFlow. It also includes ...
    TiledLib: A library for using Tiled (http://mapeditor.org) levels in XNA Game Studio projects. Includes a content pipeline extension and runtime library.
    wcf learning 2010: myWCFprojects

    New Releases

    .NET settings class generator T4 templates: Example 1: An example project containing the T4 templates and associated files. SingleSite - generate settings for a single site; MultiSite - generate setting...
    AccessibilityChecker: Accessibility Checker V0.1: SharePoint Accessibility Checker V0.1
    AlphaPagedList: AlphaPagedList v0.9: Initial release of AlphaPagedList
    ASP.Net RIA Controls: Version 1.1 Beta: New XHTML-compliant version with alternative content support if no plugin is installed.
    Business & System Analysis Templates and Best Practices: R 00: You may find here my own structured materials from Luxoft ReqLabs 2009, plus a short presentation about system analysis and modelling. Th...
    CloudCache - Distributed Cache Tier with Azure: v1.0.0.0: First release! More information at http://blog.shutupandcode.net/?p=935
    CycleMania Starter Kit EAP - ASP.NET 4 Problem - Design - Solution: Cyclemania 0.08.39: Implemented client-side functions on the remainder of the account pages.
    DevTreks - social budgeting that improves lives and livelihoods: Social Budgeting Web Software, DevTreks alpha 3d: Alpha 3d is a general bug fix, tweaking pagination, navigation, packaging, file system storage, page validation, security, locals, and linked views.
    Digital Media Processing Project 1: Image Processor: Image Processor 1.01: Supports opening files through Windows Explorer or by drag and drop.
    Extended MessageBox: ExtendedMessageBox Runtime Version 1.2: Initial release
    Extended MessageBox: SourceCode for Version 1.2: Initial source code
    Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 1.0: Includes Fluent.dll (with .pdb and .xml, debug and release versions), a showcase application, samples, and Foundation (T...
    FluentPath: FluentPath Beta: The beta release of FluentPath.
    HaterAide ORM: HaterAide ORM 1.5: This version is, more or less, a rewrite of the code base. Many new features have also been added in this release: 1) Foreign keys are now added to...
    iTuner - The iTunes Companion: iTuner 1.2.3735 Beta: V1.2 allows you to synchronize one or more iTunes playlists to a USB MP3 player. This continues the evolution yet maintains the minimalistic appro...
    LogWin - Logging Your Computer Activities: LogWin - Logging your computer activities: This program logs your computer activities and displays them as a table and pie chart. It is made with native C++, HTML Dialog and the Google Chart API.
    MessegeBox RightToLeft Lib: MessegeBoxRTL-1.0.0.0_BIN: My first upload. This is the binary release only. Have fun.
    MessegeBox RightToLeft Lib: MessegeBoxRTL-1.0.0.0_SRC: My first upload. This is the source code with the binary. Have fun.
    MS Word Automation Service: Alpha: In production already, but who cares. It works.
    MultiMenu ASP.NET Cascading Menu WebControl: MultiMenu 2.6 ASP.NET Menu: Fixed problems that prevented the menu from working with the XHTML DocTypes; added support for IE 7-8; added XmlLoading and XmlLoaded events; ad...
    netgod: LanyoWebBrowser: Lanyo ERP Client
    nopCommerce. Open Source online shop e-commerce solution.: nopCommerce 1.50: To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/ReleaseNotes.aspx).
    Open NFe: Open NFe v1.9.7: DANFe 1.9.7 sources; trim applied in the TXT-to-XML conversion.
    patterns & practices - Smart Client Guidance: Smart Client Software Factory 2010 Beta Source: The Smart Client Software Factory 2010 provides an integrated set of guidance that assists architects and developers in creating composite smart cl...
    Physics Helper for Silverlight, WPF, Blend, and Farseer: PhysicsHelper 3.0.0.5 Alpha: This release supports Windows Phone 7 Series development, along with the Silverlight 3 and WPF support. It requires Visual Studio 2010, plus the Wi...
    Protein Insight: ProteinInsight V2.0.1: Protein Insight is a protein structure visualization system. The visualization rendering engine is based on native C++ and Direct3D; the plug-in is based on CL...
    PSFGeneric: ERP / CRM business management and administration: PSFGeneric 1.4.0.9000 Manual and power-ups ASNIA: PSFGeneric 1.4.0.9000; Tareas 2.1.0; MySQL Persistente 1.0.3; TM-U220 40 col. Driver 1.0.0; Gestor Contable Básico 1.1.2.1; Cafetería 1.1.6; Catalogo 1...
    QuestTracker: QuestTracker 0.2: Primary new feature: import/export quest log. Deleting anything will cause an automatic export prior to deletion, automatically backing up your log...
    Reusable Library: V1.0.5: A collection of reusable abstractions for the enterprise application developer.
    Reusable Library Demo: Reusable Library Demo v1.0.3: A demonstration of reusable abstractions for the enterprise application developer.
    SharePoint - Site Request InfoPath Form Template: SharePoint - Site Request InfoPath Form Template: This template allows a portal user to enter initial information when requesting the creation of a new SharePoint site. To install: 1. Run the SiteRequest.m...
    Silverlight Gantt Chart: Silverlight Gantt Chart 1.2: Updates include the ability to add GanttNodeSections that allow for multiple GanttItems in a single row.
    Spiral Architecture Driven Development (SADD): SADD v.1.0: This is the first complete release, with the new materials now all in English! The abstract is from the main article named "SADD-MSAJ-The Spiral Arc...
    Spiral Architecture Driven Development (SADD) for Russian: SADD v.1.0: This is the first complete release of SADD in Russian. An excerpt from this article was published in Microsoft Architecture Journal #23; you can find it in ...
    Sprite Sheet Packer: 2.3 Release: SpriteSheetPacker now supports saved user settings, so the app will remember your previous values for padding, image size, image options, whethe...
    Standalone XQuery Implementation in .NET: 1.4: This is version 1.4 of QueryMachine.XQuery. It includes bug fixes and performance optimization. Document load time is dramatically improved...
    TextFlow - Text Editor: Kernel: TextFlow core kernel
    TextFlow - Text Editor: TextFlow Beta 3 Technical Preview: This is a technical preview of TextFlow; it is made to run for 40 days, after which it will expire. Changes: 140 bug fixes; supports Windows(R) 7...
    TiledLib: TiledLib 1.0: First release of TiledLib. This download is for prebuilt DLLs and a demo project. For the full source code, use the Source Code tab to download the...
    UnGrouper: Current build: This is a preview build. Hide and show the main window with WinKey+A. IMPORTANT NOTE: You must close all applications before launching this build ...
    VCC: Latest build, v2.1.30325.0: Automatic drop of latest build
    WCF Metal: WCFMetal 0.3.0.0: WCFMetal 0.3.0.0, Copyright © 2010 John Leitch, distributed under the terms of the GNU General Public License. Summary: by utilizing LINQ to SQL gene...
    Web Log Analyzer: Release Indihiang 1.0: For installation and usage instructions, please read the Indihiang portal: http://wiki.indihiang.com What's new in Indihiang 1.0? Check http://geeks.netindone...
    異世界の新着動画: Ver. 10-03-25: Added support for the Nico Nama (Niconico Live) format.

    Most Popular Projects

    MetaSharp
    Rawr
    WBFS Manager
    ASP.NET Ajax Library
    Silverlight Toolkit
    Microsoft SQL Server Product Samples: Database
    AJAX Control Toolkit
    LiveUpload to Facebook
    Windows Presentation Foundation (WPF)
    ASP.NET

    Most Active Projects

    Rawr
    jQuery Library for SharePoint Web Services
    BlogEngine.NET
    Farseer Physics Engine
    Facebook Developer Toolkit
    LINQ to Twitter
    Fluent Ribbon Control Suite
    Table2Class
    NB_Store - Free DotNetNuke Ecommerce Catalog Module
    PHPExcel

    Read the article

  • Steps for MySQL DB Replication

    - by Manish Agrawal
    Following are the steps for implementing MySQL replication on a Linux machine.

    Pre-implementation steps for DB replication:

    1. Identify the databases to be replicated.
    2. Identify the tables to be ignored during replication, per database, for example log tables.
    3. Carefully identify and replace the variables and paths (locations) mentioned in bold in the commands given below with appropriate values.
    4. Schedule the maintenance activity in off-hours, as these activities will affect all the databases on the master database server.

    Implementation steps for DB replication:

    1. Configure the /etc/my.cnf file on the master database server to enable binary logging, set the server id, and configure the database names for which logging should be done:

       [mysqld]
       log-bin=mysql-bin
       server-id=1
       binlog-do-db = dbname

       Note: You can specify multiple databases in binlog-do-db by using comma-separated dbname values, like: dbname1, dbname2, ..., dbnameN

    2. On the master database, grant replication slave privileges by executing the following command at the mysql prompt:

       mysql> GRANT REPLICATION SLAVE ON *.* TO slaveuser@<hostname> IDENTIFIED BY 'slavepassword';

    3. Stop the master and slave databases by giving the command:

       mysqladmin shutdown

    4. Start the master database by giving the command:

       /usr/local/mysql-5.0.22/bin/mysqld_safe --user=user &

    5. Lock the tables on the master:

       mysql> FLUSH TABLES WITH READ LOCK;

       Note: Leave the client (putty session) from which you issued the FLUSH TABLES statement running, so that the read lock remains in effect. If you exit the client, the lock is released.

    6. Record the master's binary log coordinates:

       mysql> SHOW MASTER STATUS;
       +---------------+----------+--------------+------------------+
       | File          | Position | Binlog_Do_DB | Binlog_Ignore_DB |
       +---------------+----------+--------------+------------------+
       | mysql-bin.003 | 117      | dbname       |                  |
       +---------------+----------+--------------+------------------+

       Note: Record this information, as it will be required when starting the slave and replication in later steps.

    7. Take a MySQL dump by giving the following command in another session window (putty window):

       mysqldump -u user --ignore-table=dbname.tbl_name --ignore-table=dbname.tbl_name2 --master-data dbname > dbname_dump.db

       Note: When choosing databases to include in the dump, remember that you will need to filter out databases on each slave that you do not want to include in the replication process.

    8. Unlock the tables on the master:

       mysql> UNLOCK TABLES;

    9. Copy the dump file to the slave DB server.

    10. Start the slave using the --skip-slave-start option:

        /usr/local/mysql-5.0.22/bin/mysqld_safe --user=user --skip-slave-start &

    11. Restore the dump file on the slave DB server:

        mysql -u user dbname < dbname_dump.db

    12. Stop the slave database by giving the command:

        mysqladmin shutdown

    13. Configure the /etc/my.cnf file on the slave database server:

        [mysqld]
        server-id=2
        replicate-ignore-table = dbname.tablename

    14. Start the slave MySQL server with the 'replicate-do-db=dbname' option:

        /usr/local/mysql-5.0.22/bin/mysqld_safe --user=user --replicate-do-db=dbname --skip-slave-start &

    15. Configure the settings at the slave server for the master host name, log file name and position within the log file, as recorded in step 6 above, using the CHANGE MASTER statement in the MySQL session:

        mysql> CHANGE MASTER TO MASTER_HOST='<master_host_name>', MASTER_USER='<replication_user_name>', MASTER_PASSWORD='<replication_password>', MASTER_LOG_FILE='<recorded_log_file_name>', MASTER_LOG_POS=<recorded_log_position>;

    16. At the slave server's mysql prompt, give the following commands:

        a. mysql> START SLAVE;
        b. mysql> SHOW SLAVE STATUS;

        Note: To stop the slave for a backup or any other activity, you can use the following command at the slave server's mysql prompt:

        mysql> STOP SLAVE;

    Refer to the following links for more information on MySQL DB replication:
    http://dev.mysql.com/doc/refman/5.0/en/replication-options.html
    http://crazytoon.com/2008/04/21/mysql-replication-replicate-by-choice/
    http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html
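    Beyond the links above, a quick sanity check after step 16 is worth running on the slave (the status columns are as documented for MySQL 5.0; the values shown are what a healthy replica typically reports):

    mysql> SHOW SLAVE STATUS\G
    -- Expect, among the output:
    --   Slave_IO_Running: Yes
    --   Slave_SQL_Running: Yes
    --   Seconds_Behind_Master: 0 (or a small, shrinking number)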

    Read the article

  • SpinRite and USB blues - does a solution exist?

    - by Peter Mortensen
    I use SpinRite to recover hard disks and their content, and to a lesser degree for preventive maintenance. However, if a USB drive (a USB thumb drive and/or an external hard disk with a USB interface) is connected when SpinRite scans for devices, then SpinRite hangs and never finishes. The work-around is of course to disconnect the drive, but there is value in being able to use SpinRite on USB drives: some external drives have no screws, and it is difficult to take out the hard disk without damaging the casing; and for those that do have screws, it would save the disassembly time. Is there a way to fix this problem (e.g. BIOS changes or a modified SpinRite boot CD) without resorting to floppy disks?

    Read the article

  • OpenLdap TLS authentication setup

    - by CrazycodeMonkey
    I am trying to set up OpenLDAP on Ubuntu 12.04 by following this guide: https://help.ubuntu.com/12.04/serverguide/openldap-server.html When I tried to enable TLS on the server by creating a self-signed certificate as described in the guide above, I got the following error.

    Command that I ran:

    ldapmodify -Y EXTERNAL -H ldapi:/// -f /etc/ssl/certinfo.ldif

    Content of the ldif file:

    dn: cn=config
    add: olcTLSCACertificateFile
    olcTLSCACertificateFile: /etc/ssl/certs/cacert.pem
    -
    add: olcTLSCertificateFile
    olcTLSCertificateFile: /etc/ssl/certs/ldap01_slapd_cert.pem
    -
    add: olcTLSCertificateKeyFile
    olcTLSCertificateKeyFile: /etc/ssl/private/ldap01_slapd_key.pem

    Error message:

    ldap_modify: Inappropriate matching (18)
            additional info: modify/add: olcTLSCertificateFile: no equality matching rule

    After hours of searching on Google, I have not found anything that explains much about this error. Does anyone have more information on it?
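    One variant worth trying (a hedged sketch, not a confirmed fix: attributes that lack an equality matching rule cannot be checked for duplicate values, so an "add" of a possibly pre-existing value can fail where a "replace" succeeds; paths are the same as in the original ldif):

    dn: cn=config
    replace: olcTLSCACertificateFile
    olcTLSCACertificateFile: /etc/ssl/certs/cacert.pem
    -
    replace: olcTLSCertificateFile
    olcTLSCertificateFile: /etc/ssl/certs/ldap01_slapd_cert.pem
    -
    replace: olcTLSCertificateKeyFile
    olcTLSCertificateKeyFile: /etc/ssl/private/ldap01_slapd_key.pem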

    Read the article

  • nginx terminates connection after 65k bytes

    - by David Wolever
    I've got nginx configured as a front-end to a Python application running under gunicorn, but nginx is terminating connections after about 65k of data have been sent. For example, I've got a view which looks like this:

    def debug_big_file(request):
        return HttpResponse("x" * 500000)

    But when I access that URL through nginx, I only get 65283 bytes:

    $ curl https://example.com/debug/big-file | wc
    …
    curl: (18) transfer closed with outstanding read data remaining
    0 1 65283

    Note that everything works as expected when accessing gunicorn directly:

    $ curl http://localhost:1234/debug/big-file | wc
    …
    0 1 500000

    The relevant nginx config:

    location / {
        proxy_pass http://localhost:1234/;
        proxy_redirect off;
        proxy_headers_hash_bucket_size 96;
    }

    And nginx version 1.7.0. Some other facts: the number of bytes is consistent from request to request, but it varies based on the content (I first noticed it with a large PNG file, which was cut off after 65,372 bytes, not 65,283); 110k bytes are sent correctly (i.e., "x" * 110000 returns all 110,000 bytes), but 120k bytes are not; tcpdump suggests that nginx is sending a RST packet to gunicorn:
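    One pattern consistent with the ~64 KB cutoff (an assumption to verify, not a confirmed diagnosis): nginx holds roughly that much of a proxied response in its in-memory proxy buffers and spills the remainder to disk under proxy_temp_path; if that directory is missing or unwritable by the worker user, the transfer dies right around the buffer limit. A quick check:

    # Look for write failures around the time of a truncated request:
    tail -n 50 /var/log/nginx/error.log
    # e.g. open() "/var/cache/nginx/proxy_temp/..." failed (13: Permission denied)
    # The temp path and worker user vary by build/config (assumptions here):
    ls -ld /var/cache/nginx/proxy_temp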

    Read the article

  • Getting More Out of UPK

    - by [email protected]
    Are you getting the most out of UPK? Remember the idea of streamlining your content creation efforts? How about the concept of collaboration during development? How are you leveraging the System Process Documents or Test Scripts? Is your training team benefiting from the creation of process documentation? Is UPK linked into the help menu of your application, or even at the browser level (Smart Help)? Many customers underutilize UPK; some think of UPK as just a training creation solution or just a documentation tool. To get the full value of UPK you need to first evaluate how the UPK Developer is installed: single user or multi-user? If you have more than two UPK developers, there is a significant benefit to installing UPK in multi-user mode. This drives collaboration and automatic version control, and better facilitates the workflow and state features through customized views for the developers. Has your organization installed Usage Tracking? How are the outputs deployed, and for how many applications? If these questions have you thinking about your overall usage of UPK and you see room for significant improvement by using more of what UPK has to offer, then it could be time for a UPK Health Check. Contact your UPK Sales Consultant to help understand your environment and how to maximize the value of UPK, and start getting more out of the product.

    Read the article

  • Long file path returning 404 for "hello.htm"

    - by Adam Kane
    Hello, I have a long file path that works on my server, but a similar path returns a 404 error when it is on my client's (IIS6) server (http://ddmat.com/). Here's the functioning file path on my server: http://www.forgefx.com/projects/ddmat/install/Application Files/McCurdys_1_0_0_0/Content/FBX/CCAE1B33/Roof-sectionB-02.fbm/hello.htm My guesses: maybe the file path is too long? Maybe the ".fbm" in the directory path is invalid? Sorry for the vague problem description. Please let me know what additional info I can provide that would be helpful. Update: The problem happens even in short paths, with no spaces: http://www.myserver/test.folder/hell.htm Thanks, Adam

    Read the article

  • Can't verify my site on Google (error 403 Forbidden). I have other sites in the same host with no problems whatsoever

    - by Rosamunda Rosamunda
    I can't verify my site on Google. I've done this several times for several sites, all inside the same host. I've tried the HTML tag method, HTML upload, domain name provider (I can't find the options that Google tells me I should activate...), and Google Analytics. I always get this response:

    Verification failed for http://www.mysite.com/ using the Google Analytics method (1 minute ago). Your verification file returns a status of 403 (Forbidden) instead of 200 (OK).

    I've checked the server headers, and I get this result:

    REQUESTING: http://www.mysite.com
    GET / HTTP/1.1
    Connection: Keep-Alive
    Keep-Alive: 300
    Accept: */*
    Host: www.mysite.com
    Accept-Language: en-us
    Accept-Encoding: gzip, deflate
    User-Agent: Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 6.0)

    SERVER RESPONSE:
    HTTP/1.1 403 Forbidden
    Date: Wed, 19 Sep 2012 03:25:22 GMT
    Server: Apache/2.2.19 (Unix) mod_ssl/2.2.19 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 PHP/5.2.17
    Connection: close
    Content-Type: text/html; charset=iso-8859-1

    Final destination page: (it shows my actual homepage).

    What can I do? The hosting is the very same as for my other sites, where I didn't have any issue at all! Thanks for your help! Note: As I have a Drupal 7 site, I've tried a "Drupal solution" first, but haven't found any that solved this issue... How can it be forbidden when I can access the link perfectly OK? Is there any solution to this? Thanks!
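    Since a browser gets the page while Google's fetcher gets a 403, one thing worth ruling out (an assumption, not a confirmed cause) is User-Agent-based blocking by the host, e.g. mod_security or bot/hotlink rules. A quick test from any shell, using a hypothetical verification file name:

    curl -I http://www.mysite.com/google1234.html
    curl -I -A "Googlebot/2.1 (+http://www.google.com/bot.html)" http://www.mysite.com/google1234.html
    # If the second returns 403 while the first returns 200, the host is
    # filtering on User-Agent and the fix belongs in the Apache/mod_security rules.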

    Read the article

  • Deleting multiple objects in an AWS S3 bucket with s3curl.pl?

    - by user183394
    I have been trying to use the AWS "official" command line tool s3curl.pl to test out the recently announced multi-object delete. Here is what I have done. First, I tested s3curl.pl with a set of credentials without a hitch:

    $ s3curl.pl --id=s3 -- http://testbucket-0.s3.amazonaws.com/|xmllint --format -
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100   884    0   884    0     0   4399      0 --:--:-- --:--:-- --:--:--  5703
    <?xml version="1.0" encoding="UTF-8"?>
    <ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
      <Name>testbucket-0</Name>
      <Prefix/>
      <Marker/>
      <MaxKeys>1000</MaxKeys>
      <IsTruncated>false</IsTruncated>
      <Contents>
        <Key>file_1</Key>
        <LastModified>2012-03-22T17:08:17.000Z</LastModified>
        <ETag>"ee0e521a76524034aaa5b331842a8b4e"</ETag>
        <Size>400000</Size>
        <Owner>
          <ID>e6d81ea69572270e58d3814ab674df8c8f1fd5d502669633a4951bdd5185f7f4</ID>
          <DisplayName>zackp</DisplayName>
        </Owner>
        <StorageClass>STANDARD</StorageClass>
      </Contents>
      <Contents>
        <Key>file_2</Key>
        <LastModified>2012-03-22T17:08:19.000Z</LastModified>
        <ETag>"6b32cbf8219a59690a9f69ba6ff3f590"</ETag>
        <Size>600000</Size>
        <Owner>
          <ID>e6d81ea69572270e58d3814ab674df8c8f1fd5d502669633a4951bdd5185f7f4</ID>
          <DisplayName>zackp</DisplayName>
        </Owner>
        <StorageClass>STANDARD</StorageClass>
      </Contents>
    </ListBucketResult>

    Then I followed s3curl.pl's usage instructions:

    $ s3curl.pl --help
    Usage /usr/local/bin/s3curl.pl --id friendly-name (or AWSAccessKeyId) [options] -- [curl-options] [URL]
    options:
      --key SecretAccessKey       id/key are AWSAcessKeyId and Secret (unsafe)
      --contentType text/plain    set content-type header
      --acl public-read           use a 'canned' ACL (x-amz-acl header)
      --contentMd5 content_md5    add x-amz-content-md5 header
      --put <filename>            PUT request (from the provided local file)
      --post [<filename>]         POST request (optional local file)
      --copySrc bucket/key        Copy from this source key
      --createBucket [<region>]   create-bucket with optional location constraint
      --head                      HEAD request
      --debug                     enable debug logging
    common curl options:
      -H 'x-amz-acl: public-read' another way of using canned ACLs
      -v                          verbose logging

    Then I tried the following, and always got back an error. I would appreciate it very much if someone could point out where I made a mistake.

    $ s3curl.pl --id=s3 --post multi_delete.xml -- http://testbucket-0.s3.amazonaws.com/?delete
    <?xml version="1.0" encoding="UTF-8"?>
    <Error>
      <Code>SignatureDoesNotMatch</Code>
      <Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
      <StringToSignBytes>50 4f 53 54 0a 0a 0a 54 68 75 2c 20 30 35 20 41 70 72 20 32 30 31 32 20 30 30 3a 35 30 3a 30 38 20 2b 30 30 30 30 0a 2f 7a 65 74 74 61 72 2d 74 2f 3f 64 65 6c 65 74 65</StringToSignBytes>
      <RequestId>707FBE0EB4A571A8</RequestId>
      <HostId>mP3ZwlPTcRqARQZd6gU4UvBrxGBNIVa0VVe5p0rqGmq5hM65RprwcG/qcXe+pmDT</HostId>
      <SignatureProvided>edkNGuugiSFe0ku4eGzkh8kYgHw=</SignatureProvided>
      <StringToSign>POST

    Thu, 05 Apr 2012 00:50:08 +0000

    The file multi_delete.xml contains the following:

    $ cat multi_delete.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <Delete>
      <Quiet>true</Quiet>
      <Object>
        <Key>file_1</Key>
        <VersionId> </VersionId>>
      </Object>
      <Object>
        <Key>file_2</Key>
        <VersionId> </VersionId>
      </Object>
    </Delete>

    Thanks for any help! --Zack
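    For what it is worth, the decoded StringToSignBytes above end in "/?delete", so the subresource itself is being signed; the mismatch must come from something S3 includes in its own calculation that the script did not. Two usual suspects, both assumptions to verify rather than a confirmed diagnosis: curl silently adds "Content-Type: application/x-www-form-urlencoded" when posting a body (S3 signs Content-Type, while the string above shows it empty), and the Multi-Object Delete API requires an MD5 of the request body. Also note the stray ">" after the first </VersionId> in multi_delete.xml, which would make the XML invalid once the signature issue is resolved. A hedged sketch:

    # Compute the body MD5 and pass an explicit Content-Type so the signed
    # values match what curl actually sends; check whether your s3curl
    # version emits Content-MD5 or x-amz-content-md5 for --contentMd5:
    md5=$(openssl dgst -md5 -binary multi_delete.xml | openssl base64)
    s3curl.pl --id=s3 --post multi_delete.xml --contentMd5 "$md5" \
      --contentType "application/xml" -- \
      "http://testbucket-0.s3.amazonaws.com/?delete"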

    Read the article

  • Office add-on saves you time if you use Moodle

    - by Brian Scarbeau
    Moodle is a free e-learning content management software program. It can take a great deal of time to set up, because you need to upload your Office files to Moodle. Now Microsoft has made that job easier with their new Office add-in: with it you can save directly into Moodle. Here are the instructions on how to use it; just change the URL to your own Moodle site's.

    1. Go to this site and download and install the software: http://www.educationlabs.com/projects/officeaddinformoodle/Pages/default.aspx
    2. Open Office Word (in this example) and then select Save to Moodle. (Notice you can also open files that you have stored in Moodle, make changes, and then save back to Moodle. WOW.)
    3. Because this is the first time you are using this feature, you will see a dialog box that looks like this: enter the Moodle website address exactly as shown, along with your username and password for Moodle, and click the checkbox to be remembered.
    4. After you click Save to Moodle you should see a dialog box like this:
    5. Click the plus on the left (Lake Highland Preparatory School-Online Learning).
    6. You will now see the listing of your Moodle classes. Click the class the file should go to and save.

    Now you can use this file in Moodle. Good luck!

    Read the article

  • Asynchrony in C# 5: Dataflow Async Logger Sample

    - by javarg
    Check out these (very simple) code examples for TPL Dataflow. Suppose you are developing an async logger to register application events to different sinks or log writers. The logger architecture would be as follows: note how blocks can be composed to achieve the desired behavior. The BufferBlock<T> is the pool of log entries to be processed, whereas the linked ActionBlock<TInput> instances represent the log writers or sinks. This composition allows only one ActionBlock to consume each entry at a time. The implementation code would be something similar to the following (add a reference to System.Threading.Tasks.Dataflow.dll in %User Documents%\Microsoft Visual Studio Async CTP\Documentation):

    TPL Dataflow Logger

    var bufferBlock = new BufferBlock<Tuple<LogLevel, string>>();

    ActionBlock<Tuple<LogLevel, string>> infoLogger =
        new ActionBlock<Tuple<LogLevel, string>>(
            e => Console.WriteLine("Info: {0}", e.Item2));

    ActionBlock<Tuple<LogLevel, string>> errorLogger =
        new ActionBlock<Tuple<LogLevel, string>>(
            e => Console.WriteLine("Error: {0}", e.Item2));

    bufferBlock.LinkTo(infoLogger, e => (e.Item1 & LogLevel.Info) != LogLevel.None);
    bufferBlock.LinkTo(errorLogger, e => (e.Item1 & LogLevel.Error) != LogLevel.None);

    bufferBlock.Post(new Tuple<LogLevel, string>(LogLevel.Info, "info message"));
    bufferBlock.Post(new Tuple<LogLevel, string>(LogLevel.Error, "error message"));

    Note the filter applied to each link (in this case, the logging level selects the writer used). We can specify message filters using Predicate functions on each link. Now, the previous sample is useless for a logger, since logging levels are not exclusive (thus, several writers could be used to process a single message). Let's use a BroadcastBlock<T> instead of a BufferBlock<T>:

    Broadcast Logger

    var bufferBlock = new BroadcastBlock<Tuple<LogLevel, string>>(
        e => new Tuple<LogLevel, string>(e.Item1, e.Item2));

    ActionBlock<Tuple<LogLevel, string>> infoLogger =
        new ActionBlock<Tuple<LogLevel, string>>(
            e => Console.WriteLine("Info: {0}", e.Item2));

    ActionBlock<Tuple<LogLevel, string>> errorLogger =
        new ActionBlock<Tuple<LogLevel, string>>(
            e => Console.WriteLine("Error: {0}", e.Item2));

    ActionBlock<Tuple<LogLevel, string>> allLogger =
        new ActionBlock<Tuple<LogLevel, string>>(
            e => Console.WriteLine("All: {0}", e.Item2));

    bufferBlock.LinkTo(infoLogger, e => (e.Item1 & LogLevel.Info) != LogLevel.None);
    bufferBlock.LinkTo(errorLogger, e => (e.Item1 & LogLevel.Error) != LogLevel.None);
    bufferBlock.LinkTo(allLogger, e => (e.Item1 & LogLevel.All) != LogLevel.None);

    bufferBlock.Post(new Tuple<LogLevel, string>(LogLevel.Info, "info message"));
    bufferBlock.Post(new Tuple<LogLevel, string>(LogLevel.Error, "error message"));

    As this block copies the message to all its outputs, we need to define the copy function in the block constructor. In this case we create a new Tuple, but you can always use the identity function if passing the same reference to every output. Try both scenarios and compare the results.
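    The samples reference a LogLevel type whose definition is not shown in the excerpt; a minimal sketch that makes the bitwise filters above work (an assumption about the original post's enum, not its actual definition):

    // A [Flags] enum so levels can be combined and tested with "&":
    [Flags]
    enum LogLevel
    {
        None = 0,
        Info = 1,
        Error = 2,
        All = Info | Error
    }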

    Read the article

  • HTTP Headers - need to check OPTIONS

    - by chris
    I've received the results of a pen test, and there is some unwanted exposure in the HTTP OPTIONS response: the fact that FrontPage Server Extensions was available (now removed) was reported. I need to run a check on OPTIONS to verify that it has been removed. The test output from the report is below; I need to recreate it to establish that the exposure has gone, but I don't know how to check OPTIONS. I can only find tools that seem to check "HEAD / HTTP/1.1". Does anyone know how to test this? I'm running a Windows environment. Many thanks.

    OPTIONS / HTTP/1.1
    Host: www.website.com

    HTTP/1.1 200 OK
    Allow: OPTIONS, TRACE, GET, HEAD
    Content-Length: 0
    Server: Microsoft-IIS/6.0
    Public: OPTIONS, TRACE, GET, HEAD, POST
    MS-Author-Via: MS-FP/4.0
    X-Powered-By: ASP.NET
    MicrosoftOfficeWebServer: 5.0_Pub
    Date: Fri, 01 Feb 2010 16:09:15 GMT
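    One way to reproduce the pen-test request is curl, which runs fine on Windows (alternatively, telnet to port 80 and type the two request lines by hand):

    curl -i -X OPTIONS http://www.website.com/
    # -X OPTIONS sets the request method; -i prints the response headers,
    # so the Allow, Public and MS-Author-Via lines will be visible if the
    # FrontPage extensions are still advertised.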

    Read the article

  • Accessing Server-Side Data from Client Script: Accessing JSON Data From an ASP.NET Page Using jQuery

    When building a web application, we must decide how and when the browser will communicate with the web server. The ASP.NET WebForms model greatly simplifies web development by providing a straightforward mechanism for exchanging data between the browser and the server. With WebForms, each ASP.NET page's rendered output includes a <form> element that performs a postback to the same page whenever a Button control within the form is clicked, or whenever the user modifies a control whose AutoPostBack property is set to True. On postback, the server sends the entire contents of the web page back to the browser, which then displays this new content. With WebForms we don't need to spend much time or effort thinking about how or when the browser will communicate with the server or how that returned information will be processed by the browser. It just works. While this approach certainly works and has its advantages, it's not without its drawbacks. The primary concern with postback forms is that they require a large amount of information to be exchanged between the browser and the server. Specifically, the browser sends back all of its form fields (including hidden ones, like view state, which may be quite large) and then the server sends back the entire contents of the web page. Granted, there are scenarios where this large quantity of data needs to be exchanged, but in many cases we can use techniques that exchange much less information. However, these techniques necessitate spending more time and effort thinking about how and when to have the browser communicate with the server and intelligently deciding on what information needs to be exchanged. This article, the first in a multi-part series, examines different techniques for accessing server-side data from a browser using client-side script. Throughout this series we will explore alternative ways to expose data on the server so that it can be accessed from the browser using script; we will also examine various tools for communicating with the server from JavaScript, including jQuery and the ASP.NET AJAX library. Read on to learn more! Read More >
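    As a small taste of the lighter-weight exchanges this series builds toward, here is a minimal sketch of retrieving JSON from the server with jQuery (the endpoint name and response shape are placeholders, not taken from the article):

    $.ajax({
        url: "Products.ashx",      // hypothetical server-side endpoint returning JSON
        dataType: "json",
        success: function (products) {
            // Render each returned product name into an existing <ul id="output">
            $.each(products, function (i, p) {
                $("#output").append("<li>" + p.Name + "</li>");
            });
        }
    });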

    Read the article

  • Looking for a very bare and basic blog system?

    - by Shedo Chung-Hee Surashu
    Does anyone know of a user-hosted blog system that can be an alternative to WordPress? It should have only the bare necessities of a blog; I'll just take it from there. Specifically:

    Admin account (for posting, editing, etc.)
    Archive system
    Posting system with a character limit (for the Read More links)
    Accepts comments from other users (requiring only the user's name, email and/or website, and the actual comment)
    Pages (lets me create pages for custom content)

    The reason I want this is that WordPress is already so bloated that there are a ton of features I don't need. I'd mostly be satisfied with a blogging system that has the above feature set, and I'll just add my own features as I need them along the way.

    Read the article

  • Will using Apache's ProxyPass directive on persistent Ajax connections alleviate the connection limit error?

    - by naurus
    I've got some JavaScript that keeps a persistent Ajax connection open for each client, and I know that this can cause some serious issues for Apache, but not for lighttpd. One thing I learned while researching how to get around this was how to use the ProxyPass directive to send all requests for a certain directory to another address:port combination (without letting the user know). What I want to know is: if I put my PHP in a directory that is proxied to lighttpd and call that with JavaScript, will those requests still count against my Apache connection limit? The reason I wonder is that Apache is still serving the content, just not processing it. It seems to me that this would still be a connection. Thanks
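    For reference, a minimal sketch of the setup described above (the directory, host and port are illustrative, not from the question):

    # httpd.conf: hand /ajax/ off to a local lighttpd instance.
    # Requires mod_proxy and mod_proxy_http to be loaded.
    ProxyPass        /ajax/ http://127.0.0.1:8081/
    ProxyPassReverse /ajax/ http://127.0.0.1:8081/
    # ProxyPassReverse rewrites Location headers on redirects from the
    # backend so clients never see the internal address.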

    Read the article

  • BIP Debugging to a file

    - by Tim Dexter
    If you use the standalone server, or OBIEE with OC4J as the web server: have you ever taken a looksee at the console window (DOS box/xterm) that you use to start it? Ever turned on debugging and watched masses of info flow by that window, and wanted to capture it all? I have been debugging today and watched all that info fly by; on Windoze it gets lost before you can see it! The BIP developers use the System.out.println() and System.err.println() methods in the BIP applications to generate debugging information. Normally the output from these method calls goes to the console where the OC4J process is started. However, you can specify command line options when starting OC4J to direct the stdout and stderr output to files. The -out and -err parameters tell OC4J which files to direct the output to. All you need to do is modify the oc4j.cmd file used to start BIP. I didn't get fancy and just modified the line under the start section from:

    set CMDARGS=-config "%SERVER_XML%" -userThreads

    to:

    set CMDARGS=-config "%SERVER_XML%" -out D:\BI\OracleBI\oc4j_bi\j2ee\home\log\oc4j.out -err D:\BI\OracleBI\oc4j_bi\j2ee\home\log\oc4j.err -userThreads

    I bounced the server, and now I have a ballooning pair of debug files that I can pore over to my heart's content. The .out file appears to contain BIP-only log info, and the .err file, OBIEE messages. If you are using another web server to host BIP, just check the user docs to find out how to get the log files written. Note to self: remember to turn off the debug when I'm done!

    Read the article
