Search Results

Search found 6312 results on 253 pages for 'sad admin'.


  • How to properly deny Railo directory access through Apache

    - by Sn3akyP3t3
    I've been battle tested on this and failed to achieve my goal which is to deny all access to all directories except the Public directory and only allow access to all all other directories with specific IP addresses. To get Railo+Apache+Tomcat installed I pretty much followed this script: https://github.com/talltroym/Railo-Ubuntu-Installer-Script then verified settings with this tutorial: http://blog.nictunney.com/2012/03/railo-tomcat-and-apache-on-amazon-ec2.html From the installation script these mods are enabled: sudo a2enmod ssl sudo a2enmod proxy sudo a2enmod proxy_http sudo a2enmod rewrite sudo a2ensite default-ssl Outside of the script I copied the sites-available to sites-enabled then reloaded Apache. I have a directory created for Railo cmfl located at /var/www/Railo/ Navigating the browser to http ://Server_IP_Address/Railo forces ssl and relocates to https ://Server_IP_Address/Railo which shows off index.cfm. Not providing index.cfm and omitting https indicates that the DirectoryIndex directive and RewriteCond of Apache appears to be working for the sites-enabled VirtualHost. The problem I'm encountering is that I cannot seem to deny access to all directories except Public. My directory structure is rather simple and looks like this: Railo error Public NotPublic Sandbox These are my sites-enabled configurations: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www #Default Deny All to prevent walking backwards in file system Alias /Railo/ "/var/www/Railo/" <Directory ~ ".*/Railo/(?!Public).*"> Order Deny,Allow Deny from All </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html index.cfc RewriteEngine on RewriteCond %{SERVER_PORT} !^443$ RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R] </VirtualHost> and <IfModule mod_ssl.c> <VirtualHost _default_:443> ServerAdmin webmaster@localhost DocumentRoot /var/www Alias /Railo/ "/var/www/Railo/" <Directory ~ "/var/www/Railo/(?!Public).*"> Order Deny,Allow Deny from All </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. 
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. 
# o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. BrowserMatch "MSIE [2-6]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 # MSIE 7 and newer should be able to use keepalive BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html #Proxy .cfm and cfc requests to Railo ProxyPassMatch ^/(.+.cf[cm])(/.*)?$ http://127.0.0.1:8888/$1 ProxyPassReverse / http://127.0.0.1:8888/ #Deny access to admin except for local clients <Location /railo-context/admin/> Order deny,allow Deny from all # Allow from <Omitted> # Allow from <Omitted> Allow from 127.0.0.1 </Location> </VirtualHost> </IfModule> The apache2.conf includes the following: # Include the virtual host configurations: Include sites-enabled/ <IfModule !mod_jk.c> LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so </IfModule> <IfModule mod_jk.c> JkMount /*.cfm ajp13 JkMount /*.cfc ajp13 JkMount /*.do ajp13 JkMount /*.jsp ajp13 JkMount /*.cfchart ajp13 JkMount /*.cfm/* ajp13 JkMount /*.cfml/* ajp13 # Flex Gateway Mappings # JkMount /flex2gateway/* ajp13 # JkMount /flashservices/gateway/* ajp13 # JkMount /messagebroker/* ajp13 JkMountCopy all JkLogFile /var/log/apache2/mod_jk.log </IfModule> I believe I understand most of this except the jk_module inclusion which I've noticed has an error that shows up in the logs that I can't sort out: [warn] No JkShmFile defined in httpd.conf. 
    Using default /etc/apache2/logs/jk-runtime-status. I've checked my regular expression against the directory paths with RegexBuddy just to be sure it was correct. The problem doesn't appear to be regex related, although I may have something incorrect in the Directory directive. The Location directive seems to be working correctly for blocking access to the Railo admin site.
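    For reference, one common way to express "deny everything under Railo except Public" without a negative-lookahead regex is to let Apache's <Directory> merging do the work: literal <Directory> sections are merged from the shortest to the longest path, so the more specific section overrides its parent. The sketch below reuses the Apache 2.2 Order/Deny/Allow syntax already present in the question; the allowed address is a placeholder, and the snippet is an untested sketch against the layout described above, not a verified fix.

        # Deny the whole Railo tree by default; allow only trusted admin IPs
        <Directory /var/www/Railo>
            Order Deny,Allow
            Deny from all
            # Allow from 203.0.113.10   (placeholder for your trusted IP addresses)
        </Directory>

        # The more specific section wins, so Public stays open to everyone
        <Directory /var/www/Railo/Public>
            Order Allow,Deny
            Allow from all
        </Directory>

    Note that requests handled by ProxyPassMatch or mod_jk are proxied rather than mapped to the filesystem, so .cfm URLs may bypass these <Directory> rules and can still need a matching <Location> or proxy-level restriction.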

    Read the article

  • CodePlex Daily Summary for Thursday, May 31, 2012

    CodePlex Daily Summary for Thursday, May 31, 2012Popular ReleasesNaked Objects: Naked Objects Release 4.1.0: Corresponds to the packaged version 4.1.0 available via NuGet. Note that the versioning has moved to SemVer (http://semver.org/) This is a bug fix release with no new functionality. Please note that the easiest way to install and run the Naked Objects Framework is via the NuGet package manager: just search the Official NuGet Package Source for 'nakedobjects'. It is only necessary to download the source code (from here) if you wish to modify or re-build the framework yourself. If you do wi...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.54: Fix for issue #18161: pretty-printing CSS @media rule throws an exception due to mismatched Indent/Unindent pair.OMS.Ice - T4 Text Template Generator: OMS.Ice - T4 Text Template Generator v1.4.0.14110: Issue 601 - Template file name cannot contain characters that are not allowed in C#/VB identifiers Issue 625 - Last line will be ignored by the parser Issue 626 - Usage of environment variables and macrosSilverlight Toolkit: Silverlight 5 Toolkit Source - May 2012: Source code for December 2011 Silverlight 5 Toolkit release.totalem: version 2012.05.30.1: Beta version added function for mass renaming files and foldersAudio Pitch & Shift: Audio Pitch and Shift 4.4.0: Tracklist added on main window Improved performances with tracklist Some other fixesJson.NET: Json.NET 4.5 Release 6: New feature - Added IgnoreDataMemberAttribute support New feature - Added GetResolvedPropertyName to DefaultContractResolver New feature - Added CheckAdditionalContent to JsonSerializer Change - Metro build now always uses late bound reflection Change - JsonTextReader no longer returns no content after consecutive underlying content read failures Fix - Fixed bad JSON in an array with error handling creating an infinite loop Fix - Fixed deserializing objects with a non-default cons...Indent Guides for Visual Studio: Indent Guides v12.1: Version History Changed in v12.1: Fixed crash when unable to start asynchronous analysis Fixed upgrade from v11 Changed in v12: background document analysis new options dialog with Quick Set selections for behavior new "glow" style for guides new menu icon in VS 11 preview control now uses editor theming highlighting can be customised on each line fixed issues with collapsed code blocks improved behaviour around left-aligned pragma/preprocessor commands (C#/C++) new setting...DotNetNuke® Community Edition CMS: 06.02.00: Major Highlights Fixed issue in the Site Settings when single quotes were being treated as escape characters Fixed issue loading the Mobile Premium Data after upgrading from CE to PE Fixed errors logged when updating folder provider settings Fixed the order of the mobile device capabilities in the Site Redirection Management UI The User Profile page was completely rebuilt. We needed User Profiles to have multiple child pages. This would allow for the most flexibility by still f...Thales Simulator Library: Version 0.9.6: The Thales Simulator Library is an implementation of a software emulation of the Thales (formerly Zaxus & Racal) Hardware Security Module cryptographic device. This release fixes a problem with the FK command and a bug in the implementation of PIN block 05 format deconstruction. A new 0.9.6.Binaries file has been posted. This includes executable programs without an installer, including the GUI and console simulators, the key manager and the PVV clashing demo. 
Please note that you will need ...????: ????2.0.1: 1、?????。WiX Toolset: WiX v3.6 RC: WiX v3.6 RC (3.6.2928.0) provides feature complete Burn with VS11 support. For more information see Rob's blog post about the release: http://robmensching.com/blog/posts/2012/5/28/WiX-v3.6-Release-Candidate-availableJavascript .NET: Javascript .NET v0.7: SetParameter() reverts to its old behaviour of allowing JavaScript code to add new properties to wrapped C# objects. The behavior added briefly in 0.6 (throws an exception) can be had via the new SetParameterOptions.RejectUnknownProperties. TerminateExecution now uses its isolate to terminate the correct context automatically. Added support for converting all C# integral types, decimal and enums to JavaScript numbers. (Previously only the common types were handled properly.) Bug fixe...callisto: callisto 2.0.29: Added DNS functionality to scripting. See documentation section for details of how to incorporate this into your scripts.Phalanger - The PHP Language Compiler for the .NET Framework: 3.0 (May 2012): Fixes: unserialize() of negative float numbers fix pcre possesive quantifiers and character class containing ()[] array deserilization when the array contains a reference to ISerializable parsing lambda function fix round() reimplemented as it is in PHP to avoid .NET rounding errors filesize bypass for FileInfo.Length bug in Mono New features: Time zones reimplemented, uses Windows/Linux databaseSharePoint Euro 2012 - UEFA European Football Predictor: havivi.euro2012.wsp (1.1): New fetures:Admin enable / disable match Hide/Show Euro 2012 SharePoint lists (3 lists) Installing SharePoint Euro 2012 PredictorSharePoint Euro 2012 Predictor has been developed as a SharePoint Sandbox solution to support SharePoint Online (Office 365) Download the solution havivi.euro2012.wsp from the download page: Downloads Upload this solution to your Site Collection via the solutions area. Click on Activate to make the web parts in the solution available for use in the Site C...ScreenShot: InstallScreenShot: This is the current stable release.????SDK for .Net 4.0+(OAuth2.0+??V2?API): ??V2?SDK???: ?????????API?? ???????OAuth2.0?? ????:????????????,??????????“SOURCE CODE”?????????Changeset,http://weibosdk.codeplex.com/SourceControl/list/changesets ???:????????,DEMO??AppKey????????????????,?????AppKey,????AppKey???????????,?????“????>????>????>??????”.Net Code Samples: Code Samples: Code samples (SLNs).LINQ_Koans: LinqKoans v.02: Cleaned up a bitNew ProjectsAAABBBCCC: hauhuhaAutomação e Robótica: Sistema de automação e robótica de periféricos. todo o projeto consiste em integrar interface de controle pelo software do hardware para realização de comando e acionamentos eletricos e eletromecânicos.Barium Live! API Client: Simple test client for the Barium Live! REST API Contact Barium Live! support on http://www.bariumlive.com/support to get an API key for development. Please note that the test client has a dependency to the Newtonsoft Json.NET library that has to be downloaded separately from http://json.codeplex.com/BizTalk Host Restarter: This is a quick tool that I wrote to more quickly Stop, Start and Restart the BizTalk Host instances. It's command line, with full source code, I use it in my build scripts and deploy scripts, works a treat. MUCH faster than the BizTalk Admin Console can do the same task. CNAF Portal: CNAFPortalConsulta_Productos: Nos permite ingresar un código del 1 al 10..... 
y al dar clic en buscar nos aparecerá un mensaje con el nombre del producto de dicho código.Deployment Tool: Deployement is a small piece in our IT lifecycle for many small projects. But, considering that you are working for a large project, it has many servers, softwares, configurations, databases and so on. The problem emits, even you have upgrades and version controls. The deploymentool is a solution which can help you resolve the difficulties described above.Devmil MVVM: A toolset for MVVM development with focus on INotifyPropertyChanged and dependenciesEAL: no summaryEasyCLI: EasyCLI is a command shell for the Commodore 64 computer. EasyCLI is packaged as an EasyFlash cartridge.ExcelSharePointListSync: This tool help is Getting Data Form SharePoint to Excel 2010 , Following are the silent feature : 1) Maintain a List connection Offline 2) Choose to get data Form a View of Form Columns 3) Sync the data Back to SharePoint GH: Some summaryicontrolpcv: icontrolpvcLotteryVoteMVC: LotteryVote use mvcMail API for Windows Phone 7: Library that allows developers to add some email functionality to their application, like sending and reading email messages, with the capability of adding attachments to the messages. Matchy - Fashion Website: Fashion website for Match Brand, a fashion brand of BOOM JSC.,MIracLE Guild Admin System: asdOrchard Content Picker: Orchard Content Picker enables content editor to select Content Items.ProSoft Mail Services: This project implements a mail server that allows sending mail with SMTP and access the mailbox via POP. Later on, the server also get a MAPI interface and an interface for SPAM defense.Silver Sugar Web Browser: This is a simple web browser named "Silver Sugar". It has a back button, forward button, home button, url text box and a go button.social sdk for windows phone: Windows Phone Social Share Lib. (Including Weibo, Tencent Weibo & Renren)SolutionX: dfsdf asdfds asdfsdf asdfsdfspUtils: This project aims to facilitate easy to use methods that interact with SharePoint's Client OM/JSOM.Streambolics Library: A library of C# classes providing robust tools for real-time data monitoringTestExpression: TestExpressionTFS Helper package: VS2010 is well integrated to TFS and has all the features required on day to day basis. Most common feature any developer use is managing workitems. Creating work items is very easy as is copying work items, but what I need is to copy workitem between projects retaining links and attachments. I am sure this is not Ideal and against sprint projects.Ubelsoft: Sistema de Registro Eletrônico de Ponto - SREPZombieSim: Simulate the spread of a zombie virusZYWater: ......................

    Read the article

  • .NET HTML Sanitation for rich HTML Input

    - by Rick Strahl
    Recently I was working on updating a legacy application to MVC 4 that included free form text input. When I set up the new site my initial approach was to not allow any rich HTML input, only simple text formatting that would respect a few simple HTML commands for bold, lists etc. and automatically handle line break processing for new lines and paragraphs. This is typical for what I do with most multi-line text input in my apps and it works very well with very little development effort involved. Then the client sprung another note: Oh by the way we have a bunch of customers (real estate agents) who need to post complete HTML documents. Oh uh! There goes the simple theory. After some discussion and pleading on my part (<snicker>) to try and avoid this type of raw HTML input because of potential XSS issues, the client decided to go ahead and allow raw HTML input anyway. There have been lots of discussions on this subject on StackOverflow (and here and here) but after reading through some of the solutions I didn't really find anything that would work even closely for what I needed. Specifically we need to be able to allow just about any HTML markup, with the exception of script code. Remote CSS and images need to be loaded, links need to work and so on. While the 'legit' HTML posted by these agents is basic in nature it does span most of the full gamut of HTML (4). Most of the XSS prevention/sanitizer solutions I found were way too aggressive and rendered the posted output unusable, mostly because they tend to strip any externally loaded content. In short I needed a custom solution. I thought the best solution to this would be to use an HTML parser - in this case the Html Agility Pack - and then to run through all the HTML markup provided and remove any of the blacklisted tags and a number of attributes that are prone to JavaScript injection. There's much discussion on whether to use blacklists vs. whitelists in the discussions mentioned above, but I found that whitelists can make sense in simple scenarios where you might allow manual HTML input, but when you need to allow a larger array of HTML functionality a blacklist is probably easier to manage as the vast majority of elements and attributes could be allowed. Also whitelisting gets a bit more complex with HTML5 and the proliferation of new HTML tags, and most new tags generally don't affect XSS issues directly. Pure whitelisting based on elements and attributes also doesn't capture many edge cases (see some of the XSS cheat sheets listed below), so even with a whitelist, custom logic is still required to handle many of those edge cases. The Microsoft Web Protection Library (AntiXSS) My first thought was to check out the Microsoft AntiXSS library. Microsoft has an HTML Encoding and Sanitation library in the Microsoft Web Protection Library (formerly AntiXSS Library) on CodePlex, which provides stricter functions for whitelist encoding and sanitation. Initially I thought the Sanitation class and its static members would do the trick for me, but I found that this library is way too restrictive for my needs. Specifically the Sanitation class strips out images and links, which rendered the full HTML from our real estate clients completely useless. I didn't spend much time with it, but apparently I'm not alone in feeling this library is not really useful without some way to configure its operation.
To give you an example of what didn't work for me with the library here's a small and simple HTML fragment that includes script, img and anchor tags. I would expect the script to be stripped and everything else to be left intact. Here's the original HTML:var value = "<b>Here</b> <script>alert('hello')</script> we go. Visit the " + "<a href='http://west-wind.com'>West Wind</a> site. " + "<img src='http://west-wind.com/images/new.gif' /> " ; and the code to sanitize it with the AntiXSS Sanitize class:@Html.Raw(Microsoft.Security.Application.Sanitizer.GetSafeHtmlFragment(value)) This produced a not so useful sanitized string: Here we go. Visit the <a>West Wind</a> site. While it removed the <script> tag (good) it also removed the href from the link and the image tag altogether (bad). In some situations this might be useful, but for most tasks I doubt this is the desired behavior. While links can contain javascript: references and images can 'broadcast' information to a server, without configuration to tell the library what to restrict this becomes useless to me. I couldn't find any way to customize the white list, nor is there code available in this 'open source' library on CodePlex. Using Html Agility Pack for HTML Parsing The WPL library wasn't going to cut it. After doing a bit of research I decided the best approach for a custom solution would be to use an HTML parser and inspect the HTML fragment/document I'm trying to import. I've used the HTML Agility Pack before for a number of apps where I needed an HTML parser without requiring an instance of a full browser like the Internet Explorer Application object which is inadequate in Web apps. In case you haven't checked out the Html Agility Pack before, it's a powerful HTML parser library that you can use from your .NET code. It provides a simple, parsable HTML DOM model to full HTML documents or HTML fragments that let you walk through each of the elements in your document. If you've used the HTML or XML DOM in a browser before you'll feel right at home with the Agility Pack. Blacklist based HTML Parsing to strip XSS Code For my purposes of HTML sanitation, the process involved is to walk the HTML document one element at a time and then check each element and attribute against a blacklist. There's quite a bit of argument of what's better: A whitelist of allowed items or a blacklist of denied items. While whitelists tend to be more secure, they also require a lot more configuration. In the case of HTML5 a whitelist could be very extensive. For what I need, I only want to ensure that no JavaScript is executed, so a blacklist includes the obvious <script> tag plus any tag that allows loading of external content including <iframe>, <object>, <embed> and <link> etc. <form>  is also excluded to avoid posting content to a different location. I also disallow <head> and <meta> tags in particular for my case, since I'm only allowing posting of HTML fragments. There is also some internal logic to exclude some attributes or attributes that include references to JavaScript or CSS expressions. The default tag blacklist reflects my use case, but is customizable and can be added to. 
    Here's my HtmlSanitizer implementation:

    using System.Collections.Generic;
    using System.IO;
    using System.Xml;
    using HtmlAgilityPack;

    namespace Westwind.Web.Utilities
    {
        public class HtmlSanitizer
        {
            public HashSet<string> BlackList = new HashSet<string>()
            {
                { "script" }, { "iframe" }, { "form" }, { "object" },
                { "embed" }, { "link" }, { "head" }, { "meta" }
            };

            /// <summary>
            /// Cleans up an HTML string and removes HTML tags in blacklist
            /// </summary>
            /// <param name="html"></param>
            /// <returns></returns>
            public static string SanitizeHtml(string html, params string[] blackList)
            {
                var sanitizer = new HtmlSanitizer();
                if (blackList != null && blackList.Length > 0)
                {
                    sanitizer.BlackList.Clear();
                    foreach (string item in blackList)
                        sanitizer.BlackList.Add(item);
                }
                return sanitizer.Sanitize(html);
            }

            /// <summary>
            /// Cleans up an HTML string by removing elements
            /// on the blacklist and all elements that start
            /// with onXXX.
            /// </summary>
            /// <param name="html"></param>
            /// <returns></returns>
            public string Sanitize(string html)
            {
                var doc = new HtmlDocument();
                doc.LoadHtml(html);
                SanitizeHtmlNode(doc.DocumentNode);

                //return doc.DocumentNode.WriteTo();

                string output = null;

                // Use an XmlTextWriter to create self-closing tags
                using (StringWriter sw = new StringWriter())
                {
                    XmlWriter writer = new XmlTextWriter(sw);
                    doc.DocumentNode.WriteTo(writer);
                    output = sw.ToString();

                    // strip off XML doc header
                    if (!string.IsNullOrEmpty(output))
                    {
                        int at = output.IndexOf("?>");
                        output = output.Substring(at + 2);
                    }

                    writer.Close();
                }
                doc = null;

                return output;
            }

            private void SanitizeHtmlNode(HtmlNode node)
            {
                if (node.NodeType == HtmlNodeType.Element)
                {
                    // check for blacklist items and remove
                    if (BlackList.Contains(node.Name))
                    {
                        node.Remove();
                        return;
                    }

                    // remove CSS Expressions and embedded script links
                    if (node.Name == "style")
                    {
                        if (string.IsNullOrEmpty(node.InnerText))
                        {
                            if (node.InnerHtml.Contains("expression") || node.InnerHtml.Contains("javascript:"))
                                node.ParentNode.RemoveChild(node);
                        }
                    }

                    // remove script attributes
                    if (node.HasAttributes)
                    {
                        for (int i = node.Attributes.Count - 1; i >= 0; i--)
                        {
                            HtmlAttribute currentAttribute = node.Attributes[i];

                            var attr = currentAttribute.Name.ToLower();
                            var val = currentAttribute.Value.ToLower();

                            // remove event handlers
                            if (attr.StartsWith("on"))
                                node.Attributes.Remove(currentAttribute);

                            // remove script links
                            else if (
                                     //(attr == "href" || attr== "src" || attr == "dynsrc" || attr == "lowsrc") &&
                                     val != null && val.Contains("javascript:"))
                                node.Attributes.Remove(currentAttribute);

                            // Remove CSS Expressions
                            else if (attr == "style" &&
                                     val != null && val.Contains("expression") || val.Contains("javascript:") || val.Contains("vbscript:"))
                                node.Attributes.Remove(currentAttribute);
                        }
                    }
                }

                // Look through child nodes recursively
                if (node.HasChildNodes)
                {
                    for (int i = node.ChildNodes.Count - 1; i >= 0; i--)
                    {
                        SanitizeHtmlNode(node.ChildNodes[i]);
                    }
                }
            }
        }
    }

    Please note: Use this as a starting point only for your own parsing and review the code for your specific use case! If your needs are less lenient than mine were you can make this much stricter by not allowing src and href attributes or CSS links if your HTML doesn't allow it. You can also check links for external URLs and disallow those - lots of options. The code is simple enough to make it easy to extend to fit your use cases more specifically. It's also quite easy to make this code work using a WhiteList approach if you want to go that route.
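    To make the intended call pattern explicit, here is a minimal usage sketch for the class above. The input string is an illustrative example, and the Razor line at the end simply mirrors the @Html.Raw() pattern shown earlier for the AntiXSS sample (Model.UserHtml is a placeholder property).

        // Minimal usage sketch for the HtmlSanitizer shown above (illustrative input only).
        string userHtml = "<b>Hi</b> <script>alert('xss')</script> <a href='javascript:alert(1)'>link</a>";

        // Instance usage with the default blacklist:
        var sanitizer = new Westwind.Web.Utilities.HtmlSanitizer();
        string safeHtml = sanitizer.Sanitize(userHtml);

        // Static helper with a custom blacklist (note: it replaces the default list entirely):
        string stricter = Westwind.Web.Utilities.HtmlSanitizer.SanitizeHtml(
            userHtml, "script", "iframe", "form", "object", "embed", "link", "head", "meta", "style");

        // In a Razor view the sanitized result would be emitted with Html.Raw, as in the earlier sample:
        // @Html.Raw(Westwind.Web.Utilities.HtmlSanitizer.SanitizeHtml(Model.UserHtml))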
The code above is semi-generic for allowing full featured HTML fragments that only disallow script related content. The Sanitize method walks through each node of the document and then recursively drills into all of its children until the entire document has been traversed. Note that the code here uses an XmlTextWriter to write output - this is done to preserve XHTML style self-closing tags which are otherwise left as non-self-closing tags. The sanitizer code scans for blacklist elements and removes those elements not allowed. Note that the blacklist is configurable either in the instance class as a property or in the static method via the string parameter list. Additionally the code goes through each element's attributes and looks for a host of rules gleaned from some of the XSS cheat sheets listed at the end of the post. Clearly there are a lot more XSS vulnerabilities, but a lot of them apply to ancient browsers (IE6 and versions of Netscape) - many of these glaring holes (like CSS expressions - WTF IE?) have been removed in modern browsers. What a Pain To be honest this is NOT a piece of code that I wanted to write. I think building anything related to XSS is better left to people who have far more knowledge of the topic than I do. Unfortunately, I was unable to find a tool that worked even closely for me, or even provided a working base. For the project I was working on I had no choice and I'm sharing the code here merely as a base line to start with and potentially expand on for specific needs. It's sad that Microsoft Web Protection Library is currently such a train wreck - this is really something that should come from Microsoft as the systems vendor or possibly a third party that provides security tools. Luckily for my application we are dealing with a authenticated and validated users so the user base is fairly well known, and relatively small - this is not a wide open Internet application that's directly public facing. As I mentioned earlier in the post, if I had my way I would simply not allow this type of raw HTML input in the first place, and instead rely on a more controlled HTML input mechanism like MarkDown or even a good HTML Edit control that can provide some limits on what types of input are allowed. Alas in this case I was overridden and we had to go forward and allow *any* raw HTML posted. Sometimes I really feel sad that it's come this far - how many good applications and tools have been thwarted by fear of XSS (or worse) attacks? So many things that could be done *if* we had a more secure browser experience and didn't have to deal with every little script twerp trying to hack into Web pages and obscure browser bugs. 
    So much time wasted building secure apps, so much time wasted by others trying to hack apps… We're a funny species - no other species manages to waste as much time, effort and resources as we humans do :-) Resources: Code on GitHub, Html Agility Pack, XSS Cheat Sheet, XSS Prevention Cheat Sheet, Microsoft Web Protection Library (AntiXss). StackOverflow links: http://stackoverflow.com/questions/341872/html-sanitizer-for-net http://blog.stackoverflow.com/2008/06/safe-html-and-xss/ http://code.google.com/p/subsonicforums/source/browse/trunk/SubSonic.Forums.Data/HtmlScrubber.cs?r=61 © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Security, HTML, ASP.NET, JavaScript.

    Read the article

  • Oracle WebCenter Portlet Debugging

    - by Alexander Rudat
    Introduction: This article describes how to debug a portlet that is already deployed to a WebLogic server using Oracle JDeveloper 11g. Overview: These are the basic steps involved in remote debugging a WebCenter portlet deployed in WebLogic: configuration of WebLogic to support remote debugging, configuration of the portlet project in JDeveloper, and the actual debugging of the portlet. Configuration of WebLogic: To start the WebLogic server in debugging mode, there are a couple of configuration changes that need to be made to the WebLogic domain where the portlet is deployed. First we need to edit the JVM options of the WebLogic server startup script where the portlet is deployed. Normally startManagedWebLogic.cmd is used to start this managed server. This startup script is located in the %MIDDLEWARE_HOME%\user_projects\domains\<domain_name>\bin directory, where %MIDDLEWARE_HOME% is the installation directory of WebLogic. Add the following line before the set JAVA_OPTIONS= line: set REMOTE_DEBUG_JAVA_OPTIONS=-Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,address=4000,server=y,suspend=n Change the set JAVA_OPTIONS= line to read like the one below: set JAVA_OPTIONS=%SAVE_JAVA_OPTIONS% %ADF_JAVA_OPTIONS% %REMOTE_DEBUG_JAVA_OPTIONS% After these changes, save the startup script, start the managed server, and make sure that you have access to the admin console (for example http://localhost:7001/console). Finally we need to check that HTTP tunneling is enabled on the managed server. To do this, log in to the admin console, select the managed server and select the Protocols tab. Be sure that Enable Tunneling is selected. Configuration of the portlet: First let's create a new run configuration specifically for remote debugging. Double-click the project where your portlets are developed. In the Project Properties select the Run/Debug/Profile page. Click New... to create a new run configuration. In the Create Run Configuration dialog enter Remote Debugging for the name of the run configuration. Leave the Copy Settings From selection at Default and click OK to create the new run configuration. Once the Remote Debugging run configuration is created, select it in the Run Configurations and click Edit... to bring up the Edit Run Configuration dialog. In the Launch Settings page click the Remote Debugging checkbox to enable remote debugging for this run configuration. Finally select the Remote page and verify that the Protocol is set to Attach to JPDA and the port matches the port specified earlier when configuring WebLogic for remote debugging (defaults to 4000). Actual debugging of the portlet: To start the remote debugging profile, right-click on your portlet project and select Start Remote Debugger. JDeveloper will now ask for the host and port; if your WebLogic server is installed locally, you can accept the default settings. Set a breakpoint in your Java code and run the portal (WebCenter) application where the portlet is used. That's all, now you are able to debug the portlet Java code.
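    Condensed from the steps above, the edited section of startManagedWebLogic.cmd would look roughly like this (port 4000 as used in the article; suspend=n means the managed server starts without waiting for a debugger to attach):

        rem %MIDDLEWARE_HOME%\user_projects\domains\<domain_name>\bin\startManagedWebLogic.cmd

        rem Added line (placed before the existing "set JAVA_OPTIONS=" line):
        set REMOTE_DEBUG_JAVA_OPTIONS=-Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,address=4000,server=y,suspend=n

        rem Existing line, extended to append the remote debug options:
        set JAVA_OPTIONS=%SAVE_JAVA_OPTIONS% %ADF_JAVA_OPTIONS% %REMOTE_DEBUG_JAVA_OPTIONS%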
    Hope you will find all errors in your portlet :-) References: http://www.oracle.com/technology/products/jdev/howtos/weblogic/remotedebugwls.html

    Read the article

  • WordPress not resizing images with Nginx + php-fpm and other issues

    - by Julian Fernandes
    Recently i setup a Ubuntu 12.04 VPS with 512mb/1ghz CPU, Nginx + php-fpm + Varnish + APC + Percona's MySQL server + CloudFlare Pro for our Ubuntu LoCo Team's WordPress blog. The blog get about 3~4k daily hits, use about 180MB and 8~20% CPU. Everything seems to be working insanely fast... page load is really good and is about 16x faster than any of our competitors... but there is one problem. When we upload a image, WordPress don't resize it, so all we can do it insert the full image in the post. If the imagem have, let's say, 30kb, it resize fine... but if the image have 100kb+, it won't... In nginx error logs i see this: upstream timed out (110: Connection timed out) while reading response header from upstream, client: 150.162.216.64, server: www.ubuntubrsc.com, request: "POST /wp-admin/async-upload.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.ubuntubrsc.com", referrer: "http://www.ubuntubrsc.com/wp-admin/media-upload.php?post_id=2668&" It seems to be related with the issue, but i dunno. When that timeout happens, i started to get it when i'm trying to view a post too: upstream timed out (110: Connection timed out) while reading response header from upstream, client: 150.162.216.64, server: www.ubuntubrsc.com, request: "GET /tutoriais-gimp-6-adicionando-aplicando-novos-pinceis.html HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.ubuntubrsc.com", referrer: "http://www.ubuntubrsc.com/" And only a restart of php5-fpm fix it. I tryed increasing some timeouts and stuffs but it did not worked, so i guess it's some kind of limitation i did not figured yet. Could someone help me with it, please? /etc/nginx/nginx.conf: user www-data; worker_processes 1; pid /var/run/nginx.pid; events { worker_connections 1024; use epoll; multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay off; keepalive_timeout 15; keepalive_requests 2000; types_hash_max_size 2048; server_tokens off; server_name_in_redirect off; open_file_cache max=1000 inactive=300s; open_file_cache_valid 360s; open_file_cache_min_uses 2; open_file_cache_errors off; server_names_hash_bucket_size 64; # server_name_in_redirect off; client_body_buffer_size 128K; client_header_buffer_size 1k; client_max_body_size 2m; large_client_header_buffers 4 8k; client_body_timeout 10m; client_header_timeout 10m; send_timeout 10m; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## error_log /var/log/nginx/error.log; access_log off; ## # CloudFlare's IPs (uncomment when site goes live) ## set_real_ip_from 204.93.240.0/24; set_real_ip_from 204.93.177.0/24; set_real_ip_from 199.27.128.0/21; set_real_ip_from 173.245.48.0/20; set_real_ip_from 103.22.200.0/22; set_real_ip_from 141.101.64.0/18; set_real_ip_from 108.162.192.0/18; set_real_ip_from 190.93.240.0/20; real_ip_header CF-Connecting-IP; set_real_ip_from 127.0.0.1/32; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; gzip_vary on; gzip_proxied any; gzip_comp_level 9; gzip_min_length 1000; gzip_proxied expired no-cache no-store private auth; gzip_buffers 32 8k; # gzip_http_version 1.1; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## # 
Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } /etc/nginx/fastcgi_params: fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param HTTPS $https; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 256 4k; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; /etc/nginx/sites-avaiable/default: ## # DEFAULT HANDLER # ubuntubrsc.com ## server { listen 8080; # Make site available from main domain server_name www.ubuntubrsc.com; # Root directory root /var/www; index index.php index.html index.htm; include /var/www/nginx.conf; access_log off; location / { try_files $uri $uri/ /index.php?q=$uri&$args; } location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { allow all; log_not_found off; access_log off; } location ~ /\. { deny all; access_log off; log_not_found off; } location ~* ^/wp-content/uploads/.*.php$ { deny all; access_log off; log_not_found off; } rewrite /wp-admin$ $scheme://$host$uri/ permanent; error_page 404 = @wordpress; log_not_found off; location @wordpress { include /etc/nginx/fastcgi_params; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_param SCRIPT_NAME /index.php; fastcgi_param SCRIPT_FILENAME $document_root/index.php; } location ~ \.php$ { try_files $uri =404; include /etc/nginx/fastcgi_params; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; if (-f $request_filename) { fastcgi_pass unix:/var/run/php5-fpm.sock; } } } server { listen 8080; server_name ubuntubrsc.* www.ubuntubrsc.net www.ubuntubrsc.org www.ubuntubrsc.com.br www.ubuntubrsc.info www.ubuntubrsc.in; return 301 $scheme://www.ubuntubrsc.com$request_uri; } /var/www/nginx.conf: # BEGIN W3TC Minify cache location ~ /wp-content/w3tc/min.*\.js$ { types {} default_type application/x-javascript; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; } location ~ /wp-content/w3tc/min.*\.css$ { types {} default_type text/css; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; } location ~ /wp-content/w3tc/min.*js\.gzip$ { gzip off; types {} default_type application/x-javascript; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header Content-Encoding gzip; } location ~ 
/wp-content/w3tc/min.*css\.gzip$ { gzip off; types {} default_type text/css; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header Content-Encoding gzip; } # END W3TC Minify cache # BEGIN W3TC Browser Cache gzip on; gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon; location ~ \.(css|js|htc)$ { expires 31536000s; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; } location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ { expires 3600s; add_header Pragma "public"; add_header Cache-Control "max-age=3600, public, must-revalidate, proxy-revalidate"; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; try_files $uri $uri/ $uri.html /index.php?$args; } location ~ \.(asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$ { expires 31536000s; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; } # END W3TC Browser Cache # BEGIN W3TC Minify core rewrite ^/wp-content/w3tc/min/w3tc_rewrite_test$ /wp-content/w3tc/min/index.php?w3tc_rewrite_test=1 last; set $w3tc_enc ""; if ($http_accept_encoding ~ gzip) { set $w3tc_enc .gzip; } if (-f $request_filename$w3tc_enc) { rewrite (.*) $1$w3tc_enc break; } rewrite ^/wp-content/w3tc/min/(.+\.(css|js))$ /wp-content/w3tc/min/index.php?file=$1 last; # END W3TC Minify core # BEGIN W3TC Skip 404 error handling by WordPress for static files if (-f $request_filename) { break; } if (-d $request_filename) { break; } if ($request_uri ~ "(robots\.txt|sitemap(_index)?\.xml(\.gz)?|[a-z0-9_\-]+-sitemap([0-9]+)?\.xml(\.gz)?)") { break; } if ($request_uri ~* \.(css|js|htc|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml|asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$) { return 404; } # END W3TC Skip 404 error handling by WordPress for static files # BEGIN Better WP Security location ~ /\.ht { deny all; } location ~ wp-config.php { deny all; } location ~ readme.html { deny all; } location ~ readme.txt { deny all; } location ~ /install.php { deny all; } set $susquery 0; set $rule_2 0; set $rule_3 0; rewrite ^wp-includes/(.*).php /not_found last; rewrite ^/wp-admin/includes(.*)$ /not_found last; if ($request_method ~* "^(TRACE|DELETE|TRACK)"){ return 403; } set $rule_0 0; if ($request_method ~ "POST"){ set $rule_0 1; } if ($uri ~ "^(.*)wp-comments-post.php*"){ set $rule_0 2$rule_0; } if ($http_user_agent ~ "^$"){ set $rule_0 4$rule_0; } if ($rule_0 = "421"){ return 403; } if ($args ~* "\.\./") { set $susquery 1; } if ($args ~* "boot.ini") { set $susquery 1; } if ($args ~* "tag=") { set $susquery 1; } if ($args ~* "ftp:") { set $susquery 1; } if ($args ~* "http:") { set $susquery 1; } if ($args ~* "https:") { set $susquery 1; } if ($args ~* 
"(<|%3C).*script.*(>|%3E)") { set $susquery 1; } if ($args ~* "mosConfig_[a-zA-Z_]{1,21}(=|%3D)") { set $susquery 1; } if ($args ~* "base64_encode") { set $susquery 1; } if ($args ~* "(%24&x)") { set $susquery 1; } if ($args ~* "(\[|\]|\(|\)|<|>|ê|\"|;|\?|\*|=$)"){ set $susquery 1; } if ($args ~* "(&#x22;|&#x27;|&#x3C;|&#x3E;|&#x5C;|&#x7B;|&#x7C;|%24&x)"){ set $susquery 1; } if ($args ~* "(%0|%A|%B|%C|%D|%E|%F|127.0)") { set $susquery 1; } if ($args ~* "(globals|encode|localhost|loopback)") { set $susquery 1; } if ($args ~* "(request|select|insert|concat|union|declare)") { set $susquery 1; } if ($http_cookie !~* "wordpress_logged_in_" ) { set $susquery "${susquery}2"; set $rule_2 1; set $rule_3 1; } if ($susquery = 12) { return 403; } # END Better WP Security /etc/php5/fpm/php-fpm.conf: pid = /var/run/php5-fpm.pid error_log = /var/log/php5-fpm.log emergency_restart_threshold = 3 emergency_restart_interval = 1m process_control_timeout = 10s events.mechanism = epoll /etc/php5/fpm/php.ini (only options i changed): open_basedir ="/var/www/" disable_functions = pcntl_alarm,pcntl_fork,pcntl_waitpid,pcntl_wait,pcntl_wifexited,pcntl_wifstopped,pcntl_wifsignaled,pcntl_wexitstatus,pcntl_wtermsig,pcntl_wstopsig,pcntl_signal,pcntl_signal_dispatch,pcntl_get_last_error,pcntl_strerror,pcntl_sigprocmask,pcntl_sigwaitinfo,pcntl_sigtimedwait,pcntl_exec,pcntl_getpriority,pcntl_setpriority,dl,system,shell_exec,fsockopen,parse_ini_file,passthru,popen,proc_open,proc_close,shell_exec,show_source,symlink,proc_close,proc_get_status,proc_nice,proc_open,proc_terminate,shell_exec ,highlight_file,escapeshellcmd,define_syslog_variables,posix_uname,posix_getpwuid,apache_child_terminate,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,escapeshellarg,posix_uname,ftp_exec,ftp_connect,ftp_login,ftp_get,ftp_put,ftp_nb_fput,ftp_raw,ftp_rawlist,ini_alter,ini_restore,inject_code,syslog,openlog,define_syslog_variables,apache_setenv,mysql_pconnect,eval,phpAds_XmlRpc,phpA ds_remoteInfo,phpAds_xmlrpcEncode,phpAds_xmlrpcDecode,xmlrpc_entity_decode,fp,fput,virtual,show_source,pclose,readfile,wget expose_php = off max_execution_time = 30 max_input_time = 60 memory_limit = 128M display_errors = Off post_max_size = 2M allow_url_fopen = off default_socket_timeout = 60 APC settings: [APC] apc.enabled = 1 apc.shm_segments = 1 apc.shm_size = 64M apc.optimization = 0 apc.num_files_hint = 4096 apc.ttl = 60 apc.user_ttl = 7200 apc.gc_ttl = 0 apc.cache_by_default = 1 apc.filters = "" apc.mmap_file_mask = "/tmp/apc.XXXXXX" apc.slam_defense = 0 apc.file_update_protection = 2 apc.enable_cli = 0 apc.max_file_size = 10M apc.stat = 1 apc.write_lock = 1 apc.report_autofilter = 0 apc.include_once_override = 0 apc.localcache = 0 apc.localcache.size = 512 apc.coredump_unmap = 0 apc.stat_ctime = 0 /etc/php5/fpm/pool.d/www.conf user = www-data group = www-data listen = /var/run/php5-fpm.sock listen.owner = www-data listen.group = www-data listen.mode = 0666 pm = ondemand pm.max_children = 5 pm.process_idle_timeout = 3s; pm.max_requests = 50 I also started to get 404 errors in front page if i use W3 Total Cache's Page Cache (Disk Enhanced). It worked fine untill somedays ago, and then, out of nowhere, it started to happen. Tonight i will disable my mobile plugin and activate only W3 Total Cache to see if it's a conflict with them... And to finish all this, i have been getting this error: PHP Warning: apc_store(): Unable to allocate memory for pool. 
    in /var/www/wp-content/plugins/w3-total-cache/lib/W3/Cache/Apc.php on line 41 I already modified my APC settings, but no success. So... could anyone help me with those issues, please? Ooohh... if it helps, I installed PHP like this: sudo apt-get install php5-fpm php5-suhosin php-apc php5-gd php5-imagick php5-curl and Nginx from the official PPA. Sorry for my bad English and thanks for your time people! (:
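    For anyone comparing configurations: the settings below are the knobs most directly involved when WordPress image uploads or resizes fail or time out behind Nginx + php-fpm with the limits posted above (2M upload caps, 30s execution time, 5 on-demand workers). The values are illustrative starting points only, an assumption about where the bottleneck sits, not a confirmed fix for this particular timeout.

        # nginx (http or server block) - uploads larger than this are rejected before PHP ever runs
        client_max_body_size 8m;

        ; php.ini - resizing larger JPEG/PNG files with GD can exceed these defaults
        upload_max_filesize = 8M
        post_max_size = 8M
        memory_limit = 256M
        max_execution_time = 120

        ; /etc/php5/fpm/pool.d/www.conf - with pm = ondemand and only 5 children,
        ; a few slow resize requests can queue the rest into "upstream timed out"
        pm.max_children = 10
        request_terminate_timeout = 180s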

    Read the article

  • Running Multiple WebLogic and OSB Domains

    - by jeff.x.davies
    I have any number of OSB domains created on my machine at any point in time. For example, I have different domains for different versions of Oracle Service Bus and Oracle SOA Suite. I also have different domains for different purposes. I have a demo domain and another domain for the projects in my blog. Starting with OSB 11g and the Apache Derby server, there is a small "gotcha" if you want to create multiple domains on a development machine. When you create a new domain for OSB 11g it will use the same database info for every domain, and this will cause an error when starting the admin server of the second domain (the first domain doesn't have to be running for this error to occur). Here is an example of the error message in the server console: ####<Mar 8, 2011 2:58:48 PM PST> <Critical> <JTA> <jeff-laptop> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1299625128464> <BEA-110482> <A logging last resource failed during initialization. The server cannot boot unless all configured logging last resources (LLRs) initialize. Failing reason: weblogic.transaction.loggingresource.LoggingResourceException: java.sql.SQLException: JDBC LLR, table verify failed for table 'WL_LLR_ADMINSERVER', row 'JDBC LLR Domain//Server' record had unexpected value 'osb11gR1PS3//AdminServer' expected 'OSBCIM//AdminServer'*** ONLY the original domain and server that creates an LLR table may access it *** The solution is to create a database instance for each of your domains, and this is very simple to do. After you create a domain using the Configuration Wizard, locate the wlsbjmsrpDataSource-jdbc.xml file that is found under the DOMAIN_HOME/config/jdbc directory. Near the top of the file you will see the following entry: <url>jdbc:derby://localhost:1527/osbexamples;create=true;ServerName=localhost;databaseName=osbexamples</url> You need to modify this entry with a different and unique database name. The easiest way to do this is to substitute the name of your domain. For example: <url>jdbc:derby://localhost:1527/mydomain;create=true;ServerName=localhost;databaseName=mydomain</url> will create a database named mydomain. Now, when you restart the admin server for the domain, it will create the new database for you. Do this for each domain you create on your development machine and you'll have no troubles. The process is even simpler if you set this up while creating the domain with the Configuration Wizard. Simply name the database when you get to the Configure JDBC Component Schema step of the Configuration Wizard: select the OSB JMS Reporting Provider and set the name in the DBMS/Service field to whatever name you like, as shown in Figure 1 below. Figure 1 – Configuring the JDBC Component Schema That is all there is to it. Now you can create as many domains on your laptop or development machine as you like and not have to worry about them conflicting with each other.
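    For clarity, here are the before and after <url> values from the post as they appear in DOMAIN_HOME/config/jdbc/wlsbjmsrpDataSource-jdbc.xml (comments added; only the Derby database name changes):

        <!-- Default entry, shared by every OSB 11g domain created from the template: -->
        <url>jdbc:derby://localhost:1527/osbexamples;create=true;ServerName=localhost;databaseName=osbexamples</url>

        <!-- Edited entry for a domain named "mydomain", giving it its own Derby database: -->
        <url>jdbc:derby://localhost:1527/mydomain;create=true;ServerName=localhost;databaseName=mydomain</url>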

    Read the article

  • New Rapid Install StartCD 12.2.0.48 for EBS 12.2 Now Available

    - by Max Arderius
    A new Rapid Install startCD (Patch 18086193) for Oracle E-Business Suite Release 12.2 is now available. We recommend that all EBS customers installing or upgrading to EBS 12.2 use this latest update. The startCD updates are distributed to customers via My Oracle Support Patch, which can be uncompressed on top of any previous 12.2 startCD under the main staging area. This patch replaces any previous startCDs.
    What's New in This Update?
    This new startCD version 12.2.0.48 includes important fixes for multi-node Installs, RAC, pre-install checks, platform specific issues, and upgrade scenario failures:
    18703814 - QREP:122:RI:ISSUE WITH CHECKOS.CMD
    18689527 - QREP:122:RI:ISSUE WITH FNDCORE.DLL SHIPPED AS PART OF R122 PACKAGE
    18548485 - QREP1224:4:JAR SIGNER ISSUE DUE TO THE RI UPGRADE AUTOCONFIG CHANGES
    18535812 - QREP:1220.48_4: 12.2.0 UPGRADE FILE SYSTEM LAY OUT IS AFFECTING THE DB TABLES
    18507545 - WIN: UNABLE TO LAY DOWN FS PRIOR TO 12.2 UPGRADE WITHOUT AFFECTING RUNNING DB
    18476041 - UNABLE TO LAY DOWN FS PRIOR TO 12.2 UPGRADE WITHOUT AFFECTING PRODUCTION DB
    18459887 - R12.2 INSTALLATION FAILURE - OPMNCTL: NOT FOUND
    18436053 - START CD 48_4 - ISSUES WITH TEMP SPACE CHECK
    18424747 - QREP1224.3:ADD SERVER BROWSE BUTTON NOT WORKING
    18421132 - *RW-50010: ERROR: - SCRIPT HAS RETURNED AN ERROR: 1
    18403700 - QREP122.48:RI:UPGRADE RI PRECHECK HUNG IN SPLIT TIER APPS NODE ( NO SILENT )
    18383075 - ADD VERBOSE OPTION TO RAC VALIDATION
    18363584 - UPTAKE INSTALL SCRIPTS FOR XB48_4
    18336093 - QREP:122:RI:PATCH FS ADMIN SERVICE RUNNING AFTER RI UPGRADE CONFIGURE MODE
    18320278 - QREP:1224.3:PLATFORM SPECIFIC SYNTAX ERRORS WITH DATE COMMAND IN DB CHECKER
    18314643 - DISABLE SID=DB_NAME FOR RI UPGRADE FLOW IN RAC
    18298977 - RI: EXCEPTION WHILE CLICKING RAC NODES BUTTON ON A NON-RAC SERVER
    18286816 - QREP122:STARTCD48_3:TRAVERSING FROM VISION PASSW SCREEN TO PROD
    18286371 - QREP122:STARTCD48_3:AMBIGUOUS MESSAGE DURING STAGE AREA CHECK ON HP
    18275403 - QREP122:48:RI UPGRADE WITH EOH POST CHECKS HANGS IN SPLIT TIER DB NODE
    18270631 - QREP122.48:MULTI-NODE RI USING NON-DEFAULT PASSWORDS NOT WORKING
    18266046 - QREP122:48:RI NOT ALLOWING TO IGNORE THE RAC PRE-CHECK FAILURE
    18242201 - UPTAKE TXK INSTALL SCRIPTS AND PLATFORMS.ZIP INTO STARTCD XB48_3
    18236428 - QREP122.47:RI UPGRADE EXISTING OH FOR NON-DEFAULT APPS PASSWORD NOT WORKING
    18220640 - INCONSISTENT DATABASE PORTS DURING EBS 12.2 INSTALLATION FOR STARTCD 12.2.0.47
    18138796 - QREP122:47:RI 10.1.2 TECHSTACK NOT WORKING IF WE RUN RI FROM NEW STARTCD LOC
    18138396 - TST1220: CONTROL FILE NAMING IN RAPID INSTALL SEEMS TO HAVE ISSUES
    18124144 - IMPROVE HANDLING ERRORS FOUND IN CLUVFY LOG DURING PREINSTALL CHECKS
    18111361 - VALIDATE ASM DB DATA FILES PATH AS +<DATA GROUP>/<PATH>
    18102504 - QREP1220.47_5: UNZIP PANEL DOES NOT CREATE THE CORRECT STAGE
    18083342 - 12.2 UPGRADE JAVA.NET.BINDEXCEPTION: CANNOT ASSIGN REQUESTED ADDRESS
    18082140 - QREP122:47:RAC DB VALIDATION IS FAILS WITH EXIT STATUS IS 6
    18062350 - 12.2.3 UPG: 12.2.0 INSTALLATION LOGS
    18050840 - RI: UPGRADE WITH EXISTING RAC OH:SECONDARY DB NODE NAME IS BLANK
    18049813 - RAC LOV DEFAULTS NOT SAVED UNLESS "SELECT" IS CLICKED
    18003592 - TST1220:ADDITIONAL FREE SPACE CHECK FOR RI NEEDS TO BE CHECKED
    17981471 - REMOVE ASM SPACE CHECK FROM RACVALIDATIONS.SH
    17942179 - R12.2 INSTALL FAILING AT ADRUN11G.SH WITH ERRORS RW-50004 & RW-50010
    17893583 - QREP1220.47:VALIDATION OF O.S IN RAPIDWIZ IN THE DB NODE CONFIGURATION SCREEN
    17886258 - CLEANUP FND_NODES DURING UPGRADE FLOW
    17858010 - RI POST INSTALL CHECKS (SSH VERIFICATION) STEP IS FAILING
    17799807 - GEOHR: 12.2.0 - ERRORS IN RAPIDWIZ AND ADCONFIG LOGS
    17786162 - QREP1223.4:RI:SERVICE_NAMES IS PRINTED AS SERVICE_NAME IN RI SCREEN
    17782455 - RI: CONFIRM DEFAULT APPS PASSWORD IN SILENT MODE KICKOFF
    17778130 - RI:ADMIN SERVER TO BE UP ON PRIMARY MID-TIER IN MULTI-NODE UPGRADE FS CREATION
    17773989 - UN-SUPPORTED PLATFORM SHOWS 32 BIT AS HARD-CODED
    17772655 - RELEVANT MESSAGE DURING THE RAPDIWIZ -TECHSTACK
    17759279 - VERIFICATION PANEL DOES NOT EXPAND TECHNOLOGY STACK
    17759183 - BUILDSTAGE SCRIPT MENU NEEDS TO BE ADJUSTED
    17737186 - DATABASE PRE-REQ CHECK INCORRECTLY REPORTS SUCCESS ON AIX
    17708082 - 12.2 INSTALLATION - OS PRE-REQUISITES CHECK
    17701676 - TST122: GENERATE WRONG S_DBSID FOR PATCH FILE SYSTEM AT PHASE PREPARE
    17630972 - /TMP PRE-REQ INSTALLATION CHECK
    17617245 - 12.2 VISION INSTALL FAILS ON AIX
    17603342 - OMCS: DB STAGING COMPLAINS WHILE MOVING IT TO FINAL LOCATION
    17591171 - OMCS: DB STAGING FAILS WITH FRESH INSTALL R12.2
    17588765 - CHECKER VERSION AND PLUGIN VERSION
    17561747 - BUILDSTAGE.SH FAILS WITH ERROR WHEN STAGE HOSTED ON 32BIT LINUX
    17539198 - RAPID INSTALL NEEDS TO IGNORE NON-REQUIRED STAGE ELEMENTS
    17272808 - APPS USERS THAT HAVE DEFAULT PASSWORD AFTER 12.2 RAPID INSTALL
    References
    12.2 Documentation Library
    1581299.1 : EBS 12.2 Product Information Center
    1320300.1 : Oracle E-Business Suite Release Notes, Release 12.2
    1606170.1 : Oracle E-Business Suite Technology Stack and Applications DBA Release Notes for Release 12.2.3
    1624423.1 : Oracle E-Business Suite Technology Stack and Applications DBA Release Notes for R12.TXK.C.Delta.4 and R12.AD.C.Delta.4
    1594274.1 : Oracle E-Business Suite Release 12.2: Consolidated List of Patches and Technology Bug Fixes
    Related Articles
    Oracle E-Business Suite 12.2 Now Available
    startCD options to install Oracle E-Business Suite Release 12.2

    Read the article

  • My search what the Cloud will mean for my Work, part 2

    - by Kay Sellenrode
    My experience with the cloud, and why work will change rather than disappear. So far I have had multiple experiences with the cloud, most of them good. I have worked on several cloud solutions in the past, but let me describe those as 0.x versions. My first really serious cloud experience was a bit more than a year ago, when our company switched from an in-house server to Microsoft BPOS as a complete replacement. Since we are a small consultancy firm and don't do much besides consulting, our IT requirements are quite simple: we need mail and storage space for our documents. With the in-house server we had multiple outages in a year, mostly due to a lack of administration. Being consultants in the field and hardly having time to maintain a server, BPOS was and still is the right solution for us. Since the migration we have had fewer outages and a much more robust solution. Have we run into issues with BPOS in our own environment? No, not that I'm aware of. Based on this experience I took a stance on the deployability of BPOS and cloud solutions: they are suitable for the MKB (Dutch for medium and small businesses). Most small businesses don't have enough work to justify a full-time IT admin, and hiring a service provider to maintain their own server might be even more costly than hiring an admin. So, seeing the capabilities of BPOS and the needs of most businesses, I see it as a great solution that gives the business a complete server replacement for a fixed price per user, resulting in a clear budget for IT spending, something most small businesses have been looking for for a long time. Right now I'm deploying BPOS with a customer, and I am running into some of the Cloud 1.0 issues. In my opinion BPOS is a good, working Cloud 1.0 solution. What do I mean by 1.0? Well, 1.0 is mostly a tested solution (unlike the 0.x versions) but it still has quite a few limitations, caused by too little market experience. In my opinion this is also the reason why we don't see that many BPOS customers yet, and why I think Office 365 will make a huge difference. What I have seen of 365 shows me it is a Cloud 2.0 version, meaning it has all the needed features and is much more flexible for the customer. This is also why I see changes happening in my field of work: changes, not unemployment, due to cloud solutions. Cloud 1.0 solutions gave me the idea that if every customer adopted them I would be out of work. But in reality, Cloud 1.0 solutions are here just to establish what the market needs. Cloud 2.0 and higher versions will give the customer much more flexibility, but they also require a consultant. Where the 1.0 versions are simple to set up and maintain, a 2.0 solution needs more thought up front and afterwards. For example, BPOS in its 1.0 version brings you a very simplified Exchange 2007 solution, suitable for some customers, while with Office 365 you receive almost a full-blown Exchange 2010 solution. I expect this to become even more customizable in the next version. In my search for the changes to my work I try to regularly write a post with my thoughts on the cloud and its impact on my work as a consultant. I'm also planning to present on this topic, so if anyone is interested in seeing me present on it, you're more than welcome to contact me.

    Read the article

  • Tuning Default WorkManager - Advantages and Disadvantages

    - by Murali Veligeti
    Before discussing tuning of the Default WorkManager, let's have a brief introduction to what the Default WorkManager is. Before the WebLogic Server 9.0 release we had the concept of execute queues. In WebLogic Server (before WLS 9.0), processing was performed in multiple execute queues. Different classes of work were executed in different queues, based on priority and ordering requirements, and to avoid deadlocks. In addition to the default execute queue, weblogic.kernel.default, there were pre-configured queues dedicated to internal administrative traffic, such as weblogic.admin.HTTP and weblogic.admin.RMI. Users could control thread usage by altering the number of threads in the default queue, or configure custom execute queues to ensure that particular applications had access to a fixed number of execute threads, regardless of overall system load. From the WLS 9.0 release onwards, WebLogic Server uses a single thread pool (called the Default WorkManager) in which all types of work are executed. WebLogic Server prioritizes work based on rules you define and on run-time metrics, including the actual time it takes to execute a request and the rate at which requests are entering and leaving the pool. The common thread pool changes its size automatically to maximize throughput. The queue monitors throughput over time and, based on history, determines whether to adjust the thread count. For example, if historical throughput statistics indicate that a higher thread count increased throughput, WebLogic increases the thread count. Similarly, if statistics indicate that fewer threads did not reduce throughput, WebLogic decreases the thread count. This new strategy makes it easier for administrators to allocate processing resources and manage performance, avoiding the effort and complexity involved in configuring, monitoring, and tuning custom execute queues. The Default WorkManager is used to handle thread management and perform self-tuning. This Work Manager is used by an application when no other Work Managers are specified in the application's deployment descriptors. In many situations, the default Work Manager may be sufficient for most application requirements. WebLogic Server's thread-handling algorithms assign each application its own fair share by default. Applications are given equal priority for threads and are prevented from monopolizing them. The default work manager, as its name tells, is the work manager defined by default; thus, all applications deployed on WLS will use it. But sometimes, when your application is already in production, it's obvious you can't take your EAR/WAR, update the deployment descriptor(s) and redeploy it. The default work manager belongs to a thread pool, and as the initial thread pool comes with only five threads, that's not much. If your application has to face a large number of hits, you may want to start with more than that. Well, that's quite easy; you have two options to do so.
    1) Modify the config.xml. Just add the following line(s) in your server definition:
    <server>
        <name>AdminServer</name>
        <self-tuning-thread-pool-size-min>100</self-tuning-thread-pool-size-min>
        <self-tuning-thread-pool-size-max>200</self-tuning-thread-pool-size-max>
        [...]
    </server>
    2) Add some JVM parameters. Add the following system properties in setDomainEnv.sh/setDomainEnv.cmd or startWebLogic.sh/startWebLogic.cmd:
    -Dweblogic.threadpool.MinPoolSize=100 -Dweblogic.threadpool.MaxPoolSize=100
    Reboot WLS and see that the option has been taken into account.
    Disadvantage: So far, so good, but there is a disadvantage to tuning the Default WorkManager. Internally, WebLogic Server has many work managers configured for different types of work. If we run out of threads in the self-tuning pool (because of the system property -Dweblogic.threadpool.MaxPoolSize) due to it being undersized, then important work that WLS might need to do could be starved. So, limiting the self-tuning pool limits not only the default WorkManager but also all the other internal WorkManagers that WLS uses. The best alternative is therefore to override the default WorkManager: create a WorkManager for the application and assign that WorkManager to the application instead of tuning the Default WorkManager.
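    As a rough illustration of that recommendation, a minimal sketch of an application-scoped WorkManager for a web application might look like the following in weblogic.xml; the names and the thread count here are invented for illustration, not taken from the original post, so treat this as an assumption to adapt rather than a definitive configuration:
    <work-manager>
        <name>MyAppWorkManager</name>
        <max-threads-constraint>
            <name>MyAppMaxThreads</name>
            <count>50</count>
        </max-threads-constraint>
    </work-manager>
    <wl-dispatch-policy>MyAppWorkManager</wl-dispatch-policy>
    With something along these lines, only the application that references MyAppWorkManager is constrained, while the internal WorkManagers in the self-tuning pool keep their normal behaviour.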

    Read the article

  • Production Access Denied! Who caused this rule anyways?

    - by Matt Watson
    One of the biggest challenges for most developers is getting access to production servers. In smaller dev teams of fewer than about 5 people, everyone usually has access. Then you hire developer #6, he messes something up in production... and now nobody has access. That is how it always starts in small dev teams. I think just about every rule of life there is gets created this way: one person messes it up for the rest of us, and rules are then put in place to try to prevent it from happening again. Breaking the rules is in our nature. In this example it is for a good cause and a necessity to support our applications and troubleshoot problems as they arise. So how do developers typically break the rules? Some create their own method to collect log files off servers so they can see them. Expensive log management programs can collect log files, but log files alone are not enough. Centralizing where important errors are logged to is common. Some lucky developers are given production server access by the IT operations team out of necessity. Wait. That's not fair to all developers and knowingly breaks the company rule! When customers complain or the system is down, the rules go out the window. Commonly, lead developers get production access because they are ultimately responsible for supporting the application and may be the only people who know how to fix it. The problem with only giving lead developers production access is that it doesn't scale from a support standpoint. Those key employees become the go-to people to help solve application problems, but they also become a bottleneck. They end up spending up to half of their time every day helping resolve application defects, performance problems, or whatever the fire of the day is. This is actually the last thing you want your lead developers doing. They should be working on something more strategic, like major enhancements to the product. Having production access can actually be a curse if you are the guy stuck hunting down log files all day. Application defects are good tasks for junior developers. They can usually handle figuring out simple application problems. But nothing is worse than being a junior developer who can't figure out those problems while the backlog of them grows and grows. Some of them require production server access to verify a deployment was done correctly, verify config settings, view log files, or maybe just restart an application. Since the junior developers don't have access, they end up bugging the developers who do have access, or they track down a system admin to help. It can take hours or days to see server information that would take seconds or minutes if they had access of their own. It is very frustrating to the developer trying to solve the problem, to the system admin being forced to help, and most importantly to your customers, who are not happy about the situation. This process is terribly inefficient. Production database access is also important for solving application problems, but it presents a lot of risk if developers are given access. They could see data they shouldn't. They could write queries by accident that update data, delete data, or merely select every record from every table and bring your database to its knees. Since most of the applications we create are data driven, it can be very difficult to track down application bugs without access to the production databases. Besides being against the rules, why don't all developers have access?
    Most of the time it comes down to security, change control, lack of training, and other valid reasons. Developers have been known to tinker with different settings to try to solve a problem and, in the process, forget what they changed and make the problem worse. So it is a double-edged sword: don't give them access and fixing bugs is more difficult, or give them access and risk having more bugs or major outages created!
    Matt Watson
    Founder, CEO
    Stackify
    Agile Support for Agile Developers

    Read the article

  • BizTalk Server 2013 beta on Windows 8 (with Visual Studio 2012, SQL Server 2012 &amp; ESB Toolkit 2.2)

    - by Vishal
    Hello BizTalkers, finally Microsoft has released the beta version of BizTalk Server 2010 R2, and now it's called BizTalk Server 2013. I had tried the BTS 2010 R2 CTP version on a Windows Azure VM, and I was particularly excited about the RESTful services support and ESB being fully integrated into BizTalk. Well, I didn't get the chance to test it much, given the running costs associated with the Azure VM. Anyway, I was waiting for this announcement and I was very glad that Microsoft finally released the on-premise one. Check out what's new in BizTalk Server 2013. Officially Microsoft says that BizTalk Server 2013 "beta" is not supported on Windows 8, but I was curious to try it out. Below is my installation and configuration experience.
    Virtual machine configuration:
    VMware Workstation 9.0
    Windows 8 Enterprise x64
    SQL Server 2012
    Visual Studio 2012 Ultimate
    BizTalk Server 2013 beta
    Windows 8 machine name: WIN8
    Local Administrator account name: Admin
    First I installed Windows 8 Enterprise on VMware Workstation 9.0 and updated the OS. Since Windows 8 is itself a new release, luckily there were not many updates to perform. Next I installed Visual Studio 2012 Ultimate, which was a straightforward installation. Next I installed SQL Server 2012: select New SQL Server stand-alone installation and follow the steps as shown in the screenshot below. Once the installation is finished, fire up SQL Server Management Studio and try connecting. When Management Studio first opened I briefly wondered why Visual Studio 2010 had opened instead, but they have simply made the interface look like VS 2010. Cool, I like it. Next is the real deal: download BizTalk Server 2013 and unzip it to a particular folder. Double-click Setup.exe and follow the steps in the screenshots. Install Microsoft BizTalk Server 2013 beta. I selected all the normal artifacts and also all the artifacts under Additional Software. So far so good. Next, launch BizTalk Server Configuration; I used Basic configuration as shown in the screenshot below. I didn't expect it, but voila: successful on the first shot. Still, I wasn't sure whether something had gone wrong, so I fired up the BizTalk Server Administration Console, and that too came up just fine. Still not convinced, I created a simple messaging application (message in –> message out) and that too worked just fine. Finally I was convinced that BizTalk Server 2013 did work on Windows 8. The next step was to install the ESB Toolkit 2.2, which is now integrated with BizTalk Server and does not come as a separate standalone installation file. Again run the BizTalk Setup.exe from the unzipped folder and install Microsoft ESB Toolkit. Next, the ESB Configuration wizard does not open up by itself, so go to the "Windows 8 so-called Start" (I could not resist writing this) and open the ESB Toolkit Configuration wizard. The screenshots below display the configuration I used; you can also find it on MSDN here. Finally, after the ESB configuration, I opened the Admin Console and checked the two ESB applications deployed. Cool. This concludes my experience with the installation and configuration of BizTalk Server 2013 Beta and ESB Toolkit 2.2 on Windows 8. I will try to keep writing about BizTalk Server 2013 and its use with RESTful services etc. Thanks, Vishal Mody

    Read the article

  • Running a WebLogic Portal (WLP) 10.3.4 Domain as a Windows Service

    - by user647124
    To start a WLP server as a Windows service it is simplest to make your own script based on the provided standard script located at WL_HOME\server\bin\installSvc.cmd. The standard script works fine for a plain WLS domain, but lacks some classpath and options necessary for WLP. Start by making a copy of the installSvc.cmd script and naming it something specific to your domain. Next, just under SETLOCAL you will find where WL_HOME is defined. Here you will add the definitions you would normally add in a script that later calls installSvc.cmd (as per the standard documentation).
    set DOMAIN_NAME=gnma_test_domain
    set USERDOMAIN_HOME=D:\my_test_domain
    set SERVER_NAME=AdminServer
    set WLS_USER=weblogic
    set WLS_PW=gnmaAdmin01
    set PRODUCTION_MODE=true
    set MEM_ARGS=-Xms512m -Xmx512m
    set MW_HOME=C:\Oracle\Middleware
    Note: I had heard of people using this approach who had issues with the length of the command line. This may be due to their use of the default domain path. In the example above, I use a shorter path. At this point, edit the DOMAIN_HOME\bin\startWebLogic.cmd and set it to echo both the classpath and the options. Then start the domain and capture the output of those echoes, then shut the domain back down. Now REM out the existing CLASSPATH definition, then use the outputs you captured earlier to set the CLASSPATH and JAVA_OPTIONS like this:
    REM set CLASSPATH=%WEBLOGIC_CLASSPATH%;%CLASSPATH%; C:\Oracle\Middleware\wlportal_10.3\portal\lib\security\wsrp-security-providers.jar
    set CLASSPATH=%MW_HOME%\patch_wls1034\profiles\default\sys_manifest_classpath\weblogic_patch.jar;%MW_HOME%\patch_wlp1034\profiles\default\sys_manifest_classpath\weblogic_patch.jar;%MW_HOME%\patch_oepe1111\profiles\default\sys_manifest_classpath\weblogic_patch.jar;%MW_HOME%\patch_ocm1033\profiles\default\sys_manifest_classpath\weblogic_patch.jar;%MW_HOME%\JROCKI~1.1-3\lib\tools.jar;%WL_HOME%\server\lib\weblogic_sp.jar;%WL_HOME%\server\lib\weblogic.jar;%MW_HOME%\modules\features\weblogic.server.modules_10.3.4.0.jar;%WL_HOME%\server\lib\webservices.jar;%MW_HOME%\modules\ORGAPA~1.1/lib/ant-all.jar;%MW_HOME%\modules\NETSFA~1.0_1/lib/ant-contrib.jar;%WL_HOME%\common\derby\lib\derbyclient.jar;%WL_HOME%\server\lib\xqrl.jar;%WL_HOME%\server\lib\xquery.jar;%WL_HOME%\server\lib\binxml.jar
    set JAVA_OPTIONS= -Xverify:none -ea -da:com.bea... -da:javelin... -da:weblogic... -ea:com.bea.wli... -ea:com.bea.broker... -ea:com.bea.sbconsole... -Dplatform.home=%WL_HOME% -Dwls.home=%WL_HOME%\server -Dweblogic.home=%WL_HOME%\server -Dweblogic.wsee.bind.suppressDeployErrorMessage=true -Dweblogic.wsee.skip.async.response=true -Dweblogic.management.discover=true -Dwlw.iterativeDev=true -Dwlw.testConsole=true -Dwlw.logErrorsToConsole=true -Dweblogic.ext.dirs=%MW_HOME%\patch_wls1034\profiles\default\sysext_manifest_classpath;%MW_HOME%\patch_wlp1034\profiles\default\sysext_manifest_classpath;%MW_HOME%\patch_oepe1111\profiles\default\sysext_manifest_classpath;%MW_HOME%\patch_ocm1033\profiles\default\sysext_manifest_classpath;%MW_HOME%\wlportal_10.3\p13n\lib\system;%MW_HOME%\wlportal_10.3\light-portal\lib\system;%MW_HOME%\wlportal_10.3\portal\lib\system;%MW_HOME%\wlportal_10.3\info-mgmt\lib\system;%MW_HOME%\wlportal_10.3\analytics\lib\system;%MW_HOME%\wlportal_10.3\apps\lib\system;%MW_HOME%\wlportal_10.3\info-mgmt\deprecated\lib\system;%MW_HOME%\wlportal_10.3\content-mgmt\lib\system -Dweblogic.alternateTypesDirectory=%MW_HOME%\wlportal_10.3\portal\lib\security
    And that's it.
Looks really simple, but it took me quite some time to gather all the necessary pieces in order to make it work. Hopefully you find this before you went through half as much research.The example here uses a domain with only the Admin server and no managed servers. For a variety of reasons I only want the Admin server to be run as a service. The standard documentation along with the example above should allow you to expand this to include managed servers should you feel the need.

    Read the article

  • Something about Property Management or &hellip; the understanding of SharePoint Admins/roles ?!?

    - by Enrique Lima
    When I talk about SharePoint, for some reason it comes to my mind as if it were property management and all the tasks associated with it. So, imagine you have a lot (a piece of land of sorts), and you then decide there is something you want to do with it. So, you make the choice of having a building built. Now, in order to go forward with your plan, you need to check what the rules/regulations are. Has it been zoned residential, commercial, industrial... you get the idea. This to me sounds like Governance: the "what am I to do" given a defined set of rules. We keep moving forward based on those rules. And with this we start the process of building; the building process takes us to survey the land and identify what our boundaries are. And as we go along we start getting the idea in our head as to what we will do as far as the building goes. We identify the essentials of the building, basic services and such. All in all, we plan. And as with many things we do, we like solid foundations. What a solid foundation looks like will depend on where and what we build. The way buildings are built depends in many ways on being able to foresee the potential for natural disasters or to try to leverage the lay of the land. Sound familiar? We have done our Requirements Gathering. We have the building in place, we have followed the zoning rules, we have implemented services. But we need someone to manage the building, so now we move on to the human side of the story. We want to establish a means to normalcy in the building, someone that can be the monitoring agent for the "what's going on?" of it. This person will be tasked with making sure all basic services are functional, that measures are taken if there is an issue, and so on. Enter the Farm Administrator. In a way, we establish an extension of the rules to make sure the building and the apartments/offices built follow a standard set of rules too. Now, in turn, you will have people leasing or buying the apartments/offices; they will be the keepers of that space. So, now we are building sites: we have moved from having the building (farm) ready to leasing/selling offices/apartments (site collections). There will be someone assuming responsibility for those offices; that person will authorize or be informed about activities, and will also decide who not only gets a code into the building, but perhaps a key to the office. Enter the Site Collection Administrator. And then perhaps we move on to the person that would be responsible for specifics within the office, for example a Human Resources manager or coordinator. They will have specific control and knowledge about people. A facilities coordinator, and so on. I would translate that into Site Administrators. With that said, we identify the following:
    Role Name                  Responsibility (but not limited to)
    Farm Administrator         Infrastructure
    Site Collection Admin      Policies for Content, Hierarchy, Recycle Bin, Security and Access
    Site Owner (Site Admin)    Security and Access, Training, Guidance, Manage Templates
    All in all there are different levels of responsibility to be handled, but it is very important to understand what they are and what they mean. Here is a link to a very well laid out explanation on this: http://www.endusersharepoint.com/2009/08/11/site-managers-and-end-user-expectations-roles-and-responsibilities/

    Read the article

  • SQL SERVER – Unable to DELETE Project in Data Quality Projects (DQS)

    - by pinaldave
    Here is the email which made me write this blog post. When I write a blog post I keep in mind that if the developer is not familiar with the concept he will attempt it on a development server. If for any reason you attempt it on any server other than your personal server, you should make sure to have complete confidence in your own expertise and understand the risk behind it. Well, let us read the email which I received. I have modified it a bit to remove information related to the organization and the individual. "I just read your blog post on Beginning DQS. I went ahead and followed every single screenshot and it worked fine. I was able to execute the DQS project successfully. However, the same blog post got me in trouble, serious trouble. After the first successful deployment I went ahead and created a few of my own knowledge bases and projects. I played around a bit and then decided to get back to real work. Now, we had deployed DQS on the production server only, so the experiments were on the production server. When I got back to my work, I forgot to close all the windows. My manager found the window open and has seen my test projects. He has asked me to delete my experiments immediately and has said words which I cannot write to you. Here is the problem: I am not able to delete the project which I created earlier. I am able to open it and play with it, but the delete option is disabled and grayed out (see attached image). Now I believe there is nothing wrong with this project as it was just a test project. Would you please write to my manager that it is not harmful to leave that project there as it is? It is also not using any resources. I think he will believe you." As I said, this kind of email makes me uncomfortable. I do not want someone to execute anything on a production server. I often write notes and disclaimers on my posts when something is dangerous to execute on a production server. However, if someone who is not an expert with SQL Server attempts something new on a production server, I think the major issue here lies with the person (admin) who gave the new developer permission to the production server. This has to be carefully avoided. Here was my response to the individual: "I cannot write anything to your manager, as he has not asked me anything. Honestly, I believe he is correct in his behavior, as you should not have executed anything on the production server without prior approval and testing on a development server. Any R&D must be done on a local box or development box. I suggest you request your manager to prevent access for users who do not need it. If he is a good manager, he might have already implemented that by now after this recent event. I also saw your screenshot. Here is the issue: while you were playing with the project, you might have closed it halfway through, without completing it. For that reason it is locked. You can open it and continue from the same place where you left the project. If you do not need the project any more, right-click on it and click Unlock the project. This will enable the DELETE option and you can then delete the project. Next time, be safe out there. It can be dangerous to have admin access to a production server when it is not needed." I have not yet heard from him, but I believe he will take my words positively. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS

    Read the article

  • SharePoint Content and Site Editing Tips

    - by Bil Simser
    A few content management and site editing tips for power users on this bacon flavoured unicorn morning. The theme here is: keep it clean!
    Write "friendly" email addresses. Remember it's human beings reading your content. So seeing something like "If you have questions please send an email to [email protected]" breaks up the readability. Instead, just take the simple steps of writing the content in plain English, then going back, highlighting the name and inserting a link (note: you might have to prefix the link with mailto:[email protected]). It makes for a friendlier looking page and hides the ugliness that is sometimes in email addresses.
    Use friendly column and list names. This is a big pet peeve of mine. When you first create a column or list with spaces, the internal name is changed. The display name might be "My Amazing List of Animals with Large Testicles" but the internal (and link) name becomes "My_x0020_Amazing_x0020_List_x0020_of_x0020_Animals_x0020_with_x0020_Large_x0020_Testicles". What's worse is if you create a publishing page named "This Website is Fueled By a Dolphin's Spleen". Not only is it incorrect grammar, but the apostrophe wreaks havoc on both the internal name for the list (with lots of crazy hex codes) and the hyperlink (where everything is URL-encoded). Instead, create the list with a distinct and compact name, then go back and change it to whatever you want. The end result is a better formed name that you can both script and access in code more easily.
    Keep your views clean. When you add a column to a list or create a new list, the default is to add it to the default view. Do everyone a favour and don't check this box! The default view of a list should be something similar to the Title field and nothing else. Keep it clean. If you want to set a default view that's different, go back and create one with all the fields and filtering and sorting columns you want and set it as the default. It's a good idea to keep the original AllItems.aspx (note the lack of a space in the filename!) easy and unfiltered. It's also a good idea to keep your column count down in views. Don't let every column be added by default, and don't add every column just because you can. Create separate views for distinct responsibilities and try to keep the number of columns down to a single screen to prevent horizontal scrolling.
    Simple navigation. The Quick Launch is a great tool for navigating around your site, but don't use the default of adding all lists to it. Uncheck that box and keep navigation simple. Create custom groupings that make sense: if "Documents and Lists" doesn't fit your site but "Reports and Notices" makes more sense, then do it. Also hide internal lists from the Quick Launch. For example, if most users don't need to see all the lookup tables you might have on a site, don't show them. You can use audience filtering on the Quick Launch if you want to hide admin items from non-admin users, so consider that as an option.
    Enjoy!
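    As a small illustration of the first tip, the markup behind a friendly email link might look something like the following; the name and address are placeholders for this sketch, not values from the original post:
    <p>If you have questions please contact <a href="mailto:firstname.lastname@example.com">Firstname Lastname</a>.</p>
    The reader sees a person's name in the sentence, while the mailto: address stays tucked away in the link itself.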

    Read the article

  • edited and reversed changes on .htaccess - site starts redirecting to .comindex.php/

    - by Aurigae
    Site is a Joomla 2.5 site. I wanted to add a non www to www redirect to the htaccess file, did so, then the redirection went mad, reversed but still the site redirects. When i click view site in admin panel, i get linked to http://domain.comindex.php/ The website is http://www.domain.com Visiting the website URL works without www, but once you click on projects it acts mad too. Projects is managed with joomshopping extension. EDIT: the redirect also happens when rewrite is deactivated in admin panel. ## # @package Joomla # @copyright Copyright (C) 2005 - 2012 Open Source Matters. All rights reserved. # @license GNU General Public License version 2 or later; see LICENSE.txt ## ## # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE! # # The line just below this section: 'Options +FollowSymLinks' may cause problems # with some server configurations. It is required for use of mod_rewrite, but may already # be set by your server administrator in a way that dissallows changing it in # your .htaccess file. If using it causes your server to error out, comment it out (add # to # beginning of line), reload your site in your browser and test your sef url's. If they work, # it has been set by your server administrator and you do not need it set here. ## ## Can be commented out if causes errors, see notes above. Options +FollowSymLinks ## Mod_rewrite in use. RewriteEngine On ## Begin - Rewrite rules to block out some common exploits. # If you experience problems on your site block out the operations listed below # This attempts to block the most common type of exploit `attempts` to Joomla! # # Block out any script trying to base64_encode data within the URL. RewriteCond %{QUERY_STRING} base64_encode[^(]*\([^)]*\) [OR] # Block out any script that includes a <script> tag in URL. RewriteCond %{QUERY_STRING} (<|%3C)([^s]*s)+cript.*(>|%3E) [NC,OR] # Block out any script trying to set a PHP GLOBALS variable via URL. RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR] # Block out any script trying to modify a _REQUEST variable via URL. RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2}) # Return 403 Forbidden header and show the content of the root homepage RewriteRule .* index.php [F] # ## End - Rewrite rules to block out some common exploits. ## Begin - Custom redirects # # If you need to redirect some pages, or set a canonical non-www to # www redirect (or vice versa), place that code here. Ensure those # redirects use the correct RewriteRule syntax and the [R=301,L] flags. # ## End - Custom redirects ## # Uncomment following line if your webserver's URL # is not directly related to physical file paths. # Update Your Joomla! Directory (just / for root). ## # RewriteBase / ## Begin - Joomla! core SEF Section. 
# RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}] # # If the requested path and file is not /index.php and the request # has not already been internally rewritten to the index.php script RewriteCond %{REQUEST_URI} !^/index\.php # and the request is for something within the component folder, # or for the site root, or for an extensionless URL, or the # requested URL ends with one of the listed extensions RewriteCond %{REQUEST_URI} /component/|(/[^.]*|\.(php|html?|feed|pdf|vcf|raw))$ [NC] # and the requested path and file doesn't directly match a physical file RewriteCond %{REQUEST_FILENAME} !-f # and the requested path and file doesn't directly match a physical folder RewriteCond %{REQUEST_FILENAME} !-d # internally rewrite the request to the index.php script RewriteRule .* index.php [L] # ## End - Joomla! core SEF Section. Redirect 301 /index.html /index.php Redirect 301 /services /project Redirect 301 /projects/projects.html /project Redirect 301 /projects/project1.html /project Redirect 301 /projects/project2.html /project Redirect 301 /projects /project Redirect 301 /keypersonnel.html /about-agrin/keystaff Redirect 301 /cooperation.htm /about-agrin/intcoop Redirect 301 /member.html /about-agrin/memberships Redirect 301 /contact.html /contacts Redirect 301 /hr.htm /jobs Redirect 301 /index.php/404 /index.php
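    For reference, the canonical non-www to www redirect that the question mentions is commonly written with mod_rewrite in the "Custom redirects" section of the .htaccess above. A minimal sketch, assuming the desired host really is www.domain.com, might look like this (this is a generic pattern, not the specific fix confirmed for this site):
    RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
    RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]
    The [R=301,L] flags issue a permanent redirect and stop further rule processing for that request, which matches the syntax the file's own comments call for.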

    Read the article

  • I.T. Chargeback : Core to Cloud Computing

    - by Anand Akela
    Contributed by Mark McGill Consolidation and Virtualization have been widely adopted over the years to help deliver benefits such as increased server utilization, greater agility and lower cost to the I.T. organization. These are key enablers of cloud, but in themselves they do not provide a complete cloud solution. Building a true enterprise private cloud involves moving from an admin driven world, where the I.T. department is ultimately responsible for the provisioning of servers, databases, middleware and applications, to a world where the consumers of I.T. resources can provision their infrastructure, platforms and even complete application stacks on demand. Switching from an admin-driven provisioning model to a user-driven model creates some challenges. How do you ensure that users provisioning resources will not provision more than they need? How do you encourage users to return resources when they have finished with them so that others can use them? While chargeback has existed as a concept for many years (especially in mainframe environments), it is the move to this self-service model that has created a need for a new breed of chargeback applications for cloud. Enabling self-service without some form of chargeback is like opening a shop where all of the goods are free. A successful chargeback solution will be able to allocate the costs of shared I.T. infrastructure based on the relative consumption by the users. Doing this creates transparency between the I.T. department and the consumers of I.T. When users are able to understand how their consumption translates to cost they are much more likely to be prudent when it comes to their use of I.T. resources. This also gives them control of their I.T. costs, as moderate usage will translate to a lower charge at the end of the month. Implementing Chargeback successfully create a win-win situation for I.T. and the consumers. Chargeback can help to ensure that I.T. resources are used for activities that deliver business value. It also improves the overall utilization of I.T. infrastructure as I.T. resources that are not needed are not left running idle. Enterprise Manager 12c provides an integrated metering and chargeback solution for Enterprise Manager Targets. This solution is built on top of the rich configuration and utilization information already available in Enterprise Manager. It provides metering not just for virtual machines, but also for physical hosts, databases and middleware. Enterprise Manager 12c provides metering based on the utilization and configuration of the following types of Enterprise Manager Target: Oracle VM Host Oracle Database Oracle WebLogic Server Using Enterprise Manager Chargeback, administrators are able to create a set of Charge Plans that are used to attach prices to the various metered resources. These plans can contain fixed costs (eg. $10/month/database), configuration based costs (eg. $10/month if OS is Windows) and utilization based costs (eg. $0.05/GB of Memory/hour) The self-service user provisioning these resources is then able to view a report that details their usage and helps them understand how this usage translates into cost. Armed with this information, the user is able to determine if the resources are delivering adequate business value based on what is being charged. Figure 1: Chargeback in Self-Service Portal Enterprise Manager 12c provides a variety of additional interfaces into this data. The administrator can access summary and trending reports. 
Summary reports allow the administrator to drill-down through the cost center hierarchy to identify, for example, the top resource consumers across the organization. Figure 2: Charge Summary Report Trending reports can be used for I.T. planning and budgeting as they show utilization and charge trends over a period of time. Figure 3: CPU Trend Report We also provide chargeback reports through BI Publisher. This provides a way for users who do not have an Enterprise Manager login (such as Line of Business managers) to view charge and usage information. For situations where a bill needs to be produced, chargeback can be integrated with billing applications such as Oracle Billing and Revenue Management (BRM). Further information on Enterprise Manager 12c’s integrated metering and chargeback: White Paper Screenwatch Cloud Management on OTN
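    To make the pricing model concrete, here is a small worked example using the illustrative rates quoted above (the workload figures are made up): a database charged a fixed $10/month plus $0.05 per GB of memory per hour, allocated 4 GB and left running for a 720-hour month, would accrue 4 GB x 720 h x $0.05 = $144 in memory charges, for a total of $154 that month. Seeing that total in the self-service report is what nudges a consumer to release the resource once it stops delivering business value.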

    Read the article

  • How can I change the color of the text in my iFrame? [closed]

    - by VinylScratch
    I have code here: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <title>Frag United Banlist</title> </head> <body> <h1>Tekkit Banlist</h1> <?php // change these things $server = "server-host"; $dbuser = "correct-user"; $dbpass = "correct-password"; $dbname = "correct-database"; mysql_connect($server, $dbuser, $dbpass); mysql_select_db($dbname); $result = mysql_query("SELECT * FROM banlist ORDER BY id DESC"); //This will display the most recent by id edit this query how you see fit. Limit, Order, ect. echo "<table width=100% border=1 cellpadding=3 cellspacing=0>"; echo "<tr style=\"font-weight:bold\"> <td>ID</td> <td>User</td> <td>Reason</td> <td>Admin/Mod</td> <td>Time</td> <td>Ban Length</td> </tr>"; while($row = mysql_fetch_assoc($result)){ if($col == "#eeeeee"){ $col = "#ffffff"; }else{ $col = "#eeeeee"; } echo "<tr bgcolor=$col>"; echo "<td>".$row['id']."</td>"; echo "<td>".$row['user']."</td>"; echo "<td>".$row['reason']."</td>"; echo "<td>".$row['admin']."</td>"; //Convert Epoch Time to Standard format $datetime = date("F j, Y, g:i a", $row['time']); echo "<td>$datetime</td>"; $dateconvert = date("F j, Y, g:i a", $row['length']); if($row['length'] == "0"){ echo "<td>None</td>"; }else{ echo "<td>$dateconvert</td>"; } echo "<td>".$row['id']."</td>"; echo "</tr>"; } echo"</table>" ?> </div> </body></html> And I am trying to make it so that when I put it in this iframe: <iframe src="http://bans.fragunited.net/" width="100%" length="100%"><p>Your browser does not support iframes.</p></iframe> But if you go to this page, fragunited.net/bans, (not bans.fragunited.net) the text is black and I want it to be white so you can actually see it. Sorry for the large amount of code, however I don't know where you have to put the code to change the color.
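    For what it's worth, one minimal way to get white text in the framed page (assuming the goal is simply white text against the page's existing background) would be a style block in the head of the banlist page itself, for example:
    <style type="text/css">
      body, table, td { color: #ffffff; }
    </style>
    This is only a sketch; the exact selectors would depend on which elements on the framed page are rendering in black.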

    Read the article

  • KnpLabs / DoctrineBehaviors Translatable - currentLocale = null

    - by Ruben
    Using the trait \Knp\DoctrineBehaviors\Model\Translatable\Translation inside an Entity, I see that the property currentLocale is never setted , so we always obtain the default locale ('en') in all the calls to $this->translate(). If I change this getDefaultLocale, all the translations are made correctly, so I think that is no problem with the fallback. I tried debug the currentLocaleCallable. I see that if I put a "var_dump ($this-container-get('request'));" in the contructor of currentLocaleCallable, the request have a locale to null. And outside the request has the correct locale.It seems that container is not in the scope: request , i don't know how can I get it to work I post an issue in github https://github.com/KnpLabs/DoctrineBehaviors/issues/71 EDITED This service is defined in vendor/knplabs/doctrine-behaviors/config/orm-services.yml and is: knp.doctrine_behaviors.reflection.class_analyzer: class: "%knp.doctrine_behaviors.reflection.class_analyzer.class%" public: false knp.doctrine_behaviors.translatable_listener: class: "%knp.doctrine_behaviors.translatable_listener.class%" public: false arguments: - "@knp.doctrine_behaviors.reflection.class_analyzer" - "%knp.doctrine_behaviors.reflection.is_recursive%" - "@knp.doctrine_behaviors.translatable_listener.current_locale_callable" tags: - { name: doctrine.event_subscriber } knp.doctrine_behaviors.translatable_listener.current_locale_callable: class: "%knp.doctrine_behaviors.translatable_listener.current_locale_callable.class%" arguments: - "@service_container" # lazy request resolution public: false EDIT 2: My composer.json "php": ">=5.3.3", "symfony/symfony": "2.3.*", "doctrine/orm": ">=2.2.3,<2.4-dev", "doctrine/doctrine-bundle": "1.2.*", "twig/extensions": "1.0.*", "symfony/assetic-bundle": "2.3.*", "symfony/swiftmailer-bundle": "2.3.*", "symfony/monolog-bundle": "2.3.*", "sensio/distribution-bundle": "2.3.*", "sensio/framework-extra-bundle": "2.3.*", "sensio/generator-bundle": "2.3.*", "incenteev/composer-parameter-handler": "~2.0", "friendsofsymfony/user-bundle": "1.3.*", "avalanche123/imagine-bundle": "v2.1", "raulfraile/ladybug-bundle": "~1.0", "genemu/form-bundle": "2.2.*", "friendsofsymfony/rest-bundle": "0.12.0", "stof/doctrine-extensions-bundle": "dev-master", "sonata-project/admin-bundle": "dev-master", "a2lix/translation-form-bundle": "1.*@dev", "sonata-project/user-bundle": "dev-master", "psliwa/pdf-bundle": "dev-master", "jms/serializer-bundle": "dev-master", "jms/di-extra-bundle": "dev-master", "knplabs/doctrine-behaviors": "dev-master", "sonata-project/doctrine-orm-admin-bundle": "dev-master", "knplabs/knp-paginator-bundle": "dev-master", "friendsofsymfony/jsrouting-bundle": "~1.1", "zendframework/zend-validator": ">=2.0.0-rc2", "zendframework/zend-barcode": ">=2.0.0-rc2"

    Read the article

  • AD - Using UserPrincipal.FindByIdentity and PrincipalContext with nested OU - C#

    - by Solid Snake
    Here is what I am trying to achieve: I have a nested OU structure that is about 5 levels deep. OU=Portal,OU=Dev,OU=Apps,OU=Grps,OU=Admin,DC=test,DC=com I am trying to find out if the user has permissions/exists at OU=Portal. Here's a snippet of what I currently have: PrincipalContext domain = new PrincipalContext( ContextType.Domain, "test.com", "OU=Portal,OU=Dev,OU=Apps,OU=Grps,OU=Admin,DC=test,DC=com"); UserPrincipal user = UserPrincipal.FindByIdentity(domain, myusername); PrincipalSearchResult<Principal> group = user.GetAuthorizationGroups(); For some unknown reason, the value user generated from the above code is always null. However, if I were to drop all the OU as follows: PrincipalContext domain = new PrincipalContext( ContextType.Domain, "test.com", "DC=test,DC=com"); UserPrincipal user = UserPrincipal.FindByIdentity(domain, myusername); PrincipalSearchResult<Principal> group = user.GetAuthorizationGroups(); this would work just fine and return me the correct user. I am simply trying to reduce the number of results as opposed to getting everything from AD. Is there anything that I am doing wrong? I've googled for hours and tested various combinations without much luck. Any help is appreciated. Thanks. Dan

    Read the article

  • "SignTool error: Access is denied" in TFS 2010 build process

    - by user351352
    I'm getting "SignTool Error: Access is Denied" when I attempt to sign a file. When I use an administrator cmd, all works fine. However, this process is going to be used in a TFS 2010 build process and using the InvokeProcess task with signtool gives the same access denied message as a non-administrator command prompt. More info: On a Win2008 R2 enterprise machine. User is machine admin and on the domain. The TFS Build service is also set to run as this user. Using a self signed certificate created using these instructions: How do I create a self-signed certificate for code signing on Windows? After following these instructions I have the following files: MyCA.cer MyCA.pvk MySPC.cer MySPC.pvk MySPC.pfx MyCA is in my Trusted Root Certification Authorities I imported MySPC.pfx into personal certificates, following the advice here: SignTool error: Access is denied To do the signing I'm using the thumbprint of the MySPC.pfx that was imported into the Personal section so my signtool command looks like: sign /sha1 1e9d7b5ad98552d9c58944e3f3903e6b929f4819 /t http://timestamp.verisign.com/scripts/timestamp.dll "FileName" Once again this works in Admin mode. This also works when running cmd as administrator: sign /f "C:\Code Signing Non-Release\MySPC.pfx" /t http://timestamp.verisign.com/scripts/timestamp.dll "FileName" New to code signing in general, so any help is welcome.

    Read the article

  • uploadify scriptData problem

    - by elpaso66
    Hi, I'm having problems with scriptData on uploadify, I'm pretty sure the config syntax is fine but whatever I do, scriptData is not passed to the upload script. I tested in both FF and Chrome with flash v. Shockwave Flash 9.0 r31 This is the config: $(document).ready(function() { $('#id_file').uploadify({ 'uploader' : '/media/filebrowser/uploadify/uploadify.swf', 'script' : '/admin/filebrowser/upload_file/', 'scriptData' : {'session_key': 'e1b552afde044bdd188ad51af40cfa8e'}, 'checkScript' : '/admin/filebrowser/check_file/', 'cancelImg' : '/media/filebrowser/uploadify/cancel.png', 'auto' : false, 'folder' : '', 'multi' : true, 'fileDesc' : '*.html;*.py;*.js;*.css;*.jpg;*.jpeg;*.gif;*.png;*.tif;*.tiff;*.mp3;*.mp4;*.wav;*.aiff;*.midi;*.m4p;*.mov;*.wmv;*.mpeg;*.mpg;*.avi;*.rm;*.pdf;*.doc;*.rtf;*.txt;*.xls;*.csv;', 'fileExt' : '*.html;*.py;*.js;*.css;*.jpg;*.jpeg;*.gif;*.png;*.tif;*.tiff;*.mp3;*.mp4;*.wav;*.aiff;*.midi;*.m4p;*.mov;*.wmv;*.mpeg;*.mpg;*.avi;*.rm;*.pdf;*.doc;*.rtf;*.txt;*.xls;*.csv;', 'sizeLimit' : 10485760, 'scriptAccess' : 'sameDomain', 'queueSizeLimit' : 50, 'simUploadLimit' : 1, 'width' : 300, 'height' : 30, 'hideButton' : false, 'wmode' : 'transparent', translations : { browseButton: 'BROWSE', error: 'An Error occured', completed: 'Completed', replaceFile: 'Do you want to replace the file', unitKb: 'KB', unitMb: 'MB' } }); $('input:submit').click(function(){ $('#id_file').uploadifyUpload(); return false; }); }); I checked that other values (file name) are passed correctly but session_key is not. This is the decorator code from django-filebrowser, you can see it checks for request.POST.get('session_key'), the problem is that request.POST is empty. def flash_login_required(function): """ Decorator to recognize a user by its session. Used for Flash-Uploading. """ def decorator(request, *args, **kwargs): try: engine = __import__(settings.SESSION_ENGINE, {}, {}, ['']) except: import django.contrib.sessions.backends.db engine = django.contrib.sessions.backends.db print request.POST session_data = engine.SessionStore(request.POST.get('session_key')) user_id = session_data['_auth_user_id'] # will return 404 if the session ID does not resolve to a valid user request.user = get_object_or_404(User, pk=user_id) return function(request, *args, **kwargs) return decorator

    Read the article

  • Delphi app manifest file problems under WinXP and Win7

    - by Ronaldo Junior
    My last question "List service and services status under Win-7" made me start working on a solution that gives my app admin privileges under Windows Vista onward based on a .manifest file. I was not sure about continuing the previous question with this matter since they are not the same, so here is another question: My app now works fine under Win 7 whether or not I run it "as admin" because of the manifest file. My manifest file is as follows:
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
      <assemblyIdentity version="1.6.0.5" processorArchitecture="X86" name="ServiceMonitorPro" type="win32"/>
      <description publisher="Powershield Ltd" product="Powershield Service Monitor">Powershield Service Monitor</description>
      <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
        <security>
          <requestedPrivileges>
            <requestedExecutionLevel level="requireAdministrator" uiAccess="false"/>
          </requestedPrivileges>
        </security>
      </trustInfo>
    </assembly>
    When the application runs on Windows 7 or Vista, the UAC comes up with a dialog like this: How can I replace the "unknown" publisher? The other and biggest problem is that even though the app runs with no problem under Win7 or Vista, under WinXP it is now crashing with the message: "This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem." Another thing I would like to add: if I add a reference (uses clause) to XPMan, the app works fine on WinXP, but then my .manifest file makes no difference under Vista or Win7.

    Read the article

  • SSRS 2008: is it possible to make a report parameter NOT query-based for some linked report?

    - by Stefan Mohr
    I suspect the answer is no, but here goes.. I'm using the WebForms Report Viewer on a public-facing website to allow users to report on themselves or their users (if the user is an admin user). A report has a parameter called Users where an admin can pick a user from the list and generate a report from it. Mundane users can also view this report, but I programmatically create a linked report for each user and set the UserID value to their ID so they can only view themselves. This works well except that the UserID parameter is query-based, and not every user is visible in the list using default settings (the user list is based off date range parameters can provide, and only users we consider 'active' during the date range are visible). This is blowing up for mundane users that are not active for the default date range (which is the previous month). I suspect the flow of execution is something like this: Report loads with default parameters The linked report rules are now applied and the value of the UserID is overridden with the ID in the linked report UserID field is now hidden to prevent the user from changing it SSRS can't find the UserID default value in the query results (that I didn't even want it to run) so it displays an error The 'UserID' parameter is missing a value Through some testing I've found a perfect correlation between users not inside the default date range and users who can't view the report. Can anyone suggest a way to make the report usable for those users that aren't in the default list? The reports are created programmatically so I do have a fair bit of control over the situation. I would love to simply be able to mark a parameter in a linked report as no longer being query-based, but those properties are all read-only. I really, really don't want to have to create duplicate reports to accommodate these users but I'm at a bit of a loss right now. Any suggestions are greatly appreciated!

    Read the article

  • NAudio demos not working anymore

    - by Kurru
    I just tried to run the NAudio demos and I'm getting a weird error:
    System.BadImageFormatException: Could not load file or assembly 'NAudio, Version=1.3.8.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. An attempt was made to load a program with an incorrect format.
    File name: 'NAudio, Version=1.3.8.0, Culture=neutral, PublicKeyToken=null'
    at NAudioWpfDemo.AudioGraph..ctor()
    at NAudioWpfDemo.ControlPanelViewModel..ctor(IWaveFormRenderer waveFormRenderer, SpectrumAnalyser analyzer) in C:\Users\Admin\Downloads\NAudio-1.3\NAudio-1-3\Source Code\NAudioWpfDemo\ControlPanelViewModel.cs:line 23
    at NAudioWpfDemo.MainWindow..ctor() in C:\Users\Admin\Downloads\NAudio-1.3\NAudio-1-3\Source Code\NAudioWpfDemo\MainWindow.xaml.cs:line 15
    WRN: Assembly binding logging is turned OFF.
    To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1.
    Note: There is some performance penalty associated with assembly bind failure logging. To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog].
    Since the last time I used the NAudio demos I have changed from 32-bit Windows XP to 64-bit Windows 7. Would this cause this issue? It's very annoying as I was about to try my hand at audio in C# again.

    Read the article

< Previous Page | 93 94 95 96 97 98 99 100 101 102 103 104  | Next Page >