Search Results

Search found 4775 results on 191 pages for 'trace flags'.

  • getting java.lang.OutOfMemoryError exception while running a Midlet (using netbeans)

    - by Jeeka
    I am writing a MIDlet (using NetBeans) that reads a file containing exactly 2400 lines (each line being 32 characters long), extracts a part of each line, and puts the results in an array. I do the same for 11 such files (all of which have exactly 2400 lines). The MIDlet runs fine while reading the first 6 files and filling 6 arrays. However, it stops on the 7th file, throwing the following exception:

        TRACE: , startApp threw an Exception
        java.lang.OutOfMemoryError (stack trace incomplete)
        java.lang.OutOfMemoryError (stack trace incomplete)

    I have tried modifying the netbeans.conf file to increase the heap memory (as suggested by many forums and blogs), but nothing works for me. Here are the parameters I modified in netbeans.conf:

        -J-Xss2m -J-Xms1024m -J-Xmx1024m -J-XX:PermSize=1024m -J-XX:MaxPermSize=1536m
        -J-XX:+UseConcMarkSweepGC -J-XX:+CMSClassUnloadingEnabled -J-XX:+CMSPermGenSweepingEnabled

    Can anyone please help me get out of this? I badly need this sorted out ASAP. Thanks in advance!

    Read the article

  • ActionScript BitmapData Built-in To Bitmaps?

    - by TheDarkIn1978
    I've used a Loader and URLRequest to download a .png from the internet and add it to my display list. Since it's already a bitmap, does it have built-in bitmap data, or do I have to create the BitmapData myself? Also, why does the same trace statement return false in mouseMoveHandler when it outputs true in the displayImage function?

        var imageLoader:Loader = new Loader();
        imageLoader.load(new URLRequest("http://somewebsite.com/image.png"));
        imageLoader.contentLoaderInfo.addEventListener(Event.COMPLETE, displayImage);

        function displayImage(evt:Event):void
        {
            addChild(evt.target.content);
            addEventListener(MouseEvent.MOUSE_MOVE, mouseMoveHandler);
            trace(evt.target.content is Bitmap); // outputs 'true'
        }

        function mouseMoveHandler(evt:MouseEvent):void
        {
            trace(evt.target.content is Bitmap); // outputs 'false'
        }

    Read the article

  • ActionScript 2.0 problem with the object's _visible parameter

    - by GMan
    Hey guys, I've been trying to build something simple in Flash 8, and I stumbled across something weird I cannot explain. I have an object that is invisible at first, and at some point in the program I want it to become visible, so I write:

        _root.myObj._visible = true;
        _root.gameOver.swapDepths(_root.getNextHighestDepth()); // so it will be on top

    This works fine: the object becomes visible. What I planned to happen next is that the user presses a button on that same object, and the object goes invisible again:

        on (release) {
            trace(_root.myObj._visible);
            _root.myObj._visible = false;
            trace(_root.myObj._visible);
            _root.gotoAndPlay("three");
        }

    The trace returns true at first and false later on, so the assignment works, but oddly the object stays visible. That is what I don't understand. Thanks everybody in advance.

    Read the article

  • constructing dynamic In Statements with sql

    - by nitroxn
    Suppose we need to check three boolean conditions to perform a select query. Let the three flags be 'A', 'B' and 'C'. If all three flags are set to '1', then the query to be generated is:

        SELECT * FROM Food WHERE Name In ('Apple', 'Biscuit', 'Chocolate');

    If only the flags 'A' and 'B' are set to '1', with 'C' set to '0', then the following query is generated:

        SELECT * FROM Food WHERE Name In ('Apple', 'Biscuit');

    What is the best way to do it?
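
    One straightforward approach (a sketch of mine, not from the question; the flag-to-value mapping mirrors the example above) is to collect the values whose flags are set and join them into the IN list:

        #include <iostream>
        #include <string>
        #include <vector>

        // Build "SELECT * FROM Food WHERE Name In (...)" from three boolean flags,
        // using the question's mapping: A -> 'Apple', B -> 'Biscuit', C -> 'Chocolate'.
        std::string buildQuery(bool a, bool b, bool c) {
            std::vector<std::string> names;
            if (a) names.push_back("'Apple'");
            if (b) names.push_back("'Biscuit'");
            if (c) names.push_back("'Chocolate'");
            if (names.empty())
                return "SELECT * FROM Food WHERE 1 = 0;"; // no flags set: match nothing
            std::string in;
            for (std::size_t i = 0; i < names.size(); ++i) {
                if (i) in += ", ";
                in += names[i];
            }
            return "SELECT * FROM Food WHERE Name In (" + in + ");";
        }

        int main() {
            std::cout << buildQuery(true, true, false) << "\n";
            // prints: SELECT * FROM Food WHERE Name In ('Apple', 'Biscuit');
        }

    With user-supplied values rather than fixed literals, the same query shape should be built with bound parameters (one placeholder per set flag) instead of string concatenation, to avoid SQL injection.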

    Read the article

  • Preloading images from XML into Flash

    - by tykebikemike
    I know this has been asked to death, but I can't seem to find the answer I'm looking for. I've put together a simple Flash/XML image gallery and I would like to preload the images. A simple process:

        1. Get the XML data
        2. Preload the images
        3. When all images are preloaded, go to the next frame

    Here is my code thus far:

        var myXML:XML = new XML();
        myXML.ignoreWhite = true;
        myXML.load("xmltest.xml");
        myXML.onLoad = function(success) {
            if (success) {
                var myImage = myXML.firstChild.childNodes;
                for (i = 0; i < myImage.length; i++) {
                    var imageNumber = i + 1;
                    var imageName = myImage[i].attributes.name;
                    var imageURL = myImage[i].firstChild.nodeValue;
                    trace('image #: ' + imageNumber);
                    trace('image name: ' + imageName);
                    trace('image url: ' + imageURL + '\n');
                }
            }
        };

    Read the article

  • How do I toggle 'always on top' for a QMainWindow in Qt?

    - by Jake Petroules
        void MainWindow::on_actionAlways_on_Top_triggered(bool checked)
        {
            Qt::WindowFlags flags = this->windowFlags();
            if (checked)
            {
                this->setWindowFlags(flags | Qt::CustomizeWindowHint | Qt::WindowStaysOnTopHint);
                this->show();
            }
            else
            {
                this->setWindowFlags(flags ^ (Qt::CustomizeWindowHint | Qt::WindowStaysOnTopHint));
                this->show();
            }
        }

    The above solution works, but because setWindowFlags hides the window, it needs to be re-shown, and of course that doesn't look very elegant. So how do I toggle "always on top" for a QMainWindow without that "flashing" side effect?
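
    One commonly suggested workaround (a sketch of mine, not from the original question, assuming the same MainWindow class) is to sidestep setWindowFlags on Windows and toggle the Z-order directly through the native Win32 SetWindowPos call, which changes the topmost style in place without recreating, and therefore without hiding, the window:

        #ifdef Q_OS_WIN
        #include <windows.h>
        #endif

        void MainWindow::on_actionAlways_on_Top_triggered(bool checked)
        {
        #ifdef Q_OS_WIN
            // Toggle the TOPMOST style in place; the window is never hidden,
            // so there is no flicker. winId() is this window's native HWND.
            SetWindowPos(reinterpret_cast<HWND>(winId()),
                         checked ? HWND_TOPMOST : HWND_NOTOPMOST,
                         0, 0, 0, 0,
                         SWP_NOMOVE | SWP_NOSIZE | SWP_NOACTIVATE);
        #else
            // Fall back to the flag-based approach (with its flicker) elsewhere.
            Qt::WindowFlags flags = windowFlags();
            if (checked)
                flags |= Qt::CustomizeWindowHint | Qt::WindowStaysOnTopHint;
            else
                flags &= ~(Qt::CustomizeWindowHint | Qt::WindowStaysOnTopHint);
            setWindowFlags(flags);
            show();
        #endif
        }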

    Read the article

  • fopen in C(Linux) returns "Too many open files"

    - by liv2hak
        static char filename[128] = "trace.txt";

        g_file = fopen(filename, "w");
        if (NULL == g_file)
        {
            printf("Cannot open file %s. error %s\n", filename, strerror(errno));
            exit(1);
        }

    I am trying to open an empty text file named trace.txt in write mode (in my working directory). The program creates an empty file trace.txt in my directory, but the check (NULL == g_file) returns true, and fopen reports error code 24 (Too many open files). Any idea why this is? This is the first file I am opening in my program.
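
    A small diagnostic sketch (my addition, not from the question): errno 24 is EMFILE, the per-process open-descriptor limit, so printing that limit can help confirm whether descriptors are being leaked elsewhere before this fopen ever runs.

        #include <cstdio>
        #include <sys/resource.h>   // getrlimit, RLIMIT_NOFILE (POSIX)

        int main() {
            struct rlimit rl;
            if (getrlimit(RLIMIT_NOFILE, &rl) == 0) {
                // rlim_cur is the soft limit on open descriptors; EMFILE from
                // fopen means the process already holds that many.
                std::printf("open-file limits: soft %llu, hard %llu\n",
                            (unsigned long long)rl.rlim_cur,
                            (unsigned long long)rl.rlim_max);
            } else {
                std::perror("getrlimit");
            }
            return 0;
        }

    On Linux, ls /proc/<pid>/fd lists the descriptors a running process currently holds, which makes a leak elsewhere in the program easy to spot.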

    Read the article

  • ActionScript Local X And Y Coordinates Of Display Object?

    - by TheDarkIn1978
    I'm trying to trace the x and y coordinates from within a sprite. I've added a rectangle to the stage:

        var rect:Rectangle = new Rectangle(10, 10, 200, 200);
        addChild(rect);

    Adding a Mouse_Move event to the rect, I can trace mouseX and mouseY to receive the stage coordinates while moving over the rect, but how do I get the local x and y coordinates? So if I mouse over the very top left of the rect sprite, mouseX and mouseY return 10 as the global coordinates, but how do I make it return 0, the local coordinates of the sprite? I assumed localX and localY were what I was looking for, but this doesn't work:

        function mouseOverTraceCoords(evt:MouseEvent):void
        {
            trace(mouseX, mouseY, evt.localX, evt.localY);
        }
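
    Conceptually, mapping a global (stage) coordinate to a sprite-local one is just an offset subtraction when the sprite is untransformed; a minimal sketch of mine (in C++ rather than ActionScript) to make the relationship explicit:

        struct Point { double x, y; };

        // Convert stage (global) coordinates to coordinates local to a display
        // object whose top-left corner sits at 'origin' on the stage. For the
        // question's rect at (10, 10), a mouse at stage (10, 10) maps to (0, 0).
        Point globalToLocal(Point global, Point origin) {
            return Point{ global.x - origin.x, global.y - origin.y };
        }

    In ActionScript itself, DisplayObject.globalToLocal performs this conversion, and it also accounts for the scaling and rotation that a plain subtraction would miss.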

    Read the article

  • NoMethodError in UsersController#create

    - by Mike DeVerna
    I'm getting stuck on an error when signing up a new user in Michael Hartl's Ruby on Rails Tutorial. I'm new to Rails, but I've been searching for hours and can't seem to find anything to figure out the issue. My initial thought is that it's specific to the following line: redirect_to @user. This is my file for users_controller.rb:

        #!/bin/env ruby
        # encoding: utf-8
        class UsersController < ApplicationController

          def show
            @user = User.find(params[:id])
          end

          def new
            @user = User.new
          end

          def create
            @user = User.new(params[:user])
            if @user.save
              flash[:success] = "Welcome to the Sample App!"
              ??????????????
              redirect_to @user
            else
              render 'new'
            end
          end
        end

    This is the error message I get:

        NoMethodError in UsersController#create
        undefined method `??????????????' for #
        Rails.root: /Users/mikedeverna/Documents/rails_projects/sample_app

        Application Trace | Framework Trace | Full Trace
        app/controllers/users_controller.rb:18:in `create'

    Here is the code in my routes.rb file:

        SampleApp::Application.routes.draw do
          resources :users
          resources :sessions, only: [:new, :create, :destroy]

          root to: 'static_pages#home'

          match '/signup', to: 'users#new'
          match '/signin', to: 'sessions#new'
          match '/signout', to: 'sessions#destroy', via: :delete

          match '/help', to: 'static_pages#help'
          match '/about', to: 'static_pages#about'
          match '/contact', to: 'static_pages#contact'

    Read the article

  • Intellitrace bug causes “Operation could destabilize the runtime” exception

    - by Magnus Karlsson
    We can't use it when we use SimpleMembership to handle external authorizations.

        Server Error in '/' Application.
        Operation could destabilize the runtime.

        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

        Exception Details: System.Security.VerificationException: Operation could destabilize the runtime.

        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [VerificationException: Operation could destabilize the runtime.]
        DotNetOpenAuth.OpenId.Messages.IndirectSignedResponse.GetSignedMessageParts(Channel channel) +943
        DotNetOpenAuth.OpenId.ChannelElements.ExtensionsBindingElement.GetExtensionsDictionary(IProtocolMessage message, Boolean ignoreUnsigned) +282
        DotNetOpenAuth.OpenId.ChannelElements.<GetExtensions>d__a.MoveNext() +279
        DotNetOpenAuth.OpenId.ChannelElements.ExtensionsBindingElement.ProcessIncomingMessage(IProtocolMessage message) +594
        DotNetOpenAuth.Messaging.Channel.ProcessIncomingMessage(IProtocolMessage message) +933
        DotNetOpenAuth.OpenId.ChannelElements.OpenIdChannel.ProcessIncomingMessage(IProtocolMessage message) +326
        DotNetOpenAuth.Messaging.Channel.ReadFromRequest(HttpRequestBase httpRequest) +1343
        DotNetOpenAuth.OpenId.RelyingParty.OpenIdRelyingParty.GetResponse(HttpRequestBase httpRequestInfo) +241
        DotNetOpenAuth.OpenId.RelyingParty.OpenIdRelyingParty.GetResponse() +361
        DotNetOpenAuth.AspNet.Clients.OpenIdClient.VerifyAuthentication(HttpContextBase context) +136
        DotNetOpenAuth.AspNet.OpenAuthSecurityManager.VerifyAuthentication(String returnUrl) +984
        Microsoft.Web.WebPages.OAuth.OAuthWebSecurity.VerifyAuthenticationCore(HttpContextBase context, String returnUrl) +333
        Microsoft.Web.WebPages.OAuth.OAuthWebSecurity.VerifyAuthentication(String returnUrl) +192
        PrioMvcWebRole.Controllers.AccountController.ExternalLoginCallback(String returnUrl) in c:hiddenforyou
        lambda_method(Closure , ControllerBase , Object[] ) +127
        System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters) +250
        System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters) +39
        System.Web.Mvc.Async.<>c__DisplayClass39.<BeginInvokeActionMethodWithFilters>b__33() +87
        System.Web.Mvc.Async.<>c__DisplayClass4f.<InvokeActionMethodFilterAsynchronously>b__49() +439
        System.Web.Mvc.Async.<>c__DisplayClass4f.<InvokeActionMethodFilterAsynchronously>b__49() +439
        System.Web.Mvc.Async.<>c__DisplayClass37.<BeginInvokeActionMethodWithFilters>b__36(IAsyncResult asyncResult) +15
        System.Web.Mvc.Async.<>c__DisplayClass2a.<BeginInvokeAction>b__20() +34
        System.Web.Mvc.Async.<>c__DisplayClass25.<BeginInvokeAction>b__22(IAsyncResult asyncResult) +221
        System.Web.Mvc.<>c__DisplayClass1d.<BeginExecuteCore>b__18(IAsyncResult asyncResult) +28
        System.Web.Mvc.Async.<>c__DisplayClass4.<MakeVoidDelegate>b__3(IAsyncResult ar) +15
        System.Web.Mvc.Controller.EndExecuteCore(IAsyncResult asyncResult) +42
        System.Web.Mvc.Async.<>c__DisplayClass4.<MakeVoidDelegate>b__3(IAsyncResult ar) +15
        System.Web.Mvc.<>c__DisplayClass8.<BeginProcessRequest>b__3(IAsyncResult asyncResult) +42
        System.Web.Mvc.Async.<>c__DisplayClass4.<MakeVoidDelegate>b__3(IAsyncResult ar) +15
        System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +523
        System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +176

        Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.17929

    Read the article

  • Stuck at the STARTUP [closed]

    - by Tarik Setia
    I started with the "Getting started with ASP.NET MVC 4" tutorial. I just created the project, and when I pressed F5 I got this:

        Server Error in '/' Application.

        Could not load type 'System.Web.WebPages.DisplayModes' from assembly 'System.Web.WebPages, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.

        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

        Exception Details: System.TypeLoadException: Could not load type 'System.Web.WebPages.DisplayModes' from assembly 'System.Web.WebPages, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.

        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [TypeLoadException: Could not load type 'System.Web.WebPages.DisplayModes' from assembly 'System.Web.WebPages, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.]
        System.Web.Mvc.VirtualPathProviderViewEngine.GetPath(ControllerContext controllerContext, String[] locations, String[] areaLocations, String locationsPropertyName, String name, String controllerName, String cacheKeyPrefix, Boolean useCache, String[]& searchedLocations) +0
        System.Web.Mvc.VirtualPathProviderViewEngine.FindView(ControllerContext controllerContext, String viewName, String masterName, Boolean useCache) +315
        System.Web.Mvc.c__DisplayClassc.b__a(IViewEngine e) +68
        System.Web.Mvc.ViewEngineCollection.Find(Func`2 lookup, Boolean trackSearchedPaths) +182
        System.Web.Mvc.ViewEngineCollection.Find(Func`2 cacheLocator, Func`2 locator) +67
        System.Web.Mvc.ViewEngineCollection.FindView(ControllerContext controllerContext, String viewName, String masterName) +329
        System.Web.Mvc.ViewResult.FindView(ControllerContext context) +135
        System.Web.Mvc.ViewResultBase.ExecuteResult(ControllerContext context) +230
        System.Web.Mvc.ControllerActionInvoker.InvokeActionResult(ControllerContext controllerContext, ActionResult actionResult) +39
        System.Web.Mvc.c__DisplayClass1c.b__19() +74
        System.Web.Mvc.ControllerActionInvoker.InvokeActionResultFilter(IResultFilter filter, ResultExecutingContext preContext, Func`1 continuation) +388
        System.Web.Mvc.c__DisplayClass1e.b__1b() +72
        System.Web.Mvc.ControllerActionInvoker.InvokeActionResultWithFilters(ControllerContext controllerContext, IList`1 filters, ActionResult actionResult) +303
        System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +844
        System.Web.Mvc.Controller.ExecuteCore() +130
        System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) +229
        System.Web.Mvc.ControllerBase.System.Web.Mvc.IController.Execute(RequestContext requestContext) +39
        System.Web.Mvc.c__DisplayClassb.b__5() +71
        System.Web.Mvc.Async.c__DisplayClass1.b__0() +44
        System.Web.Mvc.Async.c__DisplayClass8`1.b__7(IAsyncResult _) +42
        System.Web.Mvc.Async.WrappedAsyncResult`1.End() +152
        System.Web.Mvc.Async.AsyncResultWrapper.End(IAsyncResult asyncResult, Object tag) +59
        System.Web.Mvc.Async.AsyncResultWrapper.End(IAsyncResult asyncResult, Object tag) +40
        System.Web.Mvc.c__DisplayClasse.b__d() +75
        System.Web.Mvc.SecurityUtil.b__0(Action f) +31
        System.Web.Mvc.SecurityUtil.ProcessInApplicationTrust(Action action) +61
        System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult) +118
        System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.EndProcessRequest(IAsyncResult result) +38
        System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +10303829
        System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +178

        Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.17020

    Read the article

  • Need help with PHP web app bootstrapping error potentially related to Zend [migrated]

    - by Matt Shepherd
    I am trying to get a program called OpenFISMA running on an Ubuntu AMI in AWS. The app is not really coded on the Ubuntu platform, but I am in my comfort zone there, and have tried both CentOS and OpenSUSE (both are sort of "native" for the app) for getting it working with the same or worse results. So, why not just get it working on Ubuntu? Anyway, the app is found here: www.openfisma.org and an install guide is found here: https://openfisma.atlassian.net/wiki/display/030100/Installation+Guide The install guide kind of sucks. It doesn't list dependencies in any coherent way or provide much of any detail (does not even mention Zend once on the entire page) so I've done a lot of work to divine the information I do have. This page provided some dependency inf (though again, Zend is not mentioned once): https://openfisma.atlassian.net/wiki/display/PUBLIC/RPM+Management#RPMManagement-BasicOverviewofRPMPackages Anyway, I got all the way through the install (so far as I could reconstruct it). I am going to the login page for the first time, and there should be some sort of bootstrapping occurring when I load the page. (I am not a programmer so I have no idea what it is doing there.) Anyway, I get a message on the web page that says: "An exception occurred while bootstrapping the application." So, then I go look in /var/www/data/logs/php.log and find this message: [22-Oct-2013 17:29:18 UTC] PHP Fatal error: Uncaught exception 'Zend_Exception' with message 'No entry is registered for key 'Zend_Log'' in /var/www/library/Zend/Registry.php:147 Stack trace: #0 /var/www/public/index.php(188): Zend_Registry::get('Zend_Log') #1 {main} thrown in /var/www/library/Zend/Registry.php on line 147 This occurs every time I load the page. I gather there is an issue related to registering the Zend_Log variable in the Zend registry, but other than that I really have no idea what to do about it. Am I missing a package that it needs, or is this app not coded to register the variables properly? I have no clue. Any help is greatly appreciated. The application file referenced in the log message (index.php) is included below. <?php /** * Copyright (c) 2008 Endeavor Systems, Inc. * * This file is part of OpenFISMA. * * OpenFISMA is free software: you can redistribute it and/or modify it under the terms of the GNU General Public * License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later * version. * * OpenFISMA is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied * warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more * details. * * You should have received a copy of the GNU General Public License along with OpenFISMA. If not, see * {@link http://www.gnu.org/licenses/}. */ try { defined('APPLICATION_PATH') || define( 'APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application') ); // Define application environment defined('APPLICATION_ENV') || define( 'APPLICATION_ENV', (getenv('APPLICATION_ENV') ? getenv('APPLICATION_ENV') : 'production') ); set_include_path( APPLICATION_PATH . '/../library/Symfony/Components' . PATH_SEPARATOR . APPLICATION_PATH . '/../library' . PATH_SEPARATOR . get_include_path() ); require_once 'Fisma.php'; require_once 'Zend/Application.php'; $application = new Zend_Application( APPLICATION_ENV, APPLICATION_PATH . 
'/config/application.ini' ); Fisma::setAppConfig($application->getOptions()); Fisma::initialize(Fisma::RUN_MODE_WEB_APP); $application->bootstrap()->run(); } catch (Zend_Config_Exception $zce) { // A zend config exception indicates that the application may not be installed properly echo '<h1>The application is not installed correctly</h1>'; $zceMsg = $zce->getMessage(); if (stristr($zceMsg, 'parse_ini_file') !== false) { if (stristr($zceMsg, 'application.ini') !== false) { if (stristr($zceMsg, 'No such file or directory') !== false) { echo 'The ' . APPLICATION_PATH . '/config/application.ini file is missing.'; } elseif (stristr($zceMsg, 'Permission denied') !== false) { echo 'The ' . APPLICATION_PATH . '/config/application.ini file does not have the ' . 'appropriate permissions set for the application to read it.'; } else { echo 'An ini-parsing error has occured in ' . APPLICATION_PATH . '/config/application.ini ' . '<br/>Please check this file and make sure everything is setup correctly.'; } } else if (stristr($zceMsg, 'database.ini') !== false) { if (stristr($zceMsg, 'No such file or directory') !== false) { echo 'The ' . APPLICATION_PATH . '/config/database.ini file is missing.<br/>'; echo 'If you find a database.ini.template file in the config directory, edit this file ' . 'appropriately and rename it to database.ini'; } elseif (stristr($zceMsg, 'Permission denied') !== false) { echo 'The ' . APPLICATION_PATH . '/config/database.ini file does not have the appropriate ' . 'permissions set for the application to read it.'; } else { echo 'An ini-parsing error has occured in ' . APPLICATION_PATH . '/config/database.ini ' . '<br/>Please check this file and make sure everything is setup correctly.'; } } else { echo 'An ini-parsing error has occured. <br/>Please check all configuration files and make sure ' . 'everything is setup correctly'; } } elseif (stristr($zceMsg, 'syntax error') !== false) { if (stristr($zceMsg, 'application.ini') !== false) { echo 'There is a syntax error in ' . APPLICATION_PATH . '/config/application.ini ' . '<br/>Please check this file and make sure everything is setup correctly.'; } elseif (stristr($zceMsg, 'database.ini') !== false) { echo 'There is a syntax error in ' . APPLICATION_PATH . '/config/database.ini ' . '<br/>Please check this file and make sure everything is setup correctly.'; } else { echo 'A syntax error has been reached. <br/>Please check all configuration files and make sure ' . 'everything is setup correctly.'; } } else { // Then the exception message says nothing about parse_ini_file nor 'syntax error' echo 'Please check all configuration files, and ensure all settings are valid.'; } echo '<br/>For more information and help on installing OpenFISMA, please refer to the ' . '<a target="_blank" href="http://manual.openfisma.org/display/ADMIN/Installation">' . 'Installation Guide</a>'; } catch (Doctrine_Manager_Exception $dme) { echo '<h1>An exception occurred while bootstrapping the application.</h1>'; // Does database.ini have valid settings? Or is it the same content as database.ini.template? $databaseIniFail = false; $iniData = file(APPLICATION_PATH . 
'/config/database.ini'); $iniData = str_replace(chr(10), '', $iniData); if (in_array('db.adapter = ##DB_ADAPTER##', $iniData)) { $databaseIniFail = true; } if (in_array('db.host = ##DB_HOST##', $iniData)) { $databaseIniFail = true; } if (in_array('db.port = ##DB_PORT##', $iniData)) { $databaseIniFail = true; } if (in_array('db.username = ##DB_USER##', $iniData)) { $databaseIniFail = true; } if (in_array('db.password = ##DB_PASS##', $iniData)) { $databaseIniFail = true; } if (in_array('db.schema = ##DB_NAME##', $iniData)) { $databaseIniFail = true; } if ($databaseIniFail) { echo 'You have not applied the settings in ' . APPLICATION_PATH . '/config/database.ini appropriately. ' . 'Please review the contents of this file and try again.'; } else { if (Fisma::debug()) { echo '<p>' . get_class($dme) . '</p><p>' . $dme->getMessage() . '</p><p>' . "<p><pre>Stack Trace:\n" . $dme->getTraceAsString() . '</pre></p>'; } else { $logString = get_class($dme) . "\n" . $dme->getMessage() . "\nStack Trace:\n" . $dme->getTraceAsString() . "\n"; Zend_Registry::get('Zend_Log')->err($logString); } } } catch (Exception $exception) { // If a bootstrap exception occurs, that indicates a serious problem, such as a syntax error. // We won't be able to do anything except display an error. echo '<h1>An exception occurred while bootstrapping the application.</h1>'; if (Fisma::debug()) { echo '<p>' . get_class($exception) . '</p><p>' . $exception->getMessage() . '</p><p>' . "<p><pre>Stack Trace:\n" . $exception->getTraceAsString() . '</pre></p>'; } else { $logString = get_class($exception) . "\n" . $exception->getMessage() . "\nStack Trace:\n" . $exception->getTraceAsString() . "\n"; Zend_Registry::get('Zend_Log')->err($logString); } }

    Read the article

  • Ancillary Objects: Separate Debug ELF Files For Solaris

    - by Ali Bahrami
    We introduced a new object ELF object type in Solaris 11 Update 1 called the Ancillary Object. This posting describes them, using material originally written during their development, the PSARC arc case, and the Solaris Linker and Libraries Manual. ELF objects contain allocable sections, which are mapped into memory at runtime, and non-allocable sections, which are present in the file for use by debuggers and observability tools, but which are not mapped or used at runtime. Typically, all of these sections exist within a single object file. Ancillary objects allow them to instead go into a separate file. There are different reasons given for wanting such a feature. One can debate whether the added complexity is worth the benefit, and in most cases it is not. However, one important case stands out — customers with very large 32-bit objects who are not ready or able to make the transition to 64-bits. We have customers who build extremely large 32-bit objects. Historically, the debug sections in these objects have used the stabs format, which is limited, but relatively compact. In recent years, the industry has transitioned to the powerful but verbose DWARF standard. In some cases, the size of these debug sections is large enough to push the total object file size past the fundamental 4GB limit for 32-bit ELF object files. The best, and ultimately only, solution to overly large objects is to transition to 64-bits. However, consider environments where: Hundreds of users may be executing the code on large shared systems. (32-bits use less memory and bus bandwidth, and on sparc runs just as fast as 64-bit code otherwise). Complex finely tuned code, where the original authors may no longer be available. Critical production code, that was expensive to qualify and bring online, and which is otherwise serving its intended purpose without issue. Users in these risk adverse and/or high scale categories have good reasons to push 32-bits objects to the limit before moving on. Ancillary objects offer these users a longer runway. Design The design of ancillary objects is intended to be simple, both to help human understanding when examining elfdump output, and to lower the bar for debuggers such as dbx to support them. The primary and ancillary objects have the same set of section headers, with the same names, in the same order (i.e. each section has the same index in both files). A single added section of type SHT_SUNW_ANCILLARY is added to both objects, containing information that allows a debugger to identify and validate both files relative to each other. Given one of these files, the ancillary section allows you to identify the other. Allocable sections go in the primary object, and non-allocable ones go into the ancillary object. A small set of non-allocable objects, notably the symbol table, are copied into both objects. As noted above, most sections are only written to one of the two objects, but both objects have the same section header array. The section header in the file that does not contain the section data is tagged with the SHF_SUNW_ABSENT section header flag to indicate its placeholder status. Compiler writers and others who produce objects can set the SUNW_SHF_PRIMARY section header flag to mark non-allocable sections that should go to the primary object rather than the ancillary. If you don't request an ancillary object, the Solaris ELF format is unchanged. Users who don't use ancillary objects do not pay for the feature. 
This is important, because they exist to serve a small subset of our users, and must not complicate the common case. If you do request an ancillary object, the runtime behavior of the primary object will be the same as that of a normal object. There is no added runtime cost. The primary and ancillary object together represent a logical single object. This is facilitated by the use of a single set of section headers. One can easily imagine a tool that can merge a primary and ancillary object into a single file, or the reverse. (Note that although this is an interesting intellectual exercise, we don't actually supply such a tool because there's little practical benefit above and beyond using ld to create the files). Among the benefits of this approach are: There is no need for per-file symbol tables to reflect the contents of each file. The same symbol table that would be produced for a standard object can be used. The section contents are identical in either case — there is no need to alter data to accommodate multiple files. It is very easy for a debugger to adapt to these new files, and the processing involved can be encapsulated in input/output routines. Most of the existing debugger implementation applies without modification. The limit of a 4GB 32-bit output object is now raised to 4GB of code, and 4GB of debug data. There is also the future possibility (not currently supported) to support multiple ancillary objects, each of which could contain up to 4GB of additional debug data. It must be noted however that the 32-bit DWARF debug format is itself inherently 32-bit limited, as it uses 32-bit offsets between debug sections, so the ability to employ multiple ancillary object files may not turn out to be useful. Using Ancillary Objects (From the Solaris Linker and Libraries Guide) By default, objects contain both allocable and non-allocable sections. Allocable sections are the sections that contain executable code and the data needed by that code at runtime. Non-allocable sections contain supplemental information that is not required to execute an object at runtime. These sections support the operation of debuggers and other observability tools. The non-allocable sections in an object are not loaded into memory at runtime by the operating system, and so, they have no impact on memory use or other aspects of runtime performance no matter their size. For convenience, both allocable and non-allocable sections are normally maintained in the same file. However, there are situations in which it can be useful to separate these sections. To reduce the size of objects in order to improve the speed at which they can be copied across wide area networks. To support fine grained debugging of highly optimized code requires considerable debug data. In modern systems, the debugging data can easily be larger than the code it describes. The size of a 32-bit object is limited to 4 Gbytes. In very large 32-bit objects, the debug data can cause this limit to be exceeded and prevent the creation of the object. To limit the exposure of internal implementation details. Traditionally, objects have been stripped of non-allocable sections in order to address these issues. Stripping is effective, but destroys data that might be needed later. The Solaris link-editor can instead write non-allocable sections to an ancillary object. This feature is enabled with the -z ancillary command line option. $ ld ... 
-z ancillary[=outfile] ...By default, the ancillary file is given the same name as the primary output object, with a .anc file extension. However, a different name can be provided by providing an outfile value to the -z ancillary option. When -z ancillary is specified, the link-editor performs the following actions. All allocable sections are written to the primary object. In addition, all non-allocable sections containing one or more input sections that have the SHF_SUNW_PRIMARY section header flag set are written to the primary object. All remaining non-allocable sections are written to the ancillary object. The following non-allocable sections are written to both the primary object and ancillary object. .shstrtab The section name string table. .symtab The full non-dynamic symbol table. .symtab_shndx The symbol table extended index section associated with .symtab. .strtab The non-dynamic string table associated with .symtab. .SUNW_ancillary Contains the information required to identify the primary and ancillary objects, and to identify the object being examined. The primary object and all ancillary objects contain the same array of sections headers. Each section has the same section index in every file. Although the primary and ancillary objects all define the same section headers, the data for most sections will be written to a single file as described above. If the data for a section is not present in a given file, the SHF_SUNW_ABSENT section header flag is set, and the sh_size field is 0. This organization makes it possible to acquire a full list of section headers, a complete symbol table, and a complete list of the primary and ancillary objects from either of the primary or ancillary objects. The following example illustrates the underlying implementation of ancillary objects. An ancillary object is created by adding the -z ancillary command line option to an otherwise normal compilation. The file utility shows that the result is an executable named a.out, and an associated ancillary object named a.out.anc. $ cat hello.c #include <stdio.h> int main(int argc, char **argv) { (void) printf("hello, world\n"); return (0); } $ cc -g -zancillary hello.c $ file a.out a.out.anc a.out: ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, not stripped, ancillary object a.out.anc a.out.anc: ELF 32-bit LSB ancillary 80386 Version 1, primary object a.out $ ./a.out hello worldThe resulting primary object is an ordinary executable that can be executed in the usual manner. It is no different at runtime than an executable built without the use of ancillary objects, and then stripped of non-allocable content using the strip or mcs commands. As previously described, the primary object and ancillary objects contain the same section headers. To see how this works, it is helpful to use the elfdump utility to display these section headers and compare them. The following table shows the section header information for a selection of headers from the previous link-edit example. 
Index Section Name Type Primary Flags Ancillary Flags Primary Size Ancillary Size 13 .text PROGBITS ALLOC EXECINSTR ALLOC EXECINSTR SUNW_ABSENT 0x131 0 20 .data PROGBITS WRITE ALLOC WRITE ALLOC SUNW_ABSENT 0x4c 0 21 .symtab SYMTAB 0 0 0x450 0x450 22 .strtab STRTAB STRINGS STRINGS 0x1ad 0x1ad 24 .debug_info PROGBITS SUNW_ABSENT 0 0 0x1a7 28 .shstrtab STRTAB STRINGS STRINGS 0x118 0x118 29 .SUNW_ancillary SUNW_ancillary 0 0 0x30 0x30 The data for most sections is only present in one of the two files, and absent from the other file. The SHF_SUNW_ABSENT section header flag is set when the data is absent. The data for allocable sections needed at runtime are found in the primary object. The data for non-allocable sections used for debugging but not needed at runtime are placed in the ancillary file. A small set of non-allocable sections are fully present in both files. These are the .SUNW_ancillary section used to relate the primary and ancillary objects together, the section name string table .shstrtab, as well as the symbol table.symtab, and its associated string table .strtab. It is possible to strip the symbol table from the primary object. A debugger that encounters an object without a symbol table can use the .SUNW_ancillary section to locate the ancillary object, and access the symbol contained within. The primary object, and all associated ancillary objects, contain a .SUNW_ancillary section that allows all the objects to be identified and related together. $ elfdump -T SUNW_ancillary a.out a.out.anc a.out: Ancillary Section: .SUNW_ancillary index tag value [0] ANC_SUNW_CHECKSUM 0x8724 [1] ANC_SUNW_MEMBER 0x1 a.out [2] ANC_SUNW_CHECKSUM 0x8724 [3] ANC_SUNW_MEMBER 0x1a3 a.out.anc [4] ANC_SUNW_CHECKSUM 0xfbe2 [5] ANC_SUNW_NULL 0 a.out.anc: Ancillary Section: .SUNW_ancillary index tag value [0] ANC_SUNW_CHECKSUM 0xfbe2 [1] ANC_SUNW_MEMBER 0x1 a.out [2] ANC_SUNW_CHECKSUM 0x8724 [3] ANC_SUNW_MEMBER 0x1a3 a.out.anc [4] ANC_SUNW_CHECKSUM 0xfbe2 [5] ANC_SUNW_NULL 0 The ancillary sections for both objects contain the same number of elements, and are identical except for the first element. Each object, starting with the primary object, is introduced with a MEMBER element that gives the file name, followed by a CHECKSUM that identifies the object. In this example, the primary object is a.out, and has a checksum of 0x8724. The ancillary object is a.out.anc, and has a checksum of 0xfbe2. The first element in a .SUNW_ancillary section, preceding the MEMBER element for the primary object, is always a CHECKSUM element, containing the checksum for the file being examined. The presence of a .SUNW_ancillary section in an object indicates that the object has associated ancillary objects. The names of the primary and all associated ancillary objects can be obtained from the ancillary section from any one of the files. It is possible to determine which file is being examined from the larger set of files by comparing the first checksum value to the checksum of each member that follows. Debugger Access and Use of Ancillary Objects Debuggers and other observability tools must merge the information found in the primary and ancillary object files in order to build a complete view of the object. This is equivalent to processing the information from a single file. This merging is simplified by the primary object and ancillary objects containing the same section headers, and a single symbol table. The following steps can be used by a debugger to assemble the information contained in these files. 
Starting with the primary object, or any of the ancillary objects, locate the .SUNW_ancillary section. The presence of this section identifies the object as part of an ancillary group, contains information that can be used to obtain a complete list of the files and determine which of those files is the one currently being examined. Create a section header array in memory, using the section header array from the object being examined as an initial template. Open and read each file identified by the .SUNW_ancillary section in turn. For each file, fill in the in-memory section header array with the information for each section that does not have the SHF_SUNW_ABSENT flag set. The result will be a complete in-memory copy of the section headers with pointers to the data for all sections. Once this information has been acquired, the debugger can proceed as it would in the single file case, to access and control the running program. Note - The ELF definition of ancillary objects provides for a single primary object, and an arbitrary number of ancillary objects. At this time, the Oracle Solaris link-editor only produces a single ancillary object containing all non-allocable sections. This may change in the future. Debuggers and other observability tools should be written to handle the general case of multiple ancillary objects. ELF Implementation Details (From the Solaris Linker and Libraries Guide) To implement ancillary objects, it was necessary to extend the ELF format to add a new object type (ET_SUNW_ANCILLARY), a new section type (SHT_SUNW_ANCILLARY), and 2 new section header flags (SHF_SUNW_ABSENT, SHF_SUNW_PRIMARY). In this section, I will detail these changes, in the form of diffs to the Solaris Linker and Libraries manual. Part IV ELF Application Binary Interface Chapter 13: Object File Format Object File Format Edit Note: This existing section at the beginning of the chapter describes the ELF header. There's a table of object file types, which now includes the new ET_SUNW_ANCILLARY type. e_type Identifies the object file type, as listed in the following table. NameValueMeaning ET_NONE0No file type ET_REL1Relocatable file ET_EXEC2Executable file ET_DYN3Shared object file ET_CORE4Core file ET_LOSUNW0xfefeStart operating system specific range ET_SUNW_ANCILLARY0xfefeAncillary object file ET_HISUNW0xfefdEnd operating system specific range ET_LOPROC0xff00Start processor-specific range ET_HIPROC0xffffEnd processor-specific range Sections Edit Note: This overview section defines the section header structure, and provides a high level description of known sections. It was updated to define the new SHF_SUNW_ABSENT and SHF_SUNW_PRIMARY flags and the new SHT_SUNW_ANCILLARY section. ... sh_type Categorizes the section's contents and semantics. Section types and their descriptions are listed in Table 13-5. sh_flags Sections support 1-bit flags that describe miscellaneous attributes. Flag definitions are listed in Table 13-8. ... Table 13-5 ELF Section Types, sh_type NameValue . . . SHT_LOSUNW0x6fffffee SHT_SUNW_ancillary0x6fffffee . . . ... SHT_LOSUNW - SHT_HISUNW Values in this inclusive range are reserved for Oracle Solaris OS semantics. SHT_SUNW_ANCILLARY Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section. ... Table 13-8 ELF Section Attribute Flags NameValue . . . 
SHF_MASKOS0x0ff00000 SHF_SUNW_NODISCARD0x00100000 SHF_SUNW_ABSENT0x00200000 SHF_SUNW_PRIMARY0x00400000 SHF_MASKPROC0xf0000000 . . . ... SHF_SUNW_ABSENT Indicates that the data for this section is not present in this file. When ancillary objects are created, the primary object and any ancillary objects, will all have the same section header array, to facilitate merging them to form a complete view of the object, and to allow them to use the same symbol tables. Each file contains a subset of the section data. The data for allocable sections is written to the primary object while the data for non-allocable sections is written to an ancillary file. The SHF_SUNW_ABSENT flag is used to indicate that the data for the section is not present in the object being examined. When the SHF_SUNW_ABSENT flag is set, the sh_size field of the section header must be 0. An application encountering an SHF_SUNW_ABSENT section can choose to ignore the section, or to search for the section data within one of the related ancillary files. SHF_SUNW_PRIMARY The default behavior when ancillary objects are created is to write all allocable sections to the primary object and all non-allocable sections to the ancillary objects. The SHF_SUNW_PRIMARY flag overrides this behavior. Any output section containing one more input section with the SHF_SUNW_PRIMARY flag set is written to the primary object without regard for its allocable status. ... Two members in the section header, sh_link, and sh_info, hold special information, depending on section type. Table 13-9 ELF sh_link and sh_info Interpretation sh_typesh_linksh_info . . . SHT_SUNW_ANCILLARY The section header index of the associated string table. 0 . . . Special Sections Edit Note: This section describes the sections used in Solaris ELF objects, using the types defined in the previous description of section types. It was updated to define the new .SUNW_ancillary (SHT_SUNW_ANCILLARY) section. Various sections hold program and control information. Sections in the following table are used by the system and have the indicated types and attributes. Table 13-10 ELF Special Sections NameTypeAttribute . . . .SUNW_ancillarySHT_SUNW_ancillaryNone . . . ... .SUNW_ancillary Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section for details. ... Ancillary Section Edit Note: This new section provides the format reference describing the layout of a .SUNW_ancillary section and the meaning of the various tags. Note that these sections use the same tag/value concept used for dynamic and capabilities sections, and will be familiar to anyone used to working with ELF. In addition to the primary output object, the Solaris link-editor can produce one or more ancillary objects. Ancillary objects contain non-allocable sections that would normally be written to the primary object. When ancillary objects are produced, the primary object and all of the associated ancillary objects contain a SHT_SUNW_ancillary section, containing information that identifies these related objects. Given any one object from such a group, the ancillary section provides the information needed to identify and interpret the others. This section contains an array of the following structures. See sys/elf.h. 
typedef struct { Elf32_Word a_tag; union { Elf32_Word a_val; Elf32_Addr a_ptr; } a_un; } Elf32_Ancillary; typedef struct { Elf64_Xword a_tag; union { Elf64_Xword a_val; Elf64_Addr a_ptr; } a_un; } Elf64_Ancillary; For each object with this type, a_tag controls the interpretation of a_un. a_val These objects represent integer values with various interpretations. a_ptr These objects represent file offsets or addresses. The following ancillary tags exist. Table 13-NEW1 ELF Ancillary Array Tags NameValuea_un ANC_SUNW_NULL0Ignored ANC_SUNW_CHECKSUM1a_val ANC_SUNW_MEMBER2a_ptr ANC_SUNW_NULL Marks the end of the ancillary section. ANC_SUNW_CHECKSUM Provides the checksum for a file in the c_val element. When ANC_SUNW_CHECKSUM precedes the first instance of ANC_SUNW_MEMBER, it provides the checksum for the object from which the ancillary section is being read. When it follows an ANC_SUNW_MEMBER tag, it provides the checksum for that member. ANC_SUNW_MEMBER Specifies an object name. The a_ptr element contains the string table offset of a null-terminated string, that provides the file name. An ancillary section must always contain an ANC_SUNW_CHECKSUM before the first instance of ANC_SUNW_MEMBER, identifying the current object. Following that, there should be an ANC_SUNW_MEMBER for each object that makes up the complete set of objects. Each ANC_SUNW_MEMBER should be followed by an ANC_SUNW_CHECKSUM for that object. A typical ancillary section will therefore be structured as: TagMeaning ANC_SUNW_CHECKSUMChecksum of this object ANC_SUNW_MEMBERName of object #1 ANC_SUNW_CHECKSUMChecksum for object #1 . . . ANC_SUNW_MEMBERName of object N ANC_SUNW_CHECKSUMChecksum for object N ANC_SUNW_NULL An object can therefore identify itself by comparing the initial ANC_SUNW_CHECKSUM to each of the ones that follow, until it finds a match. Related Other Work The GNU developers have also encountered the need/desire to support separate debug information files, and use the solution detailed at http://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html. At the current time, the separate debug file is constructed by building the standard object first, and then copying the debug data out of it in a separate post processing step, Hence, it is limited to a total of 4GB of code and debug data, just as a single object file would be. They are aware of this, and I have seen online comments indicating that they may add direct support for generating these separate files to their link-editor. It is worth noting that the GNU objcopy utility is available on Solaris, and that the Studio dbx debugger is able to use these GNU style separate debug files even on Solaris. Although this is interesting in terms giving Linux users a familiar environment on Solaris, the 4GB limit means it is not an answer to the problem of very large 32-bit objects. We have also encountered issues with objcopy not understanding Solaris-specific ELF sections, when using this approach. The GNU community also has a current effort to adapt their DWARF debug sections in order to move them to separate files before passing the relocatable objects to the linker. The details of Project Fission can be found at http://gcc.gnu.org/wiki/DebugFission. The goal of this project appears to be to reduce the amount of data seen by the link-editor. The primary effort revolves around moving DWARF data to separate .dwo files so that the link-editor never encounters them. 
The details of modifying the DWARF data to be usable in this form are involved — please see the above URL for details.
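
    To make the tag/value layout above concrete, here is a small illustrative walker of mine (not from the post) over an already-loaded .SUNW_ancillary array. It uses the Elf32_Ancillary structure and ANC_SUNW_* tags defined earlier, and assumes the caller has mapped the section data and its associated string table (identified by the section's sh_link):

        #include <cstdint>
        #include <cstdio>

        // Structure and tag values as documented above (sys/elf.h on Solaris).
        typedef uint32_t Elf32_Word;
        typedef uint32_t Elf32_Addr;
        typedef struct {
            Elf32_Word a_tag;
            union { Elf32_Word a_val; Elf32_Addr a_ptr; } a_un;
        } Elf32_Ancillary;

        enum { ANC_SUNW_NULL = 0, ANC_SUNW_CHECKSUM = 1, ANC_SUNW_MEMBER = 2 };

        // The array always opens with a CHECKSUM for the file being examined,
        // followed by MEMBER/CHECKSUM pairs for every object in the group,
        // terminated by ANC_SUNW_NULL. a_ptr holds a string table offset.
        void listAncillaryGroup(const Elf32_Ancillary *anc, const char *strtab) {
            Elf32_Word self = anc->a_un.a_val;   // checksum of this object
            std::printf("this object's checksum: 0x%x\n", (unsigned)self);
            for (++anc; anc->a_tag != ANC_SUNW_NULL; ++anc) {
                if (anc->a_tag == ANC_SUNW_MEMBER)
                    std::printf("member: %s", strtab + anc->a_un.a_ptr);
                else if (anc->a_tag == ANC_SUNW_CHECKSUM)
                    std::printf("  checksum: 0x%x%s\n", (unsigned)anc->a_un.a_val,
                                anc->a_un.a_val == self ? "  (this file)" : "");
            }
        }

    Run against the a.out example above, this would report a.out (0x8724) and a.out.anc (0xfbe2), with the "(this file)" marker landing on whichever member's checksum matches the leading one.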

    Read the article

  • Illegal characters for SharePoint 2010 Content Type name

    - by Kelly Jones
    Quick tip: you can't include a backslash in the name of a SharePoint 2010 Content Type. In fact, there are several illegal characters: \ / : * ? " # % < > { } | ~ & , two consecutive periods (..), or special characters such as a tab. What, you didn't know that after entering one of these characters in the name? Is it because you saw the generic error screen instead? That's right: you need to turn off custom errors in the layouts folder (see this blog post for details), and you'll also need to turn them off for the web application. Once you do that, you'll see the real error. I wonder why the SharePoint team just doesn't let the user know that the content type name contains illegal characters before the user hits the create button. Here's a copy of the complete error (for the search engines):

        Server Error in '/' Application.

        The content type name 'asdfadsf\asdfasf' cannot contain: \ / : * ? " # % < > { } | ~ & , two consecutive periods (..), or special characters such as a tab.

        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

        Exception Details: Microsoft.SharePoint.SPInvalidContentTypeNameException: The content type name 'asdfadsf\asdfasf' cannot contain: \ / : * ? " # % < > { } | ~ & , two consecutive periods (..), or special characters such as a tab.

        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [SPInvalidContentTypeNameException: The content type name 'asdfadsf\asdfasf' cannot contain: \ / : * ? " # % < > { } | ~ & , two consecutive periods (..), or special characters such as a tab.]
        Microsoft.SharePoint.SPContentType.ValidateName(String name) +27419522
        Microsoft.SharePoint.SPContentType.ValidateNameWithResource(String strVal, String& strLocalized) +423
        Microsoft.SharePoint.SPContentType.set_Name(String value) +151
        Microsoft.SharePoint.SPContentType.Initialize(SPContentType parentContentType, SPContentTypeCollection collection, String name) +112
        Microsoft.SharePoint.SPContentType..ctor(SPContentType parentContentType, SPContentTypeCollection collection, String name) +132
        Microsoft.SharePoint.ApplicationPages.ContentTypeCreatePage.BtnOK_Click(Object sender, EventArgs e) +497
        System.Web.UI.WebControls.Button.OnClick(EventArgs e) +115
        System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +140
        System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +29
        System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2981

        Version Information: Microsoft .NET Framework Version:2.0.50727.4927; ASP.NET Version:2.0.50727.4927
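
    A quick way to catch bad names before SharePoint does; this is a sketch of mine based only on the rule quoted above, not Microsoft's actual validation code:

        #include <string>

        // Returns true if 'name' avoids the characters SharePoint 2010 rejects
        // in content type names: \ / : * ? " # % < > { } | ~ & , a tab, and
        // two consecutive periods.
        bool isValidContentTypeName(const std::string &name) {
            const std::string illegal = "\\/:*?\"#%<>{}|~&,\t";
            if (name.empty()) return false;
            for (std::size_t i = 0; i < name.size(); ++i) {
                if (illegal.find(name[i]) != std::string::npos) return false;
                if (name[i] == '.' && i + 1 < name.size() && name[i + 1] == '.') return false;
            }
            return true;
        }

    Validating up front like this, before ever calling the content type creation API, spares users the server error page entirely.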

    Read the article

  • Additional options in MDL

    - by Jane Zhang
        The Metadata Loader(MDL) enables you to populate a new repository as well as transfer, update, or restore a backup of existing repository metadata. It consists of two utilities: metadata export and metadata import. The export utility extracts metadata objects from a repository and writes the information into a file. The import utility reads the metadata information from an exported file and inserts the metadata objects into a repository.      While the Design Client provides an intuitive UI that helps you perform the most commonly used export and import tasks, OMBPlus scripting enables you to specify some additional options, and manage a control file that allows you to perform more specialized export and import tasks. Is it possible to utilize these options in MDL from Design Client? This article will tell you how to achieve it.      A property file named mdl.properties is used to configure the additional options. It stores options in name/value pairs. This file can be created and placed under the directory <owb installation path>/owb/bin/admin/. Below we will introduce the options that can be specified in the mdl.properties file. 1. DEFAULTDIRECTORY     When we open a Metadata Export/Import dialog in Design Client, a default directory is provided for MDL file and log file. For MDL Export, the default directory is <owb installation path>/owb/bin/. As for MDL Import, the default directory is <owb installation path>/owb/mdl/. It may not be the one you would want to use as a default. You can specify the option DEFAULTDIRECTORY in the mdl.properties file to set your own default directory for MDL Export/Import, for example, DEFAULTDIRECOTRY=/tmp/     In this example, the default directory is set to /tmp/. Be sure the value ends with a file separator since it represents a directory. In Windows, the file separator is “\”. In linux, the file separator is “/”. 2. MDLTRACEFILE     Sometimes we would like to trace the whole process of MDL Export/Import, and get detailed information about operations to help developers or supports troubleshooting. To turn on MDL trace, set the option MDLTRACEFILE in the mdl.properties file. MDLTRACEFILE=/tmp/mdl.trc    The right side of the equals sign is to specify the name of the file for MDL trace information to be written. If no path is specified, the file will be placed under directory <owb installation path>/owb/bin/admin/. However, the trace file may be large if the MDL file contains a large number of metadata objects, so please use this option sparingly. 3. CONTROLFILE       We can use a control file to specify how objects are imported or exported. We can set an option called CONTROLFILE in the mdl.properties file, so the control file can also be utilized in Design Client, for example, CONTROLFILE=/tmp/mdl_control_file.ctl     The control file stores options in name/value pairs. When using control file, be sure the file exists, otherwise an exception java.lang.Exception: CNV0002-0031(ERROR): Cannot find specified file will be thrown out during MDL Export/Import.      Next we will introduce some options specified in control file. ZIPFILEFORMAT     By default, MDL exports objects into a zip format file. This zip file has an .mdl extension and contains two files. For example, you export the repository metadata into a file called projects.mdl. When you unzip this MDL file, you obtain two files. The file projects.mdx contains the repository objects. The file mdlcatalog.xml contains internal information about the MDL XML file. 
Another choice is to combine these two files into one unzip text format file when doing MDL exporting.    In OMBPlus command related to MDL, there is an option called FILE_FORMAT which is used to specify the file format for the exported file. Its acceptable values are ZIP or TEXT. When the value TEXT is selected, the exported file is in text format, for example, OMBEXPORT MDL_FILE '/tmp/options_file_format_test.mdl' FILE_FORMAT TEXT FROM PROJECT 'MY_PROJECT'    How to achieve this via Design Client when doing an MDL exporting? Here we have another option called ZIPFILEFORMAT which has the same function as the FILE_FORMAT. The difference is the acceptable values for ZIPFILEFORMAT are Y or N. When the value is set to N, the exported file is in text format, otherwise it is in zip file format. LOGMESSAGELEVEL     Whenever you export or import repository metadata, MDL writes diagnostic and statistical information to a log file. Their are 3 types of status messages: Informational, Warning and Error. By default, the log file includes all types of message. Sometimes, user may only care about one type of messages, for example, they would like only error messages written to the log file. In order to achieve this, we can set an option called LOGMESSAGELEVEL in control file. The acceptable values for LOGMESSAGELEVEL are ALL, WARNING and ERROR. ALL: If the option LOGMESSAGELEVEL is set to ALL, all types of messages (Informational, Warning and Error) will be written into the log file. WARNING: If the option LOGMESSAGELEVEL is set to WARNING, only warning messages will be written into log file. ERROR: If the option LOGMESSAGELEVEL is set to ERROR, only error messages will be written into log file. UPDATEPROJECTATTRIBUTES, UPDATEMODULEATTRIBUTES      These two options are used to decide whether updating the attributes of projects/modules. The options work when projects/modules being imported already exist in repository and we use update metadata mode or replace metadata mode to do the MDL import. The acceptable values for these two options are Y or N. If the value is set to Y, the attributes of projects/modules will be updated, otherwise not.      Next, let’s give an example to see how these options take effect in MDL. 1. First of all, create the property file mdl.properties under the directory <owb installation path>/owb/bin/admin/. 2. Specify the options in the mdl.properties file, see the following screenshot. 3. Create the control file mdl_control_file.ctl under the directory /tmp/. Set the following options in control file. 4. Log into the OWB Design Client. 5. Create an Oracle module named ORA_MOD_1 under the project MY_PROJECT, then export the project MY_PROJECT into file my_project.mdl. 6. Check the trace file mdl.trc under the directory /tmp/. In this file, we can see very detail information for the above export task. 7. Check the exported MDL file. The file my_project.mdl is in text format. Opening the file, you can see the content of the file directly. It concats the file my_project.mdx and mdlcatalog.xml. 8. Modify the project MY_PROJECT and Oracle module ORA_MOD_1, add descriptions for them separately. Delete the location created in step 5. 9. Import the MDL file my_project.mdl. From the Metadata Import dialog, we can see the default directory for MDL file and log file has been changed to /tmp/. Here we use update metadata mode, match by names to do the importing. 10. After importing, check the description of the project MY_PROJECT, we can see the description is still there. 
But the description of the Oracle module ORA_MOD_1 is gone. That is because we set the option UPDATEPROJECTATTRIBUTES to N and the option UPDATEMODULEATTRIBUTES to Y.
    11. Check the log file: it contains only warning messages, since the log message level was set to WARNING.
    For more details about the 3 types of status messages, see Oracle® Warehouse Builder Installation and Administration Guide 11g Release 2.
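    Putting the pieces together, here is a minimal sketch of the two files used in this example (the paths and values below are illustrative choices drawn from the steps above, not requirements). First, mdl.properties under <owb installation path>/owb/bin/admin/:

        DEFAULTDIRECTORY=/tmp/
        MDLTRACEFILE=/tmp/mdl.trc
        CONTROLFILE=/tmp/mdl_control_file.ctl

    And the control file /tmp/mdl_control_file.ctl:

        ZIPFILEFORMAT=N
        LOGMESSAGELEVEL=WARNING
        UPDATEPROJECTATTRIBUTES=N
        UPDATEMODULEATTRIBUTES=Y

    With these settings, exports default to /tmp/, a trace is written to /tmp/mdl.trc, the exported file is in text format, only warning messages go to the log, and on import the module attributes are updated while the project attributes are left alone, matching the behavior observed in steps 6 through 11 above.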

    Read the article

  • box2D simulation doesn't work

    - by shadow_of__soul
    It has been a while since I last used Box2D, and I needed to make some stuff, but I saw that my simulation didn't work (it compiles, but does nothing). I haven't even been able to get the examples working, nor this simple example I'm pasting below:

    package {
        import flash.display.Sprite;
        import flash.events.Event;
        import Box2D.Common.Math.b2Vec2;
        import Box2D.Dynamics.b2World;
        import Box2D.Dynamics.b2BodyDef;
        import Box2D.Dynamics.b2Body;
        import Box2D.Collision.Shapes.b2CircleShape;
        import Box2D.Dynamics.b2Fixture;
        import Box2D.Dynamics.b2FixtureDef;
        import org.flashdevelop.utils.FlashConnect;
        import flash.events.TimerEvent;
        import flash.utils.Timer;

        public class Main extends Sprite {
            public var world:b2World;
            public var wheelBody:b2Body;
            public var stepTimer:Timer;

            public function Main():void {
                if (stage) init();
                else addEventListener(Event.ADDED_TO_STAGE, init);
            }

            private function init(e:Event = null):void {
                removeEventListener(Event.ADDED_TO_STAGE, init);
                var gravity:b2Vec2 = new b2Vec2(0, 10);
                world = new b2World(gravity, true);
                var wheelBodyDef:b2BodyDef = new b2BodyDef();
                wheelBodyDef.type = b2Body.b2_dynamicBody;
                wheelBody = world.CreateBody(wheelBodyDef);
                var circleShape:b2CircleShape = new b2CircleShape(5);
                var wheelFixtureDef:b2FixtureDef = new b2FixtureDef();
                wheelFixtureDef.shape = circleShape;
                var wheelFixture:b2Fixture = wheelBody.CreateFixture(wheelFixtureDef);
                stepTimer = new Timer(0.025 * 1000);
                stepTimer.addEventListener(TimerEvent.TIMER, onTick);
                FlashConnect.trace(wheelBody.GetPosition().x, wheelBody.GetPosition().y);
                stepTimer.start(); // entry point
            }

            private function onTick(a_event:TimerEvent):void {
                world.Step(0.025, 10, 10);
                FlashConnect.trace(wheelBody.GetPosition().x, wheelBody.GetPosition().y);
            }
        }
    }

    On this, the object should fall down, but the positions reported by the trace method are always 0. So it is not a display problem where I see everything frozen; the simulation itself is not working, and I have no idea why :( Can anyone point me in the right direction of where I need to look for the problem? My settings are: Windows 7, FlashDevelop 4.2.1, SDK 4.6.0, compiling for Flash 10, but I tried every target I have available (up to Flash 11.5); project set at 30 fps.
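    One way to narrow this down (a debugging sketch under the assumption that the project runs somewhere the standard trace output is visible, not a confirmed fix) is to take both the Timer and FlashConnect out of the equation and drive the step from ENTER_FRAME with the built-in trace:

    // Hypothetical replacement for init()/onTick() in the Main class above,
    // to isolate whether the physics step or the FlashConnect/Timer side is at fault.
    private function init(e:Event = null):void {
        removeEventListener(Event.ADDED_TO_STAGE, init);
        world = new b2World(new b2Vec2(0, 10), true);
        var bodyDef:b2BodyDef = new b2BodyDef();
        bodyDef.type = b2Body.b2_dynamicBody;
        wheelBody = world.CreateBody(bodyDef);
        var fixtureDef:b2FixtureDef = new b2FixtureDef();
        fixtureDef.shape = new b2CircleShape(5);
        fixtureDef.density = 1; // set density explicitly, in case the zero default is a factor
        wheelBody.CreateFixture(fixtureDef);
        addEventListener(Event.ENTER_FRAME, onFrame);
    }

    private function onFrame(e:Event):void {
        world.Step(1 / 30, 10, 10);
        world.ClearForces(); // recommended after each step in Box2DFlash 2.1a
        trace(wheelBody.GetPosition().y); // should increase steadily as the body falls
    }

    If the y position climbs here but not in the original, the physics is fine and the FlashConnect/Timer setup deserves the scrutiny.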

    Read the article

  • Games to Vista Game explorer with Inno Setup

    - by Kraemer
    OK, I'm trying to get my Inno Setup installer to add a shortcut for my game to the Vista Games Explorer. Theoretically this should do the trick:

    [Files]
    Source: "GameuxInstallHelper.dll"; DestDir: "{app}"; Flags: ignoreversion overwritereadonly;

    [Registry]
    Root: HKLM; Subkey: SOFTWARE\dir\dir; Flags: uninsdeletekeyifempty
    Root: HKLM; Subkey: SOFTWARE\dir\dir; ValueName: Path; ValueType: String; ValueData: {app}; Flags: uninsdeletekey
    Root: HKLM; Subkey: SOFTWARE\dir\dir; ValueName: AppFile; ValueType: String; ValueData: {app}\executable.exe; Flags: uninsdeletekey

    [CustomMessages]
    en.Local=en
    en.removemsg=Do you wish to remove game saves and settings?
    en.taskentry=Play

    [Code]
    const
      PlayTask = 0;
      AllUsers = 2;
      Current = 3;

    type
      TGUID = record
        Data1: Cardinal;
        Data2, Data3: Word;
        Data4: array [0..7] of char;
      end;

    var
      GUID: TGUID;

    function GetDC(HWND: DWord): DWord; external 'GetDC@user32.dll stdcall';
    function GetDeviceCaps(DC: DWord; Index: Integer): Integer; external 'GetDeviceCaps@gdi32.dll stdcall';
    function ReleaseDC(HWND: DWord; DC: DWord): Integer; external 'ReleaseDC@user32.dll stdcall';
    function ShowWindow(hWnd: DWord; nCmdShow: Integer): boolean; external 'ShowWindow@user32.dll stdcall';
    function SetWindowLong(hWnd: DWord; nIndex: Integer; dwNewLong: Longint): Longint; external 'SetWindowLong@user32.dll stdcall';

    function GenerateGUID(var GUID: TGUID): Cardinal; external 'GenerateGUID@files:GameuxInstallHelper.dll stdcall setuponly';
    function AddToGameExplorer(Binary: String; Path: String; InstallType: Integer; var GUID: TGUID): Cardinal; external 'AddToGameExplorerW@files:GameuxInstallHelper.dll stdcall setuponly';
    function CreateTask(InstallType: Integer; var GUID: TGUID; TaskType: Integer; TaskNumber: Integer; TaskName: String; Binary: String; Parameters: String): Cardinal; external 'CreateTaskW@files:GameuxInstallHelper.dll stdcall setuponly';
    function RetrieveGUIDForApplication(Binary: String; var GUID: TGUID): Cardinal; external 'RetrieveGUIDForApplicationW@{app}\GameuxInstallHelper.dll stdcall uninstallonly';
    function RemoveFromGameExplorer(var GUID: TGUID): Cardinal; external 'RemoveFromGameExplorer@{app}\GameuxInstallHelper.dll stdcall uninstallonly';
    function RemoveTasks(var GUID: TGUID): Cardinal; external 'RemoveTasks@{app}\GameuxInstallHelper.dll stdcall uninstallonly';

    function InitializeSetup(): Boolean;
    var
      appath: string;
      ResultCode: Integer;
    begin
      if RegKeyExists(HKEY_LOCAL_MACHINE, 'SOFTWARE\dir\dir') then
      begin
        RegQueryStringValue(HKEY_LOCAL_MACHINE, 'SOFTWARE\dir\dir', 'Path', appath);
        Exec((appath + '\unins000.exe'), '', '', SW_SHOW, ewWaitUntilTerminated, ResultCode);
      end
      else
      begin
        Result := TRUE
      end;
    end;

    procedure CurUninstallStepChanged(CurUninstallStep: TUninstallStep);
    begin
      if CurUninstallStep = usUninstall then
      begin
        if GetWindowsVersion shr 24 > 5 then
        begin
          RetrieveGUIDForApplication(ExpandConstant('{app}\AWL_Release.dll'), GUID);
          RemoveFromGameExplorer(GUID);
          RemoveTasks(GUID);
          UnloadDll(ExpandConstant('{app}\GameuxInstallHelper.dll'));
        end;
      end;
      if CurUninstallStep = usPostUninstall then
      begin
        if MsgBox(ExpandConstant('{cm:removemsg}'), mbConfirmation, MB_YESNO) = IDYES then
        begin
          DelTree(ExpandConstant('{app}'), True, True, True);
        end;
      end;
    end;

    procedure CurStepChanged(CurStep: TSetupStep);
    begin
      if GetWindowsVersion shr 24 > 5 then
      begin
        if CurStep = ssInstall then
          GenerateGUID(GUID);
        if CurStep = ssPostInstall then
        begin
          AddToGameExplorer(ExpandConstant('{app}\AWL_Release.dll'), ExpandConstant('{app}'), Current, GUID);
          CreateTask(3, GUID, PlayTask, 0, ExpandConstant('{cm:taskentry}'), ExpandConstant('{app}\executable.exe'), '');
          CreateTask(3, GUID, 1, 0, 'Game Website', 'http://www.gamewebsite.com/', '');
        end;
      end;
    end;

    The installer works just fine, but it doesn't place a shortcut for my game in the Games Explorer. Since I believe the problem is in the binary file, I guess for that part I should ask for some help. So, can anyone please give me a hand here?
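    One thing worth double-checking, as an educated guess rather than a confirmed diagnosis: AddToGameExplorer must be pointed at the binary that has the Game Definition File (GDF) embedded as a resource. If AWL_Release.dll does not actually contain the GDF, the registration can fail without an installer error. A sketch follows, reusing only the functions already declared above; GDFBinary.dll is a hypothetical name for whatever file actually embeds your GDF:

    if CurStep = ssPostInstall then
    begin
      { Point the helper at the GDF-bearing binary, not an arbitrary game DLL }
      AddToGameExplorer(ExpandConstant('{app}\GDFBinary.dll'), ExpandConstant('{app}'), Current, GUID);
      CreateTask(3, GUID, PlayTask, 0, ExpandConstant('{cm:taskentry}'), ExpandConstant('{app}\executable.exe'), '');
    end;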

    Read the article

  • Hello Operator, My Switch Is Bored

    - by Paul White
    This is a post for T-SQL Tuesday #43 hosted by my good friend Rob Farley. The topic this month is Plan Operators. I haven’t taken part in T-SQL Tuesday before, but I do like to write about execution plans, so this seemed like a good time to start. This post is in two parts. The first part is primarily an excuse to use a pretty bad play on words in the title of this blog post (if you’re too young to know what a telephone operator or a switchboard is, I hate you). The second part of the post looks at an invisible query plan operator (so to speak). 1. My Switch Is Bored Allow me to present the rare and interesting execution plan operator, Switch: Books Online has this to say about Switch: Following that description, I had a go at producing a Fast Forward Cursor plan that used the TOP operator, but had no luck. That may be due to my lack of skill with cursors, I’m not too sure. The only application of Switch in SQL Server 2012 that I am familiar with requires a local partitioned view: CREATE TABLE dbo.T1 (c1 int NOT NULL CHECK (c1 BETWEEN 00 AND 24)); CREATE TABLE dbo.T2 (c1 int NOT NULL CHECK (c1 BETWEEN 25 AND 49)); CREATE TABLE dbo.T3 (c1 int NOT NULL CHECK (c1 BETWEEN 50 AND 74)); CREATE TABLE dbo.T4 (c1 int NOT NULL CHECK (c1 BETWEEN 75 AND 99)); GO CREATE VIEW V1 AS SELECT c1 FROM dbo.T1 UNION ALL SELECT c1 FROM dbo.T2 UNION ALL SELECT c1 FROM dbo.T3 UNION ALL SELECT c1 FROM dbo.T4; Not only that, but it needs an updatable local partitioned view. We’ll need some primary keys to meet that requirement: ALTER TABLE dbo.T1 ADD CONSTRAINT PK_T1 PRIMARY KEY (c1);   ALTER TABLE dbo.T2 ADD CONSTRAINT PK_T2 PRIMARY KEY (c1);   ALTER TABLE dbo.T3 ADD CONSTRAINT PK_T3 PRIMARY KEY (c1);   ALTER TABLE dbo.T4 ADD CONSTRAINT PK_T4 PRIMARY KEY (c1); We also need an INSERT statement that references the view. Even more specifically, to see a Switch operator, we need to perform a single-row insert (multi-row inserts use a different plan shape): INSERT dbo.V1 (c1) VALUES (1); And now…the execution plan: The Constant Scan manufactures a single row with no columns. The Compute Scalar works out which partition of the view the new value should go in. The Assert checks that the computed partition number is not null (if it is, an error is returned). The Nested Loops Join executes exactly once, with the partition id as an outer reference (correlated parameter). The Switch operator checks the value of the parameter and executes the corresponding input only. If the partition id is 0, the uppermost Clustered Index Insert is executed, adding a row to table T1. If the partition id is 1, the next lower Clustered Index Insert is executed, adding a row to table T2…and so on. In case you were wondering, here’s a query and execution plan for a multi-row insert to the view: INSERT dbo.V1 (c1) VALUES (1), (2); Yuck! An Eager Table Spool and four Filters! I prefer the Switch plan. My guess is that almost all the old strategies that used a Switch operator have been replaced over time, using things like a regular Concatenation Union All combined with Start-Up Filters on its inputs. Other new (relative to the Switch operator) features like table partitioning have specific execution plan support that doesn’t need the Switch operator either. This feels like a bit of a shame, but perhaps it is just nostalgia on my part, it’s hard to know. Please do let me know if you encounter a query that can still use the Switch operator in 2012 – it must be very bored if this is the only possible modern usage! 2. 
Invisible Plan Operators The second part of this post uses an example based on a question Dave Ballantyne asked using the SQL Sentry Plan Explorer plan upload facility. If you haven’t tried that yet, make sure you’re on the latest version of the (free) Plan Explorer software, and then click the Post to SQLPerformance.com button. That will create a site question with the query plan attached (which can be anonymized if the plan contains sensitive information). Aaron Bertrand and I keep a close eye on questions there, so if you have ever wanted to ask a query plan question of either of us, that’s a good way to do it. The problem The issue I want to talk about revolves around a query issued against a calendar table. The script below creates a simplified version and adds 100 years of per-day information to it: USE tempdb; GO CREATE TABLE dbo.Calendar ( dt date NOT NULL, isWeekday bit NOT NULL, theYear smallint NOT NULL,   CONSTRAINT PK__dbo_Calendar_dt PRIMARY KEY CLUSTERED (dt) ); GO -- Monday is the first day of the week for me SET DATEFIRST 1;   -- Add 100 years of data INSERT dbo.Calendar WITH (TABLOCKX) (dt, isWeekday, theYear) SELECT CA.dt, isWeekday = CASE WHEN DATEPART(WEEKDAY, CA.dt) IN (6, 7) THEN 0 ELSE 1 END, theYear = YEAR(CA.dt) FROM Sandpit.dbo.Numbers AS N CROSS APPLY ( VALUES (DATEADD(DAY, N.n - 1, CONVERT(date, '01 Jan 2000', 113))) ) AS CA (dt) WHERE N.n BETWEEN 1 AND 36525; The following query counts the number of weekend days in 2013: SELECT Days = COUNT_BIG(*) FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; It returns the correct result (104) using the following execution plan: The query optimizer has managed to estimate the number of rows returned from the table exactly, based purely on the default statistics created separately on the two columns referenced in the query’s WHERE clause. (Well, almost exactly, the unrounded estimate is 104.289 rows.) There is already an invisible operator in this query plan – a Filter operator used to apply the WHERE clause predicates. We can see it by re-running the query with the enormously useful (but undocumented) trace flag 9130 enabled: Now we can see the full picture. The whole table is scanned, returning all 36,525 rows, before the Filter narrows that down to just the 104 we want. Without the trace flag, the Filter is incorporated in the Clustered Index Scan as a residual predicate. It is a little bit more efficient than using a separate operator, but residual predicates are still something you will want to avoid where possible. The estimates are still spot on though: Anyway, looking to improve the performance of this query, Dave added the following filtered index to the Calendar table: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear) WHERE isWeekday = 0; The original query now produces a much more efficient plan: Unfortunately, the estimated number of rows produced by the seek is now wrong (365 instead of 104): What’s going on? The estimate was spot on before we added the index! Explanation You might want to grab a coffee for this bit. Using another trace flag or two (8606 and 8612) we can see that the cardinality estimates were exactly right initially: The highlighted information shows the initial cardinality estimates for the base table (36,525 rows), the result of applying the two relational selects in our WHERE clause (104 rows), and after performing the COUNT_BIG(*) group by aggregate (1 row). 
All of these are correct, but that was before cost-based optimization got involved :) Cost-based optimization When cost-based optimization starts up, the logical tree above is copied into a structure (the ‘memo’) that has one group per logical operation (roughly speaking). The logical read of the base table (LogOp_Get) ends up in group 7; the two predicates (LogOp_Select) end up in group 8 (with the details of the selections in subgroups 0-6). These two groups still have the correct cardinalities, as trace flag 8608 output (initial memo contents) shows: During cost-based optimization, a rule called SelToIdxStrategy runs on group 8. Its job is to match logical selections to indexable expressions (SARGs). It successfully matches the selections (theYear = 2013, isWeekday = 0) to the filtered index, and writes a new alternative into the memo structure. The new alternative is entered into group 8 as option 1 (option 0 was the original LogOp_Select): The new alternative is to do nothing (PhyOp_NOP = no operation), but to instead follow the new logical instructions listed below the NOP. The LogOp_GetIdx (full read of an index) goes into group 21, and the LogOp_SelectIdx (selection on an index) is placed in group 22, operating on the result of group 21. The definition of the comparison ‘theYear = 2013’ (ScaOp_Comp downwards) was already present in the memo starting at group 2, so no new memo groups are created for that. New Cardinality Estimates The new memo groups require two new cardinality estimates to be derived. First, LogOp_GetIdx (full read of the index) gets a predicted cardinality of 10,436. This number comes from the filtered index statistics: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH STAT_HEADER; The second new cardinality derivation is for the LogOp_SelectIdx applying the predicate (theYear = 2013). To get a number for this, the cardinality estimator uses statistics for the column ‘theYear’, producing an estimate of 365 rows (there are 365 days in 2013!): DBCC SHOW_STATISTICS (Calendar, theYear) WITH HISTOGRAM; This is where the mistake happens. Cardinality estimation should have used the filtered index statistics here, to get an estimate of 104 rows: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH HISTOGRAM; Unfortunately, the logic has lost sight of the link between the read of the filtered index (LogOp_GetIdx) in group 21, and the selection on that index (LogOp_SelectIdx) that it is deriving a cardinality estimate for, in group 22. The correct cardinality estimate (104 rows) is still present in the memo, attached to group 8, but that group now has a PhyOp_NOP implementation. Skipping over the rest of cost-based optimization (in a belated attempt at brevity) we can see the optimizer’s final output using trace flag 8607: This output shows the (incorrect, but understandable) 365-row estimate for the index range operation, and the correct 104 estimate still attached to its PhyOp_NOP. This tree still has to go through a few post-optimizer rewrites and ‘copy out’ from the memo structure into a tree suitable for the execution engine. One step in this process removes PhyOp_NOP, discarding its 104-row cardinality estimate as it does so. To finish this section on a more positive note, consider what happens if we add an OVER clause to the query aggregate.
This isn’t intended to be a ‘fix’ of any sort, I just want to show you that the 104 estimate can survive and be used if later cardinality estimation needs it: SELECT Days = COUNT_BIG(*) OVER () FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; The estimated execution plan is: Note the 365 estimate at the Index Seek, but the 104 lives again at the Segment! We can imagine the lost predicate ‘isWeekday = 0’ as sitting between the seek and the segment in an invisible Filter operator that drops the estimate from 365 to 104. Even though the NOP group is removed after optimization (so we don’t see it in the execution plan) bear in mind that all cost-based choices were made with the 104-row memo group present, so although things look a bit odd, it shouldn’t affect the optimizer’s plan selection. I should also mention that we can work around the estimation issue by including the index’s filtering columns in the index key: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear, isWeekday) WHERE isWeekday = 0 WITH (DROP_EXISTING = ON); There are some downsides to doing this, including that changes to the isWeekday column may now require Halloween Protection, but that is unlikely to be a big problem for a static calendar table ;)  With the updated index in place, the original query produces an execution plan with the correct cardinality estimation showing at the Index Seek: That’s all for today, remember to let me know about any Switch plans you come across on a modern instance of SQL Server! Finally, here are some other posts of mine that cover other plan operators: Segment and Sequence Project Common Subexpression Spools Why Plan Operators Run Backwards Row Goals and the Top Operator Hash Match Flow Distinct Top N Sort Index Spools and Page Splits Singleton and Range Seeks Bitmaps Hash Join Performance Compute Scalar © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • Google Now is One Step Closer to Becoming Active in Google Chrome

    - by Akemi Iwaya
    Many people have been eager to have Google Now working in their Chrome browsers, and this week that dream got one step closer to reality. The first teasers that the new feature is becoming active have started to appear, so now is a good time to activate the switch for it and be ready for its arrival. You will need to be running the Dev Channel on your computer and enable the Google Now switch via Chrome Flags (chrome://flags/) if you have not already done so. The switch will be towards the bottom of the list. Once that is done, restart your browser. After the browser has restarted you will see a notification window pop up as seen in the first screenshot above. Click Yes and a second small pop-up message window will appear letting you know more about the freshly enabled feature. Unfortunately we were not able to catch a screenshot of the second message window before it disappeared.

    Read the article

  • How to perform regular expression based replacements on files with MSBuild

    - by Daniel Cazzulino
    And without a custom DLL with a task, too. The example at the bottom of the MSDN page on MSBuild Inline Tasks already provides pretty much all you need for that, with a TokenReplace task that receives a file path, a token and a replacement, and uses string.Replace with them. Similar in spirit but way more useful in its implementation is the RegexTransform in NuGet's Build.tasks. It's much better not only because it supports full regular expressions, but also because it receives items, which makes it very amenable to batching (applying the transforms to multiple items). You can read about how to use it for updating assemblies with a version number, for example. I recently had a need to also supply RegexOptions to the task, so I extended the metadata and a little bit of the inline task so that it can parse the optional flags. So when using the task, I can pass the flags as item metadata as follows:...Read full article
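    Since the inline example is truncated here, below is a rough sketch of how such a transform might be invoked. The Find/ReplaceWith metadata follow NuGet's Build.tasks convention, while the Options metadata name, the target name and the file path are assumptions for illustration:

    <ItemGroup>
      <RegexTransform Include="$(SolutionDir)GlobalAssemblyInfo.cs">
        <Find>AssemblyVersion\("\d+\.\d+\.\d+\.\d+"\)</Find>
        <ReplaceWith>AssemblyVersion("$(Version)")</ReplaceWith>
        <!-- Optional regex flags parsed by the extended inline task -->
        <Options>Multiline | IgnoreCase</Options>
      </RegexTransform>
    </ItemGroup>
    <Target Name="UpdateAssemblyVersions">
      <RegexTransform Items="@(RegexTransform)" />
    </Target>

    Because the task receives items, adding more files to transform is just a matter of adding more RegexTransform items, which is exactly the batching advantage described above.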

    Read the article

  • Finding an alert in the middle of your javascript

    - by Ariel Popovsky
    I was debugging a script injection issue the other day using some sample code with an alert in it. The alert was popping up, meaning the code got executed, leaving open the possibility for a hacker to put some nasty malicious code there. I knew my alert was being executed but didn't know how. So I tried something that worked perfectly for this problem: replacing the native alert function with my own. All I had to do in Chrome was open the JavaScript console and type:

    alert = function(msg){ console.log(msg); console.trace(); };

    The next time the malicious code was executed, instead of the regular alert I got something similar to this:

    > alert("testing")
    testing
    console.trace()
    alert:2
    (anonymous function):2
    InjectedScript._evaluateOn:312
    InjectedScript._evaluateAndWrap:294
    InjectedScript.evaluate:288
    undefined

    In my case I was able to see what was going on and find the offending function. This was tested with Firebug in Firefox, and it works as well.
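    A small variation on the same idea (a sketch, not from the original post) keeps the original dialog visible while still logging the call site, by saving a reference to the native function first:

    var nativeAlert = window.alert;
    window.alert = function (msg) {
        console.log('alert called with:', msg); // log the message
        console.trace();                        // and where the call came from
        nativeAlert.call(window, msg);          // still show the real dialog
    };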

    Read the article

  • Script to set NO_HEARTBEAT Flag

    - by Koppar
    The new plugin provides some flags to help debug the browser-side VM. Below are the flags:

    JPI_PLUGIN2_DEBUG=1
    JPI_PLUGIN2_VERBOSE=1

    The above 2 provide tracing information.

    JPI_PLUGIN2_NO_HEARTBEAT=1

    This disables the sending of heartbeat messages between the browser-side VM and the client JVM instance(s), letting the client JVM stay independent of the browser-side VM. These are to be set as system environment variables. Many times we need to set them using scripts. Here is a small script to achieve that:

    -----------set_jpi_flags.vbs------------------------------------------
    Set WshShell = WScript.CreateObject("WScript.Shell")
    Set WshEnv = WshShell.Environment("USER")
    WshEnv("JPI_PLUGIN2_NO_HEARTBEAT") = "1"
    WshEnv("JPI_PLUGIN2_DEBUG") = "1"
    WshEnv("JPI_PLUGIN2_VERBOSE") = "1"
    ' Display the value of the NO_HEARTBEAT variable to confirm it was set
    WScript.Echo WshEnv("JPI_PLUGIN2_NO_HEARTBEAT")
    ----------------------------------------------------------------------
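    If a plain batch file fits your workflow better, something like the following should achieve the same. This is a sketch, with the caveat that setx writes the user environment and only affects processes started afterwards:

    rem ----------- set_jpi_flags.bat (batch equivalent of the VBScript above) -----------
    setx JPI_PLUGIN2_NO_HEARTBEAT 1
    setx JPI_PLUGIN2_DEBUG 1
    setx JPI_PLUGIN2_VERBOSE 1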

    Read the article

  • Interviewing a DBA

    - by kev
    Our company is in the process of recruiting a DBA. I have built a group test of questions, ranging from basic questions such as PK and FK constraints and simple queries (FizzBuzz style) to more advanced things such as indexes, collation, isolation levels and how to trace deadlocks. However, that is the limit of my knowledge. So my question to all the DBAs is: what is the base level of knowledge that all DBAs should have? We are really looking for someone who will be able to manage our replication, analyze some of our slower-running queries (someone the devs can go to for help) and trace some of the deadlock issues that we are having. Any help would be most appreciated!
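    On the deadlock-tracing point specifically, one screening question could be whether the candidate knows the standard ways to capture deadlock graphs, for example (an illustration, not part of the original question):

    -- Candidate answer sketch: write deadlock graph details to the SQL Server error log (global scope)
    DBCC TRACEON (1222, -1);
    -- Alternatives include reading the system_health Extended Events session or capturing a deadlock graph trace.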

    Read the article
