Search Results

Search found 13415 results on 537 pages for 'variable caching'.


  • C# Regex - Match and replace, Auto Increment

    - by Marc Still
    I have been toiling with a problem and any help would be appreciated. Problem: I have a paragraph and I want to replace a variable which appears several times (variable = @Variable). This is the easy part; the portion I am having difficulty with is replacing the variable with different values. I need each occurrence to have a different value. For instance, I have a function that does a calculation for each variable. What I have thus far is below:

        private string SetVariables(string input, string pattern)
        {
            Regex rx = new Regex(pattern);
            MatchCollection matches = rx.Matches(input);
            int i = 1;
            if (matches.Count > 0)
            {
                foreach (Match match in matches)
                {
                    rx.Replace(match.ToString(), getReplacementNumber(i));
                    i++;
                }
            }
        }

    I am able to replace each variable that I need to with the number returned from the getReplacementNumber(i) function, but how do I put it back into my original input with the replaced values, in the same order found in the match collection? Thanks in advance! Marcus
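
    A minimal sketch of one common approach (not the asker's final code): Regex.Replace accepts a MatchEvaluator delegate that is invoked once per match, in document order, so a counter captured by the lambda can feed each occurrence a different value. getReplacementNumber is the asker's own helper, assumed to return a string:

        private string SetVariables(string input, string pattern)
        {
            int i = 0;
            // The evaluator runs once per match; its return value replaces that match.
            // getReplacementNumber: the asker's helper (assumed to return string).
            return Regex.Replace(input, pattern, match => getReplacementNumber(++i));
        }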

    Read the article

  • Behavior of local variables in JavaScript's with()-statement

    - by thr
    I noticed some weird (and to my knowledge undefined, by the ECMA 3.0 spec at least) behavior. Take the following snippet:

        var foo = { bar: "1", baz: "2" };
        alert(bar);
        with (foo) {
            alert(bar);
            alert(bar);
        }
        alert(bar);

    It crashes in both Firefox and Chrome because "bar" doesn't exist at the first alert(bar) statement; this is as expected. But if you add a declaration of bar inside the with()-statement, so it looks like this:

        var foo = { bar: "1", baz: "2" };
        alert(bar);
        with (foo) {
            alert(bar);
            var bar = "g2";
            alert(bar);
        }
        alert(bar);

    it will produce the following: undefined, 1, g2, undefined. It seems as if, when you declare a variable inside a with()-statement, most browsers (tested in Chrome and Firefox) make that variable exist outside that scope as well; it's just set to undefined. Now, from my perspective, bar should only exist inside the with()-statement. And if you make the example even weirder:

        var foo = { bar: "1", baz: "2" };
        var zoo;
        alert(bar);
        with (foo) {
            alert(bar);
            var bar = "g2";
            zoo = function() { return bar; };
            alert(bar);
        }
        alert(bar);
        alert(zoo());

    it will produce this: undefined, 1, g2, undefined, g2. So the bar inside the with()-statement does not exist outside of it, yet the runtime somehow "automagically" creates a variable named bar that is undefined in its top-level scope (global or function). This variable does not refer to the same one as inside the with()-statement, and it will only exist if a with()-statement has a variable named bar defined inside it. Very weird, and inconsistent. Does anyone have an explanation for this behavior? There is nothing in the ECMA spec about this.
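
    For what it's worth, a sketch of the usual hoisting explanation for this output: var declarations are bound to the enclosing function or global scope at parse time, while assignments inside with() are resolved against the with-object first, so the two bars are different slots:

        var foo = { bar: "1" };
        // The engine effectively treats the second example as:
        var bar;              // hoisted declaration; value is undefined
        with (foo) {
            bar = "g2";       // name resolves against foo first, so this writes foo.bar
        }
        alert(bar);           // undefined: the hoisted outer bar was never assigned
        alert(foo.bar);       // "g2"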

    Read the article

  • structDelete doesn't affect the shallow copy?

    - by Travis
    I was playing around with onError, so I tried to create an error using a large XML document object:

        <cfset XMLByRef = variables.parsedXML.XMLRootElement.XMLChildElement>
        <cfset structDelete(variables.parsedXML, "XMLRootElement")>
        <cfset startXMLShortLoop = getTickCount()>
        <cfloop from = "1" to = "#arrayLen(variables.XMLByRef)#" index = "variables.i">
            <cfoutput>#variables.XMLByRef[variables.i].id.xmltext#</cfoutput><br />
        </cfloop>
        <cfset stopXMLShortLoop = getTickCount()>

    I expected to get an error because I deleted the structure I was referencing. From LiveDocs: "Variable Assignment - Creates an additional reference, or alias, to the structure. Any change to the data using one variable name changes the structure that you access using the other variable name. This technique is useful when you want to add a local variable to another scope or otherwise change a variable's scope without deleting the variable from the original scope." Instead I got:

        580df1de-3362-ca9b-b287-47795b6cdc17
        25a00498-0f68-6f04-a981-56853c0844ed
        ... ... ...
        db49ed8a-0ba6-8644-124a-6d6ebda3aa52
        57e57e28-e044-6119-afe2-aebffb549342
        Looped 12805 times in 297 milliseconds

    <cfdump var = "#variables#"> shows there's nothing in the structure, just parsedXML.xmlRoot.xmlName with the value of XMLRootElement. I also tried <cfset structDelete(variables.parsedXML.XMLRootElement, "XMLChildElement")> as well as structClear for both. More information on deleting from the XML document object: http://help.adobe.com/en_US/ColdFusion/9.0/Developing/WSc3ff6d0ea77859461172e0811cbec22c24-78e3.html Can someone please explain my faulty logic? Thanks.
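
    For context, a minimal sketch (cfscript, hypothetical names) of why no error occurs: structDelete only removes the key from the scope, while any other variable still pointing at the underlying object keeps that object alive:

        <cfscript>
            original = { child = { id = 1 } };
            alias = original.child;            // second reference to the same object
            structDelete(variables, "original");
            writeOutput(alias.id);             // still prints 1: the object survives
        </cfscript>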

    Read the article

  • How do I create Variables in XSLT that are not document fragments?

    - by chiborg
    Consider the following XSLT template:

        <xsl:template match="/">
            <xsl:variable name="var1">
                <elem>1</elem>
                <elem>2</elem>
                <elem>3</elem>
            </xsl:variable>
            <xsl:text>var1 has </xsl:text>
            <xsl:value-of select="count($var1)"/>
            <xsl:text> elements. </xsl:text>
            <xsl:variable name="var2" select="$var1/elem[. != '2']"/>
            <xsl:text>var2 has </xsl:text>
            <xsl:value-of select="count($var2)"/>
            <xsl:text> elements. </xsl:text>
        </xsl:template>

    The output of this template is:

        var1 has 1 elements. var2 has 2 elements.

    The first line outputs 1 (and not, as I first expected, 3) because var1 is a document fragment that contains the <elem> elements as children. Now for my questions: How can I create a variable that does not contain a document fragment? I could do it like I did with var2, only leaving out the predicate, but maybe there is a way without using a second variable. Or, as an alternative: How can I preserve the document fragment in a variable while filtering out some elements?
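
    A sketch of the usual XSLT 1.0 workarounds (the EXSLT node-set() extension is an assumption about the processor in use): a variable bound with a select expression holds a true node-set, while inline content always yields a result tree fragment that must be converted before XPath can traverse it:

        <!-- 1. Select existing nodes: $var1 is a node-set, so count($var1) counts them -->
        <xsl:variable name="var1" select="/some/source/elem"/>

        <!-- 2. Keep inline content, then convert the fragment via EXSLT -->
        <xsl:variable name="frag">
            <elem>1</elem><elem>2</elem><elem>3</elem>
        </xsl:variable>
        <xsl:variable name="filtered"
                      xmlns:exsl="http://exslt.org/common"
                      select="exsl:node-set($frag)/elem[. != '2']"/>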

    Read the article

  • Qt Creator CONFIG (debug, release) switches do NOT work

    - by killdaclick
    Problem: CONFIG(debug,debug|release) and CONFIG(release,debug|release) are always both evaluated, whichever of debug or release is chosen in Qt Creator 2.8.1 for Linux. My configuration in the Qt Creator application (stock - default for a new project):

        Projects->Build Settings->Debug Build Steps:
            qmake build configuration: Debug
            Effective qmake call: qmake2 proj.pro -r -spec linux-gnueabi-oe-g++ CONFIG+=debug
        Projects->Build Settings->Release Build Steps:
            qmake build configuration: Release
            Effective qmake call: qmake2 proj.pro -r -spec linux-gnueabi-oe-g++

    My configuration in proj.pro:

        message(Variable CONFIG:)
        message($$CONFIG)
        CONFIG(debug,debug|release) {
            message(Debug build)
        }
        CONFIG(release,debug|release) {
            message(Release build)
        }

    Output on the console for Debug:

        Project MESSAGE: Variable CONFIG:
        Project MESSAGE: lex yacc warn_on debug uic resources warn_on release incremental link_prl no_mocdepend release stl qt_no_framework debug console
        Project MESSAGE: Debug build
        Project MESSAGE: Release build

    Output on the console for Release:

        Project MESSAGE: Variable CONFIG:
        Project MESSAGE: lex yacc warn_on uic resources warn_on release incremental link_prl no_mocdepend release stl qt_no_framework console
        Project MESSAGE: Debug build
        Project MESSAGE: Release build

    Under Windows 7 I didn't experience any problem with such a .pro configuration; it worked fine. I was desperate and modified the .pro file:

        CONFIG = test
        message(Variable CONFIG:)
        message($$CONFIG)
        CONFIG(debug,debug|release) {
            message(Debug build)
        }
        CONFIG(release,debug|release) {
            message(Release build)
        }

    and I was surprised by the output:

        Project MESSAGE: Variable CONFIG:
        Project MESSAGE: test
        Project MESSAGE: Debug build
        Project MESSAGE: Release build

    So even if I completely clean the CONFIG variable, it still sees debug and release configurations. What am I doing wrong?
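
    As an aside, a sketch of the scope pattern the qmake documentation recommends (this does not by itself explain the doubled output above, which remains the open question): using an else branch guarantees that exactly one of the two messages fires per qmake pass:

        CONFIG(debug, debug|release) {
            message(Debug build)
        } else {
            message(Release build)
        }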

    Read the article

  • How to pass an integration property to a batch file with CruiseControl.NET?

    - by TridenT
    In the build log of my project, I can see these properties:

        <integrationProperties>
          <CCNetProject>Gdet_T</CCNetProject>
          ...
          <LastChangeNumber>0</LastChangeNumber>
          <LastIntegrationStatus>Success</LastIntegrationStatus>
          <LastSuccessfulIntegrationLabel>25</LastSuccessfulIntegrationLabel>
          <LastModificationDate>4/6/2010 1:29:04 PM</LastModificationDate>
          <LastChangeNumber>10841</LastChangeNumber>
        </integrationProperties>

    I want to pass the properties CCNetProject and LastChangeNumber to a batch file. It works well with CCNetProject, as it can be used in the batch as the environment variable %CCNetProject%. But it doesn't work with the other properties (those not starting with the CCNet prefix), such as LastChangeNumber or LastModificationDate. I tried to pass it as an argument, but it fails:

        <exec>
          <executable>$(WorkingFolderBase)\MyBatch.bat</executable>
          <baseDirectory>$(WorkingFolderBase)\</baseDirectory>
          <buildArgs>$(LastModificationDate)</buildArgs>
        </exec>

    I tried to pass it as an environment variable, but it fails too:

        <exec>
          <executable>$(WorkingFolderBase)\MyBatch.bat</executable>
          <baseDirectory>$(WorkingFolderBase)\</baseDirectory>
          <environment>
            <variable>
              <name>svn_label</name>
              <value>"${LastModificationDate}"</value>
            </variable>
          </environment>
        </exec>

    The result is always the same when I display the parameter or variable: an empty string, or the variable name $(svn_label). I'm sure it is simple, but I can't find it! Any idea?
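
    A small sketch of the receiving end (MyBatch.bat is the asker's file; the echo lines are illustrative): CCNet-prefixed integration properties arrive as environment variables automatically, while anything passed through <buildArgs> shows up as a positional parameter:

        @echo off
        rem Set automatically by CruiseControl.NET for every task:
        echo Project: %CCNetProject%
        rem Values passed via <buildArgs> arrive as %1, %2, ...
        echo First argument: %1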

    Read the article

  • Valid JavaScript object property names

    - by hawkettc
    I'm trying to work out what is considered valid for the property name of a JavaScript object. For example:

        var b = {};
        b['-^colour'] = "blue";      // Works fine in Firefox, Chrome, Safari
        b['colour'] = "green";       // Ditto
        alert(b['-^colour']);        // Ditto
        alert(b.colour);             // Ditto
        for (prop in b) alert(prop); // Ditto
        //alert(b.-^colour);         // Fails (expected)

    This post details valid JavaScript variable names, and '-^colour' is clearly not valid (as a variable name). Does the same apply to object property names? Looking at the above, I'm trying to work out which of these holds: b['-^colour'] is invalid, but works in all browsers by quirk, and I shouldn't trust it to work going forward; b['-^colour'] is completely valid, but it's just of a form that can only be accessed in this manner (it's supported so objects can be used as maps, perhaps?); or something else. As an aside, a global variable in JavaScript might be declared at the top level as:

        var abc = 0;

    but could also be created (as I understand it) with:

        window['abc'] = 0;

    The following works in all the above browsers:

        window['@£$%'] = "bling!";
        alert(window['@£$%']);

    Is this valid? It seems to contradict the variable naming rules - or am I not declaring a variable there? What's the difference between a variable and an object property name? Cheers, Colin
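
    For what it's worth, a short sketch of the usual distinction: property names are arbitrary strings, and only dot access requires a valid identifier, while bracket assignment on window creates a property rather than a declared variable:

        var obj = {};
        obj["any string: spaces & symbols!"] = 1;    // a valid property name
        alert(obj["any string: spaces & symbols!"]); // 1
        // obj.any string...  -> SyntaxError: dot access needs an identifier
        window['@£$%'] = "bling!";  // creates a property of the window object,
                                    // not a declared variable (no var binding)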

    Read the article

  • Use of extern in a C++ DLL

    - by dom_beau
    I declare then instantiate a static variable in a DLL:

        // DLL.h
        class A
        {
            //...
        };
        static A* a;

        // DLL.cpp
        A* a = new A;

    So far, so good... I was advised to use extern rather than static:

        extern A* a; // in DLL.h

    No problem with that, but the extern variable must be defined somewhere. I got "Invalid storage class member". In other words, what I was used to doing is declaring a variable in a source file like this:

        // In src.cpp
        A a;

    then extern-declaring it in another source file in the same project:

        // In src2.cpp
        extern A a;

    so it is the same object a at link time. Maybe it is not the right thing to do? So, where do I declare the variable that is now extern? Note that I used the static declaration in order to see the variable instantiated as soon as the DLL is loaded. Note also that the current use of static works most of the time, but I think I observe a delay or something like that in the variable's instantiation, while it should always be instantiated at load time. I have been investigating this problem for a week now and I can't find a solution.
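
    A sketch of the usual header/source split (names taken from the question): the header carries the extern declaration, and exactly one translation unit carries the definition. Note that static at namespace scope in a header gives every .cpp that includes it its own private copy, which can look like a "delay" in initialization:

        // DLL.h - declaration only, safe to include everywhere
        class A { /*...*/ };
        extern A* a;

        // DLL.cpp - the single definition, initialized at DLL load
        #include "DLL.h"
        A* a = new A;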

    Read the article

  • ODI 11g – How to Load Using Partition Exchange

    - by David Allan
    Here we will look at how to load large volumes of data efficiently into the Oracle database using a mixture of CTAS and partition exchange loading. The example we will leverage was posted by Mark Rittman a couple of years back on Interval Partitioning; you can find that posting here. The best thing about ODI is that you can encapsulate all those 'how to' blog posts and scripts into templates that can be reused - the templates are of course Knowledge Modules. The interface design to mimic Mark's posting is shown below. The IKM I have constructed performs a simple series of steps: a CTAS to create the stage table to use in the exchange, then locking the partition (to ensure it exists; it will be created if it doesn't), then exchanging the partition into the target table. You can find the IKM Oracle PEL.xml file here. The IKM performs the following steps and is meant to illustrate what can be done. When you use the IKM in an interface, you configure the options for hints (for parallelism levels etc.), initial extent size, next extent size, and the partition variable. The KM has an option where the name of the partition can be passed in, so if you know the name of the partition then set the variable to the name; if you have interval partitioning you probably don't know the name, so you can use the FOR clause. In my example I set the variable to use the date value of the source data: FOR (TO_DATE(''01-FEB-2010'',''dd-MON-yyyy'')). Using a variable lets me invoke the scenario many times, loading different partitions of the same target table. Below you can see where this is defined within ODI; I had to double single-quote the strings since this is placed inside the execute immediate tasks in the KM. Note also that this example interface uses the LKM Oracle to Oracle (datapump), so this illustration uses a lot of the high-performing Oracle database capabilities - it uses Data Pump to unload, then a CreateTableAsSelect (CTAS) is executed on the external table based on top of the Data Pump export. This table is then exchanged into the target. The IKM and illustrations above use ODI 11.1.1.6, which was needed to get around some bugs in earlier releases with how the variable is handled... as far as I remember.
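
    For orientation, a sketch of the raw SQL that this kind of KM encapsulates (table and partition names are hypothetical): stage the incoming rows with a CTAS, then swap the segment into the target partition as a dictionary-only operation:

        -- Stage the incoming rows
        CREATE TABLE stg_sales AS
          SELECT * FROM src_sales WHERE sale_date >= DATE '2010-02-01';

        -- Swap the segment into the target partition (no data movement)
        ALTER TABLE sales
          EXCHANGE PARTITION p_feb2010 WITH TABLE stg_sales
          INCLUDING INDEXES WITHOUT VALIDATION;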

    Read the article

  • WebCenter Content shared folders for clustering

    - by Kyle Hatlestad
    When configuring a WebCenter Content (WCC) cluster, one of the things which makes it unique from some other WebLogic Server applications is its requirement for a shared file system.  This is actually no different than 10g and previous versions of UCM, when it ran directly on a JVM.  And while it is simple enough to say it needs a shared file system, there are some crucial details in how those directories are configured; if they aren't followed, you may end up with some unwanted behavior. This blog post will go into the details on exactly how the file systems should be split and what options are required. Beyond documents being stored on the file system and/or database and metadata being stored in the database along with other structured data, there is other information being read and written to on the file system.  Information such as user profile preferences, workflow item state information, metadata profiles, and other details are stored in files.  In addition, for certain processes within WCC, each of the nodes needs to know what the other nodes are doing so they don't step on each other.  WCC keeps track of this through the use of lock files on the file system.  Because of this, each node of the WCC cluster must have access to the same file system, just as they have access to the same database. WCC uses its own locking mechanism based on files, so it also needs to have access to those files without file attribute caching and without locking being done by the client (node).  If one of the nodes accesses a certain status file and it happens to be cached, that node might attempt to run a process which another node is already working on.  Or if a particular file is locked by one of the node clients, this could interfere with access by another node.  Unfortunately, disabling file attribute caching on the file share can impact performance, so it is important to disable caching and locking only on the particular folders which require it.  When configuring WebCenter Content after deploying the domain, it asks for 3 different directories: Content Server Instance Folder, Native File Repository Location, and Weblayout Folder.  And starting in PS5, it now asks for the User Profile Folder. Even if you plan on storing the content in the database, you still need to establish Native File (Vault) and Weblayout directories.  These will be used for handling temporary files, cached files, and files used to deliver the UI. Of these directories, the only folder which needs to have file attribute caching and locking disabled is the 'Content Server Instance Folder'.  So when establishing this share through NFS or a clustered file system, be sure to specify those options. For instance, if creating the share through NFS, use the 'noac' and 'nolock' options for the mount options. For the other directories, caching and locking should be enabled to provide the best performance for those locations.
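
    As an illustration (host and paths are hypothetical), /etc/fstab entries for such a layout might look like this, with attribute caching and locking disabled only on the instance-folder share:

        # Content Server Instance Folder: no attribute caching, no client locking
        nfshost:/export/ucm_cs    /mnt/share_no_cache    nfs  rw,noac,nolock  0 0
        # Vault / Weblayout shares: leave caching and locking enabled
        nfshost:/export/ucm_files /mnt/share_with_cache  nfs  rw              0 0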
    These directory path configurations are contained within the <domain dir>\ucm\cs\bin\intradoc.cfg file:

        #Server System Properties
        IDC_Id=UCM_server1

        #Server Directory Variables
        IdcHomeDir=/u01/fmw/Oracle_ECM1/ucm/idc/
        FmwDomainConfigDir=/u01/fmw/user_projects/domains/base_domain/config/fmwconfig/
        AppServerJavaHome=/u01/jdk/jdk1.6.0_22/jre/
        AppServerJavaUse64Bit=true
        IntradocDir=/mnt/share_no_cache/base_domain/ucm/cs/
        VaultDir=/mnt/share_with_cache/ucm/cs/vault/
        WeblayoutDir=/mnt/share_with_cache/ucm/cs/weblayout/

        #Server Classpath variables
        #Additional Variables
        #NOTE: UserProfilesDir is only available in PS5 - 11.1.1.6.0
        UserProfilesDir=/mnt/share_with_cache/ucm/cs/data/users/profiles/

    In addition to these folder configurations, it's also recommended to move node-specific folders to local disk to avoid unnecessary traffic to the shared directory.  So on each node, go to <domain dir>\ucm\cs\bin\intradoc.cfg and add these additional configuration entries:

        VaultTempDir=<domain dir>/ucm/<cs>/vault/~temp/
        TraceDirectory=<domain dir>/servers/<UCM_serverN>/logs/
        EventDirectory=<domain dir>/servers/<UCM_serverN>/logs/event/

    And of course, don't forget the cluster-specific configuration values to add as well.  These can be added through Admin Server -> General Configuration -> Additional Configuration Variables or directly in the <IntradocDir>/config/config.cfg file:

        ArchiverDoLocks=true
        DisableSharedCacheChecking=true
        ServiceAllowRetry=true     (use only with Oracle RAC Database)
        PublishLockTimeout=300000  (time can vary depending on publishing time and number of nodes)

    For additional information and details on clustering configuration, I highly recommend reviewing document [1209496.1] on the support site.  In addition, there is a great step-by-step guide on setting up a WebCenter Content cluster [1359930.1].

    Read the article

  • PayPal PDT and IPN - how do they work?

    - by slow diver
    PDT: Payment Data Transfer means getting the transaction data for a purchase that was made on the PayPal site, so you can fetch it on your own site and display it to the user. You may also want to store it in your database for archiving and tracking purposes. But I cannot exactly follow the documentation here. What I am not getting is: "Once you have activated PDT, every time a buyer makes a website payment and is redirected to your return URL, a transaction token will be passed along as a "GET" variable to this return URL. In order to properly use PDT and display transaction details to your customer, you should fetch the transaction token, variable name "tx", and retrieve transaction details from PayPal by constructing an HTTP POST to PayPal. Your POST should be sent to https://www.paypal.com/cgi-bin/webscr. You must post the transaction token using the variable "tx" and the value of the transaction token previously received (e.g. "tx=transaction_token"), and the special identity token using the variable "at" and the value of your PDT identity token (e.g. "at=identity_token"). You will also need to append a variable named "cmd" with the value "_notify-synch", for example "cmd=_notify-synch", to the POST string." IPN: I have set up Instant Payment Notification through the settings according to this documentation. This is basically logging into your PayPal account and enabling IPN while specifying a URL where the notification will be sent. This is used to complete an order so that the product can be shipped. What I did was set up a PHP page. I created a table, and whenever that page is called (or hit), it registers an entry in the table, so I know a notification came from PayPal. But it does not work either. What am I really doing wrong? The first thing I want to troubleshoot, though: when the buyer pays the amount, he should be automatically redirected to my site. I have enabled this, but automatic redirection just does not work; instead the URL is shown as an option after the payment confirmation. Can someone guide me through how the PDT process goes? Where do I make the request for PDT? Is it along with the very first request (the Buy Now button), or is it sent later? Addition: I found some good sample code of how everything should work, but it still does not work. I use this code http://officetrio.com/modules/free-php-paypal-ipn-script.php for IPN. I am using this one for PDT (it uses SSL; I changed SSL to regular HTTP, copying the PayPal version, and it still does not work): http://ykyuen.wordpress.com/2010/02/17/paypal-payment-data-transfer-sample-code/
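
    Following the documentation quoted above, a minimal sketch of the PDT post-back in PHP ($identity_token is an assumption: it comes from your PayPal profile settings): the return URL receives tx as a GET variable, and you POST it back to PayPal to fetch the transaction details:

        <?php
        // The return URL receives the transaction token as a GET variable.
        $tx = $_GET['tx'];

        $ch = curl_init('https://www.paypal.com/cgi-bin/webscr');
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
            'cmd' => '_notify-synch',
            'tx'  => $tx,
            'at'  => $identity_token, // assumption: your PDT identity token
        )));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $response = curl_exec($ch);
        curl_close($ch);

        // $response begins with SUCCESS or FAIL, followed by key=value lines.
        if (strpos($response, 'SUCCESS') === 0) {
            // parse the transaction details and display/store them
        }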

    Read the article

  • What is required for a scope in an injection framework?

    - by johncarl
    Working with libraries like Seam, Guice and Spring, I have become accustomed to dealing with variables within a scope. These libraries give you a handful of scopes and allow you to define your own. This is a very handy pattern for dealing with variable lifecycles and dependency injection. I have been trying to identify where scoping is the proper solution, and where another solution is more appropriate (context variable, singleton, etc.). I have found that if the scope lifecycle is not well defined, it is very difficult and often failure-prone to manage injections this way. I have searched on this topic but have found little discussion of the pattern. Are there good articles discussing where to use scoping and what the required/suggested prerequisites for scoping are? I'm interested in both reference discussion and your view on what is required or suggested for a proper scope implementation. Keep in mind that I am referring to scoping as a general idea; this includes things like globally scoped singletons, request- or session-scoped web variables, conversation scopes, and others. Edit: Some simple background on custom scopes: Google Guice custom scope. Some definitions relevant to the above: "scoping" - a set of requirements that define what objects get injected at what time. A simple example of this is thread scope, based on a ThreadLocal: such a scope would inject a variable based on what thread instantiated the class (a sketch appears below). "context variable" - a repository passed from one object to another, holding relevant variables. Much like scoping, this is a more brute-force way of accessing variables based on the calling code. Example:

        methodOne(Context context){
            methodTwo(context);
        }

        methodTwo(Context context){
            ... //same context as method one, if called from method one
        }

    "globally scoped singleton" - following the singleton pattern, there is one object per application instance. This applies to scopes because there is a basic lifecycle to this object: there is only one of these objects instantiated. Here's an example of a JSR-330 singleton-scoped object:

        @Singleton
        public class SingletonExample{
            ...
        }

    usage:

        public class One {
            @Inject SingletonExample example1;
        }

        public class Two {
            @Inject SingletonExample example2;
        }

    After instantiation:

        one.example1 == two.example2 //true;
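
    Filling in the thread-scope example referenced above, a minimal sketch (hypothetical names, not any particular framework's API) of a provider backed by a ThreadLocal, so each thread sees its own instance:

        import java.util.function.Supplier;

        // Each thread that asks for an instance gets its own lazily created copy.
        final class ThreadScoped<T> {
            private final ThreadLocal<T> instances;

            ThreadScoped(Supplier<T> factory) {
                // withInitial creates one instance per thread on first access
                this.instances = ThreadLocal.withInitial(factory);
            }

            T get() {
                return instances.get();
            }
        }

        // usage: ThreadScoped<StringBuilder> buf = new ThreadScoped<>(StringBuilder::new);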

    Read the article

  • design a model for a system of dependent variables

    - by dbaseman
    I'm dealing with a modeling system (financial) that has dozens of variables. Some of the variables are independent and function as inputs to the system; most of them are calculated from other variables (independent and calculated) in the system. What I'm looking for is a clean, elegant way to: (1) define the function of each dependent variable in the system, and (2) trigger a re-calculation, whenever a variable changes, of the variables that depend on it. A naive way to do this would be to write a single class that implements INotifyPropertyChanged and uses a massive case statement that lists out all the variable names x1, x2, ... xn on which others depend and, whenever a variable xi changes, triggers a recalculation of each of that variable's dependencies. I feel that this naive approach is flawed and that there must be a cleaner way. I started down the path of defining a CalculationManager<TModel> class, which would be used (in a simple example) something like as follows:

        public class Model : INotifyPropertyChanged
        {
            private CalculationManager<Model> _calculationManager = new CalculationManager<Model>();

            // each setter triggers a "PropertyChanged" event
            public double? Height { get; set; }
            public double? Weight { get; set; }
            public double? BMI { get; set; }

            public Model()
            {
                _calculationManager.DefineDependency<double?>(
                    forProperty: model => model.BMI,
                    usingCalculation: (height, weight) => weight / Math.Pow(height, 2),
                    withInputs: model => model.Height, model => model.Weight);
            }

            // INotifyPropertyChanged implementation here
        }

    I won't reproduce CalculationManager<TModel> here, but the basic idea is that it sets up a dependency map, listens for PropertyChanged events, and updates dependent properties as needed. I still feel that I'm missing something major here, and that this isn't the right approach: the (mis)use of INotifyPropertyChanged seems to me like a code smell; the withInputs parameter is defined as params Expression<Func<TModel, T>>[] args, which means that the argument list of usingCalculation is not checked at compile time; and the argument list (height, weight) is redundantly defined in both usingCalculation and withInputs. I am sure that this kind of system of dependent variables must be common in computational mathematics, physics, finance, and other fields. Does someone know of an established set of ideas that deals with what I'm grasping at here? Would this be a suitable application for a functional language like F#? Edit: More context: the model currently exists in an Excel spreadsheet and is being migrated to a C# application. It is run on-demand, and the variables can be modified by the user from the application's UI. Its purpose is to retrieve variables that the business is interested in, given current inputs from the markets and model parameters set by the business.

    Read the article

  • Mac - Flash file not loaded in independent flash player

    - by Mugdha
    Hi, I am working on an independent application to play Flash files on Mac. I have already done the same for Linux, and it works flawlessly, but on Mac for some reason Flash is not drawing to my window. It is not throwing any kind of error either. I am using Flash Player 10, which means I am using the Core Graphics drawing model. I am able to send mouse events to Flash, and I wrote a sample plugin to check whether there was a problem in the context that I was sending, but my sample plugin draws properly to the window. I get a call to NPN_InvalidateRect twice, and as a response I send an update event back to Flash. I drew a dummy rectangle to check that my context is correct. I have flipped the context to make the origin the top left corner. On right-clicking the debug version of the Flash player, it shows the following message: "Movie not loaded..." Can anyone give me any idea why the content is not being drawn? I would really appreciate the help, as I have been struggling with this for more than a month now. Here is a small log of the interaction that I have with Flash:

        NPN_UserAgent Called
        NPN_GetValue Called with variable NPNVWindowNPObject; return NULL
        NPN_GetValue Called with variable NPNVWindowNPObject; return NULL
        NPN_GetValue Called with variable NPNVSupportsWindowless; return true
        NPN_SetValue Called for Variable - NPPVpluginTransparentBool; return true
        NPN_GetValue Called with variable NPNVsupportsCoreGraphicsBool; return true
        NPN_SetValue Called for Variable - NPNVpluginDrawingModel
        NPP_SetWindow (CoreGraphics): 0, window=0xebaa90, context=0xe4c930, window.x:0 window.y:22 window.width:480 window.height:270
        NPP_HandleEvent(activateEvent) accepted:0 isActive: 1
        NPP_HandleEvent(updateEvt) accepted: 1
        NPN_UserAgent Called
        NPN_GetURLNotify Called with URL - javascript:top.location+"flashplugin_unique"
        NPN_GetValue Called with variable NPNVWindowNPObject; return NULL
        NPP_NewStream URL=/Users/mjain/Desktop/clock.swf MIME=application/x-shockwave-flash error=0
        NPP_WriteReady responseURL=/Users/mjain/Desktop/clock.swf bytes=268435455
        NPN_InvalidateRect Called
        NPP_Write responseURL=/Users/mjain/Desktop/clock.swf bytes=9925 total-delivered=9925/9925
        NPP_WriteReady responseURL=/Users/mjain/Desktop/clock.swf bytes=268435455
        NPP_DestroyStream responseURL=/Users/mjain/Desktop/clock.swf error=0
        NPP_HandleEvent(updateEvt) accepted: 1
        NPN_InvalidateRect Called
        NPP_HandleEvent(updateEvt) accepted: 1
        NPP_NewStream URL=javascript:top.location+"flashplugin_unique" MIME=text/plain error=0
        NPP_WriteReady responseURL=javascript:top.location+"flashplugin_unique" bytes=16000
        NPN_UserAgent Called
        NPP_Write responseURL=javascript:top.location+"flashplugin_unique" bytes=52 total-delivered=52/52
        NPP_WriteReady responseURL=javascript:top.location+"flashplugin_unique" bytes=16000
        NPP_DestroyStream responseURL=javascript:top.location+"flashplugin_unique" error=0
        NPP_URLNotify responseURL=javascript:top.location+"flashplugin_unique" reason=0

    Thanks, Mugdha.

    Read the article

  • EM12c: Using the LIST verb in emcli

    - by SubinDaniVarughese
    Many of us who use EM CLI to write scripts and automate our daily tasks should not miss out on the new list verb released with Oracle Enterprise Manager 12.1.0.3.0. The combination of the list verb and Jython-based scripting support in EM CLI makes it easier to achieve automation for complex tasks with just a few lines of code. Before I jump into a script, let me highlight the key attributes of the list verb and why it's simply excellent!

    1. Multiple resources under a single verb: A resource can be a set of users or targets, etc. Using the list verb, you can retrieve information about a resource from the repository database. Here is an example which retrieves the list of administrators within EM.

    Standard mode:

        $ emcli list -resource="Administrators"

    Interactive mode:

        emcli> list(resource="Administrators")

    The output will be the same as standard mode.

    Standard mode, running a script:

        $ emcli @myAdmin.py
        Enter password :  ******

    The output will be the same as standard mode. Contents of the myAdmin.py script:

        login()
        print list(resource="Administrators", jsonout=False).out()

    To get a list of all available resources, use:

        $ emcli list -help

    With every release of EM, more resources are being added to the list verb. If you have a resource which you feel would be valuable, then go ahead and contact Oracle Support to log an enhancement request with product development. Be sure to say how the resource is going to help improve your daily tasks.

    2. Consistent formatting: It is possible to format the output of any resource consistently using these options.

    -columns: This option is used to specify which columns should be shown in the output. Here is an example which shows the list of administrators and their account status:

        $ emcli list -resource="Administrators" -columns="USER_NAME,REPOS_ACCOUNT_STATUS"

    To get a list of columns in a resource, use:

        $ emcli list -resource="Administrators" -help

    You can also specify the width of each column. For example, here the column width of USER_TYPE is set to 20 and DEPARTMENT to 30:

        $ emcli list -resource=Administrators -columns="USER_NAME,USER_TYPE:20,COST_CENTER,CONTACT,DEPARTMENT:30"

    This is useful if your terminal is too small or you need to fine-tune a list of specific columns for quick use or improved readability.

    -colsize: This option is used to resize column widths. Here is the same example as above, but using -colsize to define the width of USER_TYPE as 20 and DEPARTMENT as 30:

        $ emcli list -resource=Administrators -columns="USER_NAME,USER_TYPE,COST_CENTER,CONTACT,DEPARTMENT" -colsize="USER_TYPE:20,DEPARTMENT:30"

    The existing standard EM CLI formatting options are also available in the list verb:

        -format="name:pretty" | -format="name:script" | -format="name:csv" | -noheader | -script

    There are so many uses depending on your needs. Have a look at the resources and the columns in each resource. Refer to the EM CLI book in the EM documentation for more information.

    3. Search: Using the -search option in the list verb makes it possible to search for a specific row in a specific column within a resource. This is similar to the SQL*Plus WHERE clause.
    The following operators are supported:

        =    !=    >    <    >=    <=    like    is (must be followed by null or not null)

    Here is an example which searches for all EM administrators in the marketing department located in the USA:

        $ emcli list -resource="Administrators" -search="DEPARTMENT ='Marketing'" -search="LOCATION='USA'"

    Here is another example which shows all the named credentials created since a specific date:

        $ emcli list -resource=NamedCredentials -search="CredCreatedDate > '11-Nov-2013 12:37:20 PM'"

    Note that the timestamp has to be in the format DD-MON-YYYY HH:MI:SS AM/PM. Some resources need a bind variable to be passed to get output. A bind variable is created in the resource and then referenced in the command. For example, this command will list all the default preferred credentials for target type oracle_database:

        $ emcli list -resource="PreferredCredentialsDefault" -bind="TargetType='oracle_database'" -colsize="SetName:15,TargetType:15"

    You can provide multiple bind variables. To verify whether a column is searchable or requires a bind variable, use the -help option. Here is an example:

        $ emcli list -resource="PreferredCredentialsDefault" -help
    4. Secure access: When the list verb collects data, it only displays content which the administrator currently logged into EM CLI has access to. For example, consider this use case: AdminA has access only to TargetA. AdminA logs into EM CLI. Executing the list verb to get the list of all targets will only show TargetA.

    5. User-defined SQL: Using the -sql option, user-defined SQL can be executed. The SQL provided in the -sql option is executed as the EM user MGMT_VIEW, which has read-only access to the EM published MGMT$ database views in the SYSMAN schema. To get the list of EM published MGMT$ database views, go to the Extensibility Programmer's Reference book in the EM documentation; there is a chapter about Using Management Repository Views. It's always recommended to reference the documentation for the supported MGMT$ database views. Suppose you are using a MGMT$ABC view which is not in the chapter: during an upgrade, since the view was not in the book and not supported, it is possible the view might undergo a change in its structure or data. Using a supported view ensures that your scripts using -sql will continue working after upgrade. Here's an example:

        $ emcli list -sql='select * from mgmt$target'

    6. JSON output support: JSON (JavaScript Object Notation) enables data to be displayed in a collection of name/value pairs. There is a lot of reading material about JSON online for more information. As an example, we had a requirement where an EM administrator had many 11.2 databases in their test environment, and the developers requested the administrator to change the lifecycle status from Test to Production. This meant the admin had to go to the EM "All targets" page, identify the set of 11.2 databases, then go into each target database page and manually change the property to Production. Sounds easy to say, but this administrator had numerous targets, and the task is repeated for every release cycle. We told him there is an easier way to do this with a script, which he can reuse whenever anyone wants to change a set of targets to a different lifecycle status.

    Here is a Jython script which uses list and JSON to change the LifeCycle Status property value of every 11.2 database target. If you are new to scripting and Jython, I would suggest visiting the basic chapters of any Jython tutorial; understanding Jython is important for writing the logic your use case needs. If you already write scripts in Perl or shell, or know a programming language like Java, then you can easily follow the logic. Disclaimer: The scripts in this post are subject to the Oracle Terms of Use located here.

        1  from emcli import *
        2  search_list = ['PROPERTY_NAME=\'DBVersion\'','TARGET_TYPE=\'oracle_database\'','PROPERTY_VALUE LIKE \'11.2%\'']
        3  if len(sys.argv) == 2:
        4     print login(username=sys.argv[0])
        5     l_prop_val_to_set = sys.argv[1]
        6     l_targets = list(resource="TargetProperties", search=search_list, columns="TARGET_NAME,TARGET_TYPE,PROPERTY_NAME")
        7     for target in l_targets.out()['data']:
        8        t_pn = 'LifeCycle Status'
        9        print "INFO: Setting Property name " + t_pn + " to value " + l_prop_val_to_set + " for " + target['TARGET_NAME']
        10       print set_target_property_value(property_records=target['TARGET_NAME']+":"+target['TARGET_TYPE']+":"+t_pn+":"+l_prop_val_to_set)
        11 else:
        12    print "\n ERROR: Property value argument is missing"
        13    print "\n INFO: Format to run this file is filename.py <username> <Database Target LifeCycle Status Property Value>"
    You can download the script from here. I could not upload the file with a .py extension, so you need to rename the file to myScript.py before executing it using EM CLI. A line-by-line explanation for beginners:

        Line 1:  Imports the EM CLI verbs as functions.
        Line 2:  search_list is a variable to pass to the search option in the list verb. I am using an escape character for the single quotes. To pass more than one value for the same option in the list verb, define comma-separated values surrounded by square brackets, as above.
        Line 3:  An "if" condition to ensure the user provides two arguments with the script; otherwise line 12 prints an error message.
        Line 4:  Logging into EM. You can remove this if you have set up EM CLI with autologin. For more details about setup and autologin, please see the EM CLI book in the EM documentation.
        Line 5:  l_prop_val_to_set is another variable: the property value to be set. Remember we are changing the value from Test to Production. The benefit of this variable is that you can reuse the script to change the property value from and to any other values.
        Line 6:  The output of the list verb is stored in l_targets. In the list verb I pass the resource as TargetProperties, search as the search_list variable, and only the three columns I need - TARGET_NAME, TARGET_TYPE and PROPERTY_NAME. I don't need the other columns for my task.
        Line 7:  A for loop. The data in l_targets is available in JSON format; using the for loop, each pair becomes available in the 'target' variable.
        Line 8:  t_pn is the "LifeCycle Status" variable. If required, I could take this as an input too and then use the script to change any target property. In this example, I just wanted to change the "LifeCycle Status".
        Line 9:  A message informing the user that the script is setting the property value for the target.
        Line 10: The set_target_property_value verb, which sets the value using the property_records option. Once it is set for one target pair, it moves to the next one. In my example I am just showing three DBs, but the real use is when you have 20 or 50 targets.

    The script is executed as:

        $ emcli @myScript.py subin Production

    The recommendation is to first test the script before running it on a production system. We tested on a small set of targets while optimizing the script for fewer lines of code and better messaging. For your quick reference, the resources available with the list verb in Enterprise Manager 12.1.0.4.0 can be shown with:

        $ emcli list -help

    Watch this space for more blog posts using the list verb and EM CLI scripting use cases. I hope you enjoyed reading this blog post and that it has helped you gain more information about the list verb. Happy scripting!! Disclaimer: The scripts in this post are subject to the Oracle Terms of Use located here.

    Read the article

  • "C variable type sizes are machine dependent." Is it really true? signed & unsigned numbers ;

    - by claws
    Hello, I've been told that C types are machine dependent. Today I wanted to verify it:

        void legacyTypes()
        {
            /* character types */
            char k_char = 'a';
            //Signedness --> signed & unsigned
            signed char k_char_s = 'a';
            unsigned char k_char_u = 'a';

            /* integer types */
            int k_int = 1; /* Same as "signed int" */
            //Signedness --> signed & unsigned
            signed int k_int_s = -2;
            unsigned int k_int_u = 3;
            //Size --> short, _____, long, long long
            short int k_s_int = 4;
            long int k_l_int = 5;
            long long int k_ll_int = 6;

            /* real number types */
            float k_float = 7;
            double k_double = 8;
        }

    I compiled it on a 32-bit machine using the MinGW C compiler:

        _legacyTypes:
            pushl   %ebp
            movl    %esp, %ebp
            subl    $48, %esp
            movb    $97, -1(%ebp)   # char
            movb    $97, -2(%ebp)   # signed char
            movb    $97, -3(%ebp)   # unsigned char
            movl    $1, -8(%ebp)    # int
            movl    $-2, -12(%ebp)  # signed int
            movl    $3, -16(%ebp)   # unsigned int
            movw    $4, -18(%ebp)   # short int
            movl    $5, -24(%ebp)   # long int
            movl    $6, -32(%ebp)   # long long int
            movl    $0, -28(%ebp)
            movl    $0x40e00000, %eax
            movl    %eax, -36(%ebp)
            fldl    LC2
            fstpl   -48(%ebp)
            leave
            ret

    I compiled the same code on a 64-bit processor (Intel Core 2 Duo) with GCC (Linux):

        legacyTypes:
        .LFB2:
            .cfi_startproc
            pushq   %rbp
            .cfi_def_cfa_offset 16
            movq    %rsp, %rbp
            .cfi_offset 6, -16
            .cfi_def_cfa_register 6
            movb    $97, -1(%rbp)   # char
            movb    $97, -2(%rbp)   # signed char
            movb    $97, -3(%rbp)   # unsigned char
            movl    $1, -12(%rbp)   # int
            movl    $-2, -16(%rbp)  # signed int
            movl    $3, -20(%rbp)   # unsigned int
            movw    $4, -6(%rbp)    # short int
            movq    $5, -32(%rbp)   # long int
            movq    $6, -40(%rbp)   # long long int
            movl    $0x40e00000, %eax
            movl    %eax, -24(%rbp)
            movabsq $4620693217682128896, %rax
            movq    %rax, -48(%rbp)
            leave
            ret

    Observations: char, signed char, unsigned char, int, unsigned int, signed int, short int, unsigned short int, and signed short int all occupy the same number of bytes on both the 32-bit and 64-bit processors. The only change is in long int and long long int: both of these occupy 32 bits on the 32-bit machine and 64 bits on the 64-bit machine. And also the pointers, which take 32 bits on the 32-bit CPU and 64 bits on the 64-bit CPU. Questions: I cannot say that what the books say is wrong, but I'm missing something here. What exactly does "variable types are machine dependent" mean? As you can see, there is no difference between the instructions for unsigned and signed numbers, so how come the range of numbers that can be addressed using each is different? I was also reading http://stackoverflow.com/questions/2511246/how-to-maintain-fixed-size-of-c-variable-types-over-different-machines but I didn't get the purpose of the question or its answers. What "maintaining fixed size"? They are all the same. I didn't understand how those answers would ensure the same size.
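
    For what it's worth, a quicker way to check this - a minimal sketch that prints the sizes directly instead of reading the assembly (C99 for %zu):

        #include <stdio.h>

        int main(void)
        {
            printf("short: %zu, int: %zu, long: %zu, long long: %zu, void*: %zu\n",
                   sizeof(short), sizeof(int), sizeof(long),
                   sizeof(long long), sizeof(void *));
            return 0;
        }
        /* Typical output: 2, 4, 4, 8, 4 on 32-bit; 2, 4, 8, 8, 8 on 64-bit Linux */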

    Read the article

  • Google Search Appliance: Limiting Number of Results

    - by senfo
    I am attempting to limit the number of results that are displayed as a result of dynamic result clustering on the Google Search Appliance. I've looked through the XSLT, but I've only come across the following two user-modifiable options:

        <!-- *** dyanmic result cluster options *** -->
        <xsl:variable name="show_res_clusters">1</xsl:variable>
        <xsl:variable name="res_cluster_position">right</xsl:variable>

    Are there more options that I'm unaware of that I could use to limit the results? Is there another way that I'm missing?

    Read the article

  • What does export do in BASH?

    - by Chas. Owens
    It is hard to admit, but I have never really understood what exactly export does to an environment variable. I know that if I don't export a variable I sometimes can't see it in child processes, but sometimes it seems like I can. What is really going on when I say export foo=5 and when should I not export a variable?
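
    A minimal sketch of the difference, safe to paste into any interactive bash session: an unexported variable lives only in the current shell, while export copies it into the environment handed to every child process:

        foo=5                                # shell variable only
        bash -c 'echo "child sees: [$foo]"'  # prints: child sees: []
        export foo                           # mark foo for export to children
        bash -c 'echo "child sees: [$foo]"'  # prints: child sees: [5]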

    Read the article

  • How can I manage user-dependent variables that are valid on the whole domain (Win Server 2003)?

    - by Stephane R.
    Hello, I am working on a system that needs a user-dependent variable; the user is on Windows XP and is connected to Windows Server 2003. I cannot save this variable in the registry of the local machine under HKCU, because the users are likely to exchange their machines. This variable must be accessible on the whole domain. Do you have any idea how to implement this? Are there WMI features that may help me?

    Read the article

  • Excel sum from column based on another column

    - by jsmars
    I have two columns. The values in the first one are either blank or 1. The values in the second one are numbers. I also have a variable field. At the bottom of each column, I'd like to have a "total" field which checks whether there is a value (of 1) in the first column, and if there is, adds up the corresponding values from the second column (on the same rows) and multiplies the sum by the variable. For example, with the variable set to 10:

        name1   name2   counter
                1       2
        1               3
        1       1       3
        1               4
        totals  100     50

    Since name1 has three 1's in its column, it takes each corresponding value from the counter column, adds them up, and multiplies by the variable. I'm sorry if this has been asked; I've tried searching, but I have a hard time understanding Excel syntax. Thanks!
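
    A sketch of one formula that matches this description (the cell layout is an assumption: name1's flags in A2:A5, name2's in B2:B5, the counters in C2:C5, and the variable in F1): SUMIF adds the counter values only on rows where the flag column contains a 1, and the result is scaled by the variable:

        =SUMIF(A2:A5, 1, $C$2:$C$5) * $F$1

    Filling the same formula across to column B gives the second total.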

    Read the article

  • Apache htaccess with mod_expires Not Working for certain directories

    - by keyboarddrummer
    I have a Joomla site on which I am trying to enable caching using mod_expires. I have the .htaccess in the root of the site and have added the options as found on the page http://www.pactsoftware.nl/tools/joomla-optimization.html Using the PageSpeed extension in Chrome, prior to adding this to my .htaccess, my site scored a 55 (caching is at the top, and lists a lot of images, CSS, and JS files). After these directives, it scores 70, with caching in the yellow, but it still lists some image files (some are two directories deep and the rest are four). I checked for other .htaccess files in the Joomla root, but none are between those folders and the root. It is almost as if the .htaccess only works in that one directory, not the subfolders. I have tried putting a .htaccess in each affected subdirectory, but it does not work. Does anyone have any ideas?
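
    For reference, a typical mod_expires block of the kind that page recommends (the exact MIME types and lifetimes are assumptions; adjust to taste). Note that directives in a root .htaccess normally do apply to subdirectories unless a deeper .htaccess or an AllowOverride setting intervenes:

        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresByType image/png "access plus 1 month"
            ExpiresByType image/jpeg "access plus 1 month"
            ExpiresByType text/css "access plus 1 week"
            ExpiresByType application/javascript "access plus 1 week"
        </IfModule>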

    Read the article

  • Is it key_buffer or key_buffer_size?

    - by user176890
    I searched the internet for the correct variable to use in the my.cnf file. Some said that key_buffer_size is deprecated, but others said that key_buffer_size is the correct variable for my.cnf. So which is really the correct variable: key_buffer or key_buffer_size? I'm using Ubuntu 12.04. I also have the key_buffer variable twice in my my.cnf file; this is what I got after installing MySQL. The first one is located under this:

        [mysqld]
        key_buffer = 16M

    The other one is located under this:

        [isamchk]
        key_buffer = 16M

    Read the article

  • Using VLC to Unicast High Definition Webcam over local gigabit LAN with low/zero delay

    - by Robin Day
    We're setting up a webcam "window" between two offices in the same building. The two PCs are connected to the same gigabit switch. We're using VLC to stream the webcam over HTTP using the following commands:

        vlc dshow:// :dshow-caching="0" :dshow-size="640x480" :sout=#transcode{vcodec=h264,vb=0,scale=0}:http{mux=ffmpeg{mux=flv},dst=:8080/} :no-sout-rtp-sap :no-sout-standard-sap :ttl=1 :sout-keep

        vlc http://192.168.0.1:8080 :http-caching="0"

    Even with the caching set to zero, the delay in the image is a good 2-3 seconds, and the CPU usage of each PC is maxed. I'm guessing it's the transcoding that's causing much of the delay. Can anyone give me some changes to these command lines that will reduce the transcoding load, or send the webcam over a different protocol, or anything else that will reduce the delay of the cameras? Bandwidth is not an issue at all, as the PCs can be connected to a dedicated switch/VLAN if required.

    Read the article

  • Ubuntu Launcher Items Don't Have Correct Environment Vars under NX

    - by ivarley
    I've got an environment variable issue I'm having trouble resolving. I'm running Ubuntu (Karmic, 9.10) and coming in via NX (NoMachine) on a Mac. I've added several environment variables in my .bashrc file, e.g.:

        export JAVA_HOME=$HOME/dev/tools/Linux/jdk/jdk1.6.0_16/

    Sitting at the machine, this environment variable is available on the command line, as well as for apps I launch from the Main Menu. Coming in over NX, however, the environment variable shows up correctly on the command line, but NOT when I launch things via the launcher. As an example, I created a simple shell script called testpath in my home folder:

        #!/bin/sh
        echo $PATH && sleep 5
        quit

    I gave it execute privileges:

        chmod +x testpath

    And then I created a launcher item in my Main Menu that simply runs:

        ./testpath

    When I'm sitting at the computer, this launcher runs and shows all the stuff I put into the $PATH variable in my .bashrc file (e.g. $JAVA_HOME, etc.). But when I come in over NX, it shows a totally different value for the $PATH variable, despite the fact that if I launch a terminal window (still in NX) and type echo $PATH, it shows up correctly. I assume this has to do with which files get loaded by the windowing system over NX, and that it's some other file. But I have no idea how to fix it. For the record, I also have a .profile file with the following in it:

        # if running bash
        if [ -n "$BASH_VERSION" ]; then
            # include .bashrc if it exists
            if [ -f "$HOME/.bashrc" ]; then
                . "$HOME/.bashrc"
            fi
        fi

    Read the article
