Search Results

Search found 6298 results on 252 pages for 'native executable'.

Page 59/252 | < Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >

  • Cannot open simple script application on mac

    - by streetpc
    Mac OS X 10.6. I created a very simple app, which is only a wrapper of a shell script (so that I can select this script in application selectors, like startup apps). Yesterday launching it worked, but today I changed the executable script's content and name (to something that perfectly works in a shell script launched in the Terminal) and it will only display a Finder-iconed dialog saying Cannot open the application because it is not supported on this kind of Mac. I restored the previous script (content/name) but I still get the error! Same when re-bundling the app from scratch, or completely changing the bundle identifier… If I try to open it in the Terminal using open My.app, I get The application cannot be opened because it has an incorrect executable format. But when I execute the Contents/MacOS/Script directly, it always works (with both contents). Also, it is displayed with the correct icon and meta-information in the Finder (so I guess the Info.plist is understood). The app's file tree is: Contents/ Info.plist MacOS/ Script (executable bit set, works when launched directly) PkgInfo Resources/ AppIcon.icns Here is the Info.plist content: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>CFBundleExecutable</key> <string>Script</string> <key>CFBundleIconFile</key> <string>AppIcon</string> <key>CFBundleIdentifier</key> <string>asdf.ScriptApp</string> <key>CFBundleInfoDictionaryVersion</key> <string>6.0</string> <key>CFBundleName</key> <string>My script</string> <key>CFBundlePackageType</key> <string>APPL</string> <key>CFBundleShortVersionString</key> <string>1.0</string> <key>CFBundleSignature</key> <string>????</string> <key>CFBundleVersion</key> <string>1</string> <key>LSMinimumSystemVersion</key> <string>10.4</string> </dict> </plist> And the PkgInfo file only contains APPL????. I tested the Script with a simple echo "ok" and echo "ok" >/tmp/test (plus a #!/bin/sh header). So my questions are: Is there some kind of validity caching for applications? Based on what? How do I flush it? Where does this message come from? I tried to google it but all I get is a page talking about 32/64-bit Java…
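
    One hedged avenue for the caching question: Launch Services keeps a per-user database of application metadata, and the lsregister tool that ships with OS X can force it to re-evaluate a bundle. A minimal sketch, assuming the bundle sits on the Desktop (the path is only an illustration):

      # Force Launch Services to re-read the bundle's metadata
      LSREG=/System/Library/Frameworks/CoreServices.framework/Frameworks/LaunchServices.framework/Support/lsregister
      "$LSREG" -f -u ~/Desktop/My.app   # unregister the bundle
      "$LSREG" -f ~/Desktop/My.app      # register it again
      touch ~/Desktop/My.app            # bump the modification date so Finder re-scans it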

    Read the article

  • Need help fixing a strange path error in bash

    - by Evan
    UPDATE Ok, I found some errors in the path which I think I fixed, but now it's not running in any case - which for some reason I think is a step forward. Thanks for suggesting the following steps, here is their output: user@computer:~$ echo $PATH /usr/share/fsl/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games:/usr/local/matlab/bin:/usr/local/VoxBo/bin:/usr/local/itt/idl64/bin:/usr/local/afni/bin/:/usr/local/mricron:/usr/lib/voxbo/bin:/home/user/folder:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11/:/usr/games/:/usr/local/matlab/bin:/usr/local/VoxBo/bin/:/usr/local/itt/idl64/bin:/usr/local/afni/bin/:/usr/local/mricron/ user@computer:~$ typeset -p PATH declare -x PATH="/usr/share/fsl/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games:/usr/local/matlab/bin:/usr/local/VoxBo/bin:/usr/local/itt/idl64/bin:/usr/local/afni/bin/:/usr/local/mricron:/usr/lib/voxbo/bin:/home/user/folder:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11/:/usr/games/:/usr/local/matlab/bin:/usr/local/VoxBo/bin/:/usr/local/itt/idl64/bin:/usr/local/afni/bin/:/usr/local/mricron/" user@computer:~$ type app1 app1 is /home/user/folder/app1 user@computer:~$ type app2 app2 is /home/user/folder/app2 user@computer:~$ app1 bash: /home/user/folder/app1: No such file or directory user@computer:~$ app2 bash: /home/user/folder/app2: No such file or directory user@computer:~$ /home/user/folder/app1 bash: /home/user/folder/app1: No such file or directory user@computer:~$ /home/user/folder/app2 bash: /home/user/folder/app2: No such file or directory user@computer:~$ cd /home/user/folder user@computer:~/folder$ app1 bash: /home/user/folder/app1: No such file or directory user@computer:~/folder$ ./app1 bash: ./app1: No such file or directory user@computer:~/folder$ ./app2 bash: ./app2: No such file or directory user@computer:~/folder$ ls -l total 29384 -rwxr-xr-x 1 user user 14949776 2011-02-03 11:09 app1 -rwxr-xr-x 1 user user 15137300 2011-02-03 11:10 app2 user@computer:~/folder$ Thanks for everyone's input! ORIGINAL QUESTION I have two executable files I downloaded and am trying to add to the path. They are located in /home/user/folder and the specific files are /home/user/folder/app1 /home/user/folder/app2 Both app1 and app2 have the executable flag set to all (user, group, other). I can execute the files if I am in /home/user/folder and I execute these commands ./app1 ./app2 However I can't run them from elsewhere. I added this line to my .profile PATH="$PATH:/home/user/folder" and then sourced the path with . /home/user/.profile and I can see app1 and app2 when I use command completion (pressing tab). However here is what happens when I try to run app1 or app2 with the following commands (the following only shows 'app1' but the same is true of 'app2') user@comp:~$ app1 -bash: app1: command not found user@comp:~$ /home/user/folder/app1 -bash: app1: command not found user@comp:~/folder$ ./app1 (program runs) I'm stumped :), I must have missed something simple. Thanks for your help!!
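
    A hedged diagnostic for this symptom: "No such file or directory" on a file that plainly exists usually means the kernel could not find the binary's ELF interpreter, classically a 32-bit executable on a 64-bit system without the 32-bit runtime libraries. Standard binutils/coreutils commands can confirm that:

      file /home/user/folder/app1                            # 32-bit or 64-bit ELF?
      readelf -l /home/user/folder/app1 | grep interpreter   # which loader does it request?
      ls -l /lib/ld-linux.so.2                               # is that loader actually present?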

    Read the article

  • How to Enable Google Chrome’s Secret Gold Icon

    - by The Geek
    You might not realize this, but there’s actually another icon hidden inside the Google Chrome executable file—and it’s a high-quality version of the same logo, but golden. Here’s how to use it. If you’re wondering how we got the smooth icon you’re seeing above, it’s because the latest dev channel version switched the icon from the older style.
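
    If you would rather pull the hidden resource out of the executable yourself, a hedged sketch using the icoutils package (resource type 14 is the Windows group-icon type):

      wrestool -l chrome.exe | grep -i icon        # list icon resources inside the executable
      wrestool -x --type=14 -o icons/ chrome.exe   # extract them as .ico files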

    Read the article

  • NoSQL Memcached API for MySQL: Latest Updates

    - by Mat Keep
    With data volumes exploding, it is vital to be able to ingest and query data at high speed. For this reason, MySQL has implemented NoSQL interfaces directly to the InnoDB and MySQL Cluster (NDB) storage engines, which bypass the SQL layer completely. Without SQL parsing and optimization, Key-Value data can be written directly to MySQL tables up to 9x faster, while maintaining ACID guarantees. In addition, users can continue to run complex queries with SQL across the same data set, providing real-time analytics to the business or anonymizing sensitive data before loading to big data platforms such as Hadoop, while still maintaining all of the advantages of their existing relational database infrastructure. This and more is discussed in the latest Guide to MySQL and NoSQL, where you can learn more about using the APIs to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database. The native Memcached API is part of the MySQL 5.6 Release Candidate, and is already available in the GA release of MySQL Cluster. By using the ubiquitous Memcached API for writing and reading data, developers can preserve their investments in Memcached infrastructure by re-using existing Memcached clients, while also eliminating the need for application changes. Speed, when combined with flexibility, is essential in the world of growing data volumes and variability. Complementing NoSQL access, support for on-line DDL (Data Definition Language) operations in MySQL 5.6 and MySQL Cluster enables DevOps teams to dynamically update their database schema to accommodate rapidly changing requirements, such as the need to capture additional data generated by their applications. These changes can be made without database downtime. Using the Memcached interface, developers do not need to define a schema at all when using MySQL Cluster. Let's look a little more closely at the Memcached implementations for both InnoDB and MySQL Cluster. Memcached Implementation for InnoDB The Memcached API for InnoDB is previewed as part of the MySQL 5.6 Release Candidate. As illustrated in the following figure, Memcached for InnoDB is implemented via a Memcached daemon plug-in to the mysqld process, with the Memcached protocol mapped to the native InnoDB API. Figure 1: Memcached API Implementation for InnoDB With the Memcached daemon running in the same process space, users get very low latency access to their data while also leveraging the scalability enhancements delivered with InnoDB and a simple deployment and management model. Multiple web / application servers can remotely access the Memcached / InnoDB server to get direct access to a shared data set. With simultaneous SQL access, users can maintain all the advanced functionality offered by InnoDB including support for Foreign Keys, XA transactions and complex JOIN operations. Benchmarks demonstrate that the NoSQL Memcached API for InnoDB delivers up to 9x higher performance than the SQL interface when inserting new key/value pairs, with a single low-end commodity server supporting nearly 70,000 Transactions per Second. Figure 2: Over 9x Faster INSERT Operations The delivered performance demonstrates MySQL with the native Memcached NoSQL interface is well suited for high-speed inserts with the added assurance of transactional guarantees.
    You can check out the latest Memcached / InnoDB developments and benchmarks here. You can learn how to configure the Memcached API for InnoDB here. Memcached Implementation for MySQL Cluster Memcached API support for MySQL Cluster was introduced with General Availability (GA) of the 7.2 release, and joins an extensive range of NoSQL interfaces that are already available for MySQL Cluster. Like Memcached, MySQL Cluster provides a distributed hash table with in-memory performance. MySQL Cluster extends Memcached functionality by adding support for write-intensive workloads, a full relational model with ACID compliance (including persistence), rich query support, auto-sharding and 99.999% availability, with extensive management and monitoring capabilities. All writes are committed directly to MySQL Cluster, eliminating cache invalidation and the overhead of data consistency checking to ensure complete synchronization between the database and cache. Figure 3: Memcached API Implementation with MySQL Cluster Implementation is simple: 1. The application sends reads and writes to the Memcached process (using the standard Memcached API). 2. This invokes the Memcached Driver for NDB (which is part of the same process). 3. The NDB API is called, providing for very quick access to the data held in MySQL Cluster’s data nodes. The solution has been designed to be very flexible, allowing the application architect to find a configuration that best fits their needs. It is possible to co-locate the Memcached API in either the data nodes or application nodes, or alternatively within a dedicated Memcached layer. The benefit of this flexible approach to deployment is that users can configure behavior on a per-key-prefix basis (through tables in MySQL Cluster) and the application doesn’t have to care – it just uses the Memcached API and relies on the software to store data in the right place(s) and to keep everything synchronized. Using Memcached for Schema-less Data By default, every Key / Value is written to the same table with each Key / Value pair stored in a single row – thus allowing schema-less data storage. Alternatively, the developer can define a key-prefix so that each value is linked to a pre-defined column in a specific table. Of course if the application needs to access the same data through SQL then developers can map key prefixes to existing table columns, enabling Memcached access to schema-structured data already stored in MySQL Cluster. Conclusion Download the Guide to MySQL and NoSQL to learn more about NoSQL APIs and how you can use them to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database. See how to build a social app with MySQL Cluster and the Memcached API from our on-demand webinar, or take a look at the docs. Don't hesitate to use the comments section below for any questions you may have.
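
    Since the plugins speak the standard Memcached text protocol, any existing client, or even netcat, can exercise them. A minimal sketch, assuming the Memcached endpoint listens on the default port 11211:

      # "set <key> <flags> <exptime> <bytes>" followed by the value on the next line
      printf 'set user:42 0 0 5\r\nalice\r\n' | nc 127.0.0.1 11211   # server replies STORED
      printf 'get user:42\r\nquit\r\n' | nc 127.0.0.1 11211          # server replies VALUE user:42 0 5, then alice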

    Read the article

  • Haskell's cabal dependency problem with happy

    - by wirrbel
    I have problems installing ghc-mod on my Linux machine. cabal complains about "happy" not being available in version >= 1.17: $ cabal install ghc-mod Resolving dependencies... [1 of 1] Compiling Main ( /tmp/haskell-src-exts-1.14.0-1357/haskell-src-exts-1.14.0/Setup.hs, /tmp/haskell-src-exts-1.14.0-1357/haskell-src-exts-1.14.0/dist/setup/Main.o ) Linking /tmp/haskell-src-exts-1.14.0-1357/haskell-src-exts-1.14.0/dist/setup/setup ... Configuring haskell-src-exts-1.14.0... setup: The program happy version >=1.17 is required but it could not be found. Failed to install haskell-src-exts-1.14.0 cabal: Error: some packages failed to install: ghc-mod-3.1.3 depends on haskell-src-exts-1.14.0 which failed to install. haskell-src-exts-1.14.0 failed during the configure step. The exception was: ExitFailure 1 hlint-1.8.53 depends on haskell-src-exts-1.14.0 which failed to install. However, it is in fact installed, in v. 1.19, as you can see here: $ cabal install happy Resolving dependencies... [1 of 1] Compiling Main ( /tmp/happy-1.19.0-1124/happy-1.19.0/Setup.lhs, /tmp/happy-1.19.0-1124/happy-1.19.0/dist/setup/Main.o ) Linking /tmp/happy-1.19.0-1124/happy-1.19.0/dist/setup/setup ... Configuring happy-1.19.0... Building happy-1.19.0... Preprocessing executable 'happy' for happy-1.19.0... [ 1 of 18] Compiling NameSet ( src/NameSet.hs, dist/build/happy/happy-tmp/NameSet.o ) [ 2 of 18] Compiling Target ( src/Target.lhs, dist/build/happy/happy-tmp/Target.o ) [ 3 of 18] Compiling AbsSyn ( src/AbsSyn.lhs, dist/build/happy/happy-tmp/AbsSyn.o ) [ 4 of 18] Compiling ParamRules ( src/ParamRules.hs, dist/build/happy/happy-tmp/ParamRules.o ) [ 5 of 18] Compiling GenUtils ( src/GenUtils.lhs, dist/build/happy/happy-tmp/GenUtils.o ) [ 6 of 18] Compiling ParseMonad ( src/ParseMonad.lhs, dist/build/happy/happy-tmp/ParseMonad.o ) [ 7 of 18] Compiling Lexer ( src/Lexer.lhs, dist/build/happy/happy-tmp/Lexer.o ) [ 8 of 18] Compiling Parser ( dist/build/happy/happy-tmp/Parser.hs, dist/build/happy/happy-tmp/Parser.o ) [ 9 of 18] Compiling AttrGrammar ( src/AttrGrammar.lhs, dist/build/happy/happy-tmp/AttrGrammar.o ) [10 of 18] Compiling AttrGrammarParser ( dist/build/happy/happy-tmp/AttrGrammarParser.hs, dist/build/happy/happy-tmp/AttrGrammarParser.o ) [11 of 18] Compiling Grammar ( src/Grammar.lhs, dist/build/happy/happy-tmp/Grammar.o ) [12 of 18] Compiling First ( src/First.lhs, dist/build/happy/happy-tmp/First.o ) [13 of 18] Compiling LALR ( src/LALR.lhs, dist/build/happy/happy-tmp/LALR.o ) [14 of 18] Compiling Paths_happy ( dist/build/autogen/Paths_happy.hs, dist/build/happy/happy-tmp/Paths_happy.o ) [15 of 18] Compiling ProduceCode ( src/ProduceCode.lhs, dist/build/happy/happy-tmp/ProduceCode.o ) [16 of 18] Compiling ProduceGLRCode ( src/ProduceGLRCode.lhs, dist/build/happy/happy-tmp/ProduceGLRCode.o ) [17 of 18] Compiling Info ( src/Info.lhs, dist/build/happy/happy-tmp/Info.o ) [18 of 18] Compiling Main ( src/Main.lhs, dist/build/happy/happy-tmp/Main.o ) Linking dist/build/happy/happy ... Installing executable(s) in /home/hope/.cabal/bin Installed happy-1.19.0 Any ideas? cabal-install version 1.16.0.2 using version 1.16.0 of the Cabal library
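
    A hedged reading of the transcript: cabal installed happy into ~/.cabal/bin, which is probably not on the PATH that the haskell-src-exts configure step searches. Putting it there first usually unblocks the build:

      export PATH="$HOME/.cabal/bin:$PATH"
      happy --version        # should now report 1.19
      cabal install ghc-mod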

    Read the article

  • Oracle Database 12c Spatial: Vector Performance Acceleration

    - by Okcan Yasin Saygili-Oracle
    Most business information has a location component, such as customer addresses, sales territories and physical assets. Businesses can take advantage of their geographic information by incorporating location analysis and intelligence into their information systems. This allows organizations to make better decisions, respond to customers more effectively, and reduce operational costs – increasing ROI and creating competitive advantage. Oracle Database, the industry’s most advanced database, includes native location capabilities, fully integrated in the kernel, for fast, scalable, reliable and secure spatial and massive graph applications. It is a foundation for deploying enterprise-wide spatial information systems and location-enabled business applications. Developers can extend existing Oracle-based tools and applications, since they can easily incorporate location information directly in their applications, workflows, and services. Spatial Features The geospatial data features of Oracle Spatial and Graph option support complex geographic information systems (GIS) applications, enterprise applications and location services applications. Oracle Spatial and Graph option extends the spatial query and analysis features included in every edition of Oracle Database with the Oracle Locator feature, and provides a robust foundation for applications that require advanced spatial analysis and processing in the Oracle Database. It supports all major spatial data types and models, addressing challenging business-critical requirements from various industries, including transportation, utilities, energy, public sector, defense and commercial location intelligence. Network Data Model Graph Features The Network Data Model graph explicitly stores and maintains a persistent data model with network connectivity and provides network analysis capability such as shortest path, nearest neighbors, within cost and reachability. It loads partitioned networks into memory on demand, overcoming the limitations of in-memory analysis. Partitioning massive networks into manageable sub-networks simplifies the network analysis. RDF Semantic Graph Features RDF Semantic Graph has native support for World Wide Web Consortium standards. It has open, scalable, and secure features for storing RDF/OWL ontologies and data; native inference with OWL 2, SKOS and user-defined rules; and querying RDF/OWL data with SPARQL 1.1, Java APIs, and SPARQL graph patterns in SQL. Video: Oracle Spatial and Graph Overview Oracle Spatial is embedded in the Oracle Database product, so we can use the Oracle installer (OUI). The Oracle Universal Installer (OUI) is used to install Oracle Database software. OUI is a graphical user interface utility that enables you to view the Oracle software that is installed on your machine, install new Oracle Database software, and delete Oracle software that you no longer need to use. Online Help is available to guide you through the installation process. One of the installation options is to create a database. If you select database creation, OUI automatically starts Oracle Database Configuration Assistant (DBCA) to guide you through the process of creating and configuring a database. If you do not create a database during installation, you must invoke DBCA after you have installed the software to create a database. You can also use DBCA to create additional databases.
    For installing Oracle Database 12c you may check the Installing Oracle Database Software and Creating a Database tutorial under the Oracle Database 12c 2-Day DBA Series. You can always check if Spatial is available in your database using "select comp_id, version, status, comp_name from dba_registry where comp_id='SDO';" One of the most notable improvements with Oracle Spatial and Graph 12c can be seen in performance increases in vector data operations. Enabling the Spatial Vector Acceleration feature (available with the Spatial option) dramatically improves the performance of commonly used vector data operations, such as sdo_distance, sdo_aggr_union, and sdo_inside. With 12c, these operations also run more efficiently in parallel than in prior versions through the use of metadata caching. For organizations that have been facing processing limitations, these enhancements enable developers to make a small set of configuration changes and quickly realize significant performance improvements. Results include improved index performance, enhanced geometry engine performance, optimized secondary filter optimizations for Spatial operators, and improved CPU and memory utilization for many advanced vector functions. Vector performance acceleration is especially beneficial when using Oracle Exadata Database Machine and other large-scale systems. Oracle Spatial and Graph vector performance acceleration builds on general improvements available to all SDO_GEOMETRY operations in these areas: caching of index metadata, concurrent update mechanisms, and optimized spatial predicate selectivity and cost functions. These optimizations enable more efficient use of CPU, memory, and partitioning, resulting in substantial query performance improvements. Usage: To accelerate the performance of spatial operators, it is recommended that you set the SPATIAL_VECTOR_ACCELERATION database system parameter to the value TRUE. (This parameter is authorized for use only by licensed Oracle Spatial users, and its default value is FALSE.) You can set this parameter for the whole system or for a single session. To set the value for the whole system, do either of the following: Enter the following statement from a suitably privileged account: ALTER SYSTEM SET SPATIAL_VECTOR_ACCELERATION = TRUE; Add the following to the database initialization file (xxxinit.ora): SPATIAL_VECTOR_ACCELERATION = TRUE; To set the value for the current session, enter the following statement from a suitably privileged account: ALTER SESSION SET SPATIAL_VECTOR_ACCELERATION = TRUE; Check out the complete list of new features on Oracle.com @ http://www.oracle.com/technetwork/database/options/spatialandgraph/overview/index.html Spatial and Graph Data Sheet (PDF) Spatial and Graph White Paper (PDF)
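
    A minimal sketch combining the check and the setting described above, assuming shell access to the database host and SQL*Plus as a suitably privileged user:

      # check that Spatial is installed, then enable vector acceleration (12c)
      echo "select comp_id, version, status from dba_registry where comp_id = 'SDO';" | sqlplus -s / as sysdba
      echo "alter system set spatial_vector_acceleration = true;" | sqlplus -s / as sysdba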

    Read the article

  • CCNet TFS Migration - Dealing with left over folders

    - by Michael Stephenson
    I'm currently in the process of migrating our many BizTalk projects from MKS source control to TFS. While we will be using TFS for work item tracking and source control etc., we will be continuing to use CruiseControl for continuous integration, although I'm updating this to CCNet 1.5 at the same time. I'll post a few things, as much as a reminder to myself, about some of the problems we come across. Problem After the first build of our code, the next time a build is triggered an error is encountered by the TFS source control block refreshing the source code. System.IO.IOException: The directory is not empty. at System.IO.Directory.DeleteHelper(String fullPath, String userPath, Boolean recursive) at System.IO.Directory.Delete(String fullPath, String userPath, Boolean recursive) at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Vsts.deleteDirectory(String path) at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Vsts.GetSource(IIntegrationResult result) at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Build(IIntegrationResult result) at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request) Project: Bupa.BPI.Documents Date of build: 2011-01-28 14:54:21 Running time: 00:00:05 Integration Request: Build (ForceBuild) triggered from VMOPBZDEV11 Solution The problem seems to be with a folder called TestLocations, which is created by the build process and used along with the file adapter as a way to get messages into BizTalk. For some reason the source control block, when it does a full refresh of the code, does not get rid of this folder, then complains that this is a problem and fails the build. Interestingly, there are other folders created by the build which are deleted fine. My assumption is that this is something to do with the file adapter polling the directory. However, note that we have not had this problem with other source control blocks in the past. To work around this I have added a prebuild task to the ccnet.config file to delete this folder before the source control block is executed. See below for an example:
      <prebuild>
        <exec>
          <executable>cmd.exe</executable>
          <buildArgs>/c "if exist "C:\<MyCode>\TestLocations" rd /s /q "C:\<MyCode>\TestLocations""</buildArgs>
        </exec>
      </prebuild>

    Read the article

  • Parent Objects

    - by Ali Bahrami
    Support for Parent Objects was added in Solaris 11 Update 1. The following material is adapted from the PSARC arc case, and the Solaris Linker and Libraries Manual. A "plugin" is a shared object, usually loaded via dlopen(), that is used by a program in order to allow the end user to add functionality to the program. Examples of plugins include those used by web browsers (flash, acrobat, etc), as well as mdb and elfedit modules. The object that loads the plugin at runtime is called the "parent object". Unlike most object dependencies, the parent is not identified by name, but by its status as the object doing the load. Historically, building a good plugin has been more complicated than it should be: A parent and its plugin usually share a 2-way dependency: The plugin provides one or more routines for the parent to call, and the parent supplies support routines for use by the plugin for things like memory allocation and error reporting. It is a best practice to build all objects, including plugins, with the -z defs option, in order to ensure that the object specifies all of its dependencies, and is self contained. However: The parent is usually an executable, which cannot be linked to via the usual library mechanisms provided by the link editor. Even if the parent is a shared object, which could be a normal library dependency to the plugin, it may be desirable to build plugins that can be used by more than one parent, in which case embedding a dependency NEEDED entry for one of the parents is undesirable. The usual way to build a high quality plugin with -z defs uses a special mapfile provided by the parent. This mapfile defines the parent routines, specifying the PARENT attribute (see example below). This works, but is inconvenient and error-prone. The symbol table in the parent already describes what it makes available to plugins — ideally the plugin would obtain that information directly rather than from a separate mapfile. The new -z parent option to ld allows a plugin to link to the parent and access the parent symbol table. This differs from a typical dependency: No NEEDED record is created. The relationship is recorded as a logical connection to the parent, rather than as an explicit object name. However, it operates in the same manner as any other dependency in terms of making symbols available to the plugin. When the -z parent option is used, the link-editor records the basename of the parent object in the dynamic section, using the new tag DT_SUNW_PARENT. This is an informational tag, which is not used by the runtime linker to locate the parent, but which is available for diagnostic purposes. The ld(1) manpage documentation for the -z parent option is: -z parent=object Specifies a "parent object", which can be an executable or shared object, against which to link the output object. This option is typically used when creating "plugin" shared objects intended to be loaded by an executable at runtime via the dlopen() function. The symbol table from the parent object is used to satisfy references from the plugin object. The use of the -z parent option makes symbols from the object calling dlopen() available to the plugin. Example For this example, we use a main program, and a plugin. The parent provides a function named parent_callback() for the plugin to call.
    The plugin provides a function named plugin_func() to the parent: % cat main.c #include <stdio.h> #include <dlfcn.h> #include <link.h> void parent_callback(void) { printf("plugin_func() has called parent_callback()\n"); } int main(int argc, char **argv) { typedef void plugin_func_t(void); void *hdl; plugin_func_t *plugin_func; if (argc != 2) { fprintf(stderr, "usage: main plugin\n"); return (1); } if ((hdl = dlopen(argv[1], RTLD_LAZY)) == NULL) { fprintf(stderr, "unable to load plugin: %s\n", dlerror()); return (1); } plugin_func = (plugin_func_t *) dlsym(hdl, "plugin_func"); if (plugin_func == NULL) { fprintf(stderr, "unable to find plugin_func: %s\n", dlerror()); return (1); } (*plugin_func)(); return (0); } % cat plugin.c #include <stdio.h> extern void parent_callback(void); void plugin_func(void) { printf("parent has called plugin_func() from plugin.so\n"); parent_callback(); } Building this in the traditional manner, without -zdefs: % cc -o main main.c % cc -G -o plugin.so plugin.c % ./main ./plugin.so parent has called plugin_func() from plugin.so plugin_func() has called parent_callback() As noted above, when building any shared object, the -z defs option is recommended, in order to ensure that the object is self contained and specifies all of its dependencies. However, the use of -z defs prevents the plugin object from linking due to the unsatisfied symbol from the parent object: % cc -zdefs -G -o plugin.so plugin.c Undefined first referenced symbol in file parent_callback plugin.o ld: fatal: symbol referencing errors. No output written to plugin.so A mapfile can be used to specify to ld that the parent_callback symbol is supplied by the parent object. % cat plugin.mapfile $mapfile_version 2 SYMBOL_SCOPE { global: parent_callback { FLAGS = PARENT }; }; % cc -zdefs -Mplugin.mapfile -G -o plugin.so plugin.c However, the -z parent option to ld is the most direct solution to this problem, allowing the plugin to actually link against the parent object, and obtain the available symbols from it. An added benefit of using -z parent instead of a mapfile is that the name of the parent object is recorded in the dynamic section of the plugin, and can be displayed by the file utility: % cc -zdefs -zparent=main -G -o plugin.so plugin.c % elfdump -d plugin.so | grep PARENT [0] SUNW_PARENT 0xcc main % file plugin.so plugin.so: ELF 32-bit LSB dynamic lib 80386 Version 1, parent main, dynamically linked, not stripped % ./main ./plugin.so parent has called plugin_func() from plugin.so plugin_func() has called parent_callback() We can also observe this in elfedit plugins on Solaris systems running Solaris 11 Update 1 or newer: % file /usr/lib/elfedit/dyn.so /usr/lib/elfedit/dyn.so: ELF 32-bit LSB dynamic lib 80386 Version 1, parent elfedit, dynamically linked, not stripped, no debugging information available Related Other Work: The GNU ld has an option named --just-symbols that can be used in a similar manner: --just-symbols=filename Read symbol names and their addresses from filename, but do not relocate it or include it in the output. This allows your output file to refer symbolically to absolute locations of memory defined in other programs. You may use this option more than once. -z parent is a higher level operation aimed specifically at simplifying the construction of high quality plugins. Although it employs the same operation, it differs from --just-symbols in 2 significant ways: There can only be one parent.
The parent is recorded in the created object, and can be displayed by 'file', or other similar tools.

    Read the article

  • How to install Eclipse J2EE IDE from a tarball?

    - by Silambarasan
    I have downloaded Eclipse 3.6.1 as a tar.gz file from the Eclipse site. Then I extracted it using the command: tar -zxvf eclipse-jee-helios-SR1-linux-gtk.tar.gz After executing this command I got an eclipse folder, and in it there is an eclipse file. When I double-click on this eclipse file I'm getting the following error: Could not display "/media/D-DEVELOPME/eclipse/eclipse". There is no application installed for executable files. Is there any solution for it?
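
    One hedged guess, since the file lives under /media: FAT/NTFS media is commonly mounted noexec, which produces exactly this "no application installed for executable files" message. Checking the mount flags and running from a native Linux filesystem may help:

      mount | grep D-DEVELOPME              # look for the noexec option
      cp -a /media/D-DEVELOPME/eclipse ~/   # copy the install to your home directory
      chmod +x ~/eclipse/eclipse
      ~/eclipse/eclipse &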

    Read the article

  • Presenting at SQLConnections!

    - by andyleonard
    Introduction This year I'm honored to present at SQLConnections in Orlando 27-30 Mar 2011! My topics are Database Design for Developers, Build Your First SSIS Package, and Introduction to Incremental Loads. Database Design for Developers This interactive session is for software developers tasked with database development. Attend and learn about patterns and anti-patterns of database development, one method for building re-executable Transact-SQL deployment scripts, a method for using SqlCmd to deploy...(read more)

    Read the article

  • How to select the first occurrence in the auto-completion menu by pressing Enter?

    - by janoChen
    Every time there is a pop-up menu, I select the first occurrence and press Enter, but nothing happens (the word is not completed with the selected occurrence). The only way is to press Tab until you reach the term for a second time. Is there a way of selecting the first occurrence by pressing Enter (or another Vim hotkey)? My .vimrc: " SHORTCUTS nnoremap <F4> :set filetype=html<CR> nnoremap <F5> :set filetype=php<CR> nnoremap <F3> :TlistToggle<CR> " press space to turn off highlighting and clear any message already displayed. nnoremap <silent> <Space> :nohlsearch<Bar>:echo<CR> " set buffers commands nnoremap <silent> <M-F8> :BufExplorer<CR> nnoremap <silent> <F8> :bn<CR> nnoremap <silent> <S-F8> :bp<CR> " open NERDTree with start directory: D:\wamp\www nnoremap <F9> :NERDTree /home/alex/www<CR> " open MRU nnoremap <F10> :MRU<CR> " open current file (silently) nnoremap <silent> <F11> :let old_reg=@"<CR>:let @"=substitute(expand("%:p"), "/", "\\", "g")<CR>:silent!!cmd /cstart <C-R><C-R>"<CR><CR>:let @"=old_reg<CR> " open current file in localhost (default browser) nnoremap <F12> :! start "http://localhost" file:///"%:p""<CR> " open Vim's default Explorer nnoremap <silent> <F2> :Explore<CR> nnoremap <C-F2> :%s/\.html/.php/g<CR> " REMAPPING " map leader to , let mapleader = "," " remap ` to ' nnoremap ' ` nnoremap ` ' " remap increment numbers nnoremap <C-kPlus> <C-A> " COMPRESSION function Js_css_compress () let cwd = expand('<afile>:p:h') let nam = expand('<afile>:t:r') let ext = expand('<afile>:e') if -1 == match(nam, "[\._]src$") let minfname = nam.".min.".ext else let minfname = substitute(nam, "[\._]src$", "", "g").".".ext endif if ext == 'less' if executable('lessc') cal system( 'lessc '.cwd.'/'.nam.'.'.ext.' &') endif else if filewritable(cwd.'/'.minfname) if ext == 'js' && executable('closure-compiler') cal system( 'closure-compiler --js '.cwd.'/'.nam.'.'.ext.' > '.cwd.'/'.minfname.' &') elseif executable('yuicompressor') cal system( 'yuicompressor '.cwd.'/'.nam.'.'.ext.' > '.cwd.'/'.minfname.' &') endif endif endif endfunction autocmd FileWritePost,BufWritePost *.js :call Js_css_compress() autocmd FileWritePost,BufWritePost *.css :call Js_css_compress() autocmd FileWritePost,BufWritePost *.less :call Js_css_compress() " GUI " taglist right side let Tlist_Use_Right_Window = 1 " hide tool bar set guioptions-=T "remove scroll bars set guioptions+=LlRrb set guioptions-=LlRrb " set the initial size of window set lines=46 columns=180 " set default font set guifont=Monospace " set guifont=Monospace\ 10 " show line number set number " set default theme colorscheme molokai-2 " encoding set encoding=utf-8 setglobal fileencoding=utf-8 bomb set fileencodings=ucs-bom,utf-8,latin1 " SCSS syntax highlight au BufRead,BufNewFile *.scss set filetype=scss " LESS syntax highlight syntax on au BufNewFile,BufRead *.less set filetype=less " Haml syntax highlight "au! BufRead,BufNewFile *.haml "setfiletype haml " Sass syntax highlight "au! BufRead,BufNewFile *.sass "setfiletype sass " set filetype indent filetype indent on " for snipMate to work filetype plugin on " show breaks set showbreak=-----> " coding format set tabstop=4 set shiftwidth=4 set linespace=1 " CONFIG " set location of ctags let Tlist_Ctags_Cmd='D:\ctags58\ctags.exe' " keep the buffer around when left set hidden " enable matchit plugin source $VIMRUNTIME/macros/matchit.vim " folding set foldmethod=marker set foldmarker={,} let g:FoldMethod = 0 map <leader>ff :call ToggleFold()<cr> fun!
ToggleFold() if g:FoldMethod == 0 exe 'set foldmethod=indent' let g:FoldMethod = 1 else exe 'set foldmethod=marker' let g:FoldMethod = 0 endif endfun " save and restore folds when a file is closed and re-opened "au BufWrite ?* mkview "au BufRead ?* silent loadview " auto-open NERDTree everytime Vim is invoked au VimEnter * NERDTree /home/alex/www " set omnicomplete autocmd FileType python set omnifunc=pythoncomplete#Complete autocmd FileType javascript set omnifunc=javascriptcomplete#CompleteJS autocmd FileType html set omnifunc=htmlcomplete#CompleteTags autocmd FileType css set omnifunc=csscomplete#CompleteCSS autocmd FileType xml set omnifunc=xmlcomplete#CompleteTags autocmd FileType php set omnifunc=phpcomplete#CompletePHP autocmd FileType c set omnifunc=ccomplete#Complete " Remove trailing white-space once the file is saved au BufWritePre * silent g/\s\+$/s/// " Use CTRL-S for saving, also in Insert mode noremap <C-S> :update!<CR> vnoremap <C-S> <C-C>:update!<CR> inoremap <C-S> <C-O>:update!<CR> " DEFAULT set nocompatible source $VIMRUNTIME/vimrc_example.vim "source $VIMRUNTIME/mswin.vim "behave mswin " disable creation of swap files set noswapfile " no back ups wwhile editing set nowritebackup " disable creation of backups set nobackup " no file change pop up warning autocmd FileChangedShell * echohl WarningMsg | echo "File changed shell." | echohl None set diffexpr=MyDiff() function MyDiff() let opt = '-a --binary ' if &diffopt =~ 'icase' | let opt = opt . '-i ' | endif if &diffopt =~ 'iwhite' | let opt = opt . '-b ' | endif let arg1 = v:fname_in if arg1 =~ ' ' | let arg1 = '"' . arg1 . '"' | endif let arg2 = v:fname_new if arg2 =~ ' ' | let arg2 = '"' . arg2 . '"' | endif let arg3 = v:fname_out if arg3 =~ ' ' | let arg3 = '"' . arg3 . '"' | endif let eq = '' if $VIMRUNTIME =~ ' ' if &sh =~ '\<cmd' let cmd = '""' . $VIMRUNTIME . '\diff"' let eq = '"' else let cmd = substitute($VIMRUNTIME, ' ', '" ', '') . '\diff"' endif else let cmd = $VIMRUNTIME . '\diff' endif silent execute '!' . cmd . ' ' . opt . arg1 . ' ' . arg2 . ' > ' . arg3 . eq endfunction
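
    The usual remedy (a hedged sketch, appended here via the shell for convenience) is to map Enter so that it accepts the highlighted popup-menu entry instead of inserting a newline:

      echo 'inoremap <expr> <CR> pumvisible() ? "\<C-y>" : "\<CR>"' >> ~/.vimrc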

    Read the article

  • Eclipse won't start on Ubuntu 12.10

    - by Rajat Saxena
    I've set up my Eclipse with all the plugins necessary for Android development on Ubuntu 12.04. Now I'm on Ubuntu 12.10 and Eclipse won't start. When I navigate to the Eclipse directory and issue ./eclipse in the command prompt, it says command not found. I've already chmodded +x the eclipse executable file. Just for the record, my eclipse directory is not in the home folder; it's on another partition of my hdd. Any ideas?
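
    A few hedged checks, since "command not found" on an executable that exists usually points at a missing loader or a noexec mount rather than at Eclipse itself:

      file ./eclipse                   # a 32-bit ELF on a 64-bit 12.10 install needs 32-bit libs
      sudo apt-get install ia32-libs   # 32-bit runtime (package name from that era)
      mount | grep /dev/sdXN           # substitute your partition; look for the noexec flag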

    Read the article

  • How can I make Eclipse work (bash permission)?

    - by Binarylife
    Well, I've tried to compile and install Eclipse 3.6.2 using the instructions in: How to update Eclipse 3.5.2 to 3.6.2?. But Eclipse doesn't open. user@s-HP-550:~$ eclipse bash: /usr/bin/eclipse: Permission denied Edit: user@s-HP-550:~$ ls -l /usr/bin/eclipse -rw-r--r-- 1 root root 70 2011-06-12 18:15 /usr/bin/eclipse user@s-HP-550:~$ file /usr/bin/eclipse /usr/bin/eclipse: POSIX shell script text executable
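
    The ls -l output shows the launcher script has no execute bit (-rw-r--r--), so the likely fix is simply:

      sudo chmod +x /usr/bin/eclipse
      eclipse &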

    Read the article

  • Using Transaction Logging to Recover Post-Archived Essbase data

    - by Keith Rosenthal
    Data recovery is typically performed by restoring data from an archive.  Data added or removed since the last archive took place can also be recovered by enabling transaction logging in Essbase.  Transaction logging works by writing transactions to a log store.  The information in the log store can then be recovered by replaying the log store entries in sequence since the last archive took place.  The following information is recorded within a transaction log entry: Sequence ID Username Start Time End Time Request Type A request type can be one of the following categories: Calculations, including the default calculation as well as both server and client side calculations Data loads, including data imports as well as data loaded using a load rule Data clears as well as outline resets Locking and sending data from SmartView and the Spreadsheet Add-In.  Changes from Planning web forms are also tracked since a lock and send operation occurs during this process. You can use the Display Transactions command in the EAS console or the query database MAXL command to view the transaction log entries. Enabling Transaction Logging Transaction logging can be enabled at the Essbase server, application or database level by adding the TRANSACTIONLOGLOCATION essbase.cfg setting.  The following is the TRANSACTIONLOGLOCATION syntax: TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE Note that you can have multiple TRANSACTIONLOGLOCATION entries in the essbase.cfg file.  For example: TRANSACTIONLOGLOCATION Hyperion/trlog NATIVE ENABLE TRANSACTIONLOGLOCATION Sample Hyperion/trlog NATIVE DISABLE The first statement will enable transaction logging for all Essbase applications, and the second statement will disable transaction logging for the Sample application.  As a result, transaction logging will be enabled for all applications except the Sample application. A location on a physical disk other than the disk where ARBORPATH or the disk files reside is recommended to optimize overall Essbase performance. Configuring Transaction Log Replay Although transaction log entries are stored based on the LOGLOCATION parameter of the TRANSACTIONLOGLOCATION essbase.cfg setting, copies of data load and rules files are stored in the ARBORPATH/app/appname/dbname/Replay directory to optimize the performance of replaying logged transactions.  The default is to archive client data loads, but this configuration setting can be used to archive server data loads (including SQL server data loads) or both client and server data loads. To change the type of data to be archived, add the TRANSACTIONLOGDATALOADARCHIVE configuration setting to the essbase.cfg file.  Note that you can have multiple TRANSACTIONLOGDATALOADARCHIVE entries in the essbase.cfg file to adjust settings for individual applications and databases. Replaying the Transaction Log and Transaction Log Security Considerations To replay the transactions, use either the Replay Transactions command in the EAS console or the alter database MAXL command using the replay transactions grammar.  Transactions can be replayed either after a specified log time or using a range of transaction sequence IDs. The default when replaying transactions is to use the security settings of the user who originally performed the transaction.  However, if that user no longer exists or that user's username was changed, the replay operation will fail. 
Instead of using the default security setting, add the REPLAYSECURITYOPTION essbase.cfg setting to use the security settings of the administrator who performs the replay operation.  REPLAYSECURITYOPTION 2 will explicitly use the security settings of the administrator performing the replay operation.  REPLAYSECURITYOPTION 3 will use the administrator security settings if the original user’s security settings cannot be used. Removing Transaction Logs and Archived Replay Data Load and Rules Files Transaction logs and archived replay data load and rules files are not automatically removed and are only removed manually.  Since these files can consume a considerable amount of space, the files should be removed on a periodic basis. The transaction logs should be removed one database at a time instead of all databases simultaneously.  The data load and rules files associated with the replayed transactions should be removed in chronological order from earliest to latest.  In addition, do not remove any data load and rules files with a timestamp later than the timestamp of the most recent archive file. Partitioned Database Considerations For partitioned databases, partition commands such as synchronization commands cannot be replayed.  When recovering data, the partition changes must be replayed manually and logged transactions must be replayed in the correct chronological order. If the partitioned database includes any @XREF commands in the calc script, the logged transactions must be selectively replayed in the correct chronological order between the source and target databases. References For additional information, please see the Oracle EPM System Backup and Recovery Guide.  For EPM 11.1.2.2, the link is http://docs.oracle.com/cd/E17236_01/epm.1112/epm_backup_recovery_1112200.pdf
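
    A hedged sketch of a replay run through the MAXL shell; the credentials, host, application/database name and sequence range below are all placeholders, and the exact essmsh flags should be checked against your release:

      # replay logged transactions for Sample.Basic since the last archive
      echo "alter database Sample.Basic replay transactions using sequence_id_range 1 to 20;" \
        | essmsh -l admin password -s localhost -i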

    Read the article

  • What can a Service do on Windows?

    - by Akemi Iwaya
    If you open up Task Manager or Process Explorer on your system, you will see many services running. But how much of an impact can a service have on your system, especially if it is ‘corrupted’ by malware? Today’s SuperUser Q&A post has the answers to a curious reader’s questions. Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader Forivin wants to know how much impact a service can have on a Windows system, especially if it is ‘corrupted’ by malware: What kind of malware/spyware could someone put into a service that does not have its own process on Windows? I mean services that use svchost.exe for example, like this: Could a service spy on my keyboard input? Take screenshots? Send and/or receive data over the internet? Infect other processes or files? Delete files? Kill processes? How much impact could a service have on a Windows installation? Are there any limits to what a malware ‘corrupted’ service could do? The Answer SuperUser contributor Keltari has the answer for us: What is a service? A service is an application, no more, no less. The advantage is that a service can run without a user session. This allows things like databases, backups, the ability to login, etc. to run when needed and without a user logged in. What is svchost? According to Microsoft: “svchost.exe is a generic host process name for services that run from dynamic-link libraries”. Could we have that in English please? Some time ago, Microsoft started moving all of the functionality from internal Windows services into .dll files instead of .exe files. From a programming perspective, this makes more sense for reusability…but the problem is that you can not launch a .dll file directly from Windows, it has to be loaded up from a running executable (exe). Thus the svchost.exe process was born. So, essentially a service which uses svchost is just calling a .dll and can do pretty much anything with the right credentials and/or permissions. If I remember correctly, there are viruses and other malware that do hide behind the svchost process, or name the executable svchost.exe to avoid detection. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • An Epic Question "How to call a method when the page loads"

    - by Arunkumar Ramamoorthy
    Quite often, there comes a question in OTN, with different subjects, all meaning "How to call a method when my ADF page loads?". More often, people tend to take the approach of ADF Phase Listener by overriding before/afterPhase methods. In this blog, we will go through different options in achieving it. 1. Method Call Activity as default activity in Taskflow: If the application is built with taskflows, then this is the best suited approach to take. 1.a. Calling a Data Control Method: To call a Data Control method (ex: a method in AMImpl exposed as client interface), simply drag and drop the method as Default Method Call Activity, then draw a control flow case from the method to your page. Once after this, drop the taskflow as region in main page. When we run the main page, the Method Call Activity would be called first, and then the page will be rendered. 1.b. Calling a Method in Backing Bean: To call a method in the backing bean before page load, we can follow the similar approach as above. Instead of binding the Method Call Activity to an action/method binding in pagedef, we bind to the method. Insert a Method Call Activity (and make it as default) from the Component Palette. Double click on it to select a method to bind. This approach can also be used to perform some action in backing bean along with calling a method Data Control (just need to add bindings code in backing bean to execute DC method). 2. Using invokeAction Executable: If the application is built with pages and no taskflows are involved, then this option can be taken into consideration. In the page definition of the page, add an invokeAction Executable and bind it to the method needed to be executed. 3. Using combination of Server and Client Listeners: If the page does not have any page definition, then to call a method in backing bean, this approach can be taken. In this, a serverListener would be added at the document level, which would be calling the method in backing bean. Along with this, a clientListener would be added with "load" type (i.e. it will be triggered when the page loads), which would queue a serverEvent to trigger the method. 4. Using Page Phase Listener: This should be the last resort. Care should be taken when using this approach since the Phase Listener would be called for each request sent by the client. Zeeshan Baig's blog covers this scenario.

    Read the article

  • ANTS Memory Profiler 7.0

    - by Sam Abraham
    In the next few lines, I would like to briefly review ANTS Memory Profiler 7.0. I was honored to be extended the opportunity to review this valuable tool as part of the GeeksWithBlogs Influencers Program, a quarterly award providing its recipients access to valuable tools and giving them an opportunity to provide a brief write-up reviewing the complimentary tools they receive. Typical Usage ANTS Memory Profiler 7.0 is very intuitive and easy to use for any user, be it novice or expert. A simple yet comprehensive menu screen enables the selection of the appropriate program type to profile as well as the executable or site for this program. A typical use case starts with establishing a baseline memory snapshot, which tells us the initial memory cost used by the program under normal or low activity conditions. We would then take a second snapshot after the program has performed an activity which we want to investigate for memory leaks. We can then compare the initial baseline snapshot against the snapshot taken when the program has completed processing the activity in question, to study anomalies in memory that did not get freed up after the program completed its function. The following are some screenshots outlining the selection of the program to profile (an executable for this demonstration’s purposes). Figure 1 - Getting Started Figure 2 - Selecting an Application to Profile Features and Options Right after the second snapshot is generated, Memory Profiler gives us immediate access to information on memory fragmentation, size differences between snapshots, unmanaged memory allocation and statistics on the largest classes taking up un-freed memory space. We would also have the option to itemize objects held in memory grouped by object type, within which we can study the instances allocated of each type. Filtering options enable us to quickly narrow down the object instances we are interested in. Figure 3 - Easily accessible Execution Memory Information Figure 4 - Class List Figure 5 - Instance List Figure 6 - Retention Graph for a Particular Instance Conclusion I greatly enjoyed the opportunity to evaluate ANTS Memory Profiler 7.0. The tool's intuitive user interface design and easily accessible menu options enabled me to quickly identify problem areas where memory was left unfreed in my code. Tutorials and References Find out more about ANTS Memory Profiler 7.0: http://www.red-gate.com/supportcenter/Product?p=ANTS Memory Profiler Check out what other reviewers of this valuable tool have already shared: http://geekswithblogs.net/BlackRabbitCoder/archive/2011/03/10/ants-memory-profiler-7.0.aspx http://geekswithblogs.net/mikebmcl/archive/2011/02/28/ants-memory-profiler-7.0-review.aspx

    Read the article

  • How could I shutdown, over my network, with one click?

    - by DeLiK
    The question is simple. What would be the script I would have to use to shut down a computer on my network through ssh? Normally I would go to the command line and: ssh desktop delik@desktop's password: delik@desktop:~$ sudo shutdown -P 0 To power on, I created a file, wrote: wakeonlan xx:xx:xx:xx:xx:xx and gave it the executable bit. That way, powering on requires only a double-click. Would I be able to do the same to shut down?
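
    A minimal sketch of the shutdown counterpart, assuming key-based ssh login and a sudoers rule letting the remote user run shutdown without a password (e.g. "delik ALL=(root) NOPASSWD: /sbin/shutdown"):

      #!/bin/bash
      # one-click remote power-off, the mirror of the wakeonlan file
      ssh delik@desktop 'sudo shutdown -P now'

    Save it next to the wakeonlan file, set the executable bit, and a double-click shuts the machine down the same way.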

    Read the article

  • Debugging HLSL for Windows 8 application [migrated]

    - by Shervanator
    I'm currently in the process of creating a Windows 8 application using SharpDX (the managed C# DirectX wrapper). However, I have run into problems with one of my shaders and I want to know if it's possible to debug such applications. PIX doesn't seem to work on DirectX apps whose executable does not open directly, and the new Visual Studio graphics debugging toolkit in VS2012 always states "unable to start the experiment" when I try to capture any information about my session. Thanks!

    Read the article

  • Dell bios nightmare on optiplex

    - by Minster
    I own a Dell Optiplex GX260 PC with BIOS revision A03 and want to go higher, like A09. I tried the Dell support page's downloader using the Wine program; after a little configuring and setup, the InstallShield Wizard gives me the message "unable to obtain required information about your system..setup cannot complete". The PC doesn't have a floppy diskette drive any more, and the other option, using the download manager, is only executable via Windows Internet Explorer. I don't blame Wine or anybody else, but this is frustrating.

    Read the article

  • computer:// in gksudo nautilus

    - by MetaChrome
    I am curious what computer:// is, in terms of its implementation on the file system/in the nautilus executable/as a configuration provided to nautilus? Perhaps it is a configurable group (of paths) for nautilus, configurable on a user basis. The reason I ask is because it is not accessible with root's nautilus. If #1 is correct, how does one create computer:// and/or how does one create such path groups?
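
    For what it's worth, computer:// is not a filesystem path at all; it is a GVFS virtual location served by the per-user gvfs daemons, which is one hedged explanation for why root's nautilus (launched via gksudo, outside your session) cannot see it. From the owning session it can be inspected directly:

      gvfs-ls computer:///      # list the drives/volumes the virtual folder aggregates
      nautilus computer:///     # open the same location explicitly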

    Read the article

< Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >