Search Results

Search found 17876 results on 716 pages for 'ejb jar xml'.

Page 362/716 | < Previous Page | 358 359 360 361 362 363 364 365 366 367 368 369  | Next Page >

  • Jack Audio on Ubuntu 12.10

    - by Shaneo1
    I used to have the JACK server working with 10.10, 11.04, and 11.10, but not with 12.04 and now 12.10. I have installed jackd, jackd2, and qjackctl, surfed many forums, and have even given advice on how to get JACK working, but now I am stuck.
        Tue Nov 27 22:30:46 2012: Saving settings to "/home/shane/.config/jack/conf.xml" ...
        22:31:19.960 D-BUS: JACK server could not be started. Sorry
        Cannot connect to server socket err = No such file or directory
        Cannot connect to server request channel
        jack server is not running or cannot be started
        Tue Nov 27 22:31:19 2012: Starting jack server...
        Tue Nov 27 22:31:19 2012: JACK server starting in realtime mode with priority 10
        Tue Nov 27 22:31:19 2012: ERROR: cannot register object path "/org/freedesktop/ReserveDevice1/Audio0": A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0
        Tue Nov 27 22:31:19 2012: ERROR: Failed to acquire device name : Audio0 error : A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0
        Tue Nov 27 22:31:19 2012: ERROR: Audio device hw:0,0 cannot be acquired...
        Tue Nov 27 22:31:19 2012: ERROR: Cannot initialize driver
        Tue Nov 27 22:31:19 2012: ERROR: JackServer::Open failed with -1
        Tue Nov 27 22:31:19 2012: ERROR: Failed to open server
        Tue Nov 27 22:31:21 2012: Saving settings to "/home/shane/.config/jack/conf.xml" ...
        22:31:22.047 Could not connect to JACK server as client. - Overall operation failed. - Unable to connect to server. Please check the messages window for more info.
        Cannot connect to server socket err = No such file or directory
        Cannot connect to server request channel
        jack server is not running or cannot be started
    Can anyone assist?

    Read the article

  • Running a simple integration scenario using the Oracle Big Data Connectors on Hadoop/HDFS cluster

    - by hamsun
    Between the elephant (the traditional image of the Hadoop framework) and the Oracle Iron Man (Big Data), an English setter could be seen as the link to the right data. Data, data, data: we live in a world where data technologies based on popular applications, search engines, web servers, rich SMS messages, email clients, weather forecasts, and so on play a predominant role in our lives. More and more technologies are used to analyze and track our behavior, to detect patterns, and to propose "the best/right user experience", from the Google Ad services to telco companies and large consumer sites (like Amazon). The more we use all these technologies, the more data we generate, and thus there is a need for huge data marts and specific hardware/software servers (such as the Exadata servers) in order to treat, analyze, and understand the trends and offer new services to users. Some of these "data feeds" are raw, unstructured data and cannot be processed effectively by normal SQL queries. Large-scale distributed processing was an emerging infrastructure need, and the solution seemed to be the "collocation of compute nodes with the data", which in turn led to MapReduce parallel patterns and the development of the Hadoop framework, which is based on MapReduce and a distributed file system (HDFS) that runs on large clusters of rather inexpensive servers. Several Oracle products use this distributed/aggregation pattern for data calculation (Coherence, NoSQL, TimesTen), so once you are familiar with one of these technologies, let's say Coherence aggregators, you will find the whole Hadoop/MapReduce concept very similar. Oracle Big Data Appliance is based on the Cloudera Distribution (CDH), and the Oracle Big Data Connectors can be plugged into a Hadoop cluster running CDH or an equivalent Hadoop distribution. In this paper, a "lab like" implementation of this concept is done on a single Linux x64 server running Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 and a single-node Apache hadoop-1.2.1 HDFS cluster, using the SQL Connector for HDFS. The whole setup is fairly simple:
    1. Install Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 on a Linux x64 server (or a VirtualBox appliance).
    2. Get the Apache Hadoop distribution from: http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-1.2.1.
    3. Get the Oracle Big Data Connectors from: http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/index.html?ssSourceSiteId=ocomen.
    4. Check the Java version of your Linux server:
        java -version
        java version "1.7.0_40"
        Java(TM) SE Runtime Environment (build 1.7.0_40-b43)
        Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode)
    5. Decompress the hadoop-1.2.1.tar.gz file to /u01/hadoop-1.2.1.
    6. Modify your .bash_profile (also see my sample .bash_profile):
        export HADOOP_HOME=/u01/hadoop-1.2.1
        export PATH=$PATH:$HADOOP_HOME/bin
        export HIVE_HOME=/u01/hive-0.11.0
        export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin
    7. Set up ssh trust for the Hadoop process. This is a mandatory step; in our case we have to establish a "local trust", as we are using a single-node configuration: copy the new public keys to the list of authorized keys, then connect and test the ssh setup against your localhost.
    We will run a "pseudo-Hadoop cluster" in what is called "local standalone mode": all the Hadoop Java components run in a single Java process, which is enough for our demo purposes.
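    The configuration step below declares the HDFS mount point in core-site.xml. As a point of reference, a minimal hadoop-1.x core-site.xml for this kind of single-node lab could look like the following sketch; the name-node port matches the hdfs://localhost:19000 URIs shown later in this walkthrough and the data path is the mount point used here, but treat the exact values as assumptions to adapt to your own layout:
        <!-- Hedged sketch of $HADOOP_HOME/conf/core-site.xml for a hadoop-1.x single-node lab.
             fs.default.name sets the name-node URI; hadoop.tmp.dir sets the base data directory.
             The port and path are inferred from this walkthrough, not copied from the original article. -->
        <configuration>
            <property>
                <name>fs.default.name</name>
                <value>hdfs://localhost:19000</value>
            </property>
            <property>
                <name>hadoop.tmp.dir</name>
                <value>/u01/hadoop-1.2.1/data</value>
            </property>
        </configuration>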
    We need to "fine tune" some Hadoop configuration files: go to $HADOOP_HOME/conf and modify the files core-site.xml, hdfs-site.xml, and mapred-site.xml. Check that the Hadoop binaries are referenced correctly from the command line by executing: hadoop -version. As Hadoop manages our "clustered HDFS" file system, we have to create "the mount point" and format it; the mount point is declared in core-site.xml. The layout under /u01/hadoop-1.2.1/data will be created and used by the other Hadoop components (MapReduce uses the /mapred/... layout; HDFS uses the /dfs/... layout). Format the HDFS file system, then start the Java components for the HDFS system. As an additional check, you can use the Hadoop GUI browsers to inspect the content of your HDFS configuration. Once our HDFS Hadoop setup is done, you can use the HDFS file system to store data (big data!) and move it back and forth to Oracle databases by means of the Big Data Connectors (which is the next configuration step). You could create and use a Hive db, but in our case we will make a simple integration of "raw data" through the creation of an external table against a local Oracle instance (on the same Linux box we run the one-node Hadoop HDFS cluster and one Oracle DB). Download some public "big data"; I use the site http://france.meteofrance.com/france/observations, from which I can get *.csv files for my big data simulations. Download the Big Data Connector from OTN (oraosch-2.2.0.zip) and unzip it to your local file system. Modify your environment in order to access the connector libraries, and make the following test:
        [oracle@dg1 bin]$ ./hdfs_stream
        Usage: hdfs_stream locationFile
        [oracle@dg1 bin]$
    Load the data into the Hadoop HDFS file system:
        hadoop fs -mkdir bgtest_data
        hadoop fs -put obsFrance.txt bgtest_data/obsFrance.txt
        hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt
        [oracle@dg1 bg-data-raw]$ hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt
        Found 1 items
        -rw-r--r--   1 oracle supergroup      54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt
        [oracle@dg1 bg-data-raw]$ hadoop fs -ls hdfs:///user/oracle/bgtest_data/obsFrance.txt
        Found 1 items
        -rw-r--r--   1 oracle supergroup      54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt
    Check the content of HDFS with the browser UI. Start the Oracle database and run the following script in order to create the Oracle database user and the Oracle directories for the Oracle Big Data Connector (dg1 is my own db id; replace it with yours):
        #!/bin/bash
        export ORAENV_ASK=NO
        export ORACLE_SID=dg1
        . oraenv
        sqlplus /nolog <<EOF
        CONNECT / AS sysdba;
        CREATE OR REPLACE DIRECTORY osch_bin_path AS '/u01/orahdfs-2.2.0/bin';
        CREATE USER BGUSER IDENTIFIED BY oracle;
        GRANT CREATE SESSION, CREATE TABLE TO BGUSER;
        GRANT EXECUTE ON sys.utl_file TO BGUSER;
        GRANT READ, EXECUTE ON DIRECTORY osch_bin_path TO BGUSER;
        CREATE OR REPLACE DIRECTORY BGT_LOG_DIR AS '/u01/BG_TEST/logs';
        GRANT READ, WRITE ON DIRECTORY BGT_LOG_DIR TO BGUSER;
        CREATE OR REPLACE DIRECTORY BGT_DATA_DIR AS '/u01/BG_TEST/data';
        GRANT READ, WRITE ON DIRECTORY BGT_DATA_DIR TO BGUSER;
        EOF
    Put the following in a file named t3.sh and make it executable:
        hadoop jar $OSCH_HOME/jlib/orahdfs.jar \
        oracle.hadoop.exttab.ExternalTable \
        -D oracle.hadoop.exttab.tableName=BGTEST_DP_XTAB \
        -D oracle.hadoop.exttab.defaultDirectory=BGT_DATA_DIR \
        -D oracle.hadoop.exttab.dataPaths="hdfs:///user/oracle/bgtest_data/obsFrance.txt" \
        -D oracle.hadoop.exttab.columnCount=7 \
        -D oracle.hadoop.connection.url=jdbc:oracle:thin:@//localhost:1521/dg1 \
        -D oracle.hadoop.connection.user=BGUSER \
        -D oracle.hadoop.exttab.printStackTrace=true \
        -createTable --noexecute
    Then test the creation of the external table with it:
        [oracle@dg1 samples]$ ./t3.sh
        ./t3.sh: line 2: /u01/orahdfs-2.2.0: Is a directory
        Oracle SQL Connector for HDFS Release 2.2.0 - Production
        Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved.
        Enter Database Password:
        The create table command was not executed.
        The following table would be created.
        CREATE TABLE "BGUSER"."BGTEST_DP_XTAB"
        (
          "C1" VARCHAR2(4000),
          "C2" VARCHAR2(4000),
          "C3" VARCHAR2(4000),
          "C4" VARCHAR2(4000),
          "C5" VARCHAR2(4000),
          "C6" VARCHAR2(4000),
          "C7" VARCHAR2(4000)
        )
        ORGANIZATION EXTERNAL
        (
          TYPE ORACLE_LOADER
          DEFAULT DIRECTORY "BGT_DATA_DIR"
          ACCESS PARAMETERS
          (
            RECORDS DELIMITED BY 0X'0A'
            CHARACTERSET AL32UTF8
            STRING SIZES ARE IN CHARACTERS
            PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
            FIELDS TERMINATED BY 0X'2C'
            MISSING FIELD VALUES ARE NULL
            (
              "C1" CHAR(4000),
              "C2" CHAR(4000),
              "C3" CHAR(4000),
              "C4" CHAR(4000),
              "C5" CHAR(4000),
              "C6" CHAR(4000),
              "C7" CHAR(4000)
            )
          )
          LOCATION
          (
            'osch-20131022081035-74-1'
          )
        ) PARALLEL REJECT LIMIT UNLIMITED;
        The following location files would be created.
        osch-20131022081035-74-1 contains 1 URI, 54103 bytes
              54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt
    Then remove the --noexecute flag and create the external Oracle table for the Hadoop data. Check the results: the create table command succeeded, and the table was created with the same definition as shown above, this time with LOCATION ('osch-20131022081719-3239-1'). The following location files were created.
        osch-20131022081719-3239-1 contains 1 URI, 54103 bytes
              54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt
    You can view the result in SQL Developer; finally, here is the number of lines in the Oracle table, imported from our Hadoop HDFS cluster:
        SQL> select count(*) from "BGUSER"."BGTEST_DP_XTAB";
          COUNT(*)
        ----------
              1151
    In a future post we will integrate data from a Hive database and try some ODI integrations with the ODI Big Data connector. Our simplistic approach is just a first step to show you how this unstructured-data world can be integrated with the Oracle infrastructure. Hadoop, Big Data, and NoSQL are great technologies; they are widely used, and Oracle offers a large integration infrastructure based on these services. Oracle University presents a complete curriculum on all the related Oracle technologies:
    NoSQL:
        Introduction to Oracle NoSQL Database
        Using Oracle NoSQL Database
    Big Data:
        Introduction to Big Data
        Oracle Big Data Essentials
        Oracle Big Data Overview
    Oracle Data Integrator:
        Oracle Data Integrator 12c: New Features
        Oracle Data Integrator 11g: Integration and Administration
        Oracle Data Integrator: Administration and Development
        Oracle Data Integrator 11g: Advanced Integration and Development
    Oracle Coherence 12c:
        Oracle Coherence 12c: New Features
        Oracle Coherence 12c: Share and Manage Data in Clusters
    Oracle GoldenGate 11g:
        Oracle GoldenGate 11g: Fundamentals for Oracle
        Oracle GoldenGate 11g: Fundamentals for SQL Server
        Oracle GoldenGate 11g: Fundamentals for DB2
        Oracle GoldenGate 11g: Fundamentals for Teradata
        Oracle GoldenGate 11g: Fundamentals for HP NonStop
        Oracle GoldenGate 11g Management Pack: Overview
        Oracle GoldenGate 11g: Troubleshooting and Tuning
        Oracle GoldenGate 11g: Advanced Configuration for Oracle
    Other resources: Apache Hadoop (http://hadoop.apache.org/) is the homepage for these technologies, "Hadoop: The Definitive Guide, 3rd Edition" by Tom White is classic reading for people who want to know more about Hadoop, and some active "googling" will also give you more references.
    About the author: Eugene Simos is based in France and joined Oracle through the BEA/WebLogic acquisition, where he worked in Professional Services, Support, and Education for major accounts across the EMEA region. He has worked in the banking sector and with AT&T and other telco companies, which gave him extensive experience with production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses (WebLogic/WebCenter, Content, BPM, SOA, Identity/Security, GoldenGate, Virtualization, Unified Communications Suite) throughout the EMEA region.

    Read the article

  • Convert bat to sh

    - by Cris
    I am totally new to scripting in Linux, so I want to port some simple Windows bat files to Ubuntu. The first file is easy:
        setenv.bat
        set ANT_HOME=c:\ant\apache-ant-1.7.1
        set JAVA_HOME=c:\java
    In Linux I did this and it seems OK:
        setenv.sh
        #!/bin/bash
        JAVA_HOME=/usr/lib/jvm/java-6-sun-1.6.0.24/
        ANT_HOME=/usr/share/ant
        echo $JAVA_HOME
        echo $ANT_HOME
    But now I want to port this bat file:
        startserver.bat
        call ../config/setenv
        call %ANT_HOME%/bin/ant -f ../config/common.xml start_db
        call %ANT_HOME%/bin/ant -f ../config/common.xml start_server
        pause
    But I have no clue how to do "call ../config/setenv" in Linux. Thank you for any help or direction given.

    Read the article

  • [Resolved] Finishing the install of php-xmlrpc on a VPS

    - by wp
    Hi, please help if possible:
    1) I was able to completely install php-xmlrpc on a different VPS, which uses the lxAdmin control panel, without even needing to rebuild PHP.
    2) On a VPS with DirectAdmin, I followed detailed instructions (found at the DA site); this included rebuilding PHP, and after a reboot, xmlrpc still doesn't show up in phpinfo.php.
    Details: CentOS 5.3, PHP 5.2.10. php-xmlrpc is installed on the VPS, and the installation "success" was confirmed at the time. Several days later, PHP was rebuilt following the detailed instructions (for adding extra modules) provided by DirectAdmin at their site. In the end, xmlrpc still doesn't show up in phpinfo.php. Does anyone know how to make this work with DirectAdmin? Thank you.

    Read the article

  • Set Error-Pages for all Applications in Tomcat

    - by user38511
    I'm trying to set up custom error pages in Tomcat 6, because I don't want the default ones to show up. My error pages are static HTML, no JSP or anything dynamic. I know how to do this through the web.xml in each application, but I'd prefer to set up the error pages only once for the entire server. I tried to add the following fragment to the global web.xml (in conf), but no matter what I put under location, it does not show:
        <error-page>
            <error-code>404</error-code>
            <location>/404.html</location>
        </error-page>
    What do I need to do to globally define custom error pages? Thanks!

    Read the article

  • yum issue - error msg

    - by Monkey
    I am using Oracle Linux Server 6.2 and yum does not work. A manual wget was already used, per https://blogs.oracle.com/OTNGarage/entry/how_to_subscribe_to_the . There is always something about Dropbox:
        yum update firefox
        Loaded plugins: refresh-packagekit, security
        http://linux.dropbox.com/fedora/6Server/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404"
        Trying other mirror.
        Error: Cannot retrieve repository metadata (repomd.xml) for repository: Dropbox. Please verify its path and try again
    Does anybody know a workaround?

    Read the article

  • Welcome files are not loaded! Need help with Railo, mappings and J2EE configuration!

    - by mrt181
    I have installed a J2EE server (tried it with GlassFish 3, Tomcat 6, and Resin 4) on Win7 64-bit and deployed Railo 3.1. I then added a virtual host to the J2EE server, e.g. for Resin:
        <host host-name="railo">
        C:/resin/webapps/railo
    In the Railo admin I added this mapping:
        Virtual: /    Physical: C:/webapps/
    When I access http://railo:8080/, my index.cfm welcome file in C:/webapps/ is loaded (index.cfm is defined in Railo's web.xml). When I try to access http://railo:8080/test, which contains the same index.cfm, I get a 500 Servlet Exception: java.io.FileNotFoundException: C:\webapps\test (access denied) (on all the J2EE servers I have tried so far). http://railo:8080/test/index.cfm works fine. I already tried adding index.cfm to Resin's welcome-file-list in app-default.xml, to no avail. I want to be able to access deployed apps without this URL: http://localhost:8080/app/. Instead I want to use this: http://app:8080/

    Read the article

  • Caching NHibernate Named Queries

    - by TStewartDev
    I recently started a new job and one of my first tasks was to implement a "popular products" design. The parameters were that it be done with NHibernate and be cached for 24 hours at a time, because the query will be pretty taxing and the results do not need to be constantly up to date. This ended up being tougher than it sounds. The database schema meant a minimum of four joins with filtering and ordering criteria. I decided to use a stored procedure rather than letting NHibernate create the SQL for me. Here is a summary of what I learned (even if I didn't ultimately use all of it):
    - You can't, at the time of this writing, use Fluent NHibernate to configure SQL named queries or imports
    - You can return persistent entities from a stored procedure, and there are a couple of ways to do that
    - You can populate POCOs using the results of a stored procedure, but it isn't quite as obvious
    - You can reuse your named query result mapping other places (avoid duplication)
    - Caching your query results is not at all obvious
    - Testing to see if your cache is working is a pain
    NHibernate does a lot of things right. Having unified, up-to-date, comprehensive, and easy-to-find documentation is not one of them. By the way, if you're new to this, I'll use the terms "named query" and "stored procedure" (from NHibernate's perspective) fairly interchangeably. Technically, a named query can execute any SQL, not just a stored procedure, and a stored procedure doesn't have to be executed from a named query, but for reusability, it seems to me like the best practice. If you're here, chances are good you're looking for answers to a similar problem. You don't want to read about the path, you just want the result. So, here's how to get this thing going.
    The Stored Procedure
    NHibernate has some guidelines when using stored procedures. For Microsoft SQL Server, you have to return a result set. The scalar value that the stored procedure returns is ignored, as are any result sets after the first. Other than that, it's nothing special.
        CREATE PROCEDURE GetPopularProducts
            @StartDate DATETIME,
            @MaxResults INT
        AS
        BEGIN
            SELECT [ProductId], [ProductName], [ImageUrl]
            FROM SomeTableWithJoinsEtc
        END
    The Result Class - PopularProduct
    You have two options to transport your query results to your view (or wherever their final destination is): you can populate an existing mapped entity class in your model, or you can create a new entity class. If you go with the existing model, the advantage is that the query will act as a loader and you'll get full proxied access to the domain model. However, this can be a disadvantage if you require access to related entities that aren't loaded by your results. For example, my PopularProduct has image references. Unless I tie them into the query (thus making it even more complicated and expensive to run), they'll have to be loaded on access, requiring more trips to the database. Since we're trying to avoid trips to the database by using a second-level cache, we should use the second option, which is to create a separate entity for results. This approach is (I believe) in the spirit of the Command-Query Separation principle, and it allows us to flatten our data and optimize our report-generation process from data source to view.
        public class PopularProduct
        {
            public virtual int ProductId { get; set; }
            public virtual string ProductName { get; set; }
            public virtual string ImageUrl { get; set; }
        }
    The NHibernate Mappings (hbm)
    Next up, we need to let NHibernate know about the query and where the results will go.
    Below is the markup for the PopularProduct class. Notice that I'm using the <resultset> element and that it has a name attribute. The name allows us to drop this into our query map and any others, giving us reusability. Also notice the <import> element, which lets NHibernate know about our entity class.
        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
            <import class="PopularProduct, Infrastructure.NHibernate, Version=1.0.0.0"/>
            <resultset name="PopularProductResultSet">
                <return-scalar column="ProductId" type="System.Int32"/>
                <return-scalar column="ProductName" type="System.String"/>
                <return-scalar column="ImageUrl" type="System.String"/>
            </resultset>
        </hibernate-mapping>
    And now the PopularProductsMap:
        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
            <sql-query name="GetPopularProducts" resultset-ref="PopularProductResultSet" cacheable="true" cache-mode="normal">
                <query-param name="StartDate" type="System.DateTime" />
                <query-param name="MaxResults" type="System.Int32" />
                exec GetPopularProducts @StartDate = :StartDate, @MaxResults = :MaxResults
            </sql-query>
        </hibernate-mapping>
    The two most important things to notice here are the resultset-ref attribute, which links in our resultset mapping, and the cacheable attribute.
    The Query Class – PopularProductsQuery
    So far, this has been fairly obvious if you're familiar with NHibernate. This next part, maybe not so much. You can implement your query however you want to; for me, I wanted a self-encapsulated query class, so here's what it looks like:
        public class PopularProductsQuery : IPopularProductsQuery
        {
            private static readonly IResultTransformer ResultTransformer;
            private readonly ISessionBuilder _sessionBuilder;

            static PopularProductsQuery()
            {
                ResultTransformer = Transformers.AliasToBean<PopularProduct>();
            }

            public PopularProductsQuery(ISessionBuilder sessionBuilder)
            {
                _sessionBuilder = sessionBuilder;
            }

            public IList<PopularProduct> GetPopularProducts(DateTime startDate, int maxResults)
            {
                var session = _sessionBuilder.GetSession();
                var popularProducts = session
                    .GetNamedQuery("GetPopularProducts")
                    .SetCacheable(true)
                    .SetCacheRegion("PopularProductsCacheRegion")
                    .SetCacheMode(CacheMode.Normal)
                    .SetReadOnly(true)
                    .SetResultTransformer(ResultTransformer)
                    .SetParameter("StartDate", startDate.Date)
                    .SetParameter("MaxResults", maxResults)
                    .List<PopularProduct>();

                return popularProducts;
            }
        }
    Okay, so let's look at each line of the query execution. The first, GetNamedQuery, matches up with our NHibernate mapping for the sql-query. Next, we set it as cacheable (this is probably redundant since our mapping also specified it, but it can't hurt, right?). Then we set the cache region, which we'll get to in the next section. Set the cache mode (optional, I believe), and my cache is read-only, so I set that as well. The result transformer is very important. This tells NHibernate how to transform your query results into a non-persistent entity. You can see I've defined ResultTransformer in the static constructor using the AliasToBean transformer. The name is obviously leftover from Java/Hibernate. Finally, set your parameters and then call a result method which will execute the query. Because this is set to cached, you execute this statement every time you run the query and NHibernate will know based on your parameters whether to use its cached version or a fresh version.
    The Configuration – hibernate.cfg.xml and Web.config
    You need to explicitly enable second-level caching in your hibernate configuration:
        <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
            <session-factory>
                [...]
                <property name="dialect">NHibernate.Dialect.MsSql2005Dialect</property>
                <property name="cache.provider_class">NHibernate.Caches.SysCache.SysCacheProvider,NHibernate.Caches.SysCache</property>
                <property name="cache.use_query_cache">true</property>
                <property name="cache.use_second_level_cache">true</property>
                [...]
            </session-factory>
        </hibernate-configuration>
    Both properties "use_query_cache" and "use_second_level_cache" are necessary. As this is for a web deployment, we're using SysCache, which relies on ASP.NET's caching. Be aware of this if you're not deploying to the web! You'll have to use a different cache provider. We also need to tell our cache provider (in this case, SysCache) about our caching region:
        <syscache>
            <cache region="PopularProductsCacheRegion" expiration="86400" priority="5" />
        </syscache>
    Here I've set the cache to be valid for 24 hours. This XML snippet goes in your Web.config (or in a separate file referenced by Web.config, which helps keep things tidy).
    The Payoff
    That should be it! At this point, your queries should run once against the database for a given set of parameters and then use the cache thereafter until it expires. You can, of course, adjust settings to work in your particular environment.
    Testing
    Testing your application to ensure it is using the cache is a pain, but if you're like me, you want to know that it's actually working. It's a bit involved, though, so I'll create a separate post for it if comments indicate there is interest.
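    One hedged addendum to the configuration above: for the <syscache> element in Web.config to be recognized, the section has to be declared in <configSections>. Here is a sketch of that registration; the assembly-qualified type name is the standard handler shipped with NHibernate.Caches.SysCache, but verify it against the version you deploy:
        <!-- Hedged sketch: registering the syscache section in Web.config so the
             <syscache> block above is parsed. Verify the type against your NHibernate.Caches version. -->
        <configSections>
            <section name="syscache"
                     type="NHibernate.Caches.SysCache.SysCacheSectionHandler, NHibernate.Caches.SysCache" />
        </configSections>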

    Read the article

  • Why Standards Only Get You So Far

    - by Tim Murphy
    Over the years I have been exposed to a number of standards. EDI was the first. More recently it has been the CIECA standard for insurance, and now the embattled document standards of Open XML and ODF. Standards actually came up at the last CAG meeting. The debate was over how effective they really are. Even back in the late '80s and early '90s, people found they had to customize these standards to get any work done. I even had one vendor tell me about a year ago that they really weren't standards, they were more of a guideline. The problem is that standards are created either by committee or by companies trying to sell a product. They never fit all situations. This is why most of them leave extension points in their definition. Of course, if you use those extension points, everyone has to have custom code to know how to consume the new product. Standards increase reliability, but they stifle innovation and slow the time-to-market cycle of products. In this age of ever-shortening windows of opportunity, that could mean a company loses its competitive advantage. I believe that standards are not only good, but essential. I also believe that they are not a silver bullet. People who turn competing standards into a type of holy war are really missing the point. I think we should make the best standards we can, whether that is for a product so that customers can use its API, or by committee so that they work across products. But they also need to be as feature-rich and flexible as possible. They can't be just the lowest common denominator, since that type of standard will be broken the day it is published. In the end, though, the market will vote with its dollars.

    Read the article

  • Cross-submission robots.txt for multiple domains on single host

    - by sidd.darko
    We are running a site with multiple languages hosted in a single environment on IIS7. For example:
        oursite.com - English
        oursite.de - German
        oursite.es - Spanish
    This is a single-host environment: all of these sites are in the same application space on the same physical machine. I need to do cross-submission of sitemaps via robots.txt. The sitemaps.org guidelines suggest this is possible, but the example indicates different physical machines. Will the following entries in oursite.com/robots.txt work?
        http://www.oursite.com/sitemap-oursite-de.xml
        http://www.oursite.com/sitemap-oursite-es.xml

    Read the article

  • yum install php-tidy - no more mirrors

    - by Lylo
    Hi, I'm trying to get the tidy extension installed on CentOS running PHP 5.3. Thanks.
        Downloading Packages:
        http://repo.webtatic.com/yum/centos/5/x86_64/php-tidy-5.3.5-1.w5.x86_64.rpm: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://repo.webtatic.com/yum/centos/5/x86_64/php-pdo-5.3.5-1.w5.x86_64.rpm: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://repo.webtatic.com/yum/centos/5/x86_64/php-mysql-5.3.5-1.w5.x86_64.rpm: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://repo.webtatic.com/yum/centos/5/x86_64/php-gd-5.3.5-1.w5.x86_64.rpm: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://repo.webtatic.com/yum/centos/5/x86_64/php-xml-5.3.5-1.w5.x86_64.rpm: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://repo.webtatic.com/yum/centos/5/x86_64/php-common-5.3.5-1.w5.x86_64.rpm: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://repo.webtatic.com/yum/centos/5/x86_64/php-devel-5.3.5-1.w5.x86_64.rpm: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://repo.webtatic.com/yum/centos/5/x86_64/php-5.3.5-1.w5.x86_64.rpm: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://repo.webtatic.com/yum/centos/5/x86_64/php-cli-5.3.5-1.w5.x86_64.rpm: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        Error Downloading Packages:
        php-tidy-5.3.5-1.w5.x86_64: failure: php-tidy-5.3.5-1.w5.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try.
        php-cli-5.3.5-1.w5.x86_64: failure: php-cli-5.3.5-1.w5.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try.
        php-5.3.5-1.w5.x86_64: failure: php-5.3.5-1.w5.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try.
        php-pdo-5.3.5-1.w5.x86_64: failure: php-pdo-5.3.5-1.w5.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try.
        php-devel-5.3.5-1.w5.x86_64: failure: php-devel-5.3.5-1.w5.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try.
        php-mysql-5.3.5-1.w5.x86_64: failure: php-mysql-5.3.5-1.w5.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try.
        php-common-5.3.5-1.w5.x86_64: failure: php-common-5.3.5-1.w5.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try.
        php-xml-5.3.5-1.w5.x86_64: failure: php-xml-5.3.5-1.w5.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try.
        php-gd-5.3.5-1.w5.x86_64: failure: php-gd-5.3.5-1.w5.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try.

    Read the article

  • Logparser and PowerShell

    - by Michel Klomp
    Logparser in PowerShell
    One of the few examples of how to use Log Parser in PowerShell is from the Microsoft.com Operations blog. This script is a good base for creating more advanced Log Parser scripts:
        $myQuery = new-object -com MSUtil.LogQuery
        $szQuery = "Select top 10 * from r:\ex07011210.log";
        $recordSet = $myQuery.Execute($szQuery)
        for(; !$recordSet.atEnd(); $recordSet.moveNext())
        {
            $record = $recordSet.getRecord();
            write-host ($record.GetValue(0) + "," + $record.GetValue(1));
        }
        $recordSet.Close();
    Logparser input formats
    The previous example uses the default Log Parser object. You can extend this with the Log Parser input formats; with these formats you can get information from the event log, different types of logfiles, Active Directory, the registry, and XML files. Here are the different ProgIds you can use:
        Input Format    ProgId
        ADS             MSUtil.LogQuery.ADSInputFormat
        BIN             MSUtil.LogQuery.IISBINInputFormat
        CSV             MSUtil.LogQuery.CSVInputFormat
        ETW             MSUtil.LogQuery.ETWInputFormat
        EVT             MSUtil.LogQuery.EventLogInputFormat
        FS              MSUtil.LogQuery.FileSystemInputFormat
        HTTPERR         MSUtil.LogQuery.HttpErrorInputFormat
        IIS             MSUtil.LogQuery.IISIISInputFormat
        IISODBC         MSUtil.LogQuery.IISODBCInputFormat
        IISW3C          MSUtil.LogQuery.IISW3CInputFormat
        NCSA            MSUtil.LogQuery.IISNCSAInputFormat
        NETMON          MSUtil.LogQuery.NetMonInputFormat
        REG             MSUtil.LogQuery.RegistryInputFormat
        TEXTLINE        MSUtil.LogQuery.TextLineInputFormat
        TEXTWORD        MSUtil.LogQuery.TextWordInputFormat
        TSV             MSUtil.LogQuery.TSVInputFormat
        URLSCAN         MSUtil.LogQuery.URLScanLogInputFormat
        W3C             MSUtil.LogQuery.W3CInputFormat
        XML             MSUtil.LogQuery.XMLInputFormat
    Using logparser to parse IIS logs
    If you use the IISW3CInputFormat, you can use the field names instead of the column numbers to get the information from an IIS logfile, and it also skips the comment rows in the logfile:
        $ObjLogparser = new-object -com MSUtil.LogQuery
        $objInputFormat = new-object -com MSUtil.LogQuery.IISW3CInputFormat
        $Query = "Select top 10 * from c:\temp\hb\ex071002.log";
        $recordSet = $ObjLogparser.Execute($Query, $objInputFormat)
        for(; !$recordSet.atEnd(); $recordSet.moveNext())
        {
            $record = $recordSet.getRecord();
            write-host ($record.GetValue("s-ip") + "," + $record.GetValue("cs-uri-query"));
        }
        $recordSet.Close();

    Read the article

  • How to choose the Python version to install in Gentoo

    - by Shamanu4
    Hello, I'm using Gentoo Linux and I want to install Python 2.5, but it's a problem. emerge -av python shows:
        These are the packages that would be merged, in order:
        Calculating dependencies... done!
        [ebuild     U ] dev-lang/python-3.1.2-r3 [3.1.1-r1] USE="gdbm ipv6 ncurses readline ssl threads (wide-unicode%*) xml -build -doc -examples -sqlite* -tk -wininst (-ucs2%)" 9,558 kB
        [ebuild     U ] app-admin/python-updater-0.8 [0.7] 8 kB
    And there are ebuilds for more versions:
        # ls /usr/portage/dev-lang/python
        ChangeLog  files  Manifest  metadata.xml  python-2.4.6.ebuild  python-2.5.4-r4.ebuild  python-2.6.4-r1.ebuild  python-2.6.5-r2.ebuild  python-3.1.2-r3.ebuild
    How do I choose the ebuild that I want (python-2.5.4-r4)?

    Read the article

  • AS11 Oracle B2B Sync Support - Series 2

    - by sinkarbabu.kirubanithi
    In the earlier series, we discussed how to model "Sync Support" in Oracle B2B, but we did not discuss how the response can be consumed synchronously by the back-end application, i.e. the initiator of the sync request. In this sequel, we will see how we can extend it to SOA composite applications to model the end-to-end use case; this lets the initiator of the sync request receive the response synchronously. Series 2 is a little lengthy by blog standards, so be prepared before you continue :).
    Let's start our discussion with a high-level scenario where one needs to initiate a synchronous request and get the response synchronously. There are various approaches available; we will look at one of the simplest here.
    Components involved:
    1. Oracle B2B
    2. Oracle JCA JMS Adapter
    3. Oracle BPEL
    4. All of the above, wrapped up in a single SOA composite application.
    1. Oracle B2B: We skip the "Sync Support" setup in B2B, as we already discussed that in series 1, where we provided "Sync Support" samples that can be imported into B2B directly, so users can start testing within a few minutes.
    Initiator sample: This requires two JMS queues to be created: one for B2B to receive the initial outbound sync request, and the other for B2B to deliver the incoming sync response to the back-end. Please enable the "Use JMS Id" option in both the internal listening and delivery channels. This enables the JCA JMS Adapter to correlate the initial B2B request and response, which in turn is returned as the synchronous response of the BPEL process. The Internal Listening Channel and Internal Delivery Channel screenshots show the settings. To get going without much trouble, just create queues in WebLogic with the JNDI names mentioned in the two screenshots. If you want to use different names, you may have to change the queue JNDI names in the sample after importing it into B2B. Here are the queue-related JNDI names used in the sample:
    1. Internal Listening Channel queue - JNDI Name: jms/b2b/syncreplyqueue
    2. Internal Delivery Channel queue - JNDI Name: jms/b2b/syncrequestqueue
    Here is the initiator sample: Acme.zip. Note: you may have to adjust the IP address of the GlobalChips endpoint in the Delivery Channel.
    Responder sample: Contains the B2B metadata and the callout. Just import the sample and place the callout binary under the "/tmp/callout" directory. If you choose to use a different location for the callout, you may have to change it in the B2B configuration after importing the sample. Here are the artifacts:
    1. Callout source: SampleCallout.java
    2. Callout binary: sample-callout.jar
    3. Responder sample: GlobalChips.zip
    Callout details: It just returns the static response XML that needs to be sent back as the response for the inbound sync request. For sample purposes we return a static response, but in production you may have to invoke a web service or something similar to get the response.
    IMPORTANT NOTE: For the Sync Support use case, the responder is not expected to deliver the inbound sync request to the back-end, as delivering the request and getting the response from the back-end are the callout's job. This default behavior can be overridden by enabling the config property "b2b.SyncAppDelivery=true" in the B2B config mbean (b2b-config.xml). This makes B2B deliver the inbound sync request to the back-end queue, but the response sent to the remote caller still has to come from the callout.
    2. Oracle JCA JMS Adapter: On the initiator side, we use the JCA JMS Request/Reply pattern to send and receive the synchronous message from B2B.
    3. Oracle BPEL: Exposes a SOAP web service endpoint that takes the payload as input, passes it to B2B, and returns B2B's synchronous response as the SOAP response. To the outside world it looks like a synchronous web service endpoint, but under the covers it uses JMS to trigger B2B to send and receive the synchronous response.
    4. Composite application: All the components discussed above are wired into a SOA composite application that models the end-to-end synchronous use case. Here is the composite application: sca_B2BSyncSample_rev1.0.jar. You may just deploy this to your AS11 SOA installation to make use of it. For any editing, you can import the project into JDeveloper under any SOA application. Screenshots of the composite application and of BPEL with the JCA JMS Adapter (Request/Reply) accompany the original post.

    Read the article

  • Publish Static Content to WebLogic

    - by James Taylor
    Most people know WebLogic has a built-in web server. Typically this is not an issue, as you deploy Java applications and WebLogic publishes them to the web. But what if you just want to display a simple static HTML page? In WebLogic you can develop a simple web application to display static HTML content. In this example I used WLS 10.3.3. I want to display two files: an HTML file and an XSD for reference.
    1. Create a directory of your choice; this is what I will call the document root:
        mkdir /u01/oracle/doc_root
    2. Copy the static files to this directory.
    3. In the document root directory created in step 1, create the directory WEB-INF:
        mkdir WEB-INF
    4. In the WEB-INF directory, create a file called web.xml with the following content:
        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/j2ee/dtds/web-app_2_3.dtd">
        <web-app>
        </web-app>
    5. Log in to the WebLogic console to deploy the application:
        - Click on Deployments
        - Click on Lock & Edit
        - Click Install and set the path to the directory created in step 1
        - Leave the default "Install this deployment as an application" and click Next
        - Select a managed server to deploy to and click Next
        - Accept the defaults and click Finish
    6. When the deployment completes successfully, click Activate Changes. You should now see the application started under Deployments.
    You can now access your static content via the following URL:
        http://localhost:7001/doc_root/helloworld.html
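    Since this example also serves an .xsd file, it can help to give the container an explicit MIME mapping for that extension so browsers display the schema inline. The following web.xml addition is a hedged sketch, not part of the original walkthrough; the text/xml choice is an assumption you can adjust:
        <!-- Illustrative addition: map the .xsd extension to an XML MIME type.
             This goes inside the <web-app> element shown above. -->
        <mime-mapping>
            <extension>xsd</extension>
            <mime-type>text/xml</mime-type>
        </mime-mapping>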

    Read the article

  • Configuring jdbc-pool (Tomcat 7)

    - by john
    I'm having some problems with Tomcat 7 when configuring jdbc-pool. I've tried to follow this example: http://www.tomcatexpert.com/blog/2010/04/01/configuring-jdbc-pool-high-concurrency
    So I have conf/server.xml:
        <GlobalNamingResources>
            <Resource type="javax.sql.DataSource"
                name="jdbc/DB"
                factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
                driverClassName="com.mysql.jdbc.Driver"
                url="jdbc:mysql://localhost:3306/mydb"
                username="user"
                password="password"
            />
        </GlobalNamingResources>
    and conf/context.xml:
        <Context>
            <ResourceLink type="javax.sql.DataSource"
                name="jdbc/LocalDB"
                global="jdbc/DB"
            />
        <Context>
    And when I try to do this:
        Context initContext = new InitialContext();
        Context envContext = (Context)initContext.lookup("java:/comp/env");
        DataSource datasource = (DataSource)envContext.lookup("jdbc/LocalDB");
        Connection con = datasource.getConnection();
    I keep getting this error:
        javax.naming.NameNotFoundException: Name jdbc is not bound in this Context
            at org.apache.naming.NamingContext.lookup(NamingContext.java:803)
            at org.apache.naming.NamingContext.lookup(NamingContext.java:159)
    Please help, thanks.

    Read the article

  • Silverlight Cream for April 19, 2010 -- #841

    - by Dave Campbell
    In this issue: Michael Washington, Jeremy Likness, Giorgetti Alessandro, Antoni Dol, Mike Taulty, and Braulio Diez.
    Shoutout: Bart Czernicki lists compelling reasons to use Silverlight 4 for LOB apps: Silverlight 4 - What is New for Business Intelligence Scenarios.
    From SilverlightCream.com:
    Silverlight Advanced MVVM Video Player - After the initial posting on his simple MVVM video player, Michael Washington got some feedback and decided to do a part 2 demonstrating exactly how easy it is to customize... great tutorial and all the code.
    Model-View-ViewModel (MVVM) Explained - Jeremy Likness has a post up that begins "The purpose of this post is to provide an introduction to the Model-View-ViewModel (MVVM) pattern." 'Nuff said... if you're not there yet, get there now :)
    Castle Windsor – Silverlight 4 binaries - Giorgetti Alessandro has produced workable Castle Windsor binaries for Silverlight 4. No unit tests at this point, but read the post for that information.
    Silverlight ToggleButton Push Pin Style with IsoStore - Antoni Dol has a very nice ToggleButton redone as a pushpin for pinning an app, plus it saves the pinned information to isolated storage... all with source!
    Silverlight and Xml Binding - Mike Taulty fleshes out a sketchy idea he has surrounding databinding Silverlight to XML data by using the ability to databind to string indexers and XPath support.
    WinToolbar Silverlight widget available on Codeplex - Braulio Diez announced a toolbar library that he and Sebastian Stehle have posted on CodePlex that looks awesome... you may as well just go get it now, you're going to want to!
    Stay in the 'Light!

    Read the article

  • Configuration issue with HttpRealipModule (CloudFlare) in nginx configuration file

    - by Tyrx
    I've been attempting to use HttpRealipModule with the CloudFlare IP range in my main nginx configuration file, but upon restarting nginx I'll just get a standard "configuration file /etc/nginx/nginx.conf test failed" and my site will go down. This is what I've been attempting to do with my nginx.conf:
        user www-data;
        worker_processes 1;
        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            # Basic Settings
            set_real_ip_from 204.93.240.0/24;
            set_real_ip_from 204.93.177.0/24;
            set_real_ip_from 199.27.128.0/21;
            set_real_ip_from 173.245.48.0/20;
            set_real_ip_from 103.22.200.0/22;
            set_real_ip_from 141.101.64.0/18;
            set_real_ip_from 108.162.192.0/18;
            set_real_ip_from 190.93.240.0/20;
            set_real_ip_from 188.114.96.0/20;
            set_real_ip_from 2400:cb00::/32;
            set_real_ip_from 2606:4700::/32;
            set_real_ip_from 2803:f800::/32;
            real_ip_header CF-Connecting-IP;

            client_max_body_size 50m;
            client_header_timeout 5;
            keepalive_timeout 5;
            port_in_redirect off;
            sendfile on;
            server_tokens off;
            server_name_in_redirect off;
            tcp_nopush on;
            tcp_nodelay on;
            types_hash_max_size 2048;

            # MIME
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            # Logging Settings
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log warn;

            # Gzip Settings
            gzip on;
            gzip_disable "msie6";
            gzip_min_length 1400;
            gzip_types text/plain text/css text/javascript text/xml application/x-javascript application/xml application/xml+rss;

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }
    What's wrong with that configuration file?

    Read the article

  • How do I install Apache Tomcat 7?

    - by Anupesh
    These steps should be followed after downloading Apache Tomcat 7. Open a terminal and go to the download folder containing the tar.gz file.
    1. Untar the tar file:
        tar -xzvf filename.tar.gz
    2. Move the extracted apache-tomcat directory to its install path:
        sudo mv Directory_Name (extracted, e.g. apache-tomcat-7.0.32) /usr/local/Tomcat7
    3. To set the environment variables for Tomcat 7:
        sudo nano /usr/local/Tomcat7/bin/setenv.sh
    The nano editor will then open; add:
        JAVA_HOME=/usr/lib/jvm/java-6-sun
        export CATALINA_OPTS="$CATALINA_OPTS -Xms128m -Xmx1024m -XX:MaxPermSize=256m"
    Then hit Ctrl+X to save the file.
    4. To set the roles, username, and password:
        sudo nano /usr/local/Tomcat7/conf/tomcat-users.xml
        <role rolename="manager-gui" />
        <role rolename="manager-script" />
        <role rolename="manager-jmx" />
        <role rolename="manager-status" />
        <user username="admin" password="admin" roles="manager-gui,manager-script,manager-jmx,manager-status"/>
    Ctrl+X to save the file.
    5. If a port number conflict gets in the way, open this file to change the port number:
        sudo nano /usr/local/Tomcat7/conf/server.xml
    Then change the connector port as you want.

    Read the article

  • Can mediatomb VLC profile transcode audio as MP3 rather than mpga?

    - by djangofan
    In /etc/mediatomb/config.xml, can the mediatomb VLC profile transcode audio as MP3 rather than mpga? My Sony GoogleTV won't render streamed .avi files with mpga audio in them. The original files are DivX encoded with 128 kbps MP3 audio, but mediatomb is transcoding them. How can I change this? Any ideas? Can I turn off the audio and video transcoding somehow? I need some ideas to try.
        <profile name="vlcprof" enabled="yes" type="external">
            <mimetype>video/mpeg</mimetype>
            <agent command="vlc" arguments="-I dummy %in --sout #transcode{venc=ffmpeg,vcodec=mp2v,vb=4096,fps=25,aenc=ffmpeg,acodec=mpga,ab=192,samplerate=44100,channels=2}:standard{access=file,mux=ps,dst=%out} vlc:quit"/>
            <buffer size="10485760" chunk-size="131072" fill-size="2621440"/>
            <accept-url>yes</accept-url>
            <first-resource>yes</first-resource>
        </profile>
    I know that MP3 encoding support is external to FFmpeg and must be configured appropriately, but I have no idea how to handle that. I would guess I can work around that by somehow telling ffmpeg not to transcode the audio stream? Also, should I create a separate vlcprof entry for video/avi? Can you create more than one profile for VLC in mediatomb's config.xml?

    Read the article

  • Introducing the Oracle Parcel Service&ndash;Example/Reference Application

    - by Jeffrey West
    Over the last few weeks the product management team has been working on a webcast series that is airing in EMEA. It is a 5-episode series where we talk about different features of WebLogic and show how to build applications that take advantage of these features. Each session is focused on a different layer of the technology stack, and you can find the schedule below.
    The application we are building in this series is named the 'Oracle Parcel Service'. It is an example application and not a product of Oracle by any stretch of the imagination. Over the next few weeks we will be finalizing the code and will be releasing it for you to check out. For updates, request membership to the Oracle Parcel Service project on SampleCode.oracle.com: https://www.samplecode.oracle.com/sf/projects/oracle-parcel-svc/.
    Here are some of the key features that we are highlighting:
    - JPA 2.0 (new in WebLogic 10.3.4) with EclipseLink
    - Coherence TopLink Grid Level 2 cache for JPA
    - JAX-RS (new in WebLogic 10.3.4) 1.0 for RESTful services (a registration sketch follows at the end of this entry)
    - Lightweight jQuery web UI for consuming RESTful services
    - JSF 2.0 (new in WebLogic 10.3.4) utilizing PrimeFaces
    - EJB 3.0
    - Spring-WS web services
    - JAX-WS web services
    - Spring MDPs for event-driven architectures
    - Java MDBs for event-driven architectures
    - Partitioned distributed topics for event-driven architectures
    Accessing the Code on SampleCode.Oracle.com
    You will need to log in using your Oracle.com username and password. If you have not created an account, you will need to do so. It's a simple one-page form and we don't bother you with too many emails. Please join the project to be kept up to date on changes to the code and new projects. Joining the project is not required, but very much appreciated. Once you have signed in you should see an icon for accessing the source code via Subversion. You can also download a zip file containing the code.
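    As a hedged illustration of the JAX-RS 1.0 feature above: on WebLogic 10.3.4, which ships the Jersey reference implementation, a RESTful service is typically wired up by registering the Jersey servlet in web.xml along these lines. The package name is a hypothetical placeholder and this fragment is not taken from the actual Oracle Parcel Service code:
        <!-- Hedged sketch: registering the Jersey (JAX-RS 1.0) servlet in web.xml.
             com.example.ops.rest is a hypothetical placeholder package. -->
        <servlet>
            <servlet-name>jersey</servlet-name>
            <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
            <init-param>
                <param-name>com.sun.jersey.config.property.packages</param-name>
                <param-value>com.example.ops.rest</param-value>
            </init-param>
        </servlet>
        <servlet-mapping>
            <servlet-name>jersey</servlet-name>
            <url-pattern>/resources/*</url-pattern>
        </servlet-mapping>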

    Read the article

  • VirtualBox does not run: NS_ERROR_FAILURE

    - by dschinn1001
    This is Ubuntu 12.10 and VirtualBox is somehow not working. I was trying to install Win7 onto a USB hard disk. BOINC is switched off and the RAM size is set to 4096 MB (too big? out of a possible 8 GiB). VirtualBox reports that the COM object for VirtualBox could not be created and the application will now exit:
        Start tag expected, '<' not found.
        Location: '/home/$user/.VirtualBox/VirtualBox.xml', line 1 (0), column 1.
        /build/buildd/virtualbox-4.1.18-dfsg/src/VBox/Main/src-server/VirtualBoxImpl.cpp[484] (nsresult VirtualBox::init()).
        Error code: NS_ERROR_FAILURE (0x80004005)
        Component: VirtualBox
        Interface: IVirtualBox {c28be65f-1a8f-43b4-81f1-eb60cb516e66}
    My questions: why does VirtualBox install VirtualBox.xml into the $user folder under .VirtualBox? Should it not be on the USB hard disk (with 500 GiB)? The first installation attempt (with Win7 64-bit) broke off. Should I try VirtualBox (Ubuntu 64-bit) with Win7 32-bit? Should I leave the VirtualBox RAM size at the default 512 MB? Thanks for any reply.

    Read the article

  • Minecraft shows black screen on WattOS 64 after logon

    - by uffe hellum
    Minecraft appears to launch with Oracle Java 7, but crashes after logon.
        $ java -Xmx1024M -Xms512M -cp ./minecraft.jar net.minecraft.LauncherFrame asdf
        Exception in thread "Thread-3" java.lang.UnsatisfiedLinkError: /home/uffeh/.minecraft/bin/natives/liblwjgl.so: /home/uffeh/.minecraft/bin/natives/liblwjgl.so: wrong ELF class: ELFCLASS32 (Possible cause: architecture word width mismatch)
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1939)
            at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1864)
            at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1825)
            at java.lang.Runtime.load0(Runtime.java:792)
            at java.lang.System.load(System.java:1059)
            at org.lwjgl.Sys$1.run(Sys.java:69)
            at java.security.AccessController.doPrivileged(Native Method)
            at org.lwjgl.Sys.doLoadLibrary(Sys.java:65)
            at org.lwjgl.Sys.loadLibrary(Sys.java:81)
            at org.lwjgl.Sys.(Sys.java:98)
            at net.minecraft.client.Minecraft.F(SourceFile:1857)
            at aof.(SourceFile:20)
            at net.minecraft.client.Minecraft.(SourceFile:77)
            at anw.(SourceFile:36)
            at net.minecraft.client.MinecraftApplet.init(SourceFile:36)
            at net.minecraft.Launcher.replace(Launcher.java:136)
            at net.minecraft.Launcher$1.run(Launcher.java:79)

    Read the article

  • Recommended Approach to Secure your ADFdi Spreadsheets

    - by juan.ruiz
    ADF Desktop Integration leverages ADF Security to provide access to published spreadsheets within your application. In this article I discuss a good security practice for your existing spreadsheets as well as any new ones that you create. ADF Desktop Integration uses the adfdiRemoteServlet to process and send requests back and forth between the spreadsheet and the ADF model, which lives in the Java EE container where our application is deployed. In other words, this is one of the entry points to the application server. Having said that, we need to make sure that container-based security is in place to avoid vulnerabilities. So what is needed? For existing as well as new ADFdi applications, you need to create a security constraint for the ADFdi servlet in the web.xml file of your application. Fortunately, JDeveloper 11g provides a nice visual editor to do this. Open the web.xml file and go to the Security category, add a new Web Resource Collection, give it a meaningful name, and under URL Pattern add /adfdiRemoteServlet. Then click on the Authorization tab and make sure the valid-users role is selected for authorization, and voila! Your application is now more secure. The resulting fragment is sketched below.
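    For reference, a web.xml security constraint along the lines described above would look roughly like this. This is a hedged sketch of what the JDeveloper editor generates, not copied from the article; the web-resource-name is an arbitrary label:
        <!-- Hedged sketch: container-based security for the ADFdi remote servlet.
             Mirrors the steps in the article; the names are illustrative. -->
        <security-constraint>
            <web-resource-collection>
                <web-resource-name>adfdiRemote</web-resource-name>
                <url-pattern>/adfdiRemoteServlet</url-pattern>
            </web-resource-collection>
            <auth-constraint>
                <role-name>valid-users</role-name>
            </auth-constraint>
        </security-constraint>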

    Read the article

  • Handling various frame layouts in Android

    - by vaibhav
    I'm new to game development and am trying to create a game for Android like Contra or the old TMNT game (but a simple one). For the game I decided to divide my main screen into three parts: upper for stats, middle for the game, and lower for controls. My main.xml is:
        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            android:orientation="vertical" >

            <FrameLayout
                android:id="@+id/upper_bar"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:layout_weight="1" >
            </FrameLayout>

            <FrameLayout
                android:id="@+id/fl"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:layout_weight="0.5" >
            </FrameLayout>

            <FrameLayout
                android:id="@+id/low_bar"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:layout_weight="0.85" >
            </FrameLayout>
        </LinearLayout>
    I have created the GameView and GameLoopThread classes for the middle surface (which is pretty standard). My problem is: how do I draw in the upper and lower FrameLayouts? Should I make new view and thread classes for each layout, should I do all this in the GameView class itself, or is there a better way to implement this?

    Read the article
