Search Results

Search found 56300 results on 2252 pages for 'local working'.


  • Running a simple integration scenario using the Oracle Big Data Connectors on a Hadoop/HDFS cluster

    - by hamsun
    Between the elephant (the traditional image of the Hadoop framework) and the Oracle Iron Man (Big Data), an English setter could be seen as the link to the right data.

    Data, data, data: we are living in a world where data technology, based on popular applications, search engines, web servers, rich SMS messages, email clients, weather forecasts and so on, plays a predominant role in our lives. More and more technologies are used to analyze and track our behavior, trying to detect patterns and to propose us "the best/right user experience", from the Google Ad services to telco companies or large consumer sites (like Amazon :) ). The more we use all these technologies, the more data we generate, and thus there is a need for huge data marts and specific hardware/software servers (such as the Exadata servers) in order to treat, analyze and understand the trends and offer new services to the users.

    Some of these "data feeds" are raw, unstructured data that cannot be processed effectively by normal SQL queries. Large-scale distributed processing was an emerging infrastructure need, and the solution seemed to be the "collocation of compute nodes with the data", which in turn led to the MapReduce parallel patterns and the development of the Hadoop framework, which is based on MapReduce and a distributed file system (HDFS) that runs on large clusters of rather inexpensive servers. Several Oracle products use the distribute/aggregate pattern for data calculation (Coherence, NoSQL Database, TimesTen), so once you are familiar with one of these technologies, let's say with Coherence aggregators, you will find the whole Hadoop/MapReduce concept very similar.

    Oracle Big Data Appliance is based on the Cloudera Distribution (CDH), and the Oracle Big Data Connectors can be plugged into a Hadoop cluster running the CDH distribution or equivalent Hadoop clusters. In this paper, a "lab like" implementation of this concept is done on a single Linux x64 server running an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 and a single-node Apache hadoop-1.2.1 HDFS cluster, using the SQL Connector for HDFS. The whole setup is fairly simple:

    Install an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 server on a Linux x64 server (or VirtualBox appliance).

    Get the Apache Hadoop distribution from: http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-1.2.1.

    Get the Oracle Big Data Connectors from: http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/index.html?ssSourceSiteId=ocomen.

    Check the Java version of your Linux server with the command java -version:

        java version "1.7.0_40"
        Java(TM) SE Runtime Environment (build 1.7.0_40-b43)
        Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode)

    Decompress the hadoop-1.2.1.tar.gz file to /u01/hadoop-1.2.1 and modify your .bash_profile:

        export HADOOP_HOME=/u01/hadoop-1.2.1
        export PATH=$PATH:$HADOOP_HOME/bin
        export HIVE_HOME=/u01/hive-0.11.0
        export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin

    Set up SSH trust for the Hadoop processes. This is a mandatory step; in our case we have to establish a "local trust", as we are using a single-node configuration: copy the new public key to the list of authorized keys, then connect and test the SSH setup to your localhost.
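    The SSH trust for a single-node setup can be established along these lines (a minimal sketch; the RSA key type and the default file names are assumptions, adjust to your system):

        ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa          # generate a passphrase-less key pair
        cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys   # authorize the new public key locally
        chmod 600 ~/.ssh/authorized_keys
        ssh localhost                                     # should now log in without a password prompt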
    We will run a "pseudo-Hadoop cluster" on this single node: all the Hadoop Java components run on one machine, which is enough for our demo purposes. We need to "fine tune" some Hadoop configuration files: go to $HADOOP_HOME/conf and modify the files core-site.xml, hdfs-site.xml and mapred-site.xml, then check that the Hadoop binaries are referenced correctly from the command line by executing:

        hadoop version

    As Hadoop is managing our "clustered HDFS" file system, we have to create "the mount point" and format it; the mount point is declared in core-site.xml. The layout under /u01/hadoop-1.2.1/data will be created and used by the other Hadoop components (MapReduce uses the /mapred/... layout and HDFS uses the /dfs/... layout). Format the HDFS Hadoop file system and start the Java components for the HDFS system. As an additional check, you can use the Hadoop GUI browsers to inspect the content of your HDFS configuration.
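    As a sketch, a minimal core-site.xml for this kind of single-node lab, together with the format/start step, could look like the following; the hdfs://localhost:19000 name-node URI matches the location files shown later in this post, while the hadoop.tmp.dir value is an assumption based on the /u01/hadoop-1.2.1/data layout described above:

        cat > $HADOOP_HOME/conf/core-site.xml <<'EOF'
        <?xml version="1.0"?>
        <configuration>
          <property>
            <name>fs.default.name</name>
            <value>hdfs://localhost:19000</value>
          </property>
          <property>
            <name>hadoop.tmp.dir</name>
            <value>/u01/hadoop-1.2.1/data</value>
          </property>
        </configuration>
        EOF
        hadoop namenode -format   # format the HDFS file system
        start-dfs.sh              # start the NameNode and DataNode daemons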
    Once our HDFS Hadoop setup is done, you can use the HDFS file system to store data (big data :) ) and plug it back and forth to Oracle databases by means of the Big Data Connectors (which is the next configuration step). You could create and use a Hive DB, but in our case we will make a simple integration of "raw data" through the creation of an external table on a local Oracle instance (on the same Linux box we run the single-node Hadoop HDFS cluster and one Oracle DB).

    Download some public "big data"; I used the site http://france.meteofrance.com/france/observations, from which I can get *.csv files for my big data simulations :).

    Download the Big Data Connector from OTN (oraosch-2.2.0.zip) and unzip it to your local file system. Modify your environment in order to access the connector libraries, and make the following test:

        [oracle@dg1 bin]$ ./hdfs_stream
        Usage: hdfs_stream locationFile
        [oracle@dg1 bin]$

    Load the data into the Hadoop HDFS file system:

        hadoop fs -mkdir bgtest_data
        hadoop fs -put obsFrance.txt bgtest_data/obsFrance.txt

        [oracle@dg1 bg-data-raw]$ hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt
        Found 1 items
        -rw-r--r--   1 oracle supergroup      54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt
        [oracle@dg1 bg-data-raw]$ hadoop fs -ls hdfs:///user/oracle/bgtest_data/obsFrance.txt
        Found 1 items
        -rw-r--r--   1 oracle supergroup      54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt

    Check the content of the HDFS with the browser UI. Then start the Oracle database and run the following script in order to create the Oracle database user and the Oracle directories for the Oracle Big Data Connector (dg1 is my own DB SID; replace it with yours):

        #!/bin/bash
        export ORAENV_ASK=NO
        export ORACLE_SID=dg1
        . oraenv
        sqlplus /nolog <<EOF
        CONNECT / AS sysdba;
        CREATE OR REPLACE DIRECTORY osch_bin_path AS '/u01/orahdfs-2.2.0/bin';
        CREATE USER BGUSER IDENTIFIED BY oracle;
        GRANT CREATE SESSION, CREATE TABLE TO BGUSER;
        GRANT EXECUTE ON sys.utl_file TO BGUSER;
        GRANT READ, EXECUTE ON DIRECTORY osch_bin_path TO BGUSER;
        CREATE OR REPLACE DIRECTORY BGT_LOG_DIR AS '/u01/BG_TEST/logs';
        GRANT READ, WRITE ON DIRECTORY BGT_LOG_DIR TO BGUSER;
        CREATE OR REPLACE DIRECTORY BGT_DATA_DIR AS '/u01/BG_TEST/data';
        GRANT READ, WRITE ON DIRECTORY BGT_DATA_DIR TO BGUSER;
        EOF

    Put the following in a file named t3.sh and make it executable:

        hadoop jar $OSCH_HOME/jlib/orahdfs.jar \
          oracle.hadoop.exttab.ExternalTable \
          -D oracle.hadoop.exttab.tableName=BGTEST_DP_XTAB \
          -D oracle.hadoop.exttab.defaultDirectory=BGT_DATA_DIR \
          -D oracle.hadoop.exttab.dataPaths="hdfs:///user/oracle/bgtest_data/obsFrance.txt" \
          -D oracle.hadoop.exttab.columnCount=7 \
          -D oracle.hadoop.connection.url=jdbc:oracle:thin:@//localhost:1521/dg1 \
          -D oracle.hadoop.connection.user=BGUSER \
          -D oracle.hadoop.exttab.printStackTrace=true \
          -createTable --noexecute

    Then test the creation of the external table with it:

        [oracle@dg1 samples]$ ./t3.sh
        ./t3.sh: line 2: /u01/orahdfs-2.2.0: Is a directory
        Oracle SQL Connector for HDFS Release 2.2.0 - Production
        Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved.
        Enter Database Password:
        The create table command was not executed.
        The following table would be created.

        CREATE TABLE "BGUSER"."BGTEST_DP_XTAB"
        (
          "C1" VARCHAR2(4000),
          "C2" VARCHAR2(4000),
          "C3" VARCHAR2(4000),
          "C4" VARCHAR2(4000),
          "C5" VARCHAR2(4000),
          "C6" VARCHAR2(4000),
          "C7" VARCHAR2(4000)
        )
        ORGANIZATION EXTERNAL
        (
          TYPE ORACLE_LOADER
          DEFAULT DIRECTORY "BGT_DATA_DIR"
          ACCESS PARAMETERS
          (
            RECORDS DELIMITED BY 0X'0A'
            CHARACTERSET AL32UTF8
            STRING SIZES ARE IN CHARACTERS
            PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
            FIELDS TERMINATED BY 0X'2C'
            MISSING FIELD VALUES ARE NULL
            (
              "C1" CHAR(4000),
              "C2" CHAR(4000),
              "C3" CHAR(4000),
              "C4" CHAR(4000),
              "C5" CHAR(4000),
              "C6" CHAR(4000),
              "C7" CHAR(4000)
            )
          )
          LOCATION
          (
            'osch-20131022081035-74-1'
          )
        ) PARALLEL REJECT LIMIT UNLIMITED;

        The following location files would be created.
        osch-20131022081035-74-1 contains 1 URI, 54103 bytes
            54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt

    Then remove the --noexecute flag and create the external Oracle table for the Hadoop data, and check the results: the create table command succeeds and produces the same CREATE TABLE statement as above, this time with LOCATION ('osch-20131022081719-3239-1'), and it reports:

        The following location files were created.
        osch-20131022081719-3239-1 contains 1 URI, 54103 bytes
            54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt

    This is the view from SQL Developer. And finally, the number of lines in the Oracle table, imported from our Hadoop HDFS cluster:

        SQL> select count(*) from "BGUSER"."BGTEST_DP_XTAB";

          COUNT(*)
        ----------
              1151
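    From here the HDFS-backed rows can be materialized inside the database with plain SQL. A minimal sketch (the OBS_FRANCE_LOCAL target table and the CTAS approach are my own illustration, not part of the connector output):

        sqlplus BGUSER/oracle <<EOF
        -- copy the external (HDFS-backed) rows into a regular heap table
        CREATE TABLE OBS_FRANCE_LOCAL AS SELECT * FROM "BGUSER"."BGTEST_DP_XTAB";
        -- the row count should match the external table: 1151
        SELECT COUNT(*) FROM OBS_FRANCE_LOCAL;
        EOF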
    In a next post we will integrate data from a Hive database and try some ODI integrations with the ODI Big Data connector. Our simplistic approach is just a first step to show you how this unstructured-data world can be integrated into an Oracle infrastructure. Hadoop, Big Data and NoSQL are great technologies; they are widely used, and Oracle offers a large integration infrastructure based on these services.

    Oracle University presents a complete curriculum on all the related Oracle technologies:

    NoSQL:
    Introduction to Oracle NoSQL Database
    Using Oracle NoSQL Database

    Big Data:
    Introduction to Big Data
    Oracle Big Data Essentials
    Oracle Big Data Overview

    Oracle Data Integrator:
    Oracle Data Integrator 12c: New Features
    Oracle Data Integrator 11g: Integration and Administration
    Oracle Data Integrator: Administration and Development
    Oracle Data Integrator 11g: Advanced Integration and Development

    Oracle Coherence 12c:
    Oracle Coherence 12c: New Features
    Oracle Coherence 12c: Share and Manage Data in Clusters

    Oracle GoldenGate 11g:
    Oracle GoldenGate 11g: Fundamentals for Oracle
    Oracle GoldenGate 11g: Fundamentals for SQL Server
    Oracle GoldenGate 11g: Fundamentals for DB2
    Oracle GoldenGate 11g: Fundamentals for Teradata
    Oracle GoldenGate 11g: Fundamentals for HP NonStop
    Oracle GoldenGate 11g: Management Pack: Overview
    Oracle GoldenGate 11g: Troubleshooting and Tuning
    Oracle GoldenGate 11g: Advanced Configuration for Oracle

    Other Resources:
    Apache Hadoop: http://hadoop.apache.org/ is the home page for these technologies.
    "Hadoop: The Definitive Guide, 3rd Edition" by Tom White is a classic read for people who want to know more about Hadoop, and some active "googling" will also give you more references.

    About the author: Eugene Simos is based in France and joined Oracle through the BEA/WebLogic acquisition, where he worked in Professional Services, Support and Education for major accounts across the EMEA region. He has worked in the banking sector and with AT&T and other telco companies, giving him extensive experience with production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses on WebLogic/WebCenter, Content, BPM/SOA, Identity & Security, GoldenGate, Virtualisation and Unified Communications Suite throughout the EMEA region.

    Read the article

  • Network issues with DNS not being found

    - by Anriëtte Combrink
    Hi there. This is exactly how our network looks: a single server with a network router. Everything is set up, but I cannot connect our Macs, under Login Options - Join..., to this server. Our server's name is Toolbox, and I have tried Toolbox.local, Toolbox.private, and prepending the afp:// protocol to the name, but nothing; our Macs just don't want to connect this way. Our router has DHCP and gives out all the IP addresses naturally; would I have to add Toolbox.local to the DNS on the router and link it via a static internal IP to the server? Our Macs keep giving the following error while trying to join the Network Account Server:

        Unable to add server
        Could not resolve the address (2200)

    What am I doing wrong?

    Read the article

  • Umbraco Gold Partner and the last 6 months.

    - by Vizioz Limited
    As with a lot of blogs, unfortunately over the last 6 months our blog has been feeling somewhat neglected; the good news is that this has been due to us going from strength to strength :)

    In the last 6 months we have developed 5 more microsites for Microsoft, we have helped a London agency fix a dire Umbraco implementation for a global drinks brand, we have built a great site for a famous food product range, and most recently we are working with DairyMaster in Ireland, building them a new website for their global distribution network; over the next couple of months we will be launching their new global marketing websites in 9 different languages.

    As well as working with these great clients, we also helped ResourceiT launch their new website in time for the Microsoft Global Partners conference.

    In December, Umbraco HQ launched their Umbraco Gold Partner programme, and Vizioz was proud to be one of the first Gold Partners in the UK, showing our clients that we are investing our money in the product we promote and ensuring that Umbraco continues to go from strength to strength.

    Read the article

  • ASP.NET Security Exception when Switch IIS7 to Use UNC Path for Content

    - by Jeremy H.
    I have a Windows Server 2008 R2 box running IIS 7.5 with Medium Trust configured for ASP.NET. When I have the website running from local content (e.g.: c:\inetpub\wwwroot) everything works fine. When I change IIS to use a UNC path for the content (e.g.: \\computer\wwwroot) I get the following error:

        Security Exception
        Description: The application attempted to perform an operation not allowed by the security policy. To grant this application the required permission please contact your system administrator or change the application's trust level in the configuration file.
        Exception Details: System.Security.SecurityException: Request for the permission of type 'System.Data.SqlClient.SqlClientPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.

    I'm trying to figure out why ASP.NET/IIS would allow for the SQL call when using local content but not when using a UNC path. Any ideas what I need to do to use a UNC path from IIS7 properly?

    Read the article

  • How to port email from evolution to thunderbird?

    - by jim
    I updated Ubuntu to 11.10 using the update notification. I am also switching from Xubuntu to Ubuntu with the GNOME interface. I have been using Evolution for years and would like to port the emails to Thunderbird. I have looked at the similar questions, with no luck, and at the Thunderbird help on manually importing. Most of these assume that the Evolution file structure is similar to the Thunderbird file structure. When I set up Thunderbird, it seems to have imported the contacts from Evolution (and actually removed them from Evolution). However, no mail got transferred. I found the Evolution mail in ~/.local/share/evolution/mail/local. This has folders.db and 3 directories: cur, tmp, and new. Then there are the hidden files and directories. Each directory has three related files with the extensions .cmeta, .ibex.index, and .ibex.index.data. Then all the directories have files that seem to contain the individual messages. I have not found any rhyme or reason to the file numbering/naming scheme. Is there a nice way to import these files?

    Read the article

  • Cannot configure NAP DCOM security.

    - by mattdwen
    I've just added a new 2K8 domain controller to an existing domain as part of a transition from 2K3. I am getting a lot of DCOM 10016 errors, indicating launch security permission problems on a specific CLSID, which ends up being the NAP Agent Service. I've dealt with this before by granting the Network Service local launch and local activation permissions, but the security options are all disabled for this component in the Component Services snap-in. The NAP Agent service is not running, and startup is set to Manual. Any ideas on how to remove the errors for the unrequired NAP agent?

    Read the article

  • GlassFish V3 won't start

    - by Zakaria
    Hi everybody, I installed NetBeans 6.8 and tried to run the GlassFish V3 server. I'm working under Windows Vista 32 bits. At first, it wouldn't run. Then I modified the c:\Windows\System32\drivers\etc\hosts file and put the following line into it:

        127.0.0.1 localhost

    And when I run the GlassFish V3 server, no error is shown, but only "INFO" lines are displayed:

        3 avr. 2010 19:23:19 com.sun.enterprise.glassfish.bootstrap.ASMain main
        INFO: Launching GlassFish on Felix platform
        Welcome to Felix
        ================
        INFO: Perform lazy SSL initialization for the listener 'http-listener-2'
        INFO: Starting Grizzly Framework 1.9.18-k - Sat Apr 03 19:23:24 CEST 2010
        INFO: Starting Grizzly Framework 1.9.18-k - Sat Apr 03 19:23:25 CEST 2010
        INFO: Grizzly Framework 1.9.18-k started in: 423ms listening on port 35127
        INFO: GlassFish v3 (74.2) startup time : Felix(4456ms) startup services(1709ms) total(6165ms)
        INFO: Grizzly Framework 1.9.18-k started in: 459ms listening on port 35116
        INFO: Grizzly Framework 1.9.18-k started in: 428ms listening on port 35155
        INFO: Grizzly Framework 1.9.18-k started in: 470ms listening on port 35160
        INFO: Grizzly Framework 1.9.18-k started in: 513ms listening on port 35159
        INFO: javassist.util.proxy.ProxyFactory.classLoaderProvider = org.glassfish.weld.WeldActivator$GlassFishClassLoaderProvider@5be8f4
        INFO: Hibernate Validator bean-validator-3.0-JBoss-4.0.2
        INFO: Binding RMI port to *:35165
        INFO: Instantiated an instance of org.hibernate.validator.engine.resolver.JPATraversableResolver.
        INFO: JMXStartupService: Started JMXConnector, JMXService URL = service:jmx:rmi://PC-de-Charlotte:35165/jndi/rmi://PC-de-Charlotte:35165/jmxrmi
        INFO: Using com.sun.enterprise.transaction.jts.JavaEETransactionManagerJTSDelegate as the delegate
        INFO: [Thread[GlassFish Kernel Main Thread,5,main]] started
        INFO: Grizzly Framework 1.9.18-k started in: 150ms listening on port 35159
        INFO: Perform lazy SSL initialization for the listener 'http-listener-2'
        INFO: {felix.fileinstall.poll (ms) = 5000, felix.fileinstall.dir = C:\Program Files\sges-v3\glassfish\modules\autostart, felix.fileinstall.debug = 1, felix.fileinstall.bundles.new.start = true, felix.fileinstall.tmpdir = C:\Users\CHARLO~1\AppData\Local\Temp\fileinstall-330907148519261411, felix.fileinstall.filter = null}
        INFO: {felix.fileinstall.poll (ms) = 5000, felix.fileinstall.dir = C:\Users\Charlotte\.netbeans\6.8\GlassFish_v3\autodeploy\bundles, felix.fileinstall.debug = 1, felix.fileinstall.bundles.new.start = true, felix.fileinstall.tmpdir = C:\Users\CHARLO~1\AppData\Local\Temp\fileinstall-2938963288421854459, felix.fileinstall.filter = null}
        INFO: Grizzly Framework 1.9.18-k started in: 95ms listening on port 35160
        INFO: Updating configuration from org.apache.felix.fileinstall-autodeploy-bundles.cfg
        INFO: Installed C:\Program Files\sges-v3\glassfish\modules\autostart\org.apache.felix.fileinstall-autodeploy-bundles.cfg
        INFO: {felix.fileinstall.poll (ms) = 5000, felix.fileinstall.dir = C:\Users\Charlotte\.netbeans\6.8\GlassFish_v3\autodeploy\bundles, felix.fileinstall.debug = 1, felix.fileinstall.bundles.new.start = true, felix.fileinstall.tmpdir = C:\Users\CHARLO~1\AppData\Local\Temp\fileinstall-6474085409014899009, felix.fileinstall.filter = null}

    And there is no message such as "GlassFish started"! So, when I try to access the admin web interface at localhost:4848, localhost:8080 or localhost:8181, it doesn't work. What should I do? Thank you very much. Regards.

    Read the article

  • NuGet, ASP.Net MVC and WebMatrix - DB Coders Cafe - March 1st, 2011 With Sam Abraham

    - by Sam Abraham
    I am scheduled to share on NuGet (http://nuget.codeplex.com/) at the Deerfield Beach Coder’s Café on March 1st, 2011. My goal for this talk is to present demos and content covering how to leverage this neat new utility to easily "package" .NET-based binaries or tools and share them with others, who in turn can just as easily reference and readily use that same package in their Visual Studio 2010 .NET projects. Scott Hanselman has recently blogged in great detail on creating NuGet packages. For hosting a local NuGet package repository, Jon Galloway has a nice article update with a complete PowerShell script to simplify downloading the default feed packages, which can be accessed here. Information on my upcoming talk can be found at: http://www.fladotnet.com/Reg.aspx?EventID=514

    The following is a brief abstract of the talk: NuGet (formerly known as NuPack) is a free, open source, developer-focused package management system for the .NET platform, intent on simplifying the process of incorporating third-party libraries into a .NET application during development. NuGet is a member of the ASP.NET Gallery in the Outercurve Foundation. In this session we will:

    Discuss the concept, vision and goal behind NuGet
    See NuGet in action within an ASP.NET MVC project
    Look at the NuGet integration in Microsoft WebMatrix
    Create a NuGet package for our demo library
    Explore the NuGet project site
    Configure a NuGet package feed for a local network
    Solicit attendees' input and feedback on the tool

    Look forward to meeting you all there. --Sam
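    For readers who want to try the packaging flow ahead of the session, the command-line path looks roughly like this (a sketch assuming NuGet.exe is on your PATH and a hypothetical DemoLib library; we will walk through it live):

        nuget spec DemoLib          # generate a DemoLib.nuspec manifest to fill in
        nuget pack DemoLib.nuspec   # produce DemoLib.<version>.nupkg
        # a plain folder of .nupkg files can then be used as a local network package feed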

    Read the article

  • Mouse scroll wheel in Citrix

    - by molecule
    Hi all, one of my users is experiencing a problem whereby the mouse scroll does not work in the Citrix published application. I have tried installing mouse drivers on his local PC as well as swapping the USB mouse with a PS/2 mouse, but still no go. I have researched this issue for a few days but have not been able to find a fix. The PC is running Windows XP and the ICA client is version 10. I am pretty sure it's a problem local to that PC, since every other user using the application published on the same server is not having any issues. What could it be?

    Read the article

  • Network config with pppoe and Ubuntu 13.10

    - by Pavel
    I have an internet connection that uses PPPoE. In Windows I do not assign an IP address for my network, and I am able to connect using a username and password. I installed Ubuntu 13.10 today, ran pppoeconf, set up an IP address/network mask, changed the MAC address in the options, and I was able to connect to the Internet. I restarted the computer, and I got a message saying that the wired connection is not managed. The Internet was not working. I went to the NetworkManager configuration file and changed the option to true, but I still can't connect to the Internet. I am pretty new to Linux. How can I get my Internet working? Thanks

    Read the article

  • Software consultancy or in-house development?

    - by JefClaes
    What are the benefits and drawbacks of working as an in-house developer versus working as a consultant, and vice versa? I am pretty sure both breeds can be found on these forums, and I hope you are willing to share your experience. Edit: Let me clarify the question. I wonder what the experience is like as a developer. For example: as an in-house developer, you are able to learn from your mistakes. Being a consultant is often more challenging, because there is more variety in the problems you have to solve. PS: Although I realise that this is a subjective question, I don't necessarily see it as one of those bad subjective questions.

    Read the article

  • Why are all the tabs except MBeans disabled in JConsole?

    - by Kem Mason
    I'm trying to connect to JConsole on a server running:

        java version "1.6.0_0"
        OpenJDK Runtime Environment (IcedTea6 1.4.1) (6b14-1.4.1-0ubuntu12)
        OpenJDK 64-Bit Server VM (build 14.0-b08, mixed mode)

    from my local machine, running:

        java version "1.6.0_24"
        Java(TM) SE Runtime Environment (build 1.6.0_24-b07-334-10M3326)
        Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02-334, mixed mode)

    When I connect to JConsole on the server, all the tabs except the MBeans tab are disabled. If I run the same application on my local machine and use JConsole to connect to it there (still using the remote interface), all the tabs are enabled just fine. Everything I have seen only suggests the possibility that the server was 1.4 or earlier.

    Read the article

  • Booting Ubuntu EFI on Macbook Pro Apple Bootmanager

    - by user279771
    Following http://www.rodsbooks.com/ubuntu-efi/ and https://help.ubuntu.com/community/UEFIBooting#Detect_.28U.29EFI_firmware_processor_architecture, I have Ubuntu booting on my MacBook Pro with both rEFInd and the native Apple loader working. However, I would like to get the "Improving the Boot Method" approach (I believe it is called the kernel EFI stub loader) working with the Apple boot manager (the article only explains it for rEFInd), but I have found no articles explaining this. Can anyone help with this? Below is what I have:

        /dev/sda
          Apple partitions
          /boot ext3
          (root) ext3
          swap

    Side notes: from what I understand, I should have had my boot partition as FAT32/HFS+... I can always switch it or copy the kernel to the Apple EFI partition. (I tried creating /boot as an HFS+ partition during installation but was unable to. Even after installing hfsprogs, although I was able to create an HFS+ partition in GParted, I couldn't use the partition as /boot in Ubiquity.)

    Read the article

  • OpenVPN on Ubuntu 11.10 - unable to redirect default gateway

    - by Vladimir Kadalashvili
    I'm trying to connect to an OpenVPN server from my Ubuntu 11.10 machine. I use the following command to do it (as the root user):

        openvpn --config /home/vladimir/client.ovpn

    Everything seems to be OK; it connects normally without any warnings or errors, but when I try to browse the internet I see that I still use my own IP address, so the VPN connection doesn't work. When I run the openvpn command, it displays the following message among others:

        NOTE: unable to redirect default gateway -- Cannot read current default gateway from system

    I think this is the cause of the problem, but unfortunately I don't know how to fix it. Below is the full output of the openvpn command:

        Sat Jun 9 23:51:36 2012 OpenVPN 2.2.0 x86_64-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [eurephia] [MH] [PF_INET6] [IPv6 payload 20110424-2 (2.2RC2)] built on Jul 4 2011
        Sat Jun 9 23:51:36 2012 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables
        Sat Jun 9 23:51:36 2012 Control Channel Authentication: tls-auth using INLINE static key file
        Sat Jun 9 23:51:36 2012 Outgoing Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
        Sat Jun 9 23:51:36 2012 Incoming Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
        Sat Jun 9 23:51:36 2012 LZO compression initialized
        Sat Jun 9 23:51:36 2012 Control Channel MTU parms [ L:1542 D:166 EF:66 EB:0 ET:0 EL:0 ]
        Sat Jun 9 23:51:36 2012 Socket Buffers: R=[126976->200000] S=[126976->200000]
        Sat Jun 9 23:51:36 2012 Data Channel MTU parms [ L:1542 D:1450 EF:42 EB:135 ET:0 EL:0 AF:3/1 ]
        Sat Jun 9 23:51:36 2012 Local Options hash (VER=V4): '504e774e'
        Sat Jun 9 23:51:36 2012 Expected Remote Options hash (VER=V4): '14168603'
        Sat Jun 9 23:51:36 2012 UDPv4 link local: [undef]
        Sat Jun 9 23:51:36 2012 UDPv4 link remote: [AF_INET]94.229.78.130:1194
        Sat Jun 9 23:51:37 2012 TLS: Initial packet from [AF_INET]94.229.78.130:1194, sid=13fd921b b42072ab
        Sat Jun 9 23:51:37 2012 VERIFY OK: depth=1, /CN=OpenVPN_CA
        Sat Jun 9 23:51:37 2012 VERIFY OK: nsCertType=SERVER
        Sat Jun 9 23:51:37 2012 VERIFY OK: depth=0, /CN=OpenVPN_Server
        Sat Jun 9 23:51:38 2012 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Sat Jun 9 23:51:38 2012 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Sat Jun 9 23:51:38 2012 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Sat Jun 9 23:51:38 2012 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Sat Jun 9 23:51:38 2012 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
        Sat Jun 9 23:51:38 2012 [OpenVPN_Server] Peer Connection Initiated with [AF_INET]94.229.78.130:1194
        Sat Jun 9 23:51:40 2012 SENT CONTROL [OpenVPN_Server]: 'PUSH_REQUEST' (status=1)
        Sat Jun 9 23:51:40 2012 PUSH: Received control message: 'PUSH_REPLY,explicit-exit-notify,topology subnet,route-delay 5 30,dhcp-pre-release,dhcp-renew,dhcp-release,route-metric 101,ping 5,ping-restart 40,redirect-gateway def1,redirect-gateway bypass-dhcp,redirect-gateway autolocal,route-gateway 5.5.0.1,dhcp-option DNS 8.8.8.8,dhcp-option DNS 8.8.4.4,register-dns,comp-lzo yes,ifconfig 5.5.117.43 255.255.0.0'
        Sat Jun 9 23:51:40 2012 Unrecognized option or missing parameter(s) in [PUSH-OPTIONS]:4: dhcp-pre-release (2.2.0)
        Sat Jun 9 23:51:40 2012 Unrecognized option or missing parameter(s) in [PUSH-OPTIONS]:5: dhcp-renew (2.2.0)
        Sat Jun 9 23:51:40 2012 Unrecognized option or missing parameter(s) in [PUSH-OPTIONS]:6: dhcp-release (2.2.0)
        Sat Jun 9 23:51:40 2012 Unrecognized option or missing parameter(s) in [PUSH-OPTIONS]:16: register-dns (2.2.0)
        Sat Jun 9 23:51:40 2012 OPTIONS IMPORT: timers and/or timeouts modified
        Sat Jun 9 23:51:40 2012 OPTIONS IMPORT: explicit notify parm(s) modified
        Sat Jun 9 23:51:40 2012 OPTIONS IMPORT: LZO parms modified
        Sat Jun 9 23:51:40 2012 OPTIONS IMPORT: --ifconfig/up options modified
        Sat Jun 9 23:51:40 2012 OPTIONS IMPORT: route options modified
        Sat Jun 9 23:51:40 2012 OPTIONS IMPORT: route-related options modified
        Sat Jun 9 23:51:40 2012 OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified
        Sat Jun 9 23:51:40 2012 ROUTE: default_gateway=UNDEF
        Sat Jun 9 23:51:40 2012 TUN/TAP device tun0 opened
        Sat Jun 9 23:51:40 2012 TUN/TAP TX queue length set to 100
        Sat Jun 9 23:51:40 2012 do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0
        Sat Jun 9 23:51:40 2012 /sbin/ifconfig tun0 5.5.117.43 netmask 255.255.0.0 mtu 1500 broadcast 5.5.255.255
        Sat Jun 9 23:51:45 2012 NOTE: unable to redirect default gateway -- Cannot read current default gateway from system
        Sat Jun 9 23:51:45 2012 Initialization Sequence Completed

    Output of the route command:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        default         *               0.0.0.0         U     0      0        0 ppp0
        5.5.0.0         *               255.255.0.0     U     0      0        0 tun0
        link-local      *               255.255.0.0     U     1000   0        0 wlan0
        192.168.0.0     *               255.255.255.0   U     0      0        0 wlan0
        stream-ts1.net. *               255.255.255.255 UH    0      0        0 ppp0

    Output of the ifconfig command:

        eth0      Link encap:Ethernet  HWaddr 6c:62:6d:44:0d:12
                  inet6 addr: fe80::6e62:6dff:fe44:d12/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:54594 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:59897 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:44922107 (44.9 MB)  TX bytes:8839969 (8.8 MB)
                  Interrupt:41 Base address:0x8000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:4561 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:4561 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:685425 (685.4 KB)  TX bytes:685425 (685.4 KB)

        ppp0      Link encap:Point-to-Point Protocol
                  inet addr:213.206.63.44  P-t-P:213.206.34.4  Mask:255.255.255.255
                  UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1492  Metric:1
                  RX packets:53577 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:58892 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:3
                  RX bytes:43667387 (43.6 MB)  TX bytes:7504776 (7.5 MB)

        tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:5.5.117.43  P-t-P:5.5.117.43  Mask:255.255.0.0
                  UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:100
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        wlan0     Link encap:Ethernet  HWaddr 00:27:19:f6:b5:cf
                  inet addr:192.168.0.1  Bcast:0.0.0.0  Mask:255.255.255.0
                  inet6 addr: fe80::227:19ff:fef6:b5cf/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:12079 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:11178 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1483691 (1.4 MB)  TX bytes:4307899 (4.3 MB)

    So my question is: how do I make OpenVPN redirect the default gateway? Thanks!
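    For reference, the client.ovpn behind that command would look roughly like the sketch below, reconstructed from the log above (UDP to 94.229.78.130:1194, LZO compression, inline tls-auth); the certificate and key blocks are omitted placeholders, not the actual file:

        client
        dev tun
        proto udp
        remote 94.229.78.130 1194   # matches the 'UDPv4 link remote' line in the log
        comp-lzo                    # the log shows 'LZO compression initialized'
        key-direction 1
        # <ca>...</ca>, <cert>...</cert>, <key>...</key> and <tls-auth>...</tls-auth>
        # inline blocks go here (omitted)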

    Read the article

  • Pixel Perfect Collision Detection in Cocos2dx

    - by Happybirthday
    I am trying to port the pixel-perfect collision detection to Cocos2d-x. The original version was made for Cocos2D and can be found here: http://www.cocos2d-iphone.org/forums/topic/pixel-perfect-collision-detection-using-color-blending/

    Here is my code for the Cocos2d-x version:

        bool CollisionDetection::areTheSpritesColliding(cocos2d::CCSprite *spr1, cocos2d::CCSprite *spr2,
                                                        bool pp, CCRenderTexture *_rt) {
            bool isColliding = false;
            CCRect intersection;
            CCRect r1 = spr1->boundingBox();
            CCRect r2 = spr2->boundingBox();
            intersection = CCRectMake(fmax(r1.getMinX(), r2.getMinX()),
                                      fmax(r1.getMinY(), r2.getMinY()), 0, 0);
            intersection.size.width = fmin(r1.getMaxX(), r2.getMaxX()) - intersection.getMinX();
            intersection.size.height = fmin(r1.getMaxY(), r2.getMaxY()) - intersection.getMinY();

            // Look for simple bounding box collision
            if ((intersection.size.width > 0) && (intersection.size.height > 0)) {
                // If we're not checking for pixel perfect collisions, return true
                if (!pp) {
                    return true;
                }

                unsigned int x = intersection.origin.x;
                unsigned int y = intersection.origin.y;
                unsigned int w = intersection.size.width;
                unsigned int h = intersection.size.height;
                unsigned int numPixels = w * h;
                //CCLog("Intersection X and Y %d, %d", x, y);
                //CCLog("Number of pixels %d", numPixels);

                // Draw into the RenderTexture
                _rt->beginWithClear(0, 0, 0, 0);

                // Render both sprites: first one in RED and second one in GREEN
                glColorMask(1, 0, 0, 1);
                spr1->visit();
                glColorMask(0, 1, 0, 1);
                spr2->visit();
                glColorMask(1, 1, 1, 1);

                // Get color values of intersection area
                ccColor4B *buffer = (ccColor4B *)malloc(sizeof(ccColor4B) * numPixels);
                glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

                _rt->end();

                // Read buffer: a pixel with both red and green means the sprites overlap there
                unsigned int step = 1;
                for (unsigned int i = 0; i < numPixels; i += step) {
                    ccColor4B color = buffer[i];
                    if (color.r > 0 && color.g > 0) {
                        isColliding = true;
                        break;
                    }
                }

                // Free buffer memory
                free(buffer);
            }
            return isColliding;
        }

    My code works perfectly if I pass the "pp" parameter as false, that is, if I only do a bounding box collision, but I am not able to get it working correctly for the case when I need pixel-perfect collision. I think the OpenGL masking code is not working as I intended. Here is the code for "_rt":

        _rt = CCRenderTexture::create(visibleSize.width, visibleSize.height);
        _rt->setPosition(ccp(origin.x + visibleSize.width * 0.5f,
                             origin.y + visibleSize.height * 0.5f));
        this->addChild(_rt, 1000000);
        _rt->setVisible(true); // For testing

    I think I am making a mistake with the implementation of this CCRenderTexture. Can anyone guide me on what I am doing wrong? Thank you for your time :)

    Read the article

  • How to troubleshoot a remote wmi query/access failure?

    - by Roman
    Hi, I'm using PowerShell to query a remote computer in a domain for a WMI object, e.g.:

        gwmi -computer test -class win32_bios

    I get this error message:

        Value does not fall within the expected range

    Executing the query locally under the same user works fine. It seems to happen on both Windows 2003 and Windows 2008 systems. The user that runs the shell has admin rights on the local and remote server. I checked the WMI and DCOM permissions as far as I know how to do this; they seem to be the same on a server where it works and on another where it does not. I think it is not a network issue: all ports that are needed are open, and it also happens within the same subnet. When sniffing the traffic we see the following errors:

        RPC: c/o Alter Cont Resp: Call=0x2 Assoc Grp=0x4E4E Xmit=0x16D0 Recv=0x16D0
        Warning: GssAPIMechanism is not found, either caused by not reassembled, conversation off or filtering.

    And an error message from Kerberos:

        Kerberos: KRB_ERROR - KDC_ERR_BADOPTION (13) The option code in the packet is 0x40830000

    Any idea what I should look into?

    Read the article

  • The provider is not compatible with the version of Oracle Client

    - by jkrebsbach
    There was a recent upgrade of the Oracle client on a production web server housing several app pools, each containing multiple web sites. The upgrade consisted of adding the 11.2 client tools on the web server, but leaving the 9.2 and 10.2 versions alone. After learning the upgrade had occurred, I updated Oracle.DataAccess.dll on my site to the 11.2 DLL, and the web site ran smoothly. The next day I discovered the site was throwing an error:

        Oracle.DataAccess.Client.OracleException: The provider is not compatible with the version of Oracle Client

    The app pool connects with an installed version of the Oracle client provider and coordinates database interactions. If a request comes in for a web site configured to use an earlier version of the provider, that is the version the app pool will use for Oracle interactions. If a request is then made for a site configured for a different version of the provider, the app pool will see a conflict and throw the above error.

    This web site: http://splinter.com.au/blog/?p=156 describes the steps to grab the DLLs for a provider and bring them local (for Oracle 11.1), but even when I brought the DLLs for the Oracle provider into the local bin directory, the app pool would still determine the version of the Oracle client and continue to throw the exception.

    My only resolution for the above problem has been to have dedicated IIS app pools per version of the Oracle client being leveraged.

    Read the article

  • Using WSUS Admin Console from outside domain

    - by Nick
    Environment: I have a workstation on our primary domain. We have a primary WSUS server that is the upstream server of 8 different testing domains. The primary WSUS server is not part of any domain. Routing is configured between my workstation and the primary WSUS server. I can RDP to the primary WSUS server without any problem. The router is configured to forward any/any traffic between my workstation and the primary WSUS server. This WSUS server cannot be part of a domain due to external requirements (I can't change them) on the lab I work in. The version of WSUS is WSUS 3.0 SP2.

    What I want to do: I need to connect to the WSUS server with the WSUS admin console from my local workstation. The end goal is to connect via PowerShell and manage with that. I also need to take what I do here and port it to the 8 test domains so I can manage those WSUS servers. The routing is all in place so I can talk to the servers; it's just connecting to the WSUS console that is causing problems.

    The problem: I cannot get my workstation to connect to the WSUS console. I get one of the following errors depending on the setup.

    1st error:

        Cannot connect to 'WSUS'. You do not have the permissions required to access this WSUS server.
        To connect to the server you must be a member of the WSUS Administrators or WSUS Reporters security groups

    I also get the warning 7012 from the event log that says the same thing.

    2nd error:

        Cannot connect to 'WSUS'. The server may be using another port or different Secure Sockets Layer setting.

    What I have tried: So far I have configured IIS for Anonymous Authentication on both the WSUS Administration and ApiRemoting30 sites, using an account we will call WSUS_User. With this in place, I get the 1st error. When I do this, though, the local WSUS console cannot be used either. Reverting back to only Windows Authentication allows the local console to work, but the remote console now gives the 2nd error. I have confirmed the port, and that there is no SSL in use (which is a policy that is pushed from above, and that I cannot affect). I have placed WSUS_User in the groups mentioned above, but it still does not connect. I made sure WSUS_User has full access on C:\Program Files\Update Services and C:\Program Files\Update Services\WebServices. I am not very familiar with the workings of WSUS or IIS, and have gone as far as I can figure out on my own. Googling these errors all takes me to the same steps about Anonymous Authentication and configuring permissions on folders.

    Note: I have cross-posted this to StackOverflow as well.

    Read the article

  • How to allow wget to overwrite files

    - by Gnanam
    Using the wget command, how do I allow/instruct it to overwrite my local file every time, irrespective of how many times I invoke it? Let's say I want to download a file from the location http://server/folder/file1.html. Here, whenever I say wget http://server/folder/file1.html, I want this file1.html to be overwritten in my local system, irrespective of when it was changed, whether it was already downloaded, etc. My intention/use case here is that when I call wget, I'm very sure that I want to replace/overwrite the existing file. I've tried out the following options, but each option is intended/meant for some other purpose:

        -nc = --no-clobber
        -N  = turn on time-stamping
        -r  = turn on recursive retrieving
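    For what it's worth, the closest pattern I have found so far is the explicit output-file form, which (as I understand the man page) always truncates and rewrites the named file:

        wget -O file1.html http://server/folder/file1.html   # -O rewrites file1.html on every invocation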

    Read the article

  • How to tweet automatically when you push a new package to nuget.org

    - by Daniel Cazzulino
    Wouldn’t it be nice if your followers could be notified whenever you publish a new version of a NuGet package? Currently, nuget.org offers no support for this, but with the following tricks you can get it working without programming. The essential idea is to use the OData feed that nuget.org exposes to build an RSS feed with new items as you publish them, and have IFTTT do the tweeting from it. The tools we'll use to get this working are:

    LinqPad: to examine the nuget.org OData feed at https://nuget.org/api/v2
    Yahoo Pipes: to tweak the OData feed output so that it looks like a "plain" feed
    IFTTT: to consume the pipe output and auto-tweet on new items

    Exploring the NuGet OData Feed with LinqPad

    In order to build the query that will become your tweets' source, we will add a new connection in LinqPad by clicking on the "Add Connection" link... Read the full article.
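    If you want to peek at the raw feed before building the pipe, the same OData source can also be queried directly from the command line; a sketch (MyPackage is a placeholder id, and $filter/$orderby/$top are standard OData query options):

        curl "https://nuget.org/api/v2/Packages()?\$filter=Id+eq+'MyPackage'&\$orderby=Published+desc&\$top=5"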

    Read the article

  • PLESK Original php.ini files needed

    - by Saif Bechan
    Hello, I am using Plesk 9.2.3 with CentOS 5. I recently tried to change some things in the php.ini files, but I ended up clearing both files. Stupid mistake! Is there a possibility I can get the content of both files back? I am talking about the files:

        /etc/php.ini
        /usr/local/psa/admin/conf/php.ini

    or was it:

        /etc/php.ini
        /usr/local/psa/admin/conf/.php.ini

    When I browse to my Plesk panel I get the error message:

        Warning: Unknown: failed to open stream: No such file or directory in Unknown on line 0
        Fatal error: Unknown: Failed opening required 'auth.php3' (include_path='.:') in Unknown on line 0

    That's as much information as I can get; I hope someone can make some sense of all this. Regards

    EDIT: I ended up just reinstalling Plesk, which was fairly easy, something like yum install psa-plesk. There was a guide I followed, and everything worked out OK.

    Read the article

  • Essential management tools for a small/medium software development shop

    - by mikera
    I've recently started work with an organisation that is rapidly expanding and is recruiting for or growing several development teams (including two web-based products and a data warehouse/BI team). They are basically working to agile methodologies but haven't formalised a standard way of working yet. Despite the fact that it is early days, I've been surprised by the lack of tools being used to manage the development processes (e.g. no issue tracker, no tool to manage the product backlog, etc.). Although it's not my primary responsibility, I'd like to help them out with some recommendations on the most important tools they should get in place. What are the 3-5 top-priority tools to establish for management of a good development shop? Why are they necessary? How do they improve the software development process, and how do I justify them to my bosses?

    Read the article

  • Sitefinity SimpleImageSelector to return Url of image instead of Guid

    - by Joey Brenn
    It's been quite a while, but I've found something to blog about! I've been working with Sitefinity for some time now, and one of the things that I've struggled with (and I'm not the only one) is something that should be simple. See, all I want to do is be able to choose a picture from one of the libraries within Sitefinity and be able to display it via the GUID it returns or the path of the URL. I want to do this from my user control or a custom control.

    Well, it turns out that this is not built in; at least I've not been able to get anything working correctly until I found this post and was able to get it to work. However, I want to store the relative URL of the image, so I made a small change to make it return the URL instead of the GUID.

    To make the change, in the SimpleImageSelectorDialog.js file, on line 43, change the original line:

        var selectedValue = this.get_imageSelector().get_selectedImageId();

    to the new line:

        var selectedValue = this.get_imageSelector().get_selectedImageUrl();

    Of course, save and recompile the project, and now it will return the URL instead of the GUID of the image from the chosen album.

    Read the article

  • Resolving DNS queries for two disconnected, private, networks

    - by Mikeage
    I'm trying to set up two PCs (one Windows, one Linux, but my understanding is that this problem is more about DNS and less about the OS) as follows:

    Home network: 192.168.1.0/24
    VPN (via an OpenVPN server not within the home network): 192.168.2.0/24

    I would like a PC on both networks to be able to access three different types of site:

    1. Internet addresses
    2. Addresses on the home network
    3. Addresses on the VPN

    However, I'm not sure how/which DNS servers to use. If I prioritize my home DNS server, I can resolve (1) and (2), but not (3). If I prioritize my VPN DNS server, I can't resolve addresses of type (2). Of course, looking up addresses via nslookup and explicitly setting the correct server works, so I know my local DNS servers are OK. Is there any way I can set up my PCs to fall back on the second DNS server if there is no response? Alternatively, is there any way I can tell different queries to go to different servers [maybe by setting up different subdomains; foo.local.something vs. bar.vpn.something]? Thanks

    Read the article

  • DNS/Apache config to change ServerName on Mac OS X and LAN

    - by nickyc
    Hi, I want to run an Apache web server on a machine running OS X, with the server on a small intranet LAN with no internet connection. I've set up web sharing, and the web server is now accessible from other machines on the LAN using the custom name a.local, but what I would like to do is remove the .local part if possible. Does anyone know how I would go about configuring this in OS X? I wasn't sure if it would be the Apache httpd.conf file or some DNS config that would be required.

    Read the article
