Search Results

Search found 4110 results on 165 pages for 'vm'.


  • JVM throws OutOfMemory during GC though there is plenty of memory left...

    - by Shu L.
    I have my java application configured to use 5G memory. I got an OutOfMemory out of blue. I inspected the gc log and found plenty of memory left: young generation occupies 4% allocated space, tenure generation occupancy is 5% and perm generation is 43%. I am puzzled why JVM throws an OutOfMemory at the gc time. Does anyone know why this is happening? Your help is greatly appreciated. JVM memory and gc settings: -server -Xms5g -Xmx5g -Xss256k -XX:NewSize=2g -XX:MaxNewSize=2g -XX:+UseParallelOldGC -XX:+UseTLAB -XX:SurvivorRatio=8 -XX:TargetSurvivorRatio=90 -XX:+DisableExplicitGC gc.log 2009-09-19T03:34:59.741+0000: 92836.778: [GC Desired survivor size 152567808 bytes, new threshold 1 (max 15) [PSYoungGen: 1941492K-144057K(1947072K)] 3138022K-1340830K(5092800K), 0.1947640 secs] [Times: user=0.61 sys=0.01, real=0.19 secs] 2009-09-19T03:35:29.918+0000: 92866.954: [GC Desired survivor size 152109056 bytes, new threshold 1 (max 15) [PSYoungGen: 1941625K-144049K(1948608K)] 3138398K-1341080K(5094336K), 0.1942000 secs] [Times: user=0.61 sys=0.01, real=0.20 secs] 2009-09-19T03:35:56.883+0000: 92893.920: [GC Desired survivor size 156565504 bytes, new threshold 1 (max 15) [PSYoungGen: 1567994K-115427K(1915072K)] 2765026K-1312820K(5060800K), 0.1586320 secs] [Times: user=0.50 sys=0.01, real=0.16 secs] 2009-09-19T03:35:57.042+0000: 92894.079: [GC Desired survivor size 179961856 bytes, new threshold 1 (max 15) [PSYoungGen: 115427K-0K(1898560K)] 1312820K-1313987K(5044288K), 0.0775650 secs] [Times: user=0.42 sys=0.19, real=0.08 secs] 2009-09-19T03:35:57.120+0000: 92894.157: [Full GC [PSYoungGen: 0K-0K(1898560K)] [ParOldGen: 1313987K-159522K(3145728K)] 1313987K-159522K(5044288K) [PSPermGen: 20025K-19942K(40256K)], 0.56923 00 secs] [Times: user=2.18 sys=0.05, real=0.57 secs] 2009-09-19T03:35:57.690+0000: 92894.726: [GC Desired survivor size 197066752 bytes, new threshold 1 (max 15) [PSYoungGen: 0K-0K(1745728K)] 159522K-159522K(4891456K), 0.0072590 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] 2009-09-19T03:35:57.698+0000: 92894.734: [Full GC [PSYoungGen: 0K-0K(1745728K)] [ParOldGen: 159522K-158627K(3145728K)] 159522K-158627K(4891456K) [PSPermGen: 19942K-19934K(45504K)], 0.3280480 secs] [Times: user=1.46 sys=0.00, real=0.33 secs] Heap PSYoungGen total 1745728K, used 87233K [0x00002aab73650000, 0x00002aabf3650000, 0x00002aabf3650000) eden space 1745664K, 4% used [0x00002aab73650000,0x00002aab78b80778,0x00002aabddf10000) from space 64K, 0% used [0x00002aabddf10000,0x00002aabddf10000,0x00002aabddf20000) to space 192448K, 0% used [0x00002aabe7a60000,0x00002aabe7a60000,0x00002aabf3650000) ParOldGen total 3145728K, used 158627K [0x00002aaab3650000, 0x00002aab73650000, 0x00002aab73650000) object space 3145728K, 5% used [0x00002aaab3650000,0x00002aaabd138d28,0x00002aab73650000) PSPermGen total 45504K, used 19965K [0x00002aaaae250000, 0x00002aaab0ec0000, 0x00002aaab3650000) object space 45504K, 43% used [0x00002aaaae250000,0x00002aaaaf5cf668,0x00002aaab0ec0000) I am on 64-bit Linux and JRE 1.6.0_10: $uname -a Linux x 2.6.24-etchnhalf.1-amd64 #1 SMP Tue Oct 14 03:11:45 UTC 2008 x86_64 GNU/Linux $java -version java version "1.6.0_10" Java(TM) SE Runtime Environment (build 1.6.0_10-b33) Java HotSpot(TM) 64-Bit Server VM (build 11.0-b15, mixed mode)

    Read the article

  • Android app in development crashes on startup

    - by alexnavratil
    I currently develop an application which contains a custom ListView. I developed a custom array adapter. I think my app crashes here: ListView DirectoryView = (ListView) findViewById(R.id.fileListView); So i think the error is in the activity_main.xml: <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="fill_parent" android:layout_height="fill_parent" > <ListView android:id="@+id/fileListView" android:layout_width="fill_parent" android:layout_height="fill_parent" android:layout_weight="1" > </ListView> Here is my LogCat: 09-09 11:19:21.254: E/Trace(1152): error opening trace file: No such file or directory (2) 09-09 11:19:21.484: D/AndroidRuntime(1152): Shutting down VM 09-09 11:19:21.484: W/dalvikvm(1152): threadid=1: thread exiting with uncaught exception (group=0x40a13300) 09-09 11:19:21.504: E/AndroidRuntime(1152): FATAL EXCEPTION: main 09-09 11:19:21.504: E/AndroidRuntime(1152): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.teamdroid.explorer/com.teamdroid.explorer.MainActivity}: java.lang.NullPointerException 09-09 11:19:21.504: E/AndroidRuntime(1152): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2059) 09-09 11:19:21.504: E/AndroidRuntime(1152): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2084) 09-09 11:19:21.504: E/AndroidRuntime(1152): at android.app.ActivityThread.access$600(ActivityThread.java:130) 09-09 11:19:21.504: E/AndroidRuntime(1152): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1195) 09-09 11:19:21.504: E/AndroidRuntime(1152): at android.os.Handler.dispatchMessage(Handler.java:99) 09-09 11:19:21.504: E/AndroidRuntime(1152): at android.os.Looper.loop(Looper.java:137) 09-09 11:19:21.504: E/AndroidRuntime(1152): at android.app.ActivityThread.main(ActivityThread.java:4745) 09-09 11:19:21.504: E/AndroidRuntime(1152): at java.lang.reflect.Method.invokeNative(Native Method) 09-09 11:19:21.504: E/AndroidRuntime(1152): at java.lang.reflect.Method.invoke(Method.java:511) 09-09 11:19:21.504: E/AndroidRuntime(1152): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:786) 09-09 11:19:21.504: E/AndroidRuntime(1152): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:553) 09-09 11:19:21.504: E/AndroidRuntime(1152): at dalvik.system.NativeStart.main(Native Method) 09-09 11:19:21.504: E/AndroidRuntime(1152): Caused by: java.lang.NullPointerException 09-09 11:19:21.504: E/AndroidRuntime(1152): at android.app.Activity.findViewById(Activity.java:1825) 09-09 11:19:21.504: E/AndroidRuntime(1152): at com.teamdroid.explorer.listDirectory.getDirectory(listDirectory.java:20) 09-09 11:19:21.504: E/AndroidRuntime(1152): at com.teamdroid.explorer.MainActivity.onCreate(MainActivity.java:33) 09-09 11:19:21.504: E/AndroidRuntime(1152): at android.app.Activity.performCreate(Activity.java:5008) 09-09 11:19:21.504: E/AndroidRuntime(1152): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1079) 09-09 11:19:21.504: E/AndroidRuntime(1152): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2023) 09-09 11:19:21.504: E/AndroidRuntime(1152): ... 11 more Please can you help me. I am searching this error for 2 day. thanks!
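
    The trace points at Activity.findViewById throwing the NullPointerException from inside listDirectory.getDirectory, which usually means the lookup ran on an Activity whose window was never created (for example a helper constructed with new MainActivity() instead of the instance Android launched) or on an activity that has not called setContentView yet. A minimal sketch of a safer shape, assuming hypothetical names taken from the trace (MainActivity, a directory-listing helper, R.id.fileListView):

      import android.app.Activity;
      import android.os.Bundle;
      import android.widget.ListView;

      // Hypothetical helper: it receives the launched Activity instead of creating
      // its own, so findViewById runs against a window that actually exists.
      class DirectoryLister {
          private final ListView fileListView;

          DirectoryLister(Activity activity) {
              // Only valid after the activity's setContentView() has run.
              this.fileListView = (ListView) activity.findViewById(R.id.fileListView);
          }

          ListView getListView() {
              return fileListView;
          }
      }

      public class MainActivity extends Activity {
          @Override
          public void onCreate(Bundle savedInstanceState) {
              super.onCreate(savedInstanceState);
              setContentView(R.layout.activity_main);
              DirectoryLister lister = new DirectoryLister(this); // not new MainActivity()
          }
      }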

    Read the article

  • ActiveMQ 5.2.0 + REST + HTTP POST = java.lang.OutOfMemoryError

    - by Bruce Loth
    First off, I am a newbie when it comes to JMS & ActiveMQ. I have been looking into a messaging solution to serve as middleware for a message producer that will insert XML messages into a queue via HTTP POST. The producer is an existing system written in C++ that cannot be modified (so Java and the C++ API are out). Using the "demo" examples and some trial and error, I have cobbled together a working example of what I want to do (on a windows box). The web.xml I configured in a test directory under "webapps" specifies that the HTTP POST messages received from the producer are to be handled by the MessageServlet. I added a line for the text app in "activemq.xml" ('ow' is the test app dir): I created a test script to "insert" messages into the queue which works well. The problem I am running into is that it as I continue to insert messages via REST/HTTP POST, the memory consumption and thread count used by ActiveMQ continues to rise (It happens when I have timely consumers as well as slow or non-existent consumers). When memory consumption gets around 250MB's and the thread count exceeds 5000 (as shown in windows task manager), ActiveMQ crashes and I see this in the log: Exception in thread "ActiveMQ Transport Initiator: vm://localhost#3564" java.lang.OutOfMemoryError: unable to create new native thread It is as if Jetty is spawning a new thread to handle each HTTP POST and the thread never dies. I did look at this page: http://activemq.apache.org/javalangoutofmemory.html and tried but that didn't fix the problem (although I didn't fully understand the implications of the change either). Does anyone have any ideas? Thanks! Bruce Loth PS - I included the "test message producer" python script below for what it is worth. I created batches of 100 messages and continued to run the script manually from the command line while watching the memory consumption and thread count of ActiveMQ in task manager. def foo(): import httplib, urllib body = "<?xml version='1.0' encoding='UTF-8'?>\n \ <ROOT>\n \ [snip: xml deleted to save space] </ROOT>" headers = {"content-type": "text/xml", "content-length": str(len(body))} conn = httplib.HTTPConnection("127.0.0.1:8161") conn.request("POST", "/ow/message/RDRCP_Inbox?type=queue", body, headers) response = conn.getresponse() print response.status, response.reason data = response.read() conn.close() ## end method definition ## Begin test code count = 0; while(count < 100): # Test with batches of 100 msgs count += 1 foo()
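
    One thing worth noting about the reproduction: the script above opens a brand-new HTTP connection for every message, and on the broker side each accepted connection typically ties up a thread while it is being serviced, so the pattern itself drives the thread count up. Reusing persistent (keep-alive) connections from the producer, and capping the HTTP connector's thread pool on the broker, keeps that growth bounded. Below is a minimal Java sketch of a producer that lets the JDK reuse the underlying socket; the URL and payload mirror the ones in the script and are otherwise illustrative:

      import java.io.OutputStream;
      import java.net.HttpURLConnection;
      import java.net.URL;

      public class RestProducer {
          public static void main(String[] args) throws Exception {
              // Same endpoint as the test script; adjust host/queue as needed.
              URL url = new URL("http://127.0.0.1:8161/ow/message/RDRCP_Inbox?type=queue");
              byte[] payload = "<?xml version='1.0' encoding='UTF-8'?><ROOT>test</ROOT>".getBytes("UTF-8");
              for (int i = 0; i < 100; i++) {
                  HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                  conn.setDoOutput(true);
                  conn.setRequestMethod("POST");
                  conn.setRequestProperty("Content-Type", "text/xml");
                  OutputStream out = conn.getOutputStream();
                  out.write(payload);
                  out.close();
                  // Reading the response and closing the stream (without calling
                  // disconnect) lets HttpURLConnection return the socket to its
                  // keep-alive cache, so the broker is not handed a brand-new
                  // connection for every message.
                  System.out.println(conn.getResponseCode() + " " + conn.getResponseMessage());
                  conn.getInputStream().close();
              }
          }
      }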

    Read the article

  • Android: simple WebView code. Error: Unable to start Activity

    - by vnshetty
    I have following code: public class reader extends Activity { WebView mWebView; String mFilename; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); mWebView = (WebView) findViewById(R.id.webView1); setContentView(R.layout.webview); mWebView.loadUrl("http://www.google.com"); } } When I run this, the emulator shows "Sorry:.. mireader Stopped unexpectedly" error.. Why ? DDMS log dump: 03-02 12:25:26.430: INFO/AndroidRuntime(2837): NOTE: attach of thread 'Binder Thread #3' failed 03-02 12:25:26.729: INFO/ActivityManager(72): Start proc com.mireader for activity com.mireader/.reader: pid=2846 uid=10032 gids={3003, 1015} 03-02 12:25:29.621: DEBUG/AndroidRuntime(2846): Shutting down VM 03-02 12:25:29.621: WARN/dalvikvm(2846): threadid=1: thread exiting with uncaught exception (group=0x4001d800) 03-02 12:25:29.660: ERROR/AndroidRuntime(2846): FATAL EXCEPTION: main 03-02 12:25:29.660: ERROR/AndroidRuntime(2846): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.mireader/com.mireader.reader}: java.lang.NullPointerException 03-02 12:25:29.660: ERROR/AndroidRuntime(2846): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2663) 03-02 12:25:29.660: ERROR/AndroidRuntime(2846): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2679) 03-02 12:25:29.660: ERROR/AndroidRuntime(2846): at android.app.ActivityThread.access$2300(ActivityThread.java:125) 03-02 12:25:29.660: ERROR/AndroidRuntime(2846): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2033) 03-02 12:25:29.660: ERROR/AndroidRuntime(2846): at android.os.Handler.dispatchMessage(Handler.java:99) 03-02 12:25:29.660: ERROR/AndroidRuntime(2846): at android.os.Looper.loop(Looper.java:123) 03-02 12:25:29.660: ERROR/AndroidRuntime(2846): at android.app.ActivityThread.main(ActivityThread.java:4627) 03-02 12:25:29.660: ERROR/AndroidRuntime(2846): at java.lang.reflect.Method.invokeNative(Native Method) 03-02 12:25:29.660: ERROR/AndroidRuntime(2846): at java.lang.reflect.Method.invoke(Method.java:521) 03-02 12:25:29.660: ERROR/AndroidRuntime(2846): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:868) 03-02 12:25:29.660: ERROR/AndroidRuntime(2846): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:626) 03-02 12:25:29.660: ERROR/AndroidRuntime(2846): at dalvik.system.NativeStart.main(Native Method) 03-02 12:25:29.660: ERROR/AndroidRuntime(2846): Caused by: java.lang.NullPointerException 03-02 12:25:29.660: ERROR/AndroidRuntime(2846): at com.mireader.reader.onCreate(reader.java:36) 03-02 12:25:29.660: ERROR/AndroidRuntime(2846): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1047) 03-02 12:25:29.660: ERROR/AndroidRuntime(2846): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2627) 03-02 12:25:29.660: ERROR/AndroidRuntime(2846): ... 11 more 03-02 12:25:29.699: WARN/ActivityManager(72): Force finishing activity com.mireader/.reader 03-02 12:25:30.550: WARN/ActivityManager(72): Activity pause timeout for HistoryRecord{44f98868 com.mireader/.reader} 03-02 12:25:33.230: DEBUG/dalvikvm(200): GC_EXPLICIT freed 164 objects / 11312 bytes in 7516ms 03-02 12:25:35.020: INFO/Process(2846): Sending signal. 
PID: 2846 SIG: 9 03-02 12:25:35.080: WARN/InputManagerService(72): Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy@44fc35e0 03-02 12:25:35.809: INFO/ActivityManager(72): Process com.mireader (pid 2846) has died. 03-02 12:25:37.960: DEBUG/dalvikvm(320): GC_EXPLICIT freed 83 objects / 4000 bytes in 78ms 03-02 12:25:42.674: WARN/ActivityManager(72): Activity destroy timeout for HistoryRecord{44f98868 com.mireader/.reader}
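
    For what it is worth, the trace puts the NullPointerException at reader.java:36, the loadUrl call. findViewById only searches the layout installed by the most recent setContentView, so looking webView1 up while R.layout.main is still the content view returns null. A minimal sketch of the corrected ordering, assuming webView1 is declared in webview.xml:

      import android.app.Activity;
      import android.os.Bundle;
      import android.webkit.WebView;

      public class reader extends Activity {
          WebView mWebView;

          /** Called when the activity is first created. */
          @Override
          public void onCreate(Bundle savedInstanceState) {
              super.onCreate(savedInstanceState);
              // Install the layout that actually contains webView1 first,
              // then look the view up; a single setContentView is enough.
              setContentView(R.layout.webview);
              mWebView = (WebView) findViewById(R.id.webView1);
              mWebView.loadUrl("http://www.google.com");
              // Loading a remote page also needs android.permission.INTERNET
              // declared in AndroidManifest.xml.
          }
      }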

    Read the article

  • Can't find method in the activity

    - by Synesso
    I'm starting with Scala + Android. I'm trying to wire a button action to a button without the activity implementing View.OnClickListener. The button click fails at runtime because the method cannot be found. The document I'm working through says that I need only declare a public void method taking a View on the action, and use that method name in the layout. What have I done wrong? MainActivity.scala package net.badgerhunt.hwa import android.app.Activity import android.os.Bundle import android.widget.Button import android.view.View import java.util.Date class MainActivity extends Activity { override def onCreate(savedInstanceState: Bundle) = { super.onCreate(savedInstanceState) setContentView(R.layout.main) } def calculate(button: View): Unit = println("calculating with %s ...".format(button)) } res/layout/main.xml <?xml version="1.0" encoding="utf-8"?> <Button xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/button" android:text="" android:onClick="calculate" android:layout_width="fill_parent" android:layout_height="fill_parent"/> the failure onclick D/AndroidRuntime( 362): Shutting down VM W/dalvikvm( 362): threadid=3: thread exiting with uncaught exception (group=0x4001b188) E/AndroidRuntime( 362): Uncaught handler: thread main exiting due to uncaught exception E/AndroidRuntime( 362): java.lang.IllegalStateException: Could not find a method calculate(View) in the activity E/AndroidRuntime( 362): at android.view.View$1.onClick(View.java:2020) E/AndroidRuntime( 362): at android.view.View.performClick(View.java:2364) E/AndroidRuntime( 362): at android.view.View.onTouchEvent(View.java:4179) E/AndroidRuntime( 362): at android.widget.TextView.onTouchEvent(TextView.java:6540) E/AndroidRuntime( 362): at android.view.View.dispatchTouchEvent(View.java:3709) E/AndroidRuntime( 362): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884) E/AndroidRuntime( 362): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884) E/AndroidRuntime( 362): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884) E/AndroidRuntime( 362): at com.android.internal.policy.impl.PhoneWindow$DecorView.superDispatchTouchEvent(PhoneWindow.java:1659) E/AndroidRuntime( 362): at com.android.internal.policy.impl.PhoneWindow.superDispatchTouchEvent(PhoneWindow.java:1107) E/AndroidRuntime( 362): at android.app.Activity.dispatchTouchEvent(Activity.java:2061) E/AndroidRuntime( 362): at com.android.internal.policy.impl.PhoneWindow$DecorView.dispatchTouchEvent(PhoneWindow.java:1643) E/AndroidRuntime( 362): at android.view.ViewRoot.handleMessage(ViewRoot.java:1691) E/AndroidRuntime( 362): at android.os.Handler.dispatchMessage(Handler.java:99) E/AndroidRuntime( 362): at android.os.Looper.loop(Looper.java:123) E/AndroidRuntime( 362): at android.app.ActivityThread.main(ActivityThread.java:4363) E/AndroidRuntime( 362): at java.lang.reflect.Method.invokeNative(Native Method) E/AndroidRuntime( 362): at java.lang.reflect.Method.invoke(Method.java:521) E/AndroidRuntime( 362): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860) E/AndroidRuntime( 362): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618) E/AndroidRuntime( 362): at dalvik.system.NativeStart.main(Native Method) E/AndroidRuntime( 362): Caused by: java.lang.NoSuchMethodException: calculate E/AndroidRuntime( 362): at java.lang.ClassCache.findMethodByName(ClassCache.java:308) E/AndroidRuntime( 362): at java.lang.Class.getMethod(Class.java:1014) E/AndroidRuntime( 362): at 
android.view.View$1.onClick(View.java:2017) E/AndroidRuntime( 362): ... 20 more
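
    The android:onClick attribute makes the framework look the handler up by reflection on the Activity class at click time, and the trace shows exactly that lookup (ClassCache.findMethodByName) failing with NoSuchMethodException. One way to sidestep the reflective lookup entirely is to wire the listener in onCreate; here is a minimal sketch in Java terms (the Scala translation is direct), reusing the button id and layout from the question:

      import android.app.Activity;
      import android.os.Bundle;
      import android.view.View;
      import android.widget.Button;

      public class MainActivity extends Activity {
          @Override
          public void onCreate(Bundle savedInstanceState) {
              super.onCreate(savedInstanceState);
              setContentView(R.layout.main);
              Button button = (Button) findViewById(R.id.button);
              // Explicit wiring: no android:onClick attribute in the layout and
              // therefore no reflective method lookup when the button is tapped.
              button.setOnClickListener(new View.OnClickListener() {
                  public void onClick(View v) {
                      calculate(v);
                  }
              });
          }

          public void calculate(View button) {
              System.out.println("calculating with " + button + " ...");
          }
      }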

    Read the article

  • Android Facebook SDK: post a message without showing the Facebook page

    - by someone_ smiley
    I am trying to integrate Facebook to my Android app for social post. I have downloaded latest Facebook sdk from here and apply all setup require to post to facebook. Now i can post to facebook. But problem is that, when i run sample program from facebook sdk, a browser like page is open and user have to enter message himself there. but i dont want this page to showed up. i want a fixed message to post directly without opening facebook dialog box. But if there is noway to avoid this, please tell me how can i fixed certain part of message so that user can't modified it. Thanks in Advance Edit: this is message i got after using this project 04-10 11:44:34.691: I/dalvikvm(719): Failed resolving Lnet/xeomax/TestRocket/TestRocket; interface 22 'Lnet/xeomax/FBRocket/LoginListener;' 04-10 11:44:34.691: W/dalvikvm(719): Link of class 'Lnet/xeomax/TestRocket/TestRocket;' failed 04-10 11:44:34.691: D/AndroidRuntime(719): Shutting down VM 04-10 11:44:34.701: W/dalvikvm(719): threadid=1: thread exiting with uncaught exception (group=0x40015560) 04-10 11:44:34.781: E/AndroidRuntime(719): FATAL EXCEPTION: main 04-10 11:44:34.781: E/AndroidRuntime(719): java.lang.RuntimeException: Unable to instantiate activity ComponentInfo{net.xeomax.TestRocket/net.xeomax.TestRocket.TestRocket}: java.lang.ClassNotFoundException: net.xeomax.TestRocket.TestRocket in loader dalvik.system.PathClassLoader[/data/app/net.xeomax.TestRocket-1.apk] 04-10 11:44:34.781: E/AndroidRuntime(719): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1569) 04-10 11:44:34.781: E/AndroidRuntime(719): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1663) 04-10 11:44:34.781: E/AndroidRuntime(719): at android.app.ActivityThread.access$1500(ActivityThread.java:117) 04-10 11:44:34.781: E/AndroidRuntime(719): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:931) 04-10 11:44:34.781: E/AndroidRuntime(719): at android.os.Handler.dispatchMessage(Handler.java:99) 04-10 11:44:34.781: E/AndroidRuntime(719): at android.os.Looper.loop(Looper.java:130) 04-10 11:44:34.781: E/AndroidRuntime(719): at android.app.ActivityThread.main(ActivityThread.java:3683) 04-10 11:44:34.781: E/AndroidRuntime(719): at java.lang.reflect.Method.invokeNative(Native Method) 04-10 11:44:34.781: E/AndroidRuntime(719): at java.lang.reflect.Method.invoke(Method.java:507) 04-10 11:44:34.781: E/AndroidRuntime(719): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:839) 04-10 11:44:34.781: E/AndroidRuntime(719): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:597) 04-10 11:44:34.781: E/AndroidRuntime(719): at dalvik.system.NativeStart.main(Native Method) 04-10 11:44:34.781: E/AndroidRuntime(719): Caused by: java.lang.ClassNotFoundException: net.xeomax.TestRocket.TestRocket in loader dalvik.system.PathClassLoader[/data/app/net.xeomax.TestRocket-1.apk] 04-10 11:44:34.781: E/AndroidRuntime(719): at dalvik.system.PathClassLoader.findClass(PathClassLoader.java:240) 04-10 11:44:34.781: E/AndroidRuntime(719): at java.lang.ClassLoader.loadClass(ClassLoader.java:551) 04-10 11:44:34.781: E/AndroidRuntime(719): at java.lang.ClassLoader.loadClass(ClassLoader.java:511) 04-10 11:44:34.781: E/AndroidRuntime(719): at android.app.Instrumentation.newActivity(Instrumentation.java:1021) 04-10 11:44:34.781: E/AndroidRuntime(719): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1561) 04-10 11:44:34.781: E/AndroidRuntime(719): ... 11 more

    Read the article

  • Available Coroutine Libraries in Java

    - by JUST MY correct OPINION
    I would like to do some stuff in Java that would be clearer if written using concurrent routines, but for which full-on threads are serious overkill. The answer, of course, is the use of coroutines, but there doesn't appear to be any coroutine support in the standard Java libraries and a quick Google on it brings up tantalising hints here or there, but nothing substantial. Here's what I've found so far: JSIM has a coroutine class, but it looks pretty heavyweight and conflates, seemingly, with threads at points. The point of this is to reduce the complexity of full-on threading, not to add to it. Further I'm not sure that the class can be extracted from the library and used independently. Xalan has a coroutine set class that does coroutine-like stuff, but again it's dubious if this can be meaningfully extracted from the overall library. It also looks like it's implemented as a tightly-controlled form of thread pool, not as actual coroutines. There's a Google Code project which looks like what I'm after, but if anything it looks more heavyweight than using threads would be. I'm basically nervous of something that requires software to dynamically change the JVM bytecode at runtime to do its work. This looks like overkill and like something that will cause more problems than coroutines would solve. Further it looks like it doesn't implement the whole coroutine concept. By my glance-over it gives a yield feature that just returns to the invoker. Proper coroutines allow yields to transfer control to any known coroutine directly. Basically this library, heavyweight and scary as it is, only gives you support for iterators, not fully-general coroutines. The promisingly-named Coroutine for Java fails because it's a platform-specific (obviously using JNI) solution. And that's about all I've found. I know about the native JVM support for coroutines in the Da Vinci Machine and I also know about the JNI continuations trick for doing this. These are not really good solutions for me, however, as I would not necessarily have control over which VM or platform my code would run on. (Indeed any bytecode manipulation system would suffer similar problems -- it would be best were this pure Java if possible. Runtime bytecode manipulation would restrict me from using this on Android, for example.) So does anybody have any pointers? Is this even possible? If not, will it be possible in Java 7? Edited to add: Just to ensure that confusion is contained, this is a related question to my other one, but not the same. This one is looking for an existing implementation in a bid to avoid reinventing the wheel unnecessarily. The other one is a question relating to how one would go about implementing coroutines in Java should this question prove unanswerable. The intent is to keep different questions on different threads.
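
    As a concrete reference point for the distinction drawn above, the iterator-style "yield back to the invoker" behaviour can already be had from the standard library with a thread and a SynchronousQueue handoff. The sketch below is only an illustration of that limited model, and it pays for a real thread per generator, which is precisely the overhead the question wants to avoid:

      import java.util.concurrent.SynchronousQueue;

      // Illustration only: a generator-style "coroutine" where the producer thread
      // blocks on each yield until the consumer asks for the next value, so control
      // only ever returns to the invoker -- the limited model described above.
      public class Generator {
          private final SynchronousQueue<Integer> handoff = new SynchronousQueue<Integer>();

          public Generator() {
              Thread producer = new Thread(new Runnable() {
                  public void run() {
                      try {
                          for (int i = 0; ; i++) {
                              handoff.put(i); // "yield i" to whoever called next()
                          }
                      } catch (InterruptedException e) {
                          // consumer is gone; let the thread die
                      }
                  }
              });
              producer.setDaemon(true);
              producer.start();
          }

          public int next() throws InterruptedException {
              return handoff.take();
          }

          public static void main(String[] args) throws InterruptedException {
              Generator numbers = new Generator();
              for (int i = 0; i < 5; i++) {
                  System.out.println(numbers.next());
              }
          }
      }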

    Read the article

  • Autocomplete server-side implementation

    - by toluju
    What is a fast and efficient way to implement the server-side component for an autocomplete feature in an html input box? I am writing a service to autocomplete user queries in our web interface's main search box, and the completions are displayed in an ajax-powered dropdown. The data we are running queries against is simply a large table of concepts our system knows about, which matches roughly with the set of wikipedia page titles. For this service obviously speed is of utmost importance, as responsiveness of the web page is important to the user experience. The current implementation simply loads all concepts into memory in a sorted set, and performs a simple log(n) lookup on a user keystroke. The tailset is then used to provide additional matches beyond the closest match. The problem with this solution is that it does not scale. It currently is running up against the VM heap space limit (I've set -Xmx2g, which is about the most we can push on our 32 bit machines), and this prevents us from expanding our concept table or adding more functionality. Switching to 64-bit VMs on machines with more memory isn't an immediate option. I've been hesitant to start working on a disk-based solution as I am concerned that disk seek time will kill performance. Are there possible solutions that will let me scale better, either entirely in memory or with some fast disk-backed implementations? Edits: @Gandalf: For our use case it is important the the autocompletion is comprehensive and isn't just extra help for the user. As for what we are completing, it is a list of concept-type pairs. For example, possible entries are [("Microsoft", "Software Company"), ("Jeff Atwood", "Programmer"), ("StackOverflow.com", "Website")]. We are using Lucene for the full search once a user selects an item from the autocomplete list, but I am not yet sure Lucene would work well for the autocomplete itself. @Glen: No databases are being used here. When I'm talking about a table I just mean the structured representation of my data. @Jason Day: My original implementation to this problem was to use a Trie, but the memory bloat with that was actually worse than the sorted set due to needing a large number of object references. I'll read on the ternary search trees to see if it could be of use.
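
    For reference, here is a minimal sketch of the sorted-set lookup described above (lower-cased strings only; the real service pairs each concept with a type). The stated problem is the memory ceiling rather than correctness, so this is just the baseline being scaled past:

      import java.util.ArrayList;
      import java.util.List;
      import java.util.NavigableSet;
      import java.util.TreeSet;

      // Concepts live in a sorted set; tailSet() jumps to the first entry >= the
      // typed prefix in O(log n), and iteration collects matches until the prefix
      // no longer holds.
      public class PrefixCompleter {
          private final NavigableSet<String> concepts = new TreeSet<String>();

          public void add(String concept) {
              concepts.add(concept.toLowerCase());
          }

          public List<String> complete(String prefix, int limit) {
              String p = prefix.toLowerCase();
              List<String> matches = new ArrayList<String>();
              for (String candidate : concepts.tailSet(p, true)) {
                  if (matches.size() >= limit || !candidate.startsWith(p)) {
                      break;
                  }
                  matches.add(candidate);
              }
              return matches;
          }

          public static void main(String[] args) {
              PrefixCompleter completer = new PrefixCompleter();
              completer.add("Microsoft");
              completer.add("Jeff Atwood");
              completer.add("StackOverflow.com");
              System.out.println(completer.complete("micro", 10));
          }
      }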

    Read the article

  • Tomcat JMX connection - Authentication failed.

    - by ziggy
    I am having some problems setting up Tomcat for JMX. I added the following properties to CATALINA_OPTS="-Dcom.sun.management.jmxremote.port=18070 -Dcom.sun.management.jmxremote.password.file=$CATALINA_BASE/conf/jmxremote.password -Dcom.sun .management.jmxremote.ssl=false" And have added the jmxremote.password file in to the conf directory. I wrote a client tool that connects to the JMX server running on port 18070. When i run the client program i get the following error. Exception in thread "main" java.lang.SecurityException: Authentication failed! Credentials required at com.sun.jmx.remote.security.JMXPluggableAuthenticator.authenticationFailure(JMXPluggableAuthenticator.java:193) at com.sun.jmx.remote.security.JMXPluggableAuthenticator.authenticate(JMXPluggableAuthenticator.java:145) at sun.management.jmxremote.ConnectorBootstrap$AccessFileCheckerAuthenticator.authenticate(ConnectorBootstrap.java:185) at javax.management.remote.rmi.RMIServerImpl.doNewClient(RMIServerImpl.java:213) at javax.management.remote.rmi.RMIServerImpl.newClient(RMIServerImpl.java:180) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:305) at sun.rmi.transport.Transport$1.run(Transport.java:159) at java.security.AccessController.doPrivileged(Native Method) at sun.rmi.transport.Transport.serviceCall(Transport.java:155) at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:535) at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:790) at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:649) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907) at java.lang.Thread.run(Thread.java:619) at sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:255) at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:233) at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:142) at javax.management.remote.rmi.RMIServerImpl_Stub.newClient(Unknown Source) at javax.management.remote.rmi.RMIConnector.getConnection(RMIConnector.java:2312) at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:277) at javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:248) at com.bt.c21sc.c21tkprobe.accessors.C21TkProbeJmxDAO.connect(Unknown Source) at com.bt.c21sc.c21tkprobe.service.C21TkProbeBD.execute(Unknown Source) at com.bt.c21sc.c21tkprobe.C21AppserverProbe.main(Unknown Source) If i change the CATALINA_OPTS properties to CATALINA_OPTS="-Dcom.sun.management.jmxremote.port=18070 -Dcom.sun.management.jmxremote.password.file=$CATALINA_BASE/conf/jmxremote.password -Dcom.sun .management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false" Then it works fine. I think what i am confused of is what is classed as remote access. I am running the client program away from the Tomcat instance but both Tomcat and the client tool are on the same machine (i.e. different virtual machines but same environemnt). I thought i had to configure the remote authentication if i access the JMX server remotely from a different machine. 
By remote access do they mean accessing the JMX server from any VM either locally or remotely?
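
    On the terminology: the jmxremote.* settings guard the RMI connector listening on the configured port, so any client that comes in through that port counts as "remote" for authentication purposes, even when it runs on the same machine in a different VM; only the local attach mechanism (for instance jconsole attaching by PID) bypasses them. With authentication left on, the client has to supply credentials that match an entry in jmxremote.password. A minimal sketch, with placeholder user name and password:

      import java.util.HashMap;
      import java.util.Map;
      import javax.management.MBeanServerConnection;
      import javax.management.remote.JMXConnector;
      import javax.management.remote.JMXConnectorFactory;
      import javax.management.remote.JMXServiceURL;

      // Passing credentials in the environment map is what the
      // "Authentication failed! Credentials required" exception is asking for.
      // "controlRole"/"secret" are placeholders for an entry in jmxremote.password
      // (with the matching role in jmxremote.access).
      public class JmxClient {
          public static void main(String[] args) throws Exception {
              JMXServiceURL url = new JMXServiceURL(
                      "service:jmx:rmi:///jndi/rmi://localhost:18070/jmxrmi");
              Map<String, Object> env = new HashMap<String, Object>();
              env.put(JMXConnector.CREDENTIALS, new String[] {"controlRole", "secret"});
              JMXConnector connector = JMXConnectorFactory.connect(url, env);
              try {
                  MBeanServerConnection mbsc = connector.getMBeanServerConnection();
                  System.out.println("MBean count: " + mbsc.getMBeanCount());
              } finally {
                  connector.close();
              }
          }
      }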

    Read the article

  • Coroutines in Java

    - by JUST MY correct OPINION
    I would like to do some stuff in Java that would be clearer if written using concurrent routines, but for which full-on threads are serious overkill. The answer, of course, is the use of coroutines, but there doesn't appear to be any coroutine support in the standard Java libraries and a quick Google on it brings up tantalising hints here or there, but nothing substantial. Here's what I've found so far: JSIM has a coroutine class, but it looks pretty heavyweight and conflates, seemingly, with threads at points. The point of this is to reduce the complexity of full-on threading, not to add to it. Further I'm not sure that the class can be extracted from the library and used independently. Xalan has a coroutine set class that does coroutine-like stuff, but again it's dubious if this can be meaningfully extracted from the overall library. It also looks like it's implemented as a tightly-controlled form of thread pool, not as actual coroutines. There's a Google Code project which looks like what I'm after, but if anything it looks more heavyweight than using threads would be. I'm basically nervous of something that requires software to dynamically change the JVM bytecode at runtime to do its work. This looks like overkill and like something that will cause more problems than coroutines would solve. Further it looks like it doesn't implement the whole coroutine concept. By my glance-over it gives a yield feature that just returns to the invoker. Proper coroutines allow yields to transfer control to any known coroutine directly. Basically this library, heavyweight and scary as it is, only gives you support for iterators, not fully-general coroutines. The promisingly-named Coroutine for Java fails because it's a platform-specific (obviously using JNI) solution. And that's about all I've found. I know about the native JVM support for coroutines in the Da Vinci Machine and I also know about the JNI continuations trick for doing this. These are not really good solutions for me, however, as I would not necessarily have control over which VM or platform my code would run on. (Indeed any bytecode manipulation system would suffer similar problems -- it would be best were this pure Java if possible. Runtime bytecode manipulation would restrict me from using this on Android, for example.) So does anybody have any pointers? Is this even possible? If not, will it be possible in Java 7?

    Read the article

  • Runtime.exec causes duplicate JVM to hang indefinitely until killed (Solaris 10)

    - by John
    All, We are running a J2EE application on WebLogic server 9.2 MP2 with a jrockit 64-bit JVM (27.3.1) on Solaris 10. We call use runtime.exec to call an executable called jfmerge to create PDF documents. We have found that in Solaris, when runtime.exec is called, a duplicate JVM is temporarily spawned to kick off the jfmerge process. While this is inefficient (our JVM is 5 GB, thus the duplicated shell JVM is also 5 GB), the major problem lies in the fact that when there is heavy load on this functionality (PDF generation) in our application, sometimes the duplicated JVM never exits. When the JVM hangs, the servers create large issues (extreme application slowness and terminated user sessions) as the entire duplicate JVM get's all of its 5 GB of process size written to disk swap. We have noted the following hung thread correlated with a hung JVM process until the process is manually killed: "[STUCK] ExecuteThread: '17' for queue: 'weblogic.kernel.Default (self-tuning)'" id=3463 idx=0x158 tid=3460 prio=1 alive, in native, daemon at jrockit/io/FileNativeIO.readBytesPinned(Ljava/io/FileDescriptor;[BII)I(Native Method) at jrockit/io/FileNativeIO.readBytes(FileNativeIO.java:30) at java/io/FileInputStream.readBytes([BII)I(FileInputStream.java) at java/io/FileInputStream.read(FileInputStream.java:194) at java/lang/UNIXProcess$DeferredCloseInputStream.read(UNIXProcess.java:227) at java/io/BufferedInputStream.fill(BufferedInputStream.java:218) at java/io/BufferedInputStream.read(BufferedInputStream.java:235) ^-- Holding lock: java/io/BufferedInputStream@0xfffffffec6510470[thin lock] at gov/v3/common/formgeneration/sessionbean/FormsBean.getProcessStatus(FormsBean.java:809) at gov/v3/common/formgeneration/sessionbean/FormsBean.createPDF(FormsBean.java:750) at gov/v3/common/formgeneration/sessionbean/FormsBean.getTemplateDetails(FormsBean.java:450) at gov/v3/common/formgeneration/sessionbean/FormsBean.generateSinglePDF(FormsBean.java:1371) at gov/v3/common/formgeneration/sessionbean/FormsBean.generatePDF(FormsBean.java:263) at gov/v3/common/formgeneration/sessionbean/FormsBean.endorseDocument(FormsBean.java:2377) at gov/v3/common/formgeneration/sessionbean/Forms_qaco28_EOImpl.endorseDocument(Forms_qaco28_EOImpl.java:214) at gov/v3/delegates/common/FormsAndNoticesDelegate.endorseDocument(FormsAndNoticesDelegate.java:128) at gov/v3/actions/common/EndorseDocumentAction.executeRequest(EndorseDocumentAction.java:68) at gov/v3/fwk/controller/struts/action/V3CommonDispatchAction.dispatchToExecuteMethod(V3CommonDispatchAction.java:532) at gov/v3/fwk/controller/struts/action/V3CommonDispatchAction.executeBaseAction(V3CommonDispatchAction.java:336) at gov/v3/fwk/controller/struts/action/V3BaseDispatchAction.execute(V3BaseDispatchAction.java:69) at org/apache/struts/action/RequestProcessor.processActionPerform(RequestProcessor.java:484) at gov/v3/fwk/controller/struts/requestprocessor/V3TilesRequestProcessor.processActionPerform(V3TilesRequestProcessor.java:384) at org/apache/struts/action/RequestProcessor.process(RequestProcessor.java:274) at org/apache/struts/action/ActionServlet.process(ActionServlet.java:1482) at org/apache/struts/action/ActionServlet.doGet(ActionServlet.java:507) at gov/v3/fwk/controller/struts/servlet/V3ControllerServlet.doGet(V3ControllerServlet.java:110) at javax/servlet/http/HttpServlet.service(HttpServlet.java:743) at javax/servlet/http/HttpServlet.service(HttpServlet.java:856) at weblogic/servlet/internal/StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227) at 
weblogic/servlet/internal/StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125) at weblogic/servlet/internal/ServletStubImpl.execute(ServletStubImpl.java:283) at weblogic/servlet/internal/ServletStubImpl.execute(ServletStubImpl.java:175) at weblogic/servlet/internal/WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3231) at weblogic/security/acl/internal/AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) at weblogic/security/service/SecurityManager.runAs(SecurityManager.java:121) at weblogic/servlet/internal/WebAppServletContext.securedExecute(WebAppServletContext.java:2002) at weblogic/servlet/internal/WebAppServletContext.execute(WebAppServletContext.java:1908) at weblogic/servlet/internal/ServletRequestImpl.run(ServletRequestImpl.java:1362) at weblogic/work/ExecuteThread.execute(ExecuteThread.java:209) at weblogic/work/ExecuteThread.run(ExecuteThread.java:181) at jrockit/vm/RNI.c2java(JJJJJ)V(Native Method) -- end of trace We would like to do a couple of things: 1.) Prevent the spawning of a duplicate JVM, as we do not need any of it's functions when executing the simple jfmerge executable, and it creates massive overhead. 2.) In the short term at least prevent this duplicate JVM from handing indefinitely.
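
    One observation on the stuck thread: it is blocked in FileInputStream.read on the child's output stream inside getProcessStatus, so whatever is done about the fork cost, the request thread should not read the child's output without both streams being drained. A common pattern, sketched below with an illustrative command array, is to consume stdout and stderr on daemon threads and then waitFor(); that keeps jfmerge from blocking on a full pipe and keeps the WebLogic execute thread from hanging on a half-dead child. (Avoiding the temporary duplication of the 5 GB address space is a separate matter; the usual route is to launch external commands through a small, long-lived helper process rather than forking the large JVM directly.)

      import java.io.BufferedReader;
      import java.io.IOException;
      import java.io.InputStream;
      import java.io.InputStreamReader;

      // Sketch of a Runtime.exec wrapper that drains stdout and stderr on their own
      // threads so neither the child process nor the calling thread can block
      // forever on a full or abandoned pipe.
      public class ExecHelper {

          static Thread drain(final InputStream in, final StringBuilder sink) {
              Thread t = new Thread(new Runnable() {
                  public void run() {
                      try {
                          BufferedReader r = new BufferedReader(new InputStreamReader(in));
                          String line;
                          while ((line = r.readLine()) != null) {
                              synchronized (sink) { sink.append(line).append('\n'); }
                          }
                      } catch (IOException ignored) {
                          // stream closes when the process exits
                      }
                  }
              });
              t.setDaemon(true);
              t.start();
              return t;
          }

          public static int run(String[] cmd) throws IOException, InterruptedException {
              Process p = Runtime.getRuntime().exec(cmd);
              StringBuilder out = new StringBuilder();
              StringBuilder err = new StringBuilder();
              Thread t1 = drain(p.getInputStream(), out);
              Thread t2 = drain(p.getErrorStream(), err);
              int exit = p.waitFor();   // returns once jfmerge has exited
              t1.join();
              t2.join();
              return exit;
          }
      }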

    Read the article

  • Android - exception from an AsynchTask call

    - by GeekedOut
    I have an Activity that makes a remote server call and tries to populate a list. The call to the server works fine, and the call returns some JSON which is good. But then the system throws this exception: 04-06 18:43:19.626: D/AndroidRuntime(2564): Shutting down VM 04-06 18:43:19.626: W/dalvikvm(2564): threadid=1: thread exiting with uncaught exception (group=0x409c01f8) 04-06 18:43:19.686: E/AndroidRuntime(2564): FATAL EXCEPTION: main 04-06 18:43:19.686: E/AndroidRuntime(2564): java.lang.NullPointerException 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.widget.ArrayAdapter.createViewFromResource(ArrayAdapter.java:394) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.widget.ArrayAdapter.getView(ArrayAdapter.java:362) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.widget.AbsListView.obtainView(AbsListView.java:2033) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.widget.ListView.measureHeightOfChildren(ListView.java:1244) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.widget.ListView.onMeasure(ListView.java:1155) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.view.View.measure(View.java:12723) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:4698) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.widget.LinearLayout.measureChildBeforeLayout(LinearLayout.java:1369) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.widget.LinearLayout.measureVertical(LinearLayout.java:660) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.widget.LinearLayout.onMeasure(LinearLayout.java:553) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.view.View.measure(View.java:12723) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:4698) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.widget.FrameLayout.onMeasure(FrameLayout.java:293) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.view.View.measure(View.java:12723) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.widget.LinearLayout.measureVertical(LinearLayout.java:812) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.widget.LinearLayout.onMeasure(LinearLayout.java:553) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.view.View.measure(View.java:12723) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:4698) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.widget.FrameLayout.onMeasure(FrameLayout.java:293) 04-06 18:43:19.686: E/AndroidRuntime(2564): at com.android.internal.policy.impl.PhoneWindow$DecorView.onMeasure(PhoneWindow.java:2092) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.view.View.measure(View.java:12723) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.view.ViewRootImpl.performTraversals(ViewRootImpl.java:1064) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.view.ViewRootImpl.handleMessage(ViewRootImpl.java:2442) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.os.Handler.dispatchMessage(Handler.java:99) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.os.Looper.loop(Looper.java:137) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.app.ActivityThread.main(ActivityThread.java:4424) 04-06 18:43:19.686: E/AndroidRuntime(2564): at java.lang.reflect.Method.invokeNative(Native Method) 04-06 18:43:19.686: E/AndroidRuntime(2564): at java.lang.reflect.Method.invoke(Method.java:511) 04-06 18:43:19.686: E/AndroidRuntime(2564): at 
com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:784) 04-06 18:43:19.686: E/AndroidRuntime(2564): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:551) 04-06 18:43:19.686: E/AndroidRuntime(2564): at dalvik.system.NativeStart.main(Native Method) Why would this happen? It doesn't point to any of my code so its a bit strange. the protected void onPostExecute(String result) never gets called on the callback. Thanks!
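
    The NullPointerException is raised inside ArrayAdapter.createViewFromResource during a layout pass, which commonly means the adapter was constructed with a layout resource that is not (or does not contain) the expected TextView; onPostExecute never runs because the process dies first. Below is a minimal sketch of the AsyncTask-to-ListView wiring using the built-in row layout; all names are illustrative:

      import android.app.Activity;
      import android.os.AsyncTask;
      import android.os.Bundle;
      import android.widget.ArrayAdapter;
      import android.widget.ListView;
      import java.util.ArrayList;
      import java.util.List;

      public class ItemListActivity extends Activity {
          private ArrayAdapter<String> adapter;

          @Override
          public void onCreate(Bundle savedInstanceState) {
              super.onCreate(savedInstanceState);
              ListView list = new ListView(this);
              setContentView(list);
              // simple_list_item_1 is a TextView, which is what ArrayAdapter expects
              // when no explicit TextView id is supplied.
              adapter = new ArrayAdapter<String>(this,
                      android.R.layout.simple_list_item_1, new ArrayList<String>());
              list.setAdapter(adapter);
              new LoadTask().execute();
          }

          private class LoadTask extends AsyncTask<Void, Void, List<String>> {
              @Override
              protected List<String> doInBackground(Void... params) {
                  // ... call the server and parse the JSON here ...
                  List<String> items = new ArrayList<String>();
                  items.add("example item");
                  return items;
              }

              @Override
              protected void onPostExecute(List<String> items) {
                  // Runs on the UI thread, so it is safe to touch the adapter here.
                  adapter.clear();
                  for (String s : items) {
                      adapter.add(s);
                  }
                  adapter.notifyDataSetChanged();
              }
          }
      }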

    Read the article

  • One intent is working, the second is giving me a crash

    - by user1480742
    ok, so both intents receiver sides are on the same activite and they are sending from different ones....second one is not working, first one does, dont know why...all 3 activites are ok in manifest and all that //second intent on senders side public void onItemSelected(AdapterView<?> arg0, View users, int i, long l) { FILENAME = (adapter.getItem(i)).toString(); Bundle viewBag2 = new Bundle(); viewBag2.putString("profile_name", FILENAME); Intent b = new Intent(OptionsMenu.this, CoreActivity.class); b.putExtras(viewBag2); startActivity(b); } //second intent on receiver side private void Data_transfer() { Bundle gotbasket2 = getIntent().getExtras(); profileName = gotbasket2.getString("profile_name"); } //first (working intent) on senders side public void onClick(View v) { Bundle viewBag = new Bundle(); viewBag.putString("spinner_result", s); a.putExtras(viewBag); } //first (working intent) on receiver side private void Data_transfer() { // TODO Auto-generated method stub Bundle gotbasket = getIntent().getExtras(); x = gotbasket.getString("spinner_result"); } 06-26 20:22:09.787: D/AndroidRuntime(1802): Shutting down VM 06-26 20:22:09.787: W/dalvikvm(1802): threadid=1: thread exiting with uncaught exception (group=0x40015560) 06-26 20:22:09.847: E/AndroidRuntime(1802): FATAL EXCEPTION: main 06-26 20:22:09.847: E/AndroidRuntime(1802): java.lang.RuntimeException: Unable to start activity ComponentInfo{mioc.diver/mioc.diver.CoreActivity}: java.lang.NullPointerException 06-26 20:22:09.847: E/AndroidRuntime(1802): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1647) 06-26 20:22:09.847: E/AndroidRuntime(1802): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1663) 06-26 20:22:09.847: E/AndroidRuntime(1802): at android.app.ActivityThread.access$1500(ActivityThread.java:117) 06-26 20:22:09.847: E/AndroidRuntime(1802): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:931) 06-26 20:22:09.847: E/AndroidRuntime(1802): at android.os.Handler.dispatchMessage(Handler.java:99) 06-26 20:22:09.847: E/AndroidRuntime(1802): at android.os.Looper.loop(Looper.java:123) 06-26 20:22:09.847: E/AndroidRuntime(1802): at android.app.ActivityThread.main(ActivityThread.java:3683) 06-26 20:22:09.847: E/AndroidRuntime(1802): at java.lang.reflect.Method.invokeNative(Native Method) 06-26 20:22:09.847: E/AndroidRuntime(1802): at java.lang.reflect.Method.invoke(Method.java:507) 06-26 20:22:09.847: E/AndroidRuntime(1802): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:839) 06-26 20:22:09.847: E/AndroidRuntime(1802): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:597) 06-26 20:22:09.847: E/AndroidRuntime(1802): at dalvik.system.NativeStart.main(Native Method) 06-26 20:22:09.847: E/AndroidRuntime(1802): Caused by: java.lang.NullPointerException 06-26 20:22:09.847: E/AndroidRuntime(1802): at mioc.diver.CoreActivity.Data_transfer(CoreActivity.java:189) 06-26 20:22:09.847: E/AndroidRuntime(1802): at mioc.diver.CoreActivity.onCreate(CoreActivity.java:88) 06-26 20:22:09.847: E/AndroidRuntime(1802): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1047) 06-26 20:22:09.847: E/AndroidRuntime(1802): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1611) 06-26 20:22:09.847: E/AndroidRuntime(1802): ... 11 more
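
    Given the trace, the crash at CoreActivity.Data_transfer (CoreActivity.java:189) is the classic case of getIntent().getExtras() returning null: CoreActivity is evidently also reachable through the first flow (which only ever puts "spinner_result") or without any extras at all, so both the bundle and each key have to be treated as optional. A sketch of a defensive version, assuming it lives in CoreActivity alongside the existing x and profileName fields:

      // Inside CoreActivity
      private void Data_transfer() {
          Bundle extras = getIntent().getExtras();
          if (extras == null) {
              return; // activity was started without any extras
          }
          if (extras.containsKey("spinner_result")) {
              x = extras.getString("spinner_result");
          }
          if (extras.containsKey("profile_name")) {
              profileName = extras.getString("profile_name");
          }
      }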

    Read the article

  • Linux configurations that would affect Java memory usage?

    - by wmacura
    Hi, Background: I have a set of java background workers I start as part of my webapp. I develop locally on Ubuntu 10.10 and deploy to an Ubuntu 10.04LTS server (a media temple (ve) instance). They're both running the same JVM: Sun JVM 1.6.0_22-b04. As part of the initialization script each worker is started with explicit Xmx, Xms, and XX:MaxPermGen settings. Yet somehow locally all 10 workers use 250MB, while on the server they use more than 2.7GB. I don't know how to begin to track this down. I thought the Ubuntu (and thus, kernel) version might make a difference, but I tried an old 10.04 VM and it behaves as expected. I've noticed that the machine does not seem to ever use memory for buffer or cache (according to htop), which seems a bit strange, but perhaps normal for a server? (edited) Some info: (server) root@devel:/app/axir/target# uname -a Linux devel 2.6.18-028stab069.5 #1 SMP Tue May 18 17:26:16 MSD 2010 x86_64 GNU/Linux (local) wiktor@beastie:~$ uname -a Linux beastie 2.6.35-25-generic #44-Ubuntu SMP Fri Jan 21 17:40:44 UTC 2011 x86_64 GNU/Linux (edited) Comparing PS output: (ps -eo "ppid,pid,cmd,rss,sz,vsz") PPID PID CMD RSS SZ VSZ (local) 1588 1615 java -cp axir-distribution. 25484 234382 937528 1615 1631 java -cp /home/wiktor/Code/ 83472 163059 652236 1615 1657 java -cp /home/wiktor/Code/ 70624 89135 356540 1615 1658 java -cp /home/wiktor/Code/ 37652 77625 310500 1615 1669 java -cp /home/wiktor/Code/ 38096 77733 310932 1615 1675 java -cp /home/wiktor/Code/ 37420 61395 245580 1615 1684 java -cp /home/wiktor/Code/ 38000 77736 310944 1615 1703 java -cp /home/wiktor/Code/ 39180 78060 312240 1615 1712 java -cp /home/wiktor/Code/ 38488 93882 375528 1615 1719 java -cp /home/wiktor/Code/ 38312 77874 311496 1615 1726 java -cp /home/wiktor/Code/ 38656 77958 311832 1615 1727 java -cp /home/wiktor/Code/ 78016 89429 357716 (server) 22522 23560 java -cp axir-distribution. 24860 285196 1140784 23560 23585 java -cp /app/axir/target/a 100764 161629 646516 23560 23667 java -cp /app/axir/target/a 72408 92682 370728 23560 23670 java -cp /app/axir/target/a 39948 97671 390684 23560 23674 java -cp /app/axir/target/a 40140 81586 326344 23560 23739 java -cp /app/axir/target/a 39688 81542 326168 They look very similar. In fact, the question now is why, if I add up the virtual memory usage on the server (3.2GB) does it more closely reflect 2.4GB of memory used (according to free), yet locally the virtual memory used adds up to a much more substantial 4.7GB but only actually uses ~250MB. It seems that perhaps memory isn't being shared as aggressively. (if that's even possible) Thank you for your help, Wiktor

    Read the article

  • Issue 15: The Benefits of Oracle Exastack

    - by rituchhibber
         SOLUTIONS FOCUS The Benefits of Oracle Exastack Paul ThompsonDirector, Alliances and Solutions Partner ProgramsOracle EMEA Alliances & Channels RESOURCES -- Oracle PartnerNetwork (OPN) Oracle Exastack Program Oracle Exastack Ready Oracle Exastack Optimized Oracle Exastack Labs and Enablement Resources Oracle Exastack Labs Video Tour SUBSCRIBE FEEDBACK PREVIOUS ISSUES Exastack is a revolutionary programme supporting Oracle independent software vendor partners across the entire Oracle technology stack. Oracle's core strategy is to engineer software and hardware together, and our ISV strategy is the same. At Oracle we design engineered systems that are pre-integrated to reduce the cost and complexity of IT infrastructures while increasing productivity and performance. Oracle innovates and optimises performance at every layer of the stack to simplify business operations, drive down costs and accelerate business innovation. Our engineered systems are optimised to achieve enterprise performance levels that are unmatched in the industry. Faster time to production is achieved by implementing pre-engineered and pre-assembled hardware and software bundles. Our strategy of delivering a single-vendor stack simplifies and reduces costs associated with purchasing, deploying, and supporting IT environments for our customers and partners. In parallel to this core engineered systems strategy, the Oracle Exastack Program enables our Oracle ISV partners to leverage a scalable, integrated infrastructure that delivers their applications tuned, tested and optimised for high-performance. Specifically, the Oracle Exastack Program helps ISVs run their solutions on the Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4 - integrated systems products in which the software and hardware are engineered to work together. These products provide OPN members with a lower cost and high performance infrastructure for database and application workloads across on-premise and cloud based environments. Ready and Optimized Oracle Partners can now leverage our new Oracle Exastack Program to become Oracle Exastack Ready and Oracle Exastack Optimized. Partners can achieve Oracle Exastack Ready status through their support for Oracle Solaris, Oracle Linux, Oracle VM, Oracle Database, Oracle WebLogic Server, Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4. By doing this, partners can demonstrate to their customers that their applications are available on the latest major releases of these products. The Oracle Exastack Ready programme helps customers readily differentiate Oracle partners from lesser software developers, and identify applications that support Oracle engineered systems. Achieving Oracle Exastack Optimized status demonstrates that an OPN member has proven itself against goals for performance and scalability on Oracle integrated systems. This status enables end customers to readily identify Oracle partners that have tested and tuned their solutions for optimum performance on an Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4. These ISVs can display the Oracle Exadata Optimized, Oracle Exalogic Optimized or Oracle SPARC SuperCluster Optimized logos on websites and on all their collateral to show that they have tested and tuned their application for optimum performance. 
Deliver higher value to customers Oracle's investment in engineered systems enables ISV partners to deliver higher value to customer business processes. New innovations are enabled through extreme performance unachievable through traditional best-of-breed multi-vendor server/software approaches. Core product requirements can be launched faster, enabling ISVs to focus research and development investment on core competencies in order to bring value to market as quickly as possible. Through Exastack, partners no longer have to worry about the underlying product stack, which allows greater focus on the development of intellectual property above the stack. Partners are not burdened by platform issues and can concentrate simply on furthering their applications. The advantage to end customers is that partners can focus all efforts on business functionality, rather than bullet-proofing underlying technologies, and so will inevitably deliver application updates faster. Exastack provides ISVs with a number of flexible deployment options, such as on-premise or Cloud, while maintaining one single code base for applications regardless of customer deployment preference. Customers buying their solutions from Exastack ISVs can therefore be confident in deploying on their own networks, on private clouds or into a public cloud. The underlying platform will support all conceivable deployments, enabling a focus on the ISV's application itself that wouldn't be possible with other vendor partners. It stands to reason that Exastack accelerates time to value as well as lowering implementation costs all round. There is a big competitive advantage in partners being able to offer customers an optimised, pre-configured solution rather than an assortment of components and a suggested fit. Once a customer has decided to buy an Oracle Exastack Ready or Optimized partner solution, it will be up and running without any need for the customer to conduct testing of its own. Operational costs and complexity are also reduced, thanks to streamlined customer support through standardised configurations and pro-active monitoring. 'Engineered to Work Together' is a significant statement of Oracle strategy. It guarantees smoother deployment of a single vendor solution, clear ownership with no finger-pointing and the peace of mind of the Oracle Support Centre underpinning the entire product stack. Next steps Every OPN member with packaged applications must seriously consider taking steps to become Exastack Ready, or Exastack Optimized at the first opportunity. That first step down the track is to talk to an expert on the OPN Portal, at the Oracle Partner Business Center or to discuss the next steps with the closest Oracle account manager. Oracle Exastack lab environments and other technical enablement resources are available for OPN members wishing to further their knowledge of Oracle Exastack and qualify their applications for Oracle Exastack Optimized. New Boot Camps and Guided Learning Paths (GLPs), tailored specifically for ISVs, are available for Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, Oracle Linux, Oracle Solaris, Oracle Database, and Oracle WebLogic Server. More information about these GLPs and Boot Camps (including delivery dates and locations) are posted on the OPN Competency Center and corresponding OPN Knowledge Zones. Learn more about Oracle Exastack labs and ISV specific enablement resources. 
"Oracle Specialized partners are of course front-and-centre, with potential customers clearly directed to those partners and to Exadata Ready partners as a matter of priority." --More OpenWorld 2011 highlights for Oracle partners and customers Oracle Application Testing Suite 9.3 application testing solution for Web, SOA and Oracle Applications Oracle Application Express Release 4.1 improving the development of database-centric Web 2.0 applications and reports Oracle Unified Directory 11g helping customers manage the critical identity information that drives their business applications Oracle SOA Suite for healthcare integration Oracle Enterprise Pack for Eclipse 11g demonstrating continued commitment to the developer and open source communities Oracle Coherence 3.7.1, the latest release of the industry's leading distributed in-memory data grid Oracle Process Accelerators helping to simplify and accelerate time-to-value for customers' business process management initiatives Oracle's JD Edwards EnterpriseOne on the iPad meeting the increasingly mobile demands of today's workforces Oracle CRM On Demand Release 19 Innovation Pack introducing industry-leading hosted call centre and enterprise-marketing capabilities designed to drive further revenue and productivity while reducing costs and improving the customer experience Oracle's Primavera Portfolio Management 9 for businesses delivering on project portfolio goals with increased versatility, transparency and accuracy Oracle's PeopleSoft Human Capital Management (HCM) 9.1 On Demand Standard Edition helping customers manage their long-term investment in enterprise-wide business applications New versions of Oracle FLEXCUBE Universal Banking and Oracle FLEXCUBE Investor Servicing for Financial Institutions, as well as Oracle Financial Services Enterprise Case Management, Oracle Financial Services Pricing Management, Oracle Financial Management Analytics and Oracle Tax Analytics Oracle Utilities Network Management System 1.11 offering new modelling and analysis features to improve distribution-grid management for electric utilities Oracle Communications Network Charging and Control 4.4 helping communications service providers (CSPs) offer their customers more flexible charging options Plus many, many more technology announcements, enhancements, momentum news and community updates -- Oracle OpenWorld 2012 A date has already been set for Oracle OpenWorld 2012. Held once again in San Francisco, exhibitors, partners, customers and Oracle people will gather from 30 September until 4 November to meet, network and learn together with the rest of the global Oracle community. Register now for Oracle OpenWorld 2012 and save $$$! We'll reward your early planning for Oracle OpenWorld 2012 with reduced rates. Super Saver deals are now available! -- Back to the welcome page

    Read the article

  • Top 10 Reasons SQL Developer is Perfect for Oracle Beginners

    - by thatjeffsmith
Learning new technologies can be daunting. If you've never used a Mac before, you'll probably be a bit baffled at first. But you're probably at least coming from a desktop computing background (Windows), so you have a common frame of reference. But what if you're just now learning to use a relational database? Yes, you've played with Access a bit, but now your employer or college instructor has charged you with becoming proficient with Oracle database. Here are 10 reasons why I think Oracle SQL Developer is the perfect vehicle to help get you started.

1. It's free
No need to break into one of these... No start-up costs, no need to wrangle budget dollars from your company. Students don't have any money after books and lab fees anyway. And most employees don't like having to ask for 'special' software anyway. So avoid all of that and make sure the free stuff doesn't suit your needs first. Upgrades are available on a regular basis, also at no cost, and support is freely available via our public forums.

2. It will run pretty much anywhere
Windows – check. OSX (Apple) – check. Unix – check. Linux – check. No need to start up a Windows VM to run your Windows-only software on your lab machine.

3. Anyone can install it
There's no installer, no registry to be updated, no admin privs to be obtained. If you can download and extract files to your machine or USB storage device, you can run it. You can be up and running with SQL Developer in under 5 minutes. Here's a video tutorial to see how to get started.

4. It's ubiquitous
I admit it, I learned a new word yesterday and I wanted an excuse to use it. SQL Developer's everywhere. It's had over 2,500,000 downloads in the past year, and is one of the most downloaded items from OTN. This means if you need help, there's someone sitting nearby who can assist, and since they're in the same tool as you, they'll be speaking the same language.

5. Simple User Interface
Up-up-down-down-Left-right-left-right-A-B-A-B-START will get you 30 lives, but you already knew that, right? You connect, you see your objects, you click on your objects. Or, you can use the worksheet to write your queries and programs in. There's only one toolbar, and just a few buttons. If you're like me, video games became less fun when each button had 6 action items mapped to it. I just want the good ole 'A', 'B', 'SELECT', and 'START' controls. If you're new to Oracle, you shouldn't have the double workload of learning a new, complicated tool as well.

6. It's not a 'black box'
Click through your objects, but also get the SQL that drives the GUI. As you use the wizards to accomplish tasks for you, you can view the SQL statement being generated on your behalf. Just because you have a GUI doesn't mean you're ceding your responsibility to learn the underlying code that makes the database work.

7. It's four tools in one
It's not just a query tool. Maybe you need to design a data model first? Or maybe you need to migrate your Sybase ASE database to Oracle for a new project? Or maybe you need to create some reports? SQL Developer does all of that. So once you get comfortable with one part of the tool, the others will be much easier to pick up as your needs change.

8. Great learning resources available
Videos, blogs, hands-on learning labs – you name it, we got it. Why wait for someone to train you, when you can train yourself at your own pace?
9. You can use it to teach yourself SQL
Instead of being faced with the white-screen-of-panic, you can visually build your queries by dragging and dropping tables and views into the Query Builder. Yes, 'just like Access' – only better. And as you build your query, toggle to the Worksheet panel and see the SQL statement. Again, SQL Developer is not a black box. If you prefer to learn by trial and error, the worksheet will attempt to suggest the next bit of your SQL statement with its completion insight feature. And if you have syntax errors, those will be highlighted – just like misspelled words in your favorite word processor.

10. It scales to match your experience level
You won't be a n00b forever. In 6-8 months, when you're ready to tackle something a bit more complicated, like XML DB or Oracle Spatial, the tool is already there waiting on you. No need to go out and find the 'advanced' tool.

11. Wait, you said this was a 'Top 10' list?
Yes. Yes, I did. I'm using this 'trick' to get you to continue reading because I'm going to say something you might not want to hear. Are you ready? Tools won't replace experience, failure, hard work, and training. Just because you have the keys to the car doesn't mean you're ready to head out on the race track. While SQL Developer reduces the barriers to entry, it does not completely remove them. Many experienced folks simply do not like tools. Rather, they don't like the people that pick up tools without the know-how to properly use them. If you don't understand what 'TRUNCATE' means, don't try it out. Try picking up a book first. Of course, it's very nice to have your own sandbox to play in, so you don't upset the other children. That's why I really like our Dev Days Database Virtual Box image. It's your own database to learn and experiment with.
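To make point 9 a little more concrete, here is the kind of query a beginner might assemble in the Query Builder and then inspect in the Worksheet. This is a minimal sketch that assumes Oracle's standard HR sample schema (the EMPLOYEES and DEPARTMENTS tables are not mentioned in the post itself):

    -- Join employees to their departments and list the higher earners
    SELECT e.first_name,
           e.last_name,
           d.department_name,
           e.salary
      FROM employees   e
      JOIN departments d
        ON e.department_id = d.department_id
     WHERE e.salary > 10000
     ORDER BY e.salary DESC;

Building this visually and then toggling to the Worksheet shows exactly the SQL the tool generated, which is the "not a black box" point made above.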

    Read the article

  • Ops Center Solaris 11 IPS Repository Management: Using ISO Images

    - by S Stelting
Please join us for a live WebEx presentation of this topic on Tuesday, November 20th at 9am MDT. Details for the call are provided below:
https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209834017&UID=1512096072&PW=NYTVlZTYxMzdm&RT=MiMxMQ%3D%3D
Meeting password: oracle123
Call-in toll-free number: 1-866-682-4770
International numbers: http://www.intercall.com/oracle/access_numbers.htm
Conference Code: 762 9343 #
Security Code: 7777 #

With Enterprise Manager Ops Center 12c, you can provision, patch, monitor and manage Oracle Solaris 11 instances. To do this, Ops Center creates and maintains a Solaris 11 Image Packaging System (IPS) repository on the Enterprise Controller. During the Enterprise Controller configuration, you can load repository content directly from Oracle's Support Web site and subsequently synchronize the repository as new content becomes available. Of course, you can also use Solaris 11 ISO images to create and update your Ops Center repository. There are a few excellent reasons for doing this:
- You're running Ops Center in disconnected mode, and don't have Internet access on your Enterprise Controller
- You'd rather avoid the bandwidth associated with live synchronization of a Solaris 11 package repository
This demo will show you how to use Solaris 11 ISO images to set up and update your Ops Center repository.

Prerequisites
This tip assumes that you've already installed the Enterprise Controller on a Solaris 11 OS instance and that you're ready for post-install configuration. In addition, there are specific Ops Center and OS version requirements depending on which version of Solaris 11 you plan to install. You can get full details about the requirements in the Release Notes for Ops Center 12c update 2. Additional information is available in the Ops Center update 2 Readme document.

Part 1: Using a Solaris 11 ISO Image to Create an Ops Center Repository

Step 1 – Download the Solaris 11 Repository Image
The Oracle Web site provides a number of download links for official Solaris 11 images. Among those links is a two-part downloadable repository image, which provides repository content for the Solaris 11 SPARC and X86 architectures. In this case, I used the Solaris 11 11/11 image. First, navigate to the Oracle Web site and accept the OTN License agreement:
http://www.oracle.com/technetwork/server-storage/solaris11/downloads/index.html
Next, download both parts of the Solaris 11 repository image. I recommend using the Solaris 11 11/11 image, and have provided the URLs here:
http://download.oracle.com/otn/solaris/11/sol-11-1111-repo-full.iso-a
http://download.oracle.com/otn/solaris/11/sol-11-1111-repo-full.iso-b
Finally, use the cat command to generate an ISO image you can use to create your repository:
# cat sol-11-1111-repo-full.iso-a sol-11-1111-repo-full.iso-b > sol-11-1111-repo-full.iso
The process is very similar if you plan to set up a Solaris 11.1 release in Ops Center. In that case, navigate to the Solaris 11 download page, accept the license agreement and download both parts of the Solaris 11.1 repository image, then use the cat command to create a single ISO image for Solaris 11.1.

Step 2 – Mount the Solaris 11 ISO Image in your Local Filesystem
Once you have created the Solaris 11 ISO file, use the mount command to attach it to your local filesystem.
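The post shows the mount and verification steps as screenshots; a minimal command-line sketch of the same steps is below. The /mnt mount point, the ISO path, and the /dev/lofi/1 device are assumptions for illustration, not values taken from the post.

    # Attach the ISO to a loopback device, then mount it read-only on /mnt
    # (lofiadm prints the device it created, typically /dev/lofi/1 for the first image)
    lofiadm -a /export/isos/sol-11-1111-repo-full.iso
    mount -F hsfs -o ro /dev/lofi/1 /mnt

    # Sanity-check that the repository content under ./repo is readable
    pkgrepo info -s /mnt/repo
    pkgrepo list -s /mnt/repo | head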
After the image has been mounted, you can browse the repository from the ./repo subdirectory, and use the pkgrepo command to verify that Solaris 11 recognizes the content.

Step 3 – Use the Image to Create your Ops Center Repository
When you have confirmed the repository is available, you can use the image to create the Enterprise Controller repository. The operation will be slightly different depending on whether you configure Ops Center for Connected or Disconnected Mode operation. For connected mode operation, specify the mounted ./repo directory in step 4.1 of the configuration wizard, replacing the default Web-based URL. Since you're synchronizing from an OS repository image, you don't need to specify a key or certificate for the operation. For disconnected mode configuration, specify the Solaris 11 directory along with the path to the disconnected mode bundle downloaded by running the Ops Center harvester script.

Ops Center will run a job to import package content from the mounted ISO image. A synchronization job can take several hours to run – in my case, the job ran for 3 hours, 22 minutes on a SunFire X4200 M2 server. During the job, Ops Center performs three important tasks:
- Synchronizes all content from the image and refreshes the repository
- Updates the IPS publisher information
- Creates OS Provisioning profiles and policies based on the content
When the job is complete, you can unmount the ISO image from your Enterprise Controller. At that time, you can view the repository contents in your Ops Center Solaris 11 library. For the Solaris 11 11/11 release, you should see 8,668 packages and patches in the contents. You should also see default deployment plans for Solaris 11 provisioning. As part of the repository import, Ops Center generates plans and profiles for desktop, small and large servers for the SPARC and X86 architectures.

Part 2: Using a Solaris 11 SRU to Update an Ops Center Repository
It's possible to use the same approach to upgrade your Ops Center repository to a Solaris 11 Support Repository Update, or SRU. Each SRU provides packages and updates to Solaris 11 – for example, SRU 8.5 provided the packages for Oracle VM Server for SPARC 2.2. SRUs are available for download as ISO images from My Oracle Support, under document ID 1372094.1. The document provides download links for all SRUs which have been released by Oracle for Solaris 11. SRUs are cumulative, so later versions include the packages from earlier SRUs.

After downloading an ISO image for an SRU, you can mount it to your local filesystem using a mount command similar to the one shown for Solaris 11 11/11. When the ISO image is mounted to the file system, you can perform the Add Content action from the Solaris 11 Library to synchronize packages and patches from the mounted image. I used the same mount point, so the repository URL was file://mnt/repo once again.

After the synchronization of an SRU is complete, you can verify its content in the Solaris 11 library using the search function. The version pattern is 0.175.0.#, where the # is the same value as the SRU. In this example, I upgraded to SRU 1. The update job ran in just under 8 minutes, and a quick search shows that 22 software components were added to the repository. It's also possible to search for "Support Repository Update" to confirm the SRU was successfully added to the repository. Details on any of the update content are available by clicking the "View Details" button under the Packages/Patches entry.

    Read the article

  • Interview with Lenz Grimmer about MySQL Connect

    - by Keith Larson
Keith Larson: Thank you for allowing me to do this interview with you. I have been talking with a few different Oracle ACEs about the MySQL Connect Conference. I figured the MySQL community might be missing you as well. You have been very busy with Oracle Linux but I know you still have an eye on the MySQL Community. How have things been?

Lenz Grimmer: Thanks for including me in this series of interviews, I feel honored! I've read the other interviews, and really liked them. I still try to follow what's going on over in the MySQL community and it's good to see that many of the familiar faces are still around. Over the course of the 9 years that I was involved with MySQL, many colleagues and contacts turned into good friends and we still maintain close relationships. It's been almost 1.5 years ago that I moved into my new role here in the Linux team at Oracle, and I really enjoy working on a Linux distribution again (I worked for SUSE before I joined MySQL AB in 2002). I'm still learning a lot - Linux in the data center has greatly evolved in so many ways and there are a lot of new and exciting technologies to explore.

Keith Larson: What were your thoughts when you heard that Oracle was going to deliver the MySQL Connect conference to the MySQL Community?

Lenz Grimmer: I think it's testament to the fact that Oracle deeply cares about MySQL, despite what many skeptics may say. What started as "MySQL Sunday" two years ago has now evolved into a full-blown sub-conference, with 80 sessions at one of the largest corporate IT events in the world. I find this quite telling, not many products at Oracle enjoy this level of exposure! So it certainly makes me feel proud to see how far MySQL has come.

Keith Larson: Have you had a chance to look over the sessions? What are your thoughts on them?

Lenz Grimmer: I did indeed look at the final schedule. The content committee did a great job with selecting these sessions. I'm glad to see that the content selection was influenced by involving well-known and respected members of the MySQL community. The sessions cover a broad range of topics and technologies, both covering established topics as well as recent developments.

Keith Larson: When you get a chance, what sessions do you plan on attending?

Lenz Grimmer: I will actually be manning the Oracle booth in the exhibition area on one of these days, so I'm not sure if I'll have a lot of time attending sessions. But if I do, I'd love to see the keynotes and catch some of the sessions that talk about recent developments and new features in MySQL, High Availability and Clustering. Quite a lot has happened and it's hard to keep up with this constant flow of new MySQL releases. In particular, the following sessions caught my attention:
- MySQL Connect Keynote: The State of the Dolphin
- Evaluating MySQL High-Availability Alternatives
- CERN's MySQL "as a Service" Deployment with Oracle VM: Empowering Users
- MySQL 5.6 Replication: Taking Scalability and High Availability to the Next Level
- What's New in MySQL Server 5.6?
- MySQL Security: Past and Present
- MySQL at Twitter: Development and Deployment
- MySQL Community BOF
- MySQL Connect Keynote: MySQL Perspectives

Keith Larson: So I will ask you just like I have asked the others I have interviewed, any tips that you would give to people for handling the long hours at conferences?

Lenz Grimmer: Wear comfortable shoes and make sure to drink a lot!
Also prepare a plan of the sessions you would like to attend beforehand and familiarize yourself with the venue, so you can get to the next talk in time without scrambling to find the location. The good thing about piggybacking on such a large conference like Oracle OpenWorld is that you benefit from the whole infrastructure. For example, there is a nice schedule builder that helps you to keep track of your sessions of interest. Other than that, bring enough business cards and talk to people, build up your network among your peers and other MySQL professionals! Keith Larson: What features of the MySQL 5.6 release do you look forward to the most ?Lenz Grimmer: There has been solid progress in so many areas like the InnoDB Storage Engine, the Optimizer, Replication or Performance Schema, it's hard for me to really highlight anything in particular. All in all, MySQL 5.6 sounds like a very promising release. I'm confident it will follow the tradition that Oracle already established with MySQL 5.5, which received a lot of praise even from very critical members of the MySQL community. If I had to name a single feature, I'm particularly and personally happy that the precise GIS functions have finally made it into a GA release - that was long overdue. Keith Larson:  In your opinion what is the best reason for someone to attend this event?Lenz Grimmer: This conference is an excellent opportunity to get in touch with the key people in the MySQL community and ecosystem and to get facts and information from the domain experts and developers that work on MySQL. The broad range of topics should attract people from a variety of roles and relations to MySQL, beginning with Developers and DBAs, to CIOs considering MySQL as a viable solution for their requirements. Keith Larson: You will be attending MySQL Connect and have some Oracle Linux Demos, do you see a growing demand for MySQL on Oracle Linux ?Lenz Grimmer: Yes! Oracle Linux is our recommended Linux distribution and we have a good relationship to the MySQL engineering group. They use Oracle Linux as a base Linux platform for development and QA, so we make sure that MySQL and Oracle Linux are well tested together. Setting up a MySQL server on Oracle Linux can be done very quickly, and many customers recognize the benefits of using them both in combination.Because Oracle Linux is available for free (including free bug fixes and errata), it's an ideal choice for running MySQL in your data center. You can run the same Linux distribution on both your development/staging systems as well as on the production machines, you decide which of these should be covered by a support subscription and at which level of support. This gives you flexibility and provides some really attractive cost-saving opportunities. Keith Larson: Since I am a Linux user and fan, what is on the horizon for  Oracle Linux?Lenz Grimmer: We're working hard on broadening the ecosystem around Oracle Linux, building up partnerships with ISVs and IHVs to certify Oracle Linux as a fully supported platform for their products. We also continue to collaborate closely with the Linux kernel community on various projects, to make sure that Linux scales and performs well on large systems and meets the demands of today's data centers. These improvements and enhancements will then rolled into the Unbreakable Enterprise Kernel, which is the key ingredient that sets Oracle Linux apart from other distributions. 
We also have a number of ongoing projects which are making good progress, and I'm sure you'll hear more about this at the upcoming OpenWorld conference :) Keith Larson: What is something that more people should be aware of when it comes to Oracle Linux and MySQL ?Lenz Grimmer: Many people assume that Oracle Linux is just tuned for Oracle products, such as the Oracle Database or our Engineered Systems. While it's of course true that we do a lot of testing and optimization for these workloads, Oracle Linux is and will remain a general-purpose Linux distribution that is a very good foundation for setting up a LAMP-Stack, for example. We also provide MySQL RPM packages for Oracle Linux, so you can easily stay up to date if you need something newer than what's included in the stock distribution.One more thing that is really unique to Oracle Linux is Ksplice, which allows you to apply security patches to the running Linux kernel, without having to reboot. This ensures that your MySQL database server keeps up and running and is not affected by any downtime. Keith Larson: What else would you like to add ?Lenz Grimmer: Thanks again for getting in touch with me, I appreciated the opportunity. I'm looking forward to MySQL Connect and Oracle OpenWorld and to meet you and many other people from the MySQL community that I haven't seen for quite some time! Keith Larson:  Thank you Lenz!

    Read the article

  • Psychonauts crashes right after entering load save door

    - by user67974
    Psychonauts crashes right after entering the 'Load Save' door. Here is the terminal output: Shader assembly time: 0.88 seconds Found OpenAL device: 'Simple Directmedia Layer' Found OpenAL device: 'ALSA Software' Found OpenAL device: 'OSS Software' Found OpenAL device: 'PulseAudio Software' Opened OpenAL Device: '(null)' ERROR: CAudioDrv::CAudioDrv->alGenSources reports AL_INVALID_VALUE error. PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonfx.isb' to 'WorkResource/Sounds/commonfx.isb' PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonvoice.isb' to 'WorkResource/Sounds/commonvoice.isb' PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonmusic.isb' to 'WorkResource/Sounds/commonmusic.isb' PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonmentalfx.isb' to 'WorkResource/Sounds/commonmentalfx.isb' PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonmenfxmem.isb' to 'WorkResource/Sounds/commonmenfxmem.isb' PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonfxmem.isb' to 'WorkResource/Sounds/commonfxmem.isb' GameApp::StartUp InitSoundFiles() completed in 0.15 seconds GameApp::StartUp Load some common textures completed in 0.00 seconds WARN: ENGINE: Lua garbage collection starting FreeUnusedBlocksInBuckets released 0 Kb GameApp::StartUp InitEntities() completed in 0.02 seconds PSYCHONAUTS UNIX FILENAME: corrected 'WorkResource/SavedGames/savegameprefs.ini' to 'WorkResource/SAVEDGAMES/savegameprefs.ini' PSYCHONAUTS UNIX FILENAME: corrected 'WorkResource/SavedGames/savegameprefs.ini' to 'WorkResource/SAVEDGAMES/savegameprefs.ini' GameApp::StartUp m_pSaveLoadInterface->Startup() completed in 0.00 seconds GameApp::StartUp m_UserInterface.Setup() completed in 0.00 seconds STUBBED: multisample at EDisplayOptionsWidget (/home/icculus/projects/psychonauts/Source/game/luatest/Game/UIPCDisplayOptions.cpp:97) STUBBED: VK_* at CheckVirtualKey (/home/icculus/projects/psychonauts/Source/CommonLibs/DirectX/SDLInput.cpp:1443) Game: Engine Running hook startup Game: Engine -> SetupGlobalObjects Game: Engine -> SetupLevelMenu Game: Engine -> InitMath GameApp::StartUp InitLua2() completed in 0.00 seconds GameApp::StartUp SetupLevelMenu() completed in 0.00 seconds STUBBED: do we even use this? at InitSocket (/home/icculus/projects/psychonauts/Source/game/luatest/Game/Gameplaylogger.cpp:210) GameApp::StartUp Post-Install total completed in 0.20 seconds Start Up completed in 1.57 seconds UnixMain: StartUp successful.. Working directory: /opt/psychonauts STUBBED: dispatch SDL events at PCMainHandleAnyWindowsMessages (/home/icculus/projects/psychonauts/Source/game/luatest/UnixMain.cpp:56) STUBBED: write me at GetJoystickInput (/home/icculus/projects/psychonauts/Source/CommonLibs/DirectX/SDLInput.cpp:428) STUBBED: write me at GetJoystickActionValue (/home/icculus/projects/psychonauts/Source/CommonLibs/DirectX/SDLInput.cpp:613) PSYCHONAUTS UNIX FILENAME: corrected 'workresource/cutScenes/prerendered/dflogo.bik' to 'WorkResource/cutscenes/prerendered/DFLogo.bik' Prerender subtitle file: workresource\cutScenes\prerendered\dflogo.dfs not found PSYCHONAUTS UNIX FILENAME: corrected 'workresource/cutScenes/prerendered/dflogo.bik' to 'WorkResource/cutscenes/prerendered/DFLogo.bik' STUBBED: fixed function pipeline? at setColorOp (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2097) STUBBED: fixed function pipeline? 
at setColorArg1 (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2106) STUBBED: fixed function pipeline? at setColorArg2 (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2115) STUBBED: fixed function pipeline? at setAlphaOp (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2124) STUBBED: fixed function pipeline? at setAlphaArg1 (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2133) STUBBED: fixed function pipeline? at setAlphaArg2 (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2142) STUBBED: fixed function pipeline? at setProjected (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2223) LOC WARN: Could not open Localization file 'Localization/English/_StringTable.lub' STUBBED: memory status at UpdateMemoryTracking (/home/icculus/projects/psychonauts/Source/game/luatest/Game/GameApp.cpp:4884) WARN: Couldn't resize array to 128; out-of-bounds elements are still in use: Vertex Pool, 188 Loading new level 'STMU' STUBBED: Need multithreaded GL at DisplayLoadingScreen (/home/icculus/projects/psychonauts/Source/game/luatest/Game/LoadingScreen.cpp:83) ========================= Memory post unload level ========================= ========================= LOC WARN: Could not open Localization file 'Localization/English/ST_StringTable.lub' DaveD: Info: Texture pack file contains 137 textures Doing a texture readback for locking! Game: Engine Saved[GLOBAL]: InstaHintFord_HostileRecord = [table] Game: Engine Saved[GLOBAL]: InstaHintFord_HostileOrder = [table] WARN: Redundant packfile read: anims\thought_bubble\bubblefirestarting.jan WARN: Redundant packfile read: anims\thought_bubble\bubbleintothemind.jan WARN: Redundant packfile read: anims\thought_bubble\bubbleinvisibility.jan WARN: Redundant packfile read: anims\thought_bubble\bubblepopperfill.jan WARN: Redundant packfile read: anims\thought_bubble\bubbletelekinesis.jan Initializing level script (if there is one) PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/stfx.isb' to 'WorkResource/Sounds/stfx.isb' Game: Engine Reloading goals: Game: Engine Saved[GLOBAL]: NextEncouragement = '/GLZF014TO/ 10' Game: Engine Saved[GLOBAL]: bUsedSalts = 0 Game: Engine Saved[GLOBAL]: bSTEntered = 1 Game: Engine Saved[GLOBAL]: memoriesST = 1 Game: Engine Saved[GLOBAL]: PsiBallColor = 'red' Game: Engine Saved[ST]: lastSubLevel = 'STMU' Game: Engine LOADING LEVEL st.STMU Game: Engine Saved[CA]: CALevelState = 1 Game: Engine Cutscene progression: CS Script moving from state nil to state nil, resultant state nil. Time: 0.124746672809124. 
* Stack Trace 1: (null) (line -1, file '(none)) () 2: SpawnScript (line -1, file 'C) (global) 3: onBeginLevel (line -1, file '(none)) (field) 4: (null) (line -1, file '(none)) () WARN: Cannot call GetDirectoryListing when running from the DVD Game: Engine Raz spawning at DartStart startpoint VM : LevelScript could not find script 'doorrimlight1' * Stack Trace 1: (null) (line -1, file '(none)) () WARN: (none(-1) SetEntityAlpha LevelScript: NULL script object passed Game: Engine Saved[GLOBAL]: bLoadedFromMainMenu = 1 Game: Engine Saved[GLOBAL]: NextEncouragement = '/GLZF014TO/ 10' Game: Engine Saved[GLOBAL]: NeedRankIncrement = 0 STUBBED: Need multithreaded GL at HideLoadingScreen (/home/icculus/projects/psychonauts/Source/game/luatest/Game/LoadingScreen.cpp:110) WARN: ENGINE: Lua garbage collection starting FreeUnusedBlocksInBuckets released 0 Kb Game: Engine Saved[GLOBAL]: SplineFigmentTVSizex = 4.51434326171875 Game: Engine Saved[GLOBAL]: SplineFigmentTVSizey = 46.38104248046875 Game: Engine Saved[GLOBAL]: SplineFigmentTVSizez = 47.08810424804688 WARN: (none(-1) SetNewAction LevelScript: no string passed ====================== Asset load progression ====================== Initial: 2.518 MB Vertex, 8.688 MB Texture Level : 3.719 MB Vertex, 22.535 MB Texture Scripts: 3.747 MB Vertex, 22.848 MB Texture ====================== ====================== Memory post level load ====================== ====================== WARN: ENGINE: Lua garbage collection starting FreeUnusedBlocksInBuckets released 0 Kb DaveD: Level loaded in 0.14 seconds Anim: anims\objects\tk_arrow_idle.jan: loaded (1 frames latency) Anim: anims\dartnew\helmet\darthelmetdn.jan: loaded (1 frames latency) Anim: anims\thought_bubble\shieldloop.jan: loaded (1 frames latency) Anim: anims\dartnew\standready.jan: loaded (1 frames latency) Anim: anims\dartnew\walkmove.jan: loaded (1 frames latency) Anim: anims\janitor\hint_end.jan: loaded (1 frames latency) Anim: anims\thought_bubble\ballstatic.jan: loaded (1 frames latency) Anim: anims\dartnew\actionfall.jan: loaded (1 frames latency) Anim: anims\dartnew\standstill.jan: loaded (1 frames latency) Anim: anims\dartnew\pack\packbounce_lf_rt.jan: loaded (1 frames latency) Anim: anims\dartnew\pack\packbounce_up_dn.jan: loaded (1 frames latency) Anim: anims\dartnew\helmet\darthelmetdefpose.jan: loaded (1 frames latency) 1: 1 (number) 1: 1 (number) STUBBED: This is probably wrong at GetDt (/home/icculus/projects/psychonauts/Source/CommonLibs/DFUtil/Profiler.cpp:181) STUBBED: set specular highlights at setSpecularEnable (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Renderer.cpp:2035) Anim: anims\dartnew\trnrtcycle.jan: loaded (1 frames latency) Anim: anims\dartnew\run.jan: loaded (1 frames latency) Anim: anims\dartnew\walk.jan: loaded (1 frames latency) Anim: anims\thought_bubble\bubbledoublejump.jan: loaded (1 frames latency) Anim: anims\dartnew\longjump.jan: loaded (1 frames latency) Anim: anims\menubrain\door1crack.jan: loaded (1 frames latency) Anim: anims\menubrain\door1crackedidle.jan: loaded (1 frames latency) Anim: anims\menubrain\door1closedidle.jan: loaded (1 frames latency) Anim: anims\dartnew\180.jan: loaded (1 frames latency) Anim: anims\menubrain\door3crack.jan: loaded (1 frames latency) Anim: anims\menubrain\door3crackedidle.jan: loaded (1 frames latency) Anim: anims\menubrain\door3closedidle.jan: loaded (1 frames latency) Anim: anims\dartnew\railslide45angle.jan: loaded (1 frames latency) Anim: anims\dartnew\railslideflat.jan: loaded (1 frames latency) Anim: 
anims\dartnew\trnlfcycle.jan: loaded (1 frames latency) WARN: (none(-1) SetNewAction LevelScript: no string passed Anim: anims\dartnew\mainmenu_jump.jan: loaded (1 frames latency) Anim: anims\menubrain\door1open.jan: loaded (1 frames latency) ERROR: Assert in /home/icculus/projects/psychonauts/Source/game/luatest/../../CommonLibs/Include/../DFGraphics/Color.h, line 96 v.x >= 0.0f && v.x <= 1.0f && v.y >= 0.0f && v.y <= 1.0f && v.z >= 0.0f && v.z <= 1.0f && v.w >= 0.0f && v.w <= 1.0f Encountered Error: Psychonauts has encountered an error /home/icculus/projects/psychonauts/Source/game/luatest/../../CommonLibs/Include/../DFGraphics/Color.h, line 96 v.x >= 0.0f && v.x <= 1.0f && v.y >= 0.0f && v.y <= 1.0f && v.z >= 0.0f && v.z <= 1.0f && v.w >= 0.0f && v.w <= 1.0f Please contact technical support at http://www.doublefine.com. I am currently using Bumblebee for hybrid graphics, if that helps in any way.

    Read the article

  • Now Available – Windows Azure SDK 1.6

    - by Shaun
Microsoft has just announced the Windows Azure SDK 1.6 and the Windows Azure Tools for Visual Studio 1.6. You can now download the latest product through the WebPI. After you download and install the SDK you will find that SDK 1.6 can stay side by side with SDK 1.5, which means you can still use the 1.5 assemblies, but the Visual Studio Tools will be upgraded to 1.6. Unlike the previous SDK, this version includes 4 components: Windows Azure Authoring Tools, Windows Azure Emulators, Windows Azure Libraries for .NET 1.6 and the Windows Azure Tools for Microsoft Visual Studio 2010.

There are some significant upgrades in this version:
- Publishing enhancement: connect to Windows Azure more easily when publishing your application by retrieving a publish settings file. It lets you configure some settings of the deployment without going back to the developer portal.
- Multi-profiles: the publish settings, cloud configuration files, etc. are stored in one or more MSBuild files, making it much easier to switch settings between various build environments.
- MSBuild command-line build support.
- In-place upgrade support.

Publishing Enhancement

So let's have a look at the new publishing features. Create a new Windows Azure project in Visual Studio 2010 with an MVC 3 Web Role, right-click the Windows Azure project node in Solution Explorer, then select Publish, and we will find the new publish dialog. In this version the first thing we need to do is connect to our Windows Azure subscription. When we click the "Sign in to download credentials" link, we are navigated to the login page to provide the Live ID. The Windows Azure Tool will generate a certificate file and upload it to the subscriptions that belong to us. Then we download a PUBLISHSETTINGS file, which contains the credentials and subscription information. The Visual Studio Tool generates a certificate and deploys it to our subscriptions as the Management Certificate; the VS Tool will use this certificate to connect to the subscription in the next step.

Next, back in Visual Studio (the publish dialog should still be open), click the Import button and select the PUBLISHSETTINGS file we just downloaded. All our subscriptions will then be shown in the dropdown list. Select the subscription to which the application should be published and press the Next button; then we can select the hosted service, environment, build configuration and service configuration shown in the dialog.

In this version we can create a new hosted service directly here rather than going back to the developer portal. Just select the <Create New …> item in the hosted service list. All we need to provide is the hosted service name and the location. Once we click OK, after several seconds the hosted service will be established. If we go to the developer portal we will find the new hosted service in the subscription.

a) Currently we cannot select the Affinity Group when creating a new hosted service through the Visual Studio Publish dialog.
b) Although we can specify the hosted service name and the DNS prefix separately through the developer portal, we cannot do so from the VS Tool, which means the DNS prefix will be the same as what we specified for the hosted service name. For example, we specified our hosted service name as "Sdk16Demo", so the public URL would be http://sdk16demo.cloudapp.net/.
After creating a new hosted service we can select the cloud environment (production or staging), the build configuration (release or debug), and the service configuration (cloud or local), and we can enable Remote Desktop by checking the related checkbox as well. One thing to note is that in this version, when we configure the Remote Desktop settings, we don't need to specify a certificate: Visual Studio will generate a new certificate for us by default. But we can still specify an existing certificate for RDC by clicking the "More Options" button. The Visual Studio Tool will create another certificate for the Remote Desktop connection; it will NOT use the certificate that manages the subscription. We can also go to the "Advanced Settings" page to specify the deployment label, storage account, IntelliTrace and .NET profiling information, etc. Press the Next button and the dialog will display all the settings we just specified and save them as a new profile. The last step is to click the Publish button. Since we enabled the Remote Desktop feature, the first step of publishing was uploading the certificate. It then verifies the storage account we specified, uploads the package, and finally creates the website in Windows Azure.

Multi-Profiles

After publishing, if we go back to Visual Studio we can find an AZUREPUBXML file under the Profiles folder in the Azure project. It includes all the settings we specified before. If we publish this project again, we can simply reuse the current settings (hosted service, environment, RDC, etc.) from this profile without entering them again. This is very useful when we have more than one set of deployment settings. For example, we could have one AZUREPUBXML profile for deploying to a testing environment (debug build, fewer roles, with RDC and IntelliTrace) and one for production (release build, more roles, but without IntelliTrace).

In-Place Upgrade Support

Let's change some code in the MVC pages and click the Publish menu from the Azure project node. There is no need to specify any settings; here we can use the previous settings by loading the Azure profile file (AZUREPUBXML). After we click the Publish button, the VS Tool brings up a dialog to indicate that there is already a deployment in the hosted service environment, and prompts us to REPLACE it or not. Notice that in this version the dialog said "replace" rather than "delete", which means that by default the VS Tool will use In-Place Upgrade when we deploy to a hosted service that already has a deployment. After clicking Yes, the VS Tool uploads the package and performs the In-Place Upgrade. If we go back to the developer portal we can see that the status of the hosted service has turned to "Updating…". In the previous SDK, it would instead try to delete the whole deployment and publish a new one.

Summary

When Microsoft announced the feature that allows changing the VM size via In-Place Upgrade, they also mentioned that in the next few versions the user experience of publishing Azure applications would be improved. The goal was to accomplish the whole publish experience in Visual Studio, with no need to touch the developer portal any more. With the new publish dialog in SDK 1.6, a developer can do the whole process - including creating the hosted service and specifying the environment, configuration, remote desktop and other values - without going back to the developer portal.
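One of the headline items above is MSBuild command-line build support, which the post does not demonstrate. A minimal sketch might look like the following; the project name and the exact target and output path are assumptions based on the Azure cloud project templates, so check your own .ccproj before relying on them.

    rem Build the cloud project and produce the deployment package from the command line
    rem (the package .cspkg and the .cscfg typically land under bin\Release\app.publish\)
    msbuild MyAzureProject.ccproj /t:Publish /p:Configuration=Release

This is handy for build servers, where the same profile-driven settings can be reused without opening Visual Studio.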
Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Java FAQ: Everything you need to know

    - by Bruno.Borges
I frequently get e-mails from customers asking "when will the next version of Java come out?", "when will Java expire?" or "what changes are in the next version?". So I decided to write a FAQ here, answering these questions and many others. This post will always be kept up to date, so if you have a question, send it to me on Twitter @brunoborges.

What is the difference between the Oracle JDK and OpenJDK?
The OpenJDK project serves as the open source reference implementation of Java Standard Edition. Companies such as Oracle, IBM and Azul Systems support and invest in the OpenJDK project to keep evolving the Java platform. The Oracle JDK is based on OpenJDK, but it brings additional tools such as Mission Control, and its virtual machine has some advanced features such as Flight Recorder. Up to version 6, Oracle offered two virtual machines: JRockit (BEA) and HotSpot (Sun). Starting with version 7, Oracle unified the virtual machines and brought the advanced JRockit features into the HotSpot VM. See also the OpenJDK FAQ.

Where can I get Early Access beta binaries of JDK 7, JDK 8 and JDK 9 for testing?
Under the OpenJDK umbrella there is a dedicated project for each Java version. In these projects you can find Early Access beta binaries as well as the source code.
JDK 6 - http://jdk6.java.net/
JDK 7 - http://jdk7.java.net/
JDK 8 - http://jdk8.java.net/
JDK 9 - http://jdk9.java.net/

When does support for Oracle Java SE 6, 7 and 8 end?
Only products and versions with an official release are supported by Oracle (for example, there is no support for JDK 7, JDK 8 or JDK 9 beta binaries). There are two categories of dates that Java users should be aware of:
- EOPU - End of Public Updates: the point at which Oracle no longer makes updates publicly available
- Oracle Support: Oracle's support policy for its products, including Oracle Java SE
Oracle Java SE is a product, and therefore its support periods are governed by the Oracle Lifetime Support Policy. Consult that document for current, version-specific dates. Oracle Java SE 6 has already reached EOPU (End of Public Updates) and is now maintained and updated only for customers with a commercial support contract. For more information, see the Oracle Java SE Support page. The most important thing here is to be aware of the EOPU dates for the Oracle Java SE versions. See the Oracle Java SE Support Roadmap page and look for the table named Java SE Public Updates; there you will find the date on which a given Java version reaches EOPU.

How does Java versioning work?
In 2013, Oracle announced a new Java versioning scheme to make it easy to identify whether a release is a CPU release or an LFR release, and also to simplify the planning and development of fixes and features for future versions.
- CPU - Critical Patch Update: updates with security fixes. The version number is a multiple of 5, or has 1 added to keep the number odd. Examples: 7u45, 7u51, 7u55.
- LFR - Limited Feature Release: updates with functional fixes, performance improvements and new features. Version numbers are even multiples of 20, ending in 0. Examples: 7u40, 7u60, 8u20.

When is the next Java SE security update (CPU)?
CPU releases are controlled and pre-scheduled by Oracle and apply to all products, including Oracle Java SE.
These releases happen every three months, always on the Tuesday closest to the 17th of January, April, July and October. Check the Critical Patch Updates, Security Alerts and Third Party Bulletin page for the upcoming dates. If you are interested, you can also receive these bulletins directly by e-mail; see how to subscribe to the Oracle Security Bulletin.

When is the next Java SE feature update (LFR)?
Oracle reserves the right not to disclose these dates, just as it does for all of its products. However, it is possible to follow the development of the next version on the OpenJDK project sites. The next JDK 7 version will be update 60, and Early Access beta binaries are already available for testing. The next JDK 8 version will be update 20, and Early Access beta binaries are already available for testing.

Where can I see the changes and what has been fixed for the next Java version?
Oracle publishes a changelog for each Early Access beta binary released on the Java.net portal:
- JDK 7 update 60 changelogs
- JDK 8 update 20 changelogs

When will the Java on my machine (or my user's machine) expire?
Knowing the Java versioning scheme and the cadence of CPU releases, you can work out when a given Java update will expire. In any case, with each new update Oracle states when that update is due to expire, directly in the version's release notes. For example, the release notes for Oracle Java SE 7 update 55 say the following in the JRE Expiration Date section:

The JRE expires whenever a new release with security vulnerability fixes becomes available. Critical patch updates, which contain security vulnerability fixes, are announced one year in advance on Critical Patch Updates, Security Alerts and Third Party Bulletin. This JRE (version 7u55) will expire with the release of the next critical patch update scheduled for July 15, 2014. For systems unable to reach the Oracle Servers, a secondary mechanism expires this JRE (version 7u55) on August 15, 2014. After either condition is met (new release becoming available or expiration date reached), the JRE will provide additional warnings and reminders to users to update to the newer version. For more information, see JRE Expiration Date.

In other words, version 7u55 will expire with the release of the next CPU, pre-scheduled for July 15, 2014. And if the user's computer cannot reach the Oracle servers, this version will be forcibly expired on August 15, 2014 (through a mechanism built into 7u55). Users are not required to update to LFR versions, so even with the release of 7u60 the current 7u55 version will not expire. See also the release notes for Oracle Java SE 8 update 5.

I found a bug. How can I report bugs or problems in Java SE to Oracle?
Whenever possible, test with the beta binaries before the final version is released. If you find any problem with these beta binaries, please describe it through the JDK Project Feedback forum. If you find a problem in a final Java version, use the Bug Report form. Important: bugs reported through these systems are not considered Support requests, so there is no response SLA. Oracle reserves the right to keep the bug public or private, and to inform or not inform the user about the progress of the fix.
I have a question that wasn't answered here. What should I do?
If you have a question that wasn't answered here, send it to bruno.borges_at_oracle.com and, if it is relevant, I will try to answer it in this article. For other questions, contact me on Twitter @brunoborges.
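As a small practical aside (not part of the original FAQ), the update you are running - and therefore which expiration date applies to you - can be read from the standard Java system properties. A minimal sketch:

    public class JavaVersionCheck {
        public static void main(String[] args) {
            // e.g. "1.7.0_55" -- the number after the underscore is the update (7u55)
            System.out.println("java.version         = " + System.getProperty("java.version"));
            // e.g. "1.7.0_55-b13" -- also includes the build number
            System.out.println("java.runtime.version = " + System.getProperty("java.runtime.version"));
            System.out.println("java.vendor          = " + System.getProperty("java.vendor"));
        }
    }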

    Read the article

  • A pseudo-listener for AlwaysOn Availability Groups for SQL Server virtual machines running in Azure

    - by MikeD
I am involved in a project that is implementing SharePoint 2013 on virtual machines hosted in Azure. The back end data tier consists of two Azure VMs running SQL Server 2012, with the SharePoint databases contained in an AlwaysOn Availability Group. I used the "Tutorial: AlwaysOn Availability Groups in Windows Azure (GUI)" to help me implement this setup.

Because Azure DHCP will not assign multiple unique IP addresses to the same VM, having an AG Listener in Azure is not currently supported. I wanted to figure out another mechanism to support a "pseudo listener" of some sort. First, I created a CNAME (alias) record in the DNS zone with a short TTL (time to live) of 5 minutes (I may yet make this even shorter). The record represents a logical name (let's say the alias is SPSQL) of the server to connect to for the databases in the availability group (AG). When Server1 was hosting the primary replica of the AG, I would set the CNAME of SPSQL to be SERVER1. When the AG failed over to Server2, I wanted to set the CNAME to SERVER2. Seemed simple enough. (It's important to point out that the connection strings for my SharePoint services should use the CNAME alias, and not the actual server name. This whole thing falls apart otherwise.)

To accomplish this, I created identical SQL Agent Jobs on Server1 and Server2, with two steps:

Step 1: Determine if this server is hosting the primary replica.

This is a TSQL step using this script:

declare @agName sysname = 'AGTest'
set nocount on
declare @primaryReplica sysname

select @primaryReplica = agState.primary_replica
from sys.dm_hadr_availability_group_states agState
   join sys.availability_groups ag on agState.group_id = ag.group_id
where ag.name = @agName

if not exists(
   select *
   from sys.dm_hadr_availability_group_states agState
   join sys.availability_groups ag on agState.group_id = ag.group_id
   where @@Servername = agState.primary_replica
     and ag.name = @agName)
begin
   raiserror ('Primary replica of %s is not hosted on %s, it is hosted on %s',17,1,@agName, @@Servername, @primaryReplica)
end

This script determines if the primary replica value of the AG is the same as the server name, which means that our server is hosting the current primary replica (you should update the value of the @agName variable to the name of your AG). If this is true, I want the DNS alias to point to this server. If the current server is not hosting the primary replica, then the script raises an error. Also, if the script can't be executed because it cannot connect to the server, that also will generate an error. For the job step settings, I set the On Failure option to "Quit the job reporting success". The next step in the job will set the DNS alias to this server name, and I only want to do that if I know that it is the current primary replica; otherwise I don't want to do anything. I also include the step output in the job history so I can see the error message.

Step 2: Update the CNAME entry in DNS with this server's name.

I used a PowerShell script to accomplish this:

$cname = "SPSQL.contoso.com"
$query = "Select * from MicrosoftDNS_CNAMEType"
$dns1 = "dc01.contoso.com"
$dns2 = "dc02.contoso.com"
if ((Test-Connection -ComputerName $dns1 -Count 1 -Quiet) -eq $true)
{
    $dnsServer = $dns1
}
elseif ((Test-Connection -ComputerName $dns2 -Count 1 -Quiet) -eq $true)
{
    $dnsServer = $dns2
}
else
{
    $msg = "Unable to connect to DNS servers: " + $dns1 + ", " + $dns2
    Throw $msg
}
$record = Get-WmiObject -Namespace "root\microsoftdns" -Query $query -ComputerName $dnsServer | ? { $_.Ownername -match $cname }
$thisServer = [System.Net.Dns]::GetHostEntry("LocalHost").HostName + "."
$currentServer = $record.RecordData
if ($currentServer -eq $thisServer)
{
    $cname + " CNAME is up to date: " + $currentServer
}
else
{
    $cname + " CNAME is being updated to " + $thisServer + ". It was " + $currentServer
    $record.RecordData = $thisServer
    $record.put()
}

This script does a few things:
- finds a responsive domain controller (Test-Connection does a ping and returns a Boolean value if you specify the -Quiet parameter)
- makes a WMI call to the domain controller to get the current CNAME record value (Get-WmiObject)
- gets the FQDN of this server (GetHostEntry)
- checks if the CNAME record is correct and updates it if necessary
(You should update the values of the variables $cname, $dns1 and $dns2 for your environment.)

Since my domain controllers are also hosted in Azure VMs, either one of them could be down at any point in time, so I need to find a DC that is responsive before attempting the DNS call. The other little thing here is that the CNAME record contains the FQDN of a machine, plus it ends with a period, so the comparison of the CNAME record has to take the trailing period into account.

When I tested this step, I was getting ACCESS DENIED responses from PowerShell for the Get-WmiObject cmdlet that does a remote lookup on the DC. This occurred because the SQL Agent service account was not a member of the Domain Admins group, so I decided to create a SQL Credential to store the credentials for a domain administrator account and use it as a PowerShell proxy (rather than give the service account Domain Admins membership). In SQL Management Studio, right click on the Credentials node (under the server's Security node), and choose New Credential... Then, under SQL Agent-->Proxies, right click on the PowerShell node and choose New Proxy... Finally, in the job step properties for the PowerShell step, select the new proxy in the Run As drop down.

I created this two-step job on both nodes of the Availability Group, but if you had more than two nodes, just create the same job on all the servers. I set the schedule for the job to execute every minute. When the server that is hosting the primary replica is running the job, the job history looks like this (screenshot in the original post). The job history on the secondary server looks like this (screenshot in the original post).

When a failover occurs, the SQL Agent job on the new primary replica will detect that the CNAME needs to be updated within a minute. Based on the TTL of the CNAME (which I said at the beginning was 5 minutes), the SharePoint servers will get the new alias within five minutes and should be able to reconnect. I may want to shorten the TTL to reduce the time it takes for the client connections to use the new alias. Using a DNS CNAME and a SQL Agent job on all servers hosting AG replicas, I was able to create a pseudo-listener to automatically change the name of the server that was hosting the primary replica, for a scenario where I cannot use a regular AG listener (in this case, because the servers are all hosted in Azure).
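As a small illustration of the point about connection strings (the database name here is an assumption, not taken from the post), the SharePoint or ADO.NET connection string would reference the alias rather than either physical server:

    Data Source=SPSQL.contoso.com;Initial Catalog=WSS_Content;Integrated Security=True

That way, when the CNAME is repointed after a failover, clients follow it automatically once the cached DNS record expires.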

    Read the article

  • SPARC T3-1 Record Results Running JD Edwards EnterpriseOne Day in the Life Benchmark with Added Batch Component

    - by Brian
Using Oracle's SPARC T3-1 server for the application tier and Oracle's SPARC Enterprise M3000 server for the database tier, a world record result was produced running Oracle's JD Edwards EnterpriseOne applications Day in the Life benchmark concurrently with a batch workload.

- The SPARC T3-1 server based result has 25% better performance than the IBM Power 750 POWER7 server, even though the IBM result did not include running a batch component.
- The SPARC T3-1 server based result has 25% better space/performance than the IBM Power 750 POWER7 server as measured by the online component.
- The SPARC T3-1 server based result is 5x faster than the x86-based IBM x3650 M2 server system when executing the online component of the JD Edwards EnterpriseOne 9.0.1 Day in the Life benchmark. The IBM result did not include a batch component.
- The SPARC T3-1 server based result has 2.5x better space/performance than the x86-based IBM x3650 M2 server as measured by the online component.
- The combination of SPARC T3-1 and SPARC Enterprise M3000 servers delivered a Day in the Life benchmark result of 5000 online users with 0.875 seconds of average transaction response time running concurrently with 19 Universal Batch Engine (UBE) processes at 10 UBEs/minute.

The solution exercises various JD Edwards EnterpriseOne applications while running Oracle WebLogic Server 11g Release 1 and the Oracle Web Tier Utilities 11g HTTP server in Oracle Solaris Containers, together with Oracle Database 11g Release 2. The SPARC T3-1 server showed that it could handle the additional workload of batch processing while maintaining the same number of online users for the JD Edwards EnterpriseOne Day in the Life benchmark, with minimal loss in response time. JD Edwards EnterpriseOne 9.0.1 takes advantage of the large number of compute threads available in the SPARC T3-1 server at the application tier and achieves excellent response times. The SPARC T3-1 server consolidates the application/web tier of the JD Edwards EnterpriseOne 9.0.1 application using Oracle Solaris Containers. Containers provide flexibility, easier maintenance and better CPU utilization of the server, leaving processing capacity for additional growth.

A number of advanced Oracle technologies and features were used to obtain this result: Oracle Solaris 10, Oracle Solaris Containers, Oracle Java HotSpot Server VM, Oracle WebLogic Server 11g Release 1, Oracle Web Tier Utilities 11g, Oracle Database 11g Release 2, and the SPARC T3 and SPARC64 VII+ based servers. This is the first published result running both the online and batch workloads concurrently on the JD Edwards Enterprise Application server. No published results are available from IBM running the online component together with a batch workload. The 9.0.1 version of the benchmark saw some minor performance improvements relative to 9.0; when comparing 9.0.1 and 9.0 results, the reader should take this into account when the difference between results is small.

Performance Landscape

JD Edwards EnterpriseOne Day in the Life Benchmark, Online with Batch Workload
This is the first publication on the Day in the Life benchmark run concurrently with batch jobs. The batch workload was provided by Oracle's Universal Batch Engine.
System: SPARC T3-1 (1 x SPARC T3, 1.65 GHz, Solaris 10) + M3000 (1 x SPARC64 VII+, 2.86 GHz, Solaris 10)
  Rack Units: 4 | Online Users: 5000 | Resp Time (sec): 0.88 | Batch Concur (# of UBEs): 19 | Batch Rate (UBEs/m): 10 | Version: 9.0.1

Resp Time (sec) — Response time of online jobs, reported in seconds
Batch Concur (# of UBEs) — Batch concurrency, presented as the number of UBEs
Batch Rate (UBEs/m) — Batch transaction rate in UBEs/minute

JD Edwards EnterpriseOne Day in the Life Benchmark, Online Workload Only
These results are for the Day in the Life benchmark, run without any batch workload.

System: SPARC T3-1 (1 x SPARC T3, 1.65 GHz, Solaris 10) + M3000 (1 x SPARC64 VII, 2.75 GHz, Solaris 10)
  Rack Units: 4 | Online Users: 5000 | Response Time (sec): 0.52 | Version: 9.0.1
System: IBM Power 750 (1 x POWER7, 3.55 GHz, IBM i7.1)
  Rack Units: 4 | Online Users: 4000 | Response Time (sec): 0.61 | Version: 9.0
System: IBM x3650 M2 (2 x Intel X5570, 2.93 GHz, OVM)
  Rack Units: 2 | Online Users: 1000 | Response Time (sec): 0.29 | Version: 9.0

IBM result from http://www-03.ibm.com/systems/i/advantages/oracle/; IBM used WebSphere.

Configuration Summary

Hardware Configuration:
- 1 x SPARC T3-1 server: 1 x 1.65 GHz SPARC T3, 128 GB memory, 16 x 300 GB 10000 RPM SAS, 1 x Sun Flash Accelerator F20 PCIe Card (92 GB), 1 x 10 GbE NIC
- 1 x SPARC Enterprise M3000 server: 1 x 2.86 GHz SPARC64 VII+, 64 GB memory, 1 x 10 GbE NIC, 2 x StorageTek 2540 + 2501

Software Configuration:
- JD Edwards EnterpriseOne 9.0.1 with Tools 8.98.3.3
- Oracle Database 11g Release 2
- Oracle WebLogic Server 11g Release 1, version 10.3.2
- Oracle Web Tier Utilities 11g
- Oracle Solaris 10 9/10
- Mercury LoadRunner 9.10 with Oracle Day in the Life kit for JD Edwards EnterpriseOne 9.0.1
- Oracle's Universal Batch Engine - Short UBEs and Long UBEs

Benchmark Description

JD Edwards EnterpriseOne is an integrated applications suite of Enterprise Resource Planning (ERP) software. Oracle offers 70 JD Edwards EnterpriseOne application modules to support a diverse set of business operations. Oracle's Day in the Life (DIL) kit is a suite of scripts that exercises the most common transactions of JD Edwards EnterpriseOne applications, including business processes such as payroll, sales order, purchase order, work order, and other manufacturing processes such as ship confirmation. These are labeled by industry acronyms such as SCM, CRM, HCM, SRM and FMS. The kit's scripts execute transactions typical of a mid-sized manufacturing company. The workload consists of online transactions and a UBE workload of 15 short and 4 long UBEs. LoadRunner runs the DIL workload, collects the users' transaction response times and reports the key metric of Combined Weighted Average Transaction Response time. The UBE workload runs from the JD Edwards Enterprise Application server.

Oracle's UBE processes come in three flavors: Short UBEs (< 1 minute) engage in Business Report and Summary Analysis; Mid UBEs (> 1 minute) create a large report of Account, Balance, and Full Address; Long UBEs (> 2 minutes) simulate Payroll, Sales Order, and night-only jobs. The UBE workload generates large numbers of PDF reports and log files. The UBE queues are categorized as QBATCHD, a single-threaded queue for large UBEs, and the QPROCESS queue for short UBEs run concurrently. One of the Oracle Solaris Containers ran 4 Long UBEs, while another Container ran 15 short UBEs concurrently. The mixed-size UBEs ran concurrently from the SPARC T3-1 server with the 5000 online users driven by LoadRunner. Oracle's UBE process performance metric is the Number of Maximum Concurrent UBE processes at a given transaction rate, in UBEs/minute.
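The description above notes that the application/web tier and the UBE queues were split across separate Oracle Solaris Containers. The post does not show how those containers were built, but a rough sketch of creating one zone on Solaris 10 looks like this (the zone name and path are assumptions for illustration only):

    # Define a zone for one WebLogic/JD Edwards application server instance
    zonecfg -z appzone1 "create; set zonepath=/zones/appzone1; set autoboot=true"

    # Install and boot the zone, then connect to its console to finish configuration
    zoneadm -z appzone1 install
    zoneadm -z appzone1 boot
    zlogin -C appzone1

Repeating this for each application and web server instance gives the four-container layout described in the Key Points section below.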
    Key Points and Best Practices

    Two JD Edwards EnterpriseOne Application Servers and two Oracle Fusion Middleware WebLogic Server 11g R1 instances, coupled with two Oracle Fusion Middleware 11g Web Tier HTTP server instances, were hosted in four separate Oracle Solaris Containers on the SPARC T3-1 server to demonstrate consolidation of multiple application and web servers (a minimal zonecfg sketch appears at the end of this entry).

    See Also
    SPARC T3-1 - oracle.com
    SPARC Enterprise M3000 - oracle.com
    Oracle Solaris - oracle.com
    JD Edwards EnterpriseOne - oracle.com
    Oracle Database 11g Release 2 Enterprise Edition - oracle.com

    Disclosure Statement

    Copyright 2011, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 6/27/2011.
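    As a rough illustration of the consolidation approach described under Key Points and Best Practices, the following is a minimal sketch of how one Oracle Solaris 10 Container (zone) hosting a WebLogic instance might be defined. The zone name, zonepath, network interface, IP address, and CPU cap are illustrative assumptions, not the benchmark team's actual configuration.

        # Sketch only: create and boot one Solaris 10 Container for a WebLogic instance.
        # Zone name, paths, NIC, address, and CPU cap are assumptions for illustration.
        zonecfg -z weblogic1 <<'EOF'
        create -b
        set zonepath=/zones/weblogic1
        set autoboot=true
        set ip-type=shared
        add net
        set physical=ixgbe0
        set address=192.168.10.11/24
        end
        add capped-cpu
        set ncpus=16
        end
        commit
        EOF
        zoneadm -z weblogic1 install
        zoneadm -z weblogic1 boot
        zlogin weblogic1    # then install and start WebLogic inside the zone

    Repeating this pattern gives each application or web server instance its own isolated container while all of them share the server's many compute threads, which is the flexibility and utilization benefit the article attributes to Oracle Solaris Containers.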

    Read the article

  • Oracle Expands Sun Blade Portfolio for Cloud and Highly Virtualized Environments

    - by Ferhat Hatay
    Oracle announced the expansion of the Sun Blade portfolio for cloud and highly virtualized environments, delivering powerful performance and simplified management as tightly integrated systems. Along with the SPARC T3-1B blade server, the Oracle VM blade cluster reference configuration and Oracle's optimized solution for Oracle WebLogic Suite, Oracle introduced the dual-node Sun Blade X6275 M2 server module with some impressive benchmark results.

    Benchmarks on the Sun Blade X6275 M2 server module demonstrate the outstanding performance characteristics critical for running the varied commercial applications used in cloud and highly virtualized environments. These include best-in-class SPEC CPU2006 results with the Intel Xeon processor 5600 series, six Fluent world records, and 1.8 times the price/performance of the IBM Power 755 running NAMD, a prominent bio-informatics workload.

    Benchmarks for the Sun Blade X6275 M2 server module

    SPEC CPU2006

    The Sun Blade X6275 M2 server module demonstrated the best SPECint_rate2006 result of all published results using the Intel Xeon processor 5600 series, with a score of 679. This result is 97% better than the HP BL460c G7 blade, 80% better than the IBM HS22V blade, and 79% better than the Dell M710 blade, and demonstrates the density advantage of Oracle's new server module for space-constrained data centers.

    Sun Blade X6275 M2 (2 nodes, Intel Xeon X5670, 2.93 GHz) - 679 SPECint_rate2006
    HP ProLiant BL460c G7 (Intel Xeon X5670, 2.93 GHz) - 347 SPECint_rate2006
    IBM BladeCenter HS22V (Intel Xeon X5680) - 377 SPECint_rate2006
    Dell PowerEdge M710 (Intel Xeon X5680, 3.33 GHz) - 380 SPECint_rate2006

    SPEC, SPECint, and SPECfp are registered trademarks of the Standard Performance Evaluation Corporation. Results from www.spec.org as of 11/24/2010 and this report. For more specifics about these results, please see http://blogs.sun.com/BestPerf.

    Fluent

    The Sun Blade X6275 M2 server module produced world-record results on each of the six standard cases in the current "FLUENT 12" benchmark test suite at 8-, 12-, 24-, 32-, 64- and 96-core configurations. These results beat the most recent QLogic scores posted on IBM DX 360 M series platforms with QLogic TrueScale interconnects. Results on the sedan_4m test case on the Sun Blade X6275 M2 server module are 23% better than the HP C7000 system and 20% better than the IBM DX 360 M2; Dell has not posted a result for this test case. Results can be found at the FLUENT website.

    ANSYS's FLUENT software solves fluid flow problems and is based on a numerical technique called computational fluid dynamics (CFD), which is used in the automotive, aerospace, and consumer products industries. The FLUENT 12 benchmark test suite consists of models that are well suited for multi-node clustered environments and representative of modern engineering CFD clusters. Vendors benchmark their systems with the principal objective of providing comparative performance information for FLUENT software that, among other things, depends on compilers, optimization, interconnect, and the performance characteristics of the hardware.

    FLUENT application performance is representative of other commercial applications that require memory and CPU resources to be available in a scalable, cluster-ready format. The FLUENT benchmark has six conventional test cases (eddy_417k, turbo_500k, aircraft_2m, sedan_4m, truck_14m, truck_poly_14m) at various core counts.
    All information on the FLUENT website (http://www.fluent.com) is Copyright 1995-2010 by ANSYS Inc. Results as of November 24, 2010. For more specifics about these results, please see http://blogs.sun.com/BestPerf.

    NAMD

    Results on the Sun Blade X6275 M2 server module running NAMD (a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems) show up to 1.8x better price/performance than IBM's POWER7-based system. For space-constrained environments, the ultra-dense Sun Blade X6275 M2 server module provides 1.7x better price/performance per rack unit than IBM's system.

    IBM Power 755 4-way cluster (16U): total price for cluster $324,212. See IBM United States Hardware Announcement 110-008, dated February 9, 2010, pp. 4, 21 and 39-46.
    Sun Blade X6275 M2 8-blade cluster (10U): total price for cluster $193,939.

    Price/performance and performance/RU comparisons are based on f1ATPase molecule test results: Sun Blade X6275 M2 cluster $3,568/step/sec and 5.435 step/sec/RU; IBM Power 755 cluster $6,355/step/sec and 3.189 step/sec/RU. See http://www-03.ibm.com/systems/power/hardware/reports/system_perf.html and http://www.ks.uiuc.edu/Research/namd/performance.html for more information; results as of 11/24/10. For more specifics about these results, please see http://blogs.sun.com/BestPerf. (A short arithmetic check of these figures appears at the end of this entry.)

    Reverse Time Migration

    Reverse Time Migration is heavily used in geophysical imaging and modeling for oil and gas exploration. The Sun Blade X6275 M2 server module showed up to a 40% performance improvement over the previous-generation server module, with super-linear scalability to 16 nodes for the 9-point stencil used in this Reverse Time Migration computational kernel.

    The balanced combination of Oracle's Sun Storage 7410 system with the Sun Blade X6275 M2 server module cluster showed linear scalability for total application throughput, including the I/O and MPI communication, to produce a final 3-D seismic depth-imaged cube for interpretation. The final image write from the Sun Blade X6275 M2 server module nodes to Oracle's Sun Storage 7410 system achieved 10 GbE line speed of 1.25 GBytes/second or better. Between subsequent runs, the effects of I/O buffer caching on the Sun Blade X6275 M2 server module nodes and write-optimized caching on the Sun Storage 7410 system gave up to 1.8 GBytes/second effective write performance. The performance results and characterization of this Reverse Time Migration benchmark could serve as a useful measure for many other I/O-intensive commercial applications.

    3D VTI Reverse Time Migration Seismic Depth Imaging: see http://blogs.sun.com/BestPerf/entry/3d_vti_reverse_time_migration for more information; results as of 11/14/2010.
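    For readers who want to see how the NAMD price/performance figures relate to the published cluster prices and per-rack-unit throughput, here is a small shell/awk sanity check. The numbers are taken directly from the text above; the derivation (price divided by total step/sec, with total step/sec taken as step/sec per rack unit times rack units) is an assumption about how the figures were computed, not a statement from the original publication.

        # Recompute the NAMD price/performance ratios from the figures quoted above.
        # Assumes total throughput = (step/sec per rack unit) x (rack units).
        awk 'BEGIN {
            sun_price = 193939; sun_ru = 10; sun_perf_ru = 5.435
            ibm_price = 324212; ibm_ru = 16; ibm_perf_ru = 3.189
            sun_perf = sun_perf_ru * sun_ru          # ~54.4 step/sec
            ibm_perf = ibm_perf_ru * ibm_ru          # ~51.0 step/sec
            printf "Sun: %.0f $/(step/sec)   IBM: %.0f $/(step/sec)\n",
                   sun_price / sun_perf, ibm_price / ibm_perf
            printf "price/performance advantage: %.2fx\n",
                   (ibm_price / ibm_perf) / (sun_price / sun_perf)
            printf "performance per RU advantage: %.2fx\n", sun_perf_ru / ibm_perf_ru
        }'

    Under that assumption the output is roughly $3,568 and $6,354 per step/sec, a 1.78x price/performance advantage and a 1.70x performance-per-RU advantage, consistent with the $3,568, $6,355, 1.8x and 1.7x figures quoted above.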

    Read the article

  • NVIDIA X server - "sudo nvidia-xconfig" does not generate a working 'xorg.conf'

    - by Mike
    I am over 18 hours deep on this challenge. I got to this point and am stuck. Very stuck. Maybe you can figure it out? Ubuntu version: 12.04 LTS with all the updates installed.

    Problem: The default settings in "/etc/X11/xorg.conf" generated by the "nvidia-xconfig" tool do not allow the NVIDIA X server to connect to the driver shown in my "System Settings > Additional Drivers" window (that's how I understand it; lots of information below).

    Symptoms of the problem:

    1. The "System Settings > Additional Drivers" window lists drivers, but the NVIDIA X server cannot connect to or use any of the four drivers. The drivers are activated but not in use.
    2. When I go to "System Tools > Administration > NVIDIA X Server Settings" I get an error that basically tells me to create a default file to initialize the NVIDIA X server (screen shot below).
    3. This is the message the terminal gives after running the "sudo nvidia-xconfig" command for the first time. It seems that the file generated by the tool I just ran is bad/unusable:
    4. If I run the "sudo nvidia-xconfig" command again, I won't get an error the second time. However, when I reboot, the default file that is generated (/etc/X11/xorg.conf) simply puts the screen resolution at 800 x 600 (or something big like that). When I try to go to NVIDIA X Server Settings I am greeted with the same screen as in symptom 2 (no option to change the resolution). If I try to go to "System Settings > Displays" there are no other resolutions to choose from. At this point I must delete the newly minted "xorg.conf" and reinstate the original in its place.

    Here are the contents of the "xorg.conf" that is generated first (the one missing required information):

        # nvidia-xconfig: X configuration file generated by nvidia-xconfig
        # nvidia-xconfig: version 304.88 (buildmeister@swio-display-x86-rhel47-06) Wed Mar 27 15:32:58 PDT 2013

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0"
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "Unknown"
            HorizSync       28.0 - 33.0
            VertRefresh     43.0 - 72.0
            Option         "DPMS"
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
        EndSection

        Section "Screen"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth    24
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection

    Hardware: I ran "lspci | grep VGA". The results are:

        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [Quadro 1000M] (rev a1)

    More hardware info:
    RAM: 16 GB
    CPU: Intel Core i7-2720QM @ 2.20 GHz x 8
    Other: 64-bit. This is a triple-boot computer and not a VM.

    Attempts without success on my end:
    1) Tried to append the "xorg.conf" with what I perceive is the missing information, and obviously it didn't fly.
    2) All the other stuff I tried got me to this point.
3) See if this link is helpful to you (I barely get it, but I get enough to know that a smarter person might find it useful): http://manpages.ubuntu.com/manpages/lucid/man1/nvidia-xconfig.1.html
4) I am completely new to Linux (40 hours over the past week), but not to programming. However, I am very serious about changing over to Linux. When you respond (I hope someone responds...) please respond in a way that a person new to Linux can understand.
5) By the way, the reason I am in this mess is because I MUST have a second monitor running from my laptop, and "System Settings > Displays" doesn't recognize my second display. I know it is possible to make the second display work on my system, because when I boot from the install CD, I perform work on the native laptop monitor while the second monitor shows a purple screen with Ubuntu in the middle, so I know the VGA port is sending a signal out. If this is too much for you to tackle, please suggest an alternative method to get a second display. I don't want to go back to Windows, but I cannot have a single display. I am really fudged here. I hope some smart person can help. Thanks in advance. Mike.

********** EDIT #1: More details about the graphics card **********

I was asked "which brand of nvidia-card do you have exactly?" Here is what I did to provide more info (maybe relevant, maybe not, but here is everything):
1) Took my Lenovo W520 right apart to see if there is an identifier on the actual card. However, I realized that if I get deep enough to take a look, the laptop "won't like it", so I put it back together. Figuring out the card this way is not an option for me right now.
2) (My computer is triple-boot.) I logged into Win7 and ran the 'dxdiag' command. Here is the screen shot:
3) I tried to look on the Lenovo website for more details... but no luck. I took a look at my receipts, and here is the info from the receipt: System Unit: W520 NVIDIA Quadro 1000M 2GB
4) In Win7 I went to the NVIDIA website and used the option to have my card 'scanned' by a Java applet to determine the latest update for my card. I tried the same with Ubuntu but I can't get the applet to run. Here is the recommended driver from the NVIDIA applet for my card on Win7 (I hope this shines some light on the specifics of the card): Quadro/NVS/Tesla/GRID Desktop Driver Release R319, Version 320.00 WHQL, Release Date 3.5.2013
5) Also, I went through the NVIDIA driver search and looked through every possible combination of product type + product series + product to find all the combinations that yield a 1000M card. My card is: Product Type: Quadro; Product Series: Quadro Series (Notebooks); Product: 1000M

********** EDIT #2: Additional symptoms **********

Another question that generated more symptoms I previously didn't mention was: "After generating xorg.conf by nvidia-xconfig, go to additional drivers, do you see nvidia-304?"
1) I took a screen shot of "Additional Drivers" right after generating xorg.conf with nvidia-xconfig. Here it is:
2) Then I did a reboot. Now Ubuntu is at 800 x 600 resolution. When I logged in after the computer came up, I got an error (which I always get after generating xorg.conf with nvidia-xconfig and rebooting).
3) To finally answer the question - no, there is no "nvidia-304" driver. Screen shot of Additional Drivers after generating xorg.conf with nvidia-xconfig and rebooting:
At this point I revert to the original xorg.conf and delete the xorg.conf generated by nvidia-xconfig.
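
    One adjustment that is often suggested for this class of symptom, offered here only as a sketch and not a verified fix for this machine, is to point the nvidia driver explicitly at the discrete GPU's PCI bus ID. The bus ID below is derived from the lspci output quoted above (01:00.0 becomes PCI:1:0:0 in xorg.conf notation); whether it helps on this particular laptop is an assumption.

        # Sketch only: derive the bus ID from lspci (01:00.0 -> PCI:1:0:0) and let
        # nvidia-xconfig write it into the Device section of /etc/X11/xorg.conf.
        lspci | grep -i vga
        sudo nvidia-xconfig --busid=PCI:1:0:0
        # The resulting Device section would then look roughly like:
        #   Section "Device"
        #       Identifier "Device0"
        #       Driver     "nvidia"
        #       VendorName "NVIDIA Corporation"
        #       BusID      "PCI:1:0:0"
        #   EndSection

    On hybrid Intel/NVIDIA laptops of this era, such as the W520, a BusID alone is usually not enough unless the discrete GPU is selected in the BIOS graphics settings; with Optimus mode left enabled, the 12.04-era proprietary driver typically needs additional tooling such as Bumblebee to drive the displays.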

    Read the article
