Search Results

Search found 4586 results on 184 pages for 'supported'.


  • java.awt.Desktop.open doesn’t work with PDF files?

    - by Jason S
    It looks like I cannot use Desktop.open() on PDF files regardless of location. Here's a small test program:

        package com.example.bugs;

        import java.awt.Desktop;
        import java.io.File;
        import java.io.IOException;

        public class DesktopOpenBug {
            static public void main(String[] args) {
                try {
                    Desktop desktop = null;
                    // Before more Desktop API is used, first check
                    // whether the API is supported by this particular
                    // virtual machine (VM) on this particular host.
                    if (Desktop.isDesktopSupported()) {
                        desktop = Desktop.getDesktop();
                        for (String path : args) {
                            File file = new File(path);
                            System.out.println("Opening " + file);
                            desktop.open(file);
                        }
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }

    If I run DesktopOpenBug with arguments c:\tmp\zz1.txt c:\tmp\zz.xml c:\tmp\ss.pdf (3 files I happen to have lying around) I get this result (the .txt and .xml files open up fine):

        Opening c:\tmp\zz1.txt
        Opening c:\tmp\zz.xml
        Opening c:\tmp\ss.pdf
        java.io.IOException: Failed to open file:/c:/tmp/ss.pdf. Error message: The parameter is incorrect.
            at sun.awt.windows.WDesktopPeer.ShellExecute(Unknown Source)
            at sun.awt.windows.WDesktopPeer.open(Unknown Source)
            at java.awt.Desktop.open(Unknown Source)
            at com.example.bugs.DesktopOpenBug.main(DesktopOpenBug.java:21)

    What the heck is going on? I'm running WinXP; I can type "c:\tmp\ss.pdf" at the command prompt and it opens up just fine. Edit: if this is an example of Sun Java bug #6764271, please help by voting for it. What a pain. :(
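
    A hedged workaround sketch (not from the original post; it assumes a Windows host, and uses cmd.exe's built-in "start" to delegate to the shell's file association when Desktop.open() fails):

        import java.io.File;
        import java.io.IOException;

        public class DesktopOpenFallback {
            // Try Desktop.open() first; on failure, shell out via cmd /c start.
            // The empty "" argument is start's window-title parameter.
            public static void open(File file) throws IOException {
                try {
                    java.awt.Desktop.getDesktop().open(file);
                } catch (IOException e) {
                    new ProcessBuilder("cmd", "/c", "start", "\"\"",
                            file.getAbsolutePath()).start();
                }
            }
        }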


  • localhost yes but phpmyadmin blank

    - by Giskin Leow
    WAMP users often have a problem where both localhost and phpMyAdmin load blank pages, which is usually a port problem. Mine is different: only phpMyAdmin is blank; SQL Buddy and phpinfo work with no problem. I tried uninstalling and reinstalling WAMP, and I tried XAMPP: same problem, all works well except phpMyAdmin.

    MySQL log:

        120905  8:03:08 [Note] Plugin 'FEDERATED' is disabled.
        120905  8:03:08 InnoDB: The InnoDB memory heap is disabled
        120905  8:03:08 InnoDB: Mutexes and rw_locks use Windows interlocked functions
        120905  8:03:08 InnoDB: Compressed tables use zlib 1.2.3
        120905  8:03:09 InnoDB: Initializing buffer pool, size = 128.0M
        120905  8:03:09 InnoDB: Completed initialization of buffer pool
        120905  8:03:09 InnoDB: highest supported file format is Barracuda.
        120905  8:03:09 InnoDB: Waiting for the background threads to start
        120905  8:03:10 InnoDB: 1.1.8 started; log sequence number 1595675
        120905  8:03:11 [Note] Server hostname (bind-address): '(null)'; port: 3306
        120905  8:03:11 [Note]   - '(null)' resolves to '::';
        120905  8:03:11 [Note]   - '(null)' resolves to '0.0.0.0';
        120905  8:03:11 [Note] Server socket created on IP: '0.0.0.0'.
        120905  8:03:13 [Note] Event Scheduler: Loaded 0 events
        120905  8:03:13 [Note] wampmysqld: ready for connections.

    Apache log:

        [Wed Sep 05 08:03:09 2012] [notice] Apache/2.2.22 (Win32) PHP/5.4.3 configured -- resuming normal operations
        [Wed Sep 05 08:03:09 2012] [notice] Server built: May 13 2012 13:32:42
        [Wed Sep 05 08:03:09 2012] [notice] Parent: Created child process 3812
        [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Child process is running
        [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Acquired the start mutex.
        [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Starting 64 worker threads.
        [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Starting thread to listen on port 80.
        [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Starting thread to listen on port 80.
        [Wed Sep 05 08:04:14 2012] [error] [client 127.0.0.1] File does not exist: C:/wamp/www/favicon.ico
        [Wed Sep 05 08:09:50 2012] [error] [client 127.0.0.1] File does not exist: C:/wamp/www/favicon.ico
        [Wed Sep 05 08:41:03 2012] [error] [client 127.0.0.1] File does not exist: C:/wamp/www/phpMyAdmin


  • Which Android hardware devices should I test on? [closed]

    - by Tchami
    Possible Duplicate: What hardware devices do you test your Android apps on?

    I'm trying to compile a list of Android hardware devices that it would make sense to buy and test against if you want to target as broad an audience as possible, while still not buying every single Android device out there. I know there's a lot of information regarding screen sizes and Android versions available elsewhere, but when developing for Android it's not terribly useful to know whether the screen size of a device is 480x800 or 320x240, unless you feel like doing the math to convert that into Android "units" (i.e. small, normal, large or xlarge screens, and ldpi, mdpi, hdpi or xhdpi densities). Even knowing the dimensions of a device, you cannot be sure of the actual Android units, as there's some overlap; see Range of screens supported in the Android documentation.

    Taking into account the distribution of Platform versions and Screen Sizes and Densities, below is my current list, based on information from the Wikipedia article Comparison of Android devices. I'm fairly sure the information in this list is correct, but I'd welcome any suggestions/changes.

    Phones

        | Model                   | Android Version | Screen Size | Density |
        | HTC Wildfire            | 2.1/2.2         | Normal      | mdpi    |
        | HTC Tattoo              | 1.6             | Normal      | mdpi    |
        | HTC Hero                | 2.1             | Normal      | mdpi    |
        | HTC Legend              | 2.1             | Normal      | mdpi    |
        | Sony Ericsson Xperia X8 | 1.6/2.1         | Normal      | mdpi    |
        | Motorola Droid          | 2.0-2.2         | Normal      | hdpi    |
        | Samsung Galaxy S II     | 2.3             | Normal      | hdpi    |
        | Samsung Galaxy Nexus    | 4.0             | Normal      | xhdpi   |
        | Samsung Galaxy S III    | 4.0             | Normal      | xhdpi   |

    Tablets

        | Model                   | Android Version | Screen Size | Density |
        | Samsung Galaxy Tab 7"   | 2.2             | Large       | hdpi    |
        | Samsung Galaxy Tab 10"  | 3.0             | X-Large     | mdpi    |
        | Asus Transformer Prime  | 4.0             | X-Large     | mdpi    |
        | Motorola Xoom           | 3.1/4.0         | X-Large     | mdpi    |

    N.B.: I have seen (and read) other posts on SO on this subject, e.g. "Which Android devices should I test against?" and "What hardware devices do you test your Android apps on?", but they don't seem very canonical. Maybe this should be marked community wiki?


  • How to store a captured image into MySQL database using JavaScript

    - by R J.
    I am capturing an image using canvas and I want to store the captured image in a MySQL database using JavaScript. This is my code:

        <html>
        <head>
        <meta charset="utf-8">
        <meta name="viewport" content="width=device-width, maximum-scale=1.0">
        <style>
          body {width: 100%;}
          canvas {display: none;}
        </style>
        <title>Instant Camera - Remote</title>
        <script>
        var video, canvas, msg;
        var load = function () {
          video = document.getElementById('video');
          canvas = document.getElementById('canvas');
          msg = document.getElementById('error');
          if (navigator.getUserMedia) {
            video.onclick = function () {
              var context = canvas.getContext("2d");
              context.drawImage(video, 0, 0, 240, 320);
              var image1 = canvas.toDataURL("image/png");
              document.write('<img src="' + image1 + '" />');
            };
          } else {
            msg.innerHTML = "Native web camera not supported :(";
          }
        };
        window.addEventListener('DOMContentLoaded', load, false);
        </script>
        </head>
        <body>
          <video id="video" width="240" height="320" autoplay></video>
          <p id="error">Click on the video to send a snapshot to the receiving screen</p>
          <canvas id="canvas" width="240" height="320"></canvas>
        </body>
        </html>
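
    For what it's worth, browser JavaScript cannot talk to MySQL directly; the usual pattern is to POST the data URL to a server-side script that performs the INSERT. A hedged sketch (the save_image.php endpoint name is hypothetical, not from the original post):

        // Send the canvas snapshot to a server-side endpoint, which would
        // decode the base64 payload and INSERT it into MySQL.
        var image1 = canvas.toDataURL("image/png");
        var xhr = new XMLHttpRequest();
        xhr.open("POST", "save_image.php", true);
        xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
        // The data URL looks like "data:image/png;base64,...."; the server
        // strips the prefix and base64-decodes the rest before storing it.
        xhr.send("image=" + encodeURIComponent(image1));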


  • Pthread - setting scheduler parameters

    - by Andna
    I wanted to use the reader-writer locks from the pthread library in a way that gives writers priority over readers. I read in my man pages that

        If the Thread Execution Scheduling option is supported, and the threads involved
        in the lock are executing with the scheduling policies SCHED_FIFO or SCHED_RR,
        the calling thread shall not acquire the lock if a writer holds the lock or if
        writers of higher or equal priority are blocked on the lock; otherwise, the
        calling thread shall acquire the lock.

    so I wrote a small function that sets up the thread scheduling options.

        void thread_set_up(int _thread) {
            struct sched_param *_param = malloc(sizeof(struct sched_param));
            int *c = malloc(sizeof(int));
            *c = sched_get_priority_min(SCHED_FIFO) + 1;
            _param->__sched_priority = *c;
            long *a = malloc(sizeof(long));
            *a = syscall(SYS_gettid);
            int *b = malloc(sizeof(int));
            *b = SCHED_FIFO;
            if (pthread_setschedparam(*a, *b, _param) == -1) {
                // depending on which thread calls this function, a few things can happen
                if (_thread == MAIN_THREAD)
                    client_cleanup();
                else if (_thread == ACCEPT_THREAD) {
                    pthread_kill(params.main_thread_id, SIGINT);
                    pthread_exit(NULL);
                }
            }
        }

    Sorry for those a, b, c, but I tried to malloc everything; still I get SIGSEGV on the call to pthread_setschedparam, and I am wondering why.
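
    A hedged sketch of the likely culprit, based on the POSIX signatures rather than the original post: pthread_setschedparam() expects a pthread_t (e.g. from pthread_self()), not the kernel TID returned by syscall(SYS_gettid), and it reports failure through its return value rather than by returning -1 and setting errno. No malloc is needed either:

        #include <pthread.h>
        #include <sched.h>
        #include <stdio.h>
        #include <string.h>

        static void thread_set_up(void)
        {
            struct sched_param param;
            int err;

            /* sched_priority is the portable field name */
            param.sched_priority = sched_get_priority_min(SCHED_FIFO) + 1;

            /* pthread_self(), not a kernel TID; SCHED_FIFO usually needs
             * root (or CAP_SYS_NICE), so expect EPERM otherwise. */
            err = pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
            if (err != 0)
                fprintf(stderr, "pthread_setschedparam: %s\n", strerror(err));
        }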


  • for loop vs std::for_each with lambda

    - by Andrey
    Let's consider a template function written in C++11 which iterates over a container. Please exclude from consideration the range-based for syntax, because it is not yet supported by the compiler I'm working with.

        template <typename Container>
        void DoSomething(const Container& i_container) {
            // Option #1
            for (auto it = std::begin(i_container); it != std::end(i_container); ++it) {
                // do something with *it
            }

            // Option #2
            std::for_each(std::begin(i_container), std::end(i_container),
                [] (typename Container::const_reference element) {
                    // do something with element
                });
        }

    What are the pros/cons of the for loop vs std::for_each in terms of:

    a) performance? (I don't expect any difference.)

    b) readability and maintainability? Here I see many disadvantages of for_each. It wouldn't accept a C-style array while the loop would. The declaration of the lambda's formal parameter is very verbose, and it is not possible to use auto there. It is not possible to break out of for_each. In pre-C++11 days the arguments against for were the need to specify the type of the iterator (that doesn't hold any more) and the easy possibility of mistyping the loop condition (I've never made such a mistake in 10 years).

    As a conclusion, my thoughts about for_each contradict the common opinion. What am I missing here?


  • C++ unrestricted union workaround

    - by Chris
        #include <stdio.h>

        struct B {
            int x, y;
        };

        struct A : public B {
            // This whines about "copy assignment operator not allowed in union"
            //A& operator =(const A& a) { printf("A=A should do the exact same thing as A=B\n"); }
            A& operator =(const B& b) { printf("A = B\n"); return *this; }
        };

        union U {
            A a;
            B b;
        };

        int main(int argc, const char* argv[]) {
            U u1, u2;
            u1.a = u2.b;      // You can do this and it calls the operator =
            u1.a = (B)u2.a;   // This works too
            u1.a = u2.a;      // This calls the default assignment operator >:@
        }

    Is there any workaround to be able to do that last line u1.a = u2.a with the exact same syntax, but have it call the operator = (I don't care whether it's =(B&) or =(A&)) instead of just copying data? Or are unrestricted unions (not supported even in Visual Studio 2010) the only option?
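
    If a compiler with C++11 unrestricted unions is available (the post notes VS2010 is not), a hedged sketch: give A a copy assignment operator that forwards to the =(const B&) overload. The union member is then legal, at the cost of U's own copy assignment becoming implicitly deleted:

        #include <cstdio>

        struct B { int x, y; };

        struct A : public B {
            A& operator =(const B& b) { std::printf("A = B\n"); x = b.x; y = b.y; return *this; }
            // Forward A=A to A=B so both spellings run the custom logic.
            A& operator =(const A& a) { return *this = static_cast<const B&>(a); }
        };

        union U {
            A a;   // OK in C++11; makes U's copy assignment deleted
            B b;
        };

        int main() {
            U u1, u2;
            u1.a = u2.b;  // prints "A = B"
            u1.a = u2.a;  // now also prints "A = B"
        }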


  • Graph not following orientation

    - by user1214037
    So I am trying to draw some grid lines that in landscape go all the way down to the bottom; however, when I switch to landscape the graph doesn't follow and the grid lines come out smaller. I have set

        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
            // Return YES for supported orientations
            return YES;
        }

    However it still doesn't work. Here is my code; can anyone spot the problem? This is the custom view file; the view controller is the default apart from the code above that returns YES.

    .h file:

        #import <UIKit/UIKit.h>

        #define kGraphHeight 300
        #define kDefaultGraphWidth 900
        #define kOffsetX 10
        #define kStepX 50
        #define kGraphBottom 300
        #define kGraphTop 0

        @interface GraphView : UIView
        @end

    And here is the implementation:

        - (void)drawRect:(CGRect)rect {
            CGContextRef context = UIGraphicsGetCurrentContext();
            CGContextSetLineWidth(context, 0.6);
            CGContextSetStrokeColorWithColor(context, [[UIColor lightGrayColor] CGColor]);

            // How many lines?
            int howMany = (kDefaultGraphWidth - kOffsetX) / kStepX;

            // Here the lines go
            for (int i = 0; i < howMany; i++) {
                CGContextMoveToPoint(context, kOffsetX + i * kStepX, kGraphTop);
                CGContextAddLineToPoint(context, kOffsetX + i * kStepX, kGraphBottom);
            }

            CGContextStrokePath(context);
        }

    Any help would be appreciated. By the way, I am following this tutorial: http://buildmobile.com/creating-a-graph-with-quartz-2d/#fbid=YDPLqDHZ_9X
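
    One hedged fix to try (a sketch; it assumes the view is resized by autoresizing masks and that the grid should reach the view's current bottom edge rather than the kGraphBottom constant): by default a UIView scales its cached layer on rotation instead of calling drawRect: again.

        // In GraphView.m: redraw on every bounds change and let the view
        // track its superview's size.
        - (id)initWithCoder:(NSCoder *)aDecoder {
            if ((self = [super initWithCoder:aDecoder])) {
                self.contentMode = UIViewContentModeRedraw;
                self.autoresizingMask = UIViewAutoresizingFlexibleWidth |
                                        UIViewAutoresizingFlexibleHeight;
            }
            return self;
        }

        // In drawRect:, use the live bounds for the bottom of each line:
        CGFloat graphBottom = CGRectGetHeight(self.bounds);
        CGContextAddLineToPoint(context, kOffsetX + i * kStepX, graphBottom);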


  • Java getMethod with subclass parameter

    - by SelectricSimian
    I'm writing a library that uses reflection to find and call methods dynamically. Given just an object, a method name, and a parameter list, I need to call the given method as though the method call were explicitly written in the code. I've been using the following approach, which works in most cases:

        static void callMethod(Object receiver, String methodName, Object[] params) throws Exception {
            Class<?>[] paramTypes = new Class<?>[params.length];
            for (int i = 0; i < params.length; i++) {
                paramTypes[i] = params[i].getClass();
            }
            receiver.getClass().getMethod(methodName, paramTypes).invoke(receiver, params);
        }

    However, when one of the parameters is a subclass of one of the supported types for the method, the reflection API throws a NoSuchMethodException. For example, if the receiver's class has testMethod(Foo) defined, the following fails:

        receiver.getClass().getMethod("testMethod", FooSubclass.class).invoke(receiver, new FooSubclass());

    even though this works:

        receiver.testMethod(new FooSubclass());

    How do I resolve this? If the method call is hard-coded there's no issue: the compiler just uses the overloading algorithm to pick the best applicable method. It doesn't work with reflection, though, which is what I need. Thanks in advance!
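
    One hedged workaround sketch (it ignores primitive widening and does not rank competing overloads the way the compiler's "most specific method" rule would): getMethod() matches parameter types exactly, so scan getMethods() for an assignable match instead.

        import java.lang.reflect.Method;

        final class Reflect {
            // Returns the first public method whose name matches and whose
            // declared parameter types are assignable from the argument types.
            static Method findCompatibleMethod(Class<?> cls, String name, Class<?>[] argTypes) {
                for (Method m : cls.getMethods()) {
                    Class<?>[] declared = m.getParameterTypes();
                    if (!m.getName().equals(name) || declared.length != argTypes.length)
                        continue;
                    boolean ok = true;
                    for (int i = 0; i < declared.length; i++) {
                        if (!declared[i].isAssignableFrom(argTypes[i])) { ok = false; break; }
                    }
                    if (ok) return m;
                }
                return null;
            }
        }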


  • using buttons to open webviews

    - by A-P
    Hey guys, I'm trying to make the buttons in my project open different webview URLs. I'm new to iOS programming, but I'm used to Android programming. Is this possible? I've already created another webview view that sits in the supporting files folder. Here is my code below.

    ViewController.m:

        #import "ViewController.h"

        @implementation ViewController

        - (void)didReceiveMemoryWarning {
            [super didReceiveMemoryWarning];
            // Release any cached data, images, etc that aren't in use.
        }

        #pragma mark - View lifecycle

        - (void)viewDidLoad {
            NSString *urlString = @"http://www.athletic-profile.com/Application";

            // Create a URL object.
            NSURL *url = [NSURL URLWithString:urlString];

            // URL request object
            NSURLRequest *webRequest = [NSURLRequest requestWithURL:url];

            // Load the request in the UIWebView.
            [webView loadRequest:webRequest];

            [super viewDidLoad];
            // Do any additional setup after loading the view, typically from a nib.
        }

        - (void)viewDidUnload {
            [super viewDidUnload];
            // Release any retained subviews of the main view.
            // e.g. self.myOutlet = nil;
        }

        - (void)viewWillAppear:(BOOL)animated { [super viewWillAppear:animated]; }
        - (void)viewDidAppear:(BOOL)animated { [super viewDidAppear:animated]; }
        - (void)viewWillDisappear:(BOOL)animated { [super viewWillDisappear:animated]; }
        - (void)viewDidDisappear:(BOOL)animated { [super viewDidDisappear:animated]; }

        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
            // Return YES for supported orientations
            return (interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown);
        }

        @synthesize webView;

        @end
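
    A hedged sketch of one way to do it with the existing webView outlet (the action name, button tags, and second URL are illustrative, not from the original post; each button would be wired to this action in the nib):

        - (IBAction)buttonTapped:(UIButton *)sender {
            // Pick a URL per button; tags are assigned in Interface Builder.
            NSString *urlString = (sender.tag == 0)
                ? @"http://www.athletic-profile.com/Application"
                : @"http://www.example.com/other";   // hypothetical second URL
            NSURLRequest *request =
                [NSURLRequest requestWithURL:[NSURL URLWithString:urlString]];
            [webView loadRequest:request];
        }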


  • '𠂉' Not a valid unicode character, but in the unicode character set?

    - by Steve Cotner
    Short story: I can't get an entity like '𠂉' to store in a MySQL database, either by using a text field in a Ruby on Rails app (with default UTF-8 encoding) or by inputting it directly with a MySQL GUI app. As far as I can tell, all Chinese characters and radicals can be entered into the database without problem, but not these rarely typed 'character components.' The character mentioned above is Unicode U+20089 and HTML entity &#131209; I can get it to display on the page by entering <html>&#131209;</html> and removing HTML escaping, but I would like to store it simply as the Unicode character and keep the HTML escaping in place. There are many other Chinese 'components' (parts of full characters, generally consisting of 2 or 3 strokes) that cause the same problem.

    According to this page, the character mentioned is in the UTF-8 charset: http://www.fileformat.info/info/unicode/char/20089/charset_support.htm But on the neighboring '...20089/index.htm' page, there's an alert saying it's not a valid Unicode character. For reference, that entity can be found in Mac OS X by searching through the character palette (international menu, "Show Character Palette"), searching by radical, and looking under the '?' radical.

    Apologies if this is too open-ended... can a character like this be stored in a UTF-8-based database? How is this character both supported and unsupported, both present in the character set and not valid?
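
    For what it's worth, a hedged sketch of the usual explanation: MySQL's legacy "utf8" charset stores at most 3 bytes per character, so it only covers the Basic Multilingual Plane; U+20089 is a supplementary-plane character and needs the 4-byte utf8mb4 charset, available from MySQL 5.5. Table and column names here are hypothetical:

        -- Convert an existing table so 4-byte characters can be stored.
        ALTER TABLE entries CONVERT TO CHARACTER SET utf8mb4
              COLLATE utf8mb4_unicode_ci;

        -- The connection must also use utf8mb4, or the bytes are mangled
        -- on the way in regardless of the column charset.
        SET NAMES utf8mb4;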


  • Unable to set up SSL support for Apache 2 on Debian

    - by Francesco
    I am trying to set up SSL support for Apache 2 on Debian. Versions are: Debian GNU/Linux 6.0, apache2 2.2.16-6+squeeze1. I followed a lot of how-tos for days but I couldn't make it work. Here are my steps and configuration files (ServerName and DocumentRoot are changed for privacy; let me know if that matters):

        # mkdir /etc/apache2/ssl
        # openssl req $@ -new -x509 -days 365 -nodes -out /etc/apache2/apache.pem -keyout /etc/apache2/apache.pem

    At this point I have a doubt about the permissions on apache.pem; at this step they are

        -rw-r--r-- 1 root root

    Maybe it has to belong to www-data? Then I enable mod_ssl with

        # a2enmod ssl
        # /etc/init.d/apache2 restart

    I modify /etc/apache2/sites-available/default-ssl in this way (I put port 8080 because I need port 443 for another purpose):

        <VirtualHost *:8080>
            SSLEngine on
            SSLCertificateFile /etc/apache2/ssl/apache.pem
            SSLCertificateKeyFile /etc/apache2/ssl/apache.pem
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options Indexes FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

        <VirtualHost *:8080>
            DocumentRoot /home/user1/public_html/
            ServerName first.server.org
            # Other directives here
        </VirtualHost>

        <VirtualHost *:8080>
            DocumentRoot /home/user2/public_html/
            ServerName second.server.org
            # Other directives here
        </VirtualHost>

    I have to point out that the same configuration works over HTTP (it is a copy of /etc/apache2/sites-available/default with some differences: port and SSL support). My /etc/apache2/ports.conf is the following:

        # If you just change the port or add more ports here, you will likely also
        # have to change the VirtualHost statement in
        # /etc/apache2/sites-enabled/000-default
        # This is also true if you have upgraded from before 2.2.9-3 (i.e. from
        # Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and
        # README.Debian.gz

        #NameVirtualHost *:80
        Listen 80

        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            #NameVirtualHost *:8080
            Listen 8080
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 8080
        </IfModule>

    Any suggestion? Thanks
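
    A hedged sketch of one thing worth checking (an observation on the files above, not from the original post): with several name-based <VirtualHost *:8080> blocks, Apache also needs the NameVirtualHost directive for that port, which is commented out in the ports.conf shown:

        # ports.conf: enable name-based vhosts on the SSL port; without this,
        # Apache serves every 8080 request from the first matching vhost.
        NameVirtualHost *:8080
        Listen 8080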


  • Globe SSL with NGINX SSL certificate problem, please help

    - by PartySoft
    I have a big problem with installing a certificate for nginx (the same happens with Apache, though). I have 3 files: __domain_com.crt, __domain_com.ca-bundle and ssl.key. I tried to append them:

        cat __domain_com.crt __leechpack_com.ca-bundle > bundle.crt

    but if I do it like this I get an error:

        [emerg]: SSL_CTX_use_certificate_chain_file("/etc/nginx/__leechpack_com.crt") failed
        (SSL: error:0906D066:PEM routines:PEM_read_bio:bad end line
        error:140DC009:SSL routines:SSL_CTX_use_certificate_chain_file:PEM lib)

    And that's because the delimiters of the certificates aren't separated:

        ZqTjb+WBJQ==
        -----END CERTIFICATE----------BEGIN CERTIFICATE-----
        MIIE6DCCA9CgAwIBAgIQdIYhlpUQySkmKUvMi/gpLDANBgkqhkiG9w0BAQUFADBv

    If I separate them with a newline between certificates, it will at least start, but I get the same warning from Firefox:

        This Connection is Untrusted
        You have asked Firefox to connect securely to domain.com, but we can't
        confirm that your connection is secure.

    The concatenation solution is the one given by Globe SSL and the nginx site, but it doesn't work; I think the bundle is ignored:

        http://customer.globessl.com/knowledgebase/55/Certificate-Installation--Nginx.html
        http://nginx.org/en/docs/http/configuring_https_servers.html#chains
        http://wiki.nginx.org/NginxHttpSslModule

    If I do openssl s_client -connect down.leechpack.com:443:

        CONNECTED(00000003)
        depth=0 /OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com
        verify error:num=20:unable to get local issuer certificate
        verify return:1
        depth=0 /OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com
        verify error:num=27:certificate not trusted
        verify return:1
        depth=0 /OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com
        verify error:num=21:unable to verify the first certificate
        verify return:1
        ---
        Certificate chain
         0 s:/OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com
           i:/C=RO/O=GLOBE HOSTING CERTIFICATION AUTHORITY/CN=GLOBE SSL Domain Validated CA
         1 s:/C=US/O=Globe Hosting, Inc./OU=GlobeSSL DV Certification Authority/CN=GlobeSSL CA
           i:/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root
        ---
        Server certificate
        -----BEGIN CERTIFICATE-----
        MIIFQzCCBCugAwIBAgIQRnpCmtwX7z7GTla0QktE6DANBgkqhkiG9w0BAQUFADBl
        MQswCQYDVQQGEwJSTzEuMCwGA1UEChMlR0xPQkUgSE9TVElORyBDRVJUSUZJQ0FU
        SU9OIEFVVEhPUklUWTEmMCQGA1UEAxMdR0xPQkUgU1NMIERvbWFpbiBWYWxpZGF0
        ZWQgQ0EwHhcNMTAwMjExMDAwMDAwWhcNMTEwMjExMjM1OTU5WjCBjTEhMB8GA1UE
        CxMYRG9tYWluIENvbnRyb2wgVmFsaWRhdGVkMSgwJgYDVQQLEx9Qcm92aWRlZCBi
        eSBHbG9iZSBIb3N0aW5nLCBJbmMuMSQwIgYDVQQLExtHbG9iZSBTdGFuZGFyZCBX
        aWxkY2FyZCBTU0wxGDAWBgNVBAMUDyoubGVlY2hwYWNrLmNvbTCCASIwDQYJKoZI
        hvcNAQEBBQADggEPADCCAQoCggEBAKX7jECMlYEtcvqVWQVUpXNxO/VaHELghqy/
        Ml8dOfOXG29ZMZsKUMqS0jXEwd+Bdpm31lBxOALkj8o79hX0tspLMjgtCnreaker
        49y62BcjfguXRFAaiseXTNbMer5lDWiHlf1E7uCoTTiczGqBNfl6qSJlpe4rYBtq
        XxBAiygaNba6Owghuh19+Uj8EICb2pxbJNFfNzU1D9InFdZSVqKHYBem4Cdrtxua
        W4+YONsfLnnfkRQ6LOLeYExHziTQhSavSv9XaCl9Zqzm5/eWbQqLGRpSJoEPY/0T
        GqnmeMIq5M35SWZgOVV10j3pOCS8o0zpp7hMJd2R/HwVaPCLjukCAwEAAaOCAcQw
        ggHAMB8GA1UdIwQYMBaAFB9UlnKtPUDnlln3STFTCWb5DWtyMB0GA1UdDgQWBBT0
        8rPIMr7JDa2Xs5he5VXAvMWArjAOBgNVHQ8BAf8EBAMCBaAwDAYDVR0TAQH/BAIw
        ADAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwVQYDVR0gBE4wTDBKBgsr
        BgEEAbIxAQICGzA7MDkGCCsGAQUFBwIBFi1odHRwOi8vd3d3Lmdsb2Jlc3NsLmNv
        bS9kb2NzL0dsb2JlU1NMX0NQUy5wZGYwRgYDVR0fBD8wPTA7oDmgN4Y1aHR0cDov
        L2NybC5nbG9iZXNzbC5jb20vR0xPQkVTU0xEb21haW5WYWxpZGF0ZWRDQS5jcmww
        dwYIKwYBBQUHAQEEazBpMEEGCCsGAQUFBzAChjVodHRwOi8vY3J0Lmdsb2Jlc3Ns
        LmNvbS9HTE9CRVNTTERvbWFpblZhbGlkYXRlZENBLmNydDAkBggrBgEFBQcwAYYY
        aHR0cDovL29jc3AuZ2xvYmVzc2wuY29tMCkGA1UdEQQiMCCCDyoubGVlY2hwYWNr
        LmNvbYINbGVlY2hwYWNrLmNvbTANBgkqhkiG9w0BAQUFAAOCAQEAB2Y7vQsq065K
        s+/n6nJ8ZjOKbRSPEiSuFO+P7ovlfq9OLaWRHUtJX0sLntnWY1T9hVPvS5xz/Ffl
        w9B8g/EVvvfMyOw/5vIyvHq722fAAC1lWU1rV3ww0ng5bgvD20AgOlIaYBvRq8EI
        5Dxo2og2T1UjDN44GOSWsw5jetvVQ+SPeNPQLWZJS9pNCzFQ/3QDWNPOvHqEeRcz
        WkOTCqbOSZYvoSPvZ3APh+1W6nqiyoku/FCv9otSCtXPKtyVa23hBQ+iuxqIM4/R
        gncnUKASi6KQrWMQiAI5UDCtq1c09uzjw+JaEzAznxEgqftTOmXAJSQGqZGd6HpD
        ZqTjb+WBJQ==
        -----END CERTIFICATE-----
        subject=/OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com
        issuer=/C=RO/O=GLOBE HOSTING CERTIFICATION AUTHORITY/CN=GLOBE SSL Domain Validated CA
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 3313 bytes and written 343 bytes
        ---
        New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
        Server public key is 2048 bit
        Secure Renegotiation IS supported
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : DHE-RSA-AES256-SHA
            Session-ID: 5F9C8DC277A372E28A4684BAE5B311533AD30E251369D144A13DECA3078E067F
            Session-ID-ctx:
            Master-Key: 9B531A75347E6E7D19D95365C1208F2ED37E4004AA8F71FC614A18937BEE2ED9F82D58925E0B3931492AD3D2AA6EFD3B
            Key-Arg   : None
            Start Time: 1288618211
            Timeout   : 300 (sec)
            Verify return code: 21 (unable to verify the first certificate)
        ---
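
    A hedged sketch of a concatenation that always keeps the END/BEGIN delimiters on their own lines (the echo inserts a newline in case the first file lacks a trailing one; PEM parsers tolerate a blank line between blocks):

        # Build the chain file: server cert first, then the CA bundle.
        (cat __domain_com.crt; echo; cat __domain_com.ca-bundle) > bundle.crt

        # Sanity check: every BEGIN/END marker should sit on its own line.
        grep -n "CERTIFICATE" bundle.crt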


  • How to start networking on a wired interface before logon in Ubuntu Desktop Edition

    - by Burly
    Problem: Ubuntu 9.10 Desktop Edition (and possibly previous versions as well, I haven't tested them) has no network connections after boot until at least one user logs in. This means any services that require networking (e.g. openssh-server) are not available until someone logs in locally, either via gdm, kdm, or a TTY.

    Background: Ubuntu 9.10 Desktop Edition uses the NetworkManager service to take commands from the nm-applet in Gnome (or its equivalent in KDE). As I understand it, while NetworkManager is running at boot, it is not issued any commands to connect until you log in for the first time, because nm-applet isn't running until you log in and your Gnome session starts (or similarly for KDE). I'm not sure what prompts NetworkManager to connect to the network when you log in via a TTY. There are several relevant variables involved in starting up the network connections, including:

    - Wired vs wireless (and the resulting drivers, SSID, passwords, and priorities)
    - Static vs DHCP
    - Multiple interfaces

    Constraints:

    - Support Ubuntu 9.10 Karmic Koala (bonus points for additional supported versions)
    - Support a wired eth0 interface
    - Receive an IP address via DHCP
    - Receive DNS information via DHCP (obviously the DHCP server must provide this information)
    - Enable networking at the proper time (e.g. some time after file systems are loaded but before network services like ssh start)
    - Switching distros or versions (e.g. to Server Edition) is not an acceptable solution
    - Switching to a static IP configuration is not an acceptable solution

    Question: How to start networking on a wired interface before logon in Ubuntu Desktop Edition?

    What I have tried: Per this guide, adding the following entry into /etc/network/interfaces so that NetworkManager won't manage the eth0 interface:

        auth eth0
        iface inet dhcp

    After reboot eth0 is down. Issuing ifconfig eth0 up brings the interface up, but it receives no IP address. Issuing dhclient eth0 instead does bring up the interface and it does receive an IP address. I also tried completely removing the NetworkManager package in addition to the settings above. I'm a bit confused by the whole Upstart/SysVinit mangling that's going on in Ubuntu currently (I'm more familiar with the CentOS world). However, directly issuing

        sudo /etc/init.d/networking start

    or

        sudo start networking

    does not bring up the eth0 interface at all, much less get an IP address.

    See also: How to force NetworkManager to make a connection before login?

    References: Ubuntu Desktop Edition; Ubuntu Networking Configuration Using Command Line; Automatic Network Configuration Via Command-Line; Start network connection before login
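
    A hedged sketch of what the ifupdown stanza normally looks like (worth comparing against the attempt quoted above, which spells the keyword "auth" and omits the interface name on the iface line):

        # /etc/network/interfaces: bring eth0 up at boot, before any login
        auto eth0
        iface eth0 inet dhcp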


  • Proxying webmin with nginx

    - by TheLQ
    I am attempting to proxy Webmin behind nginx for various reasons that are outside the scope of this question. However, I've been trying for a while now and can't seem to figure it out, and I think I'm to the point where I've exhausted all the permutations of the config file I can think of. What I have now, the relevant nginx config (commented-out options removed; I tried many):

        # Proxy for webmin
        location /admin/quackwall-webmin {
            proxy_pass http://127.0.0.1:10000;   # Also tried ending with /admin/quackwall-webmin
            proxy_set_header Host $host;
        }

    /etc/webmin/config, relevant parts:

        webprefix=/admin/quackwall-webmin
        webprefixnoredir=1
        referer=(nginx domain name)

    Webmin itself is on the standard ports, listening on all addresses temporarily for debugging. SSL has been disabled for right now. So I make a standard request for the login page. However, all the CSS and images are broken, with the standard login page returned for all of the resources. In the Webmin miniserv logs I see:

        127.0.0.1 - - [29/Oct/2012:12:29:00 -0400] "GET /admin/quackwall-webmin/session_login.cgi HTTP/1.0" 401 2453
        127.0.0.1 - - [29/Oct/2012:12:29:01 -0400] "GET /admin/quackwall-webmin/unauthenticated/style.css HTTP/1.0" 401 2453
        127.0.0.1 - - [29/Oct/2012:12:29:01 -0400] "GET /admin/quackwall-webmin/unauthenticated/sorttable.js HTTP/1.0" 401 2453
        127.0.0.1 - - [29/Oct/2012:12:29:01 -0400] "GET /admin/quackwall-webmin/unauthenticated/toggleview.js HTTP/1.0" 401 2453

    So all the URLs are returning 401s. Interestingly, ngrep seems to show that the requests succeeded in the backend communication between nginx and Webmin:

        T 127.0.0.1:58908 -> 127.0.0.1:10000 [AP]
          POST /admin/quackwall-webmin/session_login.cgi HTTP/1.0..Host: (host)..Connection: close..
          User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko/20100101 Firefox/16.0..
          Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8..
          Accept-Language: en-US,en;q=0.5..Accept-Encoding: gzip, deflate..
          Referer: http://(host)/admin/quackwall-webmin/session_login.cgi..Cookie: testing=1..
          Cache-Control: max-age=0..Content-Type: application/x-www-form-urlencoded..Content-Length: 41....
          page=%2F&user=(user)&pass=(pass)

        T 127.0.0.1:10000 -> 127.0.0.1:58908 [AP]
          HTTP/1.0 200 Document follows..

    Various other permutations of these config options and others show similar results, with the URL sent to Webmin by nginx being /admin/quackwall-webmin/session_login.cgi, /admin/quackwall-webmin//session_login.cgi, or just /session_login.cgi. All give 401 Unauthenticated responses, even the requests that somewhat succeed (as in I can actually load the resources of the page). Is changing the webprefix in Webmin even supported? What am I doing wrong? What else can I try?


  • Installing drivers for switchable graphics

    - by Anonymous
    I recently bought a laptop that came with Windows 7 64-bit installed. I have some older (16-bit and 32-bit) software that doesn't work with 64-bit Windows but works just fine with 32-bit. Since I also wanted to get rid of all of the pre-installed spam, I decided to wipe the hard drive and install a fresh copy of Windows 7 32-bit. Now I can't get the graphics cards working.

    This laptop uses switchable graphics: an Intel card and a Radeon card. I first tried installing this driver from Intel, which works for the Intel card. Of course, the Radeon card doesn't work with this driver, and I need it for some of the newer games I have. I also tried this other driver. Windows's device manager will recognize the Radeon card, but it will still use the Intel card. Also, even though that package says it contains the Intel driver, the Intel card still isn't properly recognized by Windows (leaving me with a nasty 800x600 resolution). On top of that, the Catalyst Control Center won't open (saying "The Catalyst Control Center is not supported by the driver version of your enabled graphics adapter").

    I then tried installing HP's driver and then installing Intel's driver on top of it. Device manager will then recognize both graphics cards properly. However, the laptop still uses the Intel card. The CCC still won't start (saying the same thing as before) and I can't find any way of 'switching' graphics cards. Before formatting, I could right-click the desktop and click "Configure Switchable Graphics". This option hasn't been in the context menu regardless of what driver(s) I've installed. After some research, I found out that this menu entry runs the command "cli.exe Start PowerXpressHybrid". I've tried manually running this command, but I get the same unsupported message from CCC.

    So, does anyone know how I can get this working? I would like to be able to switch between the Intel and the Radeon, but if there's some way to disable the Intel and use only the Radeon, that would be fine. I dual-boot with Linux (the framebuffer uses the Intel; I haven't even tried getting X set up yet). Here's the output of lspci:

        # lspci -v | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) (prog-if 00 [VGA controller])
        01:00.0 VGA compatible controller: ATI Technologies Inc NI Seymour [AMD Radeon HD 6470M] (prog-if 00 [VGA controller])

    The laptop is an HP Pavilion g6t-1d00. HP doesn't support installing anything but Windows 7 64-bit, so calling tech support isn't an option. Thanks for any help.

    UPDATE: I finally got it working. After a fresh install of Windows 7, I installed the HP driver (the one linked above). Then there's an optional Windows update I installed (I don't remember the exact name, but it'll stick out). After that, graphics switching works just like it's supposed to. Moab, thanks anyway for your help.


  • Debian Apache2 SSL Issues - Error code: ssl_error_rx_record_too_long

    - by Tone
    I'm setting up Apache on Debian Lenny and having issues with SSL. I've been through numerous tutorials, and I had this working on Ubuntu Server, but for the life of me I can't get anywhere with Debian. Port 80 (HTTP) works fine, but port 443 (HTTPS) gives me the following error (in Firefox); homeserver is my hostname and my DHCP-assigned IP is 192.168.1.109. I have a feeling it's something with my config and not with the cert/key generation.

        An error occurred during a connection to homeserver.
        SSL received a record that exceeded the maximum permissible length.
        (Error code: ssl_error_rx_record_too_long)

    Anyone see any issues with the following config files?

    /etc/apache2/sites-available/default-ssl:

        <IfModule mod_ssl.c>
        <VirtualHost *:443>
            ServerAdmin webmaster@localhost
            ServerName homeserver
            DocumentRoot /var/www/
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            LogLevel warn
            CustomLog /var/log/apache2/ssl_access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/server.crt
            SSLCertificateKeyFile /etc/ssl/private/server.key
            SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
            <FilesMatch "\.(cgi|shtml|phtml|php)$">
                SSLOptions +StdEnvVars
            </FilesMatch>
            <Directory /usr/lib/cgi-bin>
                SSLOptions +StdEnvVars
            </Directory>
            BrowserMatch ".*MSIE.*" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
        </VirtualHost>
        </IfModule>

    /etc/apache2/ports.conf:

        NameVirtualHost *:80
        Listen 80
        Listen 443
        #<IfModule mod_ssl.c>
        #   SSL name based virtual hosts are not yet supported, therefore no
        #   NameVirtualHost statement here
        #Listen 443
        #</IfModule>

    /etc/hosts:

        127.0.0.1 localhost
        127.0.0.1 homeserver
        #192.168.1.109 homeserver   # tried this but it didn't work

        # The following lines are desirable for IPv6 capable hosts
        ::1     localhost ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters
        ff02::3 ip6-allhosts
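
    A hedged first check (stock Debian commands, not from the original post): ssl_error_rx_record_too_long typically means the server answered plain HTTP on port 443, i.e. something is listening there but the SSL vhost never got loaded.

        # Make sure the module *and* the SSL site are actually enabled:
        a2enmod ssl
        a2ensite default-ssl
        /etc/init.d/apache2 force-reload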


  • TPROXY Not working with HAProxy, Ubuntu 14.04

    - by Nyxynyx
    I'm trying to use HAProxy as a fully transparent proxy using TPROXY on Ubuntu 14.04. HAProxy will be set up on the first server with eth1 111.111.250.250 and eth0 10.111.128.134. The single balanced server has eth1 and eth0 as well; eth1 is the public-facing network interface while eth0 is for the private network that both servers are in.

    Problem: I'm able to connect to the balanced server's port 1234 directly (via eth1) but am not able to reach the balanced server via the HAProxy port 1234 (which redirects to 1234 via eth0). Am I missing something in this configuration?

    On the HAProxy server, the current kernel is:

        Linux extremehash-lb2 3.13.0-24-generic #46-Ubuntu SMP Thu Apr 10 19:11:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

    The kernel appears to have TPROXY support:

        # grep TPROXY /boot/config-3.13.0-24-generic
        CONFIG_NETFILTER_XT_TARGET_TPROXY=m

    HAProxy was compiled with TPROXY support:

        haproxy -vv
        HA-Proxy version 1.5.3 2014/07/25
        Copyright 2000-2014 Willy Tarreau <[email protected]>

        Build options :
          TARGET  = linux26
          CPU     = x86_64
          CC      = gcc
          CFLAGS  = -g -fno-strict-aliasing
          OPTIONS = USE_LINUX_TPROXY=1 USE_LIBCRYPT=1 USE_STATIC_PCRE=1

        Default settings :
          maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

        Encrypted password support via crypt(3): yes
        Built without zlib support (USE_ZLIB not set)
        Compression algorithms supported : identity
        Built without OpenSSL support (USE_OPENSSL not set)
        Built with PCRE version : 8.31 2012-07-06
        PCRE library supports JIT : no (USE_PCRE_JIT not set)
        Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND

        Available polling systems :
              epoll : pref=300,  test result OK
               poll : pref=200,  test result OK
             select : pref=150,  test result OK
        Total: 3 (3 usable), will use epoll.

    In /etc/haproxy/haproxy.cfg, I've configured a port to have the following options:

        listen test1235 :1234
            mode tcp
            option tcplog
            balance leastconn
            source 0.0.0.0 usesrc clientip
            server balanced1 10.111.163.76:1234 check inter 5s rise 2 fall 4 weight 4

    On the balanced server, in /etc/network/interfaces I've set the gateway for eth0 to be the HAProxy box 10.111.128.134 and restarted networking:

        auto eth0 eth1
        iface eth0 inet static
            address 111.111.250.250
            netmask 255.255.224.0
            gateway 111.131.224.1
            dns-nameservers 8.8.4.4 8.8.8.8 209.244.0.3

        iface eth1 inet static
            address 10.111.163.76
            netmask 255.255.0.0
            gateway 10.111.128.134

    ip route gives:

        default via 111.111.224.1 dev eth0
        10.111.0.0/16 dev eth1  proto kernel  scope link  src 10.111.163.76
        111.111.224.0/19 dev eth0  proto kernel  scope link  src 111.111.250.250
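
    For reference, a hedged sketch of the netfilter plumbing that TPROXY setups usually require on the proxy box in addition to "usesrc clientip" (this is the canonical recipe from the HAProxy/kernel TPROXY documentation; mark value and table number are arbitrary):

        # Deliver packets belonging to locally-terminated, spoofed-source
        # sessions back to the local stack instead of forwarding them.
        iptables -t mangle -N DIVERT
        iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
        iptables -t mangle -A DIVERT -j MARK --set-mark 1
        iptables -t mangle -A DIVERT -j ACCEPT

        # Route marked packets to the loopback-local table.
        ip rule add fwmark 1 lookup 100
        ip route add local 0.0.0.0/0 dev lo table 100

        # IP forwarding must also be enabled for the return path.
        sysctl -w net.ipv4.ip_forward=1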


  • DNSBL listed at zen.spamhaus.org - cant get outgoing mail working? Am I interpreting the response correctly?

    - by Joe Hopfgartner
    I have a problem with a mailserver, and there is something I kind of don't understand. I can connect, authenticate, and specify the sender address, but when specifying the receiver I get a 550 error which looks like this:

        RCPT TO:[email protected]
        550-DNSBL listed at zen.spamhaus.org
        550 http://www.spamhaus.org/query/bl?ip=62.178.15.161

    Now the strange thing is that 62.178.15.161 is my local client address, not the server's IP address. Also, error code 550 seems to be defined as:

        550 Requested action not taken: mailbox unavailable

    To me that makes no sense at all. Why this error code with this Spamhaus message? Why the local IP address and not the server's? Exim is running, and nothing turns up in the logs (mail.err, mail.info, mail.log, mail.warn in /var/log). I looked up both the server's and the client's IP address on blacklists. The client's IP address is listed on some (as expected), but the server is totally clean. Here is the complete telnet log from when I reproduced the error. Mail clients like Evolution and Thunderbird give me the same Spamhaus error message.

        joe@joe-desktop:~$ telnet mail.hunsynth.org 25
        Trying 193.164.132.42...
        Connected to mail.hunsynth.org.
        Escape character is '^]'.
        220 hunsynth.org ESMTP Exim 4.69 Sat, 01 Jan 2011 17:52:45 +0100
        HELP
        214-Commands supported:
        214 AUTH STARTTLS HELO EHLO MAIL RCPT DATA NOOP QUIT RSET HELP
        EHLO AUTH
        250-hunsynth.org Hello chello062178015161.6.11.univie.teleweb.at [62.178.15.161]
        250-SIZE 52428800
        250-PIPELINING
        250-AUTH PLAIN LOGIN CRAM-MD5
        250-STARTTLS
        250 HELP
        AUTH LOGIN
        334 VXNlcm5hbWU6
        dGVzdEBodW5zeW50aC5vcmc=
        334 UGFzc3dvcmQ6
        *****
        235 Authentication succeeded
        MAIL FROM:[email protected]
        250 OK
        RCPT TO:[email protected]
        550-DNSBL listed at zen.spamhaus.org
        550 http://www.spamhaus.org/query/bl?ip=62.178.15.161
        quit
        221 hunsynth.org closing connection
        Connection closed by foreign host.
        joe@joe-desktop:~$

    Update: I tried the same thing from my other server and could successfully send an email. So it really looks like the server checks whether the IP that establishes the connection is on some blacklist. This is theoretically a good thing, but shouldn't authentication on the server prevent that? I just think it would be absurd if I couldn't send email over my SMTP server from my dynamic ISP connection because the dynamic IP is listed, although I have a clean server with login.
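
    If the DNSBL check lives in Exim's RCPT ACL, a hedged sketch of the usual fix is to accept authenticated clients before the dnslists test is reached (statement syntax is stock Exim; exact placement depends on the local acl_check_rcpt):

        # Let authenticated clients relay regardless of their IP's listing.
        accept  authenticated = *

        # Only unauthenticated connections fall through to the DNSBL test.
        deny    message  = DNSBL listed at zen.spamhaus.org\n$dnslist_text
                dnslists = zen.spamhaus.org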


  • MySQL daemon keeps terminating unexpectedly

    - by Yehia A.Salam
    The MySQL daemon on my CentOS server keeps crashing. I got the logs from /var/log/mysqld, but I am still not sure how to fix this:

        121114 16:22:56 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
        121114 21:55:11 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        121114 21:55:11 [Note] Plugin 'FEDERATED' is disabled.
        121114 21:55:11 InnoDB: The InnoDB memory heap is disabled
        121114 21:55:11 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121114 21:55:11 InnoDB: Compressed tables use zlib 1.2.3
        121114 21:55:11 InnoDB: Using Linux native AIO
        121114 21:55:11 InnoDB: Initializing buffer pool, size = 128.0M
        121114 21:55:11 InnoDB: Completed initialization of buffer pool
        121114 21:55:11 InnoDB: highest supported file format is Barracuda.
        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        121114 21:55:11 InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        121114 21:55:12 InnoDB: Waiting for the background threads to start
        121114 21:55:13 InnoDB: 1.1.6 started; log sequence number 77177262
        121114 21:55:13 [Note] Event Scheduler: Loaded 0 events
        121114 21:55:13 [Note] /usr/libexec/mysqld: ready for connections.
        Version: '5.5.12'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL) by Remi
        121115 00:19:44 mysqld_safe Number of processes running now: 0
        121115 00:19:44 mysqld_safe mysqld restarted
        121115  0:19:47 [Note] Plugin 'FEDERATED' is disabled.
        121115  0:19:47 InnoDB: The InnoDB memory heap is disabled
        121115  0:19:47 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121115  0:19:47 InnoDB: Compressed tables use zlib 1.2.3
        121115  0:19:47 InnoDB: Using Linux native AIO
        121115  0:19:47 InnoDB: Initializing buffer pool, size = 128.0M
        InnoDB: mmap(137363456 bytes) failed; errno 12
        121115  0:19:47 InnoDB: Completed initialization of buffer pool
        121115  0:19:47 InnoDB: Fatal error: cannot allocate memory for the buffer pool
        121115  0:19:47 [ERROR] Plugin 'InnoDB' init function returned error.
        121115  0:19:47 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        121115  0:19:47 [ERROR] Unknown/unsupported storage engine: InnoDB
        121115  0:19:47 [ERROR] Aborting

    Edit #1:

                     total       used       free     shared    buffers     cached
        Mem:           496        370        126          0         24        110
        -/+ buffers/cache:        234        261
        Swap:         1023          9       1014

    Edit #2: Also, the largest table in my MySQL instance is 20MB, so my memory use should be pretty moderate.

        SELECT CONCAT(table_schema, '.', table_name),
               CONCAT(ROUND(table_rows / 1000000, 2), 'M') rows,
               CONCAT(ROUND(data_length / ( 1024 * 1024 * 1024 ), 2), 'G') DATA,
               CONCAT(ROUND(index_length / ( 1024 * 1024 * 1024 ), 2), 'G') idx,
               CONCAT(ROUND(( data_length + index_length ) / ( 1024 * 1024 * 1024 ), 2), 'G') total_size,
               ROUND(index_length / data_length, 2) idxfrac
        FROM   information_schema.TABLES
        ORDER  BY data_length + index_length DESC
        LIMIT  10;
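
    A hedged reading of the log (errno 12 is ENOMEM): the restart fails because the box, with ~496MB of RAM, cannot reserve the 128M InnoDB buffer pool after something else ate the memory. A sketch of the usual mitigation; the exact value is workload-dependent:

        # /etc/my.cnf
        [mysqld]
        # Shrink the buffer pool so mysqld can start on a ~512MB host.
        innodb_buffer_pool_size = 64M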


  • Can an administration extraction of an MSI file perform registry and/or system wide changes?

    - by Wil
    I am always getting MSI files (or setup EXEs which are basically MSIs), and half the time they really do not need to be a setup. Microsoft is probably one of the biggest sources: almost every time I want to download a little source code sample, it comes as an MSI which, if you install it, usually contains only three files. I would rather not do an install, add it to Add/Remove Programs, and who knows what else (although I am sure it wouldn't be that bad) for the sake of three files! For this reason, I always use the following command:

        MSIEXEC /a <filename.msi> /qb TARGETDIR=<directory name>

    Now, this works fine and I have never had problems. However, I was just browsing some articles on Technet and found a resource about administrative installs. Apparently, MSI files can have two sequences: the AdminUISequence table and the AdminExecuteSequence table. I am not so worried about the AdminUISequence table, as it states that "the installer skips the actions in this table if the user interface level is set to basic UI or no UI", and this is what the /qb switch I use does. However, there is nothing similar written against the AdminExecuteSequence table. I realise that many people who write MSI files simply do it for a single end user and probably never touch the admin install options, but is it possible for them to set items that can affect the system, and if so, is there a fail-proof way of extracting?

    I do already use 7-Zip; however, despite MSI being on its "supported" page, its MSI support is lacking... well... completely sucks. It loses the file names and is generally useless. There is a bug which was closed with no reason/resolution over three years ago, and I opened a forum post and haven't had a reply. I would not really want to install any additional programs if I could help it, and I just want people's opinions on this. Thanks.

    Edit: I should also say, I run with UAC on, and I have never had an elevation prompt while performing the MSIEXEC operation, so I am guessing I have never had a system-wide change. However, I am still curious as to whether it is possible... because if changes (even just to the user) are possible, I would do this locally/in a VM and never on a server or place of importance!


  • HT Link Sync Error after Ubuntu 10.04 LTS Installation

    - by marklab
    Update 1: I just assembled an exact replica of this server and successfully installed Ubuntu 10.04 LTS in a RAID10 configuration. The success was confirmed by logging in to the initial account. There must be a hardware component that is faulty. Since the error mentions HT Link, which I believe refers to HyperTransport, I will start with the CPUs. Please indicate if this error is more strongly associated with any other piece of hardware, or recommend another approach for this issue.

    Issue: I was attempting to install Ubuntu 10.04 LTS on this system with the board RAID10 configured. However, the installation failed at the partitioning stage by rebooting the system. Upon reboot, there is an error report after POST listing the following:

        Node0: NB WatchDog Timer Error
        Node1: HT Link Sync Error
        Node2: HT Link Sync Error
        ...
        Node7: HT Link Sync Error

        Press F1 to continue/resume.

    After pressing F1 the system will boot from the Ubuntu 10.04 LTS installation disc. However, it will fail at the same stage and go through the same process from there.

    Hardware:

    - CPU: AMD Opteron X12 6172 G34 2.1G 18MB
    - Motherboard: Supermicro H8QG6-F
    - HDD: WD Caviar Green 2TB 5.4K RPM

    Troubleshooting: I disabled RAID10 on the system and installed Ubuntu on a single drive. It installed successfully. I then went back to a RAID10 setup and attempted to install on the system again, and was able to make it through the partitioning stage. However, upon reboot the system reported "Error: file not found" and then booted me into the GRUB rescue console. I feel I have aggravated the problem at this point, because when I attempted to install from the boot disc again, the system reboots as soon as I hit Enter to even start the installation process. It does the same thing when trying to boot from an Ubuntu 11 disc.

    I have not been able to find any information on this HT Link Sync Error, which I feel may have started the problems I am now experiencing with the installation of the OS. I am also aware that Ubuntu is said not to be supported by the motherboard, according to Supermicro's site. However, since I was able to install it successfully on a single drive, I do not believe it is incompatible. I would like to know a reason why it fails to install.


  • OCZ Vertex 2 not recognized by Ubuntu installer

    - by Zsub
    As I boot into the Ubuntu 10.10 (or 11.04, doesn't matter) live environment or installer, it just refuses to recognise my Vertex 2. It reports the disk as ATA and not supporting SMART, shows no serial number, and doesn't list the size correctly. All fdisk tells me is "Unable to read /dev/sda" (it's the only storage in the PC). I'm now running a temporary install of Windows 7 off of it, which worked like a charm, so where am I going wrong with Ubuntu...

    Specs:

    - Asus M4N68T-M LE V2 (BIOS 0702, most recent)
    - OCZ Vertex 2 SSD 60 GB
    - AMD Athlon II X4 640
    - Patriot PSD34G13332 4GB DDR3 RAM (two banks)

    EDIT: I installed a second drive, installed Ubuntu on that and booted; it recognised the SSD just fine. I'm now trying to apt-get upgrade the live environment. I wonder if there is any way to sort of install Ubuntu from Ubuntu (I boot into the working install on the other drive, install it on the SSD and then boot from the SSD).

    EDIT 2: OK, so that doesn't work. The install detects the SSD; however, it cannot format it.

    EDIT 3: After a fresh boot I can read out SMART data and even perform a read benchmark, but if I try to format it, or do a write benchmark, it'll crap out, and after that it says SMART is not supported. So basically it seems I can't write to the disk, as it stops working when I do. I will try running repeated read benchmarks to see if that has any effect.

    EDIT 4: I'm running several read benchmarks on the drive right now; they give results that are to be expected from an SSD. If the read benches don't fail, I can use fdisk on the disk, but it is now stuck trying to re-read the partition table after issuing the 'w' command.

    EDIT 5: Parted Magic did recognize the drive, and with hdparm -I it could even tell me the drive was in a frozen state. I power-cycled it (just pull the plug from the SSD and plug it back in) and it wasn't frozen anymore. After that I could upgrade the firmware on the drive (still using Parted Magic) and format it to ext4. After I rebooted into the Ubuntu installer, it wouldn't get recognized, and hdparm didn't want to talk to it, saying "HDIO_DRIVE_CMD(identify) failed: Invalid exchange".

    EDIT 6: For some reason, if I enable one of the RAID controllers (the one the SSD is connected to, obviously), Ubuntu will let me format it, mount it and write to it. The installer also recognizes it. However, if the RAID controller is enabled but no array is defined, the motherboard can't boot from it :(


  • So I want to separate my Program Files from the hard disk with the other system files. What is the best way?

    - by grg-n-sox
    So I am running Windows 7 as my only OS. I have two hard drives in my computer. The first one is a 74GB Western Digital 10K RPM Raptor. The second one is a 1TB Seagate Barracuda (I couldn't remember if it was a 7200.12 or some other decimal after the 7200). The OS is installed on the Raptor and I am just using the Barracuda for storage.

    With this setup, in case you couldn't guess already, the Raptor fills up quickly and I am constantly having to maintain file locations. And although it is nice to have the quicker boot time and program loading, the time spent maintaining the drive makes me waste more time overall. So I am looking for a way to keep it clear while still keeping up system loading speeds. A performance hit on games and such is easily acceptable, and as long as I can guarantee 5GB of space on the Raptor, I can always temporarily move a disc image there.

    So I am figuring that, with games like Borderlands and Mass Effect installed, as well as large files such as Linux distro DVD disc images in My Documents, I probably should be moving my personal files and Program Files directories to the Barracuda. I currently have folders on the Barracuda for this, but this means routinely copying files over, and I can't really do anything with the Program Files folder that already exists. The best I can do is remember to point the install directory of any program installation at the alternative location, which I can't seem to ever get to work right with Steam.

    With that in mind, is there a way, not too drastic, that would let me just change some folders and system settings once and have everything work fine afterwards for my setup? I have considered just reinstalling Windows 7 to the Barracuda, but that would defeat the purpose of the Raptor, except for running disc images off of it. I have also heard a bit about being able to use symlinks to fix this, but I have also heard that symlinks in Windows are not necessarily the same and not as well supported. An example a friend mentioned was that if you have a symlink in Windows on a small hard drive pointing to a large hard drive, and the contents the symlink points to are larger than the small hard drive's capacity, then Windows will think the smaller hard drive is full. So is there a fix/workaround that will let me use symlinks across hard drives without these issues, or is there a better solution I am not being told about, not mentioning, or not thinking of?
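
    A hedged sketch of the junction approach (paths are illustrative; NTFS junctions, unlike hard links, can point across local volumes, and this is commonly done for individual game folders; some installers and Steam may still behave oddly, so test per-application):

        :: From an elevated cmd.exe: move the folder to the big drive,
        :: then leave a junction behind at the old path.
        robocopy "C:\Program Files\SomeGame" "D:\Program Files\SomeGame" /E /MOVE
        mklink /J "C:\Program Files\SomeGame" "D:\Program Files\SomeGame"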

