Search Results

Search found 1598 results on 64 pages for 'warnings'.

  • Error with Postgres and Rails during bundle install on Ubuntu 12.10

    - by jason328
    I'm trying to install postgres onto Ubuntu. When running the Bundle Install in terminal I'm receiving this message at the end of the running code. How do I get pg to install properly? Libpq-dev is installed as well. Using coffee-rails (3.2.2) Using diff-lcs (1.1.3) Using jquery-rails (2.0.2) Installing pg (0.12.2) with native extensions Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension. /usr/bin/ruby1.9.1 extconf.rb checking for pg_config... yes Using config values from /usr/bin/pg_config checking for libpq-fe.h... yes checking for libpq/libpq-fs.h... yes checking for PQconnectdb() in -lpq... yes checking for PQconnectionUsedPassword()... yes checking for PQisthreadsafe()... yes checking for PQprepare()... yes checking for PQexecParams()... yes checking for PQescapeString()... yes checking for PQescapeStringConn()... yes checking for PQgetCancel()... yes checking for lo_create()... yes checking for pg_encoding_to_char()... yes checking for PQsetClientEncoding()... yes checking for rb_encdb_alias()... yes checking for rb_enc_alias()... no checking for struct pgNotify.extra in libpq-fe.h... yes checking for unistd.h... yes checking for ruby/st.h... yes creating extconf.h creating Makefile make compiling compat.c compiling pg.c pg.c: In function ‘pgconn_wait_for_notify’: pg.c:2117:3: warning: ‘rb_thread_select’ is deprecated (declared at /usr/include/ruby- 1.9.1/ruby/intern.h:379) [-Wdeprecated-declarations] pg.c: In function ‘pgconn_block’: pg.c:2592:3: error: format not a string literal and no format arguments [-Werror=format- security] pg.c:2598:3: warning: ‘rb_thread_select’ is deprecated (declared at /usr/include/ruby- 1.9.1/ruby/intern.h:379) [-Wdeprecated-declarations] pg.c:2607:4: error: format not a string literal and no format arguments [-Werror=format- security] pg.c: In function ‘pgconn_locreate’: pg.c:2866:11: warning: variable ‘lo_oid’ set but not used [-Wunused-but-set-variable] pg.c: In function ‘find_or_create_johab’: pg.c:3947:3: warning: implicit declaration of function ‘rb_encdb_alias’ [-Wimplicit- function-declaration] cc1: some warnings being treated as errors make: *** [pg.o] Error 1 Gem files will remain installed in /home/jason/.bundler/tmp/10083/gems/pg-0.12.2 for inspection. Results logged to /home/jason/.bundler/tmp/10083/gems/pg-0.12.2/ext/gem_make.out An error occurred while installing pg (0.12.2), and Bundler cannot continue. Make sure that `gem install pg -v '0.12.2'` succeeds before bundling.

  • Can RPM packages be installed into Cygwin?

    - by Dejian Zhao
    I noticed that there is a command - rpm - under Cygwin 1.7. Does that mean RPM packages can be installed into Cygwin? I tried to install ncbi-blast-2.2.26+-3.i686.rpm (see: ftp://ftp.ncbi.nlm.nih.gov/blast/executables/blast+/LATEST/ ) into Cygwin 1.7.13 with the command "install -i ncbi-blast-2.2.26+-3.i686.rpm". However, error message appeared as below. I tried to search for the missing libs using the setup.exe of Cygwin. It seems that some of them were not present, such as libc.so.6, libdl.so.2, libm.so.6, libnsl.so.1, and libz.so.1. Where can I get these libs? Thanks! $ rpm -i ncbi-blast-2.2.26+-3.i686.rpm error: Failed dependencies: /usr/bin/perl is needed by ncbi-blast-2.2.26+-3 libbz2.so.1 is needed by ncbi-blast-2.2.26+-3 libc.so.6 is needed by ncbi-blast-2.2.26+-3 libc.so.6(GLIBC_2.0) is needed by ncbi-blast-2.2.26+-3 libc.so.6(GLIBC_2.1) is needed by ncbi-blast-2.2.26+-3 libc.so.6(GLIBC_2.1.2) is needed by ncbi-blast-2.2.26+-3 libc.so.6(GLIBC_2.1.3) is needed by ncbi-blast-2.2.26+-3 libc.so.6(GLIBC_2.2) is needed by ncbi-blast-2.2.26+-3 libc.so.6(GLIBC_2.3) is needed by ncbi-blast-2.2.26+-3 libdl.so.2 is needed by ncbi-blast-2.2.26+-3 libdl.so.2(GLIBC_2.0) is needed by ncbi-blast-2.2.26+-3 libdl.so.2(GLIBC_2.1) is needed by ncbi-blast-2.2.26+-3 libgcc_s.so.1 is needed by ncbi-blast-2.2.26+-3 libgcc_s.so.1(GCC_3.0) is needed by ncbi-blast-2.2.26+-3 libgcc_s.so.1(GLIBC_2.0) is needed by ncbi-blast-2.2.26+-3 libm.so.6 is needed by ncbi-blast-2.2.26+-3 libnsl.so.1 is needed by ncbi-blast-2.2.26+-3 libpthread.so.0 is needed by ncbi-blast-2.2.26+-3 libpthread.so.0(GLIBC_2.0) is needed by ncbi-blast-2.2.26+-3 libpthread.so.0(GLIBC_2.1) is needed by ncbi-blast-2.2.26+-3 libpthread.so.0(GLIBC_2.2) is needed by ncbi-blast-2.2.26+-3 libpthread.so.0(GLIBC_2.3.2) is needed by ncbi-blast-2.2.26+-3 librt.so.1 is needed by ncbi-blast-2.2.26+-3 libstdc++.so.6 is needed by ncbi-blast-2.2.26+-3 libstdc++.so.6(CXXABI_1.3) is needed by ncbi-blast-2.2.26+-3 libstdc++.so.6(GLIBCXX_3.4) is needed by ncbi-blast-2.2.26+-3 libstdc++.so.6(GLIBCXX_3.4.5) is needed by ncbi-blast-2.2.26+-3 libz.so.1 is needed by ncbi-blast-2.2.26+-3 perl(Archive::Tar) is needed by ncbi-blast-2.2.26+-3 perl(Digest::MD5) is needed by ncbi-blast-2.2.26+-3 perl(File::Temp) is needed by ncbi-blast-2.2.26+-3 perl(File::stat) is needed by ncbi-blast-2.2.26+-3 perl(Getopt::Long) is needed by ncbi-blast-2.2.26+-3 perl(Net::FTP) is needed by ncbi-blast-2.2.26+-3 perl(Pod::Usage) is needed by ncbi-blast-2.2.26+-3 perl(constant) is needed by ncbi-blast-2.2.26+-3 perl(strict) is needed by ncbi-blast-2.2.26+-3 perl(warnings) is needed by ncbi-blast-2.2.26+-3
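    Those dependencies (libc.so.6, the GLIBC_* symbol versions, libdl, libnsl, libz and friends) are Linux glibc shared objects: the RPM contains Linux ELF executables, so Cygwin's rpm command can unpack the metadata but can never satisfy or run the binaries. The practical route is one of NCBI's native Windows BLAST+ packages rather than the Linux RPM. As a quick sanity check, the following Python sketch (purely illustrative; the file paths are whatever you extracted from the .rpm) tells the two executable formats apart by their magic bytes:

        import sys

        # First bytes of a file identify its executable format:
        #   b"\x7fELF" -> Linux/Unix ELF binary (needs glibc, cannot run under Cygwin)
        #   b"MZ"      -> Windows PE binary (what Windows/Cygwin can actually execute)
        def binary_kind(path):
            with open(path, "rb") as f:
                magic = f.read(4)
            if magic.startswith(b"\x7fELF"):
                return "Linux ELF"
            if magic.startswith(b"MZ"):
                return "Windows PE"
            return "unknown ({!r})".format(magic)

        if __name__ == "__main__":
            for path in sys.argv[1:]:        # e.g. binaries extracted from the .rpm
                print(path, "->", binary_kind(path))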

  • Problems sending and receiving data between PHP and Perl?

    - by Chip Gà Con
    I have a problem in sending and receiving data between php and perl socket: -Problem: +php can not send all byte data to perl socket +Perl socket can not receiving all data from php . Here code php: function save(){ unset($_SESSION['info']); unset($_SESSION['data']); global $config,$ip; $start=$_POST['config']; $fp = fsockopen($_SESSION['ip'], $config['port'], $errno, $errstr, 30); if(!$fp) { $_SESSION['info']="Not connect "; transfer("Not connect".$ip, "index.php?com=server&act=info"); } else { $_SESSION['info']="Save config - ".$ip; fwrite($fp,$start); transfer("Sending data to ".$ip, "index.php?com=server&act=info"); } } Here code perl socket: #!/usr/bin/perl use strict; use warnings; use Carp; use POSIX qw( setsid ); use IO::Socket; $| = 1; my $socket = new IO::Socket::INET ( LocalHost => '192.168.150.3', LocalPort => '5000', Proto => 'tcp', Listen => 5, Reuse => 1 ); die "Coudn't open socket" unless $socket; print "\nTCPServer Waiting for client on port 5000"; my $client_socket = ""; while ($client_socket = $socket->accept()) { my $recieved_data =" "; my $send_data=" "; my $peer_address = $client_socket->peerhost(); my $peer_port = $client_socket->peerport(); print "\n I got a connection from ( $peer_address , $peer_port ) "; print "\n SEND( TYPE q or Q to Quit):"; $client_socket->recv($recieved_data,20000); #while (defined($recieved_data = <$client_socket>)) { if ( $recieved_data eq 'q' or $recieved_data eq 'Q' ) { close $client_socket; last; } elsif ($recieved_data eq 'start' or $recieved_data eq 'START' ) { $send_data = `/etc/init.d/squid start`; } elsif ($recieved_data eq 'restart' or $recieved_data eq 'RESTART' ) { $send_data = `/etc/init.d/squid restart`; } elsif ($recieved_data eq 'stop' or $recieved_data eq 'STOP' ) { $send_data = `/etc/init.d/squid stop`; } elsif ($recieved_data eq 'hostname' or $recieved_data eq 'HOSTNAME' ) { $send_data= `hostname`; } elsif ($recieved_data eq 'view-config' or $recieved_data eq 'VIEW-CONFIG' ) { $send_data = `cat /etc/squid/squid.conf` ; } else { # print $recieved_data; open OUTPUT_FILE, '> /root/data' or die("can not open file"); print OUTPUT_FILE $recieved_data; close OUTPUT_FILE } #} if ($send_data eq 'q' or $send_data eq 'Q') { $client_socket->send ($send_data); close $client_socket; last; } else { $client_socket->send($send_data); } }
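    A likely culprit here is message framing rather than PHP or Perl themselves: TCP is a byte stream, so a single fwrite() on the PHP side may arrive as several recv() chunks on the Perl side (or be coalesced with a later write), and comparisons such as eq 'start' also fail on any trailing newline. Both ends need to loop until a complete, agreed-upon message has been transferred, for example with a length prefix. A minimal Python sketch of that pattern follows; it only illustrates the idea and is not a drop-in client for the Perl server above (which would need a matching unpack("N", ...) read loop):

        import socket
        import struct

        def send_message(sock, payload):
            # 4-byte big-endian length prefix, then the payload;
            # sendall() keeps writing until everything is on the wire.
            sock.sendall(struct.pack("!I", len(payload)) + payload)

        def recv_exact(sock, n):
            # recv() may return fewer bytes than asked for, so loop.
            buf = b""
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("peer closed the connection early")
                buf += chunk
            return buf

        def recv_message(sock):
            (length,) = struct.unpack("!I", recv_exact(sock, 4))
            return recv_exact(sock, length)

        if __name__ == "__main__":
            a, b = socket.socketpair()       # local demo of the framing (POSIX)
            send_message(a, b"hostname")
            print(recv_message(b))           # -> b'hostname'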

  • Nginx and PHP-FPM running out of connections

    - by E3pO
    I keep running into errors like these, [02-Jun-2012 01:52:04] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 19 idle, and 49 total children [02-Jun-2012 01:52:05] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 16 children, there are 19 idle, and 50 total children [02-Jun-2012 01:52:06] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 19 idle, and 51 total children [02-Jun-2012 03:10:51] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 18 idle, and 91 total children I changed my settings for php-fpm to these, pm.max_children = 150 (It was at 100, i got a max_children reached and upped to 150) pm.start_servers = 75 pm.min_spare_servers = 20 pm.max_spare_servers = 150 Resulting in [02-Jun-2012 01:39:19] WARNING: [pool www] server reached pm.max_children setting (150), consider raising it I've just launched a new website that is getting a conciderable amount of traffic on it. This traffic is legitimate and users are getting 504 gateway timeouts when the limit is reached. I have limited connections to my server with IPTABLES and I'm running fail2ban and keeping track of nginx access logs. The traffic is all legitimate, i'm just running out of room for users. I'm currently running on a dual core box with ubuntu 64bit. free total used free shared buffers cached Mem: 6114284 5726984 387300 0 141612 4985384 -/+ buffers/cache: 599988 5514296 Swap: 524284 5804 518480 My php.ini max_input_time = 60 My nginx config is worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 19000; # multi_accept on; } worker_rlimit_nofile 20000; #each connection needs a filehandle (or 2 if you are proxying) client_max_body_size 30M; client_body_timeout 10; client_header_timeout 10; keepalive_timeout 5 5; send_timeout 10; location ~ \.php$ { try_files $uri /er/error.php; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 256 16k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_max_temp_file_size 0; fastcgi_intercept_errors on; fastcgi_pass unix:/tmp/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; } What can I do to stop running out of connections? Why does this keep occurring? I'm monitoring my traffic on Google Analytics realtime and when the user count gets above about 120 my php-fpm.log is full of these warnings..
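    One way to sanity-check pm.max_children is against memory rather than against the warnings: every PHP-FPM child holds its own resident set, so the realistic ceiling is roughly the RAM left over after nginx, MySQL and the OS, divided by the average child size. A rough Python sketch using the free output above (the 1 GB reserve and the 40 MB per-child figure are assumptions; measure the real child size with ps or top before trusting the number):

        # Rough pm.max_children sizing arithmetic (a sketch, not a rule).
        total_ram_mb  = 6114284 / 1024.0   # "free" total from the question, in KB
        reserved_mb   = 1024               # assumed headroom for nginx, MySQL, OS cache
        avg_child_mb  = 40                 # assumed average php-fpm worker resident size

        max_children = int((total_ram_mb - reserved_mb) / avg_child_mb)
        print("pm.max_children ~= %d" % max_children)   # ~123 with these assumptions

    If the memory-based ceiling is well below the concurrency the site needs, raising pm.max_children further mostly trades 504s for swapping; the gains usually come from making requests finish faster (opcode/page caching, PHP-FPM's slow log to find the slow scripts) or adding RAM.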

  • Failure to install NetFX3 on Windows Server 2012: Error 3017 -- Am I missing something here?

    - by Nick
    I am really struggling to get this installed. I have tried the suggestions here in an attempt to rectify any possible corruption. I mounted the disk image to 'G' to do an offline install. I also attempted an online install with similar results. Output as follows: Microsoft Windows [Version 6.2.9200] (c) 2012 Microsoft Corporation. All rights reserved. C:\Users\Administrator>dism /online /enable-feature /featurename:NetFX3 /All /So urce:G:\sources\sxs /LimitAccess Deployment Image Servicing and Management tool Version: 6.2.9200.16384 Image Version: 6.2.9200.16384 Enabling feature(s) [==========================100.0%==========================] Error: 3017 The requested operation failed. A system reboot is required to roll back changes made. The DISM log file can be found at C:\Windows\Logs\DISM\dism.log Log as follows (Errors/Warnings Only): 2013-04-08 23:40:17, Error DISM DISM Package Manager: PID=3756 TID=3768 Failed finalizing changes. - CDISMPackageManager::Internal_Finalize(hr:0x80070bc9) 2013-04-08 23:40:17, Error DISM DISM Package Manager: PID=3756 TID=3768 Failed processing package changes with session options - CDISMPackageManager::ProcessChangesWithOptions(hr:0x80070bc9) 2013-04-08 23:40:17, Error DISM DISM Package Manager: PID=3756 TID=3768 Failed ProcessChanges. - CPackageManagerCLIHandler::Private_ProcessFeatureChange(hr:0x80070bc9) 2013-04-08 23:40:17, Error DISM DISM Package Manager: PID=3756 TID=3768 Failed while processing command enable-feature. - CPackageManagerCLIHandler::ExecuteCmdLine(hr:0x80070bc9) 2013-04-08 23:40:17, Error DISM DISM.EXE: DISM Package Manager processed the command line but failed. HRESULT=80070BC9 2013-04-08 23:38:10, Warning DISM DISM Provider Store: PID=3160 TID=3172 Failed to Load the provider: C:\Windows\TEMP\505F54F1-4977-4233-835C-8B6DA83BCAEB\PEProvider.dll. - CDISMProviderStore::Internal_GetProvider(hr:0x8007007e) 2013-04-08 23:39:23, Warning DISM DISM Provider Store: PID=3756 TID=3768 Failed to Load the provider: C:\Users\ADMINI~1\AppData\Local\Temp\2\F1B7A223-F380-4F42-84BF-396D374EE80B\PEProvider.dll. - CDISMProviderStore::Internal_GetProvider(hr:0x8007007e) 2013-04-08 23:39:23, Warning DISM DISM Provider Store: PID=3756 TID=3768 Failed to Load the provider: C:\Users\ADMINI~1\AppData\Local\Temp\2\F1B7A223-F380-4F42-84BF-396D374EE80B\IBSProvider.dll. - CDISMProviderStore::Internal_GetProvider(hr:0x8007007e) 2013-04-08 23:39:23, Warning DISM DISM Provider Store: PID=3756 TID=3768 Failed to get the IDismObject Interface - CDISMProviderStore::Internal_LoadProvider(hr:0x80004002) 2013-04-08 23:39:23, Warning DISM DISM Provider Store: PID=3756 TID=3768 Failed to Load the provider: C:\Users\ADMINI~1\AppData\Local\Temp\2\F1B7A223-F380-4F42-84BF-396D374EE80B\Wow64provider.dll. - CDISMProviderStore::Internal_GetProvider(hr:0x80004002) 2013-04-08 23:39:23, Warning DISM DISM Provider Store: PID=3756 TID=3768 Failed to Load the provider: C:\Users\ADMINI~1\AppData\Local\Temp\2\F1B7A223-F380-4F42-84BF-396D374EE80B\EmbeddedProvider.dll. - CDISMProviderStore::Internal_GetProvider(hr:0x8007007e) None of my error codes align with any of those on this MS support page. I would really appreciate your assistance. I am really struggling with a solution. Am I missing something obvious here? EDIT: I have verified the checksum of my ISO image: File Name: en_windows_server_2012_x64_dvd_915478.iso SHA1: D09E752B1EE480BC7E93DFA7D5C3A9B8AAC477BA

  • Fink hangs while compiling Octave on OS X

    - by Mark Bennett
    Disclaimer: I'm totally new to Fink. I'm trying to install Octave (the open-source Matlab clone) on Mountain Lion using Fink, following the instructions at http://wiki.octave.org/Octave_for_MacOS_X. It's a new installation of Fink, and I've also installed X11 per the instructions. I'm using this command (which I believe is correct since everything's 64-bit now): sudo fink install octave-atlas It hangs after a while, showing this as its last output: ... Setting up xft2-dev (2.2.0-2) ... Clearing dependency_libs of .la files being installed Reading buildlock packages... All buildlocks accounted for. /sw/bin/dpkg-lockwait -i /sw/fink/dists/stable/main/binary-darwin-x86_64/x11/xinitrc_1.5-1_darwin-x86_64.deb (Reading database ... 14871 files and directories currently installed.) Preparing to replace xinitrc 1.5-1 (using .../xinitrc_1.5-1_darwin-x86_64.deb) ... Unpacking replacement xinitrc ... Setting up xinitrc (1.5-1) ... I did notice the process name on the terminal's tab was "sort", so the second time I hit this I tried Control-D (end-of-file), and that did seem to unstick it. I'm wondering if there's some malformed command and sort was trying to read from stdin?
    Questions:
    1. Has anybody else seen this? Google wasn't helpful.
    2. Fink outputs a LOT of warnings and errors... is that normal?
    3. Has anybody got Fink to compile Octave on Mountain Lion specifically? If so, did you use just "octave" or "octave-atlas"? Or did you get it working with MacPorts or Homebrew?
    4. Later, Fink failed with "Failed: phase compiling: gnuplot-minimal-4.6.1-1 failed". I haven't started googling that yet, but has anybody seen it? I also tried MacPorts but got other errors with Octave, and from what I read online Homebrew also has issues with Octave on Mountain Lion.
    5. Generally, I'm looking for anybody who's got Octave running on Mountain Lion with any of the package managers. I've been at this for a couple of days ;-)

  • Ubuntu Server 10.04 Heavy Network Traffic causes disconnect

    - by K Vaughan
    I'm currently running a headless Ubuntu 10.04 server. Installed is the LAMP stack, Joomla, Virtualbox, phpvirtualbox, webmin and proFTP.. It resolves the IP address so I can access it remotely (either the apache2 webserver or the FTP) using DDClient. Any packages installed have been installed using apt-get. Webmin, although discouraged in Ubuntu Server, is used mostly to administer the webserver aspect. This issue also appeared when I was using Ubuntu Server 10.10. After periods of heavy network traffic, whether local or remote, the connect drops. I'm talking specifically about the transfer of files via FTP, SCP or Samba (the latter of which I seldom use). There is no response to ping or ssh. I can't FTP to the server nor can I load the website. There are times when the server has been on for a few days and everything runs fine because I haven't accessed it much, if at all (thus not much network traffic). I've gone through a few hardware changes although I don't believe this has cause the issue: this has been happening long before I made any changes. At first I thought it was my ISP-provided router blocking traffic because of some kind of misconfiguration (perhaps assuming it was some kind of DoS attack). I've changed routers and still found no success. I've checked syslog, dmesg and kern.log for warnings but have uncovered none. I've ran memtest via the GRUB2 menu at boot and once it turned up 4 errors. I ran again with individual sticks of RAM in various slots and everything turned up fine. I've looked through the BIOS settings and everything looks fine. I've tried unplugging unnecessary pieces of hardware (other internal hard drives, CD drives, floppy, PCI cards, etc). Any help or tips on how I can even begin to troubleshoot this would be very much appreciated. Please note that i've only started playing with servers as a hobby so my knowledge wouldn't be the most refined. I'm comfortable with command line and have the initiative to know how to look up something I can't do. Unfortunately I can't seem to find any issues like this. Additionally: If a solution can't be found some assistance to write a script that will cause the server to reboot automatically if, after x minutes, it gets no response to pinging somewhere like google. Admittedly that's not the cleanest solution should my internet end up going down but I can't think of what else to do.
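    On that last point, here is a rough sketch of such a watchdog in Python (assumptions: it runs as root, 8.8.8.8 is an acceptable target, and /sbin/reboot is the right command for this box). Start it from an init script or an @reboot cron entry; note that it papers over the symptom and does nothing for the underlying NIC/driver problem:

        #!/usr/bin/env python
        import os
        import subprocess
        import time

        TARGET         = "8.8.8.8"   # any reliably reachable host will do
        CHECK_INTERVAL = 60          # seconds between checks
        MAX_FAILURES   = 5           # consecutive failed pings before rebooting

        devnull = open(os.devnull, "wb")
        failures = 0
        while True:
            # one ICMP echo, waiting at most 5 seconds for a reply
            rc = subprocess.call(["ping", "-c", "1", "-W", "5", TARGET],
                                 stdout=devnull, stderr=devnull)
            failures = failures + 1 if rc != 0 else 0
            if failures >= MAX_FAILURES:
                subprocess.call(["/sbin/reboot"])   # requires root
                break
            time.sleep(CHECK_INTERVAL)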

  • .dll SolidWorks add-in not registering in COM

    - by Abhijit
    I am trying to register this .dll in COM as an Add-in to Solid Works software. The dll is building without any error or warnings.But the Add-in is not appearing in the Windows "Registry Editor" as should be the case.Kindly suggest me a solution. Thanks in advance. Below is my code:- using System; using System.Collections; using System.Reflection; using System.Collections.Generic; using System.Linq; using System.Text; using SolidWorks.Interop.sldworks; using SolidWorks.Interop.swcommands; using SolidWorks.Interop.swconst; using SolidWorks.Interop.swpublished; using SolidWorksTools; using SolidWorksTools.File; using System.Runtime.InteropServices; using System.Diagnostics; namespace SWADDIN_Test { [ComVisible(true)] [Guid("C380F7A6-771A-41EE-807A-1689C8E97720")] [InterfaceType(ComInterfaceType.InterfaceIsIDispatch)] interface ISWIntegration { void DoSWIntegration(); }//end of interface Dummy ISWIntegration [Guid("5EE80911-9567-4734-8E55-C347EA4635B5")] [ClassInterface(ClassInterfaceType.None)] [ProgId("SWADDIN_Test.SWIntegration")] [ComVisible(true)] public class SWIntegration : ISwAddin,ISWIntegration { public SldWorks mSWApplication; private int mSWCookie; public SWIntegration() { mSWApplication = null; mSWCookie = 0; }//end of parameterless constructor public void DoSWIntegration() { }//end of dummy method DoSWIntegration public bool ConnectToSW(object ThisSW, int Cookie) { mSWApplication = (SldWorks)ThisSW; mSWCookie = Cookie; // Set-up add-in call back info bool result = mSWApplication.SetAddinCallbackInfo(0, this, Cookie); this.UISetup(); return true; }//end of method ConnectToSW() public bool DisconnectFromSW() { return UITeardown(); }//end of method DisconnectFromSW() public void UISetup() { }//end of method UISetup() public bool UITeardown() { return true; }//end of method UITeardown() [ComRegisterFunction()]//Attribute private static void ComRegister(Type t) { string keyPath = String.Format(@"SOFTWARE\SolidWorks\AddIns{0:b}", t.GUID); using (Microsoft.Win32.RegistryKey rk = Microsoft.Win32.Registry.LocalMachine.CreateSubKey(keyPath)) { rk.SetValue(null, 1);// Load at startup rk.SetValue("Title", "Abhijit_SwAddin"); // Title rk.SetValue("Description", "All your pixels now belong to us"); // Description }//end of using statement }//end of method ComRegister() [ComUnregisterFunction()]//Attribute private static void ComUnregister(Type t) { string keyPath = String.Format(@"SOFTWARE\SolidWorks\AddIns{0:b}", t.GUID); Microsoft.Win32.Registry.LocalMachine.DeleteSubKeyTree(keyPath); }//end of method ComUnregister() }//end of class SWIntegration }//end of namespace SWADDIN_Test

  • Windows 7 file-based backup service

    - by Ben Voigt
    I'm looking for a good replacement for Lazy Mirror, since it doesn't support Windows 7 well. Pros: One of the things I really loved about Lazy Mirror is that it always maintains a "full" backup, but does so by only copying modified files. As each file was copied, the old version got archived (moved to an out-of-the-way location). So after mirroring ran, there'd be a complete copy of the file system, which could even be booted if necessary. At the same time, extra space on the backup media was used to store as many older versions of files as possible, without wasting space storing multiple copies of the same version. It seems that with Windows 7 backup, there'd be wasted space storing the same data in both the system image and file backup. It was completely file-based, but also aware of the registry (it had a feature to dump the live registry to hive files in the correct format). The backups were normal NTFS filesystems, no special tool was needed to read them. It automatically cleaned out the oldest previous versions when space ran out (unlike Windows 7 backup which apparently simply starts failing the the backup media fills.) It copied all file attributes including security. Cons: It doesn't deal well with junction points, symbolic links, and hard links. It didn't run as a service without lots of help from firesrv or srvany, and then you couldn't interact with the GUI. Running as a service was necessary to be able to mirror protected OS files. It didn't have open file handling, except for registry hives. I guess that the file-by-file archive and replacement could leave mismatched sets of files, if the mirror was interrupted. This would be the advantage of incremental backup techniques that require old full backup + all intermediate incremental backups to restore. But I don't see this as presenting much of a problem, you'd really only have a boot failure if you had a mixture of pre- and post-service pack files, and I can run a full image backup using another tool before applying a service pack. Does anyone know of a tool that does both full-system backup and storage of old versions of files like Lazy Mirror did (without storing the same data multiple times), and also can run as a service in Windows 7? Free is best of course, but a reasonably priced paid program (e.g. It would be absolutely awesome if it also triggered a backup/mirror pass when a particular external drive was plugged in and generated popup warnings if backups hadn't been run recently)
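    For what it's worth, the core mechanism described above (a directly usable full mirror where each overwritten file's previous version is moved into a dated archive first) is straightforward to sketch. The Python outline below is only an illustration of that mechanism with made-up paths; it has none of Lazy Mirror's registry dumping, NTFS security copying, open-file handling or archive pruning:

        import os
        import shutil
        import time

        SOURCE  = r"C:\Users\Ben\Documents"   # assumption: tree to protect
        MIRROR  = r"E:\Mirror\Documents"      # full, directly browsable copy
        ARCHIVE = r"E:\Archive"               # replaced versions end up here

        def changed(src, dst):
            if not os.path.exists(dst):
                return True
            s, d = os.stat(src), os.stat(dst)
            return s.st_mtime > d.st_mtime or s.st_size != d.st_size

        stamp = time.strftime("%Y%m%d-%H%M%S")
        for root, dirs, files in os.walk(SOURCE):
            rel = os.path.relpath(root, SOURCE)
            mirror_dir = os.path.join(MIRROR, rel)
            if not os.path.isdir(mirror_dir):
                os.makedirs(mirror_dir)
            for name in files:
                src = os.path.join(root, name)
                dst = os.path.join(mirror_dir, name)
                if changed(src, dst):
                    if os.path.exists(dst):
                        # archive the version about to be overwritten
                        keep = os.path.join(ARCHIVE, stamp, rel)
                        if not os.path.isdir(keep):
                            os.makedirs(keep)
                        shutil.move(dst, os.path.join(keep, name))
                    shutil.copy2(src, dst)    # copy2 preserves timestamps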

  • Microsoft-signed driver appears as publisher not verified

    - by Priyanka Gupta
    Task at hand: Microsoft sign drivers on Win 7. I microsoft signed my driver package 3 times every time thinking I might have missed a step or something. However, I cannot seem to get rid of the Windows Security error message "Windows can't verify the publisher of this driver software'. This is not the first time I have signed the driver packages. I was successfully able to sign other driver packages a few months ago. However, with this driver package I keep getting Windows security dialog box. Here's the procedure I follow - Create a new cat file using INF2CAT tool. Self sign the driver using a Versign Class 3 Public Primary Certification Authority - G5.cer. Run the microsoft tests on DTM Servers and clients with the devices that use this driver. Create WLK submission package. Self sign the cab file. Submit the package for certification. The catalog file that comes back after successfully passing tests says Name of signer "Microsoft Windows Hardware Comptibility Publisher". When I check the validity of signature using SignTool, it says the signature is vaild. However, when I try to install the driver with new signed catalog file the windows complain. Any ideas? Edit 11/12/2012: Reply to Eugene's comment Thanks for the help, Eugene. Yes. I did sign two other driver packages before. One of them was modified version of WinUSB driver. I am using the same certificate I used when I signed those two driver packages a few months ago. It costs $250 per signing from Microsoft. I would think that Microsoft would complain about it during certification if the certificate is wrong. I use the following command to self sign the CAT file. I don't have to specify the ceritificate name as there's only one certificate in the directory - Signtool sign /v /a /n CompanyName /t http://timestamp.verisign.com/scripts/timestamp.dll OurCatalogFile.cat Below is the result from running Verify command on the Microsoft signed OurCatalogFile.cat C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\x64signtool verify /v "C:\User s\logotest\Documents\serialdriversigning\OurCatalogFile.cat" Verifying: C:\Users\logotest\Documents\serialdriversigning\OurCatalogFile.cat" Hash of file (sha1): BDDF39B1DD95881B462164129758A7FFD54F47D9 Signing Certificate Chain: Issued to: Microsoft Root Certificate Authority Issued by: Microsoft Root Certificate Authority Expires: Sun May 09 18:28:13 2021 SHA1 hash: CDD4EEAE6000AC7F40C3802C171E30148030C072 Issued to: Microsoft Windows Hardware Compatibility PCA Issued by: Microsoft Root Certificate Authority Expires: Thu Jun 04 16:15:46 2020 SHA1 hash: 8D42419D8B21E5CF9C3204D0060B19312B96EB78 Issued to: Microsoft Windows Hardware Compatibility Publisher Issued by: Microsoft Windows Hardware Compatibility PCA Expires: Wed Sep 18 18:20:55 2013 SHA1 hash: D94345C032D23404231DD3902F22AB1C2100341E The signature is timestamped: Tue Nov 06 11:26:48 2012 Timestamp Verified by: Issued to: Microsoft Root Authority Issued by: Microsoft Root Authority Expires: Thu Dec 31 02:00:00 2020 SHA1 hash: A43489159A520F0D93D032CCAF37E7FE20A8B419 Issued to: Microsoft Timestamping PCA Issued by: Microsoft Root Authority Expires: Sun Sep 15 02:00:00 2019 SHA1 hash: 3EA99A60058275E0ED83B892A909449F8C33B245 Issued to: Microsoft Time-Stamp Service Issued by: Microsoft Timestamping PCA Expires: Tue Apr 09 16:53:56 2013 SHA1 hash: 1895C2C907E0D7E5C0292B92C6EA8D0E236F525E Successfully verified: C:\Users\logotest\Documents\serialdriversigning\OurCatalogFile.cat" Number of files successfully Verified: 1 Number of warnings: 0 
Number of errors: 0 Thank you!

  • Postfix: Relay access denied

    - by Joseph Silvashy
    When I telnet to my server thats running postfix and try to send an email: MAIL FROM:<[email protected]> #=> 250 2.1.0 Ok RCPT TO:<[email protected]> #=> 554 5.7.1 <[email protected]>: Relay access denied I couldn't really find the answer on the site or by looking at other users question/answers, I'm not sure where to start. Ideas? Update So basically looking at the docs: http://www.postfix.org/SMTPD_ACCESS_README.html (section: Getting selective with SMTP access restriction lists), I don't seem to have any of those directives in etc/postfix/main.cf like smtpd_client_restrictions = permit_mynetworks, reject or any of the other ones, so I'm quite confused. But really I'm going to have a rails app connect to the server and send the emails, so I'm not sure how to handle it. Here is what my config file looks like: # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. myhostname = rerecipe-utils alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = $myhostname, localhost.$mydomain, localhost, mail.rerecipe.com, rerecipe.com relayhost = mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all inet_protocols = all mynetworks = 127.0.0.0/8 204.232.207.0/24 10.177.64.0/19 [::1]/128 [fe80::%eth0]/64 [fe80::%eth1]/64 Something to note is that relayhost is blank, this is the default configuration file that was created when I installed Postfix, when testing to connect with openssl I get this: ~% openssl s_client -connect mail.myhostname.com:25 -starttls smtp CONNECTED(00000003) depth=0 /CN=myhostname verify error:num=18:self signed certificate verify return:1 depth=0 /CN=myhostname verify return:1 --- Certificate chain 0 s:/CN=myhostname i:/CN=myhostname --- Server certificate -----BEGIN CERTIFICATE----- MIIBqTCCARICCQDDxVr+420qvjANBgkqhkiG9w0BAQUFADAZMRcwFQYDVQQDEw5y ZXJlY2lwZS11dGlsczAeFw0xMDEwMTMwNjU1MTVaFw0yMDEwMTAwNjU1MTVaMBkx FzAVBgNVBAMTDnJlcmVjaXBlLXV0aWxzMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCB iQKBgQDODh2w4A1k0qiPNPhkrPj8sfkxpKPTk28AuZhgOEBYBLeHacTKNH0jXxPv P3TyhINijvvdDPzyuPJoTTliR2EHR/nL4DLhr5FzhV+PB4PsIFUER7arx+1sMjz6 5l/Ubu1ppMzW9U0IFNbaPm2AiiGBQRCQN8L0bLUjzVzwoSRMOQIDAQABMA0GCSqG SIb3DQEBBQUAA4GBALi2vvk9TGKJubXYJbU0PKmVmsfzFK35yLqr0keiDBhK2Leg 274sWxEH3ds8mUaRftuFlXb7RYAGNlVyTuMTY3CEcnqIsH7F2McCUTpjMzu/o1mZ O/B21CelKetBd1u79Gkrv2vWyN7Csft6uTx5NIGG2+pGi3r0gX2r0Hbu2K94 -----END CERTIFICATE----- subject=/CN=myhostname issuer=/CN=myhostname --- No client certificate CA names sent --- SSL handshake has read 1203 bytes and written 360 bytes --- New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA Server public key is 1024 bit Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : 
DHE-RSA-AES256-SHA Session-ID: 1AA4B8BFAAA85DA9ED4755194C50311670E57C35B8C51F9C2749936DA11918E4 Session-ID-ctx: Master-Key: 9B432F1DE9F3580DCC6208C76F96631DC5A4BC517BDBADD5F514414DCF34AC526C30687B96C5C4742E9583555A118232 Key-Arg : None Start Time: 1292985376 Timeout : 300 (sec) Verify return code: 18 (self signed certificate) --- 250 DSN Oddly enough when I try to send an email from the machine itself it does work: echo test | mail -s "test subject" [email protected]
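    The local test working is consistent with the error: mail submitted on the box itself comes from 127.0.0.1, which falls inside mynetworks, while the telnet session comes from an outside address that is neither in $mynetworks nor SASL-authenticated, so Postfix refuses to relay. A commonly used server-side pattern is smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination together with SASL enabled for the Rails app's credentials. On the client side, the app would then authenticate before sending; a hedged Python sketch (host, port and credentials are placeholders, and it only works once the server actually advertises AUTH):

        import smtplib
        from email.mime.text import MIMEText

        msg = MIMEText("test body")
        msg["Subject"] = "relay test"
        msg["From"]    = "[email protected]"
        msg["To"]      = "[email protected]"

        server = smtplib.SMTP("mail.example.com", 25, timeout=30)
        try:
            server.starttls()      # the main.cf above already has TLS material configured
            server.login("someuser", "some-password")   # placeholder credentials
            server.sendmail(msg["From"], [msg["To"]], msg.as_string())
        finally:
            server.quit()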

  • Wiimote accelerometer input on Windows? (in 2013 - Glovepie alternative?)

    - by user568458
    There were a few options for getting accelerometer input into Windows using a Nintendo Wiimote. As of mid 2013, these projects seem to be dead, corrupted with malware, or both. Are there any tools out there that can do this that are still available (and not full of malware)? Quick roundup of the options that used to exist, or that still exist but aren't suitable: Glovepie, which used to be the most recommended option, appears to be dead: it's own website hacked, its creator's googlepages page full of strange stuff that sounds like hacker-humour about the end of the world... (I'd rather not link to them, very dubious stuff...), and lots of forum threads asking if it's a dead project with comments along the lines of "I heard that the author intends to return to it" dated 2011... Wiiuse seems to be dead: its sourceforge page simply says "Error.", its own website has turned into a squatter page. There apparently was an extension for Autohotkey that allowed Wiimote input, but I've seen warnings that this too is now full of malware (see final commentin above link) Everything else I can find about using Wiimotes as input on Windows - for example, Johnny Lee Cheng's work - seems to be exclusively about using infrared or sensor bar, or tied to a specific purpose (e.g. FPS gaming). My main interest is in the accelerometer, and buttons if possible (although something that supports the IR stuff too would be ideal). Is there anything that works for getting Wiimote accelerometer input into Windows that is reliable and not a malware-fest? If anyone's interested in "Why?", it's to use the Wiimote as an audio / midi controller: to use movement, pitch, roll etc to modulate lots of different sound variables at once with one hand. Wiimotes are great for this, and Glovepie used to be the standard way to make this work (e.g. see for example this tutorial, and this one, ignore the unrelated video; I've also seen musicians using wiimote/glovepie setups at gigs, creating some really unique sounds). As of 2013, however, Glovepie seems to be a dead and thoroughly hacked project, sadly. Is there anything else? With or without MotionPlus is fine (with would be better). If anyone knows of any worthy alternatives to Wiimotes in terms of price and quality that can be made to work with a PC, that would also be great: but in my research I coulnd't find any (here's a link to someone reaching the same conclusion). found some potentially relevant stuff here, not had time to test any of it yet though - http://stackoverflow.com/questions/2984450/using-accelerometer-in-wiimote-for-physics-practicals

  • HP-UX cannot boot from Ignite tape

    - by Spirit
    We have hp rp2470 server running hp-ux 11.00, with LVM mirroring. As for redundancy we have second rp2470 same hw (same two processors, same ram, same two hdd’s, same number of lan cards). I want to clone first one to the second. For that purpose I am making ignite tape with the following command: make_tape_recovery -x inc_entire=vg00 Ignite tape finishes without problems. When I boot second server from this ignate tape, server is starting to boot, and ignite restore finishes without any errors, only few notes, which are normal. However vmunix is not booting and when restore finishes, it boot to ISL prompt. From this I cannot boot /stand/vmunix. I tried to run recovery shell but no success. When recovery shell ask to do frecover to restore critical files, then I receive error: frecover(5405): unable to open /dev/rmt/0m At first I thought that the problem might be in the difference of the firmware version of the servers: fw version of production server is: Firmware Version 43.50 and fw version of backup server is: Firmware Version 42.19 So i did a fw upgrade of my backup server so that both servers are v43.50, and tried a recovery but again cant boot the system. Next I did another archive tape with -I (Interactive) flag: make_tape_recovery -I -x inc_entire=vg00 and tried recovery with it, again no good. I cannot find any error or warnings on ignite log, and I cannot boot hpux. I am only on ISL prompt. This is what i've noticed on the gsp logs: ************* SYSTEM ALERT ************** SYSTEM NAME: mcnfwim1 DATE: 07/27/2003 TIME: 10:18:49 ALERT LEVEL: 6 = Boot possible, pending failure - action required REASON FOR ALERT SOURCE: 8 = I/O SOURCE DETAIL: 6 = disk SOURCE ID: 0 PROBLEM DETAIL: 0 = no problem detail LEDs: RUN ATTENTION FAULT REMOTE POWER FLASH OFF ON ON ON LED State: Boot Failed. Running non-OS code. Check Chassis and Console Logs for error messages. 0x00000060860010B0 00000000 00000000 - type 0 = Data Field Unused 0x58000860860010B0 00006706 1B0A1231 - type 11 = Timestamp 07/27/2003 10:18:49 And another gsp log: Log Entry # 3 : SYSTEM NAME: mcnfwim1 DATE: 07/27/2003 TIME: 10:12:20 ALERT LEVEL: 6 = Boot possible, pending failure - action required SOURCE: 8 = I/O SOURCE DETAIL: 6 = disk SOURCE ID: 0 PROBLEM DETAIL: 0 = no problem detail CALLER ACTIVITY: 1 = test STATUS: 0 CALLER SUBACTIVITY: 0B = implementation dependent REPORTING ENTITY TYPE: 0 = system firmware REPORTING ENTITY ID: 00 0x00000060860010B0 00000000 00000000 type 0 = Data Field Unused 0x58000860860010B0 00006706 1B0A0C14 type 11 = Timestamp 07/27/2003 10:12:20 Type CR for next entry, - CR for previous entry, Q CR to quit. Please note that I can not change anything on the production server. I can only make changes to the backup server. Any help is appreciated.

  • How to get wireless working (properly) with Sitecom Wireless USB micro adapter 300N on Windows 7?

    - by Timo
    The question says it all, but more detail follows ;) I've got a new computer that runs Windows 7 64-bits (Home Edition) and I'd like to connect it to my wireless home network (Sitecom wireless gigabit router 300N wl-352 v1 002) with a Sitecom wireless USB micro adaptapter 300 wl-352 V2 001. After installing the router (i.e. connected to the modem and power) and ensuring that wireless is indeed enabled, I've installed the driver of the USB adapter on the new computer described above. After the installation (drivers and utility on CD) completes successfull I rebooted my computer and inserted the USB adapter. After discovering the right network and connecting to it using the network key, a connection is succesfully made. (Using the Sitecom 300N USB Wireless LAN utility). In the LAN utility I can see that the signal strength is approximately 50% and connection quality is approximately 80%. Judging from these numbers I assumed that all was fine and started to use the connection (reading news on nu.nl, a dutch news site), but noticed that the connection was lost several times in a very short time span, but each time the connections was resumed, resulting in the 50/80 percent numbers described above. However, the website was not loaded completely and often a timeout would be reported. When inspecting the drivers through Device Management (Windows' Apparaatbeheer in dutch) there were no errors/warnings; everything seemed to be in order. In an attempt to solve this, I downloaded the latest drivers for the USB adapter, but the problems remained. Finally I tried to connect the computer with a Siemens Gigaset USB Adapter 108. This process was a troublesome since I had to download a driver (from the site above) and tell Windows (7) to use the Windows Vista driver when installing the new hardware, since there is (was) no Windows 7 driver available. This resulted in a usable connection, although not very stable when reconfiguring the router. Which took the form of selecting a different wireless channel on the router, even using the Sitecom utility mentioned above to check if there were other networks communicating on that channel (and thus picking a channel that was not used by other networks). Again no result when changing back to the Sitecom USB adapter. Note that this means (I think) that I could use the internet connection with the Siemens adapter, meaning the problem was not in the router. So: How to get wireless working (properly) with Sitecom Wireless USB micro adapter 300N on Windows 7? PS Sorry, but should be able to post one link, while I had links in place for the USB adapter, router and the siemens adapter in place as well, but I'm not (yet) allowed to post these... (The site says I can post one link, but only when no links are present will it allow me to post the question...)

  • Adaptec 5805 not recognized after reboot

    - by Rakedko ShotGuns
    After rebooting the system, the controller is not recognized. It only works if the computer is shut down and turned off. I have recently updated the firmware to "Adaptec RAID 5805 Firmware Build 18948". How do I fix the problem? Configuration summary --------------------------- 1. Server name.....................raid_test Adaptec Storage Manager agent...7.31.00 (18856) Adaptec Storage Manager console.7.31.00 (18856) Number of controllers...........1 Operating system................Windows Configuration information for controller 1 ------------------------------------------------------- Type............................Controller Model...........................Adaptec 5805 Controller number...............1 Physical slot...................2 Installed memory size...........512 MB Serial number...................8C4510C6C9E Boot ROM........................5.2-0 (18948) Firmware........................5.2-0 (18948) Device driver...................5.2-0 (16119) Controller status...............Optimal Battery status..................Charging Battery temperature.............Normal Battery charge amount (%).......37 Estimated charge remaining......0 days, 16 hours, 12 minutes Background consistency check....Disabled Copy back.......................Disabled Controller temperature..........Normal (40C / 104F) Default logical drive task priority High Performance mode................Dynamic Number of logical devices.......1 Number of hot-spare drives......0 Number of ready drives..........0 Number of drive(s) assigned to MaxCache cache0 Maximum drives allowed for MaxCache cache8 MaxCache Read Cache Pool Size...0 GB NCQ status......................Enabled Stay awake status...............Disabled Internal drive spinup limit.....0 External drive spinup limit.....0 Phy 0...........................No device attached Phy 1...........................No device attached Phy 2...........................No device attached Phy 3...........................1.50 Gb/s Phy 4...........................No device attached Phy 5...........................No device attached Phy 6...........................No device attached Phy 7...........................No device attached Statistics version..............2.0 SSD Cache size..................0 Pages on fetch list.............0 Fetch list candidates...........0 Candidate replacements..........0 69319...........................31293 Logical device..................0 Logical device name............. 
RAID level......................Simple volume Data space......................148,916 GB Date created....................09/19/2012 Interface type..................Serial ATA State...........................Optimal Read-cache mode.................Enabled Preferred MaxCache read cache settingEnabled Actual MaxCache read cache setting Disabled Write-cache mode................Enabled (write-back) Write-cache setting.............Enabled (write-back) Partitioned.....................Yes Protected by hot spare..........No Bootable........................Yes Bad stripes.....................No Power Status....................Disabled Power State.....................Active Reduce RPM timer................Never Power off timer.................Never Verify timer....................Never Segment 0.......................Present: controller 1, connector 0, device 0, S/N 9RX3KZMT Overall host IOs................99075 Overall MB......................4411203 DRAM cache hits.................71929 SSD cache hits..................0 Uncached IOs....................29239 Overall disk failures...........0 DRAM cache full hits............71929 DRAM cache fetch / flush wait...0 DRAM cache hybrid reads.........3476 DRAM cache flushes..............-- Read hits.......................0 Write hits......................0 Valid Pages.....................0 Updates on writes...............0 Invalidations by large writes...0 Invalidations by R/W balance....0 Invalidations by replacement....0 Invalidations by other..........0 Page Fetches....................0 0...............................0 73..............................10822 8...............................3 46138...........................4916 27184...........................15226 20875...........................323 16982...........................1771 1563............................5317 1948............................2969 Serial attached SCSI ----------------------- Type............................Disk drive Vendor..........................Unknown Model...........................ST3160815AS Serial Number...................9RX3KZMT Firmware level..................3.AAD Reported channel................0 Reported SCSI device ID.........0 Interface type..................Serial ATA Size............................149,05 GB Negotiated transfer speed.......1.50 Gb/s State...........................Optimal S.M.A.R.T. error................No Write-cache mode................Write back Hardware errors.................0 Medium errors...................0 Parity errors...................0 Link failures...................0 Aborted commands................0 S.M.A.R.T. 
warnings.............0 Solid-state disk (non-spinning).false MaxCache cache capable..........false MaxCache cache assigned.........false NCQ status......................Enabled Phy 0...........................1.50 Gb/s Power State.....................Full rpm Supported power states..........Full rpm, Powered off 0x01............................113 0x03............................98 0x04............................99 0x05............................100 0x07............................83 0x09............................75 0x0A............................100 0x0C............................99 0xBB............................100 0xBD............................100 0xBE............................61 0xC2............................39 0xC3............................69 0xC5............................100 0xC6............................100 0xC7............................200 0xC8............................100 0xCA............................100 Aborted commands................0 Link failures...................0 Medium errors...................0 Parity errors...................0 Hardware errors.................0 SMART errors....................0 End of the configuration information for controller 1

    Read the article

  • Seriousness of a "Smart" disk error. How long will it last?

    - by Workshop Alex
    I have a 1 TB data disk, and both the BIOS and Windows are reporting a "SMART" error. At least, I get a SMART event, but it doesn't indicate how serious the failure could be. My system is about 6 months old, including the disk, so the warranty will cover the damage. Unfortunately, I lack a second 1 TB disk which I could use to make a full backup. The most important data on this disk is safe, but there's a lot of work data which could be regenerated, although that would cost a lot of time. So I ordered a 1 TB USB disk, which will arrive in three days. By then I can make a full backup of the data, and afterwards the disk can crash. But will it live that long? (Well, I won't use the PC as long as I can't make a backup.) How serious is such a SMART event? I know it's serious enough to have the disk replaced, but will it live for another week, or could it die any moment?

    Update: I purchased a 1 TB external disk and spent most of the day making a backup of the 1 TB disk. It survived that. I then received a new disk, since the old one was still under warranty, and replaced the hard disk. Then I had to spend most of a day again putting the backup back. I need to send back the faulty disk and now have an additional external disk, which is always practical. :-) The SMART error report did not cause any failures on the original disk. I won't advise ignoring these warnings, but the disk still had enough life in it to last a few more days. (Just make sure you have a good backup.) And oh, the horror of having to make a complete backup of such a huge disk. :-) If your data is important, make sure you have something that supports incremental backups and lots of space. (In my case, the data wasn't very important, just practical to have together on one disk.)
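    One quick way to judge how serious the event is: look at which attribute actually tripped. A minimal sketch using smartmontools (also available for Windows); the device name /dev/sda is an assumption:

        smartctl -H /dev/sda        # overall health self-assessment (PASSED/FAILED)
        smartctl -A /dev/sda        # attribute table: watch Reallocated_Sector_Ct,
                                    #   Current_Pending_Sector and Offline_Uncorrectable
        smartctl -t long /dev/sda   # optionally start an extended self-test

    A handful of reallocated sectors usually still leaves time for a backup; quickly growing pending or uncorrectable sector counts usually do not.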

    Read the article

  • Postfix + SASLAUTHD + MySQL authentication problems

    - by Or W
    I've been trying to sort this out for the past 6 hours or so, this is the error message I'm facing (Running CentOS x64): /var/log/maillog: Jun 22 20:42:49 ptroa postfix/smtpd[10130]: warning: SASL authentication failure: Password verification failed Jun 22 20:42:49 ptroa postfix/smtpd[10130]: warning: bzq-79-177-192-133.red.bezeqint.net[79.177.192.133]: SASL PLAIN authentication failed: authentication failure Jun 22 20:42:49 ptroa postfix/smtpd[10130]: warning: bzq-79-177-192-133.red.bezeqint.net[79.177.192.133]: SASL LOGIN authentication failed: authentication failure /var/log/messages: Jun 22 20:15:38 ptroa saslauthd[9401]: do_auth : auth failure: [user=myuser] [service=smtp] [realm=domain.com] [mech=pam] [reason=PAM auth error] I have dovecot installed as well and I'm able to receive emails via the MySQL authentication. The problem is when I'm trying to use SMTP to send out emails. Some config files: /etc/postfix/main.cf: # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. myorigin = /etc/mailname smtpd_banner = Server Message biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = /usr/share/doc/postfix # TLS parameters smtpd_tls_cert_file = /etc/postfix/smtpd.cert smtpd_tls_key_file = /etc/postfix/smtpd.key smtpd_use_tls = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. 
myhostname = domain.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = relayhost = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all html_directory = /usr/share/doc/postfix/html message_size_limit = 30720000 virtual_alias_domains = virtual_alias_maps = proxy:mysql:/etc/postfix/mysql-virtual_forwardings.cf, mysql:/etc/postfix/mysql-virtual_email2email.cf virtual_mailbox_domains = proxy:mysql:/etc/postfix/mysql-virtual_domains.cf virtual_mailbox_maps = proxy:mysql:/etc/postfix/mysql-virtual_mailboxes.cf virtual_mailbox_base = /home/vmail virtual_uid_maps = static:5000 virtual_gid_maps = static:5000 smtpd_sasl_auth_enable = yes broken_sasl_auth_clients = yes smtpd_sasl_authenticated_header = yes smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination virtual_create_maildirsize = yes virtual_maildir_extended = yes proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_cano$ virtual_transport = dovecot dovecot_destination_recipient_limit = 1 /etc/default/saslauthd: START=yes DESC="SASL Authentication Daemon" NAME="saslauthd" MECHANISMS="pam" MECH_OPTIONS="" THREADS=5 OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd -r" /etc/pam.d/smtp: #%PAM-1.0 #auth include password-auth #account include password-auth auth required pam_mysql.so user=mail_admin passwd=password host=127.0.0.1 db=mail table=users usercolumn=email passwdcolumn=password crypt=1 verbose=1 account sufficient pam_mysql.so user=mail_admin passwd=password host=127.0.0.1 db=mail table=users usercolumn=email passwdcolumn=password crypt=1 verbose=1
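    A useful first step is to test saslauthd directly, taking Postfix out of the picture. A debugging sketch only - the user name and password below are placeholders:

        testsaslauthd -u someuser@domain.com -p secret -s smtp

    If this fails with the same "PAM auth error", the problem lies in the saslauthd/pam_mysql layer rather than in Postfix; check that the password column and the crypt=1 setting in /etc/pam.d/smtp match how the passwords are actually stored. It is also worth confirming that the chrooted smtpd can reach the socket configured via OPTIONS:

        ls -l /var/spool/postfix/var/run/saslauthd/mux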

    Read the article

  • How to set up a file server in a restricted corporate environment

    - by Emilio M Bumachar
    I work in a big corporation, and the disk space my team gets on the corporate file server is so low that I am considering turning my work PC into a file server. I'm asking this community for links to tutorials, software suggestions, and general advice on how to set it up. My machine is an Intel Core 2 Duo E7500 @ 3 GHz with 3 GB of RAM, running Windows XP Service Pack 3. Upgrading, formatting or installing another OS is out of the question, but I do have Administrator privileges on the PC, and I can install programs (at least for now). A lot of security software I don't even know about is installed and must remain installed. I only need communication within the corporate network, which is not restricted. People have usernames (logins) on the corporate network, and I need to use them to restrict access. Simply put, I have a list of logins of team members, and only people on the list should be able to access the files. I have about 150 GB of free disk space and I'm thinking of allocating 100 GB to the team's shared files. I plan monthly backups onto co-workers' machines with the same configuration, but automation of backups is a nice yet unnecessary feature: it's totally acceptable for me to manually copy the contents to a different machine once a month. Uptime is important, as everyone would use these files in their daily work. I have experience as a Python and C programmer, but no experience whatsoever as a sysadmin, and almost none of my programming experience is network programming. I'm a complete beginner in this. Thanks in advance for any help.

    EDIT: I honestly appreciate all the warnings, I really do, but what I plan to make available is mostly stuff that is currently only on DVDs, purely for space reasons. It's 'daily work' to read them, but 'daily work write' files will remain on the corporate server. As for the importance of uptime, I think I overstated it: a few outages are OK, and it's already an improvement over getting the DVDs. As for policy, my manager is kind of on my side; I will confirm that before making my move. As for getting more space through the proper channels, well, that was Plan A, and it's still on the table... but I don't have much hope. I'm not as "core business" as I'd like.
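    On XP Professional, a plain SMB share combined with NTFS permissions already covers the "only people on the list" requirement, with nothing to install. A rough sketch only - the path, share name and the CORPDOMAIN\alice / CORPDOMAIN\bob accounts are placeholders:

        md D:\TeamShare
        net share TeamShare=D:\TeamShare /remark:"Team file share"
        rem grant Change (read/write) access to each login on the list:
        cacls D:\TeamShare /E /G CORPDOMAIN\alice:C
        cacls D:\TeamShare /E /G CORPDOMAIN\bob:C

    Simple File Sharing has to be off (it usually is on a domain-joined machine) so that per-user permissions are available, and the share-level permissions should be tightened in the folder's Sharing tab as well.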

    Read the article

  • Ubuntu and Postfix Configuration Issues

    - by Obi Hill
    I recently installed postfix on Ubuntu Natty. I'm having a problem with the configuration. Firstly here is my postfix configuration file: # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings delay_warning_time = 4h readme_directory = no # TLS parameters smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. mydomain = $myorigin myhostname = mail.nairanode.com alias_maps = hash:/etc/postfix/aliases alias_database = hash:/etc/postfix/aliases # this specifies where the virtual mailbox folders will be located virtual_mailbox_base = /var/spool/mail/virtual # this specifies where the virtual mailbox folders will be located virtual_mailbox_base = /var/spool/mail/virtual # this is for the mailbox location for each user virtual_mailbox_maps = mysql:/etc/postfix/mysql_mailbox.cf # and this is for aliases virtual_alias_maps = mysql:/etc/postfix/mysql_alias.cf # and this is for domain lookups virtual_mailbox_domains = mysql:/etc/postfix/mysql_domains.cf # this is how to connect to the domains (all virtual, but the option is there) # not used yet # transport_maps = mysql:/etc/postfix/mysql_transport.cf virtual_uid_maps = static:5000 virtual_gid_maps = static:5000 mydestination = $myorigin, $myhostname, localhost.localdomain, , localhost relayhost = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all #mynetworks_style = host # ADDITIONAL unknown_local_recipient_reject_code = 550 maximal_queue_lifetime = 7d minimal_backoff_time = 1000s maximal_backoff_time = 8000s smtp_helo_timeout = 60s smtpd_recipient_limit = 16 smtpd_soft_error_limit = 3 smtpd_hard_error_limit = 12 # Requirements for the HELO statement smtpd_helo_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_hostname, reject_invalid_hostname, permit # Requirements for the sender details smtpd_sender_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_$ # Requirements for the connecting server smtpd_client_restrictions = reject_rbl_client sbl.spamhaus.org, reject_rbl_client blackholes.easynet.nl, reject_rbl_client dnsbl.n$ # Requirement for the recipient address smtpd_recipient_restrictions = reject_unauth_pipelining, permit_mynetworks, reject_non_fqdn_recipient, reject_unknown_recipient_do$ # require proper helo at connections smtpd_helo_required = yes # waste spammers time before rejecting them smtpd_delay_reject = yes disable_vrfy_command = yes Here is also my /etc/postfix/aliases: # See man 5 aliases for format postmaster: root Here is also my /etc/mailname: nairanode.com I've also updated my hostname to nairanode.com However, when I run postalias /etc/postfix/aliases I get the following : postalias: warning: valid_hostname: invalid character 47(decimal): /etc/mailname postalias: fatal: file /etc/postfix/main.cf: parameter mydomain: bad parameter 
value: /etc/mailname Is there something I'm doing wrong?! I noticed that when I replace myorigin = /etc/mailname with myorigin = nairanode.com in my postfix config, I don't see any errors anymore after calling postalias. Is this a bug or something?!
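    The error message itself points at the cause: mydomain = $myorigin makes mydomain expand to the literal string /etc/mailname, and a file path is not a valid domain name. The "first line of that file" trick is only described (in the config's own comment) for myorigin. A sketch of one possible fix, assuming nairanode.com is the intended mail domain:

        # /etc/postfix/main.cf
        myorigin   = /etc/mailname        # Debian: the first line of this file is used
        myhostname = mail.nairanode.com
        mydomain   = nairanode.com        # a literal domain name, not a path

    Setting myorigin = nairanode.com directly, as in the test described above, works for the same reason.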

    Read the article

  • Does 64bit Windows 8 have the same 75% memory-usage limitation for applications as Windows 7?

    - by Barleyman
    64-bit Windows 7 (and Windows Vista) has a built-in limit of not being able to use the last 25% of RAM. You will get a low memory warning when you get close to the limit. Even if you disable that warning, applications will run out of memory and crash, since the OS will refuse to allocate memory from that last 25%. That was fine when Vista was designed and machines had 1 GB of total memory, but it is pretty daft for today's 8 GB machines. Yes, the system will use that extra 2 GB for cache and so on, but running out of memory when you still have "merely" 2 GB left... NB: this has nothing to do with the page file. If you limit the page file to a sensible size like 2 GB, you will still see this behavior: the system will cram the page file to the last byte while refusing to touch that quarter of the RAM. Does Windows 8 change this behavior? Is there now some fixed minimum free RAM requirement, like 512 MB, or is it still 25%? Can you actually adjust the low memory limit?

    EDIT: Here is an older post which discusses this same behavior on Windows 7. There is a fixed 25% limit in Windows 7 and I'd like to know if it's still there in Windows 8. Windows 7 / Page File Disabled / 12 GB RAM / 2+ GB RAM free and "your computer is running low on memory"

    Edit 2: Here is another link discussing the low memory warning and how to disable it. Note that he claims the limit for RAM usage is 80%, not 75%. That would seem to be correct, as you can in fact allocate 6.4 GB of RAM on an 8 GB machine. Anything above and beyond that goes to the page file, though. http://halflight.com.au/2011/04/06/how-to-disable-low-memory-warnings-and-the-advantages-of-removing-the-page-file/

    Edit 3: Here are a couple of Process Explorer screenshots that demonstrate how it goes down. Exhibit 1: https://dl.dropbox.com/u/42068601/sysinfo.jpg Exhibit 2: https://dl.dropbox.com/u/42068601/sysint2.jpg You can see that Windows 7 only goes past 6.4 GB of used memory as a very last resort. I have the low memory warning switched off here, so programs simply crashed at the allocation shown in the last screenshot. With the low memory warning turned on, it starts nagging before you can push the OS to use that remaining 1.6 GB. The question is not "Is it OK that Windows does not want to allocate the last 20% of RAM because X", it's "Does Windows 8 still behave this way?". With 16 GB this really becomes dumb.
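    For anyone who wants to see where the cutoff lands on their own machine, the relevant counters can be read from the command line while a memory-hungry program is running - an illustrative sketch using the stock wmic tool (values are reported in KB):

        wmic OS get FreePhysicalMemory,TotalVisibleMemorySize /value
        wmic OS get FreeVirtualMemory,TotalVirtualMemorySize /value

    Comparing free against total physical memory during a large allocation shows how far the OS lets applications go before the warning (or the crash) appears.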

    Read the article

  • First time installing Linux/Apache - unable to connect

    - by bob
    It's my first time installing Linux/Apache. I loaded CentOS and XAMPP (LAMPP) on a machine attached to a LAN. I turned off the distribution's httpd and mysqld (because I didn't want them to conflict with LAMPP):

        chkconfig httpd off
        chkconfig mysqld off

    then LAMPP started successfully with /opt/lampp/lampp start:

        Starting XAMPP for Linux 1.7.3a...
        XAMPP: Starting Apache with SSL (and PHP5)...
        XAMPP: Starting MySQL...
        XAMPP: Starting ProFTPD...
        XAMPP for Linux started.

    Problem: Unable to connect - Firefox can't establish a connection to the server at 179.16.51.36. I need some pointers as to where to look next. There are no errors in the error_log file (just some warnings) and I can ping the server. httpd.conf looks like this:

        ServerRoot "/opt/lampp"
        Listen 80
        ServerAdmin [email protected]
        ServerName 179.16.51.36
        DocumentRoot "/opt/lampp/htdocs"
        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>
        <Directory "/opt/lampp/htdocs">
            Options Indexes FollowSymLinks ExecCGI Includes
            Order allow,deny
            Allow from all
        </Directory>
        ErrorLog logs/error_log
        LogLevel warn
        <IfModule log_config_module>
            LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
            LogFormat "%h %l %u %t \"%r\" %>s %b" common
            <IfModule logio_module>
                LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
            </IfModule>
            CustomLog logs/access_log common
        </IfModule>
        <IfModule alias_module>
            ScriptAlias /cgi-bin/ "/opt/lampp/cgi-bin/"
        </IfModule>
        <IfModule cgid_module>
        </IfModule>
        <Directory "/opt/lampp/cgi-bin">
            AllowOverride None
            Options None
            Order allow,deny
            Allow from all
        </Directory>
        DefaultType text/plain
        <IfModule mime_module>
            TypesConfig etc/mime.types
            AddType application/x-compress .Z
            AddType application/x-gzip .gz .tgz
            AddHandler cgi-script .cgi .pl
            AddType text/html .shtml
            AddOutputFilter INCLUDES .shtml
        </IfModule>
        EnableMMAP off
        EnableSendfile off
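    Since error_log is clean and ping works, the usual suspect on a fresh CentOS install is the default firewall rather than Apache itself. A diagnostic sketch, assuming the stock iptables service is active:

        netstat -tlnp | grep :80                         # is anything listening on port 80?
        service iptables status                          # CentOS enables the firewall by default
        iptables -I INPUT -p tcp --dport 80 -j ACCEPT    # open HTTP for a quick test
        iptables -I INPUT -p tcp --dport 443 -j ACCEPT   # and HTTPS
        service iptables save                            # make it permanent if that was the cause

    If the page loads once the ports are open, the firewall was the problem; otherwise check whether Listen 80 is bound to the expected address and whether SELinux is interfering.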

    Read the article

  • How to prevent dual-booted OSes from damaging each other?

    - by user1252434
    For better compatibility and performance in games I'm thinking about installing Windows in addition to Linux. I have security concerns about this, though. Note: "Windows" in the remaining text includes not only the OS but also any software running on it, regardless of whether it comes included or is additionally installed, and whether it is started intentionally or unintentionally (virus, malware). Is there an easy way to achieve the following requirements:
    - Windows MUST NOT be able to kill my Linux partition or my data disk - neither single files (virus infection) nor by overwriting the whole disk
    - Windows MUST NOT be able to read the data disk (extra protection against spyware)
    - Linux may or may not have access to the Windows partition
    - both Linux and Windows should have full access to the graphics card; this rules out desktop VM solutions for gaming, and I want the manufacturer's Windows graphics card driver
    Regarding Windows being unable to destroy my Linux install: this is not just the usual paranoia; it has happened to me in the past. So I don't accept "no ext4 driver" as an argument. Once bitten, twice shy. And even if destruction targeted at specific (Linux) files is nearly impossible, there should be no way to shred the whole partition. I may accept the risk of malware breaking out of a barrier (e.g. a VM) around the whole Windows box, though. Currently I have a system disk (SSD) and a data disk (HDD), both SATA. I expect I'll have to add another disk; if I don't, even better. My CPU is an Intel Core i5 with VT-x and VT-d available, though untested. Ideas I've had so far:
    - deactivate or hide the other HDs until reboot, at a low level - is that possible? Can the boot loader (GRUB) do this for me? (see the sketch after this list)
    - tiny VM layer: load Windows in a VM that provides access to almost all hardware except the HDs - any ready-made software solution for this? Preferably free. As I said, the main problem seems to be providing full access to the graphics card.
    - hardware switch to cut power to the disks - commercial products are expensive, and there are lots of warnings against cheap home-built solutions; preferably all three hard disks on one switch (one push)
    - mobile racks - won't the wear of daily swapping be a problem?
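    On the GRUB idea: GRUB 2's parttool command can flip the MBR "hidden" type flag on a partition just before chainloading Windows, which keeps Windows from mounting it. That is cosmetic rather than real protection - nothing stops raw writes to the disk - but as a sketch (the device names are assumptions and must match the real layout, e.g. in /etc/grub.d/40_custom):

        menuentry "Windows (data partition hidden)" {
            insmod part_msdos
            insmod parttool
            insmod ntfs
            parttool (hd1,msdos1) hidden+    # hide the data partition from Windows
            set root=(hd0,msdos1)
            chainloader +1
        }

    For the stronger guarantee that nothing can be written at all, only the VM-with-VT-d route or a physical power switch really qualifies.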

    Read the article

  • Templated << friend not working with interrelated templated union types

    - by Dwight
    While working on my basic vector library, I've been trying to use a nice syntax for swizzle-based printing. The problem occurs when attempting to print a swizzle of a different dimension than the vector in question. In GCC 4.0, I originally had the friend << overloaded functions (with a body, even though it duplicated code) for every dimension in each vector, which caused the code to work, even if the non-native dimension code never actually was called. This failed in GCC 4.2. I recently realized (silly me) that only the function declaration was needed, not the body of the code, so I did that. Now I get the same warning on both GCC 4.0 and 4.2: LINE 50 warning: friend declaration 'std::ostream& operator<<(std::ostream&, const VECTOR3<TYPE>&)' declares a non-template function Plus the five identical warnings more for the other function declarations. The below example code shows off exactly what's going on and has all code necessary to reproduce the problem. #include <iostream> // cout, endl #include <sstream> // ostream, ostringstream, string using std::cout; using std::endl; using std::string; using std::ostream; // Predefines template <typename TYPE> union VECTOR2; template <typename TYPE> union VECTOR3; template <typename TYPE> union VECTOR4; typedef VECTOR2<float> vec2; typedef VECTOR3<float> vec3; typedef VECTOR4<float> vec4; template <typename TYPE> union VECTOR2 { private: struct { TYPE x, y; } v; struct s1 { protected: TYPE x, y; }; struct s2 { protected: TYPE x, y; }; struct s3 { protected: TYPE x, y; }; struct s4 { protected: TYPE x, y; }; struct X : s1 { operator TYPE() const { return s1::x; } }; struct XX : s2 { operator VECTOR2<TYPE>() const { return VECTOR2<TYPE>(s2::x, s2::x); } }; struct XXX : s3 { operator VECTOR3<TYPE>() const { return VECTOR3<TYPE>(s3::x, s3::x, s3::x); } }; struct XXXX : s4 { operator VECTOR4<TYPE>() const { return VECTOR4<TYPE>(s4::x, s4::x, s4::x, s4::x); } }; public: VECTOR2() {} VECTOR2(const TYPE& x, const TYPE& y) { v.x = x; v.y = y; } X x; XX xx; XXX xxx; XXXX xxxx; // Overload for cout friend ostream& operator<<(ostream& os, const VECTOR2<TYPE>& toString) { os << "(" << toString.v.x << ", " << toString.v.y << ")"; return os; } friend ostream& operator<<(ostream& os, const VECTOR3<TYPE>& toString); friend ostream& operator<<(ostream& os, const VECTOR4<TYPE>& toString); }; template <typename TYPE> union VECTOR3 { private: struct { TYPE x, y, z; } v; struct s1 { protected: TYPE x, y, z; }; struct s2 { protected: TYPE x, y, z; }; struct s3 { protected: TYPE x, y, z; }; struct s4 { protected: TYPE x, y, z; }; struct X : s1 { operator TYPE() const { return s1::x; } }; struct XX : s2 { operator VECTOR2<TYPE>() const { return VECTOR2<TYPE>(s2::x, s2::x); } }; struct XXX : s3 { operator VECTOR3<TYPE>() const { return VECTOR3<TYPE>(s3::x, s3::x, s3::x); } }; struct XXXX : s4 { operator VECTOR4<TYPE>() const { return VECTOR4<TYPE>(s4::x, s4::x, s4::x, s4::x); } }; public: VECTOR3() {} VECTOR3(const TYPE& x, const TYPE& y, const TYPE& z) { v.x = x; v.y = y; v.z = z; } X x; XX xx; XXX xxx; XXXX xxxx; // Overload for cout friend ostream& operator<<(ostream& os, const VECTOR3<TYPE>& toString) { os << "(" << toString.v.x << ", " << toString.v.y << ", " << toString.v.z << ")"; return os; } friend ostream& operator<<(ostream& os, const VECTOR2<TYPE>& toString); friend ostream& operator<<(ostream& os, const VECTOR4<TYPE>& toString); }; template <typename TYPE> union VECTOR4 { private: struct { TYPE x, y, z, w; } v; struct s1 { protected: TYPE x, y, z, w; }; 
struct s2 { protected: TYPE x, y, z, w; }; struct s3 { protected: TYPE x, y, z, w; }; struct s4 { protected: TYPE x, y, z, w; }; struct X : s1 { operator TYPE() const { return s1::x; } }; struct XX : s2 { operator VECTOR2<TYPE>() const { return VECTOR2<TYPE>(s2::x, s2::x); } }; struct XXX : s3 { operator VECTOR3<TYPE>() const { return VECTOR3<TYPE>(s3::x, s3::x, s3::x); } }; struct XXXX : s4 { operator VECTOR4<TYPE>() const { return VECTOR4<TYPE>(s4::x, s4::x, s4::x, s4::x); } }; public: VECTOR4() {} VECTOR4(const TYPE& x, const TYPE& y, const TYPE& z, const TYPE& w) { v.x = x; v.y = y; v.z = z; v.w = w; } X x; XX xx; XXX xxx; XXXX xxxx; // Overload for cout friend ostream& operator<<(ostream& os, const VECTOR4& toString) { os << "(" << toString.v.x << ", " << toString.v.y << ", " << toString.v.z << ", " << toString.v.w << ")"; return os; } friend ostream& operator<<(ostream& os, const VECTOR2<TYPE>& toString); friend ostream& operator<<(ostream& os, const VECTOR3<TYPE>& toString); }; // Test code int main (int argc, char * const argv[]) { vec2 my2dVector(1, 2); cout << my2dVector.x << endl; cout << my2dVector.xx << endl; cout << my2dVector.xxx << endl; cout << my2dVector.xxxx << endl; vec3 my3dVector(3, 4, 5); cout << my3dVector.x << endl; cout << my3dVector.xx << endl; cout << my3dVector.xxx << endl; cout << my3dVector.xxxx << endl; vec4 my4dVector(6, 7, 8, 9); cout << my4dVector.x << endl; cout << my4dVector.xx << endl; cout << my4dVector.xxx << endl; cout << my4dVector.xxxx << endl; return 0; } The code WORKS and produces the correct output, but I prefer warning free code whenever possible. I followed the advice the compiler gave me (summarized here and described by forums and StackOverflow as the answer to this warning) and added the two things that supposedly tells the compiler what's going on. That is, I added the function definitions as non-friends after the predefinitions of the templated unions: template <typename TYPE> ostream& operator<<(ostream& os, const VECTOR2<TYPE>& toString); template <typename TYPE> ostream& operator<<(ostream& os, const VECTOR3<TYPE>& toString); template <typename TYPE> ostream& operator<<(ostream& os, const VECTOR4<TYPE>& toString); And, to each friend function that causes the issue, I added the <> after the function name, such as for VECTOR2's case: friend ostream& operator<< <> (ostream& os, const VECTOR3<TYPE>& toString); friend ostream& operator<< <> (ostream& os, const VECTOR4<TYPE>& toString); However, doing so leads to errors, such as: LINE 139: error: no match for 'operator<<' in 'std::cout << my2dVector.VECTOR2<float>::xxx' What's going on? Is it something related to how these templated union class-like structures are interrelated, or is it due to the unions themselves? Update After rethinking the issues involved and listening to the various suggestions of Potatoswatter, I found the final solution. Unlike just about every single cout overload example on the internet, I don't need access to the private member information, but can use the public interface to do what I wish. So, I make a non-friend overload functions that are inline for the swizzle parts that call the real friend overload functions. This bypasses the issues the compiler has with templated friend functions. I've added to the latest version of my project. It now works on both versions of GCC I tried with no warnings. 
The code in question looks like this:

    template <typename SWIZZLE>
    inline typename EnableIf< Is2D< typename SWIZZLE::PARENT >, ostream >::type&
    operator<<(ostream& os, const SWIZZLE& printVector)
    {
        os << (typename SWIZZLE::PARENT(printVector));
        return os;
    }

    template <typename SWIZZLE>
    inline typename EnableIf< Is3D< typename SWIZZLE::PARENT >, ostream >::type&
    operator<<(ostream& os, const SWIZZLE& printVector)
    {
        os << (typename SWIZZLE::PARENT(printVector));
        return os;
    }

    template <typename SWIZZLE>
    inline typename EnableIf< Is4D< typename SWIZZLE::PARENT >, ostream >::type&
    operator<<(ostream& os, const SWIZZLE& printVector)
    {
        os << (typename SWIZZLE::PARENT(printVector));
        return os;
    }

    Read the article

< Previous Page | 52 53 54 55 56 57 58 59 60 61 62 63  | Next Page >