Search Results

Search found 4187 results on 168 pages for 'secure erase'.


  • Can't boot flash drive on GIGABYTE motherboard

    - by Deltik
    Situation
    When I try to boot from my flash drive, my GIGABYTE 970A-UD3 motherboard returns this:

        Loading Operating System ...
        Boot error

    All other motherboards I've tried can boot from that flash drive (and from a backup flash drive). The operating systems on both flash drives were written with usb-creator-gtk (Ubuntu USB Startup Disk Creator). I know the motherboard understands that there is an operating system on the flash drives, because when I erase them it complains in an ALL CAPS RAGE that there isn't an operating system, which is correct. How can I boot a flash drive on this motherboard when it is bootable from other motherboards?

    Qualification
    This question is not a duplicate of this one, because writing the image directly to the flash drive as ISO 9660 (dd if=operating_system.iso of=/dev/sdb) still does not get the motherboard to recognize the operating system. This question should be a duplicate of this one, because I provide more information that the other poster did not. This forum thread has broken links and does not have a solution to my problem. Nobody knows what's going on in this other forum thread.
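
    For reference, a minimal sketch of the direct-write approach mentioned above; the ISO name is a placeholder and /dev/sdb is assumed to be the flash drive, so double-check the device before running:

        # Write the ISO image straight onto the flash drive (destroys its contents).
        sudo dd if=operating_system.iso of=/dev/sdb bs=4M
        sync   # flush buffers so the write actually finishes before unplugging

        # Sanity-check that the firmware has a partition table and boot flag to find.
        sudo parted /dev/sdb print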


  • Issues with creating USB bootable Mountain Lion

    - by Sidd
    I am trying to set up a triple boot of Windows 8, Mountain Lion, and Ubuntu, but I am stuck. I have Windows 8 on a partition, and at this point I am trying to get Mountain Lion on there. I installed VMware with a Snow Leopard 10.6.2 image on the Windows 8 platform and used the Disk Utility inside it to prepare the installer. Specifically, this is what I did:

    1. I got the InstallESD.dmg.
    2. I 'mounted' that file, or whatever you call it, and out came something along the lines of "Install Mountain Lion OS X" (it appeared as a subentry under InstallESD.dmg in Disk Utility).
    3. I took my PNY 8 GB Attache flash drive and went to the Erase tab of Disk Utility.
    4. I erased it using the Mac OS Extended (Journaled) setting and called it "Mac".
    5. I went to the Restore tab, dragged "Mac" into Destination, and dragged "Install Mountain Lion OS X" to Source.

    Everything seemed to go well, but it didn't. When trying to boot from the flash drive (and yes, I set the BIOS correctly), the machine skipped it and loaded Windows 8 normally, as if nothing were plugged in. When I look at the flash drive in Windows 8, it comes up as a 200 MB drive labeled "EFI" with nothing in it (remember, it was 8 GB in the beginning). I downloaded Plop Boot Manager, but it did not recognize that a USB drive was plugged in. Does anyone know how I could fix this?
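
    For what it's worth, the command-line equivalent of those Disk Utility steps, run inside the OS X guest, would look roughly like this; it is only a sketch, and the mounted volume names are assumptions that may differ on your installer image:

        # Mount the installer image; it exposes an "Install ..." volume.
        hdiutil attach InstallESD.dmg

        # Restore the mounted installer onto the flash drive, erasing it first.
        sudo asr restore --source "/Volumes/Mac OS X Install ESD" \
            --target /Volumes/Mac --erase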


  • How to list rpm packages/subpackages sorted by total size

    - by smci
    Looking for an easy way to post-process rpm -q output so that it reports the total size of all subpackages matching a regexp; see the aspell* example below. (Short of scripting it with Python/Perl/awk, which is the next step.)

    (Motivation: I'm trying to remove a few GB of unnecessary packages from a CentOS install, so I'm trying to track down things that are a) large, b) unnecessary, and c) not dependencies of anything useful like GNOME. Ultimately I want to pipe the output through sort -n to see what the space hogs are, before doing rpm -e.)

    My reporting command looks like [1]:

        cat unwanted | xargs rpm -q --qf '%9.{size} %{name}\n' > unwanted.size

    and here's just one example where I'd like to see rpm's total for all aspell* subpackages:

        root# rpm -q --qf '%9.{size} %{name}\n' `rpm -qa | grep aspell`
          1040974 aspell
         16417158 aspell-es
          4862676 aspell-sv
          4334067 aspell-en
         23329116 aspell-fr
         13075210 aspell-de
         39342410 aspell-it
          8655094 aspell-ca
         62267635 aspell-cs
         16714477 aspell-da
         17579484 aspell-el
         10625591 aspell-no
         60719347 aspell-pl
         12907088 aspell-pt
          8007946 aspell-nl
          9425163 aspell-cy

    Three extra nice-to-have things:

    1. List the dependencies/depending packages of each group (so I can figure out the uninstall order).
    2. If you could group them by package group, that would be totally neat.
    3. Human-readable size units like 'M'/'G' (like ls -h does). Can be done with a regexp and rounding on the size field.

    Footnote: I'm surprised up2date and yum don't add this sort of intelligence. Ideally you would want to see a tree of group-package-subpackage, with rolled-up sizes.

    Footnote 2: I see that yum erase aspell* does actually produce this summary, but not in a query command.

    [1] where unwanted.txt is a text file of unnecessary packages, obtained by diffing the output of:

        yum list installed | sed -e 's/\..*//g' > installed.txt
        diff --suppress-common-lines centos4_minimal.txt installed.txt | grep '>'

    and centos4_minimal.txt came from the Google doc given by that helpful blogger.
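
    Since scripting it was the next step anyway, here is a minimal awk sketch of the kind of post-processing meant above; the grouping rule (everything before the first '-') is an assumption about what counts as a "subpackage group":

        # Sum package sizes per group, where the group is the name up to the
        # first dash, then print the totals sorted by size.
        rpm -qa --qf '%{size} %{name}\n' \
          | awk '{ split($2, parts, "-"); group = parts[1]; total[group] += $1 }
                 END { for (g in total) printf "%12d %s\n", total[g], g }' \
          | sort -n

    On a newer system the output could be piped through numfmt --to=iec for the human-readable units; an old CentOS box probably lacks numfmt, so rounding in awk would be needed instead.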


  • ERROR 2003 (HY000): Can't connect to MySQL server on (111)

    - by JohnMerlino
    From my Ubuntu installation I am unable to connect to a remote TCP/IP host that runs a MySQL installation:

        viggy@ubuntu:~$ mysql -u user.name -p -h xxx.xxx.xxx.xxx -P 3306
        Enter password:
        ERROR 2003 (HY000): Can't connect to MySQL server on 'xxx.xxx.xxx.xxx' (111)

    Using vim, I commented out the line below in /etc/mysql/my.cnf:

        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        #bind-address = 127.0.0.1

    Then I restarted the server:

        sudo service mysql restart

    But I still get the same error. This is the content of my.cnf:

        #
        # The MySQL database server configuration file.
        #
        # You can copy this to one of:
        # - "/etc/mysql/my.cnf" to set global options,
        # - "~/.my.cnf" to set user-specific options.
        #
        # One can use all long options that the program supports.
        # Run program with --help to get a list of available options and with
        # --print-defaults to see which it would actually understand and use.
        #
        # For explanations see
        # http://dev.mysql.com/doc/mysql/en/server-system-variables.html

        # This will be passed to all mysql clients
        # It has been reported that passwords should be enclosed with ticks/quotes
        # escpecially if they contain "#" chars...
        # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        # Here is entries for some specific programs
        # The following values assume you have at least 32M ram

        # This was formally known as [safe_mysqld]. Both versions are currently parsed.
        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        #
        # * Basic Settings
        #
        user = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        lc-messages-dir = /usr/share/mysql
        skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        #bind-address = 127.0.0.1
        #
        # * Fine Tuning
        #
        key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 8
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        #max_connections = 100
        #table_cache = 64
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit = 1M
        query_cache_size = 16M
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        # As of 5.1 you can enable the log at runtime!
        #general_log_file = /var/log/mysql/mysql.log
        #general_log = 1
        #
        # Error logging goes to syslog due to /etc/mysql/conf.d/mysqld_safe_syslog.cnf.
        #
        # Here you can see queries with especially long duration
        #log_slow_queries = /var/log/mysql/mysql-slow.log
        #long_query_time = 2
        #log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id = 1
        #log_bin = /var/log/mysql/mysql-bin.log
        expire_logs_days = 10
        max_binlog_size = 100M
        #binlog_do_db = include_database_name
        #binlog_ignore_db = include_database_name
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 16M

        #
        # * IMPORTANT: Additional settings that can override those from this file!
        #   The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/

    (Note that I can log into my local MySQL install just fine by running mysql, and it logs me in as root. Note also that I can get into MySQL on the remote server by logging in via ssh and then invoking mysql. But I am unable to connect to the remote server from my terminal using the host option, and I need to do it that way so that I can then use MySQL Workbench.)
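
    A quick way to check whether the config change actually took effect is to look, on the remote server itself, at what address mysqld is bound to and whether anything filters the port. This is a diagnostic sketch rather than a guaranteed fix; error 111 is "connection refused", which usually means the server still listens only on 127.0.0.1 or a firewall rejects the connection:

        # On the remote server: which address is mysqld listening on?
        sudo netstat -ltnp | grep 3306    # want 0.0.0.0:3306, not 127.0.0.1:3306

        # Any iptables rule rejecting the port?
        sudo iptables -L -n | grep 3306

        # From the client: can the port be reached at all?
        nc -zv xxx.xxx.xxx.xxx 3306

    Even once it listens remotely, the MySQL account also needs a remote host grant, something like 'user.name'@'%', before the login itself will succeed.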


  • Set a proxy in Apache for XMPP chat

    - by Hunt
    I want to set up a proxy in Apache to use Facebook XMPP chat. So far I have set up an ejabberd server, and I can access the XMPP service using http://mydomain.com:5280/xmpp-http-bind; I am able to create a Jabber account, too. Now that I want to integrate Facebook XMPP chat, I want my server to sit between the client and chat.facebook.com, because I want to implement both Facebook chat and a custom chat. I have read this article and learned that I need to serve the BOSH service as a proxy in Apache to access the Facebook chat service.

    I don't know how to set up such a proxy in Apache's httpd.conf. I have tried the following:

        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPass /xmpp-httpbind http://www.mydomain.com:5280/xmpp-http-bind
        ProxyPassReverse /xmpp-httpbind http://www.mydomain.com:5280/xmpp-http-bind

    But whenever I request http://www.mydomain.com:5280/xmpp-http-bind from strophe.js, I get the following response from the server:

        <body type='terminate' condition='internal-server-error'
              xmlns='http://jabber.org/protocol/httpbind'>
          BOSH module not started
        </body>

    and the server log says:

        E(<0.567.0:ejabberd_http_bind:1239) : You are trying to use BOSH (HTTP Bind)
        in host "chat.facebook.com", but the module mod_http_bind is not started in
        that host. Configure your BOSH client to connect to the correct host, or add
        your desired host to the configuration, or check your 'modules' section in
        your ejabberd configuration file.

    Here are my existing settings in ejabberd.cfg, but still no luck:

        {5280, ejabberd_http, [
            {access, all},
            {request_handlers, [
                {["pub", "archive"], mod_http_fileserver},
                {["xmpp-http-bind"], mod_http_bind}
            ]},
            captcha,
            http_bind,
            http_poll,
            register,
            web_admin
        ]}
        ]}.

    In the modules section:

        {mod_http_bind, [{max_inactivity, 120}]},

    And whenever I open http://www.mydomain.com:5280/xmpp-http-bind directly, I get the following message:

        ejabberd mod_http_bind
        An implementation of XMPP over BOSH (XEP-0206)
        This web page is only informative. To use HTTP-Bind you need a Jabber/XMPP
        client that supports it.

    I have added chat.facebook.com to the list of hosts in ejabberd.cfg as follows:

        {hosts, ["localhost", "mydomain.com", "chat.facebook.com"]}

    and now I am getting the following response:

        <body xmlns='http://jabber.org/protocol/httpbind'
              sid='710da2568460512eeb546545a65980c2704d9a27' wait='300' requests='2'
              inactivity='120' maxpause='120' polling='2' ver='1.8'
              from='chat.facebook.com' secure='true' authid='1917430584'
              xmlns:xmpp='urn:xmpp:xbosh' xmlns:stream='http://etherx.jabber.org/streams'
              xmpp:version='1.0'>
          <stream:features xmlns:stream='http://etherx.jabber.org/streams'>
            <mechanisms xmlns='urn:ietf:params:xml:ns:xmpp-sasl'>
              <mechanism>DIGEST-MD5</mechanism>
              <mechanism>PLAIN</mechanism>
            </mechanisms>
            <c xmlns='http://jabber.org/protocol/caps' hash='sha-1'
               node='http://www.process-one.net/en/ejabberd/'
               ver='yy7di5kE0syuCXOQTXNBTclpNTo='/>
            <register xmlns='http://jabber.org/features/iq-register'/>
          </stream:features>
        </body>

    If I use the valid BOSH service run by Jack Moffitt at http://bosh.metajack.im:5280/xmpp-httpbind, then I get the following valid XML from Facebook, which I am not getting from my server:

        <body xmlns='http://jabber.org/protocol/httpbind' inactivity='60'
              secure='true' authid='B8732AA1' content='text/xml; charset=utf-8'
              window='3' polling='15' sid='928073b02da55d34eb3c3464b4a40a37'
              requests='2' wait='300'>
          <stream:features xmlns:stream='http://etherx.jabber.org/streams'
                           xmlns='jabber:client'>
            <mechanisms xmlns='urn:ietf:params:xml:ns:xmpp-sasl'>
              <mechanism>X-FACEBOOK-PLATFORM</mechanism>
              <mechanism>DIGEST-MD5</mechanism>
            </mechanisms>
          </stream:features>
        </body>

    Can anyone please help me resolve this issue?
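
    One thing worth checking, sketched below under the assumption of a Debian/Ubuntu-style Apache layout: the ProxyPass rules only take effect for requests that go through Apache (port 80/443), while the URL being requested here still points at ejabberd's own port 5280, so the proxy is bypassed entirely.

        # Make sure the proxy modules are actually loaded.
        sudo a2enmod proxy proxy_http
        sudo service apache2 restart

        # Exercise the BOSH endpoint through Apache, not through port 5280.
        curl -v -X POST -d '<body/>' http://www.mydomain.com/xmpp-httpbind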


  • Postfix certificate verification failed for smtp.gmail.com

    - by Andi Unpam
    I have a problem: my mail server uses Postfix with the Gmail SMTP (a Google Apps account), and it keeps failing SASL authentication. I send mail with a PHP script; after I saw the wrong-password errors in the log, I opened the verification URL from a browser, answered Google's captcha, and delivery worked again, but after 2-3 days the same thing happens. This is my Postfix config:

        #myorigin = /etc/mailname
        smtpd_banner = Hostingbitnet Mail Server
        biff = no
        append_dot_mydomain = no
        readme_directory = no
        myhostname = webmaster.hostingbitnet.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = localhost, webmaster.hostingbitnet.com, localhost.localdomain, 103.9.126.163
        relayhost = [smtp.googlemail.com]:587
        relay_transport = relay
        relay_destination_concurrency_limit = 1
        mynetworks = 127.0.0.0/8, 192.168.0.0/16, 172.16.0.0/16, 10.0.0.0/8, 103.9.126.0/24
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        default_transport = smtp
        relayhost = [smtp.gmail.com]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/google-apps
        smtp_sasl_security_options = noanonymous
        smtp_use_tls = yes
        smtp_sender_dependent_authentication = yes
        tls_random_source = dev:/dev/urandom
        default_destination_concurrency_limit = 1
        smtp_tls_CAfile = /etc/postfix/tls/root.crt
        smtp_tls_cert_file = /etc/postfix/tls/cert.pem
        smtp_tls_key_file = /etc/postfix/tls/privatekey.pem
        smtp_tls_session_cache_database = btree:$data_directory/smtp_tls_session_cache
        smtp_tls_security_level = may
        smtp_tls_loglevel = 1
        smtpd_tls_CAfile = /etc/postfix/tls/root.crt
        smtpd_tls_cert_file = /etc/postfix/tls/cert.pem
        smtpd_tls_key_file = /etc/postfix/tls/privatekey.pem
        smtpd_tls_session_cache_database = btree:$data_directory/smtpd_tls_session_cache
        smtpd_tls_security_level = may
        smtpd_tls_loglevel = 1
        #secure
        smtpd_recipient_restrictions = permit_mynetworks,permit_sasl_authenticated,check_client_access hash:/var/lib/pop-before-smtp/hosts,reject_unauth_destination

    Log from mail.log:

        Oct 30 14:51:13 webmaster postfix/smtp[9506]: Untrusted TLS connection established to smtp.gmail.com[74.125.25.109]:587: TLSv1 with cipher RC4-SHA (128/128 bits)
        Oct 30 14:51:15 webmaster postfix/smtp[9506]: 87E2739400B1: SASL authentication failed; server smtp.gmail.com[74.125.25.109] said: 535-5.7.1 Please log in with your web browser and then try again. Learn more at?535 5.7.1 https://support.google.com/mail/bin/answer.py?answer=78754 ix9sm156630pbc.7
        Oct 30 14:51:15 webmaster postfix/smtp[9506]: setting up TLS connection to smtp.gmail.com[74.125.25.108]:587
        Oct 30 14:51:15 webmaster postfix/smtp[9506]: certificate verification failed for smtp.gmail.com[74.125.25.108]:587: untrusted issuer /C=US/O=Equifax/OU=Equifax Secure Certificate Authority
        Oct 30 14:51:16 webmaster postfix/smtp[9506]: Untrusted TLS connection established to smtp.gmail.com[74.125.25.108]:587: TLSv1 with cipher RC4-SHA (128/128 bits)
        Oct 30 14:51:17 webmaster postfix/smtp[9506]: 87E2739400B1: to=<[email protected]>, relay=smtp.gmail.com[74.125.25.108]:587, delay=972, delays=967/0.03/5.5/0, dsn=4.7.1, status=deferred (SASL authentication failed; server smtp.gmail.com[74.125.25.108] said: 535-5.7.1 Please log in with your web browser and then try again. Learn more at?535 5.7.1 https://support.google.com/mail/bin/answer.py?answer=78754 s1sm3850paz.0)
        Oct 30 14:51:17 webmaster postfix/error[9508]: B3960394009D: to=<[email protected]>, orig_to=<root>, relay=none, delay=29992, delays=29986/5.6/0/0.07, dsn=4.7.1, status=deferred (delivery temporarily suspended: SASL authentication failed; server smtp.gmail.com[74.125.25.108] said: 535-5.7.1 Please log in with your web browser and then try again. Learn more at?535 5.7.1 https://support.google.com/mail/bin/answer.py?answer=78754 s1sm3850paz.0)

    BTW, I made the cert following this link, http://koti.kapsi.fi/ptk/postfix/postfix-tls-cacert.shtml, and it worked. But after 2-3 days my mail server goes back to the invalid-SASL problem; I am then required to log in with a browser and answer the captcha there, and after that the server can send mail again from telnet or the PHP script, until it breaks again 2-3 days later. My question is: how do I make the certificate setup permanent? Thanks and greetings.
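
    On the certificate side specifically, a sketch of what could be tried, assuming a Debian/Ubuntu box where the system CA bundle lives at /etc/ssl/certs/ca-certificates.crt:

        # Trust the full system CA bundle instead of a single hand-made root.crt,
        # so Gmail's Equifax-issued certificate chain verifies.
        sudo postconf -e 'smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt'

        # Rebuild the SASL password map in case it changed, then reload.
        sudo postmap /etc/postfix/google-apps
        sudo service postfix reload

    Note that the 535-5.7.1 "log in with your web browser" error itself is Google rejecting the login, not a TLS failure, so the certificate fix alone may not stop the 2-3 day lockouts.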


  • Migrate Domain from Server 2008 R2 to Small Business Server 2011

    - by josecortesp
    I'm looking for some advice here, rather than a big how-to. I have a home server, quad core with 4 GB of RAM (I really can't afford more right now), running Windows Server 2008 R2 with Active Directory and a Hyper-V virtual machine with SharePoint, TFS, and a couple more things. I have at least 10 remote users, all of them joined to a Hamachi VPN (working great, by the way). But I want to migrate this to Small Business Server 2011 Standard. My plan was to create a VM that joins the domain, promote it, back it up, format the physical server, boot the VM, promote the physical machine, and then erase the VM; but I can't do that, because SBS requires at least 4 GB of RAM to install (so I can't give all 4 GB of physical RAM to a VM). I was thinking of using a laptop (all the clients are laptops) as a temporary server: join the domain, promote it, then format the server, install SBS on it, and do it all again. I really need some advice. Thanks in advance. BTW, I know the software I'm using is kind of expensive and I can't afford more hardware, but I have access to MS downloads through a university partnership, so I get all this software for free.


  • Hard drive not correctly recognized on a new Windows 7 installation, but works correctly on Windows XP

    - by david
    I'm having problems configuring a hard disk in a brand new, clean Windows 7 installation.

    System specs:

    - Hard disk: WD VelociRaptor WD6000HLHX (600 GB, 10000 RPM)
    - Motherboard: Gigabyte Z77X-UD3H
    - BIOS SATA mode set to AHCI (not RAID), with the disk connected to SATA0 (6 Gb/s port)
    - Windows 7 Enterprise SP1 64-bit

    The disk is recognized by the BIOS and is correctly identified, with the name and size correctly reported. Windows recognizes the disk itself and reports that the device is functioning correctly, but it doesn't appear in Explorer. Disk Management shows the drive, but incorrectly states that it is uninitialized and has no partitions. If I try to initialize the drive, I get an error saying that "the system cannot find the file specified" (what file?).

    Before connecting the drive to the new machine, I partitioned and formatted it under Windows XP SP2, creating 2 partitions (MBR, not GPT) and copying over a boatload of data. However, none of this data appears under Windows 7. If I put the disk back into the Windows XP machine, I can access the disk and all of its data.

    Is it possible to get Windows 7 to correctly recognize the disk without having to erase it and start over? If so, how do I do so? I checked this question, which seems to cover the same issue, but it didn't help.


  • rsyslogd not monitoring all files

    - by Tom O'Connor
    So... I've installed Logstash, and instead of using the Logstash shipper (because it needs the JVM and is generally massive), I'm using rsyslogd with the following configuration:

        # Use traditional timestamp format
        $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat

        $IncludeConfig /etc/rsyslog.d/*.conf

        # Provides kernel logging support (previously done by rklogd)
        $ModLoad imklog
        # Provides support for local system logging (e.g. via logger command)
        $ModLoad imuxsock

        # Log all kernel messages to the console.
        # Logging much else clutters up the screen.
        #kern.* /dev/console

        # Log anything (except mail) of level info or higher.
        # Don't log private authentication messages!
        *.info;mail.none;authpriv.none;cron.none;local6.none /var/log/messages

        # The authpriv file has restricted access.
        authpriv.* /var/log/secure

        # Log all the mail messages in one place.
        mail.* -/var/log/maillog

        # Log cron stuff
        cron.* /var/log/cron

        # Everybody gets emergency messages
        *.emerg *

        # Save news errors of level crit and higher in a special file.
        uucp,news.crit /var/log/spooler

        # Save boot messages also to boot.log
        local7.* /var/log/boot.log

    In /etc/rsyslog.d/logstash.conf there are 28 file monitor blocks using imfile:

        $ModLoad imfile   # Load the imfile input module
        $ModLoad imklog   # for reading kernel log messages
        $ModLoad imuxsock # for reading local syslog messages

        $InputFileName /var/log/rabbitmq/startup_err
        $InputFileTag rmq-err:
        $InputFileStateFile state-rmq-err
        $InputFileFacility local6
        $InputRunFileMonitor

        ....

        $InputFileName /var/log/some.other.custom.log
        $InputFileTag cust-log:
        $InputFileStateFile state-cust-log
        $InputFileFacility local6
        $InputRunFileMonitor

        ....

        *.* @@10.90.0.110:5514

    There are 28 InputFileMonitor blocks, each monitoring a different custom application logfile. If I run lsof, I get:

        [root@secret-gm02 ~]# lsof | grep rsyslog
        rsyslogd 5380 root cwd  DIR  253,0 4096 2 /
        rsyslogd 5380 root rtd  DIR  253,0 4096 2 /
        rsyslogd 5380 root txt  REG  253,0 278976 1015955 /sbin/rsyslogd
        rsyslogd 5380 root mem  REG  253,0 58400 1868123 /lib64/libgcc_s-4.1.2-20080825.so.1
        rsyslogd 5380 root mem  REG  253,0 144776 1867778 /lib64/ld-2.5.so
        rsyslogd 5380 root mem  REG  253,0 1718232 1867780 /lib64/libc-2.5.so
        rsyslogd 5380 root mem  REG  253,0 23360 1867787 /lib64/libdl-2.5.so
        rsyslogd 5380 root mem  REG  253,0 145872 1867797 /lib64/libpthread-2.5.so
        rsyslogd 5380 root mem  REG  253,0 85544 1867815 /lib64/libz.so.1.2.3
        rsyslogd 5380 root mem  REG  253,0 53448 1867801 /lib64/librt-2.5.so
        rsyslogd 5380 root mem  REG  253,0 92816 1868016 /lib64/libresolv-2.5.so
        rsyslogd 5380 root mem  REG  253,0 20384 1867990 /lib64/rsyslog/lmnsd_ptcp.so
        rsyslogd 5380 root mem  REG  253,0 53880 1867802 /lib64/libnss_files-2.5.so
        rsyslogd 5380 root mem  REG  253,0 23736 1867800 /lib64/libnss_dns-2.5.so
        rsyslogd 5380 root mem  REG  253,0 20768 1867988 /lib64/rsyslog/lmnet.so
        rsyslogd 5380 root mem  REG  253,0 11488 1867982 /lib64/rsyslog/imfile.so
        rsyslogd 5380 root mem  REG  253,0 24040 1867983 /lib64/rsyslog/imklog.so
        rsyslogd 5380 root mem  REG  253,0 11536 1867987 /lib64/rsyslog/imuxsock.so
        rsyslogd 5380 root mem  REG  253,0 13152 1867989 /lib64/rsyslog/lmnetstrms.so
        rsyslogd 5380 root mem  REG  253,0 8400 1867992 /lib64/rsyslog/lmtcpclt.so
        rsyslogd 5380 root 0r   REG  0,3 0 4026531848 /proc/kmsg
        rsyslogd 5380 root 1u   IPv4 1200589517 0t0 TCP 10.10.10.90:40629->10.10.10.90:5514 (ESTABLISHED)
        rsyslogd 5380 root 2u   IPv4 1200589527 0t0 UDP *:45801
        rsyslogd 5380 root 3w   REG  253,3 17999744 2621483 /var/log/messages
        rsyslogd 5380 root 4w   REG  253,3 13383 2621484 /var/log/secure
        rsyslogd 5380 root 5w   REG  253,3 7180 2621493 /var/log/maillog
        rsyslogd 5380 root 6w   REG  253,3 43321 2621529 /var/log/cron
        rsyslogd 5380 root 7w   REG  253,3 0 2621494 /var/log/spooler
        rsyslogd 5380 root 8w   REG  253,3 0 2621495 /var/log/boot.log
        rsyslogd 5380 root 9r   REG  253,3 1064271998 2621464 /var/log/custom-application.monolog.log
        rsyslogd 5380 root 10u  unix 0xffff81081fad2e40 0t0 1200589511 /dev/log

    You can see that there are nowhere near 28 logfiles actually being read. I really needed one particular file monitored, so I moved its block to the top, and rsyslog picked it up; but I'd like to be able to monitor all 28+ files and not have to worry.

    OS is CentOS 5.5, kernel 2.6.18-308.el5, rsyslogd 3.22.1, compiled with:

        FEATURE_REGEXP: Yes
        FEATURE_LARGEFILE: Yes
        FEATURE_NETZIP (message compression): Yes
        GSSAPI Kerberos 5 support: Yes
        FEATURE_DEBUG (debug build, slow code): No
        Atomic operations supported: Yes
        Runtime Instrumentation (slow code): No

    Questions:
    1. Why is rsyslogd only monitoring a very small subset of the files?
    2. How can I fix this so that all the files are monitored?
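
    For anyone wanting to compare configured monitors against what the daemon actually holds open, a quick diagnostic sketch (paths taken from the setup above; the state-file directory is an assumption, it is wherever $WorkDirectory points, commonly /var/lib/rsyslog on CentOS):

        # How many monitor blocks are configured vs. how many files are open?
        grep -c '$InputRunFileMonitor' /etc/rsyslog.d/logstash.conf
        lsof -p $(pidof rsyslogd) | grep -c '/var/log'

        # imfile keeps one state file per monitored log; blocks whose state file
        # never appears were never registered.
        ls /var/lib/rsyslog/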


  • Where should I go with hosting my site: VPS, GAE, another option?

    - by Jonathan Hayward
    My website, http://JonathansCorner.com/, began life before 1994 as www.imsa.edu/~jhayward/ and has been through various iterations and improvements to content, HTML, and the like, but remains a literature site that is, from a web administrator's perspective, fairly simple and primitive: a fair amount of static HTML and supporting files, a little bit of CGI and URI rewriting, and .htaccess files providing Expires: headers and the like. An associated site demoes various CGI scripts that fall under the category of "and other creations"; the site as a whole exists to share my creative works, and so far a fairly rudimentary use of Apache functionality, supported by Unix tools to, for instance, update the RSS feed and the "starting point" link on the home page, has served that purpose fairly well.

    I looked around here on web hosting and found the note on web host recommendations a good answer to "What are some of people's favorite web hosts overall?", but I wanted to ask the more focused question "What are the best web hosts for these criteria?":

    - I am looking at a VPS, so I will have root and be able to install software, edit Apache's config files, and so on, running Gentoo or another Linux, a BSD, or the like.
    - I would like a system secure enough that the host's vulnerabilities are mostly the ones that come with what I am trying to do; that is, I won't be trying to administer and secure an ancient Linux like some have complained about at 1and1.
    - I would like good uptime/reliability and competent support staff: if the level-1 help desk is going to tell me to go to "My Computer" on a Linux box, I'd like to be able to get past them.
    - Ideally, I would like the site hosted someplace with low latency for U.S. visitors in particular.
    - I would like a hosting solution from a stable business, one that will probably be around and is unlikely to vanish without warning.

    With those things specified, I would be interested in knowing the less expensive options. (I expect some of these requirements will knock out all of the cheapest options, but I'm still interested in price.)

    With all that stated, I'd like to back up a bit and ask whether I am asking the right question. I am concerned that the above is a very good way of asking "How can I keep my site in line with the wave of the past?", and I wonder whether it might be wiser to adapt my site to newer technologies instead of keeping it on older ones. For instance, while I would hardly portray my site as a showcase of the full power of Google App Engine, the main site at least should be a straightforward port if I were to go that way; beyond Google App Engine, my knowledge of cloud solutions is basic. If porting my site to another kind of solution is the better and more future-proof choice, I would be interested in knowing where those future-proof solutions lie.

    So I would be interested in wisdom. If the question I asked in detail is still the right one to ask, what would people suggest? Or if I should seriously consider porting my site to a newer basic option, what should I try? Any thoughts would be appreciated.


  • My internet drops and will only reconnect if I troubleshoot it or restart my computer.

    - by Paul
    My internet loses its connection, and when that happens my speakers seem broken and my mouse is not as smooth as before. It has happened before and I didn't mind it, but now it's happening literally all the time. After startup my connection seems okay, but after a few minutes, even without me doing anything, my laptop just loses its connection. It says that it is not connected and that connections are available, but if I check for networks to connect to, it shows nothing, even though there actually is a wi-fi network that my phone and other laptops connect to. They don't seem to have a problem with it; just my laptop. When it happens I usually restart my laptop, but I discovered that running the troubleshooter would sometimes repair the problem, though not every time; it says that restarting my wireless adapter did the job. Is there any long-term fix, something that could last or totally erase the problem? I have an Atheros AR5B97 Wireless Network Adapter and a Microsoft Virtual WiFi Miniport Adapter, both enabled. In Services, I have Wired AutoConfig and WLAN AutoConfig set to Automatic. Aspire 4750, Windows 7 (x86).


  • How can I format an SD card with a more robust Linux-usable filesystem with a specific cluster size for better write performance?

    - by Harvey
    Goal: a microSD card formatted...

    - for best write performance
    - for use only with embedded Linux
    - for better reliability (random power failures may occur)
    - using a 64 kB cluster size

    I'm using an 8 GB microSD card for data storage inside an embedded Linux/ARM device; the SD card is not removable. I've been using ext3 instead of the pre-installed FAT32 because it seems to handle random power failures during writes better. However, I keep noticing that my write performance is always best with the FAT32 pre-installed by Kingston; if I reformat the card, even with FAT32, the performance still suffers. Browsing Wikipedia, I stumbled upon the following passage saying that some cards are optimized for specific cluster sizes; in my case, the Kingston comes pre-formatted with a 64 kB cluster size.

        Risks of reformatting
        Reformatting an SD card with a different file system, or even with the
        same one, may make the card slower, or shorten its lifespan. Some cards
        use wear leveling, in which frequently modified blocks are mapped to
        different portions of memory at different times, and some wear-leveling
        algorithms are designed for the access patterns typical of the file
        allocation table on a FAT16 or FAT32 device.[60] In addition, the
        preformatted file system may use a cluster size that matches the erase
        region of the physical memory on the card; reformatting may change the
        cluster size and make writes less efficient.
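
    As a starting point, here is the kind of invocation worth experimenting with; this is a sketch, the device name /dev/mmcblk0 and the 4 MiB alignment figure are assumptions rather than measured values for this particular card, and the stripe_width option needs a reasonably recent e2fsprogs:

        # Align the single partition to a 4 MiB boundary, a multiple of most
        # common NAND erase block sizes.
        sudo parted -s /dev/mmcblk0 mklabel msdos mkpart primary ext3 4MiB 100%

        # ext3 with 4 KiB blocks; stride/stripe_width of 16 blocks = 64 KiB,
        # mimicking the card's preformatted 64 kB cluster size.
        sudo mkfs.ext3 -b 4096 -E stride=16,stripe_width=16 /dev/mmcblk0p1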


  • Is my OCZ SSD aligned correctly? (Linux)

    - by Barney Gumble
    I have an OCZ Agility 2 SSD with 40 GB of space. I use it as the system drive under Debian Linux (Squeeze), and in my opinion it's really fast. But I've read a lot about aligning partitions and file systems, and I'm not sure I succeeded in aligning the partitions correctly. Maybe the SSD could be even faster?? ;-)

    I use ext4, and here is the output of fdisk -cul:

        Disk /dev/sda: 40.0 GB, 40018599936 bytes
        255 heads, 63 sectors/track, 4865 cylinders, total 78161328 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: [...]

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048    73242623    36620288   83  Linux
        /dev/sda2        73244670    78159871     2457601    5  Extended
        /dev/sda5        73244672    78159871     2457600   82  Linux swap / Solaris

    My partitions were created by the Debian Squeeze setup assistant, so I didn't pay attention to the partitioning details. But now I wonder: maybe the installer didn't align them correctly? Actually, 2048 looks good to me (better than odd values like 63 or something like that), but I have no idea... ;-) Help, please!

    According to an "SSD Alignment Calculator" I found on the web, OCZ SSDs have a NAND erase block size of 512 kB and a NAND page size of 4 kB. 2048 is divisible by 4 and by 512. So are the partitions aligned correctly?
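
    The arithmetic can be checked directly; a small sketch, treating the 512 kB erase block figure as given:

        # A partition is aligned if its byte offset is a multiple of the erase
        # block size: start_sector * 512 mod (512 * 1024) == 0.
        for start in 2048 73244670 73244672; do
            echo "$start: $(( (start * 512) % (512 * 1024) ))"   # 0 means aligned
        done

    By that arithmetic, sda1 and sda5 come out aligned to the 512 kB erase block, while the extended container sda2 does not; the container itself holds no data, though, so only sda1 and sda5 matter.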


  • Tomcat and ASP site under IIS6 with SSL

    - by Rafe
    I've been working on migrating our company's website from its original server to a new one, and I am having two different but possibly related problems. The box it sits on is a Windows 2003 Server x64 running IIS 6; the Tomcat version is 5.5.x, as that is the version the original deployment was built on. There are two other sites on the server, one in plain HTML and another in PHP, and the one I am trying to migrate is a combination of Java and ASP (the introductory/sign-in pages and many of the reports used by our clients are Java, while the administration pages are ASP).

    First of all, I can only access the site if I enter the IP followed by :8080 (xxx.xxx.xxx.xxx:8080). The original setup had an index.html file in the root of the site with a bit of JavaScript in the header that redirected to 'www.mysite.com/app/public', but if I try going to the site without the 8080 I get a 'page not found' error, and the JavaScript redirector causes the same problem because it doesn't add the 8080 to the URL, even though on the original site the 8080 wasn't needed, so I don't understand why it would be needed now. The JS redirect looks like this:

        <script language="JavaScript">
        <!--
        location.href = "/app/public/"
        location.replace("/app/public/");
        //-->
        </script>

    When setting the site up, I used the command line to unbind IIS from all of the IPs on the system (there are 12 IPs on this box) because I was led to believe Tomcat wanted to use localhost, which wasn't accessible. I'm not sure this was the right thing to do, but I'm throwing it in for the sake of completeness. In fact, at this point, browsing to localhost from the server itself throws a 'could not connect to localhost' error, while localhost:8080 gives the Tomcat welcome page and localhost:8080/app/public gives the intro page of our website. So I'm not sure what I'm even looking at here, that is, what the proper behavior should be.

    The second part of the problem is that if I go to either the IP or localhost as above (localhost:8080/app/public) and click our login link, it is supposed to take me to our login page, yet instead I get a 'could not connect' error, and the URL has changed to localhost:8443/app/secure. From my research I see that 8443 is Tomcat's SSL port, and server.xml refers to it as follows:

        <Connector port="8080" maxHttpHeaderSize="8192"
                   maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
                   enableLookups="false" redirectPort="8443" acceptCount="100"
                   connectionTimeout="20000" disableUploadTimeout="true" />

    I have an SSL certificate assigned to the site via IIS, and I was under the impression that by default Tomcat lets IIS handle secure connections, but apparently something is munged, because it's not working. There is another section in server.xml that reads:

        <Connector port="8009" enableLookups="false" redirectPort="443" protocol="AJP/1.3" />

    I'm not sure what this is for, although 443 is the SSL port that IIS uses, so I'm confused about what it is supposed to be doing.

    Another question I have is: when does the isapi_redirector actually come into play? How does it know when to serve pages through Tomcat and when not to? I've hunted around the 'net for an answer and have yet to find a clear explanation. Does anyone have any pointers as to where to look for a solution to all of this?


  • Fedora 15: em1 recently disappeared and hostapd no longer serves internet to wirelessly connected devices

    - by Daniel K
    I have a laptop running hostapd, PHP, and MySQL. The laptop connects to the internet over Ethernet and acts as a wireless access point for my workplace's wifi devices. After installing some software and reconnecting the Ethernet elsewhere, my "em1" device is no longer present, and wirelessly connected devices can no longer reach the internet. The software I recently installed: pptp, pptpd, and updated Fedora libraries. I also recently moved my desk and laptop to another location and thus had to reconnect the Ethernet elsewhere.

    Wifi devices no longer have access to the internet. Wirelessly connected devices can still successfully log into the laptop's access point, showing full strength, the correct SSID, and using the proper password; however, when I try to reach a site like google.com, the request times out. The device "em1" also no longer appears on my machine. Running:

        # ifup em1

    gives the following output:

        ERROR : [/etc/sysconfig/network-scripts/ifup-eth] Device em1 does not seem to be present, delaying initialization.

    And running:

        # dhclient em1

    has the following output:

        Cannot find device "em1"

    When I run # dmesg | grep renamed, I get the following:

        renamed network interface eth0 to p4p1

    I've tried connecting to the internet through p4p1 directly from the laptop and was successful; however, my wireless devices connected to the laptop are still not able to connect to the internet. I have uninstalled pptp and pptpd using # yum erase ... but the problem still persists.

    To install pptp I used:

        # yum install pptp

    To install pptpd I did the following:

        # rpm -Uvh http://poptop.sourceforge.net/yum/stable/fc15/pptp-release-current.noarch.rpm
        # yum install pptpd

    To update my Fedora libraries I used:

        # yum check-update
        # yum update

    EDIT: Running # route produces the following results:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        default         10.11.200.1     0.0.0.0         UG    0      0        0 p4p1
        10.11.200.0     *               255.255.252.0   U     0      0        0 p4p1
        172.16.100.0    *               255.255.255.0   U     0      0        0 wlan0
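
    If the rename from em1 to p4p1 is the root cause, then whatever NAT/forwarding rules the hostapd setup relied on are probably still written against em1. A hedged sketch of re-pointing them at p4p1 (interface names taken from the route output above; adjust if your NAT was set up differently):

        # Re-enable forwarding from the wifi side out through the new uplink name.
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A POSTROUTING -o p4p1 -j MASQUERADE
        iptables -A FORWARD -i wlan0 -o p4p1 -j ACCEPT
        iptables -A FORWARD -i p4p1 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT

    It is also worth checking /etc/sysconfig/network-scripts/ for a leftover ifcfg-em1 that should now be an ifcfg-p4p1.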


  • Backup Exec tape rotation guidelines

    - by HannesFostie
    Hi,

    We use Backup Exec to take care of the backups for our data server, our Exchange server, and one more set of systems; each of the three goes to a separate "set" of tapes. Our goal is to be able to roll back a full 2 weeks, with one full backup each weekend and differential/incremental backups in between (the difference between the two isn't very big in our case, because the employees mostly use a very similar set of files throughout the week). While playing around with the settings on how to achieve this, we set Backup Exec to keep each full backup for 14 days, but because we have too much data, this requires manual intervention each time to erase a certain tape and use that one. What I would like to know is what kind of guidelines, tricks, tips, and general "stuff to think about" you keep in mind when designing your backup schedule. The type of backups (full/diff/incr) isn't of much importance in our case, as it's more or less set in stone. I made this community wiki as it's not a very specific question. Thanks in advance!


  • Mac OS X Terminal.app Ubuntu 9.10 SSHD and incorrect keyboard mapping

    - by Jesse
    Does anyone have any idea how to handle this? I can't stand connecting to certain Ubuntu boxes via Mac OS X because of issues with the keyboard layout. I have set TERM=vt100 and TERM=xterm-color both in Ubuntu's .bashrc and in Terminal.app's advanced preferences, and nothing seems to fix the issue: trying to use the arrow keys on the slim silver keyboard results in ^[[A and the like.

    This is OS X 10.6.4, in a regular bash login shell. When I try to run /lib/terminfo/x/xterm-color I get "permission denied"; maybe this is the issue?! If I sudo, it often works, which leads me to believe the permissions problem above is the cause.

    Output from stty -a:

        $ stty -a
        speed 9600 baud; rows 47; columns 181; line = 0;
        intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = M-^?;
        eol2 = M-^?; swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R;
        werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0;
        -parenb -parodd cs8 -hupcl -cstopb cread -clocal -crtscts
        -ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon
        -ixoff -iuclc ixany imaxbel -iutf8
        opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
        isig icanon iexten echo echoe -echok -echonl -noflsh -xcase -tostop -echoprt
        echoctl echoke
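
    Since terminfo entries are data files rather than executables, "running" one will always fail; the thing worth checking is whether the entry is readable. A hedged sketch of that check, run on the Ubuntu side:

        # The file only needs to be readable, not executable.
        ls -l /lib/terminfo/x/xterm-color

        # Confirm the entry can actually be looked up with the current TERM.
        TERM=xterm-color infocmp | head

        # If it is unreadable, fixing the mode should be enough.
        sudo chmod a+r /lib/terminfo/x/xterm-color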


  • Is there a clean way to obtain exclusive access to a physical partition under Windows?

    - by zneak
    Hey guys, I'm trying, under Windows 7, to run a VMware Player virtual machine from an OS installed on a physical partition. However, when I boot the virtual machine, VMware Player says that it couldn't access the physical drive for writing. This seems to be a generally acknowledged problem in the VMware community, as Windows Vista introduced a compelling new security feature that makes it impossible to write to a raw drive without first obtaining exclusive access to it. I have googled the issue and found a few workarounds; however, the clean ones seem to work only on whole physical disks, not on partitions, so I would be left with the dirty solution. In short, it meddles with the MBR to erase any trace of the partitions in use, makes Windows forget about them, then restores the MBR so the VM can launch. I'm not sure I want to do that. Is there a way to let VMware acquire exclusive access to the partition without requiring me to nuke it first?


  • Why is the `remove rows` button not always present in MySQL Workbench?

    - by Shawn
    I'm using MySQL Workbench to remove rows from a table in my database. Most of the time I simply write a SELECT statement, select the rows I want to erase, and use the "remove rows" button (circled in the screenshot below). But sometimes (quite often, actually) the button does not appear; instead, I get something like the second screenshot: the button is missing and the "remove rows" option is grayed out in the context menu, so I basically can't remove rows. The only way I've found around this is to run the SELECT query several times until the button appears (it usually does after 3 or 4 tries). Does anyone know why this is happening?

    UPDATE: Today I've been running a SELECT query dozens of times and the button never appears. It seems my incomprehensible workaround no longer works... Help!

    BTW: Using a DELETE statement does work, though I would rather not write one for each row I want to remove, as this happens quite frequently during development...


  • To what extent is size a factor in SSD performance?

    - by artif
    To what extent is the size of an SSD a factor in its performance? In my mind (correct me if I'm wrong), a bigger SSD should be, everything else being equal, faster than a smaller one: a bigger SSD has more erase blocks and thus more leeway for the FTL (flash translation layer) to do garbage-collection optimization, and more time would pass before TRIM became necessary. I see that Wikipedia remarks that "the performance of the SSD can scale with the number of parallel NAND flash chips used in the device", so throughput also seems to increase significantly. Also, many SSDs contain internal caches of some sort, and presumably those caches are larger on correspondingly larger SSDs.

    But supposing this effect exists, I would like a quantitative analysis. Does throughput increase linearly? How much is garbage collection impacted, if at all? Does latency stay the same? And so on. Would the performance of an 8 GB SSD be significantly different from, for example, an 80 GB SSD, assuming both used high-quality chips, controllers, and so on? Are there any resources (web pages, research papers, presentations, books, etc.) that discuss correlations between SSD performance (4 KB random write speed, latency, maximum sequential throughput, etc.) and size?

    I realize this does not really sound like a programming question, but it is relevant to what I'm working on (using flash for caching hard drive data), which does involve programming. If there is a better place to ask this question, e.g. a more hardware-oriented site, what would that be? Something like the equivalent of Stack Overflow (or perhaps a forum) for in-depth questions on hardware interfaces, internals, etc. would be appreciated.


  • Apache SSL reverse proxy to an embedded Tomcat

    - by ggarcia24
    I'm trying to put a reverse proxy in front of an application that runs an embedded Tomcat server over SSL. The application needs to run over SSL on port 9002, so I have no way of "disabling SSL" for it. The current setup looks like this:

        [192.168.0.10:443 - Apache with mod_proxy] --> [192.168.0.10:9002 - Tomcat app]

    After googling how to build such a setup (and testing), I came across this: https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/861137, which led me to my current configuration (trying to emulate wget's --secure-protocol=sslv3 option) in /etc/apache2/sites-enabled/default-ssl:

        <VirtualHost _default_:443>
            SSLEngine On
            SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
            SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key

            SSLProxyEngine On
            SSLProxyProtocol SSLv3
            SSLProxyCipherSuite SSLv3

            ProxyPass /test/ https://192.168.0.10:9002/
            ProxyPassReverse /test/ https://192.168.0.10:9002/

            LogLevel debug
            ErrorLog /var/log/apache2/error-ssl.log
            CustomLog /var/log/apache2/access-ssl.log combined
        </VirtualHost>

    The thing is that the error log is showing:

        error:14077102:SSL routines:SSL23_GET_SERVER_HELLO:unsupported protocol

    Complete request log:

        [Wed Mar 13 20:05:57 2013] [debug] mod_proxy.c(1020): Running scheme https handler (attempt 0)
        [Wed Mar 13 20:05:57 2013] [debug] mod_proxy_http.c(1973): proxy: HTTP: serving URL https://192.168.0.10:9002/
        [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2011): proxy: HTTPS: has acquired connection for (192.168.0.10)
        [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2067): proxy: connecting https://192.168.0.10:9002/ to 192.168.0.10:9002
        [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2193): proxy: connected / to 192.168.0.10:9002
        [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2444): proxy: HTTPS: fam 2 socket created to connect to 192.168.0.10
        [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2576): proxy: HTTPS: connection complete to 192.168.0.10:9002 (192.168.0.10)
        [Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] Connection to child 0 established (server demo1agrubu01.demo.lab:443)
        [Wed Mar 13 20:05:57 2013] [info] Seeding PRNG with 656 bytes of entropy
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1866): OpenSSL: Handshake: start
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: before/connect initialization
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: unknown state
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1897): OpenSSL: read 7/7 bytes from BIO#7f122800a100 [mem: 7f1230018f60] (BIO dump follows)
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1830): +-------------------------------------------------------------------------+
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1869): | 0000: 15 03 01 00 02 02 50 ......P |
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1875): +-------------------------------------------------------------------------+
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1903): OpenSSL: Exit: error in unknown state
        [Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] SSL Proxy connect failed
        [Wed Mar 13 20:05:57 2013] [info] SSL Library Error: 336032002 error:14077102:SSL routines:SSL23_GET_SERVER_HELLO:unsupported protocol
        [Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] Connection closed to child 0 with abortive shutdown (server example1.domain.tld:443)
        [Wed Mar 13 20:05:57 2013] [error] (502)Unknown error 502: proxy: pass request body failed to 172.31.4.13:9002 (192.168.0.10)
        [Wed Mar 13 20:05:57 2013] [error] [client 192.168.0.10] proxy: Error during SSL Handshake with remote server returned by /dsfe/
        [Wed Mar 13 20:05:57 2013] [error] proxy: pass request body failed to 192.168.0.10:9002 (172.31.4.13) from 172.31.4.13 ()
        [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2029): proxy: HTTPS: has released connection for (172.31.4.13)
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1884): OpenSSL: Write: SSL negotiation finished successfully
        [Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] Connection closed to child 6 with standard shutdown (server example1.domain.tld:443)

    If I do wget --secure-protocol=sslv3 --no-check-certificate https://192.168.0.10:9002/ it works perfectly, but from Apache it does not.

    I'm on an Ubuntu server with the latest updates, running apache2 with mod_proxy and mod_ssl enabled:

        ~$ cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=12.04
        DISTRIB_CODENAME=precise
        DISTRIB_DESCRIPTION="Ubuntu 12.04.2 LTS"
        ~# dpkg -s apache2
        ...
        Version: 2.2.22-1ubuntu1.2
        ...
        ~# dpkg -s openssl
        ...
        Version: 1.0.1-4ubuntu5.7
        ...

    I hope someone can help.
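
    Before blaming Apache, it may be worth confirming which protocol versions the backend actually accepts, since the handshake transcript above dies with "unsupported protocol". A diagnostic sketch:

        # Try each protocol against the Tomcat port; whichever handshake
        # completes is what SSLProxyProtocol must allow.
        openssl s_client -connect 192.168.0.10:9002 -ssl3 < /dev/null
        openssl s_client -connect 192.168.0.10:9002 -tls1 < /dev/null

    If only the -ssl3 handshake succeeds, this may be exactly the case the launchpad bug above describes; testing with "SSLProxyProtocol all -SSLv2" instead, letting mod_ssl negotiate the version, is a cheap experiment.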


  • Problem transferring a Win 7 operating system hard drive to be used as an external hard drive

    - by itserich
    Win 7, Asus MA479, 8 GB RAM; both hard drives are 500 GB. The operating system was on a Caviar Green, and I tried to exchange it for a Caviar Blue. The Caviar Blue took the install correctly, but the Green, the previous operating system drive, will not let itself be used as an external hard drive. I use TrueCrypt, and when I try to format the Green it freezes every time at the very end of encrypting the partition. I took the Blue out of the system and tried to encrypt it: same problem. I think there must be something on a drive that marks it as having been a system drive, and that causes a conflict. I have tried writing over the former system drive to fully erase everything, but that does not work; it still freezes. The drives do work in a different PC, i.e. a PC where they were never the system drive. The external drive is connected via eSATA through a Thermaltake dock. I have used TrueCrypt on various PCs for years, including this Win 7 machine, with no problem. Thank you.


  • Move EFI System Partition to another drive

    - by Pincopallino
    I had a Windows 8 installation on an HDD, booting via UEFI. The HDD has the following GPT layout (the output is in Italian: Partizione = Partition, Ripristino = Recovery, Sistema = System, Riservato = Reserved, Primario = Primary, Dim. = Size):

        DISKPART> list partition

        Partizione ###  Tipo        Dim.    Offset
        --------------- ----------- ------- -------
        Partizione 1    Ripristino  300 Mb  1024 Kb
        Partizione 2    Sistema     100 Mb  301 Mb
        Partizione 3    Riservato   128 Mb  401 Mb
        Partizione 4    Primario    390 Gb  529 Mb
        Partizione 5    Primario    540 Gb  390 Gb

    I recently bought an SSD, connected it, and installed a fresh Windows 8. I now have a working dual boot, but the EFI partition is on the HDD instead of the SSD. Here is the SSD's partition list:

        Partizione ###  Tipo        Dim.    Offset
        --------------- ----------- ------- -------
        Partizione 1    Riservato   128 Mb  1024 Kb
        Partizione 2    Primario    221 Gb  129 Mb

    I think the best solution would be to have it on the SSD, for two reasons. The first is performance: I guess it would be a little faster on the SSD due to an HDD's spin-up time, but I may be wrong about that. The second is consistency: as I plan to use only the Windows 8 installation on the SSD, and will probably erase the system partition on the HDD to use that disk for data storage, the boot partition should live on the same drive as the OS. So the question is: how do I move the EFI System Partition to the SSD?


  • Can't find the Partition tab in Disk Utility, OS X ver. 10.6.8

    - by John W
    I just got a used MacBook Pro, an older late-2007 model; the OS X upgrade to 10.6.8 was just performed. I created a new admin account and deleted the old one, as well as one other user.

    My Macintosh HD is showing up as "Partition 2". I ran Disk Utility (not from the install disk), but there was no Partition tab. I have a 160 GB drive with only 53 GB of space left on it; since I am the only user and have hardly any files on the laptop yet, I don't understand why there is so little space left. Surely the OS can't use up over 100 GB.

    I wanted to run Disk Utility to see if there were any recovery partitions or other partitions left over from the previous owner that could be erased to make room for expanding the main partition. Unfortunately, there is no Partition tab in Disk Utility, even though the documentation I have found online states that this version of OS X includes it. The OS X disks I have are for an older version, so I'm not sure whether they would help solve this problem; I am also afraid that using those disks would lose the little bit of data and apps I have assembled. I would rather not do a fresh install and have to redo all the updates. The previous owner left some apps that I don't want to lose, as I would have to pay handsomely to get them back.

    Simply put: if the previous users' data is still taking up space on a partition I can't see, I need to locate it, erase it, and expand the primary partition to reclaim the disk space for my files. I am new to Mac, so please be as descriptive as possible. Thanks.
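
    From the Terminal, a couple of read-only commands can show whether hidden partitions or plain files are eating the space; this is a sketch, and nothing in it modifies the disk:

        # List every partition on the drive, visible or not.
        diskutil list

        # See how much of the mounted volume is actually in use.
        df -h /

        # Show the size of each top-level directory on the disk.
        sudo du -xsh /*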


  • RDP suggesting a user name when connecting to a server

    - by Neolisk
    Prior to Event X, RDPing to a Server 2003 box always left the user name blank and the "Log on to" option enabled, so you could pick which domain to log in to; for us that's either local or our domain. Since a recent Event X, a domain\username is being suggested for every server, and it's not the most recently used user name. If you remove it manually from the RDP dialog, it is pre-populated again, and at the next available opportunity it returns to the General/User name field of the RDP dialog. So the user name field comes pre-populated, and you cannot choose to log in locally (only if you manually erase the domain specifier, everything before the backslash); the "Log on to" option is disabled by default. We did not make any changes to our domain or client machines, so I suspect some Windows update caused it (this being Event X). Interesting fact: it does not happen consistently on all machines, and some can log in to some servers fine, while other servers keep suggesting a default user name. What could that Event X be, and is there a way to fix it?

    EDIT: I tried this: How to clear remote desktop connections history, and specifically this part of it:

        reg delete "HKEY_CURRENT_USER\Software\Microsoft\Terminal Server Client\UsernameHint" /f

    The problem still persists.

