Search Results

Search found 30742 results on 1230 pages for 'folder size'.

  • Unable to delete Gmail profile in Outlook 2010

    - by Michele
    I used to have 2 Gmail profiles set up in Outlook 2010. I initially set one up using IMAP, but I deleted it (or so I thought) and set it up again the old-fashioned way, without IMAP. Then, finally, I deleted the 2nd Gmail profile altogether. So now I have only ONE Gmail profile set up in Outlook 2010, and it works fine. The problem is that the original Gmail IMAP folder for the 2nd Gmail account still appears in my folder list in Outlook. I've already disabled IMAP in my Google Gmail account, but I am unable to delete the folder from Outlook 2010. I have checked the account settings, and it is NOT in my list of email profiles or data files. Has anyone else had this problem?

  • I need to access another user's files.

    - by CDeanMartin
    My system is an HP netbook running Ubuntu 10 netbook edition from a USB drive. I created an admin account and a user account, and left the 'ubuntu' account in place. My netbook came with Windows 7 factory-loaded, and I did some work in Windows before setting up Linux. I copied my work into the HP Tools FAT32 partition, which also came factory-loaded and was only 20% full. Only the 'ubuntu' account shows the HP Tools partition. So I would like to either view the partition from the 'admin' or 'user' accounts, or copy files from the partition to a folder accessible from admin or user. I have already tried right-clicking the folder, selecting share, and installing the share package, but I got a string of errors and would prefer a short-term, one-time solution that does not involve installing the share package. All I need is a few plain-text Windows files I was working on.
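
    A minimal sketch of a one-time approach that avoids the share package: mount the FAT32 partition somewhere world-readable and copy the files out. The device name /dev/sda2 and the folder names below are assumptions; check the real partition with sudo fdisk -l first.

        # find the FAT32 partition (the HP Tools one); /dev/sda2 below is a guess
        sudo fdisk -l

        # mount it and copy the work into the desired account's home
        sudo mkdir -p /mnt/hptools
        sudo mount -t vfat /dev/sda2 /mnt/hptools
        cp -r /mnt/hptools/mywork ~/Documents/    # "mywork" is a placeholder name
        sudo umount /mnt/hptools

    A vfat partition mounted by root is world-readable by default, so the copy can be run from the admin or user account directly and the copies end up owned by that account.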

  • Are there any real-world cases for C++ without exceptions?

    - by Martin
    In When to use C over C++, and C++ over C? there is a statement regarding code size and C++ exceptions. Jerry answers (among other points): "(...) it tends to be more difficult to produce truly tiny executables with C++. For really small systems, you're rarely writing a lot of code anyway, and the extra (...)", to which I asked why that would be, and Jerry responded: "the main thing is that C++ includes exception handling, which (at least usually) adds some minimum to the executable size. Most compilers will let you disable exception handling, but when you do the result isn't quite C++ anymore. (...)", which I do not really doubt at a technical, real-world level. Therefore I'm interested (purely out of curiosity) in hearing about real-world examples where a project chose C++ as its language and then chose to disable exceptions. (Not merely "not using" exceptions in user code, but disabling them in the compiler, so that you can't throw or catch exceptions.) Why does a project choose to do so (still using C++ and not C, but with no exceptions)? What are/were the (technical) reasons?

    Addendum: for those wishing to elaborate on their answers, it would be nice to detail how the implications of no-exceptions are handled:

    - STL collections (vector, ...) do not work properly (allocation failure cannot be reported)
    - new can't throw
    - constructors cannot fail
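
    For concreteness, a small sketch of what "disable them in the compiler" means in practice with GCC or Clang (the flag is real; the test file is made up):

        # hypothetical one-file experiment
        cat > nothrow_test.cpp <<'EOF'
        int main() {
            throw 42;   // rejected when exceptions are compiled out
        }
        EOF
        g++ -fno-exceptions nothrow_test.cpp
        # g++ rejects this with a message along the lines of:
        #   error: exception handling disabled, use '-fexceptions' to enable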

  • Extension GLX missing... on a desktop PC

    - by Bart van Heukelom
    I just installed Ubuntu 12.10 on a new PC with an Nvidia GTX 560 graphics card, but after installing the Nvidia proprietary drivers (either -current or -current-updates), Unity won't start. When trying to start it manually I get the message "extension GLX missing". I've searched around and found results like this question which point out it's a problem with Nvidia Optimus laptops. However, I don't have this problem on a laptop, but on a desktop PC.

    lshw output for the graphics card:

        *-display
             description: VGA compatible controller
             product: GF114 [GeForce GTX 560 SE]
             vendor: NVIDIA Corporation
             physical id: 0
             bus info: pci@0000:01:00.0
             version: a1
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
             configuration: driver=nouveau latency=0
             resources: irq:16 memory:f4000000-f5ffffff memory:e0000000-e7ffffff memory:e8000000-ebffffff ioport:e000(size=128) memory:f6000000-f607ffff

    and CPU:

        *-cpu
             description: CPU
             product: Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz
             vendor: Intel Corp.
             physical id: 40
             bus info: cpu@0
             version: Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz
             slot: SOCKET 0
             size: 1600MHz
             capacity: 3800MHz
             width: 64 bits
             clock: 100MHz
             capabilities: x86-64 fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms cpufreq
             configuration: cores=4 enabledcores=4 threads=4
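
    One detail worth noting in the lshw output above: configuration: driver=nouveau means the card is currently bound to the open-source nouveau module, not the proprietary one, which is consistent with GLX being missing. A quick way to confirm which driver is actually in use (stock commands, nothing machine-specific):

        lsmod | grep -E 'nouveau|nvidia'   # which kernel module is loaded
        lspci -k | grep -A 3 -i vga        # shows the "Kernel driver in use:" line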

  • Mapping SkyDrive as a network drive in Mac OS

    - by vittore
    As you probably know, if you have a Windows Live account you can use the free 25 GB of SkyDrive storage. Even more, a lot of people know that if you go to your SkyDrive in a browser and copy the cid query parameter value (https://...live.com/...&cid=xxxxxxxx), you can map SkyDrive as a network drive in Windows using the network path \\[cid].docs.live.net\[cid]. I do know that if I have a network share like \\server\folder I can map it in Mac OS too, as smb://server/folder. However, that doesn't seem to be the case with SkyDrive: when I try to map it as smb://[cid].docs.live.net/[cid], Finder says it can't connect. Does anyone know how to map it?
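
    One likely reason smb:// fails: the Windows mapping for docs.live.net is served over WebDAV rather than SMB, so on a Mac the share would be connected as a WebDAV volume instead. A hedged sketch; the exact URL scheme is an assumption worth verifying against what the browser shows:

        # Finder: Go > Connect to Server, with an https:// (WebDAV) URL, e.g.
        #   https://[cid].docs.live.net/[cid]/

        # or from Terminal, using the WebDAV mounter that ships with Mac OS X:
        mkdir -p /Volumes/skydrive
        mount_webdav https://[cid].docs.live.net/[cid]/ /Volumes/skydrive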

  • Can't serve files without extension because they "appear to be script" on IIS7.5

    - by madd0
    I created a certain number of static JSON files with no extension in a subfolder of my site. I want to use them for tests. The problem is that IIS refuses to serve them because: "HTTP Error 404.17 - Not Found. The requested content appears to be script and will not be served by the static file handler." The folder is a subfolder of an ASP.NET application; I can't create an application just for it, and neither can I change the parent application's application pool. Actually, I don't have access to the IIS configuration other than through the web.config file in the folder in question. I assume there must be a way to get a web server to serve static files, right?
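
    Since that folder's own web.config is accessible, one commonly used workaround is to hand extensionless files back to the static file handler in just that subfolder. A sketch, assuming the files should be served as JSON and that ASP.NET 4's extensionless-URL handler is what intercepts the requests (the handler name varies by framework version, so adjust or omit that part):

        <?xml version="1.0"?>
        <configuration>
          <system.webServer>
            <handlers>
              <!-- assumption: this handler is grabbing extensionless requests -->
              <remove name="ExtensionlessUrlHandler-Integrated-4.0" />
            </handlers>
            <staticContent>
              <!-- fileExtension="." matches files that have no extension at all -->
              <mimeMap fileExtension="." mimeType="application/json" />
            </staticContent>
          </system.webServer>
        </configuration>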

  • Elastic beanstalk access private git repo

    - by user221676
    I am currently trying to add an SSH key to my Elastic Beanstalk instances using .ebextensions commands. The keys I have stored in my application code, and I copy them to root's .ssh folder so I can access them when doing a git+ssh clone later. Here is an example of the config file in my .ebextensions folder:

        packages:
          yum:
            git: []
        container_commands:
          01-move-ssh-keys:
            command: "cp .ssh/* ~root/.ssh/; chmod 400 ~root/.ssh/tca_read_rsa; chmod 400 ~root/.ssh/tca_read_rsa.pub; chmod 644 ~root/.ssh/known_hosts;"
          02-add-ssh-keys:
            command: "ssh-add ~root/.ssh/tca_read_rsa"

    The problem is that I get an error when attempting to clone the repo: "Host key verification failed." I have tried many ways of adding the host to the known_hosts file, but none have worked! The command doing the clone is npm install, as the repo is referenced by a node module.
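
    "Host key verification failed" is about the known_hosts check rather than the key pair itself, so one common approach is to pre-seed known_hosts before the clone runs. A sketch as an extra container command; github.com is an assumption (substitute the real Git host), and whether this runs early enough relative to the platform's npm install step is worth verifying:

        container_commands:
          00-known-hosts:
            command: "ssh-keyscan -H github.com >> ~root/.ssh/known_hosts"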

  • Emails not being delivered

    - by Tomtiger11
    A comment pointed out that this may fix my problem, and it did: Why don't mails show up in the recipient's mailspool? I use Postfix with Dovecot, and when I send an email from my Gmail to my server, it is received at the server, but not by my email client using POP3. I can verify that it was received at the server using the mail command. This is my main.cf:

        queue_directory = /var/spool/postfix
        command_directory = /usr/sbin
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        mail_owner = postfix
        myhostname = tom4u.eu
        myorigin = $myhostname
        inet_interfaces = all
        inet_protocols = all
        unknown_local_recipient_reject_code = 550
        relay_domains = $mydomain
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        debug_peer_level = 2
        debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
        sendmail_path = /usr/sbin/sendmail.postfix
        newaliases_path = /usr/bin/newaliases.postfix
        mailq_path = /usr/bin/mailq.postfix
        setgid_group = postdrop
        html_directory = no
        manpage_directory = /usr/share/man
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        smtpd_tls_cert_file = /etc/postfix/certs/cert.pem
        milter_protocol = 2
        milter_default_action = accept
        smtpd_milters = inet:localhost:8891
        non_smtpd_milters = inet:localhost:8891
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_local_domain = $myhostname
        smtpd_recipient_restrictions = reject_non_fqdn_recipient,permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination,permit
        broken_sasl_auth_clients = yes
        smtpd_sasl_type = dovecot
        smtpd_sasl_path = private/auth

    If you could help me with this, I'd be most grateful; if you need any more information, please ask.

    /var/log/maillog:

        May 30 22:44:25 tom4u postfix/smtpd[18626]: connect from mail-we0-f181.google.com[74.125.82.181]
        May 30 22:44:25 tom4u postfix/smtpd[18626]: 318F679B7F: client=mail-we0-f181.google.com[74.125.82.181]
        May 30 22:44:25 tom4u postfix/cleanup[18631]: 318F679B7F: message-id=<CAA_0zdxY-WUFGOC57K_yVn0G+5hN=8KSXuohJqMDB5Rm7bqu8w@mail.gmail.com>
        May 30 22:44:25 tom4u opendkim[15006]: 318F679B7F: mail-we0-f181.google.com [74.125.82.181] not internal
        May 30 22:44:25 tom4u opendkim[15006]: 318F679B7F: not authenticated
        May 30 22:44:25 tom4u opendkim[15006]: 318F679B7F: DKIM verification successful
        May 30 22:44:25 tom4u opendkim[15006]: 318F679B7F: s=20120113 d=gmail.com SSL
        May 30 22:44:25 tom4u postfix/qmgr[16282]: 318F679B7F: from=<[email protected]>, size=1720, nrcpt=1 (queue active)
        May 30 22:44:25 tom4u postfix/smtpd[18626]: disconnect from mail-we0-f181.google.com[74.125.82.181]
        May 30 22:44:25 tom4u postfix/local[18632]: 318F679B7F: to=<[email protected]>, relay=local, delay=0.17, delays=0.12/0.01/0/0.03, dsn=2.0.0, status=sent (delivered to mailbox)
        May 30 22:44:25 tom4u postfix/qmgr[16282]: 318F679B7F: removed
        May 30 22:45:32 tom4u dovecot: pop3-login: Login: user=<tom>, method=PLAIN, rip=SNIP, lip=176.31.127.165, mpid=18679
        May 30 22:45:32 tom4u dovecot: pop3(tom): Disconnected: Logged out top=0/0, retr=0/0, del=0/0, size=0
        May 30 22:46:32 tom4u dovecot: pop3-login: Login: user=<tom>, method=PLAIN, rip=SNIP, lip=176.31.127.165, mpid=18725
        May 30 22:46:32 tom4u dovecot: pop3(tom): Disconnected: Logged out top=0/0, retr=0/0, del=0/0, size=0
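
    For anyone else with the same symptom (status=sent, "delivered to mailbox", readable with mail(1), but empty over POP3), the usual mismatch is Postfix delivering to the system mbox while Dovecot reads from a different location. A sketch of pointing Dovecot at mbox delivery in /var/mail; verify the active value with dovecot -n first, and the conf.d path is the stock packaged layout:

        # /etc/dovecot/conf.d/10-mail.conf
        mail_location = mbox:~/mail:INBOX=/var/mail/%u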

  • Postfix bouncing when the destination is my domain

    - by ZeC
    I am using my provider's mail hosting for email. On my web server I also have Postfix running and configured. Here is my main.cf:

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        append_dot_mydomain = yes
        readme_directory = no
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        myhostname = 2-5-8.bih.net.ba
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = bhcom.info, 2-5-8.bih.net.ba, localhost.bih.net.ba, localhost
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_command =
        mailbox_size_limit = 10485760
        recipient_delimiter = +
        inet_interfaces = 80.65.85.114

    When I try sending email to my hosted domain name, every message gets bounced with this error:

        Nov 4 20:38:34 2-5-8 postfix/pickup[802]: 1492A3E0C6C: uid=0 from=<[email protected]>
        Nov 4 20:38:34 2-5-8 postfix/cleanup[988]: 1492A3E0C6C: message-id=<[email protected]>
        Nov 4 20:38:34 2-5-8 postfix/qmgr[803]: 1492A3E0C6C: from=<[email protected]>, size=348, nrcpt=1 (queue active)
        Nov 4 20:38:34 2-5-8 postfix/local[990]: 1492A3E0C6C: to=<[email protected]>, relay=local, delay=0.12, delays=0.08/0.01/0/0.04, dsn=5.1.1, status=bounced (unknown user: "info")
        Nov 4 20:38:34 2-5-8 postfix/cleanup[988]: 28ED53E0C6D: message-id=<[email protected]>
        Nov 4 20:38:34 2-5-8 postfix/qmgr[803]: 28ED53E0C6D: from=<>, size=2056, nrcpt=1 (queue active)
        Nov 4 20:38:34 2-5-8 postfix/bounce[991]: 1492A3E0C6C: sender non-delivery notification: 28ED53E0C6D
        Nov 4 20:38:34 2-5-8 postfix/qmgr[803]: 1492A3E0C6C: removed
        Nov 4 20:38:34 2-5-8 postfix/local[990]: 28ED53E0C6D: to=<[email protected]>, relay=local, delay=0.06, delays=0.03/0/0/0.02, dsn=5.1.1, status=bounced (unknown user: "razvoj")
        Nov 4 20:38:34 2-5-8 postfix/qmgr[803]: 28ED53E0C6D: removed

    However, when I send to @gmail.com, the message goes through without problems. What might be the issue? Here is the log:

        Nov 4 20:41:23 2-5-8 postfix/pickup[802]: B2EC63E0C6C: uid=0 from=<[email protected]>
        Nov 4 20:41:23 2-5-8 postfix/cleanup[1022]: B2EC63E0C6C: message-id=<[email protected]>
        Nov 4 20:41:23 2-5-8 postfix/qmgr[803]: B2EC63E0C6C: from=<[email protected]>, size=350, nrcpt=1 (queue active)
        Nov 4 20:41:23 2-5-8 postfix/smtp[1024]: connect to gmail-smtp-in.l.google.com[2a00:1450:4001:c02::1a]:25: Network is unreachable
        Nov 4 20:41:24 2-5-8 postfix/smtp[1024]: B2EC63E0C6C: to=<[email protected]>, relay=gmail-smtp-in.l.google.com[173.194.70.26]:25, delay=0.97, delays=0.08/0.01/0.27/0.62, dsn=2.0.0, status=sent (250 2.0.0 OK 1352058066 f7si2180442eeo.46)
        Nov 4 20:41:24 2-5-8 postfix/qmgr[803]: B2EC63E0C6C: removed
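
    The log lines relay=local ... unknown user: "info" show this box accepting the mail itself: bhcom.info sits in mydestination, so Postfix looks for a local Unix user instead of handing the message to the provider that actually hosts the mailboxes. A sketch of the usual fix (hedged, and only correct if no mailboxes for that domain live on this server):

        # stop treating the hosted domain as local on the web server
        sudo postconf -e 'mydestination = 2-5-8.bih.net.ba, localhost.bih.net.ba, localhost'
        sudo postfix reload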

  • Annotating a PDF file from my Dropbox on iPad while keeping the latest version in Dropbox

    - by Farshid
    I have a folder in my Dropbox where I keep my ebooks. I want to find an iPad app that can do these things for me:

    - let me open a PDF file from my Dropbox
    - let me annotate that file
    - apply the annotations to the Dropbox version of my file, instead of creating a local copy whose changes do not affect the Dropbox version

    On my PC, when I open a PDF file from my Dropbox and make some highlights, pressing the save button in Acrobat Reader instantly updates the Dropbox version, so whenever I open my Dropbox folder I have the latest version of the file. I need similar functionality on my iPad. Which iPad app do you recommend for this?

  • Strategy for hosting 700+ domains, each with static HTML site

    - by jonschlinkert
    I have a portfolio of more than 700 domain names, and ideally I'd like to put up a single-page HTML/CSS/JavaScript webpage for each domain. Is there a system/strategy/workflow that will allow me to:

    - Automate the deployment of new websites, quickly and easily, without having to manually initiate each new website in an admin panel. For instance, I've seen Dropbox-based solutions that claim to make it simple to set up new websites on your Dropbox account, but you still have to set each one up in an admin interface first. It would be so much easier to have a folder naming convention that allowed the user to easily clone/copy/duplicate sites inside their Dropbox App folder (https://www.dropbox.com/developers/blog/23) to create new ones. Sounds interesting, however...
    - Associate CNAMEs with new websites quickly. Managing CNAMEs on the registrar side is easy, but is there something gh-pages-style (https://help.github.com/articles/setting-up-a-custom-domain-with-pages)? With GitHub's gh-pages, all you have to do is drop a file called CNAME into your repo, with the domain name you want associated with the repo inside the file. Unfortunately, gh-pages isn't a good solution for what I'm doing.

    I'm also a front-end developer specializing in rapid web development and "front-end build systems", so building and maintaining static assets for hundreds of sites is no problem. It's the hosting side that I really struggle with. Any suggestions?
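
    Not a complete answer, but the "folder per domain" idea maps directly onto nginx-style mass virtual hosting, which removes the per-site admin step entirely. A sketch, assuming sites live under /var/www/sites/<domain> and that DNS for every domain already points at the box; the www and apex variants each need their own folder (or a rewrite):

        # /etc/nginx/sites-enabled/catchall
        server {
            listen 80 default_server;
            server_name _;                 # catch every domain pointed here
            root /var/www/sites/$host;     # folder name == Host header
            index index.html;
        }

    Deploying site number 701 is then just creating one more folder; no admin panel involved.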

  • Does Win 7 still require copying all files over before burning to a DVD-R or BD-R?

    - by Jian Lin
    It seems that Win 7 still needs to copy all files over to a staging folder before it burns them to a DVD-R or BD-R. Since XP or Vista, Windows has always copied everything to a temporary folder before burning to an empty DVD-R. So if you just want to burn a 4 GB file to an empty DVD-R, it will first make a copy of that file and then burn it, instead of just burning it without making a copy first. And it seems to be the case on Win 7 as well. Most other third-party burning tools won't make an extra copy of the files first; Win 7 is the exception. Is there a way around it (to avoid copying over 25 GB or 50 GB of data before burning)?

  • Notification if SyncToy fails

    - by Joel Coehoorn
    I support a number of laptop users. In the past (before there were many laptops), each user's computer was set up so that their My Documents folder was mapped to a shared folder on the server. This worked very well for desktops, but has several obvious downsides for laptops (no files when you're off-site, etc). I'm exploring several alternatives for laptops to better map the shared drives, and SyncToy seems the best so far. I have a couple trial users set up so that it syncs automatically whenever they log in, along with a desktop icon they can click if they know they'll need something saved before the next login. My problem is that I'm concerned how I, as the maintainer of this system, can spot failures. I don't want my first indication of a problem to come after a user drops their laptop in a lake and it turns out nothing was synced for the last year. Any ideas?
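
    One low-tech way to get failure visibility: drive the sync from a scheduled task through SyncToy's command-line runner and log non-zero exits somewhere central that gets checked. A sketch, assuming SyncToy 2.1's SyncToyCmd.exe, a folder pair named "MyDocs", and a hypothetical \\server\synclogs share; whether SyncToyCmd's exit code reflects every failure mode is worth testing before trusting it:

        @echo off
        rem run the configured folder pair
        "C:\Program Files\SyncToy 2.1\SyncToyCmd.exe" -R "MyDocs"
        if errorlevel 1 (
            echo %date% %time% %computername% sync FAILED >> \\server\synclogs\failures.log
        )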

  • Ubuntu Server 11.04 recognizes only 1 core instead of 4

    - by Kreker
    I searched other questions and googled a lot, but I can't find a solution to this problem. Ubuntu Server 11.04 64-bit is installed on a Dell PowerEdge with an Intel Xeon X5450, and it recognizes only 1 of the 4 cores I have. I tried modifying the GRUB config, but that didn't work. In the machine's BIOS I didn't find anything useful.

    CPU:

        root@darwin:~# cat /proc/cpuinfo
        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 23
        model name      : Intel(R) Xeon(R) CPU X5450 @ 3.00GHz
        stepping        : 10
        cpu MHz         : 2992.180
        cache size      : 6144 KB
        physical id     : 0
        siblings        : 1
        core id         : 0
        cpu cores       : 1
        apicid          : 0
        initial apicid  : 0
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 13
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 xsave lahf_lm dts tpr_shadow vnmi flexpriority
        bogomips        : 5984.36
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 38 bits physical, 48 bits virtual
        power management:

    GRUB:

        root@darwin:~# cat /etc/default/grub
        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        # For full documentation of the options in this file, see:
        #   info -f grub -n 'Simple configuration'
        GRUB_DEFAULT=0
        #GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=2
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT=""
        GRUB_CMDLINE_LINUX="noapic nolapic" #was with acpi=off
        # Uncomment to enable BadRAM filtering, modify to suit your needs
        # This works with Linux (no patch required) and with any kernel that obtains
        # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
        #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console
        # The resolution used on graphical terminal
        # note that you can use only modes which your graphic card supports via VBE
        # you can see them in real GRUB with the command `vbeinfo'
        #GRUB_GFXMODE=640x480
        # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
        #GRUB_DISABLE_LINUX_UUID=true
        # Uncomment to disable generation of recovery mode menu entries
        #GRUB_DISABLE_RECOVERY="true"
        # Uncomment to get a beep at grub start
        #GRUB_INIT_TUNE="480 440 1"

    The complete dmesg is too long, so I posted it on pastebin: http://pastebin.com/bsKPBhzu
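
    Worth noting in the GRUB config above: nolapic disables the local APICs, and without those the kernel cannot bring up the secondary CPUs, so seeing a single core is the expected result of that flag. A sketch of the usual fix, assuming the flags aren't working around a real hardware problem (in which case the underlying ACPI/BIOS issue needs solving instead):

        # /etc/default/grub: drop the SMP-disabling flags
        GRUB_CMDLINE_LINUX=""

        sudo update-grub
        sudo reboot
        # afterwards, both of these should report 4:
        nproc
        grep -c ^processor /proc/cpuinfo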

  • Cannot install new certificate in IIS 7 on Windows Server 2008 R2

    - by Alex B.
    We are trying to renew our existing website certificate on our IIS 7 site under Windows Server 2008 R2, but we keep getting the "Access is denied" error that others have posted about. However, when we go to apply the common fix (making sure the Administrators group has full access to all folders and subfolders of C:\ProgramData\Microsoft\Crypto\RSA), we get an "Access is denied" error when changing those permissions. Yes, we are logged in as the Administrator user; it just will not let us modify the group permissions on this folder. Help! We need to renew our certificate before March 2011!
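
    When even Administrator can't edit the ACLs, taking ownership first from an elevated command prompt usually unblocks it. A sketch against the folder from the question (takeown and icacls are stock tools on Server 2008 R2; try it on a non-production box first):

        takeown /f "C:\ProgramData\Microsoft\Crypto\RSA" /r /d y
        icacls "C:\ProgramData\Microsoft\Crypto\RSA" /grant Administrators:F /t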

  • Why change net.inet.tcp.tcbhashsize in FreeBSD?

    - by sh-beta
    In virtually every FreeBSD network tuning document I can find:

        # /boot/loader.conf
        net.inet.tcp.tcbhashsize=4096

    This is usually paired with some unhelpful statement like "TCP control-block hash table tuning" or "Set this to a reasonable value." man 4 tcp isn't much help either:

        tcbhashsize    Size of the TCP control-block hash table (read-only).
                       This may be tuned using the kernel option TCBHASHSIZE or
                       by setting net.inet.tcp.tcbhashsize in the loader(8).

    The only document I can find that touches on this mysterious thing is the "Protocol Control Block Lookup" subsection beneath "Transport Layer" in Optimizing the FreeBSD IP and TCP Stack, but its description is more about potential bottlenecks in using it. It seems tied to matching new TCP segments to their listening sockets, but I'm not sure how. What exactly is the TCP control block used for? Why would you want to set its hash size to 4096 or any other particular number?
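
    For anyone poking at the same knob: the active value is easy to read at runtime even though (per the man page) it can only be set at boot, and comparing it against the box's real connection count is one way to sanity-check a chosen size:

        sysctl net.inet.tcp.tcbhashsize     # current hash table size
        netstat -an -p tcp | wc -l          # rough count of TCP connections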

  • How are thumbnails stored on Mac OS X?

    - by Alberto Moriconi
    I have noticed that thumbnails on Mac OS X don't seem to be generated every time I open a folder, but to be somehow "cached" instead. I wasn't able, however, to find a folder where they are clearly stored. I then thought they could be saved as some kind of metadata; I found, however, that when deleting a file (e.g. from the Desktop) and immediately saving a file with the same name, the preview for the previous file is shown for about a second. Are they stored separately from the data? How and when are they invalidated (e.g. only when a file with the same name appears in the same directory)?
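
    Not a direct answer to where the files live, but Finder previews go through Quick Look, which keeps an on-disk thumbnail cache; qlmanage (part of Mac OS X) is one way to poke at that layer:

        qlmanage -r cache          # reset the Quick Look thumbnail cache
        qlmanage -p SomeFile.pdf   # render a preview on demand (placeholder file)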

  • Deleting pagefile.sys on shutdown

    - by Daniel E. Shub
    I have a Windows XP machine (it is a VM running in Xen) that I would like to back up. I have enabled ClearPageFileAtShutdown by following MS KB 314834. If I cleanly shut down the XP machine and then mount the drive in another machine (which is trivial since the machine is virtual), I still have a large pagefile.sys. I was hoping that enabling ClearPageFileAtShutdown would result in a pagefile.sys with a size near zero. I have two questions. First, is it possible to have pagefile.sys deleted, or drastically reduced in size, at shutdown? Second, can I exclude pagefile.sys from my backup?
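
    A note on expectations: per KB 314834, this setting overwrites the pagefile with zeros at shutdown; it does not delete or shrink the file, so a full-size pagefile.sys after a clean shutdown is the documented behavior. The registry value the KB toggles is:

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" ^
            /v ClearPageFileAtShutdown /t REG_DWORD /d 1 /f

    As for the backup question, excluding pagefile.sys from the copy is the usual approach, since it holds nothing a restore needs.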

  • Installing Ubuntu Server 12.04 as a software RAID 1 mirror fails to boot

    - by Jeff Atwood
    I'm installing a few new Ubuntu Server 12.04 LTS servers, and they have two 512 GB SSDs. I want them to use software RAID 1 mirroring, so I was following this document religiously, step by step: https://help.ubuntu.com/12.04/serverguide/advanced-installation.html

    To summarize the above official documentation: to set up a software RAID 1 mirror in Ubuntu Server, you choose manual partitioning during the setup and create this on each drive:

    - a "swap" partition of roughly RAM size
    - a "physical volume for RAID" partition for the remaining drive size

    After that, you set up the RAID 1 mirror using the RAID partitions on drives A and B, and make it the ext4 partition containing the root filesystem. Setup continues from there just fine. One caveat: I was completely unable to select the "physical volume for RAID" as bootable. When I tried to do that in setup, it had no effect: I could press Enter on the "make bootable" option all day long and nothing would ever change.

    However, after install successfully completes, I have a big problem: the system won't boot! I get:

        Reboot and Select proper boot device
        or Insert Boot Media in selected Boot device and press a key

    What did I do wrong? Why can't I mark that "physical volume for RAID" partition bootable during Ubuntu Server setup? Is there some way for me to make the physical volumes for RAID bootable after the fact, perhaps from a live CD or something?
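
    A recovery path that avoids reinstalling, sketched under assumptions: boot the install/live CD into rescue mode, assemble the array, and install GRUB onto the MBR of both disks so either one can boot. The device names below (/dev/md0, /dev/sda, /dev/sdb) are placeholders to verify against the actual layout:

        sudo mdadm --assemble --scan        # bring the RAID 1 array up
        sudo mount /dev/md0 /mnt
        for d in /dev /dev/pts /proc /sys; do sudo mount --bind $d /mnt$d; done
        sudo chroot /mnt grub-install /dev/sda
        sudo chroot /mnt grub-install /dev/sdb
        sudo chroot /mnt update-grub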

  • Various problems with software raid1 array built with Samsung 840 Pro SSDs

    - by Andy B
    I am bringing to ServerFault a problem that has been tormenting me for 6+ months. I have a CentOS 6 (64-bit) server with an md software RAID-1 array of 2 x Samsung 840 Pro SSDs (512 GB).

    Problems: serious write speed problems:

        root [~]# time dd if=arch.tar.gz of=test4 bs=2M oflag=sync
        146+1 records in
        146+1 records out
        307191761 bytes (307 MB) copied, 23.6788 s, 13.0 MB/s

        real    0m23.680s
        user    0m0.000s
        sys     0m0.932s

    When doing the above (or any other larger copy) the load spikes to unbelievable values (even over 100), going up from ~1. When doing the above I've also noticed very weird iostat results:

        Device:  rrqm/s  wrqm/s   r/s    w/s      rsec/s  wsec/s    avgrq-sz  avgqu-sz  await    svctm  %util
        sda      0.00    1589.50  0.00   54.00    0.00    13148.00  243.48    0.60      11.17    0.46   2.50
        sdb      0.00    1627.50  0.00   16.50    0.00    9524.00   577.21    144.25    1439.33  60.61  100.00
        md1      0.00    0.00     0.00   0.00     0.00    0.00      0.00      0.00      0.00     0.00   0.00
        md2      0.00    0.00     0.00   1602.00  0.00    12816.00  8.00      0.00      0.00     0.00   0.00
        md0      0.00    0.00     0.00   0.00     0.00    0.00      0.00      0.00      0.00     0.00   0.00

    And it keeps this way until it actually writes the file to the device (out of swap/cache/memory). The problem is that the second SSD in the array has svctm and await roughly 100 times larger than the first.

    For some reason the wear is different between the 2 members of the array:

        root [~]# smartctl --attributes /dev/sda | grep -i wear
        177 Wear_Leveling_Count  0x0013  094%  094  000  Pre-fail  Always  -  180
        root [~]# smartctl --attributes /dev/sdb | grep -i wear
        177 Wear_Leveling_Count  0x0013  070%  070  000  Pre-fail  Always  -  1005

    The first SSD has a wear of 6% while the second SSD has a wear of 30%!! It's like the second SSD in the array works at least 5 times as hard as the first one, as shown by the first iteration of iostat (the averages since reboot):

        Device:  rrqm/s  wrqm/s  r/s      w/s     rsec/s    wsec/s   avgrq-sz  avgqu-sz  await  svctm  %util
        sda      10.44   51.06   790.39   125.41  8803.98   1633.11  11.40     0.33      0.37   0.06   5.64
        sdb      9.53    58.35   322.37   118.11  4835.59   1633.11  14.69     0.33      0.76   0.29   12.97
        md1      0.00    0.00    1.88     1.33    15.07     10.68    8.00      0.00      0.00   0.00   0.00
        md2      0.00    0.00    1109.02  173.12  10881.59  1620.39  9.75      0.00      0.00   0.00   0.00
        md0      0.00    0.00    0.41     0.01    3.10      0.02     7.42      0.00      0.00   0.00   0.00

    What I've tried:

    - I've updated the firmware to DXM05B0Q (following reports of dramatic improvements for 840 Pros after this update).
    - I have looked for "hard resetting link" in dmesg to check for cable/backplane issues, but found nothing.
    - I have checked the alignment and I believe the drives are aligned correctly (1 MB boundary, listing below).
    - I have checked /proc/mdstat and the array is optimal (second listing below).

        root [~]# fdisk -ul /dev/sda

        Disk /dev/sda: 512.1 GB, 512110190592 bytes
        255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00026d59

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1            2048     4196351     2097152   fd  Linux raid autodetect
        Partition 1 does not end on cylinder boundary.
        /dev/sda2   *     4196352     4605951      204800   fd  Linux raid autodetect
        Partition 2 does not end on cylinder boundary.
        /dev/sda3         4605952   814106623   404750336   fd  Linux raid autodetect

        root [~]# fdisk -ul /dev/sdb

        Disk /dev/sdb: 512.1 GB, 512110190592 bytes
        255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0003dede

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1            2048     4196351     2097152   fd  Linux raid autodetect
        Partition 1 does not end on cylinder boundary.
        /dev/sdb2   *     4196352     4605951      204800   fd  Linux raid autodetect
        Partition 2 does not end on cylinder boundary.
        /dev/sdb3         4605952   814106623   404750336   fd  Linux raid autodetect

    /proc/mdstat:

        root # cat /proc/mdstat
        Personalities : [raid1]
        md0 : active raid1 sdb2[1] sda2[0]
              204736 blocks super 1.0 [2/2] [UU]

        md2 : active raid1 sdb3[1] sda3[0]
              404750144 blocks super 1.0 [2/2] [UU]

        md1 : active raid1 sdb1[1] sda1[0]
              2096064 blocks super 1.1 [2/2] [UU]

        unused devices: <none>

    Running a read test with hdparm:

        root [~]# hdparm -t /dev/sda
        /dev/sda:
         Timing buffered disk reads: 664 MB in 3.00 seconds = 221.33 MB/sec

        root [~]# hdparm -t /dev/sdb
        /dev/sdb:
         Timing buffered disk reads: 288 MB in 3.01 seconds = 95.77 MB/sec

    But look what happens if I add --direct:

        root [~]# hdparm --direct -t /dev/sda
        /dev/sda:
         Timing O_DIRECT disk reads: 788 MB in 3.01 seconds = 262.08 MB/sec

        root [~]# hdparm --direct -t /dev/sdb
        /dev/sdb:
         Timing O_DIRECT disk reads: 534 MB in 3.02 seconds = 176.90 MB/sec

    Both tests increase, but /dev/sdb doubles while /dev/sda increases maybe 20%. I just don't know what to make of this.

    As suggested by Mr. Wagner, I've done another read test, with dd this time, and it confirms the hdparm test:

        root [/home2]# dd if=/dev/sda of=/dev/null bs=1G count=10
        10+0 records in
        10+0 records out
        10737418240 bytes (11 GB) copied, 38.0855 s, 282 MB/s

        root [/home2]# dd if=/dev/sdb of=/dev/null bs=1G count=10
        10+0 records in
        10+0 records out
        10737418240 bytes (11 GB) copied, 115.24 s, 93.2 MB/s

    So sda is 3 times faster than sdb. Or maybe sdb is also doing something else besides what sda does. Is there some way to find out if sdb is doing more than sda?

    UPDATE: Again, as suggested by Mr. Wagner, I have swapped the 2 SSDs. And as he thought would happen, the problem moved from sdb to sda. So I guess I'll RMA one of the SSDs. I wonder if the cage might be problematic. What is wrong with this array? Please help!

  • Check if folders exist in Git repository... testing if a sub-string exists in bash with NULL as a separator

    - by Craig Francis
    I have a common git "post-receive" script for several projects, and it needs to perform different actions if an /app/ or /public/ folder exists in the root. Using:

        FOLDERS=`git ls-tree -d --name-only -z master`;

    I can see the directory listing, and I would like to use the RegExp support in bash to run something like:

        if [[ "$FOLDERS" =~ app ]]; then
            ...
        fi

    But that won't work if there was something like an "app lication" folder... I specified the "-z" option in the git "ls-tree" command so I could use the \0 (null) character as a separator, but I'm not sure how to test for that in the bash RegExp. Likewise, I know there is support for specifying a particular path in the ls-tree command, and I could then pipe that to "wc -l", but I'd have thought it was quicker to get a full directory listing of the root (not recursive) and then test for the 2 (or more) folders with the returned output.

    Possibly related: http://stackoverflow.com/questions/7938094/git-how-to-check-which-files-exist-and-their-content-in-a-shared-bare-repos
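
    For what it's worth, one way to keep the NUL separators meaningful is to hand them to GNU grep instead of bash's regex operator: grep's -z treats input as NUL-terminated records, and -x anchors the match to a whole record, so an "app lication" folder can no longer match "app":

        if git ls-tree -d --name-only -z master | grep -qzFx 'app'; then
            echo "app folder present"
        fi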

  • Creating a bootable flash without overlayfs

    - by Septagram
    I want to create a USB stick to carry my Ubuntu around with me everywhere. It's not intended to spread Ubuntu by installing it everywhere, but rather for running my configured system on any computer I come across. So far I went with installing Ubuntu with unetbootin, but I have some issues with this. When installed with unetbootin, the original disk image is kept intact on the flash drive, forever. Also, a file is created for persistent storage, and during boot it is accessed together with the image by overlayfs. This, in my opinion, has the following problems:

    - If the system is updated regularly, files from the image are overwritten in persistent storage, doubling their size and wasting precious space.
    - Persistent storage has a fixed size that you have to define from the start, again wasting precious space.
    - I'm not 100% sure, but maybe using overlayfs makes disk access slower, and more so on relatively slow devices.

    So I'd like to find another solution: either get rid of the original image, install Ubuntu "normally" on a separate ext2 partition, or maybe even install it in the main vfat partition on the USB stick. Suggestions?

  • Installing "SoX" via the Terminal

    - by timkl
    I'm new to installing applications via the Terminal, so excuse my absolute ignorance on the subject. I want to install SoX ( http://sox.sourceforge.net/ ) so I can do some ninja audio editing. First I installed git, then I installed SoX. I didn't get any error messages, and the installation has created a sox folder in my Users/myName folder. However, when I try to use the program by typing "sox" in the Terminal, nothing happens; all I get is "command not found". Does anybody know how to troubleshoot this?
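
    A likely gap: cloning the source only downloads it; nothing has been built or installed yet, so there is no sox binary on the PATH. A sketch of the standard autotools steps from the cloned folder (assumes a compiler and the autotools are installed; autoreconf is only needed for a git checkout, not a release tarball):

        cd sox
        autoreconf -i
        ./configure
        make
        sudo make install     # puts sox in /usr/local/bin
        hash -r               # clear the shell's command lookup cache
        sox --version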

  • A Quarter Century of SPARC

    - by kemer
    You might have missed an interesting milestone: the 25th anniversary of SPARC. Twenty-five years! Almost 40% of my life: humbling, maybe a little scary. When I joined Sun Microsystems in 1988, SPARC was just starting to shake things up. The next year we introduced the SPARCstation 1, which had basically triple the performance of our Motorola-based Sun-3 systems. Not too long after that, our competition began a campaign of "SPARC is dead." We really distressed them with our success, in spite of our small size. "It won't last." "It can't last!" So they told themselves. For a stroll down memory lane, take a look at this page. I remember the sales meeting we had in Atlanta to internally announce the SPARCstation 1. Sun hadn't really hit the big time yet. Our much bigger competitors viewed us as an ill-mannered pest, certain of our demise. And why wouldn't they be certain: other startups more our size, such as Apollo (remember them?), Silicon Graphics (they fought the good fight!), and the incredibly cool Symbolics are memories. Wait! There was also a BIG company, DEC, who scoffed at us: they are history, too. In fact, we really upset them with what was supposed to be an internal-only video production that was a take-off on Bruce Lee movies, in which we battled the evil Doctor DEC, complete with computer mice (or is that "mouses"?) wielded like nunchucks, with the new SPARCstation 1 somehow in the middle of everything. The memory is vivid, but the details hazy. After all, that was almost a quarter century ago. So, here's to Oracle's SPARC: still going strong after all these years. - Kemer

  • script Disk Management configuration

    - by Joseph
    I have 10 workstations with large monitors that have USB slots and several built-in card readers. The card readers cannot be disabled, and they map to drive letters when I image the computers. I go into Disk Management, delete the drive mappings, and add mappings to a single folder in C:\, with a subfolder for each slot. I have to do this because of scripts that expect specific drive letters to be mapped to network resources. Is there a way to script the deleting and adding of drive mappings instead of having to use the Disk Management GUI manually on each workstation? The workstations are running XP Professional.
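
    On XP, those Disk Management clicks have scriptable equivalents in diskpart. A sketch with placeholder volume numbers and folders (run "list volume" once per machine model to find the card-reader volumes; the mount folders must already exist on an NTFS drive):

        rem readers.txt -- run as: diskpart /s readers.txt
        select volume 3
        remove letter=E
        assign mount=C:\CardReaders\Slot1
        select volume 4
        remove letter=F
        assign mount=C:\CardReaders\Slot2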
