Search Results

Search found 2390 results on 96 pages for 'morten green hermansen'.

  • Internal/External Moodle - DNS

    - by Chief17
    Network diagram: I have a Moodle (a VLE) setup that I want to be internally and externally accessible. The green route on the diagram below is the route I would like the traffic to take when the user is inside the LAN; the red route is the one it seemingly does take. The website has a domain name (like most websites do).

    From the user PC, if I ping the domain name I get the internal IP of the webserver (because of a hosts file entry); if I nslookup the domain name I also get the internal IP (because of an A record on my DNS server). Running the same two commands on the webserver gives me the webserver's external IP. If I use PHP's gethostbyname() on the Moodle website with the domain name as the parameter (getting PHP/Apache to resolve the hostname), it returns the external IP of the webserver, so DNS seems to be doing what I want it to.

    The one thing that is confusing me, and preventing the Moodle single sign-on from working, is that if I get Moodle to show my IP address, it says it is an external one (outside my NATting firewall) when it should show an internal IP. That is the issue. Any ideas on how to go about resolving it, or tests I can perform? (I have also tried a tracert, and the request goes directly to the webserver.) Thanks all!
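
    A minimal sketch for comparing the three resolution paths side by side (moodle.example.com is a stand-in for the real domain name, which isn't given above):

        # Run on the user PC and again on the webserver, then compare:
        ping -c 1 moodle.example.com       # honours the local hosts file first (-n 1 on Windows)
        nslookup moodle.example.com        # queries the configured DNS server
        php -r 'echo gethostbyname("moodle.example.com"), "\n";'   # what PHP/Apache resolves

    If all three agree on each host but Moodle still reports an external client IP, one possibility worth ruling out is hairpin NAT: if LAN traffic reaches the site through the firewall's external address, the firewall may rewrite the client's source address, so the webserver genuinely sees an external peer.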

  • Network problems with Ubuntu on VMware

    - by svick
    I'm running Ubuntu 10.04 inside VMware Player running under Windows Vista, and I can't connect to the internet or to the host computer from Ubuntu. I have set all the VMware services to “manual” (like VMware DHCP Service), but starting them manually doesn't help. In VMware the network seems to work (there is a green dot beside the network icon), and I have tried both Bridged and NAT settings. ifconfig doesn't show the eth1 interface unless I give it as a parameter (or use -a); I think this means that Ubuntu thinks the network isn't connected at all. How do I fix this?

        vadmin@ubu1004:~$ ifconfig
        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:56 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:56 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:4192 (4.1 KB)  TX bytes:4192 (4.1 KB)

        vadmin@ubu1004:~$ ifconfig eth1
        eth1      Link encap:Ethernet  HWaddr 00:0c:29:2d:a0:6f
                  BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:18 Base address:0x2000
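
    The missing UP flag on eth1 suggests the interface is simply down rather than absent. A hedged sketch of what to try inside the guest (assuming eth1 really is the VMware NIC, as the ifconfig output above indicates):

        sudo ifconfig eth1 up      # bring the interface up
        sudo dhclient eth1         # request a lease from VMware's DHCP

        # To make it persistent across reboots, add to /etc/network/interfaces:
        #   auto eth1
        #   iface eth1 inet dhcp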

  • Installing Joomla on Windows Server 2008 with IIS 7.0

    - by Greg Zwaagstra
    Hi, I have been spending the past while trying to install Joomla on a server running Windows Server 2008. I have successfully installed PHP (using Microsoft's web tool for installing PHP with IIS) and MySQL, and am now trying to run the browser-based installation. Everything comes up green, I fill in the appropriate information regarding the site name, MySQL information, etc., and no errors are thrown. However, when I get to the step that asks me to remove the installation directory, I am unable to do so, as Windows states it is in use by another program (I cannot fathom how this is true). Also, no configuration.php file is created, so if I were to manage to delete this folder, I have a feeling there would be problems. I was thinking there was some kind of a permissions issue and have set the permissions for IIS_IUSRS to have read, write, and execute permissions for the entire folder that Joomla resides in, but this has not helped. Any help in this matter is greatly appreciated. ;) Greg

    EDIT: I decided to try and manually install Joomla by manually editing the configuration.php file. This has worked great, and now I am certain there is some kind of a permissions issue going on, because I am able to do everything that involves the MySQL database (create an article, edit menu items, etc.), but anything that involves making changes to the Joomla installation's directory does not work (install plugins, edit configuration settings using the Global Configuration menu within Joomla, etc.). I have granted IIS_IUSRS every permission except Full Control (reading on the Joomla! forums shows that this should be enough for everything to work). This is confusing to me and I am quite stuck on this problem.

    EDIT 2: The bizarre thing is that in the System Info under Directory Permissions, everything turns up as Writable, but then whenever I try to actually use Joomla to, for example, edit the configuration.php file using the interface, it says it is unable to edit the file.
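
    Since Joomla can reach MySQL but cannot write inside its own directory tree, one hedged next step is setting the effective NTFS ACL from an elevated prompt rather than through the checkboxes (the path is an assumption; substitute the real Joomla root):

        icacls "C:\inetpub\wwwroot\joomla" /grant "BUILTIN\IIS_IUSRS:(OI)(CI)M" /T

    (OI)(CI)M grants Modify recursively to both files and subfolders. Note also that with PHP running via FastCGI, the effective identity can be the application pool identity or IUSR rather than IIS_IUSRS, which would explain permissions that look sufficient but never seem to apply.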

  • FreeBSD ZFS RAID-Z2 performance issues

    - by Axel Gneiting
    I'm trying to build my own network attached storage based on FreeBSD + ZFS + standard components, but there are strange performance issues. The hardware specs are:

    - AMD Athlon II X2 240e processor
    - ASUS M4A78LT-M LE mainboard
    - 2GiB Kingston ECC DDR3 (two sticks)
    - Intel Pro/1000 CT PCIe network adapter
    - 5x Western Digital Caviar Green 1.5TB

    I created a RAID-Z2 zpool from all disks and installed FreeBSD 8.1 on that zpool following the tutorial. The SATA controllers are running in AHCI mode. Output of zpool status:

          pool: zroot
         state: ONLINE
         scrub: none requested
        config:

        NAME                                            STATE   READ WRITE CKSUM
        zroot                                           ONLINE     0     0     0
          raidz2                                        ONLINE     0     0     0
            gptid/7ef815fc-eab6-11df-8ea4-001b2163266d  ONLINE     0     0     0
            gptid/80344432-eab6-11df-8ea4-001b2163266d  ONLINE     0     0     0
            gptid/81741ad9-eab6-11df-8ea4-001b2163266d  ONLINE     0     0     0
            gptid/824af5cb-eab6-11df-8ea4-001b2163266d  ONLINE     0     0     0
            gptid/82f98a65-eab6-11df-8ea4-001b2163266d  ONLINE     0     0     0

    The problem is that write performance on the pool is very, very bad (<10 MB/s), and every application that is accessing the disk becomes unresponsive every few seconds while writing. It seems like writing is fine until the ZFS ARC cache is full, and then ZFS stalls the entire system I/O until it's finished writing that data. Also, I'm getting "kmem_malloc: kmem_map too small" kernel panics. I've already tried to put

        vm.kmem_size="1500M"
        vm.kmem_size_max="1500M"

    into /boot/loader.conf, but it doesn't help. Does anyone know what's going on here? Am I really not having enough memory for ZFS to handle this RAID-Z2?
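
    On a 2 GiB box, "kmem_map too small" panics usually mean the ARC outgrew the kernel map. A hedged tuning sketch for /boot/loader.conf (the values are starting points to adjust, not known-good figures for this exact board):

        vm.kmem_size="1536M"
        vm.kmem_size_max="1536M"
        vfs.zfs.arc_max="512M"          # keep the ARC well below kmem_size
        vfs.zfs.prefetch_disable="1"    # prefetch often hurts with this little RAM

    Separately, the Caviar Green drives in this build are 4K-sector (Advanced Format) disks; a RAID-Z pool created with 512-byte alignment on them is a well-known cause of single-digit MB/s writes, so checking the partition alignment is worth the time.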

  • Which HDD brand do you… trust?

    - by Shiki
    Okay, it says it's 'subjective', but I believe it's not. Basically I want to ask the community about your preference. Not really 'preference', but actual experience. For example, if you never had a problem with Western Digital, then write that in an answer, or if there is already one for WD, just vote it up. And so on.

    I've heard so many stories and experiences. I have only had Samsung, Maxtor, WD, and Seagate HDDs. The Samsung died with bad blocks and had anomalies. The Maxtor died so fast I couldn't even really try it, and it ran really hot and loud. The Seagate is just as loud as a jet plane and moderately hot. My WD (Green) is quiet, really cool and somewhat fast. That's all I have in the way of experience, so I would say Western Digital in an answer (or Hitachi: never had one yet, but every expert I know says I should get one, since they have even had problems with WD while Hitachi seems to be OK. My laptop came with a Hitachi HDD, but I don't think that's really relevant.)

    Basically I mean desktop 7200 RPM HDDs here. Notebook HDDs are OK too, but no Raptor/SCSI/server ones. Hope you get what I meant and it won't get closed.

  • Google Apps mail hosting and Windows Server 2008 DNS configuration

    - by fyasar
    Hi there. I used shared hosting before; I created a Google Apps account and configured the DNS records in the shared host's control panel according to Google's DNS documentation. Everything was working until I switched to a dedicated server. First I added the DNS role to my new server and configured all the DNS and NS records. I have a mail address at mail.domain.com that redirected to the Google Apps email before. Right now it's not possible to reach the mail.domain.com address from most places, though I can still reach it from a few computers on a different network. I'm stuck on this problem. Please see the screenshots below: I checked on DNSStuff.com and everything is green. I also checked on InteliWiz and it seems correct. And here are my DNS records. I used to be able to reach my mail.domain.com address from everywhere; now I haven't been able to reach the mail from many places for three weeks. Where is my mistake? Any help would be appreciated.
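
    For Google Apps, the documented pattern was a CNAME pointing the mail host at ghs.google.com. A hedged sketch for the Windows Server 2008 DNS role (zone and host names are assumptions standing in for the redacted domain):

        dnscmd /RecordAdd domain.com mail CNAME ghs.google.com.

    If the record already matches this, the next suspects are stale NS delegation at the registrar, or old records still cached at the resolvers in the locations that fail, which would explain why only some networks can reach the name.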

  • Samba4/Ubuntu Shares Incorrectly Available to All Users

    - by Dan
    I've got my Ubuntu server working with Samba4 and set up as the primary domain controller on my network, with AD and all that goodness. However, I'm trying to get my Samba configuration to work with the users and groups I've defined with the Active Directory tools from Windows. For instance, I've got a share X which I want users A and B (as part of the 'management' group, known as LLGrpManager in my setup) to see, but nobody else. However, after making changes to the configuration and restarting Samba, I test by connecting to the share with my Mac over Samba as user 'C', which isn't part of the management group, and I can, incorrectly, see the X share. I've tried all sorts of combinations of specifying the group with no luck at all. I've got a feeling that my global config might be too lenient, or that it's something to do with file permissions, but being a bit green, I'm without a clue.

    My /etc/samba/smb.conf:

        # Global parameters
        [global]
            server role = domain controller
            server string = Office Server
            workgroup = LLDOMAIN
            realm = lldomain.local
            netbios name = DUMBO
            passdb backend = samba4
            logon path = \\%L\profiles\%U
            logon drive = L:
            log file = /var/log/samba/%m.log
            max log size = 50
            security = ads
            domain logons = yes
            domain master = auto
            usershare allow guests = no
            valid users = %S

        [netlogon]
            path = /var/lib/samba/sysvol/lldomain.local/scripts
            read only = no
            guest ok = no

        [sysvol]
            path = /var/lib/samba/sysvol
            read only = No
            guest ok = no
            valid users = @LLDOMAIN\LLGrpManager

        [ShareX]
            path = /data
            comment = Entire Data Volume
            guest ok = no
            valid users = @LLDOMAIN\LLGrpManager
            admin users = @LLDOMAIN\LLGrpManager
            browsable = no
            inherit acls = yes
            inherit permissions = yes
        ...

    My /etc/nsswitch.conf: I've also instructed the system to use the nss winbind library when searching for users or groups, by adding the passwd and group stanzas in /etc/nsswitch.conf:

        passwd: compat winbind
        group:  compat winbind
        shadow: compat

    Permissions on the folder in question:

        drwxrwxrwt 8 root root 4.0K Oct 28 19:11 data
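
    One hedged observation: /data is mode drwxrwxrwt, i.e. world-writable with the sticky bit, so even if Samba's valid users line were being honoured, the filesystem itself is wide open. A sketch of tightening it to the management group (assumes winbind exposes the AD group under this name; confirm with getent first):

        getent group LLGrpManager          # confirm the group resolves via winbind
        sudo chgrp LLGrpManager /data
        sudo chmod 2770 /data              # rwx for owner and group, setgid, no access for others

    Also worth checking: the global valid users = %S applies to every share that doesn't override it (for a share named ShareX it would mean a user literally named "ShareX"), which can interact oddly with the per-share valid users lines.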

  • apache: can't renew ssl certificate

    - by Caballero
    I have a GoDaddy SSL certificate for one website on my dedicated server running CentOS 5.3 / Apache 2.2.3. I renewed the certificate at GoDaddy recently, yet now it shows as expired on my website. I've since re-keyed the certificate and re-uploaded the domain.key, domain.crt and bundle.crt files (example file names) to the server, and restarted Apache, but the certificate still shows as expired. I'm running out of clues. I've tried replacing the content of the .crt files with gibberish and restarting Apache; it still shows the certificate as expired, even though the file shouldn't be picked up at all. I eventually rebooted the dedicated server, still no luck. I'm using the free SSL check tool http://www.digicert.com/help/ which clearly shows all green checks except one: certificate is expired. Has anyone any idea what might be causing this? Could there be some kind of caching going on here?

    UPDATE: after running

        openssl x509 -in domain.crt -noout -enddate

    I'm getting this output:

        notAfter=Jun 2 08:16:51 2013 GMT

    So I assume this means I have the right certificate on the server, and yet the old expired one shows on the web...
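
    The gibberish test is telling: if mangling domain.crt doesn't even break SSL, Apache is almost certainly loading its certificate from a different file (or a different vhost) than the one being edited. A hedged pair of checks (hostname and paths are assumptions):

        # What is Apache actually serving on 443?
        openssl s_client -connect www.domain.com:443 < /dev/null 2>/dev/null | openssl x509 -noout -enddate

        # Which certificate files does the running config reference?
        grep -ri 'SSLCertificateFile' /etc/httpd/

    If the first command's notAfter differs from the one above, the path printed by the second command is the file that needs the renewed certificate.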

  • Triple monitor setting in Linux with USB-HDMI adapter

    - by Oscar Carballal
    I'm trying to set up a triple monitor desktop at my office using Fedora 17, but it seems impossible. Let me explain the setting:

    - Laptop ASUS K53SD with 2 graphic cards, Intel and nVidia (screen controlled by the Intel card)
    - 24" Full HD monitor connected to the HDMI output (controlled by the Intel card)
    - 23" Full HD monitor connected to an USB-HDMI adapter (via framebuffer in /dev/fb2, apparently)
    - VGA output (not used) controlled by the nVidia card

    First of all, the USB-HDMI adapter works perfectly; it gives me a green screen (which means the communication is OK), and I can make it work if I set up a single monitor setting via framebuffer in Xorg. Here is the page where I got the instructions: http://plugable.com/2011/12/23/usb-graphics-and-linux

    Now I'm trying to set up the two main monitors (laptop and 24") with the intel driver and the 23" with the framebuffer, but the most successful configuration I get is the two main monitors working and the third disconnected. Do you have any idea what I can do to make this work? Here I leave my xrandr output and my Xorg conf:

        -> xrandr
        Screen 0: minimum 320 x 200, current 3286 x 1080, maximum 8192 x 8192
        LVDS1 connected 1366x768+0+0 (normal left inverted right x axis y axis) 344mm x 193mm
           1366x768       60.0*+
           1024x768       60.0
           800x600        60.3     56.2
           640x480        59.9
        VGA2 disconnected (normal left inverted right x axis y axis)
        HDMI1 connected 1920x1080+1366+0 (normal left inverted right x axis y axis) 531mm x 299mm
           1920x1080      60.0*+   50.0     25.0     30.0
           1680x1050      59.9
           1680x945       60.0
           1400x1050      74.9     59.9
           1600x900       60.0
           1280x1024      75.0     60.0
           1440x900       75.0     59.9
           1280x960       60.0
           1366x768       60.0
           1360x768       60.0
           1280x800       74.9     59.9
           1152x864       75.0
           1280x768       74.9     60.0
           1280x720       50.0     60.0
           1440x576       25.0
           1024x768       75.1     70.1     60.0
           1440x480       30.0
           1024x576       60.0
           832x624        74.6
           800x600        72.2     75.0     60.3     56.2
           720x576        50.0
           848x480        60.0
           720x480        59.9
           640x480        72.8     75.0     66.7     60.0     59.9
           720x400        70.1
        DP1 disconnected (normal left inverted right x axis y axis)
           1920x1080_60.00   60.0

    The Xorg file:

        # Xorg configuration file for using a tri-head display
        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "HDMI" 0 0
            Screen 1 "USB" RightOf "HDMI"
            Option "Xinerama" "on"
        EndSection

        ########### MONITORS ################
        Section "Monitor"
            Identifier "USB1"
            VendorName "Unknown"
            ModelName "Acer 24as"
            Option "DPMS"
        EndSection

        Section "Monitor"
            Identifier "HDMI1"
            VendorName "Unknown"
            ModelName "Acer 23SH"
            Option "DPMS"
        EndSection

        ########### DEVICES ##################
        Section "Device"
            Identifier "Device 0"
            Driver "intel"
            BoardName "GeForce"
            BusID "PCI:0:02:0"
            Screen 0
        EndSection

        Section "Device"
            Identifier "USB Device 0"
            driver "fbdev"
            Option "fbdev" "/dev/fb2"
            Option "ShadowFB" "off"
        EndSection

        ############## SCREENS ######################
        Section "Screen"
            Identifier "HDMI"
            Device "Device 0"
            Monitor "HDMI1"
            DefaultDepth 24
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "USB"
            Device "USB Device 0"
            Monitor "USB1"
            DefaultDepth 24
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

  • Replacing Buffalo LinkStations with FreeNAS, overall backup strategy, am I on the right path?

    - by Shreko
    We've been using 2 Buffalo LinkStations of 320GB each for a shared directory and employees' file storage (around 20 employees): only documents (Word, Excel, CAD drawings, etc.) and database backups of the main application server (ERP, accounting). One Buffalo box serves as the main one, located in the server room next to the main application server; the other Buffalo box is located on the opposite side of the building (for fire protection) in a secure storage room and backs up the first one. We also have several external HDs that back up everything from the Buffalo box for an offsite backup.

    After 3.5 years of using these, capacity is the main limitation. I'm planning a replacement and would like to use FreeNAS (we already use m0n0wall with great success). I would like to keep it simple and continue a similar setup, building two low-power boxes with one 2TB HD each. Is a low-power Atom mobo OK? Not sure about the HDs: I've read somebody on this site mentioning the Seagate ES.2 as more reliable and better performing. How would those eco/green drives compare? We've been pretty happy with the speed of the Buffalo boxes, and I don't want my users to notice any slowdown. Any suggestions?

  • Computer spontaneously reboots when doing heavy file copy to/from disk

    - by Mark Hosang
    I've been fighting with this problem for the last 3 weeks, where my machine will just instantly reboot. No BSOD, and when I checked the event log, all that was reported was the generic "Kernel-Power" error, with the detailed information pointing to a hard crash.

    This is a machine that was working for 18 months before these crashes started happening. They started after I added 3 HDs in a RAID-5, upped the memory to 12GB, moved to a new house, added an SSD and added about 5 case fans. I have since eliminated the RAID, and determined that the SSD was not the cause (because it was still crashing even though the SSD wasn't connected). I've run memtest several times overnight with no memory problems showing up. I've run IntelBurnTest to max out the CPU to see if it was a heat issue; at full tilt after 20 min it was only at 85C, and the machine didn't crash. I also took a look at the voltages during this test, with a screenshot at the bottom of this post. I've ruled out a software issue by reinstalling Windows 7 Ultimate x64 a total of 5 times, but it even crashes during the install: sometimes during file copying at the beginning, or while uncompressing files, or sometimes while running Windows Update.

    The only discernible pattern I can see is that it seems to crash when hard disks might be spinning up, or when they are accessed heavily during large file transfers. My current guess is that it is an issue with the MB, PSU, or the power coming through the outlet. Any suggestions of what I could try to troubleshoot, or what may be wrong?

    Specs:
    - PSU: Seasonic M12 700W
    - Mem: 12GB
    - CPU: i7-920 with stock heatsink
    - MB: Asus P6T
    - HDs: 3 green WD and 1 Corsair Force 3 120GB with 1.3.3 firmware

    (Screenshots: running full-tilt voltages; idling voltages.)

  • Western Digital My Book not recognized by WD software

    - by Kari
    A few years ago I bought a WD My Book Pro 2. It worked fine for a while, then one of the drives failed and I sent it back to be replaced under warranty. I never got around to setting up the new one when I got it back. I finally ran out of room on my internal drive, so I tried to use the external: no go. Both drives spin up, but aren't recognized by either Disk Utility (Mac) or the WD Drive Manager. I tried on a PC as well, with freshly installed software. Then I pulled the drives out of the enclosure (the warranty is already expired) and plugged them straight into the PC. Both were recognized and worked 100% in RAID 0. The BIOS recognizes either disk as functional; Windows only sees them when both are connected, due to the RAID, which I can't change without the WD software. The drives that were returned to me are the "Green" drives, which I've read are NOT recommended for RAID. Is it possible that this is interfering with them being read externally? Any other ideas? My main computer is a laptop, so using them internally isn't an option :(

  • apache on CentOS opening default page on https

    - by Asghar
    I am new to Apache, SSL and their configuration. I got a VeriSign certificate to secure my site; I have the public, private and ca_intermediate cert files. I have configured ssl.conf as below:

        <VirtualHost _default_:443>
            DocumentRoot /var/www/mydomain.com/web/
            ServerName mydomain.com:443
            ServerAlias www.mydomain.com

            # Use separate log files for the SSL virtual host; note that LogLevel
            # is not inherited from httpd.conf.
            ErrorLog logs/ssl_error_log
            TransferLog logs/ssl_access_log
            LogLevel warn

            # SSL Engine Switch:
            # Enable/Disable SSL for this virtual host.
            SSLEngine on

    The problem is that when I access www.mydomain.com over HTTP it works fine, but when I access it over HTTPS it just opens the Apache default page, although the HTTPS indicator is green, which means my certificates are installed correctly. How can I get rid of this situation? Thanks.

    EDIT: Output of apachectl -S:

        -bash-3.2# apachectl -S
        [Mon Aug 27 10:20:19 2012] [warn] NameVirtualHost 82.56.29.189:80 has no VirtualHosts
        [Mon Aug 27 10:20:19 2012] [warn] NameVirtualHost 82.56.29.189:443 has no VirtualHosts
        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        _default_:8081  localhost.localdomain (/etc/httpd/conf/sites-enabled/000-apps.vhost:10)
        *:8080          is a NameVirtualHost
                        default server localhost.localdomain (/etc/httpd/conf/sites-enabled/000-ispconfig.vhost:10)
                        port 8080 namevhost localhost.localdomain (/etc/httpd/conf/sites-enabled/000-ispconfig.vhost:10)
        *:443           is a NameVirtualHost
                        default server mydomain.com (/etc/httpd/conf.d/ssl.conf:81)
                        port 443 namevhost mydomain.com (/etc/httpd/conf.d/ssl.conf:81)
        *:80            is a NameVirtualHost
                        default server app.mydomain.com (/etc/httpd/conf/sites-enabled/100-app.mydomain.com.vhost:7)
                        port 80 namevhost app.mydomain.com (/etc/httpd/conf/sites-enabled/100-app.mydomain.com.vhost:7)
                        port 80 namevhost mydomain.com (/etc/httpd/conf/sites-enabled/100-mydomain.com.vhost:7)
        Syntax OK
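
    A hedged reading of that apachectl -S output: the working HTTP site comes from /etc/httpd/conf/sites-enabled/100-mydomain.com.vhost (an ISPConfig-managed vhost), while port 443 is answered by the separate _default_ host in ssl.conf, so any difference between the two DocumentRoots only shows over HTTPS. A sketch of aligning the SSL vhost with the working one (certificate paths are assumptions):

        <VirtualHost *:443>
            ServerName mydomain.com
            ServerAlias www.mydomain.com
            # Same DocumentRoot as the working port-80 vhost
            DocumentRoot /var/www/mydomain.com/web/
            SSLEngine on
            SSLCertificateFile      /etc/pki/tls/certs/mydomain.com.crt
            SSLCertificateKeyFile   /etc/pki/tls/private/mydomain.com.key
            SSLCertificateChainFile /etc/pki/tls/certs/ca_intermediate.crt
        </VirtualHost>

    Using *:443 here matches the NameVirtualHost *:443 declaration the server already reports, which avoids mixing _default_ and name-based matching on the same port.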

  • Recovering a mdadm+lvm+ext4 partition with read error

    - by bitwelder
    One of the disks in my NAS has failed. The NAS is running Linux, and it uses mdadm + LVM for its filesystems. I do have a backup for most of the contents, but not for the very last changes, and if possible I'd like to recover that from this failing disk. The disk (a 'green drive' WD10EARS, 1TB in size) throws this kind of error:

        Oct 3 12:00:41 kernel: [ 3625.620000] ata5.00: read unc at 9453282
        Oct 3 12:00:41 kernel: [ 3625.620000] lba 9453282 start 9453280 end 1953511007
        Oct 3 12:00:41 kernel: [ 3625.620000] sde5 auto_remap 0
        Oct 3 12:00:41 kernel: [ 3625.630000] ata5.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x6
        Oct 3 12:00:41 kernel: [ 3625.630000] ata5.00: edma_err_cause=00000084 pp_flags=00000003, dev error, EDMA self-disable
        Oct 3 12:00:41 kernel: [ 3625.640000] ata5.00: failed command: READ FPDMA QUEUED
        Oct 3 12:00:41 kernel: [ 3625.650000] ata5.00: cmd 60/40:00:e0:3e:90/00:00:00:00:00/40 tag 0 ncq 32768 in
        Oct 3 12:00:41 kernel: [ 3625.650000]          res 41/40:00:e2:3e:90/12:00:00:00:00/40 Emask 0x409 (media error) <F>
        Oct 3 12:00:41 kernel: [ 3625.660000] ata5.00: status: { DRDY ERR }

    However, while testing with dd, I noticed that if I skip the first 4kB, the read seems to be OK, i.e. a command like

        dd if=/dev/sde5 of=/dev/null bs=4k count=1000 skip=1

    doesn't return any read error. Supposing that there is no other read failure in the rest of the disk, would I be able to recover this 900 GB partition (as I mentioned before, it's a 'linux raid autodetect' partition that contains an LVM2 volume that contains an ext4 filesystem) if I copy-clone the partition somewhere else, all but the first 4kB?
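
    For exactly this clone-around-the-bad-spot job, GNU ddrescue is the usual tool: it copies everything readable first and keeps a map of what failed, so nothing is retried blindly. A minimal sketch (the target device and map file are assumptions; the target must be at least as large as the source partition):

        ddrescue -f /dev/sde5 /dev/sdf5 /root/sde5.map
        ddrescue -f -r3 /dev/sde5 /dev/sdf5 /root/sde5.map   # optional retry passes on the bad area

    The clone can then be assembled read-only (mdadm, then LVM, then ext4) without touching the failing drive again; losing the first 4 kB matters only if something essential (e.g. the md superblock, depending on its version and placement) lives there.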

  • Suddenly can't send E-Mails with Apple Mail to Gmail SMTP

    - by slhck
    Hi all, I have a weird problem that started just today. I am using Apple Mail on a Leopard machine, connecting to Gmail. Fetching e-mail works just fine. My SMTP settings are also correct. Still, I can't send mail; it displays a pop-up saying that "transferring the content to the mail server" failed (translation from German, could be different in English OS X versions). I have verified the following:

    - My SMTP settings are definitely correct. I have not changed them and the issue appeared today. Also, I went through the Apple online configuration for Gmail accounts and did not have to adjust any setting.
    - I can run network diagnosis and it will connect to both POP and SMTP servers without a problem (all green lights).
    - The Telnet details will show me the HELO message from the Gmail servers, so there's no authentication failure.
    - Console.app will not show any messages related to "mail" when I try to send the mail, so there's no specific error message.
    - The mail I'm trying to send does not have an attachment; it is plaintext only.
    - I can log in to gmail.com and send mails without a problem.
    - The recipient address exists and contains no syntax errors.
    - I can also not send mails to myself.
    - When using another IP and ISP (through VPN), it still doesn't work.

    As for my settings: I connect to smtp.gmail.com, and for advanced settings I choose password-based authentication with user [email protected] and my password. I let Apple Mail try the default ports (for SSL and TLS, respectively). Again: I have not changed a thing between yesterday and today. What is causing this strange behavior? Any help would be much appreciated.
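
    One more test that separates the transport from the client (a sketch; which port applies depends on whether Mail is set to SSL or STARTTLS):

        # Implicit SSL submission:
        openssl s_client -connect smtp.gmail.com:465 -quiet
        # or STARTTLS submission:
        openssl s_client -starttls smtp -connect smtp.gmail.com:587

    A greeting ending in "220 ... ESMTP" means the path to Gmail's submission service itself is fine, which would point suspicion back at Mail's stored account state rather than the network.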

  • Dell PowerEdge 860 won't boot but gets power

    - by fierflash
    I have a Dell PowerEdge 860 with Windows 2008 R2 running on it. Here is the scenario I'm currently in: I can boot it up and work as usual, but when I restart it, it just shuts down (it doesn't reboot) and it won't boot up again. It then stays in this "mode": the on/off button blinking green (standby mode?) and the system indicator LED blinking amber. If I press the on/off button on the front panel, it won't boot; you can hear the fan in the PSU running, but nothing else tries to boot. If I unplug the power cable and then put it back in, it's the same: the extremely noisy fans won't start, and it just sits there. If I let it "rest" for 1 to 24 hours with the power cable unplugged and then plug it in again, it boots directly, without me even having to push the on/off button.

    What I have tried:

    - Run diagnostics (not the one in the POST phase); everything is fine
    - Booting without any cables except the power cable
    - Booting without memory installed
    - Booting without the RAID cable plugged in
    - Cleared the NVRAM
    - Reseated all the components and cables
    - Googling like a freak for any possible solution
    - Dell Support, but I don't have any warranty left, so they can't help me unless I pay some ridiculous amount of money, I suppose

    Does anyone have experience with the same issue? Do I need to replace something, or is there any other way to determine the problem? Any help will be greatly appreciated.

  • How to get data out of a Maxtor Shared Storage II that fails to boot?

    - by Jonik
    I've got a Maxtor Shared Storage II (RAID1 mode) which has developed some hardware failure, apparently: it fails to boot properly and is unreachable via network. When powering it on, it keeps making clunking/chirping disk noises and then sort of resets itself (with a flash of orange light in the usually-green LEDs); it then repeats this as if stuck in a loop. In fact, even the power button does nothing now: the only way I can affect the device at all is to plug in or pull out the power cord!

    (To be clear, I've come to regard this piece of garbage (which cost about 460 €) as my worst tech purchase ever. Even before this failure I had encountered many annoyances with the drive: 1) the software to manage it is rather crappy; 2) it is way noisier than what this type of device should be; 3) when your Mac comes out of sleep, Maxtor's "EasyManage" cannot re-mount the drive automatically.)

    Anyway, the question at hand is how to get my data out of it. As a very concrete first step, is there a way to open this thing without breaking the plastic casing into pieces? It is far from obvious to me how to get beyond this stage; it opens a little from one end but not from the other. If I somehow got the disks out, I could try mounting the disk(s) on one of the Macs or Linux boxes I have available (although I don't know yet if I'd need some adapters for that). (NB: for the purposes of this question, never mind any warranty or replacement issues; that's secondary to recovering the data.)

  • How can I remotely display images on my computer?

    - by Jakob
    What I Have: A laptop booted with Ubuntu and a stationary computer dual-booted with Ubuntu and Vista, both connected through a wireless ad-hoc network.

    What I Want: A way to display images in fullscreen on my stationary computer, using my laptop as a "remote control". I want to be able to choose another picture at any time and have my stationary computer remain in fullscreen mode at all times. Preferably, I should also be able to display just an empty (black) screen. How can I arrange for this?

    What I Have Tried: I have tried simply SSHing into my stationary computer and opening the image files using an image viewer, but all of the ones that I have tried (Eye of GNOME, Mirage, Gwenview, and others) open a new window for every new image, and I don't know how to force them into using a single instance. I have tried using the VLC remote control command-line interface, but apart from seeming somewhat unreliable (exiting with segmentation faults at one point), it also displays some images with a green border and forces me to pause playback in order for the image to remain on screen.

    Bonus Question: In my final setup, I also need to play music through my stationary computer's speakers and have the ability to switch to another track at any point, like with the images. Preferably, I would like to control the images and the audio through the same interface.
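
    A hedged sketch of the single-instance approach (assumes the feh image viewer is installed on the stationary machine and an X session is running there on display :0):

        # On the stationary computer: one persistent fullscreen viewer that
        # re-reads the same file every second
        DISPLAY=:0 feh --reload 1 --fullscreen --hide-pointer ~/show/current.jpg

        # From the laptop: change the picture by overwriting that file
        scp next.jpg user@stationary:~/show/current.jpg

    Copying an all-black image gives the empty-screen state. For the bonus question, a player with a remote-control interface such as mpd driven by an mpc client would fit the same push-from-the-laptop model, though that pairing is a suggestion rather than something from the original post.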

  • How to get data from a borked Windows Home Server

    - by harhoo
    Yesterday we had a power surge, followed by a power outage. This left my WHS borked: powering it on just gives a flashing blue light (the LED on the power supply also flashes green); no fan or boot activity, nothing. I urgently needed some files off there in the short term (and the 500GB of photos, music, personal video etc. in the long term), so I took the hard drive out and put it in my computer. The files and folders showed up, but I couldn't access them: clicking on an image gave an invalid-image error in Picasa, I couldn't play MP3s, etc. I changed the ownership and permissions of the files; still nothing. I booted with a LiveCD, and it was the same: files appear, but won't open. Is there anything else I can do? I'm now wondering if it was just the power cable that's broken, but if so, why can't I access my files from the hard drive? If it is the power cable, and I replace that and the hard drive, will I have done any harm messing around with ownership and file permissions?

  • Hard Disk Not Counting Reallocated Sectors

    - by MetaNova
    I have a drive that is reporting a current pending sector count of 45. I have used badblocks to identify the sectors, and I have been trying to write zeros to them with dd. From what I understand, when I attempt to write data directly to a bad sector, it should trigger a reallocation, reducing Current_Pending_Sector by one and increasing the reallocated sector count. However, on this disk both the Reallocated_Sector_Ct and Reallocated_Event_Count raw values are 0, and dd fails with I/O errors when I attempt to write zeros to the bad sectors. dd works fine, however, when I write to a good sector.

        # dd if=/dev/zero of=/dev/sdb bs=512 count=1 seek=217152
        dd: error writing ‘/dev/sdb’: Input/output error

    Does this mean that my drive, in some way, has no spare sectors to be used for reallocation? Is my drive just in general a terrible person? (The drive isn't actually mine; I'm helping a friend out. They might have just gotten a cheap drive or something.) In case it is relevant, here is the output of smartctl -i:

        Model Family:     Western Digital Caviar Green (AF)
        Device Model:     WDC WD15EARS-00Z5B1
        Serial Number:    WD-WMAVU3027748
        LU WWN Device Id: 5 0014ee 25998d213
        Firmware Version: 80.00A80
        User Capacity:    1,500,301,910,016 bytes [1.50 TB]
        Sector Size:      512 bytes logical/physical
        Device is:        In smartctl database [for details use: -P show]
        ATA Version is:   ATA8-ACS (minor revision not indicated)
        SATA Version is:  SATA 2.6, 3.0 Gb/s
        Local Time is:    Fri Oct 18 17:47:29 2013 CDT
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

    UPDATE: I have run shred on the disk, which has caused Current_Pending_Sector to go to zero. However, Reallocated_Sector_Ct and Reallocated_Event_Count are still zero, and dd is now able to write data to the sectors it was previously unable to. This leaves me with several other questions:

    - Why aren't the reallocations being recorded by the disk? I'm assuming the reallocation took place, as I can now write data directly to the sector and couldn't before.
    - Why did shred cause reallocation and not dd? Does the fact that shred writes random data instead of just zeros make a difference?
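
    A hedged reading of the shred-vs-dd puzzle: the WD15EARS is an Advanced Format drive (4 KiB physical sectors presented as 512-byte logical ones), so a single 512-byte write forces a read-modify-write of the surrounding 4 KiB physical sector, and the read half fails on a pending sector. shred streams large sequential blocks that cover whole physical sectors, so no read is needed and the write can succeed. If a pending sector ever resists dd again, hdparm can target one LBA directly (the long flag really is spelled like this):

        sudo hdparm --read-sector 217152 /dev/sdb    # an I/O error here confirms it is still pending
        sudo hdparm --write-sector 217152 --yes-i-know-what-i-am-doing /dev/sdb

    As for the counters: drives sometimes clear a pending sector without reallocating when the in-place rewrite succeeds, which would leave Reallocated_Sector_Ct at zero.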

  • Windows 7 ICS client web failure

    - by n8wrl
    I have several Windows 7 PCs connected on a LAN via a hub. One has a Verizon 3G connection and works great. I have Internet Connection Sharing enabled on it, which automagically set the LAN connection to 192.168.137.1 and enabled DHCP. I am trying to get the client PCs working one at a time; the others are off. The client is able to:

    - Get an IP via DHCP with correct settings.
    - Ping any web address I can throw at it, so DNS and routing are working.
    - Run Windows Update.

    But web sites hang in IE. All but google.com! I type www.msn.com, microsoft.com, amazon.com, etc. They all ping in a cmd window, but IE just hangs: it says the web site is found, but the green progress bar just slowly creeps and no content displays. www.google.com comes up even after clearing the browser and DNS caches. I am pulling my hair out; what am I missing?

    EDIT: After some more gyrations with a router, I'm back to ICS. Same symptoms, only now I have an answer to Andrew's question: YES, I can do Google searches, but clicking on any of the result links hangs! I let one sit for half an hour with no timeout or error.
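
    The pattern here (ping and DNS fine, Google loads, almost every other site stalls mid-transfer) is the classic signature of a path-MTU blackhole on the 3G link; Google's servers have historically tolerated this better than most. A hedged sketch for the client (the interface name is an assumption; check it with netsh interface ipv4 show subinterfaces):

        rem Find the largest payload that survives unfragmented, then add 28 for headers:
        ping -f -l 1372 www.msn.com

        rem Clamp the interface MTU accordingly:
        netsh interface ipv4 set subinterface "Local Area Connection" mtu=1400 store=persistent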

  • How to import a text file into powershell and email it, formatted as HTML

    - by Don
    I'm trying to get a list of all Exchange accounts, format them in descending order from the largest mailbox, and put that data into an email in HTML format to send to myself. So far I can get the data, push it to a text file, as well as create an email and send it to myself. I just can't seem to get it all put together. I've been trying to use ConvertTo-Html, but it just seems to return data via email like "pageFooterEntry" and "Microsoft.PowerShell.Commands.Internal.Format.AutosizeInfo" versus the actual data. I can get it to send me the right data if I don't tell it to ConvertTo-Html and just have it pipe the data to a text file and pull from it, but then it's all run together with no formatting. I don't need to save the file; I'd just like to run the command, get the data, put it in HTML and mail it to myself. Here's what I have currently:

        #Connects to Database and returns information on all users, organized by Total Item Size, User
        $body = Get-MailboxStatistics -database "Mailbox Database 0846468905" |
            where {$_.ObjectClass -eq "Mailbox"} |
            Sort-Object TotalItemSize -Descending |
            ft @{label="User";expression={$_.DisplayName}},@{label="Total Size (MB)";expression={$_.TotalItemSize.Value.ToMB()}} -auto |
            ConvertTo-Html

        #Pause for 5 seconds for Exchange
        write-host -foregroundcolor Green "Pausing for 5 seconds for Exchange"
        Start-Sleep -s 5

        $toemail = "[email protected]" # Emails report to this address.
        $fromemail = "[email protected]" # Emails from this address.
        $server = "Exchange.company.com" # Exchange server - SMTP.

        #Email the report.
        $email = New-Object System.Net.Mail.MailMessage
        $email.IsBodyHtml = $True
        $email.To.Add($toemail)
        $email.From = $fromemail
        $email.Subject = "Exchange Mailbox Sizes"
        $email.Body = $body
        $client = New-Object System.Net.Mail.SmtpClient $server
        $client.UseDefaultCredentials = $true
        $client.Send($email)

    Any thoughts would be helpful, thanks!
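
    The stray "AutosizeInfo"/"pageFooterEntry" entries are the giveaway: Format-Table (ft) emits formatting objects, and ConvertTo-Html then renders those objects' internal properties instead of the mailbox data. A hedged rewrite of the first pipeline, keeping the same calculated properties but using Select-Object and flattening the HTML fragments into one string:

        $body = Get-MailboxStatistics -Database "Mailbox Database 0846468905" |
            Where-Object { $_.ObjectClass -eq "Mailbox" } |
            Sort-Object TotalItemSize -Descending |
            Select-Object @{Label="User";            Expression={$_.DisplayName}},
                          @{Label="Total Size (MB)"; Expression={$_.TotalItemSize.Value.ToMB()}} |
            ConvertTo-Html | Out-String

    The rest of the script can stay as-is. The general rule: Format-* cmdlets belong only at the very end of a pipeline, never in front of an export or conversion cmdlet; Out-String also matters here because MailMessage.Body expects a single string, while ConvertTo-Html emits an array of lines.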

  • Suggestions for Backup solution

    - by jiewmeng
    I am considering between:

    - Windows Home Server
    - a simple NAS
    - extra HDDs in my desktop

    By the way, I will be the main user. I am looking to fulfil the following needs:

    - Reliability (I am thinking RAID 1 or 5)
    - Not so prone to virus/malware infections (will using a separate NAS or home server help? A Windows Home Server is still a Windows PC, just separated by the network, isn't it?)
    - Power efficiency (e.g. spin down when not in use)
    - Downloads (e.g. I may want to download big files/torrents overnight, and I may not want to use a full-powered PC for it; is the power usage of a full PC vs. a NAS significant enough to justify the cost of a new system, especially since I am the only user?)
    - Performance (I'd like to read and write my files fast; on second thought, maybe for backup I can forgo this? Maybe a WD Green HDD? But how much slower would it be? Plus, since I am the only user, the whole HDD will be mine.)

  • Graphics Artifacts/ Texture Flickering

    - by Cerin
    Hey, I've been having some problems with artifacts in games. Sometimes textures flicker, and artifacts of various shapes and sizes show up, usually after a couple of games of Dota 2. I built my computer almost exactly one month ago, and it has been doing this pretty much from the start, except that at first the artifacts (I believe) just flashed on screen fast enough that I couldn't tell what they were, though I still noticed. In Dota I've seen green triangular artifacts, among other things. I've tried running FurMark for a while, but even though it pushes the GPU much harder than Dota 2, there are still no artifacts; it maxes out in FurMark at about 60C, and runs every game I've tried at 40C. CPU and system temps don't usually get higher than 40C either. These are my system specs:

    - Gigabyte Z68 Intel motherboard
    - 16 GB G.Skill Ripjaws DDR3 SDRAM
    - Sapphire Radeon HD 7770 GHz Edition
    - Intel Core i5-2500K (with built-in GPU)
    - Corsair 750 Watt PSU
    - Windows 7 64-bit

    I have the latest drivers for everything. What should I do about this? Try to RMA my graphics card? Are there other things that could be causing this?

  • Tortoise SVN, find all non-added files?

    - by John Isaacks
    I am using TortoiseSVN on Windows. I have a deep directory structure (using CakePHP). After creating several files, I went through and added them. A new file shows a ? overlay, and a + overlay once it is set to add, so it is easy to tell which files still need to be added; the same goes for folders. However, if a new file is inside an old folder, the folder still shows the green check mark, and you won't have any indicator telling you there is a new file inside. So I forgot to add one before a commit. This wouldn't have been so bad, as I could just add it and recommit; however, I had also made this a tag, so I had to recommit to both the tag and the trunk. Anyway, this wouldn't have been an issue if there were an indicator on existing folders that contain new files. Is there a way to set this up?
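
    Two hedged options for surfacing unversioned files regardless of how deep they sit. Within TortoiseSVN, the "Check for modifications" dialog has a "Show unversioned files" checkbox that lists them all before a commit. From a command line (this assumes a command-line Subversion client such as SlikSVN is installed, since TortoiseSVN ships GUI-only in most configurations):

        svn status | findstr "^?"

    Each line starting with ? is a file or folder Subversion does not yet track.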
