Search Results

Search found 90601 results on 3625 pages for 'user friendly'.


  • VPN in Ubuntu fails every time

    - by fazpas
    I am trying to set up a VPN connection in Ubuntu 10.04 to use the service from relakks.com. I used the Network Manager to add the VPN connection, and the settings are:

        Gateway: pptp.relakks.com
        Username: user
        Password: pwd
        IPv4 Settings: Automatic (VPN)
        Advanced: MSCHAP & MSCHAPv2 checked
        Use point-to-point encryption (security: default)
        Allow BSD data compression checked
        Allow deflate data compression checked
        Use TCP header compression checked

    The connection always fails. Here is the syslog:

        Jun 27 20:11:56 desktop NetworkManager: <info> Starting VPN service 'org.freedesktop.NetworkManager.pptp'...
        Jun 27 20:11:56 desktop NetworkManager: <info> VPN service 'org.freedesktop.NetworkManager.pptp' started (org.freedesktop.NetworkManager.pptp), PID 2064
        Jun 27 20:11:56 desktop NetworkManager: <info> VPN service 'org.freedesktop.NetworkManager.pptp' just appeared, activating connections
        Jun 27 20:11:56 desktop NetworkManager: <info> VPN plugin state changed: 3
        Jun 27 20:11:56 desktop NetworkManager: <info> VPN connection 'Relakks' (Connect) reply received.
        Jun 27 20:11:56 desktop pppd[2067]: Plugin /usr/lib/pppd/2.4.5//nm-pptp-pppd-plugin.so loaded.
        Jun 27 20:11:56 desktop pppd[2067]: pppd 2.4.5 started by root, uid 0
        Jun 27 20:11:56 desktop NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/ppp1, iface: ppp1)
        Jun 27 20:11:56 desktop NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/ppp1, iface: ppp1): no ifupdown configuration found.
        Jun 27 20:11:56 desktop pppd[2067]: Using interface ppp1
        Jun 27 20:11:56 desktop pppd[2067]: Connect: ppp1 <--> /dev/pts/0
        Jun 27 20:11:56 desktop pptp[2071]: nm-pptp-service-2064 log[main:pptp.c:314]: The synchronous pptp option is NOT activated
        Jun 27 20:11:57 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 1 'Start-Control-Connection-Request'
        Jun 27 20:11:58 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:739]: Received Start Control Connection Reply
        Jun 27 20:11:58 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:773]: Client connection established.
        Jun 27 20:11:58 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 7 'Outgoing-Call-Request'
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:858]: Received Outgoing Call Reply.
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:897]: Outgoing call established (call ID 0, peer's call ID 1024).
        Jun 27 20:11:59 desktop kernel: [   56.564074] Inbound IN=ppp0 OUT= MAC= SRC=93.182.139.2 DST=186.110.76.26 LEN=61 TOS=0x00 PREC=0x00 TTL=52 ID=40460 DF PROTO=47
        Jun 27 20:11:59 desktop kernel: [   56.944054] Inbound IN=ppp0 OUT= MAC= SRC=93.182.139.2 DST=186.110.76.26 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=40461 DF PROTO=47
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[pptp_read_some:pptp_ctrl.c:544]: read returned zero, peer has closed
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[callmgr_main:pptp_callmgr.c:258]: Closing connection (shutdown)
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 12 'Call-Clear-Request'
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[pptp_read_some:pptp_ctrl.c:544]: read returned zero, peer has closed
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[call_callback:pptp_callmgr.c:79]: Closing connection (call state)
        Jun 27 20:11:59 desktop pppd[2067]: Modem hangup
        Jun 27 20:11:59 desktop pppd[2067]: Connection terminated.
        Jun 27 20:11:59 desktop NetworkManager: <info> VPN plugin failed: 1
        Jun 27 20:11:59 desktop NetworkManager: SCPlugin-Ifupdown: devices removed (path: /sys/devices/virtual/net/ppp1, iface: ppp1)
        Jun 27 20:11:59 desktop pppd[2067]: Exit.

    Can someone identify something in the syslog? I've been googling and reading about PPTP, but couldn't find anything about the error "read returned zero, peer has closed".
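    One thing worth ruling out, given the kernel's "Inbound ... PROTO=47" lines right before the peer closes: protocol 47 is GRE, which PPTP requires in addition to TCP port 1723, and a firewall that logs and drops unmatched inbound traffic (the "Inbound" log prefix suggests one is installed) would kill the tunnel at exactly this point. A minimal sketch for temporarily admitting GRE, assuming iptables is the firewall in use:

        sudo iptables -I INPUT -p gre -j ACCEPT    # allow GRE (IP protocol 47) in
        sudo iptables -I OUTPUT -p gre -j ACCEPT   # and out

    If the VPN then connects, fold an equivalent permanent rule into whatever tool generated the "Inbound" log prefix (Firestarter, ufw, or a custom script).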


  • Cannot Start MySQL Server on Fresh MAMP Install

    - by alexpelan
    I'm using Mac OS X 10.6.2 on my MacBook Pro. I can get the Apache server to start, but not the MySQL server, on both the default Apache and default MAMP ports. When I try to go to my start page, I get the message "Error: Could not connect to MySQL server!". Here's what's in my MySQL error log:

        100513 02:00:07 mysqld_safe mysqld from pid file /Applications/MAMP/tmp/mysql/mysql.pid ended
        100513 02:00:16 mysqld_safe Starting mysqld daemon with databases from /Applications/MAMP/db/mysql
        100513  2:00:16 [Warning] The syntax '--log_slow_queries' is deprecated and will be removed in a future release. Please use '--slow_query_log'/'--slow_query_log_file' instead.
        100513  2:00:16 [Warning] You have forced lower_case_table_names to 0 through a command-line option, even though your file system '/Applications/MAMP/db/mysql/' is case insensitive. This means that you can corrupt a MyISAM table by accessing it with different cases. You should consider changing lower_case_table_names to 1 or 2
        100513  2:00:16 [Warning] One can only use the --user switch if running as root
        100513  2:00:16 [Note] Plugin 'FEDERATED' is disabled.
        100513  2:00:16 [Note] Plugin 'ndbcluster' is disabled.
        InnoDB: Error: log file /usr/local/mysql/data/ib_logfile0 is of different size 0 5242880 bytes
        InnoDB: than specified in the .cnf file 0 16777216 bytes!
        100513  2:00:16 [ERROR] Plugin 'InnoDB' init function returned error.
        100513  2:00:16 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        100513  2:00:16 [ERROR] /Applications/MAMP/Library/libexec/mysqld: unknown option '--skip-bdb'
        100513  2:00:16 [ERROR] Aborting
        100513  2:00:16 [Note] /Applications/MAMP/Library/libexec/mysqld: Shutdown complete
        100513 02:00:16 mysqld_safe mysqld from pid file /Applications/MAMP/tmp/mysql/mysql.pid ended

    A couple of things:

    1) There are a bunch of different .cnf files that come with MAMP (my-huge, my-medium, etc.) - how can I tell which one is actually being used?

    2) I deleted ib_logfile0 and ib_logfile1, as recommended by another post on Server Fault, and then ended up with more errors:

        100519 16:01:30 InnoDB: Log file /usr/local/mysql/data/ib_logfile0 did not exist: new to be created
        InnoDB: Setting log file /usr/local/mysql/data/ib_logfile0 size to 16 MB
        InnoDB: Database physically writes the file full: wait...
        100519 16:01:30 InnoDB: Log file /usr/local/mysql/data/ib_logfile1 did not exist: new to be created
        InnoDB: Setting log file /usr/local/mysql/data/ib_logfile1 size to 16 MB
        InnoDB: Database physically writes the file full: wait...
        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        100519 16:01:31 InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        100519 16:01:31 InnoDB: Started; log sequence number 0 44556
        100519 16:01:31 [ERROR] /Applications/MAMP/Library/libexec/mysqld: unknown option '--skip-bdb'
        100519 16:01:31 [ERROR] Aborting

    And then I got this the next time I tried to run it:

        InnoDB: Unable to lock /usr/local/mysql/data/ibdata1, error: 35
        InnoDB: Check that you do not already have another mysqld process
        InnoDB: using the same InnoDB data or log files.

    Sorry that this is a lot of information, but I don't want to leave anything out. Thanks.
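    For question 1, mysqld itself reports which option files it reads and in what order; a sketch, with the binary path taken from the log above:

        /Applications/MAMP/Library/libexec/mysqld --verbose --help | grep -A 1 'Default options'

    Whichever of the listed files actually exists is the one in effect. The "unknown option '--skip-bdb'" abort then suggests that file still carries a skip-bdb line left over from MySQL 5.0 (the BDB engine was removed in 5.1); commenting that line out should get past the final [ERROR] Aborting, after which the ib_logfile size mismatch can be handled by making innodb_log_file_size match the existing files or recreating the logs cleanly.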


  • BluRay audio/video stuttering with PowerDVD 11, WinDVD 11 Pro, etc? Xonar/Auzen HD audio option?

    - by jrista
    I recently upgraded my Windows 7 Media Center HTPC due to a motherboard failure (really old motherboard and CPU; it was on its last legs). I chose to upgrade to an i5 system with everything built into the motherboard. I did my due diligence, researched, and found some hardware that was within my budget. I ended up with:

        Core i5 2500K (3.3GHz)
        Corsair XMS3 2x2GB DDR3 (4GB)
        ASUS P8H61-M LE/CSM
        MicroCenter 64GB SSD
        (previous BluRay player, forget the brand)

    The system is pretty awesome, and plays everything I have perfectly. I almost went with an Atom solution; however, there have been numerous notes that they do not play Netflix Instant Watch well... and I am a heavy Netflix IW user. High-definition BluRay rips work well, although they usually contain lower audio quality than the BluRay discs they were ripped from.

    The real problem I am encountering is playing back BluRay video from discs. For some reason, I am encountering rather terrible stuttering problems with both the audio and video. The stuttering is synchronous in both, and occurs at seemingly random intervals. I've used PowerDVD 9, the PowerDVD 11 trial, and the WinDVD 11 Pro trial. All three have stuttering problems, although PowerDVD 11 seems to have the least. Watching system resource usage, CPU load is never above 20%, and memory usage tends to be a constant 1/3rd of the total available system memory. When playback is fine, it's superb... the video is crystal clear. The audio quality is OK, certainly not what I would expect from a BluRay disc.

    I did some research, and it seems that playing BluRay from a PC causes a downsampling of the audio? I am curious whether the audio is my primary problem here, the cause of the stuttering I am encountering. When stuttering occurs, the audio gets REALLY bad, while the video just pauses momentarily every second until, for whatever reason, everything picks up and runs fine (usually after a few seconds to a couple of minutes). The audio chipset is a Realtek HD ALC887 8-channel, supposedly designed to support BluRay playback.

    Has anyone encountered issues like this playing back BluRay discs on a PC (namely with PowerDVD... WinDVD was FAR worse, seemed to have real trouble even reading the discs, and I have no interest in fiddling with it further)? Is there any reason to suspect the video decoding as the problem? (Given how bad the audio gets during a stutter, and how clean the video remains, I am inclined to think the issue boils down to audio.) Is it even remotely possible that the motherboard, CPU, or RAM are causing the stuttering? (All three are pretty blazing fast... faster than the hardware I replaced, which seemed to play BluRay fine with PowerDVD 9.)

    I've read a bit about the Asus Xonar HDAV 1.3 and the Auzen X-Fi HomeTheater HD home theater hi-fi audio cards. It seems they are the only way to get true full-quality, uncompressed BluRay audio bitstreaming over HDMI on a PC. None of the usual suspects seem to have these cards in stock, however. Are these cards worth getting? Are they even still available, or have they been discontinued? (If so, that would indeed be sad... they sound simply fantastic.)


  • overriding new ubuntu installation

    - by tkoomzaaskz
    I've got an Ubuntu 11.10 which lost its support in May 2013; now I'd like to reinstall up to the most up-to-date LTS, which is 12.04. My question is regarding my current partitions and doing backups. Is there a safe way to back up my data onto some local partitions, instead of copying files onto DVDs/external drives (which is very uncomfortable in my situation)? The following system commands show my disk:

        $ lsblk
        NAME    MAJ:MIN RM   SIZE RO MOUNTPOINT
        sda       8:0    0 232,9G  0
        +-sda1    8:1    0  48,8G  0
        +-sda2    8:2    0    63G  0
        +-sda3    8:3    0     1K  0
        +-sda4    8:4    0  53,7G  0 /
        +-sda5    8:5    0  18,6G  0
        +-sda6    8:6    0  25,5G  0
        +-sda7    8:7    0  23,3G  0 [SWAP]
        sr0      11:0    1  1024M  0

    and

        $ sudo fdisk -l
        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xc3ffc3ff

        Device Boot      Start        End    Blocks  Id System
        /dev/sda1   *     2048  102402047  51200000   7 HPFS/NTFS/exFAT
        /dev/sda2    215044096  347080703  66018304   7 HPFS/NTFS/exFAT
        /dev/sda3    347082750  488392064  70654657+  5 Extended
        /dev/sda4    102402048  215042047  56320000  83 Linux
        /dev/sda5    395905923  434975939  19535008+ 83 Linux
        /dev/sda6    434976003  488392064  26708031  83 Linux
        /dev/sda7    347082752  395905023  24411136  82 Linux swap / Solaris

    In the beginning I had Windows Vista pre-installed on the machine when it was bought (damn!), and then I installed the Linux I have now. The Windows program in the master boot record has been overridden by GRUB, and now I can boot both Windows and Linux. This is the list of mounted devices:

        $ mount
        /dev/sda4 on / type ext4 (rw,errors=remount-ro,commit=0)
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
        fusectl on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        udev on /dev type devtmpfs (rw,mode=0755)
        devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
        none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
        none on /run/shm type tmpfs (rw,nosuid,nodev)
        binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
        gvfs-fuse-daemon on /home/tomasz/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=tomasz)

    It's strange (I don't remember such a thing) that my current Linux uses only one partition (/dev/sda4). But, anyway, it seems to be that way. My final question is: can I use one of the existing Linux partitions for a backup and install Ubuntu 12.04 without removing either Windows or Ubuntu 11.10? I mean, will GRUB automatically accept the old Windows Vista and both Linuxes (the old 11.10 and the "new" 12.04)? Is there any hidden operation done during installation that could harm my custom backup partition? My fstab file:

        proc /proc proc nodev,noexec,nosuid 0 0
        # / was on /dev/sda4 during installation
        UUID=d44e89f5-9da2-48eb-83b3-887652ec95d2 / ext4 errors=remount-ro 0 1
        # swap was on /dev/sda7 during installation
        UUID=bbe50535-ba57-434a-9272-211d859f0e00 none swap sw 0 0

    sda5 and sda6 are trash partitions created during an unsuccessful Linux installation (the one before my current installation); I didn't delete them, and I have access to them (so I can use them as backup partitions).
edit: second question is: why does lsblk show /dev/sda having 232,9G while fdisk shows that it has 250.1GB? Where does the difference come from?
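    Two notes, offered as sketches rather than definitive answers. The size difference is just units: lsblk prints binary gibibytes while fdisk prints decimal gigabytes, and the same 250059350016 bytes is 250059350016 / 1024^3 ≈ 232.9 GiB but 250059350016 / 1000^3 ≈ 250.1 GB, so the two tools agree. For the backup, one low-risk approach is to reuse one of the self-described trash partitions, assuming nothing on it is still wanted:

        sudo mkfs.ext4 /dev/sda6                    # WARNING: erases sda6; only if it really is expendable
        sudo mkdir -p /mnt/backup
        sudo mount /dev/sda6 /mnt/backup
        sudo rsync -aAXv /home/ /mnt/backup/home/   # archive mode preserves permissions, ACLs, xattrs

    During the 12.04 install, choose manual partitioning ("Something else"), install into one of the free partitions (e.g. /dev/sda5) if 11.10 on /dev/sda4 should survive, and give /dev/sda6 no mount point and no format flag; the installer then leaves it alone, and GRUB's os-prober normally re-detects Windows and any other installed Linux for the boot menu.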


  • Raid 1 array won't assemble after power outage. How do I fix this ext4 mirror?

    - by Forkrul Assail
    Two ext4 drives in RAID 1 with mdadm won't reassemble after the power went out for an extended period (UPS drained). After turning the machine back on, mdadm said that the array was degraded, after which it took about 2 days for a full resync, which completed without problems. On trying to remount the array I get:

        mount: you must specify the filesystem type

    The lines of /etc/fstab relevant to the setup:

        /dev/md127 /media/mediapool ext4 defaults 0 0

    dmesg | tail (on trying to mount) says:

        [ 1050.818782] EXT3-fs (md127): error: can't find ext3 filesystem on dev md127.
        [ 1050.849214] EXT4-fs (md127): VFS: Can't find ext4 filesystem
        [ 1050.944781] FAT-fs (md127): invalid media value (0x00)
        [ 1050.944782] FAT-fs (md127): Can't find a valid FAT filesystem
        [ 1058.272787] EXT2-fs (md127): error: can't find an ext2 filesystem on dev md127.

    cat /proc/mdstat says:

        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
        md127 : active (auto-read-only) raid1 sdj[2] sdi[0]
              2930135360 blocks super 1.2 [2/2] [UU]
        unused devices: <none>

    fsck /dev/md127 says:

        fsck from util-linux 2.20.1
        e2fsck 1.42 (29-Nov-2011)
        fsck.ext2: Superblock invalid, trying backup blocks...
        fsck.ext2: Bad magic number in super-block while trying to open /dev/md127
        The superblock could not be read or does not describe a correct ext2
        filesystem. If the device is valid and it really contains an ext2
        filesystem (and not swap or ufs or something else), then the superblock
        is corrupt, and you might try running e2fsck with an alternate superblock:
            e2fsck -b 8193 <device>

    mdadm -E /dev/sdi gives me:

        /dev/sdi:
                  Magic : a92b4efc
                Version : 1.2
            Feature Map : 0x0
             Array UUID : 37ac1824:eb8a21f6:bd5afd6d:96da6394
                   Name : sojourn:33
          Creation Time : Sat Nov 10 10:43:52 2012
             Raid Level : raid1
           Raid Devices : 2
         Avail Dev Size : 5860271016 (2794.40 GiB 3000.46 GB)
             Array Size : 2930135360 (2794.39 GiB 3000.46 GB)
          Used Dev Size : 5860270720 (2794.39 GiB 3000.46 GB)
            Data Offset : 262144 sectors
           Super Offset : 8 sectors
                  State : clean
            Device UUID : 3e6e9a4f:6c07ab3d:22d47fce:13cecfd0
            Update Time : Tue Nov 13 20:34:18 2012
               Checksum : f7d10db9 - correct
                 Events : 27
            Device Role : Active device 0
            Array State : AA ('A' == active, '.' == missing)

    boot@boot ~ $ sudo mdadm -E /dev/sdj

        /dev/sdj:
                  Magic : a92b4efc
                Version : 1.2
            Feature Map : 0x0
             Array UUID : 37ac1824:eb8a21f6:bd5afd6d:96da6394
                   Name : sojourn:33
          Creation Time : Sat Nov 10 10:43:52 2012
             Raid Level : raid1
           Raid Devices : 2
         Avail Dev Size : 5860271016 (2794.40 GiB 3000.46 GB)
             Array Size : 2930135360 (2794.39 GiB 3000.46 GB)
          Used Dev Size : 5860270720 (2794.39 GiB 3000.46 GB)
            Data Offset : 262144 sectors
           Super Offset : 8 sectors
                  State : clean
            Device UUID : 7fb84af4:e9295f7b:ede61f27:bec0cb57
            Update Time : Tue Nov 13 20:34:18 2012
               Checksum : b9d17fef - correct
                 Events : 27
            Device Role : Active device 1
            Array State : AA ('A' == active, '.' == missing)

    machine@user ~ $ dmesg | tail

        [   61.785866] init: alsa-restore main process (2736) terminated with status 99
        [   68.433548] eth0: no IPv6 routers present
        [  534.142511] EXT4-fs (sdi): ext4_check_descriptors: Block bitmap for group 0 not in group (block 2838187772)!
        [  534.142518] EXT4-fs (sdi): group descriptors corrupted!
        [  546.418780] EXT2-fs (sdi): error: couldn't mount because of unsupported optional features (240)
        [  549.654127] EXT3-fs (sdi): error: couldn't mount because of unsupported optional features (240)

    Since this is RAID 1, it was suggested that I try to mount or fsck the drives separately. After a long fsck on one drive, it ended with this as the tail:

        Illegal double indirect block (2298566437) in inode 39717736. CLEARED.
        Illegal block #4231180 (2611866932) in inode 39717736. CLEARED.
        Error storing directory block information (inode=39717736, block=0, num=1092368): Memory allocation failed
        Recreate journal? yes
        Creating journal (32768 blocks): Done.
        *** journal has been re-created - filesystem is now ext3 again ***

    The drive, however, still doesn't want to mount:

        dmesg | tail
        [  170.674659] md: export_rdev(sdc)
        [  170.675152] md: export_rdev(sdc)
        [  195.275288] md: export_rdev(sdc)
        [  195.275876] md: export_rdev(sdc)
        [ 1338.540092] CE: hpet increased min_delta_ns to 30169 nsec
        [26125.734105] EXT4-fs (sdc): ext4_check_descriptors: Checksum for group 0 failed (43502!=37987)
        [26125.734115] EXT4-fs (sdc): group descriptors corrupted!
        [26182.325371] EXT3-fs (sdc): error: couldn't mount because of unsupported optional features (240)
        [27083.316519] EXT4-fs (sdc): ext4_check_descriptors: Checksum for group 0 failed (43502!=37987)
        [27083.316530] EXT4-fs (sdc): group descriptors corrupted!

    Please help me fix this. I never in my wildest nightmares thought a complete mirror would die this badly. Am I missing something? Any suggestions on fixing this? Could someone explain why it would resync after the power outage, only to seemingly nuke the drive? Thanks for reading. Any help is much appreciated. I've tried everything I can think of, including booting and filesystem checking with SystemRescue and Ubuntu live-boot discs.
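    A caution and a sketch, not a guaranteed fix: with 1.2 metadata the filesystem starts 262144 sectors into each member (the Data Offset shown by mdadm -E above), so running fsck or mount against a bare member such as /dev/sdi or /dev/sdc reads from the wrong offset and can make things worse; repairs should target the assembled /dev/md127 only. Likewise, e2fsck -b 8193 is the backup-superblock location for 1 KiB-block filesystems, and a 3 TB ext4 volume almost certainly uses 4 KiB blocks:

        sudo mdadm --readwrite /dev/md127   # leave auto-read-only so e2fsck can write
        sudo mke2fs -n /dev/md127           # -n = dry run: prints where the backup superblocks live, writes nothing
        sudo e2fsck -b 32768 /dev/md127     # 32768 is the usual first backup for 4 KiB blocks

    If anything on the array is irreplaceable, image both members with ddrescue first and experiment on the copies.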


  • htaccess on remote server issues - password prompt not accepting input

    - by pying saucepan
    EDIT: I will contact the university about my problem after the Labor Day weekend, but I thought that if someone knew a quick fix I haven't tried, or if the problem has an obvious solution, I could try my luck here first. Thanks!

    TL;DR: Sorry, it's a long post; I thought I should be... thorough.

    I am having a common issue (I found a dead thread through Google describing the same problem, with no solution): the prompt to enter a username and password for htaccess-protected content keeps popping up when I try to access my home directory on my university's server, which holds the .htaccess and .htpasswd files. It does not matter whether I enter correct or incorrect credentials; the prompt keeps asking for input without ever displaying my home directory. Ever since I added these ht files, I have never once been able to get past the username/password prompt, no matter what I have tried, save for removing them from the directory I am trying to access (the top-level directory that I own). This kind of served my original goal of making the top-level directory inaccessible to casual users, but if I wanted to use this method in other places, I would want it to work as intended. And I also like it when computers do what I wish they would, so any help is appreciated.

    Some things I have tried. Changing the file/directory access rights - they told me to try these commands if people can't access my files:

        cd ~/public_html
        find ./ -type d -exec chmod 755 {} \;
        find ./ -type f -exec chmod 644 {} \;

    I entered the single-character name/pw at least twenty times in a row - no cheddar. So I changed directory with cd ~, in hopes that this would be my home directory, since my home directory contains the "public_html" directory, so logic tells me that the ~ tilde symbol is the top-level directory that I have ownership of. Then I ran those two commands to change the rights on the files inside. I am still having no luck.

    How I got to this point: I have been following the instructions on my university's website for setting up my little directory. A link describing how they password protect the home directory is given below:

        "Protect Web Directories" instructions

    I have everything in order except for one small detail that I feel probably does not matter. I am on Windows, so I am using WinSCP to remotely control my allocated server space. The small detail is that, as the instructions indicate (in step 3), I should use the command

        htpasswd -c .htpasswd {username}

    where {username} is my folder that holds my allocated server space. But this command requires further input through the terminal, and unfortunately WinSCP does not offer this kind of functionality. So I looked up some basic instructions on using htaccess, and the .htaccess file is formatted correctly, as follows:

        AuthType Basic
        AuthName "Verify"
        AuthUserFile /correctpath/.htpasswd
        require valid-user

    This file is in the root directory of my server space, as is the .htpasswd file, which contains only this data:

        username:password

    I know for sure that these two files must be formatted correctly, at least according to their tutorial, because my path was at first incorrectly formatted (it included some curly { braces }, before I knew the correct way to write it), and back then the password prompt that showed up when accessing my directory responded by loading an error page indicating that I should contact the OSU admin, or something unimportant. But now I have everything like it 'should' be. I know this because when I enter my credentials ("username and password"), the prompt pops up for my username and password again and again, whether or not I enter correct information. The only exception is that if I click cancel, it directs me to a page saying that I need to enter a username and password.

    Note that I am very inexperienced at server-related business; two days ago I couldn't have told you what a website actually consists of. So, if you use some technical jargon, I may or may not need to look it up and get back to you before I actually understand what you mean, but I am a quick learner and it probably won't matter.
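    One likely culprit, offered as a guess: Apache's basic auth expects the password field in .htpasswd to be a hash, not the literal password, so a hand-written username:password line fails every comparison, which matches the endless re-prompting (bad credentials re-prompt; only cancel shows the 401 page). Since WinSCP can't drive htpasswd interactively, the hash can be generated non-interactively and pasted into the file; a sketch:

        htpasswd -nb username password    # -n prints to stdout instead of a file, -b takes the password as an argument
        # or, without the htpasswd tool at all:
        printf 'username:%s\n' "$(openssl passwd -apr1 password)"

    Either command prints a line of the form username:$apr1$...; replacing the plaintext line in .htpasswd with that output is the usual fix.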


  • Command-line video editing in Linux (cut, join and preview)

    - by sdaau
    I have rather simple editing needs - I need to cut up some videos, maybe insert some PNGs in between them, and join these videos (don't need transitions, effects, etc.). Basically, pitivi would do what I want - except that I use 640x480 30 fps AVIs from a camera, and as soon as I put in over a couple of minutes of that kind of material, pitivi starts freezing on preview and thus becomes unusable. So, I started looking for a command-line tool for Linux; I guess only ffmpeg (command line - Using ffmpeg to cut up video - Super User) and mplayer (Sam - Edit video file with mencoder under linux) are candidates so far, but I cannot find examples of the use I have in mind.

    Basically, I'd imagine there's an encoder and a player tool (like ffmpeg vs ffplay, or mencoder vs mplayer) such that, to begin with, the edit sequence could be specified directly on the command line, preferably with frame resolution - pseudocode would look like:

        videnctool -compose \
          --file=vid1.avi --start=00:00:30:12 --end=00:01:45:00 \
          --file=vid2.avi --start=00:05:00:00 --end=00:07:12:25 \
          --file=mypicture.png --duration=00:00:02:00 \
          --file=vid3.avi --start=00:02:00:00 --end=00:02:45:10 \
          --output=editedvid.avi

    ... or it could take a "playlist" text file, like:

        vid1.avi 00:00:30:12 00:01:45:00
        vid2.avi 00:05:00:00 00:07:12:25
        mypicture.png - 00:00:02:00
        vid3.avi 00:02:00:00 00:02:45:10

    ... so it could be called with:

        videnctool -compose --playlist=playlist.txt --output=editedvid.avi

    The idea here is that all of the videos are in the same format, allowing the tool to avoid transcoding and just do a "raw copy" instead (as in mencoder's copy codec: "-oac copy -ovc copy") - or, failing that, uncompressed audio/video would be OK (although it would eat a bit of space). In the case of the still image, the tool would use the encoding set by the video files.

    The thing is, I can so far see that mencoder and ffmpeg operate on individual files; e.g. cut a single section from a single file, or join files (mencoder also has Edit Decision Lists (EDL), which can be used for frame-exact cutting - you can define multiple cut regions, but they are again attributed to a single file). Which implies I have to cut pieces from individual files first (each of which would demand its own temporary file on disk) and then join them into a final video file.

    I would then imagine that there is a corresponding player tool, which can read the same command-line option format / playlist file as the encoding tool - except it will not generate an output file, but instead play the video; e.g. in pseudocode:

        vidplaytool --playlist=playlist.txt --start=00:01:14 --end=00:03:13

    ... and, given there's enough memory, it would generate a low-res video preview in RAM and play it back in a window, while offering some limited interaction (like mplayer's keyboard shortcuts for play, pause, rewind, step frame). Of course, I'd imagine the start and end times to refer to the entire playlist, including any file that may end up in that region of the playlist. Thus, the end result of all this would be: command-line operation; no temporary files while doing the editing - and also no temporary files (nor transcoding) when rendering the final output... which I myself think would be nice.

    So, while I think that all of the above may be a bit of a stretch - does there exist anything that would approximate the workflow described above?
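    For the cut-then-join half without transcoding, recent ffmpeg builds come close to the playlist idea via stream copy plus the concat demuxer, at the cost of cuts snapping to keyframes rather than exact frames; a sketch (filenames from the pseudocode above, frame-based times rounded to whole seconds):

        # cut the pieces with stream copy (-ss start, -t duration, no re-encode)
        ffmpeg -i vid1.avi -ss 00:00:30 -t 00:01:15 -c copy part1.avi
        ffmpeg -i vid2.avi -ss 00:05:00 -t 00:02:12 -c copy part2.avi
        # join them: the concat demuxer reads a playlist-like text file
        printf "file '%s'\n" part1.avi part2.avi > playlist.txt
        ffmpeg -f concat -i playlist.txt -c copy editedvid.avi

    This still creates intermediate files, and the PNG insert would need a short clip generated from the image first (which does mean one encode), so it approximates rather than matches the single-pass tool described above.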


  • Motion - can't get streaming working from a webcam

    - by Emmanuel Brunet
    I'm trying to record a video stream from my Tenvis IP camera with motion 3.2.12 on Debian 7.5. I used the standard Debian package:

        sudo apt-get install motion

    Assume my IP cam's DNS name is webcam, user admin, password password. Below are my settings in /etc/motion/motion.conf:

        netcam_url http://webcam/videostream.cgi
        netcam_userpass admin:password
        target_dir /media/videos/log/motion
        # The mini-http server listens to this port for requests (default: 0 = disabled)
        webcam_port 1234
        ffmpeg_cap_new on
        ffmpeg_video_codec mpeg4
        output_motion off
        snapshot_interval 0
        # Quality of the jpeg (in percent) images produced (default: 50)
        webcam_quality 50
        # Output frames at 1 fps when no motion is detected and increase to the
        # rate given by webcam_maxrate when motion is detected (default: off)
        webcam_motion on
        # Maximum framerate for webcam streams (default: 1)
        webcam_maxrate 15
        # Restrict webcam connections to localhost only (default: on)
        webcam_localhost on
        # Limits the number of images per connection (default: 0 = unlimited)
        # Number can be defined by multiplying actual webcam rate by desired number of seconds
        # Actual webcam rate is the smallest of the numbers framerate and webcam_maxrate
        webcam_limit 0
        control_port 8080
        control_authentication admin:password

    Issue #1: when I try to display http://localhost:1234, the browser looks frozen; no HTTP 404 is received, but it seems to keep waiting for data.

    Issue #2: in the output directory, motion writes a lot of JPEG snapshots, although I just want several sequenced video files.

    Log: I ran motion in interactive mode in a terminal; here is the output:

        root@mercure:/etc/motion# motion -c motion-1.0.conf
        [0] Processing thread 0 - config file motion-1.0.conf
        [0] Motion 3.2.12 Started
        [0] ffmpeg LIBAVCODEC_BUILD 3482368 LIBAVFORMAT_BUILD 3478785
        [0] Thread 1 is from motion-1.0.conf
        [0] motion-httpd/3.2.12 running, accepting connections
        [0] motion-httpd: waiting for data on port TCP 8080
        [1] Thread 1 started
        [1] Resizing pre_capture buffer to 1 items
        [1] Started stream webcam server in port 1234
        [1] avcodec_open - could not open codec: Operation now in progress
        [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165303.avi]: Operation now in progress
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603165303-01.jpg
        [1] Thread exiting
        [1] Calling vid_close() from motion_cleanup
        [1] vid_close: calling netcam_cleanup
        [1] netcam camera handler: finish set, exiting
        [0] Motion thread 1 restart
        [1] Thread 1 started
        [1] Resizing pre_capture buffer to 1 items
        [1] Started stream webcam server in port 1234
        [1] avcodec_open - could not open codec: Resource temporarily unavailable
        [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165329.avi]: Resource temporarily unavailable
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603165329-00.jpg
        [1] Thread exiting
        [1] Calling vid_close() from motion_cleanup
        [1] vid_close: calling netcam_cleanup
        [1] netcam camera handler: finish set, exiting
        [0] Motion thread 1 restart
        [1] Thread 1 started
        [1] Resizing pre_capture buffer to 1 items
        [1] Started stream webcam server in port 1234
        [1] avcodec_open - could not open codec: Connection reset by peer
        [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165355.avi]: Connection reset by peer
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603165355-06.jpg
        [1] Thread exiting
        [1] Calling vid_close() from motion_cleanup
        [1] vid_close: calling netcam_cleanup
        [0] httpd - Finishing
        [0] httpd Closing
        [0] httpd thread exit
        [1] netcam camera handler: finish set, exiting
        [0] Motion thread 1 restart
        [1] Thread 1 started
        [1] Resizing pre_capture buffer to 1 items
        [1] Started stream webcam server in port 1234

    It doesn't find the codec:

        avcodec_open - could not open codec: Operation now in progress

    I've changed ffmpeg_video_codec from mpeg4 to swf; the result is the same. When using the flv format, motion writes a lot of .jpg images, but I can't see anything at http://localhost:1234:

        [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-00.jpg
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-01.jpg
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-02.jpg
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-03.jpg
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-04.jpg
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-05.jpg
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-06.jpg
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-00.jpg
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-01.jpg
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-02.jpg

    Any idea how to just get the video stream recorded on my local disk?
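    Worth testing first, as a quick sketch: whether the camera's MJPEG URL actually answers with multipart JPEG data, independently of motion, since every avcodec/ffopen failure above follows a netcam read. Assuming the same hostname and credentials:

        curl -s -u admin:password 'http://webcam/videostream.cgi' | head -c 300 | strings
        # a healthy Tenvis-style stream begins with a multipart boundary and repeating
        # "Content-Type: image/jpeg" headers, one per frame

    If that looks right, the remaining suspects are the codec name (this motion 3.2.12 build may only accept the values listed in its own motion.conf comments, e.g. mpeg4, msmpeg4, swf, flv, ffv1) and webcam_localhost, which as configured refuses any browser not running on the Debian box itself.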


  • Choice and setup of version control

    - by Peter M
    I am about to set up a new laptop, and in the process transition to a new version control system as part of a general cleanup. Currently I use a centralized version control system (yes, it is VSS, and yes, I know all the pros and cons of that system, but as a single-user system it works well for me). I have very few requirements for a new system, and I am free to choose among any of the current mainstream players, but cost constraints will push me towards OSS. Some of my requirements are:

    Runs on a single machine (i.e. the laptop in question) under Windows.
    I am not sharing things with other developers or workers - this is more for my own historical benefit.
    I want to version source code, documentation and binary files.
    I have a large hierarchy of projects that are unrelated (see below).
    I have files within the hierarchy that don't need to be controlled (but could be).
    Some projects use Visual Studio, so some integration there could be nice.
    There could be some sharing of files between jobs.
    I generally only need a small amount of branching in code files.

    The directory hierarchy that I have at the moment is somewhat like:

        Root
        |
        |--Customer #1
        |  |
        |  |--Job #1
        |  |  |
        |  |  |--Data files received from Customer for Job (not controlled)
        |  |  |--Documentation files (controlled)
        |  |  |--Project information files (not controlled - but could be)
        |  |  |--Software Project Files (controlled)
        |  |  |--Scratch dir for job (not controlled)
        |  |
        |  |--Job #2
        |     (same structure as above)
        |
        |--Customer #2
        |  ..
        |
        |--Customer #n
           ..

    Currently I have about 22 customers with differing numbers of projects underneath them. At the moment I have a single VSS repository based at the root of the directory structure. If I kept to a centralized system (i.e. SVN), I believe I should keep the same approach and continue with a single repository based at the root dir. Is this a valid approach?

    However, if I move to a distributed tool, then I am unsure how I should handle the situation. My initial guess is that I should not have a repository based on the root of my entire directory structure - but that is a guess, so I really don't know how valid it is. Should I pitch a distributed approach at the Root, Customer, Job or sub-Job directory level?

    Also, what I am not clear on with distributed tools (and perhaps with SVN as well) is whether I can branch parts of a repository. For example, I can see branching source code in software projects as being useful, but branching my documentation as not being useful. So if I pitch a repository at the Job level, can I just branch the Software Project Files? Or would all files in that Job be branched?

    Every time I look at distributed tools I get a nagging feeling that they are not suited to my style of setup. I am uncomfortable with the idea of having to manually set up something like 50 to 80 separate repositories (if I pitch at the Job level, or 20+ at the Customer level) within my directory hierarchy. This feeling also extends to having all those repositories scattered around - however, I do have a backup strategy that I trust, so this latter feeling is pretty much unfounded.

    So what advice can you all give me? Thanks in advance!
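    On the "50 to 80 separate repositories" worry: per-job initialization is scriptable, so the manual cost is lower than it looks. A sketch with git, using the directory names from the hierarchy above (the ignore patterns are illustrative):

        # run once from Root: turn each Job directory into its own git repo
        for job in Customer*/Job*/; do
            ( cd "$job" &&
              git init &&
              printf '%s\n' 'Data files*/' 'Scratch*/' > .gitignore &&
              git add -A &&
              git commit -m 'initial import' )
        done

    One design note: git (like most distributed tools) branches the whole repository, never a subdirectory, so pitching repos at the Job level, or even one repo per "Software Project Files" directory, is exactly what makes "branch only the code, not the documentation" possible.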


  • iCloud stuff stops working while connected to OpenVPN

    - by Taco Bob
    I have a fairly simple OpenVPN setup on an OpenVZ VPS with Ubuntu 11.10. The client is the Viscosity client on Mac OS X 10.8.2, and after some testing we can rule out the client as being part of the problem. Everything has been working fine except for Apple's iCloud stuff. Web surfing, email, FTP, NNTP, and Skype are all working as expected. It's ONLY the iCloud services that cease to function. If I connect to the VPN, iCloud stuff stops working: I no longer get anything in Messages, Calendar items don't get updated, and Notifications stop working. If I disconnect, the iCloud stuff all starts working. Connect again, iCloud stops working.

    Here's the server.conf:

        status openvpn-status.log
        log /var/log/openvpn.log
        verb 4
        port 1194
        proto udp
        dev tun
        ca /etc/openvpn/ca.crt
        cert /etc/openvpn/server.crt
        key /etc/openvpn/server.key
        dh /etc/openvpn/dh1024.pem
        server 10.9.8.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        push "redirect-gateway def1"
        push "dhcp-option DNS 10.9.8.1"
        keepalive 10 120
        duplicate-cn
        cipher BF-CBC
        comp-lzo
        user nobody
        group nogroup
        persist-key
        persist-tun
        tun-mtu 1500
        mssfix 1400

    I'm using iptables in a script, and it's also fairly simplistic:

        iptables -F
        iptables -t nat -F
        iptables -t mangle -F
        iptables -A FORWARD -i tun0 -o venet0 -j ACCEPT
        iptables -A FORWARD -i venet0 -o tun0 -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -p tcp --dport 1194 -j ACCEPT
        iptables -A INPUT -p udp --dport 1194 -j ACCEPT
        iptables -t nat -A POSTROUTING -s 10.9.8.0/24 -j SNAT --to-source <server's public ip>
        echo 1 > /proc/sys/net/ipv4/ip_forward

    I tried forwarding ports as well, with no success:

        iptables -A FORWARD -p tcp -d 10.9.8.0/24 --dport 5222:5230 -j ACCEPT
        iptables -t nat -A PREROUTING -p tcp --dport 5222:5230 -j DNAT --to-destination 10.9.8.6

    I am also sometimes behind a double-NAT situation that I have no control over:

        Client -> work VPN -> my OpenVPN box -> Internet
        Client -> Airport Express -> ISP (which is doing NAT) -> my OpenVPN box -> Internet

    Those two situations are just a fact of life where I am, and I cannot change them. I do have full control over my client and the OpenVPN server.

    I am completely out of ideas. I have posted a similar query at the OpenVPN forums, but it hasn't appeared yet and seems to still be in their moderation queue. I tried the freenode IRC channels, but nobody was awake, so here I am. I have Googled extensively for this and can find nothing related. Help me get iCloud stuff working again! (I tried Server Fault; it was closed as off-topic. I'm trying here and the Unix site as well - here because it's a more general audience that might know more about OpenVPN, based on the number of questions I see asked about it.)

    EDIT: I have also tried upgrading to Version: 2.3-beta1-debian0 - the issue persists. I removed all iptables rules except the flushes, left this rule:

        iptables -t nat -A POSTROUTING -s 10.9.8.0/24 -j SNAT --to-source (server ip)

    and added:

        iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT

    Still, nothing works. I can see traffic in tcpdump on the server if I watch the tunnel:

        20:03:48.702835 IP nk11p01st-courier105-bz.push.apple.com.5223 > 10.9.8.6.60772: Flags [F.], seq 2635, ack 1218, win 76, options [nop,nop,TS val 914984811 ecr 745921298], length 0
        20:03:48.911244 IP 10.9.8.6.60772 > nk11p01st-courier105-bz.push.apple.com.5223: Flags [R], seq 3621143451, win 0, length 0

    But still, no push messages/notifications are ever delivered. :/

    EDIT: Further testing indicates that it might actually be the client after all.
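    Given that the capture shows Apple's push server (port 5223) sending a FIN and the client answering with a bare RST, one cheap thing to rule out is path-MTU trouble through the double NAT plus tun stack: APNs holds a single long-lived TLS connection, which a too-large effective tunnel MTU can silently wedge. A sketch of probing from the Mac while the VPN is up (macOS ping uses -D for don't-fragment; the hostname is taken from the tcpdump output above, and 1372 is just a starting guess, 1400 minus protocol overhead):

        ping -D -s 1372 nk11p01st-courier105-bz.push.apple.com   # shrink -s until replies come back

    If only small sizes get through, try a lower tun-mtu (e.g. 1400) and mssfix (e.g. 1300) in server.conf and reconnect.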


  • how to use ajax with json in ruby on rails

    - by rafik860
    I am implementing a facebook application in Rails using the facebooker plugin; therefore it is very important to use this architecture if I want to update multiple DOM elements in my page. If my code works in a regular Rails application, it will work in my facebook application. I am trying to use ajax to let the user know that the comment was sent, and to update the comments block.

    migration:

        class CreateComments < ActiveRecord::Migration
          def self.up
            create_table :comments do |t|
              t.string :body
              t.timestamps
            end
          end

          def self.down
            drop_table :comments
          end
        end

    controller:

        class CommentsController < ApplicationController
          def index
            @comments = Comment.all
          end

          def create
            @comment = Comment.create(params[:comment])
            if request.xhr?
              @comments = Comment.all
              render :json => { :ids_to_update => [:all_comments, :form_message],
                                :all_comments  => render_to_string(:partial => "comments"),
                                :form_message  => "Your comment has been added." }
            else
              redirect_to comments_url
            end
          end
        end

    view:

        <script>
        function update_count(str, message_id) {
          len = str.length;
          if (len < 200) {
            $(message_id).innerHTML = "<span style='color: green'>" +
              (200 - len) + " remaining</span>";
          } else {
            $(message_id).innerHTML = "<span style='color: red'>" +
              "Comment too long. Only 200 characters allowed.</span>";
          }
        }

        function update_multiple(json) {
          for (var i = 0; i < json["ids_to_update"].length; i++) {
            id = json["ids_to_update"][i];
            $(id).innerHTML = json[id];
          }
        }
        </script>

        <div id="all_comments">
          <%= render :partial => "comments/comments" %>
        </div>
        Talk some trash: <br />
        <% remote_form_for Comment.new, :url => comments_url,
                           :success => "update_multiple(request)" do |f| %>
          <%= f.text_area :body,
                :onchange => "update_count(this.getValue(),'remaining');",
                :onkeyup  => "update_count(this.getValue(),'remaining');" %>
          <br />
          <%= f.submit 'Post' %>
        <% end %>
        <p id="remaining">&nbsp;</p>
        <p id="form_message">&nbsp;</p>

    If I put alert(json) in the first line of the update_multiple function, I get [object Object]. If I put alert(json["ids_to_update"][0]) there, no dialog box is displayed. The comment does get saved, but nothing is updated. It seems like the object sent by Rails is nil or can't be parsed by JSON.parse(json).

    Questions:

    1. How can JavaScript and Rails know that I am dealing with JSON objects? Does Rails send it in an object format or a text format? How can I check that the JSON object has been sent?
    2. How can I see what the returned JSON is? Do I have to parse it? How?
    3. How can I debug this problem?
    4. How can I get it to work?
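    One way to see exactly what the create action returns, without the browser in the loop: Rails' request.xhr? keys off the X-Requested-With header, so faking the XHR from the shell prints the raw JSON, or reveals that HTML or a redirect is coming back instead. A sketch, assuming the usual local dev URL:

        curl -s -X POST 'http://localhost:3000/comments' \
             -H 'X-Requested-With: XMLHttpRequest' \
             -d 'comment[body]=test comment'

    If valid JSON shows up there, the remaining gap is likely client-side: with Prototype-based helpers, the :success callback's request is the XHR/response object rather than parsed JSON (hence [object Object] with no usable fields), so the body would need to be taken from request.responseText and parsed before indexing into it.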


  • What NAS setup for two-way syncing over the internet?

    - by Jamse
    I have family living a few hours away, and I have a lot of files that I would like to share - especially lots of folders of digital photos, but also documents etc. - partially so they can see them, partially so I can have access when I visit them, and partially for backup / redundancy purposes. My current hard drives on my main machine are getting pretty full anyway, and I have a MythTV box where my music is currently stored, so I was thinking of getting a NAS anyway. And at the other end my family have a few computers, so they would probably benefit from a NAS too.

    My general idea (though I'm willing to shift on this if there are any bright ideas about other ways of achieving my objectives) is to get a matching pair of NASs and have them sync over the internet. (To cut down on bandwidth use, I would get them in sync locally to start with.)

    Having read around as best I can, it seems that syncing over the internet is generally only a feature on quite high-end units. However, I have seen that QNAP seem to offer this on their TS-110 and TS-210 units, which might work (they call it "remote replication"). They seem pretty reasonably priced for what they are, but of course with buying two of them and then adding the drives (say 1TB or 2TB each) I'd be looking at about £400 total.

    So, I'm looking for recommendations really. I don't want to spend more than the QNAPs would cost me, but any other ideas would be most appreciated. I am comfortable with technology and tinkering around, but I don't have as much time for that as I would like, so I guess I would favour solutions that require less tinkering rather than more (even though that's less fun!). Any thoughts would be welcome, as would any comments from people who have used the QNAP boxes for this. Thanks in advance.

    Some specifications:

    Two-way syncing. Changes made at either end should be synced to the other. There shouldn't be one unit that is effectively a read-only mirror of the other.
    Not real time. The syncing doesn't need to be real time - if it updated, say, daily overnight, that would be fine.
    Set and forget. I would prefer minimal user interaction once set up - it would be great if syncs were scheduled and automatic.
    OS independence. I am running Windows XP plus an Ubuntu-based MythTV box. At the other end there are Windows 7 and Windows XP machines, plus a networked TV set-top box which I think can play files off the network.
    Machine independence. I would favour a system that is self-contained, i.e. not reliant on any particular PC being switched on. If the system had enough else going for it I could perhaps work around it at this end, where I only have one PC that's used as such, but it would be harder at the other end, where there are at least two PCs that might be accessing the files.
    Notifications. I guess things like getting an email notification if the syncing fell over for any reason would be useful, though it's not a deal breaker.

    Update: I've been digging some more, and it looks like QNAP's Remote Replication function is actually just rsync, so it is only really suitable for one-way syncing. I've posted on their forum to double check, but I think that's the case. In which case, the focus of my question is now either: do any reasonably-priced NASs support bidirectional syncing over the internet, or has anyone had any luck installing suitable software onto NASs for this purpose? (Also, I've updated the question to clarify that I'm after two-way syncing.)
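    If both boxes can run standard Linux tools (many QNAP models can, via ssh and third-party package feeds), Unison is the tool usually pointed at exactly this spec: true two-way reconciliation, scheduled rather than real-time, over ssh. A sketch, with hypothetical share paths and hostname:

        # on NAS A, run nightly from cron; propagates changes in both directions
        unison /share/photos ssh://admin@remote-nas//share/photos \
            -batch -times -prefer newer    # no prompts; preserve mtimes; newer copy wins conflicts

    rsync (which is what QNAP's Remote Replication wraps, per the update above) only mirrors one way, and two opposing rsync jobs can resurrect deleted files or overwrite fresh ones, so a real reconciler like Unison is the safer fit for the two-way requirement.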


  • How do I resolve "conflicting accounts" in google apps without breaking links to online photos on picasa?

    - by lee
    I have been using Google Apps for some time, and only recently learned that I have what Google calls "conflicting accounts," which is creating a problem I haven't been able to resolve. It turns out that the Apps account really only covers email, Google Docs, and the calendar, and not other features like Picasa, Blogger, YouTube, etc., and at some point they gave me a non-Apps Google account with my same (proprietary, non-Gmail) email address for the additional apps. This is the "conflicting account." I had noticed that I sometimes had to come in through another door when I went back and forth between, let's say, Docs, Picasa, and Mail, but never understood why, since it was the same username and password, and I didn't get any communication about it at the time.

    Google is now in the process of giving Google Apps users access to the additional apps and providing instructions for consolidating the two accounts. But if I want to move my Picasa site into the new Apps structure, I have to download my albums and re-sync them. This would be disastrous for me, as I have hundreds of photos embedded in my websites, and new web addresses would break all the connections.

    The alternative seems to be to rename my "personal" (non-Apps) accounts, as described at http://www.google.com/support/a/bin/answer.py?answer=185186:

        Users with conflicting Google Accounts can easily resolve their conflicts by renaming their personal Google Accounts, and the data in their personal accounts will remain safe and accessible to them. Here's how a user can rename their personal Google Account:

        Step 1: Visit www.google.com/accounts and sign in with your personal Google Account
        Step 2: Click 'Change email' under 'Personal Settings'
        Step 3: Enter a different email address where you can receive mail, enter your password, and click 'Save email address'
        Step 4: Check your other email

        If your users don't have different email addresses where they can receive mail, they can resolve the conflict by renaming their personal Google Accounts to @gmail.com addresses instead.

    Sounds easy enough, right? I gave them a Gmail address. The wizard said "sorry, you can't use a gmail account for this" - which contradicts the last paragraph above, but OK, I switched to a new email address I had just created for one of my domains. I can send email back and forth between this account and my Google Apps account with no problem. But when I try to use it as a replacement on the "personal" side, I always get "The password you gave is incorrect." I have tried it over and over and know the password is correct.

    Since I like to get all my email through one web interface, I initially had the new address set up as an add-on to my Google Apps email account, but noting that the instructions said the "personal account" email could not be associated with any other Gmail account, I took it off and went back to accessing it via Horde, so there would be no conflict there - which seemed to make no difference.

    I can't figure out why it won't accept the password. Does anyone have any thoughts about that? Or suggestions for another way to resolve my Picasa problem? Any help at all is greatly appreciated. Lee



  • What seems to be plaguing my hard drives?

    - by Craig
    I'm in a bit of a tech nightmare here. I know oodles about software, but not much more than the above-average user about hardware.

    I recently had to toss an old desktop of mine. It was gradually getting slower and slower, and after shutting the desktop down for long periods of time, it would choke up upon startup. Sometimes it'd give a disk read error, sometimes say no OS was found, etc. Restarting it about 5-15 times would eventually boot properly. Weird. I also noticed that startup programs were going missing, Dropbox was reindexing my entire folder, and Backblaze was backing up fewer files than it should. This led me to believe it was probably a hard drive issue. I began to wonder why I'd have hard drive issues, and came to closure when I assumed it was because of recent power surges and outages. I'm sure that does a number on a drive.

    I bought a new desktop recently. It's not a beast or anything, but it's enough for what I do. It's an eMachines (I know, I know) Ultra-Slim (http://www.amazon.com/eMachines-Ultra-Slim-ER1401-57-Desktop-PC/dp/B00475OG9U). This is ideal for me because it's small and portable. It comes with an AC adapter and battery, like a laptop. Just to be safe, I bought an uninterruptible power supply on top of that. It's basically protected completely from any outages that might scramble the drive.

    I set this up a few days ago, and for the past few days I've been perfecting settings, downloading the usual applications, etc. Two days ago, I noticed Dropbox was reindexing my entire Dropbox folder. I installed both Dropbox and Backblaze on this system, but it is very much more lightweight than the other - only about 15 third-party applications installed. I thought that maybe Dropbox and Backblaze were stressing my system, so I turned off Backblaze. Still, Dropbox runs into this infinite reindex issue. I also noticed that upon a recent reboot, two applications did not start on startup. And, much like on my old desktop, every 3rd or 4th reboot I'll be forced into a chkdsk. This makes me incredibly nervous.

    What could possibly have been going wrong with my old, years-old desktop that is immediately causing the same issues on my new one? I've considered all of the basics. I'm in a very air-conditioned room. I run routine virus scans. I'd like to think I take care of my systems very well. What is this issue that is haunting me? There's always the possibility that this new desktop has a junk hard drive, but it just seems way too coincidental.
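    Before blaming a second drive, the drive's own SMART counters are worth reading; they distinguish genuinely failing media from software-level corruption. A sketch with smartmontools (the /dev/sda name is an assumption - run it from a Linux live USB, or use the Windows build with its own device naming):

        sudo smartctl -H /dev/sda    # overall health verdict
        sudo smartctl -A /dev/sda | grep -i -E 'reallocated|pending|uncorrectable'
        # non-zero Reallocated_Sector_Ct or Current_Pending_Sector counts point at failing media

    If both machines show clean SMART data, a cause carried over from the old machine (the same restored files, or an external drive reused on both) becomes the likelier story, which would fit the identical Dropbox reindexing symptom.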


  • CodePlex Daily Summary for Wednesday, November 06, 2013

    Popular Releases

    - Win_8 (??? Devel Studio 2 ??? 3): Win8 0.8 + Sample (.dvs): 1. Added the ability to load icons. 2. Fixed bugs related to obtaining the names of shapes. 3. Fixed a memory leak. 4. Fixed some defects. 5. Optimized code.
    - NWTCompiler: NWTCompiler v2.4.0: This version fixes a bug with Pinyin and adds support for the 2013 English NWT.
    - ConEmu - Windows console with tabs: ConEmu 131105 [Alpha]: Developer build, x86 and x64 versions. Written in C++, no additional packages required. Run "ConEmu.exe" or "ConEmu64.exe". Some useful information you may find at: http://superuser.com/questions/tagged/conemu , http://code.google.com/p/conemu-maximus5/wiki/ConEmuFAQ , http://code.google.com/p/conemu-maximus5/wiki/TableOfContents . If you want to use ConEmu in portable mode, just create an empty "ConEmu.xml" file next to "ConEmu.exe".
    - CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.9.0: Implemented Recent Scripts list. Added checking for plugin updates from the AboutBox. Multiple formatting improvements/fixes. Implemented selection of the CLR version when preparing a distribution package. Added a project panel button for showing the plugin shortcuts list. Added a 'What's New?' panel. Fixed an auto-formatting scrolling artifact. Implemented navigation to the "logical" file (vs. the auto-generated one) from the output panel. To avoid the DLLs getting locked by the OS, use the MSI file for installation.
    - Home Access Plus+: v9.7: Updated: JSON.net. Fixed: issue with the Windows 8 app. Added: Windows 8.1 app; Win: self-signed HAP+ install support; Win: delete file support; timeout for the Logon Tracker. Removed: error dialogs on the user card. Fixed: green line showing over the booking form. Note: a web.config file update is required.
    - WPF Extended DataGrid: WPF Extended DataGrid 2.0.0.10 binaries: Row summaries are now updated whenever autofilter values are modified.
    - Community Forums NNTP bridge: Community Forums NNTP Bridge V55 (LiveConnect): A build which can be used with the new LiveConnect authentication and the MSDN forums. It fixes a bug where authentication stopped working after 1 hour. A logfile is stored in "%AppData%\Community\CommunityForumsNNTPServer". If you have any problems, please feel free to send the file "LogFile.txt" or attach it to an issue.
    - xUnit.net - Unit testing framework for C# and .NET (a successor to NUnit): xUnit.net Visual Studio Runner: A placeholder for downloading Visual Studio runner VSIX files, in case the Gallery is down (or you want to downgrade to older versions).
    - Social Network Importer for NodeXL: SocialNetImporter (v.1.9.1): The "Include me" option is back; fixed the recently reported login bug.
    - VeraCrypt: VeraCrypt version 1.0c: Changes between 1.0b and 1.0c (11 November 2013): Set the minimum required version correctly in the volume header (this value must always follow the program version after any major changes). This also solves the hidden volume issue.
    - Captcha MVC: Captcha MVC 2.5: v2.5: Added support for MVC 5. The DefaultCaptchaManager no longer throws an error if the captcha values were entered incorrectly. Minor changes. v2.4.1: Fixed issues with deleting incorrect values of the captcha token in the SessionStorageProvider, which could lead to the captcha not working with the SessionStorageProvider. Minor changes. v2.4: Changed the IIntelligencePolicy interface, added ICaptchaManager as a parameter for all methods. Improved font size ...
    - Role Based Views in Microsoft Dynamics CRM 2011: Role Based Views in CRM 2011 - 1.0.0.0: Set the default view for a user for a particular entity based on security role. Hide/show views for a user for a particular entity based on his security role. Choose the preferred role for a user for view configuration when the user has more than one security role in the system. Ability to exclude/include a user from view configuration as per business requirements.
    - Duplica: duplica 0.2.498: This is the first stable release.
    - DNN Blog: 06.00.01: The first bugfix release of the new v6 blog module. Changes: added some robustness in v5-v6 scripts to cater for some rare upgrade scenarios; changed the name of the module definition to avoid a clash with Evoq Social; added a sitemap provider.
    - Stock Track: Version 1.2 Stable: Overhaul and re-think of the user interface in normal mode. Added stock history view in normal mode. Allows the user to enter orders in normal mode. Allows advanced users to run database queries within the program. Improved sales statistics feature, able to calculate against a single category. Updated database script file. Now compatible with lower versions of SQL Server.
    - VG-Ripper & PG-Ripper: VG-Ripper 2.9.50: NEW: added support for "ImageHostHQ.com", "ImgMoney.net", "ImgSavy.com" and "PixTreat.com" links. Bug fixes.
    - VidCoder: 1.5.11 Beta: Added Encode Details window, exposing elapsed time, ETA, current and average FPS, running file size, current pass and pass progress; open it by going to Windows -> Encode Details while an encode is running. The subtitle dialog now disables the "Burn In" checkbox when it's either unavailable or the only option, and disables "Forced Only" when the subtitle type doesn't support the "Forced" flag. Updated HandBrake core to SVN 5872. Fixed crash in the preview window when a source fil...
    - Wsus Package Publisher: Release v1.3.1311.02: Added three new actions in custom updates: work with files (copy, delete, rename), work with folders (add, delete, rename) and work with registry keys (add, delete, rename). Fixed a bug where, after resigning an update, the display was not refreshed. Modified the way WPP sorts rows in 'Updates Detail Viewer' and 'Computer List Viewer' so that dates are sorted correctly. Added a tab in the settings form to set proxy settings for when WPP needs to go on the Internet. Fixed a bug where 'Manage Catalogs Subsc...
    - uComponents: uComponents v6.0.0: This release of uComponents will compile against and support the new API in Umbraco v6.1.0. What's new in uComponents v6.0.0? New DataTypes: Image Point, XML DropDownList, XPath Templatable List. New features / resolved issues: the following workitems have been implemented and/or resolved: 14781 14805 14808 14818 14854 14827 14868 14859 14790 14853 14790 (DataType Grid) 14788 14810 14873 14833 14864 14855 / 14860 14816 14823. Drag & drop support for rows. Su...
    - SmartStore.NET - Free ASP.NET MVC Ecommerce Shopping Cart Solution: SmartStore.NET 1.2.1: New features: added option "Limit to current basket subtotal" to the HadSpentAmount discount rule; items in product lists can be labelled as NEW for a configurable period of time; product templates can optionally display a discount sign when discounts were applied; added the ability to set multiple favicons depending on stores and/or themes; plugin management: multiple plugins can now be (un)installed in one go; added a field for the HTML body id to the store entity; (Developer) new property 'Extra...

    New Projects

    - .Net PG: Just a university project.
    - A2D: 1. Cache system 2. Event system 3. IoC 4. SQL dispatcher system 5. Session system 6. ???Command Bus 7. ????
    - Advantage Browser: A web browser made in Visual Basic 2010 for all to add to and learn from, or just use.
    - BarCoder: BarCoder is a C# web app for creating EAN-8 and EAN-16 bar codes in vector graphic image format.
    - CLIDE .NET: CLIDE .NET, the command line IDE for .NET. Because code is just code.
    - Devcken Java Library: Devcken's Java library.
    - Double Sides Flipping Control - Windows Phone: The control features a two-sided element which flips across its center in both directions based on horizontal ge...
    - FigTree FHMS (Funeral Home Management System): The FigTree FHMS application is outfitted for the daily operations of your funeral home.
    - InChatter: InChatter is an instant messaging communication module for desktop applications, written in C#. The server is a WCF program.
    - Mod.Training: Some helpful examples about Orchardy stuff.
    - Plupload MVC4 Demo: This project shows how to implement Plupload in an MVC application.
    - Quality Control Management System: QCMS is a web-based test management system.
    - Quan Ly Nha Thuoc: A pharmacy management project. Email: phuoc.nh2953@sinhvien.hoasen.edu.vn
    - Randomchaos DX11 Engine: An open-source C++ DX11 engine.
    - Service Gateway: The service gateway enables composition and agility for web sites and services.
    - Sortable objects: Sortable objects, server and client side.
    - STSADM ExportCrawlLog - SP Foundation 2010: An STSADM extension to see/export SharePoint Foundation 2010 MSSearch configuration and crawl logs.
    - TACACS Plus Extended: tacacs+ updates to support subnet-specific configurations and reduced configuration with LDAP access.
    - TKinect: Framework for testing Kinect applications.
    - WebApi Data Wrapper: This project is a collection of wrappers for Web API.
    - WPF ExpressionEditor: A control representing an expression editor. Allows usage of custom expression parsers.

    Read the article

  • Windows Media Player won't launch on Vista - how to repair or reinstall it?

    - by rpm1200
    My friend asked me to look at her Acer Aspire laptop with Vista Home Premium as it is no longer playing DVDs. I found that Windows Media Player would not launch. I found this thread, which contained a number of suggestions, none of which solved the problem. Here is what I tried:
    - Ran WMP via the desktop shortcut, the QuickLaunch bar, and Program Files\Windows Media Player\wmplayer.exe directly. In all cases, wmplayer would launch then terminate immediately (verified through the Processes tab in Task Manager).
    - Ran wmplayer.exe as Administrator. The UAC dialog would come up, I'd approve, then wmplayer would launch and terminate immediately.
    - Uninstalled all non-Microsoft media programs except RealPlayer, iTunes, QuickTime and Acer Arcade (the laptop owner uses all those apps).
    - Ran Program Files\Windows Media Player\setup_wm.exe as Administrator; it launched but said that a newer version of WMP was already installed.
    - Deleted the "Windows Media" folder located under %userprofile%\appdata\local\Microsoft, then tried starting WMP - wmplayer would launch and terminate immediately.
    - Registered wmp.dll by typing "regsvr32 wmp.dll" in an Administrator cmd window, then tried starting WMP - wmplayer would launch and terminate immediately.
    - Ran "SFC /SCANFILE" in an Administrator cmd window - got an error message that it found invalid system files and could not fix them, and to look at the log file cbs.log. The log file shows broken files associated with Windows Sidebar (which the user does not use) but none relating to WMP.
    - Logged off to safe mode and ran "SFC /SCANFILE" in an Administrator cmd window again - same results.
    - Tried to download and install XP WMP - the microsoft.com site recognizes the OS as Genuine and allows the download, but when I launch the installer it says the system is not Genuine. Clicking the link directs me back to IE, where I can authenticate the system as Genuine, yet the installer still fails to recognize the system as Genuine. It is a Genuine Vista installation.
    - Tried to run update KB931621. The installer said it did not apply to the system.
    - Set Windows Media Player as default in Program Access and Defaults. Same results.
    - Ran "for %a in (%systemroot%\system32\wm*.dll) do regsvr32 /s %a" in an Administrator cmd window - same results.
    - Went to Knowledge Base article 947541 and ran the Microsoft Fix It. The Fix It ran successfully, but WMP would still launch and terminate immediately.
    - Rebooted multiple times in the process of doing all of these steps.
    After all this, I looked in the Application and Security logs. No events pertaining to WMP were logged. The computer was preinstalled with Vista Home Premium and I have the Acer backup DVDs which will reimage the drive. I do not have Vista install DVDs. Reimaging the system is not an option. I'd also rather not restore the system to an earlier point unless it's absolutely necessary. What else can I do to repair or reinstall WMP?
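    If anyone wants to repeat the bulk re-registration step with visible results, here is a rough Python equivalent of the cmd one-liner above (a sketch only, to be run from an elevated prompt; note that regsvr32 may report failure for any wm*.dll that simply has no registration entry point, so not every "FAILED" line is a real problem):

      # Sketch: re-register every Windows Media DLL and report per-DLL status,
      # mirroring the "for %a in (...) do regsvr32 /s %a" one-liner above.
      import glob
      import os
      import subprocess

      system32 = os.path.join(os.environ["SystemRoot"], "System32")

      for dll in glob.glob(os.path.join(system32, "wm*.dll")):
          # /s suppresses the per-DLL dialog; a non-zero return code means
          # registration failed (or the DLL exports no DllRegisterServer).
          result = subprocess.run(["regsvr32", "/s", dll])
          status = "ok" if result.returncode == 0 else f"FAILED ({result.returncode})"
          print(f"{dll}: {status}")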

    Read the article

  • Pairing Bluetooth device with PIN fails

    - by Pikaro
    I'm trying to pair my old BlackBerry 8310 with my Linux desktop (up-to-date Debian Sid, 3.15-10.dmz.1-liquorix-amd64) using blueman and its associated tools. Scanning for the device works equally well from both sides; however, I am unable to pair the two once it comes to entering the PIN. If I scan from my PC, I have two options in blueman-manager regarding my phone: directly selecting "pair", or selecting "setup". If I select "pair", nothing happens on my desktop, but the phone asks me to enter a PIN; if I do so, it reports that pairing has failed. During that, nothing is logged to the console. Selecting "setup" opens a configuration dialog that allows entering or generating a PIN. Either way, I get to a screen that tells me to enter the PIN on the phone, and at the same time the phone pops up the equivalent dialog. This is what one would expect to work; but whatever I enter (naturally, the same on both), both devices report that pairing has failed, and blueman-manager logs: init_services (/usr/lib/python2.7/dist-packages/blueman/main/Device.py:73) Loading services org.bluez.Error.AuthenticationFailed: Authentication Failed
    If I instead try to pair from the phone, I cannot see any kind of reaction from my desktop; all I get is the equivalent "pairing failed" message from the BlackBerry after I enter a PIN in the dialog that pops up there. hcitool scan and hciconfig -a work without complaints, but I cannot find a way to try the pairing as a whole on the console, since bluez-simple-agent seems to have been discontinued even though recommendations to use it are everywhere on Google. hcitool cc as root opens the PIN dialog on the phone, then fails with "Input/Output error" once I enter it; a normal user is not permitted to execute this command. I also tried creating /usr/lib/bluetooth/<MAC>/pincodes to manually define a persistent PIN, which seems to have had no effect. The same goes for running the different commands as root, though I'm really confused about the internal structure of the Bluetooth subsystem now: they usually and inconsistently failed with Python or DBUS errors, or just showed the same results.
    The only other Bluetooth device I have around is a pair of Creative speakers. Trying "setup" asks me to enter a key on them, which is impossible. If I try "pair", I'm asked for a PIN as I should be, but no pairing takes place, and no errors appear on the console (it just repeats their name a few times). Interestingly, I tried that before writing my question, and nothing happened in terms of PIN questions, just like with the BlackBerry, which still shows no change; I don't think I actively changed anything since then. The BlackBerry can pair with and connect to the speakers, and everything goes as one would expect, so the problem is definitely with my desktop. Thus my questions:
    1. What is that PIN window generated by, and why does it seem to appear randomly?
    2. How can I find out what, exactly, fails after trying to add the speakers, as this may give me a clue?
    3. Is there any kind of complete log that concerns itself with Bluetooth?
    4. What data can I provide to make this more solvable?
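    Since bluez-simple-agent is gone, one way to attempt the pairing from the console is to register a minimal agent over D-Bus yourself. The following is only a sketch, assuming BlueZ 5's D-Bus API (python3-dbus and python3-gi installed); the device path's MAC address is a placeholder to replace with the BlackBerry's address as shown by hcitool scan:

      # Minimal console pairing agent for BlueZ 5 (sketch, not a supported tool).
      import dbus
      import dbus.mainloop.glib
      import dbus.service
      from gi.repository import GLib

      AGENT_PATH = "/test/agent"
      DEVICE_PATH = "/org/bluez/hci0/dev_AA_BB_CC_DD_EE_FF"  # placeholder MAC

      class Agent(dbus.service.Object):
          # BlueZ calls this back when a PIN is needed for legacy pairing.
          @dbus.service.method("org.bluez.Agent1", in_signature="o", out_signature="s")
          def RequestPinCode(self, device):
              return input("Enter the PIN you will also type on the phone: ")

      dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
      bus = dbus.SystemBus()
      Agent(bus, AGENT_PATH)

      manager = dbus.Interface(bus.get_object("org.bluez", "/org/bluez"),
                               "org.bluez.AgentManager1")
      manager.RegisterAgent(AGENT_PATH, "KeyboardOnly")
      manager.RequestDefaultAgent(AGENT_PATH)

      device = dbus.Interface(bus.get_object("org.bluez", DEVICE_PATH),
                              "org.bluez.Device1")
      loop = GLib.MainLoop()
      # Pair() must run asynchronously so the main loop can service RequestPinCode.
      device.Pair(reply_handler=lambda: (print("Paired"), loop.quit()),
                  error_handler=lambda e: (print("Failed:", e), loop.quit()))
      loop.run()

    Whatever error Pair() returns here (AuthenticationFailed, AuthenticationCanceled, etc.) is usually more specific than what blueman surfaces, which may answer question 2 as well.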

    Read the article

  • Clarification On Write-Caching Policy, Its Underlying Options And How It Applies To Hard Drives And Solid-State Drives

    - by Boris_yo
    In the last week, after doing more research on the subject, I have been wondering about what I neglected all those years to understand about write-caching policy, always leaving it on the default setting. Write-caching policy improves write performance and consists of write-back caching and write-cache buffer flushing. This is how I understand all of the above, but correct me if I erred somewhere:
    - Write-through caching itself is not part of the write-caching policy per se: data is written to both the cache and the storage device, so if Windows needs that data again later, it is retrieved from the cache and not from the storage device. That means only read performance improves, as there is no waiting for the storage device to read the required data again. Since data is still written to the storage device, write performance isn't improved, and there is no risk of data loss or corruption in case of power failure or a system crash; only the data in the cache gets lost. This option seems to be enabled by default and is recommended for removable devices, with no need for the user to invoke "Safely Remove Hardware".
    - Write-back caching is similar, but without writing data to the storage device immediately: data is periodically released from the cache and written to the storage device when it is idle. In my opinion this option improves both read and write performance, but it carries risk if a power failure or system crash occurs: not only can data that was yet to be written to the storage device be lost, but the result can be file inconsistencies or a corrupted file system. Write-back caching cannot be enabled together with write-through caching, and it is not recommended if no backup power supply is available.
    - Write-cache buffer flushing, I reckon, is similar to write-back caching but enables immediate release and writing of data from the cache to the storage device right before a power outage occurs, though I don't know if it also applies to an occasional system crash. This option seems complementary to write-back caching, reducing or potentially eliminating the risk of data loss or file system corruption.
    I have questions about the relevance of the last two options to today's modern SSDs, in order to get the best performance with less wear:
    1. I know that traditional hard drives come with onboard cache (I wonder what type of cache that is), but do SSDs also come with cache? Assuming they do, is this cache faster than their NAND flash and system RAM, and worth the risk of utilizing it by enabling write-back caching? I read somewhere that a storage device's cache is generally faster than RAM, but I want to be sure. Additionally, I read that write caching should be enabled, since data that is to be written to NAND flash is kept for a while in cache, and if data gets modified a lot before finally being written, holding it and releasing it periodically reduces the number of writes to the SSD, thereby reducing its wear.
    2. Regarding write-cache buffer flushing, I heard that SSD controllers are so fast by themselves that enabling this option is not required, because they manage flushing. But once again, I don't know if SSDs have their own onboard cache and whether or not it is faster than their NAND flash and system RAM, because if it is, keeping this option enabled would make sense.
    Recently I posted a question about an issue with my Intel 330 SSD 120GB, which was the main reason to do this deeper research, suspecting the write-caching policy as the culprit of the SSD's freezing issue, on the assumption that the release of cached data is what causes the freezes. Currently I have write caching enabled and write-cache buffer flushing disabled, because I believe the SSD controller's management of write-cache flushing and Windows' write-cache buffer flushing are conflicting with each other. Since I want to troubleshoot in small steps to finally determine the source of the issue, I have decided to start with the write-caching policy, then move on to drivers, switching to AHCI later on, and finally disabling DIPM (device-initiated power management) through a registry modification, thanks to @TomWijsman.
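    To make the wear argument in question 1 concrete, here is a toy Python model (purely illustrative; real caches live in drive firmware and the OS, not in user code) contrasting how many device writes each policy issues for a block that is rewritten many times before it settles:

      # Toy model: count device writes under write-through vs. write-back
      # for a workload that rewrites the same block repeatedly.
      class WriteThroughCache:
          def __init__(self):
              self.cache, self.device_writes = {}, 0
          def write(self, block, data):
              self.cache[block] = data      # cached for fast re-reads...
              self.device_writes += 1       # ...but every write still hits the device

      class WriteBackCache:
          def __init__(self):
              self.cache, self.dirty, self.device_writes = {}, set(), 0
          def write(self, block, data):
              self.cache[block] = data
              self.dirty.add(block)         # deferred: nothing hits the device yet
          def flush(self):                  # what "write-cache buffer flushing" forces
              self.device_writes += len(self.dirty)
              self.dirty.clear()

      wt, wb = WriteThroughCache(), WriteBackCache()
      for i in range(100):                  # rewrite block 0 a hundred times
          wt.write(0, i)
          wb.write(0, i)
      wb.flush()
      print(wt.device_writes, wb.device_writes)  # 100 vs. 1: why write-back reduces wear

    The same deferral that turns 100 writes into 1 is, of course, exactly the data at risk if power is lost before the flush, which is the trade-off described above.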

    Read the article

  • Vista 64-bit, DISK BOOT FAILURE

    - by weka
    So I have this Acer Aspire AX3200-U3600A with Windows Vista (64-bit). Every night I turn it off and turn it back on in the morning. Around three weeks ago, I did a fresh factory reimage. Good as new. Then around two days ago, when I turned it on, I noticed it was running extremely slow: it would often freeze up while I had multiple applications open, when it usually never froze up. So I decided to restart my computer. Big mistake. My computer froze right after I clicked shut down. I waited a while. Nothing. Waited some minutes. Nope. I decided to shut it down by pressing the power button. Here is where the problems begin.
    When I turned it back on, I saw the Windows logo and loading bar, and then it loaded to black. I turned it off again forcefully by the power button, and then once more... then I got: "AMD Data Change... Update New Data to DMI!" Then the screen clears and I get: "AHCI Option ROM BIOS Revision: 01.05.92 Date: 02-19-2008 Copyright (c) 2006-2008 Phoenix Technologies, LTD / Port 01: Reset Port Error!! / Port 02:" Then the screen clears again, but this time this loads from the bottom: "Nvidia Boot Agent 249.0542 (copyright stuff... blah blah) / PXE-E61: Media test failure, check cable. / PXE-M0F: Exiting Nvidia Boot Agent / DISK BOOT FAILURE, INSERT SYSTEM DISK AND PRESS ENTER."
    So I try to go into Safe Mode. First of all, it doesn't load as fast. After it loads disk.sys from windows/drivers, it will wait a while (2-3 mins), THEN load. However, it loads the Acer eRecovery Management tool. I have three options: reset the computer to factory default, restore the computer from the user's backup, or exit. However, the top two options are gray and disabled, whereas Exit is in blue and definitely clickable. So obviously Safe Mode is not there...
    A strong thing to note: in the beginning, when all of this started, I did a Boot Windows Normally from pressing F8 and got to my desktop! It logged me in. I could see the icons of my files. However, my desktop was extremely slow: when I clicked the Start menu, it would wait a while, then load up the menu with JUST the gradient, no text or icons... so as you can see, it saw my HDD. Also, before anyone asks, I have NO USB storage plugged in. My mouse and keyboard are not USB inputs, I assure you. This machine came without a recovery CD, and when I went into the BIOS to change the boot order, I did NOT see a CD-ROM option. And when I tried pressing ALT+F10 to get into Acer eRecovery Management, the top two options were disabled as well.
    But sometimes on start-up I get: "Windows has encountered a problem communicating with a device connected to your computer. This error can be caused by unplugging a removable storage device such as an external USB drive while the device is in use, or by faulty hardware such as a hard drive or CD-ROM drive that is failing. Make sure any removable storage is properly connected and then restart your computer. If you continue to receive this error message, contact the hardware manufacturer. Status: 0xc00000e9 Info: An unexpected I/O error has occurred." Then I tried Last Known Good Configuration; that gives me a BSOD. What should I do?

    Read the article

  • configuring uppercut for automated build

    - by deepasun
    This is my CC.NET config file (see http://confluence.public.thoughtworks.org/display/CCNET/Configuration+Preprocessor):

    <!-- PROJECT STRUCTURE -->
    <cb:define name="WindowsFormsApplication1">
      <project name="$(projectName)">
        <workingDirectory>$(working_directory)\$(projectName)</workingDirectory>
        <artifactDirectory>$(drop_directory)\$(projectName)</artifactDirectory>
        <category>$(projectName)</category>
        <queuePriority>$(queuePriority)</queuePriority>
        <triggers>
          <intervalTrigger name="continuous" seconds="60" buildCondition="IfModificationExists" />
        </triggers>
        <sourcecontrol type="svn">
          <executable>c:\program files\subversion\bin\svn.exe</executable>
          <!--<trunkUrl>http://192.168.1.8/trainingrepos/deepasundari/WindowsFormsApplication1</trunkUrl>-->
          <trunkUrl>$(svnPath)</trunkUrl>
          <workingDirectory>$(working_directory)\$(projectName)</workingDirectory>
        </sourcecontrol>
        <tasks>
          <exec>
            <executable>$(working_directory)\$(projectName)\build.bat</executable>
          </exec>
        </tasks>
        <publishers>
          <merge>
            <files>
              <file>$(working_directory)\$(projectName)\build_output\build_artifacts\*.xml</file>
              <file>$(working_directory)\$(projectName)\build_output\build_artifacts\mbunit\*-results.xml</file>
              <file>$(working_directory)\$(projectName)\build_output\build_artifacts\nunit\*-results.xml</file>
              <file>$(working_directory)\$(projectName)\build_output\build_artifacts\ncover\*-results.xml</file>
              <file>$(working_directory)\$(projectName)\build_output\build_artifacts\ndepend\*.xml</file>
            </files>
          </merge>
          <!--<email from="[email protected]" mailhost="smtp.somewhere.com" includeDetails="TRUE">
            <users>
              <user name="YOUR NAME" group="BuildNotice" address="[email protected]" />
            </users>
            <groups>
              <group name="BuildNotice" notification="change" />
            </groups>
          </email>-->
          <xmllogger/>
          <statistics>
            <statisticList>
              <firstMatch name="Svn Revision" xpath="//modifications/modification/changeNumber" />
              <firstMatch name="ILInstructions" xpath="//ApplicationMetrics/@NILInstruction" />
              <firstMatch name="LinesOfCode" xpath="//ApplicationMetrics/@NbLinesOfCode" />
              <firstMatch name="LinesOfComment" xpath="//ApplicationMetrics/@NbLinesOfComment" />
            </statisticList>
          </statistics>
          <modificationHistory onlyLogWhenChangesFound="true" />
          <rss/>
        </publishers>
      </project>
    </cb:define>

    <cb:WindowsFormsApplication1 projectname="WindowsFormsApplication1" queuepriority="80" svnpath="http://192.168.1.8/trainingrepos/deepasundari/WindowsFormsApplication1" />

    It is not producing the build directory in code_drop, but it is updating reports.xml with the new build. What is the problem?
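    One quick way to narrow this down is to check whether the merge publisher's patterns match any files at all after build.bat runs; if they match nothing, the build step never produced artifacts and the drop directory has nothing to receive. A throwaway Python sketch (the working_directory value below is an assumption; substitute your own) could look like this:

      # Sketch: expand the merge publisher's file patterns from the config above
      # and report how many artifacts actually exist on disk.
      import glob
      import os

      working_directory = r"C:\builds\working"     # assumed value; adjust to yours
      project = "WindowsFormsApplication1"

      patterns = [
          r"build_output\build_artifacts\*.xml",
          r"build_output\build_artifacts\mbunit\*-results.xml",
          r"build_output\build_artifacts\nunit\*-results.xml",
          r"build_output\build_artifacts\ncover\*-results.xml",
          r"build_output\build_artifacts\ndepend\*.xml",
      ]

      root = os.path.join(working_directory, project)
      for pattern in patterns:
          matches = glob.glob(os.path.join(root, pattern))
          # Zero matches everywhere suggests build.bat never created the
          # artifacts that the publishers are supposed to merge and drop.
          print(f"{pattern}: {len(matches)} file(s)")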

    Read the article

  • My 2009 MacBook Logic board failed - options to proceed and how difficult?

    - by user181061
    Scannerz just gave my MacBook logic board a big fat F! I upgraded from Snow Leopard to Mountain Lion about 3 weeks ago. The system was running short of memory, so I upgraded it, and the system ran fine for about 2 weeks. Yesterday the thing started acting erratic: a lot of spinning beach balls, delays, and then some errors saying files couldn't be read to or from the drive. I figured the drive was going, because the system is over 3 years old. I ran Scannerz on it and it indicated a lot of errors and irregularities. I rescanned it in cursory mode, and none of them were repeatable, just showing up all over the place in different regions of the scan. I went through the docs, and they implied either an I/O cable is bad, a connection is damaged, or the logic board is bad. I tossed on my backup of Snow Leopard that I cloned from the original hard drive, because I figured Mountain Lion was to blame, and booted from the USB drive with the clone on it. It wasn't. I performed scans on every single port, and errors and irregularities that couldn't be repeated were showing up on every single one of them. Then, for kicks, I put a CD into the CD player. Scannerz doesn't test optical drives, but I figured surely that would work. No it won't: more spinning beach balls and messages telling me it can't be read. It was working fine 3 days ago.
    I know a lot of people don't like MacBooks, but mine's been great, at least until now. The system is a mid-2009 MacBook. In my opinion, it's a complete waste to toss this system. The display is too good, the keyboard works great, and it still looks good; plus this type of MacBook still has the FireWire 400 port, which I use for Time Machine backups. I've tried reseating the RAM; it didn't do anything. I shut the system down, put in the old RAM, booted to Snow Leopard, and the problems persist. Here are my questions:
    1. The Scannerz documentation somewhere said something about the AirPort card not being seated properly, but when I go to iFixit it's apparent, at least I think it's apparent, that this isn't a slot-type AirPort card that the user can easily install or remove. If the cables or connections to the AirPort card are bad, could they be causing this problem? How about any other connections that can be intermittent, failing or erratic?
    2. Are there any kinds of resets that I could possibly do to get rid of this?
    3. For any of those that have replaced a logic board on a MacBook, if this really is the culprit, are there any "gotchas" I need to be aware of?
    As an FYI, I replaced the hard drive on an old iBook @500MHz that I had a long time ago, and I replaced the drive on a 1.33GHz PowerBook about 6 years ago. You have to be careful, but using some of the info on web sites like iFixit it's not that hard. Time consuming, but not that hard. The Intel-based MacBooks look to me like they're easier to service than either of those. I'm thinking about getting a unit off of eBay that matches mine but has something else wrong with it, like a busted display. I REFUSE to buy a new system. A guy at my office has a 2007 Mac Pro and he can't upgrade to Mountain Lion because his system is "obsoleted." That's ridiculous. If you pay nearly $7,500 for a system, it shouldn't be trash just because Apple decides they don't have enough money (sorry for the soap box, but it's true, IMO!). Any input is appreciated.

    Read the article

  • Hard drive write speed - finding a lighter antivirus?

    - by Shingetsu
    I recently have been getting a lot of system lag (for example, the mouse and the display in general take about 15 seconds to react in the worst cases). After a lot of monitoring of resources, I found that the problem mainly happens when too much disk I/O is being done. Three culprits have been identified:
    - My browser had the highest write I/O, with 35,000,000 I/O write bytes.
    - Steam had the highest read I/O (when IDLE!!!) with 106,000,000 I/O read bytes.
    - My antivirus (in both cases I will mention below) was the runner-up in both: 30,000,000ish write and 80,000,000ish read.
    The first AV I had was Avast!, which I had liked on my previous system. After noticing it taking so much I/O, I switched to Panda (supposing it wouldn't use TOO much during the idle phase). However, it only used a bit less I/O, just a lot less memory and CPU and somewhat more network. My browser at the moment is Maxthon 3 (which I like a lot). Before this I was running Chrome, which had similar data and much higher CPU when running in the background was enabled. I'm not going to be running Steam all the time, and there aren't many alternatives to it. I like my browser very much, but I AM willing to switch if there's an obvious problem (I'm in programming; however, I'm not a very good sysadmin, especially not when it comes to Windows). Finally, my system almost stops lagging when I turn off the antivirus (and preferably Steam) (some lag remains, but once every 5-6 hours for a few seconds, so it isn't a big problem). My question has a few parts:
    1. Is it possible to configure Steam to lower its I/O usage? (And maybe network while we're at it?)
    2. Which antivirus (very preferably free) uses the lowest I/O while idle? (I leave the PC alone during active scans, so that isn't a problem.)
    3. Is there an obvious problem with my current browser, and if so, is there a way to fix it, or should I switch, and if so, to what? (P.S. I've been on FFox for some time too.)
    Info on the system: Windows 7 (32-bit T_T; I am getting a new one in a few months but I want to keep using this system during that time). Hard drive (main) is a RAID 0. (I also have an external 1TB drive which contains Steam, and Steam alone; as such it doesn't get used by much of anything other than Steam and isn't a very large problem. However, Steam still uses some I/O of the registry.) CPU: Intel(R) Core(TM)2 CPU [email protected]. RAM: 6GB (3.25GB usable) (this and the CPU have little effect, as shown in the previous section). Additional info: memory usage during problematic times: 44%; CPU usage during problematic times: 35%. Page file: main drive: system managed; 1TB drive: none. The current system I'm using is about 6 years old and is mainly a placeholder while I await the new one in a few months.
    Final words: this is my first post on Super User (this question wouldn't feel right on Stack Overflow, where I usually stay). If it doesn't have its place here, please tell me. If anything is wrong with it, same.
    Edit: Technically I'm looking for a live threat detection program with minimal I/O usage. I already have good active scan capability: Kaspersky (the free scanner uses the paid database) and MalwareBytes.
    Edit 2: Noticed another one. It seems that Windows Media Player has been using stuff even when off! Turning it off and restarting now; if the problem is fixed, I'll tell you. The reason I didn't notice it before was that I didn't have the resource manager in front of me at the MOMENT of the problem. Now I did, and it was at the very top of the list!
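    To compare antivirus candidates by measured bytes rather than by feel, a short Python script using the third-party psutil package can sample per-process disk I/O over an idle window (a sketch; the 60-second interval is an arbitrary choice, and some system processes will refuse access unless it runs elevated):

      # Sketch: sample idle-time disk I/O per process and report the top movers.
      # pip install psutil; run while the machine is otherwise idle.
      import time
      import psutil

      INTERVAL = 60  # seconds between the two snapshots

      def snapshot():
          counters = {}
          for proc in psutil.process_iter(["name"]):
              try:
                  io = proc.io_counters()
                  counters[proc.pid] = (proc.info["name"], io.read_bytes, io.write_bytes)
              except (psutil.AccessDenied, psutil.NoSuchProcess):
                  pass  # some processes refuse access; skip them
          return counters

      before = snapshot()
      time.sleep(INTERVAL)
      after = snapshot()

      # Rank processes by total bytes moved during the idle window.
      deltas = []
      for pid, (name, r1, w1) in after.items():
          if pid in before:
              _, r0, w0 = before[pid]
              deltas.append((r1 - r0 + w1 - w0, name))
      for total, name in sorted(deltas, reverse=True)[:10]:
          print(f"{name}: {total / 1024:.0f} KiB in {INTERVAL}s")

    Running this once per candidate AV, under the same idle conditions, gives directly comparable numbers.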

    Read the article

  • Monitor randomly shutting down, computer accepting no input, need to restart to get working

    - by Sebastian Lamerichs
    First off, the spec list:
    - OS: Windows 7 Ultimate 64-bit SP1
    - CPU: i7-4820k @ 3.7GHz (stock)
    - GPU: Two 3GB Radeon HD 7970s @ 1.05GHz
    - Mobo: AsRock X79 Extreme6
    - HDD: 2TB Seagate Barracuda 7200rpm
    - RAM: 16GB quad-channel Kingston 1600MHz
    - PSU: Antec HCG 900W
    - Monitors: Acer S220HQL 1920x1080 + ViewSonic VA2251 1920x1080, plugged into different GPUs
    My problem is that, on a daily-ish basis, my monitors will turn off and not turn back on. My computer will still be running (GPU/CPU/case fans all still going), but the monitors will not turn back on. Additionally, it seems to cease all network activity. It doesn't seem to log any errors at all. I've verified that this is not a monitor issue: when I press the num/caps/scroll lock buttons on my keyboard, the lights don't change, so the computer is clearly not accepting input.
    I have noticed a few other people on the internet with this problem, and some have claimed that it was solved by disabling PCI Express Link State Power Management, but the issue still occurs for me after this. Whilst my CPU and GPUs both run at 100% 24/7, the temperatures are certainly not at dangerous levels, with the CPU averaging 65°C and the GPUs at 70°C and 78°C average. All components are brand new. I have tried forcing MSI Afterburner to start when Windows starts and to force a constant voltage, as this fixed the issue for a few days for another user, but he reported back saying that it had stopped working properly again, so I'm not putting too much faith in this working. Many people have said to adjust display sleep mode settings, but this will clearly not work, as the keyboard lights would still work if the monitors were the issue.
    The closest I can get to a log file for this issue is the following Folding@home log:
    14:45:21:WU01:FS00:0x17:Completed 1120000 out of 2000000 steps (56%)
    14:46:43:WU00:FS01:0x17:Completed 480000 out of 2000000 steps (24%)
    14:46:49:WU01:FS00:0x17:Completed 1140000 out of 2000000 steps (57%)
    14:48:30:WU01:FS00:0x17:Completed 1160000 out of 2000000 steps (58%)
    14:49:55:WU01:FS00:0x17:Completed 1180000 out of 2000000 steps (59%)
    As you can see, the second GPU (FS01) stops computation approximately three and a half minutes before the issue occurs (it should be completing 1% every 80-120 seconds), and the first GPU (FS00) continues for a few minutes more before the logs just end. As far as I can tell, the computer had a network failure at the time the first GPU stopped working; the latest IRC message I received during this period was at 14:47:58. That being said, there may simply have been no messages between then and 14:50:00, so I'm going to connect a laptop to the same bouncer to double-check if it happens again.
    The GPUs functioned perfectly well in another computer for a significant period of time, so I'm fairly confident that they aren't the issue, which means this is being caused by either software or the motherboard, or possibly RAM. I really hope it's software. I heard from a forum board that there was a patch from Microsoft that fixed this problem, but "I've forgot which KB it was or the google search terms I used to find the patch, LOL.", so that's not much help. I haven't seen it mentioned by anyone else on about a dozen threads about this issue either. The computer is plugged in via a surge-protected power board, and I've run several other computers and pieces of hardware through it with no issues, so that is not the cause. I have just set the hard disk to never turn off, although I don't believe that will solve the issue.
Strangely, this has only happened when I'm not at the computer (which is actually a minority of the time). Until today it had only happened when I had not been actively using the computer for 6 hours, but today it happened within 10-30 minutes of me last using the computer actively. I have enabled file logging from MSI Afterburner, so hopefully this will shed some light on the issue, but I'm not too optimistic. I've heard that it could be a motherboard problem, but I figured I should ask around before RMAing it. Any help?
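    In case it helps anyone analyse longer logs, here is a rough Python sketch that parses progress lines like the ones above and flags any gap well beyond the normal 80-120 seconds per 1%, to pin down the minute each GPU slot stalled (the log filename is a placeholder, and it ignores midnight wraparound):

      # Sketch: flag stalls in Folding@home-style progress logs, per slot.
      import re
      from datetime import datetime, timedelta

      LINE = re.compile(r"(\d\d:\d\d:\d\d):WU\d\d:(FS\d\d):0x17:Completed \d+ out of \d+ steps")

      last_seen = {}  # slot -> timestamp of its previous progress line
      with open("log.txt") as fh:          # placeholder filename
          for line in fh:
              m = LINE.search(line)
              if not m:
                  continue
              t = datetime.strptime(m.group(1), "%H:%M:%S")
              slot = m.group(2)
              # Anything well over ~120 s between updates is suspicious.
              if slot in last_seen and t - last_seen[slot] > timedelta(seconds=150):
                  print(f"{slot} gap: {last_seen[slot].time()} -> {t.time()}")
              last_seen[slot] = t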

    Read the article
