Search Results

Search found 17944 results on 718 pages for 'size'.


  • Error during Time Machine backups on OS X Lion

    - by user92401
    After I turn on my machine, the first couple of Time Machine backups seem to go OK, but after about an hour I get this error: Unable to complete backup. An error occurred while creating the backup folder. Latest successful backup: 7/31/11 at 12:32 PM I'm running 10.7. Time Machine is backing up an internal HD to an external USB HD. I've already run Disk Utility to repair the Time Machine partition. It's a relatively new hard drive and didn't have any issues. Here's what I've found in the Console's log filtered for backupd: 7/31/11 12:31:21.223 PM com.apple.backupd: Starting standard backup 7/31/11 12:31:21.447 PM com.apple.backupd: Backing up to: /Volumes/MyMac TM Backup/Backups.backupdb 7/31/11 12:31:29.146 PM com.apple.backupd: 983.7 MB required (including padding), 391.90 GB available 7/31/11 12:32:19.471 PM com.apple.backupd: Copied 3156 files (36.0 MB) from volume Macintosh HD. 7/31/11 12:32:20.017 PM com.apple.backupd: Copied 3173 files (36.0 MB) from volume LI. 7/31/11 12:32:20.136 PM com.apple.backupd: 934.8 MB required (including padding), 391.86 GB available 7/31/11 12:32:54.755 PM com.apple.backupd: Copied 916 files (117.8 MB) from volume Macintosh HD. 7/31/11 12:32:54.894 PM com.apple.backupd: Copied 933 files (117.8 MB) from volume LI. 7/31/11 12:32:55.937 PM com.apple.backupd: Starting post-backup thinning 7/31/11 12:32:55.937 PM com.apple.backupd: No post-back up thinning needed: no expired backups exist 7/31/11 12:32:55.960 PM com.apple.backupd: Backup completed successfully. 7/31/11 1:21:28.624 PM com.apple.backupd: Starting standard backup 7/31/11 1:21:28.631 PM com.apple.backupd: Backing up to: /Volumes/MyMac TM Backup/Backups.backupdb 7/31/11 1:21:28.682 PM com.apple.backupd: Error: (22) setxattr for key:com.apple.backupd.HostUUID path:/Volumes/MyMac TM Backup/Backups.backupdb/Will’s Mac Pro size:37 7/31/11 1:21:28.683 PM com.apple.backupd: Error: (22) setxattr for key:com.apple.backupd.HostUUID path:/Volumes/MyMac TM Backup/Backups.backupdb/Will’s Mac Pro size:37 7/31/11 1:21:38.694 PM com.apple.backupd: Backup failed with error: 2
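
    The failures start at the setxattr error on the com.apple.backupd.HostUUID attribute of the backup's host folder. A hedged diagnostic sketch (the path is taken from the log; deleting the attribute so Time Machine can recreate it is an unverified guess, not an established fix):

        xattr -l "/Volumes/MyMac TM Backup/Backups.backupdb/Will’s Mac Pro"
        sudo xattr -d com.apple.backupd.HostUUID "/Volumes/MyMac TM Backup/Backups.backupdb/Will’s Mac Pro"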

    Read the article

  • ffmpeg - h264 to xvid creates large file

    - by fatnic
    I'm trying to use ffmpeg to convert an h264/aac video file to an xvid/mp3 file so I can play it in my ultra-cheap media player. At the moment the converted video file is TWICE the size of the original mp4. Is there any way to get a smaller file size without losing too much quality? Even a drop to -qmin 1 is pretty awful! The command I'm using is ffmpeg -i input.mp4 -vcodec libxvid -sameq -acodec libmp3lame -ab 128k -ac 2 output.avi And the ffmpeg output is Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.mp4' Metadata: major_brand : isom minor_version : 1 compatible_brands: isomavc1 Duration: 01:34:27.69, start: 0.000000, bitrate: 1520 kb/s Stream #0.0(und): Video: h264, yuv420p, 720x304 [PAR 1:1 DAR 45:19], 1387 kb/s, 25 fps, 25 tbr, 25k tbn, 50 tbc Stream #0.1(und): Audio: aac, 48000 Hz, stereo, s16, 128 kb/s Output #0, avi, to 'output.avi': Metadata: ISFT : Lavf52.64.2 Stream #0.0(und): Video: mpeg4, yuv420p, 720x304 [PAR 1:1 DAR 45:19], q=2-31, 200 kb/s, 25 tbn, 25 tbc Stream #0.1(und): Audio: libmp3lame, 48000 Hz, stereo, s16, 128 kb/s Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1
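
    Note: -sameq does not mean "same quality"; it reuses the source quantizer values with a different codec and often inflates the bitrate. A hedged alternative is to drop -sameq and set the quality or bitrate explicitly (the values below are illustrative, not tuned for this file):

        # constant-quality style: lower -qscale = better quality / larger file (2-5 is a common range)
        ffmpeg -i input.mp4 -vcodec libxvid -qscale 4 -acodec libmp3lame -ab 128k -ac 2 output.avi
        # or aim for roughly the source video bitrate instead
        ffmpeg -i input.mp4 -vcodec libxvid -b 1200k -acodec libmp3lame -ab 128k -ac 2 output.avi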

    Read the article

  • iperf max udp multicast performance peaking at 10Mbit/s?

    - by Tom Frey
    I'm trying to test UDP multicast throughput via iperf but it seems like it's not sending more than 10Mbit/s from my dev machine: C:\> iperf -c 224.0.166.111 -u -T 1 -t 100 -i 1 -b 1000000000 ------------------------------------------------------------ Client connecting to 224.0.166.111, UDP port 5001 Sending 1470 byte datagrams Setting multicast TTL to 1 UDP buffer size: 8.00 KByte (default) ------------------------------------------------------------ [156] local 192.168.1.99 port 49693 connected with 224.0.166.111 port 5001 [ ID] Interval Transfer Bandwidth [156] 0.0- 1.0 sec 1.22 MBytes 10.2 Mbits/sec [156] 1.0- 2.0 sec 1.14 MBytes 9.57 Mbits/sec [156] 2.0- 3.0 sec 1.14 MBytes 9.55 Mbits/sec [156] 3.0- 4.0 sec 1.14 MBytes 9.56 Mbits/sec [156] 4.0- 5.0 sec 1.14 MBytes 9.56 Mbits/sec [156] 5.0- 6.0 sec 1.15 MBytes 9.62 Mbits/sec [156] 6.0- 7.0 sec 1.14 MBytes 9.53 Mbits/sec When I run it on another server, I'm getting ~80Mbit/s, which is quite a bit better but still nowhere near the 1 Gbps link limit I should be getting: C:\> iperf -c 224.0.166.111 -u -T 1 -t 100 -i 1 -b 1000000000 ------------------------------------------------------------ Client connecting to 224.0.166.111, UDP port 5001 Sending 1470 byte datagrams Setting multicast TTL to 1 UDP buffer size: 8.00 KByte (default) ------------------------------------------------------------ [180] local 10.0.101.102 port 51559 connected with 224.0.166.111 port 5001 [ ID] Interval Transfer Bandwidth [180] 0.0- 1.0 sec 8.60 MBytes 72.1 Mbits/sec [180] 1.0- 2.0 sec 8.73 MBytes 73.2 Mbits/sec [180] 2.0- 3.0 sec 8.76 MBytes 73.5 Mbits/sec [180] 3.0- 4.0 sec 9.58 MBytes 80.3 Mbits/sec [180] 4.0- 5.0 sec 9.95 MBytes 83.4 Mbits/sec [180] 5.0- 6.0 sec 10.5 MBytes 87.9 Mbits/sec [180] 6.0- 7.0 sec 10.9 MBytes 91.1 Mbits/sec [180] 7.0- 8.0 sec 11.2 MBytes 94.0 Mbits/sec Does anybody have any idea why this is not achieving anywhere close to the link limit (1 Gbps)? Thanks, Tom
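
    The "UDP buffer size: 8.00 KByte (default)" line is one likely bottleneck. A sketch worth trying on both ends (the buffer and bandwidth values are illustrative; -w sets the socket buffer, -l the datagram size, and the second line is the matching receiver):

        C:\> iperf -c 224.0.166.111 -u -T 1 -t 100 -i 1 -b 900M -w 1M -l 1470
        C:\> iperf -s -u -B 224.0.166.111 -i 1 -w 1M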

    Read the article

  • Recovering drive via boot to Win7 setup command prompt

    - by Valamas
    I am trying to recover data from two old IDE drives. Drive1 has been successful, but something is wrong with Drive2. It does not appear as a drive letter. Due to limited legacy hardware, the only way I can see these drives is to boot using Windows 7 setup and go to the command prompt. Without going further as to why, my question is how I can access the data from this command prompt. I discovered the DISKPART command and, while a first-time user, it looked like something that could fix my problem. Here are the results of my diskpart commands. At the bottom is an image of the commands taken with a camera. Drive2 is present, because I can see it when using the diskpart command. How can I copy the information using a robocopy script if the drive letter is not available? How can I assign a drive letter? Is there any repair command I need to execute? When I execute DISKPART, the following is what I see. DISKPART> LIST DISK Disk### Status Size Free Disk 5 Online 37 GB 2048 KB So then I select disk 5. DISKPART> SELECT DISK 5 "Disk 5 is now the selected disk" When I list partitions: DISKPART> LIST PARTITION Partition ### Type Size Partition 1 Primary 101 MB Partition 2 Primary 37 GB So I select partition 2. "Partition 2 is now the selected partition." I then try to assign a drive letter. DISKPART> ASSIGN LETTER=G "There is no volume specified." "Please select a volume and try again." When I list volumes, the drive is not present. DISKPART> LIST VOLUME Result of the above commands
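
    Because DISKPART lists the partition but no corresponding volume, Windows is not recognizing a mountable filesystem on it, so ASSIGN has nothing to attach a letter to; the filesystem has to be repaired or recovered first. For reference, a sketch of the copy step once a volume does show up (the volume number and target path are illustrative):

        DISKPART> LIST VOLUME
        DISKPART> SELECT VOLUME 3
        DISKPART> ASSIGN LETTER=G
        DISKPART> EXIT
        robocopy G:\ D:\recovered /E /COPY:DAT /R:1 /W:1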

    Read the article

  • Postfix misconfigured? 550 Sender rejected from receiving server

    - by wnstnsmth
    We use Postfix on our CentOS 6 machine, with the following configuration. We use PHP's mail() function to send rudimentary password reset emails, but there is a problem. As you will see, mydomain and myhostname are correctly set, AFAIK. alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix data_directory = /var/lib/postfix debug_peer_level = 2 html_directory = no inet_interfaces = localhost inet_protocols = all mail_owner = postfix mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man mydestination = $myhostname, localhost.$mydomain, localhost mydomain = ***.ch myhostname = test.***.ch newaliases_path = /usr/bin/newaliases.postfix queue_directory = /var/spool/postfix readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES sample_directory = /usr/share/doc/postfix-2.6.6/samples sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop unknown_local_recipient_reject_code = 550 Now this is what appears in Postfix's /var/log/maillog upon sending an email to ***.***@***.ch, with ***.ch being the same domain our sending server test.***.ch is on: Dec 13 16:55:06 R12X0210 postfix/pickup[6831]: E6D6311406AB: uid=48 from=<apache> Dec 13 16:55:06 R12X0210 postfix/cleanup[6839]: E6D6311406AB: message-id=<20121213155506.E6D6311406AB@test.***.ch> Dec 13 16:55:07 R12X0210 postfix/qmgr[6832]: E6D6311406AB: from=<apache@test.***.ch>, size=1276, nrcpt=1 (queue active) Dec 13 16:55:52 R12X0210 postfix/smtp[6841]: E6D6311406AB: to=<***.***@***.ch>, relay=mail.***.ch[**.**.249.3]:25, delay=46, delays=0.18/0/21/24, dsn=5.0.0, status=bounced (host mail.***.ch[**.**.249.3] said: 550 Sender Rejected (in reply to RCPT TO command)) Dec 13 16:55:52 R12X0210 postfix/cleanup[6839]: 8562C11406AC: message-id=<20121213155552.8562C11406AC@test.***.ch> Dec 13 16:55:52 R12X0210 postfix/bounce[6848]: E6D6311406AB: sender non-delivery notification: 8562C11406AC Dec 13 16:55:52 R12X0210 postfix/qmgr[6832]: 8562C11406AC: from=<>, size=3065, nrcpt=1 (queue active) Dec 13 16:55:52 R12X0210 postfix/qmgr[6832]: E6D6311406AB: removed Dec 13 16:55:52 R12X0210 postfix/local[6850]: 8562C11406AC: to=<root@test.***.ch>, orig_to=<apache@test.***.ch>, relay=local, delay=0.13, delays=0.07/0/0/0.05, dsn=2.0.0, status=sent (delivered to mailbox) Dec 13 16:55:52 R12X0210 postfix/qmgr[6832]: 8562C11406AC: removed So the receiving server rejects the sender (line 4 of the log output). We have tested it with one other recipient and it worked, so this problem might be completely unrelated to our settings, but related to the recipient. Still, with this question, I want to make sure we haven't made an obvious misconfiguration on our side.
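
    One thing that stands out in the log is that the envelope sender is apache@test.***.ch, an address that does not exist outside this host; some receiving servers reject exactly that with "550 Sender Rejected". A hedged sketch of rewriting the outgoing sender with Postfix generic maps (the replacement address is illustrative):

        # /etc/postfix/main.cf
        smtp_generic_maps = hash:/etc/postfix/generic

        # /etc/postfix/generic
        apache@test.***.ch    no-reply@***.ch

        postmap /etc/postfix/generic && postfix reload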

    Read the article

  • Outlook 2007 message formatting - pasted images

    - by Jack
    When you cut and paste an image into the message window while composing a new email, the image displays as you would expect and formatting it appears straightforward. However, the pain happens when you click Send. The recipient notices that the image resizes with the size of their Outlook window. The original image size is ignored and no scrollbars appear. How do you stop this behaviour? When said image is pasted, say you want to place a graphic such as an arrow on top of the image. Using the ribbon, selecting the Insert tab and choosing Shapes, you go ahead and select the arrow shape and plonk it on top of the image, just where you want it, give it a nice colour and then send the email. As the recipient resizes their Outlook message window, the image resizes but the shape remains where it was; now who wants that, Microsoft! So, how do you a) make the shape resize with the image, so the shape stays where I put it in relation to the image, and b) stop the image resizing in the first place?

    Read the article

  • Windows cannot open a directory with a too-long name created by Linux

    - by Tim
    Hello! My laptop has two OSes: Windows 7 and Ubuntu 10.10. A Windows 7 NTFS partition is mounted in Ubuntu. In Ubuntu, I created a directory under a rather deep path and with a long name, specifically "a set of size-measurable subsets ie sigma algebra". Now in Windows, I cannot open the directory, which I guess is because the name is too long, nor can I rename it. I was wondering if there is some way to access that directory under Windows? Preferably without changing the directory, but I will have to if necessary. Thanks and regards! Update: This is the output using "DIR /X" in cmd.exe, which does not shorten the directory name: F:\science\math\Foundations of mathematics\set theory\whether element of a set is also a set\when element is set\when element sets are subsets of a universal set\closed under some set operations\sigma algebra of sets>DIR /X Volume in drive F is Data Volume Serial Number is 0492-DD90 Directory of F:\science\math\Foundations of mathematics\set theory\whether element of a set is also a set\when element is set\when element sets are subsets of a universal set\closed under some set operations\sigma algebra of sets 03/14/2011 10:43 AM <DIR> . 03/14/2011 10:43 AM <DIR> .. 03/08/2011 10:09 AM <DIR> a set of size-measurable subsets ie sigma algebra 02/12/2011 04:08 AM <DIR> example 02/17/2011 12:30 PM <DIR> general 03/13/2011 02:28 PM <DIR> mapping from sigma algebra to R or C i.e. measure 02/12/2011 04:10 AM <DIR> msbl mapping from general msbl space to Borel msbl R or C 02/12/2011 04:10 AM 4,928 new file~ 03/14/2011 10:42 AM <DIR> temp 03/02/2011 10:58 AM <DIR> with Cartesian product of sets 1 File(s) 4,928 bytes 9 Dir(s) 39,509,340,160 bytes free
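
    It is the full path, not the single directory name, that exceeds what many Windows tools handle (roughly 260 characters). A minimal workaround sketch, assuming the NTFS partition is mounted read-write in Ubuntu, is to shorten the offending component from the Linux side (the new name is illustrative):

        # run from inside the parent directory ".../closed under some set operations/sigma algebra of sets"
        mv "a set of size-measurable subsets ie sigma algebra" "sigma algebra of size-measurable subsets"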

    Read the article

  • Why does Exchange 2003 silently reject emails with large attachments?

    - by Cypher
    Our environment: Exchange Server 2003 Standard, single instance, running on Windows Server 2003 Standard, configured to not send/receive mail with attachments larger than 10 MB. NDRs are not enabled. The issue: When an external sender sends an email with an attachment larger than 10MB, Exchange, as configured, does not receive the message. However, the sender of that message does not receive any notifications from his own mail server that the message could not be delivered due to attachment size. However, if an external user tries to send an email to a non-existent user, they do receive a message from their mail server indicating that the user does not exist. Why is that, and is there anything I can do about it? It would be nice if the sender received notification that the attachment file size exceeds our limits and their message was never received... Update The Exchange server has a SpamAssassin box in front of it... could that have something to do with it? Here is one of the last lines from SpamAssassin's logs when searching for my test e-mails: mail postfix/smtp[19133]: 2B80917758: to=, relay=10.0.0.8[10.0.0.8]:25, delay=4.3, delays=2.6/0/0/1.7, dsn=2.6.0, status=sent (250 2.6.0 Queued mail for delivery) My assumption is that SpamAssassin thinks the message is OK and is forwarding it off to Exchange. Update I've verified that Exchange is receiving the message and generating an NDR. However, delivery of NDRs is disabled to prevent backscatter. Is there something that I can do to get Exchange to send a bounce message to the sending mail server (or verify that the message is being sent) so the sending mail server can notify its sender of the bounce?
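
    The log line is Postfix syntax, so the SpamAssassin relay in front of Exchange appears to run Postfix. A hedged option is to enforce the 10 MB limit on that relay itself, so oversized messages are refused during the SMTP session and the sending server, not Exchange, generates the non-delivery notice (the value is illustrative):

        # on the SpamAssassin/Postfix relay, /etc/postfix/main.cf
        message_size_limit = 10485760
        # apply the change
        postfix reload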

    Read the article

  • Install Apache + PHP on CentOS server

    - by Scott
    Hi everyone, I am trying to use YUM to install Apache and PHP on CentOS but keep getting these errors. Anyone know what's wrong? Thanks! Loading mirror speeds from cached hostfile * c5-testing: dev.centos.org Setting up Install Process Parsing package install arguments Resolving Dependencies --> Running transaction check ---> Package httpd.i386 0:2.2.8-1.el5s2.centos set to be updated --> Finished Dependency Resolution Dependencies Resolved ============================================================================= Package Arch Version Repository Size ============================================================================= Installing: httpd i386 2.2.8-1.el5s2.centos c5-testing 1.0 M Transaction Summary ============================================================================= Install 1 Package(s) Update 0 Package(s) Remove 0 Package(s) Total download size: 1.0 M Is this ok [y/N]: y Downloading Packages: Running rpm_check_debug ERROR with rpm_check_debug vs depsolve: Package perl-libapreq needs perl(Apache::Table), this is not available. Package perl-libapreq needs perl(mod_perl) >= 1.17, this is not available. Package perl-libapreq needs perl(mod_perl) >= 1.17, this is not available. Package apache-devel needs apache = 1.3.41, this is not available. Complete! bash-3.2#
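
    The rpm_check_debug failures come from old Apache 1.3-era packages (perl-libapreq, apache-devel) whose dependencies conflict with httpd 2.2. A hedged sketch (package names are taken from the error output; removing them may affect whatever originally installed them):

        yum remove perl-libapreq apache-devel
        yum install httpd php
        service httpd start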

    Read the article

  • FreeBSD's ng_nat periodically stops passing packets

    - by Korjavin Ivan
    I have a FreeBSD router: #uname 9.1-STABLE FreeBSD 9.1-STABLE #0: Fri Jan 18 16:20:47 YEKT 2013 It's a powerful computer with a lot of memory #top -S last pid: 45076; load averages: 1.54, 1.46, 1.29 up 0+21:13:28 19:23:46 84 processes: 2 running, 81 sleeping, 1 waiting CPU: 3.1% user, 0.0% nice, 32.1% system, 5.3% interrupt, 59.5% idle Mem: 390M Active, 1441M Inact, 785M Wired, 799M Buf, 5008M Free Swap: 8192M Total, 8192M Free PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND 11 root 4 155 ki31 0K 64K RUN 3 71.4H 254.83% idle 13 root 4 -16 - 0K 64K sleep 0 101:52 103.03% ng_queue 0 root 14 -92 0 0K 224K - 2 229:44 16.55% kernel 12 root 17 -84 - 0K 272K WAIT 0 213:32 15.67% intr 40228 root 1 22 0 51060K 25084K select 0 20:27 1.66% snmpd 15052 root 1 52 0 104M 22204K select 2 4:36 0.98% mpd5 19 root 1 16 - 0K 16K syncer 1 0:48 0.20% syncer Its tasks are: NAT via ng_nat and PPPoE server via mpd5. Traffic through it is about 300 Mbit/s, about 40 kpps at peak. PPPoE sessions created: 350 max. ng_nat is configured by the script: /usr/sbin/ngctl -f- <<-EOF mkpeer ipfw: nat %s out name ipfw:%s %s connect ipfw: %s: %s in msg %s: setaliasaddr 1.1.%s There are 20 such ng_nat nodes, with about 150 clients. Sometimes the traffic via NAT stops. When this happens vmstat reports a lot of FAIL counts: vmstat -z | grep -i netgraph ITEM SIZE LIMIT USED FREE REQ FAIL SLEEP NetGraph items: 72, 10266, 1, 376,39178965, 0, 0 NetGraph data items: 72, 10266, 9, 10257,2327948820,2131611,4033 I tried increasing net.graph.maxdata=10240 and net.graph.maxalloc=10240, but this doesn't work. It's a new problem (1-2 weeks old). The configuration had been working well for about 5 months and no configuration changes were made leading up to the problems starting. In the last few weeks we have slightly increased traffic (from 270 to 300 Mbit/s) and added a few more PPPoE sessions (300-350). Please help me find and solve this problem.
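
    For what it's worth, on FreeBSD net.graph.maxdata and net.graph.maxalloc are boot-time tunables, so changing them with sysctl on a running system has no effect, which would explain why raising them "doesn't work". A sketch with illustrative values; this requires a reboot to take effect:

        # /boot/loader.conf
        net.graph.maxdata=65536
        net.graph.maxalloc=65536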

    Read the article

  • Dual-booting Ubuntu and Pardus with GRUB2... Pardus doesn't show?

    - by Ibn Ali al-Turki
    I have Ubuntu 10.10 installed and used to dual-boot Fedora, but I replaced Fedora with Pardus. After the install, I went into Ubuntu and did a sudo update-grub. It detected my Pardus 2011 install there. However, when I rebooted, it did not show up in my GRUB2 menu. I went back to Ubuntu and did it again... then checked the grub.cfg, and it is not there. I have read that Pardus uses legacy GRUB. How can I get Pardus into my GRUB2 menu? Thanks! sudo fdisk -l Disk /dev/sda: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xd9b3496e Device Boot Start End Blocks Id System /dev/sda1 * 1 15197 122067968 83 Linux /dev/sda2 36394 60802 196059757 5 Extended /dev/sda3 15197 30394 122067968 83 Linux /dev/sda5 36394 59434 185075308 7 HPFS/NTFS /dev/sda6 59434 60802 10983424 82 Linux swap / Solaris Partition table entries are not in disk order and update-grub Found linux image: /boot/vmlinuz-2.6.35-25-generic Found initrd image: /boot/initrd.img-2.6.35-25-generic Found memtest86+ image: /boot/memtest86+.bin Found Pardus 2011 (2011) on /dev/sda3 Yet after this, I go to grub.cfg, and Pardus is not there.
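
    If os-prober's entry keeps failing to land in grub.cfg, a hedged workaround is a manual entry in /etc/grub.d/40_custom that chainloads Pardus's legacy GRUB from /dev/sda3 (this assumes Pardus installed its boot loader to that partition's boot sector):

        # /etc/grub.d/40_custom
        menuentry "Pardus 2011" {
            set root=(hd0,3)
            chainloader +1
        }
        # then regenerate grub.cfg
        sudo update-grub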

    Read the article

  • About to go live: virtual dedicated server or cloud?

    - by morpheous
    I am about to launch my startup company, and we will be going live in a few weeks' time. We have really tight budgetary constraints, since we are bootstrapping - and would prefer not to raise external capital. I can't use shared hosting because I need more control of the server machine (for technical reasons - e.g. using proprietary extensions to PHP, Apache and in the database layer as well) - but want to control costs and don't want to go the fully private server route until we have determined the market size etc. So the only real alternatives AFAIK are a virtual server and the cloud. At the moment, cloud services seem a bit "vague" to me. My understanding is that they allow an entity to outsource its IT infrastructure, which in my mind (at least) is indistinguishable from what a hosting provider provides (at least from a functional point of view) - I would like to seek some clarification on exactly what the difference between the two is. Back to my original question, my requirements are: IT infrastructure that can scale with growth Ability to have control of the machine (e.g. to install our internally developed libraries etc) Backup software that is flexible and comprehensive enough (yet simple to use), that allows a (secured) backup strategy to be implemented. On this issue, I have always wondered where the actual backed-up data was stored (since the physical machines are remote, and one can't get access to any actual tapes etc backed onto). I would also like some advice and recommendations in this area. Regarding data size, I am expecting the dataset to be increasing by a few megabytes of data (originally, say 10Mb, in about a year's time, possibly 50Mb) every day. As an aside, I have decided to deploy on a Debian server (most of my additional libraries etc were compiled and built on a Debian machine). Mindful of all of the above, I would like some advice (and reason) as to which route to take. I would also like some advice on which backup software to use, from people who have walked a similar path.

    Read the article

  • mkfs Operation Takes Very Long on Linux Software Raid 5

    - by Elmar Weber
    I've set up a Linux software RAID level 5 consisting of 4 * 2 TB disks. The disk array was created with a 64k stripe size and no other configuration parameters. After the initial rebuild I tried to create a filesystem, and this step takes very long (about half an hour or more). I tried to create xfs and ext3 filesystems; both took a long time. With mkfs.ext3 I observed the following behaviour, which might be helpful: writing inode tables runs fast until it reaches 1053 (~1 second), then it writes about 50, waits for two seconds, then the next 50 are written (according to the console display); when I try to cancel the operation with Control+C it hangs for half a minute before it is really canceled. The performance of the disks individually is very good: I've run bonnie++ on each one separately with write / read values of around 95 / 110MB/s. Even when I run bonnie++ on every drive in parallel the values are only reduced by about 10 MB. So I'm excluding hardware / I/O scheduling in general as a problem source. I tried different configuration parameters for stripe_cache_size and readahead size without success, but I don't think they are that relevant for the file system creation operation. The server details: Linux server 2.6.35-27-generic #48-Ubuntu SMP x86_64 GNU/Linux mdadm - v2.6.7.1 Does anyone have a suggestion on how to further debug this?
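
    Two things worth checking (a sketch, assuming the array is /dev/md0 and 4k filesystem blocks): whether the initial resync is still running in the background, which slows writes badly, and whether mkfs has been told the RAID geometry. For a 4-disk RAID 5 with a 64k chunk, stride = 64k / 4k = 16 blocks and stripe-width = 16 * 3 data disks = 48:

        cat /proc/mdstat                                  # wait for any resync/recovery to finish first
        mkfs.ext3 -E stride=16,stripe-width=48 /dev/md0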

    Read the article

  • Oracle: Getting ORA-01195 and ORA-01110 when attempting resetlogs

    - by MacAnthony
    I am trying to get our database to start up. When I log in to sqlplus and do a startup, I get the message: Total System Global Area 534462464 bytes Fixed Size 2215064 bytes Variable Size 331350888 bytes Database Buffers 192937984 bytes Redo Buffers 7958528 bytes Database mounted. ORA-01589: must use RESETLOGS or NORESETLOGS option for database open So I do a shutdown, startup mount (which works fine) and then run: SQL> alter database recover using backup controlfile until cancel; alter database recover using backup controlfile until cancel * ERROR at line 1: ORA-00283: recovery session canceled due to errors ORA-19909: datafile 1 belongs to an orphan incarnation ORA-01110: data file 1: '/<path>/system01.dbf' SQL> alter database open resetlogs; alter database open resetlogs * ERROR at line 1: ORA-01195: online backup of file 1 needs more recovery to be consistent ORA-01110: data file 1: '/<path>/system01.dbf' I know I've used instructions to get me past this error before, but I seem to be having trouble tracking it down. A bit of history: We wanted to refresh the data in this database from another DB, so we attempted to do an expdp/impdp into this instance. The impdp did not complete correctly; it got an end-of-file error message and hung (I still have the message in a log if it's important). Since the instance would start at this point, we decided to use the hot-backup process we have to restore the DB. The hot backups are from another server/instance. We went through the same process 2 weeks ago. Recreating the control file is where we ran into the issue above.
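
    The ORA-19909 "orphan incarnation" message suggests the restored files belong to a different database incarnation than the current control file expects. A hedged place to look is the incarnation list in RMAN (the incarnation key below is illustrative; inspect the LIST output before resetting anything, and take this as a sketch rather than a recovery procedure):

        RMAN> LIST INCARNATION OF DATABASE;
        RMAN> RESET DATABASE TO INCARNATION 2;
        RMAN> RESTORE DATABASE;
        RMAN> RECOVER DATABASE;
        RMAN> ALTER DATABASE OPEN RESETLOGS;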

    Read the article

  • Windows Server 2008 R2: finding hidden files that are causing the drive to fill up

    - by Xaxum
    What is the best way to find out what files are sucking up so much space? My Windows directory lists its size as 15 GB, but when I run a PowerShell script against it, it shows 40 GB. It seems 40 GB is about right, because if I take that 40 GB and add it to the rest of my folders on C: I get the 45 GB that is in use. I checked Volume Shadow Copy and that is disabled, and there are no volume shadow files listed. My page file is limited to 3 GB. How can I find out what files are taking 25 GB of space? Final results: We ended up disabling UAC, and then Properties showed the true size of the Windows folder, as reported by the PowerShell script. This allowed us to drill down to the folder that had the issues... In our case it was C:\Windows\System32\config\systemprofile\AppData\Local\VMware\VDM, where there were several large dump files; these were the mysterious unknown files in WinDirStat. It turns out our Nessus scan is crashing a View service for a couple of seconds, so it creates a large dump file. So we will try this: https://discussions.nessus.org/thread/5212
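
    For reference, a minimal PowerShell sketch that totals a folder's real file sizes, including hidden and system files (presumably similar to the script mentioned above):

        $sum = Get-ChildItem C:\Windows -Recurse -Force -ErrorAction SilentlyContinue |
            Where-Object { -not $_.PSIsContainer } |
            Measure-Object -Property Length -Sum
        "{0:N1} GB" -f ($sum.Sum / 1GB)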

    Read the article

  • AVConv increases song duration when converting MP3

    - by chauffch
    I am struggling with the following issue. I want to convert an MP3 ADTS into a pure MP3. I am using AVConv on Ubuntu 12.10. The outcome is a file that has the same size, but the duration is now longer. $ ls -l total 6436 -rw-r--r-- 1 teuf teuf 6586514 nov. 25 09:25 Blindsided_Bon_Iver.mpga $ file Blindsided_Bon_Iver.mpga Blindsided_Bon_Iver.mpga: MPEG ADTS, layer III, v1, 160 kbps, 44.1 kHz, JntStereo $ avconv -i Blindsided_Bon_Iver.mpga -c copy Blindsided_Bon_Iver.mp3 avconv version 0.8.4-4:0.8.4-0ubuntu0.12.04.1, Copyright (c) 2000-2012 the Libav developers built on Nov 6 2012 16:50:25 with gcc 4.6.3 [mp3 @ 0x8c6e240] max_analyze_duration reached Input #0, mp3, from 'Blindsided_Bon_Iver.mpga': Duration: 00:05:29.29, start: 0.000000, bitrate: 160 kb/s Stream #0.0: Audio: mp3, 44100 Hz, stereo, s16, 160 kb/s Output #0, mp3, to 'Blindsided_Bon_Iver.mp3': Metadata: TSSE : Lavf53.21.0 Stream #0.0: Audio: libmp3lame, 44100 Hz, stereo, 160 kb/s Stream mapping: Stream #0:0 -> #0:0 (copy) Press ctrl-c to stop encoding size= 6432kB time=329.30 bitrate= 160.0kbits/s video:0kB audio:6432kB global headers:0kB muxing overhead 0.002080% $ ls -l total 12868 -rw-rw-r-- 1 teuf teuf 6586129 nov. 27 22:26 Blindsided_Bon_Iver.mp3 -rw-r--r-- 1 teuf teuf 6586514 nov. 25 09:25 Blindsided_Bon_Iver.mpga $ file Blindsided_Bon_Iver.mp3 Blindsided_Bon_Iver.mp3: Audio file with ID3 version 2.4.0, contains: MPEG ADTS, layer III, v1, 32 kbps, 44.1 kHz, Stereo Amarok shows the new file has a duration of 25:27 and a lot of silence. Am I using an incorrect option? Is it a bug in AVConv? Any ideas how to fix it?
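
    Since the stream copy seems to confuse the container timing, a hedged workaround is to re-encode the audio instead of copying it, at the cost of one lossy re-compression (the options mirror those already shown in the avconv output):

        avconv -i Blindsided_Bon_Iver.mpga -acodec libmp3lame -ab 160k Blindsided_Bon_Iver.mp3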

    Read the article

  • sendmail on Snow Leopard

    - by Jay
    I'm trying to get sendmail working on my MacBook Pro (OS 10.6.4), so that I can send mail with PHP's mail() function. If you know how to do this without sendmail, I'd be interested in that also. The plan is to send mail through smtp.gmail.com using my Gmail account, unless you have a better idea. I did this and that didn't work. In /etc/postfix/smtp_sasl_passwords I tried both: smtp.yourisp.com username:password and smtp.yourisp.com [email protected]:password The problem seems to be that Google doesn't like me. I don't think my ISP is blocking it because Mail.app can send email through smtp.gmail.com just fine. $email is my Gmail address. $ printf "Subject: TestMail" | sendmail -f $email $email $ tail /var/log/mail.log Oct 21 19:38:18 Jays-MacBook-Pro postfix/master[8741]: daemon started -- version 2.5.5, configuration /etc/postfix Oct 21 19:38:18 Jays-MacBook-Pro postfix/qmgr[8743]: CAACBFA905: from=<$email>, size=377, nrcpt=1 (queue active) Oct 21 19:38:18 Jays-MacBook-Pro postfix/pickup[8742]: C2A68FA93A: uid=501 from=<$email> Oct 21 19:38:18 Jays-MacBook-Pro postfix/cleanup[8744]: C2A68FA93A: message-id=<20101021233818.$mydomain> Oct 21 19:38:18 Jays-MacBook-Pro postfix/qmgr[8743]: C2A68FA93A: from=<$email>, size=377, nrcpt=1 (queue active) Oct 21 19:38:18 Jays-MacBook-Pro postfix/smtp[8746]: initializing the client-side TLS engine Oct 21 19:38:18 Jays-MacBook-Pro postfix/smtp[8748]: initializing the client-side TLS engine Oct 21 19:38:49 Jays-MacBook-Pro postfix/smtp[8746]: connect to smtp.gmail.com[74.125.157.109]:25: Operation timed out Oct 21 19:38:49 Jays-MacBook-Pro postfix/smtp[8748]: connect to smtp.gmail.com[74.125.157.109]:25: Operation timed out Oct 21 19:38:49 Jays-MacBook-Pro postfix/smtp[8746]: CAACBFA905: to=<$email>, relay=none, delay=1334, delays=1304/0.04/30/0, dsn=4.4.1, status=deferred (connect to smtp.gmail.com[74.125.157.109]:25: Operation timed out) Oct 21 19:38:49 Jays-MacBook-Pro postfix/smtp[8748]: C2A68FA93A: to=<$email>, relay=none, delay=30, delays=0.08/0.05/30/0, dsn=4.4.1, status=deferred (connect to smtp.gmail.com[74.125.157.109]:25: Operation timed out) $ I also tried setting myhostname, mydomain, and myorigin in /etc/postfix/main.cf to the hostname that nslookup returns for my IP (as displayed by http://www.whatismyip.com/), and still no luck. Any ideas?
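
    The timeouts are all on port 25, which many ISPs block; Mail.app normally talks to Gmail on 587/465, which would explain why it still works. A hedged Postfix sketch for relaying through Gmail on the submission port with SASL and TLS (these are standard Postfix parameters; whether Gmail accepts the account this way is an assumption about the cause of the timeout):

        # /etc/postfix/main.cf
        relayhost = [smtp.gmail.com]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/smtp_sasl_passwords
        smtp_sasl_security_options = noanonymous
        smtp_use_tls = yes

        # /etc/postfix/smtp_sasl_passwords
        [smtp.gmail.com]:587    [email protected]:password

        sudo postmap /etc/postfix/smtp_sasl_passwords && sudo postfix reload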

    Read the article

  • ORA-01157 / Can't connect to database

    - by Tom
    Hi everyone, this is a follow-up to this question. Let me start by saying that I am NOT a DBA, so I'm really, really lost with this. A few weeks ago, we lost contact with one of our SIDs. All the other services are working, but this one in particular is not. What we got was this message when trying to connect: ORA-01033: ORACLE initialization or shutdown in progress An attempt to alter database open ended in ORA-01157: cannot identify/lock data file 6 - see DBWR trace file ORA-01110: data file 6: '/u01/app/oracle/oradata/xxx/xxx_data.dbf' I tried to shut down / restart the database, but got this message: Total System Global Area 566231040 bytes Fixed Size 1220604 bytes Variable Size 117440516 bytes Database Buffers 444596224 bytes Redo Buffers 2973696 bytes Database mounted. ORA-01157: cannot identify/lock data file 6 - see DBWR trace file ORA-01110: data file 6: '/u01/app/oracle/oradata/xxx/xxx_data.dbf' When everything stayed the same, I erased the dbf files (rm xxx_data.dbf xxx_index.dbf), and recreated them using touch xxx_data.dbf. I also tried to recreate the tablespaces using `CREATE TABLESPACE DATA DATAFILE XXX_DATA.DBF` and got Database not open As I said, I don't know how bad this is, or how far I am from gaining access to my database (well, to this SID at least, the others are working). I would imagine that a last resort would be to throw everything away and recreate it, but I don't know how to, and I was hoping there's a less destructive solution. Any help will be greatly appreciated. Thanks in advance.
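
    Given that the original xxx_data.dbf was deleted and replaced by an empty file, the data in that tablespace is almost certainly gone. A hedged sketch for getting the rest of the database open is to drop the lost datafile from the control file and accept the loss (OFFLINE DROP is the NOARCHIVELOG form; take a cold backup of what remains before trying anything):

        SQL> STARTUP MOUNT;
        SQL> ALTER DATABASE DATAFILE '/u01/app/oracle/oradata/xxx/xxx_data.dbf' OFFLINE DROP;
        SQL> ALTER DATABASE OPEN;
        -- afterwards, drop and recreate the affected tablespace and reload it from a backup or export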

    Read the article

  • Fix bad superblock on logical partition

    - by Chris
    I was following http://www.howtoforge.com/linux_resi...xt3_partitions and when I reboot and run: root@Microknoppix:/home/knoppix# fsck -n /dev/sda7 fsck from util-linux-ng 2.17.2 e2fsck 1.41.12 (17-May-2010) fsck.ext2: Superblock invalid, trying backup blocks... fsck.ext2: Bad magic number in super-block while trying to open /dev/sda7 The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device> So I ran e2fsck with all the backup block numbers you need (I forget exactly what tool I used to find where the superblocks are hidden): no dice. Then I ran TestDisk and had it look for the superblock: no results. Anyone have any ideas? fdisk -l for reference: root@Microknoppix:/home/knoppix# fdisk -l Disk /dev/sda: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x97646c29 Device Boot Start End Blocks Id System /dev/sda1 1 64 512000 83 Linux Partition 1 does not end on cylinder boundary. /dev/sda2 64 38912 312046593 f W95 Ext'd (LBA) /dev/sda5 64 326 2104320 82 Linux swap / Solaris /dev/sda6 * 327 2938 20972544 83 Linux /dev/sda7 2938 38912 288968672+ 83 Linux To be honest it looks like I lost it... The next step, if that happens, is to dump the partition to an image file and hope I can find or write some software to parse through the data looking for known file headers, I think.
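
    Before experimenting further it is worth working on a copy, and mke2fs can report where the backup superblocks for a filesystem of that size would sit without writing anything. A sketch (the -b value is illustrative, and mke2fs -n only reports the right locations if given the same block-size options the filesystem was originally created with):

        ddrescue /dev/sda7 /mnt/usb/sda7.img /mnt/usb/sda7.map   # image the partition first, if ddrescue is available
        mke2fs -n /dev/sda7                                      # prints candidate backup superblock locations, changes nothing
        e2fsck -b 32768 /dev/sda7                                # try each candidate in turn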

    Read the article

  • Recover original email from Sendmail log

    - by Xavi Colomer
    I have a website whose contact form has been failing silently for two weeks (WordPress + Contact Form 7). Apparently updating to Contact Form 7 made the assigned email address [email protected] fail; I also tested with [email protected], which also failed, until I tried Gmail and it finally worked. Apparently @telefonica.net and @me.com domains are not working with this version of the plugin, but I have to investigate the cause. I found the logs of the lost emails, but I would like to know if I can recover the sender or the content of the original messages. May 24 23:41:11 localhost sendmail[27653]: s4P3fBc3027653: from=www-data, size=3250, class=0, nrcpts=1, msgid=<[email protected]_web.com>, relay=www-data@localhost May 24 23:41:11 localhost sm-mta[27655]: s4P3fBdA027655: from=<[email protected]>, size=3359, class=0, nrcpts=1, msgid=<[email protected]_web.com>, proto=ESMTP, daemon=MTA-v4, relay=localhost.localdomain [127.0.0.1] May 24 23:41:11 localhost sendmail[27653]: s4P3fBc3027653: [email protected], ctladdr=www-data (33/33), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=33250, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s4P3fBdA027655 Message accepted for delivery) May 24 23:41:12 localhost sm-mta[27657]: s4P3fBdA027655: to=<[email protected]>, ctladdr=<[email protected]> (33/33), delay=00:00:01, xdelay=00:00:01, mailer=esmtp, pri=123359, relay=tnetmx.telefonica.net. [86.109.99.69], dsn=5.0.0, stat=Service unavailable May 24 23:41:12 localhost sm-mta[27657]: s4P3fBdA027655: s4P3fCdA027657: DSN: Service unavailable May 24 23:41:12 localhost sm-mta[27657]: s4P3fCdA027657: to=<[email protected]>, delay=00:00:00, xdelay=00:00:00, mailer=local, pri=30000, dsn=2.0.0, stat=Sent Thanks
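
    Unfortunately the maillog only records envelope metadata, so the message bodies cannot be reconstructed from these lines; what can be done is trace the queue IDs and check whether anything is still sitting in the mail queue (a sketch):

        grep s4P3fBdA027655 /var/log/mail.log*
        mailq                      # undelivered or bounced messages still queued, listed by queue ID
        ls /var/spool/mqueue/      # sendmail's queue directory, if any message files remain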

    Read the article

  • Why is the Windows 8 recycle bin using more space than it is allocated?

    - by oldmankit
    I ran WinDirStat to scan the contents of my hard drive. I was surprised to see that the $RECYCLE.BIN folder on my D: drive takes 26 GB of space. I emptied the recycle bin, refreshed the folder in WinDirStat, but it still takes 26 GB of space. I reduced the Maximum size of the recycle bin for this drive to 10000 MB for the main user of this computer, and disabled the recycle bin for the other user, and refreshed the folder in WinDirStat, but it still takes 26 GB of space. I ran (in an elevated window) rd /s D:\$Recycle.bin, and refreshed the folder in WinDirStat, and finally it became empty. Why was it taking up space even after I emptied it? Why was it taking more space (26 GB) than the maximum allowed amount (10 GB)? Update: After six months of using Windows (no re-install and no changes of settings related to the Recycle Bin) I used WinDirStat to check how big D:\$RECYCLE.BIN has become. It is now 29 GB. In Recycle Bin Properties, I select drive D, and it is still a custom maximum size (10000 MB).

    Read the article

  • Configuring SASL support in libmemcached

    - by John Keyes
    I'm trying to build libmemcached with SASL support on OS X Mountain Lion. I have built memcached (1.4.15) with SASL support: $ memcached -S -vv Initialized SASL. slab class 1: chunk size 96 perslab 10922 ... slab class 42: chunk size 1048576 perslab 1 <17 server listening (binary) <18 server listening (binary) <19 send buffer was 9216, now 3728270 <20 send buffer was 9216, now 3728270 <19 server listening (udp) <20 server listening (udp) ... I am trying to build libmemcached with SASL support too. I have tried the following: $ ./configure --prefix=/usr/local \ --with-memcached-sasl=/usr/local/bin/memcached ... $ ./configure --prefix=/usr/local \ --with-memcached-sasl="/usr/local/bin/memcached -S" ... But the resulting configuration summary is the same for both: Configuration summary for libmemcached version 1.0.11 * Installation prefix: /usr/local * System type: apple-darwin12.2.0 * Host CPU: x86_64 * C Compiler: i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00) * C Flags: -O2 -Werror -Wall -Wextra -std=c99 -Wbad-function-cast -Wmissing-prototypes -Wnested-externs -Woverride-init * C++ Compiler: i686-apple-darwin11-llvm-g++-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00) * C++ Flags: -O2 -Werror -Wall -Wextra -Wpragmas -D_FORTIFY_SOURCE=2 -Waddress -Wchar-subscripts -Wcomment -Wctor-dtor-privacy -Wfloat-equal -Wformat=2 -Wmissing-field-initializers -Wmissing-noreturn -Wnon-virtual-dtor -Wnormalized=id -Woverloaded-virtual -Wpointer-arith -Wredundant-decls -Wshadow -Wshorten-64-to-32 -Wsign-compare -Wstrict-overflow=1 -Wswitch-enum -Wundef -Wunused-variable -Wwrite-strings -fwrapv -ggdb * CPP Flags: -I/usr/local/include * Assertions enabled: no * Debug enabled: no * Warnings as failure: no * SASL support: Am I doing something incorrectly? Thanks.
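
    For what it's worth, in the libmemcached 1.0.x build system --with-memcached-sasl appears to point the test suite at a SASL-enabled memcached binary, while SASL support in the library itself is normally switched on with --enable-sasl and needs the Cyrus SASL (libsasl2) headers installed; treat the exact flag behaviour as an assumption and check ./configure --help on your tree. A sketch:

        ./configure --prefix=/usr/local \
            --enable-sasl \
            --with-memcached-sasl=/usr/local/bin/memcached
        make && make install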

    Read the article

  • Linux: Send mail to an external mailbox from a server whose hostname matches the mail domain

    - by dtbarne
    I've got sendmail running on a Linux box. Let's say the hostname of the box is bar.com. If I run the following command, I don't receive the email (which is hosted with a third party), presumably due to the hostname pointing to the local machine. echo "Test Body" | mail -s "Test Subject" [email protected] Is there any way to get this to work so that I can receive emails at my third-party email address even though it has the same hostname? Do I have to change the hostname of this server (not preferred)? It may be worth noting that I created a user "foo" on my machine and noticed that the mailbox for that account is empty. I noticed these log entries, which may or may not be relevant: Jun 28 01:09:48 bar sendmail[14338]: p5S59min014338: from=apache, size=80, class=0, nrcpts=1, msgid=<[email protected]>, relay=apache@localhost Jun 28 01:09:48 bar sendmail[14339]: p5S59mIA014339: from=<[email protected]>, size=293, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.$ Jun 28 01:09:48 bar sendmail[14338]: p5S59min014338: [email protected], ctladdr=apache (48/48), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30080, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (p5S59mIA$ Jun 28 01:09:48 bar sendmail[14340]: p5S59mIA014339: to=<[email protected]>, ctladdr=<[email protected]> (48/48), delay=00:00:00, xdelay=00:00:00, mailer=local, pri=30495, dsn=2.0.0, stat=Sent
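
    sendmail treats its own hostname and every domain in /etc/mail/local-host-names as local, so mail to @bar.com never leaves the box. A hedged sketch of what to check (paths assume the stock sendmail layout; if the machine's hostname really is the bare domain, renaming it to something like host1.bar.com is the usual way out):

        grep -i bar.com /etc/mail/local-host-names   # remove the domain here if it is listed
        hostname --fqdn                              # if this prints just bar.com, the hostname itself is the problem
        make -C /etc/mail && service sendmail restart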

    Read the article

  • Help about pure-ftp

    - by hai
    I set up Pure-FTPd on FreeBSD behind a firewall. Pure-FTPd is set up for passive-mode FTP (port range 50400-50600) and the firewall opens ports 50400-50600 (both IN and OUT). But when I try to connect with an FTP client, it does not connect. The error status is: Connecting to 210.245.89.95:21... Status: Connection established, waiting for welcome message... Response: 220---------- Welcome to Pure-FTPd [privsep] ---------- Response: 220-You are user number 1 of 50 allowed. Response: 220-Local time is now 13:20. Server port: 21. Response: 220-IPv6 connections are also welcome on this server. Response: 220 You will be disconnected after 15 minutes of inactivity. Command: USER bk Response: 331 User bk OK. Password required Command: PASS Response: 230 OK. Current directory is / Command: SYST Response: 215 UNIX Type: L8 Command: FEAT Response: 211-Extensions supported: Response: EPRT Response: IDLE Response: MDTM Response: SIZE Response: REST STREAM Response: MLST type;size*;sizd*;modify*;UNIX.mode*;UNIX.uid*;UNIX.gid*;unique*; Response: MLSD Response: ESTA Response: PASV Response: EPSV Response: SPSV Response: ESTP Response: 211 End. Status: Connected Status: Retrieving directory listing... Command: PWD Response: 257 "/" is your current location Command: TYPE I Response: 200 TYPE is now 8-bit binary Command: PASV Response: 227 Entering Passive Mode (210,245,88,98,138,1) Command: MLSD Error: Connection timed out Error: Failed to retrieve directory listing Status: Connecting to 210.245.88.98:21... Status: Connection established, waiting for welcome message... Please help.
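
    Note that the PASV reply advertises 210.245.88.98 on port 138*256+1 = 35329, which is neither the address the client connected to (210.245.89.95) nor inside the opened 50400-50600 range, so the data connection can never be made. A hedged sketch of the relevant Pure-FTPd options (command-line form; adapt to however the daemon is actually started on FreeBSD):

        # -p restricts the passive port range, -P forces the address advertised in PASV replies
        pure-ftpd -p 50400:50600 -P 210.245.89.95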

    Read the article

  • Free up unused space in a qcow2 image file on KVM/QEMU

    - by bmaeser
    We are using KVM/QEMU with qcow2 images for our virtual machines. qcow2 has this nice feature where the image file only allocates the space actually needed by the virtual machine. But how do I shrink the image file back if the virtual machine's allocated space gets smaller? Example: 1.) I create a new image in qcow2 format, size 100 GB. 2.) I use this image to install Ubuntu. The installation needs about 10 GB, and the image file grows to about 10 GB. Nothing unexpected so far. 3.) I fill up the image with about 40 GB of additional data. The image file grows to 50 GB. I am OK with that :-) 4.) This is where it gets strange: I delete all of the 40 GB of data on the image, but the image still eats up 50 GB. Question: how do I free up that 40 GB and shrink the image back to the 10 GB actually needed? Thanks in advance, berni
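
    qcow2 images never shrink on their own when the guest deletes files; the usual approach is to zero out the freed space inside the guest and then rewrite the image on the host, which drops the now-unallocated clusters (a sketch; filenames are illustrative and the VM must be shut down for the convert step):

        # inside the guest: fill free space with zeros, then remove the filler
        dd if=/dev/zero of=/zerofill bs=1M; rm -f /zerofill; sync
        # on the host, with the VM stopped:
        qemu-img convert -O qcow2 vm-disk.qcow2 vm-disk-compact.qcow2
        mv vm-disk-compact.qcow2 vm-disk.qcow2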

    Read the article
