Search Results

Search found 7545 results on 302 pages for 'backup and restore'.

Page 211/302

  • Test disk recovery

    - by AIB
    I had a 250GB hard disk with several NTFS partitions. The disk was a dynamic disk (created in Windows). When I reinstalled Windows (which is on another disk), the dynamic disk started showing as offline. I tried using the TestDisk tool to recover the data and created a partial backup. TestDisk is able to list all partitions on the disk, but all partitions are shown as type 'D' (Deleted). I want to change the 'D' to 'P' (Primary), 'L' (Logical), or 'E' (Extended) as appropriate and build a new partition table. If I can write the partition table to disk, the disk will be of 'basic' type and should be readable in any OS. What would the appropriate partition types be? I checked the files on the partitions and no OS was found, so none of the partitions were bootable. Will randomly selecting P, L, or E hurt the data in any way?

    Read the article

  • Configuring HAProxy with memcache failover

    - by Lawrie Matthews
    I'm configuring a new set of servers for an existing WordPress site, and it's been requested that memcache be made available and more resilient. The proposed idea is to have HAProxy send requests to one of two servers; if that memcache instance becomes inaccessible, it should switch to the second, but it should not switch back to the first when it comes back up unless the second is then unavailable. This doesn't appear to be a particularly common use case, and I've not found much along these lines except possibly setting up the first node with an enormous rise value, such as:

        server server1 10.112.58.16:11211 check inter 5s fall 3 rise 99999999
        server server2 10.112.58.19:11211 check backup

    which fails over as expected when server1 is unavailable. It won't ever fall back to server1, though, even if server2 goes offline. Can this be made to work?
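
    For reference, a minimal sketch of the full haproxy.cfg listen block the snippet above implies; the listen name, bind address, and TCP mode are assumptions, not from the original post:

        listen memcached
            bind 0.0.0.0:11211
            mode tcp
            # First node carries traffic; huge rise value delays fail-back.
            server server1 10.112.58.16:11211 check inter 5s fall 3 rise 99999999
            # Backup only receives traffic while server1 is marked down.
            server server2 10.112.58.19:11211 check backup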

    Read the article

  • MySQL & tmpfs: performance

    - by Serty Oan
    I was wondering whether, and how much, using tmpfs could improve MySQL performance, and how it should be done. My guess would be to do mount -t tmpfs -o size=256M /path/to/mysql/data/DatabaseName and use the database normally, but maybe I'm wrong (I'm using MyISAM tables only). Would an hourly rsync between the tmpfs /path/to/mysql/data/DatabaseName and /path/to/mysql/data/DatabaseName_backup penalize performance? If so, how should I make backups of the tmpfs database? So, is this a good way to do things, is there a better way, or am I wasting my time?
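
    A sketch of the hourly rsync described above as a crontab entry; the paths come from the question, but note (an assumption worth stating) that copying MyISAM files while they are being written yields an inconsistent backup unless writes are quiesced first:

        # Hourly copy of the tmpfs database directory back to disk.
        # Not crash-consistent for tables mid-write; quiesce writes
        # (e.g. FLUSH TABLES WITH READ LOCK held open) if that matters.
        0 * * * * rsync -a --delete /path/to/mysql/data/DatabaseName/ /path/to/mysql/data/DatabaseName_backup/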

    Read the article

  • Can't configure PAM + LDAP on Debian Lenny - Getting error=49 on server logs

    - by Jorge Suárez de Lis
    I've been migrating some servers and desktops running Ubuntu 10.04 from an old OpenLDAP implementation to a newer CentOS Active Directory for user accounts. I hadn't had any problems until I reached a Debian Lenny server. I set up the server like the others, configuring /etc/ldap.conf and /etc/ldap/ldap.conf. However, when I issue getent passwd, I get nothing from the LDAP server. Reading the pam_ldap manpage, I realized that /etc/ldap.conf is not a file pam_ldap accepts (it worked with Ubuntu, though), so I renamed it to /etc/pam_ldap.conf. Same result. However, once I changed the name of this file, logging in over SSH produces this on the LDAP server logs:

        [20/Jul/2012:11:19:40 +0200] conn=16501 fd=155 slot=155 connection from x.x.x.50 to 10.1.176.237
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=0 BIND dn="uid=ubuntu,ou=Applications,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=0 RESULT err=0 tag=97 nentries=0 etime=0 dn="uid=ubuntu,ou=applications,ou=citius,dc=inv,dc=usc,dc=es"
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=1 SRCH base="ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" scope=2 filter="(uid=jorge.suarez)" attrs=ALL
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=1 RESULT err=0 tag=101 nentries=1 etime=0 notes=U
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=2 BIND dn="uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=2 RESULT err=49 tag=97 nentries=0 etime=0

    The password isn't working (LDAP result err=49 is invalidCredentials). I don't know what could be wrong; everything else seems to be OK. The same user/password works from other clients:

        [20/Jul/2012:11:29:39 +0200] conn=16528 fd=188 slot=188 connection from x.x.x.224 to 10.1.176.237
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=0 BIND dn="uid=ubuntu,ou=Applications,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=0 RESULT err=0 tag=97 nentries=0 etime=0 dn="uid=ubuntu,ou=applications,ou=citius,dc=inv,dc=usc,dc=es"
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=1 SRCH base="ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" scope=2 filter="(uid=jorge.suarez)" attrs=ALL
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=1 RESULT err=0 tag=101 nentries=1 etime=0 notes=U
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=2 BIND dn="uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=2 RESULT err=0 tag=97 nentries=0 etime=0 dn="uid=jorge.suarez,ou=people,ou=citius,dc=inv,dc=usc,dc=es"

    I'm using SSHA for storing passwords on the LDAP server. Maybe that is not supported by Debian Lenny? In pam_ldap.conf I've set this up, as on all the other servers:

        # Do not hash the password at all; presume
        # the directory server will do it, if
        # necessary. This is the default.
        pam_password md5

    I also tried clear, but it didn't work. Anyway, it's weird that getent passwd still returns no LDAP users. However, if I use pamtest from the libpam-dotfile package to test login, it works:

        # pamtest ssh jorge.suarez
        Trying to authenticate <jorge.suarez> for service <ssh>.
        Password:
        Authentication successful.
        # pamtest foo jorge.suarez
        Trying to authenticate <jorge.suarez> for service <foo>.
        Password:
        Authentication successful.

    But su won't work either (the error is Galician for "unknown id"):

        # su jorge.suarez
        Id. descoñecido: jorge.suarez

    Just the output from getent passwd:

        # getent passwd
        root:x:0:0:root:/root:/bin/bash
        daemon:x:1:1:daemon:/usr/sbin:/bin/sh
        bin:x:2:2:bin:/bin:/bin/sh
        sys:x:3:3:sys:/dev:/bin/sh
        sync:x:4:65534:sync:/bin:/bin/sync
        games:x:5:60:games:/usr/games:/bin/sh
        man:x:6:12:man:/var/cache/man:/bin/sh
        lp:x:7:7:lp:/var/spool/lpd:/bin/sh
        mail:x:8:8:mail:/var/mail:/bin/sh
        news:x:9:9:news:/var/spool/news:/bin/sh
        uucp:x:10:10:uucp:/var/spool/uucp:/bin/sh
        proxy:x:13:13:proxy:/bin:/bin/sh
        www-data:x:33:33:www-data:/var/www:/bin/sh
        backup:x:34:34:backup:/var/backups:/bin/sh
        list:x:38:38:Mailing List Manager:/var/list:/bin/sh
        irc:x:39:39:ircd:/var/run/ircd:/bin/sh
        gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/bin/sh
        nobody:x:65534:65534:nobody:/nonexistent:/bin/sh
        libuuid:x:100:101::/var/lib/libuuid:/bin/sh
        Debian-exim:x:101:103::/var/spool/exim4:/bin/false
        statd:x:102:65534::/var/lib/nfs:/bin/false
        sshd:x:104:65534::/var/run/sshd:/usr/sbin/nologin
        luser:x:1000:1000:Usuario local de Burdeos,,,:/home/luser:/bin/bash
        messagebus:x:105:107::/var/run/dbus:/bin/false
        sge-admin:x:1001:1001:Administrador do SGE,,,:/home/cluster/sge-admin:/bin/bash
        ntp:x:107:110::/home/ntp:/bin/false
        haldaemon:x:108:111:Hardware abstraction layer,,,:/var/run/hald:/bin/false
        vde2-net:x:109:114::/var/run/vde2:/bin/false
        uml-net:x:110:115::/home/uml-net:/bin/false
        polkituser:x:111:116:PolicyKit,,,:/var/run/PolicyKit:/bin/false
        Debian-pxe:x:113:65534:Dummy user for Debian pxe package,,,:/home/Debian-pxe:/bin/false

    nscd was stopped from the beginning.
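
    As a hedged diagnostic (not from the original post), the failing BIND can be reproduced outside PAM with OpenLDAP's ldapsearch, which separates a bad stored hash from a PAM misconfiguration; the server IP and DNs below are taken from the logs above:

        # Bind directly as the affected user; err=49 here too would point
        # at the stored credential rather than the PAM stack.
        ldapsearch -x -H ldap://10.1.176.237 \
          -D "uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" -W \
          -b "ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" "(uid=jorge.suarez)"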

    Read the article

  • File Saving Sometimes Fails

    - by YellPika
    When I attempt to save files, it sometimes (randomly) fails. In Blender, I sometimes get "Version Backup Failed: File Saved With @". In Visual Studio, building sometimes fails with an error message indicating that the target file/exe cannot be overwritten. If I wait a bit, I can save fine. It's almost as if programs are taking an abnormally long time to let go of the files. What could be causing this behaviour? It seems to be caused by Windows Live Mesh monitoring my files and locking them whenever it uploads the new versions (bad, considering the number of times I save my files, even redundantly). Any suggestions to work around this behaviour? Should I switch to a better service to sync my files?

    Read the article

  • Duplicity Errno 2 - no such file or directory

    - by Luma
    Hello, I am trying to set up a script for backing up a Linux box to a CIFS share. I manually mounted the CIFS share and created a few test folders - OK. I then ran duplicity manually, with a rather simple command to begin with to make sure things work. Well, not OK on this one :)

        duplicity /root file:///cifsmountfolder/existingfolder/

    results in:

        No signatures found, switching to full backup.
        Traceback (most recent call last):
          File "/usr/bin/duplicity", line 463, in <module>
            with_tempdir(main)
          File "/usr/bin/duplicity", line 458, in with_tempdir
            fn()
          File "/usr/bin/duplicity", line 449, in main
            full_backup(col_stats)
          File "/usr/bin/duplicity", line 155, in full_backup
            bytes_written = write_multivol("full", tarblock_iter, globals.backend)
          File "/usr/bin/duplicity", line 99, in write_multivol
            backend.put(tdp, dest_filename)
          File "/usr/lib/python2.5/site-packages/duplicity/backends.py", line 279, in put
            target_path.writefileobj(source_path.open("rb"))
          File "/usr/lib/python2.5/site-packages/duplicity/path.py", line 500, in writefileobj
            fout = self.open("wb")
          File "/usr/lib/python2.5/site-packages/duplicity/path.py", line 448, in open
            else: result = open(self.name, mode)
        IOError: [Errno 2] No such file or directory: '/cifsmountfolder/existingfolder/duplicity-full.2010-09-18T18:41:43-07:00.vol1.difftar.gpg'

    Any ideas? Thank you. Luc

    Read the article

  • SQL Server 2008 data directories on SSD

    - by Kuroro
    I am going to install a new SQL Server 2008 instance on my development/testing machine. My machine has one 7200rpm 500GB SATA disk (C: OS) and one Intel X25-G2 80GB SSD (D:). Detailed machine config is as follows: CPU i7 860, RAM 8GB. Microsoft gives the option to place the following directories on different disks, so I plan to place the user databases and tempdb on the SSD and the rest on the traditional disk. Is this a good choice for gaining a performance boost from a fast SSD?

        Data root directory:     C:\Program Files\Microsoft SQL Server
        User database directory: D:\Data
        User log directory:      C:\Logs
        Temp DB directory:       D:\TempDB
        Temp Log directory:      C:\TempDB
        Backup directory:        C:\Backups

    Read the article

  • What can cause SQL 2008 Transaction Log Shipping to stop functioning?

    - by Rick
    I read somewhere that running a backup, or a Maintenance Plan, can cause Log Shipping to stop functioning. Is this true? What should we watch out for, once our transaction log shipping is in place, that could stop it? A log shipping test we were doing between two databases on the same SQL 2008 server appeared to stop working without any error. When we checked the history of the LSRestore_* job, it was always ignoring the new *.trn files. Any suggestions? Thanks.
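
    As a hedged aside (not part of the original question), SQL Server ships a monitor procedure that summarizes what the copy and restore jobs last processed, which can help spot a silently stalled secondary; it lives in msdb:

        -- Shows last backed up / copied / restored files per database.
        EXEC msdb.dbo.sp_help_log_shipping_monitor;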

    Read the article

  • Interesting uses for a headless host running Ubuntu

    - by Manuel
    Hey! So, I have configured a PC with no monitor, keyboard, or mouse, running Ubuntu. I use it as an SSH server, file backup, web server, etc. It seems as if I could use it for sooo much more, though; the problem is I can't think of many more uses. What interesting uses of a headless host have you heard of? Is there a cool trick you want to share? Thanks! Manuel

    Read the article

  • Converting an Outlook Express CSV address book and DBX files into Thunderbird on W7

    - by PiotrK
    Recently I changed my OS from XP to Windows 7. I made a backup of all my Outlook Express messages (the DBX files, plus the address book as CSV). On Windows 7 I want to import that data into Thunderbird. There is an option for importing from Outlook Express, but it looks for live application data (I can't specify a directory with the actual files myself), and there is no Outlook Express installed on Windows 7, so I can't just import the data back into it and then into Thunderbird. How can I import that data into Thunderbird?

    Read the article

  • Microsoft Entourage/Exchange Server problem: all objects disappeared from server - still in some form

    - by splattne
    One of our employees works with Entourage on his MacBook Pro (OS X 10.6), accessing Exchange Server 2007. Last Friday morning, I think while he was working over a VPN, Entourage (I think it was Entourage) deleted all his objects (mail, calendar, contacts) on the server while creating a lot of strange folders (with names starting with underscores) on the client. The local data seems to be there, but not in a consistent form. Since the user's mailbox is rather big, I suspect there was some kind of "move" operation that did not complete. I tried to export the data, but the export stops because of a corrupted object. Is there a tool or another way to export or retrieve the local data? Edit - FYI: we solved the problem by restoring his data from the previous night's backup.

    Read the article

  • Database importing problem with SQL Server

    - by tibin mathew
    Hi, I have a database working in my local SQL Server 2005 Express Edition. I have to import my local database into a remote server database. For that I established a connection to that remote server, and I can now see that database, but when I tried to restore the database from my local machine I got an error message while specifying the backup file location. Below is the error message:

        The EXECUTE permission was denied on the object 'xp_availablemedia',
        database 'mssqlsystemresource', schema 'sys'. The user does not have
        permission to perform this action. The statement has been terminated.
        (Microsoft SQL Server, Error: 229)

    What is the problem, and how can I solve it? Please help me.
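
    For context, the GUI's backup-file browser is what calls xp_availablemedia; a hedged workaround is to skip the browser entirely and restore by an explicit path in T-SQL (the database name and path below are hypothetical):

        -- Restore from a known path; no media enumeration needed.
        RESTORE DATABASE MyDb
        FROM DISK = N'D:\Backups\MyDb.bak'
        WITH REPLACE;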

    Read the article

  • Partition external hard drive already used by Time Machine?

    - by Wex
    I have a 1TB external drive that I've used to back up my Mac for the past year using Time Machine. Unfortunately, my internal hard drive is getting close to full, and I'd like to move some of the stuff off of my Mac onto the same external drive. The problem is that the external drive is already filled with my Time Machine backups. I'd like to allocate 750GB to the Time Machine backups and save the other 250GB for personal use. Is there any way I can go about this without corrupting my current backups? I'm willing to delete some of the older backups if necessary; again, I'm just worried about corrupting the data.
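
    A hedged sketch using OS X's diskutil, which can shrink a mounted journaled HFS+ volume in place and create a new partition in the freed space; the device identifier and volume name below are assumptions, and a second copy of the backups beforehand would be prudent, since live repartitioning can fail:

        # Find the Time Machine volume's identifier first:
        diskutil list
        # Shrink it to 750 GB and add a 250 GB JHFS+ volume named "Personal":
        diskutil resizeVolume /dev/disk2s2 750G JHFS+ Personal 250G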

    Read the article

  • External hard drive encryption

    - by Kragen
    I've got a complete backup of my main PC on a 1.5TB external hard drive that I carry around with my laptop so I can have access to all of my files while I'm on the move. However, it has just dawned on me that if someone nicks my external hard drive, they have access to everything! Hence I'm looking for a way to encrypt my external hard drive. I'm after something that is:

    - Secure (if I need to carry around a USB dongle to keep the key on, so be it)
    - Fast (the performance of the drive should still be reasonable)
    - Cross-platform (I regularly use other people's computers; sometimes they are not Windows-based and might not even have internet access, but I still want to be able to access my files)
    - Cheap (preferably free / open source!)

    Read the article

  • NTFS disk mounted as fuseblk in Ubuntu 12.10 is very slow, with a lot of errors during rsync. Isn't that unusual?

    - by Pablo Marin-Garcia
    I am having problems with an NTFS disk mounted as fuseblk in my Ubuntu 12.10 system, attached via external USB 3. When I ran a 1.1TB backup with rsync, the speed was 1-2 MB/s (with an ext4 disk, the speed was 70 MB/s before and after trying the NTFS disk). Also, after one hour, errors started to appear:

        rsync: write failed on "xxx": No such file or directory
        recv_files: "yyy" is a directory    # but this file is a FILE, not a dir ??!!
        ...

    As this is the first time I have mounted NTFS in Linux for heavy usage (the data will be used in Windows afterwards), I would like to know whether this kind of thing is common, or whether something simply became unstable in my system and a restart would probably have solved it. This leads me to these questions: Can I trust FUSE to manage NTFS disks, or is this a problem of the NTFS tools in Linux not yet being totally stable for writing? Do people still suffer from low performance with FUSE NTFS vs ext4 (in the past I have read about people complaining about this)?
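
    As a hedged experiment (not from the original post), ntfs-3g's big_writes mount option is commonly suggested for poor FUSE write throughput; the device and mountpoint below are assumptions, and it's worth measuring before and after rather than treating this as a fix:

        # Remount with larger FUSE write requests and no access-time updates.
        sudo umount /mnt/ntfsdisk
        sudo mount -t ntfs-3g -o big_writes,noatime /dev/sdc1 /mnt/ntfsdisk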

    Read the article

  • Recover RAID 5 data after creating a new array instead of re-using

    - by Brigadieren
    Folks, please help - I am a newb with a major headache at hand (a perfect-storm situation). I have three 1TB HDDs in my Ubuntu 11.04 box, configured as software RAID 5. The data had been copied weekly onto a separate, off-the-computer hard drive until that failed completely and was thrown away. A few days back we had a power outage, and after rebooting, my box wouldn't mount the RAID. In my infinite wisdom, I entered

        mdadm --create -f ...

    instead of mdadm --assemble, and didn't notice the travesty I had done until after. It started the array degraded and proceeded with building and syncing it, which took ~10 hours. After I was back, I saw that the array is successfully up and running, but the RAID is not: I mean the individual drives are partitioned (partition type fd), but the md0 device is not. Realizing in horror what I have done, I am trying to find some solutions. I just pray that --create didn't overwrite the entire content of the hard drives. Could someone PLEASE help me out with this - the data on the drives is very important and unique: ~10 years of photos, docs, etc. Is it possible that specifying the participating hard drives in the wrong order can make mdadm overwrite them? When I do

        mdadm --examine --scan

    I get something like:

        ARRAY /dev/md/0 metadata=1.2 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b name=<hostname>:0

    Interestingly enough, the name used to be 'raid' and not the hostname with :0 appended. Here are the 'sanitized' config entries:

        DEVICE /dev/sdf1 /dev/sde1 /dev/sdd1
        CREATE owner=root group=disk mode=0660 auto=yes
        HOMEHOST <system>
        MAILADDR root
        ARRAY /dev/md0 metadata=1.2 name=tanserv:0 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b

    Here is the output from mdstat:

        cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid5 sdd1[0] sdf1[3] sde1[1]
              1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

        unused devices: <none>

    fdisk shows the following:

        fdisk -l

        Disk /dev/sda: 80.0 GB, 80026361856 bytes
        255 heads, 63 sectors/track, 9729 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000bf62e

        Device Boot  Start    End     Blocks  Id  System
        /dev/sda1 *      1   9443   75846656  83  Linux
        /dev/sda2     9443   9730    2301953   5  Extended
        /dev/sda5     9443   9730    2301952  82  Linux swap / Solaris

        Disk /dev/sdb: 750.2 GB, 750156374016 bytes
        255 heads, 63 sectors/track, 91201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000de8dd

        Device Boot  Start     End      Blocks  Id  System
        /dev/sdb1        1   91201   732572001  8e  Linux LVM

        Disk /dev/sdc: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00056a17

        Device Boot  Start     End      Blocks  Id  System
        /dev/sdc1        1   60801   488384001  8e  Linux LVM

        Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000ca948

        Device Boot  Start      End      Blocks  Id  System
        /dev/sdd1        1   121601   976760001  fd  Linux raid autodetect

        Disk /dev/dm-0: 1250.3 GB, 1250254913536 bytes
        255 heads, 63 sectors/track, 152001 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/dm-0 doesn't contain a valid partition table

        Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x93a66687

        Device Boot  Start      End      Blocks  Id  System
        /dev/sde1        1   121601   976760001  fd  Linux raid autodetect

        Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xe6edc059

        Device Boot  Start      End      Blocks  Id  System
        /dev/sdf1        1   121601   976760001  fd  Linux raid autodetect

        Disk /dev/md0: 2000.4 GB, 2000401989632 bytes
        2 heads, 4 sectors/track, 488379392 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
        Disk identifier: 0x00000000

        Disk /dev/md0 doesn't contain a valid partition table

    Per suggestions, I cleaned up the superblocks and re-created the array with the --assume-clean option, but with no luck at all. Is there any tool that will help me revive at least some of the data? Can someone tell me what mdadm --create does when it syncs, and how it destroys the data, so I can write a tool to undo whatever was done? After re-creating the RAID, I ran fsck.ext4 /dev/md0, and here is the output:

        root@tanserv:/etc/mdadm# fsck.ext4 /dev/md0
        e2fsck 1.41.14 (22-Dec-2010)
        fsck.ext4: Superblock invalid, trying backup blocks...
        fsck.ext4: Bad magic number in super-block while trying to open /dev/md0

        The superblock could not be read or does not describe a correct ext2
        filesystem. If the device is valid and it really contains an ext2
        filesystem (and not swap or ufs or something else), then the superblock
        is corrupt, and you might try running e2fsck with an alternate superblock:
            e2fsck -b 8193 <device>

    Per Shane's suggestion, I tried:

        root@tanserv:/home/mushegh# mkfs.ext4 -n /dev/md0
        mke2fs 1.41.14 (22-Dec-2010)
        Filesystem label=
        OS type: Linux
        Block size=4096 (log=2)
        Fragment size=4096 (log=2)
        Stride=128 blocks, Stripe width=256 blocks
        122101760 inodes, 488379392 blocks
        24418969 blocks (5.00%) reserved for the super user
        First data block=0
        Maximum filesystem blocks=0
        14905 block groups
        32768 blocks per group, 32768 fragments per group
        8192 inodes per group
        Superblock backups stored on blocks:
            32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
            2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616,
            78675968, 102400000, 214990848

    and ran fsck.ext4 with every backup block, but all returned the following:

        root@tanserv:/home/mushegh# fsck.ext4 -b 214990848 /dev/md0
        e2fsck 1.41.14 (22-Dec-2010)
        fsck.ext4: Invalid argument while trying to open /dev/md0

        The superblock could not be read or does not describe a correct ext2
        filesystem. If the device is valid and it really contains an ext2
        filesystem (and not swap or ufs or something else), then the superblock
        is corrupt, and you might try running e2fsck with an alternate superblock:
            e2fsck -b 8193 <device>

    Any suggestions? Regards!
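
    As a hedged diagnostic aside (not from the original post): before any further writes, the surviving metadata on each member can be dumped and compared, since the device order, data offset, and chunk size it records are exactly what a reconstruction attempt needs:

        # Dump the md superblock of each member for side-by-side comparison.
        mdadm --examine /dev/sdd1 /dev/sde1 /dev/sdf1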

    Read the article

  • CMD: echo date, but show the month as a string

    - by Asim Rehman
    I am using the robocopy command to create a backup system. I have successfully managed to copy the folders, but the date stamp is wrong. The folders are prefixed with the date and time. The robocopy command is this:

        robocopy U:\Data\ X:\Private\Backups\FolderName_%date:/=-%-(%time::=-%) /e

    The resulting folder name looks like this:

        FolderName_09-11-2013-(20-24-06.60)

    The only thing I want to change is the date: I want to show the month as a string, using only the first 3 characters, like Oct. Can someone please guide me? Thanks.
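
    A minimal batch sketch of one way to do this, assuming %date% is dd/mm/yyyy as the example folder name suggests; the variable names are hypothetical:

        @echo off
        setlocal enabledelayedexpansion
        :: Extract the numeric month (characters 3-4 of a dd/mm/yyyy %date%);
        :: the leading 1 / minus 100 trick avoids octal issues with 08 and 09.
        set /a idx=1%date:~3,2% - 100
        :: Walk the month list and keep the idx-th entry.
        set n=0
        for %%m in (Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec) do (
            set /a n+=1
            if !n! equ !idx! set monthname=%%m
        )
        :: e.g. FolderName_09-Nov-2013
        echo FolderName_%date:~0,2%-!monthname!-%date:~6,4%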

    Read the article

  • Is there a way to initialize ImageKit's IKSaveOptions to default to TIFF with LZW compression?

    - by Rei
    I'm using the Mac OS X 10.6 SDK ImageKit's IKSaveOptions to add the file-format accessory to an NSSavePanel, using:

        - (id)initWithImageProperties:(NSDictionary *)imageProperties imageUTType:(NSString *)imageUTType;

    and

        - (void)addSaveOptionsAccessoryViewToSavePanel:(NSSavePanel *)savePanel;

    I have tried creating an NSDictionary to specify Compression = 5, but I cannot seem to get the IKSaveOptions to show Format: TIFF, Compression: LZW when the NSSavePanel first appears. I've also tried saving the returned imageProperties dictionary and the userSelection dictionary and then feeding that back in the next time, but the NSSavePanel always defaults to Format: TIFF with Compression: None. Does anyone know how to customize the default format/compression that shows up in the accessory view? I would like to default the save options to TIFF/LZW, and furthermore would like to restore the user's last file-format choice next time. I am able to control the file format using the imageUTType (e.g. kUTTypeJPEG, kUTTypePNG, kUTTypeTIFF, etc.), but I am still unable to set the initial compression option for TIFF or JPEG formats. Thanks, -Rei
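
    For concreteness, a hedged sketch of the kind of properties dictionary the question describes trying; the keys are ImageIO's TIFF properties, and whether IKSaveOptions honors them as initial accessory-view state is exactly what is in question:

        // NSTIFFCompressionLZW has the raw value 5, as in the question.
        NSDictionary *tiffProps = [NSDictionary dictionaryWithObject:
                [NSNumber numberWithInt:5]
            forKey:(id)kCGImagePropertyTIFFCompression];
        NSDictionary *props = [NSDictionary dictionaryWithObject:tiffProps
            forKey:(id)kCGImagePropertyTIFFDictionary];
        // savePanel is the NSSavePanel being configured.
        IKSaveOptions *opts = [[IKSaveOptions alloc]
            initWithImageProperties:props
                        imageUTType:(NSString *)kUTTypeTIFF];
        [opts addSaveOptionsAccessoryViewToSavePanel:savePanel];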

    Read the article

  • Unable to edit CIFS Share permissions

    - by Datapimp23
    Hi, I have an HP StorageWorks 2540i disk-to-disk backup device. The device is managed via a web interface. I joined the device to our AD domain in the CIFS server configuration, then created a CIFS share called backupdata. If I try to access it, I'm prompted for a login. The Permissions tab in the web interface is empty; the following message is displayed: "CIFS Authentication is managed through Active Directory". However, I do not find the share in AD. I forced replication between all DCs and still do not find it. Is there another way to edit the permissions?

    Read the article

  • Copying files within a Workgroup

    - by Andrew La Grange
    I have three boxes operating in a Windows Server workgroup within a closed network (no domain / no AD). There are several derivations of the scenario that I'm about to outline, but I'm sure I will be able to retool the solution as and when I need to. Essentially the boxes are:

    - 2 x Windows Server 2008 R2 x64 Standard
    - 1 x Windows Server 2000 Standard

    I need to be able to schedule the copying and/or moving of files between various directories on each of the boxes. Each box has a different username and password for the administrator. I have PowerShell 2.0 on the two Win2K8 boxes (obviously). Previously I have used mapped network drives and command-line batches to copy the files, but I'd much rather use PowerShell if possible (with shares and/or $ notation). However, the Copy-Item cmdlet doesn't seem to be processing the credential correctly. Perhaps some PowerShell gurus out there might be able to help me. Essentially I'd like to schedule a PS script to push backup files onto my Win2K box (old file server) periodically, as in the sketch below.
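
    A hedged sketch of a common workaround: in PowerShell 2.0 the FileSystem provider is generally reported not to honor -Credential on Copy-Item, so one can authenticate the UNC path first; the server, share, and account names below are hypothetical:

        # Authenticate against the target box, copy, then drop the session.
        # (* prompts for the password; a scheduled task would embed it instead.)
        net use \\OLDSERVER\Backups /user:OLDSERVER\Administrator *
        Copy-Item -Path 'D:\Data\*' -Destination '\\OLDSERVER\Backups' -Recurse
        net use \\OLDSERVER\Backups /delete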

    Read the article

  • VS2010 + ReSharper 5 performance issues

    - by Jeremy Roberts
    I have been using VS2010 with ReSharper 5 for several weeks and am having a performance issue. Sometimes when typing, the cursor lags and the keystrokes don't show instantaneously. Scrolling also lags at times. A forum thread has been started, and JetBrains has been responding. Several people (including myself) have added their voice and uploaded performance profiles. If anyone here has this issue, I would encourage you to visit the thread and let JetBrains know about it. Has anyone had this problem and have a suggestion for restoring performance?

    Read the article

  • Script / command to drop all connections / locks in Sybase SQL Anywhere 9?

    - by nxzr
    I've recently become responsible for administering an application that is essentially a front end to a Sybase SQL Anywhere 9 database, including the database itself. I'd like to use UNLOAD TABLE to efficiently export the data for backup and, in the case of a few tables, ETL to get it into a reporting database / small-scale data warehouse. The problem is that the client application crashes and leaves dead connections and shared locks on a pretty regular basis, which seems to prevent UNLOAD TABLE from getting the (brief) exclusive locks it needs. Currently I use Sybase Central to verify that these connections are in fact zombies, and I drop them myself at the end of the day/week. Is there a command or script to drop all connections? Being able to drop everything at once, after verifying that it's unneeded, would be quite helpful, but I haven't found a way to do it.
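
    A hedged sketch of the building blocks SQL Anywhere 9 provides for this: the sa_conn_info() system procedure lists connections, and DROP CONNECTION takes a connection number (verify the column names against your build; the example number is hypothetical):

        -- List candidate zombie connections:
        SELECT Number, Userid, LastReqTime FROM sa_conn_info();
        -- Drop one by its connection number:
        DROP CONNECTION 42;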

    Read the article

  • Xcode 3.2 does not mark unit test assert failures in the editor

    - by Cliff
    I've been off in Java land for about a month, and now, upon returning to Xcode, I feel lost. I upgraded first to 3.1.2, then recently to 3.2, and also got a new Mac with Snow Leopard, so I'm not exactly sure when the problem surfaced. I just know that I used to get little red bubbles in my unit tests next to the failing asserts, and that no longer seems to happen. Is there a way to restore this? I've been trying to use Apple's own SenTesting framework instead of GoogleTools for Mac like I used to. Should I revert to GoogleTools? Does anyone have an answer?

    Read the article

  • How to selectively route network traffic through VPN on Mac OSX Leopard?

    - by newtonapple
    I don't want to send all my network traffic down the VPN when I'm connected to my company's network (via VPN) from home. For example, when I'm working from home, I would like to be able to back up all my files to the Time Capsule at home and still be able to access the company's internal network. I'm using Leopard's built-in VPN client. I've tried unchecking "Send all traffic over VPN connection", but if I do that, I lose access to my company's internal websites, whether via curl or the web browser (though internal IPs are still reachable). It'd be ideal if I could selectively choose a set of IPs or domains to be routed through the VPN and keep the rest on my own network. Is this achievable with Leopard's built-in VPN client? If you have any software recommendations, I'd like to hear them as well.
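
    A hedged sketch of the manual approach on OS X: keep "Send all traffic" unchecked and add a static route for the internal range via the VPN interface (both the range and interface below are assumptions; the fact that internal IPs work but names don't suggests DNS, which a route alone won't fix):

        # Route only the company's internal range through the VPN interface:
        sudo route -n add -net 10.10.0.0/16 -interface ppp0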

    Read the article

  • Can't connect to Apple Time Capsule in home network using Home Plugs from Win 7 Machine

    - by Eugene
    I have the following home network setup with subnet 255.255.255.0, but I recently moved my Time Capsule to a different location when I added a third HomePlug, and I can no longer ping it or map a network drive to it from the Windows 7 machine. However, using AirPort Utility on the Windows 7 machine, I can manually configure the Time Capsule. Using a MacBook on WiFi network 1 or 2, I can back up to the Time Capsule, so it is accessible via both the router's WiFi network and the Time Capsule's WiFi network. The Time Capsule is set to BRIDGE mode, i.e. no NAT or DHCP server enabled. Any bright sparks out there who can help diagnose the problem?

        Router (192.168.1.254) WIFI Network 1
        |
        |---- Home Plug One
        |---- Home Plug Two
        |       |---- Computer A: Windows 7 (192.168.1.160)
        |       |---- Printer (192.168.1.69)
        |---- Home Plug Three
                |---- Apple Time Capsule (192.168.1.150) WIFI Network 2
                        |---- Smart TV (192.168.1.70)
                        |---- Apple TV (192.168.1.4)

    Read the article
