Search Results

Search found 13388 results on 536 pages for 'certificate store'.

  • BitLocker with Windows DPAPI Encryption Key Management

    - by bigmac
    We need to enforce at-rest encryption on an iSCSI LUN that is accessible from within a Hyper-V virtual machine. We implemented a working solution using BitLocker, with Windows Server 2012 running as a Hyper-V virtual machine that has iSCSI access to a LUN on our SAN. We were able to do this by using the "floppy disk key storage" hack described in THIS POST. However, that method seems hokey to me. In my continued research I found that the Amazon corporate IT team published a WHITEPAPER outlining exactly what I was looking for: a more elegant solution without the floppy disk hack. On page 7 of that whitepaper they state that they implemented Windows DPAPI encryption key management to securely manage their BitLocker keys. This is exactly what I am looking to do, but they say they had to write a script to do it, and they provide neither the script nor any pointers on how to create one. Does anyone have details on how to create a "script in conjunction with a service and a key-store file protected by the server's machine account DPAPI key" (as they put it in the whitepaper) to manage and auto-unlock BitLocker volumes? Any advice is appreciated.
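
    The whitepaper gives no code, but the rough shape of such a script can be sketched. The following is a minimal illustration in Python with pywin32, not the Amazon implementation: the key-file path, drive letter, and the use of a numerical recovery password are assumptions, and in practice this would run as a scheduled task or service under the machine account.

        import subprocess
        import win32crypt

        KEY_FILE = r"C:\ProgramData\bitlocker\volume_e.dpapi"   # hypothetical key-store file
        CRYPTPROTECT_LOCAL_MACHINE = 0x4                        # use the machine's DPAPI key, not the user's

        def store_recovery_password(password):
            # One-time step: encrypt the BitLocker recovery password under machine-scope DPAPI.
            blob = win32crypt.CryptProtectData(password.encode("utf-16-le"), "bitlocker-key",
                                               None, None, None, CRYPTPROTECT_LOCAL_MACHINE)
            with open(KEY_FILE, "wb") as f:
                f.write(blob)

        def unlock_volume(drive="E:"):
            # At boot: decrypt the stored password and hand it to manage-bde to unlock the volume.
            with open(KEY_FILE, "rb") as f:
                _, blob = win32crypt.CryptUnprotectData(f.read(), None, None, None, 0)
            password = blob.decode("utf-16-le")
            subprocess.check_call(["manage-bde", "-unlock", drive, "-RecoveryPassword", password])

        if __name__ == "__main__":
            unlock_volume()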

    Read the article

  • ASA DHCP Relay configuration

    - by Jeff
    I have locations in different cities connected by two Cisco ASA devices. My main location (corporate) uses 192.168.1.x; the second location (a remote store) uses 192.168.3.x. I have a DHCP server (192.168.1.254) at the corporate location, and its scope for 192.168.1.x works fine there. I created a scope for the remote location (192.168.3.x) on the same DHCP server and tried to configure DHCP relay on the remote ASA: I disabled the DHCP server on the inside interface, enabled DHCP relay on the inside with "set route" set to yes, and under the global DHCP relay servers (up to four servers to which DHCP requests are relayed) I added my DHCP server, 192.168.1.254. I flashed these settings to the ASA and gave it a try; it didn't do anything. Am I missing or forgetting something? I'm not really sure what I'm doing wrong. DHCP settings on the remote ASA:

        dhcp-client update dns server both
        dhcpd dns 192.168.1.254
        dhcpd ping_timeout 750
        dhcpd domain JEWELS.LOCAL
        dhcpd auto_config outside
        dhcpd update dns both
        !
        dhcpd address 192.168.3.2-192.168.3.33 inside
        !
        dhcprelay server 192.168.1.254 outside
        dhcprelay enable inside
        dhcprelay setroute inside

    On my local ASA I have two ACLs for UDP ports 67 and 68 permitting any inbound traffic from the remote location's IP ...

        dhcprelay timeout 120

    Read the article

  • Firefox Won't Save My Google Cookies Between Restarts

    - by Tom Purl
    I'm using Firefox 11 on Ubuntu. For some strange reason, Firefox won't save my Google cookies between browser restarts. I have to log in to Gmail every time I restart my browser, even if I click the check box that tells Google to remember me. The strange thing is that Firefox does actually store some Gmail cookies when I log in; it's just that those cookies disappear after restarting Firefox. The especially strange thing is that this only seems to happen with *.google.com URLs. I haven't noticed this problem with any other site that I use. Please note that I tried to see if this was a plugin-related problem: I started Firefox in safe mode and turned off all plugins, logged into Gmail and told it to remember me, then shut down Firefox and started it the same way in safe mode. I got the same bad results. Has anyone else ever seen anything like this before? Is there a reason that Firefox seems to be blacklisting Google cookies?

    Read the article

  • Vserver: secure mails from a hacked webservice

    - by lukas
    I plan to rent and set up a vServer running Debian xor CentOS. I know from my host that the vServers are virtualized with linux-vserver. Assume there is a lighttpd and some mail transfer agent running, and we have to ensure that if the lighttpd is hacked, the stored e-mails are not easily readable. To me this sounds impossible, but maybe I missed something, or at least you guys can validate the impossibility... :) I think there are basically three obvious approaches. The first is to encrypt all the data; nevertheless, the server would have to store the key somewhere, so an attacker (w|c)ould figure that out. Secondly, one could isolate the critical services like lighttpd. Since I am not allowed to do 'mknod' or remount /dev in a linux-vserver, it is not possible to set up a nested vServer with lxc or similar techniques. The last approach would be a chroot, but I am not sure it would provide enough security. Also, I have not yet tried whether I can even chroot inside a linux-vserver... Thanks in advance!

    Read the article

  • ZFS - destroying deduplicated zvol or data set stalls the server. How to recover?

    - by ewwhite
    I'm using NexentaStor on a secondary storage server running on an HP ProLiant DL180 G6 with 12 Midline (7200 RPM) SAS drives. The system has an E5620 CPU and 8GB RAM. There is no ZIL or L2ARC device. Last week, I created a 750GB sparse zvol with dedup and compression enabled to share via iSCSI to a VMware ESX host. I then created a Windows 2008 file server image and copied ~300GB of user data to the VM. Once happy with the system, I moved the virtual machine to an NFS store on the same pool. Once up and running with my VMs on the NFS datastore, I decided to remove the original 750GB zvol. Doing so stalled the system. Access to the Nexenta web interface and NMC halted. I was eventually able to get to a raw shell. Most OS operations were fine, but the system was hanging on the zfs destroy -r vol1/filesystem command. Ugly. I found the following two OpenSolaris bugzilla entries and now understand that the machine will be bricked for an unknown period of time. It's been 14 hours, so I need a plan to be able to regain access to the server. http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6924390 and http://bugs.opensolaris.org/bugdatabase/view_bug.do;jsessionid=593704962bcbe0743d82aa339988?bug_id=6924824 In the future, I'll probably take the advice given in one of the bugzilla workarounds: "Workaround: Do not use dedupe, and do not attempt to destroy zvols that had dedupe enabled." Update: I had to force the system to power off. Upon reboot, the system stalls at "Importing zfs filesystems". It's been that way for 2 hours now.

    Read the article

  • How can the \Deleted flag be unset for all mails in a cyrus-imapd mailbox?

    - by Sachin Divekar
    I have a 5GB mailbox which I moved using imapsync. But somehow I messed up the --delete/--delete2 options and ended up with almost all of the messages having the \Deleted flag set. I do not have delayed expunge enabled, so I cannot use the unexpunge utility. I am using cyrus-imapd v2.3.7. Using cyrus-imapd's debugging feature I found that the email client (Roundcube in my case) fires the following IMAP command to unset it:

        UID STORE 179 -FLAGS.SILENT (\Deleted)

    I don't know how to fire this command for all the mails. Is there any way I can unset the \Deleted flag for all the mails in the mailbox? UPDATE: Using @geekosaur's tip of specifying a range of message IDs in the above command, I could solve it for one mailbox under INBOX, like INBOX.folder1. Is there any way I can do it for multiple mailboxes under INBOX recursively? Now I am working on solving it with a script, maybe using one of Perl's IMAP modules, but I still need to solve it asap, so inputs are welcome.
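
    A minimal sketch of such a script, using Python's imaplib rather than Perl (host, credentials and the INBOX* pattern are assumptions; it is worth testing on a single folder before letting it loose on the whole account):

        import imaplib
        import re

        HOST, USER, PASSWORD = "imap.example.com", "user", "secret"   # placeholders

        conn = imaplib.IMAP4_SSL(HOST)
        conn.login(USER, PASSWORD)

        # Walk every folder under INBOX (cyrus names them INBOX.folder1, INBOX.folder2, ...).
        typ, listing = conn.list('""', 'INBOX*')
        for entry in listing:
            # A LIST response looks like: (\HasNoChildren) "." "INBOX.folder1"
            # (this simple regex assumes the mailbox name is the trailing quoted token).
            m = re.search(r'"([^"]+)"\s*$', entry.decode())
            if not m:
                continue
            mailbox = m.group(1)
            if conn.select(mailbox)[0] != "OK":
                continue
            # Clear \Deleted on every message in the folder; 1:* covers all UIDs.
            conn.uid("STORE", "1:*", "-FLAGS.SILENT", r"(\Deleted)")
            print("cleared \\Deleted in", mailbox)

        conn.logout()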

    Read the article

  • external drive enclosure -> software RAID 5?

    - by memilanuk
    Hello all, I have two older PCs on my LAN posing as 'servers': one running FreeNAS off a USB stick, using three 500GB HDDs in a ZFS RAID-Z pool serving as storage for the LAN, and one running Debian Lenny with an 80GB drive, used as a general-purpose 'tinker' box that I can ssh into, etc. The problem is that the SMART report for one of those 500GB drives in the FreeNAS box is showing some pre-failure attributes, and the whole array is a little small anyway. Rather than simply replace one 500GB drive with another 500GB drive, and still have no backup of the file server, I'd like to upgrade all the drives to 2TB ones - but I have nowhere to store that much data in the meantime. As such, I started looking at getting a 4-bay external drive enclosure with an eSATA card for the Debian box, with the hope of creating a RAID5 + LVM setup using those drives and backing the data up to that external enclosure. After the backup is done, I'd replace the drives in the FreeNAS box, rebuild the array there and mirror the data back. Then I'd have both the primary storage (on the FreeNAS box) and a backup (which I don't have currently) using the external drive enclosure on the Debian box. My big question is: most of these external drive boxes claim support for JBOD, RAID 0, 1, 10, 5, etc. - should I presume that is simply fake RAID like many commodity mobos have, and not really usable in Linux? In that case, with all the drives hanging off the one eSATA connection, will Linux (specifically Debian Squeeze, as I plan on upgrading that box shortly) see all four drives, or just the first one? Will I be able to configure them in a RAID5 array as desired? Thanks, Monte

    Read the article

  • Retail Windows XP Professional SP2: lost the key but have the genuine CD - what to do?

    - by AdityaGameProgrammer
    I recently formatted my system, only to find out that I have lost the CD key to my original CD (I had used the option to enter the product key later). Yes, I know it was a stupid thing to do, but I bought the CD in 2008 from a retail store and I lost the original packaging. The actual label on the CD reads "includes Service Pack", "Version 2002", "© 2004 Microsoft Corporation ... reserved", and there are some numbers on the back side of the CD in the inner ring. I can't for the life of me figure out what use the genuine CD in my possession is when I can't seem to activate it - what exactly is the advantage of having the original CD in situations like this? I have tried unattend.txt, and it doesn't contain the correct key, and there is no winnt.sif file on the CD. Where on (or in) the CD can I find the product ID information? I live in India, and my attempts at the Microsoft support site keep getting me directed to a page which says they stopped support for Windows XP in 2011. Let's say by some miracle I do contact Microsoft: what information would I have to provide them, and would they give me the product key for my CD from their database, or a new key?

    Read the article

  • Add bookmarks to Delicious and Google Bookmarks at the same time

    - by BrianH
    I have used delicious.com (or back then, del.icio.us) to store my bookmarks for a long time now, and I love it. I was looking through some of my Google services and realized they have a bookmarking service that integrates with your Google searches (I thought they had a bookmarking service before, but it went away? Maybe not). I like Delicious just fine - I'm not interested in leaving. But I also like how my Google bookmarks are highlighted (and, I'm guessing, brought to the top) in my search results so I can easily tell if I've bookmarked a site (kind of like the "promote up" feature). I can't even count the number of times I've searched for a site only to find I've been there months or years ago. If sites I've bookmarked in the past are highlighted in my search results, it makes it easier to pick which search result to go to. My question is about bookmarking tools: is there a bookmarklet or Firefox addon that will let me save a bookmark to multiple services at the same time, in this case Google and Delicious? Or maybe a service to sync my Delicious bookmarks to Google Bookmarks on a regular basis? I have used the Delicious addon since the beginning - it would just be nice to add a bookmark to multiple services with one addon. For that matter, it would be nice to add Evernote into the mix: click one button to save the page to Evernote and bookmark the page in Google and Delicious. EDIT on 7/30/2009 - Summary: A proposed solution is to use the Delicious addon and the GMarks addon to keep the two services in sync. I was not able to get the two addons to keep everything in sync, so it was also suggested to use the Google Toolbar with the Delicious addon. While I personally have reservations about letting Google know about every single site I visit, I believe this solution will work, so I am accepting it as the answer. I still wish there was a solution that would let you post a bookmark/page to multiple services at the same time (Delicious, Google, Evernote, Digg, Diigo, etc.). Thanks!

    Read the article

  • How to collect the performance data of a server during an unreachable/down period using Nagios?

    - by gsc-frank
    Sometimes services and hosts stop responding due to poor server performance. I mean, if for some reason (lots of concurrent service access, an expensive backup running on the server, or whatever else consumes tons of server resources) a server's performance is badly degraded, that can mean the server isn't capable of establishing any "normal network communication" (without triggering whatever standard timeouts are defined for such communication). Knowing the host's performance data (CPU, memory, ...) during that period - if it is available at all, i.e. the host is not down and, despite the performance degradation, plugins can still collect performance data - could be very useful for a sysadmin trying to determine what caused the problem, or at least whether the host's performance was good and played no part in the host/service going down. This problem could be solved using remote active (NRPE) or remote passive (NSCA) checks, if those remote solutions could store (buffer) performance data and send it to the central Nagios server when host performance or the network outage allows it. I have read the documentation for both and can't find any reference to such a buffering mechanism, nor to what happens when NSCA can't reach the Nagios server. Any idea how to fill this gap? It would be so useful for forensic analysis. EDIT: My question isn't about which tools I can use to debug performance problems or gather performance data for analysis, but about how to collect (using Nagios) host performance data even during a network outage, for later, forensic-style analysis. The idea is to integrate such data into Nagios graphers like pnp4nagios and NagiosGrapher. I know that I could install tools like Cacti on each of my hosts and have a kind of redundant performance data collection, but I really want to avoid that and try to cover all performance analysis requirements with one tool: Nagios.

    Read the article

  • How do I host multiple independent, secured SharePoint sites (WSS 3.0) without using Active Directory on the same server?

    - by Kyle Noland
    I have a SharePoint site set up on one of my networks to service Active Directory users. To be clear, this is a Windows SharePoint Services 3.0 installation running on Windows Server 2003 Standard. It is not an option to upgrade the server or SharePoint version. Management would like to create several new sites, one for each of a handful of clients. These sites will be used like "dropboxes" or FTP sites so that my company can make large files available to outside contacts, and vice versa. Here are my requirements: I do not want to have to create Active Directory accounts for each external contact. If possible, I would like to store the external usernames and passwords in a database that I can write a small GUI for so that management can handle adding their own external contacts. Each client site must be sandboxed from each other and from my main company SharePoint site. I would like to keep everything running on port 80 and be able to access the sites as either clientname.mycompany.com or www.mycompany.com/clientname If anybody has ever done this I would really appreciate hearing about any lessons you learned and suggestions for how to set this up. Kyle

    Read the article

  • Convert DVD Movie to MPEG and view on PS3 via Windows Media Server 12

    - by Vidar
    I think the Apollo missions to the moon were easier than this! I have tried dozens of DVD-ripping tools and media servers and have had limited success in trying to convert all my DVDs into files so they can be viewed on the PS3. I have also been on dozens of forums, and it's all getting a bit confusing: some advice is out of date, some software is no longer updated, updates have been applied to the PS3 operating system and to Windows, and so on. There has to be a way to get all this knowledge and information in one place, and up to date, so people can do the same thing as me. Can anyone give me some definitive software and/or advice to do the following? I have over 200 DVDs and I want to convert them to VOB files (renamed to MPEG so WMS can stream them), store them on a hard disk, and serve them via Windows Media Server 12 (Windows 7). I will then be able to view them on my PS3 in the lounge and never have to get out another DVD case again. I don't want to encode to any other format like MP4 with H.264 because I would lose some of the original quality, so MPEG-2 is fine for me. Note: I have been using DVD Shrink, but it sometimes gives odd results, the main problem being that once the DVD has been ripped, WMS shows the wrong playing length for the film, although VLC Media Player will play through the whole film OK. This is obviously no good when it comes to streaming to the PS3.
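
    One way to keep the original MPEG-2 streams but still get correct duration metadata is to remux (not re-encode) each ripped title into a fresh program stream; ffmpeg's -c copy does this without touching the video or audio. A rough Python sketch, assuming ffmpeg is on the PATH and using made-up folder names:

        import pathlib
        import subprocess

        SRC = pathlib.Path(r"D:\rips")      # hypothetical folder of ripped .vob files
        DST = pathlib.Path(r"D:\library")   # folder shared by the media server

        DST.mkdir(exist_ok=True)
        for vob in sorted(SRC.glob("*.vob")):
            out = DST / (vob.stem + ".mpg")
            # Rewriting the container often fixes the bogus playing length that a plain
            # rename-to-.mpg leaves behind, while copying the streams bit-for-bit.
            subprocess.check_call(["ffmpeg", "-i", str(vob), "-c", "copy", str(out)])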

    Read the article

  • Emails from web site sometimes blank or gibberish

    - by John Gardeniers
    Our company has one web site with an online store based on osCommerce. The system sends emails for various reasons, such as password changes, order confirmations, etc., using PHP's mail() function. We occasionally have customers report that the email they received is either blank (email is plain text format) or gibberish (email is in HTML format). In the latter case it's really just HTML that's being displayed as raw text but of course the customers can't read it. In this case the first opening tag's <, and sometimes a few more characters, has gone missing. In an attempt to determine whether this was happening only for certain customers or email systems I configured the web site to send a CC of each message to a service account at my end. Those CC'd messages always arrive intact and display correctly in Outlook. For what it's worth, it seems to happen a little more frequently to Hotmail users but is certainly not limited to them. As the web site is on a shared (Debian) host there's precious little I can do about debugging things from that end, although if I made the right request I feel the hosting company staff would help me, even though they have limited resources to spend on such matters. Any suggestions on what else I might do to try and determine just why those emails are not being received correctly by some customers, yet a CC copy arrives just fine?

    Read the article

  • from svn to git (+ LDAP + password-less updates + passworded access control)

    - by Jayen
    We have an SVN setup, and there are some things we dislike about it and some things we like about it. We want to move to git, but we're not sure exactly what setup will work for us. We're currently using SVN (w/ Authz) + Apache (w/ WebDAV & LDAP).

        1. Hook to update the live site [like]
        2. Live site update requires no additional interaction [like]
        3. Live site update uses stored password [dislike]
        4. Commits require centralized-password authentication [like]
        5. Commit from live site changes stored credentials [dislike]
        6. Access control (per repository) for commits [like]

    Point 5 above is the one that keeps stuffing us up. Someone makes a commit from the live site and then the hook breaks. We're thinking of using gitosis/gitolite to get access control, but as they use ssh keys, we won't be requiring passwords. We're also thinking of using git-http-backend and using Apache for authentication, but then do we lose access control? Can the live site be automatically updated from a hook if Apache requires authentication? Can we combine git-http-backend and gitosis/gitolite somehow? Can we store http credentials with git?
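
    On points 1 and 3: one common pattern is a server-side post-receive hook that updates the live checkout itself, so no password ever has to be stored on the live site. A rough sketch of such a hook written in Python (the repository path, work-tree path and branch name are invented for illustration):

        #!/usr/bin/env python3
        # post-receive: deploy the pushed branch into the live web root.
        import subprocess
        import sys

        GIT_DIR = "/srv/git/site.git"       # bare repository that developers push to
        WORK_TREE = "/var/www/live"         # live site checkout
        DEPLOY_REF = "refs/heads/master"

        def main():
            # post-receive gets "<old-sha> <new-sha> <refname>" lines on stdin.
            pushed = [line.split()[2] for line in sys.stdin if line.strip()]
            if DEPLOY_REF not in pushed:
                return 0
            # Runs on the server as the pushing user, so no stored credentials are needed.
            subprocess.check_call(["git", "--git-dir", GIT_DIR,
                                   "--work-tree", WORK_TREE, "checkout", "-f", "master"])
            return 0

        if __name__ == "__main__":
            sys.exit(main())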

    Read the article

  • sequential SSH command execution not working in Ubuntu/Bash

    - by kumar
    My requirement is that I will have a set of commands in a text file that need to be executed. My shell script has to read each command, execute it and store the results in a separate file. Here is the snippet which does this:

        while read command
        do
            echo 'Command :' $command >> "$OUTPUT_FILE"
            redirect_pos=`expr index "$command" '>>'`
            if [ `expr index "$command" '>>'` != 0 ]; then
                redirect_fn "$redirect_pos" "$command";
            else
                $command
                state=$?
                if [ $state != 0 ]; then
                    echo "command failed." >> "$OUTPUT_FILE"
                else
                    echo "executed successfully." >> "$OUTPUT_FILE"
                fi
            fi
            echo >> "$OUTPUT_FILE"
        done < "$INPUT_FILE"

    A sample Commands.txt looks like this:

        tar -rvf /var/tmp/logs.tar -C /var/tmp/ Commands_log.txt
        gzip /var/tmp/logs.tar
        rm -f /var/tmp/list.txt

    This works fine for commands which need to be executed on the local machine. But when I try to execute the following ssh commands, only the first command gets executed. Here are some of the ssh commands added to my text file:

        ssh uname@hostname1 tar -rvf /var/tmp/logs.tar -C /var/tmp/ Commands_log.txt
        ssh uname@hostname2 gzip /var/tmp/logs.tar
        ssh .. etc

    When I execute these on the CLI directly, they work fine. Could anybody help me with this?
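
    One well-known gotcha with this pattern is that ssh, unlike local commands, reads the loop's stdin and so swallows the rest of $INPUT_FILE unless it is started with -n (or with stdin redirected from /dev/null). As an alternative, here is a sketch of the same sequential loop in Python with paramiko (hostnames, file names and key-based authentication are assumptions, not taken from the question):

        import paramiko

        COMMANDS_FILE = "Commands.txt"      # placeholder paths
        OUTPUT_FILE = "Commands_log.txt"

        with open(COMMANDS_FILE) as cmds, open(OUTPUT_FILE, "a") as log:
            for line in cmds:
                line = line.strip()
                if not line.startswith("ssh "):
                    continue
                _, target, remote_cmd = line.split(" ", 2)
                user, host = target.split("@", 1)

                client = paramiko.SSHClient()
                client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
                client.connect(host, username=user)          # assumes key-based auth
                _, stdout, _ = client.exec_command(remote_cmd)
                status = stdout.channel.recv_exit_status()   # block until the command finishes
                log.write("Command : %s\n" % line)
                log.write("executed successfully.\n" if status == 0 else "command failed.\n")
                client.close()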

    Read the article

  • A complicated nginx/php-fpm chroot setup

    - by Rsaesha
    I'm running nginx and php-fpm, and I want to set up jails for each host. My setup is a little complicated, so following tutorials on the web gets me nowhere. Each site has a directory /var/www/domain.name/. Inside that directory there will be a public/ directory which will be the website root, a logs/ directory which will store nginx logs for that site specifically, and the chroot filesystem (etc/, usr/, etc.). The first problem I've run into is that no matter how I configure it, PHP-FPM cannot find the files that are passed to it via nginx. They result in a "Primary script unknown" error, and to make matters worse, the error messages from PHP-FPM are no more verbose than that, so I can't figure out what path is being passed by nginx. A php-fpm pool configuration for a host looks like this:

        [host]
        user = host
        group = www-data
        chroot = /var/www/domain.name
        chdir = /public
        listen = 127.0.0.1:900x

    ('x' is incremented for each pool.) The nginx config for this host looks like this:

        server {
            listen 80;
            server_name domain.name *.domain.name;
            root /var/www/domain.name/public;
            index index.php index.html index.html;

            location ~ \.php$ {
                expires epoch;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass 127.0.0.1:9001;
            }
        }

    I'm guessing that the problem is the SCRIPT_FILENAME parameter, but I've changed it to just $fastcgi_script_name, and various other combinations, to no avail. Can anyone help?

    Read the article

  • Wireless disconnect randomly with wpa_supplicant reason=2

    - by renenglish
    I installed ubuntu-server 12.04 on my PC, and I use a USB wireless card to join the network. It works OK when I boot up the PC, but the wireless disconnects after a while. If I pkill wpa_supplicant and reload the rtl8192cu driver, it works again; then it disconnects again after some random number of minutes. Here is the syslog:

        May 29 21:49:27 homecenter kernel: [ 6450.459313] wlan1: authenticated
        May 29 21:49:27 homecenter kernel: [ 6450.459535] wlan1: associate with f4:ec:38:45:62:74 (try 1)
        May 29 21:49:27 homecenter kernel: [ 6450.469080] wlan1: RX AssocResp from f4:ec:38:45:62:74 (capab=0x431 status=0 aid=3)
        May 29 21:49:27 homecenter kernel: [ 6450.469085] wlan1: associated
        May 29 21:49:27 homecenter wpa_supplicant[2342]: Associated with f4:ec:38:45:62:74
        May 29 21:49:27 homecenter kernel: [ 6450.481933] ADDRCONF(NETDEV_CHANGE): wlan1: link becomes ready
        May 29 21:49:27 homecenter wpa_supplicant[2342]: WPA: Key negotiation completed with f4:ec:38:45:62:74 [PTK=CCMP GTK=CCMP]
        May 29 21:49:27 homecenter wpa_supplicant[2342]: CTRL-EVENT-CONNECTED - Connection to f4:ec:38:45:62:74 completed (auth) [id=0 id_str=]
        May 29 21:49:38 homecenter kernel: [ 6461.472014] wlan1: no IPv6 routers present
        May 29 21:49:38 homecenter ntpdate[2263]: step time server 91.189.94.4 offset 0.012758 sec
        May 29 21:49:51 homecenter ntpdate[2404]: step time server 91.189.94.4 offset -0.001190 sec
        May 29 21:54:38 homecenter kernel: [ 6762.052030] wlan1: deauthenticated from f4:ec:38:45:62:74 (Reason: 2)
        May 29 21:54:38 homecenter wpa_supplicant[2342]: CTRL-EVENT-DISCONNECTED bssid=f4:ec:38:45:62:74 reason=2
        May 29 21:54:38 homecenter kernel: [ 6762.064744] cfg80211: All devices are disconnected, going to restore regulatory settings
        May 29 21:54:38 homecenter kernel: [ 6762.064752] cfg80211: Restoring regulatory settings
        May 29 21:54:38 homecenter kernel: [ 6762.064757] cfg80211: Calling CRDA to update world regulatory domain
        May 29 21:54:38 homecenter kernel: [ 6762.069938] cfg80211: Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain
        May 29 21:54:38 homecenter kernel: [ 6762.069943] cfg80211: World regulatory domain updated:
        May 29 21:54:38 homecenter kernel: [ 6762.069945] cfg80211:     (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
        May 29 21:54:38 homecenter kernel: [ 6762.069949] cfg80211:     (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        May 29 21:54:38 homecenter kernel: [ 6762.069952] cfg80211:     (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
        May 29 21:54:38 homecenter kernel: [ 6762.069956] cfg80211:     (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
        May 29 21:54:38 homecenter kernel: [ 6762.069959] cfg80211:     (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        May 29 21:54:38 homecenter kernel: [ 6762.069962] cfg80211:     (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)

    Read the article

  • On a failing hard drive, I am able to view data but unable to copy it - why?

    - by Tom
    I have a 2.5" external hard drive that is failing. It's not making the 'clicking' noise that most dying hard drives make, and I am able to view the data, but I am unable to actually retrieve it. I attempted to use SpinRite to access the data on the drive, but it didn't like the external drive. When I view the drive's property page, the drive shows that its used space is at 100% and that it has 0 bytes available; however, the progress indicator under the drive icon in Windows Explorer shows that it's roughly 50% full (which is correct). When I attempt to run Windows' "Error Checking" tool and attempt to "scan for and attempt recovery of bad sectors," the tool begins to run then immediately closes with no error message. I am able to browse the contents of the drive using Windows Explorer. When I begin to try copying any given single file, the copy process begins, an indicator starts, and then the copy fails with no real error message. The Disk Management page in Computer Management under Control Panel also shows this drive as being 'Healthy.' I dropped the drive off at a data recovery store and they said that "the data seems to be intact, but an internal failure is preventing any information from being retrieved." They offered to provide me references to a data recovery specialist. I've also attempted to run CHKDSK on the drive (with and without arguments) but it returns the following error:

        The type of the file system is RAW.
        CHKDSK is not available for RAW drives.

    Before going the route of more expensive data recovery, I'm wondering if these symptoms sound familiar to anyone. Other questions: I'm willing to continue trying tools such as TestDisk and/or PhotoRec (as the majority of the data that I'd like to salvage are photos), but how long should I expect either tool to run given approximately 400GB of data? I'm also comfortable using Linux, so I welcome any suggestions for utilities or tools and strategies with which you've had success.

    Read the article

  • mysql mass insert data

    - by user12145
    Edit: I realized that if I construct a large query in memory, the speed increases almost tenfold: "insert ignore into xxx (col1, col2) values ('a',1),('b',1),('c',1)..." Edit: since I have an index on the first column, the insert time creeps up as I insert more. Can I delay building the index until the end? Original: I'm using the following to batch-insert 10 million rows into a MySQL DB (not all at once, since they don't all fit into memory), and it's too slow (taking many hours). Should I use LOAD DATA INFILE to improve performance? I would have to create a second file to store all 10 million rows, then load that into the DB. Are there better ways?

        PreparedStatement st = con.prepareStatement("insert ignore into xxx (col1, col2) " +
                                                    " values (?, 1)");
        Iterator d = data.iterator();
        while (d.hasNext()) {
            st.clearParameters();
            st.setString(1, (d.next()).toLowerCase());
            st.addBatch();
        }
        int[] updateCounts = st.executeBatch();
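
    A rough sketch of the LOAD DATA route in Python with mysql-connector (the table and column names come from the snippet above; credentials, the delimiter and the server's local_infile setting are assumptions). LOAD DATA generally beats row-by-row INSERTs because the server parses a single statement and streams the file:

        import csv
        import tempfile
        import mysql.connector

        data = ["Alpha", "Beta", "Gamma"]   # stand-in for the real 10M-value iterator

        # Write the values to a temporary CSV once, then hand the whole file to MySQL.
        with tempfile.NamedTemporaryFile("w", newline="", suffix=".csv",
                                         delete=False) as f:
            writer = csv.writer(f)
            for value in data:
                writer.writerow([value.lower(), 1])
            path = f.name

        conn = mysql.connector.connect(user="user", password="secret", database="mydb",
                                       allow_local_infile=True)   # server must allow local_infile
        cur = conn.cursor()
        cur.execute(
            "LOAD DATA LOCAL INFILE '{}' IGNORE INTO TABLE xxx "
            "FIELDS TERMINATED BY ',' (col1, col2)".format(path.replace("\\", "/"))
        )
        conn.commit()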

    Read the article

  • Variable directory names over SCP

    - by nedm
    We have a backup routine that previously ran from one disk to another on the same server, but we have recently moved the source data to a remote server and are trying to replicate the job via scp. We need to run the script on the target server, and we've set up key-based scp (no username/password required) between the two servers. Using scp to copy specific files and directories works perfectly:

        scp -r -p -B [email protected]:/mnt/disk1/bsource/filename.txt /mnt/disk2/btarget/

    However, our previous routine iterates through directories on the source disk to determine which files to copy, then runs them individually through gpg encryption. Is there any way to do this using only scp? Again, this script needs to run from the target server, and the user the job runs under only has scp (no ssh) access to the target system. The old job looked something like this:

        #Change to source dir
        cd /mnt/disk1

        #Create variable to store
        # directories named by date YYYYMMDD
        j="20000101/"

        #Iterate through directories in the current dir
        # to get the most recent folder name
        for i in $(ls -d */); do
            if [ "$j" \< "$i" ]; then
                j=${i%/*}
            fi
        done

        #Encrypt individual files from $j to target directory
        cd ./${j%%}/bsource/
        for k in $(ls -p | grep -v /$); do
            sudo /usr/bin/gpg -e -r "Backup Key" --batch --no-tty -o "/mnt/disk2/btarget/$k.gpg" "/mnt/disk1/$j/bsource/$k"
        done

    Can anyone suggest how to do this via scp from the target system? Thanks in advance.

    Read the article

  • How do I fix a corrupt calendar cache?

    - by Blacklight Shining
    I was tailing /var/log/system.log and noticed a sudden wall of text. Looking closer, I saw it was an error CalendarAgent got while trying to save something:

        Nov 18 11:42:45 rainbow-dash.local CalendarAgent[12321]: CoreData: error: (11) Fatal error. The database at /Users/blackl/Library/Calendars/Calendar Cache is corrupted. SQLite error code:11, 'database disk image is malformed'
        Nov 18 11:42:45 rainbow-dash.local CalendarAgent[12321]: Core Data: annotation: -executeRequest: encountered exception = Fatal error. The database at /Users/blackl/Library/Calendars/Calendar Cache is corrupted. SQLite error code:11, 'database disk image is malformed' with userInfo = {
            NSFilePath = "/Users/blackl/Library/Calendars/Calendar Cache";
            NSSQLiteErrorDomain = 11;
        }
        (the 2 messages above repeated several times)
        Nov 18 11:42:49 rainbow-dash.local CalendarAgent[12321]: [com.apple.calendar.store.log.subscription] [WARNING: CalSubscriptionSession :: persistError :: save failed]

    This entire sequence is repeated many times throughout the log. The file command said the file in question was a SQLite 3.x database, so I did a bit of searching and came up with a way to check those:

        blackl% cp -i ~/Library/Calendars/Calendar\ Cache /tmp
        blackl% sqlite3 /tmp/Calendar\ Cache
        SQLite version 3.7.12 2012-04-03 19:43:07
        Enter ".help" for instructions
        Enter SQL statements terminated with a ";"
        sqlite> pragma integrity_check ;
        *** in database main ***
        Main freelist: Bad ptr map entry key=863 expected=(2,0) got=(5,21)
        On page 21 at right child: 2nd reference to page 863

    This is followed by a few dozen lines like:

        rowid <number> missing from index <name>

    and then:

        wrong # of entries in index <name>

    I'm at a bit of a loss as to what to do now - I couldn't find anything on how to fix the errors that I found. Also, it would probably be a good idea to disable CalendarAgent so it doesn't try to use the database while it's being fixed (that's why I copied it to /tmp before running sqlite3 on it). How do I disable CalendarAgent and fix its cache?
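
    One common salvage approach for a malformed SQLite file is to dump whatever is still readable into a fresh database and let the application rebuild the rest. A rough Python sketch, working only on the /tmp copy and assuming CalendarAgent has been stopped first (a badly damaged file can still abort the dump partway; the sqlite3 shell's .dump command piped into a new database is the usual fallback):

        import sqlite3

        SRC = "/tmp/Calendar Cache"             # the copy made above, not the live file
        DST = "/tmp/Calendar Cache.rebuilt"

        src = sqlite3.connect(SRC)
        dst = sqlite3.connect(DST)

        salvaged = 0
        try:
            # iterdump() replays the schema and every readable row as SQL statements.
            for statement in src.iterdump():
                dst.execute(statement)
                salvaged += 1
        except sqlite3.DatabaseError as exc:
            # Reading a damaged page ends the dump early; keep whatever was replayed.
            print("dump stopped early:", exc)

        dst.commit()
        print("replayed %d statements into %s" % (salvaged, DST))

    Quitting Calendar and stopping the agent (e.g. via launchctl) before swapping the rebuilt file back in is deliberately left out of the sketch, since the exact job name varies by OS X version.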

    Read the article

  • How can a Postfix/Dovecot(ssl)/Apache/Roundcube(non-ssl) setup leak email addresses?

    - by Jens Björnhager
    I have a Linux box as an email server, with Postfix as the MTA, Dovecot as the IMAP server, and Apache with Roundcube as webmail. In my /etc/postfix/aliases I have just over a hundred different aliases, which makes for as many email addresses on my domain. I use one address per website so I can easily shut down spam-infested addresses. During the half year or so that I have had this setup, I have received 3 spam messages from 2 sources. As I know exactly where I entered each address, it should be easy to pinpoint email-leaking websites and services. However, these sources are, as far as I can tell, not likely email sellers. And for one of them to sell my email twice? I contacted one of the sources and they are adamant that their system is tight. They suggested the possibility that it is my server that is doing the leaking. So, my question is: how likely is it that my box is leaking email addresses, and how? I don't store fully qualified email addresses anywhere in my system except in my maildir. I use an SSL connection to IMAP, and I do not use HTTPS on the webmail.

    Read the article

  • Locating memory leak in Apache httpd process, PHP/Doctrine-based application

    - by Sam
    I have a PHP application using these components:

        - Apache 2.2.3-31 on CentOS 5.4
        - PHP 5.2.10
        - Xdebug 2.0.5 with Remote Debugging enabled
        - APC 3.0.19
        - Doctrine ORM for PHP 1.2.1, using Query Caching and Results Caching via APC
        - MySQL 5.0.77, using Query Caching

    I've noticed that when I start up Apache, I eventually end up with 10 child processes. As time goes on, each process grows in memory until each one approaches 10% of available memory, which begins to slow the server to a crawl since together they grow to take up 100% of memory. Here is a snapshot of my top output:

          PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
         1471 apache   16   0  626m  201m  18m S  0.0 10.2  1:11.02 httpd
         1470 apache   16   0  622m  198m  18m S  0.0 10.1  1:14.49 httpd
         1469 apache   16   0  619m  197m  18m S  0.0 10.0  1:11.98 httpd
         1462 apache   18   0  622m  197m  18m S  0.0 10.0  1:11.27 httpd
         1460 apache   15   0  622m  195m  18m S  0.0 10.0  1:12.73 httpd
         1459 apache   16   0  618m  191m  18m S  0.0  9.7  1:13.00 httpd
         1461 apache   18   0  616m  190m  18m S  0.0  9.7  1:14.09 httpd
         1468 apache   18   0  613m  190m  18m S  0.0  9.7  1:12.67 httpd
         7919 apache   18   0  116m   75m  15m S  0.0  3.8  0:19.86 httpd
         9486 apache   16   0 97.7m   56m  14m S  0.0  2.9  0:13.51 httpd

    I have no long-running scripts (they all terminate eventually, the longest taking maybe 2 minutes), and I am working under the assumption that once each script terminates, the memory it uses gets deallocated. (Maybe someone can correct me on that.) My hunch is that it could be APC, since it stores data between requests, but at the same time it seems weird that it would store data inside the httpd process. How can I track down which part of my app is causing the memory leak? What tools can I use to see how the memory usage is growing inside the httpd process and what is contributing to it?

    Read the article

  • New monitor connected to HDMI adaptor doesn't show output after booting

    - by Paul
    Hello out there in the multiple-monitors world. I am a very old newbie in your world and need help. I just purchased a new Asus VH236H monitor and hooked it up to the HDMI port of an ATI Radeon HD 4300/4500 Series display adaptor. I left the old Princeton LCD19 (TMDS) hooked up to the DVI port of the same adaptor. Both monitors displayed the boot sequence after I fired up good old Sarastro2 (Asus P5Q Pro Turbo - Dual Core E5300 - 2.60 GHz). The Asus lagged half a second behind the Princeton until the Windows 7 Ultimate SP1 boot-up was complete. Then the Asus displayed "HDMI NO SIGNAL" and went into hibernation, while the Princeton stayed lit up as before. Both monitors are shown in the "Screen Resolution" setup display, and I played around with them for a while. The only thing I accomplished was to shove the desktop icons from the Princeton to the still-hibernating Asus. "Multiple displays" is set to "Extend these displays", the orientation is "Landscape", and the resolutions are both set to the "recommended" values. Both monitors show that they are working properly in the advanced properties display. What am I doing wrong - what am I missing? Never mind the opinions about the different resolutions of the two monitors; I can always unhook the Princeton and give it to a Goodwill store if I do not like the setup. I just would like to make it work. Any constructive help is very much appreciated. Thank you.

    Read the article
