Search Results

Search found 9894 results on 396 pages for 'primary interop assembly'.


  • Microsoft-signed driver appears as publisher not verified

    - by Priyanka Gupta
    Task at hand: Microsoft-sign drivers on Win 7. I have had my driver package Microsoft-signed three times, each time thinking I might have missed a step. However, I cannot get rid of the Windows Security error message "Windows can't verify the publisher of this driver software". This is not the first time I have signed driver packages; I was able to sign other packages successfully a few months ago. With this driver package, however, I keep getting the Windows Security dialog box. Here's the procedure I follow:
    1. Create a new .cat file using the Inf2Cat tool.
    2. Self-sign the driver using a VeriSign Class 3 Public Primary Certification Authority - G5 certificate.
    3. Run the Microsoft tests on DTM servers and clients with the devices that use this driver.
    4. Create the WLK submission package.
    5. Self-sign the .cab file.
    6. Submit the package for certification.
    The catalog file that comes back after successfully passing the tests names the signer as "Microsoft Windows Hardware Compatibility Publisher". When I check the validity of the signature using SignTool, it says the signature is valid. However, when I try to install the driver with the newly signed catalog file, Windows complains. Any ideas?
    Edit 11/12/2012, reply to Eugene's comment: Thanks for the help, Eugene. Yes, I did sign two other driver packages before; one of them was a modified version of the WinUSB driver. I am using the same certificate I used when I signed those two packages a few months ago. It costs $250 per signing from Microsoft, and I would expect Microsoft to complain during certification if the certificate were wrong. I use the following command to self-sign the .cat file (I don't have to specify the certificate name, as there is only one certificate in the directory):
      Signtool sign /v /a /n CompanyName /t http://timestamp.verisign.com/scripts/timestamp.dll OurCatalogFile.cat
    Below is the result of running the verify command on the Microsoft-signed OurCatalogFile.cat:
      C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\x64>signtool verify /v "C:\Users\logotest\Documents\serialdriversigning\OurCatalogFile.cat"
      Verifying: C:\Users\logotest\Documents\serialdriversigning\OurCatalogFile.cat
      Hash of file (sha1): BDDF39B1DD95881B462164129758A7FFD54F47D9
      Signing Certificate Chain:
        Issued to: Microsoft Root Certificate Authority
        Issued by: Microsoft Root Certificate Authority
        Expires:   Sun May 09 18:28:13 2021
        SHA1 hash: CDD4EEAE6000AC7F40C3802C171E30148030C072
          Issued to: Microsoft Windows Hardware Compatibility PCA
          Issued by: Microsoft Root Certificate Authority
          Expires:   Thu Jun 04 16:15:46 2020
          SHA1 hash: 8D42419D8B21E5CF9C3204D0060B19312B96EB78
            Issued to: Microsoft Windows Hardware Compatibility Publisher
            Issued by: Microsoft Windows Hardware Compatibility PCA
            Expires:   Wed Sep 18 18:20:55 2013
            SHA1 hash: D94345C032D23404231DD3902F22AB1C2100341E
      The signature is timestamped: Tue Nov 06 11:26:48 2012
      Timestamp Verified by:
        Issued to: Microsoft Root Authority
        Issued by: Microsoft Root Authority
        Expires:   Thu Dec 31 02:00:00 2020
        SHA1 hash: A43489159A520F0D93D032CCAF37E7FE20A8B419
          Issued to: Microsoft Timestamping PCA
          Issued by: Microsoft Root Authority
          Expires:   Sun Sep 15 02:00:00 2019
          SHA1 hash: 3EA99A60058275E0ED83B892A909449F8C33B245
            Issued to: Microsoft Time-Stamp Service
            Issued by: Microsoft Timestamping PCA
            Expires:   Tue Apr 09 16:53:56 2013
            SHA1 hash: 1895C2C907E0D7E5C0292B92C6EA8D0E236F525E
      Successfully verified: C:\Users\logotest\Documents\serialdriversigning\OurCatalogFile.cat
      Number of files successfully Verified: 1
      Number of warnings: 0
      Number of errors: 0
    Thank you!
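
    One extra check that may be worth running (a sketch only; ourdriver.sys stands in for one of the files listed in the catalog): SignTool's default verification policy is not the same policy the driver installer applies, so a catalog can verify cleanly in general yet still fail the kernel-mode driver signing policy that produces this prompt.
      signtool verify /kp /v /c OurCatalogFile.cat ourdriver.sys
    Here /kp verifies against the kernel-mode driver policy (including the cross-certificate chain) and /c names the catalog to check the file against.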

    Read the article

  • Areca 1280ml RAID6 volume set failed

    - by Richard
    Today we hit some kind of worst-case scenario and are open to any good ideas. Here is our problem: we are using several dedicated storage servers to host our virtual machines. Before I continue, here are the specs:
    - Dedicated server machine
    - Areca 1280ml RAID controller, firmware 1.49
    - 12x Samsung 1TB HDDs
    We configured one RAID6 set with 10 discs that contains one logical volume, and we have two hot spares in the system. Today one HDD failed. This happens from time to time, so we replaced it. While rebuilding, a second disc failed. Normally this is no fun. We stopped heavy I/O operations to ensure a stable RAID rebuild. Sadly the hot-spare disc failed while rebuilding and the whole thing stopped. Now we have the following situation: the controller says that the RAID set is rebuilding, and the controller says that the volume failed. It is a RAID6 system and two discs failed, so the data has to be intact, but we cannot bring the volume online again to access it. While searching we found the following leads; I don't know whether they are good or bad:
    1. Mirroring all the discs to a second set of drives, so we would have the possibility to try different things without losing more than we already have.
    2. Trying to rebuild the array in R-Studio. But we have no real experience with the software.
    3. Pulling all drives, rebooting the system, going into the Areca controller BIOS, and reinserting the HDDs one by one. Some people say they brought the system back online this way, some say the effect is zero, and some say they blew the whole thing up.
    4. Using undocumented Areca commands like "rescue" or "LeVel2ReScUe".
    5. Contacting a computer forensics service. But whoa... initial estimates by phone exceeded €20,000.
    That's why we would kindly ask for help. Maybe we are missing the obvious? And yes, of course, we have backups. But some systems lost one week of data, which is why we'd like to get the system up and running again. Any help, suggestions and questions are more than welcome.
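
    If you do try the mirroring route first (see lead 1 above), GNU ddrescue is the usual tool for a sector-level copy of each member disc; the command below is only a sketch, and the device names are placeholders that must be double-checked before running, since the output device is overwritten.
      ddrescue -f -n /dev/sdX /dev/sdY /root/sdX.map
    Using one map file per disc lets an interrupted copy resume where it left off.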

    Read the article

  • Suspend only works once after full power cycle with ASUS P7P55D-E Pro

    - by John Chadwick
    This one is strange. I can't seem to get suspend working more than once per power cycle. When I say "power cycle," I mean that the only way to get one proper suspend is to cut power at the power supply and boot back up cold. After that proper suspend I get a failed suspend, and every suspend after any reboot or cold boot fails until power is cut again. I'm using an ASUS P7P55D-E Pro with a Sandy Bridge Core i7, running on the Ubuntu Precise repositories and UEFI. I'm running Nouveau from the repository (and Gallium3D compiled from git, but that does not come into this, since I can avoid OpenGL and it still happens the same way) with a GTX 285 (nv50). I had to build a custom kernel (3.3) in order for ACPI 5.0 to be supported and make suspend work at all; I compiled it using the latest Ubuntu kernel's config file with the additional entries set to the default options. All packages are up to date. I know these are relatively exotic settings, but I'm hoping I can get some help anyway. The behavior when suspend fails is strange. Upon a proper suspend, all fans turn off and the only LED left on, the power LED, is blinking. Upon a failed suspend:
    1. USB power remains.
    2. The power LED stays on solid.
    3. All fans seem to still be on.
    4. I can hear what I believe is the primary hard drive shutting off.
    5. Despite USB power remaining, the USB-powered keyboard does not respond to anything, and its indicator LEDs shut off.
    Pressing the power button does nothing, and I have not yet found a way to wake it up. When troubleshooting the first round of suspend issues not too long ago, I ended up building a list of modules to disable upon sleeping. Here's my config file for them, /etc/pm/config.d/01modules: SUSPEND_MODULES="uhci_hd ehci_hd button" All of my other pm configuration files are stock. In case it's any help, here are my relevant BIOS settings. Thanks.

    Read the article

  • Do glue records in non-circular dns-lookups speed up domain resolution or not?

    - by Joe Hopfgartner
    Doing a lookup for my domain on http://www.intodns.com/ I noticed these two messages. In the Parent section: "DNS Parent sent Glue: The parent nameserver g.gtld-servers.net is not sending out GLUE for every nameserver listed, meaning it is sending out your nameservers' host names without sending the A records of those nameservers. It's OK, but you have to know that this will require an extra A lookup that can slightly delay connections to your site. This happens a lot if you have nameservers on a different TLD (domain.com, for example, with nameserver ns.domain.org)." And in the NS section: "Glue for NS records INFO: GLUE was not sent when I asked your nameservers for your NS records. This is OK but you should know that in this case an extra A record lookup is required in order to get the IPs of your NS records. The nameservers without glue are: 109.230.225.96 84.201.40.52 You can fix this for example by adding A records to your nameservers for the zones listed above." I perfectly understand that the primary objective of glue records is to resolve circular dependencies. The classic use case: my domain is example.com and I want to have the nameserver ns1.example.com. This can never work on its own, because I cannot know the IP of ns1.example.com unless I fetch example.com, and in order to do that I need to fetch it from ns1.example.com. To resolve this deadlock I add a glue record for ns1.example.com containing the IP address of the nameserver, so it can work out. So this problem does not occur if the nameservers are in a different TLD than the domain I want to look up. However, to fetch the zone information from the nameservers I still need to know their IP addresses, right? And in order to know that, I need to fetch the zone the nameservers are in from their respective nameservers, right? (Or rather, my ISP needs to do that in the background.) So that is an extra lookup that takes time? If I have glue records, I know the IP address right away without the need to look it up, so this should speed up the resolution of my domain, shouldn't it? However my DNS zone provider (tecserver.at) replied that this would make no sense because "we are not running ns1.ourdomain.com and ns2.ourdomain.com as authoritative NS for ourdomain.com. This would be the only sense for glue records. Tecserver has a glue record because the NS for tecserver.at are ns1.tecserver.at and ns2.tecserver.at. Therefore a glue record is needed for resolution."
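
    One way to see exactly what the parent zone does or does not hand out (a sketch; substitute your own domain) is to query a gTLD server directly with recursion disabled and look at the ADDITIONAL section of the answer:
      dig @g.gtld-servers.net yourdomain.com NS +norecurse
    If the nameserver names come back in the AUTHORITY section without matching A records in ADDITIONAL, a resolver has to perform the extra lookup the report describes; whether that costs anything in practice also depends on how likely the nameservers' own records are to be cached already.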

    Read the article

  • How to setup multiple Apache SSL sites using multiple IP addresses

    - by Jeff
    How do you setup a single Apache2 config to host multiple HTTPS sites each on their own IP address? There will also be multiple HTTP sites on just a single IP address. I do not want to use Server Name Indication (SNI) as described here, and I'm only concerned with the important top-level Apache directives. That is, I just need to know the skeleton of how my config should look. The basic setup looks like this: Hosted on 1.1.1.1:80 (HTTP) - example.com - example.net - example.org Hosted on 2.2.2.2:443 (HTTPS) - secure.com Hosted on 3.3.3.3:443 (HTTPS) - secure.net Hosted on 4.4.4.4:443 (HTTPS) - secure.org And here are the important config directives I have so far, which is the closest I've come to a working iteration, but still no dice. I know I'm close, just need a little push in the right direction. Listen 1.1.1.1:80 Listen 2.2.2.2:443 Listen 3.3.3.3:443 Listen 4.4.4.4:443 NameVirtualHost 1.1.1.1:80 NameVirtualHost 2.2.2.2:443 NameVirtualHost 3.3.3.3:443 NameVirtualHost 4.4.4.4:443 # HTTP VIRTUAL HOSTS: <VirtualHost 1.1.1.1:80> ServerName example.com DocumentRoot /home/foo/example.com </VirtualHost> <VirtualHost 1.1.1.1:80> ServerName example.net DocumentRoot /home/foo/example.net </VirtualHost> <VirtualHost 1.1.1.1:80> ServerName example.org DocumentRoot /home/foo/example.org </VirtualHost> # HTTPS VIRTUAL HOSTS: <VirtualHost 2.2.2.2:443> ServerName secure.com DocumentRoot /home/foo/secure.com SSLEngine on SSLCertificateFile /home/foo/ssl/secure.com.crt SSLCertificateKeyFile /home/foo/ssl/secure.com.key SSLCACertificateFile /home/foo/ssl/ca.txt </VirtualHost> <VirtualHost 3.3.3.3:443> ServerName secure.net DocumentRoot /home/foo/secure.net SSLEngine on SSLCertificateFile /home/foo/ssl/secure.net.crt SSLCertificateKeyFile /home/foo/ssl/secure.net.key SSLCACertificateFile /home/foo/ssl/ca.txt </VirtualHost> <VirtualHost 4.4.4.4:443> ServerName secure.org DocumentRoot /home/foo/secure.org SSLEngine on SSLCertificateFile /home/foo/ssl/secure.org.crt SSLCertificateKeyFile /home/foo/ssl/secure.org.key SSLCACertificateFile /home/foo/ssl/ca.txt </VirtualHost> For what it's worth, I prefer to have each of my SSL sites on their own IP instead of including one of them on the primary VHOST IP. Any links which show a standard setup would be more than welcome!
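
    When testing a layout like this, it can help to have Apache print how it actually parsed the vhosts (a suggestion only; the binary may be called apachectl or apache2ctl depending on the distribution):
      apache2ctl configtest
      apache2ctl -S
    The -S output lists every VirtualHost with the IP:port it is bound to, which quickly shows whether one of the 443 vhosts landed on the wrong address. Also, in Apache 2.2 the NameVirtualHost lines should only be needed for the name-based port 80 vhosts; the one-site-per-IP SSL vhosts work without them.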

    Read the article

  • Why won't vyatta allow SMTP through my firewall?

    - by Solignis
    I am setting up a Vyatta router on VMware ESXi, but I seem to have hit a major snag: I could not get my firewall and NAT to work correctly. I am not sure what was wrong with NAT, but it "seems" to be working now. The firewall, however, is not allowing traffic from my WAN interface (eth0) to my LAN (eth1). I can confirm it's the firewall because I disabled all firewall rules and everything worked with just NAT. If I put the firewalls (WAN and LAN) back in place, nothing can get through to port 25. I am not really sure what the issue could be; I am using pretty basic firewall rules, and I wrote the rules while looking at the Vyatta docs, so unless there is something odd with the documentation they "should" be working. Here are my NAT rules so far:
      vyatta@gateway# show service nat
      rule 20 { description "Zimbra SNAT #1" outbound-interface eth0 outside-address { address 74.XXX.XXX.XXX } source { address 10.0.0.17 } type source }
      rule 21 { description "Zimbra SMTP #1" destination { address 74.XXX.XXX.XXX port 25 } inbound-interface eth0 inside-address { address 10.0.0.17 } protocol tcp type destination }
      rule 100 { description "Default LAN -> WAN" outbound-interface eth0 outside-address { address 74.XXX.XXX.XXX } source { address 10.0.0.0/24 } type source }
    Then here are my firewall rules; this is where I believe the problem is.
      vyatta@gateway# show firewall
      all-ping enable broadcast-ping disable conntrack-expect-table-size 4096 conntrack-hash-size 4096 conntrack-table-size 32768 conntrack-tcp-loose enable ipv6-receive-redirects disable ipv6-src-route disable ip-src-route disable log-martians enable
      name LAN_in { rule 100 { action accept description "Default LAN -> any" protocol all source { address 10.0.0.0/24 } } }
      name LAN_out { }
      name LOCAL { rule 100 { action accept state { established enable } } }
      name WAN_in { rule 20 { action accept description "Allow SMTP connections to MX01" destination { address 74.XXX.XXX.XXX port 25 } protocol tcp } rule 100 { action accept description "Allow established connections back through" state { established enable } } }
      name WAN_out { }
      receive-redirects disable send-redirects enable source-validation disable syn-cookies enable
    SIDENOTE: To test for open ports I have been using this website, http://www.yougetsignal.com/tools/open-ports/. It showed port 25 as open without the firewall rules and closed with the firewall rules.
    UPDATE: Just to see whether the firewall was working properly, I made a rule to block SSH from the WAN interface. When I checked port 22 on my primary WAN address it said it was still open, even though I outright blocked the port. Here is the rule I used:
      rule 21 { action reject destination { address 74.219.80.163 port 22 } protocol tcp }
    So now I am convinced that either I am doing something wrong or the firewall is not working like it should.
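
    One thing the configuration above does not show is the rule sets being attached to interfaces. In Vyatta a named rule set has no effect until it is applied to an interface and a direction, which would also explain why port 22 still showed open despite the reject rule. A sketch of what that attachment looks like (interface assignments assumed from the description):
      set interfaces ethernet eth0 firewall in name WAN_in
      set interfaces ethernet eth1 firewall in name LAN_in
      commit
      save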

    Read the article

  • Expectations for NTFS file recovery

    - by Fred Hamilton
    Yesterday I booted my XP system, and as I looked up a minute later I saw the light blue screen and tail end of that pre-boot disk check Windows sometimes does if it finds an error (or was previously told to run a disk check during the next boot). I didn't worry about it at the moment... But then I looked at my "scratch" disk, which was a 70% full, 750GB hard disk... and it now looks like it has been freshly formatted. It doesn't have a single file on it, just the hidden "System Volume Information" folder and 750GB of freedom from data. I looked at some of the recovery tools from the Free NTFS partition recovery question and decided to try PC INSPECTOR™ File Recovery 4.x initially. It ran overnight and afterwards returned a list of thousands of files it could recover. The odd thing was that the filenames were lost, but the file extensions were not (WTF?), and all of the files were exactly 1,472kB in size. I recovered a dozen PDFs as a test, and 80% of them displayed OK despite being padded out to 1.5MB (though I assume any files over 1,472kB are hosed). My primary question is: is this the best I can expect from any file recovery software when trying to recover NTFS files? Or is there perhaps something better out there? I assume this is as good as it gets, but wanted to check in with the experts first. Bonus questions: What might have happened to my drive? I didn't intentionally format it, and I've never seen a disk error cause a drive to suddenly become a clean, reformatted drive. Could some malicious or confused piece of software have told my PC to format the disk on reboot? Is that even a function Windows XP has? Why can the file extensions be recovered but not the filenames? Does NTFS really treat them as separate entities? I thought I had 8.3 naming turned off, but maybe that had something to do with it. Or maybe the tool looks at the data in the file and guesses the extension?

    Read the article

  • DNS settings for resolving Host name to IP not working?

    - by Hasas Ali Khan
    I want to access my IIS hosted application over LAN. First I installed a DNS server. The DNS configuration steps are: Go to DNS Manager - right click on System Name - click on configure a DNS Server. DNS Server wizard open -, click on next button - Select radio button "forward lookup zone" click on next button. In the second window. click on radio button "The server maintains the zone" and then click next. Give the zone name "example.com" Click on radio button, "Do Not allow dynamic updates". and then click next button. In the next window, click on radio button "No it should not forward query" and then click next button. Complete the configure a DNS server wizard and then click on finish button. After it is managing the DNS records: In DNS server wizard. open tree of forward lookup zone and right click on the new zone name "example.com" - properties and click on "Start of authority" and write values on text boxes serial number=1 primary server=systemname.domainname responsible person=hostmaster.domainname Click on server name, highlight domain name, click on edit button and enter IP address of the server where I host my application. Highlight new zone name and right click on it and click "New Host" option. In this window there are three text boxes: Name(user parent name if blank)=scoring Fully Qualified Domain Name=scoring.example.com IP Address= My IP Address and check on "Create associated pointer(PTR) record" and click on "Add Host" Host button and then click done button. I have host header for my application is "scoring" on port 80 and its working fine on server my application setting are I have change its, Advance setting --> Application Pool Identity --> Local System application can access on server with host name "scoring" but it can not access on machines on LAN. When I change LAN machine host file that is under, C:/windows/system32/driver/etc/host and edit it and enter host name with hosted machine IP like this: scoring 192.168.1.20 By making these changes I can run the application over LAN machines as I mentioned above DNS setting by which I can run App over LAN with out editing the client's host file. What mistake am I doing in this configuration?
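
    A quick test from one of the LAN machines can separate "the zone is wrong" from "the clients are not asking this DNS server at all" (a sketch; replace the address with the DNS server's IP if it differs from the web server's):
      nslookup scoring.example.com 192.168.1.20
    If this resolves while a plain nslookup scoring.example.com does not, the LAN machines are simply not using the new DNS server and need it set in their network adapter (or DHCP) settings; no hosts-file edits should then be necessary.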

    Read the article

  • Postfix flow/hook reference, or high-level overview?

    - by threecheeseopera
    The Postfix MTA consists of several components/services that work together to perform the different stages of delivery and receipt of mail; these include the smtp daemon, the pickup and cleanup processes, the queue manager, the smtp service, pipe/spawn/virtual/rewrite ... and others (including the possibility of custom components). Postfix also provides several types of hooks that allow it to integrate with external software, such as policy servers, filters, bounce handlers, loggers, and authentication mechanisms; these hooks can be connected to different components/stages of the delivery process, and can communicate via (at least) IPC, network, database, several types of flat files, or a predefined protocol (e.g. milter). An old and very limited example of this is shown at this page. My question: Does anyone have access to a resource that describes these hooks, the components/delivery stages that the hook can interact with, and the supported communication methods? Or, more likely, documentation of the various Postfix components and the hooks/methods that they support? For example: Given the requirement "if the recipient primary MX server matches 'shadysmtpd', check the recipient address against a list; if there is a match, terminate the SMTP connection without notice". My software would need to 1) integrate into the proper part of the SMTP process, 2) use some method to perform the address check (TCP map server? regular expressions? mysql?), and 3) implement the required action (connection termination). Additionally, there will probably be several methods to accomplish this, and another requirement would be to find that which best fits (ex: a network server might be faster than a flat-file lookup; or, if a large volume of mail might be affected by this check, it should be performed as early in the mail process as possible). Real-world example: The apolicy policy server (performs checks on addresses according to user-defined rules) is designed as a standalone TCP server that hooks into Postfix inside the smtpd component via the directive 'check_policy_service inet:127.0.0.1:10001' in the 'smtpd_client_restrictions' configuration option. This means that, when Postfix first receives an item of mail to be delivered, it will create a TCP connection to the policy server address:port for the purpose of determining if the client is allowed to send mail from this server (in addition to whatever other restrictions / restriction lookup methods are defined in that option); the proper action will be taken based on the server's response. Notes: 1)The Postfix architecture page describes some of this information in ascii art; what I am hoping for is distilled, condensed, reference material. 2) Please correct me if I am wrong on any level; there is a mountain of material, and I am just one man ;) Thanks!
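
    For readers looking for the shape of such a hook, the policy-service example described above is wired into main.cf roughly as follows (a sketch only; the restriction list is deliberately minimal and not a recommended production policy):
      smtpd_client_restrictions =
          permit_mynetworks,
          check_policy_service inet:127.0.0.1:10001
    Postfix opens the TCP connection at the point in the SMTP conversation where that restriction list is evaluated and acts on the policy daemon's reply, which is the general pattern the question is trying to find documented for the other hook types as well.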

    Read the article

  • Unable to delete all partitions on flash drive using Windows 7 OS??

    - by irrational John
    Recently I purchased an ADATA C802 8GB flash drive. Since the drive was new I decided to run some of the HD Tune Pro (v4.50) performance tests on it, mostly just for the heck of it. To avoid accidentally destroying data, HD Tune refuses to write to a drive unless there are no partitions on it; if you do attempt to write to a drive with partitions, it posts the message "Writing is disabled. To enable writing please remove all partitions." As you would expect, the ADATA came formatted with a single primary FAT32 partition in the Master Boot Record. But a number of unexpected things happened when I attempted to delete that partition. The first thing I tried was to use the Windows 7 (64-bit) Disk Management tool (diskmgmt.msc) to delete the partition. It would not let me; the context menu choice to delete that volume was not available. Next I opened a command prompt window with admin authority and ran diskpart. Diskpart deleted the volume for me. However, when I attempted to run an HD Tune write test on the drive I still got the "Writing is disabled" message. Huh??? So I fired up a utility I have which allows viewing drives at the sector level and verified that the partition table in the Master Boot Record was empty. No partitions. Yet HD Tune still thought there were partitions on the drive? So why was I still getting the "Writing is disabled" message from HD Tune Pro? And why wouldn't the Windows 7 Disk Management tool let me change the partitions on this drive? After doing the above, I plugged the ADATA into my MacBook. I was then able to format it as either a GPT or MBR partitioned drive with no problems. I am not looking for suggestions on how to format this drive; I can do that. What I do not understand, and was hoping I might get insight into, is why this drive behaves so strangely under Windows 7. And BTW, what's up with HD Tune Pro? For what it's worth, if I plug the drive I formatted on my MacBook back into my Windows 7 64-bit system, I still run into roadblocks with the Disk Management tool. For example, I cannot delete all the GPT partitions on the ADATA so I can convert it into an MBR drive. I followed Microsoft's instructions, but the instructions just do not work with this ADATA flash drive. Does anyone know what's up with this? It makes no sense to me. Has something changed in Windows 7 (Vista)?
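
    For reference, the diskpart sequence that usually clears this sort of stubborn state is "clean", which wipes the partition structures entirely (a sketch; triple-check the disk number from "list disk" first, because clean is destructive):
      diskpart
      DISKPART> list disk
      DISKPART> select disk N
      DISKPART> clean
      DISKPART> create partition primary
      DISKPART> format fs=fat32 quick
      DISKPART> exit
    One other detail that may explain part of the odd behaviour: Windows 7 only mounts the first partition of media flagged as removable, so multi-partition or GPT layouts created elsewhere can look strange or uneditable in Disk Management.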

    Read the article

  • Laptop turning off when fan is spinning hard

    - by Ieyasu Sawada
    My laptop seems to have reached the end of its lifespan. It's an Acer laptop, so I guess that's normal, but I'd like to hear your opinions about this. The laptop is only 2 years old, and I hadn't heard the fan spinning like crazy until these past 5 months. What I did, hoping it was just installed applications consuming resources in the background:
    1. Used PC Decrapifier to uninstall some of the things that I don't need.
    2. Reformatted my computer, but only the primary partition, since my files are on the second partition.
    3. Bought a cooling pad.
    None of these worked. I noticed the fan spins so hard when:
    - I have a lot of browser tabs open.
    - I view a Flash video online in full-screen mode.
    - I use VLC to watch encoded videos. There's this thing called minicoder (http://sourceforge.net/projects/minicoder/) to reduce the size of videos without affecting much of the quality. I suspect it needs additional software (to make life easier for the hardware), even though the video works fine in VLC. VLC consumes about 300,000K and above (as seen in Task Manager) while watching these videos (.mkv).
    The problem: the laptop suddenly turns off after the fan has been spinning like crazy for about 20 minutes. I keep checking whether it's already too hot (using my fingers to feel the side of the laptop), but it's not, so I continue watching and then poof! The computer turns off. The laptop won't turn back on immediately after turning itself off: the power light comes on but goes off immediately, and I have to wait about 10-20 seconds before it boots up without problems. So how do I go about this? Is this just normal for Acer laptops after about 2 years of heavy usage (8-12 hours a day)? My usage is heavy, but I normally only have a text editor (Sublime) and a browser (Chrome) open. Here's what I got from HW Monitor:

    Read the article

  • SSL connection error during handshake on Windows Server 2008 R2

    - by Thomas
    I have a Windows 2008 R2 Server that runs a HTTPS Tunneling service. The software uses a certificate that is provided via the Windows certificate store. The certificate is located in the local computer private certificates. It supports server and client authentication with signing and keyencipherment. Cert chain The certificate chain looks fine. It's a Thawte SSL123 certificate. Thawte Premium Server CA (SHA1) [?e0 ab 05 94 20 72 54 93 05 60 62 02 36 70 f7 cd 2e fc 66 66] thawte Primary Root CA [?1f a4 90 d1 d4 95 79 42 cd 23 54 5f 6e 82 3d 00 00 79 6e a2] Thawte DV SSL CA [3c a9 58 f3 e7 d6 83 7e 1c 1a cf 8b 0f 6a 2e 6d 48 7d 67 62] Server certificate Issues Most browsers accept the certificate without any warning. But IE 7 on Windows XP SP3 and Opera 12 on OSX just report an connection error. Opera complains: Secure connection: fatal error (552) https://www.example.com/ Opera was not able to connect to the server, because the server does not communicate via any secure protocol known to Opera. A connection test using openssl s_client -connect www.example.com:443 -state says: CONNECTED(00000003) SSL_connect:before/connect initialization SSL_connect:SSLv2/v3 write client hello A 52471:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:/SourceCache/OpenSSL098/OpenSSL098-35.1/src/ssl/s23_lib.c:182: ssldump -aAHd host www.example.com during curl https://www.example.com/ reports: New TCP connection #1: localhost(53302) <-> www.example.com(443) 1 1 0.0235 (0.0235) C>SV3.1(117) Handshake ClientHello Version 3.1 random[32]= 50 77 56 29 e8 23 82 3b 7f e0 ae 2d c1 31 cb ac 38 01 31 85 4f 91 39 c1 04 32 a6 68 25 cd a0 c1 cipher suites Unknown value 0x39 Unknown value 0x38 Unknown value 0x35 TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA TLS_RSA_WITH_3DES_EDE_CBC_SHA Unknown value 0x33 Unknown value 0x32 Unknown value 0x2f Unknown value 0x9a Unknown value 0x99 Unknown value 0x96 TLS_RSA_WITH_RC4_128_SHA TLS_RSA_WITH_RC4_128_MD5 TLS_DHE_RSA_WITH_DES_CBC_SHA TLS_DHE_DSS_WITH_DES_CBC_SHA TLS_RSA_WITH_DES_CBC_SHA TLS_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA TLS_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA TLS_RSA_EXPORT_WITH_DES40_CBC_SHA TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5 TLS_RSA_EXPORT_WITH_RC4_40_MD5 Unknown value 0xff compression methods unknown value NULL 1 0.0479 (0.0243) S>C TCP FIN 1 0.0481 (0.0002) C>S TCP FIN Thawte provides two Java based SSL Checkers. The Legacy Thawte SSL Certificate Installation Checker and the sslToolBox. Both validate the certificate under Windows XP but report connection errors under OSX and Windows 2008 R2.
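
    One way to narrow down which protocol versions the service actually accepts (a sketch; these s_client switches exist in the OpenSSL 0.9.8/1.0 series) is to force each version instead of the default SSLv2-compatible hello that failed above:
      openssl s_client -connect www.example.com:443 -tls1
      openssl s_client -connect www.example.com:443 -ssl3
    If the forced handshakes succeed while the plain connect fails, the tunneling service is rejecting the backwards-compatible ClientHello format, which at least points at the server-side protocol configuration rather than the certificate chain.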

    Read the article

  • How can I change exim's DKIM and SPF for emails sent?

    - by 0pt1m1z3
    I've now spent 2 hours trying to figure out this issue and I am about to give up and go to bed. I've been having issues with Gmail rejecting emails from my VPS server because of false spam alerts (probably caused by lfd sending too many emails). So I changed my Exim config to send emails from a different IP (my VPS comes with 3) and that fixed the issue. I also enabled DKIM and SPF on my domains for added measure. But now, all my emails appear as ("From: Sender Name via server.domain1.com") where server.domain1.com is my VPS hostname. I previously had the same issue in Outlook and turning off "Set SMTP Sender: headers" solved that problem. But I believe adding the DKIM and SPF now makes Gmail add "via server.domain1.com" to my messages. How do I fix this? This is a typical header for a message (as it appears at gmail): Delivered-To: [email protected] Received: by 10.60.44.163 with SMTP id f3csp248622oem; Thu, 29 Mar 2012 21:23:18 -0700 (PDT) Received: by 10.50.106.200 with SMTP id gw8mr452788igb.10.1333081398523; Thu, 29 Mar 2012 21:23:18 -0700 (PDT) Return-Path: <[email protected]> Received: from domain2.com ([X.X.X.X]) by mx.google.com with ESMTPS id y1si810998igb.3.2012.03.29.21.23.18 (version=TLSv1/SSLv3 cipher=OTHER); Thu, 29 Mar 2012 21:23:18 -0700 (PDT) Received-SPF: pass (google.com: domain of [email protected] designates X.X.X.X as permitted sender) client-ip=X.X.X.X; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates X.X.X.X as permitted sender) [email protected]; dkim=pass [email protected] DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=server.domain1.com; s=default; h=Date:Message-Id:From:Content-type:MIME-Version:Subject:To; bh=wF8bBRgh01EYg4t5DAeVPv1Ps906UVIeRnQCb/HvSYw=; b=k/Pg7lnrO+Ud/z1mOTv+O/3DiJzzQgyBhfIizIaFHM8tF/eNJt5P2k+9yQB224sxYstZIWwVRBJmiqvcM1QhARv1HWqWma0crppZ3JOn+LRHANan634OBi+58SIRA+gu; Received: (Exim 4.77) id 1SDTVE-0005HA-9Y for [email protected]; Fri, 30 Mar 2012 00:31:56 -0400 To: [email protected] Subject: Password Reset Request MIME-Version: 1.0 Content-type: text/html; charset=iso-8859-1 From: Sender Name <[email protected]> Message-Id: <[email protected]> Date: Fri, 30 Mar 2012 00:31:56 -0400 X-AntiAbuse: This header was added to track abuse, please include it with any abuse report X-AntiAbuse: Primary Hostname - server.domain1.com X-AntiAbuse: Original Domain - domain2.com X-AntiAbuse: Originator/Caller UID/GID - [507 504] / [47 12] X-AntiAbuse: Sender Address Domain - server.domain1.com

    Read the article

  • Windows 2008 IIS 7.0 HTTP to HTTPS Redirect -- Versus IIS 6.0 Mechanism

    - by Dan7el
    This topic, creating a mechanism for redirection from HTTP to HTTPS on a Windows 2008 server running IIS 7.0, is a much written-about topic on the Internet. How this is done is not really my issue. My issue is more one of explaining why this can't be done with the standard HTTP Redirect module that ships with Windows 2008 IIS 7.0, and why other, more arduous methods are needed instead. First, the IIS 6.0 method requires no externally available modules, nor does it require any modifications to web.config or any other development effort. It's outlined here: http://blogs.microsoft.co.il/blogs/dorr/archive/2009/01/13/how-to-force-redirection-from-http-to-https-on-iis-6-0.aspx The basic steps are to run the snap-in, get the properties on the site, and make some modifications. Presto, you have the HTTP-to-HTTPS redirect set up. Now, on the IIS 7.0 platform, it doesn't seem this simple. An initial search found the following site: http://www.sslshopper.com/iis7-redirect-http-to-https.html which has two separate approaches:
    1. Installing a separately available Microsoft module, the URL Rewrite Module, and then adding XML to web.config.
    2. A custom error page.
    There might be other methods, but these are the basic ones, and the first is listed as the primary method. But wait... there exists on IIS 7.0 an HTTP Redirect module. So... why can't I use the HTTP Redirect module to do this very thing? This is really my big question. I need to know this because my management is going to insist I use the HTTP Redirect module and set up the HTTP-to-HTTPS redirect in a fashion similar to how we do it in IIS 6.0. Can someone please explain to me, in clean, simple, easy-to-understand terms that both I and my management can understand, why I need to go get the URL Rewrite Module, install it on the server, and make the web.config changes suggested by the article, instead of simply using the HTTP Redirect module that's already installed on the site? Thanks a bunch.
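
    For reference, the URL Rewrite approach from the linked article generally comes down to a rule like this in web.config (a sketch; the rule name is arbitrary and the module must already be installed):
      <system.webServer>
        <rewrite>
          <rules>
            <rule name="Redirect to HTTPS" stopProcessing="true">
              <match url="(.*)" />
              <conditions>
                <add input="{HTTPS}" pattern="off" ignoreCase="true" />
              </conditions>
              <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>
      </system.webServer>
    The short version of the "why": the built-in HTTP Redirect module sends every request for a site to a single fixed destination and cannot be made conditional on whether the request arrived over HTTP or HTTPS, so on a site bound to both ports it redirects the HTTPS traffic as well and loops; the usual IIS 6.0 trick instead relied on requiring SSL and customizing the resulting 403.4 error page, which the newer platform replaces with the rewrite rule above.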

    Read the article

  • Has anyone achieved true differential sync with rsync in ESXi?

    - by Julius
    Berate me later on the fact that I'm using the service console to do anything in ESXi... I've got a working rsync binary (v3.0.4) that I can use in ESXi 4.1U1. I tend to use rsync over cp when copying VM's or backups from one local datastore to another local datastore. I've used rsync to copy data from one ESXi box to another but that was just for small files. In now trying to do true differential syncs of backups taken via ghettoVCB between my primary ESXi machine and a secondary one. But even when I do this locally (one datastore to another datastore on the same ESXi machine) rsync appears to copy the files in their entirety. I've got two VMDK's totally 80GB in size, and rsync still takes anywhere between 1 and 2 hours but the VMDK's aren't growing that much daily. Below is the rsync command I'm executing. I am copying locally because ultimately these files will get copied onto a datastore created from a LUN on a remote system. Its not an rsync that'll be serviced by an rsync daemon on a remote system. rsync -avPSI VMBACKUP_2011-06-10_02-27-56/* VMBACKUP_2011-06-01_06-37-11/ --stats --itemize-changes --existing --modify-window=2 --no-whole-file sending incremental file list >f..t...... VM-flat.vmdk 42949672960 100% 15.06MB/s 0:45:20 (xfer#1, to-check=5/6) >f..t...... VM.vmdk 556 100% 4.24kB/s 0:00:00 (xfer#2, to-check=4/6) >f..t...... VM.vmx 3327 100% 25.19kB/s 0:00:00 (xfer#3, to-check=3/6) >f..t...... VM_1-flat.vmdk 42949672960 100% 12.19MB/s 0:56:01 (xfer#4, to-check=2/6) >f..t...... VM_1.vmdk 558 100% 2.51kB/s 0:00:00 (xfer#5, to-check=1/6) >f..t...... STATUS.ok 30 100% 0.02kB/s 0:00:01 (xfer#6, to-check=0/6) Number of files: 6 Number of files transferred: 6 Total file size: 85899350391 bytes Total transferred file size: 85899350391 bytes Literal data: 2429682778 bytes Matched data: 83469667613 bytes File list size: 129 File list generation time: 0.001 seconds File list transfer time: 0.000 seconds Total bytes sent: 2432530094 Total bytes received: 5243054 sent 2432530094 bytes received 5243054 bytes 295648.92 bytes/sec total size is 85899350391 speedup is 35.24 Is this because ESXi is itself making so many changes to the VMDK's that as far as rsync is concerned the entire file has to be retransmitted? Has anyone actually achieved actual diff sync with ESXi?
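
    One flag that may be worth trying, sketched below on the same paths: without --inplace, rsync writes a complete temporary copy of each destination VMDK even when the delta transfer only moves a little literal data, so the job is bound by rewriting the full 80 GB. --inplace updates only the changed blocks in the existing files (it is not compatible with --sparse, so the -S from the original command is dropped here). The block checksums still require reading both copies end to end, so this will never be as cheap as true changed-block tracking.
      rsync -avPI --inplace --no-whole-file --modify-window=2 --stats \
          VMBACKUP_2011-06-10_02-27-56/* VMBACKUP_2011-06-01_06-37-11/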

    Read the article

  • What is the current state of Ubuntu's transition from init scripts to Upstart? [migrated]

    - by Adam Eberlin
    What is the current state of Ubuntu's transition from init.d scripts to upstart? I was curious, so I compared the contents of /etc/init.d/ to /etc/init/ on one of our development machines, which is running Ubuntu 12.04 LTS Server. # /etc/init.d/ # /etc/init/ acpid acpid.conf apache2 --------------------------- apparmor --------------------------- apport apport.conf atd atd.conf bind9 --------------------------- bootlogd --------------------------- cgroup-lite cgroup-lite.conf --------------------------- console.conf console-setup console-setup.conf --------------------------- container-detect.conf --------------------------- control-alt-delete.conf cron cron.conf dbus dbus.conf dmesg dmesg.conf dns-clean --------------------------- friendly-recovery --------------------------- --------------------------- failsafe.conf --------------------------- flush-early-job-log.conf --------------------------- friendly-recovery.conf grub-common --------------------------- halt --------------------------- hostname hostname.conf hwclock hwclock.conf hwclock-save hwclock-save.conf irqbalance irqbalance.conf killprocs --------------------------- lxc lxc.conf lxc-net lxc-net.conf module-init-tools module-init-tools.conf --------------------------- mountall.conf --------------------------- mountall-net.conf --------------------------- mountall-reboot.conf --------------------------- mountall-shell.conf --------------------------- mounted-debugfs.conf --------------------------- mounted-dev.conf --------------------------- mounted-proc.conf --------------------------- mounted-run.conf --------------------------- mounted-tmp.conf --------------------------- mounted-var.conf networking networking.conf network-interface network-interface.conf network-interface-container network-interface-container.conf network-interface-security network-interface-security.conf newrelic-sysmond --------------------------- ondemand --------------------------- plymouth plymouth.conf plymouth-log plymouth-log.conf plymouth-splash plymouth-splash.conf plymouth-stop plymouth-stop.conf plymouth-upstart-bridge plymouth-upstart-bridge.conf postgresql --------------------------- pppd-dns --------------------------- procps procps.conf rc rc.conf rc.local --------------------------- rcS rcS.conf --------------------------- rc-sysinit.conf reboot --------------------------- resolvconf resolvconf.conf rsync --------------------------- rsyslog rsyslog.conf screen-cleanup screen-cleanup.conf sendsigs --------------------------- setvtrgb setvtrgb.conf --------------------------- shutdown.conf single --------------------------- skeleton --------------------------- ssh ssh.conf stop-bootlogd --------------------------- stop-bootlogd-single --------------------------- sudo --------------------------- --------------------------- tty1.conf --------------------------- tty2.conf --------------------------- tty3.conf --------------------------- tty4.conf --------------------------- tty5.conf --------------------------- tty6.conf udev udev.conf udev-fallback-graphics udev-fallback-graphics.conf udev-finish udev-finish.conf udevmonitor udevmonitor.conf udevtrigger udevtrigger.conf ufw ufw.conf umountfs --------------------------- umountnfs.sh --------------------------- umountroot --------------------------- --------------------------- upstart-socket-bridge.conf --------------------------- upstart-udev-bridge.conf urandom --------------------------- --------------------------- ureadahead.conf --------------------------- ureadahead-other.conf 
--------------------------- wait-for-state.conf whoopsie whoopsie.conf To be honest, I'm not entirely sure if I'm interpreting the division of responsibilities properly, as I didn't expect to see any overlap (of what framework handles which services). So I was quite surprised to learn that there was a significant amount of overlap in service references, in addition to being unable to discern which of the two was intended to be the primary service framework. Why does there seem to be a fair amount of redundancy in individual service handling between init.d and upstart? Is something else at play here that I'm missing? What is preventing upstart from completely taking over for init.d? Is there some functionality that certain daemons require which upstart does not yet have, which are preventing some services from converting? Or is it something else entirely?

    Read the article

  • WordPress 3.5 Multisite and nginx siteurl issues

    - by Florin Gogianu
    I'm setting up multisite on localhost in subdirectories. The problem is that when I'm trying to access the dashboard of a site I just created ( localhost/wptest/site/wp-admin ) I get "This webpage has a redirect loop" and when I try to access the actual website ( localhost/wptest/site ) the page loads but without assets, such as css. When I access the network dashboard, or the primary site dashboard on localhost/wptest everything is just fine. Also when I edit the permalink of the second site in the network dashboard, to be like this: localhost/site it also runs fine. How to make it work with the default permalink structure localhost/wptest/site? The wordpress files are in /usr/share/html/wptest The wp-config.php is as follows: define('WP_ALLOW_MULTISITE', true); define('MULTISITE', true); define('SUBDOMAIN_INSTALL', false); define('DOMAIN_CURRENT_SITE', 'localhost'); define('PATH_CURRENT_SITE', '/wptest/'); define('SITE_ID_CURRENT_SITE', 1); define('BLOG_ID_CURRENT_SITE', 1); And the server block / virtual host is like this: server { ##DM - uncomment following line for domain mapping listen 80 default_server; #server_name example.com *.example.com ; ##DM - uncomment following line for domain mapping #server_name_in_redirect off; access_log /var/log/nginx/example.com.access.log; error_log /var/log/nginx/example.com.error.log; root /usr/share/nginx/html/wptest; index index.html index.htm index.php; if (!-e $request_filename) { rewrite /wp-admin$ $scheme://$host$uri/ permanent; rewrite ^(/[^/]+)?(/wp-.*) $2 last; rewrite ^(/[^/]+)?(/.*\.php) $2 last; } location / { try_files $uri $uri/ /index.php?$args ; } location ~ \.php$ { try_files $uri /index.php; include fastcgi_params; fastcgi_pass unix:/var/run/php5-fpm.sock; } location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ { access_log off; log_not_found off; expires max; } location = /robots.txt { access_log off; log_not_found off; } location ~ /\. { deny all; access_log off; log_not_found off; } } And finally here's an error log: 2013/06/29 08:05:37 [error] 4056#0: *52 rewrite or internal redirection cycle while internally redirecting to "/index.php", client: 127.0.0.1, server: example.com, request: "GET /nginx HTTP/1.1", host: "localhost"

    Read the article

  • PHP hits 100% CPU and eats RAM at the same time Monday to Friday

    - by Daniel Samuels
    We run a learning platform for primary schools here in the UK and it's all been running extremely well. However at around 4PM Monday to Friday we see the same issue arise -- 1-2 PHP threads will spike to 100% CPU and gradually start eating up RAM until the server(s) fall over. 98%+ of our requests are HTTPS, these come into our Layer 7 load balancer which then decrypts the SSL data, adds the X-HTTP-Forwarded-For header and forwards the data onto an application server (we have 2 of those at the moment) on port 80. Our application servers have Varnish on port 80 which takes in the request from the load balancer and passes the request through to Nginx on port 81. Nginx then works out which 'vhost' it needs to use and passes any PHP processing through to PHP-CGI which is listening on a socket (managed through spawn-fcgi). There's an instance of Memcached running too, MySQL runs on a separate server / slave setup. Throughout the day the load will typically go no higher than 0.8 on either of the application servers, however at around 4PM our problem arises. I've managed to run strace on a few of the actual threads when they cause the problem and I always see the same thing: stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644,st_size=3661, ...}) = 0 stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644,st_size=3661, ...}) = 0 This is repeated infinitely and never stops until you SEGKILL the process or oomkiller kills it. There are no cron jobs scheduled to run at that time and I don't have any way of seeing exactly what Nginx request is associated with the PHP process which is running. We are running PHP 5.3.14 which we upgraded to from 5.3.8 last week to rule out the older version being the problem. This issue has been going on a few months now and we have no idea what is causing it. We deploy our software very frequently, so it's difficult to track down a specific release which may have started the problem - especially as we do not know the date of the first occurrence of this issue. Varnish is version 3.0.1, Nginx is 1.0.6 (which I understand is about a year old now), our servers are running CentOS release 5.7 (Final) they have Intel i3 540s at 3.07Ghz and 8GB of RAM. There's a discussion on the Debian mailing list about something very similar, you can find that here. Has anyone seen anything like this in the past, does anyone have any ideas or suggestions? Are there a way of linking an Nginx request directly to a PHP thread? Is there a better way of seeing what the PHP process is doing? (I've seen GDB mentioned, though I'll have to recompile PHP) Thanks!
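
    The strace loop above, stat()ing the same zoneinfo file over and over, looks a lot like PHP guessing the system timezone, which it falls back to whenever date.timezone is unset. Pinning it in php.ini is cheap to try, though this is a hypothesis rather than a confirmed cause of the 4 PM spikes:
      ; php.ini (or an included conf.d fragment); the path depends on the build
      date.timezone = "Europe/London"
    The php-cgi pool has to be respawned afterwards for the setting to take effect.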

    Read the article

  • Some questions regarding Hostname

    - by user481913
    I just bought a new VPS hosting plan and I have a few questions. I hope someone here can clear up the doubts for me.
    1) Is it necessary to have a real domain for a VPS hostname? I suppose I can just use a non-real domain like anydomain.com and something like 'server' for the computer name, so I'll end up with something like server.anydomain.com as the VPS's hostname. I want to do this for the sake of putting in a hostname to configure the VPS and get it going. Since this non-real domain name does not need to be publicly accessible, I don't need to register or own it and can instead access the server by its IP address. Is that correct? But I suppose that this also depends on whether my web host allows it?
    2) I would also like to run some real sites with real domain names on this VPS, so can I just configure the zone file on the primary nameserver, make entries for these domains, and point an A record at the VPS's IP to make them publicly accessible over the Internet? For example, for my first domain I could make an entry like this:
      $TTL 86400
      mydomain1.com.  IN  SOA  ns1.mywebhost.com. admin.mydomain1.com. (
                      2004011522  ; Serial no., based on date
                      21600       ; Refresh after 6 hours
                      3600        ; Retry after 1 hour
                      604800      ; Expire after 7 days
                      3600 )      ; Minimum TTL of 1 hour
      server              IN  A      200._._._
      ns1.mywebhost.com.  IN  A      216._._._
      ns2.mywebhost.com.  IN  A      205._._._
      @                   IN  NS     ns1.mywebhost.com.
      @                   IN  NS     ns2.mywebhost.com.
      @                   IN  MX 10  server
      www                 IN  CNAME  server
      server              IN  CNAME  @
    (So this particular line tells the nameserver to point the URL mydomain1.com to server.anydomain.com at the particular IP address in the A record... is that right?) Similarly, for my second domain I could have a similar entry:
      $TTL 86400
      mydomain2.com.  IN  SOA  ns1.mywebhost.com. admin.mydomain2.com. ( ..... and so on .....
    Is that correct?
    3) Suppose for my VPS's hostname I ignorantly chose a domain that someone else already owns. I think that it won't affect the public accessibility of the real domain or website, since only the real owner of the domain has the right to provide the nameserver addresses in the TLD registries through his domain registrar. Is that correct?
    4) Can I change my VPS's hostname later? Would this create any complications?
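
    If the zones in question 2 end up hosted on your own BIND instance rather than the host's, a syntax check before reloading can catch typos in entries like the ones above; a sketch, with the zone file path as a placeholder:
      named-checkzone mydomain1.com /etc/bind/db.mydomain1.com
      named-checkconf -z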

    Read the article

  • Any way to recover ext4 filesystems from a deleted LVM logical volume?

    - by Vegar Nilsen
    The other day I had a proper brain fart moment while expanding a disk on a Linux guest under Vmware. I stretched the Vmware disk file to the desired size and then I did what I usually do on Linux guests without LVM: I deleted the LVM partition and recreated it, starting in the same spot as the old one, but extended to the new size of the disk. (Which will be followed by fsck and resize2fs.) And then I realized that LVM doesn't behave the same way as ext2/3/4 on raw partitions... After restoring the Linux guest from the most recent backup (taken only five hours earlier, luckily) I'm now curious on how I could have recovered from the following scenario. It's after all virtually guaranteed that I'll be a dumb ass in the future as well. Virtual Linux guest with one disk, partitioned into one /boot (primary) partition (/dev/sda1) of 256MB, and the rest in a logical, extended partition (/dev/sda5). /dev/sda5 is then setup as a physical volume with pvcreate, and one volume group (vgroup00) created on top of it with the usual vgcreate command. vgroup00 is then split into two logical volumes root and swap, which are used for / and swap, logically. / is an ext4 file system. Since I had backups of the broken guest I was able to recreate the volume group with vgcfgrestore from the backup LVM setup found under /etc/lvm/backup, with the same UUID for the physical volume and all that. After running this I had two logical volumes with the same size as earlier, with 4GB free space where I had stretched the disk. However, when I tried to run "fsck /dev/mapper/vgroup00-root" it complained about a broken superblock. I tried to locate backup superblocks by running "mke2fs -n /dev/mapper/vgroup00-root" but none of those worked either. Then I tried to run TestDisk but when I asked it to find superblocks it only gave an error about not being able to open the file system due to a broken file system. So, with the default allocation policy for LVM2 in Ubuntu Server 10.04 64-bit, is it possible that the logical volumes are allocated from the end of the volume group? That would definitely explain why the restored logical volumes didn't contain the expected data. Could I have recovered by recreating /dev/sda5 with exactly the same size and disk position as earlier? Are there any other tools I could have used to find and recover the file system? (And clearly, the question is not whether or not I should have done this in a different way from the start, I know that. This is a question about what to do when shit has already hit the fan.)
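
    On the "other tools" question, one low-risk probe is to find where the PV's data area starts and look for an ext4 superblock at that offset through a read-only loop device; this shows whether the old root LV really began at the start of the data area or somewhere else in the volume group. A sketch; the offset must come from your own pvs output, not be copied from here:
      pvs -o +pe_start --units b /dev/sda5
      losetup -r -o <pe_start_in_bytes> /dev/loop0 /dev/sda5
      dumpe2fs -h /dev/loop0
      losetup -d /dev/loop0
    If dumpe2fs prints a sane superblock there, the restored mapping was probably just offset or ordered differently than expected, which is a much better starting point than raw carving.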

    Read the article

  • Moving default web site to another drive

    - by Chadworthington
    I set the default location from c:\inetpub\wwwroot to d:\inetpub\wwwroot, but when I access my .NET 4.0 site I get this error:
    Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately. Parser Error Message: Unrecognized attribute 'targetFramework'. Note that attribute names are case-sensitive.
    Source Error:
      Line 105: Set explicit="true" to force declaration of all variables.
      Line 106: -->
      Line 107: <compilation debug="true" strict="true" explicit="true" targetFramework="4.0">
      Line 108: <assemblies>
      Line 109: <add assembly="System.Web.Extensions.Design, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
    When I try to manage the Basic Settings on the site and click the "Test Settings" button, I see that I have a problem under "authorization":
    The server is configured to use pass-through authentication with a built-in account to access the specified physical path. However, IIS Manager cannot verify whether the built-in account has access. Make sure that the application pool identity has Read access to the physical path. If this server is joined to a domain, and the application pool identity is NetworkService or LocalSystem, verify that <domain>\<computer_name>$ has Read access to the physical path. Then test these settings again.
    1) Do I need to grant rights to IIS on the new folder? Which user? I thought it was something like IIS_USER or similar, but I cannot determine the correct name of the user.
    2) Also, do I need to set the default version of the framework somewhere, at the Default Site level or at the virtual folder level? How is this done in IIS 7? I am used to IIS 5, or whatever came with XP Pro.
    3) My original site had a subfolder under wwwroot called "aspnet_client." How was this created? I manually copied it to the corresponding new location. My app was using separate ASP-specific databases for storing session state and role info, if that is relevant. Thanks
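
    Two command-line sketches matching questions 1 and 2 (the pool name is a placeholder for whatever application pool the site actually uses): grant the IIS worker-process group read access to the new path, and move the pool to the .NET 4.0 runtime, which is what the "Unrecognized attribute 'targetFramework'" parser error usually points at.
      icacls "D:\inetpub\wwwroot" /grant "IIS_IUSRS:(OI)(CI)RX" /T
      %windir%\system32\inetsrv\appcmd set apppool "DefaultAppPool" /managedRuntimeVersion:v4.0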

    Read the article

  • Ping Unknown Host on CentOS at EC2

    - by organicveggie
    Weird problem. We have a collection of servers running CentOS 5 on EC2. The setup includes two DNS servers and two LDAP servers, and DNS has a CNAME pointing at the primary LDAP server. One machine (and only one machine) is giving me problems: I can ssh into the server using LDAP authentication, but once I'm on the machine, ping won't resolve the LDAP host even though DNS seems to work fine. Here's ping:

      $ ping ldap.mycompany.ec2
      ping: unknown host ldap.mycompany.ec2

    Here's the output of dig:

      $ dig ldap.mycompany.ec2
      ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> ldap.mycompany.ec2
      ;; global options:  printcmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2893
      ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0

      ;; QUESTION SECTION:
      ;ldap.mycompany.ec2.                    IN      A

      ;; ANSWER SECTION:
      ldap.mycompany.ec2.     3600    IN      CNAME   ec2-hostname.compute-1.amazonaws.com.
      ec2-hostname.compute-1.amazonaws.com. 55 IN A   aaa.bbb.ccc.ddd

      ;; Query time: 12 msec
      ;; SERVER: 10.32.159.xxx#53(10.32.159.xxx)
      ;; WHEN: Tue May 31 11:16:30 2011
      ;; MSG SIZE  rcvd: 107

    And here is resolv.conf:

      $ cat /etc/resolv.conf
      search mycompany.ec2
      nameserver 10.32.159.xxx
      nameserver 10.244.19.yyy

    And here is my hosts file:

      $ cat /etc/hosts
      10.122.15.zzz   bamboo4 bamboo4.mycompany.ec2
      127.0.0.1       localhost localhost.localdomain

    And here's nsswitch.conf:

      $ cat /etc/nsswitch.conf
      passwd:     files ldap
      shadow:     files ldap
      group:      files ldap
      sudoers:    ldap files
      hosts:      files dns
      bootparams: nisplus [NOTFOUND=return] files
      ethers:     files
      netmasks:   files
      networks:   files
      protocols:  files
      rpc:        files
      services:   files
      netgroup:   files ldap
      publickey:  nisplus
      automount:  files ldap
      aliases:    files nisplus

    So DNS works the way I would expect, I can ping the LDAP server by IP address, and I can even access the box over SSH using LDAP authentication. Any suggestions?
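    A small diagnostic sketch for that situation, using the hostnames as redacted above: compare what the libc resolver (the path ping and most applications use) returns with what dig gets directly from the name server, and rule out a stale nscd cache, a common culprit on CentOS 5 when the two disagree.

      # libc resolver (what ping uses) vs. a direct DNS query
      getent hosts ldap.mycompany.ec2
      dig +short ldap.mycompany.ec2

      # if getent fails while dig succeeds, check for a stale nscd hosts cache
      service nscd status && nscd -i hosts    # -i invalidates the hosts cache (run as root)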

    Read the article

  • nginx + php fpm -> 404 php pages - file not found

    - by Mahesh
    The error from the nginx log:

      *2037 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream

    The server block:

      server {
          listen 80; ## listen for ipv4; this line is default and implied
          #listen [::]:80 default ipv6only=on; ## listen for ipv6
          server_name .site.com;
          root /var/www/site;
          error_page 404 /404.php;
          access_log /var/log/nginx/site.access.log;
          index index.html index.php;

          if ($http_host != "www.site.com") {
              rewrite ^ http://www.site.com$request_uri permanent;
          }

          location ~* \.php$ {
              fastcgi_index index.php;
              fastcgi_pass 127.0.0.1:9000;
              #fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
              fastcgi_buffer_size 128k;
              fastcgi_buffers 256 4k;
              fastcgi_busy_buffers_size 256k;
              fastcgi_temp_file_write_size 256k;
              fastcgi_read_timeout 240;
              include /etc/nginx/fastcgi_params;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              fastcgi_param SCRIPT_NAME $fastcgi_script_name;
          }

          location ~ /\. {
              access_log off;
              log_not_found off;
              deny all;
          }

          location ~ /(libraries|setup/frames|setup/libs) {
              deny all;
              return 404;
          }

          location ~ ^/uploads/(\d+)/(\d+)/(\d+)/(\d+)/(.*)$ {
              alias /var/www/site/images/missing.gif;
              # i need to modify this to show only missing files. right now it is showing missing for all the files.
          }

          location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
              access_log off;
              expires 20d;
          }

          location /user_uploads/ {
              location ~ .*\.(php)?$ {
                  deny all;
              }
          }

          location ~ /\.ht {
              deny all;
          }
      }

    The php-fpm config is default and has not been touched. The problem is a little strange to me: error pages show "File not found" only when the request is for a .php file, while other missing URLs correctly go through the 404.php page. site.com/test calls 404.php; site.com/test.php gives "File not found". I keep searching and making changes, but nothing has solved it.
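    A note rather than a verified fix: with error_page 404 pointing at /404.php, a request for a missing .php URI still matches the PHP location and gets handed to php-fpm, which answers "File not found" (and logs "Primary script unknown") when SCRIPT_FILENAME doesn't exist on disk; the commonly suggested adjustment is a try_files $uri =404; line inside the PHP location so nginx serves its own 404 (and thus 404.php) first. A quick way to confirm what is happening, with paths taken from the config above:

      # reproduce the failure and see which side produces the 404
      curl -i http://www.site.com/test.php
      tail -n 20 /var/log/nginx/error.log     # "Primary script unknown" means php-fpm never found the file
      ls -l /var/www/site/test.php            # confirm the resolved SCRIPT_FILENAME really is missing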

    Read the article

  • I want to virtualize my workstation (Tier 1), Looking for Bare Metal Hypervisor for consumer grade components

    - by Chase Florell
    I find myself in this same bind at least once a year: the bind whereby I'm upgrading either a motherboard or an OS hard drive. It drives me crazy to have to reinstall Windows, Visual Studio, all my add-ins, reconfigure my settings, etc. every single time. I have a layout that I like and I want to stick with it. My question is: is there a bare metal hypervisor on the market that will enable me to virtualize my consumer-grade workstation? I really want to avoid host/client virtualization; bare metal is definitely a better way to go for my needs. Is this a good approach, or am I going to suffer some other undesirable side effects by doing this?

    Clarification: My machine has very limited purposes. My primary use is Visual Studio 2010 Professional, where I develop ASP.NET MVC web applications. The second piece of software that I use (that is system-intensive) is Photoshop CS3. Beyond that, my applications are limited to Outlook, Internet Explorer, Firefox, Opera, Chrome, LinqPad, and various other (small) apps. Beyond this, I'm considering working on a node.js project and might run Ubuntu on the same hypervisor if possible.

    System specs: Gigabyte motherboard, Intel i7 920, 12 GB RAM, a basic 500 GB 7200 RPM HDD for the OS, 4 VelociRaptors in RAID 1/0 for the build disk, and dual GTS 250 (512 MB) graphics cards (non-SLI) for quad monitors.

    On a side note, I also wouldn't be opposed to an alternative suggestion if the limitations are too great. I could install ESXi (or XenServer) on my box and build a separate "thin client" to RDP into the virtual machine. It appears as though RDP supports dual monitors.

    Edit (Dec 9, 2011): It's been nearly a year since I first asked this question. Since then, there have been a lot of great strides in hypervisor technology... AND MokaFive is now released for corporate use. I'd love to dig into this question a little more and find out if there is a solid bare metal hypervisor for workstations running consumer-grade components (i.e. not Dell, HP, Lenovo, etc.).
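    On the thin-client idea, a small note under the assumption of a recent Windows client: the stock RDP client has supported spanning a session across multiple monitors since RDP 7.0, so the dual-monitor requirement can be tested with a single switch (the hostname below is a placeholder):

      rem full-screen session spread across all local monitors
      mstsc /v:my-virtual-workstation /multimon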

    Read the article

< Previous Page | 348 349 350 351 352 353 354 355 356 357 358 359  | Next Page >