Search Results

Search found 14311 results on 573 pages for 'stan note'.


  • Blank Page: wordpress on nginx+php-fpm

    - by troutwine
    Good day. While this post discusses a similar setup to mine serving blank pages occasionally after having made a successful installation, I am unable to serve anything but blank pages. My setup: WordPress 3.0.4, nginx 0.8.54, php-fpm 5.3.5 (fpm-fcgi), Arch Linux.

    Configuration files:

    php-fpm.conf:

        [global]
        pid = run/php-fpm/php-fpm.pid
        error_log = log/php-fpm.log
        log_level = notice

        [www]
        listen = 127.0.0.1:9000
        listen.owner = www
        listen.group = www
        listen.mode = 0660
        user = www
        group = www
        pm = dynamic
        pm.max_children = 50
        pm.start_servers = 20
        pm.min_spare_servers = 5
        pm.max_spare_servers = 35
        pm.max_requests = 500

    nginx.conf:

        user www;
        worker_processes 1;
        error_log /var/log/nginx/error.log notice;
        pid /var/run/nginx.pid;
        events {
            worker_connections 1024;
        }
        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;
            gzip on;
            include /etc/nginx/sites-enabled/*.conf;
        }

    /etc/nginx/sites-enabled/blog_sharonrhodes_us.conf:

        upstream php {
            server 127.0.0.1:9000;
        }
        server {
            error_log /var/log/nginx/us/sharonrhodes/blog/error.log notice;
            access_log /var/log/nginx/us/sharonrhodes/blog/access.log;
            server_name blog.sharonrhodes.us;
            root /srv/apps/us/sharonrhodes/blog;
            index index.php;
            location = /favicon.ico { log_not_found off; access_log off; }
            location = /robots.txt { allow all; log_not_found off; access_log off; }
            location / {
                # This is cool because no php is touched for static content
                try_files $uri $uri/ /index.php?q=$uri&$args;
            }
            location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                include fastcgi_params;
                fastcgi_intercept_errors on;
                fastcgi_pass php;
            }
            location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
                expires max;
                log_not_found off;
            }
        }
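
    A common culprit for nothing-but-blank pages with nginx + php-fpm is that SCRIPT_FILENAME never reaches PHP, so php-fpm happily executes an empty script. A minimal diagnostic sketch, assuming the paths from the configuration above (run as root):

        # Check whether the stock fastcgi_params passes SCRIPT_FILENAME at all.
        grep SCRIPT_FILENAME /etc/nginx/fastcgi_params || echo "SCRIPT_FILENAME missing"

        # If it is missing, add this inside the "location ~ \.php$" block, then test and reload:
        #     fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        nginx -t && nginx -s reload

        # Confirm php-fpm responds independently of WordPress (remove info.php afterwards).
        echo '<?php phpinfo();' > /srv/apps/us/sharonrhodes/blog/info.php
        curl -s -H 'Host: blog.sharonrhodes.us' http://127.0.0.1/info.php | head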

    Read the article

  • Does VMware_ThinApp_4.0.3_169725.msi contain Trojan.Win32.Vapsup in it? [closed]

    - by Joe
    Today I ran a full system scan using Online Armor++. It detected Trojans in the installer. I have had this installer on the computer for many months and I do not remember if I ever installed it on this PC or not. For some reason I unpacked the installer with 7zip, though; I was probably going to attempt to make it portable. Anyway, I had the installer in a folder, and another folder next to it with all of the installer's files unpacked. The VMwareVS.cab file that was extracted from the installer also had its files extracted into another folder. This was all done many months ago. OA++ did not detect the installer itself or VMwareVS.cab as Trojans, but it did detect 4 of the files that I had unpacked as Trojans. Here are the details of what the scan detected on my PC today. Note: I uploaded these files to VirusTotal; the Ikarus and A-squared engines (the engines from Online Armor++) are not detecting anything, but some of the other engines are detecting the same Trojan that OA++ detected (Trojan.Win32.Vapsup).

    C:\Downloads\VMware_ThinApp_4.0.3_169725.msi [This file was not detected by the virus scan as infected]
        CRC-32: 50189335  MD5: 9e32e3272d2637fb6e0759a604879e6f  SHA-1: 19ef5a6d586ddcc5b9222ba57b0f14159655f3f8

    C:\Downloads\VMware_ThinApp_4.0.3_169725\VMwareVS.cab [This file was not detected by the virus scan as infected]
        CRC-32: d3a9694a  MD5: ddc278a8fe0a25486277d9800e6af85a  SHA-1: 456b731c8b6fdb7a1d7bcff3d1fbe9df58ccc73a

    Online Armor++ virus scan results:

    Detected Trojan.Win32.Vapsup.vee!A2 - C:\Downloads\VMware_ThinApp_4.0.3_169725\Binary.ThinstallProcess
        CRC-32: 4888b13c  MD5: 4884cb4622278c0835b9a5dcd2ae0473  SHA-1: ed879ae65147805dd69e1355c17df814b9d434ce

    Detected Trojan.Win32.Vapsup.vef!A2 - C:\Downloads\VMware_ThinApp_4.0.3_169725\VMwareVS\AppSync.exe
        CRC-32: fd20b378  MD5: cbdcdd590f7ffc52b6ce68fa11f2bda4  SHA-1: aebf685e02d6693df9eaa92c67dc5746792b5ecf

    Detected Trojan.Win32.Vapsup.veg!A2 - C:\Downloads\VMware_ThinApp_4.0.3_169725\VMwareVS\logging.dll
        CRC-32: 8adee5d5  MD5: 56ff9b83f58ba8eacb6e939aa4759bf0  SHA-1: b52fa38765a25fe6a2c4f60d76545a4dd64904eb

    Detected Trojan.Win32.Vapsup.vek!A2 - C:\Downloads\VMware_ThinApp_4.0.3_169725\VMwareVS\thinreg.exe
        CRC-32: 423c5652  MD5: c436feff8d9096e7475c84a6bca6096c  SHA-1: 685b84af796132ce144aacd6ff23379e17ddf1a7

    Are these files indeed infected by this Trojan, or is it just a false positive? Does anybody have the same version of the original installer who could check whether the checksums of the installer and unpacked files match? Should I be worried about whether this Trojan has spread and infected my machine? Thanks in advance for any help!
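
    For anyone with the same download who wants to compare, the digests above are easy to reproduce from a shell (paths are placeholders; adjust to wherever the files sit):

        # Recompute the reported digests for a local copy of the installer and cab.
        cd /path/to/Downloads
        md5sum  VMware_ThinApp_4.0.3_169725.msi VMware_ThinApp_4.0.3_169725/VMwareVS.cab
        sha1sum VMware_ThinApp_4.0.3_169725.msi VMware_ThinApp_4.0.3_169725/VMwareVS.cab
        # Note: plain `cksum` uses a different CRC polynomial than the CRC-32 shown above,
        # so compare on MD5/SHA-1 rather than the CRC column.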

    Read the article

  • Problem with authentication of users via IE when using "host header value"

    - by Richard
    Hi, I'm trying to set up multiple web sites in IIS 6. I've got a working virtual site residing under the default web site, but if I create a new web site in IIS, assign it a host header value, let it point to the very same file structure as in the previously mentioned site, and finally assign Windows integrated security only to the site - I still cannot log in to the new site using MSIE 6 or 8, while FF 3.5 works fine. In the web log I get these entries if I access the localhost site:

        2009-11-19 09:15:59 W3SVC1 127.0.0.1 GET /client/ - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 2 2148074254
        2009-11-19 09:15:59 W3SVC1 127.0.0.1 GET /client/ - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 1 0
        2009-11-19 09:15:59 W3SVC1 127.0.0.1 GET /client/Default.asp - 80 xxx\Administrator 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 200 0 0

    If I however access via the host header value site, I get prompted to log in but the login fails, and I also get an error "401 1 2148074252" which is not present when it succeeds. Can this be the issue?

    Pre login screen:

        2009-11-19 09:15:59 W3SVC1793297778 127.0.0.1 GET / - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 2 2148074254
        2009-11-19 09:15:59 W3SVC1793297778 127.0.0.1 GET / - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 1 2148074252
        2009-11-19 09:15:59 W3SVC1793297778 127.0.0.1 GET / - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 1 0

    Post login screen (note that Windows credentials have not been submitted):

        2009-11-19 09:15:59 W3SVC1793297778 127.0.0.1 GET / - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 1 0
        2009-11-19 09:15:59 W3SVC1793297778 127.0.0.1 GET / - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 1 2148074252

    Firefox will try to access using anonymous access and will prompt for login; after submitting Windows credentials it all works fine. For what reason is IE so stubbornly refusing to submit credentials to the "host header value" site? The site is in the Local intranet zone and login is ticked for that zone. No teaming NICs, no FW, no nothing - I'm clueless :( /Richard

    Read the article

  • Malware - Technical analysis

    - by nullptr
    Note: Please do not mod down or close. I'm not a stupid PC user asking to fix my PC problem; I am intrigued and am having a deep technical look at what's going on. I have come across a Windows XP machine that is sending unwanted P2P traffic. I have done a 'netstat -b' command and explorer.exe is sending out the traffic. When I kill this process the traffic stops and obviously Windows Explorer dies. Here is the header of the stream from the Wireshark dump (x.x.x.x is the machine's IP):

        GNUTELLA CONNECT/0.6
        Listen-IP: x.x.x.x:8059
        Remote-IP: 76.164.224.103
        User-Agent: LimeWire/5.3.6
        X-Requeries: false
        X-Ultrapeer: True
        X-Degree: 32
        X-Query-Routing: 0.1
        X-Ultrapeer-Query-Routing: 0.1
        X-Max-TTL: 3
        X-Dynamic-Querying: 0.1
        X-Locale-Pref: en
        GGEP: 0.5
        Bye-Packet: 0.1

        GNUTELLA/0.6 200 OK
        Pong-Caching: 0.1
        X-Ultrapeer-Needed: false
        Accept-Encoding: deflate
        X-Requeries: false
        X-Locale-Pref: en
        X-Guess: 0.1
        X-Max-TTL: 3
        Vendor-Message: 0.2
        X-Ultrapeer-Query-Routing: 0.1
        X-Query-Routing: 0.1
        Listen-IP: 76.164.224.103:15649
        X-Ext-Probes: 0.1
        Remote-IP: x.x.x.x
        GGEP: 0.5
        X-Dynamic-Querying: 0.1
        X-Degree: 32
        User-Agent: LimeWire/4.18.7
        X-Ultrapeer: True
        X-Try-Ultrapeers: 121.54.32.36:3279,173.19.233.80:3714,65.182.97.15:5807,115.147.231.81:9751,72.134.30.181:15810,71.59.97.180:24295,74.76.84.250:25497,96.234.62.221:32344,69.44.246.38:42254,98.199.75.23:51230

        GNUTELLA/0.6 200 OK

    So it seems that the malware has hooked into explorer.exe and hidden itself quite well, as a Norton scan doesn't pick anything up. I have looked in Windows Firewall and it shouldn't be letting this traffic through. I have had a look at the messages explorer.exe is sending in Spy++ and the only related ones I can see are socket connections etc... My question is: what can I do to look into this deeper? What does malware achieve by sending P2P traffic? I know the easiest way to fix the problem is to reinstall Windows, but I want to get to the bottom of it first, just out of interest. Edit: Had a look at Dependency Walker and Process Explorer. Both great tools. Here is an image of the TCP connections for explorer.exe in Process Explorer: http://img210.imageshack.us/img210/3563/61930284.gif

    Read the article

  • Suggestions for sharing and using data between Ubuntu and Windows 7 dual boot

    - by wizjany
    Note: TL;DR, scroll to bottom for summary. I recently set up my computer for a dual boot between Ubuntu 9.10 and Windows 7. My current drive setup is as follows. | A1 | A2 | | B1 | B2 | B3 | A1: 100 mb, windows 7 "System Reserved" boot partition A2: 230 gb, data section, this one needs to be shared between the operating systems B1: 125 gb, windows 7 OS B2: 123 gb, ubuntu OS B3: 2gb, linux swap space Pretty much i want to have my documents, music, pictures, videos, etc accessible from both operating systems. My first attempt involved making the data (a2) partition NTFS, and moving my home folder from ubuntu to the data partition. However, as I read NTFS does not work nice with permission, and it messed up my home folder. My next idea is one of the following: 1) format the data partition to ext2/3/4 and move my home folder from linux there, and get a driver to read ext partitions in windows 7. The problem with that is that most of the ext drivers/software are not compatible with windows 7 or do not integrate with windows explorer (I really don't want to open a separate software window just to access my data, plus it's probably not compatible with other software.) http://www.fs-driver.org/ looks promising, but I'm not sure how it works with ext4 and windows 7 (not officially supported, when trying in vista compatibility mode, it tells me I need to format the ext drive to use it). My next idea, 2) keep the home folder in ubuntu where it is, but create symlinks for the Documents, Music, etc folders to an NTFS formatted Data (A2) partition, and add those locations to the windows 7 libraries. I'm not totally sure how the permissions would work out, but it should be fine since it's only the documents, music, etc and not the important config files in the rest of /home/user/. Correct me if I'm wrong. Currently, symlinks is my best idea, although i'm not sure how it will work. Any suggestions, additions to my ideas, links, pointers, whatever would be greatly appreciated. Even if it means i should reformat both my drives and repartition (2 250gb drives if you want to suggest a setup for that), I won't be too opposed if that's the best suggestion (I've gone through the format/install/format/reinstall process 5 times over the past 3 days, once more won't hurt me). TL;DR, summary: I have two hard drives. One is partitioned for Ubuntu and Windows 7, the second one I want accessible to both operating systems to store documents, music, pictures, videos, etc. Suggestions on how to set up the data drive please =) P.S. bonus if I can get an apache server document root folder working between the two OS's as well (permissions could become very complicated, so don't worry too much about that) P.P.S. Related question, but data viewing is one way: http://superuser.com/questions/84586/partition-scheme-and-size-for-dual-boot-windows-7-and-ubuntu-9-10-with-separate-p
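
    A minimal sketch of the symlink idea (option 2), assuming the shared NTFS partition ends up mounted at /media/data on the Ubuntu side - the mount point, UUID, and folder names are placeholders:

        # /etc/fstab line so the shared NTFS data partition mounts at boot, owned by your user:
        #   UUID=XXXX-XXXX  /media/data  ntfs-3g  defaults,uid=1000,gid=1000,umask=022  0  0

        mkdir -p /media/data/{Documents,Music,Pictures,Videos}

        # Swap the stock home folders for symlinks into the shared partition,
        # keeping a backup of anything already in them.
        for d in Documents Music Pictures Videos; do
            rmdir ~/"$d" 2>/dev/null || mv ~/"$d" ~/"$d.bak"
            ln -s "/media/data/$d" ~/"$d"
        done

    Windows 7 can then add the same folders (wherever the data partition appears, e.g. D:\Documents) to its libraries, so both operating systems read and write the same data.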

    Read the article

  • Odd squid transparent redirect behavior

    - by EMiller
    This is the first time I've set up squid. It's running a redirect script that does some text search/replace on HTML pages, and then saves them to a location on the same machine on the nginx path - then issues the redirect to that URL (it's an art project :D). The relevant lines in squid.conf are:

        http_port 3128 transparent
        redirect_program /etc/squid/jefferson_redirect.py

    The jefferson_redirect.py script is based on this script: http://gofedora.com/how-to-write-custom-redirector-rewritor-plugin-squid-python/

    The issue: I'm getting strange HTTP redirect behavior. For example, here is the normal request/response from a PHP script that issues a header("Location:"); - a 302 redirect:

        http://redirector.mysite.com/?unicmd=g+yreka

        GET /?unicmd=g+yreka HTTP/1.1
        Host: redirector.mysite.com
        User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.9) Gecko/20100330 Fedora/3.5.9-1.fc12 Firefox/3.5.9
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Connection: keep-alive

        HTTP/1.1 302 Found
        Date: Tue, 13 Apr 2010 05:15:43 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.11
        Location: http://www.google.com/search?q=yreka
        Content-Type: text/html
        Vary: User-Agent,Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 2108
        Keep-Alive: timeout=3, max=100
        Connection: Keep-Alive

    Here's what it looks like when running through the squid proxy (note that "redirector.mysite.com" is not the site running squid or nginx):

        http://redirector.mysite.com/?unicmd=g+yreka

        GET /?unicmd=g+yreka HTTP/1.1
        Host: redirector.mysite.com
        User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.9) Gecko/20100330 Fedora/3.5.9-1.fc12 Firefox/3.5.9
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Proxy-Connection: keep-alive
        If-Modified-Since: Tue, 13 Apr 2010 05:21:02 GMT

        HTTP/1.0 200 OK
        Server: nginx/0.7.62
        Date: Tue, 13 Apr 2010 05:21:10 GMT
        Content-Type: text/html
        Content-Length: 17865
        Last-Modified: Tue, 13 Apr 2010 05:21:10 GMT
        Accept-Ranges: bytes
        X-Cache: MISS from jefferson
        X-Cache-Lookup: HIT from jefferson:3128
        Via: 1.1 jefferson:3128 (squid/2.7.STABLE6)
        Connection: keep-alive
        Proxy-Connection: keep-alive

    It is basically working - but the URL http://redirector.mysite.com/?unicmd=g+yreka remains unchanged, while displaying the Google page (mostly broken, as it's using URLs relative to redirector.mysite.com). I've experienced a similar thing with Google results pages: when clicking to another page from Google, I get a Google URL with the other site's content. Sorry for the long post - many thanks if you've read this far! Any ideas?
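
    This is the expected behavior of a URL rewriter: squid fetches the rewritten URL itself and the browser's address bar never changes. To make the client actually move, the helper has to hand squid a client-side redirect. A toy rewriter in shell to illustrate the idea - the "302:" prefix is what asks squid 2.x-era helpers to send a real redirect; the equivalent one-line change would go inside jefferson_redirect.py:

        #!/bin/bash
        # Minimal squid redirect_program sketch (illustrative only).
        # Squid feeds one request per line: "URL client-ip/fqdn ident method".
        while read url rest; do
            case "$url" in
                http://redirector.mysite.com/*)
                    # Prefixing with 302: makes squid issue an HTTP redirect instead of
                    # silently serving the rewritten URL under the original address.
                    echo "302:http://www.google.com/search?q=yreka"
                    ;;
                *)
                    echo "$url"    # leave everything else untouched
                    ;;
            esac
        done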

    Read the article

  • How to configure DNS so that www.example.com goes to one server, *.example.com to another

    - by fishwebby
    I'm trying to set up my domain as follows, but I'm not actually sure if it's possible. I have a domain where I would like the base and www addresses to go to my static site, but others to go to my application server. For example: My domain is registered with Dreamhost, and my application is on a VPS at Webbynode. I've set up the domain in Dreamhost to use Webbynode's nameservers: ns1.dnswebby.com ns2.dnswebby.com ns3.dnswebby.com And in Webbynode I've set up a wildcard A record to point to the IP address of my VPS: * 1.2.3.4 A and this works nicely, if I go to app.example.com it resolves to my application server at Webbynode. However, what I'd like to do is have example.com and www.example.com go to my static site, hosted back at Dreamhost, whilst still having any other domain go to my app. What I've done to try and achieve this is set up these DNS "NS" entries at Webbynode, trying to get Dreamhost to resolve these domain names: (empty) ns1.dreamhost.com NS (empty) ns2.dreamhost.com NS (empty) ns3.dreamhost.com NS www ns1.dreamhost.com NS www ns2.dreamhost.com NS www ns3.dreamhost.com NS (I don't have a fixed IP address at Dreamhost so I can't just set up simple A records). However this doesn't work... does anyone have any idea if this is possible and if so how it could be done? Update: I've got this working now, as above for the domain (i.e. registered with Dreamhost, but using Webbynode's nameservers). To delegate the DNS for www.example.com to Dreamhost, I've got the following DNS entries set up: www.example.com. ns1.dreamhost.com. NS www.example.com. ns2.dreamhost.com. NS www.example.com. ns3.dreamhost.com. NS (note the full stops at the end) And to get example.com to resolve to my static site, I set up CNAME record: example.com. www.example.com. CNAME So now, example.com and www.example.com go to my static site on Dreamhost, and if they change the IP address of my shared hosting it won't affect me, and all other subdomains go to my application server. This seems to work nicely, but if anyone knows a better way to do it I'd be happy to hear it. Thanks to all who replied.
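
    A quick way to confirm the final setup behaves as intended is to query each name directly (example.com and 1.2.3.4 are the placeholders used above):

        dig +short NS www.example.com      # should list ns1/ns2/ns3.dreamhost.com.
        dig +short A  www.example.com      # should resolve via Dreamhost
        dig +short    example.com          # follows the apex CNAME to www
        dig +short A  app.example.com      # should return 1.2.3.4 via the wildcard A record

    One caveat worth noting: a CNAME at the bare domain technically conflicts with the other records a zone apex must carry (NS, SOA), so while it often works in practice, some resolvers and DNS providers treat it as invalid.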

    Read the article

  • Get 5.1 surround sound from computer through a VCR config?

    - by Wedding Nails
    I'm posting to see if my idea of this setup is right and can be done. I currently have the following "equipment": a JVC VCR -quite old-, which has built in surround sound (aka it has several speaker outputs, which I believe is 5.1 and are connected to several speakers that are in every corner of the room), a computer with SPDIF optical output and a new flat screen TV (with built in HDMI). I want the computer to take advantage of the VCR's surround system (all the speakers in the room) in order to play mainly music and video always with all the speakers (5.1) and with the maximum sound quality. Currently, the computer plays sound only through the front speaker (I connect one output to the on board pc audio input) and the quality is really bad. As a side note, the computer video runs with S-video (old school), and the picture quality as you would imagine, is really bad with the new big LCD screen. My main goals are: to upgrade the picture with a new video card which would support HDMI (my tv has HDMI). to buy a SPDIF optical cable, connect one end to the VCR SPDIF input and the other end to the PC output This is theoretically what I've researched so far, and I came out with several questions: in this case, with the SPDIF cable connected, and all the configurations done in windows allowing the 5.1, will I get every content I play "converted" or played through all of my speakers? (I read this forum post). I already know that in order for this setup to play from all the speakers, the content/audio source has to be 5.1. but my question is, if there is a way to play from all of the speakers no matter what type of content I'm playing (that's why I said conversion there) I already know that HDMI cables carry digital sound. Is there a way I can only use said HDMI cord to the tv, and get sound through the VCR? (I'm not too sure about this, I would have to disable the TVs speakers and use the VCR surround as default, but I have no clue wether this can be done or not). Update: The ultimate question is, do I really have to rely on "sound virtualization" technology to get sound from all the speakers, no matter what content I play? (do I require a newer sound card, like a creative soundblaster with said technology?) Thanks!

    Read the article

  • Turn off email notification from abrt (Automatic Bug Reporting Tool)

    - by Banjer
    I'm configuring CentOS 6.2 and have seen a few "[abrt] full crash report" emails. I understand that abrt is useful for creating crash dumps and whatnot, so I don't want to disable the service; I just would like to stop getting the crash report emails. I probably have to add something to the config file in /etc/abrt/abrt.conf. I can't seem to find anything in my searches. Any idea? Thanks.

    Edit: Here is my abrt.conf, which is rather simple.

        [root@myhost~]# cat /etc/abrt/abrt.conf
        # Enable this if you want abrtd to auto-unpack crashdump tarballs which appear
        # in this directory (for example, uploaded via ftp, scp etc).
        # Note: you must ensure that whatever directory you specify here exists
        # and is writable for abrtd. abrtd will not create it automatically.
        #
        #WatchCrashdumpArchiveDir = /var/spool/abrt-upload

        # Max size for crash storage [MiB] or 0 for unlimited
        #
        MaxCrashReportsSize = 1000

        # Specify where you want to store coredumps and all files which are needed for
        # reporting. (default:/var/spool/abrt)
        #
        #DumpLocation = /var/spool/abrt

    And a listing of /etc/abrt:

        [root@myhost~]# ls -la /etc/abrt
        total 32
        drwxr-xr-x.  3 root root  4096 Apr 13 06:14 .
        drwxr-xr-x. 97 root root 12288 Apr 13 03:50 ..
        -rw-r--r--.  1 root root   527 Dec 13 22:50 abrt-action-save-package-data.conf
        -rw-r--r--.  1 root root   572 Dec 13 22:50 abrt.conf
        -rw-r--r--.  1 root root   175 Dec 13 22:50 gpg_keys
        drwxr-xr-x.  2 root root  4096 Apr 13 06:13 plugins

        [root@myhost~]# ls -la /etc/abrt/plugins/
        total 12
        drwxr-xr-x. 2 root root 4096 Apr 13 06:13 .
        drwxr-xr-x. 3 root root 4096 Apr 13 06:14 ..
        -rw-r--r--. 1 root root  278 Dec 13 22:50 CCpp.conf

    Actually, all of those conf files above are only a few lines and do not mention anything about mail, email, or notifications.
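
    The mail itself doesn't come from abrt.conf; on abrt 2.x it is normally wired up through a reporter event file. A quick, non-destructive way to find and silence it (file names vary between abrt versions, so treat the paths as a starting point rather than a given):

        # Find whichever event/plugin file invokes the mail reporter.
        grep -ril mail /etc/abrt /etc/libreport 2>/dev/null

        # Typically this turns up something like /etc/libreport/events.d/mailx_event.conf;
        # commenting out its EVENT line (or changing the recipient) stops the emails while
        # leaving crash collection itself alone. Then restart the daemon:
        service abrtd restart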

    Read the article

  • git, egit, submodules, and symlinks -- how should shared sub-projects be handled in eclipse?

    - by Autophil
    Question: what's the best way to handle sub-projects in eclipse when using git for SCM? Here's the situation. I have a few git projects with a directory structure layed out more or less like this: simpleproj app www admin demo lib model orm view model user blah ... storeproj app www about mobile fbapp lib model orm view model user message cart product merchant Each directory in "lib" contains a separate project, either created in-house or forked, all of which use git for source control. So I figured I should make them submodules of my projects, right? Well, we've been moving toward eclipse + egit, because some of our windows guys not used to a CLI need something they can use without being scared of screwing things up. Anyway, the problem is, egit doesn't support submodules. So, my solution has been a rather crude one involving symlinks... lets say my directory structure on my dev box is generally layed out like this: ~/projects/ bigproj .git app lib model (- ~/lib/model/src/) orm (- ~/lib/orm/src/) neatproj .git app lib view (- ~/lib/view/src/) oldproj .git app lib orm (- ~/lib/orm/src/) ~/lib/ model .git src README.md orm .git src COPYING view .git src ...the symlinks link to a subdirectory of the directory containing the git repo, so eclipse doesn't get confused, and everything sort of works. On my machine, I can update the libs from anywhere and all projects will be updated (needing to be committed again of course). Each project stores a separate copy of the contents of the symlinked directories within "lib" -- but only when staged from within eclipse. After committing from eclipse and moving back to the CLI, git sees that a bunch of files have been removed and a few symlinks have been created. Of course this is acceptable also, probably more so than keeping a separate history of the libs for each project... but eclipse and CLI git obviously need to be on the same page so tons of files aren't vanishing and reappearing. So this brings me to my question. I'd like to know how to either: get eclipse+egit to see the symlinks as symlinks if git will somehow handle them properly*, or get the CLI git to treat them as non-symlinks. Or, if there's a better way to do this, I'm all ears. Hope this all made sense! :D Note: tried to tag this as git-submodules, but was not allowed :( * should I make them relative or absolute? Either way it's a mess. Also will symlinks will work on windows? i know there's something similar but you need a 3rd party tool to manage them AFAIK, i doubt these would translate well.
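
    For reference, the plain-CLI version of the submodule approach looks like the sketch below (repository URLs are placeholders); the catch the poster runs into is purely on the egit side, which at the time had no submodule support:

        # Inside bigproj: pull each shared library in as a submodule instead of a symlink.
        git submodule add git@yourserver:lib-model.git lib/model
        git submodule add git@yourserver:lib-orm.git   lib/orm
        git commit -m "Track shared libs as submodules"

        # On a fresh clone of bigproj:
        git clone git@yourserver:bigproj.git
        cd bigproj && git submodule update --init

        # Updating a library inside one project that uses it:
        cd lib/orm && git pull origin master
        cd ../.. && git commit -am "Bump lib/orm"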

    Read the article

  • Windows 8.1 Insufficient storage available to create shadow copy

    - by Bob.at.SBS
    [Note: After I entered the problem statement, I found this question, which is apparently the same problem. Maybe one of us will get a good answer...]

    I have used the "Windows 7 File Recovery" tool under Windows 8 to create system image backups to an external USB hard drive. I built a new Windows 8.1 machine, and I want to create my first system image backup of that machine to the same USB hard drive. The "Windows 7 File Recovery" tool is gone in Windows 8.1, but wbAdmin is alive and well:

        wbAdmin start backup -backupTarget:\\?\Volume{2a2b...994f} -allCritical -quiet

    fails with this text displayed:

        wbadmin 1.0 - Backup command-line tool
        (C) Copyright 2013 Microsoft Corporation. All rights reserved.

        Retrieving volume information...
        This will back up (EFI System Partition),(C:),Recovery (300.00 MB) to \\?\Volume{2a2b1255-3a86-11e3-be86-b8ca3a83994f}.
        The backup operation to F: is starting.
        Creating a shadow copy of the volumes specified for backup...

        Summary of the backup operation:
        The backup operation stopped before completing.
        The backup operation stopped before completing.
        Detailed error: ERROR - A Volume Shadow Copy Service operation error has occurred: (0x8004231f)
        Insufficient storage available to create either the shadow copy storage file or other shadow copy data.

    The EFI System Partition is 100 MB. The Recovery Partition is 300 MB. The C partition is 1.72 TB, NTFS, 218 GB used, 1.51 TB free. The destination drive is 1.81 TB, NTFS, 678 GB used, 1.15 TB free.

    I've fiddled with vssadmin resize shadowstorage, with no change in the error. vssadmin list shadowstorage displays:

        Shadow Copy Storage association
           For volume: (C:)\\?\Volume{37a0...263}\
           Shadow Copy Storage volume: (C:)\\?\Volume{37a0...263}\
           Used Shadow Copy Storage space: 2.39 GB (0%)
           Allocated Shadow Copy Storage space: 2.81 GB (0%)
           Maximum Shadow Copy Storage space: 531 GB (30%)

        Shadow Copy Storage association
           For volume: (F:)\\?\Volume{2a2...94f}\
           Shadow Copy Storage volume: (F:)\\?\Volume{2a2...94f}\
           Used Shadow Copy Storage space: 334 GB (17%)
           Allocated Shadow Copy Storage space: 337 GB (18%)
           Maximum Shadow Copy Storage space: UNBOUNDED (922154758%)

    (Yeah, the "percent calculation" for UNBOUNDED is seriously bogus.) I've run SFC /verifyonly and it seems happy. I've verified that the new "Volume Shadow Copy" service starts when I start the backup operation. Any suggestions?

    Read the article

  • Setting up Apache and PHP on Mac OS X Snow Leopard

    - by Martin Bean
    I've recently purchased an Apple iMac. Unfortunately, enabling Apache and PHP has thrown up some problems. I enabled Mac's built-in Web Sharing through System Preferences, at which point I got an output and could add HTML files to my user directory. However, PHP files were being displayed rather than interpreted. I then discovered this is because PHP isn't enabled by default on Mac's Apache set-up. After a quick Google search, I came across this page: http://developer.apple.com/mac/articles/internet/phpeasyway.html

    I proceeded to the section "Enabling PHP in Apache", copying and pasting the following code snippet into a new Terminal window and hitting Return:

        set admin_email to (do shell script "defaults read AddressBookMe ExistingEmailAddress")
        user_www=$HOME/Sites
        filename=php-test
        user_index=${user_www}/${filename}.php
        user_db=${user_www}/${filename}-db.sqlite3
        # NOTE: Having a writeable database in your home directory can be a security risk!
        conf=`apachectl -V | awk -F= '/SERVER_CONFIG/ {print \$2}'| sed 's/"//g'`
        conf_old=$conf.$$
        conf_new=/tmp/php_conf.new
        touch $user_db
        chmod a+r $user_index
        chmod a+w $user_db
        chmod a+w $user_www
        echo "Enabling PHP in $conf ..."
        sed '/#LoadModule php5_module/s/#LoadModule/LoadModule/' $conf | sed "s^[email protected]^<b>\$admin_email</b>^" > $conf_new
        echo "(Re)Starting Apache ..."
        osascript <<EOF
        do shell script "/bin/mv -f $conf $conf_old; /bin/mv $conf_new $conf; /usr/sbin/apachectl restart" with administrator privileges
        EOF

    Unfortunately, this has completely broken Apache and now nothing is being served; instead I'm receiving "Failed to open page" errors because it cannot connect to the server, despite Web Sharing still being active in System Preferences. So I guess my question is this: how can I undo the changes made by copy-and-pasting the above code snippet? Admittedly, I don't understand what the above did; I just thought it looked like a Terminal command and tried it. I have no experience in setting up Apache on Mac OS X (I've only installed XAMPP and WampServer on Windows). So any pointers on reversing the aforementioned, and then successfully enabling PHP, would be great.

    EDIT: I've discovered, via Console, that the following error message is being recorded when trying to browse to 127.0.0.1...

        (org.apache.httpd) Throttling respawn: Will start in 10 seconds
        no listening sockets available, shutting down
        Unable to open logs
        (org.apache.httpd[13453]) Exited with exit code: 1

    Does this point any more to the issue?

    EDIT #2: I'm now getting this in Console...

        15/02/2010 21:24:14 osascript[3597] Error loading /Library/ScriptingAdditions/Adobe Unit Types.osax/Contents/MacOS/Adobe Unit Types: dlopen(/Library/ScriptingAdditions/Adobe Unit Types.osax/Contents/MacOS/Adobe Unit Types, 262): no suitable image found. Did find: /Library/ScriptingAdditions/Adobe Unit Types.osax/Contents/MacOS/Adobe Unit Types: no matching architecture in universal wrapper
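
    The snippet's last step moves the live httpd.conf aside as httpd.conf.<pid> before dropping in its edited copy, so recovery is usually a matter of putting that backup back. A sketch, assuming the stock Snow Leopard location of /etc/apache2/httpd.conf (the numeric suffix will differ, and may not exist if the osascript step failed):

        # See what the script left behind; the .NNNN file is the pre-edit backup.
        ls -l /etc/apache2/httpd.conf*

        # Restore the original config and restart Apache (Web Sharing).
        sudo mv /etc/apache2/httpd.conf.12345 /etc/apache2/httpd.conf
        sudo apachectl configtest && sudo apachectl restart

        # To enable PHP properly afterwards, just uncomment the php5 module line:
        sudo sed -i.bak 's/^#\(LoadModule php5_module\)/\1/' /etc/apache2/httpd.conf
        sudo apachectl restart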

    Read the article

  • RAID 50 24Port Fast Writes Slow Reads - Ubuntu

    - by James
    What is going on here?! I am baffled.

        serveradmin@FILESERVER:/Volumes/MercuryInternal/test$ sudo dd if=/dev/zero of=/Volumes/MercuryInternal/test/test.fs bs=4096k count=10000
        10000+0 records in
        10000+0 records out
        41943040000 bytes (42 GB) copied, 57.0948 s, 735 MB/s
        serveradmin@FILESERVER:/Volumes/MercuryInternal/test$ sudo dd if=/Volumes/MercuryInternal/test/test.fs of=/dev/null bs=4096k count=10000
        10000+0 records in
        10000+0 records out
        41943040000 bytes (42 GB) copied, 116.189 s, 361 MB/s

    OF NOTE: My RAID50 is 3 sets of 8 disks - this might not be the best config for SPEED. OS: Ubuntu 12.04.1 x64. Hardware RAID: RocketRaid 2782, 24-port controller. Hard drive type: Seagate Barracuda ES.2 1TB. Drivers: v1.1 open source Linux drivers.

    So 24 x 1TB drives, partitioned using parted. Filesystem is ext4. I/O scheduler WAS noop, but I have changed it to deadline with no apparent performance benefit or cost.

        serveradmin@FILESERVER:/Volumes/MercuryInternal/test$ sudo gdisk -l /dev/sdb
        GPT fdisk (gdisk) version 0.8.1

        Partition table scan:
          MBR: protective
          BSD: not present
          APM: not present
          GPT: present

        Found valid GPT with protective MBR; using GPT.
        Disk /dev/sdb: 41020686336 sectors, 19.1 TiB
        Logical sector size: 512 bytes
        Disk identifier (GUID): 95045EC6-6EAF-4072-9969-AC46A32E38C8
        Partition table holds up to 128 entries
        First usable sector is 34, last usable sector is 41020686302
        Partitions will be aligned on 2048-sector boundaries
        Total free space is 5062589 sectors (2.4 GiB)

        Number  Start (sector)    End (sector)  Size       Code  Name
           1            2048     41015625727   19.1 TiB    0700  primary

    To me this should be working fine; I can't think of anything that would be causing this other than fundamental driver errors. I can't seem to get much, if any, higher than 361 MB a second - is this hitting the "SATA2" link speed, which it shouldn't, given it is a PCIe 2.0 card? Or maybe some caching quirk - I do have Write Back enabled. Does anyone have any suggestions? Tests for me to perform? Or if you require more information, I am happy to provide it! This is a video fileserver for editing machines, so we have a preference for FAST reads over writes. I was just expecting more from RAID 50 and 24 drives together...

    EDIT (hdparm results):

        serveradmin@FILESERVER:/Volumes/MercuryInternal$ sudo hdparm -Tt /dev/sdb

        /dev/sdb:
         Timing cached reads:   17458 MB in  2.00 seconds = 8735.50 MB/sec
         Timing buffered disk reads:  884 MB in  3.00 seconds = 294.32 MB/sec

    EDIT2 (config details): Also, I am using a RAID block size of 256K. I was told a larger block size is better for larger (in my case, large video) files.

    EDIT3 (Bonnie++ results - would love some guidance with this!)
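
    One quick way to separate array throughput from page-cache and readahead effects is to repeat the read with O_DIRECT and then experiment with the block device's readahead - a sketch using the same file and device as above:

        # Read the test file again, bypassing the page cache; compare against the 361 MB/s figure.
        sudo dd if=/Volumes/MercuryInternal/test/test.fs of=/dev/null bs=4096k count=10000 iflag=direct

        # Check and raise the block-device readahead (value is in 512-byte sectors).
        sudo blockdev --getra /dev/sdb
        sudo blockdev --setra 16384 /dev/sdb    # ~8 MiB readahead, often helps large sequential reads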

    Read the article

  • 403 forbidden while submitting a POST request with image data via iPhone application

    - by binnyb
    I am creating an iOS application which allows users to send image/text data to my webserver via a POST request. I am successfully sending POSTS to the server when image data is not included in the request. Any time i POST with image data the server spits back a 403 forbidden. I have tried adding the following to the .htaccess file in the directory of the script with no luck: Options +Indexes FollowSymLinks +ExecCGI Order allow,deny Allow from all web browsers and Android devices can successfully POST with image data to the script, the only device which cannot is the iPhone. POSTING with data to other hosting providers works as expected - it is just this host(ipowerweb.com). i noticed that when i try to POST to ANY script on the server with data returns a 403 forbidden. another note: i can successfully post to another server that is hosted by ipowerweb, but mine cant seem to handle it. My host has tried to resolve the issue but cannot, and they have marked it on their end as "resolved", so no more help from them. I wish to keep this host as moving would be a pain - i will change hosts as a last resort, so please help me! Why am i getting this 403 forbidden error only when i submit data via my iPhone application? How can i resolve the issue so i can successfully POST data? any advice on what i can do would be greatly appreciated. edit: as request, here are the response headers: { Connection = close; "Content-Length" = 217; "Content-Type" = "text/html; charset=iso-8859-1"; Date = "Wed, 12 Jan 2011 19:11:19 GMT"; Server = "Apache/2"; } edit: as request here are the request headers(oops): { "Accept-Encoding" = gzip; "Content-Length" = 5781; "Content-Type" = "multipart/form-data; charset=utf-8; boundary=0xKhTmLbOuNdArY"; "User-Agent" = "YeahIAteThat 1.0 (iPhone; iPhone OS 4.2.1; en_US)"; }
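
    One way to narrow down whether the 403 is triggered by the request itself (body size, multipart boundary, user agent) rather than by the device is to replay an equivalent POST from a desktop. A sketch with curl, where the URL and field names are placeholders for whatever the script expects:

        # Reproduce the app's multipart POST, including its User-Agent.
        curl -v \
          -A "YeahIAteThat 1.0 (iPhone; iPhone OS 4.2.1; en_US)" \
          -F "image=@test.jpg" \
          -F "caption=hello" \
          http://yourdomain.example/upload.php

        # If this also returns 403 while the same request without the image part succeeds,
        # the host is filtering on the request body or size, not on the device.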

    Read the article

  • prevent filesystem from entering read-only mode

    - by user788171
    I have found that my server's filesystem is continuously entering read-only mode. There have been some issues with the RAID1 array, and I have removed the bad disk from the array. However, it is still physically plugged into the system because I haven't had a chance to go over to the datacentre, and I suspect udev and the system kernel are still picking up the bad disk and throwing errors. In /var/log/messages, there are errors like this:

        Mar  2 06:53:14 nocloud kernel: ata1: exception Emask 0x10 SAct 0x0 SErr 0x4010000 action 0xe frozen
        Mar  2 06:53:14 nocloud kernel: ata1: irq_stat 0x00400040, connection status changed
        Mar  2 06:53:14 nocloud kernel: ata1: SError: { PHYRdyChg DevExch }
        Mar  2 06:53:14 nocloud kernel: ata1: hard resetting link
        Mar  2 06:53:20 nocloud kernel: ata1: link is slow to respond, please be patient (ready=0)
        Mar  2 06:53:21 nocloud kernel: ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        Mar  2 06:53:21 nocloud kernel: ata1.00: configured for UDMA/133
        Mar  2 06:53:21 nocloud kernel: ata1: EH complete

    This happens fairly randomly throughout the day until eventually the filesystem becomes read-only. When this happens, my system becomes non-operational, which kind of defeats the purpose of having RAID1. Note: ata1 is the bad disk (I think ata1 corresponds to /dev/sda because they are both first in line). Under mdadm, /dev/sda1,2 is no longer being used, but I can't prevent the system kernel from continuing to query that disk when I am no longer using it and throwing these errors. Is there a way to prevent my filesystem from automatically going into read-only mode? Furthermore, is it safe to do so? Thanks in advance.

    EDIT: Additional information. Output from cat /proc/mdstat:

        md1 : active raid1 sdb2[1]
              976554876 blocks super 1.1 [2/1] [_U]
              bitmap: 5/8 pages [20KB], 65536KB chunk

        md0 : active raid1 sdb1[1]
              204788 blocks super 1.0 [2/1] [_U]

    Output from mount:

        /dev/mapper/VolGroup-LogVol00 on / type ext4 (rw,noatime)
        proc on /proc type proc (rw)
        sysfs on /sys type sysfs (rw)
        devpts on /dev/pts type devpts (rw,gid=5,mode=620)
        tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
        /dev/md0 on /boot type ext4 (rw)
        none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
        sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

    EDIT2: pvdisplay output:

        --- Physical volume ---
        PV Name               /dev/md1
        VG Name               VolGroup
        PV Size               931.32 GiB / not usable 2.87 MiB
        Allocatable           yes (but full)
        PE Size               16.00 MiB
        Total PE              59604
        Free PE               0
        Allocated PE          59604
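
    The filesystem flips to read-only because ext4 aborts its journal the moment a write hits an I/O error, so the safer fix is to stop the errors rather than loosen that protection. One approach is to make the kernel forget the dead disk entirely until it can be physically pulled - a sketch, assuming the failed member really is sda (verify with dmesg first), run as root:

        # Make sure md no longer references the old partitions, then detach the disk
        # from the ATA/SCSI layer so libata stops resetting its link.
        mdadm --manage /dev/md0 --remove /dev/sda1 2>/dev/null
        mdadm --manage /dev/md1 --remove /dev/sda2 2>/dev/null
        echo 1 > /sys/block/sda/device/delete

        # If the root filesystem has already gone read-only, remount after checking it:
        # mount -o remount,rw /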

    Read the article

  • Installing Cygwin C and C++ compilers for NetBeans IDE 7.2

    - by user1294663
    I am very new to Cygwin, C, C++ and NetBeans IDE 7.2. My PC is running MICROSOFT WINDOWS 7 OS. I have read the documentation on how to install the Cygwin C C++ compilers. http://netbeans.org/community/releases/72/cpp-setup-instructions.html#compilers I have tried to run Cygwin setup.exe that has the most recent version of the Cygwin DLL is 1.7.16-1. I am not very sure which exact package to install when the Cygwin setup.exe installer prompted for the selection of packages to download and install. I want to install the Cygwin C and C++ compilers so that i can create C and C++ projects using NetBeans 7.2 I selected those packages that has contains the following names gcc, g++, gdb and make. Then i proceed on to install the selected packages The installation took up a long time so i stopped after about 45 minutes or so. I browsed the installation folder and i saw some packages i selected were installed. I noticed that some packages came in some sort of "zip" file with tar.gz extension. i added the folder path into the PATH variable in the windows 7 environment variables window. I think this command works C: cygcheck -c cygwin but the rest doesn't work i think. C: gcc --version C: g++ --version C: make --version C: gdb --version I tried to create the C C++ project using the Netbeans IDE 7.2 and the IDE pops out a dialog message saying that there was no c c++ compilers found. Have i made some mistake here? like installing the wrong packages or something else??? Are there packages shown in the Cygwin setup.exe installer that contains exact names and exact version that is compatible with NetBeans IDE 7.2?? This i am not too sure. Because i i think i didn't really see some required packages with exact names and versions. My question is : Which exact packages do i install using the Cygwin setup.exe installer so that i can create C & C++ projects using Netbeans IDE 7.2? and what other steps do i have to take note to ensure complete successful installation? do i have to wait all the selected required packages to be installed? I WOULD LIKE TO KNOW THE EXACT NAMES AND THE VERSIONS FOR THE REQUIRED PACKAGES (NAMES AND VERSIONS DISPLAYED IN THE CYGWIN SETUP.EXE INSTALLER WHEN PROMPTED) NEEEDED FOR C & C++ PROGRAMMING USING NETBEANS IDE 7.2??
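
    If the package picker feels overwhelming, the packages NetBeans needs are the Devel-category C/C++ toolchain plus gdb and make, and setup.exe can also be driven from the command line. A sketch (run from a Windows command prompt where setup.exe was downloaded; exact package names can shift between Cygwin releases, so treat them as a starting point):

        # -q = unattended install, -P = additional packages to select.
        setup.exe -q -P gcc-core,gcc-g++,gdb,make

        # Afterwards, verify from a Cygwin terminal (or cmd with the bin dir on PATH):
        gcc --version
        g++ --version
        make --version
        gdb --version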

    Read the article

  • Fedora 13 post security update boot problem

    - by Alex
    Hello. About a month ago I installed a security update that had a new kernel (2.6.34.x, up from 2.6.33.x); this is when the problem occurred for the first time. After the install my computer would not boot at all: black screen without any visible hard drive activity (I gave it a good 30 minutes on the black screen before taking action)... I popped in the installation DVD and went into rescue mode to change the boot option back to the old kernel (it was just a guess where the problem was). After restart the computer loaded just fine; it took a long time to start because an SELinux targeted policy relabel was required (relabeling can take a very long time depending on file system size). I assumed that the update got messed up somehow and continued working with the modified boot option. A couple of days ago there was another kernel update. I installed it and got the same problem as before. This rules out the corrupted-update theory... Black screen right after the BIOS screen, before the OS gets loaded. I had to rescue the system again... Below is a copy of my grub.conf file. I am fairly new to Linux (a couple of years of experience), mostly development and basic config... nothing crazy.

        # grub.conf generated by anaconda
        #
        # Note that you do not have to rerun grub after making changes to this file
        # NOTICE:  You have a /boot partition.  This means that
        #          all kernel and initrd paths are relative to /boot/, eg.
        #          root (hd0,0)
        #          kernel /vmlinuz-version ro root=/dev/mapper/vg_obalyuk-lv_root
        #          initrd /initrd-[generic-]version.img
        #boot=/dev/sda
        default=2
        timeout=0
        splashimage=(hd0,0)/grub/splash.xpm.gz
        hiddenmenu
        title Fedora (2.6.34.6-54.fc13.i686.PAE)
            root (hd0,0)
            kernel /vmlinuz-2.6.34.6-54.fc13.i686.PAE ro root=/dev/mapper/vg_obalyuk-lv_root rd_LVM_LV=vg_obalyuk/lv_root rd_LVM_LV=vg_obalyuk/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us rhgb quiet
            initrd /initramfs-2.6.34.6-54.fc13.i686.PAE.img
        title Fedora (2.6.34.6-47.fc13.i686.PAE)
            root (hd0,0)
            kernel /vmlinuz-2.6.34.6-47.fc13.i686.PAE ro root=/dev/mapper/vg_obalyuk-lv_root rd_LVM_LV=vg_obalyuk/lv_root rd_LVM_LV=vg_obalyuk/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us rhgb quiet
            initrd /initramfs-2.6.34.6-47.fc13.i686.PAE.img
        title Fedora (2.6.33.8-149.fc13.i686.PAE)
            root (hd0,0)
            kernel /vmlinuz-2.6.33.8-149.fc13.i686.PAE ro root=/dev/mapper/vg_obalyuk-lv_root rd_LVM_LV=vg_obalyuk/lv_root rd_LVM_LV=vg_obalyuk/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us rhgb quiet
            initrd /initramfs-2.6.33.8-149.fc13.i686.PAE.img

    I like my system to be up to date... Let me know if I can post any other files that can be of help. Has anyone else had this problem? Does anyone have any ideas how to fix it? p.s. Anything helps, you ppl are great! thx for ur time.
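
    Since both new kernels hang the same way while the old one boots, one low-risk thing to try from the working 2.6.33 kernel is rebuilding the initramfs for the newest kernel and making the next boot verbose so it shows where it actually stops - a sketch, run as root, with the version string taken from the grub.conf above:

        KVER=2.6.34.6-54.fc13.i686.PAE

        # Regenerate the initramfs in case the one written during the update is incomplete.
        dracut --force /boot/initramfs-$KVER.img $KVER

        # Drop "rhgb quiet" for that entry (or edit the line at the grub menu by hand)
        # so the boot messages are visible instead of a black screen.
        grubby --update-kernel=/boot/vmlinuz-$KVER --remove-args="rhgb quiet"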

    Read the article

  • Installing Yaws server on Ubuntu 12.04 (Using a cloud service)

    - by Lee Torres
    I'm trying to get a Yaws web server working on a cloud service (Amazon AWS). I've compiled and installed a local copy on the server. My problem is that I can't get Yaws to run on either port 8000 or port 80. I have the following configuration in yaws.conf:

        port = 8000
        listen = 0.0.0.0
        docroot = /home/ubuntu/yaws/www/test
        dir_listings = true

    This produces the following successful launch/result:

        Eshell V5.8.5  (abort with ^G)

        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Yaws: Using config file /home/ubuntu/yaws.conf

        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Ctlfile : /home/ubuntu/.yaws/yaws/default/CTL

        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Yaws: Listening to 0.0.0.0:8000 for <3> virtual servers:
         - http://domU-12-31-39-0B-1A-F6:8000 under /home/ubuntu/yaws/www/trial
         -
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Yaws: Listening to 0.0.0.0:4443 for <1> virtual servers:
         -

    When I try to access the URL (http://ec2-72-44-47-235.compute-1.amazonaws.com), it never connects. I've tried using paping (http://code.google.com/p/paping/) to check if port 80 or 8000 is open, and I get a "Host can not be resolved" error, so obviously something isn't working. I've also tried setting yaws.conf so it listens on port 80, like this:

        port = 80
        listen = 0.0.0.0
        docroot = /home/ubuntu/yaws/www/test
        dir_listings = true

    and I get the following error:

        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Yaws: Failed to listen 0.0.0.0:80 : {error,eacces}

        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Can't listen to socket: {error,eacces}

        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Top proc died, terminate gserv

        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Top proc died, terminate gserv

        =INFO REPORT==== 16-Sep-2012::17:24:47 ===
            application: yaws
            exited: {shutdown,{yaws_app,start,[normal,[]]}}
            type: permanent
        {"Kernel pid terminated",application_controller,"{application_start_failure,yaws,{shutdown,{yaws_app,start,[normal,[]]}}}"}

    I've also opened up port 80 using iptables. Running sudo iptables -L gives this output:

        Chain INPUT (policy ACCEPT)
        target     prot opt source                        destination
        ACCEPT     tcp  --  ip-192-168-2-0.ec2.internal   ip-192-168-2-16.ec2.internal  tcp dpt:http
        ACCEPT     tcp  --  0.0.0.0                       anywhere                      tcp dpt:http
        ACCEPT     all  --  anywhere                      anywhere                      ctstate RELATED,ESTABLISHED
        ACCEPT     tcp  --  anywhere                      anywhere                      tcp dpt:http
        ACCEPT     tcp  --  anywhere                      anywhere                      tcp dpt:http

        Chain FORWARD (policy ACCEPT)
        target     prot opt source                        destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source                        destination

    In addition, I've gone to the security group panel in the Amazon AWS configuration area and added ports 80, 8000, and 8080 with source IP 0.0.0.0.

    Please note: if you try to access the URL of the virtual server now, it likely won't connect, because I'm not currently running the yaws daemon. I've tested it when running yaws either through yaws or yaws -i. Thanks for the patience.
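
    The {error,eacces} part is separate from the AWS networking question: binding 0.0.0.0:80 needs root (or an equivalent capability), so yaws started as the ubuntu user will always fail there. Two common workarounds, sketched below - the beam.smp path is a guess, so check `which beam.smp` or the Erlang install prefix first:

        # Option 1: let the Erlang VM bind low ports without running the whole server as root.
        sudo setcap 'cap_net_bind_service=+ep' /usr/local/lib/erlang/erts-5.8.5/bin/beam.smp

        # Option 2: keep yaws on 8000 and have the kernel redirect port 80 to it.
        sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8000

    The "Host can not be resolved" message from paping, on the other hand, points at name resolution of the public DNS name rather than at yaws itself.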

    Read the article

  • Trouble setting up PATH for Java on Debian

    - by milkmansrevenge
    I am trying to get Oracle Java 7 update 3 working correctly on Debian 6. I have downloaded and set up the files in /usr/java/jre1.7.0_03. I have also set the following two lines at the end of /etc/bash.bashrc: export JAVA_HOME=/usr/java/jre1.7.0_03 export PATH=$PATH:$JAVA_HOME/bin Logging in as other users and root is fine, Java can be found: chris@mc:~$ java -version java version "1.7.0_03" Java(TM) SE Runtime Environment (build 1.7.0_03-b04) Java HotSpot(TM) 64-Bit Server VM (build 22.1-b02, mixed mode) However there are two cases where Java cannot be found as detailed below. Note that both of these worked fine when I have previously installed OpenJDK Java 6 via aptitude, but I need Oracle Java 7 for various reasons. Most importantly, I cannot run commands as another user via su, despite the PATH showing that Java should be present. The user was created with adduser chris root@mc:~# su chris -c "echo $PATH" /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/java/jre1.7.0_03/bin:/bin root@mc:~# su chris -c "java -version" bash: java: command not found root@mc:~# su chris -c "/usr/java/jre1.7.0_03/bin/java -version" java version "1.7.0_03" ... How can it be in the PATH but not be found? Update 05/04/2012: explained by Daniel, to do with it being a non-interactive shell so files such as /etc/profile and /etc/bash.bashrc are not executed. Doing a full swap to that user and running Java works: root@mc:~# su chris chris@mc:/root$ java -version java version "1.7.0_03" ... I run a script on start up which exhibits similar but slightly different problems. The script is located in /etc/init.d/start-mystuff.sh and calls a jar: #!/bin/bash # /etc/init.d/start-mystuff.sh java -jar /opt/Mars.jar I can confirm that the script runs on start up and the exit code is 127, which indicates command not found. Inserting a line to print/save the PATH shows that it is: /sbin:/usr/sbin:/bin:/usr/bin This second problem isn't as important because I can just point directly to the Java executable in the script, but I am still curious! I have tried setting the full PATH and JAVA_HOME explicitly in /etc/environment which didn't help. I have also tried setting them in /etc/profile which doesn't seem to help either. I have tried logging in and out again after setting PATH in the various locations (duh!). Anyway, long post for what will probably have a simple one line solution :( Any help with this would be greatly appreciated, I have spent far too long trying to fix it by myself. Motivation The first problem may seem obscure but in my system I have users that are not allowed SSH access yet I still want to run processes as them. I have a ton of scripts operating in this way and don't want to have to change them all.

    Read the article

  • Easiest way to replace preinstalled Windows 8 with new hard drive with Windows 7

    - by Andrew
    There are all kinds of questions and answers relevant moving Windows 8 to a new hard drive. I'm not seeing anything quite applicable to my situation. I have a new, unopened, unbooted notebook with pre-installed Windows 8. I will be replacing the hard drive before ever booting, unless that is not possible for some reason. I want to "downgrade" to Windows 7 Pro, and I want a clean installation. To do so legitimately, I apparently either need to: Upgrade Windows 8 to Windows 8 Pro using Windows 8 Pro Pack, then downgrade; or Just install a newly-licensed copy of Windows 7 Pro. (Let me know if I've missed an option.) Installation media is likely not a problem, though if I need something vendor-specific that I cannot otherwise download, that could present an issue (Asus notebook, if that matters). If I could, I would just buy the Pro Pack upgrade, swap the hard drive (without ever booting), then install Windows 7 Pro directly on the new hard drive, using the Pro Pack key for activation. Will this work? Are there any activation issues? Edited to clarify, as some comments and answers indicate confusion: Here is, ideally, what I want to do: Before ever powering on the notebook, remove the current hard drive. Replace this hard drive with a new, blank hard drive. Install a clean copy of Windows 7 Pro on this new, blank hard drive. Unless I have no choice to accomplish the end result (a clean install of Win7 Pro on the newly-installed, previously-blank hard drive), I am not wanting to: Install Windows 7 "over" the current Windows 8 install (after upgrading to Win8 Pro). That would involve using the currenly-installed hard drive. I want to use a new, different hard drive. Copy the Win8 install to the new hard drive, then install Windows 7 "over" that installation. Install Windows 7 "over" the current Windows 8 install (after upgrading to Win8 Pro), then copy the installation to the new hard drive. If I have to use one of those three options, I will, but only if there is no other choice. Please note that this question is not about licensing: I will purchase the necessary license(s) to accomplish this procedure legally (apparently either Win8 Pro Pack or Win7 Pro -- the former currently appears less expensive).

    Read the article

  • Remove DRM from *.pdb e-book that I own - while maintaining footnotes, etc.?

    - by ziesemer
    Background: I've already reviewed Remove DRM from ePub Files? and How can I remove DRM from Kindle books? - the answers to which have already brought me partial process. The challenge is that I have a few purchased *.pdb e-books that I purchased in years past, e.g. 2006. In particular, they were purchased from the palm eBook Store (ebooks.palm.com - now defunct, possibly part of http://www.ereader.com / Barnes & Noble?) - originally for use on a Palm Treo that has since died. Of particular note is that I have a revision / publication of a book that is no longer published, and not available as an e-book from anywhere else that I've been able to find. (I feel fortunate to have even found the *.pdb files on backup.) I have a copy of the electronic invoice for it - which includes the details necessary for unlocking - the "Purchaser's Name" and the "Unlock Code" - which is the digits of my credit card # that I had used to purchase it. Given the above information, I was surprised to be able to open the book using the Windows eReader software and unlock it. Here I am able to view the complete contents and functionality of the book as I had done on the Palm Treo - including viewing of linked annotations / footnotes, etc. Following the full spirit of Remove DRM from ePub Files?, I want to ensure that I can access this on any device of my choosing - especially now and in the future, and as new technologies arrive and disappear. Ideally, I'm just looking to accomplish the minimum necessary to allow import into calibre. Outstanding Issue: I've found a few solutions that have given me "90%" success - all based on various versions of some Python scripts - including versions 0.21 and 0.11 of "erdr2pml.py" (based on "ereader2html"). Unfortunately, unless I'm missing something, these programs are attempting to also "convert" - instead of just "decrypting". As such, the outputs are missing embedded images and/or footnotes. I.E., there is a linked, underlined, and super-scripted "a" after some text - but the content of the footnote no longer exists. I can validate this by inspecting the generated *.pmlz file, and nowhere does it contain the original footnotes that are still visible in the original *.pdb file. I'm hoping to find a process that focuses on the decryption only, instead of attempting any type of a content conversion - or if a content conversion is required / involved, that it maintains all of the features and content of the original. (Again, I'm confident that if/once a version is obtained that calibre can import, I'll be able to fulfill the rest of my requirements.)

    Read the article

  • Secret, unlogged, transparent, case-sensitive proxy in IIS6?

    - by Ian Boyd
    Does IIS have a secret, unlogged, transparent, case-sensitive proxy built into it? A file exists on the web-server: GET http://www.stackoverflow.com/javascript/ModifyQuoteArea.js HTTP/1.1 Accept: text/html, application/xhtml+xml, */* Accept-Language: en-US User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0) Accept-Encoding: gzip, deflate Connection: Keep-Alive Host: www.stackoverflow.com HTTP/1.1 200 OK Connection: Keep-Alive Content-Length: 29246 Date: Mon, 07 Mar 2011 14:20:07 GMT Content-Type: application/x-javascript ETag: "5a0a6178edacb1:1c51" Server: Microsoft-IIS/6.0 Last-Modified: Fri, 02 Tue 2010 17:03:32 GMT Accept-Ranges: bytes X-Powered-By: ASP.NET ... Problem is that a changes made to the file will not get served, the old (i.e. February of last year) version keeps getting served: HTTP/1.1 200 OK Connection: Keep-Alive Content-Length: 29246 Date: Mon, 07 Mar 2011 14:23:07 GMT Content-Type: application/x-javascript ETag: "5a0a6178edacb1:1c51" Server: Microsoft-IIS/6.0 Last-Modified: Fri, 02 Tue 2010 17:03:32 GMT Accept-Ranges: bytes X-Powered-By: ASP.NET ... The same old file gets served, even though we've: renamed the file deleted the file restarted IIS The request for this file does not appear in the IIS logs (e.g. C:\WINNT\System32\LogFiles\W3SVC7\) And this only happens from the outside (i.e. the internet). If you issue the request locally on the server, then you will: - get the current file (file there) - 404 (file renamed) - 404 (file deleted) But if i change the case of the requested resource, i.e.: GET http://www.stackoverflow.com/javascript/MoDiFyQuOtEArEa.js HTTP/1.1 Accept: text/html, application/xhtml+xml, */* Accept-Language: en-US User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0) Accept-Encoding: gzip, deflate Connection: Keep-Alive Host: www.stackoverflow.com Note: MoDiFyQuOtEArEa.js verses ModifyQuoteArea.js Then i do get the proper file (or get the 404 as i expect if the file is renamed or deleted). But any subsequent changes to the file will not show up until i change the case of the file i'm asking for. Checking the IIS logs all indicate that the (internet) requests are all coming the correct client on the internet (i.e. not from some intermediate proxy). Since the file doesn't exist on the hard drive anymore, i conclude that there is a proxy. The requests serviced from this proxy are not logged in the IIS logs. The requests for new files are logged, and from the client IP, not a proxy IP. The proxy is case sensitivie. This does not sound like something Microsoft, or IIS, would do: - a transparent proxy - case-sensitivie - unlogged - surviving restarts of IIS - surviving in a cache for hours i can't believe that our customer's IIS is doing these things. i'm assuming there is some other transparent proxy in front of IIS. Or, does IIS have a transparent, unlogged, case-sensitive, memory based, proxy, that caches content for at least 7 hours? (Come Monday morning, IIS is serving the correct file, unlogged).

    Read the article

  • How to make Shared Keys .ssh/authorized_keys and sudo work together?

    - by farinspace
    I've set up .ssh/authorized_keys and am able to log in as the new "user" using the public/private key. I have also added "user" to the sudoers list. The problem I have now is that when I try to execute a sudo command, something simple like:

        $ sudo cd /root

    it prompts me for my password, which I enter, but it doesn't work (I am using the private key passphrase I set). Also, I've disabled the user's password using:

        $ passwd -l user

    What am I missing? Somewhere my initial remarks are being misunderstood. I am trying to harden my system; the ultimate goal is to use public/private keys for logins instead of simple password authentication. I've figured out how to set all that up via the authorized_keys file. Additionally, I will ultimately prevent server logins through the root account, but before I do that I need sudo to work for a second user (the user I will be logging into the system with all the time). For this second user I want to prevent regular password logins and force key-only logins; if I don't lock the user via passwd -l user, then without a key I can still get into the server with a regular password. But more importantly, I need to get sudo to work with a public/private key setup for a user whose password has been disabled.

    Edit: OK, I think I've got it (the solution):

    1) I've adjusted /etc/ssh/sshd_config and set PasswordAuthentication no. This will prevent ssh password logins (be sure to have a working public/private key setup prior to doing this).
    2) I've adjusted the sudoers list via visudo and added:

        root  ALL=(ALL) ALL
        dimas ALL=(ALL) NOPASSWD: ALL

    3) root is the only user account that will have a password. I am testing with two user accounts, "dimas" and "sherry", which do not have a password set (passwords are blank, passwd -d user).

    The above essentially prevents everyone from logging into the system with passwords (a public/private key must be set up). Additionally, users in the sudoers list have admin abilities; they can also su to different accounts. So basically "dimas" can do sudo su sherry, however "dimas" can NOT do su sherry. Similarly, any user NOT in the sudoers list can NOT do su user or sudo su user.

    NOTE: The above works but is considered poor security. Any script that is able to run code as the "dimas" or "sherry" users will be able to execute sudo to gain root access. A bug in ssh that allows remote users to log in despite the settings, a remote code execution in something like Firefox, or any other flaw that allows unwanted code to run as the user will now be able to run as root. Sudo should always require a password, or you may as well log in as root instead of some other user.
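
    To confirm that the server really stopped offering password authentication after step 1, one approach is to ask sshd which methods it still advertises. The sketch below (host and user are placeholders) shells out to the stock OpenSSH client with no authentication offered, so the "Permission denied (...)" line in stderr lists the methods the server still accepts.

        # Quick check (sketch) that password authentication is really disabled:
        # attempt a connection while offering no authentication at all, then read
        # the "Permission denied (publickey)" line, which lists the methods the
        # server still accepts. Host and user are placeholders.
        import subprocess

        def allowed_auth_methods(host, user):
            result = subprocess.run(
                ["ssh",
                 "-o", "PreferredAuthentications=none",
                 "-o", "BatchMode=yes",
                 "-o", "StrictHostKeyChecking=no",
                 f"{user}@{host}"],
                capture_output=True, text=True,
            )
            for line in result.stderr.splitlines():
                if "Permission denied" in line:
                    return line
            return result.stderr.strip()

        print(allowed_auth_methods("example.com", "dimas"))
        # Expect something like: "Permission denied (publickey)."
        # If "password" still appears in the list, sshd did not pick up the change.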

    Read the article

  • My computer's dying :(

    - by j-t-s
    Hi all. I have a sweet computer and have never had any real problems with it until now. I recently formatted my computer and all was well, but now my monitor randomly flickers: it will just go black for about a second or two, then return to its normal state with all the stuff still on the screen. This is getting progressively worse. Another problem that has started is that my computer randomly restarts. (I've managed to prevent it from restarting by removing the check in "Automatically restart" in the Startup and Recovery dialog, but I know this doesn't solve the problem.) It also completely freezes up on me. One last thing: this morning I got a big blue screen. I can't remember what it said, but if it happens again I will take note of it and repost, or if I can find some kind of log file containing that bluescreen error I will post it. I have checked all the cords, and they're all fine; nothing's loose. My computer isn't overheating either. I've taken the case off for 3 whole days and haven't used the computer for those 3 days, which has had no effect. I've checked the connections inside and nothing's loose there either. I know there's nothing wrong with my monitor because my friend has 2 computers and it works perfectly on those. I don't understand how my computer could suddenly become so unstable. I'm almost certain that I have no viruses; I full-scan my PC every day for nasties, and have strict firewall settings. Anyway, I don't see how it could be a virus anyway, simply because I had these problems right after I formatted, and before I even had the chance to install or copy anything or even connect to the internet. I know it has nothing to do with the way I formatted the computer either; I've been fixing computers for years. Sorry for rambling on, just making sure I don't leave anything out. Has anybody else had this problem? Can somebody please try to help me with this? Thank you :)
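
    For the blue-screen details the poster couldn't recall: Windows normally writes a minidump for each crash to C:\Windows\Minidump (assuming the default "Write debugging information" setting). A small sketch that lists those files follows, so the newest one can be opened with a debugger or a viewer such as BlueScreenView; the path is the assumed default location.

        # Small helper (sketch) for finding blue-screen crash dumps. Lists the
        # files in the default minidump folder with their timestamps so the most
        # recent crash can be identified.
        import os
        import datetime

        dump_dir = r"C:\Windows\Minidump"
        if os.path.isdir(dump_dir):
            for name in sorted(os.listdir(dump_dir)):
                path = os.path.join(dump_dir, name)
                mtime = datetime.datetime.fromtimestamp(os.path.getmtime(path))
                print(f"{mtime:%Y-%m-%d %H:%M}  {name}")
        else:
            print("No minidump folder found - check Startup and Recovery settings.")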

    Read the article

  • How to move Mailboxes over from old Exchange 2007 to new EBS 2008 network?

    - by Qwerty
    This question is similar to: http://serverfault.com/questions/39070/how-to-move-exchange-2003-mailbox-or-store-from-2003-to-2007-on-separate-networks

    Basically, I am trying to move our Exchange mailboxes over to a test domain that is hosting EBS 2008 with Exchange 2007. We plan to move as soon as we have our Exchange data over. I have tried moving a DB with mailboxes over but cannot get it to mount in the new Exchange in any way, including mounting it onto a recovery store. From my understanding, the ONLY prerequisite for moving Exchange DBs across is that they must have the same organizational name (unlike previous versions of Exchange). If anyone has any insight as to why I cannot mount and simply reattach the mailboxes, please give me an idea as to what could be wrong. It should be as simple as this. Note that the DBs I have are in a clean state. I cannot use ExMerge because I am not running any mailboxes on 2003. I have also tried using a 32-bit Vista machine with the Export-Mailbox cmdlet to extract mailboxes, but anything I do results in permission errors. I have tried to troubleshoot these with no success. I am running as a full admin with the proper Exchange roles and yet it still gives me access-denied errors:

        Export-Mailbox : MapiExceptionNetworkError: Unable to make admin interface connection to server. (hr=0x80040115, ec=-2147221227)

    Some errors also show in the management console:

        get-MailboxDatabase
        Completed
        Warning:
        ERROR: Could not connect to the Microsoft Exchange Information Store service on server TATOOINE.baytech.local. One of the following problems may be occurring:
        1- The Microsoft Exchange Information Store service is not running.
        2- There is no network connectivity to server TATOOINE.baytech.local.
        3- You do not have sufficient permissions to perform this command. The following permissions are required to perform this command: Exchange View-Only Administrator and local administrators group for the target server.
        4- Credentials have been cached for an unpriviledged user. Try removing the entry for this server from Stored User Names and Passwords.

    Why I have to use a 32-bit machine to export a simple .pst file is beyond me... So yeah, I am now out of ideas and any help would be great! Thanks in advance.
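
    The warning above lists its own likely causes, and the first two (service not running, no network connectivity) can be checked from the workstation before chasing permissions. A rough Python sketch follows; it assumes rights to query services remotely with sc.exe, uses the server name from the post, and MSExchangeIS as the Information Store service name.

        # Sketch for ruling out the first two causes in the error above: is the
        # Information Store service running, and is the server reachable on the
        # RPC endpoint-mapper port (135) that MAPI/admin connections start with?
        import socket
        import subprocess

        server = "TATOOINE.baytech.local"

        # 1. Network connectivity to the RPC endpoint mapper
        try:
            with socket.create_connection((server, 135), timeout=5):
                print("TCP 135 reachable - basic RPC connectivity looks OK")
        except OSError as e:
            print(f"Cannot reach {server}:135 - {e}")

        # 2. Is the Information Store service running? (remote query via sc.exe)
        result = subprocess.run(
            ["sc", rf"\\{server}", "query", "MSExchangeIS"],
            capture_output=True, text=True,
        )
        print(result.stdout or result.stderr)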

    Read the article

< Previous Page | 476 477 478 479 480 481 482 483 484 485 486 487  | Next Page >