Search Results

Search found 5169 results on 207 pages for 'lcd displays'.


  • Data loss through permissions change?

    - by charliehorse55
    I seem to have deleted some files on my media drive simply by changing the permissions.

    The Story

    I have many operating systems installed on my computer and constantly switch between them. I bought a 1 TB HD and formatted it as HFS+ (not journaled). It worked well between OS X and all of my Linux installations while having much better metadata support than NTFS. I never synced the UIDs across my operating systems, so the permissions were always doing funny things. Yesterday I tried to fix the permissions by first changing the UIDs of the other operating systems to match OS X, and then changing the ownership of all files on the drive to match OS X. About 50% of the files on the drive were originally owned by OS X; the other half were owned by the various Linux installations. I started to change the file permissions for the folders, and that's when it went south.

    The Commands

    These commands were run recursively on one section of the drive:

        sudo chflags nouchg
        sudo chflags -N
        sudo chown myusername
        sudo chmod 666
        sudo chgrp staff

    The Bad

    Sometime during the execution of these commands, all of the files belonging to OS X were deleted. If a folder held Linux-based files it would remain intact, but any folder containing exclusively OS X files was erased. If a folder containing Linux files also contained a subfolder with only OS X files, the subfolder would remain but is inaccessible and displays a file size of 0 bytes. Luckily these commands were only run on the videos folder. I also have a music folder with the same issue, but I did not execute any of these commands on it. Effectively I have examples of the file permissions for all three states: the Linux files before and after, and the OS X files before.

    OS X file before:

        -rw-r--r--@ 1 charliehorse 1000 3634241 15 Nov 2008 /path/to/file
        com.apple.FinderInfo 32

    Linux file before:

        -rw-r--r--@ 1 charliehorse 1000 5321776 20 Sep 2002 /path/to/file
        com.apple.FinderInfo 32

    Linux file after (read only; a different file, but I believe the same permissions originally):

        -rw-rw-rw-@ 1 charliehorse staff 366982610 17 Jun 2008 /path/to/file
        com.apple.FinderInfo 32

    These files still exist, so if there are any other commands to run on them to determine what happened here, I can do that.

    EDIT: Running ls on one of the "empty" deleted OS X folders yields this:

        ls: .: Permission denied
        ls: ..: Permission denied
        ls: subdirA: Permission denied
        ls: subdirB: Permission denied
        ls: subdirC: Permission denied
        ls: subdirD: Permission denied

    I believe my files might still be there, but the permissions are screwed.
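
    The ls output in the EDIT points at the likely mechanism: a recursive chmod 666 strips the execute (search) bit from directories, which makes them impossible to enter even though their contents may be intact. A minimal sketch that restores the search bit on directories only, assuming the affected tree is mounted at /Volumes/media/videos (a placeholder path):

        # Restore read/write/search on directories only; files keep their 666 mode
        sudo find /Volumes/media/videos -type d -exec chmod u+rwx,go+rx {} +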

  • Windows 7 boot problem (with colorful blinking smilies)

    - by Ishmael Smyrnow
    I put my computer (Windows 7) to sleep, and a couple of hours later I tried to wake it back up, but the monitor wouldn't come back on. I did a hard reset (held the power button), but I still couldn't get the monitor to show anything. I plugged it into my laptop, and the monitor works fine. I then swapped out the video card with an older one I have. The monitor came on and started showing the boot process. However, shortly after the Windows 7 animated logo came up, the screen went blank, it made a weird beeping noise, and I saw the strangest thing ever: small, colorful blocks started to fill my screen and flash, as if something was loading. Inside those blocks were smilies (like the ASCII-character kind). This continued for about a minute, then the computer rebooted.

    It scared the sh!t out of me. I've never had a virus before, and I'm savvy enough to keep myself from one, but I'm wondering if that's what it was. I've been using computers for ages and never seen anything quite like this. Has anyone ever seen something like this? I'm doing hardware diagnostics before trying to boot into Windows again. Hopefully I can figure this out, but I thought I would consult the SU community while I wait on these results.

    UPDATE: I did a memory diagnostic, which turned up nothing. I also booted into Safe Mode with no problem and scheduled a disk check on both of my drives (I dual boot XP and 7). I was feeling good and tried putting my regular video card back in, and the monitor won't display anything with it. Also, even though the monitor displays nothing, the system sounds like it's booting up. However, I hear a clicking in one of my hard drives that isn't there with the older video card. Could this be a problem with my hard drive, video card, or PSU? The PSU makes sense, except that I've been using the same setup for over a year, and the video card doesn't require its own power connector.

  • iPhone 3G refuses to transfer purchased apps to iTunes

    - by andynormancx
    My iPhone 3G refuses to transfer purchased apps to iTunes, which is causing me major problems with syncing. Whenever I attempt to transfer apps from the iPhone to iTunes, it goes through the motions but never actually transfers anything: it displays the various apps in the info area at the top of the screen, but the progress bar never advances. In comparison, when I sync other iPhones using the same install of iTunes, the progress bar advances and apps are transferred. The same also happens on clean installs of iTunes on other computers, so my iPhone seems to be the common factor. I have tried restoring the phone from a backup, which makes no difference. This started happening months ago, and the phone has since been upgraded to 3.0 and 3.1, but the problem still persists.

    Originally it was just a minor irritation, but I made an attempt to fix it which has made things worse. I deleted all the apps from within iTunes and then did "Transfer Purchases" in the hope that it might fix something. It didn't fix anything, and I cannot now sync at all. If I sync now, iTunes does "transferring purchases", fails to transfer, and then deletes all the apps (and data) from my iPhone. It also means I can't sync music, podcasts or anything else, because I can't temporarily turn off app syncing: iTunes then warns that the apps on the iPhone will be deleted. I also tried de-authorising and re-authorising. What can I do to get app syncing working again?

    P.S. I have considered deleting all the apps and reinstalling them one by one, in the hope that it will fix the problem. However, I don't really want to embark on doing that for 55+ apps and re-entering login details etc. for the apps that need them, especially as I might then find out it didn't solve the problem.

    Update: The latest update to iTunes 9 has improved things in one key aspect: if I let a sync run to completion, iTunes no longer deletes all the apps from my phone. So I can now sync all my other data, even if I still can't sync my apps.

    Resolved: See my answer to the question for how I finally resolved the problem.

  • Nameserver not resolving or domain not pingable [closed]

    - by Ricky
    Sorry, if anyone can think of a better title please change it!

    I want to host my own websites from home. For testing purposes I have a virtual machine running a trial version of Windows Server 2008 Enterprise. Note that I currently run a VPS and host my own websites there, but thanks to a nice speed upgrade on our line I now want to host from home. I have several domains, but I wanted to test with one: rickyoleary.com. Our ISP does not provide static IP addresses unless we have a business account, so I've been looking at no-ip.com. I admit my networking isn't the best, hence this question, but I've been bashing my head against this one all day.

    I created a host name, muffinbubble.no-ip.org, which runs on IP 86.148.124.15. I've set up IIS on the server with a simple test page. I've then forwarded port 80 traffic from the router, and from what I can see it's working: if I access http://86.148.124.15/ (I was unable to link to this for some reason, so please copy and paste it), I see my test page.

    So the next step was to create my nameservers. The domain is with namecheap.com, so I created my nameservers ns1.rickyoleary.com and ns2.rickyoleary.com. Both point to the same IP (and yes, that will be changed after testing), the same IP as above: 86.148.124.15. On the server itself I have set up DNS entries which I believe to be correct, and added rickyoleary.com and www.rickyoleary.com to the host headers (bindings) in IIS 7.0. If I look up my domain, rickyoleary.com, it shows ns1.rickyoleary.com and ns2.rickyoleary.com as the nameservers. I then tried just-ping.com on ns1.rickyoleary.com: I get 100% packets lost, but the correct IP address is returned (I'm guessing the router does not allow pings but is still accessible). I get no response when pinging rickyoleary.com.

    Here are the problems:

    - I cannot ping ns1.rickyoleary.com or ns2.rickyoleary.com from a command prompt. I'm not sure if this is an issue.
    - When I added the nameservers in Windows Server 2008 and clicked 'resolve', a message box displayed "No such host is known".
    - I cannot ping rickyoleary.com.
    - rickyoleary.com is not showing my test page from my server.

    Now, please note: I've waited around 6 hours for propagation. From my experience, although you're told to wait 24-48 hours, the changes are normally pretty quick, so perhaps I'm being impatient or naive to think it should all be working by now. I would really appreciate some help here. Thanks.
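
    One way to narrow this down is to query each link in the chain directly; a sketch using nslookup from any command prompt (dig works the same way), assuming the IP above is still current:

        REM Does the registrar-side delegation list your nameservers?
        nslookup -type=NS rickyoleary.com

        REM Does your server answer DNS queries directly on port 53?
        nslookup rickyoleary.com 86.148.124.15

    If the first query returns the delegation but the second times out, the router is most likely not forwarding port 53 (UDP and TCP) to the Windows Server box, which would explain every symptom listed: forwarding port 80 alone makes the website reachable by IP but leaves the nameservers unreachable.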

  • !! 0xc01a00d !! aka Vista won't boot

    - by Chris
    Answer: Parts of the hard drive are corrupted. All of my user's code was checked in, so I'm just going to format the box.

    One of my users has an HP DV5-1235dx laptop running Windows Vista Professional x64. Last night our WSUS server pushed out a few updates, including "Security Update for Windows Vista for x64-based Systems (KB960859)". When we try to boot the laptop today, a black screen with white text comes up displaying:

        xxx/169894 (something)

    where xxx increments rapidly and "something" is some DLL or registry key. Eventually that stops and the screen displays:

        !! 0xc01a00d !!
        35566/169894 (\Registry\Machine\COMPONENTS\DerivedDat...)

    No other computers that received this update are displaying the same error. So far I've tried running CHKDSK off of HBCD. It repaired a thing or two, but the computer still doesn't boot. I tried repairing the Windows install from the Vista CD, but I get a black screen with white text displaying something along the lines of:

        0 No Emulation System Type 00
        1 No Emulation System Type 00
        Select one of the above

    Booting into Last Known Good Configuration doesn't work. Booting into Safe Mode freezes at:

        Loading Windows Files [snip]
        Loaded: \windows\system32\drivers\crcdisk.sys
        Please wait...

    My next step is trying to boot Safe Mode with Command Prompt and run rstrui.exe. While I do that, does anybody have any guidance?

    Edit: Booting into Safe Mode with Command Prompt will not work; see the Safe Mode freeze above.

    Edit 2: I managed to boot from the Vista DVD and ran the system repair. Now I get a black screen with white text saying:

        !! 0xc0000034 !!
        290/169894 (_0000000000000000.cdf-ms)

    Edit 3: I ran the system repair again, and it attempted to repair my hard drive. It failed.

        Problem Signature:
        Problem Event Name:   Startup Repair V2
        Problem Signature 01: External Media
        Problem Signature 02: 6.0.6001.18000.6.0.6001.18000
        Problem Signature 03: 4
        Problem Signature 04: 196611
        Problem Signature 05: CorruptVolume
        Problem Signature 06: NoBootFailure
        Problem Signature 07: 0
        Problem Signature 08: 0
        Problem Signature 09: unknown
        Problem Signature 10: 1168
        OS Version:           6.0.6002.2.2.0.256.1
        Locale ID:            1033

  • Windows 8.1 Insufficient storage available to create shadow copy

    - by Bob.at.SBS
    [Note: After I entered the problem statement, I found this question, which is apparently the same problem. Maybe one of us will get a good answer...]

    I have used the "Windows 7 File Recovery" tool under Windows 8 to create system image backups to an external USB hard drive. I built a new Windows 8.1 machine, and I want to create my first system image backup of that machine to the same USB hard drive. The "Windows 7 File Recovery" tool is gone in Windows 8.1, but wbAdmin is alive and well:

        wbAdmin start backup -backupTarget:\\?\Volume{2a2b...994f} -allCritical -quiet

    fails with this text displayed:

        wbadmin 1.0 - Backup command-line tool
        (C) Copyright 2013 Microsoft Corporation. All rights reserved.

        Retrieving volume information...
        This will back up (EFI System Partition),(C:),Recovery (300.00 MB) to \\?\Volume{2a2b1255-3a86-11e3-be86-b8ca3a83994f}.
        The backup operation to F: is starting.
        Creating a shadow copy of the volumes specified for backup...
        Summary of the backup operation:
        The backup operation stopped before completing.
        The backup operation stopped before completing.
        Detailed error: ERROR - A Volume Shadow Copy Service operation error has occurred: (0x8004231f)
        Insufficient storage available to create either the shadow copy storage file or other shadow copy data.

    The EFI System Partition is 100 MB, the Recovery Partition is 300 MB, and the C: partition is 1.72 TB (NTFS, 218 GB used, 1.51 TB free). The destination drive is 1.81 TB (NTFS, 678 GB used, 1.15 TB free).

    I've fiddled with vssadmin resize shadowstorage, with no change in the error. vssadmin list shadowstorage displays:

        Shadow Copy Storage association
           For volume: (C:)\\?\Volume{37a0...263}\
           Shadow Copy Storage volume: (C:)\\?\Volume{37a0...263}\
           Used Shadow Copy Storage space: 2.39 GB (0%)
           Allocated Shadow Copy Storage space: 2.81 GB (0%)
           Maximum Shadow Copy Storage space: 531 GB (30%)

        Shadow Copy Storage association
           For volume: (F:)\\?\Volume{2a2...94f}\
           Shadow Copy Storage volume: (F:)\\?\Volume{2a2...94f}\
           Used Shadow Copy Storage space: 334 GB (17%)
           Allocated Shadow Copy Storage space: 337 GB (18%)
           Maximum Shadow Copy Storage space: UNBOUNDED (922154758%)

    (Yeah, the "percent calculation" for UNBOUNDED is seriously bogus.)

    I've run SFC /verifyonly and it seems happy. I've verified that the "Volume Shadow Copy" service starts when I start the backup operation. Any suggestions?
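
    For reference, the resize syntax wants explicit /For and /On volumes plus a /MaxSize. A sketch against the destination drive's association, run from an elevated prompt; the 50GB figure is an arbitrary assumption, and it is not clear from the error which volume's storage is actually starved, so /For may need to point at C: instead:

        vssadmin resize shadowstorage /For=F: /On=F: /MaxSize=50GB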

  • Linux not picking up new partition correctly on EMC pseudo device

    - by James
    Hi,

    We have a database server running Oracle RAC. We were recently running out of space on the main LUN it is attached to, so I created a new 100 GB LUN and concatenated it onto the existing LUN, creating a new MetaLUN. After some messing about I managed to get Linux to recognise the new space, and I then created a new partition on the pseudo device to use it. Previously, when I have done this on other systems, the next step was to create an ASM disk on the new partition and add that disk to the Oracle disk group. This, however, fails. I am aware of various issues with ASM and PowerPath, but I don't think that is the issue here, because while investigating I discovered that one of the underlying logical devices is not reflecting the size change. See below.

    powermt displays all of the underlying logical units:

        [root@XXXXX~]# powermt display dev=emcpowerd
        Pseudo name=emcpowerd
        CLARiiON ID=CKM00091500009 [VFRAC2]
        Logical device ID=6006016030312200787502866C65DE11 [LUN 30]
        state=alive; policy=CLAROpt; priority=0; queued-IOs=0
        Owner: default=SP A, current=SP A
        Array failover mode: 1
        ==============================================================================
        ---------------- Host ---------------   - Stor -  -- I/O Path -  -- Stats ---
        ###  HW Path              I/O Paths     Interf.   Mode   State   Q-IOs Errors
        ==============================================================================
          3 qla2xxx               sde           SP A0     active alive       0      0
          3 qla2xxx               sdj           SP B0     active alive       0      0
          4 qla2xxx               sdo           SP A1     active alive       0      0
          4 qla2xxx               sdt           SP B1     active alive       0      0

    fdisk on the pseudo device shows the correct size:

        [root@XXXXX ~]# fdisk -l /dev/emcpowerd
        Disk /dev/emcpowerd: 429.4 GB, 429496729600 bytes
        255 heads, 63 sectors/track, 52216 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

                  Device Boot      Start         End      Blocks   Id  System
        /dev/emcpowerd1                1       39162   314568733+  83  Linux
        /dev/emcpowerd2            39163       52216   104856255   83  Linux

    fdisk on one of the logical units is wrong:

        [root@XXXXX~]# fdisk -l /dev/sde
        Disk /dev/sde: 322.1 GB, 322122547200 bytes
        255 heads, 63 sectors/track, 39162 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot      Start         End      Blocks   Id  System
        /dev/sde1                1       39162   314568733+  83  Linux
        /dev/sde2            39163       52216   104856255   83  Linux

    fdisk on the rest of the units is fine:

        [root@XXXXX ~]# fdisk -l /dev/sdj
        Disk /dev/sdj: 429.4 GB, 429496729600 bytes
        255 heads, 63 sectors/track, 52216 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdj1                1       39162   314568733+  83  Linux
        /dev/sdj2            39163       52216   104856255   83  Linux

    Also, when I created the partition, Linux did not create any entries in the /dev directory for the second partition, so I created them manually:

        [root@XXXXX dev]# mknod sde2 b 8 66
        [root@XXXXX dev]# ls -al sd[ejot]?
        brw-r----- 1 root disk  8,  65 Dec 29 14:20 sde1
        brw-r--r-- 1 root disk  8,  66 Apr  8 20:31 sde2
        brw-r----- 1 root disk  8, 145 Dec 29 14:19 sdj1
        brw-r--r-- 1 root disk  8, 146 Apr  8 20:33 sdj2
        brw-r----- 1 root disk  8, 225 Apr  6 23:12 sdo1
        brw-r--r-- 1 root disk  8, 226 Apr  8 20:33 sdo2
        brw-r----- 1 root disk 65,  49 Dec 29 14:19 sdt1
        brw-r--r-- 1 root disk 65,  50 Apr  8 20:33 sdt2

    This is a production server that we cannot easily reboot. Any ideas would be much appreciated.

    J
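
    For what it's worth, a single stale path can often be refreshed without a reboot by asking the SCSI layer to re-read that device's capacity and then re-reading its partition table. A sketch, assuming /dev/sde is the only stale path and a reasonably recent 2.6 kernel that exposes the rescan attribute:

        # Ask the SCSI layer to re-read the capacity of this one path
        echo 1 > /sys/block/sde/device/rescan

        # Then have the kernel re-read the partition table on it
        blockdev --rereadpt /dev/sde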

  • Showing Directory Root When Launching Rails App Using Apache2 and Passenger

    - by LightBe Corp
    I have done the following in an attempt to host a Rails 3.2.3 application using Apache 2.2.21 and Passenger 3.0.13:

    - Installed the Passenger gem
    - Ran rvmsudo passenger-install-apache2-module
    - Added the website info to /etc/apache2/extra/httpd-vhosts.conf
    - Added a line to /etc/hosts (not sure if this was needed; it isn't mentioned in the Passenger documentation)
    - Uncommented the line in /etc/apache2/httpd.conf to Include /etc/apache2/extra/httpd-vhosts.conf
    - Restarted Apache

    When I try to pull up my website, the following displays:

        Index of /
        Name    Last modified    Size    Description
        Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8r DAV/2 PHP/5.3.10 with Suhosin-Patch Phusion_Passenger/3.0.13 Server at lightbesandbox2.com Port 443

    Here is the /etc/hosts entry for the website:

        127.0.0.1 www.lightbesandbox2.com

    Here is my /etc/apache2/extra/httpd-vhosts.conf entry for the website:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName www.lightbesandbox2.com
            ServerAlias lightbesandbox2.com
            PassengerAppRoot /Users/server1/Sites/iktusnetlive_RoR/
            DocumentRoot /Users/server1/Sites/iktusnetlive_RoR/public
            <Directory /Users/server1/Sites/iktusnetlive_RoR/public>
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    When I run rvmsudo passenger-status, I get the following output:

        ----------- General information -----------
        max      = 6
        count    = 1
        active   = 0
        inactive = 1
        Waiting on global queue: 0

        ----------- Application groups -----------
        /Users/server1/Sites/iktusnetlive_RoR/:
            App root: /Users/server1/Sites/iktusnetlive_RoR/
            * PID: 8140   Sessions: 0   Processed: 2   Uptime: 20m 51s

    None of my assets are in the public folder in my Rails app. I have written an application using the template presented in Michael Hartl's Ruby on Rails Tutorial; the home page is in /app/views/static_pages/home.html.erb. I copied an index.html file into the public folder to see if it would display, and it displayed as I had hoped. Is there a way to get Passenger to find my assets without me having to rewrite my application? Any help would be appreciated.

    Update 6/23/2012 10:00 am CDT (GMT-6): I corrected the problems with my file and have successfully executed the rake assets:precompile command. I still get the index page as before, and I have made no other changes. I ran passenger-status and it is still loaded. Restarting Apache did nothing.
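
    Since that directory index is being served by Apache itself rather than Passenger, it is worth confirming that mod_passenger is actually loaded into this Apache instance before debugging the app. A quick hedged check (log and binary paths may differ on your system):

        # Is mod_passenger in the loaded module list?
        apachectl -t -D DUMP_MODULES 2>/dev/null | grep -i passenger

        # If nothing prints, re-add the LoadModule/PassengerRoot/PassengerRuby
        # lines that passenger-install-apache2-module printed when it finished,
        # then check the error log for module load failures
        tail -n 50 /var/log/apache2/error_log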

  • nginx probably delivering wrong filetype for .css file with PHP tags

    - by Katai
    And again, NGINX is giving me many questions today :) As always, I already tried things for a while but can't seem to fix this issue.

    I just configured NGINX to handle my .css files the same way as my .php files (to parse PHP tags inside the CSS file). This works perfectly, and the file is found and delivered. I could debug it with Firebug, and everything is OK (it displays the contents of the .css inside the opened <link> tag). So, everything working, right? Wrong. The browser gets the CSS, but it does not interpret it! What I mean by this: apparently the file type (or application type, whatever) of the CSS is wrong. The page can access the CSS but doesn't bother to actually use it.

    What I checked / tried:

    - There are no PHP errors inside the .css, so that one is out.
    - The .css is accessible. I can call the URI manually, or check whether the included URL finds it: both work.
    - The .css has no syntax errors (I switched to a CSS that just has body { background-color: #000; }).
    - It works without NGINX.
    - I deleted the browser cache and restarted NGINX after config rewrites.

    Here is the configuration:

        server {
            listen 80;
            server_name localhost;
            access_log /var/log/nginx/board.access_log;
            error_log /var/log/nginx/board.error_log warn;
            root /var/www/board/public;
            index index.php;
            fastcgi_index index.php;

            location / {
                try_files $uri $uri /index.php;
            }

            location ~ (\.php|\.css)$ {
                try_files $uri =404;
                include /etc/nginx/fastcgi_params;
                #keepalive_timeout 0;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass 127.0.0.1:7777;
            }
        }

    Firebug 'Network' response headers:

        Connection          keep-alive
        Content-Encoding    gzip
        Content-Type        text/html
        Date                Sat, 16 Jun 2012 10:08:40 GMT
        Server              nginx/1.0.5
        Transfer-Encoding   chunked
        X-Powered-By        PHP/5.3.6-13ubuntu3.7

    I think I just answered my own question. Is the Content-Type text/html the problem? How can I remove that? My personal guess is that I have to use this in some way:

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

    But I'm not sure... anyone have an idea how to solve this?

    TL;DR: The CSS file is delivered correctly, but the browser doesn't seem to 'use' it as CSS. (Tested; it works on Apache.)
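
    The Content-Type here comes from the FastCGI response rather than from nginx's mime.types: PHP defaults to text/html, so anything it renders, including the generated stylesheet, is announced as HTML, and browsers refuse to apply it. One hedged fix is to have the PHP-parsed stylesheet emit its own header, e.g. header('Content-Type: text/css'); as its first statement, and then confirm what the server announces (style.css is a placeholder name):

        # After adding the header() call to the PHP-parsed stylesheet,
        # verify the announced type from the command line:
        curl -sI http://localhost/style.css | grep -i '^content-type'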

  • getUserPrincipal() in JCIFS / LAN Manager authentication level setting in Windows 2k8

    - by Chris
    I have to find out in which exact format JCIFS stores the user principal in the getUserPrincipal() property. Therefore I created a test environment like this:

    - Windows Server 2008 domain controller
    - Domain named "MYDOMAIN"
    - Many test users in Active Directory
    - Tomcat application server with my web application (which simply reads the user principal and displays its values)

    The user should be logged in to the web application with SSO, so I need the format JCIFS uses to store the user (for example user@MYDOMAIN or MYDOMAIN\user). I tested the authentication with other SSO frameworks using Kerberos, and it works as expected. I'm now trying to use SSO through the NtlmHttpFilter of JCIFS. When I try to log in, I get the following error:

        jcifs.smb.SmbException: The parameter is incorrect.
        jcifs.smb.SmbTransport.checkStatus(SmbTransport.java:541)
        jcifs.smb.SmbTransport.send(SmbTransport.java:641)
        jcifs.smb.SmbSession.sessionSetup(SmbSession.java:322)
        jcifs.smb.SmbSession.send(SmbSession.java:224)
        jcifs.smb.SmbTree.treeConnect(SmbTree.java:176)
        jcifs.smb.SmbSession.logon(SmbSession.java:153)
        jcifs.smb.SmbSession.logon(SmbSession.java:146)
        jcifs.http.NtlmHttpFilter.negotiate(NtlmHttpFilter.java:189)
        jcifs.http.NtlmHttpFilter.doFilter(NtlmHttpFilter.java:121)

    According to the documentation I'm following, this is a known issue with Group Policy: it states that I have to change the policy "Network security: LAN Manager authentication level" to respond to NTLMv1 requests. I have done this, but it's still not working. So I also have to configure the same policy on the client computer, so that the client sends NTLMv1. But it is always sending NTLMv2 tokens. The problem now is that I'm somehow unable to change this setting (I was able to before), because the dropdown box to choose the authentication method is greyed out.

    Edit: just to make this clear, this dialog is on the client side, in the local security policies.

    As you can see from this screenshot, the chosen method is "Send NTLMv2 response only", which is the wrong setting, and I'm pretty sure it is causing the error above. My question is: why can't I change this setting? Why is it greyed out?
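
    When that dropdown is greyed out, the setting is usually being enforced by a domain GPO rather than by local policy, and the effective value lives in the registry as LmCompatibilityLevel. A sketch for checking (and, if no GPO re-applies it, setting) the value from an elevated command prompt; level 1 means "send LM & NTLM, use NTLMv2 session security if negotiated":

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f

    If a domain GPO controls the setting, the value will revert at the next policy refresh, so the lasting fix is to change it in the GPO that wins for that client.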

  • Adobe Reader not loading form content

    - by wullxz
    We have an FDL file which is used to offer an online application facility. The FDL is filled out and sent to a mailbox. When I open the received file, Adobe Reader starts, loads the document in Internet Explorer (I had to change my default browser because it doesn't work in Chrome; the customer uses IE as the default) and displays a warning that Adobe Reader has blocked the connection to the server where the initial document is saved. I can then click on "Trust this document once" (translated by me!) or "Add this host to trusted hosts" (also translated by me!). The second option doesn't work at all. The first option works but is a little annoying.

    I looked in Adobe Reader's options (Edit > Preferences ("Voreinstellungen" in German) > the last option, Security (Enhanced)) and found the possibility to add hosts, files and directories, or to allow Adobe Reader to use the "Trusted Sites" list from the Internet Options. When I add the website either to Trusted Sites or to the trusted list in Adobe Reader's options, the warning no longer pops up, but the content of the input boxes the applicant pre-filled does not show up on Windows 7, though it does show up on Windows XP. The screenshot shows the settings window described in this paragraph; the big input box at the bottom normally holds the trusted files/directories/hosts list.

    System information:

    - Windows 7 Enterprise x64
    - Adobe Reader X
    - multiple IE versions (mine is the latest, but there is also IE 7 or 8)

    How do I get Adobe Reader to load the content of the form? This behaviour can be reproduced on a PC: when opening an FDF from a command line, the form fields are blank even though there is data in the FDF and the PDF is located in a manually entered trusted folder. Steps to reproduce:

    - Clean install a Windows 7 PC (or use a virtual box)
    - Map a network drive to a shared folder with a subfolder, e.g. c:\test\docs becomes m:\docs
    - Set security permissions to allow full control to everyone
    - Add an FDF and a matching PDF file in the subfolder
    - Manually add m:\docs to each of the trusted folders in the trust manager registry settings
    - Ensure that Enhanced Security is on
    - Run a command line to open the FDF file

    Expected result: the PDF is opened in Adobe Reader with form fields filled out with data.

    Actual results:

    - The PDF is opened with blank fields
    - A 'yellow bar' appears asking to add the document to trusted locations

    It appears that Adobe Reader XI is ignoring the privileged locations entries in the registry. Adding the document via the 'yellow bar' adds that individual document, within the same folder, to the privileged locations, which means the process has to be repeated for every document that needs to be opened from the folder.

  • Simple mdadm RAID 1 not activating spare

    - by Nick Liu
    I had created two 2 TB HDD partitions (/dev/sdb1 and /dev/sdc1) in a RAID 1 array called /dev/md0, using mdadm on Ubuntu 12.04 LTS (Precise Pangolin). The command sudo mdadm --detail /dev/md0 used to indicate both drives as active sync. Then, for testing, I failed /dev/sdb1, removed it, and added it again with the command:

        sudo mdadm /dev/md0 --add /dev/sdb1

    watch cat /proc/mdstat showed a progress bar of the array rebuilding, but I wasn't going to spend hours watching it, so I assumed the software knew what it was doing. After the progress bar was no longer showing, cat /proc/mdstat displays:

        md0 : active raid1 sdb1[2](S) sdc1[1]
              1953511288 blocks super 1.2 [2/1] [U_]

    And sudo mdadm --detail /dev/md0 shows:

        /dev/md0:
                Version : 1.2
          Creation Time : Sun May 27 11:26:05 2012
             Raid Level : raid1
             Array Size : 1953511288 (1863.01 GiB 2000.40 GB)
          Used Dev Size : 1953511288 (1863.01 GiB 2000.40 GB)
           Raid Devices : 2
          Total Devices : 2
            Persistence : Superblock is persistent

            Update Time : Mon May 28 11:16:49 2012
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 1

                   Name : Deltique:0  (local to host Deltique)
                   UUID : 49733c26:dd5f67b5:13741fb7:c568bd04
                 Events : 32365

            Number   Major   Minor   RaidDevice State
               1       8       33        0      active sync   /dev/sdc1
               1       0        0        1      removed

               2       8       17        -      spare   /dev/sdb1

    I've been told that mdadm automatically replaces removed drives with spares, but /dev/sdb1 isn't being moved into the expected position, RaidDevice 1.

    UPDATE (30 May 2012): A badblocks destructive read-write test of the entire /dev/sdb yielded no errors, as expected; both HDDs are new. As of the latest edit, I assembled the array with this command:

        sudo mdadm --assemble --force --no-degraded /dev/md0 /dev/sdb1 /dev/sdc1

    The output was:

        mdadm: /dev/md0 has been started with 1 drive (out of 2) and 1 rebuilding.

    Rebuilding looks like it's progressing normally:

        md0 : active raid1 sdc1[1] sdb1[2]
              1953511288 blocks super 1.2 [2/1] [U_]
              [>....................]  recovery =  0.6% (13261504/1953511288) finish=2299.7min speed=14060K/sec

        unused devices: <none>

    I'm now waiting on this rebuild, but I'm expecting /dev/sdb1 to become a spare just like the five or six times I've tried rebuilding before.

    UPDATE (31 May 2012): Yeah, it's still a spare. Ugh!

    UPDATE (01 June 2012): I'm trying Adrian Kelly's suggested command:

        sudo mdadm --assemble --update=resync /dev/md0 /dev/sdb1 /dev/sdc1

    Waiting on the rebuild now...

    My questions are:

    - Why isn't the spare drive becoming active sync?
    - How can I make the spare drive become active?
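
    One blunt option, assuming /dev/sdc1 holds the good data and /dev/sdb1 is physically healthy (the badblocks run suggests it is), is to wipe sdb1's md metadata so it rejoins as a brand-new device rather than a remembered spare. A sketch, destructive to sdb1's superblock only:

        # Take the stale member out, erase its md superblock, and re-add it
        sudo mdadm /dev/md0 --remove /dev/sdb1
        sudo mdadm --zero-superblock /dev/sdb1
        sudo mdadm /dev/md0 --add /dev/sdb1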

  • Chrome Problems on Windows 8

    - by Akshat Mittal
    There are a lot of problems with Chrome (24.0.1312.14 beta, though all of this happened before the update as well) on Windows 8. The problems and explanations are listed below:

    - Google Chrome redraw time: when I switch tabs, the window retains the content of the previous tab and keeps displaying it even if I move my mouse. It only refreshes (redraws) when there is a change on the webpage (like on hover) or I do a select-all (or scroll). One thing to note is that the hover and selection happen on the real page, not on the retained image-like thing of the older webpage.
    - Chrome is slow and laggy: websites such as Facebook and Twitter (and more) have become extremely laggy in Chrome on Windows 8. When I was using Windows 7, I never experienced any lag. Also, on HTML5 websites, transitions (-webkit-transition in CSS) run extremely slowly at times.
    - Plugins crash: plugins like Flash Player and Shockwave Player that are built into Chrome crash a lot, even during simple tasks like playing YouTube videos or displaying ads.
    - Chrome crashes: Chrome has crashed over 100 times in the past month, seemingly at random, or at least for reasons I can't determine.
    - Random page crashes: Chrome shows chrome://crash/ (copy and paste this into the address bar) on random pages, even when the page has only just loaded. I understand this can happen on heavy HTML5 or JS websites, but on HTML-only websites?!

    Most of the above happens on Super User as well, and Super User never had any problems when I used Chrome on Windows 7.

    UPDATE 1: @magicandre1981 suggested disabling hardware acceleration. I tried it; it somewhat helped but didn't fix the problem. I still experience all of the above issues, just less frequently (maybe because Chrome restarted completely).

    UPDATE 2: @avirk asked me to try a stable version of Chrome and Firefox. I didn't experience any lag in Firefox, and only a little (negligible) lag in Chrome 22 (maybe because it's a fresh copy of Chrome; I haven't used it much).

    Is anybody else experiencing these issues? Does anybody have a solution to any of them? Any help is appreciated. Thank you!

  • SCCM SP2 - OOB Management Certificates Problems

    - by Achinoam
    Hi experts,

    I have a vPro client computer with AMT 4.0. It was imported successfully via the Import OOB Computers wizard, and after sending a "hello packet" it became provisioned (the SCCM GUI displays AMT Status: Provisioned). But when I try to perform power operations on this machine, they always fail with the following lines in the log:

        AMT Operation Worker: Wakes up to process instruction files    7/29/2009 10:59:29 AM    2176 (0x0880)
        AMT Operation Worker: Wait 20 seconds...    7/29/2009 10:59:29 AM    2176 (0x0880)
        Auto-worker Thread Pool: Work thread 3884 started    7/29/2009 10:59:29 AM    3884 (0x0F2C)
        session params : https://amt4.domaindemo.com:16993 , 11001    7/29/2009 10:59:29 AM    3884 (0x0F2C)
        ERROR: Invoke(invoke) failed: 80020009argNum = 0    7/29/2009 10:59:31 AM    3884 (0x0F2C)
        Description: A security error occurred    7/29/2009 10:59:31 AM    3884 (0x0F2C)
        Error: Failed to Invoke CIM_BootConfigSetting::ChangeBootOrder_INPUT action.    7/29/2009 10:59:31 AM    3884 (0x0F2C)
        AMT Operation Worker: AMT machine amt4.domaindemo.com can't be waken up. Error code: 0x80072F8F    7/29/2009 10:59:31 AM    3884 (0x0F2C)
        Auto-worker Thread Pool: Warning, Failed to run task this time. Will retry(1) it    7/29/2009 10:59:31 AM    3884 (0x0F2C)

    After investigation, I've seen that the problem already occurs during the 2nd stage of provisioning:

        Start 2nd stage provision on AMT device amt4.domaindemo.com.    8/2/2009 4:55:12 PM    2944 (0x0B80)
        session params : https://amt4.domaindemo.com:16993 , 11001    8/2/2009 4:55:12 PM    2944 (0x0B80)
        Delete existing ACLs...    8/2/2009 4:55:12 PM    2944 (0x0B80)
        ERROR: Invoke(invoke) failed: 80020009argNum = 0    8/2/2009 4:55:14 PM    2944 (0x0B80)
        Description: A security error occurred    8/2/2009 4:55:14 PM    2944 (0x0B80)
        Error: Cannot Enumerate User Acl Entries.    8/2/2009 4:55:14 PM    2944 (0x0B80)
        Error: CSMSAMTProvTask::StartProvision Fail to call AMTWSManUtilities::DeleteACLs    8/2/2009 4:55:14 PM    2944 (0x0B80)
        Error: Can not finish WSMAN call with target device. 1. Check if there is a winhttp proxy to block connection. 2. Service point is trying to establish connection with wireless IP address of AMT firmware but wireless management has NOT enabled yet. AMT firmware doesn't support provision through wireless connection. 3. For greater than 3.x AMT, there is a known issue in AMT firmware that WSMAN will fail with FQDN longer than 44 bytes. (MachineId = 17)    8/2/2009 4:55:14 PM    2944 (0x0B80)
        STATMSG: ID=7208 SEV=E LEV=M SOURCE="SMS Server" COMP="SMS_AMT_OPERATION_MANAGER" SYS=JE-DEV-MS0 SITE=JR1 PID=1756 TID=2944 GMTDATE=Sun Aug 02 14:55:14.281 2009 ISTR0="amt4.domaindemo.com" ISTR1="amt4.domaindemo.com" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=0    8/2/2009 4:55:14 PM    2944 (0x0B80)

    This error is consistent across all the other 2nd-stage provisioning tasks (Add ACLs, Enable Web UI, etc.). I've opened the certification authority, and I see that the certificates were issued to the SCCM site server instead of the AMT client! What could be the reason for this failure? What is the problematic definition for the certificate? Thank you in advance!

  • Debian package version conventions

    - by JackWu
    I'm using Debian/Ubuntu and am confused about the versions of packages. When I run the dpkg -l command, I get:

        ii  vim                2:7.3.429-2ubuntu2.1        Vi IMproved - enhanced vi editor
        ii  vim-common         2:7.3.429-2ubuntu2.1        Vi IMproved - Common files
        ii  vim-runtime        2:7.3.429-2ubuntu2.1        Vi IMproved - Runtime files
        ii  vim-tiny           2:7.3.429-2ubuntu2.1        Vi IMproved - enhanced vi editor - compact version
        ii  virt-what          1.11-1                      detect if we are running in a virtual machine
        ii  w3m                0.5.3-5ubuntu1              WWW browsable pager with excellent tables/frames support
        ii  watershed          6                           reduce superfluous executions of idempotent command
        ii  wget               1.13.4-2ubuntu1             retrieves files from the web
        ii  whiptail           0.52.11-2ubuntu10           Displays user-friendly dialog boxes from shell scripts
        ii  whoopsie           0.1.33                      Ubuntu crash database submission daemon
        ii  wimlib9            1.5.0-1~webupd8~precise     Library to extract, create, modify, and mount WIM files
        ii  wimtools           1.5.0-1~webupd8~precise     Tools to extract, create, modify, and mount WIM files
        ii  wireless-tools     30~pre9-5ubuntu2            Tools for manipulating Linux Wireless Extensions
        ii  wpasupplicant      0.7.3-6ubuntu2.1            client support for WPA and WPA2 (IEEE 802.11i)
        ii  x11-common         1:7.6+12ubuntu2             X Window System (X.Org) infrastructure
        ii  x11-utils          7.6+4ubuntu0.1              X11 utilities
        ii  xauth              1:1.0.6-1                   X authentication utility
        ii  xbitmaps           1.1.1-1                     Base X bitmaps
        ii  xclip              0.12-1                      command line interface to X selections
        ii  xfonts-encodings   1:1.0.4-1ubuntu1            Encodings for X.Org fonts
        ii  xfonts-utils       1:7.6+1                     X Window System font utility programs
        ii  xkb-data           2.5-1ubuntu1.3              X Keyboard Extension (XKB) configuration data
        ii  xml-core           0.13                        XML infrastructure and XML catalog file support
        rc  xpdf               3.02-21build1               Portable Document Format (PDF) reader
        ii  xterm              271-1ubuntu2.1              X terminal emulator
        ii  xz-lzma            5.1.1alpha+20110809-3       XZ-format compression utilities - compatibility commands
        ii  xz-utils           5.1.1alpha+20110809-3       XZ-format compression utilities
        ii  zabbix-agent       1:1.8.11-1                  network monitoring solution - agent
        ii  zlib1g             1:1.2.3.4.dfsg-3ubuntu4     compression library - runtime
        ii  zlib1g-dev         1:1.2.3.4.dfsg-3ubuntu4     compression library - development
        ii  zsh                4.3.17-1ubuntu1             shell with lots of features

    The third column is the version, but it's all messed up in a way I can't understand: different packages seem to use totally different naming conventions. Here are my main questions:

    - Why do some versions contain "ubuntu" while others don't?
    - What do the special characters -, ~ and + mean?
    - What are "alpha", "build" and "dfsg"? Can I just use them casually?
    - vim and some other packages have a "2:" prefix; what does that mean?
    - How does version comparison work, given that versions can be so different?

    Can anyone please explain this to me, or tell me where I can find an official document? Thanks in advance.
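
    For reference, Debian Policy defines a version as [epoch:]upstream_version[-debian_revision]: the epoch (the "2:") exists so packagers can correct a broken numbering scheme and always dominates the comparison; the part after the last "-" is the packaging revision ("2ubuntu2.1" marks Ubuntu's changes on top of the Debian packaging); and "~" sorts before anything, even the empty string, so 1.0~rc1 precedes 1.0. You can test how dpkg actually compares any two versions:

        # Exits 0 if the relation holds; lt/le/eq/ne/ge/gt are accepted operators
        dpkg --compare-versions "2:7.3.429-2ubuntu2.1" gt "1:99.9" && echo "epoch 2 wins"
        dpkg --compare-versions "1.0~rc1" lt "1.0" && echo "tilde sorts before the release"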

  • Change Logon DPI setting in Windows 8.1

    - by jmc302005
    I love how M$ keeps making decisions for me about how I want my desktop to look. Now they have added per-user DPI settings. The problem this has created is that there is no adjustable DPI setting for the lock/logon screen. Let me explain: you can change the DPI setting to be the same across all displays, and this does affect the icons and font on the lock/logon screen. However, it does not affect any app/program that can run on the lock/logon screen.

    For example, I use a 44" flat-screen TV as the monitor for my desktop. It's big enough for me to sit in my recliner and use my computer, but I don't have a wireless keyboard, and it sucks having the wire from the keyboard running across the floor (plus I really don't want to keep a keyboard next to me). So I use the on-screen keyboard for logging in and quick typing (search, web addresses, etc.). The problem is that with the new DPI setup, my on-screen keyboard takes up nearly half the screen. Does M$ think we are all blind? Oh no, I remember: they think desktops should look like tablets and phones.

    I tried looking through the registry to see if I could find a setting for it. In the key HKEY_USERS\.DEFAULT\Control Panel\Desktop there is a string value named "LogicalDPIOverride" with a value of -1. I have a feeling this is where I can fix the issue. I tried changing the value to 0 and to 1 with no change in the result. Instead, I noticed that after logging out and back in, the -1 value was back in the registry. So now M$ has also added a way to stop us changing a setting in the registry. They are making it harder and harder for us power users to do anything with the settings in Windows. Soon we will all have the exact same Windows with absolutely no customization.

    OK, sorry for the quick rant. The real question here is: how can I change this default DPI crap? Can I use the LogPixels value that worked for DPI in Windows 7?

    Here are two screenshots, one of the lock screen and one of the logon screen:

    http://i.imgur.com/6RM5ufE.jpg
    http://i.imgur.com/cnY5bmm.jpg

    Please, any help will be appreciated.
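
    For reference, the Windows 7-era approach was a LogPixels DWORD under that same .DEFAULT desktop key; whether 8.1 still honors it on the logon screen is exactly the open question here, so treat this as a sketch to test rather than a confirmed fix (96 = 100% scaling, 120 = 125%):

        reg add "HKU\.DEFAULT\Control Panel\Desktop" /v LogPixels /t REG_DWORD /d 96 /f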

  • How does formatting work with a PowerShell function that returns a set of elements?

    - by Steve B
    If I write this small function:

        function Foo {
            Get-Process | % { $_ }
        }

    and then run Foo, it displays only a small subset of properties:

        PS C:\Users\Administrator> foo

        Handles  NPM(K)    PM(K)      WS(K) VM(M)   CPU(s)     Id ProcessName
        -------  ------    -----      ----- -----   ------     -- -----------
             86      10     1680        412    31     0,02   5916 alg
            136      10     2772       2356    78     0,06   3684 atieclxx
            123       7     1780       1040    33     0,03    668 atiesrxx
            ...
            ...

    But even though only 8 columns are shown, there are plenty of other properties (as foo | gm shows). What is causing this function to show only these 8 properties?

    I'm actually trying to build a similar function that returns complex objects from a third-party .NET library. The library is flattening a two-level hierarchy of objects:

        function Actual {
            $someDotnetObject.ACollectionProperty.ASecondLevelCollection | % { $_ }
        }

    This method dumps the objects in list form (one line per property). How can I control what is displayed while keeping the actual object available? I have tried this:

        function Actual {
            $someDotnetObject.ACollectionProperty.ASecondLevelCollection | % { $_ } | Format-Table Property1, Property2
        }

    It shows the expected table in the console:

        Property1 Property2
        --------- ---------
        ValA      ValD
        ValB      ValE
        ValC      ValF

    But I lost my objects. Running Get-Member on the result shows:

           TypeName: Microsoft.PowerShell.Commands.Internal.Format.FormatStartData

        Name                                    MemberType Definition
        ----                                    ---------- ----------
        Equals                                  Method     bool Equals(System.Object obj)
        GetHashCode                             Method     int GetHashCode()
        GetType                                 Method     type GetType()
        ToString                                Method     string ToString()
        autosizeInfo                            Property   Microsoft.PowerShell.Commands.Internal.Format.AutosizeInfo autosizeInfo {get;set;}
        ClassId2e4f51ef21dd47e99d3c952918aff9cd Property   System.String ClassId2e4f51ef21dd47e99d3c952918aff9cd {get;}
        groupingEntry                           Property   Microsoft.PowerShell.Commands.Internal.Format.GroupingEntry groupingEntry {get;set;}
        pageFooterEntry                         Property   Microsoft.PowerShell.Commands.Internal.Format.PageFooterEntry pageFooterEntry {get;set;}
        pageHeaderEntry                         Property   Microsoft.PowerShell.Commands.Internal.Format.PageHeaderEntry pageHeaderEntry {get;set;}
        shapeInfo                               Property   Microsoft.PowerShell.Commands.Internal.Format.ShapeInfo shapeInfo {get;set;}

           TypeName: Microsoft.PowerShell.Commands.Internal.Format.GroupStartData

        Name                                    MemberType Definition
        ----                                    ---------- ----------
        Equals                                  Method     bool Equals(System.Object obj)
        GetHashCode                             Method     int GetHashCode()
        GetType                                 Method     type GetType()
        ToString                                Method     string ToString()
        ClassId2e4f51ef21dd47e99d3c952918aff9cd Property   System.String ClassId2e4f51ef21dd47e99d3c952918aff9cd {get;}
        groupingEntry                           Property   Microsoft.PowerShell.Commands.Internal.Format.GroupingEntry groupingEntry {get;set;}
        shapeInfo                               Property   Microsoft.PowerShell.Commands.Internal.Format.ShapeInfo shapeInfo {get;set;}

    instead of showing the members of the second-level child objects. In this case I can't pipe the result to functions expecting that type of argument. How is PowerShell supposed to handle this scenario?
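
    Format-Table emits formatting records rather than your objects, so it should only ever be the last step of a pipeline. A common pattern, assuming Property1/Property2 are the real property names, is to return the objects untouched and let the caller decide how to render them, or to project with Select-Object, which yields real objects that survive further piping:

        function Actual {
            # Return the raw objects; formatting is the caller's job
            $someDotnetObject.ACollectionProperty.ASecondLevelCollection
        }

        # Caller formats for display only...
        Actual | Format-Table Property1, Property2

        # ...or projects to lightweight objects that can still be piped onward
        Actual | Select-Object Property1, Property2

    The eight columns Get-Process shows come from the type's predefined view in PowerShell's format data, which is why unformatted output looks narrower than Get-Member suggests.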

  • GPU not powering on

    - by Lerp
    So I got home from work yesterday and went to turn my computer on as per usual, only to be greeted by this: the screens remained black. So I rebooted; I got as far as GRUB before my screens went black again. I rebooted again; they didn't turn on. I rebooted again; I got as far as the Windows login screen. This time I unplugged the machine, opened it up and cleaned it, but no luck: the GPU was still being temperamental. I repeated the process of turning it off and on several times until, one time, it worked as normal. I happily played games for the rest of the night (5-6 hours?) thinking everything was jolly good now.

    Well, I got home from work today and it is doing the SAME thing. Sometimes everything displays normally for a few seconds to minutes, then the screens go black; sometimes the screens don't come on at all.

    Summary and additional points:

    - The screens sometimes turn on before shortly turning off, and sometimes they don't; I cannot determine any pattern to when they do or don't.
    - The build has been working fine for about 8 months now, so I know it's not hardware incompatibility.
    - If I plug a monitor into the onboard graphics, I can use the PC normally (just in low-graphics mode).
    - I have two monitors, and it's a case of both turning on or neither, so I think I can rule out the monitors being dead.
    - I have tried replacing the GPU.
    - I have tried replacing the RAM.
    - I have tried flashing the CMOS.
    - I have tried cleaning the inside.
    - The GPU is a Radeon HD 7870.

    My questions:

    - Is my GPU dead? It's not very old, and I would like a way of being certain it's the GPU before I fork out money I can't really afford. I do not have a second PC here to test it in.
    - If my GPU is dead, why does it sometimes work and sometimes not?

    Update: OK, it was working again... at least I thought it was. I left it running for 10-20 minutes with the screens black. I turned it off and straight back on, and it worked for all of 10 minutes. I was then updating this post in joy, thinking I could play some games for the rest of the night, when BAM, it went black again. So yeah, I don't know :C

  • How to Remove Extensions from URLs and Force a Trailing Slash?

    - by Kronbernkzion
    Example of the current file structure:

        example.com/foo.php
        example.com/bar.html
        example.com/directory/
        example.com/directory/foo.php
        example.com/directory/bar.html
        example.com/cgi-bin/directory/foo.cgi

    I want to remove the HTML, PHP and CGI extensions from URLs and then force a trailing slash at the end, so it would look like this:

        example.com/foo/
        example.com/bar/
        example.com/directory/
        example.com/directory/foo/
        example.com/directory/bar/
        example.com/cgi-bin/directory/foo/

    I've searched for a solution for 17 hours straight and visited more than a few hundred pages on various blogs and forums. I'm not joking, so I think I've done my research. Here is the code that sits in my .htaccess file right now:

        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}\.html -f
        RewriteRule ^(([^/]+/)*[^./]+)/$ $1.html

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !(\.[a-zA-Z0-9]|/)$
        RewriteRule (.*)$ /$1/ [R=301,L]

    As you can see, this code only removes .html (and I'm not very happy with it, because I think it could be done much more simply). I can remove the extension from PHP files if I rename them to .html through .htaccess, but that's not what I want: I want to remove it directly. This is the first thing I don't know how to do.

    The second thing is actually very annoying. My .htaccess file with the code above appends .html/ to any string entered after example.com/directory/foo/. So if I enter example.com/directory/foo/bar (obviously /bar doesn't exist, since foo is a file), instead of just displaying a page-not-found message, it converts the request to example.com/directory/foo/bar.html/, searches for a file for a few seconds, and then displays the not-found message. This, of course, is bad behavior.

    So, once again, I need the code in .htaccess to do the following:

    - Remove the .html extension
    - Remove the .php extension
    - Remove the .cgi extension
    - Force a trailing slash at the end of URLs
    - Behave correctly on bad requests (no adding trailing slashes or extensions to strings if the file or directory doesn't exist on the server)
    - Be as simple as possible

    A sketch of one possible rule set follows this list. I would very much appreciate any help, and to the first person that gives me the solution, I'll send two $50 iTunes Store gift cards for the US store. If this offends anyone, I am truly sorry and I apologize. Thanks in advance, and sorry for such a long post.
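
    For what it's worth, the common shape for this kind of rule set is an external redirect from the extension to the slash-terminated URL, plus an internal rewrite from the slashed URL back to whichever file actually exists. An untested sketch, assuming mod_rewrite in the site root's .htaccess (the cgi-bin case may additionally need its own handler configuration):

        RewriteEngine On

        # Externally redirect /foo.php, /foo.html, /foo.cgi to /foo/
        RewriteCond %{THE_REQUEST} \s/+(.+?)\.(php|html|cgi)[\s?] [NC]
        RewriteRule ^ /%1/ [R=301,L]

        # Force a trailing slash on extensionless URLs that aren't real files/dirs
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !/$
        RewriteRule ^(.*)$ /$1/ [R=301,L]

        # Internally map /foo/ back to whichever file actually exists;
        # nonexistent paths fall through to the normal 404
        RewriteCond %{DOCUMENT_ROOT}/$1.php -f
        RewriteRule ^(.+?)/?$ $1.php [L]

        RewriteCond %{DOCUMENT_ROOT}/$1.html -f
        RewriteRule ^(.+?)/?$ $1.html [L]

        RewriteCond %{DOCUMENT_ROOT}/$1.cgi -f
        RewriteRule ^(.+?)/?$ $1.cgi [L]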

  • Init script & the green [ OK ]

    - by Lord Loh.
    I am trying to install FastCGI for nginx on an EC2 instance. I followed the steps explained here, but they are meant for Debian and do not work out of the box on a Red Hat-based system. I modified the script a bit to look like this:

        #!/bin/bash
        ### BEGIN INIT INFO
        # Provides:          php-fcgi
        # Required-Start:    $nginx
        # Required-Stop:     $nginx
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: starts php over fcgi
        # Description:       starts php over fcgi
        ### END INIT INFO

        . /etc/rc.d/init.d/functions

        (( EUID )) && echo "You need to have root privileges." && exit 1

        BIND=/tmp/php.socket
        USER=nginx
        PHP_FCGI_CHILDREN=15
        PHP_FCGI_MAX_REQUESTS=1000

        PHP_CGI=/usr/bin/php-cgi
        PHP_CGI_NAME=`basename $PHP_CGI`
        PHP_CGI_ARGS="- USER=$USER PATH=/usr/bin PHP_FCGI_CHILDREN=$PHP_FCGI_CHILDREN PHP_FCGI_MAX_REQUESTS=$PHP_FCGI_MAX_REQUESTS $PHP_CGI -b $BIND"
        RETVAL=0

        start() {
            echo -n "Starting PHP FastCGI: "
            # ORIGINAL LINE
            #daemon $PHP_CGI --quiet --start --background --chuid "$USER" --exec /usr/bin/env -- $PHP_CGI_ARGS
            # MODIFIED LINE
            daemon --user=$USER $PHP_CGI -b $BIND&
            RETVAL=$?
            echo
            [ $RETVAL -eq 0 ] && touch /var/lock/subsys/php-fcgi
            #echo "$PHP_CGI_NAME."
        }

        stop() {
            echo -n "Stopping PHP FastCGI: "
            killall -q -w -u $USER $PHP_CGI
            RETVAL=$?
            echo "$PHP_CGI_NAME."
            rm /var/lock/subsys/php-fcgi
        }

        case "$1" in
            start)
                start
                ;;
            stop)
                stop
                ;;
            restart)
                stop
                start
                ;;
            *)
                echo "Usage: php-fastcgi {start|stop|restart}"
                exit 1
                ;;
        esac
        exit $RETVAL

    The problem I have now is that service php-fcgi start keeps the shell blocked. If I run service php-fcgi start & and then ps aux, I see the php-cgi process running, bound to the socket. The start command only stops when I execute service php-fcgi stop. How do I solve this blocking issue? I have tried adding an & at the end of the line spawning the daemon, but other scripts do not seem to be doing this. This is the most complicated script I have attempted to modify yet :-(

    How do I get the script to display the green [ OK ]? I checked scripts like httpd and saw that all they were doing was something like this:

        echo
        [ $RETVAL -eq 0 ] && touch /var/lock/subsys/

    But I never see a green [ OK ] when I execute php-fcgi. I also discovered that calling echo_success (with functions sourced) displays the green [ OK ], but I do not see any other scripts in /etc/rc.d/init.d/ executing echo_success or echo_failure. What have I got wrong? Also, how do I specify PHP_FCGI_CHILDREN with daemon?
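
    On Red Hat-style systems, the green [ OK ] is printed by the success/failure helpers in /etc/rc.d/init.d/functions, and the daemon helper normally calls them itself; backgrounding the daemon call with a trailing & detaches it before it can print or report a status. A hedged sketch of a start() that instead backgrounds php-cgi inside the quoted command and passes the FastCGI variables through the environment (assuming the stock RHEL daemon helper):

        start() {
            echo -n "Starting PHP FastCGI: "
            # daemon will not pass these along by itself, so export them for php-cgi
            export PHP_FCGI_CHILDREN PHP_FCGI_MAX_REQUESTS
            # Quote the command and put the & inside the quotes: php-cgi is
            # backgrounded, but daemon itself returns and prints [ OK ]/[FAILED]
            daemon --user=$USER "$PHP_CGI -b $BIND &"
            RETVAL=$?
            echo
            [ $RETVAL -eq 0 ] && touch /var/lock/subsys/php-fcgi
            return $RETVAL
        }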

  • Triple (3) Monitors under Linux

    - by widgisoft
    I have a 3-monitor setup (each 1680x1050) via an Nvidia NVS440 (2 GPUs, 2 outputs per GPU, totalling 4 outputs); this works fine under Windows XP and 7 but caused considerable headaches under Linux (Ubuntu 9.04). I had previously used an XFX 9600GT plus the onboard XFX 9300GS to produce the same result, but that card was noisy and power hungry, and I was hoping there was some magical switch in the NVS440 that got rid of this annoying problem - turns out the NVS440 is just 2 cards on one physical PCB :-p (I searched the net high and low for people using this card under Linux but found nothing; if anything the card uses less power and is fanless, so I stood to benefit from it either way.)

    Anyway, using either setup there were 5 solutions available:

    1. Three separate X instances, all unjoined
    2. Three separate X instances, joined by Xinerama
    3. Two separate X instances - one using TwinView - both joined by Xinerama
    4. Two separate X instances - one using TwinView - but no Xinerama
    5. A single TwinView setup, leaving the 3rd screen unplugged :-p

    The 4th option, using 2 separate X instances and TwinView (but no Xinerama), was the best balance of performance and usability, but it caused 2 really annoying issues:

    - You couldn't control (without altering the shortcuts) which screen an application opened onto - and once it was open, you couldn't move it to another screen without opening a terminal and forcing it to move.
    - Nvidia's overriding or falsifying of Xinerama breaks things: the 2 screens joined by TwinView behave like a single huge screen, so popups open in the middle of both screens and maximised windows stretch across the width of the first 2 screens.

    On top of that, Firefox will only run one instance per user, so having Firefox windows on both X instances requires at least 2 users.

    The second option "feels" like the right one, but OpenGL is basically disabled, and playing any sort of game or running anything graphical causes a huge performance drop and instability - even a basic GBA or Gens emulator makes the system fall over. It works just well enough to stare at your desktop and do nothing, but as soon as you start doing some work - opening windows, dragging things around, running multiple copies of Firefox - it just feels really slow.

    The last option, only going dual screen, works perfectly and everything performs as required: full GPU acceleration, two logical screen spaces - perfect. Just make it work across GPUs like Windows does! :-p

    Anyway, I know RandR was supposed to pick up the slack when it introduced GPU objects of sorts, allowing multiple GPUs to be stitched together into one huge desktop at a much deeper layer than Xinerama. I was wondering if this has now been fixed (I noticed X server 1.7 is out) and whether anyone has got it running successfully?

    Again, my requirements are:

    - One huge desktop I can drag any window across
    - Maximising of windows to each screen (as XP does)
    - Running fullscreen apps on the primary screen, with the mouse prevented from moving onto the others, or stretched across all 3

    Finally, as a side note: I am aware of the Matrox triple (and dual) head splitters, but even the price they go for on eBay is more than I can afford at the moment. My argument: I shouldn't have to buy extra hardware to get something working on Linux that has existed in the Windows world for a long time (can you tell I don't get on with X? :-p). If I had the cash I'd have bought the latest version of that box already (the new version finally supports large resolutions, and the displays I have are 1680x1050 each).
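    For anyone reproducing the setups above, a rough sketch of generating the multi-X-screen plus Xinerama layout (options 2 and 3) without hand-editing xorg.conf. It assumes the NVIDIA binary driver and its nvidia-xconfig tool are installed, and it says nothing about the OpenGL slowdown - Xinerama still disables accelerated rendering across screens on drivers of this era as far as I know:

    # Back up the current config first - nvidia-xconfig rewrites it in place
    sudo cp /etc/X11/xorg.conf /etc/X11/xorg.conf.backup
    # One X screen per GPU/output, then join them with Xinerama
    sudo nvidia-xconfig --enable-all-gpus --separate-x-screens
    sudo nvidia-xconfig --xinerama
    # Restart X, then check what the server thinks the layout is
    xrandr --query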

    Read the article

  • Cannot access boot menu with Compaq 8510p

    - by pinouchon
    I have a problem with my HP Compaq 8510p laptop: when I start it, the fan starts and the power light comes on, but the screen displays nothing. When I insert a bootable CD, the drive light comes on (meaning the disc is recognized), but it stops after a few seconds. The same thing happens with any hard drive: the drive is recognized but does not boot.

    What I've tried so far:

    - Changing the hard drive, or booting with no hard drive (same problem)
    - Plugging in another display via VGA: no display on the other screen
    - Inserting a Windows 7 CD (same problem)
    - Booting on battery only, on battery plus power cable, and on power cable only (same problem)

    So it looks like something is preventing the laptop from booting and displaying the boot menu. Have you experienced something similar with a laptop? What could be wrong? The laptop is out of warranty. The system used to be Windows 7 x64.

    Edit: I went to the help desk of my university. A guy took a look (he also tried plugging in an external screen) and said that the computer is dead: on these HP laptops the GPU eventually dies, and so does the motherboard, because they are linked. He has seen this many times, and even if I can fix the problem, the laptop would crash again after a while. Do you have similar experience with HP laptops? (Mine is 4 years old.)

    Edit 2: Believe it or not, my laptop is magically working again. I have no clue what is going on. Now starting it is like starting an old car: when you turn it on, you secretly hope it will actually start... That said, I expect my laptop to break again in the near future (it's an HP, after all), and I will accept an answer or add my own accordingly.

    Edit 3: As expected, the laptop is down again. This time, sometimes when I power it up it shuts down automatically after 3 seconds; sometimes it does not shut down at all. In addition, when it does not shut down on its own, the power button does not work: the only way to shut it down is to unplug the battery. As before, the screen is black and only the power and battery lights are on (the others, hard drive and wifi, are off). I have tried plugging in another power adapter, removing the battery, and removing the hard drive, without success. I might buy another laptop. I've since brought this one to a repair shop; the problem is indeed that the graphics card is dead, and it will be replaced with a new one.

    Read the article

  • Hyper-V Ubuntu Networking Problems Copying Large Amounts of Data

    - by Anonymous
    I am trying to copy a large amount of data (about 50 GB) over my network, from a Hyper-V-hosted virtual machine running Ubuntu 11.04 (Natty Narwhal) to another (non-virtual) Ubuntu host that I plan to use for testing upgrades to one of our web applications. The problem I am having is with the virtual machine, which I shall refer to in what follows as "source.host". This machine is running 64-bit Ubuntu Server with the 2.6.38-8-server kernel and the Microsoft Linux Integration Components for Hyper-V kernel modules (hv_utils, hv_timesource, hv_netvsc, hv_blkvsc, hv_storvsc, and hv_vmbus) loaded. It uses a Hyper-V "synthetic network adapter" for its networking interface.

    To do the copy, I log on to the machine with the data and run the following commands (call the remote machine "destination.host"):

    $ cd /path/to/data
    $ tar -cvf - datafolder/ | ssh user@destination.host "cat > ~/data.tar"

    This runs for a while and then suddenly stops after transferring somewhere between 2 and 6 GB. The terminal on the source.host machine displays a "Write failed: broken pipe" error.

    The odd part is this: after this occurs, the source.host machine is no longer able to talk to the rest of the network. I cannot ping any other hosts on the network from source.host, and I cannot ping source.host from any other host on the network. I am equally unable to access any of the web services hosted on source.host. Running ifconfig on source.host shows the network adapter up and running as usual, with the correct IP address and everything.

    I tried restarting the networking service with

    $ /etc/init.d/networking restart

    but the problem does not go away. Restarting the machine makes it capable of talking to the network again - it can ping and be pinged by other hosts, and the web services are accessible and usable as normal - but attempting the copy operation again results in the same failure, requiring another restart.

    As an experiment, I tried replacing the tar-over-ssh pipeline above with a straight scp:

    $ scp -r datafolder/ user@destination.host:~

    but to no avail. Thinking that the issue might be the kernel packet-send buffers filling up, I tried increasing the buffer size to 12 MB (up from the 128 KB default) with

    # echo 12582911 > /proc/sys/net/core/wmem_max

    but this also had no effect. At this point I'm guessing it might be a problem with the Microsoft synthetic network driver, but I don't really know. Does anyone have any suggestions? Thank you very much in advance!
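    Not an answer to the driver question, but a hedged workaround sketch while the NIC keeps wedging: throttle and checkpoint the transfer so each attempt pushes less data and a lock-up doesn't cost the whole 50 GB. It assumes rsync is installed on both hosts; the 10000 KB/s limit is an arbitrary starting point to tune, not a known-safe value for the Hyper-V synthetic adapter:

    # Throttled, resumable copy: --bwlimit is in KB/s, and --partial
    # keeps partially transferred files so a retry resumes instead of
    # starting the whole transfer over
    rsync -av --partial --bwlimit=10000 datafolder/ user@destination.host:~/datafolder/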

    Read the article

  • Mysterious visitor to hidden PHP page

    - by B. VB.
    On my website, I have a "hidden" page that displays a list of the most recent visitors. No links at all point to this single PHP page, and, in theory, only I know of its existence. I check it many times per day to see what new hits I have. However, about once a week, I get a hit from a 208.80.194.* address on this supposedly hidden page (it records hits to itself).

    The strange thing is this: this mysterious person/bot does not visit any other page on my site - not the public PHP pages, only this hidden page that prints the visitors. It's always a single hit, and the HTTP_REFERER is blank. The user-agent data is always some variation of Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; YPC 3.2.0; FunWebProducts; .NET CLR 1.1.4322; SpamBlockerUtility 4.8.4; yplus 5.1.04b) - but sometimes MSIE 6.0 instead of 7.0, and with various other plug-ins. The browser string is different every time, as are the lowest-order bits of the address. And that's it: one hit per week or so, to that one page. Absolutely no other pages are touched by this mysterious visitor.

    Doing a whois on that IP address showed it's from the New York area, from the "Websense" ISP. The lowest-order 8 bits of the address are always different, but it is always within 208.80.194.0/24. From most of the computers I access my website from, a traceroute to my server does not pass through any router with a 208.80.* address, so I would think that rules out any kind of HTTP sniffing.

    I have NO idea how or why this is happening. Does anyone have any clue, or has anyone seen something as strange as this before? It seems completely benign, but unexplainable and a little creepy. Thanks in advance!
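    For anyone who wants to repeat the checks described above, a small sketch using standard tools - the .1 address is just an arbitrary example host inside the block, and your.server.example.com is a placeholder for the real server name:

    # Who owns the block the hits come from?
    whois 208.80.194.1 | grep -iE 'orgname|netname|netrange'
    # Does the path from this client to the server pass through 208.80.*?
    traceroute -n your.server.example.com | grep '208\.80\.'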

    Read the article

< Previous Page | 175 176 177 178 179 180 181 182 183 184 185 186  | Next Page >