Search Results

Search found 34813 results on 1393 pages for 'im fine'.

Page 285/1393

  • Permissions error when creating desktop shortcut

    - by Ryan M.
    Hey guys, I have a user with a weird permissions problem on Windows 7. He's trying to create a shortcut for Outlook on his desktop (he doesn't want it in his Start menu or his taskbar). If we right-click outlook.exe and do Send to Desktop, it works just fine. If we do a search for "outlook" in the search bar and then try to drag and drop the Outlook icon onto the desktop, we get the error message "You need permission to perform this action. You require permission from SYSTEM to make changes to this file: Microsoft Office Outlook 2007". Dragging and dropping other exes onto the desktop works just fine; they create shortcuts without any problems. But if I try to do ANY of the Office programs (Word, Excel, Outlook, etc.) I get this permission error. Any ideas? He's using an A.D. account and he's in the local administrators group. He's an executive, so he's not accepting "this isn't a real problem because I found another way to make a shortcut" as an answer. Any help is appreciated.
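    One possible explanation (an assumption, not confirmed in the post): dragging an item out of the Start menu search results makes Windows try to move the all-users .lnk file out of the Start Menu folder, which is owned by SYSTEM, rather than create a new shortcut - that would explain why only the Office entries complain. A hedged workaround sketch from a command prompt; the Office 2007 Start Menu path below is also an assumption, so adjust it to whatever the search result actually points at:

        rem Copy the existing all-users shortcut instead of moving it; copying only needs read access
        copy "C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Microsoft Office\Microsoft Office Outlook 2007.lnk" "%USERPROFILE%\Desktop\"

        rem Show who owns that .lnk and what rights the user actually has on it
        icacls "C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Microsoft Office\Microsoft Office Outlook 2007.lnk"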

    Read the article

  • upstart scripts: run a task after networking goes up

    - by The Journeyman geek
    I'm working on moving my current server setup to newer hardware and migrating from Ubuntu Karmic Koala to Lucid Lynx. Currently I'm using gw6c (compiled from the gogo6 website, as opposed to the version from the repositories) to get IPv6 access for my systems. On the Karmic Koala system I used a simple init.d script to get the IPv6 client started:

        #! /bin/sh
        /usr/local/gw6c/bin/gw6c -f /usr/local/gw6c/bin/gw6c.conf

    I figured that since this runs at any runlevel, it should translate to:

        respawn
        console none
        start on startup
        stop on shutdown
        script
        exec /usr/local/gw6c/bin/gw6c -f /usr/local/gw6c/bin/gw6c.conf
        emit free6_ipv6_started
        end script

    This works fine when started from initctl, but it apparently fails to start at boot - its status being stop/waiting. It works fine (and respawns) when started otherwise. Any ideas on where I'm going wrong, and what would be the appropriate 'start on' argument? EDIT: the exact error is 'init: gw6c main process (xxx) ended with status 8', followed by the process respawning, with xxx being a PID, I suspect. I'm also half suspecting this is because gw6c starts before networking does, and I need my eth0 up before gw6c starts.
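    A minimal sketch of the job file along those lines, assuming the stock Ubuntu 10.04 upstart events (local-filesystems from mountall, net-device-up from the ifupdown hooks) and the eth0 interface named in the edit:

        # /etc/init/gw6c.conf -- sketch only; verify the event names against the jobs shipped on the box
        description "gogo6 IPv6 tunnel client"

        # wait until the filesystems are mounted and eth0 is actually up,
        # instead of starting at the very beginning of boot
        start on (local-filesystems and net-device-up IFACE=eth0)
        stop on runlevel [016]

        respawn
        console none

        exec /usr/local/gw6c/bin/gw6c -f /usr/local/gw6c/bin/gw6c.conf

    With a start condition like that, status 8 exits caused by the tunnel client coming up before the network should go away; if they don't, the problem is something other than ordering.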

    Read the article

  • OS X Application icons missing after filesystem tampering

    - by dylan
    A while back I installed some Google software, I forget what it was, but it was something that attached itself to Google Chrome and was un-uninstallable short of uninstalling all of Chrome. It was a total pain in the a$$. Anyways, I was trying to put up a fight before going nuclear and just wiping Chrome, and during that fight something went wrong (i.e., I didn't know what I was doing). I remember messing around with the Applications folder somehow. I don't remember exactly what I did, but it perhaps involved some interchanging between the /Applications and /Users/Username/Applications folders? Definitely some tampering in those areas, I'm sure at least about that. Anyways, I've since updated and restarted my computer. Now, while all my apps work fine, many of them have blank icons - not on the Dock, those icons are fine, but in Spotlight search results, etc. Currently, my /Applications folder contains only the applications OS X ships with. My ~/Applications folder contains the built-in apps AND post-OS/third-party apps. There hasn't been any functionality deficit so far, but I don't like working on a machine where something isn't how it should be. I know my information is vague, but does anyone know what went wrong / how to fix it? OS X Mavericks 10.9.3
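    If the applications themselves launch fine and only the icons shown by Spotlight are blank, the damage may be limited to the LaunchServices database rather than the app bundles themselves. A sketch of the commonly cited rebuild, assuming the stock framework path on Mavericks (the flags are the generic ones, nothing here is specific to this machine):

        # Force LaunchServices to rebuild its database of app bundles and their icons
        /System/Library/Frameworks/CoreServices.framework/Frameworks/LaunchServices.framework/Support/lsregister \
            -kill -r -domain local -domain system -domain user

        # Restart Dock and Finder so they pick up the rebuilt data
        killall Dock
        killall Finder

    If blank icons persist after that, comparing the bundles that were moved between /Applications and ~/Applications against a known-good machine is the next step.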

    Read the article

  • Configure New Server for .htaccess

    - by Phil T
    I have a new LAMP CentOS 5 server I am setting up, trying to copy the configuration from another web server I have. I am stuck with what I think is a mod_rewrite problem. If I go to http://old-server.com/any_page_name.php it correctly routes through some handling code in index.php and shows me a graceful "Page Cannot Be Displayed" message. But if I go to http://new-server.com/any_page_name.php I get an ugly Apache 404 Not Found error message. I looked in both httpd.conf files and they both have only one reference to mod_rewrite:

        LoadModule rewrite_module modules/mod_rewrite.so

    So it seems like that should be fine. At the bottom of httpd.conf I have:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/html
            ServerName new-server.com
            ErrorLog logs/new-server.com-error_log
            CustomLog logs/new-server.com-access_log common
        </VirtualHost>

    Then in the root of /var/www/html I have the exact same .htaccess file, which looks like this:

        RewriteEngine on
        Options +FollowSymlinks
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . index.php [L]
        ErrorDocument 404 /page-unavailable/
        <files ~ "\.tpl$">
            order deny,allow
            allow from none
            deny from all
        </files>

    So I don't see why the page load at old-server.com works fine while new-server.com doesn't route through index.php like I want it to. Thanks.
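    A common difference between two otherwise identical httpd.conf files is the AllowOverride setting for the docroot: if the new box still has the stock CentOS <Directory "/var/www/html"> block with AllowOverride None, Apache silently ignores the .htaccess and serves the plain 404, which matches this symptom exactly. A sketch of what to look for (directory path taken from the vhost above; everything else is a stock Apache 2.2 directive):

        <Directory "/var/www/html">
            Options +FollowSymLinks
            # "None" makes Apache skip .htaccess entirely; the rewrite rules need at
            # least FileInfo, and "All" matches the usual old-server default
            AllowOverride All
        </Directory>

    After editing, "service httpd configtest" followed by "service httpd reload" should make the difference visible immediately.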

    Read the article

  • How to prevent an SSD from disappearing from BIOS

    - by Midimatt
    I've only recently upgraded my old machine to a new one with a brand new 60 GB SSD as my boot drive and a 1 TB main drive. Paranoid about completely breaking my SSD, I read up on a lot of issues I needed to watch out for, including making sure AHCI was turned on and TRIM enabled. The PC had been working fine for a few weeks, until today. My wife was watching some TV on the machine when it started to act strange and eventually blue-screened. She rebooted and the boot manager was missing. When I got home from work I checked the BIOS and the drive had disappeared. I panicked and looked up some possible fixes, and I discovered a large number of people having problems with the drive firmware, especially on OCZ Vertex and Agility drives, and my drive is an Agility 3. The problems included blue screens followed by missing drives, and a suggested solution was to reset the CMOS and try again. This worked, and now everything seems to be working fine. My question is, is there any way to prevent this from happening? Am I missing a setting for my SSD? All of the posts I found were from early to mid-2011, nothing for the end of 2011 to 2012, so I am wondering if I've missed anything. EDIT: Checked my drive's firmware and it is 2.15, which has had issues reported by users.

    Read the article

  • How can I get mouse capture to work in VirtualBox when I move the guest window to a second monitor?

    - by Dan
    I'm running VirtualBox on a Win7 host. The guest OS is CentOS. My setup is a laptop (screen 1) with a huge external monitor (screen 2). I'm only telling CentOS there is one monitor, though. When I start up VirtualBox on screen 1, the mouse capture works fine. I don't have mouse integration because (I think) the kernel on CentOS is too old to support it. That's fine, I don't mind doing the Right-Control thing. The problem I have is that when I drag the whole VM window over to my second monitor, the mouse capture doesn't work right anymore. I click inside the VM and can move the VM cursor a little bit, but I can't always get to the edges of the VM screen - before I get all the way to an edge, the cursor will escape from the VM as if I had hit Right-Control. But it's still captured according to the icon, and if I then hit Right-Control, the guest cursor jumps to a different screen location. My workaround: if I have the VM window mostly on screen 2, but a small corner of it still on screen 1, then the mouse capture works correctly. Is there a setting to make this work better?

    Read the article

  • MSMQ on Win2008 R2 won't receive messages from older clients

    - by Graffen
    I'm battling a really weird problem here. I have a Windows 2008 R2 server with Message Queueing installed. On another machine, running Windows 2003 is a service that is set up to send messages to a public queue on the 2008 server. However, messages never show up on the server. I've written a small console app that just sends a "Hello World" message to a test queue on the 2008 machine. Running this app on XP or 2003 results in absolutely nothing. However, when I try running the app on my Windows 7 machine, a message is delivered just fine. I've been through all sorts of security settings, disabled firewalls on all machines etc. The event log shows nothing of interest, and no exceptions are being thrown on the clients. Running a packet sniffer (WireShark) on the server reveals only a little. When trying to send a message from XP or 2003 I only see an ICMP error "Port Unreachable" on port 3527 (which I gather is an MQPing packet?). After that, silence. Wireshark shows a nice little stream of packets when I try from my Win7 client (as expected - messages get delivered just fine from Win7). I've enabled MSMQ End2End logging on the server, but only entries from the messages sent from my Win7 machine are appearing in the log. So somehow it seems that messages are being dropped silently somewhere along the route from XP or 2003 to my 2008 server. Does anyone have any clues as to what might be causing this mysterious behaviour? -- Jesper

    Read the article

  • Linux CentOS 6 becomes unavailable from time to time - OS & network issue

    - by adoado0
    I am encountering the following problem. There is one server (DL160 G5) running CentOS 6.3 with the default kernel 2.6.32-220.2.1.el6.x86_64 - at this point I'd like to add that the issue also appeared on an older version - 6.1 with an older kernel (I do not remember exactly which). cPanel is installed, and from time to time the server becomes unavailable (network connection). What I've checked (via KVMoIP):
    - load average is completely normal
    - it does not lack memory or disk space
    - when the problem occurs there are no console notifications
    - checked all access logs and there is no sign that it could be caused by a client script
    - cannot even access the local interface (127.0.0.1) or the main IP address
    - running tcpdump I can only see packets arriving at the server - no responses
    - all services seem to be running properly (mail, sql, http, ssh)
    - checked crontab and all clients' crontabs too
    - network port utilisation is low (up to several Mbits)
    - arriving packet rate is low - hundreds per second (according to tcpdump)
    - the console (via KVMoIP) works fine, no lags
    - there is no conntrack at this server
    - there is no IPv6 at this server
    - flushing iptables and unloading modules does not resolve the problem
    - restarting the network does not resolve the problem, and no errors appear
    - it also occurs when two separate networks (and multiple gateways) are configured, as well as with one IP, one default gateway and one network - so it seems independent of network configuration
    - it seems to repeat randomly (independent of load, packet rate and bandwidth usage)
    - checked the server with different rootkit detection tools - it seems to be clean
    - the server has been rebooted, which did not change anything
    - there are no interface errors
    - it appears randomly, anywhere from once a week to several times per day
    It usually works fine again after 1-15 minutes. What else can I check? It is definitely an OS issue - there is traffic at the interface in only one direction when the problem occurs, and I cannot even ping loopback. Any ideas? Recommended checks? Anything I did not check above?
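    Since the box stays up and only the network path dies, a short diagnostic pass run from the KVMoIP console during an outage can show where packets are being lost; a sketch using standard CentOS 6 tools (nothing below is specific to this server, and eth0 stands in for whichever interface carries the traffic):

        # Run these during an outage, from the KVMoIP console
        dmesg | tail -n 50                              # kernel messages: NIC resets, OOM, softlockups
        ss -s                                           # socket summary: exhausted or backlogged sockets
        cat /proc/net/softnet_stat                      # per-CPU drops in the softirq receive path
        ethtool -S eth0 | grep -i -E 'err|drop|fifo'    # NIC-level error/drop counters
        ip -s link show eth0                            # interface RX/TX error counters
        arp -an                                         # is the gateway's ARP entry present and sane?

    If the softnet or NIC counters climb during the outage while tcpdump still shows traffic arriving, the loss is in the receive path rather than in routing or firewalling.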

    Read the article

  • Apache intermittently aborting requests

    - by Adam Phillips
    I have just been dealing with a problem whereby HTTP requests were being aborted, seemingly at random. On any particular page in the website, when you opened a page, a number of the assets (img, css, etc.) failed to load. If you refreshed, the page might work fine, the same set of assets might fail to load, or different assets might fail to load. The Net tab in Firefox was returning 'Aborted' in the HTTP status code column for the failed assets, even though in the case of images, the image previews were still working. There was nothing in any of the Apache logs about the requests that failed; however, since it seemed to point to an Apache issue, we restarted Apache. The first time we tried, it made no difference, but about 10 minutes later, in the absence of a better solution, we tried again. Bizarrely, the problem disappeared immediately. So now the site seems to be running fine again, but it's rather unsettling - both the intermittent nature of the problem and the lack of an explanation for its resolution. Has anyone seen anything like this before, and if so, did you find out the reason behind it? Many thanks
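    If it comes back, one low-cost way to gather evidence is to turn on Apache's status page and watch the worker scoreboard while requests are being aborted; a sketch using stock Apache 2.2 directives (the Allow line assumes the page only needs to be readable from the server itself):

        # In the Apache config (e.g. a conf.d include), then reload Apache
        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    While the problem is live, "curl http://localhost/server-status?auto" shows how many workers are busy, idle, or dying, and pairing that with a temporary "LogLevel debug" can help narrow an intermittent abort down to a module, a worker limit, or a misbehaving client.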

    Read the article

  • mysqldump isn't able to export a specific database, phpMyAdmin crashes

    - by Devils Child
    I'm experiencing problems with a database on my server (note: all other databases work fine). Once I try to export it with mysqldump I get this error:

        # mysqldump -u root -pXXXXXXXXX databasename > /root/databasename.sql
        mysqldump: Couldn't execute 'show table status like 'apps'': Lost connection to MySQL server during query (2013)

    Also, phpMyAdmin throws an error when selecting this database and immediately logs out. However, the web site which uses this database works fine. I can also execute SELECT statements on the table named "apps" from the MySQL shell. I tried restarting the MySQL daemon as well as REPAIR DATABASE and REPAIR TABLE, but the problem still persists. I had this problem before, then it disappeared somehow without me doing anything to resolve the issue. Now the problem is back and I'm simply unable to create a backup of this database.

    Used software: Debian 6.0.7 x64, MySQL 5.1.66-0

    MySQL version:

        mysql> SHOW VARIABLES LIKE "%version%";
        +-------------------------+-------------------+
        | Variable_name           | Value             |
        +-------------------------+-------------------+
        | protocol_version        | 10                |
        | version                 | 5.1.66-0+squeeze1 |
        | version_comment         | (Debian)          |
        | version_compile_machine | x86_64            |
        | version_compile_os      | debian-linux-gnu  |
        +-------------------------+-------------------+
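    The "Lost connection to MySQL server during query" on a show table status usually means mysqld is dying or stalling while reading that one table's metadata, which points at table damage rather than a dump problem. A sketch of the usual first steps - the database and table names are the ones from the post, everything else is standard MySQL 5.1 tooling, and it is worth copying the raw files under /var/lib/mysql somewhere safe before any repair:

        # See whether mysqld is actually crashing/restarting when the table is touched
        tail -n 100 /var/log/mysql/error.log          # log path may differ; check /etc/mysql/my.cnf

        # Check (and, for MyISAM, optionally repair) just the problem table
        mysql -u root -p -e "CHECK TABLE apps;" databasename
        mysql -u root -p -e "REPAIR TABLE apps;" databasename     # MyISAM only

        # Or check every table in the database in one pass
        mysqlcheck -u root -p --auto-repair databasename

        # If the table stays unreadable, dump everything else so at least the rest is backed up
        mysqldump -u root -p --ignore-table=databasename.apps databasename > /root/databasename-partial.sql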

    Read the article

  • Despeckle line art

    - by Dour High Arch
    We have a number of line-art charts unfortunately saved as JPEGs. They are now riddled with distracting compression artifacts or "speckles". Is there any way of removing these? I do not have the original files and it will be very difficult to recreate them. I am running Windows 7 and tried Paint.NET; none of the filters help. Posterize washes out all the colors and leaves the speckles. Blur makes text unreadable. Noise Reduction wrecks the antialiasing of curved lines and perversely enhances the speckles, making them look like checkerboards. Yes, I have Googled for software to do this; there are many programs that advertise despeckling but, after my experience with Paint.NET, I do not want to experiment with applications that show no before-and-after images. The only example I have seen that does what I want is from a Photoshop tutorial. I have dozens of files and the tutorial requires considerable manual fine-tuning. I would prefer to automate or batch-process this task. Commercial apps are fine, but I do not want to spend over $600 and learn a complex program for a single task.
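    One free, scriptable option worth testing on a couple of charts before spending anything is ImageMagick, whose -despeckle operator targets exactly this kind of isolated JPEG noise; a hedged batch sketch for Windows (the install path is an assumption - use the full path so Windows' own convert.exe for filesystems is not picked up by mistake):

        @echo off
        rem Run from the folder of JPEGs, as a .bat file; writes cleaned PNG copies
        rem alongside the originals so nothing is overwritten.
        for %%f in (*.jpg) do (
            "C:\Program Files\ImageMagick\convert.exe" "%%f" -despeckle "%%~nf_clean.png"
        )

    Saving to PNG keeps the cleaned line art from being re-compressed, and if a single -despeckle pass is too gentle, the operator can be repeated per file - still far cheaper to trial than a $600 package.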

    Read the article

  • UNC shared path not accessible though necessary permissions are set

    - by Vysakh
    I have 2 environments, A and B. A is the original environment, whereas B is an exact clone of A, except for the AD servers. B's AD server has been given a trust relationship with A, so that all the service and user accounts from A can be used in B too. And the trust works fine, perfect!! But I encounter some issues accessing UNC paths (\\server2\shared) with these service accounts. I checked the A environment, and all the permissions set there are set in B too (already set, since it is a clone of A), but the issue is with the B environment only. And FYI, the user is an owner of that folder in both environments. I tried creating a folder inside the share (\\server2\shared) using the command prompt, but it failed with the error "access denied". As a workaround, I added that user on the "Security" tab of the folder permissions, and after that it worked fine. But this was not done in the original environment. Is this something related to the trust relationship? Why does the share to the same location, for the same user, work differently in the two environments, though they've been set with the same permissions? FYI, these are Windows 2003 servers. Can someone please help?

    Read the article

  • Connections to IIS sometimes get stuck in CLOSE_WAIT state

    - by randomhuman
    Our application includes an ASP.NET web service that only needs to deal with a handful of clients. As such, the 10 incoming connection limit of Windows XP Pro is generally not a problem. However, on one particular server, connections are occasionally becoming stuck in the CLOSE_WAIT state. These connections build up over time, and eventually new client connections are refused because the maximum number of connections is used up. From my googling it sounds like a failure of the web service to properly close the connection can cause this problem, but as it works just fine on hundreds of other Windows XP Pro machines I can't see it being a bug in our code. It also ran fine on the affected machine until some shenanigans on the part of the end user (I think they set about deleting duplicate files in order to reduce their disk usage, but they did not exactly come clean about it). What could the user have changed to introduce this problem? Is there any way I can force connections that are in CLOSE_WAIT to time out rather than letting them hang around? I have seen suggestions to reduce TcpTimedWaitDelay, but that only relates to the TIME_WAIT state, and changing it did not have any effect.
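    CLOSE_WAIT specifically means the remote end has already closed and the local application has not yet closed its half of the socket, so it is worth confirming which local process is actually holding the stuck connections before chasing registry settings. The built-in XP tools can do that; a quick sketch (the PID is whatever the first command reports):

        rem List connections stuck in CLOSE_WAIT together with the owning process ID
        netstat -ano | findstr CLOSE_WAIT

        rem Map a PID from that output to a process name
        tasklist /FI "PID eq 1234"

    If the owner turns out to be something the user installed (a duplicate-file finder that hooks the network, a proxy, or an antivirus layer) rather than the web service host, that points straight at the shenanigans; if it is the web service process itself, the machine-specific difference likely sits in whatever now runs between the clients and the service.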

    Read the article

  • LAMP -- editing a PHP file doesn't change the web output -- not even a die()

    - by Reid W
    The server is a standard Linux server on Amazon Web Services: CentOS 5, Apache, PHP 5.3, no APC. It has worked fine for over a year, but now when I edit some, but not all, PHP files on the server using vi, the changes don't affect the web output. For example, I edit myfile.php and put a die() at the top, but when I load the page in my web browser, instead of the die() I see the content that would show up if the die() weren't there. svn updating the file in question doesn't help either. Files are on an Amazon EBS partition symlinked to /var/www/html. Just to reiterate - this has worked fine for a long time. Restarting Apache didn't help, nor did rebooting the server. What's weird is that it's just some of the files but not all. File ownership/permissions are the same for the "good" and "problem" files. I'm not a Linux newbie but am at a complete loss with this, and couldn't find anything on Google either. Any hints would be much appreciated!
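    Given the EBS-plus-symlink layout, one hedged line of attack is to confirm that the file being edited in vi is the same inode Apache is actually serving - a second copy of the docroot, a vhost pointing somewhere unexpected, or a leftover opcode cache would all produce exactly this "edits ignored" symptom. A sketch using standard CentOS tools (myfile.php is the example from the post; the EBS-side path is a placeholder):

        # Where does the symlinked path really resolve, and which inode is it?
        readlink -f /var/www/html/myfile.php
        ls -li /var/www/html/myfile.php
        ls -li /mnt/ebs/www/myfile.php        # placeholder EBS path -- the inode numbers should match

        # Is Apache serving this docroot, or a second copy defined in another vhost?
        grep -ri 'DocumentRoot' /etc/httpd/conf /etc/httpd/conf.d

        # Rule out an opcode cache that slipped in despite "no APC"
        php -m | grep -i -E 'apc|eaccelerator|xcache'

    If the inodes differ or a second DocumentRoot shows up, Apache is reading a different copy of the "problem" files than the one being edited, which would also explain why only some files misbehave.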

    Read the article

  • [SOLVED] How do I restore my audio after uninstalling Ventrilo?

    - by Marcx
    Hi, I have a Dell Studio 1555, bought in September, with Windows 7 64-bit Professional on it. The audio device works properly while listening to audio content (from disk or the internet). When I use Ventrilo, the audio from other people sounds good and I hear their voices clearly. When I use any other VoIP program, like TeamSpeak 3, MSN or Skype, I hear a distorted voice and it's impossible to understand anything... Anyway, everything worked fine until I installed Ventrilo, but removing it didn't solve my problem. Update: here's a sample of how I hear other people's voices: Audio Sample. After some tests, the desktop has the same problem (I tried TeamSpeak 3). Here are some details on my laptop and desktop.

    Laptop: Dell Studio 1555, Core 2 Duo P8600 2.4 GHz, 4 GB RAM dual channel, ATI HD 4570 512 MB dedicated (up to 2048), IDT High Definition Audio

    Desktop: Asus P5KPL-AM motherboard, Dual Core CPU E5200 2.50 GHz, 2x2 GB PC6400 dual channel, ATI Radeon HD 4650 512 MB, VIA High Definition Audio

    Both computers have Windows 7 Professional 64-bit. So how do I restore my audio? SOLVED: The problem was in the router firmware - there was a bug that recognized VoIP traffic as a DoS attack and the router garbled every packet. I've installed the newest firmware and everything is fine :)

    Read the article

  • Laptop does not charge old or new battery with old and new AC adapters

    - by Jeff
    My Sager laptop has been having a strange charging issue. For a long time it would work perfectly as long as I was active on it. After I'd leave it idle for a while, it would suddenly decide it didn't want to use AC power anymore and would just discharge the battery until it shut down because of low battery levels. It was not a huge deal to me, since I just sent it to standby when done with it and it worked fine. Recently, however, it would not detect AC power while the battery was in. It ran from the battery just fine, but until you powered it down, unplugged the battery, and then plugged in the AC adapter, it would not run on AC. In addition, if I plug the battery back in after it's on AC power, it will see it, but the battery won't charge, though it can still discharge. This is OS independent. I tried both a replacement battery and a replacement AC adapter. Neither solved my issue. I'm fairly comfortable opening and servicing a laptop, but I don't know where to start. I'd like to avoid replacing my system board if possible. Any ideas?

    Read the article

  • Computer turns on and off very quickly, then nothing, then works?

    - by hellohellosharp
    The strange nature of this problem is what is stumping me. I built my computer about 7 months ago using all new parts from Newegg (not a kit or anything). One day, I wake up and turn on my computer. I press the power button and it turns on, but then shuts back off after half a second. I press the power button again; this time nothing. I continue pressing the power button while at the same time turning the power supply on and off (to try to reset things). The power button still does nothing. But then, after about 5 minutes, voilà, it works just fine like nothing was ever wrong. It goes for an entire week working just fine. Then, one morning, the entire process starts again. I press the power button and it comes on and then goes right back off. I press the power button several times and nothing happens, and then it works again after a couple of minutes of trying. What is going on with my computer?

    Read the article

  • (Some) security perms in WinXP corrupted (shows GUID instead of username)

    - by Andy
    I've been using my Win XP machine (part of a domain) over the holiday period, so until yesterday it hadn't rebooted for about five days. I used it yesterday perfectly fine and shut it down. When I switched it on this morning, the majority (but not all) of my shortcut links in the Quick Launch toolbar showed as generic file icons. If you open the folder and get properties on one of the failing shortcuts, it says "Target type: This is not a valid shortcut". Then in Outlook I noticed my signature wasn't showing (I checked my Sent folder and the sig was OK yesterday). Checking the signature folder, I can't see the Security tab on any of the sig files, and I get an access-denied message when trying to open them. I can see the Security tab on the signature folder itself, just not on any of the contents. If I try to use the parent folder's Security tab and "Replace permission entries on all child objects with entries shown here that apply to child objects", it appears to work fine but makes no actual difference. I logged in as administrator and saw that the owner of the files showed up as a GUID (it clearly should have been my account). Any ideas what might have made that happen? So far I haven't heard any similar complaints from anyone else at the office...
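    An owner shown as a raw GUID/SID usually just means the machine could not resolve that SID to a name at the time (for a domain account, typically a domain-connectivity or cached-credential hiccup), so the first check is whether it resolves once the machine can see a domain controller again. If the ACLs really are damaged, a hedged repair sketch from the local administrator account, after taking ownership of the folder via the Security tab - the path is the default XP Outlook signatures location, and DOMAIN\username is a placeholder:

        rem Re-grant the user full control on the signatures folder and everything in it,
        rem editing the existing ACLs rather than replacing them
        cacls "C:\Documents and Settings\username\Application Data\Microsoft\Signatures" /T /E /G DOMAIN\username:F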

    Read the article

  • Autossh startup on Ubuntu 10.04 - fails after powering off

    - by grant
    I'm using upstart to keep a reverse SSH tunnel alive using autossh, similar to Using Upstart to Manage AutoSSH Reverse Tunnel. This works fine, except that after a manual power-off I can no longer connect to the machine through the "central server" using the tunnel; I receive "ssh_exchange_identification: Connection closed by remote host". The autossh process is running on the client. I can connect again after restarting networking. I'm trying to figure out why this fails consistently after a manual shutdown. Is it possible that I need to do some cleanup on startup that would allow the tunnel to work in this situation, or are there some other debugging/troubleshooting steps I can take to determine the problem? Machine A is the client machine, using autossh. This machine sits behind a firewall and uses the following command in upstart to create an SSH tunnel:

        /usr/bin/autossh -fN -i /keyfile -o StrictHostKeyChecking=no -R 20098:localhost:22 user@centralserver

    Machine B, which we'll call the "central server", sits in the cloud and is the host ("centralserver" in the command above). When Machine A is hard powered off and back on, I cannot connect to it by SSH'ing from my machine (C) to Machine B in the cloud and then using the following command to get to Machine A:

        ssh -p 2098 user@localhost

    Again, after a reboot of the client (A), this works fine. It is only after a hard power-down that the problem occurs. There are autossh processes running on the client machine (A) after powering down and back up, but they just don't seem to be doing their job.
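    A likely explanation for the hard-power-off case is that the central server still holds the previous, half-dead tunnel, so when the client comes back up its new -R 20098 forward fails to bind; by default ssh carries on anyway, leaving autossh "running" with no usable forward. A sketch of the client command with the options that make that failure fatal (so autossh retries until the stale session times out), plus keepalives so both ends notice dead sessions sooner - the key file, port and hosts are the ones from the post, and the added -o options are standard OpenSSH:

        /usr/bin/autossh -fN -i /keyfile \
            -o StrictHostKeyChecking=no \
            -o ExitOnForwardFailure=yes \
            -o ServerAliveInterval=30 \
            -o ServerAliveCountMax=3 \
            -R 20098:localhost:22 user@centralserver

    On the central server, setting ClientAliveInterval 30 and ClientAliveCountMax 3 in /etc/ssh/sshd_config (and reloading sshd) makes the stale session from before the power-off disappear within a couple of minutes instead of lingering and holding the forwarded port.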

    Read the article

  • Windows XP blinking underscore after BIOS

    - by heyjoe
    So, this is for an older PC I have to repair for a friend. The PC has an HDD of about 60-something GB and runs Windows XP. On roughly 60-70% of boots it hangs, showing only a blinking underscore after the BIOS screen; the rest of the time it boots fine, or the computer shuts down on the XP loading screen. Sometimes, if you leave it alone while the underscore is blinking, it will boot after a while, like a few minutes; sometimes it won't boot at all even if you give it more time, like an hour. When it boots successfully the PC seems to work fine. I think it's a bad hard disk and I'm about to suggest buying a new one and swapping it, but I don't have enough experience, and I would hate making him buy a new HDD and not solving the problem. Does anyone have any tips? I know there are other topics about blinking underscores or cursors while XP is booting, but the issue of the PC shutting itself down or sometimes booting just fine really freaks me out. I can't format everything and reinstall until about 10 days from now, because the dude has some program for his business on this PC and I have to migrate it when the next computer arrives; however, he needs to use it until then. So please advise, thanks.
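    Before recommending a new drive, it is worth letting Windows itself scan the disk and report - a surface scan that finds bad sectors backs up the "bad hard disk" suspicion, while a clean pass points more toward cabling, power or filesystem damage. A sketch using only stock XP tools:

        rem Schedules a full surface scan and repair of C: at the next reboot
        rem (C: is in use, so chkdsk asks to run at boot; it can take an hour or more)
        chkdsk C: /r

        rem After the reboot, review the result in Event Viewer (eventvwr.msc),
        rem Application log - the chkdsk summary lists any bad sectors found.

    If the drive vendor offers a bootable diagnostic (most do), running that as a second opinion costs nothing and reads the drive's own SMART data, which is more conclusive than the boot symptoms alone.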

    Read the article

  • IIS 6.0 https not working "connection was reset"

    - by cad
    Application server: Windows Server 2003 SP2 with IIS 6.0. IIS has a "Default Web Site" (port 18000, SSL 443, ID=1) with a certificate created by me. I have a specific site called "scj.galaxy.Weekly" (port 80, SSL 443, ID=1272369728) that is working fine. I have an entry in windows/system32/drivers/etc/hosts that points galaxy.Weekly.scjdev.ds to the server IP, on both my local machine and the application server.

    These sites work:
    - http://scj.galaxy.weekly/test.html works
    - http://scj.galaxy.weekly/test.aspx works

    But https://scj.galaxy.weekly/test.html fails. The error message is: "The connection was reset. The connection to the server was reset while the page was loading." The certificate was working fine for months. It was created with something similar to this:

        Selfssl /N:CN=*.scjdev.ds /V:3650 /S:1 /P:443

    I have tried several options and none of them are working:
    1) Create a certificate only in "Default Web Site" and link it to SecureBindings from the command prompt:
       cscript adsutil.vbs set /w3svc/1272369728/SecureBindings ":443:galaxy.Weekly.scjdev.ds"
    2) Create a certificate only in the "Galaxy Site" and link it to SecureBindings
    3) Create a certificate in both and link them to SecureBindings

    Probably I am missing a step or something, but I can't see it. Here is the relevant config of the Galaxy site:

        <IIsWebServer Location="/LM/W3SVC/1272369729" AuthFlags="0" LogPluginClsid="{FF160663-DE82-11CF-BC0A-00AA006111E0}" SSLCertHash="c36a514a0be90fbc121d9c19bb052842289d5aee" SSLStoreName="MY" SecureBindings=":443:galaxy.Weekly.scjdev.ds" ServerAutoStart="TRUE" ServerBindings=":80:galaxy.Weekly.scjdev.ds" ServerComment="galaxy.Weekly.scjdev.ds" >
        </IIsWebServer>
        <IIsWebVirtualDir Location="/LM/W3SVC/1272369729/root" AccessFlags="AccessRead | AccessScript" AppFriendlyName="Default Application" AppIsolated="2" AppRoot="/LM/W3SVC/1272369729/Root" AuthFlags="AuthAnonymous | AuthNTLM" DefaultDoc="Default.aspx" DirBrowseFlags="EnableDirBrowsing | DirBrowseShowDate | DirBrowseShowTime | DirBrowseShowSize | DirBrowseShowExtension | DirBrowseShowLongDate" Path="D:\Webs\Galaxysite" ScriptMaps="some config..." >
        </IIsWebVirtualDir>
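    One detail worth confirming before re-issuing certificates: the adsutil command in option 1 targets site ID 1272369728, while the metabase fragment pasted above belongs to 1272369729, so the SecureBindings and certificate may not be landing on the site that actually answers for galaxy.Weekly.scjdev.ds. A sketch of the checks, using the stock IIS 6 adsutil.vbs from C:\Inetpub\AdminScripts (the ID below is the one from the pasted config - swap in whichever ID ENUM reports for the Galaxy site):

        rem List the site IDs IIS actually has
        cscript adsutil.vbs ENUM /P W3SVC

        rem Confirm which site that ID is, and what its bindings and certificate currently are
        cscript adsutil.vbs GET W3SVC/1272369729/ServerComment
        cscript adsutil.vbs GET W3SVC/1272369729/SecureBindings
        cscript adsutil.vbs GET W3SVC/1272369729/SSLCertHash

    A reset during the SSL handshake is commonly what IIS 6 produces when a site has a 443 binding but no usable certificate attached, so an empty or mismatched SSLCertHash on the site that really owns the host name would match the symptom.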

    Read the article

  • Identifying test machines in analytics logs

    - by RTigger
    We're just beginning to add analytics to our SaaS application, to begin (among other things) billing clients based on usage. The problem we're running into is that there are a few circumstances where our support team will simulate a login to production to try to reproduce issues reported with a client's configuration. When they log in, an entry is made in our analytics logs saying that this specific account has logged in, which we use to calculate billing. A few ideas we had to solve this:
    1) We log IP addresses as well as machine keys for each PC that logs in - we could filter out known IP addresses and/or machine keys belonging to support. The drawback is we have to maintain a list of keys/addresses manually.
    2) If support (or anyone else internal) runs our application in debug mode (as opposed to release), it will not report analytics. This is fine, as long as support or anyone else remembers to switch to debug mode.
    3) Include some sort of registry key or similar setting that must be set when configuring a production system in order to send analytics. Again, fine, as long as our infrastructure team remembers to set the key or setting.
    All of these approaches require some sort of human involvement, which we all know can be iffy at best. Has anyone run into a similar situation? Is there an automated approach to this problem? (PS: Of course we shouldn't be testing in production, but there are a few one-off instances with customer setup that we can't reproduce without logging in as them in production. This is the only time we do so, and this is the case I'm talking about in this question.)

    Read the article

  • Accessing or Resetting Permissions of a Mounted Registry Hive of a Different User / From a Different System

    - by Synetech
    I’m currently stuck using my backup system until I can replace my dead motherboard. In the meantime, I have put my hard-drive in this system so that I can access my files and keep working on the backup system. Fortunately, I don’t have a permission issues with the files (the partitions are FAT32). The issue I’m having is with the registry. I need to import some of my settings from the hives of my (old? normal?) installation of Windows into the one I’m currently using. Settings from the system hives (SYSTEM, SOFTWARE, etc.) are fine, but the user hive is giving me trouble. I’ve copied the NTUSER.DAT file from my other drive and mounted it with the reg command. Most of the keys (eg Software) are fine and I can access them without problem, but some of them (particularly the Identities key where Outlook Express settings are stored) complains that it cannot be opened. If I open the permissions dialog, I get an error about being unable to view the current permssions. If I then ignore it and try to take ownership of the key and it’s subkeys, I get an access-denied error. If I then add permissions for my user account on this system, I get an error, however I am then able to see the subkeys and values of the key. If I then try to access the subkeys, I get the same original errors. If I repeat the process for each subkey, I can see their values and subkeys, and so on, but of course this gets to be incredibly annoying and time-consuming (especially since the Identities key has a lot of subkeys). Is there an easier/temporary/more correct way to dump a key so that I can import it into my backup system?

    Read the article

  • Is my current htaccess setting hurting SEO?

    - by user656002
    I have a site that I redirect to https. I do this to leverage a wildcard SSL certificate for my password-protected pages. Everything seems to work fine in testing: for example, whether you type in http or www, you always get redirected to the SSL https site. That said, I have about 200-300 external backlinks - many high quality - yet Google Webmaster Tools (along with SEOmoz) shows I have just 4... Huh? I'm embarrassed to say I just discovered this. This has led me to hypothesize that maybe my settings in .htaccess are messed up, so Google isn't recognizing a link because it's recorded on another site as http instead of https. Maybe? At any rate, here is my simple .htaccess setting to 301 www to http (the https redirect must be done inside the virtual host file - I think). I don't have anything in the .htaccess file for https:

        RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]

    Like I said, everything works fine for the redirect over https, so I'd rather not screw up what works. On the other hand, something is very wrong with Google finding all my backlinks, so I need to fix something... I'm just wondering whether Google isn't picking up my backlinks from other websites that record me as http because I'm at https. Maybe Google doesn't care and it's some other issue. Am I barking up the right tree? If so, any quick fixes? Thanks as always!
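    If the goal is a single canonical https://example.com, one hedged option is to do both normalisations in one place, so every variant (http, www, or both) answers with a single 301 hop to the same URL - the form crawlers consolidate link credit against. A sketch for the .htaccess (example.com stands in for the real domain; since the existing vhost-level https redirect may already cover part of this, test with curl -I before relying on it):

        RewriteEngine On
        # Send plain-http and www requests straight to the canonical https host in one hop
        RewriteCond %{HTTPS} off [OR]
        RewriteCond %{HTTP_HOST} ^www\. [NC]
        RewriteRule ^(.*)$ https://example.com/$1 [L,R=301]

    Checking "curl -I http://www.example.com/somepage" should then show one 301 whose Location header is the final https URL; chains of two or more redirects (http -> https -> non-www, say) are one often-cited reason link tools under-count backlinks.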

    Read the article

  • Nvidia GTX 660m crashes games

    - by dcap
    I just recently bought a Lenovo Y580 with both Intel HD graphics and an Nvidia GTX 660M. It works great except for one thing: playing games. Every time I load a game, either through Steam or Games for Windows Live, the game ends up crashing. I've already talked with Lenovo tech support and they couldn't help, other than offering to send my new laptop in for repair, which would take 7 days. So before I do that, I thought I'd ask around. These are the games I've tested and what happens when they load:
    - Civilization V: the game loads fine, but once it gets into the game there's noticeable "tearing" and certain things flash. Within a minute of this, the game crashes. It does the same thing regardless of whether Vsync is on or off.
    - Total War: Shogun 2: the game gets to the menu screen. The background of the menu shows what is expected - a slideshow of in-game environments rendered on high settings. However, within 2 seconds of the menu loading, it crashes.
    - Age of Empires 3 (non-Steam): this game is several years old, so it should work fine on a brand-new laptop. However, the results are similar to Civilization V: noticeable "tearing", and after a few seconds it freezes/crashes.
    I've tested all these games with both the latest stable Nvidia driver (285) and the nightly build (307). In addition, the Nvidia control panel is set to use the dedicated graphics card for all programs. So is there anything I can do to fix this, or will I have to send it back to tech support for a week?

    Read the article
