Search Results

Search found 5842 results on 234 pages for 'break'.


  • Port 5357 TCP on Windows 7 professional 64 bit?

    - by Registered
    Is there a reason this port is open? A quick Nmap scan and a Nessus scan reveal it's open — why? Are there any ramifications if I close this port via the firewall rule set? Or does anyone here know more about this port besides what Google turns up? What I've found so far: 1) http://www.symantec.com/connect/blogs/who-left-tunnel-door-open-windows-firewall-vista-0 — I know the talk is about Vista, but I am pretty sure it's the same port on 7, also. 2) Port 5357 common notes: the port is vulnerable to info-leak problems allowing it to be accessed remotely by malicious authors (Web Services for Devices). I am blocking this; if I have issues I'll just re-enable it. Damn Windows. The rule in question is "Inbound rule for Network Discovery to allow WSDAPI Events via Function Discovery [TCP 5357]" — blocked until I break something, we'll see. Time to re-run Nmap and Nessus: the Nmap scan shows 0 open ports after closing port 5357, and Win7 still works for now; one more scan with Nessus just to make sure all is well.
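
    For anyone wanting to reproduce the block-and-verify steps above, a minimal sketch (assuming an elevated command prompt on the Windows 7 box and nmap run from another machine on the LAN; the rule name is arbitrary):

        netsh advfirewall firewall add rule name="Block WSDAPI 5357" dir=in action=block protocol=TCP localport=5357
        nmap -p 5357 <win7-host-ip>

    If Network Discovery or WSD printing breaks afterwards, netsh advfirewall firewall delete rule name="Block WSDAPI 5357" restores the previous behaviour.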

    Read the article

  • Django apache + mod_wsgi with virtualenv

    - by ArgsKwargs
    I have some questions about running multiple Django sites on a VPS. I have a server that uses openPanel to automatically create VirtualHosts within apache2. My ideal situation is to have multiple virtualenvs with different dependencies installed, so the Python dist-packages directory isn't contaminated across different Django sites. For example:
        /home/user/virtualenv1
        /home/user/virtualenv2
    My Django applications reside at /var/www, for example:
        /var/www/djangosite1
        /var/www/djangosite2
    Now I've read up on the openPanel docs and figured the best thing to do is create a django.conf file inside the mydomain.com.inc folder, which looks something like this:
        /etc/apache2/openpanel.d/mydomain.com.inc/django.conf
        DocumentRoot /var/www/djangosite1/project
        WSGIScriptAlias / /var/www/djangosite1/project/wsgi.py
        WSGIDaemonProcess mydomain python-path=/home/user/virtualenv1/lib/python2.6/site-packages
        <Directory /var/www/djangosite1/project>
            Order allow,deny
            Allow from all
        </Directory>
        Alias /static /var/www/djangosite1/project/static-root
    Now my problem is that this setup seems unable to find the virtualenv's site-packages, so none of the dependencies installed in the given virtualenv are recognized. Also, commenting out this line doesn't seem to break or change a thing:
        WSGIDaemonProcess mydomain python-path=/home/user/virtualenv1/lib/python2.6/site-packages
    For example:
        > service apache2 start
        ImportError: No module named South
    When I install South outside the virtualenv, everything works.
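
    One thing worth checking, sketched below: a WSGIDaemonProcess directive only takes effect if the application is actually assigned to that daemon group, which may explain why commenting it out changes nothing. A hedged sketch, reusing the paths from the question and assuming mod_wsgi daemon mode is available:

        WSGIDaemonProcess mydomain python-path=/home/user/virtualenv1/lib/python2.6/site-packages
        WSGIProcessGroup mydomain
        WSGIScriptAlias / /var/www/djangosite1/project/wsgi.py

    Without the WSGIProcessGroup line the application runs in embedded mode and never sees the daemon's python-path, so the virtualenv's packages stay invisible.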

    Read the article

  • grep --color=auto with -i option disables the matching text color, why?

    - by emptyset
    I was messing around with grep and put this in my .zshenv: export GREP_OPTIONS="--color=auto" export GREP_COLORS='mt=1;34' I was bonking my head on the keyboard and changing GREP_COLORS around for a minute trying to figure out why the folder colors were working, but the matching text wasn't. I was doing this: $ grep -R -n -i -e "functionFoo\(" --include=*.cs --exclude-dir=Logs * The line number and file names were set with the default colors, but the matching text wasn't. After spending way too much time, I thought to do this: $ grep -R -n -e "functionFoo\(" --include=*.cs --exclude-dir=Logs * (I removed the -i option.) That's all it took to get the matching text to correctly show up in bold blue. This is a Cygwin on Vista setup, with rxvt running zsh. Any idea why grep colors would break on specifying a case-insensitive match? Update: Under cygwin 1.7, it's a little bit better - case insensitive search works correctly, but it only highlights the word that matches the expression exactly. In other words, "FunctionFoo" highlights "FunctionFoo" but not "functionFoo" and vice versa. Probably a grep issue so I'll be submitting it to that list.
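
    A quick way to test the highlighting outside of any particular repository, sketched below (per the GNU grep manual, the ms key colours the matching text on selected lines, and --color=always forces colour even through a pipe):

        export GREP_COLORS='ms=1;34'
        printf 'FunctionFoo\nfunctionFoo\n' | grep --color=always -i functionfoo

    If both lines come back highlighted in bold blue, -i handling is fine on that grep build and the problem is specific to the older Cygwin grep described above.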

    Read the article

  • SQL Server Read Locking behavior

    - by Charles Bretana
    SQL Server Books Online says that "Shared (S) locks on a resource are released as soon as the read operation completes, unless the transaction isolation level is set to repeatable read or higher, or a locking hint is used to retain the shared (S) locks for the duration of the transaction." Assuming we're talking about a row-level lock, with no explicit transaction, at the default isolation level (Read Committed), what does "read operation" refer to? The reading of a single row of data? The reading of a single 8K I/O page? Or the complete SELECT statement in which the lock was created, no matter how many other rows are involved? NOTE: The reason I need to know this is that we have a several-second read-only SELECT statement generated by a data-layer web service, which creates page-level shared read locks and generates a deadlock by conflicting with row-level exclusive update locks from a replication process that keeps the server updated. The SELECT statement is fairly large, with many sub-selects, and one DBA is proposing that we rewrite it to break it up into multiple smaller statements (shorter-running pieces), "to cut down on how long the locks are held". That assumes the shared read locks are held until the complete SELECT statement has finished; if that is wrong (if locks are released as soon as the row, or the page, has been read) then that approach would have no effect whatsoever.
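
    For reference, the "locking hint" case from the quoted passage looks something like the sketch below (table and column names are made up for illustration only); HOLDLOCK keeps the shared locks until the surrounding transaction ends instead of releasing them as the rows are read:

        BEGIN TRANSACTION;
        SELECT OrderId, Total
        FROM dbo.Orders WITH (HOLDLOCK)   -- shared locks retained for the whole transaction
        WHERE CustomerId = 42;
        -- locks released here, at COMMIT, rather than as each row is read
        COMMIT;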

    Read the article

  • How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment [closed]

    - by lmttag
    Possible Duplicate: How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment We have an ASP.NET Web API server that serves up a SQL Server data driven website. The API uses JSON to transfer data from SQL Server to the front end. We need to move it to an internal production environment (nothing will be exposed on the public Internet) and we’re having problems - or just not understanding what needs to be done. There are two domains: The corporate domain - where all users login normally. The process domain - contains the database the Web API needs to access. The IT staff wants to put a DMZ between the two domains to house the IIS app and shield the users on the corporate domain from having access into the process domain directly. The ideal configuration is: corp domain (end users) <–> firewall (open port 80) <–> DMZ (web server running IIS) <–> firewall (open port 80 or 1433????) <–> process domain (IIS for Web API and SQL Server) We don’t really understand how to deploy our browser/Web API application in this scenario. Do we need to break up our application so that all the client code is on the IIS server in the DMZ, while the Web API gets installed on the server in the process domain? Does the entire app (client code and Web API) stay together on the IIS server in the DMZ, which then somehow accesses the SQL Server instance to get data? From the IIS server and app in the DMZ, would you simply access the Web API on the server in the process domain by going to http://server/appname/api/getitmes? In the second firewall between the DMZ and the process domain, would you have to open port 1433 or just port 80 since the Web API is a HTTP endpoint? Or, is there some better way of deployment (i.e., how ASP.NET Web API single page applications written all in HTML5 and JavaScript supposed to be deployed to production environments?)? NB: The servers are Win2k8 R2, SQL Server 2k8 R2, and IIS 7.5.

    Read the article

  • How do I move a linked file on Unix?

    - by r3mbol
    I have a bunch of files in one directory and links to each of those files in another directory, so ls -l looks something like this:
        lrwxrwxrwx 1 rembol rembol   89 Jan 25 10:00 copyright.txt -> /home/rembol/solr/target/deploy/data/core/copyright.txt
        lrwxrwxrwx 1 rembol rembol   92 Jan 25 10:00 jar-versions.xml -> /home/rembol/solr/target/deploy/data/core/jar-versions.xml
        lrwxrwxrwx 1 rembol rembol   85 Jan 25 10:00 lgpl.html -> /home/rembol/solr/target/deploy/data/core/lgpl.html
        lrwxrwxrwx 1 rembol rembol   79 Jan 25 10:00 lib -> /home/rembol/solr/target/deploy/data/core/lib
        lrwxrwxrwx 1 rembol rembol   87 Jan 25 10:00 readme.html -> /home/rembol/solr/target/deploy/data/core/readme.html
        drwxr-xr-x 3 rembol rembol 4096 Jan 25 10:00 server
        drwxr-xr-x 2 rembol rembol 4096 Jan 25 10:00 startup
    Now I want to move those linked files from /home/rembol/solr/target/deploy to /home/rembol/output/. If I do that by simply calling mv, the links will break. I don't want to re-link each file separately, because there are hundreds of them (they are generated automatically). Is there some clever way to move linked files, rather than writing a script that unlinks, moves and relinks recursively for each file in each subdirectory?
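
    One possible approach, sketched below: move the target tree first, then rewrite every symlink that still points at the old location. The directory holding the links is a placeholder here; everything else reuses the paths from the question:

        old=/home/rembol/solr/target/deploy
        new=/home/rembol/output
        mv "$old" "$new"   # assumes /home/rembol/output does not already exist
        find /path/to/link/directory -type l | while read -r link; do
            target=$(readlink "$link")
            case $target in
                "$old"/*) ln -sfn "$new${target#$old}" "$link" ;;
            esac
        done

    ln -sfn replaces each link in place, so the loop is safe to re-run; using relative symlinks in the first place would avoid the problem entirely.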

    Read the article

  • phpmyadmin “Forbidden: You don't have permission to access /phpmyadmin on this server.”

    - by Caterpillar
    I need to modify the file /etc/httpd/conf.d/phpMyAdmin.conf in order to allow remote users (not only localhost) to login # phpMyAdmin - Web based MySQL browser written in php # # Allows only localhost by default # # But allowing phpMyAdmin to anyone other than localhost should be considered # dangerous unless properly secured by SSL Alias /phpMyAdmin /usr/share/phpMyAdmin Alias /phpmyadmin /usr/share/phpMyAdmin <Directory "/usr/share/phpMyAdmin/"> Options Indexes FollowSymLinks MultiViews AllowOverride all Order Allow,Deny Allow from all </Directory> <Directory /usr/share/phpMyAdmin/setup/> <IfModule mod_authz_core.c> # Apache 2.4 <RequireAny> Require ip 127.0.0.1 Require ip ::1 </RequireAny> </IfModule> <IfModule !mod_authz_core.c> # Apache 2.2 Order Deny,Allow Allow from All Allow from 127.0.0.1 Allow from ::1 </IfModule> </Directory> # These directories do not require access over HTTP - taken from the original # phpMyAdmin upstream tarball # <Directory /usr/share/phpMyAdmin/libraries/> Order Deny,Allow Deny from All Allow from None </Directory> <Directory /usr/share/phpMyAdmin/setup/lib/> Order Deny,Allow Deny from All Allow from None </Directory> <Directory /usr/share/phpMyAdmin/setup/frames/> Order Deny,Allow Deny from All Allow from None </Directory> # This configuration prevents mod_security at phpMyAdmin directories from # filtering SQL etc. This may break your mod_security implementation. # #<IfModule mod_security.c> # <Directory /usr/share/phpMyAdmin/> # SecRuleInheritance Off # </Directory> #</IfModule> When I get into phpmyadmin webpage, I am not prompted for user and password, before getting the error message: Forbidden: You don't have permission to access /phpmyadmin on this server. My system is Fedora 20
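
    Worth noting: Fedora 20 ships httpd 2.4, where the old Order/Allow directives only work through mod_access_compat and the stock phpMyAdmin.conf restricts access to localhost with Require local. A hedged sketch of a 2.4-style access block for the main phpMyAdmin directory (same file structure as above, access rules only):

        <Directory "/usr/share/phpMyAdmin/">
            AllowOverride all
            <IfModule mod_authz_core.c>
                # Apache 2.4
                Require all granted
            </IfModule>
            <IfModule !mod_authz_core.c>
                # Apache 2.2
                Order Allow,Deny
                Allow from all
            </IfModule>
        </Directory>

    followed by systemctl restart httpd. Restricting Require to specific networks instead of "all granted" is safer if the server is reachable from outside.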

    Read the article

  • Switch to IPv6 and get rid of NAT? Are you kidding?

    - by Ernie
    So our ISP has set up IPv6 recently, and I've been studying what the transition should entail before jumping into the fray. I've noticed three very important issues: Our office NAT router (an old Linksys BEFSR41) does not support IPv6. Nor does any newer router, AFAICT. The book I'm reading about IPv6 tells me that it makes NAT "unnecessary" anyway. If we're supposed to just get rid of this router and plug everything directly to the Internet, I start to panic. There's no way in hell I'll put our billing database (With lots of credit card information!) on the internet for everyone to see. Even if I were to propose setting up Windows' firewall on it to allow only 6 addresses to have any access to it at all, I still break out in a cold sweat. I don't trust Windows, Windows' firewall, or the network at large enough to even be remotely comfortable with that. There's a few old hardware devices (ie, printers) that have absolutely no IPv6 capability at all. And likely a laundry list of security issues that date back to around 1998. And likely no way to actually patch them in any way. And no funding for new printers. I hear that IPv6 and IPSEC are supposed to make all this secure somehow, but without physically separated networks that make these devices invisible to the Internet, I really can't see how. I can likewise really see how any defences I create will be overrun in short order. I've been running servers on the Internet for years now and I'm quite familiar with the sort of things necessary to secure those, but putting something Private on the network like our billing database has always been completely out of the question. What should I be replacing NAT with, if we don't have physically separate networks?
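
    For what it's worth, the usual answer to the last question is a stateful firewall doing default-deny on forwarded traffic, which provides the same inbound protection NAT gave as a side effect. A rough ip6tables sketch, assuming a Linux box routing between the LAN and the ISP (interface names are placeholders):

        ip6tables -P FORWARD DROP
        ip6tables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        ip6tables -A FORWARD -i eth_lan -o eth_wan -j ACCEPT

    Internal hosts then stay unreachable from the Internet unless a rule explicitly allows them, even though their addresses are globally routable.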

    Read the article

  • After a few days of the server running fine with nginx, it starts throwing 499 and 502

    - by Abhay Kumar
    Nginx starts throwing 499 and 502 after running fine for a few days; the website is a Rails app using thin as the web server. Restarting Nginx does not seem to help. Below is the Nginx config under sites-enabled:
        upstream domain1 {
            least_conn;
            server 127.0.0.1:3009;
            server 127.0.0.1:3010;
            server 127.0.0.1:3011;
        }
        server {
            listen 80; # default_server;
            server_name xyz.com *.xyz.com;
            client_max_body_size 5M;
            access_log /home/ubuntu/www/xyz/current/log/access.log;
            root /home/ubuntu/www/xyz/current/public/;
            index index.html;
            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_read_timeout 150;
                if (!-f $request_filename) {
                    proxy_pass http://domain1;
                    break;
                }
            }
        }
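
    A quick triage sketch for this kind of setup (log path taken from the config above, thin ports assumed from the upstream block): 499 means the client gave up waiting, 502 means no thin worker answered, so checking the workers directly usually narrows it down:

        tail -n 50 /home/ubuntu/www/xyz/current/log/access.log
        ps aux | grep '[t]hin'
        curl -sv -o /dev/null http://127.0.0.1:3009/

    If the curl to a thin port hangs, the problem is in the Rails/thin layer (stuck workers, long requests) rather than in the Nginx config itself.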

    Read the article

  • Synchronize two directories on linux pc

    - by Gab
    I need a distributed filesystem (or a synchronization tool) that is capable of keeping a directory synchronized across 4 PCs. My requirements are:
        - offline access (data must be available offline on each PC)
        - preserve execution rights: some files are marked executable on a Linux partition, and this flag should be replicated
        - efficient sync strategy: some of my files are 20GB and are changed quite often, but in very small parts (VirtualBox images); delta transmissions are welcome
        - efficient handling of space: no history for files, and files shouldn't be copied to temp directories "just in case you break it"
        - it must propagate deletions of files
        - modifications can happen on any of the 4 PCs and should be propagated when the other PCs are connected
    Other specs of my setup: sync is over a LAN, and the total amount of data to be synced is around 180GB, in some ten thousand files. Changes are small, but can happen in big files. At the moment I'm interested in a Linux-only solution. Conflicts either don't happen or are solved with "last one wins". I haven't found any good solution. I've been trying:
        - unison: the only one working at the moment, but during the hashing phase it hangs my PC for some minutes, disk light steady on
        - Sparkleshare: doesn't handle large files nicely, and it keeps a history of all your changes that grows indefinitely. They promise it will be fixed in the next releases, but at the moment it still doesn't fit my needs
        - owncloud: keeps a history of each file I change
        - coda? (help! I couldn't set it up correctly!)
        - git-annex assistant: transforms all your files into symlinks and marks the original file as read-only ("just in case you make a mistake while you modify it"!). Before you edit a file you have to issue a special command, "git-annex unlock", which creates a local copy of the file, and you have to remember to lock it again if you want it synchronized.
    What to try next?

    Read the article

  • Using Windows 8 as a webserver

    - by Jason
    I have a few hobby websites that I currently host on CentOS 6 — Apache, mail serving, PHP, MySQL, nothing special. In the past I used Windows XP for this same task, for years, and I was OK. I switched to Linux and for the last few years it has been such a pain: updates break, and certain apps only support certain distros unless you compile from source. It keeps me from working on my hobby sites because I am always fixing something. With Windows I locked it down, ran a hardware firewall and packet analyser, kept up on updates and A/V, and never had a problem. I don't allow RDC from outside the local LAN, no FTP is open, and OpenSSH runs on an obscure port. I am considering switching to Windows 8 (since it is a cheaper license than Windows 7) and running Apache, HMailServer, PHP, and MySQL, just like my CentOS install. My questions: I am not familiar with Windows 8 — can the above be done like on XP? Are there any new security restrictions, or anything in the OS preventing this? The machine is an Athlon 64 X2 with 32GB of RAM; will Windows 8 see all of the RAM? Technically the machine came with Windows 7, and there is a serial number on it, but I am sure I wiped the Windows 7 recovery partition when I switched to Linux.

    Read the article

  • How can I use wildcards in an Nginx map directive?

    - by Ian Clelland
    I am trying to use Nginx to served cached files produced by a web application, and have spotted a potential problem; that the url-space is wide, and will exceed the Ext3 limit of 32000 subdirectories. I would like to break up the subdirectories, making, say, a two-level filesystem cache. So, where I am currently caching a file at /var/cache/www/arbitrary_directory_name/index.html I would store that instead at something like /var/cache/www/a/r/arbitrary_directory_name/index.html My trouble is that I can't get try_files, or even rewrite to make that mapping. My searching on the subject leads me to believe that I need to do something like this (heavily abbreviated): http { map $request_uri $prefix { /aa* a/a; /ab* a/b; /ac* a/c; ... /zz* z/z; } location / { try_files /var/cache/www/$prefix/$request_uri/index.html @fallback; # or # if (-f /var/cache/www/$prefix/$request_uri/index.html) { # rewrite ^(.*)$ /var/cache/www/$prefix/$1/index.html; # } } } But I can't get the /aa* pattern to match the incoming uri. Without the *, it will match an exact uri, but I can't get it to match just the first two characters. The Nginx documentation suggests that wildcards should be allowed, but I can't see a way to get them to work. Is there a way to do this? Am I missing something simple? Or am I going about this the wrong way?
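
    In case it helps anyone reading along: nginx's map does not do glob-style wildcards on arbitrary strings (only on hostnames), but it does accept regular expressions, so the whole table above can collapse to a single captured pattern. A hedged sketch, assuming a reasonably recent nginx (older versions restrict what may appear in map values):

        map $request_uri $prefix {
            default                              "";
            "~^/(?<c1>[a-z0-9])(?<c2>[a-z0-9])"  "$c1/$c2";
        }

    $prefix then becomes the first two characters of the URI joined by a slash, which matches the /var/cache/www/a/r/arbitrary_directory_name/index.html layout described above.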

    Read the article

  • Permissions in OS X for iTunes library with multiple users

    - by John
    I currently have a lot of music on an external drive and my iTunes set up from there. However, periodically, when the external drive isn't connected, iTunes will default back to the library location of my home directory user path. I don't want to mess with an external drive, as my Mac's HD is large enough to house the music collection. However, I have 4 family members – all with their own logins – using this same gob of music. I don't want four copies of the library, only one with all libraries referencing it. So, what I want to do is: Move all music files to a shared directory at /Macintosh HD/users/music. I created this directory and adjusted permissions, so all four users can read and write to this directory. Get all four accounts to reference this library instead of the external or local home locations I am hoping I can just check the box to keep library organized in my account, which is the admin and let iTunes move it all. Then delete current libraries for each account and re-add from the new shared location. Will the iTunes organization process cause permissions issues either by setting permissions to all the files access to my account only or write permissions or any other 'gotcha'? I am having a hard time coming up with a smooth solution that won't break everything and cause me to have mega duplicates or access issues. I would prefer not to do any XML library file editing if possible. Am I dreaming?
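
    On the permissions question, a minimal sketch of the plain Unix-permissions route (the path and group are assumptions — the question's /Macintosh HD/users/music corresponds to something like /Users/Shared/Music, and all four accounts are assumed to be in the staff group):

        sudo chown -R :staff /Users/Shared/Music
        sudo chmod -R g+rwX /Users/Shared/Music

    The caveat is exactly the 'gotcha' raised above: files iTunes writes later are created with the writing user's default permissions, so either each account only needs read access to the media, or an inherited ACL (chmod +a on OS X) is needed to keep newly organized files group-writable.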

    Read the article

  • Macros in Excel 2010 hang

    - by Ahmad
    I have a spreadsheet with several macros. Previously, using Excel 2007, a user clicks a button and everything works as expected (calculations, some email sending and file I/O); the expected run-time is about 90 seconds. The spreadsheet is an xlsm file created with Excel 2007. With Excel 2010, however, the same user process results in a non-responsive Excel and forces us to kill Excel from Task Manager. Some notes that I have gathered so far in trying to debug this issue: 1) When monitoring CPU usage, it seems that Excel does start the macro — CPU usage increases as expected to about 47% for a few seconds, then Excel.exe drops to 0% usage and I have a non-responsive Excel (even after 1 hour). 2) If I set debug breakpoints across modules and different functions and step through the code (after clicking the button), the process works as expected, albeit much slower, and there were no exceptions. I am at a complete loss as to what the issue may be. I initially thought it might be the add-in that is being used, but that was debunked by point 2. This seems to be a very odd situation. I can provide more information if required, but I'm at my wits' end about what the root cause could be. I need help diagnosing and resolving this issue.

    Read the article

  • Nginx terminate SSL for wordpress

    - by Mike
    I have a bit of a problem. We run a WordPress blog behind an nginx proxy and are looking to terminate the SSL on the nginx side. Our current nginx config is:
        upstream admin_nossl {
            server 192.168.100.36:80;
        }
        server {
            listen 192.168.71.178:443;
            server_name host.domain.com;
            ssl on;
            ssl_certificate /etc/nginx/wild.domain.com.crt;
            ssl_certificate_key /etc/nginx/wild.domain.com.key;
            ssl_session_timeout 5m;
            ssl_protocols SSLv2 SSLv3 TLSv1;
            ssl_prefer_server_ciphers on;
            ssl_session_cache shared:SSL:10m;
            ssl_ciphers RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
            location / {
                proxy_read_timeout 2000;
                proxy_next_upstream error;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_max_temp_file_size 0;
                proxy_pass http://admin_nossl;
                break;
    It just does not seem to work. I can hit https://host.domain.com, but it quickly switches back to non-secured from what I can see. Any pointers?
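
    A hedged guess at the "switches back to non-secured" part: WordPress builds redirects and links from what PHP sees, and behind an SSL-terminating proxy PHP only ever sees plain HTTP, so it sends visitors back to http://. The common workaround is to forward the original scheme and teach WordPress about it — a sketch (the wp-config.php lines are the standard reverse-proxy snippet from the WordPress documentation, not anything specific to this site):

        # nginx: in the location / block, alongside the other proxy_set_header lines
        proxy_set_header X-Forwarded-Proto $scheme;

        // PHP: near the top of wp-config.php on the backend server
        if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
            $_SERVER['HTTPS'] = 'on';
        }

    The WordPress site and home URLs also need to use https:// for links and assets to come out secure.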

    Read the article

  • Vim clobbering scrollback buffer outside of screen

    - by dotancohen
    If I'm not in a screen session, then when exiting Vim I get a bash prompt below the remnants of the Vim window. A side effect of this is that my scrollback buffer is clobbered, especially if I have paged through a long file in Vim. The problem only occurs if I'm not in screen; inside a screen window, Vim exits to show the bash prompt and the previous lines just as before. I tried adding set t_ti= t_te= to my .vimrc to fix the problem, but the only effect it had was to break Vim such that the problem occurs inside screen as well as outside, so I removed the line. For good measure, I do have altscreen on in .screenrc. This is on Ubuntu Server 12.04.1 LTS, with Bash 4.2.24, Screen 4.00, and Vim 7.3 (not vim-tiny), accessed over SSH from Cygwin version NT-6.1-WOW64 on a Windows 7 laptop. Thanks. EDIT: Note that from the same Cygwin install I can SSH into a different server (CentOS) and there Vim does not clobber the scrollback buffer, so I do not suspect a Cygwin issue. The CentOS machine does not have screen installed, and I did not have to add set t_ti= t_te= to .vimrc.
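
    For anyone debugging the same thing: whether Vim can restore the screen on exit depends on the smcup/rmcup (alternate screen) capabilities of whatever $TERM resolves to in each environment, which would also explain why the behaviour differs between the bare shell, screen, and the CentOS box. A small check, run in the problematic session (nothing here is specific to this setup):

        echo $TERM
        infocmp -1 "$TERM" | grep -E 'smcup|rmcup'

    If those capabilities are missing from the terminfo entry the Ubuntu server picks for the Cygwin terminal, Vim never switches to the alternate screen and its output stays in the scrollback.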

    Read the article

  • Using awk to split text file every 10,000 lines

    - by Sneaky Wombat
    I have a large gzip'd text file. I'd like to do something like:
        zcat BIGFILE.GZ | awk (snag 10,000 lines and redirect to...) | gzip -9 smallerPartFile.gz
    In the awk part up there, I basically want it to take 10,000 lines, send them to gzip, and then repeat until all lines in the original input file are consumed. I found a script that claims to do this, but when I run it on my files and then diff the original against the parts split and merged back together, lines are missing. So something is wrong with the awk part and I'm not sure which part is broken. Here's the code — can someone tell me why this doesn't yield files that can be split, merged, and then diff'd against the original successfully?
        # Generate files part0.dat.gz, part1.dat.gz, etc.
        # restore with: zcat foo* | gzip -9 > restoredFoo.sql.gz (or something like that)
        prefix="foo"
        count=0
        suffix=".sql"
        lines=10000 # Split every 10000 line.
        zcat /home/foo/foo.sql.gz |
        while true; do
            partname=${prefix}${count}${suffix}
            # Use awk to read the required number of lines from the input stream.
            awk -v lines=${lines} 'NR <= lines {print} NR == lines {exit}' >${partname}
            if [[ -s ${partname} ]]; then
                # Compress this part file.
                gzip -9 ${partname}
                (( ++count ))
            else
                # Last file generated is empty, delete it.
                rm -f ${partname}
                break
            fi
        done
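
    A likely explanation for the missing lines, plus a single-pass alternative sketched with the same file names as above: each awk invocation in the loop reads from the shared pipe with buffered I/O, so it typically consumes more than the 10,000 lines it prints before exiting, and whatever it over-read is lost to the next iteration. Doing the whole split in one awk process (or with GNU split) avoids that:

        zcat /home/foo/foo.sql.gz | awk -v lines=10000 -v prefix=foo -v suffix=.sql '
            {
                part = prefix int((NR - 1) / lines) suffix
                print > part
                if (NR % lines == 0) close(part)   # close each finished chunk
            }'
        gzip -9 foo[0-9]*.sql

    With GNU coreutils, zcat file.gz | split -l 10000 -d - foo followed by gzipping the parts does the same job.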

    Read the article

  • nginx giving a 404 when using set in an if-block

    - by ba
    I've just started using nginx and I'm now trying to make it play nice with the WordPress plugin WP-SuperCache, which writes static files of my blog posts. To serve the static file I need to make sure that certain cookies aren't set, that it's not a POST request, and that the cached/static file exists. I found this guide and it seems like a good fit. But I've noticed that as soon as I try to set something inside an if, my site starts giving 404s on a URL that isn't rewritten. The location block of the configuration:
        location /blog {
            index index.php;
            set $supercache_file '';
            set $supercache_ok 1;
            if ($request_method = POST) {
                set $supercache_ok 0;
            }
            if ($http_cookie ~* "(comment_author_|wordpress|wp-postpass_)") {
                set $supercache_ok '0';
            }
            if ($supercache_ok = '1') {
                set $supercache_file '$document_root/blog/wp-content/cache/supercache/$http_host/$1/index.html.gz';
            }
            if (-f $supercache_file) {
                rewrite ^(.*)$ $supercache_file break;
            }
            try_files $uri $uri/ @wordpress;
        }
    The above doesn't work, and if I remove all the ifs above and instead add
        if ($http_host = 'mydomain.tld') {
            set $supercache_ok = 1;
        }
    then I get the exact same message in the error log, namely:
        2010/05/12 19:53:39 [error] 15977#0: *84 "/home/ba/www/domain.tld/blog/2010/05/blogpost/index.php" is not found (2: No such file or directory), client: <ip>, server: domain.tld, request: "GET /blog/2010/05/blogpost/ HTTP/1.1", host: "domain.tld", referrer: "http://domain.tld/blog/"
    Remove the if and everything works as it should. I'm stymied, no idea at all where I should start searching. =/
        ba@cell: ~> nginx -v
        nginx version: nginx/0.7.65
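
    A tangential sketch that sidesteps the if blocks inside the location entirely, based on the commonly circulated WP-Super-Cache recipe for nginx (variable names here are illustrative, and the cache path reuses the one from the question): build the cache key with set/if at the server level, then let try_files do the existence check instead of if (-f ...) plus rewrite:

        set $cache_uri $request_uri;
        if ($request_method = POST) { set $cache_uri 'uncached'; }
        if ($http_cookie ~* "(comment_author_|wordpress|wp-postpass_)") { set $cache_uri 'uncached'; }

        location /blog {
            index index.php;
            try_files /blog/wp-content/cache/supercache/$http_host/$cache_uri/index.html $uri $uri/ @wordpress;
        }

    Keeping set out of the location's if blocks also avoids most of the behaviour covered by the nginx "if is evil" wiki page.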

    Read the article

  • How Do I Use Multiple Versions of OpenSSL ... One for Apache and one for PHP

    - by Ken S.
    I have an Apache 2.2 (self-compiled version) server that is getting dinged during a PCI scan because it does not support TLS 1.1 or 1.2 ciphers. After some digging I found that the installed version of OpenSSL (0.9.8e) does not contain the newest TLS ciphers. So I went and downloaded and compiled the latest version of OpenSSL (1.0.1c) and have it installed in an alternate location within /opt so it wouldn't interfere with the installed version. What I would like to do is to compile Apache against the 1.0.1 libraries and keep the system-installed libraries for use with PHP, cURL, openssh, etc. I'm hoping that doing it this way will allow Apache to use the newest TLS but not break anything with any other programs that require the old libraries. I thought I could do this by adding an entry in to /etc/ld.so.conf that pointed to the new libraries, but I think this will conflict with the existing ones. i.e. two references to libcrypto could cause everything to have issues. The main reason for doing this is because of issues with PHP cURLing to external servers and having issues with the latest OpenSSL libs thus requiring edits to our PHP code. Would love some guidance on how best to accomplish this.
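
    One way to get the "Apache against the new OpenSSL, everything else against the old one" split without touching /etc/ld.so.conf is to bake the library path into the httpd binary at build time with an rpath. A rough sketch, assuming an httpd source build and the alternate OpenSSL living in /opt/openssl-1.0.1c (adjust paths to the actual layout):

        cd httpd-2.2.x
        ./configure --prefix=/usr/local/apache2 \
            --enable-ssl --with-ssl=/opt/openssl-1.0.1c \
            LDFLAGS="-Wl,-rpath,/opt/openssl-1.0.1c/lib"
        make && make install
        ldd /usr/local/apache2/bin/httpd /usr/local/apache2/modules/mod_ssl.so 2>/dev/null | grep -E 'libssl|libcrypto'

    The ldd check should show httpd (or mod_ssl.so, if built as a DSO) resolving libssl/libcrypto from /opt, while PHP, cURL, and openssh keep resolving the system 0.9.8e copies, since nothing global changes.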

    Read the article

  • Can't delete ntuser.dat file to remove profiles after reboot

    - by Matrix Mole
    I've run into an issue where some servers will not release the handle on the ntuser.dat file even after a reboot. Or, quite possibly, after the reboot the ntuser.dat file is getting re-loaded into memory. The user accounts are definitely not being accessed (some of them belong to users who have not been with the company in over a year). It seems to be on Windows 2003 servers, but I can't be 100% certain that there aren't some 2000 servers showing this issue as well. When I try to use Process Explorer or handle.exe from Sysinternals to kill the handle on these ntuser.dat files, the handle remains open and connected. Handle.exe even reports that the handle was broken while it remains in use. I've even taken ownership of the file and tried to kill the handle, to no effect (Windows shows I have ownership of the file, but still refuses to release the handle). I have looked in the registry to see if I can discover where the files may be getting loaded. Unfortunately, the username appears in too many places for me to be certain which one is actually loading their reg file into memory. Any suggestions on how I can either break the handle on the files, or prevent them from getting re-loaded after a reboot?

    Read the article

  • 85 Hz on the new driver looks the same as 75 Hz did on the previous one?

    - by jon
    I have an old Philips 107T5 CRT and an Nvidia graphics card. I used an old Nvidia driver (though it wasn't a 'legacy' one when I installed it) for a few years, but recently I decided to install a different Linux distribution. I used a 75 Hz refresh rate and 1024x768 resolution on my previous distribution. After I installed the new distribution I had to install an Nvidia driver, so I downloaded one from the Nvidia site (this time only the legacy driver supported my card, so I downloaded and installed that). It didn't automatically update xorg.conf, but I had a copy of my previous xorg.conf and used it. When I run X I can only choose 85 and 75 Hz, with 85 checked as the default. And here's what shocks me: that default 85 Hz looks identical to how 75 Hz looked on the previous driver (at least to me). I tried 75 Hz out of curiosity and it's too bright, hurts, etc. — but on the previous driver 75 Hz wasn't hurting my eyes. Why is it different? It's the same number after all, so it should always give the same result, right? That's my first question. Second question: is 85 Hz OK for that monitor model? Would it break it? I tried to find the optimal refresh rate for this model but couldn't find it.

    Read the article

  • Broken Vista. Can't open Windows settings.

    - by serena
    My neighbor has a Lenovo laptop with Windows Vista Home Basic. She's a noob and just uses the laptop for internet purposes. She said she had to shut Windows down improperly (some time ago, maybe 6 months) because of a system freeze. She realized there was something wrong with her Windows when she tried to open the Windows Update settings. I took a look at the system and found the following errors: when I click on Windows Update, a bare white window opens for a second and closes immediately. When I try to open Computer Properties, the same thing happens (Windows+Break doesn't work either). When I try to open the Bluetooth settings, the same thing happens. So Vista won't let me open any Windows settings, but installed programs work correctly (games, applications, etc.). She has no Windows Vista discs, since the laptop came with genuine Vista preinstalled, and she has no recovery discs either. I don't think there is a system restore point from the time the system was stable. Now what can we do to solve this big problem?

    Read the article

  • javascript doesn't seem to be able to post form data (nginx server w/ php-fpm)

    - by Jones
    The situation is this: I have an nginx server with php-fpm installed. All is well — the site scripts all work perfectly, and I am able to use plain HTML to POST form data and it works just fine. However, there seems to be some correlation between JavaScript, the POST method, and nothing happening, and I can't seem to determine the issue. Example: I have a user login widget that uses JavaScript to submit the fields and POST the data to a backend auth script, which returns a server message that then populates the login box with something like "Login Successful", followed by reloading the page to properly enable content. Problem is, nothing happens when you hit submit. I do know the setup works, because I had it working on Apache before migrating. Also, if it makes any difference, the server is an Amazon EC2 instance using the Amazon AMI. I really don't know where to start looking on this one, but below is my default.conf for the server:
        upstream backend_get {
            server 127.0.0.1:80 weight=1;
        }
        upstream backend_post {
            server 127.0.0.1:80 weight=1;
        }
        #Main website url
        server {
            listen 80;
            server_name server.com;
            #charset koi8-r;
            access_log logs/host.access.log main;
            error_log logs/host.error.log;
            location / {
                root /usr/share/nginx/html;
                index index.php index.html index.htm;
                if ($request_method = POST) {
                    proxy_pass http://backend_post;
                    break;
                }
            }
            location ~ \.php$ {
                #fastcgi_pass 127.0.0.1:9000;
                fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }
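
    A way to take the browser out of the equation, sketched with a hypothetical endpoint path since the real one isn't shown: replay the POST the JavaScript would send with curl and watch the logs named in the config above. If curl gets a sensible response, the nginx/php-fpm side is fine and the problem is in the JavaScript (or something like a cross-origin URL); if curl also stalls, it is worth noting that both upstreams point at 127.0.0.1:80 — this same nginx — so any POST that doesn't match the \.php$ location is proxied back to itself.

        curl -si -X POST -d 'user=test&pass=secret' http://server.com/path/to/auth.php
        tail -f logs/host.access.log logs/host.error.log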

    Read the article

  • Nginx: Loopback connection via PHP's getimagesize crashes server (Magento's CMS)

    - by Alex
    We were able to trace a problem that is crashing our NGINX server running Magento down to the following point. Background info: the Magento backend has a CMS function with a WYSIWYG editor, and this editor loads some pictures via a controller in Magento (cms/directive). When we set the NGINX error_log level to info, we get the following lines (line break inserted for better readability):
        2012/10/22 18:05:40 [info] 14105#0: *1 client closed prematurely connection, so upstream connection is closed too while sending request to upstream, client: XXXXXXXXX, server: test.local, request: "GET index.php/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL,,/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9024", host: "test.local"
    When checking the code in the debugger, the following call never returns (in Varien_Image_Adapter_Abstract::getMimeType(), where $this->_fileName is http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif and $_SERVER['REQUEST_URI'] is http://test.local/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL):
        list($this->_imageSrcWidth, $this->_imageSrcHeight, $this->_fileType, ) = getimagesize($this->_fileName);
    The requested filename is a URL to the same server that is running the script — a link to a static .gif that does not exist. Sample URL: http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif. Once the above line has executed, no subsequent request to the NGINX server gets a response any more. After waiting around 10 minutes, the NGINX server starts answering requests again. I tried to reproduce the error with a simple test script that only calls getimagesize() with the given URL, but that does not crash — it simply leads to an exception saying that the URL could not be loaded (which is fine, as the URL is wrong).
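
    A hypothesis worth testing, sketched below with illustrative values only: if the PHP-FPM pool behind fastcgi://127.0.0.1:9024 has very few workers, the worker running getimagesize() on a loopback URL to test.local is waiting for another worker from the same pool to serve that URL, and once the pool is exhausted every request blocks until some timeout expires (the roughly-10-minute recovery would fit a long socket or FastCGI timeout). Two places to look:

        ; in the php-fpm pool config (path varies by distro)
        pm = dynamic
        pm.max_children = 8      ; with only 1-2 children a loopback request can deadlock the pool

        // or, short-circuit the wait in a standalone test script
        ini_set('default_socket_timeout', 5);
        var_dump(@getimagesize('http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif'));

    If raising the worker count (or avoiding the HTTP round-trip by resolving the skin URL to a filesystem path) makes the hang disappear, that confirms it is the self-request, not getimagesize() itself.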

    Read the article

  • Why is XSP/mono giving me file/write errors with Lucene?

    - by acidzombie24
    I am using mono 2.6.7, xsp 2.8.? and I am not sure what else; I'll try downgrading xsp and whatever else. Today I noticed my server randomly gets HTTP Error 500, internal server error. I looked in my log files (http://www.pastie.org/1426236) and it appears that the problem is locking Lucene, which it does by creating a file called write.lock. This used to work with no problem, but now it gives me pain. It will fix itself if I refresh the page several times, but it will break just as easily. My other site stops working every time I restart Apache, and I need to delete the file by hand (at least it doesn't get the random 500 errors). However, now it just says
        Lock obtain timed out: NativeFSLock@/var/www/SITE_2/App_Data/LuceneIndex_a/write.lock
    no matter what. I even chmod -R 777 the directory and still no luck — write.lock doesn't even exist. I have no clue what's going on. Any ideas?
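
    One pattern that matches these symptoms: a worker process that died without disposing its IndexWriter leaves (or appears to hold) the lock, and the next writer then times out. If the app only ever runs one writer at a time, a guarded unlock at startup is a common workaround; a hedged C# sketch against the Lucene.Net API, using the index path from the error above:

        using Lucene.Net.Index;
        using Lucene.Net.Store;

        static class IndexLockGuard
        {
            // Call once at application start, before any IndexWriter is created.
            public static void ClearStaleLock(string indexPath)
            {
                var dir = FSDirectory.Open(new System.IO.DirectoryInfo(indexPath));
                if (IndexWriter.IsLocked(dir))
                {
                    // Safe only if no other process can legitimately hold the lock right now.
                    IndexWriter.Unlock(dir);
                }
            }
        }

    Usage would be something like IndexLockGuard.ClearStaleLock("/var/www/SITE_2/App_Data/LuceneIndex_a"); the longer-term fix is wrapping the writer in a using block (or Dispose/Close in a finally) so the stale lock never gets left behind in the first place.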

    Read the article
