Search Results

Search found 5172 results on 207 pages for 'stackoverflow podcast'.

Page 149/207 | < Previous Page | 145 146 147 148 149 150 151 152 153 154 155 156  | Next Page >

  • PHP crashing during OAuth scripts

    - by FunkyChicken
    I just installed Nginx 1.2.4 and PHP 5.4.0 (from SVN, running as PHP-FPM) on CentOS 5.8 (64-bit). The problem I have is that PHP crashes the moment I run any social OAuth scripts. I have tried to log into Facebook, Twitter and Google with various scripts that I know work on my other servers. When I load the scripts I get a 502 error from Nginx, and I find these errors in the logs: in the php-fpm log: WARNING: [pool www] child 23821 exited on signal 11 (SIGSEGV) after 1132.862984 seconds from start in the nginx log: ERROR: recv() failed (104: Connection reset by peer) while reading response header from upstream From what I can see, it goes wrong when PHP tries to make a request to any of the OAuth servers. https://github.com/mahmudahsan/PHP-SDK-3.0---Graph-API-base-Facebook-Connect-Tutorial-Source for example is one of the scripts that works perfectly on my other machines, but causes PHP to crash here. I found http://stackoverflow.com/questions/3616191/nginx-php-fpm-502-bad-gateway which seems to be a similar problem, but I cannot find a way to solve it. +++ UPDATE +++ I have now been doing some debugging in one of the scripts that is playing up. If you go to line 808 of http://pastebin.com/gSnzRtXb it runs the curl_exec() command, and that is the point where it crashes. If I put echo 'test'; exit; just above that line, it echoes correctly; if I put it below that line, PHP crashes. That means it is line 808 that causes the crash. So I made a very simple test script, http://pastebin.com/Rshnyhcm, which also uses curl_exec, but that runs just fine. So I started to dig deeper into that request from the Facebook script to see what values the $opts array contains as of line 806. The output of that array is http://pastebin.com/Cq9ffd3R. What the problem is, I still have no clue :(

    Read the article

  • How to configure IIS for SVG and web testing with Visual Studio?

    - by macias
    Let's say I have a simple web page with an SVG image in it: <img src="foobar.svg" alt="not working" /> If I make this page a static HTML page and view it directly, the SVG is displayed. If I type the address of the SVG itself -- it is displayed. But when I make this an .aspx page and launch it dynamically from Visual Studio, I get the alt text. If I type the address of the SVG (from localhost, not as a local file) -- the browser tries to download it instead of displaying it. I have already defined the MIME type in IIS (for the entire server -- "image/svg+xml") and restarted IIS. Same effect as before. Question: what more should I do? Update Wireshark won't work here (a documented limitation when sniffing localhost traffic), and I also tried RawCap, but it cannot trace my connection (odd); luckily Fiddler worked. From the client: GET http://127.0.0.1:1731/svg/document_edit.svg HTTP/1.1 Host: 127.0.0.1:1731 User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip, deflate Connection: keep-alive Answer from the server: HTTP/1.1 200 OK Server: ASP.NET Development Server/10.0.0.0 Date: Thu, 16 Feb 2012 11:14:38 GMT X-AspNet-Version: 4.0.30319 Cache-Control: private Content-Type: application/octet-stream Content-Length: 87924 Connection: Close <?xml version="1.0" encoding="UTF-8" standalone="no"?> <!-- Created with Inkscape (http://www.inkscape.org/) --> <svg xmlns: *** FIDDLER: RawDisplay truncated at 128 characters. Right-click to disable truncation. *** For the record, here is a useful Q&A for Fiddler: http://stackoverflow.com/questions/826134/how-to-display-localhost-traffic-in-fiddler-while-debugging-an-asp-net-applicati
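
    A quick way to compare what each server actually returns for the image, without a proxy, is a couple of lines of Python -- a sketch only; the second URL below is a placeholder for wherever the same file is served by full IIS:

        import urllib.request

        urls = [
            "http://127.0.0.1:1731/svg/document_edit.svg",  # Visual Studio dev server (from the capture above)
            "http://localhost/svg/document_edit.svg",       # hypothetical IIS-hosted path
        ]
        for url in urls:
            try:
                with urllib.request.urlopen(url) as resp:
                    print(url, "->", resp.headers.get("Content-Type"))
            except OSError as err:
                print(url, "->", err)

    If the dev-server URL reports application/octet-stream (as in the Fiddler capture) while the IIS URL reports image/svg+xml, the MIME mapping added in IIS is simply not being consulted by the server Visual Studio launches.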

    Read the article

  • How to access remote network resource from local machine

    - by jerluc
    I just configured VPN access successfully, so I can now connect to my workstation at work from my personal Linux box at home. The problem is that all of the dev files for a server I'm running locally are on my personal box and cannot be transferred to my workstation (at least not in any timely manner over this connection, given the amount of data, in addition to the many reconfigurations that would be required for the server to run even if I could somehow get the files across). So essentially, I am able to run my server locally on my personal computer; however, the data sources required for the back-end are accessible only from within the office's network. Is there some way for me to either access the data sources directly through the VPN connection, or, even if the route has to be a bit more convoluted, connect via VPN to my workstation and then somehow tunnel from the data sources through my workstation to my personal computer? I couldn't care less about the speed of the connection from my server to the data sources, since they will probably only be fetched a few times every hour or so. Thanks! Sorry if this is a stupid question and/or doesn't make any sense! (And sorry to anyone who read this on Stack Overflow; I posted it in the wrong area.)

    Read the article

  • Why isn't the Low Fragmentation Heap (LFH) enabled by default on Windows Server 2003?

    - by James Wiseman
    I've been investigating an issue with a production Classic ASP website running on IIS6 which seems indicative of memory fragmentation. One of the suggestions for how to ameliorate this came from Stack Overflow: How can I find why some classic asp pages randomly take a real long time to execute?. It suggested flipping a setting in the site's global.asa file to 'turn on' the Low Fragmentation Heap (LFH). The following code (with a registered version of the accompanying DLL) did the trick: Set LFHObj=CreateObject("TURNONLFH.ObjTurnOnLFH") LFHObj.TurnOnLFH() application("TurnOnLFHResult")=CStr(LFHObj.TurnOnLFHResult) (Really, the code isn't that important to the question.) The author of a linked post reported a seemingly magic resolution to this issue, and, reading around a little more, I discovered that this setting is enabled by default on Windows Server 2008. So, naturally, this left me a little concerned: why is this setting not enabled by default on 2003, and if it works in 2008, why has Microsoft not issued a patch to enable it by default on 2003? I suspect the answer to both is the same (if there is one). Obviously, we're testing it in a non-production environment, and running an array of metrics and comparisons to determine whether it does help us. But aside from this I'm really just trying to understand whether there's any technical reason why we should do this, or whether there are any gotchas that we need to be aware of.

    Read the article

  • Why does this loopback device creation malfunction?

    - by user50118
    The Stack Overflow people thought this was more appropriate here. I put it there because it is part of a program, but I can see their point of view, so here it is. At the bottom of the output you can see it failing. In fact, I'll put it here at the start too, because it is the problem I need to solve: [350591.924819] EXT4-fs (loop0): bad geometry: block count 9750806 exceeds size of device (9750168 blocks) I don't understand why the device is supposedly too small. I made this partition two days ago with normal fdisk; it was created and formatted with ext4, supplying no options other than the partition (/dev/sdb2) to format. The only explanation I can think of is that ext4 has the size of the partition wrong somehow, but that seems very unlikely. What is wrong with my math? The offset is correct, as you can see with the file command, and the size should be correct too, because End - Start comes to the number of sectors minus 1, just as it should (a disk starting on sector 1 and ending on sector 2 would give 2 - 1 = 1 and have two sectors). # sfdisk -luS /dev/sdb Disk /dev/sdb: 9729 cylinders, 255 heads, 63 sectors/track Units = sectors of 512 bytes, counting from 0 Device Boot Start End #sectors Id System /dev/sdb2 78295040 156296384 78001345 83 Linux # losetup -r -f --show -o $((78295040 * 512)) --sizelimit $((78001345 * 512)) /dev/sdb /dev/loop0 # file -s /dev/loop0 /dev/loop0: Linux rev 1.0 ext4 filesystem data (needs journal recovery) (extents) (large files) (huge files) # mount -o ro -t ext4 /dev/loop0 /mnt mount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so # dmesg | tail -n 1 [350591.924819] EXT4-fs (loop0): bad geometry: block count 9750806 exceeds size of device (9750168 blocks)
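
    A quick way to sanity-check that arithmetic is to redo it in a few lines of Python -- a sketch using only the sfdisk figures quoted above (the 4 KiB ext4 block size is an assumption, though it agrees with the kernel's numbers):

        SECTOR = 512           # bytes per sector, as sfdisk reports
        EXT4_BLOCK = 4096      # assumed ext4 block size for a partition this large

        start, end, nsectors = 78295040, 156296384, 78001345

        print("sectors from geometry:", end - start + 1)          # 78001345, matches #sectors
        device_blocks = nsectors * SECTOR // EXT4_BLOCK
        print("loop device size in ext4 blocks:", device_blocks)  # 9750168, matches the dmesg figure
        print("blocks the filesystem claims:", 9750806)
        print("shortfall in blocks:", 9750806 - device_blocks)    # 638

    The interesting part is that the kernel's figure (9750168 blocks) is exactly what the partition-table maths gives, so losetup is doing what it was asked; the filesystem simply records a block count larger than the partition it sits in.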

    Read the article

  • .tex file remains in use by process when batch file is triggered by .Rnw Sweave processing.

    - by drknexus
    This is a pretty specialized question. I'm using the Eclipse IDE in a Windows XP environment with the StatET plug-in so I can write R code as an R/Sweave document. This produces a .tex file that is then post-processed by pdflatex.exe. When I create the file as normal, everything works great (except that my file named russfnc2.Rnw seems to result in russfnc.pdf, even though pdflatex.exe in the console window correctly says that the output is being written to russfnc2.pdf). The big problem comes when I trigger a batch file from within my Rnw code. My goal here is to spawn a side process that waits for the PDF to be made and uploads it to the server. So the Rnw contains: if(file.exists("rsp.finalize.bat")) {system("rsp.finalize.bat",wait=FALSE,invisible=FALSE)} The batch file calls Rterm.exe to run a script: setwd("C:/theprojectdirectory") while(!file.exists("russfnc.pdf")) { Sys.sleep(1) } Sys.sleep(60) At the end of that script, I use a shell call to launch psftp.exe and upload the files. All of this works fine when I use my Eclipse profile to trigger Sweave... that is, unless I have that batch-file call at the end of the .Rnw. When it is there, I get the error message pdflatex.exe: Permission denied: c:\thepath\thetexfile.tex. After that, the .tex file (as far as XP is concerned) is in use by another process, and I have to reboot in order to delete it (and, of course, the PDF is not made). If I manually trigger the batch file after pdflatex.exe has done its thing, everything works fine. How can I make this work correctly using the tools I'm familiar with, viz. R and DOS-style batch files? I'm not sure if this is a SuperUser question or a StackOverflow question, so I'm starting here.

    Read the article

  • Adding gcc 4.9 as a compiler option in Xcode

    - by user2129150
    I asked this question on StackOverflow, but it's pretty much stagnant. Sorry if this is considered a repost/double-post. I just installed gcc 4.9 (with C11 support) and want to add it to Xcode 4.6.3's build options as a compiler option. I ran make and make install, and the packages are all there (under /usr/bin/gcc). Running gcc --version confirms that gcc 4.9 is installed rather than an older version. When I go into an existing Xcode project's build settings, the only compiler options available are: Apple LLVM compiler 4.2, LLVM GCC 4.2, and Other... Clearly, GCC 4.9 would have to be added using the "Other..." option, although I'm not sure how. I've tried entering the path to gcc (/usr/bin/gcc), although the default value for "Other..." isn't a path at all: com.apple.compilers.llvmgcc42. I've also tried following the instructions from the answer to this question, but the machine I'm on doesn't have the /Developer directory at all, since I believe Apple integrated the developer tools that required (and created) this directory into Xcode. How do I add gcc 4.9 as a compiler option in Xcode?

    Read the article

  • Installing Numpy locally

    - by Néstor
    I posted this question originally on StackOverflow, but a user suggested I move it here, so here I go! I have an account on a remote computer without root permissions, and I needed to install a local version of Python there (the remote computer has a version of Python that is incompatible with some codes I have), plus Numpy and Scipy. I've been trying to install numpy locally since yesterday, with no success. I successfully installed a local version of Python (2.7.3) in /home/myusername/.local/, so I access this version of Python by running /home/myusername/.local/bin/python. I tried two ways of installing Numpy: I downloaded the latest stable version of Numpy from the official webpage, unpacked it, went into the unpacked folder and did: /home/myusername/.local/bin/python setup.py install --prefix=/home/myusername/.local. However, I get the following error, which is followed by a series of other errors (deriving from this one): gcc -pthread -shared build/temp.linux-x86_64-2.7/numpy/core/blasdot/_dotblas.o -L/usr/local/lib -Lbuild/temp.linux-x86_64-2.7 -lptf77blas -lptcblas -latlas -o build/lib.linux-x86_64-2.7/numpy/core/_dotblas.so /usr/bin/ld: /usr/local/lib/libptcblas.a(cblas_dptgemm.o): relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC Not really knowing what this meant (except that the error apparently has to do with the LAPACK library), I just ran the same command as above, but now with LDFLAGS="-fPIC", as suggested by the error; i.e., I did LDFLAGS="-fPIC" /home/myusername/.local/bin/python setup.py install --prefix=/home/myusername/.local. However, I got the same error (except that the -fPIC flag was added after the gcc command above). I also tried installing it using pip, i.e., doing /home/myusername/.local/bin/pip install numpy (after successfully installing pip in my local path). However, I get the exact same error. I searched the web, but none of the errors seemed to be similar to mine. My first guess is that this has to do with some piece of code that needs root permissions to be executed, or maybe with some problem with the version of the LAPACK libraries or with gcc (gcc version 4.1.2 is installed on the remote computer). Help, anyone?
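
    For what it's worth, once a build does complete, a two-line check run with the locally installed interpreter shows which BLAS/LAPACK libraries numpy actually linked against -- a small sketch, nothing specific to this setup:

        import numpy
        print(numpy.__version__)
        numpy.show_config()  # prints the BLAS/LAPACK/ATLAS libraries found at build time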

    Read the article

  • wget recursively download from pages with lots of links

    - by Shadow
    When using wget with the recursive option turned on, I get an error message when it tries to download a file. It thinks the link is a downloadable file, when in reality it should just follow it to get to the page that actually contains the files (or more links to follow) that I want. wget -r -l 16 --accept=jpg website.com The error message is: .... since it should be rejected. This usually occurs when the website link it is trying to fetch ends with an SQL statement. The problem, however, doesn't occur when using the very same wget command on that link alone. I want to know how exactly it is trying to fetch the pages. I guess I could always take a poke around the source, although I don't know how messy the project is. I might also be missing what exactly "recursive" means in the context of wget. I thought it would run through and follow each link, getting the files with the extension I have requested. I posted this over at StackOverflow but they sent me over here :) Hoping you guys can help.
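
    Roughly speaking, "recursive" in wget does mean "fetch a page, extract its links, repeat up to the -l depth, and keep only the files matching --accept". A toy sketch of that idea in Python (nothing to do with wget's internals; the start URL is the placeholder from the command above):

        from html.parser import HTMLParser
        from urllib.parse import urljoin
        import urllib.request

        class LinkCollector(HTMLParser):
            def __init__(self):
                super().__init__()
                self.links = []
            def handle_starttag(self, tag, attrs):
                if tag == "a":
                    self.links += [v for k, v in attrs if k == "href" and v]

        def crawl(start, max_depth=16):
            seen, queue = set(), [(start, 0)]
            while queue:
                url, depth = queue.pop()
                if url in seen or depth > max_depth:
                    continue
                seen.add(url)
                if url.lower().endswith(".jpg"):
                    print("would download:", url)   # what --accept=jpg keeps
                    continue
                try:
                    page = urllib.request.urlopen(url, timeout=10).read()
                except OSError:
                    continue                        # roughly what wget reports as an error and rejects
                parser = LinkCollector()
                parser.feed(page.decode("utf-8", "replace"))
                queue += [(urljoin(url, link), depth + 1) for link in parser.links]

        crawl("http://website.com/")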

    Read the article

  • Disadvantages of enabling the Low Fragmentation Heap (LFH) on Windows Server 2003?

    - by James Wiseman
    I've been investigating an issue with a production Classic ASP website running on IIS6 which seems indicative of memory fragmentation. One of the suggestions for how to ameliorate this came from Stack Overflow: How can I find why some classic asp pages randomly take a real long time to execute?. It suggested flipping a setting in the site's global.asa file to 'turn on' the Low Fragmentation Heap (LFH). The following code (with a registered version of the accompanying DLL) did the trick: Set LFHObj=CreateObject("TURNONLFH.ObjTurnOnLFH") LFHObj.TurnOnLFH() application("TurnOnLFHResult")=CStr(LFHObj.TurnOnLFHResult) (Really, the code isn't that important to the question.) The author of a linked post reported a seemingly magic resolution to this issue, and, reading around a little more, I discovered that this setting is enabled by default on Windows Server 2008. So, naturally, this left me a little concerned: why is this setting not enabled by default on 2003, and if it works in 2008, why has Microsoft not issued a patch to enable it by default on 2003? I suspect the answer to both is the same (if there is one). Obviously, we're testing it in a non-production environment, and running an array of metrics and comparisons to determine whether it does help us. But aside from this I'm really just trying to understand whether there's any technical reason why we should do this, or whether there are any gotchas that we need to be aware of.

    Read the article

  • How to limit SMTP delivery to hourly batches

    - by Jeremy W
    Moved over from StackOverflow. Sorry if you saw it there first. In an effort to keep us from being labelled spammers by major ISPs (in addition to SPF records, privacy policies, CAN-SPAM compliance and the like), I wanted to limit the amount of mail we send out per hour. Is this possible in the Windows Server 2003 SMTP server? I was looking at the outbound connection properties in the SMTP virtual server config screens... it's just not clear whether tinkering with those settings will do what I want. In a nutshell, I'd love mail being sent by this server to queue up and send, for example, 5,000 messages every 10 minutes or so. Mail is being sent via ASP.NET. Also, I wouldn't be sending 1 million a day. Probably 30,000 tops, and doing that only a few times a month. I'm just trying to avoid a tidal wave of 30k going out in 1 minute and setting off every network spam monitoring alarm in North America. I know I could do it with a combination console app / scheduled job (a rough sketch of that fallback is below), but my question is whether there is an easier way to accomplish this with the virtual SMTP server settings on Win2k3. Is this possible?
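
    If it ever comes to the console-app fallback, the core of it is only a few lines -- a rough sketch, not a W2K3 setting; the queue directory, message-file layout and localhost relay are all assumptions:

        import glob, os, smtplib, time

        QUEUE_DIR  = r"C:\mailqueue"   # hypothetical folder the ASP.NET app drops messages into
        BATCH_SIZE = 5000
        INTERVAL   = 600               # seconds between batches (10 minutes)

        while True:
            batch = sorted(glob.glob(os.path.join(QUEUE_DIR, "*.msg")))[:BATCH_SIZE]
            if batch:
                smtp = smtplib.SMTP("localhost")   # hand each batch to the local SMTP service
                for path in batch:
                    with open(path) as f:
                        sender, recipient, body = f.read().split("\n", 2)  # assumed file layout
                    smtp.sendmail(sender, recipient, body)
                    os.remove(path)
                smtp.quit()
            time.sleep(INTERVAL)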

    Read the article

  • Is Cherokee (probably) the best static content server for beginner sysadmins?

    - by Bad Learner
    I have read the pros and cons of most of the popular web servers and have come to the conclusion that Apache would (probably) be the best web server for serving dynamic content -- no wonder YouTube, Flickr and Facebook, among many others, use it. I do not know if the C10K problem applies to Apache even when serving dynamic content only, but I think any web server used to serve dynamic content needs some good tweaking for optimized performance, and given that nothing beats Apache when it comes to documentation, resources and support on the web, I think I will go with Apache for dynamic content. That apart, the confusion begins when it comes to choosing web servers for static content (including streaming videos). I see that Nginx, Cherokee and Lighttpd are among the best (I am not considering non-open-source or non-Linux stuff here). So, which to choose? I know one cannot go wrong with any of the three (Nginx, Cherokee, Lighttpd). Lighttpd's development has evidently gotten slower than it was a good time ago. The documentation is pretty good for all three, and hopefully so are the resources (knowledge of these among the users of the Stackoverflow/Serverfault sites, the web, etc.). Precisely, and noting points [2] and [3], if I am not wrong, I should go with either Nginx or Cherokee. I would love to see someone clarify this: is Cherokee just as fast (MB/s), performant (connections/s), and reliable (think downtime/restarting the server) as Nginx for serving static content and load balancing, for small, medium to large (and really large) websites and applications? (Think the size of YouTube, Apache or Facebook.) If the answer to the question above is a big "hell, yes!" then I should probably prefer Cherokee, right? Because, since I am a beginner, it would be a lot easier to set up Cherokee, as it has a graphical admin user interface plus really good documentation. Yes? I could be wrong, I could be right. I put down what I know so that you can offer the most relevant advice. Pardon if anything I've said is offensive.

    Read the article

  • PHP Causing Segmentation Fault & Apache Blank Response

    - by Joe
    I recently updated a Debian Lenny server to PHP 5.3.5 using the dotdeb source. Soon after doing so, certain (but not all) sites on the server stopped responding to requests. A blank response would be returned -- no headers, no content, nothing. I found this related question on StackOverflow which seems to describe something similar, and used the same code the user had in their answer to see if I could replicate the issue: <?php class A { public function __construct() { new B; } } class B { public function __construct() { new A; } } new A; print 'Loaded Class A'; ?> This triggered the problem -- the page returned absolutely nothing, despite the original question stating this was fixed in PHP 5.5.0. No CPU spike as you'd expect, no waiting, just an almost instant empty response. I then ran the same code from the CLI (php -f test.php) and the only output I got was 'Segmentation fault'. Tailing the kernel log I've spotted: Feb 16 07:04:06 creature kernel: [192203.269037] php[17710] general protection ip:76ef37 sp:7fff155e9bb0 error:0 in php5[400000+870000] Feb 16 08:57:31 creature kernel: [199639.699854] apache2[31136]: segfault at 7fff13a84fe0 ip 7f730514ea40 sp 7fff13a85008 error 6 in libphp5.so[7f7304ce8000+915000] All extremely odd, and I'm not sure what it's pointing to or what I should do to debug this further. As I said, some sites work, but code such as the above definitely triggers it. Not that the sites I want to serve have code like that -- it's just an example. Any help is much appreciated!

    Read the article

  • Shared configuration for Eclipse on Debian server

    - by Joris Meys
    I've manually installed the latest Eclipse on our Debian server and wanted to configure it so that all users share the same configuration. It turned out to be less obvious than I thought: I don't seem to be able to install packages for all users. If I run it myself, all configuration data is saved under my own home directory. If I run Eclipse using sudo, everything is saved under the root directory but is not accessible to other users when they run Eclipse. I've been browsing the Eclipse manual and some forums, but apart from a "yes, you can" I couldn't find any information on how that should be done. The biggest problem is installing plugins where all users can find them. Any help is greatly appreciated. Eclipse: 3.6.1 classic, installed using this procedure. Server uname: GNU/Linux * 2.6.26-2-amd64. The server is accessed using PuTTY, and a Gnome desktop through RealVNC; just mentioning that in case it is of any importance. Our sysadmin is on "prolonged leave" (working in Spain and never replaced), so I'm stuck without help here. EDIT: I asked this question on StackOverflow as well, as I wasn't certain this is a genuine server-related question. Please feel free to merge both questions at the appropriate place.

    Read the article

  • Difference between CurrentClockSpeed and MaxClockSpeed

    - by Ben
    Rationale for why this belongs on ServerFault rather than StackOverflow: I already have the program which gets the value; I am asking about the value returned and what it means. I have an in-house program which audits our company PCs, and one of the things it checks is the speed of the processor. To do this, it queries the Win32_Processor WMI class and gets the value of CurrentClockSpeed. We were playing with the data today and found an anomaly with some of the speeds being reported incorrectly (for example, CurrentClockSpeed said 1.0GHz, whereas the CPU name said Intel(R) Core(TM)2 CPU T5600 @ 1.83GHz [confirmed it is in fact 1.83GHz]). I did a bit of digging on the internet and found this blog post which might explain what is going on. My initial thought was that I could change the program to get the value of MaxClockSpeed instead of CurrentClockSpeed, but Microsoft's documentation doesn't clearly define what this will return. What I mean is: will it return the processor's actual maximum speed (say, if it were overclocked), which it would not normally be running at, or will it return what I expect, which is its maximum speed under normal (not overclocked) conditions?
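
    As a side note, it's easy to pull both values next to each other for comparison -- a small sketch using the third-party "wmi" package (an assumption on my part; the in-house audit program itself obviously isn't shown here):

        import wmi

        for cpu in wmi.WMI().Win32_Processor():
            print(cpu.Name.strip())
            print("  CurrentClockSpeed:", cpu.CurrentClockSpeed, "MHz")
            print("  MaxClockSpeed:    ", cpu.MaxClockSpeed, "MHz")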

    Read the article

  • Errors generating gpg keypair

    - by Evan Lynch
    I posted this originally on StackOverflow, but was told that it was off-topic and this would be the better place to post it, so I am reposting it here and deleting my original topic. I have a rather old PGP key, but I long ago lost the private key for it, so I'm trying to generate a new key with GPG on Windows 7. While it technically generates the key, GPA crashes every time I generate the keypair. I've tried this four times now and have just downloaded what appears to be the latest version of Gpg4win, and I am still seeing the problem. A comment on my original post informed me that "GPA crashes" is not a very good description of the problem, but unfortunately I can't do much better than that: all it tells me is "gpa.exe has crashed and will close now"; I don't get an error dump or anything. Is there anything I can do to fix this, or is this just a bug in the latest version of Gpg4win? Here are the specs of what I'm using: GPA 0.9.4, GnuPG 2.0.22. My operating system is Windows 7 64-bit, and I have 5 GB of RAM. Also, I was told to try generating the keypair on the command line, but can't find any documentation for how to do this in Windows 7. If anyone could link me to current documentation for this, that would be a good workaround for solving this problem.
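
    As a possible workaround while GPA is misbehaving, key generation can also be scripted against GnuPG from Python using the third-party python-gnupg package (pip install python-gnupg) -- a sketch only; the home directory, names and key parameters below are illustrative, and gpg.exe from Gpg4win must be on the PATH (or passed via gpgbinary=):

        import gnupg

        gpg = gnupg.GPG(gnupghome=r"C:\Users\me\AppData\Roaming\gnupg")  # assumed GnuPG home directory
        params = gpg.gen_key_input(name_real="My Name",
                                   name_email="me@example.com",
                                   key_type="RSA",
                                   key_length=2048,
                                   passphrase="pick-a-passphrase")
        key = gpg.gen_key(params)        # may take a while gathering entropy
        print("new key fingerprint:", key.fingerprint)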

    Read the article

  • IIS 7.5 truncating POST body containing JSON data with ASP.NET MVC 3

    - by Guneet Sahai
    I'm facing a problem which I hope is a configuration issue with IIS, but which is giving a lot of trouble right now. Basically I have a controller that accepts a JSON payload and does some processing. While it generally works fine, every now and then, when the system is under some load, I get an error. After some painful debugging, we figured out that the incoming JSON gets truncated, which causes the deserializer to fail. To narrow down the problem, we wrote a simple controller that accepts a JSON payload and tries to deserialize it; if that fails, it just logs the error. This works fine, but when I hit it using a load-testing tool (JMeter) it throws the same error (truncation) for a few requests. The number of failures increases when I increase the number of parallel connections; it starts showing up at around 150 concurrent requests. We are running IIS 7 on Windows Server 2008 with ASP.NET MVC 3 and a more or less default configuration of IIS. More information is available in my question here: http://stackoverflow.com/questions/12662282/content-length-of-http-request-body-size
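
    To take JMeter out of the equation, the same load pattern can be reproduced with a short script -- a sketch only; the URL and payload are placeholders standing in for the simple logging controller described above:

        import json, urllib.request
        from concurrent.futures import ThreadPoolExecutor

        URL  = "http://localhost/test/json"                      # hypothetical logging controller
        BODY = json.dumps({"items": list(range(5000))}).encode("utf-8")

        def post(_):
            req = urllib.request.Request(URL, data=BODY,
                                         headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req) as resp:
                return resp.status

        with ThreadPoolExecutor(max_workers=150) as pool:        # ~150 parallel connections
            statuses = list(pool.map(post, range(2000)))

        print({code: statuses.count(code) for code in set(statuses)})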

    Read the article

  • Application runs fine manually but fails as a scheduled task

    - by user42540
    I wasn't sure if this should go here or on StackOverflow. I have an application that loads some files from a network share (the input folder), extracts certain data from them and saves new files (zipped with SharpZipLib) on a different network share (the output folder). This application runs fine when you open it directly, but when it is set up as a scheduled task, it fails in numerous places. The application is scheduled on a Windows 2003 server. Let me say right off the bat that the scheduled task is set to use the same login account that I am currently logged in with, so it's not failing because it's using the LocalSystem account. Something else is going on here. Originally, the application assigned a drive letter to the input folder using WNetGetConnectionA(). I don't remember why this was done; someone else on our team did that and she's gone now. I think there was some issue with using the WinZip command line with a UNC path. I switched from the WinZip command-line utility to SharpZipLib because there were other issues with using the WinZip command line. Anyway, the application failed when trying to assign a drive letter, with the error "connection already established." That wasn't true, and even after trying WNetCancelConnection(), it still didn't work. Then I decided to just map the drive manually on the server. Now, when the app calls Directory.Exists(inputFolderPath), it returns false, even though the folder does exist. So, for whatever reason, I cannot read this directory from within the application. I can manually navigate to this folder in Windows Explorer and open files. The app's log file shows that the user executing it on the schedule is the user I expect, not LocalSystem. Any ideas?

    Read the article

  • Driver denied access to PCI card

    - by Corin
    Alright, I asked this on StackOverflow (here) and they suggested trying ServerFault to get help on permissions. So here's the deal. We designed a custom PCI card and wrote the driver for it. It's been working for years without problems, but now we've encountered one particular installation where it doesn't work. The problem is that we cannot connect to the PCI card to begin communication with it. We tried replacing the card and had the same problem. We had the motherboard replaced, thinking the PCI slots were bad. That didn't help either. We tried the cards in a different computer and they all worked. So it seems to be something specific to this computer. The Windows Device Manager indicates the device is working properly and seems to have all the correct driver info. We now have this troublesome computer back at the office for testing. With the help of some extra debug info in the driver, we determined that we cannot connect because access is denied. Sounds like a permissions issue to me. I should note that we are logged into the system as a local administrator. So what configuration option in Windows can prevent access to a device?

    Read the article

  • How does Windows 7 taskbar "color hot-tracking" feature calculate the colour to use?

    - by theyetiman
    This has intrigued me for quite some time. Does anyone know the algorithm Windows 7 Aero uses to determine the colour to use as the hot-tracking hover highlight on taskbar buttons for currently-running apps? It is definitely based on the icon of the app, but I can't see a specific pattern for where it's getting the colour value from. It doesn't seem to be any of the following: an average colour value from the entire icon, otherwise you would get brown all the time with multi-coloured icons like Chrome; the colour used most in the image, otherwise you'd get yellow for the SQL Server Management Studio icon (6th from left) -- also, the Chrome icon uses red, green and yellow in roughly equal measure; or a colour located at certain pixel coordinates within the icon, because Chrome is red (indicating the top of the icon) and Notepad++ (2nd from right) is green (indicating the bottom of the icon). I asked this question on ux.stackoverflow.com and it got closed as off-topic, but someone answered with the following: As described by Raymond Chen in this MSDN blog article: Some people ask how it's done. It's really nothing special. The code just looks for the predominant color in the icon. (And, since visual designers are sticklers for this sort of thing, black, white, and shades of gray are not considered "colors" for the purpose of this calculation.) However, I wasn't really satisfied with that answer because it doesn't explain how the "predominant" colour is calculated. Surely in the SQL Management Studio icon the predominant colour, to my eyes at least, is yellow, yet the highlight is green. I want to know, specifically, what the algorithm is.
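
    For what it's worth, here is a guess at what "predominant colour, ignoring black/white/grey" might look like in practice -- a sketch only, certainly not Microsoft's code, with the hue bucketing and saturation/brightness cut-offs picked arbitrarily:

        import colorsys
        from collections import Counter
        from PIL import Image

        def dominant_colour(icon_path, min_sat=0.25, min_val=0.25):
            img = Image.open(icon_path).convert("RGBA").resize((48, 48))
            hues = Counter()
            for r, g, b, a in img.getdata():
                if a < 128:
                    continue                      # ignore transparent pixels
                h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
                if s < min_sat or v < min_val:
                    continue                      # treat near-grey/black/white as "not a colour"
                hues[round(h * 12) % 12] += 1     # coarse hue buckets so shades count together
            if not hues:
                return None
            h = hues.most_common(1)[0][0] / 12
            return tuple(int(c * 255) for c in colorsys.hsv_to_rgb(h, 0.8, 0.9))

        print(dominant_colour("chrome.ico"))      # hypothetical icon file

    Exactly how the real implementation weights pixels (alpha, centre bias, which icon size is sampled) is the open question here, so those thresholds are precisely where a sketch like this would differ from what Windows actually does.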

    Read the article

  • Which AMI to use for Java/Tomcat/MySQL in Amazon EC2?

    - by Justin
    I originally posted this on stackoverflow.com and it was suggested that serverfault.com might be a better place to ask this question. So here goes: I'm trying to determine which Amazon Machine Image (AMI) to use as my virtual server in Amazon's EC2. For now, I'll need to choose an AMI that complies with the AWS Free Usage Tier. I want to deploy a Java app that I've been developing using Eclipse on Windows XP, Tomcat 7 and MySQL 5.5. I'm aware that I can choose the Basic 32-bit Amazon Linux AMI and then manually install Tomcat and MySQL (does MySQL get installed on the image or separately on an Elastic Block Store (EBS) volume?). Here's the rub: I'm a bit of a Linux noob. I can start Tomcat and tail the logs and such on Linux, but I'm not familiar with the install process for Tomcat and MySQL on Linux, or with commands like sudo and chmod. I'm happy to get more hands-on with Linux, but I'm short on time right now. Are there AMIs that already have Tomcat and MySQL bundled? The Request Instance Wizard shows 805 community AMIs that are Free Tier eligible; 51 of them have "Tomcat" in their name. I'm willing to consider using Elastic Beanstalk, but my research thus far hasn't found any discussion of using MySQL with Beanstalk. The discussions all seem to use Amazon's SimpleDB. Any advice is greatly appreciated.

    Read the article

  • Windows Explorer Hangs on Right-Click

    - by Bryan
    I am not sure if this is the right site to post this on, as I typically post coding questions on StackOverflow. But I'll ask anyway, and hopefully someone can move it if it's in the wrong place. Currently I have a custom-built PC with an Intel i7 chip, a 1300-watt PSU, 8 GB of RAM, and two video cards. Originally I had one video card (NVIDIA) that used the PSU and had two DVI outputs. After purchasing a third monitor, I installed another (ATI) graphics card that needed no PSU connectors. After installing it and restarting, I noticed that when I right-click on my desktop or in Windows Explorer, Explorer will hang, freeze, then restart. Sometimes after Windows Explorer restarts the problem dissipates. I checked to make sure everything was connected properly, and it was. I repaired the ATI Catalyst Control Center to see if that had an issue, and I checked whether either video card required updated drivers. Nothing worked. I tried restarting my PC and that didn't work. I tried using ShellXView (I forget what it's actually called) and tried closing processes, but that didn't work. Does anyone have any idea what could have caused this, or possible solutions I should try? Thanks in advance.

    Read the article

  • Getting 404 error on MVC web-site

    - by RB
    I have an IIS 7.5 web-site, on Windows Server 2008, with an ASP.NET MVC2 web-site deployed to it. The website was built in Visual Studio 2008, targeting .NET 3.5, and IIS 5.1 has been successfully configured to run it as well, for local testing. We've installed the world's simplest MVC application (the one which is created when you create a new MVC2 project in Visual Studio), and we are getting 404s on any page we try to access -- e.g. <my_server>/Home/About will generate a 404. I've asked this question on StackOverflow as well, but that was before I knew it was a server issue. I have checked the following things: There are 404 entries in the IIS log, corresponding to each request. The application pool for the web-site is set to use the Integrated pipeline. The "customErrors" mode is set to off. .NET 3.5 SP1 is installed. ASP.NET MVC 2 is installed. I've used MVC Diagnostics to confirm all MVC DLLs are being found. ASP.NET is enabled in IIS, which we've demonstrated by running the MVC Diagnostics page. KB 2023146 did highlight that HTTP Redirection was off, so we've turned it on, but no joy. Any ideas will be greatly appreciated! Someone did suggest that there might be problems running it caused by Windows Server 2008 being 64-bit -- does anyone know anything about this?

    Read the article

  • User authentication -- username mismatch in IIS in ASP.NET application

    - by Cory Larson
    Last week, an employee's Active Directory username was changed (or a new one was created for them). For the purposes of this example, let's assume these usernames: Old: Domain\11111 New: Domain\22222 When this user now logs in using their new username, and attempts to browse to any one of a number of ASP.NET applications using only Windows Authentication (no Anonymous enabled), the system authenticates but our next layer of database-driven permissions prevents them from being authorized. We tracked it down to a mismatch of usernames between their logon account and who IIS thinks they are. Below are the outputs of several ASP.NET variables from apps running in a Windows 2008 IIS7.5 environment: Request.ServerVariables["AUTH_TYPE"]: Negotiate Request.ServerVariables["AUTH_USER"]: Domain\11111 Request.ServerVariables["LOGON_USER"]: Domain\22222 Request.ServerVariables["REMOTE_USER"]: Domain\11111 HttpContext.Current.User.Identity.Name: Domain\11111 System.Threading.Thread.CurrentPrincipal.Identity.Name: Domain\11111 From the above, I can see that only the LOGON_USER server variable has the correct value, which is the account the user used to log on to their machine. However, we use the "AUTH_USER" variable for looking up the database permissions. In a separate testing environment (completely different server: Windows 2003, IIS6), all of the above variables show "Domain\22222". So this seems to be a server-specific issue, like the credentials are somehow getting cached either on their machine or on the server (the former seems more plausible). So the question is: how do I confirm whether it's the user's machine or the server that is botching the request? How should I go about fixing this? I looked at the following two resources and will be giving the first one a try shortly: http://www.interworks.com/blogs/jvalente/2010/02/02/removing-saved-credentials-passwords-windows-xp-windows-vista-or-windows-7 http://stackoverflow.com/questions/2325005/classic-asp-request-servervariableslogon-user-returning-wrong-username/5299080#5299080 Thanks.

    Read the article

  • Odd log entries when starting up PostgreSQL

    - by Shadow
    When restarting pgSQL, I get the following log entries: 2010-02-10 16:08:05 EST LOG: received smart shutdown request 2010-02-10 16:08:05 EST LOG: autovacuum launcher shutting down 2010-02-10 16:08:05 EST LOG: shutting down 2010-02-10 16:08:05 EST LOG: database system is shut down 2010-02-10 16:08:07 EST LOG: database system was shut down at 2010-02-10 16:08:05 EST 2010-02-10 16:08:07 EST LOG: autovacuum launcher started 2010-02-10 16:08:07 EST LOG: database system is ready to accept connections 2010-02-10 16:08:07 EST LOG: connection received: host=[local] 2010-02-10 16:08:07 EST LOG: incomplete startup packet 2010-02-10 16:08:07 EST LOG: connection received: host=[local] 2010-02-10 16:08:07 EST FATAL: password authentication failed for user "postgres" 2010-02-10 16:08:08 EST LOG: connection received: host=[local] 2010-02-10 16:08:08 EST FATAL: password authentication failed for user "postgres" 2010-02-10 16:08:08 EST LOG: connection received: host=[local] 2010-02-10 16:08:08 EST FATAL: password authentication failed for user "postgres" 2010-02-10 16:08:09 EST LOG: connection received: host=[local] 2010-02-10 16:08:09 EST FATAL: password authentication failed for user "postgres" 2010-02-10 16:08:09 EST LOG: connection received: host=[local] 2010-02-10 16:08:09 EST FATAL: password authentication failed for user "postgres" 2010-02-10 16:08:10 EST LOG: connection received: host=[local] 2010-02-10 16:08:10 EST FATAL: password authentication failed for user "postgres" 2010-02-10 16:08:10 EST LOG: connection received: host=[local] 2010-02-10 16:08:10 EST FATAL: password authentication failed for user "postgres" 2010-02-10 16:08:11 EST LOG: connection received: host=[local] 2010-02-10 16:08:11 EST FATAL: password authentication failed for user "postgres" 2010-02-10 16:08:11 EST LOG: connection received: host=[local] 2010-02-10 16:08:11 EST FATAL: password authentication failed for user "postgres" 2010-02-10 16:08:12 EST LOG: connection received: host=[local] 2010-02-10 16:08:12 EST FATAL: password authentication failed for user "postgres" 2010-02-10 16:08:12 EST LOG: connection received: host=[local] 2010-02-10 16:08:12 EST FATAL: password authentication failed for user "postgres" 2010-02-10 16:08:12 EST LOG: connection received: host=[local] 2010-02-10 16:08:12 EST LOG: incomplete startup packet My question regarding a potential consequence of this is posted here: http://stackoverflow.com/questions/2238954/mdb2-says-connection-failed-db-logs-say-otherwise , but I didn't realize this was happening when I asked that question, and I figured this [part of the] problem is for SF. Edit: I can connect to the database and manipulate things normally with the psql CLI and the postgres user.

    Read the article
