Search Results

Search found 8577 results on 344 pages for 'latest'.


  • How to display image from server (newest-oldest) to a new php page?

    - by Jben Kaye
    I have a form that creates images and saves them in a folder on the server. What I need to do is display ALL of the created images on another page, newest at the top and oldest at the bottom. I have a page called formdisplay.php, but it just displays a single broken image, not the images from newest to oldest. Hope you can help me with this; I really need to get this working. Thanks in advance for your help. I have read the following posts, but none of them worked for me:

    - Pull dedicated images from folder - show image + filename (strip part of filename + file-extension)
    - How can I display latest uploaded image first? (PHP+CSS)
    - getting images from the server and display them
    - Displaying images from folder in php
    - display image from server embedding php in html

    formcreatesave.php:

        imagecopymerge($im, $img2, 10, 350, 0, 0, imagesx($img2), imagesy($img2), 100);
        $date_created = date("YmdHis");           // get date created
        $img_name = "-img_entry.jpg";             // the file name of the generated image
        $img_newname = $date_created . $img_name; // datecreated+name
        $img_dir = dirname($_SERVER['SCRIPT_FILENAME']) . "/" . $img_newname; // the location to save
        imagejpeg($im, $img_dir, 80);             // save the image with this name and quality
        imagedestroy($im);

    formdisplay.php:

        $dir = '/home3/birdieon/public_html/Amadeus/uploader/uploader';
        $base_url = 'http://birdieonawire.com/Amadeus/uploader/uploader/20131027024705-img_entry.jpg';
        $newest_mtime = 0;
        $show_file = 'BROKEN';
        if ($handle = opendir($dir)) {
            while (false !== ($file = readdir($handle))) {
                if (($file != '.') && ($file != '..')) {
                    $mtime = filemtime("$dir/$file");
                    if ($mtime > $newest_mtime) {
                        $newest_mtime = $mtime;
                        $show_file = "$base_url/$file";
                    }
                }
            }
        }
        print '<img src="' . $show_file . '" alt="Image Title Here">';

    Please feel free to edit my code. Thanks :)
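    For reference, a minimal sketch of one common way to do what the question describes: gather every image, sort by modification time descending, and print one img tag per file. The directory path is taken from the question; $base_url here is assumed to be the matching directory URL without a filename (note the $base_url in the question already ends in a specific .jpg, which would produce broken src attributes). Requires PHP 5.3+ for the anonymous function.

        <?php
        $dir      = '/home3/birdieon/public_html/Amadeus/uploader/uploader'; // filesystem path (from the question)
        $base_url = 'http://birdieonawire.com/Amadeus/uploader/uploader';    // assumed directory URL, no filename

        $images = glob("$dir/*.jpg");
        // Newest first: compare modification times.
        usort($images, function ($a, $b) { return filemtime($b) - filemtime($a); });

        foreach ($images as $path) {
            $file = basename($path);
            print '<img src="' . $base_url . '/' . rawurlencode($file) . '" alt="' . htmlspecialchars($file) . '">' . "\n";
        }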


  • Why is FF on OS X losing jQuery-UI in a click event handler?

    - by Jean-François Beauchamp
    In a web page using jQuery 1.7.1 and jQuery UI 1.8.18, if I output $.ui in an alert box when the document is ready, I get [object Object]. However, in Firefox, if I output $.ui in a click event handler, I get 'undefined' as the result. With other browsers (latest versions of IE, Chrome and Safari), the result is still [object Object] when clicking on the link. Here is my HTML page:

        <!doctype html>
        <html>
        <head>
            <title></title>
            <script src="Scripts/jquery-1.7.1.js" type="text/javascript"></script>
            <script src="Scripts/jquery-ui-1.8.18.js" type="text/javascript"></script>
            <script type="text/javascript">
                $(document).ready(function () {
                    alert($.ui); // ALERT A
                    $(document).on("click", ".dialogLink", function () {
                        alert($.ui); // ALERT B
                        return false;
                    });
                });
            </script>
        </head>
        <body>
            <a href="#" class="dialogLink">Click me!</a>
        </body>
        </html>

    In this post, I reduced to its simplest form another problem I was having, described here: $(this).dialog is not a function. I created a new post for the sake of clarity, since the real question is different from the original one now that I have pinpointed where the problem resides.

    UPDATE: If I replace my alerts with simply alert($); I get this result for alert A:

        function (selector, context) { return new jQuery.fn.init(selector, context, rootjQuery); }

    and this one for alert B:

        function (a, b) { return new d.fn.init(a, b, g); }

    This does not make sense to me, although I may not be understanding well enough what $ is...

    UPDATE 2: I can only reproduce this problem using Firefox on OS X. On Firefox running on Windows 7, everything is fine.
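    One observation for context: those two alert results look like two different jQuery objects, since alert A prints unminified jQuery source while alert B prints minified source (d.fn.init). That would suggest the click handler is resolving $ to a second jQuery copy that never had jQuery UI attached. A hedged way to check that hypothesis is to compare version strings in both spots:

        // inside $(document).ready (ALERT A's position):
        alert($.fn.jquery);   // expect "1.7.1"

        // inside the click handler (ALERT B's position):
        alert($.fn.jquery);   // a different version string would mean a second jQuery copy took over $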


  • ASP.NET MVC 2 Preview 2 Route Request Not Working

    - by Kezzer
    Here's the error:

        The incoming request does not match any route.

    Basically I upgraded from Preview 1 to Preview 2 and got rid of a load of redundant stuff relating to areas (as described by Phil Haack). It didn't work, so I created a brand new project to check out how it's dealt with in Preview 2. The file Default.aspx no longer exists, which contained the following:

        public void Page_Load(object sender, System.EventArgs e)
        {
            // Change the current path so that the Routing handler can correctly interpret
            // the request, then restore the original path so that the OutputCache module
            // can correctly process the response (if caching is enabled).
            string originalPath = Request.Path;
            HttpContext.Current.RewritePath(Request.ApplicationPath, false);
            IHttpHandler httpHandler = new MvcHttpHandler();
            httpHandler.ProcessRequest(HttpContext.Current);
            HttpContext.Current.RewritePath(originalPath, false);
        }

    The error I received points to the line httpHandler.ProcessRequest(HttpContext.Current);, yet in newer projects none of this even exists. To test it, I quickly deleted Default.aspx, but then absolutely nothing worked; I didn't even receive any errors. Here are some code extracts.

    Global.asax.cs:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Mvc;
        using System.Web.Routing;

        namespace Intranet
        {
            public class MvcApplication : System.Web.HttpApplication
            {
                public static void RegisterRoutes(RouteCollection routes)
                {
                    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

                    AreaRegistration.RegisterAllAreas();

                    routes.MapRoute(
                        "Default",
                        "{controller}/{action}/{id}",
                        new { controller = "Home", action = "Index", id = "" }
                    );
                }

                protected void App_Start()
                {
                    RegisterRoutes(RouteTable.Routes);
                }
            }
        }

    Notice the area registration, as that's what I'm using.

    Routes.cs:

        using System.Web.Mvc;

        namespace Intranet.Areas.Accounts
        {
            public class Routes : AreaRegistration
            {
                public override string AreaName
                {
                    get { return "Accounts"; }
                }

                public override void RegisterArea(AreaRegistrationContext context)
                {
                    context.MapRoute("Accounts_Default", "Accounts/{controller}/{action}/{id}",
                        new { controller = "Home", action = "Index", id = "" });
                }
            }
        }

    Check the latest docs for more info on this part. It's to register the area. The Routes.cs files are located in the root folder of each area. Cheers


  • Why is this line breaking Rails with Passenger on DreamHost?

    - by Frew
    Ok, so I have a Rails app set up on DreamHost that I had working a while ago, and now it's broken. I don't know a lot about deployment environments or anything like that, so please forgive my ignorance. Anyway, it looks like the app is crashing at this line in config/environment.rb:

        require File.join(File.dirname(__FILE__), 'boot')

    config/boot.rb is pretty much normal, but I'll include it here anyway:

        # Don't change this file!
        # Configure your app in config/environment.rb and config/environments/*.rb

        RAILS_ROOT = "#{File.dirname(__FILE__)}/.." unless defined?(RAILS_ROOT)

        module Rails
          class << self
            def boot!
              unless booted?
                preinitialize
                pick_boot.run
              end
            end

            def booted?
              defined? Rails::Initializer
            end

            def pick_boot
              (vendor_rails? ? VendorBoot : GemBoot).new
            end

            def vendor_rails?
              File.exist?("#{RAILS_ROOT}/vendor/rails")
            end

            def preinitialize
              load(preinitializer_path) if File.exist?(preinitializer_path)
            end

            def preinitializer_path
              "#{RAILS_ROOT}/config/preinitializer.rb"
            end
          end

          class Boot
            def run
              load_initializer
              Rails::Initializer.run(:set_load_path)
            end
          end

          class VendorBoot < Boot
            def load_initializer
              require "#{RAILS_ROOT}/vendor/rails/railties/lib/initializer"
              Rails::Initializer.run(:install_gem_spec_stubs)
            end
          end

          class GemBoot < Boot
            def load_initializer
              self.class.load_rubygems
              load_rails_gem
              require 'initializer'
            end

            def load_rails_gem
              if version = self.class.gem_version
                gem 'rails', version
              else
                gem 'rails'
              end
            rescue Gem::LoadError => load_error
              $stderr.puts %(Missing the Rails #{version} gem. Please `gem install -v=#{version} rails`, update your RAILS_GEM_VERSION setting in config/environment.rb for the Rails version you do have installed, or comment out RAILS_GEM_VERSION to use the latest version installed.)
              exit 1
            end

            class << self
              def rubygems_version
                Gem::RubyGemsVersion if defined? Gem::RubyGemsVersion
              end

              def gem_version
                if defined? RAILS_GEM_VERSION
                  RAILS_GEM_VERSION
                elsif ENV.include?('RAILS_GEM_VERSION')
                  ENV['RAILS_GEM_VERSION']
                else
                  parse_gem_version(read_environment_rb)
                end
              end

              def load_rubygems
                require 'rubygems'
                min_version = '1.1.1'
                unless rubygems_version >= min_version
                  $stderr.puts %Q(Rails requires RubyGems >= #{min_version} (you have #{rubygems_version}). Please `gem update --system` and try again.)
                  exit 1
                end
              rescue LoadError
                $stderr.puts %Q(Rails requires RubyGems >= #{min_version}. Please install RubyGems and try again: http://rubygems.rubyforge.org)
                exit 1
              end

              def parse_gem_version(text)
                $1 if text =~ /^[^#]*RAILS_GEM_VERSION\s*=\s*["']([!~<>=]*\s*[\d.]+)["']/
              end

              private

              def read_environment_rb
                File.read("#{RAILS_ROOT}/config/environment.rb")
              end
            end
          end
        end

        # All that for this:
        Rails.boot!

    Does anyone have any ideas? I am not getting any errors in the log or on the page. -fREW


  • Which MS technologies would be suited for a data intensive application?

    - by steve.tse
    I'm a junior VB.NET developer with little application design knowledge. I've been reading a lot of material online regarding different design patterns, frameworks, and methodologies. It's become a bit confusing for me. Right now I'm trying to decide what language would be best suited to convert an existing VB6 application (with a SQL Server backend). I need to update the UI and add more user functionality and reporting capabilities. Initially I was thinking of using WPF and attempting the MVVM pattern for this big project. Reports would be generated from SSRS. A peer suggested using ASP.NET, and I don't have enough experience to determine what would be better. The senior programmers here are stuck on using VB6 and don't have any input on what to use; they are encouraging me to use the latest technologies. This application would be for ~20 users in a central location. Ideally I would stick to a Microsoft .NET language. The current interface is similar to a datagrid table where the user would click in to see the detail of each record. They would need to have multiple records open at any given time. I look forward to all the advice I can get.

    EDIT 2010/04/22 2:47 PM EST

    - What is your audience? Internal clients within an intranet.
    - How complex are the interactions you expect to implement? Not very: displaying data from SQL Server in the UI and allowing user updates to said data. Typically just one user modifying a record.
    - Do you require near real-time data updates? No.
    - How often do you expect to update the application after the first release? Twice a year.
    - Do you expect a well-defined set of client platforms? Yes, a Windows XP environment, potentially upgrading to Win7. Currently on IE6, moving to IE7 or 8 within a couple of months.
    - Do users need access from anywhere? No, just from their PCs.


  • Using Entity Framework 4.0 with Code-First and POCO: How to Get Parent Object with All its Children

    - by SirEel
    I'm new to EF 4.0, so maybe this is an easy question. I've got VS2010 RC and the latest EF CTP. I'm trying to implement the "Foreign Keys" code-first example on the EF Team's Design Blog, http://blogs.msdn.com/efdesign/archive/2009/10/12/code-only-further-enhancements.aspx.

        public class Customer
        {
            public int Id { get; set; }
            public string CustomerDescription { get; set; }
            public IList<PurchaseOrder> PurchaseOrders { get; set; }
        }

        public class PurchaseOrder
        {
            public int Id { get; set; }
            public int CustomerId { get; set; }
            public Customer Customer { get; set; }
            public DateTime DateReceived { get; set; }
        }

        public class MyContext : ObjectContext
        {
            public MyContext(EntityConnection connection) : base(connection) { }

            public IObjectSet<Customer> Customers
            {
                get { return base.CreateObjectSet<Customer>(); }
            }
        }

    I use a ContextBuilder to configure MyContext:

        var builder = new ContextBuilder<MyContext>();

        var customerConfig = builder.Entity<Customer>();
        customerConfig.Property(c => c.Id).IsIdentity();

        var poConfig = builder.Entity<PurchaseOrder>();
        poConfig.Property(po => po.Id).IsIdentity();
        poConfig.Relationship(po => po.Customer)
                .FromProperty(c => c.PurchaseOrders)
                .HasConstraint((po, c) => po.CustomerId == c.Id);

    This works correctly when I'm adding new Customers, but not when I try to retrieve existing Customers. This code successfully saves a new Customer and all its child PurchaseOrders:

        using (var context = builder.Create(connection))
        {
            context.Customers.AddObject(customer);
            context.SaveChanges();
        }

    But this code only retrieves Customer objects; their PurchaseOrders lists are always empty:

        using (var context = _builder.Create(_conn))
        {
            var customers = context.Customers.ToList();
        }

    What else do I need to do to the ContextBuilder to make MyContext always retrieve all the PurchaseOrders with each Customer?
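    For reference, with the EF ObjectContext API related collections are not loaded just because a relationship is mapped; each query has to opt in. A hedged sketch of the two usual approaches, assuming the mapping above is otherwise correct (Include is the ObjectQuery method; lazy loading additionally requires the navigation property to be virtual so proxies can be generated):

        using (var context = _builder.Create(_conn))
        {
            // Option 1: eager-load the children as part of the query.
            var customers = context.CreateObjectSet<Customer>()
                                   .Include("PurchaseOrders")
                                   .ToList();

            // Option 2: turn on lazy loading instead (the POCO would need
            // 'public virtual IList<PurchaseOrder> PurchaseOrders').
            context.ContextOptions.LazyLoadingEnabled = true;
        }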


  • Move an object in the direction of a bezier curve?

    - by Sent1nel
    I have an object that I would like to make follow a Bezier curve, and I am a little lost right now as to how to make it do that based on time rather than on the points that make up the curve.

    The current system: each object in my scene graph is made from position, rotation and scale vectors. These vectors are used to form their corresponding matrices: scale, rotation and translation. The matrices are then multiplied in that order to form the local transform matrix. A world transform (usually the identity matrix) is then multiplied against the local transform.

        class CObject
        {
        public:
            // Local transform functions
            Matrix4f GetLocalTransform() const;
            void SetPosition(const Vector3f& pos);
            void SetRotation(const Vector3f& rot);
            void SetScale(const Vector3f& scale);

            // Local transform
            Matrix4f m_local;
            Vector3f m_localPosition;
            Vector3f m_localRotation; // rotation in degrees (xrot, yrot, zrot)
            Vector3f m_localScale;
        };

        Matrix4f CObject::GetLocalTransform() const
        {
            Matrix4f out(Matrix4f::IDENTITY);
            Matrix4f scale, rotation, translation;
            scale.SetScale(m_localScale);
            rotation.SetRotationDegrees(m_localRotation);
            translation.SetTranslation(m_localPosition);
            out = scale * rotation * translation;
            return out;
        }

    The big questions I have are:

    1) How do I orient my object to face the tangent of the Bezier curve?
    2) How do I move that object along the curve without just setting the object's position to that of a point on the Bezier curve?

    Here's an overview of the function thus far:

        void CNodeControllerPieceWise::AnimateNode(CObject* pSpatial, double deltaTime)
        {
            // Get the object's latest position.
            Vector3f posDelta = pSpatial->GetWorldTransform().GetTranslation();

            // Get position on curve
            Vector3f pos = curve.GetPosition(m_t);

            // Get tangent of curve
            Vector3f tangent = curve.GetFirstDerivative(m_t);
        }

    Edit: sorry it's not very clear. I've been working on this for ages and it's making my brain turn to mush. I want the object to be attached to the curve and face the direction of the curve. As for movement, I want the object to follow the curve based on time, so that it creates smooth movement along the whole curve.
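    For reference, a minimal sketch of both pieces, assuming the Vector3f/Matrix4f types above plus a hypothetical m_speed member (how much of the curve's parameter to cover per second) and a Normalize() method on Vector3f; the yaw/pitch construction is illustrative, since axis conventions vary by engine:

        // needs <cmath> for atan2f/asinf
        void CNodeControllerPieceWise::AnimateNode(CObject* pSpatial, double deltaTime)
        {
            // Advance the curve parameter with time, clamping at the end of the segment.
            m_t += m_speed * deltaTime;
            if (m_t > 1.0) m_t = 1.0;

            // Position comes straight from the curve.
            Vector3f pos = curve.GetPosition(m_t);
            pSpatial->SetPosition(pos);

            // Orientation: point the forward (z) axis along the normalized tangent.
            Vector3f tangent = curve.GetFirstDerivative(m_t);
            tangent.Normalize();
            const float RAD2DEG = 57.29578f;
            float yaw   = atan2f(tangent.x, tangent.z) * RAD2DEG; // heading around y
            float pitch = -asinf(tangent.y) * RAD2DEG;            // tilt up/down
            pSpatial->SetRotation(Vector3f(pitch, yaw, 0.0f));
        }

    Note that stepping m_t uniformly does not give uniform speed, because a Bezier curve's parameter is not arc length; constant-speed motion needs an arc-length reparameterization, e.g. a lookup table of cumulative segment lengths.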


  • jQuery animate sliding background image using mouse position won't stop... just keeps going

    - by simian
    I have a neat little jQuery script I am working on. Goal: a parent div with overflow hidden holds a larger div with an image. When I move the mouse to the left or right side of the parent div, the div with the image changes margin-left to move left or right. The problem is, if I move the mouse out of the parent div (left or right), the image keeps going. I need the image to stop when the mouse is not on the inside left or right edge of the parent div. Any ideas?

        <!DOCTYPE html>
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script>
        <script type="text/javascript">
        //<![CDATA[
        jQuery.noConflict();
        jQuery(document).ready(function(){
            jQuery('#container').mousemove(function(e){
                var parentWidth = jQuery('#container').width();
                var parentWidthLeft = Math.round(.1 * jQuery('#container').width());
                var parentWidthRight = Math.round(jQuery('#container').width() - parentWidthLeft);
                var parentHeight = jQuery('#container').height();
                var parentOffset = jQuery('#container').offset();
                var X = Math.round(e.pageX - parentOffset.left);
                var Y = Math.round(e.pageY - parentOffset.top);
                if (X < parentWidthLeft) {
                    jQuery('#image').animate({'left': '-' + parentWidth }, 5000);
                }
                if (X > parentWidthRight) {
                    jQuery('#image').animate({'left': parentWidth }, 5000);
                }
                if (X <= parentWidthRight && X >= parentWidthLeft) {
                    jQuery('#image').stop();
                }
                if (X < 1) {
                    jQuery('#image').stop();
                }
            });
            jQuery('#container').mouseleave(function(){
                jQuery('#image').stop();
            });
        });
        // ]]>
        </script>
        </head>
        <body>
        <div id="container" style="width: 500px; height: 500px; overflow: hidden; border: 10px solid #000; position: relative; margin: auto auto;">
            <div id="image" style="position: absolute; left: 0; top: 0;"><img src="http://dump4free.com/imgdump/1/Carmen-Electra_1wallpaper1024x768.jpg" alt="" /></div>
        </div>
        </body>
        </html>
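    One thing worth noting for context: mousemove fires continuously and each jQuery .animate() call is queued, so by the time the cursor leaves the container there can be many 5000 ms animations waiting to run, and a plain .stop() only halts the one currently playing. A hedged sketch of the usual remedy, clearing the queue before starting or stopping:

        if (X < parentWidthLeft) {
            // stop(true) clears the animation queue as well as the running animation
            jQuery('#image').stop(true).animate({'left': '-' + parentWidth}, 5000);
        }
        if (X > parentWidthRight) {
            jQuery('#image').stop(true).animate({'left': parentWidth}, 5000);
        }
        // ...and in mouseleave:
        jQuery('#image').stop(true);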


  • Starting jetty with spring xml as a background process/thread

    - by compass
    My goal is to set up a Jetty test server and inject a custom servlet to test some REST classes in my project. I was able to launch the server with Spring XML and run tests against it. The issue I'm having is that sometimes, after the server has started, the process stalls just before running the tests, as if Jetty didn't go to the background. It works every time on my computer, but it doesn't work when deployed to my CI server. It also doesn't work when I'm on VPN. (Strange.) The server should be completely initialized, since I am able to access it with a browser while the tests are stuck. Here is my Spring context XML:

        ....
        <bean id="servletHolder" class="org.eclipse.jetty.servlet.ServletHolder">
            <constructor-arg ref="courseApiServlet"/>
        </bean>

        <bean id="servletHandler" class="org.eclipse.jetty.servlet.ServletContextHandler"/>

        <!-- Adding the servlet holders to the handlers -->
        <bean id="servletHandlerSetter" class="org.springframework.beans.factory.config.MethodInvokingFactoryBean">
            <property name="targetObject" ref="servletHandler"/>
            <property name="targetMethod" value="addServlet"/>
            <property name="arguments">
                <list>
                    <ref bean="servletHolder"/>
                    <value>/*</value>
                </list>
            </property>
        </bean>

        <bean id="httpTestServer" class="org.eclipse.jetty.server.Server" init-method="start" destroy-method="stop" depends-on="servletHandlerSetter">
            <property name="connectors">
                <list>
                    <bean class="org.eclipse.jetty.server.nio.SelectChannelConnector">
                        <property name="port" value="#{settings['webservice.server.port']}" />
                    </bean>
                </list>
            </property>
            <property name="handler">
                <ref bean="servletHandler" />
            </property>
        </bean>

    Running the latest Jetty 8.1.8 server and Spring 3.1.3. Any idea?


  • gauge chart is not displaying anything

    - by Sandy
    I am trying to display the latest speed from a MySQL database on a gauge chart. I have tried so many things, but the gauge is not displayed. Please, can anyone help me? My code is attached; the PHP part prints the correct value, but I don't know why the gauge is not displayed.

        <?php
        $host="localhost";   // Host name
        $username="root";    // Mysql username
        $password="";        // Mysql password
        $db_name="mysql";    // Database name
        $tbl_name="gpsdb";   // Table name

        // Connect to server and select database.
        $con=mysql_connect("$host", "$username")or die("cannot connect");
        mysql_select_db("$db_name")or die("cannot select DB");

        $data = mysql_query("SELECT speed FROM gpsdb WHERE DeviceId=1234 ORDER BY TIME DESC LIMIT 1") or die(mysql_error());
        while ($nt = mysql_fetch_assoc($data)) {
            $speed = $nt['speed'];
            $jsonTable = json_encode($speed);
            echo $jsonTable;
        }
        ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <meta http-equiv="content-type" content="text/html; charset=utf-8"/>
            <title>Google Visualization API Sample</title>
            <script type="text/javascript" src="//www.google.com/jsapi"></script>
            <script type="text/javascript">
                google.load('visualization', '1', {packages: ['gauge']});
            </script>
            <script type="text/javascript">
                function drawVisualization() {
                    // Create and populate the data table.
                    var data = new google.visualization.DataTable(<?=$speed?>);

                    // Create and draw the visualization.
                    new google.visualization.Gauge(document.getElementById('visualization')).
                        draw(data);
                }
                google.setOnLoadCallback(drawVisualization);
            </script>
        </head>
        <body style="font-family: Arial;border: 0 none;">
            <div id="visualization" style="width: 600px; height: 300px;"></div>
        </body>
        </html>
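    One observation for context: the DataTable constructor expects a table description, not a bare number, so new google.visualization.DataTable(<?=$speed?>) yields an empty or invalid table. A hedged sketch of the usual shape, assuming $speed prints as a number:

        function drawVisualization() {
            // Build a one-row table: a label column and a numeric value column.
            var data = google.visualization.arrayToDataTable([
                ['Label', 'Value'],
                ['Speed', <?=$speed?>]
            ]);
            new google.visualization.Gauge(document.getElementById('visualization')).draw(data);
        }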


  • VB.Net 2008 IDE hanging - MSVB7.dll eating 100% CPU when editing code

    - by Andrew Backer
    I am having a problem with msvb7.dll eating 50%+ CPU on my dual core system. This usually lasts 10-30 seconds or so, during which time the IDE is non-responsive. This occurs when I do pretty much anything in the text editor, and can be replicated by simply adding blank lines to a function and then deleting them. Or pasting some code. Or... lots of stuff.

    - SP1 installed.
    - I had DevExpress' Refactor/CodeRush, components, and CodeIt.Right installed, but have removed all 3 of them. (I had installed the latest version of Refactor Pro! (9.3.4), perhaps the day before.)
    - I have tried a VS.NET repair. There is a KB that referenced some CPU destroying with VB, but it was included in SP1.

    Also:

    - The solution consists of ~30 VB projects and 2 C# projects.
    - 8 other developers aren't having any issues with this (or at least not the SAME issues, we all have 'em).
    - A clean get from TFS was done.
    - The project builds properly, and I can even debug.

    This doesn't seem to happen on really small solutions, but perhaps it does and it just goes away super quick. Any clues at all as to what might be causing this, or how to fix it? I REALLY don't want to lose another day uninstalling and reinstalling and patching and so on =) If that even fixes it.

    Here is the stack trace (Process Explorer) that I get from the threads window when msvb7.dll is churning:

        --- title in process explorer [threads] tab for process --------
        cpu: 49.28%   cswitch delta: 300 to 3500   startaddress: [msvb7.dll+0x4218c]
        msvb7.dll version: 9.0.30729.1

        --- actual stack trace -------
        ntkrnlpa.exe!KiUnexpectedInterrupt+0x121
        ntkrnlpa.exe!ZwYieldExecution+0x1c56
        ntkrnlpa.exe!KiDispatchInterrupt+0x72e
        NDIS.sys!NdisFreeToBlockPool+0x15e1
        // shortened stack trace. all of these are from msvb7.dll
        msvb7.dll+0x46ce7 <- 0x2676a <- 0x2698e <- 0x38031 <- 0x2659f <- 0x26644
        msvb7.dll+0x25f29 <- 0x2ac7a <- 0x27522 <- 0x274a0 <- 0x2b5ce <- 0x2b6e4
        msvb7.dll+0x67d0a <- 0x68551 <- 0x6817b <- 0x681f0 <- 0x67c38 <- 0x65fa8
        msvb7.dll+0x666c6 <- 0x6672c <- 0x6673d <- 0x6677c <- 0x667b4 <- 0x63c77
        msvb7.dll+0x63e97 <- 0x42c3a <- 0x42bc1 <- 0x41bd7
        kernel32.dll!GetModuleFileNameA+0x1b4

    This is the list of stuff from "copy info" in Help > About, shortened to a reasonable length:

        Microsoft Visual Studio 2008 | Version 9.0.30729.1 SP
        Microsoft Visual Studio 2008 Professional Edition - ENU Service Pack 1 (KB945140) KB945140
        Microsoft .NET Framework | Version 3.5 SP1
        Microsoft Visual Basic 2008
        Microsoft Visual C# 2008
        Microsoft Visual F# for Visual Studio 2008
        Microsoft Visual Studio 2008 Team Explorer | Version 9.0.30729.1
        Microsoft Visual Studio 2008 Tools for Office
        Microsoft Visual Web Developer 2008
        Hotfix for Microsoft Visual Studio 2008 Professional Edition - ENU
        KB944899, KB945282, KB946040, KB946308, KB946344, KB946581, KB947171,
        KB947173, KB947180, KB947540, KB947789, KB948127, KB946260, KB946458, KB948816
        Microsoft Recipe Framework Package 8.0
        Process Editor WIT Designer 1.4.0.0
        Process Editor for Microsoft Visual Studio Team Foundation Server, Version 1.4.0.0
        tangible T4 Editor 9.0
        tangible T4 Text Template Editor - T4 Editor tangibleprojectsystem 1.0
        Team Foundation Server Power Tools October 2008
        SQL Prompt 4.0 (disabled)


  • Run CGI in IIS 7 to work with GET without Requiring POST Request

    - by Mohamed Meligy
    I'm trying to migrate an old CGI application from an existing Windows 2003 server (IIS 6.0), where it works just fine, to a new Windows 2008 server with IIS 7.0, where we're getting the following problem. After setting up the module handler and everything, I find that I can only access the CGI application (rdbweb.exe) if I'm calling it via a POST request (a form submit from another page). If I just type in the URL of the file (issuing a GET request) I get the following error:

        HTTP Error 502.2 - Bad Gateway
        The specified CGI application misbehaved by not returning a complete set of HTTP headers.
        The headers it did return are "Exception EInOutError in module rdbweb.exe at 00039B44. I/O error 6. ".

    This is a very old application for one of our clients. When we tried to call the vendor, they said we would need to pay a ~$3000 annual support fee in order to start the talk about it. Of course I'm trying to avoid that! Note that:

    - If we create a normal HTML form that submits to rdbweb.exe, the CGI works normally. We can't use this as a workaround, though, because some pages in the application link to rdbweb.exe with a normal link, not a form submit.
    - If we run rdbweb.exe from a console (Command Prompt) window, not IIS, we get the normal HTML we'd typically expect; no problem.

    We have tried the following:

    - Ensuring the CGI module mapped to rdbweb.exe in IIS has all permissions (read, write, execute) enabled, and that all verbs are allowed, not just specific ones; we also tried allowing GET and POST explicitly.
    - Ensuring the application pool has "enable 32 bit applications" set to true.
    - Ensuring the website runs with an account that has full permissions on the rdbweb.exe file and the whole website (although we know "read" and "execute" should be enough).
    - Ensuring the machine-wide IIS setting for "ISAPI and CGI Restrictions" has the full path to rdbweb.exe allowed.
    - Making sure we have the latest Windows updates (for IIS 6 we found Knowledge Base articles describing bugs that require hotfixes, but nothing similar was found for IIS 7).
    - Changing the module from CGI to FastCGI; not working either.

    Now the only remaining possibility we have investigated is the following Microsoft Knowledge Base article: http://support.microsoft.com/kb/145661, which is about "CGI Error: The specified CGI application misbehaved by not returning a complete set of HTTP headers. The headers it did return are:". The article suggests the following solution: modify the source code for the CGI application's header output. The following is an example of a correct header:

        print "HTTP/1.0 200 OK\n";
        print "Content-Type: text/html\n\n\n";

    Unfortunately we do not have the source to try this out, and I'm not sure anyway whether this is the issue we're having. Can you help me with this problem? Is there a way to make the application work without requiring a POST request? Note that on the old IIS 6 server the application works just fine, and I could not find any special IIS configuration there that I might want to try the equivalent of on IIS 7.


  • Can't get MySQL to install

    - by James Marthenal
    I'd like to think I know what I'm doing in a Unix shell, but maybe not. I made a mistake in a configuration file for MySQL, so I decided to just uninstall it and then reinstall it, so I did:

        sudo apt-get --purge remove mysql-server mysql-server-5.0 mysql-client

    The files were deleted, so I then tried to install it, but it didn't ask me for a root password or anything else, so I uninstalled it using the above command again and then did:

        sudo rm -rf /etc/mysql
        sudo rm /etc/init.d/mysql
        sudo rm -rf /var/lib/mysql*

    I then restarted the computer and installed it again:

        sudo apt-get install mysql-server mysql-client

    It asked for a root password, and everything looked like it would work, until I saw this:

        $ sudo apt-get install mysql-server mysql-client
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          mysql-server-5.0
        Suggested packages:
          tinyca
        The following NEW packages will be installed:
          mysql-client mysql-server mysql-server-5.0
        0 upgraded, 3 newly installed, 0 to remove and 1 not upgraded.
        Need to get 0B/27.4MB of archives.
        After this operation, 86.7MB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        WARNING: The following packages cannot be authenticated!
          mysql-server-5.0 mysql-client mysql-server
        Authentication warning overridden.
        Preconfiguring packages ...
        Can't exec "/tmp/mysql-server-5.0.config.28101": Permission denied at /usr/share/perl/5.10/IPC/Open3.pm line 168.
        open2: exec of /tmp/mysql-server-5.0.config.28101 configure failed at /usr/share/perl5/Debconf/ConfModule.pm line 59
        mysql-server-5.0 failed to preconfigure, with exit status 255
        Selecting previously deselected package mysql-server-5.0.
        (Reading database ... 160284 files and directories currently installed.)
        Unpacking mysql-server-5.0 (from .../mysql-server-5.0_5.0.51a-24+lenny5_amd64.deb) ...
        Selecting previously deselected package mysql-client.
        Unpacking mysql-client (from .../mysql-client_5.0.51a-24+lenny5_all.deb) ...
        Selecting previously deselected package mysql-server.
        Unpacking mysql-server (from .../mysql-server_5.0.51a-24+lenny5_all.deb) ...
        Processing triggers for man-db ...
        Setting up mysql-server-5.0 (5.0.51a-24+lenny5) ...
        Stopping MySQL database server: mysqld.
        /var/lib/dpkg/info/mysql-server-5.0.postinst: line 144: /etc/mysql/conf.d/old_passwords.cnf: No such file or directory
        dpkg: error processing mysql-server-5.0 (--configure):
          subprocess post-installation script returned error exit status 1
        Setting up mysql-client (5.0.51a-24+lenny5) ...
        dpkg: dependency problems prevent configuration of mysql-server:
          mysql-server depends on mysql-server-5.0; however:
          Package mysql-server-5.0 is not configured yet.
        dpkg: error processing mysql-server (--configure):
          dependency problems - leaving unconfigured
        Errors were encountered while processing:
          mysql-server-5.0
          mysql-server
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Now I can't seem to figure out what to do. I just want to get a clean MySQL installation at this point. I'm running the latest stable release of Debian. All help is appreciated, thanks!

    Edit: I looked at this similar question, which suggests that I uninstall mysql-common, but when I try to do so I see:

        The following packages will be REMOVED:
          apache2 apache2-mpm-prefork apache2-utils apache2.2-common git-svn
          libapache2-mod-php5 libapache2-mod-python libapache2-svn libaprutil1
          libdbd-mysql-perl libdbd-mysql-rubygem libmysql-ruby libmysql-ruby1.8
          libmysql-rubygem libmysqlclient15-dev libmysqlclient15off librdf-perl
          librdf0 libserf-0-0 libsvn-perl libsvn1 mysql-client-5.0 mysql-common
          mytop ndn-apache22-php5 ndn-apache22-svn ndn-interpreters ndn-lighttpd
          ndn-netsaint-plugins ndn-perl-modules ndn-php5-cgi ndn-php5-xcache
          ndn-php53 ndn-php53-suhosin ndn-rubygems php5 php5-mcrypt php5-mysql
          proftpd proftpd-mod-mysql python-django python-mysqldb python-subversion
          python-svn subversion subversion-tools trac zendoptimizer
        0 upgraded, 0 newly installed, 48 to remove and 1 not upgraded.

    Eeek! Any suggestions?


  • How can I build pyv8 from source on FreeBSD against the v8 port?

    - by Utkonos
    I am unable to build pyv8 from source on FreeBSD. I have installed the /usr/ports/lang/v8 port, and I'm running into the following error. It seems that pyv8 wants to build v8 itself, even though v8 is already built and installed. How can I point pyv8 to the already installed location of v8?

        # python setup.py build
        Found Google v8 base on V8_HOME , update it to the latest SVN trunk at
        running build
        ====================
        INFO: Installing or updating GYP...
        --------------------
        INFO: Check out GYP from SVN ...
        DEBUG: make dependencies
        ERROR: Check out GYP from SVN failed: code=2
        DEBUG: "Makefile", line 43: Missing dependency operator
        "Makefile", line 45: Need an operator
        "Makefile", line 46: Need an operator
        "Makefile", line 48: Need an operator
        "Makefile", line 50: Need an operator
        "Makefile", line 52: Need an operator
        "Makefile", line 54: Missing dependency operator
        "Makefile", line 56: Need an operator
        "Makefile", line 58: Missing dependency operator
        "Makefile", line 60: Need an operator
        "Makefile", line 62: Missing dependency operator
        "Makefile", line 64: Need an operator
        "Makefile", line 66: Missing dependency operator
        "Makefile", line 68: Need an operator
        "Makefile", line 70: Missing dependency operator
        "Makefile", line 72: Need an operator
        "Makefile", line 73: Missing dependency operator
        "Makefile", line 75: Need an operator
        "Makefile", line 77: Missing dependency operator
        "Makefile", line 79: Need an operator
        "Makefile", line 81: Missing dependency operator
        "Makefile", line 83: Need an operator
        "Makefile", line 85: Missing dependency operator
        "Makefile", line 87: Need an operator
        "Makefile", line 89: Need an operator
        "Makefile", line 91: Missing dependency operator
        "Makefile", line 93: Need an operator
        "Makefile", line 95: Need an operator
        "Makefile", line 97: Need an operator
        "Makefile", line 99: Missing dependency operator
        "Makefile", line 101: Need an operator
        "Makefile", line 103: Missing dependency operator
        "Makefile", line 105: Need an operator
        "Makefile", line 107: Missing dependency operator
        "Makefile", line 109: Need an operator
        "Makefile", line 111: Missing dependency operator
        "Makefile", line 113: Need an operator
        "Makefile", line 115: Missing dependency operator
        "Makefile", line 117: Need an operator
        Error expanding embedded variable.
        ====================
        INFO: Patching the GYP scripts
        INFO: patch the Google v8 build/standalone.gypi file to enable RTTI and C++ Exceptions
        ====================
        INFO: building Google v8 with GYP for x64 platform with release mode
        --------------------
        INFO: build v8 from SVN ...
        DEBUG: make verifyheap=off component=shared_library visibility=on gdbjit=off liveobjectlist=off regexp=native disassembler=off objectprint=off debuggersupport=on extrachecks=off snapshot=on werror=on x64.release
        ERROR: build v8 from SVN failed: code=2
        DEBUG: [the same run of "Makefile" Missing dependency operator / Need an operator errors as above, lines 43-117]
        Error expanding embedded variable.

    The files installed by the v8 port are the following (in /usr/local):

        bin/d8
        include/v8.h
        include/v8-debug.h
        include/v8-preparser.h
        include/v8-profiler.h
        include/v8-testing.h
        include/v8stdint.h
        lib/libv8.so
        lib/libv8.so.1


  • Strange DNS issue with internal Windows DNS

    - by Brady
    I've encountered a strange issue with our internal Windows DNS infrastructure. We have a website hosted on Amazon EC2 with the DNS running on Amazon Route 53. In the publicly facing DNS we have a wildcard record set up as an A record alias pointing to an AWS Elastic Load Balancer sitting in front of our EC2 instances. For those who are not aware, the A record alias behaves like a CNAME record, but no extra lookup is required on the client side (see http://docs.amazonwebservices.com/Route53/latest/DeveloperGuide/CreatingAliasRRSets.html for more information).

    We have a secondary domain whose www subdomain is a CNAME pointing to a subdomain on the primary domain, which resolves against the wildcard entry. For example, www.secondary.com is a CNAME to sub1.primary.com, but there is no explicit entry for sub1.primary.com, so it resolves to the wildcard record. This setup works without issue publicly.

    The issue comes in our internal DNS at our corporate office, where we use the same primary domain for some internal-only facing sites. In this setup we have two Active Directory DNS servers, one Server 2003 instance and one Server 2008 R2 instance. The zone is an AD-integrated zone, but it is not the AD domain. In the internal DNS we have the wildcard record pointing to a third external domain, which is also hosted on Route 53 with an A record alias pointing to the same ELB instance. For example, *.primary.com is a CNAME to tertiary.com, so in effect you have www.secondary.com as a CNAME to *.primary.com, which is a CNAME to tertiary.com.

    In this setup, attempting to resolve www.secondary.com will fail. Clearing the cache on the Server 2003 instance will allow it to resolve once, but subsequent attempts will fail. It fails even with a clean cache against the 2008 R2 server. It seems that only Windows clients are affected; a Mac running OS X Mountain Lion does not experience this issue. I'm even able to replicate the issue using nslookup.

    Against the 2003 server, with a freshly cleaned cache, I receive the appropriate response from www.secondary.com:

        Non-authoritative answer:
        Name:    subdomain.primary.com
        Address: x.x.x.x (Public IP)
        Aliases: www.secondary.com

    Subsequent checks simply return:

        Non-authoritative answer:
        Name:    www.secondary.com

    If you set the type to CNAME you get the appropriate responses all the time. www.secondary.com gives you:

        Non-authoritative answer:
        www.secondary.com    canonical name = subdomain.primary.com

    And subdomain.primary.com gives you:

        subdomain.primary.com    canonical name = tertiary.com

    And setting the type back to A gives you the appropriate response for tertiary.com:

        Non-authoritative answer:
        Name:    tertiary.com
        Address: x.x.x.x (Public IP)

    Against the 2008 R2 server, things are a little different. Even with a clean cache, www.secondary.com returns just:

        Non-authoritative answer:
        Name:    www.secondary.com

    The CNAME records are returned appropriately. www.secondary.com returns:

        Non-authoritative answer:
        www.secondary.com    canonical name = subdomain.primary.com

    And subdomain.primary.com gives you:

        subdomain.primary.com    canonical name = tertiary.com
        tertiary.com    internet address = x.x.x.x (Public IP)
        tertiary.com    AAAA IPv6 address = x::x (Public IPv6)

    And setting the type back to A gives you the appropriate response for tertiary.com:

        Non-authoritative answer:
        Name:    tertiary.com
        Address: x.x.x.x (Public IP)

    Requests directly against subdomain.primary.com work correctly.


  • Weird networking problem (Linksys, Windows 7)

    - by Rohit Nair
    Okay, it's a bit tough to figure out where to start, but here is the basic summary of the issue: during general internet usage, there are times when any attempt to visit a website stalls at "Waiting for somedomain.com". This problem occurs in Firefox, IE and Chrome. No website will load, INCLUDING the router configuration page at 192.168.1.1. Curiously, ping works fine, and other network apps such as MSN Messenger continue to work; I can send and receive messages. Disconnecting and reconnecting to the wireless network seems to fix the problem for a bit, but there are times when it relapses into not loading after every 2-3 HTTP requests. Restarting the router seems to fix the issue, but it can crop up hours or days later.

    I have a CCNA cert and I know my way around the Windows family of operating systems, so I'm going to list all the things I've tried here. Other computers on the network seem to suffer the same problem, which makes me think it might be a specific problem with something in Win7. The random nature of this issue makes it a bit difficult to confirm, but I can definitely say that I have experienced this on the following systems:

    - Windows 7 64-bit on my desktop
    - Windows Vista 32-bit on my desktop (the desktop has 2 wireless NICs and the problem existed on both)
    - Windows Vista 32-bit on my laptop (both wireless and wired)
    - Windows XP SP3 on another laptop (both wireless and wired)

    Using Wireshark to sniff packets seemed to indicate that although HTTP requests were being SENT out, no packets were coming in to respond to the HTTP request. However, other network apps continued to work, i.e. I would still receive IMs on Windows Live Messenger.

    - Disabling IPv6 had no effect.
    - Updating the router firmware to the latest stock firmware by Linksys had no effect.
    - Switching to dd-wrt firmware had no effect.

    By "no effect" I mean that although the restart required by firmware updates fixed the problem at the time, it still came back.

    A couple of weeks back, after a LOT of googling and flipping of various options, I figured it might be a case of router slowdown (http://www.dd-wrt.com/wiki/index.php/Router%5FSlowdown) caused by the fact that I occasionally run a torrent client. I tried changing the configuration as suggested in that router slowdown link and restarted the router. However, I have not run the torrent client for 12 days now, and yet I still randomly experience this problem.

    Currently the computer I am using is running Windows 7 64-bit. I would just like to reiterate some of the reasons that I was confused by the issue:

    - Even the router config page at 192.168.1.1 would not load, indicating that it's not a problem with the WAN link, but probably a router issue or a local computer issue.
    - For some reason, disconnecting and reconnecting to the wireless network immediately seems to fix the problem.
    - Updating the router firmware, even switching to open source firmware, did nothing. So it seemed to be a computer issue.
    - On the other hand, I have not seen any mass outrage of people having networking problems with Windows 7 and Linksys routers, especially a problem of this sort, and I have tweaked every network setting I could think of.
    - Although HTTP seems to have trouble, ping works fine, DNS lookups work fine, and other networking apps work fine. However, if I disconnect from Windows Live Messenger and try to reconnect, it fails to reconnect. So although it could receive data over the existing TCP/IP connection, trying to start a new one failed?

    Does anyone have any further ideas on debugging or fixing this issue? I am reasonably certain there are no viruses or other malicious apps on my network, and I am also reasonably certain that nobody is accessing my router without my consent.

    - Router: Linksys WRT54G2 1.0 running dd-wrt firmware
    - Wireless card: Alfa AWUS036H
    - OS: Windows 7 64-bit

    EDIT: I tried switching to a clean wireless channel free from interference, but the problem still persisted. I tried connecting directly with a cable, but the problem still persisted.

    Signed, a very confused and bewildered geek whose knowledge seems to be useless in the face of this frustrating network issue.


  • Cannot get libcurl-devl on OpenSUSE 11.3

    - by Dai
    I have a server running OpenSUSE 11.3 that I can't really upgrade to a newer version of OpenSUSE (it's a managed appliance). I have some PHP shell scripts that need to run on the server that depend on both cURL and OpenSSL. I discovered that the PHP 5.3.3 binaries on the server included cURL but not OpenSSL, so I downloaded the latest PHP sources, extracted them, and ran:

        ./configure --with-openssl --with-zlib --with-bcmath --with-curl --with-readline --with-libxml --enable-sockets

    This failed: the configure script complained that it couldn't find cURL:

        checking for cURL support... yes
        checking for cURL in default path... not found
        configure: error: Please reinstall the libcurl distribution -
          easy.h should be in <curl-dir>/include/curl/

    I tried to install libcurl by running:

        zypper install libcurl-devl

    This failed too:

        doom:~/phpworksite/php-5.5.15 # zypper install libcurl-devl
        Loading repository data...
        Warning: Repository 'Updates for openSUSE 11.3 11.3-1.82' appears to be outdated. Consider using a different mirror or server.
        Warning: Repository 'openSUSE_11.3_Updates' appears to be outdated. Consider using a different mirror or server.
        Reading installed packages...
        'libcurl-devl' not found in package names. Trying capabilities.
        No provider of 'libcurl-devl' found.
        Resolving package dependencies...
        Nothing to do.

    However, libcurl-devl is listed when I run zypper search curl:

        doom:~/phpworksite/php-5.5.15 # zypper search curl
        Loading repository data...
        Warning: Repository 'Updates for openSUSE 11.3 11.3-1.82' appears to be outdated. Consider using a different mirror or server.
        Warning: Repository 'openSUSE_11.3_Updates' appears to be outdated. Consider using a different mirror or server.
        Reading installed packages...

        S | Name                        | Summary                                                  | Type
        --+-----------------------------+----------------------------------------------------------+--------
        i | curl                        | A Tool for Transferring Data from URLs                   | package
          | curlftpfs                   | Filesystem for mounting FTP hosts using FUSE and libcurl | package
          | libcurl-devel               | A Tool for Transferring Data from URLs                   | package
        i | libcurl4                    | cURL shared library version 4                            | package
        i | perl-WWW-Curl               | Perl extension interface for libcurl                     | package
        i | php5-curl                   | PHP5 Extension Module                                    | package
          | python-curl                 | Python module interface to the cURL library              | package
          | python-curl-doc             | Documentation for python-curl                            | package
          | xmms2-plugin-curl           | Curl Support for xmms2                                   | package
          | xmms2-plugin-curl-debuginfo | Debug information for package xmms2-plugin-curl          | package

    Here are the current repositories:

        doom:~/phpworksite/php-5.5.15 # zypper repos
        #  | Alias                                        | Name                                         | Enabled | Refresh
        ---+----------------------------------------------+----------------------------------------------+---------+--------
        1  | PHP_extensions_(openSUSE_11.3)               | PHP_extensions_(openSUSE_11.3)               | No      | Yes
        2  | Packman_11.3                                 | Packman_11.3                                 | Yes     | Yes
        3  | Updates for openSUSE 11.3 11.3-1.82          | Updates for openSUSE 11.3 11.3-1.82          | Yes     | Yes
        4  | openSUSE_11.3_OSS                            | openSUSE_11.3_OSS                            | Yes     | Yes
        5  | openSUSE_11.3_Updates                        | openSUSE_11.3_Updates                        | Yes     | Yes
        6  | openSUSE_BuildService_-_devel:languages:perl | openSUSE_BuildService_-_devel:languages:perl | No      | Yes
        7  | repo-debug                                   | openSUSE-11.3-Debug                          | No      | Yes
        8  | repo-non-oss                                 | openSUSE-11.3-Non-Oss                        | Yes     | Yes
        9  | repo-oss                                     | openSUSE-11.3-Oss                            | Yes     | Yes
        10 | repo-source                                  | openSUSE-11.3-Source                         | No      | Yes

    BTW, I did try building PHP without cURL, but it broke a lot of things, so apparently I really need cURL.

    My question: how can I install libcurl-devl (or just install cURL) so that I can build PHP?


  • Windows 7 Boot to VHD using a VHD clone of the system drive

    - by daveh551
    This seems like a not too difficult problem, and, after several hurdles, I'm maddeningly close. But I can't quite get there.

    I'm running Windows 7 in a development shop. I want to start using VS2010 to work on some stuff that won't be released for a while. My boss said no beta code on the production machine, but that I could run VS2010 for this project IF I could do it in an isolated environment, like a virtual PC. Well, I've used the beta and RC of Win7 on VPCs before, and it was painfully slow because of the VPC environment. But everyone has been singing the praises of Windows 7's boot-to-VHD capability, where only the disk is virtualized and you're actually running on the hardware. It's supposed to be a little slower, but nowhere near the speed penalty of VPC.

    I've spent a fair amount of time getting everything installed the way I want it. So I figured I'd just clone my system drive using Disk2VHD, boot off of that, and then install VS2010 onto it. (I keep most of my user data, including all my projects, in a separate partition, so that wouldn't have to be duplicated and would still be available.) Well, I had some difficulties with that, owing mainly to the fact that I was using an old version of Disk2VHD (get the latest if you're going to try it). But I did finally get it to boot. (Scott Hanselman has a good blog post on boot to VHD.) But it wasn't exactly what I was expecting or hoping for.

    What I expected was that the VHD would become the C: drive, and the original (physical) C: drive would be either hidden or mounted under a different letter, and thus isolated and protected from any changes. What you actually get is that the VHD becomes the D: drive AND you boot from the D: drive, BUT your original C: drive is still there. Which is sort of okay, EXCEPT that the Registry on the VHD is a clone of the Registry on the C: drive and includes many hard-coded references to C:. So the result is that some things come from (and modify) D: (the VHD), but some things come from (and modify) C:. (If you open a cmd prompt and do a SET to look at your environment variables, you will see a mixture of D:\ and C:\ paths.) So I don't really have an isolated environment. Most importantly, %ProgramFiles% is still set to C:\Program Files.

    What I really need is a tool that can access the registry files on the mounted VHD AS FILES, not as registry entries, and do a global search and replace on all the C:\ strings, changing them to D:. I haven't found such a program. (I've tried to do it with a program called Registry Replace, but, even when running as Administrator, there are certain entries that the Registry won't let you change.)

    Does anyone know of one? Or any other solution to my problem (other than starting from scratch with a clean VHD and installing Win7 and all my programs on it)?


  • Dual head setup for Ubuntu 10.04.1 and Windows XP Pro with same hardware configuration

    - by mejpark
    Hello. I have a Dell OptiPlex 360 workstation at work with 2 x ATI RV280 [Radeon 9200 PRO] graphics cards installed, which are attached to two identical 19" HII flat panel monitors. I'm using the open source Radeon driver with Ubuntu, and the proprietary drivers with Windows. The good news is that dual head configuration works for both OSes. The bad news is, I have to use a different hardware configuration for each OS to achieve this.

    Hardware config #1: dual monitors work for Windows XP Pro like this:

    - First display -> external VGA port
    - Second display -> DVI input on the graphics card

    Hardware config #2: dual monitors work for Ubuntu 10.04.1 like this:

    - First display -> VGA port on the graphics card
    - Second display -> DVI input on the graphics card

    I connected the displays according to config #2 and booted up Windows, which resulted in a mirror image on both screens. I was unable to log in, as the login box was not visible. I unplugged the VGA lead from the graphics card and plugged it into the external VGA port (config #1); Windows dual head works again, but the VGA-connected screen is not recognised by Ubuntu and remains in standby mode. Is it possible to configure a dual head setup for Ubuntu using config #1, or am I missing something?

    I tried setting up dual monitors using config #1 this morning, which didn't work. By default, there is no xorg.conf file in Ubuntu 10.04.1, so I generated one using:

        $ sudo X :2 -configure

        X.Org X Server 1.7.6
        Release Date: 2010-03-17
        X Protocol Version 11, Revision 0
        Build Operating System: Linux 2.6.24-27-server i686 Ubuntu
        Current Operating System: Linux harrier 2.6.32-24-generic #42-Ubuntu SMP Fri Aug 20 14:24:04 UTC 2010 i686
        Kernel command line: BOOT_IMAGE=/boot/vmlinuz-2.6.32-24-generic root=UUID=a34c1931-98d4-4a34-880c-c227a2936c4a ro quiet splash
        Build Date: 21 July 2010 12:47:34PM
        xorg-server 2:1.7.6-2ubuntu7.3 (For technical support please see http://www.ubuntu.com/support)
        Current version of pixman: 0.16.4
            Before reporting problems, check http://wiki.x.org
            to make sure that you have the latest version.
        Markers: (--) probed, (**) from config file, (==) default setting,
            (++) from command line, (!!) notice, (II) informational,
            (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
        (==) Log file: "/var/log/Xorg.2.log", Time: Mon Sep 13 10:02:02 2010
        List of video drivers:
            apm ark intel mach64 s3virge trident mga tseng ati nouveau neomagic
            i740 openchrome voodoo s3 i128 radeon siliconmotion nv ztv vmware
            v4l chips rendition savage sisusb tdfx geode sis r128 cirrus fbdev vesa
        (++) Using config file: "/home/michael/xorg.conf.new"
        (==) Using config directory: "/usr/lib/X11/xorg.conf.d"
        (II) [KMS] No DRICreatePCIBusID symbol, no kernel modesetting.

        Xorg detected your mouse at device /dev/input/mice.
        Please check your config if the mouse is still not operational,
        as by default Xorg tries to autodetect the protocol.

        Xorg has configured a multihead system, please check your config.

        Your xorg.conf file is /home/michael/xorg.conf.new

        To test the server, run 'X -config /home/michael/xorg.conf.new'

        ddxSigGiveUp: Closing log

        $ sudo X -config /home/michael/xorg.conf.new

        Fatal server error:
        Server is already active for display 0
            If this server is no longer running, remove /tmp/.X0-lock
            and start again.

        Please consult the The X.Org Foundation support at http://wiki.x.org for help.

        ddxSigGiveUp: Closing log

    I then booted Ubuntu in failsafe mode, dropped into a root shell, and executed X -config /home/michael/xorg.conf.new again. The screen went blank and turned off, so I reset the machine. There must be a way round this. Any help setting up a dual head config for Ubuntu using config #1 would be hugely appreciated. TIA, Mike


  • mono 3.0.2 + xsp + lighttpd delivers empty page

    - by Nefal Warnets
    I needed MVC 4 (and basic .NET 4.5) support, so I downloaded Mono 3.0.2 and deployed it on a lighttpd 1.4.28 installation, together with xsp-2.10.2 (the latest I could find). After going through the config tutorials I managed to get the FastCGI server to spawn, but all pages are served empty. Even if I go to nonexistent URLs or direct .aspx files, I get an empty HTTP 200 response. The log file on Debug level shows nothing suspicious. Here is the log:
        [2012-12-12 15:15:38Z] Debug Accepting an incoming connection.
        [2012-12-12 15:15:38Z] Debug Record received. (Type: BeginRequest, ID: 1, Length: 8)
        [2012-12-12 15:15:38Z] Debug Record received. (Type: Params, ID: 1, Length: 801)
        [2012-12-12 15:15:38Z] Debug Record received. (Type: Params, ID: 1, Length: 0)
        [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_SOFTWARE = lighttpd/1.4.28)
        [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_NAME = xxxx)
        [2012-12-12 15:15:38Z] Debug Read parameter. (GATEWAY_INTERFACE = CGI/1.1)
        [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_PORT = 80)
        [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_ADDR = xxxx)
        [2012-12-12 15:15:38Z] Debug Read parameter. (REMOTE_PORT = xxx)
        [2012-12-12 15:15:38Z] Debug Read parameter. (REMOTE_ADDR = xxxx)
        [2012-12-12 15:15:38Z] Debug Read parameter. (SCRIPT_NAME = /ViewPage1.aspx)
        [2012-12-12 15:15:38Z] Debug Read parameter. (PATH_INFO = )
        [2012-12-12 15:15:38Z] Debug Read parameter. (SCRIPT_FILENAME = /data/htdocs/ViewPage1.aspx)
        [2012-12-12 15:15:38Z] Debug Read parameter. (DOCUMENT_ROOT = /data/htdocs)
        [2012-12-12 15:15:38Z] Debug Read parameter. (REQUEST_URI = /ViewPage1.aspx)
        [2012-12-12 15:15:38Z] Debug Read parameter. (QUERY_STRING = )
        [2012-12-12 15:15:38Z] Debug Read parameter. (REQUEST_METHOD = GET)
        [2012-12-12 15:15:38Z] Debug Read parameter. (REDIRECT_STATUS = 200)
        [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_PROTOCOL = HTTP/1.1)
        [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_HOST = xxxxx)
        [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_CONNECTION = keep-alive)
        [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_CACHE_CONTROL = max-age=0)
        [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_USER_AGENT = Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11)
        [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_ACCEPT = text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8)
        [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_ACCEPT_ENCODING = gzip,deflate,sdch)
        [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_ACCEPT_LANGUAGE = en-US,en;q=0.8)
        [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_ACCEPT_CHARSET = ISO-8859-1,utf-8;q=0.7,*;q=0.3)
        [2012-12-12 15:15:38Z] Debug Record received. (Type: StandardInput, ID: 1, Length: 0)
        [2012-12-12 15:15:38Z] Debug Record sent. (Type: EndRequest, ID: 1, Length: 8)
    The lighttpd config:
        server.modules += ( "mod_fastcgi" )
        include "conf.d/mono.conf"
        $HTTP["host"] !~ "^vdn\." {
            $HTTP["url"] !~ "\.(jpg|gif|png|js|css|swf|ico|jpeg|mp4|flv|zip|7z|rar|psd|pdf|html|htm)$" {
                fastcgi.server += ( "" => ((
                    "socket" => mono_shared_dir + "fastcgi-mono-server",
                    "bin-path" => mono_fastcgi_server,
                    "bin-environment" => (
                        "PATH" => mono_dir + "bin:/bin:/usr/bin:",
                        "LD_LIBRARY_PATH" => mono_dir + "lib:",
                        "MONO_SHARED_DIR" => mono_shared_dir,
                        "MONO_FCGI_LOGLEVELS" => "Debug",
                        "MONO_FCGI_LOGFILE" => mono_shared_dir + "fastcgi.log",
                        "MONO_FCGI_ROOT" => mono_fcgi_root,
                        "MONO_FCGI_APPLICATIONS" => mono_fcgi_applications
                    ),
                    "max-procs" => 1,
                    "check-local" => "disable"
                )) )
            }
        }
    The referenced mono.conf:
        index-file.names += ( "index.aspx", "default.aspx" )
        var.mono_dir = "/usr/"
        var.mono_shared_dir = "/tmp/"
        var.mono_fastcgi_server = mono_dir + "bin/" + "fastcgi-mono-server4"
        var.mono_fcgi_root = server.document-root
        var.mono_fcgi_applications = "/:."
    The document root for this server is /data/htdocs; the ASP.NET files reside there. The lighttpd error logs show nothing. Any help is greatly appreciated!
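    When debugging a setup like this, it can help to take lighttpd out of the equation and drive the Mono FastCGI server by hand. This is a sketch, not a definitive fix - the application and root paths match the config above, but the TCP port choice is arbitrary:
        # Run the FastCGI server standalone on a TCP socket with verbose logging
        MONO_SHARED_DIR=/tmp /usr/bin/fastcgi-mono-server4 \
            /applications=/:/data/htdocs \
            /socket=tcp:127.0.0.1:9000 \
            /loglevels=All /printlog=True
        # Then point lighttpd at it with "host" => "127.0.0.1", "port" => 9000
        # in fastcgi.server instead of the unix socket, and watch the console.
    If pages render correctly this way, the problem lies in the lighttpd handoff rather than in Mono itself; if they are still empty, the console output should show whether requests are reaching the ASP.NET pipeline at all.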

    Read the article

  • Permissions restoring from Time Machine - Finder copy vs "cp" copy

    - by Ben Challenor
    Note: this question was starting to sprawl, so I rewrote it. I have a folder that I'm trying to restore from a Time Machine backup. Using cp -R works fine, but certain folders cannot be restored with either the Time Machine UI or Finder. Other users have reported similar errors and the cp -R workaround was suggested (e.g. Restoring from Time Machine - Permissions Error). But I wanted to understand:
    1. Why cp -R works when the Finder and the Time Machine UI do not.
    2. Whether I could prevent the errors by changing file permissions before the backup.
    There do indeed seem to be some permissions that Finder works with and some that it does not. I've narrowed the errors down to folders with the user ben (that's me) and the group wheel. Here's a simplified reproduction. I have four folders with the owner/group combinations I've seen so far:
        ben ~/Desktop/test $ ls -lea
        total 16
        drwxr-xr-x   7 ben  staff   238 27 Nov 14:31 .
        drwx------+ 17 ben  staff   578 27 Nov 14:29 ..
         0: group:everyone deny delete
        -rw-r--r--@  1 ben  staff  6148 27 Nov 14:31 .DS_Store
        drwxr-xr-x   3 ben  staff   102 27 Nov 14:30 ben-staff
        drwxr-xr-x   3 ben  wheel   102 27 Nov 14:30 ben-wheel
        drwxr-xr-x   3 root admin   102 27 Nov 14:31 root-admin
        drwxr-xr-x   3 root wheel   102 27 Nov 14:31 root-wheel
    Each contains a single file called file with the same owner/group:
        ben ~/Desktop/test $ cd ben-staff
        ben ~/Desktop/test/ben-staff $ ls -lea
        total 0
        drwxr-xr-x  3 ben  staff  102 27 Nov 14:30 .
        drwxr-xr-x  7 ben  staff  238 27 Nov 14:31 ..
        -rw-r--r--  1 ben  staff    0 27 Nov 14:30 file
    In the backup, they look like this:
        ben /Volumes/Deimos/Backups.backupdb/Ben’s MacBook Air/Latest/Macintosh HD/Users/ben/Desktop/test $ ls -leA
        total 16
        -rw-r--r--@ 1 ben  staff  6148 27 Nov 14:34 .DS_Store
         0: group:everyone deny write,delete,append,writeattr,writeextattr,chown
        drwxr-xr-x@ 3 ben  staff   102 27 Nov 14:51 ben-staff
         0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown
        drwxr-xr-x@ 3 ben  wheel   102 27 Nov 14:51 ben-wheel
         0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown
        drwxr-xr-x@ 3 root admin   102 27 Nov 14:52 root-admin
         0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown
        drwxr-xr-x@ 3 root wheel   102 27 Nov 14:52 root-wheel
         0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown
    Of these, ben-staff can be restored with Finder without errors. root-wheel and root-admin ask for my password and then restore without errors. But ben-wheel does not prompt for my password and gives the error: The operation can’t be completed because you don’t have permission to access “file”. Interestingly, I can restore the file from this folder by dragging it directly to my local drive (instead of dragging its parent folder), but when I do so its permissions are changed to ben/staff. Here are the permissions after the restore for the three folders that worked correctly, and the file from ben-wheel that was changed to ben/staff:
        ben ~/Desktop/test-restore $ ls -leA
        total 16
        -rw-r--r--@ 1 ben  staff  6148 27 Nov 14:46 .DS_Store
        drwxr-xr-x  3 ben  staff   102 27 Nov 14:30 ben-staff
        -rw-r--r--  1 ben  staff     0 27 Nov 14:30 file
        drwxr-xr-x  3 root admin   102 27 Nov 14:31 root-admin
        drwxr-xr-x  3 root wheel   102 27 Nov 14:31 root-wheel
    Can anyone explain this behaviour? Why do Finder and the Time Machine UI break with the ben / wheel permissions? And why does cp -R work (even without sudo)?
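    On the prevention side of question 2, one low-risk experiment (a sketch, not a verified fix) is to find anything under the home folder that is group-owned by wheel and move it to staff before the next backup runs; since the ben/staff folder restores cleanly, normalising the group may sidestep the Finder error. Test on a copy first:
        # List everything under $HOME whose group is wheel (dry run)
        find "$HOME" -group wheel -ls
        # If the list looks sane, switch those items to the staff group
        sudo find "$HOME" -group wheel -exec chgrp staff {} +
    The next Time Machine run would then back the items up with ben/staff ownership, the combination that restored without errors above.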

    Read the article

  • CPU Temperature sensor wrong?

    - by Matias Nino
    Everest Ultimate is suddenly telling me that the CPU temperature (and core temps) for my E6850 Core 2 Duo is 72 degrees Celsius. When I stress-test the machine, the temp goes up to 91 degrees and the CPU actually throttles. The system remains stable, though. For over a year now, my CPU has run very cool (40s) with a large commercial copper heatsink/fan that I bought separately. To top it off, I removed the cover of the box and felt the CPU heatsink, and it wasn't even warm. Is there such a thing as a CPU temp sensor showing the wrong readings? Any tips would help.
    UPDATE #1: The temp is also just as high in the BIOS, so that leads me to believe it's a CPU seating issue (even though I used thermal paste to seat it two years ago when I built the machine).
    UPDATE #2: Well, I removed the heatsink and cleaned off the original thermal paste (which was somewhat crusty). I polished the surface, re-applied some new paste, and reseated the heatsink. After powering it up, there was no noticeable change in the temp - idling at 74. I ran the stress test and it went up to 94 degrees before being 100% throttled. I let it sit at 94 degrees for 20 minutes straight and the computer didn't even flinch. I then immediately shut it off, opened the case, and felt around. The heatsink was completely cold to the touch. Even the copper rods were cold. The area near contact with the CPU was slightly warm but not hot to the touch. Then I ran RealTemp, which is supposedly more accurate, and it told me the CPU was at 104 degrees. (LOL) At this point, I'm thinking there's no doubt the CPU's sensor is wrong. Side note: the BIOS is on the latest version, so there's no option to flash, and reverting hasn't been known to help from what I've read. What pisses me off is that the false temps force the CPU to artificially throttle from 3GHz down to 2GHz, and my CPU fan cranks at full force all the time. Should I call Intel and tell them to send me another E6850?
    SOLUTION UPDATE: I switched the processor out with another one and got the same obscene temperatures with the new processor, followed by a heatsink that was cool to the touch. My suspicion of the heatsink was suddenly renewed. I swapped it out with the stock heatsink/fan and, lo and behold, the temperatures returned to the normal 35C-50C. Even though the thermal paste was visibly flattened out every time I removed it, it looks like the heatsink was still not pressing hard enough on the CPU to conduct the heat effectively. The heatsink is a Masscool 8Wa741, which screws into a standard position on a mount on the back of the mobo. The only thing I can surmise after two years of use is that, over time, the heatsink's pressure on the CPU gave way until heat was no longer being conducted effectively.
    Lessons learned: Intel CPUs can run SUPER HOT (upwards of 95C) and still be stable. Heatsinks need to be VERY firmly pressed against the CPU to conduct heat.

    Read the article

  • MySQL my.cnf file not being read, Ubuntu 10.04 64bit

    - by reallyordinary
    I've been researching this for a few hours with no luck. Basically, it looks like my server's my.cnf file isn't being read at all. I've searched my server, and there's only one my.cnf file on it, located at /etc/mysql/my.cnf. Its ownership is root:root. I'm running Ubuntu 10.04 64bit on a Linode.com server, with the latest versions of MySQL and PHP installed. I've edited the my.cnf file, commented out "skip-innodb", and set InnoDB to be the default storage engine using
        default-storage-engine = innodb
    and then restarted MySQL. But when I do SHOW ENGINES, MyISAM is still coming up as the default engine. Also - none of the InnoDB settings I've added to the my.cnf file are being read. For example, I have this in my.cnf:
        innodb_buffer_pool_size=4G
    But in phpMyAdmin, InnoDB is showing as having a buffer pool size of 8,192 KiB. Similarly, I have this in the my.cnf:
        innodb_data-file_path = ibdata1:500M:autoextend
    But in phpMyAdmin, it's reading as ibdata1:10M:autoextend. It doesn't look like the MyISAM info is being read from the my.cnf file either. The my.cnf file has skip-external-locking commented out, but it's showing as "on" in phpMyAdmin. So - yeah, it looks like nothing in the my.cnf file is being read at all. But the server still works; I'm running a Drupal site on it and it seems to operate fine. So MySQL seems to be drawing default settings from... some mysterious secret location. Any idea how I can make MySQL see and use this my.cnf file?
    Actually, wait - it may be being read after all; I'm not sure. I checked the error log and found this:
        101128 4:28:52 [ERROR] Cannot find or open table databasename/cache_apachesolr from the internal data dictionary of InnoDB though the .frm file for the table exists. Maybe you have deleted and recreated InnoDB data files but have forgotten to delete the corresponding .frm files of InnoDB tables, or you have moved .frm files to another database? or, the table contains indexes that this version of the engine doesn't support. See http://dev.mysql.com/doc/refman/5.1/en/innodb-troubleshooting.html how you can resolve the problem.
        InnoDB: Error: auto-extending data file ./ibdata1 is of a different size
        InnoDB: 640 pages (rounded down to MB) than specified in the .cnf file:
        InnoDB: initial 32000 pages, max 0 (relevant if non-zero) pages!
        InnoDB: Could not open or create data files.
        InnoDB: If you tried to add new data files, and it failed here,
        InnoDB: you should now edit innodb_data_file_path in my.cnf back
        InnoDB: to what it was, and remove the new ibdata files InnoDB created
        InnoDB: in this failed attempt. InnoDB only wrote those files full of
        InnoDB: zeros, but did not yet use them in any way. But be careful: do not
        InnoDB: remove old data files which contain your precious data!
        101128 4:28:52 [ERROR] Plugin 'InnoDB' init function returned error.
        101128 4:28:52 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        101128 4:28:52 [ERROR] /usr/sbin/mysqld: unknown variable 'innodb_lock_wait_timout=50'
        101128 4:28:52 [ERROR] Aborting
        101128 4:28:52 [Note] /usr/sbin/mysqld: Shutdown complete
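    A quick way to settle whether the file is being read is to ask mysqld itself which option files it searches and what it would pick up from them. These are standard mysqld flags:
        # Show the option files mysqld searches, in order
        mysqld --verbose --help 2>/dev/null | grep -A1 "Default options"
        # Show exactly which options mysqld would apply after reading them
        mysqld --print-defaults
    If --print-defaults echoes the misspelled innodb_lock_wait_timout from the error log above, the my.cnf is being read - but the unknown variable aborts InnoDB plugin startup, so none of the InnoDB settings ever take effect and MyISAM stays the default.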

    Read the article

  • Nginx + PHP-FPM = "Random" 502 Bad Gateway

    - by david
    I am running Nginx and proxying PHP requests via FastCGI to PHP-FPM for processing. I randomly receive 502 Bad Gateway error pages - I can reproduce the issue by clicking around my PHP websites very rapidly or refreshing a page for a minute or two. When I get the 502 error page, all I have to do is refresh the browser and the page loads properly. Here is my setup:
        nginx/0.7.64
        PHP 5.3.2 (fpm-fcgi) (built: Apr 1 2010 06:42:04)
        Ubuntu 9.10 (Latest 2.6 Paravirt)
    I compiled PHP-FPM using this ./configure directive:
        ./configure --enable-fpm --sysconfdir=/etc/php5/conf.d --with-config-file-path=/etc/php5/conf.d/php.ini --with-zlib --with-openssl --enable-zip --enable-exif --enable-ftp --enable-mbstring --enable-mbregex --enable-soap --enable-sockets --disable-cgi --with-curl --with-curlwrappers --with-gd --with-mcrypt --enable-memcache --with-mhash --with-jpeg-dir=/usr/local/lib --with-mysql=/usr/bin/mysql --with-mysqli=/usr/bin/mysql_config --enable-pdo --with-pdo-mysql=/usr/bin/mysql --with-pdo-sqlite --with-pspell --with-snmp --with-sqlite --with-tidy --with-xmlrpc --with-xsl
    My php-fpm.conf looks like this (the relevant parts):
        ...
        <value name="pm">
        <value name="max_children">3</value>
        ...
        <value name="request_terminate_timeout">60s</value>
        <value name="request_slowlog_timeout">30s</value>
        <value name="slowlog">/var/log/php-fpm.log.slow</value>
        <value name="rlimit_files">1024</value>
        <value name="rlimit_core">0</value>
        <value name="chroot"></value>
        <value name="chdir"></value>
        <value name="catch_workers_output">yes</value>
        <value name="max_requests">500</value>
        ...
    I've tried increasing max_children to 10 and it makes no difference. I've also tried setting it to 'dynamic', with max_children at 50 and start_servers at 5, without any difference. I have tried using both 1 and 5 nginx worker processes. My fastcgi_params conf looks like:
        fastcgi_connect_timeout 60;
        fastcgi_send_timeout 180;
        fastcgi_read_timeout 180;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 4 256k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_temp_file_write_size 256k;
        fastcgi_intercept_errors on;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param REDIRECT_STATUS 200;
    Nginx logs the error as:
        [error] 3947#0: *10530 connect() failed (111: Connection refused) while connecting to upstream, client: 68.40.xxx.xxx, server: www.domain.com, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.domain.com"
    PHP-FPM logs the following at the time of the error:
        [NOTICE] pid 17161, fpm_unix_init_main(), line 255: getrlimit(nofile): max:1024, cur:1024
        [NOTICE] pid 17161, fpm_event_init_main(), line 93: libevent: using epoll
        [NOTICE] pid 17161, fpm_init(), line 50: fpm is running, pid 17161
        [DEBUG] pid 17161, fpm_children_make(), line 403: [pool default] child 17162 started
        [DEBUG] pid 17161, fpm_children_make(), line 403: [pool default] child 17163 started
        [DEBUG] pid 17161, fpm_children_make(), line 403: [pool default] child 17164 started
        [NOTICE] pid 17161, fpm_event_loop(), line 111: ready to handle connections
    My CPU usage maxes out around 10-15% when I recreate the issue, and free memory (free -m) shows 130MB. I had this intermittent 502 Bad Gateway issue when I was using php5-cgi to service my PHP requests as well. Does anyone know how to fix this?
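    For what it's worth, the PHP-FPM lines logged at the moment of the 502 are startup messages, which suggests the whole pool is being restarted and nginx is connecting to 127.0.0.1:9000 during the gap (hence "Connection refused"). One commonly tried mitigation - a sketch, not a guaranteed fix - is to move from TCP to a Unix domain socket so there is no half-open port during respawn. The socket path here is an arbitrary choice, and the <value> syntax assumes the same XML-style php-fpm.conf quoted above:
        # php-fpm.conf (sketch) - listen on a Unix socket instead of 127.0.0.1:9000
        <value name="listen_address">/var/run/php-fpm.sock</value>
    Then point the nginx upstream at the same socket:
        # nginx vhost (sketch)
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_pass unix:/var/run/php-fpm.sock;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
    Alternatively, setting max_requests to 0 disables child recycling entirely, which removes the respawn window at the cost of never reclaiming memory from leaky scripts.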

    Read the article

  • Why do GPUs overheat?

    - by JAD
    About a year ago, I added a 9800GT (1GB version) and a Corsair CX500 PSU to an HP M8000N computer. A few weeks ago, the HDD overheated and I decided to transfer the GPU and PSU to a new build, which consists of:
        i3 @ 3.3GHz
        Gigabyte H61 Micro ATX mobo
        4GB RAM
        500GB WD HDD
        DVD RW drive
        Cooler Master Elite 430 tower
    Once I had Win7 up and running, I installed all the essential drivers that came with the Gigabyte mobo CD. However, whenever I tried installing the Graphics Media Accelerator driver, the computer would crash and enter an endless boot sequence on the next startup. I skipped installing this driver and installed the CD driver for the 9800GT, which by now is a year old. Everything was working fine; WEI rated my GPU at 6.6 for graphics and Aero performance. However, after updating my Nvidia drivers to the latest, the WEI dropped my rating to 3.3 for Aero and 4.7 for graphics performance. Just to make sure that everything was OK, I ran Bad Company 2 on medium settings. The first few minutes ran just fine at a smooth framerate, so I dismissed this as Windows being Windows. About 6 hours later, I ran BC2 again. This time I averaged anywhere from 2-5 FPS. I checked the GPU temperature through GPU-Z, and it came back as 120C. The problem with this is that the computer had been on for six hours up to that point. Wouldn't the card have experienced a reactor core meltdown a lot sooner than that? Granted, the computer was "sleeping" some of the time, but still... The next day I took out a temperature gun and ran some tests. I would point the laser at a very specific area on the reverse side of the card (not the fan or "front"), and compare the temp reading with GPU-Z. After leaving the system on idle for a few minutes, I ran BC2 twice. Here are the results:
        GPU-Z Reading / Temp Gun Reading / Time
        Null  / 22.3°C / Comp is off
        53°C  / 33.5°C / 1:49
        78°C  / 46°C   / 1:53  (first BC2 run; good framerate)
        102°C / 64.6°C / 2:01  (system is again on idle)
        113°C / 64.8°C / 2:10
        119°C / 71.8°C / 2:17  (second BC2 run; poor framerate)
    I should also mention that I took a temperature recording of another part of the GPU from 2:01-2:17. The temp in this area jumped from 75°C to 82.9°C in that time frame. This pretty much confirms that GPU-Z is reporting the temperature accurately, and the card is overheating. But I'd like to know why; the card is doing nothing and still the temperature climbs at a steady rate. I thoroughly cleaned the GPU and PSU with a can of compressed air when I salvaged them from the old HP M8000N computer, so dust can't be the issue. Similarly, the rest of the computer is brand new. I installed various Nvidia drivers, but no luck. It seems strange to me that a year-old card is suddenly failing on me; aren't they supposed to last at least two years? Could this be a driver issue? Is the motherboard faulty? Could the PSU be overfeeding the card on voltage? None of these seems likely, as the CPU, RAM, and the rest of the computer have worked flawlessly and stayed well within respectable temp ranges (the i3 lingers around 50C, the HDD stays at 30C, and so does the PSU). How can I pinpoint the issue?

    Read the article
