Search Results


  • How can I use Amazon's API in PHP to search for books?

    - by TerranRich
    I'm working on a Facebook app for book sharing, reviewing, and recommendations. I've scoured the web, searched Google using every search phrase I could think of, but I could not find any tutorials on how to access the Amazon.com API for book information. I signed up for an AWS account, but even the tutorials on their website didn't help me one bit. They're all geared toward using cloud computing for file storage and processing, but that's not what I want. I just want to access their API to search info on books. Kind of like how http://openlibrary.org/ does it, where it's a simple URL call to get information on a book (but their databases aren't nearly as populated as Amazon's). Why is it so hard to find the information I need on Amazon's AWS site? If anybody could help, I would greatly appreciate it.
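
    For reference, a minimal sketch of the kind of call involved. Amazon's book data is exposed through what Amazon at the time called the Product Advertising API (not the AWS infrastructure services), which works as a signed REST request; ACCESS_KEY, SECRET_KEY and ASSOCIATE_TAG below are placeholders for your own credentials, and the signing steps follow the documented v2 scheme of that era:

    <?php
    // Sketch: signed ItemSearch request against the Product Advertising API.
    $params = array(
        'Service'        => 'AWSECommerceService',
        'Operation'      => 'ItemSearch',
        'AWSAccessKeyId' => 'ACCESS_KEY',
        'AssociateTag'   => 'ASSOCIATE_TAG',
        'SearchIndex'    => 'Books',
        'Keywords'       => 'lord of the rings',
        'ResponseGroup'  => 'ItemAttributes',
        'Timestamp'      => gmdate('Y-m-d\TH:i:s\Z'),
    );
    ksort($params); // the signature requires parameters sorted in byte order

    $pairs = array();
    foreach ($params as $k => $v) {
        $pairs[] = rawurlencode($k) . '=' . rawurlencode($v);
    }
    $query = implode('&', $pairs);

    // Sign "GET\nhost\npath\nquery" with HMAC-SHA256 and base64-encode it.
    $toSign    = "GET\nwebservices.amazon.com\n/onca/xml\n" . $query;
    $signature = rawurlencode(base64_encode(hash_hmac('sha256', $toSign, 'SECRET_KEY', true)));

    $url = 'http://webservices.amazon.com/onca/xml?' . $query . '&Signature=' . $signature;
    $xml = simplexml_load_file($url); // book data comes back as XML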

    Read the article

  • Using Eclipse to connect to a remote machine [Linux server] using SSH

    - by lalit
    Hi, I am searching for a solution that lets me use an IDE such as NetBeans or Eclipse to work on .jsp and .java files on a remote machine running Linux. The .java and .jsp files live on the server, so an IDE that would let me access the server and update the files directly on it would be great. At the moment I use an SSH terminal to connect to the server to update and fetch files. If there were an easier way to get at the remote Linux machine from an IDE, that would be great. Please let me know. Thank you very much for your time. Regards.
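
    One low-tech option worth mentioning (an assumption on my part, not something from the original post): mount the remote directory locally over SSH with sshfs, then point any IDE at the mount as if it were a local project. The user, host and paths below are placeholders:

    # Sketch: mount the server's source directory locally via SSH (sshfs must be installed).
    mkdir -p ~/remote-src
    sshfs user@server.example.com:/var/www/app ~/remote-src
    # ... open ~/remote-src in Eclipse/NetBeans as a normal project, edit, save ...
    fusermount -u ~/remote-src   # unmount when done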

    Read the article

  • Synchronization between user space process and interrupt context code

    - by user1748950
    Recently I attended a couple of interviews. Out of all the kernel questions that were asked, there is one specific question I couldn't find a convincing answer to: how would you use the different synchronization techniques when sharing data between a user space process and an interrupt context function? My answer was, in the interrupt context code: 1. do *spin_lock_irqsave* 2. access the data buffer that is shared between the user space app and the kernel 3. do *spin_unlock_irqrestore* This was apparently not a convincing answer. Do I have to do irqsave and irqrestore in every instance of data access? Regards, Yogi
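
    For what it's worth, a minimal sketch of the pattern being described (the buffer and function names are made up; the user space side would reach the process-context function through a syscall path such as read() or ioctl()):

    #include <linux/interrupt.h>
    #include <linux/spinlock.h>
    #include <linux/string.h>

    static DEFINE_SPINLOCK(buf_lock);   /* protects shared_buf */
    static int shared_buf[64];          /* hypothetical buffer exposed to user space */

    /* Interrupt context: take the lock with local interrupts disabled. */
    static irqreturn_t my_irq_handler(int irq, void *dev)
    {
        unsigned long flags;

        spin_lock_irqsave(&buf_lock, flags);
        shared_buf[0]++;                          /* touch the shared data */
        spin_unlock_irqrestore(&buf_lock, flags);
        return IRQ_HANDLED;
    }

    /* Process (syscall) context: the same irqsave variant is needed here too;
     * a plain spin_lock() could deadlock if the IRQ fired on this CPU while
     * the lock was held. */
    static void snapshot_buffer(int *dst)
    {
        unsigned long flags;

        spin_lock_irqsave(&buf_lock, flags);
        memcpy(dst, shared_buf, sizeof(shared_buf));
        spin_unlock_irqrestore(&buf_lock, flags);
    }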

    Read the article

  • PHP - login to a remote server, through my own server, with HTTPS, cookies and proxy, and downloading the HTML

    - by Yunga Mohani
    Hello, so what I am trying to do is this: log in to the other server with a PHP script on my own server (either with my username and password, or with my cookies), then have access to the page I want to display/download. I want to write a PHP script, located on my own server, that automatically logs in to another server that uses HTTPS and a web form for login. After the login I have access to the page I am trying to download. I don't know whether it is possible to log in and download the HTML using only the cookies I already have in my browser from a previous login, or whether I need to do the login in my PHP script through some HTTPS login method. Can I do any of this with cURL or fsockopen, or what would be the best way to realize this? Thanks in advance!
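
    cURL handles this kind of scripted form login well. A minimal sketch, with the login URL, form field names and target page all placeholders for whatever the remote site actually uses:

    <?php
    // Sketch: POST the login form over HTTPS, keep the session cookies, fetch the page.
    $cookieFile = '/tmp/cookies.txt'; // cURL stores and replays session cookies here

    $ch = curl_init('https://remote.example.com/login');
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, 'username=me&password=secret'); // field names vary per site
    curl_setopt($ch, CURLOPT_COOKIEJAR, $cookieFile);   // write cookies after each request
    curl_setopt($ch, CURLOPT_COOKIEFILE, $cookieFile);  // send them on subsequent requests
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);     // follow the post-login redirect
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_exec($ch);

    // Same handle, same cookies: now fetch the protected page.
    curl_setopt($ch, CURLOPT_URL, 'https://remote.example.com/protected/page.html');
    curl_setopt($ch, CURLOPT_POST, false);
    $html = curl_exec($ch);
    curl_close($ch);

    Reusing cookies exported from your browser can work too, but scripting the login as above is more robust, since server-side sessions expire.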

    Read the article

  • Creating folders and uploading a file without using a web browser

    - by Ashpak
    I am using the Java API. We have a web application in which we want a file to be uploaded on the on-click event of a submit button. We don't want the browser to prompt for the username and password to get the access token; instead, we want to provide the username and password through code, so that we get the access token and the file is uploaded directly. Right now, OAuth 2 requires the username and password to be entered through the browser. Reading through the post, I see that we can authenticate the first time, obtain a refresh token, and then keep refreshing that token periodically, but our application requirements do not allow for this workflow. Is there any way to automate the process of entering the user credentials using Java code, or is there a library that keeps the browser out of the process?
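
    For what it's worth, the refresh-token exchange mentioned above is just an HTTP POST (standard OAuth 2, RFC 6749), so no browser is needed once the first consent has been granted. A rough sketch against a generic token endpoint; the URL, token and client credentials are placeholders:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;

    public class TokenRefresh {
        public static void main(String[] args) throws IOException {
            // Standard OAuth 2 refresh_token grant; endpoint and values are placeholders.
            String body = "grant_type=refresh_token"
                    + "&refresh_token=" + URLEncoder.encode("STORED_REFRESH_TOKEN", "UTF-8")
                    + "&client_id=" + URLEncoder.encode("CLIENT_ID", "UTF-8")
                    + "&client_secret=" + URLEncoder.encode("CLIENT_SECRET", "UTF-8");

            HttpURLConnection conn = (HttpURLConnection)
                    new URL("https://auth.example.com/oauth2/token").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            conn.getOutputStream().write(body.getBytes("UTF-8"));

            // The JSON response contains a fresh access_token to use for the upload.
            BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
            for (String line; (line = in.readLine()) != null; ) {
                System.out.println(line);
            }
            in.close();
        }
    }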

    Read the article

  • WAMP Can't have a folder named 'icons'

    - by Yxvasznalskje
    I'm having a problem with WAMP: I can't seem to have a folder called icons. This is where I keep all the icons used on the site, and none of them are showing up. If I rename the folder to something else it works, but with the name icons it doesn't. I've tried deleting it and making a new folder called icons, but it just won't work. The error is: Forbidden - You don't have permission to access /icons/ on this server. Why do I get this permission error? I am using Windows 7.
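
    Almost certainly this is not about your folder at all: Apache's stock configuration aliases the URL path /icons/ to Apache's own icon directory (used for directory listings), so requests for /icons/ never reach the site's folder. In a default WAMP install the httpd.conf contains something along these lines (the exact path varies by version; this is a sketch, not your literal config):

    # From Apache's default httpd.conf (path is version-dependent):
    Alias /icons/ "c:/wamp/bin/apache/apache2.x.x/icons/"
    <Directory "c:/wamp/bin/apache/apache2.x.x/icons">
        Options Indexes MultiViews
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>
    # Commenting out or renaming the Alias frees the /icons/ URL for the site.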

    Read the article

  • Expanded securityadmin

    - by user80652
    I'm aware that sysadmin is documented as the server role necessary for creating logins (SQL/Windows-integrated); nevertheless, I'm tasked with finding out whether there's any other server role (built-in or otherwise) that can be used. To be specific, I'm looking to set up one or two logins with access to create logins, create [database] users, and assign users to [database] roles. Potentially reset passwords too, but most of the logins are Windows-integrated, so that's not essential. These logins cannot have access to data at all, nor rights to update tables or create/update roles. It seems my only option so far is to give these two logins the securityadmin server role and, for the specific databases, configure them with db_securityadmin and db_accessadmin... but this configuration doesn't allow for creating logins.
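
    On SQL Server 2005 and later, one avenue worth testing (a sketch under the assumption that granular permissions fit the policy, not a verified least-privilege recipe; AdminLogin and TargetDb are placeholder names, and the database grants assume a matching database user exists):

    -- Sketch: granular grants instead of fixed roles.
    USE master;
    GRANT ALTER ANY LOGIN TO AdminLogin;   -- create/alter logins, reset SQL passwords

    USE TargetDb;
    GRANT ALTER ANY USER TO AdminLogin;    -- create/alter database users
    GRANT ALTER ANY ROLE TO AdminLogin;    -- manage database role membership

    -- Note: these grants confer no SELECT/UPDATE rights on data by themselves.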

    Read the article

  • Accessing a node of an XML object

    - by Lizard
    I am trying to access certain pieces of data from an XML file; here is the problem. The XML file looks like:

    <products>
      <product>
        ....
      </product>
      <product>
        ....
      </product>
      etc...
    </products>

    I know that the piece of data I need is in ($products->product->myProdNode). I have this mapping (and many others) stored in my database as a string, e.g. 'product->prodCode' or 'product->descriptions->short_desc'. How can I access this data using the strings stored in my database? Thanks for your help in advance!
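
    Since the stored paths mirror SimpleXML's arrow syntax, one approach (a sketch; the node names are placeholders from the question) is to split the string on '->' and walk the object one segment at a time:

    <?php
    // Sketch: resolve a stored path like 'product->descriptions->short_desc'
    // against a SimpleXMLElement loaded from the file.
    function resolve_path(SimpleXMLElement $xml, $path)
    {
        $node = $xml;
        foreach (explode('->', $path) as $step) {
            $node = $node->{$step};   // follow one child element per segment
        }
        return (string) $node;
    }

    $products = simplexml_load_file('products.xml');
    echo resolve_path($products, 'product->prodCode');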

    Read the article

  • HWID locking a PHP page?

    - by Rob
    Currently I sell a program that accesses my webpage. The program is HWID (hardware ID) locked, and the only reason I use the program to access the webpage, instead of direct access via a web browser, is so that I can use HWID authentication. However, I've just been told I can code a script to get computer information, such as the hardware ID etc. Is this actually possible completely server-side? If so, can I do it with PHP? If not, what language would this be, and what functions would I have to look into for this?

    Read the article

  • CakePHP ACL use case(s)

    - by Jonathan
    I have a simple web app in development, and I want to establish a few user groups: Admin, Doctors and Patients. Each group would have its access restricted to particular controller actions rather than individual content. So, for example, Doctors can view patient records (index and view actions) but cannot delete them. Usually I would create a groups model, assign the various users to a group, and filter in the beforeFilter() method to determine whether the user has access. But if ACL can do the job, why write the code, right? Thanks
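
    For comparison, the hand-rolled group check described above might look something like this in a CakePHP 1.x controller (a sketch; the group names and the allowed-action map are made up, and the user's group is assumed to be stored on the user record):

    <?php
    class PatientsController extends AppController {

        // Hypothetical map of group name => actions that group may run.
        var $accessMap = array(
            'Admin'    => array('index', 'view', 'add', 'edit', 'delete'),
            'Doctors'  => array('index', 'view'),
            'Patients' => array('view'),
        );

        function beforeFilter() {
            parent::beforeFilter();
            $group   = $this->Auth->user('group');  // group field on the user record
            $allowed = isset($this->accessMap[$group]) ? $this->accessMap[$group] : array();
            if (!in_array($this->action, $allowed)) {
                $this->redirect($this->Auth->loginAction); // or set a flash message
            }
        }
    }

    ACL removes the need to maintain that map by hand, which is exactly the trade-off the question is weighing.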

    Read the article

  • PHP and Javascript cookies

    - by shummel7845
    Can I access a cookie written with jQuery's cookie plug-in from PHP? I know you can't set JavaScript equal to PHP or vice versa, but IN ESSENCE, is $.cookie('var') = $_COOKIE['var']? Again, I know you can't set them equal to each other, but if I set a cookie in jQuery and then go to another page, can PHP access it? I've read lots of posts about this, but I can't seem to find an answer to this part. Note: if I look in Firefox's preferences, I can see the cookies are there, so I know they're set.
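
    In essence, yes: the plugin writes an ordinary browser cookie, which the browser then sends with the next request, where PHP picks it up in $_COOKIE. A quick sketch of both sides (the cookie name and value are placeholders):

    // Page 1 (JavaScript, with the jQuery cookie plugin loaded):
    $.cookie('var', 'hello');        // sets a cookie named "var" for this site

    <?php
    // Page 2 (PHP) - the browser sent the cookie along with this request:
    echo $_COOKIE['var'];            // prints "hello"

    The one caveat: PHP only sees the cookie on the *next* request after JavaScript sets it, and only if the cookie's path and domain match the page being requested.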

    Read the article

  • Complex SQL queries (DELETE)?

    - by Joe
    Hello all, I'm working with three tables, and for simplicity's sake let's call them tables A, B, and C. Both tables A and B have a column called id, as well as one other column, Aattribute and Battribute, respectively. Table C also has an id column, and two other columns which hold values for A.id and B.id. Now, in my code I have easy access to values for both Aattribute and Battribute, and I want to delete the matching row in C, so effectively I want to do something like this:

    DELETE FROM C
    WHERE aid=(SELECT id FROM A WHERE Aattribute='myvalue')
    AND bid=(SELECT id FROM B WHERE Battribute='myothervalue')

    But this obviously doesn't work. Is there any way to make a single complex query, or do I have to run three queries, where I first get the value of A.id with a SELECT on 'myvalue', then the same for B.id, and then use those in the final query? [Edit: it's not letting me comment, so in response to the first comment on this: I tried the above query and it did not work; I figured it just wasn't syntactically correct. Using MS Access, for what it's worth.]
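
    For what it's worth, Access tends to be happier when the subqueries are phrased with IN rather than =; a sketch of the same delete in that form, using the table and column names above:

    DELETE FROM C
    WHERE C.aid IN (SELECT A.id FROM A WHERE A.Aattribute = 'myvalue')
      AND C.bid IN (SELECT B.id FROM B WHERE B.Battribute = 'myothervalue');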

    Read the article

  • private virtual function in derived class

    - by user1706047
    class base {
    public:
        virtual void doSomething() = 0;
    };

    class derived : public base {
    private:
        virtual void doSomething() { cout << "Derived fn" << endl; }
    };

    Now if I do the following:

    base *b = new derived;
    b->doSomething(); // it calls the derived class fn even though it is private

    Question: 1. It's able to call the derived class fn even though that is private. How is this possible? Now if I change the inheritance access specifier from public to protected/private, I get a compilation error: "'type cast' : conversion from 'derived *' to 'base *' exists, but is inaccessible". Notes: I am aware of the concepts of the inheritance access specifiers, so in the second case, as it's derived private/protected, it is inaccessible. But the first question confuses me. Any input will be highly appreciated.

    Read the article

  • How do I represent document.getElementById('page1:form2:amount') in jQuery?

    - by Prady
    Hi, how can I represent document.getElementById('page1:form2:amount') in jQuery? I know I can use $('#amount') to access the id amount. The amount field is an <apex:inputText/> tag, which is how an input text is represented in a Visualforce page. To access this id I would need to use the page-form-id hierarchy. Thanks, Prady

    Update: code used in the Visualforce page:

    <apex:page controller="acontroller" id="page1">
      <apex:form id="form2">
        <apex:inputText value="{!amt}" id="amount" onchange="calenddate();"/>
      </apex:form>
    </apex:page>
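
    The catch is that jQuery treats the colon as a pseudo-class separator, so the generated id has to have its colons escaped (or be matched by suffix). Two sketched variants:

    // Escape each colon with a double backslash so jQuery treats it literally:
    var amt = $('#page1\\:form2\\:amount');

    // Or match on how the generated id ends, which survives container renames:
    var amt2 = $('input[id$="amount"]');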

    Read the article

  • Doesn't <asp:A runat="server" B="someValue" ... /> syntax violate one of the basic rules in the C# language?

    - by AspOnMyNet
    Assuming a server control of type A has a protected member M, then we are also able to access A.M by declaring control tag A on some aspx page: <asp:A runat="server" M="someValue" ... /> But isn't one of the rules in C# that protected members of class A can only be accessed from A and from classes derived from A? So doesn't the ability to access member A.M via the <asp:A M="someValue" ... /> syntax violate this rule, since we are basically accessing A.M from a class (the automatically generated aspx class) that is not derived from A?!

    Read the article

  • AsyncTask in a loop

    - by Ankuj
    How does one create an AsyncTask that keeps re-running itself after a fixed interval of time? E.g. get data from the server every 5 minutes and notify the caller thread that it has received the data. I searched on the forum but could not find much. What I have gathered so far is that: 1) a UI thread will call the AsyncTask; 2) onPreExecute gives UI thread access before executing; 3) onPostExecute gives UI thread access after executing. I don't need to show any progress update to the user. Also, the task will be destroyed when the app closes. Any tutorial for this will be helpful.
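
    One common pattern (a sketch; FetchTask is a hypothetical AsyncTask subclass that does the download and notifies the caller from its onPostExecute): drive the task from a Handler that re-posts itself every five minutes on the UI thread.

    import android.app.Activity;
    import android.os.Handler;

    public class PollingActivity extends Activity {
        // FetchTask is hypothetical: an AsyncTask<Void, Void, Void> doing the fetch.
        private static final long INTERVAL_MS = 5 * 60 * 1000;
        private final Handler handler = new Handler();   // created on the UI thread

        private final Runnable poller = new Runnable() {
            @Override
            public void run() {
                new FetchTask().execute();               // kick off one background fetch
                handler.postDelayed(this, INTERVAL_MS);  // schedule the next round
            }
        };

        @Override
        protected void onResume() {
            super.onResume();
            handler.post(poller);                        // start polling immediately
        }

        @Override
        protected void onPause() {
            super.onPause();
            handler.removeCallbacks(poller);             // stop when the UI goes away
        }
    }

    Tying the schedule to onResume/onPause also satisfies the "destroyed when the app closes" requirement, since the pending callback is removed when the activity leaves the foreground.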

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective.

    The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.

    Ray Tracing

    Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.

    Pin-Board Toys

    Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that became popular in the 80’s, and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

    PolyRay

    Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours of processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
    background Midnight_Blue

    static define matte surface { ambient 0.1 diffuse 0.7 }
    define matte_white texture { matte { color white } }
    define matte_black texture { matte { color dark_slate_gray } }
    define position_cylindrical 3
    define lookup_sawtooth 1
    define light_wood <0.6, 0.24, 0.1>
    define median_wood <0.3, 0.12, 0.03>
    define dark_wood <0.05, 0.01, 0.005>

    define wooden texture {
        noise surface {
            ambient 0.2
            diffuse 0.7
            specular white, 0.5
            microfacet Reitz 10
            position_fn position_cylindrical
            position_scale 1
            lookup_fn lookup_sawtooth
            octaves 1
            turbulence 1
            color_map(
                [0.0, 0.2, light_wood, light_wood]
                [0.2, 0.3, light_wood, median_wood]
                [0.3, 0.4, median_wood, light_wood]
                [0.4, 0.7, light_wood, light_wood]
                [0.7, 0.8, light_wood, median_wood]
                [0.8, 0.9, median_wood, light_wood]
                [0.9, 1.0, light_wood, dark_wood])
        }
    }
    define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 } }
    define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7 }
    define steely_blue texture { shiny { color black } }
    define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }

    viewpoint {
        from <4.000, -1.000, 1.000>
        at <0.000, 0.000, 0.000>
        up <0, 1, 0>
        angle 60
        resolution 640, 480
        aspect 1.6
        image_format 0
    }

    light <-10, 30, 20>
    light <-10, 30, -20>

    object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }
    object { sphere <0.000, 0.000, 0.000>, 1.00 chrome }
    object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }

    After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct; this will be fixed later.

    Modeling the Pin Board

    The frame of the pin-board is made up of three boxes and six cylinders; the front box is modeled using a clear, slightly reflective solid with the same refractive index as glass. The other shapes are modeled as metal.

    object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass }
    object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue }
    object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue }
    object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue }
    object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue }
    object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }

    In order to create the matrix of pins that make up the pin board, I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code.
    For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used: the position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated.

    The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.

    Windows Kinect

    The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

    Creating a Depth Field Animation

    The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below.

    The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below.

    Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

    Rendering a Test Frame

    In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
    The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below.

    The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

    Windows Azure Worker Roles

    The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

    Number of Servers | Cost
    1                 | $500
    16                | $8,000
    256               | $128,000

    As well as the cost of the servers, there would be additional costs for networking, racks etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling.

    The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

    Number of Worker Roles | Render Time | Cost
    1                      | 256 hours   | $30.72
    16                     | 16 hours    | $30.72
    256                    | 1 hour      | $30.72

    Using worker roles in Windows Azure provides the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

    Creating a Render Farm in Windows Azure

    The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

    On-Premise
      - Windows Kinect – used combined with the Kinect Explorer to create a stream of depth images.
      - Animation Creator – this application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
      - Process Monitor – this application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
      - Image Downloader – this application polls the image queue and downloads the rendered animation files once they are complete.

    Windows Azure
      - Azure Storage – queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

    The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
    The service definition for the worker role with the local storage configuration highlighted is shown below.

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceDefinition name="CloudRay" >
      <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
        <Imports>
        </Imports>
        <ConfigurationSettings>
          <Setting name="DataConnectionString" />
        </ConfigurationSettings>
        <LocalResources>
          <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
        </LocalResources>
      </WorkerRole>
    </ServiceDefinition>

    The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s.

    Each worker role will use the following process to render the animation frames:

    1. The worker process polls the job queue; if a job is available, the scene description file is downloaded from blob storage to local storage.
    2. PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
    3. DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
    4. The JPG file is uploaded from local storage to the images blob container.
    5. A message is placed on the images queue to indicate a new image is available for download.
    6. The job message is deleted from the job queue.
    7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.

    The code for this is shown below.

    public override void Run()
    {
        // Set environment variables
        string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
        string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

        LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
        string localStorageRootPath = rayStorage.RootPath;

        JobQueue jobQueue = new JobQueue("renderjobs");
        JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
        CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
        CloudRayBlob imageBlob = new CloudRayBlob("images");
        RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

        Frames = 0;

        while (true)
        {
            // Get the render job from the queue
            CloudQueueMessage jobMsg = jobQueue.Get();

            if (jobMsg != null)
            {
                // Get the file details
                string sceneFile = jobMsg.AsString;
                string tgaFile = sceneFile.Replace(".pi", ".tga");
                string jpgFile = sceneFile.Replace(".pi", ".jpg");

                string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
                string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
                string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

                // Copy the scene file to local storage
                sceneBlob.DownloadFile(sceneFilePath);

                // Run the ray tracer.
                string polyrayArguments =
                    string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
                Process polyRayProcess = new Process();
                polyRayProcess.StartInfo.FileName =
                    Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);
                polyRayProcess.StartInfo.Arguments = polyrayArguments;
                polyRayProcess.Start();
                polyRayProcess.WaitForExit();

                // Convert the image
                string dtaArguments =
                    string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
                Process dtaProcess = new Process();
                dtaProcess.StartInfo.FileName =
                    Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);
                dtaProcess.StartInfo.Arguments = dtaArguments;
                dtaProcess.Start();
                dtaProcess.WaitForExit();

                // Upload the image to blob storage
                imageBlob.UploadFile(jpgFilePath);

                // Add a download job.
                downloadQueue.Add(jpgFile);

                // Delete the render job message
                jobQueue.Delete(jobMsg);

                Frames++;
            }
            else
            {
                Thread.Sleep(1000);
            }

            // Log the worker role activity.
            roleLifecycleDataSource.Alive
                ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
        }
    }

    Monitoring Worker Role Instance Lifecycle

    In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

    public class RoleLifecycle : TableServiceEntity
    {
        public string ServerName { get; set; }
        public string Status { get; set; }
        public DateTime StartTime { get; set; }
        public DateTime EndTime { get; set; }
        public long SecondsRunning { get; set; }
        public DateTime LastActiveTime { get; set; }
        public int Frames { get; set; }
        public string Comment { get; set; }

        public RoleLifecycle()
        {
        }

        public RoleLifecycle(string roleName)
        {
            PartitionKey = roleName;
            RowKey = Utils.GetAscendingRowKey();
            Status = "Started";
            StartTime = DateTime.UtcNow;
            LastActiveTime = StartTime;
            EndTime = StartTime;
            SecondsRunning = 0;
            Frames = 0;
        }
    }

    A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame, to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine the effectiveness of the use of resources in the render farm.

    Rendering the Animation

    The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.
    <?xml version="1.0" encoding="utf-8"?>
    <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
      <Role name="CloudRayWorkerRole">
        <Instances count="16" />
        <ConfigurationSettings>
          <Setting name="DataConnectionString"
            value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
        </ConfigurationSettings>
      </Role>
    </ServiceConfiguration>

    About six minutes after deploying the application, the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.

    Five minutes after the first worker role became active, the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.

    With 16 worker roles up and running, it can be seen that one hour and 45 minutes of CPU time has been used to render 32 frames, with a render time of just under 10 minutes. At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour, more processing power will be required.

    Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
      <Role name="CloudRayWorkerRole">
        <Instances count="256" />
        <ConfigurationSettings>
          <Setting name="DataConnectionString"
            value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
        </ConfigurationSettings>
      </Role>
    </ServiceConfiguration>

    Six minutes after the new configuration has been applied, 75 new worker roles have activated and are processing their first frames. Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.

    We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.

    The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes, with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute.
    The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in distributing the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

    Completed Animation

    I’ve uploaded the completed animation to YouTube; a low resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles. The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

    Effective Use of Resources

    According to the CloudRay monitor statistics, the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As the usage for the worker role instances is billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application.

    The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

    Grid Computing Scenarios

    Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.

    - Windows Azure can provide massive compute power, on demand, in a matter of minutes.
    - The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
    - Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
    - No charges for inbound data transfer make the uploading of large data sets to the Windows Azure Storage services cost effective. (Transaction charges still apply.)

    Tips for using Windows Azure for Grid Computing Scenarios

    I found the implementation of a render farm using Windows Azure a fairly simple scenario to implement. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

    - Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
    - Windows Azure accounts are typically limited to 20 cores. If you need to use more than this, a call to support and a credit card check will be required.
    - Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
    - Monitor the utilization of the resources you are provisioning; ensure that you are not paying for worker roles that are idle.
    - If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
    - Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and a possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
    - Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal it could get very expensive!

    Read the article

  • Apache SSO through Kerberos using Machine Account

    - by watkipet
    I'm attempting to get Apache on Ubuntu 12.04 to authenticate users via Kerberos SSO to a Windows 2008 Active Directory server. Here are a few things that make my situation different:

    1. I don't have administrative access to the Windows Server (nor will I ever have access). I also cannot have any changes to the server made on my behalf.
    2. I've joined the Ubuntu server to the Active Directory using PBIS Open. Users can log into the Ubuntu server using their AD credentials. kinit also works fine for each user.
    3. Since I can't change AD (except for adding new machines and SPNs), I cannot add a service account for Apache on Ubuntu.
    4. Since I can't add a service account, I have to use the machine keytab (/etc/krb5.keytab), or at least use the machine password in another keytab. Right now I'm using the machine keytab and giving Apache read-only access (bad idea, I know).
    5. I've already added the SPN using net ads keytab add HTTP -U
    6. Since I'm using Ubuntu 12.04, the only encoding types that get added during "net ads keytab add" are arcfour-hmac, des-cbc-crc, and des-cbc-md5. PBIS adds the AES encoding types to the host and cifs principals when it joins the domain, but I have yet to get "net ads keytab add" to do this.
    7. ktpass and setspn are out of the question because of #1 above.
    8. I've configured (for Kerberos SSO) and tested both IE 8 and Firefox.

    I'm using the following configuration in my Apache site config:

    <Location /secured>
        AuthType Kerberos
        AuthName "Kerberos Login"
        KrbMethodNegotiate On
        KrbMethodK5Passwd On
        KrbAuthRealms DOMAIN.COM
        Krb5KeyTab /etc/krb5.keytab
        KrbLocalUserMapping On
        require valid-user
    </Location>

    When Firefox tries to connect, I get the following in Apache's error.log (LogLevel debug):

    [Wed Oct 23 13:48:31 2013] [debug] src/mod_auth_kerb.c(1628): [client 192.168.0.2] kerb_authenticate_user entered with user (NULL) and auth_type Kerberos
    [Wed Oct 23 13:48:31 2013] [debug] mod_deflate.c(615): [client 192.168.0.2] Zlib: Compressed 477 to 322 : URL /secured
    [Wed Oct 23 13:48:37 2013] [debug] src/mod_auth_kerb.c(1628): [client 192.168.0.2] kerb_authenticate_user entered with user (NULL) and auth_type Kerberos
    [Wed Oct 23 13:48:37 2013] [debug] src/mod_auth_kerb.c(994): [client 192.168.0.2] Using HTTP/[email protected] as server principal for password verification
    [Wed Oct 23 13:48:37 2013] [debug] src/mod_auth_kerb.c(698): [client 192.168.0.2] Trying to get TGT for user [email protected]
    [Wed Oct 23 13:48:37 2013] [debug] src/mod_auth_kerb.c(609): [client 192.168.0.2] Trying to verify authenticity of KDC using principal HTTP/[email protected]
    [Wed Oct 23 13:48:37 2013] [debug] src/mod_auth_kerb.c(652): [client 192.168.0.2] krb5_rd_req() failed when verifying KDC
    [Wed Oct 23 13:48:37 2013] [error] [client 192.168.0.2] failed to verify krb5 credentials: Decrypt integrity check failed
    [Wed Oct 23 13:48:37 2013] [debug] src/mod_auth_kerb.c(1073): [client 192.168.0.2] kerb_authenticate_user_krb5pwd ret=401 user=(NULL) authtype=(NULL)
    [Wed Oct 23 13:48:37 2013] [debug] mod_deflate.c(615): [client 192.168.0.2] Zlib: Compressed 477 to 322 : URL /secured

    When IE 8 tries to connect, I get:

    [Wed Oct 23 14:03:30 2013] [debug] src/mod_auth_kerb.c(1628): [client 192.168.0.2] kerb_authenticate_user entered with user (NULL) and auth_type Kerberos
    [Wed Oct 23 14:03:30 2013] [debug] mod_deflate.c(615): [client 192.168.0.2] Zlib: Compressed 477 to 322 : URL /secured
    [Wed Oct 23 14:03:30 2013] [debug] src/mod_auth_kerb.c(1628): [client 192.168.0.2] kerb_authenticate_user entered with user (NULL) and auth_type Kerberos
    [Wed Oct 23 14:03:30 2013] [debug] src/mod_auth_kerb.c(1240): [client 192.168.0.2] Acquiring creds for HTTP@apache_server
    [Wed Oct 23 14:03:30 2013] [debug] src/mod_auth_kerb.c(1385): [client 192.168.0.2] Verifying client data using KRB5 GSS-API
    [Wed Oct 23 14:03:30 2013] [debug] src/mod_auth_kerb.c(1401): [client 192.168.0.2] Client didn't delegate us their credential
    [Wed Oct 23 14:03:30 2013] [debug] src/mod_auth_kerb.c(1420): [client 192.168.0.2] GSS-API token of length 9 bytes will be sent back
    [Wed Oct 23 14:03:30 2013] [debug] src/mod_auth_kerb.c(1101): [client 192.168.0.2] GSS-API major_status:000d0000, minor_status:000186a5
    [Wed Oct 23 14:03:30 2013] [error] [client 192.168.0.2] gss_accept_sec_context() failed: Unspecified GSS failure. Minor code may provide more information (, )
    [Wed Oct 23 14:03:30 2013] [debug] mod_deflate.c(615): [client 192.168.0.2] Zlib: Compressed 477 to 322 : URL /secured

    Let me know if you'd like additional log and config files; the initial question is getting long enough.

    Read the article

  • IIS logs show sc-win32-status=64 but only through some networks

    - by wweicker
    I have an ASP.NET application running on a client server (W2k3, IIS6, .NET 2.0). FWIW, this is a Test instance; it hasn't been moved into Production yet, so it is not running under SSL, load balancing, etc. When I access one of the pages on their server from our office, the page gets hit once. Inspecting the IIS logs (c:\WINDOWS\system32\LogFiles\W3SVC1) shows a GET for that page; then I push a button on the page and the log file shows a POST. This seems to be working fine so far. Now when I remote into the client's network and access the page from one of their local machines, the log file shows a GET; then I push the button on the page and the log shows two POSTs at the same second. The first one shows status (sc-status, sc-substatus, sc-win32-status) 200 0 64, the second shows 200 0 0. In the log file, both POSTs are identical. Basically the log looks like this (except I masked some of the data):

    #Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status
    2009-08-11 20:19:32 x.x.x.x GET /File.aspx - 80 - y.y.y.y Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+6.0;+WOW64;+Trident/4.0;+SLCC1;+.NET+CLR+2.0.50727;+.NET+CLR+3.5.21022;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30618;+MDDR;+OfficeLiveConnector.1.4;+OfficeLivePatch.0.0) 200 0 0
    2009-08-11 20:19:45 x.x.x.x POST /File.aspx - 80 - y.y.y.y Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+6.0;+WOW64;+Trident/4.0;+SLCC1;+.NET+CLR+2.0.50727;+.NET+CLR+3.5.21022;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30618;+MDDR;+OfficeLiveConnector.1.4;+OfficeLivePatch.0.0) 200 0 64
    2009-08-11 20:19:45 x.x.x.x POST /File.aspx - 80 - y.y.y.y Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+6.0;+WOW64;+Trident/4.0;+SLCC1;+.NET+CLR+2.0.50727;+.NET+CLR+3.5.21022;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30618;+MDDR;+OfficeLiveConnector.1.4;+OfficeLivePatch.0.0) 200 0 0

    The problem is, the page is getting hit twice. The database performs an operation for the first request; then the second request detects that a duplicate operation is being performed and throws an error message. The users think their operation failed, but it actually succeeded. The error description of sc-win32-status 64 is: "The specified network name is no longer available." This leads me to believe, given that both POST requests show an HTTP status of 200, that the server is successful in serving the request, but the client is never notified and resubmits the request. How can I troubleshoot this? Any ideas what could be causing this behavior on their internal network only? I should mention, this is happening at two separate client sites, but does not happen at six of our other client sites, or in our office, or connecting to any of our eight clients over the web. What could be making this reproducible 100% of the time on their local network but 0% of the time anywhere else? Update: I found a very small number of the duplicated POST requests had sc-win32-status of 995 instead of 64 as originally reported. The error description of sc-win32-status=995 is: "The I/O operation has been aborted because of either a thread exit or an application request." This doesn't make any sense (considering I have full access to the code). I still don't understand how or why this issue is occurring, but the new error code leads me to believe it may not be a network issue after all, and I am now investigating the possibility of a random code bug.

    Read the article

  • sendmail cannot relay from itself

    - by Bernie
    I am running 3 CentOS 5.2 servers, and I have configured the servers to forward all messages addressed to root to be emailed to me via a .forward rule. This is working fine on two of the servers but not on the third. I have also tried copying the mail config files from the backup server and placing them on the file server and restarting sendmail. I also removed and reinstalled sendmail via yum, but the results are the same. I am not sure what the issue could be; they are all standard CentOS installs. Here is an example from the backup server, which is working, and the file server, which isn't. I am also going to include the mail log.

    Good, from the backup server:

    [root@backup ]# sendmail -v [email protected] < test.mail
    [email protected]... Connecting to [127.0.0.1] via relay...
    220 backup.localhost ESMTP Sendmail 8.13.8/8.13.8; Fri, 16 Oct 2009 10:23:50 -0700
    >>> EHLO backup.localhost
    250-backup.localhost Hello backup.localhost [127.0.0.1], pleased to meet you
    250-ENHANCEDSTATUSCODES
    250-PIPELINING
    250-8BITMIME
    250-SIZE
    250-DSN
    250-ETRN
    250-DELIVERBY
    250 HELP
    >>> MAIL From:<[email protected]> SIZE=73
    250 2.1.0 <[email protected]>... Sender ok
    >>> RCPT To:<[email protected]>
    >>> DATA
    250 2.1.5 <[email protected]>... Recipient ok
    354 Enter mail, end with "." on a line by itself
    >>> .
    250 2.0.0 n9GHNoGC020924 Message accepted for delivery
    [email protected]... Sent (n9GHNoGC020924 Message accepted for delivery)
    Closing connection to [127.0.0.1]
    >>> QUIT
    221 2.0.0 backup.localhost closing connection

    Bad, from the file server:

    [root@fileserver bernie]# sendmail -v [email protected] < test.mail
    [email protected]... Connecting to [127.0.0.1] via relay...
    220 fileserver.localhost ESMTP Sendmail 8.13.8/8.13.8; Fri, 16 Oct 2009 10:23:26 -0700
    >>> EHLO fileserver.localhost
    250-fileserver.localhost Hello fileserver.localhost [127.0.0.1], pleased to meet you
    250 ENHANCEDSTATUSCODES
    >>> MAIL From:<[email protected]>
    550 5.0.0 Access denied
    root... Using cached ESMTP connection to [127.0.0.1] via relay...
    >>> RSET
    250 2.0.0 Reset state
    >>> MAIL From:<>
    550 5.0.0 Access denied
    postmaster... Using cached ESMTP connection to [127.0.0.1] via relay...
    >>> RSET
    250 2.0.0 Reset state
    >>> MAIL From:<>
    550 5.0.0 Access denied
    Closing connection to [127.0.0.1]
    >>> QUIT
    221 2.0.0 fileserver.localhost closing connection

    Mail log:

    Oct 16 10:39:13 fileserver sendmail[28059]: n9GHdDom028059: from=root, size=72, class=0, nrcpts=1, msgid=<[email protected]>, relay=root@localhost
    Oct 16 10:39:13 fileserver sendmail[28060]: n9GHdDwl028060: tcpwrappers (fileserver.localhost, 127.0.0.1) rejection
    Oct 16 10:39:13 fileserver sendmail[28059]: n9GHdDom028059: [email protected], ctladdr=root (0/0), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30072, relay=[127.0.0.1] [127.0.0.1], dsn=5.0.0, stat=Service unavailable
    Oct 16 10:39:13 fileserver sendmail[28059]: n9GHdDom028059: n9GHdDon028059: DSN: Service unavailable
    Oct 16 10:39:13 fileserver sendmail[28059]: n9GHdDon028059: to=root, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=31096, relay=[127.0.0.1], dsn=5.0.0, stat=Service unavailable
    Oct 16 10:39:13 fileserver sendmail[28059]: n9GHdDon028059: n9GHdDoo028059: return to sender: Service unavailable
    Oct 16 10:39:13 fileserver sendmail[28059]: n9GHdDoo028059: to=postmaster, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=32120, relay=[127.0.0.1], dsn=5.0.0, stat=Service unavailable
    Oct 16 10:39:13 fileserver sendmail[28059]: n9GHdDon028059: Losing ./qfn9GHdDon028059: savemail panic
    Oct 16 10:39:13 fileserver sendmail[28059]: n9GHdDon028059: SYSERR(root): savemail: cannot save rejected email anywhere
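
    One detail worth noticing in the log above: the "tcpwrappers (fileserver.localhost, 127.0.0.1) rejection" line says the local relay connection is being refused by TCP wrappers, not by sendmail's own access rules. A hedged guess at the fix is to permit sendmail from localhost in /etc/hosts.allow on the file server, and then compare that file (and /etc/hosts.deny) against the two working servers:

    # /etc/hosts.allow - allow sendmail connections from the local host
    sendmail : 127.0.0.1 localhost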

    Read the article

  • nginx: problem configuring a proxy_pass

    - by Ofer Bar
    I'm converting a web app from Apache to nginx. In Apache's httpd.conf I have: ProxyPass /proxy/ http:// ProxyPassReverse /proxy/ http:// The idea is that the client sends this URL: http://web-server-domain/proxy/login-server-addr/loginUrl.php?user=xxx&pass=yyy and the web server calls: http://login-server-addr/loginUrl.php?user=xxx&pass=yyy My nginx.conf is attached below and it is not working. At the moment it looks like it is calling the server, but returning an application error. This seems promising, but any attempt to debug it has failed! I can't trace any of the calls, as nginx refuses to place them in the error file. Also, placing echo statements on the login server did not help either, which is weird. The nginx documentation isn't very helpful about this. Any suggestion on how to configure a proxy_pass? Thanks!

    user nginx;
    worker_processes 1;

    #error_log /var/log/nginx/error.log;
    error_log /var/log/nginx/error.log notice;
    #error_log /var/log/nginx/error.log info;

    pid /var/run/nginx.pid;

    events {
        worker_connections 1024;
    }

    http {
        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';

        access_log /var/log/nginx/access.log main;

        sendfile on;
        #tcp_nopush on;

        #keepalive_timeout 0;
        keepalive_timeout 65;

        #gzip on;

        # The default server
        server {
            rewrite_log on;
            listen 80;
            server_name _;

            #charset koi8-r;
            #access_log logs/host.access.log main;

            #root /var/www/live/html;
            index index.php index.html index.htm;

            location ~ ^/proxy/(.*$) {
                #location /proxy/ {
                # rewrite ^/proxy(.*) http://$1 break;
                proxy_set_header X-Forwarded-Host $host;
                proxy_set_header X-Forwarded-Server $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_buffering off;
                proxy_pass http://$1;
                #proxy_pass "http://173.231.134.36/messages_2.7.0/loginUser.php?userID=ofer.fly%40gmail.com&password=y4HTD93vrshMNcy2Qr5ka7ia0xcaa389f4885f59c9";
                break;
            }

            location / {
                root /var/www/live/html;
                #if ( $uri ~ ^/proxy/(.*) ) {
                # proxy_pass http://$1;
                # break;
                #}
                #try_files $uri $uri/ /index.php;
            }

            error_page 404 /404.html;
            #location = /404.html {
            # root /usr/share/nginx/html;
            #}

            # redirect server error pages to the static page /50x.html
            #error_page 500 502 503 504 /50x.html;
            #location = /50x.html {
            # root /usr/share/nginx/html;
            #}

            # proxy the PHP scripts to Apache listening on 127.0.0.1:80
            #location ~ \.php$ {
            # proxy_pass http://127.0.0.1;
            #}

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                #root html;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                #fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script_name;
                fastcgi_param SCRIPT_FILENAME /var/www/live/html$fastcgi_script_name;
                include fastcgi_params;
            }

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #location ~ /\.ht {
            # deny all;
            #}
        }

        # Load config files from the /etc/nginx/conf.d directory
        include /etc/nginx/conf.d/*.conf;
    }

    Read the article

  • nginx php-fpm empty site

    - by Katafalkas
    I am writing this after being stuck trying to fix it all night. We have recently migrated to a new server. Before the migration we had been running our site on nginx + CGI (script). After the migration we decided to try Apache + mod_php. It was rather terrible, and I would like to migrate back to nginx, but this time I want it with php-fpm (as people say it's cool). So I followed a number of guides and I think I did everything correctly. As well as that, I have our old config files for the "server" section, which I reviewed and placed into the new config. I ended up with an empty site when I enter our URL. (By empty I mean a blank page: no text, errors or anything at all.) In the access log there are some weird errors like: 123.242.148.54 - - [22/Mar/2012:06:08:11 +0200] "-" 400 0 "-" "-" My guess is that php-fpm is not working, but I am not sure how to confirm it. Maybe someone could give some help? It would be much appreciated. My nginx config:

    user nginx;
    worker_processes 12;

    error_log /var/log/nginx/error.log;
    #error_log /var/log/nginx/error.log notice;
    #error_log /var/log/nginx/error.log info;

    pid /var/run/nginx.pid;

    events {
        worker_connections 4096;
    }

    http {
        include /etc/nginx/mime.types;
        #default_type application/octet-stream;

        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';

        access_log /var/log/nginx/access.log main;

        sendfile on;
        #tcp_nopush on;

        #keepalive_timeout 0;
        keepalive_timeout 65;
        tcp_nodelay on;

        gzip on;
        gzip_disable "MSIE [1-6]\.(?!.*SV1)";

        fastcgi_buffer_size 256k;
        fastcgi_buffers 4 256k;

        server {
            listen 80;
            server_name www.example.com;

            access_log /var/log/nginx/example.access.log;
            error_log /var/log/nginx/error.log;

            root /home/www/example;
            index index.php;

            client_max_body_size 50M;
            #error_page 404 /404.html;

            # phpMyAdmin
            location /phpmyadmin {
                root /usr/share/;
                error_log /var/log/nginx/phpmyadmin.log;
                try_files $uri $uri/ /index.php;
            }

            location ~ ^/phpmyadmin/.*\.php$ {
                root /usr/share/;
                error_log /var/log/nginx/phpmyadmin.log;
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
            }

            # Munin
            location /monitoring {
                root /var/www/;
                auth_basic "Restricted";
                auth_basic_user_file /etc/nginx/conf.d/monitoring_users;
                error_log /var/log/nginx/monitoring.log;
                index index.html;
            }

            # The site
            location / {
                try_files $uri $uri/ /index.php/?$uri&$args;
            }

            # PHP interpreter
            location ~ \.php {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
            }
        }
    }
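
    Two quick checks worth trying (assumptions on my part, not from the original post). First, confirm php-fpm is actually listening on the port the config points at. Second, a blank page from FastCGI very often means SCRIPT_FILENAME never reached PHP; on some distributions the stock fastcgi_params file does not define it, in which case it has to be added to the PHP location explicitly:

    # Is anything listening on 127.0.0.1:9000?
    netstat -lnpt | grep :9000

    # If fastcgi_params lacks SCRIPT_FILENAME, add this inside the "location ~ \.php" block:
    # fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;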

    Read the article

  • proftpd, dynamic IP, and filezilla: port troubles

    - by Yami
    The basic setup: two computers, one running proftpd, one attempting to connect via FileZilla. Both Linux (Xubuntu on the server, Kubuntu on the client). Both are at the moment behind a router on a residential (read: dynamic IP) connection; the client is a laptop I plan to take away from the home network, so I'll need this to work externally. I have my router set up to forward specific ports to each machine and, where possible, have plugged those numbers into proftpd (via gadmin, double-checking the config file) and FileZilla.

    Connecting in active mode using the internal IP works:

        Status: Connecting to 192.168.1.139:8085...
        Status: Connection established, waiting for welcome message...
        Response: 220 Crossroads FTP
        Command: USER <redacted>
        Response: 331 Password required for <redacted>
        Command: PASS *******
        Response: 230 Anonymous access granted, restrictions apply
        Command: OPTS UTF8 ON
        Response: 200 UTF8 set to on
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/" is the current directory
        Command: TYPE I
        Response: 200 Type set to I
        Command: PORT 192,168,1,52,153,140
        Response: 200 PORT command successful
        Command: LIST
        Response: 150 Opening ASCII mode data connection for file list
        Response: 226 Transfer complete
        Status: Directory listing successful

    Attempting to connect via the domain name, however, leads to issues: in active mode, PORT is the last command the server's logs show as received, and in passive mode it's PASV. This leads me to believe I'm being redirected to a bad port.

    Active sample:

        Status: Resolving address of <url>
        Status: Connecting to <ip:port>
        Status: Connection established, waiting for welcome message...
        Response: 220 Crossroads FTP
        Command: USER <redacted>
        Response: 331 Password required for <redacted>
        Command: PASS *******
        Response: 230 Anonymous access granted, restrictions apply
        Command: OPTS UTF8 ON
        Response: 200 UTF8 set to on
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/" is the current directory
        Command: TYPE I
        Response: 200 Type set to I
        Command: PORT 174,111,127,27,153,139
        Response: 200 PORT command successful
        Command: LIST
        Error: Connection timed out
        Error: Failed to retrieve directory listing

    Passive sample:

        Status: Resolving address of ftp.bonsaiwebdesigns.com
        Status: Connecting to 174.111.127.27:8085...
        Status: Connection established, waiting for welcome message...
        Response: 220 Crossroads FTP
        Command: USER yamikuronue
        Response: 331 Password required for yamikuronue
        Command: PASS *******
        Response: 230 Anonymous access granted, restrictions apply
        Command: OPTS UTF8 ON
        Response: 200 UTF8 set to on
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/" is the current directory
        Command: TYPE I
        Response: 200 Type set to I
        Command: PASV
        Response: 227 Entering Passive Mode (64,95,64,197,101,88).
        Command: LIST
        Error: Connection timed out
        Error: Failed to retrieve directory listing

    In both cases the log file ends at PORT or PASV; there's no record of ever receiving a LIST command. Just above that I can see the attempt to connect actively via the internal IP, which does indeed include a LIST command. My config file includes "PassivePorts 20001-26999", which are the port forwards I set up for the FTP server, and "Port 8085", which is also forwarded to the same machine. I also have a MasqueradeAddress set up to prevent the server from reporting its internal IP, which was an earlier issue I had.
    I think what I'm asking is: is there another setting someplace I have to change to get this setup to work?
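    For comparison, a minimal sketch of the passive-mode directives as they might look in proftpd.conf; the DDNS host name below is a placeholder. Note that PassivePorts takes two space-separated values rather than a dashed range, so "PassivePorts 20001-26999" is worth double-checking in the actual file:

        Port 8085
        # must resolve to the router's CURRENT public address;
        # a stale DNS record here sends passive clients to the wrong host
        MasqueradeAddress ftp.example.dyndns.org
        # two space-separated values, not 20001-26999
        PassivePorts 20001 26999

    One detail in the passive sample stands out: the 227 reply advertises 64.95.64.197, while the control connection went to 174.111.127.27. If MasqueradeAddress resolves to that stale 64.95.x.x address, the client will open its data connection against the wrong host, which would explain the timeout.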

    Read the article

  • PPTP connection fails with errors 800/806

    - by Mark S. Rasmussen
    I've got a client (Server 2008 R2) that won't connect to our production-environment PPTP VPN server (Server 2003, running RRAS). The server is behind a firewall that has TCP 1723 open as well as GRE. Other clients at our office are able to connect just fine. Our office is behind a Juniper SSG5-Serial firewall, but all outgoing traffic is allowed, and multiple other clients are able to connect to VPN servers without issues. I've also set up a completely different VPN server on another network outside of our office; the functioning clients connect to it just fine, while the Server 2008 R2 machine doesn't. Thus it's definitely a problem with this machine in particular. I've rebooted it and disabled the firewall; no dice on either count. I've run PPTPSRV and PPTPCLNT on the server/client and they're able to communicate perfectly, indicating there's no problem using either TCP 1723 or GRE. The Server 2008 R2 machine is also running as a VPN server itself (incoming connection), and that's working perfectly. We have the issues whether or not there are active incoming connections. I'm not sure what my next debugging step would be; any suggestions?

    EDIT: The event log on the server has the following warning from RasMan:

        A connection between the VPN server and the VPN client xxx.xxx.xxx.xxx has been established, but the VPN connection cannot be completed. The most common cause for this is that a firewall or router between the VPN server and the VPN client is not configured to allow Generic Routing Encapsulation (GRE) packets (protocol 47). Verify that the firewalls and routers between your VPN server and the Internet allow GRE packets. Make sure the firewalls and routers on the user's network are also configured to allow GRE packets. If the problem persists, have the user contact the Internet service provider (ISP) to determine whether the ISP might be blocking GRE packets.

    Obviously this points to GRE as a potential problem. But seeing as I have other clients connecting without problems, and PPTPSRV and PPTPCLNT are able to communicate, I suspect this might be a red herring.

    EDIT: Here are the anonymized events logged by the client in chronological order:

        CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY has started dialing a VPN connection using a per-user connection profile named ZZZ. The connection settings are:
        Dial-in User = XXX\YYY
        VpnStrategy = PPTP
        DataEncryption = Require
        PrerequisiteEntry =
        AutoLogon = No
        UseRasCredentials = Yes
        Authentication Type = CHAP/MS-CHAPv2
        Ipv4DefaultGateway = No
        Ipv4AddressAssignment = By Server
        Ipv4DNSServerAssignment = By Server
        Ipv6DefaultGateway = Yes
        Ipv6AddressAssignment = By Server
        Ipv6DNSServerAssignment = By Server
        IpDnsFlags = Register primary domain suffix
        IpNBTEnabled = Yes
        UseFlags = Private Connection
        ConnectOnWinlogon = No.

        CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY is trying to establish a link to the Remote Access Server for the connection named ZZZ using the following device:
        Server address/Phone Number = XXX.YYY.ZZZ.KKK
        Device = WAN Miniport (PPTP)
        Port = VPN3-4
        MediaType = VPN.

        CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY has successfully established a link to the Remote Access Server using the following device:
        Server address/Phone Number = XXX.YYY.ZZZ.KKK
        Device = WAN Miniport (PPTP)
        Port = VPN3-4
        MediaType = VPN.

        CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The link to the Remote Access Server has been established by user XXX\YYY.

        CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY dialed a connection named ZZZ which has failed. The error code returned on failure is 806.

    Running Wireshark on the client shows it repeatedly sending a "71 Configuration Request", while the server-side capture shows the incoming client requests arriving but apparently never answered (screenshots omitted). Given that this is GRE traffic, I think that rules out GRE being blocked. The question is: why doesn't the server reply? Comparing the Configuration Request the server receives from the non-functioning client (to which no response is sent) with the one it receives from the working client, they seem identical to me, except for differing keys and magic numbers, and the fact that one client receives a response while the other doesn't.
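    One way to narrow this down further is to capture on both the client and the server with a filter that isolates just the PPTP control channel and GRE. A sketch of the capture filter (the same BPF expression works in Wireshark's capture filter field and in tcpdump):

        tcp port 1723 or ip proto 47

    If the client's GRE Configuration Requests appear in the server-side capture (as they do here) but no GRE ever leaves the server in reply, the fault is on the server or its host firewall rather than in transit, which matches the suspicion that GRE blocking is a red herring.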

    Read the article
