Search Results

Search found 4341 results on 174 pages for 'child'.

Page 137 of 174

  • How to avoid the exception “Substitution controls cannot be used in cached User Controls or cached Master Pages”

    - by DigiMortal
    Recently I wrote an example about using user controls with donut caching. Because cache substitutions are not allowed inside partially cached controls, you may get the error "Substitution controls cannot be used in cached User Controls or cached Master Pages" when breaking this rule. In this posting I will introduce some strategies that help to avoid this error.

    How does the Substitution control check its location? The Substitution control uses the following check in its OnPreRender method:

        protected internal override void OnPreRender(EventArgs e)
        {
            base.OnPreRender(e);
            for (Control control = this.Parent; control != null;
                 control = control.Parent)
            {
                if (control is BasePartialCachingControl)
                {
                    throw new HttpException(SR.GetString("Substitution_CannotBeInCachedControl"));
                }
            }
        }

    It traverses the control tree upwards from its parent, looking for at least one control that is partially cached. If such a control is found, the exception is thrown.

    Reusing the functionality. If you want to handle the situation yourself when your control may cause the exception mentioned above, you can use the same code. I modified the code shown previously into a method that can easily be moved to a user controls base class if you have one. If you don't, you can use it in the controls where you need this check:

        protected bool IsInsidePartialCachingControl()
        {
            for (Control control = Parent; control != null;
                control = control.Parent)
                if (control is BasePartialCachingControl)
                    return true;

            return false;
        }

    Now it is up to you how to handle the situation where your control with substitutions is a child of some partially cached control. You can also add some debug-level output here so you can see exactly which controls in the control hierarchy are cached and cause problems.

    Read the article

  • Lookup Viewer

    - by Geertjan
    The Maven integrated view that I showed yesterday I was able to create because I happened to know that an implementation of SubprojectProvider and LogicalViewProvider are in the Lookup of Maven projects. With that knowledge, I was able to use and even delegate to those implementations. But what if you don't know that those implementations are in the Lookup of the Project object? In the case of the Maven Project implementation, you could look in the source code of the Maven Project implementation, at the "getLookup" method. However, any other module could be putting its own objects into that Lookup, dynamically, i.e., at runtime. So there's no way of knowing what's in the Lookup of any Project object or any other object with a Lookup. But now imagine that you have a Lookup Viewer, as a tool during development, which you would exclude when distributing the application. Whenever new objects are found in the Lookup, the viewer displays them. You could install the Lookup Viewer into NetBeans IDE, or any other NetBeans Platform application, and then get a quick impression of what's actually in the Lookup when you select a different item in the application during development. Here it is (though I vaguely remember someone else writing something similar): Above, a Maven Project is selected. The Lookup Window shows that, among many other classes, an implementation of SubprojectProvider and LogicalViewProvider are found in the Lookup when the Maven Project is selected. If an item in the Lookup Window has its own Lookup, the content of that Lookup is displayed as child nodes of the Lookup, etc, i.e., you can explore all the way down the Lookup of each item found within objects found within the current selection. (What's especially fun is seeing the SaveCookieImpl being added and removed from the Lookup Window when you make/save a change in a document.) Another example is below, showing the Lookup Window installed in a custom application created during a course at MIT in Boston: A small trick I had to apply is that I always show the previous Lookup, since the current Lookup, when you select one of the Nodes in the Lookup Window, would be the Lookup of the Lookup Window itself! If anyone is interested in this, I can publish the NetBeans module providing the above window to the NetBeans update center. 

    Read the article

  • algorithm for project euler problem no 18

    - by Valentino Ru
    Problem number 18 from Project Euler's site is as follows: By starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23.

        3
        7 4
        2 4 6
        8 5 9 3

    That is, 3 + 7 + 4 + 9 = 23. Find the maximum total from top to bottom of the triangle below:

        75
        95 64
        17 47 82
        18 35 87 10
        20 04 82 47 65
        19 01 23 75 03 34
        88 02 77 73 07 63 67
        99 65 04 28 06 16 70 92
        41 41 26 56 83 40 80 70 33
        41 48 72 33 47 32 37 16 94 29
        53 71 44 65 25 43 91 52 97 51 14
        70 11 33 28 77 73 17 78 39 68 17 57
        91 71 52 38 17 14 91 43 58 50 27 29 48
        63 66 04 68 89 53 67 30 73 16 69 87 40 31
        04 62 98 27 23 09 70 98 73 93 38 53 60 04 23

    NOTE: As there are only 16384 routes, it is possible to solve this problem by trying every route. However, Problem 67 is the same challenge with a triangle containing one hundred rows; it cannot be solved by brute force, and requires a clever method! ;o)

    The formulation of this problem does not make clear whether the "Traversor" is greedy, meaning that it always chooses the child with the higher value, or whether the maximum over every single walkthrough is asked for. The NOTE says that it is possible to solve this problem by trying every route. That tells me it is also possible without doing so! This leads to my actual question: assuming the greedy walk is not the maximum, is there any algorithm that finds the maximum walkthrough value without trying every route and that doesn't act like the greedy algorithm? I implemented an algorithm in Java, putting the values first in a node structure and then applying the greedy algorithm. The result, however, is considered wrong by Project Euler.

        sum = 0;

        void findWay(Node node){
            sum += node.value;
            if(node.nodeLeft != null && node.nodeRight != null){
                if(node.nodeLeft.value > node.nodeRight.value){
                    findWay(node.nodeLeft);
                }else{
                    findWay(node.nodeRight);
                }
            }
        }
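    For what it's worth, the standard non-greedy answer here is bottom-up dynamic programming: collapse the triangle one row at a time so that each cell ends up holding the best total reachable beneath it. The sketch below is a minimal Python illustration of that idea (not the asker's Java code); it assumes the triangle is given as a list of rows of integers.

        def max_path_total(triangle):
            # Start from the bottom row and work upwards: each cell becomes
            # its own value plus the better of the two cells beneath it.
            best = list(triangle[-1])
            for row in reversed(triangle[:-1]):
                best = [value + max(best[i], best[i + 1])
                        for i, value in enumerate(row)]
            return best[0]

        small = [[3], [7, 4], [2, 4, 6], [8, 5, 9, 3]]
        print(max_path_total(small))  # 23

    Unlike the greedy walk, this considers every cell exactly once, so the same approach also handles the 100-row triangle of Problem 67.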

    Read the article

  • Small script to look for Project Replication actions that have failed

    - by Trond Strømme
    Today, when looking at a couple of projects on a ZFS 7320 Storage Appliance, I noticed on one project that one of its replication actions had failed; as I hadn't checked the Recent Alerts log yet, I was not aware of this. I decided to write a small script to check if there were others that had failed. Nothing fancy, just a loop through all projects: look at the project's replication child, compare the values of the last_sync and last_try properties, and print the result if they're not equal. (There are probably more sensible ways of doing this, but at least it involves me getting the chance to put on my headphones and do just a little bit of coding.)

    script:

        // this script will locate failed project level replication
        // it will look at the sync times for 'last_sync' and 'last_try'
        // and compare these, if they deviate you should investigate.
        // NOTE! this code is offered 'as is'. Run at your own risk,
        // it will probably work as intended, but in no way can I
        // (or Oracle) be held responsible if your server starts behaving
        // like a three year old kid in a candy store.. (not that mine do,
        // they are very well behaved boys...)
        run('configuration');
        run('storage');
        printf('Host: %s, pool: %s\n', get('owner'), get('pool'));
        run('cd /');
        run('shares');
        proj = list();
        printf("total projects: %d\n", proj.length + '\n');
        // just for project level replication
        for (i = 0; i < proj.length; i++) {
            run('select ' + proj[i]);
            run('replication');
            // get all replication actions
            preps = list();
            for (j = 0; j < preps.length; j++) {
                run('select ' + preps[j]);
                last_sync = get('last_sync');
                last_try = get('last_try');
                // printf("target %s\n", get('target')); // why the flip does this not get the proper name?
                if (!(last_sync.valueOf() === last_try.valueOf())) {
                    printf("sync has failed for %s %s\n", proj[i], get('target'));
                } else {
                    // printf("OK %s %s\n", proj[i], get('target'));
                }
                run('done'); // done with the replica action
            }
            run('done');
            run('done');
        }
        printf("finished\n");

    For more on how to run the script, or on testing it, please look at my previous post. Sample output:

        Host: elb1sn01, pool: exalogic
        total projects: 45
        sync has failed for ACSExalogicSystem cb3a24fe-ad60-c90f-d15d-adaafd595639
        finished

    Read the article

  • Rotate camera around player and set new forward directions

    - by Samurai Fox
    I have a 3rd-person camera which can rotate around the player. When I look at the back of the player and press forward, the player goes forward. Then I rotate 360 degrees around the player, and the "forward direction" is tilted by 90 degrees. So with every full 360-degree turn there is a 90-degree change of direction. For example, when the camera is facing the right side of the player and I press the button to move forward, I want the player to turn to the left and make that the "new forward". I have a Player object with a Camera as a child object. The Camera object has the camera script. Inside the camera script there are Player and Camera classes. The Player object itself has the Input Controller. Also, I'm making this script primarily for a joystick/controller. My camera script so far:

        using UnityEngine;
        using System.Collections;

        public class CameraScript : MonoBehaviour
        {
            public GameObject Target;
            public float RotateSpeed = 10, FollowDistance = 20, FollowHeight = 10;

            float RotateSpeedPerTime, DesiredRotationAngle, DesiredHeight,
                  CurrentRotationAngle, CurrentHeight, Yaw, Pitch;
            Quaternion CurrentRotation;

            void LateUpdate()
            {
                RotateSpeedPerTime = RotateSpeed * Time.deltaTime;
                DesiredRotationAngle = Target.transform.eulerAngles.y;
                DesiredHeight = Target.transform.position.y + FollowHeight;
                CurrentRotationAngle = transform.eulerAngles.y;
                CurrentHeight = transform.position.y;
                CurrentRotationAngle = Mathf.LerpAngle(CurrentRotationAngle, DesiredRotationAngle, 0);
                CurrentHeight = Mathf.Lerp(CurrentHeight, DesiredHeight, 0);
                CurrentRotation = Quaternion.Euler(0, CurrentRotationAngle, 0);
                transform.position = Target.transform.position;
                transform.position -= CurrentRotation * Vector3.forward * FollowDistance;
                transform.position = new Vector3(transform.position.x, CurrentHeight, transform.position.z);
                Yaw = Input.GetAxis("Right Horizontal") * RotateSpeedPerTime;
                Pitch = Input.GetAxis("Right Vertical") * RotateSpeedPerTime;
                transform.Translate(new Vector3(Yaw, -Pitch, 0));
                transform.position = new Vector3(transform.position.x, transform.position.y, transform.position.z);
                transform.LookAt(Target.transform);
            }
        }
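    The usual fix for this kind of drift is to stop treating "forward" as a fixed property of the player and instead derive the movement basis from the camera every frame. Purely as an engine-agnostic illustration (Python pseudo-math, not Unity/C# code, with made-up function names), projecting the camera's yaw onto the ground plane and building the move direction from it looks roughly like this:

        import math

        def camera_relative_move(cam_yaw_degrees, stick_x, stick_y):
            # Turn stick input into a world-space move direction that is
            # always relative to where the camera currently looks.
            yaw = math.radians(cam_yaw_degrees)
            forward = (math.sin(yaw), math.cos(yaw))   # camera forward on the ground plane
            right = (math.cos(yaw), -math.sin(yaw))    # camera right on the ground plane
            move_x = right[0] * stick_x + forward[0] * stick_y
            move_z = right[1] * stick_x + forward[1] * stick_y
            length = math.hypot(move_x, move_z) or 1.0
            return (move_x / length, move_z / length)

        # With the camera at the player's right side (yaw 90), pushing the
        # stick forward now moves along the camera's view, not the old axis.
        print(camera_relative_move(90.0, 0.0, 1.0))  # approximately (1.0, 0.0)

    In Unity terms the same idea would be to flatten the camera's forward and right vectors onto the XZ plane each frame and rotate the player toward the resulting direction before moving it.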

    Read the article

  • How far should an entity take care of its properties values by itself?

    - by Kharlos Dominguez
    Let's consider the following example of a class, which is an entity that I'm using through Entity Framework:

        InvoiceHeader
            - BilledAmount (property, decimal)
            - PaidAmount (property, decimal)
            - Balance (property, decimal)

    I'm trying to find the best approach to keep Balance updated, based on the values of the two other properties (BilledAmount and PaidAmount). I'm torn between two practices here: updating the balance amount every time BilledAmount and PaidAmount are updated (through their setters), or having an UpdateBalance() method that the callers would run on the object when appropriate. I am aware that I could just calculate the Balance in its getter. However, that isn't really possible because this is an entity field that needs to be saved back to the database, where it has an actual column and where the calculated amount should be persisted.

    My other worry about the automatically-updating approach is that the calculated values might be slightly different from what was originally saved to the database, due to rounding (an older version of the software used floats, but it now uses decimals). So loading, let's say, 2000 entities from the database could change their status and make the ORM believe that they have changed, and they would be persisted back to the database the next time the SaveChanges() method is called on the context. It would trigger a mass of updates that I am not really interested in, or could cause problems if the calculation methods changed (the entities fetched would lose their old values, to be replaced by freshly recalculated ones, simply by being loaded).

    Then, let's take the example even further. Each invoice has some related invoice details, which also have BilledAmount, PaidAmount and Balance (I'm simplifying my actual business case for the sake of the example, so let's assume the customer can pay each item of the invoice separately rather than as a whole). If we consider that the entity should take care of itself, any change in the child details should cause the invoice totals to change as well. In a fully automated approach, a simple implementation would be looping through each detail of the invoice to recalculate the header totals every time one of the properties changes. That would probably be fine for just one record, but if a lot of entities were fetched at once it could create significant overhead, as it would perform this process every time a new invoice detail record is fetched. Possibly worse, if the details are not already loaded, it could cause the ORM to lazy-load them just to recalculate the balances.

    So far I went with the UpdateBalance() method approach, mainly for the reasons I explained above, but I wonder if that was right. I'm noticing I have to keep calling these methods quite often and at different places in my code, and it is a potential source of bugs. It also has a detrimental effect on data binding, because when the properties of the detail or header change, the other properties are left out of date and the method has no way to be called. What is the recommended approach in this case?
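    Purely as an illustration of the trade-off (the original is a C# Entity Framework entity; the sketch below is language-agnostic Python with invented names), the two options look like this:

        from decimal import Decimal

        class InvoiceHeaderAuto:
            # Option 1: the setters keep the persisted Balance in sync.
            def __init__(self, billed=Decimal("0"), paid=Decimal("0")):
                self._billed, self._paid = billed, paid
                self.balance = billed - paid

            @property
            def billed_amount(self):
                return self._billed

            @billed_amount.setter
            def billed_amount(self, value):
                self._billed = value
                self.balance = self._billed - self._paid

            @property
            def paid_amount(self):
                return self._paid

            @paid_amount.setter
            def paid_amount(self, value):
                self._paid = value
                self.balance = self._billed - self._paid

        class InvoiceHeaderExplicit:
            # Option 2: callers invoke update_balance() when appropriate.
            def __init__(self, billed=Decimal("0"), paid=Decimal("0")):
                self.billed_amount, self.paid_amount = billed, paid
                self.balance = billed - paid

            def update_balance(self):
                self.balance = self.billed_amount - self.paid_amount

    The setter variant keeps data binding and the persisted column consistent at all times, at the cost of recomputing (and possibly dirtying) entities as they are materialized; the explicit variant avoids spurious updates on load but has to be called from every place that mutates the amounts, which is exactly the bug source described above.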

    Read the article

  • Separating logic and data in browser game

    - by Tesserex
    I've been thinking this over for days and I'm still not sure what to do. I'm trying to refactor a combat system in PHP (...sorry.) Here's what exists so far: There are two (so far) types of entities that can participate in combat. Let's just call them players and NPCs. Their data is already written pretty well. When involved in combat, these entities are wrapped with another object in the DB called a Combatant, which gives them information about the particular fight. They can be involved in multiple combats at once. I'm trying to write the logic engine for combat by having combatants injected into it. I want to be able to mock everything for testing. In order to separate logic and data, I want to have two interfaces / base classes, one being ICombatantData and the other ICombatantLogic. The two implementers of data will be one for the real objects stored in the database, and the other for my mock objects. I'm now running into uncertainties with designing the logic side of things. I can have one implementer for each of players and NPCs, but then I have an issue. A combatant needs to be able to return the entity that it wraps. Should this getter method be part of logic or data? I feel strongly that it should be in data, because the logic part is used for executing combat, and won't be available if someone is just looking up information about an upcoming fight. But the data classes only separate mock from DB, not player from NPC. If I try having two child classes of the DB data implementer, one for each entity type, then how do I architect that while keeping my mocks in the loop? Do I need some third interface like IEntityProvider that I inject into the data classes? Also with some of the ideas I've been considering, I feel like I'll have to put checks in place to make sure you don't mismatch things, like making the logic for an NPC accidentally wrap the data for a player. Does that make any sense? Is that a situation that would even be possible if the architecture is correct, or would the right design prohibit that completely so I don't need to check for it? If someone could help me just layout a class diagram or something for this it would help me a lot. Thanks. edit Also useful to note, the mock data class doesn't really need the Entity, since I'll just be specifying all the parameters like combat stats directly instead. So maybe that will affect the correct design.
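    If it helps to see the shapes, here is one way the pieces described above could be laid out, sketched in Python rather than PHP purely for brevity; this is only an illustration, the names follow the question, and the entity-provider indirection is the "third interface" idea mentioned in it:

        from abc import ABC, abstractmethod

        class ICombatantData(ABC):
            """Data side: where the numbers come from (DB or mock)."""
            @abstractmethod
            def get_entity(self): ...
            @abstractmethod
            def get_stat(self, name): ...

        class DbCombatantData(ICombatantData):
            def __init__(self, entity_provider, combatant_id):
                self._provider = entity_provider   # hides player vs NPC lookup
                self._id = combatant_id
            def get_entity(self):
                return self._provider.load(self._id)
            def get_stat(self, name):
                return self.get_entity().stats[name]

        class MockCombatantData(ICombatantData):
            def __init__(self, stats):
                self._stats = stats                # combat stats given directly
            def get_entity(self):
                return None
            def get_stat(self, name):
                return self._stats[name]

        class CombatantLogic:
            """Logic side: only ever talks to the data interface."""
            def __init__(self, data):
                self.data = data
            def attack_roll(self):
                return self.data.get_stat("attack")

    In a layout like this, the logic layer never knows whether it is holding a player, an NPC or a mock, so the mismatch check mentioned in the question would live in the one place where combatants are constructed rather than being scattered through the combat code.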

    Read the article

  • Are there any good Java/JVM libraries for my Expression Tree architecture?

    - by Snuggy
    My team and I are developing an enterprise-level application, and I have devised an architecture for it that's best described as an "Expression Tree". The basic idea is that the leaf nodes of the tree are very simple expressions (perhaps simple values or strings). Nodes closer to the trunk get more and more complex, taking the simpler nodes as their inputs and returning more complex results for their parents. Looking at it the other way, the application performs some task, and for this it creates a root expression. The root expression divides its input into smaller units and creates child expressions, which, when evaluated, it can use to build its own result. The subdividing process continues down to the simplest leaf nodes.

    There are two very important aspects of this architecture. First, it must be possible to manipulate nodes of the tree after it is built. The nodes may be given new input values to work with, and any change in result for a node needs to be propagated back up the tree to the root node. Second, the application must make the best use of available processors and ultimately be scalable to other computers in a grid or in the cloud. Nodes in the tree will often be updating concurrently and notifying other interested nodes in the tree when they get a new value.

    Unfortunately, I'm not at liberty to discuss my actual application, but to aid understanding a little, you might imagine a kind of spreadsheet application being implemented with a similar architecture, where changes to cells in the table are propagated all over the place to other cells that need the result. The spreadsheet could get so massive that applying a multi-core, multi-computer distributed system to solve it would be of benefit.

    I've got my prototype "Expression Engine" working nicely on a single multi-core PC, but I've started to run into a few concurrency issues (as expected, because I haven't been taking too much care so far), so it's now time to start thinking about migrating the engine to a more robust library. That leads to a number of related questions: Is there any precedent for my "Expression Tree" architecture that I could research? What programming concepts should I consider? I realise this approach has many similarities to a functional programming style, and I'm already aware of the concepts of using futures and actors; are there any others? Are there any languages or libraries that I should study? This question is inspired by my accidental discovery of Scala and the Akka library (which has good support for Actors, Futures, distributed workloads, etc.), and I'm wondering if there is anything else I should be looking at as well.
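    To make the first requirement concrete, here is a minimal, single-threaded Python sketch (not any particular library) of nodes that cache a value computed from their children and push any change back up toward the root; the concurrency and distribution concerns in the question are exactly what actor frameworks such as Akka layer on top of this kind of structure:

        class Node:
            def __init__(self, compute=None, children=()):
                self.compute = compute              # function of the children's values
                self.children = list(children)
                self.parents = []
                self.value = None
                for child in self.children:
                    child.parents.append(self)

            def evaluate(self):
                self.value = self.compute([c.value for c in self.children])
                return self.value

            def set_value(self, value):
                # Give a leaf a new input and propagate the change upwards.
                self.value = value
                for parent in self.parents:
                    parent._reevaluate()

            def _reevaluate(self):
                old = self.value
                self.evaluate()
                if self.value != old:               # only propagate real changes
                    for parent in self.parents:
                        parent._reevaluate()

        # spreadsheet-style example: two leaves feeding a summing parent
        a, b = Node(), Node()
        a.set_value(1); b.set_value(2)
        total = Node(sum, [a, b])
        total.evaluate()        # 3
        a.set_value(10)
        print(total.value)      # 12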

    Read the article

  • Event-Driven Debugging

    - by Brian Donahue
    Most application troubleshooting involves getting an error, analyzing the error message, and at worst attaching a debugger to work out the real cause. What is not really covered is how to troubleshoot an application that is not errant but is having a performance issue, and more than likely in the middle of the night when you are snug in your bed, sawing logs. What you need is an ever-vigilant cyborg who never sleeps to sit in front of your server all night, but as SkyNet is not live yet, you can settle for the next best thing. Windows provides performance counters and alerts that can tell you when an application reaches an unacceptable threshold of naughty behavior, but although it can tattle on your brainchild, it won't be the child psychiatrist that you need to tell you why he's pulling your server's pigtails and pulling faces at the teacher. What you need is to plug a debugger into Performance Monitor and have it tell you what's going on with your application at the time. For this purpose, I used Microsoft's MDbgEngine as the basis for an application that will dump a program's stacks; I call it Application Slicer Dicer Wonder Dumper Super Cyborg, or StackOMatic for short. StackOMatic can look at a program's behavior and tell you if the stacks are not moving, but it can also work on the command line to dump all managed methods on the stack at will. Now that there is a command you can use to dump the stacks, all you need to do is politely tell Windows to run it when you're displeased with your creation as it's trashing the CPU of your server at 3 AM.

    The first step is to create a scheduled task to tell StackOMatic to dump your application. Start Task Scheduler, right-click Task Scheduler Library, and then choose Create Task. For this exercise I'm creating a task that will dump the Red Gate SQL Monitor Base Monitor Service. In the Actions tab, I enter the path to StackOMatic and use the arguments to log the stack dump to a file:

        /PN:RedGate.Response.Engine.Alerting.Base.Service /OUT:c:\users\administrator\MonitorLog.txt

    Next, I go into Windows Server 2008's Reliability and Performance Monitor and add a new Data Collector Set. This set will produce an alert on the %Processor Time for the service. When the processor time breaches 50%, it will run the StackDumpBaseService task I created. Whenever the service misbehaves, it will append to the log file. Now when I go to work in the morning, I can see what the service was doing when it overloaded the processor and take action.
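    The same watch-a-threshold-then-capture idea can be sketched outside of the Windows tooling described above; purely as an illustration (this is not part of the article, it assumes the third-party psutil package, and the process name is made up), a small Python watcher might look like:

        import time
        import psutil

        THRESHOLD = 50.0                      # percent CPU, like the alert above
        PROCESS_NAME = "SomeService.exe"      # hypothetical target process

        def find_process(name):
            for proc in psutil.process_iter(['name']):
                if proc.info['name'] == name:
                    return proc
            return None

        def watch(logfile):
            proc = find_process(PROCESS_NAME)
            while proc is not None and proc.is_running():
                cpu = proc.cpu_percent(interval=5)   # sampled over 5 seconds
                if cpu > THRESHOLD:
                    with open(logfile, 'a') as log:
                        log.write("%s CPU %.1f%%\n" % (time.ctime(), cpu))
                    # this is where you would invoke a stack dumper such as StackOMatic
                time.sleep(5)

        if __name__ == '__main__':
            watch("MonitorLog.txt")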

    Read the article

  • Windows Server Backup fails to backup Hyper-V VM with "Access is denied"

    - by Sebastian Krysmanski
    I'm trying to use Windows Server Backup on my Windows Server 2012 box to back up my Hyper-V VMs. I created a backup job, but each job ends with some "Access is denied" errors. One of my VMs (Linux Server) is backed up properly. All others (one Windows 8, one Linux) are not (or at least it seems that way from the looks of the log file below). How can I solve this problem? Here's the log I'm getting:

        Error in backup of D:\ during read: Error [0x80070005] Access is denied.

        Application backup
        Writer Id: {66841CD4-6DED-4F4B-8F17-FD23F8DDC3DE}
        Component: C435964E-C07A-4958-BA73-A04C6583280F
        Caption : Backup Using Saved State\Alter Server
        Logical Path:
        Error : 8078010E
        Error Message : Copy of the files failed.
        Detailed Error : 80070005
        Detailed Error Message : (null)

        Writer Id: {66841CD4-6DED-4F4B-8F17-FD23F8DDC3DE}
        Component: E780F138-9676-42FB-821C-4561B9B263DC
        Caption : Backup Using Child Partition Snapshot\Windows 8
        Logical Path:
        Error : 8078010E
        Error Message : Copy of the files failed.
        Detailed Error : 80070005
        Detailed Error Message : (null)

        Writer Id: {66841CD4-6DED-4F4B-8F17-FD23F8DDC3DE}
        Component: Host Component
        Caption : Host Component
        Logical Path:
        Error : 8078010E
        Error Message : Copy of the files failed.
        Detailed Error : 80070005
        Detailed Error Message : (null)

    Read the article

  • Internet Explorer 8 crashes on Citrix, Windows 2003

    - by Workshop Alex
    Difficult to decide where this Q fits best. It's server-related and programming-related. But it's a user problem, so I'll put it here first... I work on a (Delphi) application that uses an Internet Explorer component to show information to the user. It's not a web application, just a desktop application which creates HTML pages and displays them within a browser component. Some of the information on these web pages is retrieved from a web server, while other information is provided "live" by the application itself. It works quite well, but it adds an IE process (child process) next to my application, and this IE process seems to eat a lot of system resources. For normal users this is not a real problem, so it's not an issue that I want to fix in the code. But one customer uses this application with about 100 users in a Citrix/Windows 2003 environment, and they complain about problems with the application: IE8 tends to crash, not show, hang, or cause other mayhem. Then again, I've warned them that -officially- I won't support any Citrix environment. But I'm willing to help them find a solution, and if need be I could make minor changes to my code to help fix this issue (if possible). Basically, I need a solution that any user/administrator in this Citrix environment can follow/use. Any ideas on how to resolve this resource problem?

    Read the article

  • Dell 2950 Perc 6/i "physical disk" and "Enclosure(Backplane)" under Connector 1 in OMSA tree- Troubleshoot help

    - by user66357
    Just looking for someone who might know why this could occur... In OMSA, on my Dell 2950, there usually is only one "Physical Disks" child under "Enclosure (Backplane)" in the tree view. Currently, the tree looks like this:

        Dell PERC 6/i Integrated
            Connector 1 (RAID)
                Enclosure (Backplane)
                    Physical Disks (1:04 good, 1:05 removed)
                    Physical Disks (1:33 Ready but unused)

    Normally it's like this:

        Connector 1 (RAID)
            Enclosure (Backplane)
                Physical Disks (1:04 good, 1:05 good)

    From the front, 6 of 6 3.5" SAS drives are connected. The server is showing Slot 5 as bad and the disk as removed. It seems that the drive in Slot 5 is being sensed as external to the Enclosure. Any ideas why this would happen? Think I can get away with rebuilding the virtual disk by replacing 1:05 with 1:33? Thanks.

    UPDATE: The only options on the Physical Disk 1:33 were Assign as Global Hot Spare and Clear... After clearing, I assigned it as the Global Hot Spare. This allowed the rebuilding of the virtual disk. Hopefully it won't fail. I'm still unsure of the reason for this odd behavior. I'm checking the firmware next.

    Read the article

  • How to configure fastcgi to work with lighttpd in Ubuntu

    - by michael
    I am able to run lighttpd on Ubuntu 9.10, but I get an error when I try to set up FastCGI with lighttpd by putting this in the lighttpd.conf file:

        #### fastcgi module
        fastcgi.server = ( "/fastcgi_scripts/" =>
          (( "host" => "127.0.0.1",
             "port" => "9098",
             "check-local" => "disable",
             "bin-path" => "/usr/local/bin/cgi-fcgi",
             "docroot" => "/" # remote server may use
                              # it's own docroot
          ))
        )

    This is what I get in lighttpd's error.log:

        2010-03-07 21:00:11: (log.c.166) server started
        2010-03-07 21:00:11: (mod_fastcgi.c.1104) the fastcgi-backend /usr/local/bin/cgi-fcgi failed to start:
        2010-03-07 21:00:11: (mod_fastcgi.c.1108) child exited with status 1 /usr/local/bin/cgi-fcgi
        2010-03-07 21:00:11: (mod_fastcgi.c.1111) If you're trying to run your app as a FastCGI backend, make sure you're using the FastCGI-enabled version. If this is PHP on Gentoo, add 'fastcgi' to the USE flags.
        2010-03-07 21:00:11: (mod_fastcgi.c.1399) [ERROR]: spawning fcgi failed.
        2010-03-07 21:00:11: (server.c.931) Configuration of plugins failed. Going down.

    I do have cgi-fcgi in /usr/local/bin:

        $ which cgi-fcgi
        /usr/local/bin/cgi-fcgi

    Thank you for your help.
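    For context on that log message: mod_fastcgi's hint suggests it expects bin-path to point at an actual FastCGI application rather than a helper binary. Purely as a hedged illustration of what a minimal backend looks like (assuming the Python flup package is installed; this is not from the original question), something like the following could serve as a bin-path target for testing:

        #!/usr/bin/env python
        # minimal_fcgi.py - a tiny FastCGI backend (hypothetical example).
        # Requires the third-party 'flup' package; lighttpd would spawn it
        # via bin-path and hand it the listening socket.
        from flup.server.fcgi import WSGIServer

        def app(environ, start_response):
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return ['hello from FastCGI\n']

        if __name__ == '__main__':
            WSGIServer(app).run()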

    Read the article

  • SSH error 114 when connecting with FinalBuilder 7

    - by mamcx
    I'm testing FB 7 and trying to connect to my Mac OS X Snow Leopard machine. I can connect with paramiko (a Python SSH library) but not with FB7. The only thing I get is: SSH error encoutered: 114. I tried stopping and restarting the sharing session on Mac OS X.

    Update: I enabled server debug and got this log:

        debug1: sshd version OpenSSH_5.2p1
        debug1: read PEM private key done: type RSA
        debug1: private host key: #0 type 1 RSA
        debug1: read PEM private key done: type DSA
        debug1: private host key: #1 type 2 DSA
        debug1: rexec_argv[0]='/usr/sbin/sshd'
        debug1: rexec_argv[1]='-Dd'
        debug1: Bind to port 22 on ::.
        Server listening on :: port 22.
        debug1: Bind to port 22 on 0.0.0.0.
        Server listening on 0.0.0.0 port 22.
        debug1: fd 5 clearing O_NONBLOCK
        debug1: Server will not fork when running in debugging mode.
        debug1: rexec start in 5 out 5 newsock 5 pipe -1 sock 8
        debug1: inetd sockets after dupping: 3, 3
        Connection from 10.3.7.135 port 49457
        debug1: Client protocol version 2.0; client software version SecureBlackbox.8
        debug1: no match: SecureBlackbox.8
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.2
        debug1: privsep_preauth: successfully loaded Seatbelt profile for unprivileged child
        debug1: permanently_set_uid: 75/75
        debug1: list_hostkey_types: ssh-rsa,ssh-dss
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: client->server aes128-ctr [email protected] none
        debug1: kex: server->client aes128-ctr [email protected] none
        debug1: expecting SSH2_MSG_KEXDH_INIT
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: KEX done
        debug1: userauth-request for user mamcx service ssh-connection method none
        debug1: attempt 0 failures 0
        debug1: PAM: initializing for "mamcx"
        Connection closed by 10.3.7.135
        debug1: do_cleanup
        debug1: PAM: setting PAM_RHOST to "10.3.7.135"
        debug1: do_cleanup
        debug1: PAM: cleanup

    Read the article

  • flask, lighttpd with fastcgi can't get it to work

    - by kurojishi
    I'm trying to deploy a simple Flask script to a lighttpd server with FastCGI. This is the configuration file for lighttpd, built using the Flask documentation (http://flask.pocoo.org/docs/deploying/fastcgi/#configuring-lighttpd):

        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_compress",
            "mod_redirect",
            "mod_rewrite",
            "mod_fastcgi",
        )

        server.document-root = "/var/www"
        server.upload-dirs = ( "/var/cache/lighttpd/uploads" )
        server.errorlog = "/var/log/lighttpd/error.log"
        server.pid-file = "/var/run/lighttpd.pid"
        server.username = "www-data"
        server.groupname = "www-data"

        index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", " index.lighttpd.html" )
        url.access-deny = ( "~", ".inc" )
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )

        var.home_dir = "/var/lib/lighttpd"
        var.socket_dir = home_dir + "sockets/"

        ## Use ipv6 if available
        #include_shell "/usr/share/lighttpd/use-ipv6.pl"

        dir-listing.encoding = "utf-8"
        server.dir-listing = "enable"

        compress.cache-dir = "/var/cache/lighttpd/compress/"
        compress.filetype = ( "application/x-javascript", "text/css", "text/html", "text/plain" )

        include_shell "/usr/share/lighttpd/create-mime.assign.pl"
        include_shell "/usr/share/lighttpd/include-conf-enabled.pl"

        fastcgi.server = ("weibo/callback.fcgi" =>
            (( "socket" => "/tmp/weibocrawler-fcgi.sock",
               "bin-path" => "/var/www/weibo/callback.fcgi",
               "check-local" => "disable",
               "max-procs" => 1
            ))
        )

        url.rewrite-once = (
            "^(/weibo($|/.*))$" => "$1",
            "^(/.*)$" => "weibo/callback.fcgi$1"

    and this is the script I'm trying to run:

        #!/home/nrl/kuro/weiboenv/bin/python
        from flup.server.fcgi import WSGIServer
        from callback import app

        if __name__ == '__main__':
            WSGIServer(application, bindAddress='/tmp/weibocrawler-fcgi.sock').run()

    but I get this error when testing the configuration file:

        2013-07-02 17:15:42: (configfile.c.912) source: lighttpd.conf.new line: 52 pos: 1 parser failed somehow near here: weibo/callback.fcgi$1

    When I remove the url.rewrite I get these errors in the log even though the daemon starts:

        2013-07-02 16:25:53: (log.c.166) server started
        2013-07-02 16:25:53: (mod_fastcgi.c.1104) the fastcgi-backend fcgi.py failed to start:
        2013-07-02 16:25:53: (mod_fastcgi.c.1108) child exited with status 2 fcgi.py
        2013-07-02 16:25:53: (mod_fastcgi.c.1111) If you're trying to run your app as a FastCGI backend, make sure you're using the FastCGI-enabled version. If this is PHP on Gentoo, add 'fastcgi' to the USE flags.
        2013-07-02 16:25:53: (mod_fastcgi.c.1399) [ERROR]: spawning fcgi failed.
        2013-07-02 16:25:53: (server.c.938) Configuration of plugins failed. Going down.

    Read the article

  • How can I recover data from a dmg that cannot mount?

    - by Benjamin Lee
    I backed up a hard drive in a dmg, then reformatted the hard drive. (I had deleted the EFI partition before, which was preventing me from reinstalling the operating system.) When I tried to use the restore function in Disk Utility, it gave an input/output error. I get this error with anything I do to the image, including mounting, converting, attaching, verifying, scanning, and getting info through hdiutil imageinfo. I have run all of these with hdiutil and the -noverify, -nomount, and -ignorebadchecksums flags. When I copy the image onto another disk/partition I get a different error: something like "No filesystem". I cannot repair the image with Disk Utility or asr, which both throw the I/O error. When I put the -verbose flag on the command I actually get a different error: "hdiutil: attach failed - No child processes". I have output from both the -verbose and -debug flags, but it is fairly long so I had to attach a link to avoid the 3000-character limit. No recovery tool can get at the data because the image is both compressed and unmountable. How can I get the data back, and what has gone wrong? -debug -verbose

    Read the article

  • Apache log rotation: logrotate vs rotatelogs vs chronolog

    - by Enrico
    I have been researching log rotation for my server which hosts ~5 fairly high traffic sites. From what I can tell, my options are to use logrotate or to use piped logging with either rotatelogs or chronolog. logrotate requires a restart of apache and both SIGHUP and SIGUSR1 restarts are less than ideal on high traffic sites, because either you drop a bunch of connections or you need to delay compressing the old log until all child processes have died naturally. Also, downtime can be quite significant if compression is enabled. Would using logrotate - without compression and with graceful restart - and compressing old logs after the fact be the best way to minimize downtime? chronolog and rotatelogs sound promising, but are not well documented. I couldn't find examples of using either in combination with vhost specific logs. The chronolog website says, "when the expanded filename changes, the current file is closed and a new one opened". Is this globally? Or is that per AccessLog, CustomLog or ErrorLog directive? Is there a significant difference between chronolog and rotatelogs?
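    To make the piped-logging behaviour concrete: the sentence quoted from the chronolog site describes a small program that reads log lines on stdin and reopens its output whenever the time-expanded filename changes, so Apache itself never needs a restart. A minimal Python sketch of that idea (an illustration only, not chronolog's actual code; the paths and template are assumptions) looks like this, and per-vhost behaviour comes from giving each CustomLog or ErrorLog directive its own pipe command:

        #!/usr/bin/env python
        # pipelog.py - chronolog-style piped logger (illustrative sketch).
        # Example vhost usage (path and template are assumptions):
        #   CustomLog "|/usr/local/bin/pipelog.py /var/log/apache2/site1.access.%Y-%m-%d.log" combined
        import sys
        import time

        def run(name_template):
            current_name, out = None, None
            for line in sys.stdin:
                name = time.strftime(name_template)
                if name != current_name:        # the expanded name changed: rotate
                    if out:
                        out.close()
                    out = open(name, 'a')
                    current_name = name
                out.write(line)
                out.flush()

        if __name__ == '__main__':
            run(sys.argv[1])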

    Read the article

  • Key-Based SSH Permission denied (publickey) Ubuntu 12.04

    - by user125176
    I have configured sshd to accept key-based SSH logins with LogLevel set to DEBUG, and uploaded my public key to ~/.ssh/authorized_keys, where permissions are set as:

        700 ~/.ssh
        600 ~/.ssh/authorized_keys

    From root, I can su - USERNAME. From the client I get Permission denied (publickey). On the server, here is how it is telling me that it "Could not open authorized keys '/home/USERNAME/.ssh/authorized_keys': Permission denied":

        Client protocol version 2.0; client software version OpenSSH_5.2
        match: OpenSSH_5.2 pat OpenSSH*
        Enabling compatibility mode for protocol 2.0
        Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1
        permanently_set_uid: 105/65534 [preauth]
        list_hostkey_types: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256 [preauth]
        SSH2_MSG_KEXINIT sent [preauth]
        SSH2_MSG_KEXINIT received [preauth]
        kex: client->server aes128-ctr hmac-md5 none [preauth]
        kex: server->client aes128-ctr hmac-md5 none [preauth]
        SSH2_MSG_KEX_DH_GEX_REQUEST received [preauth]
        SSH2_MSG_KEX_DH_GEX_GROUP sent [preauth]
        expecting SSH2_MSG_KEX_DH_GEX_INIT [preauth]
        SSH2_MSG_KEX_DH_GEX_REPLY sent [preauth]
        SSH2_MSG_NEWKEYS sent [preauth]
        expecting SSH2_MSG_NEWKEYS [preauth]
        SSH2_MSG_NEWKEYS received [preauth]
        KEX done [preauth]
        userauth-request for user USERNAME service ssh-connection method none [preauth]
        attempt 0 failures 0 [preauth]
        PAM: initializing for "USERNAME"
        PAM: setting PAM_RHOST to "USERHOSTNAME"
        PAM: setting PAM_TTY to "ssh"
        userauth_send_banner: sent [preauth]
        userauth-request for user USERNAME service ssh-connection method publickey [preauth]
        attempt 1 failures 0 [preauth]
        test whether pkalg/pkblob are acceptable [preauth]
        Checking blacklist file /usr/share/ssh/blacklist.RSA-4096
        Checking blacklist file /etc/ssh/blacklist.RSA-4096
        temporarily_use_uid: 1001/1002 (e=0/0)
        trying public key file /home/USERNAME/.ssh/authorized_keys
        Could not open authorized keys '/home/USERNAME/.ssh/authorized_keys': Permission denied
        restore_uid: 0/0
        temporarily_use_uid: 1001/1002 (e=0/0)
        trying public key file /home/USERNAME/.ssh/authorized_keys2
        Could not open authorized keys '/home/USERNAME/.ssh/authorized_keys2': Permission denied
        restore_uid: 0/0
        Failed publickey for USERNAME from IPADDRESS port 57523 ssh2
        Connection closed by IPADDRESS [preauth]
        do_cleanup [preauth]
        monitor_read_log: child log fd closed
        do_cleanup
        PAM: cleanup

    Read the article

  • Bind9 DNS help with pseudo domains

    - by Tempname
    I have set up a DNS server on my home network to manage some apps that I have written for home. Currently I have 3 "domains" that I am using: controller, devserver, and fileserver. The first issue that I am having is that when I attempt to ping the parent domain of any of these 3, I am unable to; I simply get ping: unknown host controller. I can, however, ping any of the subdomains I have set up for these 3 parent domains. The second issue is that I am unable to ping any of the 3 parent domains or any child domains from my Windows machines. I have verified that these domains work on other devices in my house (iPod touch, iPad, cell phone). Any help with this is greatly appreciated. Here is the BIND data file for my parent domain controller:

        ;
        ; BIND data file for local loopback interface
        ;
        $TTL    604800
        @       IN      SOA     controller. admin.controller. (
                                      9
                                 604800
                                  86400
                                2419200
                                 604800 )
        ;
        @                   IN      NS      controller.
        @                   IN      A       192.168.1.104
        controller          IN      A       192.168.1.194
        admin.controller.   IN      A       192.168.1.104

    Read the article

  • Recommended setting for using Apache mod_mono with a different user

    - by Korrupzion
    Hello, I'm setting up an ASP.NET script on my Linux machine using mod_mono. The script spawns processes of a binary that belongs to another user, but the process is spawned as www-data because Apache runs with that user, and I need to spawn the process as the user that owns the file. I tried the setuid bit but it doesn't have any effect. I discovered that if I kill mod-mono-server2.exe and run it as the user that I need, everything works right, but I want to know the proper way to do this, because after a while Apache runs mod-mono-server2.exe as www-data again. The Mono Project webpage says:

        How can I run mod-mono-server as a different user?
        Due to apache's design, there is no straightforward way to start processes from inside of a apache child as a specific user. Apache's SuExec wrapper is targeting CGI and is useless for modules. Mod_mono provides the MonoStartXSP option. You can set it to "False" and start mod-mono-server manually as the specific user. Some tinkering with the Unix socket's permissions might be necessary, unless MonoListenPort is used, which turns on TCP between mod_mono and mod-mono-server. Another (very risky) way: use a setuid 'root' wrapper for the mono executable, inspired by the sources of Apache's SuExec.

    I want to know how to use the setuid wrapper, because I tried adding the setuid bit to the 'mono' binary and changing the owner to the user that I want, but that made Mono crash. Alternatively, I'd like a way to keep mod-mono-server2.exe running separately from Apache without it being closed (does anyone have a script?).

    My environment:
        Debian Lenny 2.6.26-2-amd64
        Mono 1.9.1
        mod_mono from the Debian repository
        Dedicated server (root access and stuff)
        Using Apache vhosts - I use Mono only for that script

    Thanks!

    Read the article

  • r1soft agent is failing with the error: "write error while sending code: Broken pipe"

    - by curiousguy
    I have an Ubuntu 10.04.4 LTS server with the r1soft agent installed on it. Recently, the backups have been failing with the following error:

        write error while sending code: Broken pipe

    I have reinstalled the buagent, but to no avail. On checking the server logs, I can see the following errors listed:

        # tail -f /var/log/messages |grep -i buagent
        Nov 17 03:35:06 microscope buagent: Need to back up 126 sectors
        Nov 17 03:35:06 microscope buagent: (Righteous Backup Linux Agent) 1.79.0 build 12433
        Nov 17 03:35:06 microscope buagent: allowing control from backup server (10.128.136.195) with valid RSA key
        Nov 17 03:35:06 microscope buagent: allowing control from backup server (10.128.136.201) with valid RSA key
        Nov 17 03:35:06 microscope buagent: sending auth challenge for allowed host at (10.128.136.201) port (47890)
        Nov 17 03:35:06 microscope buagent: host (10.128.136.201) port (47890) authentication successful
        Nov 17 03:35:06 microscope buagent: Backup request accepted. Starting backup.
        Nov 17 03:35:06 microscope buagent: Snapshot completed in 0.010 seconds.
        Nov 17 03:45:03 microscope buagent: Error reading blocks from snapshot.
        Nov 17 03:45:03 microscope buagent: Reading blocks failed
        Nov 17 03:45:03 microscope buagent: error backup aborted
        Nov 17 03:45:03 microscope buagent: backup failed on agent closing connection
        Nov 17 03:45:03 microscope buagent: Backup failed.
        Nov 17 03:45:03 microscope buagent: write error while sending code: Broken pipe (32)
        Nov 17 03:45:03 microscope buagent: tell child write failed

    I tried changing the 'Timeout' and 'DiskAsPartition' values in the '/etc/buagent/agent_config' file, but no luck. I also verified that the proper route to the backup server is in place, and the agent itself is running fine. Am I missing anything? Any help would be much appreciated. Note: CDP 2.0 is installed on the backup server.

    Read the article

  • Enabling Hyper-V Integrated Services Time Sync Services versus Internet Time Synchronization

    - by cpuguru
    Should I deselect the "Synchronize with an Internet Time Server" checkbox under the VM's "Date and Time - Internet Time Settings" tab if the "Time Synchronization Service" for a Hyper-V-based Virtual Machine is enabled? One of the Integration Services that Hyper-V provides is the Time Synchronization Service, which can be enabled/disabled by going to a VM's Settings-Integration Services setting in the Management section. I believe this is checked by default. When you install a Windows Server 2008 OS in a VM on the Hyper-V server, it comes with the "Synchronize with an Internet Time Server" option set, pointing to "time.windows.com". I'd think that if the parent Hyper-V server is set to one time server, and the child VM is pointing to a different time server, there would be a momentary blip if the two are not spot on with their times when the synchronization services run. So the question is, which time sync service should I use? I'm assuming not both. And what is the advantage of one over the other? Note: This question assumes that the machines are not joined to a domain. If they were, the machines would also try to update their time against the domain controller with the primary domain controller role too, right? Thanks!

    Read the article

  • Jira access with AJP-Proxy

    - by user60869
    I want to configure Jira access over an AJP proxy. I proceeded as follows (following this howto: http://confluence.atlassian.com/display/JIRA/Configuring+Apache+Reverse+Proxy+Using+the+AJP+Protocol):

    1) In the server.xml I activate the AJP:

    2) Edit the vhost configuration:

        # Load Proxy-Modules
        LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
        LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so

        # Load AJP-Modules
        LoadModule proxy_ajp_module /usr/lib/apache2/modules/mod_proxy_ajp.so

        # Proxy Configuration
        <IfModule proxy_http_module>
            ProxyRequests Off
            ProxyPreserveHost On

            # Basic AuthType configuration
            <Proxy *>
                AuthType Basic
                AuthName Bamboo-Server
                AuthUserFile /var/www/userdb
                Require valid-user
                AddDefaultCharset off
                Order deny,allow
                Deny from all
                Allow from 192.168.0.1
                satisfy any
            </Proxy>

            ProxyPass /bamboo http://localhost:8085/bamboo
            ProxyPassReverse /bamboo http://localhost:8085/bamboo
            ProxyPass /jira ajp://localhost:8009/
            ProxyPassReverse /jira ajp://localhost:8009/
        </IfModule>

    EDIT: I enabled server debugging and found the following in the logs:

        //localhost:8080/
        [Fri Nov 19 14:51:13 2010] [debug] proxy_util.c(1819): proxy: worker ajp://localhost:8080/ already initialized
        [Fri Nov 19 14:51:13 2010] [debug] proxy_util.c(1913): proxy: initialized single connection worker 1 in child 5578 for (localhost)
        [Fri Nov 19 14:51:32 2010] [error] ajp_read_header: ajp_ilink_receive failed
        [Fri Nov 19 14:51:32 2010] [error] (120006)APR does not understand this error code: proxy: read response failed from (null) (localhost)
        [Fri Nov 19 14:51:32 2010] [debug] proxy_util.c(2008): proxy: AJP: has released connection for (localhost)
        [Fri Nov 19 14:51:32 2010] [debug] mod_deflate.c(615): [client xx.xx.xx.xx Zlib: Compressed 468 to 320 : URL /jira

    But it doesn't work. Does somebody have an idea?

    Read the article

  • Nginx with PAM authentication through pam_script

    - by Envek
    Has anyone set up such a configuration? It's not working for me. I've installed nginx-extras on Ubuntu 12.04 (it's built with the PAM module), and wrote this in the site config:

        location ^~ /restricted_place/ {
            auth_pam "Please specify login and password from main_site";
            auth_pam_service_name "nginx";
        }

    Afterwards, in /etc/pam.d/nginx:

        auth required pam_script.so dir=/path/to/my/auth_scripts

    And wrote the simplest possible /path/to/my/auth_scripts/pam_script_auth (I've also tried more complicated scripts):

        #!/bin/sh
        exit 0 # should allow anyone

    It doesn't work. The script is launched (I've also written a fully functional script that successfully executes, checks credentials, writes to its own log, returns the correct exit code, and takes noticeably long to execute), but no access is granted, only rejected. In /var/log/nginx/error.log the following record appears:

        2012/09/13 10:44:42 [alert] 1666#0: waitpid() failed (10: No child processes)

    If I specify in /etc/pam.d/nginx:

        auth required pam_unix.so

    and grant the www-data user the right to read /etc/shadow, Unix authorization works fine. But script auth doesn't work. I can't understand where the trouble is: in the nginx module, or in the pam_script module.

    Read the article

  • Time not propagating to machines on Windows domain

    - by rbeier
    We have a two-domain Active Directory forest: ourcompany.com at the root, and prod.ourcompany.com for production servers. Time is propagating properly through the root domain, but servers in the child domain are unable to sync via NTP. So the time on these servers is starting to drift, since they're relying only on the hardware clock. When I type "net time" on one of the production servers, I get the following error:

        Could not locate a time-server. More help is available by typing NET HELPMSG 3912.

    When I type "w32tm /resync", I get the following:

        Sending resync command to local computer
        The computer did not resync because no time data was available.

    "w32tm /query /source" shows the following:

        Free-running System Clock

    We have three domain controllers in the prod.ourcompany.com subdomain (overkill, but the result of a migration - we haven't gotten rid of one of the old ones yet). To complicate matters, the domain controllers are all virtualized, running on two different physical hosts. But the time on the domain controllers themselves is accurate - the servers that aren't DCs are the ones having problems. Two of the DCs are running Server 2003, including the PDC emulator. The third DC is running Server 2008. (I could move the PDC emulator role to the 2008 machine if that would help.) The non-DC servers are all running Server 2008. All other Active Directory functionality works fine in the production domain - we're only seeing problems with NTP. I can manually sync each machine to the time source (the PDC emulator) by doing the following:

        net time \\dc1.prod.ourcompany.com /set /y

    But this is just a one-off, and it doesn't cause automated time syncing to start working. I guess I could create a scheduled task which runs the above command periodically, but I'm hoping there's a better way. Does anyone have any ideas as to why this isn't working, and what we can do to fix it? Thanks for your help, Richard

    Read the article
