Search Results

Search found 8162 results on 327 pages for 'module packaging'.


  • Moving Function With Arguments To RequireJS

    - by Jazimov
    I'm not only relatively new to JavaScript but also to RequireJS (coming from a strong C# background). Currently on my web page I have a number of JavaScript functions. Each one takes two arguments. Imagine that they look like this: functionA(x1, y1) { ... } functionB(x2, y2) { ... } functionC(x3, y3) { ... } Currently, these functions exist in a script tag on my HTML page and I simply call each as needed. My functions have dependencies on KnockoutJS, jQuery, and some other JS libraries. I currently have Script tags that synchronously load those external .js dependencies. But I want to use RequireJS so that they're loaded asynchronously, as needed. To do this, I plan to move all three functions above into an external .js file (a type of AMD "module") called MyFunctions.js. That file will have a define() call (to RequireJS's define function) that will look something like this: define(["knockout", "jquery", ...], function("ko","jquery", ...) {???} ); My question is how to "wrap" my functionA, functionB, and functionC functions where the ??? is above so that I can use those functions on my page as needed. For example, in the onclick event handler for a button on my HTML page, I would want to call functionA and pass it two arguments; same for functionB and functionC. I don't fully understand how to expose those functions when they're wrapped in a define that itself is located in an external .js file. I know that define assures that my listed libraries are loaded asynchronously before the callback function is called, but after that's done I don't understand how the web page's script tags would use my functions. Would I need to use require to ensure they're available, such as: require(["myfunctions"], function({not sure what to put here})) I think I understand the basics of RequireJS but I don't understand how to wrap my functions so that they're in external .js files, don't pollute the global namespace, and yet can still be called from the main page so that arguments can be passed to them. I imagine there are many ways to do this but in reviewing the RequireJS docs and some videos out there, I can't say I understand how... Thank you for any help.
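
    One minimal sketch of how the wrapping could look, assuming the module file is saved as MyFunctions.js under the configured baseUrl, that "knockout" and "jquery" are set up as module paths, and that the button id below is a made-up placeholder:

        // MyFunctions.js -- the define() callback returns an object that
        // exposes the three functions; nothing is added to the global namespace.
        define(["knockout", "jquery"], function(ko, $) {
            function functionA(x1, y1) { /* ... */ }
            function functionB(x2, y2) { /* ... */ }
            function functionC(x3, y3) { /* ... */ }
            return { functionA: functionA, functionB: functionB, functionC: functionC };
        });

        // On the page (or in a main.js entry point), require() hands back that
        // returned object, and the functions are called with their two arguments as usual:
        require(["MyFunctions"], function(myFunctions) {
            document.getElementById("someButton").onclick = function() {
                myFunctions.functionA(1, 2);
            };
        });

    The click handler is wired up inside the require() callback, so the functions never need to exist as globals.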

    Read the article

  • Twisted + SQLAlchemy and the best way to do it.

    - by Khorkrak
    So I'm writing yet another Twisted based daemon. It'll have an xmlrpc interface as usual so I can easily communicate with it and have other processes interchange data with it as needed. This daemon needs to access a database. We've been using SQL Alchemy in place of hard coding SQL strings for our latest projects - those mostly done for web apps in Pylons. We'd like to do the same for this app and re-use library code that makes use of SQL Alchemy. So what to do? Well of course since that library was written for use in a Pylons app it's all the straight-forward blocking style code that everyone is accustomed to and all of the non-blocking is magically handled by Pylons via threading, thread locals, scoped sessions and so on. So now for Twisted I guess I'm a bit stuck. I could: (1) just write the sql I need directly if it's minimal and use the dbapi pool in twisted to do runInteractions etc when I need to hit the db; (2) use the objects and inherently blocking methods in our library and block now and then in my Twisted daemon - bah; (3) use sAsync, which was last updated in 2008, and kind of reuse the models we have defined already - but not really, and it doesn't address code that needs to work in Pylons either. Does that even work with the latest version of SQL Alchemy? Who knows. That project looked great though - why was it apparently abandoned? (4) spawn a separate subprocess and have it deal with the library code and all its blocking, the results being returned back to my daemon when ready as objects marshalled via YAML over xmlrpc; (5) use deferToThread and then expunge the objects returned, having made sure to do eager loads so that I have all my stuff that I might need. Seems kind of ugha to me. I'm also stuck using Python 2.5.4 atm so no 2.6 yet and I don't think I can just do an import from future to get access to the cool new multiprocessing module stuff in there. That's OK though I guess as we've got dealing with interprocess communication down pretty well. So I'm leaning towards option 4 mostly as that would avoid the mortal sin of logic duplication with option 1 while also staying the heck away from threads. Any better ideas?

    Read the article

  • Passing custom info to mongrel_rails start

    - by whaka
    One thing I really don't understand is how I can pass custom start-up options to a mongrel instance. I see that a common approach is to use environment variables, but in my environment this is not going to work because my rails application serves many different clients. Much code is shared between clients, but there are also many differences which I implement by subclassing controllers and views to overload or extend existing features or introduce new ones. To make this all work, I simply add the paths to client-specific modules to the module load path ($:). In order to start the application for a particular client, I could now use an environment variable like, say, TARGET=AMAZONE. Unfortunately, on some systems I'm running multiple mongrel clusters, each cluster serving a different client. Some of these systems run under Windows and to start mongrel I installed mongrel_services. Clearly, this makes my environment variable unsuitable. Passing this extra bit of data to the application is proving to be a real challenge. For a start, mongrel_rails service_install will reject any [custom] command line parameters that aren't documented. I'm not too concerned as installing the services using the install program is trivial. However, even if I manage to install mongrel_services such that when run it passes the custom command line option --target to mongrel_rails start, I get an error because mongrel_rails doesn't recognize the switch. So here were the things I looked at: (1) pass an extra parameter: mongrel_rails start --target XYZ ...; (2) use a config file and add target:XYZ, then do: mongrel_rails start -C x:\myapp\myconfig.yml; (3) modify the file Ruby\lib\ruby\gems\1.8\gems\mongrel-1.1.5-x86-mswin32-60\lib\mongrel\command.rb; (4) perhaps use the --script option, but all docs that I found on it were for Unix. (1) and (2) simply don't work. I played with (4) but never managed to get it to do anything. So I had no choice but to go with (3). While it is relatively simple, I hate changing ruby library code. Particularly disappointing is that (2) doesn't work. I mean what is so unreasonable about adding other [custom] options in the config file? Actually I think this is a fundamental piece that is missing in rails. Somehow, the application should be able to register and access command line arguments it expects. If anybody has a good idea how to do this more elegantly using the current infrastructure, I have a chocolate fish to give away!!!

    Read the article

  • Are comonads a good fit for modeling the Wumpus world?

    - by Tim Stewart
    I'm trying to find some practical applications of a comonad and I thought I'd try to see if I could represent the classical Wumpus world as a comonad. I'd like to use this code to allow the Wumpus to move left and right through the world and clean up dirty tiles and avoid pits. It seems that the only comonad function that's useful is extract (to get the current tile) and that moving around and cleaning tiles would not use be able to make use of extend or duplicate. I'm not sure comonads are a good fit but I've seen a talk (Dominic Orchard: A Notation for Comonads) where comonads were used to model a cursor in a two-dimensional matrix. If a comonad is a good way of representing the Wumpus world, could you please show where my code is wrong? If it's wrong, could you please suggest a simple application of comonads? module Wumpus where -- Incomplete model of a world inhabited by a Wumpus who likes a nice -- tidy world but does not like falling in pits. import Control.Comonad -- The Wumpus world is made up of tiles that can be in one of three -- states. data Tile = Clean | Dirty | Pit deriving (Show, Eq) -- The Wumpus world is a one dimensional array partitioned into three -- values: the tiles to the left of the Wumpus, the tile occupied by -- the Wumpus, and the tiles to the right of the Wumpus. data World a = World [a] a [a] deriving (Show, Eq) -- Applies a function to every tile in the world instance Functor World where fmap f (World as b cs) = World (fmap f as) (f b) (fmap f cs) -- The Wumpus world is a Comonad instance Comonad World where -- get the part of the world the Wumpus currently occupies extract (World _ b _) = b -- not sure what this means in the Wumpus world. This type checks -- but does not make sense to me. extend f w@(World as b cs) = World (map world as) (f w) (map world cs) where world v = f (World [] v []) -- returns a world in which the Wumpus has either 1) moved one tile to -- the left or 2) stayed in the same place if the Wumpus could not move -- to the left. moveLeft :: World a -> World a moveLeft w@(World [] _ _) = w moveLeft (World as b cs) = World (init as) (last as) (b:cs) -- returns a world in which the Wumpus has either 1) moved one tile to -- the right or 2) stayed in the same place if the Wumpus could not move -- to the right. moveRight :: World a -> World a moveRight w@(World _ _ []) = w moveRight (World as b cs) = World (as ++ [b]) (head cs) (tail cs) initWorld = World [Dirty, Clean, Dirty] Dirty [Clean, Dirty, Pit] -- cleans the current tile cleanTile :: Tile -> Tile cleanTile Dirty = Clean cleanTile t = t Thanks!

    Read the article

  • Archive tar files to a different location in Perl

    - by user314261
    Hi, I am a newbie in Perl. I am reading a directory having some archive files and uncompressing the archive files one by one. Everything seems fine, however the files are getting uncompressed in the folder which has the main perl code module which is running the sub modules. I want the archive to be generated in the folder I specify. This is my code: sub ExtractFile { #Read the folder which was copied in the output path recursively and extract if any file is compressed my $dirpath = $_[0]; opendir(D, "$dirpath") || die "Can't open dir $dirpath: $!\n"; my @list = readdir(D); closedir(D); foreach my $f (@list) { print " \$f = $f"; if(-f $dirpath."/$f") { #print " File in directory $dirpath \n ";#is \$f = $f\n"; my($file_name, $file_dirname,$filetype)= fileparse($f,qr{\..*}); #print " \nThe file extension is $filetype"; #print " \nThe file name is is $file_name"; # If compressed file then extract the file if($filetype eq ".tar" or $filetype eq ".tzr.gz") { my $arch_file = $dirpath."/$f"; print "\n file to be extracted is $arch_file"; my $tar = Archive::Tar->new($arch_file); #$tar->extract() or die ("Cannot extract file $arch_file"); #mkdir($dirpath."/$file_name"); $tar->extract_file($arch_file,$dirpath."/$file_name" ) or die ("Cannot extract file $arch_file"); } } if(-d $dirpath."/$f") { if($f eq "." or $f eq "..") { next; } print " Directory\n";# is $f"; ExtractFile($dirpath."/$f"); } } } The method ExtractFile is called recursively to loop over all the archives. When using $tar->extract() it uncompresses in the folder which calls this method. When I use $tar->extract_file($arch_file,$dirpath."/$file_name" ) I get an error: No such file in archive: '/home/fsang/dante/workspace/output/s.tar' at /home/fsang/dante/lib/Extraction.pm line 80 Please help; I have checked that path and input/output and there is no issue with it. Seems to be some usage problem I am not aware of for $tar->extract_file(). Many thanks to anyone resolving this issue. Regards, Sakshi

    Read the article

  • How to get rid of memory allocations/deallocations in swig wrappers?

    - by Dmitriy Matveev
    I want to use swig for generation of read-only wrappers for a complex object. The object which I want to wrap will always be existent while I will read it. And also I will only use my wrappers at the time that object is existent, thus I don't need any memory management from SWIG. For following swig interface: %module test %immutable; %inline %{ struct Foo { int a; }; struct Bar { int b; Foo f; }; %} I will have a wrappers which will have a lot of garbage in generated interfaces and do useless work which will reduce performance in my case. Generated java wrapper for Bar class will be like this: public class Bar { private long swigCPtr; protected boolean swigCMemOwn; protected Bar(long cPtr, boolean cMemoryOwn) { swigCMemOwn = cMemoryOwn; swigCPtr = cPtr; } protected static long getCPtr(Bar obj) { return (obj == null) ? 0 : obj.swigCPtr; } protected void finalize() { delete(); } public synchronized void delete() { if (swigCPtr != 0) { if (swigCMemOwn) { swigCMemOwn = false; testJNI.delete_Bar(swigCPtr); } swigCPtr = 0; } } public int getB() { return testJNI.Bar_b_get(swigCPtr, this); } public Foo getF() { return new Foo(testJNI.Bar_f_get(swigCPtr, this), true); } public Bar() { this(testJNI.new_Bar(), true); } } I don't need 'swigCMemOwn' field in my wrapper since it always will be false. All code related to this field will also be useless. There are also unnecessary logic in native code: SWIGEXPORT jlong JNICALL Java_some_testJNI_Bar_1f_1get(JNIEnv *jenv, jclass jcls, jlong jarg1, jobject jarg1_) { jlong jresult = 0 ; struct Bar *arg1 = (struct Bar *) 0 ; Foo result; (void)jenv; (void)jcls; (void)jarg1_; arg1 = *(struct Bar **)&jarg1; result = ((arg1)->f); { Foo * resultptr = (Foo *) malloc(sizeof(Foo)); memmove(resultptr, &result, sizeof(Foo)); *(Foo **)&jresult = resultptr; } return jresult; } I don't need these calls to malloc and memmove. I want to force swig to resolve both of these problems, but don't know how. Is it possible?

    Read the article

  • JavaScript - Inheritance in Constructors

    - by j0ker
    For a JavaScript project we want to introduce object inheritance to decrease code duplication. However, I cannot quite get it working the way I want and need some help. We use the module pattern. Suppose there is a super element: a.namespace('a.elements.Element'); a.elements.Element = (function() { // public API -- constructor Element = function(properties) { this.id = properties.id; }; // public API -- prototype Element.prototype = { getID: function() { return this.id; } }; return Element; }()); And an element inheriting from this super element: a.namespace('a.elements.SubElement'); a.elements.SubElement = (function() { // public API -- constructor SubElement = function(properties) { // inheritance happens here // ??? this.color = properties.color; this.bogus = this.id + 1; }; // public API -- prototype SubElement.prototype = { getColor: function() { return this.color; } }; return SubElement; }()); You will notice that I'm not quite sure how to implement the inheritance itself. In the constructor I have to be able to pass the parameter to the super object constructor and create a super element that is then used to create the inherited one. I need a (comfortable) possibility to access the properties of the super object within the constructor of the new object. Ideally I could operate on the super object as if it was part of the new object. I also want to be able to create a new SubElement and call getID() on it. What I want to accomplish seems like the traditional class based inheritance. However, I'd like to do it using prototypal inheritance since that's the JavaScript way. Is that even doable? Thanks in advance! EDIT: Fixed usage of private variables as suggested in the comments. EDIT2: Another change of the code: It's important that id is accessible from the constructor of SubElement.
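
    One common sketch of this (constructor stealing plus prototype chaining, written against the a.elements namespace from the question; Object.create needs a small shim on very old browsers):

        a.elements.SubElement = (function() {
            // constructor: run Element's constructor on this object first,
            // so this.id is already set when the rest of the body runs
            var SubElement = function(properties) {
                a.elements.Element.call(this, properties);
                this.color = properties.color;
                this.bogus = this.id + 1;
            };

            // chain the prototypes so getID() is inherited
            SubElement.prototype = Object.create(a.elements.Element.prototype);
            SubElement.prototype.constructor = SubElement;
            SubElement.prototype.getColor = function() { return this.color; };

            return SubElement;
        }());

        var sub = new a.elements.SubElement({ id: 3, color: "red" });
        sub.getID();     // 3 (inherited from Element.prototype)
        sub.getColor();  // "red"

    Calling the parent constructor with .call(this, ...) is what makes id available inside SubElement's own constructor, which is the requirement mentioned in the second edit.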

    Read the article

  • How to pipe two CORE::system commands in a cross-platform way

    - by Pedro Silva
    I'm writing a System::Wrapper module to abstract away from CORE::system and the qx operator. I have a serial method that attempts to connect command1's output to command2's input. I've made some progress using named pipes, but POSIX::mkfifo is not cross-platform. Here's part of what I have so far (the run method at the bottom basically calls system): package main; my $obj1 = System::Wrapper->new( interpreter => 'perl', arguments => [-pe => q{''}], input => ['input.txt'], description => 'Concatenate input.txt to STDOUT', ); my $obj2 = System::Wrapper->new( interpreter => 'perl', arguments => [-pe => q{'$_ = reverse $_}'}], description => 'Reverse lines of input input', output => { '>' => 'output' }, ); $obj1->serial( $obj2 ); package System::Wrapper; #... sub serial { my ($self, @commands) = @_; eval { require POSIX; POSIX->import(); require threads; }; my $tmp_dir = File::Spec->tmpdir(); my $last = $self; my @threads; push @commands, $self; for my $command (@commands) { croak sprintf "%s::serial: type of args to serial must be '%s', not '%s'", ref $self, ref $self, ref $command || $command unless ref $command eq ref $self; my $named_pipe = File::Spec->catfile( $tmp_dir, int \$command ); POSIX::mkfifo( $named_pipe, 0777 ) or croak sprintf "%s::serial: couldn't create named pipe %s: %s", ref $self, $named_pipe, $!; $last->output( { '>' => $named_pipe } ); $command->input( $named_pipe ); push @threads, threads->new( sub{ $last->run } ); $last = $command; } $_->join for @threads; } #... My specific questions: Is there an alternative to POSIX::mkfifo that is cross-platform? Win32 named pipes don't work, as you can't open those as regular files, neither do sockets, for the same reasons. The above doesn't quite work; the two threads get spawned correctly, but nothing flows across the pipe. I suppose that might have something to do with pipe deadlocking or output buffering. What throws me off is that when I run those two commands in the actual shell, everything works as expected.

    Read the article

  • Web service security not working. Java

    - by Nitesh Panchal
    Hello, I have a ejb module which contains my ejbs as well as web services. I am using Netbeans 6.8 and Glassfish V3 I right clicked on my web service and clicked "edit web service attributes" and then checked "secure service" and then selected keystore of my server. This is my sun-ejb-jar.xml file :- <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE sun-ejb-jar PUBLIC "-//Sun Microsystems, Inc.//DTD Application Server 9.0 Servlet 2.5//EN" "http://www.sun.com/software/appserver/dtds/sun-web-app_2_5-0.dtd"> <sun-ejb-jar> <security-role-mapping> <role-name>Admin</role-name> <group-name>Admin</group-name> </security-role-mapping> <security-role-mapping> <role-name>General</role-name> <group-name>General</group-name> </security-role-mapping> <security-role-mapping> <role-name>Member</role-name> <group-name>Member</group-name> </security-role-mapping> <enterprise-beans> <ejb> <ejb-name>MemberBean</ejb-name> <webservice-endpoint> <port-component-name>wsMember</port-component-name> <login-config> <auth-method>BASIC</auth-method> <realm>file</realm> </login-config> </webservice-endpoint> </ejb> </enterprise-beans> </sun-ejb-jar> Here MemberBean is my ejb and wsMember is my webservice. Then i made another project and added web service client and again right clicked on "edit web service attributes" and gave password as test and test. This username and password (test) is in Glassfish server in file realm. But when i try to invoke my webservice i always get SEC5046: Audit: Authentication refused for [test]. SEC1201: Login failed for user: test What am i doing wrong? Am i missing something?

    Read the article

  • When to use new layouts and when to use new activities?

    - by cmdfrg
    I'm making a game in Android and I'm trying to add a set of menu screens. Each screen takes up the whole display and has various transitions available to other screens. As a rough summary, the menu screens are: (1) Start screen, (2) Difficulty select screen, (3) Game screen, (4) Pause screen, (5) Game over screen. And there are several different ways you can transition between screens: 1 - 2, 2 - 3, 3 - 4 (pause game), 4 - 1 (exit game), 4 - 3 (resume game), 3 - 5 (game ends). Obviously, I need some stored state when moving between screens, such as the difficulty level selected when starting a game and what the player's score is when the game over screen is shown. Can anyone give me some advice for the easiest way to implement the above screens and transitions in Android? All the create/destroy/pause/resume methods make me nervous about writing brittle code if I'm not careful. I'm not fond of using an Activity for each screen. It seems too heavyweight, having to pass data around using intents seems like a real pain and each screen isn't a useful module by itself. As the "back" button doesn't always go back to the previous screen either, my menu layout doesn't seem to fit the activity model well. At the moment, I'm representing each screen as an XML layout file and I have one activity. I set the different buttons on each layout to call setContentView to update the screen the main activity is showing (e.g. the pause button changes the layout to the pause screen). The activity holds onto all the state needed (e.g. the current difficulty level and the game high score), which makes it easy to share data between screens. This seems roughly similar to the LunarLander sample, except I'm using multiple screens. Does what I have at the moment sound OK or am I not doing things the typical Android way? Is there a class I can use (e.g. something like ViewFlipper) that could make my life easier? By the way, my game screen is implemented as a SurfaceView that stores the game state. I need the state in this view to persist between calls to setContentView (e.g. to resume from paused). Is the right idea to create the game view when the activity starts, keep a reference to it and then use this reference with setContentView whenever I want the game screen to appear?

    Read the article

  • recursion resulting in extra unwanted data

    - by spacerace
    I'm writing a module to handle dice rolling. Given x die of y sides, I'm trying to come up with a list of all potential roll combinations. This code assumes 3 die, each with 3 sides labeled 1, 2, and 3. (I realize I'm using "magic numbers" but this is just an attempt to simplify and get the base code working.) int[] set = { 1, 1, 1 }; list = diceroll.recurse(0,0, list, set); ... public ArrayList<Integer> recurse(int index, int i, ArrayList<Integer> list, int[] set){ if(index < 3){ // System.out.print("\n(looping on "+index+")\n"); for(int k=1;k<=3;k++){ // System.out.print("setting i"+index+" to "+k+" "); set[index] = k; dump(set); recurse(index+1, i, list, set); } } return list; } (dump() is a simple method to just display the contents of list[]. The variable i is not used at the moment.) What I'm attempting to do is increment a list[index] by one, stepping through the entire length of the list and incrementing as I go. This is my "best attempt" code. Here is the output: Bold output is what I'm looking for. I can't figure out how to get rid of the rest. (This is assuming three dice, each with 3 sides. Using recursion so I can scale it up to any x dice with y sides.) [1][1][1] [1][1][1] [1][1][1] [1][1][2] [1][1][3] [1][2][3] [1][2][1] [1][2][2] [1][2][3] [1][3][3] [1][3][1] [1][3][2] [1][3][3] [2][3][3] [2][1][3] [2][1][1] [2][1][2] [2][1][3] [2][2][3] [2][2][1] [2][2][2] [2][2][3] [2][3][3] [2][3][1] [2][3][2] [2][3][3] [3][3][3] [3][1][3] [3][1][1] [3][1][2] [3][1][3] [3][2][3] [3][2][1] [3][2][2] [3][2][3] [3][3][3] [3][3][1] [3][3][2] [3][3][3] I apologize for the formatting, best I could come up with. Any help would be greatly appreciated. (This method was actually stemmed to use the data for something quite trivial, but has turned into a personal challenge. :) edit: If there is another approach to solving this problem I'd be all ears, but I'd also like to solve my current problem and successfully use recursion for something useful.

    Read the article

  • Drupal's profile_save_profile Doesn't Work in hook_cron, When Run by the Server's cron

    - by anschauung
    I have a problem with the following implementation of hook_cron in Drupal 6.1.3. The script below runs exactly as expected: it sends a welcome letter to new members, and updates a hidden field in their profile to designate that the letter has been sent. There are no errors in the letter, all new members are accounted for, etc. The problem is that the last line -- updating the profile -- doesn't seem to work when Drupal cron is invoked by the 'real' cron on the server. When I run cron manually (such as via /admin/reports/status/run-cron) the profile fields get updated as expected. Any suggestions as to what might be causing this? (Note, since someone will suggest it: members join by means outside of Drupal, and are uploaded to Drupal nightly, so Drupal's built-in welcome letters won't work (I think).) <?php function foo_cron() { // Find users who have not received the new member letter, // and send them a welcome email // Get users who have not recd a message, as per the profile value setting $pending_count_sql = "SELECT COUNT(*) FROM {profile_values} v WHERE (v.value = 0) AND (v.fid = 7)"; //fid 7 is the profile field for profile_intro_email_sent if (db_result(db_query($pending_count_sql))) { // Load the message template, since we // know we have users to feed into it. $email_template_file = "/home/foo/public_html/drupal/" . drupal_get_path('module', 'foo') . "/emails/foo-new-member-email-template.txt"; $email_template_data = file_get_contents($email_template_file); fclose($email_template_fh); //We'll just pull the uid, since we have to run user_load anyway $query = "SELECT v.uid FROM {profile_values} v WHERE (v.value = 0) AND (v.fid = 7)"; $result = db_query(($query)); // Loop through the uids, loading profiles so as to access string replacement variables while ($item = db_fetch_object($result)) { $new_member = user_load($item->uid); $translation_key = array( // ... code that generates the string replacement array ... ); // Compose the email component of the message, and send to user $email_text = t($email_template_data, $translation_key); $language = user_preferred_language($new_member); // use member's language preference $params['subject'] = 'New Member Benefits - Welcome to FOO!'; $params['content-type'] = 'text/plain; charset=UTF-8; format=flowed;'; $params['content'] = $email_text; drupal_mail('foo', 'welcome_letter', $new_member->mail, $language, $params, '[email protected]'); // Mark the user's profile to indicate that the message was sent $change = array( // Rebuild all of the profile fields in this category, // since they'll be deleted otherwise 'profile_first_name' => $new_member->profile_first_name, 'profile_last_name' => $new_member->profile_last_name, 'profile_intro_email_sent' => 1); profile_save_profile($change, $new_member, "Membership Data"); } } }

    Read the article

  • JavaScript: Can I declare a variable by querying which function is called? (Newbie)

    - by belle3WA
    I'm working with an existing JavaScript-powered cart module that I am trying to modify. I do not know JS and for various reasons need to work with what is already in place. The text that appears for my quantity box is defined within an existing function: function writeitems() { var i; for (i=0; i<items.length; i++) { var item=items[i]; var placeholder=document.getElementById("itembuttons" + i); var s="<p>"; // options, if any if (item.options) { s=s+"<select id='options"+i+"'>"; var j; for (j=0; j<item.options.length; j++) { s=s+"<option value='"+item.options[j].name+"'>"+item.options[j].name+"</option>"; } s=s+"</select>&nbsp;&nbsp;&nbsp;"; } // add to cart s=s+method+"Quantity: <input id='quantity"+i+"' value='1' size='3'/> "; s=s+"<input type='submit' value='Add to Cart' onclick='addtocart("+i+"); return false;'/></p>"; } placeholder.innerHTML=s; } refreshcart(false); } I have two different types of quantity input boxes; one (donations) needs to be prefaced with a dollar sign, and one (items) should be blank. I've taken the existing additem function, copied it, and renamed it so that there are two identical functions, one for items and one for donations. The additem function is below: function additem(name,cost,quantityincrement) { if (!quantityincrement) quantityincrement=1; var index=items.length; items[index]=new Object; items[index].name=name; items[index].cost=cost; items[index].quantityincrement=quantityincrement; document.write("<span id='itembuttons" + index + "'></span>"); return index; } Is there a way to declare a global variable based on which function (additem or adddonation) is called so that I can add that into the writeitems function so display or hide the dollar sign as needed? Or is there a better solution? I can't use HTML in the body of the cart page because of the way it is currently coded, so I'm depending on the JS to take care of it. Any help for a newbie is welcome. Thanks!
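
    One low-impact sketch, assuming an isDonation flag (a made-up property name, not part of the existing cart code): rather than a global, each item can carry its own flag, set by whichever add function created it, and writeitems() can check it when building the quantity box:

        // in additem(), after items[index] is created:
        items[index].isDonation = false;   // plain item: no dollar sign

        // in the copied adddonation(), after items[index] is created:
        items[index].isDonation = true;    // donation: prefix the quantity box with $

        // in writeitems(), when building the quantity box:
        var prefix = item.isDonation ? "$" : "";
        s = s + method + "Quantity: " + prefix + "<input id='quantity" + i + "' value='1' size='3'/> ";

    Keeping the flag on the item object avoids a global variable and still works when regular items and donations appear on the same page.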

    Read the article

  • Extending struts 2 "property" data tag

    - by John B.
    Hi, We are currently developing a project with Struts2. We have a module on which we display a large amount of data on read-only fields from a group of beans via the "property" Struts 2 data tag (i.e. <s:property value="aBeanProperty" />) on a jsp file. In some cases most of the fields might come back empty, so a blank is being displayed on screen. Our customer is now requesting us to display a default string (i.e. "N/A") whenever a property comes back empty, so that it is displayed in place of the blank spaces currently shown. We are looking for a way to achieve this in a clean and maintainable way. The 'property' tag comes with a 'default' attribute on which one can define a default value in cases when the accessed property comes back as null. However, most of our properties are empty strings, therefore it does not work in our case. Another solution we are thinking of is to define a base class for all of our beans, and define a util method which will validate if a string is null or empty and then return the default value. Then we would call this method from each bean getter. And yes, this would be tiresome and kind of ugly :), therefore we are holding out on this one in case of a better solution. Now, we have in mind a solution which we think would be the best but have not had luck figuring out how to implement it. We are planning on extending the 'property' tag in some way, defining a new 'default' attribute so that besides working on null properties, it also does so on empty strings ("", " ", etc). Therefore we would only need to replace the original s:property tag with our new custom tag, and the desired result would be achieved without touching java code. Do you have an idea on how to do this? Also, any other clever solution (maybe some sort of design pattern?) on how to default the values of a large number of bean properties is welcome too! (Or maybe there might even be some tag that does this already in Struts2??) Thanks in advance.

    Read the article

  • Django Custom Field: Only run to_python() on values from DB?

    - by Adam Levy
    How can I ensure that my custom field's *to_python()* method is only called when the data in the field has been loaded from the DB? I'm trying to use a Custom Field to handle the Base64 Encoding/Decoding of a single model property. Everything appeared to be working correctly until I instantiated a new instance of the model and set this property with its plaintext value...at that point, Django tried to decode the field but failed because it was plaintext. The allure of the Custom Field implementation was that I thought I could handle 100% of the encoding/decoding logic there, so that no other part of my code ever needed to know about it. What am I doing wrong? (NOTE: This is just an example to illustrate my problem, I don't need advice on how I should or should not be using Base64 Encoding) def encode(value): return base64.b64encode(value) def decode(value): return base64.b64decode(value) class EncodedField(models.CharField): __metaclass__ = models.SubfieldBase def __init__(self, max_length, *args, **kwargs): super(EncodedField, self).__init__(*args, **kwargs) def get_prep_value(self, value): return encode(value) def to_python(self, value): return decode(value) class Person(models.Model): internal_id = EncodedField(max_length=32) ...and it breaks when I do this in the interactive shell. Why is it calling to_python() here? >>> from myapp.models import * >>> Person(internal_id="foo") Traceback (most recent call last): File "<console>", line 1, in <module> File "/usr/local/lib/python2.6/dist-packages/django/db/models/base.py", line 330, in __init__ setattr(self, field.attname, val) File "/usr/local/lib/python2.6/dist-packages/django/db/models/fields/subclassing.py", line 98, in __set__ obj.__dict__[self.field.name] = self.field.to_python(value) File "../myapp/models.py", line 87, in to_python return decode(value) File "../myapp/models.py", line 74, in decode return base64.b64decode(value) File "/usr/lib/python2.6/base64.py", line 76, in b64decode raise TypeError(msg) TypeError: Incorrect padding I had expected I would be able to do something like this... >>> from myapp.models import * >>> obj = Person(internal_id="foo") >>> obj.internal_id 'foo' >>> obj.save() >>> newObj = Person.objects.get(internal_id="foo") >>> newObj.internal_id 'foo' >>> newObj.internal_id = "bar" >>> newObj.internal_id 'bar' >>> newObj.save() ...what am I doing wrong?

    Read the article

  • Linux fsck.ext3 says "Device or resource busy" although I did not mount the disk.

    - by matnagel
    I am running an ubuntu 8.04 server instance with a 8GB virtual disk on vmware 1.0.9. For disk maintenance I made a copy of the virtual disk (by making a copy of the 2 vmdk files of sda on the stopped vm on the host) and added it to the original vm. Now this vm has it's original virtual disk sda plus a 1:1 copy (sdd). There are 2 additional disk sdb and sdc which I ignore.) I would expect sdb not to be mounted when I start the vm. So I try tp do a ext2 fsck on sdd from the running vm, but it reports fsck reported that sdb was mounted. $ sudo fsck.ext3 -b 8193 /dev/sdd e2fsck 1.40.8 (13-Mar-2008) fsck.ext3: Device or resource busy while trying to open /dev/sdd Filesystem mounted or opened exclusively by another program? The "mount" command does not tell me sdd is mounted: $ sudo mount /dev/sda1 on / type ext3 (rw,relatime,errors=remount-ro) proc on /proc type proc (rw,noexec,nosuid,nodev) /sys on /sys type sysfs (rw,noexec,nosuid,nodev) varrun on /var/run type tmpfs (rw,noexec,nosuid,nodev,mode=0755) varlock on /var/lock type tmpfs (rw,noexec,nosuid,nodev,mode=1777) udev on /dev type tmpfs (rw,mode=0755) devshm on /dev/shm type tmpfs (rw) devpts on /dev/pts type devpts (rw,gid=5,mode=620) /dev/sdc1 on /mnt/r1 type ext3 (rw,relatime,errors=remount-ro) /dev/sdb1 on /mnt/k1 type ext3 (rw,relatime,errors=remount-ro) securityfs on /sys/kernel/security type securityfs (rw) When I ignore the warning and continue the fsck, it reported many errors. How do I get this under control? Is there a better way to figure out if sdd is mounted? Or how is it "busy? How to unmount it then? How to prevent ubuntu from automatically mounting. Or is there something else I am missing? Also from /var/log/syslog I cannot see it is mounted, this is the last part of the startup sequence: kernel: [ 14.229494] ACPI: Power Button (FF) [PWRF] kernel: [ 14.230326] ACPI: AC Adapter [ACAD] (on-line) kernel: [ 14.460136] input: PC Speaker as /devices/platform/pcspkr/input/input3 kernel: [ 14.639366] udev: renamed network interface eth0 to eth1 kernel: [ 14.670187] eth1: link up kernel: [ 16.329607] input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/ kernel: [ 16.367540] parport_pc 00:08: reported by Plug and Play ACPI kernel: [ 16.367670] parport0: PC-style at 0x378, irq 7 [PCSPP,TRISTATE] kernel: [ 19.425637] NET: Registered protocol family 10 kernel: [ 19.437550] lo: Disabled Privacy Extensions kernel: [ 24.328857] loop: module loaded kernel: [ 24.449293] lp0: using parport0 (interrupt-driven). kernel: [ 26.075499] EXT3 FS on sda1, internal journal kernel: [ 28.380299] kjournald starting. Commit interval 5 seconds kernel: [ 28.381706] EXT3 FS on sdc1, internal journal kernel: [ 28.381747] EXT3-fs: mounted filesystem with ordered data mode. kernel: [ 28.444867] kjournald starting. Commit interval 5 seconds kernel: [ 28.445436] EXT3 FS on sdb1, internal journal kernel: [ 28.445444] EXT3-fs: mounted filesystem with ordered data mode. kernel: [ 31.309766] eth1: no IPv6 routers present kernel: [ 35.054268] ip_tables: (C) 2000-2006 Netfilter Core Team mysqld_safe[4367]: started mysqld[4370]: 100124 14:40:21 InnoDB: Started; log sequence number 0 10130914 mysqld[4370]: 100124 14:40:21 [Note] /usr/sbin/mysqld: ready for connections. mysqld[4370]: Version: '5.0.51a-3ubuntu5.4' socket: '/var/run/mysqld/mysqld.sock' port: 3 /etc/mysql/debian-start[4417]: Upgrading MySQL tables if necessary. 
/etc/mysql/debian-start[4422]: Looking for 'mysql' in: /usr/bin/mysql /etc/mysql/debian-start[4422]: Looking for 'mysqlcheck' in: /usr/bin/mysqlcheck /etc/mysql/debian-start[4422]: This installation of MySQL is already upgraded to 5.0.51a, u /etc/mysql/debian-start[4436]: Checking for insecure root accounts. /etc/mysql/debian-start[4444]: Checking for crashed MySQL tables.

    Read the article

  • HTTP 500 Internal Server Error on IIS 7.5 with MVC3

    - by Tor Haugen
    I am trying to install an MVC3 application on our production server with no luck. The application is from a 3rd party (compiled), and so debugging is not available to me. Besides, I strongly suspect the error occurs before any code in the site has a chance to execute. Our staging server is - as far as I can determine - set up excactly like the production server. Both run Windows Server 2008 Standard R2, both also run a Sharepoint 2010 site (though this install doesn't touch that in any way). IIS is version 7.5, and .NET Framework 4.0 (required by the MVC app) is (recently) installed (by me, with a reboot after). The application is very small and simple and, as far as I can tell sticks to fairly standard functionality - including forms authentication (ie. it doesnt' pull any dirty tricks). The error message shown in the browser is very general: HTTP Error 500.0 - Internal Server Error An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur. The bit about 'An error message detailing the cause' being in the application event log seems to be just speculation - a pious hope that whatever code actually caused the error will log it. Nothing useful is to be found in the event log (only the very same message, logged by IIS). Module: AspNetInitClrHostFailureModule Notification: BeginRequest Handler: StaticFile Error Code: 0x80070002 Requested URL: http://xxxxxx.xxxxxx.xx:80/ Physical Path: C:\Xxxxxxx\Prod\WebClient Logon Method: Not yet determined Logon User: Not yet determined Using Failed Request Tracing, I have been able to track the error (as also indicated above) to the AspNetInitClrHostFailureModule: 103. -NOTIFY_MODULE_START ModuleName AspNetInitClrHostFailureModule Notification 1 fIsPostNotification false Notification BEGIN_REQUEST 104. -SET_RESPONSE_ERROR_DESCRIPTION ErrorDescription An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur. 105. -MODULE_SET_RESPONSE_ERROR_STATUS ModuleName AspNetInitClrHostFailureModule Notification 1 HttpStatus 500 HttpReason Internal Server Error HttpSubStatus 0 ErrorCode 2147942402 ConfigExceptionInfo Notification BEGIN_REQUEST ErrorCode The system cannot find the file specified. (0x80070002) So there you have it. Seemingly, the AspNetInitClrHostFailureModule fails to find some file. So some questions are: What is the AspNetInitClrHostFailureModule? It is not listed in the fairly exhausting list of modules configurable in IIS manager for the site. I have had no success googling it either. Maybe it's secret.. I access the root URL of the site. This is supposed to be redirected to /Account/LogOn by the FormsAuthenticationModule. Why then is the handler StaticFile? Is that a clue? I have tried removing the infamous system.webserver/modules/runAllManagedModulesForAllRequests attribute, and that makes the error go away (but MVC not actually working, of course). I am prepared to specify all necessary modules manually if that's what it takes, but if the AspNetInitClrHostFailureModule is actually needed, I will be just as stuck. Does anyone know, or can anyone direct me to someone who knows, exactly what modules a typical MVC3 application actually needs? 
This question might well be a duplicate of this one, but he didn't get any useful answer, and also asked less specific questions. So I'll have my own go. Hoping for some help here :) Edit: I have now tried setting up a trivial MVC 3 project on the server. I created a new project using the MVC Application template, compiled it and deployed it to the server. It behaves in exactly the same way. The server simply cannot run MVC 3 projects.

    Read the article

  • Getting hp-snmp-agents on HP ProLiant DL360 on Lenny working

    - by mark
    After receiving our HP ProLiant DL360 I'd like to integrate the machine into our Munin system and thus enable ProLiant specific information to be exposed via SNMP. I'm running Debian Lenny with kernel 2.6.26-2-vserver-amd64 . I've followed http://downloads.linux.hp.com/SDR/getting_started.html and the HP repository has been added to /etc/apt/sources.list.d/HP-ProLiantSupportPack.list . Setting up Lenny SNMP itself is not a problem, I configure it to have a public v1 community string to read all data and it works. I install hp-snmp-agents and run hpsnmpconfig and it adds additional lines to the top of /etc/snmp/snmpd.conf : dlmod cmaX /usr/lib64/libcmaX64.so snmpd gets restarted. Via lsof I can see that libcmaX64 was loaded and is used by snmpd, put I do not get any additional information out of snmp. I use snmpwalk -v 1 -c public ... and I can see many OIDs but I do not see the new ones I'd expect, most notably temperatures, fan speed and such. The OIDs I'm expecting are e.g. 1.3.6.1.4.1.232.6.2.6.8.1.4.1 , this is from the existing munin plugins from http://exchange.munin-monitoring.org/plugins/snmp__hp_temp/version/1 . snmpd[19007]: cmaX: Parsing shared as a type was unsucessful snmpd[19007]: cmaX: listening for subagents on port 25375 snmpd[19007]: cmaX: subMIB 1 handler has disconnected snmpd[19007]: cmaX: subMIB 2 handler has disconnected snmpd[19007]: cmaX: subMIB 3 handler has disconnected snmpd[19007]: cmaX: subMIB 5 handler has disconnected snmpd[19007]: cmaX: subMIB 6 handler has disconnected snmpd[19007]: cmaX: subMIB 8 handler has disconnected snmpd[19007]: cmaX: subMIB 9 handler has disconnected snmpd[19007]: cmaX: sent ColdStarts on ports 25376 to 25393 snmpd[19007]: cmaX: subMIB 10 handler has disconnected snmpd[19007]: cmaX: subMIB 11 handler has disconnected snmpd[19007]: cmaX: subMIB 14 handler has disconnected snmpd[19007]: cmaX: subMIB 15 handler has disconnected snmpd[19007]: cmaX: subMIB 16 handler has disconnected snmpd[19007]: cmaX: subMIB 21 handler has disconnected snmpd[19007]: cmaX: subMIB 22 handler has disconnected snmpd[19007]: cmaX: subMIB 23 handler has disconnected snmpd[19007]: cmaX: subMIB 1 will be sent on port 25376 to hp Advanced Server Management_Peer snmpd[19007]: cmaX: subMIB 2 will be sent on port 25376 to hp Advanced Server Management_Peer snmpd[19007]: cmaX: subMIB 3 will be sent on port 25376 to hp Advanced Server Management_Peer snmpd[19007]: cmaX: subMIB 5 will be sent on port 25376 to hp Advanced Server Management_Peer snmpd[19007]: cmaX: subMIB 6 will be sent on port 25376 to hp Advanced Server Management_Peer snmpd[19007]: cmaX: subMIB 8 will be sent on port 25376 to hp Advanced Server Management_Peer snmpd[19007]: cmaX: subMIB 9 will be sent on port 25376 to hp Advanced Server Management_Peer snmpd[19007]: cmaX: subMIB 10 will be sent on port 25376 to hp Advanced Server Management_Peer snmpd[19007]: cmaX: subMIB 11 will be sent on port 25376 to hp Advanced Server Management_Peer snmpd[19007]: cmaX: subMIB 14 will be sent on port 25376 to hp Advanced Server Management_Peer snmpd[19007]: cmaX: subMIB 15 will be sent on port 25376 to hp Advanced Server Management_Peer snmpd[19007]: cmaX: subMIB 16 will be sent on port 25376 to hp Advanced Server Management_Peer snmpd[19007]: cmaX: subMIB 21 will be sent on port 25376 to hp Advanced Server Management_Peer snmpd[19007]: cmaX: subMIB 22 will be sent on port 25376 to hp Advanced Server Management_Peer snmpd[19007]: cmaX: subMIB 23 will be sent on port 25376 to hp Advanced Server 
Management_Peer snmpd[19007]: cmaX: subMIB 18 handler has disconnected snmpd[19007]: cmaX: subMIB 18 will be sent on port 25393 to cpqnicd snmpd[19007]: NET-SNMP version 5.4.1 This doesn't look particular bad for me, it's just informational I guess. I've compared the walking OID output with and without the module and there's no difference in the OID served back at all. Are there any other prerequisites I'm missing? I've also noticed that from the time I installed hp-snmp-agents it adds a lot of additional daemons and that my load suddenly jumps to 1. I've uninstalled the package for now. Is this expected behavior?

    Read the article

  • PHP crashing (seg-fault) under mod_fcgi, apache

    - by Andras Gyomrey
    I've been programming a site using: Zend Framework 1.11.5 (complete MVC) PHP 5.3.6 Apache 2.2.19 CentOS 5.6 i686 virtuozzo on vps cPanel WHM 11.30.1 (build 4) Mysql 5.1.56-log Mysqli API 5.1.56 The issue started here http://stackoverflow.com/questions/6769515/php-programming-seg-fault. In brief, php is giving me random segmentation-faults. [Wed Jul 20 17:45:34 2011] [error] mod_fcgid: process /usr/local/cpanel/cgi-sys/php5(11562) exit(communication error), get unexpected signal 11 [Wed Jul 20 17:45:34 2011] [warn] [client 190.78.208.30] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server [Wed Jul 20 17:45:34 2011] [error] [client 190.78.208.30] Premature end of script headers: index.php About extensions. When i compile php with "--enable-debug" flag, i have to disable this line: zend_extension="/usr/local/IonCube/ioncube_loader_lin_5.3.so" Otherwise, the server doesn't accept requests and i get a "The connection with the server was reset". It is possible that i have to disable eaccelerator too because of the same reason. I still don't get why apache gets running it some times and some others not: extension="eaccelerator.so" Anyway, after i get httpd running, seg-faults can occurr randomly. If i don't compile php with "--enable-debug" flag, i can get DETERMINISTICALLY a php crash: <?php class Admin_DbController extends Controller_BaseController { public function updateSqlDefinitionsAction() { $db = Zend_Registry::get('db'); $row = $db->fetchRow("SHOW CREATE TABLE 222AFI"); } } ?> BUT if i compile php with "--enable-debug" flag, it's really hard to get this error. I must add some complexity to make it crash. I have to be doing many paralell requests for a few seconds to get a crash: <?php class Admin_DbController extends Controller_BaseController { public function updateSqlDefinitionsAction() { $db = Zend_Registry::get('db'); $tableList = $db->listTables(); foreach ($tableList as $tableName){ $row = $db->fetchRow("SHOW CREATE TABLE " . $db->quoteIdentifier($tableName)); file_put_contents( DB_DEFINITIONS_PATH . '/' . $tableName . '.sql', $row['Create Table'] . ';' ); } } } ?> Please notice this is the same script, but creating DDL for all tables in database rather than for one. It seems that if php is heavy loaded (with extensions and me doing many paralell requests) it's when i get php to crash. About starting httpd with "-X": i've tried. The thing is, it is already hard to make php crash with --enable-debug. With "-X" option (which only enables one child process) i can't do parallel requests. So i haven't been able to create to proper debug backtrace: https://bugs.php.net/bugs-generating-backtrace.php My concrete question is, what do i do to get a coredump? root@GWT4 [~]# httpd -V Server version: Apache/2.2.19 (Unix) Server built: Jul 20 2011 19:18:58 Cpanel::Easy::Apache v3.4.2 rev9999 Server's Module Magic Number: 20051115:28 Server loaded: APR 1.4.5, APR-Util 1.3.12 Compiled using: APR 1.4.5, APR-Util 1.3.12 Architecture: 32-bit Server MPM: Prefork threaded: no forked: yes (variable process count) Server compiled with.... 
-D APACHE_MPM_DIR="server/mpm/prefork" -D APR_HAS_SENDFILE -D APR_HAS_MMAP -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled) -D APR_USE_SYSVSEM_SERIALIZE -D APR_USE_PTHREAD_SERIALIZE -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT -D APR_HAS_OTHER_CHILD -D AP_HAVE_RELIABLE_PIPED_LOGS -D DYNAMIC_MODULE_LIMIT=128 -D HTTPD_ROOT="/usr/local/apache" -D SUEXEC_BIN="/usr/local/apache/bin/suexec" -D DEFAULT_PIDLOG="logs/httpd.pid" -D DEFAULT_SCOREBOARD="logs/apache_runtime_status" -D DEFAULT_LOCKFILE="logs/accept.lock" -D DEFAULT_ERRORLOG="logs/error_log" -D AP_TYPES_CONFIG_FILE="conf/mime.types" -D SERVER_CONFIG_FILE="conf/httpd.conf"

    Read the article

  • Error when installing AppFabric 1.1 on Server 2012 64bit

    - by no9
    I am trying to install AppFabric 1.1 on 64bit Windows Server 2012 R2. All updates have been installed and updates are turned ON .NET Framework 4.0 is installed .NET Framework 3.5 is installed IIS is installed Windows Powershell 3.0 should already be included in Server 2012 I am getting the following error: 2014-03-21 11:02:34, Information Setup ===== Logging started: 2014-03-21 11:02:34+01:00 ===== 2014-03-21 11:02:34, Information Setup File: c:\6c4006b0b3f6dee1bf616f1967\setup.exe 2014-03-21 11:02:34, Information Setup InternalName: Setup.exe 2014-03-21 11:02:34, Information Setup OriginalFilename: Setup.exe 2014-03-21 11:02:34, Information Setup FileVersion: 1.1.2106.32 2014-03-21 11:02:34, Information Setup FileDescription: Setup.exe 2014-03-21 11:02:34, Information Setup Product: Microsoft(R) Windows(R) Server AppFabric 2014-03-21 11:02:34, Information Setup ProductVersion: 1.1.2106.32 2014-03-21 11:02:34, Information Setup Debug: False 2014-03-21 11:02:34, Information Setup Patched: False 2014-03-21 11:02:34, Information Setup PreRelease: False 2014-03-21 11:02:34, Information Setup PrivateBuild: False 2014-03-21 11:02:34, Information Setup SpecialBuild: False 2014-03-21 11:02:34, Information Setup Language: Language Neutral 2014-03-21 11:02:34, Information Setup 2014-03-21 11:02:34, Information Setup OS Name: Windows Server 2012 R2 Standard 2014-03-21 11:02:34, Information Setup OS Edition: ServerStandard 2014-03-21 11:02:34, Information Setup OSVersion: Microsoft Windows NT 6.2.9200.0 2014-03-21 11:02:34, Information Setup CurrentCulture: sl-SI 2014-03-21 11:02:34, Information Setup Processor Architecture: AMD64 2014-03-21 11:02:34, Information Setup Event Registration Source : AppFabric_Setup 2014-03-21 11:02:34, Information Setup 2014-03-21 11:02:34, Information Setup Microsoft.ApplicationServer.Setup.Upgrade.V1UpgradeSetupModule : Initiating V1.0 Upgrade module. 2014-03-21 11:02:34, Information Setup Microsoft.ApplicationServer.Setup.Upgrade.V1UpgradeSetupModule : V1.0 is not installed. 2014-03-21 11:02:54, Information Setup Microsoft.ApplicationServer.Setup.Upgrade.V1UpgradeSetupModule : Initiating V1 Upgrade pre-install. 2014-03-21 11:02:54, Information Setup Microsoft.ApplicationServer.Setup.Upgrade.V1UpgradeSetupModule : V1.0 is not installed, not taking backup. 2014-03-21 11:02:55, Information Setup Executing C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe with commandline -iru. 2014-03-21 11:02:55, Information Setup Return code from aspnet_regiis.exe is 0 2014-03-21 11:02:55, Information Setup Process.Start: C:\Windows\system32\msiexec.exe /quiet /norestart /i "c:\6c4006b0b3f6dee1bf616f1967\Microsoft CCR and DSS Runtime 2008 R3.msi" /l*vx "C:\Users\Administrator\AppData\Local\Temp\AppServerSetup1_1(2014-03-21 11-02-55).log" 2014-03-21 11:02:57, Information Setup Process.ExitCode: 0x00000000 2014-03-21 11:02:57, Information Setup Windows features successfully enabled. 
2014-03-21 11:02:57, Information Setup Process.Start: C:\Windows\system32\msiexec.exe /quiet /norestart /i "c:\6c4006b0b3f6dee1bf616f1967\Packages\AppFabric-1.1-for-Windows-Server-64.msi" ADDDEFAULT=Worker,WorkerAdmin,CacheService,CacheAdmin,Setup /l*vx "C:\Users\Administrator\AppData\Local\Temp\AppServerSetup1_1(2014-03-21 11-02-57).log" LOGFILE="C:\Users\Administrator\AppData\Local\Temp\AppServerSetup1_1_CustomActions(2014-03-21 11-02-57).log" INSTALLDIR="C:\Program Files\AppFabric 1.1 for Windows Server" LANGID=en-US 2014-03-21 11:03:45, Information Setup Process.ExitCode: 0x00000643 2014-03-21 11:03:45, Error Setup AppFabric installation failed because installer MSI returned with error code : 1603 2014-03-21 11:03:45, Error Setup 2014-03-21 11:03:45, Error Setup AppFabric installation failed because installer MSI returned with error code : 1603 2014-03-21 11:03:45, Error Setup 2014-03-21 11:03:45, Information Setup Microsoft.ApplicationServer.Setup.Core.SetupException: AppFabric installation failed because installer MSI returned with error code : 1603 2014-03-21 11:03:45, Information Setup at Microsoft.ApplicationServer.Setup.Installer.WindowsInstallerProxy.GenerateAndThrowSetupException(Int32 exitCode, LogEventSource logEventSource) 2014-03-21 11:03:45, Information Setup at Microsoft.ApplicationServer.Setup.Installer.WindowsInstallerProxy.Invoke(LogEventSource logEventSource, InstallMode installMode, String packageIdentity, List`1 updateList, List`1 customArguments) 2014-03-21 11:03:45, Information Setup at Microsoft.ApplicationServer.Setup.Installer.MsiInstaller.InstallSelectedFeatures() 2014-03-21 11:03:45, Information Setup at Microsoft.ApplicationServer.Setup.Installer.MsiInstaller.Install() 2014-03-21 11:03:45, Information Setup at Microsoft.ApplicationServer.Setup.Client.ProgressPage.StartAction() 2014-03-21 11:03:45, Information Setup 2014-03-21 11:03:45, Information Setup === Summary of Actions === 2014-03-21 11:03:45, Information Setup Required Windows components : Completed Successfully 2014-03-21 11:03:45, Information Setup IIS Management Console : Completed Successfully 2014-03-21 11:03:45, Information Setup Microsoft CCR and DSS Runtime 2008 R3 : Completed Successfully 2014-03-21 11:03:45, Information Setup AppFabric 1.1 for Windows Server : Failed 2014-03-21 11:03:45, Information Setup Hosting Services : Failed 2014-03-21 11:03:45, Information Setup Caching Services : Failed 2014-03-21 11:03:45, Information Setup Hosting Administration : Failed 2014-03-21 11:03:45, Information Setup Cache Administration : Failed 2014-03-21 11:03:45, Information Setup Microsoft Update : Skipped 2014-03-21 11:03:45, Information Setup Microsoft Update : Skipped 2014-03-21 11:03:45, Information Setup 2014-03-21 11:03:45, Information Setup ===== Logging stopped: 2014-03-21 11:03:45+01:00 ===== I have tried this solution but no success: http://stackoverflow.com/questions/11205927/appfabric-installation-failed-because-installer-msi-returned-with-error-code-1 My system enviroment variable PSModulesPath has this value: C:\Windows\System32\WindowsPowerShell\v1.0\Modules I have also followed this link with no success: http://jefferytay.wordpress.com/2013/12/11/installing-appfabric-on-windows-server-2012/ Any help would be greatly appreciated !

    Read the article

  • IIS 7.5 + Windows Server 2008 R2 + ASP.NET 4.0 HTTP 500 Error?

    - by Dave
    Hi, I'm having an issue I cannot track down and I have looked through the forums and not found anything that sheds any light. I have a fresh install of a Server 2008 R2 Web that I am trying to load an application I created and tested on a Windows 7 machine running IIS 7.5 using ASP.NET 4.0. Everything works fine on the dev machine. But when I used the Web Deployment tool to move it to the server, I now get a HTTP 500 error without a lot of information: Module AspNetInitClrHostFailureModule Notification BeginRequest Handler StaticFile Error Code 0x80070002 Requested URL http://192.168.1.83:80/ Physical Path C:\JustStreamIt Logon Method Not yet determined Logon User Not yet determined Failed Request Tracing Log Directory C:\inetpub\logs\FailedReqLogFiles And in my trace file I get: view trace Warning -SET_RESPONSE_ERROR_DESCRIPTION ErrorDescription An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur. view trace Warning -MODULE_SET_RESPONSE_ERROR_STATUS ModuleName AspNetInitClrHostFailureModule Notification 1 HttpStatus 500 HttpReason Internal Server Error HttpSubStatus 0 ErrorCode 2147942402 ConfigExceptionInfo Notification BEGIN_REQUEST ErrorCode The system cannot find the file specified. (0x80070002) And I get the following in the Application Log: Log Name: Application Source: Microsoft-Windows-IIS-W3SVC-WP Date: 5/28/2010 2:08:10 PM Event ID: 2299 Task Category: None Level: Error Keywords: Classic User: N/A Computer: win-ltfkdo1dnfp Description: An application has reported as being unhealthy. The worker process will now request a recycle. Reason given: An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur. . The data is the error. Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Microsoft-Windows-IIS-W3SVC-WP" Guid="{670080D9-742A-4187-8D16-41143D1290BD}" EventSourceName="W3SVC-WP" /> <EventID Qualifiers="49152">2299</EventID> <Version>0</Version> <Level>2</Level> <Task>0</Task> <Opcode>0</Opcode> <Keywords>0x80000000000000</Keywords> <TimeCreated SystemTime="2010-05-28T21:08:10.000000000Z" /> <EventRecordID>1663</EventRecordID> <Correlation /> <Execution ProcessID="0" ThreadID="0" /> <Channel>Application</Channel> <Computer>win-ltfkdo1dnfp</Computer> <Security /> </System> <EventData> <Data Name="Reason">An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur. </Data> <Binary>02000780</Binary> </EventData> </Event> Anyone have a suggestion about where I should start looking?

    Read the article

  • Ubuntu: Failure to login with multiple video adapters

    - by tsilb
    Forgive my ignorance, for I am a complete Linux noob. I have a computer with three video cards and six monitors. It works great on Windows, and I am trying to get it to run Ubuntu as well. It loads fine when I have it configured to run on one adapter; it detects both screens and runs OK. But I want to turn the other four monitors on and run the whole thing as one extended desktop (one session, etc.). So I downloaded and installed the newest ATI driver for Linux, which seems to work, kinda. I ran this to set up the screens: aticonfig --adapter=all --initial -f. Now when I boot, Ubuntu seems to turn on all the screens (3 viewports, each with two cloned displays from what I can tell). When I enter my login info OR move the mouse off the main screen, the screens freeze and the keyboard and mouse become unresponsive. The aticonfig-generated xorg.conf is included below. I have tried the following:
    aticonfig -initial -f: works, but only detects the primary adapter and 2 screens.
    aticccle: tells me I have to reboot after enabling the other cards, then goes into the freezing state described above.
    aticonfig --adapter=all --initial -f: see above.
    Manually editing the xorg.conf file with my limited knowledge: was able to get two adapters running, but only the second adapter initialized while the primary stopped at the Ubuntu boot screen; I was unable to see the login prompt, and it froze after I logged in blindly (I was able to hear the login sound).
    Using the generic "radeon" driver instead of the ATI proprietary driver with the above init attempts.
    Toggling xinerama.
    Various combinations of the above.
    Hardware: Intel Core 2 Quad Q6600, 8GB DDR2, 3x ATI Radeon HD 4680, 5 monitors (21W, 21W, 22W Portrait, 22W Portrait, 19") and an HDTV (26"W, HDMI) in a horizontal arrangement.
    I know next to nothing about Linux/Ubuntu aside from basic filesystem navigation, editing text files, and accessing my local and networked Windows stores and shares. Basically this is the most advanced thing I've had to do, and I installed today. Please advise how to make this configuration work.
my xorg.conf: Section "ServerLayout" Identifier "Layout0" Screen 0 "aticonfig-Screen[0]-0" 0 0 Screen "aticonfig-Screen[1]-0" RightOf "aticonfig-Screen[0]-0" Screen "aticonfig-Screen[2]-0" RightOf "aticonfig-Screen[1]-0" Option "RenderAccel" "true" Option "AllowGLXWithComposite" "true" EndSection Section "Files" EndSection Section "Module" EndSection Section "ServerFlags" Option "Xinerama" "0" EndSection Section "Monitor" Identifier "aticonfig-Monitor[0]-0" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" EndSection Section "Monitor" Identifier "aticonfig-Monitor[1]-0" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" EndSection Section "Monitor" Identifier "aticonfig-Monitor[2]-0" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" EndSection Section "Device" Identifier "aticonfig-Device[0]-0" Driver "fglrx" BusID "PCI:1:0:0" EndSection Section "Device" Identifier "aticonfig-Device[1]-0" Driver "fglrx" BusID "PCI:3:0:0" EndSection Section "Device" Identifier "aticonfig-Device[2]-0" Driver "fglrx" BusID "PCI:4:0:0" EndSection Section "Screen" Identifier "aticonfig-Screen[0]-0" Device "aticonfig-Device[0]-0" Monitor "aticonfig-Monitor[0]-0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection Section "Screen" Identifier "aticonfig-Screen[1]-0" Device "aticonfig-Device[1]-0" Monitor "aticonfig-Monitor[1]-0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection Section "Screen" Identifier "aticonfig-Screen[2]-0" Device "aticonfig-Device[2]-0" Monitor "aticonfig-Monitor[2]-0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection
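    For one extended desktop across three fglrx adapters, the generated config above still has Xinerama turned off (Option "Xinerama" "0"), so X drives three separate screens instead of a single stitched desktop. As a sketch only, and assuming the proprietary driver stays in use, Xinerama can be switched on when regenerating the config; note that Xinerama typically disables desktop compositing effects:

        # Back up the working config first
        sudo cp /etc/X11/xorg.conf /etc/X11/xorg.conf.bak

        # Regenerate a layout for all adapters and enable Xinerama,
        # then restart X or reboot to pick it up
        sudo aticonfig --adapter=all --initial -f
        sudo aticonfig --xinerama=on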

    Read the article

  • Windows 7 Samba issue

    - by abduls85
    We have a strange Samba issue affecting only one user. Our Samba setup is as follows:
    Red Hat Enterprise Linux Server release 5.4 (Tikanga) - Samba server
    Samba version 3.0.33-3.14.el5 - Samba version
    Domain Controller WIN2008R2 Standard - Windows DC
    Windows 7 64 bit - Client PCs
    The user mentioned that he first faced this problem after he force-shut down his PC a few weeks ago. Normally, for all users, accessing \\sambaservername in Windows shows all the shares on the Samba server; but for this user, once he starts up his PC he cannot access \\sambaservername. The error message is: Windows cannot access \\sambaservername.
    Current workaround to solve the problem: try to access one share on \\sambaservername, for instance \\sambaservername\sharedfolder1. Even when doing so, it first prompts an error; the error message is: Logon failure: unknown user name or bad password. The user needs to enter his credentials again, and then he can access the share. Thereafter, he is able to access \\sambaservername without any issues, but once he reboots his computer the problem persists.
    Troubleshooting done so far:
    Ensured the following settings: Go to Control Panel → Administrative Tools → Local Security Policy, select Local Policies → Security Options, set "Network security: LAN Manager authentication level" to Send LM & NTLM responses, and under "Minimum session security for NTLM SSP" uncheck Require 128-bit encryption.
    Advised the user to reset his password and try again, but the problem still persists.
    Tried my account on the user's PC; there are no issues.
    Tried the user's account on several other Windows 7 PCs, including mine, but the problem still persists.
    Windows XP does not have this problem.
    Ensured that there are no stored credentials on the Windows 7 PC. Checked Credential Manager in Control Panel as well as typing this command: rundll32.exe keymgr.dll, KRShowKeyMgr
    Restarted the winbindd daemon on the Samba server, but to no avail.
    I suspect this is due to some caching issue, but I am not sure where the issue is. Whenever the user has an error accessing \\sambaservername, the following errors are logged on the Samba server:
    [2012/10/10 17:10:26, 1] smbd/sesssetup.c:reply_spnego_kerberos(316)
      Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE!
    [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316)
      Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE!
    [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316)
      Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE!
    [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316)
      Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE!
    [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316)
      Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE!
    [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316)
      Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE!
    [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316)
      Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE!
    [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316)
      Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE!
    [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316)
      Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE!
    [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316)
      Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE!
    But after the workaround, there are no more errors. After reading the articles listed below, I suspect some amendments need to be made to the /var/samba/cache directory:
    http://www.linuxquestions.org/questions/linux-server-73/getent-passwd-dont-show-ad-groups-and-users-745829/
    http://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/tdb.html
    http://lists.samba.org/archive/samba/2010-May/155521.html
    http://lists.samba.org/archive/samba/2011-March/161912.html
    http://lzeit.blogspot.sg/2009/10/samba-shares-inaccessible-after-power.html
    There are several users using the Samba server, and I would like to solve this problem without any impact on them. I saw the following article: http://www.samba.org/samba/docs/man/manpages-3/smb.conf.5.html#WINBINDCACHETIME
    "winbind offline logon (G) This parameter is designed to control whether Winbind should allow to login with the pam_winbind module using Cached Credentials. If enabled, winbindd will store user credentials from successful logins encrypted in a local cache. Default: winbind offline logon = false Example: winbind offline logon = true"
    Any idea on how to delete the entry for one user in the local cache?
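    For reference, winbind keeps its cached lookups in tdb files rather than in a directory you edit by hand, so the usual low-impact reset is to flush or rebuild those caches rather than amend files in place. A rough sketch, assuming Samba 3.0.x on RHEL 5 with a separate winbind init script and a short maintenance window; the tdb path shown is a common default and may be /var/cache/samba on some builds, and targeting a single user's entry inside the tdb is not really supported:

        # Flush Samba's generic cache (gencache) without touching files on disk
        net cache flush

        # If the stale entry survives that, rebuild winbind's own cache:
        # stop winbind, move the cache tdb aside, and start it again
        service winbind stop
        mv /var/lib/samba/winbindd_cache.tdb /var/lib/samba/winbindd_cache.tdb.bak
        service winbind start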

    Read the article

  • Does likewise-open > version 5.4 contain CIFS support?

    - by Ben Andken
    I'm trying to get the CIFS server working in likewise-open. I've found this set of instructions (http://www.likewise.com/resources/documentation_library/manuals/cifs/likewise-cifs-smb-file-server-guide.html#id2765992), and everything seems to work until I try to connect:
    1.6. Build and Configure a Standalone Likewise-CIFS Server
    This section demonstrates how to build and configure a standalone instance of Likewise-CIFS from the command line. The following procedure assumes that you want to set up Likewise-CIFS on a Linux server to share files with Windows computers in a network without Active Directory. This procedure also assumes you know how to build Linux applications from their source code and then install them.
    Download Likewise-CIFS from its open source git location:
    $ git clone git://git.likewiseopen.org/
    Download, build, and install the following tools. The tools listed are known to work, but earlier or later versions might work as well. Also, instead of downloading the tools, you might be able to install them on your platform with apt-get or some other means.
    http://ftp.gnu.org/gnu/autoconf/autoconf-2.65.tar.gz
    http://ftp.gnu.org/gnu/automake/automake-1.9.6.tar.gz
    http://ftp.gnu.org/gnu/libtool/libtool-2.2.6a.tar.gz
    http://pkgconfig.freedesktop.org/releases/pkg-config-0.23.tar.gz
    gcc --version 3.x or greater
    Build Likewise-CIFS:
    $ cd likewise-open
    $ build/mkcomp --debug all
    Install Likewise-CIFS:
    $ sudo su
    $ cd staging/install-root
    $ tar cf - . | (cd / && tar xvf -)
    Make sure Samba is not running:
    $ /etc/init.d/smb stop
    Make sure SELinux is either disabled or set to permissive.
    Make sure the ports required by Likewise are open. For a list of ports that Likewise uses, see the Likewise Open Installation and Administration Guide.
    Configure Likewise Open:
    $ /etc/init.d/lwsmd start
    $ for i in /etc/likewise/*.reg; do /opt/likewise/bin/lwregshell upgrade $i; done
    $ /etc/init.d/lwsmd stop
    $ /etc/init.d/lwsmd start
    $ /opt/likewise/bin/lwsm start srvsvc
    $ /opt/likewise/bin/domainjoin-cli configure --enable nsswitch
    Add a user account to the local Likewise provider database. In the following example, substitute the account name that you want for newuser.
    $ /opt/likewise/bin/lw-add-user --home /home/newuser --shell /bin/bash newuser
    Successfully added user newuser
    Enable the user and set the password:
    $ /opt/likewise/bin/lw-mod-user --enable-user --set-password newuser
    New Password: **********
    Successfully modified user newuser
    Look up the new user's identity as follows. Substitute the value from the command hostname -s for the hostname. Keep in mind that Likewise truncates a hostname longer than 15 characters to the first 15 characters of the string.
    % id hostname\\newuser
    uid=2000(HOSTNAME\newuser) gid=1800(HOSTNAME\Likewise Users) groups=1800(HOSTNAME\Likewise Users) context=system_u:system_r:unconfined_t:s0
    Make a CIFS directory for the user:
    mkdir /lwcifs/newuser
    chown 2000:1800 /lwcifs/newuser
    From a Windows computer, map the Likewise-CIFS drive share:
    Computer->Map Network Drive...
    Folder: \\IP_hostname\c$
    Click "Finish"
    Username: hostname\newuser
    Password: user_password
    The last step fails when I try to connect. I've tried with Windows XP Pro and Windows 7 Pro. The rest of the directions only appear to work for version 5.4 (the one that shipped with 10.04). For 12.04, version 6.1 is the only one available, and it doesn't appear to have the srvsvc module mentioned in these instructions. Is CIFS support dropped in the 6.1 version of likewise-open?
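    Before concluding the module was dropped, it may be worth asking the 6.1 service manager what it actually shipped with. A tentative check, assuming the same /opt/likewise prefix the guide uses; package builds on 12.04 may install elsewhere, and subcommand names can differ between releases:

        # Ask the Likewise service manager which services it knows about;
        # a build with the CIFS file server should list an srvsvc entry
        /opt/likewise/bin/lwsm list | grep -i srv

        # If srvsvc is listed but stopped, try starting it as in the guide
        /opt/likewise/bin/lwsm start srvsvc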

    Read the article

  • Apache logs 200 instead of 404

    - by elle
    I've been getting the following in my apache access log: "GET /work//?module=www&section=working=../../../../../../../../../../../../../../../../../../../../../../../..//proc/self/environ%0000 HTTP/1.1" 200 5187 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12\",\"Mozilla/5.0 (Windows; U; Windows NT 5.1; pl-PL; rv:1.8.1.24pre) Gecko/20100228 K-Meleon/1.5.4\",\"Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/540.0 (KHTML,like Gecko) Chrome/9.1.0.0 Safari/540.0\",\"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Comodo_Dragon/4.1.1.11 Chrome/4.1.249.1042 Safari/532.5\",\"Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.9.0.16) Gecko/2009122206 Firefox/3.0.16 Flock/2.5.6\",\"Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/533.1 (KHTML, like Gecko) Maxthon/3.0.8.2 Safari/533.1\",\"Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.8pre) Gecko/20070928 Firefox/2.0.0.7 Navigator/9.0RC1\",\"Opera/9.99 (Windows NT 5.1; U; pl) Presto/9.9.9\",\"Mozilla/5.0 (Windows; U; Windows NT 6.1; zh-HK) AppleWebKit/533.18.1 (KHTML, like Gecko) Version/5.0.2 Safari/533.18.5\",\"Seamonkey-1.1.13-1(X11; U; GNU Fedora fc 10) Gecko/20081112\",\"Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; Zune 4.0; Tablet PC 2.0; InfoPath.3; .NET4.0C; .NET4.0E)\",\"Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; MS-RTC LM 8; .NET4.0C; .NET4.0E; InfoPath.3)" If I try the URL, I get a 404 instead of 200 which the above request received. Is there a way I can confirm that the 200 was real and not spoofed? Where is the long info on the client coming from?
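    That request is a typical file-inclusion probe aimed at /proc/self/environ, and the long client string is simply whatever the scanner stuffed into its User-Agent header, since Apache logs that header verbatim. The 200 with a 5187-byte body may just be the application answering the mangled query string rather than the file being disclosed. One way to check is to replay the request and inspect what actually comes back; a sketch with a placeholder hostname and an abbreviated traversal path, so substitute your own site and the exact path from the log:

        # Replay the request and capture the status code, body size, and body
        curl -s -o /tmp/probe.body -w '%{http_code} %{size_download}\n' \
          'http://www.example.com/work//?module=www&section=working=../../../../proc/self/environ%0000'

        # If the saved body is just your normal page (or your app's error page)
        # rather than environment variables, the 200 came from the application
        grep -c 'DOCUMENT_ROOT\|HTTP_USER_AGENT' /tmp/probe.body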

    Read the article
