Search Results

Search found 10023 results on 401 pages for 'manage processes'.


  • PHP: Can pcntl_alarm() and socket_select() peacefully exist in the same thread?

    - by DWilliams
    I have a PHP CLI script, mostly written, that functions as a chat server for chat clients to connect to (don't ask me why I'm doing it in PHP, that's another story, haha). My script utilizes the socket_select() function to block execution until something happens on a socket, at which point it wakes up, processes the event, and waits for the next one. Now, there are some routine tasks that I need performed every 30 seconds or so (check whether temp-banned users should be unbanned, save user databases, other assorted things). From what I can tell, PHP doesn't have very good multi-threading support at all. My first thought was to compare a timestamp every time the socket generates an event and gets the program flowing again, but this is very inconsistent, since the server could well sit idle for hours and never run any of my cleanup routines.

    I came across the PHP pcntl extensions, which let me assign a time interval for SIGALRM to be sent and a function to be executed every time it fires. This seems like the ideal solution to my problem; however, pcntl_alarm() and socket_select() clash with each other pretty badly. Every time SIGALRM is triggered, all sorts of crazy things happen to my socket control code. My program is fairly lengthy, so I can't post it all here, but that shouldn't matter, since I don't believe I'm doing anything wrong code-wise.

    My question is: is there any way for a SIGALRM to be handled in the same thread as a waiting socket_select()? If so, how? If not, what are my alternatives?

    Here's some output from my program. My alarm function simply outputs "Tick!" whenever it's called, to make it easy to tell when things happen. This is the output (including errors) after allowing it to tick 4 times (there were no actual attempts at connecting to the server, despite what it says):

    ```
    [05-28-10 @ 20:01:05] Chat server started on 192.168.1.28 port 4050
    [05-28-10 @ 20:01:05] Loaded 2 users from file
    PHP Notice:  Undefined offset: 0 in /home/danny/projects/PHPChatServ/ChatServ.php on line 112
    PHP Warning:  socket_select(): unable to select [4]: Interrupted system call in /home/danny/projects/PHPChatServ/ChatServ.php on line 116
    [05-28-10 @ 20:01:15] Tick!
    PHP Warning:  socket_accept(): unable to accept incoming connection [4]: Interrupted system call in /home/danny/projects/PHPChatServ/ChatServ.php on line 126
    [05-28-10 @ 20:01:25] Tick!
    PHP Warning:  socket_getpeername() expects parameter 1 to be resource, boolean given in /home/danny/projects/PHPChatServ/ChatServ.php on line 129
    [05-28-10 @ 20:01:25] Accepting socket connection from
    PHP Notice:  Undefined offset: 1 in /home/danny/projects/PHPChatServ/ChatServ.php on line 112
    PHP Warning:  socket_select(): unable to select [4]: Interrupted system call in /home/danny/projects/PHPChatServ/ChatServ.php on line 116
    [05-28-10 @ 20:01:35] Tick!
    PHP Warning:  socket_accept(): unable to accept incoming connection [4]: Interrupted system call in /home/danny/projects/PHPChatServ/ChatServ.php on line 126
    [05-28-10 @ 20:01:45] Tick!
    PHP Warning:  socket_getpeername() expects parameter 1 to be resource, boolean given in /home/danny/projects/PHPChatServ/ChatServ.php on line 129
    [05-28-10 @ 20:01:45] Accepting socket connection from
    PHP Notice:  Undefined offset: 2 in /home/danny/projects/PHPChatServ/ChatServ.php on line 112
    ```
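    Those "Interrupted system call" warnings are the blocking calls returning EINTR when SIGALRM fires, so a minimal sketch of one workaround is to treat EINTR as a retry and drive the periodic work off socket_select()'s timeout instead of an alarm ($allSockets and doPeriodicTasks() are assumptions standing in for the real socket list and cleanup routines):

    ```php
    <?php
    // Sketch: restart socket_select() when a signal interrupts it (EINTR), and
    // run the 30-second housekeeping from select()'s timeout instead of SIGALRM.
    $lastTick = time();

    while (true) {
        $read  = $allSockets;          // assumption: your list of server + client sockets
        $write = $except = null;

        // Wake up at least once per second so housekeeping can run even when idle.
        $n = @socket_select($read, $write, $except, 1);

        if ($n === false) {
            if (socket_last_error() === SOCKET_EINTR) {
                socket_clear_error();
                continue;              // interrupted by a signal: just retry
            }
            break;                     // a real error
        }

        if (time() - $lastTick >= 30) {
            doPeriodicTasks();         // assumption: unban checks, DB saves, etc.
            $lastTick = time();
        }

        if ($n > 0) {
            // ... handle the ready sockets in $read as before ...
        }
    }
    ```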

    Read the article

  • JSF Navigation issue Facelets and Beans

    - by ortho
    Hi, I have an issue with navigation in my simple JSF system. I have MainBean, which has two methods: public String register() and public String login(). I have played with faces-config.xml for several hours, and I feel like I'm missing something very important, because I think I have tried all the simple solutions so far :). I have added the MySQL connector (JDBC) to Tomcat's lib folder, and I am able to register users into the MySQL database. It even allows my users to log in to the page. The only problem is that I can't use navigation from any page other than login.xhtml. It seems navigation is only active on that one. I tried to use * but no joy. I am sure that there is a simple fix for this and someone will come up with the correct solution soon. Let's skip the MySQL part and try to fix the navigation issue, please.

    faces-config.xml:

    ```xml
    <faces-config xmlns="http://java.sun.com/xml/ns/javaee"
                  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                      http://java.sun.com/xml/ns/javaee/web-facesconfig_1_2.xsd"
                  version="1.2">

        <application>
            <view-handler>com.sun.facelets.FaceletViewHandler</view-handler>
        </application>

        <managed-bean>
            <managed-bean-name>mainBean</managed-bean-name>
            <managed-bean-class>dk.itu.beans.MainBean</managed-bean-class>
            <managed-bean-scope>session</managed-bean-scope>
        </managed-bean>

        <managed-bean>
            <managed-bean-name>registerBean</managed-bean-name>
            <managed-bean-class>dk.itu.beans.RegisterBean</managed-bean-class>
            <managed-bean-scope>session</managed-bean-scope>
        </managed-bean>

        <navigation-rule>
            <from-view-id>/login.xhtml</from-view-id>
            <navigation-case>
                <from-outcome>success</from-outcome>
                <to-view-id>/welcome.xhtml</to-view-id>
            </navigation-case>
            <navigation-case>
                <from-outcome>failure</from-outcome>
                <to-view-id>/login_failed.xhtml</to-view-id>
            </navigation-case>
            <navigation-case>
                <from-outcome>sign</from-outcome>
                <to-view-id>/register.xhtml</to-view-id>
            </navigation-case>
        </navigation-rule>

        <navigation-rule>
            <from-view-id>/login_failed.xhtml</from-view-id>
            <navigation-case>
                <from-outcome>back</from-outcome>
                <to-view-id>/login.xhtml</to-view-id>
            </navigation-case>
        </navigation-rule>

    </faces-config>
    ```

    Here the last navigation rule, the one from login_failed.xhtml, does not work at all. login.xhtml is the main (starting) view. In login_failed.xhtml I tried many options: I used action="#{mainBean.register}", a method that returns the string "sign", and none of these worked. There is one more page (not specified in faces-config.xml, because that did not work either), but navigating from login.xhtml by button works fine. I tried to manage navigation from login_failed.xhtml first; then I will apply the same rule to registration, to come back to the login page when a customer registers his nickname. In register.xhtml, mainBean.register now calls the database and returns a string, but obviously it doesn't navigate to any view (though it does add an entry to the database). I believe it is a simple fix for most experienced web developers, and any help will be greatly appreciated. I use Eclipse, Tomcat 6 and Windows Vista, if that helps :) Thank you in advance. Kindest regards.
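    For reference, a minimal sketch of what the triggering component on login_failed.xhtml has to look like for the "back" rule above to fire: a command component inside an h:form whose action outcome exactly matches the from-outcome string (navigation cases are only consulted on a postback from the view named in from-view-id):

    ```xml
    <h:form>
        <!-- the literal outcome "back" must match <from-outcome>back</from-outcome> -->
        <h:commandButton value="Back" action="back"/>
    </h:form>
    ```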

    Read the article

  • Insert an event on Google Resource Calendar using the latest google-php-client-api

    - by user3781583
    - Created a project and enabled the Calendar API
    - Created an OAuth 2.0 service account
    - Downloaded the key file (.p12) and saved it locally (not using a server with a public IP address)
    - Shared my resource calendar with the email address created in the service account (with Manage Sharing rights)
    - Entered the client ID for the service account and authorized http://www.googleapis.com/auth/calendar

    Environment: LAMP setup on localhost.

    ```php
    <?php
    require_once 'google-api-php-client/src/Google/Client.php';
    require_once 'google-api-php-client/src/Google/Service/Calendar.php';
    session_start();

    const CLIENT_ID = 'XXXXXX.apps.googleusercontent.com'; // service account client ID
    const SERVICE_ACCOUNT_NAME = '[email protected]';
    const KEY_FILE = 'google-api-php-client/src/Google/Reservation Service-XXXXXXX.p12';

    $client = new Google_Client();
    $client->setApplicationName("Appointment");

    if (isset($_SESSION['token'])) {
        $client->setAccessToken($_SESSION['token']);
    }

    $key = file_get_contents(KEY_FILE);
    $client->setClientId(CLIENT_ID);
    $client->setAssertionCredentials(new Google_Auth_AssertionCredentials(
        SERVICE_ACCOUNT_NAME,
        array('https://www.googleapis.com/auth/calendar'),
        $key));

    // Save the token in the session
    if ($client->getAccessToken()) {
        $_SESSION['token'] = $client->getAccessToken();
    }

    $cal = new Google_Service_Calendar($client);

    $event = new Google_Service_Calendar_Event();
    $event->setSummary('This is a Test event');
    $event->setLocation('Test Location');

    $start = new Google_Service_Calendar_EventDateTime();
    $start->setDateTime('2014-08-20T10:30:00.000-05:00');
    $event->setStart($start);

    $end = new Google_Service_Calendar_EventDateTime();
    $end->setDateTime('2014-08-20T12:30:00.000-05:00');
    $event->setEnd($end);

    $cal->events->insert('[email protected]', $event);
    ?>
    ```

    I'm getting the following error:

    ```
    Fatal error: Uncaught exception 'Google_Service_Exception' with message 'Error calling POST https://www.googleapis.com/calendar/v3/calendars/XXXXXXX%40resource.calendar.google.com/events: (403) Forbidden' in /google-api-php-client/src/Google/Http/REST.php:79
    Stack trace:
    #0 /google-api-php-client/src/Google/Http/REST.php(44): Google_Http_REST::decodeHttpResponse(Object(Google_Http_Request))
    #1 /google-api-php-client/src/Google/Client.php(503): Google_Http_REST::execute(Object(Google_Client), Object(Google_Http_Request))
    #2 /google-api-php-client/src/Google/Service/Resource.php(195): Google_Client->execute(Object(Google_Http_Request))
    #3 /google-api-php-client/src/Google/Service/Calendar.php(1459): Google_Service_Resource->call('insert', Array, 'Google_Service_...')
    #4 /calendar.php(53): Google_S in /google-api-php-client/src/Google/Http/REST.php on line 79
    ```

    A few people have had the same issue; I am sharing the calendar with the service account. Any help will be appreciated.
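    A sketch of one thing worth checking: with the v1 PHP client, a service account can also act on behalf of a regular user through the credential's sub field, which often clears a 403 when the calendar ACL applies to that user rather than to the service account itself. The impersonated address below is a placeholder, and domain-wide delegation must be enabled for this to work:

    ```php
    $creds = new Google_Auth_AssertionCredentials(
        SERVICE_ACCOUNT_NAME,
        array('https://www.googleapis.com/auth/calendar'),
        $key
    );
    $creds->sub = '[email protected]'; // hypothetical user to impersonate
    $client->setAssertionCredentials($creds);
    ```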

    Read the article

  • Asp.net MVC and MOSS 2010 integration

    - by Robert Koritnik
    Just a sidenote: I'm not sure whether I should post this to Server Fault as well, because some MOSS admin may have info for me too.

    A bit of explanation first (without ASP.NET MVC): is it possible to integrate the two? Is it possible to write an application that would share at least credential information with MOSS? I have to write a MOSS application involving these technologies:

    - MOSS 2010
    - Personal client certificate authentication (most probably on USB keys)
    - Active Directory Federation Services
    - A separate SQL DB that would hold application-specific data (separate as in not part of the MOSS DB)

    How should it work?

    - Users should authenticate into MOSS 2010 using personal certificates.
    - A certain part of MOSS would be related to my custom application.
    - This application should only authorize certain users via AD FS; I guess these users should have a certain security claim attached to them.
    - This application should manage its users with additional (app-specific) security claims related to this application (as additional application-level authorization rights for individual application parts); see the sketch after this list.
    - This application should use a custom SQL 2008 DB heavily, with its own data.
    - This application should be able to integrate with external systems as well (Exchange, for instance, to inject calendar entries; ERP systems; etc.).
    - This application should be able to export its data (from its DB) to files. I don't know if it's possible, but it would be nice if the app could add these files to MOSS and attach authorization info to them, so only users with sufficient rights would be able to view/open these files.

    Why ASP.NET MVC, then? I'm very well versed in ASP.NET MVC (including the latest version), and I haven't done anything on SharePoint since version 2003 (which does me no good and doesn't prepare me for the latest version in any way, shape or form). This project will most probably be a death-march project, so I would rather write my application as a UI-rich ASP.NET MVC application and somehow integrate it into MOSS. But not only via a link: I would like to at least share credentials, so users wouldn't need to re-login when accessing my app. Using ASP.NET MVC, I would at least have a chance of finishing on time, or at least make it less of a death march. Is this at all possible?

    Questions:

    - Is it possible to integrate ASP.NET MVC into MOSS as described above?
    - If integration is not possible, would it be possible to create a completely MOSS-based application that works as described?
    - Which parts of MOSS 2010 should I use to accomplish what I need?
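    For the app-specific security claims mentioned above, a sketch of what an application-level authorization check could look like with WIF (Microsoft.IdentityModel), the claims stack that MOSS 2010 and AD FS build on; the claim type URI is an assumption:

    ```csharp
    using System.Linq;
    using System.Threading;
    using Microsoft.IdentityModel.Claims;

    public static class AppAuthorization
    {
        // True if the current user carries an app-specific claim for the module.
        public static bool CanAccessModule(string module)
        {
            var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
            if (identity == null)
                return false;

            return identity.Claims.Any(c =>
                c.ClaimType == "http://schemas.example.com/claims/app-module" // hypothetical
                && c.Value == module);
        }
    }
    ```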

    Read the article

  • Wrapping a pure virtual method with multiple arguments with Boost.Python

    - by fallino
    Hello, I followed the "official" tutorial and others, but still can't manage to expose this pure virtual method (getPeptide):

    ms_mascotresults.hpp:

    ```cpp
    class ms_mascotresults
    {
    public:
        ms_mascotresults(ms_mascotresfile &resfile,
                         const unsigned int flags,
                         double minProbability,
                         int maxHitsToReport,
                         const char * unigeneIndexFile,
                         const char * singleHit = 0);
        ...
        virtual ms_peptide getPeptide(const int q, const int p) const = 0;
    };
    ```

    ms_mascotresults.cpp:

    ```cpp
    #include <boost/python.hpp>
    using namespace boost::python;

    #include "msparser.hpp" // which includes "ms_mascotresults.hpp"
    using namespace matrix_science;

    #include <iostream>
    #include <sstream>

    struct ms_mascotresults_wrapper : ms_mascotresults, wrapper<ms_mascotresults>
    {
        ms_peptide getPeptide(const int q, const int p)
        {
            this->get_override("getPeptide")(q);
            this->get_override("getPeptide")(p);
        }
    };

    BOOST_PYTHON_MODULE(ms_mascotresults)
    {
        class_<ms_mascotresults_wrapper, boost::noncopyable>("ms_mascotresults")
            .def("getPeptide", pure_virtual(&ms_mascotresults::getPeptide))
        ;
    }
    ```

    Here are bjam's errors:

    ```
    /usr/local/boost_1_42_0/boost/python/object/value_holder.hpp:66: error: cannot declare field ‘boost::python::objects::value_holder<ms_mascotresults_wrapper>::m_held’ to be of abstract type ‘ms_mascotresults_wrapper’
    ms_mascotresults.cpp:12: note: because the following virtual functions are pure within ‘ms_mascotresults_wrapper’:
    ...
    include/ms_mascotresults.hpp:334: note: virtual matrix_science::ms_peptide matrix_science::ms_mascotresults::getPeptide(int, int) const
    ms_mascotresults.cpp: In constructor ‘ms_mascotresults_wrapper::ms_mascotresults_wrapper()’:
    ms_mascotresults.cpp:12: error: no matching function for call to ‘matrix_science::ms_mascotresults::ms_mascotresults()’
    include/ms_mascotresults.hpp:284: note: candidates are: matrix_science::ms_mascotresults::ms_mascotresults(matrix_science::ms_mascotresfile&, unsigned int, double, int, const char*, const char*)
    include/ms_mascotresults.hpp:109: note: matrix_science::ms_mascotresults::ms_mascotresults(const matrix_science::ms_mascotresults&)
    ...
    /usr/local/boost_1_42_0/boost/python/object/value_holder.hpp: In constructor ‘boost::python::objects::value_holder<Value>::value_holder(PyObject*) [with Value = ms_mascotresults_wrapper]’:
    /usr/local/boost_1_42_0/boost/python/object/value_holder.hpp:137: note: synthesized method ‘ms_mascotresults_wrapper::ms_mascotresults_wrapper()’ first required here
    /usr/local/boost_1_42_0/boost/python/object/value_holder.hpp:137: error: cannot allocate an object of abstract type ‘ms_mascotresults_wrapper’
    ms_mascotresults.cpp:12: note: since type ‘ms_mascotresults_wrapper’ has pure virtual functions
    ```

    So I tried to change the constructor's signature:

    ```cpp
    BOOST_PYTHON_MODULE(ms_mascotresults)
    {
        //class_<ms_mascotresults_wrapper, boost::noncopyable>("ms_mascotresults")
        class_<ms_mascotresults_wrapper, boost::noncopyable>("ms_mascotresults",
            init<ms_mascotresfile &, const unsigned int, double, int, const char *, const char *>())
            .def("getPeptide", pure_virtual(&ms_mascotresults::getPeptide))
    ```

    Giving these errors:

    ```
    /usr/local/boost_1_42_0/boost/python/object/value_holder.hpp:66: error: cannot declare field ‘boost::python::objects::value_holder<ms_mascotresults_wrapper>::m_held’ to be of abstract type ‘ms_mascotresults_wrapper’
    ms_mascotresults.cpp:12: note: because the following virtual functions are pure within ‘ms_mascotresults_wrapper’:
    include/ms_mascotresults.hpp:334: note: virtual matrix_science::ms_peptide matrix_science::ms_mascotresults::getPeptide(int, int) const
    ...
    ms_mascotresults.cpp:24: instantiated from here
    /usr/local/boost_1_42_0/boost/python/object/value_holder.hpp:137: error: no matching function for call to ‘ms_mascotresults_wrapper::ms_mascotresults_wrapper(matrix_science::ms_mascotresfile&, const unsigned int&, const double&, const int&, const char* const&, const char* const&)’
    ms_mascotresults.cpp:12: note: candidates are: ms_mascotresults_wrapper::ms_mascotresults_wrapper(const ms_mascotresults_wrapper&)
    ms_mascotresults.cpp:12: note: ms_mascotresults_wrapper::ms_mascotresults_wrapper()
    ```

    If I comment out the virtual function getPeptide in the .hpp, it builds perfectly with this constructor:

    ```cpp
    class_<ms_mascotresults>("ms_mascotresults",
        init<ms_mascotresfile &, const unsigned int, double, int, const char *, const char *>())
    ```

    So I'm a bit lost...
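    For comparison, a sketch of a wrapper whose override matches the pure virtual's exact signature (const, both ints forwarded in a single call, a value returned) and whose constructor forwards the base-class arguments; the mismatch between the two signatures is what keeps the wrapper abstract in the errors above:

    ```cpp
    struct ms_mascotresults_wrapper : ms_mascotresults, wrapper<ms_mascotresults>
    {
        // Forward the base constructor so init<...> has something to call.
        ms_mascotresults_wrapper(ms_mascotresfile &resfile,
                                 const unsigned int flags,
                                 double minProbability,
                                 int maxHitsToReport,
                                 const char *unigeneIndexFile,
                                 const char *singleHit = 0)
            : ms_mascotresults(resfile, flags, minProbability,
                               maxHitsToReport, unigeneIndexFile, singleHit)
        {
        }

        // Must be const and return ms_peptide to actually override the pure virtual.
        ms_peptide getPeptide(const int q, const int p) const
        {
            return this->get_override("getPeptide")(q, p);
        }
    };
    ```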

    Read the article

  • Does oneway declaration in Android .aidl guarantee that method will be called in a separate thread?

    - by Dan Menes
    I am designing a framework for a client/server application for Android phones. I am fairly new to both Java and Android (but not new to programming in general, or threaded programming in particular). Sometimes my server and client will be in the same process, and sometimes they will be in different processes, depending on the exact use case. The client and server interfaces look something like the following:

    IServer.aidl:

    ```java
    package com.my.application;

    interface IServer {
        /**
         * Register client callback object
         */
        void registerCallback( in IClient callbackObject );

        /**
         * Do something and report back
         */
        void doSomething( in String what );
        ...
    }
    ```

    IClient.aidl:

    ```java
    package com.my.application;

    oneway interface IClient {
        /**
         * Receive an answer
         */
        void reportBack( in String answer );
        ...
    }
    ```

    Now here is where it gets interesting. I can foresee use cases where the client calls IServer.doSomething(), which in turn calls IClient.reportBack(), and on the basis of what is reported back, IClient.reportBack() needs to issue another call to IServer.doSomething(). The issue here is that IServer.doSomething() will not, in general, be reentrant. That's OK, as long as IClient.reportBack() is always invoked in a new thread. In that case, I can make sure that the implementation of IServer.doSomething() is synchronized appropriately, so that the call from the new thread blocks until the first call returns.

    If everything works the way I think it does, then by declaring the IClient interface as oneway, I guarantee this to be the case. At least, I can't think of any way that the call from IServer.doSomething() to IClient.reportBack() can return immediately (which is what oneway is supposed to ensure), yet IClient.reportBack() still be able to reinvoke IServer.doSomething() recursively on the same thread. Either a new thread in IServer must be started, or else the old IServer thread can be reused for the inner call to IServer.doSomething(), but only after the outer call to IServer.doSomething() has returned.

    So my question is: does everything work the way I think it does? The Android documentation hardly mentions oneway interfaces.
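    For what it's worth, a sketch of how the server side could make the non-reentrancy explicit instead of relying on oneway dispatch details: incoming binder calls arrive on a pool of binder threads, so the guard has to be a lock, not an assumption about which thread runs the call.

    ```java
    // Inside the service hosting IServer; the lock serializes doSomething()
    // no matter which binder thread delivers the call.
    private final Object lock = new Object();

    private final IServer.Stub binder = new IServer.Stub() {
        @Override
        public void registerCallback(IClient callbackObject) {
            // ...
        }

        @Override
        public void doSomething(String what) {
            synchronized (lock) {
                // non-reentrant work here; a nested call arriving on another
                // binder thread blocks until the outer call finishes
            }
        }
    };
    ```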

    Read the article

  • uninitialized constant Active Scaffold rails 2.3.5

    - by Kiva
    Hi guys, I updated my Rails application from 2.0.2 to 2.3.5. I use ActiveScaffold for the administration part. I changed nothing in my code, but a problem arrived with the update. I have a controller 'admin/user_controller' to manage users. Here is the code of the controller:

    ```ruby
    class Admin::UserController < ApplicationController
      layout 'admin'

      active_scaffold :user do |config|
        config.columns.exclude :content, :historique_content, :user_has_objet, :user_has_arme,
                               :user_has_entrainement, :user_has_mission, :mp, :pvp, :user_salt,
                               :tchat, :notoriete_by_pvp, :invitation
        config.list.columns = [:user_login, :user_niveau, :user_mail, :user_bloc, :user_valide, :group_id]
        #:user_description, :race, :group, :user_lastvisited, :user_nextaction, :user_combats_gagner, :user_combats_perdu, :user_combats_nul, :user_password, :user_salt, :user_combats, :user_experience, :user_mana, :user_vie
        config.create.link.page = true
        config.update.link.page = true
        config.create.columns.add :password, :password_confirmation
        config.update.columns.add :password, :password_confirmation
        config.create.columns.exclude :user_password, :user_salt
        config.update.columns.exclude :user_password, :user_salt
        config.list.sorting = {:user_login => 'ASC'}
        config.subform.columns = []
      end
    end
    ```

    This code hasn't changed with the update, but when I go to this page, I get this error:

    ```
    uninitialized constant Users

    /Users/Kiva/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:443:in `load_missing_constant'
    /Users/Kiva/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:80:in `const_missing'
    /Users/Kiva/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:92:in `const_missing'
    /Users/Kiva/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/inflector.rb:361:in `constantize'
    /Users/Kiva/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/inflector.rb:360:in `each'
    /Users/Kiva/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/inflector.rb:360:in `constantize'
    /Users/Kiva/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/core_ext/string/inflections.rb:162:in `constantize'
    /Users/Kiva/Documents/Projet-rpg/jeu/vendor/plugins/active_scaffold/lib/extensions/reverse_associations.rb:28:in `reverse_matches_for'
    /Users/Kiva/Documents/Projet-rpg/jeu/vendor/plugins/active_scaffold/lib/extensions/reverse_associations.rb:24:in `each'
    /Users/Kiva/Documents/Projet-rpg/jeu/vendor/plugins/active_scaffold/lib/extensions/reverse_associations.rb:24:in `reverse_matches_for'
    /Users/Kiva/Documents/Projet-rpg/jeu/vendor/plugins/active_scaffold/lib/extensions/reverse_associations.rb:11:in `reverse'
    /Users/Kiva/Documents/Projet-rpg/jeu/vendor/plugins/active_scaffold/lib/active_scaffold/data_structures/column.rb:117:in `autolink?'
    /Users/Kiva/Documents/Projet-rpg/jeu/vendor/plugins/active_scaffold/lib/active_scaffold.rb:107:in `links_for_associations'
    /Users/Kiva/Documents/Projet-rpg/jeu/vendor/plugins/active_scaffold/lib/active_scaffold/data_structures/columns.rb:62:in `each'
    /Users/Kiva/Documents/Projet-rpg/jeu/vendor/plugins/active_scaffold/lib/active_scaffold/data_structures/columns.rb:62:in `each'
    /Users/Kiva/Documents/Projet-rpg/jeu/vendor/plugins/active_scaffold/lib/active_scaffold.rb:106:in `links_for_associations'
    /Users/Kiva/Documents/Projet-rpg/jeu/vendor/plugins/active_scaffold/lib/active_scaffold.rb:59:in `active_scaffold'
    /Users/Kiva/Documents/Projet-rpg/jeu/app/controllers/admin/user_controller.rb:11
    ```

    I have been searching for 2 days but I can't find the problem. Can you help me, please?

    Read the article

  • Prism Commands - binding error when binding to list element ?

    - by Maciek
    I've got an ItemsControl (to be replaced by a ListBox) whose ItemsSource is bound to an ObservableCollection<User> located in the view model. The view model contains some DelegateCommand<T> delegates for handling commands (for instance UpdateUserCommand and RemoveUserCommand). All works fine if the buttons linked to those commands are placed outside of the DataTemplate of the control that presents the items.

    ```xml
    <ItemsControl ItemsSource="{Binding Users, Mode=TwoWay}" HorizontalContentAlignment="Stretch">
        <ItemsControl.ItemTemplate>
            <DataTemplate>
                <Grid>
                    <Grid.ColumnDefinitions>
                        <ColumnDefinition Width="0.2*"/>
                        <ColumnDefinition Width="0.2*"/>
                        <ColumnDefinition Width="0.2*"/>
                        <ColumnDefinition Width="0.2*"/>
                        <ColumnDefinition Width="0.2*"/>
                    </Grid.ColumnDefinitions>
                    <TextBlock Grid.Column="0" Text="{Binding UserName}"/>
                    <PasswordBox Grid.Column="1" Password="{Binding UserPass}"/>
                    <TextBox Grid.Column="2" Text="{Binding UserTypeId}"/>
                    <Button Grid.Column="3" Content="Update"
                            cal:Click.Command="{Binding UpdateUserCommand}"
                            cal:Click.CommandParameter="{Binding}"/>
                    <Button Grid.Column="4" Content="Remove"
                            cal:Click.Command="{Binding RemoveUserCommand}"
                            cal:Click.CommandParameter="{Binding}"/>
                </Grid>
            </DataTemplate>
        </ItemsControl.ItemTemplate>
    </ItemsControl>
    ```

    What I'm trying to achieve is: have each row generated by the ListView/ItemsControl contain buttons to manage the item represented by that particular row. At runtime, Visual Studio's output panel shows the following messages for each list item:

    ```
    System.Windows.Data Error: BindingExpression path error: 'UpdateUserCommand' property not found on 'ModuleAdmin.Services.User' 'ModuleAdmin.Services.User' (HashCode=35912612). BindingExpression: Path='UpdateUserCommand' DataItem='ModuleAdmin.Services.User' (HashCode=35912612); target element is 'System.Windows.Controls.Button' (Name=''); target property is 'Command' (type 'System.Windows.Input.ICommand')..

    System.Windows.Data Error: BindingExpression path error: 'RemoveUserCommand' property not found on 'ModuleAdmin.Services.User' 'ModuleAdmin.Services.User' (HashCode=35912612). BindingExpression: Path='RemoveUserCommand' DataItem='ModuleAdmin.Services.User' (HashCode=35912612); target element is 'System.Windows.Controls.Button' (Name=''); target property is 'Command' (type 'System.Windows.Input.ICommand')..
    ```

    This implies there are binding errors present. Is there any way to make this work, or is this not the way?
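    Inside the DataTemplate the DataContext is the individual User item, so the binding looks for UpdateUserCommand on ModuleAdmin.Services.User, which is exactly what the error says. One sketch of a fix is to point the binding back at the view model that owns the list, here via ElementName (the control name is an assumption):

    ```xml
    <ItemsControl x:Name="UsersList" ItemsSource="{Binding Users, Mode=TwoWay}"
                  HorizontalContentAlignment="Stretch">
        <!-- ... same template as above, but the command bindings reach up to
             the list control's own DataContext (the view model): -->
        <Button Grid.Column="3" Content="Update"
                cal:Click.Command="{Binding DataContext.UpdateUserCommand, ElementName=UsersList}"
                cal:Click.CommandParameter="{Binding}"/>
    </ItemsControl>
    ```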

    Read the article

  • Java JMS Messaging

    - by London
    Hello, I have a working example of sending a message to a server, and of the server receiving it, via Qpid messaging. Here is a simple hello world that sends to the server: http://pastebin.com/M7mSECJn. And here is the server, which receives requests and sends a response (the current client doesn't receive the response): http://pastebin.com/2mEeuzrV. Here is my property file: http://pastebin.com/TLEFdpXG.

    They all work perfectly; I can see the messages in the Qpid queue via the Qpid JMX management console. These examples were downloaded from https://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/example (someone else may need them too). I've done JBoss messaging using Spring before, but I can't manage to do the same with Qpid. With JBoss, inside the applicationContext I had jndiTemplate, connectionFactory and queueDestination beans, and a JMS container, like this:

    ```xml
    <!-- Queue configuration -->
    <bean id="jndiTemplate" class="org.springframework.jndi.JndiTemplate">
        <property name="environment">
            <props>
                <prop key="java.naming.factory.initial">org.jnp.interfaces.NamingContextFactory</prop>
                <prop key="java.naming.provider.url">jnp://localhost:1099</prop>
                <prop key="java.naming.factory.url.pkgs">org.jboss.naming:org.jnp.interfaces</prop>
                <prop key="java.naming.security.principal">admin</prop>
                <prop key="java.naming.security.credentials">admin</prop>
            </props>
        </property>
    </bean>

    <bean id="connectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
        <property name="jndiTemplate" ref="jndiTemplate" />
        <property name="jndiName" value="ConnectionFactory" />
    </bean>

    <bean id="queueDestination" class="org.springframework.jndi.JndiObjectFactoryBean">
        <property name="jndiTemplate" ref="jndiTemplate" />
        <property name="jndiName">
            <value>queue/testQueue</value>
        </property>
    </bean>

    <bean id="jmsContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
        <property name="connectionFactory" ref="connectionFactory" />
        <property name="destination" ref="queueDestination" />
        <property name="messageListener" ref="listener" />
    </bean>
    ```

    ...and of course a sender and a listener. Now I'd like to rewrite this Qpid example using the same Spring context logic. Can anyone help me?
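    A sketch of the equivalent wiring for Qpid, assuming the properties-file JNDI provider that ships with the Qpid JMS client (the broker URL and queue name are placeholders to adapt to the property file above):

    ```xml
    <bean id="jndiTemplate" class="org.springframework.jndi.JndiTemplate">
        <property name="environment">
            <props>
                <prop key="java.naming.factory.initial">org.apache.qpid.jndi.PropertiesFileInitialContextFactory</prop>
                <!-- connectionfactory.<jndiName> = AMQP connection URL -->
                <prop key="connectionfactory.qpidConnectionFactory">amqp://guest:guest@clientid/test?brokerlist='tcp://localhost:5672'</prop>
                <!-- queue.<jndiName> = queue name on the broker -->
                <prop key="queue.testQueue">testQueue</prop>
            </props>
        </property>
    </bean>

    <bean id="connectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
        <property name="jndiTemplate" ref="jndiTemplate" />
        <property name="jndiName" value="qpidConnectionFactory" />
    </bean>

    <bean id="queueDestination" class="org.springframework.jndi.JndiObjectFactoryBean">
        <property name="jndiTemplate" ref="jndiTemplate" />
        <property name="jndiName" value="testQueue" />
    </bean>

    <!-- the jmsContainer bean can stay exactly as in the JBoss version -->
    ```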

    Read the article

  • Hibernate annotated many-to-one not adding child to parent Collection

    - by Rob Hruska
    I have the following annotated Hibernate entity classes:

    ```java
    @Entity
    public class Cat {
        @Column(name = "ID")
        @GeneratedValue(strategy = GenerationType.AUTO)
        @Id
        private Long id;

        @OneToMany(mappedBy = "cat", cascade = CascadeType.ALL, fetch = FetchType.EAGER)
        private Set<Kitten> kittens = new HashSet<Kitten>();

        public void setId(Long id) { this.id = id; }
        public Long getId() { return id; }
        public void setKittens(Set<Kitten> kittens) { this.kittens = kittens; }
        public Set<Kitten> getKittens() { return kittens; }
    }

    @Entity
    public class Kitten {
        @Column(name = "ID")
        @GeneratedValue(strategy = GenerationType.AUTO)
        @Id
        private Long id;

        @ManyToOne(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
        private Cat cat;

        public void setId(Long id) { this.id = id; }
        public Long getId() { return id; }
        public void setCat(Cat cat) { this.cat = cat; }
        public Cat getCat() { return cat; }
    }
    ```

    My intention here is a bidirectional one-to-many/many-to-one relationship between Cat and Kitten, with Kitten being the owning side. What I want to happen is: when I create a new Cat, followed by a new Kitten referencing that Cat, the Set of kittens on my Cat should contain the new Kitten. However, this does not happen in the following test:

    ```java
    @Test
    public void testAssociations() {
        Session session = HibernateUtil.getSessionFactory().getCurrentSession();
        Transaction tx = session.beginTransaction();

        Cat cat = new Cat();
        session.save(cat);

        Kitten kitten = new Kitten();
        kitten.setCat(cat);
        session.save(kitten);

        tx.commit();

        assertNotNull(kitten.getCat());
        assertEquals(cat.getId(), kitten.getCat().getId());
        assertTrue(cat.getKittens().size() == 1); // <-- ASSERTION FAILS
        assertEquals(kitten, new ArrayList<Kitten>(cat.getKittens()).get(0));
    }
    ```

    Even after re-querying the Cat, the Set is still empty:

    ```java
    // added before tx.commit() and the assertions
    cat = (Cat) session.get(Cat.class, cat.getId());
    ```

    Am I expecting too much from Hibernate here? Or is the burden on me to manage the collection myself? The (Annotations) documentation gives no indication that I need to create convenience addTo*/removeFrom* methods on my parent object. Can someone please enlighten me on what my expectations should be from Hibernate with this relationship? Or, if nothing else, point me to the Hibernate documentation that says what I should expect to happen here. What do I need to do to make the parent collection automatically contain the child entity?
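    Hibernate only persists what the owning side (Kitten.cat) says; it does not maintain the in-memory inverse collection within the same session. The usual idiom is a convenience method on the parent that wires both directions, a sketch of which would be:

    ```java
    // In Cat: keep both sides of the association in sync in memory, so the
    // Set is correct immediately, not only after the Cat is reloaded in a
    // fresh session.
    public void addKitten(Kitten kitten) {
        kitten.setCat(this);   // owning side: what gets written to the DB
        this.kittens.add(kitten); // inverse side: what the current session sees
    }
    ```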

    Read the article

  • Git for Websites / post-receive / Separation of Test and Production Sites

    - by Walt W
    Hi all, I'm using Git to manage my website's source code and deployment, and currently have the test and live sites running on the same box. Following http://toroid.org/ams/git-website-howto originally, I came up with the following post-receive hook script to differentiate between pushes to my live site and pushes to my test site:

    ```bash
    while read ref
    do
        #echo "Ref updated:"
        #echo $ref -- would print something like example at top of file
        result=`echo $ref | gawk -F' ' '{ print $3 }'`
        if [ $result != "" ]; then
            echo "Branch found: "
            echo $result
            case $result in
                refs/heads/master )
                    git --work-tree=c:/temp/BLAH checkout -f master
                    echo "Updated master"
                    ;;
                refs/heads/testbranch )
                    git --work-tree=c:/temp/BLAH2 checkout -f testbranch
                    echo "Updated testbranch"
                    ;;
                * )
                    echo "No update known for $result"
                    ;;
            esac
        fi
    done
    echo "Post-receive updates complete"
    ```

    However, I have doubts that this is actually safe :) I'm by no means a Git expert, but I am guessing that Git probably keeps track of the currently checked-out branch head, and this approach probably has the potential to confuse it to no end. So, a few questions:

    - Is this safe?
    - Would a better approach be to have my base repository be the test site repository (with a corresponding working directory), and then have that repository push changes to a new live site repository, which has a working directory corresponding to the live site base? This would also allow me to move production to a different server and keep the deployment chain intact.
    - Is there something I'm missing? Is there a different, clean way to differentiate between test and production deployments when using Git for managing websites?

    As an additional note, in light of Vi's answer: is there a good way to do this that would handle deletions without mucking with the file system much?

    Thank you, -Walt

    PS - The script I came up with for the multiple repos (and am using unless I hear better) is as follows:

    ```bash
    sitename=`basename \`pwd\``
    while read ref
    do
        #echo "Ref updated:"
        #echo $ref -- would print something like example at top of file
        result=`echo $ref | gawk -F' ' '{ print $3 }'`
        if [ $result != "" ]; then
            echo "Branch found: "
            echo $result
            case $result in
                refs/heads/master )
                    git checkout -q -f master
                    if [ $? -eq 0 ]; then
                        echo "Test Site checked out properly"
                    else
                        echo "Failed to checkout test site!"
                    fi
                    ;;
                refs/heads/live-site )
                    git push -q ../Live/$sitename live-site:master
                    if [ $? -eq 0 ]; then
                        echo "Live Site received updates properly"
                    else
                        echo "Failed to push updates to Live Site"
                    fi
                    ;;
                * )
                    echo "No update known for $result"
                    ;;
            esac
        fi
    done
    echo "Post-receive updates complete"
    ```

    And then the repo in ../Live/$sitename (these are "bare" repos with working trees added after init) has the basic post-receive:

    ```bash
    git checkout -f
    if [ $? -eq 0 ]; then
        echo "Live site `basename \`pwd\`` checked out successfully"
    else
        echo "Live site failed to checkout"
    fi
    ```

    Read the article

  • Problems doing asynch operations in C# using Mutex.

    - by firoso
    I've tried this MANY ways; here is the current iteration. I think I've just implemented this all wrong. What I'm trying to accomplish is to treat this async result in such a way that until it returns AND I finish with my add-thumbnail call, I will not request another call to imageProvider.BeginGetImage. To clarify, my question is two-fold: why does what I'm doing never seem to halt at my Mutex.WaitOne() call, and what is the proper way to handle this scenario?

    ```csharp
    /// <summary>
    /// Re-creates a list of thumbnails from a list of TreeElementViewModels (directories).
    /// </summary>
    /// <param name="list">the list of TreeElementViewModels to process</param>
    public void BeginLayout(List<AiTreeElementViewModel> list)
    {
        // *removed code for canceling and cleanup from previous calls*

        // Starts the processing of all folders in parallel.
        Task.Factory.StartNew(() =>
        {
            thumbnailRequests = Parallel.ForEach<AiTreeElementViewModel>(list, options, ProcessFolder);
        });
    }

    /// <summary>
    /// Processes a folder for all of its image paths and loads them from disk.
    /// </summary>
    /// <param name="element">the tree element to process</param>
    private void ProcessFolder(AiTreeElementViewModel element)
    {
        try
        {
            var images = ImageCrawler.GetImagePaths(element.Path);
            AsyncCallback callback = AddThumbnail;

            foreach (var image in images)
            {
                Console.WriteLine("Attempting Enter");
                synchMutex.WaitOne();
                Console.WriteLine("Entered");
                var result = imageProvider.BeginGetImage(callback, image);
            }
        }
        catch (Exception exc)
        {
            Console.WriteLine(exc.ToString());
            // TODO: Do Something here.
        }
    }

    /// <summary>
    /// Adds a thumbnail to the Browser.
    /// </summary>
    /// <param name="result">an async result used for retrieving state data from the load task.</param>
    private void AddThumbnail(IAsyncResult result)
    {
        lock (Thumbnails)
        {
            try
            {
                Stream image = imageProvider.EndGetImage(result);
                string filename = imageProvider.GetImageName(result);
                string imagePath = imageProvider.GetImagePath(result);

                var imageviewmodel = new AiImageThumbnailViewModel(image, filename, imagePath);
                thumbnailHash[imagePath] = imageviewmodel;

                HostInvoke(() => Thumbnails.Add(imageviewmodel));
                UpdateChildZoom();
                //synchMutex.ReleaseMutex();
                Console.WriteLine("Exited");
            }
            catch (Exception exc)
            {
                Console.WriteLine(exc.ToString());
                // TODO: Do Something here.
            }
        }
    }
    ```
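    Two properties of Mutex explain the symptoms and suggest an alternative: a Mutex is reentrant for the thread that already owns it (so the WaitOne() in the ProcessFolder loop, always on the same thread, sails straight through), and it can only be released by that owning thread (so releasing from the AddThumbnail callback thread would throw, which may be why the release is commented out). A Semaphore has neither restriction; a sketch mapped onto the code above:

    ```csharp
    // Sketch: allow one outstanding BeginGetImage at a time; Release() is
    // legal from any thread, including the async callback's.
    private static readonly Semaphore pending = new Semaphore(1, 1);

    private void ProcessFolder(AiTreeElementViewModel element)
    {
        foreach (var image in ImageCrawler.GetImagePaths(element.Path))
        {
            pending.WaitOne();                    // blocks until the previous request finished
            imageProvider.BeginGetImage(AddThumbnail, image);
        }
    }

    private void AddThumbnail(IAsyncResult result)
    {
        try
        {
            // ... EndGetImage, build the view model, add the thumbnail ...
        }
        finally
        {
            pending.Release();                    // wakes the producer loop
        }
    }
    ```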

    Read the article

  • KISS: Simple C# application which communicates with a RESTful web service.

    - by Workshop Alex
    Following the KISS principle, I suddenly realised the following: in .NET, you can use the Entity Framework to wrap around a database. This model can be exposed as a web service through WCF. That web service would have a very standardized definition, and a client application could be created that can consume any such RESTful web service. I don't want to re-invent the wheel, and it wouldn't surprise me if someone has already done this, so my question is simple: has anyone already created a simple (desktop, not web) client application that can consume a RESTful service based on the Entity Framework and that allows the user to read and write data directly to this service? Otherwise, I'll just have to "invent" it myself. :-)

    Problem is, the database layer and RESTful service are already finished. The RESTful service will only stay in the project during its development phase, since we can use the database-layer assembly directly from the web applications built around it. When the web application is deployed, the RESTful services are just kept out of the deployment. But the database has a lot of data to manage, over nearly 50 tables. When developing against a local database, we have direct access to the database, so I wouldn't need this tool there. Once deployed, the web application would be the only way to access the data, so I could not use this tool either.

    But we also have a test phase, where the database is stored on a system outside the local domain and is not available to developers. Only administrators have direct access to this database, making tests a bit more complex. Through the RESTful service, however, I can still access the data directly. Thus, when some test goes wrong, I can repair the data through this connection or just create a copy of the data for tests on my local system. There's plenty of other functionality, and it's even possible to just open the URL of a table service straight in Excel or XMLSpy to see the contents. But when I want to write something back, I have to write special code to do just that. A generic tool that would allow me to access and modify the data would be easier. Since it's a generic setup around ADO.NET Data Services, this should be reasonably easy too. Thus, I can do it, but I hoped someone else had already done something similar. It appears that no such tool has been made yet...
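    A sketch of the kind of generic client this would be, using the ADO.NET Data Services client library (System.Data.Services.Client); the service URI and the Customer entity type are placeholders for what would normally be generated from the service's $metadata:

    ```csharp
    using System;
    using System.Data.Services.Client;
    using System.Linq;

    class RestClientSketch
    {
        static void Main()
        {
            // Point the context at the .svc endpoint (placeholder URI).
            var ctx = new DataServiceContext(new Uri("http://testserver/MyData.svc"));

            // Read ten rows from an entity set.
            var rows = ctx.Execute<Customer>(new Uri("Customers?$top=10", UriKind.Relative)).ToList();

            // Edit one row and push the change back through the service.
            rows[0].Name = "Repaired during testing";
            ctx.UpdateObject(rows[0]);
            ctx.SaveChanges();
        }
    }
    ```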

    Read the article

  • Asynchronous subprocess on Windows

    - by Stigma
    First of all, the overall problem I am solving is a bit more complicated than I am showing here, so please do not tell me 'use threads with blocking', as it would not solve my actual situation without a fair, FAIR bit of rewriting and refactoring.

    I have several applications which are not mine to modify, which take data from stdin and poop it out on stdout after doing their magic. My task is to chain several of these programs. Problem is, sometimes they choke, and as such I need to track their progress, which is output on stderr:

    ```python
    pA = subprocess.Popen(CommandA, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    # ... some more processes make up the chain, but that is irrelevant to the problem
    pB = subprocess.Popen(CommandB, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                          stdin=pA.stdout)
    ```

    Now, reading directly through pA.stdout.readline() and pB.stdout.readline(), or the plain read() functions, is a blocking matter. Since different applications output at different paces and in different formats, blocking is not an option. (And as I wrote above, threading is not an option except as a last, last resort.) pA.communicate() is deadlock-safe, but since I need the information live, that is not an option either.

    Thus Google brought me to this asynchronous subprocess snippet on ActiveState. All good at first, until I implemented it. Comparing the cmd.exe output of pA.exe | pB.exe (ignoring the fact that both output to the same window, making a mess), I see very instantaneous updates. However, implementing the same thing using the snippet above and the read_some() function declared there, it takes over 10 seconds to notify updates from a single pipe. But when it does, it has updates leading all the way up to 40% progress, for example.

    So I did some more research, and saw numerous subjects concerning PeekNamedPipe, anonymous handles, and returning 0 bytes available even though there is information available in the pipe. As the subject has proven quite a bit beyond my expertise to fix or code around, I come to Stack Overflow to look for guidance. :)

    My platform is W7 64-bit with Python 2.6; the applications are 32-bit, in case it matters, and compatibility with Unix is not a concern. I can even deal with a full ctypes or pywin32 solution that subverts subprocess entirely, if that is the only solution, as long as I can read from every stderr pipe asynchronously, with immediate performance and no deadlocks. :)
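    For reference, a sketch of the PeekNamedPipe approach with pywin32 instead of raw ctypes: it asks how many bytes are buffered before reading, so the read itself never blocks (error handling omitted; a broken pipe raises when the child exits):

    ```python
    import msvcrt
    import os

    import win32pipe

    def read_available(pipe):
        """Return whatever is currently buffered in `pipe` without blocking."""
        handle = msvcrt.get_osfhandle(pipe.fileno())
        _, avail, _ = win32pipe.PeekNamedPipe(handle, 0)
        if not avail:
            return ''
        return os.read(pipe.fileno(), avail)

    # usage sketch: poll each child's stderr in the main loop
    # progress = read_available(pA.stderr) + read_available(pB.stderr)
    ```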

    Read the article

  • Maven nexus with jetty 7 and apache2 reverse proxy

    - by user613154
    Hello Stack Overflow! Here is my problem: I'm trying to run a Maven Nexus behind an Apache reverse proxy. As I have multiple WARs in my Jetty, I want the Nexus to run at http://localhost:8080/nexus. I made a Jetty context file, {jetty.home}/contexts/nexus.xml, as follows:

    ```xml
    <?xml version="1.0" encoding="ISO-8859-1"?>
    <!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
    <Configure class="org.eclipse.jetty.webapp.WebAppContext">
      <Set name="contextPath">/nexus</Set>
      <Set name="war"><SystemProperty name="jetty.home" default="."/>/webapps/nexus.war</Set>
    </Configure>
    ```

    My Jetty connector in jetty.xml is as follows:

    ```xml
    <Call name="addConnector">
      <Arg>
        <New class="org.eclipse.jetty.server.nio.SelectChannelConnector">
          <Set name="host"><Property name="jetty.host" /></Set>
          <Set name="port"><Property name="jetty.port" default="8080"/></Set>
          <Set name="maxIdleTime">300000</Set>
          <Set name="Acceptors">2</Set>
          <Set name="forwarded">true</Set>
          <Set name="statsOn">false</Set>
          <Set name="confidentialPort">8443</Set>
          <Set name="lowResourcesConnections">20000</Set>
          <Set name="lowResourcesMaxIdleTime">5000</Set>
        </New>
      </Arg>
    </Call>
    ```

    I want http://maven.foo.com/ as an endpoint for the Nexus, so I made this Apache configuration file:

    ```apache
    ProxyRequests Off
    ProxyVia Off
    ProxyPreserveHost On

    <Proxy *>
        AddDefaultCharset off
        Order deny,allow
        Allow from all
    </Proxy>

    <VirtualHost *:80>
        ServerName maven.foo.com
        ProxyPass / http://localhost:8080/nexus/
        ProxyPassReverse / http://localhost:8080/nexus/
        ErrorLog ${APACHE_LOG_DIR}/error_nexus.log
    </VirtualHost>
    ```

    But I can't manage to make it work. The error message displayed in the browser is "The server has not found anything matching the request URI". I tried to read the docs on the Jetty and Apache web sites, but didn't find information on mapping a subdomain "sub.foo.com" to a context "localhost:8080/sub"... Any help welcome! Thanks.
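    One sketch that sidesteps most path-rewriting trouble: keep the context path identical on both sides of the proxy and redirect the root, so the webapp's absolute links and redirects survive unchanged (a sketch, assuming Nexus stays mounted at /nexus):

    ```apache
    <VirtualHost *:80>
        ServerName maven.foo.com
        ProxyPass        /nexus http://localhost:8080/nexus
        ProxyPassReverse /nexus http://localhost:8080/nexus
        RedirectMatch ^/$ /nexus/
        ErrorLog ${APACHE_LOG_DIR}/error_nexus.log
    </VirtualHost>
    ```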

    Read the article

  • Good Secure Backups Developers at Home

    - by slashmais
    What is a good, secure method to do backups, for programmers who do research & development at home and cannot afford to lose any work? Conditions:

    1. The backups must ALWAYS be within reasonably easy reach.
    2. An Internet connection cannot be guaranteed to be always available.
    3. The solution must be either FREE or priced within reason, and subject to 2 above.

    Status Report

    This is for now only considering free options. The following open-source projects are suggested in the answers (here & elsewhere):

    - BackupPC is a high-performance, enterprise-grade system for backing up Linux, WinXX and MacOSX PCs and laptops to a server's disk.
    - Storebackup is a backup utility that stores files on other disks.
    - mybackware: These scripts were developed to create SQL dump files for basic disaster recovery of small MySQL installations.
    - Bacula is [...] to manage backup, recovery, and verification of computer data across a network of computers of different kinds. In technical terms, it is a network based backup program.
    - AutoDL 2 and Sec-Bk: AutoDL 2 is a scalable transport independant automated file transfer system. It is suitable for uploading files from a staging server to every server on a production server farm [...] Sec-Bk is a set of simple utilities to securely back up files to a remote location, even a public storage location.
    - rsnapshot is a filesystem snapshot utility for making backups of local and remote systems.
    - rbme: Using rsync for backups [...] you get perpetual incremental backups that appear as full backups (for each day) and thus allow easy restore or further copying to tape etc.
    - Duplicity backs directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. [...] uses librsync, [for] incremental archives.

    Other Possibilities:

    - Using a Distributed Version Control System (DVCS) such as Git (/Easy Git), Bazaar, or Mercurial answers the need to have the backup available locally.
    - Use free online storage space as a remote backup, e.g.: compress your work/backup directory and mail it to your gmail account.

    Strategies: see crazyscot's answer.

    Read the article

  • LINQ InsertOnSubmit Required Fields needed for debugging

    - by Derek Hunziker
    Hi All, I've been using the ADO.NET strongly-typed DataSet model for about 2 years now for handling CRUD and stored procedure executions. This past year I built my first MVC app, and I really enjoyed the ease and flexibility of LINQ. Perhaps the biggest selling point for me was that with LINQ I didn't have to create "insert" stored procedures that return SCOPE_IDENTITY (the auto-generated insert statements in the DataSet model were not capable of this without modification).

    Currently, I'm using LINQ with ASP.NET 3.5 WebForms. My inserts look like this:

    ```csharp
    ProductsDataContext dc = new ProductsDataContext();
    product p = new product
    {
        Title = "New Product",
        Price = 59.99,
        Archived = false
    };
    dc.products.InsertOnSubmit(p);
    dc.SubmitChanges();
    int productId = p.Id;
    ```

    So, this product example is pretty basic, right? In the future, I'll probably be adding more fields to the database, such as "InStock", "Quantity", etc. The way I understand it, I will need to add those fields to the database table and then delete and re-add the tables in the LINQ to SQL class design view in order to refresh the DataContext. Does that sound right?

    The problem is that any new fields that are non-null are NOT caught by the ASP.NET build process. For example, if I added a non-null field "Quantity" to the database, the code above would still build. In the DataSet model, the stored procedure method would accept a certain number of parameters and would warn me that my insert would fail if I didn't include a quantity value. The same goes for LINQ stored procedure methods; however, to my knowledge, LINQ doesn't offer a way to auto-generate the insert statements, and that means I'm back where I started.

    The bottom line is: if I use insert statements like the one above and add a non-null field to my database, it would break my app in about 10-20 places, and there would be no way for me to detect it. Is my only option to do a solution-wide search for the keyword "products.InsertOnSubmit" and make sure the new field is getting assigned? Is there a better way? Thanks!
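    One sketch of a compile-time guard: funnel every insert through a factory whose parameter list is the one place updated when the schema grows, so adding a required column turns every out-of-date call site into a build error rather than a runtime failure:

    ```csharp
    public static class ProductFactory
    {
        // Add a parameter here when a new NOT NULL column (e.g. quantity)
        // appears; the compiler then flags every caller not supplying it.
        public static product Create(string title, double price, bool archived)
        {
            return new product { Title = title, Price = price, Archived = archived };
        }
    }

    // usage sketch:
    // dc.products.InsertOnSubmit(ProductFactory.Create("New Product", 59.99, false));
    ```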

    Read the article

  • Keeping files or database records? Java and Python

    - by danpalmer
    My website will use a neural network to predict things based on user data. The user can select the data to be used in training the network and then use their trained network to predict things. I am using a framework to create, train and query the networks. This uses Java. The framework has persistence for saving a network to an XML file.

    What is the best way to store these files? I can see several potential ideas, but I need help choosing which is best:

    1. Save each network to a separate XML file with a name that is stored in the database. Load this each time.
    2. Save all the networks to the same XML file, with each network having a different name that is stored in the database.
    3. Somehow pass what would normally be written to an XML file to the Django site for writing to the database. This would need to be returned to the Java code when a prediction needs to be made.

    I am able to do 1 or 2, but I think their performance will be quite limited, and I am on shared hosting at the moment, so I don't know how pleased the host would be with thousands of files. Also, after adding a few thousand records to one XML file, I noticed a massive performance hit on saving to it.

    If I were able to implement option 3 somehow, I think it would be best: no issues with separate processes accessing the database, and I think performance would be better, not to mention having no files lying around. However, the part of the neural network framework I am using (Encog) that saves to a file needs access to a Java File object, not a string that could be saved to a database. Unless there is some Java magic I can do here (I know very little Java), the only way I can see of doing this would be with a temporary file, but I don't know if that is the correct way to do it. I would appreciate any ideas on the best way to implement any of the above 3 ideas, or any alternatives. Thanks!
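    A sketch of the temporary-file idiom for option 3, assuming the framework really can only write to a java.io.File (the save call below is a placeholder for the framework's actual persistence API):

    ```java
    import java.io.File;
    import java.nio.file.Files;

    // Let the framework write to a temp file, then slurp the bytes for the DB.
    File tmp = File.createTempFile("network", ".eg");
    try {
        persistence.save(tmp);                         // placeholder save-to-file call
        byte[] blob = Files.readAllBytes(tmp.toPath());
        // ... hand `blob` to the Django side / database;
        //     reverse the steps (write blob to a temp file, load) to read it back ...
    } finally {
        tmp.delete();                                  // no files left lying around
    }
    ```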

    Read the article

  • Using OpenID as the only authentication method

    - by iconiK
    I have read the other questions, and they mostly talk about the security of doing so. That's not entirely my concern, mostly because the website in question is a browser-based game. However, the larger issue is the user: not every user is literate enough to understand OpenID. Sure, RPX makes this pretty easy, which is what I'll use, but what if the user does not have an account at Google or Facebook or whatever, or does not trust the system to log in with an existing account? They'd have to get an account at another provider; I'm not sure most will know how to do it, let alone be bothered to do it.

    There is also the problem of how to manage it in the application. A user might want to use multiple identities with a single account, so it's not as simple as username + password to deal with. How do I store the OpenID identities of a user in the database? Using OpenID gives me a benefit too: RPX can provide extensive profile information, so I can just prefill the profile form and ask the user to edit it as required. I currently have this:

    ```
    UserID  Email
    ------  ---------------
    86000   [email protected]
    86001   [email protected]

    UserOpenID  OpenID
    ----------  ------
    86000       16733
    86001       16839
    86002       19361

    OpenID  Provider  Identifier
    ------  --------  ----------------
    16733   Yahoo     https:\\me.yahoo.com\bob#d36bd
    16839   Yahoo     https:\\me.yahoo.com\bigbobby#x75af
    19361   Yahoo     https:\\me.yahoo.com\alice#c19fd
    ```

    Is that the right way to store OpenID identifiers in the database? How would I match the identifier RPX gave me with one in the database to log the user in (if the identifier is known)?

    So here are the concrete questions:

    - How would I make the site accessible to users not having an OpenID, or not wanting to use one (e.g. due to security concerns over logging in with their Google account)?
    - How do I store the identifier in the database? (I'm not sure if the tables above are right.)
    - What measures do I need to take in order to prevent someone from logging in as another user and happily doing anything with their account? (As I understand it, RPX sends the identifier via HTTP, so all anyone would have to do is somehow grab it, then enter it in the "OpenID" field.)
    - What else do I need to be aware of when using OpenID?

    Read the article

  • Why does one loop take longer to detect a shared memory update than another loop?

    - by Joseph Garvin
    I've written a 'server' program that writes to shared memory, and a client program that reads from that memory. The server has different 'channels' it can write to, which are just different linked lists that it's appending items to. The client is interested in some of the linked lists, and wants to read every node that's added to those lists as it comes in, with the minimum latency possible. I have two approaches for the client:

    1. For each linked list, the client keeps a 'bookmark' pointer to keep its place within the list. It round-robins the linked lists, iterating through all of them over and over (it loops forever), moving each bookmark one node forward each time if it can. Whether it can is determined by the value of the 'next' member of the node: if it's non-null, then jumping to the next node is safe (the server switches it from null to non-null atomically). This approach works OK, but if there are a lot of lists to iterate over, and only a few of them are receiving updates, the latency gets bad.

    2. The server gives each list a unique ID. Each time the server appends an item to a list, it also appends the ID number of that list to a master 'update list'. The client keeps only one bookmark, into the update list. It endlessly checks whether the bookmark's next pointer is non-null ( while(node->next_ == NULL) {} ); if so, it moves ahead, reads the ID given, and then processes the new node on the linked list with that ID. This, in theory, should handle large numbers of lists much better, because the client doesn't have to iterate over all of them each time.

    When I benchmarked the latency of both approaches (using gettimeofday), to my surprise #2 was terrible. The first approach, for a small number of linked lists, would often be under 20 us of latency. The second approach would have occasional spells of low latency, but was often between 4,000 and 7,000 us!

    By inserting gettimeofday calls here and there, I've determined that all of the added latency in approach #2 is spent in the loop repeatedly checking whether the next pointer is non-null. This is puzzling to me; it's as if the change in one process is taking longer to 'publish' to the second process with the second approach. I assume there's some sort of cache interaction going on that I don't understand. What's going on?
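    A sketch of what a well-defined publish/consume handshake on the next pointer looks like: a plain pointer read in a spin loop may sit in a register or behind the optimizer, so the cross-process signal needs an atomic with release/acquire ordering (C++11 atomics shown for brevity; volatile plus explicit fences is the pre-C++11 equivalent, and the types must be lock-free to be valid in shared memory):

    ```cpp
    #include <atomic>

    struct Node {
        int payload;
        std::atomic<Node*> next{nullptr};
    };

    // writer (server): the release store publishes the fully-built node
    void publish(Node* tail, Node* fresh) {
        tail->next.store(fresh, std::memory_order_release);
    }

    // reader (client): the acquire load synchronizes with the store above
    Node* wait_for_next(Node* bookmark) {
        Node* n;
        while ((n = bookmark->next.load(std::memory_order_acquire)) == nullptr)
            ; // spin; a pause instruction here is kinder to a sibling hyperthread
        return n;
    }
    ```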

    Read the article

  • Django facebook integration error

    - by Gaurav
    I'm trying to integrate Facebook into my application so that users can use their FB login to log in to my site. I've got everything up and running, and there are no issues when I run my site from the command line with python manage.py runserver. But this same code refuses to run when I try to run it through Apache. I get the following error:

    ```
    Environment:

    Request Method: GET
    Request URL: http://helvetica/foodfolio/login
    Django Version: 1.1.1
    Python Version: 2.6.4

    Installed Applications:
    ['django.contrib.auth',
     'django.contrib.contenttypes',
     'django.contrib.sessions',
     'django.contrib.sites',
     'foodfolio.app',
     'foodfolio.facebookconnect']

    Installed Middleware:
    ('django.contrib.sessions.middleware.SessionMiddleware',
     'facebook.djangofb.FacebookMiddleware',
     'django.middleware.common.CommonMiddleware',
     'django.contrib.auth.middleware.AuthenticationMiddleware',
     'facebookconnect.middleware.FacebookConnectMiddleware')

    Template error:
    In template /home/swat/website-apps/foodfolio/facebookconnect/templates/facebook/js.html, error at line 2
    Caught an exception while rendering: No module named app.models

    1 : <script type="text/javascript">
    2 : FB_RequireFeatures(["XFBML"], function() {FB.Facebook.init("{{ facebook_api_key }}", "{% url facebook_xd_receiver %}")});
    3 :
    4 : function facebookConnect(loginForm) {
    5 :     FB.Connect.requireSession();
    6 :     FB.Facebook.get_sessionState().waitUntilReady(function(){loginForm.submit();});
    7 : }
    8 : function pushToFacebookFeed(data){
    9 :     if(data['success']){
    10 :        var template_data = data['template_data'];
    11 :        var template_bundle_id = data['template_bundle_id'];
    12 :        feedTheFacebook(template_data,template_bundle_id,function(){});

    Traceback:
    File "/usr/lib/pymodules/python2.6/django/core/handlers/base.py" in get_response
      92. response = callback(request, *callback_args, **callback_kwargs)
    File "/home/swat/website-apps/foodfolio/app/controller.py" in __showLogin__
      238. context_instance = RequestContext(request))
    File "/usr/lib/pymodules/python2.6/django/shortcuts/__init__.py" in render_to_response
      20. return HttpResponse(loader.render_to_string(*args, **kwargs), **httpresponse_kwargs)
    File "/usr/lib/pymodules/python2.6/django/template/loader.py" in render_to_string
      108. return t.render(context_instance)
    File "/usr/lib/pymodules/python2.6/django/template/__init__.py" in render
      178. return self.nodelist.render(context)
    File "/usr/lib/pymodules/python2.6/django/template/__init__.py" in render
      779. bits.append(self.render_node(node, context))
    File "/usr/lib/pymodules/python2.6/django/template/debug.py" in render_node
      71. result = node.render(context)
    File "/usr/lib/pymodules/python2.6/django/template/__init__.py" in render
      946. autoescape=context.autoescape))
    File "/usr/lib/pymodules/python2.6/django/template/__init__.py" in render
      779. bits.append(self.render_node(node, context))
    File "/usr/lib/pymodules/python2.6/django/template/debug.py" in render_node
      81. raise wrapped

    Exception Type: TemplateSyntaxError at /foodfolio/login
    Exception Value: Caught an exception while rendering: No module named app.models
    ```
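    Since it works under runserver but not Apache, a sketch of one common culprit to check: under mod_wsgi the project package root may not be on sys.path, so a module importable as "foodfolio.app.models" in one environment fails as "app.models" in the other. A WSGI entry point that puts both the project directory and its parent on the path (all paths below are assumptions):

    ```python
    import os
    import sys

    sys.path.insert(0, '/home/swat/website-apps')            # parent: "foodfolio.app" importable
    sys.path.insert(0, '/home/swat/website-apps/foodfolio')  # project: "app" importable

    os.environ['DJANGO_SETTINGS_MODULE'] = 'foodfolio.settings'

    from django.core.handlers.wsgi import WSGIHandler
    application = WSGIHandler()
    ```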

    Read the article

  • Problem connecting to postgres with Kohana 3 database module on OS X Snow Leopard

    - by Bart Gottschalk
    Environment: Mac OS X 10.6 Snow Leopard, PHP 5.3, Kohana 3.0.4.

    When I try to configure and use a connection to a PostgreSQL database on localhost, I get the following error:

    ```
    ErrorException [ Warning ]: mysql_connect(): [2002] No such file or directory (trying to connect via unix:///var/mysql/mysql.sock)
    ```

    Here is the configuration of the database in /modules/database/config/database.php (note the third instance, named 'pgsqltest'):

    ```php
    return array
    (
        'default' => array
        (
            'type'       => 'mysql',
            'connection' => array(
                /**
                 * The following options are available for MySQL:
                 *
                 * string   hostname
                 * string   username
                 * string   password
                 * boolean  persistent
                 * string   database
                 *
                 * Ports and sockets may be appended to the hostname.
                 */
                'hostname'   => 'localhost',
                'username'   => FALSE,
                'password'   => FALSE,
                'persistent' => FALSE,
                'database'   => 'kohana',
            ),
            'table_prefix' => '',
            'charset'      => 'utf8',
            'caching'      => FALSE,
            'profiling'    => TRUE,
        ),
        'alternate' => array(
            'type'       => 'pdo',
            'connection' => array(
                /**
                 * The following options are available for PDO:
                 *
                 * string   dsn
                 * string   username
                 * string   password
                 * boolean  persistent
                 * string   identifier
                 */
                'dsn'        => 'mysql:host=localhost;dbname=kohana',
                'username'   => 'root',
                'password'   => 'r00tdb',
                'persistent' => FALSE,
            ),
            'table_prefix' => '',
            'charset'      => 'utf8',
            'caching'      => FALSE,
            'profiling'    => TRUE,
        ),
        'pgsqltest' => array(
            'type'       => 'pdo',
            'connection' => array(
                /**
                 * The following options are available for PDO:
                 *
                 * string   dsn
                 * string   username
                 * string   password
                 * boolean  persistent
                 * string   identifier
                 */
                'dsn'        => 'mysql:host=localhost;dbname=pgsqltest',
                'username'   => 'postgres',
                'password'   => 'dev1234',
                'persistent' => FALSE,
            ),
            'table_prefix' => '',
            'charset'      => 'utf8',
            'caching'      => FALSE,
            'profiling'    => TRUE,
        ),
    );
    ```

    And here is the code to create the database instance, create a query, and execute it:

    ```php
    $pgsqltest_db = Database::instance('pgsqltest');
    $query = DB::query(Database::SELECT, 'SELECT * FROM test')->execute();
    ```

    I'm continuing to research a solution for this error, but thought I'd ask to see if someone else has already found one. Any ideas are welcome. One other note: I know my build of PHP can access this PostgreSQL DB, since I'm able to manage the DB using phpPgAdmin. But I have yet to determine what phpPgAdmin is doing differently to connect to the DB than what Kohana 3 is attempting. Bart
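    Two details worth checking against the config above (a sketch, not a confirmed diagnosis): the error mentions mysql_connect(), the non-PDO MySQL driver, which suggests the request is falling through to the 'default' group rather than 'pgsqltest'; and even once 'pgsqltest' is used, its DSN still names the mysql PDO driver. A PostgreSQL DSN would look like this:

    ```php
    'pgsqltest' => array(
        'type'       => 'pdo',
        'connection' => array(
            'dsn'        => 'pgsql:host=localhost;dbname=pgsqltest', // pgsql, not mysql
            'username'   => 'postgres',
            'password'   => 'dev1234',
            'persistent' => FALSE,
        ),
        'table_prefix' => '',
        'charset'      => 'utf8',
        'caching'      => FALSE,
        'profiling'    => TRUE,
    ),
    ```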

  • How do I construct a Django reverse/url using query args?

    - by Andrew Dalke
    I have URLs like

        http://example.com/depict?smiles=CO&width=200&height=200

    (and with several other optional arguments). My urls.py contains:

        urlpatterns = patterns('',
            (r'^$', 'cansmi.index'),
            (r'^cansmi$', 'cansmi.cansmi'),
            url(r'^depict$', cyclops.django.depict, name="cyclops-depict"),
        )

    I can go to that URL and get the 200x200 PNG that was constructed, so I know that part works.

    In my template, from the "cansmi.cansmi" response, I want to construct a URL for the named URL pattern "cyclops-depict" given some query parameters. I thought I could do

        {% url cyclops-depict smiles=input_smiles width=200 height=200 %}

    where "input_smiles" is an input to the template via a form submission. In this case it's the string "CO", and I thought it would create a URL like the one at top. This template fails with a TemplateSyntaxError:

        Caught an exception while rendering: Reverse for 'cyclops-depict' with arguments '()'
        and keyword arguments '{'smiles': u'CO', 'height': 200, 'width': 200}' not found.

    This is a rather common error message, both here on StackOverflow and elsewhere. In every case I found, though, people were using {% url %} with parameters that belong in the URL path regexp, which is not my case: my parameters go into the query string. That means I'm doing it wrong. How do I do it right? That is, I want to construct the full URL, including path and query parameters, using something in the template. For reference:

        % python manage.py shell
        Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29)
        [GCC 4.2.1 (Apple Inc. build 5646)] on darwin
        Type "help", "copyright", "credits" or "license" for more information.
        (InteractiveConsole)
        >>> from django.core.urlresolvers import reverse
        >>> reverse("cyclops-depict", kwargs=dict())
        '/depict'
        >>> reverse("cyclops-depict", kwargs=dict(smiles="CO"))
        Traceback (most recent call last):
          File "<console>", line 1, in <module>
          File "/Library/Python/2.6/site-packages/django/core/urlresolvers.py", line 356, in reverse
            *args, **kwargs)))
          File "/Library/Python/2.6/site-packages/django/core/urlresolvers.py", line 302, in reverse
            "arguments '%s' not found." % (lookup_view_s, args, kwargs))
        NoReverseMatch: Reverse for 'cyclops-depict' with arguments '()' and keyword arguments '{'smiles': 'CO'}' not found.
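    The short answer: reverse() and {% url %} only build the path portion of a URL and know nothing about query strings, which is why every keyword argument must match a group in the URL regexp. The query string has to be appended by hand. A sketch of one way to do it (depict_url is a made-up helper name):

        from django.core.urlresolvers import reverse
        from urllib import urlencode  # Python 2.x

        def depict_url(smiles, width=200, height=200):
            # reverse() resolves '/depict'; the query string is appended manually.
            # Note that urlencode() does not guarantee parameter order.
            return reverse("cyclops-depict") + "?" + urlencode(
                {"smiles": smiles, "width": width, "height": height})

    In a template, the equivalent is to build the query string inline:

        {% url cyclops-depict %}?smiles={{ input_smiles|urlencode }}&width=200&height=200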

  • crash in calloc

    - by mmd
    I'm trying to debug a program I wrote. I ran it inside gdb and I managed to catch a SIGABRT from inside calloc(). I'm completely confused about how this can arise. Can it be a bug in gcc or even libc??

    More details: My program uses OpenMP. I ran it through valgrind in single-threaded mode with no errors. I also use mmap() to load a 40GB file, but I doubt that is relevant. Inside gdb, I'm running with 30 threads. Several identical runs (same input and command line) finished correctly, until the problematic one that I caught. On the surface this suggests there might be a race condition of some type. However, the SIGABRT comes from calloc(), which is out of my control. Here is some relevant gdb output:

        (gdb) info threads
        [...]
        * 11 Thread 0x7ffff0056700 (LWP 73449)  0x00007ffff6a948a5 in raise () from /lib64/libc.so.6
        [...]
        (gdb) thread 11
        [Switching to thread 11 (Thread 0x7ffff0056700 (LWP 73449))]#0  0x00007ffff6a948a5 in raise () from /lib64/libc.so.6
        (gdb) bt
        #0  0x00007ffff6a948a5 in raise () from /lib64/libc.so.6
        #1  0x00007ffff6a96085 in abort () from /lib64/libc.so.6
        #2  0x00007ffff6ad1fe7 in __libc_message () from /lib64/libc.so.6
        #3  0x00007ffff6ad7916 in malloc_printerr () from /lib64/libc.so.6
        #4  0x00007ffff6adb79f in _int_malloc () from /lib64/libc.so.6
        #5  0x00007ffff6adbdd6 in calloc () from /lib64/libc.so.6
        #6  0x000000000040e87f in my_calloc (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/../gmapper/../common/my-alloc.h:286
        #7  read_get_hit_list_per_strand (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/mapping.c:1046
        #8  0x000000000041308a in read_get_hit_list (re=<value optimized out>, options=0x632010, n_options=1) at gmapper/mapping.c:1239
        #9  handle_read (re=<value optimized out>, options=0x632010, n_options=1) at gmapper/mapping.c:1806
        #10 0x0000000000404f35 in launch_scan_threads (.omp_data_i=<value optimized out>) at gmapper/gmapper.c:557
        #11 0x00007ffff7230502 in ?? () from /usr/lib64/libgomp.so.1
        #12 0x00007ffff6dfc851 in start_thread () from /lib64/libpthread.so.0
        #13 0x00007ffff6b4a11d in clone () from /lib64/libc.so.6
        (gdb) f 6
        #6  0x000000000040e87f in my_calloc (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/../gmapper/../common/my-alloc.h:286
        286     res = calloc(size, 1);
        (gdb) p size
        $2 = 814080
        (gdb)

    The function my_calloc() is just a wrapper, but the problem is not in there, as the real calloc() call looks legit. These are the limits set in the shell:

        $ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 2067285
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 10240
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 1024
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    The program is not out of memory; it's using 41GB on a machine with 256GB available:

        $ top -b -n 1 | grep gmapper
        73437 user  20   0 41.5g  16g  15g T  0.0  6.6  55:17.24 gmapper-ls

        $ free -m
                     total       used       free     shared    buffers     cached
        Mem:        258437     195567      62869          0         82     189677
        -/+ buffers/cache:       5807     252629
        Swap:            0          0          0

    I compiled using gcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4), with flags -g -O2 -DNDEBUG -mmmx -msse -msse2 -fopenmp -Wall -Wno-deprecated -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS.
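    One reading of that backtrace, offered tentatively: __libc_message and malloc_printerr in frames #2-#3 are glibc's own heap-consistency check aborting, which usually means the heap was corrupted earlier (often by an out-of-bounds or racy write in another thread), not that calloc() itself is buggy; and a single-threaded valgrind run would not have exercised the suspected race. Two standard ways to narrow it down (the binary name is taken from the top output above; program arguments are elided):

        # glibc's built-in heap checking: abort at the first detected
        # inconsistency instead of at some later, unrelated calloc() call
        MALLOC_CHECK_=3 ./gmapper-ls ...

        # valgrind's data-race detectors, run with the full 30 threads
        valgrind --tool=helgrind ./gmapper-ls ...
        valgrind --tool=drd ./gmapper-ls ...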

  • A simple Python deployment problem - a whole world of pain

    - by Evgeny
    We have several Python 2.6 applications running on Linux. Some of them are Pylons web applications; others are simply long-running processes that we run from the command line using nohup. We're also using virtualenv, both in development and in production.

    What is the best way to deploy these applications to a production server? In development we simply get the source tree into any directory, set up a virtualenv and run - easy enough. We could do the same in production, and perhaps that really is the most practical solution, but it just feels a bit wrong to run svn update in production. We've also tried fab, but it just never works the first time; for every application something else goes wrong. It strikes me that the whole process is just too hard, given that what we're trying to achieve is fundamentally very simple. Here's what we want from a deployment process:

    - We should be able to run one simple command to deploy an updated version of an application. (If the initial deployment involves a bit of extra complexity, that's fine.)
    - When we run this command it should copy certain files, either out of a Subversion repository or out of a local working copy, to a specified "environment" on the server, which probably means a different virtualenv. We have both staging and production versions of the applications on the same server, so they need to somehow be kept separate. If it installs into site-packages, that's fine too, as long as it works.
    - We have some configuration files on the server that should be preserved (i.e. not overwritten or deleted by the deployment process).
    - Some of these applications import modules from other applications, so they need to be able to reference each other as packages somehow. This is the part we've had the most trouble with! I don't care whether it works via relative imports, site-packages or whatever, as long as it works reliably in both development and production.
    - Ideally the deployment process should automatically install external packages that our applications depend on (e.g. psycopg2).

    That's really it! How hard can it be?
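    For what it's worth, a sketch of one common pattern that covers most of these points: give each application a setup.py so that cross-application imports become ordinary installed packages, keep one virtualenv per application and environment, and let pip pull in dependencies. Every path and URL below is a placeholder, not a recommendation of specific locations:

        # one env per application+environment keeps staging and production apart
        ENV=/srv/envs/myapp-staging
        virtualenv --no-site-packages $ENV

        # external dependencies (psycopg2 etc.) pinned in requirements.txt
        $ENV/bin/pip install -r requirements.txt

        # install the app itself straight from Subversion
        $ENV/bin/pip install -e svn+https://svn.example.com/myapp/trunk#egg=myapp

    Config files then live outside the env (e.g. under /etc/myapp/), so redeploying never touches them, and the "one simple command" is a short shell script wrapping the three lines above.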
