Search Results

Search found 5586 results on 224 pages for 'global illumination'.

Page 77/224 | < Previous Page | 73 74 75 76 77 78 79 80 81 82 83 84  | Next Page >

  • New Stateful session bean instance without calling lookup

    - by kislo_metal
    Hi! Scenario: I have a @Singleton UserFactory (it could also be @Stateless) whose createSession() method produces a @Stateful UserSession bean via a manual JNDI lookup. If I inject the bean with @EJB, I get the same instance every time fromFactory() is called (as it should be). What I want is to get a new instance of UserSession without performing a lookup. Q1: how can I obtain a new instance of a @Stateful session bean? Code:

        @Singleton
        @Startup
        @LocalBean
        public class UserFactory {

            @EJB
            private UserSession session;

            public UserFactory() {
            }

            @Schedule(second = "*/1", minute = "*", hour = "*")
            public void creatingInstances() {
                try {
                    InitialContext ctx = new InitialContext();
                    UserSession session2 = (UserSession) ctx.lookup("java:global/inferno/lic/UserSession");
                    System.out.println("in singleton UUID " + session2.getSessionUUID());
                } catch (NamingException e) {
                    e.printStackTrace();
                }
            }

            @Schedule(second = "*/1", minute = "*", hour = "*")
            public void fromFactory() {
                System.out.println("in singleton UUID " + session.getSessionUUID());
            }

            public UserSession createSession() {
                UserSession session2 = null;
                try {
                    InitialContext ctx = new InitialContext();
                    session2 = (UserSession) ctx.lookup("java:global/inferno/lic/UserSession");
                    System.out.println("in singleton UUID " + session2.getSessionUUID());
                } catch (NamingException e) {
                    e.printStackTrace();
                }
                return session2;
            }
        }

    As I understand it, calling session.getClass().newInstance() is not the best idea. Q2: is that true? I am using GlassFish v3, EJB 3.1.
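
    For reference, a minimal sketch of one lookup-free alternative, assuming CDI is enabled for the module (a beans.xml is present) so that session beans are also CDI beans; in that case javax.enterprise.inject.Instance hands out a fresh dependent-scoped @Stateful instance on every get(). The bean names come from the question, everything else is an assumption:

        // Sketch: ask the container for a new UserSession per call via CDI's
        // Instance<T>, instead of a manual JNDI lookup or newInstance(), which
        // would bypass container management entirely.
        import javax.ejb.Singleton;
        import javax.enterprise.inject.Instance;
        import javax.inject.Inject;

        @Singleton
        public class UserSessionFactory {

            @Inject
            private Instance<UserSession> sessions;

            public UserSession createSession() {
                // Each get() call produces a new container-managed UserSession proxy.
                return sessions.get();
            }
        }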

    Read the article

  • Apache module: is it possible to have asynchronous processing?

    - by prashant2361
    Hi, I have a requirement where I need to send continuous updates to my clients. The client is a browser in this case. We have data that updates every second, so once a client connects to our server we maintain a persistent connection and keep pushing data to it. I am looking for suggestions on how to implement this at the server end. Basically what I need is this:

    1. A client connects to the server. I maintain the socket and metadata about the socket; the metadata describes which updates need to be sent to this client.
    2. The server process now waits for new client connections.
    3. One other process has the list of all open sockets, goes through each of them, and sends the updates if required.

    Can we do something like this in an Apache module:

    1. The Apache process gets the new connection. It maintains the state for the connection, keeps that state in some global memory, and returns to the root process to signal that it is done, so that the root can accept the next connection.
    2. Although the Apache process has returned its status to the root process, it keeps executing in parallel, going through its global store and sending updates to the client, if any.

    So can an Apache process do these things:

    1. Have more than one connection associated with it?
    2. Wait asynchronously for new connections and at the same time process the previous connections?

    Regards, Prashant

    Read the article

  • Python C API from C++ app - know when to lock

    - by Alex
    Hi everyone, I am trying to write a C++ class that calls Python methods of a class that does some I/O operations (file, stdout). The problem I have run into is that my class is called from different threads: sometimes the main thread, sometimes other ones. Obviously I tried to apply the documented approach for making Python calls from multi-threaded native applications. Basically everything starts from PyEval_AcquireLock and PyEval_ReleaseLock, i.e. the global lock. According to the documentation, when a thread is already locked, acquiring the lock again produces a deadlock. And indeed, when my class is called from the main thread, or from another thread that is blocking Python execution, I get a deadlock: Python calls Cfunc1() (a C++ function that internally creates threads which lead to calls into my class), and those calls get stuck on PyEval_AcquireLock because Python is obviously already locked, i.e. waiting for the C++ Cfunc1 call to complete... Everything completes fine if I omit those locks. It also completes fine when the Python interpreter is ready for the next user command, i.e. when the threads call my functions in the background rather than from inside a native call. I am looking for a workaround: I need to distinguish whether taking the global lock is allowed, i.e. whether Python is not already locked and is ready to receive the next command... I tried PyGILState_Ensure, but unfortunately I see a hang. Any known API or solution for this? (Python 2.4)
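
    For reference, a minimal sketch of the PyGILState approach (available since Python 2.3, so it applies to 2.4), assuming the embedding application called Py_Initialize() and PyEval_InitThreads() once at startup, and that the thread which owns the interpreter releases the GIL (for example via PyEval_SaveThread() or a Py_BEGIN_ALLOW_THREADS block) before blocking on the worker threads; without that release, any Ensure() call will block exactly as described above. The function and argument names are illustrative:

        // Sketch: taking and releasing the GIL from an arbitrary native thread.
        // PyGILState_Ensure() is safe to call whether or not the thread already
        // holds the GIL; Release() restores the previous state.
        #include <Python.h>

        void call_python_log(PyObject* callable, const char* message) {
            PyGILState_STATE gstate = PyGILState_Ensure();

            PyObject* result = PyObject_CallFunction(callable, const_cast<char*>("s"), message);
            if (result == NULL) {
                PyErr_Print();      // report any Python exception raised by the call
            } else {
                Py_DECREF(result);
            }

            PyGILState_Release(gstate);
        }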

    Read the article

  • Threading.Timer asynchronously invokes many methods

    - by Dimitar
    Hi guys! Please help! I create a Threading.Timer from Global.asax which invokes many methods, each of which gets data from different services and writes it to files. My question is: how do I make the methods be invoked on a regular basis, let's say every 5 minutes? What I do is declare a timer in Global.asax:

        protected void Application_Start()
        {
            TimerCallback timerDelegate = new TimerCallback(myMainMethod);
            Timer mytimer = new Timer(timerDelegate, null, 0, 300000);
            Application.Add("timer", mytimer);
        }

    The declaration of myMainMethod looks like this:

        public static void myMainMethod(object obj)
        {
            MyDelegateType d1 = new MyDelegateType(getandwriteServiceData1);
            d1.BeginInvoke(null, null);
            MyDelegateType d2 = new MyDelegateType(getandwriteServiceData2);
            d2.BeginInvoke(null, null);
        }

    This approach works fine, but it invokes myMainMethod every 5 minutes. What I need is for the method to be invoked 5 minutes after all the data has been retrieved and written to files on the server. How do I do that?
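
    One common pattern for "5 minutes after completion" scheduling (a sketch of my own, not the poster's code): create the timer as a one-shot with an infinite period, wait inside the callback for the work to finish, and only then re-arm the timer with Change(). The GetAndWriteServiceData1/2 bodies are placeholders:

        // Sketch: a self-rearming one-shot System.Threading.Timer. The next
        // 5-minute countdown only starts once both work items have completed.
        using System;
        using System.Threading;

        public static class ScheduledWork
        {
            private static Timer _timer;
            private const int FiveMinutes = 5 * 60 * 1000;

            public static void Start()
            {
                // dueTime = 0 (fire now), period = Infinite (no automatic repeat).
                _timer = new Timer(MyMainMethod, null, 0, Timeout.Infinite);
            }

            private static void MyMainMethod(object state)
            {
                try
                {
                    // Run both jobs in parallel and block this callback until they finish.
                    IAsyncResult r1 = new Action(GetAndWriteServiceData1).BeginInvoke(null, null);
                    IAsyncResult r2 = new Action(GetAndWriteServiceData2).BeginInvoke(null, null);
                    WaitHandle.WaitAll(new[] { r1.AsyncWaitHandle, r2.AsyncWaitHandle });
                }
                finally
                {
                    // Schedule the next run 5 minutes after everything completed.
                    _timer.Change(FiveMinutes, Timeout.Infinite);
                }
            }

            private static void GetAndWriteServiceData1() { /* call service, write file */ }
            private static void GetAndWriteServiceData2() { /* call service, write file */ }
        }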

    Read the article

  • How to get IQueryable<> from stored procedure (entity framework)

    - by mmcteam
    I want to get an IQueryable<> result when executing a stored procedure. Here is a piece of code that works fine:

        IQueryable<SomeEntitiy> someEntities;
        var globbalyFilteredSomeEntities =
            from se in m_Entities.SomeEntitiy
            where se.GlobalFilter == 1234
            select se;

    I can use this to apply the global filter, and later use the result like this:

        result = globbalyFilteredSomeEntities
            .OrderByDescending(se => se.CreationDate)
            .Skip(500)
            .Take(10);

    What I want to do is use some stored procedures in the global filter. I tried:

    1. Adding the stored procedure to m_Entities, but it returns IEnumerable<> and executes the SP immediately:

        var globbalyFilteredSomeEntities =
            from se in m_Entities.SomeEntitiyStoredProcedure(1234);

    2. Materializing the query using the EFExtensions library, but that is also IEnumerable<>. If I use AsQueryable() and OrderBy(), Skip(), Take(), and after that ToList() to execute the query, I get an exception saying the DataReader is open and I need to close it first (I can't paste the exact error, it is in Russian).

        var globbalyFilteredSomeEntities =
            m_Entities.CreateStoreCommand("exec SomeEntitiyStoredProcedure(1234)")
                .Materialize<SomeEntitiy>();
                //.AsQueryable()
                //.OrderByDescending(se => se.CreationDate)
                //.Skip(500)
                //.Take(10)
                //.ToList();

    Just skipping .AsQueryable() doesn't help either - same exception.
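
    For what it's worth, a sketch of the usual workaround (my assumption, not something stated in the question): fully materialize the stored procedure's rows first, which drains and closes the DataReader, and only then compose ordering and paging with LINQ to Objects. The trade-off is that Skip/Take no longer run in SQL:

        // Sketch, reusing the EFExtensions calls shown above: ToList() executes the
        // SP and closes the reader, so the later composition cannot hit the
        // "DataReader is already open" exception.
        var spRows = m_Entities
            .CreateStoreCommand("exec SomeEntitiyStoredProcedure(1234)")
            .Materialize<SomeEntitiy>()
            .ToList();

        var page = spRows
            .OrderByDescending(se => se.CreationDate)
            .Skip(500)
            .Take(10)
            .ToList();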

    Read the article

  • New projects not built when target platform is set explicitly

    - by stiank81
    I create a new solution with one project, and then change the target platform from "Any CPU" to "x86". After this, new projects that I add don't get built by default, and their target platform doesn't follow the global setting. Why?! Looking at the Configuration Manager, newly added projects are not checked to "Build", and they get the target platform "Any CPU" instead of the globally set x86. Why is this happening? I expect new projects to also get the globally defined x86 target platform. Some things I've tried:

    - Toggling the global platform back to Any CPU, and then to x86 again. No change.
    - Choosing the platform explicitly for the new project. x86 is not available in the list, and when I choose <New..> and try to add it I'm not allowed to, as "...a solution platform with the same name already exists."
    - On the build properties for the new project I can't change the platform in the Configuration section, but I can set "Platform target" to x86 in the General section. It is however not clear whether this actually makes a difference, and it doesn't respond if I change the target platform globally later.

    Initially I thought this was a problem from converting my solution from VS2008 to VS2010, but the problem applies in both places, i.e. when I create a solution in VS2008 and just stay in VS2008 I still get the problem.

    Read the article

  • Magento config XML for adding a controller action to a core admin controller

    - by N. B.
    I'm trying to add a custom action to a core controller by extending it in a local module. Below is the class definition, which resides in magento1_3_2_2/app/code/local/MyCompany/MyModule/controllers/Catalog/ProductController.php:

        class MyCompany_MyModule_Catalog_ProductController extends Mage_Adminhtml_Catalog_ProductController
        {
            public function massAttributeSetAction()
            {
                ...
            }
        }

    Here is my config file at magento1_3_2_2/app/code/local/MyCompany/MyModule/etc/config.xml:

        ...
        <global>
            <rewrite>
                <mycompany_mymodule_catalog_product>
                    <from><![CDATA[#^/catalog_product/massAttributeSet/#]]></from>
                    <to>/mymodule/catalog_product/massAttributeSet/</to>
                </mycompany_mymodule_catalog_product>
            </rewrite>
            <admin>
                <routers>
                    <MyCompany_MyModule>
                        <use>admin</use>
                        <args>
                            <module>MyCompany_MyModule</module>
                            <frontName>MyModule</frontName>
                        </args>
                    </MyCompany_MyModule>
                </routers>
            </admin>
        </global>
        ...

    However, https://example.com/index.php/admin/catalog_product/massAttributeSet/ simply yields an admin 404 page. I know that the module is active - other code is executing fine. I feel it's simply a problem with my XML syntax. Am I going about this the right way? I'm hesitant because I'm not actually rewriting a controller method - I'm adding one entirely. However it does make sense in that the original admin URL won't respond to that action name and will need to be redirected. I'm using Magento 1.3.2.2. Thanks for any guidance.

    Read the article

  • AssemblyResolve event is not firing during compilation of a dynamic assembly for an aspx page.

    - by John
    This one is really pissing me off. Here goes: my goal is to load assemblies at run time that contain embedded aspx, ascx, etc. I would also like not to lock the assembly file on disk, so I can update it at run time without having to restart the application (I know this will leave the previous version(s) loaded). To that end I have written a virtual path provider that does the trick. I have subscribed to the CurrentDomain.AssemblyResolve event so as to redirect the framework to my assemblies. The problem is that when the framework tries to compile the dynamic assembly for the aspx page I get the following:

        Compiler Error Message: CS0400: The type or namespace name 'Pages' could not be found in the global namespace (are you missing an assembly reference?)

        Source Error:
        public class app_resource_pages__version_1_0_0_0__culture_neutral__publickeytoken_null_default_aspx : global::Pages._Default, System.Web.SessionState.IRequiresSessionState, System.Web.IHttpHandler

    I noticed that if I load the assembly with Assembly.Load(AssemblyName) or Assembly.LoadFrom(filename) I don't get the above error. If I load it with Assembly.Load(byte[]) (so as to not lock it), the exception is thrown, but my AssemblyResolve handler, when called, returns the assembly correctly (it is called once). So I am guessing that it is called once when the framework parses the asp markup, but not when it tries to create the dynamic assembly for the aspx page.
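
    As an aside, one workaround often used for the "load without locking" requirement (an assumption on my part; whether it resolves the compiler error in this particular setup is untested): shadow-copy the DLL to a temporary file and Assembly.LoadFrom() the copy, so the original file stays unlocked and replaceable while the loaded assembly still has a file-based codebase that the page compiler can resolve.

        // Sketch: load a private shadow copy instead of Assembly.Load(byte[]).
        using System;
        using System.IO;
        using System.Reflection;

        public static class PluginLoader
        {
            public static Assembly LoadWithoutLocking(string assemblyPath)
            {
                string shadowPath = Path.Combine(
                    Path.GetTempPath(),
                    Path.GetFileNameWithoutExtension(assemblyPath) + "_" + Guid.NewGuid().ToString("N") + ".dll");

                // The copy is what gets locked; the original remains replaceable.
                File.Copy(assemblyPath, shadowPath, true);

                return Assembly.LoadFrom(shadowPath);
            }
        }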

    Read the article

  • How to make placeholder variables in jQuery Validate 1.7?

    - by chobo2
    Hi, I am using jQuery 1.4.2 and the jQuery Validate plugin 1.7 (http://bassistance.de/jquery-plugins/jquery-plugin-validation/). Say I have this example that I just grabbed off some random site (http://www.webreference.com/programming/javascript/jquery/form_validation/):

        <script type="text/javascript">
        $(document).ready(function() {
            $("#form1").validate({
                rules: {
                    name: "required", // simple rule, converted to {required:true}
                    email: {          // compound rule
                        required: true,
                        email: true
                    },
                    url: {
                        url: true
                    },
                    comment: {
                        required: true
                    }
                },
                messages: {
                    comment: "Please enter a comment."
                }
            });
        });
        </script>

    Now, is it possible to do something like this?

        var NameHolder = "name";
        $("#form1").validate({
            rules: {
                NameHolder: "required", // simple rule, converted to {required:true}
                email: {                // compound rule
                    required: true,
                    email: true
                },
                ...

    So basically I want to make a sort of global variable to hold these rule names (which correspond to the name attributes of the HTML controls). My concern is that the names of the HTML controls can change, and it kind of sucks that I would have to go around and change them in many places in my code to make it work again. So basically I am wondering: is there a way to make a global variable to store this name, so that if I need to change the name I only have to change it in one spot in my JavaScript file, sort of like avoiding magic numbers?
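
    A sketch of the usual way to do this (my reading of what is being asked): keys written inside an object literal are taken literally, so build the rules object first and assign the variable key with bracket notation.

        // Sketch: keep the field name in one variable and use bracket notation.
        var nameField = "name";   // the single place to update if the control's name changes

        var rules = {
            email:   { required: true, email: true },
            url:     { url: true },
            comment: { required: true }
        };
        rules[nameField] = "required";   // equivalent to writing  name: "required"

        $("#form1").validate({
            rules: rules,
            messages: { comment: "Please enter a comment." }
        });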

    Read the article

  • How to safely transfer a reference to an object across windows?

    - by Morgan Cheng
    I'm debugging a web application. JavaScript in one window creates an object and passes it as an argument to a global method in another window. Pseudo code is like below:

        var obj = new Foo();
        anotherWin.bar(obj);

    In anotherWin, the argument is stored in a global variable:

        var g_obj;
        function bar(obj) {
            g_obj = obj;
            ...
        }

    When another function later tries to reference g_obj.Id, it throws the exception "Cannot evaluate expression". This happens in IE 8.0.7600.16385 on Windows 7. In the Visual Studio debugger, when this exception happens, g_obj shows as {...} and it looks like all its properties are lost. Perhaps the root reason is that the object is created in one window but only referenced in another window, and the object might be garbage-collected at any time. Is there any way to work around this?
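
    One defensive workaround (my suggestion, not part of the original report): pass a serialized copy instead of a live reference, so the receiving window owns its own object and nothing depends on the creating window staying alive. IE8 provides a native JSON object in standards mode. Note that only data properties survive serialization; methods and prototype information do not.

        // Sketch: hand the other window a detached copy rather than a live reference.
        var obj = new Foo();
        anotherWin.bar(JSON.stringify(obj));   // plain text crosses the window boundary

        // In anotherWin:
        var g_obj;
        function bar(serialized) {
            g_obj = JSON.parse(serialized);    // a fresh object owned by this window
        }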

    Read the article

  • Request-local storage in ASP.NET (accessible to the code from IHttpModule implementation)

    - by IgorK
    I need to have an object hang around between two events I'm interested in: PreRequestHandlerExecute (where I create an instance of my object and want to save it) and PostRequestHandlerExecute (where I want to get the object back). After the second event the object is not needed for my purposes and should be discarded, either by the storage or by my explicit action. So the ideal context for my object is per request (with guaranteed no sharing issues when different threads are serving requests... or processes/servers :) ).

    Take into account that the actual implementation is being made from an HttpModule and is supposed to be a pluggable solution for already-written web apps, so the option of keeping some state in static/instance variables in Global.asax doesn't look good: I would have to modify Global.asax in every web application. Cache seems too broad for this use.

    I tried to see whether httpContext.Application (of type HttpApplicationState) is good for me or not, but I cannot work out whether it is exactly per HttpApplication instance or not (AFAIK you can have several HttpApplication instances used on different threads, and therefore serving several requests simultaneously; in that case storage shared between threads will not work correctly. Otherwise I would use it, because one HttpApplication instance serves exactly one request at a time). Something could be done by storing state on the HttpModule instance, if I knew for sure that it is bound 1-to-1 with every running HttpApplication instance (but again, I need proof that an HttpApplication instance is 1-to-1 with my HttpModule's instance).

    Any valuable and reputable links on these topics are much appreciated... It would be great to find something particularly well suited for the per-request situation, because otherwise I may end up with something ugly: probably either some 'broader' scoped storage plus hacks to use different keys in the storage for different requests, or a thread-local approach, thereby committing to the theory that IIS/ASP.NET will never serve the first event from one thread and the second event from another, and so on.
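
    For reference, a sketch of the per-request store ASP.NET already ships with: HttpContext.Items is scoped to exactly one request and is reachable from an IHttpModule in both events. The module, key, and header names below are made up:

        // Sketch: HttpContext.Items lives and dies with a single request, so it can
        // carry an object from PreRequestHandlerExecute to PostRequestHandlerExecute
        // without any cross-request or cross-thread sharing.
        using System;
        using System.Web;

        public class TimingModule : IHttpModule
        {
            private const string ItemKey = "TimingModule.Started";

            public void Init(HttpApplication application)
            {
                application.PreRequestHandlerExecute += delegate(object sender, EventArgs e)
                {
                    HttpContext ctx = ((HttpApplication)sender).Context;
                    ctx.Items[ItemKey] = DateTime.UtcNow;            // stored per request
                };

                application.PostRequestHandlerExecute += delegate(object sender, EventArgs e)
                {
                    HttpContext ctx = ((HttpApplication)sender).Context;
                    DateTime started = (DateTime)ctx.Items[ItemKey]; // same request, same object
                    ctx.Response.AppendHeader("X-Handler-Millis",
                        ((int)(DateTime.UtcNow - started).TotalMilliseconds).ToString());
                };
            }

            public void Dispose() { }
        }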

    Read the article

  • Custom permalinks switching function. Please check this logic...

    - by Scott B
    I've got a setting in my theme options panel to allow the user to switch the permalink setting to support friendly URLs. I'm only allowing /%postname%/ and /%postname%.html as options. I don't want to trigger an .htaccess rewrite every time someone accesses a page on the site or views the theme options, so I'm trying to code this to avoid that. I've got an input field in theme options called $myTheme_permalinks. The default value for this is "/%postname%/" but the user can also change it to "/%postname%.html". Here's the code at the top of theme options to handle this setting. Does this look sound?

        if ((get_option('myTheme_permalinks') == "/%postname%/"
                && get_option('permalink_structure') !== "/%postname%/")
            || !get_option('myTheme_permalinks')) {

            require_once(ABSPATH . '/wp-admin/includes/misc.php');
            require_once(ABSPATH . '/wp-admin/includes/file.php');
            global $wp_rewrite;
            $wp_rewrite->set_permalink_structure('/%postname%/');
            $wp_rewrite->flush_rules();
            update_option('permalink_structure', '/%postname%/');
            update_option('myTheme_permalinks', '/%postname%/');

        } else if (get_option('myTheme_permalinks') == "/%postname%.html"
                && get_option('permalink_structure') !== "/%postname%.html") {

            require_once(ABSPATH . '/wp-admin/includes/misc.php');
            require_once(ABSPATH . '/wp-admin/includes/file.php');
            global $wp_rewrite;
            $wp_rewrite->set_permalink_structure('/%postname%.html');
            $wp_rewrite->flush_rules();
            update_option('permalink_structure', '/%postname%.html');
        }

    Read the article

  • How to optimize Conway's game of life for CUDA?

    - by nlight
    I've written this CUDA kernel for Conway's Game of Life:

        __global__ void gameOfLife(float* returnBuffer, int width, int height)
        {
            unsigned int x = blockIdx.x*blockDim.x + threadIdx.x;
            unsigned int y = blockIdx.y*blockDim.y + threadIdx.y;
            float p = tex2D(inputTex, x, y);
            float neighbors = 0;
            neighbors += tex2D(inputTex, x+1, y);
            neighbors += tex2D(inputTex, x-1, y);
            neighbors += tex2D(inputTex, x, y+1);
            neighbors += tex2D(inputTex, x, y-1);
            neighbors += tex2D(inputTex, x+1, y+1);
            neighbors += tex2D(inputTex, x-1, y-1);
            neighbors += tex2D(inputTex, x-1, y+1);
            neighbors += tex2D(inputTex, x+1, y-1);
            __syncthreads();
            float final = 0;
            if (neighbors < 2)
                final = 0;
            else if (neighbors > 3)
                final = 0;
            else if (p != 0)
                final = 1;
            else if (neighbors == 3)
                final = 1;
            __syncthreads();
            returnBuffer[x + y*width] = final;
        }

    I am looking for errors/optimizations. Parallel programming is quite new to me and I am not sure I understand how to do it right. The rest of the app is: memcpy the input array to a 2D texture inputTex stored in a CUDA array; the output is memcpy-ed from global memory to the host and then dealt with. As you can see, a thread deals with a single pixel. I am unsure if that is the fastest way, as some sources suggest doing a row or more per thread. If I understand correctly, NVIDIA themselves say that the more threads, the better. I would love advice on this from someone with practical experience.

    Read the article

  • PHP Facebook Cronjob with offline access

    - by Mohamed Salem
    1: The code to greet the user, ask for his permission, and store his session data so that we can use it in a cronjob afterwards.

        <?php
        $db_server = "localhost";
        $db_username = "username";
        $db_password = "password";
        $db_name = "databasename";
        #go to line 85, the script actually starts there
        mysql_connect($db_server,$db_username,$db_password);
        mysql_select_db($db_name);
        #you have to create a database to store session values.
        #if you do not know what columns there should be look at line 76 to see column names.
        #make them all varchars
        # Now lets load the FB GRAPH API
        require './facebook.php';
        // Create our Application instance.
        global $facebook;
        $facebook = new Facebook(array(
            'appId' => '121036530138',
            'secret' => '9bbec378147064',
            'cookie' => false,));
        # Lets set up the permissions we need and set the login url in case we need it.
        $par['req_perms'] = "friends_about_me,friends_education_history,friends_likes, friends_interests,friends_location,friends_religion_politics, friends_work_history,publish_stream,friends_activities, friends_events, friends_hometown,friends_location ,user_interests,user_likes,user_events, user_about_me,user_status,user_work_history,read_requests, read_stream,offline_access,user_religion_politics,email,user_groups";
        $loginUrl = $facebook->getLoginUrl($par);

        function save_session($session){
            global $facebook;
            # OK lets go to the database and see if we have a session stored
            $sid=mysql_query("Select access_token from facebook_user WHERE uid =".$session['uid']);
            $session_id=mysql_fetch_row($sid);
            if (is_array($session_id)) {
                # We have a stored session, but is it valid?
                echo " We have a session, but is it valid?";
                try {
                    $attachment = array('access_token' => $session_id[0]);
                    $ret_code=$facebook->api('/me', 'GET', $attachment);
                } catch (Exception $e) {
                    # We don't have a good session so
                    echo " our old session is not valid, let's delete saved invalid session data ";
                    $res = mysql_query("delete from facebook_user WHERE uid =".$session['uid']);
                    #save new good session
                    #to see what is our session data: print_r($session);
                    if (is_array($session)) {
                        $sql="insert into facebook_user (session_key,uid,expires,secret,access_token,sig) VALUES ('".$session['session_key']."','".$session['uid']."','".$session['expires']."','".$session['secret']."','".$session['access_token']."','".$session['sig']."');";
                        $res = mysql_query($sql);
                        return $session['access_token'];
                    }
                    # this should never ever happen
                    echo " Something is terribly wrong: Our old session was bad, and now we cannot get the new session";
                    return;
                }
                echo " Our old stored session is valid ";
                return $session_id[0];
            } else {
                echo " no stored session, this means the user never subscribed to our application before. ";
                # let's store the session
                $session = $facebook->getSession();
                if (is_array($session)) {
                    # Yes we have a session! so lets store it!
                    $sql="insert into facebook_user (session_key,uid,expires,secret,access_token,sig) VALUES ('".$session['session_key']."','".$session['uid']."','".$session['expires']."','".$session['secret']."','".$session['access_token']."','".$session['sig']."');";
                    $res = mysql_query($sql);
                    return $session['access_token'];
                }
            }
        }

        #this is the first meaningful line of this script.
        $session = $facebook->getSession();
        # Is the user already subscribed to our application?
        if ( is_null($session) ) {
            # no he is not
            #send him to permissions page
            header( "Location: $loginUrl" );
        } else {
            #yes, he is already subscribed, or subscribed just now
            #in case he just subscribed now, save his session information
            $access_token=save_session($session);
            echo " everything is ok";
            # write your code here to do something afterwards
        }
        ?>

    Error:

        Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent (output started at /home/content/28/9687528/html/ss/src/indexx.php:1) in /home/content/28/9687528/html/ss/src/facebook.php on line 49
        Fatal error: Call to undefined method Facebook::getSession() in /home/content/28/9687528/html/ss/src/indexx.php on line 86

    2: A cronjob template that reads the stored session of a user from the database and uses his session data to work on his behalf, like reading status posts or publishing posts, etc.

        <?php
        $db_server = "localhost";
        $db_username = "username";
        $db_password = "pass";
        $db_name = "database";
        # Lets connect to the Database and set up the table
        $link = mysql_connect($db_server,$db_username,$db_password);
        mysql_select_db($db_name);
        # Now lets load the FB GRAPH API
        require './facebook.php';
        // Create our Application instance.
        global $facebook;
        $facebook = new Facebook(array(
            'appId' => 'appid',
            'secret' => 'secret',
            'cookie' => false, ));

        function get_check_session($uidCheck){
            global $facebook;
            # This function basically checks for a stored session and if we have one it returns it
            # OK lets go to the database and see if we have a session stored
            $sid=mysql_query("Select access_token from facebook_user WHERE uid =".$uidCheck);
            $session_id=mysql_fetch_row($sid);
            if (is_array($session_id)) {
                # We have a session
                # but, is it valid?
                try {
                    $attachment = array('access_token' => $session_id[0],);
                    $ret_code=$facebook->api('/me', 'GET', $attachment);
                } catch (Exception $e) {
                    # We don't have a good session so
                    echo " User ".$uidCheck." removed the application, or there is some other access problem. ";
                    # let's delete stored data
                    $res = mysql_query("delete from facebook_user where WHERE uid =".$uidCheck);
                    return;
                }
                return $session_id[0];
            } else {
                # "no stored session";
                echo " error:newsFeedcrontab.php No stored sessions. This should not have happened ";
            }
        }

        # get all users that have given us offline access
        $users = getUsers();
        foreach($users as $user){
            # now for each user, check if they are still subscribed to our application
            echo " Checking user".$user;
            $access_token=get_check_session($user);
            # If we've not got an access_token we actually need to login.
            # but in the crontab, we just log the error, there is no way we can find the user to give us permission here.
            if ( is_null($access_token) ) {
                echo " error: newsFeedcrontab.php There is no access token for the user ".$user." ";
            } else {
                #we are going to read the newsfeed of user. There are user's friends' posts in this newsfeed
                try{
                    $attachment = array('access_token' => $access_token);
                    $result=$facebook->api('/me/home', 'GET', $attachment);
                }catch(Exception $e){
                    echo " error: newsfeedcrontab.php, cannot get feed of ".$user.$e;
                }
                #do something with the result here
                #but what does the result look like?
                #go to http://developers.facebook.com/docs/reference/api/user/ and click on the "home" link under connections

                #we can also read the home of user. Home is the wall of the user who has given us offline access.
                try{
                    $attachment = array('access_token' => $access_token);
                    $result=$facebook->api('/me/feed', 'GET', $attachment);
                }catch(Exception $e){
                    echo " error: newsfeedcrontab.php, cannot get wall of ".$user.$e;
                }
                #do something with the result here
                #
                #but what does the result look like?
                #go to http://developers.facebook.com/docs/reference/api/user/ and click on the "feed" link under connections
            }
        }

        function getUsers(){
            $sql = "SELECT distinct(uid) from facebook_user Where 1";
            $result = mysql_query($sql);
            while($row = mysql_fetch_array($result)){
                $rows [] = $row['uid'];
            }
            print_r($rows);
            return $rows;
        }

        mysql_close($link);
        ?>

    Error:

        Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent (output started at /home/content/28/9687528/html/ss/src/cron.php:1) in /home/content/28/9687528/html/ss/src/facebook.php on line 49
        Warning: mysql_fetch_array(): supplied argument is not a valid MySQL result resource in /home/content/28/9687528/html/ss/src/cron.php on line 110
        Warning: Invalid argument supplied for foreach() in /home/content/28/9687528/html/ss/src/cron.php on line 64

    Read the article

  • passenger won't spawn more than 6 instances despite passenger_max_pool_size = 30

    - by mrD
    I have some problems with Passenger + nginx and hope someone might be able to help me and point me in the right direction. I've set passenger_max_pool_size to 30, but Passenger never spawns more than 6 instances. I'm loading a web page that uses Ajax to load 30 sub-pages from the server, but because Passenger only spawns 6 instances they are queued. What confuses me is that "Waiting on global queue" is 0, yet I can see in my browser that everything gets queued: when the first 6 Ajax requests are done, the next 6 start loading. What am I missing? :) This is the output from passenger-status (I had about 24 requests in the browser waiting for a response from the server when I checked this status):

        ----------- General information -----------
        max      = 30
        count    = 6
        active   = 6
        inactive = 0
        Waiting on global queue: 0

        ----------- Domains -----------
        /srv/rails/production/current:
          PID: 28428   Sessions: 1   Processed: 42   Uptime: 5m 43s
          PID: 28424   Sessions: 1   Processed: 23   Uptime: 5m 43s
          PID: 28422   Sessions: 1   Processed: 7    Uptime: 5m 43s
          PID: 28420   Sessions: 1   Processed: 22   Uptime: 6m 0s
          PID: 28426   Sessions: 1   Processed: 39   Uptime: 5m 43s
          PID: 28430   Sessions: 1   Processed: 7    Uptime: 5m 43s

    These are my Passenger-related settings in nginx.conf:

        http {
            passenger_root /opt/ruby/lib/ruby/gems/1.8/gems/passenger-2.2.11;
            passenger_ruby /opt/ruby/bin/ruby;
            passenger_max_pool_size 30;

    Read the article

  • How should rules for Aggregate Roots be enforced?

    - by MylesRip
    While searching the web, I came across a list of rules from Eric Evans' book that should be enforced for aggregates:

    - The root Entity has global identity and is ultimately responsible for checking invariants.
    - Root Entities have global identity. Entities inside the boundary have local identity, unique only within the Aggregate.
    - Nothing outside the Aggregate boundary can hold a reference to anything inside, except to the root Entity. The root Entity can hand references to the internal Entities to other objects, but they can only use them transiently (within a single method or block).
    - Only Aggregate Roots can be obtained directly with database queries. Everything else must be done through traversal.
    - Objects within the Aggregate can hold references to other Aggregate roots.
    - A delete operation must remove everything within the Aggregate boundary all at once.
    - When a change to any object within the Aggregate boundary is committed, all invariants of the whole Aggregate must be satisfied.

    This all seems fine in theory, but I don't see how these rules would be enforced in the real world. Take rule 3, for example. Once the root entity has given an external object a reference to an internal entity, what's to keep that external object from holding on to the reference beyond the single method or block? (If the enforcement of this is platform-specific, I would be interested in knowing how it would be enforced within a C#/.NET/NHibernate environment.)
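
    As an illustration of how rule 3 is usually approximated in C# (a convention rather than a hard guarantee, and not something prescribed by the book): the root keeps its internal entities private, hands out only read-only views, and funnels every change through its own methods, where the aggregate's invariants are checked.

        // Sketch: an Order aggregate root with OrderLine as an internal entity.
        using System;
        using System.Collections.Generic;
        using System.Collections.ObjectModel;
        using System.Linq;

        public sealed class OrderLine                 // local identity only
        {
            public int LineNumber { get; private set; }
            public decimal Amount { get; private set; }

            internal OrderLine(int lineNumber, decimal amount)
            {
                LineNumber = lineNumber;
                Amount = amount;
            }
        }

        public sealed class Order                     // aggregate root, global identity
        {
            private readonly List<OrderLine> _lines = new List<OrderLine>();

            public Guid Id { get; private set; }

            public Order(Guid id) { Id = id; }

            // Callers can read the lines but cannot modify the collection.
            public ReadOnlyCollection<OrderLine> Lines
            {
                get { return _lines.AsReadOnly(); }
            }

            // All changes go through the root, which enforces the aggregate's invariants.
            public void AddLine(decimal amount)
            {
                if (amount <= 0) throw new ArgumentOutOfRangeException("amount");
                _lines.Add(new OrderLine(_lines.Count + 1, amount));
            }

            public decimal Total
            {
                get { return _lines.Sum(l => l.Amount); }
            }
        }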

    Read the article

  • PHP class scope when calling a non-method function: not accessing all class members

    - by Aglystas
    So I'm using a standalone function, from within a class, that uses the class it's being called from. Here's the function:

        function catalogProductLink($product_id, $product_name, $categories = true)
        {
            // This is the class that the function is called from
            global $STATE;
            if ($categories) {
                // The $STATE->category_id is the property I want to access, which I can't
                if (is_array($STATE->category_id)) {
                    foreach ($STATE->category_id as $cat_id) {
                        if ($cat_id == 0) continue;
                        $str .= "c$cat_id/";
                    }
                }
            }
            $str .= catalogUrlKeywords($product_name) . '-p' . $product_id . '.html';
            return $str;
        }

    And here's the function call, which is being made from within the $STATE class:

        $redirect = catalogProductLink($this->product_id, $tempProd->product_name, true, false);

    The object that I need access to is the $STATE object that has been declared global. Prior to this function call there are lots of public properties populated, but when I look at the $STATE object within the function scope it loses all the properties but one, product_id. The property that matters for this function is the category_id property, which is an array of category IDs. I'm wondering why I don't have access to all the public properties of the $STATE object, and how I can get access to them.
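
    A sketch of one way to sidestep the global entirely (my suggestion, not from the post): pass the object in explicitly, so the function sees exactly the instance the caller has, fully populated.

        <?php
        // Sketch: take the state object as a parameter instead of reaching for a global.
        function catalogProductLink($state, $product_id, $product_name, $categories = true)
        {
            $str = '';
            if ($categories && is_array($state->category_id)) {
                foreach ($state->category_id as $cat_id) {
                    if ($cat_id == 0) {
                        continue;
                    }
                    $str .= "c$cat_id/";
                }
            }
            return $str . catalogUrlKeywords($product_name) . '-p' . $product_id . '.html';
        }

        // Inside a method of the $STATE class:
        // $redirect = catalogProductLink($this, $this->product_id, $tempProd->product_name);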

    Read the article

  • Visual Studio 2008 Installer, Custom Action. Breakpoint not firing.

    - by Snake
    Hi, I've got an installer with a custom action project. I want the action to fire at install time. The action does fire: when I write something to the event log, it works perfectly. But I really need to debug the file, since the action is quite complicated. So I've got the following installer class:

        namespace InstallerActions
        {
            using System;
            using System.Collections;
            using System.Collections.Generic;
            using System.ComponentModel;
            using System.Configuration.Install;
            using System.Diagnostics;
            using System.IO;

            [RunInstaller(true)]
            // ReSharper disable UnusedMember.Global
            public partial class DatabaseInstallerAction : Installer
            // ReSharper restore UnusedMember.Global
            {
                public DatabaseInstallerAction()
                {
                    InitializeComponent();
                }

                public override void Install(IDictionary stateSaver)
                {
                    base.Install(stateSaver);
                    System.Diagnostics.Debugger.Launch();
                    System.Diagnostics.Debugger.Break(); // none of these work
                    Foo();
                }

                private static void Foo()
                {
                }
            }
        }

    The installer just finishes without warning me: it doesn't break, and it doesn't ask me to attach a debugger. I've tried both Debug and Release mode. Am I missing something? Thanks, -Snake

    Read the article

  • How would I go about sharing variables in a C++ class with Lua?

    - by Nicholas Flynt
    I'm fairly new to Lua. I've been working on implementing Lua scripting for logic in a game engine I'm putting together. I've had no trouble so far getting Lua up and running through the engine, and I'm able to call Lua functions from C and C functions from Lua. The way the engine works now, each Object class contains a set of variables that the engine can quickly iterate over to draw or process for physics. While game objects all need to access and manipulate these variables in order for the game engine itself to see any changes, they are free to create their own variables, and Lua is exceedingly flexible about this, so I don't foresee any issues.

    Anyway, currently the game-engine side of things is sitting in C land, and I really want it to stay there for performance reasons. So in an ideal world, when spawning a new game object, I'd need to be able to give Lua read/write access to this standard set of variables as part of the Lua object's base class, which its game logic could then proceed to run wild with.

    So far I'm keeping two separate tables of objects in place: Lua spawns a new game object, which adds itself to a numerically indexed global table of objects and then proceeds to call a C++ function, which creates a new GameObject class instance and registers the Lua index (an int) with the class. So far so good: C++ functions can now see the Lua object and easily perform operations or call functions in Lua land using dostring.

    What I need to do now is take the C++ variables, part of the GameObject class, and expose them to Lua, and this is where Google is failing me. I've encountered a very nice method here which details the process using tags, but I've read that this method is deprecated in favor of metatables. What is the ideal way to accomplish this? Is it worth the hassle of learning how to pass class definitions around using Luabind or some equivalent, or is there a simple way I can just register each variable (once, at spawn time) with the global Lua object? What's the "current" best way to do this, as of Lua 5.1.4?
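
    For reference, a minimal sketch of the metatable route under Lua 5.1 (my own illustration; GameObject and its x/y fields stand in for the engine's real members): wrap the C++ pointer in a userdata and route reads and writes through __index/__newindex.

        // Sketch (Lua 5.1 C API): expose two engine-side floats to Lua scripts.
        #include <cstring>
        extern "C" {
        #include <lua.h>
        #include <lauxlib.h>
        }

        struct GameObject { float x, y; };

        static int gameobject_index(lua_State* L) {
            GameObject** ud = static_cast<GameObject**>(luaL_checkudata(L, 1, "GameObject"));
            const char* key = luaL_checkstring(L, 2);
            if (std::strcmp(key, "x") == 0)      lua_pushnumber(L, (*ud)->x);
            else if (std::strcmp(key, "y") == 0) lua_pushnumber(L, (*ud)->y);
            else                                 lua_pushnil(L);
            return 1;
        }

        static int gameobject_newindex(lua_State* L) {
            GameObject** ud = static_cast<GameObject**>(luaL_checkudata(L, 1, "GameObject"));
            const char* key = luaL_checkstring(L, 2);
            lua_Number value = luaL_checknumber(L, 3);
            if (std::strcmp(key, "x") == 0)      (*ud)->x = static_cast<float>(value);
            else if (std::strcmp(key, "y") == 0) (*ud)->y = static_cast<float>(value);
            return 0;
        }

        // Pushes a Lua handle for an existing engine object onto the stack.
        void push_gameobject(lua_State* L, GameObject* obj) {
            GameObject** ud = static_cast<GameObject**>(lua_newuserdata(L, sizeof(GameObject*)));
            *ud = obj;
            if (luaL_newmetatable(L, "GameObject")) {   // created once, then reused
                lua_pushcfunction(L, gameobject_index);
                lua_setfield(L, -2, "__index");
                lua_pushcfunction(L, gameobject_newindex);
                lua_setfield(L, -2, "__newindex");
            }
            lua_setmetatable(L, -2);
        }
        // After push_gameobject(), a script can read obj.x and assign obj.y directly.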

    Read the article

  • What exactly is a reentrant function?

    - by eSKay
    Most of the time, the definition of reentrancy is quoted from Wikipedia:

        A computer program or routine is described as reentrant if it can be safely called again before its previous invocation has been completed (i.e. it can be safely executed concurrently). To be reentrant, a computer program or routine:
        1. Must hold no static (or global) non-constant data.
        2. Must not return the address of static (or global) non-constant data.
        3. Must work only on the data provided to it by the caller.
        4. Must not rely on locks to singleton resources.
        5. Must not modify its own code (unless executing in its own unique thread storage).
        6. Must not call non-reentrant computer programs or routines.

    How is "safely" defined? If a program can be safely executed concurrently, does that always mean it is reentrant? What exactly is the common thread between the six points mentioned that I should keep in mind while checking my code for reentrancy? Also:

    - Are all recursive functions reentrant?
    - Are all thread-safe functions reentrant?
    - Are all recursive and thread-safe functions reentrant?

    While writing this question, one thing comes to mind: are terms like reentrancy and thread safety absolute at all, i.e. do they have fixed, concrete definitions? If they are not, this question is not very meaningful. Thanks!
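
    To make the first two rules concrete, here is a small illustrative pair (my own example, not from the quoted text): one function that is not reentrant because it keeps and returns static data, and a reentrant rewrite that works only on caller-provided storage.

        /* Sketch: non-reentrant vs. reentrant formatting helper. */
        #include <stdio.h>

        /* Not reentrant: holds static data and returns its address (rules 1 and 2). */
        const char *format_id_bad(int id) {
            static char buf[32];            /* shared by every caller and invocation */
            sprintf(buf, "ID-%04d", id);
            return buf;                     /* a second call clobbers the first result */
        }

        /* Reentrant: works only on the data provided by the caller (rule 3). */
        void format_id_good(int id, char *out, size_t out_len) {
            snprintf(out, out_len, "ID-%04d", id);
        }

        int main(void) {
            char a[32], b[32];
            format_id_good(1, a, sizeof a);
            format_id_good(2, b, sizeof b);
            printf("%s %s\n", a, b);        /* prints: ID-0001 ID-0002 */
            return 0;
        }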

    Read the article

  • Define a closure as a method of a class

    - by user272839
    Hi, I'm trying to play with PHP 5.3 and closures. I saw here (Listing 7, "Closure inside an object": http://www.ibm.com/developerworks/opensource/library/os-php-5.3new2/index.html) that it should be possible to use $this in the callback function, but it's not. So I tried to pass $this in as a use variable:

        $self = $this;
        $foo = function() use ($self) {
            // do something with $self
        };

    So, to use the same example:

        class Dog
        {
            private $_name;
            protected $_color;

            public function __construct($name, $color)
            {
                $this->_name = $name;
                $this->_color = $color;
            }

            public function greet($greeting)
            {
                $self = $this;
                return function() use ($greeting, $self) {
                    echo "$greeting, I am a {$self->_color} dog named {$self->_name}.";
                };
            }
        }

        $dog = new Dog("Rover", "red");
        $dog->greet("Hello");

        Output: Hello, I am a red dog named Rover.

    First of all, this example does not print the string but returns the function; that's not my problem, though. Secondly, I can't access private or protected members, because the callback function is a global function and not in the context of the Dog object. That's my problem. It's the same as:

        function greet($greeting, $object)
        {
            echo "$greeting, I am a {$object->_color} dog named {$object->_name}.";
        }

    And I want:

        public function greet($greeting)
        {
            echo "$greeting, I am a {$this->_color} dog named {$this->_name}.";
        }

    which belongs to Dog and is not global. I hope that I am clear...
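
    For what it's worth, a PHP 5.3-friendly sketch of one workaround (my assumption, not from the linked article): copy the private values into locals and let the closure capture those, so the closure never needs access to Dog's internals. From PHP 5.4 onward this becomes unnecessary, because closures created inside a method are bound to $this automatically.

        <?php
        class Dog
        {
            private $_name;
            protected $_color;

            public function __construct($name, $color)
            {
                $this->_name  = $name;
                $this->_color = $color;
            }

            public function greet($greeting)
            {
                // Captured by value: the closure needs no access to Dog's internals.
                $name  = $this->_name;
                $color = $this->_color;

                return function () use ($greeting, $name, $color) {
                    echo "$greeting, I am a {$color} dog named {$name}.";
                };
            }
        }

        $dog   = new Dog("Rover", "red");
        $hello = $dog->greet("Hello");
        $hello();   // prints: Hello, I am a red dog named Rover.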

    Read the article

  • PHP Preserve scope when calling a function

    - by Joshua
    I have a function that includes a file based on the string that gets passed to it, i.e. the action variable from the query string. I use this for filtering purposes, etc., so people can't include files they shouldn't be able to, and if the file doesn't exist a default file is loaded instead. The problem is that when the function runs and includes the file, scope is lost because the include ran inside a function. This becomes a problem because I use a global configuration file, and then specific configuration files for each module on the site.

    The way I'm doing it at the moment is defining the variables I want to be able to use as global and then adding them into the top of the filtering function. Is there any easier way to do this, e.g. by preserving scope when a function call is made, or is there such a thing as PHP macros?

    Edit: Would it be better to use extract($GLOBALS); inside my function call instead?

    Edit 2: For anyone that cares, I realised I was overthinking the problem altogether and that instead of using a function I should just use an include, duh! That way I can keep my scope and have my cake too.
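
    A sketch of the pattern the "Edit 2" resolution points at (the file and variable names here are made up): let the function only validate and resolve the filename, and perform the include at the call site, so the included file runs in the caller's scope and sees its variables.

        <?php
        // Sketch: whitelist-based resolution; the include happens at top level.
        function resolve_action_file($action)
        {
            $allowed = array('home', 'contact', 'products');
            if (!in_array($action, $allowed, true)) {
                $action = 'default';            // fall back when the file isn't allowed
            }
            return dirname(__FILE__) . '/modules/' . $action . '.php';
        }

        // In the page itself (top-level code, not inside a function):
        $config = array('site_name' => 'Example');   // remains visible to the included module
        include resolve_action_file(isset($_GET['action']) ? $_GET['action'] : 'default');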

    Read the article

  • How should I define a JavaScript 'namespace' to satisfy JSLint?

    - by Matthew Murdoch
    I want to be able to package my JavaScript code into a 'namespace' to prevent name clashes with other libraries. Since the declaration of a namespace should be a simple piece of code, I don't want to depend on any external libraries to provide me with this functionality. I've found various pieces of advice on how to do this simply, but none seem to be free of errors when run through JSLint (using 'The Good Parts' options). As an example, I tried this from Advanced JavaScript (section "Namespaces without YUI"):

        "use strict";
        if (typeof(MyNamespace) === 'undefined') {
            MyNamespace = {};
        }

    Running this through JSLint gives the following errors:

        Problem at line 2 character 12: 'MyNamespace' is not defined.
        Problem at line 3 character 5: 'MyNamespace' is not defined.
        Implied global: MyNamespace 2,3

    The 'Implied global' error can be fixed by explicitly declaring MyNamespace...

        "use strict";
        if (typeof(MyNamespace) === 'undefined') {
            var MyNamespace = {};
        }

    ...and the other two errors can be fixed by declaring the variable outside the if block:

        "use strict";
        var MyNamespace;
        if (typeof(MyNamespace) === 'undefined') {
            MyNamespace = {};
        }

    So that works, but it seems to me that (since MyNamespace will always be undefined at the point it is checked?) it is equivalent to the much simpler:

        "use strict";
        var MyNamespace = {};

    JSLint is content with this, but I'm concerned that I've simplified the code to such an extent that it will no longer function correctly as a namespace. Is this final formulation sensible?
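
    For files that may be loaded alongside others sharing the same namespace, a common idiom is shown below (my suggestion; I have not checked this exact form against JSLint's Good Parts options): it creates the object only if it does not already exist, so it will not wipe out members added earlier by another file.

        // Sketch: create the namespace only if no earlier script created it.
        "use strict";
        var MyNamespace = MyNamespace || {};

        MyNamespace.greet = function (name) {
            return "Hello, " + name;
        };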

    Read the article

  • gcc -finline-functions behaviour?

    - by user176168
    I'm using gcc with the -finline-functions optimization for release builds. In order to combat code bloat, because I work on an embedded system, I want to be able to say "don't inline this particular function". The obvious way to do this would be through the function attribute __attribute__((noinline)). The problem is this doesn't seem to work when I switch on the global -finline-functions optimisation, which is part of the -O3 switch. It also has something to do with the function being templated, as a non-templated version of the same function doesn't get inlined, which is as expected. Has anybody any idea how to control inlining when this global switch is on? Here's the code:

        #include <cstdlib>
        #include <cstdio>
        #include <iostream>

        using namespace std;

        class Base
        {
        public:
            template<typename _Type_>
            static _Type_ fooT( _Type_ x, _Type_ y ) __attribute__ (( noinline ));
        };

        template<typename _Type_>
        _Type_ Base::fooT( _Type_ x, _Type_ y )
        {
            asm("");
            return x + y;
        }

        int main(int argc, char *argv[])
        {
            int test = Base::fooT( 1, 2 );
            printf( "test = %d\n", test );
            system("PAUSE");
            return EXIT_SUCCESS;
        }

    Read the article

< Previous Page | 73 74 75 76 77 78 79 80 81 82 83 84  | Next Page >