Search Results

Search found 19499 results on 780 pages for 'transaction log'.

Page 283 of 780

  • Why is the event object different coming from jquery bind vs. addEventListener

    - by yodaisgreen
    Why is it that when I use jQuery's bind, the event object I get back is different from the event object I get back using addEventListener? The event object resulting from this jQuery bind does not have the targetTouches array (among other things), but the event from addEventListener does. Is it me, or is something not right here? $(document).ready (function () { $("#test").bind("touchmove", function (event) { console.log(event.targetTouches[0].pageX); // targetTouches is undefined }); }); vs. $(document).ready (function () { var foo = document.querySelectorAll('#test') foo[0].addEventListener('touchmove', function (event) { console.log(event.targetTouches[0].pageX); // returns the correct values }, false); });

    Read the article

  • How to view Mercurial changeset changes using a GUI diff tool

    - by Marcus
    We use both Examdiff and Kdiff3 to view Mercurial changes. Just add this to .hgrc: [extdiff] cmd.kdiff3 = cmd.examdiff = C:\Program Files\ExamDiff Pro\ExamDiff.exe And then you can type hg examdiff or hg diff3 to see a complete diff of all your changes. What I would like is to do the same to see a "before and after" of files for a given changeset that was checked in by someone else. I know you can type hg log to see all changesets and then hg log -vprXX to see a text diff, but that's too hard for my GUI-preferring eyes. Anyone know how to do the equivalent with the GUI tools?

    Read the article

  • Optimize php-fpm and varnish for a powerful server

    - by Jim
    My setup is: Intel® Core™ i7-2600 and RAM 16 GB DDR3 RAM varnish+nginx+php-fpm+apc for a not very heavy WordPress blog with W3 Total Cache and CDN My problem is that after 55 hits per second according to blitz.io varnish starts giving out timeouts. CPU usage at this time is hardly 1%. Free memory at all time remains 10GB+. I tried benchmarking php-fpm directly with result of 150hits/s without any timeouts. But after that the CPU usage goes 100% and it stops responding. Can you help me optimize it to handle more? As i understand nginx has nothing to do over here so i dont put its config. php-fpm config listen = /tmp/php5-fpm.sock listen.allowed_clients = 127.0.0.1 user = nginx group = nginx pm = dynamic pm.max_children = 150 pm.start_servers = 7 pm.min_spare_servers = 2 pm.max_spare_servers = 15 pm.max_requests = 500 slowlog = /var/log/php-fpm/www-slow.log php_admin_value[error_log] = /var/log/php-fpm/www-error.log php_admin_flag[log_errors] = on apc extension = apc.so apc.enabled=1 apc.shm_size=512MB apc.num_files_hint=0 apc.user_entries_hint=0 apc.ttl=7200 apc.use_request_time=1 apc.user_ttl=7200 apc.gc_ttl=3600 apc.cache_by_default=1 apc.filters apc.mmap_file_mask=/tmp/apc.XXXXXX apc.file_update_protection=2 apc.enable_cli=0 apc.max_file_size=1M apc.stat=1 apc.stat_ctime=0 apc.canonicalize=0 apc.write_lock=1 apc.report_autofilter=0 apc.rfc1867=0 apc.rfc1867_prefix =upload_ apc.rfc1867_name=APC_UPLOAD_PROGRESS apc.rfc1867_freq=0 apc.rfc1867_ttl=3600 apc.include_once_override=0 apc.lazy_classes=0 apc.lazy_functions=0 apc.coredump_unmap=0 apc.file_md5=0 apc.preload_path Varnish VCL backend default { .host = "127.0.0.1"; .port = "8080"; .connect_timeout = 6s; .first_byte_timeout = 6s; .between_bytes_timeout = 60s; } acl purgehosts { "localhost"; "127.0.0.1"; } # Called after a document has been successfully retrieved from the backend. sub vcl_fetch { # Uncomment to make the default cache "time to live" is 5 minutes, handy # but it may cache stale pages unless purged. (TODO) # By default Varnish will use the headers sent to it by Apache (the backend server) # to figure out the correct TTL. # WP Super Cache sends a TTL of 3 seconds, set in wp-content/cache/.htaccess set beresp.ttl = 24h; # Strip cookies for static files and set a long cache expiry time. 
if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset beresp.http.set-cookie; set beresp.ttl = 24h; } # If WordPress cookies found then page is not cacheable if (req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)") { # set beresp.cacheable = false;#versions less than 3 #beresp.ttl>0 is cacheable so 0 will not be cached set beresp.ttl = 0s; } else { #set beresp.cacheable = true; set beresp.ttl=24h;#cache for 24hrs } # Varnish determined the object was not cacheable #if ttl is not > 0 seconds then it is cachebale if (!beresp.ttl > 0s) { # set beresp.http.X-Cacheable = "NO:Not Cacheable"; } else if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { # You don't wish to cache content for logged in users set beresp.http.X-Cacheable = "NO:Got Session"; return(hit_for_pass); #previously just pass but changed in v3+ } else if ( beresp.http.Cache-Control ~ "private") { # You are respecting the Cache-Control=private header from the backend set beresp.http.X-Cacheable = "NO:Cache-Control=private"; return(hit_for_pass); } else if ( beresp.ttl < 1s ) { # You are extending the lifetime of the object artificially set beresp.ttl = 300s; set beresp.grace = 300s; set beresp.http.X-Cacheable = "YES:Forced"; } else { # Varnish determined the object was cacheable set beresp.http.X-Cacheable = "YES"; if (beresp.status == 404 || beresp.status >= 500) { set beresp.ttl = 0s; } # Deliver the content return(deliver); } sub vcl_hash { # Each cached page has to be identified by a key that unlocks it. # Add the browser cookie only if a WordPress cookie found. if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { #set req.hash += req.http.Cookie; hash_data(req.http.Cookie); } } # vcl_recv is called whenever a request is received sub vcl_recv { # remove ?ver=xxxxx strings from urls so css and js files are cached. # Watch out when upgrading WordPress, need to restart Varnish or flush cache. set req.url = regsub(req.url, "\?ver=.*$", ""); # Remove "replytocom" from requests to make caching better. set req.url = regsub(req.url, "\?replytocom=.*$", ""); remove req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; # Exclude this site because it breaks if cached if ( req.http.host == "sr.ituts.gr" ) { return( pass ); } # Serve objects up to 2 minutes past their expiry if the backend is slow to respond. set req.grace = 120s; # Strip cookies for static files: if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset req.http.Cookie; return(lookup); } # Remove has_js and Google Analytics __* cookies. set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", ""); # Remove a ";" prefix, if present. set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", ""); # Remove empty cookies. if (req.http.Cookie ~ "^\s*$") { unset req.http.Cookie; } if (req.request == "PURGE") { if (!client.ip ~ purgehosts) { error 405 "Not allowed."; } #previous version ban() was purge() ban("req.url ~ " + req.url + " && req.http.host == " + req.http.host); error 200 "Purged."; } # Pass anything other than GET and HEAD directly. if (req.request != "GET" && req.request != "HEAD") { return( pass ); } /* We only deal with GET and HEAD by default */ # remove cookies for comments cookie to make caching better. 
set req.http.cookie = regsub(req.http.cookie, "1231111111111111122222222333333=[^;]+(; )?", ""); # never cache the admin pages, or the server-status page, or your feed? you may want to..i don't if (req.request == "GET" && (req.url ~ "(wp-admin|bb-admin|server-status|feed)")) { return(pipe); } # don't cache authenticated sessions if (req.http.Cookie && req.http.Cookie ~ "(wordpress_|PHPSESSID)") { return(lookup); } # don't cache ajax requests if(req.http.X-Requested-With == "XMLHttpRequest" || req.url ~ "nocache" || req.url ~ "(control.php|wp-comments-post.php|wp-login.php|bb-login.php|bb-reset-password.php|register.php)") { return (pass); } return( lookup ); } Varnish Daemon options DAEMON_OPTS="-a :80 \ -T 127.0.0.1:6082 \ -f /etc/varnish/ituts.vcl \ -u varnish -g varnish \ -S /etc/varnish/secret \ -p thread_pool_add_delay=2 \ -p thread_pools=8 \ -p thread_pool_min=100 \ -p thread_pool_max=1000 \ -p session_linger=50 \ -p session_max=150000 \ -p sess_workspace=262144 \ -s malloc,5G" Im not sure where to start, should i for start optimize php-fpm and then go to varnish or php-fpm is at its max right now so i should start looking for the problem in varnish?

    Read the article

  • Emacs Lisp: How to use ad-get-arg and ad-get-args?

    - by RamyenHead
    I'm not sure I am using ad-get-args and ad-get-arg right. For example, the following code doesn't work. (defun my-add (a b) (+ a b)) (defadvice my-add (after my-log-on activate) (message "my-add: %s" (ad-get-args))) (my-add 1 2) The last expression causes an error: Debugger entered--Lisp error: (void-function ad-get-args). The following doesn't work either. (defun my-substract (a b) (- a b)) (defadvice my-substract (around my-log-on activate) (message "my-substract: %s" (ad-get-arg 0)) (ad-do-it)) (my-substract 10 1) The defadvice gives a warning: Warning: `(setq ad-return-value (ad-Orig-my-substract a b))' is a malformed function And the last expression gives an error: Debugger entered--Lisp error: (invalid-function (setq ad-return-value (ad-Orig-my-substract a b))) (setq ad-return-value (ad-Orig-my-substract a b))() I was trying to use defadvice to watch start-process arguments for debugging purposes and I found my way of using ad-get-arg didn't work.

    Read the article

  • ASP:LinkButton and Eval

    - by sgibbons
    I'm using an ASP:LinkButton inside of an ItemTemplate inside of a TemplateField in a GridView. For the command argument for the link button I want to pass the ID of the row from the datasource that the gridview is bound to, so I'm doing something like this: <asp:LinkButton ID="viewLogButton" CommandName="viewLog" CommandArgument="<%#Eval("ID")%>" Text="View Log" runat="server"/> Unfortunately, the resulting HTML is this: <asp:LinkButton ID="viewLogButton" CommandName="viewLog" CommandArgument="3" Text="View Log" runat="server"/> It seems that it is parsing the Eval() properly, but this is somehow causing it not to parse the LinkButton tag and just dump it out as literal text. Does anyone know: a) why this is happening and, b) what a good solution to this problem is?

    Read the article

  • Problems with jquery selector

    - by Gandalf StormCrow
    I'm having trouble selecting element attributes with jQuery. Here is my HTML: <a class="myselector" rel="I need this value"></a> <div class="someClass" id="someID"> ...bunch of elements/content <input type="button" name="myInput" id="inputID" title="myInput Title" /> ...bunch of elements/content </div> ...bunch of elements/content I'm trying to get the rel value of myselector. Here is how I tried, but it's not working: $('#inputID').live('click',function(){ console.log($(this).closest('a.myselector').attr('rel')); }); I also tried this, since everything is wrapped in a wrapper div: $('#inputID').live('click',function(){ console.log($(this).parent('div.wrapper').find('a.myselector').attr('rel')); }); I get an undefined value in Firebug in both cases. I use live because div#someID is loaded into the document later; it's not there when the page first loads. Any advice on how I can get my selector's rel value?

    Read the article

  • can't login to phpmyadmin

    - by user574383
    Hi, I am new to Linux, but I need phpMyAdmin on my CentOS server. I did this: cd /var/www/html/ (document root of Apache) wget http://sourceforge.net/projects/phpmyadmin/path/to/latest/version tar xvfz phpMyAdmin-3.3.9-all-languages.tar.gz mv phpMyAdmin-3.3.9-all-languages phpmyadmin rm phpMyAdmin-3.3.9-all-languages.tar.gz cd phpmyadmin/ cp config.sample.inc.php config.inc.php Then I go to a web browser, open www.$ip/phpmyadmin, and I am presented with a login screen asking for a username and password. How can I get these credentials to log in? I'd like to log in as root, I guess, but I don't know how to set up a root account and create a password for root using the CLI and MySQL. Please help? Thanks.

    Read the article

  • DataNucleus Enhancer flakey?

    - by KevMo
    I'm creating a GWT app on Google App Engine and using the Google datastore. Does anybody else have the problem of DataNucleus being flakey as all get out? I can save a class, and DataNucleus will do its thing just fine. If I change ANYTHING in the class (even adding whitespace) and then save, I get the following error: DataNucleus Enhancer completed with success for 0 classes. Timings : input=37 ms, enhance=0 ms, total=37 ms. Consult the log for full details DataNucleus Enhancer completed and no classes were enhanced. Consult the log for full details Once I clean my project, DataNucleus is happy again. Is this common when using Eclipse? Is there a workaround?

    Read the article

  • Stack trace in website project, when debug = false

    - by chandmk
    We have a website project. We are logging unhandled exceptions via an AppDomain-level exception handler. When we set debug = true in web.config, the exception log shows the offending line numbers in the stack trace. But when we set debug = false in web.config, the log does not display the line numbers. We are not in a position to convert the website project into a web application project type at this time. It's a legacy application and almost all the code is in aspx pages. We also need to leave the project in 'updatable' mode, i.e. we can't use the pre-compile option. We are generating PDB files. Is there any way to tell this kind of website project to generate the PDB files and show the line numbers in the stack trace?

    Read the article

  • MySQL Insert Statement Queue

    - by Justin
    We are building an Ajax application in which a user's input is submitted to a PHP script for processing. We currently write every request to a log file for tracking. I would like to move this tracking into a database table, but I do not want to run an insert statement after every request. What I would like to do is set up a 'queue' of transactions (inserts and updates) that need to be processed on the MySQL database. I would then set up a cron job or process to check and process the transactions in the queue. Is there something out there that we could build upon, or do we have to just write to plain ol' text log files and process them?
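
    A minimal sketch of the kind of cron-driven worker described above, in Python. The table layout (request_queue with id, payload, processed columns), the connection details, and the process() step are all placeholders, not part of the original question; the mysql-connector-python package is assumed to be installed.

        # queue_worker.py -- run from cron, e.g. */5 * * * * python queue_worker.py
        # Assumed (hypothetical) table:
        #   CREATE TABLE request_queue (
        #       id INT AUTO_INCREMENT PRIMARY KEY,
        #       payload TEXT NOT NULL,
        #       processed TINYINT NOT NULL DEFAULT 0
        #   );
        import mysql.connector  # pip install mysql-connector-python

        def process(payload):
            # Placeholder for whatever tracking/aggregation the application needs.
            print("processing:", payload)

        def main():
            conn = mysql.connector.connect(
                host="localhost", user="app", password="secret", database="tracking"
            )
            try:
                cur = conn.cursor(dictionary=True)
                cur.execute("SELECT id, payload FROM request_queue WHERE processed = 0 LIMIT 500")
                for row in cur.fetchall():
                    process(row["payload"])
                    cur.execute("UPDATE request_queue SET processed = 1 WHERE id = %s", (row["id"],))
                conn.commit()
            finally:
                conn.close()

        if __name__ == "__main__":
            main()

    The same loop works just as well against a plain text log file; the table simply makes the "mark as processed" bookkeeping explicit.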

    Read the article

  • AppDomain.UnhandledException event not fired

    - by Yaakov Davis
    In a WPF application, the app simply crashes without the above event being fired. (I'm also registered to DispatcherUnhandledException, which doesn't fire either.) I conclude that it doesn't fire because the handler is defined to write a log entry, and when I look at the log there is no corresponding entry. It happens in a production environment; I'm unable to point at a particular scenario. I've read a few descriptions of scenarios where this might happen, but I still don't have a clear grasp on this. Can anyone share their experience or knowledge on this? How can I find the root of the crash and solve it? Many thanks.

    Read the article

  • Difference in logging mechanism: API and application (Python)

    - by zubin71
    I am currently writing an API and an application which uses the API. I have gotten suggestions from people stating that I should perform logging using handlers in the application and use a "logger" object for logging from the API. In light of the advice I received above, is the following implementation correct? class test: def __init__(self, verbose): self.logger = logging.getLogger("test") self.logger.setLevel(verbose) def do_something(self): # do something self.logger.log("something") # by doing this i get the error message "No handlers could be found for logger "test" The implementation I had in mind was as follows: #!/usr/bin/python """ .... .... create a logger with a handler .... .... """ myobject = test() try: myobject.do_something() except SomeError: logger.log("cant do something") I'd like to get my basics strong; I'd be grateful for any help and suggestions for code you might recommend I look up. Thanks!
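
    A sketch of the pattern the advice describes, for reference (the module name "mylib" and file name "app.log" are made up): the library only obtains and uses a logger, while the application decides where records go by attaching handlers. A NullHandler on the library logger avoids the "No handlers could be found" warning when the application configures nothing; note also that Logger.log() takes a level as its first argument, which is why info()/debug()/error() are usually more convenient.

        import logging

        # --- library / API side: use a logger, attach no real handlers ---
        logger = logging.getLogger("mylib")           # illustrative name
        logger.addHandler(logging.NullHandler())      # silences "No handlers could be found"

        class Test:
            def do_something(self):
                logger.info("doing something")        # info()/debug() instead of bare log()

        # --- application side: decide where log records actually go ---
        if __name__ == "__main__":
            handler = logging.FileHandler("app.log")
            handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))
            logging.getLogger("mylib").addHandler(handler)
            logging.getLogger("mylib").setLevel(logging.INFO)

            Test().do_something()    # the record ends up in app.log via the app's handler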

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
    This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

    Morning Coffee

    When I was a DBA, the first thing I did when I sat down at my desk at work was to check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place like Microsoft System Center Operations Manager (SCOM) or similar 3rd-party products that would track all these things for you. But at that moment, we had no recourse but to write our own PowerShell scripts to do it. Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

    "But, we have a cluster...we don't need backups"

    Sadly I've heard this line more often than I would have liked. You need to understand that a cluster is built around shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the operating system level, and also from an outage of any SQL-related service or dependent device. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise.

    Backup, fine. How often do I take a backup?

    The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called the Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called the Recovery Time Objective. Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes.

    A backup is nothing more than an untested restore

    Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - which, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server though, be sure to run DBCC CHECKDB WITH PHYSICALONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable trace flags 2562 and/or 2549, which will speed up the PHYSICALONLY checks further - you can read more about this enhancement here. Back to the "How Often" question for a second. If you have the disk, and the network latency, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner rather than later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

    Where to back up to? Network share? Locally? SAN volume?

    This is another topic where everybody has a favorite choice. So, I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that, it's also slow!). The key is to have a copy of those backup files made quickly, and, if at all possible, to a remote target in a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.

    Read the article

  • apache front end using mod_proxy_ajp to tomcat on different servers

    - by user302307
    Does anyone know the steps to run Apache on server A as a front end and use mod_proxy_ajp to connect to Tomcat instances on server B? I want to run Apache on server A to do name-based vhosts that connect to many Tomcat servers. I can run mod_proxy_ajp only if Apache and Tomcat are on the same server. What I've tried so far: On server A, running Apache 2.2: NameVirtualHost *:80 ServerName tc0.domo.lan ErrorLog "C:\Apache\Apache2.2\logs\tc0.ajp.error.log" CustomLog "C:\Apache\Apache2.2\logs\tc0.ajp.access.log" combined DocumentRoot C:/htdocs0 AddDefaultCharset Off Order deny,allow Allow from all ProxyPass / ajp://192.168.77.233:8009/ ProxyPassReverse / ajp://192.168.77.233:8009/ Options FollowSymLinks AllowOverride None Order deny,allow Allow from all Server B: 192.168.77.233, Tomcat 6 connector: I can confirm that Tomcat works by going to http://192.168.77.233:8080/manager/html. When I use a packet sniffer on server A, I see that server A is trying to connect to server B on port 80 when I request http://tc0.domo.lan/manager/html on server A.

    Read the article

  • Using the standard Java logging, is it possible to restart logs after a certain period?

    - by Fry
    I have some Java code that will be running as a data importer for a much larger project. The initial logging code was done with the java.util.logging classes, so I'd like to keep it if possible, but it seems a little inadequate now given the amount of data passing through the importer. Often the importer will get data that the main system doesn't have information for, or that doesn't match the system's data, so it is ignored, but a message is written to the log about what information was dropped and why it wasn't imported. The problem is that this log tends to grow in size very quickly, so we'd like to be able to start a fresh log daily or weekly. Does anybody know if this can be done with the standard logging classes, or would I have to switch to log4j or a custom solution? Thanks for any help!

    Read the article

  • Implementing Role based Helpers

    - by Cynics
    So my question is: how would you implement your handwritten helpers based on the role of the current user? Would it be efficient to change the behaviour at request time, e.g. the helper somehow figures out the role of the user and includes the proper submodule? module ApplicationHelper module LoggedInHelper # Some functions end module GuestHelper # The Same functions end # If User is Guest then include GuestHelper # If User is LoggedIn then include LoggedInHelper end Is it efficient this way? Is it the Rails way? I've got a whole bunch of functions that act like this, and I don't want to wrap every single one of them in an if statement: def menu_actions if current_user.nil? # User is guest { "Log in" => link_to "Login", "/login" } else # User is Logged In { "Log out" => link_to "Logout", "/logout" } end end Thank you for your time and thoughts.

    Read the article

  • .NET CF on Windows CE - problem with filtering system messages

    - by mack369
    Hello, I'm trying to get every Windows message that indicates the user has touched the screen. It works everywhere except on a button when it is disabled. It seems that the application doesn't get any message when a disabled control is clicked. I'm using the OpenNETCF Application2 class for filtering messages: Application2.AddMessageFilter(Device.PowerManager); Application2.Run(new MainForm()); The PowerManager class contains the following method (as required by the IMessageFilter interface): public bool PreFilterMessage(ref Microsoft.WindowsCE.Forms.Message m) { log.DebugFormat("windows message {0} - 0x{0:X}", m.Msg); if (m.Msg == 0x0201 || m.Msg == 0x8001 || m.Msg == 0x0005) { return this.ResetPowerManager(); } return false; } In the log file there is no indication of a Windows message when clicking on a disabled button. I'm wondering how this is possible and how I can get this message.

    Read the article

  • Bug in Mathematica's Integrate with PrincipalValue->True

    - by Janus
    It seems that Mathematica's handling of principal value integrals fails on some corner cases. Consider these two expressions (which should give the same result): Integrate[UnitBox[x]/(x0 - x), {x, -Infinity, Infinity}, PrincipalValue -> True, Assumptions -> {x0 > 0}] /. x0 -> 1 // Simplify Integrate[UnitBox[x]/(x0 - x) /. x0 -> 1, {x, -Infinity, Infinity}, PrincipalValue -> True] In Mathematica 7.0.0 I get I Pi + Log[3] for the first and Log[3] for the second. Has this been fixed in later versions? Does anybody have an idea for a (more or less) general workaround?
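
    For reference, the value both expressions should return can be checked by hand: with x0 = 1 the pole at x = 1 lies outside the support of UnitBox (|x| <= 1/2), so the integral is an ordinary one and no principal value limit is even needed.

        \mathrm{PV}\int_{-\infty}^{\infty}\frac{\mathrm{UnitBox}(x)}{1-x}\,dx
          = \int_{-1/2}^{1/2}\frac{dx}{1-x}
          = \Bigl[-\ln(1-x)\Bigr]_{-1/2}^{1/2}
          = \ln 2 + \ln\tfrac{3}{2}
          = \ln 3

    So Log[3] matches the hand computation, and the extra I Pi term in the first result is the spurious part.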

    Read the article

  • Event sourcing: Write event before or after updating the model

    - by Magnus
    I'm reasoning about event sourcing and I often arrive at a chicken-and-egg problem. I would be grateful for some hints on how to reason around this. If I execute all I/O-bound processing asynchronously (i.e. writing to the event log), then how do I handle, or sometimes even detect, failures? I'm using Akka Actors, so processing is sequential for each event/message. I do not have any database at this time; instead I would persist all the events in an event log and then keep an aggregated state of all the events in a model stored in memory. Queries all go against this model; you can consider it a cache. Example, creating a new user: Validate that the user does not exist in the model. Persist the event to the journal. Update the model (in memory). If step 3 breaks I have still persisted my event, so I can replay it at a later date. If step 2 breaks I can handle that gracefully as well. This is fine, but since step 2 is I/O-bound I figured that I should do I/O in a separate actor to free up the first actor for queries. Updating a user while allowing queries (A0 = Front end/GUI actor, A1 = Processor Actor, A2 = IO-actor, E = event bus): (A0-E-A1) Event is published to update user 'U1'. Validate that the user 'U1' exists in the model. (A1-A2) Persist event to journal (separate actor). (A0-E-A1-A0) Query for user 'U1' profile. (A2-A1) Event is now persisted; continue to update the model. (A0-E-A1-A0) Query for user 'U1' profile (now returns fresh data). This is appealing since queries can be processed while I/O is churning along at its own pace. But now I can cause myself all kinds of problems where two incompatible commands (a delete and then an update) are persisted to the event log and crash on me when replayed at a later date, since I do the validation before persisting the event and only then update the model. My aim is to keep reasoning about my model simple (since an Actor processes messages sequentially, single-threaded) but not have queries wait on I/O-bound updates. I get the feeling I'm modeling a database, which in itself might be a problem. If things are unclear please write a comment.
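
    A minimal, single-process sketch of the validate -> persist -> apply ordering from the first example, in Python for illustration only. The append-only journal file and the dict model are stand-ins for the actual Akka event log and in-memory aggregate; none of the names come from the original question.

        import json
        import os

        class UserService:
            """Validate -> persist -> apply, with replay from an append-only journal."""

            def __init__(self, journal_path="events.journal"):
                self.journal_path = journal_path
                self.users = {}   # in-memory model, i.e. the "cache" queries run against

            def _persist(self, event):
                # Step 2: append the event before touching the model. If this raises,
                # the model stays untouched; if step 3 fails later, the event can be replayed.
                with open(self.journal_path, "a") as journal:
                    journal.write(json.dumps(event) + "\n")

            def _apply(self, event):
                # Step 3: update the in-memory model.
                if event["type"] == "user_created":
                    self.users[event["user_id"]] = {"name": event["name"]}

            def create_user(self, user_id, name):
                # Step 1: validate against the current model.
                if user_id in self.users:
                    raise ValueError("user already exists")
                event = {"type": "user_created", "user_id": user_id, "name": name}
                self._persist(event)
                self._apply(event)

            def replay(self):
                # Rebuild the model from the journal, e.g. after a crash between steps 2 and 3.
                self.users.clear()
                if not os.path.exists(self.journal_path):
                    return
                with open(self.journal_path) as journal:
                    for line in journal:
                        self._apply(json.loads(line))

    The tension described in the question shows up exactly when _persist is handed off to another actor: validation runs against a model that has not yet seen events that are still in flight, which is why only events validated against the applied state can safely be written to the journal.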

    Read the article

  • Instant Messenger: How does gtalk/yahoo messenger populate the contact list?

    - by Owen
    Hi all, we are currently working on a small IM project which works pretty much like gtalk and yahoo messenger. We came across a problem that puzzled us: how do gtalk/ym populate their contact lists? Given that the user has, let's say, more or less 500 contacts, both IMs seem to load the contacts quickly and already sorted. Here are my questions (referring to either): Does it cache its contacts, e.g. saving them in a file somewhere upon exit so that upon login it can readily load the contacts and display them in its contact list? Does it always request the vCards upon login, or do they have a vCard push of some sort that simply updates the contacts' profiles (like that of their status [presence push - available, busy, etc.])?
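
    One way to get the "instant" contact list the question describes is a local roster cache plus an incremental sync on login; here is a rough Python sketch of the idea. The cache file name, the versioning scheme, and the server.roster_changes_since() call are all hypothetical, used only to illustrate the caching/delta approach.

        import json
        import os

        CACHE_FILE = "roster_cache.json"    # illustrative path

        def load_cached_roster():
            # On login, show whatever was cached at last logout: instant and already sorted.
            if os.path.exists(CACHE_FILE):
                with open(CACHE_FILE) as f:
                    return json.load(f)      # e.g. {"version": 41, "contacts": [...]}
            return {"version": 0, "contacts": []}

        def save_roster(roster):
            # On logout (or periodically), persist the roster for the next session.
            with open(CACHE_FILE, "w") as f:
                json.dump(roster, f)

        def sync_roster(server, roster):
            # Ask the server only for changes since the cached version instead of
            # re-fetching ~500 vCards; presence updates arrive separately as a push.
            changes = server.roster_changes_since(roster["version"])   # hypothetical API
            for contact in changes["updated"]:
                roster["contacts"] = [c for c in roster["contacts"] if c["jid"] != contact["jid"]]
                roster["contacts"].append(contact)
            roster["contacts"].sort(key=lambda c: c["name"].lower())
            roster["version"] = changes["version"]
            return roster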

    Read the article

  • How to configure the framesize using AudioUnit.framework on iOS

    - by Piperoman
    I have an audio app i need to capture mic samples to encode into mp3 with ffmpeg First configure the audio: /** * We need to specifie our format on which we want to work. * We use Linear PCM cause its uncompressed and we work on raw data. * for more informations check. * * We want 16 bits, 2 bytes (short bytes) per packet/frames at 8khz */ AudioStreamBasicDescription audioFormat; audioFormat.mSampleRate = SAMPLE_RATE; audioFormat.mFormatID = kAudioFormatLinearPCM; audioFormat.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger; audioFormat.mFramesPerPacket = 1; audioFormat.mChannelsPerFrame = 1; audioFormat.mBitsPerChannel = audioFormat.mChannelsPerFrame*sizeof(SInt16)*8; audioFormat.mBytesPerPacket = audioFormat.mChannelsPerFrame*sizeof(SInt16); audioFormat.mBytesPerFrame = audioFormat.mChannelsPerFrame*sizeof(SInt16); The recording callback is: static OSStatus recordingCallback(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData) { NSLog(@"Log record: %lu", inBusNumber); NSLog(@"Log record: %lu", inNumberFrames); NSLog(@"Log record: %lu", (UInt32)inTimeStamp); // the data gets rendered here AudioBuffer buffer; // a variable where we check the status OSStatus status; /** This is the reference to the object who owns the callback. */ AudioProcessor *audioProcessor = (__bridge AudioProcessor*) inRefCon; /** on this point we define the number of channels, which is mono for the iphone. the number of frames is usally 512 or 1024. */ buffer.mDataByteSize = inNumberFrames * sizeof(SInt16); // sample size buffer.mNumberChannels = 1; // one channel buffer.mData = malloc( inNumberFrames * sizeof(SInt16) ); // buffer size // we put our buffer into a bufferlist array for rendering AudioBufferList bufferList; bufferList.mNumberBuffers = 1; bufferList.mBuffers[0] = buffer; // render input and check for error status = AudioUnitRender([audioProcessor audioUnit], ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList); [audioProcessor hasError:status:__FILE__:__LINE__]; // process the bufferlist in the audio processor [audioProcessor processBuffer:&bufferList]; // clean up the buffer free(bufferList.mBuffers[0].mData); //NSLog(@"RECORD"); return noErr; } With data: inBusNumber = 1 inNumberFrames = 1024 inTimeStamp = 80444304 // All the time same inTimeStamp, this is strange However, the framesize that i need to encode mp3 is 1152. How can i configure it? If i do buffering, that implies a delay, but i would like to avoid this because is a real time app. If i use this configuration, each buffer i get trash trailing samples, 1152 - 1024 = 128 bad samples. All samples are SInt16.

    Read the article

  • Make: how to force make?

    - by HH
    The command $ make all gives errors such as rm: cannot remove '.lambda': No such file or directory so it stops. How can I force-make? Makefile all: make clean make .lambda make .lambda_t make .activity make .activity_t_lambda clean: rm .lambda .lambda_t .activity .activity_t_lambda .lambda: awk '{printf "%.4f \n", log(2)/log(2.71828183)/$$1}' t_year > .lambda .lambda_t: paste .lambda t_year > .lambda_t .activity: awk '{printf "%.4f \n", $$1*2.71828183^(-$$1*$$2)}' .lambda_t > .activity .activity_t_lambda: paste .activity t_year .lambda | sed -e 's@\t@\t\&\t@g' -e 's@$$@\t\\\\@g' | tee > .activity_t_lambda > ../RESULTS/currentActivity.tex

    Read the article

  • Am I reindexing this Sphinx index correctly?

    - by Ethan
    According to the Thinking Sphinx docs... Turning on delta indexing does not remove the need for regularly running a full re-index ... So I set up this cron job... 50 10 * * * cd /var/www/my_app/current && /opt/ruby/bin/rake thinking_sphinx:index RAILS_ENV=production >> /var/www/my_app/current/log/reindexing.log 2>&1 Is that a reasonable way to do it? Should I be doing something different?

    Read the article

  • Executing scripts in threads

    - by Pedro Magalhaes
    Hi. I want to make an app that executes remote scripts. I am going to design it as a Windows Service that listens on a TCP/IP port. For every new request I will execute a Python script. I need to handle any number of TCP/IP requests at the same time, so I will need to execute the Python scripts in separate threads. How can I do that? Is it simple? The scripts will share some objects, like log files (text files) and other modules. In the log file example, every script will write to the same file, so the app (the Windows service) will be responsible for that. I need to make these objects thread-safe, right?
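
    A minimal sketch of the threaded TCP listener part in Python; the Windows-service wrapper, the port, and the way a request names a script are placeholders and would need to be adapted. Each connection is handled on its own thread, the scripts run in subprocesses, and the shared log file is written through the logging module, whose handlers are thread-safe.

        import logging
        import subprocess
        import socketserver

        logging.basicConfig(filename="service.log", level=logging.INFO,
                            format="%(asctime)s %(threadName)s %(message)s")
        log = logging.getLogger("script-runner")

        class ScriptHandler(socketserver.StreamRequestHandler):
            def handle(self):
                # One handler instance per connection, each on its own thread.
                script_name = self.rfile.readline().strip().decode()   # e.g. "job1.py" (illustrative)
                log.info("running %s", script_name)                    # logging handlers are thread-safe
                result = subprocess.run(["python", script_name],
                                        capture_output=True, text=True)
                log.info("%s exited with %s", script_name, result.returncode)
                self.wfile.write(result.stdout.encode())

        class ThreadedServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
            daemon_threads = True

        if __name__ == "__main__":
            with ThreadedServer(("0.0.0.0", 9000), ScriptHandler) as server:
                server.serve_forever()

    Because each script runs in its own subprocess, the scripts do not actually share Python objects with each other; only resources owned by the service itself (such as the shared log file) need thread-safe access, which the logging module already provides.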

    Read the article
