Search Results

Search found 1839 results on 74 pages for 'akky awesome'.


  • jQuery multiple running totals

    - by Benjamin Randal
    I am using jQuery to calculate a running total on multiple textboxes. I found an awesome response on how to get that working a few days ago, but now I am running into another problem. When using one selector, the total for GetTotal is calculated perfectly. However, when I include the second selector, the totals begin to conflict with one another and no longer calculate properly. I have been searching for a solution for some time now; does anyone have any ideas? Here is the selector I am currently using:

        function GetTotal(txtBox) {
            var total = 0;
            $('input:text').each(function(index, value) {
                total += parseInt($(value).val() || 0);
            });
            $("#chkTotal").html(total);
        }

    My view uses these textboxes:

        <div class="editor-field">
            @Html.TextBox("Field1", String.Empty, new { InputType = "text", id = "field1", onchange = "GetTotal(this)" })
        </div>
        <div class="editor-field">
            @Html.TextBox("Field2", String.Empty, new { InputType = "text", id = "field2", onchange = "GetTotal(this)" })
        </div>
        <div>
            <h3>Total Checked</h3>
        </div>
        <div id="chkTotal"></div>

    Now I am trying to implement another selector which totals two additional editor fields:

        function GetTotal1(txtBox) {
            var total1 = 0;
            $('input:text').each(function (index, value) {
                total1 += parseInt($(value).val() || 0);
            });
            $("#disTotal").html(total1);
        }

    View:

        <div class="editor-field">
            @Html.TextBox("Field3", String.Empty, new { InputType = "text", id = "field3", onchange = "GetTotal1(this)" })
        </div>
        <div class="editor-field">
            @Html.TextBox("Field4", String.Empty, new { InputType = "text", id = "field4", onchange = "GetTotal1(this)" })
        </div>
        <div>
            <h3>Total Distributed</h3>
        </div>
        <div id="disTotal"></div>
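
    The conflict appears to come from both functions iterating over every $('input:text') on the page, so each total always sums all four fields. A minimal sketch of a fix (field ids assumed from the question), scoping each total to its own group:

        // Sum only the inputs in the given group and write to the given target.
        function getGroupTotal(selector, target) {
            var total = 0;
            $(selector).each(function () {
                total += parseInt($(this).val(), 10) || 0; // NaN-safe
            });
            $(target).html(total);
        }

        // Wire both groups up independently:
        $('#field1, #field2').on('change', function () {
            getGroupTotal('#field1, #field2', '#chkTotal');
        });
        $('#field3, #field4').on('change', function () {
            getGroupTotal('#field3, #field4', '#disTotal');
        });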


  • msysgit git-am can't apply its own git format-patch sequence

    - by Andrian Nord
    I'm using msysgit on Windows to operate on a central svn repository. I'm using git because I want its awesome little local branches for everything, and rebasing on each other. I also need to update from the central repo often, so using svn and git separately is not an option.

    The problem: git svn --help (the man page) says it is not a good idea to use git merge into the master branch (which is set to track svn's trunk) from local branches, as this will ruin the party and git svn dcommit will no longer work. I know that this isn't exactly true and you may use git merge if you are merging from a branch that was properly rebased on master before the merge, but I'm trying to make the workflow safer and actually use git format-patch and git am. We use code review, so I'm making patches anyway. I also knew about git cherry-pick, but I want to just git am /reviewed/patches/dir/* without recalling which commits correspond to which patches (without reading the patches, that is).

    So, what's wrong with git svn and git am? It's simple: git am (git-mailsplit, to be precise) converts CRLF to LF in the supplied patches, if not rebasing. git format-patch also produces proper (LF-ended) patches. As my repo is mostly CRLF (and it should remain so), patches are, obviously, failing due to the wrong EOLs. Converting the diffs to CRLF and somehow hacking git am to prevent it from converting doesn't work either: git apply will complain about an expected /dev/null (but it got /dev/null^M) if any file was added or deleted. And if I apply with git am --ignore-space-change --ignore-whitespace, it commits LF endings straight to the index, which is also weird. I don't know whether that would survive committing into svn (via git svn dcommit) and checking it out again, and I don't want to try it out. Of course, it's still possible to hack around the patches to convert only the actual diffs, but this is too many hacks for a simple task.

    So I wonder: is there really no established way to produce patches and apply them to the same repo on the same system? It just feels weird that msysgit can't apply its own patches.
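
    A hedged sketch, not a verified fix for this exact msysgit build: git am grew a --keep-cr option (and the am.keepcr config) precisely to stop git mailsplit from doing the CRLF-to-LF conversion described above, so CRLF patches apply to a CRLF work tree untouched:

        # Pass CRLF through the mailsplit stage unchanged:
        git am --keep-cr /reviewed/patches/dir/*

        # Or make it the default so plain "git am" behaves the same way:
        git config am.keepcr true

    Whether your msysgit version ships these depends on its age; they exist in mainline git from roughly 1.7.2 onward.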


  • AlarmManager triggers PendingIntent too soon

    - by Wezelkrozum
    I've searched for 3 days now but haven't found a solution or a similar problem/question anywhere else. Here is the deal:

    - Trigger in 1 hour: works correctly
    - Trigger in 2 hours: goes off after 1:23
    - Trigger in 1 day: goes off after ~11:00

    So why is the AlarmManager so unpredictable and always too soon? Or what am I doing wrong? And is there another way to make it work correctly? This is the way I register my PendingIntent with the AlarmManager (stripped down):

        AlarmManager alarmManager = (AlarmManager) parent.getSystemService(ALARM_SERVICE);
        Intent myIntent = new Intent(parent, UpdateKlasRoostersService.class);
        PendingIntent pendingIntent = PendingIntent.getService(parent, 0, myIntent, PendingIntent.FLAG_UPDATE_CURRENT);

        // Set start date of PendingIntent so it triggers in 10 minutes
        Calendar start = Calendar.getInstance();
        start.setTimeInMillis(SystemClock.elapsedRealtime());
        start.add(Calendar.MINUTE, 10);

        // Set interval of PendingIntent so it triggers every day
        Integer interval = 1 * 24 * 60 * 60 * 1000;

        // Cancel any similar instances of this PendingIntent if already scheduled
        alarmManager.cancel(pendingIntent);

        // Schedule PendingIntent
        alarmManager.setRepeating(AlarmManager.ELAPSED_REALTIME_WAKEUP, start.getTimeInMillis(), interval, pendingIntent);

        // Old way I used to schedule a PendingIntent; didn't seem to work either
        // alarmManager.set(AlarmManager.RTC_WAKEUP, start.getTimeInMillis(), pendingIntent);

    It would be awesome if anyone has a solution. Thanks for any help!

    Update: 2 hours ago it worked to trigger it with an interval of 2 hours, but after that it triggered after 1:20 hours. It's getting really weird. I'll track the triggers in a logfile and post it here tomorrow.

    Update: The PendingIntent is scheduled to run every 3 hours. From the log's second line it looks like an old scheduled PendingIntent is still running:

        [2012-5-3 2:15:42 519] Updating Klasroosters
        [2012-5-3 4:15:15 562] Updating Klasroosters
        [2012-5-3 5:15:42 749] Updating Klasroosters
        [2012-5-3 8:15:42 754] Updating Klasroosters
        [2012-5-3 11:15:42 522] Updating Klasroosters

    But I'm sure I cancelled the scheduled PendingIntents before scheduling a new one, and every PendingIntent is recreated in exactly the same way, so it should match. If not, this question isn't relevant anymore.
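
    One thing worth ruling out, hedged since the posted code is stripped down: AlarmManager.cancel() only removes alarms whose Intent matches by filterEquals(), so an alarm registered earlier with different extras or a different request code keeps firing on its old schedule. Also note the commented-out "old way" pairs RTC_WAKEUP with an elapsedRealtime()-based time, which mixes two clock bases. A minimal sketch keeping one consistent base (service class assumed from the question):

        // ELAPSED_REALTIME_WAKEUP expects times relative to boot, so stay on
        // SystemClock.elapsedRealtime() throughout.
        AlarmManager am = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
        Intent intent = new Intent(context, UpdateKlasRoostersService.class);
        PendingIntent pi = PendingIntent.getService(context, 0, intent,
                PendingIntent.FLAG_UPDATE_CURRENT);
        am.cancel(pi); // only cancels alarms whose Intent filterEquals() this one
        long first = SystemClock.elapsedRealtime() + 10 * 60 * 1000; // in 10 minutes
        am.setRepeating(AlarmManager.ELAPSED_REALTIME_WAKEUP, first,
                AlarmManager.INTERVAL_DAY, pi);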


  • How do I add values to semi-complex JSON object?

    - by Nick Verheijen
    I'm fairly new to using JSON objects and I'm kind of stuck. I've got a JSON object that was converted from this array:

        Array
        (
            [status] => success
            [id] => 1
            [name] => Zone 1
            [description] => Awesome zone deze..
            [tiles] => Array
            (
                // Column for the tile grid
                [0] => Array
                (
                    // Row for the tile grid
                    [0] => Array
                    (
                        [tileID] => 1
                        [rotation] => 0
                    )
                    [1] => Array
                    (
                        [tileID] => 1
                        [rotation] => 0
                    )
                    // Etc..
                )
                [1] => Array // etc.. etc..
            )
        )

    I use this object to render an isometric grid for my HTML5 Canvas game. I'm building a map editor, and to put more tiles on the map I have to add values to this JSON object. This is how I would do it in PHP:

        mapData[column][row] = array('tileID' => 1, 'rotation' => 0);

    So my question is: how do I achieve this with a JSON object in JavaScript? Thanks in advance! Nick

    Update: I've run into an error, "can't convert undefined to object", on this line:

        mapDataTiles[mouseY][mouseX] = { tileID: editorSelectedTile, rotation: 0 };

    This is the code I use for clicking and then saving the new tile to the JSON object. At first I thought one of my parameters was undefined, so I logged them to the console, but they came out perfectly:

        // If there is already a tile placed on these coordinates
        if (mapDataTiles[mouseX] && mapDataTiles[mouseX][mouseY]) {
            mapDataTiles[mouseX][mouseY]['tileID'] = editorSelectedTile;
        }
        // If there is no tile placed on these coordinates
        else {
            mapDataTiles[mouseX][mouseY] = { tileID: editorSelectedTile, rotation: 0 };
        }

    My variables have the following values: mouseX: 5, mouseY: 17, tileID: 2. Also, weird fact: for some coordinates it does actually work and saves new data to the array.
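
    Two things stand out, hedged as a reading of the snippets above: "can't convert undefined to object" is the error you get when the outer row array was never created, and the snippets swap the index order ([mouseX][mouseY] in the if-branch, [mouseY][mouseX] in the failing line), which would explain why only some coordinates "happen" to work. A minimal sketch, picking one order and creating the missing row first:

        // Create the row on demand before assigning into it.
        if (!mapDataTiles[mouseY]) {
            mapDataTiles[mouseY] = [];
        }
        mapDataTiles[mouseY][mouseX] = { tileID: editorSelectedTile, rotation: 0 };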


  • Unable to install gem "pg" on Ubuntu 12.10 (AMD64)

    - by Lynx_Eyes
    I've been (unsuccessfully) trying to install the "pg" gem on my ruby 1.9.3-p286, but nothing seems to work. I've already installed postgresql (9.1), libpq-dev and a few others like postgresql-server-dev-9.1. I've tried passing the "with-pg-config" flag to gem install, but simply nothing seems to work. Every time I try to install the gem it outputs something like this:

        Building native extensions.  This could take a while...
        ERROR:  Error installing pg:
            ERROR: Failed to build gem native extension.

        /home/lynux/.rvm/rubies/ruby-1.9.3-p286/bin/ruby extconf.rb
        checking for pg_config... yes
        Using config values from /usr/bin/pg_config
        checking for libpq-fe.h... yes
        checking for libpq/libpq-fs.h... yes
        checking for pg_config_manual.h... yes
        checking for PQconnectdb() in -lpq... no
        checking for PQconnectdb() in -llibpq... no
        checking for PQconnectdb() in -lms/libpq... no
        Can't find the PostgreSQL client library (libpq)
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of necessary
        libraries and/or headers.  Check the mkmf.log file for more details.  You may
        need configuration options.

        Provided configuration options:
            --with-opt-dir
            --with-opt-include
            --without-opt-include=${opt-dir}/include
            --with-opt-lib
            --without-opt-lib=${opt-dir}/lib
            --with-make-prog
            --without-make-prog
            --srcdir=.
            --curdir
            --ruby=/home/lynux/.rvm/rubies/ruby-1.9.3-p286/bin/ruby
            --with-pg
            --without-pg
            --with-pg-dir
            --without-pg-dir
            --with-pg-include
            --without-pg-include=${pg-dir}/include
            --with-pg-lib
            --without-pg-lib=${pg-dir}/lib
            --with-pg-config
            --without-pg-config
            --with-pg_config
            --without-pg_config
            --with-pqlib
            --without-pqlib
            --with-libpqlib
            --without-libpqlib
            --with-ms/libpqlib
            --without-ms/libpqlib

        Gem files will remain installed in /home/lynux/.rvm/gems/ruby-1.9.3-p286@phisiodata/gems/pg-0.14.1 for inspection.
        Results logged to /home/lynux/.rvm/gems/ruby-1.9.3-p286@phisiodata/gems/pg-0.14.1/ext/gem_make.out

    What am I doing wrong? Is there something else I should do before trying to install the gem? Thank you in advance.

    [EDIT] Ok, so joelparkerhenderson's answer set me to think that there might be something wrong with paths and libraries, and I went on digging a little bit further. I found an awesome post and it solved the problem! Basically the problem lies with RVM. So my problem is solved, and for anyone out there who might suffer from the same thing, follow the link!


  • JSON and WebOS simple example?

    - by user558361
    I have been following this tutorial, http://tinyurl.com/327p325, which has been GREAT up until this point, where I can't get his code to work. I get the list working with static items, but I can't get it to work with the JSON items. I've tried to simplify it to what I really want it to do, to try and debug what is wrong. (Also, if someone could please tell me how to view the Mojo log, that would be awesome.) In the tutorial he has to use the Yahoo service to convert the site into JSON data, while the site I want to interact with already generates JSON data, so this is what I have:

        PageAssistant.prototype.setup = function() {
            this.myListModel = { items: [] };
            this.myListAttr = {
                itemTemplate: "page/itemTemplate",
                renderLimit: 20,
            };
            this.controller.setupWidget("MyList", this.myListAttr, this.myListModel);
            this.controller.setupWidget("search_divSpinner",
                { spinnerSize: "large" },
                { spinning: true });
        };

        PageAssistant.prototype.activate = function(event) {
            this.getData();
        };

        PageAssistant.prototype.getData = function () {
            // the spinner doesn't show up at all
            $("search_divScrim").show();
            var url = "http://www.website.com/.json";
            var request = new Ajax.Request(url, {
                method: 'get',
                asynchronous: true,
                evalJSON: "false",
                onSuccess: this.parseResult.bind(this),
                on0: function (ajaxResponse) {
                    // connection failed, typically because the server is
                    // overloaded or has gone down since the page loaded
                    Mojo.Log.error("Connection failed");
                },
                onFailure: function(response) {
                    // Request failed (404, that sort of thing)
                    Mojo.Log.error("Request failed");
                },
                onException: function(request, ex) {
                    // An exception was thrown
                    Mojo.Log.error("Exception");
                },
            });
        }

        PageAssistant.prototype.parseResult = function (transport) {
            var newData = [];
            var theStuff = transport.responseText;
            try {
                var json = theStuff.evalJSON();
            } catch(e) {
                Mojo.Log.error(e);
            }
            // this is where I believe I am wrong
            for (j = 0; j < json.data.count; j++) {
                var thread = json.data.children[j];
                newData[j] = { title: thread.data.author };
            }
            this.myListModel["items"] = newData;
            this.controller.modelChanged(this.myListModel, this);
            $("search_divScrim").hide();
        }

    So, where I commented that I believe I am wrong, I am just trying to get the title out of this JSON data:

        {
            kind: Listing
            data: {
                children: [
                    {
                        kind: food
                        data: {
                            author: Foodmaster
                            hidden: false
                            title: You should eat this
                        }
                    },
                    // then it repeats with the kind: and data
                ]
            }
        }

    Can anyone see where I went wrong? I would also like to know how to view the log, as I have log events but can't figure out where to look to see if any of them are being thrown.
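
    Hedged, from the sample payload alone: there is no count field under data, so json.data.count is undefined and the loop body never runs. The array's own length is the natural bound. A minimal sketch of the loop:

        // children is a plain array, so use its length instead of a
        // nonexistent data.count field.
        var children = json.data.children;
        for (var j = 0; j < children.length; j++) {
            newData[j] = { title: children[j].data.author };
        }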


  • show-hide image onmouseover

    - by butters
    I have 3 images on top of each other. The first one is a normal .jpg image, the second a greyscale version, and the 3rd adds some kind of effect with a transparent .png. Now what I want is: if I move the mouse over those images, the greyscale image is hidden (or replaced by another image), and afterwards visible again. The problem here is that I am a js noob, so it's kind of hard for me to find a solution. My code looks something like this:

        <html>
        <head>
        <style type="text/css">
        <!--
        ul li {
            display: inline-table;
        }
        .frame {
            position: relative;
            height: 110px;
            width: 110px;
        }
        .frame div {
            position: absolute;
            top: 0px;
            left: 0px;
        }
        .effect {
            background: url(images/effect.png) no-repeat;
            height: 110px;
            width: 110px;
        }
        .image {
            height: 100px;
            width: 100px;
            border: 1px solid red;
            margin: 4px;
        }
        .greyscale {
            height: 100px;
            width: 100px;
            border: 1px solid red;
            margin: 4px;
        }
        -->
        </style>
        </head>
        <body>
        <ul>
            <li>
                <div class="frame">
                    <div class="image"><img src="images/pic1.jpg" height="100" width="100"></div>
                    <div class="greyscale"><img src="images/grey1.jpg" height="100" width="100"></div>
                    <div class="effect">qwert</div>
                </div>
            </li>
            <li>
                <div class="frame">
                    <div class="image"><img src="images/pic2.jpg" height="100" width="100"></div>
                    <div class="greyscale"><img src="images/grey2.jpg" height="100" width="100"></div>
                    <div class="effect">qewrt</div>
                </div>
            </li>
        </ul>
        </body>
        </html>

    It would be super-awesome if someone can help me out :)
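
    A minimal jQuery sketch, assuming the markup above stays as-is: fade the greyscale layer out while the pointer is over a frame, and back in when it leaves. (.stop() keeps rapid mouse movement from queueing animations.)

        $('.frame').hover(
            function () { $(this).find('.greyscale').stop(true, true).fadeOut(200); },
            function () { $(this).find('.greyscale').stop(true, true).fadeIn(200); }
        );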


  • How do I display the checked value in a checkbox on a Google plus style popup box?

    - by user946742
    After reading this post on Stack Overflow, "Google plus popup box when hovering over thumbnail?", I was inspired to add it to my site. I managed to do so, and the script adds the contacts to my database. So far, awesome! However, my problem (which also appears in the example) is that it does not display the "checked" value, so the user will never know if they already added someone to their list or not. Is PHP the correct way to display checked values? Here is my HTML code:

        <ul style="list-style: none; padding: 2px;">
            <li style="padding: 5px 2px;">
                <input type="checkbox" id="Friends" name="circles" value="Friends" '.$checked1.'/> Friends
            </li>
            <li style="padding: 5px 2px;">
                <input type="checkbox" id="Following" name="circles" value="Following" '.$checked2.'/> Following
            </li>
            <li style="padding: 5px 2px;">
                <input type="checkbox" id="Family" name="circles" value="Family" '.$checked3.'/> Family
            </li>
            <li style="padding: 5px 2px;">
                <input type="checkbox" id="Acquaintances" name="circles" value="Acquaintances" '.$checked4.'/> Acquaintances
            </li>
        </ul>

    And my PHP code is:

        if ($circle_check_friends > 0) {
            $ckecked1 = 'checked=""';
        } else if ($circle_check_following > 0) {
            $ckecked2 = 'checked=""';
        } else if ($circle_check_family > 0) {
            $ckecked3 = 'checked=""';
        } else if ($circle_check_acquaintances > 0) {
            $ckecked4 = 'checked=""';
        } else if ($circle_check_friends = 0) {
            $ckecked1 = '';
        } else if ($circle_check_following = 0) {
            $ckecked2 = '';
        } else if ($circle_check_family = 0) {
            $ckecked3 = '';
        } else if ($circle_check_acquaintances = 0) {
            $ckecked4 = '';
        }

    I'm lost, because this is not giving me the result I want, i.e. for the checked values to be displayed according to the user's choice. Your help is highly appreciated. Thank you all in advance. George
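
    Three bugs stand out in the PHP as posted: the else-if chain means only one circle can ever be evaluated, the "= 0" branches are assignments rather than comparisons, and the variables are set as $ckecked1..4 but the HTML reads $checked1..4. A hedged sketch treating each circle independently:

        <?php
        // Independent ternaries: each checkbox is decided on its own, and the
        // variable names match what the HTML actually interpolates.
        $checked1 = ($circle_check_friends       > 0) ? 'checked="checked"' : '';
        $checked2 = ($circle_check_following     > 0) ? 'checked="checked"' : '';
        $checked3 = ($circle_check_family        > 0) ? 'checked="checked"' : '';
        $checked4 = ($circle_check_acquaintances > 0) ? 'checked="checked"' : '';
        ?>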


  • Can't get description RSS tag data with JavaScript

    - by AdamB
    I'm currently making a widget to take and display items from a feed. I have this working for the most part, but for some reason the data within the <description> tag inside each item comes back as empty, while I get the data in the <title> and <link> tags no problem. feed is an xmlhttp.responseXML object.

        var items = feed.getElementsByTagName("item");
        for (var i = 0; i < 10; i++) {
            container = document.getElementById('list');
            new_element = document.createElement('li');
            title = items[i].getElementsByTagName("title")[0].firstChild.nodeValue;
            link = items[i].getElementsByTagName("link")[0].firstChild.nodeValue;
            alert(items[i].getElementsByTagName("description")[0].firstChild.nodeValue);
            new_element.innerHTML = "<a href=\"" + link + "\">" + title + "</a> ";
            container.insertBefore(new_element, container.firstChild);
        }

    I have no idea why it wouldn't work for the <description> tag when it works for the other tags. Here is an example of the RSS feed it's trying to parse:

        <rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/">
        <channel>
            <title>A title</title>
            <link>http://linksomehwere</link>
            <description>The title of the feed</description>
            <language>en-us</language>
            <item>
                <pubDate>Fri, 10 Jul 2009 11:34:49 -0500</pubDate>
                <title>Awesome Title</title>
                <link>http://link/to/thing</link>
                <guid>http://link/to/thing</guid>
                <description>
                    <![CDATA[
                    <p>some html crap</p> blah blah balh
                    ]]>
                </description>
            </item>
        </channel>
        </rss>
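
    Hedged, from the sample feed alone: the item's <description> starts with whitespace before the CDATA section, so firstChild is a whitespace-only text node and its nodeValue looks empty. Reading all child nodes (text and CDATA nodes both expose nodeValue) is the portable fix:

        // Concatenate every child node instead of trusting firstChild.
        var desc = items[i].getElementsByTagName("description")[0];
        var text = "";
        for (var n = 0; n < desc.childNodes.length; n++) {
            text += desc.childNodes[n].nodeValue;
        }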


  • Using PHP session_id() to Make Sure iframe is Generated by Our Server Dynamically

    - by Michael Robinson
    We use iframes to show ads on our site. Iframes are used to allow us to keep the ad generation code and other site modules separate. As we track ad views on our site, and need to keep an accurate count of which page type gets what views, I must ensure that users can't simply copy-paste the iframe in which an ad is loaded onto another site. This would inflate the ad count for that page, and the count would no longer match the view count of the page the iframe "should" be displayed in.

    Before anyone says so: no, I can't simply compare the page view count with the ad view count, or use the page view count * number of ads per page, as the number of ads per page will not necessarily be static. I need to come up with a solution that allows ads to be shown only in iframes that are generated dynamically and are shown on our pages.

    I am not familiar with PHP sessions, but from what little reading I have had time to do, the following seems to be an acceptable solution:

    1. Add "s = session_id()" to the src of the ad's iframe.
    2. In the code that receives and processes ad requests, only return (and count) an ad if s == session_id().

    Please correct me if I'm wrong, but this would ensure:

    1. Ads would only be returned to iframes whose src was generated alongside the rest of the page's content, as is the case during normal use.
    2. We can return our logo to ad calls with an invalid session_id.

    So a simple example would be, one of our pages:

        <?php session_start(); ?>
        <div id="someElement">
            <!-- EVERYONE LOVES ADS -->
            <iframe src="http://awesomesite.com/ad/can_has_ad.php?s=<?php echo session_id(); ?>"></iframe>
        </div>

    ad/can_has_ad.php:

        <?php
        session_start();
        if ($_GET['s'] == session_id()) {
            echo 'can has ad';
        } else {
            echo '<img src="http://awesomesite.com/images/canhaslogo.jpg"/>';
        }
        ?>

    And finally, copied code with a static 's' parameter:

        <!-- HAHA LULZ I WILL SCREW WITH YOUR AD VIEW COUNTS LULZ HAHA -->
        <iframe src="http://awesomesite.com/ad/can_has_ad.php?s=77f2b5fcdab52f52607888746969b0ad"></iframe>

    This would give them an iframe showing our awesome site's logo, and not screw with our view counts. I made some basic test cases: two files, one that generates the iframe and echoes it, and one that the iframe's src points to, which checks the 's' parameter and shows an appropriate message depending on the result. I copied the iframe into a file and hosted it on a different server, and the correct message was displayed (cannot has ad).

    So, my question is: would this work, or am I being a PHP session noob, with the above test being a total fluke? Thanks for your time!

    Edit: I'm trying to solve this without touching the SQL server.
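
    One hedged caveat on the scheme: a raw session id can be replayed for as long as that session lives. A variant sketch that signs a short-lived token instead (SECRET_KEY is a hypothetical server-side constant, not something from the question):

        <?php
        // Sketch only: sign the session id plus an expiry, so a copied iframe
        // src goes stale after five minutes even if the session is still alive.
        $expires = time() + 300;
        $token   = hash_hmac('sha256', session_id() . '|' . $expires, SECRET_KEY);
        // iframe src becomes: can_has_ad.php?e=<expires>&t=<token>
        // can_has_ad.php recomputes the HMAC from its own session_id() and
        // $_GET['e'], then checks token equality and $_GET['e'] > time().
        ?>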


  • PDF Report generation

    - by IniTech
    EDIT: I completed this project using ABCpdf. For anyone interested, I love this product and their support is A+. Everything I listed as a 'Con' for the HTML-to-PDF solution was easily doable in ABCpdf.

    I've been charged with creating a data-driven PDF report. After reviewing the plethora of options, I have narrowed it down to 2. I need you all to help me decide, or offer alternatives I haven't considered. Here are the requirements:

    - 100% data driven
    - Eventually PDF (a stop in HTML is fine, so long as it is converted)
    - Can be run with multiple sets of data (the layout is always the same, the data is variable)
    - Contains normal analysis-style copy (saved in DB with HTML markup)
    - Contains tables (data for tables is generated at run-time)
    - Header/page # on each page
    - Table of contents
    - .NET (VB or C#)
    - Done quickly

    Now, because the report is going to be generated with multiple sets of data, I don't think a stamped PDF template will work, since I won't know how long or how many pages a certain piece of the report could require. So, I think my best options are:

    1. Programmatic creation using an iText-like solution.
    2. Generate in HTML and convert to PDF using a third-party application (ABCpdf is the tool I have played with so far).

    Both solutions have their pros and cons.

    Programmatic solution. Pros:

    - Flexible
    - Easy page numbering/page header/table of contents
    - Free

    Cons:

    - Time consuming (writing a layer on top of iText to do what I need and keeping it maintainable)
    - Since the copy is already stored in the DB with HTML markup, I would have to parse through the data before placing it into the PDF, ensuring I don't have to break paragraphs into chunks so I can apply bold, italic, underline, etc. to specific phrases. This seems like a huge PITA, and I hope I am wrong about that assumption.

    HTML to PDF. Pros:

    - Easy to generate from the DB (no parsing necessary)
    - Many tools for conversion
    - Uses technology I am already familiar with
    - Built-in "print preview" (not a requirement, but nice)

    Cons (edited after project completion: all of my assumptions were incorrect and ABCpdf is awesome):

    1. Almost impossible to generate page headers - Not true
    2. Very difficult to generate page numbers - Not true
    3. Nearly impossible to generate a table of contents - Not true
    4. (Cross-browser support isn't a con; since it's internal, I can dictate what browser to use)
    5. Conversion tool quirks; may not convert exactly as rendered in the browser - Not true
    6. Overall, I thought it would be very hard to format the HTML exactly as I would want it to appear/convert to PDF - Not true

    That's it. I need the community's help in deciding which way I should go. I might be wrong about some of my pro/con assumptions. If I am, please tell me. All thoughts and suggestions are welcome and appreciated. Thanks.


  • .Net Remote Log Querying

    - by jlafay
    I have a Windows service that I'm working on, consisting of the service itself, a WF service (using WorkflowServiceHost), a workflow (WorkflowApplication) that queries and processes data from a SQL Server DB, and a comm marshall class that handles data flow between the service and the WF.

    The WF does a lot of heavy data processing, and the original app (early VB6) logged all the processing and displayed the results on the screen of the host machine. Critical events will be committed to the event log, because I strongly believe that should be common practice: admins naturally look there, and it already has support for remote viewing. The workflow will also need to write logging events as it processes and iterates according to our business logic, such as: records queried, records returned, records processed, etc. The data is very critical and we need to log actions as they occur. The logs are currently kept as text files on disk, and I think that is OK. Ideally I would like to record log events in XML so they're easier to query, and because it is less costly than a DB, especially since our DB servers do a lot of heavy processing anyway.

    Since we are essentially replacing a VB6 application with a robust Windows service (taking advantage of WF 4.0), it has been requested that a remote client also be created. It receives callbacks from the service after subscribing to it and being added to a collection of subscribers. Basic statistics and summaries are updated client-side after receiving basic monitoring data of what is going on with the service. We would also like to provide a way to drill into details when we need to examine what is going on, because this is a long-running data processing service and issues need to be addressed immediately.

    What is the best way to implement some type of query that is sent from the client to the service and returned to the client? Would it be efficient to expose another method on the service, have it pass the query off to a class/object that examines the XML files by whatever specification, and then return the results to the client? That's the main concern: I don't want the service's processing to bottleneck much while this occurs. WF already seems to thread well for the most part, but I want to make sure this is the right way to go about it. Any suggestions/recommendations on how to architect and implement a small log-querying framework for a remote service would be awesome.


  • Set a specific stylesheet based on a session variable in JavaScript

    - by user2371301
    I have an option for a user to select his/her own theme while logged into the system. This theme is set in a MySQL database and fetched each time the user logs in, via:

        <?php $_SESSION['SESS_THEME_NAME']; ?>

    Now, I had this working in a PHP file, but unfortunately I need it to work in JavaScript instead, and I need some help. I looked at the code using the developer tools in Google Chrome, and it looks like the above code is not resolving within the JavaScript file. Which makes sense, because you can't access session variables within a JavaScript file (as I found by searching Google).

    The code is basically supposed to set the specific stylesheet based on the value extracted from the MySQL database. So if the database says Default, the script needs to tell the webpage to use the default.css file, and so on and so forth. My attempt at writing this is as follows:

        var themName = "<?php $_SESSION['SESS_THEME_NAME']; ?>";
        if (themeName == "Default") {
            document.write("<link re='stylesheet' type='text/css' href='css/mws-theme.css'>");
        };
        if (themeName == "Army") {
            document.write("<link rel='stylesheet' type='text/css' href='css/mws-theme-army.css'>");
        };
        if (themeName == "Rocky Mountains") {
            document.write("<link rel='stylesheet' type='text/css' href='css/mws-theme-rocky.css'>");
        };
        if (themeName == "Chinese Temple") {
            document.write("<link rel='stylesheet' type='text/css' href='css/mws-theme-chinese.css'>");
        };
        if (themeName == "Boutique") {
            document.write("<link rel='stylesheet' type='text/css' href='css/mws-theme-boutique.css'>");
        };
        if (themeName == "Toxic") {
            document.write("<link rel='stylesheet' type='text/css' href='css/mws-theme-toxic.css'>");
        };
        if (themeName == "Aquamarine") {
            document.write("<link rel='stylesheet' type='text/css' href='css/mws-theme-aquamarine.css'>");
        };

    Any help whatsoever would be awesome and much, much appreciated! I am reaching a deadline :/
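
    Three concrete issues in the snippet, hedged only in that this assumes the script is emitted from a PHP page (a .js file is never run through PHP): the variable is declared as themName but tested as themeName, the PHP tag never echoes the value, and the first branch says re= instead of rel=. A lookup-table sketch that fixes all three and scales past seven themes:

        var themeName = "<?php echo $_SESSION['SESS_THEME_NAME']; ?>";
        var themes = {
            "Default":         "mws-theme.css",
            "Army":            "mws-theme-army.css",
            "Rocky Mountains": "mws-theme-rocky.css",
            "Chinese Temple":  "mws-theme-chinese.css",
            "Boutique":        "mws-theme-boutique.css",
            "Toxic":           "mws-theme-toxic.css",
            "Aquamarine":      "mws-theme-aquamarine.css"
        };
        document.write('<link rel="stylesheet" type="text/css" href="css/' +
            (themes[themeName] || "mws-theme.css") + '">');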


  • What variable dictates position of non-focused elements in the roundabout plugin?

    - by kristina childs
    Part of the problem here is that I'm not sure what the best language is to use in order to find the solution. I searched and searched, so please forgive me if this is already a thread somewhere.

    I'm using the roundabout plugin to cycle through 3 divs. Each div is 794px wide, which makes the roundabout-in-focus element 794px and the two not in focus 315.218px wide, positioned so half of each is hidden by the in-focus div. This is all well and good; however, the total width of the display needs to stay within 1000px (ideally 980px, but I can fudge if need be).

    Basically, I want the non-focused divs to be 3/4 hidden by the in-focus div, but for the life of me I can't figure out which variables I need to edit in order to do it. Unfortunately it's not one of the many easily-changed options like z-index and minScale. I tried minScale, but it's clear that isn't going to work. The plugin outputs this code:

        <li class="roundabout-moveable-item" style="position: absolute; left: -57px; top: 205px; width: 319.982px; height: 149.513px; opacity: 0.7; z-index: 146; font-size: 5.6px;">

    I need to find out what changes the left positioning so it's shifted closer to the center of the stage, like this:

        <li class="roundabout-moveable-item" style="position: absolute; left: 5px; top: 205px; width: 319.982px; height: 149.513px; opacity: 0.7; z-index: 146; font-size: 5.6px;">

    I tried playing with the positioning functions of the plugin, but all that did was shift everything in tandem left or right. Any help is greatly appreciated; this site is going to be awesome once I figure out all this jQuery stuff! Here is a link to my .js file: http://avalon.eaw.com/scripts/jquery.roundabout2.js

    I've got an overflow:hidden on the containing element to help guide the positioning of those non-focused items.


  • Blit Queue Optimization Algorithm

    - by martona
    I'm looking to implement a module that manages a blit queue. There's a single surface, and portions of this surface (bounded by rectangles) are copied elsewhere within the surface:

        add_blt(rect src, point dst);

    Any number of operations can be posted, in order, to the queue. Eventually the user of the queue will stop posting blits and ask for an optimal set of operations to actually perform on the surface. The task of the module is to ensure that no pixel is copied unnecessarily.

    This gets tricky because of overlaps, of course. A blit could re-blit a previously copied pixel. Ideally, blit operations would be subdivided in the optimization phase in such a way that every block goes to its final place with a single operation.

    It's tricky but not impossible to put this together; I'm just trying not to reinvent the wheel. I looked around on the net, and the only thing I found was the SDL_BlitPool library, which assumes that the source surface differs from the destination. It also does a lot of grunt work, seemingly unnecessarily: regions and similar building blocks are a given. I'm looking for something higher-level. Of course, I'm not going to look a gift horse in the mouth, and I also don't mind doing actual work... If someone can come forward with a basic idea that makes this problem seem less complex than it does right now, that'd be awesome too.

    EDIT: Thinking about aaronasterling's answer... could this work?

    1. Implement customized region handler code that can maintain metadata for every rectangle it contains. When the region handler splits up a rectangle, it automatically associates the metadata of that rectangle with the resulting sub-rectangles.
    2. When the optimization run starts, create an empty region handled by the above customized code; call this the master region.
    3. Iterate through the blt queue, and for every entry:
       - Let srcrect be the source rectangle for the blt being examined.
       - Get the intersection of srcrect and the master region into temp region.
       - Remove temp region from the master region, so the master region no longer covers temp region.
       - Promote srcrect to a region (srcrgn) and subtract temp region from it.
       - Offset temp region and srcrgn by the vector of the current blt: their union will cover the destination area of the current blt.
       - Add to the master region all rects in temp region, retaining the original source metadata (step one of adding the current blt to the master region).
       - Add to the master region all rects in srcrgn, adding the source information for the current blt (step two of adding the current blt to the master region).
    4. Optimize the master region by checking whether adjacent sub-rectangles that are merge candidates have the same metadata. Two sub-rectangles are merge candidates if (r1.x1 == r2.x1 && r1.x2 == r2.x2) || (r1.y1 == r2.y1 && r1.y2 == r2.y2). If yes, combine them.
    5. Enumerate the master region's sub-rectangles. Every rectangle returned is an optimized blt operation destination. The associated metadata is the blt operation's source.


  • Configuring an HTML page from an original demo page

    - by Wold
    I forked rainyday.js through GitHub, an awesome JavaScript program made by maroslaw, at this link: https://github.com/maroslaw/rainyday.js. Basically I took his demo page and my own photo city.jpg and changed the applicable fields so that I could run it on my own site, but only the picture loads and the script itself doesn't start to run. I'm pretty new to HTML and JavaScript, so I'm probably omitting something very simple, but here is the script from the demo code:

        <script src="rainyday.js"></script>
        <script>
        function getURLParameter(name) {
            return decodeURIComponent((new RegExp('[?|&]' + name + '=' + '([^&;]+?)(&|#|;|$)').exec(location.search)||[,''])[1].replace(/\+/g, '%20'))||null;
        }

        function demo() {
            var image = document.getElementById('background');
            image.onload = function () {
                var engine = null;
                var preset = getURLParameter('preset') || '1';
                if (preset === '1') {
                    engine = new RainyDay({ element: 'background', blur: 10, opacity: 1, fps: 30, speed: 30 });
                    engine.rain([ [1, 2, 8000] ]);
                    engine.rain([ [3, 3, 0.88], [5, 5, 0.9], [6, 2, 1] ], 100);
                } else if (preset === '2') {
                    engine = new RainyDay({ element: 'background', blur: 10, opacity: 1, fps: 30, speed: 30 });
                    engine.VARIABLE_GRAVITY_ANGLE = Math.PI / 8;
                    engine.rain([ [0, 2, 0.5], [4, 4, 1] ], 50);
                } else if (preset === '3') {
                    engine = new RainyDay({ element: 'background', blur: 10, opacity: 1, fps: 30, speed: 30 });
                    engine.trail = engine.TRAIL_SMUDGE;
                    engine.rain([ [0, 2, 0.5], [4, 4, 1] ], 100);
                }
            };
            image.crossOrigin = 'anonymous';
            if (getURLParameter('imgur')) {
                image.src = 'http://i.imgur.com/' + getURLParameter('imgur') + '.jpg';
            } else if (getURLParameter('img')) {
                image.src = getURLParameter('img') + '.jpg';
            }
            var youtube = getURLParameter('youtube');
            if (youtube) {
                var div = document.getElementById('sound');
                var player = document.createElement('iframe');
                player.frameborder = '0';
                player.height = '1';
                player.width = '1';
                player.src = 'https://youtube.com/embed/' + youtube + '?autoplay=1&controls=0&showinfo=0&autohide=1&loop=1';
                div.appendChild(player);
            }
        }
        </script>

    This is where I am naming my background and specifying the photo from within the directory:

        <body onload="demo();">
            <div id="sound" style="z-index: -1;"></div>
            <div id="parent">
                <img id='background' alt="background" src="city.jpg" />
            </div>
        </body>

    The actual code for the whole rainyday.js script can be found here: https://github.com/maroslaw/rainyday.js/blob/master/rainyday.js. Thanks in advance for any help and advice!
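
    A hedged guess at the failure mode: the demo only starts the engine inside image.onload, and it only assigns image.src afterwards via the ?img=/?imgur= URL parameters. With a hardcoded src="city.jpg", the image can already be loaded (especially from cache) before onload is attached, so the handler never fires. A minimal sketch of a guard, where startRain is a hypothetical wrapper around the demo's onload body:

        var image = document.getElementById('background');
        if (image.complete) {
            startRain();            // already loaded (e.g. from cache)
        } else {
            image.onload = startRain;
        }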


  • Akka framework support for finding duplicate messages

    - by scala_is_awesome
    I'm trying to build a high-performance distributed system with Akka and Scala. If a message requesting an expensive (and side-effect-free) computation arrives, and the exact same computation has already been requested before, I want to avoid computing the result again. If the previously requested computation has already completed and the result is available, I can cache it and re-use it.

    However, the time window in which a duplicate computation can be requested may be arbitrarily small. For example, I could get a thousand or a million messages requesting the same expensive computation at the same instant, for all practical purposes.

    There is a commercial product called Gigaspaces that supposedly handles this situation. However, there seems to be no framework support for dealing with duplicate work requests in Akka at the moment. Given that the Akka framework already has access to all the messages being routed through it, a framework solution could make a lot of sense here. Here is what I am proposing the Akka framework should do:

    1. Create a trait to indicate a type of messages (say, "ExpensiveComputation" or something similar) that are to be subject to the following caching approach.
    2. Smartly (hashing etc.) identify identical messages received by (the same or different) actors within a user-configurable time window. Other options: select a maximum buffer size of memory to be used for this purpose, subject to (say, LRU) replacement, etc. Akka could also choose to cache only the results of messages that were expensive to process; messages that took very little time to process can be re-processed if needed, with no need to waste precious buffer space caching them and their results.
    3. When identical messages (received within that time window, possibly "at the same instant") are identified, avoid unnecessary duplicate computations. The framework would do this automatically; essentially, the duplicate messages would never be received by a new actor for processing. They would silently vanish, and the result from processing the message once (whether that computation was already done in the past or is ongoing right then) would be sent to all appropriate recipients: immediately if already available, and upon completion of the computation if not.

    Note that messages should be considered identical even if their "reply" fields differ, as long as the semantics/computations they represent are identical in every other respect. Also note that the computation should be purely functional, i.e. free from side effects, for the suggested caching optimization to work without changing the program semantics at all.

    If what I am suggesting is not compatible with the Akka way of doing things, and/or if you see strong reasons why this is a very bad idea, please let me know. Thanks, Is Awesome, Scala
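
    For reference, a hedged sketch of the usual application-level workaround (this is not an existing Akka facility, and the names Compute, Result and expensiveWork are hypothetical): a front actor memoizes Futures keyed by the request value, so concurrent duplicates share a single computation and late arrivals get the cached result. Written against the modern Akka classic API rather than the Akka of the question's era:

        import akka.actor.Actor
        import akka.pattern.pipe
        import scala.concurrent.Future

        class DedupActor extends Actor {
          import context.dispatcher
          // Compute must be a case class (value equality) for keying to work.
          private var inFlight = Map.empty[Compute, Future[Result]]

          def receive = {
            case req: Compute =>
              val f = inFlight.getOrElse(req, {
                val started = Future(expensiveWork(req)) // runs at most once per key
                inFlight += req -> started
                started
              })
              f pipeTo sender() // every duplicate requester gets the same result
          }
        }

    An LRU bound and a time window would still have to be layered on top, as proposed above.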


  • scrollTo (jQuery) won't work in Firefox

    - by William
    For some reason, Firefox seems to ignore my scrollTo call, even though it works in Chrome and Safari. Here's an example link: http://blog.rainbird.me/post/2358248459/blowholes-are-awesome. Chrome and Safari will automatically scroll to the top of the image (with an offset of 20 pixels); it doesn't work in Firefox. I'm baffled! Code:

        $(document).ready(function() {
            $(".photoShell img").lazyload({
                placeholder: "http://william.rainbird.me/boston-polaroid/white.gif",
                threshold: 200
            });

            window.viewport = {
                height: function() { return $(window).height(); },
                width: function() { return $(window).width(); },
                scrollTop: function() { return $(window).scrollTop(); },
                scrollLeft: function() { return $(window).scrollLeft(); }
            };

            $(".photoShell img").hide();
            $(".photoShell .caption").hide();

            $(".photoShell img").load(function() {
                var maxWidth = viewport.width() - 40; // Max width for the image
                if (maxWidth > 960) {
                    maxWidth = 960;
                }
                var maxHeight = viewport.height() - 50; // Max height for the image
                var ratio = 0;                          // Used for aspect ratio
                var width = $(this).width();            // Current image width
                var height = $(this).height();          // Current image height

                // Check if the current width is larger than the max
                if (width > maxWidth) {
                    ratio = maxWidth / width;              // get ratio for scaling image
                    $(this).css("width", maxWidth);        // Set new width
                    $(this).css("height", height * ratio); // Scale height based on ratio
                    height = height * ratio;               // Reset height to match scaled image
                    width = width * ratio;                 // Reset width to match scaled image
                }

                // Check if current height is larger than max
                if (height > maxHeight) {
                    ratio = maxHeight / height;            // get ratio for scaling image
                    $(this).css("height", maxHeight);      // Set new height
                    $(this).css("width", width * ratio);   // Scale width based on ratio
                    width = width * ratio;                 // Reset width to match scaled image
                }

                $(this).parents('div.photoShell').css("width", $(this).width() + 22);
                $(this).parents('div.photoShell').addClass('loaded');
                $(this).next(".caption").show();

                var scrollNum = $(this).parents('div.photoShell').offset().top;
                $.scrollTo(scrollNum - 20, { duration: 700, axis: "y" });
                $(this).fadeIn("slow");
            }).each(function() {
                // trigger the load event in case the image has been cached by the browser
                if (this.complete) $(this).trigger('load');
            });
        });
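
    A hedged workaround sketch: Firefox scrolls the document via the html element while WebKit browsers scroll body, and that difference trips up some window-scrolling code. Animating both elements directly sidesteps the plugin for this one call:

        // Replaces the $.scrollTo(...) line; behaves the same in both engines.
        $('html, body').animate({ scrollTop: scrollNum - 20 }, 700);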


  • How to call a set of variable functions based on class on a group of elements

    - by user1547007
    I have the following HTML code:

        <i class="small ele class1"></i>
        <i class="medium ele class1"></i>
        <i class="large ele class1"></i>
        <div class="clear"></div>
        <i class="small ele class2"></i>
        <i class="medium ele class2"></i>
        <i class="large ele class2"></i>
        <div class="clear"></div>
        <i class="small ele class3"></i>
        <i class="medium ele class3"></i>
        <i class="large ele class3"></i>
        <div class="clear"></div>
        <i class="small ele class4"></i>
        <i class="medium ele class4"></i>
        <i class="large ele class4"></i>

    And my JavaScript looks like so:

        var resize = function(face, s) {
            var bb = face.getBBox();
            console.log(bb);
            var w = bb.width;
            var h = bb.height;
            var max = w;
            if (h > max) {
                max = h;
            }
            var scale = s / max;
            var ox = -bb.x + ((max - w) / 2);
            var oy = -bb.y + ((max - h) / 2);
            console.log(s + ' ' + h + ' ' + bb.y);
            face.attr({
                "transform": "s" + scale + "," + scale + ",0,0" + "t" + ox + "," + oy
            });
        }

        $('.ele').each(function() {
            var s = $(this).innerWidth();
            var paper = Raphael($(this)[0], s, s);
            var face = $(this).hasClass("class1") ? class1Generator(paper) : class4Generator(paper);
            /*switch (true) {
                case $(this).hasClass('class1'): class1Generator(paper); break;
                case $(this).hasClass('class2'): class2Generator(paper); break;
                case $(this).hasClass('class3'): class3Generator(paper); break;
                case $(this).hasClass('class4'): class4Generator(paper); break;
            }*/
            resize(face, s);
        });

    My question is: how could I make this line of code more scalable? I tried using a switch, but the script above only ends up calling two of the functions based on one class check. What if I have 10 classes? I don't think this is the best solution. I created a jsFiddle: http://jsfiddle.net/7uUgz/6/

        //var face = $(this).hasClass("awesome") ? awesomeGenerator(paper) : awfulGenerator(paper);
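
    A hedged sketch of the usual pattern: map class names to generator functions, so adding a tenth class is one new table entry rather than another switch case. The generator names are assumed from the question:

        var generators = {
            class1: class1Generator,
            class2: class2Generator,
            class3: class3Generator,
            class4: class4Generator
        };

        $('.ele').each(function () {
            var s = $(this).innerWidth();
            var paper = Raphael(this, s, s);
            // Find the first class this element has that names a generator.
            for (var name in generators) {
                if ($(this).hasClass(name)) {
                    resize(generators[name](paper), s);
                    break;
                }
            }
        });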


  • jQuery remove class from image

    - by user1269625
    Hey y'all, I have these 3 image thumbnails here:

        <div class="wpcart_gallery" style="text-align:center; padding-top:5px;">
            <a class="thickbox cboxElement" title="DSC_0118" href="http://www.taranmarlowjewelry.com/wp-content/uploads/2012/07/DSC_0118.jpg" rel="Teardrop Druzy Amethyst Ring" rev="http://www.taranmarlowjewelry.com/wp-content/uploads/2012/07/DSC_0118.jpg">
                <img class="attachment-gold-thumbnails colorbox-736" width="50" height="50" title="DSC_0118" alt="DSC_0118" src="http://www.taranmarlowjewelry.com/wp-content/uploads/2012/07/DSC_0118-50x50.jpg">
            </a>
            <a class="thickbox cboxElement" title="P7230376" href="http://www.taranmarlowjewelry.com/wp-content/uploads/2012/07/P7230376.jpg" rel="Teardrop Druzy Amethyst Ring" rev="http://www.taranmarlowjewelry.com/wp-content/uploads/2012/07/P7230376.jpg">
                <img class="attachment-gold-thumbnails colorbox-736" width="50" height="50" title="P7230376" alt="P7230376" src="http://www.taranmarlowjewelry.com/wp-content/uploads/2012/07/P7230376-50x50.jpg">
            </a>
            <a class="thickbox cboxElement" title="P7230378" href="http://www.taranmarlowjewelry.com/wp-content/uploads/2012/07/P7230378.jpg" rel="Teardrop Druzy Amethyst Ring" rev="http://www.taranmarlowjewelry.com/wp-content/uploads/2012/07/P7230378.jpg">
                <img class="attachment-gold-thumbnails colorbox-736" width="50" height="50" title="P7230378" alt="P7230378" src="http://www.taranmarlowjewelry.com/wp-content/uploads/2012/07/P7230378-50x50.jpg">
            </a>
        </div>

    What I am trying to do is come up with jQuery code that removes cboxElement from the first image, and when I click on one of the images, removes cboxElement from it and puts cboxElement back on the other images. I also have this big image, and when you click on one of the thumbnails, the big image is replaced by the thumbnail's image; that's really the thumbnail I want to exclude. Could I possibly just say: if one of the 3 thumbnails' src matches the big image's src, remove the class from that thumbnail? Which way would be better? I am very new at jQuery :( I hope this makes sense. Here is the code for the big image:

        <a class="preview_link cboxElement" style="text-decoration:none;" href="http://www.taranmarlowjewelry.com/wp-content/uploads/2012/07/DSC_0118.jpg" rel="Teardrop Druzy Amethyst Ring">
            <img id="product_image_736" class="product_image colorbox-736" width="400" src="http://www.taranmarlowjewelry.com/wp-content/uploads/2012/07/DSC_0118.jpg" title="Teardrop Druzy Amethyst Ring" alt="Teardrop Druzy Amethyst Ring">
            <br>
            <div style="text-align:center; color:#F39B91;">Click To Enlarge</div>
        </a>

    Any help or a point in the right direction would be awesome!!
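
    A hedged sketch of the first approach (swap the class on click, seed the first thumbnail on load); whether this plays nicely with your lightbox depends on when it binds its own handlers:

        // Exclude the initially shown thumbnail, then follow the clicks.
        $('.wpcart_gallery a.thickbox').first().removeClass('cboxElement');
        $('.wpcart_gallery a.thickbox').on('click', function () {
            $('.wpcart_gallery a.thickbox').addClass('cboxElement');
            $(this).removeClass('cboxElement');
        });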


  • Connecting Debian and Windows via IPsec VPN with racoon and ipsec-tools

    - by Michi Qne
    I'm having some trouble with the IPsec configuration on my Debian server (6, squeeze). This server should connect via an IPsec VPN to a Windows server, which is protected by a firewall. I've used racoon and ipsec-tools, and this tutorial: http://wiki.debian.org/IPsec. However, I am not quite sure the tutorial fits my purpose, because of some differences:

    1. My host and my gateway are the same server, so I don't have two different IP addresses. I guess that's not a problem.
    2. The other server is a Windows system behind a firewall. Hopefully not a problem.
    3. The subnet of the Windows system is /32, not /24, so I changed it to /32.

    I worked through the tutorial step by step, but I wasn't able to route the IP. The following command didn't work for me:

        ip route add to 172.16.128.100/32 via XXX.XXX.XXX.XXX src XXX.XXX.XXX.XXX

    So I tried the following instead, which obviously didn't solve the problem:

        ip route add to 172.16.128.100 ..

    The next problem is compression. The Windows side doesn't use compression, but 'compression_algorithm none;' doesn't work with my racoon, so the current value is 'compression_algorithm deflate;'.

    So my current result looks like this: when I try to ping the Windows host (ping 172.16.128.100), I receive the following error message from ping:

        ping: sendmsg: Operation not permitted

    And racoon logs:

        racoon: ERROR: failed to get sainfo.

    After googling for a while I came to no conclusion about what the solution is. Does this error message mean that the first phase of IPsec works? I am thankful for any advice. I guess my configs might be helpful. My racoon.conf looks like this:

        path pre_shared_key "/etc/racoon/psk.txt";

        remote YYY.YYY.YYY.YYY {
            exchange_mode main;
            proposal {
                lifetime time 8 hour;
                encryption_algorithm 3des;
                hash_algorithm sha1;
                authentication_method pre_shared_key;
                dh_group 2;
            }
        }

        sainfo address XXX.XXX.XXX.XXX/32 any address 172.16.128.100/32 any {
            pfs_group 2;
            lifetime time 8 hour;
            encryption_algorithm aes 256;
            authentication_algorithm hmac_sha1;
            compression_algorithm deflate;
        }

    And my ipsec-tools.conf looks like this:

        flush;
        spdflush;

        spdadd XXX.XXX.XXX.XXX/32 172.16.128.100/32 any -P out ipsec
            esp/tunnel/XXX.XXX.XXX.XXX-YYY.YYY.YYY.YYY/require;

        spdadd 172.16.128.100/32 XXX.XXX.XXX.XXX/32 any -P in ipsec
            esp/tunnel/YYY.YYY.YYY.YYY-XXX.XXX.XXX.XXX/require;

    If anyone has any advice, that would be awesome. Thanks in advance. Greets, Michael

    Edit: It was a simple copy-and-paste error in an IP address.


  • Managing access to multiple Linux systems

    - by Swartz
    I searched for answers but have found nothing on here... Long story short: a non-profit organization is in dire need of modernizing its infrastructure. The first step is to find an alternative to managing user accounts on a number of Linux hosts. We have 12 servers (both physical and virtual) and about 50 workstations. We have 500 potential users for these systems.

    The individual who built and maintained the systems over the years has retired. He wrote his own scripts to manage it all. It still works; no complaints there. However, a lot of the stuff is very manual and error-prone. The code is messy, and after updates it often needs to be tweaked. Worst of all, there are little to no docs written; there are just a few ReadMes and random notes which may or may not be relevant anymore. So maintenance has become a difficult task.

    Currently, accounts are managed via /etc/passwd on each system. Updates are distributed via cron scripts to the correct systems as accounts are added on the "main" server. Some users have to have access to all systems (like a sysadmin account), others need access to shared servers, while others may need access to workstations or only a subset of those.

    Is there a tool that can help us manage accounts that meets the following requirements?

    - Preferably open source (i.e. free, as the budget is VERY limited)
    - Mainstream (i.e. maintained)
    - Preferably has LDAP integration, or could be made to interface with LDAP or an AD service for user authentication (this will be needed in the near future to integrate accounts with other offices)
    - User management (adding, expiring, removing, lockout, etc.)
    - Allows us to manage which systems (or groups of systems) each user has access to; not all users are allowed on all systems
    - Support for user accounts that could have different homedirs and mounts available depending on which system they are logged into. For example: sysadmin logged into the "main" server has main://home/sysadmin/ as homedir and has all shared mounts; sysadmin logged into a staff workstation would have nas://user/s/sysadmin as homedir (different from above) and a potentially limited set of mounts; a logged-in client would have his/her homedir at a different location and no shared mounts.

    If there is an easy management interface, that would be awesome. And if the tool is cross-platform (Linux / MacOS / *nix), that would be a miracle! I have searched the web and have found nothing suitable. We are open to any suggestions. Thank you.

    EDIT: This question has been incorrectly marked as a duplicate. The linked-to answer only talks about having the same homedirs on all systems, whereas we need different homedirs based on which system the user is currently logged into (MULTIPLE homedirs). Also, access needs to be granted only to some machines, not the whole lot. Mods, please understand the full extent of the problem instead of merely marking it as duplicate for points...


  • How to set up a centralized backup server with lots of offsite workstations, intermittent internet connectivity, and stubborn users?

    - by Zac B
    This might be an impossible question.

    Context: We have around 1000 users across a bunch of computers. We have a centralized office where 900 of the users work most of the time. Most of the computers are laptops; they are very frequently coming on and off the network for hours at a time. Users often take their computers home and do lots of work from home. In addition, a handful of users work elsewhere in the country and are offline (no internet connection whatsoever) for more than half of the time they use their machines. All of the machines are Windows 7/XP.

    Problem: People are always losing data. One day someone accidentally deletes a bunch of files. The next day someone else installs a bad driver or tries to mess with something in system32 and needs a personal data backup/reinstall of Windows. Because so many of our business operations are done without an internet connection, and because computers come on- and offline so frequently, it's unfeasible to make users use network storage for all of their data. We tried giving them Dropboxes, and they stored their files elsewhere. We bought and deployed Altiris, and they uninstalled it and blamed us when they couldn't get back files they had accidentally deleted while they were offline and hadn't taken a backup in months. We tried teaching them backup best practices and using scheduled sync tools to upload things to the network drives, and they turned them off because they "looked like viruses". It doesn't help that many of these users are pretty high up in the business and are not amicable to any sort of "you need to do something regularly because we say so" solution.

    Question: Other than finding another job where IT is treated differently and users are willing to follow best practices, how would people recommend I implement a file backup solution that supports the following:

    - Backs up to a centralized server over LAN or WAN whenever a network link becomes available, or on a schedule.
    - Supports interrupted/resumed backups (and hopefully file-delta-only backups), since connections to the network (WAN or LAN) are often slow and only open for half an hour or so.
    - Supports relatively rapid, "I accidentally deleted the TPS reports! Oh no!" single-file recovery, ideally administered from the central backup server rather than the client PC.
    - Supports local-to-local file-delta backup on a schedule, so that users without a network connection for a few days can still retrieve accidental deletions or whatnot. Ideally, the locally stored backups would be pushed up to the server whenever a network link is available.
    - Isn't configurable on the clients without certain credentials, because the CFOs (who won't give up their admin rights on the domain) will disable it if they can.
    - Backs up the entire hard drive. There are people who are self-righteous about storing things in C:\, or in the recycle bin, or in the C:\Windows dir (yes, I know).

    I'm fine integrating multiple products/solutions, or scripting different programs together myself (I'm a somewhat competent programmer), but I've been drawing a blank on where to start. Dropbox is folder-specific, Altiris doesn't cope with LAN outages or interrupted/resumed backups, and Volume Shadow Copy is awesome for a local-to-local solution, but I don't know how to push days of stored shadow copies up to a server in a 2-hour window of network access. The company is fine with spending decent money on this: thousands (USD) on a server, and hundreds on clients, if necessary.

    I want to emphasize that this isn't a shopping-list request. While I wish there was a program out there that did what I want, I've looked pretty hard and not found anything that fits the bill. Instead, I'm hoping for ideas on where to start hacking things together from scratch/from different technologies to make something stable that works. Cheers!


  • BluRay audio/video stuttering with PowerDVD 11, WinDVD 11 Pro, etc? Xonar/Auzen HD audio option?

    - by jrista
    I recently upgraded my Windows 7 Media Center HTPC due to a motherboard failure (a really old motherboard and CPU; it was on its last legs). I chose to upgrade to an i5 system with everything built into the motherboard. I did my due diligence, researched, and found some hardware that was within my budget. I ended up with:

    - Core i5 2500K (3.3 GHz)
    - Corsair XMS3 2x2Gb DDR3 (4Gb)
    - ASUS P8H 61-M LE/CSM
    - MicroCenter 64Gb SSD
    - (Previous BluRay player, forget the brand)

    The system is pretty awesome and plays everything I have perfectly. I almost went with an Atom solution; however, there have been numerous notes that they do not play Netflix Instant Watch well... and I am a heavy Netflix IW user. High-definition BluRay rips work well, although they usually contain lower audio quality than the BluRays they were ripped from.

    The real problem I am encountering is playing back BluRay video from discs. For some reason, I am encountering rather terrible stuttering problems with both the audio and video. The stuttering is synchronous in both and occurs at seemingly random intervals. I've used PowerDVD 9, the PowerDVD 11 trial, and the WinDVD 11 Pro trial. All three have stuttering problems, although PowerDVD 11 seems to have the least. Watching system resource usage, CPU load is never above 20%, and memory usage tends to be a constant one third of the total available system memory.

    When playback is fine, it's superb; the video is crystal clear. The audio quality is OK, certainly not what I would expect from a BluRay disc. I did some research, and it seems that playing BluRay from a PC causes a downsampling of the audio? I am curious whether the audio is my primary problem here, the cause of the stuttering I am encountering. When stuttering occurs, the audio gets REALLY bad, while the video just pauses momentarily every second until, for whatever reason, everything picks up and runs fine (usually after a few seconds to a couple of minutes).

    The audio chipset is a Realtek HD ALC887 8-channel, supposedly designed to support BluRay playback. Has anyone encountered issues like this playing back BluRay discs on a PC? (Namely with PowerDVD... WinDVD was FAR worse, seemed to have real trouble even reading the discs, and I have no interest in fiddling with it further.) Is there any reason to suspect the video decoding as the problem? (Given how bad the audio gets during a stutter, and how clean the video remains, I am inclined to think the issue boils down to audio.) Is it even remotely possible that the motherboard, CPU, or RAM are causing the stuttering? (All three are pretty blazing fast, faster than the hardware I replaced, which seemed to play BluRay fine with PowerDVD 9.)

    I've read a bit about the Asus Xonar HDAV 1.3 and the Auzen X-Fi HomeTheater HD home theater hi-fi audio cards. It seems they are the only way to get true full-quality, uncompressed BluRay audio bitstreaming over HDMI on a PC. None of the usual suspects seem to have these cards in stock, however. Are these cards worth getting? Are they even still available, or have they been discontinued? (If so, that would indeed be sad; they sound simply fantastic.)

    Read the article

  • How to automate org-refile for multiple todo

    - by lawlist
    I'm looking to automate org-refile so that it will find all of the matches and refile them to a specific location (but not archive them). I found a fully automated method of archiving multiple todos, and I am hoping to find or create (with some help) something similar to this awesome function, but targeting a different heading/location rather than the archive: https://github.com/tonyday567/jwiegley-dot-emacs/blob/master/dot-org.el

    (defun org-archive-done-tasks ()
      (interactive)
      (save-excursion
        (goto-char (point-min))
        (while (re-search-forward "\* \\(None\\|Someday\\) " nil t)
          (if (save-restriction
                (save-excursion
                  (org-narrow-to-subtree)
                  (search-forward ":LOGBOOK:" nil t)))
              (forward-line)
            (org-archive-subtree)
            (goto-char (line-beginning-position))))))

    I also found this (written by aculich), which is a step in the right direction but still requires invoking the function manually for each entry: http://stackoverflow.com/questions/7509463/how-to-move-a-subtree-to-another-subtree-in-org-mode-emacs

    ;; I also wanted a way for org-refile to refile easily to a subtree, so I
    ;; wrote some code and generalized it so that it will set an arbitrary
    ;; immediate target anywhere (not just in the same file).
    ;;
    ;; Basic usage is to move somewhere in Tree B and type C-c C-x C-m to mark
    ;; the target for refiling, then move to the entry in Tree A that you want
    ;; to refile and type C-c C-w, which will immediately refile into the
    ;; target location you set in Tree B without prompting you, unless you
    ;; called org-refile-immediate-target with a prefix arg C-u C-c C-x C-m.
    ;;
    ;; Note that if you press C-c C-w in rapid succession to refile multiple
    ;; entries it will preserve the order of your entries even if
    ;; org-reverse-note-order is set to t, but you can turn it off to respect
    ;; the setting of org-reverse-note-order with a double prefix arg
    ;; C-u C-u C-c C-x C-m.

    (defvar org-refile-immediate nil
      "Refile immediately using `org-refile-immediate-target' instead of prompting.")
    (make-local-variable 'org-refile-immediate)

    (defvar org-refile-immediate-preserve-order t
      "If last command was also `org-refile' then preserve ordering.")
    (make-local-variable 'org-refile-immediate-preserve-order)

    (defvar org-refile-immediate-target nil
      "Value uses the same format as an item in `org-refile-targets'.")
    (make-local-variable 'org-refile-immediate-target)

    (defadvice org-refile (around org-immediate activate)
      (if (not org-refile-immediate)
          ad-do-it
        ;; if last command was `org-refile' then preserve ordering
        (let ((org-reverse-note-order
               (if (and org-refile-immediate-preserve-order
                        (eq last-command 'org-refile))
                   nil
                 org-reverse-note-order)))
          (ad-set-arg 2 (assoc org-refile-immediate-target
                               (org-refile-get-targets)))
          (prog1 ad-do-it
            (setq this-command 'org-refile)))))

    (defadvice org-refile-cache-clear (after org-refile-history-clear activate)
      (setq org-refile-targets (default-value 'org-refile-targets))
      (setq org-refile-immediate nil)
      (setq org-refile-immediate-target nil)
      (setq org-refile-history nil))

    ;;;###autoload
    (defun org-refile-immediate-target (&optional arg)
      "Set current entry as `org-refile' target.
    Non-nil turns off `org-refile-immediate', otherwise `org-refile' will
    immediately refile without prompting for target, using the most recent
    entry in `org-refile-targets' that matches `org-refile-immediate-target'
    as the default."
      (interactive "P")
      (if (equal arg '(16))
          (progn
            (setq org-refile-immediate-preserve-order
                  (not org-refile-immediate-preserve-order))
            (message "Order preserving is turned: %s"
                     (if org-refile-immediate-preserve-order "on" "off")))
        (setq org-refile-immediate (unless arg t))
        (make-local-variable 'org-refile-targets)
        (let* ((components (org-heading-components))
               (level (first components))
               (heading (nth 4 components))
               (string (substring-no-properties heading)))
          (add-to-list 'org-refile-targets
                       (append (list (buffer-file-name))
                               (cons :regexp
                                     (format "^%s %s$"
                                             (make-string level ?*) string))))
          (setq org-refile-immediate-target heading))))

    (define-key org-mode-map "\C-c\C-x\C-m" 'org-refile-immediate-target)

    It sure would be helpful if aculich, or some other maven, could create a variable similar to (setq org-archive-location "~/0.todo.org::* Archived Tasks") so that users can specify the file and heading, the way the org-archive-subtree functionality already allows. I'm doing a search-and-mark because I don't have the wherewithal to create something like org-archive-location for this setup myself.

    EDIT: One step closer -- almost home free . . .

    (defun lawlist-auto-refile ()
      (interactive)
      (goto-char (point-min))
      (re-search-forward "\* UNDATED")
      ;; point must be on a heading for this to work.
      (org-refile-immediate-target)
      (save-excursion
        (re-search-backward "\* UNDATED")
        ;; search backward so that sub-entries of * UNDATED are not
        ;; visited; otherwise this would loop forever.
        (while (re-search-backward "\* \\(None\\|Someday\\) " nil t)
          (org-refile))))
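    To make the request concrete, here is a minimal, untested sketch of the org-archive-location-style interface described above. The variable lawlist-refile-location and the command lawlist-refile-matches are hypothetical names, not part of org-mode; the sketch assumes the target heading already exists in the target file and does not itself match the search regexp.

    ;; Hypothetical analogue of `org-archive-location' for refiling:
    ;; the value names the destination as "FILE::* HEADING".
    (defvar lawlist-refile-location "~/0.todo.org::* Unfinished Tasks"
      "Where `lawlist-refile-matches' sends entries, as \"FILE::* HEADING\".")

    (defun lawlist-refile-matches (regexp)
      "Refile every heading matching REGEXP to `lawlist-refile-location'."
      (interactive "sHeading regexp: ")
      (let* ((parts (split-string lawlist-refile-location "::"))
             (file (expand-file-name (car parts)))
             ;; drop the leading stars from "* HEADING"
             (heading (replace-regexp-in-string
                       "\\`\\*+[ \t]+" "" (cadr parts)))
             ;; locate the target heading; the list built below is the
             ;; (NAME FILE REGEXP POSITION) form that `org-refile' accepts
             (pos (with-current-buffer (find-file-noselect file)
                    (org-find-exact-headline-in-buffer
                     heading (current-buffer) t)))
             (rfloc (list heading file nil pos)))
        (unless pos
          (error "Heading \"%s\" not found in %s" heading file))
        ;; sweep backward from the end of the buffer so that cutting an
        ;; entry never shifts the positions of entries not yet visited
        (save-excursion
          (goto-char (point-max))
          (while (re-search-backward regexp nil t)
            (org-refile nil nil rfloc)))))

    Calling (lawlist-refile-matches "^\\* \\(None\\|Someday\\) ") would then sweep every matching entry to the configured heading in one pass, and the same FILE::* HEADING string works for any file/heading pair, mirroring how org-archive-location is configured. Treat it as scaffolding to adapt rather than a drop-in answer.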

    Read the article
