Search Results

Search found 15045 results on 602 pages for 'template engine'.

Page 536/602

  • Change Powerpoint chart data with .NET

    - by mc6688
    I have a PowerPoint template that contains one slide, and on that slide is a chart. I'd like to be able to manipulate that chart's data using .NET. So far I have code that:

      1. Unzips the PowerPoint file.
      2. Unzips the embedded Excel file (ppt\embeddings\Microsoft_Office_Excel_Worksheet1.xlsx).
      3. Successfully manipulates the data in the Excel sheet and zips it back up.
      4. Opens and manipulates ppt\charts\chart1.xml.
      5. Zips the PowerPoint back up and delivers it to the user.

    The result of this is a PowerPoint file that shows a blank chart. But when I click on the chart and go to edit data, it updates the data and shows the correct chart. I believe my problem is with the chart1.xml that I am generating. I have compared my generated version with a version created by PowerPoint and they are almost identical. The only differences are in the values for <c:crossAx> and <c:axId>. There are also some rounding differences in the data, but I do not feel that would result in a blank chart. Is there another file that I need to edit? Does anyone have any ideas as to what else I should try to get this working?
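
    One thing worth checking (an assumption on my part, not something stated in the question): PowerPoint renders a chart from the cached values stored inside chart1.xml itself (the c:numCache elements), not from the embedded workbook, which would explain a blank chart that fixes itself once "edit data" forces a refresh. A rough C# sketch of patching that cache through System.IO.Packaging instead of manual unzipping - PatchChartCache and the hard-coded part name are illustrative:

        using System;
        using System.IO;
        using System.IO.Packaging; // WindowsBase.dll
        using System.Xml.Linq;

        static class ChartPatcher
        {
            // Hypothetical helper: rewrite the cached <c:v> values so the chart
            // renders without an "edit data" round-trip in PowerPoint.
            public static void PatchChartCache(string pptxPath, double[] newValues)
            {
                XNamespace c = "http://schemas.openxmlformats.org/drawingml/2006/chart";
                using (var package = Package.Open(pptxPath, FileMode.Open, FileAccess.ReadWrite))
                {
                    var chartPart = package.GetPart(new Uri("/ppt/charts/chart1.xml", UriKind.Relative));

                    XDocument doc;
                    using (var s = chartPart.GetStream(FileMode.Open, FileAccess.Read))
                        doc = XDocument.Load(s);

                    // Overwrite the cached series values that PowerPoint draws from.
                    int i = 0;
                    foreach (var v in doc.Descendants(c + "numCache").Descendants(c + "v"))
                    {
                        if (i >= newValues.Length) break;
                        v.Value = newValues[i++].ToString(System.Globalization.CultureInfo.InvariantCulture);
                    }

                    using (var s = chartPart.GetStream(FileMode.Create, FileAccess.Write))
                        doc.Save(s);
                }
            }
        }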

    Read the article

  • Rails rake test returns an error message

    - by eakkas
    I am a Rails newbie and receive the following message when I run rake test. This is an application based on the Rails CommunityEngine. I tried creating a test application just to make sure that my gems etc. are fine, and I am able to run rake test successfully in that application. It would be great if someone could shed some light on what is going wrong...

        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/whiny_nil.rb:52:in `method_missing': undefined method `merge' for nil:NilClass (NoMethodError)
            from /home/eakkas/NetBeansProjects/hello_ce/vendor/plugins/community_engine/app/controllers/users_controller.rb:17
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
            from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require_without_desert'
            from /usr/lib/ruby/gems/1.8/gems/desert-0.5.3/lib/desert/ruby/object.rb:8:in `require'
            from /usr/lib/ruby/gems/1.8/gems/desert-0.5.3/lib/desert/ruby/object.rb:32:in `__each_matching_file'
            from /usr/lib/ruby/gems/1.8/gems/desert-0.5.3/lib/desert/ruby/object.rb:7:in `require'
            from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:265:in `require_or_load'
            from /usr/lib/ruby/gems/1.8/gems/desert-0.5.3/lib/desert/rails/dependencies.rb:27:in `depend_on'
            from /usr/lib/ruby/gems/1.8/gems/desert-0.5.3/lib/desert/rails/dependencies.rb:26:in `each'
            from /usr/lib/ruby/gems/1.8/gems/desert-0.5.3/lib/desert/rails/dependencies.rb:26:in `depend_on'
            from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:136:in `require_dependency'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:414:in `load_application_classes'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:413:in `each'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:413:in `load_application_classes'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:411:in `each'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:411:in `load_application_classes'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:197:in `process'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:113:in `send'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:113:in `run'

    Read the article

  • How should developers cope with so many GUI configuration combinations?

    - by shawn-harrison
    These days, any decent Windows desktop application must perform well and look good under the following conditions:

      - XP, Vista and Windows 7
      - 32-bit and 64-bit
      - With and without Themes
      - With and without Aero
      - At 96, 120 and perhaps custom DPIs
      - One or more monitors (screens)
      - Each OS has its own preferred font

    Oh my! What is a lowly little Windows desktop application developer to do? :( I'm hoping to get a thread started with suggestions on how to deal with this GUI dilemma. First off, I'm on Delphi 7.

      a) Does Delphi 2010 bring anything new to the table to help with this situation?
      b) Should we pick an aftermarket component suite and rely on them to solve all these problems?
      c) Should we go with an aftermarket skinning engine?
      d) Perhaps a more HTML-type GUI is the way to go. Can we make a relatively complex GUI app with HTML that doesn't require using a browser? (prefer to keep it form based)
      e) Should we just knuckle down and code through each one of these scenarios and quit bitching about it?
      f) And finally, how in the world are we supposed to test all these conditions?

    Read the article

  • Getting information from a TreeView with a HierarchicalDataTemplate

    - by lina
    Good day! I have this template:

        <common:HierarchicalDataTemplate x:Key="my2ndPlusHierarchicalTemplate" ItemsSource="{Binding Children}">
            <StackPanel Margin="0,2,5,2" Orientation="Vertical" Grid.Column="2">
                <CheckBox IsTabStop="False" IsChecked="False" Click="ItemCheckbox_Click" Grid.Column="1" />
                <TextBlock Text="{Binding Name}" FontSize="16" Foreground="#FF100101" HorizontalAlignment="Left" FontFamily="Verdana" FontWeight="Bold" />
                <TextBlock Text="{Binding Description}" FontFamily="Verdana" FontSize="10" HorizontalAlignment="Left" Foreground="#FFA09A9A" FontStyle="Italic" />
                <TextBox Width="100" Grid.Column="4" Height="24" LostFocus="TextBox_LostFocus" Name="tbNumber"></TextBox>
            </StackPanel>
        </common:HierarchicalDataTemplate>

    for a TreeView:

        <controls:TreeView x:Name="tvServices" ItemTemplate="{StaticResource myHierarchicalTemplate}" ItemContainerStyle="{StaticResource expandedTreeViewItemStyle}" Grid.Column="1" Grid.Row="2" Grid.ColumnSpan="3" BorderBrush="#FFC1BCBC" FontFamily="Verdana" FontSize="14">
        </controls:TreeView>

    I want to know the Name property of each TextBox in the TreeView so I can validate each one, like this:

        private void TextBox_LostFocus(object sender, RoutedEventArgs e)
        {
            tbNumber.ClearValidationError();
            if ((!tbNumber.Text.IsZakazNumberValid()) && (tbNumber.Text != ""))
            {
                tbNumber.SetValidation(MyStrings.NumberError);
                tbNumber.RaiseValidationError();
                isValid = false;
            }
            else
            {
                isValid = true;
            }
        }

    I also want to see which check boxes were checked. How can I do it?
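
    A common pattern for this (my sketch, not code from the question): don't rely on the template-scoped name at all. The sender of the event is the exact control that fired, and inside an item template its DataContext is the data item for that row, so one handler can serve every generated TextBox and CheckBox:

        using System.Windows;
        using System.Windows.Controls;

        public partial class ServicesPage : UserControl
        {
            private void TextBox_LostFocus(object sender, RoutedEventArgs e)
            {
                var box = (TextBox)sender;      // the specific TextBox that lost focus
                object node = box.DataContext;  // the bound data item behind this tree row
                // validate box.Text here and record the result against node
            }

            private void ItemCheckbox_Click(object sender, RoutedEventArgs e)
            {
                var cb = (CheckBox)sender;
                bool isChecked = cb.IsChecked == true;  // IsChecked is a nullable bool
                object node = cb.DataContext;           // which item was (un)checked
            }
        }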

    Read the article

  • Uploading a file fails under WordPress

    - by Ash
    I'm using WordPress and I'm following W3Schools' guide for uploading a file.

    HTML code:

        <html>
        <body>
        <form action="upload_file.php" method="post" enctype="multipart/form-data">
            <label for="file">Filename:</label>
            <input type="file" name="file" id="file" />
            <br />
            <input type="submit" name="submit" value="Submit" />
        </form>
        </body>
        </html>

    PHP code (upload_file.php):

        <?php
        if ($_FILES["file"]["error"] > 0) {
            echo "Error: " . $_FILES["file"]["error"] . "<br />";
        } else {
            echo "Upload: " . $_FILES["file"]["name"] . "<br />";
            echo "Type: " . $_FILES["file"]["type"] . "<br />";
            echo "Size: " . ($_FILES["file"]["size"] / 1024) . " Kb<br />";
            echo "Stored in: " . $_FILES["file"]["tmp_name"];
        }
        ?>

    The HTML code is pasted in a PHP page template, and the PHP file sits under the WP installation directory under www. The problem is that when I submit the file I get Error: 1. If I comment out the "if" part of the PHP code and leave the "else" part, I get:

        Upload: IMG_4258.JPG
        Type:
        Size: 0 Kb
        Stored in:

    So at least I know the PHP code is running. But what's causing it to fail? Is there a problem with the code, or is WordPress meddling with the process?
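
    For what it's worth (this mapping comes from PHP's documented UPLOAD_ERR_* constants, not from the post): error code 1 is UPLOAD_ERR_INI_SIZE, meaning the file exceeded upload_max_filesize in php.ini, which would also explain the empty type, size and tmp_name above. A small diagnostic sketch:

        <?php
        // Translate $_FILES error codes into readable messages.
        $messages = array(
            UPLOAD_ERR_OK         => "No error",
            UPLOAD_ERR_INI_SIZE   => "File exceeds upload_max_filesize in php.ini",
            UPLOAD_ERR_FORM_SIZE  => "File exceeds MAX_FILE_SIZE from the form",
            UPLOAD_ERR_PARTIAL    => "File was only partially uploaded",
            UPLOAD_ERR_NO_FILE    => "No file was uploaded",
            UPLOAD_ERR_NO_TMP_DIR => "Missing temporary folder",
            UPLOAD_ERR_CANT_WRITE => "Failed to write file to disk",
            UPLOAD_ERR_EXTENSION  => "Upload stopped by a PHP extension",
        );
        $code = $_FILES["file"]["error"];
        echo isset($messages[$code]) ? $messages[$code] : "Unknown error $code";
        ?>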

    Read the article

  • Determining when stringByEvaluatingJavaScriptFromString has finished

    - by alku83
    I have a UIWebView which loads up an HTML page. This page has two buttons on it, say Exit and Submit. I don't want users to be able to click the Exit button, so once the page has finished loading (i.e. webViewDidFinishLoad is called), I use stringByEvaluatingJavaScriptFromString to remove one of these buttons by manipulating the HTML. I also disable user interaction on the UIWebView on webViewDidStartLoad, and enable it again on webViewDidFinishLoad.

    The problem I am finding is that stringByEvaluatingJavaScriptFromString takes a second or two to complete, and it seems to be done in its own thread. So what is happening is that webViewDidFinishLoad is called, user interaction is enabled on the UIWebView, and if the user is quick, they can click the Exit button before stringByEvaluatingJavaScriptFromString has finished. As stringByEvaluatingJavaScriptFromString seems to be on its own thread with no way to know when it's finished (it doesn't call webViewDidFinishLoad), the only way I can see to completely prevent users from tapping the Exit button is to enable user interaction on the UIWebView only after some delay, which is unreliable (how can I really know how long to delay for?). Am I correct that stringByEvaluatingJavaScriptFromString is done on its own thread, and that I have no way of being able to tell when it's finished? Any other suggestions for how to get around this problem?

    EDIT: In short, what I want to know is whether it is possible to disable a UIWebView while stringByEvaluatingJavaScriptFromString is executing, and re-enable the UIWebView when the JavaScript is finished.

    EDIT 2: There's an article here which seems to imply you can somehow poll the JS engine to see when it's finished, but I can't find any other references saying the same thing: http://drnicwilliams.com/2008/11/10/to-webkit-or-not-to-webkit-within-your-iphone-app/

    EDIT 3: Based on the answer from Brad Smith, it seems that I actually need to know when the UIWebView has finished loading itself after the JavaScript has executed. It's looking more and more like I just need to put a delay of sorts in there.

    Read the article

  • Unit Testing a rails 2.3.5 plugin

    - by brad
    I'm writing a new plugin for a Rails 2.3.5 app. I've included an app directory (which makes it an engine) so I can easily load some extra routes; not sure if that affects anything. Anyway, in the test directory I have two files, test_helper.rb and my_plugin_test.rb, which were generated automatically using script/generate plugin my_plugin. When I go to the vendor/plugins/my_plugin directory and run rake test, the tests don't seem to run. I get the following console output:

        (in /Users/me/Repos/my_app/source/trunk/vendor/plugins/my_plugin)
        /Users/me/.rvm/rubies/jruby-1.4.0/bin/jruby -I"lib:lib:test" "/Users/me/.rvm/gems/jruby-1.4.0/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" "test/my_plugin_test.rb"

    So it obviously sees my test file, but none of the tests inside get run; I just get back to my console prompt. What am I missing here? I figured the generated code would work out of the box. Here are the two files:

    test_helper.rb:

        require 'rubygems'
        require 'active_support'
        require 'active_support/test_case'

    my_plugin_test.rb:

        require 'test_helper'

        class MyPluginTest < ActiveSupport::TestCase
          # Replace this with your real tests.
          test "the truth" do
            assert true
          end

          test "Factories are supported" do
            assert_not_nil Factory
          end
        end

    File structure:

        vendor
        - plugins
          - my_plugin
            - app
            - config
              - routes.rb
            - generators
              - my_plugin
                - some generator files.rb
            - lib
              - my_plugin.rb
              - my_plugin
                - my_plugin_lib_file.rb
            - rails
              - init.rb
            - Rakefile
            - tasks
              - my_plugin_tasks.rake
            - test
              - test_helper.rb
              - my_plugin_test.rb
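
    One observation (an assumption on my part, not from the post): that generated test_helper only pulls in ActiveSupport; nothing loads test/unit's autorunner, the Rails environment, or the factory_girl gem the second test relies on, so the file can load and simply define tests nobody runs. A sketch of the more conventional Rails 2.3 plugin test_helper:

        # test_helper.rb - boots the host app so ActiveSupport::TestCase and
        # gems like factory_girl are loaded before the tests define themselves.
        ENV["RAILS_ENV"] = "test"
        require File.expand_path(File.join(File.dirname(__FILE__),
                                           "../../../../config/environment"))
        require "test_help"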

    Read the article

  • Problem in inferring instances that have integer cardinality constraint

    - by Mikae Combarado
    Hello, I have created an RDF/OWL file using Protege 4.1 alpha. I also created a defined class in Protege called CheapPhone. This class has a restriction, shown below:

        (hasPrice some integer[< 350])

    Whenever the price of a phone is below 350, it is inferred to be a CheapPhone. There is no problem inferring this in Protege 4.1 alpha. However, I cannot infer this using Jena. I also created a defined class called SmartPhone, which has the following restriction:

        (has3G value true) and (hasInternet value true)

    Whenever a phone has 3G and Internet, it is inferred to be a SmartPhone. In this situation, there is no problem inferring this in both Protege and Jena. I have started to think that there is a problem in the default inference engine of Jena. The code that I use in Java is below:

        Reasoner reasoner = ReasonerRegistry.getOWLReasoner();
        reasoner = reasoner.bindSchema(ontModel);
        OntModelSpec ontModelSpec = OntModelSpec.OWL_MEM_MINI_RULE_INF;
        ontModelSpec.setReasoner(reasoner);

        // Create ontology model with reasoner support
        // ontModel was created and read before, so I don't share the code in order
        // not to create garbage here
        OntModel model = ModelFactory.createOntologyModel(ontModelSpec, ontModel);

        OntClass sPhone = model.getOntClass(ns + "SmartPhone");
        ExtendedIterator s = sPhone.listInstances();
        while (s.hasNext()) {
            OntResource mp = (OntResource) s.next();
            System.out.println(mp.getURI());
        }

    This code works perfectly and returns the instances, but when I change the code below to make it appropriate for CheapPhone, it doesn't return anything:

        OntClass sPhone = model.getOntClass(ns + "CheapPhone");

    Am I doing something wrong?

    Read the article

  • Disadvantages of MySQL Row Locking

    - by Nyxynyx
    I am using row locking (transactions) in MySQL for creating a job queue. The engine used is InnoDB.

    SQL query:

        START TRANSACTION;
        SELECT * FROM mytable WHERE status IS NULL ORDER BY timestamp DESC LIMIT 1 FOR UPDATE;
        UPDATE mytable SET status = 1;
        COMMIT;

    According to this webpage:

        The problem with SELECT FOR UPDATE is that it usually creates a single
        synchronization point for all of the worker processes, and you see a lot
        of processes waiting for the locks to be released with COMMIT.

    Question: Does this mean that when the first query is executed and its transaction takes some time to finish, a second similar query arriving before the first transaction is committed will have to wait for it to finish before being executed? If this is true, then I do not understand why locking a single row (which is what I assume happens) would affect the next transaction, which should not require reading that locked row.

    Additionally, can this problem be solved (while still achieving the effect row locking gives a job queue) by doing an UPDATE instead of the transaction?

        UPDATE mytable SET status = 1 WHERE status IS NULL ORDER BY timestamp DESC LIMIT 1
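
    A sketch of one common variant (my assumption, not from the post; the worker_id column is invented for illustration): have each worker claim a row in a single UPDATE and then read its claim back, so no SELECT ... FOR UPDATE lock is held open across statements:

        -- Claim one unprocessed job atomically; the row lock lasts only for
        -- the duration of this auto-committed statement.
        UPDATE mytable
           SET status = 1, worker_id = 42
         WHERE status IS NULL
         ORDER BY timestamp DESC
         LIMIT 1;

        -- Fetch the job this worker just claimed.
        SELECT * FROM mytable WHERE worker_id = 42 AND status = 1;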

    Read the article

  • Trigger ad-hoc activity within a workflow

    - by Chris Taylor
    I am looking to use WF 4 to replace an existing workflow solution we have. One feature that is currently used in the existing workflow engine is the ability to cancel a current activity and loop back to a FlowSwitch-type activity. So consider the following crude workflow, where we start at 'O' and, based on the input data, the workflow follows the path to 'A2', which is currently blocking on a bookmark waiting for input.

        [Diagram: a FlowSwitch 'O' branches to one of A1, A2 or A3, and a path
        loops back from the activities to the FlowSwitch.]

    However, in the meantime some out-of-band data comes in that means we should cancel 'A2' and return to the FlowSwitch to re-evaluate based on the new data. The question is: what is the best way to handle the out-of-band data that arrived? My initial guess is to have a Parallel activity with one branch waiting for out-of-band data and the other branch containing the workflow sequence described above. If data came in on the branch waiting for the out-of-band data, how would I cancel the current activity in the workflow and force it to return to the FlowSwitch? Or, of course, is there a better way to handle this? I have not actually done any work with WF4, or WF3 for that matter, so I might be missing something obvious here.
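
    One possibly relevant building block (my suggestion, not something from the post): WF4's Pick activity models exactly "whichever branch's trigger fires first wins, and the other trigger is cancelled", which maps onto cancelling A2 when out-of-band data arrives. A rough sketch - A2Activity and WaitForOutOfBandData are hypothetical bookmark-based activities:

        using System.Activities;
        using System.Activities.Statements;

        static Activity BuildWaitStep()
        {
            return new Pick
            {
                Branches =
                {
                    new PickBranch
                    {
                        Trigger = new A2Activity(),           // normal path: A2 resumes its bookmark
                        Action  = new WriteLine { Text = "A2 completed normally" }
                    },
                    new PickBranch
                    {
                        Trigger = new WaitForOutOfBandData(), // out-of-band data arrived first
                        Action  = new WriteLine { Text = "A2 cancelled; loop back to the FlowSwitch" }
                    }
                }
            };
        }

    Placed as a step in the flowchart, the second branch winning could route control back to the FlowSwitch for re-evaluation.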

    Read the article

  • sqlite3 'database is locked' won't go away with retries

    - by Azarias
    I have a sqlite3 database that is accessed by a few threads (3-4). I am aware of the general limitations of sqlite3 with regard to concurrency as stated at http://www.sqlite.org/faq.html#q6, but I am convinced that is not the problem. All of the threads both read and write from this database. Whenever I do a write, I have the following construct:

        try:
            Cursor.execute(q, params)
            Connection.commit()
        except sqlite3.IntegrityError:
            Notify
        except sqlite3.OperationalError:
            print sys.exc_info()
            print("DATABASE LOCKED; sleeping for 3 seconds and trying again")
            time.sleep(3)
            Retry

    On some runs, I won't even hit this block, but when I do, it never comes out of it (it keeps retrying, but I keep getting the 'database is locked' error from exc_info). If I understand the reader/writer lock usage correctly, some amount of waiting should help with the contention. What this sounds like is deadlock, but I do not use any transactions in my code, and every SELECT or INSERT is simply a one-off. Some threads, however, keep the same connection when they do their operation (which includes a mix of SELECTs and INSERTs and other modifiers). I would appreciate it if you could shed some light on this, and also on ways to fix it (besides using a different database engine).
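
    Two things worth knowing here (general sqlite3-module behaviour, not taken from the post): Python's sqlite3 opens an implicit transaction on the first INSERT/UPDATE and holds it until commit, so a shared connection that is mid-write on another thread keeps the database locked no matter how long this thread sleeps; and connect() takes a timeout that makes sqlite itself wait on a locked database instead of raising immediately. A sketch of one connection per thread with a busy timeout:

        import sqlite3
        import threading

        _local = threading.local()

        def get_conn(path="jobs.db"):
            # One connection per thread; timeout makes sqlite wait up to 10s on
            # a locked database, and isolation_level=None autocommits so no
            # implicit transaction is left open between statements.
            if not hasattr(_local, "conn"):
                _local.conn = sqlite3.connect(path, timeout=10.0, isolation_level=None)
            return _local.conn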

    Read the article

  • How to efficiently compare the sign of two floating-point values while handling negative zeros

    - by François Beaune
    Given two floating-point numbers, I'm looking for an efficient way to check if they have the same sign, given that if either of the two values is zero (+0.0 or -0.0), they should be considered to have the same sign. For instance:

        SameSign(1.0, 2.0) should return true
        SameSign(-1.0, -2.0) should return true
        SameSign(-1.0, 2.0) should return false
        SameSign(0.0, 1.0) should return true
        SameSign(0.0, -1.0) should return true
        SameSign(-0.0, 1.0) should return true
        SameSign(-0.0, -1.0) should return true

    A naive but correct implementation of SameSign in C++ would be:

        bool SameSign(float a, float b)
        {
            if (fabs(a) == 0.0f || fabs(b) == 0.0f)
                return true;
            return (a >= 0.0f) == (b >= 0.0f);
        }

    Assuming the IEEE floating-point model, here's a variant of SameSign that compiles to branchless code (at least with Visual C++ 2008):

        bool SameSign(float a, float b)
        {
            int ia = binary_cast<int>(a);
            int ib = binary_cast<int>(b);
            int az = (ia & 0x7FFFFFFF) == 0;
            int bz = (ib & 0x7FFFFFFF) == 0;
            int ab = (ia ^ ib) >= 0;
            return (az | bz | ab) != 0;
        }

    with binary_cast defined as follows:

        template <typename Target, typename Source>
        inline Target binary_cast(Source s)
        {
            union
            {
                Source m_source;
                Target m_target;
            } u;
            u.m_source = s;
            return u.m_target;
        }

    I'm looking for two things:

      - A faster, more efficient implementation of SameSign, using bit tricks, FPU tricks or even SSE intrinsics.
      - An efficient extension of SameSign to three values.
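
    For the three-value case, here is a straightforward (not maximally clever) extension under the same "zeros match anything" rule - my sketch, assuming the intended semantics is that all non-zero values must share a sign bit:

        #include <cstdint>
        #include <cstring>

        static std::int32_t bits(float x)
        {
            std::int32_t i;
            std::memcpy(&i, &x, sizeof i);  // well-defined alternative to union punning
            return i;
        }

        bool SameSign3(float a, float b, float c)
        {
            const std::int32_t ia = bits(a), ib = bits(b), ic = bits(c);
            const bool az = (ia & 0x7FFFFFFF) == 0;  // a is +0.0 or -0.0
            const bool bz = (ib & 0x7FFFFFFF) == 0;
            const bool cz = (ic & 0x7FFFFFFF) == 0;
            // Pairwise sign agreement ((x ^ y) >= 0 means equal sign bits),
            // with any pair involving a zero excused.
            return (az || bz || (ia ^ ib) >= 0)
                && (az || cz || (ia ^ ic) >= 0)
                && (bz || cz || (ib ^ ic) >= 0);
        }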

    Read the article

  • Are function-local typedefs visible inside C++0x lambdas?

    - by GMan - Save the Unicorns
    I've run into a strange problem. The following simplified code reproduces the problem in MSVC 2010 Beta 2:

        template <typename T>
        struct dummy
        {
            static T foo(void) { return T(); }
        };

        int main(void)
        {
            typedef dummy<bool> dummy_type;
            auto x = [](void){ bool b = dummy_type::foo(); };
            // auto x = [](void){ bool b = dummy<bool>::foo(); }; // works
        }

    The typedef I created locally in the function doesn't seem to be visible in the lambda. If I replace the typedef with the actual type, it works as expected. Here are some other test cases:

        // crashes the compiler, credit to Tarydon
        int main(void)
        {
            struct dummy {};
            auto x = [](void){ dummy d; };
        }

        // works as expected
        int main(void)
        {
            typedef int integer;
            auto x = [](void){ integer i = 0; };
        }

    I don't have g++ 4.5 available to test it right now. Is this some strange rule in C++0x, or just a bug in the compiler? From the results above, I'm leaning towards bug, though the crash is definitely a bug. For now, I have filed two bug reports. All code snippets above should compile; the error has to do with using scope resolution on locally defined scopes (spotted by dvide), and the crash bug has to do with... who knows. :)

    Update: According to the bug reports, they have both been fixed for the next release of Visual Studio 2010.

    Read the article

  • Parse error: syntax error, unexpected ';'

    - by sufoid
    Hello, I have this script:

        <?
        require("lib2/config.inc.php");
        require("lib2/tpl.class.php");
        require("lib2/db.class.php");
        require("lib2/um.class.php");

        $tpl = new template("templates", "tpl");
        $db = new db($db['location'], $db['username'], $db['passwort'], $db['database']);
        $um = new usermanagment();

        /** User login **/
        $checklogin = $um->check_login();
        $userdata = $um->getuserdata();

        if (!$checklogin && !$guest) {
            header("LOCATION: ./index2.php");
        }

        eval("\$header .= \" ".$tpl->get("header")."\";");
        eval("\$footer .= \" ".$tpl->get("footer")."\";");

        $time = time();
        $db->Query("UPDATE userdaten SET lastaction = '$time' WHERE userid = '".$userdata['userid']."'");
        ?>

    And I get this error:

        Parse error: syntax error, unexpected ';' in /home/httpd/html/login/global.php(22) : eval()'d code on line 96

    Any ideas?
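
    A reading of that message (my interpretation, not from the post): "global.php(22) : eval()'d code on line 96" means the parse error is on line 96 of the string being eval'd at line 22 - that is, inside the "header" template text itself, where any unescaped " or stray ; breaks the generated PHP. A sketch of doing the same substitution without eval(), using a placeholder convention invented for illustration:

        <?php
        // Replace {name} markers in the template text instead of eval'ing it,
        // so template content can never be parsed as PHP code.
        function render($templateText, array $vars)
        {
            foreach ($vars as $name => $value) {
                $templateText = str_replace('{' . $name . '}', $value, $templateText);
            }
            return $templateText;
        }

        $header .= render($tpl->get("header"), array("title" => "My site"));
        ?>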

    Read the article

  • How to store session values with Node.js and mongodb?

    - by Tirithen
    How do I get sessions working with Node.js, Express and mongodb? I'm now trying to use connect-mongo like this:

        var config = require('../config'),
            express = require('express'),
            MongoStore = require('connect-mongo'),
            server = express.createServer();

        server.configure(function() {
            server.use(express.logger());
            server.use(express.methodOverride());
            server.use(express.static(config.staticPath));
            server.use(express.bodyParser());
            server.use(express.cookieParser());
            server.use(express.session({
                store: new MongoStore({ db: config.db }),
                secret: config.salt
            }));
        });

        server.configure('development', function() {
            server.use(express.errorHandler({ dumpExceptions: true, showStack: true }));
        });

        server.configure('production', function() {
            server.use(express.errorHandler());
        });

        server.set('views', __dirname + '/../views');
        server.set('view engine', 'jade');

        server.listen(config.port);

    I'm then, in a server.get callback, trying to use req.session.test = 'hello'; to store that value in the session, but it's not stored between requests. It probably takes something more than this to store session values. How? Is there a better-documented module than connect-mongo?

    Read the article

  • charsets in MySQL replication

    - by niklassaers
    Hi guys, what can I do to ensure that replication will use latin1 instead of utf-8? I'm migrating between a MySQL 5.1.22 server (master) on a Linux system and a MySQL 5.1.42 server (slave) on a FreeBSD system. My replication works well, but when non-ASCII characters are in my varchars, they turn "weird".

    The Linux/MySQL-5.1.22 machine shows the following character set variables:

        character_set_client=latin1
        character_set_connection=latin1
        character_set_database=latin1
        character_set_filesystem=binary
        character_set_results=latin1
        character_set_server=latin1
        character_set_system=utf8
        character_sets_dir=/usr/share/mysql/charsets/
        collation_connection=latin1_swedish_ci
        collation_database=latin1_swedish_ci
        collation_server=latin1_swedish_ci

    While the FreeBSD machine shows:

        character_set_client=utf8
        character_set_connection=utf8
        character_set_database=utf8
        character_set_filesystem=binary
        character_set_results=utf8
        character_set_server=utf8
        character_set_system=utf8
        character_sets_dir=/usr/local/share/mysql/charsets/
        collation_connection=utf8_general_ci
        collation_database=utf8_general_ci
        collation_server=utf8_general_ci

    Setting any of these variables from the MySQL CLI has no effect, and setting them in my.cnf or at the command line makes the server not start. Of course, both servers have the tables in question created the same way, in this case with DEFAULT CHARSET=latin1. Let me give you an example:

        CREATE TABLE `test` (
          `test` varchar(5) DEFAULT NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1

    When I, on the master, do "INSERT INTO test VALUES ('æøå')" in a latin1 terminal, this becomes the following on the slave, selected from a latin1-based terminal:

        +--------+
        | test   |
        +--------+
        | æøå |
        +--------+

    On a UTF-8 based terminal on the replication slave, test contains:

        +--------+
        | test   |
        +--------+
        | æøå    |
        +--------+

    So my conclusion is that it is converted to utf8, even though the table definition is latin1. Is this a correct conclusion? Of course, on the master, in a latin1 terminal, it still says:

        +------+
        | test |
        +------+
        | æøå  |
        +------+

    Since both system character sets are utf-8, if I set both terminals to utf-8 and do "INSERT INTO test VALUES ('æøå')" again on the master with a utf-8 terminal, on the slave with utf-8 I get:

        +------------+
        | test       |
        +------------+
        | æøà ¥     |
        +------------+

    If my conclusion is correct, all my replicated data is converted to utf8 (and if it is utf8, it is treated as latin1 and converted to utf8), while all the old data in the table is, as the CREATE TABLE suggests, latin1. I'd love to convert it all to utf-8 if it weren't for the fact that legacy applications rely on it being latin1, so I need to keep it in latin1 while they still exist. What can I do to ensure that the replication reads latin1, treats it as latin1 and writes it on the slave as latin1? Cheers, Nik
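
    For reference, one commonly used my.cnf shape for forcing a server-wide default (my sketch; whether it addresses the replication path here is an assumption). Note the server-level options are character-set-server / collation-server, not the character_set_client family of session variables, which could explain the server refusing to start:

        [mysqld]
        character-set-server = latin1
        collation-server     = latin1_swedish_ci
        # Ignore the charset announced by connecting clients:
        skip-character-set-client-handshake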

    Read the article

  • Error message: The URI does not identify an external Java class

    - by iHeartGreek
    Hi! I am new to XSL, and thus new to using scripts within XSL. I have taken example code (also using C#) and adapted it for my own use, but it does not work. The error message is:

        The URI urn:cs-scripts does not identify an external Java class

    The relevant code I have is:

        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:msxsl="urn:schemas-microsoft-com:xslt"
            exclude-result-prefixes="msxsl"
            xmlns:strTok="urn:cs-scripts">
        ...
        ...
        ...
        </xsl:template>

        <xsl:variable name="temp">
            <xsl:value-of select="tok:getList('AAA BBB CCC', ' ')"/>
        </xsl:variable>

        <msxsl:script language="C#" implements-prefix="tok">
        <![CDATA[
        public string[] getList(string str, char[] delim)
        {
            return str.Split(delim, StringSplitOptions.None);
        }

        public string getString(string[] list, int i)
        {
            return list[i];
        }
        ]]>
        </msxsl:script>

        </xsl:stylesheet>
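
    Two things stand out (my observations, not from the post): the stylesheet binds urn:cs-scripts to the prefix strTok while the script and the select both use tok, and the "external Java class" wording suggests the transform is being run by a Java XSLT processor, which treats unknown extension URIs as Java class names - msxsl:script is only honoured by .NET's XSLT engine. A sketch with a consistent prefix, assuming the transform runs under .NET:

        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:msxsl="urn:schemas-microsoft-com:xslt"
            xmlns:tok="urn:cs-scripts"
            exclude-result-prefixes="msxsl tok">
          <!-- tok now matches implements-prefix="tok" on msxsl:script -->
          ...
        </xsl:stylesheet>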

    Read the article

  • Fastest inline-assembly spinlock

    - by sigvardsen
    I'm writing a multithreaded application in C++ where performance is critical. I need to use a lot of locking while copying small structures between threads; for this I have chosen to use spinlocks. I have done some research and speed testing on this, and I found that most implementations are roughly equally fast:

      - Microsoft's CRITICAL_SECTION, with SpinCount set to 1000, scores about 140 time units.
      - Implementing this algorithm with Microsoft's InterlockedCompareExchange scores about 95 time units.
      - I've also tried some inline assembly with __asm {} using something like this code, and it scores about 70 time units, but I am not sure that a proper memory barrier has been created.

    Edit: The times given here are the time it takes for 2 threads to lock and unlock the spinlock 1,000,000 times.

    I know this isn't a lot of difference, but as a spinlock is a heavily used object, one would think that programmers would have agreed on the fastest possible way to make one. Googling it leads to many different approaches, however. I would think the aforementioned method would be the fastest if implemented using inline assembly and using the instruction CMPXCHG8B instead of comparing 32-bit registers. Furthermore, memory barriers must be taken into account; this could be done by LOCK CMPXCHG8B (I think?), which guarantees "exclusive rights" to the shared memory between cores. Finally, some suggest that busy waits should be accompanied by REP NOP (PAUSE), which would enable Hyper-Threading processors to switch to another thread, but I am not sure whether this is true or not.

    From my performance test of different spinlocks, it is seen that there is not much difference, but for purely academic purposes I would like to know which one is fastest. However, as I have extremely limited experience with assembly language and memory barriers, I would be happy if someone could write the assembly code for the last example I provided, with LOCK CMPXCHG8B and proper memory barriers, in the following template:

        __asm
        {
        spin_lock:
            ; locking code.
        spin_unlock:
            ; unlocking code.
        }
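
    For illustration, here is the classic test-and-set form in MSVC inline assembly - a sketch, not a vetted "fastest" implementation, and it deliberately uses XCHG rather than CMPXCHG8B (a lock only needs a 32-bit swap; XCHG with a memory operand is implicitly LOCKed and acts as a full barrier on x86):

        // Shared flag: 0 = free, 1 = taken.
        volatile long lock_flag = 0;

        void spin_lock()
        {
            __asm
            {
            spin:
                mov eax, 1
                xchg eax, lock_flag   ; implicit LOCK prefix; full memory barrier
                test eax, eax
                jz acquired           ; we saw 0, so we now own the lock
                pause                 ; rep nop: be polite to the HT sibling
                jmp spin
            acquired:
            }
        }

        void spin_unlock()
        {
            __asm
            {
                mov lock_flag, 0      ; plain store; x86 stores have release ordering
            }
        }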

    Read the article

  • jQuery AJAX chained calls + Celery in Django

    - by user1029968
    Currently, clicking one of the links in my application triggers an AJAX call (GET) that - if it succeeds - triggers a second one, and this second one - if it succeeds - calls a third one. This way the user can be informed which part of the process started by clicking the link is currently ongoing. So in the template file in my Django project, the click callback body for the link mentioned looks like this:

        $("#the-link").click(function(item) {
            // CALL 1
            $.ajax({
                url: '{% url ajax_call_1 %}',
                data: {
                    // something
                }
            })
            .done(function(call1Result) {
                // CALL 2
                $.ajax({
                    url: '{% url ajax_call_2 %}',
                    data: {
                        // call1Result passed here to CALL 2
                    }
                })
                .done(function(call2Result) {
                    // CALL 3
                    $.ajax({
                        url: '{% url ajax_call_3 %}',
                        data: {
                            // call2Result passed here to CALL 3
                        }
                    })
                    .done(function(call3Result) {
                        // expected result if everything went fine
                        console.log("wow, it worked!");
                        console.log(call3Result);
                    })
                    .fail(function(errorObject) {
                        console.log("call3 failed");
                        console.log(errorObject);
                    });
                })
                .fail(function(errorObject) {
                    console.log("call2 failed");
                    console.log(errorObject);
                });
            })
            .fail(function(errorObject) {
                console.log("call1 failed");
                console.log(errorObject);
            });
        });

    This works fine for me. The thing is, I'd like to prevent the remaining calls from being interrupted if the user closes the browser before they are finished (as it will take some time to finish all three), since there is some additional logic in the Django view functions called in each GET request. For example, if the user clicks the link and closes the browser during CALL 1, is it possible to somehow go on with the following CALL 2 and CALL 3? I know that normally I'd be able to use a Celery task to process the function, but is it still possible here with the chained calls mentioned? Any help is much appreciated!
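
    A sketch of the server-side alternative (task names and the view are illustrative, and this uses a newer Celery/Django API than the post's era): move the three steps into a Celery chain, so a single request enqueues all of them and a closed browser no longer matters - the page can then poll for status instead of driving each step:

        # tasks.py
        from celery import shared_task, chain

        @shared_task
        def step1(payload):
            # ... logic from the first Django view ...
            return payload

        @shared_task
        def step2(result):
            # ... logic from the second view; receives step1's return value ...
            return result

        @shared_task
        def step3(result):
            # ... logic from the third view ...
            return result

        # views.py
        from django.http import JsonResponse

        def start_processing(request):
            # Each task feeds its return value to the next; the chain survives
            # the browser because it runs entirely in the worker.
            async_result = chain(step1.s(dict(request.GET)), step2.s(), step3.s()).apply_async()
            return JsonResponse({"task_id": async_result.id})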

    Read the article

  • Framework or design pattern for mailing all users of a webapp

    - by Todd Owen
    My app takes care of user registration (with the option to receive email announcements), and can easily handle the actual template-based rendering of email for a given user. JavaMail provides the mail transport layer. But how should I design the application layer between the business objects (e.g. User) and the mail transport? The straightforward approach would be a simple, synchronous loop: iterate through the users, queue the emails, and be done with it. "Queue" might mean sending them straight to the MTA (mail server), or to an in-memory queue to be consumed by another thread. However, I also plan to implement features like throttling the rate of emails, processing bounced emails (NDRs), and maintaining status across application restarts. My intuition is that a good design would decouple this from both the business layer and the mail transport layer as much as possible. I wondered if others had solved this problem before, but after much searching I haven't found any Java libraries which seem to fit this problem. Standalone mail apps such as James or list servers are too large in scope; packages like Spring's MailSender or Commons Email are too small in scope (being basically drop-in replacements for JavaMail). For other languages I haven't found anything appropriate either. I'm curious about how other developers have gone about adding bulk mailing to their applications.
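
    For concreteness, here is a sketch of the decoupling layer described above (all names invented; this is a shape, not a library recommendation): business code enqueues lightweight requests, and a single consumer thread drains them at a throttled rate, keeping both the business objects and JavaMail at arm's length:

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.TimeUnit;

        final class MailDispatcher implements Runnable {
            static final class MailRequest {
                final String to, subject, body;
                MailRequest(String to, String subject, String body) {
                    this.to = to; this.subject = subject; this.body = body;
                }
            }

            private final BlockingQueue<MailRequest> queue = new LinkedBlockingQueue<>();
            private final long minMillisBetweenSends;

            MailDispatcher(long minMillisBetweenSends) {
                this.minMillisBetweenSends = minMillisBetweenSends;
            }

            // Called by business code; knows nothing about transport.
            void enqueue(MailRequest request) { queue.add(request); }

            @Override public void run() {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        MailRequest next = queue.take();
                        // hand `next` to the JavaMail transport layer here
                        TimeUnit.MILLISECONDS.sleep(minMillisBetweenSends); // crude throttle
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }

    Persisting the queue (for status across restarts) and recording bounces would hang off the same seam, without the business layer knowing.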

    Read the article

  • Creating PowerShell Automatic Variables from C#

    - by Uros Calakovic
    I'm trying to make automatic variables (like ActiveSheet or ActiveCell, which are available to Excel VBA) also available to PowerShell as 'automatic variables'. The PowerShell engine is hosted in an Excel VSTO add-in, and Excel.Application is available to it as Globals.ThisAddin.Application. I found this thread here on Stack Overflow and started creating PSVariable-derived classes like:

        public class ActiveCell : PSVariable
        {
            public ActiveCell(string name) : base(name) { }

            public override object Value
            {
                get { return Globals.ThisAddIn.Application.ActiveCell; }
            }
        }

        public class ActiveSheet : PSVariable
        {
            public ActiveSheet(string name) : base(name) { }

            public override object Value
            {
                get { return Globals.ThisAddIn.Application.ActiveSheet; }
            }
        }

    and adding their instances to the current PowerShell session:

        runspace.SessionStateProxy.PSVariable.Set(new ActiveCell("ActiveCell"));
        runspace.SessionStateProxy.PSVariable.Set(new ActiveSheet("ActiveSheet"));

    This works, and I am able to use those variables from PowerShell as $ActiveCell and $ActiveSheet (their values change as the Excel active sheet or cell changes). Then I read the PSVariable documentation here and saw this:

        "There is no established scenario for deriving from this class. To
        programmatically create a shell variable, create an instance of this class
        and set it by using the PSVariableIntrinsics class."

    As I was deriving from PSVariable, I tried to use what was suggested:

        PSVariable activeCell = new PSVariable("ActiveCell");
        activeCell.Value = Globals.ThisAddIn.Application.ActiveCell;
        runspace.SessionStateProxy.PSVariable.Set(activeCell);

    Using this, $ActiveCell appears in my PowerShell session, but its value doesn't change when I change the active cell in Excel. Is the above comment from the PSVariable documentation something I should worry about, or can I continue creating PSVariable-derived classes?

    Read the article

  • C++ iterators, default initialization and what to use as an uninitialized sentinel.

    - by Hassan Syed
    The Context

    I have a custom template container class put together from a map and a vector. The map resolves a string to an ordinal, and the vector resolves an ordinal to the entry (only an initial string-to-ordinal lookup is done; future references are to the vector). The entries are modified intrusively to contain a bool "assigned" and an iterator_type, which is a const_iterator into the container class's map. My container class will use RCF's serialization code (which models boost::serialization) to serialize my container classes to nodes in a network. Serializing iterators is not possible, or a can of worms, and I can easily regenerate them once the vectors and maps are serialized on the remote site.

    The Question

    I need to default-initialize the iterator, and be able to test that it has not been assigned to (if it is assigned it is valid; if not, it is invalid). Since map iterators are not invalidated by operations performed on the map (unless of course items are removed :D), am I to assume that map<x,y>::end() is a valid sentinel (regardless of the state of the map -- i.e., it could be empty) to initialize to? I will always have access to the parent map; I'm just unsure whether end() stays the same as the map's contents change. I don't want to use another level of indirection (i.e., boost::optional) to achieve my goal -- I'd rather forgo compiler checks for correct logic, but it would be nice if I didn't need to.

    Misc

    This question exists, but most of its content seems like nonsense. Assigning NULL to an iterator is invalid according to both g++ and clang++. This is another similar question, but it focuses on the common use-cases of iterators, which generally tend to be using the iterator to iterate; of course, in this use-case the state of the container isn't meant to change while iteration is going on.
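
    For what it's worth, a sketch of the sentinel idea (mine, not the poster's code): node-based containers like std::map never invalidate end() on insert, and erase only invalidates iterators to the erased element, so the sentinel stays comparable for the map's lifetime. A default-constructed iterator, by contrast, is singular and may not even be compared safely:

        #include <map>
        #include <string>

        struct Entry
        {
            bool assigned;
            std::map<std::string, int>::const_iterator where;
        };

        int main()
        {
            std::map<std::string, int> index;
            Entry e = { false, index.end() };  // sentinel: "not assigned yet"

            index["foo"] = 0;                  // does not invalidate index.end()
            if (e.where == index.end())
            {
                // still recognisably unassigned
            }
        }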

    Read the article

  • Adding select menu default value via JS?

    - by purpler
    Hi, I'm developing a meta search engine website, Soogle, and I've used JS to populate the select menu. Right now, after the page is loaded, none of the engines is selected by default; the user needs to select one on his own or [TAB] to it. Is there a way to preselect one value from the menu via JS after the page loads?

    This is the code:

    JavaScript:

        // SEARCH FORM INIT
        function addOptions() {
            var sel = document.searchForm.whichEngine;
            for (var i = 0; i < arr.length; i++) {
                sel.options[i] = new Option(arr[i][0], i);
            }
        }

        function startSearch() {
            searchString = document.searchForm.searchText.value;
            if (searchString != "") {
                var searchEngine = document.searchForm.whichEngine.selectedIndex;
                var finalSearchString = arr[searchEngine][1] + searchString;
                location.href = finalSearchString;
            }
            return false;
        }

        function checkKey(e) {
            var character = (e.which) ? e.which : event.keyCode;
            if (character == '13') {
                return startSearch();
            }
        }

        // SEARCH ENGINES INIT
        var arr = new Array();
        arr[arr.length] = new Array("Web", "http://www.google.com/search?q=");
        arr[arr.length] = new Array("Images", "http://images.google.com/images?q=");
        arr[arr.length] = new Array("Knoweledge", "http://en.wikipedia.org/wiki/Special:Search?search=");
        arr[arr.length] = new Array("Videos", "http://www.youtube.com/results?search_query=");
        arr[arr.length] = new Array("Movies", "http://www.imdb.com/find?q=");
        arr[arr.length] = new Array("Torrents", "http://thepiratebay.org/search/");

    HTML:

        <body onload="addOptions();document.forms.searchForm.searchText.focus()">
            <div id="wrapper">
                <div id="logo"></div>
                <form name="searchForm" method="POST" action="javascript:void(0)">
                    <input name="searchText" type="text" onkeypress="checkKey(event);"/>
                    <span id="color"></span>
                    <select tabindex="1" name="whichEngine" selected="Web"></select>
                    <br />
                    <input tabindex="2" type="button" onClick="return startSearch()" value="Search"/>
                </form>
            </div>
        </body>
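
    One simple approach (my sketch): set selectedIndex once the options exist, e.g. at the end of addOptions(). The selected="Web" attribute on the <select> has no effect in HTML (selected belongs on <option>), so the preselection has to happen in script:

        function addOptions() {
            var sel = document.searchForm.whichEngine;
            for (var i = 0; i < arr.length; i++) {
                sel.options[i] = new Option(arr[i][0], i);
            }
            sel.selectedIndex = 0;   // preselect the first entry ("Web")
        }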

    Read the article

  • Set ContentTemplate in CodeBehind: XamlParseException 2260 Error

    - by user362215
    Hi, I'd like to change the ContentTemplate of a ContentPresenter in the code-behind file, but when I run the Silverlight 4 application, a XamlParseException with error code 2260 occurs:

        foreach (ContentPresenter item in Headers)
        {
            item.ContentTemplate = Parent.UnselectedHeaderTemplate;
        }

        if ((index >= 0) && (index < Headers.Count))
        {
            ContentPresenter item0 = (ContentPresenter)Headers[index];
            item0.ContentTemplate = Parent.SelectedHeaderTemplate;
        }

    If I run only the foreach code without the code in the "if", it works. And if I run only the code in the "if" without the foreach, it works too. But together (the "if" code and the foreach code), it doesn't work. I have no idea why. The two templates look like this:

        <Setter Property="UnselectedHeaderTemplate">
            <Setter.Value>
                <DataTemplate>
                    <ContentControl Content="{Binding RelativeSource={RelativeSource TemplatedParent}, Path=Content}" Margin="10,-10" FontSize="72" Foreground="#FF999999" CacheMode="BitmapCache"/>
                </DataTemplate>
            </Setter.Value>
        </Setter>

        <!-- SelectedHeader template -->
        <Setter Property="SelectedHeaderTemplate">
            <Setter.Value>
                <DataTemplate>
                    <ContentControl Content="{Binding RelativeSource={RelativeSource TemplatedParent}, Path=Content}" Margin="10,-10" FontSize="72" Foreground="{TemplateBinding Foreground}" CacheMode="BitmapCache"/>
                </DataTemplate>
            </Setter.Value>
        </Setter>

    If you have an idea what the problem is, please tell me.

    Read the article

  • is back_insert_iterator<> safe to be passed by value?

    - by afriza
    I have code that looks something like:

        struct Data
        {
            int value;
        };

        class A
        {
        public:
            typedef std::deque<boost::shared_ptr<Data> > TList;

            std::back_insert_iterator<TList> GetInserter()
            {
                return std::back_inserter(m_List);
            }

        private:
            TList m_List;
        };

        class AA
        {
            boost::scoped_ptr<A> m_a;

        public:
            AA() : m_a(new A()) {}

            std::back_insert_iterator<A::TList> GetDataInserter()
            {
                return m_a->GetInserter();
            }
        };

        class B
        {
            template<class OutIt>
            void CopyInterestingDataTo(OutIt outIt)
            {
                // loop and check conditions for interesting data
                // for every `it` in a Container<Data*>
                // create a copy and store it
                for( ... it = ..; .. ; ..)
                    if (...)
                    {
                        *outIt = OutIt::container_type::value_type(new Data(**it));
                        outIt++; // dummy
                    }
            }

            void func()
            {
                AA aa;
                CopyInterestingDataTo(aa.GetInserter()); // aa.m_a->m_List is empty!
            }
        };

    The problem is that A::m_List is always empty even after CopyInterestingDataTo() is called. However, if I debug and step into CopyInterestingDataTo(), the iterator does store the supposedly inserted data!
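
    As a sanity check, this standalone sketch (mine, not the poster's code) shows that back_insert_iterator is safe to pass by value - it stores a pointer to the container, so every copy appends to the same deque. If the target list still ends up empty, the usual culprit is inserting into a different container instance than the one later inspected:

        #include <cassert>
        #include <deque>
        #include <iterator>

        static void fill(std::back_insert_iterator<std::deque<int> > out)
        {
            for (int i = 0; i < 3; ++i)
            {
                *out = i;   // equivalent to push_back(i) on the original deque
                ++out;
            }
        }

        int main()
        {
            std::deque<int> d;
            fill(std::back_inserter(d)); // iterator passed by value
            assert(d.size() == 3);       // the original container was filled
        }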

    Read the article
