Search Results

Search found 15045 results on 602 pages for 'template engine'.

Page 536/602

  • Disadvantages of MySQL Row Locking

    - by Nyxynyx
    I am using row locking (transactions) in MySQL for creating a job queue. The engine used is InnoDB. The SQL query:

        START TRANSACTION;
        SELECT * FROM mytable WHERE status IS NULL ORDER BY timestamp DESC LIMIT 1 FOR UPDATE;
        UPDATE mytable SET status = 1;
        COMMIT;

    According to this webpage: "The problem with SELECT FOR UPDATE is that it usually creates a single synchronization point for all of the worker processes, and you see a lot of processes waiting for the locks to be released with COMMIT." Question: Does this mean that when the first transaction takes some time to commit, a second similar query issued before the first transaction is committed will have to wait for it to finish before it executes? If this is true, then I do not understand why locking a single row (which is what I assume happens) affects the next transaction's query, which should not need to read that locked row. Additionally, can this problem be solved (while still achieving the effect row locking provides for a job queue) by doing an UPDATE instead of the transaction?

        UPDATE mytable SET status = 1 WHERE status IS NULL ORDER BY timestamp DESC LIMIT 1;
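
    For illustration, a minimal Python sketch of the single-statement claim pattern proposed above, using MySQL Connector/Python; the claimed_by token column is a hypothetical addition so each worker can locate the row it just claimed:

        import uuid
        import mysql.connector  # assumes the MySQL Connector/Python package

        def claim_next_job(conn):
            """Atomically claim the newest unclaimed row, without FOR UPDATE."""
            token = uuid.uuid4().hex
            cur = conn.cursor(dictionary=True)
            # A single UPDATE takes and releases the row lock at commit, so
            # workers do not queue behind a long SELECT ... FOR UPDATE.
            # 'claimed_by' is a hypothetical column added for this pattern.
            cur.execute(
                "UPDATE mytable SET status = 1, claimed_by = %s "
                "WHERE status IS NULL ORDER BY timestamp DESC LIMIT 1",
                (token,),
            )
            conn.commit()
            if cur.rowcount == 0:
                return None  # queue is empty
            cur.execute("SELECT * FROM mytable WHERE claimed_by = %s", (token,))
            return cur.fetchone()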

    Read the article

  • Where do I put Javassist code?

    - by DutrowLLC
    I have an application running on Google App Engine. I'm using Restlets, and I have a couple of layers set up, including the Restlet layer, the model layer, the business layer, and the data layer. I'm attempting to use Javassist to modify some classes, but I'm unsure where to actually put the code. I tried to put the code in the static initialization block:

        public class Person {
            String firstName;
            String getFirstName() { return null; }

            static {
                ClassPool pool = ClassPool.getDefault();
                try {
                    CtClass ctPerson = pool.get("Person");
                    CtMethod ctGetFirstName = ctPerson.getDeclaredMethod("getFirstName");
                    ctGetFirstName.setBody("return firstName;");
                    ctPerson.toClass();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }

    ...but that resulted in this error:

        javassist.CannotCompileException: ..... attempted duplicate class definition ...

    I guess it makes sense that I can't edit the class file in the middle of its generation. I know the code works, because I was able to run it correctly by simply putting it in a location that runs when I send the program a command (accessing a Restlet resource). The code ran fine if an instance of the class had not already been instantiated; however, once I instantiated an instance of the affected class, the Javassist code failed. I assume I need to put this code somewhere that it will run only once: after the program starts, directly before the class is instantiated for the first time, or, even better, at compile time.

    Read the article

  • sqlite3 'database is locked' won't go away with retries

    - by Azarias
    I have a sqlite3 database that is accessed by a few threads (3-4). I am aware of the general limitations of sqlite3 with regard to concurrency, as stated at http://www.sqlite.org/faq.html#q6, but I am convinced that is not the problem. All of the threads both read and write from this database. Whenever I do a write, I have the following construct:

        try:
            cursor.execute(q, params)
            connection.commit()
        except sqlite3.IntegrityError:
            notify()
        except sqlite3.OperationalError:
            print sys.exc_info()
            print "DATABASE LOCKED; sleeping for 3 seconds and trying again"
            time.sleep(3)
            # ...retry the statement...

    On some runs I won't even hit this block, but when I do, it never comes out of it (it keeps retrying, but I keep getting the 'database is locked' error from exc_info). If I understand the reader/writer lock usage correctly, some amount of waiting should help with the contention. This sounds like deadlock, but I do not use any transactions in my code, and every SELECT or INSERT is a simple one-off. Some threads, however, keep the same connection when they do their operations (which include a mix of SELECTs, INSERTs and other modifiers). I would appreciate it if you could shed some light on this, and also on ways to fix it (besides using a different database engine).
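
    For illustration, a minimal sketch of the two adjustments that usually clear this up with the standard sqlite3 module: a busy timeout on connect, and finishing every read so no connection quietly holds a transaction open (the file name and helpers are illustrative):

        import sqlite3

        # One connection per thread -- sqlite3 connections must not be
        # shared across threads. The timeout makes sqlite3 itself retry
        # for up to 10 seconds before raising "database is locked".
        connection = sqlite3.connect("queue.db", timeout=10.0)

        def write(q, params):
            cursor = connection.cursor()
            cursor.execute(q, params)
            connection.commit()       # release the write lock promptly

        def read(q, params=()):
            cursor = connection.cursor()
            rows = cursor.execute(q, params).fetchall()
            cursor.close()            # fetch fully so no read lock lingers
            return rows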

    Read the article

  • What's the most efficient query?

    - by Aaron Carlino
    I have a table named Projects that has the following relationships: has many Contributions; has many Payments. In my result set, I need the following aggregate values: number of unique contributors (DonorID on the Contribution table), total contributed (SUM of Amount on the Contribution table), and total paid (SUM of PaymentAmount on the Payment table). Because there are so many aggregate functions and multiple joins, it gets messy to use standard aggregate functions with a GROUP BY clause. I also need the ability to sort and filter these fields. So I've come up with two options. Using subqueries:

        SELECT Project.ID AS PROJECT_ID,
            (SELECT SUM(PaymentAmount) FROM Payment WHERE ProjectID = PROJECT_ID) AS TotalPaidBack,
            (SELECT COUNT(DISTINCT DonorID) FROM Contribution WHERE RecipientID = PROJECT_ID) AS ContributorCount,
            (SELECT SUM(Amount) FROM Contribution WHERE RecipientID = PROJECT_ID) AS TotalReceived
        FROM Project;

    Using a temporary table:

        DROP TABLE IF EXISTS Project_Temp;
        CREATE TEMPORARY TABLE Project_Temp (
            project_id INT NOT NULL,
            total_payments INT,
            total_donors INT,
            total_received INT,
            PRIMARY KEY (project_id)
        ) ENGINE=MEMORY;
        INSERT INTO Project_Temp (project_id, total_payments)
            SELECT `Project`.ID, IFNULL(SUM(PaymentAmount), 0)
            FROM `Project` LEFT JOIN `Payment` ON ProjectID = `Project`.ID
            GROUP BY 1;
        INSERT INTO Project_Temp (project_id, total_donors, total_received)
            SELECT `Project`.ID, IFNULL(COUNT(DISTINCT DonorID), 0), IFNULL(SUM(Amount), 0)
            FROM `Project` LEFT JOIN `Contribution` ON RecipientID = `Project`.ID
            GROUP BY 1
        ON DUPLICATE KEY UPDATE
            total_donors = VALUES(total_donors),
            total_received = VALUES(total_received);
        SELECT * FROM Project_Temp;

    Tests for both are pretty comparable, in the 0.7-0.8 second range with 1,000 rows. But I'm really concerned about scalability, and I don't want to have to re-engineer everything as my tables grow. What's the best approach?

    Read the article

  • How should developers cope with so many GUI configuration combinations?

    - by shawn-harrison
    These days, any decent Windows desktop application must perform well and look good under the following conditions: XP and Vista and Windows 7. 32 bit and 64 bit. With and without Themes. With and without Aero. At 96 and 120 and perhaps custom DPIs. One or more monitors (screens). Each OS has its own preferred font. Oh my! What is a lowly little Windows desktop application developer to do? :( I'm hoping to get a thread started with suggestions on how to deal with this GUI dilemma. First off, I'm on Delphi 7. a) Does Delphi 2010 bring anything new to the table to help with this situation? b) Should we pick an aftermarket component suite and rely on them to solve all these problems? c) Should we go with an aftermarket skinning engine? d) Perhaps a more HTML-type GUI is the way to go. Can we make a relatively complex GUI app with HTML that doesn't require using a browser? (prefer to keep it form based) e) Should we just knuckle down and code through each one of these scenarios and quit bitching about it? f) And finally, how in the world are we supposed to test all these conditions?

    Read the article

  • Change PowerPoint chart data with .NET

    - by mc6688
    I have a PowerPoint template that contains one slide, and on that slide is a chart. I'd like to be able to manipulate that chart's data using .NET. So far I have code that: unzips the PowerPoint file; unzips the embedded Excel file (ppt\embeddings\Microsoft_Office_Excel_Worksheet1.xlsx); successfully manipulates the data in the Excel sheet and zips it back up; opens and manipulates ppt\charts\chart1.xml; and zips the PowerPoint back up and delivers it to the user. The result of this is a PowerPoint file that shows a blank chart, but when I click on the chart and go to edit data, it updates the data and shows the correct chart. I believe my problem is with the chart1.xml that I am generating. I have compared my generated version with a version created by PowerPoint and they are almost identical. The only differences are in the values for <c:crossAx> and <c:axId>. There are also some rounding differences in the data, but I do not feel like that would result in a blank chart. Is there another file that I need to edit? Does anyone have any ideas as to what else I should try to get this working?

    Read the article

  • Parse error: syntax error, unexpected ';'

    - by sufoid
    Hello, I have this script:

        <?
        require("lib2/config.inc.php");
        require("lib2/tpl.class.php");
        require("lib2/db.class.php");
        require("lib2/um.class.php");

        $tpl = new template("templates", "tpl");
        $db = new db($db['location'], $db['username'], $db['passwort'], $db['database']);
        $um = new usermanagment();

        /** User login **/
        $checklogin = $um->check_login();
        $userdata = $um->getuserdata();

        if (!$checklogin && !$guest) {
            header("LOCATION: ./index2.php");
        }

        eval("\$header .= \" ".$tpl->get("header")."\";");
        eval("\$footer .= \" ".$tpl->get("footer")."\";");

        $time = time();
        $db->Query("UPDATE userdaten SET lastaction = '$time' WHERE userid = '".$userdata['userid']."'");
        ?>

    And I get this error:

        Parse error: syntax error, unexpected ';' in /home/httpd/html/login/global.php(22) : eval()'d code on line 96

    Any ideas?

    Read the article

  • Are there any code critique sites or similar resources?

    - by Ukko
    I have noticed when people post example code illustrating some issue that they are having often they will gather a number of comments addressing the quality of the code they presented and not the actual problem asked. This is very helpful--if not well directed. Often, this is wasted effort since the asker is often not receptive and the code is often chopped down to something small to post leaving lots of rough edges. In the old days you would see people asking questions like this on comp.lang.lisp and other parts of the comp.lang hierarchy. But that bit of the net kind of sank into the sewers of neglect. Is there a comparable one-stop-shop today? I am partially asking for selfish reasons, I know how to write good idiomatic C, Lisp, O'Caml, and Java code. But I learned C++ pre-template and STL, those rusty skills are not really applicable to today's C++. I have picked up languages like Scala in a vacuum and get by, but am I really doing it correctly? There are so many ways you can abuse a language, I am currently working against a codebase of Fortran written in C, and I recognize and loathe the "that guy" who wrote it. I don't want to be someone else's "that guy" if I can help it. Just because it works does not mean that one did not totally miss the boat on how it should have been done. Do you seek out this type of critique? If so how, where and why? What types of benefits do you derive from it? How about abuse and trolls?

    Read the article

  • UTF-8 HTML and CSS files with BOM (and how to remove the BOM with Python)

    - by Cameron
    First, some background: I'm developing a web application using Python. All of my (text) files are currently stored in UTF-8 with the BOM. This includes all my HTML templates and CSS files. These resources are stored as binary data (BOM and all) in my DB. When I retrieve the templates from the DB, I decode them using template.decode('utf-8'). When the HTML arrives in the browser, the BOM is present at the beginning of the HTTP response body. This generates a very interesting error in Chrome: Extra <html> encountered. Migrating attributes back to the original <html> element and ignoring the tag. Chrome seems to generate an <html> tag automatically when it sees the BOM and mistakes it for content, making the real <html> tag an error. So, using Python, what is the best way to remove the BOM from my UTF-8 encoded templates (if it exists -- I can't guarantee this in the future)? For other text-based files like CSS, will major browsers correctly interpret (or ignore) the BOM? They are being sent as plain binary data without .decode('utf-8'). Note: I am using Python 2.5. Thanks!
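
    For illustration, a minimal sketch using Python's built-in 'utf-8-sig' codec (available since Python 2.5): it consumes a leading BOM during decoding if one is present, and otherwise behaves exactly like plain 'utf-8':

        import codecs

        def decode_template(raw):
            # Drops a leading BOM if present; identical to 'utf-8' otherwise.
            return raw.decode('utf-8-sig')

        def strip_bom(raw):
            # For resources served as raw bytes (e.g. CSS), remove the three
            # BOM bytes directly without decoding.
            if raw.startswith(codecs.BOM_UTF8):
                return raw[len(codecs.BOM_UTF8):]
            return raw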

    Read the article

  • Getting information from a TreeView with HierarchicalDataTemplate

    - by lina
    Good day! I have this template:

        <common:HierarchicalDataTemplate x:Key="my2ndPlusHierarchicalTemplate" ItemsSource="{Binding Children}">
            <StackPanel Margin="0,2,5,2" Orientation="Vertical" Grid.Column="2">
                <CheckBox IsTabStop="False" IsChecked="False" Click="ItemCheckbox_Click" Grid.Column="1" />
                <TextBlock Text="{Binding Name}" FontSize="16" Foreground="#FF100101" HorizontalAlignment="Left" FontFamily="Verdana" FontWeight="Bold" />
                <TextBlock Text="{Binding Description}" FontFamily="Verdana" FontSize="10" HorizontalAlignment="Left" Foreground="#FFA09A9A" FontStyle="Italic" />
                <TextBox Width="100" Grid.Column="4" Height="24" LostFocus="TextBox_LostFocus" Name="tbNumber"></TextBox>
            </StackPanel>
        </common:HierarchicalDataTemplate>

    for a TreeView:

        <controls:TreeView x:Name="tvServices" ItemTemplate="{StaticResource myHierarchicalTemplate}" ItemContainerStyle="{StaticResource expandedTreeViewItemStyle}" Grid.Column="1" Grid.Row="2" Grid.ColumnSpan="3" BorderBrush="#FFC1BCBC" FontFamily="Verdana" FontSize="14">
        </controls:TreeView>

    I want to know the Name property of each TextBox in the TreeView so I can validate each textbox, such as:

        private void TextBox_LostFocus(object sender, RoutedEventArgs e)
        {
            tbNumber.ClearValidationError();
            if ((!tbNumber.Text.IsZakazNumberValid()) && (tbNumber.Text != ""))
            {
                tbNumber.SetValidation(MyStrings.NumberError);
                tbNumber.RaiseValidationError();
                isValid = false;
            }
            else
            {
                isValid = true;
            }
        }

    I also want to see which check boxes were checked. How can I do it?

    Read the article

  • Trigger ad-hoc activity within a workflow

    - by Chris Taylor
    I am looking to use WF 4 to replace an existing workflow solution we have. One feature currently used in the existing workflow engine is the ability to cancel the current activity and loop back to a FlowSwitch-type activity. So, in the crude workflow below, we start at 'O' and, based on the input data, the workflow follows the path to 'A2', which is currently blocking on a bookmark waiting for input. (Diagram: 'O' feeds a FlowSwitch that branches to A1, A2 or A3, and the end of each branch loops back to 'O'.) However, in the meantime some out-of-band data comes in that means we should cancel 'A2' and return to the FlowSwitch to re-evaluate based on the new data. The question is: what is the best way to handle the out-of-band data that arrived? My initial guess is to have a Parallel activity with one branch waiting for out-of-band data and the other branch containing the workflow sequence described above. If data came in on the branch waiting for the out-of-band data, how would I cancel the current activity in the workflow and force it to return to the FlowSwitch? Or, of course, is there a better way to handle this? I have not actually done any work with the WF4 stuff (or WF3, for that matter), so I might be missing something obvious here.

    Read the article

  • How to efficiently compare the sign of two floating-point values while handling negative zeros

    - by François Beaune
    Given two floating-point numbers, I'm looking for an efficient way to check whether they have the same sign, given that if either of the two values is zero (+0.0 or -0.0), they should be considered to have the same sign. For instance:

        SameSign(1.0, 2.0)    should return true
        SameSign(-1.0, -2.0)  should return true
        SameSign(-1.0, 2.0)   should return false
        SameSign(0.0, 1.0)    should return true
        SameSign(0.0, -1.0)   should return true
        SameSign(-0.0, 1.0)   should return true
        SameSign(-0.0, -1.0)  should return true

    A naive but correct implementation of SameSign in C++ would be:

        bool SameSign(float a, float b)
        {
            if (fabs(a) == 0.0f || fabs(b) == 0.0f)
                return true;
            return (a >= 0.0f) == (b >= 0.0f);
        }

    Assuming the IEEE floating-point model, here's a variant of SameSign that compiles to branchless code (at least with Visual C++ 2008):

        bool SameSign(float a, float b)
        {
            int ia = binary_cast<int>(a);
            int ib = binary_cast<int>(b);
            int az = (ia & 0x7FFFFFFF) == 0;
            int bz = (ib & 0x7FFFFFFF) == 0;
            int ab = (ia ^ ib) >= 0;
            return (az | bz | ab) != 0;
        }

    with binary_cast defined as follows:

        template <typename Target, typename Source>
        inline Target binary_cast(Source s)
        {
            union
            {
                Source m_source;
                Target m_target;
            } u;
            u.m_source = s;
            return u.m_target;
        }

    I'm looking for two things: a faster, more efficient implementation of SameSign, using bit tricks, FPU tricks or even SSE intrinsics; and an efficient extension of SameSign to three values.

    Read the article

  • Determining when stringByEvaluatingJavaScriptFromString has finished

    - by alku83
    I have a UIWebView which loads an HTML page. This page has two buttons on it, say Exit and Submit. I don't want users to be able to click the Exit button, so once the page has finished loading (i.e. webViewDidFinishLoad is called), I use stringByEvaluatingJavaScriptFromString to remove one of these buttons by manipulating the HTML. I also disable user interaction on the UIWebView in webViewDidStartLoad, and enable it again in webViewDidFinishLoad. The problem I am finding is that stringByEvaluatingJavaScriptFromString takes a second or two to complete, and it seems to be done in its own thread. So what is happening is that webViewDidFinishLoad is called, user interaction is enabled on the UIWebView, and if the user is quick, they can click the Exit button before stringByEvaluatingJavaScriptFromString has finished. As stringByEvaluatingJavaScriptFromString seems to be on its own thread, with no way to know when it's finished (it doesn't call webViewDidFinishLoad), the only way I can see to completely prevent users from tapping the Exit button is to enable user interaction on the UIWebView only after some delay, which is unreliable (how can I really know how long to delay for?). Am I correct that stringByEvaluatingJavaScriptFromString is done on its own thread, and that I have no way of telling when it's finished? Any other suggestions for how to get around this problem? EDIT: In short, what I want to know is whether it is possible to disable a UIWebView while stringByEvaluatingJavaScriptFromString is executing, and re-enable the UIWebView when the JavaScript is finished. EDIT 2: There's an article here which seems to imply you can somehow poll the JS engine to see when it's finished, but I can't find any other references saying the same thing: http://drnicwilliams.com/2008/11/10/to-webkit-or-not-to-webkit-within-your-iphone-app/ EDIT 3: Based on the answer from Brad Smith, it seems that I actually need to know when the UIWebView has finished loading itself after the JavaScript has executed. It's looking more and more like I just need to put a delay of sorts in there.

    Read the article

  • Using jQuery with Windows 8 Metro JavaScript App causes security error

    - by patridge
    Since it sounded like jQuery was an option for Metro JavaScript apps, I was starting to look forward to Windows 8 development. I installed Visual Studio 2012 Express RC and started a new project (both the empty and grid templates have the same problem). I made a local copy of jQuery 1.7.2 and added it as a script reference:

        <!-- SomeTestApp references -->
        <link href="/css/default.css" rel="stylesheet" />
        <script src="/js/jquery-1.7.2.js"></script>
        <script src="/js/default.js"></script>

    Unfortunately, as soon as I ran the resulting app it tossed out a console error:

        HTML1701: Unable to add dynamic content ' a'
        A script attempted to inject dynamic content, or elements previously modified dynamically, that might be unsafe. For example, using the innerHTML property to add script or malformed HTML will generate this exception. Use the toStaticHTML method to filter dynamic content, or explicitly create elements and attributes with a method such as createElement. For more information, see http://go.microsoft.com/fwlink/?LinkID=247104.

    I slapped a breakpoint in a non-minified version of jQuery and found the offending line:

        div.innerHTML = " <link/><table></table><a href='/a' style='top:1px;float:left;opacity:.55;'>a</a><input type='checkbox'/>";

    Apparently, the security model for Metro apps forbids creating elements this way. This error doesn't cause any immediate issues for the user, but given its location, I am worried it will make capability-discovery tests in jQuery fail that shouldn't. I definitely want jQuery's $.Deferred for making just about everything easier. I would prefer to be able to use the selector engine and event handling systems, but I could live without them if I had to. How does one get the latest jQuery to play nicely with Metro development?

    Read the article

  • Are function-local typedefs visible inside C++0x lambdas?

    - by GMan - Save the Unicorns
    I've run into a strange problem. The following simplified code reproduces the problem in MSVC 2010 Beta 2:

        template <typename T>
        struct dummy
        {
            static T foo(void) { return T(); }
        };

        int main(void)
        {
            typedef dummy<bool> dummy_type;
            auto x = [](void){ bool b = dummy_type::foo(); };
            // auto x = [](void){ bool b = dummy<bool>::foo(); }; // works
        }

    The typedef I created locally in the function doesn't seem to be visible in the lambda. If I replace the typedef with the actual type, it works as expected. Here are some other test cases:

        // crashes the compiler, credit to Tarydon
        int main(void)
        {
            struct dummy {};
            auto x = [](void){ dummy d; };
        }

        // works as expected
        int main(void)
        {
            typedef int integer;
            auto x = [](void){ integer i = 0; };
        }

    I don't have g++ 4.5 available to test with right now. Is this some strange rule in C++0x, or just a bug in the compiler? From the results above, I'm leaning towards a bug; the crash is definitely a bug. For now, I have filed two bug reports. All the code snippets above should compile. The error has to do with using scope resolution on locally defined scopes. (Spotted by dvide.) And the crash bug has to do with... who knows. :) Update: according to the bug reports, both have been fixed for the next release of Visual Studio 2010.

    Read the article

  • How to store session values with Node.js and mongodb?

    - by Tirithen
    How do I get sessions working with Node.js, [email protected] and mongodb? I'm now trying to use connect-mongo like this:

        var config = require('../config'),
            express = require('express'),
            MongoStore = require('connect-mongo'),
            server = express.createServer();

        server.configure(function() {
            server.use(express.logger());
            server.use(express.methodOverride());
            server.use(express.static(config.staticPath));
            server.use(express.bodyParser());
            server.use(express.cookieParser());
            server.use(express.session({
                store: new MongoStore({ db: config.db }),
                secret: config.salt
            }));
        });

        server.configure('development', function() {
            server.use(express.errorHandler({ dumpExceptions: true, showStack: true }));
        });

        server.configure('production', function() {
            server.use(express.errorHandler());
        });

        server.set('views', __dirname + '/../views');
        server.set('view engine', 'jade');

        server.listen(config.port);

    I'm then, in a server.get callback, trying to use req.session.test = 'hello'; to store that value in the session, but it's not stored between requests. It probably takes something more than this to store session values. How? Is there a better-documented module than connect-mongo?

    Read the article

  • Creating PowerShell Automatic Variables from C#

    - by Uros Calakovic
    I'm trying to make the automatic variables that Excel VBA provides (like ActiveSheet or ActiveCell) also available to PowerShell as 'automatic variables'. The PowerShell engine is hosted in an Excel VSTO add-in, and Excel.Application is available to it as Globals.ThisAddIn.Application. I found this thread here on StackOverflow and started creating PSVariable-derived classes like:

        public class ActiveCell : PSVariable
        {
            public ActiveCell(string name) : base(name) { }
            public override object Value
            {
                get { return Globals.ThisAddIn.Application.ActiveCell; }
            }
        }

        public class ActiveSheet : PSVariable
        {
            public ActiveSheet(string name) : base(name) { }
            public override object Value
            {
                get { return Globals.ThisAddIn.Application.ActiveSheet; }
            }
        }

    and adding their instances to the current PowerShell session:

        runspace.SessionStateProxy.PSVariable.Set(new ActiveCell("ActiveCell"));
        runspace.SessionStateProxy.PSVariable.Set(new ActiveSheet("ActiveSheet"));

    This works, and I am able to use those variables from PowerShell as $ActiveCell and $ActiveSheet (their values change as the Excel active sheet or cell changes). Then I read the PSVariable documentation here and saw this: "There is no established scenario for deriving from this class. To programmatically create a shell variable, create an instance of this class and set it by using the PSVariableIntrinsics class." As I was deriving from PSVariable, I tried to use what was suggested:

        PSVariable activeCell = new PSVariable("ActiveCell");
        activeCell.Value = Globals.ThisAddIn.Application.ActiveCell;
        runspace.SessionStateProxy.PSVariable.Set(activeCell);

    Using this, $ActiveCell appears in my PowerShell session, but its value doesn't change as I change the active cell in Excel. Is the above comment from the PSVariable documentation something I should worry about, or can I continue creating PSVariable-derived classes? Is there another way of making Excel globals available to PowerShell?

    Read the article

  • Error message "The URI does not identify an external Java class"

    - by iHeartGreek
    Hi! I am new to XSL, and thus new to using scripts within XSL. I have taken example code (also using C#) and adapted it for my own use, but it does not work. The error message is:

        The URI urn:cs-scripts does not identify an external Java class

    The relevant code I have is:

        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:msxsl="urn:schemas-microsoft-com:xslt"
            exclude-result-prefixes="msxsl"
            xmlns:strTok="urn:cs-scripts">
        ...
        ...
        ...
        </xsl:template>

        <xsl:variable name="temp">
            <xsl:value-of select="tok:getList('AAA BBB CCC', ' ')"/>
        </xsl:variable>

        <msxsl:script language="C#" implements-prefix="tok">
        <![CDATA[
            public string[] getList(string str, char[] delim)
            {
                return str.Split(delim, StringSplitOptions.None);
            }
            public string getString(string[] list, int i)
            {
                return list[i];
            }
        ]]>
        </msxsl:script>
        </xsl:stylesheet>

    Read the article

  • charsets in MySQL replication

    - by niklassaers
    Hi guys, What can I do to ensure that replication will use latin1 instead of utf-8? I'm migrating between an MySQL 5.1.22 server (master) on a Linux system and a MySQL 5.1.42 server (slave) on a FreeBSD system. My replication works well, but when non-ascii characters are in my varchars, they turn "weird". The Linux/MySQL-5.1.22 shows the following character set variables: character_set_client=latin1 character_set_connection=latin1 character_set_database=latin1 character_set_filesystem=binary character_set_results=latin1 character_set_server=latin1 character_set_system=utf8 character_sets_dir=/usr/share/mysql/charsets/ collation_connection=latin1_swedish_ci collation_database=latin1_swedish_ci collation_server=latin1_swedish_ci While the FreeBSD shows character_set_client=utf8 character_set_connection=utf8 character_set_database=utf8 character_set_filesystem=binary character_set_results=utf8 character_set_server=utf8 character_set_system=utf8 character_sets_dir=/usr/local/share/mysql/charsets/ collation_connection=utf8_general_ci collation_database=utf8_general_ci collation_server=utf8_general_ci Setting any of these variables from the MySQL CLI has no effect, and setting them in my.cnf or at the command line makes the server not start. Of course, both servers have the tables in question created the same way, in this case with DEFAULT CHARSET=latin1. Let me give you an example: CREATE TABLE `test` ( `test` varchar(5) DEFAULT NULL ) ENGINE=MyISAM DEFAULT CHARSET=latin1 When I on the master do, in a Latin1 terminal, "INSERT INTO test VALUES ('æøå')", this becomes on the slave, when I select it from a Latin1 based terminal +--------+ | test | +--------+ | æøå | +--------+ On a UTF-8 based terminal on the replication slave, test contains: +--------+ | test | +--------+ | æøå | +--------+ So my conclusion is that it is converted to utf8, even though the table definition is latin1. Is this a correct conclusion? Of course, on the master, in a latin1 terminal, it still says: +------+ | test | +------+ | æøå | +------+ Since both system character sets are utf-8, if I set both terminals to utf-8 and do again "INSERT INTO test VALUES ('æøå')" on the master with a utf-8 terminal, on the slave with utf-8 I get: +------------+ | test | +------------+ | æøà | +------------+ If my conclusion is correct, all my replicated data is converted to utf8 (if it is utf8, it is treated as latin1 and converted to utf8), while all the old data in the table is, as the CREATE TABLE suggests, latin1. I'd love to convert it all to utf-8 if it weren't for the fact that legacy applications rely on it being latin1, so I need to keep it in latin1 while they still exist. What can I do to ensure that the replication reads latin1, treats it as latin1 and writes it on the slave as latin1? Cheers Nik

    Read the article

  • C++ iterators, default initialization and what to use as an uninitialized sentinel.

    - by Hassan Syed
    The Context: I have a custom template container class put together from a map and a vector. The map resolves a string to an ordinal, and the vector resolves an ordinal to the entry (only an initial string-to-ordinal lookup is done; future references are to the vector). The entries are modified intrusively to contain a bool "assigned" and an iterator_type, which is a const_iterator into the container class's map. My container class will use RCF's serialization code (which models boost::serialization) to serialize my container classes to nodes in a network. Serializing iterators is not possible, or at least a can of worms, and I can easily regenerate them once the vectors and maps are serialized on the remote side. The Question: I need to default-initialize the iterator, and be able to test that it has not been assigned to (if it is assigned it is valid; if not, it is invalid). Since map iterators are not invalidated by operations performed on the map (unless of course items are removed :D), am I to assume that map<x,y>::end() is a valid sentinel (regardless of the state of the map -- i.e., it could be empty) to initialize to? I will always have access to the parent map; I'm just unsure whether end() stays the same as the map's contents change. I don't want to use another level of indirection (i.e., boost::optional) to achieve my goal; I'd rather forgo compiler checks for correct logic, but it would be nice if I didn't need to. Misc: This question exists, but most of its content seems to be nonsense. Assigning NULL to an iterator is invalid according to g++ and clang++. This is another similar question, but it focuses on the common use case of iterators, which generally is using the iterator to iterate; of course, in that use case the state of the container isn't meant to change while iteration is going on.

    Read the article

  • Adding select menu default value via JS?

    - by purpler
    Hi, i'm developing a meta search engine website, Soogle and i've used JS to populate select menu.. Now, after the page is loaded none of engines is loaded by default, user needs to select it on his own or [TAB] to it.. Is there a possibility to preselect one value from the menu via JS after the page loads? This is the code: Javascript: // SEARCH FORM INIT function addOptions(){ var sel=document.searchForm.whichEngine;for(var i=0;i<arr.length;i++){ sel.options[i]=new Option(arr[i][0],i)}} function startSearch(){ searchString=document.searchForm.searchText.value;if(searchString!=""){ var searchEngine=document.searchForm.whichEngine.selectedIndex; var finalSearchString=arr[searchEngine][1]+searchString;location.href=finalSearchString}return false} function checkKey(e){ var character=(e.which)?e.which:event.keyCode;if(character=='13'){ return startSearch()}} // SEARCH ENGINES INIT var arr = new Array(); arr[arr.length] = new Array("Web", "http://www.google.com/search?q="); arr[arr.length] = new Array("Images", "http://images.google.com/images?q="); arr[arr.length] = new Array("Knoweledge","http://en.wikipedia.org/wiki/Special:Search?search="); arr[arr.length] = new Array("Videos","http://www.youtube.com/results?search_query="); arr[arr.length] = new Array("Movies", "http://www.imdb.com/find?q="); arr[arr.length] = new Array("Torrents", "http://thepiratebay.org/search/"); HTML: <body onload="addOptions();document.forms.searchForm.searchText.focus()"> <div id="wrapper"> <div id="logo"></div> <form name="searchForm" method="POST" action="javascript:void(0)"> <input name="searchText" type="text" onkeypress="checkKey(event);"/> <span id="color"></span> <select tabindex="1" name="whichEngine" selected="Web"></select> <br /> <input tabindex="2" type="button" onClick="return startSearch()" value="Search"/> </form> </div> </body>

    Read the article

  • Using NULLs in matchup table

    - by TomWilsonFL
    I am working on the accounting portion of a reservation system (think limo company). In the system there are multiple objects that can either be paid or submit a payment. I am tracking all of these "transactions" in three tables called tx, tx_cc, and tx_ch. tx generates a new tx_id (transaction ID) and keeps the information about amount, validity, etc. tx_cc and tx_ch keep the information about the credit card or check used, respectively, which link to other tables (credit_card and bank_account among others). This seems fairly normalized to me, no? Now here is my problem: the payment transaction can take place for a myriad of reasons. Either a reservation is being paid for, a travel agent that booked a reservation is being paid, a driver is being paid, etc. This results in multiple tables, one for each of the entities: agent_tx, driver_tx, reservation_tx, etc. They look like this:

        CREATE TABLE IF NOT EXISTS `driver_tx` (
          `tx_id` int(10) unsigned zerofill NOT NULL,
          `driver_id` int(11) NOT NULL,
          `reservation_id` int(11) default NULL,
          `reservation_item_id` int(11) default NULL,
          PRIMARY KEY (`tx_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    Now this transaction is for a driver, but it could apply to an individual item on the reservation or to the entire reservation overall. Therefore I require that either reservation_id or reservation_item_id be null. In the future there may be other things a driver is paid for, which I would also add to this table, defaulting to null. What is the rule on this? Opinion? Obviously I could break this out into MANY three-column tables, but the amount of OUTER JOINing needed seems outrageous. Your input is appreciated. Peace, Tom

    Read the article

  • jQuery AJAX chained calls + Celery in Django

    - by user1029968
    Currently, clicking one of the links in my application triggers an AJAX call (GET) that, if it succeeds, triggers a second one, and that second one, if it succeeds, calls a third. This way the user can be informed which part of the process started by clicking the link is currently ongoing. So, in the template file in my Django project, the click callback body for the link mentioned looks like this:

        $("#the-link").click(function(item) {
            // CALL 1
            $.ajax({
                url: {% url ajax_call_1 %},
                data: { /* something */ }
            })
            .done(function(call1Result) {
                // CALL 2
                $.ajax({
                    url: {% url ajax_call_2 %},
                    data: { /* call1Result passed here to CALL 2 */ }
                })
                .done(function(call2Result) {
                    // CALL 3
                    $.ajax({
                        url: {% url ajax_call_3 %},
                        data: { /* call2Result passed here to CALL 3 */ }
                    })
                    .done(function(call3Result) {
                        // expected result if everything went fine
                        console.log("wow, it worked!");
                        console.log(call3Result);
                    })
                    .fail(function(errorObject) {
                        console.log("call3 failed");
                        console.log(errorObject);
                    });
                })
                .fail(function(errorObject) {
                    console.log("call2 failed");
                    console.log(errorObject);
                });
            })
            .fail(function(errorObject) {
                console.log("call1 failed");
                console.log(errorObject);
            });
        });

    This works fine for me. The thing is, I'd like to prevent the remaining calls from being interrupted if the user closes the browser before they finish (it will take some time to finish all three), as there is some additional logic in the Django view functions called by each GET request. For example, if the user clicks the link and closes the browser during CALL 1, is it possible to somehow go on with the following CALL 2 and CALL 3? I know that normally I'd be able to use a Celery task to process the function, but is it still possible here with the chained calls mentioned? Any help is much appreciated!
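
    For illustration, a minimal sketch of how the three steps could be moved server-side with a Celery chain, so one AJAX call starts the pipeline and the work keeps running after the browser goes away; the task names and view are illustrative, not the asker's actual code:

        # tasks.py -- a sketch; task bodies stand in for the three views' logic
        from celery import chain
        from celery.task import task

        @task
        def call1(data):
            # ... the work the ajax_call_1 view currently does ...
            return data

        @task
        def call2(call1_result):
            # receives call1's return value as its first argument
            return call1_result

        @task
        def call3(call2_result):
            return call2_result

        # views.py -- a single AJAX call kicks off the whole pipeline and
        # returns at once; closing the browser no longer interrupts the work
        from django.http import HttpResponse

        def start_pipeline(request):
            result = chain(call1.s(request.GET.get('something')),
                           call2.s(),
                           call3.s()).apply_async()
            return HttpResponse(result.id)  # poll this id later if desired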

    Read the article

  • Where is the "ListViewItemPlaceholderBackgroundThemeBrush" located?

    - by Dimi Toulakis
    I have a problem understanding one style definition in Windows 8 metro apps. When you create a metro style application with VS, there is also a folder named Common created. Inside this folder there is file called StandardStyles.xaml Now the following snippet is from this file: <!-- Grid-appropriate 250 pixel square item template as seen in the GroupedItemsPage and ItemsPage --> <DataTemplate x:Key="Standard250x250ItemTemplate"> <Grid HorizontalAlignment="Left" Width="250" Height="250"> <Border Background="{StaticResource ListViewItemPlaceholderBackgroundThemeBrush}"> <Image Source="{Binding Image}" Stretch="UniformToFill"/> </Border> <StackPanel VerticalAlignment="Bottom" Background="{StaticResource ListViewItemOverlayBackgroundThemeBrush}"> <TextBlock Text="{Binding Title}" Foreground="{StaticResource ListViewItemOverlayForegroundThemeBrush}" Style="{StaticResource TitleTextStyle}" Height="60" Margin="15,0,15,0"/> <TextBlock Text="{Binding Subtitle}" Foreground="{StaticResource ListViewItemOverlaySecondaryForegroundThemeBrush}" Style="{StaticResource CaptionTextStyle}" TextWrapping="NoWrap" Margin="15,0,15,10"/> </StackPanel> </Grid> </DataTemplate> What I do not understand here is the static resource definition, e.g. for the Border Background="{StaticResource ListViewItemPlaceholderBackgroundThemeBrush}" It is not about how you work with templates and binding and resources. Where is this ListViewItemPlaceholderBackgroundThemeBrush located? Many thanks for your help. Dimi

    Read the article

  • Design: How to declare a specialized memory handler class

    - by Michael Dorgan
    On an embedded-type system, I have created a small object allocator that piggybacks on top of a standard memory allocation system. The allocator is a boost::simple_segregated_storage<> class, and it does exactly what I need: O(1) alloc/dealloc time on small objects, at the cost of a touch of internal fragmentation. My question is how best to declare it. Right now, it's declared with static scope in our memory code module, which is probably fine, but it feels a bit exposed there and is also now linked to that module forever. Normally, I would declare it as a monostate or a singleton, but those use the dynamic memory allocator (which is where this lives). Furthermore, our dynamic memory allocator is initialized and used before static object initialization occurs on our system (as, again, the memory manager is pretty much the most fundamental component of an engine). To get around this catch-22, I added an extra 'does the small object allocator exist yet?' check, which must now run on every small-object allocation. In the scheme of things, this is nearly negligible, but it still bothers me. So the question is: is there a better way to declare this portion of the memory manager that helps decouple it from the memory module, and perhaps avoids that extra isinitialized() if statement? If the method uses dynamic memory, please explain how to get around the lack of initialization of the small object portion of the manager.

    Read the article
