Search Results

Search found 2806 results on 113 pages for 'notes'.

Page 85/113

  • Django 1.1 model field value preprocessing before returning

    - by Satoru.Logic
    Hi, all. I have a model class like this: class Note(models.Model): author = models.ForeignKey(User, related_name='notes') content = NoteContentField(max_length=256) NoteContentField is a custom subclass of CharField that overrides the to_python method for the purpose of doing some twitter-text conversion processing. class NoteContentField(models.CharField): __metaclass__ = models.SubfieldBase def to_python(self, value): value = super(NoteContentField, self).to_python(value) from ..utils import linkify return mark_safe(linkify(value)) However, this doesn't work. When I save a Note object like this: note = Note(author=request.user, content=form.cleaned_data['content']) note.save() The converted value is saved into the database, which is not what I want to see. What I'm trying to do is to save the raw content into the database, and only make the conversion when the content attribute is later accessed. Would you please tell me what's wrong with this? Thanks to Pierre and Daniel, I have figured out what's wrong. I thought the text-conversion code should be in either to_python or get_db_prep_value, and that's wrong. I should override both of them, making to_python do the conversion and get_db_prep_value return the unconverted value: from ..utils import linkify class NoteContentField(models.CharField): __metaclass__ = models.SubfieldBase def to_python(self, value): self._raw_value = super(NoteContentField, self).to_python(value) return mark_safe(linkify(self._raw_value)) def get_db_prep_value(self, value): return self._raw_value I wonder if there is a better way to implement this?
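
    A minimal sketch of the "convert only on access" idea, keeping the field a plain CharField so the raw text is what gets stored and exposing the linkified form as a model property instead. It assumes linkify lives in utils as in the question; the property name content_html is illustrative. (Stashing self._raw_value on the field, as above, is fragile because a single field instance is shared by every instance of the model.)

      from django.contrib.auth.models import User
      from django.db import models
      from django.utils.safestring import mark_safe

      from ..utils import linkify  # same helper and path as in the question

      class Note(models.Model):
          author = models.ForeignKey(User, related_name='notes')
          content = models.CharField(max_length=256)  # raw text, stored exactly as typed

          @property
          def content_html(self):
              # conversion happens only when the attribute is read, never on save
              return mark_safe(linkify(self.content))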

    Read the article

  • Daylight Savings Handling in DateDiff() in MS Access?

    - by PowerUser
    I am fully aware of DateDiff()'s inability to handle daylight savings issues. Since I often use it to compare the number of hours or days between 2 datetimes several months apart, I need to write up a solution to handle DST. This is what I came up with, a function that first subtracts 60 minutes from a datetime value if it falls within the date ranges specified in a local table (LU_DST). Thus, the usage would be: datediff("n",Conv_DST_to_Local([date1]),Conv_DST_to_Local([date2])) My question is: Is there a better way to handle this? I'm going to make a wild guess that I'm not the first person with this question. This seems like the kind of thing that should have been added to one of the core reference libraries. Is there a way for me to access my system clock to ask it if DST was in effect at a certain date & time? Function Conv_DST_to_Local(X As Date) As Date Dim rst As DAO.Recordset Set rst = CurrentDb.OpenRecordset("LU_DST") Conv_DST_to_Local = X While rst.EOF = False If X > rst.Fields(0) And X < rst.Fields(1) Then Conv_DST_to_Local = DateAdd("n", -60, X) rst.MoveNext Wend End Function Notes I have visited and imported the BAS file of http://www.cpearson.com/excel/TimeZoneAndDaylightTime.aspx. I spent at least an hour by now reading through it and, while it may do its job well, I can't figure out how to modify it to my needs. But if you have an answer using his data structures, I'll take a look. Timezones are not an issue since this is all local time.
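
    Not an Access answer, but a sketch of the underlying idea the question is reaching for: the operating system's time zone database can be asked whether DST was in effect at a given local date and time. Shown here in Python purely as an illustration; the zone name is an assumption, since the question treats everything as local time.

      from datetime import datetime
      from zoneinfo import ZoneInfo  # Python 3.9+

      def dst_in_effect(dt, tz="America/New_York"):
          # dst() returns a non-zero offset when daylight saving time applied at dt
          return bool(dt.replace(tzinfo=ZoneInfo(tz)).dst())

      print(dst_in_effect(datetime(2008, 7, 1, 12, 0)))  # True in US Eastern
      print(dst_in_effect(datetime(2008, 1, 1, 12, 0)))  # False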

    Read the article

  • Linux 2.6.31 Scheduler and Multithreaded Jobs

    - by dsimcha
    I run massively parallel scientific computing jobs on a shared Linux computer with 24 cores. Most of the time my jobs are capable of scaling to 24 cores when nothing else is running on this computer. However, it seems like when even one single-threaded job that isn't mine is running, my 24-thread jobs (which I set for high nice values) only manage to get ~1800% CPU (using Linux notation). Meanwhile, about 500% of the CPU cycles (again, using Linux notation) are idle. Can anyone explain this behavior and what I can do about it to get all of the 23 cores that aren't being used by someone else? Notes: In case it's relevant, I have observed this on slightly different kernel versions, though I can't remember which off the top of my head. The CPU architecture is x64. Is it at all possible that the fact that my 24-core jobs are 32-bit and the other jobs I'm competing w/ are 64-bit is relevant? Edit: One thing I just noticed is that going up to 30 threads seems to alleviate the problem to some degree. It gets me up to ~2100% CPU.

    Read the article

  • MySql: Problem when using a temporary table

    - by Alex
    Hi, I'm trying to use a temporary table to store some values I need for a query. The reason for using a temporary table is that I don't want to store the data permanently, so different users can modify it at the same time. That data is just stored for a second, so I think a temporary table is the best approach for this. The thing is that it seems that the way I'm trying to use it is not right (the query works if I use a permanent one). This is an example of the query: CREATE TEMPORARY TABLE SearchMatches (PatternID int not null primary key, Matches int not null) INSERT INTO SearchMatches (PatternID, Matches) VALUES ('12605','1'),('12503','1'),('12587','2'),('12456','1'), ('12457','2'),('12486','2'),('12704','1'),('12686','1'), ('12531','2'),('12549','1'),('12604','1'),('12504','1'), ('12586','1'),('12548','1'),('12530','1'),('12687','2'), ('12485','1'),('12705','1') SELECT pat.id, signatures.signature, products.product, versions.version, builds.build, pat.log_file, sig_types.sig_type, pat.notes, pat.kb FROM patterns AS pat INNER JOIN signatures ON pat.signature = signatures.id INNER JOIN products ON pat.product = products.id INNER JOIN versions ON pat.version = versions.id INNER JOIN builds ON pat.build = builds.id INNER JOIN sig_types ON pat.sig_type = sig_types.id, SearchMatches AS sm INNER JOIN patterns ON patterns.id = sm.PatternID WHERE sm.Matches <> 0 ORDER BY sm.Matches DESC, products.product, versions.version, builds.build LIMIT 0, 50 Any suggestions? Thanks.
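
    Two things worth checking, offered as guesses rather than a confirmed fix: a temporary table only exists on the connection that created it, so all three statements must run on the same connection; and the SELECT joins the patterns table twice (once as pat via the comma join, once against SearchMatches) without relating pat to sm. A sketch of the join written as a single chain, using the names from the question:

      SELECT pat.id, signatures.signature, products.product, versions.version,
             builds.build, pat.log_file, sig_types.sig_type, pat.notes, pat.kb
      FROM SearchMatches AS sm
      INNER JOIN patterns  AS pat ON pat.id = sm.PatternID
      INNER JOIN signatures ON pat.signature = signatures.id
      INNER JOIN products   ON pat.product = products.id
      INNER JOIN versions   ON pat.version = versions.id
      INNER JOIN builds     ON pat.build = builds.id
      INNER JOIN sig_types  ON pat.sig_type = sig_types.id
      WHERE sm.Matches <> 0
      ORDER BY sm.Matches DESC, products.product, versions.version, builds.build
      LIMIT 0, 50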

    Read the article

  • HttpWebRequest Timeouts After Ten Consecutive Requests

    - by Bob Mc
    I'm writing a web crawler for a specific site. The application is a VB.Net Windows Forms application that is not using multiple threads - each web request is consecutive. However, after ten successful page retrievals every successive request times out. I have reviewed the similar questions already posted here on SO, and have implemented the recommended techniques into my GetPage routine, shown below: Public Function GetPage(ByVal url As String) As String Dim result As String = String.Empty Dim uri As New Uri(url) Dim sp As ServicePoint = ServicePointManager.FindServicePoint(uri) sp.ConnectionLimit = 100 Dim request As HttpWebRequest = WebRequest.Create(uri) request.KeepAlive = False request.Timeout = 15000 Try Using response As HttpWebResponse = DirectCast(request.GetResponse, HttpWebResponse) Using dataStream As Stream = response.GetResponseStream() Using reader As New StreamReader(dataStream) If response.StatusCode <> HttpStatusCode.OK Then Throw New Exception("Got response status code: " + response.StatusCode) End If result = reader.ReadToEnd() End Using End Using response.Close() End Using Catch ex As Exception Dim msg As String = "Error reading page """ & url & """. " & ex.Message Logger.LogMessage(msg, LogOutputLevel.Diagnostics) End Try Return result End Function Have I missed something? Am I not closing or disposing of an object that should be? It seems strange that it always happens after ten consecutive requests. Notes: In the constructor for the class in which this method resides I have the following: ServicePointManager.DefaultConnectionLimit = 100 If I set KeepAlive to true, the timeouts begin after five requests. All the requests are for pages in the same domain. EDIT I added a delay between each web request of between two and seven seconds so that I do not appear to be "hammering" the site or attempting a DOS attack. However, the problem still occurs.

    Read the article

  • Installing mongrel service on Windows 2008

    - by akirekadu
    We use InstallAnywhere to install our product. One of the components that it needs to install is mongrel. IA invokes the following command line during installation: mongrel_rails service::install -N service-1 -D "Service 1" -c "C:\app_dir\\rails\rails_apps\service-1" -p 19000 -e production Apparently, under the hood, "sc create..." is used. The installation works great on Windows 2003. On Windows 2008, though, this operation requires elevated privileges. When I log in as the local administrator (i.e. the 'local-machine\administrator' user), the installation works just fine. However, when I log in as a domain user that is part of the local administrators group, the service fails to install with the error "access is denied". How can I make it possible to install the product without having to log in as the local administrator? Thanks! A couple of notes I would like to add: One solution I tried is to execute the installer as administrator. The service does get installed. However, it creates another problem. An embedded 3rd party product and its files get installed with admin-only rights. So we do need to run the installer as the logged-in user.

    Read the article

  • My service does not start on Windows 2008 (it works on Windows 2003)

    - by akirekadu
    When we install our product on Windows 2008 SP2, a couple of services fail to start. After trying different things, we figured out that these services were able to start when "Log on as" is set to "Local system account". This service does need to run as a specific user because it requires access to secure resources. The service did run just fine under this special user account under Windows 2003. I am thinking the problem is related to UAC (User Account Control). In interactive mode one can elevate permissions by answering the security dialog box. How do I do the same for a service? How do I configure the service so it runs with the necessary permissions? Thanks! A couple of notes I would like to add: One solution I tried is to execute the installer as administrator. The service does get installed. However, it creates another problem. An embedded 3rd party product and its files get installed with admin-only rights. So we do need to run the installer as the logged-in user.

    Read the article

  • What Happens to Commit Logs on a Branch After Merging?

    - by Levi Hackwith
    Scenario: A programmer creates a branch for project 'foo' called 'my_foo' at revision 5. The programmer makes multiple changes to multiple files as he works on the 'my_foo' feature. At the end of each major step, say adding several new functions to a class, the programmer does an svn commit on the appropriate files, therefore committing them to the branch. After several weeks and many commits later (each commit having a commit log describing what he did), the programmer merges the branch back into the trunk: #Assume the following is being done from inside a working copy of the trunk: svn merge -r 5:15 file:///path/to/repo/branches/my_foo Huzzah! He's merged all his changes back into trunk! There's much rejoicing and drinking of Mountain Dew. Now let's say another programmer comes along a week later and updates their working copy from revision 5 to revision 15. "Wow", they say. "I wonder what's changed since revision 5". The programmer then does an svn log on their working copy and they get something like this: ------------------------------------------------------------------------ r15 | programmer1 | 2010-03-20 21:27:04 -0400 (Sat, 20 Mar 2010) | 1 line Merging Version 2.0 Changes into trunk ------------------------------------------------------------------------ r5 | programmer2 | 2010-02-15 10:59:55 -0500 (Mon, 15 Feb 2010) | 1 line Added assets/images/tumblr_icon.png to trunk What the heck happened to all the notes that the other programmer put in with all of his commits in his branch? Do those not get pulled over during a merge? Am I crazy or just forgetting something?
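
    A hedged note on versions: the per-revision branch messages are not copied into the trunk revision itself, but with Subversion 1.5+ mergeinfo recorded at merge time, the log command can be asked to follow merge history and list them, e.g. from a trunk working copy:

      svn log -g -r 15    # -g / --use-merge-history also shows the revisions
                          # that were merged in from branches/my_foo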

    Read the article

  • WCF Data Services - neither .Expand or .LoadProperty seems to do what I need

    - by TomK
    I am building a school management app where they track student tardiness and absences. I've got three entities to help me in this. A Students entity (first name, last name, ID, etc.); a SystemAbsenceTypes entity with SystemAbsenceTypeID values for Late, Absent-with-Reason, Absent-without-Reason; and a cross-reference table called StudentAbsences (matching the student IDs with the absence-type ID, plus a date, and a Notes field). What I want to do is query my entities for a given student, and then add up the number of each kind of Absence, for a given date range. I prepare my currentStudent object without a problem, then I do this... Me.Data.LoadProperty(currentStudent, "StudentAbsences") 'Loads the cross-ref data lblDaysLate.Text = (From ab In currentStudent.StudentAbsences Where ab.SystemAbsenceTypes.SystemAbsenceTypeID = Common.enuStudentAbsenceTypes.Late).Count.ToString ...and this second line fails, complaining it has no value for an object. I presume the problem is that while it DOES see that there are (let's say) four absences for the currentStudent (ie, currentStudent.StudentAbsences.Count = 4) -- it can't yet "peer into" each one of the absences to look at its type. How do I use .Expand or .LoadProperty to make this happen? I tried fiddling with .LoadProperty but it doesn't take a two-level syntax like so... Data.LoadProperty(currentStudent,"StudentAbsences.SystemAbsenceTypeID") or the like. Is there some other technique?
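
    A sketch of the eager-loading route, assuming Data is the DataServiceContext and Students is the entity set name (the key property name is also a guess): Expand takes a slash-separated path, so both the absences and their types come across with the student, and the second-level navigation no longer needs a LoadProperty call.

      Dim currentStudent = (From s In Me.Data.Students.Expand("StudentAbsences/SystemAbsenceTypes")
                            Where s.StudentID = studentId
                            Select s).FirstOrDefault()

      lblDaysLate.Text = (From ab In currentStudent.StudentAbsences
                          Where ab.SystemAbsenceTypes.SystemAbsenceTypeID = Common.enuStudentAbsenceTypes.Late).Count.ToString()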

    Read the article

  • CSS - 2 divs side-by-side, one floated - how do I make the other fit next to it without overlapping?

    - by Artem Russakovskii
    I have had the following problem for a while and I am really not sure how to solve it. The problem can currently be observed here: http://www.androidpolice.com/2009/11/16/the-not-so-good-the-bad-and-the-ugly-my-list-of-20-problems-with-htc-hero/ - feel free to use this for Firebugging. There are 2 notions here: a table of contents (toc) and notes. A note usually takes 100% of the post width and everything is fine. However, when a note appears next to a toc, the toc starts overlapping and covering the note (I set z-index:1 on the toc because otherwise the note covered it, which was even worse). It's interesting to point out that the text of the note doesn't get covered by the toc - only the note div itself does. In IE7, it's even worse - the note div jumps down to under the toc and leaves a lot of empty space (2nd screenshot). So, how can I solve this? The ideal solution would have the note div occupy 100% of the visible space - i.e. it would resize itself to fit right next to the toc when needed. Any pointers appreciated. Thank you! (Screenshots of the overlap, and of the IE7 behaviour, accompanied the original post.)
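
    A sketch of one common fix: giving the non-floated note div overflow: hidden makes it establish its own block formatting context, so it sits beside the float and shrinks to the remaining width instead of sliding underneath it. The selectors and the float side are assumptions about the markup, not taken from the page.

      .toc  { float: right; width: 250px; }
      .note { overflow: hidden; }  /* fills only the space remaining next to the float */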

    Read the article

  • XSLT string replace

    - by aximili
    I don't really know XSL, but I need to fix this code; I have reduced it to make it simpler. I am getting the error "Invalid XSLT/XPath function" on this line: <xsl:variable name="text" select="replace($text,'a','b')"/> This is the XSL: <?xml version="1.0" encoding="ISO-8859-1"?> <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:inm="http://www.inmagic.com/webpublisher/query" version='1.0'> <xsl:output method="text" encoding="UTF-8"/> <xsl:preserve-space elements="*"/> <xsl:template match="text()"></xsl:template> <xsl:template match="mos"> <xsl:apply-templates/> <xsl:for-each select="mosObj"> 'Notes or subject' <xsl:call-template name="rem-html"><xsl:with-param name="text" select="SBS_ABSTRACT"/></xsl:call-template> </xsl:for-each> </xsl:template> <xsl:template name="rem-html"> <xsl:param name="text"/> <xsl:variable name="text" select="replace($text,'a','b')"/> </xsl:template> </xsl:stylesheet> Can anyone tell me what's wrong with it? Thanks in advance.
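
    The stylesheet declares version='1.0', and replace() is an XPath 2.0 function, which is why a 1.0 processor reports "Invalid XSLT/XPath function". For swapping single characters, translate() is available in XSLT 1.0; replacing whole substrings in 1.0 needs a recursive named template instead. A minimal sketch (note the new variable name, since re-declaring $text alongside the parameter of the same name is itself rejected by some processors):

      <xsl:variable name="converted" select="translate($text, 'a', 'b')"/>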

    Read the article

  • How do I relate tables with different foreign key names in Kohana ORM?

    - by Matt H
    I'm building a Kohana application to manage SIP lines in Asterisk. I want to use ORM, but I'm wondering how to relate certain tables that are already well established. For example, the table sip_lines looks like this:

      +--------------------+------------------+------+-----+-------------------+-----------------------------+
      | Field              | Type             | Null | Key | Default           | Extra                       |
      +--------------------+------------------+------+-----+-------------------+-----------------------------+
      | id                 | int(10) unsigned | NO   | PRI | NULL              | auto_increment              |
      | sip_name           | varchar(80)      | NO   | UNI | NULL              |                             |
      | displayname        | varchar(48)      | NO   |     | NULL              |                             |
      | line_num           | varchar(10)      | NO   | MUL | NULL              |                             |
      | model              | varchar(12)      | NO   | MUL | NULL              |                             |
      | mac                | varchar(16)      | NO   | MUL | NULL              |                             |
      | areacode           | varchar(6)       | NO   | MUL | NULL              |                             |
      | per_line_astpp_acc | tinyint(1)       | NO   |     | 0                 |                             |
      | play_warning       | tinyint(1)       | NO   |     | 0                 |                             |
      | callout_disabled   | tinyint(1)       | NO   |     | 0                 |                             |
      | notes              | varchar(80)      | NO   |     | NULL              |                             |
      | last_update        | timestamp        | NO   |     | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
      +--------------------+------------------+------+-----+-------------------+-----------------------------+

    sip_buddies is this:

      +--------+-------------+------+-----+---------+----------------+
      | Field  | Type        | Null | Key | Default | Extra          |
      +--------+-------------+------+-----+---------+----------------+
      | id     | int(11)     | NO   | PRI | NULL    | auto_increment |
      | name   | varchar(80) | NO   | UNI |         |                |
      | host   | varchar(31) | NO   |     |         |                |
      | lastms | int(11)     | NO   |     | 0       |                |
      *** snip ***
      +--------+-------------+------+-----+---------+----------------+

    The two tables are actually related as sip_lines.sip_name = sip_buddies.name. How do I relate them in Kohana ORM? This wouldn't be quite right, would it?

      <?php defined('SYSPATH') or die('No direct script access.');
      /* A model for all the account information */
      class Sip_Line_Model extends ORM {
          protected $has_one = array("sip_buddies");
      }
      ?>

    EDIT: Actually, it'd be fair to say that these tables are not properly related with foreign keys! Doh.
    EDIT: It looks like Kohana ORM is not that flexible. ORM is probably not the way to go; it works best for completely new projects where the data model can be altered. The reason is that the key names must follow a specific naming convention, or else they won't relate in Kohana.

    Read the article

  • Data Access Layer, Best Practices

    - by labratmatt
    I'm looking for input on the best way to refactor the data access layer (DAL) in my PHP-based web app. I follow an MVC pattern: PHP/HTML/CSS/etc. views on the front end, PHP controllers/services in the middle, and a PHP DAL sitting on top of a relational database in the model. Pretty standard stuff. Things are working fine, but my DAL is getting large (codesmell?) and becoming a bit unwieldy. My DAL contains almost all of the logic to interface with my database and is full of functions that look like this: function getUser($user_id) { /* $pdo is a PDO connection created elsewhere (assumed) */ $statement = "select id, name from users where user_id = :user_id"; $stmt = $pdo->prepare($statement); $stmt->execute(array(':user_id' => $user_id)); return $stmt->fetchAll(PDO::FETCH_ASSOC); } Notes: (1) the logic in my controller only interacts with the model using functions like the above in the DAL; (2) I am not using a framework (I'm of the opinion that PHP is a templating language and there's no need to inject complexity via a framework); (3) I generally use PHP as a procedural language and tend to shy away from its OOP approach (I enjoy OOP development but prefer to keep that complexity out of PHP). What approaches have you taken when your DAL has reached this point? Do I have a fundamental design problem? Do I simply need to chop my DAL into a number of smaller files (logically divide it)? Thanks.
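
    One way to keep the DAL from sprawling, sketched here in the same procedural style: push the prepare/execute/fetch boilerplate into one small helper, so each DAL function shrinks to a query string and its parameters. Function and variable names are illustrative, not from the original code.

      function db_select(PDO $pdo, $sql, array $params = array()) {
          // one place that prepares, binds and fetches for every read query
          $stmt = $pdo->prepare($sql);
          $stmt->execute($params);
          return $stmt->fetchAll(PDO::FETCH_ASSOC);
      }

      function getUser(PDO $pdo, $user_id) {
          return db_select($pdo, "select id, name from users where user_id = :user_id",
                           array(':user_id' => $user_id));
      }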

    Read the article

  • Best approach, Dynamic OpenXML in T-SQL

    - by Martin Ongtangco
    Hello, I'm storing XML values to an entry in my database. Originally, I extract the xml value into my business logic and then fill the XML data into a DataSet. I want to improve this process by loading the XML right in T-SQL, instead of getting the XML as a string and then converting it in the BL. My issue is this: each xml entry is dynamic, meaning it can be any column created by the user. I tried using this approach, but it's giving me an error: CREATE PROCEDURE spXMLtoDataSet @id uniqueidentifier, @columns varchar(max) AS BEGIN -- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. SET NOCOUNT ON; DECLARE @name varchar(300); DECLARE @i int; DECLARE @xmlData xml; (SELECT @xmlData = data, @name = name FROM XmlTABLES WHERE (tableID = ISNULL(@id, tableID))); EXEC sp_xml_preparedocument @i OUTPUT, @xmlData DECLARE @tag varchar(1000); SET @tag = '/NewDataSet/' + @name; DECLARE @statement varchar(max) SET @statement = 'SELECT * FROM OpenXML(@i, @tag, 2) WITH (' + @columns + ')'; EXEC (@statement); EXEC sp_xml_removedocument @i END where I pass a dynamically written @columns. For example: spXMLtoDataSet 'bda32dd7-0439-4f97-bc96-50cdacbb1518', 'ID int, TypeOfAccident int, Major bit, Number_of_Persons int, Notes varchar(max)' but it keeps throwing this exception: Msg 137, Level 15, State 2, Line 1 Must declare the scalar variable "@i". Msg 319, Level 15, State 1, Line 1 Incorrect syntax near the keyword 'with'. If this statement is a common table expression or an xmlnamespaces clause, the previous statement must be terminated with a semicolon.
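
    A sketch of one likely fix: EXEC(@statement) runs in its own batch, so @i and @tag are not visible inside it, which is exactly the "Must declare the scalar variable @i" error. Passing them through sp_executesql keeps them in scope; only the column list still has to be concatenated, so @columns must come from a trusted source. Note that sp_executesql needs an nvarchar statement.

      DECLARE @stmt nvarchar(max);
      SET @stmt = N'SELECT * FROM OPENXML(@i, @tag, 2) WITH (' + @columns + N')';
      EXEC sp_executesql @stmt,
                         N'@i int, @tag varchar(1000)',
                         @i = @i, @tag = @tag;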

    Read the article

  • Is NSPasteboard thread-safe?

    - by Joe
    Is it safe to write data to an NSPasteboard object from a background thread? I can't seem to find a definitive answer anywhere. I think the assumption is that the data will be written to the pasteboard before the drag begins. Background: I have an application that is fetching data from Evernote. When the application first loads, it gets the meta data for each note, but not the note content. The note stubs are then listed in an outline view. When the user starts to drag a note, the notes are passed to the background thread that handles getting the note content from Evernote. Having the main thread block until the data is gotten results in a significant delay and a poor user experience, so I have the [outlineView:writeItems:toPasteboard:] function return YES while the background thread processes the data and invokes the main thread to write the data to the pasteboard object. If the note content gets transferred before the user drops the note somewhere, everything works perfectly. If the user drops the note somewhere before the data has been processed... well, everything blocks forever. Is it safe to just have the background thread write the data to the pasteboard?

    Read the article

  • Is it possible to implement an infinite IEnumerable without using yield with only C# code?

    - by sinelaw
    Edit: Apparently off topic...moving to Programmers.StackExchange.com. This isn't a practical problem, it's more of a riddle. Problem I'm curious to know if there's a way to implement something equivalent to the following, but without using yield: IEnumerable<T> Infinite<T>() { while (true) { yield return default(T); } } Rules You can't use the yield keyword Use only C# itself directly - no IL code, no constructing dynamic assemblies etc. You can only use the basic .NET lib (only mscorlib.dll, System.Core.dll? not sure what else to include). However if you find a solution with some of the other .NET assemblies (WPF?!), I'm also interested. Don't implement IEnumerable or IEnumerator. Notes The closest I've come yet: IEnumerable<int> infinite = null; infinite = new int[1].SelectMany(x => new int[1].Concat(infinite)); This is "correct" but hits a StackOverflowException after 14399 iterations through the enumerable (not quite infinite). I'm thinking there might be no way to do this due to the CLR's lack of tail recursion optimization. A proof would be nice :)

    Read the article

  • HOWTO - Compare a date string to datetime in SQL Server?

    - by Guy
    In SQL Server I have a DATETIME column which includes a time element. Example: '14 AUG 2008 14:23:019' What is the best method to only select the records for a particular day, ignoring the time part? Example: (Not safe, as it does not match the time part and returns no rows) DECLARE @p_date DATETIME SET @p_date = CONVERT( DATETIME, '14 AUG 2008', 106 ) SELECT * FROM table1 WHERE column_datetime = @p_date Note: Given this site is also about jotting down notes and techniques you pick up and then forget, I'm going to post my own answer to this question, as DATETIME stuff in MSSQL is probably the topic I look up most in SQLBOL. Update: Clarified the example to be more specific. Edit: Sorry, but I've had to down-mod WRONG answers (answers that return wrong results). @Jorrit: WHERE (date>'20080813' AND date<'20080815') will return the 13th and the 14th. @wearejimbo: Close, but no cigar! Badge awarded to you. You missed records written at 14/08/2008 23:59:001 to 23:59:999 (i.e. less than 1 second before midnight).
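
    A sketch of the usual half-open range comparison: it catches every time on 14 Aug 2008, including the last milliseconds before midnight, and, unlike wrapping the column in a conversion function, it can still use an index on column_datetime.

      SELECT *
      FROM   table1
      WHERE  column_datetime >= '20080814'
        AND  column_datetime <  '20080815'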

    Read the article

  • Standardizing a Release/Tools group on a specific language

    - by grahzny
    I'm part of a six-member build and release team for an embedded software company. We also support a lot of developer tools, such as Atlassian's Fisheye, Jira, etc., Perforce, Bugzilla, AnthillPro, and a couple of homebrew tools (like my Django release notes generator). Most of the time, our team just writes little plugins for larger apps (ex: customize workflows in Anthill), long-term utility scripts (package up a release for QA), or things like Perforce triggers (don't let people check into a specific branch unless their change description includes a bug number; authenticate against Active Directory instead of Perforce's internal passwords). That's about the scale of our problems, although we sometimes tackle something slightly more sizable. My boss, who is reasonably technical, has asked us to standardize on one or two languages so we can more easily substitute for each other. He's advocating bash scripts and Perl, due to their universality and simplicity. I can see his point--we mostly do "glue", so why not use "glue" languages rather than saddle ourselves with something designed for much larger projects? Since some of the tools we work with are Java-based, we do need to use something that speaks JVM sometimes. (The path of least resistance for these projects is BeanShell and Groovy.) I feel a tremendous itch toward language advocacy, but I'm trying to avoid saying "We should use Python 'cause I like it and Perl is gross." Instead, I'm trying to come up with a good approach to defining our problem set: what problems do we solve with scripts? Would we benefit from a library of common functions by our team, or are most of our projects more isolated? What is it reasonable to expect my co-workers to learn? What languages give us the most ease of development and ease of modification? Can you folks suggest some useful ways to approach this problem, both for my own thinking process and to help me facilitate some brainstorming among my coworkers?

    Read the article

  • How to query two tables based on whether or not record exists in a third?

    - by Katherine
    I have three tables, the first two fairly standard: 1) PRODUCTS table: pid pname, etc 2) CART table: cart_id cart_pid cart_orderid etc The third is designed to let people save products they buy and keep notes on them. 3) MYPRODUCTS table: myprod_id myprod_pid PRODUCTS.prod_id = CART.cart_prodid = MYPRODUCTS.myprod_pid When a user orders, they are presented with a list of products on their order, and can optionally add that product to myproducts. I am getting the info necessary for them to do this, with a query something like this for each order: SELECT cart.pid, products.pname, products.pid FROM products, cart WHERE products.pid = cart_prodid AND cart_orderid=orderid This is fine the first time they order. However, if they subsequently reorder a product which they have already added to myproducts, then it should NOT be possible for them to add it to myproducts again - basically instead of 'Add to MyProducts' they need to see 'View in MyProducts'. I am thinking I can separate the products using two queries: Products never added to MyProducts By somehow identifying whether the user has the product in MyProducts already, and if so excluding it from the query above. Products already in MyProducts By reversing the process above. I need some pointers on how to do this though.
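
    A hedged sketch of doing it in one query with a LEFT JOIN, so the absence of a MYPRODUCTS row drives the "Add" versus "View" state; the user column on MYPRODUCTS and the parameter placeholders are assumptions, since the question doesn't list how saved products are tied to a user.

      SELECT p.pid,
             p.pname,
             CASE WHEN mp.myprod_pid IS NULL THEN 'add' ELSE 'view' END AS myproducts_state
      FROM cart c
      INNER JOIN products p ON p.pid = c.cart_pid
      LEFT JOIN myproducts mp ON mp.myprod_pid = p.pid
                             AND mp.myprod_userid = :userid
      WHERE c.cart_orderid = :orderid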

    Read the article

  • Latex: Convert "Comment" into "Marginal Note"

    - by diegos
    Hi, using LyX I'm trying to convert the "comments" into "marginal notes". I tried several things but without luck. The best shot was like this: \makeatletter \@ifundefined{comment}{}{% \renewenvironment{comment}[1]% {\begingroup\marginpar{\bgroup#1\egroup}}% {\endgroup}} \makeatother or like this: \@ifundefined{comment}{}{% \renewenvironment{comment}% {\marginpar{}% {}}% But what I get is only the first character of the text converted. (A screenshot accompanied the original post, showing only the first character ending up in the marginal note.) I searched a lot trying to find how to solve this, but without luck. I found an explanation of what is happening, under the heading "Unexpected Output - Only one character is in the new font": "You thought you changed font over a selection of text, but only the first character has come out in the new font. You have most probably used a command instead of a declaration. The command should take the text as its argument. If you don't group the text, only the first character will be passed as the argument." What I don't know and wasn't able to find is how to group the text. Hope someone could help me :-) Many thanks. Best Regards, Diego (diegostex)
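
    A sketch of one way to do the grouping, assuming the environ package is available in the preamble: \NewEnviron/\RenewEnviron collect the whole environment body in \BODY, so the full text (not just the first character) reaches \marginpar. How this interacts with the definition LyX itself gives the comment environment is not verified here.

      \usepackage{environ}
      \makeatletter
      \@ifundefined{comment}%
        {\NewEnviron{comment}{\marginpar{\footnotesize\BODY}}}%
        {\RenewEnviron{comment}{\marginpar{\footnotesize\BODY}}}
      \makeatother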

    Read the article

  • Rendering LaTeX on third-party websites?

    - by A. Rex
    There are some sites on the web that render LaTeX into some more readable form, such as Wikipedia, some Wordpress blogs, and MathOverflow. They may use images, MathML, jsMath, or something like that. There are other sites on the web where LaTeX appears inline and is not rendered, such as the arXiv, various math forums, or my email. In fact, it is quite common to see an arXiv paper's abstract with raw LaTeX in it, e.g. this paper. Is there a plugin available for Firefox, or would it be possible to write one, that renders LaTeX within pages that do not provide a rendering mechanism themselves? (The LaTeX would be enclosed within dollar signs, e.g. $\pi$. See the arXiv link above.) Some notes: It may be impossible to render some of the code, because authors often copy-paste code directly from their source TeX files, which may contain things like "\cite{foo}" or undefined commands. These should be left alone. This question is a repost of a question from MathOverflow that was closed for not being related to math. There is one answer there, which is helpful, but perhaps Stack Overflow can provide better answers. I program a lot, but Javascript is not my specialty, so comments along the lines of "look at this library" are not particularly helpful to me (but may be to others).

    Read the article

  • What issues to consider when rolling your own data-backend for Silverlight / AJAX on non-ASP.NET ser

    - by Edward Tanguay
    I have read-only Silverlight and AJAX apps which read static text and XML files from a PHP/Apache server, which works very nicely with features such as asynchronous loading, lazy-loading only what I need for each page, loading in the background, developed a little query language to get a PHP script to create custom XML files etc. it's pragmatic read-only REST, and all works fast and fine for read-only sites. Now I want to also add the ability to write data from these apps to a database on the same PHP/Apache server. For those of you who have built similar data-access layers, what do I need to consider while building this, especially regarding security so that not just any client can write and alter my database, e.g.: check HTTP_USER_AGENT for security check REMOTE_ADDR for security require a special code for security, perhaps a list of TAN codes (such as banks use for online transactions) each which can only be used once, both the client and server have these I wonder if there is some kind of standard REST query I should lean on for e.g. building SQL-like statements in the URL parameters, e.g. http://www.thedatalayersite.com/query?insertinto=customers&... Any thoughts, notes from experience, ideas, gotchas, especially ideas on tightening down security in this endeavor would be helpful.

    Read the article

  • How can I test that my hash function is good in terms of max-load?

    - by philcolbourn
    I have read through various papers on the 'Balls and Bins' problem and it seems that if a hash function is working right (ie. it is effectively a random distribution) then the following should/must be true if I hash n values into a hash table with n slots (or bins): Probability that a bin is empty, for large n is 1/e. Expected number of empty bins is n/e. Probability that a bin has k collisions is <= 1/k!. Probability that a bin has at least k collisions is <= (e/k)**k. These look easy to check. But the max-load test (the maximum number of collisions with high probability) is usually stated vaguely. Most texts state that the maximum number of collisions in any bin is O( ln(n) / ln(ln(n)) ). Some say it is 3*ln(n) / ln(ln(n)). Other papers mix ln and log - usually without defining them, or state that log is log base e and then use ln elsewhere. Is ln the log to base e or 2 and is this max-load formula right and how big should n be to run a test? This lecture seems to cover it best, but I am no mathematician. http://pages.cs.wisc.edu/~shuchi/courses/787-F07/scribe-notes/lecture07.pdf BTW, with high probability seems to mean 1 - 1/n.
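
    In these bounds ln is the natural logarithm, although the Theta(ln n / ln ln n) form is essentially insensitive to the base, since changing base only rescales both logarithms. A sketch of an empirical check in Python: throw n keys into n bins using the hash under test and compare the fullest bin with 3*ln(n)/ln(ln(n)); the built-in hash over random 64-bit integers is only a stand-in for the function being tested.

      import math, random
      from collections import Counter

      def max_load(hash_fn, n, trials=20):
          worst = 0
          for _ in range(trials):
              # n random keys into n bins; record the fullest bin seen
              bins = Counter(hash_fn(random.getrandbits(64)) % n for _ in range(n))
              worst = max(worst, max(bins.values()))
          return worst

      n = 1 << 16
      bound = 3 * math.log(n) / math.log(math.log(n))
      print(max_load(hash, n), "vs bound", bound)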

    Read the article

  • Proving that the distance values extracted in Dijkstra's algorithm is non-decreasing?

    - by Gail
    I'm reviewing my old algorithms notes and have come across this proof. It was from an assignment I had and I got it correct, but I feel that the proof is certainly lacking. The question is to prove that the distance values taken from the priority queue in Dijkstra's algorithm form a non-decreasing sequence. My proof goes as follows: Proof by contradiction. First, assume that we pull a vertex 'i' from Q, and that the next vertex we pull is 'j'. When we pulled i, we had finalised its d-value and computed the shortest path from the start vertex, s, to i. Since we have positive edge weights, it is impossible for our d-values to shrink as we add vertices to our path. If after pulling i from Q, we pull j with a smaller d-value, we may not have a shortest path to i, since we may be able to reach i through j. However, we have already computed the shortest path to i. We did not check a possible path. We no longer have a guaranteed path. Contradiction.
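
    A sketch of how this step is usually tightened, assuming non-negative edge weights, phrased as an invariant rather than a contradiction:

      When $i$ is extracted it carries the minimum key in $Q$, so every $j$ still in $Q$
      satisfies $d(j) \ge d(i)$ at that moment. Any later relaxation of $j$ goes through
      some vertex $u$ extracted no earlier than $i$ (so, inductively, $d(u) \ge d(i)$),
      giving
      \[
        d(j) = d(u) + w(u, j) \ge d(u) \ge d(i),
      \]
      hence $j$'s key at its own extraction is still at least $d(i)$, and the extracted
      values are non-decreasing.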

    Read the article

  • Hybrid EAV/CR model via WCF (and statically-typed language)?

    - by Pat
    Background I'm working on the architecture for a cloud-based LOB application, using Silverlight for the client, WCF, ASP.NET/C# for server and SQL Server for storage. The data model requires some flexibility per user (ability to add custom properties and define validation rules for them, for example), and a hybrid EAV/CR persistence model on the server side will suit nicely. Problem I need an efficient and maintainable technology and approach to handle the transformation from the persisted EAV model to/from WCF (and similarly allow the client to bind to the resulting data - DataGrid is a key UI element)? Admission: I don't yet know enough about WCF to understand if it supports ExpandoObject directly, but I suspect it will. Options I started off looking at WCF RIA services, but quickly discovered they're heavily dependent upon both static type data and compile-time code generation. Neither of these appeal. The options I'm considering include: Using WCF RIA services and pass the data over the network directly in EAV form (i.e. Dictionary), and handle the binding issue purely on the client side (like this) Using a dynamic language (probably IronPython) to handle both ends of the communication, with plumbing to generate the necessary CLR type data on the client to allow binding, and transform to/from EAV form on the server (spam preventer stopped me from posting a URL here, I'll try it in a comment). Dynamic LINQ (CreateClass() and friends), although I'm way out of my depth there and don't know what the limitations on that approach might be yet. I'm interested in comments on these approaches as well as alternative approaches that might solve the problem. Other Notes The Silverlight client will not be the only consumer of the service, making me slightly uncomfortable with option #1 above. While the data model is flexible, it's not expected to be modified heavily. For argument's sake, we could assume that we might have 25 distinct data models active at a given time, with something like 10-20 unique data fields/rules each. Modifications to the data model will happen infrequently (typically when a new user is initially configured).

    Read the article
