Search Results

Search found 1052 results on 43 pages for 'jon romero'.

Page 24/43

  • Change the precision of all decimal columns in every table in the database

    - by Jon
    Hi all, I have a rather large database with a lot of decimal columns in a lot of tables. The customer has now changed their mind and wants all the decimal numbers to have a precision of 3 d.p. instead of the original 2 d.p. Is there any quick way of going through all the tables in the database and changing every decimal column from 2 d.p. to 3 d.p.? The db is on SQL Server 2005. Any help would be great.
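
    A minimal T-SQL sketch of the usual approach: generate the ALTER statements from INFORMATION_SCHEMA instead of editing tables by hand. Note that "3 d.p." is strictly the scale, so the sketch also bumps the precision by one to keep the integer part; the generated script should be reviewed before running, and columns referenced by indexes, constraints or computed columns need those dropped and recreated first.

        -- Generate (then review and run) one ALTER per decimal column currently at scale 2.
        SELECT 'ALTER TABLE [' + TABLE_SCHEMA + '].[' + TABLE_NAME + ']'
             + ' ALTER COLUMN [' + COLUMN_NAME + '] DECIMAL('
             + CAST(NUMERIC_PRECISION + 1 AS varchar(5)) + ', 3)'
             + CASE WHEN IS_NULLABLE = 'NO' THEN ' NOT NULL' ELSE ' NULL' END
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE DATA_TYPE = 'decimal'
          AND NUMERIC_SCALE = 2;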

    Read the article

  • Safari ignores input type="file" on server post

    - by Jon
    I have a real problem with a classic ASP page. The page allows the user to upload a document and save it to the database. The initial page posts to another ASP page which saves it to the db. This works in IE and Firefox; however, on Safari it fails. I've debugged the problem and it boils down to the fact that, of all the controls the server page has access to, only one control is missing. This happens to be:

        <input type="file" size="40" id="myfile" name="myfile" />

    So I'm wondering why Safari would decide not to give me access to this control (using ASP's Request("")), and why it works in FF and IE. I have some debug code which writes out all controls and it doesn't see this control. p.s. I hate Web development

    Read the article

  • Emulating Dynamic Dispatch in C++ based on Template Parameters

    - by Jon Purdy
    This is heavily simplified for the sake of the question. Say I have a hierarchy:

        struct Base {
            virtual int precision() const = 0;
        };

        template<int Precision>
        struct Derived : public Base {
            typedef typename Traits<Precision>::Type Type;
            Derived(Type data) : value(data) {}
            virtual int precision() const { return Precision; }
            Type value;
        };

    I want a function like:

        Base* function(const Base& a, const Base& b);

    where the specific type of the result of the function is the same type as whichever of a and b has the greater Precision; something like the following pseudocode:

        template<class T>
        T* operation(const T& a, const T& b) {
            return new T(a.value + b.value);
        }

        Base* function(const Base& a, const Base& b) {
            if (a.precision() > b.precision())
                return operation((A&)a, A(b.value));
            else if (a.precision() < b.precision())
                return operation(B(a.value), (B&)b);
            else
                return operation((A&)a, (A&)b);
        }

    where A and B are the specific types of a and b, respectively. I want function to operate independently of how many instantiations of Derived there are. I'd like to avoid a massive table of typeid() comparisons, though RTTI is fine in answers. Any ideas?
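
    A hedged sketch of one approach that keeps the dispatch inside the hierarchy itself: the virtual functions approx/fromDouble/addSamePrecision are illustrative additions (not part of the question's hierarchy), and the Traits stand-in is only there so the example compiles. It assumes values can round-trip through a common intermediate such as double, which may not hold for every Traits type.

        // Promote the lower-precision operand into the higher-precision type via a
        // virtual factory, then perform a single same-type operation.
        template<int Precision> struct Traits { typedef double Type; };

        struct Base {
            virtual ~Base() {}
            virtual int precision() const = 0;
            virtual double approx() const = 0;              // value via a common type
            virtual Base* fromDouble(double v) const = 0;   // build a node of *this* type
            virtual Base* addSamePrecision(const Base& other) const = 0;
        };

        template<int Precision>
        struct Derived : public Base {
            typedef typename Traits<Precision>::Type Type;
            explicit Derived(Type data) : value(data) {}
            virtual int precision() const { return Precision; }
            virtual double approx() const { return static_cast<double>(value); }
            virtual Base* fromDouble(double v) const {
                return new Derived(static_cast<Type>(v));
            }
            virtual Base* addSamePrecision(const Base& other) const {
                const Derived& rhs = static_cast<const Derived&>(other);
                return new Derived(static_cast<Type>(value + rhs.value));
            }
            Type value;
        };

        Base* function(const Base& a, const Base& b) {
            const Base& hi = (a.precision() >= b.precision()) ? a : b;
            const Base& lo = (&hi == &a) ? b : a;
            Base* promoted = hi.fromDouble(lo.approx());    // lift lo to hi's precision
            Base* result = hi.addSamePrecision(*promoted);  // single same-type operation
            delete promoted;
            return result;
        }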

    Read the article

  • Ret Failure with SDL using FASM on Win32

    - by Jon Purdy
    I'm using SDL with FASM, and have code that's minimally like the following:

        format ELF

        extrn _SDL_Init
        extrn _SDL_SetVideoMode
        extrn _SDL_Quit
        extrn _exit

        SDL_INIT_VIDEO equ 0x00000020

        section '.text'

        public _SDL_main
        _SDL_main:
            ccall _SDL_Init, SDL_INIT_VIDEO
            ccall _SDL_SetVideoMode, 640, 480, 32, 0
            ccall _SDL_Quit
            ccall _exit, 0    ; Success, or
            ret               ; failure.

    With the following quick-and-dirty makefile:

        SOURCES = main.asm
        OBJECTS = main.o
        TARGET = SDLASM.exe
        FASM = C:\fasm\fasm.exe

        release : $(OBJECTS)
            ld $(OBJECTS) -LC:/SDL/lib/ -lSDLmain -lSDL -LC:/MinGW/lib/ -lmingw32 -lcrtdll -o $(TARGET) --subsystem windows

        cleanrelease :
            del $(OBJECTS)

        %.o : %.asm
            $(FASM) $< $@

    Using exit() (or Windows' ExitProcess()) seems to be the only way to get this program to exit cleanly, even though I feel like I should be able to use retn/retf. When I just ret without calling exit(), the application does not terminate and needs to be killed. Could anyone shed some light on this? It only happens when I make the call to SDL_SetVideoMode().

    Read the article

  • GIT clone repo across local file system

    - by Jon
    Hi all, I am a complete noob when it comes to Git; I have just been taking my first steps over the last few days. I set up a repo on my laptop and pulled down the trunk from an SVN project (I had some issues with branches and haven't got them working), but all seems OK there. I now want to be able to pull or push between the laptop and my main desktop. The reason being: the laptop is handy on the train, as I spend 2 hours a day travelling and can get some good work done, but my main machine at home is great for development. So I want to be able to push/pull between the laptop and the main computer when I get home. I thought the simplest way of doing this would be to share the code folder across the LAN and do:

        git clone file://192.168.10.51/code

    Unfortunately this doesn't seem to be working for me. I open a git bash prompt and type the above command while in C:\code (the shared folder for both machines), and this is what I get back:

        Initialized empty Git repository in C:/code/code/.git/
        fatal: 'C:/Program Files (x86)/Git/code' does not appear to be a git repository
        fatal: The remote end hung up unexpectedly

    How can I share the repository between the two machines in the simplest way? There will be other locations that will be official storage points and places where the other devs and the CI server etc. will pull from; this is just so that I can work on the same repo across two machines. Thanks
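
    A hedged sketch of the usual fixes (paths are illustrative): with file:// URLs Git treats the text after file:// as host plus path, which is why it ends up looking under its own install directory; a plain path or a UNC-style path normally works on msysgit, though the exact syntax can vary by version.

        # clone straight from the shared folder using a plain path
        git clone /c/code code-clone

        # or reference the share by UNC path (note the doubled slashes)
        git clone //192.168.10.51/code code-clone

        # keep the share configured as a remote and pull from each machine;
        # pulling in both directions avoids pushing into a non-bare checkout
        git remote add desktop //192.168.10.51/code
        git pull desktop master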

    Read the article

  • Can parser combinators be made efficient?

    - by Jon Harrop
    Around 6 years ago, I benchmarked my own parser combinators in OCaml and found that they were ~5× slower than the parser generators on offer at the time. I recently revisited this subject and benchmarked Haskell's Parsec vs a simple hand-rolled precedence climbing parser written in F#, and was surprised to find the F# to be 25× faster than the Haskell. Here's the Haskell code I used to read a large mathematical expression from file, parse and evaluate it:

        import Control.Applicative
        import Text.Parsec hiding ((<|>))

        expr = chainl1 term ((+) <$ char '+' <|> (-) <$ char '-')

        term = chainl1 fact ((*) <$ char '*' <|> div <$ char '/')

        fact = read <$> many1 digit <|> char '(' *> expr <* char ')'

        eval :: String -> Int
        eval = either (error . show) id . parse expr "" . filter (/= ' ')

        main :: IO ()
        main = do
          file <- readFile "expr"
          putStr $ show $ eval file
          putStr "\n"

    and here's my self-contained precedence climbing parser in F#:

        let rec (|Expr|) (P(f, xs)) = Expr(loop (' ', f, xs))

        and shift oop f op (P(g, xs)) =
          let h, xs = loop (op, g, xs)
          loop (oop, f h, xs)

        and loop = function
          | ' ' as oop, f, ('+' | '-' as op)::P(g, xs)
          | (' ' | '+' | '-' as oop), f, ('*' | '/' as op)::P(g, xs)
          | oop, f, ('^' as op)::P(g, xs) ->
              let h, xs = loop (op, g, xs)
              let op =
                match op with
                | '+' -> (+)
                | '-' -> (-)
                | '*' -> (*)
                | '/' -> (/)
                | '^' -> pown
              loop (oop, op f h, xs)
          | _, f, xs -> f, xs

        and (|P|) = function
          | '-'::P(f, xs) ->
              let f, xs = loop ('~', f, xs)
              P(-f, xs)
          | '('::Expr(f, ')'::xs) -> P(f, xs)
          | c::xs when '0' <= c && c <= '9' -> P(int(string c), xs)

    My impression is that even state-of-the-art parser combinators waste a lot of time backtracking. Is that correct? If so, is it possible to write parser combinators that generate state machines to obtain competitive performance, or is it necessary to use code generation?

    Read the article

  • Any way to automatically wrap comments at column 80 in Visual Studio 2008? ..or display where column

    - by Jon Cage
    Is there any way to automatically wrap comments at the 80-column boundary as you type them? Or, failing that, any way to display a faint line at the column-80 boundary to make wrapping them manually a little easier? Several other IDEs I use have one or other of those functions, and it makes writing comments that wrap in sensible places much easier/quicker. [Edit] If (like me) you're using Visual C++ Express, you need to change the VisualStudio part of the key into VCExpress - had me confused for a while there!
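
    The [Edit] refers to a registry tweak that draws a vertical guide line in the editor; a hedged sketch of the value commonly used for Visual Studio 2008 is below (for Visual C++ Express, the VisualStudio segment becomes VCExpress, as the poster notes). The colour and column number are just examples.

        Windows Registry Editor Version 5.00

        [HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\Text Editor]
        "Guides"="RGB(128,128,128) 80"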

    Read the article

  • Anybody using Qi4J

    - by Jon
    I was reading an InfoQ article on Composite Oriented Programming earlier on: http://www.infoq.com/articles/Composite-Programming-Qi4j I am interested in finding out whether anybody is currently using (or has used) the Qi4j framework at all. How does it compare to using a traditional dependency injection framework such as Spring for wiring classes together? Is the resulting object graph (based on mixins rather than classes) easier to deal with from a maintenance point of view?

    Read the article

  • PHP fetch rows from multiple MySQL tables

    - by Jon McIntosh
    Right now I am fetching all of the rows from one of my tables:

        $query = "SELECT * FROM thread WHERE threadid = 2 ORDER BY threadid DESC";
        $result = mysql_query($query);
        $num_rows = mysql_num_rows($result);
        if ((!is_bool($result) || $result) && $num_rows) {
            while ($row = mysql_fetch_array($result)) {
                $thread = $row['title'];
                $threadID = $row['threadid'];
                $poster = $row['postusername'];
            }
        }

    What I want to do is go to another table in my database, post_display, and get the row 'text' where the threadid = 2.
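
    A hedged sketch of one way to do it (assuming post_display also has a threadid column, which the question implies): join the two tables in a single query, using the same legacy mysql_* API as the snippet above. A second query keyed on the same threadid would work equally well.

        $query = "SELECT t.title, t.threadid, t.postusername, p.text
                  FROM thread t
                  INNER JOIN post_display p ON p.threadid = t.threadid
                  WHERE t.threadid = 2";
        $result = mysql_query($query);
        while ($row = mysql_fetch_assoc($result)) {
            $thread   = $row['title'];
            $poster   = $row['postusername'];
            $postText = $row['text'];   // the column from post_display
        }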

    Read the article

  • Are there any generic shipping calculators out there for Django?

    - by Jon Cage
    I'm in the process of setting up a website (I'm using Django) to begin selling some toys I build, and I need a way of calculating shipping costs for my customers. Are there any (preferably free) shipping calculators which accept a customer's address and return the cost for different delivery companies / delivery options? It would be nice if the API could indicate cost vs delivery time. We'll be shipping world-wide, if that makes a difference.

    Read the article

  • Poor DB4O performance - What am I doing wrong?

    - by Jon
    I read Rob Conery's post about object databases and thought I'd give it a go. I have a class, let's say Book, with 15 properties: mostly string, some int, one DateTime and 2 decimals. I used Rob's code to open the database file etc., testing on a WinForms project rather than a Web project. So I did the following:

        for (int i = 0; i < 1000000; i++)
        {
            var book = new Book();
            // Assign properties
            s.Save(book);
        }
        s.CommitChanges();

    This took at least a couple of minutes. I then tried to retrieve all the data to test speed:

        var books = s.All<Book>();

    This one line was quick; however, then doing books.Count(), as well as a separate test doing a foreach loop and incrementing a counter, took a couple of minutes. This seems quite slow to me. Am I doing something wrong or am I expecting too much?

    Read the article

  • Issue using Visual Studio 2010 compiled C++ DLL in Windows 2000

    - by Jon Tackabury
    I have a very simple DLL written in unmanaged C++ that I access from my application. I recently switched to Visual Studio 2010, and the DLL went from 55k down to 35k with no code changes, but now it will no longer load on Windows 2000. I didn't change any code or compiler settings. I have my defines set up for 0x0500, which should include Windows 2000 support. Has anyone else run into this, or have any ideas of what I can do?

    Read the article

  • JUnit Custom Rules

    - by Jon
    JUnit 4.7 introduced the concept of custom rules: http://www.infoq.com/news/2009/07/junit-4.7-rules There are a number of built-in JUnit rules, including TemporaryFolder, which helps by clearing up folders after a test has been run:

        @Rule
        public TemporaryFolder tempFolder = new TemporaryFolder();

    There's a full list of built-in rules here: http://kentbeck.github.com/junit/javadoc/latest/org/junit/rules/package-summary.html I'm interested in finding out what custom rules are in place where you work, or what useful custom rules you currently use.
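
    As a hedged illustration of what a custom rule can look like (EmbeddedServer is a made-up placeholder, not a real library), JUnit 4.7's ExternalResource base class handles the before/after plumbing:

        import org.junit.Rule;
        import org.junit.Test;
        import org.junit.rules.ExternalResource;

        public class ServerRuleExample {

            // Minimal stand-in for whatever resource the rule manages.
            static class EmbeddedServer {
                void start() {}
                void stop() {}
            }

            // A custom rule: starts the resource before each test and tears it
            // down afterwards, even when the test throws.
            public static class ServerRule extends ExternalResource {
                private EmbeddedServer server;

                @Override
                protected void before() throws Throwable {
                    server = new EmbeddedServer();
                    server.start();
                }

                @Override
                protected void after() {
                    server.stop();
                }

                public EmbeddedServer getServer() {
                    return server;
                }
            }

            @Rule
            public ServerRule serverRule = new ServerRule();

            @Test
            public void serverIsAvailable() {
                // the test can use serverRule.getServer() here
            }
        }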

    Read the article

  • Views jump during ViewDidLoad

    - by Jon-Paul
    Hi, I have a custom button, subclassed from UIButton, that is initialised as below (obviously all the button does is use a custom font).

        @implementation HWButton

        - (id)initWithCoder:(NSCoder *)decoder {
            if (self = [super initWithCoder:decoder]) {
                [self.titleLabel setFont:[UIFont fontWithName:@"eraserdust"
                                                         size:self.titleLabel.font.pointSize]];
            }
            return self;
        }

    So far so good. But when I use the custom class in my nib and launch the app, the button initially displays for a split second as tiny with small text, then grows. So the outcome is what I want, but I don't want to see the transition. Can anyone put me right? Thanks. JP

    Read the article

  • When doing a Schema Export with hbm2ddl, is there a way to specify that you DO NOT want Nullable For

    - by Jon Erickson
    The DDL that is being created is putting all of my many-to-many associations into one table, but I actually want each many-to-many association in its own table (for other reasons). Right now hbm2ddl is creating this table (only Table1Key OR Table2Key OR Table3Key should be filled in for any given record, forcing this table to have nullable foreign keys):

        +-----------+
        |   xRef    |
        +-----------+
        | Table1Key |
        | Table2Key |
        | Table3Key |
        | RiskKey   |
        +-----------+

    I want hbm2ddl to create the following 3 tables instead, so that there are no nullable foreign keys:

        +-----------+    +-----------+    +-----------+
        |   xRef1   |    |   xRef2   |    |   xRef3   |
        +-----------+    +-----------+    +-----------+
        | Table1Key |    | Table2Key |    | Table3Key |
        | RiskKey   |    | RiskKey   |    | RiskKey   |
        +-----------+    +-----------+    +-----------+
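
    A hedged sketch of the usual mapping-level fix (entity and collection names are invented for illustration): giving each many-to-many collection its own table attribute makes hbm2ddl emit one join table per association instead of a shared one. In .hbm.xml form it looks roughly like this:

        <!-- on the Risk entity: three separate collections, each with its own
             join table, instead of one shared xRef table -->
        <set name="Table1Items" table="xRef1">
          <key column="RiskKey" />
          <many-to-many class="Table1Entity" column="Table1Key" />
        </set>

        <set name="Table2Items" table="xRef2">
          <key column="RiskKey" />
          <many-to-many class="Table2Entity" column="Table2Key" />
        </set>

        <set name="Table3Items" table="xRef3">
          <key column="RiskKey" />
          <many-to-many class="Table3Entity" column="Table3Key" />
        </set>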

    Read the article

  • Java paint speed relative to color model

    - by Jon
    I have a BufferedImage with an IndexColorModel. I need to paint that image onto the screen, but I've noticed that this is slow when using an IndexColorModel. However, if I run the BufferedImage through an identity affine transform, it creates an image with a DirectColorModel and the painting is significantly faster. Here's the code I'm using:

        AffineTransformOp identityOp = new AffineTransformOp(new AffineTransform(),
                                                             AffineTransformOp.TYPE_BILINEAR);
        displayImage = identityOp.filter(displayImage, null);

    I have three questions:
    1. Why is painting slower with an IndexColorModel?
    2. Is there any way to speed up the painting of an IndexColorModel?
    3. If the answer to 2 is no, is this the most efficient way to convert from an IndexColorModel to a DirectColorModel? I've noticed that this conversion is dependent on the size of the image, and I'd like to remove that dependency.

    Thanks for the help
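
    For question 3, a hedged alternative sketch: allocating a TYPE_INT_RGB image (TYPE_INT_ARGB if the palette has transparency) and drawing the indexed image into it avoids the AffineTransformOp filter. The conversion is still proportional to the number of pixels, so it won't remove the size dependency, but it can be done once up front rather than on every repaint.

        import java.awt.Graphics2D;
        import java.awt.image.BufferedImage;

        public final class ImageConvert {
            private ImageConvert() {}

            /** Copies an indexed image into a direct-colour image once. */
            public static BufferedImage toDirectColor(BufferedImage src) {
                BufferedImage dst = new BufferedImage(src.getWidth(), src.getHeight(),
                                                      BufferedImage.TYPE_INT_RGB);
                Graphics2D g = dst.createGraphics();
                g.drawImage(src, 0, 0, null);   // colour conversion happens here
                g.dispose();
                return dst;
            }
        }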

    Read the article

  • Exiting from the Middle of an Expression Without Using Exceptions

    - by Jon Purdy
    Is there a way to emulate the use of flow-control constructs in the middle of an expression? Is it possible, in a comma-delimited expression x, y, for y to cause a return? Edit: I'm working on a compiler for something rather similar to a functional language, and the target language is C++. Everything is an expression in the source language, and the sanest, simplest translation to the destination language leaves as many things expressions as possible. Basically, semicolons in the source language become C++ commas. In-language flow-control constructs have presented no problems thus far; it's only return. I just need a way to prematurely exit a comma-delimited expression, and I'd prefer not to use exceptions unless someone can show me that they don't have excessive overhead in this situation. The problem, of course, is that most flow-control constructs are not legal expressions in C++. The only solution I've found so far is something like this:

        try {
            return x(),                        // x();
                (1 ? throw Return(0) : 0);     // return 0;
        } catch (Return& ret) {
            return ret.value;
        }

    The return statement is always there (in the event that a Return construct is not reached), and as such the throw has to be wrapped in ?: to get the compiler to shut up about its void result being used in an expression. I would really like to avoid using exceptions for flow control, unless in this case it can be shown that no particular overhead is incurred; does throwing an exception cause unwinding or anything here? This code needs to run with reasonable efficiency. I just need a function-level equivalent of exit().
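
    A hedged sketch of an exception-free translation strategy (x, y and condition are placeholder declarations, not from the question): if the generated function body is a single return of one big expression, an early "return v" in the source language can become the true branch of a conditional operator and the remainder of the body its false branch. This only works when the whole body really is one expression, which the question says is already the case.

        // Placeholder declarations for the sketch.
        void x();
        void y();
        bool condition();

        // Source-language intent:          Generated C++:
        //   x();
        //   if (condition()) return 0;
        //   y();
        //   return 1;
        int generated() {
            return (x(),                       // x();
                    condition() ? 0            // early return 0
                                : (y(), 1));   // otherwise y(); return 1
        }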

    Read the article

  • Get the IDs of users who like my Page

    - by jon
    Hello, I would like to know if there is any API feature that can tell me which users liked my application's Page. I know that it is possible to check whether a given user is a fan of a Page using page.isFan, or to get the number of fans, but is it possible to get the fans' user IDs? If so, is it possible to get that data ordered by the date they became fans of the page? For example, using FQL I can get the user ID of someone who likes a Facebook object such as a post or photo: http://developers.facebook.com/docs/reference/fql/like Can I do the same for the people who liked my Page? Thanks for any help or guidelines.

    Read the article

  • Automatic e-mail processing

    - by Jon Harrop
    I'd like to write a .NET application in F# to automate some of the processing of my e-mails. For example, when an order comes in, my program might compute a new htpasswd from the e-mail's contents, upload it to our web server and reply to the customer with login details. How do people do this? I've tried Outlook 2007 automation, but it just prompts the user with security warnings, and my attempts to get it to stop doing this have failed, so I cannot automate anything with it. Is there a .NET-friendly e-mail client I can use more easily? This has been so tedious that I'm seriously considering writing my own .NET-friendly e-mail client...

    Read the article

  • Setting a Forms Authentication cookie from a .NET client application

    - by Jon DellOro
    We currently have a .NET 2.0 web app that uses forms authentication via cookies. Associated with this web app is an old VB6 client application that has its own login system. Currently, users have to log in to the VB6 app and then, when they click on a link, authenticate themselves again with the .NET forms authentication system. I'm wondering if it's possible to create a client-side .NET application, give it the username and password, and have it set the forms authentication cookie (without the browser being opened). Is that possible?

    Read the article

  • Escaping Code for Different Shells

    - by Jon Purdy
    Question: What characters do I need to escape in a user-entered string to securely pass it into shells on Windows and Unix? What shell differences and version differences should be taken into account? Can I use printf "%q" somehow, and is that reliable across shells?

    Backstory (a.k.a. Shameless Self-Promotion): I made a little DSL, the Vision Web Template Language, which allows the user to create templates for X(HT)ML documents and fragments, then automatically fill them in with content. It's designed to separate template logic from dynamic content generation, in the same way that CSS is used to separate markup from presentation. In order to generate dynamic content, a Vision script must defer to a program written in a language that can handle the generation logic, such as Perl or Python. (Aside: using PHP is also possible, but Vision is intended to solve some of the very problems that PHP perpetuates.) In order to do this, the script makes use of the @system directive, which executes a shell command and expands to its output. (Platform-specific generation can be handled using @unix or @windows, which only expand on the proper platform.) The problem is obvious, I should think:

    test.htm:

        <!-- ... -->
        <form action="login.vis" method="POST">
            <input type="text" name="USERNAME"/>
            <input type="password" name="PASSWORD"/>
        </form>
        <!-- ... -->

    login.vis:

        #!/usr/bin/vision
        # Think USERNAME = ";rm -f;"
        @system './login.pl' { USERNAME; PASSWORD }

    One way to safeguard against this kind of attack is to set proper permissions on scripts and directories, but Web developers may not always set things up correctly, and the naive developer should get just as much security as the experienced one. The solution, logically, is to include a @quote directive that produces a properly escaped string for the current platform:

        @system './login.pl' { @quote : USERNAME; @quote : PASSWORD }

    But what should @quote actually do? It needs to be both cross-platform and secure, and I don't want to create terrible problems with a naive implementation. Any thoughts?

    Read the article

  • PHP DOMDocument Error Handling Problem

    - by Jon
    I'm having trouble trying to write an if statement for DOM that will check if $html is blank. However, whenever the HTML page does end up blank, it just kills everything that would run below the DOM code (including what I had there to check if it was blank).

        $html = file_get_contents("http://example.com/");
        $dom = new DOMDocument;
        @$dom->loadHTML($html);
        $links = $dom->getElementById('dividhere')->getElementsByTagName('img');
        foreach ($links as $link) {
            echo $link->getAttribute('src');
        }

    All this does is grab an image URL in the specified div, which works perfectly until the page is a blank HTML page. I've tried using SimpleHTMLDOM, which didn't work either (it didn't even fetch the image on working pages). Did I happen to miss something with this one, or am I just missing something in both?

        include_once('simple_html_dom.php');

        $html = file_get_html("http://example.com/");
        foreach ($html->find('div[id="dividhere"]') as $div) {
            if (empty($div->src)) {
                continue;
            }
            echo $div->src;
        }
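
    A hedged sketch of the guard the first snippet is missing (the URL is the question's own placeholder): the failure most likely comes from calling getElementsByTagName() on the null that getElementById() returns when the div isn't there, so checking both the fetched string and the lookup result keeps the rest of the page running.

        function firstImageSrc($url)
        {
            $html = @file_get_contents($url);
            if ($html === false || trim($html) === '') {
                return null;                      // blank or unreachable page
            }

            $dom = new DOMDocument;
            @$dom->loadHTML($html);

            $div = $dom->getElementById('dividhere');
            if ($div === null) {
                return null;                      // div missing on this page
            }

            foreach ($div->getElementsByTagName('img') as $img) {
                return $img->getAttribute('src'); // first image only
            }
            return null;
        }

        echo firstImageSrc("http://example.com/");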

    Read the article

  • Implementing a robust async stream reader for a console

    - by Jon
    I recently provided an answer to this question: C# - Realtime console output redirection. As often happens, explaining stuff (here "stuff" was how I tackled a similar problem) leads you to greater understanding and/or, as is the case here, "oops" moments. I realized that my solution, as implemented, has a bug. The bug has little practical importance, but it has an extremely large importance to me as a developer: I can't rest easy knowing that my code has the potential to blow up. Squashing the bug is the purpose of this question. I apologize for the long intro, so let's get dirty.

    I wanted to build a class that allows me to receive input from a Stream in an event-based manner. The stream, in my scenario, is guaranteed to be a FileStream and there is also an associated StreamReader already present to leverage. The public interface of the class is this:

        public class MyStreamManager
        {
            public event EventHandler<ConsoleOutputReadEventArgs> StandardOutputRead;
            public void StartSendingEvents();
            public void StopSendingEvents();
        }

    Obviously this specific scenario has to do with a console's standard output. StartSendingEvents and StopSendingEvents do what they advertise; for the purposes of this discussion, we can assume that events are always being sent without loss of generality. The class uses these two fields internally:

        protected readonly StringBuilder inputAccumulator = new StringBuilder();
        protected readonly byte[] buffer = new byte[256];

    The functionality of the class is implemented in the methods below. To get the ball rolling:

        public void StartSendingEvents()
        {
            this.stopAutomation = false;
            this.BeginReadAsync();
        }

    To read data out of the Stream without blocking, and also without requiring a carriage return char, BeginRead is called:

        protected void BeginReadAsync()
        {
            if (!this.stopAutomation)
            {
                this.StandardOutput.BaseStream.BeginRead(
                    this.buffer, 0, this.buffer.Length, this.ReadHappened, null);
            }
        }

    The challenging part: BeginRead requires using a buffer. This means that when reading from the stream, it is possible that the bytes available to read ("incoming chunk") are larger than the buffer. Since we are only handing off data from the stream to a consumer, and that consumer may well have inside knowledge about the size and/or format of these chunks, I want to call event subscribers exactly once for each chunk. Otherwise the abstraction breaks down and the subscribers have to buffer the incoming data and reconstruct the chunks themselves using said knowledge. This is much less convenient to the calling code, and detracts from the usefulness of my class.

    Edit: There are comments below correctly stating that since the data is coming from a stream, there is absolutely nothing that the receiver can infer about the structure of the data unless it is fully prepared to parse it. What I am trying to do here is leverage the "flush the output" "structure" that the owner of the console imparts while writing on it. I am prepared to assume (better: allow my caller to have the option to assume) that the OS will pass me the data written between two flushes of the stream in exactly one piece.

    To this end, if the buffer is full after EndRead, we don't send its contents to subscribers immediately but instead append them to a StringBuilder. The contents of the StringBuilder are only sent back whenever there is no more to read from the stream (thus preserving the chunks).

        private void ReadHappened(IAsyncResult asyncResult)
        {
            var bytesRead = this.StandardOutput.BaseStream.EndRead(asyncResult);
            if (bytesRead == 0)
            {
                this.OnAutomationStopped();
                return;
            }

            var input = this.StandardOutput.CurrentEncoding.GetString(
                this.buffer, 0, bytesRead);
            this.inputAccumulator.Append(input);

            if (bytesRead < this.buffer.Length)
            {
                this.OnInputRead(); // only send back if we're sure we got it all
            }

            this.BeginReadAsync(); // continue "looping" with BeginRead
        }

    After any read which is not enough to fill the buffer, all accumulated data is sent to the subscribers:

        private void OnInputRead()
        {
            var handler = this.StandardOutputRead;
            if (handler == null)
            {
                return;
            }

            handler(this,
                    new ConsoleOutputReadEventArgs(this.inputAccumulator.ToString()));
            this.inputAccumulator.Clear();
        }

    (I know that as long as there are no subscribers the data gets accumulated forever. This is a deliberate decision.)

    The good

    This scheme works almost perfectly:
    - Async functionality without spawning any threads
    - Very convenient to the calling code (just subscribe to an event)
    - Maintains the "chunkiness" of the data; this allows the calling code to use inside knowledge of the data without doing any extra work
    - Is almost agnostic to the buffer size (it will work correctly with any size buffer irrespective of the data being read)

    The bad

    That last almost is a very big one. Consider what happens when there is an incoming chunk with length exactly equal to the size of the buffer. The chunk will be read and buffered, but the event will not be triggered. This will be followed up by a BeginRead that expects to find more data belonging to the current chunk in order to send it back all in one piece, but... there will be no more data in the stream. In fact, as long as data is put into the stream in chunks with length exactly equal to the buffer size, the data will be buffered and the event will never be triggered. This scenario may be highly unlikely to occur in practice, especially since we can pick any number for the buffer size, but the problem is there.

    Solution?

    Unfortunately, after checking the available methods on FileStream and StreamReader, I can't find anything which lets me peek into the stream while also allowing async methods to be used on it. One "solution" would be to have a thread wait on a ManualResetEvent after the "buffer filled" condition is detected. If the event is not signaled (by the async callback) in a small amount of time, then more data from the stream will not be forthcoming and the data accumulated so far should be sent to subscribers. However, this introduces the need for another thread, requires thread synchronization, and is plain inelegant. Specifying a timeout for BeginRead would also suffice (call back into my code every now and then so I can check if there's data to be sent back; most of the time there will not be anything to do, so I expect the performance hit to be negligible). But it looks like timeouts are not supported in FileStream. Since I imagine that async calls with timeouts are an option in bare Win32, another approach might be to PInvoke the hell out of the problem. But this is also undesirable as it will introduce complexity and simply be a pain to code. Is there an elegant way to get around the problem? Thanks for being patient enough to read all of this.
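
    For the "buffer exactly full" corner case, a hedged sketch of one timer-based variant (the names flushTimer/ArmFlushTimer and the 50 ms window are illustrative, and the accumulator would now need locking because the timer callback runs on a thread-pool thread): arm a one-shot System.Threading.Timer whenever a read fills the buffer, cancel it when the next EndRead arrives, and flush the accumulator if it fires first. This trades the lost-chunk bug for a small extra latency on chunks that happen to be an exact multiple of the buffer size.

        private System.Threading.Timer flushTimer;

        // Called from ReadHappened when bytesRead == buffer.Length.
        private void ArmFlushTimer()
        {
            this.CancelFlushTimer();
            this.flushTimer = new System.Threading.Timer(_ =>
            {
                lock (this.inputAccumulator)
                {
                    // No further read completed in time: treat the buffered
                    // data as a complete chunk and send it to subscribers.
                    if (this.inputAccumulator.Length > 0)
                    {
                        this.OnInputRead();
                    }
                }
            }, null, 50, System.Threading.Timeout.Infinite);
        }

        // Called at the start of ReadHappened, before touching the accumulator.
        private void CancelFlushTimer()
        {
            if (this.flushTimer != null)
            {
                this.flushTimer.Dispose();
                this.flushTimer = null;
            }
        }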

    Read the article
