Search Results

Search found 8219 results on 329 pages for 'less'.

Page 242/329 | < Previous Page | 238 239 240 241 242 243 244 245 246 247 248 249  | Next Page >

  • Converting an int64 value to a Number object in JavaScript

    - by Matt
    I have a COM object which has a method that returns an unsigned int64 (VT_UI8) value. We have an HTML page which contains some JavaScript which can load the COM object and make the call to that method, to retrieve the value as such:

        var foo = MyCOMObject.GetInt64Value();

    This value can easily be displayed to the user in a message dialog using:

        alert(foo);

    or displayed on the page by:

        document.getElementById('displayToUser').innerHTML = foo;

    However, we cannot use this value as a Number (e.g. if we try to multiply it by 2) without the page throwing "Number expected" errors. If we check typeof(foo) it returns "unknown". I've found a workaround for this by doing the following:

        document.getElementById('displayToUser').innerHTML = foo;
        var bar = parseInt(document.getElementById('displayToUser').innerHTML);
        alert(bar*2);

    What I need to know is how to make that process more efficient. Specifically, is there a way to cast foo to a String explicitly, rather than having to set some document element's innerHTML to foo and then retrieve it from that? I wouldn't mind calling something like:

        alert(parseInt((string)foo) * 2);

    Even better would be if there is a way to directly convert the int64 to a Number, without going through the String conversion, but I hold out less hope for that.

    Read the article

  • TranslateTransform for drag and drop in Silverlight

    - by fuzzyman
    We're trying to implement drag and drop in Silverlight (3). We want users to be able to drag elements from a treeview onto another part of a UI. The parent element is a Grid, and we've been trying to use a TranslateTransform along with the MouseLeftButtonDown, MouseMove (etc.) events, as recommended by various online examples. For example: http://www.85turns.com/2008/08/13/drag-and-drop-silverlight-example/

    We're doing this in IronPython, but that should be more or less irrelevant. The drag start is correctly initiated, but the item we are dragging appears in the 'wrong' location (offset a few hundred pixels to the right and down from the cursor) and I can't for the life of me work out why.

    Basic xaml:

        <Grid x:Name="layout_root">
            <Grid.RowDefinitions>
                <RowDefinition/>
                <RowDefinition Height="120"/>
            </Grid.RowDefinitions>
            <Border x:Name="drag" Background="LightGray" Width="40" Height="15"
                    Visibility="Collapsed" Canvas.ZIndex="10">
                <Border.RenderTransform>
                    <TranslateTransform x:Name="transform" X="0" Y="0" />
                </Border.RenderTransform>
                <TextBlock x:Name="dragText" TextAlignment="Center" Foreground="Gray" Text="foo" />
            </Border>
            ...
        </Grid>

    The startDrag method is triggered by the MouseLeftButtonDown event (on a TextBlock in a TreeViewItem.Header). onDrag is triggered by MouseMove. In the following code self.root is Application.Current.RootVisual (the top level UI element from app.xaml):

        def startDrag(self, sender, event):
            self.root.drag.Visibility = Visibility.Visible
            self.root.dragText.Text = sender.Text
            position = event.GetPosition(self.root.drag.Parent)
            self.root.drag.transform.X = position.X
            self.root.drag.transform.Y = position.Y
            self.root.CaptureMouse()
            self._captured = True

        def onDrag(self, sender, event):
            if self._captured:
                position = event.GetPosition(self.root.drag.Parent)
                self.root.drag.transform.X = position.X
                self.root.drag.transform.Y = position.Y

    The dragged item follows the mouse move, but is offset considerably. Any idea what I am doing wrong and how to correct it?

    Read the article

  • Microsoft Detours - DetourUpdateThread?

    - by pault543
    Hi, I have a few quick questions about the Microsoft Detours Library. I have used it before (successfully), but I just had a thought about this function:

        LONG DetourUpdateThread(HANDLE hThread);

    I read elsewhere that this function will actually suspend the thread until the transaction completes. This seems odd since most sample code calls:

        DetourUpdateThread(GetCurrentThread());

    Anyway, apparently this function "enlists" threads so that, when the transaction commits (and the detours are made), their instruction pointers are modified if they lie "within the rewritten code in either the target function or the trampoline function." My questions are:

    When the transaction commits, is the current thread's instruction pointer going to be within the DetourTransactionCommit function? If so, why should we bother enlisting it to be updated?

    Also, if the enlisted threads are suspended, how can the current thread continue executing (given that most sample code calls DetourUpdateThread(GetCurrentThread());)?

    Finally, could you suspend all threads for the current process, avoiding race conditions (considering that threads could be getting created and destroyed at any time)? Perhaps this is done when the transaction begins? This would allow us to enumerate threads more safely (as it seems less likely that new threads could be created), although what about CreateRemoteThread()?

    Thanks, Paul

    For reference, here is an extract from the simple sample:

        // DllMain function attaches and detaches the TimedSleep detour to the
        // Sleep target function. The Sleep target function is referred to
        // through the TrueSleep target pointer.
        BOOL WINAPI DllMain(HINSTANCE hinst, DWORD dwReason, LPVOID reserved)
        {
            if (dwReason == DLL_PROCESS_ATTACH) {
                DetourTransactionBegin();
                DetourUpdateThread(GetCurrentThread());
                DetourAttach(&(PVOID&)TrueSleep, TimedSleep);
                DetourTransactionCommit();
            }
            else if (dwReason == DLL_PROCESS_DETACH) {
                DetourTransactionBegin();
                DetourUpdateThread(GetCurrentThread());
                DetourDetach(&(PVOID&)TrueSleep, TimedSleep);
                DetourTransactionCommit();
            }
            return TRUE;
        }

    Read the article

  • Rot13 for numbers.

    - by dreeves
    EDIT: Now a Major Motion Blog Post at http://messymatters.com/sealedbids

    The idea of rot13 is to obscure text, for example to prevent spoilers. It's not meant to be cryptographically secure but to simply make sure that only people who are sure they want to read it will read it.

    I'd like to do something similar for numbers, for an application involving sealed bids. Roughly I want to send someone my number and trust them to pick their own number, uninfluenced by mine, but then they should be able to reveal mine (purely client-side) when they're ready. They should not require further input from me or any third party. (Added: Note the assumption that the recipient is being trusted not to cheat.)

    It's not as simple as rot13 because certain numbers, like 1 and 2, will recur often enough that you might remember that, say, 34.2 is really 1.

    Here's what I'm looking for specifically: a function seal() that maps a real number to a real number (or a string). It should not be deterministic -- seal(7) should not map to the same thing every time. But the corresponding function unseal() should be deterministic -- unseal(seal(x)) should equal x for all x. I don't want seal or unseal to call any webservices or even get the system time (because I don't want to assume synchronized clocks). (Added: It's fine to assume that all bids will be less than some maximum, known to everyone, say a million.)

    Sanity check:

        > seal(7)
        482.2382   # some random-seeming number or string.
        > seal(7)
        71.9217    # a completely different random-seeming number or string.
        > unseal(seal(7))
        7          # we always recover the original number by unsealing.
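    One scheme that fits this spec is to hide the bid behind a random multiple of the publicly known maximum (the "say a million" above) and unseal with a modulo. The Python sketch below is only a minimal illustration under the question's stated trust assumption (the recipient could trivially unseal early); the name MAX_BID and the use of the random module are my own choices, not part of the question:

        import random

        MAX_BID = 1000000  # the publicly known upper bound on bids assumed above

        def seal(x):
            """Obscure a bid by adding a random multiple of MAX_BID.

            Non-deterministic: seal(7) yields a different number on each call.
            Work in integer units (e.g. cents) to avoid float precision loss.
            """
            return x + random.randint(1, 9999) * MAX_BID

        def unseal(y):
            """Recover the original bid; deterministic and purely client-side."""
            return y % MAX_BID

        print(seal(7))          # e.g. 483000007
        print(seal(7))          # a different value, e.g. 91000007
        print(unseal(seal(7)))  # always 7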

    Read the article

  • Split large repo into multiple subrepos and preserve history (Mercurial)

    - by Andrew
    We have a large base of code that contains several shared projects, solution files, etc. in one directory in SVN. We're migrating to Mercurial. I would like to take this opportunity to reorganize our code into several repositories to make cloning for branching have less overhead. I've already successfully converted our repo from SVN to Mercurial while preserving history. My question: how do I break all the different projects into separate repositories while preserving their history?

    Here is an example of what our single repository (OurPlatform) currently looks like:

        /OurPlatform
        ---- Core
        ---- Core.Tests
        ---- Database
        ---- Database.Tests
        ---- CMS
        ---- CMS.Tests
        ---- Product1.Domain
        ---- Product1.Stresstester
        ---- Product1.Web
        ---- Product1.Web.Tests
        ---- Product2.Domain
        ---- Product2.Stresstester
        ---- Product2.Web
        ---- Product2.Web.Tests
        ==== Product1.sln
        ==== Product2.sln

    All of those are folders containing VS projects except for the solution files. Product1.sln and Product2.sln both reference all of the other projects. Ideally, I'd like to take each of those folders and turn them into separate Hg repos, and also add new repos for each project (they would act as parent repos). Then, if someone was going to work on Product1, they would clone the Product1 repo, which contained Product1.sln and subrepo references to ReferenceAssemblies, Core, Core.Tests, Database, Database.Tests, CMS, and CMS.Tests.

    So, it's easy to do this by just hg init'ing in the project directories. But can it be done while preserving history? Or is there a better way to arrange this?

    Read the article

  • Avoid the collapsing effect on TreeView after updating data

    - by Manolete
    I have a TreeView used to display events. It works great; however, every time new events come in and populate the tree, the tree collapses back to its original position. That is very annoying when the refresh time is less than 1 second, and it does not allow the user to interact with the items of the tree. Is there any way to avoid this behaviour?

        <TreeView Margin="1" BorderThickness="0" Name="eventsTree"
                  ItemsSource="{Binding EventAlertContainers}" Background="#00000000"
                  ScrollViewer.VerticalScrollBarVisibility="Auto" FontSize="14"
                  VirtualizingStackPanel.IsVirtualizing="True">
            <TreeView.Resources>
                <HierarchicalDataTemplate DataType="{x:Type C:EventAlertContainer}" ItemsSource="{Binding EventAlerts}">
                    <StackPanel Orientation="Horizontal">
                        <Image Width="20" Height="20" Margin="3,0" Source="Resources\Process_info_32.png" />
                        <TextBlock FontWeight="Bold" FontSize="16" Text="{Binding Description}" />
                    </StackPanel>
                </HierarchicalDataTemplate>
                <HierarchicalDataTemplate DataType="{x:Type C:EventAlert}" ItemsSource="{Binding Events}">
                    <StackPanel Orientation="Horizontal">
                        <Image Width="20" Height="20" Margin="0,0" Source="Resources\clock2_32.jpg" />
                        <TextBlock FontWeight="DemiBold" FontSize="14" Text="{Binding Name}" />
                    </StackPanel>
                </HierarchicalDataTemplate>
                <HierarchicalDataTemplate DataType="{x:Type C:Event}">
                    <StackPanel Orientation="Horizontal">
                        <Image Width="20" Height="20" Margin="0,0" Source="Resources\Task_32.png" />
                        <StackPanel Orientation="Vertical">
                            <TextBlock FontSize="12" Text="{Binding Name}" />
                        </StackPanel>
                    </StackPanel>
                </HierarchicalDataTemplate>
            </TreeView.Resources>
        </TreeView>

    Read the article

  • Should I go for Arrays or Objects in PHP in a CouchDB/Ajax app?

    - by karlthorwald
    I find myself converting between array and object all the time in a PHP application that uses CouchDB and Ajax. Of course I am also converting objects to JSON and back (sometimes for CouchDB but mostly for Ajax), but this is not so much disturbing my workflow.

    At present I have PHP objects that are returned by the CouchDB modules I use, and on the other hand I have the old habit of returning arrays like array("error" => "not found", "data" => $dataObj) from my functions. This leads to a mixed occurrence of real PHP objects and nested arrays, and I cast with (object) or (array) if necessary. The worst thing is that I know more or less by heart what a function returns, but not what type (array or object), so I often run into type errors.

    My plan is now to always cast arrays to objects before returning from a function. Of course this implies a lot of refactoring. Is this the right way to go? What about the conversion overhead? Other ideas or tips?

    Edit: Kenaniah's answer suggests I should go the other way, which would mean I'd cast everything to arrays. And for all the Ajax / JSON stuff and also for CouchDB I would use

        $myarray = json_decode($json_data, $assoc = false)

    Even more work to change all the CouchDB and Ajax functions, but in the end I have better code.

    Read the article

  • Why not use tables for layout in HTML?

    - by Bno
    It seems to be the general opinion that tables should not be used for layout in HTML. Why? I have never (or rarely, to be honest) seen good arguments for this. The usual answers are:

    - It's good to separate content from layout.
      But this is a fallacious argument; Cliche Thinking. I guess it's true that using the table element for layout has little to do with tabular data. So what? Does my boss care? Do my users care?
      Perhaps me or my fellow developers who have to maintain a web page care... Is a table less maintainable? I think using a table is easier than using divs and CSS.
      By the way... why is using a div or a span good separation of content from layout and a table not? Getting a good layout with only divs often requires a lot of nested divs.
    - Readability of the code.
      I think it's the other way around. Most people understand html, few understand CSS.
    - It's better for SEO not to use tables.
      Why? Can anybody show some evidence that it is? Or a statement from Google that tables are discouraged from an SEO perspective?
    - Tables are slower.
      An extra tbody element has to be inserted. This is peanuts for modern web browsers. Show me some benchmarks where the use of a table significantly slows down a page.
    - A layout overhaul is easier without tables, see css Zen Garden.
      Most web sites that need an upgrade need new content (html) as well. Scenarios where a new version of a web site only needs a new CSS file are not very likely. Zen Garden is a nice web site, but a bit theoretical. Not to mention its misuse of CSS.

    I am really interested in good arguments to use divs + CSS instead of tables.

    Read the article

  • MySQL ORDER BY DESC is fast but ASC is very slow

    - by Pepper
    Hello, I'm completely stumped on this one. For some reason when I sort this query by DESC it's super fast, but if sorted by ASC it's extremely slow.

    This takes about 150 milliseconds:

        SELECT posts.id FROM posts USE INDEX (published)
        WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 )
        ORDER BY posts.published DESC
        LIMIT 0, 50;

    This takes about 32 seconds:

        SELECT posts.id FROM posts USE INDEX (published)
        WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 )
        ORDER BY posts.published ASC
        LIMIT 0, 50;

    The EXPLAIN is the same for both queries:

        id  select_type  table  type   possible_keys  key        key_len  ref   rows  Extra
        1   SIMPLE       posts  index  NULL           published  5        NULL  50    Using where

    I've tracked it down to "USE INDEX (published)". If I take that out it's the same performance both ways. But the EXPLAIN shows the query is less efficient overall:

        id  select_type  table  type   possible_keys  key      key_len  ref  rows  Extra
        1   SIMPLE       posts  range  feed_id        feed_id  4        \N   759   Using where; Using filesort

    And here's the table:

        CREATE TABLE `posts` (
          `id` int(20) NOT NULL AUTO_INCREMENT,
          `feed_id` int(11) NOT NULL,
          `post_url` varchar(255) NOT NULL,
          `title` varchar(255) NOT NULL,
          `content` blob,
          `author` varchar(255) DEFAULT NULL,
          `published` int(12) DEFAULT NULL,
          `updated` datetime NOT NULL,
          `created` datetime NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `post_url` (`post_url`,`feed_id`),
          KEY `feed_id` (`feed_id`),
          KEY `published` (`published`)
        ) ENGINE=InnoDB AUTO_INCREMENT=196530 DEFAULT CHARSET=latin1;

    Is there a fix for this? Thanks!

    Read the article

  • notify listener inside or outside inner synchronization

    - by Jary Zeels
    Hello all, I am struggling with a decision. I am writing a thread-safe library/API. Listeners can be registered, so the client is notified when something interesting happens. Which of the two implementations is most common?

        class MyModule {
            protected Listener listener;

            protected void somethingHappens() {
                synchronized(this) {
                    ... do useful stuff ...
                    listener.notify();
                }
            }
        }

    or

        class MyModule {
            protected Listener listener;

            protected void somethingHappens() {
                Listener l = null;
                synchronized(this) {
                    ... do useful stuff ...
                    l = listener;
                }
                l.notify();
            }
        }

    In the first implementation, the listener is notified inside the synchronization. In the second implementation, this is done outside the synchronization. I feel that the second one is advised, as it leaves less room for potential deadlocks. But I am having trouble convincing myself.

    A downside of the second implementation is that the client might receive 'incorrect' notifications, which happens if it accessed the module prior to the l.notify() statement.

    thanks a lot

    Read the article

  • "Pattern matching" of algebraic type data constructors

    - by jetxee
    Let's consider a data type with many constructors:

        data T = Alpha Int | Beta Int | Gamma Int Int | Delta Int

    I want to write a function to check if two values are produced with the same constructor:

        sameK (Alpha _) (Alpha _) = True
        sameK (Beta _) (Beta _) = True
        sameK (Gamma _ _) (Gamma _ _) = True
        sameK _ _ = False

    Maintaining sameK is not much fun, it is potentially buggy. For example, when new constructors are added to T, it's easy to forget to update sameK. I omitted one line to give an example:

        -- it’s easy to forget:
        -- sameK (Delta _) (Delta _) = True

    The question is how to avoid boilerplate in sameK? Or how to make sure it checks for all T constructors?

    The workaround I found is to use separate data types for each of the constructors, deriving Data.Typeable, and declaring a common type class, but I don't like this solution, because it is much less readable and otherwise just a simple algebraic type works for me:

        {-# LANGUAGE DeriveDataTypeable #-}
        import Data.Typeable

        class Tlike t where
          value :: t -> t
          value = id

        data Alpha = Alpha Int deriving Typeable
        data Beta = Beta Int deriving Typeable
        data Gamma = Gamma Int Int deriving Typeable
        data Delta = Delta Int deriving Typeable

        instance Tlike Alpha
        instance Tlike Beta
        instance Tlike Gamma
        instance Tlike Delta

        sameK :: (Tlike t, Typeable t, Tlike t', Typeable t') => t -> t' -> Bool
        sameK a b = typeOf a == typeOf b

    Read the article

  • Why does my call to Activator.CreateInstance intermittently fail?

    - by Daniel Stutzbach
    I'm using the following code to access the Windows Explorer Shell's band site service:

        Guid GUID_TrayBandSiteService = new Guid(0xF60AD0A0, 0xE5E1, 0x45cb, 0xB5, 0x1A, 0xE1, 0x5B, 0x9F, 0x8B, 0x29, 0x34);
        Type shellTrayBandSiteService = Type.GetTypeFromCLSID(GUID_TrayBandSiteService, true);
        site = Activator.CreateInstance(shellTrayBandSiteService) as IBandSite;

    Mostly, it works great. A very small percentage of the time (less than 1%), the call to Activator.CreateInstance throws the following exception:

        System.Runtime.InteropServices.COMException (0x80040154): Retrieving the COM class factory for component with CLSID {F60AD0A0-E5E1-45CB-B51A-E15B9F8B2934} failed due to the following error: 80040154.
           at System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, Boolean& canBeCached, RuntimeMethodHandle& ctor, Boolean& bNeedSecurityCheck)
           at System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean fillCache)
           at System.RuntimeType.CreateInstanceImpl(Boolean publicOnly, Boolean skipVisibilityChecks, Boolean fillCache)
           at System.Activator.CreateInstance(Type type, Boolean nonPublic)

    I've looked up the error code, and it appears to indicate that the service isn't registered. That's nonsense in my case, since I'm trying to access a service provided by the operating system (explorer.exe, to be specific). I'm stumped. What might cause Activator.CreateInstance to fail, but only rarely?

    Read the article

  • MS Access: Why is ADODB.Recordset.BatchUpdate so much slower than Application.ImportXML?

    - by apenwarr
    I'm trying to run the code below to insert a whole lot of records (from a file with a weird file format) into my Access 2003 database from VBA. After many, many experiments, this code is the fastest I've been able to come up with: it does 10000 records in about 15 seconds on my machine. At least 14.5 of those seconds (i.e. almost all the time) is in the single call to UpdateBatch. I've read elsewhere that the JET engine doesn't support UpdateBatch. So maybe there's a better way to do it.

    Now, I would just think the JET engine is plain slow, but that can't be it. After generating the 'testy' table with the code below, I right clicked it, picked Export, and saved it as XML. Then I right clicked, picked Import, and reloaded the XML. Total time to import the XML file? Less than one second, i.e. at least 15x faster. Surely there's an efficient way to insert data into Access that doesn't require writing a temp file?

        Sub TestBatchUpdate()
            CurrentDb.Execute "create table testy (x int, y int)"

            Dim rs As New ADODB.Recordset
            rs.CursorLocation = adUseServer
            rs.Open "testy", CurrentProject.AccessConnection, _
                adOpenStatic, adLockBatchOptimistic, adCmdTableDirect

            Dim n, v
            n = Array(0, 1)
            v = Array(50, 55)

            Debug.Print "starting loop", Time
            For i = 1 To 10000
                rs.AddNew n, v
            Next i
            Debug.Print "done loop", Time

            rs.UpdateBatch
            Debug.Print "done update", Time

            CurrentDb.Execute "drop table testy"
        End Sub

    I would be willing to resort to C/C++ if there's some API that would let me do fast inserts that way. But I can't seem to find it. It can't be that Application.ImportXML is using undocumented APIs, can it?

    Read the article

  • Accuracy of OpenGL ES Instrument

    - by Rob Jones
    I'm developing a game for the iPhone. I've decided that 30 FPS is plenty, so I've written some code that only allows the App to present the render buffer every 1/30 of a second. When I tried to verify this with Instruments I got varying information.

    On an iPod Touch (2009 edition, 32G) it reports 30 FPS for Core Animation Frames Per Second. On an iPhone 3G I get wildly varying results, and not just less than 30 FPS. I see 30 FPS on a regular basis; it actually seems to hang closer to 36-39. To investigate this anomaly I added my own FPS counter to the app and update it once per second. It stays right at 29 FPS on both devices.

    So, does anyone have any suggestions as to what might be going on? I expect Instruments to be accurate, so it really concerns me that it appears inaccurate. It makes me think I have a bug somewhere, but I sure can't find it.

    Read the article

  • How can I use the currently displayed node to filter a block-level view on that node's page?

    - by Deane
    I have a parent/child relationship set up via Node Reference. A Child record can have a Parent record selected from a Node Reference field (this is optional -- I can have Parent-less Children as well).

    I've created a Views block to appear on the Parent pages, below the content. It's going to show a table of all the Child nodes for that Parent. Problem is, right now it shows every Child node. I need to filter it for just the Parent being displayed.

    What I need to be able to do is add a filter to this View to effectively say, "Only show the Child nodes that are assigned to the Parent being displayed on this page." So, somehow I need to be able to get the Nid of the currently displaying node (which will be a Parent, in all cases when this block is displayed), and use that in a filter in my View. How exactly can I do this?

    (Initially I used an attachment view for this (as this page instructs). I created a page view to display the Parent, then an attachment view to display all the Children, then attached that under the page view. This worked, but it was almost absurdly complicated to set up, and it was undesirable for a number of other reasons -- primarily that my Parent now has two dedicated URLs: its own node-level page, and the similar page created by this view.)

    Using Drupal 6.15.

    Read the article

  • Why is a CoreData forceFetch required after a delete on the iPad but not the iPhone?

    - by alyoshak
    When the following code is run on the iPhone, the count of fetched objects after the delete is one less than before the delete. But on the iPad the count remains the same. This inconsistency was causing a crash on the iPad because elsewhere in the code, soon after the delete, fetchedObjects is called and the calling code, trusting the count, attempts access to the just-deleted object's properties, resulting in an NSObjectInaccessibleException error (see below).

    A fix has been to use that commented-out call to performFetch, which when executed makes the second call to fetchedObjects yield the same result as on the iPhone without it. My question is: why is the iPad producing different results than the iPhone? This is the second of these differences that I've discovered and posted recently.

        -(NSError*)deleteObject:(NSManagedObject*)mo;
        {
            NSLog(@"\n\nNum objects in store before delete: %i\n\n",
                  [[self.fetchedResultsController fetchedObjects] count]);

            [self.managedObjectContext deleteObject:mo];

            // Save the context.
            NSError *error = nil;
            if (![self.managedObjectContext save:&error]) {
            }

            // [self.fetchedResultsController performFetch:&error]; // force a fetch

            NSLog(@"\n\nNum objects in store after delete (and save): %i\n\n",
                  [[self.fetchedResultsController fetchedObjects] count]);

            return error;
        }

    (The full NSObjectInaccessibleException is: "Terminating app due to uncaught exception 'NSObjectInaccessibleException', reason: 'CoreData could not fulfill a fault for '0x1dcf90 <x-coredata://DC02B10D-555A-4AB8-8BC4-F419C4982794/Blueprint/p"

    Read the article

  • How To Get the Name of the Current Procedure/Function in Delphi (As a String)

    - by Andreas Rejbrand
    Is it possible to obtain the name of the current procedure/function as a string, within a procedure/function? I suppose there would be some "macro" that is expanded at compile-time.

    My scenario is this: I have a lot of procedures that are given a record, and they all need to start by checking the validity of the record, and so they pass the record to a "validator procedure". The validator procedure raises an exception if the record is invalid, and I want the message of the exception to include not the name of the validator procedure, but the name of the function/procedure that called the validator procedure (naturally). That is, I have

        procedure ValidateStruct(const Struct: TMyStruct; const Sender: string);
        begin
          if <StructIsInvalid> then
            raise Exception.Create(Sender + ': Structure is invalid.');
        end;

    and then

        procedure SomeProc1(const Struct: TMyStruct);
        begin
          ValidateStruct(Struct, 'SomeProc1');
          ...
        end;

        ...

        procedure SomeProcN(const Struct: TMyStruct);
        begin
          ValidateStruct(Struct, 'SomeProcN');
          ...
        end;

    It would be somewhat less error-prone if I instead could write something like

        procedure SomeProc1(const Struct: TMyStruct);
        begin
          ValidateStruct(Struct, {$PROCNAME});
          ...
        end;

        ...

        procedure SomeProcN(const Struct: TMyStruct);
        begin
          ValidateStruct(Struct, {$PROCNAME});
          ...
        end;

    and then each time the compiler encounters a {$PROCNAME}, it simply replaces the "macro" with the name of the current function/procedure as a string literal.

    Read the article

  • C++ stack for multiple data types (RPN vector calculator)

    - by Arrieta
    Hello: I have designed a quick and basic vector arithmetic library in C++. I call the program from the command line when I need a rapid cross product, or the angle between vectors. I don't use Matlab or Octave or related tools, because the startup time is larger than the computation time. Again, this is for very basic operations.

    I am extending this program, and I will make it work as an RPN calculator, for operations of the type:

        1 2 3 4 5 6 x
        out: -3 6 -3

    (give one vector, another vector, and the "cross" operator; spit out the cross product)

    The stack must accept 3d vectors or scalars, for operations like:

        1 2 3 2 *
        out: 2 4 6

    The lexer and parser for this mini-calculator are trivial, but I cannot seem to think of a good way for creating the internal stack. How would you create a stack for containing vectors or doubles? (I rolled up my own very simple vector class - less than one hundred lines and it does everything I need.) How can I create a simple stack which accepts elements of class Vector or type double? Thank you.

    Read the article

  • Is a confirmation screen necessary for an order form?

    - by abeger
    In a discussion about how to streamline an order form on our site, the idea of eliminating the confirmation screen came up. So, instead of filling out the form, clicking "Submit", seeing a summary on a confirmation screen and clicking "Confirm", the user would simply fill out the form, hit "Submit", and the order's done. The theory is that fewer clicks and fewer screens mean less time to order, and therefore the ordering experience is easier. The opposing opinion says that without the confirmation screen, user error increases and people just end up canceling/changing orders after the fact.

    I'm looking for more input from the SO community. Have you ever done this? How has it worked out, compared to a traditional confirmation screen setup? Are there examples of a true "one click and done" setup on the web (does Amazon's 1-Click have a confirmation screen? I've never been courageous enough to try it)?

    EDIT: Just to clarify, when I say "confirmation screen", I mean a second step where the customer reviews the order before placing it. Even if we did do away with it, the user would still receive a message saying "your order has been placed".

    Read the article

  • How to make a small flash swf with ComboBox in Actionscript 3?

    - by Sint
    I have a pure ActionScript 3 project, using flash.* libraries, that compiles down to about 6k (using mxmlc). The program handles about 1k shapes, a few sprites, and a sockets connection; it works great (tastes less filling). Now, how would I add a ComboBox control without incurring excessive bloat? More specifically, I would like to keep the size under 100k.

    So far I have tried:

    - Adobe mx.controls ComboBox example - a simple mxml example compiles to 200+k both on my main Linux box using mxmlc and in Windows using Flash Builder 4
    - Yahoo Astra - uses mx libraries underneath (so as bloated as Adobe?), plus does not contain an exact ComboBox
    - Keith Peters' MinimalComps - seems small, but far from providing ComboBox functionality
    - SPAS (Swing Package for ActionScript) - compiles to 130k, but the alpha version of ComboBox does not let me adjust height...
    - asuilib - compiles to 40k; unfortunately this ComboBox does not provide for scrolling items... if it does not fit on screen there is no way to scroll to it

    Now my questions:

    - Is there a way to lower the size for projects importing mx.controls?
    - Maybe there is a way to fix the SPAS or asuilib ComboBoxes?
    - Perhaps there are some other libraries which provide a ComboBox (or DropList)?

    Read the article

  • Convert rank-per-candidate format to OpenSTV BLT format

    - by kibibu
    I recently gathered, using a questionnaire, a set of opinions on the importance of various software components. Figuring that some form of Condorcet voting method would be the best way to obtain an overall rank, I opted to use OpenSTV to analyze it.

    My data is in tabular format, space delimited, and looks more or less like:

        A B C D E F G  # Candidates
        5 2 4 3 7 6 1  # First ballot. G is ranked first, and E is ranked 7th
        4 2 6 5 1 7 3  # Second ballot
        etc

    In this format, the number indicates the rank and the sequence order indicates the candidate. Each "candidate" has a rank (required) from 1 to 7, where a 1 means most important and a 7 means least important. No duplicates are allowed. This format struck me as the most natural way to represent the output, being a direct representation of the ballot format.

    The OpenSTV/BLT format uses a different method of representing the same info, conceptually as follows:

        G B D C A F E  # Again, G is ranked first and E is ranked 7th
        E B G A D C F  # etc

    The actual numeric file format uses the (1-based) index of the candidate, rather than the label, and so is more like:

        7 2 4 3 1 6 5  # Same ballots as before.
        5 2 7 1 4 3 6  # A -> 1, G -> 7

    In this format, the number indicates the candidate, and the sequence order indicates the rank. The actual, real BLT format also includes a leading weight and a following zero to indicate the end of each ballot, which I don't care too much about for this.

    My question is, what is the most elegant way to convert from the first format to the (numeric) second?
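    For what it's worth, the conversion is just "sort candidate indices by their rank". A minimal Python sketch, assuming the input is exactly the space-delimited rows shown above (with optional '#' comments) and emitting BLT ballot lines with weight 1 and the trailing 0; the function name is my own:

        import sys

        def ballot_to_blt(ranks):
            """ranks[i] is the rank (1 = best) given to candidate i+1.

            Returns the 1-based candidate indices ordered from rank 1 to rank n.
            """
            order = sorted(range(len(ranks)), key=lambda i: ranks[i])
            return [i + 1 for i in order]

        # Examples from the question:
        print(ballot_to_blt([5, 2, 4, 3, 7, 6, 1]))  # [7, 2, 4, 3, 1, 6, 5]
        print(ballot_to_blt([4, 2, 6, 5, 1, 7, 3]))  # [5, 2, 7, 1, 4, 3, 6]

        # Convert a whole file: read rank rows on stdin, write BLT ballot lines.
        for line in sys.stdin:
            line = line.split('#')[0].strip()        # drop trailing comments
            if not line or not line[0].isdigit():    # skip blanks and the header row
                continue
            prefs = ballot_to_blt([int(tok) for tok in line.split()])
            print('1 ' + ' '.join(map(str, prefs)) + ' 0')  # weight 1, terminating 0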

    Read the article

  • Storing Arbitrary Contact Information in Ruby on Rails

    - by Anthony Chivetta
    Hi, I am currently working on a Ruby on Rails app which will function in some ways like a site-specific social networking site. As part of this, each user on the site will have a profile where they can fill in their contact information (phone numbers, addresses, email addresses, employer, etc.).

    A simple solution to modeling this would be to have a database column per piece of information I allow users to enter. However, this seems arbitrary and limited. Further, to support allowing users to enter as many phone numbers as they would like requires the addition of another database table and joins.

    It seems to me that a better solution would be to serialize all the contact information entered by a user into a single field in their row. Since I will never be conditioning a SQL query on this information, such a solution wouldn't be any less efficient. Ideally, I would like to use a vCard as my serialization format. vCards are the standard solution to storing contact information across the web, and reusing tested solutions is a Good Thing. Alternative serialization formats would include simply marshaling a Ruby hash, or YAML. Regardless of serialization format, supporting the reading and updating of this information in a Rails-like way seems to be a major implementation challenge.

    So, here's the question: has anyone seen this approach used in a Rails application? Are there any Rails plugins or gems that make such a system easy to implement? Ideally what I would like is an acts_as_vcard to add to my model object that would handle editing the vcard for me and saving it back to the database.

    Read the article

  • Algorithm To Select Most Popular Places from Database

    - by Russell C.
    We have a website that contains a database of places. For each place, our users are able to take one of the following actions, which we record:

    - VIEW - view its profile
    - RATING - rate it on a scale of 1-5 stars
    - REVIEW - review it
    - COMPLETED - mark that they've been there
    - WISH LIST - mark that they want to go there
    - FAVORITE - mark that it's one of their favorites

    In our database table of places, each place contains a count of the number of times each action above was taken, as well as the average rating given by users:

        views  ratings  avg_rating  completed  wishlist  favorite

    What we want to be able to do is generate lists of the top places using the above information. Ideally, we would want to be able to generate this list using a relatively simple SQL query without needing to do any legwork to calculate additional fields or stack rank places against one another. That being said, since we only have about 50,000 places, we could run a nightly cron job to calculate some fields, such as rankings in different categories, if it would make a meaningful difference in the overall results of our top places.

    I'd appreciate it if you could make some suggestions on how we should think about bubbling the best places to the top, which criteria we should weight more heavily, and, given that information, suggest what the MySQL query would need to look like in order to select the top 10 places.

    One thing to note is that at this time we are less concerned with the recency of a place being popular - meaning that looking at the aggregate information is fine and that more recent data doesn't need to be weighted more heavily. Thanks in advance for your help & advice!
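    One way to frame the "top places" question is to collapse the counters into a single weighted score, computed either in the nightly cron job or inline in the query, and sort on it. The sketch below is Python rather than SQL, and every weight in it is an illustrative assumption, not something the question specifies; damping avg_rating toward a neutral prior is one common trick to keep places with only a couple of ratings from dominating:

        def popularity_score(place, prior_votes=10, prior_rating=3.0):
            """Weighted popularity score for one place (all weights are assumptions).

            `place` is a dict with the counter columns described above:
            views, ratings, avg_rating, completed, wishlist, favorite.
            """
            # Pull the average rating toward a neutral prior when there are few ratings.
            damped_rating = (place["avg_rating"] * place["ratings"]
                             + prior_rating * prior_votes) / (place["ratings"] + prior_votes)
            return (0.5 * place["views"]          # weakest signal: merely looked at it
                    + 2.0 * place["completed"]
                    + 3.0 * place["wishlist"]
                    + 4.0 * place["favorite"]     # strongest explicit signal
                    + 10.0 * damped_rating)

        places = [
            {"views": 900,  "ratings": 40, "avg_rating": 4.4,
             "completed": 25, "wishlist": 60, "favorite": 12},
            {"views": 5000, "ratings": 2,  "avg_rating": 5.0,
             "completed": 3,  "wishlist": 5,  "favorite": 1},
        ]
        top_10 = sorted(places, key=popularity_score, reverse=True)[:10]

    The same arithmetic translates directly into an ORDER BY expression or a precomputed score column.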

    Read the article

  • SQL UNION ALL problem after using UNION ALL more than 10 times

    - by VBGKM
    I'm getting a formatting problem if I use more than 10 UNION ALL statements in my VBA code. If I use 10 or fewer, everything works great. What I'm trying to do is combine 12 worksheets (Excel 2007). I have a numerical column called SC that turns into string and date values if I have more than 10 UNION ALLs. If I try to use ROUND with more than 10 UNION ALLs, my last selection will change all the records by one unit. I'm using Microsoft.ACE.OLEDB.12.0 as my provider, and my connection string has worked for several things in my code so far. Is there any limit on UNION ALL statements when using OLEDB?

    Here is my code:

        Dim StrOr As String
        Dim i As Variant
        Dim Cnt As ADODB.Connection
        Dim Rs As ADODB.Recordset

        For i = 1 To 12
            StrOr = StrOr & " " & "SELECT SC FROM [" & MonthName(i, True) & "$" & "] UNION ALL"
        Next

        StrOr = Left(StrOr, Len(StrOr) - 9) & ";"

        Call GetADOCnt
        Call ADORs

    Read the article

  • conditional update records mysql query

    - by Shakti Singh
    Hi, is there any single MySQL query which can update customers' DOB? I want to update the DOB of those customers which have a DOB greater than the current date. For example: if a customer has a DOB in 2034, update it to 1934; if in 2068, update it to 1968.

    There was a bug in my system: a date of birth earlier than 1970 was stored a century later (e.g. entering 1970 stored 2070). The bug is solved now, but what about the customers which have a wrong DOB? So I have to update their DOB.

    All customers are stored in the customer_entity table, and entity_id is the customer id. Details are as follows:

        mysql> desc customer_entity;
        +------------------+----------------------+------+-----+---------------------+----------------+
        | Field            | Type                 | Null | Key | Default             | Extra          |
        +------------------+----------------------+------+-----+---------------------+----------------+
        | entity_id        | int(10) unsigned     | NO   | PRI | NULL                | auto_increment |
        | entity_type_id   | smallint(8) unsigned | NO   | MUL | 0                   |                |
        | attribute_set_id | smallint(5) unsigned | NO   |     | 0                   |                |
        | website_id       | smallint(5) unsigned | YES  | MUL | NULL                |                |
        | email            | varchar(255)         | NO   | MUL |                     |                |
        | group_id         | smallint(3) unsigned | NO   |     | 0                   |                |
        | increment_id     | varchar(50)          | NO   |     |                     |                |
        | store_id         | smallint(5) unsigned | YES  | MUL | 0                   |                |
        | created_at       | datetime             | NO   |     | 0000-00-00 00:00:00 |                |
        | updated_at       | datetime             | NO   |     | 0000-00-00 00:00:00 |                |
        | is_active        | tinyint(1) unsigned  | NO   |     | 1                   |                |
        +------------------+----------------------+------+-----+---------------------+----------------+
        11 rows in set (0.00 sec)

    The DOB is stored in the customer_entity_datetime table; the value column contains the DOB. This table also stores the values of all the other attributes, such as fname, lname, etc. The attribute with attribute_id 11 is the DOB attribute.

        mysql> desc customer_entity_datetime;
        +----------------+----------------------+------+-----+---------------------+----------------+
        | Field          | Type                 | Null | Key | Default             | Extra          |
        +----------------+----------------------+------+-----+---------------------+----------------+
        | value_id       | int(11)              | NO   | PRI | NULL                | auto_increment |
        | entity_type_id | smallint(8) unsigned | NO   | MUL | 0                   |                |
        | attribute_id   | smallint(5) unsigned | NO   | MUL | 0                   |                |
        | entity_id      | int(10) unsigned     | NO   | MUL | 0                   |                |
        | value          | datetime             | NO   |     | 0000-00-00 00:00:00 |                |
        +----------------+----------------------+------+-----+---------------------+----------------+
        5 rows in set (0.01 sec)

    Thanks.
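    The correction itself is just "shift any date of birth that lies in the future back by one century", which is easy to sanity-check before touching the database. A small Python sketch of that rule follows; the function name and the use of datetime are mine, purely for illustration, and the real fix would be an UPDATE against customer_entity_datetime restricted to attribute_id 11:

        from datetime import datetime

        def fix_dob(dob, today=None):
            """Shift a date of birth back 100 years if it lies in the future.

            e.g. 2034-05-01 -> 1934-05-01, 2068-11-12 -> 1968-11-12.
            """
            today = today or datetime.now()
            if dob > today:
                return dob.replace(year=dob.year - 100)
            return dob

        print(fix_dob(datetime(2034, 5, 1)))   # 1934-05-01 00:00:00
        print(fix_dob(datetime(1980, 5, 1)))   # unchanged: 1980-05-01 00:00:00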

    Read the article
