Search Results

Search found 7805 results on 313 pages for 'high voltage'.


  • Cooperative/Non-preemptive threading: avoiding deadlocks?

    - by Wayne
    Any creative ideas to avoid deadlocks on a yield or sleep with cooperative/non-preemptive multitasking, without resorting to an O/S Thread.Sleep(10)? Typically the yield or sleep call will call back into the scheduler to run other tasks, but this can sometimes produce deadlocks.

    Some background: this application has an enormous need for speed and, so far, it's extremely fast compared to other systems in the same industry. One of the speed techniques is cooperative/non-preemptive threading rather than paying the cost of a context switch for O/S threads. The high-level design is a priority manager which calls out to tasks depending on priority and processing time. Each task does one "iteration" of work and returns to wait its turn again in the priority queue.

    The tricky thing with non-preemptive threading is what to do when you want a particular task to stop in the middle of its work and wait for some other event from a different task before continuing. In this case, we have 3 tasks, A, B and C, where A is a controller that must synchronize the activity of B and C. First, A starts both B and C. Then B yields, so C gets invoked. When C yields, A sees they are both inactive and decides it's time for B to run but not time for C yet. But B is now stuck in a yield that has called C, so it can never run. Sincerely, Wayne
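
    A minimal Python sketch (not the poster's code, which sounds like C#) of one way to break the re-entrancy: every yield returns control to a single scheduler loop rather than calling into other tasks, so no task is ever stuck inside another task's yield.

        import collections

        def make_task(name, iterations):
            # Each 'yield' hands control back to the scheduler loop; tasks never
            # call into each other, so there is no nested yield to deadlock on.
            def task():
                for i in range(iterations):
                    print(name, "iteration", i)
                    yield
            return task()

        def scheduler(tasks):
            runnable = collections.deque(tasks)
            while runnable:
                task = runnable.popleft()      # a priority queue would go here
                try:
                    next(task)                 # run exactly one iteration of work
                    runnable.append(task)      # back in line to wait its turn
                except StopIteration:
                    pass                       # task finished, drop it

        scheduler([make_task("B", 3), make_task("C", 2)])

    With this shape, a controller like task A becomes data the scheduler consults (for example a per-task runnable flag) rather than something a yielding task calls into.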

    Read the article

  • Seeking reporting or templating tool to generate large formatted PDF reports from dataset

    - by Mr. Tacos
    Say I have some data in MySQL or a big ole CSV file. I also have a report. It's a PDF, call it 100 pages long. I need to generate variations on this PDF for slices of the data.

    More specific example: I have a CSV file with each StackOverflow user in a row and each column contains various statistics about that user. I have a report called "Your StackOverflow Performance". It's got lots of text, always the same, but each section contains something like: "You Vs. The Average StackOverflow Poster on this metric". I want a table to appear there that has the average data, which is the same in every run of the PDF, in one column. In the second column, I want your data, which is different for each PDF/row in the CSV file/user of StackOverflow.

    I'm pretty sure people use things like Crystal for this? Is there something in MS SQL Server that's good for this? An open source template language? I'm not even really sure if what I need is called a 'reporting' tool (since I don't really need to do any crunching; the data in this case is being crunched by a series of scripts and SPSS, so I don't need bands and subbands and so on) or 'templating'. Is there even such a thing as templating PDFs? Natch, I'd be fine with something that generates output easily scriptable to PDF, like eps, but not something like HTML. The report formatting is fussy, done, externally determined and handed down from on high. It's print-oriented, not webby. Thanks in advance.
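
    As one illustration of the 'templating' route (not a product recommendation), here is a rough Python/ReportLab sketch that loops over the CSV and writes one PDF per row, with the fixed averages in one column and the per-user numbers in the other. Column names and metrics are invented for the example.

        import csv
        from reportlab.lib.pagesizes import letter
        from reportlab.lib.styles import getSampleStyleSheet
        from reportlab.platypus import Paragraph, SimpleDocTemplate, Table

        styles = getSampleStyleSheet()
        AVERAGES = {"answers": 42, "accept_rate": 61}   # same in every run

        with open("users.csv", newline="") as f:
            for row in csv.DictReader(f):               # one report per user
                doc = SimpleDocTemplate("report_%s.pdf" % row["user_id"], pagesize=letter)
                story = [
                    Paragraph("Your StackOverflow Performance", styles["Title"]),
                    Paragraph("You vs. the average poster:", styles["Normal"]),
                    Table([["Metric", "Average", "You"],
                           ["Answers", AVERAGES["answers"], row["answers"]],
                           ["Accept rate", AVERAGES["accept_rate"], row["accept_rate"]]]),
                ]
                doc.build(story)

    The same loop-over-rows idea applies to LaTeX, XSL-FO or a commercial report designer: the fixed 100 pages become the template and only the "you" column is substituted per run.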

    Read the article

  • Change Data Capture or Change Tracking - Same as Traditional Audit Trail Table?

    - by HardCode
    Before I delve into the abyss of Microsoft documentation any deeper, I'd like to know if someone experienced with Change Data Capture and Change Tracking knows whether one or both of these can be used to replace the traditional ... "Audit trail table copy of the 'real table' (all of the fields of the original table, plus date/time, user ID, and DML action field) inserted into by Triggers" ... setup for a database table audit trail, where the trigger populates the audit trail table (which is all manual work). The MSDN overview documentation explains at a high level what Change Data Capture and Change Tracking are, but it isn't clear enough to me, and doesn't state outright, that these tools can be used to replace the traditional audit trail tables we've made so often. Can someone with any experience using Change Data Capture and Change Tracking save me a lot of time, or confirm that I am spending time looking at the right tool? The critical part of our audit trail is capturing all changes to a table's fields (on INSERT, UPDATE, DELETE), when it happened, and who did it. These changes are commonly provided to an end user chronologically via an audit trail report. Which raises another question: if Change Data Capture or Change Tracking is the solution, I'd assume that this data can be queried just like data from a normal table? EDIT: I need a permanent audit trail, regardless of time. I see that Change Data Capture has to do with the transaction logs, so this sounds finite to me.

    Read the article

  • Which technology is best suited to store and query a huge readonly graph?

    - by asmaier
    I have a huge directed graph: it consists of 1.6 million nodes and 30 million edges. I want the users to be able to find all the shortest connections (including incoming and outgoing edges) between two nodes of the graph (via a web interface). At the moment I have stored the graph in a PostgreSQL database. But that solution is not very efficient and elegant; I basically need to store all the edges of the graph twice (see my question PostgreSQL: How to optimize my database for storing and querying a huge graph). It was suggested to me to use a GraphDB like neo4j or AllegroGraph. However the free version of AllegroGraph is limited to 50 million nodes and also has a very high-level API (RDF), which seems too powerful and complex for my problem. Neo4j on the other hand has only a very low-level API (and the Python interface is not mature yet). Both of them seem to be more suited for problems where nodes and edges are frequently added to or removed from a graph. For a simple search on a graph, these GraphDBs seem to be too complex. One idea I had would be to "misuse" a search engine like Lucene for the job, since I'm basically only searching connections in a graph. Another idea would be to have a server process storing the whole graph (500MB to 1GB) in memory. The clients could then query the server process and could traverse the graph very quickly, since the graph is stored in memory. Is there an easy way to write such a server (preferably in Python) using some existing framework? Which technology would you use to store and query such a huge readonly graph?
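
    A minimal sketch of the in-memory idea in Python (assuming an edge-list file in "source target" format): load adjacency lists into plain dicts and answer queries with BFS. A 1.6M-node / 30M-edge graph of integer ids fits comfortably in a few GB, and the function could sit behind any simple RPC or web layer.

        import collections

        graph = collections.defaultdict(list)
        with open("edges.txt") as f:                 # assumed format: "source target" per line
            for line in f:
                a, b = map(int, line.split())
                graph[a].append(b)
                graph[b].append(a)                   # make incoming edges traversable too

        def shortest_path(start, goal):
            """Plain BFS; returns one shortest connection between two nodes."""
            prev = {start: None}
            queue = collections.deque([start])
            while queue:
                node = queue.popleft()
                if node == goal:
                    path = []
                    while node is not None:
                        path.append(node)
                        node = prev[node]
                    return path[::-1]
                for nxt in graph[node]:
                    if nxt not in prev:
                        prev[nxt] = node
                        queue.append(nxt)
            return None

    Returning all shortest connections rather than one would mean keeping every predecessor at the same BFS depth instead of a single prev pointer, but the memory layout and query pattern stay the same.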

    Read the article

  • Understanding OOP Principles in passing around objects/values

    - by Hans
    I'm not quite grokking a couple of things in OOP and I'm going to use a fictional understanding of SO to see if I can get help understanding. So, on this page we have a question. You can comment on the question. There are also answers. You can comment on the answers.

        Question
          - comment
          - comment
          - comment
        Answer
          - comment
        Answer
          - comment
          - comment
          - comment
        Answer
          - comment
          - comment

    So, I'm imagining a very high level understanding of this type of system (in PHP, not .Net as I am not yet familiar with .Net) would be like:

        $question = new Question;
        $question->load($this_question_id); // from the URL probably
        echo $question->getTitle();

    To load the answers, I imagine it's something like this ("A"):

        $answers = new Answers;
        $answers->loadFromQuestion($question->getID()); // or $answers->loadFromQuestion($this_question_id);
        while($answer = $answers->getAnswer()) {
            echo $answer->showFormatted();
        }

    Or, would you do ("B"):

        $answers->setQuestion($question); // inject the whole obj, so we have access to all the data and public methods in $question
        $answers->loadFromQuestion();     // the ID would be found via $this->question->getID() instead of from the argument passed in
        while($answer = $answers->getAnswer()) {
            echo $answer->showFormatted();
        }

    I guess my problem is, I don't know when or if I should be passing in an entire object, and when I should just be passing in a value. Passing in the entire object gives me a lot of flexibility, but it's more memory and subject to change, I'd guess (like a property or method rename). If "A" style is better, why not just use a function? OOP seems pointless here. Thanks, Hans
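
    For what it's worth, the same trade-off in a few lines of Python (names are illustrative only): style "A" passes just the value the collection needs, style "B" injects the whole object so the collection can use its behaviour, not just its id.

        class Question:
            def __init__(self, question_id, title):
                self.id = question_id
                self.title = title

        def fetch_answers(question_id):
            # stand-in for the real database query
            return ["answer %d for question %s" % (i, question_id) for i in range(3)]

        class Answers:
            def __init__(self):
                self._answers = []

            def load_from_question_id(self, question_id):
                # style "A": depend only on the value actually needed
                self._answers = fetch_answers(question_id)

            def load_from_question(self, question):
                # style "B": depend on the object; handy if we later need
                # question.title or other behaviour, at the cost of tighter coupling
                self._answers = fetch_answers(question.id)

    A common rule of thumb is to pass the whole object when the callee genuinely uses its behaviour, and just the value when the id alone is enough; the object is passed by reference/handle anyway, so memory is rarely the deciding factor.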

    Read the article

  • Configure IIS7 to serve static content through the ASP.NET Runtime

    - by Anton Gogolev
    I searched high and low and still cannot find a definite answer. How do I configure IIS 7.0 or a Web Application in IIS so that the ASP.NET Runtime will handle all requests -- including ones to static files like *.js, *.gif, etc.? What I'm trying to do is as follows. We have a kind of SaaSy site, which we can "skin" for every customer. "Skinning" means developing a custom master page and using a bunch of *.css and other images. Quite naturally, I'm using a VirtualPathProvider, which operates like this:

        public override System.Web.Hosting.VirtualFile GetFile(string virtualPath)
        {
            if(PhysicalFileExists(virtualPath))
            {
                var virtualFile = base.GetFile(virtualPath);
                return virtualFile;
            }
            if(VirtualFileExists(virtualPath))
            {
                var brandedVirtualPath = GetBrandedVirtualPath(virtualPath);
                var absolutePath = HttpContext.Current.Server.MapPath(brandedVirtualPath);
                Trace.WriteLine(string.Format("Serving '{0}' from '{1}'", brandedVirtualPath, absolutePath),
                    "BrandingAwareVirtualPathProvider");
                var virtualFile = new VirtualFile(brandedVirtualPath, absolutePath);
                return virtualFile;
            }
            return null;
        }

    The basic idea is as follows: we have a branding folder inside our webapp, which in turn contains folders for each "brand", with "brand" being equal to the host name. That is, requests to http://foo.example.com/ should use static files from branding/foo_example_com, whereas http://bar.example.com/ should use content from branding/bar_example_com. Now what I want IIS to do is to forward all requests for static files to the StaticFileHandler, which would then use this whole "infrastructure" and serve the correct files. However, try as I might, I cannot configure IIS to do this.

    Read the article

  • DOS batch file to enter commands in proprietary java app and receive feedback?

    - by Justine
    Hello, I'm working on a project in which I'd like to be able to turn lights on and off in the Duke Smart Home via a high frequency chirp. The lighting system is called Clipsal Square-D and the program that gives a user access to the lighting controls is called CGate. I was planning on doing some signal processing in Matlab, then creating a batch file from Matlab to interact with CGate. CGate is a proprietary Java app that, if run from a DOS command line, opens up another window that looks like the command prompt. I have a batch file that can check to see if CGate is running and, if not, open it. But what I can't figure out how to do is actually run commands in the CGate program from the batch file and, likewise, read the response back from CGate. An example of such a command is "noop", which should return "200 OK". Any help would be much appreciated! Thank you very much in advance :) (here's my existing batch file by the way)

        @ECHO off
        goto checkIfOpen

        :checkIfOpen
        REM pv finds all open processes and puts it in result.txt
        %SystemRoot%\pv\pv.exe > result.txt
        REM if result has the word notepad in it then notepad is running
        REM if not then it opens notepad
        FIND "notepad.exe" result.txt
        IF ERRORLEVEL 1 START %SystemRoot%\system32\Clipsal\C-Gate2\cgate.exe
        goto end

        :end
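
    If the batch approach stays stubborn, a possible alternative (assuming CGate actually reads commands from its standard input rather than only from its own window, which is worth verifying) is to drive it through pipes from a small script. A Python sketch:

        import subprocess

        # Launch CGate with its stdin/stdout wired to pipes so we can talk to it
        # interactively -- something a plain batch file cannot easily do.
        proc = subprocess.Popen(
            [r"C:\Windows\system32\Clipsal\C-Gate2\cgate.exe"],
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            text=True,
        )

        proc.stdin.write("noop\n")        # send a command
        proc.stdin.flush()
        reply = proc.stdout.readline()    # hope for something like "200 OK"
        print("CGate said:", reply.strip())

    Many building-automation gateways also expose a TCP command port; if CGate does, a telnet-style socket connection may be even simpler than wrapping its console.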

    Read the article

  • Mac OS X: Getting detailed process information (specifically its launch arguments) for arbitrary run

    - by Jasarien
    I am trying to detect when particular applications are launched. Currently I am using NSWorkspace, registering for the "did launch application" notification. I also use the runningApplications method to get apps that are currently running when my app starts. For most apps, the name of the app bundle is enough. I have a plist of "known apps" that I cross-check with the name passed in the notification. This works fine until you come across an app that acts as a proxy for launching another application using command line arguments. Example: the newly released Portal on the Mac doesn't have a dedicated app bundle. Steam can create a shortcut, which does nothing more than launch the hl2_osx app with the -game argument and portal as its parameter. Since more Source based games are heading to the Mac, I imagine they'll use the same method to launch, effectively running the hl2_osx app with the -game argument. Is there a nice way to get a list of the arguments (and their parameters) using a Cocoa API? NSProcessInfo comes close, offering an `-arguments' method, but only provides information for its own process... NSRunningApplication offers the ability to get information about arbitrary apps using a PID, but no command line args... Is there anything that fills the gap between the two? I'm trying not to go down the route of spawning an NSTask to run ps -p [pid] and parsing the output... I'd prefer something more high level.

    Read the article

  • How to properly combine two files in XAML in Microsoft Blend?

    - by MartyIX
    Hello, I have a test project with the file MainWindow.xaml with the content:

        <Window x:Class="MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                xmlns:ad="clr-namespace:AvalonDock;assembly=AvalonDock"
                xmlns:diag="clr-namespace:System.Diagnostics;assembly=WindowsBase"
                xmlns:view="clr-namespace:Sokoban.View;assembly=Solvers"
                Title="Window1" Height="300" Width="300" Loaded="Window_Loaded">
            <ad:DockingManager x:Name="dockingManager">
                <ad:ResizingPanel Orientation="Vertical">
                    <view:Solvers x:Name="solvers" diag:PresentationTraceSources.TraceLevel="High" />
                    <!-- LINE BELOW DEMONSTRATES WORKING CODE INSTEAD OF LINE ABOVE -->
                    <!--<ad:DocumentPane Name="GamesDocumentPane" HorizontalAlignment="Stretch" VerticalAlignment="Stretch">
                        <ad:DockableContent x:Name="classesContent" Title="Classes">
                            <TextBlock>test</TextBlock>
                        </ad:DockableContent>
                    </ad:DocumentPane>-->
                </ad:ResizingPanel>
            </ad:DockingManager>
        </Window>

    and in another project I have the file Solvers.xaml:

        <ad:DocumentPane x:Class="Sokoban.View.Solvers"
                         xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                         xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                         xmlns:ad="clr-namespace:AvalonDock;assembly=AvalonDock"
                         xmlns:diag="clr-namespace:System.Diagnostics;assembly=WindowsBase"
                         Name="GamesDocumentPane" HorizontalAlignment="Stretch" VerticalAlignment="Stretch">
        </ad:DocumentPane>

    When I open my Visual Studio solution in Microsoft Blend 4, I see the error "InvalidOperationException: DocumentPane must be put under a DockingManager!" when I open either MainWindow.xaml or Solvers.xaml. It is all right in Solvers.xaml, because there really is no DockingManager there, but MainWindow.xaml should work, shouldn't it? How do I solve the problem? Note: it seems to me that the files are processed separately, and because the file Solvers.xaml contains the error, the MainWindow.xaml file also reports the very same error. Note 2: the XAML files use the AvalonDock library. Is there a way to say that Solvers.xaml is only an extension of another file? Thank you for any help!

    Read the article

  • Silverlight 4 seems to be starving for memory

    - by Marco
    I have been playing a bit with Silverlight and am trying to port my Silverlight 3.0 application to Silverlight 4.0. My application loads different XAP files and, upon a user request, creates an instance of a XAML user control and adds it to the main container, in a sort of MEF approach so that I can have an extensible and pluggable application. The application is pretty huge and, to keep the performance and initial loading acceptable, I have built some helper classes to load in the background all pages and user controls that might be used later on. On Silverlight 3.0 everything was running smoothly without any problem so far. Switching to SL 4.0, I have noticed that when the process approaches creating the instances of the user controls, the layout freezes unexpectedly for a minute and sometimes more. Looking at the task manager, the memory usage of IE jumps from 50MB to 400MB and sometimes up to 1.5 GB. If the process doesn't take that much, the layout is rendered properly even though the memory usage is still extremely high. Otherwise everything crashes due to an out of memory exception. Running the same application compiled in SL3, the memory used is about 200MB when all the user controls are loaded. Time spent to load the application in SL3 is about 10 seconds, while it takes up to 3 mins in SL4. There are no transparencies, no opacities set, no effects and no animations in the layout. User controls are instantiated on the fly and added or removed in the visual tree on purpose when the user switches from one screen to another. The resources are all cleaned properly when a user control is removed from the visual tree to allow the GC to operate in the background. I may be doing something wrong, but I could not figure out where exactly to pin down the source of this problem. As far as I know there is no memory profiler in SL4 that can help me find where to look. But then again, I may not be up to date on the debugging tools available.

    Read the article

  • Writing a JMS Publisher without "public static void main"

    - by The Elite Gentleman
    Hi guys, Every example I've seen on the web, e.g. http://www.codeproject.com/KB/docview/jms_to_jms_bridge_activem.aspx, creates a publisher and subscriber with a public static void main method. I don't think that'll work for my web application. I'm learning JMS and I've set up Apache ActiveMQ to run on JBoss 5 and Tomcat 6 (with no glitches). I'm writing a messaging JMS service that needs to send email asynchronously. I've already written a JMS subscriber that receives the message (the class inherits MessageListener). My question is simple: how do I write a publisher so that my web applications can call it? Does it have to be published somewhere? My thought is to create a publisher with a no-argument constructor and get the MessageQueue Factory, etc. from the JNDI pool (in the constructor). Is my idea correct? How do I subscribe my subscriber to the Queue Receiver? (So far, the subscriber has no constructor, and if I write a constructor, do I always subscribe myself to the Queue receiver?) Thanks for your help, and sorry if my terminology is not up to scratch; there are so many Java terms that I get lost sometimes (maybe a Java GPS will do! :-) ) PS Is there a tutorial out there that explains how to write a "better" (better can mean anything, but in my case it's all about performance under high-demand requests) JMS Publisher and Subscriber that I can run on an Application Server such as JBoss or Glassfish? Don't forget that the JMS application will need "guaranteed" uptime, as many applications will use this.

    Read the article

  • Where to find algorithms for standard math functions?

    - by dsimcha
    I'm looking to submit a patch to the D programming language standard library that will allow much of std.math to be evaluated at compile time using the compile-time function evaluation facilities of the language. Compile-time function evaluation has several limitations, the most important ones being:

    - You can't use assembly language.
    - You can't call C code or code for which the source is otherwise unavailable.

    Several std.math functions violate these and compile-time versions need to be written. Where can I get information on good algorithms for computing things such as logarithms, exponents, powers, and trig functions? I prefer just high level descriptions of algorithms to actual code, for two reasons:

    - To avoid legal ambiguity and the need to make my code look "different enough" from the source to make sure I own the copyright.
    - I want simple, portable algorithms. I don't care about micro-optimization as long as they're at least asymptotically efficient.

    Edit: D's compile time function evaluation model allows floating point results computed at compile time to differ from those computed at runtime anyhow, so I don't care if my compile-time algorithms don't give exactly the same result as the runtime version as long as they aren't less accurate to a practically significant extent.
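
    As an example of the kind of portable, assembly-free algorithm meant here, the exponential can be computed with range reduction plus a short Taylor series. The sketch is in Python only for brevity; the same structure translates directly to D and is illustrative rather than libm-accurate.

        import math

        def my_exp(x, terms=30):
            """e**x via range reduction: write x = k*ln(2) + r with |r| <= ln(2)/2,
            sum the Taylor series for e**r (fast convergence on the small r),
            then scale by 2**k."""
            ln2 = math.log(2.0)          # would be a hard-coded constant at compile time
            k = int(round(x / ln2))
            r = x - k * ln2
            total, term = 1.0, 1.0
            for n in range(1, terms):
                term *= r / n
                total += term
            return total * 2.0 ** k

        print(my_exp(5.0), math.exp(5.0))   # should agree to many digits

    Logarithms are usually handled the same way in reverse (extract the exponent, then a series or rational approximation on the reduced argument), and the trig functions reduce the argument modulo pi/2 before a polynomial; standard references are Cody & Waite and Hart's "Computer Approximations".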

    Read the article

  • Delphi 2009 dbExpress and Interbase: Unicode migration steps and risks?

    - by mjustin
    Currently, our database uses Win1252 as the only character encoding. We will have to support Unicode in the database tables soon, which means we have to perform this migration for four databases and around 80 Delphi applications which run in-house in a 24/7 environment. Are there recommendations for database migrations to UTF-8 (or UNICODE_FSS) for Delphi applications? Some questions are listed below. Many thanks in advance for your answers!

    - are there tools which help with the migration of the existing databases (sizes between 250 MB and 2 GB, no Blob fields), by dumping the data, recreating the database with UNICODE_FSS or UTF-8, and loading the data back?
    - are there known problems with Delphi 2009, dbExpress and Interbase 7.5 related to Unicode character sets?
    - would you recommend upgrading the databases to Interbase 2009 first? (This upgrade is planned but does not have a high priority.)
    - can we simply migrate the database and Delphi will handle the Unicode character sets automatically, or will we have to change all character field types in every Datamodule (dfm and source code) too?
    - which strategy would you recommend for working on the migration in parallel with the normal development and maintenance of the existing application? The application runs in-house, so development and database administration are done internally.

    Update: one problem I have found now is that there are two different persistent field types for Unicode and non-Unicode character fields. For the existing database, dbExpress creates TStringField objects. For the Unicode database fields, dbExpress creates (or expects!) TWideStringField objects. This looks like a lot of work lies ahead. While we could try to avoid persistent fields (and add calculated fields at run time), of course we would prefer a solution which does not require so many changes in existing units and DFM files.

    Read the article

  • Can pdflatex (or any tex package) automatically rescale included images which have been reduced in s

    - by drfrogsplat
    I'm writing my thesis in LaTeX, generating it with pdflatex. I have a large number of figures, many of which are bitmaps (as opposed to SVG) in PNG/JPEG format. I've generally created them to be fairly high resolution (say 1600x1200-ish) to ensure that whatever size they end up in the document, they'll be at least 300dpi when printed. As I'm writing/laying out the document, I'm including graphics (using \includegraphics from the graphicx package) and setting widths/heights as appropriate (e.g. subfigures are quite small). I don't need the images to be any more than about 300 dpi at best, so where I have shrunk a 1600x1200 image down to say 5cm, the image is now at 800 dpi. So despite including some very small (on the page) images, the PDF is becoming quite large. Is there a way to tell pdflatex or graphicx (or something else involved?) to convert all images to a maximum of 300 dpi, based on the dimensions I'm setting with say \includegraphics[width=2in]{filename}? i.e. so it scales the image to a max of 600x600 pixels as it includes it in the PDF (leaving the original file untouched). I know I can resize the original images with various command line applications and include the pre-resized versions, but given the images vary in size considerably, it wouldn't be as simple as making sure they're all 300dpi for a constant printed size. It'd also be nice to be able to easily create different versions of PDFs (web vs final print) without resizing images manually, so that the 'web' PDF caps images at say 72-100 dpi while the final print one caps at 600 (if at all).
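
    pdflatex itself won't resample bitmaps, so one workaround (not the only one; Ghostscript can also downsample the finished PDF) is a small preprocessing script that writes capped copies of the figures and is re-run with a different DPI for the web and print builds. A rough Python/Pillow sketch, assuming the printed width of each figure is known or approximated:

        import glob
        import os
        from PIL import Image

        MAX_DPI = 300                     # 72-100 for the web build, 600 for print
        PRINTED_WIDTH_INCHES = 2.0        # e.g. what \includegraphics[width=2in] will use
        max_pixels = int(MAX_DPI * PRINTED_WIDTH_INCHES)

        os.makedirs("figures-capped", exist_ok=True)
        for path in glob.glob("figures/*.png"):
            img = Image.open(path)
            if img.width > max_pixels:
                scale = max_pixels / img.width
                img = img.resize((max_pixels, int(img.height * scale)), Image.LANCZOS)
            img.save(os.path.join("figures-capped", os.path.basename(path)))

    Pointing \graphicspath at figures-capped (or back at figures) then switches between the web and print variants without touching the originals.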

    Read the article

  • How do we greatly optimize our MySQL database (or replace it) when using joins?

    - by jkaz
    Hi there, this is the first time I'm approaching an extremely high-volume situation. This is an ad server based on MySQL. However, the query that is used incorporates a lot of JOINs and is generally just slow. (This is Rails ActiveRecord, btw.)

        sel = Ads.find(:all,
          :select => '*',
          :joins => "JOIN campaigns ON ads.campaign_id = campaigns.id
                     JOIN users ON campaigns.user_id = users.id
                     LEFT JOIN countries ON countries.campaign_id = campaigns.id
                     LEFT JOIN keywords ON keywords.campaign_id = campaigns.id",
          :conditions => [flashstr + "keywords.word = ? AND ads.format = ? AND campaigns.cenabled = 1 AND (countries.country IS NULL OR countries.country = ?) AND ads.enabled = 1 AND campaigns.dailyenabled = 1 AND users.uenabled = 1", kw, format, viewer['country'][0]],
          :order => order,
          :limit => limit)

    My questions:

    - Is there an alternative database like MySQL that has JOIN support, but is much faster? (I know there's Postgre, still evaluating it.)
    - Otherwise, would firing up a MySQL instance, loading a local database into memory and re-loading that every 5 minutes help?
    - Otherwise, is there any way I could switch this entire operation to Redis or Cassandra, and somehow change the JOIN behavior to match the (non-JOIN-able) nature of NoSQL?

    Thank you!
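
    A sketch of the second idea (serving from memory and refreshing every few minutes) in Python; the function names are placeholders, and in the Rails app the equivalent would be a cached query result rather than this literal code:

        import threading
        import time

        REFRESH_SECONDS = 300                 # reload the joined rows every 5 minutes
        _lock = threading.Lock()
        _cache = {"rows": [], "loaded_at": 0.0}

        def load_joined_rows():
            # stand-in for running the big JOIN once against MySQL
            # and returning plain row dicts
            return []

        def get_ads():
            with _lock:
                if time.time() - _cache["loaded_at"] > REFRESH_SECONDS:
                    _cache["rows"] = load_joined_rows()
                    _cache["loaded_at"] = time.time()
                return _cache["rows"]

    Whether this is acceptable depends on how stale five-minute-old campaign and keyword data is allowed to be; the per-request filtering (keyword, format, country) would then run over the in-memory rows instead of in SQL.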

    Read the article

  • Mouse bugginess - SWFObject, Firefox 3 for Mac, and Flash

    - by justinbach
    I'm pulling my hair out over a problem I'm encountering on Firefox 3.5 & 3.6 on OS X. I'm using SWFObject to embed an AmMap of the US, which has rollover tooltips for various states. The rollovers are working fine in every other browser I've tested, but they're very buggy on FF for Mac -- most of the time they don't show up at all, but if I persistently click a state that's supposed to have a hover event, I might catch a glimpse of the tooltip. Here's the code for the SWFObject embed (incidentally, this isn't being done in the document head due to templating reasons). The reason the SWFObject initialization is wrapped in jQuery's document.ready handler is that the swf wasn't even appearing in FF 3.5.9 for Mac until I added that in:

        $(document).ready(function() {
            var params = {
                quality: "high",
                scale: "noscale",
                allowscriptaccess: "always",
                allowfullscreen: "true",
                bgcolor: "#FFFFFF",
                base: "/<?php print LANG . "/locations/" ?>"
            };
            var flashvars = {
                path: "",
                settings_file: "mapsettings",
                data_file: "mapdata"
            };
            var attributes = {
                id: "flashmap",
                name: "flashmap"
            };
            swfobject.embedSWF("/assets/flash/ammap.swf", "flashmap", "470", "300", "8",
                null, flashvars, params, attributes);
        });

    Any feedback would be greatly appreciated... site goes live in 48 hours! Thanks!

    Read the article

  • How do you concat multiple rows into one column in SQL Server?

    - by Jason
    I've searched high and low for the answer to this, but I can't figure it out. I'm relatively new to SQL Server and don't quite have the syntax down yet. I have this data structure (simplified):

        Table "Users":             Table "Tags":
        UserID  UserName           TagID  UserID  PhotoID
        1       Bob                1      1       1
        2       Bill               2      2       1
        3       Jane               3      3       1
        4       Sam                4      2       2

        Table "Photos":            Table "Albums":
        PhotoID  UserID  AlbumID   AlbumID  UserID
        1        1       1         1        1
        2        1       1         2        3
        3        1       1         3        2
        4        3       2
        5        3       2

    I'm looking for a way to get all the photo info (easy) plus all the tags for that photo concatenated, like CONCAT(username, ', ') AS Tags, of course with the last comma removed. I'm having a bear of a time trying to do this. I've tried the method in this article but I get an error when I try to run the query saying that I can't use DECLARE statements... do you guys have any idea how this can be done? I'm using VS08 and whatever DB is installed with it (I normally use MySQL so I don't know what flavor of DB this really is... it's an .mdf file?)

    Read the article

  • Why can't I create a database in an empty ASP MVC 2 project using Project->Add->New Item->SQL Server

    - by Dr Dork
    I'm diving head first into ASP MVC and am playing around with creating and manipulating a database. I did a search and found this tutorial for creating a database, however when I follow it, I get this error when trying to add a new database to my fresh, empty ASP MVC 2 project... "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)" The only requirement the tutorial mentioned was SQL Server Express, but when I went to download it, it said it was already installed. I'm assuming it was part of the VS 2010 RC I installed and am running, so I don't know what else I need or if I am missing something. This is all new to me, so I'm sure I'm missing something obvious here, and after I'm done posting this question I plan to do some more research into the topic of databases and how they work with ASP MVC. In the meantime, I was hoping you could help me answer a couple of high-level questions... What am I missing/forgetting to do that is causing this error? Any suggestions for good resources/tutorials that focus on using databases with ASP MVC? I've done a lot of database programming in the past, so I'm familiar with the concepts of relational databases and the SQL language. I wish I could find a good resource for learning how to work with them in an ASP dev environment, as well as a good breakdown of all the related technologies used for working with them (i.e. LINQ to SQL). Thanks so much in advance for all your help! I'm going to start researching these questions right now.

    Read the article

  • Most efficient way to send images across processes

    - by Heinrich Ulbricht
    Goal: pass images generated by one process efficiently and at very high speed to another process. The two processes run on the same machine and on the same desktop. The operating system may be WinXP, Vista or Win7.

    Detailed description: the first process is solely for controlling the communication with a device which produces the images. These images are about 500x300px in size and may be updated up to several hundred times per second. The second process needs these images to display them. The first process uses a third-party API to paint the images from the device to an HDC. This HDC has to be provided by me. Note: there is already a connection open between the two processes. They are communicating via anonymous pipes and share memory-mapped file views.

    Thoughts: how would I achieve this goal with as little work as possible? And I mean both work for me and for the computer. I am using Delphi, so maybe there is some component available for doing this? I think I could always paint to any image component's HDC, save the content to a memory stream, copy the contents via the memory-mapped file, unpack it on the other side and paint it there to the destination HDC. I also read about an IPicture interface which can be used to marshal images. What are your ideas? I appreciate every thought on this!
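
    Since a memory-mapped file view is already shared, the usual zero-copy shape is: the producer paints straight into a pixel buffer that lives in the shared mapping, then signals "new frame" (a named event, or a byte down the existing pipe); the consumer blits directly from the same memory. A Python stand-in for the idea (in Delphi this would be CreateFileMapping/MapViewOfFile plus an event; the 500x300x4 size is taken from the question):

        import numpy as np
        from multiprocessing import shared_memory

        WIDTH, HEIGHT, BPP = 500, 300, 4

        # producer: create one named block and keep rewriting pixels into it
        shm = shared_memory.SharedMemory(create=True, size=WIDTH * HEIGHT * BPP, name="frame")
        frame = np.ndarray((HEIGHT, WIDTH, BPP), dtype=np.uint8, buffer=shm.buf)
        frame[:] = 255                       # pretend this is the newest device image

        # consumer (normally in the other process): attach by name, no copying
        view = shared_memory.SharedMemory(name="frame")
        pixels = np.ndarray((HEIGHT, WIDTH, BPP), dtype=np.uint8, buffer=view.buf)
        print(pixels[0, 0])                  # read the frame directly

        view.close()
        shm.close()
        shm.unlink()

    The point is that only the "frame ready" signal crosses the pipe; the pixels are never serialized into a stream and back, which matters at several hundred updates per second.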

    Read the article

  • Automatically install and launch a code-signed application from Safari

    - by Thomas Jung
    Is it possible and if so what are the steps necessary to package (or build) a Mac OS X application and code-sign it so that it can be downloaded with Safari and automatically launch? ... possibly after the user responds to some sort of dialog explaining that it is a signed application and the publisher has been verified. An example of the user experience I am trying to create is "installing Google Chrome for the first time on Windows", which is a 3-click, less-than-a-minute process. For the concerned among you: I am not trying to create a drive-by download. I am fine with some sort of intermittent user step approving the download. I just want to make the installation as quick and painless as possible and not require the user drag the app from a mounted DMG into the application folder. This may not 100% jibe with established Mac OS X user interaction guidelines, but it would work better for the not-power users. I only need the high-level steps or pointers to resources ... my google-fu was weak on this one.

    Read the article

  • Neural Network settings for fast training

    - by danpalmer
    I am creating a tool for predicting the time and cost of software projects based on past data. The tool uses a neural network to do this and so far, the results are promising, but I think I can do a lot more optimisation just by changing the properties of the network. There don't seem to be any rules or even many best-practices when it comes to these settings so if anyone with experience could help me I would greatly appreciate it. The input data is made up of a series of integers that could go up as high as the user wants to go, but most will be under 100,000 I would have thought. Some will be as low as 1. They are details like number of people on a project and the cost of a project, as well as details about database entities and use cases. There are 10 inputs in total and 2 outputs (the time and cost). I am using Resilient Propagation to train the network. Currently it has: 10 input nodes, 1 hidden layer with 5 nodes and 2 output nodes. I am training to get under a 5% error rate. The algorithm must run on a webserver so I have put in a measure to stop training when it looks like it isn't going anywhere. This is set to 10,000 training iterations. Currently, when I try to train it with some data that is a bit varied, but well within the limits of what we expect users to put into it, it takes a long time to train, hitting the 10,000 iteration limit over and over again. This is the first time I have used a neural network and I don't really know what to expect. If you could give me some hints on what sort of settings I should be using for the network and for the iteration limit I would greatly appreciate it. Thank you!
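
    One knob that often matters more than topology here (a general observation, not specific to the poster's library): raw inputs ranging from 1 up to 100,000 on very different scales make gradient-based training crawl, so normalizing each input (and the two outputs) to a small range before training usually helps more than adding nodes. A NumPy sketch of min-max scaling, with made-up columns:

        import numpy as np

        def normalise(data):
            """Scale each column to [0, 1]; keep lo/hi so new queries and the
            network's predictions can be mapped back to real units."""
            data = np.asarray(data, dtype=float)
            lo, hi = data.min(axis=0), data.max(axis=0)
            span = np.where(hi - lo == 0, 1, hi - lo)
            return (data - lo) / span, lo, hi

        # a few of the 10 inputs: people on project, cost, entities, use cases
        X = np.array([[5, 20000, 12, 30],
                      [50, 900000, 80, 200],
                      [2, 8000, 4, 9]])
        X_scaled, lo, hi = normalise(X)

    With scaled data, a 10-5-2 network and resilient propagation typically converge in far fewer than 10,000 iterations on a dataset this small, so the iteration cap can stay as a safety net rather than something that is routinely hit.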

    Read the article

  • Algorithm to determine which points should be visible on a map based on zoom

    - by lgratian
    Hi! I'm making a Google Maps-like application for a course at my Uni (not something complex, it should load the map of a city for example, not the whole world). The map can have many layers, including markers (restaurants, hospitals, etc.) The problem is that when you have many points and you zoom out the map it doesn't look right. At this zoom level only some points need to be visible (and at the maximum map size, all points). The question is: how can you determine which points should be visible for a specified zoom level? Because I have implemented a PR Quadtree to speed up rendering I thought that I could define some "high-priority" markers (that are always visible, defined in the map editor) and put them in a queue. At each step a marker is removed from the queue and all it's neighbors that are at least D units away (D depends on the zoom levels) are chosen and inserted in the queue, and so on. Is there any better way than the algorithm I thought of? Thanks in advance!
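
    A sketch close to the queue-based idea proposed in the question (in Python; the distance test is brute force here, but the PR quadtree already built for rendering can answer "is anything kept within D of this point?" much faster): markers are taken in priority order and kept only if nothing already kept is within D, with D chosen per zoom level.

        import math

        def visible_markers(markers, min_dist):
            """Keep a marker only if it is at least min_dist away from every
            marker already kept; sorting by priority first means high-priority
            markers always survive. min_dist shrinks as the zoom level grows."""
            kept = []
            for m in sorted(markers, key=lambda m: m["priority"], reverse=True):
                if all(math.hypot(m["x"] - k["x"], m["y"] - k["y"]) >= min_dist for k in kept):
                    kept.append(m)
            return kept

        markers = [
            {"x": 10, "y": 10, "priority": 5},
            {"x": 12, "y": 11, "priority": 1},   # too close to the first marker, culled
            {"x": 90, "y": 40, "priority": 2},
        ]
        print(visible_markers(markers, min_dist=20))

    Running this once per zoom level when the map is built and storing the result per level avoids doing any thinning while the user pans.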

    Read the article

  • Datastore performance, my code or the datastore latency

    - by fredrik
    For the last month I've had a bit of a problem with a quite basic datastore query. It involves 2 db.Models, with one referring to the other via a db.ReferenceProperty. The problem is that, according to the admin logs, the request takes about 2-4 seconds to complete. I stripped it down to a bare form and a list to display the results. The put works fine, but the get accumulates (in my opinion) way too much cpu time.

        # The get looks like this:
        outputData['items'] = {}
        labelsData = Label.all()
        for label in labelsData:
            labelItem = label.item.name
            if labelItem not in outputData['items']:
                outputData['items'][labelItem] = {
                    'item': labelItem,
                    'labels': []
                }
            outputData['items'][labelItem]['labels'].append(label.text)
        path = os.path.join(os.path.dirname(__file__), 'index.html')
        self.response.out.write(template.render(path, outputData))

        # And the models:
        class Item(db.Model):
            name = db.StringProperty()

        class Label(db.Model):
            text = db.StringProperty()
            lang = db.StringProperty()
            item = db.ReferenceProperty(Item)

    I've tried to do it a number of different ways, e.g. instead of the ReferenceProperty, storing all Label keys in the Item model as a db.ListProperty. My test data is just 10 rows in Item and 40 in Label. So my question: is it a fool's errand to try to optimize this, since the high cpu usage is due to problems with the datastore, or have I just screwed up somewhere in the code? ..fredrik
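
    One thing worth noting (a hedged suggestion, not a diagnosis): label.item.name dereferences the ReferenceProperty inside the loop, so each of the 40 labels can trigger its own datastore get. The well-known prefetch pattern for the old db API batches those into one db.get(); a sketch:

        from google.appengine.ext import db

        def prefetch_refprops(entities, prop):
            """Resolve one ReferenceProperty for a whole batch with a single
            db.get() instead of one round trip per entity (the N+1 pattern
            that per-label 'label.item' dereferencing causes)."""
            keys = [prop.get_value_for_datastore(e) for e in entities]
            referenced = dict(zip(keys, db.get(keys)))
            for e, key in zip(entities, keys):
                prop.__set__(e, referenced[key])
            return entities

        labels = Label.all().fetch(1000)
        prefetch_refprops(labels, Label.item)
        for label in labels:
            print(label.item.name)      # no extra datastore hit per row now

    If that removes most of the latency, the query itself was never the problem; if not, the remaining cost likely lies elsewhere (request overhead rather than this code).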

    Read the article

  • rapid application developement tools for very basic GUI apps

    - by Jurij
    I know there are many RAD platforms out there. In fact there are so many that I'm having a hard time finding out which one fits me best. What I want is a RAD tool that would allow me to define a database data model (make DB tables) and then create (view and edit) forms for the various tables. Data input, updating and various queries should be easy, and the GUI should be generated automatically. I'd like to add some additional functionality by coding (such as various complex calculations on the data). I'm a programmer, so I'm willing to learn to use a more complete, full-blown RAD solution if you can point me to it (NetBeans and Ruby on Rails being two such frameworks that would probably be high on the list). I'm currently doing Windows Forms logistics apps in .NET. I've actually developed a very crude and basic version of what I need, but I just know that there are solutions out there that are much better and I'd benefit by knowing how to use them. So in short, the basic requirements:

    - database based data storage (SQLite if possible)
    - very automated GUI creation
    - desktop based (as in: not a web app)
    - extendable by coding
    - used for creating simple data entry, view & query apps

    So basically something like Oracle Forms or DotNetMushroom Rapid Application Developer, but for .NET and SQLite if possible.

    Read the article

  • CPU friendly infinite loop

    - by Adi
    Writing an infinite loop is simple:

        while(true){
            //add whatever break condition here
        }

    But this will trash the CPU performance. This execution thread will take as much of the CPU's power as possible. What is the best way to lower the impact on the CPU? Adding some Thread.Sleep(n) should do the trick, but setting a high timeout value for the Sleep() method may indicate an unresponsive application to the operating system. Let's say I need to perform a task each minute or so in a console app. I need to keep Main() running in an "infinite loop" while a timer fires the event that will do the job. I would like to keep Main() with the lowest impact on the CPU. What methods do you suggest? Sleep() can be ok, but as I already mentioned, this might indicate an unresponsive thread to the operating system.

    LATER EDIT: I want to explain better what I am looking for. I need a console app, not a Windows service. Console apps can simulate Windows services on Windows Mobile 6.x systems with the Compact Framework. I need a way to keep the app alive as long as the Windows Mobile device is running. We all know that a console app runs as long as its static Main() function runs, so I need a way to prevent the Main() function from exiting. In special situations (like updating the app), I need to request the app to stop, so I need to loop infinitely and test for some exit condition. For example, this is why Console.ReadLine() is of no use for me: there is no exit condition check. Regarding the above, I still want the Main() function as resource friendly as possible. Leave aside the footprint of the function that checks for the exit condition.
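
    A common shape for this is to block on a waitable event with a timeout instead of sleeping blindly: the thread uses essentially no CPU while waiting, the periodic work still runs on schedule, and an updater can wake it immediately by setting the event. In .NET the analogous construct is ManualResetEvent.WaitOne(timeout); the sketch below shows the same pattern in Python for brevity.

        import threading

        stop_requested = threading.Event()

        def do_periodic_work():
            print("tick")                 # hypothetical once-a-minute task

        def main_loop():
            # wait() blocks in the kernel, so the loop burns no CPU between ticks,
            # yet it returns True immediately when stop_requested.set() is called,
            # e.g. by an update request.
            while not stop_requested.wait(timeout=60):
                do_periodic_work()

        main_loop()

    The equivalent C#/Compact Framework check would be while (!stopEvent.WaitOne(60000)) { DoWork(); }, assuming the WaitOne overload taking a millisecond timeout is available on that platform.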

    Read the article
