Search Results

Search found 8688 results on 348 pages for 'per salmi'.


  • Voting software with remote units - architectural questions

    - by David Neale
    I'm looking at designing some software that registers live votes (let's say A, B, C or D). The votes need to be picked up and processed by a .NET engine. The remote voting units should be as small as possible. What form of data transmission should be used for the voting? The data is obviously very simple, but there is a need to make sure each unit can only vote once per question. How would the data be received by the computer running the software?
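
    For illustration, here is a minimal sketch of the receiving side, in Java rather than .NET (the transport, port, and "unitId:questionId:choice" payload are all assumptions, not a recommendation; the dedup-by-key idea carries straight over to a .NET UdpClient): votes arrive as small datagrams, and a set keyed on (unit, question) enforces one vote per question per unit.

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.nio.charset.StandardCharsets;
        import java.util.HashSet;
        import java.util.Set;

        public class VoteReceiver {
            public static void main(String[] args) throws Exception {
                Set<String> seen = new HashSet<String>();      // one vote per (unit, question)
                DatagramSocket socket = new DatagramSocket(9999);
                byte[] buf = new byte[64];
                while (true) {
                    DatagramPacket p = new DatagramPacket(buf, buf.length);
                    socket.receive(p);
                    // Hypothetical payload: "unitId:questionId:choice", e.g. "17:3:B"
                    String[] parts = new String(p.getData(), 0, p.getLength(),
                                                StandardCharsets.UTF_8).trim().split(":");
                    if (parts.length != 3) continue;           // malformed packet, skip
                    String key = parts[0] + ":" + parts[1];
                    if (seen.add(key)) {                       // first vote wins
                        System.out.println("unit " + parts[0] + " voted " + parts[2]);
                    }                                          // duplicates are ignored
                }
            }
        }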


  • Good .NET library for fast streaming / batching trigonometry (Atan)?

    - by Sean
    I need to call Atan on millions of values per second. Is there a good library to perform this operation in batches very fast? For example, a library that streams the low-level logic using something like SSE? I know that there is support for this in OpenCL, but I would prefer to do this operation on the CPU, since the target machine might not support OpenCL. I also looked into using OpenCV, but its accuracy for Atan angles is only ~0.3 degrees. I need accurate results.
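
    For reference, a hand-rolled minimax polynomial already beats the ~0.3 degree level by orders of magnitude and runs in a tight, vectorizable loop. A sketch in Java (the coefficients are the degree-9 approximation from Abramowitz & Stegun 4.4.49, absolute error below 1e-5 rad, roughly 0.0006 degrees; the same code ports line for line to C#):

        public final class FastAtan {
            // Degree-9 minimax coefficients for atan(x) on |x| <= 1
            // (Abramowitz & Stegun 4.4.49, |error| <= 1e-5 rad).
            private static final double A1 = 0.9998660, A3 = -0.3302995,
                                        A5 = 0.1801410, A7 = -0.0851330, A9 = 0.0208351;

            private static double poly(double x) {   // valid for |x| <= 1
                double x2 = x * x;
                return x * (A1 + x2 * (A3 + x2 * (A5 + x2 * (A7 + x2 * A9))));
            }

            static double atan(double x) {
                double z = Math.abs(x);
                if (z <= 1.0) return poly(x);
                // Range reduction: atan(x) = sign(x) * (pi/2 - atan(1/|x|)) for |x| > 1
                return Math.copySign(Math.PI / 2 - poly(1.0 / z), x);
            }

            // Batch entry point: a flat, branch-light loop the JIT can pipeline.
            static void atanBatch(double[] in, double[] out) {
                for (int i = 0; i < in.length; i++) {
                    out[i] = atan(in[i]);
                }
            }
        }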


  • How to populate an ontology at runtime?

    - by Chan
    I have a configuration file with a lot of data, like sensor locations, types, and rules for activating devices, basically related to a pervasive system. I plan to design an ontology for this domain. My doubt is how I should populate the ontology with the information in the configuration file, as the configuration files are going to change every now and then. Earlier I was planning to use XML, so I could just read the configuration file at runtime and create an XML document as per the XSD. Do we use the same technique for ontologies? If yes, what is the format of the populated ontology? Thanks, Chan
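
    The analogous pattern does exist: keep the schema part of the ontology (classes and properties) fixed, and create individuals from the configuration file at runtime; the populated ontology is then serialized as an ordinary RDF/OWL document, playing the role the XSD-conformant XML played before. A minimal sketch with Apache Jena (Jena itself and every URI here are assumptions for illustration; the question names no toolkit):

        import com.hp.hpl.jena.ontology.Individual;
        import com.hp.hpl.jena.ontology.OntClass;
        import com.hp.hpl.jena.ontology.OntModel;
        import com.hp.hpl.jena.rdf.model.ModelFactory;
        import java.io.FileOutputStream;

        public class PopulateOntology {
            public static void main(String[] args) throws Exception {
                String ns = "http://example.org/pervasive#";   // hypothetical namespace
                OntModel model = ModelFactory.createOntologyModel();
                model.read("file:pervasive-schema.owl");       // fixed schema: classes, properties

                // For each sensor entry parsed from the configuration file:
                OntClass sensor = model.getOntClass(ns + "Sensor");
                Individual s42 = sensor.createIndividual(ns + "sensor42");
                s42.addProperty(model.getProperty(ns + "location"), "room-3");

                // The populated ontology is just another OWL/RDF file:
                model.write(new FileOutputStream("populated.owl"), "RDF/XML");
            }
        }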


  • Disk-based caching of dynamic images in IIS 7

    - by Daniel Schierbeck
    I'm writing an image server which needs to handle a relatively large number of concurrent requests (~5,000). The images being served are dynamically scaled down and cropped based on per-image specifications, which are queried from a database. The number of images is rather large, so an in-memory cache isn't viable (thrashing would most definitely occur). I'm using native caching in IIS 7 to avoid hitting the ASP.NET app which generates the images on the fly. I've looked around, but I couldn't find a simple way to configure IIS to store the cache on disk. Is there such an option, or would I need to roll my own? I'd rather avoid placing the generated images in a public folder so they can be served statically, since I would prefer to invalidate the cache entries using a query parameter (the last-edit time from the database), which doesn't seem possible to reconcile with static caching. I would love to get some feedback on this!


  • Selecting multiple columns and rows for formatting - Excel

    - by Joyce
    I have a report on which I used the Subtotal command. Aesthetically, I just want to make these subtotal rows (columns A to P) filled with color, bold, and surrounded with a border. There are hundreds of totals generated in my report, and they do not have a recurring row position, so in order to make it look good I currently format each row manually. Is there a faster way? Thanks!


  • Get highest frequency terms from Lucene index

    - by Julia
    Hello! I need to extract the terms with the highest frequencies from several Lucene indexes, to use them for some semantic analysis. So I want to get maybe the top 30 most occurring terms (I still have not decided on a threshold; I will analyze the results) and their per-index counts. I am aware that I might lose some precision because of potentially dropped duplicates, but for now let's say I am OK with that. For the proposed solutions, speed is (needless to say, maybe) not important, since I would do static analysis. I would put the accent on simplicity of implementation, because I am not so skilled with Lucene (not a programming guru either) and can't wrap my mind around many of its concepts. I cannot find any code samples for something similar, so I will very much appreciate all concrete advice (code, pseudocode, links to code samples...)! Thank you!
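
    A straightforward route in the Lucene 3.x-era API is to walk the TermEnum and keep the top 30 in a small min-heap. A hedged sketch (the field name and index path are assumptions; note that docFreq counts documents containing a term, the usual "top terms" metric, whereas total occurrences would mean summing per-document frequencies from TermDocs instead):

        import java.io.File;
        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.Comparator;
        import java.util.List;
        import java.util.PriorityQueue;
        import org.apache.lucene.index.IndexReader;
        import org.apache.lucene.index.Term;
        import org.apache.lucene.index.TermEnum;
        import org.apache.lucene.store.FSDirectory;

        public class TopTerms {
            public static void main(String[] args) throws Exception {
                IndexReader reader = IndexReader.open(FSDirectory.open(new File("/path/to/index")));
                // Min-heap of (term, docFreq) pairs: the smallest count is evicted first.
                Comparator<Object[]> byCount = new Comparator<Object[]>() {
                    public int compare(Object[] a, Object[] b) {
                        return ((Integer) a[1]).compareTo((Integer) b[1]);
                    }
                };
                PriorityQueue<Object[]> top = new PriorityQueue<Object[]>(30, byCount);

                TermEnum terms = reader.terms();
                while (terms.next()) {
                    Term t = terms.term();
                    if (!"content".equals(t.field())) continue;   // assumed field name
                    top.offer(new Object[] { t.text(), terms.docFreq() });
                    if (top.size() > 30) top.poll();              // keep only the top 30
                }
                terms.close();

                List<Object[]> result = new ArrayList<Object[]>(top);
                Collections.sort(result, Collections.reverseOrder(byCount));
                for (Object[] e : result) System.out.println(e[0] + "\t" + e[1]);
                reader.close();
            }
        }

    Running the same loop once per index gives the per-index counts, which can then be merged.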


  • Code reading: where can I read great, modern, and well-documented C++ code?

    - by baol
    Reading code is one of the best ways to learn new idioms, tricks, and techniques. Sadly, it is very common to find badly written C++ code. Some people use C++ as if it were C, others as if it were Java, and some just shoot themselves in the foot. I believe gtkmm is a good example of C++ design, but a binding may not be the best code to read (you need to know the C library behind it). The Boost libraries (at least the ones I have read) tend to be less readable than I'd like. Can you mention open source projects (or other projects whose source is freely readable) that are good examples of readable, modern, well-documented, and self-contained C++ code to learn from? (I believe one project per answer will work best, and I'd include the motivation that led you to select that one.)


  • ForceContext Queries method twice?

    - by azz0r
    Hello, I'm writing a per-minute script to output the XML that a server will read, and I am using ForceContext:

        <?php
        class My_Controller_Action_Helper_ForceContext extends Zend_Controller_Action_Helper_ContextSwitch
        {
            public function initContext($format = null)
            {
                $request = $this->getRequest();
                $action  = $request->getActionName();
                $context = $this->getActionContexts($action);

                // Check if this is the only context
                if (count($context) === 1) {
                    $format = $context[0];
                }
                return parent::initContext($format);
            }
        }

        class Video_PerMinuteController extends Zend_Controller_Action
        {
            function init()
            {
                $contextSwitch = $this->_helper->getHelper('ForceContext');
                $contextSwitch->addActionContext('transaction', 'xml')->initContext();

    In my method, it gets the current minute count, adds 1, then saves, so I can clearly see when it is accessed more than once in a minute. If I comment out the second contextSwitch line, the count only goes up by 1; if I don't, it displays the XML page but adds 2 per minute (it is somehow being called twice). Any ideas?


  • Entity Framework associations killing performance

    - by Chris
    Here is the performance test I am looking at. I have 8 different entities that are table-per-type. Some of the entities contain over 100 thousand rows. This particular application does several recursive calculations on the client, so I think it may be best to preload the data instead of lazy loading. If there are no associations, I can load the entire database in about 3 seconds. As I add associations in any way, the performance starts to decline drastically. I am loading all the data the same way (just calling ToList() on the entity set attached to the context). I ran the test with EDMX-generated classes and with self-tracking entities and had similar results. I am sure that if I were to deal with the associations myself, similar to how I would in a DataSet, the performance problem would go away. On the other hand, I am pretty sure this is not how the Entity Framework was intended to be used. Any thoughts or ideas?


  • Please suggest some alternatives to Drupal

    - by abovesun
    Drupal proposes a completely different approach to web development (compared with RoR-like frameworks), and it is extremely good from a development-speed perspective. For example, it is quite easy to clone 90% of stackoverflow's functionality using Drupal. But it has several big drawbacks: it is f***ing slow (100-400 requests per page); the db structure is very complicated, needing at least 2 tables for each simple content (entity) type, and CCK fields very easily generate tons of new db tables; it is anti-object-oriented, rather aspect-oriented; the "view" layer implementation is bad, with no straightforward layouts; and so on. After all these items I can say I like Drupal, but I would like something similar, more elegant and more object-oriented. Probably something like http://drupy.net/ - a Drupal emulation on top of Django. P.S. I wrote this question not to start a new holy-war flame; just write if you know an alternative that uses a similar approach.


  • Piping to findstr's input at the DOS prompt

    - by Gauthier
    I have a text file with a list of macro names (one per line). My final goal is to get a printout of how many times each macro's name appears in the files of the current directory. The macro names are in C:\temp\macros.txt. Typing type C:\temp\macros.txt at the DOS prompt prints the list all right. Now I want to pipe that output to the standard input of findstr: type C:\temp\macros.txt | findstr *.ss (ss is the file type in which I am looking for the macro names). This does not seem to work; I get no results (and it returns immediately, so it does not seem to try at all). findstr <the first row of the macro list> *.ss does work. I also tried findstr *.ss < c:\temp\macros.txt, with no success.


  • Throughput measurements

    - by dotsid
    I wrote a simple load-testing tool for testing the performance of Java modules. One problem I face is the algorithm for throughput measurement. Tests are executed in several threads (the client configures how many times a test should be repeated), and execution times are logged. So when the tests are finished, we have the following history (4 test executions, 2 threads, 36ms overall time; - is idle, * is test execution):

             5ms    9ms       4ms     13ms
        T1 |-*****-*********-****-*************-|
             3ms  6ms     7ms      11ms
        T2 |-***-******-*******-***********-----|
           <-----------------36ms--------------->

    For the moment I calculate throughput (per second) in the following way: 1000 / overallTime * threadCount. But there is a problem. What if one thread completes its own tests more quickly (for whatever reason):

             3ms 3ms 3ms 3ms
        T1 |-***-***-***-***----------------|
             3ms  6ms     7ms      11ms
        T2 |-***-******-*******-***********-|
           <--------------32ms-------------->

    In this case the actual throughput is much better, but the measured throughput is bounded by the slowest thread. So my question is: how should I measure the throughput of code execution in a multithreaded environment?
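
    One way around the slowest-thread bias (a sketch of one possible definition, not the only one): count completed executions across all threads and divide by the wall-clock time of the whole run. A thread that finishes early simply stops adding to the count, so the idle tail lowers the average honestly instead of being hidden by the thread-count multiplier:

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;
        import java.util.concurrent.atomic.AtomicLong;

        public class ThroughputMeter {
            public static void main(String[] args) throws Exception {
                final AtomicLong completed = new AtomicLong();
                final int threads = 2, testsPerThread = 4;
                ExecutorService pool = Executors.newFixedThreadPool(threads);

                long start = System.nanoTime();
                for (int t = 0; t < threads; t++) {
                    pool.execute(new Runnable() {
                        public void run() {
                            for (int i = 0; i < testsPerThread; i++) {
                                runTest();                    // module under test
                                completed.incrementAndGet();  // count each finished execution
                            }
                        }
                    });
                }
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.HOURS);
                double seconds = (System.nanoTime() - start) / 1e9;

                // Completed operations over the wall-clock time of the whole run.
                System.out.printf("throughput: %.1f tests/sec%n", completed.get() / seconds);
            }

            static void runTest() { /* placeholder for the module under test */ }
        }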


  • How to change the max videos per page with jQuery?

    - by Shady
    I have a website that contains videos to watch, like YouTube, and the videos are divided into pages. In the source of a page I have this part:

        <input type="hidden" value="12" id="vid_count">
        <input type="hidden" value="422" id="vid_max">
        <input type="hidden" value="12" id="vids_per_page">

    The site contains 422 videos across 36 pages, 12 videos per page. I need to show all the videos on only one page... I've already tried document.getElementById("vids_per_page").setAttribute("value", "500"); but this doesn't work... How can I do it (via Greasemonkey)? Ask if you need any additional info.


  • How to migrate large amounts of data from old database to new

    - by adam0101
    I need to move a huge amount of data from a couple of tables in an old database to a couple of different tables in a new database. The databases are SQL Server 2005 and are on the same box and SQL Server instance. I was told that if I try to do it all in one shot, the transaction log will fill up. Is there a way to disable the transaction log per table? If not, what is a good method for doing this? Would a cursor do it? This is just a one-time conversion.
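
    A sketch of the usual workaround, assuming the goal is to keep the log small rather than disable it (per-table logging cannot be turned off in SQL Server): copy in keyed batches and commit each one, so the log space can be reused between batches under SIMPLE recovery. The table and column names below are placeholders, the sketch is in Java/JDBC, and the identical pattern works as a T-SQL WHILE loop or in ADO.NET:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class BatchedCopy {
            public static void main(String[] args) throws Exception {
                Connection c = DriverManager.getConnection(
                    "jdbc:sqlserver://localhost;databaseName=NewDb", "user", "pass");
                c.setAutoCommit(false);
                // Same instance, so the old table is reachable by three-part name.
                PreparedStatement copy = c.prepareStatement(
                    "INSERT INTO dbo.Target (id, name) " +
                    "SELECT id, name FROM OldDb.dbo.Source WHERE id > ? AND id <= ?");
                long lastId = 0, step = 10000, maxId = 50000000L;   // placeholder bounds
                while (lastId < maxId) {
                    copy.setLong(1, lastId);
                    copy.setLong(2, lastId + step);
                    copy.executeUpdate();
                    c.commit();   // each small transaction lets the log truncate
                    lastId += step;
                }
                c.close();
            }
        }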


  • ADO.NET slow updating large tables

    - by brett
    The problem: 100,000+ name & address records in an Access table (2003). I need to iterate through the table and update details with the output from a 3rd-party DLL. I currently use ADO, and it works at an acceptable speed (less than 5 minutes on a network share). We will soon need to update to Access 2007 and its 'non-Jet' accdb format to maintain compatibility with clients. I've tried using ADO.NET DataSets, but updating the records takes hours! We process 5-10 of these tables per day, so this cannot be a solution. Any ideas on the fastest way to update individual records using ADO.NET? Surely we didn't take such a huge backward step with ADO.NET? Any help would be appreciated.


  • Can games be considered real-time systems?

    - by harry
    I've been reading up on real-time systems and how they work. The Wikipedia article says that a game of chess with a timer per move can be considered a real-time system, because the program must compute a move in that time. What about other games? As we know, games generally try to run at 25+ FPS. Could a game be considered a soft real-time system, since if it falls under 25 (I'm using 25 as a pre-defined threshold, by the way) it's not the end of the world, just a hit to the performance we wanted? Games also have events they must handle: the user uses the keyboard/mouse, and the system must answer those events within (again) a pre-defined time before the game is considered to have "failed". (I'm talking single-player for now, to keep things simple.) It sounds like games fit the soft real-time criteria, but I'd like to know if I'm missing anything... thanks.
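
    The 25 FPS threshold maps directly onto a per-frame deadline of 40ms, which is what makes the soft-real-time reading natural: missing the deadline degrades quality, it does not make the result wrong. A toy Java loop just to make that deadline explicit (the threshold is the asker's; everything else is illustrative):

        public class GameLoop {
            static final long FRAME_BUDGET_MS = 40;   // 25 FPS => 40ms per frame

            public static void main(String[] args) throws InterruptedException {
                while (true) {
                    long start = System.nanoTime();
                    update();   // input events must also be answered within the frame
                    render();
                    long elapsedMs = (System.nanoTime() - start) / 1000000;
                    if (elapsedMs < FRAME_BUDGET_MS) {
                        Thread.sleep(FRAME_BUDGET_MS - elapsedMs);   // deadline met
                    }
                    // else: deadline missed, a soft real-time "failure"; the game
                    // stutters but keeps running, unlike a hard real-time system.
                }
            }

            static void update() { /* advance game state */ }
            static void render() { /* draw the frame */ }
        }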


  • Way to get VS 2008 to stop forcing indentation on namespaces?

    - by Earlz
    I've never really been a big fan of the way most editors handle namespaces; they always force you to add an extra, pointless level of indentation. For instance, I have a lot of code that I would much rather have formatted as:

        namespace mycode{
        class myclass{
          void function(){
            foo();
          }
          void foo(){
            bar();
          }
          void bar(){
            //code..
          }
        };
        }

    and not something like:

        namespace mycode{
          class myclass{
            void function(){
              foo();
            }
            void foo(){
              bar();
            }
            void bar(){
              //code..
            }
          };
        }

    Honestly, I don't even like the class being indented most of the time, because I usually have only 1 class per file. It doesn't look as bad here, but when you get a ton of code and a lot of scopes you can easily have indentation that forces you off the screen, and here I just used 2-space indents instead of the 4-space ones we use. Anyway, is there some way to get Visual Studio to stop indenting namespaces for me like that?


  • MySQL - display rows of names and addresses grouped by name, where a name occurs more than once

    - by Stoob
    I have two tables, "name" and "address". I would like to list the last_name and joined address.street_address of all last_name in table "name" that occur more than once in table "name". The two tables are joined on the column "name_id". The desired output would appear like so:

        213 | smith | 123 bluebird        |
        14  | smith | 456 first ave       |
        718 | smith | 12 san antonia st.  |
        244 | jones | 78 third ave # 45   |
        98  | jones | 18177 toronto place |

    Note that if the last_name "abernathy" appears only once in table "name", then "abernathy" should not be included in the result. This is what I came up with so far:

        SELECT name.name_id, name.last_name, address.street_address, count(*)
        FROM `name`
        JOIN `address` ON name.name_id = address.name_id
        GROUP BY `last_name`
        HAVING count(*) > 1

    However, this produces only one row per last name. I'd like all the last names listed. I know I am missing something simple. Any help is appreciated, thanks!


  • Delphi Clientdataset Lookup/Aggregate

    - by TheRoadrunner
    Hi, I need a little help with ClientDataSets in Delphi. What I want to achieve is a grid showing customers, where one of the columns shows the number of orders for each customer. I put a ClientDataSet on a form and load Customers.xml from the Delphi demo data. Another ClientDataSet is loaded with orders.xml. Relatively simple; I can define an aggregate on the orders CDS showing the total amount per customer (or the count). (See Cary Jensen's article on this: http://edn.embarcadero.com/article/29272) The problem is getting this aggregate result from the orders dataset into the customer dataset. It is kind of a reverse lookup, since there is a 1-n relationship between customers and orders, not the n-1 you normally have in lookup scenarios. Any ideas? Søren


  • Will SQL Server Partitioning increase performance without changing filegroups

    - by Tom
    Scenario: I have a 10 million row table. I partition it into 10 partitions, which results in 1 million rows per partition, but I do not do anything else (like move the partitions to different filegroups or spindles). Will I see a performance increase? Is this in effect like creating 10 smaller tables? If I have queries that perform key lookups or scans, will performance increase as if they were operating against a much smaller table? I'm trying to understand how partitioning is different from just having a well-indexed table, and where it can be used to improve performance. Would a better scenario be to move the old data (using partition switching) out of the primary table into a read-only archive table? Is having a table with a 1 million row partition and a 9 million row partition analogous (performance-wise) to moving the 9 million rows to another table and leaving only 1 million rows in the original table?


  • Google app engine - what is the lifecycle of PersistenceManager?

    - by Domchi
    What is the preferred way of using the GAE datastore PersistenceManager in a web app? The GAE instructions are a bit ambiguous on the matter. Do I instantiate a PersistenceManagerFactory for each RPC call, or do I use only one factory for all requests? Do I call PMF.get().getPersistenceManager(), or do I call PMF.get().getPersistenceManagerProxy()? Do I close the PM after each RPC call, or do I leave it open? What are you guys doing? Furthermore, I'm not certain how GAE handles the 30-second-per-request limit. Is it even possible to reference the same PM between requests?
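
    For what it's worth, the pattern in Google's own examples is one application-wide PersistenceManagerFactory (it is expensive to create) and one short-lived PersistenceManager per request, closed in a finally block; nothing is carried across requests, so the 30-second limit only ever applies to the request at hand. A sketch of that canonical shape:

        import javax.jdo.JDOHelper;
        import javax.jdo.PersistenceManager;
        import javax.jdo.PersistenceManagerFactory;

        public final class PMF {
            // One factory for the whole application; creating it is expensive.
            private static final PersistenceManagerFactory instance =
                JDOHelper.getPersistenceManagerFactory("transactions-optional");
            private PMF() {}
            public static PersistenceManagerFactory get() { return instance; }
        }

    and per request, e.g. inside a servlet or RPC handler:

        PersistenceManager pm = PMF.get().getPersistenceManager();
        try {
            // ... load/store entities for this request only ...
        } finally {
            pm.close();   // always close before the request ends
        }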


  • Kohana 3 and CRON always accessing index.php (not following the URI argument)

    - by alex
    OK, I hope this is my last question about cron jobs and Kohana 3. (Note: the others are not duplicates, just other problems.) Here is my cron job (set up in cPanel): php /home/user/public_html/index.php --uri=properties/update as per this answer. I have set it up so it emails me the output, and it runs every 5 minutes. Unfortunately, it always emails me the source of the home page of my site (index.php or /). I can access the URL fine in my browser, i.e. http://www.example.com/properties/update works and does its job correctly. I can tell the cron run never hits the script, because I have a file logger in place. Would this have anything to do with .htaccess? Has this happened to anyone before, and how did they fix it? Many thanks.


  • How do you pass or share variables between django views?

    - by Hugh
    Hi, I'm kind of lost as to how to do this: I have some chained select boxes, with one select box per view. Each choice should be saved so that a query is built up; at the end, the query should be run. But how do you share state in Django? I can pass from view to template, but not from template to view, and not from view to view. I'm truly not sure how to do this. Please help!


  • Ruby on Rails group by with null values problem

    - by winter sun
    I have an hours table in which I store user time-tracking information. The table has the following columns:

        project_id
        task_id (optional, can be null)
        worker_id
        reported_date
        working_hours

    Each worker enters several records per day, so generally the table looks like this:

        id | project_id | worker_id | task_id | reported_date | working_hours
        ===|============|===========|=========|===============|==============
        1  | 1          | 1         | 1       | 10/10/2011    | 4
        2  | 1          | 1         | 1       | 10/10/2011    | 14
        3  | 1          | 1         |         | 10/10/2011    | 4
        4  | 1          | 1         |         | 10/10/2011    | 14

    task_id is not a required field, so there can be times when the user does not select it and the task_id cell is empty. Now I need to display the data grouped, so the result would be something like this:

        project_id | worker_id | task_id | working_hours
        ===========|===========|=========|==============
        1          | 1         | 1       | 18
        1          | 1         |         | 18

    I did the following group by:

        @group_hours = Hour.group('project_id,worker_id,task_id').select('project_id, task_id, worker_id, sum(working_hours) as working_hours_sum')

    My view looks like this:

        <% @group_hours.each do |b| %>
          <%= b.project.name if b.project %>
          <%= b.worker.First_name if b.worker %>
          <%= b.task.name if b.task %>
          <%= b.working_hours_sum %>
        <% end %>

    This works, but only if task_id is not null. When task_id is null, it presents all the records without grouping them, like this:

        project_id | worker_id | task_id | working_hours
        ===========|===========|=========|==============
        1          | 1         | 1       | 18
        1          | 1         |         | 4
        1          | 1         |         | 14

    I will appreciate any kind of solution to this problem.


  • [Alfresco] property qualified-name in method getContentReader

    - by Ar3s
    Hi there. First, I apologize for my poor English, and maybe for the stupidity of my question ;) I am on an Alfresco project to learn how it works. I have to browse my content repository programmatically and gather data along the way. In order to do that, I guessed I had to use a ContentReader (which I get from my ContentService), but the getReader method wants a nodeRef and a propertyQualifiedName. I am OK with the nodeRef; I get what it is needed for. But the propertyQualifiedName puzzles me: I barely get what it is, and I frankly don't get how it is used. Reading some Alfresco forum threads, I am more and more scared that I don't even get how a reader works; I saw somewhere that a reader can read only one node, and only one time per instance. If anyone knows a bit about the Java API for the Alfresco content repository, I am all ears! Cheers all!
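
    For orientation, the propertyQualifiedName simply names which property of the node holds the content stream; for ordinary documents that is cm:content, exposed in the Java API as ContentModel.PROP_CONTENT. And a ContentReader is indeed single-use: fetch a fresh one from getReader for every read. A minimal sketch (how contentService gets injected, via Spring or the ServiceRegistry, is left out here):

        import org.alfresco.model.ContentModel;
        import org.alfresco.service.cmr.repository.ContentReader;
        import org.alfresco.service.cmr.repository.ContentService;
        import org.alfresco.service.cmr.repository.NodeRef;

        public class ReadNodeContent {
            private ContentService contentService;   // injected by the container

            public String readText(NodeRef nodeRef) {
                // cm:content is the property that actually carries the binary stream.
                ContentReader reader =
                    contentService.getReader(nodeRef, ContentModel.PROP_CONTENT);
                if (reader == null || !reader.exists()) {
                    return null;                      // this node has no content yet
                }
                // One reader per read: do not reuse this instance afterwards.
                return reader.getContentString();
            }
        }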

