Search Results

Search found 10098 results on 404 pages for 'per pixel'.


  • Checkbox does not change when clicked directly on it

    - by Matt McCormick
    I have a table of data with one checkbox per row for the user to select the item. I'm using the following jQuery code to let the user select an item by clicking anywhere in the row:

        $("tbody tr").click(function() {
            var checkbox = $(this).find(':checkbox');
            checkbox.attr('checked', !checkbox.attr('checked'));
        });

    The problem is that if I click directly on the checkbox, nothing happens; i.e. if the checkbox is not checked, it remains unchecked. If I click anywhere else in the row, the checkbox changes state. My guess is that the action is being performed twice: clicking the checkbox toggles it natively, but then the row's click handler runs and toggles it back. I'm not certain that's what is happening, but it's my best guess. How can I get the checkbox to toggle when the user clicks anywhere in the row, including directly on the checkbox?
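
    One common fix (a sketch of my own, not from the question) is to leave the checkbox's native toggle alone and only toggle programmatically when the click starts elsewhere in the row:

        $("tbody tr").click(function(e) {
            // If the click landed on the checkbox itself, the browser has
            // already toggled it; toggling again here would undo it.
            if ($(e.target).is(':checkbox')) {
                return;
            }
            var checkbox = $(this).find(':checkbox');
            checkbox.attr('checked', !checkbox.attr('checked'));
        });

    (On jQuery 1.6+, .prop('checked', ...) is the preferred way to read and set the checked state.)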

    Read the article

  • How can I apply a PSSM efficiently?

    - by flies
    I am fitting position specific scoring matrices (PSSMs, aka position specific weight matrices). The fit I'm using is like simulated annealing: I perturb the PSSM, compare the prediction to experiment, and accept the change if it improves agreement. This means I apply the PSSM millions of times per fit; performance is critical.

    In my particular problem, I'm applying a PSSM for an object of length L (~8 bp) at every position of a DNA sequence of length M (~30 bp), so there are M-L+1 valid positions. I need an efficient algorithm to apply a PSSM. Can anyone help improve performance?

    My best idea is to convert the DNA into some kind of matrix so that applying the PSSM is matrix multiplication. There are efficient linear algebra libraries out there (e.g. BLAS), but I'm not sure how best to turn an M-length DNA sequence into an M x 4 matrix and then apply the PSSM at each position. The solution needs to work for higher order/dinucleotide terms in the PSSM - presumably this means representing the sequence-matrix separately for mono-nucleotides and for dinucleotides.

    My current solution iterates over each position m, then over each letter in the word from m to m+L-1, adding the corresponding term in the matrix. I'm storing the matrix as a multi-dimensional STL vector, and profiling has revealed that a lot of the computation time is spent just accessing the elements of the PSSM (with similar bottlenecks accessing the DNA sequence). If someone has an idea besides matrix multiplication, I'm all ears.
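
    Since profiling points at element access, one standard remedy is a flat contiguous array instead of nested STL vectors. A minimal sketch (my own illustration; hypothetical names, mono-nucleotide terms only):

        #include <cstddef>
        #include <string>
        #include <vector>

        // pssm[j * 4 + b] is the score of base b (0..3) at model position j.
        std::vector<double> score_all_positions(const std::string& dna,
                                                const std::vector<double>& pssm,
                                                std::size_t L) {
            if (dna.size() < L) return std::vector<double>();
            // Encode A/C/G/T as 0..3 once, instead of per lookup.
            std::vector<int> enc(dna.size());
            for (std::size_t i = 0; i < dna.size(); ++i) {
                switch (dna[i]) {
                    case 'A': enc[i] = 0; break;
                    case 'C': enc[i] = 1; break;
                    case 'G': enc[i] = 2; break;
                    default:  enc[i] = 3; break;
                }
            }
            std::vector<double> scores(dna.size() - L + 1);
            for (std::size_t m = 0; m + L <= dna.size(); ++m) {
                double s = 0.0;
                const int* w = &enc[m];
                for (std::size_t j = 0; j < L; ++j)
                    s += pssm[j * 4 + w[j]];  // contiguous access, no nested vectors
                scores[m] = s;
            }
            return scores;
        }

    Dinucleotide terms would follow the same pattern with a second flat table indexed by (j, 4 * w[j] + w[j + 1]).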

    Read the article

  • Storing an object to use in multiple classes

    - by Aaron Sanders
    I am wondering about the best way to store an object in memory that is used in many classes throughout an application. Let me set up my problem for you: we have multiple databases, one per customer. We also have a master table in which each row holds detailed information about a database, such as the database name, the IP of the server it lives on, and a few config settings.

    I have an application that loops through those databases and runs some updates on them. The settings mentioned above are loaded into memory on each loop iteration. The application then runs through a series of processes that involve multiple classes using this data. The data never changes during the processes, only between loop iterations.

    The variables relate to a customer, so I have them stored in a customer class. I suppose I could make all of the members Shared, or should I use a singleton for the customer class? I've never actually used a singleton, only read that they are good in this type of situation. Are there better solutions to this type of scenario? I also may want to make this application multithreaded later. Sorry if this is confusing; if you have questions, let me know and I will answer them. Thanks for your help.
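
    If the singleton route is what you end up exploring, a minimal thread-safe sketch in VB.NET (my own illustration; the property names are hypothetical):

        ' Holds the settings for the customer currently being processed.
        Public NotInheritable Class CustomerContext
            Private Shared ReadOnly _lock As New Object()
            Private Shared _current As CustomerContext

            Public Property DatabaseName As String
            Public Property ServerIp As String

            Private Sub New()
            End Sub

            ' Lazily created, guarded by a lock for later multithreading.
            Public Shared ReadOnly Property Current As CustomerContext
                Get
                    SyncLock _lock
                        If _current Is Nothing Then _current = New CustomerContext()
                        Return _current
                    End SyncLock
                End Get
            End Property
        End Class

    Note that a process-wide singleton holds exactly one customer at a time, which fits a sequential loop but not concurrent per-customer processing; for the multithreaded case, passing the context object explicitly is often the safer design.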

    Read the article

  • Generate a set of strings with maximum edit distance

    - by Kevin Jacobs
    Problem 1: I'd like to generate a set of n strings of fixed length m from alphabet s such that the minimum Levenshtein distance (edit distance) between any two strings is greater than some constant c. Obviously, I could use randomization methods (e.g., a genetic algorithm), but I was hoping that this may be a well-studied problem in computer science or mathematics with some informative literature and an efficient algorithm or three.

    Problem 2: Same as above, except that adjacent characters cannot repeat; the i'th character in each string may not be equal to the i+1'th character. E.g., 'CAT', 'AGA' and 'TAG' are allowed; 'GAA', 'AAT', and 'AAA' are not.

    Background: This problem comes from bioinformatics and involves designing unique DNA tags that can be attached to biologically derived DNA fragments and then sequenced using a fancy second generation sequencer. The goal is to be able to recognize each tag, allowing for random insertion, deletion, and substitution errors. The specific DNA sequencing technology has a relatively low error rate per base (~1%), but is less precise when a single base is repeated 2 or more times (motivating the additional constraint imposed in problem 2).
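
    Absent a constructive algorithm, a randomized greedy filter is a common baseline. A sketch (my own illustration; not guaranteed to find n tags when the constraints are tight):

        import random

        def levenshtein(a, b):
            """Standard dynamic-programming edit distance."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                                   prev[j - 1] + (ca != cb)))
                prev = cur
            return prev[-1]

        def greedy_tags(n, m, alphabet, c, no_repeats=False, seed=0,
                        max_tries=1000000):
            """Randomized greedy selection of n length-m tags whose pairwise
            edit distance exceeds c. A baseline, not an optimal design."""
            rng = random.Random(seed)
            chosen, tries = [], 0
            while len(chosen) < n:
                tries += 1
                if tries > max_tries:
                    raise RuntimeError("constraints look infeasible")
                tag = ''.join(rng.choice(alphabet) for _ in range(m))
                if no_repeats and any(x == y for x, y in zip(tag, tag[1:])):
                    continue  # Problem 2: adjacent characters may not repeat
                if all(levenshtein(tag, t) > c for t in chosen):
                    chosen.append(tag)
            return chosen

        print(greedy_tags(5, 8, 'ACGT', 3, no_repeats=True))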

    Read the article

  • Thinking Sphinx with Rails - Delta indexing seems to work fine for one model but not for the other

    - by hack3r
    I have two models, User and Discussion, with indices defined as below.

    For the User model:

        define_index do
          indexes email
          indexes first_name
          indexes last_name, :sortable => true
          indexes groups(:name), :as => :group_names
          has "IF(email_confirmed = true and status = 'approved', true, false)",
              :as => :approved_user, :type => :boolean
          has "IF(email_confirmed = true and (status = 'approved' or status='blocked'), true, false)",
              :as => :approved_or_blocked_user, :type => :boolean
          has points, :type => :integer
          has created_at, :type => :datetime
          has user(:id)
          set_property :delta => true
        end

    For the Discussion model:

        define_index do
          indexes title
          indexes description
          indexes category(:title), :as => :category_title
          indexes tags(:title), :as => :tag_title
          has "IF(publish_to_blog = true AND sticky = false, true, false)",
              :as => :publish_to_main, :type => :boolean
          has created_at
          has updated_at, :type => :datetime
          has recent_activity_at, :type => :datetime
          has views_count, :type => :integer
          has featured
          has publish_to_blog
          has sticky
          set_property :delta => true
        end

    I have added a delta column to both tables as per the documentation. My problem is that delta indexing works only for the Discussion model, not for the User model. For example, when I update the title of a discussion, I can see Thinking Sphinx rotating the indices (as is evident from the logs). But when I update the first_name or last_name of a user, nothing happens.

    The User model also has a has_many :through association via a model called GroupsUser, on which I have set up an after_save callback:

        def set_user_delta_flag
          user.delta = true
          user.save
        end

    Even this doesn't trigger delta indexing on the User model, while a similar setup for the Discussion model works perfectly. Can anyone tell me why this is happening?

    Read the article

  • FTP FileWatcher

    - by Meiscooldude
    So, I am in a little predicament where I am stuck watching a few FTP folders to see if they have new files added to them. If they do, the watcher needs to raise an event with the file name, thereby telling something else to download that file. This is a pretty simple object to make; I was just curious if anyone knew how expensive this operation would be.

    I plan on using the NLST command because I don't need file size information, and there will be no sub-directories in the folder. Each file in the folder will have exactly 25 characters in its name. There could be anywhere from 10 to maybe a couple thousand (max around 2000) files per folder (usually on the lower end, 100-300, but currently growing). The files are anywhere from 250 KB to a very, VERY unlikely 10 MB (usually within the 250 KB to 4 MB range). There could eventually be up to a few hundred folders (in which case I could change the watch frequency depending on the number of folders), but currently there are only a few (6-10ish). There would also be multiple logins for the FTP server; different logins would have access to different folders.

    I am not asking for an implementation, just whether anyone has some first- or second-hand knowledge about FTP and how this could affect my network. I am not opposed to putting in file retention times or changing the frequency with which I check for new files.
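
    For scale, a minimal polling sketch (my own illustration) using .NET's FtpWebRequest to NLST a folder and raise an event per unseen name:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Net;

        class FtpFolderWatcher
        {
            private readonly HashSet<string> _known = new HashSet<string>();

            public event Action<string> NewFile;

            // folderUrl is e.g. "ftp://server/incoming/" (hypothetical).
            // Call Poll on a timer, one watcher per folder/login.
            public void Poll(string folderUrl, string user, string password)
            {
                var request = (FtpWebRequest)WebRequest.Create(folderUrl);
                request.Method = WebRequestMethods.Ftp.ListDirectory; // issues NLST
                request.Credentials = new NetworkCredential(user, password);

                using (var response = (FtpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    string name;
                    while ((name = reader.ReadLine()) != null)
                    {
                        if (_known.Add(name) && NewFile != null)
                            NewFile(name); // fires only for names not seen before
                    }
                }
            }
        }

    Bandwidth-wise the NLST reply itself is tiny (25 bytes per name, so ~50 KB even for 2000 files); the real cost is one control connection per folder per poll.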

    Read the article

  • Java method keyword "final" and its use

    - by Lukas Eder
    When I create complex type hierarchies (several levels, several types per level), I like to use the final keyword on methods implementing some interface declaration. An example:

        interface Garble {
            int zork();
        }

        interface Gnarf extends Garble {
            /**
             * This is the same as calling {@link #zblah(0)}
             */
            int zblah();

            int zblah(int defaultZblah);
        }

    And then:

        abstract class AbstractGarble implements Garble {
            @Override
            public final int zork() { ... }
        }

        abstract class AbstractGnarf extends AbstractGarble implements Gnarf {
            // Here I absolutely want to fix the default behaviour of zblah.
            // No Gnarf should be allowed to set 1 as the default, for instance.
            @Override
            public final int zblah() {
                return zblah(0);
            }

            // This method is not implemented here, but in a subclass
            @Override
            public abstract int zblah(int defaultZblah);
        }

    I do this for several reasons:

    1. It helps me develop the type hierarchy. When I add a class to the hierarchy, it is very clear what methods I have to implement and what methods I may not override (in case I forgot the details about the hierarchy).
    2. I think overriding concrete stuff is bad according to design principles and patterns, such as the template method pattern. I don't want other developers or my users to do it.

    So the final keyword works perfectly for me. My question is: why is it used so rarely in the wild? Can you show me some examples/reasons where final (in a case similar to mine) would be very bad?

    Read the article

  • How can I setup Hudson to use the same repository for different projects and maintain separate changelogs?

    - by Allen
    I typically set up SVN to host one big project per repository, but a lot of our infrastructure has changed and we now have one main SVN server with a hierarchy like so:

        Branches
        Tags
        Trunk
            Project1 files & folders
            Project2 files & folders
            Project3 files & folders

    Projects 1, 2, and 3 do not share anything amongst themselves; they are independent projects, each with its own solution file to be built. I can set up projects in Hudson like so:

        Repository Url: http://server/svn/MainRepository
        Local module directory (optional): /Trunk/Project1

    That will maintain a separate workspace for each project, but every time you commit to Project 2 or Project 3, a build gets kicked off in Hudson for every project based in that repository. Also, any commit made anywhere in the repository is pulled down and inserted into the Hudson changelog for all of them.

    I know the easiest solution would be to simply separate every project into its own repository. However, if I can't do that for various reasons, is there a feasible way to achieve the functionality that separate repositories would give me? I want commits to the subfolder of Project 1 to only affect Project 1. No other project's commits should cause Project 1 to build, and Project 1's changelog in Hudson should only have commit notes from Project 1.

    Read the article

  • Increasing Regex Efficiency

    - by cam
    I have about 100k Outlook mail items that have about 500-600 chars per body. I have a list of 580 keywords that must be searched for in each body; the matched words are then appended at the bottom. I believe I've increased the efficiency of the majority of the function, but it still takes a lot of time. Even for 100 emails it takes about 4 seconds. I run two functions for each keyword list (290 keywords per list):

        public List<string> Keyword_Search(HtmlNode nSearch)
        {
            var wordFound = new List<string>();
            foreach (string currWord in _keywordList)
            {
                bool isMatch = Regex.IsMatch(nSearch.InnerHtml,
                    "\\b" + @currWord + "\\b", RegexOptions.IgnoreCase);
                if (isMatch)
                {
                    wordFound.Add(currWord);
                }
            }
            return wordFound;
        }

    Is there any way I can increase the efficiency of this function? The other thing that might be slowing it down is that I use HTML Agility Pack to navigate through some nodes and pull out the body (nSearch.InnerHtml). The _keywordList is a List<string>, not an array.
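
    One common speed-up (a sketch of my own, worth benchmarking) is to compile a single alternation of all keywords once and scan each body in one pass instead of 290:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text.RegularExpressions;

        static class KeywordScanner
        {
            private static Regex _combined;

            // Call once at startup per keyword list.
            public static void Build(IEnumerable<string> keywords)
            {
                string pattern = @"\b(?:" +
                    string.Join("|", keywords.Select(Regex.Escape).ToArray()) +
                    @")\b";
                _combined = new Regex(pattern,
                    RegexOptions.IgnoreCase | RegexOptions.Compiled);
            }

            // One scan of the input, returning the distinct keywords found.
            public static List<string> FindAll(string text)
            {
                return _combined.Matches(text).Cast<Match>()
                    .Select(m => m.Value.ToLowerInvariant())
                    .Distinct().ToList();
            }
        }

    RegexOptions.Compiled pays a one-time JIT cost, which is amortized well over 100k documents.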

    Read the article

  • Minimum privileges to read SQL Jobs using SQL SMO

    - by Gustavo Cavalcanti
    I wrote an application that uses SQL SMO to find all SQL Servers, databases, jobs and job outcomes. This application is executed through a scheduled task under a local service account. This service account is local to the application server only and is not present on any SQL Server to be inspected.

    I am having problems getting information on jobs and job outcomes when connecting to the servers as a user with db_datareader rights on the system tables. If we make the user sysadmin on the server, it all works fine. My question is: what are the minimum privileges a local SQL Server user needs in order to connect to the server and inspect jobs/job outcomes using the SQL SMO API?

    I connect to each SQL Server by doing the following:

        var conn = new ServerConnection
        {
            LoginSecure = false,
            ApplicationName = "SQL Inspector",
            ServerInstance = serverInstanceName,
            ConnectAsUser = false,
            Login = user,
            Password = password
        };
        var smoServer = new Server(conn);

    I read the jobs by reading smoServer.JobServer.Jobs and the JobSteps property on each of these jobs. The variable smoServer is of type Microsoft.SqlServer.Management.Smo.Server. user/password belong to the user found on each SQL Server to be inspected. If that user is sysadmin on the SQL Server being inspected, all works OK, as it also does if we set ConnectAsUser to true and execute the scheduled task under my own credentials, which grant me sysadmin privileges on SQL Server per my Active Directory membership. Thanks!
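
    For what it's worth, a sketch of the usual least-privilege starting point (an assumption to verify against your SQL Server version, not a confirmed answer): msdb ships fixed roles for Agent access, and SQLAgentReaderRole can view all jobs and their history:

        -- Grant the inspection login read access to Agent jobs via msdb roles.
        -- Login name is hypothetical.
        USE msdb;
        CREATE USER [sql_inspector] FOR LOGIN [sql_inspector];
        EXEC sp_addrolemember N'SQLAgentReaderRole', N'sql_inspector';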

    Read the article

  • Why am I unable to add text along with <%# Container.DataItem %> in a repeater in a user control

    - by Jamie Hartnoll
    I have a user control which is dynamically placed by code-behind as follows:

        Dim myControl As Control = CType(Page.LoadControl("~/Controls/mainMenu.ascx"), Control)
        If InStr(Request.ServerVariables("url"), "/Login.aspx") <= 0 Then
            mainMenu.Controls.Add(myControl)
        End If

    This is as per an example from my previous question on here. Within this control is a repeater which calls a database to generate values. My repeater mark-up is as follows:

        <asp:Repeater runat="server" ID="locationRepeater" OnItemDataBound="getQuestionCount">
            <ItemTemplate>
                <p id='locationQuestions' title='<%# Container.DataItem %>' runat='server'></p>
            </ItemTemplate>
        </asp:Repeater>

    The example above works fine, but I want to be able to prepend text to <%# Container.DataItem %> in the title attribute of that <p>, so it prints to the browser like this:

        this is some text DATA_ITEM_OUTPUT

    When I try to do that, it prints this is some text <%# Container.DataItem %> literally, i.e. it renders <%# Container.DataItem %> as text rather than the value from the repeater. It was working fine before I made it into a dynamically inserted control, so I am thinking I might have something being generated in the wrong order; but given that it works without any prepended text, I am stumped. I'm new to .NET and using VB.NET. Please could someone point me in the right direction?
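
    A sketch of the usual fix (illustration only): a <%# %> expression must produce the entire attribute value, so the literal prefix has to move inside the binding expression (VB concatenation, matching the code-behind language) rather than sit next to it:

        <asp:Repeater runat="server" ID="locationRepeater" OnItemDataBound="getQuestionCount">
            <ItemTemplate>
                <%-- The literal text lives inside the binding expression. --%>
                <p id='locationQuestions'
                   title='<%# "this is some text " & Container.DataItem %>'
                   runat='server'></p>
            </ItemTemplate>
        </asp:Repeater>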

    Read the article

  • NHibernate Child items query using Parent Id

    - by thorkia
    So I have a setup similar to this question: Parent Child Setup. Everything works great when saving the parent and the children. However, I seem to have a problem when selecting the children: I can't seem to get all the children with a specific parent. This fails with:

        NHibernate.QueryException: could not resolve property: ParentEntity_id of: Test.Data.ChildEntity

    Here is my code:

        public IEnumerable<ChildEntity> GetByParent(ParentEntity parent)
        {
            using (ISession session = OrmHelper.OpenSession())
            {
                return session.CreateCriteria<ChildEntity>()
                    .Add(Restrictions.Eq("ParentEntity_id", parent.Id))
                    .List<ChildEntity>();
            }
        }

    Any help in building a proper function to get all the items would be appreciated. I am using Fluent NHibernate to construct the mappings (version 1 RTM) and NHibernate 2.1.2 GA. If you need more information, let me know. As per your request, my Fluent mappings:

        public ParentEntityMap()
        {
            Id(x => x.Id);
            Map(x => x.Name);
            Map(x => x.Code).UniqueKey("ukCode");
            HasMany(x => x.ChildEntity).LazyLoad()
                .Inverse().Cascade.SaveUpdate();
        }

        public ChildEntityMap()
        {
            Id(x => x.Id);
            Map(x => x.Amount);
            Map(x => x.LogTime);
            References(x => x.ParentEntity);
        }

    That maps to the following two tables:

        CREATE TABLE "ParentEntity" (
            Id integer,
            Name TEXT,
            Code TEXT,
            primary key (Id),
            unique (Code)
        )

        CREATE TABLE "ChildEntity" (
            Id integer,
            Amount NUMERIC,
            LogTime DATETIME,
            ParentEntity_id INTEGER,
            primary key (Id)
        )

    The data is stored in SQLite.
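
    A sketch of the usual fix (illustration only): criteria queries resolve mapped property names, not column names, so query on the ParentEntity association and let NHibernate translate it to the ParentEntity_id column:

        public IEnumerable<ChildEntity> GetByParent(ParentEntity parent)
        {
            using (ISession session = OrmHelper.OpenSession())
            {
                // "ParentEntity" is the mapped property on ChildEntity;
                // NHibernate resolves it to the ParentEntity_id column itself.
                return session.CreateCriteria<ChildEntity>()
                    .Add(Restrictions.Eq("ParentEntity", parent))
                    .List<ChildEntity>();
            }
        }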

    Read the article

  • Design Pattern for error handling in ASP.NET 3.5 site

    - by Kevin
    I am relatively new to ASP.NET programming, and to web programming in general. We have a site we recently ported from .NET 1.1 to 3.5. Currently we have two methods of error handling: either catching the error during data load on a page and displaying the formatted error in a label on the page, or redirecting to a generic error page. Both of these are somewhat annoying, as right now I'm trying to redesign how our errors are displayed.

    We are soon moving to master pages, and I'm wondering if there is a way to "build in" an error handling control. What I mean by this is an ASP.NET user control I've designed that simply gets passed the error string returned from the server. If an error occurs, the page would not display the content and would instead display the error control. This gives us the ability to retain the current banner/navigation during an error (which we don't get with the generic error page), and it keeps me from having to add the control to every .aspx page we have (which I have to do with the label-per-page system).

    Does something like this make sense? Ultimately I just want the error control added in a single place, with all other pages having access to it directly. Is this something master pages help with? Thanks!
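
    One way to sketch the master-page approach (illustration only; the control and class names here are hypothetical): the master owns the error control and exposes a method that content pages call:

        // In the master page's code-behind.
        public partial class SiteMaster : System.Web.UI.MasterPage
        {
            public void ShowError(string message)
            {
                ErrorLabel.Text = Server.HtmlEncode(message); // hypothetical controls
                ErrorPanel.Visible = true;
                ContentPanel.Visible = false; // banner/navigation stay visible
            }
        }

        // From any content page that uses this master:
        //     ((SiteMaster)Master).ShowError(ex.Message);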

    Read the article

  • ScrollView content async downloading problem

    - by Newbee
    Hi! I have a UIScrollView with lots of UIImageViews inside. In the loadView method I assign a temporary image to each UIImageView and start several threads to asynchronously download the real images from the internet. Each thread downloads and assigns images as follows:

        NSData *data = [NSData dataWithContentsOfURL:URL];
        UIImage *img = [UIImage imageWithData:data];
        img_view.image = img;

    Here is the problem: I expected the picture to change after each image downloaded, but I see only the temporary images until all images have downloaded. The UIScrollView still interacts while the images download - I can scroll the temporary images inside it and see the scrollers, and nothing blocks the run loop - but the downloaded images don't appear.

    What I tried: calling sleep() in the download thread - didn't help. Calling setNeedsDisplay for each UIImageView inside the scroll view and for the scroll view itself - didn't help. What's wrong? Thanks.

    Update: I tried some experiments with the number of threads and the number of images to download. Now I'm sure: images redraw only when a thread finishes. For example, if I load 100 images with one thread, the picture updates once, after all the images have downloaded. If I increase the number of threads to 10, the picture updates 10 times, with 10 images appearing per update.

    One more update: I fixed the problem by starting a new thread from the downloading thread each time one image is downloaded and exiting the current thread (instead of downloading several images in a cycle in one thread and exiting only when all are downloaded). Obviously that's not a good solution; there must be a right approach.
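
    The usual suspect (my note, not confirmed in the question): UIKit should only be touched from the main thread, so assigning the image from a background thread can leave the view undrawn until the thread exits. A sketch of the conventional fix, keeping the download on the background thread but the assignment on the main thread:

        NSData *data = [NSData dataWithContentsOfURL:URL];
        UIImage *img = [UIImage imageWithData:data];
        // setImage: is the property setter; run it on the main thread.
        [img_view performSelectorOnMainThread:@selector(setImage:)
                                   withObject:img
                                waitUntilDone:NO];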

    Read the article

  • Comet, responseText and memory usage

    - by ithcy
    Is there a way to clear out the responseText of an XHR object without destroying the XHR object?

    I need to keep a persistent connection open to a web server to feed live data to a browser. The problem is that a relatively large amount of data comes through (several hundred KB per second, constantly), so memory usage is a big problem, because this connection must remain open for at least several minutes. responseText gets very big very quickly, even though the JSON I send back has been crunched as small as it can get.

    Due to the way the server-side app works, if I use AJAX-style short polling and just destroy the XHR object when I'm done with it, I miss significant amounts of important data, even in the few milliseconds it takes to parse the response, create a new XHR and send it out. I do not have the option of using overlapping requests, as the web server only accepts one connection at a time. (Don't ask.) So Comet is exactly the model I need.

    What I would like to do is parse each JSON chunk as it comes back from the server and then clear out responseText so that I can keep using the same connection. However, responseText is read-only; it cannot be directly emptied by any method I have found. Is there a part of the picture I am missing here? Does anyone know any tricks I can use to free up responseText when I'm done reading it? Or is there another place the server responses can go?

    I am not including code because this is really almost a code-agnostic question. The JavaScript routines that spawn the XHRs and handle the returned data are very, very simple.
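
    A sketch of a common workaround (illustration only): you can't shrink responseText, but you can track how much of it you have already parsed, read only the new tail on each progress event, and periodically tear down and reopen the connection so the browser can free the old buffer:

        var offset = 0;
        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/stream', true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 3 || xhr.readyState === 4) {
                // Parse only the bytes that arrived since the last event.
                var chunk = xhr.responseText.substring(offset);
                offset = xhr.responseText.length;
                handleChunk(chunk); // hypothetical parser for your JSON framing
            }
            if (xhr.readyState === 4) {
                offset = 0;
                // Reconnect here; the finished xhr and its buffer can now
                // be garbage-collected, capping memory per connection cycle.
            }
        };
        xhr.send(null);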

    Read the article

  • Vim, LaTeX, word-wrapping, and version control

    - by Bkkbrad
    I'm writing a LaTeX document in vim, and I have it hard-wrapping at 80 characters to make reading easier. However, this causes problems with tracking changes in version control. For example, inserting "Lorem ipsum" at the beginning of this text:

        1 Dolor sit amet, consectetur adipiscing elit. Phasellus bibendum lobortis lectus
        2 quis porta. Aenean vestibulum magna vel purus laoreet at molestie massa
        3 suscipit. Vestibulum vestibulum, mauris nec convallis ultrices, tellus sapien
        4 ullamcorper elit, dignissim consectetur justo tellus et nunc.

    results in:

        1 Lorum ipsum dolor sit amet, consectetur adipiscing elit. Phasellus bibendum
        2 lobortis lectus quis porta. Aenean vestibulum magna vel purus laoreet at
        3 molestie massa suscipit. Vestibulum vestibulum, mauris nec convallis ultrices,
        4 tellus sapien ullamcorper elit, dignissim consectetur justo tellus et nunc.

    When I review this change in git, it tells me that all the lines of the paragraph have changed because of the re-wrapping, even though only one semantic change has occurred. One way around this problem is to put every sentence on its own line. This looks the same in the rendered document, but the source is now harder to read, because the line lengths vary widely:

        1 Lorum ipsum dolor sit amet, consectetur adipiscing elit.
        2 Phasellus bibendum lobortis lectus quis porta.
        3 Aenean vestibulum magna vel purus laoreet at molestie massa suscipit.
        4 Vestibulum vestibulum, mauris nec convallis ultrices, tellus sapien ullamcorper elit, dignissim consectetur justo tellus et nunc.

    (If I soft-wrap at 80, things still look bad, just in a different way.) Is it possible to keep the text on disk with one sentence per line, but display and edit it in vim as if the text of each paragraph were one long line, soft-wrapped at 80 characters? I assume this requires some vim-foo rather than tweaking git or LaTeX.
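
    For the display side, a sketch of standard vim options that get close to this (my suggested combination; note vim soft-wraps at the window edge rather than at a fixed column, so size the window to 80 columns for that exact width):

        " One sentence per line on disk, soft-wrapped display.
        setlocal textwidth=0 wrapmargin=0   " stop vim from hard-wrapping
        setlocal wrap linebreak             " soft-wrap at word boundaries
        setlocal nolist                     " 'list' disables 'linebreak'
        " Optional: move by displayed line rather than file line.
        nnoremap <buffer> j gj
        nnoremap <buffer> k gk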

    Read the article

  • Slow retrieval of data in SQLITE takes a long using ContentProvider

    - by Arlyn
    I have an application on Android (running 4.0.3) that stores a lot of data in Table A. Table A resides in a SQLite database, and I am using a ContentProvider as an abstraction layer above the database. "Lots of data" here means almost 80,000 records per month. Table A is structured like this:

        String SQL_CREATE_TABLE = "CREATE TABLE IF NOT EXISTS " + TABLE_A + " ( "
                + COLUMN_ID + " INTEGER PRIMARY KEY NOT NULL"
                + "," + COLUMN_GROUPNO + " INTEGER NOT NULL DEFAULT(0)"
                + "," + COLUMN_TIMESTAMP + " DATETIME UNIQUE NOT NULL"
                + "," + COLUMN_TAG + " TEXT"
                + "," + COLUMN_VALUE + " REAL NOT NULL"
                + "," + COLUMN_DEVICEID + " TEXT NOT NULL"
                + "," + COLUMN_NEW + " NUMERIC NOT NULL DEFAULT(1)"
                + " )";

    Here is the index statement:

        String SQL_CREATE_INDEX_TIMESTAMP = "CREATE INDEX IF NOT EXISTS "
                + TABLE_A + "_" + COLUMN_TIMESTAMP
                + " ON " + TABLE_A + " (" + COLUMN_TIMESTAMP + ") ";

    I have defined the columns as well as the table name as String constants. I am already experiencing significant slowdown when retrieving data from Table A. The problem is that when I retrieve data from this table, I first put it in an ArrayList and then display it. Obviously, this is possibly the wrong way of doing things, and I am trying to find a better approach using a ContentProvider.

    But that is not the problem that bothers me. The problem is that, for some reason, it takes a lot longer to retrieve data from the other tables, which have only up to 12 records each, and I see this delay increase as the number of records in Table A increases. This does not make any sense: I can understand the delay when retrieving data from Table A, but why the delay in retrieving data from other tables? To clarify, I do not experience this delay if Table A is empty or has fewer than 3000 records. What could be the problem?

    Read the article

  • Building a custom (dynamic) dataset and grid

    - by marko.ivanovski.nz
    Hi, I'm in the process of building a dynamic table in which you can add/remove rows & columns, so it varies in size depending on what the user wants. Its purpose is to store properties for a product, but there can be from 1 to 10 different properties (columns) per product, and multiple instances (rows) of the product as well. Here's a screenshot of what I mean: http://i40.tinypic.com/nbqkxc.jpg

    As you can see, I need the structure to be completely up to the client, which is where I'm getting stuck. I've started writing a custom DataSet that has "add column" & "add row" buttons, and have built a custom Table with TextBoxes in each cell which builds from that DataSet. I have no idea how to store the data on submit, though. To make it even more complex, I need to store this in the database as a string, which I think I can do by converting it to XML.

    Any help is appreciated. I think I just need a pointer in the right direction and am happy to do research from there. Thanks in advance. Marko
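
    If the XML route works for you, a sketch (my own illustration; names are hypothetical) using DataSet's built-in XML round-tripping:

        using System.Data;
        using System.IO;

        var table = new DataTable("ProductProperties");
        table.Columns.Add("Size");        // columns added per user action
        table.Columns.Add("Colour");
        table.Rows.Add("10x20", "Red");   // rows added per user action

        var ds = new DataSet("ProductData");
        ds.Tables.Add(table);

        string xml = ds.GetXml();         // store this string in the database

        // Later, rebuild the same structure from the stored string:
        var restored = new DataSet();
        restored.ReadXml(new StringReader(xml));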

    Read the article

  • Fast, very lightweight algorithm for camera motion detection?

    - by Ertebolle
    I'm working on an augmented reality app for iPhone that involves a very processor-intensive object recognition algorithm (pushing the CPU at 100%, it can get through maybe 5 frames per second). In an effort both to save battery power and to make the whole thing less "jittery", I'm trying to come up with a way to only run that object recognizer when the user is actually moving the camera around.

    My first thought was to simply use the iPhone's accelerometer/gyroscope, but in testing I found that very often people would move the iPhone at a consistent enough attitude and velocity that there wasn't any way to tell it was still in motion. So that left the option of analyzing the actual video feed and detecting movement in that. I got OpenCV working and tried running their pyramidal Lucas-Kanade optical flow algorithm, which works well but seems to be almost as processor-intensive as my object recognizer - I can get it to an acceptable framerate if I lower the depth levels / downsample the image / track fewer points, but then accuracy suffers and it starts to miss some large movements and trigger on small hand-shaky ones.

    So my question is: is there another optical flow algorithm that's faster than Lucas-Kanade if I just want to detect the overall magnitude of camera movement? I don't need to track individual objects; I don't even need to know which direction the camera is moving. All I really need is a way to feed something two frames of video and have it tell me how far apart they are.
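
    A cheaper alternative to optical flow (a sketch of my own, not a recommendation from the question): global frame differencing on aggressively downsampled grayscale frames, which yields a single motion magnitude per frame pair at a fraction of the cost:

        #include <opencv2/opencv.hpp>

        bool cameraIsMoving(const cv::Mat& prevFrame, const cv::Mat& currFrame,
                            double threshold = 8.0) {
            cv::Mat a, b, diff;
            // Downsample hard; only a global magnitude is needed.
            cv::resize(prevFrame, a, cv::Size(80, 60));
            cv::resize(currFrame, b, cv::Size(80, 60));
            cv::cvtColor(a, a, cv::COLOR_BGR2GRAY);
            cv::cvtColor(b, b, cv::COLOR_BGR2GRAY);
            cv::absdiff(a, b, diff);
            double meanDiff = cv::mean(diff)[0]; // average per-pixel change
            return meanDiff > threshold;         // tune threshold empirically
        }

    The trade-off is that pure differencing also fires on scene motion (something moving in front of a still camera), which may or may not matter for gating the recognizer.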

    Read the article

  • Rails - handling global site settings

    - by egarcia
    I'm developing a new Rails application which is meant to be installed several times in order to run several sites. Some things, like the "Site Title" or the "Default Number of Items per Page", clearly belong in a "global settings" table / config file. I've made a list of the things I think I'll need.

    An ActiveRecord model that is capable of:
    - storing different kinds of data (I suppose this would be accomplished by encoding the values as strings in the db, probably with a "type" field);
    - indexing settings by name;
    - validations based on the "type" attribute (i.e. don't accept invalid dates for "date" settings);
    - validations based on an allows_nil property.

    And a controller that allows me to change settings via views. A sketch of such a model appears below.

    I'm pretty sure I could implement this myself, but I'm not willing to reinvent the wheel. I've done some searching, but could only find rails-settings, which doesn't really serve me: I need a proper model & controller so I can use declarative-authorization, and it does not provide controller or view facilities. Is there a gem or plugin out there that implements what I want, or any library I should look at? Thanks a lot.
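
    The sketch mentioned above (my own illustration, Rails 2/3-era API; column names are hypothetical):

        class Setting < ActiveRecord::Base
          # columns: name:string, value:string, value_type:string, allows_nil:boolean
          validates_uniqueness_of :name
          validates_presence_of :value, :unless => :allows_nil

          # Setting['site_title'] #=> "My Site"
          def self.[](name)
            find_by_name(name.to_s).try(:typed_value)
          end

          # Decode the stored string according to its declared type.
          def typed_value
            case value_type
            when 'integer' then value.to_i
            when 'boolean' then value == 'true'
            when 'date'    then Date.parse(value)
            else value
            end
          end
        end

    Being a plain model, it slots straight into declarative-authorization and an ordinary SettingsController with standard views.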

    Read the article

  • Sending series of images to display like a movie on iPhone

    - by unknownthreat
    Allow me to elaborate. On the server, we will have a program that takes data from the iPhone and processes it to produce a series of images. Each time an image is generated, it will be sent back to be displayed on the iPhone.

    I have done all of the above using UDP, OpenGL, and so on. It works: the images are transferred to the iPhone and can be displayed, but it is slow. The image's resolution is around 320 x 420 and we send it pixel by pixel. This naive implementation leads to a low framerate - I see around 2-3 frames per second. Some UDP packets are also dropped, which is expected.

    Is there any sort of compression method available for something like this? Is there any other method that can make this better? NOTE: please don't just write "compression" as an answer, because we are aware that we will need to do it in some way.
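
    A sketch of the usual direction (illustration only; the method and ivar names are hypothetical): encode each frame as JPEG on the server and decode it on the phone, rather than streaming raw pixels. A 320 x 420 RGBA frame is ~525 KB raw, while a mid-quality JPEG is typically a few tens of KB:

        // Called once all of a frame's packets have been reassembled.
        - (void)displayFrameData:(NSData *)jpegBytes {
            UIImage *frame = [UIImage imageWithData:jpegBytes];
            // UIKit work belongs on the main thread.
            [imageView performSelectorOnMainThread:@selector(setImage:)
                                        withObject:frame
                                     waitUntilDone:NO];
        }

    Since whole frames are decoded, per-frame framing with a sequence number also lets you simply drop frames with missing packets instead of displaying corrupt ones.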

    Read the article

  • Personal Cache vs Memcache?

    - by Kerry
    I have a personal caching class (based off WordPress'), which can be seen here: http://pastie.org/988427

    I recently learned about memcache, and read that you should memcache EVERYTHING: http://highscalability.com/blog/2010/5/17/7-lessons-learned-while-building-reddit-to-270-million-page.html

    My first thought was just to keep my class with its current functions and make it use memcache instead - is there any downside to doing this? The main difference I see is that memcache persists on the server from page to page, while mine lasts for one page load.

    The problem I see arising, and this is with any system, is that the data is dynamic. It changes all the time, whether it's search results, visible products, etc. If it's all cached, won't that create a problem? Is there a way to handle this? Obviously if something brings back the same results every time it should be cached, but that's why I was doing it on a per-page-load basis. I'm sure there is a way to handle this; is the cache time usually just set to somewhere between 5 minutes and an hour?
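
    For comparison, a sketch (my own illustration) of the same get/set surface backed by the PHP Memcache extension, with a TTL so dynamic data ages out:

        <?php
        $memcache = new Memcache();
        $memcache->connect('127.0.0.1', 11211);

        function cache_get($memcache, $key) {
            $value = $memcache->get($key);
            return $value === false ? null : $value;
        }

        function cache_set($memcache, $key, $value, $ttl = 300) {
            // 300 s: a staleness window acceptable for search results,
            // product lists, etc.; tune per data type.
            $memcache->set($key, $value, 0, $ttl);
        }

    The TTL is one answer to the dynamic-data worry; the other is explicit invalidation, i.e. deleting or overwriting the key whenever the underlying data is written.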

    Read the article

  • Problem with DSL and Business Rules creation in Drools

    - by jillika iyer
    Hi, I am using Eclipse with the Drools plugin to create rules. I want to create business rules, and my main aim is to provide the user with a set of options he can use when building rules. For example, if an Apple can have only 3 colors, I want to provide something like a drop-down so that the user knows beforehand which options he can use in his rules. Is that possible?

    I am creating a DSL, but so far I am unable to provide the above functionality for a business rule, and I am also getting an error implementing a basic DSL. The code that adds the DSL in my RuleRunner class is as follows:

        InputStream ruleSource = RuleRunner.class.getClassLoader()
                .getResourceAsStream("/Rule1.dslr");
        InputStream dslSource = RuleRunner.class.getClassLoader()
                .getResourceAsStream("/sample-dsl.dsl");
        // Load the rules, using the DSL
        addRulesToThisPackage.addPackageFromDrl(
                new InputStreamReader(ruleSource), new InputStreamReader(dslSource));

    I have both sample-dsl.dsl and Rule1.dslr in my working directory. The error is encountered when adding the DSL to the package (the last line). Error stack:

        Exception in thread "main" java.lang.NullPointerException
            at java.io.Reader.<init>(Unknown Source)
            at java.io.InputStreamReader.<init>(Unknown Source)
            at com.org.RuleRunner.loadRuleFile(RuleRunner.java:96)
            at com.org.RuleRunner.loadRules(RuleRunner.java:48)
            at com.org.RuleRunner.runStatelessRules(RuleRunner.java:109)
            at com.org.RulesTest.main(RulesTest.java:41)

    My DSL file has basic mappings as per the online documentation. The DSL rule I created is:

        expander sample-dsl.dsl

        rule "A status changes B status"
        when
            There is an A
            - has an address
            There is a B
            - has name
        then
            - print updated A and Aaddress
        end

    I created the DSL in Eclipse. Is the code I added to load it into my package correct, or am I missing something? It seems like my program is unable to find the DSL. Please help. Can you point me in the right direction for creating a user-friendly business rule? Thanks. J
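
    About the NullPointerException specifically, a sketch of the usual fix (my reading, not confirmed): ClassLoader.getResourceAsStream() takes a path without a leading slash (the leading-slash convention belongs to Class.getResourceAsStream()), and a null stream passed to InputStreamReader produces exactly this stack trace:

        // Drop the leading '/' when going through the ClassLoader,
        // and fail loudly if the resources are not on the classpath.
        InputStream ruleSource = RuleRunner.class.getClassLoader()
                .getResourceAsStream("Rule1.dslr");
        InputStream dslSource = RuleRunner.class.getClassLoader()
                .getResourceAsStream("sample-dsl.dsl");
        if (ruleSource == null || dslSource == null) {
            throw new IllegalStateException(
                    "Rule1.dslr or sample-dsl.dsl not found on the classpath");
        }
        addRulesToThisPackage.addPackageFromDrl(
                new InputStreamReader(ruleSource), new InputStreamReader(dslSource));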

    Read the article

  • How to find the latest row for each group of data

    - by Jason
    Hi All, I have a tricky problem that I'm trying to find the most effective method to solve. Here's a simplified version of my view structure.

    Table: Audits

        AuditID | PublicationID | AuditEndDate | AuditStartDate
        1       | 3             | 13/05/2010   | 01/01/2010
        2       | 1             | 31/12/2009   | 01/10/2009
        3       | 3             | 31/03/2010   | 01/01/2010
        4       | 3             | 31/12/2009   | 01/10/2009
        5       | 2             | 31/03/2010   | 01/01/2010
        6       | 2             | 31/12/2009   | 01/10/2009
        7       | 1             | 30/09/2009   | 01/01/2009

    There are 3 queries I need: one to get all the data; one to get only the history data (that is, everything except the latest item per publication, by AuditEndDate); and one to get only the latest item per publication (by AuditEndDate). There's an added layer of complexity in that I have a date restriction (on a per user/group basis) where certain user groups can only see between certain dates. You'll notice this in the where clauses below as the AuditEndDate <= ... and AuditStartDate >= ... conditions.

    For each publication, select all the available data:

        select *
        from Audits
        where AuditEndDate <= '31/03/10' and AuditStartDate >= '06/06/2009';

    For each publication, select all the data but exclude the latest item (by AuditEndDate):

        select *
        from Audits
        left join (select AuditID as aid, PublicationID as pid, max(AuditEndDate) as pend
                   from Audits
                   where AuditEndDate <= '31/03/2009' /* user restrict */
                   group by pid) Ax on Ax.pid = Audits.PublicationID
        where pend != Audits.AuditEndDate
          and AuditEndDate <= '31/03/10' and AuditStartDate >= '06/06/2009' /* user restrict */

    For each publication, select only the latest item (by AuditEndDate):

        select *
        from Audits
        left join (select AuditID as aid, PublicationID as pid, max(AuditEndDate) as pend
                   from Audits
                   where AuditEndDate <= '31/03/2009' /* user restrict */
                   group by pid) Ax on Ax.pid = Audits.PublicationID
        where pend = Audits.AuditEndDate
          and AuditEndDate <= '31/03/10' and AuditStartDate >= '06/06/2009' /* user restrict */

    At the moment, queries 1 and 3 work fine, but query 2 just returns all the data instead of the restricted set. Can anyone help me? Thanks, Jason
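
    A sketch of one likely repair (my own illustration; it assumes both date restrictions should use the same window, and uses ISO date literals for clarity): compute the per-publication maximum over the same restricted window you filter on, and join on that:

        -- History query: everything except each publication's latest row,
        -- with "latest" computed over the same user-restricted window.
        select a.*
        from Audits a
        join (select PublicationID as pid, max(AuditEndDate) as pend
              from Audits
              where AuditEndDate <= '2010-03-31'
                and AuditStartDate >= '2009-06-06'
              group by PublicationID) latest
          on latest.pid = a.PublicationID
        where a.AuditEndDate <> latest.pend
          and a.AuditEndDate <= '2010-03-31'
          and a.AuditStartDate >= '2009-06-06';

    Note the subquery in the posted query 2 cuts off at '31/03/2009' while the outer query cuts off at '31/03/10'; with mismatched windows, the "latest" row found by the subquery is rarely the latest row the outer query sees, so nothing gets excluded.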

    Read the article

  • How to store a list in a column of a database table.

    - by John Berryman
    Howdy! So, per Mehrdad's answer to a related question, I get that a "proper" database table column doesn't store a list. Rather, you should create another table that effectively holds the elements of said list and then link to it directly or through a junction table. However, the kind of list I want to create will be composed of unique items (unlike the linked question's fruit example). Furthermore, the items in my list are explicitly sorted, which means that if I stored the elements in another table, I'd have to sort them every time I accessed them. Finally, the list is basically atomic, in that any time I wish to access the list I will want the entire list rather than just a piece of it - so it seems silly to issue a database query to gather together pieces of the list.

    AKX's solution (linked above) is to serialize the list and store it in a binary column. But this also seems inconvenient, because it means I have to worry about serialization and deserialization.

    Is there any better solution? If there is no better solution, then why? It seems that this problem should come up from time to time.

    ... just a little more info to let you know where I'm coming from. As soon as I had begun understanding SQL and databases in general, I was turned on to LINQ to SQL, so now I'm a little spoiled because I expect to deal with my programming object model without having to think about how the objects are queried or stored in the database.

    Thanks all! John
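
    For the record, a sketch of the normalized alternative (my own illustration): an explicit position column keeps the list ordered without re-sorting logic scattered through application code, at the cost of one indexed ORDER BY per read:

        CREATE TABLE List (
            ListId   INT PRIMARY KEY
        );

        CREATE TABLE ListElement (
            ListId   INT NOT NULL REFERENCES List(ListId),
            Position INT NOT NULL,          -- explicit sort order
            Value    NVARCHAR(100) NOT NULL,
            PRIMARY KEY (ListId, Position), -- one element per slot
            UNIQUE (ListId, Value)          -- elements unique within a list
        );

        -- The whole list, in order, in one query:
        SELECT Value FROM ListElement WHERE ListId = 1 ORDER BY Position;

    The ORDER BY runs against the primary-key index, so "sorting every time" is an index scan rather than an actual sort, and LINQ to SQL maps the child table to an ordinary association you can .OrderBy(e => e.Position).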

    Read the article
