Search Results

Search found 62526 results on 2502 pages for 'data processing'.


  • Proper Use Of HTML Data Attributes

    - by VirtuosiMedia
    I'm writing several JavaScript plugins that are run automatically when the proper HTML markup is detected on the page. For example, when a tabs class is detected, the tabs plugin is loaded dynamically and it automatically applies the tab functionality. Any customization options for the JavaScript plugin are set via HTML5 data attributes, very similar to what Twitter's Bootstrap framework does. The appeal of the above system is that, once you have it working, you don't have to worry about manually instantiating plugins; you just write your HTML markup. This is especially nice if people who don't know JavaScript well (or at all) want to make use of your plugins, which is one of my goals. This setup has been working very well, but for some plugins I'm finding that I need a more robust set of options. My choices seem to be having an element with many data-* attributes or allowing for a single data-options attribute with a JSON options object as its value. Having a lot of attributes seems clunky and repetitive, but going the JSON route makes it slightly more complicated for novices, and I'd like to avoid full-blown JavaScript in the attributes if I can. I'm not entirely sure which way is best. Is there a third option that I'm not considering? Are there any recommended best practices for this particular use case?
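
    To make the trade-off concrete, here is a minimal sketch of the two markup styles being weighed against each other (the attribute and option names are invented purely for illustration, not taken from the plugins in question):

        <!-- One data-* attribute per option -->
        <div class="tabs" data-active-tab="2" data-animate="true" data-duration="300">...</div>

        <!-- A single data-options attribute holding a JSON object -->
        <div class="tabs" data-options='{"activeTab": 2, "animate": true, "duration": 300}'>...</div>

    Both forms are readable from JavaScript via getAttribute (or dataset); the difference is that the second needs a JSON.parse call and well-formed JSON from the page author.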

    Read the article

  • Data classes: getters and setters or different method design

    - by Frog
    I've been trying to design an interface for a data class I'm writing. This class stores styles for characters, for example whether the character is bold, italic or underlined, but also the font size and the font family. So it has different types of member variables. The easiest way to implement this would be to add getters and setters for every member variable, but this just feels wrong to me. It feels way more logical (and more OOP) to call style.format(BOLD, true) instead of style.setBold(true), so I'd rather use logical methods instead of getters/setters. But I am facing two problems while implementing these methods: I would need a big switch statement over all member variables, since you can't access a variable by the contents of a string in C++. Moreover, you can't overload by return type, which means you can't write one getter like style.getFormatting(BOLD) (I know there are some tricks to do this, but these don't allow for parameters, which I would obviously need). However, if I were to implement getters and setters, there are also issues. I would have to duplicate quite some code because styles can also have a parent style, which means the getters have to look not only at the member variables of this style, but also at those of the parent styles. Because I wasn't able to figure out how to do this, I decided to ask a question a couple of weeks ago: see Object Oriented Programming: getters/setters or logical names. But in that question I didn't stress that it would be just a data object and that I'm not making a text rendering engine, which was the reason one of the people who answered suggested I ask another question while making that clear (because his solution, the decorator pattern, isn't suitable for my problem). So please note that I'm not creating my own text rendering engine; I just use these classes to store data. Because I still haven't been able to find a solution to this problem I'd like to ask this question again: how would you design a styles class like this? And why would you do that? Thanks in advance!
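
    For what it's worth, one way to get style.format(BOLD, true)-style calls without one getter/setter per member and without overloading by return type is to key the values by an enum and store them in a variant, with the parent chain as a fallback. A rough C++17 sketch, with invented names rather than the asker's actual classes:

        #include <map>
        #include <optional>
        #include <string>
        #include <variant>

        enum class StyleKey { Bold, Italic, Underlined, FontSize, FontFamily };
        using StyleValue = std::variant<bool, int, std::string>;

        class Style {
        public:
            void format(StyleKey key, StyleValue value) { values_[key] = std::move(value); }

            // Looks up the local value first, then falls back to the parent chain.
            std::optional<StyleValue> formatting(StyleKey key) const {
                auto it = values_.find(key);
                if (it != values_.end()) return it->second;
                if (parent_) return parent_->formatting(key);
                return std::nullopt;
            }

            void setParent(const Style* parent) { parent_ = parent; }

        private:
            std::map<StyleKey, StyleValue> values_;
            const Style* parent_ = nullptr;
        };

    The caller then extracts the type it expects, e.g. std::get<bool>(*style.formatting(StyleKey::Bold)), which trades the return-type-overload problem for a runtime check, and the parent lookup lives in exactly one place.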

    Read the article

  • Are the only types of data "sources" static and dynamic?

    - by blunders
    Thinking that there might be others, but not sure -- but before getting into that, let me explain what I mean by static and dynamic data sources.

    Static (or datastore) - Meaning that the data's state is non-changing; if it was changed, that would be a new state, and the old data would be considered stateless, meaning it is no longer known to exist or not exist. Another way of looking at a static data source might be that if it is read and written back without modification, the checksum before and after should be exactly the same, regardless of the duration of time between the reading and rewriting of the data. Examples: photos, files, a database record.

    Dynamic (or datastream) - Meaning that the data's state is known to be in flux and never expected to be the same per input. Examples: a live video/audio feed, a stock market feed.

    First let me say the above is a very loose mapping of the concepts, and I'd welcome any feedback. Next, on to the core of the question: are these the only two types of data sources? My guess is that yes, they are -- but that there are hybrid versions of the two. That being, streaming data that has a fixed state. For example, the data being streamed has a checksum given, and each unique checksum is known to be a single instance of static data. On the flip side, static data could be chained via, say, a version control system; when played back, each version might be viewed as a segment of a stream; the thing is, the very fact that it can be played back makes the data source static. Another type might be that the data source is being organically discovered, and it's simply unknown what the state is. Questions, feedback, requests -- just comment, thanks!!

    Read the article

  • How to make the members of my Data Access Layer object aware of their siblings

    - by Graham
    My team currently has a project with a data access object composed like so:

        public abstract class DataProvider
        {
            public CustomerRepository CustomerRepo { get; protected set; }
            public InvoiceRepository InvoiceRepo { get; protected set; }
            public InventoryRepository InventoryRepo { get; protected set; }
            // a couple more like the above
        }

    We have non-abstract classes that inherit from DataProvider, and the type of CustomerRepo that gets instantiated is controlled by that child class:

        public class FloridaDataProvider : DataProvider
        {
            public FloridaDataProvider()
            {
                CustomerRepo = new FloridaCustomerRepo(); // derived from the base CustomerRepository
                InvoiceRepo = new InvoiceRepository();
                InventoryRepo = new InventoryRepository();
            }
        }

    Our problem is that some of the methods inside a given repo would really benefit from having access to the other repos. For example, a method inside InventoryRepository needs to get to Customer data to make some determinations, so I need to pass in a reference to a CustomerRepository object. What's the best way for these "sibling" repos to be aware of each other and have the ability to call each other's methods as needed? Virtually all the other repos would benefit from having the CustomerRepo, for example, because it is where names/phones/etc. are selected from, and these data elements need to be added to the various objects that are returned out of the other repos. I can't just new up a plain CustomerRepository object inside a method within a different repo, because it might not be the base CustomerRepository that actually needs to run.
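
    One commonly suggested arrangement, shown here only as a sketch under the assumption that each repository may receive its owning provider in its constructor (this is not the team's actual code), is to hand every repository a reference back to the DataProvider so that siblings are reached lazily through it:

        public class InventoryRepository
        {
            private readonly DataProvider _provider;

            public InventoryRepository(DataProvider provider)
            {
                _provider = provider;
            }

            public void DetermineSomething()
            {
                // Uses whichever CustomerRepository the concrete provider created
                // (e.g. FloridaCustomerRepo), rather than new-ing up the base class here.
                var customers = _provider.CustomerRepo;
                // ... use customers ...
            }
        }

        public class FloridaDataProvider : DataProvider
        {
            public FloridaDataProvider()
            {
                CustomerRepo = new FloridaCustomerRepo();
                InvoiceRepo = new InvoiceRepository(this);
                InventoryRepo = new InventoryRepository(this);
            }
        }

    An alternative with the same effect but less coupling is to inject only the specific sibling a repository needs (for instance a CustomerRepository into InventoryRepository), ideally behind an interface, which keeps each repository's dependencies explicit.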

    Read the article

  • a flexible data structure for geometries

    - by AkiRoss
    What data structure would you use to represent meshes that are to be altered (e.g. adding or removing faces, vertices and edges), and that have to be "studied" in different ways (e.g. finding all the triangles intersecting a certain ray, or finding all the triangles "visible" from a given point in space)? I need to consider multiple aspects of the mesh: its geometry, its topology and spatial information. The meshes are rather big, say 500k triangles, so I am going to use the GPU when computations are heavy. I tried using arrays of vertices and arrays of indices, but I do not love adding and removing vertices from them. Also, using arrays totally ignores spatial and topological information, which I may need when studying the mesh. So I thought about using custom doubly linked list data structures, but I believe doing so will require me to copy the data to array buffers before going to the GPU. I also thought about using a BST, but I'm not sure it fits. Any help is appreciated. If I have been too fuzzy and you require other information, feel free to ask.

    Read the article

  • Recover Lost data/partition

    - by Kaleido
    This is what happened: I was running 12.04.1 and wanted to install 12.10. Not an upgrade, but a fresh install. When setting up my computer I anticipated this by dividing my 640 GB HD into the following partitions:
    1. 60 GB for Ubuntu, boot
    2. 576 GB for data, mountpoint /home
    3. 4 GB swap
    The idea was to manually select the correct partition in the installer, but I got distracted for a moment and selected the wrong option. Result: the installer started repartitioning the entire HD. When I noticed this I interrupted the installer, but upon reboot it was clear that I was too late; there was no OS to boot to. I booted from a GParted live CD to see if I could recover the data on my previous /home partition, but it's gone. Is there any way to recover the lost data? I searched around and read a lot about TestDisk, but in all the tutorials I've seen, the set-up has been much easier than what I'm facing: I've not only lost my partition table, it's been replaced. Thanks in advance for any ideas that might help! If extra info is needed, please specify and I will do my best to provide it.

    Read the article

  • Dynamic Data Associate Related Table Value?

    - by davemackey
    I have created a LINQ to SQL project in Visual Studio 2010 using Dynamic Data. In this project I have two tables. One is called phones_extension and the other phones_ten. The list of columns in phones_extension looks like this: id, extension, prefix, did_flag, len, ten_id, restriction_class_id, sfc_id, name_display, building_id, floor, room, phone_id, department_id. In phones_ten it looks like this: id, name, pbxid. Now, I'd like to somehow make an association (or inheritance?) that essentially lets me write a query like phones_extension.ten and have it give me the value of phones_ten.name. Right now I have to get phones_extension.ten_id and then match that against phones_ten.id; I'm trying to get the DBML to handle this translation automatically. Is this possible?
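
    If the DBML association is created (drag a relationship from phones_ten.id to phones_extension.ten_id in the designer), LINQ to SQL generates an EntityRef-backed navigation property for it. A simplified, hand-written approximation of that mapping is sketched below; the property name is illustrative and the generated code does more bookkeeping than this:

        private EntityRef<phones_ten> _ten;

        [Association(Storage = "_ten", ThisKey = "ten_id", OtherKey = "id", IsForeignKey = true)]
        public phones_ten ten
        {
            get { return _ten.Entity; }
            set { _ten.Entity = value; }
        }

    With that in place, string name = someExtension.ten.name; works without the manual ten_id lookup.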

    Read the article

  • Core Data grouping data in table

    - by OscarTheGrouch
    I am using Core Data to create a simple database app. I have an entity called "Game" which has a "creator". I have basically used the iPhone table view template and replaced the names, and I have the games listed by creator. Currently the table view looks like this: Chris Ryder, Chris Ryder, Chris Ryder, Chris Ryder, Dan Grimaldi, Dan Grimaldi, Dan Grimaldi, Scott Ricardo, Tim Thermos, Tim Thermos. I am trying to group the table view so that each creator has only one cell and is listed once and only once, like this: Chris Ryder, Dan Grimaldi, Scott Ricardo, Tim Thermos. Any help or suggestions would be greatly appreciated.
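
    One approach that is often suggested for this kind of grouping (offered only as a sketch, not a verified fix for this project) is to fetch the distinct creator values with a dictionary-result fetch request and drive the table from that array instead of from the Game objects themselves:

        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Game"
                                                   inManagedObjectContext:context];
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        [request setEntity:entity];
        [request setResultType:NSDictionaryResultType];
        [request setReturnsDistinctResults:YES];   // only honoured with a dictionary result type
        [request setPropertiesToFetch:
            [NSArray arrayWithObject:[[entity attributesByName] objectForKey:@"creator"]]];

        NSError *error = nil;
        // Each element is a dictionary like { creator = "Chris Ryder"; }, one per unique creator.
        NSArray *creators = [context executeFetchRequest:request error:&error];

    Alternatively, an NSFetchedResultsController created with sectionNameKeyPath:@"creator" groups the existing rows into one section per creator, which may be closer to what the template already uses.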

    Read the article

  • Insert new relationship data in core data

    - by michael
    Hoping someone can shed some light on what I might be doing wrong here. I'm trying to add an "event" to a list of events, represented by a one-to-many (inverse) relationship: MyEvents <--- Event. This is the code:

        MyEvents *myEvents = (MyEvents *)[NSEntityDescription insertNewObjectForEntityForName:@"MyEvents"
                                                                       inManagedObjectContext:context];
        NSLog(@"MYEVENTS: %@", myEvents);
        NSLog(@"EVENT: %@", event);
        [myEvents addEventObject:event];

    My context is fine, and both myEvents and event print perfectly valid information. When I try to add the event that I have (which is passed into this view controller, having been retrieved from Core Data previously) with [myEvents addEventObject:event], it falls over with:

        *** -[NSComparisonPredicate evaluateWithObject:]: message sent to deallocated instance

    MyEvents and Event are just the default generated code. MyEvents contains only the relationship to Event. Thanks.

    Read the article

  • Maintaining integrity of Core Data Entities with many incoming one-to-many relationships

    - by Henry Cooke
    Hi all. I have a Core Data store which contains a number of MediaItem entities that describe, well, media items. I also have NewsItems, which have one-to-many relationships to a number of MediaItems. So far so good. However, I also have PlayerItems and GalleryItems which also have one-to-many relationships to MediaItems. So MediaItems are shared across entities. Given that many entities may have one-to-many relationships, how can I set up reciprocal relationships from a MediaItem to all (1 or more) of the entities which have relationships to it and, furthermore, how can I implement rules to delete MediaItems when the number of those reciprocal relationships drops to 0?

    Read the article

  • Using Live Data in Database Development Work

    - by Phil Factor
    Guest editorial for the Simple-Talk newsletter... in which Phil Factor reacts with some exasperation when coming across a report that a majority of companies were still using financial and personal data for both developing and testing database applications. If you routinely test your development work using real production data that contains personal or financial information, you are probably being irresponsible, and at worst risking a heavy financial penalty for your company. Surprisingly, over 80% of financial companies still do this. Plenty of data breaches and fraud have resulted from the use of real data for testing, and a data breach is a nightmare for any organisation that suffers one. The cost of each data breach averages out at around $7.2 million in the US (£1.9 million in the UK) in notification, escalation, credit monitoring, fines, litigation, legal costs, and lost business due to customer churn. 70% of data breaches come from within the organisation. Real data can be exploited in a number of ways for malicious or criminal purposes. It isn't just the obvious use of items such as name and address, date of birth, social security number, and credit card and bank account numbers: data can be exploited in many subtle ways, so there are excellent reasons to ensure that a high priority is given to the detection and prevention of any data breaches. You'll never successfully guess all the ways that real data can be exploited maliciously, or the ease with which it can be accessed. It would be silly to argue that developers never need access to a copy of the database containing live data. Developers sometimes need to track a bug that can only be replicated on the data from the live database. However, it has to be done in a very restrictive harness. The law makes no distinction between development and production databases when a data breach occurs, so the data has to be held with all appropriate security measures in place. In Europe, the use of personal data for testing requires the explicit consent of the people whose data is being held. There are federal standards such as GLBA, PCI DSS and HIPAA, and most US states have privacy legislation. The task of ensuring compliance and tight security in such circumstances is an expensive and time-consuming overhead. The developer is likely to face investigation if a data breach occurs, even if the company manages to stay in business. Ironically, the use of copies of live data isn't usually the most effective way to develop or test your database. Data is usually time-specific and isn't usually current by the time it is used for testing. Existing data doesn't help much with new functionality, and every time the data is refreshed from production, any test data is likely to be overwritten. Also, it is not always going to test all the 'edge' conditions that are likely to flush out bugs. You still have the task of simulating the dynamics of actual usage of the database, and here you have no alternative to creating 'spoofed' data. Because of the complexities of relational data, it used to be that there was no realistic alternative to developing and testing with live data. However, this is no longer the case. Real data can be obfuscated, or it can be created entirely from scratch. The latter process used to be impractical, but there are now plenty of third-party tools to choose from. The process of obfuscation isn't risk-free: it must access the live data, and the success of the obfuscation has to be carefully monitored.
Database data security isn't an exciting topic to you or me, but to a hacker it can be an all-consuming obsession, especially if there is financial or political gain involved. This is not the sort of adversary one would wish for, and it is far better to accept, and work with, the security restrictions that exist for using live data in database development work, especially when the tools exist to create large, realistic test databases that can be better for several aspects of testing.

    Read the article

  • Data Connection Library in SharePoint Server 2010

    - by Wayne
    What is a Data Connection Library in SharePoint Server 2010? A Data Connection Library in Microsoft SharePoint Server 2010 is a library that can contain two kinds of data connections: an Office Data Connection (ODC) file or a Universal Data Connection (UDC) file. Microsoft InfoPath 2010 uses data connections that comply with the Universal Data Connection (UDC) file schema and typically have either a *.udcx or *.xml file name extension. Data sources described by these data connections are stored on the server and can be used in standard form templates and browser-enabled form templates.

    How to create a SharePoint Server Data Connection Library:
    1. Browse to a SharePoint Server 2010 site on which you have at least Design permissions. If you are on the root site, create a new site before you continue with the next step.
    2. On the Site Actions menu, click More Options.
    3. On the Create page, click Library under Filter By, and then click Data Connection Library.
    4. On the right side of the Create page, type a name for the library, and then click the Create button.
    5. Copy the URL of the new data connection library.

    How to create a new data connection file in InfoPath:
    1. Open InfoPath Designer 2010, click Blank Form, and then click Design Form.
    2. On the Data tab, click Data Connections, and then click Add.
    3. In the Data Connection Wizard, click Create a new connection to, click Receive data, and then click Next.
    4. Click the kind of data source that you are connecting to, such as Database, Web service, or SharePoint library or list, and then click Next.
    5. Complete the remaining steps in the Data Connection Wizard to configure your data connection, and then click Finish to return to the Data Connections dialog box.
    6. In the Data Connections dialog box, click Convert to Connection File.
    7. In the Convert Data Connection dialog box, enter the URL of the data connection library that you previously copied (delete "Forms/AllItems.aspx" and anything following it from the URL), enter a name for the data connection file at the end of the URL, and then click OK. It will take a few moments to convert and save the data connection file to the library.
    8. Confirm that the data connection was converted successfully by examining the Details section of the Data Connections dialog box while the name of the converted data connection is selected.
    9. Browse to the SharePoint data connection library, click the drop-down next to the name of the data connection, click Approve/Reject, click Approved, and then click OK.

    Take advantage of the SharePoint Data Connection Library and other useful features of the SharePoint family of products, which includes SharePoint Foundation 2010, MOSS 2007 and free SharePoint templates or web parts.

    Read the article

  • Implementation of "Automatic Lightweight Migration" for Core Data (iPhone)

    - by RickiG
    Hi. I would like to make my app able to do an automatic lightweight migration when I add new attributes to my Core Data model. In the guide from Apple, this is the only info on the subject I could find:

    Automatic Lightweight Migration: To request automatic lightweight migration, you set appropriate flags in the options dictionary you pass in addPersistentStoreWithType:configuration:URL:options:error:. You need to set values corresponding to both the NSMigratePersistentStoresAutomaticallyOption and the NSInferMappingModelAutomaticallyOption keys to YES:

        NSError *error;
        NSURL *storeURL = <#The URL of a persistent store#>;
        NSPersistentStoreCoordinator *psc = <#The coordinator#>;
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption,
            [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption, nil];

        if (![psc addPersistentStoreWithType:<#Store type#> configuration:<#Configuration or nil#>
                                         URL:storeURL options:options error:&error]) {
            // Handle the error.
        }

    My NSPersistentStoreCoordinator is initialized in this way:

        - (NSPersistentStoreCoordinator *)persistentStoreCoordinator {
            if (persistentStoreCoordinator != nil) {
                return persistentStoreCoordinator;
            }
            NSURL *storeUrl = [NSURL fileURLWithPath:
                [[self applicationDocumentsDirectory] stringByAppendingPathComponent:@"FC.sqlite"]];
            NSError *error = nil;
            persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc]
                initWithManagedObjectModel:[self managedObjectModel]];
            if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType
                                                          configuration:nil
                                                                    URL:storeUrl
                                                                options:nil
                                                                  error:&error]) {
                NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
                abort();
            }
            return persistentStoreCoordinator;
        }

    I am having trouble seeing where and how I should add the Apple code to get automatic lightweight migration working.
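
    For reference, simply substituting the options dictionary from the Apple excerpt for the nil options argument in that accessor would look roughly like this (the store type and file name are kept from the code above; this is just the two snippets combined, not a verified fix):

        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption,
            [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption, nil];

        if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType
                                                       configuration:nil
                                                                 URL:storeUrl
                                                             options:options
                                                               error:&error]) {
            NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
            abort();
        }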

    Read the article

  • Migrating a Large amount of data from old publishing site to new site

    - by tommizzle
    Hi, I am currently in the process of creating a new news/publishing site on the Movable Type platform. There are around 20 or so sites with 20,000+ rows of data to be moved/aggregated into ~8 sites (we have a number of location-specific sites and are going to aggregate the content from these into one single site for each niche). We have discussed how to do this and came to the conclusion that it would probably be better to hire somebody to do it (I could probably do it, but I'm limited on time and am sure that a specialist would be more efficient). So my questions to you guys are:
    1) What kind of skill set should we look for in an applicant?
    2) There will be a large amount of input from our side... is getting somebody to work remotely out of the question?
    3) How long would a task like this traditionally take (I know this question is very subjective, but an estimate would be awesome)?
    4) Do you have any recommendations for firms who would be able to take on a large task like this?
    Thanks in advance, Tom

    Read the article

  • How does one implement storage/retrieval of smart-search/mailbox features?

    - by humble_coder
    Hi all, I have a question regarding the implementation of smart-search features. For example, consider something like "smart mailboxes" in various email applications. Let's assume you have your data (emails) stored in a database and, depending on the field for which the query will be created, you present different options to the end user. For the moment let's assume the Subject, Verb, Object approach… For instance, say you have the following: SUBJECTs: message, to_address, from_address, subject, date_received; VERBs: contains, does_not_contain, is_equal_to, greater_than, less_than; OBJECTs: ??????? Now, in case it isn't clear, I want a table structure (although I'm not opposed to an external XML-esque file of some sort) to store, and later retrieve/present, my criteria for smart searches/mailboxes. As an example, using SVO I could easily store and then reconstruct a query for "date between two dates": simply use "date greater than" AND "date less than". However, what if, in the same smart search, I wanted a "between" OR'ed with another criterion? You can see that it might get out of hand, not necessarily in the query creation (as that is rather simplistic), but in the option presentation and storage mechanism. Perhaps I need to think on a more granular level. Perhaps I need to simply allow the user to select AND or OR for each entry independently instead of making it an all-or-nothing smart search (i.e. instead of MATCH ALL or MATCH ANY, I simply allow them to select; I just don't want it to turn into a Hydra). Any input would be most appreciated. My apologies if the question is a bit incoherent. It is late, and my brain is toast. Best.
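
    One way to keep the storage flat while still allowing mixed AND/OR (a sketch only, with made-up table and column names) is to store each criterion as its own row and add a group number, treating the saved search as groups of AND'ed criteria that are OR'ed together (disjunctive normal form):

        -- Hypothetical schema for saved smart-search criteria.
        CREATE TABLE smart_search (
            id    INTEGER PRIMARY KEY,
            name  VARCHAR(100) NOT NULL
        );

        CREATE TABLE smart_search_criterion (
            id               INTEGER PRIMARY KEY,
            smart_search_id  INTEGER NOT NULL REFERENCES smart_search(id),
            group_no         INTEGER NOT NULL,      -- criteria sharing a group_no are AND'ed
            subject          VARCHAR(50)  NOT NULL, -- e.g. 'date_received'
            verb             VARCHAR(50)  NOT NULL, -- e.g. 'greater_than'
            object           VARCHAR(255) NOT NULL  -- the comparison value, stored as text
        );

    "Date between two dates OR from a given sender" then becomes two rows in group 1 (greater_than and less_than on date_received) plus one row in group 2, and rebuilding the WHERE clause is just OR-ing the groups, without needing a full expression tree.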

    Read the article

  • Spring Data Neo4J @Indexed(unique = true) not working

    - by Markus Lamm
    I'm new to Neo4j and I have, probably, an easy question. There are NodeEntitys in my application; a property (name) is annotated with @Indexed(unique = true) to achieve uniqueness, like I do in JPA with @Column(unique = true). My problem is that when I persist an entity with a name that already exists in my graph, it works fine anyway. But I expected some kind of exception here...?! Here's an overview of my basic code:

        @NodeEntity
        public abstract class BaseEntity implements Identifiable {
            @GraphId
            private Long entityId;
            ...
        }

        public class Role extends BaseEntity {
            @Indexed(unique = true)
            private String name;
            ...
        }

        public interface RoleRepository extends GraphRepository<Role> {
            Role findByName(String name);
        }

        @Service
        public class RoleServiceImpl extends BaseEntityServiceImpl<Role> implements RoleService {
            private RoleRepository repository;

            @Override
            @Transactional
            public T save(final T entity) {
                return getRepository().save(entity);
            }
        }

    And this is my test:

        @Test
        public void testNameUniqueIndex() {
            final List<Role> roles = Lists.newLinkedList(service.findAll());
            final String existingName = roles.get(0).getName();
            Role newRole = new Role.Builder(existingName).build();
            newRole = service.save(newRole);
        }

    That's the point where I expect something to go wrong! How can I ensure the uniqueness of a property without checking it myself? Thanks in advance for any ideas! P.S.: I'm using neo4j 1.8.M07, spring-data-neo4j 2.1.0.BUILD-SNAPSHOT and Spring 3.1.2.RELEASE.
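
    As a stop-gap while the indexing behaviour is investigated, a pre-save check against the repository's own finder will at least surface the duplicate. This is a sketch that only uses the interfaces shown above and assumes the usual getters exist on Role; it is not a fix for the @Indexed(unique = true) behaviour itself:

        @Override
        @Transactional
        public Role save(final Role entity) {
            final Role existing = repository.findByName(entity.getName());
            if (existing != null && !existing.getEntityId().equals(entity.getEntityId())) {
                // Another node already owns this name, so refuse to persist the duplicate.
                throw new IllegalStateException("A Role named '" + entity.getName() + "' already exists");
            }
            return repository.save(entity);
        }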

    Read the article

  • SQLAuthority News – Download Whitepaper – Understanding and Controlling Parallel Query Processing in SQL Server

    - by pinaldave
    My recent article SQL SERVER – Reducing CXPACKET Wait Stats for High Transactional Database has received many good comments regarding MAXDOP 1 and MAXDOP 0. I really enjoyed reading the comments, as they came from industry leaders and gurus. I was researching the subject further and ended up at the following white paper written by Microsoft: Understanding and Controlling Parallel Query Processing in SQL Server. Data warehousing and general reporting applications tend to be CPU intensive because they need to read and process a large number of rows. To facilitate quick data processing for queries that touch a large amount of data, Microsoft SQL Server exploits the power of multiple logical processors to provide parallel query processing operations such as parallel scans. Through extensive testing, we have learned that, for most large queries that are executed in a parallel fashion, SQL Server can deliver linear or nearly linear response time speedup as the number of logical processors increases. However, some queries in high parallelism scenarios perform suboptimally. There are also some parallelism issues that can occur in a multi-user parallel query workload. This white paper describes parallel performance problems you might encounter when you run such queries and workloads, and it explains why these issues occur. In addition, it presents how data warehouse developers can detect these issues, and how they can work around them or mitigate them. To review the document, please download the Understanding and Controlling Parallel Query Processing in SQL Server Word document. Note: The above abstract has been taken from here. The real question is whether parallel queries have made the life of the DBA much simpler, or whether they are looked at as a potential source of performance degradation. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology
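
    For readers arriving from the MAXDOP discussion, the two knobs referred to in the comments look like this in T-SQL (shown purely as a syntax reminder, not as a recommendation either way; the table in the query hint example is the AdventureWorks sample table):

        -- Server-wide default: 0 lets SQL Server use all available processors
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max degree of parallelism', 1;
        RECONFIGURE;

        -- Per-query override via a query hint
        SELECT COUNT(*)
        FROM Sales.SalesOrderDetail
        OPTION (MAXDOP 1);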

    Read the article

  • C#.NET (AForge) against Java (JavaCV, JMF) for video processing

    - by Leron
    I'm starting to get really confused looking deeper and deeper into the video processing world and searching for optimal choices. This is the reason for posting this and some other questions, to try to navigate myself in the best possible way. I really like Java: working with Java, coding with Java. At the same time, C# is not that different from Java, and Visual Studio is maybe the best IDE I've been working with. So even though I really want to do my projects in Java so I can become a better and better Java programmer, I'm also really attracted to video processing, and even though I'm still at the beginning of this journey I want to take the right path. So I'm really in doubt whether Java could be used in a production environment for serious video processing software. As the title says, I have already been looking at maybe the two most used technologies for video processing in Java, JMF and JavaCV, and I'm starting to think that even though they are used and they provide some functionality, when it comes to real work and real projects they are not the first thing that comes to one's mind, I mean to someone who has a professional opinion about this. On the other hand, I haven't had the time to investigate the .NET (C# specifically) options, but even AForge looks like a much more serious library than those provided for Java. So in general, either way I'm going to spend a lot of time learning some technology and trying to do something that makes sense with it, but my plan is that the thing I eventually come up with will be my headline project, to represent my skills and eventually help me find a job in the field. So I really don't want to spend time learning something that will give me the programming result I want but at the same time is not something that is needed in real-world development. So what is your opinion: which language or technology is better for this specific issue? Which one is worth more in the terms I specified above?

    Read the article

  • Beginner video capture and processing/Camera selection

    - by mattbauch
    I'll soon be undertaking a research project in real-time event recognition but have no experience with the programming aspect of video capture (I'm an upperclassman undergraduate in computer engineering). I want to start off on the right foot so advice from anyone with experience would be great. The ultimate goal is to track events such as a person standing up/sitting down, entering/leaving a room, possibly even shrugging/slumping in posture, etc. from a security camera-like vantage point. First of all, which cameras/companies would you recommend? I'm looking to spend ~$100, more if necessary but not much. Great resolution isn't a must, but is desirable if affordable. What about IP network cameras vs. a USB type webcam? Webcams are less expensive, but IP cameras seem like they'd be much less work to deal with in software. What features should I look for in the camera? Once I've selected a camera, what does converting its output to a series of RGB bitmaps entail? I've never dealt with video encoding/decoding so a starting point or a tutorial that will guide me up to this point would be great if anyone has suggestions. Finally, what is the best (least complicated/most efficient) way to display video from the camera plus my own superimposed images (boxes around events in progress, for instance) in a GUI application? I can work on any operating system in any language. I have some experience with win32 GUIs and Java GUIs. The focus of the project is on the algorithm and so I'm trying to get the video capture/display portion of the app done cleanly and quickly. Thanks for any responses!!

    Read the article

  • Basic image processing question

    - by indoman
    Can you enlarge a feature so that, rather than taking up a certain number of pixels, it actually takes up one or two times that many, to make it easier to analyze? Would there be a way to generalize that in MATLAB?
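
    If the goal is simply to scale a cropped feature up by a fixed factor before analysis, the Image Processing Toolbox's imresize does this directly; a small sketch with made-up variable names:

        % Crop the region containing the feature, then enlarge it by a factor of 2.
        feature   = img(rowRange, colRange);            % e.g. img(50:80, 120:160)
        enlarged  = imresize(feature, 2);               % bicubic interpolation by default
        enlargedN = imresize(feature, 2, 'nearest');    % preserves the original pixel values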

    Read the article

  • image processing algorithm in MATLAB

    - by user261002
    I am trying to reconstruct an algorithm belonging to this paper: Decomposition of biospeckle images in temporary spectral bands. Here is an explanation of the algorithm:

    We recorded a sequence of N successive speckle images with a sampling frequency fs. In this way it was possible to observe how a pixel evolves through the N images. That evolution can be treated as a time series and can be processed in the following way: each signal corresponding to the evolution of every pixel was used as input to a bank of filters. The intensity values were previously divided by their temporal mean value to minimize local differences in reflectivity or illumination of the object. The maximum frequency that can be adequately analyzed is determined by the sampling theorem and is half of the sampling frequency fs. The latter is set by the CCD camera, the size of the image, and the frame grabber. The bank of filters is outlined in Fig. 1. In our case, ten 5th-order Butterworth filters were used, but this number can be varied according to the required discrimination. The bank was implemented in a computer using MATLAB software. We chose the Butterworth filter because, in addition to its simplicity, it is maximally flat. Other filters, an infinite impulse response, or a finite impulse response could be used. By means of this bank of filters, ten corresponding signals of each filter of each temporary pixel evolution were obtained as output. Average energy Eb in each signal was then calculated, where pb(n) is the intensity of the filtered pixel in the nth image for filter b divided by its mean value and N is the total number of images. In this way, ten values of energy for each pixel were obtained, each of them belonging to one of the frequency bands in Fig. 1. With these values it is possible to build ten images of the active object, each one of which shows how much energy of time-varying speckle there is in a certain frequency band. False color assignment to the gray levels in the results would help in discrimination.

    And here is my MATLAB code based on that:

        clear all
        for i = 0:39
            str = num2str(i);
            str1 = strcat(str, '.mat');
            load(str1);
            D{i+1} = A;
        end
        new_max = max(max(A));
        new_min = min(min(A));
        for i = 20:180
            for j = 20:140
                ts = [];
                for k = 1:40
                    ts = [ts D{k}(i,j)];   % kth image, pixel (i,j) -- ts is the time series
                end
                ts = double(ts);
                temp = mean(ts);
                ts = ts - temp;
                ts = ts / temp;
                N = 5;    % filter order
                W = [0.00001 0.05; 0.05 0.1; 0.1 0.15; 0.15 0.20; 0.20 0.25;
                     0.25 0.30; 0.30 0.35; 0.35 0.40; 0.40 0.45; 0.45 0.50];
                N1 = 5;
                for ind = 1:10
                    Wn = W(ind,:);
                    [B, A] = butter(N1, Wn);
                    ts_f(ind,:) = filter(B, A, ts);
                end
                for ind = 1:10
                    imag_test1{ind}(i,j) = sum((ts_f(ind,:) ./ mean(ts_f(ind,:))).^2);
                end
            end
        end
        for i = 1:10
            temp_imag = imag_test1{i}(:,:);
            x = isnan(temp_imag);
            temp_imag(x) = 0;
            temp_imag = medfilt2(temp_imag);
            t_max = max(max(temp_imag));
            t_min = min(min(temp_imag));
            temp_imag = (temp_imag - t_min).*(double(new_max - new_min)/double(t_max - t_min)) + double(new_min);
            imag_test2{i}(:,:) = temp_imag;
        end
        for i = 1:10
            A = imag_test2{i}(:,:);
            B = A/max(max(A));
            B = histeq(B);
            figure, imshow(B)
            colorbar
        end

    But I am not getting the same result as the paper. Does anybody have any idea why, or where I have gone wrong? Reference: link to the paper.

    Read the article
