Search Results

Search found 33758 results on 1351 pages for 'primary key design'.

Page 49/1351 | < Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >

  • Java - Is this a bad design pattern?

    - by Walter White
    Hi all, in our application I have seen code written like this. User.java is the User entity, a regular POJO with getters/setters:

        public class User {
            protected String firstName;
            protected String lastName;
            // ... getters/setters
        }

        public class UserSearchCommand {
            protected List<User> users;
            protected int currentPage;
            protected int sortColumnIndex;
            protected SortOrder sortOrder;
            // the current user we're editing, if at all
            protected User user;

            public String getFirstName() { return user.getFirstName(); }
            public String getLastName() { return user.getLastName(); }
        }

    Now, from my experience, this pattern (or anti-pattern) looks bad to me. For one, we're mixing several concerns together. While they're all user-related, it deviates from typical POJO design. If we're going to go this route, shouldn't we do this instead?

        public class UserSearchCommand {
            protected List<User> users;
            protected int currentPage;
            protected int sortColumnIndex;
            protected SortOrder sortOrder;
            // the current user we're editing, if at all
            protected User user;

            public User getUser() { return user; }
        }

    Simply return the user object, and then we can call whatever methods on it as we wish. Since the wrapper style is quite different from typical bean development, JSR 303 bean validation doesn't work for this model and we have to write validators for every bean. Does anyone else see anything wrong with this design pattern, or am I just being picky as a developer? Walter

    Read the article

  • The Primary Cause of Failed IT Projects

    - by Paul Nielsen
    During my career I’ve been a part of dozens of projects. Some I was on from the start; most I came in to help bail out. Some went smoothly and were a pleasure to build and maintain, and some projects failed (failed being broadly defined as projects that were not completed, or were completed but were a horrid mess: very complex, impossible to maintain or refactor, and a royal pain to keep running). While there are a number of factors that can contribute to a failed project, in my career it seems the primary...(read more)

    Read the article

  • Database design for credit based purchases

    - by FreshCode
    I need an elegant way to implement credit-based purchases for an online store with a small variety of products which can be purchased using virtual credit or real currency. Alternatively, products could be priced only in credits.

    Previous work: I have implemented credit-based purchasing before using different product types (e.g. Credit, Voucher or Music) with post-order processing to assign purchased credit to users in the form of real currency, which could subsequently be used to discount future orders' charge totals. This worked fairly well as a makeshift solution, but did not succeed in disconnecting the virtual currency from the real currency, which is what I'd like to do, since spending credits is psychologically easier for customers than spending real currency.

    Design: I need guidance on designing the database correctly with support for the simultaneous bulk purchase of credits at a discount along with real-currency products. Alternatively, should all products be priced in credits, with only credit having a real-currency value?

    Existing database design:

        Products table (partial):    ProductId, Title, Type, UnitPrice, SalePrice
        Orders table (partial):      OrderId, UserId (references Users, not shown), Status, Value, Total
        OrderItems table (partial, similar to CartItems):
                                     OrderItemId, OrderId (references Orders),
                                     ProductId (references Products), Quantity, UnitPrice, SalePrice
        UserCredits table (prospective):
                                     CreditId, UserId (references Users, not shown),
                                     Value (+/- value, summed over time to determine the balance), Date

    I'm using ASP.NET MVC and LINQ-to-SQL on a SQL Server database.
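    A ledger like the prospective UserCredits table keeps the balance as a derived value rather than a stored one. A minimal LINQ-to-SQL-style sketch, assuming hypothetical entity and helper names that mirror the table above (not the poster's actual model):

        using System;
        using System.Linq;

        // Hypothetical entity mirroring the prospective UserCredits table.
        public class UserCredit
        {
            public int CreditId { get; set; }
            public int UserId { get; set; }
            public decimal Value { get; set; }  // positive = purchase/grant, negative = spend
            public DateTime Date { get; set; }
        }

        public static class CreditLedger
        {
            // The balance is never stored; it is the sum of all ledger rows for the user.
            public static decimal BalanceOf(IQueryable<UserCredit> credits, int userId)
            {
                return credits.Where(c => c.UserId == userId)
                              .Select(c => c.Value)
                              .DefaultIfEmpty(0m)
                              .Sum();
            }

            // Spending credits is just another (negative) ledger row, so history stays auditable.
            public static UserCredit Spend(int userId, decimal amount)
            {
                return new UserCredit { UserId = userId, Value = -amount, Date = DateTime.UtcNow };
            }
        }

    Keeping credits in their own ledger, with real currency appearing only on the credit-purchase order line, is one way to actually disconnect the two currencies.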

    Read the article

  • Database design for sharing photos site?

    - by javaLearner.java
    I am using PHP and MySQL. If I want to build a photo-sharing website, what's the best database design for uploading and displaying photos? This is what I have in mind:

        domain
         |_ photos
         |_ user

    A logged-in user uploads photos at http://www.domain.com/user/upload.php. The photos are stored in the filesystem and the path to each photo is stored in the database, so my photos folder looks like photos/userA/subfolders+photos, photos/userB/subfolders+photos, photos/userC/subfolders+photos, etc. Other people can view a photo at http://www.domain.com/photos/user/?photoid=123, where 123 is the photo id; from there I query the database to fetch the path and display the image. My questions:

    1. What's the best database design for a photo-sharing website (like Flickr)?
    2. Will there be any problems if I keep creating new folders inside the "photos" folder? What if hundreds of thousands of users register? What are the best practices?
    3. What sizes of photos should I keep? Currently I only store a thumbnail (100x100) and a photo of at most 1600x1200 px.
    4. What other things should I take note of when developing a photo-sharing website?
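    On question 2, a common way to avoid enormous per-user directory listings is to shard the storage path by a hash of the photo id, so directory sizes stay bounded no matter how many users register. A language-neutral sketch, shown in C# only to keep this page's examples in one language (names are illustrative):

        using System;

        public static class PhotoPaths
        {
            // Derives a storage path like "photos/7b/00/123.jpg" from the photo id.
            // Two levels of 256 buckets each keep every directory a manageable size.
            public static string For(long photoId, string extension = ".jpg")
            {
                int b1 = (int)(photoId % 256);
                int b2 = (int)((photoId / 256) % 256);
                return $"photos/{b1:x2}/{b2:x2}/{photoId}{extension}";
            }
        }

        // Usage: PhotoPaths.For(123) returns "photos/7b/00/123.jpg"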

    Read the article

  • Android Multiple Handlers Design Question

    - by Soumya Simanta
    This question is related to an existing question I asked. I thought I'd ask a new question instead of replying to the other one, since I cannot comment on my previous question because of a word limit. Marc wrote: "'I've more than one Handler in an Activity.' Why? If you do not want a complicated handleMessage() method, then use post() (on Handler or View) to break the logic up into individual Runnables. Multiple Handlers make me nervous." I'm new to Android, so my question is: is having multiple handlers in a single activity a bad design? Here is a sketch of my current implementation. I have a MapActivity that creates a data thread (a UDP socket that listens for data). My first handler is responsible for sending data from the data thread to the activity. On the map I have a bunch of "dynamic" markers that are refreshed frequently. Some of these markers are video markers, i.e., if the user clicks a video marker, I add a video view that extends android.opengl.GLSurfaceView to my map activity and display video on this new view. I use my second handler to send information about the marker the user tapped, from the ItemizedOverlay onTap(int index) method. The user can close the video view by tapping on it; I use my third handler for this. I would appreciate it if people could tell me what's wrong with this approach and suggest better ways to implement it. Thanks.

    Read the article

  • Movies recommendation engine conceptual database design

    - by Supyxy
    I am working on a movie recommendations engine and I'm facing a DB design issue. My actual database looks like this:

        MOVIES [ID, TITLE]
        KEYWORDS_TABLE [ID, KEY_ID]  -- ID is a foreign key to MOVIES.ID; KEY_ID is a key into a text keywords table

    This is not the entire DB, but I showed what's important for my problem. I have about 50,000 movies and about 1.3 million keyword correlations, and basically my algorithm consists in extracting all the movies that share keywords with a given movie, then ordering them by the number of keyword correlations. For example, I looked for a movie similar to 'Cast Away' and it returned 'Six days and six nights' because it had the most keyword correlations (4 keywords):

        Island
        Airplane crash
        Stranded
        Pilot

    The algorithm is based on more factors, but this one is the most important and the most difficult to approach. Basically what I do now is get all the movies that have at least one keyword in common with the given movie and then order them by other factors which are not important for the moment. There wouldn't be any problem if there weren't so many records: a query lasts in many cases up to 10-20 seconds, and some queries return even over 5000 movies. Someone already helped me here (thanks Mark Byers) with optimizing the query, but that's not enough, because it still takes too long:

        SELECT DISTINCT M.title
        FROM keywords_table K1
        JOIN keywords_table K2 ON K2.key_id = K1.key_id
        JOIN movies M ON K2.id = M.id
        WHERE K1.id = 4

    So I thought it would be better if I pre-computed those lists of movie recommendations for each movie, but I'm not sure how to design the tables. Is this a good idea, and how would you approach it?
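    One way to pre-compute the lists is a pair-wise similarity table that is rebuilt offline, so the online query never touches the big keywords table. A hedged sketch of the idea (the movie_similarity table and its columns are assumptions, and the T-SQL is held in C# constants only to keep this page's examples in one language):

        public static class SimilarityPrecompute
        {
            // Offline rebuild: one row per (movie, similar movie) pair with the
            // shared-keyword count, which is the ordering factor described above.
            public const string RebuildSql = @"
                DELETE FROM movie_similarity;
                INSERT INTO movie_similarity (movie_id, similar_id, shared_keywords)
                SELECT K1.id, K2.id, COUNT(*)
                FROM keywords_table K1
                JOIN keywords_table K2 ON K2.key_id = K1.key_id AND K2.id <> K1.id
                GROUP BY K1.id, K2.id;";

            // Online lookup: a cheap single-join read ordered by the precomputed count.
            public const string LookupSql = @"
                SELECT TOP 20 M.title
                FROM movie_similarity S
                JOIN movies M ON M.id = S.similar_id
                WHERE S.movie_id = @movieId
                ORDER BY S.shared_keywords DESC;";
        }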

    Read the article

  • Simplifying Testing through design considerations while utilizing dependency injection

    - by Adam Driscoll
    We are a few months into a green-field project to rework the Logic and Business layers of our product. By utilizing MEF (dependency injection) we have achieved high levels of code coverage and I believe that we have a pretty solid product. As we have been working through some of the more complex logic I have found it increasingly difficult to unit test. We are utilizing the CompositionContainer to query for types required by these complex algorithms. My unit tests are sometimes difficult to follow due to the lengthy mock object setup process that must take place, just right, to allow for certain circumstances to be verified. My unit tests often take me longer to write than the code that I'm trying to test. I realize this is not only an issue with dependency injection but with design as a whole. Is poor method design or lack of composition to blame for my overly complex tests? I've tried base classing tests, creating commonly used mock objects and ensuring that I utilize the container as much as possible to ease this issue but my tests always end up quite complex and hard to debug. What are some tips that you've seen to keep such tests concise, readable, and effective?
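    One pattern that tends to shrink lengthy mock setup is a test-data builder that owns the "just right" arrangement once, so each test states only what differs. A hedged sketch assuming Moq-style mocks; the IRateProvider interface is invented for illustration:

        using Moq;

        // Invented for illustration: a dependency the algorithm under test resolves.
        public interface IRateProvider
        {
            decimal RateFor(string region);
        }

        // The builder centralizes the fiddly arrangement in one place.
        public class RuleEngineFixture
        {
            private readonly Mock<IRateProvider> _rates = new Mock<IRateProvider>();

            public RuleEngineFixture WithRate(string region, decimal rate)
            {
                _rates.Setup(r => r.RateFor(region)).Returns(rate);
                return this;
            }

            public IRateProvider Build() => _rates.Object;
        }

        // In a test: var rates = new RuleEngineFixture().WithRate("EU", 0.21m).Build();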

    Read the article

  • Silverlight Async Design Pattern Issue

    - by Mike Mengell
    I'm in the middle of a Silverlight application and I have a function which needs to call a web service and use the result to complete the rest of the function. My issue is that I would normally have made a synchronous web service call, got the result, and carried on with the function. Since Silverlight doesn't support synchronous web service calls without additional custom classes to mimic them, I figure it would be best to go with the flow of async rather than fight it. So my question is about the best design pattern for working with async calls in program flow. In the following example I want to use the myFunction TypeId parameter depending on the return value of the web service call, but I don't want to call the web service until this function is called. How can I alter my code design to allow for the async call?

        string _myPath;

        bool myFunction(Guid TypeId)
        {
            WS_WebService1.WS_WebService1SoapClient proxy = new WS_WebService1.WS_WebService1SoapClient();
            proxy.GetPathByTypeIdCompleted +=
                new System.EventHandler<WS_WebService1.GetPathByTypeIdCompletedEventArgs>(proxy_GetPathByTypeIdCompleted);
            proxy.GetPathByTypeIdAsync(TypeId);

            // Get return value -- but the callback has not run yet at this point
            if (_myPath == @"\\Server1")
            {
                // Use the TypeId parameter in here
            }
        }

        void proxy_GetPathByTypeIdCompleted(object sender, WS_WebService1.GetPathByTypeIdCompletedEventArgs e)
        {
            string server = e.Result.Server;
            _myPath = @"\\" + server;
        }

    Thanks in advance, Mike
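    The usual Silverlight-era answer is to turn the remainder of the function into a continuation: instead of returning a value, accept a callback and finish the work inside the Completed handler. A minimal sketch of that shape, reusing the question's generated proxy types (the Action<bool> callback is the addition):

        using System;

        // Instead of bool myFunction(Guid), pass the rest of the work in as a callback.
        void MyFunction(Guid typeId, Action<bool> onDone)
        {
            var proxy = new WS_WebService1.WS_WebService1SoapClient();
            proxy.GetPathByTypeIdCompleted += (s, e) =>
            {
                string myPath = @"\\" + e.Result.Server;
                bool ok = myPath == @"\\Server1";
                if (ok)
                {
                    // Use typeId here; the lambda captures it.
                }
                onDone(ok);  // the continuation runs only once the async result arrives
            };
            proxy.GetPathByTypeIdAsync(typeId);
        }

        // Usage: MyFunction(id, ok => { /* the code that used to follow the call */ });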

    Read the article

  • How do you formulate the Domain Model in Domain Driven Design properly (Bounded Contexts, Domains)?

    - by lko
    Say you have a few applications which deal with a few different Core Domains. The examples are made up, and it's hard to put a real example with meaningful data together concisely. In Domain Driven Design (DDD), when you start looking at Bounded Contexts and Domains/Subdomains, a Bounded Context can be a "phase" in a lifecycle. An example of a Context would be within an e-commerce system: although you could model it as a single system, it would also warrant splitting into separate Contexts. Each of these areas within the application has its own Ubiquitous Language, its own Model, and a way to talk to other Bounded Contexts to obtain the information it needs. The Core, Sub, and Generic Domains are the areas of expertise and can be numerous in complex applications.

    Say there is a long process dealing with an Entity, for example a Book in a core domain. Looking at the Bounded Contexts, there can be a number of phases in the book's lifecycle: say outline, creation, correction, publish, and sale phases. Now imagine a second core domain, perhaps a store domain. The publisher has its own branch of stores to sell books. The store can have a number of Bounded Contexts (lifecycle phases), for example a "Stock" or "Inventory" context. In the first domain there is probably a Book database table with basically just an ID to track the different book Entities in the different lifecycles. Now suppose you have 10+ supporting domains, e.g. Users, Catalogs, Inventory (it's hard to think of relevant examples), and for example a Domain Model for the Book Outline phase, the Creation phase, the Correction phase, the Publish phase, and the Sale phase. Then the Store core domain probably has a number of lifecycle phases of its own.

        public class BookId : Entity
        {
            public long Id { get; set; }
        }

    In the creation phase (Bounded Context) the book could be a simple class:

        public class Book : BookId
        {
            public string Title { get; set; }
            public List<string> Chapters { get; set; }
            //...
        }

    whereas in the publish phase (Bounded Context) it would have all the text, release date, etc.:

        public class Book : BookId
        {
            public DateTime ReleaseDate { get; set; }
            //...
        }

    The immediate benefit I can see in separating by "lifecycle phase" is that it's a great way to separate business logic so there are neither mammoth all-encompassing Entities nor Domain Services. A problem I have is figuring out how to concretely define the rules for the physical layout of the Domain Model.

    A. Does the Domain Model get "modeled" so there are as many Bounded Contexts (separate projects, etc.) as there are lifecycle phases across the core domains in a complex application?
    Edit: Answer to A. Yes, according to the answer by Alexey Zimarev there should be an entire "Domain" for each Bounded Context.

    B. Is the Domain Model typically arranged by Bounded Contexts (or Domains, or both)?
    Edit: Answer to B. Each Bounded Context should have its own complete "Domain" (Services/Entities/VOs/Repositories).

    C. Does it mean there can easily be tens of "segregated" Domain Models, and can multiple projects use them (the Entities/Value Objects)?
    Edit: Answer to C. There is a complete "Domain" for each Bounded Context, and the Domain Model (Entity/VO layer/project) isn't "used" by the other Bounded Contexts directly, only via chosen paths (i.e. via Domain Events).

    The part that I am trying to figure out is how the Domain Model is actually implemented once you start to figure out your Bounded Contexts and Core/Sub Domains, particularly in complex applications. The goal is to establish the definitions which can help to separate Entities between the Bounded Contexts and Domains.
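    On the "chosen paths" in answer C, the usual mechanism is a domain event published by one Bounded Context and handled by another, each side keeping its own model of the book. A hedged sketch (all names invented for illustration):

        using System;
        using System.Collections.Generic;

        // Raised by the Publish context when a book goes on sale.
        public class BookPublished
        {
            public long BookId { get; set; }
            public DateTime ReleaseDate { get; set; }
        }

        // The Store context subscribes and updates its own model; it never
        // references the Publish context's Book entity directly.
        public class StoreInventoryHandler
        {
            private readonly Dictionary<long, int> _stockByBookId = new Dictionary<long, int>();

            public void Handle(BookPublished e)
            {
                // Seed inventory for the newly published book in this context's own terms.
                if (!_stockByBookId.ContainsKey(e.BookId))
                    _stockByBookId[e.BookId] = 0;
            }
        }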

    Read the article

  • Design of std::ifstream class

    - by Nawaz
    Those of us who have seen the beauty of the STL try to use it as much as possible, and also encourage others to use it wherever we see them using raw pointers and arrays. Scott Meyers has written a whole book on the STL, with the title Effective STL. Yet what happened to the developers of ifstream that they preferred char* over std::string? I wonder why the first parameter of ifstream::open() is of type const char* instead of const std::string&. Please have a look at its signature:

        void open(const char* filename, ios_base::openmode mode = ios_base::in);

    Why this? Why not this:

        void open(const std::string& filename, ios_base::openmode mode = ios_base::in);

    Is this a serious mistake in the design? Or is the design deliberate? What could be the reason? I don't see any reason why they preferred char* over std::string. Note that we could still pass a char* to the latter function that takes std::string; that's not a problem! By the way, I'm aware that ifstream is a typedef, so no comment on my title. :P It looks short, which is why I used it. The actual class template is:

        template<class _Elem, class _Traits>
        class basic_ifstream;

    Read the article

  • how to design a schema where the columns of a table are not fixed

    - by hIpPy
    I am trying to design a schema where the columns of a table are not fixed. For example, I have an Employee table where the columns are not fixed and vary (the attributes of Employee are not fixed and vary). The options I see:

    1. Nullable columns in the Employee table itself, i.e. no normalization.
    2. Instead of adding nullable columns, separate those columns out into their individual tables, e.g. if Address is a column to be added, then create table Address[EmployeeId, AddressValue].
    3. Create tables ExtensionColumnName[EmployeeId, ColumnName] and ExtensionColumnValue[EmployeeId, ColumnValue]. ExtensionColumnName would have ColumnName as "Address" and ExtensionColumnValue would have ColumnValue as the address value:

        Employee table:              EmployeeId, Name
        ExtensionColumnName table:   ColumnNameId, EmployeeId, ColumnName
        ExtensionColumnValue table:  EmployeeId, ColumnNameId, ColumnValue

    There is a drawback in the first two ways, as the schema changes with every new attribute, and note that adding a new attribute is frequent. I am not sure if this is a good or bad design. If someone has had a similar decision to make, please give an insight on things like foreign keys / data integrity, indexing, performance, reporting, etc.
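    Option 3 is the classic entity-attribute-value (EAV) pattern. In application code it typically surfaces as a property bag rather than fixed fields; a minimal sketch, with illustrative names:

        using System.Collections.Generic;

        // The fixed part maps to the Employee table.
        public class Employee
        {
            public int EmployeeId { get; set; }
            public string Name { get; set; }

            // EAV part: one entry per ExtensionColumnName/ExtensionColumnValue pair.
            // Typing, validation and indexing all have to be done by hand, which is
            // the usual price of not fixing the schema.
            public Dictionary<string, string> Extensions { get; } =
                new Dictionary<string, string>();
        }

        // Usage: employee.Extensions["Address"] = "1 Main St";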

    Read the article

  • Node & Redis: Crucial Design Issues in Production Mode

    - by Ali
    This question is a hybrid one, both technical and system-design related. I'm developing the backend of an application that will handle approx. 4K requests per second. We are using Node.js for speed, and in terms of our database structure we are using MongoDB, with Redis as a layer between Node and MongoDB handling volatile operations. I'm quite stressed because we are expecting concurrent requests that we need to handle carefully, and we are quite close to launch. However, I do not believe I've applied the correct approach with Redis. I have a class Student, and students constantly change stages (such as 'active', 'doing homework', 'in lesson', etc.), so I created a Redis DB for each state (1 for 'active', 2 for 'doing homework'). Here is the structure of the 'active' students table:

        xa5p - JSON stringified object #1
        pQrW - JSON stringified object #2
        active_student_table - {{studentId:'xa5p'}, {studentId:'pQrW'}}

    Since there is no 'select all keys' method in Redis, I've been advised to use a set, such that when I run the 'smembers' command I receive the keys and later do a 'get' for each id in order to find specific users (say, age older than 15). I've also been told that I should in fact never use KEYS in production. My question is: no matter how 'conceptual' it is, what specific things should I avoid doing in Node & Redis in production? Are there any issues related to my design? Students must be objects, and I sure can keep them in a list, but I haven't done that yet. Is it that crucial in production?
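    The set-per-state layout is the standard Redis substitute for the missing "select all". Sketched here with the StackExchange.Redis client, only to keep this page's examples in one language; the same SADD/SREM/SMEMBERS/HSET commands apply from Node's redis client, and the key names are illustrative:

        using StackExchange.Redis;

        public static class StudentStore
        {
            // One hash per student, one set per state acting as a secondary index.
            public static void SetState(IDatabase db, string studentId, string oldState, string newState)
            {
                db.SetRemove("students:" + oldState, studentId);  // drop from the old index
                db.SetAdd("students:" + newState, studentId);     // add to the new index
                db.HashSet("student:" + studentId, "state", newState);
            }

            // SMEMBERS on a bounded set instead of KEYS over the whole keyspace.
            public static RedisValue[] IdsInState(IDatabase db, string state)
            {
                return db.SetMembers("students:" + state);
            }
        }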

    Read the article

  • Are Tuples a poor design decision in C#?

    - by Jason Webb
    With the addition of the Tuple class in .NET 4, I have been trying to decide if using them in my design is a bad choice or not. The way I see it, a Tuple is a shortcut to writing a result class (I am sure there are other uses too). So this:

        public class ResultType
        {
            public string StringValue { get; set; }
            public int IntValue { get; set; }
        }

        public ResultType GetAClassedValue()
        {
            //..Do Some Stuff
            ResultType result = new ResultType { StringValue = "A String", IntValue = 2 };
            return result;
        }

    is equivalent to this:

        public Tuple<string, int> GetATupledValue()
        {
            //...Do Some stuff
            Tuple<string, int> result = new Tuple<string, int>("A String", 2);
            return result;
        }

    So setting aside the possibility that I am missing the point of Tuples, is the example with a Tuple a bad design choice? To me it seems like less clutter, but not as self-documenting and clean. Meaning that with the type ResultType, it is very clear later on what each part of the class means, but you have extra code to maintain. With the Tuple<string, int> you will need to look up and figure out what each Item represents, but you write and maintain less code. Any experience you have had with this choice would be greatly appreciated.
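    The trade-off shows up most clearly at the call site, where Item1 and Item2 carry no meaning. A small fragment reusing the question's two methods:

        // With the named result class, the call site documents itself:
        ResultType r1 = GetAClassedValue();
        Console.WriteLine(r1.StringValue + ": " + r1.IntValue);

        // With the tuple, the reader must remember what Item1 and Item2 mean:
        Tuple<string, int> r2 = GetATupledValue();
        Console.WriteLine(r2.Item1 + ": " + r2.Item2);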

    Read the article

  • design pattern for related inputs

    - by curiousMo
    My question is a design question. Let's say I have a data entry web page with four drop-down lists, each depending on the previous one, and a bunch of text boxes:

        ddlCountry (DropDownList)
        ddlState (DropDownList)
        ddlCity (DropDownList)
        ddlBoro (DropDownList)
        txtAddress (TextBox)
        txtZipcode (TextBox)

    and an object that represents a data row with a value for each:

        countrySeqid
        stateSeqid
        citySeqid
        boroSeqid
        address
        zipCode

    Naturally the country, state, city and boro values will be values of primary keys of some lookup tables. When the user chooses to edit that record, I would load it from the database and load it into the page. The issue I have is how to streamline loading the drop-down lists. I have some code that grabs the object, loops through its values and moves them to their corresponding input controls in one shot, but in this case I will have to load ddlCountry with its possible values, assign its value, then do the same for the rest of the ddls. I guess I am looking for an elegant solution. I am using ASP.NET, but I think that is irrelevant to the question; I am looking more for a design pattern.
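    One way to avoid four copies of the same load-then-select code is to describe each dependent list once and walk the levels in order, each one filtered by the previous selection. A hedged sketch of the idea (type and member names are invented, not tied to ASP.NET):

        using System;
        using System.Collections.Generic;

        // Describes one cascading level: how to fetch its options given the
        // parent's id, and which saved value to re-select when editing a record.
        public class CascadeLevel
        {
            public Func<int?, IList<KeyValuePair<int, string>>> LoadOptions { get; set; }
            public int? SelectedId { get; set; }
        }

        public static class CascadeBinder
        {
            // Walks the levels top-down; bindToControl fills the actual drop-down.
            public static void Bind(
                IEnumerable<CascadeLevel> levels,
                Action<CascadeLevel, IList<KeyValuePair<int, string>>> bindToControl)
            {
                int? parentId = null;
                foreach (var level in levels)
                {
                    var options = level.LoadOptions(parentId);
                    bindToControl(level, options);
                    parentId = level.SelectedId;  // drives the next list's query
                }
            }
        }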

    Read the article

  • Compromising design & code quality to integrate with existing modules

    - by filip-fku
    Greetings! I inherited a C#.NET application that I have been extending and improving for a while now. Overall it was obviously a rush job (or whoever wrote it was seemingly less competent than I am). The app pulls some data from an embedded device and displays and manipulates it. At the core is a communications thread in the main application form which executes a 600+ line method that calls functions all over the place, implementing a state machine: lots of if-state-then-do type code. Interaction with the device is done by setting the state/mode globally and letting the thread do its thing. (This is just one example of the badness of the code; overall it is not very OO-like, and it reminds me of the style of the embedded C code the device firmware is written in.)

    My problem is that this piece of code is central to the application, and the software, communications protocol and device firmware are not documented at all. Obviously, to carry on with my work I have to interact with this code. What I would like some guidance on is whether it is worth scrapping this code and trying to piece together something more reasonable from the information I can reverse engineer. I can't decide! The reason I don't want to refactor is that the code already works, and changing it will surely be a long, laborious and unpleasant task. On the flip side, not refactoring means I sometimes have to compromise the design of other modules so that I may call my code from this state machine! I've heard "If it ain't broke, don't fix it!", so I am wondering if that should apply when "it" is influencing the design of future code. Any advice would be appreciated! Thanks!

    Read the article

  • Importing data from third party datasource (open architecture design )

    - by mare
    How would you design an application (classes, interfaces in a class library) in .NET when we have a fixed database design on our side and we need to support imports of data from third-party data sources, which will most likely be in XML? For instance, let us say we have a Products table in our DB which has columns

        Id, Title, Description, TaxLevel, Price

    and on the other side we have, for instance, Products:

        ProductId, ProdTitle, Text, BasicPrice, Quantity

    Currently I do it like this: have the third-party XML converted to classes and XSDs and then deserialize its contents into strongly typed objects (what we get as a result of this process is classes like ThirdPartyProduct, ThirdPartyClassification, etc.). Then I have methods like this:

        InsertProduct(ThirdPartyProduct newProduct)

    I do not use interfaces at the moment, but I would like to. What I would like is to implement something like

        public class Contoso_ProductSynchronization : ProductSynchronization
        {
            InsertProduct(ContosoProduct p)
        }

    where ProductSynchronization will be an interface or abstract class. There will most likely be many implementations of ProductSynchronization. I cannot hardcode the types; classes like ContosoProduct and NorthwindProduct might be created from the third-party XMLs (so preferably I would continue to use deserialization). Hopefully someone will understand what I'm trying to explain here. Just imagine you are the seller: you have numerous providers and each one uses their own proprietary XML format. I don't mind the development, which will of course be needed every time a new format appears, because it will only require 10-20 methods to be implemented; I just want the architecture to be open and support that.
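    A generic abstract base makes the pattern explicit: each provider supplies only its deserialized type and the mapping into our model. A hedged sketch of one way to shape it (members beyond those named in the question are invented):

        using System.IO;
        using System.Xml.Serialization;

        // Our fixed side (abbreviated from the Products table).
        public class Product
        {
            public string Title { get; set; }
            public decimal Price { get; set; }
        }

        // One subclass per provider; TThirdParty is that provider's deserialized shape.
        public abstract class ProductSynchronization<TThirdParty>
        {
            protected abstract Product Map(TThirdParty source);

            public void ImportProduct(Stream xml)
            {
                var serializer = new XmlSerializer(typeof(TThirdParty));
                var thirdParty = (TThirdParty)serializer.Deserialize(xml);
                InsertProduct(Map(thirdParty));
            }

            private void InsertProduct(Product p)
            {
                // Write to our own Products table here.
            }
        }

        public class ContosoProduct
        {
            public string ProdTitle { get; set; }
            public decimal BasicPrice { get; set; }
        }

        public class Contoso_ProductSynchronization : ProductSynchronization<ContosoProduct>
        {
            protected override Product Map(ContosoProduct p) =>
                new Product { Title = p.ProdTitle, Price = p.BasicPrice };
        }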

    Read the article

  • SQL SERVER Create Primary Key with Specific Name when Creating Table

    It is interesting how sometimes the documentation of simple concepts is not available online. I had received an email from one of my readers asking how to create a primary key with a specific name when creating the table itself. He said he knows the method where he can create the table and then [...]
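    For reference, the syntax in question is the inline named constraint. A hedged illustration with invented table and constraint names, held in a C# constant to match the other examples on this page; the T-SQL is what matters:

        public static class Ddl
        {
            // Naming the constraint inline avoids auto-generated names like PK__Employee__xxxx.
            public const string CreateTableWithNamedPk = @"
                CREATE TABLE dbo.Employee
                (
                    EmployeeId INT NOT NULL,
                    Name NVARCHAR(100) NOT NULL,
                    CONSTRAINT PK_Employee PRIMARY KEY CLUSTERED (EmployeeId)
                );";
        }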

    Read the article

  • Android - Key Dispatching Timed Out

    - by Donal Rafferty
    In my Android application I am getting a very strange crash. When I press a button (image) on my UI, the entire application freezes and after a couple of seconds I get the dreaded force-close dialog. Here is what gets printed in the log:

        WARN/WindowManager(88): Key dispatching timed out sending to package name/Activity
        WARN/WindowManager(88): Dispatch state: {{KeyEvent{action=1 code=5 repeat=0 meta=0 scancode=231 mFlags=8} to Window{432bafa0 com.android.launcher/com.android.launcher.Launcher paused=false} @ 1281611789339 lw=Window{432bafa0 com.android.launcher/com.android.launcher.Launcher paused=false} lb=android.os.BinderProxy@431ee8e8 fin=false gfw=true ed=true tts=0 wf=false fp=false mcf=Window{4335fc58 package name/Activity paused=false}}}
        WARN/WindowManager(88): Current state: {{null to Window{4335fc58 package name/Activity paused=false} @ 1281611821193 lw=Window{4335fc58 package name/Activity paused=false} lb=android.os.BinderProxy@434c9bd0 fin=false gfw=true ed=true tts=0 wf=false fp=false mcf=Window{4335fc58 package name/Activity paused=false}}}
        INFO/ActivityManager(88): ANR in process: package name (last in package name)
        INFO/ActivityManager(88): Annotation: keyDispatchingTimedOut
        INFO/ActivityManager(88): CPU usage:
        INFO/ActivityManager(88): Load: 5.18 / 5.1 / 4.75
        INFO/ActivityManager(88): CPU usage from 7373ms to 1195ms ago:
        INFO/ActivityManager(88): package name: 6% = 1% user + 5% kernel / faults: 7 minor
        INFO/ActivityManager(88): system_server: 5% = 4% user + 1% kernel / faults: 27 minor
        INFO/ActivityManager(88): tiwlan_wifi_wq: 3% = 0% user + 3% kernel
        INFO/ActivityManager(88): mediaserver: 0% = 0% user + 0% kernel
        INFO/ActivityManager(88): logcat: 0% = 0% user + 0% kernel
        INFO/ActivityManager(88): TOTAL: 12% = 5% user + 6% kernel + 0% softirq
        INFO/ActivityManager(88): Removing old ANR trace file from /data/anr/traces.txt
        INFO/Process(88): Sending signal. PID: 1812 SIG: 3
        INFO/dalvikvm(1812): threadid=7: reacting to signal 3
        INFO/dalvikvm(1812): Wrote stack trace to '/data/anr/traces.txt'

    This is the code for the button (image):

        findViewById(R.id.endcallimage).setOnClickListener(new OnClickListener() {
            public void onClick(View v) {
                mNotificationManager.cancel(2);
                Log.d("Handler", "Endcallimage pressed");
                if (callConnected)
                    elapsedTimeBeforePause = SystemClock.elapsedRealtime() - stopWatch.getBase();
                try {
                    serviceBinder.endCall(lineId);
                } catch (RemoteException e) {
                    e.printStackTrace();
                }
                dispatchKeyEvent(new KeyEvent(KeyEvent.ACTION_DOWN, KeyEvent.FLAG_SOFT_KEYBOARD));
                dispatchKeyEvent(new KeyEvent(KeyEvent.ACTION_UP, KeyEvent.KEYCODE_BACK));
            }
        });

    If I comment out the following, pressing the button (image) doesn't cause the crash:

        try {
            serviceBinder.endCall(lineId);
        } catch (RemoteException e) {
            e.printStackTrace();
        }

    The above code calls down through several levels of the app and into the native layer (NDK). Could the call passing through several objects be leading to the force close? It seems unlikely, as several other buttons do the same without issue. How about the native layer? Could some code I've built with the NDK be causing the issue? Any other ideas as to what the cause might be?

    Read the article

  • OCR an RSA key fob (security token)

    - by user130582
    I put together a quick WinForm/embedded IE browser control which logs into our company's bank website each morning and scrapes/exports the desired deposit information (the bank is a smallish regional bank). Since we have a few dozen "pseudo-accounts" that draw from the same master account, this actually takes 10-15 minutes to retrieve. Anyway, the only problem is that our business bank account requires an RSA security token (http://www.rsa.com/node.aspx?id=1156). If you are not familiar, it is a small device which shows a random 6-digit number every 15(?) seconds, so I have to prompt for this value before starting. This is on top of the website's login-based security model, so even if you create a read-only account that can't do anything, you still have to put the RSA number in. We have 5 of these tokens for different people in the company. From our perspective this is nuisance security.

    I was joking about using a web camera to OCR the digits from the key fob so they didn't have to type it in, mainly so that the scraping/export would be done before anyone arrives in the morning. Well, they asked if I could really do it. So now I ask you: how hard (how many hours) do you think it would take to OCR these digits reliably from a JPEG image produced by the camera? I already know I can get the JPEG easily. I think you get 3 tries to log in, so it really needs to hit a 99% accuracy rate. I could work on this in my off time, but they don't want me to put more than a few hours into it, so I want to leverage as much existing code as possible.

    This is a 7-segment display (like an alarm clock), so it's not exactly the text an OCR package is used to seeing. Also, there is a countdown timer on the side of the display; typically when it is down to 1 bar, you wait until the next number appears and it starts over at 5 bars (like signal strength on your cell phone). This would need to be read as well, but it is not text either. Anyway, the more I think about it as I type this, the less convinced I am that I can truly get this right, so maybe I should just work on it in my spare time?
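    Because the source is a 7-segment display, full OCR may be overkill: sample seven fixed regions per digit, threshold each one, and match the on/off pattern against the ten digit encodings. A hedged sketch of that lookup (locating and thresholding the segments in the JPEG is the real work and is assumed done):

        using System.Collections.Generic;

        public static class SevenSegment
        {
            // Bit order: a(top), b(top-right), c(bottom-right), d(bottom),
            //            e(bottom-left), f(top-left), g(middle).
            private static readonly Dictionary<int, char> Patterns = new Dictionary<int, char>
            {
                { 0b0111111, '0' }, { 0b0000110, '1' }, { 0b1011011, '2' },
                { 0b1001111, '3' }, { 0b1100110, '4' }, { 0b1101101, '5' },
                { 0b1111101, '6' }, { 0b0000111, '7' }, { 0b1111111, '8' },
                { 0b1101111, '9' },
            };

            // segmentsOn[i] is true when segment i is darker than the threshold.
            public static char? Decode(bool[] segmentsOn)
            {
                int key = 0;
                for (int i = 0; i < 7; i++)
                    if (segmentsOn[i]) key |= 1 << i;
                return Patterns.TryGetValue(key, out var digit) ? digit : (char?)null;
            }
        }

    With only ten fixed-font digits on a high-contrast LCD, a template match of this kind is much more likely to hit the 99% target than a general-purpose OCR engine.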

    Read the article

  • Data access pattern, combining push and pull?

    - by andlju
    I need some advice on what kind of pattern(s) I should use for pushing/pulling data into my application. I'm writing a rule engine that needs to hold quite a large amount of data in memory in order to be efficient enough. I have some rather conflicting requirements:

    1. It is not acceptable for the engine to always have to wait for a full pre-load of all data before it is functional.
    2. Only fetching and caching data on demand will lead to the engine taking too long before it is running quickly enough.
    3. An external event can trigger the need for specific parts of the data to be reloaded.

    Basically, I think I need a combination of pushing and pulling data into the application. A simplified version of my current "pattern" looks like this (in pseudo-C# written in Notepad):

        // This interface is implemented by all classes that need the data
        interface IDataSubscriber
        {
            void RegisterData(Entity data);
        }

        // This interface is implemented by the data access class
        interface IDataProvider
        {
            void EnsureLoaded(Key dataKey);
            void RegisterSubscriber(IDataSubscriber subscriber);
        }

        class MyClassThatNeedsData : IDataSubscriber
        {
            IDataProvider _provider;

            MyClassThatNeedsData(IDataProvider provider)
            {
                _provider = provider;
                _provider.RegisterSubscriber(this);
            }

            public void RegisterData(Entity data)
            {
                // Save data for later
                StoreDataInCache(data);
            }

            void UseData(Key key)
            {
                // Make sure that the data has been stored in cache
                _provider.EnsureLoaded(key);
                Entity data = GetDataFromCache(key);
            }
        }

        class MyDataProvider : IDataProvider
        {
            List<IDataSubscriber> _subscribers;

            // Make sure that the data for key has been loaded to all subscribers
            public void EnsureLoaded(Key key)
            {
                if (HasKeyBeenMarkedAsLoaded(key))
                    return;
                PublishDataToSubscribers(key);
                MarkKeyAsLoaded(key);
            }

            // Force all subscribers to get a new version of the data for key
            public void ForceReload(Key key)
            {
                PublishDataToSubscribers(key);
                MarkKeyAsLoaded(key);
            }

            void PublishDataToSubscribers(Key key)
            {
                Entity data = FetchDataFromStore(key);
                foreach (var subscriber in _subscribers)
                {
                    subscriber.RegisterData(data);
                }
            }
        }

        // This class will be spun off on startup and should make sure that all data
        // is preloaded as quickly as possible
        class MyPreloadingThread
        {
            IDataProvider _provider;

            MyPreloadingThread(IDataProvider provider)
            {
                _provider = provider;
            }

            void RunInBackground()
            {
                IEnumerable<Key> allKeys = GetAllKeys();
                foreach (var key in allKeys)
                {
                    _provider.EnsureLoaded(key);
                }
            }
        }

    I have a feeling, though, that this is not necessarily the best way of doing this. Just the fact that explaining it seems to take two pages feels like an indication. Any ideas? Any patterns out there I should have a look at?
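    One concrete hazard in the sketch above: EnsureLoaded can race between the preloading thread and on-demand callers, and _subscribers can be mutated while PublishDataToSubscribers enumerates it. A hedged refinement of just the provider's bookkeeping (string keys stand in for the question's Key type):

        using System.Collections.Concurrent;

        class ThreadSafeBookkeeping
        {
            // TryAdd is atomic, so only the first caller (background preloader or
            // on-demand user) publishes the data; later callers see it as loaded.
            private readonly ConcurrentDictionary<string, bool> _loadedKeys =
                new ConcurrentDictionary<string, bool>();

            public void EnsureLoaded(string key)
            {
                if (_loadedKeys.TryAdd(key, true))
                {
                    // First time this key is seen: fetch and publish.
                    // PublishDataToSubscribers(key);
                }
            }

            public void ForceReload(string key)
            {
                // Re-publish regardless of the loaded flag.
                // PublishDataToSubscribers(key);
                _loadedKeys[key] = true;
            }
        }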

    Read the article

  • how to design this relation in a DB schema

    - by raticulin
    I have a table Car in my DB; one of the columns is purchaseDate. I want to be able to tag every car with a number of policies (limited to 10 policies). Each policy has a time to live (TTL, a duration like '5 years', '10 months', etc.), that is, for how long since the car's purchaseDate the policy can be applied. I need to perform the following actions:

    1. When inserting a Car, it will be set with a number of policies (at least one is set).
    2. Sometimes a Car will be updated to add/remove a Policy.
    3. Searches must be done taking date/policies into account, for example: 'select all cars that are not covered by any policy as of today'.

    My current design is (pol0..pol9 are the policies):

        CREATE TABLE Car (
            id int NOT NULL IDENTITY(1,1),
            purchaseDate datetime NOT NULL,
            -- more stuff...
            pol0 smallint DEFAULT NULL,
            pol1 smallint DEFAULT NULL,
            pol2 smallint DEFAULT NULL,
            pol3 smallint DEFAULT NULL,
            pol4 smallint DEFAULT NULL,
            pol5 smallint DEFAULT NULL,
            pol6 smallint DEFAULT NULL,
            pol7 smallint DEFAULT NULL,
            pol8 smallint DEFAULT NULL,
            pol9 smallint DEFAULT NULL,
            PRIMARY KEY (id)
        )

        CREATE TABLE Policy (
            id smallint NOT NULL,
            name varchar(50) COLLATE Latin1_General_BIN NOT NULL,
            ttl varchar(100) COLLATE Latin1_General_BIN NOT NULL,
            PRIMARY KEY (id)
        )

    The problem I am facing is that the SQL to perform the query above is a nightmare to write: as I don't know in which column each policy can be, I have to check all columns for every policy, etc. So I am wondering whether it is worth changing this. My questions are:

    1. The smallint as the Policy id was chosen instead of an 'int IDENTITY' in order to save some space, as there are going to be millions of Car records. It just adds complexity when creating a Policy, as we must handle the id ourselves. Was it worth doing this?
    2. I am thinking that maybe there is a much better design. Obviously we could move the policy/car relation to its own table, CarPolicy. The benefits would be: no limit of 10 policies per car; adding/removing policies becomes much easier; and when only the default policy applies (when no others are applied, one called the Default policy is applied), we could signal that by having no entry in CarPolicy (this is currently done by inserting the Default policy id in one of the columns). The cons are that we would need to change the DB, ORM classes, etc. What would you recommend? Maybe there is another smart way to implement this, without using the CarPolicy table, that we are not aware of?
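    For comparison, the CarPolicy route turns the 'not covered today' search into a plain anti-join. A hedged sketch (it assumes ttl is re-stored as an integer ttlMonths so the expiry date is computable in SQL; the T-SQL sits in C# constants to match the other examples on this page):

        public static class CarPolicySql
        {
            // Junction table: one row per (car, policy) pair, no pol0..pol9 columns.
            public const string CreateJunction = @"
                CREATE TABLE CarPolicy (
                    carId int NOT NULL REFERENCES Car(id),
                    policyId smallint NOT NULL REFERENCES Policy(id),
                    PRIMARY KEY (carId, policyId)
                );";

            // Cars with no policy still in effect today.
            public const string UncoveredToday = @"
                SELECT c.id
                FROM Car c
                WHERE NOT EXISTS (
                    SELECT 1
                    FROM CarPolicy cp
                    JOIN Policy p ON p.id = cp.policyId
                    WHERE cp.carId = c.id
                      AND DATEADD(month, p.ttlMonths, c.purchaseDate) >= GETDATE()
                );";
        }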

    Read the article

< Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >