Search Results

Search found 41035 results on 1642 pages for 'object oriented design'.

  • What's the proper way of importing option lists into an Android app?

    - by Scott
    I have been storing option lists for my Android app in a cloud table, for example categories like "historical fiction", "biography", "science fiction", etc. I see the following pros and cons:

    Pros:
    - I can make changes to the list without sending an app update to Google Play.
    - Not normalized: I can use the text in my other data tables instead of a reference ID.

    Cons:
    - The app needs to take time to download the list from the web each time (or at least check for changes).
    - English only.

    I believe the "proper" way to do this is to use the XML resource files. But I need to make sure the selection references my data correctly; that is, my app needs to understand that "Poetry" and "Poesía" are the same thing. Is the correct thing to do to:
    - Forget about it, since I'll never get to the point where I'm translating my app anyway?
    - Use a string-array and use the index (0...x) to know what the selection is?
    - Use a two-dimensional string-array with a reference ID in the first column and the text in the second (as in the sketch below)?
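
    If translation ever becomes real, a parallel-array variant of the last option keeps a locale-independent ID next to each localized label, so "Poetry" and "Poesía" resolve to the same key. A minimal sketch, assuming hypothetical resource names (category_ids, category_labels):

        // A sketch of the "stable ID + localized label" variant, assuming
        // hypothetical resource names. res/values/arrays.xml (with a translated
        // copy under res/values-es/) would contain:
        //
        //   <string-array name="category_labels">
        //       <item>Poetry</item> ...
        //   </string-array>
        //   <integer-array name="category_ids">
        //       <item>7</item> ...
        //   </integer-array>
        //
        // Only the labels are translated; the IDs stay identical across locales,
        // so "Poetry" and "Poesía" both map to ID 7. R is the app's generated
        // resource class.

        import android.content.res.Resources;

        public final class CategoryOptions {
            private final int[] ids;
            private final String[] labels;

            public CategoryOptions(Resources res) {
                ids = res.getIntArray(R.array.category_ids);
                labels = res.getStringArray(R.array.category_labels);
            }

            /** The locale-independent value to store in your data tables. */
            public int idAt(int position) { return ids[position]; }

            /** The localized text to show in the spinner or list UI. */
            public String labelAt(int position) { return labels[position]; }
        }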

  • How many databases to support eCommerce?

    - by Terry Lorber
    I have a system with two databases: one that the customer-facing website uses, and a second that is used by the "backroom" order-fulfillment system. I've been asked to run queries from the website against the backroom system. I'd rather not; it seems risky to allow web-based requests to run unchecked on the internal system. Additionally, this means opening up routing in the firewall to allow external connections to the internal server.

    What's the best practice for eCommerce? Run the entire company off of one database? Or individual databases for each system, with middleware to connect them? Sometimes it might be necessary for the web application to pull data from the internal system, but not based on an HTTP request from the internet. I'm sure the best answer is "it depends!" So, if people have a rule of thumb for when to use middleware and when not to, I'd like to hear it.

  • Fastest way to do a weighted tag search in SQL Server

    - by Hasan Khan
    My table is as follows:

        ObjectID   bigint
        Tag        nvarchar(50)
        Weight     float
        Type       tinyint

    I want to search for all objects that have the tags 'big' or 'large', with the object IDs ordered by the sum of the weights (so objects having both tags will be on top):

        select objectid, row_number() over (order by sum(weight) desc) as rowid
        from tags
        where tag in ('big', 'large') and type = 0
        group by objectid

    The reason for row_number() is that I want paging over the results. The query in its current form is very slow; it takes a minute to execute over 16 million tags. What should I do to make it faster? I have a non-clustered index on (objectid, tag, type). Any suggestions?

  • How can several different datatypes be saved in one table

    - by poseidon
    This is my situation: I am constructing an ad-like application in Django and MySQL, using a flexible-ad approach where we have:

    - a table with ad categories (several categories such as home, furniture, cars, etc.):

          id_category
          name

    - a table with details for the ad categories (home: area, squared meters; car: seats, color):

          id_detail
          id_category (the category the detail describes)
          name
          type (boolean, char, int, long, etc.)

    - the ad table (I am selling a house; I am selling a car):

          id_ad
          id_category
          text
          date

    - a table where I plan to consolidate the details of the ads (home: A-area, 500 sq. meters; car: 5 seats, red):

          id_detail_ad
          id_ad
          id_detail
          value

    Is this possible? Can I have a table of details for all the ads, even if the details include numbers, texts, booleans, etc.? Or would I have to save them all as text and then interpret them via code accordingly? Please express your opinions. Thank you.

  • Where should my "filtering" logic reside with Linq-2-SQL and ASP.NET-MVC in View or Controller?

    - by Nate Bross
    I have a main table with several "child" tables: TableA, and TableAChild1 and TableAChild2. I have a view which shows the information in TableA and then has two columns of all items in TableAChild1 and TableAChild2 respectively; they are rendered with partial views. Both child tables have a bit field VisibleToAll and, depending on the user's role, I'd like to display either all related rows, or only the related rows where VisibleToAll = true.

    This code feels like it should be in the controller, but I'm not sure how it would look, because as it stands the controller (limited version) looks like this:

        return View("TableADetailView", repos.GetTableA(id));

    Would something like this even work? And would it be bad? What if my DataContext gets submitted; would that delete all the rows that have VisibleToAll == false?

        var tblA = repos.GetTableA(id);
        tblA.TableAChild1 = tblA.TableAChild1.Where(tmp => tmp.VisibleToAll == true);
        tblA.TableAChild2 = tblA.TableAChild2.Where(tmp => tmp.VisibleToAll == true);
        return View("TableADetailView", tblA);

    It would also be simple to add that logic to the RenderPartial call from the main view:

        <% Html.RenderPartial("TableAChild1", Model.TableAChild1.Where(tmp => tmp.VisibleToAll == true)); %>

  • Table with unequal rows in XUL layout

    - by Michael
    I'm trying to build something similar (a table with unequal rows) for a Firefox extension using XUL. The most difficult part is the grid with an unequal number of rows. I tried <grid> but it didn't work well. I also don't want to use HTML inside of XUL. Any ideas how to build such a table?

  • How to handle exceptions/errors in PHP?

    - by fayer
    Third-party libraries tend to throw exceptions to the browser and hence kill the script. E.g., if I'm using Doctrine and insert a duplicate record into the database, it will throw an exception. I wonder what best practice is for handling these exceptions. Should I always do a try...catch? But doesn't that mean I will have try...catch all over the script, for every single function/class I use? Or is it just for debugging? I don't quite get the picture, because if a record already exists in the database, I want to tell the user "Record already exists". And if I code a library or a function, should I always use "throw new Exception($message, $code)" when I want to create an error? Please shed some light on how one should create/handle exceptions/errors. Thanks
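
    The pattern that usually answers this, independent of language: catch a specific exception type only where the UI can react to it, and let everything else bubble up to one top-level handler that logs and shows a generic error page. A sketch of that shape, written in Java for illustration, with hypothetical names (DuplicateRecordException stands in for Doctrine's duplicate-key error):

        // Catch a *specific* exception only where the UI can react to it;
        // everything else reaches a single top-level handler. All names here
        // are hypothetical stand-ins.

        class DuplicateRecordException extends RuntimeException {
            DuplicateRecordException(String message) { super(message); }
        }

        public class SaveBoundary {
            // Stand-in for a library call that throws on a duplicate record.
            static void save(String record) {
                if ("existing".equals(record)) throw new DuplicateRecordException(record);
            }

            static String trySave(String record) {
                try {
                    save(record);
                    return "Saved.";
                } catch (DuplicateRecordException e) {
                    // The one failure this screen can explain to the user.
                    return "Record already exists";
                }
                // Any other exception type deliberately propagates upward; one
                // top-level catch (the framework's error page) handles the rest,
                // so try/catch is NOT needed around every call site.
            }

            public static void main(String[] args) {
                System.out.println(trySave("new"));       // Saved.
                System.out.println(trySave("existing"));  // Record already exists
            }
        }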

  • Pattern for creating a database schema using JDBC

    - by Space_C0wb0y
    I have a Java application that loads data from a legacy file format into an SQLite database using JDBC. If the database file specified does not exist, it is supposed to create a new one. Currently the schema for the database is hardcoded in the application. I would much rather have it in a separate file as an SQL script, but apparently there is no easy way to execute an SQL script through JDBC. Is there any other way, or a pattern, to achieve something like this?
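
    One low-tech pattern is to ship the schema as a resource next to the classes and replay it statement by statement. A minimal sketch, assuming a /schema.sql classpath resource, the sqlite-jdbc driver, and a script with no semicolons inside string literals or triggers (the naive split breaks on those):

        import java.io.IOException;
        import java.io.InputStream;
        import java.nio.charset.StandardCharsets;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class SchemaLoader {
            public static void applySchema(Connection conn) throws IOException, SQLException {
                String script;
                try (InputStream in = SchemaLoader.class.getResourceAsStream("/schema.sql")) {
                    if (in == null) throw new IllegalStateException("schema.sql not on classpath");
                    script = new String(in.readAllBytes(), StandardCharsets.UTF_8);
                }
                // Run the script one statement at a time; plain DDL scripts
                // split cleanly on ';'.
                try (Statement stmt = conn.createStatement()) {
                    for (String sql : script.split(";")) {
                        if (!sql.isBlank()) stmt.execute(sql);
                    }
                }
            }

            public static void main(String[] args) throws Exception {
                // SQLite JDBC creates the file on first connect, which fits the
                // "create a new database if missing" requirement.
                try (Connection conn = DriverManager.getConnection("jdbc:sqlite:app.db")) {
                    applySchema(conn);
                }
            }
        }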

  • What strategies are efficient to handle concurrent reads on heterogeneous multi-core architectures?

    - by fabrizioM
    I am tackling the challenge of using both the capabilities of an 8-core machine and a high-end GPU (Tesla 10). I have one big input file, one thread for each core, and one for the GPU handling. To be efficient, the GPU thread needs a big number of lines from the input, while each CPU thread needs only one line to proceed (storing multiple lines in a temp buffer was slower). The file doesn't need to be read sequentially. I am using boost.

    My strategy is to have a mutex on the input stream that each thread locks and unlocks. This is not optimal, because the GPU thread should have higher precedence when locking the mutex, being the fastest and the most demanding one. I can come up with different solutions, but before rushing into implementation I would like to have some guidelines. What approach do you use / recommend?
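
    One alternative to a contended mutex is to make a single reader thread own the stream and feed two bounded queues, so the GPU consumer always receives large batches and the CPU consumers take single lines. The question is boost/C++, but the shape is the same there with threads and condition variables; sketched below in Java for brevity, with the batch and queue sizes as assumptions:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        public class FeedReader {
            static final int GPU_BATCH = 10_000;   // "a big number of lines"
            static final BlockingQueue<List<String>> gpuQueue = new ArrayBlockingQueue<>(4);
            static final BlockingQueue<String> cpuQueue = new ArrayBlockingQueue<>(1024);

            // GPU thread: gpuQueue.take() -> process a whole batch.
            // Each CPU thread: cpuQueue.take() -> process one line.

            public static void main(String[] args) {
                new Thread(() -> {
                    try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
                        List<String> batch = new ArrayList<>(GPU_BATCH);
                        String line;
                        while ((line = in.readLine()) != null) {
                            batch.add(line);
                            if (batch.size() == GPU_BATCH) {
                                // Serve the GPU first; only if its queue is full,
                                // hand the lines to the CPU workers instead.
                                if (!gpuQueue.offer(batch)) {
                                    for (String l : batch) cpuQueue.put(l);
                                }
                                batch = new ArrayList<>(GPU_BATCH);
                            }
                        }
                        if (!batch.isEmpty()) gpuQueue.put(batch);
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    }
                }, "reader").start();
            }
        }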

  • Tree-like queues

    - by Rehno Lindeque
    I'm implementing an interpreter-like project for which I need a strange little scheduling queue. Since I'd like to try and avoid wheel-reinvention, I was hoping someone could give me references to a similar structure or existing work. I know I can simply instantiate multiple queues as I go along; I'm just looking for some perspective from other people who might have better ideas than me ;)

    I envision that it might work something like this: the structure is a tree with a single root. You get a kind of "insert_iterator" to the root and then push elements onto it (e.g. a and b in the example below). However, at any point you can also split the iterator into multiple iterators, effectively creating branches. The branches cannot merge into a single queue again, but you can start popping elements from the front of the queue (again, using a kind of "visitor_iterator") until empty branches can be discarded (at your discretion).

                  { x -> y -> z
        a -> b -> { g -> h -> i -> j
                  { f -> b

    Any ideas? It seems like a relatively simple structure to implement myself using a pool of circular buffers, but I'm following the "think first, code later" strategy :) Thanks
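
    For perspective, here is roughly how small such a structure can be if each branch owns an append-only segment plus a parent pointer: splitting allocates fresh tails, and a root-to-leaf walk recovers one branch's queue. A sketch only; all names (push, split, pathFromRoot) are invented for illustration:

        import java.util.ArrayList;
        import java.util.List;

        /** A tree-shaped queue: one root, branches that fork but never merge. */
        public class BranchQueue<T> {
            private final BranchQueue<T> parent;
            private final List<T> items = new ArrayList<>();          // append-only segment
            private final List<BranchQueue<T>> children = new ArrayList<>();

            public BranchQueue() { this(null); }
            private BranchQueue(BranchQueue<T> parent) { this.parent = parent; }

            /** Push onto this branch's tail (the "insert_iterator"). */
            public void push(T value) { items.add(value); }

            /** Split the tail into several independent branches. */
            public List<BranchQueue<T>> split(int n) {
                for (int i = 0; i < n; i++) children.add(new BranchQueue<>(this));
                return new ArrayList<>(children);
            }

            /** A "visitor_iterator" stand-in: one root-to-leaf path, front to back. */
            public static <T> List<T> pathFromRoot(BranchQueue<T> leaf) {
                List<BranchQueue<T>> path = new ArrayList<>();
                for (BranchQueue<T> b = leaf; b != null; b = b.parent) path.add(0, b);
                List<T> out = new ArrayList<>();
                for (BranchQueue<T> b : path) out.addAll(b.items);
                return out;
            }

            public static void main(String[] args) {
                BranchQueue<String> root = new BranchQueue<>();
                root.push("a"); root.push("b");
                List<BranchQueue<String>> forks = root.split(3);
                forks.get(0).push("x");
                System.out.println(pathFromRoot(forks.get(0)));  // [a, b, x]
            }
        }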

  • Geohashing - recursively find neighbors of neighbors

    - by itsme
    I am looking for an elegant algorithm to recursively find neighbors of neighbors with the geohashing algorithm (http://www.geohash.org). Basically: take a central geohash, then get the first 'ring' of same-size hashes around it (8 elements); then, in the next step, get the next ring around the first, etc. Have you heard of an elegant way to do so? Brute force could be to take each neighbor and get their neighbors, simply ignoring the massive overlap. Neighbors around one central geohash have been solved many times (here, e.g., in Ruby: http://github.com/masuidrive/pr_geohash/blob/master/lib/pr_geohash.rb).

    Edit for clarification: The current solution passes in a center key and a direction, like this (with corresponding lookup tables):

        def adjacent(geohash, dir)
          base, lastChr = geohash[0..-2], geohash[-1,1]
          type = (geohash.length % 2)==1 ? :odd : :even
          if BORDERS[dir][type].include?(lastChr)
            base = adjacent(base, dir)
          end
          base + BASE32[NEIGHBORS[dir][type].index(lastChr),1]
        end

    (extract from Yuichiro MASUI's lib)

    I say this approach will get ugly soon, because directions get ugly once we are in ring two or three. The algorithm would ideally simply take two parameters: the center area and the distance, with 0 being the center geohash only (["u0m"]), 1 being the first ring made of 8 geohashes of the same size around it (= [["u0t", "u0w"], ["u0q", "u0n"], ["u0j", "u0h"], ["u0k", "u0s"]]), two being the second ring with 16 areas around the first ring, etc. Do you see any way to deduce the 'rings' from the bits in an elegant way?
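
    Short of a bit-twiddling derivation, the overlap in the brute-force idea disappears if the expansion is run as breadth-first search with a visited set: ring 0 is the centre, and ring k+1 is every not-yet-seen neighbour of ring k (sizes come out as 8, 16, 24, ...). A sketch assuming a neighbours function wired to an adjacency routine like the adjacent() above:

        import java.util.ArrayList;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;
        import java.util.function.Function;

        public class GeohashRings {
            /**
             * rings(center, k, nbrs).get(k) holds ring k. The neighbours function
             * is expected to return the 8 adjacent same-size geohashes, e.g.
             * built from an adjacent(hash, dir) routine like the Ruby one above.
             */
            static List<List<String>> rings(String center, int maxRing,
                                            Function<String, List<String>> neighbours) {
                List<List<String>> rings = new ArrayList<>();
                Set<String> seen = new HashSet<>();
                rings.add(List.of(center));
                seen.add(center);
                for (int k = 1; k <= maxRing; k++) {
                    List<String> ring = new ArrayList<>();
                    for (String h : rings.get(k - 1))
                        for (String n : neighbours.apply(h))
                            if (seen.add(n))        // the visited set removes the overlap
                                ring.add(n);
                    rings.add(ring);
                }
                return rings;
            }
        }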

  • Application Code Redesign to reduce no. of Database Hits from Performance Perspective

    - by Rachel
    Scenario: I want to parse a large CSV file and insert the data into a database; the CSV file has approximately 100K rows of data. Currently I am using fgetcsv to parse through the file row by row and insert the data into the database, so right now I am hitting the database once for each line present in the CSV file. The database hit count is therefore 100K, which is not good from a performance point of view.

    Current code:

        public function initiateInserts()
        {
            // Open large CSV file (min 100K rows) for parsing.
            $this->fin = fopen($file, 'r') or die('Cannot open file');

            // Parse large CSV file to get data and initiate insertion into schema.
            while (($data = fgetcsv($this->fin, 5000, ";")) !== FALSE) {
                $query = "INSERT INTO dt_table (id, code, connectid, connectcode)
                          VALUES (:id, :code, :connectid, :connectcode)";
                $stmt = $this->prepare($query);
                // Then, for each line: bind the parameters.
                $stmt->bindValue(':id', $data[0], PDO::PARAM_INT);
                $stmt->bindValue(':code', $data[1], PDO::PARAM_INT);
                $stmt->bindValue(':connectid', $data[2], PDO::PARAM_INT);
                $stmt->bindValue(':connectcode', $data[3], PDO::PARAM_INT);
                // Execute the statement.
                $stmt->execute();
                $this->checkForErrors($stmt);
            }
        }

    I am looking for a way wherein, instead of hitting the database for every row of data, I can prepare the query once and then hit the database a single time to populate it with all the inserts. Any suggestions!!!

    Note: this is the exact sample code that I am using, but the real CSV file has more fields, not only id, code, connectid and connectcode; I wanted to make sure that I am able to explain the logic, and so have used this sample code here. Thanks!!!
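
    For reference, the usual shape of the fix is: prepare the statement once, wrap the whole load in a transaction, and ship rows in batches. PDO has each of these pieces (prepare before the loop, beginTransaction/commit, or multi-row VALUES lists); the sketch below shows the same idea in JDBC terms, with the batch size and file layout as assumptions:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class CsvLoader {
            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection(args[0]);
                     BufferedReader in = new BufferedReader(new FileReader("data.csv"))) {
                    conn.setAutoCommit(false);                       // one transaction
                    String sql = "INSERT INTO dt_table (id, code, connectid, connectcode)"
                               + " VALUES (?, ?, ?, ?)";
                    try (PreparedStatement stmt = conn.prepareStatement(sql)) { // prepared once
                        String line;
                        int pending = 0;
                        while ((line = in.readLine()) != null) {
                            String[] f = line.split(";");
                            for (int i = 0; i < 4; i++) stmt.setInt(i + 1, Integer.parseInt(f[i]));
                            stmt.addBatch();
                            if (++pending % 1000 == 0) stmt.executeBatch(); // flush every 1000 rows
                        }
                        stmt.executeBatch();
                        conn.commit();
                    }
                }
            }
        }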

  • Books/resources on authentication and authorization in layered applications

    - by Tommy Jakobsen
    I've been trying to find resources and guidelines for implementing authentication and authorization in multiple-layered architectures (C#), but haven't found any "best practices" or patterns to use. And I figured that there must be some patterns for this, as it is a pretty important area. The application that we're developing is layered traditionally, having:

    - data layer (Entity Framework 4)
    - repositories
    - domain layer
    - service layer (can be WCF, with data transfer objects)
    - multiple clients consuming the WCF service (ASP.NET [MVC], Silverlight, WPF)
    - clients accessing the service layer directly (no WCF)

    Are there books/articles/blogs that dig deeply into this area? Primarily about authorization, such as handling multiple roles and attributes attached to users. It doesn't have to be specific to the .NET Framework, but that would be preferred.

  • How do I link ASP.NET membership/role users to tables in the db?

    - by SnapConfig.com
    I am going to use forms authentication, but I want to be able to link the ASP.NET users to some tables in the db. For example, if I have a class and students (as roles), I'll have a class-students table. I'm planning to put in a Users table containing a simple int userid and the ASP.NET username, and put the userid wherever I want to link the users. Does that sound good? Are there any other options for modeling this? It does sound a bit convoluted.

  • Best datastructure for this relationship...

    - by Travis
    I have a question about database 'style'. I need a method of storing user accounts. Some users "own" other user accounts (sub-accounts); however, not all user accounts are owned, just some. Is it best to represent this using a table structure like the following, even though there will be some NULL values in the ownerID column for accounts that do not have an owner?

        TABLE accounts (
            ID
            ownerID -> ID
            name
        )

    Or would it be stylistically preferable to have two tables, like so?

        TABLE accounts (
            ID
            name
        )

        TABLE ownedAccounts (
            accountID -> accounts(ID)
            ownerID -> accounts(ID)
        )

    Thanks for the advice.

  • What is the standard way to add an icon to a link with CSS?

    - by ewernli
    I'm used to using padding plus a background image to place an icon next to a link. There are many examples of this approach. Here is one from here:

        <a class="external" href="http://www.othersite.com/">link</a>

        a.external {
            padding-right: 15px;
            background: transparent url(images/external-link-icon.gif) no-repeat top right;
        }

    But most browsers don't print background images, which is annoying. What is the standard way to place an icon next to a link that is semantically correct and works in all cases? (I couldn't find an exact similar question; if there is one, just close this one as a duplicate.)

  • How would you structure your entity model for storing arbitrary key/value data with different data types?

    - by Nathan Ridley
    I keep coming across scenarios where it is useful to store a set of arbitrary data in a table using a per-row key/value model, rather than a rigid column/field model. The problem is, I want to store the values with their correct data types rather than converting everything to a string. This means I have to choose either a single table with multiple nullable columns, one for each data type, or a set of value tables, one for each data type. I'm also unsure whether I should use full third normal form and separate the keys into their own table, referencing them via a foreign key from the value table(s), or if it would be better to keep things simple, store the string keys in the value table(s), and accept the duplication of strings.

    Old/bad: this solution makes adding additional values a pain in a fluid environment, because the table needs to be modified regularly.

        MyTable
        =================================
        ID     Key1      Key2      Key3
        int    int       string    date
        ---------------------------------
        1      Value1    Value2    Value3
        2      Value4    Value5    Value6

    Single-table solution: this allows simplicity via a single table. The querying code still needs to check for nulls to determine which data type the field is storing. A check constraint is probably also required to ensure only one of the value fields contains non-null data.

        DataValues
        ==============================================================
        ID     RecordID   Key      IntValue   StringValue   DateValue
        int    int        string   int        string        date
        --------------------------------------------------------------
        1      1          Key1     Value1     NULL          NULL
        2      1          Key2     NULL       Value2        NULL
        3      1          Key3     NULL       NULL          Value3
        4      2          Key1     Value4     NULL          NULL
        5      2          Key2     NULL       Value5        NULL
        6      2          Key3     NULL       NULL          Value6

    Multiple-table solution: this allows for more concise purposing of each table, though the code needs to know the data type in advance, as it needs to query a different table for each data type. Indexing is probably simpler and more efficient because there are fewer columns that need indexing.

        IntegerValues
        ===============================
        ID     RecordID   Key    Value
        int    int        string int
        -------------------------------
        1      1          Key1   Value1
        2      2          Key1   Value4

        StringValues
        ===============================
        ID     RecordID   Key    Value
        int    int        string string
        -------------------------------
        1      1          Key2   Value2
        2      2          Key2   Value5

        DateValues
        ===============================
        ID     RecordID   Key    Value
        int    int        string date
        -------------------------------
        1      1          Key3   Value3
        2      2          Key3   Value6

    How do you approach this problem? Which solution is better? Also, should the key column be separated into its own table and referenced via a foreign key, or should it be kept in the value table and bulk-updated if for some reason a key name changes?

  • Unique constraint on more than 10 columns

    - by tk
    I have a time-series simulation model which has more than 10 input variables. The number of distinct simulation instances will be more than 1 million, and each simulation instance generates a few output rows every day. To save the simulation results in a relational database, I designed tables like this:

        Table SimulationModel {
            simul_id : integer (primary key),
            input0   : string or numeric,
            input1   : string or numeric,
            ...
        }

        Table SimulationOutput {
            dt       : DateTime (primary key),
            simul_id : integer (primary key),
            output0  : numeric,
            ...
        }

    My question is: is it fine to put a unique constraint on all of the input columns of the SimulationModel table? If that is not a good idea, what other options do I have to make sure each model is unique?
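
    If a unique index spanning every input column turns out to be impractical, one common alternative (an assumption here, not something from the question) is to store a hash of a canonical encoding of all inputs in one indexed column, put the unique constraint on that, and compare the full columns only on the rare collision. A sketch of the fingerprinting side in Java:

        // Derive one CHAR(64)-sized key from all input values, in a fixed column
        // order, and declare that column UNIQUE. The delimiter/encoding scheme
        // below is an assumption, not a standard.

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.util.HexFormat;

        public class ModelKey {
            /** Canonical fingerprint over all input values. */
            static String fingerprint(Object... inputs) throws Exception {
                StringBuilder sb = new StringBuilder();
                for (Object in : inputs) {
                    sb.append(in == null ? "\u0000" : in.toString());
                    sb.append('\u001F');   // separator avoids "a"+"bc" colliding with "ab"+"c"
                }
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                byte[] hash = md.digest(sb.toString().getBytes(StandardCharsets.UTF_8));
                return HexFormat.of().formatHex(hash);
            }

            public static void main(String[] args) throws Exception {
                System.out.println(fingerprint("modelA", 0.25, "daily", 42));
            }
        }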
