Search Results

Search found 49554 results on 1983 pages for 'database users'.

Page 17/1983 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • Database structure for various items

    - by XGouchet
    I'm building an SQLite database for an Android app which will hold a list of items, each of which has different characteristics. Some of the characteristics are available for all objects; some are only relevant for a subset of objects. For example, all my items have a name, a description, and an image. Some items will also have an expiration date, others won't. Some will have a size, some won't. Etc... How should I build my database, given that I don't know how many characteristics may be added in the future, and knowing I should be able to filter the list by any characteristic?
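
    A common answer to this shape of problem is an entity-attribute-value (EAV) layout: fixed columns for the universal fields, plus a key/value table for the optional ones. A minimal sketch, with hypothetical table and column names:

        CREATE TABLE item (
            id          INTEGER PRIMARY KEY AUTOINCREMENT,
            name        TEXT NOT NULL,
            description TEXT,
            image       BLOB
        );

        -- One row per optional characteristic, so new characteristics
        -- need no schema change.
        CREATE TABLE item_attribute (
            item_id INTEGER NOT NULL REFERENCES item(id),
            name    TEXT NOT NULL,    -- e.g. 'expiration_date', 'size'
            value   TEXT,
            PRIMARY KEY (item_id, name)
        );

        -- Filter the list by any characteristic:
        SELECT i.* FROM item i
        JOIN item_attribute a ON a.item_id = i.id
        WHERE a.name = 'size' AND a.value = 'XL';

    The trade-off is that attribute values lose their native types, and filtering on several characteristics at once costs one join per attribute.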

    Read the article

  • Data Warehouse: One Database or many?

    - by drrollins
    At my new company, they keep all data associated with the data warehouse, including import, staging, audit, dimension and fact tables, together in the same physical database. I've been a database developer for a number of years now, and this consolidation of function and form seems counter to everything I know. It seems to make security, backup/restore and performance management more manually intensive. Is this something that is done in the industry? Are there substantial reasons for doing or not doing it? The platform is Netezza. The size is in terabytes, hundreds of millions of rows. What I'm looking to get from answers to this question is a solid understanding of how right or wrong this path is. From your experience, which issues should I focus on if this is a path that will cause trouble for us down the road? If it is no big deal, then I'd like to know that as well.

    Read the article

  • Handle all authentication logic in database or code?

    - by Snuffleupagus
    We're starting a new(ish) project at work that has been handed off to me. A lot of the database-side stuff has been fleshed out, including some stored procedures. One of the stored procedures, for example, handles creation of a new user. All of the data is validated in the stored procedure (for example, the password must be at least 8 characters long, must contain numbers, etc.), and other things, such as hashing the password, are done in the database as well. Is it normal/right for everything to be handled in the stored procedure instead of the application itself? It's nice that any application can use the stored procedure and get the same validation, but the application should have a standard framework/API function that solves the same problem. I also feel like it takes the data away from the application and is going to be harder to maintain/add new features to.
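
    For concreteness, a minimal sketch of the kind of stored procedure being described, assuming SQL Server and hypothetical table, column and message names (a production version would at least salt the hash):

        CREATE PROCEDURE dbo.CreateUser
            @UserName NVARCHAR(50),
            @Password NVARCHAR(128)
        AS
        BEGIN
            -- Validation rules live in the database, as in the question.
            IF LEN(@Password) < 8
            BEGIN
                RAISERROR('Password must be at least 8 characters long.', 16, 1);
                RETURN;
            END
            IF @Password NOT LIKE '%[0-9]%'
            BEGIN
                RAISERROR('Password must contain at least one number.', 16, 1);
                RETURN;
            END

            -- Hashing also happens inside the database.
            INSERT INTO dbo.Users (UserName, PasswordHash)
            VALUES (@UserName, HASHBYTES('SHA2_256', @Password));
        END

    Every client that calls the procedure gets identical rules, which is the upside being weighed against the maintainability concerns in the question.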

    Read the article

  • How to allow users to transfer files to other users on linux

    - by Jon Bringhurst
    We have an environment of a few thousand users running applications on about 40 clusters ranging in size from 20 compute nodes to 98,000 compute nodes. Users on these systems generate massive files (sometimes 1PB) controlled by traditional unix permissions (ACLs usually aren't available or practical due to the specialized nature of the filesystem). We currently have a program called "give", which is a suid-root program that allows a user to "give" a file to another user when group permissions are insufficient. So, a user would type something like the following to give a file to another user: > give username-to-give-to filename-to-give ... The receiving user can then use a command called "take" (part of the give program) to receive the file: > take filename-to-receive The permissions of the file are then effectively transferred over to the receiving user. This program has been around for years and we'd like to revisit things from a security and functional point of view. Our current plan of action is to remove the bit rot in our current implementation of "give" and package it up as an open source app before we redeploy it into production. Does anyone have another method they use to transfer extremely large files between users when only traditional unix permissions are available?

    Read the article

  • calling database driver in java app [basically a swing app]

    - by user993250
    I have made a Java application that allows a user to choose from certain standards and then allows him to customize those standards according to his needs. Now, the customization [via a Swing application] that has been made needs to be persistent. For this we use a database [MySQL/Access] and hook it to the application so that with each customization made, a table [if non-existent] is created [thus making it runtime; we cannot pre-determine the table names or the keys of the table etc.] and an appropriate entry in the table is made. I have written the driver for this connection. How do I call it in the Java application and what approach should I take? I would much appreciate it if somebody could refer me to some example that not only shows a sample connection being made via the driver but its appropriate calls to the database as well, so that I can use it as a guide.

    Read the article

  • Eloquera Database 2.7.0 is released (native .NET object database)

    Eloquera (www.eloquera.com) was originally designed and developed for use in the Web environment, and it is designed as a native .NET application in C#. Eloquera wasn't ported from Java like many other databases. Natively, as part of its architecture, Eloquera supports saving the data with a single line of code: // Create the object we would like to work with. Movie movie = new Movie() { Location = "Sydney", Year = 2010, OpenDates = new DateTime[] { new DateTime(2003, 12, 10),...

    Read the article

  • Synthetic database records

    - by michipili
    Assume we are getting some statistics from a customer, which we analyse, and we send our comments to the customer. Now, the customer tells us that the statistics they computed between January and March are based on a wrong methodology and sends us corrected series. We want to perform analysis with the wrong and with the correct set of data, which are huge and only differ from January to March. Therefore, we need something like synthetic database records implementing the following logic: synthetic[1] = wrong_data; synthetic[2] = correct_data between January and March, wrong_data otherwise. With this, we can easily perform our analyses on synthetic records. Should such synthetic records be implemented in the application logic or on the side of the database? What are common pitfalls of such an implementation?
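
    On the database side, one way to picture synthetic[2] is a view that splices the corrected rows over the wrong ones. A sketch, assuming hypothetical tables wrong_series and correct_series, each with an obs_date and a value column (the year is also an assumption):

        CREATE VIEW synthetic_series AS
        SELECT obs_date, value FROM wrong_series
        WHERE obs_date NOT BETWEEN '2011-01-01' AND '2011-03-31'
        UNION ALL
        SELECT obs_date, value FROM correct_series
        WHERE obs_date BETWEEN '2011-01-01' AND '2011-03-31';

    Analyses then run unchanged against either the raw table or the view; a common pitfall is defining the splice window in more than one place and letting the copies drift apart.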

    Read the article

  • [Japanese title lost to mis-encoding] ORACLE MASTER Platinum: Oracle Database session

    - by Yusuke.Yamamoto
    [The original Japanese text of this entry was lost to mis-encoding; only fragments are recoverable.] It announces an Oracle Database session led by ORACLE MASTER Platinum holders, covering SQL and related topics, held on 2011-03-11 (Fri), 18:30-20:30.

    Read the article

  • How to use robocopy to move Users folder from partition c to d on Windows 7?

    - by Bastian
    I just tried to copy my Users folder from partition C to partition D using the method mentioned in this post, working from the command line of my Windows 7 installation disk (repair mode). Unfortunately I encountered two problems: When using the command robocopy c:\Users d:\Users /mir /xj /copyall, robocopy says that it can't find the file C:\Users\, although it exists. When using the command robocopy x:\Users d:\Users /mir /xj /copyall, robocopy says that it cannot find the path d:\Users\Administrator\Application Data, error code <0x00000003>. Does anybody know what the reasons for these errors might be?

    Read the article

  • Creating a test database with copied data *and* its own data

    - by Jordan Reiter
    I'd like to create a test database that each day is refreshed with data from the production database. BUT, I'd like to be able to create records in the test database and retain them rather than having them be overwritten. I'm wondering if there is a simple straightforward way to do this. Both databases run on the same server, so apparently that rules out replication? For clarification, here is what I would like to happen: Test database is created with production data I create some test records that I want to keep running on the test server (basically so I can have example records that I can play with) Next day, the database is completely refreshed, but the records I created that day are retained. Records that were untouched that day are replaced with records from the production database. The complication is if a record in the production database is deleted, I want it to be deleted on the test database too, so I do want to get rid of records in the test database that no longer exist in the production database, unless those records were created within the test database. Seems like the only way to do this would be to have some sort of table storing metadata about the records being created? So for example, something like this: CREATE TABLE MetaDataRecords ( id integer not null primary key auto_increment, tablename varchar(100), action char(1), pk varchar(100) ); DELETE FROM testdb.users WHERE NOT EXISTS (SELECT * from proddb.users WHERE proddb.users.id=testdb.users.id) AND NOT EXISTS (SELECT * from testdb.MetaDataRecords WHERE testdb.MetaDataRecords.pk=testdb.users.pk AND testdb.MetaDataRecords.action='C' AND testdb.MetaDataRecords.tablename='users' );
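
    Building on the question's MetaDataRecords idea, the nightly job could copy production rows while skipping test-created ones. A sketch, assuming MySQL and the testdb/proddb names from the question (REPLACE INTO is a MySQL-specific upsert, and the pk value is an example):

        -- Refresh or overwrite rows that the test side did not create:
        REPLACE INTO testdb.users
        SELECT p.*
        FROM proddb.users p
        WHERE NOT EXISTS (
            SELECT 1 FROM testdb.MetaDataRecords m
            WHERE m.tablename = 'users'
              AND m.action = 'C'
              AND m.pk = p.id
        );

        -- Whenever a test record is created, log it so both the refresh
        -- and the question's DELETE leave it alone:
        INSERT INTO testdb.MetaDataRecords (tablename, action, pk)
        VALUES ('users', 'C', '12345');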

    Read the article

  • Database Table Schema and Aggregate Roots

    - by bretddog
    Hi, the application is single user, 1-tier (1 PC), database SQL CE. The DataService layer will be (I think): Repository returning domain objects and querying the database with LinqToSql (dbml). There are obviously a lot more columns; this is a simplified view. http://img573.imageshack.us/img573/3612/ss20110115171817w.png This is my first attempt at creating a 2-table database. I think the table schema makes sense, but I need some reassurance or critique, because the table relations look quite scary, to be honest. I'm hoping you could: Look at the table schema and respond if there are clear signs of troubles or errors that you spot right away. And if you have time, look at the Program Summary/Questions and see if the table layout makes sense for those points. Please be brutal, I will try to defend :) Program summary: a) A set of categories, each having a set of strategies (1:m) b) Each day a number of items will be produced. And each strategy MAY reference it. (So there can be 50 items, and a strategy may reference 23 of them) c) An item can be referenced by more than one strategy. So I think it's an m:m relation. d) Status values will be logged at fixed time-fractions through the day, for: each Strategy, each StrategyItem, each Item e) An action on an item may be executed by a strategy that references it. This is logged as ItemAction (could have called it StrategyItemAction) User Requests: b) - e) describe the main activity mode of the program: to work with only today's DayLog, for each category. The 2nd-priority activity is retrieval of history, which typically will be: from all categories, from day x to day y, get all StrategyDailyLog. Questions: First, does the overall layout look sound? I'm worried to see that there are so many relationships in all directions, connecting everything. Is this normal, or does it look like trouble? StrategyItem is made to represent an m:m relationship. Is it correct as I noted 1:m / 1:1 (marked red)? StrategyItemTimeLog and ItemTimeLog log values that both need to be retrieved together when retrieving a StrategyItem. The reason I separated them is that the first one is strategy-specific, while several strategies can reference the same item. So I thought not to duplicate those values that are not dependent on strategy, but only on the item. Hence I also dragged out the LogTime, as it seems to be the only parameter to unite the logs. But this all looks quite disturbing with those 3 tables. Does it make sense at all? Or do you have a suggestion? Pink circles show my vague attempt at Aggregate Root Paths. I've been thinking in terms of "what entity is responsible for delete". Though I'm unsure about the actual root, I think it's Category. Does it make sense related to the User Requests described above?
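
    For the m:m relation in point c), the usual shape is a join table with a composite key, which also gives the strategy-specific log a natural parent. A sketch with hypothetical column names:

        CREATE TABLE Category (
            CategoryId INT NOT NULL PRIMARY KEY
        );

        CREATE TABLE Strategy (
            StrategyId INT NOT NULL PRIMARY KEY,
            CategoryId INT NOT NULL REFERENCES Category(CategoryId)  -- the 1:m from Category
        );

        CREATE TABLE Item (
            ItemId INT NOT NULL PRIMARY KEY
        );

        -- One row per (strategy, item) reference: the m:m link.
        CREATE TABLE StrategyItem (
            StrategyId INT NOT NULL REFERENCES Strategy(StrategyId),
            ItemId     INT NOT NULL REFERENCES Item(ItemId),
            PRIMARY KEY (StrategyId, ItemId)
        );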

    Read the article

  • Keeping video viewing statistics breakdown by video time in a database

    - by Septagram
    I need to keep a number of statistics about the videos being watched, and one of them is which parts of the video are watched most. The design I came up with is to split the video into 256 intervals and keep a floating-point number of views for each of them. I receive the data as a number of intervals the user watched continuously. The problem is how to store them. There are two solutions I see. Row per every video segment: Let's have a database table like this: CREATE TABLE `video_heatmap` ( `id` int(11) NOT NULL AUTO_INCREMENT, `video_id` int(11) NOT NULL, `position` tinyint(3) unsigned NOT NULL, `views` float NOT NULL, PRIMARY KEY (`id`), UNIQUE KEY `idx_lookup` (`video_id`,`position`) ) ENGINE=MyISAM Then, whenever we have to process a number of views, we make sure the respective database rows exist and add the appropriate values to the views column. I found out it's a lot faster if the existence of rows is taken care of first (SELECT COUNT(*) of rows for a given video and INSERT IGNORE if they are lacking), and then a number of update queries are used like this: UPDATE video_heatmap SET views = views + ? WHERE video_id = ? AND position >= ? AND position < ? This seems, however, a little bloated. The other solution I came up with is: Row per video, update in transactions: A table will look (sort of) like this: CREATE TABLE video ( id INT NOT NULL AUTO_INCREMENT, heatmap BINARY (4 * 256) NOT NULL, ... ) ENGINE=InnoDB Then, every time a view needs to be stored, it will be done in a transaction with a consistent snapshot, in a sequence like this: If the video doesn't exist in the database, it is created. A row is retrieved; heatmap, an array of floats stored in binary form, is converted into a form more friendly for processing (in PHP). Values in the array are increased appropriately and the array is converted back. The row is changed via an UPDATE query. So far the advantages can be summed up like this: First approach: Stores data as floats, not as some magical binary array. Doesn't require transaction support, so doesn't require InnoDB, and we're using MyISAM for everything at the moment, so there won't be any need to mix storage engines. (only applies in my specific situation) Doesn't require a transaction WITH CONSISTENT SNAPSHOT. I don't know what the performance penalties of those are. I already implemented it and it works. (only applies in my specific situation) Second approach: Uses a lot less storage space (the first approach stores the video ID 256 times and the position for every segment of the video, not to mention the primary key). Should scale better, because of InnoDB's per-row locking as opposed to MyISAM's table locking. Might generally work faster because there are fewer requests being made. Easier to implement in code (although the other one is already implemented). So, what should I do? If it wasn't for the rest of our system using MyISAM consistently, I'd go with the second approach, but currently I'm leaning to the first one. But maybe there are some reasons to favour one approach or another?
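
    As a concrete illustration of the first approach's "ensure rows exist, then update" step, a sketch using the video_heatmap table from the question (video 42 and interval 17-19 are made-up values):

        -- Ensure the rows for one watched interval exist:
        INSERT IGNORE INTO video_heatmap (video_id, position, views)
        VALUES (42, 17, 0), (42, 18, 0), (42, 19, 0);

        -- Then bump them in one ranged update:
        UPDATE video_heatmap
        SET views = views + 1.0
        WHERE video_id = 42 AND position >= 17 AND position < 20;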

    Read the article

  • friendship database schema

    - by Daniel Hertz
    I'm creating a db schema that involves users who can be friends, and I was wondering about the best way to model these friendships. Should it be its own table that simply has two columns that each reference a user? Thanks!
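
    A two-column join table is indeed the common model. A sketch, assuming a users table with an id column; the CHECK stores each pair only once, so (1, 2) and (2, 1) can't both appear:

        CREATE TABLE friendship (
            user_a INT NOT NULL REFERENCES users(id),
            user_b INT NOT NULL REFERENCES users(id),
            PRIMARY KEY (user_a, user_b),
            CHECK (user_a < user_b)
        );

        -- All friends of user 7:
        SELECT CASE WHEN user_a = 7 THEN user_b ELSE user_a END AS friend_id
        FROM friendship
        WHERE user_a = 7 OR user_b = 7;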

    Read the article

  • What's the "best" database for embedded?

    - by mawg
    I'm an embedded guy, not a database guy. I've been asked to redesign an existing system which has bottlenecks in several places. The embedded device is based around an ARM9 processor running at 220 MHz. There should be a database of 50k entries (may increase to 250k), each with 1k of data (max 8 fields). That's approximate - I can try to get more precise figures if necessary. They are currently using SQLite 2 and planning to move to SQLite 3. Without starting a flame war - I am a complete d/b newbie just seeking advice - is that the "best" decision? I realize that this might be a "how long is a piece of string?" question, but any pointers would be greatly welcomed. I don't mind doing a lot of reading & research, but just hoped that you could get me off to a flying start. Thanks. p.s. Again, a total rewrite, might not even stick with embedded Linux, but switch to eCos; don't worry too much about one-time conversion between d/b formats. Oh, and accesses should be infrequent, at most one every few seconds. Edit: OK, it seems they have 30k entries (may reach 100k or more) of only 5 or 6 fields each, but at least 3 of them can be a search key for a record. They are toying with "having no d/b at all, since the data are so simple", but it seems to me that with multiple keys, we couldn't use fancy stuff like a quicksort() type search (recursive, binary search). Any thoughts on "no d/b", just data structures? Btw, one key is 800k - not sure how well SQLite handles that (maybe with "no d/b" I have to hash that 800k to something smaller?)
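
    If the data stays in SQLite, the "multiple search keys" worry is largely answered by secondary indexes rather than hand-rolled data structures. A sketch with hypothetical field names:

        CREATE TABLE entry (
            id    INTEGER PRIMARY KEY,
            key_a TEXT,
            key_b TEXT,
            key_c TEXT,
            data  BLOB
        );

        -- One index per searchable field gives O(log n) lookups on each,
        -- without writing any sort/search code in the application.
        CREATE INDEX idx_entry_key_a ON entry(key_a);
        CREATE INDEX idx_entry_key_b ON entry(key_b);
        CREATE INDEX idx_entry_key_c ON entry(key_c);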

    Read the article

  • Database Design Question regarding duplicate information

    - by galford13x
    I have a database that contains a history of product sales. For example, the following table: CREATE TABLE SalesHistoryTable ( OrderID, // Order number, unique across all orders ProductID, // Product ID, can be used as a key to look up product info in another table Price, // Price of the product per unit at the time of the order Quantity, // Quantity of the product for the order Total, // Total cost of the order for the product (Price * Quantity) Date, // Date of the order StoreID, // The store that created the order PRIMARY KEY(OrderID)); The table will eventually have millions of transactions. From this, profiles can be created for products in different geographical regions (based on the StoreID). Creating these profiles can be very time consuming as a database query. For example: SELECT ProductID, StoreID, SUM(Total) AS Total, SUM(Quantity) QTY, SUM(Total)/SUM(Quantity) AS AvgPrice FROM SalesHistoryTable GROUP BY ProductID, StoreID; The above query could be used to get the information based on products for any particular store. You could then determine which store has sold the most, has made the most money, and on average sells for the most/least. This would be very costly to use as a normal query run at any time. What are some design decisions that would allow these types of queries to run faster, assuming storage size isn't an issue? For example, I could create another table with duplicate information: StoreID (key), ProductID, TotalCost, QTY, AvgPrice, and provide a trigger so that when a new order is received, the entry for that store is updated in the new table. The cost for the update is almost nothing. What should be considered when given the above scenario?
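
    A sketch of that trigger idea, assuming MySQL and a hypothetical SalesSummary table (AvgPrice is left derived, as TotalCost/QTY, so it can never go stale):

        CREATE TABLE SalesSummary (
            StoreID   INT NOT NULL,
            ProductID INT NOT NULL,
            TotalCost DECIMAL(14,2) NOT NULL DEFAULT 0,
            QTY       INT NOT NULL DEFAULT 0,
            PRIMARY KEY (StoreID, ProductID)
        );

        CREATE TRIGGER trg_sales_insert
        AFTER INSERT ON SalesHistoryTable
        FOR EACH ROW
            INSERT INTO SalesSummary (StoreID, ProductID, TotalCost, QTY)
            VALUES (NEW.StoreID, NEW.ProductID, NEW.Total, NEW.Quantity)
            ON DUPLICATE KEY UPDATE
                TotalCost = TotalCost + NEW.Total,
                QTY       = QTY + NEW.Quantity;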

    Read the article

  • Saving Abstract and Sub classes to database

    - by bretddog
    Hi, I have an abstract class "StrategyBase" and a set of sub classes, StrategyA/B/C etc. The sub classes use some of the properties of the base class, and have some individual properties. My question is how to save this to a database. I'm currently using SQL CE, and Linq-To-Sql, by creating entity classes automatically with SqlMetal.exe. I've seen there are three solutions shown in this question, but I'm not able to see how these solutions will or won't work with SqlMetal/entity classes, though it seems to me the "concrete table inheritance" would probably work without any manual modifying. What about the other two, would they be problematic? For "single table inheritance", wouldn't all classes get all variables, even though they don't need them? And for the "class table inheritance" solution I can't really see at all how that will map into the entity classes for a useful purpose. I may note that I extend these partial entity classes to make the classes of my business objects. I may also consider moving to Entity Framework instead of SqlMetal/Linq2Sql, so it would be nice to know if that makes any difference to which schema is easy to implement. One likely important thing to note is that I will constantly be developing new strategies, which means I'll have to modify the program code, and probably the database schema, when adding a new strategy. Sorry the question is a bit "all over the place", but hopefully there are some clear advantages/disadvantages that you may be able to advise on. Cheers!
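
    For reference, class table inheritance would map to SQL along these lines, with hypothetical columns; each subtype table shares the base table's primary key, and adding a new strategy means adding one new subtype table:

        CREATE TABLE StrategyBase (
            StrategyId INT NOT NULL PRIMARY KEY,
            Name       NVARCHAR(100) NOT NULL
            -- ...properties shared by all strategies...
        );

        CREATE TABLE StrategyA (
            StrategyId INT NOT NULL PRIMARY KEY
                REFERENCES StrategyBase(StrategyId),
            ThresholdA FLOAT NOT NULL   -- a property unique to StrategyA
        );

        -- Loading a StrategyA joins base and subtype rows:
        SELECT b.StrategyId, b.Name, a.ThresholdA
        FROM StrategyBase b
        JOIN StrategyA a ON a.StrategyId = b.StrategyId;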

    Read the article

  • Database design: Calculating the Account Balance

    - by 001
    How do I design the database to calculate the account balance? 1) Currently I calculate the account balance from the transaction table. In my transaction table I have "description" and "amount", etc. I would then add up all "amount" values and that would work out the user's account balance. I showed this to my friend and he said that is not a good solution; when my database grows, it's going to slow down. He said I should create a separate table to store the calculated account balance. If I did this, I will have to maintain two tables, and it's risky; the account balance table could go out of sync. Any suggestion? EDIT: OPTION 2: should I add an extra column to my transaction table, "Balance"? Then I would not need to go through many rows of data to perform my calculation. Example: John buys $100 credit, he is debited $60, he then adds $200 credit. Amount $100, Balance $100. Amount -$60, Balance $40. Amount $200, Balance $240.
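
    Sketches of both ideas, assuming a transactions table with user_id and amount columns and, for the friend's suggestion, a hypothetical accounts table (the transactional variant needs InnoDB in MySQL):

        -- Option 1: derive the balance on demand.
        SELECT SUM(amount) AS balance
        FROM transactions
        WHERE user_id = 42;

        -- Friend's suggestion: keep a stored balance, updated in the same
        -- transaction as the insert so the two cannot drift out of sync.
        START TRANSACTION;
        INSERT INTO transactions (user_id, description, amount)
        VALUES (42, 'credit', 200);
        UPDATE accounts SET balance = balance + 200 WHERE user_id = 42;
        COMMIT;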

    Read the article

  • find users that are following other users

    - by Jakob
    Hi, I'm wondering how I can go about this problem. I have a DB with a Users table and a Followers table. The Followers table contains two columns, "FollowerID" and "FollowedID". I have a 1 - * relation in my data model between Users.ID -> Followers.FollowerID and Users.ID -> Followers.FollowedID. How do I, in LINQ, get the set of users that are following a specific user? I'll express what I'm trying to achieve programmatically. I can get this far: ctx.Followers.Where(f => f.FollowedID == CurrentUser.ID) So now I have a Followers set holding the IDs of the users that follow the CurrentUser, and I could iterate through this collection, adding users after each iteration to a collection that would be the total User collection that follows CurrentUser, but isn't there a smarter, or LINQ'er, way to do this? Much appreciated. Thx
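
    Expressed in SQL rather than LINQ, the query being asked for is a single join through the Followers table; the LINQ version would typically be the equivalent join (or a navigation-property projection) rather than a loop. A sketch using the table names from the question, with a hypothetical parameter:

        SELECT u.*
        FROM Users u
        JOIN Followers f ON f.FollowerID = u.ID
        WHERE f.FollowedID = @CurrentUserId;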

    Read the article

  • Database ERD design: 2 types of user in one table

    - by Giskin Leow
    I have read this (Database design: 3 types of users, separate or one table?). I decided to put admin and normal user in one table since the attributes are similar: fullname, address, phone, email, gender ... Then I wanted to draw the ERD, and suddenly a question popped into my mind: how do I draw it? Customers make appointments and admins approve appointments, but now there are only two tables, with admin and customer in the same table. Help.
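
    One way to draw it: both relationships point at the same users table, distinguished by a role column. A sketch, assuming MySQL and hypothetical column names:

        CREATE TABLE users (
            id       INT NOT NULL PRIMARY KEY,
            fullname VARCHAR(100) NOT NULL,
            role     ENUM('admin', 'customer') NOT NULL
        );

        CREATE TABLE appointment (
            id          INT NOT NULL PRIMARY KEY,
            customer_id INT NOT NULL,  -- who made it
            approver_id INT NULL,      -- the admin, once approved
            starts_at   DATETIME NOT NULL,
            FOREIGN KEY (customer_id) REFERENCES users(id),
            FOREIGN KEY (approver_id) REFERENCES users(id)
        );

    In the ERD this appears as two separate, labeled relationships ("makes", "approves") between the appointment entity and the single user entity.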

    Read the article

  • Sinatra and XML POST request

    - by user292815
    I don't know whether it's my mistake or not. So I have this code: post '/singin/get_token' do content_type :xml puts request.body.read puts xmlRequest xmlRequest = REXML::Document.new(request.body.read) ... And when I post something like this: <?xml version="1.0" encoding="utf-16"?><request xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><username>adsfasdf</username></request> I receive this in my console: 127.0.0.1 - - [12/Mar/2010 21:18:20] "POST /singin/get_token HTTP/1.1" 500 105872 0.1339 Iconv::InvalidCharacter - ">": /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/encodings/ICONV.rb:7:in `conv' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/encodings/ICONV.rb:7:in `decode_iconv' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:58:in `encoding=' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:46:in `initialize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:164:in `initialize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:17:in `new' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:17:in `create_from' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/parsers/baseparser.rb:146:in `stream=' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/parsers/baseparser.rb:123:in `initialize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/parsers/treeparser.rb:9:in `new' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/parsers/treeparser.rb:9:in `initialize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/document.rb:228:in `new' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/document.rb:228:in `build' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/document.rb:43:in `initialize' zaiaku-game-server.rb:70:in `new' zaiaku-game-server.rb:70:in `block in <main>' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/showexceptions.rb:24:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/methodoverride.rb:24:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/commonlogger.rb:18:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/content_length.rb:13:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/chunked.rb:15:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/handler/thin.rb:14:in `run'Iconv::InvalidCharacter: ">" /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/encodings/ICONV.rb:7:in `conv' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/encodings/ICONV.rb:7:in `decode_iconv' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:58:in `encoding=' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:46:in `initialize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:164:in `initialize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:17:in `new' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:17:in `create_from' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/parsers/baseparser.rb:146:in `stream=' 
/Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/parsers/baseparser.rb:123:in `initialize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/parsers/treeparser.rb:9:in `new' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/parsers/treeparser.rb:9:in `initialize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/document.rb:228:in `new' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/document.rb:228:in `build' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/document.rb:43:in `initialize' zaiaku-game-server.rb:70:in `new' zaiaku-game-server.rb:70:in `block in <main>' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:811:in `call' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:811:in `block in route' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:488:in `instance_eval' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:488:in `route_eval' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:477:in `block (2 levels) in route!' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:474:in `catch' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:474:in `block in route!' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:453:in `each' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:453:in `route!' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:569:in `dispatch!' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:388:in `block in call!' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:536:in `instance_eval' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:536:in `block in invoke' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:536:in `catch' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:536:in `invoke' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:388:in `call!' 
/Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:377:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/showexceptions.rb:24:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/methodoverride.rb:24:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/commonlogger.rb:18:in `call' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:928:in `block in call' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:973:in `synchronize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:928:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/content_length.rb:13:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/chunked.rb:15:in `call' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:76:in `block in pre_process' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:74:in `catch' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:74:in `pre_process' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:57:in `process' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:42:in `receive_data' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:in `run_machine' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:in `run' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/thin-1.2.7/lib/thin/backends/base.rb:57:in `start' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/thin-1.2.7/lib/thin/server.rb:156:in `start' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/handler/thin.rb:14:in `run' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:896:in `run!' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/main.rb:35:in `block in <top (required)>' !! Unexpected error while processing request: incompatible character encodings: ASCII-8BIT and UTF-8

    Read the article

  • Exception with RubyAMF and Ruby 1.9 although code works

    - by Tam
    I'm getting an exception with RubyAMF using Ruby 1.9 and Rails 2.3.5. Although the code afterwards executes normally, I'm not very comfortable seeing such an exception in the log file. Do you know what is causing it: >>>>>>>> RubyAMF >>>>>>>>> #<RubyAMF::Actions::PrepareAction:0x0000010139ff48> took: 0.00020 secs >>>>>>>> RubyAMF >>>>>>>>> #<RubyAMF::Actions::RailsInvokeAction:0x0000010139ff10> took: 0.29973 secs You have a nil object when you didn't expect it! You might have expected an instance of Array. The error occurred while evaluating nil.include? /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activerecord-2.3.5/lib/active_record/attribute_methods.rb:142:in `create_time_zone_conversion_attribute?' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activerecord-2.3.5/lib/active_record/attribute_methods.rb:75:in `block in define_attribute_methods' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activerecord-2.3.5/lib/active_record/attribute_methods.rb:71:in `each' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activerecord-2.3.5/lib/active_record/attribute_methods.rb:71:in `define_attribute_methods' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activerecord-2.3.5/lib/active_record/attribute_methods.rb:242:in `method_missing' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activerecord-2.3.5/lib/active_record/base.rb:2832:in `hash' /Users/tammam56/lal/vendor/plugins/ruby_amf/io/amf_serializer.rb:366:in `hash' /Users/tammam56/lal/vendor/plugins/ruby_amf/io/amf_serializer.rb:366:in `hash' /Users/tammam56/lal/vendor/plugins/ruby_amf/io/amf_serializer.rb:366:in `[]=' /Users/tammam56/lal/vendor/plugins/ruby_amf/io/amf_serializer.rb:366:in `store_object' /Users/tammam56/lal/vendor/plugins/ruby_amf/io/amf_serializer.rb:234:in `write_amf3_object' /Users/tammam56/lal/vendor/plugins/ruby_amf/io/amf_serializer.rb:154:in `write_amf3' /Users/tammam56/lal/vendor/plugins/ruby_amf/io/amf_serializer.rb:78:in `write' /Users/tammam56/lal/vendor/plugins/ruby_amf/io/amf_serializer.rb:70:in `block in run' /Users/tammam56/lal/vendor/plugins/ruby_amf/io/amf_serializer.rb:56:in `upto' /Users/tammam56/lal/vendor/plugins/ruby_amf/io/amf_serializer.rb:56:in `run' /Users/tammam56/lal/vendor/plugins/ruby_amf/app/filters.rb:91:in `block in run' /Users/tammam56/.rvm/rubies/ruby-1.9.1-p378/lib/ruby/1.9.1/benchmark.rb:309:in `realtime' /Users/tammam56/lal/vendor/plugins/ruby_amf/app/filters.rb:91:in `run' /Users/tammam56/lal/vendor/plugins/ruby_amf/app/filters.rb:12:in `block in run' /Users/tammam56/lal/vendor/plugins/ruby_amf/app/filters.rb:11:in `each' /Users/tammam56/lal/vendor/plugins/ruby_amf/app/filters.rb:11:in `run' /Users/tammam56/lal/vendor/plugins/ruby_amf/app/rails_gateway.rb:28:in `service' /Users/tammam56/lal/app/controllers/rubyamf_controller.rb:19:in `gateway' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/base.rb:1331:in `perform_action' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/filters.rb:617:in `call_filters' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/filters.rb:610:in `perform_action_with_filters' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/benchmarking.rb:68:in `block in perform_action_with_benchmark' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activesupport-2.3.5/lib/active_support/core_ext/benchmark.rb:17:in `block in ms' /Users/tammam56/.rvm/rubies/ruby-1.9.1-p378/lib/ruby/1.9.1/benchmark.rb:309:in `realtime' 
/Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activesupport-2.3.5/lib/active_support/core_ext/benchmark.rb:17:in `ms' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/benchmarking.rb:68:in `perform_action_with_benchmark' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/rescue.rb:160:in `perform_action_with_rescue' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/flash.rb:146:in `perform_action_with_flash' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/base.rb:532:in `process' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/filters.rb:606:in `process_with_filters' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/base.rb:391:in `process' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/base.rb:386:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/routing/route_set.rb:437:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:87:in `dispatch' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:121:in `_call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:130:in `block in build_middleware_stack' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activerecord-2.3.5/lib/active_record/query_cache.rb:29:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activerecord-2.3.5/lib/active_record/query_cache.rb:29:in `block in call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/query_cache.rb:34:in `cache' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activerecord-2.3.5/lib/active_record/query_cache.rb:9:in `cache' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activerecord-2.3.5/lib/active_record/query_cache.rb:28:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:361:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/string_coercion.rb:25:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rack-1.0.1/lib/rack/head.rb:9:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rack-1.0.1/lib/rack/methodoverride.rb:24:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/params_parser.rb:15:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/session/cookie_store.rb:93:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/failsafe.rb:26:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rack-1.0.1/lib/rack/lock.rb:11:in `block in call' <internal:prelude>:8:in `synchronize' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:114:in `block in call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/reloader.rb:34:in `run' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:108:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/rails/rack/static.rb:31:in `call' 
/Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rack-1.0.1/lib/rack/urlmap.rb:46:in `block in call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rack-1.0.1/lib/rack/urlmap.rb:40:in `each' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rack-1.0.1/lib/rack/urlmap.rb:40:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/rails/rack/log_tailer.rb:17:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rack-1.0.1/lib/rack/content_length.rb:13:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rack-1.0.1/lib/rack/chunked.rb:15:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rack-1.0.1/lib/rack/handler/mongrel.rb:64:in `process' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/mongrel-1.1.5/lib/mongrel.rb:159:in `block in process_client' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/mongrel-1.1.5/lib/mongrel.rb:158:in `each' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/mongrel-1.1.5/lib/mongrel.rb:158:in `process_client' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/mongrel-1.1.5/lib/mongrel.rb:285:in `block (2 levels) in run '

    Read the article

  • Dovecot Virtual Users and Users Domain Mapping

    - by Stojko
    I have successfully compiled, configured and run Dovecot with the virtual users feature. Here's part of my /etc/dovecot.conf configuration file: mail_location = maildir:/home/%d/%n/Maildir auth default { mechanisms = plain login userdb passwd-file { args = /home/%d/etc/passwd } passdb passwd-file { args = /home/%d/etc/shadow } socket listen { master { path = /var/run/dovecot/auth-worker mode = 0600 } } } I have faced one issue I can't resolve myself. Is there any way to create a users' domains mapping and provide the username in mail_location? Examples: 1. Currently I have /home/domain.com/user/Maildir 2. I'd like to have /home/USER/domain.com/user/Maildir Can I achieve this somehow? Greets, Stojko

    Read the article

  • Build a database from MS Word list information...

    - by Jayron Soares
    Can someone please advise me how to approach a given problem: I have a sequential list of metadata in a document in MS Word. The basic idea is to create a Python algorithm to iterate over the information, retrieving just the name of each PROCESS, when a query is made, from a database. For example: Process: Process Walker (1965) Exact reference: Walker Process Equipment, Inc. v. Food Machinery Corp. Link: http://caselaw.lp.findlaw.com/scripts/getcase.pl?court=US&vol=382&invol= Type of procedure: Certiorari to the United States Court of Appeals for the Seventh Circuit. Parties: Walker Process Equipment, Inc. Sector: Systems is … Start Date: Argued October 12-13, 1965 Summary: Food Machinery Company has initiated a process to stop or slow the entry of competitors through the use of a patent obtained by fraud. The case concerned a patent on "knee action swing diffusers" used in aeration equipment for sewage treatment systems, and the question was whether "the maintenance and enforcement of a patent obtained by fraud before the patent office" may be a basis for antitrust punishment. Report of the evolution process: petitioner, in answer to respond .. Importance: a) First case which established an analysis for the diagnosis of dispute… There are about 200 pages containing the information above. I have in mind the idea of creating an algorithm in Python to be able to break up this sequential information and try to store it in a web database [an open source application that I'm looking for] in order to allow free consultations ...
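
    Whatever parser does the extraction, the landing table could simply mirror the labeled headings in the document. A sketch with hypothetical names:

        CREATE TABLE process_record (
            id              INTEGER PRIMARY KEY,
            process_name    TEXT NOT NULL,  -- e.g. 'Process Walker (1965)'
            exact_reference TEXT,
            link            TEXT,
            procedure_type  TEXT,
            parties         TEXT,
            sector          TEXT,
            start_date      TEXT,
            summary         TEXT,
            importance      TEXT
        );

        -- The "name of each process" query then becomes:
        SELECT process_name FROM process_record ORDER BY process_name;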

    Read the article

  • Announcing: Oracle Enterprise Manager 12c Delivers Advanced Self-Service Automation for Oracle Database 12c Multitenant

    - by Scott McNeil
    New Self-Service Driven Provisioning of Pluggable Databases. Today Oracle announced new capabilities that support managing the full lifecycle of pluggable database as a service in Oracle Enterprise Manager 12c Release 3 (12.1.0.3). This latest release builds on the existing capabilities to provide advanced automation for deploying database as a service using the Oracle Database 12c Multitenant option. It takes it one step further by offering pluggable database as a service through the Oracle Enterprise Manager 12c self-service portal, providing customers with fast provisioning of database cloud services with minimal time and effort. This is a significant addition to Oracle Enterprise Manager 12c's existing portfolio of cloud services, which includes infrastructure as a service, database as a service, testing as a service, and Java platform as a service. The solution provides a self-service mechanism to provision pluggable databases, allowing users to request and access database(s) on-demand. The self-service operations are also enabled through REST APIs, allowing customers to integrate with third-party automation systems or their custom enterprise portals. Benefits: self-service provisioning allows rapid access to pluggable database as a service for hosting or certifying applications on Oracle Database 12c; self-service driven migration to pluggable database as a service, in order to migrate a pre-Oracle Database 12c database to a pluggable database as a service model and test the consolidation strategy; a single service catalog for all approved pluggable database as a service configurations, which helps customers achieve standardization while catering to all applications and users in the enterprise; resource guarantee via database resource manager (and IORM on Oracle Exadata) that enables deployment of mixed workloads in a shared environment; quota, role-based access, and policy-based management that enforces governance and reduces administrative overhead; chargeback or showback, which improves metering and accountability for services consumed by each pluggable database; comprehensive REST APIs that support integration with ticketing or change management systems, or with other self-service portals; and minimal administrative and maintenance overhead through self-managing automation that allows for intelligent placement of pluggable databases.
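
    Behind the portal, provisioning a pluggable database ultimately comes down to Oracle Database 12c Multitenant DDL along these lines (the PDB name, admin user and file paths here are hypothetical):

        CREATE PLUGGABLE DATABASE salespdb
          ADMIN USER pdbadmin IDENTIFIED BY "StrongPassword1"
          FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/',
                               '/u01/oradata/cdb1/salespdb/');

        ALTER PLUGGABLE DATABASE salespdb OPEN;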

    Read the article

< Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >