Search Results

Search found 27248 results on 1090 pages for 'table adapter'.

Page 457/1090 | < Previous Page | 453 454 455 456 457 458 459 460 461 462 463 464  | Next Page >

  • Unable to SSH to a VirtualBox Red Hat

    - by Rajat
    I am using a Mac and VirtualBox to start a Red Hat instance. The instance is started with two adapters (the first NAT, the second host-only). The problem is that I am unable to SSH from my Mac to the VirtualBox instance using its IP (I am able to ping the IP, though). I checked iptables and SSH is allowed (port 22), and the sshd daemon is also running. Is there anything I am missing?

    Read the article

  • Setting the height of a row in a JTable in Java

    - by Douglas Grealis
    I have been searching for a solution to increase the height of a row in a JTable. I have been using the setRowHeight(int, int) method, which compiles and runs OK, but no rows are actually resized. When I call getRowHeight(int) on the row whose height I set, it does print the size I set it to, so I'm not sure what is wrong. The code below is a rough illustration of how I am trying to solve it. My class extends JFrame.

        String[] columnNames = {"Column 1", "Column 2", "Column 3"};
        JTable table = new JTable(new DefaultTableModel(columnNames, people.size()));
        DefaultTableModel model = (DefaultTableModel) table.getModel();
        int count = 1;
        for (Person p : people) {
            model.insertRow(count, new Object[]{count, p.getName(), p.getAge() + "", p.getNationality()});
            count++;
        }
        table.setRowHeight(1, 15); // Try to set height to 15 (I've tried higher)

    Can anyone tell me where I am going wrong? I am trying to increase the height of row 1 to 15 pixels.

    Read the article

  • Hibernate mapping Problem

    - by Geln Yang
    Hi, there is a table Item like:

        code, name
        01,   parent1
        02,   parent2
        0101, child11
        0102, child12
        0201, child21
        0202, child22

    I create a Java object and hbm XML to map the table. Item.parent is the Item whose code equals the first two characters of this item's code:

        class Item {
            String code;
            String name;
            Item parent;
            List<Item> children;
            // ... setters/getters ...
        }

        <hibernate-mapping>
          <class name="Item" table="Item">
            <id name="code" length="4" type="string">
              <generator class="assigned" />
            </id>
            <property name="name" column="name" length="50" not-null="true" />
            <many-to-one name="parent" class="Item" not-found="ignore">
              <formula>
                <![CDATA[
                (select i.code, r.name from Item i
                  where (case length(code) when 4 then i.code = SUBSTRING(code,1,2) else false end))
                ]]>
              </formula>
            </many-to-one>
            <bag name="children"></bag>
          </class>
        </hibernate-mapping>

    I try to use a formula to define the many-to-one relationship, but it doesn't work. Is there something wrong, or is there another method? Thanks! PS: I use a MySQL database.
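
    For what it's worth, a many-to-one formula is expected to evaluate to the foreign-key value itself, i.e. a single scalar per row rather than a multi-column select. A hedged sketch of a formula body that does that (MySQL syntax, untested) would be either the bare expression:

        SUBSTRING(code, 1, 2)

    or, with the length guard made explicit, a scalar subquery:

        (SELECT p.code
           FROM Item p
          WHERE p.code = SUBSTRING(code, 1, 2)
            AND LENGTH(code) = 4)

    Either returns the parent's primary key, which is what the association needs in order to join.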

    Read the article

  • What are the best practices for storing PHP session data in a database?

    - by undefined
    I have developed a web application that uses a web server and database hosted by a web host (on the ground) and a server running on Amazon Web Services EC2. Both servers may be used by a user during a session, and both will need to know some session information about that user. I don't want to POST the information that is needed by both servers, because I don't want it to be visible to browsers / Firebug etc. So I need my session data to persist across servers, and I think that means the best option is to store all or some of the data that I need in the database rather than in a session. The easiest thing to do seems to be to keep the sessions but POST the session_id between servers and use this as the key to look up the data I need from a 'user_session_data' table in the database. I have looked at Tony Marston's article "Saving PHP Session Data to a database" - should I use this, or will a table with the session data that I need and session_id as the key suffice? What would be the downside of creating my own table and set of methods for storing the data I need in the database?
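
    A minimal sketch of such a 'user_session_data' table (hypothetical column names and sizes; adjust to the data actually shared between the two servers):

        CREATE TABLE user_session_data (
            session_id VARCHAR(64) NOT NULL PRIMARY KEY,  -- the PHP session_id passed between servers
            user_id    INT         NOT NULL,
            data       TEXT        NULL,                  -- serialized application state
            updated_at TIMESTAMP   NOT NULL DEFAULT CURRENT_TIMESTAMP
        );

    Both servers can then read and write shared state keyed by session_id, and a periodic job can purge rows whose updated_at is older than the session lifetime.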

    Read the article

  • Confusion about using types instead of GTTs in Oracle

    - by Omnipresent
    I am trying to convert queries like the one below to types so that I won't have to use a GTT:

        insert into my_gtt_table_1 (house, lname, fname, MI, fullname, dob)
        (select house, lname, fname, MI, fullname, dob
           from (select 'REG' house,
                        mbr_last_name lname,
                        mbr_first_name fname,
                        mbr_mi MI,
                        mbr_first_name || mbr_mi || mbr_last_name fullname,
                        mbr_dob dob
                   from table_1 a, table_b b
                  where a.head = b.head
                    and mbr_number = '01'
                    and mbr_last_name = v_last_name) c)

    The above is just a sample; the real queries are bigger than this, and they sit inside a stored procedure. So, to avoid the GTT (my_gtt_table_1), I did the following:

        create or replace type lname_row as object (
          house    varchar2(30),
          lname    varchar2(30),
          fname    varchar2(30),
          MI       char(1),
          fullname varchar2(63),
          dob      date
        )

        create or replace type lname_exact as table of lname_row

    Now in the SP:

        type lname_exact is table of <what_table_should_i_put_here>%rowtype;
        tab_a_recs lname_exact;

    In the above I am not sure what table to put, as my query has nested subqueries. The query in the SP (I am trying this as a sample to see if it works):

        select lname_row('', '', '', '', '', '', sysdate)
          bulk collect into tab_a_recs
          from table_1;

    I am getting errors like ORA-00913: too many values. I am really confused and stuck with this :(
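
    A hedged sketch of one way out (assuming the SQL object types above are what you want): declare the collection of the SQL type itself rather than of some table's %rowtype, and pass the constructor exactly one value per attribute - six here, while the sample passes seven, which is where the ORA-00913 comes from:

        declare
          tab_a_recs lname_exact;
        begin
          select lname_row('REG', mbr_last_name, mbr_first_name, mbr_mi,
                           mbr_first_name || mbr_mi || mbr_last_name, mbr_dob)
            bulk collect into tab_a_recs
            from table_1 a, table_b b
           where a.head = b.head;
        end;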

    Read the article

  • Titanium TableViewRow classname with custom rows

    - by pancake
    I would like to know in what way the 'className' property of a Ti.UI.TableViewRow helps when creating custom rows. For example, I populate a table view with custom rows in the following way:

        function populateTableView(tableView, data) {
            var rows = [];
            var row;
            var title, image;
            var i;
            for (i = 0; i < data.length; i++) {
                title = Ti.UI.createLabel({
                    text: data[i].title,
                    width: 100, height: 30, top: 5, left: 25
                });
                image = Ti.UI.createImage({
                    image: 'some_image.png',
                    width: 30, height: 30, top: 5, left: 5
                });
                /* and, like, 5+ more views or whatever */
                row = Ti.UI.createTableViewRow();
                row.add(title);
                row.add(image);
                rows.push(row);
            }
            tableView.setData(rows);
        }

    Of course, this example of a "custom" row is easily created using the standard title and image properties of the TableViewRow, but that isn't the point. How is the allocation of new labels, image views, and other child views of a table view prevented in favour of their reuse? I know that in iOS this is achieved by using the method -[UITableView dequeueReusableCellWithIdentifier:] to fetch a row object from a 'reservoir' (so 'className' would be 'identifier' here) that isn't currently being used for displaying data, but already has the needed child views laid out correctly in it, thus only requiring an update of the data contained within (text, image data, etc.). As this system is so unbelievably simple, I have a lot of trouble believing that the method employed by the Titanium API does not support it. After reading through the API and searching the web, I do however suspect this is the case. The 'className' property is recommended as an easy way to make table views more efficient in Titanium, but its relation to custom table view rows is not explained in any way. If anyone could clarify this matter for me, I would be very grateful.

    Read the article

  • Not able to connect to game using "Game Ranger"

    - by Daud
    Every time the game is about to start, it says that I have lost my connection to EA's servers, with the following message: "Make sure your network adapter and network cable is plugged in." My connection works perfectly otherwise, but when it comes to online gaming it messes everything up. My system specifications: Intel(R) Core(TM) i5-2410M CPU @ 2.30GHz, 4096MB RAM, Windows 7 Ultimate 64-bit, Internet connection of 2Mbps. Can you help me sort it out?

    Read the article

  • For a set of SQL queries, how do you determine which result set contains a certain row?

    - by ManBugra
    I have a set of SQL queries:

        List<String> queries = ...
        queries[0] = "select id from person where ...";
        ...
        queries[8756] = "select id from person where ...";

    Each query selects rows from the same table 'person'. The only difference is the where-clause. Table 'person' looks like this:

        id | name | ... many other columns

    How can I determine which queries will contain a certain person in their result set? For example:

        List<Integer> matchingQueries = magicMethod(queries, [23, 45]);

    The list obtained by 'magicMethod' filters all SQL queries present in the list 'queries' (defined above) and returns only those that contain either the person with id 23 OR a person with id 45. Why I need it: I am dealing with an application that contains products and categories, where the categories are SQL queries that define which products belong to them (the queries are stored in a table as well). Now I have a requirement where an admin has to see all categories an item belongs to immediately after the item is created. BTW, over 8,000 categories are defined so far, with more to come. Language and DB: Java and PostgreSQL. Thanks.
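
    One hedged way to keep this in the database (assuming each stored query can be wrapped as a subquery; the where-clause shown is a placeholder): test membership per category instead of materializing each full result set, e.g.

        SELECT EXISTS (
            SELECT 1
              FROM (select id from person where /* stored where-clause here */ true) q
             WHERE q.id IN (23, 45)
        );

    Run once per stored query (or, equivalently, append AND id IN (23, 45) to each stored where-clause), this returns exactly the categories whose result set contains one of the given persons, and PostgreSQL can stop at the first matching row.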

    Read the article

  • Kohana 3.2 - Database Session losing data on new Page Request

    - by reado
    I've set up my dev Kohana server to use an encrypted database as the default session type. I'm also using this in combination with Auth to implement user authentication. Right now my users are able to authenticate correctly and the authentication keys are being stored in the session. I'm also storing additional data like the user's first name and business name during the login procedure. When my login function is ready to redirect the user to the user dashboard, I'm able to see all the data correctly when I do Session::instance()->as_array():

        Array ( [auth_user] => NRyk6lA8 [businessname] => Dudetown [firstname] => Matt )

    As soon as I redirect the user to another page, Session::instance()->as_array() is empty. By dumping out the Session::instance() object, I can see that the session ids are still the same. When I look at my database table, though, I don't see any session records being saved, and my session table is empty. My bootstrap.php contains:

        Session::$default = 'database';
        Cookie::$salt = 'asdfasdf';
        Cookie::$expiration = 1209600;
        Cookie::$domain = FALSE;

    and my session.php config file looks like:

        return array(
            'database' => array(
                'name' => 'auth_user',
                'encrypted' => TRUE,
                'lifetime' => 24 * 3600,
                'group' => 'default',
                'table' => 'sessions',
                'columns' => array(
                    'session_id'  => 'session_id',
                    'last_active' => 'last_active',
                    'contents'    => 'contents'
                ),
                'gc' => 500,
            ),
        );

    I've looked high and low for an answer; if anyone has any suggestions, I'm all ears! Thanks!
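
    One thing worth ruling out is the sessions table itself; a minimal sketch of what the config above expects (column names from the config; the types are assumptions based on typical Kohana 3.x database-session schemas):

        CREATE TABLE sessions (
            session_id  VARCHAR(24)  NOT NULL PRIMARY KEY,
            last_active INT UNSIGNED NOT NULL,  -- unix timestamp of the last write
            contents    TEXT         NOT NULL   -- encrypted, serialized session payload
        );

    If the table is missing or its columns don't match the mapping in session.php, writes can fail without an obvious error and the session will appear empty on the next request.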

    Read the article

  • Align div inline

    - by Rajeev
    How do I make the second div inline with the first div? I need the Flash SWF to appear next to the table cells.

        <html>
        <div style="display: inline">
          <table style="table-layout:fixed;width:100%;">
            <tr>
              <td width="20%"> </td>
            </tr>
            <tr>
              <td width="20%">1. Can you view the image?</td>
              <td width="20%">1. Can you upload the image?</td>
            </tr>
          </table>
        </div>
        <div style="display: inline;">
          <object width="100" height="100">
            <embed src="image_tr.swf" width="250" height="250"></embed>
          </object>
        </div>

    Read the article

  • query structure - ignoring entries for the same event from multiple users?

    - by Andrew Heath
    One table in my MySQL database tracks game plays. It has the following structure:

        SCENARIO_VICTORIES [ID] [scenario_id] [game] [timestamp] [user_id] [winning_side] [play_date]

    ID is the autoincremented primary key. timestamp records the moment of submission for the record. winning_side has one of three possible values: 1, 2, or 0 (meaning a draw). One of the queries done on this table calculates the victory percentage for each scenario when that scenario's page is viewed. The output is expressed as:

        Side 1 win %
        Side 2 win %
        Draw %

    and queried with:

        SELECT winning_side, COUNT(scenario_id)
        FROM scenario_victories
        WHERE scenario_id='$scenID'
        GROUP BY winning_side
        ORDER BY winning_side ASC

    and then processed into the percentages and such. Sorry for the long setup. My problem is this: several of my users play each other and record their mutual results, so these battles are doubly represented in the victory percentages and result counts. Though this happens infrequently, the user base isn't large, and the double entries have a noticeable effect on the data. Given the table and query above, does anyone have any suggestions for how I can "collapse" records that have the same play_date & game & scenario_id & winning_side so that they're only counted once?
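
    A hedged sketch of one way to collapse them (assuming mutually reported battles agree on all four of those columns): deduplicate in a derived table before counting:

        SELECT winning_side, COUNT(*)
          FROM (SELECT DISTINCT play_date, game, scenario_id, winning_side
                  FROM scenario_victories
                 WHERE scenario_id = '$scenID') AS collapsed
         GROUP BY winning_side
         ORDER BY winning_side ASC;

    Each mutually recorded battle then contributes a single row to the counts instead of two.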

    Read the article

  • jQuery re-checking check boxes

    - by Jacob
    Hi, I'm having problems re-checking some checkboxes after an ajax call updates a table. I have a table of what the project calls 'shares'. I want to: 1) check for and save any checked shares to an array, 2) do my ajax call to update the table of shares, 3) re-check any that were checked before the ajax update. My code is not working and I can't see why. Any tips, help, or advice much appreciated.

        // Array to hold our checked share ids
        var savedShareIDs = new Array();

        // Add checked share ids into array
        $("input:checkbox[name=share_ids]:checked").each(function() {
            savedShareIDs.push($(this).val());
        });

        // Do ajax update
        Dajaxice.pagination_shares('Dajax.process', {'id': 1, 'page': 1});

        // Re-check any that were checked before ajax update
        $("input:checkbox[name=share_ids]").each(function() {
            if ($.inArray($(this).val(), savedShareIDs) > -1) {
                $(this).attr('checked', true);
            }
        });

    The problem is the checkboxes are not being re-checked. I'm pretty sure the loop works and the inArray check works; it's just not checking the checkboxes. Can anyone see where I'm going wrong? Thanks.

    Read the article

  • MySQL many-to-many problem (leaderboard/scoreboard)

    - by zoko2902
    Hi all! I'm working on a small project for the upcoming World Cup. I'm building a roster/leaderboard/scoreboard based on groups of national teams. The idea is to have information on all upcoming matches within the group or in the knockout phase (scores, time of the match, match stats, etc.). Currently I'm stuck with the DB in that I can't come up with a query that returns the paired teams of a match in one row. I have these three tables:

        CREATE TABLE IF NOT EXISTS `wc_team` (
          `id` INT NOT NULL AUTO_INCREMENT,
          `name` VARCHAR(45) NULL,
          `description` VARCHAR(250) NULL,
          `flag` VARCHAR(45) NULL,
          `image` VARCHAR(45) NULL,
          `added` TIMESTAMP NULL DEFAULT CURRENT_TIMESTAMP,
          PRIMARY KEY (`id`) );

        CREATE TABLE IF NOT EXISTS `wc_match` (
          `id` INT NOT NULL AUTO_INCREMENT,
          `score` VARCHAR(6) NULL,
          `date` DATE NULL,
          `time` VARCHAR(45) NULL,
          `added` TIMESTAMP NULL DEFAULT CURRENT_TIMESTAMP,
          PRIMARY KEY (`id`) );

        CREATE TABLE IF NOT EXISTS `wc_team_has_match` (
          `wc_team_id` INT NOT NULL,
          `wc_match_id` INT NOT NULL,
          PRIMARY KEY (`wc_team_id`, `wc_match_id`) );

    I've simplified the tables so we don't go in the wrong direction. Now, I've tried all kinds of joins and groupings I could think of, but I never seem to get it. Example query:

        SELECT t.wc_team_id, t.wc_match_id, c.id, c.name, d.id, d.name
        FROM wc_team_has_match AS t
        LEFT JOIN wc_match AS s ON t.wc_match_id = s.id
        LEFT JOIN wc_team AS c ON t.wc_team_id = c.id
        LEFT JOIN wc_team AS d ON t.wc_team_id = d.id

    which returns:

        wc_team_id  wc_match_id  id  name       id  name
        16          5            16  Brazil     16  Brazil
        18          5            18  Argentina  18  Argentina

    But what I really want is:

        wc_team_id  wc_match_id  id  name    id  name
        16          5            16  Brazil  18  Argentina

    Keep in mind that a group has more matches; I want to see all those matches, not only one. Any pointer or suggestion would be extremely appreciated, since I'm stuck like a duck on this one :).
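
    One hedged way to get both teams of a match onto one row (assuming exactly two wc_team_has_match rows per match): self-join the link table and keep a single ordering of each pair:

        SELECT m.id AS match_id, m.score, m.date,
               c.id AS team1_id, c.name AS team1,
               d.id AS team2_id, d.name AS team2
          FROM wc_match m
          JOIN wc_team_has_match t1 ON t1.wc_match_id = m.id
          JOIN wc_team_has_match t2 ON t2.wc_match_id = m.id
                                   AND t1.wc_team_id < t2.wc_team_id
          JOIN wc_team c ON c.id = t1.wc_team_id
          JOIN wc_team d ON d.id = t2.wc_team_id;

    The t1.wc_team_id < t2.wc_team_id condition keeps each pairing once instead of twice, and the query naturally returns one row per match, so all matches in a group appear.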

    Read the article

  • [C#] LINQ doesn't insert associated entity on insert

    - by Tomek
    Hello! I have a simple mapping:

        [Table(Name = "Person")]
        public class Person
        {
            private int id;
            private int state_id;
            private EntityRef<PersonState> state = new EntityRef<PersonState>();

            [Column(IsPrimaryKey = true, Storage = "id", Name = "id", IsDbGenerated = true, CanBeNull = false)]
            public int ID
            {
                get { return id; }
                set { id = value; }
            }

            [Column(Storage = "state_id", Name = "state_id")]
            public int StateID
            {
                get { return state_id; }
                set { state_id = value; }
            }

            [Association(Storage = "state", ThisKey = "StateID", IsForeignKey = true)]
            public PersonState State
            {
                get { return state.Entity; }
                set { state.Entity = value; }
            }
        }

        [Table(Name = "PersonState")]
        public class PersonState
        {
            private int id;
            private DateTime date;  // field implied by the Date property below
            private State state;

            [Column(Name = "id", Storage = "id", IsDbGenerated = true, IsPrimaryKey = true)]
            public int ID
            {
                get { return id; }
                set { id = value; }
            }

            [Column(Name = "date", Storage = "date")]
            public DateTime Date
            {
                get { return date; }
                set { date = value; }
            }

            [Column(Name = "type", Storage = "state")]
            public State State
            {
                get { return state; }
                set { state = value; }
            }
        }

    I use this code to insert a new person with a default state:

        private static Person NewPerson()
        {
            Person p = new Person();
            p.State = DefaultState();
            return p;
        }

        private static PersonState DefaultState()
        {
            PersonState state = new PersonState();
            state.Date = DateTime.Now;
            state.State = State.NotNotified;
            state.Comment = "Default State!";
            return state;
        }

    Later in the code:

        db.Persons.InsertOnSubmit(NewPerson());
        db.SubmitChanges();

    In the database (SQLite) I have all the new persons, but state_id of all persons is set to 0, and the PersonState table is empty. Why did LINQ not insert any State objects into the database?

    Read the article

  • Does a Postgresql dump create sequences that start with - or after - the last key?

    - by bennylope
    I recently created a SQL dump of a database behind a Django project, and after cleaning the SQL up a little bit I was able to restore the DB and all of the data. The problem was that the sequences were all mucked up. I tried adding a new user and generated the Python error IntegrityError: duplicate key violates unique constraint. Naturally I figured my SQL dump didn't restart the sequence. But it did:

        DROP SEQUENCE "auth_user_id_seq" CASCADE;
        CREATE SEQUENCE "auth_user_id_seq" INCREMENT 1 START 446 MAXVALUE 9223372036854775807 MINVALUE 1 CACHE 1;
        ALTER TABLE "auth_user_id_seq" OWNER TO "db_user";

    I figured out that a repeated attempt at creating a user (or any new row in any table with existing data and such a sequence) allowed for successful object/row creation. That solved the pressing problem. But given that the last user ID in that table was 446 - the same start value in the sequence creation above - it looks like PostgreSQL was simply trying to start creating rows with that key. Does the SQL dump provide a start key that is off by one? Or should I invoke some other command to start sequences after the given start ID? Keenly curious.
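
    For reference, a hedged way to realign a sequence after a restore (table and column names taken from the dump above):

        SELECT setval('auth_user_id_seq', (SELECT MAX(id) FROM auth_user));

    setval makes the next nextval return MAX(id) + 1, whereas CREATE SEQUENCE ... START 446 hands out 446 itself as the first value, which collides with the existing row and produces exactly this duplicate-key error on the first insert.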

    Read the article

  • Failed to connect to the Internet?

    - by raj
    I have a new wifi adapter that is able to identify the router, but a connection cannot be established. OS: Windows 7. Another laptop with wifi is able to connect to the Internet through the same router, but this wifi adapter cannot.

    Read the article

  • select from multiple tables but ordering by a datetime field

    - by Chris Mccabe
    I have 3 tables that are unrelated to each other (each contains data for a different social network). Each has a datetime field named dated. I'm already grouping by hour, as you can see below (this one is for LinkedIn):

        SELECT count(*), date_format(dated, '%Y:%m:%d %H') as hour
        FROM upd8r_linked_in_accts
        WHERE CAST(dated AS DATE) = '".$start_date."'
        GROUP BY hour

    I would like to know how to get a total across all 3 networks. The tables for the three are:

        CREATE TABLE IF NOT EXISTS `upd8r_facebook_accts` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `owner_id` varchar(50) NOT NULL,
          `user_id` int(11) NOT NULL,
          `fb_id` bigint(30) NOT NULL,
          `dated` datetime NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=80 ;

        CREATE TABLE IF NOT EXISTS `upd8r_linked_in_accts` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `owner_id` varchar(50) NOT NULL,
          `user_id` int(11) NOT NULL,
          `linked_in` varchar(200) NOT NULL,
          `oauth_secret` varchar(100) NOT NULL,
          `first_count` int(11) NOT NULL,
          `second_count` int(11) NOT NULL,
          `dated` datetime NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=200 ;

        CREATE TABLE IF NOT EXISTS `upd8r_twitter_accts` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `owner_id` varchar(50) NOT NULL,
          `user_id` int(11) NOT NULL,
          `twitter` varchar(200) NOT NULL,
          `twitter_secret` varchar(100) NOT NULL,
          `dated` datetime NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=9 ;

    Something like this?

        (SELECT count(*), date_format(dated, '%Y:%m:%d %H') as hour
         FROM upd8r_linked_in_accts
         WHERE CAST(dated AS DATE) = '".$start_date."')
        UNION ALL
        (SELECT count(*), date_format(dated, '%Y:%m:%d %H') as hour
         FROM upd8r_facebook_accts
         WHERE CAST(dated AS DATE) = '".$start_date."')
        UNION ALL
        (SELECT count(*), date_format(dated, '%Y:%m:%d %H') as hour
         FROM upd8r_twitter_accts
         WHERE CAST(dated AS DATE) = '".$start_date."')
        UNION ALL GROUP BY hour
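
    A hedged correction of that attempt (assuming the goal is one combined hourly count across all three tables): union the raw rows first, then aggregate once over the result:

        SELECT date_format(dated, '%Y:%m:%d %H') AS hour, COUNT(*) AS total
          FROM (SELECT dated FROM upd8r_linked_in_accts WHERE CAST(dated AS DATE) = '".$start_date."'
                UNION ALL
                SELECT dated FROM upd8r_facebook_accts WHERE CAST(dated AS DATE) = '".$start_date."'
                UNION ALL
                SELECT dated FROM upd8r_twitter_accts WHERE CAST(dated AS DATE) = '".$start_date."') AS all_accts
         GROUP BY hour

    Grouping once over the combined rows avoids the dangling UNION ALL ... GROUP BY at the end of the attempt above and returns a single hourly series instead of three separate counts.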

    Read the article

  • What nonclustered index would be better to create on SQL Server?

    - by Junior Mayhé
    Here I am studying nonclustered indexes in SQL Server Management Studio. I've created a table with more than 1 million records. This table has a primary key.

        SELECT CustomerName FROM Customers

    This leads the execution plan to show me:

        I/O cost = 3.45646
        Operator cost = 4.57715

    For the first attempt to improve performance, I created a nonclustered index for this table:

        CREATE NONCLUSTERED INDEX [IX_CustomerID_CustomerName] ON [dbo].[Customers]
        (
            [CustomerId] ASC,
            [CustomerName] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
                IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    With this first try, I executed the select statement and the execution plan shows me:

        I/O cost = 2.79942
        Operator cost = 3.92001

    For the second try, I deleted this nonclustered index in order to create a new one:

        CREATE NONCLUSTERED INDEX [IX_CategoryName] ON [dbo].[Categories]
        (
            [CategoryId] ASC
        )
        INCLUDE ([CategoryName])
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
              IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    With this second try, I executed the select statement and the execution plan shows me the same result:

        I/O cost = 2.79942
        Operator cost = 3.92001

    Am I doing something wrong, or is this expected? Should I use the first nonclustered index with two fields, or the second nonclustered index with one field (CategoryId) including the second field (CategoryName)?
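
    One hedged observation (a sketch, not a definitive tuning recommendation): the SELECT has no WHERE clause, so the optimizer simply scans the narrowest structure containing CustomerName; any index that covers that one column yields the same plan, which would explain the identical costs. An index keyed on the queried column alone, against the Customers table from the question (the second index as posted targets a different table, Categories), would look like:

        CREATE NONCLUSTERED INDEX IX_Customers_CustomerName
            ON dbo.Customers (CustomerName);

    The key-versus-INCLUDE distinction starts to matter once queries filter or sort on the leading key columns.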

    Read the article

  • Enabling "USB Printing Support" in Windows 7

    - by Kevin Dente
    I'm trying to use an old parallel-port printer with a USB-to-parallel-port adapter on Windows 7. When I plug it into the USB port on the computer, it's listed as an unrecognized device. I know that these cables typically use the "USB Printing Support" driver, which makes USB ports show up as printer ports in the printer dialog. Is there a way to manually add USB Printing Support to Windows 7, since it isn't being added automatically?

    Read the article

  • WCF data services (OData), query with inheritance limitation?

    - by Mathieu Hétu
    Project: WCF Data Services, internally using the EF4 CTP5 code-first approach. I configured entities with inheritance (TPH); see my previous question on this topic: Previous question about multiple entities - same table. The mapping works well, and unit tests over EF4 confirm that queries run smoothly. My entities look like this:

        ContactBase (abstract)
        Customer (inherits from ContactBase; this entity also has several navigation properties toward other entities)
        Resource (inherits from ContactBase)

    I have configured a discriminator, so both Customer and Resource map to the same table. Again, everything works fine from the EF4 point of view (unit tests all green!). However, when exposing this DbContext over WCF Data Services, I get:

    - a ContactBases set exposed (the Customers and Resources sets seem hidden; is that by design?)
    - when I query Customers over OData, this error:

        Navigation Properties are not supported on derived entity types. Entity Set 'ContactBases' has a instance of type 'CodeFirstNamespace.Customer', which is an derived entity type and has navigation properties. Please remove all the navigation properties from type 'CodeFirstNamespace.Customer'.

    Stacktrace:

        at System.Data.Services.Serializers.SyndicationSerializer.WriteObjectProperties(IExpandedResult expanded, Object customObject, ResourceType resourceType, Uri absoluteUri, String relativeUri, SyndicationItem item, DictionaryContent content, EpmSourcePathSegment currentSourceRoot)
        at System.Data.Services.Serializers.SyndicationSerializer.WriteEntryElement(IExpandedResult expanded, Object element, ResourceType expectedType, Uri absoluteUri, String relativeUri, SyndicationItem target)
        at System.Data.Services.Serializers.SyndicationSerializer.<DeferredFeedItems>d__b.MoveNext()
        at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteItems(XmlWriter writer, IEnumerable`1 items, Uri feedBaseUri)
        at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteFeedTo(XmlWriter writer, SyndicationFeed feed, Boolean isSourceFeed)
        at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteFeed(XmlWriter writer)
        at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteTo(XmlWriter writer)
        at System.Data.Services.Serializers.SyndicationSerializer.WriteTopLevelElements(IExpandedResult expanded, IEnumerator elements, Boolean hasMoved)
        at System.Data.Services.Serializers.Serializer.WriteRequest(IEnumerator queryResults, Boolean hasMoved)
        at System.Data.Services.ResponseBodyWriter.Write(Stream stream)

    It seems like a limitation of WCF Data Services... is it? Not much documentation can be found on the web about WCF Data Services (OData) and inheritance specifics. How can I get past this exception? I need these navigation properties on derived entities, and inheritance seems the only way to map two entities to the same table with EF4 CTP5... Any thoughts?

    Read the article

  • Database schema for Product Properties

    - by Chemosh
    Like so many people, I'm looking for a products / product-properties database schema. I'm using Ruby on Rails and (Thinking) Sphinx for faceted searches. Requirements: adding new product types and their options should not require a change to the database schema, and faceted searches using Sphinx must be supported. Solutions I've come across (see Bill Karwin's answer):

    Option 1: Single Table Inheritance. Not really an option; the table would contain way too many columns.

    Option 2: Class Table Inheritance. Ruby on Rails caches the database schema on start-up, which means a restart whenever a new type of product is introduced. With a sizeable product catalog this could mean hundreds of tables.

    Option 3: Serialized LOB. Kills being able to do faceted searches without heavy application logic.

    Option 4: Entity-Attribute-Value. For testing purposes, EAV worked fine. However, it could quickly become a mess and a maintenance hell as you add more and more options (e.g. when an option increases the price or delivery time).

    Which option should I go with? What other solutions are out there? Is there a silver bullet (ha) I overlooked?
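
    For concreteness, a minimal EAV sketch of the kind of thing option 4 implies (hypothetical table and column names):

        CREATE TABLE products (
            id   SERIAL PRIMARY KEY,
            name VARCHAR(255) NOT NULL
        );

        CREATE TABLE product_attributes (
            product_id INT          NOT NULL REFERENCES products(id),
            attribute  VARCHAR(64)  NOT NULL,  -- e.g. 'color', 'delivery_time'
            value      VARCHAR(255) NOT NULL,  -- everything stored as text
            PRIMARY KEY (product_id, attribute)
        );

    The trade-off is visible right in the schema: new product types need no DDL change, but typing, constraints, and any option-dependent price or delivery logic all move into application code.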

    Read the article

  • Hyper-V Ubuntu Networking Problems Copying Large Amounts of Data

    - by Anonymous
    I am trying to copy a large amount (about 50 GB) of data over my network from a Hyper-V-hosted virtual machine running Ubuntu 11.04 (Natty Narwhal) to another (non-virtual) Ubuntu host that I plan to use for testing upgrades to one of our web applications. The problem I am having is with the virtual machine, which I shall refer to in what follows as "source.host". This machine is running 64-bit Ubuntu Server with the 2.6.38-8-server kernel and the Microsoft Linux Integration Components for Hyper-V kernel modules (hv_utils, hv_timesource, hv_netvsc, hv_blkvsc, hv_storvsc, and hv_vmbus) loaded. It uses a Hyper-V "synthetic network adapter" for its networking interface. To do the copy, I log on to the machine with the data and run the following commands (call the remote machine "destination.host"):

        $ cd /path/to/data
        $ tar -cvf - datafolder/ | ssh [email protected] "cat > ~/data.tar"

    This runs for a while and then suddenly stops after transferring somewhere from 2-6 GB. The terminal on the source.host machine displays a "Write failed: broken pipe" error. The odd part is this: after this occurs, the source.host machine is no longer able to talk to the rest of the network. I cannot ping any other hosts on the network from it, and I cannot ping it from any other host on the network. I am equally unable to access any of the web services hosted on source.host. Running ifconfig on source.host shows the network adapter to be up and running as usual, with the correct IP address and everything. I tried restarting the networking service with

        $ /etc/init.d/networking restart

    but the problem does not go away. Restarting the machine makes it capable of talking to the network again (it can ping and be pinged by other hosts, and the web services are also accessible and usable as normal), but attempting the copy operation again results in the same failure, requiring another restart. As an experiment, I tried replacing the tar/ssh pipeline above with a straight scp:

        $ scp -r datafolder/ [email protected]:~

    but to no avail. Thinking that the issue might have to do with the kernel packet-send buffers filling up, I tried increasing the buffer size to 12 MB (up from the 128 KB default) with

        # echo 12582911 > /proc/sys/net/core/wmem_max

    but this also had no effect. I'm guessing at this point that it might be a problem with the Microsoft synthetic network driver, but I don't really know. Does anyone have any suggestions? Thank you very much in advance!

    Read the article

  • customer.name joining transactions.name vs. customer.id [serial] joining transactions.id [integer]

    - by Frank Computer
    INFORMIX-SQL 7.32, pawnshop application: a one-to-many relationship where each customer (master) can have many transactions (detail).

        customer(
            id       serial,
            pk_name  char(30),  {PATERNAL-NAME MATERNAL-NAME, FIRST-NAME MIDDLE-NAME}
            [...]
        );
        unique index on id;
        unique cluster index on name;

        transaction(
            fk_name       char(30),
            ticket_number serial,
            [...]
        );
        dups cluster index on fk_name;
        unique index on ticket_number;

    Several people have told me this is not the correct way to join master to detail. They said I should always join customer.id [serial] to transactions.id [integer]. When a customer pawns merchandise, the clerk queries the master using wildcards on name. The query usually returns several customers; the clerk scrolls until locating the right name, enters 'D' to change to the detail transactions table, all transactions are automatically queried, and then the clerk enters 'A' to add a new transaction. The problem with joining customer.id to transaction.id is that although the customer table is maintained in sorted name order, clustering the transaction table by fk_id groups the transactions by fk_id, but not in the same order as the customer names. So when the clerk is scrolling through customer names in the master, the system has to jump all over the place to locate the clustered transactions belonging to each customer. As each new customer is added, the next id is assigned to that customer, but new customers don't show up in alphabetical order. I experimented with id joins and confirmed the decrease in performance. How can I use id joins instead of name joins and still preserve the clustered transaction order by name if transactions has no name column?
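
    A hedged sketch of the id-join variant (assuming transaction gains an integer fk_id column in place of fk_name): join on the serial key and let an ORDER BY restore the alphabetical presentation that the clustered-by-name layout used to give for free:

        SELECT c.pk_name, t.ticket_number
          FROM customer c, transaction t
         WHERE t.fk_id = c.id
           AND c.pk_name LIKE 'SMITH%'
         ORDER BY c.pk_name, t.ticket_number;

    Whether the engine can then avoid the scattered reads depends on indexing fk_id, which is the trade-off the question is really about.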

    Read the article

  • LINQ Joins - Performance

    - by Meiscooldude
    I am curious how exactly LINQ (not LINQ to SQL) performs its joins behind the scenes, in relation to how SQL Server performs joins. Before executing a query, SQL Server generates an execution plan: basically an expression tree describing what it believes is the best way to execute the query. Each node provides information on whether to do a sort, scan, select, join, etc. On a 'Join' node in our execution plan, we can see three possible algorithms: Hash Join, Merge Join, and Nested Loops Join. SQL Server chooses which algorithm to use for each join operation based on the expected number of rows in the inner and outer tables, the type of join we are doing (some algorithms don't support all types of joins), whether we need data ordered, and probably many other factors.

    Join algorithms:

        Nested Loops Join: best for small inputs; can be optimized with an ordered inner table.
        Merge Join: best for medium to large sorted inputs, or an output that needs to be ordered.
        Hash Join: best for medium to large inputs; can be parallelized to scale linearly.

    LINQ query:

        DataTable firstTable, secondTable;
        ...
        var rows = from firstRow in firstTable.AsEnumerable()
                   join secondRow in secondTable.AsEnumerable()
                   on firstRow.Field<object>(randomObject.Property)
                   equals secondRow.Field<object>(randomObject.Property)
                   select new { firstRow, secondRow };

    SQL query:

        SELECT *
        FROM firstTable fT
        INNER JOIN secondTable sT ON fT.Property = sT.Property

    SQL Server might use a Nested Loops Join if it knows there are a small number of rows in each table, a Merge Join if it knows one of the tables has an index, and a Hash Join if it knows there are a lot of rows in either table and neither has an index. Does LINQ choose its algorithm for joins, or does it always use one?

    Read the article

< Previous Page | 453 454 455 456 457 458 459 460 461 462 463 464  | Next Page >