Search Results

Search found 33758 results on 1351 pages for 'primary key design'.

Page 263 of 1351

  • Why does Gnome-Shell think my "d" key minimizes all windows?

    - by Limitless
    I am on Ubuntu 12.04 LTS (in gnome-shell) and I am unable to use my "d" key. Regular gnome-2 works perfectly fine, but for some reason I cannot use my "d" key in gnome-3. I recently installed it and have been trying to figure this out, but I have no clue what's going on. I attempted to disable the keyboard shortcut that minimizes active windows, but that did not work. (My only option right now is to use CTRL-V to paste my "d"s.) On top of this, my arrow keys automatically move my windows and I cannot move through text using them. What's going on here, folks?

    Read the article

  • MySQL DELETE and Database Relationships

    - by Colin
    If I'm trying to delete multiple rows from a table and one of those rows can't be deleted because of a database relationship, what will happen? Will the rows that aren't constrained by a relationship still be deleted? Or will the entire delete fail? Thanks, Colin
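
    To make the scenario concrete, here is a small InnoDB sketch (hypothetical parent/child tables, not my real schema). If I understand the docs correctly, a DELETE that hits a default RESTRICT foreign key fails as a whole, so even the unreferenced rows it matched are kept:

    CREATE TABLE parent (
        id INT PRIMARY KEY
    ) ENGINE=InnoDB;

    CREATE TABLE child (
        id        INT PRIMARY KEY,
        parent_id INT,
        FOREIGN KEY (parent_id) REFERENCES parent (id)   -- RESTRICT by default
    ) ENGINE=InnoDB;

    INSERT INTO parent VALUES (1), (2), (3);
    INSERT INTO child  VALUES (10, 2);                    -- only parent 2 is referenced

    -- Fails with error 1451; the statement is atomic, so parents 1 and 3 survive too.
    -- Deleting them individually (or using ON DELETE CASCADE / SET NULL in the schema)
    -- is what allows partial deletion.
    DELETE FROM parent WHERE id IN (1, 2, 3);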

    Read the article

  • MVC: Model View Controller -- does the View call the Model?

    - by Gary Green
    I've been reading about MVC design for a while now, and it seems that officially the View calls objects and methods in the Model, then builds and outputs a view. I think this is mainly wrong. The Controller should act on and retrieve/update objects inside the Model, select an appropriate View, and pass the information to it so it may display. Only crude and rudimentary PHP variables/simple if statements should appear inside the View. If the View gets the information it needs to display from the Model, surely there will be a lot of PHP inside the View -- completely violating the point of separating presentation logic.

    Read the article

  • Advantages of Thread pooling in embedded systems

    - by Microkernel
    I am looking at the advantages of the thread-pooling design pattern in embedded systems. I have listed a few advantages; please go through them, comment, and suggest any other possible advantages that I am missing. Scalability in systems like µC/OS-II where there is a limit on the number of threads. The ability to increase the capacity of any task when necessary, such as garbage collection (in a normal system, if garbage collection runs under a single task it is not possible to speed it up, but with a thread pool we can easily speed it up). The ability to set a limit on the maximum system load. Please suggest anything I am missing.

    Read the article

  • multi_index composite_key replace with iterator

    - by Rohit
    Is there any way to loop through an index in a boost::multi_index and perform a replace?

    #include <iostream>
    #include <string>
    #include <boost/multi_index_container.hpp>
    #include <boost/multi_index/composite_key.hpp>
    #include <boost/multi_index/member.hpp>
    #include <boost/multi_index/ordered_index.hpp>

    using namespace boost::multi_index;
    using namespace std;

    struct name_record
    {
    public:
        name_record(string given_name_, string family_name_, string other_name_)
        {
            given_name = given_name_;
            family_name = family_name_;
            other_name = other_name_;
        }

        string given_name;
        string family_name;
        string other_name;

        string get_name() const { return given_name + " " + family_name + " " + other_name; }

        void setnew(string chg)
        {
            given_name = given_name + chg;
            family_name = family_name + chg;
        }
    };

    struct NameIndex{};

    typedef multi_index_container<
        name_record,
        indexed_by<
            ordered_non_unique<
                tag<NameIndex>,
                composite_key<
                    name_record,
                    BOOST_MULTI_INDEX_MEMBER(name_record, string, name_record::given_name),
                    BOOST_MULTI_INDEX_MEMBER(name_record, string, name_record::family_name)
                >
            >
        >
    > name_record_set;

    typedef boost::multi_index::index<name_record_set, NameIndex>::type::iterator IteratorType;
    typedef boost::multi_index::index<name_record_set, NameIndex>::type NameIndexType;

    void printContainer(name_record_set & ns)
    {
        cout << endl << "PrintContainer" << endl << "-------------" << endl;
        IteratorType it1 = ns.begin();
        IteratorType it2 = ns.end();
        while (it1 != it2)
        {
            cout << it1->get_name() << endl;
            it1++;
        }
        cout << "--------------" << endl << endl;
    }

    void modifyContainer(name_record_set & ns)
    {
        cout << endl << "ModifyContainer" << endl << "-------------" << endl;
        IteratorType it3;
        IteratorType it4;
        NameIndexType & idx1 = ns.get<NameIndex>();
        IteratorType it1 = idx1.begin();
        IteratorType it2 = idx1.end();
        while (it1 != it2)
        {
            cout << it1->get_name() << endl;
            name_record nr = *it1;
            nr.setnew("_CHG");
            bool res = idx1.replace(it1, nr);
            cout << "result is: " << res << endl;
            it1++;
        }
        cout << "--------------" << endl << endl;
    }

    int main()
    {
        name_record_set ns;

        ns.insert( name_record("Joe","Smith","ENTRY1") );
        ns.insert( name_record("Robert","Brown","ENTRY2") );
        ns.insert( name_record("Robert","Nightingale","ENTRY3") );
        ns.insert( name_record("Marc","Tuxedo","ENTRY4") );

        printContainer(ns);
        modifyContainer(ns);
        printContainer(ns);

        return 0;
    }

    The output is:

    PrintContainer
    -------------
    Joe Smith ENTRY1
    Marc Tuxedo ENTRY4
    Robert Brown ENTRY2
    Robert Nightingale ENTRY3
    --------------

    ModifyContainer
    -------------
    Joe Smith ENTRY1
    result is: 1
    Marc Tuxedo ENTRY4
    result is: 1
    Robert Brown ENTRY2
    result is: 1
    --------------

    PrintContainer
    -------------
    Joe_CHG Smith_CHG ENTRY1
    Marc_CHG Tuxedo_CHG ENTRY4
    Robert Nightingale ENTRY3
    Robert_CHG Brown_CHG ENTRY2
    --------------

    Read the article

  • PostgreSQL - one database for everyone, or one database per customer?

    - by user337876
    I'm working on a web-based business application where each customer will need to have their own data (think of the basecamphq.com type of model). For scalability and ease of upgrades, I'd prefer to have a single database where each customer gets a filtered version of the data. The problem is how to guarantee that they stay sandboxed to their own data; trying to enforce it in code seems like a disaster waiting to happen. I know Oracle has a way to append a WHERE clause to every query based on a login id, but does PostgreSQL have anything similar? If not, is there a different design pattern I could use (like creating a view of each table, per customer, that filters)? Worst-case scenario, what is the performance/memory overhead of having 1000 100 MB databases versus a single 1 TB database? I will need to provide backup/restore functionality on a per-customer basis, which is dead simple with a database per customer but quite a bit trickier if they share the database with other customers.
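
    To sketch the per-customer-view idea (table, view, and role names are invented here, just to show the shape of it): give every customer a database role, let them query only a view that filters on the logged-in role, and grant nothing on the base table.

    -- Hypothetical multi-tenant table
    CREATE TABLE projects (
        id       serial PRIMARY KEY,
        customer text NOT NULL,          -- matches the customer's role name
        name     text NOT NULL
    );

    -- Customers query only this view; it filters on the connected role,
    -- and the view owner's rights are used to read the base table.
    CREATE VIEW my_projects AS
        SELECT id, name
        FROM projects
        WHERE customer = current_user;

    CREATE ROLE acme LOGIN PASSWORD 'secret';
    GRANT SELECT ON my_projects TO acme;  -- no grant on projects itself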

    Read the article

  • Storing secret keys in iPhone source and project resources

    - by hgpc
    Is storing secret keys (internal-use passwords and such) in iPhone source code and project resources (such as plist files) secure? Obviously nothing is 100% secure, but can this information be extracted easily from an installed app? How do you recommend storing these keys for use in the source code? Just in case: this question is not about storing user passwords.

    Read the article

  • How can I enjoy (or avoid) designing every web application I make?

    - by schmrz
    I know this sounds silly, but I'm having huge problems (OK, not that huge, but still...) whenever I get an idea for a web project, small or big. The instant turn-off is remembering that I have to code the HTML/CSS by hand again and again. I like programming a lot more than designing web sites, and I simply don't enjoy designing them as much as I enjoy programming them. With that said, I also prefer simple and minimalistic designs. What is your approach to web design, and how do you make it enjoyable (at least a little bit)?

    Read the article

  • Wondering how Facebook does the "Mutual friends" feature

    - by Pierre
    Hello, I'm currently developing an application to allow students to manage their courses, and I don't really know how to design the database for a specific feature. The client wants that, much like Facebook, when a student displays the list of people currently in a specific course, the people with the most mutual courses are displayed first. As an additional feature, I would like to add search functionality so students can look each other up, again displaying the people with the most mutual courses first in the results. I currently use MySQL, I plan to use Cassandra for some other features, and I also use Memcached for result caching. Thanks.
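
    To show the kind of query I have in mind, a rough MySQL sketch against a hypothetical enrolments(student_id, course_id) table -- a plain self-join ranked by overlap (a real roster page would additionally restrict e2.student_id to the people in the displayed course):

    -- enrolments(student_id INT, course_id INT), assumed unique per pair
    SELECT  e2.student_id,
            COUNT(*) AS mutual_courses
    FROM    enrolments e1
    JOIN    enrolments e2
            ON  e2.course_id  = e1.course_id
            AND e2.student_id <> e1.student_id
    WHERE   e1.student_id = 42            -- the student doing the browsing
    GROUP BY e2.student_id
    ORDER BY mutual_courses DESC;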

    Read the article

  • Organizing memcache keys

    - by Industrial
    Hi! I'm trying to find a good way to handle memcache keys for storing, retrieving and updating data to/from the cache layer in a more civilized way. I found this pattern, which looks great, but how do I turn it into a functional part of a PHP application? The Identity Map pattern: http://martinfowler.com/eaaCatalog/identityMap.html Thanks!

    Read the article

  • Artistic aspects of UI?

    - by anon
    Consider a single button. At one extreme, we have a black OpenGL window with the outline (in white) of a rectangle and a bitmap-rendered font inside it saying "OK". At the other extreme, we have a Mac OS X button that is well rounded, has a gradient showing light effects on it, a nicely antialiased "OK", and a soft shadow of some sort. These two UIs present very, very different user experiences: the former says "this is from the 80s", the latter says "this is professional". This is something I do not understand well as a programmer (and don't know where to learn about it). Does anyone know of a good technical resource for this? [I'd prefer material that draws upon the psychology/perception literature to say why to do something, rather than design books that just say "use color XYZ with a gradient of blah".]

    Read the article

  • e-shop implementation: Status for Orders?

    - by Guillermo
    Hello again, my fellow programmers out there. I'm designing and programming an online shop from scratch. It has a module to manage "Orders" received via the frontend. I need a status to know what is happening with an order at a certain moment; let's say the statuses are: Pending Payment, Confirmed - Awaiting Shipment, Shipped, Cancelled. My question is a simple one, but it is very important for the store design: how would you store this status? Would you create a column for it in the Orders table, or would you just "calculate" the status of each order depending on whether payments have been received or shipments have been made (except, I suppose, for an is_cancelled column)? What would be the best approach to model this kind of problem? PS: In the future I would even like these statuses to be configurable by other clients using the same software.
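
    If the stored-column route wins, the sketch I have in mind looks roughly like this (table and column names invented), with a lookup table so clients could configure their own statuses later; the alternative is deriving the status on the fly from the payment and shipment tables each time.

    CREATE TABLE order_statuses (
        id   INT PRIMARY KEY,
        name VARCHAR(50) NOT NULL        -- 'Pending Payment', 'Shipped', ...
    ) ENGINE=InnoDB;

    CREATE TABLE orders (
        id           INT AUTO_INCREMENT PRIMARY KEY,
        status_id    INT NOT NULL,
        is_cancelled TINYINT(1) NOT NULL DEFAULT 0,
        FOREIGN KEY (status_id) REFERENCES order_statuses (id)
    ) ENGINE=InnoDB;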

    Read the article

  • Views performance in MySQL for denormalization

    - by Gianluca Bargelli
    I am currently writing my truly first PHP application and I would like to know how to design/implement MySQL views properly. In my particular case, user data is spread across several tables (as a consequence of database normalization) and I was thinking of using a view to group the data into one large table:

    CREATE VIEW `Users_Merged` (
        name, surname, email, phone, role
    ) AS
    ( SELECT name, surname, email, phone, 'Customer' FROM `Customer` )
    UNION
    ( SELECT name, surname, email, tel, 'Admin' FROM `Administrator` )
    UNION
    ( SELECT name, surname, email, tel, 'Manager' FROM `manager` );

    This way I can use the view's data from the PHP app easily, but I don't really know how much this can affect performance. For example:

    SELECT * FROM `Users_Merged` WHERE role = 'Admin';

    Is this the right way to filter the view's data, or should I filter BEFORE creating the view itself? (I need this to get a list of users and the ability to filter them by role.) EDIT: Specifically, what I'm trying to obtain is denormalization of three tables into one. Is my solution correct? See Denormalization on Wikipedia.
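
    Note that MySQL cannot use the MERGE algorithm for a view containing UNION, so a WHERE applied to Users_Merged is evaluated only after the whole union has been materialized into a temporary table. When only one role is needed, querying that branch directly avoids building the union at all, e.g.:

    -- Equivalent to SELECT * FROM Users_Merged WHERE role = 'Admin',
    -- but it never touches the Customer and manager branches.
    SELECT name, surname, email, tel AS phone, 'Admin' AS role
    FROM `Administrator`;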

    Read the article

  • How do I start with Gomoku?

    - by firstTry
    I read that Gomoku can be implemented using the Minimax and Alpha-Beta Pruning algorithms, so I read up on these algorithms and now understand how the game can be solved. But when I sat down to code, I ran into the problem of how to approach it. How should I design the prototype functions like getNextMove or Max(Move)? How is the next move searched for? To what depth should I apply the minimax algorithm? I know I can find code online, but I want to do it myself. Can anyone please point me in the right direction?

    Read the article

  • Make seems to think a prerequisite is an intermediate file, removes it

    - by James
    For starters, this exercise in GNU make was admittedly just that: an exercise rather than a practicality, since a simple bash script would have sufficed. However, it brought up interesting behavior I don't quite understand. I've written a seemingly simple Makefile to handle generation of SSL key/cert pairs as necessary for MySQL. My goal was for make <name> to result in <name>-key.pem, <name>-cert.pem, and any other necessary files (specifically, the CA pair if any of it is missing or needs updating, which leads into another interesting follow-up exercise of handling reverse deps to reissue any certs that had been signed by a missing/updated CA cert). After executing all rules as expected, make seems to be too aggressive at identifying intermediate files for removal; it removes a file I thought would be "safe" since it should have been generated as a prereq to the main rule I'm invoking. (Humbly translated, I likely have misinterpreted make's documented behavior to suit my expectation, but don't understand how. ;-) Edited (thanks, Chris!): Adding %-cert.pem to .PRECIOUS does, of course, prevent the deletion. (I had been using the wrong syntax.)

    Makefile:

    OPENSSL = /usr/bin/openssl

    # Corrected, thanks Chris!
    .PHONY: clean

    default: ca

    clean:
        rm -I *.pem

    %: %-key.pem %-cert.pem
        @# Placeholder (to make this implicit create a rule and not cancel one)

    Makefile:
        @# Prevent the catch-all from matching Makefile

    ca-cert.pem: ca-key.pem
        $(OPENSSL) req -new -x509 -nodes -days 1000 -key ca-key.pem > $@

    %-key.pem:
        $(OPENSSL) genrsa 2048 > $@

    %-cert.pem: %-csr.pem ca-cert.pem ca-key.pem
        $(OPENSSL) x509 -req -in $< -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > $@

    Output:

    $ make host1
    /usr/bin/openssl genrsa 2048 > ca-key.pem
    /usr/bin/openssl req -new -x509 -nodes -days 1000 -key ca-key.pem > ca-cert.pem
    /usr/bin/openssl genrsa 2048 > host1-key.pem
    /usr/bin/openssl req -new -days 1000 -nodes -key host1-key.pem > host1-csr.pem
    /usr/bin/openssl x509 -req -in host1-csr.pem -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > host1-cert.pem
    rm host1-csr.pem host1-cert.pem

    This is driving me crazy, and I'll happily try any suggestions and post results. If I'm just totally noobing out on this one, feel free to jibe away. You can't possibly hurt my feelings. :)

    Read the article

  • How to optimize this MySQL query - explain output included

    - by Sandeepan Nath
    This is the query (basically a search query, based on tags):

    select SUM(DISTINCT(ttagrels.id_tag in (2105,2120,2151,2026,2046))) as key_1_total_matches,
           td.*, u.*
    from Tutors_Tag_Relations AS ttagrels
    Join Tutor_Details AS td ON td.id_tutor = ttagrels.id_tutor
    JOIN Users as u on u.id_user = td.id_user
    where (ttagrels.id_tag in (2105,2120,2151,2026,2046))
    group by td.id_tutor
    HAVING key_1_total_matches = 1

    And following is the database dump needed to execute this query:

    CREATE TABLE IF NOT EXISTS `Users` (
      `id_user` int(10) unsigned NOT NULL auto_increment,
      `id_group` int(11) NOT NULL default '0',
      PRIMARY KEY (`id_user`),
      KEY `Users_FKIndex1` (`id_group`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=730;

    INSERT INTO `Users` (`id_user`, `id_group`) VALUES (303, 1);

    CREATE TABLE IF NOT EXISTS `Tutor_Details` (
      `id_tutor` int(10) unsigned NOT NULL auto_increment,
      `id_user` int(10) NOT NULL default '0',
      PRIMARY KEY (`id_tutor`),
      KEY `Users_FKIndex1` (`id_user`),
      KEY `id_user` (`id_user`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=58;

    INSERT INTO `Tutor_Details` (`id_tutor`, `id_user`) VALUES (26, 303);

    CREATE TABLE IF NOT EXISTS `Tags` (
      `id_tag` int(10) unsigned NOT NULL auto_increment,
      `tag` varchar(255) default NULL,
      PRIMARY KEY (`id_tag`),
      UNIQUE KEY `tag` (`tag`),
      KEY `id_tag` (`id_tag`),
      KEY `tag_2` (`tag`),
      KEY `tag_3` (`tag`),
      KEY `tag_4` (`tag`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2957;

    INSERT INTO `Tags` (`id_tag`, `tag`) VALUES
    (2026, 'Brendan.\nIn'),
    (2046, 'Brendan.'),
    (2105, 'Brendan'),
    (2120, 'Brendan''s'),
    (2151, 'Brendan)');

    CREATE TABLE IF NOT EXISTS `Tutors_Tag_Relations` (
      `id_tag` int(10) unsigned NOT NULL default '0',
      `id_tutor` int(10) unsigned default NULL,
      `tutor_field` varchar(255) default NULL,
      `cdate` timestamp NOT NULL default CURRENT_TIMESTAMP,
      `udate` timestamp NULL default NULL,
      KEY `Tutors_Tag_Relations` (`id_tag`),
      KEY `id_tutor` (`id_tutor`),
      KEY `id_tag` (`id_tag`),
      KEY `id_tutor_2` (`id_tutor`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    INSERT INTO `Tutors_Tag_Relations` (`id_tag`, `id_tutor`, `tutor_field`, `cdate`, `udate`) VALUES
    (2105, 26, 'firstname', '2010-06-17 17:08:45', NULL);

    ALTER TABLE `Tutors_Tag_Relations`
      ADD CONSTRAINT `Tutors_Tag_Relations_ibfk_2` FOREIGN KEY (`id_tutor`) REFERENCES `Tutor_Details` (`id_tutor`) ON DELETE NO ACTION ON UPDATE NO ACTION,
      ADD CONSTRAINT `Tutors_Tag_Relations_ibfk_1` FOREIGN KEY (`id_tag`) REFERENCES `Tags` (`id_tag`) ON DELETE NO ACTION ON UPDATE NO ACTION;

    What does the query do? It searches for tutors that contain "Brendan" (in their name, biography, or similar). The id_tags 2105, 2120, 2151, 2026, 2046 are simply the tags that are LIKE "%Brendan%". My questions are: 1. In the EXPLAIN output for this query, the ref column shows NULL for ttagrels even though there are possible keys (Tutors_Tag_Relations, id_tutor, id_tag, id_tutor_2). Why is no key being used, and how can I make the query use one? Is it possible at all? 2. The other two tables, td and u, are using references. Is any further indexing needed on those? I think not. The EXPLAIN output is here: http://www.test.examvillage.com/explain.png
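
    One experiment worth trying (not claiming it is the answer) is a composite index covering both the IN() filter and the join column on Tutors_Tag_Relations, so that lookup can be resolved from a single index; with only the handful of rows in this dump the optimizer may still prefer a full scan, so it should be tested against realistic data volumes.

    -- Hypothetical composite index; the redundant single-column id_tag / id_tutor
    -- indexes could be dropped afterwards if this one gets used.
    ALTER TABLE `Tutors_Tag_Relations`
        ADD INDEX `idx_tag_tutor` (`id_tag`, `id_tutor`);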

    Read the article

  • C# - How to override GetHashCode with Lists in object

    - by Christian
    Hi, I am trying to create a "KeySet" to modify UIElement behaviour. The idea is to run a special function if, e.g., the user clicks on an element while holding A, or Ctrl+A. My approach so far: first, let's create a container for all possible modifiers. If I only allowed a single key, it would be no problem; I could use a simple Dictionary, with Dictionary<Keys, Action> _specialActionList. If the dictionary is empty, use the default action; if there are entries, check which action to use depending on the currently pressed keys. And if I weren't greedy, that would be it... Now of course, I want more: I want to allow multiple keys or modifiers. So I created a wrapper class which can be used as the key to my dictionary. There is an obvious problem when using a more complex class: currently two different instances produce two different keys, and thereby the lookup never finds my function (see the code to understand, it's really obvious). Now I checked this post: http://stackoverflow.com/questions/638761/c-gethashcode-override-of-object-containing-generic-array which helped a little. But my question is: is my basic design for the class OK? Should I use a HashSet to store the modifiers and normal keyboard keys (instead of Lists)? And if so, what would the GetHashCode function look like? I know it's a lot of code to write (boring hash functions); some tips would be sufficient to get me started. Will post tryouts here... And here comes the code so far; the Test obviously fails...

    public class KeyModifierSet
    {
        private readonly List<Key> _keys = new List<Key>();
        private readonly List<ModifierKeys> _modifierKeys = new List<ModifierKeys>();

        private static readonly Dictionary<KeyModifierSet, Action> _testDict =
            new Dictionary<KeyModifierSet, Action>();

        public static void Test()
        {
            _testDict.Add(new KeyModifierSet(Key.A), () => Debug.WriteLine("nothing"));
            if (!_testDict.ContainsKey(new KeyModifierSet(Key.A)))
                throw new Exception("Not done yet, help :-)");
        }

        public KeyModifierSet(IEnumerable<Key> keys, IEnumerable<ModifierKeys> modifierKeys)
        {
            foreach (var key in keys) _keys.Add(key);
            foreach (var key in modifierKeys) _modifierKeys.Add(key);
        }

        public KeyModifierSet(Key key, ModifierKeys modifierKey)
        {
            _keys.Add(key);
            _modifierKeys.Add(modifierKey);
        }

        public KeyModifierSet(Key key)
        {
            _keys.Add(key);
        }
    }

    Read the article

  • MySQL and general database normalization question

    - by Sinan
    I have a question about normalization. Suppose I have an application dealing with songs. At first I thought about doing it like this: Songs table: id | song_title | album_id | publisher_id | artist_id; Albums table: id | album_title | etc.; Publishers table: id | publisher_name | etc.; Artists table: id | artist_name | etc. Then, thinking about normalization, I thought I should get rid of album_id, publisher_id, and artist_id in the songs table and put them in intermediate tables like this: table song_album (song_id, album_id); table song_publisher (song_id, publisher_id); table song_artist (song_id, artist_id). Now I can't decide which is the better way. I'm not an expert on database design, so if someone would point me in the right direction it would be awesome. Are there any performance issues between the two approaches? Thanks
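
    For reference, a minimal sketch of the first approach (only the artist shown, names shortened): as long as each song has exactly one album, publisher and artist, plain foreign-key columns on songs are already normalized; the intermediate tables only become necessary if a relationship turns many-to-many (e.g. one song released on several albums).

    CREATE TABLE artists (
        id   INT AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(255) NOT NULL
    ) ENGINE=InnoDB;

    CREATE TABLE songs (
        id        INT AUTO_INCREMENT PRIMARY KEY,
        title     VARCHAR(255) NOT NULL,
        artist_id INT NOT NULL,           -- one artist per song (assumed)
        FOREIGN KEY (artist_id) REFERENCES artists (id)
    ) ENGINE=InnoDB;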

    Read the article

  • Is It Possible To Use Javascript/CSS To Swap Style Sheets When A Mobile Device Rotates?

    - by Sean M
    I am working on a site that must be designed with mobile accessibility in mind. As part of our brainstorming, we wondered whether it's possible to detect, in a mobile browser (e.g. Mobile Safari or the Android browser), when the viewing device has changed orientation, and to use that as a trigger to change page content. As the title of this question implies, our best-case scenario is the ability to detect the orientation change and use it to alter the CSS on the fly, so as to present a slightly different page for landscape versus portrait. Of course we can just design a page that looks good one way and make it obvious that it's supposed to be viewed that way, but the cool-stuff factor of a page that looks good either way is pretty appealing. Is this idea implementable? Practical?

    Read the article

  • Help choosing a NoSQL database for a project

    - by potapuff
    There is a table: doc_id (integer) - value (integer), with approximately 100k doc_ids and 27?? rows. The majority of queries on this table search for documents similar to the current document: select the 10 documents with the maximum of (count of values in common with the current document) / (count of values in the document). We currently use PostgreSQL; the table weighs ~1.5 GB (with index) and the average query time is ~0.5 s. Should I move all this to a NoSQL database, and if so, which one?
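
    For context, the similarity query looks roughly like this in PostgreSQL (table and column names assumed from the description, with 123 standing in for the current document); the cost is the self-join over every candidate document on each search.

    -- doc_values(doc_id INT, value INT), one row per pair (assumed unique)
    SELECT    d.doc_id,
              COUNT(c.value)::float / COUNT(*) AS similarity
    FROM      doc_values d
    LEFT JOIN doc_values c
           ON c.value  = d.value
          AND c.doc_id = 123              -- the current document
    WHERE     d.doc_id <> 123
    GROUP BY  d.doc_id
    ORDER BY  similarity DESC
    LIMIT 10;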

    Read the article

  • How do you use Linq2Sql in your applications?

    - by this.__curious_geek
    I'm currently migrating to Linq2Sql, and all my future projects will be done in Linq2Sql. Having said that, I've researched a lot on how to properly plug Linq2Sql into application design: what goes in which layer? Should I use DTOs instead of Linq2Sql entities? I did not find any rock-solid material that settled on a single approach; everyone had their own opinion, and I found all of them justified by their arguments. I'm looking forward to your ideas on how to integrate/use Linq2Sql in projects. My priorities are maintainability (it should be maintainable even when multiple people work on the same project) and scalability (it should have room to evolve). Thanks.

    Read the article

  • How to split a single SVG image containing many fonts

    - by Nachiket
    Hello guys, first of all I am not a designer, so this question may be too easy or, even worse, not a proper question at all. Anyway, I have downloaded free fonts from GoSquared (link), which provides all the fonts in a single SVG file. Is there any standard way or tool to convert this single SVG file into an individual svg/png file per icon? (I have GIMP.) By the way, it also comes with an .ai file, but I don't have Adobe Illustrator (in fact, I don't have any Adobe design tools). Cheers,

    Read the article

  • Django foreign keys cascade deleting and "related_name" parameter (bug?)

    - by Wiseman
    In this topic I found a good way to prevent cascade deleting of related objects when it isn't necessary:

    class Factures(models.Model):
        idFacture = models.IntegerField(primary_key=True)
        idLettrage = models.ForeignKey('Lettrage', db_column='idLettrage', null=True, blank=True)

    class Paiements(models.Model):
        idPaiement = models.IntegerField(primary_key=True)
        idLettrage = models.ForeignKey('Lettrage', db_column='idLettrage', null=True, blank=True)

    class Lettrage(models.Model):
        idLettrage = models.IntegerField(primary_key=True)

        def delete(self):
            """Dettaches factures and paiements from current lettre before deleting"""
            self.factures_set.clear()
            self.paiements_set.clear()
            super(Lettrage, self).delete()

    But this method seems to fail when we are using a ForeignKey field with the "related_name" parameter. As it seems to me, the clear() method works fine and saves the instance of the de-associated object. But then, while deleting, Django uses another memorized copy of this very object, and since that copy is still associated with the object we are trying to delete -- whooooosh! ...bye-bye to relatives :) The database was architected before me, and in a somewhat odd way, so I can't escape these "related_name"s in a reasonable amount of time. Has anybody heard of a workaround for this kind of trouble?

    Read the article
