Search Results

Search found 11052 results on 443 pages for 'linked tables'.

Page 71/443

  • Graduate expectations versus reality

    - by Bobby Tables
    When choosing what we want to study and do with our careers and lives, we all have some expectations of what it is going to be like. Now that I've been in the industry for almost a decade, I've been reflecting a bit on what I thought programming working life was going to be like (back when I was studying Computer Science), and how it's actually turning out. My two biggest shocks (or should I say, broken expectations) by far are the sheer amount of maintenance work involved in software, and the overall lack of professionalism:

    Maintenance: At uni, we were all told that the majority of software work is maintenance of existing systems, so I knew to expect this in the abstract. But I never imagined exactly how overwhelming it would turn out to be. Perhaps it's something I mentally glossed over, hoping I'd be building cool new stuff from scratch a lot more. But it really is the case that most jobs are overwhelmingly maintenance-, bug-fixing-, and support-oriented.

    Lack of professionalism: At uni, I always had the impression that commercial software work is very process-oriented and stringently engineered. I had images of ISO processes, reams of technical documentation, every feature and bug strictly documented, and a generally professional environment. It came as a huge shock to realise that most software companies operate no differently from a team of students working on a large semester-long project. And I've worked in both the small agile hack shop and the medium-sized corporate enterprise. While I wouldn't say it's always been outright "unprofessional", it definitely feels like the software industry (on the whole) is far from the strong engineering discipline I expected it to be.

    Has anyone else had similar experiences? In what ways did your expectations of what our profession would be like differ from the reality?

    Read the article

  • Is there such a thing as "closure" with software work?

    - by Bobby Tables
    I burned out last year (after a decade of full-time programming jobs) and am on a sabbatical now. With all the self-examination, I've started to figure out some of the root causes of my burnout, and one of the major ones is basically this: there was never any real closure in any of the work I've ever done. It was always a case of getting into an open-ended support/maintenance grind and going stale.

    When I first entered the industry, I had this image of programming work being very project-based, and I expected projects to have a beginning, a middle, and an END - after which you move on and start on something totally new and fresh. Basically, I never expected that most software work involves supporting and maintaining the same code base for open-ended long periods of time - years and even decades. That, combined with generally having itchy feet, makes me think that burnout is inevitable for me, after 2-3 years, in ANY full-time software job.

    All this sounds like I probably should have been a contractor instead of a full-timer. But when I discuss this with people, a lot of them say that even THEN you can't really escape having to go back and maintain/support the stuff you worked on, over and over (coming back on support contracts, for example). The nature of software work is simply like that; there is no project closure, unlike in many other engineering fields.

    So my question is: is there ANY programming work out there which is based on short- to mid-term projects/stints and then moving on cleanly? And is there any particular industry domain or specialization where this kind of project work is typical?

    Read the article

  • GWT: Generate more complete crawl error report

    - by Mike
    I'm a developer in charge of managing Webmaster Tools and related issues (including correcting crawl errors) for dozens (hundreds, maybe?) of active sites, and as part of my duties I create a report of every discrepancy, including all pages generating a 404 and all pages that link to those pages. Currently within Webmaster Tools I'm able to download a CSV file of all pages with a 404 response, but I then have to manually click on every single one of those links and copy the "linked from" field to paste into my spreadsheet. This is extremely tedious and seems unnecessary; I would expect the ability to download all that data at once. I'm ultimately looking for the end result of one CSV file that has every URL with a 404, but also has every URL that links to each one of them. Am I overlooking this functionality somewhere, or does anyone have a good solution?

    Edit 1 (2/11/2013): Example of what the CSV output looks like now:

        URL,Response Code,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,,11/12/13,Not found

    Which is great, but let's say 123.php has 5 pages that link to it. Now I have to duplicate that row in my spreadsheet 4 more times, then go into Webmaster Tools, get all the URLs that link to the page, and add that data to my spreadsheet. The output I would prefer:

        URL,Response Code,Linked From,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found

    Note the (hypothetical) addition of a "Linked From" column, as well as the fact that there are still only 2 unique URLs (like before), but all of the "linked from" pages are shown in one report.

    Edit 2 (2/12/2013): To clarify, my question is less about detecting and correcting 404s and more about generating a report of what Google has listed as errors. Oftentimes, these errors aren't even valid anymore, but I still need documentation to show that Google detected a problem and that the problem is now fixed. Many of the "linked from" URLs I find are actually outdated, cached resources. For example, I'll frequently see that the linked-from URL is the sitemap, which is actually an old sitemap cached by Google that points to an old page. Neither the sitemap nor the old page exists, but they still appear in my crawl error reports because they are cached resources.

    Read the article

  • Google Analytics - TOS section pertaining to privacy

    - by Eike Pierstorff
    The Google Analytics terms of service do not allow tracking "data that personally identifies an individual (such as a name, email address or billing information), or other data which can be reasonably linked to such information by Google". Does anybody have first-hand knowledge of whether this includes user IDs which cannot be resolved by Google but can be linked to actual persons via a CRM system (e.g. a CRM linked to Analytics via API access)? I used to think so, but if that were the case, many e-commerce implementations would be illegal (since they store transaction IDs which can be linked to clients' purchases). If anybody has insights about the intended meaning of the paragraph (preferably with a reliable source), it would be great if he/she could share :-)

    Read the article

  • Can't print, CUPS package corrupted and hangs on re-install

    - by Little Bobby Tables
    When I upgraded to Ubuntu 10.04 (Lucid), the upgrade process got stuck on the post-installation of the CUPS package. I had to kill processes and run several forced updates before I could finally get regular updates. Ever since, I can't print - the printed file gets messed up and crashes the printer. I also can't re-install CUPS, as each time the installation hangs and I have to kill it before it completes. I have tried to find a workaround for this problem, but in vain. Does anyone know how to bypass this? Or at least why the post-installation can hang, and how to re-install a problematic package?

    Some system specs and other hints: Dell D630 laptop running Ubuntu 10.04, GNOME desktop, standard LAN network, printing to an LPD server. Everything worked fine on 9.10. Also, the printed files themselves are not corrupted, and the problem does not seem to be Evince-specific, but common to all printouts.

    Read the article

  • Looking for an old classic book about Unix command-line tools

    - by Little Bobby Tables
    I am looking for a book about the Unix command-line toolkit (sh, grep, sed, awk, cut, etc.) that I read some time ago. It was an excellent book, but I have totally forgotten its name. The great thing about this specific book was its running example: it showed how to implement a university bookkeeping system using only text-processing tools. You would find a student by name with grep, update grades with sed, calculate average grades with awk, attach grades to IDs with cut, and so on. If my memory serves, this book had a black cover and was published circa 1980. Does anyone remember this book? I would appreciate any help in finding it.

    Read the article

  • Web Crawler for Learning Topics on Wikipedia

    - by Chris Okyen
    When I want to learn a vast topic on Wikipedia, I don't know where to start. For instance, say I want to learn about binary stars: I then have to know the other things linked on that page, and the things linked on all the linked pages, and so on, for a specified number of levels. I want to write a web crawler like HTTrack or something similar that will display a hierarchy of the links on a certain page and the links on those linked pages. I wish to use as much prewritten code as possible.

    Here is an example, pretending we are bending the rules by grabbing links from only the first sentence of each page. The example archives and "processes" two levels deep. The page is Ternary operation.

    The first level: "In mathematics a ternary operation is an N-ary operation."

    The second level: Under Mathematics: "Mathematics (from Greek μάθημα (máthēma), 'knowledge, study, learning') is the abstract study of topics encompassing quantity, structure, space, change and others; it has no generally accepted definition." Under N-ary: "In logic, mathematics, and computer science, the arity (/ˈærɪti/) of a function or operation is the number of arguments or operands that the function takes." Under Operation: "In its simplest meaning in mathematics and logic, an operation is an action or procedure which produces a new value from one or more input values."

    I need some way to determine what order to approach all these wiki pages in, to learn the concept (in this case, ternary operations) - essentially a topological sort of the link graph. Following along with this example, one way to show the reading order: the first sentence of the Mathematics page doesn't link to the first sentence of any of the pages linked from the ternary page two levels deep. (Please tell me how I should explain this.) In other words, Mathematics, a child node of the top page's first sentence, does not have any child nodes that reference the children of the top page's other child nodes - N-ary and Operation. Thus it is safe to read it first. Since N-ary has a link to Operation, we should read the Operation page second and finally read the N-ary page last.

    Again, I wish to use as much prewritten code as possible, and I was wondering what language to use and what would be the simplest way to go about doing this, if there isn't already something out there. Thank you!

    Read the article

  • TechEd 2014 Day 3

    - by John Paul Cook
    There is some confusion about durability of data stored in SQL Server in-memory tables, so some review of the concepts is appropriate. The in-memory option is enabled at the database level. Enabling it at the database level only gives you the option to specify the in-memory feature on a table by table basis. No existing tables or new tables will by default become in-memory tables when you enable the feature at the database level. If you choose to make a table an in-memory table, by default it is...(read more)
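
    For reference, the table-level opt-in and the durability default the excerpt is leading up to can be sketched in T-SQL (database, filegroup, and table names here are hypothetical; the syntax is SQL Server 2014 In-Memory OLTP):

        -- The feature is enabled per database by adding a memory-optimized filegroup
        ALTER DATABASE MyDb ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
        ALTER DATABASE MyDb ADD FILE (NAME = 'imoltp_container', FILENAME = 'C:\Data\imoltp_container')
            TO FILEGROUP imoltp_fg;

        -- Existing tables are unaffected; each table opts in individually.
        -- If DURABILITY is omitted, it defaults to SCHEMA_AND_DATA (fully durable).
        CREATE TABLE dbo.Session (
            SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            UserName  NVARCHAR(100) NOT NULL
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);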

    Read the article

  • What exactly are Link Relation Values?

    - by bckpwrld
    From REST in Practice: Hypermedia and Systems Architecture:

    "For computer-to-computer interactions, we advertise protocol information by embedding links in representations, much as we do with the human Web. To describe a link's purpose, we annotate it. Annotations indicate what the linked resource means to the current resource: 'status of your coffee order', 'payment', and so on. We call such annotated links hypermedia controls, reflecting their enhanced capabilities over raw URIs. ... link relation values, which describe the roles of linked resources ... Link relation values help consumers understand why they might want to activate a hypermedia control. They do so by indicating the role of the linked resource in the context of the current representation."

    I interpret the above quotes as saying that a hypermedia control contains both a link to a resource and an annotation describing the role of the linked resource in the context of the current representation, and that we call this annotation (which describes the role of the linked resource) a link relation value. Is my assumption correct, or does the term link relation value actually describe something different? Thank you.

    Read the article

  • Is it possible to join tables in Doctrine ORM without using relations?

    - by piemesons
    Suppose there are two tables.

        Table X - columns: id, x_value
        Table Y - columns: id, x_id, y_value

    Now I don't want to define a relationship in the Doctrine classes, and I want to retrieve some records using these two tables with a query like this:

        SELECT x_value FROM x, y WHERE y.id = "variable_z" AND x.id = y.x_id;

    I'm not able to figure out how to write a query like this in Doctrine ORM.

    EDIT: Table structures:

        CREATE TABLE IF NOT EXISTS `image` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `random_name` varchar(255) NOT NULL,
          `user_id` int(11) NOT NULL,
          `community_id` int(11) NOT NULL,
          `published` varchar(1) NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=259;

        CREATE TABLE IF NOT EXISTS `users` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `city` varchar(20) DEFAULT NULL,
          `state` varchar(20) DEFAULT NULL,
          `school` varchar(50) DEFAULT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=20;

    Query I am using:

        $q = new Doctrine_RawSql();
        $q->select('{u.*}, {img.*}')
          ->from('users u LEFT JOIN image img ON u.id = img.user_id')
          ->addComponent('u', 'Users u')
          ->addComponent('img', 'u.Image img')
          ->where("img.community_id='$community_id' AND img.published='y' AND u.state='$state' AND u.city='$city'")
          ->orderBy('img.id DESC')
          ->limit($count + 12)
          ->execute();

    Error I am getting:

        Fatal error: Uncaught exception 'Doctrine_Exception' with message 'Couldn't find class u' in C:\xampp\htdocs\fanyer\doctrine\lib\Doctrine\Table.php:290
        Stack trace:
        #0 C:\xampp\htdocs\fanyer\doctrine\lib\Doctrine\Table.php(240): Doctrine_Table->initDefinition()
        #1 C:\xampp\htdocs\fanyer\doctrine\lib\Doctrine\Connection.php(1127): Doctrine_Table->__construct('u', Object(Doctrine_Connection_Mysql), true)
        #2 C:\xampp\htdocs\fanyer\doctrine\lib\Doctrine\RawSql.php(425): Doctrine_Connection->getTable('u')
        #3 C:\xampp\htdocs\fanyer\doctrine\models\Image.php(33): Doctrine_RawSql->addComponent('img', 'u.Image imga')
        #4 C:\xampp\htdocs\fanyer\community_images.php(31): Image->get_community_images_gallery_filter(4, 0, 'AL', 'ALBERTVILLE')
        #5 {main}
        thrown in C:\xampp\htdocs\fanyer\doctrine\lib\Doctrine\Table.php on line 290

    Read the article

  • .NET - Is there a way to programmatically fill all tables in a strongly-typed dataset?

    - by Mike Loux
    Hello, all! I have a SQL Server database for which I have created a strongly-typed DataSet (using the DataSet Designer in Visual Studio 2008), so all the adapters and select commands and whatnot were created for me by the wizard. It's a small database with largely static data, so I would like to pull the contents of this DB in its entirety into my application at startup, and then grab individual pieces of data as needed using LINQ. Rather than hard-code each adapter Fill call, I would like to see if there is a way to automate this (possibly via Reflection). So, instead of:

        Dim _ds As New dsTest
        dsTestTableAdapters.Table1TableAdapter.Fill(_ds.Table1)
        dsTestTableAdapters.Table2TableAdapter.Fill(_ds.Table2)
        ' <etc etc etc>

    I would prefer to do something like:

        Dim _ds As New dsTest
        For Each tableName As String In _ds.Tables
            Dim adapter As Object = <routine to grab adapter associated with the table>
            adapter.Fill(tableName)
        Next

    Is that even remotely doable? I have done a fair amount of searching, and I wouldn't think this would be an uncommon request, but I must be either asking the wrong question, or I'm just weird to want to do this. I will admit that I usually prefer to use unbound controls and not go with strongly-typed datasets (I prefer to write SQL directly), but my company wants to go this route, so I'm researching it. I think the idea is that as tables are added, we can just refresh the DataSet using the Designer in Visual Studio and not have to make too many underlying DB code changes. Any help at all would be most appreciated. Thanks in advance!

    Read the article

  • SQL Server full text query across multiple tables - why so slow?

    - by Mikey Cee
    Hi. I'm trying to understand the performance of a SQL Server 2008 full-text query I am constructing. The following query, using a full-text index, returns the correct results immediately:

        SELECT O.ID, O.Name
        FROM dbo.EventOccurrence O
        WHERE FREETEXT(O.Name, 'query')

    i.e., all EventOccurrences with the word 'query' in their name. And the following query, using a full-text index from a different table, also returns straight away:

        SELECT V.ID, V.Name
        FROM dbo.Venue V
        WHERE FREETEXT(V.Name, 'query')

    i.e., all Venues with the word 'query' in their name. But if I try to join the tables and do both full-text queries at once, it takes 12 seconds to return:

        SELECT O.ID, O.Name
        FROM dbo.EventOccurrence O
        INNER JOIN dbo.Event E ON O.EventID = E.ID
        INNER JOIN dbo.Venue V ON E.VenueID = V.ID
        WHERE FREETEXT(E.Name, 'search') OR FREETEXT(V.Name, 'search')

    Here is the execution plan: http://uploadpad.com/files/query.PNG

    From my reading, I didn't think it was even possible to make a free-text query across multiple tables in this way, so I'm not sure I am understanding this correctly. Note that if I remove the WHERE clause from this last query then it returns all results within a second, so it's definitely the full-text that is causing the issue here. Can someone explain (i) why this is so slow and (ii) if this is even supported / if I am even understanding this correctly. Thanks in advance for your help.
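
    A common workaround (a sketch only, not verified against this exact schema) is to give each full-text predicate its own indexed search via FREETEXTTABLE and combine the two result sets, so that neither predicate has to be evaluated across the join:

        SELECT O.ID, O.Name
        FROM dbo.EventOccurrence O
        INNER JOIN dbo.Event E ON O.EventID = E.ID
        WHERE E.ID IN (SELECT [KEY] FROM FREETEXTTABLE(dbo.Event, Name, 'search'))
        UNION
        SELECT O.ID, O.Name
        FROM dbo.EventOccurrence O
        INNER JOIN dbo.Event E ON O.EventID = E.ID
        INNER JOIN dbo.Venue V ON E.VenueID = V.ID
        WHERE V.ID IN (SELECT [KEY] FROM FREETEXTTABLE(dbo.Venue, Name, 'search'))

    Each FREETEXTTABLE call can drive its own full-text index directly, which is typically much faster than an OR across two FREETEXT predicates on joined tables.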

    Read the article

  • sql statement question. Need to query 3 tables in one go!

    - by Stefan
    Hey there, I have an SQL database. In this database are 3 tables I need to query. The first table, called item, has all the item info; the other two tables hold the comments, called userComment, and the votes, called userItem. I currently have a function which uses this SQL query to get the latest most popular items (in terms of both votes and comments):

        $sql = "SELECT itemID, COUNT(*) AS cnt
                FROM (
                    SELECT `itemID` FROM `userItem`
                    WHERE FROM_UNIXTIME(`time`) >= NOW() - INTERVAL 1 DAY
                    UNION ALL
                    SELECT `itemID` FROM `userComment`
                    WHERE FROM_UNIXTIME(`time`) >= NOW() - INTERVAL 1 DAY AND `itemID` > 0
                ) q
                GROUP BY `itemID`
                ORDER BY cnt DESC";

    I know how to change this for either votes alone or comments. HOWEVER - I need to query the database to only return the itemIDs of the ones which match specific conditions in the item table, namely:

        WHERE categoryID = 'xx' AND typeID = 'xx'

    If the SQL ninja could please help me on this one? Do I have to first return the results from the above query and then, for each row in the fetched array, check it against the item table to see if it fits the conditions and build a new array - or is that overkill? Thanks, Stefan
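
    One way to fold those conditions into the same statement (a sketch assuming the item table's key column is itemID; adjust names to the real schema) is to join the aggregated subquery against item and filter there:

        SELECT q.itemID, COUNT(*) AS cnt
        FROM (
            SELECT `itemID` FROM `userItem`
            WHERE FROM_UNIXTIME(`time`) >= NOW() - INTERVAL 1 DAY
            UNION ALL
            SELECT `itemID` FROM `userComment`
            WHERE FROM_UNIXTIME(`time`) >= NOW() - INTERVAL 1 DAY AND `itemID` > 0
        ) q
        INNER JOIN `item` i ON i.itemID = q.itemID  -- key column name is an assumption
        WHERE i.categoryID = 'xx' AND i.typeID = 'xx'
        GROUP BY q.itemID
        ORDER BY cnt DESC;

    That avoids the fetch-then-filter round trip described above: the database discards non-matching items before grouping.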

    Read the article

  • How to structure the tables of a very simple blog in MySQL?

    - by Programmer
    I want to add a very simple blog feature to one of my existing LAMP sites. It would be tied to a user's existing profile, and they would be able to simply input a title and a body for each post in their blog, with the date set automatically upon submission. They would be allowed to edit and delete any blog post and title at any time. The blog would be displayed from most recent to oldest, perhaps 20 posts to a page, with proper pagination above that. Other users would be able to leave comments on each post, which the blog owner would be allowed to delete, but not pre-moderate. That's basically it. Like I said, very simple.

    How should I structure the MySQL tables for this? I'm assuming that since there will be blog posts and comments, I would need a separate table for each - is that correct? But then what columns would I need in each table, what data types should I use, and how should I link the two tables together (e.g. any foreign keys)? I could not find any tutorials for something like this, and what I'm looking to do is really offer my users the simplest version of a blog possible. No tags, no moderation, no images, no fancy formatting, etc. Just a simple diary-type, pure-text blog with commenting by other users.
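
    A minimal two-table sketch consistent with the description (names and types are illustrative; users(id) stands in for the existing profile table):

        CREATE TABLE blog_posts (
            post_id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            user_id    INT UNSIGNED NOT NULL,            -- the blog owner
            title      VARCHAR(255) NOT NULL,
            body       TEXT NOT NULL,
            created_at DATETIME NOT NULL,                -- set automatically on submission
            FOREIGN KEY (user_id) REFERENCES users (id)
        ) ENGINE=InnoDB;

        CREATE TABLE blog_comments (
            comment_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            post_id    INT UNSIGNED NOT NULL,
            user_id    INT UNSIGNED NOT NULL,            -- the commenter
            body       TEXT NOT NULL,
            created_at DATETIME NOT NULL,
            FOREIGN KEY (post_id) REFERENCES blog_posts (post_id),
            FOREIGN KEY (user_id) REFERENCES users (id)
        ) ENGINE=InnoDB;

    Pagination then falls out of ORDER BY created_at DESC with LIMIT/OFFSET, and deleting a post's comments is a single DELETE keyed on post_id.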

    Read the article

  • Can I make an identity field span multiple tables in SQL Server?

    - by johnnycakes
    Can I have an "identity" (unique, non-repeating) column span multiple tables? For example, let's say I have two tables: Books and Authors.

        Authors: AuthorID, AuthorName
        Books: BookID, BookTitle

    The BookID column and the AuthorID column are identity columns. I want the identity part to span both columns, so if there is an AuthorID with a value of 123, then there cannot be a BookID with a value of 123, and vice versa. I hope that makes sense. Is this possible? Thanks.

    Why do I want to do this? I am writing an ASP.NET MVC app and I am creating a comment section. Authors can have comments. Books can have comments. I want to be able to pass an entity ID (a book ID or an author ID) to an action and have the action pull up all the corresponding comments. The action won't care if it's a book or an author or whatever. Sound reasonable?
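
    SQL Server has no cross-table identity, but one common workaround (a sketch with hypothetical names; it works on SQL Server 2005+, which predates SEQUENCE objects) is to allocate all IDs from a single key table and use plain non-identity primary keys in Books and Authors:

        CREATE TABLE dbo.EntityKey (
            EntityID   INT IDENTITY(1,1) PRIMARY KEY,
            EntityType CHAR(1) NOT NULL  -- 'A' = author, 'B' = book
        );

        -- Reserve an ID, then use it as the primary key of the real row
        DECLARE @id INT;
        INSERT INTO dbo.EntityKey (EntityType) VALUES ('A');
        SET @id = SCOPE_IDENTITY();
        INSERT INTO dbo.Authors (AuthorID, AuthorName) VALUES (@id, 'Jane Doe');

    A side benefit for the comments scenario: the Comments table can hold a single foreign key to dbo.EntityKey.EntityID and not care whether the entity is a book or an author.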

    Read the article

  • DB Architecture : Linking to intersection or to main tables?

    - by Jean-Nicolas
    Hi, I'm creating a fantasy football system on my website, but I'm very confused about how I should link some of my tables.

    Tables: The main table is Pool, which has all the info about the rules of the fantasy draft. A standard table User contains the usual stuff. An intersection table called pools_users contains id, pool_id, user_id, because a user can be in more than one pool, and a pool contains more than one user.

    The problem: the table Selections is the one causing trouble. It holds the selections that the user makes for his pool. (It is also related to the Player table, but that's not relevant to this problem.) Should I link this table to the intersection table pools_users, or should I link it to both main tables Pool and User? The table contains id, pool_id, user_id, player_id, ...

    What is the best way to link my tables? When I retrieve the data, I normally want it divided BY user: "this user has those selections, this one those selections, etc."
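
    A sketch of the intersection-table option (column names follow the question; the players table is assumed). Pointing Selections at pools_users means one foreign key instead of a duplicated pool_id/user_id pair, and the pool/user combination stays valid by construction:

        CREATE TABLE selections (
            id           INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            pool_user_id INT UNSIGNED NOT NULL,   -- references the pools_users row
            player_id    INT UNSIGNED NOT NULL,
            FOREIGN KEY (pool_user_id) REFERENCES pools_users (id),
            FOREIGN KEY (player_id)    REFERENCES players (id)
        ) ENGINE=InnoDB;

        -- Selections for one user, divided by pool:
        SELECT pu.pool_id, s.player_id
        FROM selections s
        INNER JOIN pools_users pu ON pu.id = s.pool_user_id
        WHERE pu.user_id = 42
        ORDER BY pu.pool_id;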

    Read the article

  • Sql Server 2005 Database Tables - Row Comparison Column By Column.

    - by Goober
    Scenario: I have TWO database tables of exactly the SAME STRUCTURE. The difference between these tables is that one is populated by one application and the other by a different application. Each application is trying to produce the same result, but using two different methods of implementation.

    Proposed idea: I want to run both applications, which will each produce roughly 35,000 rows containing 10 columns - so all in all, 70,000 rows of data. I then want to compare each row of data, COLUMN BY COLUMN, to check whether the values are the same or not.

    Current thoughts: Since there is so much data to compare, I feel that the best way to do this would be to write an application, preferably in C# (but if necessary, T-SQL), to compare each row of data column by column and write any failed comparisons to a text log file.

    Question: Could anybody suggest an efficient way to perform column-by-column row comparison for 70,000 rows' worth of data? I'm struggling for ideas on how to tackle this problem.

    Extra detail: The two applications are both written in C# .NET 3.5. The database is running on SQL Server 2005. Help greatly appreciated.
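
    Since both tables live in SQL Server 2005, the set operator EXCEPT can shoulder most of the comparison before any C# is involved; a sketch with hypothetical table names, assuming identical column lists in both tables:

        -- Rows produced by application A that application B did not produce
        SELECT * FROM dbo.ResultsA
        EXCEPT
        SELECT * FROM dbo.ResultsB;

        -- ...and the reverse direction
        SELECT * FROM dbo.ResultsB
        EXCEPT
        SELECT * FROM dbo.ResultsA;

    Only the rows these two queries return need the column-by-column inspection and logging, which shrinks the C# side of the job from 70,000 rows to just the mismatches.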

    Read the article

  • How to join multiple tables using LINQ-to-SQL?

    - by user603245
    Hi! I'm quite new to LINQ, so please bear with me. I'm working on an ASP.NET web page and I want to add a "search function" (a textbox where the user inputs a name or surname or both or just parts of them, and gets back all related information). I have two tables ("Person" and "Application") and I want to display some columns from Person (name and surname) and some from Application (score, position, ...). I know how I could do it using SQL, but I want to learn more about LINQ and thus want to do it using LINQ. For now I have two main ideas:

    1.)

        var person = dataContext.GetTable<Person>();
        var application = dataContext.GetTable<Application>();

        var p1 = from p in person
                 where p.Name.Contains(tokens[0]) || p.Surname.Contains(tokens[1])
                 select new { Id = p.Id, Name = p.Name, Surname = p.Surname }; // or maybe without this line

        // I don't know how to do the following properly
        var result = from a in application
                     where a.FK_Application.Equals(index) // just to get the "right" type of application
                     // this is not right, but I don't know how to do it better
                     join p1 on p1.Id == a.FK_Person

    2.) The other idea is to go through "Application" and instead of "join p1 ..." use:

        var result = from a in application
                     where a.FK_Application.Equals(index) // just to get the "right" type of application
                     join p in person on a.FK_Person equals p.Id
                     where p.Name.Contains(tokens[0]) || p.Surname.Contains(tokens[1])

    I think the first idea is better for queries without the first "where" condition, which I also intended to use. Regardless of what is better (faster), I still don't know how to do it properly using LINQ. Also, in the end I want to display / select just some parts (columns) of the result (joined tables + filtering conditions). I really want to know how to do such things using LINQ, as I'll also be dealing with similar problems on local data, where I can use only LINQ. Could somebody please explain to me how to do it? I spent days trying to figure it out and searching the internet for answers. Thank you for your time.

    Read the article

  • How to remove empty tables from a MySQL backup file.

    - by user280708
    I have multiple large MySQL backup files, all from different DBs and having different schemas. I want to load the backups into our EDW, but I don't want to load the empty tables. Right now I'm cutting out the empty tables using AWK on the backup files, but I'm wondering if there's a better way to do this.

    If anyone is interested, this is my AWK script. (EDIT: I noticed today that this script has some problems, so please beware if you want to actually use it. Your output may be WRONG... I will post my changes as I make them.)

        # File: remove_empty_tables.awk
        # Copyright (c) Northwestern University, 2010
        # http://edw.northwestern.edu

        /^--$/ {
            i = 0; line[++i] = $0; getline
            if ($0 ~ /-- Definition/) {
                inserts = 0
                while ($0 !~ / ALTER TABLE .* ENABLE KEYS /) {
                    # If we already have an insert:
                    if (inserts > 0)
                        print
                    else {
                        # If we found an INSERT statement, the table is NOT empty:
                        if ($0 ~ /^INSERT /) {
                            ++inserts
                            # Dump the lines before the INSERT and then the INSERT:
                            for (j = 1; j <= i; ++j) print line[j]
                            i = 0
                            print $0
                        }
                        # Otherwise we may yet find an insert, so save the line:
                        else
                            line[++i] = $0
                    }
                    getline # go to the next line
                }
                line[++i] = $0; getline
                line[++i] = $0; getline
                if (inserts > 0) {
                    for (j = 1; j <= i; ++j) print line[j]
                    print $0
                }
                next
            } else {
                print "--"
            }
        }

        { print }
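
    If the dumps can be regenerated at the source, one alternative (a sketch; your_db is a placeholder) is to ask MySQL which tables are non-empty and dump only those, instead of patching the backup file afterwards:

        -- List the tables that actually contain rows
        SELECT table_name
        FROM information_schema.TABLES
        WHERE table_schema = 'your_db'
          AND table_rows > 0;  -- caveat: table_rows is only an estimate for InnoDB

    The resulting list can then be passed to mysqldump as explicit table names (mysqldump your_db t1 t2 ...), so the empty tables never enter the file in the first place.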

    Read the article

  • Looking for MSSQL Table Design Sanity Check for Profile Tables with Dynamic Columns.

    - by Code Sherpa
    I just want a general sanity check regarding database design. We are building a web system that has both Teachers and Students. Both have accounts in the system; both have profiles. My question is about the table design of those profile tables.

    The Teacher profile is pretty static regarding the metadata associated with it: each teacher has a set number of fields exposing information about that individual (schools, degrees, etc.). The students, however, are a different case. We are using a Windows service to pull varying data about the students from an endless stream of Excel spreadsheets. The data gets moved into our database and then the fields appear in association with the student's profile. Accordingly, each and every student may have very different fields in their profile. I originally started with the concept of three tables:

        Accounts: AccountID
        TeacherProfiles: TeacherProfileID, AccountID, SecondarySchool, University, YearsTeaching, Etc...
        StudentProfiles: StudentProfileID, AccountID, Header, Value

    The StudentProfiles table would hold the names of the column headers from the Excel spreadsheets and the associated values. I have since evolved the design a little to treat profiles more generically, per the attached ERD image: the Teacher and Student "headers" are stored in a table called ProfileAttributeTypes, and responses (either from the Excel document or via input fields on the web form) are put in a ProfileAttributes table. This way both Student and Teacher profiles can be associated with a dynamic flow of profile fields. The Permissions table tells us whether we are dealing with a Student or a Teacher.

    Since this system is likely to grow quickly, I want to make sure the foundation is solid. Can you please provide feedback about this design and let me know if it seems sound, or if you could see problems it might create and, if so, what might be a better approach? Thanks in advance.
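
    For reference, a minimal T-SQL rendering of the generic design described above (names follow the question; data types are assumptions):

        CREATE TABLE dbo.ProfileAttributeTypes (
            AttributeTypeID INT IDENTITY(1,1) PRIMARY KEY,
            Name            NVARCHAR(100) NOT NULL  -- e.g. 'University', or a spreadsheet header
        );

        CREATE TABLE dbo.ProfileAttributes (
            ProfileAttributeID INT IDENTITY(1,1) PRIMARY KEY,
            AccountID          INT NOT NULL REFERENCES dbo.Accounts (AccountID),
            AttributeTypeID    INT NOT NULL REFERENCES dbo.ProfileAttributeTypes (AttributeTypeID),
            Value              NVARCHAR(MAX) NULL
        );

    This is the classic entity-attribute-value (EAV) shape: flexible enough for the student spreadsheets, with the usual trade-off that per-attribute typing and constraints move into application code.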

    Read the article

  • [Rails3] How to do multiple many-to-many relationships between the same two tables.

    - by Kurt
    Hi. I have a model of a club where I want to model the two entities Meeting and Member. There are actually two many-to-many relationships between these entities, though, as for any meeting a Member can be either a Speaker or a Guest. Now I am an OO thinker, so I would normally just create the two classes and each one would have two arrays of the other inside, but Rails is making me think a bit more data-centric here, so I realise I need to break these M2M relationships up with join tables Speakers and Guests, which I have done. But now I am having trouble describing the relationships in the models. The two join table models both have "belongs_to :meeting" and "belongs_to :member", and I think that should be sufficient. I am not so sure about the Meeting and Member models, though. Each one has "has_many :guests" and "has_many :speakers", but I am not sure if I also want to go:

        has_many :members, :through => :guests
        has_many :members, :through => :speakers

    But I suspect that this is like declaring two "members" that will clash. I also thought about:

        has_many :guests, :through => :guests
        has_many :speakers, :through => :speakers

    Does that make sense? How would ActiveRecord know that they are in fact Members? I have found heaps of examples of polymorphic M2M relationships and M2M relationships where one table references itself, but no good examples to help me model this situation, where two separate tables have two different M2M relationships. Anyone got any tips?

    Read the article

  • Creating a join based on data from other tables...

    - by Workshop Alex
    I'm dealing with a database structure that can be defined as "illogical". It has about 100 different schemas, all with different table structures per schema. The only common factor is a "Version" table in each schema containing about 4 fields (thus, there are about 100 Version tables in the database). There's also another table (a view, actually) containing a list of all the schemas in the database that have a Version table.

    I need a stored procedure that walks through all the schemas and selects all data from each Version table, adding the schema name as a fifth field to the result. Basically, this stored procedure is to return a list of all version records per schema. My idea: first walk through the schema list to build one new SQL statement that JOINs (or rather UNIONs) all the schema.Version tables into one statement, then return the result of that query. How do I do this? Or does anyone have a better suggestion? (No, redesigning the structure is NOT an option.)
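
    A dynamic-SQL sketch of that idea for SQL Server (SchemaList and its SchemaName column are placeholders for whatever the real view exposes):

        DECLARE @sql NVARCHAR(MAX);
        SET @sql = N'';

        SELECT @sql = @sql
            + CASE WHEN @sql = N'' THEN N'' ELSE N' UNION ALL ' END
            + N'SELECT v.*, ''' + REPLACE(SchemaName, '''', '''''') + N''' AS SchemaName'
            + N' FROM ' + QUOTENAME(SchemaName) + N'.Version v'
        FROM dbo.SchemaList;  -- placeholder for the view listing the schemas

        EXEC sp_executesql @sql;

    This only works as written because every Version table exposes the same four fields; wrapping it in a stored procedure then returns all version records per schema in one result set.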

    Read the article

  • Bridge or Factory and How

    - by Chris
    I'm trying to learn patterns, and I've got a job that is screaming for a pattern - I just know it, but I can't figure it out. I know the filter type is something that can be abstracted and possibly bridged. I'M NOT LOOKING FOR A CODE REWRITE, JUST SUGGESTIONS. I'm not looking for someone to do my job; I would like to know how patterns could be applied to this example.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Data;
        using System.IO;
        using System.Xml;
        using System.Text.RegularExpressions;

        namespace CopyTool
        {
            class CopyJob
            {
                public enum FilterType { TextFilter, RegExFilter, NoFilter }

                public FilterType JobFilterType { get; set; }

                private string _jobName;
                public string JobName { get { return _jobName; } set { _jobName = value; } }

                private int currentIndex;
                public int CurrentIndex { get { return currentIndex; } }

                private DataSet ds;
                public int MaxJobs { get { return ds.Tables["Job"].Rows.Count; } }

                private string _filter;
                public string Filter { get { return _filter; } set { _filter = value; } }

                private string _fromFolder;
                public string FromFolder
                {
                    get { return _fromFolder; }
                    set
                    {
                        if (Directory.Exists(value)) { _fromFolder = value; }
                        else { throw new DirectoryNotFoundException(String.Format("Folder not found: {0}", value)); }
                    }
                }

                private List<string> _toFolders;
                public List<string> ToFolders { get { return _toFolders; } }

                public CopyJob()
                {
                    Initialize();
                }

                private void Initialize()
                {
                    if (ds == null) { ds = new DataSet(); }
                    ds.ReadXml(Properties.Settings.Default.ConfigLocation);
                    LoadValues(0);
                }

                public void Execute()
                {
                    ExecuteJob(FromFolder, _toFolders, Filter, JobFilterType);
                }

                public void ExecuteAll()
                {
                    string OrigPath;
                    List<string> DestPaths;
                    string FilterText;
                    FilterType FilterWay;
                    foreach (DataRow rw in ds.Tables["Job"].Rows)
                    {
                        OrigPath = rw["FromFolder"].ToString();
                        FilterText = rw["FilterText"].ToString();
                        switch (rw["FilterType"].ToString())
                        {
                            case "TextFilter": FilterWay = FilterType.TextFilter; break;
                            case "RegExFilter": FilterWay = FilterType.RegExFilter; break;
                            default: FilterWay = FilterType.NoFilter; break;
                        }
                        DestPaths = new List<string>();
                        foreach (DataRow crw in rw.GetChildRows("Job_ToFolder"))
                        {
                            DestPaths.Add(crw["FolderPath"].ToString());
                        }
                        ExecuteJob(OrigPath, DestPaths, FilterText, FilterWay);
                    }
                }

                private void ExecuteJob(string OrigPath, List<string> DestPaths, string FilterText, FilterType FilterWay)
                {
                    FileInfo[] files;
                    switch (FilterWay)
                    {
                        case FilterType.RegExFilter: files = GetFilesByRegEx(new Regex(FilterText), OrigPath); break;
                        case FilterType.TextFilter: files = GetFilesByFilter(FilterText, OrigPath); break;
                        default: files = new DirectoryInfo(OrigPath).GetFiles(); break;
                    }
                    foreach (string fld in DestPaths)
                    {
                        CopyFiles(files, fld);
                    }
                }

                public void MoveToJob(int RecordNumber)
                {
                    Save();
                    LoadValues(RecordNumber - 1);
                }

                public void AddToFolder(string folderPath)
                {
                    if (Directory.Exists(folderPath)) { _toFolders.Add(folderPath); }
                    else { throw new DirectoryNotFoundException(String.Format("Folder not found: {0}", folderPath)); }
                }

                public void DeleteToFolder(int index)
                {
                    _toFolders.RemoveAt(index);
                }

                public void Save()
                {
                    DataRow rw = ds.Tables["Job"].Rows[currentIndex];
                    rw["JobName"] = _jobName;
                    rw["FromFolder"] = _fromFolder;
                    rw["FilterText"] = _filter;
                    switch (JobFilterType)
                    {
                        case FilterType.RegExFilter: rw["FilterType"] = "RegExFilter"; break;
                        case FilterType.TextFilter: rw["FilterType"] = "TextFilter"; break;
                        default: rw["FilterType"] = "NoFilter"; break;
                    }
                    DataRow[] ToFolderRows = ds.Tables["Job"].Rows[currentIndex].GetChildRows("Job_ToFolder");
                    for (int i = 0; i <= ToFolderRows.GetUpperBound(0); i++)
                    {
                        ToFolderRows[i].Delete();
                    }
                    foreach (string fld in _toFolders)
                    {
                        DataRow ToFolderRow = ds.Tables["ToFolder"].NewRow();
                        ToFolderRow["JobId"] = ds.Tables["Job"].Rows[currentIndex]["JobId"];
                        ToFolderRow["Job_Id"] = ds.Tables["Job"].Rows[currentIndex]["Job_Id"];
                        ToFolderRow["FolderPath"] = fld;
                        ds.Tables["ToFolder"].Rows.Add(ToFolderRow);
                    }
                }

                public void Delete()
                {
                    ds.Tables["Job"].Rows.RemoveAt(currentIndex);
                    LoadValues(currentIndex++);
                }

                public void MoveNext() { Save(); currentIndex++; LoadValues(currentIndex); }
                public void MovePrevious() { Save(); currentIndex--; LoadValues(currentIndex); }
                public void MoveFirst() { Save(); LoadValues(0); }
                public void MoveLast() { Save(); LoadValues(ds.Tables["Job"].Rows.Count - 1); }

                public void CreateNew()
                {
                    Save();
                    int MaxJobId = 0;
                    Int32.TryParse(ds.Tables["Job"].Compute("Max(JobId)", "").ToString(), out MaxJobId);
                    DataRow rw = ds.Tables["Job"].NewRow();
                    rw["JobId"] = MaxJobId + 1;
                    ds.Tables["Job"].Rows.Add(rw);
                    LoadValues(ds.Tables["Job"].Rows.IndexOf(rw));
                }

                public void Commit()
                {
                    Save();
                    ds.WriteXml(Properties.Settings.Default.ConfigLocation);
                }

                private void LoadValues(int index)
                {
                    if (index > ds.Tables["Job"].Rows.Count - 1) { currentIndex = ds.Tables["Job"].Rows.Count - 1; }
                    else if (index < 0) { currentIndex = 0; }
                    else { currentIndex = index; }
                    DataRow rw = ds.Tables["Job"].Rows[currentIndex];
                    _jobName = rw["JobName"].ToString();
                    _fromFolder = rw["FromFolder"].ToString();
                    _filter = rw["FilterText"].ToString();
                    switch (rw["FilterType"].ToString())
                    {
                        case "TextFilter": JobFilterType = FilterType.TextFilter; break;
                        case "RegExFilter": JobFilterType = FilterType.RegExFilter; break;
                        default: JobFilterType = FilterType.NoFilter; break;
                    }
                    if (_toFolders == null) _toFolders = new List<string>();
                    _toFolders.Clear();
                    foreach (DataRow crw in rw.GetChildRows("Job_ToFolder"))
                    {
                        AddToFolder(crw["FolderPath"].ToString());
                    }
                }

                private static FileInfo[] GetFilesByRegEx(Regex rgx, string locPath)
                {
                    DirectoryInfo d = new DirectoryInfo(locPath);
                    FileInfo[] fullFileList = d.GetFiles();
                    List<FileInfo> filteredList = new List<FileInfo>();
                    foreach (FileInfo fi in fullFileList)
                    {
                        if (rgx.IsMatch(fi.Name)) { filteredList.Add(fi); }
                    }
                    return filteredList.ToArray();
                }

                private static FileInfo[] GetFilesByFilter(string filter, string locPath)
                {
                    DirectoryInfo d = new DirectoryInfo(locPath);
                    FileInfo[] fi = d.GetFiles(filter);
                    return fi;
                }

                private void CopyFiles(FileInfo[] files, string destPath)
                {
                    foreach (FileInfo fi in files)
                    {
                        bool success = false;
                        int i = 0;
                        string copyToName = fi.Name;
                        string copyToExt = fi.Extension;
                        string copyToNameWithoutExt = Path.GetFileNameWithoutExtension(fi.FullName);
                        while (!success && i < 100)
                        {
                            i++;
                            try
                            {
                                if (File.Exists(Path.Combine(destPath, copyToName)))
                                    throw new CopyFileExistsException();
                                File.Copy(fi.FullName, Path.Combine(destPath, copyToName));
                                success = true;
                            }
                            catch (CopyFileExistsException ex)
                            {
                                copyToName = String.Format("{0} ({1}){2}", copyToNameWithoutExt, i, copyToExt);
                            }
                        }
                    }
                }
            }

            public class CopyFileExistsException : Exception
            {
                public string Message;
            }
        }

    Read the article

  • How to show linked file size and type in title attributes using jquery?

    - by jitendra
    For example, before:

        <a target="_blank" href="http://www.adobe.com/devnet/acrobat/pdfs/reader_overview.pdf">
            Adobe Reader JavaScript specification
        </a>

    Because the file is a PDF, the title should be title="PDF, 93KB, opens in a new window":

        <a title="PDF, 93KB, opens in a new window" target="_blank"
           href="http://www.adobe.com/devnet/acrobat/pdfs/reader_overview.pdf">
            Adobe Reader JavaScript specification
        </a>

    Read the article
