Search Results

Search found 10602 results on 425 pages for 'media queries'.


  • DB Design Pattern - Many to many classification / categorised tagging.

    - by Robin Day
    I have an existing database design that stores Job Vacancies. The "Vacancy" table has a number of fixed fields across all clients, such as "Title", "Description", "Salary range". There is an EAV design for "Custom" fields that the Clients can setup themselves, such as, "Manager Name", "Working Hours". The field names are stored in a "ClientText" table and the data stored in a "VacancyClientText" table with VacancyId, ClientTextId and Value. Lastly there is a many to many EAV design for custom tagging / categorising the vacancies with things such as Locations/Offices the vacancy is in, a list of skills required. This is stored as a "ClientCategory" table listing the types of tag, "Locations, Skills", a "ClientCategoryItem" table listing the valid values for each Category, e.g., "London,Paris,New York,Rome", "C#,VB,PHP,Python". Finally there is a "VacancyClientCategoryItem" table with VacancyId and ClientCategoryItemId for each of the selected items for the vacancy. There are no limits to the number of custom fields or custom categories that the client can add. I am now designing a new system that is very similar to the existing system, however, I have the ability to restrict the number of custom fields a Client can have and it's being built from scratch so I have no legacy issues to deal with. For the Custom Fields my solution is simple, I have 5 additional columns on the Vacancy Table called CustomField1-5. This removes one of the EAV designs. It is with the tagging / categorising design that I am struggling. If I limit a client to having 5 categories / types of tag. Should I create 5 tables listing the possible values "CustomCategoryItems1-5" and then an additional 5 many to many tables "VacancyCustomCategoryItem1-5" This would result in 10 tables performing the same storage as the three tables in the existing system. Also, should (heaven forbid) the requirements change in that I need 6 custom categories rather than 5 then this will result in a lot of code change. Therefore, can anyone suggest any DB Design Patterns that would be more suitable to storing such data. I'm happy to stick with the EAV approach, however, the existing system has come across all the usual performance issues and complex queries associated with such a design. Any advice / suggestions are much appreciated. The DBMS system used is SQL Server 2005, however, 2008 is an option if required for any particular pattern.
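
    One possible middle ground (a sketch only - every table, column and constraint name below is invented, and the Vacancy primary key is assumed to be VacancyId) is to keep the same three generic tables but enforce the per-client limit with a constraint, so moving from 5 to 6 categories is a constraint change rather than new tables and new code:

        CREATE TABLE CustomCategory (
            CustomCategoryId INT IDENTITY PRIMARY KEY,
            ClientId         INT NOT NULL,
            CategoryNo       TINYINT NOT NULL,
            Name             NVARCHAR(100) NOT NULL,
            CONSTRAINT CK_CustomCategory_Limit CHECK (CategoryNo BETWEEN 1 AND 5),
            CONSTRAINT UQ_CustomCategory_Client UNIQUE (ClientId, CategoryNo)
        );

        CREATE TABLE CustomCategoryItem (
            CustomCategoryItemId INT IDENTITY PRIMARY KEY,
            CustomCategoryId     INT NOT NULL REFERENCES CustomCategory (CustomCategoryId),
            Value                NVARCHAR(200) NOT NULL
        );

        CREATE TABLE VacancyCustomCategoryItem (
            VacancyId            INT NOT NULL REFERENCES Vacancy (VacancyId),
            CustomCategoryItemId INT NOT NULL REFERENCES CustomCategoryItem (CustomCategoryItemId),
            PRIMARY KEY (VacancyId, CustomCategoryItemId)
        );

    Queries stay identical however many categories a client has, which avoids both the ten-table explosion and a schema change if the limit ever moves from 5 to 6.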

    Read the article

  • PHP - JSON Steam API query

    - by Hunter
    First time using "JSON" and I've just been working away at my dissertation and I'm integrating a few features from the steam API.. now I'm a little bit confused as to how to create arrays. function test_steamAPI() { $api = ('http://api.steampowered.com/ISteamUser/GetPlayerSummaries/v0002/?key='.get_Steam_api().'&steamids=76561197960435530'); $test = decode_url($api); var_dump($test['response']['players'][0]['personaname']['steamid']); } //Function to decode and return the data. function decode_url($url) { $decodeURL = $url; $data = file_get_contents($url); $data_output = json_decode($data, true); return $data_output; } So ea I've wrote a simple method to decode Json as I'll be doing a fair bit.. But just wondering the best way to print out arrays.. I can't for the life of me get it to print more than 1 element without it retunring an error e.g. Warning: Illegal string offset 'steamid' in /opt/lampp/htdocs/lan/lan-includes/scripts/class.steam.php on line 48 string(1) "R" So I can print one element, and if I add another it returns errors. EDIT -- Thanks for help, So this was my solution: function test_steamAPI() { $api = ('http://api.steampowered.com/ISteamUser/GetPlayerSummaries/v0002/?key='.get_Steam_api().'&steamids=76561197960435530,76561197960435530'); $data = decode_url($api); foreach($data ['response']['players'] as $player) { echo "Steam id:" . $player['steamid'] . "\n"; echo "Community visibility :" . $player['communityvisibilitystate'] . "\n"; echo "Player profile" . $player['profileurl'] ."\n"; } } //Function to decode and return the data. function decode_url($url) { $decodeURL = $url; $json = file_get_contents($decodeURL); $data_output = json_decode($json, true); return $data_output; } Worked this out by taking a look at the data.. and a couple json examples, this returns an array based on the Steam API URL (It works for multiple queries.... just FYI) and you can insert loops inside for items etc.. (if anyone searches for this).

    Read the article

  • Remote PostgreSQL - extremely slow

    - by Muffinbubble
    I have set up PostgreSQL on a VPS I own - the software that accesses the database is a program called PokerTracker. PokerTracker logs all your hands and statistics whilst playing online poker. I wanted this accessible from several different computers, so I decided to install it on my VPS, and after a few hiccups I managed to get it connecting without errors. However, the performance is dreadful. I have done tons of research on 'remote postgresql slow' etc. and am yet to find an answer, so I am hoping someone is able to help.

    Things to note:
    - The query I am trying to execute is very small. When connecting locally on the VPS, the query runs instantly. When running it remotely, it takes about 1 minute and 30 seconds.
    - The VPS is on a 100 Mbps connection and the computer I'm connecting from is on an 8 Mb line. The network communication between the two is almost instant, I can connect remotely with no lag whatsoever, and I am hosting several websites running MSSQL where all queries run instantly whether connected remotely or locally, so it seems specific to PostgreSQL.
    - I'm running their newest version of the software and the newest compatible version of PostgreSQL.
    - The database is new and contains hardly any data, and I've run vacuum/analyze etc., all to no avail; I see no improvements.
    - I am able to telnet to port 5432 on the VPS IP with no problems, and as I say the query does execute, it just takes an extremely long time.
    - When the query is running I notice on the router that hardly any bandwidth is being used - but then again I wouldn't expect it to for a simple query, so I'm not sure if this is the issue.
    - I've tried connecting remotely on 3 different networks now (including different routers) but the problem remains. Connecting via another machine on the LAN is instant.
    - I have also edited postgresql.conf to allow for more memory/buffers etc., but I don't think this is the problem - what I am asking it to do is very simple and shouldn't be intensive at all.

    I don't understand how MSSQL can query almost instantly yet PostgreSQL struggles so much. Thanks, Ricky
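
    A couple of quick checks can separate server-side slowness from connection-level slowness (a sketch only - substitute the real PokerTracker query; the logging threshold of 250 ms is just an example value):

        -- On the VPS itself, in psql: measure planning/execution time with no network involved
        \timing on
        EXPLAIN ANALYZE SELECT ...;   -- paste the actual PokerTracker query here

        -- In postgresql.conf (then reload): log every statement slower than 250 ms,
        -- so you can see exactly what the remote client sends and how long each statement takes
        log_min_duration_statement = 250

    If the server-side numbers are fast but the remote run is slow, the time is going into round trips or the client driver rather than the query itself.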

    Read the article

  • T-SQL generated from LINQ to SQL is missing a where clause

    - by Jimmy W
    I have extended some functionality to a DataContext object (called "CodeLookupAccessDataContext") such that the object exposes some methods to return results of LINQ to SQL queries. Here are the methods I have defined: public List<CompositeSIDMap> lookupCompositeSIDMap(int regionId, int marketId) { var sidGroupId = CompositeSIDGroupMaps.Where(x => x.RegionID.Equals(regionId) && x.MarketID.Equals(marketId)) .Select(x => x.CompositeSIDGroup); IEnumerator<int> sidGroupIdEnum = sidGroupId.GetEnumerator(); if (sidGroupIdEnum.MoveNext()) return lookupCodeInfo<CompositeSIDMap, CompositeSIDMap>(x => x.CompositeSIDGroup.Equals(sidGroupIdEnum.Current), x => x); else return null; } private List<TResult> lookupCodeInfo<T, TResult>(Func<T, bool> compLambda, Func<T, TResult> selectLambda) where T : class { System.Data.Linq.Table<T> dataTable = this.GetTable<T>(); var codeQueryResult = dataTable.Where(compLambda) .Select(selectLambda); List<TResult> codeList = new List<TResult>(); foreach (TResult row in codeQueryResult) codeList.Add(row); return codeList; } CompositeSIDGroupMap and CompositeSIDMap are both tables in our database that are represented as objects in my DataContext object. I wrote the following code to call these methods and display the T-SQL generated after calling these methods: using (CodeLookupAccessDataContext codeLookup = new CodeLookupAccessDataContext()) { codeLookup.Log = Console.Out; List<CompositeSIDMap> compList = codeLookup.lookupCompositeSIDMap(5, 3); } I got the following results in my log after invoking this code: SELECT [t0].[CompositeSIDGroup] FROM [dbo].[CompositeSIDGroupMap] AS [t0] WHERE ([t0].[RegionID] = @p0) AND ([t0].[MarketID] = @p1) -- @p0: Input Int (Size = 0; Prec = 0; Scale = 0) [5] -- @p1: Input Int (Size = 0; Prec = 0; Scale = 0) [3] -- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 3.5.30729.1 SELECT [t0].[PK_CSM], [t0].[CompositeSIDGroup], [t0].[InputSID], [t0].[TargetSID], [t0].[StartOffset], [t0].[EndOffset], [t0].[Scale] FROM [dbo].[CompositeSIDMap] AS [t0] -- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 3.5.30729.1 The first T-SQL statement contains a where clause as specified and returns one column as expected. However, the second statement is missing a where clause and returns all columns, even though I did specify which rows I wanted to view and which columns were of interest. Why is the second T-SQL statement generated the way it is, and what should I do to ensure that I filter out the data according to specifications via the T-SQL? Also note that I would prefer to keep lookupCodeInfo() and especially am interested in keeping it enabled to accept lambda functions for specifying which rows/columns to return.
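
    A likely explanation (not confirmed against the original project, so treat this as a sketch): because lookupCodeInfo takes Func<T, bool> rather than Expression<Func<T, bool>>, dataTable.Where(compLambda) binds to Enumerable.Where instead of Queryable.Where, so LINQ to SQL pulls the whole table and filters/projects in memory - which is exactly why the second statement has no WHERE clause and selects every column. Accepting expression trees lets the provider translate both lambdas:

        // Requires: using System.Linq; using System.Linq.Expressions;
        private List<TResult> lookupCodeInfo<T, TResult>(
            Expression<Func<T, bool>> compLambda,
            Expression<Func<T, TResult>> selectLambda) where T : class
        {
            System.Data.Linq.Table<T> dataTable = this.GetTable<T>();

            return dataTable.Where(compLambda)      // Queryable.Where -> WHERE in the generated T-SQL
                            .Select(selectLambda)   // Queryable.Select -> column list in the generated T-SQL
                            .ToList();
        }

    The existing call site does not need to change - a lambda literal compiles to an expression tree automatically when the parameter type is Expression<>.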

    Read the article

  • Moving from Windows to Ubuntu.

    - by djzmo
    Hello there, I used to program on Windows with Microsoft Visual C++, and I need to make some of my portable programs (written in portable C++) cross-platform, or at least release a working version for both Linux and Windows. I am a total newcomer to Linux application development (and rarely use the OS itself). So, today, I installed Ubuntu 10.04 LTS (through Wubi) and equipped Code::Blocks with the g++ compiler as my main weapon. Then I compiled my very first Hello World Linux program, and I'm confused about the resulting program. I can run it through the "Build and Run" menu option in Code::Blocks, but when I tried to launch the compiled application externally through a File Browser (in /media/MyNTFSPartition/MyProject/bin/Release; yes, I saved it on my NTFS partition), the program didn't show up. Why? I've run out of ideas. I need to change my Windows and Microsoft Visual Studio mindset to a Linux and Code::Blocks mindset. So I came up with these questions:

    1. How can I execute my compiled Linux programs externally (outside the IDE)? In Windows, I simply run the generated executable (.exe) file.
    2. How can I distribute my Linux application? In Windows, I simply distribute the executable files with the corresponding DLL files (if any).
    3. What is the equivalent of LIBs (static libraries) and DLLs (dynamic libraries) on Linux, and how do I use them? In Windows/Visual Studio, I simply add the required libraries to the Additional Dependencies in the Project Settings, and my program automatically links with the required static libraries/DLLs.
    4. Is it possible to use the "binary form" of a C++ library (if provided) so that I wouldn't need to recompile the entire library source code? In Windows, yes - sometimes precompiled *.lib files are provided.
    5. If I want to create a wxWidgets application on Linux, which package should I pick for Ubuntu: wxGTK or wxX11? Can I run a wxGTK program under X11? In Windows, I use wxMSW, of course.
    6. If question no. 4 is possible, do precompiled wxX11/wxGTK libraries exist out there? I haven't tried a deep Google search yet. In Windows, there is a project called "wxPack" (http://wxpack.sourceforge.net/) that saves a lot of my time.

    Sorry for asking so many questions, but I am really confused about these Linux development fundamentals. Any kind of help would be appreciated =) Thanks.
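
    For questions 1 and 3, a minimal terminal sketch (the paths are just the ones mentioned above; the NTFS point is an assumption - NTFS partitions mounted via ntfs-3g are often mounted without execute permission, which would explain why the binary won't launch from the file browser):

        # 1. Run the compiled binary directly (Linux executables have no .exe extension):
        cd /media/MyNTFSPartition/MyProject/bin/Release
        ./MyProject

        # If the NTFS mount blocks execution, copy the binary onto the Linux filesystem
        # and mark it executable:
        cp MyProject ~/MyProject
        chmod +x ~/MyProject
        ~/MyProject

        # 3. Shared libraries (.so) play the role of DLLs, static archives (.a) the role of LIBs.
        # ldd lists the shared libraries a binary needs (roughly what Dependency Walker does):
        ldd ~/MyProject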

    Read the article

  • SQL Server 2005: Update rows in a specified order (like ORDER BY)?

    - by JMTyler
    I want to update rows of a table in a specific order, like one would expect if including an ORDER BY clause, but SQL Server does not support the ORDER BY clause in UPDATE queries. I have checked out this question which supplied a nice solution, but my query is a bit more complicated than the one specified there. UPDATE TableA AS Parent SET Parent.ColA = Parent.ColA + (SELECT TOP 1 Child.ColA FROM TableA AS Child WHERE Child.ParentColB = Parent.ColB ORDER BY Child.Priority) ORDER BY Parent.Depth DESC; So, what I'm hoping that you'll notice is that a single table (TableA) contains a hierarchy of rows, wherein one row can be the parent or child of any other row. The rows need to be updated in order from the deepest child up to the root parent. This is because TableA.ColA must contain an up-to-date concatenation of its own current value with the values of its children (I realize this query only concats with one child, but that is for the sake of simplicity - the purpose of the example in this question does not necessitate any more verbosity), therefore the query must update from the bottom up. The solution suggested in the question I noted above is as follows: UPDATE messages SET status=10 WHERE ID in (SELECT TOP (10) Id FROM Table WHERE status=0 ORDER BY priority DESC ); The reason that I don't think I can use this solution is because I am referencing column values from the parent table inside my subquery (see WHERE Child.ParentColB = Parent.ColB), and I don't think two sibling subqueries would have access to each others' data. So far I have only determined one way to merge that suggested solution with my current problem, and I don't think it works. UPDATE TableA AS Parent SET Parent.ColA = Parent.ColA + (SELECT TOP 1 Child.ColA FROM TableA AS Child WHERE Child.ParentColB = Parent.ColB ORDER BY Child.Priority) WHERE Parent.Id IN (SELECT Id FROM TableA ORDER BY Parent.Depth DESC); The WHERE..IN subquery will not actually return a subset of the rows, it will just return the full list of IDs in the order that I want. However (I don't know for sure - please tell me if I'm wrong) I think that the WHERE..IN clause will not care about the order of IDs within the parentheses - it will just check the ID of the row it currently wants to update to see if it's in that list (which, they all are) in whatever order it is already trying to update... Which would just be a total waste of cycles, because it wouldn't change anything. So, in conclusion, I have looked around and can't seem to figure out a way to update in a specified order (and included the reason I need to update in that order, because I am sure I would otherwise get the ever-so-useful "why?" answers) and I am now hitting up Stack Overflow to see if any of you gurus out there who know more about SQL than I do (which isn't saying much) know of an efficient way to do this. It's particularly important that I only use a single query to complete this action. A long question, but I wanted to cover my bases and give you guys as much info to feed off of as possible. :) Any thoughts?
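
    One workaround sketch (a small batch rather than the single statement asked for, so it may not fit that constraint, but it preserves correctness): update one depth level at a time, deepest first. Within a single level the update order doesn't matter, so no ORDER BY is needed on the UPDATE itself:

        DECLARE @Depth INT;
        SELECT @Depth = MAX(Depth) FROM TableA;

        WHILE @Depth >= 0   -- assumes the root level is depth 0; adjust if it starts at 1
        BEGIN
            UPDATE Parent
            SET Parent.ColA = Parent.ColA +
                (SELECT TOP 1 Child.ColA
                 FROM TableA AS Child
                 WHERE Child.ParentColB = Parent.ColB
                 ORDER BY Child.Priority)
            FROM TableA AS Parent
            WHERE Parent.Depth = @Depth;

            SET @Depth = @Depth - 1;
        END
        -- Note: as in the original query, a row with no children gets a NULL subquery result,
        -- which NULLs ColA; wrap the subquery in ISNULL(..., '') if that matters.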

    Read the article

  • SQL Native Client 10 Performance miserable (due to server-side cursors)

    - by namezero
    we have an application that uses ODBC via CDatabase/CRecordset in MFC (VS2010). We have two backends implemented. MSSQL and MySQL. Now, when we use MSSQL (with the Native Client 10.0), retrieving records with SELECT is dramatically slow via slow links (VPN, for example). The MySQL ODBC driver does not exhibit this nasty behavior. For example: CRecordset r(&m_db); r.Open(CRecordset::snapshot, L"SELECT a.something, b.sthelse FROM TableA AS a LEFT JOIN TableB AS b ON a.ID=b.Ref"); r.MoveFirst(); while(!r.IsEOF()) { // Retrieve CString strData; crs.GetFieldValue(L"a.something", strData); crs.MoveNext(); } Now, with the MySQL driver, everything runs as it should. The query is returned, and everything is lightning fast. However, with the MSSQL Native Client, things slow down, because on every MoveNext(), the driver communicates with the server. I think it is due to server-side cursors, but I didn't find a way to disable them. I have tried using: ::SQLSetConnectAttr(m_db.m_hdbc, SQL_ATTR_ODBC_CURSORS, SQL_CUR_USE_ODBC, SQL_IS_INTEGER); But this didn't help either. There are still long-running exec's to sp_cursorfetch() et al in SQL Profiler. I have also tried a small reference project with SQLAPI and bulk fetch, but that hangs in FetchNext() for a long time, too (even if there is only one record in the resultset). This however only happens on queries with LEFT JOINS, table-valued functions, etc. Note that the query doesn't take that long - executing the same SQL via SQL Studio over the same connection returns in a reasonable time. Question1: Is is possible to somehow get the native client to "cache" all results locally use local cursors in a similar fashion as the MySQL driver seems to do it? Maybe this is the wrong approach altogether, but I'm not sure how else to do this. All we want is to retrieve all data at once from a SELECT, then never talk the server again until the next query. We don't care about recordset updates, deletes, etc or any of that nonsense. We only want to retrieve data. We take that recordset, get all the data, and delete it. Question2: Is there a more efficient way to just retrieve data in MFC with ODBC?
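
    If read-only retrieval is acceptable, one thing worth trying (a sketch, not verified against this exact application): open the recordset forward-only and read-only. With the SQL Server driver that combination, at the default rowset size of 1, should allow a default result set ("firehose" cursor) instead of an API server cursor, so rows stream down without an sp_cursorfetch round trip per MoveNext():

        CRecordset r(&m_db);
        r.Open(CRecordset::forwardOnly,
               L"SELECT a.something, b.sthelse FROM TableA AS a LEFT JOIN TableB AS b ON a.ID = b.Ref",
               CRecordset::readOnly);

        // No MoveFirst(): a forward-only recordset is already on the first row after Open()
        while (!r.IsEOF())
        {
            CString strData;
            // The result column is exposed by its name, so "something" rather than "a.something"
            r.GetFieldValue(L"something", strData);
            r.MoveNext();
        }
        r.Close();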

    Read the article

  • Returning the currently displayed index of an array in JavaScript

    - by Jeff Kindred
    I have a simple array with x number of items. I am displaying them individually via a link click... I want to update a number that say 1 of 10. when the next one is displayed i want it to display 2 of 10 etc... I have looked all around and my brain is fried right now... I know its simple I just cant get it out. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/> <title>Page Title</title> <link rel="stylesheet" href="style.css" type="text/css" media="screen" charset="utf-8"/> <script type="text/javascript"> var quotations = new Array() quotations[0]= "1" quotations[1]= "2" quotations[2]= "3" quotations[3]= "4" quotations[4]= "5" quotations[5]= "6" quotations[6]= "7" numQuotes = quotations.length; curQuote = 1; function move( xflip ) { curQuote = curQuote + xflip; if (curQuote > numQuotes) { curQuote = 1 ; } if (curQuote == 0) { curQuote = numQuotes ; } document.getElementById('quotation').innerHTML=quotations[curQuote - 1]; } var curPage = //this is where the current index should go </script> </head> <body> <div id="quotation"> <script type="text/javascript">document.write(quotations[0]);</script> </div> <div> <p><a href="javascript();" onclick="move(-1)">GO back</a> <script type="text/javascript">document.write(curPage + " of " + numQuotes)</script> <a href="javascript();" onclick="move(1)">GO FORTH</a></p> </div> </body> </html>
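
    A minimal sketch of one way to do it (it assumes replacing the document.write() call with a <span id="counter"></span>): keep the counter in its own element and refresh it from move(), since curQuote already holds the current position:

        function updateCounter() {
            document.getElementById('counter').innerHTML = curQuote + " of " + numQuotes;
        }

        function move(xflip) {
            curQuote = curQuote + xflip;
            if (curQuote > numQuotes) { curQuote = 1; }
            if (curQuote === 0) { curQuote = numQuotes; }
            document.getElementById('quotation').innerHTML = quotations[curQuote - 1];
            updateCounter();   // the "x of y" text always follows curQuote
        }

        // run once on load so the page starts at "1 of 7"
        window.onload = updateCounter;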

    Read the article

  • fancybox image sometimes renders outside box

    - by Colleen
    I have the following django template: <script type="text/javascript" src="{{ STATIC_URL }}js/ jquery.fancybox-1.3.4.pack.js"></script> <link rel="stylesheet" href="{{ STATIC_URL }}css/ jquery.fancybox-1.3.4.css" type="text/css" media="screen" /> {% include "submission-form.html" with section="photos" %} <div class="commentables"> {% load thumbnail %} {% for story in objects %} <div class="image {% if forloop.counter|add:"-1"| divisibleby:picsinrow %}left{% else %}{% if forloop.counter| divisibleby:picsinrow %}right{% else %}middle{% endif %}{% endif %}"> {% if story.image %} {% thumbnail story.image size crop="center" as full_im %} <a rel="gallery" href="{% url post slug=story.slug %}"> <img class="preview" {% if story.title %} alt="{{ story.title }}" {% endif %} src="{{ full_im.url }}" full- image="{% if story.image_url %}{{ story.image_url }}{% else %} {{ story.image.url }}{% endif %}"> </a> {% endthumbnail %} {% else %} {% if story.image_url %} {% thumbnail story.image_url size crop="center" as full_im %} <a rel="gallery" href="{% url post slug=story.slug %}"> <img class="preview" {% if story.title %} alt="{{ story.title }}" {% endif %} src="{{ full_im.url }}" full- image="{{ story.image_url }}"> </a> {% endthumbnail %} {% endif %} {% endif %} </div> {% endfor %} {% if rowid != "last" %} <br style="clear: both" /> {% endif %} {% if not no_more_button %} <p style="text-align: right;" class="more-results"><a href="{% url images school_slug tag_slug %}">more...</a></p> {% endif %} </div> <script> $(document).ready(function(){ function changeattr(e){ var f = $(e.clone()); $(f.children()[0]).attr('src', $(f.children() [0]).attr("full-image")); $(f.children()[0]).attr('height', '500px'); return f[0].outerHTML; } $('.image a').each(function(idx, elem) { var e = $(elem); e.fancybox({ title: $(e.children()[0]).attr('alt'), content: changeattr(e) }); }); }); </script> and I'm occasionally getting weird display errors where the box will either not render anything at all (so it will show up as just a thin white bar, basically) or it will render only about 30 px wide, and position itself halfway down the page. In both cases, if I inspect element, I can see the "shadow" of the full picture, at the right size, with the right url. Image source doesnt' seem to make a difference, I'm getting no errors, and this is happening in both chrome and firefox. Does anyone have any ideas?

    Read the article

  • Can't iterate over nested dict in Django

    - by fredrik
    Hi, Im trying to iterate over a nestled dict list. The first level works fine. But the second level is treated like a string not dict. In my template I have this: {% for product in Products %} <li> <p>{{ product }}</p> {% for partType in product.parts %} <p>{{ partType }}</p> {% for part in partType %} <p>{{ part }}</p> {% endfor %} {% endfor %} </li> {% endfor %} It's the {{ part }} that just list 1 char at the time based on partType. And it seams that it's treated like a string. I can however via dot notation reach all dict but not with a for loop. The current output looks like this: Color C o l o r Style S ..... The Products object looks like this in the log: [{'product': <models.Products.Product object at 0x1076ac9d0>, 'parts': {u'Color': {'default': u'Red', 'optional': [u'Red', u'Blue']}, u'Style': {'default': u'Nice', 'optional': [u'Nice']}, u'Size': {'default': u'8', 'optional': [u'8', u'8.5']}}}] What I trying to do is to pair together a dict/list for a product from a number of different SQL queries. The web handler looks like this: typeData = Products.ProductPartTypes.all() productData = Products.Product.all() langCode = 'en' productList = [] for product in productData: typeDict = {} productDict = {} for type in typeData: typeDict[type.typeId] = { 'default' : '', 'optional' : [] } productDict['product'] = product productDict['parts'] = typeDict defaultPartsData = Products.ProductParts.gql('WHERE __key__ IN :key', key = product.defaultParts) optionalPartsData = Products.ProductParts.gql('WHERE __key__ IN :key', key = product.optionalParts) for defaultPart in defaultPartsData: label = Products.ProductPartLabels.gql('WHERE __key__ IN :key AND partLangCode = :langCode', key = defaultPart.partLabelList, langCode = langCode).get() productDict['parts'][defaultPart.type.typeId]['default'] = label.partLangLabel for optionalPart in optionalPartsData: label = Products.ProductPartLabels.gql('WHERE __key__ IN :key AND partLangCode = :langCode', key = optionalPart.partLabelList, langCode = langCode).get() productDict['parts'][optionalPart.type.typeId]['optional'].append(label.partLangLabel) productList.append(productDict) logging.info(productList) templateData = { 'Languages' : Settings.Languges.all().order('langCode'), 'ProductPartTypes' : typeData, 'Products' : productList } I've tried making the dict in a number of different ways. Like first making a list, then a dict, used tulpes anything I could think of. Any help is welcome! Bouns: If someone have an other approach to the SQL quires, that is more then welcome. I feel that it kinda stupid to run that amount of quires. What is happening that each product part has a different label base on langCode. ..fredrik
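
    On the template side, one fix sketch (view code unchanged): iterating a dict in a Django template yields its keys, and iterating a key (a string) yields characters, which is where the one-character-at-a-time output comes from. Unpacking with .items gives both the type name and its data:

        {% for product in Products %}
          <li>
            <p>{{ product.product }}</p>
            {% for typeName, info in product.parts.items %}
              <p>{{ typeName }} (default: {{ info.default }})</p>
              {% for part in info.optional %}
                <p>{{ part }}</p>
              {% endfor %}
            {% endfor %}
          </li>
        {% endfor %}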

    Read the article

  • How to handle very frequent updates to a Lucene index

    - by fsm
    I am trying to prototype an indexing/search application which uses very volatile indexing data sources (forums, social networks etc), here are some of the performance requirements, Very fast turn-around time (by this I mean that any new data (such as a new message on a forum) should be available in the search results very soon (less than a minute)) I need to discard old documents on a fairly regular basis to ensure that the search results are not dated. Last but not least, the search application needs to be responsive. (latency on the order of 100 milliseconds, and should support at least 10 qps) All of the requirements I have currently can be met w/o using Lucene (and that would let me satisfy all 1,2 and 3), but I am anticipating other requirements in the future (like search relevance etc) which Lucene makes easier to implement. However, since Lucene is designed for use cases far more complex than the one I'm currently working on, I'm having a hard time satisfying my performance requirements. Here are some questions, a. I read that the optimize() method in the IndexWriter class is expensive, and should not be used by applications that do frequent updates, what are the alternatives? b. In order to do incremental updates, I need to keep committing new data, and also keep refreshing the index reader to make sure it has the new data available. These are going to affect 1 and 3 above. Should I try duplicate indices? What are some common approaches to solving this problem? c. I know that Lucene provides a delete method, which lets you delete all documents that match a certain query, in my case, I need to delete all documents which are older than a certain age, now one option is to add a date field to every document and use that to delete documents later. Is it possible to do range queries on document ids (I can create my own id field since I think that the one created by lucene keeps changing) to delete documents? Is it any faster than comparing dates represented as strings? I know these are very open questions, so I am not looking for a detailed answer, I will try to treat all of your answers as suggestions and use them to inform my design. Thanks! Please let me know if you need any other information.

    Read the article

  • What does Apache need to support both mysqli and PDO?

    - by Nathan Long
    I'm considering changing some PHP code to use PDO for database access instead of mysqli (because the PDO syntax makes more sense to me and is database-agnostic). To do that, I'd need both methods to work while I'm making the changeover. My problem is this: so far, either one or the other method will crash Apache. Right now I'm using XAMPP in Windows XP, and PHP Version 5.2.8. Mysqli works fine, and so does this: $dbc = new PDO("mysql:host=$hostname;dbname=$dbname", $username, $password); echo 'Connected to database'; $sql = "SELECT * FROM `employee`"; But this line makes Apache crash: $dbc->query($sql); I don't want to redo my entire Apache or XAMPP installation, but I'd like for PDO to work. So I tried updating libmysql.dll from here, as oddvibes recommended here. That made my simple PDO query work, but then mysqli queries crashed Apache. (I also tried the suggestion after that one, to update php_pdo_mysql.dll and php_pdo.dll, to no effect.) Test Case I created this test script to compare PDO vs mysqli. With the old copy of libmysql.dll, it crashes if $use_pdo is true and doesn't if it's false. With the new copy of libmysql.dll, it's the opposite. if ($use_pdo){ $dbc = new PDO("mysql:host=$hostname;dbname=$dbname", $username, $password); echo 'Connected to database<br />'; $sql = "SELECT * FROM `employee`"; $dbc->query($sql); foreach ($dbc->query($sql) as $row){ echo $row['firstname'] . ' ' . $row['lastname'] . "<br>\n"; } } else { $dbc = @mysqli_connect($hostname, $username, $password, $dbname) OR die('Could not connect to MySQL: ' . mysqli_connect_error()); $sql = "SELECT * FROM `employee`"; $result = @mysqli_query($dbc, $sql) or die(mysqli_error($dbc)); while ($row = mysqli_fetch_array($result,MYSQLI_ASSOC)) { echo $row['firstname'] . ' ' . $row['lastname'] . "<br>\n"; } } What does Apache need in order to support both methods of database query?

    Read the article

  • How to Transfer Large File from MS Word Add-In (VBA) to Web Server?

    - by Ian Robinson
    Overview I have a Microsoft Word Add-In, written in VBA (Visual Basic for Applications), that compresses a document and all of it's related contents (embedded media) into a zip archive. After creating the zip archive it then turns the file into a byte array and posts it to an ASMX web service. This mostly works. Issues The main issue I have is transferring large files to the web site. I can successfully upload a file that is around 40MB, but not one that is 140MB (timeout/general failure). A secondary issue is that building the byte array in the VBScript Word Add-In can fail by running out of memory on the client machine if the zip archive is too large. Potential Solutions I am considering the following options and am looking for feedback on either option or any other suggestions. Option One Opening a file stream on the client (MS Word VBA) and reading one "chunk" at a time and transmitting to ASMX web service which assembles the "chunks" into a file on the server. This has the benefit of not adding any additional dependencies or components to the application, I would only be modifying existing functionality. (Fewer dependencies is better as this solution should work in a variety of server environments and be relatively easy to set up.) Question: Are there examples of doing this or any recommended techniques (either on the client in VBA or in the web service in C#/VB.NET)? Option Two I understand WCF may provide a solution to the issue of transferring large files by "chunking" or streaming data. However, I am not very familiar with WCF, and am not sure what exactly it is capable of or if I can communicate with a WCF service from VBA. This has the downside of adding another dependency (.NET 3.0). But if using WCF is definitely a better solution I may not mind taking that dependency. Questions: Does WCF reliably support large file transfers of this nature? If so, what does this involve? Any resources or examples? Are you able to call a WCF service from VBA? Any examples?
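
    For option one, a rough sketch of the receiving side only (the method name, parameters and upload folder are all invented for illustration - this is not an existing API). The VBA add-in would open the zip For Binary, Get one fixed-size chunk at a time (kept below the site's maxRequestLength), and call this method once per chunk with a running offset:

        // ASMX web method sketch - requires System.IO and System.Web.Services
        [WebMethod]
        public void AppendChunk(string fileName, byte[] buffer, long offset)
        {
            string path = Path.Combine(Server.MapPath("~/App_Data/Uploads"),
                                       Path.GetFileName(fileName));   // strip any client-side path

            // OpenOrCreate + Seek lets chunks arrive in order and simply extend the file
            using (FileStream fs = new FileStream(path, FileMode.OpenOrCreate, FileAccess.Write))
            {
                fs.Seek(offset, SeekOrigin.Begin);
                fs.Write(buffer, 0, buffer.Length);
            }
        }

    Because each call carries only a small buffer, the client never has to build the whole byte array in memory, which also addresses the out-of-memory problem on the Word side.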

    Read the article

  • Can't figure out the color of an svg element

    - by yass
    I have a very simple HTML page, viewable on gh-pages with the following code: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <meta http-equiv="content-type" content="text/html; charset=UTF-8"> <link href="style.css" media="screen" rel="stylesheet" type="text/css"> <script src='http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js'> </script> </head> <body> <div id="page"> <div id="content"> <div id="heatmap"> <svg style="margin-left: 80px; color: rgb(255, 255, 255);" width="800" height="880"> <g transform="translate(80,80)"> <rect style="color: rgb(255, 255, 255);" height="720" width="720"> </rect> </g> </svg> </div> </div> </div> </body> </html> with the following CSS: body { background-color: #FFFFFF; font-family: Arial, Helvetica, sans-serif; font-size: 14px; color: #FFFFFF; } #wrapper { margin: 0 auto; padding: 0; } #page { width: 1000px; margin: 0 auto; padding: 40px 0px 0px 0px; } #content { float: left; width: 660px; padding: 0px 0px 0px 0px; background: #FFFFFF; } I expect the rectangle, 'svg rect' to be the color #FFFFFF, or white. But it shows up as some other color. I opened it up in firebug, and it shows the computed color to be #FFFFF:
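
    SVG shapes are painted with the fill and stroke properties, not the CSS color property (color only reaches SVG indirectly, e.g. through fill: currentColor), so the rect ignores color: rgb(255, 255, 255) and falls back to the default black fill. A sketch of the fix:

        <rect height="720" width="720" style="fill: #FFFFFF; stroke: none;"></rect>

        /* or in style.css */
        #heatmap rect { fill: #FFFFFF; }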

    Read the article

  • Printing a variable only when it changes?

    - by user1781639
    First off, my question was a little vague or confusing since I'm not really sure how to phrase my question to be specific. I'm trying to query a database of stockists for a Knitting company (school project using PHP) but I'm looking to print the city as a heading instead of with each stockists information. Here is what I have at the moment: $sql = "SELECT * FROM mc16korustockists where locale = 'south'"; $result = pg_exec($sql); $nrows = pg_numrows($result); print $nrows; $items = pg_fetch_all($result); print_r($items); for ($i=0; $i<$nrows2; $i++) { print "<h2>"; print $items[$i]['city']; print "</h2>"; print $items[$i]['name']; print $items[$i]['address']; print $items[$i]['city']; print $items[$i]['phone']; print "<br />"; print "<br />"; } I'm querying the database for all of the data in it, the rows being ref, name, address, city and phone, and executing it. Querying the number of rows then using that to determine how many iterations for the loop to run is all fine but what I'd like to have is for the h2 heading to appear above the for ($i=0;) line. Trying just breaks my page so that might be out of the question. I figure I'd have to count the number of entries in 'city' until it detects a change then change the heading to that name I think? That or make a heap of queries and set a variable for each name but at point, I might as well do it manually (and I highly doubt it would be best practice). Oh, and I'd welcome any critiques to my PHP since I'm just starting out. Thanks and if you need any more information, just ask! P.S. Our class is learning with PostgreSQL instead of MySQL as you can see in the tags.
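
    A sketch of the usual control-break approach: order the rows by city in the query, then print the <h2> only when the city differs from the previous row's city:

        $sql = "SELECT * FROM mc16korustockists WHERE locale = 'south' ORDER BY city, name";
        $result = pg_exec($sql);
        $items = pg_fetch_all($result);

        $lastCity = null;
        if ($items) {
            foreach ($items as $item) {
                if ($item['city'] !== $lastCity) {
                    print "<h2>" . $item['city'] . "</h2>";   // heading printed once per city
                    $lastCity = $item['city'];
                }
                print $item['name'] . "<br />";
                print $item['address'] . "<br />";
                print $item['phone'] . "<br /><br />";
            }
        }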

    Read the article

  • jQuery variable returned as an object

    - by alex
    I'm trying to pass a variable from one function to another, but the var elmId is being returned as an object and giving an error. When we click on any of the generated divs we should be able to change the size of the div by choosing a width / height value from the drop down menus. I'm trying to pass the clicked div id which is elmId to function displayVals but it is not working. If we replace "#"+elmId in the function displayVals with the actual id of the first div created with is "#divid1" then it works. Why is the value of var elmId not being passed to displayVals <link rel="stylesheet" href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.7/themes/base/jquery-ui.css" type="text/css" media="all" /> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js" type="text/javascript"></script> <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.7/jquery-ui.min.js" type="text/javascript"></script> <style> .aaa{width:100px; height:100px; background-color:#ccc;} button{width:100px; height:20px;} </style> <button class="idiv">div</button> <select id="width"> <option>100px</option> <option>200px</option> <option>300px</option> </select> <select id="height"> <option>100px</option> <option>200px</option> <option>300px</option> </select> <p></p> <script type="text/javascript"> var divId = 1; $("button").click(function(){ var elm = $('<div id=divid' + divId + ' class=aaa></div>'); elm.appendTo('p'); divId++; }); $("p").click(function(e){ var elmType = $(e.target)[0].nodeName, elmId = $(e.target)[0].id; return displayVals(elmId); }); function displayVals(elmId) { var iwidth = $("#width").val(); var iheight = $("#height").val(); $("#"+elmId).css({width:iwidth, height:iheight}); console.log(elmId); } $("select").change(displayVals); displayVals(); </script>
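
    One likely culprit (a sketch of a fix, assuming the goal is just to resize the last clicked div): displayVals is also wired up as $("select").change(displayVals) and called once directly, and in both of those cases its first argument is a jQuery event object or undefined rather than an id - hence "#" + elmId matching nothing. Keeping the last clicked id in one shared variable avoids passing it around:

        var currentId = null;

        $("p").click(function (e) {
            // climb from whatever was clicked to the generated div itself
            var target = $(e.target).closest("div.aaa");
            if (target.length) {
                currentId = target.attr("id");
                displayVals();
            }
        });

        function displayVals() {
            if (!currentId) { return; }
            $("#" + currentId).css({
                width: $("#width").val(),
                height: $("#height").val()
            });
        }

        $("select").change(displayVals);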

    Read the article

  • How to set up Lucene/Solr for a B2B web app?

    - by Bill Paetzke
    Given: 1 database per client (business customer) 5000 clients Clients have between 2 to 2000 users (avg is ~100 users/client) 100k to 10 million records per database Users need to search those records often (it's the best way to navigate their data) Possibly relevant info: Several new clients each week (any time during business hours) Multiple web servers and database servers (users can login via any web server) Let's stay agnostic of language or sql brand, since Lucene (and Solr) have a breadth of support For Example: Joel Spolsky said in Podcast #11 that his hosted web app product, FogBugz On-Demand, uses Lucene. He has thousands of on-demand clients. And each client gets their own database. They use an index per client and store it in the client's database. I'm not sure on the details. And I'm not sure if this is a serious mod to Lucene. The Question: How would you setup Lucene search so that each client can only search within its database? How would you setup the index(es)? Where do you store the index(es)? Would you need to add a filter to all search queries? If a client cancelled, how would you delete their (part of the) index? (this may be trivial--not sure yet) Possible Solutions: Make an index for each client (database) Pro: Search is faster (than one-index-for-all method). Indices are relative to the size of the client's data. Con: I'm not sure what this entails, nor do I know if this is beyond Lucene's scope. Have a single, gigantic index with a database_name field. Always include database_name as a filter. Pro: Not sure. Maybe good for tech support or billing dept to search all databases for info. Con: Search is slower (than index-per-client method). Flawed security if query filter removed. One last thing: I would also accept an answer that uses Solr (the extension of Lucene). Perhaps it's better suited for this problem. Not sure.

    Read the article

  • Reloading a Flickr request from author

    - by user1797325
    I have my flickr gallery coded, but I want to be able to click on the Author's name and the page will reload just with the images from that author. Here is my current code <!DOCTYPE html> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <title>Outbrain Test Gallery</title> <link href="styles.css" rel="stylesheet" type="text/css" /> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script> <script src="jquery.masonry.min.js"></script> <script type="text/javascript"> $(window).load(function(){ $('#flkr').masonry({ // options itemSelector : '.tiles', isResizable: true, }); }); </script> <script type="text/javascript">$(document).ready(function() { $('<ul />').prependTo('#flkr'); $.getJSON('http://api.flickr.com/services/feeds/photos_public.gne?lang=en-us&format=json&jsoncallback=?', function(data) { $.each(data.items, function(i,item) { var squares = (item.media.m).replace('_m.jpg', '_q.jpg'); if(i <= 20){ $('<img/>').attr({ alt: item.title, src: squares, height: '150', width: '150' }).appendTo('#flkr ul').wrap('<li class="tiles"><a href="' + item.link + '"></a><p class="title">' + item.title + '</p><p class="author"><strong>By:</strong> ' + item.author + '</p><p class="date"><strong>Uploaded: </strong>' + item.published + '</p></li>'); } }); }); }); </script> </head> <body> <ul id="flkr"></ul> </body> </html>

    Read the article

  • C# video equivalent to Image.FromStream? Or changing the scope of the following script to allow video

    - by Daniel
    The following is a part of an upload class in a c# script. I'm a php programmer, I've never messed with c# much but I'm trying to learn. This upload script will not handle anything except images, I need to adapt this class to handle other types of media also, or rewrite it all together. If I'm correct, I realize that using (Image image = Image.FromStream(file.InputStream)) basically says that the scope of the following is Image, only an image can be used or the object is discarded? And also that the variable image is being created from an Image from the file stream, which I understand to be, like... the $_FILES array in php? I dunno, I don't really care about making thumbnails right now either way, so if this can be taken out and still process the upload I'm totally cool with that, I just haven't had any luck getting this thing to take anything but images, even when commenting out that whole part of the class... protected void Page_Load(object sender, EventArgs e) { string dir = Path.Combine(Request.PhysicalApplicationPath, "files"); if (Request.Files.Count == 0) { // No files were posted Response.StatusCode = 500; } else { try { // Only one file at a time is posted HttpPostedFile file = Request.Files[0]; // Size limit 100MB if (file.ContentLength > 102400000) { // File too large Response.StatusCode = 500; } else { string id = Request.QueryString["userId"]; string[] folders = userDir(id); foreach (string folder in folders) { dir = Path.Combine(dir, folder); if (!Directory.Exists(dir)) Directory.CreateDirectory(dir); } string path = Path.Combine(dir, String.Concat(Request.QueryString["batchId"], "_", file.FileName)); file.SaveAs(path); // Create thumbnail int dot = path.LastIndexOf('.'); string thumbpath = String.Concat(path.Substring(0, dot), "_thumb", path.Substring(dot)); using (Image image = Image.FromStream(file.InputStream)) { // Find the ratio that will create maximum height or width of 100px. double ratio = Math.Max(image.Width / 100.0, image.Height / 100.0); using (Image thumb = new Bitmap(image, new Size((int)Math.Round(image.Width / ratio), (int)Math.Round(image.Height / ratio)))) { using (Graphics graphic = Graphics.FromImage(thumb)) { // Make sure thumbnail is not crappy graphic.SmoothingMode = SmoothingMode.HighQuality; graphic.InterpolationMode = InterpolationMode.High; graphic.CompositingQuality = CompositingQuality.HighQuality; // JPEG ImageCodecInfo codec = ImageCodecInfo.GetImageEncoders()[1]; // 90% quality EncoderParameters encode = new EncoderParameters(1); encode.Param[0] = new EncoderParameter(Encoder.Quality, 90L); // Resize graphic.DrawImage(image, new Rectangle(0, 0, thumb.Width, thumb.Height)); // Save thumb.Save(thumbpath, codec, encode); } } } // Success Response.StatusCode = 200; } } catch { // Something went wrong Response.StatusCode = 500; } } }
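
    If the immediate goal is just to accept video (or anything else) without touching the thumbnail logic, a sketch of a guard around the image-specific part - sniffing ContentType is an assumption about how you want to distinguish media types, and it trusts the client's declared type:

        // Save whatever was posted first
        file.SaveAs(path);

        // Only images go through Image.FromStream / thumbnail generation
        bool isImage = file.ContentType.StartsWith("image/", StringComparison.OrdinalIgnoreCase);
        if (isImage)
        {
            file.InputStream.Position = 0;   // rewind the stream after SaveAs
            using (Image image = Image.FromStream(file.InputStream))
            {
                // ... existing thumbnail code unchanged ...
            }
        }

        Response.StatusCode = 200;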

    Read the article

  • Is order of parameters for database Command object really important?

    - by nawfal
    I was debugging some database code and found that the UPDATE never actually happened, although the code never failed as such. This is the code:

        condb.Open();
        OleDbCommand dbcom = new OleDbCommand("UPDATE Word SET word=?,sentence=?,mp3=? WHERE id=? AND exercise_id=?", condb);
        dbcom.Parameters.AddWithValue("id", wd.ID);
        dbcom.Parameters.AddWithValue("exercise_id", wd.ExID);
        dbcom.Parameters.AddWithValue("word", wd.Name);
        dbcom.Parameters.AddWithValue("sentence", wd.Sentence);
        dbcom.Parameters.AddWithValue("mp3", wd.Mp3);

    But after some tweaking this worked:

        condb.Open();
        OleDbCommand dbcom = new OleDbCommand("UPDATE Word SET word=?,sentence=?,mp3=? WHERE id=? AND exercise_id=?", condb);
        dbcom.Parameters.AddWithValue("word", wd.Name);
        dbcom.Parameters.AddWithValue("sentence", wd.Sentence);
        dbcom.Parameters.AddWithValue("mp3", wd.Mp3);
        dbcom.Parameters.AddWithValue("id", wd.ID);
        dbcom.Parameters.AddWithValue("exercise_id", wd.ExID);

    Why do the parameters for the WHERE clause have to be added last with an OleDb connection? Having worked with MySQL previously, I could (and usually do) write the WHERE clause parameters first because that's more logical to me. Is parameter order important when querying a database in general? Is it some performance concern? Is there a specific order to be maintained for other databases like DB2, SQLite, etc.?

    Update: I got rid of the ? placeholders and used proper names, with and without @. The order really is important. In both cases the update only happened when the WHERE clause parameters were added last. To make matters worse, in complex queries it's hard to know which order Access expects, and whenever the order is wrong the query doesn't do its intended job, with no warning or error!

    Read the article

  • Linux configurations that would affect Java memory usage?

    - by wmacura
    Hi, Background: I have a set of java background workers I start as part of my webapp. I develop locally on Ubuntu 10.10 and deploy to an Ubuntu 10.04LTS server (a media temple (ve) instance). They're both running the same JVM: Sun JVM 1.6.0_22-b04. As part of the initialization script each worker is started with explicit Xmx, Xms, and XX:MaxPermGen settings. Yet somehow locally all 10 workers use 250MB, while on the server they use more than 2.7GB. I don't know how to begin to track this down. I thought the Ubuntu (and thus, kernel) version might make a difference, but I tried an old 10.04 VM and it behaves as expected. I've noticed that the machine does not seem to ever use memory for buffer or cache (according to htop), which seems a bit strange, but perhaps normal for a server? (edited) Some info: (server) root@devel:/app/axir/target# uname -a Linux devel 2.6.18-028stab069.5 #1 SMP Tue May 18 17:26:16 MSD 2010 x86_64 GNU/Linux (local) wiktor@beastie:~$ uname -a Linux beastie 2.6.35-25-generic #44-Ubuntu SMP Fri Jan 21 17:40:44 UTC 2011 x86_64 GNU/Linux (edited) Comparing PS output: (ps -eo "ppid,pid,cmd,rss,sz,vsz") PPID PID CMD RSS SZ VSZ (local) 1588 1615 java -cp axir-distribution. 25484 234382 937528 1615 1631 java -cp /home/wiktor/Code/ 83472 163059 652236 1615 1657 java -cp /home/wiktor/Code/ 70624 89135 356540 1615 1658 java -cp /home/wiktor/Code/ 37652 77625 310500 1615 1669 java -cp /home/wiktor/Code/ 38096 77733 310932 1615 1675 java -cp /home/wiktor/Code/ 37420 61395 245580 1615 1684 java -cp /home/wiktor/Code/ 38000 77736 310944 1615 1703 java -cp /home/wiktor/Code/ 39180 78060 312240 1615 1712 java -cp /home/wiktor/Code/ 38488 93882 375528 1615 1719 java -cp /home/wiktor/Code/ 38312 77874 311496 1615 1726 java -cp /home/wiktor/Code/ 38656 77958 311832 1615 1727 java -cp /home/wiktor/Code/ 78016 89429 357716 (server) 22522 23560 java -cp axir-distribution. 24860 285196 1140784 23560 23585 java -cp /app/axir/target/a 100764 161629 646516 23560 23667 java -cp /app/axir/target/a 72408 92682 370728 23560 23670 java -cp /app/axir/target/a 39948 97671 390684 23560 23674 java -cp /app/axir/target/a 40140 81586 326344 23560 23739 java -cp /app/axir/target/a 39688 81542 326168 They look very similar. In fact, the question now is why, if I add up the virtual memory usage on the server (3.2GB) does it more closely reflect 2.4GB of memory used (according to free), yet locally the virtual memory used adds up to a much more substantial 4.7GB but only actually uses ~250MB. It seems that perhaps memory isn't being shared as aggressively. (if that's even possible) Thank you for your help, Wiktor

    Read the article

  • MS Access 07 - Q re lookup column vs many-to-many; Q re checkboxes in many-to-many forms

    - by TBinLondon
    Hello, I'm creating a database with Access. This is just a test database, similar to my requirements, so I can get my skills up before creating one for work. I've created a database for a fictional school as this is a good playground and rich data (many students have many subjects have many teachers, etc). Question 1 What is the difference, if any, between using a Lookup column and a many-to-many associate table? Example: I have Tables 'Teacher' and 'Subject'. Many teachers have many subjects. I can, and have, created a table 'Teacher_Subject' and run queries with this. I have then created a lookup column in teachers table with data from subjects. The lookup column seems to take the place of the teacher_subject table. (though the data on relationships is obviously duplicated between lookup table and teacher_subject and may vary). Which one is the 'better' option? Is there a snag with using lookup tables? (I realize that this is a very 'general' question. Links to other resources and answers saying 'that depends...' are appreciated) Question 2 What attracts me to lookup tables is the following: When creating a form for entering subjects for teachers, with lookup I can simply create checkboxes and click a subject for a teacher 'on' or 'off'. Each click on/off creates/removes a record in the lookup column (which replaces teacher_subject). If I use a form from a query from teacher subject with teacher as main form and subject as subform I run into this problem: In the subform I can either select each subject that teacher has in a bombo box, i.e. click, scroll down, select, go to next row, click, scroll down, etc. (takes too long) OR I can create a list box listing all available subjects in each row but allowing me to select only one. (takes up too much space). Is it possible to have a click on/off list box for teacher_subject, creating/removing a record there with each click? Note - I know zero SQL or VB. If the correct answer is "you need to know SQL for this" then that's cool. I just need to know. Thanks!

    Read the article

  • Database design advice needed.

    - by user346271
    Hi all, I'm a lone developer for a telecoms company, and am after some database design advice from anyone with a bit of time to answer.

    I am inserting ~2 million rows each day into one table; these tables then get archived and compressed on a monthly basis. Each monthly table contains ~15,000,000 rows, and this is increasing month on month. For every insert above I combine the data from rows which belong together and write it to another "correlated" table. That table is currently not being archived, as I need to make sure I never miss an update to it (hope that makes sense), although in general this information should remain fairly static after a couple of days of processing. All of the above is working perfectly.

    However, my company now wishes to run some stats against this data, and the tables are getting too large to return results in what would be deemed a reasonable time, even with the appropriate indexes set. So after all the above my question is quite simple: should I write a script which groups the data from my correlated table into smaller tables, or should I store the query result sets in something like memcache? I'm already using MySQL's query cache, but because I have limited control over how long the data is stored, it's not working ideally.

    The main advantages I can see of using something like memcache:
    - No blocking on my correlated table after the query has been cached.
    - Greater flexibility in sharing the collected data between the backend collector and the front-end processor (i.e. custom reports could be written in the backend and their results stored in the cache under a key, which then gets shared with anyone who wants to see that report's data).
    - Redundancy and scalability if we start sharing this data with a large number of customers.

    The main disadvantages of using something like memcache:
    - Data is not persistent if the machine is rebooted or the cache is flushed.

    The main advantages of using MySQL:
    - Persistent data.
    - Fewer code changes (although adding something like memcache is trivial anyway).

    The main disadvantages of using MySQL:
    - Have to define table templates every time I want to provide a new set of grouped data.
    - Have to write a program which loops through the correlated data and fills these new tables.
    - Will potentially still grow slower as the data continues to be filled.

    Apologies for quite a long question. It's helped me to write down these thoughts here anyway, and any advice/help/experience with this sort of problem would be greatly appreciated. Many thanks. Alan
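
    For the "smaller tables" route, a minimal sketch of what a pre-aggregated summary table could look like (all table and column names here are invented for illustration; the refresh would run from cron or the MySQL event scheduler):

        -- Reporting queries read this instead of scanning the 15M-row correlated table
        CREATE TABLE correlated_daily_summary (
            summary_date   DATE   NOT NULL,
            customer_id    INT    NOT NULL,
            record_count   INT    NOT NULL,
            total_duration BIGINT NOT NULL,
            PRIMARY KEY (summary_date, customer_id)
        );

        -- Scheduled refresh: re-aggregate only the most recent day, since older
        -- correlated rows are static after a couple of days
        REPLACE INTO correlated_daily_summary
        SELECT DATE(created_at), customer_id, COUNT(*), SUM(duration)
        FROM correlated
        WHERE created_at >= CURRENT_DATE - INTERVAL 1 DAY
        GROUP BY DATE(created_at), customer_id;

    This keeps MySQL's persistence while giving memcache-like read speed for the stats; memcache can still sit in front of it for report sharing if needed.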

    Read the article

  • Execution plan issue requires reset on SQL Server 2005, how to determine cause?

    - by Tony Brandner
    We have a web application that delivers training to thousands of corporate students running on top of SQL Server 2005. Recently, we started seeing that a single specific query in the application went from 1 second to about 30 seconds in terms of execution time. The application started throwing timeouts in that area. Our first thought was that we may have incorrect indexes, so we reviewed the tables and indexes. However, similar queries elsewhere in the application also run quickly. Reviewing the indexes showed us that they were configured as expected. We were able to narrow it down to a single query, not a stored procedure. Running this query in SQL Studio also runs quickly. We tried running the application in a different server environment. So a different web server with the same query, parameters and database. The query still ran slow. The query is a fairly large one related to determining a student's current list of training. It includes joins and left joins on a dozen tables and subqueries. A few of the tables are fairly large (hundreds of thousands of rows) and some of the other tables are small lookup tables. The query uses a grouping clause and a few where conditions. A few of the tables are quite active and the contents change often but the volume of added rows doesn't seem extreme. These symptoms led us to consider the execution plan. First off, as soon as we reset the execution plan cache with the SQL command 'DBCC FREEPROCCACHE', the problem went away. Unfortunately, the problem started to reoccur within a few days. The problem has continued to plague us for awhile now. It's usually the same query, but we did appear to see the problem occur in another single query recently. It happens enough to be a nuisance. We're having a heck of a time trying to fix it since we can't reproduce it in any other environment other than production. I have downloaded the High Availability guide from Red Gate and I read up more on execution plans. I hope to run the profiler on the live server, but I'm a bit concerned about impact. I would like to ask - what is the best way to figure out what is triggering this problem? Has anyone else seen this same issue?

    Read the article

  • Model self-referential collections in Rails

    - by Najitaka
    I have written an application for an online clothing store in Rails 2.3.5. I want to show related Products when a customer views the Product Detail page. For example, if the customer views the detail page for a suit, I'd like to display the accessory products that match the dress such as a vest, shoes, and belt. I have named the related products an Ensemble. However, the vest, shoes, and belts are also Products which is what has me struggling. I have it working as follows but I know it's not the Rails way. I have a Products table for all of the products. Not important here but I also have a ProductDetails table. I have an Ensembles table that has the following columns: product_id - the main or origination product, the one displayed on the detail page outfit_id - the related or accessory product In setting up the data, on the Products list, for each Product I have an Ensemble link. This link takes you to the index action in the Ensembles controller. Using the id from the "main" Product, I find all of the associated Ensemble rows by product_id or I create a new ensemble and assign the id from the main product as the product_id. I'd like to just be able to do @product.related_products to get an Ensemble collection. Also on the index page I list the columns of the main product so the user can be sure their main product was the one they selected from the list. I also have a select list of the other products, with an Add to Ensemble action. Finally on the same index page, I have a table that displays the products that are already in the ensemble and in that list each row has a destroy link to remove a particular product from the ensemble. It would be nice if given a single Ensemble row @ensemble I could do @ensemble.product to get the Product related to the outfit_id of the ensemble row. I've got it working without associations but I have to run queries in the controller to build my own @product, @ensemble, and @ensembles collections. Also the only way I found to destroy an ensemble row is by Ensemble.connection.delete(sql to delete), simple @ensemble.destroy doesn't work. Anyone know how I would set up the associations or have a link to a site explaining a similar setup. None of the examples I found use the same table. They have A related to B through C. I want A related to other A through B.

    Read the article
