Search Results

Search found 8161 results on 327 pages for 'django queries'.

Page 314/327

  • What are the Options for Storing Hierarchical Data in a Relational Database?

    - by orangepips
    Good Overviews
      - One more Nested Intervals vs. Adjacency List comparison: the best comparison of Adjacency List, Materialized Path, Nested Set and Nested Interval I've found.
      - Models for hierarchical data: slides with good explanations of tradeoffs and example usage.
      - Representing hierarchies in MySQL: very good overview of Nested Set in particular.
      - Hierarchical data in RDBMSs: the most comprehensive and well-organized set of links I've seen, but not much in the way of explanation.

    Options (the ones I am aware of, with their general features):

    Adjacency List
      - Columns: ID, ParentID
      - Easy to implement. Cheap node moves, inserts, and deletes.
      - Expensive to find level (can store as a computed column), ancestry & descendants (Bridge Hierarchy combined with a level column can solve this), and path (a Lineage Column can solve this).
      - Use Common Table Expressions in those databases that support them to traverse (a minimal sketch follows at the end of this entry).

    Nested Set (a.k.a. Modified Preorder Tree Traversal)
      - First described by Joe Celko - covered in depth in his book Trees and Hierarchies in SQL for Smarties.
      - Columns: Left, Right
      - Cheap level, ancestry, descendants.
      - Compared to the Adjacency List, moves, inserts, and deletes are more expensive.
      - Requires a specific sort order (e.g. created), so sorting all descendants in a different order requires additional work.

    Nested Intervals
      - A combination of Nested Set and Materialized Path where the left/right columns are floating-point decimals instead of integers and encode the path information.

    Bridge Table (a.k.a. Closure Table; there are some good ideas about how to use triggers for maintaining this approach)
      - Columns: ancestor, descendant
      - Stands apart from the table it describes.
      - Can include some nodes in more than one hierarchy.
      - Cheap ancestry and descendants (albeit not in what order).
      - For complete knowledge of a hierarchy it needs to be combined with another option.

    Flat Table
      - A modification of the Adjacency List that adds a Level and Rank (e.g. ordering) column to each record.
      - Expensive move and delete.
      - Cheap ancestry and descendants.
      - Good use: threaded discussion - forums / blog comments.

    Lineage Column (a.k.a. Materialized Path, Path Enumeration)
      - Column: lineage (e.g. /parent/child/grandchild/...)
      - There is a limit to how deep the hierarchy can be.
      - Descendants are cheap (e.g. LEFT(lineage, #) = '/enumerated/path').
      - Ancestry is tricky (database-specific queries).

    Database-Specific Notes
      - MySQL: use session variables for the Adjacency List approach.
      - Oracle: use CONNECT BY to traverse Adjacency Lists.
      - PostgreSQL: the ltree datatype for Materialized Path.
      - SQL Server: general summary; 2008 offers the HierarchyId data type, which appears to help with the Lineage Column approach and expands the depth that can be represented.
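
    To make the Adjacency List option above concrete, here is a minimal sketch of the traversal it relies on: a recursive common table expression that returns a node together with all of its descendants and their depth. The Category table and Name column are placeholders invented for this example; only ID and ParentID come from the option's description. SQL Server 2005+ accepts it as written, while PostgreSQL and MySQL 8 spell the opening clause WITH RECURSIVE.

        WITH Descendants AS (
            -- anchor: the node we start from
            SELECT ID, ParentID, Name, 0 AS Depth
            FROM Category
            WHERE ID = 1
            UNION ALL
            -- recursive step: children of anything already collected
            SELECT c.ID, c.ParentID, c.Name, d.Depth + 1
            FROM Category AS c
            INNER JOIN Descendants AS d ON c.ParentID = d.ID
        )
        SELECT ID, ParentID, Name, Depth
        FROM Descendants
        ORDER BY Depth;

    The same shape also illustrates the "cheap moves" point: reparenting an entire subtree in an Adjacency List is a single UPDATE of one row's ParentID.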

    Read the article

  • Codeigniter validation help

    - by Drew McGhie
    I'm writing a system where users can generate/run queries on demand based on the values of 4 dropdown lists. The lists are dynamically generated based on a number of factors, but at this point, I'm having problems validating the input using codeigniter's built in validation classes. I think I have things out of order, and I've tried looking at the codeigniter site, but I think I'm tripping myself up. in my view(/dashboard/dashboard_index.php), I have the following code block: <?=form_open('dashboard/dashboard_add');?> <select ... name='selMetric'> <select ... name='selPeriod'> <select ... name='selSpan'> <select ... name='selTactic'> <input type="submit" name="submit_new_query" value="Add New Graph" class="minbutton" ></input> <?=form_close();?> Then in my controller, I have the following 2 methods: function index() { $this->load->helper(array('form', 'url')); $this->load->library('validation'); //population of $data $this->load->tile('dashboard/dashboard_index', $data); } function dashboard_add() { $rules['selMetric'] = "callback_sel_check"; $rules['selPeriod'] = "callback_sel_check"; $rules['selSpan'] = "callback_sel_check"; $rules['selTactic'] = "callback_sel_check"; $this->validation->set_rules($rules); $fields['selMetric'] = "Metric"; $fields['selPeriod'] = "Time Period"; $fields['selSpan'] = "Time Span"; $fields['selTactic'] = "Tactic"; $this->validation->set_fields($fields); if ($this->validation->run() == false) { $this->index(); } else { //do stuff with validation information } } Here's my issue. I can get the stuff to validate correctly, but for the number of errors I have, I get Unable to access an error message corresponding to your field name. as the error message for everything. I think my issue that I have the $rules and $fields stuff in the wrong place, but I've tried a few permutations and I just keep getting it wrong. I was hoping I could get some advice on the correct place to put things.

    Read the article

  • How to select all parent objects into the DataContext using a single LINQ query?

    - by too
    I am looking for an answer to a specific problem of fetching whole LINQ object hierarchy using single SELECT. At first I was trying to fill as much LINQ objects as possible using LoadOptions, but AFAIK this method allows only single table to be linked in one query using LoadWith. So I have invented a solution to forcibly set all parent objects of entity which of list is to be fetched, although there is a problem of multiple SELECTS going to database - a single query results in two SELECTS with the same parameters in the same LINQ context. For this question I have simplified this query to popular invoice example: public static class Extensions { public static IEnumerable<T> ForEach<T>(this IEnumerable<T> collection, Action<T> func) { foreach(var c in collection) { func(c); } return collection; } } public IEnumerable<Entry> GetResults(AppDataContext context, int CustomerId) { return ( from entry in context.Entries join invoice in context.Invoices on entry.EntryInvoiceId equals invoice.InvoiceId join period in context.Periods on invoice.InvoicePeriodId equals period.PeriodId // LEFT OUTER JOIN, store is not mandatory join store in context.Stores on entry.EntryStoreId equals store.StoreId into condStore from store in condStore.DefaultIfEmpty() where (invoice.InvoiceCustomerId = CustomerId) orderby entry.EntryPrice descending select new { Entry = entry, Invoice = invoice, Period = period, Store = store } ).ForEach(x => { x.Entry.Invoice = Invoice; x.Invoice.Period = Period; x.Entry.Store = Store; } ).Select(x => x.Entry); } When calling this function and traversing through result set, for example: var entries = GetResults(this.Context); int withoutStore = 0; foreach(var k in entries) { if(k.EntryStoreId == null) withoutStore++; } the resulting query to database looks like (single result is fetched): SELECT [t0].[EntryId], [t0].[EntryInvoiceId], [t0].[EntryStoreId], [t0].[EntryProductId], [t0].[EntryQuantity], [t0].[EntryPrice], [t1].[InvoiceId], [t1].[InvoiceCustomerId], [t1].[InvoiceDate], [t1].[InvoicePeriodId], [t2].[PeriodId], [t2].[PeriodName], [t2].[PeriodDateFrom], [t4].[StoreId], [t4].[StoreName] FROM [Entry] AS [t0] INNER JOIN [Invoice] AS [t1] ON [t0].[EntryInvoiceId] = [t1].[InvoiceId] INNER JOIN [Period] AS [t2] ON [t2].[PeriodId] = [t1].[InvoicePeriodId] LEFT OUTER JOIN ( SELECT 1 AS [test], [t3].[StoreId], [t3].[StoreName] FROM [Store] AS [t3] ) AS [t4] ON [t4].[StoreId] = ([t0].[EntryStoreId]) WHERE (([t1].[InvoiceCustomerId]) = @p0) ORDER BY [t0].[InvoicePrice] DESC -- @p0: Input Int (Size = 0; Prec = 0; Scale = 0) [186] -- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 3.5.30729.1 SELECT [t0].[EntryId], [t0].[EntryInvoiceId], [t0].[EntryStoreId], [t0].[EntryProductId], [t0].[EntryQuantity], [t0].[EntryPrice], [t1].[InvoiceId], [t1].[InvoiceCustomerId], [t1].[InvoiceDate], [t1].[InvoicePeriodId], [t2].[PeriodId], [t2].[PeriodName], [t2].[PeriodDateFrom], [t4].[StoreId], [t4].[StoreName] FROM [Entry] AS [t0] INNER JOIN [Invoice] AS [t1] ON [t0].[EntryInvoiceId] = [t1].[InvoiceId] INNER JOIN [Period] AS [t2] ON [t2].[PeriodId] = [t1].[InvoicePeriodId] LEFT OUTER JOIN ( SELECT 1 AS [test], [t3].[StoreId], [t3].[StoreName] FROM [Store] AS [t3] ) AS [t4] ON [t4].[StoreId] = ([t0].[EntryStoreId]) WHERE (([t1].[InvoiceCustomerId]) = @p0) ORDER BY [t0].[InvoicePrice] DESC -- @p0: Input Int (Size = 0; Prec = 0; Scale = 0) [186] -- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 3.5.30729.1 The question is why there are two queries and how can I fetch 
LINQ objects without such hacks?

    Read the article

  • Simultaneous connections with PHP and SOAP?

    - by Dov
    I'm new to using SOAP and understanding the utmost basics of it. I create a client resource/connection, I then run some queries in a loop and I'm done. The issue I am having is when I increase the iterations of the loop, ie: from 100 to 1000, it seems to run out of memory and drops an internal server error. How could I possibly run either a) multiple simaltaneous connections or b) create a connection, 100 iterations, close connection, create connection.. etc. "a)" looks to be the better option but I have no clue as to how to get it up and running whilst keeping memory (I assume opening and closing connections) at a minimum. Thanks in advance! index.php <?php // set loops to 0 $loops = 0; // connection credentials and settings $location = 'https://theconsole.com/'; $wsdl = $location.'?wsdl'; $username = 'user'; $password = 'pass'; // include the console and client classes include "class_console.php"; include "class_client.php"; // create a client resource / connection $client = new Client($location, $wsdl, $username, $password); while ($loops <= 100) { $dostuff; } ?> class_console.php <?php class Console { // the connection resource private $connection = NULL; /** * When this object is instantiated a connection will be made to the console */ public function __construct($location, $wsdl, $username, $password, $proxyHost = NULL, $proxyPort = NULL) { if(is_null($proxyHost) || is_null($proxyPort)) $connection = new SoapClient($wsdl, array('login' => $username, 'password' => $password)); else $connection = new SoapClient($wsdl, array('login' => $username, 'password' => $password, 'proxy_host' => $proxyHost, 'proxy_port' => $proxyPort)); $connection->__setLocation($location); $this->connection = $connection; return $this->connection; } /** * Will print any type of data to screen, where supported by print_r * * @param $var - The data to print to screen * @return $this->connection - The connection resource **/ public function screen($var) { print '<pre>'; print_r($var); print '</pre>'; return $this->connection; } /** * Returns a server / connection resource * * @return $this->connection - The connection resource */ public function srv() { return $this->connection; } } ?>

    Read the article

  • Building a QueryExpression where name field is either A or B

    - by Mike
    I'm trying to build a Dynamics CRM 4 query so that I can get calendar events that are named either "Event A" or "Event B". A QueryByAttribute doesn't seem to do the job as I cannot specify a condition where the field called "event_name" = "Event A" of "event_name" = "Event B". When using the QueryExpression, I've found the FilterExpression applies to the Referencing Entity. I don't know if the FilterExpression can be used on the Referenced Entity at all. The example below is something like what I want to achieve, though this would return an empty result set as it will go looking in the entity called "my_event_response" for a "name" attribute. It's starting to look like I will need to run several queries to get this but this is less efficient than if I can submit it all at once. ColumnSet columns = new ColumnSet(); columns.Attributes = new string[]{ "event_name", "eventid", "startdate", "city" }; ConditionExpression eventname1 = new ConditionExpression(); eventname1.AttributeName = "event_name"; eventname1.Operator = ConditionOperator.Equal; eventname1.Values = new string[] { "Event A" }; ConditionExpression eventname2 = new ConditionExpression(); eventname2.AttributeName = "event_name"; eventname2.Operator = ConditionOperator.Equal; eventname2.Values = new string[] { "Event B" }; FilterExpression filter = new FilterExpression(); filter.FilterOperator = LogicalOperator.Or; filter.Conditions = new ConditionExpression[] { eventname1, eventname2 }; LinkEntity link = new LinkEntity(); link.LinkCriteria = filter; link.LinkFromEntityName = "my_event"; link.LinkFromAttributeName = "eventid"; link.LinkToEntityName = "my_event_response"; link.LinkToAttributeName = "eventid"; QueryExpression query = new QueryExpression(); query.ColumnSet = columns; query.EntityName = EntityName.mbs_event.ToString(); query.LinkEntities = new LinkEntity[] { link }; RetrieveMultipleRequest request = new RetrieveMultipleRequest(); request.Query = query; return (RetrieveMultipleResponse)crmService.Execute(request); I'd appreciate some advice on how to get the data I need.

    Read the article

  • .NET: Is there a way to finagle a default namespace in an XPath 1.0 query?

    - by Cheeso
    I'm building a tool that performs xpath 1.0 queries on XHTML documents. The requirement to use a namespace prefix in the query is killing me. The query looks like this: html/body/div[@class='contents']/div[@class='body']/ div[@class='pgdbbyauthor']/h2[a[@name][starts-with(.,'Quick')]]/ following-sibling::ul[1]/li/a (all on one line) ...which is bad enough, except because it's xpath 1.0, I need to use an explicit namespace prefix on each QName, so it looks like this: ns1:html/ns1:body/ns1:div[@class='contents']/ns1:div[@class='body']/ ns1:div[@class='pgdbbyauthor']/ns1:h2[ns1:a[@name][starts-with(.,'Quick')]]/ following-sibling::ns1:ul[1]/ns1:li/ns1:a To set up the query, I do something like this: var xpathDoc = new XPathDocument(new StringReader(theText)); var nav = xpathDoc.CreateNavigator(); var xmlns = new XmlNamespaceManager(nav.NameTable); foreach (string prefix in xmlNamespaces.Keys) xmlns.AddNamespace(prefix, xmlNamespaces[prefix]); XPathNodeIterator selection = nav.Select(xpathExpression, xmlns); But what I want is for the xpathExpression to use the implicit default namespace. Is there a way for me to transform the unadorned xpath expression, after it's been written, to inject a namespace prefix for each element name in the query? I'm thinking, anything between two slashes, I could inject a prefix there. Excepting of course axis names like "parent::" and "preceding-sibling::" . And wildcards. That's what I mean by "finagle a default namespace". Is this hack gonna work? Addendum Here's what I mean. suppose I have an xpath expression, and before passing it to nav.Select(), I transform it. Something like this: string FixupWithDefaultNamespace(string expr) { string s = expr; s = Regex.Replace(s, "^(?!::)([^/:]+)(?=/)", "ns1:$1"); // beginning s = Regex.Replace(s, "/([^/:]+)(?=/)", "/ns1:$1"); // stanza s = Regex.Replace(s, "::([A-Za-z][^/:*]*)(?=/)", "::ns1:$1"); // axis specifier s = Regex.Replace(s, "\\[([A-Za-z][^/:*\\(]*)(?=[\\[\\]])", "[ns1:$1"); // predicate s = Regex.Replace(s, "/([A-Za-z][^/:]*)(?!<::)$", "/ns1:$1"); // end s = Regex.Replace(s, "^([A-Za-z][^/:]*)$", "ns1:$1"); // edge case s = Regex.Replace(s, "([-A-Za-z]+)\\(([^/:\\.,\\)]+)(?=[,\\)])", "$1(ns1:$2"); // xpath functions return s; } This actually works for simple cases I tried. To use the example from above - if the input is the first xpath expression, the output I get is the 2nd one, with all the ns1 prefixes. The real question is, is it hopeless to expect this Regex.Replace approach to work, as the xpath expressions get more complicated?

    Read the article

  • Copying metadata over a database link in Oracle 10g

    - by Tunde
    Thanks in advance for your help, experts. I want to be able to copy over database objects from database A into database B with a procedure created on database B. I created a database link between the two and have tweaked the get_ddl function of dbms_metadata to look like this:

        create or replace function GetDDL (
            p_name in MetaDataPkg.t_string,
            p_type in MetaDataPkg.t_string
        ) return MetaDataPkg.t_longstring
        is
            -- clob
            v_clob clob;
            -- array of long strings
            c_SYSPrefix constant char(4) := 'SYS_';
            c_doublequote constant char(1) := '"';
            v_longstrings metadatapkg.t_arraylongstring;
            v_schema metadatapkg.t_string;
            v_fullength pls_integer := 0;
            v_offset pls_integer := 0;
            v_length pls_integer := 0;
        begin
            SELECT DISTINCT OWNER INTO v_schema
            FROM all_objects@ENTORA
            WHERE object_name = upper(p_name);

            -- get DDL
            v_clob := dbms_metadata.get_ddl(p_type, upper(p_name), upper(v_schema));

            -- get CLOB length
            v_fullength := dbms_lob.GetLength(v_clob);

            for nIndex in 1..ceil(v_fullength / 32767) loop
                v_offset := v_length + 1;
                v_length := least(v_fullength - (nIndex - 1) * 32767, 32767);
                dbms_lob.read(v_clob, v_length, v_offset, v_longstrings(nIndex));

                -- Remove table's owner from DDL string:
                v_longstrings(nIndex) := replace(
                    v_longstrings(nIndex),
                    c_doublequote || user || c_doublequote || '.',
                    '');

                -- Remove the following from DDL string:
                -- 1) "new line" characters (chr(10))
                -- 2) leading and trailing spaces
                v_longstrings(nIndex) := ltrim(rtrim(replace(v_longstrings(nIndex), chr(10), '')));
            end loop;

            -- close CLOB
            if (dbms_lob.isOpen(v_clob) > 0) then
                dbms_lob.close(v_clob);
            end if;

            return v_longstrings(1);
        end GetDDL;

    The tweak is there to remove the schema prefix that usually comes with the metadata. I get a null value whenever I run this function over the database link with the following queries:

        select getddl('TABLE', 'TABLE1') from user_tables@ENTORA where table_name = 'TABLE1';
        select getddl('TABLE', 'TABLE1') from dual@ENTORA;

    For reference, t_string is varchar2(30), t_longstring is varchar2(32767), and type t_ArrayLongString is table of t_longstring. I would really appreciate it if anyone could help. Many thanks.
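
    One detail worth flagging here: a function referenced without a link suffix executes on the calling (local) database even when the FROM clause points at a remote table, so dbms_metadata in the function above is resolving names against the local dictionary rather than ENTORA's. A hedged alternative - it assumes the same GetDDL function has also been compiled on the ENTORA database - is to invoke the function itself through the link, since the name@dblink syntax runs it on the remote site:

        -- run from database B; GetDDL exists on database A (the ENTORA side)
        select getddl@ENTORA('TABLE', 'TABLE1') from dual;

    Because the function returns a varchar2 (t_longstring) rather than the underlying clob, the result can travel back across the link, which a LOB locator selected over a database link in 10g generally cannot.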

    Read the article

  • Gotchas INSERTing into SQLite on Android?

    - by paul.meier
    Hi friends, I'm trying to set up a simple SQLite database in Android, handling the schema via a subclass of SQLiteOpenHelper. However, when I query my tables, the columns I think I've inserted are never present. Namely, in SQLiteOpenHelper's onCreate(SQLiteDatabase db) method, I use db.execSQL() to run CREATE TABLE commands, then have tried both db.execSQL and db.insert() to run INSERT commands on the tables I've just created. This appears to run fine, but when I try to query them I always get 0 rows returned (for debugging, the queries I'm running are simple SELECT * FROM table and checking the Cursor's getCount()). Anybody run into anything like this before? These commands seem to run on command-line sqlite3. Are they're gotchas that I'm missing (e.g. INSERTS must/must not be semicolon terminated, or some issue involving multiple tables)? I've attached some of the code below. Thanks for your time, and let me know if I can clarify further. @Override public void onCreate(SQLiteDatabase db) { db.execSQL("CREATE TABLE "+ LEVEL_TABLE +" (" + " "+ _ID +" INTEGER PRIMARY KEY AUTOINCREMENT," + " level TEXT NOT NULL,"+ " rows INTEGER NOT NULL,"+ " cols INTEGER NOT NULL);"); db.execSQL("CREATE TABLE "+ DYNAMICS_TABLE +" (" + " level_id INTEGER NOT NULL," + " row INTEGER NOT NULL,"+ " col INTEGER NOT NULL,"+ " type INTEGER NOT NULL);"); db.execSQL("CREATE TABLE "+ SCORE_TABLE +" (" + " level_id INTEGER NOT NULL," + " score INTEGER NOT NULL,"+ " date_achieved DATE NOT NULL,"+ " name TEXT NOT NULL);"); this.enterFirstLevel(db); } And a sample of the insert code I'm currently using, which gets called in enterFirstLevel() (some values hard-coded just to get it running...): private void insertDynamic(SQLiteDatabase db, int row, int col, int type) { ContentValues values = new ContentValues(); values.put("level_id", "1"); values.put("row", Integer.toString(row)); values.put("col", Integer.toString(col)); values.put("type", Integer.toString(type)); db.insertOrThrow(DYNAMICS_TABLE, "col", values); } Finally, query code looks like this: private Cursor fetchLevelDynamics(int id) { SQLiteDatabase db = this.leveldata.getReadableDatabase(); try { String fetchQuery = "SELECT * FROM " + DYNAMICS_TABLE; String[] queryArgs = new String[0]; Cursor cursor = db.rawQuery(fetchQuery, queryArgs); Activity activity = (Activity) this.context; activity.startManagingCursor(cursor); return cursor; } finally { db.close(); } }

    Read the article

  • for loop with count from array, limit output? PHP

    - by Philip
    print '<div id="wrap">'; print "<table width=\"100%\" border=\"0\" align=\"center\" cellpadding=\"3\" cellspacing=\"3\">"; for($i=0; $i<count($news_comments); $i++) { print ' <tr> <td width="30%"><strong>'.$news_comments[$i]['comment_by'].'</strong></td> <td width="70%">'.$news_comments[$i]['comment_date'].'</td> </tr> <tr> <td></td> <td>'.$news_comments[$i]['comment'].'</td> </tr> '; } print '</table></div>'; $news_comments is a 3 diemensional array from mysqli_fetch_assoc returned from a function elsewhere, for some reason my for loop returns the total of the array sets such as [0][2] etc until it reaches the max amount from the counted $news_comments var which is a return function of LIMIT 10. my problem is if I add any text/html/icons inside the for loop it prints it in this case 11 times even though only array sets 1 and 2 have data inside them. How do I get around this? My function query is as follows: function news_comments() { require_once '../data/queries.php'; // get newsID from the url $urlID = $_GET['news_id']; // run our query for newsID information $news_comments = selectQuery('*', 'news_comments', 'WHERE news_id='.$urlID.'', 'ORDER BY comment_date', 'DESC', '10'); // requires 6 params // check query for results if(!$news_comments) { // loop error session and initiate var foreach($_SESSION['errors'] as $error=>$err) { print htmlentities($err) . 'for News Comments, be the first to leave a comment!'; } } else { print '<div id="wrap">'; print "<table width=\"100%\" border=\"0\" align=\"center\" cellpadding=\"3\" cellspacing=\"3\">"; for($i=0; $i<count($news_comments); $i++) { print ' <tr> <td width="30%"><strong>'.$news_comments[$i]['comment_by'].'</strong></td> <td width="70%">'.$news_comments[$i]['comment_date'].'</td> </tr> <tr> <td></td> <td>'.$news_comments[$i]['comment'].'</td> </tr> '; } print '</table></div>'; } }// End function Any help is greatly appreciated.

    Read the article

  • MSSQL 2005: Update rows in a specified order (like ORDER BY)?

    - by JMTyler
    I want to update rows of a table in a specific order, like one would expect if including an ORDER BY clause, but MS SQL does not support the ORDER BY clause in UPDATE queries. I have checked out this question which supplied a nice solution, but my query is a bit more complicated than the one specified there:

        UPDATE TableA AS Parent
        SET Parent.ColA = Parent.ColA +
            (SELECT TOP 1 Child.ColA
             FROM TableA AS Child
             WHERE Child.ParentColB = Parent.ColB
             ORDER BY Child.Priority)
        ORDER BY Parent.Depth DESC;

    So, what I'm hoping you'll notice is that a single table (TableA) contains a hierarchy of rows, wherein one row can be the parent or child of any other row. The rows need to be updated in order from the deepest child up to the root parent. This is because TableA.ColA must contain an up-to-date concatenation of its own current value with the values of its children (I realize this query only concatenates with one child, but that is for the sake of simplicity - the purpose of the example in this question does not necessitate any more verbosity), therefore the query must update from the bottom up.

    The solution suggested in the question I noted above is as follows:

        UPDATE messages
        SET status = 10
        WHERE ID IN (SELECT TOP (10) Id
                     FROM Table
                     WHERE status = 0
                     ORDER BY priority DESC);

    The reason I don't think I can use this solution is that I am referencing column values from the parent table inside my subquery (see WHERE Child.ParentColB = Parent.ColB), and I don't think two sibling subqueries would have access to each other's data. So far I have only determined one way to merge that suggested solution with my current problem, and I don't think it works:

        UPDATE TableA AS Parent
        SET Parent.ColA = Parent.ColA +
            (SELECT TOP 1 Child.ColA
             FROM TableA AS Child
             WHERE Child.ParentColB = Parent.ColB
             ORDER BY Child.Priority)
        WHERE Parent.Id IN (SELECT Id FROM TableA ORDER BY Parent.Depth DESC);

    The WHERE..IN subquery will not actually return a subset of the rows; it will just return the full list of IDs in the order that I want. However (I don't know for sure - please tell me if I'm wrong) I think the WHERE..IN clause will not care about the order of IDs within the parentheses - it will just check the ID of the row it currently wants to update to see if it's in that list (which they all are), in whatever order it is already trying to update... which would just be a total waste of cycles, because it wouldn't change anything.

    So, in conclusion, I have looked around and can't seem to figure out a way to update in a specified order (and included the reason I need to update in that order, because I am sure I would otherwise get the ever-so-useful "why?" answers), and I am now hitting up Stack Overflow to see if any of you gurus out there who know more about SQL than I do (which isn't saying much) know of an efficient way to do this. It's particularly important that I only use a single query to complete this action. A long question, but I wanted to cover my bases and give you guys as much info to feed off of as possible. :) Any thoughts?
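
    If the single-statement requirement can be bent, one common workaround is to drive the bottom-up order explicitly off the Depth column: one set-based UPDATE per level, deepest level first, so every parent reads children that have already been rewritten. A rough T-SQL sketch using the names from the question - it assumes root rows have Depth = 0, and the ISNULL guard is an addition so childless rows are not set to NULL:

        DECLARE @depth int;
        SELECT @depth = MAX(Depth) FROM TableA;

        WHILE @depth >= 0
        BEGIN
            -- rows at this level concatenate from children one level deeper,
            -- which were already processed by an earlier pass of the loop
            UPDATE Parent
            SET ColA = Parent.ColA +
                ISNULL((SELECT TOP 1 Child.ColA
                        FROM TableA AS Child
                        WHERE Child.ParentColB = Parent.ColB
                        ORDER BY Child.Priority), '')
            FROM TableA AS Parent
            WHERE Parent.Depth = @depth;

            SET @depth = @depth - 1;
        END

    Wrapped in a transaction this behaves like the single ordered UPDATE being asked for, at the cost of one statement per hierarchy level.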

    Read the article

  • Problem Fetching JSON Result with jQuery in Firefox and Chrome (IE8 Works)

    - by senfo
    I'm attempting to parse JSON using jQuery and I'm running into issues. Using the code below, the data keeps coming back null: <!DOCTYPE html> <html> <head> <title>JSON Test</title> </head> <body> <div id="msg"></div> <script src="http://code.jquery.com/jquery-latest.js"></script> <script> $.ajax({ url: 'http://datawarehouse.hrsa.gov/ReleaseTest/HGDWDataWebService/HGDWDataService.aspx?service=HC&zip=20002&radius=10&filter=8357&format=JSON', type: 'GET', dataType: 'json', success: function(data) { $('#msg').html(data[0].title); // Always null in Firefox/Chrome. Works in IE8. }, error: function(data) { alert(data); } }); </script> </body> </html> The JSON results look like the following: {"title":"HEALTHPOINT TYEE CAMPUS","link":"http://www.healthpointchc.org","id":"tag:datawarehouse.hrsa.gov,2010-04-29:/8357","org":"HEALTHPOINT TYEE CAMPUS","address":{"street-address":"4424 S. 188TH St.","locality":"Seatac","region":"Washington","postal-code":"98188-5028"},"tel":"206-444-7746","category":"Service Delivery Site","location":"47.4344818181818 -122.277672727273","update":"2010-04-28T00:00:00-05:00"} If I replace my URL with the Flickr API URL (http://api.flickr.com/services/feeds/photos_public.gne?tags=cat&tagmode=any&format=json&jsoncallback=?), I get back a valid JSON result that I am able to make use of. I have successfully validated my JSON at JSONLint, so I've run out of ideas as to what I might be doing wrong. Any thoughts? Update: I had the client switch the content type to application/json. Unfortunately, I'm still experiencing the exact same problem. I also updated my HTML and included the live URL I've been working with. Update 2: I just gave this a try in IE8 and it works fine. For some reason, it doesn't work in either Firefox 3.6.3 or Chrome 4.1.249.1064 (45376). I did notice a mistake with the data being returned (the developer is returning a collection of data, even for queries that will always return a single record), but it still baffles me why it doesn't work in other browsers. It might be important to note that I am working from an HTML file on my local file system. I thought it might be a XSS issue, but that doesn't explain why Flickr works.

    Read the article

  • SQL Server 2008: If Multiple Values Set In Other Multiple Values Set

    - by AJH
    In SQL, is there any way to accomplish something like this? This is based off a report built in SQL Server Report Builder, where the user can specify multiple text values as a single report parameter. The query for the report grabs all of the values the user selected and stores them in a single variable. I need a way for the query to return only records that have associations to EVERY value the user specified.

        -- Assume there's a table of Elements with thousands of entries.
        -- Now we declare a list of properties for those Elements to be associated with.
        create table #masterTable (
            ElementId int,
            Text varchar(10)
        )
        insert into #masterTable (ElementId, Text) values (1, 'Red');
        insert into #masterTable (ElementId, Text) values (1, 'Coarse');
        insert into #masterTable (ElementId, Text) values (1, 'Dense');
        insert into #masterTable (ElementId, Text) values (2, 'Red');
        insert into #masterTable (ElementId, Text) values (2, 'Smooth');
        insert into #masterTable (ElementId, Text) values (2, 'Hollow');

        -- Element 1 is Red, Coarse, and Dense. Element 2 is Red, Smooth, and Hollow.
        -- The real table is actually much much larger than this; this is just an example.

        -- This is me trying to replicate how SQL Server Report Builder treats
        -- report parameters in its queries. The user selects one, some, all,
        -- or no properties from a list. The written query treats the user's
        -- selections as a single variable called @Properties.

        -- Example scenario 1: User only wants to see Elements that are BOTH Red and Dense.
        select e.*
        from Elements e
        where (@Properties)  -- ideally a set containing only Red and Dense
            in (select Text from #masterTable where ElementId = e.Id)  -- ideally a set containing only Red, Coarse, and Dense
        -- Both Red and Dense are within Element 1's properties (Red, Coarse, Dense),
        -- so Element 1 gets returned, but not Element 2.

        -- Example scenario 2: User only wants to see Elements that are BOTH Red and Hollow.
        select e.*
        from Elements e
        where (@Properties)  -- ideally a set containing only Red and Hollow
            in (select Text from #masterTable where ElementId = e.Id)
        -- Both Red and Hollow are within Element 2's properties (Red, Smooth, Hollow),
        -- so Element 2 gets returned, but not Element 1.

        -- Example Scenario 3: User only picked the Red option.
        select e.*
        from Elements e
        where (@Properties)  -- ideally a set containing only Red
            in (select Text from #masterTable where ElementId = e.Id)
        -- Red is within both Element 1 and Element 2's properties,
        -- so both Element 1 and Element 2 get returned.

    The above syntax doesn't actually work because SQL doesn't seem to allow multiple values on the left side of the "in" comparison. The error that returns is:

        Subquery returned more than 1 value. This is not permitted when the subquery follows
        =, !=, <, <=, >, >= or when the subquery is used as an expression.

    Am I even on the right track here? Sorry if the example looks long-winded or confusing.
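
    For reference, a "must match every selected value" requirement is usually expressed by counting matches per element rather than comparing one set against another. Below is a sketch against the tables above, with the caveat that the @Properties handling is an assumption: a Report Builder multi-value parameter would typically be expanded into the IN list, with the number of selected values passed alongside it.

        -- return only Elements associated with ALL of the selected properties
        select e.*
        from Elements e
        where (select count(distinct m.Text)
               from #masterTable m
               where m.ElementId = e.Id
                 and m.Text in ('Red', 'Dense'))  -- the user's selections
              = 2;                                -- how many distinct values were selected

    Scenario 3 is the same pattern with the list reduced to ('Red') and the count compared to 1.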

    Read the article

  • Hibernate Communications Link Failure in Restlet-Hibernate Based Java application powered by MySQL

    - by Vatsala
    Let me describe my question - I have a Java application - Hibernate as the DB interfacing layer over MySQL. I get the communications link failure error in my application. The occurence of this error is a very specific case. I get this error , When I leave mysql server unattended for more than approximately 6 hours (i.e. when there are no queries issued to MySQL for more than approximately 6 hours). I am pasting a top 'exception' level description below, and adding a pastebin link for a detailed stacktrace description. javax.persistence.PersistenceException: org.hibernate.exception.JDBCConnectionException: Cannot open connection - Caused by: org.hibernate.exception.JDBCConnectionException: Cannot open connection - Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure - The last packet successfully received from the server was 1,274,868,181,212 milliseconds ago. The last packet sent successfully to the server was 0 milliseconds ago. - Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure - The last packet successfully received from the server was 1,274,868,181,212 milliseconds ago. The last packet sent successfully to the server was 0 milliseconds ago. - Caused by: java.net.ConnectException: Connection refused: connect the link to the pastebin for further investigation - http://pastebin.com/4KujAmgD What I understand from these exception statements is that MySQL is refusing to take in any connections after a period of idle/nil activity. I have been reading up a bit about this via google search, and came to know that one of the possible ways to overcome this is to set values for c3p0 properties as c3p0 comes bundled with Hibernate. Specifically, I read from here http://www.mchange.com/projects/c3p0/index.html that setting two properties idleConnectionTestPeriod and preferredTestQuery will solve this for me. But these values dont seem to have had an effect. Is this the correct approach to fixing this? If not, what is the right way to get over this? The following are related Communications Link Failure questions at stackoverflow.com, but I've not found a satisfactory answer in their answers. http://stackoverflow.com/questions/2121829/java-db-communications-link-failure http://stackoverflow.com/questions/298988/how-to-handle-communication-link-failure Note 1 - i dont get this error when I am using my application continuosly. Note 2 - I use JPA with Hibernate and hence my hibernate.dialect,etc hibernate properties reside within the persistence.xml in the META-INF folder (does that prevent the c3p0 properties from working?)
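
    For what it's worth, the "works all day, dies after the first long idle stretch" pattern very often lines up with the MySQL server's idle-connection timeout silently closing the connection, after which the application is handed a dead socket. A quick diagnostic (not a fix) is to compare the idle window against the server's timeout settings:

        -- run against the MySQL server; values are in seconds (the default 28800 is 8 hours)
        SHOW VARIABLES LIKE 'wait_timeout';
        SHOW VARIABLES LIKE 'interactive_timeout';

    If the idle gap exceeds wait_timeout, connection testing along the lines described above is the usual remedy; note that idleConnectionTestPeriod and preferredTestQuery only take effect when c3p0 is actually the active connection provider, which with JPA normally means setting the hibernate.c3p0.* properties in persistence.xml so Hibernate switches to its C3P0ConnectionProvider.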

    Read the article

  • .Net Remote Log Querying

    - by jlafay
    I have a Win Service that I'm working on that consists of the service, WF Service (using WorkflowServiceHost), a Workflow (WorkflowApplication) that queries/processes data from a SQL Server DB, and a Comm Marshall class that handles data flow between the service and the WF. The WF does a lot of heavy data processing and the original app (early VB6) logged all the processing and displayed the results on the screen of the host machine. Critical events will be committed to eventlog because I strongly believe that should be common practice because admins naturally will look there and because it already has support for remote viewing. The workflow will also need to write logging events as it processes and iterates according to our business logic. Such as: records queried, records returned, processed records, etc. The data is very critical and we need to log actions as they occur. The logs are currently kept as text files on disk and I think that is ok. Ideally I would like to record log events in XML so it's easier to query and because it is less costly than a DB, especially since our DB servers do a lot of heavy processing anyways. Since we are replacing essentially a VB6 application with a robust windows service (taking advantage of WF 4.0), it has been requested that a remote client also be created. It receives callbacks from the service after subscribing to it and being added to a collection of subscribers. Basic statistics and summaries are updated client side after receiving basic monitoring data of what is going on with the service. We would like to also provide a way to provide details when we need to examine what is going on further because this is a long running data processing service and issues need to be addressed immediately. What is the best way to implement some type of query from the client that is sent to the service and returned to the client? Would it be efficient to implement another method to expose on the service and then have that pass that off to some querying class/object to examine the XML files by whichever specification and then return it to the client? That's the main concern. I don't want the service to processing to bottleneck much while this occurs. It seems that WF already auto-magically threads well for the most part but I want to make sure this is the right way to go about it. Any suggestions/recommendations on how to architect and implement a small log querying framework for a remote service would be awesome.

    Read the article

  • Perl - Calling subclass constructor from superclass (OO)

    - by Emmel
    This may turn out to be an embarrassingly stupid question, but better than potentially creating embarrassingly stupid code. :-) This is an OO design question, really. Let's say I have an object class 'Foos' that represents a set of dynamic configuration elements, which are obtained by querying a command on disk, 'mycrazyfoos -getconfig'. Let's say that there are two categories of behavior that I want 'Foos' objects to have: Existing ones: one is, query ones that exist in the command output I just mentioned (/usr/bin/mycrazyfoos -getconfig`. Make modifications to existing ones via shelling out commands. Create new ones that don't exist; new 'crazyfoos', using a complex set of /usr/bin/mycrazyfoos commands and parameters. Here I'm not really just querying, but actually running a bunch of system() commands. Affecting changes. Here's my class structure: Foos.pm package Foos, which has a new($hashref-{name = 'myfooname',) constructor that takes a 'crazyfoo NAME' and then queries the existence of that NAME to see if it already exists (by shelling out and running the mycrazyfoos command above). If that crazyfoo already exists, return a Foos::Existing object. Any changes to this object requires shelling out, running commands and getting confirmation that everything ran okay. If this is the way to go, then the new() constructor needs to have a test to see which subclass constructor to use (if that even makes sense in this context). Here are the subclasses: Foos/Existing.pm As mentioned above, this is for when a Foos object already exists. Foos/Pending.pm This is an object that will be created if, in the above, the 'crazyfoo NAME' doesn't actually exist. In this case, the new() constructor above will be checked for additional parameters, and it will go ahead and, when called using -create() shell out using system() and create a new object... possibly returning an 'Existing' one... OR As I type this out, I am realizing it is perhaps it's better to have a single: (an alternative arrangement) Foos class, that has a -new() that takes just a name -create() that takes additional creation parameters -delete(), -change() and other params that affect ones that exist; that will have to just be checked dynamically. So here we are, two main directions to go with this. I'm curious which would be the more intelligent way to go.

    Read the article

  • remove data layer and put it into its own domain

    - by user334768
    I have a SL4 application that uses EF4 & RIA Services. DB is SQL 2008. All is working well. Now I want to put the Database and web services on one domain (A.com) with the web service exposing the same methods available in my working project. (one listed at top of message) Then put a Silverlight application (same one as above) on domain(B.com) and call the web services on A.com. I thought I had a fair understanding of RIA Services. Enough to get the above application working. Now when I say "working" I do mean on my local dev machine. I have yet to deployed as SL4 & .NET 4 application to my hosting site. But I don't think I understand it well enough. I normally create a new business app, add EF then create the RIA DomainService. Add any [Includes] I need, modify my linq queries and run application. And it works. Now I need to break off my data layer and put it on another hosting site (A.com) And put my UI and business logic on another hosting site (B.com) I think I need to do the following : On the Database & web service site: domain(A.com) create application, create EF4, create RIA Services and deploy. At this time, are the methods exposed available as a "WEB SERVICE" to other applications calling by http:// a.com/serviceName.svc address? I think I need to do the following : On the application site : domain(B.com) create a business application (later will need authentication and navigation). How can I create an EF when I don't have access to the database? (I know I do have access but I want know what happens here when I do not have access to the database, but only data provided by a web service) If I can not create an EF how do I create my RIA Service? I hope any one who takes time to help me understands what I'm asking. Sorry so long.

    Read the article

  • Why won't C# accept a (seemingly) perfectly good Sql Server CE Query?

    - by VoidKing
    By perfectly good sql query, I mean to say that, inside WebMatrix, if I execute the following query, it works to perfection: SELECT page AS location, (len(page) - len(replace(UPPER(page), UPPER('o'), ''))) / len('o') AS occurences, 'pageSettings' AS tableName FROM PageSettings WHERE page LIKE '%o%' UNION SELECT pageTitle AS location, (len(pageTitle) - len(replace(UPPER(pageTitle), UPPER('o'), ''))) / len('o') AS occurences, 'ExternalSecondaryPages' AS tableName FROM ExternalSecondaryPages WHERE pageTitle LIKE '%o%' UNION SELECT eventTitle AS location, (len(eventTitle) - len(replace(UPPER(eventTitle), UPPER('o'), ''))) / len('o') AS occurences, 'MainStreetEvents' AS tableName FROM MainStreetEvents WHERE eventTitle LIKE '%o%' Here i am using 'o' as a static search string to search upon. No problem, but not exeactly very dynamic. Now, when I write this query as a string in C# and as I think it should be (and even as I have done before) I get a server-side error indicating that the string was not in the correct format. Here is a pic of that error: And (although I am only testing the output, should I get it to quit erring), here is the actual C# (i.e., the .cshtml) page that queries the database: @{ Layout = "~/Layouts/_secondaryMainLayout.cshtml"; var db = Database.Open("Content"); string searchText = Request.Unvalidated["searchText"]; string selectQueryString = "SELECT page AS location, (len(page) - len(replace(UPPER(page), UPPER(@0), ''))) / len(@0) AS occurences, 'pageSettings' AS tableName FROM PageSettings WHERE page LIKE '%' + @0 + '%' "; selectQueryString += "UNION "; selectQueryString += "SELECT pageTitle AS location, (len(pageTitle) - len(replace(UPPER(pageTitle), UPPER(@0), ''))) / len(@0) AS occurences, 'ExternalSecondaryPages' AS tableName FROM ExternalSecondaryPages WHERE pageTitle LIKE '%' + @0 + '%' "; selectQueryString += "UNION "; selectQueryString += "SELECT eventTitle AS location, (len(eventTitle) - len(replace(UPPER(eventTitle), UPPER(@0), ''))) / len(@0) AS occurences, 'MainStreetEvents' AS tableName FROM MainStreetEvents WHERE eventTitle LIKE '%' + @0 + '%'"; @:beginning <br/> foreach (var row in db.Query(selectQueryString, searchText)) { @:entry @:@row.location &nbsp; @:@row.occurences &nbsp; @:@row.tableName <br/> } } Since it is erring on the foreach (var row in db.Query(selectQueryString, searchText)) line, that heavily suggests that something is wrong with my query, however, everything seems right to me about the syntax here and it even executes to perfection if I query the database (mind you, un-parameterized) directly. Logically, I would assume that I have erred somewhere with the syntax involved in parameterizing this query, however, my double and triple checking (as well as, my past experience at doing this) insists that everything looks fine here. Have I messed up the syntax involved with parameterizing this query, or is something else at play here that I am overlooking? I know I can tell you, for sure, as it has been previously tested, that the value I am getting from the query string is, indeed, what I would expect it to be, but as there really isn't much else on the .cshtml page yet, that is about all I can tell you.

    Read the article

  • What classes should I map against with NHibernate?

    - by apollodude217
    Currently, we use NHibernate to map business objects to database tables. Said business objects enforce business rules: The set accessors will throw an exception on the spot if the contract for that property is violated. Also, the properties enforce relationships with other objects (sometimes bidirectional!). Well, whenever NHibernate loads an object from the database (e.g. when ISession.Get(id) is called), the set accessors of the mapped properties are used to put the data into the object. What's good is that the middle tier of the application enforces business logic. What's bad is that the database does not. Sometimes crap finds its way into the database. If crap is loaded into the application, it bails (throws an exception). Sometimes it clearly should bail because it cannot do anything, but what if it can continue working? E.g., an admin tool that gathers real-time reports runs a high risk of failing unnecessarily instead of allowing an admin to even fix a (potential) problem. I don't have an example on me right now, but in some instances, letting NHibernate use the "front door" properties that also enforce relationships (especially bidi) leads to bugs. What are the best solutions? Currently, I will, on a per-property basis, create a "back door" just for NHibernate: public virtual int Blah {get {return _Blah;} set {/*enforces BR's*/}} protected virtual int _Blah {get {return blah;} set {blah = value;}} private int blah; I showed the above in C# 2 (no default properties) to demonstrate how this gets us basically 3 layers of, or views, to blah!!! While this certainly works, it does not seem ideal as it requires the BL to provide one (public) interface for the app-at-large, and another (protected) interface for the data access layer. There is an additional problem: To my knowledge, NHibernate does not give you a way to distinguish between the name of the property in the BL and the name of the property in the entity model (i.e. the name you use when you query, e.g. via HQL--whenever you give NHibernate the name (string) of a property). This becomes a problem when, at first, the BR's for some property Blah are no problem, so you refer to it in your O/R mapping... but then later, you have to add some BR's that do become a problem, so then you have to change your O/R mapping to use a new _Blah property, which breaks all existing queries using "Blah" (common problem with programming against strings). Has anyone solved these problems?!

    Read the article

  • Function returning MYSQL_ROW

    - by Gabe
    I'm working on a system using lots of MySQL queries and I'm running into some memory problems I'm pretty sure have to do with me not handling pointers right... Basically, I've got something like this: MYSQL_ROW function1() { string query="SELECT * FROM table limit 1;"; MYSQL_ROW return_row; mysql_init(&connection); // "connection" is a global variable if (mysql_real_connect(&connection,HOST,USER,PASS,DB,0,NULL,0)){ if (mysql_query(&connection,query.c_str())) cout << "Error: " << mysql_error(&connection); else{ resp = mysql_store_result(&connection); //"resp" is also global if (resp) return_row = mysql_fetch_row(resp); mysql_free_result(resp); } mysql_close(&connection); }else{ cout << "connection failed\n"; if (mysql_errno(&connection)) cout << "Error: " << mysql_errno(&connection) << " " << mysql_error(&connection); } return return_row; } And function2(): MYSQL_ROW function2(MYSQL_ROW row) { string query = "select * from table2 where code = '" + string(row[2]) + "'"; MYSQL_ROW retorno; mysql_init(&connection); if (mysql_real_connect(&connection,HOST,USER,PASS,DB,0,NULL,0)){ if (mysql_query(&connection,query.c_str())) cout << "Error: " << mysql_error(&conexao); else{ // My "debugging" shows me at this point `row[2]` is already fubar resp = mysql_store_result(&connection); if (resp) return_row = mysql_fetch_row(resp); mysql_free_result(resp); } mysql_close(&connection); }else{ cout << "connection failed\n"; if (mysql_errno(&connection)) cout << "Error : " << mysql_errno(&connection) << " " << mysql_error(&connection); } return return_row; } And main() is an infinite loop basically like this: int main( int argc, char* args[] ){ MYSQL_ROW row = NULL; while (1) { row = function1(); if(row != NULL) function2(row); } } (variable and function names have been generalized to protect the innocent) But after the 3rd or 4th call to function2, that only uses row for reading, row starts losing its value coming to a segfault error... Anyone's got any ideas why? I'm not sure the amount of global variables in this code is any good, but I didn't design it and only got until tomorrow to fix and finish it, so workarounds are welcome! Thanks!

    Read the article

  • App losing db connection

    - by DaveKub
    I'm having a weird issue with an old Delphi app losing it's database connection. Actually, I think it's losing something else that then makes the connection either drop or be unusable. The app is written in Delphi 6 and uses the Direct Oracle Access component (v4.0.7.1) to connect to an Oracle 9i database. The app runs as a service and periodically queries the db using a TOracleQuery object (qryAlarmList). The method that is called to do this looks like this: procedure TdmMain.RefreshAlarmList; begin try qryAlarmList.Execute; except on E: Exception do begin FStatus := ssError; EventLog.LogError(-1, 'TdmMain.RefreshAlarmList', 'Message: ' + E.Message); end; end; end; It had been running fine for years, until a couple of Perl scripts were added to this machine. These scripts run every 15 minutes and look for datafiles to import into the db, and then they do a some calculations and a bunch of reads/writes to/from the db. For some reason, when they are processing large amounts of data, and then the Delphi app tries to query the db, the Delphi app throws an exception at the "qryAlarmList.Execute" line in the above code listing. The exception is always: Access violation at address 00000000. read of address 00000000 HOW can something that the Perl scripts are doing cause this?? There are other Perl scripts on this machine that load data using the same modules and method calls and we didn't have problems. To make it even weirder, there are two other apps that will also suddenly lose their ability to talk to the database at the same time as the Perl stuff is running. Neither of those apps run on this machine, but both are Delphi 6 apps that use the same DOA component to connect to the same database. We have other apps that connect to the same db, written in Java or C# and they don't seem to have any problems. I've tried adding code before the '.Execute' method is called to: check the session's connection (session.CheckConnection(true); always comes back as 'ccOK'). see whether I can access a field of the qryAlarmList object to see if maybe it's become null; can access it fine. check the state of the qryAlarmList; always says it's qsIdle. Does anyone have any suggestions of something to try? This is driving me nuts! Dave

    Read the article

  • How do I add a filter button to this pagination?

    - by ClarkSKent
    Hey, I want to add a button(link), that when clicked will filter the pagination results. I'm new to php (and programming in general) and would like to add a button like 'Automotive' and when clicked it updates the 2 mysql queries in my pagination script, seen here: As you can see, the category automotive is hardcoded in, I want it to be dynamic, so when a link is clicked it places whatever the id or class is in the category part of the query. 1: $record_count = mysql_num_rows(mysql_query("SELECT * FROM explore WHERE category='automotive'")); 2: $get = mysql_query("SELECT * FROM explore WHERE category='automotive' LIMIT $start, $per_page"); This is the entire current php pagination script that I am using: <?php //connecting to the database $error = "Could not connect to the database"; mysql_connect('localhost','root','root') or die($error); mysql_select_db('ajax_demo') or die($error); //max displayed per page $per_page = 2; //get start variable $start = $_GET['start']; //count records $record_count = mysql_num_rows(mysql_query("SELECT * FROM explore WHERE category='automotive'")); //count max pages $max_pages = $record_count / $per_page; //may come out as decimal if (!$start) $start = 0; //display data $get = mysql_query("SELECT * FROM explore WHERE category='automotive' LIMIT $start, $per_page"); while ($row = mysql_fetch_assoc($get)) { // get data $name = $row['id']; $age = $row['site_name']; echo $name." (".$age.")<br />"; } //setup prev and next variables $prev = $start - $per_page; $next = $start + $per_page; //show prev button if (!($start<=0)) echo "<a href='pagi_test.php?start=$prev'>Prev</a> "; //show page numbers //set variable for first page $i=1; for ($x=0;$x<$record_count;$x=$x+$per_page) { if ($start!=$x) echo " <a href='pagi_test.php?start=$x'>$i</a> "; else echo " <a href='pagi_test.php?start=$x'><b>$i</b></a> "; $i++; } //show next button if (!($start>=$record_count-$per_page)) echo " <a href='pagi_test.php?start=$next'>Next</a>"; ?>

    Read the article

  • How to combine a list of choices to determine which select statement to run

    - by Larry
    I have a mysql db and am using php 5.2 What I am trying to do is offer a list of options for a person to select (only 1). The chosen option will cause a select, update, or delete statement to be ran. The results of the statement do not need to be shown, although, showing the old and then the new would be nice (no problems with that part tho'.). Pseudo-Code: Assign $choice = 0 Check the value of $choice // This way, if it = 100, we do a break Select a Choice:<br> 1. Adjust Status Value (+60) // $choice = 1<br> 2. Show all Ships <br> // $choice = 2 3. Show Ships in Port <br> // $choice = 3 ... 0. $choice="100" // if the value =100, quit this part Use either case (switch) or if/else statements to run the users choice1 If the choice is 1, then run the "Select" statement with the variable of $sql1 -- "SELECT .... If the choice is 2, then run the "Select" statement with the variable of $sql2 --- SELECT * FROM Ships If the choice is 3, then run the "Select" statement with the variable of $sql3 <br> .... If the choice is 0, then we are done. I figured the (3) statements would be assigned in php as: $sql1="...". $sql2="SELECT * FROM Ships" $sql3="SELECT * FROM Ships WHERE nPort="1" My idea was to use the switch statement, but got lost on it. :( I would like the options to be available over and over again, until a variable ($choice) is selected. In which case, this particular page is done and goes back to the "Main Menu"? The coding and display, if I use it, I can do. Just not sure how to write the way to select which one I want. It is possible that I would run all of the queries, and other times, only one, so that is why I would like the choice. An area I get confused in is the proper forms to use such as -- ' ' " " and ...?? Not sure the # of options I will end up with, but it will be more than 5 but less than 20 / page. So if I get the system down for 2-3 choices, I can replicate it for as many as I may need. And, as always, if a better way exists, I am willing to try it. Thanks again... Larry

    Read the article

  • DB Design Pattern - Many to many classification / categorised tagging.

    - by Robin Day
    I have an existing database design that stores Job Vacancies. The "Vacancy" table has a number of fixed fields across all clients, such as "Title", "Description", "Salary range". There is an EAV design for "Custom" fields that the Clients can setup themselves, such as, "Manager Name", "Working Hours". The field names are stored in a "ClientText" table and the data stored in a "VacancyClientText" table with VacancyId, ClientTextId and Value. Lastly there is a many to many EAV design for custom tagging / categorising the vacancies with things such as Locations/Offices the vacancy is in, a list of skills required. This is stored as a "ClientCategory" table listing the types of tag, "Locations, Skills", a "ClientCategoryItem" table listing the valid values for each Category, e.g., "London,Paris,New York,Rome", "C#,VB,PHP,Python". Finally there is a "VacancyClientCategoryItem" table with VacancyId and ClientCategoryItemId for each of the selected items for the vacancy. There are no limits to the number of custom fields or custom categories that the client can add. I am now designing a new system that is very similar to the existing system, however, I have the ability to restrict the number of custom fields a Client can have and it's being built from scratch so I have no legacy issues to deal with. For the Custom Fields my solution is simple, I have 5 additional columns on the Vacancy Table called CustomField1-5. This removes one of the EAV designs. It is with the tagging / categorising design that I am struggling. If I limit a client to having 5 categories / types of tag. Should I create 5 tables listing the possible values "CustomCategoryItems1-5" and then an additional 5 many to many tables "VacancyCustomCategoryItem1-5" This would result in 10 tables performing the same storage as the three tables in the existing system. Also, should (heaven forbid) the requirements change in that I need 6 custom categories rather than 5 then this will result in a lot of code change. Therefore, can anyone suggest any DB Design Patterns that would be more suitable to storing such data. I'm happy to stick with the EAV approach, however, the existing system has come across all the usual performance issues and complex queries associated with such a design. Any advice / suggestions are much appreciated. The DBMS system used is SQL Server 2005, however, 2008 is an option if required for any particular pattern.
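
    One middle ground worth sketching - the table and column names below are invented for illustration, not taken from the existing schema - is to keep a single generic version of the three category tables and enforce the "at most five categories per client" rule declaratively, instead of multiplying the tables by five:

        CREATE TABLE ClientCategory (
            ClientCategoryId int IDENTITY(1,1) PRIMARY KEY,
            ClientId         int NOT NULL,
            Slot             tinyint NOT NULL,        -- which of the client's five slots this category occupies
            Name             nvarchar(100) NOT NULL,
            CONSTRAINT CK_ClientCategory_Slot CHECK (Slot BETWEEN 1 AND 5),
            CONSTRAINT UQ_ClientCategory_ClientSlot UNIQUE (ClientId, Slot)
        );

        CREATE TABLE ClientCategoryItem (
            ClientCategoryItemId int IDENTITY(1,1) PRIMARY KEY,
            ClientCategoryId     int NOT NULL REFERENCES ClientCategory (ClientCategoryId),
            Value                nvarchar(100) NOT NULL
        );

        CREATE TABLE VacancyClientCategoryItem (
            VacancyId            int NOT NULL,
            ClientCategoryItemId int NOT NULL REFERENCES ClientCategoryItem (ClientCategoryItemId),
            PRIMARY KEY (VacancyId, ClientCategoryItemId)
        );

    The CHECK plus the UNIQUE (ClientId, Slot) pair caps each client at five categories, and if the requirement later becomes six it is a one-line constraint change rather than a sixth set of tables and query paths.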

    Read the article

  • PHP - JSON Steam API query

    - by Hunter
    First time using "JSON" and I've just been working away at my dissertation and I'm integrating a few features from the steam API.. now I'm a little bit confused as to how to create arrays. function test_steamAPI() { $api = ('http://api.steampowered.com/ISteamUser/GetPlayerSummaries/v0002/?key='.get_Steam_api().'&steamids=76561197960435530'); $test = decode_url($api); var_dump($test['response']['players'][0]['personaname']['steamid']); } //Function to decode and return the data. function decode_url($url) { $decodeURL = $url; $data = file_get_contents($url); $data_output = json_decode($data, true); return $data_output; } So ea I've wrote a simple method to decode Json as I'll be doing a fair bit.. But just wondering the best way to print out arrays.. I can't for the life of me get it to print more than 1 element without it retunring an error e.g. Warning: Illegal string offset 'steamid' in /opt/lampp/htdocs/lan/lan-includes/scripts/class.steam.php on line 48 string(1) "R" So I can print one element, and if I add another it returns errors. EDIT -- Thanks for help, So this was my solution: function test_steamAPI() { $api = ('http://api.steampowered.com/ISteamUser/GetPlayerSummaries/v0002/?key='.get_Steam_api().'&steamids=76561197960435530,76561197960435530'); $data = decode_url($api); foreach($data ['response']['players'] as $player) { echo "Steam id:" . $player['steamid'] . "\n"; echo "Community visibility :" . $player['communityvisibilitystate'] . "\n"; echo "Player profile" . $player['profileurl'] ."\n"; } } //Function to decode and return the data. function decode_url($url) { $decodeURL = $url; $json = file_get_contents($decodeURL); $data_output = json_decode($json, true); return $data_output; } Worked this out by taking a look at the data.. and a couple json examples, this returns an array based on the Steam API URL (It works for multiple queries.... just FYI) and you can insert loops inside for items etc.. (if anyone searches for this).

    Read the article

  • Remote PostgreSQL - extremely slow

    - by Muffinbubble
    Hi, I have setup PostgreSQL on a VPS I own - the software that accesses the database is a program called PokerTracker. PokerTracker logs all your hands and statistics whilst playing online poker. I wanted this accessible from several different computers so decided to installed it on my VPS and after a few hiccups I managed to get it connecting without errors. However, the performance is dreadful. I have done tons of research on 'remote postgresql slow' etc and am yet to find an answer so am hoping someone is able to help. Things to note: The query I am trying to execute is very small. Whilst connecting locally on the VPS, the query runs instantly. While running it remotely, it takes about 1 minute and 30 seconds to run the query. The VPS is running 100MBPS and then computer I'm connecting to it from is on an 8MB line. The network communication between the two is almost instant, I am able to remotely connect fine with no lag whatsoever and am hosting several websites running MSSQL and all the queries run instantly, whether connected remotely or locally so it seems specific to PostgreSQL. I'm running their newest version of the software and the newest compatible version of PostgreSQL with their software. The database is a new database, containing hardly any data and I've ran vacuum/analyze etc all to no avail, I see no improvements. I don't understand how MSSQL can query almost instantly yet PostgreSQL struggles so much. I am able to telnet to the post 5432 on the VPS IP with no problems, and as I say the query does execute it just takes an extremely long time. What I do notice is on the router when the query is running that hardly any bandwidth is being used - but then again I wouldn't expect it to for a simple query but am not sure if this is the issue. I've tried connecting remotely on 3 different networks now (including different routers) but the problem remains. Connecting remotely via another machine via the LAN is instant. I have also edited the postgre conf file to allow for more memory/buffers etc but I don't think this is the problem - what I am asking it to do is very simple - it shouldn't be intensive at all. Thanks, Ricky
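
    A quick way to narrow this down (purely a diagnostic suggestion): run the same query directly on the VPS in psql with timing switched on, so that server-side execution time is separated from connection setup and network transfer. If the server reports milliseconds but the remote client still takes a minute and a half, the time is going into the connection path rather than the query plan.

        -- in psql on the VPS itself
        \timing                      -- toggles display of elapsed time per statement
        EXPLAIN ANALYZE SELECT ...;  -- substitute the actual PokerTracker query here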

    Read the article
