Search Results

Search found 18245 results on 730 pages for 'recursive query'.

Page 583/730

  • php mysql database users connection handling

    - by aviv
    What is the best way to handle MySQL database user connections in PHP? I have a web server running a PHP application on MySQL. I have created a database user for the application: dbuser1, with limited access: only query, insert and update on tables, no ALTER TABLE. Now the question is: should I use the same dbuser1 throughout my scripts, so that if 100 people are using my system concurrently, and hence 100 scripts are running in parallel, they all connect to the database as the same dbuser1? Or should I create a few users and assign each script a different user, or load-balance between the db users?
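
    A hedged illustration of the single-application-user approach (in Java rather than PHP, since the idea is language-agnostic): one limited account, dbuser1, shared by every request through a connection pool, instead of one database account per script or per end user. HikariCP, the JDBC URL, the password, the pool size and the users table are illustrative assumptions, not details from the question.

        import com.zaxxer.hikari.HikariConfig;
        import com.zaxxer.hikari.HikariDataSource;

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class Db {
            // One pool, one limited application account (dbuser1); every request borrows
            // a pooled connection instead of getting its own database user.
            private static final HikariDataSource POOL = new HikariDataSource(buildConfig());

            private static HikariConfig buildConfig() {
                HikariConfig cfg = new HikariConfig();
                cfg.setJdbcUrl("jdbc:mysql://localhost:3306/appdb"); // assumed URL
                cfg.setUsername("dbuser1");                          // the single limited account
                cfg.setPassword("secret");                           // assumed password
                cfg.setMaximumPoolSize(20);                          // assumed cap on open connections
                return cfg;
            }

            public static String lookupName(int id) throws Exception {
                try (Connection con = POOL.getConnection();
                     PreparedStatement ps = con.prepareStatement("SELECT name FROM users WHERE id = ?")) {
                    ps.setInt(1, id);
                    try (ResultSet rs = ps.executeQuery()) {
                        return rs.next() ? rs.getString(1) : null;
                    }
                }
            }
        }

    The pool, not the set of database accounts, is what usually absorbs the 100 concurrent scripts; separate accounts tend to be worth it only when different roles need different privileges.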

    Read the article

  • CodePlex Daily Summary for Sunday, September 09, 2012

    CodePlex Daily Summary for Sunday, September 09, 2012Popular ReleasesMishra Reader: Mishra Reader Beta 4: Additional bug fixes and logging in this release to try to find the reason some users aren't able to see the main window pop up. Also, a few UI tweaks to tighten up the feed item list. This release requires the final version of .NET 4.5. If the ClickOnce installer doesn't work for you, please try the additional setup exe.Xenta Framework - extensible enterprise n-tier application framework: Xenta Framework 1.9.0: Release Notes Imporved framework architecture Improved the framework security More import/export formats and operations New WebPortal application which includes forum, new, blog, catalog, etc. UIs Improved WebAdmin app. Reports, navigation and search Perfomance optimization Improve Xenta.Catalog domain More plugin interfaces and plugin implementations Refactoring Windows Azure support and much more... Package Guide Source Code - package contains the source code Binaries...Json.NET: Json.NET 4.5 Release 9: New feature - Added JsonValueConverter Fix - Fixed DefaultValueHandling.Ignore not igoring default values of non-nullable properties Fix - Fixed DefaultValueHandling.Populate error with non-nullable properties Fix - Fixed error when writing JSON for a JProperty with no value Fix - Fixed error when calling ToList on empty JObjects and JArrays Fix - Fixed losing decimal precision when writing decimal JValuesMicrosoft Ajax Minifier: Microsoft Ajax Minifier 4.66: Just going to bite the bullet and rip off the band-aid... SEMI-BREAKING CHANGE! Well, it's a BREAKING change to those who already adjusted their projects to use the previous breaking change's ill-conceived renamed DLLs (versions 4.61-4.65). For those who had not adapted and were still stuck in this-doesn't-work-please-fix-me mode, this is more like a fixing change. The previous breaking change just broke too many people, I'm sorry to say. Renaming the DLL from AjaxMin.dll to AjaxMinLibrary.dl...DotNetNuke® Community Edition CMS: 07.00.00 CTP (Not for Production Use): NOTE: New Minimum Requirementshttp://www.dotnetnuke.com/Portals/25/Blog/Files/1/3418/Windows-Live-Writer-1426fd8a58ef_902C-MinimumVersionSupport_2.png Simplified InstallerThe first thing you will notice is that the installer has been updated. Not only have we updated the look and feel, but we also simplified the overall install process. You shouldn’t have to click through a series of screens in order to just get your website running. With the 7.0 installer we have taken an approach that a...BIDS Helper: BIDS Helper 1.6.1: In addition to fixing a number of bugs that beta testers reported, this release includes the following new features for Tabular models in SQL 2012: New Features: Tabular Display Folders Tabular Translations Editor Tabular Sync Descriptions Fixed Issues: Biml issues 32849 fixing bug in Tabular Actions Editor Form where you type in an invalid action name which is a reserved word like CON or which is a duplicate name to another action 32695 - fixing bug in SSAS Sync Descriptions whe...Umbraco CMS: Umbraco 4.9.0: Whats newThe media section has been overhauled to support HTML5 uploads, just drag and drop files in, even multiple files are supported on any HTML5 capable browser. The folder content overview is also much improved allowing you to filter it and perform common actions on your media items. 
The Rich Text Editor’s “Media” button now uses an embedder based on the open oEmbed standard (if you’re upgrading, enable the media button in the Rich Text Editor datatype settings and set TidyEditorConten...menu4web: menu4web 0.4.1 - javascript menu for web sites: This release is for those who believe that global variables are evil. menu4web has been wrapped into m4w singleton object. Added "Vertical Tabs" example which illustrates object notation.Microsoft SQL Server Product Samples: Database: AdventureWorks OData Feed: The AdventureWorks OData service exposes resources based on specific SQL views. The SQL views are a limited subset of the AdventureWorks database that results in several consuming scenarios: CompanySales Documents ManufacturingInstructions ProductCatalog TerritorySalesDrilldown WorkOrderRouting How to install the sample You can consume the AdventureWorks OData feed from http://services.odata.org/AdventureWorksV3/AdventureWorks.svc. You can also consume the AdventureWorks OData fe...Desktop Google Reader: 1.4.6: Sorting feeds alphabetical is now optional (see preferences window)Lightweight Fluent Workflow: objectflow 1.4.0.1: Changes in this release;Exception handling policies Install on NuGet console; Install-Package objectflow.core -pre Supported work-flow patterns Exception policies Sequence Parallel split Simple merge Exclusive choice Retry Arbitrary cycles Framework Features Handle exceptions with workflow.Configure().On<Exception>() Repeat operations and functions an arbitrary number of times Retry failed lamda functions Generic interface for concise workflow registration Operations t...Droid Explorer: Droid Explorer 0.8.8.7 Beta: Bug in the display icon for apk's, will fix with next release Added fallback icon if unable to get the image/icon from the Cloud Service Removed some stale plugins that were either out dated or incomplete. Added handler for *.ab files for restoring backups Added plugin to create device backups Backups stored in %USERPROFILE%\Android Backups\%DEVICE_ID%\ Added custom folder icon for the android backups directory better error handling for installing an apk bug fixes for the Runn...LibXmlSocket: Binary: .net4.5,.netCore???????Hidden Capture (HC): Hidden Capture 1.1: Hidden Capture 1.1 by Mohsen E.Dawatgar http://Hidden-Capture.blogfa.comThe Visual Guide for Building Team Foundation Server 2012 Environments: Version 1: --Nearforums - ASP.NET MVC forum engine: Nearforums v8.5: Version 8.5 of Nearforums, the ASP.NET MVC Forum Engine. New features include: Built-in search engine using Lucene.NET Flood control improvements Notifications improvements: sync option and mail body View Roadmap for more details webdeploy package sha1 checksum: 961aff884a9187b6e8a86d68913cdd31f8deaf83WiX Toolset: WiX Toolset v3.6: WiX Toolset v3.6 introduces the Burn bootstrapper/chaining engine and support for Visual Studio 2012 and .NET Framework 4.5. Other minor functionality includes: WixDependencyExtension supports dependency checking among MSI packages. WixFirewallExtension supports more features of Windows Firewall. WixTagExtension supports Software Id Tagging. WixUtilExtension now supports recursive directory deletion. 
Melt simplifies pure-WiX patching by extracting .msi package content and updating .w...Iveely Search Engine: Iveely Search Engine (0.2.0): ????ISE?0.1.0??,?????,ISE?0.2.0?????????,???????,????????20???follow?ISE,????,??ISE??????????,??????????,?????????,?????????0.2.0??????,??????????。 Iveely Search Engine ?0.2.0?????????“??????????”,??????,?????????,???????,???????????????????,????、????????????。???0.1.0????????????: 1. ??“????” ??。??????????,?????????,???????????????????。??:????????,????????????,??????????????????。??????。 2. ??“????”??。?0.1.0??????,???????,???????????????,?????????????,????????,?0.2.0?,???????...GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail BugfixSmart Data Access layer: Smart Data access Layer Ver 3: In this version support executing inline query is added. Check Documentation section for detail.New ProjectsAjax based Multi Forms ASP.NET Framework (Amfan): Ajax based Multi Forms ASP.NET Framework (Amfan) reduces the limitations of the Asp.Net by creating multiple sub-forms of the page as separate aspx pages.APO-CS: A straight port of Apophysis to C#.BaceCP / ???? ????? ?? ???????? ????: BaceCP e ??????????? ???????? ?? ????????, ??????????? ? ????????? ?? ???? ????? ?? ???????? ????.CafeAdm Messenger: CafeAdm Messenger Allows a server connection for CafeAdm Schedule and MessengerDECnet 2.0 Router: A user mode DECnet 2.0 router that will interoperate with the HECnet bridge.DriveManager: This is a common utility to provide an interface for all cloud based drive providers like google drive, dropbox, MS Sky Drive. Email Tester: This utility will help you check your SMTP settings for SharePoint. It is best for Pre-PROD and PROD environment where you can't modify code.EvoGame: The evo game that im working onforebittims: Project Owner : ForeBitt Inc. Sri Lanka Project Manager : Chackrapani Wickramarathne Technologies : VS 2010, C#, Linq to Sql, Sql server 2008 R2, Dev ExpressMCServe: MCServe is the minecraft-server which takes advantage of the performance of the .NET FrameworkMental: Ðang trong quá trình phát tri?nSalaryManagementSys: SalaryManagementSysVKplay: This application is audio player, launches on windows phone platform and uses music from vk.com social network.

    Read the article

  • Subquery works in 9i but not in 11g

    - by Zsuetam
    The statement below works on Oracle 9i but not on Oracle 11g: SELECT * FROM ( SELECT 0 scrnfail_rate, '9' zz, 7 hh FROM DUAL UNION ALL SELECT 0 scrnfail_rate, '9' zz, 7 hh FROM DUAL ) WHERE zz IS NOT NULL AND TO_CHAR (hh) NOT IN ( SELECT DECODE ( scrnfail_rate, 0, -1, ROUND (LEVEL * 1 / (scrnfail_rate / 100)) - ROUND (1 / (2 * (scrnfail_rate / 100))) ) AS nno FROM DUAL WHERE NVL (scrnfail_rate, 0) > 0 CONNECT BY LEVEL <= ROUND(9 * scrnfail_rate / 100) ) It looks like Oracle 11g is ignoring the DECODE, or even the whole WHERE clause, in the subquery. This query should return two rows, as it does on Oracle 9i, but it results in ORA-01476: divisor is equal to zero on Oracle 11g EE 11.2.0.1.0 - 64bit. Can anyone help? Thanks!

    Read the article

  • Single Large v/s Multiple Small MySQL tables for storing Options

    - by Prasad
    Hi there, I'm aware of several question on this forum relating to this. But I'm not talking about splitting tables for the same entity (like user for example) Suppose I have a huge options table that stores list options like Gender, Marital Status, and many more domain specific groups with same structure. I plan to capture in a OPTIONS table. Another simple option is to have the field set as ENUM, but there are disadvantages of that as well. http://www.brandonsavage.net/why-you-should-replace-enum-with-something-else/ OPTIONS Table: option_id <will be referred instead of the name> name value group Query: select .. from options where group = '15' - Since this table is expected to be multi-tenant, the no of rows could grow drastically. - I believe splitting the tables instead of finding by the group would be easier to write & faster to execute. - or perhaps partitioning by the group or tenant? Pl suggest. Thanks

    Read the article

  • more ruby way of gsub from array

    - by aharon
    My goal is to let there be x so that x("? world. what ? you say...", ['hello', 'do']) returns "hello world. what do you say...". I have something that works, but it seems far from the "Ruby way": def x(str, arr, rep='?') i = 0 str.gsub(rep) { i+=1; arr[i-1] } end Is there a more idiomatic way of doing this? (Let me note that speed is the most important factor, of course.)

    Read the article

  • Why would a TableAdapter populate a DataSet with "1/1/2000" for an entire timestamp column?

    - by Rob
    I have a TableAdapter filling a DataSet, and for some reason every select query populates my timestamp column with the value 1/1/2000 for every selected row. I first verified that original values are intact on the DB side; for the most part, they are, although it seems a few rows lost their original timestamp because of update queries performed programmatically before the issue was discovered. The DataColumn type is DateType, while the database (Postgres) column type is timestamp. Up until recently, this was all playing very nicely. I noticed the issue in a bound DataGridView control, and verified that this is not related to data binding by utilizing the 'Preview Data' option in the VS DataSet Editor. Usually when I notice unexpected values popping up in my application it's related to a mis-configured property, type conflict, or another silly mistake I've made. So after checking properties and types, and even recreating the TableAdapter from scratch, to say I'm a little baffled is an understatement. Does anyone have any ideas of what I could do to fix the issue and/or diagnose the cause?

    Read the article

  • CREATE USER in MS Access 2010

    - by Anakela
    I have been searching for several hours regarding how to create a user using SQL for a database I am building in Access. I found several sources on Microsoft's website that say I can use the CREATE USER command to do this. However, whenever I attempt to run the query, an error saying Syntax error in CREATE TABLE statement pops up. What am I doing wrong? Thank you in advance for your help! If you're interested, the code format I am attempting to use is as follows: CREATE USER username, password, pid.

    Read the article

  • How to make a thread try to reconnect to the Database x times using JDBCTemplate

    - by gillJ
    Hi, I have a single thread trying to connect to a database using JdbcTemplate as follows: JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource); try { jdbcTemplate.execute(new CallableStatementCreator() { @Override public CallableStatement createCallableStatement(Connection con) throws SQLException { return con.prepareCall(query); } }, new CallableStatementCallback() { @Override public Object doInCallableStatement(CallableStatement cs) throws SQLException { cs.setString(1, subscriberID); cs.execute(); return null; } }); } catch (DataAccessException dae) { throw new CougarFrameworkException( "Problem removing subscriber from events queue: " + subscriberID, dae); } I want to make sure that if the above code throws DataAccessException or SQLException, the thread waits a few seconds and tries to re-connect, say, 5 more times, and then gives up. How can I achieve this? Also, if during execution the database goes down and comes up again, how can I ensure that my program recovers from this and continues running instead of throwing an exception and exiting?
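
    A minimal retry sketch for the stated requirement (wait a few seconds, retry up to 5 times, then give up), assuming Spring's JdbcTemplate. MAX_ATTEMPTS, RETRY_DELAY_MS and the simplified update() call standing in for the original callable statement are illustrative assumptions, not part of the question.

        import org.springframework.dao.DataAccessException;
        import org.springframework.jdbc.core.JdbcTemplate;

        public class RetryingCaller {
            private static final int MAX_ATTEMPTS = 5;        // assumed retry budget
            private static final long RETRY_DELAY_MS = 3000;  // assumed pause between attempts

            // Runs the statement, retrying on DataAccessException before giving up.
            public void executeWithRetry(JdbcTemplate jdbcTemplate, String query, String subscriberID) {
                DataAccessException lastFailure = null;
                for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
                    try {
                        jdbcTemplate.update(query, subscriberID);  // simplified stand-in for the callable statement
                        return;                                    // success, stop retrying
                    } catch (DataAccessException dae) {
                        lastFailure = dae;
                        try {
                            Thread.sleep(RETRY_DELAY_MS);          // wait before the next attempt
                        } catch (InterruptedException ie) {
                            Thread.currentThread().interrupt();
                            break;
                        }
                    }
                }
                throw lastFailure;                                 // all attempts failed
            }
        }

    The same loop can wrap the original execute(CallableStatementCreator, CallableStatementCallback) call unchanged; recovering from a database restart is then largely a matter of not letting the exception escape the calling worker.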

    Read the article

  • Low cost way to host a large table yet keep the performance scalable?

    - by Leo Liang
    I have a growing table storing time series data, 500M entries now, and 200K new records every day. The total size is around 15GB for now. My clients mostly query the table via a PHP script, and the size of the result set is around 10K records (not very large). select * from T where timestamp > X and timestamp < Y and additionFilters And I want this operation to be cheap. Currently my table is hosted in Postgres 7, on a single box with 16GB of memory, and I would love to see some good suggestions for hosting this at low cost while also allowing me to scale up for performance if needed. The table serves: 1. Query: 90% 2. Insert: 9.9% 3. Update: 0.1% <-- very rare.

    Read the article

  • LINQ-to-XML to DataGridView: Cannot edit fields -- How to fix?

    - by Pretzel
    I am currently doing LINQ-to-XML and populating a DataGridView with my query just fine. The trouble I am running into is that once loaded into the DataGridView, the values appear to be Un-editable (ReadOnly). Here's my code: var barcodes = (from src in xmldoc.Descendants("Container") where src.Descendants().Count() > 0 select new { Id = (string)src.Element("Id"), Barcode = (string)src.Element("Barcode"), Quantity = float.Parse((string)src.Element("Quantity").Attribute("value")) }).Distinct(); dataGridView1.DataSource = barcodes.ToList(); I read somewhere that the "DataGridView will be in ReadOnly mode when you use Anonymous types." But I couldn't find an explanation why or exactly what to do about it. Any ideas?

    Read the article

  • can oracle types be updated like tables?

    - by Omnipresent
    I am converting GTTs to Oracle types, as explained in an excellent answer by APC. However, some GTTs are being updated based on a select query from another table. For example: UPDATE my_gtt_1 c SET (street, city, STATE, zip) = (SELECT src.unit_address, src.unit_city, src.unit_state, src.unit_zip_code FROM (SELECT mbr.ROWID row_id, unit_address, RTRIM(a.unit_city) unit_city, RTRIM(a.unit_state) unit_state, RTRIM(a.unit_zip_code) unit_zip_code FROM table_1 b, table_2 a, my_gtt_1 mbr WHERE type = 'ABC' AND id = b.ssn_head AND a.h_id = b.h_id AND row_id >= v_start_row AND row_id <= v_end_row) src WHERE c.ROWID = src.row_id) WHERE state IS NULL OR state = ' '; If my_gtt_1 were not a global temporary table but an Oracle collection type, would it still be possible to do updates this complex? Or in these cases are we better off using the global temporary table?

    Read the article

  • How do you verify the correct data is in a data mart?

    - by blockcipher
    I'm working on a data warehouse and I'm trying to figure out how best to verify that data from our data cleansing (normalized) database makes it into our data marts correctly. I've done some searches, but the results so far talk more about ensuring things like constraints are in place and that you need to do data validation during the ETL process (e.g., dates are valid). The dimensions were pretty easy, as I could either leverage the primary key or write a very simple and verifiable query to get the data. The fact tables are more complex. Any thoughts? We're trying to make this very easy for a subject matter expert to run a couple of queries, see some data from both the data cleansing database and the data marts, and visually compare the two to ensure they are correct.

    Read the article

  • SQL Server 2008 full-text search doesn't find word in words?

    - by Martijn
    In the database I have a field with a .mht file. I want to use FTS to search in this document. I got this working, but I'm not satisfied with the result. For example (sorry it's in Dutch, but I think you get my point) I will use 2 words: zieken and ziekenhuis. As you can see, the phrase 'zieken' is in the word 'ziekenhuis'. When I search on 'ziekenhuis' I get about 20 results. When I search on 'zieken' I get 7 results. How is this possible? I mean, why doesn't the FTS return at minimum the results which I get for 'ziekenhuis'? Here's the query I use: SELECT DISTINCT d.DocID 'Id', d.Titel, (SELECT afbeeldinglokatie FROM tbl_Afbeelding WHERE soort = 'beleid') as Pic, 'belDoc' as DocType FROM docs d JOIN kpl_Document_Lokatie dl ON d.DocID = dl.DocID JOIN HandboekLokaties hb ON dl.LokatieID = hb.LokatieID WHERE hb.InstellingID = @instellingId AND ( FREETEXT(d.Doel, @searchstring) OR FREETEXT(d.Toepassingsgebied, @searchstring) OR FREETEXT(d.HtmlDocument, @searchstring) OR FREETEXT (d.extraTabblad, @searchstring) ) AND d.StatusID NOT IN( 1, 5)

    Read the article

  • MySQL ALTER TABLE on very large table - is it safe to run it?

    - by Timothy Mifsud
    I have a MySQL database with one particular MyISAM table of over 4 million rows. I update this table about once a week with about 2000 new rows. After updating, I then perform the following statement: ALTER TABLE x ORDER BY PK DESC i.e. I order the table in question by the primary key field in descending order. This has not given me any problems on my development machine (Windows with 3GB memory), but, even though I have run it successfully 3 times on the production Linux server (with 512MB RAM, producing the sorted table in about 6 minutes each time), the last time I tried it I had to stop the query after about 30 minutes and rebuild the database from a backup. I have started to wonder whether a 512MB server can cope with that statement (on such a large table), as I have read that a temporary table is created to perform the ALTER TABLE command. And, if it can be safely run, what should be the expected time for the alteration of the table? Thanks in advance, Tim

    Read the article

  • Repeatedly querying xml using python

    - by Jack
    I have some xml documents I need to run queries on. I've created some python scripts (using ElementTree) to do this, since I'm vaguely familiar with using it. The way it works is I run the scripts several times with different arguments, depending on what I want to find out. These files can be relatively large (10MB+) and so it takes rather a long time to parse them. On my system, just running: tree = ElementTree.parse(document) takes around 30 seconds, with a subsequent findall query only adding around a second to that. Seeing as the way I'm doing this requires me to repeatedly parse the file, I was wondering if there was some sort of caching mechanism I can use so that the ElementTree.parse computation can be reduced on subsequent queries. I realise the smart thing to do here may be to try and batch as many queries as possible together in the python script, but I was hoping there might be another way. Thanks.

    Read the article

  • Doing a join across two databases with different collations on SQL Server and getting an error.

    - by Andrew G. Johnson
    I know, I know: with what I wrote in the question I shouldn't be surprised. But my situation is this: I'm slowly working through an inherited POS system, and my predecessor apparently wasn't aware of JOINs, so when I looked into one of the internal pages that takes 60 seconds to load, I saw that it's a fairly quick "rewrite these 8 queries as one query with JOINs" situation. The problem is that besides not knowing about JOINs, he also seems to have had a fetish for multiple databases and, surprise, surprise, they use different collations. The fact of the matter is we use only "normal" Latin characters that English-speaking people would consider the entire alphabet, and this whole thing will be out of use in a few months, so a band-aid is all I need. Long story short, I need some way to cast to a single collation so I can compare two fields from two databases. The exact error is: Cannot resolve the collation conflict between "SQL_Latin1_General_CP850_CI_AI" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.

    Read the article

  • Using RDL files in Web ReportViewer

    - by user54064
    I want to use an RDL file with the query information stored in it. I don't want to have to convert it to an RDLC file. I have an ASP.NET app in which I want to show the report. I thought I would use a ReportViewer on my page and then have it use the RDL file. However, I get an error, and from researching it, it appears that I have to convert the file to an RDLC file. I don't want to strip out the data contained in the report. How can I show the report to the user by running the RDL report?

    Read the article

  • GAE transaction exceptions

    - by bach
    Hi, In this example IS the exception being thrown if ANY of the Table elements are being changed by another client OR only if the element that we changed has been changed by another client? Just to verify - the exception is thrown from the commit() isn't it? PersistenceManager pm = PMF.get().getPersistenceManager(); try { pm.currentTransaction().begin(); List<Row> Table = (List<Row>) pm.newQuery(query).execute(); Table.get(0).setReserved(true); // <----- we change only this element pm.currentTransaction().commit(); } catch (JDOCanRetryException ex) { pm.currentTransaction().rollback() // <----- if Table.get(1) was changed by another client do we get to this point??? }
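
    For illustration only, a hedged sketch of the usual rollback-and-retry pattern around an optimistic commit. It reuses PMF, Row and query from the snippet above; MAX_RETRIES and the method name are assumed.

        import java.util.List;
        import javax.jdo.JDOCanRetryException;
        import javax.jdo.PersistenceManager;
        import javax.jdo.Transaction;

        void reserveFirstRowWithRetry(String query) {
            final int MAX_RETRIES = 3;                      // assumed retry budget
            PersistenceManager pm = PMF.get().getPersistenceManager();
            for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
                Transaction tx = pm.currentTransaction();
                try {
                    tx.begin();
                    List<Row> table = (List<Row>) pm.newQuery(query).execute();
                    table.get(0).setReserved(true);         // the single write from the question
                    tx.commit();                            // concurrent-modification failures surface here
                    break;                                  // committed, stop retrying
                } catch (JDOCanRetryException ex) {
                    if (tx.isActive()) {
                        tx.rollback();                      // discard the failed attempt
                    }
                    if (attempt == MAX_RETRIES) {
                        throw ex;                           // give up after the last attempt
                    }
                }
            }
        }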

    Read the article

  • More XML Problems - Undeclared Entity 'nbsp'

    - by Gogster
    I'm getting the error on my XML, at: Line 49: xml = r.ReadToEnd(); Line 50: Line 51: System.Xml.Linq.XDocument xmlDoc = System.Xml.Linq.XDocument.Parse(xml); Line 52: Line 53: var query = from p in xmlDoc.Descendants("member") When I run the code to generate the XML in an empty page, it runs without error; if I call the code within my webpage, it throws this error. The only 'nbsp' on the page is a doctype declaration at the top of the XSLT: <!DOCTYPE xsl:stylesheet [ <!ENTITY nbsp "&#x00A0;"> ]> I'm at a loss as to where this error is coming from, and I am looking for suggestions please! Thanks.

    Read the article

  • IN statement performance in PostgreSQL (and in general)

    - by Vasil
    I know this has probably been asked before, but I can't find it with SO's search. Let's say I've got TABLE1 and TABLE2; how should I expect the performance of a query such as this to behave: SELECT * FROM TABLE1 WHERE id IN SUBQUERY_ON_TABLE2; as the number of rows in TABLE1 and TABLE2 grows, given that id is a primary key on TABLE1? Yes, I know using IN is such a n00b mistake, but TABLE2 has a generic relation (Django generic relation) to multiple other tables, so I can't think of another way to filter the data. At what (approximate) number of rows in TABLE1 and TABLE2 should I expect to notice performance issues because of this? Will performance degrade linearly, exponentially, etc., depending on the number of rows?

    Read the article

  • SQL optional parameters through VB.net

    - by ScaryJones
    I've a document search page with three listboxes that allow multiple selections. They're: Category A, Year, Category B. Only Category A is mandatory; the others are optional parameters and might be empty. Each document can belong to multiple options in Category A and multiple options in Category B, but each document only has one year associated with it. I've kind of got this working by building up a dynamic SQL string, but it's messy and I hate using it, so I thought I'd ask here if anyone could see an easier way of doing this. An example of the kind of dynamic SQL query I end up with follows: select * from library where libraryID in (select distinct libraryID from categoryAdocs where categoryAdocID in (4)) or year in (2004)

    Read the article

  • Using C# Type as generic

    - by I Clark
    I'm trying to create a generic list from a specific Type that is retrieved from elsewhere: Type listType; // Passed in to function, could be anything var list = _service.GetAll<listType>(); However, I get a build error: The type or namespace name 'listType' could not be found (are you missing a using directive or an assembly reference?) Is this even possible or am I setting foot onto C# 4 Dynamic territory? As background: I want to automatically load all lists with data from the repository. The code below gets passed a Form Model whose properties are iterated for any IEnum (where T inherits from DomainEntity). I want to fill each list with objects of the Type the list is made of, from the repository. public void LoadLists(object model) { foreach (var property in model.GetType() .GetProperties(BindingFlags.Public | BindingFlags.Instance | BindingFlags.SetProperty)) { if (IsEnumerableOfNssEntities(property.PropertyType)) { var listType = property.PropertyType.GetGenericArguments()[0]; var list = _repository.Query<listType>().ToList(); property.SetValue(model, list, null); } } }

    Read the article

  • PL/SQL bulk collect into associative array with sparse key

    - by Dan
    I want to execute a SQL query inside PL/SQL and populate the results into an associative array, where one of the columns in the SQL becomes the key in the associative array. For example, say I have a table Person with columns PERSON_ID INTEGER PRIMARY KEY PERSON_NAME VARCHAR2(50) ...and values like: PERSON_ID | PERSON_NAME ------------------------ 6 | Alice 15 | Bob 1234 | Carol I want to bulk collect this table into a TABLE OF VARCHAR2(50) INDEX BY INTEGER such that the key 6 in this associative array has the value Alice and so on. Can this be done in PL/SQL? If so, how?

    Read the article

  • Adding custom columns to Propel model?

    - by Hard-Boiled Wonderland
    At the moment I am using the query below: $claims = ClaimQuery::create('c') ->leftJoinUser() ->withColumn('CONCAT(User.Firstname, " ", User.Lastname)', 'name') ->withColumn('User.Email', 'email') ->filterByArray($conditions) ->paginate($page = $page, $maxPerPage = $top); However I then want to add columns manually, so I thought this would simply work: foreach($claims as &$claim){ $claim->actions = array('edit' => array( 'url' => $this->get('router')->generate('hera_claims_edit'), 'text' => 'Edit' ) ); } return array('claims' => $claims, 'count' => count($claims)); However, when the data is returned, Propel or Symfony2 seems to strip the custom data, along with all of the superfluous model data, when it gets converted to JSON. What is the correct way of manually adding data this way?

    Read the article

  • Parallel processing via multithreading in Java

    - by Robz
    There are certain algorithms whose running time can decrease significantly when one divides up a task and gets each part done in parallel. One of these algorithms is merge sort, where a list is divided into successively smaller parts and then recombined in a sorted order. I decided to do an experiment to test whether or not I could increase the speed of this sort by using multiple threads. I am running the following functions in Java on a Quad-Core Dell with Windows Vista. One function (the control case) is simply recursive: // x is an array of N elements in random order public int[] mergeSort(int[] x) { if (x.length == 1) return x; // Dividing the array in half int[] a = new int[x.length/2]; int[] b = new int[x.length/2+((x.length%2 == 1)?1:0)]; for(int i = 0; i < x.length/2; i++) a[i] = x[i]; for(int i = 0; i < x.length/2+((x.length%2 == 1)?1:0); i++) b[i] = x[i+x.length/2]; // Sending them off to continue being divided mergeSort(a); mergeSort(b); // Recombining the two arrays int ia = 0, ib = 0, i = 0; while(ia != a.length || ib != b.length) { if (ia == a.length) { x[i] = b[ib]; ib++; } else if (ib == b.length) { x[i] = a[ia]; ia++; } else if (a[ia] < b[ib]) { x[i] = a[ia]; ia++; } else { x[i] = b[ib]; ib++; } i++; } return x; } The other is in the 'run' function of a class that extends Thread, and recursively creates two new threads each time it is called: public class Merger extends Thread { int[] x; boolean finished; public Merger(int[] x) { this.x = x; } public void run() { if (x.length == 1) { finished = true; return; } // Divide the array in half int[] a = new int[x.length/2]; int[] b = new int[x.length/2+((x.length%2 == 1)?1:0)]; for(int i = 0; i < x.length/2; i++) a[i] = x[i]; for(int i = 0; i < x.length/2+((x.length%2 == 1)?1:0); i++) b[i] = x[i+x.length/2]; // Begin two threads to continue to divide the array Merger ma = new Merger(a); ma.run(); Merger mb = new Merger(b); mb.run(); // Wait for the two other threads to finish while(!ma.finished || !mb.finished) ; // Recombine the two arrays int ia = 0, ib = 0, i = 0; while(ia != a.length || ib != b.length) { if (ia == a.length) { x[i] = b[ib]; ib++; } else if (ib == b.length) { x[i] = a[ia]; ia++; } else if (a[ia] < b[ib]) { x[i] = a[ia]; ia++; } else { x[i] = b[ib]; ib++; } i++; } finished = true; } } It turns out that the function that does not use multithreading actually runs faster. Why? Do the operating system and the Java virtual machine not "communicate" effectively enough to place the different threads on different cores? Or am I missing something obvious?
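
    As an aside, a hedged sketch reusing the Merger class from the question: Thread.run() executes the method on the calling thread, while start() schedules it on a new one, so a variant that actually runs the two halves concurrently would look roughly like this, with join() replacing the busy-wait on the finished flag:

        // Inside run(), after splitting x into a and b:
        Merger ma = new Merger(a);
        Merger mb = new Merger(b);
        ma.start();                 // start() runs Merger.run() on a new thread
        mb.start();
        try {
            ma.join();              // wait for both halves to finish sorting
            mb.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        // ...then recombine a and b exactly as in the original code.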

    Read the article
