Search Results

Search found 9944 results on 398 pages for 'plan explorer'.


  • Importing a large dataset into a database

    - by peaceful
    I'm a beginning programmer in the areas relevant to this question, so if possible it would be helpful not to assume I already know a lot. I'm trying to import the OpenLibrary dataset into a local Postgres database. After it's imported, I plan to use it as a starting seed for a Ruby on Rails application that will include information on books. The OpenLibrary datasets are available here, in a modified JSON format: http://openlibrary.org/dev/docs/jsondump I only need very basic information for my application, much less than what is provided in the dumps: book titles, author names, and the relationships between books and authors. Below are two typical entries from their dataset, the first for an author and the second for a book (there seems to be an entry for each edition of a book). Each entry leads off with a primary key, then a type, before the actual JSON record:

        /a/OL2A /type/author {"name": "U. Venkatakrishna Rao", "personal_name": "U. Venkatakrishna Rao", "last_modified": {"type": "/type/datetime", "value": "2008-09-10 08:44:01.978456"}, "key": "/a/OL2A", "birth_date": "1904", "type": {"key": "/type/author"}, "id": 99, "revision": 3}

        /b/OL345M /type/edition {"publishers": ["Social Science Research Project, Dept. of Geography, University of Dacca"], "pagination": "ii, 54 p.", "title": "Land use in Fayadabad area", "lccn": ["sa 65000491"], "subject_place": ["East Pakistan", "Dacca region."], "number_of_pages": 54, "languages": [{"comment": "initial import", "code": "eng", "name": "English", "key": "/l/eng"}], "lc_classifications": ["S471.P162 E23"], "publish_date": "1963", "publish_country": "pk ", "key": "/b/OL345M", "authors": [{"birth_date": "1911", "name": "Nafis Ahmad", "key": "/a/OL302A", "personal_name": "Nafis Ahmad"}], "publish_places": ["Dacca, East Pakistan"], "by_statement": "[by] Nafis Ahmad and F. Karim Khan.", "oclc_numbers": ["4671066"], "contributions": ["Khan, Fazle Karim, joint author."], "subjects": ["Land use -- East Pakistan -- Dacca region."]}

    The uncompressed dumps are enormous: about 2GB for the authors list and 18GB for the book editions list. OpenLibrary does not provide any import tools themselves; they do provide a simple, unoptimized Python script for reading in sample data (which, unlike the actual dumps, comes in pure JSON format), but they estimate that if it were modified for use on their actual data it would take two months (!) to finish loading. How can I read this into the database? I assume I'll need to write a program to do it. What language should I use, and is there any guidance on how to do it so it finishes in a reasonable amount of time? The only scripting language I have any experience with is Ruby.
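
    A minimal sketch of one way to approach a load like this (not from the original question): stream the dump line by line, keep only the fields needed, and bulk-load through Postgres's COPY protocol instead of issuing one INSERT per row. The whitespace-separated key/type/JSON line layout is inferred from the samples above; the table name, column names, file name and connection string are assumptions.

        import io
        import json
        import psycopg2  # assumed driver; any Postgres client that exposes COPY works

        # Assumed target table (not part of the original question):
        #   CREATE TABLE authors (key text PRIMARY KEY, name text);
        BATCH = 50000

        def author_rows(path):
            # Each dump line is assumed to look like: /a/OL2A /type/author {"name": ...}
            with open(path, encoding="utf-8") as dump:
                for line in dump:
                    key, _type, payload = line.split(None, 2)  # keep the JSON blob intact
                    record = json.loads(payload)
                    name = record.get("name", "").replace("\t", " ").replace("\n", " ")
                    yield key, name

        def copy_batch(cur, batch):
            # COPY is the fast path: one round trip per batch, no per-row statement parsing.
            buf = io.StringIO("".join("%s\t%s\n" % row for row in batch))
            cur.copy_from(buf, "authors", columns=("key", "name"))

        conn = psycopg2.connect("dbname=openlibrary")  # placeholder connection string
        with conn, conn.cursor() as cur:
            batch = []
            for row in author_rows("ol_dump_authors.txt"):  # placeholder file name
                batch.append(row)
                if len(batch) >= BATCH:
                    copy_batch(cur, batch)
                    batch = []
            if batch:
                copy_batch(cur, batch)

    The same pattern applies to the editions file; dropping indexes before the load and recreating them afterwards is the other big time-saver.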

    Read the article

  • What might cause the big overhead of making a HttpWebRequest call?

    - by Dimitri C.
    When I send/receive data using HttpWebRequest (on Silverlight, using the HTTP POST method) in small blocks, I measure a very low throughput of 500 bytes/s over a "localhost" connection. When sending the data in large blocks, I get 2 MB/s, which is some 5000 times faster. Does anyone know what could cause this incredibly big overhead? Update: I did the performance measurement on both Firefox 3.6 and Internet Explorer 7. Both showed similar results. Update: The Silverlight client-side code I use is essentially my own implementation of the WebClient class. The reason I wrote it is that I noticed the same performance problem with WebClient, and I thought that HttpWebRequest would let me work around the performance issue. Regrettably, this did not work. The implementation is as follows:

        public class HttpCommChannel
        {
            public delegate void ResponseArrivedCallback(object requestContext, BinaryDataBuffer response);

            public HttpCommChannel(ResponseArrivedCallback responseArrivedCallback)
            {
                this.responseArrivedCallback = responseArrivedCallback;
                this.requestSentEvent = new ManualResetEvent(false);
                this.responseArrivedEvent = new ManualResetEvent(true);
            }

            public void MakeRequest(object requestContext, string url, BinaryDataBuffer requestPacket)
            {
                responseArrivedEvent.WaitOne();
                responseArrivedEvent.Reset();
                this.requestMsg = requestPacket;
                this.requestContext = requestContext;
                this.webRequest = WebRequest.Create(url) as HttpWebRequest;
                this.webRequest.AllowReadStreamBuffering = true;
                this.webRequest.ContentType = "text/plain";
                this.webRequest.Method = "POST";
                this.webRequest.BeginGetRequestStream(new AsyncCallback(this.GetRequestStreamCallback), null);
                this.requestSentEvent.WaitOne();
            }

            void GetRequestStreamCallback(IAsyncResult asynchronousResult)
            {
                System.IO.Stream postStream = webRequest.EndGetRequestStream(asynchronousResult);
                postStream.Write(requestMsg.Data, 0, (int)requestMsg.Size);
                postStream.Close();
                requestSentEvent.Set();
                webRequest.BeginGetResponse(new AsyncCallback(this.GetResponseCallback), null);
            }

            void GetResponseCallback(IAsyncResult asynchronousResult)
            {
                HttpWebResponse response = (HttpWebResponse)webRequest.EndGetResponse(asynchronousResult);
                Stream streamResponse = response.GetResponseStream();
                Dim.Ensure(streamResponse.CanRead);
                byte[] readData = new byte[streamResponse.Length];
                Dim.Ensure(streamResponse.Read(readData, 0, (int)streamResponse.Length) == streamResponse.Length);
                streamResponse.Close();
                response.Close();
                webRequest = null;
                responseArrivedEvent.Set();
                responseArrivedCallback(requestContext, new BinaryDataBuffer(readData));
            }

            HttpWebRequest webRequest;
            ManualResetEvent requestSentEvent;
            BinaryDataBuffer requestMsg;
            object requestContext;
            ManualResetEvent responseArrivedEvent;
            ResponseArrivedCallback responseArrivedCallback;
        }

    I use this code to send data back and forth to an HTTP server.

    Read the article

  • activeX component in axapta

    - by Nico
    Hi folks, I'm struggling with a .NET ActiveX component that I'm trying to use in MS Axapta 2009. On my local machine, where it was compiled, it works fine: it can be added as an ActiveX element on a form, the methods and events are listed in the Axapta ActiveX explorer, and I can interact with it without any problems. But distributing the DLL to other clients isn't working as intended. Registering the DLL via regasm /codebase /tlb works properly - I get the message that registration was successful. The component is also listed when selecting an ActiveX element to add in AX, but neither functions nor properties are listed, and launching the form results in an error message: ActiveX component CLSID ... not found on system, not installed. The CLSID is indeed the one defined in .NET. Strange things happen when looking at the Task Manager: the ActiveX component itself is just a wrapper that interacts with a COM application, and when launching the AX form with the "not working and not installed" ActiveX component, the Task Manager shows a new process of that COM application, which is instantiated by the ActiveX wrapper. Things I have tried:
    - using different versions of regasm, e.g. \Windows\Microsoft.NET\Framework\v2.0.50727 and C:\Windows\Microsoft.NET\Framework64\v2.0.50727
    - using new GUIDs in .NET, after removing the old ones from the registry
    - compiling with different versions of the .NET Framework
    - registering via regasm, regasm /codebase, regasm /codebase /tlb, and via a Visual Studio setup project
    - running the registration from the command line as administrator
    - running the setup as administrator
    - even running AX as administrator on the client machine
    - moving the DLL to a different folder followed by a new registration (windows/system32; ax/client/bin)
    - installing to the GAC (gacutil /i)
    - different project options in Visual Studio (COM visibility; register for COM interop; different target platform)
    Hoping that compiling in Visual Studio with the "register for COM interop" option enabled does something more than the plain regasm registration, I used a Microsoft registry-monitor tool to log the registry activity that happened during compilation; using those logs to create all the registry entries on the target client in addition didn't work either. Any hints or help would be much appreciated - this has been blocking me for days now.

    Read the article

  • Please help with 'System.Data.DataRowView' does not contain a property with the name...

    - by Catalin
    My application works just fine until, in some unidentified conditions, it begins throwing this errors: When these errors start appearing, they appear in all over the application, no matter the code is built using ObjectDataSource or SqlDataSource. However, apparently, if the code uses some "classic", code-behind data-binding, that code does not throw errors (they do not show in the application event log), the page is displayed but it does not contain any data.... Please HELP tracking these very strange errors !!!!!! Here is the error log from the Event Viewer Exception information: Exception type: HttpUnhandledException Exception message: Exception of type 'System.Web.HttpUnhandledException' was thrown. Request information: Request URL: http://SITE/agsis/Default.aspx?tabid=1281&error=DataBinding5/7/2009 10:23:03 AMa+'System.Data.DataRowView'+does+not+contain+a+property+with+the+name+'DenumireMaterie'. Request path: /agsis/Default.aspx User host address: 299.299.299.299 ;) User: georgeta Is authenticated: True Authentication Type: Forms Thread account name: NT AUTHORITY\NETWORK SERVICE Thread information: Thread ID: 1 Thread account name: NT AUTHORITY\NETWORK SERVICE Is impersonating: False Stack trace: at System.Web.UI.Page.HandleError(Exception e) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest() at System.Web.UI.Page.ProcessRequest(HttpContext context) at ASP.errorpage_aspx.ProcessRequest(HttpContext context) at System.Web.HttpServerUtility.ExecuteInternal(IHttpHandler handler, TextWriter writer, Boolean preserveForm, Boolean setPreviousPage, VirtualPath path, VirtualPath filePath, String physPath, Exception error, String queryStringOverride) Here is the error log from my application: AssemblyVersion: 04.05.01 PortalID: 18 PortalName: Stiinte Economice UserID: 85 UserName: georgeta ActiveTabID: 1281 ActiveTabName: Plan Invatamant RawURL: /agsis/Default.aspx?tabid=1281&error=DataBinding%3a+'System.Data.DataRowView'+does+not+contain+a+property+with+the+name+'DenumireMaterie'. AbsoluteURL: /agsis/Default.aspx AbsoluteURLReferrer: http://SITE/agsis/Default.aspx?tabid=1281&ctl=Login&returnurl=%2fagsis%2fDefault.aspx%3ftabid%3d1281%26ctl%3dNotePlanSemestru%26mid%3d2801%26ID_PlanSemestru%3d518%26ID_PlanInvatamant%3d304%26ID_FC%3d206%26ID_FCForma%3d85%26ID_Domeniu%3d418%26ID_AnStudiu%3d7%26ID_Specializare%3d6522%26ID_AnUniv%3d27 UserAgent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; SIMBAR={8C326639-C677-4045-B6D9-F59C330790E7}) DefaultDataProvider: DotNetNuke.Data.SqlDataProvider, DotNetNuke.SqlDataProvider ExceptionGUID: c56dc33e-973f-467a-a90d-2a8fc7193aec InnerException: DataBinding: 'System.Data.DataRowView' does not contain a property with the name 'ID_PlanInvatamant'. FileName: FileLineNumber: 0 FileColumnNumber: 0 Method: System.Web.UI.DataBinder.GetPropertyValue StackTrace: Message: DotNetNuke.Services.Exceptions.PageLoadException: DataBinding: 'System.Data.DataRowView' does not contain a property with the name 'ID_PlanInvatamant'. --- System.Web.HttpException: DataBinding: 'System.Data.DataRowView' does not contain a property with the name 'ID_PlanInvatamant'. 
at System.Web.UI.DataBinder.GetPropertyValue(Object container, String propName) at System.Web.UI.WebControls.GridView.CreateChildControls(IEnumerable dataSource, Boolean dataBinding) at System.Web.UI.WebControls.CompositeDataBoundControl.PerformDataBinding(IEnumerable data) at System.Web.UI.WebControls.GridView.PerformDataBinding(IEnumerable data) at System.Web.UI.WebControls.DataBoundControl.OnDataSourceViewSelectCallback(IEnumerable data) at System.Web.UI.WebControls.DataBoundControl.PerformSelect() at System.Web.UI.WebControls.BaseDataBoundControl.EnsureDataBound() at System.Web.UI.WebControls.CompositeDataBoundControl.CreateChildControls() at System.Web.UI.Control.EnsureChildControls() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) --- End of inner exception stack trace --- Source: Server Name: SERVER2 Thank you, Catalin

    Read the article

  • Which version of MSXML should I use?

    - by Cheeso
    Seems like this would be a common question, though I could not find it on SO. Which version of MSXML should I use in my applications, and more importantly, how should I decide? There is MSXML3, 4, 5 and 6. I recently posted some code in calling-wcf-service-by-vbscript that used MSXML v4. AnthonyWJones posted that I shouldn't use 4, but instead 3 or 6, but probably 3. Certainly not v5! Why? I'd like to know more about the criteria for selecting the version of MSXML to use in my apps. Bonus question: Does anyone have a summary of the differences between the various versions of MSXML over time? Summary so far:

    MSXML6 - Should be the first choice. Released in 2006; includes performance and compliance fixes. Use this if you can. It's good. There are no merge modules; to bundle the MSXML6 runtime with your app, MS suggests packaging the MSXML6 msi file. MSXML6 is an upgrade from MSXML3/4 but does not replace them, because it discontinues some features. You can get the MSI here.

    MSXML3 - Second choice. Most widely deployed version. Originally shipped in March 2000. Actively maintained, but no new features. Currently supported if you are on SP5 (shipped in 2005) or later; SP7 is current (also from 2005).

    MSXML5 - Released only as part of MS Office. Currently supported by Microsoft, but only as part of Office, not for building apps. Don't build apps that depend on MSXML5: verboten.

    MSXML4 - Originally shipped? Currently in "maintenance mode". Microsoft is encouraging people to move off MSXML4 to MSXML6. Currently supported if you are on MSXML4 SP2 or later, which shipped in 2003; download MSXML4 SP2 here. Can be redistributed.

    "Using the right version of MSXML in Internet Explorer" is a good entry on the blog from Microsoft's xmlteam.

    Read the article

  • Tactics for using PHP in a high-load site

    - by Ross
    Before you answer this: I have never developed anything popular enough to attain high server loads. Treat me as (sigh) an alien that has just landed on the planet, albeit one that knows PHP and a few optimisation techniques. I'm developing a tool in PHP that could attain quite a lot of users, if it works out right. However, while I'm fully capable of developing the program, I'm pretty much clueless when it comes to making something that can deal with huge traffic. So here are a few questions on it (feel free to turn this question into a resource thread as well).

    Databases: At the moment I plan to use the MySQLi features in PHP5. However, how should I set up the databases in relation to users and content? Do I actually need multiple databases? At the moment everything's jumbled into one database - although I've been considering spreading user data to one, actual content to another and finally core site content (template masters etc.) to a third. My reasoning behind this is that sending queries to different databases will ease up the load on them, as one database = three load sources. Also, would this still be effective if they were all on the same server?

    Caching: I have a template system that is used to build the pages and swap out variables. Master templates are stored in the database, and each time a template is called its cached copy (an HTML document) is used. At the moment I have two types of variable in these templates - static vars and dynamic vars. Static vars are usually things like page names or the name of the site - things that don't change often; dynamic vars are things that change on each page load. My question on this: say I have comments on different articles. Which is the better solution: store the simple comment template and render the comments (from a DB call) each time the page is loaded, or store a cached copy of the comments page as an HTML page and re-cache it each time a comment is added/edited/deleted? (A sketch of the second option follows below.)

    Finally: Does anyone have any tips/pointers for running a high-load site on PHP? I'm pretty sure it's a workable language to use - Facebook and Yahoo! give it great precedence - but are there any experiences I should watch out for? Thanks, Ross
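
    For the caching question, a minimal sketch of the re-cache-on-write pattern described above. The site in question is PHP; Python is used here purely to illustrate the idea, and the file-based cache location and function names are assumptions.

        import os

        CACHE_DIR = "/tmp/fragment_cache"  # assumed cache location

        def cache_path(article_id):
            return os.path.join(CACHE_DIR, "comments_%d.html" % article_id)

        def get_comments_html(article_id, render, load_comments):
            # Return the cached comments fragment, rendering it only on a cache miss.
            path = cache_path(article_id)
            if os.path.exists(path):
                with open(path, encoding="utf-8") as f:
                    return f.read()
            html = render(load_comments(article_id))   # hit the DB only when the cache is cold
            os.makedirs(CACHE_DIR, exist_ok=True)
            with open(path, "w", encoding="utf-8") as f:
                f.write(html)
            return html

        def invalidate_comments(article_id):
            # Call after a comment is added/edited/deleted so the next read re-renders.
            try:
                os.remove(cache_path(article_id))
            except FileNotFoundError:
                pass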

    Read the article

  • Wordpress 403/404 Errors: You don't have permission to access /wp-admin/themes.php on this server

    - by Glen
    Some background: I set up six blogs this week, all using Wordpress 2.92, installed with Fantastico on a baby croc plan with Hostgator. I used the same theme (heatmap 2.5.4) and plugins for each blog. They were all up and running, no issues at all. I went to create a new blog this morning, using the same setup, and when I try to change the theme settings, I get the following error:

        Forbidden
        You don't have permission to access /wp-admin/themes.php on this server.
        Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.
        Apache/2.2.15 (Unix) mod_ssl/2.2.15 OpenSSL/0.9.8n DAV/2 mod_fcgid/2.3.5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 Server at http://www.mydomain.com Port 80

    I tried uninstalling WP and doing a clean install, and still have the same issue with the clean installation. So I went back and checked the six other blogs that I had set up over the last week or so, and they are also now giving me 403 or 404 errors when trying to change theme settings; every time there's an error it points to either themes.php or functions.php. At this point I'm at my wits' end trying to figure out what the problem is. Hostgator support looked at it and thought maybe it was a permissions issue, but they reset the permissions and I'm still having the problem. At first I thought the problem might have been related to a plugin I recently installed on the previous six blogs that morning (ByREV Fix Missed Shedule Plugin) to deal with a missed-schedule bug in WP 2.92, and that maybe that had mucked things up. But then I checked a blog I built months ago, also using the same theme and plugins, and now it too is encountering the same problem. Any ideas? I tried deleting my .htaccess, uploading a blank one, and uploading one with this snippet I found on the Hostgator forum:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    Nothing has worked. I still get 403 or 404 errors every time. Everything was working perfectly yesterday, so I know this setup DOES WORK; I've just mucked something up somewhere and I'm clueless what it is. I read a related thread here and tried chmodding the wp-content folder to 0755, but I'm still having the issue. Any thoughts? Thanks!

    Read the article

  • How to best design a date/geographic proximity query on GAE?

    - by Dane
    Hi all, I'm building a directory for finding athletic tournaments on GAE with web2py and a Flex front end. The user selects a location, a radius, and a maximum date from a set of choices. I have a basic version of this query implemented, but it's inefficient and slow. One way I know I can improve it is by condensing the many individual queries I'm using to assemble the objects into bulk queries; I just learned that was possible. But I'm also thinking about a more extensive redesign that utilizes memcache. The main problem is that I can't query the datastore by location because GAE won't allow multiple numerical comparison statements (<, <=, >=, >) in one query. I'm already using one for date, and I'd need TWO to check both latitude and longitude, so it's a no go. Currently, my algorithm looks like this (a sketch of steps 2-4 follows below):

    1.) Query by date and select
    2.) Use the destination function from geopy's distance module to find the max and min latitude and longitude for the supplied distance
    3.) Loop through the results and remove all with lat/lng outside the max/min
    4.) Loop through again and use the distance function to check the exact distance, because step 2 will include some areas outside the radius; remove results outside the supplied distance (is this 2/3/4 combination inefficient?)
    5.) Assemble many-to-many lists and attach to objects (this is where I need to switch to bulk operations)
    6.) Return to client

    Here's my plan for using memcache... let me know if I'm way out in left field on this, as I have no prior experience with memcache or server caching in general.

    - Keep a list in the cache filled with "geo objects" that represent all my data. These have five properties: latitude, longitude, event_id, event_type (in anticipation of expanding beyond tournaments), and start_date. This list will be sorted by date.
    - Also keep a dict of pointers in the cache which represent the start and end indices in the cache for all the date ranges my app uses (next week, 2 weeks, month, 3 months, 6 months, year, 2 years).
    - Have a scheduled task that updates the pointers daily at 12am.
    - Add new inserts to the cache as well as the datastore; update pointers.

    Using this design, the algorithm would now look like:

    1.) Use pointers to slice off the appropriate chunk of the list based on the supplied date.
    2-4.) Same as the above algorithm, except with geo objects
    5.) Use a bulk operation to select full tournaments using the remaining geo objects' event_ids
    6.) Assemble many-to-manys
    7.) Return to client

    Thoughts on this approach? Many thanks for reading and any advice you can give. -Dane
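
    A minimal sketch of the bounding-box prefilter plus exact-distance check described in steps 2-4. The geo-object fields follow the question, but they are represented here as plain dicts, and the haversine and bounding-box helpers are stand-ins rather than geopy's own functions.

        import math

        EARTH_RADIUS_KM = 6371.0

        def haversine_km(lat1, lng1, lat2, lng2):
            # Great-circle distance between two points, in kilometres.
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dphi = math.radians(lat2 - lat1)
            dlmb = math.radians(lng2 - lng1)
            a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
            return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

        def bounding_box(lat, lng, radius_km):
            # Cheap rectangle that fully contains the search circle (step 2).
            dlat = math.degrees(radius_km / EARTH_RADIUS_KM)
            dlng = math.degrees(radius_km / (EARTH_RADIUS_KM * math.cos(math.radians(lat))))
            return lat - dlat, lat + dlat, lng - dlng, lng + dlng

        def filter_by_distance(geo_objects, lat, lng, radius_km):
            # Steps 3 and 4: drop everything outside the box, then outside the circle.
            min_lat, max_lat, min_lng, max_lng = bounding_box(lat, lng, radius_km)
            in_box = [g for g in geo_objects
                      if min_lat <= g["latitude"] <= max_lat
                      and min_lng <= g["longitude"] <= max_lng]
            return [g for g in in_box
                    if haversine_km(lat, lng, g["latitude"], g["longitude"]) <= radius_km]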

    Read the article

  • What Is The Best Database For Delphi Desktop Applications That Supports Stored Procedures?

    - by Cape Cod Gunny
    I started with Turbo Pascal 3, went to TP5, Bought TP6 called Borland the next day and downgraded to TP5.5. Bought Delphi 3, and now have Delphi 5 Enterprise. I sort of lost interest in writing code about 4-5 years ago for two reasons; Spent all day writing ASP & SQL for someone else. PC Techniques magazine went away. I've got a few programs in the shareware market that are solid performers but are in need of serious updating. I love Delphi or did when it was Borland (before Borland bought DBase and all the other crap), I'd like to salvage as much of my D5E code as possible but I doubt I can. I plan on upgrading to Delphi 2010. My next software release needs to interact with a database. I'm very proficient with MS Sql and like to put all of the database code in stored procedures. What is the best database choice that interacts well with Delphi, allows stored procedures and is so easy to deploy that even the Geico gecko could deploy it? 10/25/2009 18:53 PM EST Re-Opened After Reading Install Docs for Delphi 2010 I downloaded a trial version of Delphi 2010 and unzipped the install. I've been reading the install docs included in the package. I started with the install.htm inside the zip package. install.htm wisely tells you to see the following two articles: Installation Notes: http://edn.embarcadero.com/article/39754 Release Notes: http://edn.embarcadero.com/article/39758 the release notes state the following... MSSQL driver requires the installation of the SQL Native Client. SQL Native Client 2008 is required for dbxmss.dll. SQL Native Client 2005 is required for dbxmss9.dll I checked my machine to see if SQL Native Client is installed. Nope. I wasn't done reading the docs so I made a note to install SQL Native Client. I googled dbxmss.dll and dbxmss9.dll and found a very interesting thread on the Embarcadero forums. read thread here. After reading this thread and some careful thought I don't think I will be using Microsoft SQL Express. I can't rely on my customers having the right drivers installed. So, I'm back to looking for a different solution. If I'm selling a $40 product to the general masses I need to have a bulletproof solution that doesn't require my brand new customer to update their machine before my software will work.

    Read the article

  • Guides for PostgreSQL query tuning?

    - by Joe
    I've found a number of resources that talk about tuning the database server, but I haven't found much on the tuning of the individual queries. For instance, in Oracle, I might try adding hints to ignore indexes or to use sort-merge vs. correlated joins, but I can't find much on tuning Postgres other than using explicit joins and recommendations when bulk loading tables. Do any such guides exist so I can focus on tuning the most run and/or underperforming queries, hopefully without adversely affecting the currently well-performing queries? I'd even be happy to find something that compared how certain types of queries performed relative to other databases, so I had a better clue of what sort of things to avoid. update: I should've mentioned, I took all of the Oracle DBA classes along with their data modeling and SQL tuning classes back in the 8i days ... so I know about 'EXPLAIN', but that's more to tell you what's going wrong with the query, not necessarily how to make it better. (eg, are 'while var=1 or var=2' and 'while var in (1,2)' considered the same when generating an execution plan? What if I'm doing it with 10 permutations? When are multi-column indexes used? Are there ways to get the planner to optimize for fastest start vs. fastest finish? What sort of 'gotchas' might I run into when moving from mySQL, Oracle or some other RDBMS?) I could write any complex query dozens if not hundreds of ways, and I'm hoping to not have to try them all and find which one works best through trial and error. I've already found that 'SELECT count(*)' won't use an index, but 'SELECT count(primary_key)' will ... maybe a 'PostgreSQL for experienced SQL users' sort of document that explained sorts of queries to avoid, and how best to re-write them, or how to get the planner to handle them better. update 2: I found a Comparison of different SQL Implementations which covers PostgreSQL, DB2, MS-SQL, mySQL, Oracle and Informix, and explains if, how, and gotchas on things you might try to do, and his references section linked to Oracle / SQL Server / DB2 / Mckoi /MySQL Database Equivalents (which is what its title suggests) and to the wikibook SQL Dialects Reference which covers whatever people contribute (includes some DB2, SQLite, mySQL, PostgreSQL, Firebird, Vituoso, Oracle, MS-SQL, Ingres, and Linter).
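
    One way to compare alternative formulations of the same query, since the question is about finding which rewrite the planner likes best: a small psycopg2 harness that runs EXPLAIN ANALYZE on each candidate and prints the plans side by side. This is a sketch; the connection string and the example queries are placeholders, not from the question, and note that EXPLAIN ANALYZE actually executes each statement.

        import psycopg2

        CANDIDATES = {
            "count star": "SELECT count(*) FROM mytable",      # placeholder queries
            "count pk":   "SELECT count(id) FROM mytable",
        }

        conn = psycopg2.connect("dbname=mydb")  # placeholder connection string
        with conn, conn.cursor() as cur:
            for label, sql in CANDIDATES.items():
                cur.execute("EXPLAIN ANALYZE " + sql)   # runs the query and returns the real plan
                print("--- %s ---" % label)
                for (line,) in cur.fetchall():
                    print(line)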

    Read the article

  • Setting up NCover for NUnit in FinalBuilder

    - by Lasse V. Karlsen
    I am attempting to set up NCover for usage in my FinalBuilder project, for a .NET 4.0 C# project, but my final coverage output file contains no coverage data. I am using: NCover 3.3.2 NUnit 2.5.4 FinalBuilder 6.3.0.2004 All tools are the latest official as of today. I've finally managed to coax FB into running my unit tests under NCover for the .NET 4.0 project, so I get Tests run: 184, ..., which is correct. However, the final Coverage.xml file output from NCover is almost empty, and looks like this: <?xml version="1.0" encoding="utf-8"?> <!-- saved from NCover 3.0 Export url='http://www.ncover.com/' --> <coverage profilerVersion="3.3.2.6211" driverVersion="3.3.2" exportversion="3" viewdisplayname="" startTime="2010-04-22T08:55:33.7471316Z" measureTime="2010-04-22T08:55:35.3462915Z" projectName="" buildid="27c78ffa-c636-4002-a901-3211a0850b99" coveragenodeid="0" failed="false" satisfactorybranchthreshold="95" satisfactorycoveragethreshold="95" satisfactorycyclomaticcomplexitythreshold="20" satisfactoryfunctionthreshold="80" satisfactoryunvisitedsequencepoints="10" uiviewtype="TreeView" viewguid="C:\Dev\VS.NET\LVK.IoC\LVK.IoC.Tests\bin\Debug\Coverage.xml" viewfilterstyle="None" viewreportstyle="SequencePointCoveragePercentage" viewsortstyle="Name"> <rebasedpaths /> <filters /> <documents> <doc id="0" excluded="false" url="None" cs="" csa="00000000-0000-0000-0000-000000000000" om="0" nid="0" /> </documents> </coverage> The output in FB log is: ... ***************** End Program Output ***************** Execution Time: 1,5992 s Coverage Xml: C:\Dev\VS.NET\LVK.IoC\LVK.IoC.Tests\bin\Debug\Coverage.xml NCover Success My configuration of the FB step for NCover: NCover what?: NUnit test coverage Command: C:\Program Files (x86)\NUnit 2.5.4\bin\net-2.0\nunit-console.exe Command arguments: LVK.IoC.Tests.dll /noshadow /framework:4.0.30319 /process=single /nothread Note: I've tried with and without the /process and /nothread options Working directory: %FBPROJECTDIR%\LVK.IoC.Tests\bin\Debug List of assemblies to profile: %FBPROJECTDIR%\LVK.IoC.Tests\bin\Debug\LVK.IoC.dll Note: I've tried just listing the name of the assembly, both with and without the extension. The documentation for the FB step doesn't help, as it only lists minor sentences for each property, and fails to give examples or troubleshooting hints. Since I want to pull the coverage results into NDepend to run build-time analysis, I want that file to contain the information I need. I am also using TestDriven, and if I right-click the solution file and select "Test with NCover", NCover-explorer opens up with coverage data, and if I ask it to show me the folder with coverage files, in there is an .xml file with the same structure as the one above, just with all the data that should be there, so the tools I have is certainly capable of producing it. Has anyone an idea of what I've configured wrong here?

    Read the article

  • Event not bubbling in some Browsers when clicked on Flash

    - by 166_MMX
    Environment: Windows 7, Internet Explorer 8, Flash ActiveX 10.1.53.64, wmode=transparent. I just wrote a small test page that you can load in IE, Firefox or any other browser:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title>Event bubbling test</title>
        </head>
        <body onclick="alert('body');" style="margin:0;border-width:0;padding:0;background-color:#00FF00;">
            <div onclick="alert('div');" style="margin:0;border-width:0;padding:0;background-color:#FF0000;">
                <span onclick="alert('span');" style="margin:0;border-width:0;padding:0;background-color:#0000FF;">
                    <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" codebase="http://fpdownload.macromedia.com/get/shockwave/cabs/flash/swflash.cab#version=7,0,0,0" width="159" height="91" id="flashAbout_small" align="absmiddle">
                        <param name="movie" value="http://www.adobe.com/swf/software/flash/about/flashAbout_info_small.swf"/>
                        <param name="quality" value="high"/>
                        <param name="bgcolor" value="#FFFFFF"/>
                        <param name="wmode" value="transparent"/>
                        <embed src="http://www.adobe.com/swf/software/flash/about/flashAbout_info_small.swf" quality="high" bgcolor="#FFFFFF" width="159" height="91" wmode="transparent" name="flashAbout_small" align="absmiddle" type="application/x-shockwave-flash" pluginspage="http://www.adobe.com/go/getflashplayer"/>
                    </object>
                </span>
            </div>
        </body>
        </html>

    Clicking any colored shape should produce an alert (except for the green one in IE; I'm not sure why, but I hope that's off topic and not related to my issue). Clicking the Flash container in Firefox works perfectly fine: you should get alert boxes in this order, containing span, div and body. Flash bubbles the event to the HTML. But this is not happening in IE. So why is Flash in IE not bubbling events to the HTML? Edit: As mentioned by Andy E, this behavior can also be seen in Google Chrome, which to my knowledge is not using ActiveX to embed the Flash movie into the page.

    Read the article

  • IronPython and Nodebox in C#

    - by proxylittle
    My plan: I'm trying to set up my C# project to communicate with Nodebox to call a certain function which populates a graph and draws it in a new window. Current situation: [fixed... see Update 2] I have already included all the Python modules needed, but I'm still getting a "Library 'GL' not found" error. It seems that the pyglet module needs a reference to GL/gl.h but can't find it, due to IronPython behaviour. Requirement: The project needs to stay as small as possible without installing new packages. That's why I have copied all my modules into the project folder and would like to keep it that way, or something similar. My question: Is there a workaround for my problem or a fix for the library-folder mismatch? I have read some articles about Tao-OpenGL and OpenTK but can't find a good solution. Update 1: Updated my source code with a small pyglet window-rendering example. The problem is in pyglet and referenced C objects. How do I include them in my C# project to be called? No idea so far... experimenting a little now. Keeping you updated. Sample C# code:

        ScriptRuntimeSetup setup = Python.CreateRuntimeSetup(null);
        ScriptRuntime runtime = new ScriptRuntime(setup);
        ScriptEngine engine = Python.GetEngine(runtime);
        ScriptSource source = engine.CreateScriptSourceFromFile("test.py");
        ScriptScope scope = engine.CreateScope();
        source.Execute(scope);

    Sample Python code (test.py):

        from nodebox.graphics import *
        from nodebox.graphics.physics import Vector, Boid, Flock, Obstacle

        flock = Flock(50, x=-50, y=-50, width=700, height=400)
        flock.sight(80)

        def draw(canvas):
            canvas.clear()
            flock.update(separation=0.4, cohesion=0.6, alignment=0.1, teleport=True)
            for boid in flock:
                push()
                translate(boid.x, boid.y)
                scale(0.5 + boid.depth)
                rotate(boid.heading)
                arrow(0, 0, 15)
                pop()

        canvas.size = 600, 300

        def main(canvas):
            canvas.run(draw)

    Update 2: Line 139 of pyglet/lib.py: sys.platform is not win32... there was the error. Fixed it by just using the line:

        from pyglet.gl.lib_wgl import link_GL, link_GLU, link_WGL

    Now the following error: 'module' object has no attribute '_getframe'. Kind of a pain to fix. Updating with results... Update 3: Fixed by adding the following line right after the first line in the C# code:

        setup.Options["Frames"] = true;

    Current problem: No module named unicodedata, but in Python26/DLLs there is only a *.pyd file. So... how do I include it now?!

    Read the article

  • Efficiency of Java "Double Brace Initialization"?

    - by Jim Ferrans
    In Hidden Features of Java the top answer mentions Double Brace Initialization, with a very enticing syntax:

        Set<String> flavors = new HashSet<String>() {{
            add("vanilla");
            add("strawberry");
            add("chocolate");
            add("butter pecan");
        }};

    This idiom creates an anonymous inner class with just an instance initializer in it, which "can use any [...] methods in the containing scope". Main question: Is this as inefficient as it sounds? Should its use be limited to one-off initializations? (And of course showing off!) Second question: The new HashSet must be the "this" used in the instance initializer... can anyone shed light on the mechanism? Third question: Is this idiom too obscure to use in production code? Summary: Very, very nice answers, thanks everyone. On question (3), people felt the syntax should be clear (though I'd recommend an occasional comment, especially if your code will pass on to developers who may not be familiar with it). On question (1), the generated code should run quickly. The extra .class files do cause jar file clutter, and slow program startup slightly (thanks to coobird for measuring that). Thilo pointed out that garbage collection can be affected, and the memory cost for the extra loaded classes may be a factor in some cases. Question (2) turned out to be the most interesting to me. If I understand the answers, what's happening in DBI is that the anonymous inner class extends the class of the object being constructed by the new operator, and hence has a "this" value referencing the instance being constructed. Very neat. Overall, DBI strikes me as something of an intellectual curiosity. Coobird and others point out you can achieve the same effect with Arrays.asList, varargs methods, Google Collections, and the proposed Java 7 collection literals. Newer JVM languages like Scala, JRuby, and Groovy also offer concise notations for list construction, and interoperate well with Java. Given that DBI clutters up the classpath, slows down class loading a bit, and makes the code a tad more obscure, I'd probably shy away from it. However, I plan to spring this on a friend who's just gotten his SCJP and loves good-natured jousts about Java semantics! ;-) Thanks everyone!

    Read the article

  • Custom property editors do not work for request parameters in Spring MVC?

    - by dvd
    Hello, I'm trying to create a multi-action web controller using Spring annotations. This controller will be responsible for adding and removing user profiles and preparing reference data for the JSP page.

        @Controller
        public class ManageProfilesController {

            @InitBinder
            public void initBinder(WebDataBinder binder) {
                binder.registerCustomEditor(UserAccount.class, "account", new UserAccountPropertyEditor(userManager));
                binder.registerCustomEditor(Profile.class, "profile", new ProfilePropertyEditor(profileManager));
                logger.info("Editors registered");
            }

            @RequestMapping("remove")
            public void up(
                    @RequestParam("account") UserAccount account,
                    @RequestParam("profile") Profile profile) {
                ...
            }

            @RequestMapping("")
            public ModelAndView defaultView(@RequestParam("account") UserAccount account) {
                logger.info("Default view handling");
                ModelAndView mav = new ModelAndView();
                logger.info(account.getLogin());
                mav.addObject("account", account);
                mav.addObject("profiles", profileManager.getProfiles());
                mav.setViewName(view);
                return mav;
            }
            ...
        }

    Here is the relevant part of my webContext.xml file:

        <context:component-scan base-package="ru.mirea.rea.webapp.controllers" />
        <context:annotation-config/>
        <bean class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping">
            <property name="mappings">
                <value>
                    ...
                    /home/users/manageProfiles=users.manageProfilesController
                </value>
            </property>
        </bean>
        <bean id="users.manageProfilesController" class="ru.mirea.rea.webapp.controllers.users.ManageProfilesController">
            <property name="view" value="home\users\manageProfiles"/>
        </bean>
        <bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter" />

    However, when I open the mapped URL, I get this exception: java.lang.IllegalArgumentException: Cannot convert value of type [java.lang.String] to required type [ru.mirea.rea.model.UserAccount]: no matching editors or conversion strategy found. I use Spring 2.5.6 and plan to move to Spring 3.0 in the not-too-distant future. However, according to this JIRA issue https://jira.springsource.org/browse/SPR-4182 it should already be possible in Spring 2.5.1. Debugging shows that the InitBinder method is correctly called. What am I doing wrong?

    Read the article

  • Why isn't my query using any indices when I use a subquery?

    - by sfussenegger
    I have the following tables (removed columns that aren't used for my examples): CREATE TABLE `person` ( `id` int(11) NOT NULL, `name` varchar(1024) NOT NULL, `sortname` varchar(1024) NOT NULL, PRIMARY KEY (`id`), KEY `sortname` (`sortname`(255)), KEY `name` (`name`(255)) ); CREATE TABLE `personalias` ( `id` int(11) NOT NULL, `person` int(11) NOT NULL, `name` varchar(1024) NOT NULL, PRIMARY KEY (`id`), KEY `person` (`person`), KEY `name` (`name`(255)) ) Currently, I'm using this query which works just fine: select p.* from person p where name = 'John Mayer' or sortname = 'John Mayer'; mysql> explain select p.* from person p where name = 'John Mayer' or sortname = 'John Mayer'; +----+-------------+-------+-------------+---------------+---------------+---------+------+------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+-------------+---------------+---------------+---------+------+------+----------------------------------------------+ | 1 | SIMPLE | p | index_merge | name,sortname | name,sortname | 767,767 | NULL | 3 | Using sort_union(name,sortname); Using where | +----+-------------+-------+-------------+---------------+---------------+---------+------+------+----------------------------------------------+ 1 row in set (0.00 sec) Now I'd like to extend this query to also consider aliases. First, I've tried using a join: select p.* from person p join personalias a where p.name = 'John Mayer' or p.sortname = 'John Mayer' or a.name = 'John Mayer'; mysql> explain select p.* from person p join personalias a on p.id = a.person where p.name = 'John Mayer' or p.sortname = 'John Mayer' or a.name = 'John Mayer'; +----+-------------+-------+--------+-----------------------+---------+---------+-------------------+-------+-----------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+-----------------------+---------+---------+-------------------+-------+-----------------+ | 1 | SIMPLE | a | ALL | ref,name | NULL | NULL | NULL | 87401 | Using temporary | | 1 | SIMPLE | p | eq_ref | PRIMARY,name,sortname | PRIMARY | 4 | musicbrainz.a.ref | 1 | Using where | +----+-------------+-------+--------+-----------------------+---------+---------+-------------------+-------+-----------------+ 2 rows in set (0.00 sec) This looks bad: no index, 87401 rows, using temporary. Using temporary only appears when I use distinct, but as an alias might be the same as the name, I can't really get rid of it. 
Next, I've tried to replace the join with a subquery: select p.* from person p where p.name = 'John Mayer' or p.sortname = 'John Mayer' or p.id in (select person from personalias a where a.name = 'John Mayer'); mysql> explain select p.* from person p where p.name = 'John Mayer' or p.sortname = 'John Mayer' or p.id in (select id from personalias a where a.name = 'John Mayer'); +----+--------------------+-------+----------------+------------------+--------+---------+------+--------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+--------------------+-------+----------------+------------------+--------+---------+------+--------+-------------+ | 1 | PRIMARY | p | ALL | name,sortname | NULL | NULL | NULL | 540309 | Using where | | 2 | DEPENDENT SUBQUERY | a | index_subquery | person,name | person | 4 | func | 1 | Using where | +----+--------------------+-------+----------------+------------------+--------+---------+------+--------+-------------+ 2 rows in set (0.00 sec) Again, this looks pretty bad: no index, 540309 rows. Interestingly, both queries (select p.* from person ... or p.id in (4711,12345) and select id from personalias a where a.name = 'John Mayer') work extremely well. Why doesn't MySQL use any indices for both of my queries? What else could I do? Currently, it looks best to fetch person.ids for aliases and add them statically as an in(...) to the second query. There certainly has to be another way to do this with a single query. I'm currently out of ideas though. Could I somehow force MySQL into using another (better) query plan?

    Read the article

  • IE won't load PDF in a window created with window.open

    - by Dean
    Here's the problem, which only occurs in Internet Explorer (IE). I have a page that has links to several different types of files. Links from these files execute a Javascript function that opens a new window and loads the specific file. This works great, unless the file I need to open in the new window is a PDF, in which case the window is blank, even though the URL is in the address field. Refreshing that window using F5 doesn't help. However, if I put the cursor in the address field and press <enter> the PDF loads right up. This problem only occurs in IE. I have seen it in IE 7 and 8 and am using Adobe Acrobat Reader 9. In Firefox (PC and Mac) everything works perfectly. In Chrome (Mac), the PDF is downloaded. In Safari (Mac) it works. In Opera (Mac) it prompts me to open or save. Basically, everything probably works fine, except for IE. I have searched for similar problems and have seen some posts where it was suggested to adjust some of the Internet Options in IE. I have tried this but it doesn't help, and the problem wasn't exactly the same anyway. Here's the Javascript function I use to open the new window:

        function newwin(url, w, h) {
            win = window.open(url, "temp", "width=" + w + ",height=" + h + ",menubar=yes,toolbar=yes,location=yes,status=yes,scrollbars=auto,resizable=yes");
            win.focus();
        }

    You can see that I pass in the URL as well as the height, h, and width, w, of the window. I've used a function like this for years and as far as I know have never had a problem. I call the newwin() function using this:

        <a href="javascript:newwin('/path/document.pdf',400,300)">document.pdf</a>

    (Yes, I know there are other, better ways than using inline JS, and I've even tried some of them because I've run out of things to try, but nothing works.) So, if anyone has an idea as to what might be causing this problem, I'd love to hear it.

    Read the article

  • debugging JBoss 100% CPU usage

    - by NateS
    Originally posted on Server Fault, where it was suggested this question might be better asked here. We are using JBoss to run two of our WARs. One is our web app, the other is our web service. The web app accesses a database on another machine and makes requests to the web service. The web service makes JMS requests to other machines, aggregates the data, and returns it. At our biggest client, about once a month the JBoss Java process takes 100% of all CPUs. The machine running JBoss has 8 CPUs. Our web app is still accessible during this time; however, pages take about 3 minutes to load. Restarting JBoss restores everything to normal. The database machine and all the other machines are fine; only the machine running JBoss is affected. Memory usage is normal. Network utilization is normal. There are no suspect error messages in the JBoss logs. I have set up a test environment as close as possible to the client's production environment and I've done load testing with as much as 2x the number of concurrent users. I have not gotten my test environment to replicate the problem. Where do we go from here? How can we narrow down the problem? Currently the only plan we have is to wait until the problem occurs in production on its own, then do some debugging to determine the cause. So far people have just restarted JBoss when the problem occurred to minimize downtime. Next time it happens they will get a developer to take a look. The question is: next time it happens, what can be done to determine the cause? We could set up a separate JBoss instance on the same box and install the web app separately from the web service. This way, when the problem next occurs, we will know which WAR has the problem (assuming it is our code). This doesn't narrow it down much though. Should I enable JMX remote? This way the next time the problem occurs I can connect with VisualVM and see which threads are taking the CPU and what the hell they are doing. However, is there a significant downside to enabling JMX remote in a production environment? Is there another way to see what threads are eating the CPU and to get a stacktrace to see what they are doing? Any other ideas? Thanks!
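
    One low-risk option for the "what are the threads doing" part: capture periodic thread dumps with the JDK's jstack tool while the spike is happening, then match the busiest OS threads (from top -H -p <pid>, thread IDs shown in hex) against the nid fields in the dumps. A sketch of such a watchdog follows; the script name, file naming and interval are illustrative assumptions, and it presumes jstack is on the PATH of the user running JBoss.

        import subprocess
        import sys
        import time

        # Usage: python dump_threads.py <jboss_pid> [interval_seconds]
        pid = sys.argv[1]
        interval = int(sys.argv[2]) if len(sys.argv) > 2 else 30

        while True:
            stamp = time.strftime("%Y%m%d-%H%M%S")
            # -l adds lock/synchronizer detail; each dump goes to its own timestamped file.
            with open("threaddump-%s.txt" % stamp, "w") as out:
                subprocess.call(["jstack", "-l", pid], stdout=out)
            time.sleep(interval)

    Comparing a handful of dumps taken a few seconds apart usually shows which threads never change their stack, without enabling JMX remote in production.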

    Read the article

  • Looking for efficient scaling patterns for Silverlight application with distributed text-file data s

    - by Edward Tanguay
    I'm designing a Silverlight software solution for students and teachers to record flashcards, e.g. words and phrases that students find while reading and errors that teachers notice while teaching. The requirements are:
    - each person publishes his own flashcards in a file on a web server, e.g. http://www.mywebserver.com/flashcards.txt
    - other people subscribe to that person's flashcards by using a Silverlight flashcard reader that I have developed and entering the URLs of the flashcard files they want to subscribe to, with the URLs and imported flashcards being saved in IsolatedStorage
    - the flashcards.txt file has the following simple format: a title, then blocks of question/answer pairs:

        Jim Smith's flashcards from English class 53-222, winter semester 2009
        ==fla
        Das kann nicht sein.
        That can't be.
        ==fla
        Es sei denn, er kommt nicht.
        Unless he doesn't come.

    The user then makes public the URL to his flashcard file and other readers begin reading in his flashcards. In order to lower the bar for non-technical users to contribute, it will even be possible for them to save this text in a Google Document, which they publish and distribute the URL to. The flashcard readers will then recognize that it is a Google document and perform the necessary screen scraping to get at the raw text. I have two technical questions about this approach:
    1. What is the best way to plan now for scalability issues: e.g. if your reader is subscribed to 10 flashcard files that are each 200K, it will have to download 2MB of text just to find out if any new flashcards are available. Or can I somehow accurately and consistently get at the last update date/time of text files on servers and published Google docs?
    2. Each reader will have the ability to allow the person to test himself on imported flashcards and add meta information to them, e.g. categorize them, edit them, etc. This information will be stored in IsolatedStorage along with the imported flashcards themselves. What is a good pattern to allow these readers to share and synchronize this meta data, e.g. so when you are looking at a flashcard you can see that 5 other people have made corrections to it? The best solution I can think of now is that the Silverlight readers will have to republish their data to a central database, but then there is the problem of uniquely identifying each flashcard; the best approach seems to be URL + position-in-file, or even better URL + original text of both question and answer fields, but both of these have their obvious drawbacks.
    The main requirement is that the bar for participation is kept as low as possible, i.e. type text in a Google document, publish it, distribute the URL, and you're publishing within the flashcard community. So I want to come up with the most efficient technical solutions in order to compensate for the lack of a database, lack of unique IDs, etc. For those who have designed or developed similar non-traditional, distributed database projects like this, what advice, experience or best-practice tips can you share on the above two points?
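
    For the "has the file changed" part of question 1, one common approach is an HTTP conditional GET: remember the ETag/Last-Modified validators from the previous fetch and let the server answer 304 Not Modified instead of resending the whole file. A sketch of the idea follows, in Python purely for illustration (the actual client here would be Silverlight), and whether a published Google Doc returns usable validators would need to be verified.

        import urllib.error
        import urllib.request

        def fetch_if_changed(url, etag=None, last_modified=None):
            # Returns (etag, last_modified, body); body is None when nothing changed.
            req = urllib.request.Request(url)
            if etag:
                req.add_header("If-None-Match", etag)
            if last_modified:
                req.add_header("If-Modified-Since", last_modified)
            try:
                with urllib.request.urlopen(req) as resp:
                    return (resp.headers.get("ETag"),
                            resp.headers.get("Last-Modified"),
                            resp.read().decode("utf-8"))
            except urllib.error.HTTPError as err:
                if err.code == 304:          # Not Modified: skip the 200K download
                    return etag, last_modified, None
                raise

    Plain static-file hosts generally honour these headers; for sources that don't, a small server-side proxy that computes and caches a hash per subscribed URL would serve the same purpose.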

    Read the article

  • Fixed point math in c#?

    - by x4000
    Hi there, I was wondering if anyone here knows of any good resources for fixed point math in c#? I've seen things like this (http://2ddev.72dpiarmy.com/viewtopic.php?id=156) and this (http://stackoverflow.com/questions/79677/whats-the-best-way-to-do-fixed-point-math), and a number of discussions about whether decimal is really fixed point or actually floating point (update: responders have confirmed that it's definitely floating point), but I haven't seen a solid C# library for things like calculating cosine and sine. My needs are simple -- I need the basic operators, plus cosine, sine, arctan2, PI... I think that's about it. Maybe sqrt. I'm programming a 2D RTS game, which I have largely working, but the unit movement when using floating-point math (doubles) has very small inaccuracies over time (10-30 minutes) across multiple machines, leading to desyncs. This is presently only between a 32 bit OS and a 64 bit OS, all the 32 bit machines seem to stay in sync without issue, which is what makes me think this is a floating point issue. I was aware from this as a possible issue from the outset, and so have limited my use of non-integer position math as much as possible, but for smooth diagonal movement at varying speeds I'm calculating the angle between points in radians, then getting the x and y components of movement with sin and cos. That's the main issue. I'm also doing some calculations for line segment intersections, line-circle intersections, circle-rect intersections, etc, that also probably need to move from floating-point to fixed-point to avoid cross-machine issues. If there's something open source in Java or VB or another comparable language, I could probably convert the code for my uses. The main priority for me is accuracy, although I'd like as little speed loss over present performance as possible. This whole fixed point math thing is very new to me, and I'm surprised by how little practical information on it there is on google -- most stuff seems to be either theory or dense C++ header files. Anything you could do to point me in the right direction is much appreciated; if I can get this working, I plan to open-source the math functions I put together so that there will be a resource for other C# programmers out there. UPDATE: I could definitely make a cosine/sine lookup table work for my purposes, but I don't think that would work for arctan2, since I'd need to generate a table with about 64,000x64,000 entries (yikes). If you know any programmatic explanations of efficient ways to calculate things like arctan2, that would be awesome. My math background is all right, but the advanced formulas and traditional math notation are very difficult for me to translate into code.
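
    A bare-bones illustration of the scaled-integer idea plus a table-based sine/cosine. The question is about C#; Python is used here only to keep the sketch short, and the Q16.16 format, table size and angle convention are arbitrary choices rather than anything from the question. The key to cross-machine determinism is that every runtime operation stays in integers, with any precomputed table baked into the build rather than regenerated with floating point on each machine.

        import math

        FRAC_BITS = 16
        ONE = 1 << FRAC_BITS                      # 1.0 in Q16.16

        def to_fixed(x):  return int(round(x * ONE))
        def to_float(f):  return f / ONE
        def fmul(a, b):   return (a * b) >> FRAC_BITS
        def fdiv(a, b):   return (a << FRAC_BITS) // b

        # Angles are table indices (fractions of a full turn), not radians, so every
        # machine indexes the same entries. In a real game this table would be
        # precomputed once and shipped as data, not rebuilt with math.sin at runtime.
        TABLE_SIZE = 4096
        SIN_TABLE = [to_fixed(math.sin(2 * math.pi * i / TABLE_SIZE)) for i in range(TABLE_SIZE)]

        def fsin(angle_index):
            return SIN_TABLE[angle_index & (TABLE_SIZE - 1)]

        def fcos(angle_index):
            return SIN_TABLE[(angle_index + TABLE_SIZE // 4) & (TABLE_SIZE - 1)]

        # Example: diagonal movement at a fixed-point speed, 512/4096 of a turn = 45 degrees.
        speed = to_fixed(2.5)
        dx = fmul(speed, fcos(512))
        dy = fmul(speed, fsin(512))
        print(to_float(dx), to_float(dy))   # both roughly 1.77

    For arctan2 the usual trick is octant reduction plus a one-dimensional arctan table (or CORDIC), which avoids anything like a 64,000x64,000 table.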

    Read the article

  • Getting results in a result set from dynamic SQL in Oracle

    - by msorens
    This question is similar to a couple others I have found on StackOverflow, but the differences are signficant enough to me to warrant a new question, so here it is: I want to obtain a result set from dynamic SQL in Oracle and then display it as a result set in a SqlDeveloper-like tool, just as if I had executed the dynamic SQL statement directly. This is straightforward in SQL Server, so to be concrete, here is an example from SQL Server that returns a result set in SQL Server Management Studio or Query Explorer: EXEC sp_executesql N'select * from countries' Or more properly: DECLARE @stmt nvarchar(100) SET @stmt = N'select * from countries' EXEC sp_executesql @stmt The question "How to return a resultset / cursor from a Oracle PL/SQL anonymous block that executes Dynamic SQL?" addresses the first half of the problem--executing dynamic SQL into a cursor. The question "How to make Oracle procedure return result sets" provides a similar answer. Web search has revealed many variations of the same theme, all addressing just the first half of my question. I found this post explaining how to do it in SqlDeveloper, but that uses a bit of functionality of SqlDeveloper. I am actually using a custom query tool so I need the solution to be self-contained in the SQL code. This custom query tool similarly does not have the capability to show output of print (dbms_output.put_line) statements; it only displays result sets. Here is yet one more possible avenue using 'execute immediate...bulk collect', but this example again renders the results with a loop of dbms_output.put_line statements. This link attempts to address the topic but the question never quite got answered there either. Assuming this is possible, I will add one more condition: I would like to do this without having to define a function or procedure (due to limited DB permissions). That is, I would like to execute a self-contained PL/SQL block containing dynamic SQL and return a result set in SqlDeveloper or a similar tool. So to summarize: I want to execute an arbitrary SQL statement (hence dynamic SQL). The platform is Oracle. The solution must be a PL/SQL block with no procedures or functions. The output must be generated as a canonical result set; no print statements. The output must render as a result set in SqlDeveloper without using any SqlDeveloper special functionality. Any suggestions?

    Read the article

  • VB6 app not executing as scheduled task unless user is logged on

    - by Tedd Hansen
    Hi! Would greatly appreciate some help on this one! It may be a tricky one. :) Problem I have a VB6 application which is set up as a scheduled task. It starts every time, but when executing CreateObject it fails if no user is logged on to the computer. I am looking for information on what could cause this. My primary suspicion is that some Windows API call fails. Key points Behaviour confirmed on Windows 2000, 2003, 2008 and Vista. The application executes as user X at the scheduled time, executed by Windows Task Scheduler. It executes every time. Application does start! -- If user X is logged on via RDP it runs perfectly. (Note that the user doesn't need to be connected, only logged on) -- If user X is not logged on to the computer the application fails. Failure point The application fails when using CreateObject() to instantiate a DCOM object which is also part of the application. The DCOM objects declare .dll references at startup (globally/on top of the .bas file) and run a small startup function. The failure must be during startup, possibly in one of the .dll declarations. Thoughts After some Googling my initial suspicion was directed at MAPI. From what I could see, MAPI requires the user to be logged on. The application has MAPI references. But even with all MAPI references removed it still does not work. What is the difference when a user is logged on? Registry mapping? Environment? explorer.exe is running. Isn't the user logged on when the application executes as the user? What info would help? A definitive answer would be truly great. Any information regarding any VB6 feature/Windows API that could act differently depending on whether the user is logged on or not would definitely help. Similar experiences may lead me in the right direction. Tips on debugging this. Thanks! :)

    Read the article

  • Why is Oracle using a skip scan for this query?

    - by Jason Baker
    Here's the tkprof output for a query that's running extremely slowly (WARNING: it's long :-) ): SELECT mbr_comment_idn, mbr_crt_dt, mbr_data_source, mbr_dol_bl_rmo_ind, mbr_dxcg_ctl_member, mbr_employment_start_dt, mbr_employment_term_dt, mbr_entity_active, mbr_ethnicity_idn, mbr_general_health_status_code, mbr_hand_dominant_code, mbr_hgt_feet, mbr_hgt_inches, mbr_highest_edu_level, mbr_insd_addr_idn, mbr_insd_alt_id, mbr_insd_name, mbr_insd_ssn_tin, mbr_is_smoker, mbr_is_vip, mbr_lmbr_first_name, mbr_lmbr_last_name, mbr_marital_status_cd, mbr_mbr_birth_dt, mbr_mbr_death_dt, mbr_mbr_expired, mbr_mbr_first_name, mbr_mbr_gender_cd, mbr_mbr_idn, mbr_mbr_ins_type, mbr_mbr_isreadonly, mbr_mbr_last_name, mbr_mbr_middle_name, mbr_mbr_name, mbr_mbr_status_idn, mbr_mpi_id, mbr_preferred_am_pm, mbr_preferred_time, mbr_prv_innetwork, mbr_rep_addr_idn, mbr_rep_name, mbr_rp_mbr_id, mbr_same_mbr_ins, mbr_special_needs_cd, mbr_timezone, mbr_upd_dt, mbr_user_idn, mbr_wgt, mbr_work_status_idn FROM (SELECT /*+ FIRST_ROWS(1) */ mbr_comment_idn, mbr_crt_dt, mbr_data_source, mbr_dol_bl_rmo_ind, mbr_dxcg_ctl_member, mbr_employment_start_dt, mbr_employment_term_dt, mbr_entity_active, mbr_ethnicity_idn, mbr_general_health_status_code, mbr_hand_dominant_code, mbr_hgt_feet, mbr_hgt_inches, mbr_highest_edu_level, mbr_insd_addr_idn, mbr_insd_alt_id, mbr_insd_name, mbr_insd_ssn_tin, mbr_is_smoker, mbr_is_vip, mbr_lmbr_first_name, mbr_lmbr_last_name, mbr_marital_status_cd, mbr_mbr_birth_dt, mbr_mbr_death_dt, mbr_mbr_expired, mbr_mbr_first_name, mbr_mbr_gender_cd, mbr_mbr_idn, mbr_mbr_ins_type, mbr_mbr_isreadonly, mbr_mbr_last_name, mbr_mbr_middle_name, mbr_mbr_name, mbr_mbr_status_idn, mbr_mpi_id, mbr_preferred_am_pm, mbr_preferred_time, mbr_prv_innetwork, mbr_rep_addr_idn, mbr_rep_name, mbr_rp_mbr_id, mbr_same_mbr_ins, mbr_special_needs_cd, mbr_timezone, mbr_upd_dt, mbr_user_idn, mbr_wgt, mbr_work_status_idn, ROWNUM AS ora_rn FROM (SELECT mbr.comment_idn AS mbr_comment_idn, mbr.crt_dt AS mbr_crt_dt, mbr.data_source AS mbr_data_source, mbr.dol_bl_rmo_ind AS mbr_dol_bl_rmo_ind, mbr.dxcg_ctl_member AS mbr_dxcg_ctl_member, mbr.employment_start_dt AS mbr_employment_start_dt, mbr.employment_term_dt AS mbr_employment_term_dt, mbr.entity_active AS mbr_entity_active, mbr.ethnicity_idn AS mbr_ethnicity_idn, mbr.general_health_status_code AS mbr_general_health_status_code, mbr.hand_dominant_code AS mbr_hand_dominant_code, mbr.hgt_feet AS mbr_hgt_feet, mbr.hgt_inches AS mbr_hgt_inches, mbr.highest_edu_level AS mbr_highest_edu_level, mbr.insd_addr_idn AS mbr_insd_addr_idn, mbr.insd_alt_id AS mbr_insd_alt_id, mbr.insd_name AS mbr_insd_name, mbr.insd_ssn_tin AS mbr_insd_ssn_tin, mbr.is_smoker AS mbr_is_smoker, mbr.is_vip AS mbr_is_vip, mbr.lmbr_first_name AS mbr_lmbr_first_name, mbr.lmbr_last_name AS mbr_lmbr_last_name, mbr.marital_status_cd AS mbr_marital_status_cd, mbr.mbr_birth_dt AS mbr_mbr_birth_dt, mbr.mbr_death_dt AS mbr_mbr_death_dt, mbr.mbr_expired AS mbr_mbr_expired, mbr.mbr_first_name AS mbr_mbr_first_name, mbr.mbr_gender_cd AS mbr_mbr_gender_cd, mbr.mbr_idn AS mbr_mbr_idn, mbr.mbr_ins_type AS mbr_mbr_ins_type, mbr.mbr_isreadonly AS mbr_mbr_isreadonly, mbr.mbr_last_name AS mbr_mbr_last_name, mbr.mbr_middle_name AS mbr_mbr_middle_name, mbr.mbr_name AS mbr_mbr_name, mbr.mbr_status_idn AS mbr_mbr_status_idn, mbr.mpi_id AS mbr_mpi_id, mbr.preferred_am_pm AS mbr_preferred_am_pm, mbr.preferred_time AS mbr_preferred_time, mbr.prv_innetwork AS mbr_prv_innetwork, mbr.rep_addr_idn AS mbr_rep_addr_idn, mbr.rep_name AS 
mbr_rep_name, mbr.rp_mbr_id AS mbr_rp_mbr_id, mbr.same_mbr_ins AS mbr_same_mbr_ins, mbr.special_needs_cd AS mbr_special_needs_cd, mbr.timezone AS mbr_timezone, mbr.upd_dt AS mbr_upd_dt, mbr.user_idn AS mbr_user_idn, mbr.wgt AS mbr_wgt, mbr.work_status_idn AS mbr_work_status_idn FROM mbr JOIN mbr_identfn ON mbr.mbr_idn = mbr_identfn.mbr_idn WHERE mbr_identfn.mbr_idn = mbr.mbr_idn AND mbr_identfn.identfd_type = :identfd_type_1 AND mbr_identfn.identfd_number = :identfd_number_1 AND mbr_identfn.entity_active = :entity_active_1) WHERE ROWNUM <= :ROWNUM_1) WHERE ora_rn > :ora_rn_1 call count cpu elapsed disk query current rows ------- ------ -------- ---------- ---------- ---------- ---------- ---------- Parse 9936 0.46 0.49 0 0 0 0 Execute 9936 0.60 0.59 0 0 0 0 Fetch 9936 329.87 404.00 0 136966922 0 0 ------- ------ -------- ---------- ---------- ---------- ---------- ---------- total 29808 330.94 405.09 0 136966922 0 0 Misses in library cache during parse: 0 Optimizer mode: FIRST_ROWS Parsing user id: 36 (JIVA_DEV) Rows Row Source Operation ------- --------------------------------------------------- 0 VIEW (cr=102 pr=0 pw=0 time=2180 us) 0 COUNT STOPKEY (cr=102 pr=0 pw=0 time=2163 us) 0 NESTED LOOPS (cr=102 pr=0 pw=0 time=2152 us) 0 INDEX SKIP SCAN IDX_MBR_IDENTFN (cr=102 pr=0 pw=0 time=2140 us)(object id 341053) 0 TABLE ACCESS BY INDEX ROWID MBR (cr=0 pr=0 pw=0 time=0 us) 0 INDEX UNIQUE SCAN PK_CLAIMANT (cr=0 pr=0 pw=0 time=0 us)(object id 334044) Rows Execution Plan ------- --------------------------------------------------- 0 SELECT STATEMENT MODE: HINT: FIRST_ROWS 0 VIEW 0 COUNT (STOPKEY) 0 NESTED LOOPS 0 INDEX MODE: ANALYZED (SKIP SCAN) OF 'IDX_MBR_IDENTFN' (INDEX (UNIQUE)) 0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'MBR' (TABLE) 0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PK_CLAIMANT' (INDEX (UNIQUE)) ******************************************************************************** Based on my reading of Oracle's documentation of skip scans, a skip scan is most useful when the first column of an index has a low number of unique values. The thing is that the first index of this column is a unique primary key. So am I correct in assuming that a skip scan is the wrong thing to do here? Also, what kind of scan should it be doing? Should I do some more hinting for this query? EDIT: I should also point out that the query's where clause uses the columns in IDX_MBR_IDENTFN and no columns other than what's in that index. So as far as I can tell, I'm not skipping any columns.

    Read the article

  • NoSQL DB for .Net document-based database (ECM)

    - by Dane
    I'm halfway through coding a basic multi-tenant SaaS ECM solution. Each client has its own instance of the database / datastore, but the .Net app is a single instance. The documents are pretty much read only (i.e. an image archive of TIFFs or PDFs). I've used MSSQL so far, but then started thinking this might be viable in a NoSQL DB (e.g. MongoDB, CouchDB). The basic premise is that it stores documents, each with their own particular indexes. Each tenant can have multiple document types, e.g. one tenant might have an invoice type, which has Customer ID, Invoice Number and Invoice Date. Another tenant might have an application form, which has Member Number, Application Number, Member Name, and Application Date. So far I've used the old method which SharePoint used (uses?), and created a document table which has int_field_1, int_field_2, date_field_1, date_field_2, etc. Then I've got a "mapping" table which stores the customer-specific index name and the database field it will map to. I've avoided the key-value pair model in the DB due to the volume of documents. This way, we can support multiple document types in the one table, get reasonably high performance out of it, and allow for custom document type searches (i.e. the user selects a document type, then they're presented with a list of search fields). However, a NoSQL DB might make this a lot simpler, as I wouldn't need to worry about denormalizing the document. However, I've got concerns about the rest of the data around a document. We store an "action history" against the document. This tracks views, whether someone emails the document from within the system, and other "future" functionality (e.g. faxing). We have control over the document load process, so we can manipulate the data however it needs to be to get it into the document store (e.g. assign unique IDs). Users will not be adding in their own documents, so we shouldn't need to worry about ACID compliance, as the documents are relatively static. So, my questions I guess: Is a NoSQL DB a good fit? Is MongoDB the best for ASP.NET (I saw Raven and Velocity, but they're still kinda beta)? Can I store a key for each document, and then store the action history in an MSSQL DB with this key? I don't need to do joins; it would only be when a person clicks "View History" against a document. How would performance compare between the two (NoSQL DB vs. a denormalized "document" table)? Volumes would be up to 200,000 new documents per month for a single tenant. My current scaling plan with the SQL DB involves moving the SQL DB into a cluster when certain thresholds are reached, and then reviewing partitioning and indexing structures.
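    As a rough illustration of where a document store could simplify the model above, the tenant-specific index fields can simply live in a per-document dictionary, and the action history can stay in a narrow MSSQL table that only carries the document's key. This is a sketch with hypothetical class and field names, not tied to any particular driver or schema:

    using System;
    using System.Collections.Generic;

    // Hypothetical shape of a stored document: each tenant's custom index fields
    // ("Invoice Number", "Member Name", ...) go in a dictionary, so there is no
    // int_field_1 / date_field_1 column pool and no mapping table to maintain.
    public class ArchivedDocument
    {
        public Guid Id { get; set; }              // assigned during the controlled load process
        public string TenantId { get; set; }
        public string DocumentType { get; set; }  // "Invoice", "ApplicationForm", ...
        public string BlobPath { get; set; }      // where the TIFF/PDF itself lives
        public Dictionary<string, object> IndexFields { get; set; }
            = new Dictionary<string, object>();
    }

    // Action history stays relational: one row per view/email/fax, keyed by the
    // document's Id, and only read when someone clicks "View History".
    public class DocumentAction
    {
        public long ActionId { get; set; }
        public Guid DocumentId { get; set; }
        public string ActionType { get; set; }    // "Viewed", "Emailed", "Faxed", ...
        public string UserName { get; set; }
        public DateTime OccurredAtUtc { get; set; }
    }

    public static class Example
    {
        public static ArchivedDocument BuildInvoice()
        {
            return new ArchivedDocument
            {
                Id = Guid.NewGuid(),
                TenantId = "tenant-a",
                DocumentType = "Invoice",
                BlobPath = @"\\archive\tenant-a\2010\inv-000123.tif",
                IndexFields =
                {
                    ["CustomerId"] = 42,
                    ["InvoiceNumber"] = "INV-000123",
                    ["InvoiceDate"] = new DateTime(2010, 3, 15)
                }
            };
        }
    }

    Whether that dictionary ends up as a MongoDB/Raven document or as a serialized column in MSSQL, the point of the sketch is the key question above: the document store only needs to hold the key and the searchable fields, and the action history can keep living in SQL against that key.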

    Read the article

  • Amusing or Sad? Network Solutions

    - by dbasnett
    When I got sick my email ended up on every drug seller's email list. Some days I get over 200 emails selling everything from Viagra to Xanax. Either they don't know what my condition is, or they are telling me you are a goner, might as well chill-ax and have a good time. In order to cut down on the mail being downloaded, I thought I would add all of the junk email senders from Outlook to my Network Solutions mail server. Much to my amazement I could not find that "import spammers" button, so I submitted a tech support request. Here is the response: Thank you for contacting Network Solutions Customer Service Department. We are committed to creating the best Customer experience possible. One of the first ways we can demonstrate our commitment to this goal is to quickly and efficiently handle your recent request. We apologize for any inconvenience this might have caused you. With regard to your concern, please be advised that we cannot import blocked senders in to you e-mail servers. An alternative option is for you to create a Custom Filter that filters unwanted e-mails. To create a Custom Filter: Open a Web browser (e.g., Netscape, Microsoft Internet Explorer, etc.). Type mail.[domain name].[ext] in the address line. Login to your Network Solutions email account. Click on the Configuration left menu tab. Click on the Custom Filter link. Type the rule name. blah, blah, blah Basically, add them one at a time. "We are committed to creating the best Customer experience possible." No you are not. You are trying to squeeze every nickel you can out of me. "With regard to your concern, please be advised that we cannot import blocked senders in to you e-mail servers." Maybe I should apply for a job to write those ten complicated lines of code... Maybe I should question my choice of vendors, because if they truly "cannot" then they are too stupid to have my business. It is both amusing and sad. I'll be posting this in every forum I am a member of.

    Read the article
