Search Results

Search found 4109 results on 165 pages for 'plan'.


  • Difficulty setting ArrayList to java.sql.Blob to save in DB using hibernate

    - by me_here
    I'm trying to save a Java ArrayList in a database (H2) by setting it as a blob, for retrieval later. If this is a bad approach, please say so - I haven't been able to find much information in this area. I have a column of type Blob in the database, and Hibernate maps to this with java.sql.Blob. The code I'm struggling with is:

        Drawings drawing = new Drawings();
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = null;
            oos = new ObjectOutputStream(bos);
            oos.writeObject(plan.drawingPane21.pointList);
            byte[] buff = bos.toByteArray();
            Blob drawingBlob = null;
            drawingBlob.setBytes(0, buff);
            drawing.setDrawingObject(drawingBlob);
        } catch (Exception e) {
            System.err.println(e);
        }

    The object I'm trying to save into a blob (plan.drawingPane21.pointList) is of type ArrayList<DrawingDot>, DrawingDot being a custom class implementing Serializable. My code is failing on the line drawingBlob.setBytes(0, buff); with a NullPointerException. Help appreciated.
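
    The NullPointerException comes from calling setBytes on a Blob reference that is still null (and note that java.sql.Blob.setBytes positions are 1-based, so 0 would be invalid even on a real Blob). A common alternative is to map the column to a byte[] property and let Hibernate handle the LOB conversion; a minimal sketch of the serialization step, with a hypothetical byte[]-typed setter on Drawings:

        // Sketch only: serializes any Serializable object graph (such as an
        // ArrayList<DrawingDot>) to a byte[] that a byte[]-mapped Hibernate
        // property can store. The setter name below is hypothetical.
        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.io.ObjectOutputStream;

        public final class BlobHelper {
            public static byte[] toBytes(Object value) throws IOException {
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                ObjectOutputStream oos = new ObjectOutputStream(bos);
                oos.writeObject(value);
                oos.close();   // flushes the object stream
                return bos.toByteArray();
            }
        }

        // Usage (hypothetical property on Drawings):
        // drawing.setDrawingBytes(BlobHelper.toBytes(plan.drawingPane21.pointList));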


  • Project Architecture for Enhancing a Legacy Project

    - by vijay.shad
    I am working on a legacy project, and the situation now demands that the project be divided into parts. What strategy should I follow for this task? Description: The legacy project (A) is a fully functional web application with reasonably well-defined layers, and I now need to extend it with a further enhancement. The project uses Maven as its build tool, but only for dependency management (the project is exported to a WAR from inside Eclipse). The new enhancement requires me to add a new data table and new UI (JSP, CSS, JS and images). What should my strategy be for enhancing the application? My proposed design: I am planning to create two new projects. Project B: the main enhancement work will be done in this project; it will have all the layers (service layer, DAO layer and UI layer) and will be a web application in itself. Project C: extract some common model and service code from project A into this project, which will be added as a dependency to both of the other projects. If this approach is okay, then I presume there will be a problem with deployment: the two projects would need to be deployed separately (Tomcat is currently used), but I must deploy them as one WAR. So I need a plan to change the web.xml entries to hold the configuration for both projects, which adds more complexity. What should my design be for this project? Does my plan sound good?


  • How can I test this SQL Server performance Utility?

    - by Martin Smith
    As part of my MSc I need to do a three month project later this year. I have decided to do something which will likely be useful for me in the workplace and spend the time getting to understand SQL Server internals. The deliverable for this project will be a performance advisor looking at a variety of different rules. Some static such as finding redundant indexes, some more dynamic such as using XEvents to find outlying invocations of stored procedure execution times when certain parameters are passed. I am struggling to come up with a good way of testing this though. I can obviously design a "bad" database and a synthetic workload that my tool will pick up issues on but I also need to demonstrate that it has real world utility. Looking at the self tuning database literature it is common to use TPC benchmarks but I've had a look at the TPCC site and it looks very time consuming to implement and not that good a fit to my project's testing needs in any event (I would still be able to "rig" it by the decisions I made on indexing or physical architecture). Plan A would be to find willing beta tester(s) but in the event that isn't possible I will need a fallback plan. The best idea I have come up with so far is to use the various MS sample applications as examples of real world applications. e.g. http://msftdpprodsamples.codeplex.com/ http://www.asp.net/community/projects/ Does anyone have any better suggestions?


  • Impact of ordering of correlated subqueries within a projection

    - by Michael Petito
    I'm noticing something a bit unexpected with how SQL Server (SQL Server 2008 in this case) treats correlated subqueries within a select statement. My assumption was that a query plan should not be affected by the mere order in which subqueries (or columns, for that matter) are written within the projection clause of the select statement. However, this does not appear to be the case. Consider the following two queries, which are identical except for the ordering of the subqueries within the CTE:

        --query 1: subquery for Color is second
        WITH vw AS (
            SELECT p.[ID],
                   (SELECT TOP(1) [FirstName] FROM [Preference]
                    WHERE p.ID = ID AND [FirstName] IS NOT NULL
                    ORDER BY [LastModified] DESC) [FirstName],
                   (SELECT TOP(1) [Color] FROM [Preference]
                    WHERE p.ID = ID AND [Color] IS NOT NULL
                    ORDER BY [LastModified] DESC) [Color]
            FROM Person p
        )
        SELECT ID, Color, FirstName FROM vw WHERE Color = 'Gray';

        --query 2: subquery for Color is first
        WITH vw AS (
            SELECT p.[ID],
                   (SELECT TOP(1) [Color] FROM [Preference]
                    WHERE p.ID = ID AND [Color] IS NOT NULL
                    ORDER BY [LastModified] DESC) [Color],
                   (SELECT TOP(1) [FirstName] FROM [Preference]
                    WHERE p.ID = ID AND [FirstName] IS NOT NULL
                    ORDER BY [LastModified] DESC) [FirstName]
            FROM Person p
        )
        SELECT ID, Color, FirstName FROM vw WHERE Color = 'Gray';

    If you look at the two query plans, you'll see that an outer join is used for each subquery and that the order of the joins matches the order in which the subqueries are written. There is a filter applied to the result of the outer join for color, to remove rows where the color is not 'Gray'. (It's odd to me that SQL Server would use an outer join for the color subquery since I have a non-null constraint on the result of the color subquery, but OK.) Most of the rows are removed by the color filter. The result is that query 2 is significantly cheaper than query 1 because fewer rows are involved with the second join. All reasons for constructing such a statement aside, is this expected behavior? Shouldn't SQL Server opt to move the filter as early as possible in the query plan, regardless of the order in which the subqueries are written?


  • Java remove HTML from String without regular expressions

    - by behrk2
    Hello, I am trying to remove all HTML elements from a String. Unfortunately, I cannot use regular expressions because I am developing on the Blackberry platform and regular expressions are not yet supported. Is there any other way that I can remove HTML from a string? I read somewhere that you can use a DOM Parser, but I couldn't find much on it. Text with HTML: <![CDATA[As a massive asteroid hurtles toward Earth, NASA head honcho Dan Truman (<a href="http://www.netflix.com/RoleDisplay/Billy_Bob_Thornton/20000303">Billy Bob Thornton</a>) hatches a plan to split the deadly rock in two before it annihilates the entire planet, calling on Harry Stamper (<a href="http://www.netflix.com/RoleDisplay/Bruce_Willis/99786">Bruce Willis</a>) -- the world's finest oil driller -- to head up the mission. With time rapidly running out, Stamper assembles a crack team and blasts off into space to attempt the treacherous task. <a href="http://www.netflix.com/RoleDisplay/Ben_Affleck/20000016">Ben Affleck</a> and <a href="http://www.netflix.com/RoleDisplay/Liv_Tyler/162745">Liv Tyler</a> co-star.]]> Text without HTML: As a massive asteroid hurtles toward Earth, NASA head honcho Dan Truman (Billy Bob Thornton) hatches a plan to split the deadly rock in two before it annihilates the entire planet, calling on Harry Stamper (Bruce Willis) -- the world's finest oil driller -- to head up the mission. With time rapidly running out, Stamper assembles a crack team and blasts off into space to attempt the treacherous task.Ben Affleck and Liv Tyler co-star. Thanks!
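
    Without java.util.regex, a simple character scan can do the job; a minimal sketch, assuming the CDATA wrapper has already been removed (everything between '<' and '>' is treated as a tag, and HTML entities are not decoded):

        // Minimal sketch: strips <...> tags by scanning characters, no regex needed.
        // Uses StringBuffer so it also works on CLDC-based BlackBerry runtimes.
        public final class TagStripper {
            public static String stripTags(String html) {
                StringBuffer out = new StringBuffer(html.length());
                boolean insideTag = false;
                for (int i = 0; i < html.length(); i++) {
                    char c = html.charAt(i);
                    if (c == '<') {
                        insideTag = true;
                    } else if (c == '>') {
                        insideTag = false;
                    } else if (!insideTag) {
                        out.append(c);
                    }
                }
                return out.toString();
            }
        }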


  • Replace System.Net.Mail.MailMessage with manually created message and send it

    - by DEH
    I am trying to send emails that will bounce to a known mailbox, and I plan to use VERP. Unfortunately the System.Net.Mail.MailMessage object does not allow me to precisely set the From: and Sender: headers within my email - it forces the values so that the resulting email contains the phrase 'on behalf of', and does not allow me fine control over the relevant MIME headers. I therefore plan to manually write MIME email messages directly to the pickup directory so that I can independently control the From and Sender headers. My dev box is a Vista box and therefore does not have an SMTP server. I would like to configure the dev box so that I have an SMTP server running on it. I can then turn off the SMTP server, write messages to the pickup dir, then turn on the SMTP server and see how the individual emails that I have written will behave (some delivered, some bounced to a bounce handler on a different email domain, as dictated by the Sender). Two questions: 1. Can anyone recommend an SMTP server that will monitor a pickup directory? 2. If I set the headers as follows: From: [email protected]; Sender: [email protected] - will the recipient see the email as having come from [email protected] (and not see any reference to [email protected]), while if the mail bounces the NDR will be sent to [email protected]? It's a real pain to have to do this, but I can't see any way of using System.Net.Mail.MailMessage without it messing up my headers.
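
    For reference, a message dropped into a pickup directory is just a plain RFC 2822 text file (typically saved with a .eml extension for the IIS SMTP service pickup folder); a hedged sketch with distinct From and Sender headers - every address below is a placeholder, not the poster's:

        From: Newsletter <newsletter@example.com>
        Sender: bounce-12345@bounces.example.net
        To: recipient@example.org
        Subject: Example VERP message
        MIME-Version: 1.0
        Content-Type: text/plain; charset=us-ascii

        Body text goes here.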


  • Poor execution plans when using a filter and CONTAINSTABLE in a query

    - by Paul McLoughlin
    We have an interesting problem that I was hoping someone could help to shed some light on. At a high level the problem is as follows. The following query executes quickly (1 second):

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key] = SA.UNIQUE_ID

    but if we add a filter to the query, then it takes approximately 2 minutes to return:

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > '19 Feb 2010'

    Looking at the execution plans for the two queries, I can see that in the second case there are two places where there are huge differences between the actual and estimated number of rows, these being: 1) for the FulltextMatch table valued function, where the estimate is approx 22,000 rows and the actual is 29 million rows (which are then filtered down to 1,670 rows before the join), and 2) for the index seek on the full text index, where the estimate is 1 row and the actual is 13,000 rows. As a result of the estimates, the optimiser is choosing to use a nested loops join (since it assumes a small number of rows), hence the plan is inefficient. We can work around the problem by either (a) parameterising the query and adding OPTION (OPTIMIZE FOR UNKNOWN) to the query, or (b) forcing a HASH JOIN to be used. In both of these cases the query returns in under 1 second and the estimates appear reasonable. My question really is: why are the estimates used in the poorly performing case so wildly inaccurate, and what can be done to improve them? Statistics are up to date on the indexes of the indexed view being used here. Any help greatly appreciated.
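
    For reference, the two workarounds described above look roughly like this (sketch only; table and column names are taken from the question, and the @changedSince variable is hypothetical):

        -- (a) Parameterise the filter value and ask the optimiser not to sniff it
        DECLARE @changedSince datetime;
        SET @changedSince = '19 Feb 2010';
        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > @changedSince
        OPTION (OPTIMIZE FOR UNKNOWN);

        -- (b) Force a hash join instead of the nested loops join the low estimates produce
        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > '19 Feb 2010'
        OPTION (HASH JOIN);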


  • SQL Server 2005, wide indexes, computed columns, and sargable queries

    - by luksan
    In my database, assume we have a table defined as follows:

        CREATE TABLE [Chemical](
            [ChemicalId] int NOT NULL IDENTITY(1,1) PRIMARY KEY,
            [Name] nvarchar(max) NOT NULL,
            [Description] nvarchar(max) NULL
        )

    The value for Name can be very large, so we must use nvarchar(max). Unfortunately, we want to create an index on this column, but nvarchar(max) is not supported inside an index. So we create the following computed column and an associated index based upon it:

        ALTER TABLE [Chemical] ADD [Name_Indexable] AS LEFT([Name], 20)
        CREATE INDEX [IX_Name] ON [Chemical]([Name_Indexable]) INCLUDE([Name])

    The index will not be unique, but we can enforce uniqueness via a trigger. If we perform the following query, the execution plan results in an index scan, which is not what we want:

        SELECT [ChemicalId], [Name], [Description]
        FROM [Chemical]
        WHERE [Name]='[1,1''-Bicyclohexyl]-2-carboxylic acid, 4'',5-dihydroxy-2'',3-dimethyl-5'',6-bis[(1-oxo-2-propen-1-yl)oxy]-, methyl ester'

    However, if we modify the query to make it "sargable," then the execution plan results in an index seek, which is what we want:

        SELECT [ChemicalId], [Name], [Description]
        FROM [Chemical]
        WHERE [Name_Indexable]='[1,1''-Bicyclohexyl]-'
        AND [Name]='[1,1''-Bicyclohexyl]-2-carboxylic acid, 4'',5-dihydroxy-2'',3-dimethyl-5'',6-bis[(1-oxo-2-propen-1-yl)oxy]-, methyl ester'

    Is this a good solution if we control the format of all queries executed against the database via our middle tier? Is there a better way? Is this a major kludge? Should we be using full-text indexing?


  • Skip Checkout in Magento for a downloadable product

    - by Aaron Newton
    Hello Magento boffins. I am using Magento to build an eBooks site. For the release, we plan to have a number of free downloadable books. We were hoping that it would be possible to use the normal Magento 'catalog' functionality to add categories with products underneath. However, since these are free downloadable products, it doesn't really make sense to send users through the checkout when they try to download. Does anyone know of a way to create a free downloadable product which bypasses the checkout altogether? I have noticed that there is a 'free sample' option for downloadable products, but I would prefer not to use this if I can avoid it, as I plan to use this field for its intended purpose when I add paid products. [EDIT] I have noticed that some of you have voted this question down for 'lack of question clarity'. For clarity, I want to know whether it is possible to create a downloadable product in Magento which doesn't require users to go through the usual checkout process (since it is free) and which does not use the 'Free Sample' field of a downloadable product. Unfortunately I don't think I can ask this any more eloquently. [/EDIT]


  • SQL Server 2008 - Shrinking the Transaction Log - Any way to automate?

    - by Albert
    I went in and checked my transaction log the other day and it was something crazy like 15GB. I ran the following code:

        USE mydb
        GO
        BACKUP LOG mydb WITH TRUNCATE_ONLY
        GO
        DBCC SHRINKFILE(mydb_log, 8)
        GO

    This worked fine and shrank it down to 8MB... but the DB in question is a log shipping publisher, and the log is already back up to some 500MB and growing quickly. Is there any way to automate this log shrinking, outside of creating a custom "Execute T-SQL Statement Task" Maintenance Plan task and hooking it onto my log backup task? If that's the best way then fine... but I was just thinking that SQL Server would have a better way of dealing with this. I thought it was supposed to shrink automatically whenever you took a log backup, but that's not happening (perhaps because of my log shipping, I don't know). Here's my current backup plan: full backups every night; transaction log backups once a day, late morning (maybe hook the log shrinking onto this... it doesn't need to be shrunk every day though). Or maybe I just run it once a week, after I run a full backup task? What do you all think?


  • Easiest RPC client method in PHP

    - by T.K.
    I've been asked to help a friend's company bring up a web application. I have very limited time, and I reluctantly accepted the request, on one condition. As most of the logic goes on in the back-end, I suggested that I would finish the complete back-end only, allowing a front-end developer to simply interface with my back-end. I plan to do the back-end in Java EE or Python (with Pylons); it does not really matter at this point. I plan to have my back-end completely ready and unit-tested, so that my input will hardly be needed after my work is done. I know they have a PHP programmer, but as far as I could tell he is a real rookie. I want him to interface with my back-end's services in the easiest possible way, with no way of him "stuffing" it up. It's basically a CRUD-only application. I could implement the back-end as accessible through a web service such as XML-RPC or SOAP. Even a RESTful API would be possible. However, my main objective is to make something that a complete "noob" PHP programmer can easily interface with without getting confused. Preferably I do not even want to talk to him, because I generally have an extremely busy schedule, and doing "support calls" is not something I am willing to do. Which approach should I choose? I would welcome any suggestions and input!


  • Sybase stored procedure - how do I create an index on a #table?

    - by DVK
    I have a stored procedure which creates and works with a temporary #table. Some of the queries would be tremendously optimized if that temporary #table had an index created on it. However, creating an index within the stored procedure fails:

        create procedure test1 as
            SELECT f1, f2, f3 INTO #table1 FROM main_table WHERE 1 = 2
            -- insert rows into #table1
            create index my_idx on #table1 (f1)
            SELECT f1, f2, f3 FROM #table1 (index my_idx) WHERE f1 = 11   -- "QUERY X"

    When I call the above, the query plan for "QUERY X" shows a table scan. If I simply run the code above outside the stored procedure, the messages show the following warning:

        Index 'my_idx' specified as optimizer hint in the FROM clause of table '#table1' does not exist. Optimizer will choose another index instead.

    This can be resolved when running ad hoc (outside the stored procedure) by splitting the code above into two batches, adding "go" after the index creation:

        create index my_idx on #table1 (f1)
        go

    Now the "QUERY X" query plan shows the use of index "my_idx". QUESTION: How do I mimic running the "create index" in a separate batch when it's inside the stored procedure? I can't insert a "go" there like I do with the ad-hoc copy above. P.S. If it matters, this is on Sybase 12.
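
    One workaround that is often suggested (sketch only, names illustrative): since a statement's plan is built when its procedure is compiled, move "QUERY X" into a second procedure that is only compiled when it is first called - by which time the index already exists:

        create procedure test1_query as
            SELECT f1, f2, f3 FROM #table1 (index my_idx) WHERE f1 = 11   -- "QUERY X"
        go

        create procedure test1 as
            SELECT f1, f2, f3 INTO #table1 FROM main_table WHERE 1 = 2
            -- insert rows into #table1
            create index my_idx on #table1 (f1)
            exec test1_query   -- compiled on first call, after my_idx exists
        go

        -- If ASE complains at create time that #table1 does not exist, create a
        -- matching #table1 in the session first, create test1_query, then drop it.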


  • Scrum - Responding to traditional RFPs

    - by Todd Charron
    Hi all, I've seen many articles about how to put together agile RFPs and negotiate agile contracts, but what about when you're responding to a more traditional RFP? Any advice on how to meet the requirements of the RFP while still presenting an agile approach? A lot of these traditional RFPs request specific technical implementations, timelines, and costs, while also requesting exact details about milestones and how the technical solutions will be implemented. While I'm sure in traditional waterfall it's normal to pretend that these things are facts, it seems wrong to commit to something like this, if you're an agile organization, just to get through the initial screening process. What methods have you used to respond to more traditional RFPs? Here's a sample one grabbed from Google: http://www.investtoronto.ca/documents/rfp-web-development.pdf Particularly, "3. A detailed work plan outlining how they expect to achieve the four deliverables within the timeframe outlined. Plan for additional phases of development." and "8. The detailed cost structure, including per diem rates for team members, allocation of hours between team members, expenses and other out of pocket disbursements, and a total upset price."


  • PostgreSQL database ignoring created index?!

    - by drasto
    I have a PostgreSQL database and a table called my_table. There are 4 columns in that table (id, column1, column2, column3). The id column is the primary key; there are no other constraints or indexes on the columns. The table has about 200000 rows. I want to print out all rows whose column2 value is equal (case-insensitively) to 'value12'. I use this:

        SELECT * FROM my_table WHERE column2 = lower('value12')

    Here is the execution plan for this statement (the result of set enable_seqscan=on; EXPLAIN SELECT * FROM my_table WHERE column2 = lower('value12')):

        Seq Scan on my_table (cost=0.00..4676.00 rows=10000 width=55)
          Filter: ((column2)::text = 'value12'::text)

    I consider this to be too slow, so I create an index on column column2 for better search performance:

        CREATE INDEX my_index ON my_table (lower(column2))

    Now I run the same select:

        SELECT * FROM my_table WHERE column2 = lower('value12')

    and I expect it to be much faster because it can use the index. However, it is not faster; it is as slow as before. So I check the execution plan and it is the same as before (see above). It still uses a sequential scan and ignores the index! Where is the problem?
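
    A likely explanation worth checking: an expression index on lower(column2) is only considered when the query filters on that same expression, so the predicate has to be written against lower(column2) as well. A minimal sketch of the matching, case-insensitive form:

        -- The index is on the expression lower(column2), so the query must use it too:
        SELECT * FROM my_table WHERE lower(column2) = lower('value12');

        -- With "column2 = lower('value12')" the planner only sees a filter on the raw
        -- column, which the expression index does not cover, hence the seq scan.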


  • My first MVVM application architecture setup

    - by 1110
    OK, the time has come for my first WPF project :). I worked with Flex and PureMVC before, and I know how important project setup is in RIAs. I have decided to work with MVVM and with the PRISM framework. The application is something like an operating system: there will be a 'shell' (the parent for smaller applications), and the smaller applications I plan to build as modules. So I plan to design the project structure something like this:

        Module_A {view, viewModel, model, assets}   // for example calculator
        Module_B {view, viewModel, model, assets}   // notebook etc.

    I read the Prism docs and I see that the parent for all these modules should be the shell project, and this is my main question here:

        Parent_Project {App.xaml, Bootstrapper.cs, Shell.xaml}

    Because this shell will be fullscreen with background images (like an operating system) and a right-click menu with some features, is it OK to create a folder structure for Shell.xaml like the one in the modules? I want to start the project with a good structure, so any advice is welcome. Thanks


  • Emulating Test::More::done_testing - what is the most idiomatic way?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl 5.8 with $Test::More::VERSION being '0.80'), which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using 'no_plan' - it's generally a bad idea not to catch when your unit test dies prematurely. What is the most idiomatic way of running a configurable number of tests, assuming no 'no_plan' or done_testing() is used? Details: My unit tests usually take the form of:

        use Test::More;
        my @test_set = (
            [ "Test #1", $param1, $param2, ... ]
           ,[ "Test #1", $param1, $param2, ... ]
            # ,...
        );
        foreach my $test (@test_set) {
            run_test($test);
        }
        sub run_test {
            # $expected_tests += count_tests($test);
            ok(test1($test)) || diag("Test1 failed");
            # ...
        }

    The standard approach of use Test::More tests => 23; or BEGIN { plan tests => 23 } does not work, since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block as follows:

        use Test::More;
        BEGIN {
            our @test_set = ();   # Same set of tests as above
            my $expected_tests = 0;
            foreach my $test (@test_set) {
                $expected_tests += count_tests($test);
            }
            plan tests => $expected_tests;
        }
        our @test_set;   # Must do!!! Since the first "our" was in BEGIN's scope :(
        foreach my $test (@test_set) {   # Same
            run_test($test);
        }
        sub run_test {}   # Same

    I feel this can be done more idiomatically but am not certain how to improve it. Chief among the smells is the duplicate our @test_set declaration - in BEGIN {} and after it.
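
    One sketch of a simpler arrangement, relying on plan() being callable at run time as long as it happens before the first assertion, so no BEGIN block or duplicated 'our' declaration is needed (count_tests and run_test are the poster's own helpers, assumed to be defined later in the file):

        use strict;
        use warnings;
        use Test::More;    # no plan yet

        # Ordinary lexical, visible to everything below.
        my @test_set = (
            [ "Test #1", 'param1', 'param2' ],
            [ "Test #2", 'param3', 'param4' ],
        );

        # Work out the plan at run time, before any test has run.
        my $expected_tests = 0;
        $expected_tests += count_tests($_) for @test_set;
        plan tests => $expected_tests;

        run_test($_) for @test_set;

        # count_tests() and run_test() defined below, as in the original file.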


  • 3D and AI basics. The foundation before the coding.

    - by Allan
    Hi, everyone. (If you have the time and patience:) I've recently made the decision to study programming seriously and I'm about to order TAOCP and Concrete Mathematics to begin my studies (please don't get caught up on this). I'm very much interested in learning and understanding how 3D works but I'm aware that if I plan to do it right there's still a long walk before I get to actually play with 3D coding. Now to the question.. (tl;dr) Excluding programming itself, what disciplines do I have to be familiar with to code 3D? What kinds of mathematics? Physics? What else? What books do you recommend on such subjects? Now read it all again but replacing "3D" with "AI". Please don't recommend computer-specific books. The question is about the foundation to be learned before using the machine. Also, if possible, please keep the list brief; I plan to order one book on each subject but no more than that for now. Excuse me for any English mistakes, it's not my first language. Thank you.


  • Cost of logic in a query

    - by FrustratedWithFormsDesigner
    I have a query that looks something like this:

        select xmlelement("rootNode",
            (case when XH.ID is not null
                  then xmlelement("xhID", XH.ID)
                  else xmlelement("xhID", xmlattributes('true' AS "xsi:nil"), XH.ID) end),
            (case when XH.SER_NUM is not null
                  then xmlelement("serialNumber", XH.SER_NUM)
                  else xmlelement("serialNumber", xmlattributes('true' AS "xsi:nil"), XH.SER_NUM) end),
            /*repeat this pattern for many more columns from the same table...*/
        FROM XH
        WHERE XH.ID = 'SOMETHINGOROTHER'

    It's ugly and I don't like it, and it is also the slowest executing query (there are others of similar form, but much smaller, and they aren't causing any major problems - yet). Maintenance is relatively easy as this is mostly a generated query, but my concern now is for performance. I am wondering how much of an overhead there is for all of these case expressions. To see if there was any difference, I wrote another version of this query as:

        select xmlelement("rootNode",
            xmlforest(XH.ID, XH.SER_NUM,...

    (I know that this query does not produce exactly the same thing; my plan was to move the logic to PL/SQL or XSL.) I tried to get execution plans for both versions, but they are the same. I'm guessing that the logic does not get factored into the execution plan. My gut tells me the second version should execute faster, but I'd like some way to prove that (other than writing a PL/SQL test function with timing statements before and after the query and running that code over and over again to get a test sample). Is it possible to get a good idea of how much the case-when will cost? Also, I could write the case-when using the decode function instead. Would that perform better (than case statements)?


  • what's the performance difference between int and varchar for primary keys

    - by user568576
    I need to create a primary key scheme for a system that will need peer-to-peer replication, so I'm planning to combine a unique system ID and a sequential number in some way to come up with unique IDs. I want to make sure I'll never run out of IDs, so I'm thinking about using a varchar field, since I could always add another character if I start running out. But I've read that integers are better optimized for this. So I have some questions... 1) Are integers really better optimized? And if they are, how much of a performance difference is there between varchars and integers? I'm going to use Firebird for now, but I may switch later, or possibly support multiple DBs, so I'm looking for generalizations, if that's possible. 2) If integers are significantly better optimized, why is that? And is it likely that varchars will catch up in the future, so eventually it won't matter anyway? My varchar keys won't have any meaning, except for the unique system ID part, but I may want to obscure that somehow. Also, I plan to efficiently use all the bits of each character. I don't, for example, plan to code the integer 123 as the character string "123". So I don't think varchars will require more space than integers.


  • Help Needed Finding a Programmer

    - by ssean
    Good Morning, I am trying to find a programmer to code a piece of custom software for my business. I plan on using this software to manage my business, and possibly sell it to other companies (in the same industry) at a later date. I've never hired a programmer before, so I'm not sure what to expect or where to begin. I know exactly what features I need, and how I want it laid out, I just need someone who can take my ideas and make it happen. This software will be used to manage customer information, and keep track of orders. What I think I need: * SQL Server or similar database that will be located at our office. * Desktop Application, that connects via LAN to the database server (cannot be browser based) * Multiple User Support (Simultaneous users accesing the system) * Needs to be scalable (currently we have 5 employees, but who knows what the future will bring) * Multi-Platform Support (Windows, Linux) I posted a job offer through elance, which seems to raise more questions than answers. How do I decide what language(s) will work best for my situation? (I have received offers for C#, Eclipse, .NET, Powerbuilder, etc. - I want to make sure that I choose the best one now, so I don't run into problems later) Does the programmer hold any rights to the software? (I plan to offer the software for sale at a later date) Any help or insight would be appreciated, and I'd be happy to clarify anything if it helps. Thanks in advance!


  • Logging in to Wordpress through CodeIgniter DX Authentication

    - by whobutsb
    Hello All, I'm about to start a very large project of rebuilding my company's intranet. The plan is to have most of the intranet live in a CI application; I chose CI because I'm very familiar with all the CI methods. Some sections of the intranet are going to be WordPress blogs. For example, the Human Resources Dept. and the Marketing Dept. will have their own WordPress blogs. Ideally my plan is to log on to the intranet with a CI authentication library like DXAuth, by querying the company's Active Directory. When I return the AD information for the user, I will save their group memberships into a session. It would be fantastic if that session information could then be used by WordPress to log the user in as an editor if they are a member of the Marketing group, and to allow users who are not members of the group to comment on that blog without logging into WordPress. My question is whether there are any CI classes, WordPress plugins, or tutorials out there for this sort of integration between the two systems. Thank you for your help!


  • Oracle Query Optimization: Why is My Second Query Faster?

    - by Patrick Cuff
    I was having some performance issues with an Oracle query, so I downloaded a trial of the Quest SQL Optimizer for Oracle, which made some changes that dramatically improved the query's performance. I'm not exactly sure why the recommended query had such an improvement; can anyone provide an explanation? Before:

        SELECT t1.version_id, t1.id, t2.field1, t3.person_id, t2.id
        FROM table1 t1, table2 t2, table3 t3
        WHERE t1.id = t2.id
        AND t1.version_id = t2.version_id
        AND t2.id = 123
        AND t1.version_id = t3.version_id
        AND t1.VERSION_NAME <> 'AA'
        order by t1.id

    Plan Cost: 831; Elapsed Time: 00:00:21.40; Number of Records: 40,717. After:

        SELECT /*+ USE_NL_WITH_INDEX(t1) */ t1.version_id, t1.id, t2.field1, t3.person_id, t2.id
        FROM table2 t2, table3 t3, table1 t1
        WHERE t1.id = t2.id + 0
        AND t1.version_id = t2.version_id + 0
        AND t2.id = 123
        AND t1.version_id = t3.version_id + 0
        AND t1.VERSION_NAME || '' <> 'AA'
        AND t3.version_id = t2.version_id + 0
        order by t1.id

    Plan Cost: 686; Elapsed Time: 00:00:00.95; Number of Records: 40,717. Questions: Why does re-arranging the order of the tables in the FROM clause help? Why does adding + 0 to the WHERE clause comparisons help? Why does || '' <> 'AA' in the WHERE clause VERSION_NAME comparison help? Is this a more efficient way of handling possible nulls on this column?


  • PHP download script for "processes running limited" hosting (eg. hostgator)

    - by Joe
    I am currently with HostGator on a shared hosting plan. I have a new website I'm trying to set up with a download.php script. The issue I am having is that while someone is "downloading" a file through the download.php script, it counts as a "process", and my hosting plan limits the processes that can run at the same time to 25 at present. My question is, what options do I have? a) Move to new web hosting that doesn't limit the number of running processes. b) Change the way files are downloaded. I would like to choose option b); however, it occurs to me that I need to have the file accessed through PHP in order to restrict the number of downloads and track download statistics, as well as to protect against hotlinking. If there were a way to have the PHP script send the file so that the process doesn't need to be running the whole time the file is being downloaded, I would eliminate the problem, but to my knowledge that isn't possible. Should I make the move to a new hosting company? I really enjoy HostGator, as they have provided the best hosting experience for me thus far (except for this one issue, of course), so I don't want to go on the hunt for another decent shared hosting company that doesn't limit running processes, only to find out there is another restriction or "catch" to the shared hosting deal.


  • dm_exec_query_stats returning stale data?

    - by VoiceOfUnreason
    I've been testing my app on a SQL Server 2005 database, and am trying to establish a preliminary picture of the query performance using sys.dm_exec_query_stats. Problem: there's a particular query that I'm interested in, because total_elapsed_time and last_elapsed_time are both large numbers. When I tickle my app to invoke that query (this runs successfully), then refresh my view of the stats, I find that 1) execution_count has incremented (expected) 2) last_execution_time has updated to now (expected) 3) last_elapsed_time is still a large value (not expected - I anticipated a new value) 4) total_elapsed_time is unchanged (contradiction?) If last_elapsed_time refers to the execution that happened @ last_execution_time, then the total_elapsed_time should have increased? This documentation: http://msdn.microsoft.com/en-us/library/ms189741(SQL.90).aspx tells me that last_execution_time is the last time the plan was executed, and last_elapsed_time comes from the "most recently executed plan", but doesn't tell me why those might be different. The query itself is uncomplicated (SELECT/WHERE/ORDER BY - parameters appearing in the where clause, but no clever operations), the table has maybe 25 rows in it right now. Questions: 1) What's the real relationship between execution_count, last_execution_time, and last_elapsed_time? 2) Where is the documentation of this relationship (manual, third party book, blog, bug ticket, stone tablets...) ?


  • Find the difference between two very large lists

    - by user157195
    I have two large lists (could be a hundred million items each); the source of each list can be either a database table or a flat file. Both lists are of comparable size, and both are unsorted. I need to find the difference between them, so I have 3 scenarios: 1. List1 is a database table (assume each row simply has one item (a key) that is a string), List2 is a large file. 2. Both lists are from 2 db tables. 3. Both lists are from two files. In case 2, I plan to use:

        select a.item from MyTable a where a.item not in (select b.item from MyTable b)

    This is clearly inefficient; is there a better way? Another approach: I plan to sort each list, and then walk down both of them to find the diff. If a list comes from a file, I have to read it into a db table first, then use db sorting to output the list. Is the run time complexity still O(n log n) for db sorting? Either approach is a pain and seems like it would be very slow when the lists involved have hundreds of millions of items. Any suggestions?
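
    For the sort-then-walk approach, the walk itself is a single linear merge pass over the two sorted lists; a minimal sketch (assuming two already-sorted text files with one key per line; the file names are hypothetical):

        // Sketch only: prints every key present in sortedA.txt but not in sortedB.txt.
        // Assumes both files are already sorted with the same ordering, one key per line.
        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;

        public final class SortedDiff {
            public static void main(String[] args) throws IOException {
                BufferedReader a = new BufferedReader(new FileReader("sortedA.txt"));
                BufferedReader b = new BufferedReader(new FileReader("sortedB.txt"));
                String ka = a.readLine();
                String kb = b.readLine();
                while (ka != null) {
                    if (kb == null || ka.compareTo(kb) < 0) {
                        System.out.println(ka);   // present in A only
                        ka = a.readLine();
                    } else if (ka.compareTo(kb) > 0) {
                        kb = b.readLine();        // present in B only; not of interest here
                    } else {
                        ka = a.readLine();        // present in both; skip (handles duplicates in A)
                    }
                }
                a.close();
                b.close();
            }
        }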

