Search Results

Search found 628 results on 26 pages for 'doctor jones'.

Page 4/26 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • PHP Development Environment (Host: Windows 7, Guest: Ubuntu)

    - by Kristian Leiws Jones
    Editing files live on a remote server slows down development, so I use XAMPP on Windows to develop and then run the web apps on a Linux server. However, to avoid environment dependencies I'd like the development environment to mirror the live one. What I'm asking is: would running the development server on Ubuntu inside VirtualBox, while editing the source files via FTP/Dreamweaver, be a good idea? If so, and I wanted to view the local website on the host OS (Windows), how would I do this? Does the guest OS have its own LAN/local IP address? I notice in Windows that "ipconfig /all" lists some "tunneling" adapters, which I assume are for VirtualBox, so I guess the guest OS has the same LAN/local IP address? If so, how would I view the websites hosted on the guest OS from the host OS? I'd also need to host an FTP server on the guest OS. Note: I need Windows! I would love to use Linux all the way -.-
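
    One way to make the guest reachable from the host, sketched below with VBoxManage (run with the VM powered off; "Ubuntu Dev" and the adapter name are placeholders), is either bridged networking or NAT port forwarding:

        # Option 1: bridged networking - the guest gets its own IP on the LAN
        #           (run ifconfig inside Ubuntu to see it)
        VBoxManage modifyvm "Ubuntu Dev" --nic1 bridged --bridgeadapter1 "Local Area Connection"

        # Option 2: keep NAT and forward host ports to the guest's web and FTP servers
        VBoxManage modifyvm "Ubuntu Dev" --natpf1 "http,tcp,,8080,,80"
        VBoxManage modifyvm "Ubuntu Dev" --natpf1 "ftp,tcp,,2121,,21"

    With option 2 the guest's site is reachable from Windows at http://localhost:8080/; bridged networking is usually the easier route for FTP, since passive-mode FTP needs extra forwarded ports under NAT.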

    Read the article

  • Access denied error when adding new server to existing SharePoint 2007 Farm

    - by Kelly Jones
    I recently got an error when adding a new SharePoint 2007 (SP2) server to our existing MOSS farm.  I had run the installation fine, and was walking through the SharePoint Configuration Wizard, when I got an error on step 2: “Resource retrieved id PostSetupConfigurationFailedEventLog is Configuration of SharePoint Products and Technologies failed.” I searched the net, but didn’t really find anything. I then remembered that I had forgotten to run the latest SharePoint updates (for us, the latest applied was December 2009).  I installed the WSS update, and then the MOSS update and both worked fine.  I then ran the Configuration wizard again and it worked without any errors.

    Read the article

  • How’s your Momma an’ them?

    - by Bill Jones Jr.
    When a Southern “boy” like me sees somebody that used to be, or should be, a close friend or relative that they haven’t seen in a long time, that’s a typical greeting.  Come to think of it, we were often related to close friends. So “back in the day”, we not only knew people but everybody close to them.  When I started driving, my Dad told me to always drive carefully in Polk county.  He said if I ran into anybody there, it was likely they would be related or close family friends. Not so much any more… the cities have gotten bigger and more people come south and stay.  One of the curses of air conditioning I guess. Anyway, it’s been a while.  So “How’s your Momma and them”?  Have you been waiting for me to blog again?  Too bad, I’m back anyway <smile>. Here in Charlotte we just had another great code camp.  The Enterprise Developers Guild is going strong, thanks to the help of a lot of dedicated people.  Mark Wilson, Brian Gough, Syl Walker, Ghayth Hilal, Alberto Botero, Dan Thyer, Jean Doiron, Matt Duffield all come to mind.  Plus all the regulars who volunteer for every special event we have. Brian Gough put on a successful SharePoint Saturday.  Rafael Salas and our friends at the local Pass SQL group had a great SQL Saturday.  Brian Hitney and Glen Gordon keep on doing their usual great job for developers in the southeast as our local Microsoft reps. Since my last post, I have the honor of being designated the INetA Membership Mentor for Georgia in addition to mentoring the groups in the Carolinas for the past several years.  Georgia could be a really good thing since my wife likes shopping in Atlanta, not to mention how much we both like Georgia in general.  As I recall, my Momma had people in Georgia.  Wonder how their “Mommas an’ them” are doing?   Bill J

    Read the article

  • Corrupted Views when migrating document libraries from SharePoint 2003 to 2007

    - by Kelly Jones
    A coworker of mine ran into this error recently while migrating a document library from SharePoint 2003 to 2007: “A WebPartZone can only exist on a page which contains a SPWebPartManager. The SPWebPartManager must be placed before any WebPartZones on the page.” He saw this when he tried to open the All Documents view for the library. After looking into it, we figured out what had happened.  He was migrating documents using the Explorer View in SharePoint.  He had copied the contents of the library from one server (a remote server that we didn’t have administrative access to) to his desktop.  He then opened an Explorer View of the new library and copied the files to it.  Well, it turns out he had also copied the hidden “Forms” folder, which contained the files necessary to display the different views for the library. (He had set Explorer to show hidden files, which made them visible.) So he had copied the 2003 forms to the 2007 library, and the two are incompatible. We fixed it by simply deleting the new document library, recreating it, and then copying everything except that hidden Forms folder.  Another option might have been to create a new document library on 2007 and copy the Forms folder from it to the broken library; since we didn’t need to save anything in the broken library, though, deleting it was the simpler fix. BTW, I confirmed my suspicion with this blog post: http://palmettotq.com/blog/?p=54

    Read the article

  • SharePoint 2007 Parser Error after updating master page

    - by Kelly Jones
    A few weeks ago I was updating the master page for a SharePoint 2007 (WSS) site.  The client wanted the site updated to reflect the new look and feel that is being applied to another set of sites in the organization. I created a new theme and master page, which I already wrote about here and here.  It worked well, except for a few pages on a subsite.  On those pages, I got the following error:
        Server Error in '/' Application.
        Parser Error
        Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately.
        Parser Error Message: Code blocks are not allowed in this file.
    I decided to comb through my new master page and compare it to the existing master page that was already working.  After going through them line by line several times, I had no clue what was causing the error, because they were basically the same! It turns out it was a combination of two things.  First, on a few of the pages in the site, there was some include code (basically an <% EVAL() %> snippet).  This was the code that was triggering my error, “Code blocks are not allowed in this file.” However, this code was working fine with the previous master page. I then decided to try a full deployment of the site with the new master page, and it worked fine!  Apparently, if the master page is deployed using a Feature, it is granted permission to allow code blocks, but if you upload pages using the web UI or SharePoint Designer, those pages won’t be able to use code blocks. I haven’t been able to pin down the rules or official info about this, but I thought others might find it useful anyway.
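
    For anyone hunting for the official knob, SharePoint 2007 exposes a PageParserPaths setting in the web application's web.config that permits code blocks for specific virtual paths. A sketch is below; the virtual path is only an example, and widening it relaxes SharePoint's safe-mode protection, so scope it as narrowly as possible:

        <!-- Inside the web application's web.config, within <configuration><SharePoint><SafeMode> -->
        <PageParserPaths>
          <PageParserPath VirtualPath="/subsite/Pages/*"
                          CompilationMode="Always"
                          AllowServerSideScript="true"
                          IncludeSubFolders="true" />
        </PageParserPaths>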

    Read the article

  • Netretail's online retail operation benefits from personal contact

    - by christopher.jones
    Hot on oracle.com is a snapshot of Netretail Holding B.V. profiling their use of PHP and Oracle technology such as Oracle RAC cluster database to become a leading online retailer across Central and Eastern Europe. We've also just refreshed our key PHP Scalability and High Availability whitepaper which talks about connection pooling (DRCP) and Fast Application Notification (FAN). We brought it up to date for 11gR2 and PHP 5.3. It now includes the new 11gR2 V$CPOOL_CONN_INFO view, the new columns for DBA_CPOOL_INFO, information about LOGOFF triggers, and information about the support for Client Result Caching with DRCP. Back to Netretail. Two of their secrets to success are keeping technically up to date, and networking. That is, networking in the business sense. I had the pleasure of meeting Michal Táborský (@whizz), the Chief System Architect, when he was in California for a Velocity conference. Michal took time to visit Oracle HQ and talk with our developers about his then current architecture and future needs. I also met his manager at last year's Oracle OpenWorld conference. Having built up a relationship with us, Netretail now has access to Oracle Development staff. While this will never bypass Oracle Support (which have tools, systems etc that are needed and useful for resolving issues), it makes communication easier for some classes of questions. It helps discussions that will let us improve Oracle products, and make Netretail stronger. I like this. And there's no reason why you can't talk with us too. You know where to email me.
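
    Not from the snapshot itself, but as a quick sketch of what DRCP usage looks like on the PHP side (the credentials, host and connection class name below are placeholders), pooling is mostly a connect-string and php.ini matter:

        ; php.ini: group this application's pooled sessions together
        oci8.connection_class = MYAPP

        <?php
        // The :POOLED suffix on an Easy Connect string requests a DRCP pooled server;
        // oci_pconnect() lets the pooled session be reused across requests.
        $c = oci_pconnect('hr', 'hrpwd', 'dbhost.example.com/orcl:POOLED');
        ?>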

    Read the article

  • Wireless card disappears when restarting Ubuntu

    - by Kristian Jones
    I used 'Additional Drivers' to install the 'Broadcom STA wireless driver' and it returns an error. Within jockey.log it says the following numerous times:
        2011-02-14 21:24:06,945 DEBUG: BroadcomWLHandler enabled(): kmod disabled, bcm43xx: blacklisted, b43: blacklisted, b43legacy: blacklisted
    After it returns the error, the network card works temporarily until I restart the laptop. When I restart I have to go through the procedure of trying to activate the driver again; it returns an error, but the card works temporarily. The network card on a Dell Inspiron 1545 is: Broadcom Corporation BCM4312 802.11b/g LP-PHY [14e4:4315] Rev 01. I have been trying to solve this myself for many hours. Any help is appreciated.
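
    A few diagnostic commands along these lines (only the modprobe and reinstall actually change anything) can show whether the STA module (wl) survived the reboot:

        lsmod | grep "^wl"                                     # is the Broadcom STA (wl) module loaded?
        sudo modprobe wl                                       # try loading it by hand after a reboot
        dmesg | grep -i -e wl -e b43                           # kernel messages from the wireless drivers
        sudo apt-get install --reinstall bcmwl-kernel-source   # rebuild the wl module for the running kernel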

    Read the article

  • Can't complete dropbox installation from behind proxy

    - by Mark Jones
    Problem: My PC on campus sits behind a proxy (requiring authentication) and I can't set up Dropbox. I am convinced this is a proxy issue, as I can't set up Ubuntu One either (but I don't use Ubuntu One, so that is not a problem). I have looked at the Ubuntu One fix, but it seems to modify settings explicitly related to Ubuntu One. I can install the nautilus-dropbox package (compiled from source, from the .deb package on the website, and from the Software Centre), but once I click OK on the "Dropbox Installation" dialog box (prompting me to download the proprietary daemon) the installation just freezes with the OK button pressed. When I look at its process in System Monitor, its waiting channel is inet_wait_for_connect. I have set the following proxy directives so far (where ** is my password):
    - Added mj22:**@proxy.waikato.ac.nz:80 to the network proxy settings under Network in Settings.
    - Added http_host and http_port variables under gconf-editor-system-proxy.
    - Added 'host', 'authentication_password' and 'authentication_user', and ticked 'use_authentication' and 'use_http_proxy', under gconf-editor-system-http_proxy.
    - Added export http_proxy="http://mj22:**@proxy.waikato.ac.nz:80/" to /etc/bash.bashrc.
    - Added Acquire::http::proxy "http://mj22:**@proxy.waikato.ac.nz:80/"; to /etc/apt/apt.conf (which is what I imagine lets Software Centre retrieve packages).
    I have also added the equivalent ftp and https lines for the above entries. I get the internet fine and Software Centre can download packages, but that's it. Related issues: Software Centre can't fetch reviews (but can download packages), and when trying to add an online account in GNOME 3 a dialog appears with "Error getting a Request Token: Cannot connect to proxy (proxy.waikato.ac.nz)". Update: after some time (10 minutes or so) Dropbox shows an error dialog that reads: "Trouble connecting to Dropbox servers. Maybe your internet connection is down, or you need to set your http_proxy environment variable." Is there a way I can see what environment variables are currently set?
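
    To answer the last question, the current environment can be checked from a terminal, for example:

        env | grep -i proxy    # list any *_proxy variables visible to this shell
        echo $http_proxy       # show just the HTTP proxy setting

    Note that variables exported from /etc/bash.bashrc only reach programs started from an interactive shell; applications launched from the desktop may not see them, which could explain why Dropbox still can't find http_proxy.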

    Read the article

  • Design Pattern Advice for Bluetooth App for Android

    - by Aimee Jones
    I’m looking for some advice on which patterns would apply to some of my work. I’m planning on doing a project as part of my college work and I need a bit of help. My main project is to make a basic Android Bluetooth tracking system where the fixed locations of Bluetooth dongles are mapped onto a map of a building. My Android app will regularly scan for nearby dongles and triangulate its location based on signal strength. The dongles’ locations will be saved to a database along with their MAC addresses to differentiate between them. The Android phone’s location will then be sent to a server, and this information will be used to show the phone’s location on a map of the building, or a map of a route taken, on a website. My side project is to choose a suitable design pattern that could be implemented in this main project. I’m still a bit new to design patterns and am finding it hard to get my head around the ones that may be suitable. I’ve heard that some patterns aimed at web applications may be appropriate for the server side of things. My research so far is leading me to the following: Navigation Strategy Pattern, Observer Pattern, Command Pattern, News Design Pattern. Any advice would be a great help! Thanks

    Read the article

  • Integrating Windows Form Click Once Application into SharePoint 2007 – Part 2 of 4

    - by Kelly Jones
    In my last post, I explained why we decided to use a Click Once application to solve our business problem. To quickly review, we needed a way for our business users to upload documents to a SharePoint 2007 document library en masse, set the meta data, set the permissions per document, and to do so easily. Let’s look at the pieces that make up our solution.  First, we have the Windows Form application.  This app is deployed using Click Once and calls SharePoint web services in order to upload files, and then calls web services to set the meta data (SharePoint columns and permissions).  Second, we have a custom action.  The custom action is responsible for providing our users a link that will launch the Windows app, as well as passing values to it via the query string.  And lastly, we have the web services that the Windows Form application calls.  For our solution, we used both out of the box web services and a custom web service in order to set the column values in the document library as well as the permissions on the documents. Now, let’s look at the technical details of each of these pieces.  (All of the code is downloadable from here: )
    Windows Form application deployed via Click Once
    The Windows Form application, called “Custom Upload”, has just a few classes in it:
    - Custom Upload -- the form
    - FileList.xsd -- the dataset used to track the names of the files and their meta data values
    - SharePointUpload -- this class handles uploading the file
    SharePointUpload uses an HttpWebRequest to transfer the file to the web server. We had to change this code from a WebClient object to the HttpWebRequest object, because we needed to be able to set the time out value.

        public bool UploadDocument(string localFilename, string remoteFilename)
        {
            bool result = true;
            //Need to use an HttpWebRequest object instead of a WebClient object
            // so we can set the timeout (WebClient doesn't allow you to set the timeout!)
            HttpWebRequest req = (HttpWebRequest)WebRequest.Create(remoteFilename);
            try
            {
                req.Method = "PUT";
                req.Timeout = 60 * 1000; //convert seconds to milliseconds
                req.AllowWriteStreamBuffering = true;
                req.Credentials = System.Net.CredentialCache.DefaultCredentials;
                req.SendChunked = false;
                req.KeepAlive = true;
                Stream reqStream = req.GetRequestStream();
                FileStream rdr = new FileStream(localFilename, FileMode.Open, FileAccess.Read);
                byte[] inData = new byte[4096];
                int bytesRead = rdr.Read(inData, 0, inData.Length);
                while (bytesRead > 0)
                {
                    reqStream.Write(inData, 0, bytesRead);
                    bytesRead = rdr.Read(inData, 0, inData.Length);
                }
                reqStream.Close();
                rdr.Close();
                System.Net.HttpWebResponse response = (HttpWebResponse)req.GetResponse();
                if (response.StatusCode != HttpStatusCode.OK && response.StatusCode != HttpStatusCode.Created)
                {
                    String msg = String.Format("An error occurred while uploading this file: {0}\n\nError response code: {1}",
                        System.IO.Path.GetFileName(localFilename), response.StatusCode.ToString());
                    LogWarning(msg, "2ACFFCCA-59BA-40c8-A9AB-05FA3331D223");
                    result = false;
                }
            }
            catch (Exception ex)
            {
                LogException(ex, "{E9D62A93-D298-470d-A6BA-19AAB237978A}");
                result = false;
            }
            return result;
        }

    The class also contains the LogException() and LogWarning() methods. When the application is launched, it parses the query string for some initial values.
    The query string looks like this:

        string queryString = "Srv=clickonce&Sec=N&Doc=DMI&SiteName=&Speed=128000&Max=50";

    Srv is the path to the server (my virtual machine is named “clickonce”), Sec is short for security – meaning HTTPS or HTTP, Doc is the shortcut for which document library to use, and SiteName is the name of the SharePoint site.  Speed is used to calculate an estimated download time for each file.  We added this so our users uploading documents would realize how long it might take for clients in remote locations (using slow WAN connections) to download the documents. The last value, Max, is the maximum size that the SharePoint site will allow documents to be.  This allowed us to warn users that a file is too large before we even attempt to upload it. Another critical piece is the meta data collection.  We organized our site using SharePoint content types, so when the app loads, it gets a list of the document library’s content types.  The user then selects one of the content types from the drop-down list, and then we query SharePoint to get a list of the fields that make up that content type.  We used both an out of the box web service and one that we custom built in order to get these values. Once we have the content type fields, we then add controls to the form.  Which type of control we add depends on the data type of the field (DateTime pickers for date/time fields, etc.).  We didn’t write code to cover every data type, since we were working with a limited set of content types and field data types. Here’s a screen shot of the form, before and after someone has selected the content type and our code has added the custom controls. The other piece of meta data we collect is in the upper right corner of the app, “Users with access”.  This box lists the different SharePoint groups that we have set up, and by checking the boxes, the user can set the permissions on the uploaded documents. All of this meta data is collected and submitted to our custom web service, which then sets the values on the documents in the list.  We’ll look at these web services in a future post. In the next post, we’ll walk through the Custom Action we built.
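
    Returning to the query string for a moment: the post doesn't show the parsing code itself, so the following is only a sketch of one way a ClickOnce app could read those values (it assumes URL parameters are enabled for the deployment, and references to System.Deployment and System.Web):

        using System.Collections.Specialized;
        using System.Deployment.Application;   // reference: System.Deployment
        using System.Web;                      // reference: System.Web (for HttpUtility)

        // Hypothetical sketch -- not the original post's code.
        static class StartupArgs
        {
            public static NameValueCollection Read()
            {
                // Fallback used when running outside ClickOnce (e.g. from Visual Studio).
                string query = "Srv=clickonce&Sec=N&Doc=DMI&SiteName=&Speed=128000&Max=50";

                if (ApplicationDeployment.IsNetworkDeployed &&
                    ApplicationDeployment.CurrentDeployment.ActivationUri != null)
                {
                    // ActivationUri is only populated when the ClickOnce deployment is
                    // configured to allow URL parameters to be passed to the application.
                    query = ApplicationDeployment.CurrentDeployment.ActivationUri.Query;
                }

                return HttpUtility.ParseQueryString(query);
            }
        }

        // Usage:
        //   NameValueCollection args = StartupArgs.Read();
        //   bool useHttps = (args["Sec"] == "Y");
        //   string server = args["Srv"];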

    Read the article

  • Start Your Engines

    - by Richard Jones
    Just passing on the good news from the MIX keynote yesterday. The CTP Developers Kit for Windows Phone 7 Series is available here: http://www.microsoft.com/downloads/details.aspx?FamilyID=2338b5d1-79d8-46af-b828-380b0f854203&displaylang=en#filelist First impressions are great.  Hello World up and running in under 2 minutes. - Technorati Tags: Windows Phone 7 Series

    Read the article

  • Best way to block "comment spam" postings to web forms? [closed]

    - by David Jones
    Possible Duplicate: Make your site anti-bot? I have a custom web form on my PHP-based site. Recently it has been getting a regular stream of comment-spam postings from a few specific IP addresses. Question: what is a good way to block a small set of blacklisted IP addresses from accessing my site? I was thinking it should be possible to use .htaccess to respond with status code 403 (Forbidden) to all HTTP requests from the blacklisted IP addresses, but I am not sure exactly how to do that. If anyone knows the .htaccess syntax needed to accomplish this, please let me know. Thanks in advance.
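
    For what it's worth, a minimal .htaccess sketch along these lines blocks individual addresses (the IPs below are placeholders, and it assumes the server allows Limit overrides in .htaccess); Apache answers 403 for the denied addresses:

        # .htaccess (Apache 2.2 syntax)
        Order Allow,Deny
        Allow from all
        Deny from 203.0.113.17
        Deny from 198.51.100.0/24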

    Read the article

  • Why would someone want to take over control of my domain name?

    - by mike jones
    I was approached by a person wanting to help me set up a website. In order to do this he has requested that I allow him to transfer my domain name to his account, for easier management. I would retain the right of usage and he would pay the bill for maintaining the name. This sounds fishy, but I can't figure out what he hopes to gain if this is a scam. Is this a common practice among 'Administrative Contacts'?

    Read the article

  • Broadcom wireless card disappears on restart

    - by Kristian Jones
    I used 'Additional Drivers' to install the 'Broadcom STA wireless driver' and it returns an error. Within jockey.log it says the following numerous times:
        2011-02-14 21:24:06,945 DEBUG: BroadcomWLHandler enabled(): kmod disabled, bcm43xx: blacklisted, b43: blacklisted, b43legacy: blacklisted
    After it returns the error, the network card works temporarily until I restart the laptop. When I restart I have to go through the procedure of trying to activate the driver again; it returns an error, but the card works temporarily. The network card on a Dell Inspiron 1545 is: Broadcom Corporation BCM4312 802.11b/g LP-PHY [14e4:4315] Rev 01. I have been trying to solve this myself for many hours. Any help is appreciated.

    Read the article

  • How To Validate an Email Address

    - by Richard Jones
    I’m using my blog as a bookmark service today. I just found this article on how to really validate whether an email address exists. I’ll have a go at wrapping this into a WCF web service: http://www.webdigi.co.uk/blog/2009/how-to-check-if-an-email-address-exists-without-sending-an-email/
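
    The basic idea in that article is to talk SMTP to the domain's mail server and see whether it accepts RCPT TO for the address, without ever sending a message. A rough C# sketch of that conversation is below (a hypothetical helper, not the article's code; it assumes the MX host has already been looked up, since the base class library has no MX lookup, and many servers greylist or accept every recipient, so a 250 reply is only a hint):

        using System;
        using System.IO;
        using System.Net.Sockets;

        static class EmailProbe
        {
            // mxHost must already be the domain's MX host.
            public static bool SmtpAcceptsRecipient(string mxHost, string fromAddress, string toAddress)
            {
                using (var client = new TcpClient(mxHost, 25))
                using (var stream = client.GetStream())
                using (var reader = new StreamReader(stream))
                using (var writer = new StreamWriter(stream) { AutoFlush = true, NewLine = "\r\n" })
                {
                    Func<string> read = () => reader.ReadLine();        // one SMTP reply line
                    Action<string> send = cmd => writer.WriteLine(cmd); // one SMTP command

                    read();                                    // 220 greeting
                    send("HELO example.com"); read();          // 250
                    send("MAIL FROM:<" + fromAddress + ">"); read();
                    send("RCPT TO:<" + toAddress + ">");
                    string reply = read();                     // 250 = accepted, 550 = unknown user
                    send("QUIT");
                    return reply != null && reply.StartsWith("250");
                }
            }
        }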

    Read the article

  • Why does the location of my vehicle spawner change when I open a matinee?

    - by Gareth Jones
    I'm doing work with InterActors and vehicle spawners in Unreal Tournament 3's editor, and have it set up like so (the walkway it's on is the InterActor). However, if I go into Kismet and open the matinee that handles the InterActor, the vehicle appears to move. It does look to me like the editor is moving it out of the way so I can see the InterActor (which would be very clever), because only the image of the vehicle moves, not the gizmo, nor does the vehicle spawn in that location in game. Is this the case?

    Read the article

  • Exception Handling And Other Contentious Political Topics

    - by Justin Jones
    So about three years ago, around the time of my last blog post, I promised a friend I would write this post. Keeping promises is a good thing, and this is my first step towards easing back into regular blogging. I fully expect him to return from Pennsylvania to buy me a beer over this. However, it’s been an… ahem… eventful three years or so, and blogging, unfortunately, got pushed to the back burner on my priority list, along with a few other career minded activities. Now that the personal drama of the past three years is more or less resolved, it’s time to put a few things back on the front burner.
    What I consider to be proper exception handling practice is relatively well known these days. There are plenty of blog posts out there already which more or less echo my opinions on this topic. I’ll try to include a few links at the bottom of the post. Several years ago I had an argument with a co-worker who posited that exceptions should be caught at every level and logged. This might seem like sanity on the surface, but the resulting error log looked something like this:
        Error: System.SomeException Followed by small stack trace.
        Error: System.SomeException Followed by slightly bigger stack trace.
        Error: System.SomeException Followed by slightly bigger stack trace.
        Error: System.SomeException Followed by slightly bigger stack trace.
        Error: System.SomeException Followed by slightly bigger stack trace.
        Error: System.SomeException Followed by slightly bigger stack trace.
        Error: System.SomeException Followed by slightly bigger stack trace.
        Error: System.SomeException Followed by slightly bigger stack trace.
    These were all the same exception. The problem with this approach is that the error log, if you run any kind of analytics on it, becomes skewed depending on how far up the stack your exception was thrown. To mitigate this problem, we came up with the concept of the “PreLoggedException”. Basically, we would log the exception at the very top level and subsequently throw the exception back up the stack encapsulated in this pre-logged type, which our logging system knew to ignore. Now the error log looked like this:
        Error: System.SomeException Followed by small stack trace.
    Much cleaner, right? Well, there’s still a problem. When your exception happens in production and you go about trying to figure out what happened, you’ve lost more or less all context for where and how this exception was thrown, because all you really know is what method it was thrown in, but really nothing about who was calling the method or why. What gives you this clue is the entire stack trace, which we’re losing here. I believe that was further mitigated by having the logging system pull a system stack trace and add it to the log entry, but what you’re actually getting is the stack for how you got to the logging code. You’re still losing context about the actual error. Not to mention you’re executing a whole slew of catch blocks, which are sloooooooowwwww……… In other words, we started with a bad idea and kept band-aiding it until it didn’t suck quite so bad. When I argued for not catching exceptions at every level but rather catching them following a certain set of rules, my co-worker warned me “do yourself a favor, never express that view in any future interviews.” I suppose this is my ultimate dismissal of that advice, but I’m not too worried.
    My approach to exception handling follows three basic rules. Only catch an exception if:
    1. You can do something about it.
    2. You can add useful information to it.
    3. You’re at an application boundary.
    Here’s what that means:
    1. Only catch an exception if you can do something about it. We’ll start with a trivial example of a login system that uses a file. Please, never actually do this in production code; it’s just a concocted example. If our code goes to open a file and the file isn’t there, we get a FileNotFound exception. If the calling code doesn’t know what to do with this, it should bubble up. However, if we know how to create the file from scratch, we can create the file and continue on our merry way. When you run into situations like this, though, what should really run through your head is “How can I avoid handling an exception at all?” In this case, it’s a trivial matter to simply check for the existence of the file before trying to open it. If we detect that the file isn’t there, we can accomplish the same thing without having to handle it in a catch block.
    2. Only catch an exception if you can add useful information to it. Continuing with the poorly thought out file based login system we contrived in part 1, if the code calls a Login(…) method and the FileNotFound exception is thrown higher up the stack, the code that calls Login must account for a FileNotFound exception. This is kind of counterintuitive, because the calling code should not need to know the internals of the Login method, and the data file is an implementation detail. What makes more sense, assuming that we didn’t implement any of the good advice from step 1, is for Login to catch the FileNotFound exception and wrap it in a new exception. For argument’s sake we’ll say LoginSystemFailureException. (Sorry, couldn’t think of anything better at the moment.) This gives us two stack traces, preserving the original stack trace in the inner exception, and it is also much more informative to the calling code.
    3. Only catch an exception if you’re at an application boundary. At some point we have to catch all the exceptions, even the ones we don’t know what to do with. WinForms, ASP.NET, and most other UI technologies have some kind of built-in mechanism for catching unhandled exceptions without fatally terminating the application. It’s still a good idea to somehow gracefully exit the application in this case if possible, because you can no longer be sure what state your application is in, but nothing annoys a user more than an application just exploding. These unhandled exceptions need to be logged, and this is a good place to catch them. Ideally you never want this option to be exercised, but code as though it will be. When you log these exceptions, give them a “Fatal” status (e.g. Log4Net) and make sure these bugs get handled in your next release.
    That’s it in a nutshell. If you do it right, each exception will only get logged once and with the largest stack trace possible, which will make those 2am emergency severity 1 debugging sessions much shorter and less frustrating. Here are a few people who also have interesting things to say on this topic:
    http://blogs.msdn.com/b/ericlippert/archive/2008/09/10/vexing-exceptions.aspx
    http://www.codeproject.com/Articles/9538/Exception-Handling-Best-Practices-in-NET
    I know there’s more but I can’t find them at the moment.
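
    As a footnote to rule 2, here is a minimal sketch (my own illustration, not code from the post) of Login wrapping the FileNotFoundException in the LoginSystemFailureException mentioned above, so the original stack trace rides along as the inner exception:

        using System;
        using System.IO;

        // The wrapper exception: the FileNotFoundException travels as InnerException,
        // so both stack traces are preserved for the log at the application boundary.
        public class LoginSystemFailureException : Exception
        {
            public LoginSystemFailureException(string message, Exception inner)
                : base(message, inner) { }
        }

        public class LoginService
        {
            public bool Login(string user, string password)
            {
                try
                {
                    // The data file is an implementation detail the caller shouldn't know about.
                    string[] lines = File.ReadAllLines("users.dat");
                    return Array.Exists(lines, l => l == user + ":" + password);
                }
                catch (FileNotFoundException ex)
                {
                    // Rule 2: add context for the caller instead of leaking the file detail.
                    throw new LoginSystemFailureException("The login data store is unavailable.", ex);
                }
            }
        }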

    Read the article

  • Dell XPS 15 L502x and Ubuntu 11.04 - HDMI output

    - by Jones
    Recently I bought my dream notebook, a Dell XPS 15, but since then this dream has become a kind of endless nightmare. I'm almost going crazy trying to make my graphics card driver work properly, but it seems to be just impossible. Yes, it has a 2GB NVIDIA GeForce GT 540M (Optimus) in it! It simply doesn't work. Every time I generate the xorg.conf, Ubuntu hangs while starting up, which forces me to remove the file to be able to start the notebook with the standard graphics settings. Another problem is that the Dell XPS 15 does NOT have a VGA output, but HDMI. So, to be able to use a second monitor I have to configure it via the NVIDIA X Server Settings, which only works if the driver is properly initialized with the xorg.conf. I've also tried to make it work with Bumblebee, but unfortunately it didn't help me much with the HDMI output. Do you guys have any idea how to solve this deadlock? Is there any way for me to use my second monitor?

    Read the article

  • More on PHP and Oracle 11gR2 Improvements to Client Result Caching

    - by christopher.jones
    Oracle 11.2 brought several improvements to Client Result Caching. CRC is a way for the results of queries to be cached in the database client process for reuse.  In an Oracle OpenWorld presentation "Best Practices for Developing Performant Applications" my colleague Luxi Chidambaran had a (non-PHP generated) graph for the Niles benchmark that shows a DB CPU reduction up to 600% and response times up to 22% faster when using CRC. Sometimes CRC is called the "Consistent Client Cache" because Oracle automatically invalidates the cache if table data is changed.  This makes it easy to use without needing application logic rewrites. There are a few simple database settings to turn on and tune CRC, so management is also easy. PHP OCI8, as a "client" of the database, can use CRC.  The cache is per-process, so plan carefully before caching large data sets.  Tables that are candidates for caching are look-up tables where the network transfer cost dominates. CRC is really easy in 11.2 - I'll get to that in a moment.  It was also pretty easy in Oracle 11.1, but it needed some tiny application changes.  In PHP it was used like:

        $s = oci_parse($c, "select /*+ result_cache */ * from employees");
        oci_execute($s, OCI_NO_AUTO_COMMIT);  // Use OCI_DEFAULT in OCI8 <= 1.3
        oci_fetch_all($s, $res);

    I blogged about this in the past.  The query had to include a specific hint that you wanted the results cached, and you needed to turn off auto committing during execution, either with the OCI_DEFAULT flag or its new, better-named alias OCI_NO_AUTO_COMMIT.  The no-commit flag rule didn't seem reasonable to me because most people wouldn't be specific about the commit state for a query. In Oracle 11.2, DBAs can now nominate tables for caching, either with CREATE TABLE or ALTER TABLE.  That means you don't need the query hint anymore.  As well, the no-commit flag requirement has been lifted.  Your code can now look like:

        $s = oci_parse($c, "select * from employees");
        oci_execute($s);
        oci_fetch_all($s, $res);

    Since your code probably already looks like this, your DBA can find the top queries in the database and simply tune the system by turning on CRC in the database and issuing an ALTER TABLE statement for candidate tables.  Voila. Another CRC improvement in Oracle 11.2 is that it works with DRCP connection pooling. There is some fine print about what is and isn't cached; check the Oracle manuals for details.  If you're using 11.1 or non-DRCP "dedicated servers", then make sure you use oci_pconnect() persistent connections.  Also, in PHP don't bind strings in the query, although binding as SQLT_INT is OK.

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >