Search Results

Search found 37883 results on 1516 pages for 'sparse files'.


  • June 2013 Release of the Ajax Control Toolkit

    - by Stephen.Walther
    I’m happy to announce the June 2013 release of the Ajax Control Toolkit. For this release, we enhanced the AjaxFileUpload control to support uploading files directly to Windows Azure. We also improved the SlideShow control by adding support for CSS3 animations. You can get the latest release of the Ajax Control Toolkit by visiting the project page at CodePlex (http://AjaxControlToolkit.CodePlex.com). Alternatively, you can execute the following NuGet command from the Visual Studio Library Package Manager window: Uploading Files to Azure The AjaxFileUpload control enables you to efficiently upload large files and display progress while uploading. With this release, we’ve added support for uploading large files directly to Windows Azure Blob Storage (You can continue to upload to your server hard drive if you prefer). Imagine, for example, that you have created an Azure Blob Storage container named pictures. In that case, you can use the following AjaxFileUpload control to upload to the container: <toolkit:ToolkitScriptManager runat="server" /> <toolkit:AjaxFileUpload ID="AjaxFileUpload1" StoreToAzure="true" AzureContainerName="pictures" runat="server" /> Notice that the AjaxFileUpload control is declared with two properties related to Azure. The StoreToAzure property causes the AjaxFileUpload control to upload a file to Azure instead of the local computer. The AzureContainerName property points to the blob container where the file is uploaded. To use the AjaxFileUpload control, you need to modify your web.config file so it contains some additional settings. You need to configure the AjaxFileUpload handler and you need to point your Windows Azure connection string to your Blob Storage account. <configuration> <appSettings> <!--<add key="AjaxFileUploadAzureConnectionString" value="UseDevelopmentStorage=true"/>--> <add key="AjaxFileUploadAzureConnectionString" value="DefaultEndpointsProtocol=https;AccountName=testact;AccountKey=RvqL89Iw4npvPlAAtpOIPzrinHkhkb6rtRZmD0+ojZupUWuuAVJRyyF/LIVzzkoN38I4LSr8qvvl68sZtA152A=="/> </appSettings> <system.web> <compilation debug="true" targetFramework="4.5" /> <httpRuntime targetFramework="4.5" /> <httpHandlers> <add verb="*" path="AjaxFileUploadHandler.axd" type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit"/> </httpHandlers> </system.web> <system.webServer> <validation validateIntegratedModeConfiguration="false" /> <handlers> <add name="AjaxFileUploadHandler" verb="*" path="AjaxFileUploadHandler.axd" type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit"/> </handlers> <security> <requestFiltering> <requestLimits maxAllowedContentLength="4294967295"/> </requestFiltering> </security> </system.webServer> </configuration> You supply the connection string for your Azure Blob Storage account with the AjaxFileUploadAzureConnectionString property. If you set the value “UseDevelopmentStorage=true” then the AjaxFileUpload will upload to the simulated Blob Storage on your local machine. After you create the necessary configuration settings, you can use the AjaxFileUpload control to upload files directly to Azure (even very large files). Here’s a screen capture of how the AjaxFileUpload control appears in Google Chrome: After the files are uploaded, you can view the uploaded files in the Windows Azure Portal.
You can see that all 5 files were uploaded successfully: New AjaxFileUpload Events In response to user feedback, we added two new events to the AjaxFileUpload control (on both the server and the client): · UploadStart – Raised on the server before any files have been uploaded. · UploadCompleteAll – Raised on the server when all files have been uploaded. · OnClientUploadStart – The name of a function on the client which is called before any files have been uploaded. · OnClientUploadCompleteAll – The name of a function on the client which is called after all files have been uploaded. These new events are most useful when uploading multiple files at a time. The updated AjaxFileUpload sample page demonstrates how to use these events to show the total amount of time required to upload multiple files (see the AjaxFileUpload.aspx file in the Ajax Control Toolkit sample site). SlideShow Animated Slide Transitions With this release of the Ajax Control Toolkit, we also added support for CSS3 animations to the SlideShow control. The animation is used when transitioning from one slide to another. Here’s the complete list of animations: · FadeInFadeOut · ScaleX · ScaleY · ZoomInOut · Rotate · SlideLeft · SlideDown You specify the animation which you want to use by setting the SlideShowAnimationType property. For example, here is how you would use the Rotate animation when displaying a set of slides: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="ShowSlideShow.aspx.cs" Inherits="TestACTJune2013.ShowSlideShow" %> <%@ Register TagPrefix="toolkit" Namespace="AjaxControlToolkit" Assembly="AjaxControlToolkit" %> <script runat="Server" type="text/C#"> [System.Web.Services.WebMethod] [System.Web.Script.Services.ScriptMethod] public static AjaxControlToolkit.Slide[] GetSlides() { return new AjaxControlToolkit.Slide[] { new AjaxControlToolkit.Slide("slides/Blue hills.jpg", "Blue Hills", "Go Blue"), new AjaxControlToolkit.Slide("slides/Sunset.jpg", "Sunset", "Setting sun"), new AjaxControlToolkit.Slide("slides/Winter.jpg", "Winter", "Wintery..."), new AjaxControlToolkit.Slide("slides/Water lilies.jpg", "Water lillies", "Lillies in the water"), new AjaxControlToolkit.Slide("slides/VerticalPicture.jpg", "Sedona", "Portrait style picture") }; } </script> <!DOCTYPE html> <html > <head runat="server"> <title></title> </head> <body> <form id="form1" runat="server"> <div> <toolkit:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server" /> <asp:Image ID="Image1" Height="300" Runat="server" /> <toolkit:SlideShowExtender ID="SlideShowExtender1" TargetControlID="Image1" SlideShowServiceMethod="GetSlides" AutoPlay="true" Loop="true" SlideShowAnimationType="Rotate" runat="server" /> </div> </form> </body> </html> In the code above, the set of slides is exposed by a page method named GetSlides(). The SlideShowAnimationType property is set to the value Rotate. The following animated GIF gives you an idea of the resulting slideshow: If you want to use either the SlideDown or SlideRight animations, then you must supply both an explicit width and height for the Image control which is the target of the SlideShow extender. 
For example, here is how you would declare an Image and SlideShow control to use a SlideRight animation: <toolkit:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server" /> <asp:Image ID="Image1" Height="300" Width="300" Runat="server" /> <toolkit:SlideShowExtender ID="SlideShowExtender1" TargetControlID="Image1" SlideShowServiceMethod="GetSlides" AutoPlay="true" Loop="true" SlideShowAnimationType="SlideRight" runat="server" /> Notice that the Image control includes both a Height and Width property. Here’s an approximation of this animation using an animated GIF: Summary The Superexpert team worked hard on this release. We hope you like the new improvements to both the AjaxFileUpload and the SlideShow controls. We’d love to hear your feedback in the comments. On to the next sprint!
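
    The NuGet command referred to near the top of the post is not shown in this excerpt; assuming the toolkit's package ID on NuGet is AjaxControlToolkit (the ID the project uses for its releases), the command entered in the Library Package Manager console would look like this:

        PM> Install-Package AjaxControlToolkit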

    Read the article

  • How to access files on a drive from an older system, mounted in a new system?

    - by David Thomas
    I've recently built a new system, after a rather large physical injury was sustained by my previous system (a precarious balance, and gravity, were not a happy mix). Surprisingly, the /home drive of that system appears to have more-or-less survived the trauma. However... I decided to use a fresh drive for the / (and swap) partition(s), and another fresh drive for the new /home. Now that that's working, I decided to install the old /home drive (which I had assumed until now would be entirely dead and without capacity for use) into the new system to recover the files and data (so far as is possible). At this point I've run into a snag: I have no idea how to go about this (with Windows it was relatively easy: the new drive would get the next letter of the alphabet, and I'd go from there). With 'disk utility' (System - Administration - Disk Utility) I've worked out which drive it is (/dev/sda), but clicking on 'mount' produces an error: 1: helper failed with: mount: according to mtab, /dev/sdb1 is already mounted on / mount failed ...if it is mounted on / I can't see it. I'm also moderately confused by the disk (device /dev/sda) being referred to as /dev/sdb1. Any and all insights would be incredibly welcome (I've already voted for: Idea #9063: New internal hard drives default automount at Brainstorm). Edited in response to Roland's request for a screenshot of disk utility: Details (so far as I know them): the 40GB disk is / and swap, the 1.0 TB Samsung is /home, and the 1.0 TB Hitachi is from the old system (and was the old /home drive). Output from sudo fdisk -l pasted below: Disk /dev/sda: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000bef00 Device Boot Start End Blocks Id System /dev/sda1 1 121601 976760001 83 Linux Disk /dev/sdb: 40.0 GB, 40018599936 bytes 255 heads, 63 sectors/track, 4865 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00037652 Device Boot Start End Blocks Id System /dev/sdb1 * 1 4742 38084608 83 Linux /dev/sdb2 4742 4866 993281 5 Extended /dev/sdb5 4742 4866 993280 82 Linux swap / Solaris Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000e8d46 Device Boot Start End Blocks Id System /dev/sdc1 1 121602 976760832 83 Linux
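
    A minimal command-line sketch of recovering the data in this situation (assuming, as Disk Utility suggested, that the old drive is /dev/sda, so the old /home partition is /dev/sda1; /mnt/oldhome is just an arbitrary mount point):

        sudo mkdir -p /mnt/oldhome
        sudo mount /dev/sda1 /mnt/oldhome
        ls /mnt/oldhome        # the old home directories should show up here
        # ... copy whatever is needed, then:
        sudo umount /mnt/oldhome

    The quoted mtab error complains about /dev/sdb1 (the new root partition), which suggests the mount request was aimed at the wrong device; mounting the old partition explicitly by its device name sidesteps that.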

    Read the article

  • xutility file???

    - by user574290
    Hi all. I'm trying to use c code with opencv in face detection and counting, but I cannot build the source. I am trying to compile my project and I am having a lot of problems with a line in the xutility file. the error message show that it error with xutility file. Please help me, how to solve this problem? this is my code // Include header files #include "stdafx.h" #include "cv.h" #include "highgui.h" #include <stdio.h> #include <stdlib.h> #include <string.h> #include <assert.h> #include <math.h> #include <float.h> #include <limits.h> #include <time.h> #include <ctype.h> #include <iostream> #include <fstream> #include <vector> using namespace std; #ifdef _EiC #define WIN32 #endif int countfaces=0; int numFaces = 0; int k=0 ; int list=0; char filelist[512][512]; int timeCount = 0; static CvMemStorage* storage = 0; static CvHaarClassifierCascade* cascade = 0; void detect_and_draw( IplImage* image ); void WriteInDB(); int found_face(IplImage* img,CvPoint pt1,CvPoint pt2); int load_DB(char * filename); const char* cascade_name = "C:\\Program Files\\OpenCV\\OpenCV2.1\\data\\haarcascades\\haarcascade_frontalface_alt_tree.xml"; // BEGIN NEW CODE #define WRITEVIDEO char* outputVideo = "c:\\face_counting1_tracked.avi"; //int faceCount = 0; int posBuffer = 100; int persistDuration = 10; //faces can drop out for 10 frames int timestamp = 0; float sameFaceDistThreshold = 30; //pixel distance CvPoint facePositions[100]; int facePositionsTimestamp[100]; float distance( CvPoint a, CvPoint b ) { float dist = sqrt(float ( (a.x-b.x)*(a.x-b.x) + (a.y-b.y)*(a.y-b.y) ) ); return dist; } void expirePositions() { for (int i = 0; i < posBuffer; i++) { if (facePositionsTimestamp[i] <= (timestamp - persistDuration)) //if a tracked pos is older than three frames { facePositions[i] = cvPoint(999,999); } } } void updateCounter(CvPoint center) { bool newFace = true; for(int i = 0; i < posBuffer; i++) { if (distance(center, facePositions[i]) < sameFaceDistThreshold) { facePositions[i] = center; facePositionsTimestamp[i] = timestamp; newFace = false; break; } } if(newFace) { //push out oldest tracker for(int i = 1; i < posBuffer; i++) { facePositions[i] = facePositions[i - 1]; } //put new tracked position on top of stack facePositions[0] = center; facePositionsTimestamp[0] = timestamp; countfaces++; } } void drawCounter(IplImage* image) { // Create Font char buffer[5]; CvFont font; cvInitFont(&font, CV_FONT_HERSHEY_SIMPLEX, .5, .5, 0, 1); cvPutText(image, "Faces:", cvPoint(20, 20), &font, CV_RGB(0,255,0)); cvPutText(image, itoa(countfaces, buffer, 10), cvPoint(80, 20), &font, CV_RGB(0,255,0)); } #ifdef WRITEVIDEO CvVideoWriter* videoWriter = cvCreateVideoWriter(outputVideo, -1, 30, cvSize(240, 180)); #endif //END NEW CODE int main( int argc, char** argv ) { CvCapture* capture = 0; IplImage *frame, *frame_copy = 0; int optlen = strlen("--cascade="); const char* input_name; if( argc > 1 && strncmp( argv[1], "--cascade=", optlen ) == 0 ) { cascade_name = argv[1] + optlen; input_name = argc > 2 ? argv[2] : 0; } else { cascade_name = "C:\\Program Files\\OpenCV\\OpenCV2.1\\data\\haarcascades\\haarcascade_frontalface_alt_tree.xml"; input_name = argc > 1 ? 
argv[1] : 0; } cascade = (CvHaarClassifierCascade*)cvLoad( cascade_name, 0, 0, 0 ); if( !cascade ) { fprintf( stderr, "ERROR: Could not load classifier cascade\n" ); fprintf( stderr, "Usage: facedetect --cascade=\"<cascade_path>\" [filename|camera_index]\n" ); return -1; } storage = cvCreateMemStorage(0); //if( !input_name || (isdigit(input_name[0]) && input_name[1] == '\0') ) // capture = cvCaptureFromCAM( !input_name ? 0 : input_name[0] - '0' ); //else capture = cvCaptureFromAVI( "c:\\face_counting1.avi" ); cvNamedWindow( "result", 1 ); if( capture ) { for(;;) { if( !cvGrabFrame( capture )) break; frame = cvRetrieveFrame( capture ); if( !frame ) break; if( !frame_copy ) frame_copy = cvCreateImage( cvSize(frame->width,frame->height), IPL_DEPTH_8U, frame->nChannels ); if( frame->origin == IPL_ORIGIN_TL ) cvCopy( frame, frame_copy, 0 ); else cvFlip( frame, frame_copy, 0 ); detect_and_draw( frame_copy ); if( cvWaitKey( 30 ) >= 0 ) break; } cvReleaseImage( &frame_copy ); cvReleaseCapture( &capture ); } else { if( !input_name || (isdigit(input_name[0]) && input_name[1] == '\0')) cvNamedWindow( "result", 1 ); const char* filename = input_name ? input_name : (char*)"lena.jpg"; IplImage* image = cvLoadImage( filename, 1 ); if( image ) { detect_and_draw( image ); cvWaitKey(0); cvReleaseImage( &image ); } else { /* assume it is a text file containing the list of the image filenames to be processed - one per line */ FILE* f = fopen( filename, "rt" ); if( f ) { char buf[1000+1]; while( fgets( buf, 1000, f ) ) { int len = (int)strlen(buf); while( len > 0 && isspace(buf[len-1]) ) len--; buf[len] = '\0'; image = cvLoadImage( buf, 1 ); if( image ) { detect_and_draw( image ); cvWaitKey(0); cvReleaseImage( &image ); } } fclose(f); } } } cvDestroyWindow("result"); #ifdef WRITEVIDEO cvReleaseVideoWriter(&videoWriter); #endif return 0; } void detect_and_draw( IplImage* img ) { static CvScalar colors[] = { {{0,0,255}}, {{0,128,255}}, {{0,255,255}}, {{0,255,0}}, {{255,128,0}}, {{255,255,0}}, {{255,0,0}}, {{255,0,255}} }; double scale = 1.3; IplImage* gray = cvCreateImage( cvSize(img->width,img->height), 8, 1 ); IplImage* small_img = cvCreateImage( cvSize( cvRound (img->width/scale), cvRound (img->height/scale)), 8, 1 ); CvPoint pt1, pt2; int i; cvCvtColor( img, gray, CV_BGR2GRAY ); cvResize( gray, small_img, CV_INTER_LINEAR ); cvEqualizeHist( small_img, small_img ); cvClearMemStorage( storage ); if( cascade ) { double t = (double)cvGetTickCount(); CvSeq* faces = cvHaarDetectObjects( small_img, cascade, storage, 1.1, 2, 0/*CV_HAAR_DO_CANNY_PRUNING*/, cvSize(30, 30) ); t = (double)cvGetTickCount() - t; printf( "detection time = %gms\n", t/((double)cvGetTickFrequency()*1000.) ); if (faces) { //To save the detected faces into separate images, here's a quick and dirty code: char filename[6]; for( i = 0; i < (faces ? 
faces->total : 0); i++ ) { /* CvRect* r = (CvRect*)cvGetSeqElem( faces, i ); CvPoint center; int radius; center.x = cvRound((r->x + r->width*0.5)*scale); center.y = cvRound((r->y + r->height*0.5)*scale); radius = cvRound((r->width + r->height)*0.25*scale); cvCircle( img, center, radius, colors[i%8], 3, 8, 0 );*/ // Create a new rectangle for drawing the face CvRect* r = (CvRect*)cvGetSeqElem( faces, i ); // Find the dimensions of the face,and scale it if necessary pt1.x = r->x*scale; pt2.x = (r->x+r->width)*scale; pt1.y = r->y*scale; pt2.y = (r->y+r->height)*scale; // Draw the rectangle in the input image cvRectangle( img, pt1, pt2, CV_RGB(255,0,0), 3, 8, 0 ); CvPoint center; int radius; center.x = cvRound((r->x + r->width*0.5)*scale); center.y = cvRound((r->y + r->height*0.5)*scale); radius = cvRound((r->width + r->height)*0.25*scale); cvCircle( img, center, radius, CV_RGB(255,0,0), 3, 8, 0 ); //update counter updateCounter(center); int y=found_face(img,pt1,pt2); if(y==0) countfaces++; }//end for printf("Number of detected faces: %d\t",countfaces); }//end if //delete old track positions from facePositions array expirePositions(); timestamp++; //draw counter drawCounter(img); #ifdef WRITEVIDEO cvWriteFrame(videoWriter, img); #endif cvShowImage( "result", img ); cvDestroyWindow("Result"); cvReleaseImage( &gray ); cvReleaseImage( &small_img ); }//end if } //end void int found_face(IplImage* img,CvPoint pt1,CvPoint pt2) { /*if (faces) {*/ CvSeq* faces = cvHaarDetectObjects( img, cascade, storage, 1.1, 2, CV_HAAR_DO_CANNY_PRUNING, cvSize(40, 40) ); int i=0; char filename[512]; for( i = 0; i < (faces ? faces->total : 0); i++ ) {//int scale = 1, i=0; //i=iface; //char filename[512]; /* extract the rectanlges only */ // CvRect face_rect = *(CvRect*)cvGetSeqElem( faces, i); CvRect face_rect = *(CvRect*)cvGetSeqElem( faces, i); //IplImage* gray_img = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 1 ); IplImage* clone = cvCreateImage (cvSize(img->width, img->height), IPL_DEPTH_8U, img->nChannels ); IplImage* gray = cvCreateImage (cvSize(img->width, img->height), IPL_DEPTH_8U, 1 ); cvCopy (img, clone, 0); cvNamedWindow ("ROI", CV_WINDOW_AUTOSIZE); cvCvtColor( clone, gray, CV_RGB2GRAY ); face_rect.x = pt1.x; face_rect.y = pt1.y; face_rect.width = abs(pt1.x - pt2.x); face_rect.height = abs(pt1.y - pt2.y); cvSetImageROI ( gray, face_rect); //// * rectangle = cvGetImageROI ( clone ); face_rect = cvGetImageROI ( gray ); cvShowImage ("ROI", gray); k++; char *name=0; name=(char*) calloc(512, 1); sprintf(name, "Image%d.pgm", k); cvSaveImage(name, gray); //////////////// for(int j=0;j<512;j++) filelist[list][j]=name[j]; list++; WriteInDB(); //int found=SIFT("result.txt",name); cvResetImageROI( gray ); //return found; return 0; // }//end if }//end for }//end void void WriteInDB() { ofstream myfile; myfile.open ("result.txt"); for(int i=0;i<512;i++) { if(strcmp(filelist[i],"")!=0) myfile << filelist[i]<<"\n"; } myfile.close(); } Error 3 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int Error 8 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int Error 13 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\program files\microsoft visual studio 9.0\vc\include\xutility 766 Error 18 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\program files\microsoft visual studio 9.0\vc\include\xutility 768 Error 23 error C4430: missing type specifier - int assumed. 
Note: C++ does not support default-int c:\program files\microsoft visual studio 9.0\vc\include\xutility 769 Error 10 error C2868: 'std::iterator_traits<_Iter>::value_type' : illegal syntax for using-declaration; expected qualified-name c:\program files\microsoft visual studio 9.0\vc\include\xutility 765 Error 25 error C2868: 'std::iterator_traits<_Iter>::reference' : illegal syntax for using-declaration; expected qualified-name c:\program files\microsoft visual studio 9.0\vc\include\xutility 769 Error 20 error C2868: 'std::iterator_traits<_Iter>::pointer' : illegal syntax for using-declaration; expected qualified-name c:\program files\microsoft visual studio 9.0\vc\include\xutility 768 Error 5 error C2868: 'std::iterator_traits<_Iter>::iterator_category' : illegal syntax for using-declaration; expected qualified-name c:\program files\microsoft visual studio 9.0\vc\include\xutility 764 Error 15 error C2868: 'std::iterator_traits<_Iter>::difference_type' : illegal syntax for using-declaration; expected qualified-name c:\program files\microsoft visual studio 9.0\vc\include\xutility 766 Error 9 error C2602: 'std::iterator_traits<_Iter>::value_type' is not a member of a base class of 'std::iterator_traits<_Iter>' c:\program files\microsoft visual studio 9.0\vc\include\xutility 765 Error 24 error C2602: 'std::iterator_traits<_Iter>::reference' is not a member of a base class of 'std::iterator_traits<_Iter>' c:\program files\microsoft visual studio 9.0\vc\include\xutility 769 Error 19 error C2602: 'std::iterator_traits<_Iter>::pointer' is not a member of a base class of 'std::iterator_traits<_Iter>' c:\program files\microsoft visual studio 9.0\vc\include\xutility 768 Error 4 error C2602: 'std::iterator_traits<_Iter>::iterator_category' is not a member of a base class of 'std::iterator_traits<_Iter>' c:\program files\microsoft visual studio 9.0\vc\include\xutility 764 Error 14 error C2602: 'std::iterator_traits<_Iter>::difference_type' is not a member of a base class of 'std::iterator_traits<_Iter>' c:\program files\microsoft visual studio 9.0\vc\include\xutility 766 Error 7 error C2146: syntax error : missing ';' before identifier 'value_type' c:\program files\microsoft visual studio 9.0\vc\include\xutility 765 Error 22 error C2146: syntax error : missing ';' before identifier 'reference' c:\program files\microsoft visual studio 9.0\vc\include\xutility 769 Error 17 error C2146: syntax error : missing ';' before identifier 'pointer' c:\program files\microsoft visual studio 9.0\vc\include\xutility 768 Error 2 error C2146: syntax error : missing ';' before identifier 'iterator_category' c:\program files\microsoft visual studio 9.0\vc\include\xutility 764 Error 12 error C2146: syntax error : missing ';' before identifier 'difference_type' c:\program files\microsoft visual studio 9.0\vc\include\xutility 766 Error 6 error C2039: 'value_type' : is not a member of 'CvPoint' c:\program files\microsoft visual studio 9.0\vc\include\xutility 765 Error 21 error C2039: 'reference' : is not a member of 'CvPoint' c:\program files\microsoft visual studio 9.0\vc\include\xutility 769 Error 16 error C2039: 'pointer' : is not a member of 'CvPoint' c:\program files\microsoft visual studio 9.0\vc\include\xutility 768 Error 1 error C2039: 'iterator_category' : is not a member of 'CvPoint' c:\program files\microsoft visual studio 9.0\vc\include\xutility 764 Error 11 error C2039: 'difference_type' : is not a member of 'CvPoint' c:\program files\microsoft visual studio 9.0\vc\include\xutility 766
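
    All of the errors above involve std::iterator_traits<CvPoint> being instantiated inside the standard header xutility, which is the usual symptom of the global distance(CvPoint, CvPoint) helper colliding with std::distance once using namespace std; is in effect. A likely fix (a sketch under that assumption, not a verified build of this project) is to rename the helper so std::distance is never pulled in:

        // Renamed from distance() to avoid clashing with std::distance,
        // which the compiler otherwise tries to instantiate for CvPoint.
        static float pointDistance( CvPoint a, CvPoint b )
        {
            float dx = (float)(a.x - b.x);
            float dy = (float)(a.y - b.y);
            return sqrt( dx * dx + dy * dy );
        }

        // ...and in updateCounter():
        // if (pointDistance(center, facePositions[i]) < sameFaceDistThreshold) { ... }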

    Read the article

  • Paste multiple lines before a line in vim?

    - by Umar
    How do I copy multiple lines and paste them as a block before a line? As an example I have the following code and I want to copy and paste the three lines after the if statement to after the else statement but before the line below it. [row col] = find(H); if (nargin < 4) delqmn = sparse(row, col, 0, M, N); % diff of msgs from bits to checks delrmn = sparse(row, col, 0, M, N);% diff of msgs from checks to bits rmn0 = sparse(row, col, 0, M, N);% msgs from checks to bits (p=0) else // Insert 3 lines after if statement here qn0 = 1-r;% pseudoposterior probabilities qn1 = r;% pseudoposterior probabilities Thanks
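
    A plain yank-and-put sketch for this layout (assuming the cursor starts on the delqmn line, the first of the three lines to copy):

        3yy    " yank the three lines under the if
        /else  " move to the else line (any motion that gets you there works)
        p      " put the yanked lines below the current line

    Lowercase p puts the block below the cursor line; uppercase P would put it above instead, which is handy when the cursor is already on the line that should follow the pasted block.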

    Read the article

  • Ways to setup a ZFS pool on a device without possibility to create/manage partitions?

    - by Karl Richter
    I have a NAS where I don't have the ability to create and manage partitions (maybe I could with some hacks that I don't want to make). What ways exist to set up multiple ZFS pools with one partition each (for starters I just want to use deduplication)? The setup should work with the NAS, i.e. over the network (I'd mount the images via NFS or CIFS). My ideas and associated issues so far: sparse files mounted over a loop device (specifying a sparse file directly as a ZFS vdev doesn't work, see Can I choose a sparse file as vdev for a zfs pool?): the problem is that the name/number of the assigned loop device is anything but constant, and I'm not sure how increasing the number of loop devices via a kernel parameter affects performance (there has to be a reason the default limits it to 8, right?)
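
    A rough sketch of the loop-device idea (assumed paths: /mnt/nas is the NFS/CIFS mount of the NAS and pool0.img the sparse image), using losetup -f --show so the script captures whichever loop device happens to be free instead of hard-coding one:

        # create a 100 GB sparse image on the NAS share
        truncate -s 100G /mnt/nas/pool0.img
        # attach it to the first free loop device and record which one was used
        LOOPDEV=$(sudo losetup -f --show /mnt/nas/pool0.img)
        # build the pool on that device and enable deduplication
        sudo zpool create pool0 "$LOOPDEV"
        sudo zfs set dedup=on pool0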

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #038

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane. 2007 CASE Statement in ORDER BY Clause – ORDER BY using Variable This article is as per a request from the Application Development Team Leader of my company. His team encountered code where the application was preparing a string for the ORDER BY clause of the SELECT statement. The application was passing this string as a variable to a Stored Procedure (SP), and the SP was using EXEC to execute the SQL string. This is not good for performance, as the Stored Procedure has to recompile every time due to EXEC. sp_executesql can do the same task, but it is still not the best for performance. SSMS – View/Send Query Results to Text/Grid/Files Results to Text – CTRL + T Results to Grid – CTRL + D Results to File – CTRL + SHIFT + F 2008 Introduction to SPARSE Columns Part 2 I wrote about Introduction to SPARSE Columns Part 1. Let us understand the concept of the SPARSE column in more detail. I suggest you read the first part before continuing with this article. All SPARSE columns are stored as one XML column in the database. Let us see some of the advantages and disadvantages of SPARSE columns. Deferred Name Resolution How come an SP can be created successfully when the table name is incorrect, but cannot be created when an incorrect column is used? 2009 Backup Timeline and Understanding of Database Restore Process in Full Recovery Model In general, backups of a database in full recovery mode are taken as three different kinds of backup files: Full Database Backup, Differential Database Backup, Log Backup. Restore Sequence and Understanding NORECOVERY and RECOVERY While doing a RESTORE operation, if you are restoring database files, always use the NORECOVERY option, as that will keep the database in a state where more backup files can be restored. This will also keep the database offline to prevent any changes, which could create integrity issues. Once all backup files are restored, run the RESTORE command with the RECOVERY option to get the database online and operational. Four Different Ways to Find Recovery Model for Database Perhaps the best thing about the technical domain is that most things can be done in more than one way. It is always useful to know about the various methods of performing a single task. Two Methods to Retrieve List of Primary Keys and Foreign Keys of Database When Information Schema is used, we will not be able to discern between primary key and foreign key; we will have both the keys together. In the case of the sys schema, we can query the data in our preferred way and can join this table to another table, which can retrieve additional data from the same. Get Last Running Query Based on SPID SPID returns the session ID of the current user process. The acronym SPID comes from the name of its earlier version, Server Process ID. 2010 SELECT * FROM dual – Dual Equivalent Dual is a table that is created by Oracle together with the data dictionary. It consists of exactly one column named “dummy”, and one record. The value of that record is X. You can check the content of the DUAL table using the following syntax. SELECT * FROM dual Identifying Statistics Used by Query Someone asked this question in my training class on query optimization and performance tuning.
“Can I know which statistics were used by my query?” 2011 SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 14 of 31 What are the basic functions for master, msdb, model, tempdb and resource databases? What is the Maximum Number of Index per Table? Explain Few of the New Features of SQL Server 2008 Management Studio Explain IntelliSense for Query Editing Explain MultiServer Query Explain Query Editor Regions Explain Object Explorer Enhancements Explain Activity Monitors SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 15 of 31 What is Service Broker? Where are SQL server Usernames and Passwords Stored in the SQL server? What is Policy Management? What is Database Mirroring? What are Sparse Columns? What does TOP Operator Do? What is CTE? What is MERGE Statement? What is Filtered Index? Which are the New Data Types Introduced in SQL SERVER 2008? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 16 of 31 What are the Advantages of Using CTE? How can we Rewrite Sub-Queries into Simple Select Statements or with Joins? What is CLR? What are Synonyms? What is LINQ? What are Isolation Levels? What is Use of EXCEPT Clause? What is XPath? What is NOLOCK? What is the Difference between Update Lock and Exclusive Lock? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 17 of 31 How will you Handle Error in SQL SERVER 2008? What is RAISEERROR? What is RAISEERROR? How to Rebuild the Master Database? What is the XML Datatype? What is Data Compression? What is Use of DBCC Commands? How to Copy the Tables, Schema and Views from one SQL Server to Another? How to Find Tables without Indexes? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 18 of 31 How to Copy Data from One Table to Another Table? What is Catalog Views? What is PIVOT and UNPIVOT? What is a Filestream? What is SQLCMD? What do you mean by TABLESAMPLE? What is ROW_NUMBER()? What are Ranking Functions? What is Change Data Capture (CDC) in SQL Server 2008? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 19 of 31 How can I Track the Changes or Identify the Latest Insert-Update-Delete from a Table? What is the CPU Pressure? How can I Get Data from a Database on Another Server? What is the Bookmark Lookup and RID Lookup? What is Difference between ROLLBACK IMMEDIATE and WITH NO_WAIT during ALTER DATABASE? What is Difference between GETDATE and SYSDATETIME in SQL Server 2008? How can I Check that whether Automatic Statistic Update is Enabled or not? How to Find Index Size for Each Index on Table? What is the Difference between Seek Predicate and Predicate? What are Basics of Policy Management? What are the Advantages of Policy Management? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 20 of 31 What are Policy Management Terms? What is the ‘FILLFACTOR’? Where in MS SQL Server is ’100’ equal to ‘0’? What are Points to Remember while Using the FILLFACTOR Argument? What is a ROLLUP Clause? What are Various Limitations of the Views? What is a Covered index? When I Delete any Data from a Table, does the SQL Server reduce the size of that table? What are Wait Types? How to Stop Log File Growing too Big? If any Stored Procedure is Encrypted, then can we see its definition in Activity Monitor? 
    2012 Example of Width Sensitive and Width Insensitive Collation Width Sensitive Collation: when a single-byte (half-width) representation of a character and the double-byte (full-width) representation of the same character do not compare as equal, the collation is width sensitive. In this example we have one table with two columns. One column has a width sensitive collation and the second column has a width insensitive collation. Find Column Used in Stored Procedure – Search Stored Procedure for Column Name A very interesting conversation about how to find a column used in a stored procedure. There are two characters in the story, and both are discussing how to find a column in a stored procedure. Here is the two-part story: Part 1 | Part 2 SQL SERVER – 2012 Functions – FORMAT() and CONCAT() – An Interesting Usage Generate Script for Schema and Data – SQL in Sixty Seconds #021 – Video In simple words, in many cases a database moves from one place to another. It is not always possible to back up and restore databases. There are cases where only part of the database (with schema and data) has to be moved. In this video we learn that we can easily generate a script for the schema and the data and move it from one server to another. INFORMATION_SCHEMA.COLUMNS and Value Character Maximum Length -1 I often see the value -1 in the CHARACTER_MAXIMUM_LENGTH column of the INFORMATION_SCHEMA.COLUMNS table. I understand that the length of a column can be anywhere between 0 and a large number, but I do not get it when I see a negative value (i.e. -1). Any insight on this subject? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
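
    Since SPARSE columns come up more than once in this list (the 2008 article and the Day 15 interview question), here is a minimal T-SQL illustration with made-up table and column names: a sparse column stores NULLs for free but makes non-NULL values slightly more expensive, so it suits columns that are empty in the vast majority of rows.

        CREATE TABLE dbo.SparseDemo
        (
            ID INT PRIMARY KEY,
            -- SPARSE suits a column that will be NULL in most rows
            MiddleName NVARCHAR(50) SPARSE NULL
        );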

    Read the article

  • How do I properly add existing source code files to my Xcode project?

    - by BeachRunnerJoe
    I'm new to iPhone development and I'm still getting familiar with the Mac dev environment, including Xcode. I want to add some 3rd party code to my iPhone project, but when I add the "existing files" to my Xcode project, I'm presented with a dialog box that has far too many options that I don't understand and, as such, my project isn't working. When I #import headerfilename.h, I get a build error that reads headerfilename.h: No such file or directory. Can anyone explain to me what all these options mean or give me a link to some documentation that can? I'm having a hard time finding anything in Apple's docs. Which options do I want to choose to add existing source code files to my Xcode project? I should note that the source code files that I'm trying to add are located in my project/Classes/frameworkname/ directory. After they're added, do I need to reference this new code directory in my project settings anywhere (i.e. some kind of header file directory variable)? Thanks so much! Update: I found the following answers/responses on the apple dev forums that were very useful and helped me fix my issue... To make it simple : - if you do not check the copy option, the file stay where it is. - if you check it, it is copied in your project folders In the first case (what it seems you are doing) you need to tell the compiler that the header files are in another directory : - project info - build - search paths - User Header Search Path : add the directory from where you took the header file Hope this will help You have discovered the most confusing dialog box that ever came out of Cupertino. Six years of Xcode, and this thing still is partly a mystery to me. To even get that far, I had to make many test projects to try and reverse-engineer what this thing does. The "Copy" box means that it will copy the files as they are right now, into the project. If this box is not checked, then it just references those files during a build and copies them as they are at THAT time. For source code, you want the Copy box checked. The "relative to" is a total mystery to me and I can't help you with that. I usually leave it however it is already set. Does it mean relative to where they are on disk, or the arrangement in Xcode, or in the bundle? Who knows. The last 2 radio buttons SEEM to mean that it will either re-create the folder structure of the folder you are adding, or just put "fake" folders in Xcode that point to the real folders. This is probably your problem - you are adding source code that is not all at the top level, and when it goes to find it, it does not re-create the hierarchy. Others can supply a better way, hopefully, but what I would do is put all of the source in one folder and add that, using the Copy box. Then in Xcode you can make whatever bogus folders you want and put the source file names in those fake folders.

    Read the article

  • Reliable and fast way to convert a zillion ODT files in PDF?

    - by Marco Mariani
    I need to pre-produce a million or two PDF files from a simple template (a few pages and tables) with embedded fonts. Usually, I would stay low level in a case like this, and compose everything with a library like ReportLab, but I joined the project late. Currently, I have a template.odt and use markers in the content.xml files to fill in data from a DB. I can smoothly create the ODT files, and they always look right. For the ODT to PDF conversion, I'm using OpenOffice in server mode (and PyODConverter w/ named pipe), but it's not very reliable: in a batch of documents, there is eventually a point after which all the processed files are converted into garbage (wrong fonts and letters sprawled all over the page). The problem is not predictably reproducible (it does not depend on the data), and happens in OOo 2.3 and 3.2, on Ubuntu, XP, Server 2003 and Windows 7. My Heisenbug detector is ticking. I tried reducing the size of the batches and restarting OOo after each one; still, a small percentage of the documents are messed up. Of course I'll write about this on the OOo mailing lists, but in the meanwhile, I have a delivery and have lost too much time already. Where do I go? Completely avoid the ODT format and go for another template system. Suggestions? Anything that takes a few seconds to run is way too slow. OOo takes around a second, and it sums to 15 days of processing time. I had to write a program for clustering the jobs over several clients. Keep the format but go for another tool/program for the conversion. Which one? There are many apps in the shareware or commercial repositories for Windows, but trying each one is a daunting task. Some are too slow, some cannot be run in batch without buying them first, some cannot work from the command line, etc. Open source tools tend not to reinvent the wheel and often depend on OpenOffice. Converting to an intermediate .DOC format could help to avoid the OOo bug, but it would double the processing time and complicate a task that is already too hairy. Try to produce the PDFs twice and compare them, discarding the whole batch if there's something wrong. Although the documents look equal, I know of no way to compare the binary content. Restart OOo after processing each document: it would take a lot more time to produce them, and it would lower the percentage of wrong files but make it very hard to identify them. Go for ReportLab and recreate the pages programmatically (this is the approach I'm going to try in a few minutes, as sketched below). Learn to properly format bulleted lists. Thanks a lot.
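
    Since the last realistic option is to rebuild the pages with ReportLab, here is a minimal sketch of that route (file name, fields and layout are placeholders, not the real template):

        from reportlab.lib.pagesizes import A4
        from reportlab.pdfgen import canvas

        def render_pdf(path, fields):
            # one canvas per output document; real code would add tables and embedded fonts
            c = canvas.Canvas(path, pagesize=A4)
            c.setFont("Helvetica", 11)
            y = 800
            for label, value in fields.items():
                c.drawString(72, y, "%s: %s" % (label, value))
                y -= 14
            c.showPage()
            c.save()

        render_pdf("out.pdf", {"Name": "Example", "Total": "42"})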

    Read the article

  • Jmeter- HTTP Cache Manager, Unable to cache everything what it is being cached by Browser

    - by chinmay brahma
    I used the HTTP Cache Manager to cache the files that are cached by the browser. I am successful in doing it for some of the pages: the number of files cached in JMeter is equal to the number of files cached by the browser. But in some cases I found the number of files being cached is lower than the number of files cached by the browser. Using JMeter I found only 5 files being cached, but in a real browser 12 files get cached. Thanks in advance

    Read the article

  • Use ASP.NET 4 Browser Definitions with ASP.NET 3.5

    - by Stephen Walther
    We updated the browser definitions files included with ASP.NET 4 to include information on recent browsers and devices such as Google Chrome and the iPhone. You can use these browser definition files with earlier versions of ASP.NET such as ASP.NET 3.5. The updated browser definition files, and instructions for installing them, can be found here: http://aspnet.codeplex.com/releases/view/41420 The changes in the browser definition files can cause backwards compatibility issues when you upgrade an ASP.NET 3.5 web application to ASP.NET 4. If you encounter compatibility issues, you can install the old browser definition files in your ASP.NET 4 application. The old browser definition files are included in the download file referenced above. What’s New in the ASP.NET 4 Browser Definition Files The complete set of browsers supported by the new ASP.NET 4 browser definition files is represented by the following figure:     If you look carefully at the figure, you’ll notice that we added browser definitions for several types of recent browsers such as Internet Explorer 8, Firefox 3.5, Google Chrome, Opera 10, and Safari 4. Furthermore, notice that we now include browser definitions for several of the most popular mobile devices: BlackBerry, IPhone, IPod, and Windows Mobile (IEMobile). The mobile devices appear in the figure with a purple background color. To improve performance, we removed a whole lot of outdated browser definitions for old cell phones and mobile devices. We also cleaned up the information contained in the browser files. Here are some of the browser features that you can detect: Are you a mobile device? <%=Request.Browser.IsMobileDevice %> Are you an IPhone? <%=Request.Browser.MobileDeviceModel == "IPhone" %> What version of JavaScript do you support? <%=Request.Browser["javascriptversion"] %> What layout engine do you use? <%=Request.Browser["layoutEngine"] %>   Here’s what you would get if you displayed the value of these properties using Internet Explorer 8: Here’s what you get when you use Google Chrome: Testing Browser Settings When working with browser definition files, it is useful to have some way to test the capability information returned when you request a page with different browsers. You can use the following method to return the HttpBrowserCapabilities the corresponds to a particular user agent string and set of browser headers: public HttpBrowserCapabilities GetBrowserCapabilities(string userAgent, NameValueCollection headers) { HttpBrowserCapabilities browserCaps = new HttpBrowserCapabilities(); Hashtable hashtable = new Hashtable(180, StringComparer.OrdinalIgnoreCase); hashtable[string.Empty] = userAgent; // The actual method uses client target browserCaps.Capabilities = hashtable; var capsFactory = new System.Web.Configuration.BrowserCapabilitiesFactory(); capsFactory.ConfigureBrowserCapabilities(headers, browserCaps); capsFactory.ConfigureCustomCapabilities(headers, browserCaps); return browserCaps; } At the end of this blog entry, there is a link to download a simple Visual Studio 2008 project – named Browser Definition Test -- that uses this method to display capability information for arbitrary user agent strings. For example, if you enter the user agent string for an iPhone then you get the results in the following figure: The Browser Definition Test application enables you to submit a user-agent string and display a table of browser capabilities information. The browser definition files contain sample user-agent strings for each browser definition. 
I got the iPhone user-agent string from the comments in the iphone.browser file. Enumerating Browser Definitions Someone asked in the comments whether or not there is a way to enumerate all of the browser definitions. You can do this if you ware willing to use a little reflection and read a private property. The browser definition files in the config\browsers folder get parsed into a class named BrowserCapabilitesFactory. After you run the aspnet_regbrowsers tool, you can see the source for this class in the config\browser folder by opening a file named BrowserCapsFactory.cs. The BrowserCapabilitiesFactoryBase class has a protected property named BrowserElements that represents a Hashtable of all of the browser definitions. Here's how you can read this protected property and display the ID for all of the browser definitions: var propInfo = typeof(BrowserCapabilitiesFactory).GetProperty("BrowserElements", BindingFlags.NonPublic | BindingFlags.Instance); Hashtable browserDefinitions = (Hashtable)propInfo.GetValue(new BrowserCapabilitiesFactory(), null); foreach (var key in browserDefinitions.Keys) { Response.Write("" + key); } If you run this code using Visual Studio 2008 then you get the following results: You get a huge number of outdated browsers and devices. In all, 449 browser definitions are listed. If you run this code using Visual Studio 2010 then you get the following results: In the case of Visual Studio 2010, all the old browsers and devices have been removed and you get only 19 browser definitions. Conclusion The updated browser definition files included in ASP.NET 4 provide more accurate information for recent browsers and devices. If you would like to test the new browser definitions with different user-agent strings then I recommend that you download the Browser Definition Test project: Browser Definition Test Project

    Read the article

  • Trace File Source Adapter

    The Trace File Source adapter is a useful addition to your SSIS toolbox.  It allows you to read 2005 and 2008 profiler traces stored as .trc files and read them into the Data Flow.  From there you can perform filtering and analysis using the power of SSIS. There is no need for a SQL Server connection this just uses the trace file. Example Usages Cache warming for SQL Server Analysis Services Reading the flight recorder Find out the longest running queries on a server Analyze statements for CPU, memory by user or some other criteria you choose Properties The Trace File Source adapter has two properties, both of which combine to control the source trace file that is read at runtime. SQL Server 2005 and SQL Server 2008 trace files are supported for both the Database Engine (SQL Server) and Analysis Services. The properties are managed by the Editor form or can be set directly from the Properties Grid in Visual Studio. Property Type Description AccessMode Enumeration This property determines how the Filename property is interpreted. The values available are: DirectInput Variable Filename String This property holds the path for trace file to load (*.trc). The value is either a full path, or the name of a variable which contains the full path to the trace file, depending on the AccessMode property. Trace Column Definition Hopefully the majority of you can skip this section entirely, but if you encounter some problems processing a trace file this may explain it and allow you to fix the problem. The component is built upon the trace management API provided by Microsoft. Unfortunately API methods that expose the schema of a trace file have known issues and are unreliable, put simply the data often differs from what was specified. To overcome these limitations the component uses  some simple XML files. These files enable the trace column data types and sizing attributes to be overridden. For example SQL Server Profiler or TMO generated structures define EventClass as an integer, but the real value is a string. TraceDataColumnsSQL.xml  - SQL Server Database Engine Trace Columns TraceDataColumnsAS.xml    - SQL Server Analysis Services Trace Columns The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g. "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml" "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml" If at runtime the component encounters a type conversion or sizing error it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered. Whilst most common issues have already been fixed through these files we have implemented specific exception traps to direct you to the files to enable you to fix any further issues due to different usage or data scenarios that we have not tested. An example error that you can fix through these files is shown below. Buffer exception writing value to column 'Column Name'. The string value is 999 characters in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML files in "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml". Installation The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft’s recommendations. 
    You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service pack to your SQL Server servers and workstations. Please note that the Microsoft Trace classes used in the component are not supported on 64-bit platforms. To use the Trace File Source on a 64-bit host you need to ensure you have the 32-bit (x86) tools available, and that the way you execute your package is set up to use them; please see the help topic 64-bit Considerations for Integration Services for more details. Downloads Trace Sources for SQL Server 2005 -- Trace Sources for SQL Server 2008 Version History SQL Server 2008 Version 2.0.0.382 - SQL Server 2008 public release. (9 Apr 2009) SQL Server 2005 Version 1.0.0.321 - SQL Server 2005 public release. (18 Nov 2008) -- Screenshots

    Read the article

  • How to remove MySQL completely with config and library files?

    - by codeartist
    I tried everything till now: sudo apt-get remove mysql-server mysql-client mysql-common sudo apt-get purge mysql-server mysql-client mysql-common sudo apt-get autoremove and even more commands... But whenever I am trying to locate mysql. I get a no. of files related to mysql command: shell>> locate mysql Output: /etc/mysql /etc/apparmor.d/usr.sbin.mysqld /etc/apparmor.d/abstractions/mysql /etc/apparmor.d/cache/usr.sbin.mysqld /etc/apparmor.d/cache/usr.sbin.mysqld-akonadi /etc/apparmor.d/local/usr.sbin.mysqld /etc/bash_completion.d/mysqladmin /etc/init/mysql.conf /etc/logcheck/ignore.d.paranoid/mysql-server-5_5 /etc/logcheck/ignore.d.server/mysql-server-5_5 /etc/logcheck/ignore.d.workstation/mysql-server-5_5 /etc/logrotate.d/mysql-server /etc/mysql/conf.d /etc/mysql/debian-start /etc/mysql/debian.cnf /etc/mysql/conf.d/mysqld_safe_syslog.cnf /home/pkr/.mysql_history /home/pkr/.cache/software-center/piston-helper/rec.ubuntu.com,api,1.0,recommend_app,libqt4-sql-mysql,,349051c3a57da571aa832adb39177aff /home/pkr/.cache/software-center/piston-helper/rec.ubuntu.com,api,1.0,recommend_app,mysql-client,,cbf77a486cdc80547317981a33144427 /home/pkr/.cache/software-center/piston-helper/rec.ubuntu.com,api,1.0,recommend_app,mysql-client,,de8220dee4d957a9502caa79e8d2fdda /home/pkr/.cache/software-center/rnrclient/reviews.ubuntu.com,reviews,api,1.0,reviews,filter,en,any,any,any,libqt4-sql-mysql,page,1,helpful,,17fb2e657321dc51526ee8fe9928da30 /home/pkr/.cache/software-center/rnrclient/reviews.ubuntu.com,reviews,api,1.0,reviews,filter,en,any,any,any,mysql-client,page,1,helpful,,a4c1b6e8200f36ab5745c6f81f14da0a /home/pkr/.cache/software-center/rnrclient/reviews.ubuntu.com,reviews,api,1.0,reviews,filter,en,ubuntu,oneiric,any,libqt4-sql-mysql,page,1,helpful,,c54295fb82b8183350cd34f22c3547ef /home/pkr/.cache/software-center/rnrclient/reviews.ubuntu.com,reviews,api,1.0,reviews,filter,en,ubuntu,oneiric,any,mysql-client,page,1,helpful,,fcf201c1abff3f774af89173a84de2cc /home/pkr/.cache/software-center/rnrclient/reviews.ubuntu.com,reviews,api,1.0,reviews,filter,en,ubuntu,precise,any,libqt4-sql-mysql,page,1,helpful,,0cd86648584efeccfb16119012f89540 /home/pkr/.cache/software-center/rnrclient/reviews.ubuntu.com,reviews,api,1.0,reviews,filter,en,ubuntu,precise,any,mysql-client,page,1,helpful,,eb84724e9da7851ff8862a227d8bac59 /home/pkr/.local/share/akonadi/mysql.conf /home/pkr/.local/share/akonadi/db_data/mysql /home/pkr/.local/share/akonadi/db_data/mysql.err /home/pkr/.local/share/akonadi/db_data/mysql.err.old /home/pkr/.local/share/akonadi/db_data/mysql/columns_priv.MYD /home/pkr/.local/share/akonadi/db_data/mysql/columns_priv.MYI /home/pkr/.local/share/akonadi/db_data/mysql/columns_priv.frm /home/pkr/.local/share/akonadi/db_data/mysql/db.MYD /home/pkr/.local/share/akonadi/db_data/mysql/db.MYI /home/pkr/.local/share/akonadi/db_data/mysql/db.frm /home/pkr/.local/share/akonadi/db_data/mysql/event.MYD /home/pkr/.local/share/akonadi/db_data/mysql/event.MYI /home/pkr/.local/share/akonadi/db_data/mysql/event.frm /home/pkr/.local/share/akonadi/db_data/mysql/func.MYD /home/pkr/.local/share/akonadi/db_data/mysql/func.MYI /home/pkr/.local/share/akonadi/db_data/mysql/func.frm /home/pkr/.local/share/akonadi/db_data/mysql/general_log.CSM /home/pkr/.local/share/akonadi/db_data/mysql/general_log.CSV /home/pkr/.local/share/akonadi/db_data/mysql/general_log.frm /home/pkr/.local/share/akonadi/db_data/mysql/help_category.MYD /home/pkr/.local/share/akonadi/db_data/mysql/help_category.MYI 
/home/pkr/.local/share/akonadi/db_data/mysql/help_category.frm /home/pkr/.local/share/akonadi/db_data/mysql/help_keyword.MYD /home/pkr/.local/share/akonadi/db_data/mysql/help_keyword.MYI /home/pkr/.local/share/akonadi/db_data/mysql/help_keyword.frm /home/pkr/.local/share/akonadi/db_data/mysql/help_relation.MYD /home/pkr/.local/share/akonadi/db_data/mysql/help_relation.MYI /home/pkr/.local/share/akonadi/db_data/mysql/help_relation.frm /home/pkr/.local/share/akonadi/db_data/mysql/help_topic.MYD /home/pkr/.local/share/akonadi/db_data/mysql/help_topic.MYI /home/pkr/.local/share/akonadi/db_data/mysql/help_topic.frm /home/pkr/.local/share/akonadi/db_data/mysql/host.MYD /home/pkr/.local/share/akonadi/db_data/mysql/host.MYI /home/pkr/.local/share/akonadi/db_data/mysql/host.frm /home/pkr/.local/share/akonadi/db_data/mysql/ndb_binlog_index.MYD /home/pkr/.local/share/akonadi/db_data/mysql/ndb_binlog_index.MYI /home/pkr/.local/share/akonadi/db_data/mysql/ndb_binlog_index.frm /home/pkr/.local/share/akonadi/db_data/mysql/plugin.MYD /home/pkr/.local/share/akonadi/db_data/mysql/plugin.MYI /home/pkr/.local/share/akonadi/db_data/mysql/plugin.frm /home/pkr/.local/share/akonadi/db_data/mysql/proc.MYD /home/pkr/.local/share/akonadi/db_data/mysql/proc.MYI /home/pkr/.local/share/akonadi/db_data/mysql/proc.frm /home/pkr/.local/share/akonadi/db_data/mysql/procs_priv.MYD /home/pkr/.local/share/akonadi/db_data/mysql/procs_priv.MYI /home/pkr/.local/share/akonadi/db_data/mysql/procs_priv.frm /home/pkr/.local/share/akonadi/db_data/mysql/proxies_priv.MYD /home/pkr/.local/share/akonadi/db_data/mysql/proxies_priv.MYI /home/pkr/.local/share/akonadi/db_data/mysql/proxies_priv.frm /home/pkr/.local/share/akonadi/db_data/mysql/servers.MYD /home/pkr/.local/share/akonadi/db_data/mysql/servers.MYI /home/pkr/.local/share/akonadi/db_data/mysql/servers.frm /home/pkr/.local/share/akonadi/db_data/mysql/slow_log.CSM /home/pkr/.local/share/akonadi/db_data/mysql/slow_log.CSV /home/pkr/.local/share/akonadi/db_data/mysql/slow_log.frm /home/pkr/.local/share/akonadi/db_data/mysql/tables_priv.MYD /home/pkr/.local/share/akonadi/db_data/mysql/tables_priv.MYI /home/pkr/.local/share/akonadi/db_data/mysql/tables_priv.frm /home/pkr/.local/share/akonadi/db_data/mysql/time_zone.MYD /home/pkr/.local/share/akonadi/db_data/mysql/time_zone.MYI /home/pkr/.local/share/akonadi/db_data/mysql/time_zone.frm /home/pkr/.local/share/akonadi/db_data/mysql/time_zone_leap_second.MYD /home/pkr/.local/share/akonadi/db_data/mysql/time_zone_leap_second.MYI /home/pkr/.local/share/akonadi/db_data/mysql/time_zone_leap_second.frm /home/pkr/.local/share/akonadi/db_data/mysql/time_zone_name.MYD /home/pkr/.local/share/akonadi/db_data/mysql/time_zone_name.MYI /home/pkr/.local/share/akonadi/db_data/mysql/time_zone_name.frm /home/pkr/.local/share/akonadi/db_data/mysql/time_zone_transition.MYD /home/pkr/.local/share/akonadi/db_data/mysql/time_zone_transition.MYI /home/pkr/.local/share/akonadi/db_data/mysql/time_zone_transition.frm /home/pkr/.local/share/akonadi/db_data/mysql/time_zone_transition_type.MYD /home/pkr/.local/share/akonadi/db_data/mysql/time_zone_transition_type.MYI /home/pkr/.local/share/akonadi/db_data/mysql/time_zone_transition_type.frm /home/pkr/.local/share/akonadi/db_data/mysql/user.MYD /home/pkr/.local/share/akonadi/db_data/mysql/user.MYI /home/pkr/.local/share/akonadi/db_data/mysql/user.frm /usr/bin/mysql /usr/bin/mysql_install_db /usr/bin/mysql_upgrade /usr/bin/mysqlcheck /usr/sbin/mysqld /usr/share/mysql 
/usr/share/app-install/desktop/gmysqlcc:gmysqlcc.desktop /usr/share/app-install/desktop/mysql-client.desktop /usr/share/app-install/desktop/mysql-navigator:mysql-navigator.desktop /usr/share/app-install/desktop/mysql-server.desktop /usr/share/app-install/icons/gmysqlcc-32.png /usr/share/app-install/icons/mysql-navigator.png /usr/share/doc/mysql-client-core-5.5 /usr/share/doc/mysql-server-core-5.5 /usr/share/kde4/apps/katepart/syntax/sql-mysql.xml /usr/share/man/man1/mysql.1.gz /usr/share/man/man1/mysql_install_db.1.gz /usr/share/man/man1/mysql_upgrade.1.gz /usr/share/man/man1/mysqlcheck.1.gz /usr/share/man/man8/mysqld.8.gz /var/cache/apt/archives/akonadi-backend-mysql_1.7.2-0ubuntu1_all.deb /var/cache/apt/archives/libmysqlclient-dev_5.5.22-0ubuntu1_i386.deb /var/cache/apt/archives/libmysqlclient18_5.5.22-0ubuntu1_i386.deb /var/cache/apt/archives/libqt4-sql-mysql_4%3a4.8.1-0ubuntu4.1_i386.deb /var/cache/apt/archives/mysql-client-5.5_5.5.22-0ubuntu1_i386.deb /var/cache/apt/archives/mysql-client-core-5.5_5.5.22-0ubuntu1_i386.deb /var/cache/apt/archives/mysql-client_5.5.22-0ubuntu1_all.deb /var/cache/apt/archives/mysql-common_5.5.22-0ubuntu1_all.deb /var/cache/apt/archives/mysql-server-5.5_5.5.22-0ubuntu1_i386.deb /var/cache/apt/archives/mysql-server-core-5.5_5.5.22-0ubuntu1_i386.deb /var/cache/apt/archives/mysql-server_5.5.22-0ubuntu1_all.deb /var/lib/dpkg/info/mysql-client-core-5.5.list /var/lib/dpkg/info/mysql-client-core-5.5.md5sums /var/lib/dpkg/info/mysql-server-5.5.list /var/lib/dpkg/info/mysql-server-5.5.postrm /var/lib/dpkg/info/mysql-server-core-5.5.list /var/lib/dpkg/info/mysql-server-core-5.5.md5sums /var/log/mysql /var/log/mysql.err /var/log/mysql.log /var/log/mysql.log.1.gz /var/log/mysql.log.2.gz /var/log/mysql.log.3.gz /var/log/mysql.log.4.gz /var/log/mysql.log.5.gz /var/log/mysql.log.6.gz /var/log/mysql.log.7.gz /var/log/upstart/mysql.log.1.gz /var/log/upstart/mysql.log.2.gz /var/log/upstart/mysql.log.3.gz /var/log/upstart/mysql.log.4.gz /var/log/upstart/mysql.log.5.gz /var/log/upstart/mysql.log.6.gz /var/log/upstart/mysql.log.7.gz What should I do now? Please help me out in this :( I was trying to find out if there is any way I can remove mysql related every file and then reinstall mysql. I need it for Qt connectivity. I don't understand what to do! Please help :(
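    For what it's worth, a minimal sketch of the usual cleanup sequence, written as a Python script purely for illustration (the same commands can be run directly in a shell). It assumes an apt-based Ubuntu system and the 5.5 package names visible in the listing above; the leftover paths are examples rather than an exhaustive list. Note also that locate reads a cached database, so it keeps reporting deleted files until updatedb is rerun.

        #!/usr/bin/env python
        # Sketch: purge the MySQL packages, remove leftover data/config, then
        # refresh the locate database so stale entries disappear. Run as root.
        # Destructive: it deletes MySQL data directories, so back up first.
        import os
        import shutil
        import subprocess

        PACKAGES = [
            "mysql-server", "mysql-server-5.5", "mysql-server-core-5.5",
            "mysql-client", "mysql-client-5.5", "mysql-client-core-5.5",
            "mysql-common",
        ]

        # Leftovers that a purge does not always remove (example paths taken
        # from the locate output above; adjust to what is actually present).
        LEFTOVERS = ["/etc/mysql", "/var/lib/mysql", "/var/log/mysql"]

        def run(cmd):
            print("+", " ".join(cmd))
            subprocess.check_call(cmd)

        run(["apt-get", "-y", "purge"] + PACKAGES)   # purge also drops config
        run(["apt-get", "-y", "autoremove"])
        run(["apt-get", "-y", "autoclean"])

        for path in LEFTOVERS:
            if os.path.isdir(path):
                shutil.rmtree(path)
            elif os.path.exists(path):
                os.remove(path)

        run(["updatedb"])  # otherwise `locate mysql` keeps showing dead paths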

    Read the article

  • How to edit files directly on WebDAV in Windows.

    - by phazei
    I have a WebDAV setup with the cPanel Web Disk. I can connect to it through NetHood and drag and drop files to/from there. What I can't do is edit any of the files directly: I have to copy a file somewhere else, edit it, then copy it back. That's essentially the same workflow FTP requires, though smart FTP clients will monitor the file, which makes them easier to use than WebDAV in its current state. I was under the impression that WebDAV was supposed to let me work on the files as if they were on a local drive, but nothing can actually open the files in place. How can I get more functionality out of it, or is this as good as it gets? I have tried 'net use q:\ https://myserver.com:2083' and 'net use q:\ '\myserver.com@SSL:2083\', but neither works; both throw: "System error 67 has occurred. The network name cannot be found." I ultimately want to use TortoiseSVN with the WebDAV share so I can have my working copy running on the server.

    Read the article

  • Is there any way to access files in your source tree in Android?

    - by Chris Thompson
    Hi all, this is a bit unorthodox, but I'm trying to figure out if there's a way to access files stored in the src tree of my application's apk in Android. I'm trying to use i-Jetty (a Jetty implementation for Android) and, rather than use it as a separate application and manually download my war file, I'd rather just bake i-Jetty in. However, in order to use standard HTML/JSP easily, I need to be able to give it a document root, preferably within my application's apk file. I know Android specifically works to prevent you from freely accessing things on the actual file system, so this may not be possible, but I'm thinking it might be possible to access something within the apk. One option to work around this would be to store all of the files in the res directory and then copy them to the sdcard on startup, but this wouldn't allow me to automatically remove the files on uninstall. To give you an idea of what I've tried: currently the HTML files are stored in org.webtext.android. Context rootContext = new Context(server_, "/", Context.SESSIONS); rootContext.setResourceBase("org/webtext/webapp"); returns a 404 error. final URL url = this.getClassLoader().getResource("org/webtext/webapp"); Context html = new WebAppContext(url.toExternalForm(), "/"); blows up with a NullPointerException because no URL is returned from the getResource call. Any thoughts would be greatly appreciated! Thanks, Chris

    Read the article

  • Making ANTLR generated class files into one jar file.

    - by prosseek
    With ANTLR, I get some Java class files after compilation, and I need to package all the class files into one jar file. I create a manifest.mf file that has one line, "Main-class: Test", to indicate the main class. I run 'jar cmf manifest.mf hello.jar *.class' to get hello.jar. But when I try to run 'java -jar hello.jar', I get the following error messages: $ java -jar hello.jar Exception in thread "main" java.lang.NoClassDefFoundError: Test Caused by: java.lang.ClassNotFoundException: Test at java.net.URLClassLoader$1.run(URLClassLoader.java:200) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:188) at java.lang.ClassLoader.loadClass(ClassLoader.java:315) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:330) at java.lang.ClassLoader.loadClass(ClassLoader.java:250) at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:398) What's wrong? I get the correct result when I run 'java Test'. The example I used is the source code from the book 'The Definitive ANTLR Reference', which you can download from http://www.pragprog.com/titles/tpantlr/source_code The example is in /tour/trees/. I get a bunch of class files after compiling the .g and .java files.

    Read the article

  • Is there an XML XQuery interface to existing XML files?

    - by xsaero00
    My company is in the education industry and we use XML to store course content. We also store some course-related information (mostly metainfo) in a relational database. Right now we are in the process of switching from our proprietary XML Schema to DocBook 5. Along with the switch we want to move the course-related information from the database to XML files. The reason for this is to have all course data in one place and to put it under Subversion. However, we would like to keep the flexibility of the relational database and be able to easily extract specific information about a course from an XML document. XQuery seems to be up to the task, so I was researching databases that support it, but so far I could not find what I needed. What I basically want is to have my XML files in a certain directory structure, and on top of this I would like a system that indexes my files and lets me select anything out of any file using XQuery. This way I can have my cake and eat it too: I will have an XQuery interface and still keep my files in plain text and versioned. Is there anything out there at least remotely resembling what I want? If you think that what I am asking for is nonsense, please make an alternative suggestion. On a related note: what XML databases (preferably native and open source) do you have experience with, and what would you recommend?
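    For the simplest selections, something much lighter than a full XQuery engine may already be enough. Below is a rough Python sketch (an illustration added here, not part of the question) that walks a course directory and runs XPath, not XQuery, over each file with lxml; the directory name and element paths are assumptions. Native open-source XML databases such as BaseX or eXist-db are the usual choice when real XQuery, indexing, and collections are required over plain files kept under version control.

        # Rough sketch: treat a directory tree of plain-text XML course files
        # as a queryable collection using XPath (not full XQuery). The file
        # layout and element names below are made up for illustration.
        import glob
        from lxml import etree

        DOCBOOK_NS = {"db": "http://docbook.org/ns/docbook"}

        def query_courses(root_dir, xpath_expr):
            """Yield (filename, match) pairs for every XML file under root_dir."""
            for path in glob.glob(root_dir + "/**/*.xml", recursive=True):
                tree = etree.parse(path)
                for node in tree.xpath(xpath_expr, namespaces=DOCBOOK_NS):
                    yield path, node

        # Example: pull every course title out of the collection.
        if __name__ == "__main__":
            for filename, title in query_courses("courses", "//db:info/db:title/text()"):
                print(filename, "->", title)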

    Read the article

  • Why can't I include these data files in a Python distribution using distutils?

    - by froadie
    I'm writing a setup.py file for a Python project so that I can distribute it. The aim is to eventually create a .egg file, but I'm trying to get it to work first with distutils and a regular .zip. This is an Eclipse PyDev project and my file structure is something like this: ProjectName src somePackage module1.py module2.py ... config propsFile1.ini propsFile2.ini propsFile3.ini setup.py Here's my setup.py code so far: from distutils.core import setup setup(name='ProjectName', version='1.0', packages=['somePackage'], data_files = [('config', ['..\config\propsFile1.ini', '..\config\propsFile2.ini', '..\config\propsFile3.ini'])] ) When I run this (with sdist as a command-line parameter), a .zip file gets generated with all the Python files, but the config files are not included. I thought that this code: data_files = [('config', ['..\config\propsFile1.ini', '..\config\propsFile2.ini', '..\config\propsFile3.ini'])] indicates that those 3 specified config files should be copied to a "config" directory in the zip distribution. Why is this code not accomplishing anything? What am I doing wrong? (I have also tried playing around with the paths of the config files, but nothing seems to help. Would Python throw an error or warning if the path was incorrect or the file was not found?)
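    A sketch of the kind of setup.py that usually works in this layout, assuming setup.py is placed in the project root next to the config directory, so the data_files paths are relative to setup.py and use forward slashes rather than '..\config\...'. The names simply mirror the question.

        # Sketch: setup.py in the project root, next to config/ and src/.
        from distutils.core import setup

        config_files = [
            "config/propsFile1.ini",
            "config/propsFile2.ini",
            "config/propsFile3.ini",
        ]

        setup(
            name="ProjectName",
            version="1.0",
            package_dir={"": "src"},      # packages live under src/
            packages=["somePackage"],
            # first element: target directory inside the distribution,
            # second element: files to copy there (paths relative to setup.py)
            data_files=[("config", config_files)],
        )

    If the .ini files still do not end up in the archive that sdist produces, adding a MANIFEST.in next to setup.py with a line such as include config/*.ini usually takes care of it.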

    Read the article

  • Is there something like a Filestorage class to store files in?

    - by nebukadnezzar
    Is there something like a class that can be used to store files and directories in, just the way Zip files can be used? Since I haven't found any "real" class to write Zip files (real class as in a proper class), it would be nice to be able to store files and directories in a container-like file. A perfect API would probably look like this: int main() { ContainerFile cntf("myContainer.cnt", ContainerFile::CREATE); cntf.addFile("data/some-interesting-stuff.txt"); cntf.addDirectory("data/foo/"); cntf.addDirectory("data/bar/", ContainerFile::RECURSIVE); cntf.close(); } ... I hope you get the idea. Important requirements are: the library must be cross-platform, and anything *GPL is not acceptable in this case (the MIT and BSD licenses are). I already toyed with the idea of creating an implementation based on SQLite (and its ability to store binary blobs). Unfortunately, it seems impossible to store directory structures in an SQLite database, which makes it pretty much useless in this case. Is it useless to hope for such a class library?
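    Not a C++ answer, but just to pin down the container semantics the pseudocode above describes, here is a rough sketch of the same API expressed over Python's standard zipfile module; the class and method names simply mirror the question. In C++ the same shape is normally layered over an existing zip library with a suitable license rather than over SQLite.

        # Sketch: a ContainerFile-like wrapper over a plain zip archive,
        # mirroring the addFile/addDirectory API from the question.
        import os
        import zipfile

        class ContainerFile:
            def __init__(self, path, mode="w"):
                self._zip = zipfile.ZipFile(path, mode)

            def add_file(self, path):
                self._zip.write(path)

            def add_directory(self, path, recursive=False):
                if recursive:
                    for dirpath, _dirnames, filenames in os.walk(path):
                        self._zip.write(dirpath)      # keep the dir entry itself
                        for name in filenames:
                            self._zip.write(os.path.join(dirpath, name))
                else:
                    self._zip.write(path)             # just this directory level
                    for entry in os.listdir(path):
                        full = os.path.join(path, entry)
                        if os.path.isfile(full):
                            self._zip.write(full)

            def close(self):
                self._zip.close()

        cntf = ContainerFile("myContainer.cnt")
        cntf.add_file("data/some-interesting-stuff.txt")
        cntf.add_directory("data/foo/")
        cntf.add_directory("data/bar/", recursive=True)
        cntf.close()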

    Read the article

  • How do I tell NAnt to only call csc when there are .cs files to compile?

    - by rob_g
    In my NAnt script I have a compile target that calls csc. Currently it fails because no inputs are specified: <target name="compile"> <csc target="library" output="${umbraco.bin.dir}\Mammoth.${project::get-name()}.dll"> <sources> <include name="project/*.cs" /> </sources> <references> </references> </csc> </target> How do I tell NAnt to not execute the csc task if there are no CS files? I read about the 'if' attribute but am unsure what expression to use with it, as ${file::exists('*.cs')} does not work. The build script is a template for Umbraco (a CMS) projects and may or may not ever have .cs source files in the project. Ideally I would like to not have developers need to remember to modify the NAnt script to include the compile task when .cs files are added to the project (or exclude it when all .cs files are removed).

    Read the article

  • VC++ 6 and MS Speech SDK 5.1 fatal error C1083: Cannot open source file: 'files\microsoft': No such

    - by eg123
    Trying to compile an application (flite synthesis SAPI) on VC++ 6. This requires Microsoft Speech SDK 5.1. I have included C:\Program Files\Microsoft Speech SDK 5.1\IDL and C:\Program Files\Microsoft Speech SDK 5.1\include, using Tools > Options > Directories, and on another attempt via Project > Settings. I repeatedly get this error: microsoft fatal error C1083: Cannot open source file: 'files\microsoft': No such file or directory speech fatal error C1083: Cannot open source file: 'speech': No such file or directory sdk fatal error C1083: Cannot open source file: 'sdk': No such file or directory idl fatal error C1083: Cannot open source file: '5.1\idl': No such file or directory FliteCMUKalDiphone.idl Thought it might be spaces-related, so I included the full path in quotes in the relevant .h files. No joy. Installed Microsoft Speech SDK 5.1 on another machine in the same folder as flite and renamed it to mssdk51 (so no spaces in the pathname), but the same error came up. Tried pasting in the contents of each .idl called in the file where the glitch seems to originate. Still the same message. I am new to C++ and programming in general. My only guess is that something in the Speech SDK is calling the .idl file and I can't find where from. Of course this is probably way wrong!

    Read the article

  • How to filter files from the root "classes" and "test-classes" folders in Eclipse?

    - by Kidburla
    I am using ClearCase in my application which generates a whole load of ".copyarea.db" files (one in every folder). These cause conflicts when publishing to Tomcat as Eclipse will bundle the "classes" and "test-classes" folders into one JAR (not sure why it does this - as there is no need to have test classes available on the application server). Any folders with the same names will have a separate .copyarea.db in the classes and test-classes branches. I managed to get around this problem in general by adding ".copyarea.db" to the Filtered resources on the Java->Compiler->Building->Output Folder preference page. This stops the file appearing in source output (package/class folders), the vast majority of cases. However there remains the problem of the root folder, i.e. "target/classes/.copyarea.db" and "target/test-classes/.copyarea.db". These files are not filtered as they are not part of the compile task. Just deleting the files manually doesn't help either, as Eclipse expects to find them and doesn't. How can I exclude these ".copyarea.db" files from the root "classes" and "test-classes" folders?

    Read the article

  • Is it possible to install into Program Files with limited privileges?

    - by Marek
    I have an application that will be deployed as MSI package (authored in WiX). I am deciding whether to specify elevated or limited privileges as required for the installer. The application does not include anything requiring elevated privileges besides the default install location, which is under Program Files. Now the problem: If I specify elevated privileges, then the user is prompted by UAC for administrator password during the installation. This is not required and prevents non-admin users from installing. If I specify limited privileges, then the user is presented with a dialog to select install location with Program Files being default. In case they do not change the install location (95 % of end users probably won't), then the installer will fail with a message that they should contact the Administrator or run the application as administrator. If they launch the installer as Administrator then they can install into Program Files without problem - but most of the users won't probably know how to launch an installer as administrator. I can potentially set the default install location to e.g. C:\Company name\Program\, but this seems nonstandard to me and majority of users will not probably like this (they are probably used to installing into Program Files). How do you solve this problem with installing applications under limited user accounts?

    Read the article

  • Multiple XML/XSLT files in PHP, transform one with XSLT and add others but process it first with PHP

    - by ipalaus
    I am transforming XML files with XSLT in PHP correctly. Currently I use this code: $xml = new DOMDocument; $xml->loadXML($xml_contents); $xsl = new DOMDocument; $xsl->load($xsl_file); $proc = new XSLTProcessor; $proc->importStyleSheet($xsl); echo $proc->transformToXml($xml); $xml_contents is the XML processed with PHP; this is done by including the XML file first and then assigning $xml_contents = ob_get_contents(); ob_end_clean();. This forces the PHP code in the XML to be processed, and it works perfectly. My problem is that I use more than one XML file, and these XML files have PHP code in them that needs to be processed AND have an XSLT file associated with them to process the data. Currently I'm including these files in XSLT with the following code: <!-- First I add the XML file --> <xsl:param name="menu" select="document('menu.xml')" /> <!-- Next I add the transformations for the menu.xml file --> <xsl:include href="menu.xsl" /> <!-- Finally, I process it in the actual ("parent") XML --> <xsl:apply-templates select="$menu/menu" /> My question is how I can handle this. I need to add multiple XML(+XSLT) files to my first XML file, which will contain PHP, so it needs to be processed. Thank you in advance!

    Read the article

  • How would I sort files into directories based on filenames?

    - by gnomed
    I have a huge number of files to sort, all named in some terrible convention. Here are some examples: (4)_mr__mcloughlin____.txt 12__sir_john_farr____.txt (b)mr__chope____.txt dame_elaine_kellett-bowman____.txt dr__blackburn______.txt Each of these names is supposed to be a different person (speaker). Someone in another IT department produced these from a ton of XML files using some script, but the naming is unfathomably stupid, as you can see. I need to sort literally tens of thousands of these files, with multiple text files per person, each with something making the filename different, be it more underscores or some random number. They need to be sorted by speaker. This would be easier with a script to do most of the work; then I could just go back and merge folders that should be under the same name. There are a couple of ways I was thinking about doing this: parse the names from each file and sort them into folders for each unique name; or get a list of all the unique names from the filenames, then look through this simplified list for similar ones, ask me whether they are the same, and once that is determined, sort them all accordingly. I plan on using Perl, but I can try a new language if it's worth it. I'm not sure how to go about reading each filename in a directory, one at a time, into a string for parsing into an actual name. I'm not completely sure how to parse with regexes in Perl either, but that is probably googleable. For the sorting, I was just going to use the shell command: `cp filename.txt /example/destination/filename.txt` because that's all I know, so it's easiest. I don't even have a pseudocode idea of what I'm going to do, so if someone knows the best sequence of actions, I'm all ears. I guess I am looking for a lot of help; I am open to any suggestions. Many, many thanks to anyone who can help. B.
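    Since the question is open to languages other than Perl, here is a rough Python sketch of the first approach: strip the noise from each filename, reduce it to a speaker key, and copy the file into a folder per speaker. The source and destination directory names are assumptions, and the cleanup regexes are guesses based on the five sample names above, so near-duplicate folders (for example with and without titles) would still need merging by hand.

        # Sketch: group the .txt files by a speaker key derived from the
        # filename. The normalisation rules are guesses from the samples above.
        import os
        import re
        import shutil

        SOURCE_DIR = "speeches"      # where the badly named files live
        DEST_DIR = "sorted"          # one sub-folder per speaker gets created

        def speaker_key(filename):
            name = os.path.splitext(filename)[0].lower()
            name = re.sub(r"^\(\w+\)|^\d+", "", name)   # drop leading (4), (b), 12...
            name = re.sub(r"_+", " ", name).strip()     # collapse runs of underscores
            return name or "unknown"

        for filename in os.listdir(SOURCE_DIR):
            if not filename.endswith(".txt"):
                continue
            folder = os.path.join(DEST_DIR, speaker_key(filename))
            os.makedirs(folder, exist_ok=True)
            shutil.copy(os.path.join(SOURCE_DIR, filename),
                        os.path.join(folder, filename))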

    Read the article

< Previous Page | 250 251 252 253 254 255 256 257 258 259 260 261  | Next Page >