Search Results

Search found 282 results on 12 pages for 'extraction'.

Page 9 of 12

  • How to extract tags from XML

    - by uku
     Hi, I have a simple XML extraction issue that should be solvable with straight PHP, without any extra libraries. All I need to do is extract the values of one tag. For example, given this string of XML:

        <ResultSet xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ....>
          <Result>Foo</Result>
          <Result>Bar</Result>
        </ResultSet>

     I just need to put Foo and Bar in an array. What is the easiest way to do this? Thanks!
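
     For reference, a minimal sketch using the SimpleXML extension that ships with PHP (so no third-party library is needed; variable names are illustrative):

        <?php
        $xml = '<ResultSet xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
                  <Result>Foo</Result>
                  <Result>Bar</Result>
                </ResultSet>';
        $doc = simplexml_load_string($xml);   // parses the string into a SimpleXMLElement
        $values = array();
        foreach ($doc->Result as $result) {
            $values[] = (string) $result;     // cast each <Result> element to its text value
        }
        print_r($values);                     // Array ( [0] => Foo [1] => Bar )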

    Read the article

  • Reading PDF metadata in PHP

    - by Mark Trapp
    I'm trying to read metadata attached to arbitrary PDFs: title, author, subject, and keywords. Is there a PHP library, preferably open-source, that can read PDF metadata? If so, or if there isn't, how would one use the library (or lack thereof) to extract the metadata? To be clear, I'm not interested in creating or modifying PDFs or their metadata, and I don't care about the PDF bodies. I've looked at a number of libraries, including FPDF (which everyone seems to recommend), but it appears only to be for PDF creation, not metadata extraction.
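
     For what it's worth, one library-free sketch is to shell out to the open-source pdfinfo utility (part of Xpdf/Poppler) and parse its key/value output. This assumes pdfinfo is installed and on the PATH:

        <?php
        function pdf_metadata($path) {
            $output = shell_exec('pdfinfo ' . escapeshellarg($path));
            $meta = array();
            foreach (explode("\n", trim($output)) as $line) {
                // each line looks like "Title:   Some document title"
                $parts = explode(':', $line, 2);
                if (count($parts) == 2) {
                    $meta[trim($parts[0])] = trim($parts[1]);
                }
            }
            return $meta;  // e.g. $meta['Title'], $meta['Author'], $meta['Subject'], $meta['Keywords']
        }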

    Read the article

  • Minimal files to run a fully-functional PostgreSQL in Windows

    - by fxam
     I would like to bundle PostgreSQL into my application. I downloaded the PostgreSQL 9 beta zip without the installer. After extraction, there are 8 directories. From my experiments, at least the following directories are required to run PostgreSQL properly:

        bin
        lib
        share

     The question is: are the above three directories really sufficient to run a fully functional PostgreSQL? Do I need the 'include' and 'symbols' directories? Also, can I remove 'share\locale' if I don't use these locales?
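
     One quick way to test which directories are needed is to initialise and start a scratch instance using only the standard binaries in bin (a sketch; the data directory and settings are examples only):

        bin\initdb.exe -D data -U postgres -E UTF8
        bin\pg_ctl.exe -D data -l postgres.log start
        bin\psql.exe -U postgres -c "SELECT version();"
        bin\pg_ctl.exe -D data stop

     If that works from a tree pruned down to bin, lib and share, nothing else is being loaded at runtime.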

    Read the article

  • Unicode string handling using Windows API

    - by DeadMG
    I always assumed that Unicode string handling was some dark art. However, I've seen that the Windows API has functions for comparing Unicode strings, for example. Does that mean that it's actually feasible to write a Unicode string class that can perform simple actions like sorting, equality comparison, and extraction from a file? Or are there hidden gotchas in the use of these functions that makes it actually a really bad idea? I'm just looking at libraries like ICU and they seem incredibly over-complicated compared to what a Unicode string class backed by the Windows API could actually look like, which would resemble the Standard string classes quite closely.
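
     For illustration, a minimal sketch of the kind of wrapper being described: a locale-aware less-than built on the Win32 CompareStringW API (error handling kept minimal):

        #include <windows.h>
        #include <string>

        // Compare two UTF-16 strings using the user's default locale.
        bool unicode_less(const std::wstring& a, const std::wstring& b)
        {
            int r = CompareStringW(LOCALE_USER_DEFAULT, 0,
                                   a.c_str(), (int)a.size(),
                                   b.c_str(), (int)b.size());
            // CompareStringW returns CSTR_LESS_THAN/CSTR_EQUAL/CSTR_GREATER_THAN, or 0 on failure.
            return r == CSTR_LESS_THAN;
        }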

    Read the article

  • Assign RegEx submatches to variables or map (C++/C)

    - by Michael
     I need to extract the SAME type of information (e.g. first name, last name, telephone, ...) from numerous different text sources (each with a different format and a different order of the variables of interest). I want a function that does the extraction based on a regular expression and returns the result as DESCRIPTIVE variables. In other words, instead of returning each match result as submatch[0], submatch[1], submatch[2], ..., have it do EITHER of the following:

     1.) return a std::map so that the submatches can be accessed via submatch["first_name"], submatch["last_name"], submatch["telephone"]; or
     2.) return variables with the submatches so that the submatches can be accessed via submatch_first_name, submatch_last_name, submatch_telephone.

     I can write a wrapper class around boost::regex to do #1, but I was hoping there would be a built-in or a more elegant way to do this in C++/Boost/STL/C.
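
     Note that recent versions of Boost.Regex support named captures directly, which gives option 1 without any wrapper class; a minimal sketch (the pattern and input are illustrative):

        #include <boost/regex.hpp>
        #include <iostream>
        #include <string>

        int main()
        {
            std::string line = "Doe, John, 555-1234";
            // (?<name>...) declares a named subgroup
            boost::regex re("(?<last_name>\\w+), (?<first_name>\\w+), (?<telephone>[\\d-]+)");
            boost::smatch m;
            if (boost::regex_search(line, m, re)) {
                std::cout << m["first_name"] << ' '     // access submatches by name
                          << m["last_name"]  << ' '
                          << m["telephone"]  << '\n';
            }
        }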

    Read the article

  • Example WLST Script to Obtain JDBC and JTA MBean Values

    - by Daniel Mortimer
     Introduction

     Following on from the blog entry "Get an Offline or Online WebLogic Domain Summary Using WLST!", I have had a request to create a smaller example which only collects a selection of JDBC (System Resource) and JTA configuration and runtime MBean values. So, here it is.

     Download Sample Script

     You can grab the sample script by clicking here.

     Instructions to Run:

     1. After download, extract the zip to the machine hosting the WebLogic environment. You should have three directories along with a readme.txt:

        output
        Sample_Output
        scripts

     2. In the scripts directory, find the start wrapper script startWLSTJDBCSummarizer.sh (Unix) or startWLSTJDBCSummarizer.cmd (MS Windows). Open the appropriate file in an editor and change the environment variable settings to suit your system.

     Example - startWLSTDomainSummarizer.cmd:

        set WL_HOME=D:\product\FMW11g\wlserver_10.3
        set DOMAIN_HOME=D:\product\FMW11g\user_projects\domains\MyDomain
        set WLST_OUTPUT_PATH=D:\WLSTDomainSummarizer\output\
        set WLST_OUTPUT_FILE=WLST_JDBC_Summary_Via_MBeans.html
        call "%WL_HOME%\common\bin\wlst.cmd" WLS_JDBC_Summary_Online.py

     Note: The WLST_OUTPUT_PATH directory value must have a trailing slash. If there is no trailing slash, the script will error and not continue.

     3. Run the shell / command line wrapper script. It should launch WLST and kick off "WLS_JDBC_Summary_Online.py". This will hit you with some prompts, e.g.:

        Is your domain Admin Server up and running and do you have the connection details? (Y /N ): Y
        Enter connection URL to Admin Server e.g t3://mymachine.acme.com:7001 : t3://localhost:7001
        Enter weblogic username: weblogic
        Enter weblogic username password (function prompt 1): welcome1

     (Note: the value typed in for the password will not be echoed back to the console.)

     4. If the script runs successfully, you should get an HTML summary in the specified output directory. See example screenshots below:

     Screenshot 1 - JDBC System Resource Tab Page
     Screenshot 2 - JTA Tab Page

     5. For the HTML to render correctly, ensure the .js and .css files provided (review the output directory created by the zip file extraction) are accessible. For example, to view the HTML locally (without using a web server), place the HTML output, jquery-ui.js, spry.js and wlstsummarizer.css in the same directory.

     Disclaimer

     This is a sample script. I have tested it against WebLogic Server 10.3.6 domains on MS Windows and Unix. I cannot guarantee that the script will run error free or produce the expected output on your system. If you have any feedback, add a comment to the blog. I will endeavour to fix any problems with my WLST code.

     Credits

     JQuery: http://jquery.com/
     Spry (Adobe): https://github.com/adobe/Spry
     http://www.red-team-design.com/cool-headings-with-pseudo-elements

    Read the article

  • What is a good design model for my new class?

    - by user66662
     I am a beginning programmer who, after trying to manage over 2000 lines of procedural PHP code, has now discovered the value of OOP. I have read a few books to get me up to speed on the beginning theory, but would like some advice on practical application.

     So, for example, let's say there are two types of content objects - an ad and a calendar event. What my application does is scan different websites (a predefined list) and, when it finds an ad or an event, it extracts the data and saves it to a database. All of my objects will share a $title and $description. However, the Ad object will have a $price and the Event object will have a $startDate. Should I have two separate classes, one for each object? Should I have a 'superclass' with the $title and $description, with two other Ad and Event classes with their own properties? The latter is at least the direction I am on now.

     My second question about this design is how to handle the logic that extracts the data for $title, $description, $price, and $startDate. For each website in my predefined list, there is a specific regex that returns the desired value for each property. Currently, I have an extremely large switch statement in my constructor which determines what website I am on, sets the regex variables accordingly, and continues on. Not only that, but now I have to repeat the logic that determines what site I am on in the constructor of each class. This doesn't feel right. Should I create another class, Algorithms, and store the logic there for each site? Should the functions that handle that logic be in this class, or specific to the classes whose properties they set?

     I want to take into account two things in my design:

     1) I will add different content objects in the future that share $title and $description but have their own properties, so I want to be able to easily grow these as needed.
     2) I will be adding more websites constantly (each with their own algorithms for data extraction), so I would like to plan for efficiently managing and working with these now.

     I thought about extending the Ad or Event class with a 'websiteX' class and storing its functions there. But this didn't feel right either, as now I have to manage hundreds of little website-specific class files.

     Note: I didn't know if this was the correct site or if Stack Overflow was the better choice. If so, let me know and I'll post there.
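
     To make the superclass-plus-extractor direction concrete, here is one possible shape (a hedged sketch, all names hypothetical): a base class for the shared fields, and one small extractor object per site, so the site-detection switch disappears from the content classes' constructors.

        <?php
        abstract class Content {
            public $title;
            public $description;
        }
        class Ad extends Content    { public $price; }
        class Event extends Content { public $startDate; }

        interface SiteExtractor {
            public function extract($html);  // returns a populated Content object
        }

        class SiteXAdExtractor implements SiteExtractor {
            public function extract($html) {
                $ad = new Ad();
                if (preg_match('/<h1>(.*?)<\/h1>/s', $html, $m)) {  // site-specific regex
                    $ad->title = $m[1];
                }
                // ...description and price regexes for this particular site...
                return $ad;
            }
        }

     A registry mapping each site in the predefined list to its extractor then replaces the large switch statement, and adding a new site means adding one extractor class rather than touching every content class.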

    Read the article

  • Commandline program to extract archives with automatic subdirectory detection

    - by ??????
     The title already says it. What I'm looking for is essentially the pure commandline counterpart to ark -ba <path> (on KDE), or file-roller -h <path> (on GNOME/Unity). Unfortunately, both ark and file-roller require X to be running. I'm aware that it is relatively simple to write a tool that detects archives based on their file extension, and then runs the appropriate program:

        #!/bin/bash
        if [[ -f "$1" ]] ; then
            case $1 in
                *.tar.bz2) tar xjvf $1 ;;
                *.tar.gz)  tar xzvf $1 ;;
                *.bz2)     bunzip2 $1 ;;
                *.rar)     rar x $1 ;;
                *.gz)      gunzip $1 ;;
                *.tar)     tar xf $1 ;;
                *.tbz2)    tar xjvf $1 ;;
                *.tgz)     tar xzvf $1 ;;
                *.zip)     unzip $1 ;;
                *.Z)       uncompress $1 ;;
                *.7z)      7z x $1 ;;
                *) echo "'$1' cannot be extracted with this utility" ;;
            esac
        else
            echo "path '$1' does not exist or is not a file"
        fi

     However, that doesn't take care of subdirectory detection (and in fact, many extraction programs do not even supply such an option). So might there be a program that does exactly that? I wasn't sure whether or not to ask on askubuntu.com, because this question isn't really about Ubuntu, but rather about any Linux operating system. My apologies if this question does not fit in here.
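
     In case no ready-made program turns up, the missing subdirectory detection can be scripted too; a sketch of the idea (extract into a temporary directory, keep the archive's own single root folder if it has one, otherwise wrap the contents in a folder named after the archive):

        #!/bin/bash
        archive=$1
        tmp=$(mktemp -d) || exit 1
        case $archive in
            *.tar.gz|*.tgz)   tar xzf "$archive" -C "$tmp" ;;
            *.tar.bz2|*.tbz2) tar xjf "$archive" -C "$tmp" ;;
            *.zip)            unzip -q "$archive" -d "$tmp" ;;
            *) echo "unsupported archive: $archive" ; exit 1 ;;
        esac
        entries=("$tmp"/*)
        if [ ${#entries[@]} -eq 1 ] && [ -d "${entries[0]}" ]; then
            mv "${entries[0]}" .                          # archive already has a single root directory
        else
            dest=$(basename "$archive"); dest=${dest%%.*}
            mkdir -p "$dest" && mv "$tmp"/* "$dest"/      # loose files: wrap them
        fi
        rmdir "$tmp" 2>/dev/null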

    Read the article

  • Are there any Microsoft Exchange Clients for iOS and Android that store their local data in an encrypted manner?

    - by Zac B
     I don't feel like this is a product recommendation question, more of a "does this tech even exist and is it feasible" question, but if I'm wrong, feel free to give this question the boot.

     Context: Our company has a bunch of traveling employees who access the company's Exchange server via their iDevices or Android phones, but because of the data protection laws in the state where our company is based (and the nature of the data our company works with), a recent security audit found that all mobile devices (laptops, phones, etc.) operated by our company need to have all company correspondence and related data encrypted all the time. For laptops, that was easy: BitLocker or TrueCrypt, problem solved. For phones and tablets, however, I'm stumped. Sure, you can put lock screens/passwords on the phones, but the data is still accessible via external extraction, as law enforcement authorities already know.

     Question: Are there any clients for Microsoft Exchange that run on iOS or Android which store local data encrypted? The people using our mobile devices do a lot of their work while offline, so just giving them OWA access with SSL connection security isn't enough. Are there apps/technologies that present an additional login credential prompt to decrypt locally stored data in the app's storage area on the phone? My gut reaction when I started looking into this was "that doesn't sound like something Apple would allow into the App Store", but I've been wrong before...

    Read the article

  • MS SQL Server 2005 Express rebuild master DB problem

    - by PaN1C_Showt1Me
     Hi! There has been a power loss on our server and I cannot start the SQL service because the master DB is corrupted (as the log states). I found many articles recommending running setup.exe with optional parameters. This is what I did: I downloaded SQLEXPR32.EXE from the MS page and ran it. The first problem was that it extracted all the setup files and started the default installation procedure (which was useless for me, as I need those params), and if I cancelled it, all the extracted files disappeared. That's why I decided to copy the extracted files somewhere and then cancel the default installation. Now I'm trying to run the setup.exe from the extraction:

        setup.exe /qb INSTANCENAME=MSSQLSERVER REINSTALL=SQL_Engine REBUILDDATABASE=1 SAPWD=xxxxx

     It asks me if I want to rewrite the system DB, which is what I need, but then while installing I get this error:

        An installation package for the product Microsoft SQL Server 2005 Express Edition cannot be found.
        Try the installation again using a valid copy of the installation package 'SqlRun_SQL.msi'

     Then it tries to install something and it states: cannot install because the same instance name already exists. But I don't want to install a new instance. Any idea how to solve this, please? Thank you in advance!

    Read the article

  • "cannot receive new filesystem stream: invalid backup stream" error when unpacking flash archive on solaris 10

    - by Bovril
     I've searched around but I'm having no luck with some peculiar behavior with a flash archive. I'm using HP Server Automation 9.14 to deploy the OS. I'm creating a Solaris 10 flash archive to create a snapshot default build in our environment. I create the flash archive with:

        # flar create -c -S -n g8-solaris10-u10 g8-solaris10-u10.flar

     It seems to create the file without any problems (exit status 0). When deploying to a new system (same hardware), it extracts to a point and then bails. The last error in the log I can see is:

        Extracted 2047.00 MB ( 82% of 2488.98 MB archive)
        ERROR: Could not read file (172.27.118.100:/media/opsware/sunos/flar/g8-solaris10-u10.flar
        ERROR: Errors occurred during the extraction of flash archive.
               The file /tmp/flash_errors contains the list of errors encountered
        ERROR: Could not extract Flash archive
        ERROR: Flash installation failed

     The error log contained the following message:

        cannot receive new filesystem stream: invalid backup stream

     A previous version of this flash archive (1.8 GB) worked OK, so I suspect size may be a factor. The source system (the one the flash archive is an image of) is an HP BL460c Gen8; some more info below.

     OS version info:

        # uname -a
        SunOS testhostname 5.10 Generic_147441-01 i86pc i386 i86pc
        # who -r
        . run-level 3 Oct 15 08:15 3 0 S

     Disks:

        # echo | format
        Searching for disks...done
        AVAILABLE DISK SELECTIONS:
        0. c0t0d0 <DEFAULT cyl 17841 alt 2 hd 255 sec 63>
           /pci@0,0/pci8086,3c06@2,2/pci103c,3355@0/sd@0,0
        Specify disk (enter its number):

     zpools:

        # zpool list
        NAME    SIZE  ALLOC  FREE  CAP  HEALTH  ALTROOT
        rpool   136G  24.6G  111G  18%  ONLINE  -

     Zones:

        # zoneadm list -cv
        ID NAME    STATUS   PATH  BRAND   IP
         0 global  running  /     native  shared

     The extracted size of 2047 MB seems suspiciously close to 2048, which is concerning. Any help would be greatly appreciated. Thanks

    Read the article

  • Archive software for big files and fast index

    - by AkiRoss
     I'm currently using tar for archiving some files. Problem is: the archives are pretty big, contain a lot of data, and tar is very slow when listing and extracting. I often need to extract single files or folders from the archive, but I don't currently have an external index of files. So, is there an alternative for Linux that allows me to build uncompressed archive files, preserving the file attributes AND having a fast access list table? I'm talking about archives of 10 to 100 GB, and it's pretty impractical to wait several minutes to access a single file. Anyway, any trick to solve this problem is welcome (but single archives are non-optional, so no rsync or similar). Thanks in advance!

     EDIT: I'm not compressing archives, and using tar I think they are too slow. To be precise about "slow", I'd like that:

     - listing archive content should take time linear in the file count inside the archive, but with a very small constant (e.g. if a list of all the files is included at the head of the archive, it could be very fast); and
     - extraction of a target file/directory should (filesystem permitting) take time linear with the target size (e.g. if I'm extracting a 2 MB PDF file from a 40 GB directory, I'd really like it to take less than a few minutes... if not seconds).

     Of course, this is just my idea and not a requirement. I guess such performance could be achievable if the archive contained an index of all the files with their respective offsets, and such an index were well organized (e.g. tree structure).

    Read the article

  • Pattern matching gnmap fields with SED

    - by Ovid
     I am testing the regex needed for creating field extraction with Splunk for nmap, and think I might be close...

     Example full line:

        Host: 10.0.0.1 (host) Ports: 21/open|filtered/tcp//ftp///, 22/open/tcp//ssh//OpenSSH 5.9p1 Debian 5ubuntu1 (protocol 2.0)/, 23/closed/tcp//telnet///, 80/open/tcp//http//Apache httpd 2.2.22 ((Ubuntu))/, 10000/closed/tcp//snet-sensor-mgmt/// OS: Linux 2.6.32 - 3.2 Seq Index: 257 IP ID Seq: All zeros

     I've used underscore "_" as the delimiter because it makes it a little easier to read:

        root@host:/# sed -n -e 's_\([0-9]\{1,5\}\/[^/]*\/[^/]*\/\/[^/]*\/\/[^/]*\/.\)_\n\1_pg' filename

     The same regex with the escape characters removed:

        root@host:/# sed -n -e 's_\([0-9]\{1,5\}/[^/]*/[^/]*//[^/]*//[^/]*/.\)_\n\1_pg' filename

     Output:

        ...
        Host: 10.0.0.1 (host) Ports: 21/open|filtered/tcp//ftp///, 22/open/tcp//ssh//OpenSSH 2.0p1 Debian 2ubuntu1 (protocol 2.0)/, 23/closed/tcp//telnet///, 80/open/tcp//http//Apache httpd 5.4.32 ((Ubuntu))/, 10000/closed/tcp//snet-sensor-mgmt/// OS: Linux 9.8.76 - 7.3 Seq Index: 257 IPID Seq: All zeros
        ...

     As you can see, the pattern matching appears to be working, although I am unable to:

     1 - match on both end-of-field cases (the trailing comma and whitespace/tabspace); the last line still contains unwanted text (in this case, the OS and TCP timing info), and
     2 - remove any of the unnecessary data, i.e. print only the matching pattern. It is actually printing the whole line, and if I remove the sed -n flag, the remaining file contents are also printed. I can't seem to locate a way to only print the matched regex.

     Being fairly new to sed and regex, any help or pointers is greatly appreciated!
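
     On the "print only the match" point, one option is to sidestep sed entirely: grep's -o flag prints just the matched portion, one match per line, which also takes care of both the trailing-comma and end-of-line cases since nothing outside the matches is ever printed (a sketch using the same pattern in extended-regex form; the filename is an example):

        grep -oE '[0-9]{1,5}/[^/]*/[^/]*//[^/]*//[^/]*/' scan-output.gnmap

     The comma after each port field and the trailing OS/timing text simply never enter the output.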

    Read the article

  • How do I hook into Tar with BASH?

    - by orb
     Long Story Short

     I am working with Tar archives that contain PNG images in base64 encoding. I would like to use BASH (or whatever else works) to hook into the extraction function of Tar to decode the PNG images from base64 encoding to standard PNG encoding after the files are unpacked. A simple

        cat $input-file | base64 -d > $output-file

     will successfully decode the images. Is there a way I can hook into tar -xf so that users do not have to do any (or minimal) extra work to decode the images? In the GNU Tar documentation (http://www.gnu.org/software/tar/manual/html_chapter/Backups.html#SEC97) I found that there are in fact variables reserved to hold the names of functions I desire to be hooked into various moments in Tar program execution. However, the documentation explains that these variables, along with other variables that can be set to configure Tar, are located in a file named backup-specs. Unfortunately, the path to this file is not given. Further, running sudo find / -name backup-specs tells me that this file is not present on my Ubuntu version 13.04 system.

     Background information not included in the Long Story Short

     I have been working on a browser-based (WebGL) particle effect creation application (http://www.particleeffect.org), (https://github.com/cgrabowski/webgl-particle-effect-editor), (https://github.com/cgrabowski/webgl-particle-effect). I have begun to write a client-side-only solution for saving and loading effect data as a tar archive. However, since client-side JavaScript has limited capability to process binary data, the images used as textures in the effect are saved with base64 encoding. I have been able to implement saving effect data as a Tar archive (haven't pushed that to Github yet). However, the images present in said Tar archive cannot be manipulated unless they are decoded from base64 encoding.
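
     If no suitable hook surfaces, a hedged alternative is to skip Tar's hook machinery entirely and post-process after extraction, decoding every extracted .png in place (a sketch; the archive and directory names are examples):

        #!/bin/bash
        mkdir -p effect
        tar -xf effect.tar -C effect
        # decode each extracted base64-encoded PNG in place
        find effect -name '*.png' -print0 | while IFS= read -r -d '' f; do
            base64 -d "$f" > "$f.tmp" && mv "$f.tmp" "$f"
        done

     Wrapping those lines in a small script gives users a single command, which may be the minimal extra work the question allows for.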

    Read the article

  • How to correctly deploy Adobe Reader 9.1

    - by Ben Gillam
     Hi. I have recently tried to deploy Adobe Reader 9.1 onto our network here (SBS 2003 server and XP workstations). I followed the instructions for the extraction of the installer and .msi, and then created a .mst transform file to set custom options (suppress EULA, don't create desktop icon, etc.). I then added the package to my deployment GPO, applied the relevant .mst file and proceeded to deploy across the network. The software package is computer assigned, to be installed prior to logon, to avoid user permissions issues. The package deploys correctly to computers and will run perfectly fine if you run it from a shortcut; however, when trying to view a PDF from within a web browser, it fails with the following message:

        "The Adobe Acrobat/Reader that is running can not be used to view PDF files in a web browser.
        Adobe Acrobat/Reader version 8 or 9 is required. Please exit and try again."

     I have found many pages on Google referring to this problem (e.g. http://kb2.adobe.com/cps/405/kb405461.html), but none appear to be in relation to the problem I have found. These fixes recommend:

     - correcting a registry entry (which, I should mention, is missing after the deployed installation); however, this does not work;
     - switching off display in a browser, which seems to defeat the object of fixing the problem;
     - removing old versions - there aren't any;
     - trying with a different user - this affects all users of all privilege levels on all computers.

     On my workstation I uninstalled Acrobat Reader 9.1, then reinstalled manually using the same installation source files, and it works fine. Has anyone successfully deployed AR 9.1 on their domain, and if so, how? For the time being I have downloaded the older 8.1.3 release and deployed this in the same way, which works fine, but I would like to be using the up-to-date version. Thanks

    Read the article

  • CreationName for SSIS 2008 and adding components programmatically

    If you are building SSIS 2008 packages programmatically and adding data flow components, you will probably need to know the creation name of the component to add. I can never find a handy reference when I need one, hence this rather mundane post. See also CreationName for SSIS 2005.

    We start with a very simple snippet for adding a component:

        // Add the Data Flow Task
        package.Executables.Add("STOCK:PipelineTask");

        // Get the task host wrapper, and the Data Flow task
        TaskHost taskHost = package.Executables[0] as TaskHost;
        MainPipe dataFlowTask = (MainPipe)taskHost.InnerObject;

        // Add OLE-DB source component - ** This is where we need the creation name **
        IDTSComponentMetaData90 componentSource = dataFlowTask.ComponentMetaDataCollection.New();
        componentSource.Name = "OLEDBSource";
        componentSource.ComponentClassID = "DTSAdapter.OLEDBSource.2";

    So as you can see, the creation name for an OLE-DB Source is DTSAdapter.OLEDBSource.2.

    CreationName Reference

        ADO NET Destination: Microsoft.SqlServer.Dts.Pipeline.ADONETDestination, Microsoft.SqlServer.ADONETDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
        ADO NET Source: Microsoft.SqlServer.Dts.Pipeline.DataReaderSourceAdapter, Microsoft.SqlServer.ADONETSrc, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
        Aggregate: DTSTransform.Aggregate.2
        Audit: DTSTransform.Lineage.2
        Cache Transform: DTSTransform.Cache.1
        Character Map: DTSTransform.CharacterMap.2
        Checksum: Konesans.Dts.Pipeline.ChecksumTransform.ChecksumTransform, Konesans.Dts.Pipeline.ChecksumTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
        Conditional Split: DTSTransform.ConditionalSplit.2
        Copy Column: DTSTransform.CopyMap.2
        Data Conversion: DTSTransform.DataConvert.2
        Data Mining Model Training: MSMDPP.PXPipelineProcessDM.2
        Data Mining Query: MSMDPP.PXPipelineDMQuery.2
        DataReader Destination: Microsoft.SqlServer.Dts.Pipeline.DataReaderDestinationAdapter, Microsoft.SqlServer.DataReaderDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
        Derived Column: DTSTransform.DerivedColumn.2
        Dimension Processing: MSMDPP.PXPipelineProcessDimension.2
        Excel Destination: DTSAdapter.ExcelDestination.2
        Excel Source: DTSAdapter.ExcelSource.2
        Export Column: TxFileExtractor.Extractor.2
        Flat File Destination: DTSAdapter.FlatFileDestination.2
        Flat File Source: DTSAdapter.FlatFileSource.2
        Fuzzy Grouping: DTSTransform.GroupDups.2
        Fuzzy Lookup: DTSTransform.BestMatch.2
        Import Column: TxFileInserter.Inserter.2
        Lookup: DTSTransform.Lookup.2
        Merge: DTSTransform.Merge.2
        Merge Join: DTSTransform.MergeJoin.2
        Multicast: DTSTransform.Multicast.2
        OLE DB Command: DTSTransform.OLEDBCommand.2
        OLE DB Destination: DTSAdapter.OLEDBDestination.2
        OLE DB Source: DTSAdapter.OLEDBSource.2
        Partition Processing: MSMDPP.PXPipelineProcessPartition.2
        Percentage Sampling: DTSTransform.PctSampling.2
        Performance Counters Source: DataCollectorTransform.TxPerfCounters.1
        Pivot: DTSTransform.Pivot.2
        Raw File Destination: DTSAdapter.RawDestination.2
        Raw File Source: DTSAdapter.RawSource.2
        Recordset Destination: DTSAdapter.RecordsetDestination.2
        RegexClean: Konesans.Dts.Pipeline.RegexClean.RegexClean, Konesans.Dts.Pipeline.RegexClean, Version=2.0.0.0, Culture=neutral, PublicKeyToken=d1abe77e8a21353e
        Row Count: DTSTransform.RowCount.2
        Row Count Plus: Konesans.Dts.Pipeline.RowCountPlusTransform.RowCountPlusTransform, Konesans.Dts.Pipeline.RowCountPlusTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
        Row Number: Konesans.Dts.Pipeline.RowNumberTransform.RowNumberTransform, Konesans.Dts.Pipeline.RowNumberTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
        Row Sampling: DTSTransform.RowSampling.2
        Script Component: Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost, Microsoft.SqlServer.TxScript, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
        Slowly Changing Dimension: DTSTransform.SCD.2
        Sort: DTSTransform.Sort.2
        SQL Server Compact Destination: Microsoft.SqlServer.Dts.Pipeline.SqlCEDestinationAdapter, Microsoft.SqlServer.SqlCEDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
        SQL Server Destination: DTSAdapter.SQLServerDestination.2
        Term Extraction: DTSTransform.TermExtraction.2
        Term Lookup: DTSTransform.TermLookup.2
        Trash Destination: Konesans.Dts.Pipeline.TrashDestination.Trash, Konesans.Dts.Pipeline.TrashDestination, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b8351fe7752642cc
        TxTopQueries: DataCollectorTransform.TxTopQueries.1
        Union All: DTSTransform.UnionAll.2
        Unpivot: DTSTransform.UnPivot.2
        XML Source: Microsoft.SqlServer.Dts.Pipeline.XmlSourceAdapter, Microsoft.SqlServer.XmlSrc, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91

    Here is a simple console program that can be used to enumerate the pipeline components installed on your machine, and dump out a list of all components like that above. You will need to add a reference to the Microsoft.SQLServer.ManagedDTS assembly.

        using System;
        using System.Diagnostics;
        using Microsoft.SqlServer.Dts.Runtime;

        public class Program
        {
            static void Main(string[] args)
            {
                Application application = new Application();
                PipelineComponentInfos componentInfos = application.PipelineComponentInfos;
                foreach (PipelineComponentInfo componentInfo in componentInfos)
                {
                    Debug.WriteLine(componentInfo.Name + "\t" + componentInfo.CreationName);
                }
                Console.Read();
            }
        }

    Read the article

  • JPedal Action for Converting PDF to JavaFX

    - by Geertjan
     The question of the day comes from Mark Stephens, from JPedal (JPedal is the leading 100% Java PDF library, providing a Java PDF viewer, PDF to image conversion, PDF printing, and PDF search and PDF extraction features), in the form of a screenshot.

     The question is clear. By looking at the annotations above, you can see that Mark has an ActionListener that has been bound to the right-click popup menu on PDF files. Now he needs to get hold of the file to which the Action has been bound. How, oh how, can one get hold of that file?

     Well, it's simple. Leave everything you see above exactly as it is, but change the Java code section to this:

        public final class PDF2JavaFXContext implements ActionListener {

            private final DataObject context;

            public PDF2JavaFXContext(DataObject context) {
                this.context = context;
            }

            public void actionPerformed(ActionEvent ev) {
                FileObject fo = context.getPrimaryFile();
                File theFile = FileUtil.toFile(fo);
                //do something with your file...
            }

        }

     The point is that the annotations at the top of the class bind the Action to either Actions.alwaysEnabled, which is a factory method for creating always-enabled Actions, or Actions.context, which is a factory method for creating context-sensitive Actions. How does the Action get bound to the factory method? The annotations are converted, when the module is compiled, into XML registration entries in the "generated-layer.xml", which you can find in your "build" folder, in the Files window, after building the module. In Mark's case, since the Action should be context-sensitive to PDF files, he needs to bind his PDF2JavaFXContext ActionListener (which should probably be named "PDF2JavaFXActionListener", since the class is an ActionListener) to Actions.context. All he needs to do is pass the object he wants to work with into the constructor of the ActionListener.

     Now, when the module is built, the annotation processor is going to take the annotations and convert them to XML registration entries, but the constructor will also be checked to see whether it is empty or not. In this case, the constructor isn't empty, hence the Action should be context-sensitive, and so the ActionListener is bound to Actions.context. Actions.context will do all the enablement work for Mark, so that he will not need to provide any code for enabling/disabling the Action. The Action will be enabled whenever a DataObject is selected. Since his Action is bound to Nodes in the Projects window that represent PDF files, the Action will always be enabled whenever Mark right-clicks on a PDF Node, since the Node exposes its own DataObject. Once Mark has access to the DataObject, he can get the underlying FileObject via getPrimaryFile, and he can then convert the FileObject to a java.io.File via FileUtil.toFile. Once he's got the java.io.File, he can do with it whatever he needs.

     Further reading: http://bits.netbeans.org/dev/javadoc/

    Read the article

  • Oracle Solaris Crash Analysis Tool 5.3 now available

    - by user12609056
     Oracle Solaris Crash Analysis Tool 5.3

     The Oracle Solaris Crash Analysis Tool Team is happy to announce the availability of release 5.3. This release addresses bugs discovered since the release of 5.2, plus enhancements to support Oracle Solaris 11 and updates to Oracle Solaris versions 7 through 10. The packages are available on My Oracle Support - simply search for Patch 13365310 to find the downloadable packages.

     Release Notes

     General

     blast support: The blast GUI has been removed and is no longer supported.

     Oracle Solaris 2.6 Support: As of Oracle Solaris Crash Analysis Tool 5.3, support for Oracle Solaris 2.6 has been dropped. If you have systems running Solaris 2.6, you will need to use Oracle Solaris Crash Analysis Tool 5.2 or earlier to read its crash dumps.

     New Commands

     Sanity Command: Though one can re-run the sanity checks that are run at tool start-up using the coreinfo command, many users were unaware that they were. Though these checks can still be run using that command, a new command, namely sanity, can now be used to re-run the checks at any time.

     Interface Changes

     scat_explore -r and -t options: The -r option has been added to scat_explore so that a base directory can be specified, and the -t option was added to enable color tagging of the output. The scat_explore sub-command now accepts new options. Usage is:

        scat --scat_explore [-atv] [-r base_dir] [-d dest] [unix.N] [vmcore.N]

     Where:

        -v           Verbose mode: the command will print messages highlighting what it's doing.
        -a           Auto mode: the command does not prompt for input from the user as it runs.
        -d dest      Instructs scat_explore to save its output in the directory dest instead of the present working directory.
        -r base_dir  Instructs scat_explore to save its output under the directory base_dir instead of the present working directory. If the output file is not specified using the -d option, scat_explore names it "scat_explore_system_name_hostid_lbolt_value_corefile_name".
        -t           Enable color tags. When enabled, scat_explore tags important text with colors that match the level of importance. These colors correspond to the colors normally printed when running Oracle Solaris Crash Analysis Tool in interactive mode.
        N            The number of the crash dump. Specifying unix.N vmcore.N is optional and not required.

     Tag names and their definitions:

        FATAL    An extremely important message which should be investigated.
        WARNING  A warning that may or may not have anything to do with the crash.
        ERROR    An error, usually printed with a suggested command.
        ALERT    Used to indicate something the tool discovered.
        INFO     Purely informational message.
        INFO2    A follow-up to an INFO tagged message.
        REDZONE  Usually used when printing memory info showing something is in the kernel's REDZONE.

     Example:

        $ scat --scat_explore -a -v -r /tmp vmcore.0
        #Output directory: /tmp/scat_explore_oomph_833a2959_0x28800_vmcore.0
        #Tar filename: scat_explore_oomph_833a2959_0x28800_vmcore.0.tar
        #Extracting crash data...
        #Gathering standard crash data collections...
        #Panic string indicates a possible hang...
        #Gathering Hang Related data...
        #Creating tar file...
        #Compressing tar file...
        #Successful extraction
        SCAT_EXPLORE_DATA_DIR=/tmp/scat_explore_oomph_833a2959_0x28800_vmcore.0

     Sending scat_explore results

     The .tar.gz file that results from a scat_explore run may be sent using Oracle Secure File Transfer. The Oracle Secure File Transfer User Guide describes how to use it to send a file. The send_scat_explore script now has a -t option for specifying a "to" address for sending the results. This option is mandatory.

     Known Issues

     There are a couple of known issues that we are addressing in release 5.4, which you should expect to see soon:

     - Display of timestamps in threads and clock information is incorrect in some cases.
     - There are alignment issues with some of the tables produced by the tool.

    Read the article

  • Restarting explorer from a batch file only opens an explorer window

    - by Ben Hooper
     In one part of a batch file (kind of), I need to restart Explorer. I use the following method to accomplish this:

        taskkill /f /im explorer.exe >nul
        explorer.exe
        :: I have also tried: %winDir%\explorer.exe
        :: start %winDir%\explorer.exe
        :: start /b %winDir%\explorer.exe
        :: start /d %winDir%\explorer.exe (as suggested by panda-34)
        ::
        :: I've even tried delaying the above commands with: ping localhost -n 11 >nul

     Then this happens:

     1. explorer.exe is successfully terminated (denoted by the lack of taskbar and desktop)
     2. An explorer window opens, which I am left with indefinitely (see Image 1)
     3. I can only then restart explorer by manually starting a new process from Task Manager (Win + R doesn't respond), even though explorer.exe is actually already in the process list, strangely enough (see Image 2)

     Now, I say "kind of" as I'm running the batch file from a self-executing SFX archive, created with WinRAR. When executed, the contents of the archive are extracted to %temp% and a user-defined bootstrapper (in this case, my batch file) is run upon successful extraction. The strangest thing about it, though, is that if you manually extract the contents of the archive and run the batch file then explorer restarts correctly. It only ever glitches when called from an SFX. I'm experiencing this glitch on Windows 7 x64.

     Link to an SFX archive demonstrating this, if anyone wants it: https://dl.dropbox.com/u/27573003/Social%20Distribution/restart-explorer.exe

     Image 1:
     Image 2:

    Read the article

  • Rails Autocompletion Issue - Rails 1.2.3 to 2.3.5

    - by Grant Sayer
     I have an issue with Rails autocompletion in some code that I've inherited from an old Rails 1.2.3 project that I'm porting to Rails 2.3.5. The issue revolves around javascript execution within the auto_complete helper's :after_update_element. The scenario is: a user is presented with a popup form with a number of fields. In the first field, as they enter text, the auto_complete AJAX call occurs, returning a result plus a series of other HTML data wrapped in <divs>, so that the after_update_element call can iterate over the other data and fill in the remaining fields. The issue lies with the extraction of the other fields, which works on IE but fails on Firefox. Here is the code:

        <%= text_field_with_auto_complete :item, :product_code, {:value => ""},
            {:size => 40, :class => "input-text", :tabindex => 6,
             :select => 'code',
             :with => "element.name + '=' + escape(element.value) + '&supplier_id=' + $('item_supplier_id').value",
             :after_update_element => "function (ele, value) {
                 $('item_supplier_id').value = Utilities.extract_value(value, 'supplier_id');
                 $('item_supplied_size').value = Utilities.extract_value(value, 'size')}"} %>

     Now, the Utilities function is designed to grab the fields from the string of values and looks like:

        //
        // Extract a particular set of data from the autocomplete actions
        //
        Utilities.extract_value = function (value, className) {
            var result;
            var elements = document.getElementsByClassName(className, value);
            if (elements && elements.length == 1) {
                result = elements[0].innerHTML.unescapeHTML();
            }
            return result;
        };

     In Firefox, the value of result is undefined.

    Read the article

  • Extracting bool from istream in a templated function

    - by Thomas Matthews
     I'm converting my field classes' read functions into one template function. I have field classes for int, unsigned int, long, and unsigned long. These all use the same method for extracting a value from an istringstream (only the types change):

        template <typename Value_Type>
        Value_Type Extract_Value(const std::string& input_string)
        {
            std::istringstream m_string_stream;
            m_string_stream.str(input_string);
            m_string_stream.clear();
            Value_Type value;
            m_string_stream >> value;
            return value;
        }

     The tricky part is with the bool (Boolean) type. There are many textual representations for Boolean: 0, 1, T, F, TRUE, FALSE, and all the case-insensitive combinations.

     Here are the questions:

     - What does the C++ standard say are valid data to extract a bool, using the stream extraction operator?
     - Since Boolean can be represented by text, does this involve locales?
     - Is this platform dependent?

     I would like to simplify my code by not writing my own handler for bool input. I am using MS Visual Studio 2008 (version 9), C++, and Windows XP and Vista.
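
     For reference, the standard's answer is narrower than the list above: by default, operator>> reads a bool numerically and accepts only 0 and 1; with the std::boolalpha manipulator it instead accepts the locale's textual names, which in the default "C" locale are exactly "true" and "false" (lowercase, case-sensitive; they come from the locale's std::numpunct facet, so locales are indeed involved). T, F, TRUE and friends would still need a custom handler. A small sketch:

        #include <iostream>
        #include <sstream>

        int main()
        {
            bool b = false;

            std::istringstream numeric("1");
            numeric >> b;                              // numeric form: only 0 or 1 accepted
            std::cout << b << '\n';                    // prints 1

            std::istringstream textual("true");
            textual >> std::boolalpha >> b;            // textual form via boolalpha
            std::cout << std::boolalpha << b << '\n';  // prints true
        }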

    Read the article

  • Obtaining information about executable code from exe/pdb

    - by Miro Kropacek
     Hello, I need to extract code (but not data!) from classic Win32 exe/dll files. It's clear I can't do this only by extracting the code segment's content (because the code segment also contains data - jump tables, for example), and that I need some help from the compiler.

     - *.map files are nice, but they only contain the addresses of functions, i.e. the safest thing I can do is start at such an address and process until the first return/jump instruction (because part of the function could be the aforementioned data).
     - *.pdb files are better, but I'm not sure what tools to use to extract information like this. I took a look at DbgHelp and the DIA SDK; the latter seems to be the right tool, but it doesn't look very simple.

     So my questions:

     1. To your knowledge, is it possible to extract information about code/data position (address + length) only via DbgHelp?
     2. If the DIA SDK is the only way, any idea what I should call to get information like that? (That COM stuff is pretty heavy.)
     3. Is there any other way?

     Of course my concern is about Visual Studio and C/C++ source compilation in the first place. Thanks for any hint.
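
     To give a feel for the DIA route (question 2), a rough sketch of the calls involved: enumerate the function symbols and read each one's RVA and length, treating everything in the section outside those ranges as data. This follows the usual DIA pattern but is untested here, so treat it as a starting point rather than working code (the .pdb path is an example):

        #include <dia2.h>
        #include <atlbase.h>
        #include <cstdio>

        int main()
        {
            CoInitialize(NULL);
            CComPtr<IDiaDataSource> source;
            source.CoCreateInstance(__uuidof(DiaSource));   // requires the msdia DLL to be registered
            source->loadDataFromPdb(L"app.pdb");
            CComPtr<IDiaSession> session;
            source->openSession(&session);
            CComPtr<IDiaSymbol> global;
            session->get_globalScope(&global);

            CComPtr<IDiaEnumSymbols> funcs;
            global->findChildren(SymTagFunction, NULL, nsNone, &funcs);
            IDiaSymbol* f = NULL;
            ULONG fetched = 0;
            while (SUCCEEDED(funcs->Next(1, &f, &fetched)) && fetched == 1) {
                DWORD rva = 0;
                ULONGLONG length = 0;
                f->get_relativeVirtualAddress(&rva);  // where this function's code starts
                f->get_length(&length);               // how many bytes it occupies
                printf("code at RVA 0x%08lx, %llu bytes\n",
                       (unsigned long)rva, (unsigned long long)length);
                f->Release();
            }
            CoUninitialize();
        }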

    Read the article

  • Basic steps in reading Excel files into MATLAB

    - by user3693727
     >> [NUM,TXT,RAW]=xlsread('C:\Users\Lincoln Wachn\Google Drive\Summer time\Book1')

        ??? Error using ==> xlsread at 219
        XLSREAD unable to open file C:\Users\Lincoln Wachn\Google Drive\Summer time\Book1.
        File C:\Users\Lincoln Wachn\Google Drive\Summer time\Book1.xls not found.

     This is the error that I receive when I try to read a simple Excel file into MATLAB. (This is a snapshot of the spreadsheet I would like to load in.) Could you guide me through the basic know-how to extract these data? I have looked through the other questions pertaining to reading Excel files into MATLAB, but I am still very confused. I ultimately wish to extract the file below for my project using the same method. The second image shows the data I have to extract, which I could not do. Its file type seems to be different: it is a comma-separated values (CSV) file, not .xls. Hence, I am also confused about whether a different file type prevents extraction of the data. Thank you for helping (:
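
     For what it's worth, the error itself just says the file was not found under that name: giving xlsread the full file name, including its extension, is usually enough. For a purely numeric comma-separated file, csvread works without Excel at all (a sketch; the CSV path is a hypothetical example):

        [num, txt, raw] = xlsread('C:\Users\Lincoln Wachn\Google Drive\Summer time\Book1.xls');

        % csvread(filename, firstRow, firstCol) uses zero-based offsets,
        % so this skips a one-line header row:
        data = csvread('C:\Users\Lincoln Wachn\Google Drive\Summer time\data.csv', 1, 0);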

    Read the article

  • How can I substitute the nth occurrence of a match in a Perl regex?

    - by Zaid
     Following up from an earlier question on extracting the n'th regex match, I now need to substitute the match, if found. I thought that I could define the extraction subroutine and call it in the substitution with the /e modifier. I was obviously wrong (admittedly, I had an XY problem).

        use strict;
        use warnings;

        sub extract_quoted {   # à la codaddict
            my ($string, $index) = @_;
            while ($string =~ /'(.*?)'/g) {
                $index--;
                return $1 if (!$index);
            }
            return;
        }

        my $string = "'How can I','use' 'PERL','to process this' 'line'";
        extract_quoted($string, 3);

        $string =~ s/&extract_quoted($string,2)/'Perl'/e;
        print $string;  # Prints 'How can I','use' 'PERL','to process this' 'line'

     There are, of course, many other issues with this technique: What if there are identical matches at different positions? What if the match isn't found? In light of this situation, I'm wondering in what ways this could be implemented.
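
     One working alternative, for the record: since the left-hand side of s/// is a pattern rather than code, match every quoted field and let the /e replacement decide, by counting matches, whether this occurrence is the one to swap (a sketch):

        use strict;
        use warnings;

        my $count  = 0;
        my $string = "'How can I','use' 'PERL','to process this' 'line'";

        # replace only the 3rd quoted substring
        $string =~ s/'(.*?)'/ ++$count == 3 ? "'Perl'" : "'$1'" /ge;

        print $string;  # 'How can I','use' 'Perl','to process this' 'line'

     Because the decision happens per match, identical matches at different positions are handled naturally, and if fewer than n matches exist the string is simply left unchanged.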

    Read the article

  • Delphi Shell IExtractIcon usage and result

    - by Roy M Klever
     What I do: try to extract a thumbnail using IExtractImage; if that fails, I try to extract icons using IExtractIcon to get the maximum icon size. But IExtractIcon gives strange results. The problem is that I tried a method that extracts icons from an imagelist, but if there is no large (256x256) icon it renders the smaller icon at the top-left position of the icon, and that does not look good. That is why I am trying to use IExtractIcon instead. But icons that show up as 256x256 in my imagelist extraction method are reported by IExtractIcon as size 33 large and 16 small. So how do I check if a large (256x256) icon exists? If you need more info I can provide some sample code:

        if PThumb.Image = nil then
        begin
            OleCheck(ShellFolder.ParseDisplayName(0, nil, StringToOleStr(PThumb.Name), Eaten, PIDL, Atribute));
            ShellFolder.GetUIObjectOf(0, 1, PIDL, IExtractIcon, nil, XtractIcon);
            CoTaskMemFree(PIDL);
            bool := False;
            if Assigned(XtractIcon) then
            begin
                GetLocationRes := XtractIcon.GetIconLocation(GIL_FORSHELL, @Buf, sizeof(Buf), IIdx, IFlags);
                if (GetLocationRes = NOERROR) or (GetLocationRes = E_PENDING) then
                begin
                    Bmp := TBitmap.Create;
                    try
                        OleCheck(XtractIcon.Extract(@Buf, IIdx, LIcon, SIcon, 32 + (16 shl 16)));
                        Done := False;

     Roy M Klever

    Read the article
