Search Results

Search found 3241 results on 130 pages for 'extract'.


  • CodePlex Daily Summary for Thursday, September 27, 2012

    CodePlex Daily Summary for Thursday, September 27, 2012

    Popular Releases

    • Visual Studio Icon Patcher: Version 1.5.2: This version contains no new images from v1.5.1. It contains the following improvements: better support for detecting the installed languages; the extract & inject commands won’t run if Visual Studio is running; you may now run in extract or inject mode; the p/invoke code was cleaned up based on Code Analysis recommendations; when a p/invoke method fails, the Win32 error message is now displayed; error messages use red text; status messages use green text.
    • MCEBuddy 2.x: MCEBuddy 2.2.16: Changelog for 2.2.16 (32-bit and 64-bit). A standalone remote client is now also available to control the Engine remotely. 1. Added support for remote connections for status and configuration. MCEBuddy now uses port 23332. The remote server name, remote server port and local server port can be updated from the MCEBuddy.conf file, BUT the Service or GUI needs to be restarted (i.e. reboot, restart the service or restart the program) for it to take effect. Refer to the documentation for more details http://mce...
    • LoLHQ - Personal League of Legends Assistant: LoLHQ v1.0: LoLHQ version 1.0 (full, portable). Instructions: download, extract, run LoLHQ.exe and, if needed, specify the League of Legends installation path.
    • D3 Loot Tracker: 1.3.1 (patch): Magic find value will now display properly and includes follower value. ILevel of legendary items will now display properly.
    • ZXing.Net: ZXing.Net 0.9.0.0: On the way to release 1.0, the API should be stable now with this version. Synced with rev. 2393 of the Java version; improved API; better Unity support; Windows RT binaries; Windows CE binaries; new Windows Service demo; new WPF demo.
    • SSIS GoogleAnalyticsSource: Version 1.1 Alpha 2: The component now uses the Google API V2.4, including the management API.
    • Ext Spec: Ext Spec 1.0.0: Completed remaining tasks: 7 12 17
    • The GLMET Project: Sound Recorder: --
    • WinRtBehaviors: V1.0.2: Includes simple Blend support.
    • MVC Bootstrap: MVC Boostrap 0.5.1: A small demo site, based on the default ASP.NET MVC 3 project template, showing off some of the features of MVC Bootstrap. This release uses Entity Framework 5 for data access and Ninject 3 for dependency injection. If you download and use this project, please give some feedback, good or bad!
    • Simple PM - Project Management Simplified!: Simple Pm v1 - Aplha (Re-released): INSTALLATION GUIDE: 1. Run the web setup, which will install the web app to IIS. 2. Make sure you select the "ASP.NET v4.0" application pool during the installation. 3. Create a database named "SimplePm". 4. Run the attached database script on this database. 5. Change the database username and password in the connection strings defined as SimplePmEntities and ApplicationServices in the Web.config file. 6. That's it! Simple Pm is ready to go! For any installation assistance feel free to c...
    • menu4web: menu4web 1.0 - free javascript menu for web sites: menu4web 1.0 has been tested with all major browsers: Firefox, Chrome, IE, Opera and Safari. The minified m4w.js library is less than 9K. Includes 21 menu examples of different styles. Can be freely distributed under The MIT License (MIT).
    • Rawr: Rawr 5.0.0: This is the downloadable WPF version of Rawr! For the web-based version see http://elitistjerks.com/rawr.php. You can find the version notes at http://rawr.codeplex.com/wikipage?title=VersionNotes. Rawr Addon (NOT UPDATED YET FOR MOP): we now have a Rawr Official Addon, hosted on Curse, for in-game exporting and importing of character data. The Addon does not perform calculations like Rawr; it simply shows your exported Rawr data in WoW tooltips and lets you export your character to Rawr (including ba...
    • Coevery - Free CRM: Coevery 1.0.0.26: The zh-CN issue has been solved. We also added a project management module.
    • VidCoder: 1.4.1 Beta: Updated to HandBrake 4971; this should fix some issues with stuck PGS subtitles. Fixed a build break which prevented pre-compiled XML serializers from showing up. Fixed a problem where a preset would get errantly marked as modified when re-opening the encode settings window or importing a new preset.
    • JSLint for Visual Studio 2010: 1.4.0: VS2012 support is alpha.
    • BlackJumboDog: Ver5.7.2: 2012.09.23 Ver5.7.2 (1)InetTest?? (2)HTTP?????????????????100???????????
    • Player Framework by Microsoft: Player Framework for Windows 8 (Preview 6): IMPORTANT: list of breaking changes from Preview 5. Added a separate samples download with .vsix dependencies instead of source dependencies; support for FreeWheel SmartXML ad responses; support for Smooth Streaming SDK DownloaderPlugins; support for VMAP and TTML polling for live scenarios; support for custom smooth streaming byte stream and scheme handlers; support for a new play time and position tracking plugin; added IsLiveChanged event; added AdaptivePlugin.MaxBitrate property; add...
    • WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.8: Version 2.5.0.8 (Milestone 8). This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements: .NET Framework 4.0 (the package contains a solution file for Visual Studio 2010); the unit test projects require Visual Studio 2010 Professional. Changelog legend: [B] breaking change; [O] marked member as obsolete. WAF: mark the class DataModel as serializable. InfoMan: minor improvements. InfoMan: add unit tests for all modules. Othe...
    • LogicCircuit: LogicCircuit 2.12.9.20: Logic Circuit is educational software for designing and simulating logic circuits. An intuitive graphical user interface allows you to create an unrestricted circuit hierarchy with multi-bit buses, debug circuit behavior with an oscilloscope, and navigate the hierarchy of running circuits. Changes of this version: toolbars on the text note dialog are more flexible now; you can select the font face, size, color, and background of the text you are typing; RAM can now be initialized to one of the following: random va...

    New Projects

    • Altairis Binary Store Provider: Altairis Binary Store is a provider-based system for storing arbitrary binary data either in the file system or in Azure Blob Storage.
    • Azure Diagnostic Log Viewer: This tool helps in viewing logs written to the Azure diagnostics log tables. <MORE DETAILS COMING SOON>
    • bluequiz: A project to help individuals, organizations and schools run online multiple-choice exams and manage scores more easily.
    • Detect User Geo location: Detect site visitors' geo location by HTML5 geolocation (first) or ipinfodb.com (second choice).
    • Educards: Educational cards.
    • EuWangSSO: SSO (single sign-on) solution.
    • Free Aspx Image Gallery: This is the first basic release of my free aspx image gallery project. It is free to use and modify by the user without any need of providing any credit to me.
    • GEFe: GEFe
    • guohai's project: an open source project!
    • ISMOT - Async Helper: This is a wrapper library for Microsoft's Async Library (CTP). Using this library greatly facilitates async calls in Silverlight and Windows Phone projects.
    • LoLHQ - Personal League of Legends Assistant: Check detailed ELO ratings and statistics of your teammates. Personalize counter-pick lists, view Champion details.
    • MNK-HKM addon: Special thanks to: Kill Us Or Die Trying
    • PCV_CLINIC: my project PCV_Clinic
    • Remote Commands: A program which allows you to control a computer by sending e-mails with commands.
    • Resource File Comparer: Resource File Comparer is a quick utility to help compare the availability of required resource strings in all resource files for an application.
    • Rich Client Template: The new Rich Client Template for Visual Studio 2012.
    • Simple Microsoft Excel Document Converter (Convert To XLS, XLSX, PDF, XPS): This library is based on Microsoft.Office.Interop.Excel. It can convert Excel documents to PDF or XPS in C#.
    • TCC News: test
    • TesPro: ???
    • The Eggbert Chronicles: The Eggbert Chronicles, a 2.5D platformer built in Unity and created using JavaScript and C#.
    • Tombola XNA: A tombola (bingo) game using XNA.
    • Tools and Company: "Tools and Company" is a DLL which contains a set of useful and generic classes and functions in the C# language. I use this DLL in my other C# projects.
    • TradeIt: :)
    • usermade: user-made website
    • WIT Sync Manager 2010: Enables you to synchronize virtually all aspects of work item types and related artifacts across collections and projects in TFS 2010.
    • xebictetris: Xebic Tetris Web Test
    • xmwms: xmwms
    • XNA for Windows 8 (Windows RT): Officially there is no support for XNA Game Studio in Windows 8 (WinRT), so this is a lightweight XNA Framework made using the .NET 4.5 and Metro Style APIs.
    • ????????????? ?????????: ???????? ?????? ?? ???????? ?????????????? ??????????

    Read the article

  • Convert NSData to primitive variable with IEEE-754 or two's-complement?

    - by William GILLARD
    Hi everyone. I am a new programmer in Obj-C and Cocoa. I am trying to write a framework which will be used to read binary files (Flexible Image Transport System, or FITS, binary files, usually used by astronomers). The binary data that I am interested in extracting can have various formats, and I get its properties by reading the header of the FITS file.

    Up to now, I have managed to create a class to store the content of the FITS file and to isolate the header into an NSString object and the binary data into an NSData object. I have also managed to write methods which allow me to extract from the header the key values that are very valuable for interpreting the binary data. I am now trying to convert the NSData object into a primitive array (array of double, int, short...). But here I get stuck and would appreciate any help.

    According to the documentation I have about the FITS file, I have 5 possibilities to interpret the binary data depending on the value of the BITPIX key:

        BITPIX value | Data represented
        -------------|----------------------------------------
                   8 | Char or unsigned binary int
                  16 | 16-bit two's complement binary integer
                  32 | 32-bit two's complement binary integer
                  64 | 64-bit two's complement binary integer
                 -32 | IEEE single precision floating-point
                 -64 | IEEE double precision floating-point

    I have already written the piece of code shown below to try to convert the NSData into a primitive array.

        // self refers to my FITS class, which contains an NSString object with the
        // content of the header and an NSData object with the binary data.
        - (void *)GetArray
        {
            switch (BITPIX) {
                case 8:   return [self GetArrayOfUInt];
                case 16:  return [self GetArrayOfInt];
                case 32:  return [self GetArrayOfLongInt];
                case 64:  return [self GetArrayOfLongLong];
                case -32: return [self GetArrayOfFloat];
                case -64: return [self GetArrayOfDouble];
                default:  return NULL;
            }
        }

        // Then the method to convert the NSData into a primitive array, restricted
        // here to the 'double' case. The code is similar for the other methods;
        // just replace double with 'unsigned int' (BITPIX 8), 'short' (BITPIX 16),
        // 'int' (BITPIX 32), 'long long' (BITPIX 64) or 'float' (BITPIX -32).
        - (double *)GetArrayOfDouble
        {
            int Nelements = [self NPIXEL]; // method to extract, from the header,
                                           // the number of elements in the array
            NSLog(@"TOTAL NUMBER OF ELEMENTS [%i]\n", Nelements);

            // CREATE THE ARRAY
            double (*array)[Nelements];

            // Get the total number of bits in the binary data.
            // GCOUNT and PCOUNT are defined in the header.
            int Nbit = abs(BITPIX) * GCOUNT * (PCOUNT + Nelements);
            NSLog(@"TOTAL NUMBER OF BIT [%i]\n", Nbit);

            int i = 0;

            // FILL THE ARRAY
            double Value;
            for (int bit = 0; bit < Nbit; bit += sizeof(double)) {
                [Img getBytes:&Value range:NSMakeRange(bit, sizeof(double))];
                NSLog(@"[%i]:(%u)%.8G\n", i, bit, Value);
                (*array)[i] = Value;
                i++;
            }
            return (*array);
        }

    However, the values I print in the loop are very different from the expected values (compared using official FITS software). Therefore, I think that the Obj-C double does not use the IEEE-754 convention, and likewise that the Obj-C int is not two's-complement. I am really not familiar with these two conventions (IEEE and two's-complement) and would like to know how I can do this conversion with Obj-C. Many thanks in advance for any help or information.
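    For reference, FITS stores its binary data big-endian, while Intel Macs are little-endian, so each value generally has to be byte-swapped before being reinterpreted as a host double. A minimal sketch of that idea (the method name is hypothetical; it assumes Img is the NSData instance variable from the code above and BITPIX is -64):

        // Sketch only: read big-endian 64-bit IEEE-754 values from an NSData
        // object and return them as host-order doubles.
        - (double *)getDoublesAssumingBigEndian
        {
            NSUInteger count = [self NPIXEL];
            double *array = malloc(count * sizeof(double));
            uint64_t raw;
            for (NSUInteger i = 0; i < count; i++) {
                [Img getBytes:&raw range:NSMakeRange(i * sizeof(uint64_t), sizeof(uint64_t))];
                raw = CFSwapInt64BigToHost(raw);          // big-endian -> host byte order
                memcpy(&array[i], &raw, sizeof(double));  // reinterpret the bits as a double
            }
            return array; // caller frees
        }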

    Read the article

  • Best way to get back to using the power of lxml after having to use a regex to find something in an

    - by PyNEwbie
    I am trying to rip some text out of a large number of HTML documents (numbers in the hundreds of thousands). The documents are really forms, but they are prepared by a very large group of different organizations, so there is significant variation in how they create the document. For example, the documents are divided into chapters. I might want to extract the contents of Chapter 5 from every document so I can analyze the content of the chapter.

    Initially I thought this would be easy, but it turns out that the authors might use a set of non-nested tables throughout the document to hold the content, so that Chapter n could be displayed using td tags inside a table. Or they might use other elements such as p tags, H tags, div tags or any other block-level element. After trying repeatedly to use lxml to help me identify the beginning and end of each chapter, I have determined that it is a lot cleaner to use a regular expression, because in every case, no matter what the enclosing HTML element is, the chapter label is always in the form of >Chapter #. It is a little more complicated in that there might be some white space or non-breaking space represented in different ways (&#160; or &nbsp; or just spaces). Nonetheless, it was trivial to write a regular expression to identify the beginning of each section. (The beginning of one section is the end of the previous section.)

    But now I want to use lxml to get the text out. My thought is that I really have no choice but to walk along my string to find the close tag for the element that encloses the text I am using to find the relevant section. Here is one example where the element holding the chapter name is a div:

        <div style="DISPLAY: block; MARGIN-LEFT: 0pt; TEXT-INDENT: 0pt; MARGIN-RIGHT: 0pt" align="left">
            <font style="DISPLAY: inline; FONT-WEIGHT: bold; FONT-SIZE: 10pt; FONT-FAMILY: Times New Roman">
                Chapter 1.&#160;&#160;&#160;Our Beginnings.
            </font>
        </div>

    So I am imagining that I would begin at the location where I found the match for Chapter 1 and set up a regular expression to find the next </div|</td|</p|</h1... So at this point I have identified the type of element holding my chapter heading, and I can use the same logic to find all of the text that is within that element: that is, set up a regular expression to help me mark from >Chapter 1.&#160;&#160;&#160;Our Beginnings.<

    So I have identified where my Chapter 1 begins, and I can do the same for Chapter 2 (which is where Chapter 1 ends). Now I am imagining that I am going to snip the document, beginning at the opening of the element that indicates where Chapter 1 begins and ending just before the opening of the element that indicates where Chapter 2 begins. The string that I have identified will then be fed to lxml to use its power to get the content.

    I am going to all of this trouble because I have read over and over - never use a regular expression to extract content from HTML documents - and I have not hit on a way to be as accurate with lxml at identifying the starting and ending locations for the text I want to extract. For example, I can never be certain that the subtitle of Chapter 1 is "Our Beginnings"; it could be "Our Red Canary". Let me say that I spent two solid days trying with lxml to be confident that I had the beginning and ending elements, and I could only be accurate <60% of the time, but a very short regular expression has given me better than 95% success.

    I have a tendency to make things more complicated than necessary, so I am wondering if anyone has seen or solved a similar problem and if they had an approach (not the details, mind you) that they would like to offer.
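    For concreteness, the hybrid strategy described above might be sketched roughly like this (illustrative only: the regex, names and slicing are not tested against these filings):

        # Sketch: use a regex to locate chapter boundaries in the raw HTML,
        # then hand each slice back to lxml for the actual text extraction.
        import re
        from lxml import html

        # '>Chapter' possibly padded with spaces or non-breaking-space entities
        chapter_re = re.compile(r'>(?:\s|&#160;|&nbsp;)*Chapter\s*\d+', re.IGNORECASE)

        def chapter_texts(raw_html):
            starts = [m.start() for m in chapter_re.finditer(raw_html)]
            starts.append(len(raw_html))
            for begin, end in zip(starts, starts[1:]):
                fragment = html.fromstring(raw_html[begin:end])
                yield fragment.text_content()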

    Read the article

  • jsp getServletContext() error

    - by Reigel
    The JSP:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
            <title>Murach's Java Servlets and JSP</title>
        </head>
        <body>
            <%-- import packages and classes needed by the scripts --%>
            <%@ page import="business.*, data.*" %>
            <%
                // get parameters from the request
                String firstName = request.getParameter("firstName");
                String lastName = request.getParameter("lastName");
                String emailAddress = request.getParameter("emailAddress");

                // get the real path for the EmailList.txt file
                ServletContext sc = this.getServletContext();
                String path = sc.getRealPath("/WEB-INF/EmailList.txt");

                // use regular Java objects
                User user = new User(firstName, lastName, emailAddress);
                UserIO.add(user, path);
            %>
            <h1>Thanks for joining our email list</h1>
            <p>Here is the information that you entered:</p>
            <table cellspacing="5" cellpadding="5" border="1">
                <tr>
                    <td align="right">First name:</td>
                    <td><%= firstName %></td>
                </tr>
                <tr>
                    <td align="right">Last name:</td>
                    <td><%= lastName %></td>
                </tr>
                <tr>
                    <td align="right">Email Address:</td>
                    <td><%= emailAddress %></td>
                </tr>
            </table>
            <p>To enter another email address, click on the Back
                button in your browser or the Return button shown
                below.</p>
            <form action="index.jsp" method="post">
                <input type="submit" value="Return" />
            </form>
        </body>
        </html>

    And it's giving me this error page:

        Compilation of 'C:\bea\user_projects\domains\mydomain.\myserver.wlnotdelete\extract\myserver_sample01_WebContent\jsp_servlet__display_email_entry.java' failed:

        C:\bea\user_projects\domains\mydomain.\myserver.wlnotdelete\extract\myserver_sample01_WebContent\jsp_servlet__display_email_entry.java:140: cannot resolve symbol
        probably occurred due to an error in /display_email_entry.jsp line 19:
        ServletContext sc = this.getServletContext();

        Full compiler error(s):
        C:\bea\user_projects\domains\mydomain.\myserver.wlnotdelete\extract\myserver_sample01_WebContent\jsp_servlet__display_email_entry.java:140: cannot resolve symbol
        symbol  : method getServletContext ()
        location: class jsp_servlet.__display_email_entry
            ServletContext sc = this.getServletContext(); //[ /display_email_entry.jsp; Line:19]
                                     ^
        1 error
        Thu Jun 03 15:56:09 CST 2010

    Any hint? I'm really new to JSP, and this is my first learning practice... I can't find the answer via Google... thanks!
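    One commonly suggested workaround (a hedged sketch, untested against this exact WebLogic setup): JSP pages already expose the ServletContext through the implicit application object, so the explicit this.getServletContext() call on line 19 can simply be avoided:

        <%
            // Sketch: use the JSP implicit 'application' object, which is the
            // ServletContext, instead of calling this.getServletContext().
            String path = application.getRealPath("/WEB-INF/EmailList.txt");
        %>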

    Read the article

  • SharePoint OCR image files indexing

    Introduction

    This article describes how to set up indexing of image files (including TIFF, PDF, JPEG, BMP...) using OCR technology. The indexing described below utilizes Microsoft IFilter technology and as such is not specific to SharePoint, but can be used with any product that uses Microsoft indexing: Microsoft Search, Desktop Search, SQL Server search, and, through plug-ins, Google Desktop Search. I, however, use it with Microsoft Windows SharePoint Services 2003. For those other products, the registration may need to be slightly different.

    Background

    One of the projects I was working on required storage of old documents scanned into PDF files. A separate team of people was then responsible for providing tags for a search engine so those image documents could be found. The whole process was clumsy, labor intensive, and error prone. That was what started me on my exploration path.

    OCR

    The first search I fired was for Open Source OCR products. Pretty quickly, I narrowed it down to TESSERACT (http://code.google.com/p/tesseract-ocr/). Tesseract is an orphaned brain child of HP, which worked on it from 1985 to 1995. Then it was moved to Open Source, and now, if I understand it correctly, Google is working on it. With credentials like that, it's no wonder that Tesseract scores one of the highest marks on OCR recognition and accuracy. After downloading and struggling just a bit, I got Tesseract to work. The struggling part was that the home page claims its base input format is a TIFF file. Maybe my TIFFs were bad, but I was able to get it to work only for BMP files.

    Image file conversion

    So now that I have an OCR that can convert BMP files into text, how do I get text out of the image PDF files? One more search, and I settled on ImageMagick (http://www.imagemagick.org/). This is another wonderful Open Source utility that can convert any file into an image. It did work out of the box, converting any TIFF files into bitmaps, but to get PDF files converted, it requires GhostScript (http://mirror.cs.wisc.edu/pub/mirrors/ghost/GPL/gs864/gs864w32.exe).

    Dealing with text PDFs

    With that utility installed, I was cooking: I can convert any file (in particular PDF and TIFF) into a bitmap, and then I can extract the text out of the bitmap. The only consideration was to somehow treat PDF files containing text differently; after all, OCR is very computation intensive and somewhat error prone even with perfect image quality and resolution. So another quick search, and I have PDFTOTEXT (ftp://ftp.foolabs.com/pub/xpdf/xpdf-3.02pl4-win32.zip) - thank God for Open Source! With these guys, I can pull text out of a PDF in an eye blink. However, I would get nothing for pure image PDFs, but I already have a solution for that!

    Batch process

    It took another 15 minutes to set up a batch script to automate the process:

    1. Check the file extension.
    2. If the file is a PDF file, try to extract text out of it:
       - if there is more than a certain amount of text in the file - done!
       - if there is no text, convert the first page into a bitmap and run OCR on the bitmap.
    3. For any other file type, convert the file into a bitmap and run OCR on the bitmap.

    Once you unzip the attached project, check out the bin\OCR.BAT file. It will create a temporary file in the directory where your source file is, with the same name plus the '.txt' extension.
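    As a rough illustration, such a script might look like the sketch below (this is not the attached OCR.BAT; the tool invocations are real, but the paths and the 100-byte text threshold are invented for the example):

        @echo off
        rem Sketch of the pipeline: text PDFs go through pdftotext; image PDFs
        rem and all other file types are rasterized with ImageMagick and OCR'd.
        set SRC=%~1
        if /I "%~x1"==".pdf" (
            pdftotext "%SRC%" "%SRC%.txt"
            for %%A in ("%SRC%.txt") do if %%~zA GTR 100 goto :done
            convert "%SRC%[0]" "%TEMP%\page1.bmp"
        ) else (
            convert "%SRC%" "%TEMP%\page1.bmp"
        )
        rem Tesseract writes its result to "%SRC%.txt"
        tesseract "%TEMP%\page1.bmp" "%SRC%"
        :done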

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 19 (sys.dm_exec_query_stats)

    - by Tamarick Hill
    The sys.dm_exec_query_stats DMV is one of the most useful DMV's out there when it comes to performance tuning. If you have been keeping up with this blog series this month, you know that I started out on Day 1 reviewing many of the DMV's within the 'exec' namespace. I'm not sure how I missed this one considering how valuable it is, but hey, they say it's better late than never, right?? On Day 7 and Day 8 we reviewed sys.dm_exec_procedure_stats and sys.dm_exec_trigger_stats respectively. This sys.dm_exec_query_stats DMV is very similar to those two. As a matter of fact, this DMV will return all of the information you saw in the other two DMV's, but in addition to that, you can see stats for all queries that have cached execution plans on your server. You can even see stats for statements that are run ad hoc, as long as they are still cached in the buffer pool. To better illustrate this DMV, let's have a quick look at it:

        SELECT * FROM sys.dm_exec_query_stats

    As you can see, there is a lot of information returned from this DMV. I won't go into detail about each and every one of these columns, but I will touch on a few of them briefly. The first column is the 'sql_handle', which, if you remember from Day 4 of our blog series, you can use to extract the actual SQL text that was executed (a hedged example appears at the end of this entry). The next columns, statement_start_offset and statement_end_offset, provide you a way of extracting the exact SQL statement that was executed as part of a batch. The plan_handle column is used to extract the execution plan that was used, which we talked about during Day 5 of this blog series. Later in the result set, you have columns to identify how many times a particular statement was executed, how much CPU time it used, how many reads/writes it performed, the duration, how many rows were returned, etc. These columns provide you with a solid avenue to begin your performance optimization.

    The last column I will touch on is the query_plan_hash column. A lot of times when you have dynamic SQL running on your server, you have similar statements with different parameter values being passed in. Many times these types of statements will get similar execution plans, and a binary hash value can then be generated based on these similar plans. This query plan hash can be used to find the cost of all queries that have similar execution plans, and then you can tune based on that plan to improve the performance of all of the individual queries. This is a very powerful way of identifying and tuning ad-hoc statements that run on your server.

    As I stated earlier, this sys.dm_exec_query_stats DMV is a very powerful and recommended DMV for performance tuning. You are able to quickly identify statements that are running on your server and analyze their impact on system resources. Using this DMV to track down the biggest performance killers on your server will allow you to make the biggest gains once you focus your tuning efforts on those top offenders. For more information about this DMV, please see the Books Online link below: http://msdn.microsoft.com/en-us/library/ms189741.aspx

    Follow me on Twitter @PrimeTimeDBA
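    To make the sql_handle and offset discussion above concrete, here is one common pattern (a sketch built only from the documented columns, not taken from the original post) for listing the top CPU consumers together with their exact statement text:

        -- Sketch: recover each cached statement's text via sql_handle, trimmed
        -- to the statement within its batch using the start/end offsets.
        SELECT TOP (10)
               qs.execution_count,
               qs.total_worker_time,
               SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                   ((CASE qs.statement_end_offset
                         WHEN -1 THEN DATALENGTH(st.text)
                         ELSE qs.statement_end_offset
                     END - qs.statement_start_offset) / 2) + 1) AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_worker_time DESC;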

    Read the article

  • Fill a Flash Drive with Portable Software using Lupo PenSuite

    - by Asian Angel
    A flash drive full of portable software is helpful to have along wherever you go. The Lupo PenSuite lets you choose from three different versions to get the best fit for your everyday needs. Note: if running the full version you will need a 512 MB USB flash drive or larger.

    Using Lupo PenSuite

    The one window to watch for during the setup process is where you have the opportunity to add a specific language pack if needed. Outside of that, all you need to do is sit back and wait for the suite to be extracted. Note: extraction times will vary based on version and extraction location. Here we browsed to our flash drive to extract it to…

    Once the setup process is complete, locate and double-click the Lupo_PenSuite.exe file. This one-time window presents you with the opportunity to start using the suite immediately, or to go directly into the options.

    When the suite is active, you will have a new system tray icon that operates as a start menu button. At the bottom you can monitor the remaining room on your flash drive, and use the close button to exit the suite (it may display as a power button based on the menu theme).

    A quick look at the setup inside the suite: there is a pre-configured area for organizing and storing your personal files.

    Prefer a classic-style menu? Just select it in the options (Various tab) and enjoy a smaller, streamlined look. Note: you can also change the theme for the regular menu and add a user pic.

    The suite provides access to your portable software and online sites, so you get to enjoy the best of both, as shown in the following examples. Websites will open using the suite's portable Firefox install. VLC is ready to play your downloaded videos. The suite also has some very nice photo editing programs added in.

    Installing Additional Apps

    If one of your favorite programs is not included in the suite version, it only takes a few minutes to add it in. Go to the Additional Apps webpage, download the app(s), and extract them onto your hard drive. Note: the link for the additional apps webpage is provided below.

    Add the extracted app(s) to the MyApps folder in the suite's folder hierarchy. Click on ASuite in the suite's start menu. Drag and drop the portable app's exe file into the MyApps section in the ASuite window. Your new software's shortcut should display as shown here. Close this window when finished. Checking the suite's start menu will show your new software ready to be used.

    Conclusion

    If you need a good portable software collection to carry with you on a flash drive, then Lupo PenSuite is definitely worth taking a look at. We tested Lupo PenSuite on XP, Vista, and Windows 7 and it works great on all three. Another popular choice is PortableApps, and you can check out our review of that too; the two are essentially the same thing, each just packaged differently.

    Links

    Download Lupo PenSuite (Full, Lite, & Zero versions) *Download links approximately one-third down the page.
    Download Additional Apps for Lupo PenSuite
    Download Additional Skins for Lupo PenSuite
    Start Menu View
    Video Tutorials *Has tutorial for easy updating of entire suite.

    Read the article

  • InfiniBand Enabled Diskless PXE Boot

    - by Neeraj Gupta
    When you want to bring up a compute server in your environment and need InfiniBand connectivity, you usually go through various installation steps. This could involve an operating system like Linux, followed by a compatible InfiniBand software distribution, associated dependencies and configurations. What if you just want to run some InfiniBand diagnostics or troubleshooting tools from a test machine? What if something happened to your primary machine and, while recovering in rescue mode, you also need access to your InfiniBand network? Often we use opensource, community-supported small Linux distributions, but they don't come with the required InfiniBand support and tools. In this weblog, I am going to provide instructions on how to add InfiniBand support to a specific Linux image - Parted Magic. This is a free-to-use opensource Linux distro often used to recover or rescue machines. The distribution itself will not be changed at all. Yes, you heard it right! I have built an InfiniBand add-on package that will be passed to the default kernel and initrd to get this all working.

    Pre-requisites

    You will need to have a PXE server ready on your ethernet-based network. The compute server you are trying to PXE boot should have a compatible IB HCA and must be connected to an active IB network.

    Required Downloads

    Download the Parted Magic small distribution for PXE from the Parted Magic website. Download the InfiniBand PXE Add-On package (right click and download from here). Do not extract the contents of this file; you need to use it as is.

    Prepare PXE Server

    Extract the contents of the downloaded pmagic distribution into a temporary directory. Inside the directory structure, you will see a pmagic directory containing two files - bzImage and initrd.img. Copy this directory into your TFTP server's root directory. This is usually /tftpboot, unless you have a different setup. For example:

        cp pmagic_pxe_2012_2_27_x86_64.zip /tmp
        cd /tmp
        unzip pmagic_pxe_2012_2_27_x86_64.zip
        cd pmagic_pxe_2012_2_27_x86_64
        # ls -l
        total 12
        drwxr-xr-x  3 root root 4096 Feb 27 15:48 boot
        drwxr-xr-x  2 root root 4096 Mar 17 22:19 pmagic
        cp -r pmagic /tftpboot

    As I mentioned earlier, we don't change anything in the default pmagic distro. Simply provide the add-on package via the PXE append options. If you are using a menu-based PXE server, then add an entry to your menu. For example, /tftpboot/pxelinux.cfg/default can be appended with the following section:

        LABEL Diskless Boot With InfiniBand Support
        MENU LABEL Diskless Boot With InfiniBand Support
        KERNEL pmagic/bzImage
        APPEND initrd=pmagic/initrd.img,pmagic/ib-pxe-addon.cgz edd=off load_ramdisk=1 prompt_ramdisk=0 rw vga=normal loglevel=9 max_loop=256
        TEXT HELP
        * A Linux Image which can be used to PXE Boot w/ IB tools
        ENDTEXT

    Note: keep the line starting with "APPEND" as a single line. If you use host-specific files in pxelinux.cfg, then you can use that specific file to add the above entry.

    Boot Computer over PXE

    Now boot your desired compute machine over PXE. This does not have to be over InfiniBand; just use your standard ethernet interface and network. If using menus, pick the new entry that you created in the previous section.

    Enable IPoIB

    After a few minutes, you will be booted into the Parted Magic environment. Open a terminal session and see if InfiniBand is enabled. You can use commands like:

        ifconfig -a
        ibstat
        ibv_devices
        ibv_devinfo

    If you are connected to an InfiniBand network with an active Subnet Manager, then your IB interfaces must have come online by now. You can proceed and assign IP addresses to them. This will enable you at the IPoIB layer (a short example appears at the end of this entry).

    Example InfiniBand Diagnostic Tools

    I have added several InfiniBand diagnostic tools to this add-on. You can use any from the following list: ibstat, ibstatus, ibv_devinfo, ibv_devices, perfquery, smpquery, ibnetdiscover, iblinkinfo.pl, ibhosts, ibswitches, ibnodes.

    Wrap Up

    This concludes this weblog. Here we saw how to bring up a computer with IPoIB and InfiniBand diagnostic tools without installing anything on it. It's almost like running diskless!
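    As referenced in the IPoIB step above, a minimal illustration (the interface name and addresses are made up for this sketch):

        # Sketch only: bring up the first IPoIB interface with a static address
        ifconfig ib0 192.168.100.10 netmask 255.255.255.0 up
        # Then verify reachability of another host on the same IB partition
        ping 192.168.100.11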

    Read the article

  • HSSFS Part 2.1 - Parsing @@VERSION

    - by Most Valuable Yak (Rob Volk)
    For Part 2 of the Handy SQL Server Function Series I decided to tackle parsing useful information from the @@VERSION function, because I am an idiot. It turns out I was confused about CHARINDEX() vs. PATINDEX(), and it pretty much invalidated my original solution. All is not lost though; this mistake turned out to be informative for me, and hopefully for you.

    Referring back to the "Version" view in the prelude, I started with the following query to extract the version number:

        SELECT DISTINCT SQLVersion,
               SUBSTRING(VersionString, PATINDEX('%-%', VersionString) + 2, 12) VerNum
        FROM VERSION

    I used PATINDEX() to find the first hyphen "-" character in the string, since the version number appears 2 positions after it, and got these results:

        SQLVersion  VerNum
        ----------- ------------
        2000        8.00.2055 (I
        2005        9.00.3080.00
        2005        9.00.4053.00
        2008        10.50.1600.1

    As you can see, it was good enough for most of the values, but not for the SQL 2000 @@VERSION. You'll notice it has only 3 version sections/octets where the others have 4, and the SUBSTRING() grabbed the non-numeric characters after. Properly parsing the version number will require a non-fixed value for the 3rd parameter of SUBSTRING(), which is the number of characters to extract.

    The best value is the position of the first space to occur after the version number (VN); the trick is to figure out how to find it. Here's where my confusion about PATINDEX() came about. The CHARINDEX() function has a handy optional 3rd parameter:

        CHARINDEX (expression1, expression2 [, start_location])

    While PATINDEX():

        PATINDEX ('%pattern%', expression)

    does not. I had expected to use PATINDEX() to start searching for a space AFTER the position of the VN, but it doesn't work that way. Since there are plenty of spaces before the VN, I thought I'd try PATINDEX() on another character that doesn't appear before it, and tried "(":

        SELECT SQLVersion,
               SUBSTRING(VersionString, PATINDEX('%-%', VersionString) + 2,
                         PATINDEX('%(%', VersionString))
        FROM VERSION

    Unfortunately this messes up the length calculation and yields:

        SQLVersion  VerNum
        ----------- ---------------------------
        2000        8.00.2055 (Intel X86) Dec 16 2008 19:4
        2005        9.00.3080.00 (Intel X86) Sep 6 2009 01:
        2005        9.00.4053.00 (Intel X86) May 26 2009 14:
        2008        10.50.1600.1 (Intel X86) Apr
        2008        10.50.1600.1 (X64) Apr 2 20

    Yuck. The problem is that PATINDEX() returns a position, and SUBSTRING() needs a length, so I have to subtract the VN starting position:

        SELECT SQLVersion,
               SUBSTRING(VersionString, PATINDEX('%-%', VersionString) + 2,
                         PATINDEX('%(%', VersionString) - PATINDEX('%-%', VersionString)) VerNum
        FROM VERSION

    And the results are:

        SQLVersion  VerNum
        ----------- --------------------------------------------------------
        2000        8.00.2055 (I
        2005        9.00.4053.00 (I
        Msg 537, Level 16, State 2, Line 1
        Invalid length parameter passed to the LEFT or SUBSTRING function.

    Ummmm, whoops. Turns out SQL Server 2008 R2 includes "(RTM)" before the VN, and that causes the length to turn negative.

    So now that that blew up, I started to think about matching digit and dot (.) patterns. Sadly, a quick look at the first set of results will quickly scuttle that idea, since different versions have different digit patterns and lengths.

    At this point (which took far longer than I wanted) I decided to cut my losses and redo the query using CHARINDEX(), which I'll cover in Part 2.2 (a rough sketch follows at the end of this entry).

    So, to do a little post-mortem on this technique:

    • PATINDEX() doesn't have the flexibility to match the digit pattern of the version number;
    • PATINDEX() doesn't have a "start" parameter like CHARINDEX(), which allows us to skip over parts of the string;
    • The SUBSTRING() expression is getting pretty complicated for this relatively simple task!

    This doesn't mean that PATINDEX() isn't useful; it's just not a good fit for this particular problem. I'll include a version in the next post that extracts the version number properly.

    UPDATE: Sorry if you saw the unformatted version of this earlier, I'm on a quest to find blog software that ACTUALLY WORKS.
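    For the impatient, one possible CHARINDEX()-based rewrite (a sketch against the same "Version" view; not necessarily what Part 2.2 will present):

        -- Sketch: find the hyphen, then use CHARINDEX's start_location to find
        -- the first space AFTER the version number begins, and subtract.
        SELECT SQLVersion,
               SUBSTRING(VersionString,
                         CHARINDEX('-', VersionString) + 2,
                         CHARINDEX(' ', VersionString, CHARINDEX('-', VersionString) + 2)
                             - CHARINDEX('-', VersionString) - 2) AS VerNum
        FROM VERSION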

    Read the article

  • Oracle OpenWorld Update: Oracle GoldenGate for High Availability

    - by Doug Reid
    One of our primary themes this year for the Oracle OpenWorld sessions featuring Oracle GoldenGate is High Availability. This is a pretty wide theme, but the focus will be on ways of maximizing uptime for critical systems during planned and unplanned events. We have a number of very informative sessions dedicated to exploring this theme in detail, from deep product implementation strategies up to lessons learned by our customers when using Oracle GoldenGate to meet strict SLAs.

    We kick this track off with our Customer Panel on Zero Downtime Operations on Monday, which I overviewed in my last posting. This is followed by Comcast, who will be hosting a session at 1:45PM in Moscone West 3014. Their session will discuss using Oracle GoldenGate to reduce downtime during a database upgrade. Here's an overview:

    CON8571 - Oracle Database Upgrade with Oracle GoldenGate: Best Practices from Comcast

    Does your business demand high availability? In this session, Comcast, among the largest telecom firms in the world, shares tips on how to achieve zero downtime while upgrading to Oracle Database 11g Release 2, using a combination of Oracle technologies: Oracle Real Application Clusters (Oracle RAC), Oracle Database's Grid Infrastructure and Oracle Recovery Manager (Oracle RMAN) features, Oracle GoldenGate, and Oracle Active Data Guard. This successful upgrade took place on a mission-critical system that handles more than 60 million business requests and service calls a day. You'll also hear how Comcast leverages Oracle Advanced Customer Support Services, including an Oracle Solution Support Center, to maximize performance and availability of its Oracle technologies.

    On Tuesday, Joydip Kundu (Director of Software Development) will be presenting "Oracle GoldenGate and Oracle Data Guard: Working Together Seamlessly" at 10:15AM in Moscone South 3005. This session focuses on how both modes of Oracle GoldenGate extract (Classic and Integrated Capture) can be used with Oracle Data Guard for disaster recovery purposes or to offload extract processing.

    That afternoon at 1:15PM, Comcast takes the stage again to discuss firsthand lessons learned implementing Oracle GoldenGate in a heterogeneous, highly available environment. Here's a rundown of their session:

    CON8750 - High-Volume OLTP with Oracle GoldenGate: Best Practices from Comcast

    Does your business demand high availability in a mission-critical environment? In this session, Comcast, one of the largest telecom firms, shares best practices for leveraging Oracle GoldenGate to replicate high-volume online transaction processing data from Tandem NSK SQL/MX to Teradata. Hear critical success factors from Comcast for overall platform and component architectures as well as configuration and tuning techniques. Learn how it met the challenges of replication in a complex heterogeneous environment. You'll also hear how Comcast leverages Oracle Advanced Customer Support Services, including an Oracle Solution Support Center, to provide mission-critical support for maximized performance and availability of its Oracle environment.

    The final session on the high availability track will be hosted by Patricia Mcelroy (Distinguished Product Manager) and Stephan Haisly (Principal Member of Technical Staff). Their session (CON8401 - Tuning and Troubleshooting Oracle GoldenGate on Oracle Database) covers techniques for performance tuning and troubleshooting of Oracle GoldenGate on Oracle Database. Using various types of workloads (OLTP, batch, Oracle's PeopleSoft Enterprise), the presentation steps through the process of monitoring and troubleshooting the configuration to maximize performance and replication throughput within and between Oracle clouds.

    Join us at our sessions or stop by our demo pods in Moscone South and meet the product management and development teams.

    Read the article

  • How to Get Windows 7 Theme Wallpapers Without Installing Them

    - by Mysticgeek
    Are you using an older version of Windows but like the Windows 7 theme wallpapers? What if you have Windows 7 but don't want to install the themes just to get the wallpapers? Here is how to get them without having to install the themes. This guest article was written by Ryan Dozier from the Doztech tech blog.

    Getting the Wallpaper on XP, Vista, or Windows 7

    First download and install 7-Zip on your machine (link below). After you've installed 7-Zip, download a Windows 7 theme (link below), right-click on the theme, select 7-Zip, and Extract to "Theme Name"… A new folder will appear with the theme name on it. When you open it, there will be a folder called DesktopBackground or something similar. Open the folder to view the wallpapers for the theme. You can delete the extra files and just keep the wallpapers!

    Getting the Wallpaper on Ubuntu

    Extracting the wallpaper on Ubuntu can be a little tricky. Just follow these steps and you will be able to do it. First go to the Ubuntu Software Center under the Applications menu. Search for 7zip and click on the arrow to go to the application's entry. Find the Install button and click it. It will take a couple of minutes for 7zip to install. After 7zip installs, close the Ubuntu Software Center and download a Windows 7 theme. Store it somewhere you can access it quickly. Right-click on the theme, click Rename, and replace the themepack extension with zip. The file should be "Theme Name.zip" after you rename it. Right-click on the theme and click Extract Here. After the extraction you will have a new folder with the theme name. Open it and go into the DesktopBackground folder to get the wallpapers. You can delete the extra files and just keep the wallpapers.

    If you want to get the new Windows 7 theme wallpapers, but don't want to search for and install them separately, this is a nice workaround.

    Links

    Get 7-Zip for Windows here
    Get Windows 7 Themes here
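    Incidentally, the same unpacking can be scripted with 7-Zip's command-line tool on either platform (the file name is illustrative; 7z detects the archive type from the file's contents, so no renaming is needed):

        7z x "Theme Name.themepack" -o"Theme Name"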

    Read the article

  • MEB: Taking Incremental Backup using last successful backup

    - by Sagar Jauhari
    Introduction

    In MySQL Enterprise Backup v3.7.0 (MEB 3.7.0) a new option '--incremental-base' was introduced. Using this option a user can take an incremental backup without specifying the '--start-lsn' option. A description of this option can be found here. Instead of '--start-lsn' the user can provide the location of the last full backup or incremental backup using the 'dir:' prefix. MEB extracts the end LSN of this backup from the mysql.backup_history table as well as the backup_variables.txt file (for verification) to use it as the start LSN of the incremental backup.

    Because of popular demand, in MEB 3.7.1 the option '--incremental-base' has been extended further. The idea is to allow the user to take an incremental backup as easily as possible using the '--incremental-base' option. With the new option MEB queries the backup_history table for the last successful backup and uses its end LSN as the start LSN for the new incremental backup. It should be noted that the last successful backup is used irrespective of the location of the backup.

    Details

    A new prefix 'history:' has been introduced for the --incremental-base option, and currently the only permissible value is the string "last_backup". So using the new option an incremental backup can be taken with the following command:

        $ mysqlbackup --incremental --incremental-backup-dir=/media/mysqlbackup-repo/ --incremental-base=history:last_backup backup

    When MEB attempts to extract the end LSN of the last successful backup from the mysql.backup_history table, it also scans the corresponding backup destination for the old backup and tries to read the meta files at this backup destination. If a valid backup still exists at the backup destination and the meta files can be read, MEB compares the end LSN found in the mysql.backup_history table with the end LSN found in the backup meta files of the old backup. Assuming that the host MySQL server is alive and mysql.backup_history can be accessed by MEB, the behaviour of MEB with respect to verification of the old end LSN can be summarized as follows, where 'BD' is the backup destination of the last successful backup in the mysql.backup_history table and 'BHT' is the mysql.backup_history table:

        if can_read_files_at_BD:
            if end_lsn_found_at_BD == end_lsn_of_last_backup_in_BHT:
                continue_with_backup()
            else:
                return_with_error()
        else:
            continue_with_backup()

    Advantages

    Apart from ease of use, an important advantage of this option is that the user can do repeated incremental backups without changing the command line. This is possible using the '--with-timestamp' option along with this new option. For example, the following command can be used to perform successive incremental backups in the directory /media/mysqlbackup-repo:

        $ mysqlbackup --with-timestamp --incremental --incremental-backup-dir=/media/mysqlbackup-repo/ --incremental-base=history:last_backup backup

    Limitations

    The option '--incremental-base=history:last_backup':

    • should not be used when the user takes different kinds of concurrent backups on the same MySQL server (say, different partial backups at multiple locations);
    • should not be used after any temporary or experimental backups performed on the server (which were successful!);
    • needs to be used with precaution, since any intermediate successful backup without --no-connection will be used as the base backup for the next incremental backup;
    • will give an error in case a valid backup exists at the location of the last successful backup whose end LSN is different from that of the last successful backup found in the backup_history table.

    Read the article

  • How to loop over arrays which have different data for each array

    - by Suriani Salleh
    I have this XML file. I need to convert it from XML to MySQL. If it had only one array then I would know how to do it. Now my question is how to extract these two arrays. Each array will have different data; for example, for the first array, pmIntervalTxEthMaxUtilization data: 0,74,0,0,48, and for the second array, pmIntervalRxPowerLevel data: -79,-68,-52 and pmIntervalTxPowerLevel data: 13,11,-55. Can someone help guide me how to write PHP code to extract this XML file into MySQL?

        <mi>
            <mts>20130618020000</mts>
            <gp>900</gp>
            <mt>pmIntervalRxUndersizedFrames</mt>   [this is the first array]
            <mt>pmIntervalRxUnicastFrames</mt>
            <mt>pmIntervalTxUnicastFrames</mt>
            <mt>pmIntervalRxEthMaxUtilization</mt>
            <mt>pmIntervalTxEthMaxUtilization</mt>
            <mv>
                <moid>port:1:3:23-24</moid>
                <sf>FALSE</sf>
                <r>0</r>   [the data for the 1st array I want to insert in the DB]
                <r>0</r>
                <r>0</r>
                <r>5</r>
                <r>0</r>
            </mv>
        </mi>
        <mi>
            <mts>20130618020000</mts>
            <gp>900</gp>
            <mt>pmIntervalRxSES</mt>   [this is the second array]
            <mt>pmIntervalRxPowerLevel</mt>
            <mt>pmIntervalTxPowerLevel</mt>
            <mv>
                <moid>client:1:3:23-24</moid>
                <sf>FALSE</sf>
                <r>0</r>   [the data for the 2nd array I want to insert in the DB]
                <r>-79</r>
                <r>13</r>
            </mv>
        </mi>

    This is the code for one array that I wrote. I don't know how to write code for two arrays, because the fields appear two times and have different data values for each array:

        // Loop through the specified xpath
        foreach ($xml->mi->mv as $subchild) {
            $port_no = $subchild->moid;
            $rx_ses = $subchild->r[0];
            $rx_es = $subchild->r[1];
            $tx_power = $subchild->r[10];
            // dump into database;
            ...........................
        }

    I have done a little research on it; this is the outcome:

        $i = 0;
        while ($i < 5) {
            // Loop through the specified xpath
            foreach ($xml->md->mi->mv as $subchild) {
                $port_no = $subchild->moid;
                $rx_uni = $subchild->r[10];
                $tx_uni = $subchild->r[11];
                $rx_eth = $subchild->r[16];
                $tx_eth = $subchild->r[17];
                // dump into database;
                ..............................
                $i++;
                if ($i == 5) break;
            }
        }

        // Loop through the specified xpath
        foreach ($xml->mi->mv as $subchild) {
            $port_no = $subchild->moid;
            $rx_ses = $subchild->r[0];
            $rx_es = $subchild->r[1];
            $tx_power = $subchild->r[10];
            // dump into database;
            .......................
        }
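    For illustration, one way to handle both <mi> blocks with a single loop (a hedged sketch: the file, table and column names are invented, and it assumes the <mi> elements sit directly under the XML root as in the snippet above) is to pair each <mt> counter name with the <r> value at the same position:

        <?php
        // Sketch: one pass over every <mi> block; no special-casing needed
        // because counter names and values are matched by index.
        $xml = simplexml_load_file('data.xml');
        foreach ($xml->mi as $mi) {
            $counters = array();
            foreach ($mi->mt as $mt) {
                $counters[] = (string) $mt;
            }
            foreach ($mi->mv as $mv) {
                $port_no = (string) $mv->moid;
                for ($i = 0; $i < count($counters); $i++) {
                    $value = (string) $mv->r[$i];
                    // e.g. INSERT INTO measurements (port, counter, value)
                    //      VALUES ('$port_no', '{$counters[$i]}', '$value')
                }
            }
        }
        ?>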

    Read the article

  • Big Data – Interacting with Hadoop – What is Sqoop? – What is Zookeeper? – Day 17 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of Pig and Pig Latin in the Big Data story. In this article we will understand what Sqoop and Zookeeper are in the Big Data story. There are two important components one should learn about when learning to interact with Hadoop - Sqoop and Zookeeper.

    What is Sqoop?

    Most businesses store their data in RDBMS as well as other data warehouse solutions. They need a way to move data to the Hadoop system for various processing and to return it back to RDBMS from the Hadoop system. The data movement can happen in real time or at various intervals in bulk. We need a tool which can help us move this data from SQL to Hadoop and from Hadoop to SQL. Sqoop (SQL to Hadoop) is such a tool: it extracts data from non-Hadoop data sources, transforms them into a format which Hadoop can use, and later loads them into HDFS. Essentially it is an ETL tool, where it Extracts, Transforms and Loads from SQL to Hadoop. The best part is that it also extracts data from Hadoop and loads it into non-SQL (or RDBMS) data stores. Essentially, Sqoop is a command-line tool which does SQL to Hadoop and Hadoop to SQL. It is a command-line interpreter which creates MapReduce jobs behind the scenes to import data from an external database to HDFS. It is a very effective and easy-to-learn tool for nonprogrammers.

    What is Zookeeper?

    ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. In other words, Zookeeper is a replicated synchronization service with eventual consistency. In simpler words: in a Hadoop cluster there are many different nodes and one node is the master. Let us assume that the master node fails for any reason. In this case, the role of the master node has to be transferred to a different node. The main role of the master node is managing the writers, as that task requires persistence in the order of writing. In this kind of scenario Zookeeper will assign a new master node and make sure that the Hadoop cluster performs without any glitch. Zookeeper is Hadoop's method of coordinating all the elements of these distributed systems. Here are a few of the tasks which Zookeeper is responsible for:

    • Zookeeper manages the entire workflow of starting and stopping various nodes in the Hadoop cluster.
    • When any process in the Hadoop cluster needs certain configuration to complete its task, Zookeeper makes sure that the node gets the necessary configuration consistently.
    • In case the master node fails, Zookeeper can assign a new master node and make sure the cluster works as expected.

    There are many other tasks Zookeeper performs for Hadoop cluster coordination and communication. Basically, without the help of Zookeeper it is not possible to design any new fault-tolerant distributed application.

    Tomorrow

    In tomorrow's blog post we will discuss a very important component of the Big Data ecosystem - Big Data Analytics.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
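    To give a flavor of the command line, here is a hedged sketch of typical Sqoop invocations (the connection string, credentials, table and directory names are invented for illustration):

        # Import an RDBMS table into HDFS
        sqoop import --connect jdbc:mysql://dbhost/sales --username etl -P \
              --table orders --target-dir /user/etl/orders --num-mappers 4

        # Export processed results from HDFS back into an RDBMS table
        sqoop export --connect jdbc:mysql://dbhost/sales --username etl -P \
              --table order_summary --export-dir /user/etl/order_summary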

    Read the article

  • ReSharper 8.0 EAP now available

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/06/28/resharper-8.0-eap-now-available.aspx

    JetBrains have just released ReSharper 8.0 Beta on their Early Access Programme at http://www.jetbrains.com/resharper/whatsnew/

    ReSharper 8.0 comes with the following new features:

    • Support for Visual Studio 2013 Preview. Yes, ReSharper is known to work well with the fresh preview of Visual Studio 2013, and if you have already started digging into it, ReSharper 8.0 Beta is ready for the challenge.
    • Faster code fixes. Thanks to the new Fix in Scope feature, you can choose to batch-fix some of the code issues that ReSharper detects in the scope of a project or the whole solution. Supported fixes include removing unused directives and redundant casts.
    • Project dependency viewer. ReSharper is now able to visualize a project dependency graph for a bird's eye view of dependencies within your solution, all without compiling anything!
    • Multifile templates. ReSharper's file templates can now be expanded to generate more than one file. For instance, this is handy for generating pairs of a main logic class and a class for extensions, or sets of partial files.
    • Navigation improvements. These include a new action called Go to Everything to let you search for a file, type or method name from the same input box; support for line numbers in navigation actions; a new tool window called Assembly Explorer for browsing through assemblies; and two more contextual navigation actions: Navigate to Generic Substitutions and Navigate to Assembly Explorer.
    • New solution-wide refactorings. The set of fresh refactorings is headlined by the highly requested Move Instance Method, to move methods between classes without making them static. In addition, there are Inline Parameter and Pull Parameter. Last but not least, we're also introducing 4 new XAML-specific refactorings!
    • Extraordinary XAML support. A plethora of new and improved functionality for all developers working with XAML code includes dedicated grid inspections and quick-fixes; Extract Style, Extract, Move and Inline Resource refactorings; atomic renaming of dependency properties; and a lot more.
    • More accessible code completion. ReSharper 8 makes more of its IntelliSense magic available in automatic completion lists, including extension methods and an option to import a type. We're also introducing double completion, which gives you additional completion items when you press the corresponding shortcut for the second time.
    • A new level of extensibility. With the new NuGet-based Extension Manager, discovering, installing and uninstalling ReSharper extensions becomes extremely easy in Visual Studio 2010 and higher. When we say extensions, we mean not only full-fledged plug-ins but also sets of templates or SSR patterns that can now be shared much more easily.
    • CSS support improvements. Smarter usage search for CSS attributes, new CSS-specific code inspections, configurable support for CSS3 and earlier versions, compatibility checks against popular browsers - there's a rough outline of what's new for CSS in ReSharper 8.
    • A command-line version of ReSharper. ReSharper 8 goes beyond Visual Studio: we now provide a free standalone tool with hundreds of ReSharper inspections, and additionally a duplicate code finder, that you can integrate with your CI server or version control system.
    • Multiple minor improvements in areas such as decompiling and code formatting, as well as support for the Blue Theme introduced in Visual Studio 2012 Update 2.

    Read the article

  • how to use nokogiri methods .xpath & .at_xpath

    - by Radek
    I'm learning how to use nokogiri and a few questions came to me based on the code below:

        require 'rubygems'
        require 'mechanize'

        post_agent = WWW::Mechanize.new
        post_page = post_agent.get('http://www.vbulletin.org/forum/showthread.php?t=230708')

        puts "\nabsolute path with tbody gives nil"
        puts post_page.parser.xpath('/html/body/div/div/div/div/div/table/tbody/tr/td/div[2]').xpath('text()').to_s.strip.inspect

        puts "\n.at_xpath gives an empty string"
        puts post_page.parser.at_xpath("//div[@id='posts']/div/table/tr/td/div[2]").at_xpath('text()').to_s.strip.inspect

        puts "\ntwo lines solution with .at_xpath gives an empty string"
        rows = post_page.parser.xpath("//div[@id='posts']/div/table/tr/td/div[2]")
        puts rows[0].at_xpath('text()').to_s.strip.inspect

        puts
        puts "two lines working code"
        rows = post_page.parser.xpath("//div[@id='posts']/div/table/tr/td/div[2]")
        puts rows[0].xpath('text()').to_s.strip

        puts "\none line working code"
        puts post_page.parser.xpath("//div[@id='posts']/div/table/tr/td/div[2]")[0].xpath('text()').to_s.strip

        puts "\nanother one line code"
        puts post_page.parser.at_xpath("//div[@id='posts']/div/table/tr/td/div[2]").xpath('text()').to_s.strip

        puts "\none line code with full path"
        puts post_page.parser.xpath("/html/body/div/div/div/div/div/table/tr/td/div[2]")[0].xpath('text()').to_s.strip

    My questions: Is it better to use // or / in XPath? (@AnthonyWJones says that the use of an unprefixed // is not such a good idea.) I had to remove tbody from any working XPath, otherwise I got a nil result; how can removing an element from the XPath make things work? Do I have to call .xpath twice to extract the data when not using the full path? And why can't I get .at_xpath to extract the data? It works nicely here; what is the difference?

    Read the article

  • OCR with Neural network: data extraction

    - by Sebastian Hoitz
    I'm using the AForge framework and its neural network library. At the moment, when I train my network I create lots of images (one image per letter per font) at a big size (30 pt), cut out the actual letter, scale it down to a smaller size (10x10 px) and then save it to my hard disk. I can then go and read all those images, creating my double[] arrays with data. At the moment I do this on a per-pixel basis. Once I have successfully trained my network, I test it by letting it run on a sample image with the alphabet at different sizes (uppercase and lowercase). But the result is not really promising. I trained the network until RunEpoch reported an error of about 1.5 (so almost no error), but there are still some letters that do not get identified correctly in my test image. Now my question is: is this caused by a faulty learning method (pixel-based, versus the suggested use of receptors in this article: http://www.codeproject.com/KB/cs/neural_network_ocr.aspx; are there other methods I can use to extract the data for the network?), or can this happen because my segmentation algorithm for extracting the letters from the image is bad? Does anyone have ideas on how to improve it?
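    For reference, here is a minimal C# sketch of the pixel-based extraction step the question describes; the [-1, 1] brightness mapping and all names are illustrative assumptions, not code from the question:

        using System;
        using System.Drawing;

        static class GlyphFeatures
        {
            // Scales a cropped letter to 10x10 and maps each pixel's
            // brightness into [-1, 1] (dark pixels toward +1, light toward -1),
            // producing the 100-element double[] used as network input.
            public static double[] Extract(Bitmap letter)
            {
                using (var small = new Bitmap(letter, new Size(10, 10)))
                {
                    var input = new double[100];
                    for (int y = 0; y < 10; y++)
                        for (int x = 0; x < 10; x++)
                        {
                            float b = small.GetPixel(x, y).GetBrightness(); // 0..1
                            input[y * 10 + x] = 1.0 - 2.0 * b;
                        }
                    return input;
                }
            }
        }

    The receptor approach from the linked article replaces this raw grid with a set of line-segment receptors that record whether each segment intersects the glyph, which that article argues is more robust than per-pixel input.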

    Read the article

  • HtmlAgilityPack giving problems with malformed html

    - by Kapil
    I want to extract meaningful text out of an HTML document and I was using html-agility-pack for the same. Here is my code:

        string convertedContent = HttpUtility.HtmlDecode(ConvertHtml(HtmlAgilityPack.HtmlEntity.DeEntitize(htmlAsString)));

    ConvertHtml:

        public string ConvertHtml(string html)
        {
            HtmlDocument doc = new HtmlDocument();
            doc.LoadHtml(html);
            StringWriter sw = new StringWriter();
            ConvertTo(doc.DocumentNode, sw);
            sw.Flush();
            return sw.ToString();
        }

    ConvertTo:

        public void ConvertTo(HtmlAgilityPack.HtmlNode node, TextWriter outText)
        {
            string html;
            switch (node.NodeType)
            {
                case HtmlAgilityPack.HtmlNodeType.Comment:
                    // don't output comments
                    break;
                case HtmlAgilityPack.HtmlNodeType.Document:
                    foreach (HtmlNode subnode in node.ChildNodes)
                    {
                        ConvertTo(subnode, outText);
                    }
                    break;
                case HtmlAgilityPack.HtmlNodeType.Text:
                    // script and style must not be output
                    string parentName = node.ParentNode.Name;
                    if ((parentName == "script") || (parentName == "style"))
                        break;
                    // get text
                    html = ((HtmlTextNode)node).Text;
                    // is it in fact a special closing node output as text?
                    if (HtmlNode.IsOverlappedClosingElement(html))
                        break;
                    // check the text is meaningful and not a bunch of whitespace
                    if (html.Trim().Length > 0)
                    {
                        outText.Write(HtmlEntity.DeEntitize(html) + " ");
                    }
                    break;
                case HtmlAgilityPack.HtmlNodeType.Element:
                    switch (node.Name)
                    {
                        case "p":
                            // treat paragraphs as crlf
                            outText.Write("\r\n");
                            break;
                    }
                    if (node.HasChildNodes)
                    {
                        foreach (HtmlNode subnode in node.ChildNodes)
                        {
                            ConvertTo(subnode, outText);
                        }
                    }
                    break;
            }
        }

    Now in some cases, when the HTML pages are malformed (for example, the page http://rareseeds.com/cart/products/Purple_of_Romagna_Artichoke-646-72.html has a malformed meta tag, <meta content="text/html; charset=uft-8" http-equiv="Content-Type"> [note "uft" instead of "utf"]), my code is puking at the time I am trying to load the HTML document. Can someone suggest how I can cope with these malformed HTML pages and still extract the relevant text out of them? Thanks, Kapil
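    One defensive workaround, sketched below under stated assumptions rather than taken from the question, is to normalize the bogus charset token before handing the markup to Html Agility Pack. The regex only covers the "uft-8" typo seen on the page above, and OptionReadEncoding only matters when loading from a stream rather than a string:

        using System.Text.RegularExpressions;
        using HtmlAgilityPack;

        static class TolerantLoader
        {
            public static HtmlDocument Load(string rawHtml)
            {
                // Repair the known-bad charset declaration before parsing;
                // extend the pattern as other typos turn up.
                string cleaned = Regex.Replace(
                    rawHtml, @"charset\s*=\s*uft-8", "charset=utf-8",
                    RegexOptions.IgnoreCase);

                var doc = new HtmlDocument();
                doc.OptionReadEncoding = false; // don't trust the declared charset
                doc.LoadHtml(cleaned);
                return doc;
            }
        }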

    Read the article

  • Complex regex question, data may or may not be in brackets

    - by martinpetts
    I need to extract data from a source that presents it in one of two ways. The data could be formatted like this:

        Francis (Lab) 18,077 (60.05%); Waller (LD) 4,140 (13.75%); Evans (PC) 3,545 (11.78%); Rees-Mogg (C) 3,064 (10.18%); Wright (Veritas) 768 (2.55%); La Vey (Green) 510 (1.69%)

    Or like this:

        Lab 8,994 (33.00%); C 7,924 (29.07%); LD 5,197 (19.07%); PC 3,818 (14.01%); Others 517 (1.90%); Green 512 (1.88%); UKIP 296 (1.09%)

    The data I need to extract is the percentage and the party (these are election results); the party is either in brackets (first example) or is the only non-numeric text. So far I have this:

        preg_match('/(.*)\(([^)]*)%\)/', $value, $match);

    which gives me the following matches (for the first example):

        Array
        (
            [0] => Francis (Lab) 18,077 (60.05%)
            [1] => Francis (Lab) 18,077
            [2] => 60.05
        )

    So I have the percentage, but I also need the party label, which may or may not be in brackets and may or may not be the only text. Can anyone help?

    Read the article

  • UML class diagram vs ER database diagram

    - by salva84
    Hi, I'm a little confused. I'm developing a program that consists of two parts, the server and the clients; groups, places, messages and so on are stored on the server, and the clients have to connect to it. I've designed the use case diagram, the activity diagrams, and the class diagram. The thing is, I want to implement the server's storage as MySQL tables holding the users, groups, places... and users in groups... so I've designed an E-R diagram consisting of 6 tables. The problem is that my class diagram and my E-R diagram look too similar. I think I'm not doing things right, because I have a class for practically each table, and when I have to extract all the users in my system, do I have to convert all the rows into objects first, and write to the database for each modified object? The easy choice for me would be to base the whole application on the database alone and write one class to extract and insert data into it, but I have to follow the UML specification, and I'm a little confused about what to do with the class diagram, because the books I have read say I have to create a class for each "entity" of my program. Thank you.
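    For illustration only, here is a minimal C# sketch of the "class per entity plus a mapper" shape being discussed. All table, column and class names are invented, and it assumes the MySQL Connector/NET library:

        using System.Collections.Generic;
        using MySql.Data.MySqlClient;

        class User
        {
            public int Id;
            public string Name;
        }

        class UserRepository
        {
            private readonly string connStr;
            public UserRepository(string connStr) { this.connStr = connStr; }

            // The row-to-object conversion the question asks about:
            // each row of the hypothetical users table becomes one User.
            public List<User> GetAllUsers()
            {
                var users = new List<User>();
                using (var conn = new MySqlConnection(connStr))
                using (var cmd = new MySqlCommand("SELECT id, name FROM users", conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            users.Add(new User { Id = reader.GetInt32(0), Name = reader.GetString(1) });
                }
                return users;
            }
        }

    In UML terms, User belongs on the class diagram while the users table belongs on the E-R diagram; for simple CRUD-style domains it is normal, not a design error, for the two models to look alike.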

    Read the article

  • Encrypting with Perl CBC and decrypting with PHP mcrypt

    - by Ed
    I have an encrypted string that was encrypted with Perl Crypt::CBC (Rijndael, CBC). The original plaintext was encrypted with the encrypt_hex() method of Crypt::CBC:

        $encrypted_string = '52616e646f6d49567b2c89810ceddbe8d182c23ba5f6562a418e318b803a370ea25a6a8cbfe82bc6362f790821dce8441a790a7d25d3d9ea29f86e6685d0796d';

    I have the 32-character key that was used. mcrypt is successfully compiled into PHP, but I'm having a very hard time trying to decrypt the string in PHP; I keep getting gibberish back. If I convert the hex back to binary with pack('H*', $encrypted_string), I see 'RandomIV' followed by what looks like binary. I can't seem to correctly extract the IV and separate the actual encrypted message. I know I'm not providing much information, but I'm not sure where else to start.

        $cipher = 'rijndael-256';
        $cipher_mode = 'cbc';
        $td = mcrypt_module_open($cipher, '', $cipher_mode, '');
        $key = '32 characters'; // Does this need to be converted to something else before being passed?
        $iv = ??    // Not sure how to extract this from $encrypted_string.
        $token = ?? // Should be a sub-string of $encrypted_string, correct?
        mcrypt_generic_init($td, $key, $iv);
        $clear = rtrim(mdecrypt_generic($td, $token), '');
        mcrypt_generic_deinit($td);
        mcrypt_module_close($td);
        echo $clear;

    Any help, or pointers in the right direction, would be greatly appreciated. Let me know if I need to provide more information.

    Read the article

  • extracting RDL data using LINQ

    - by BobC
    I'm working with some SQL Report definition files (RDLs), using LINQ to extract the component query statements for validation. I'm trying to extract the <DataSet> elements from under the <DataSets> element, but I seem to be getting hung up on one of the elements under <DataSet><Fields><Field>, which has a namespace qualifier: <rd:TypeName>. I've been using LINQ to XML for other parts of the files, where there are no namespace qualifiers, with no trouble, by specifying a default namespace. The RDL specifies two namespaces:

        xmlns="http://schemas.microsoft.com/sqlserver/reporting/2005/01/reportdefinition"
        xmlns:rd="http://schemas.microsoft.com/SQLServer/reporting/reportdesigner"

    When I try to get the <DataSets> element, however, I get the following error:

        System.Xml.XmlException - The ':' character, hexadecimal value 0x3A, cannot be included in a name.

    I know it has to do with the namespace qualifier (rd:) in one of the child elements, but I'm having difficulty getting a LINQ expression that works. Any help would be appreciated. Thanks!
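    For what it's worth, the usual way around that error in LINQ to XML is to build XName values from an XNamespace rather than writing "rd:TypeName" as a string. A minimal sketch using the two namespaces quoted above (the file path is a placeholder):

        using System;
        using System.Xml.Linq;

        class RdlFields
        {
            static void Main()
            {
                XNamespace d = "http://schemas.microsoft.com/sqlserver/reporting/2005/01/reportdefinition";
                XNamespace rd = "http://schemas.microsoft.com/SQLServer/reporting/reportdesigner";

                XDocument rdl = XDocument.Load("report.rdl"); // placeholder path

                // Field elements live in the default namespace;
                // their TypeName child is in the rd namespace.
                foreach (XElement f in rdl.Descendants(d + "Field"))
                {
                    Console.WriteLine("{0} : {1}",
                        (string)f.Attribute("Name"),
                        (string)f.Element(rd + "TypeName"));
                }
            }
        }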

    Read the article

  • How to get around batch file processing limit

    - by Patrick Cuff
    I have a Windows batch file that processes all the files in a given directory. I have 206,783 files I need to process:

        for %%f in (*.xml) do call :PROCESS %%f
        goto :STOP

        :PROCESS
        :: do something with the file
        program.exe %1 > %1.new
        set /a COUNTER=%COUNTER%+1
        goto :EOF

        :STOP
        @echo %COUNTER% files processed

    When I run the batch file, the following output is written:

        65535 files processed

    As part of the processing, an output file is created for each file processed, with a .new extension. When I do a dir *.new it reports that 65,535 files exist. So it appears my command environment has a hard limit on the number of files it can recognize, and that limit is 64K - 1. Is there a way to extend the command environment to manage more than 64K - 1 files? If not, would a VBScript or JavaScript be able to process all 206,783 files? I'm running on Windows 2003 Server, Enterprise Edition, 32-bit.

    UPDATE: It looks like the root cause of my issue was the built-in Windows "extract" command for ZIP files. The files I have to process were copied from another system via a ZIP file. My server doesn't have a ZIP utility installed, just the native Windows commands. I right-clicked on the ZIP file and did an "Extract all...", which apparently extracted only the first 65,535 files. I downloaded and installed 7-Zip onto my server, unzipped all the files, and my batch script worked as intended.

    Read the article

  • Reading binary data from serial port using Dejan TComport Delphi component

    - by johnma
    Apologies for this question, but I am a bit of a noob with Delphi. I am using the Dejan TComPort component to get data from a serial port. A box of equipment connected to the port sends about 100 bytes of binary data to the serial port. What I want to do is extract the bytes as numerical values into an array so that I can perform calculations on them. TComPort has a method Read(Buffer, Count) which reads data from the input buffer:

        function Read(var Buffer; Count: Integer): Integer;

    The help says the Buffer variable must be large enough to hold Count bytes, but does not provide any example of how to use this function. I can see that the Count variable holds the number of bytes received, but I can't find a way to access the bytes in Buffer. TComPort also has a method ReadStr, which reads data from the input buffer into a string variable:

        function ReadStr(var Str: String; Count: Integer): Integer;

    Again the Count variable shows the number of bytes received, and I can use Memo1.Text := Str to display some information, but obviously Memo1 has problems displaying the control characters. I have tried various ways to extract the byte data from Str, but so far without success. I am sure it must be easy. Here's hoping.

    Read the article

  • Extracting images from a PDF

    - by sagar
    My Query: I want to extract only the images from a PDF document, using Objective-C in an iPhone application.

    My Efforts: I have gone through the info on this link, which has details regarding the different operators in PDF documents. I also studied this document from Apple about PDF parsing with Quartz, and I went through the entire PDF reference document from the Adobe site. According to that document, for each image there are the following operators: q, Q, BI, EI. I have created an operator table to get at the images:

        myTable = CGPDFOperatorTableCreate();
        CGPDFOperatorTableSetCallback(myTable, "q", arrayCallback2);
        CGPDFOperatorTableSetCallback(myTable, "TJ", arrayCallback);
        CGPDFOperatorTableSetCallback(myTable, "Tj", stringCallback);

    I use this method to get the image:

        void arrayCallback2(CGPDFScannerRef inScanner, void *userInfo)
        {
            // THIS DOESN'T WORK
            // CGPDFStreamRef stream; // represents a sequence of bytes
            // if (CGPDFDictionaryGetStream(d, "BI", &stream)) {
            //     CGPDFDataFormat t = CGPDFDataFormatJPEG2000;
            //     CFDataRef data = CGPDFStreamCopyData(stream, &t);
            // }
        }

    This method is called for the "q" operator, but I don't know how to extract an image from it. What should the solution be for extracting the images from PDF documents? Thanks in advance for your kind help.

    Read the article
