Search Results

Search found 4045 results on 162 pages for 'rss reader'.

Page 54 of 162

  • Are Vala and desktopcouch ready?

    - by pavolzetor
    Hi, I have started writing an RSS reader in Vala, but I don't know which database system I should use. I cannot connect to CouchDB; SQLite works fine, but I would like to use CouchDB because of Ubuntu One. I have Natty with the latest updates.

        public CouchDB.Session session;
        public CouchDB.Database db;
        public string feed_table = "feed";
        public string item_table = "item";

        public struct field { string name; string val; }

        // constructor
        public Database() {
            try { this.session = new CouchDB.Session(); }
            catch (Error e) { stderr.printf ("%s a\n", e.message); }

            try { this.db = new CouchDB.Database (this.session, "test"); }
            catch (Error e) { stderr.printf ("%s a\n", e.message); }

            try { this.session.get_database_info("test"); }
            catch (Error e) { stderr.printf ("%s aa\n", e.message); }

            try {
                var newdoc = new CouchDB.Document ();
                newdoc.set_boolean_field ("awesome", true);
                newdoc.set_string_field ("phone", "555-VALA");
                newdoc.set_double_field ("pi", 3.14159);
                newdoc.set_int_field ("meaning_of_life", 42);
                this.db.put_document (newdoc); // store document
            } catch (Error e) { stderr.printf ("%s aaa\n", e.message); }
        }

    Running it reports:

        $ ./xml_parser rss.xml
        Cannot connect to destination (127.0.0.1) aa
        Cannot connect to destination (127.0.0.1) aaa

    Read the article

  • From Binary to Data Structures

    - by Cédric Menzi
    Table of Contents: Introduction, PE file format and COFF header, COFF file header, BaseCoffReader, Byte4ByteCoffReader, UnsafeCoffReader, ManagedCoffReader, Conclusion, History. This article is also available on CodeProject.

    Introduction
    Sometimes you want to parse well-formed binary data and bring it into your objects to do some dirty stuff with it. In the Windows world, most data structures are stored in a special binary format. Either we call a WinAPI function or we want to read from special files like images, spool files, executables, or maybe the previously announced Outlook Personal Folders File. Most specifications for these files can be found in the MSDN Library: Open Specification. In my example, we are going to get the COFF (Common Object File Format) file header from a PE (Portable Executable). The exact specification can be found here: PECOFF.

    PE file format and COFF header
    Before we start, we need to know how this file is formatted. The following figure shows an overview of the Microsoft PE executable format. Source: Microsoft. Our goal is to get the PE header. As we can see, the image starts with an MS-DOS 2.0 header, which is not important for us. From the documentation we can read "...After the MS DOS stub, at the file offset specified at offset 0x3c, is a 4-byte...". With this information we know our reader has to jump to location 0x3c and read the offset to the signature. The signature is always 4 bytes and ensures that the image is a PE file. The signature is: PE\0\0. To prove this, we first seek to offset 0x3c and check whether the file contains the signature. We need to declare some constants, because we do not want magic numbers.

        private const int PeSignatureOffsetLocation = 0x3c;
        private const int PeSignatureSize = 4;
        private const string PeSignatureContent = "PE";

    Then we need a method for moving the reader to the correct location to read the offset of the signature. With this method we always move the underlying Stream of the BinaryReader to the start location of the PE signature.

        private void SeekToPeSignature(BinaryReader br)
        {
            // seek to the offset for the PE signature
            br.BaseStream.Seek(PeSignatureOffsetLocation, SeekOrigin.Begin);
            // read the offset
            int offsetToPeSig = br.ReadInt32();
            // seek to the start of the PE signature
            br.BaseStream.Seek(offsetToPeSig, SeekOrigin.Begin);
        }

    Now we can check whether it is a valid PE image by testing whether the next 4 bytes contain the content PE.

        private bool IsValidPeSignature(BinaryReader br)
        {
            // read 4 bytes to get the PE signature
            byte[] peSigBytes = br.ReadBytes(PeSignatureSize);
            // convert it to a string and trim \0 at the end of the content
            string peContent = Encoding.Default.GetString(peSigBytes).TrimEnd('\0');
            // check if PE is in the content
            return peContent.Equals(PeSignatureContent);
        }

    With this basic functionality we have a good base reader class to try the different methods of parsing the COFF file header.
    COFF file header
    The COFF header has the following structure:

        Offset  Size  Field
        0       2     Machine
        2       2     NumberOfSections
        4       4     TimeDateStamp
        8       4     PointerToSymbolTable
        12      4     NumberOfSymbols
        16      2     SizeOfOptionalHeader
        18      2     Characteristics

    If we translate this table to code, we get something like this:

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
        public struct CoffHeader
        {
            public MachineType Machine;
            public ushort NumberOfSections;
            public uint TimeDateStamp;
            public uint PointerToSymbolTable;
            public uint NumberOfSymbols;
            public ushort SizeOfOptionalHeader;
            public Characteristic Characteristics;
        }

    BaseCoffReader
    All readers do the same thing, so we go to the patterns library in our head and see that the Strategy pattern or the Template Method pattern sticks out on the bookshelf. I have decided to use the Template Method pattern in this case, because Parse() should handle the IO for all implementations and the concrete parsing should be done in the derived classes.

        public CoffHeader Parse()
        {
            using (var br = new BinaryReader(File.Open(_fileName, FileMode.Open, FileAccess.Read, FileShare.Read)))
            {
                SeekToPeSignature(br);
                if (!IsValidPeSignature(br))
                {
                    throw new BadImageFormatException();
                }
                return ParseInternal(br);
            }
        }

        protected abstract CoffHeader ParseInternal(BinaryReader br);

    First we open the BinaryReader and seek to the PE signature, then we check that it contains a valid PE signature, and the rest is done by the derived implementations.

    Byte4ByteCoffReader
    The first solution uses the BinaryReader. It is the general way to get the data: we only need to know the order, the data type, and its size. If we read byte for byte we could comment out the first line in the CoffHeader structure, because we have control over the order of the member assignments.

        protected override CoffHeader ParseInternal(BinaryReader br)
        {
            CoffHeader coff = new CoffHeader();
            coff.Machine = (MachineType)br.ReadInt16();
            coff.NumberOfSections = (ushort)br.ReadInt16();
            coff.TimeDateStamp = br.ReadUInt32();
            coff.PointerToSymbolTable = br.ReadUInt32();
            coff.NumberOfSymbols = br.ReadUInt32();
            coff.SizeOfOptionalHeader = (ushort)br.ReadInt16();
            coff.Characteristics = (Characteristic)br.ReadInt16();
            return coff;
        }

    If the structure is as short as the COFF header here and the specification will never change, there is probably no reason to change the strategy. But if a data type changes, a new member is added, or the ordering of members changes, the maintenance costs of this method are very high.

    UnsafeCoffReader
    Another way to bring the data into this structure is a "magical" unsafe trick. As above, we know the layout and order of the data structure. Now we need the StructLayout attribute, because we have to ensure that the .NET runtime lays out the structure in the same order as it is specified in the source code. We also need to enable "Allow unsafe code (/unsafe)" in the project's build properties. Then we need to add the following constructor to the CoffHeader structure.

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
        public struct CoffHeader
        {
            public CoffHeader(byte[] data)
            {
                unsafe
                {
                    fixed (byte* packet = &data[0])
                    {
                        this = *(CoffHeader*)packet;
                    }
                }
            }
        }

    The "magic" trick is in the statement this = *(CoffHeader*)packet;. What happens here? We have a fixed block of data somewhere in memory, and because a struct in C# is a value type, the assignment operator = copies the whole data of the structure and not only a reference.
    To fill the structure with data, we need to pass the data as bytes into the CoffHeader structure. This can be achieved by reading the exact size of the structure from the PE file.

        protected override CoffHeader ParseInternal(BinaryReader br)
        {
            return new CoffHeader(br.ReadBytes(Marshal.SizeOf(typeof(CoffHeader))));
        }

    This solution is the fastest way to parse the data and bring it into the structure, but it is unsafe and could introduce some security and stability risks.

    ManagedCoffReader
    In this solution we use the same structure-assignment approach as above, but we replace the unsafe part of the constructor with the following managed version:

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
        public struct CoffHeader
        {
            public CoffHeader(byte[] data)
            {
                IntPtr coffPtr = IntPtr.Zero;
                try
                {
                    int size = Marshal.SizeOf(typeof(CoffHeader));
                    coffPtr = Marshal.AllocHGlobal(size);
                    Marshal.Copy(data, 0, coffPtr, size);
                    this = (CoffHeader)Marshal.PtrToStructure(coffPtr, typeof(CoffHeader));
                }
                finally
                {
                    Marshal.FreeHGlobal(coffPtr);
                }
            }
        }

    Conclusion
    We saw that we can parse well-formed binary data into our data structures using different approaches. The first is probably the clearest, because we know each member, its size, and its ordering, and we have control over reading the data for each member; but if a member is added or the structure changes for some reason, we need to change the reader. The two other solutions use the structure-assignment approach. In the unsafe implementation we need to compile the project with the /unsafe option: we gain performance, but we take on some security risks.
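    As a closing illustration, calling one of the readers might look roughly like this (a sketch: the article never shows the concrete readers' constructors, so the file-name constructor below is an assumption):

        // Hypothetical usage sketch; the constructor taking a file name is assumed,
        // since the article only shows the Parse() template method.
        var coffReader = new ManagedCoffReader(@"C:\Windows\notepad.exe");
        CoffHeader header = coffReader.Parse();
        Console.WriteLine("Machine:  {0}", header.Machine);
        Console.WriteLine("Sections: {0}", header.NumberOfSections);
        // The COFF TimeDateStamp is seconds since 1970-01-01 (UTC).
        Console.WriteLine("Linked:   {0}", new DateTime(1970, 1, 1).AddSeconds(header.TimeDateStamp));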

    Read the article

  • I have finally traded my Blackberry in for a Droid!

    - by Bob Porter
    Over the years I have used a number of different types of phones. Windows Mobile, Blackberry, Nokia, and now Android. Until the Blackberry, which was my last phone (and I still have one issued from my office), I had never found a phone that “just worked,” especially with email and messaging. The Blackberry did, and does, excel at those functions. My last personal phone was a Storm 1, which was Blackberry’s first touch screen phone. The Storm 2 was an improved version that fixed some screen press detection issues from the first model and added Wifi. Over the last few years I have watched others acquire and fall in love with their ‘Droids, including a number of iPhone users, which surprised me. Our office has until recently only supported Blackberry phones, adding iPhones within the last year or so. When I spoke with our internal telecom folks they confirmed they were evaluating Android phones, but felt they still were not secure enough out of the box for corporate use and SOX compliance. That being said, as a personal phone, the Droid rocks! I am impressed with its speed, the number of apps available, and the overall design. It is not as “flashy” as an iPhone but it does everything that I care about and more. The model I bought is the Motorola Droid 2 Global from Verizon. It is currently running Android 2.2 for its OS; 2.3 is just around the corner. It has 8 gigs of internal flash memory and can handle up to a 32 gig SDCard. (I currently have 2 8 gig cards, one for backups, and have ordered a 16 gig card!) Being a geek at heart, I “rooted” the phone, which means I gained superuser access to the OS on the phone. That opens a number of doors for further modifications down the road. Also being a geek meant I have already set up a development environment and built and deployed the obligatory “Hello Droid” application. I will be writing of my development experiences with this new platform here often; to start off I thought I would share my current application list to give you an idea what I am using.
Zedge: http://market.android.com/details?id=net.zedge.android XDA: http://market.android.com/details?id=com.quoord.tapatalkxda.activity WRAL.com: http://market.android.com/details?id=com.mylocaltv.wral Wireless Tether: http://market.android.com/details?id=android.tether Winamp: http://market.android.com/details?id=com.nullsoft.winamp Win7 Clock: http://market.android.com/details?id=com.androidapps.widget.toggles.win7 Wifi Analyzer: http://market.android.com/details?id=com.farproc.wifi.analyzer WeatherBug: http://market.android.com/details?id=com.aws.android Weather Widget Forecast Addon: http://market.android.com/details?id=com.androidapps.weather.forecastaddon Weather & Toggle Widgets: http://market.android.com/details?id=com.androidapps.widget.weather2 Vlingo: http://market.android.com/details?id=com.vlingo.client VirtualTENHO-G: http://market.android.com/details?id=jp.bustercurry.virtualtenho_g Twitter: http://market.android.com/details?id=com.twitter.android TweetDeck: http://market.android.com/details?id=com.thedeck.android.app Tricorder: http://market.android.com/details?id=org.hermit.tricorder Titanium Backup PRO: http://market.android.com/details?id=com.keramidas.TitaniumBackupPro Titanium Backup: http://market.android.com/details?id=com.keramidas.TitaniumBackup Terminal Emulator: http://market.android.com/details?id=jackpal.androidterm Talking Tom Free: http://market.android.com/details?id=com.outfit7.talkingtom Stock Blue: http://market.android.com/details?id=org.adw.theme.stockblue ST: Red Alert Free: http://market.android.com/details?id=com.oldplanets.redalertwallpaper ST: Red Alert: http://market.android.com/details?id=com.oldplanets.redalertwallpaperplus Solitaire: http://market.android.com/details?id=com.kmagic.solitaire Skype: http://market.android.com/details?id=com.skype.raider Silent Time Lite: http://market.android.com/details?id=com.QuiteHypnotic.SilentTime ShopSavvy: http://market.android.com/details?id=com.biggu.shopsavvy Shopper: http://market.android.com/details?id=com.google.android.apps.shopper Shiny clock: http://market.android.com/details?id=com.androidapps.clock.shiny ShareMyApps: http://market.android.com/details?id=com.mattlary.shareMyApps Sense Glass ADW Theme: http://market.android.com/details?id=com.dtanquary.senseglassadwtheme ROM Manager: http://market.android.com/details?id=com.koushikdutta.rommanager Roboform Bookmarklet Installer: http://market.android.com/details?id=roboformBookmarkletInstaller.android.com RealCalc: http://market.android.com/details?id=uk.co.nickfines.RealCalc Package Buddy: http://market.android.com/details?id=com.psyrus.packagebuddy Overstock: http://market.android.com/details?id=com.overstock OMGPOP Toggle: http://market.android.com/details?id=com.androidapps.widget.toggle.omgpop OI File Manager: http://market.android.com/details?id=org.openintents.filemanager nook: http://market.android.com/details?id=bn.ereader MyAtlas-Google Maps Navigation ext: http://market.android.com/details?id=com.adaptdroid.navbookfree3 MSN Droid: http://market.android.com/details?id=msn.droid.im Matrix Live Wallpaper: http://market.android.com/details?id=com.jarodyv.livewallpaper.matrix LogMeIn: http://market.android.com/details?id=com.logmein.ignitionpro.android Liveshare: http://market.android.com/details?id=com.cooliris.app.liveshare Kobo: http://market.android.com/details?id=com.kobobooks.android Instant Heart Rate: http://market.android.com/details?id=si.modula.android.instantheartrate IMDb: http://market.android.com/details?id=com.imdb.mobile Home 
Plus Weather: http://market.android.com/details?id=com.androidapps.widget.skin.weather.homeplus Handcent SMS: http://market.android.com/details?id=com.handcent.nextsms H7C Clock: http://market.android.com/details?id=com.androidapps.widget.clock.skin.h7c GTasks: http://market.android.com/details?id=org.dayup.gtask GPS Status: http://market.android.com/details?id=com.eclipsim.gpsstatus2 Google Voice: http://market.android.com/details?id=com.google.android.apps.googlevoice Google Sky Map: http://market.android.com/details?id=com.google.android.stardroid Google Reader: http://market.android.com/details?id=com.google.android.apps.reader GoMarks: http://market.android.com/details?id=com.androappsdev.gomarks Goggles: http://market.android.com/details?id=com.google.android.apps.unveil Glossy Black Weather: http://market.android.com/details?id=com.androidapps.widget.weather.skin.glossyblack Fox News: http://market.android.com/details?id=com.foxnews.android Foursquare: http://market.android.com/details?id=com.joelapenna.foursquared FBReader: http://market.android.com/details?id=org.geometerplus.zlibrary.ui.android Fandango: http://market.android.com/details?id=com.fandango Facebook: http://market.android.com/details?id=com.facebook.katana Extensive Notes Pro: http://market.android.com/details?id=com.flufflydelusions.app.extensive_notes_donate Expense Manager: http://market.android.com/details?id=com.expensemanager Espresso UI (LightShow w/ Slide): http://market.android.com/details?id=com.jaguirre.slide.lightshow Engadget: http://market.android.com/details?id=com.aol.mobile.engadget Earth: http://market.android.com/details?id=com.google.earth Drudge: http://market.android.com/details?id=com.iavian.dreport Dropbox: http://market.android.com/details?id=com.dropbox.android DroidForums: http://market.android.com/details?id=com.quoord.tapatalkdrodiforums.activity DroidArmor ADW: http://market.android.com/details?id=mobi.addesigns.droidarmorADW Droid Weather Icons: http://market.android.com/details?id=com.androidapps.widget.weather.skins.white Droid 2 Bootstrapper: http://market.android.com/details?id=com.koushikdutta.droid2.bootstrap doubleTwist: http://market.android.com/details?id=com.doubleTwist.androidPlayer Documents To Go: http://market.android.com/details?id=com.dataviz.docstogo Digital Clock Widget: http://market.android.com/details?id=com.maize.digitalClock Desk Home: http://market.android.com/details?id=com.cowbellsoftware.deskdock Default Clock: http://market.android.com/details?id=com.androidapps.widget.clock.skins.defaultclock Daily Expense Manager: http://market.android.com/details?id=com.techahead.ExpenseManager ConnectBot: http://market.android.com/details?id=org.connectbot Colorized Weather Icons: http://market.android.com/details?id=com.androidapps.widget.weather.colorized Chrome to Phone: http://market.android.com/details?id=com.google.android.apps.chrometophone CardStar: http://market.android.com/details?id=com.cardstar.android Books: http://market.android.com/details?id=com.google.android.apps.books Black Ipad Toggle: http://market.android.com/details?id=com.androidapps.toggle.widget.skin.blackipad Black Glass ADW Theme: http://market.android.com/details?id=com.dtanquary.blackglassadwtheme Bing: http://market.android.com/details?id=com.microsoft.mobileexperiences.bing BeyondPod Unlock Key: http://market.android.com/details?id=mobi.beyondpod.unlockkey BeyondPod: http://market.android.com/details?id=mobi.beyondpod BeejiveIM: http://market.android.com/details?id=com.beejive.im Beautiful 
Widgets Animations Addon: http://market.android.com/details?id=com.levelup.bw.forecast Beautiful Widgets: http://market.android.com/details?id=com.levelup.beautifulwidgets Beautiful Live Weather: http://market.android.com/details?id=com.levelup.beautifullive BBC News: http://market.android.com/details?id=net.jimblackler.newswidget Barnacle Wifi Tether: http://market.android.com/details?id=net.szym.barnacle Barcode Scanner: http://market.android.com/details?id=com.google.zxing.client.android ASTRO SMB Module: http://market.android.com/details?id=com.metago.astro.smb ASTRO Pro: http://market.android.com/details?id=com.metago.astro.pro ASTRO Bluetooth Module: http://market.android.com/details?id=com.metago.astro.network.bluetooth ASTRO: http://market.android.com/details?id=com.metago.astro AppBrain App Market: http://market.android.com/details?id=com.appspot.swisscodemonkeys.apps App Drawer Icon Pack: http://market.android.com/details?id=com.adwtheme.appdrawericonpack androidVNC: http://market.android.com/details?id=android.androidVNC AndroidGuys: http://market.android.com/details?id=com.handmark.mpp.AndroidGuys Android System Info: http://market.android.com/details?id=com.electricsheep.asi AndFTP: http://market.android.com/details?id=lysesoft.andftp ADWTheme Red: http://market.android.com/details?id=adw.theme.red ADWLauncher EX: http://market.android.com/details?id=org.adwfreak.launcher ADW.Theme.One: http://market.android.com/details?id=org.adw.theme.one ADW.Faded theme: http://market.android.com/details?id=com.xrcore.adwtheme.faded ADW Gingerbread: http://market.android.com/details?id=me.robertburns.android.adwtheme.gingerbread Advanced Task Killer Free: http://market.android.com/details?id=com.rechild.advancedtaskkiller Adobe Reader: http://market.android.com/details?id=com.adobe.reader Adobe Flash Player 10.1: http://market.android.com/details?id=com.adobe.flashplayer Adobe AIR: http://market.android.com/details?id=com.adobe.air 3G Auto OnOff: http://market.android.com/details?id=com.yuantuo --- Generated by ShareMyApps http://market.android.com/details?id=com.mattlary.shareMyApps Sent from my Droid

    Read the article

  • w3schools xsd example won't work with dom4j. How do I use dom4j to validate xml using xsds?

    - by HappyEngineer
    I am trying to use dom4j to validate the xml at http://www.w3schools.com/Schema/schema_example.asp using the xsd from that same page. It fails with the following error: org.xml.sax.SAXParseException: cvc-elt.1: Cannot find the declaration of element 'shiporder'. I'm using the following code: SAXReader reader = new SAXReader(); reader.setValidation(true); reader.setFeature("http://apache.org/xml/features/validation/schema", true); reader.setErrorHandler(new XmlErrorHandler()); reader.read(in); where in is an InputStream and XmlErrorHandler is a simple class that just logs all errors. I'm using the following xml file: <?xml version="1.0" encoding="ISO-8859-1"?> <shiporder orderid="889923" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="test1.xsd"> <orderperson>John Smith</orderperson> <shipto> <name>Ola Nordmann</name> <address>Langgt 23</address> <city>4000 Stavanger</city> <country>Norway</country> </shipto> <item> <title>Empire Burlesque</title> <note>Special Edition</note> <quantity>1</quantity> <price>10.90</price> </item> <item> <title>Hide your heart</title> <quantity>1</quantity> <price>9.90</price> </item> </shiporder> and the corresponding xsd: <?xml version="1.0" encoding="ISO-8859-1" ?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:simpleType name="stringtype"> <xs:restriction base="xs:string"/> </xs:simpleType> <xs:simpleType name="inttype"> <xs:restriction base="xs:positiveInteger"/> </xs:simpleType> <xs:simpleType name="dectype"> <xs:restriction base="xs:decimal"/> </xs:simpleType> <xs:simpleType name="orderidtype"> <xs:restriction base="xs:string"> <xs:pattern value="[0-9]{6}"/> </xs:restriction> </xs:simpleType> <xs:complexType name="shiptotype"> <xs:sequence> <xs:element name="name" type="stringtype"/> <xs:element name="address" type="stringtype"/> <xs:element name="city" type="stringtype"/> <xs:element name="country" type="stringtype"/> </xs:sequence> </xs:complexType> <xs:complexType name="itemtype"> <xs:sequence> <xs:element name="title" type="stringtype"/> <xs:element name="note" type="stringtype" minOccurs="0"/> <xs:element name="quantity" type="inttype"/> <xs:element name="price" type="dectype"/> </xs:sequence> </xs:complexType> <xs:complexType name="shipordertype"> <xs:sequence> <xs:element name="orderperson" type="stringtype"/> <xs:element name="shipto" type="shiptotype"/> <xs:element name="item" maxOccurs="unbounded" type="itemtype"/> </xs:sequence> <xs:attribute name="orderid" type="orderidtype" use="required"/> </xs:complexType> <xs:element name="shiporder" type="shipordertype"/> </xs:schema> The xsd and xml file are in the same directory. What is the problem?

    Read the article

  • Animation issue caused by C# parameters passed by reference rather than value, but where?

    - by Jordan Roher
    I'm having trouble with sprite animation in XNA that appears to be caused by a struct passed as a reference value. But I'm not using the ref keyword anywhere. I am, admittedly, a C# noob, so there may be some shallow bonehead error in here, but I can't see it. I'm creating 10 ants or bees and animating them as they move across the screen. I have an array of animation structs, and each time I create an ant or bee, I send it the animation array value it requires (just [0] or [1] at this time). Deep inside the animation struct is a timer that is used to change frames. The ant/bee class stores the animation struct as a private variable. What I'm seeing is that each ant or bee uses the same animation struct, the one I thought I was passing in and copying by value. So during Update(), when I advance the animation timer for each ant/bee, the next ant/bee has its animation timer advanced by that small amount. If there's 1 ant on screen, it animates properly. 2 ants, it runs twice as fast, and so on. Obviously, not what I want. Here's an abridged version of the code. How is BerryPicking's ActorAnimationGroupData[] getting shared between the BerryCreatures? class BerryPicking { private ActorAnimationGroupData[] animations; private BerryCreature[] creatures; private Dictionary<string, Texture2D> creatureTextures; private const int maxCreatures = 5; public BerryPickingExample() { this.creatures = new BerryCreature[maxCreatures]; this.creatureTextures = new Dictionary<string, Texture2D>(); } public void LoadContent() { // Returns data from an XML file Reader reader = new Reader(); animations = reader.LoadAnimations(); CreateCreatures(); } // This is called from another function I'm not including because it's not relevant to the problem. // In it, I remove any creature that passes outside the viewport by setting its creatures[] spot to null. // Hence the if(creatures[i] == null) test is used to recreate "dead" creatures. Inelegant, I know. private void CreateCreatures() { for (int i = 0; i < creatures.Length; i++) { if (creatures[i] == null) { // In reality, the name selection is randomized creatures[i] = new BerryCreature("ant"); // Load content and texture (which I create elsewhere) creatures[i].LoadContent( FindAnimation(creatures[i].Name), creatureTextures[creatures[i].Name]); } } } private ActorAnimationGroupData FindAnimation(string animationName) { int yourAnimation = -1; for (int i = 0; i < animations.Length; i++) { if (animations[i].name == animationName) { yourAnimation = i; break; } } return animations[yourAnimation]; } public void Update(GameTime gameTime) { for (int i = 0; i < creatures.Length; i++) { creatures[i].Update(gameTime); } } } class Reader { public ActorAnimationGroupData[] LoadAnimations() { ActorAnimationGroupData[] animationGroup; XmlReader file = new XmlTextReader(filename); // Do loading... 
// Then later file.Close(); return animationGroup; } } class BerryCreature { private ActorAnimation animation; private string name; public BerryCreature(string name) { this.name = name; } public void LoadContent(ActorAnimationGroupData animationData, Texture2D sprite) { animation = new ActorAnimation(animationData); animation.LoadContent(sprite); } public void Update(GameTime gameTime) { animation.Update(gameTime); } } class ActorAnimation { private ActorAnimationGroupData animation; public ActorAnimation(ActorAnimationGroupData animation) { this.animation = animation; } public void LoadContent(Texture2D sprite) { this.sprite = sprite; } public void Update(GameTime gameTime) { animation.Update(gameTime); } } struct ActorAnimationGroupData { // There are lots of other members of this struct, but the timer is the only one I'm worried about. // TimerData is another struct private TimerData timer; public ActorAnimationGroupData() { timer = new TimerData(2); } public void Update(GameTime gameTime) { timer.Update(gameTime); } } struct TimerData { public float currentTime; public float maxTime; public TimerData(float maxTime) { this.currentTime = 0; this.maxTime = maxTime; } public void Update(GameTime gameTime) { currentTime += (float)gameTime.ElapsedGameTime.TotalSeconds; if (currentTime >= maxTime) { currentTime = maxTime; } } }
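    For reference, the value-versus-reference behaviour under discussion can be reproduced outside XNA in a few lines; this is only a sketch of the language semantics, not a diagnosis of the code above. If assignment really copied the struct, each copy would keep its own timer:

        using System;

        struct TimerData
        {
            public float CurrentTime;
        }

        class StructCopyDemo
        {
            static void Main()
            {
                var original = new TimerData();
                TimerData copy = original;   // structs are copied on assignment...
                copy.CurrentTime += 1f;
                Console.WriteLine(original.CurrentTime); // ...so this still prints 0
                Console.WriteLine(copy.CurrentTime);     // and this prints 1
            }
        }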

    Read the article

  • Changing the action of a hyperlink in a Silverlight RichTextArea

    - by Marc Schluper
    The title of this post could also have been "Move over Hyperlink, here comes Actionlink" or "Creating interactive text in Silverlight." But alas, there can be only one. Hyperlinks are very useful. However, they are also limited because their action is fixed: browse to a URL. This may have been adequate at the start of the Internet, but nowadays, in web applications, the one thing we do not want to happen is a complete change of context. In applications we typically like a hyperlink selection to initiate an action that updates a part of the screen. For instance, if my application has a map displayed with some text next to it, the map would react to a selection of a hyperlink in the text, e.g. by zooming in on a location and displaying additional locational information in a popup. In this way, the text becomes interactive text. It is quite common that one company creates and maintains websites for many client companies. To keep maintenance cost low, it is important that the content of these websites can be updated by the client companies themselves, without the need to involve a software engineer. To accommodate this scenario, we want the author of the interactive text to configure all hyperlinks (without writing any code). In a Silverlight RichTextArea, the default action of a Hyperlink is the same as a traditional hyperlink, but it can be changed: if the Command property has a value then upon a click event this command is called with the value of the CommandParameter as parameter. How can we let the author of the text specify a command for each hyperlink in the text, and how can we let an application react properly to a hyperlink selection event? We are talking about any command here. Obviously, the application would recognize only a specific set of commands, with well defined parameters, but the approach we take here is generic in the sense that it pertains to the RichTextArea and any command. So what do we require? We wish that: As a text author, I can configure the action of a hyperlink in a (rich) text without writing code; As a text author, I can persist the action of a hyperlink with the text; As a reader of persisted text, I can click a hyperlink and the configured action will happen; As an application developer, I can configure a control to use my application specific commands. In an excellent introduction to the RichTextArea, John Papa shows (among other things) how to persist a text created using this control. To meet our requirements, we can create a subclass of RichTextArea that uses John's code and allows plugging in two command specific components: one to prompt for a command definition, and one to execute the command. Since both of these plugins are application specific, our RichTextArea subclass should not assume anything about them except their interface. public interface IDefineCommand { void Prompt(string content, // the link content Action<string, object> callback); // the method called to convey the link definition } public interface IPerformCommand : ICommand {} The IDefineCommand plugin receives the content of the link (the text visible to the reader) and displays some kind of control that allows the author to define the link. When that's done, this (possibly changed) content string is conveyed back to the RichTextArea, together with an object that defines the command to execute when the link is clicked by the reader of the published text. The IPerformCommand plugin simply implements System.Windows.Input.ICommand. Let's use MEF to load the proper plugins. 
In the example solution there is a project that contains rudimentary implementations of these. The IDefineCommand plugin simply prompts for a command string (cf. a command line or query string), and the IPerformCommand plugin displays a MessageBox showing this command string. An actual application using this extended RichTextArea would have its own set of commands, each having their own parameters, and hence would provide more user friendly application specific plugins. Nonetheless, in any case a command can be persisted as a string and hence the two interfaces defined above suffice. For a Visual Studio 2010 solution, see my article on The Code Project.
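    As a rough illustration only (not code from the article's sample solution), an application-specific IPerformCommand plugin along the lines of the rudimentary one described above might look like this; the MEF Export attribute is an assumption based on the remark that MEF loads the plugins:

        using System;
        using System.ComponentModel.Composition;
        using System.Windows;
        using System.Windows.Input;

        // Hypothetical sketch: simply shows the command string the author configured.
        [Export(typeof(IPerformCommand))]
        public class MessageBoxCommand : IPerformCommand
        {
            public bool CanExecute(object parameter) { return true; }

            public event EventHandler CanExecuteChanged;

            public void Execute(object parameter)
            {
                MessageBox.Show(parameter as string ?? "(no command defined)");
            }
        }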

    Read the article

  • Read Mobi eBooks on Kindle for PC

    - by Matthew Guay
    Do you use your PC as an eBook reader?  Kindle for PC makes it easy to read thousands of books from the Kindle Store on your computer. What you may not know is that it also works with the .mobi format, so you can increase the number of books you can read. Amazon has jumpstarted the eBook market with its popular Kindle device.  Last fall Amazon unveiled Kindle for PC, and we reviewed how you can Read Kindle Books On Your Computer with Kindle for PC.  Whether or not you own a Kindle or other eBook reader, this is a great way to take advantage of the thousands of eBooks available from the Kindle Store today. It supports the azw, prc, and tpz formats, which are sold from the Kindle Store, but it also supports Mobipocket (.mobi) eBooks that are not DRM protected.  Here’s how you can add them to Kindle for PC so you can easily read them on your PC.

    Getting Started: First, make sure you have Kindle for PC (link below) installed on your computer. Sign in with your Amazon account when you first run it. Kindle for PC lets you easily read eBooks downloaded from the Kindle Store, but it doesn’t have any way to add other eBooks directly from the program. To add eBooks, you can sometimes download and double-click on the books, and they will open in Kindle for PC and be automatically added to the library.  However, this does not always seem to work. So instead, browse to your Documents folder (simply click on the Documents link on your Start menu), and double-click on the My Kindle Content folder. This folder contains all the Kindle books you have downloaded.  If you have other eBooks you would like to add to Kindle for PC, simply drag-and-drop or copy and paste them into this folder.  Here we have a .mobi-formatted book downloaded from the Gutenberg Project that we’re dragging into the folder. Now, close and reopen Kindle for PC.  It should now show your new eBook right beside the eBooks you have downloaded from the Kindle Store. These eBooks work just the same as the ones downloaded from the Kindle Store, and you can change the font size and add bookmarks just as with other eBooks. The eBooks downloaded this way may show up with either an Amazon logo or a mobile device icon.  You should only see the mobile device icon on .mobi files formatted for mobile devices; other ones should show up with the Amazon logo.  In this screen, Pilgrim’s Progress is a standard .mobi book, The Adventures of Sherlock Holmes is a Mobipocket book, and the others are downloaded from the Kindle Store.

    Conclusion: This is a great way to read eBooks from across the internet on Kindle for PC.  Wikipedia’s Kindle page has a list of websites that offer eBooks formatted for the Kindle, so be sure to check it out for more books.
    Links:
    Download Kindle for PC
    List of websites that offer eBooks that will work on Kindle – via Wikipedia

    Read the article

  • Is Oracle Policy Automation a Fit for My Agency? I'll bet it is.

    - by jeffrey.waterman
    Recently, I stumbled upon a new(-ish) whitepaper now posted on the Oracle Technology Network around Oracle Policy Automation (OPA). This paper is certain to become a must-read for any customer interested in rules automation. What is OPA?  If you are not sitting in your favorite Greek restaurant waiting for that order of saganaki to appear, OPA is Oracle’s solution for automating the streamlining, standardization, and maintenance of policy. It is a specialized rules platform that simplifies the automation of rules and policies, putting the analysis in the hands of the analysts, not the IT organization. In other words, OPA allows the organization to be more efficient by eliminating (or at a minimum, reducing the engagement of) the middle man from the process. The whitepaper I mention above is titled “Is Oracle Policy Automation a Good Fit for My Business?”. This short document walks the reader through use cases and advice to consider when deciding if OPA is right for their agency. The paper outlines many different scenarios, different uses of OPA in production today, and where OPA may not be a good fit. Many of the use case examples revolve around end-user questionnaires or analyst research. What is often overlooked is OPA’s ability to act as a rules engine behind the scenes: take inputs from one source (e.g., personnel data), process that data in OPA, and send the output (e.g., pay data with benefits deductions) to a second source. The rules have been automated, with no human intervention needed to perform the analysis. A few of my customers have used the embedded OPA solution to improve transaction processing and reduce the time spent analyzing exceptions. I suggest any reader whose organization is reliant on or deals with high complexity, volume, or volatility in rules that are based on documentation – or which need to be documented – take a look at Oracle Policy Automation. You can find the white paper in the Oracle Policy Automation section of the Oracle Technology Network (OTN), and more information around OPA on oracle.com. Finally, you can send me a question any time at [email protected]. Thank you for reading. If you have any topics around Oracle Applications in the Federal or Public Sector industries you would like to see addressed in this blog, please leave suggestions in the comments section and I will do my best to address them in a future post.

    Read the article

  • Adding array of images to Firebase using AngularFire

    - by user2833143
    I'm trying to allow users to upload images and then store the images, base64 encoded, in Firebase. I'm trying to make my Firebase structured as follows:

        |--Feed
        |----Feed Item
        |------username
        |------epic
        |---------name, etc.
        |------images
        |---------image1, image2, etc.

    However, I can't get the remote data in Firebase to mirror the local data in the client. When I print the array of images to the console in the client, it shows that the uploaded images have been added to the array of images...but these images never make it to Firebase. I've tried doing this multiple ways to no avail. I tried using implicit syncing, explicit syncing, and a mixture of both. I can't for the life of me figure out why this isn't working and I'm getting pretty frustrated. Here's my code:

        $scope.complete = function(epicName){
            for (var i = 0; i < $scope.activeEpics.length; i++){
                if($scope.activeEpics[i].id === epicName){
                    var epicToAdd = $scope.activeEpics[i];
                }
            }
            var epicToAddToFeed = {epic: epicToAdd, username: $scope.currUser.username, userImage: $scope.currUser.image, props:0, images:['empty']};
            //connect to feed data
            var feedUrl = "https://epicly.firebaseio.com/feed";
            $scope.feed = angularFireCollection(new Firebase(feedUrl));
            //add epic
            var added = $scope.feed.add(epicToAddToFeed).name();
            //connect to added epic in firebase
            var addedUrl = "https://epicly.firebaseio.com/feed/" + added;
            var addedRef = new Firebase(addedUrl);
            angularFire(addedRef, $scope, 'added').then(function(){
                // for each image uploaded, add image to the added epic's array of images
                for (var i = 0, f; f = $scope.files[i]; i++) {
                    var reader = new FileReader();
                    reader.onload = (function(theFile) {
                        return function(e) {
                            var filePayload = e.target.result;
                            $scope.added.images.push(filePayload);
                        };
                    })(f);
                    reader.readAsDataURL(f);
                }
            });
        }

    EDIT: Figured it out, had to connect to "https://epicly.firebaseio.com/feed/" + added + "/images"

    Read the article

  • C# Can I return HttpWebResponse result to iframe - Uses Digest authentication

    - by chadsxe
    I am trying to figure out a way to display a cross-domain web page that uses Digest Authentication. My initial thought was to make a web request and return the entire page source. I currently have no issues with authenticating and getting a response but I am not sure how to properly return the needed data. // Create a request for the URL. WebRequest request = WebRequest.Create("http://some-url/cgi/image.php?type=live"); // Set the credentials. request.Credentials = new NetworkCredential(username, password); // Get the response. HttpWebResponse response = (HttpWebResponse)request.GetResponse(); // Get the stream containing content returned by the server. Stream dataStream = response.GetResponseStream(); // Open the stream using a StreamReader for easy access. StreamReader reader = new StreamReader(dataStream); // Read the content. string responseFromServer = reader.ReadToEnd(); // Clean up the streams and the response. reader.Close(); dataStream.Close(); response.Close(); return responseFromServer; My problems are currently... responseFromServer is not returning the entire source of the page. I.E. missing body and head tags The data is encoded improperly in responseFromServer. I believe this has something to do with the transfer encoding being of the type chunked. Further more... I am not entirely sure if this is even possible. If it matters, this is being done in ASP.NET MVC 4 C#. Thanks, Chad
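    One thing worth ruling out on the encoding side (a sketch, not a guaranteed fix) is whether the stream is being read with the encoding the server actually declares; HttpWebResponse exposes it through CharacterSet. If the server also compresses the body, setting AutomaticDecompression on an HttpWebRequest before sending is another candidate.

        // Sketch: honour the declared charset, falling back to UTF-8 when none is given.
        // 'response' is the HttpWebResponse from the code above.
        Encoding enc = Encoding.UTF8;
        if (!string.IsNullOrEmpty(response.CharacterSet))
        {
            try { enc = Encoding.GetEncoding(response.CharacterSet); }
            catch (ArgumentException) { /* unknown charset name, keep UTF-8 */ }
        }
        string body;
        using (var encodedReader = new StreamReader(response.GetResponseStream(), enc))
        {
            body = encodedReader.ReadToEnd();
        }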

    Read the article

  • Difference between DataTable.Load() and DataTable = dataSet.Tables[];

    - by subbu
    Hi all, I have a question. I use the following piece of code to get data from a SQLite database and load it into a DataTable:

        SQLiteConnection cnn = new SQLiteConnection("Data Source=" + path);
        cnn.Open();
        SQLiteCommand mycommand = new SQLiteCommand(cnn);
        string sql = "select Company,Phone,Email,Address,City,State,Zip,Country,Fax,Web from RecordsTable";
        mycommand.CommandText = sql;
        SQLiteDataReader reader = mycommand.ExecuteReader();
        dt.Load(reader);
        reader.Close();
        cnn.Close();

    In some cases, when I try to load it, this gives me a "Failed to enable constraints" exception. But when I tried the piece of code below, for the same table and the same set of records, it worked:

        SQLiteConnection ObjConnection = new SQLiteConnection("Data Source=" + path);
        SQLiteCommand ObjCommand = new SQLiteCommand("select Company,Phone,Email,Address,City,State,Zip,Country,Fax,Web from RecordsTable", ObjConnection);
        ObjCommand.CommandType = CommandType.Text;
        SQLiteDataAdapter ObjDataAdapter = new SQLiteDataAdapter(ObjCommand);
        DataSet dataSet = new DataSet();
        ObjDataAdapter.Fill(dataSet, "RecordsTable");
        dt = dataSet.Tables["RecordsTable"];

    Can anyone tell me what the difference between the two is?
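    Not an answer by itself, but one way to see why the two code paths behave differently is to inspect how much schema each of them inferred from the reader; this is only a diagnostic sketch, with dtFromLoad and dtFromFill standing in for the two tables produced by the snippets above:

        // Sketch: compare the constraints and primary key each approach inferred.
        Console.WriteLine("Load() -> constraints: {0}, primary key columns: {1}",
            dtFromLoad.Constraints.Count, dtFromLoad.PrimaryKey.Length);
        Console.WriteLine("Fill() -> constraints: {0}, primary key columns: {1}",
            dtFromFill.Constraints.Count, dtFromFill.PrimaryKey.Length);
        foreach (Constraint c in dtFromLoad.Constraints)
        {
            Console.WriteLine("  Load() inferred: " + c.ConstraintName);
        }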

    Read the article

  • IDataRecord.IsDBNull causes an System.OverflowException (Arithmetic Overflow)

    - by Ciddan
    Hi! I have an OdbcDataReader that gets data from a database and returns a set of records. The code that executes the query looks as follows:

        OdbcDataReader reader = command.ExecuteReader();
        while (reader.Read())
        {
            yield return reader.AsMovexProduct();
        }

    The method returns an IEnumerable of a custom type (MovexProduct). The conversion from an IDataRecord to my custom type MovexProduct happens in an extension method that looks like this (abbrev.):

        public static MovexProduct AsMovexProduct(this IDataRecord record)
        {
            var movexProduct = new MovexProduct
            {
                ItemNumber = record.GetString(0).Trim(),
                Name = record.GetString(1).Trim(),
                Category = record.GetString(2).Trim(),
                ItemType = record.GetString(3).Trim()
            };

            if (!record.IsDBNull(4))
                movexProduct.Status1 = int.Parse(record.GetString(4).Trim());

            // Additional properties with IsDBNull checks follow here.
            return movexProduct;
        }

    As soon as I hit the if (!record.IsDBNull(4)) I get an OverflowException with the exception message "Arithmetic operation resulted in an overflow." StackTrace:

        System.OverflowException was unhandled by user code
        Message=Arithmetic operation resulted in an overflow.
        Source=System.Data
        StackTrace:
            at System.Data.Odbc.OdbcDataReader.GetSqlType(Int32 i)
            at System.Data.Odbc.OdbcDataReader.GetValue(Int32 i)
            at System.Data.Odbc.OdbcDataReader.IsDBNull(Int32 i)
            at JulaAil.DataService.Movex.Data.ExtensionMethods.AsMovexProduct(IDataRecord record)
            [...]

    I've never encountered this problem before and I cannot figure out why I get it. I have verified that the record exists, that it contains data, and that the indexes I provide are correct. I should also mention that I get the same exception if I change the if-statement to this: if (record.GetString(4) != null). What does work is encapsulating the property assignment in a try {} catch (NullReferenceException) {} block – but that can lead to performance loss (can it not?). I am running the x64 version of Visual Studio and I'm using a 64-bit ODBC driver. Has anyone else come across this? Any suggestions as to how I could solve / get around this issue? Many thanks!
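    Since the try/catch workaround mentioned above does work, one option is to wrap it in a small helper so it does not clutter the mapping code; this is only a sketch of that workaround, not a fix for the underlying driver behaviour:

        // Sketch: defensive getter that treats the driver's failure as "no value".
        private static string GetStringOrNull(IDataRecord record, int ordinal)
        {
            try
            {
                return record.IsDBNull(ordinal) ? null : record.GetString(ordinal);
            }
            catch (OverflowException) { return null; }        // the exception from the title
            catch (NullReferenceException) { return null; }   // the one the try/catch block absorbs
        }

        // usage inside AsMovexProduct:
        // string status = GetStringOrNull(record, 4);
        // if (status != null) movexProduct.Status1 = int.Parse(status.Trim());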

    Read the article

  • SQL Query slow in .NET application but instantaneous in SQL Server Management Studio

    - by user203882
    Here is the SQL SELECT tal.TrustAccountValue FROM TrustAccountLog AS tal INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID INNER JOIN Users usr ON usr.UserID = ta.UserID WHERE usr.UserID = 70402 AND ta.TrustAccountID = 117249 AND tal.trustaccountlogid = ( SELECT MAX (tal.trustaccountlogid) FROM TrustAccountLog AS tal INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID INNER JOIN Users usr ON usr.UserID = ta.UserID WHERE usr.UserID = 70402 AND ta.TrustAccountID = 117249 AND tal.TrustAccountLogDate < '3/1/2010 12:00:00 AM' ) Basicaly there is a Users table a TrustAccount table and a TrustAccountLog table. Users: Contains users and their details TrustAccount: A User can have multiple TrustAccounts. TrustAccountLog: Contains an audit of all TrustAccount "movements". A TrustAccount is associated with multiple TrustAccountLog entries. Now this query executes in milliseconds inside SQL Server Management Studio, but for some strange reason it takes forever in my C# app and even timesout (120s) sometimes. Here is the code in a nutshell. It gets called multiple times in a loop and the statement gets prepared. cmd.CommandTimeout = Configuration.DBTimeout; cmd.CommandText = "SELECT tal.TrustAccountValue FROM TrustAccountLog AS tal INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID INNER JOIN Users usr ON usr.UserID = ta.UserID WHERE usr.UserID = @UserID1 AND ta.TrustAccountID = @TrustAccountID1 AND tal.trustaccountlogid = (SELECT MAX (tal.trustaccountlogid) FROM TrustAccountLog AS tal INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID INNER JOIN Users usr ON usr.UserID = ta.UserID WHERE usr.UserID = @UserID2 AND ta.TrustAccountID = @TrustAccountID2 AND tal.TrustAccountLogDate < @TrustAccountLogDate2 ))"; cmd.Parameters.Add("@TrustAccountID1", SqlDbType.Int).Value = trustAccountId; cmd.Parameters.Add("@UserID1", SqlDbType.Int).Value = userId; cmd.Parameters.Add("@TrustAccountID2", SqlDbType.Int).Value = trustAccountId; cmd.Parameters.Add("@UserID2", SqlDbType.Int).Value = userId; cmd.Parameters.Add("@TrustAccountLogDate2", SqlDbType.DateTime).Value =TrustAccountLogDate; // And then... reader = cmd.ExecuteReader(); if (reader.Read()) { double value = (double)reader.GetValue(0); if (System.Double.IsNaN(value)) return 0; else return value; } else return 0;
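    One difference worth ruling out (hedged: it is a common cause of the "fast in SSMS, slow in the application" pattern, not necessarily this one) is that SSMS and ADO.NET sessions typically run with different SET options, most notably ARITHABORT, which gives the two sessions separate cached plans and can expose parameter sniffing only on the application side. A quick way to test that theory:

        // Sketch: make the session options match SSMS before running the query,
        // purely to see whether a different cached plan explains the slowdown.
        // 'connection' stands for the SqlConnection the command runs on.
        using (var setCmd = connection.CreateCommand())
        {
            setCmd.CommandText = "SET ARITHABORT ON;";
            setCmd.ExecuteNonQuery();
        }
        // ...then execute the existing parameterised command on the same connection.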

    Read the article

  • .NET Fingerprint SDK

    - by Kishore A
    I am looking for a comparison of fingerprint reader SDKs available for .NET (we are using .NET 3.5). My requirements are:
    1. Associate more than one fingerprint with a person.
    2. Store the fingerprints itself (so I do not have to change the database for my program).
    3. Work in both event and no-event mode (event mode: give a notification if someone swipes a finger on the reader; no-event mode: I activate the reader in synchronous mode).
    4. Provide an API for either confirming a user or identifying a user (confirm API: I send the person's ID/unique number and it confirms that it is the same person; identify API: the SDK returns the person's ID after it looks up the person using the fingerprint).
    I would also like a comparison of fingerprint readers, if anybody knows of one available on the internet. Hope I was clear. Thanks, Kishore.
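    Not an SDK recommendation, but to make the requirements concrete, here is a hypothetical C# interface sketching what a wrapper around whichever SDK is chosen would need to expose; every name here is made up and does not correspond to any real vendor API:

        // Hypothetical wrapper interface capturing the four requirements.
        public interface IFingerprintService
        {
            // 1 & 2: enroll one or more prints per person and get back template
            //        bytes that can be stored in the application's own database.
            byte[] Enroll(string personId);

            // 3: event mode - raised whenever a finger is swiped.
            event Action<byte[]> FingerSwiped;

            // 3: no-event (synchronous) mode - block until a swipe is captured.
            byte[] CaptureOnce(TimeSpan timeout);

            // 4: verification (1:1) and identification (1:N).
            bool Verify(string personId, byte[] capturedTemplate);
            string Identify(byte[] capturedTemplate);
        }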

    Read the article

  • Trouble converting an MP3 file to a WAV file using Naudio

    - by WebDevHobo
    Naudio Library: http://naudio.codeplex.com/ I'm trying to convert an MP3 file to a WAV file, but I've run in to a small error. I know what's going wrong, but I don't really know how to go about fixing it. Here's the piece of code I'm running: private void button1_Click(object sender, EventArgs e) { using(Mp3FileReader reader = new Mp3FileReader(@"path\to\MP3")) { using(WaveFileWriter writer = new WaveFileWriter(@"C:\test.wav", new WaveFormat())) { int counter = 0; while(reader.Read(test, counter, test.Length + counter) != 0) { writer.WriteData(test, counter, test.Length + counter); counter += 512; } } } } reader.Read() goes into the Mp3FileReader class, and the method looks like this: public override int Read(byte[] sampleBuffer, int offset, int numBytes) { if (numBytes % waveFormat.BlockAlign != 0) //throw new ApplicationException("Must read complete blocks"); numBytes -= (numBytes % waveFormat.BlockAlign); return mp3Stream.Read(sampleBuffer, offset, numBytes); } mp3Stream is an object of the Stream class. The problem is: I'm getting an ArgumentException. MSDN says that this is because the sum of offset and numBytes is greater than the length of sampleBuffer. Documentation: http://msdn.microsoft.com/en-us/library/system.io.stream.read.aspx This happens because I increase the counter every time, but the size of the byte array test remains the same. What I've been wondering is: do I need to increase the size of the array dynamically, or do I need to find out the needed size at the beginning and set it right away? And also, instead of 512, the method in Mp3FileReader returns 365 the first time. Which is the size of a whole block. But I'm writing the full 512. I'm basically just using the read to check if I'm not at the end of the file yet. Do I need to catch the return value and do something with that, or am I good here?
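    For what it's worth, the offsets look like the immediate problem: the second argument to Read and WriteData is an offset into the buffer (not a position in the file) and the third is a count, so passing test.Length + counter walks past the end of the array. A more conventional copy loop, sketched here on the assumption that Read returns the number of bytes actually read (as Stream.Read does), would be:

        // Sketch: buffer offsets stay 0; only the count returned by Read is written out.
        byte[] buffer = new byte[4096];
        int bytesRead;
        while ((bytesRead = reader.Read(buffer, 0, buffer.Length)) > 0)
        {
            writer.WriteData(buffer, 0, bytesRead);
        }

    It may also be worth constructing the WaveFileWriter with the reader's WaveFormat rather than a default new WaveFormat(), so the WAV header describes the data being written — though whether Mp3FileReader hands back decoded PCM or raw MP3 frames depends on the NAudio version, so treat that as something to verify rather than a given.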

    Read the article

  • Lucene stop words not removed during searching need a substitute for AnalyzingQueryParser

    - by iamrohitbanga
    I have created a Lucene index with the following analyzer. public class DocSpecAnalyzer extends Analyzer { private static CharArraySet stopSet;// = new HashSet<String>(Arrays.asList());//STOP_WORDS_SET; static { stopSet = new CharArraySet(FDConstants.stopwords, true); // uncommenting this displays all the stop words // for (String s: FDConstants.stopwords) { // System.out.println(s); // } } /** * Specifies whether deprecated acronyms should be replaced with HOST type. * See {@linkplain https://issues.apache.org/jira/browse/LUCENE-1068} */ private final boolean enableStopPositionIncrements; private final Version matchVersion; public DocSpecAnalyzer(Version matchVersion) { this.matchVersion = matchVersion; enableStopPositionIncrements = StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion); } public TokenStream tokenStream(String fieldName, Reader reader) { StandardTokenizer tokenStream = new StandardTokenizer(matchVersion, reader); tokenStream.setMaxTokenLength(DEFAULT_MAX_TOKEN_LENGTH); TokenStream result = new StandardFilter(tokenStream); result = new LowerCaseFilter(result); result = new StopFilter(enableStopPositionIncrements, result, stopSet); result = new PorterStemFilter(result); return result; } /** Default maximum allowed token length */ public static final int DEFAULT_MAX_TOKEN_LENGTH = 255; } Now when I search for documents for a query containing stop words, i get hits for stop words also. It is because of http://lucene.apache.org/java/2_9_2/api/contrib-misc/org/apache/lucene/queryParser/analyzing/AnalyzingQueryParser.html not handling stop words. Is there a substitute? Update: forgot to mention that I need to do a fuzzy search. that is why i am using an AnalyzingQueryParser. Update portion of code that invokes AnalyzingQueryParser AnalyzingQueryParser parser = new AnalyzingQueryParser(Version.LUCENE_CURRENT,"description", analyzer); // fuzzy matching preparation String fuzzyStr = TextQuery.prepareFuzzy(tq.text, fuzzyDist); Query query = parser.parse(fuzzyStr); TopScoreDocCollector collector = TopScoreDocCollector.create(numHits, true); searcher.search(query, collector);

    Read the article

  • How to get a file path from c drive in android

    - by Thomas Mathew
    I was trying to extract the content of a PDF file and display it in Android. I tried the code in Java and it worked, but when I run it on Android, it's not displaying anything. I think it's not getting the file path. Is there any way I could get a file stored on the C drive and store its path in a string? The code is given below:

        public class hello extends Activity {
            /** Called when the activity is first created. */
            private static String INPUTFILE = "FirstPdf.pdf";
            String str;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                try {
                    Document document = new Document();
                    document.open();
                    PdfReader reader = new PdfReader(INPUTFILE);
                    int n = reader.getNumberOfPages();
                    str = PdfTextExtractor.getTextFromPage(reader, 2);
                } catch (IOException e) {
                    //e.printStackTrace();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                TextView tv = new TextView(this);
                tv.setText(str);
                setContentView(tv);
            }
        }

    I would be very grateful if the problem is solved.

    Read the article

  • iText PDFReader Extremely Slow To Open

    - by Wbmstrmjb
    I have some code that combines a few pages of acro forms (with acrofields in tact) and then at the end writes some JS to the entire document. It is the PdfReader in the function adding the JS that is taking extremely long to instantiate (about 12 seconds for a 1MB file). Here is the code (pretty simple): public static byte[] AddJavascript(byte[] document, string js) { PdfReader reader = new PdfReader(new RandomAccessFileOrArray(document), null); MemoryStream msOutput = new MemoryStream(); PdfStamper stamper = new PdfStamper(reader, msOutput); PdfWriter writer = stamper.Writer; writer.AddJavaScript(js); stamper.Close(); reader.Close(); byte[] withJS = msOutput.GetBuffer(); return withJS; } I have benchmarked the above and the line that is slow is the first one. I have tried reading it from a file instead of memory and tried using a MemoryStream instead of the RandomAccessFileOrArray. Nothing makes it any faster. If I add JS to a single page document, it is very fast. So my thought is that the code that combines the pages is somehow making the PDF slow to read for the PdfReader. Here is the combine code: public static byte[] CombineFiles(List<byte[]> sourceFiles) { MemoryStream output = new MemoryStream(); PdfCopyFields copier = new PdfCopyFields(output); try { output.Position = 0; foreach (var fileBytes in sourceFiles) { PdfReader fileReader = new PdfReader(fileBytes); copier.AddDocument(fileReader); } } catch (Exception exception) { //throw } finally { copier.Close(); } byte[] concat = output.GetBuffer(); return concat; } I am using PdfCopyFields because I need to preserve the form fields and so cannot use the PdfCopy or PdfSmartCopy. This combine code is very fast (few ms) and produces working documents. The AddJS code above is called after it and the PdfReader open is the slow piece. Any ideas?
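    One detail that may matter here, independent of iText: MemoryStream.GetBuffer() returns the stream's whole internal buffer, including unused capacity beyond what was written, so the byte[] coming out of CombineFiles (and AddJavascript) can carry trailing junk after the PDF's end-of-file marker. ToArray() returns only the bytes actually written. This is a hedged guess at the slowdown, but it is a cheap change to try:

        // GetBuffer() may hand back a buffer larger than the data written;
        // ToArray() copies exactly the written bytes.
        byte[] concat = output.ToArray();
        return concat;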

    Read the article

  • C# Find and Replace RegEx with wildcard search and addition of value

    - by fraXis
    Hello, The below code is from my other questions that I have asked here on SO. Everyone has been so helpful and I almost have a grasp of RegEx, but I ran into another hurdle. This is what I basically need to do in a nutshell. I need to take this line that is in a text file that I load into my content variable: X17.8Y-1.Z0.1G0H1E1 I need to do a wildcard search for the X value, Y value, Z value, and H value. When I am done, I need this written back to my text file (I know how to create the text file so that is not the problem). X17.8Y-1.G54G0T2 G43Z0.1H1M08 I have code that the kind users here have given me, except that I need to create the T value at the end of the first line, using the H value incremented by 1. For example: X17.8Y-1.Z0.1G0H5E1 would translate as: X17.8Y-1.G54G0T6 G43Z0.1H5M08 The T value is 6 because the H value is 5. I have code that does everything else (it runs two RegEx replacements, separating the line of code into two new lines and adding some new G values). But I don't know how to add the T value to the end of the first line as the H value incremented by 1. Here is my code: StreamReader reader = new StreamReader(fDialog.FileName.ToString()); string content = reader.ReadToEnd(); reader.Close(); content = Regex.Replace(content, @"X[-\d.]+Y[-\d.]+", "$0G54G0"); content = Regex.Replace(content, @"(Z(?:\d*\.)?\d+)[^H]*G0(H(?:\d*\.)?\d+)\w*", "\nG43$1$2M08"); //This must be created on a new line This code works great at taking: X17.8Y-1.Z0.1G0H5E1 and turning it into: X17.8Y-1.G54G0 G43Z0.1H5M08 but I need it turned into this: X17.8Y-1.G54G0T6 G43Z0.1H5M08 (notice the T value is added to the first line and is the H value + 1, i.e. T = H + 1). Can someone please modify my RegEx statement so I can do this automatically? I tried to combine my two RegEx statements into one line but I failed miserably. Thanks so much, Shawn
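    One way to do this in a single pass, sketched here under the assumption that each source line matches the X/Y/Z/H pattern shown above (the pattern and class name are illustrative, not the exact ones from the earlier answers), is to use a MatchEvaluator so the T value can be computed from the captured H value (T = H + 1) while the replacement is built:

    using System;
    using System.Text.RegularExpressions;

    class GCodeRewriter
    {
        static string Rewrite(string content)
        {
            // Capture the X/Y block, the Z value and the H value, then emit the
            // two output lines; T is computed as H + 1 inside the evaluator.
            return Regex.Replace(
                content,
                @"(?<xy>X[-\d.]+Y[-\d.]+)Z(?<z>\d*\.?\d+)G0H(?<h>\d+)\w*",
                m =>
                {
                    int h = int.Parse(m.Groups["h"].Value);
                    return m.Groups["xy"].Value + "G54G0T" + (h + 1) +
                           "\nG43Z" + m.Groups["z"].Value + "H" + h + "M08";
                });
        }

        static void Main()
        {
            // Prints: X17.8Y-1.G54G0T6  then  G43Z0.1H5M08 on the next line.
            Console.WriteLine(Rewrite("X17.8Y-1.Z0.1G0H5E1"));
        }
    }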

    Read the article

  • This property cannot be set after writing has started! on a C# WebRequest Object

    - by EBAGHAKI
    I want to reuse a WebRequest object so that cookies and session state are preserved for later requests to the server. Below is my code. If I call the Post function twice, the second call throws the exception "This property cannot be set after writing has started!" at request.ContentLength = byteArray.Length. But as you can see, dataStream.Close(); should end the writing process! Does anybody know what's going on? static WebRequest request; public MainForm() { request = WebRequest.Create("http://localhost/admin/admin.php"); } static string Post(string url, string data) { request.Method = "POST"; byte[] byteArray = Encoding.UTF8.GetBytes(data); request.ContentType = "application/x-www-form-urlencoded"; request.ContentLength = byteArray.Length; Stream dataStream = request.GetRequestStream(); dataStream.Write(byteArray, 0, byteArray.Length); dataStream.Close(); WebResponse response = request.GetResponse(); Console.WriteLine(((HttpWebResponse)response).StatusDescription); dataStream = response.GetResponseStream(); StreamReader reader = new StreamReader(dataStream); string responseFromServer = reader.ReadToEnd(); Console.WriteLine(responseFromServer); reader.Close(); dataStream.Close(); response.Close(); request.Abort(); return responseFromServer; }
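    The exception is by design: a WebRequest/HttpWebRequest instance is single-use, and once the request stream has been written the headers (including ContentLength) are committed, so closing the stream does not reset the object. A common pattern, sketched here with illustrative names (AdminClient is hypothetical), is to build a new request per call and share a CookieContainer so the session cookies survive between posts:

    using System.IO;
    using System.Net;
    using System.Text;

    static class AdminClient
    {
        // One CookieContainer shared across calls keeps the session cookies alive
        // even though each request object is created fresh.
        static readonly CookieContainer Cookies = new CookieContainer();

        static string Post(string url, string data)
        {
            // HttpWebRequest is single-use: build a new one for every call.
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            request.Method = "POST";
            request.ContentType = "application/x-www-form-urlencoded";
            request.CookieContainer = Cookies;

            byte[] body = Encoding.UTF8.GetBytes(data);
            request.ContentLength = body.Length;
            using (Stream requestStream = request.GetRequestStream())
            {
                requestStream.Write(body, 0, body.Length);
            }

            using (WebResponse response = request.GetResponse())
            using (StreamReader reader = new StreamReader(response.GetResponseStream()))
            {
                return reader.ReadToEnd();
            }
        }
    }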

    Read the article

  • Lucene stop words not removed during searching

    - by iamrohitbanga
    I have created a Lucene index with the following analyzer. public class DocSpecAnalyzer extends Analyzer { private static CharArraySet stopSet;// = new HashSet<String>(Arrays.asList());//STOP_WORDS_SET; static { stopSet = new CharArraySet(FDConstants.stopwords, true); // uncommenting this displays all the stop words // for (String s: FDConstants.stopwords) { // System.out.println(s); // } } /** * Specifies whether deprecated acronyms should be replaced with HOST type. * See {@linkplain https://issues.apache.org/jira/browse/LUCENE-1068} */ private final boolean enableStopPositionIncrements; private final Version matchVersion; public DocSpecAnalyzer(Version matchVersion) { this.matchVersion = matchVersion; enableStopPositionIncrements = StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion); } public TokenStream tokenStream(String fieldName, Reader reader) { StandardTokenizer tokenStream = new StandardTokenizer(matchVersion, reader); tokenStream.setMaxTokenLength(DEFAULT_MAX_TOKEN_LENGTH); TokenStream result = new StandardFilter(tokenStream); result = new LowerCaseFilter(result); result = new StopFilter(enableStopPositionIncrements, result, stopSet); result = new PorterStemFilter(result); return result; } /** Default maximum allowed token length */ public static final int DEFAULT_MAX_TOKEN_LENGTH = 255; } Now when I search for documents with a query containing stop words, I get hits for the stop words as well. While posting this problem, I found the cause: it is because http://lucene.apache.org/java/2_9_2/api/contrib-misc/org/apache/lucene/queryParser/analyzing/AnalyzingQueryParser.html does not handle stop words. Is there a substitute? Update: I forgot to mention that I need to do a fuzzy search; that is why I am using an AnalyzingQueryParser.

    Read the article

  • SQL Query slow in .NET application but instantaneous in SQL Server Management Studio

    - by user203882
    Here is the SQL: SELECT tal.TrustAccountValue FROM TrustAccountLog AS tal INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID INNER JOIN Users usr ON usr.UserID = ta.UserID WHERE usr.UserID = 70402 AND ta.TrustAccountID = 117249 AND tal.trustaccountlogid = ( SELECT MAX (tal.trustaccountlogid) FROM TrustAccountLog AS tal INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID INNER JOIN Users usr ON usr.UserID = ta.UserID WHERE usr.UserID = 70402 AND ta.TrustAccountID = 117249 AND tal.TrustAccountLogDate < '3/1/2010 12:00:00 AM' ) Basically there are a Users table, a TrustAccount table and a TrustAccountLog table. Users: Contains users and their details TrustAccount: A User can have multiple TrustAccounts. TrustAccountLog: Contains an audit of all TrustAccount "movements". A TrustAccount is associated with multiple TrustAccountLog entries. Now this query executes in milliseconds inside SQL Server Management Studio, but for some strange reason it takes forever in my C# app and even times out (120 s) sometimes. Here is the code in a nutshell. It gets called multiple times in a loop and the statement gets prepared. cmd.CommandTimeout = Configuration.DBTimeout; cmd.CommandText = "SELECT tal.TrustAccountValue FROM TrustAccountLog AS tal INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID INNER JOIN Users usr ON usr.UserID = ta.UserID WHERE usr.UserID = @UserID1 AND ta.TrustAccountID = @TrustAccountID1 AND tal.trustaccountlogid = (SELECT MAX (tal.trustaccountlogid) FROM TrustAccountLog AS tal INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID INNER JOIN Users usr ON usr.UserID = ta.UserID WHERE usr.UserID = @UserID2 AND ta.TrustAccountID = @TrustAccountID2 AND tal.TrustAccountLogDate < @TrustAccountLogDate2 ))"; cmd.Parameters.Add("@TrustAccountID1", SqlDbType.Int).Value = trustAccountId; cmd.Parameters.Add("@UserID1", SqlDbType.Int).Value = userId; cmd.Parameters.Add("@TrustAccountID2", SqlDbType.Int).Value = trustAccountId; cmd.Parameters.Add("@UserID2", SqlDbType.Int).Value = userId; cmd.Parameters.Add("@TrustAccountLogDate2", SqlDbType.DateTime).Value = TrustAccountLogDate; // And then... reader = cmd.ExecuteReader(); if (reader.Read()) { double value = (double)reader.GetValue(0); if (System.Double.IsNaN(value)) return 0; else return value; } else return 0;
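    A frequent explanation for this exact symptom is that SSMS runs with SET ARITHABORT ON by default while ADO.NET does not, so the two sessions compile separate cached plans and the parameterized plan suffers from parameter sniffing. Offered as a hedged diagnostic rather than a fix, the sketch below aligns the session setting before running the parameterized query to test whether the plan is the problem; the wrapper name and parameter list are illustrative and mirror the snippet above:

    using System;
    using System.Data;
    using System.Data.SqlClient;

    static class TrustAccountQueries
    {
        static double GetTrustAccountValue(string connectionString, string queryString,
                                           int userId, int trustAccountId, DateTime logDate)
        {
            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // Diagnostic only: SSMS uses ARITHABORT ON by default; matching it here
                // lets both environments compile against the same kind of cached plan.
                using (SqlCommand setCmd = new SqlCommand("SET ARITHABORT ON;", connection))
                {
                    setCmd.ExecuteNonQuery();
                }

                using (SqlCommand cmd = new SqlCommand(queryString, connection))
                {
                    cmd.Parameters.Add("@TrustAccountID1", SqlDbType.Int).Value = trustAccountId;
                    cmd.Parameters.Add("@UserID1", SqlDbType.Int).Value = userId;
                    cmd.Parameters.Add("@TrustAccountID2", SqlDbType.Int).Value = trustAccountId;
                    cmd.Parameters.Add("@UserID2", SqlDbType.Int).Value = userId;
                    cmd.Parameters.Add("@TrustAccountLogDate2", SqlDbType.DateTime).Value = logDate;

                    object result = cmd.ExecuteScalar();
                    return result == null || result == DBNull.Value ? 0 : Convert.ToDouble(result);
                }
            }
        }
    }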

    Read the article

  • Is SQLDataReader slower than using the command line utility sqlcmd?

    - by Andrew
    I was recently advocating to a colleague that we replace some C# code that uses the sqlcmd command line utility with a SqlDataReader. The old code uses: System.Diagnostics.ProcessStartInfo procStartInfo = new System.Diagnostics.ProcessStartInfo("cmd", "/c " + sqlCmd); where sqlCmd is something like "sqlcmd -S " + serverName + " -y 0 -h-1 -Q " + "\"" + "USE [" + database + "]" + ";" + txtQuery.Text + "\""; The results are then parsed using regular expressions. I argued that using a SqlDataReader would be more in line with industry practice, easier to debug and maintain, and probably faster. However, the SqlDataReader approach is at least the same speed and quite possibly slower. I believe I'm doing everything correctly with SqlDataReader. The code is: using (SqlConnection connection = new SqlConnection()) { try { SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(connectionString); connection.ConnectionString = builder.ToString(); SqlCommand command = new SqlCommand(queryString, connection); connection.Open(); SqlDataReader reader = command.ExecuteReader(); // do stuff w/ reader reader.Close(); } catch (Exception ex) { outputMessage += (ex.Message); } } I've used System.Diagnostics.Stopwatch to time both approaches and the command line utility (called from C# code) does seem faster (20-40%?). The SqlDataReader has the neat feature that when the same code is called again, it's lightning fast, but for this application we don't anticipate that. I have already done some research on this problem. I note that the command line utility sqlcmd uses OLE DB technology to hit the database. Is that faster than ADO.NET? I'm really surprised, especially since the command line utility approach involves starting up a process. I really thought it would be slower. Any thoughts? Thanks, Dave
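    One thing that skews this kind of comparison is that the first ADO.NET call in a process also pays for creating the pooled connection and compiling the plan, costs a loop of sqlcmd invocations spreads differently. A fairer measurement opens the connection outside the stopwatch and times only the query over many iterations; the sketch below shows one such timing loop (the class and method names are illustrative):

    using System;
    using System.Data.SqlClient;
    using System.Diagnostics;

    static class ReaderBenchmark
    {
        // Sketch of a fairer timing loop: open (and so warm) the pooled connection
        // outside the stopwatch, then time only the query over many iterations so
        // one-off costs (pool creation, plan compilation) do not dominate.
        static TimeSpan TimeQuery(string connectionString, string queryString, int iterations)
        {
            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                connection.Open();                        // warm-up, not timed
                Stopwatch watch = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++)
                {
                    using (SqlCommand command = new SqlCommand(queryString, connection))
                    using (SqlDataReader reader = command.ExecuteReader())
                    {
                        while (reader.Read()) { /* consume rows */ }
                    }
                }
                watch.Stop();
                return watch.Elapsed;
            }
        }
    }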

    Read the article

  • Find and Replace RegEx with wildcard search and addition of value

    - by fraXis
    The below code is from my other questions that I have asked here on SO. Everyone has been so helpful and I almost have a grasp of RegEx, but I ran into another hurdle. This is what I basically need to do in a nutshell. I need to take this line that is in a text file that I load into my content variable: X17.8Y-1.Z0.1G0H1E1 I need to do a wildcard search for the X value, Y value, Z value, and H value. When I am done, I need this written back to my text file (I know how to create the text file so that is not the problem). X17.8Y-1.G54G0T2 G43Z0.1H1M08 I have code that the kind users here have given me, except that I need to create the T value at the end of the first line, using the H value incremented by 1. For example: X17.8Y-1.Z0.1G0H5E1 would translate as: X17.8Y-1.G54G0T6 G43Z0.1H5M08 The T value is 6 because the H value is 5. I have code that does everything else (it runs two RegEx replacements, separating the line of code into two new lines and adding some new G values). But I don't know how to add the T value to the end of the first line as the H value incremented by 1. Here is my code: StreamReader reader = new StreamReader(fDialog.FileName.ToString()); string content = reader.ReadToEnd(); reader.Close(); content = Regex.Replace(content, @"X[-\d.]+Y[-\d.]+", "$0G54G0"); content = Regex.Replace(content, @"(Z(?:\d*\.)?\d+)[^H]*G0(H(?:\d*\.)?\d+)\w*", "\nG43$1$2M08"); //This must be created on a new line This code works great at taking: X17.8Y-1.Z0.1G0H5E1 and turning it into: X17.8Y-1.G54G0 G43Z0.1H5M08 but I need it turned into this: X17.8Y-1.G54G0T6 G43Z0.1H5M08 (notice the T value is added to the first line and is the H value + 1, i.e. T = H + 1). Can someone please modify my RegEx statement so I can do this automatically? I tried to combine my two RegEx statements into one line but I failed miserably.

    Read the article

  • FTP .NET get directory size

    - by Xaver
    Hi, I am writing a method that must determine the size of a specified directory. I get a response from the server that contains flags, the file name, the size and other info, but the format of the answer differs between FTP servers. How can I know the format of the answer? unsigned long long GetFtpDirSize(String^ ftpDir) { unsigned long long size = 0; int j = 0; StringBuilder^ result = gcnew StringBuilder(); StreamReader^ reader; FtpWebRequest^ reqFTP; reqFTP = (FtpWebRequest^)FtpWebRequest::Create(gcnew Uri(ftpDir)); reqFTP->UseBinary = true; reqFTP->Credentials = gcnew NetworkCredential("anonymous", "123"); reqFTP->Method = WebRequestMethods::Ftp::ListDirectoryDetails; reqFTP->KeepAlive = false; reqFTP->UsePassive = false; try { WebResponse^ resp = reqFTP->GetResponse(); Encoding^ code; code = Encoding::GetEncoding(1251); reader = gcnew StreamReader(resp->GetResponseStream(), code); String^ line = reader->ReadToEnd(); array<Char>^delimiters = gcnew array<Char>{ '\r', '\n' }; array<Char>^delimiters2 = gcnew array<Char>{ ' ' }; array<String^>^words = line->Split(delimiters, StringSplitOptions::RemoveEmptyEntries); array<String^>^DetPr; System::Collections::IEnumerator^ myEnum = words->GetEnumerator(); while ( myEnum->MoveNext() ) { String^ word = safe_cast<String^>(myEnum->Current); DetPr = word->Split(delimiters2); } }
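    Since the detailed LIST output is server-specific, one alternative is to avoid parsing it altogether: ask for the plain name list (NLST) and then issue a SIZE request per entry via WebRequestMethods.Ftp.GetFileSize. A sketch in C# (the same .NET classes the C++/CLI code above uses); FtpSizer is an illustrative name, credentials mirror the snippet above, and subdirectory recursion is left out:

    using System.Collections.Generic;
    using System.IO;
    using System.Net;

    static class FtpSizer
    {
        // Sketch: sum file sizes by asking for the plain name list (NLST) and then
        // issuing a SIZE request per entry, instead of parsing the server-specific
        // LIST details format.
        static long GetFtpDirSize(string ftpDir)
        {
            var names = new List<string>();
            var listRequest = (FtpWebRequest)WebRequest.Create(ftpDir);
            listRequest.Method = WebRequestMethods.Ftp.ListDirectory;        // NLST
            listRequest.Credentials = new NetworkCredential("anonymous", "123");
            using (var response = (FtpWebResponse)listRequest.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                    names.Add(line);
            }

            long total = 0;
            foreach (string name in names)
            {
                var sizeRequest = (FtpWebRequest)WebRequest.Create(ftpDir.TrimEnd('/') + "/" + name);
                sizeRequest.Method = WebRequestMethods.Ftp.GetFileSize;      // SIZE
                sizeRequest.Credentials = new NetworkCredential("anonymous", "123");
                try
                {
                    using (var response = (FtpWebResponse)sizeRequest.GetResponse())
                        total += response.ContentLength;                     // size in bytes
                }
                catch (WebException)
                {
                    // SIZE usually fails for subdirectories; skip them (or recurse).
                }
            }
            return total;
        }
    }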

    Read the article

< Previous Page | 50 51 52 53 54 55 56 57 58 59 60 61  | Next Page >