Search Results

Search found 12603 results on 505 pages for 'shadow copy'.


  • How to auto-mount a copied encrypted home

    - by LedZ
    How can I auto-mount and use my encrypted home that I copied to another partition on the same hard disk? I'm running Ubuntu 11.10. My encrypted home is on sda1, where I have two users: userA and userB. Another partition, sda3, holds some other data. sda1 is formatted as ext4, sda3 as ext3. I did the following:

    1. Logged out of the GUI (GNOME) and switched (using Ctrl+Alt+F1) to a console, logged in there, and became root (using sudo -s).
    2. Created a new mount point under /mnt: mkdir /mnt/tmp
    3. Mounted /dev/sda3 on that mount point: mount /dev/sda3 /mnt/tmp
    4. Copied my encrypted /home to /mnt/tmp using rsync: rsync -acvxASXH --progress --stats /home/ /mnt/tmp/

    After the copy procedure I looked at my "new home" in /mnt/tmp and found the following three folders: userA, userB, .ecryptfs. The structure of /dev/sda3 mounted on /mnt/tmp looks like this (userB inside .ecryptfs not listed):

    ```
    userA
    userB
    .ecryptfs
    └── userA
        ├── auto-mount
        ├── auto-umount
        ├── Private.mnt
        ├── Private.sig
        ├── wrapped-passphrase
        ├── .wrapped-passphrase.recorded
        └── .Private
            ├── (encrypted file_1)
            ├── (encrypted file_2)
            └── (encrypted file_n)
    ```

    Now I would like this copy of the original home directory to behave exactly like the original: it should be auto-mounted at reboot, give me access to my unencrypted files after login, and leave all my files encrypted again after logout. Any suggestions?
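
    One hedged sketch of the mount side of this (an untested assumption, not a confirmed answer): if the rsync copy is complete, the per-user ecryptfs layer is normally mounted at login by pam_ecryptfs, so the remaining step would be making sda3 the new /home via /etc/fstab:

    ```sh
    # /etc/fstab - assumed line; device and fs type taken from the question
    /dev/sda3  /home  ext3  defaults  0  2

    # test without rebooting (with the old /home unmounted or empty)
    sudo mount -a
    ```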

  • Pasting from vim in terminal to Google Docs (Firefox + Vimperator) - need to understand

    - by LIttle Ancient Forest Kami
    I had some trouble copy-pasting text from vim in a terminal into a Google Docs (aka Drive) document (hereafter GDd) in Firefox (with Vimperator). Notes:

    - I have a file opened in Vim 7.2 in a terminal; :version displays both +clipboard and +xterm-clipboard.
    - I'm on Ubuntu 10.04 LTS, so I don't think this is Unity-related.
    - I want to use Vim, not GVim, nor gedit...
    - I'm an avid fan of mouseless navigation, so a solution involving the mouse is not what I wanted.

    I have the solution, but I need understanding. What I tried and where it got me:

    1. Yanking the whole file's text via ggvGy allows me to:
       - paste it via the mouse middle button, NOT with Ctrl+v or Shift+Insert: here (in the text area for entering question text) and in gedit, but NOT in GDd where I want it pasted, even if I switch Vimperator to pass-through mode with Insert;
       - it does NOT show in xclip after xclip -o;
       - from gedit, I can copy-paste the text into GDd (Vimperator's pass-through mode not required).
    2. :%! !xclip -i (or :first, last) reports the whole file (all lines, to be precise) as filtered, though the shell returns 1.
    3. xclip -o returns nothing (is empty) or returns the value previously copied with 2. No surprise, but then I can't paste at all, not only into GDd but also into gedit or here.
    4. Setting the clipboard to unnamed (:set clipboard=unnamed) doesn't help.
    5. Using "+y or "*y on the whole file's text actually does the trick.

    So, the question (it's actually three; say "split" and I will):

    - Why does the middle mouse button paste different things than Ctrl+v, and how can I know what will be pasted with each?
    - Why does plain yanking (without registers) work with the mouse but not with the keyboard / xclip?
    - Why didn't the unnamed register help? After setting it, shouldn't the unnamed and * registers be the same?
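
    For context, a hedged sketch (an assumption about the setup above, not part of the original question): X11 keeps two separate selections. PRIMARY is filled by plain selection/yank and pasted with the middle button; CLIPBOARD is what Ctrl+v and xclip's default selection use. In vim, "* maps to PRIMARY and "+ to CLIPBOARD. A hypothetical .vimrc sketch:

    ```vim
    " Sketch: send yanks to the CLIPBOARD selection so Ctrl+v pastes them.
    " 'unnamedplus' needs Vim 7.3.74+; on Vim 7.2 use the explicit "+y mapping.
    if has('unnamedplus')
      set clipboard=unnamedplus
    endif

    " Explicit visual-mode yank to CLIPBOARD, works on Vim 7.2:
    vnoremap <leader>y "+y
    ```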

  • How to install Ubuntu on a fresh hard drive

    - by Herman Wiegman
    I attempted to install Ubuntu from a USB stick on my Intel 4 3 GHz computer with an 80 GB HDD. The installer was doing well, then it said something to the effect of "errors on the source USB, or the target HDD" and recommended downloading the installer again. I suspected my HDD was going bad, so I figured I would investigate. What I found was a partially formatted 80 GB HDD. I repartitioned it via a different computer. Now a fresh copy of the Ubuntu USB installer is not able to move past the start-up screen (it freezes). I was able to purchase a new, clean HDD, but the fresh copy of the installer still locks up after the initial opening screen (after about two screens' worth of installation steps). Does this sound like an HDD NTHS issue or a CPU/hardware/memory issue? Or should I move to a CD image rather than my USB stick? Now my computer is stuck... no OS, and no way to go back to Windows (upgrade OS CD only). Any insight would be greatly appreciated. Stuck in Schenectady, Herman Wiegman

  • StyleCop Custom Rules

    - by Aligned
    There are several blogs on how to do this (http://scottwhite.blogspot.com/2008/11/creating-custom-stylecop-rules-in-c.html, etc). I've found a few useful things to point out.

    Debugging is difficult, but here are the steps (thanks to Tintin's answer). "One way:

    1. Delete your custom rules.
    2. Open Visual Studio (for dev), open your custom rule solution.
    3. Build & deploy the custom rules (a post-build action to copy the rules into the StyleCop folder is handy).
    4. Open Visual Studio (for test).
    5. Use VS (dev) and Attach to Process on devenv.exe (the test VS instance); set breakpoints in the rules you want to debug.
    6. Use VS (test), right-click on the project, and Run StyleCop.
    7. Debug."

    It worked once; now I'm having problems getting it to work again. I also get the message "Cannot evaluate expression because the code of the current method is optimized." when I try to look at properties.

    To look at the source code of the StyleCop.CSharp.Rules.dll that comes with the install, I used JustDecompile from Telerik.

    Create one XML file and name it the same as the one .cs file (CodingGuidelineRules.cs and CodingGuidelineRules.xml).

    Deploy:

    1. Build in Visual Studio.
    2. Close Visual Studio (StyleCop is running, so you can't overwrite your DLL without closing).
    3. Copy the DLL from the bin folder to C:\Program Files (x86)\StyleCop 4.7\
    4. Open the settings file or re-open Visual Studio.
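
    For reference, a minimal custom-rule skeleton (a sketch from memory of the StyleCop 4.7 SDK; the class name and rule name here are made up, and the exact API should be checked against the decompiled rules DLL mentioned above):

    ```csharp
    using StyleCop;
    using StyleCop.CSharp;

    [SourceAnalyzer(typeof(CsParser))]
    public class CodingGuidelineRules : SourceAnalyzer
    {
        public override void AnalyzeDocument(CodeDocument document)
        {
            var csDocument = (CsDocument)document;
            if (csDocument.RootElement != null && !csDocument.RootElement.Generated)
            {
                // Walk every element and flag violations; "MyFirstRule" must
                // also be declared in the matching CodingGuidelineRules.xml.
                csDocument.WalkDocument((element, parent, context) =>
                {
                    if (element.ElementType == ElementType.Class &&
                        !char.IsUpper(element.Declaration.Name[0]))
                    {
                        AddViolation(element, "MyFirstRule", element.Declaration.Name);
                    }
                    return true;
                }, null);
            }
        }
    }
    ```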

  • Error -12 hibernation image. Not enough free memory (sometimes)

    - by user99306
    I am having a problem with hibernation in Ubuntu 12.10; it had worked fine in 12.04. When I try to hibernate, it sometimes appears to start hibernating, then throws up an error and returns me to the desktop. The error I get is this:

    ```
    PM: Creating hibernation image:
    PM: Need to copy 375021 pages
    PM: Normal pages needed: 117957 + 1024, available pages: 110205
    PM: Not enough free memory
    PM: Error -12 creating hibernation image
    ```

    Now I understand what the error means, but it doesn't make sense. My swap file is 5 GB and is seldom ever used, as I have 4 GB of RAM. I know it is recommended to have 1.5 times more swap than RAM etc., but space doesn't seem to be the problem, despite the error. For example, I rarely use more than about 25%-30% of my RAM, yet I still have the problem above. Moreover, on a fresh boot and login, with no programs open and only about 12% of RAM in use, I can get the above error, yet at other times I can hibernate while using 25% of my RAM. Also, if I keep trying to hibernate, it eventually succeeds after throwing up the above error four or five times. A successful hibernation looks like this:

    ```
    PM: Creating hibernation image:
    PM: Need to copy 295511 pages
    PM: Normal pages needed: 95534 + 1024, available pages: 132627
    ```

    Is there some setting I need to tweak, or something I need to do before hibernating, to avoid this problem? The question could be better put as: is there some way of safely flushing the buffers and the cache before hibernation, other than attempting to hibernate several times until it succeeds? Thanks in advance.
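
    A hedged sketch of the flushing idea (these are standard kernel interfaces, though whether freeing the page cache releases enough pages for this particular failure is an assumption):

    ```sh
    # Write dirty data out, then drop the page cache, dentries and inodes
    sync
    echo 3 | sudo tee /proc/sys/vm/drop_caches

    # then trigger hibernation, e.g. via pm-utils:
    sudo pm-hibernate
    ```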

  • OpenGL - Rendering from part of an index and vertex array depending on an element count

    - by user1423893
    I'm currently drawing my shapes as lines by using a VAO and then assigning the dynamic vertices and indices each frame:

    ```cpp
    // Bind VAO
    glBindVertexArray(m_vao);

    // Update the vertex buffer with the new data (copy data into the vertex buffer object)
    glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(VertexPosition), m_vertices.data(), GL_DYNAMIC_DRAW);

    // Update the index buffer with the new data (copy data into the index buffer object)
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, numIndices * sizeof(unsigned short), indices.data(), GL_DYNAMIC_DRAW);

    glDrawElements(GL_LINES, numIndices, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));

    // Unbind VAO
    glBindVertexArray(0);
    ```

    What I would like to do is draw the lines using only part of the data stored in the index and vertex buffer objects. The vertex buffer has its vertices set from an array of defined maximum size:

    ```cpp
    std::array<VertexPosition, maxVertices> m_vertices;
    ```

    The index buffer has its elements set from an array of defined maximum size:

    ```cpp
    std::array<unsigned short, maxIndices> indices = { 0 };
    ```

    A running total is kept of the number of vertices and indices needed for each draw call: numVertices and numIndices. Can I not specify that the buffer data contain the entire array and only read from part of it when drawing? For example, using the vertex buffer object:

    ```cpp
    glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(VertexPosition), m_vertices.data(), GL_DYNAMIC_DRAW);
    ```

    Here m_vertices.data() is the entire array being stored, and numVertices * sizeof(VertexPosition) is the amount of data to read from the entire array. Is this not the correct way to approach this? I do not wish to use std::vector if possible.
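
    A sketch of one common pattern for this (standard OpenGL calls; the buffer and variable names are assumed from the snippet above): allocate the buffers once at maximum size, then upload only the live prefix each frame and let the draw call's count limit what is read.

    ```cpp
    // One-time setup (with the relevant buffers bound): allocate full-size
    // storage with no initial data.
    glBufferData(GL_ARRAY_BUFFER, maxVertices * sizeof(VertexPosition), nullptr, GL_DYNAMIC_DRAW);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, maxIndices * sizeof(unsigned short), nullptr, GL_DYNAMIC_DRAW);

    // Per frame: overwrite just the used prefix of each buffer...
    glBufferSubData(GL_ARRAY_BUFFER, 0, numVertices * sizeof(VertexPosition), m_vertices.data());
    glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, numIndices * sizeof(unsigned short), indices.data());

    // ...and draw only numIndices elements; the rest of the buffer is ignored.
    glDrawElements(GL_LINES, numIndices, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));
    ```

    This avoids reallocating the buffer storage every frame, which is what the per-frame glBufferData calls above do.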

  • I'm a Subversion geek; why should I consider or not consider Mercurial, Git, or any other DRCS?

    - by Pierre 303
    I have tried to understand the benefits of DRCS. I must admit I still don't get it. Here are my current beliefs; I'm ready to have them destroyed thanks to your expertise. I know I'm probably resisting change; I just want to evaluate how much that change will cost me.

    - Merging hell can be solved just by applying good practices such as continuous integration.
    - Keeping a private branch for a few days is not a good practice when you are in a self-managing team with real collaboration. I use branching only in very rare cases, and I keep a branch for every major version, in which I fix bugs merged from the trunk.
    - I see the value of committing offline and then pushing online, but continuous integration can help with this too.
    - I work on very large projects, and I have never noticed Subversion being slow, even when the server is 5000 km away on the internet and my connection is small (less than 1024D/128U).
    - Hard disk space is cheap, so having a copy of the source code locally doesn't look like a problem to me. I already have a full copy of the last version on my disk. I don't understand the "distributed" thing there (maybe THIS IS the key to my understanding?).
    - I'm not new in the industry, and judging by my difficulty in understanding, I don't think DRCSs are easier to understand than Subversion-like systems. In fact, I don't understand...

    Doctor, give me your diagnosis.

  • How to architect this to make it unit testable

    - by SOfanatic
    I'm currently working on a project where I'm receiving an object via a web service (WSDL). The overall process is the following: receive the object, add/delete/update parts (or all) of it, and return the object with the changes made. The thing is that sometimes these changes are complicated and there is some logic involved (other databases, other web services, etc.), so to facilitate this I'm creating a custom object that mimics the original one but has some enhanced functionality to make some things easier. So I'm trying to have this process: receive the original object, convert/copy it to the custom object, add/delete/update, convert/copy it back to the original object, and return the original object. Example:

    ```csharp
    public class Row
    {
        public List<Field> Fields { get; set; }
        public string RowId { get; set; }

        public Row()
        {
            this.Fields = new List<Field>();
        }
    }

    public class Field
    {
        public string Number { get; set; }
        public string Value { get; set; }
    }
    ```

    So for example, one of the "actions" to perform on this would be to find all Fields in a Row whose Value equals something, and update them with some other value. I have a CustomRow class that represents the Row class; how can I make this class unit testable? Do I have to create an interface ICustomRow to mock it in the unit test? If one of the actions is to sum all of the Values in the Fields that have a Number equal to 10, like the sample function below, how can I design the custom class to facilitate unit tests?

    ```csharp
    public int Sum(FieldNumber number)
    {
        return row.Fields.Where(x => x.FieldNumber.Equals(number)).Sum(x => x.FieldValue);
    }
    ```

    Am I approaching this the wrong way?
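
    One hedged way to picture the test seam (the CustomRow and FieldNumber names follow the question, but their constructors and the Sum semantics here are assumptions; NUnit is assumed as the framework):

    ```csharp
    // Sketch of a state-based unit test: build a CustomRow in memory,
    // exercise Sum, and assert on the result. No interface or mock is
    // needed if CustomRow can be constructed without the web service.
    [Test]
    public void Sum_AddsValuesOfFieldsWithMatchingNumber()
    {
        var row = new CustomRow();
        row.Fields.Add(new Field { Number = "10", Value = "2" });
        row.Fields.Add(new Field { Number = "10", Value = "3" });
        row.Fields.Add(new Field { Number = "99", Value = "7" });  // ignored

        Assert.AreEqual(5, row.Sum(new FieldNumber("10")));
    }
    ```

    An ICustomRow interface only becomes necessary when some other class needs to be tested in isolation from CustomRow; for testing CustomRow's own logic, plain construction as above is usually enough.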

  • How to move Ubuntu 12.04 to another drive

    - by Maksim
    How can I move my Ubuntu install to another drive? I know about Clonezilla, but the problem is that the destination drive is smaller than the source one. GParted can't copy-paste a partition if the destination isn't after the last partition. I tried dpkg --selected-packages and apt-clone. The first one just didn't install all my packages and removed existing ones, so now I don't have full Unity and not all my packages. The second one just fails on configuring a package. Before that, though, I did copy my /etc to the new system. My partition tables:

    Destination (gpt):

    ```
    1  1049kB  106MB   105MB   fat32  EFI System  ???????????
    2  106MB   12,1GB  12,0GB  ext4
    3  12,1GB  66,3GB  54,2GB  ext4
    ```

    Source (msdos):

    ```
    1  1049kB  12,0GB  12,0GB  primary  ext4  ???????????
    2  12,0GB  492GB   480GB   primary  ext4
    3  492GB   500GB   8107MB  primary  linux-swap(v1)
    ```

    GPT isn't working with the Ubuntu version that uses GRUB 1.99. I don't know why, but my laptop can't boot any device with UEFI (just a black screen), and Ubuntu detects it on a fresh install.
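
    A hedged sketch of the file-level copy approach, which sidesteps the smaller-destination problem (mount points and the device name are assumptions; run from a live session with both drives mounted):

    ```sh
    # Copy the root filesystem, preserving permissions, ACLs and xattrs,
    # while skipping pseudo-filesystems (assumed mount points):
    sudo rsync -aAXH /mnt/source/ /mnt/target/ \
        --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"}

    # Then fix /etc/fstab on the target with the new UUIDs and reinstall
    # the bootloader on the destination disk:
    sudo blkid                                      # find the new UUIDs
    sudo chroot /mnt/target grub-install /dev/sdX   # /dev/sdX = new disk
    sudo chroot /mnt/target update-grub
    ```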

  • How To Export/Import a Website in IIS 7.x

    - by Tray Harrison
    IIS 6 had a great feature called 'Save Configuration to a File' which would allow you to easily export a website's configuration, to be later imported either on the same server or another box. This came in handy any time you wanted to duplicate a site in order to do some testing without impacting the existing application. So naturally, Microsoft decided to do away with this feature in IIS 7. The process to export/import a site is still fairly simple, though not as obvious as it was in previous versions. Here are the steps:

    1. Open a command prompt, navigate to C:\Windows\System32\inetsrv, and run the following command:

    ```
    appcmd list site /name:<sitename> /config /xml > C:\output.xml
    ```

    So if you wanted to export a website named EAC, you would run the command with /name:EAC.

    2. If you'll be setting up another copy of the site on the same server, you'll now need to edit the output.xml file before importing it. This is necessary in order to avoid conflicts such as bindings, site ID, etc. To do this, edit the XML and change the values. Go ahead and make a copy of the home directory, and rename it to whatever folder name you specified in the output (/EAC2 in this example). If you decide to change the app pool, make sure you create the new app pool as well.

    3. Once these edits have been made, we are ready to import the site. To do that, run:

    ```
    appcmd add site /in < C:\output.xml
    ```

    That's it. You should now see your site listed when opening up Inet Manager. If for some reason the site fails to start, that's probably because you forgot to create the new app pool or there is a problem with one of the other parameters you changed. Look at the System log to identify any issues like this.

  • How to factorize code in Unreal Kismet (i.e. "Material Function"s for Kismet)

    - by Georges Dupéron
    In the Unreal Development Kit, when using the Material Editor, one can factor out frequently-used groups of nodes by creating a Material Function (Content Browser → right-click → New Material Function, IIRC). When defining the behaviour of some actor in Kismet, one can easily have a dozen nodes involved. If I have many actors that share the same behaviour, then I'll copy-paste these nodes and change the variables so they point to the other actors. This leads to inconsistencies (a modification to the behaviour of an actor isn't propagated to the copy-pasted nodes), complexity (you end up with hundreds of nodes), and generally useless effort. My question is: can I create a "Kismet function", just like a Material Function?

    Note: I'd rather avoid using UnrealScript. I don't even know where to type UnrealScript, don't know where the documentation is, and more generally don't have enough time to invest in learning UnrealScript. This "Kismet function" feature must be usable by graphists (with little programming knowledge). If a (simple) script suffices to add this feature to the Kismet editor, so that one can create several "functions" without using UnrealScript, then fine, but I don't really want to have to write a script each time I want to factor out a few nodes.

    Thanks for any information!

  • How are objects modelled in a functional programming language?

    - by Giorgio
    In an answer to this question (written by Pete) there are some considerations about OOP versus FP. In particular, it is suggested that FP languages are not very suitable for modelling (persistent) objects that have an identity and a mutable state. I was wondering if this is true or, in other words, how one would model objects in a functional programming language. From my basic knowledge of Haskell I thought that one could use monads in some way, but I really do not know enough on this topic to come up with a clear answer. So, how are entities with an identity and a mutable persistent state normally modelled in a functional language?

    EDIT: Here are some further details to clarify what I have in mind. Take a typical Java application in which I can (1) read a record from a database table into a Java object, (2) modify the object in different ways, and (3) save the modified object to the database. How would this be implemented, e.g. in Haskell? I would initially read the record into a record value (defined by a data definition), perform different transformations by applying functions to this initial value (each intermediate value is a new, modified copy of the original record), and then write the final record value to the database. Is this all there is to it? How can I ensure that at each moment in time only one copy of the record is valid/accessible? One does not want different immutable values representing different snapshots of the same object to be accessible at the same time.
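
    A minimal Haskell sketch of the read-transform-write flow described above (the record type and the two database actions are hypothetical placeholders, not a real library API):

    ```haskell
    data Customer = Customer
      { customerId :: Int
      , name       :: String
      , price      :: Double
      }

    -- Pure transformations: each returns a new value; nothing is mutated.
    applyDiscount :: Double -> Customer -> Customer
    applyDiscount d c = c { price = price c * (1 - d) }

    rename :: String -> Customer -> Customer
    rename n c = c { name = n }

    -- Placeholders standing in for the real database layer (assumed to exist).
    loadCustomer :: Int -> IO Customer
    loadCustomer = undefined

    saveCustomer :: Customer -> IO ()
    saveCustomer = undefined

    update :: Int -> IO ()
    update key = do
      c <- loadCustomer key                            -- (1) read
      let c' = rename "Acme" (applyDiscount 0.1 c)     -- (2) transform
      saveCustomer c'                                  -- (3) write
    ```

    The "only one valid copy" concern is then a matter of scope rather than mutation: intermediate values exist only inside update, and identity lives in the key, not in the values.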

  • Using a CDN for CMS software (multiple sites)

    - by SmokeyPHP
    I'm currently researching ideas for the media-management side of a CMS I'm writing. I was looking at having images served from a CDN, which is fine on a single site, but I want all sites that run the CMS to make use of a CDN (which will most likely be a custom-developed one, rather than a third-party service like S3). My main question is: is a multi-site CDN a good idea? I can't think of a downside, but I have probably missed something. Obviously the sites won't share the same folder, as I envisage the requests to be css.cdnsite.com/example.com/style.css or something along those lines. Having multiple sites in the same place will obviously make it easier for us to manage, as well as being cheaper, but then I wonder if it'll be worth it...

    Long story short: how should the CMS handle user-uploaded media (separate installations)?

    1. Just keep a local copy of all assets and serve them from the same site, like in days of yore?
    2. Keep a local copy, force the site to use www., and have CDN subdomains per site?
    3. Or use a single separate CDN for all sites?

    Apologies for the length of this question; not sure if this should be multiple questions or not, as all parts are kind of related and could affect each other.
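
    A sketch of how the per-site path scheme above could be served from one host (nginx is chosen arbitrarily here; the names follow the css.cdnsite.com/example.com/style.css example and are assumptions):

    ```nginx
    server {
        # css.cdnsite.com, js.cdnsite.com, etc. all resolve to this block.
        server_name ~^(?<asset>[a-z]+)\.cdnsite\.com$;

        # First path segment is the origin site, so
        # /example.com/style.css maps to /var/cdn/css/example.com/style.css
        root /var/cdn/$asset;

        location / {
            expires 30d;                  # long cache lifetime for static assets
            add_header Cache-Control public;
        }
    }
    ```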

  • Motivation for a service layer (instead of just copying DLLs)?

    - by BornToCode
    I'm creating an application which has two different UIs, so I'm building it with a service layer, which I understood is appropriate for such a case. However, I found myself just creating web methods for every single method I have in the BL layer, so the services are basically built from methods that look like this:

    ```csharp
    return customers_bl.Get_Customer_Prices(customer_id);
    ```

    I understood that a main point of the service layer is to prevent duplication of code, so I asked myself: why not just import BL.dll (and DAL.dll) into the other UI, and whenever I make a change, re-copy the DLL files? It might not be so 'neat', but is the whole purpose of the service layer to prevent this?

    (I know something is wrong in my approach; I'm probably missing the importance of the service layer. I'd like more motivation to create another layer, especially because, as it is, I found that many of my BL functions ALREADY look like:

    ```csharp
    return customers_dal.Get_Customer_Prices(cust_id);
    ```

    which led me to ask: was it really necessary to create the BL just because several functions actually have LOGIC inside the BL?)

    So I'm looking for more motivation for creating ONE MORE layer. I'm sure it's not just for the convenience of not having to re-copy the DLLs on changes? Am I grasping it wrong? Any simple guidelines on how to design a service layer (corresponding to all the BL layer functions or not? any simple example?)? Any enlightenment on the subject?

  • Enable [command] key to register as something other than just [ctrl]?

    - by gojomo
    I'm running 10.04 LTS inside VMware Fusion on a Mac. The [command] key (aka [windows] on many keyboards) is almost always behaving as if it were [ctrl], even though I haven't done anything explicit to request that behavior. In fact, in System → Preferences → Keyboard → Layouts → Options → Alt/Win key behavior, 'default' is chosen (rather than the 'Control is mapped to Win keys' option). However, choosing other options there does not seem to change the handling of [command], at least not as tested in the System → Preferences → Keyboard Shortcuts app. (No matter what I've tried, [command]-x is always detected as [ctrl]-x in that app.) I've tried:

    - various options under System → Preferences → Keyboard → Layouts → Options → Alt/Win key behavior
    - toggling the VMware Fusion Preferences → Keyboard & Mouse → Key Mappings setup, which claims to map '[command]' to '[windows]', and restarting the VM in each position
    - the xmodmap lines suggested at https://help.ubuntu.com/community/MappingWindowsKey

    And yet, it's clear that not all Ubuntu apps are merging [ctrl] and [command], because in Terminal, [shift]-[ctrl]-c will Copy, but [shift]-[command]-c will not. If the [command]/[windows] key were recognized as anything else ('Super', 'Meta', 'Hyper'? I don't care, as long as it's not 'Control'), then I could achieve my real goal (which happens to be enabling Cmd-based cut/copy/paste in PyCharm, while leaving Ctrl-X etc. available for emacs-like bindings). I think any solution which manages to make [command]-x appear as something other than [ctrl]-x in Preferences → Keyboard Shortcuts will probably do the trick.
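
    For reference, a hedged xmodmap sketch of mapping the key to Super (keycode 133 is typical for a left Win/Command key, but that number is an assumption; check what the key actually sends with xev first):

    ```sh
    # ~/.Xmodmap: report the left Command/Win key as Super (mod4)
    #   keycode 133 = Super_L
    #   add mod4 = Super_L

    # apply it:
    xmodmap ~/.Xmodmap
    ```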

  • Setting up a LAMP VM server for Development and Testing?

    - by TdotThomas
    Info: I would like to set up a VM server on my local computer which will serve pages in exactly the same way as my current hosting (but only to me, on my local computer). I currently pay a big web-hosting company to host my website and web store, and they are doing a great job, but I would like to be able to work on my web site and its corresponding MySQL DB, HTML, and PHP code without being at risk of completely messing something up on the live servers.

    My current plan of action:

    1. Set up a VM web server with Debian, MySQL, PHP, and Apache.
    2. Copy the web store (PHP/HTML) code to the VM server.
    3. Copy my current MySQL databases from my hosting provider and install them on the VM server.
    4. Modify and test new features on the VM server.
    5. Upload the MySQL DB and HTML/PHP code back to the web host's server, where they should work as before but with the new modifications.

    Questions: I'm pretty sure I have steps one and two down correctly, but I can't for the life of me figure out how to proceed next, so here are my questions.

    - I have my /etc/hosts file set up so www.MySite.test redirects to the IP address of the local VM web server. Once I import my PHP/HTML files and MySQL file, what's the best way to work around the fact that all of my files and DBs will reference www.MySite.com?
    - I can export my MySQL DBs, but do I also have to export my MySQL users and passwords to access those DBs, or are those coded into my HTML/PHP code?
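
    A hedged sketch of the export/import round trip (database and user names below are placeholders): a dump carries tables and data, while MySQL accounts live in the separate mysql system database, so on the VM they are simply recreated; the PHP code only stores whatever credentials it uses to connect.

    ```sh
    # On the host: dump the application database (placeholder name)
    mysqldump -u root -p shopdb > shopdb.sql

    # On the VM: recreate the database and the application user
    mysql -u root -p -e "CREATE DATABASE shopdb;
      CREATE USER 'shopuser'@'localhost' IDENTIFIED BY 'devpassword';
      GRANT ALL PRIVILEGES ON shopdb.* TO 'shopuser'@'localhost';"

    mysql -u root -p shopdb < shopdb.sql
    ```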


  • Drive reporting incorrect free space

    - by Oli
    So I swapped my shiny SATA SSD for an even shinier PCI-E SSD. I run my core OS on the SSD because it's silly-fast. To migrate from the old SSD, I created a new ext4 partition and then just dd'ed the data across (sorry, I no longer know the exact command I ran), and after reinstalling GRUB, I booted onto the PCI-E SSD. At first glance everything had worked perfectly and things were running faster than ever. But then I noticed the free disk space on the new, larger drive: it was almost exactly the same as it was on the other disk... a disk half its size. So it looks as if I copied the files across incorrectly and copied some of the filesystem metadata along with them. Tools like du and Disk Usage Analyzer come back with the correct figures; things that look at the partition (and not the files) seem to think the drive is 120 GB. I've been using this drive for a week now, so it's way out of sync with the old SSD, and dumping the data and starting again isn't a job that fills me with joy. So, two questions:

    1. Is there a way to fix my filesystem so it knows what it's really on about? fsck, e2fsck, and badblocks all seem to scan it without finding a problem.
    2. If I do plug my old SSD back in, copy the data off the PCI-E drive onto it, and then copy it back onto a fresh filesystem (i.e. juggle the data around), what's the best way of doing that? I obviously want to keep all the permissions and softlinks where they are.
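
    A hedged sketch of the usual fix for this situation: dd copies the filesystem's own size metadata along with the data, so the copy still believes it is the old 120 GB filesystem; growing it to fill its partition (device name below is a placeholder) would avoid the juggling entirely.

    ```sh
    # Check the filesystem first, then grow it to fill its partition.
    # /dev/sdXn stands in for the PCI-E SSD's root partition; run from
    # a live session if the filesystem cannot be unmounted.
    sudo e2fsck -f /dev/sdXn
    sudo resize2fs /dev/sdXn
    ```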

  • Optimizing Jaro-Winkler algorithm

    - by Pentium10
    I have this code for the Jaro-Winkler algorithm, taken from this website. I need to run it 150,000 times to get the distance between differences. It takes a long time, as I run it on an Android mobile device. Can it be optimized more?

    ```java
    public class Jaro {
        /**
         * gets the similarity of the two strings using Jaro distance.
         *
         * @param string1 the first input string
         * @param string2 the second input string
         * @return a value between 0-1 of the similarity
         */
        public float getSimilarity(final String string1, final String string2) {
            // get half the length of the string rounded up - (this is the distance used for acceptable transpositions)
            final int halflen = ((Math.min(string1.length(), string2.length())) / 2)
                    + ((Math.min(string1.length(), string2.length())) % 2);

            // get common characters
            final StringBuffer common1 = getCommonCharacters(string1, string2, halflen);
            final StringBuffer common2 = getCommonCharacters(string2, string1, halflen);

            // check for zero in common
            if (common1.length() == 0 || common2.length() == 0) {
                return 0.0f;
            }

            // check for same length common strings returning 0.0f is not the same
            if (common1.length() != common2.length()) {
                return 0.0f;
            }

            // get the number of transpositions
            int transpositions = 0;
            int n = common1.length();
            for (int i = 0; i < n; i++) {
                if (common1.charAt(i) != common2.charAt(i))
                    transpositions++;
            }
            transpositions /= 2.0f;

            // calculate jaro metric
            return (common1.length() / ((float) string1.length())
                    + common2.length() / ((float) string2.length())
                    + (common1.length() - transpositions) / ((float) common1.length())) / 3.0f;
        }

        /**
         * returns a string buffer of characters from string1 within string2 if they are of a given
         * distance seperation from the position in string1.
         *
         * @param string1
         * @param string2
         * @param distanceSep
         * @return a string buffer of characters from string1 within string2 if they are of a given
         *         distance seperation from the position in string1
         */
        private static StringBuffer getCommonCharacters(final String string1, final String string2, final int distanceSep) {
            // create a return buffer of characters
            final StringBuffer returnCommons = new StringBuffer();
            // create a copy of string2 for processing
            final StringBuffer copy = new StringBuffer(string2);
            // iterate over string1
            int n = string1.length();
            int m = string2.length();
            for (int i = 0; i < n; i++) {
                final char ch = string1.charAt(i);
                // set boolean for quick loop exit if found
                boolean foundIt = false;
                // compare char with range of characters to either side
                for (int j = Math.max(0, i - distanceSep); !foundIt && j < Math.min(i + distanceSep, m - 1); j++) {
                    // check if found
                    if (copy.charAt(j) == ch) {
                        foundIt = true;
                        // append character found
                        returnCommons.append(ch);
                        // alter copied string2 for processing
                        copy.setCharAt(j, (char) 0);
                    }
                }
            }
            return returnCommons;
        }
    }
    ```

    I mention that in the whole process I make just one instance of the class, so only once: jaro = new Jaro(). If you are going to test and need examples so as not to break the script, you will find them here, in another thread for Python optimization.

  • unsigned char* buffer (FreeType2 Bitmap) to System::Drawing::Bitmap.

    - by Dennis Roche
    Hi, I'm trying to convert a FreeType2 bitmap to a System::Drawing::Bitmap in C++/CLI. FT_Bitmap has an unsigned char* buffer that contains the data to write. I have got it somewhat working when saving to disk as a *.tga, but when saving as *.bmp it renders incorrectly. I believe that the size of the byte[] is incorrect and that my data is truncated. Any hints/tips/ideas on what is going on here would be greatly appreciated. Links to articles explaining byte layout and pixel formats etc. would be helpful. Thanks!!

    C++/CLI code:

    ```cpp
    FT_Bitmap *bitmap = &face->glyph->bitmap;

    int width = (face->glyph->metrics.width / 64);
    int height = (face->glyph->metrics.height / 64);

    // must be aligned on a 32 bit boundary or 4 bytes
    int depth = 8;
    int stride = ((width * depth + 31) & ~31) >> 3;
    int bytes = (int)(stride * height);

    // as *.tga
    void *buffer = bytes ? malloc(bytes) : NULL;
    if (buffer)
    {
        memset(buffer, 0, bytes);
        for (int i = 0; i < bitmap->rows; ++i)
            memcpy((char *)buffer + (i * width), bitmap->buffer + (i * bitmap->pitch), bitmap->pitch);
        WriteTGA("Test.tga", buffer, width, height);
    }

    // as *.bmp
    array<Byte>^ values = gcnew array<Byte>(bytes);
    Marshal::Copy((IntPtr)bitmap->buffer, values, 0, bytes);

    Bitmap^ systemBitmap = gcnew Bitmap(width, height, PixelFormat::Format24bppRgb);

    // create bitmap data, lock pixels to be written.
    BitmapData^ bitmapData = systemBitmap->LockBits(Rectangle(0, 0, width, height),
                                                    ImageLockMode::WriteOnly, systemBitmap->PixelFormat);
    Marshal::Copy(values, 0, bitmapData->Scan0, bytes);
    systemBitmap->UnlockBits(bitmapData);

    systemBitmap->Save("Test.bmp");
    ```

    Reference, FT_Bitmap:

    ```cpp
    typedef struct FT_Bitmap_
    {
        int             rows;
        int             width;
        int             pitch;
        unsigned char*  buffer;
        short           num_grays;
        char            pixel_mode;
        char            palette_mode;
        void*           palette;
    } FT_Bitmap;
    ```

    Reference, WriteTGA:

    ```cpp
    bool WriteTGA(const char *filename, void *pxl, uint16 width, uint16 height)
    {
        FILE *fp = NULL;
        fopen_s(&fp, filename, "wb");
        if (fp)
        {
            TGAHeader header;
            memset(&header, 0, sizeof(TGAHeader));
            header.imageType = 3;
            header.width = width;
            header.height = height;
            header.depth = 8;
            header.descriptor = 0x20;

            fwrite(&header, sizeof(header), 1, fp);
            fwrite(pxl, sizeof(uint8) * width * height, 1, fp);
            fclose(fp);
            return true;
        }
        return false;
    }
    ```

    Update:

    ```cpp
    FT_Bitmap *bitmap = &face->glyph->bitmap;

    // stride must be aligned on a 32 bit boundary or 4 bytes
    int depth = 8;
    int stride = ((width * depth + 31) & ~31) >> 3;
    int bytes = (int)(stride * height);

    target = gcnew Bitmap(width, height, PixelFormat::Format8bppIndexed);

    // create bitmap data, lock pixels to be written.
    BitmapData^ bitmapData = target->LockBits(Rectangle(0, 0, width, height),
                                              ImageLockMode::WriteOnly, target->PixelFormat);

    array<Byte>^ values = gcnew array<Byte>(bytes);
    Marshal::Copy((IntPtr)bitmap->buffer, values, 0, bytes);
    Marshal::Copy(values, 0, bitmapData->Scan0, bytes);

    target->UnlockBits(bitmapData);
    ```

  • Postmortem debugging with WinDBG.

    - by Drazar
    I have a WCF service running on a server, and occasionally (1-2 times every month) it throws a COMException with the informative message "Unknown error (0x80005008)". When I googled this particular error I only got threads about problems creating virtual directories in IIS, and the source code has nothing to do with making a virtual directory in IIS.

    ```
    DirectoryServiceLib.LdapProvider.Directory - CreatePost - Could not create employee for 195001010000,000000000000: System.Runtime.InteropServices.COMException (0x80005008): Unknown error (0x80005008)
       at System.DirectoryServices.PropertyValueCollection.PopulateList
    ```

    I took a memory dump when I caught the exception, for further analysis in WinDBG. After switching to the right thread I executed the !CLRStack command:

    ```
    000000001b8ab6d8 000000007708671a [NDirectMethodFrameStandalone: 000000001b8ab6d8] Common.MemoryDump.MiniDumpWriteDump(IntPtr, Int32, IntPtr, MINIDUMP_TYPE, IntPtr, IntPtr, IntPtr)
    000000001b8ab680 000007ff002808d8 DomainBoundILStubClass.IL_STUB_PInvoke(IntPtr, Int32, IntPtr, MINIDUMP_TYPE, IntPtr, IntPtr, IntPtr)
    000000001b8ab780 000007ff00280812 Common.MemoryDump.CreateMiniDump(System.String)
    000000001b8ab7e0 000007ff0027b218 DirectoryServiceLib.LdapProvider.Directory.CreatePost(System.String, DirectoryServiceLib.Model.Post, DirectoryServiceLib.Model.Presumptions, Services.Common.SourceEnum, System.String)
    000000001b8ad6d8 000007fef8816869 [HelperMethodFrame: 000000001b8ad6d8]
    000000001b8ad820 000007feec2b6c6f System.DirectoryServices.PropertyValueCollection.PopulateList()
    000000001b8ad860 000007feec225f0f System.DirectoryServices.PropertyValueCollection..ctor(System.DirectoryServices.DirectoryEntry, System.String)
    000000001b8ad8a0 000007feec22d023 System.DirectoryServices.PropertyCollection.get_Item(System.String)
    000000001b8ad8f0 000007ff00274d34 Common.DirectoryEntryExtension.GetStringAttribute(System.String)
    000000001b8ad940 000007ff0027f507 DirectoryServiceLib.LdapProvider.DirectoryPost.Copy(DirectoryServiceLib.LdapProvider.DirectoryPost)
    000000001b8ad980 000007ff0027a7cf DirectoryServiceLib.LdapProvider.Directory.CreatePost(System.String, DirectoryServiceLib.Model.Post, DirectoryServiceLib.Model.Presumptions, Services.Common.SourceEnum, System.String)
    000000001b8adbe0 000007ff00279532 DirectoryServiceLib.WCFDirectory.CreatePost(System.String, DirectoryServiceLib.Model.Post, DirectoryServiceLib.Model.Presumptions, Services.Common.SourceEnum, System.String)
    000000001b8adc60 000007ff001f47bd DynamicClass.SyncInvokeCreatePost(System.Object, System.Object[], System.Object[])
    ```

    My conclusion is that it fails when the code calls System.DirectoryServices.PropertyCollection.get_Item(System.String). So after issuing !CLRStack -a I get this result:

    ```
    000000001b8ad8a0 000007feec22d023 System.DirectoryServices.PropertyCollection.get_Item(System.String)
    PARAMETERS:
        this = <no data>
        propertyName = <no data>
    LOCALS:
        <CLR reg> = 0x0000000001dcef78
        <no data>
    ```

    My very first question is: why does it display no data for the property name? I am kinda new to WinDBG. However, I executed a dumpobject on 0x0000000001dcef78:

    ```
    0:013> !do 0x0000000001dcef78
    Name: System.String
    MethodTable: 000007fef66d6960
    EEClass: 000007fef625eec8
    Size: 74(0x4a) bytes
    File: C:\Windows\Microsoft.Net\assembly\GAC_64\mscorlib\v4.0_4.0.0.0__b77a5c561934e089\mscorlib.dll
    String: personalprescriptioncode
    Fields:
                  MT    Field   Offset                 Type VT     Attr            Value Name
    000007fef66dc848  40000ed        8         System.Int32  1 instance               24 m_stringLength
    000007fef66db388  40000ee        c          System.Char  1 instance               70 m_firstChar
    000007fef66d6960  40000ef       10        System.String  0   shared           static Empty
                      >> Domain:Value  0000000000174e10:00000000019d1420 000000001a886f50:00000000019d1420 <<
    ```

    So when the source code wants to fetch the personalprescriptioncode from Active Directory (which is used as the persistence layer), it fails. Looking back at the stack, this happens during the Copy method:

    ```
    DirectoryServiceLib.LdapProvider.DirectoryPost.Copy(DirectoryServiceLib.LdapProvider.DirectoryPost)
    ```

    So looking in the source code:

    ```csharp
    DirectoryPost postInLimbo = DirectoryPostFactory.Instance()
        .GetDirectoryPost(LdapConfigReader.Instance().GetConfigValue("LimboDN"), idGenPerson.ID.UserId);
    if (postInLimbo != null)
        newPost.Copy(postInLimbo);
    ```

    This code looks for another post in OU=Limbo with the same UserId and, if it finds one, copies the attributes to the new post. In this case it does, and it fails with personalprescriptioncode. I've looked in Active Directory under OU=Limbo and the post exists there with the attribute personalprescriptioncode=31243.

    Question 1: Why does it display no data for some of the PARAMETERS and LOCALS? Is it the GC cleaning up before the memory dump was created?

    Question 2: Is there anything more I can do to get to the solution of this problem?

  • Python: Memory usage and optimization when modifying lists

    - by xApple
    The problem: I am storing a relatively large dataset in a classical Python list, and in order to process the data I must iterate over the list several times, perform some operations on the elements, and often pop an item out of the list. It seems that deleting one item from a Python list costs O(N), since Python has to copy all the items above the element at hand down one place. Furthermore, since the number of items to delete is approximately proportional to the number of elements in the list, this results in an O(N^2) algorithm. I am hoping to find a solution that is cost-effective (time- and memory-wise). I have studied what I could find on the internet and have summarized my different options below. Which one is the best candidate?

    Keeping a local index:

    ```python
    while processingdata:
        index = 0
        while index < len(somelist):
            item = somelist[index]
            dosomestuff(item)
            if somecondition(item):
                del somelist[index]
            else:
                index += 1
    ```

    This is the original solution I came up with. Not only is it not very elegant, but I am hoping there is a better way to do it that remains time- and memory-efficient.

    Walking the list backwards:

    ```python
    while processingdata:
        for i in xrange(len(somelist) - 1, -1, -1):
            item = somelist[i]
            dosomestuff(item)
            if somecondition(somelist, i):
                somelist.pop(i)
    ```

    This avoids incrementing an index variable but ultimately has the same cost as the original version. It also breaks the logic of dosomestuff(item), which wishes to process the items in the same order as they appear in the original list.

    Making a new list:

    ```python
    while processingdata:
        for i, item in enumerate(somelist):
            dosomestuff(item)
        newlist = []
        for item in somelist:
            if somecondition(item):
                newlist.append(item)
        somelist = newlist
        gc.collect()
    ```

    This is a very naive strategy for eliminating elements from a list and requires lots of memory, since an almost full copy of the list must be made.

    Using list comprehensions:

    ```python
    while processingdata:
        for i, item in enumerate(somelist):
            dosomestuff(item)
        somelist[:] = [x for x in somelist if somecondition(x)]
    ```

    This is very elegant, but under the cover it walks the whole list one more time and must copy most of the elements in it. My intuition is that this operation probably costs more than the original del statement, at least memory-wise. Keep in mind that somelist can be huge and that any solution that iterates through it only once per run will probably always win.

    Using the filter function:

    ```python
    while processingdata:
        for i, item in enumerate(somelist):
            dosomestuff(item)
        somelist = filter(lambda x: not subtle_condition(x), somelist)
    ```

    This also creates a new list occupying lots of RAM.

    Using itertools' filter function:

    ```python
    from itertools import ifilterfalse

    while processingdata:
        for item in ifilterfalse(somecondition, somelist):
            dosomestuff(item)
    ```

    This version of the filter call does not create a new list, but it will not call dosomestuff on every item, breaking the logic of the algorithm. I am including this example only for the purpose of creating an exhaustive list.

    Moving items up the list while walking:

    ```python
    while processingdata:
        index = 0
        for item in somelist:
            dosomestuff(item)
            if not somecondition(item):
                somelist[index] = item
                index += 1
        del somelist[index:]
    ```

    This is a subtle method that seems cost-effective. I think it will move each item (or the pointer to each item?) exactly once, resulting in an O(N) algorithm. Finally, I hope Python will be intelligent enough to resize the list at the end without allocating memory for a new copy of the list. Not sure though.

    Abandoning Python lists:

    ```python
    class Doubly_Linked_List:
        def __init__(self):
            self.first = None
            self.last = None
            self.n = 0

        def __len__(self):
            return self.n

        def __iter__(self):
            return DLLIter(self)

        def iterator(self):
            return self.__iter__()

        def append(self, x):
            x = DLLElement(x)
            x.next = None
            if self.last is None:
                x.prev = None
                self.last = x
                self.first = x
                self.n = 1
            else:
                x.prev = self.last
                x.prev.next = x
                self.last = x
                self.n += 1

    class DLLElement:
        def __init__(self, x):
            self.next = None
            self.data = x
            self.prev = None

    class DLLIter:
        etc...
    ```

    This type of object resembles a Python list in a limited way. However, deletion of an element is guaranteed O(1). I would not like to go here, since this would require massive amounts of code refactoring almost everywhere.

  • Multi-threaded .NET application blocks during file I/O when protected by Themida

    - by Erik Jensen
    As the title says, I have a .NET application GUI that uses multiple threads to perform separate file I/O, and I notice that the threads occasionally block when the application is protected by Themida. One thread is devoted to reading from a serial COM port and another is devoted to copying files. What I experience is that occasionally, when the file-copy thread encounters a network delay, it blocks the other thread that is reading from the serial port. In addition to a slow network (which can be transient), I can make the problem happen more frequently by making a PathFileExists call to a bad path, e.g.:

    ```cpp
    PathFileExists("\\\\BadPath\\file.txt");
    ```

    The COM port reading function will block during the call to ReadFile. This only happens when the application is protected by Themida. I have tried it under WinXP, Win7, and Server 2012. In a streamlined test project, if I replace the .NET application with an MFC unmanaged application but still use the same threads, I see no issue even when protected with Themida. I have contacted Oreans support and here is their response:

    "The way that a .NET application is protected is very different from a native application. To protect a .NET application, we need to hook most of the file access APIs in order to 'cheat' the .NET Framework that the application is protected. I guess that those special hooks (on CreateFile, ReadFile...) are delaying a bit the execution in your application and the problem appears. We did a test making those hooks as light as possible (with minimum code on them) but the problem still appeared in your application. The rest of the software protectors that we tried (like Enigma, Molebox...) also use a similar hooking approach, as it's the only way to make the .NET packed file work. If those hooks are not present, the .NET Framework will abort execution, as it will see that the original file was tampered with (due to all the Microsoft checks on .NET files). Those hooks are not present in a native application; that's why it should be working fine in your native application."

    Oreans support tried other software protectors such as Enigma Protector, Enigma VirtualBox, and Molebox, and all exhibit exactly the same problem. What I have found as a workaround is to move the file-copy logic (where the file-exists call is made) into a completely separate process. I have experimented with converting the thread functions from unmanaged C++ to VB.NET equivalents (PathFileExists → System.IO.File.Exists and CreateFile/ReadFile → System.IO.Ports.SerialPort.Open/Read) and still see the same serial-port read blocked when the file check or copy call is delayed. I have also tried setting the ReadFile to work asynchronously, but that had no effect. I believe I am dealing with some low-level Windows layer that, no matter the language, exhibits a block on a shared resource, and only when the application is executing under a single .NET process protected by Themida, which evidently installs hooks to allow .NET execution. At this time converting the entire application away from .NET is not an option, nor is separating out the file-copy logic into a separate task. I am wondering if anyone else has more knowledge of how a file operation can block another thread reading from a system port. I have included example applications that show the problem:

    - https://db.tt/cNMYfEIg - VB.NET
    - https://db.tt/Y2lnTqw7 - MFC

    They are Visual Studio 2010 solutions. When running the Themida-protected exe, you can see that when the FileThread counter pauses (executing the File.Exists call), the ReadThread counter also pauses. When running the non-protected Visual Studio output exe, the ReadThread counter does not pause, which is how we expect it to function. Thanks!

  • Using a VBA macro to highlight changes in Excel

    - by Zaj
    I have a spreadsheet that I send out to various locations to have information on it updated and then sent back to me. I had to add validation and lock the cells to force users to input accurate information. Then I used VBA to disable the workaround of the cut, copy, and paste functions, and additionally I inserted a VBA function to force users to open the Excel file with macros enabled. Now I'm trying to track changes so that I know what was updated when I receive the sheet back. However, every time I do this I get an error when someone saves the document, and randomly it will lock me out of the document completely. My code is pasted below. Can someone help me create VBA code to highlight changes instead of going through Excel's share/track changes option?

    ThisWorkbook (code):

    ```vba
    Option Explicit

    Const WelcomePage = "Macros"

    Private Sub Workbook_BeforeClose(Cancel As Boolean)
        Call ToggleCutCopyAndPaste(True)
        'Turn off events to prevent unwanted loops
        Application.EnableEvents = False

        'Evaluate if workbook is saved and emulate default prompts
        With ThisWorkbook
            If Not .Saved Then
                Select Case MsgBox("Do you want to save the changes you made to '" & .Name & "'?", _
                                   vbYesNoCancel + vbExclamation)
                    Case Is = vbYes
                        'Call customized save routine
                        Call CustomSave
                    Case Is = vbNo
                        'Do not save
                    Case Is = vbCancel
                        'Set up procedure to cancel close
                        Cancel = True
                End Select
            End If

            'If Cancel was clicked, turn events back on and cancel close,
            'otherwise close the workbook without saving further changes
            If Not Cancel = True Then
                .Saved = True
                Application.EnableEvents = True
                .Close savechanges:=False
            Else
                Application.EnableEvents = True
            End If
        End With
    End Sub

    Private Sub Workbook_BeforeSave(ByVal SaveAsUI As Boolean, Cancel As Boolean)
        'Turn off events to prevent unwanted loops
        Application.EnableEvents = False

        'Call customized save routine and set workbook's saved property to true
        '(To cancel regular saving)
        Call CustomSave(SaveAsUI)
        Cancel = True

        'Turn events back on and set saved property to true
        Application.EnableEvents = True
        ThisWorkbook.Saved = True
    End Sub

    Private Sub Workbook_Open()
        Call ToggleCutCopyAndPaste(False)
        'Unhide all worksheets
        Application.ScreenUpdating = False
        Call ShowAllSheets
        Application.ScreenUpdating = True
    End Sub

    Private Sub CustomSave(Optional SaveAs As Boolean)
        Dim ws As Worksheet, aWs As Worksheet, newFname As String

        'Turn off screen flashing
        Application.ScreenUpdating = False

        'Record active worksheet
        Set aWs = ActiveSheet

        'Hide all sheets
        Call HideAllSheets

        'Save workbook directly or prompt for saveas filename
        If SaveAs = True Then
            newFname = Application.GetSaveAsFilename( _
                fileFilter:="Excel Files (*.xls), *.xls")
            If Not newFname = "False" Then ThisWorkbook.SaveAs newFname
        Else
            ThisWorkbook.Save
        End If

        'Restore file to where user was
        Call ShowAllSheets
        aWs.Activate

        'Restore screen updates
        Application.ScreenUpdating = True
    End Sub

    Private Sub HideAllSheets()
        'Hide all worksheets except the macro welcome page
        Dim ws As Worksheet

        Worksheets(WelcomePage).Visible = xlSheetVisible

        For Each ws In ThisWorkbook.Worksheets
            If Not ws.Name = WelcomePage Then ws.Visible = xlSheetVeryHidden
        Next ws

        Worksheets(WelcomePage).Activate
    End Sub

    Private Sub ShowAllSheets()
        'Show all worksheets except the macro welcome page
        Dim ws As Worksheet

        For Each ws In ThisWorkbook.Worksheets
            If Not ws.Name = WelcomePage Then ws.Visible = xlSheetVisible
        Next ws

        Worksheets(WelcomePage).Visible = xlSheetVeryHidden
    End Sub

    Private Sub Workbook_Activate()
        Call ToggleCutCopyAndPaste(False)
    End Sub

    Private Sub Workbook_Deactivate()
        Call ToggleCutCopyAndPaste(True)
    End Sub
    ```

    This is in my module code:

    ```vba
    Option Explicit

    Sub ToggleCutCopyAndPaste(Allow As Boolean)
        'Activate/deactivate cut, copy, paste and pastespecial menu items
        Call EnableMenuItem(21, Allow)  ' cut
        Call EnableMenuItem(19, Allow)  ' copy
        Call EnableMenuItem(22, Allow)  ' paste
        Call EnableMenuItem(755, Allow) ' pastespecial

        'Activate/deactivate drag and drop ability
        Application.CellDragAndDrop = Allow

        'Activate/deactivate cut, copy, paste and pastespecial shortcut keys
        With Application
            Select Case Allow
                Case Is = False
                    .OnKey "^c", "CutCopyPasteDisabled"
                    .OnKey "^v", "CutCopyPasteDisabled"
                    .OnKey "^x", "CutCopyPasteDisabled"
                    .OnKey "+{DEL}", "CutCopyPasteDisabled"
                    .OnKey "^{INSERT}", "CutCopyPasteDisabled"
                Case Is = True
                    .OnKey "^c"
                    .OnKey "^v"
                    .OnKey "^x"
                    .OnKey "+{DEL}"
                    .OnKey "^{INSERT}"
            End Select
        End With
    End Sub

    Sub EnableMenuItem(ctlId As Integer, Enabled As Boolean)
        'Activate/Deactivate specific menu item
        Dim cBar As CommandBar
        Dim cBarCtrl As CommandBarControl

        For Each cBar In Application.CommandBars
            If cBar.Name <> "Clipboard" Then
                Set cBarCtrl = cBar.FindControl(ID:=ctlId, recursive:=True)
                If Not cBarCtrl Is Nothing Then cBarCtrl.Enabled = Enabled
            End If
        Next
    End Sub

    Sub CutCopyPasteDisabled()
        'Inform user that the functions have been disabled
        MsgBox "Cutting, copying and pasting have been disabled in this workbook. Please hard key in data."
    End Sub
    ```
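
    A minimal sketch of the highlight-on-change idea (an assumption about the desired behaviour: it colours any cell edited after the file is sent out; it goes in ThisWorkbook alongside the code above):

    ```vba
    Private Sub Workbook_SheetChange(ByVal Sh As Object, ByVal Target As Range)
        Dim c As Range
        Application.EnableEvents = False
        For Each c In Target.Cells
            c.Interior.Color = vbYellow   'mark edited cells for review
        Next c
        Application.EnableEvents = True
    End Sub
    ```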

  • XSL match some but not all

    - by Willb
    I have a solution from an earlier post that was kindly provided by Dimitre Novatchev:

    ```xml
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:my="my:my">
        <xsl:output method="xml" version="1.0" encoding="iso-8859-1" indent="yes"/>
        <xsl:key name="kPhysByName" match="KB_XMod_Modules" use="Physician"/>

        <xsl:template match="/">
            <result>
                <xsl:apply-templates/>
            </result>
        </xsl:template>

        <xsl:template match="/*/*/*[starts-with(name(), 'InfBy')]">
            <xsl:variable name="vCur" select="."/>
            <xsl:for-each select="document('doc2.xml')">
                <xsl:variable name="vMod" select="key('kPhysByName', $vCur)"/>
                <xsl:copy>
                    <items>
                        <item>
                            <label>
                                <xsl:value-of select="$vMod/Physician"/>
                            </label>
                            <value>
                                <xsl:value-of select="$vMod/XModID"/>
                            </value>
                        </item>
                    </items>
                </xsl:copy>
            </xsl:for-each>
        </xsl:template>
    </xsl:stylesheet>
    ```

    I now need to use additional fields in my source XML and need the existing labels kept intact, but I'm having problems getting this going:

    ```xml
    <instance>
        <NewTag>Hello</NewTag>
        <AnotherNewTag>Everyone</AnotherNewTag>
        <InfBy1>Dr Phibes</InfBy1>
        <InfBy2>Dr X</InfBy2>
        <InfBy3>Dr Chivago</InfBy3>
    </instance>
    ```

    It drops the additional labels and outputs:

    ```xml
    <result xmlns:my="my:my">
        HelloEveryone
        <items>
            <item>
                <label>Dr Phibes</label>
                <value>60</value>
            </item>
        </items>
        ...
    ```

    I've been experimenting a lot with:

    ```xml
    <xsl:otherwise>
        <xsl:copy-of select="."> </xsl:copy-of>
    </xsl:otherwise>
    ```

    but being an XSL newbie I can't seem to get this to work. I have a feeling I'm barking up the wrong tree! Does anyone have any ideas? Thanks, Will
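
    A hedged sketch of one way to keep the extra elements (an assumption about the intent; the match pattern mirrors the InfBy template above rather than using xsl:otherwise): add a template that copies through every element the InfBy rule does not handle.

    ```xml
    <!-- Sketch: copy non-InfBy elements through unchanged -->
    <xsl:template match="/*/*/*[not(starts-with(name(), 'InfBy'))]">
        <xsl:copy-of select="."/>
    </xsl:template>
    ```

    The "HelloEveryone" text in the output comes from XSLT's built-in rules, which copy text nodes when no template matches their parent; an explicit copy template like this one takes over before the built-ins run.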
