Search Results

Search found 11991 results on 480 pages for 'cedric copy'.

Page 132/480 | < Previous Page | 128 129 130 131 132 133 134 135 136 137 138 139  | Next Page >

  • getResourceAsStream returns an HttpInputStream that does not contain the entire file

    - by khue
    Hi, I have a web application with an applet that copies a file packaged with the applet to the client machine. When I deploy it to the web server and use InputStream in = getClass().getResourceAsStream("filename"); the call in.available() always returns 8192 bytes for every file I tried, which means the file ends up corrupted when it is copied to the client computer. The InputStream is of type HttpInputStream (sun.net.protocol.http.HttpUrlConnection$httpInputStream). But when I test the applet in the applet viewer, the files are copied fine, and the InputStream returned is of type BufferedInputStream, whose available() matches the file's size in bytes. I guess that when getResourceAsStream reads from the file system a BufferedInputStream is used, and over the http protocol an HttpInputStream is used. How can I copy the file completely? Is there a size limit for HttpInputStream? Thanks a lot.

    Read the article

  • Can I use Revert after aborting a Merge?

    - by John
    Hello all, just now I uploaded a modified working copy to its branch in the following steps: 1. Update 2. Commit. Then I attempted to merge the changes from the trunk into this branch. However, while editing the conflicts I realized there was so much conflicting code that I could not resolve it all today, so I gave up on the merge, and the working copy immediately got an exclamation mark. Through Check for modifications I found that many, many files had been modified or had conflicts. It seems that the merge has somehow been carried out wrongly. My question: can I return to the state before the merge simply by using Revert? Thanks a lot in advance, John

    Read the article

  • Working with VSS and ASP.NET

    - by Tyzak
    Hello, I created a project to search text files with Lucene.Net (ASP.NET / VS08). These text files are on a VSS server. I'm looking for a way to "check out" or "copy" a document (and later on the whole VSS structure with its documents) and put it in a folder on an IIS server. How can I do that? In short: copy a document from VSS to a folder on the IIS server (later, all documents in the original structure). By the way, it's important that the documents keep their original creation date. Thanks in advance.

    Read the article

  • Javascript one-liners

    - by peoro
    Often I find really cool JavaScript one-liners that you can copy and paste into your browser's address bar to get some fancy, or even useful, effects. This one, for example, lets you edit anything on the page:

        javascript:document.body.contentEditable='true'; document.designMode='on'; void 0

    What is your favorite? EDIT: I know that technically all these snippets are just JavaScript scripts that get evaluated by the browser as if they were defined in the page. I also know that many browsers have extensions that let you run JavaScript code (and that also let you store scripts somewhere, provide a good editor, and so on). However, that's not very practical: I'm not a JavaScript developer, I haven't got Firebug installed, and I can't install it everywhere I go. My idea is to collect the best "mini-scripts" that anyone can copy and paste into their browser without having to install extensions and the like.

    Read the article

  • Handling hundreds of dependencies with ant

    - by Roberto
    Hi guys, I have to refactor an Ant XML file. Basically I have one big task that checks out (using CVS) a lot of dependencies, builds them, and then copies all the jars/WSDLs generated by building them to a directory that I specify. If one dependency version changes, I have to change the name in at least three places in the XML file (CVS checkout, build, copy). What I'd like is a single place where I can specify my dependency names, without having to search and replace the dependency name throughout the file. One of the problems is that the CVS project could be /path1/path2/project with tag=v12, but the jars generated by building that single project could be several, with different names, so it seems to be a bit complicated. Do you have any idea how I can get this done?

    Read the article

  • Is it a good practice to pass struct object as parameter to a function in c++?

    - by tsubasa
    I tried the example below:

        #include <iostream>
        using namespace std;

        typedef struct point {
            int x;
            int y;
        } point;

        void cp(point p) {
            cout << p.x << endl;
            cout << p.y << endl;
        }

        int main() {
            point p1;
            p1.x = 1;
            p1.y = 2;
            cp(p1);
        }

    The result that's printed out is 1 2, which is what I expected. My question is: does the parameter p get a full copy of the object p1? If so, is this good practice? (I assume that when the struct gets big, this will create a lot of copy overhead.)
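    A minimal sketch, not taken from the question, of passing the same struct by const reference, which avoids copying it on every call:

        #include <iostream>

        struct point {
            int x;
            int y;
        };

        // Passing by const reference: no copy is made, and the function
        // can read the members but cannot modify the caller's object.
        void cp(const point& p) {
            std::cout << p.x << std::endl;
            std::cout << p.y << std::endl;
        }

        int main() {
            point p1{1, 2};
            cp(p1);   // prints 1 and 2 without copying p1
        }

    For a struct this small, passing by value is usually just as cheap; the const reference starts to pay off as the struct grows.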

    Read the article

  • Why does this script work in the current directory but fail when placed in the path?

    - by kiloseven
    I wish to replace my failing memory with a very small shell script.

        #!/bin/sh
        if ! [ –a $1.sav ]; then
            mv $1 $1.sav
            cp $1.sav $1
        fi
        nano $1

    It is intended to save the original version of a script. If the original has been preserved before, it skips the move-and-copy-back (and I use move-and-copy-back to preserve the original timestamp). This works as intended if, after I make it executable with chmod, I launch it from within the directory where I am editing, e.g. with ./safe.sh filename. However, when I move it into /usr/bin and then try to run it in a different directory (without the leading ./), it fails with:

        -bash: /usr/bin/safe.sh: /bin/sh: bad interpreter: Text file busy

    My question is: when I move this script into the path (verified by echo $PATH), why does it then fail? D'oh? Inquiring minds want to know how to make this work.

    Read the article

  • iPhone - Create non-persistent entities in core data

    - by ncohen
    Hi everyone, I would like to use entity objects but not store them... I read that I could create them like this:

        myElement = (Element *)[NSEntityDescription insertNewObjectForEntityForName:@"Element" inManagedObjectContext:managedObjectContext];

    and remove them right after that:

        [managedObjectContext deleteObject:myElement];

    Then I can use my elements:

        myElement.property1 = @"Hello";

    This works pretty well, even though I think it is probably not the most optimal way to do it... Then I try to use it in my UITableView... the problem is that the object gets released after initialization. My table becomes empty when I move it! Thanks. Edit: I've also tried to copy the element ([myElement copy]) but I get an error...

    Read the article

  • Visual Studio does not add my component (from a DLL) to the toolbox even if I reference it

    - by Fire-Dragon-DoL
    As stated in the title, I copied my DLL into the Visual Studio project and set it to "Content" and "Copy always", then added a reference to this DLL and set it to "Copy Local". I managed to create an instance of my component on a form through code, but it doesn't appear in the toolbox, which is really annoying. How can I solve this? If I link the DLL project directly to this project it works, but now I'm treating the DLL as "external", so this project is not part of the same solution as the DLL project. Thanks for any help.

    Read the article

  • Developing a sector based partition copying program?

    - by baltusaj
    Hi, I want to develop a program that copies only a partition's 'data' to another partition. I want the program to start from the first sector of the source partition and check whether each sector is used: if it is used, copy it to the destination partition; otherwise, skip it. In other words, it is like copying only the contents of one partition to another, sector by sector. Question: is there a way to check whether a particular sector on the hard disk is used or not? The programming language I am using is C++ and the underlying filesystem is NTFS. Thanks a lot.
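    One possible direction, offered as an assumption rather than something stated in the question: on an NTFS volume, Windows tracks allocation per cluster (not per sector), and the allocation bitmap can be read with the FSCTL_GET_VOLUME_BITMAP control code. A rough C++ sketch that reads the first part of that bitmap; the drive letter X: is a placeholder and error handling is minimal:

        #include <windows.h>
        #include <winioctl.h>
        #include <iostream>
        #include <vector>

        int main() {
            // Open the volume; requires administrator rights.
            HANDLE hVol = CreateFileW(L"\\\\.\\X:", GENERIC_READ,
                                      FILE_SHARE_READ | FILE_SHARE_WRITE,
                                      nullptr, OPEN_EXISTING, 0, nullptr);
            if (hVol == INVALID_HANDLE_VALUE) return 1;

            STARTING_LCN_INPUT_BUFFER input = {};
            input.StartingLcn.QuadPart = 0;   // start at the first cluster

            // Room for the VOLUME_BITMAP_BUFFER header plus a chunk of the bitmap.
            std::vector<BYTE> out(sizeof(VOLUME_BITMAP_BUFFER) + 4096);
            DWORD returned = 0;
            DeviceIoControl(hVol, FSCTL_GET_VOLUME_BITMAP,
                            &input, sizeof(input),
                            out.data(), static_cast<DWORD>(out.size()),
                            &returned, nullptr);   // may report ERROR_MORE_DATA on large volumes

            const VOLUME_BITMAP_BUFFER* bitmap =
                reinterpret_cast<const VOLUME_BITMAP_BUFFER*>(out.data());

            // Each bit represents one cluster: 1 = in use, 0 = free.
            for (LONGLONG i = 0; i < 64 && i < bitmap->BitmapSize.QuadPart; ++i) {
                bool used = (bitmap->Buffer[i / 8] >> (i % 8)) & 1;
                std::cout << "cluster " << i << (used ? ": used" : ": free") << "\n";
            }

            CloseHandle(hVol);
        }

    Mapping clusters back to sectors then only needs the cluster size, which FSCTL_GET_NTFS_VOLUME_DATA (or GetDiskFreeSpace) can provide.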

    Read the article

  • How do I get a remote tracking branch to stay up to date with remote origin in a bare Git repository?

    - by Beau Simensen
    I am trying to maintain a bare copy of a Git repository and having some issues keeping the remote tracking branches up to date. I create the remote tracking branches like this:

        git branch -t 0.1 origin/0.1

    This seems to do what I need to do for that point in time. However, if I make changes to origin and then fetch with the bare repo, things start to fall apart. My workflow looks like this:

        git fetch origin

    It looks like all of the commits come in at that point, but my local copy of 0.1 is not being updated. I can see that the changes have been brought into the repository by doing the following:

        git diff 0.1 refs/remotes/origin/0.1

    What do I need to do to get my tracking branch updated with the remote's updates? I feel like I must be missing a step or a flag somewhere.

    Read the article

  • How often do you implement the big three?

    - by Neil Butterworth
    I was just musing about the number of questions here that are either about the "big three" (copy constructor, assignment operator and destructor) or about problems caused by them not being implemented correctly, when it occurred to me that I could not remember the last time I had implemented them myself. A swift grep over my two most active projects indicates that I implement all three in only one class out of about 150. That's not to say I don't implement/declare one or more of them - obviously base classes need a virtual destructor, and a large number of my classes forbid copying using the private copy ctor & assignment op idiom. But fully implemented, there is this single lonely class, which does some reference counting. So I was wondering: am I unusual in this? How often do you implement all three of these functions? Is there any pattern to the classes where you do implement them?
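    For readers who haven't seen the idiom mentioned above, a minimal sketch in pre-C++11 style (the class name is a placeholder, not from the question): the copy constructor and assignment operator are declared private and never defined, so any attempt to copy fails to compile or link.

        class RefCounted {
        public:
            RefCounted() {}
            ~RefCounted() {}

        private:
            // Declared but intentionally never defined: copying is forbidden.
            RefCounted(const RefCounted&);
            RefCounted& operator=(const RefCounted&);
        };

    Since C++11 the same intent is normally written with '= delete' on both members.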

    Read the article

  • Solr; What does this mean?

    - by Camran
    At the end of the README.txt file located in the example directory under Solr, I find this line: "NOTE: This Solr example server references SolrCell jars outside of the server directory with statements in the solrconfig.xml. If you make a copy of this example server and wish to use the ExtractingRequestHandler (SolrCell), you will need to copy the required jars into solr/lib or update the paths to the jars in your solrconfig.xml." What does this mean? Do I have to make some adjustment before uploading Solr to my server? Also, if you know: how does solr-nightly differ from regular Solr? The tutorial mentions "solr-nightly.zip", but I can't find it in their download section.

    Read the article

  • F# powerpack and distribution

    - by rwallace
    I need arbitrary precision rational numbers, which I'm given to understand are available in the F# PowerPack. My question is about the mechanics of distribution; my program needs to be able to compile and run both on Windows/.NET and Linux/Mono at least, since I have potential users on both platforms. As I understand it, the best procedure is:
    1. Download the PowerPack .zip, not the installer.
    2. Copy the DLL into my program directory.
    3. Copy the accompanying license file into my program directory, to make sure everything is above board.
    4. Declare references and go ahead and use the functions I need.
    5. Ship the above files along with my source and binary; since the DLL uses byte code, it will work fine on any platform.
    Is this the correct procedure? Am I missing anything?

    Read the article

  • Unexpected cross threading issue

    - by haughtonomous
    I'm trying to do something very simple in principle, but I keep getting a cross-threading exception, which has me stumped because I'm not setting out to use multiple threads. I have a Windows Forms application. It launches another Windows Forms application (using the System.Diagnostics.Process class) and catches the Exited event when that application is closed. My event handler then tries to copy text from the clipboard to a control on the currently displayed form. At this point a cross-threading exception is thrown. I assume the problem is that the event from the closing application arrives on another thread (I'm outside my comfort zone here, so bear with me), so the question boils down to "How do I prevent this exception?" I'm somewhat constrained to copying from the clipboard, but I could launch the other application a different way if that would solve the problem.

    Read the article

  • Using the same modules in multiple projects

    - by Andreas Vinther
    I'm using Visual Studio 2010 and coding in VB.NET. I've collected all the modules I've written and intend to reuse and placed them in a separate folder. When I add a module from that folder to any given project, Visual Studio takes a copy of the module and places it in the project's source code folder, instead of referencing the module in the folder containing all the other modules. Is it possible to include a module in my project but leave it in the folder with all the other modules, so that when I improve a module, the change affects all the projects that use/reference it, instead of me having to manually copy the new module to all those projects? Right now I have multiple instances of the exact same module that I need to update manually whenever I improve the code or add functionality.

    Read the article

  • Access EF classes from a Class Library - exactly how do I configure/test the connection string in the

    - by Greg
    Hi, I'm getting very confused about how to call my EF classes in a Class Library from the client project I have. Things worked fine when they were in the same project. Now I'm getting errors such as "Unable to load the specified metadata resource". I've seen various ideas/suggestions on how to fix the connection string (e.g. create an App.config in your client project and copy the connection string config from your class library, something about changing the connection settings to copy to output, etc.). QUESTION - Can someone provide a solid way to get EF class access from a separate project working? (i.e. how to get the correct connection information to the client somehow) Thanks.

    Read the article

  • How do I prevent anyone from stealing my shared_ptr?

    - by Kyle
    So, I use boost::shared_ptr for all the various reference-counting benefits it provides -- reference counting for starters, obviously, but also the ability to copy, assign, and therefore store in STL containers. The problem is, if I pass it to just one "malicious" function or object, that object can save the ptr, and then I'll never be able to deallocate it without the foreign function or object nicely relinquishing its ownership. Ultimately, I try to keep object ownership explicit. I accomplish this by having the owner keep the only shared_ptr to the object, while "guest" objects only store weak_ptrs to it. I really don't want the "shared" part of shared_ptr, but I'm required to use shared_ptr in order to hand out weak_ptrs. I want to use scoped_ptr, but it's extremely limited since you can't copy it: you can't store it in a container, you can't lend out weak_ptrs from it, and you can't transfer ownership to a new manager. What's the solution?
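    A minimal sketch of the ownership pattern described above, using std::shared_ptr for brevity (the question uses boost) and placeholder class names: the owner holds the only shared_ptr, guests hold weak_ptrs and must lock() for each use.

        #include <iostream>
        #include <memory>

        struct Texture {
            void draw() const { std::cout << "drawing\n"; }
        };

        // A "guest": it can use the Texture while the owner keeps it alive,
        // but it cannot extend the object's lifetime on its own.
        struct Renderer {
            std::weak_ptr<Texture> tex;

            void render() const {
                if (std::shared_ptr<Texture> t = tex.lock()) {  // temporary ownership for this call only
                    t->draw();
                } else {
                    std::cout << "texture is already gone\n";
                }
            }
        };

        int main() {
            std::shared_ptr<Texture> owner = std::make_shared<Texture>();  // the single owning pointer
            Renderer guest{owner};                                         // guest stores only a weak_ptr

            guest.render();  // works while the owner is alive
            owner.reset();   // the owner deallocates; no guest can prevent this
            guest.render();  // the guest observes that the object is gone
        }

    A guest can still "steal" the object by keeping the shared_ptr returned by lock(), so this is a convention the code base has to follow rather than something the type system enforces.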

    Read the article

  • Getting repeated values after copying 2-3 items to the clipboard in Android?

    - by Gurpreet singh
    I am using clipboard code like the snippet below in my app. Everything works fine if I copy one item at a time, but when I copy 2-3 items one after another and paste somewhere, it starts retrieving a repeated past value from the clipboard rather than the current value. After googling a lot I came to know that this is a problem with Samsung phones and that I need to clear the clipboard history, but I could not find any way to do that.

        public void copyToClipboard(View v) {
            int pos = (Integer) v.getTag();
            StatusEntity obj = getItem(pos);
            ClipboardManager clipmanager =
                    (ClipboardManager) getContext().getSystemService(Context.CLIPBOARD_SERVICE);
            ClipData clip = ClipData.newPlainText("data", obj.getStatus());
            clipmanager.setPrimaryClip(clip);
            Toast.makeText(getContext(), "Copied to clipboard", Toast.LENGTH_SHORT).show();
        }

    Hoping that one of you can help me with this. Any help would be appreciated.

    Read the article

  • ASP.net MVC How to run multiple instances of the same app at the same time in different subdomains?

    - by basilmir
    I have a dozen or so subdomains:

        sub1.website.com
        sub2.website.com

    and a folder structure like this:

        \website\sub1
        \website\sub2

    If I need to run the same app for all of the subdomains, what would be the best approach? Host it in the root \website\ and have it figure out where to look based on the "user"? (I imagine I would need to implement that logic in the code.) Or just copy the app into each of the subdomains and have the app "not know" that it is actually one of several instances? (This would mean that when I update the app, I have to copy it everywhere.) What other approaches are there to this kind of issue? Each app will use a different database name, so that will need to be coded in some kind of external file.

    Read the article

  • C# Keeping DLLs with the associated library

    - by SimonN
    We have a library built on the back of Eldos' Secure Black Box. We use Copy Local to ensure that the appropriate runtime DLLs are included. If we now reference our library in another project with Copy Local, our library is copied into the bin folder of the main project, but the Eldos SBB libraries aren't. We could reference SBB in the main project, but there are no direct calls to SBB, so any time the code is refactored those references may be removed as unused. What is the best way of handling this issue? Simon

    Read the article

  • Socket left in TIME_WAIT after file transfer via netcat

    - by com
    Using "Copying by NetCat", I am trying to copy files over the network with netcat. From the console it works pretty well: first I run a listening netcat on the destination machine, and then I run the sending side on the source machine. The problem is that it doesn't work from a script on the source machine:

        ssh -f user@$desthost 'nc -l 1234 | tar xvf - /dev/null &'   #listening on destination host
        tar cv /tmp/file | nc $desthost 1234                         #sending to destination host

    I saw that after running this, port 1234 was still open and the status of the socket was TIME_WAIT. If you know what the problem is, please help me out. And by the way, after copying, how can I validate that the content is identical? Thanks! Addendum: I found one very strange thing: the same implementation with screen on the destination works, but it's not stable; sometimes it doesn't copy the file.

        ssh user@$desthost screen -dm -S test 'nc -l 1234 | tar xvf - '   #listening on destination host

    Maybe there is an issue with a timeout?

    Read the article

  • Customized generation/filtering resources with maven

    - by zamza
    I wonder what the Maven way is in my situation. My application has a bunch of configuration files; let's call them profiles. Each profile configuration file is a *.properties file that contains keys/values and some comments on the semantics of those keys/values. The idea is to generate these *.properties files so that all of them have unified comments. My plan is to create a template.properties file that contains something like

        #Comments for key1/value1
        key1=${key1.value}
        #Comments for key2/value2
        key2=${key2.value}

    and a bunch of files like

        #profile_data_1.properties
        key1.value=profile_1_key_1_value
        key2.value=profile_1_key_2_value

        #profile_data_2.properties
        key1.value=profile_2_key_1_value
        key2.value=profile_2_key_2_value

    and then bind to the generate-resources phase to create a copy of template.properties per profile_data_* file, filtering that copy with the corresponding profile_data_*.properties as a filter. The easiest way is probably to create an Ant build file and use the antrun plugin, but that is not the Maven way, is it? Another option is to create a Maven plugin for that tiny task, but somehow I don't like that idea (plugin deployment is not what I want very much).

    Read the article

  • organizing external libraries and include files

    - by stijn
    Over the years my projects have come to use more and more external libraries, and the way I handle them is starting to feel more and more awkward (although, it has to be said, it does work flawlessly). I use VS on Windows, CMake on other platforms, and CodeComposer for targeting DSPs on Windows. Except for the DSPs, both 32-bit and 64-bit platforms are used. Here's a sample of what I am doing now; note that, as shown, the external libraries themselves are not always organized in the same way. Some have separate lib/include/src folders, others have a single src folder. Some came ready to use with static and/or shared libraries, others had to be built.

        /path/to/projects
            /projectA
            /projectB
        /path/to/apis
            /apiA
                /src
                /include
                /lib
            /apiB
                /include
                /i386/lib
                /amd64/lib
        /path/to/otherapis
            /apiC
                /src
        /path/to/sharedlibs
            /apiA_x86.lib        --> some libs were built in all possible configurations
            /apiA_x86d.lib
            /apiA_x64.lib
            /apiA_x64d.lib
            /apiA_static_x86.lib
            /apiB.lib            --> other libs have just one import library
        /path/to/dlls            --> most of this directory also gets distributed to clients
            /apiA_x86.dll            and it's in the PATH
            /apiB.dll

    Each time I add an external library, I roughly use this process:
    1. build it, if needed, for the different configurations (release/debug/platform)
    2. copy its static and/or import libraries to 'sharedlibs'
    3. copy its shared libraries to 'dlls'
    4. add an environment variable, e.g. 'API_A_DIR', that points to the root for apiA, like '/path/to/apis/apiA'
    5. create a VS property sheet and a CMake file to state the include path and, where needed, the library name, like include = '$(API_A_DIR)/Include' and lib = apiA.lib
    6. add the property sheet / CMake file to the project needing the library

    It's especially steps 4 and 5 that are bothering me. I am pretty sure I am not the only one facing this problem, and I would like to see how others deal with it. I was thinking of getting rid of the per-library environment variables and using just one 'API_INCLUDE_DIR', populating it with the include files in an organized way:

        /path/to/api/include
            /apiA
            /apiB
            /apiC

    This way I do not need the include path in the property sheets, nor the environment variables. For libs that are only used on Windows I don't even need a property sheet at all, as I can use #pragmas to instruct the linker which library to link to. Also, in the code it will be clearer what gets included, and there is no need for wrappers to include files that have the same name but come from different libraries:

        #include <apiA/header.h>
        #include <apiB/header.h>
        #include <apiC_version1/header.h>

    The drawback is of course that I have to copy the include files, and possibly** introduce duplicates on the filesystem, but that looks like a minor price to pay, doesn't it?

    ** actually, once the libraries are built, the only things I need from them are the include files and the libs. Since each of those would have a dedicated directory, the original source tree is not needed anymore and can be deleted.
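    For reference, a minimal sketch of the Windows-only #pragma mentioned above (MSVC-specific; the library name comes from the question's own example):

        // Anywhere in a translation unit of the project that needs apiA (MSVC only):
        // the linker is told to pull in apiA.lib without any project or property-sheet setting.
        #pragma comment(lib, "apiA.lib")

    The CMake-based builds still need an explicit target_link_libraries() entry, which is why the property sheet / CMake file pair remains in the workflow above.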

    Read the article
