Search Results

Search found 11991 results on 480 pages for 'cedric copy'.


  • Async task ASP.net HttpContext.Current.Items is empty - How do I handle this?

    - by GuruC
    We are running a very large web application in ASP.NET MVC on .NET 4.0. Recently we had an audit done, and the performance team reported a lot of null reference exceptions, so I started investigating from the dumps and the event viewer. My understanding is as follows: we are using async tasks in our controllers, and we rely on the HttpContext.Current.Items hashtable to store a lot of application-level values.

        Task<Articles>.Factory.StartNew(() =>
        {
            System.Web.HttpContext.Current = ControllerContext.HttpContext.ApplicationInstance.Context;
            var service = new ArticlesService(page);
            return service.GetArticles();
        }).ContinueWith(t => SetResult(t, "articles"));

    So we are copying the context object onto the new thread that is spawned from the task factory. This context.Items is used again in the thread wherever necessary, for example:

        public class SomeClass
        {
            internal static int StreamID
            {
                get
                {
                    if (HttpContext.Current != null)
                    {
                        return (int)HttpContext.Current.Items["StreamID"];
                    }
                    else
                    {
                        return DEFAULT_STREAM_ID;
                    }
                }
            }
        }

    This runs fine as long as the number of parallel requests is moderate. My questions are:

        1. When the load is higher and there are too many parallel requests, I notice that HttpContext.Current.Items is empty. I am not able to figure out a reason for this, and it causes all the null reference exceptions.
        2. How do we make sure it is not empty? Is there any workaround?

    NOTE: I read through StackOverflow and people have questions like "HttpContext.Current is null" - but in my case it is not null, it is empty. I was also reading an article where the author says that sometimes the request object is terminated, which may cause problems since Dispose has already been called on its objects. I am only making a shallow copy of the context object, not a deep copy.
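
    A minimal sketch (not part of the original question) of one commonly suggested alternative: read everything the task needs from HttpContext.Current.Items while still on the request thread, and let the background work close over plain locals instead of touching HttpContext at all. StreamID, page, ArticlesService and SetResult come from the question above; DEFAULT_STREAM_ID and the GetArticles(streamId) overload are assumptions for illustration.

        // Sketch only: capture request-scoped values up front, on the request thread.
        var items = System.Web.HttpContext.Current.Items;
        int streamId = items.Contains("StreamID") ? (int)items["StreamID"] : DEFAULT_STREAM_ID;

        Task<Articles>.Factory.StartNew(() =>
        {
            // Only captured locals are used here; no HttpContext access on the worker thread.
            var service = new ArticlesService(page);
            return service.GetArticles(streamId);   // hypothetical overload taking the captured value
        }).ContinueWith(t => SetResult(t, "articles"));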

    Read the article

  • waveInProc / Windows audio question...

    - by BTR
    I'm using the Windows API to get audio input. I've followed all the steps on MSDN and managed to record audio to a WAV file. No problem. I'm using multiple buffers and all that. I'd like to do more with the buffers than simply write to a file, so now I've got a callback set up. It works great and I'm getting the data, but I'm not sure what to do with it once I have it. Here's my callback... everything here works:

        // Media API callback
        void CALLBACK AudioRecorder::waveInProc(HWAVEIN hWaveIn, UINT uMsg, DWORD dwInstance, DWORD dwParam1, DWORD dwParam2)
        {
            // Data received
            if (uMsg == WIM_DATA)
            {
                // Get wav header
                LPWAVEHDR mBuffer = (WAVEHDR *)dwParam1;

                // Now what?
                for (unsigned i = 0; i != mBuffer->dwBytesRecorded; ++i)
                {
                    // I can see the char, how do get them into my file and audio buffers?
                    cout << mBuffer->lpData[i] << "\n";
                }

                // Re-use buffer
                mResultHnd = waveInAddBuffer(hWaveIn, mBuffer, sizeof(mInputBuffer[0])); // mInputBuffer is a const WAVEHDR *
            }
        }

        // waveInOpen cannot use an instance method as its callback,
        // so we create a static method which calls the instance version
        void CALLBACK AudioRecorder::staticWaveInProc(HWAVEIN hWaveIn, UINT uMsg, DWORD_PTR dwInstance, DWORD_PTR dwParam1, DWORD_PTR dwParam2)
        {
            // Call instance version of method
            reinterpret_cast<AudioRecorder *>(dwParam1)->waveInProc(hWaveIn, uMsg, dwInstance, dwParam1, dwParam2);
        }

    Like I said, it works great, but I'm trying to do the following:

        - Convert the data to short and copy into an array
        - Convert the data to float and copy into an array
        - Copy the data to a larger char array which I'll write into a WAV
        - Relay the data to an arbitrary output device

    I've worked with FMOD a lot and I'm familiar with interleaving and all that. But FMOD dishes everything out as floats. In this case, I'm going the other way. I guess I'm basically just looking for resources on how to go from LPSTR to short, float, and unsigned char. Thanks much in advance!
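
    A minimal sketch (not from the original post) of going from the raw LPSTR bytes to short and float, assuming the device was opened with a 16-bit PCM WAVEFORMATEX (wBitsPerSample == 16): lpData is then just little-endian 16-bit samples, so the buffer can be reinterpreted directly. consumeBuffer and the three vectors are made-up names for illustration.

        #include <windows.h>
        #include <vector>

        void consumeBuffer(const WAVEHDR *hdr,
                           std::vector<short> &shorts,
                           std::vector<float> &floats,
                           std::vector<unsigned char> &rawBytes)
        {
            const short *samples = reinterpret_cast<const short *>(hdr->lpData);
            const size_t sampleCount = hdr->dwBytesRecorded / sizeof(short);

            for (size_t i = 0; i < sampleCount; ++i)
            {
                shorts.push_back(samples[i]);
                floats.push_back(samples[i] / 32768.0f);   // scale to roughly [-1.0, 1.0)
            }

            // Keep an untouched byte copy for writing back out to a WAV file later.
            const unsigned char *bytes = reinterpret_cast<const unsigned char *>(hdr->lpData);
            rawBytes.insert(rawBytes.end(), bytes, bytes + hdr->dwBytesRecorded);
        }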

    Read the article

  • null terminating a string

    - by robUK
    Hello, gcc 4.4.4, C89. I am just wondering what the standard way to null-terminate a string is. When I use NULL I get a warning message:

        *dest++ = 0;
        *dest++ = '\0';
        *dest++ = NULL; /* Warning: Assignment takes integer from pointer without a cast */

    Source code I am using:

        size_t s_strscpy(char *dest, const char *src, const size_t len)
        {
            /* Copy the contents from src to dest */
            size_t i = 0;
            for(i = 0; i < len; i++)
                *dest++ = *src++;

            /* Null terminate dest */
            *dest++ = 0;

            return i;
        }

    Just another quick question. I deliberately commented out the line that null-terminates. However, it still correctly printed out the contents of dest. The caller of this function would pass the length of the string either including the null terminator or not, i.e. strlen(src) + 1 or strlen(src).

        size_t s_strscpy(char *dest, const char *src, const size_t len)
        {
            /* Copy the contents from src to dest */
            size_t i = 0;

            /* Don't copy the null terminator */
            for(i = 0; i < len - 1; i++)
                *dest++ = *src++;

            /* Don't add the null terminator */
            /* *dest++ = 0; */

            return i;
        }

    Many thanks for any advice.
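
    A brief illustrative aside (not part of the question): '\0' and 0 are the same int value zero, which is exactly what a char assignment expects, whereas NULL is a null pointer constant (often defined as ((void *)0)), hence the pointer-to-integer warning. A minimal sketch:

        /* Sketch: the conventional spelling is '\0'. */
        #include <stddef.h>   /* NULL, for the commented-out line */

        void terminate_example(char *dest)
        {
            *dest = '\0';        /* idiomatic: the NUL character */
            *dest = 0;           /* identical value, just less self-documenting */
            /* *dest = NULL; */  /* NULL is a pointer constant - hence the warning */
        }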

    Read the article

  • What is the proper syntax for getting a Makefile to print the output directory of one of its output zip files?

    - by 9exceptionThrower9
    I'm trying to edit an Android Makefile in the hopes of getting it to print out the directory (path) location of one of the ZIP files it creates. Ideally, since the build process is long and does many things, I would like for it to print out the pathway to the ZIP file to a text file in a different directory I can access later. Pseudo-code idea:

        # print the desired pathway to output file
        print(getDirectoryOf(variable-name.zip)) > ~/Desktop/location_of_file.txt

    The Makefile snippet where I would like to insert this new bit of code is shown below. I am interested in finding the directory of $(name).zip (that is the specific file I want to locate):

        # -----------------------------------------------------------------
        # A zip of the directories that map to the target filesystem.
        # This zip can be used to create an OTA package or filesystem image
        # as a post-build step.
        #
        name := $(TARGET_PRODUCT)
        ifeq ($(TARGET_BUILD_TYPE),debug)
          name := $(name)_debug
        endif
        name := $(name)-target_files-$(FILE_NAME_TAG)

        intermediates := $(call intermediates-dir-for,PACKAGING,target_files)
        BUILT_TARGET_FILES_PACKAGE := $(intermediates)/$(name).zip
        $(BUILT_TARGET_FILES_PACKAGE): intermediates := $(intermediates)
        $(BUILT_TARGET_FILES_PACKAGE): \
            zip_root := $(intermediates)/$(name)

        # $(1): Directory to copy
        # $(2): Location to copy it to
        # The "ls -A" is to prevent "acp s/* d" from failing if s is empty.
        define package_files-copy-root
          if [ -d "$(strip $(1))" -a "$$(ls -A $(1))" ]; then \
            mkdir -p $(2) && \
            $(ACP) -rd $(strip $(1))/* $(2); \
          fi
        endef
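
    A sketch (not from the original Makefile) of one way this could be done with GNU make's $(dir ...) function, assuming lines can be added near where BUILT_TARGET_FILES_PACKAGE is defined and that writing to ~/Desktop on the build machine is acceptable; the print-zip-location target name is made up.

        # Sketch: $(dir ...) strips the file name and leaves the directory part.
        ZIP_OUTPUT_DIR := $(dir $(BUILT_TARGET_FILES_PACKAGE))

        # Option 1: print it while the Makefile is being evaluated.
        $(info target-files zip directory: $(ZIP_OUTPUT_DIR))

        # Option 2: a separate target that writes the full path to a file
        # (the recipe line must be indented with a tab).
        .PHONY: print-zip-location
        print-zip-location:
            @echo "$(abspath $(BUILT_TARGET_FILES_PACKAGE))" > $(HOME)/Desktop/location_of_file.txt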

    Read the article

  • Help renaming svn repository

    - by rascher
    Here is the deal: I created an SVN repository, say, foo. It is at http://www.example.com/foo. Then I did an svn checkout, made some updates and changes to my local copy of the code over the week, and haven't committed yet. I realized that I wanted to rename the repository, so I did this:

        svn copy http://example.com/foo http://example.com/bar
        svn delete http://example.com/foo

    I finish my changes (and my local working copy still thinks I'm working under "foo"). svn commit fails because the repo has been renamed. I try to use svn switch --relocate but it yells at me because svn is awful. I try using the script here to replace "foo" with "bar" in my billion .svn/ folders. The replace is taking a long time. I wonder if something hung? Or maybe sshfs failed? I kill it. Ctrl-C. I look and see that half my files have "foo" and the others have "bar" in the URLs in the sundry .svn/ folders. All I want to do is commit my files with the new name. I could re-checkout the branch, but then I have no way to remember which files I changed, which is why I was using version control in the first place, and svn is so godawful at moving and renaming things. What do I need to do to: have a "clean" copy of my "bar" branch, and, most importantly, commit the changes I made?
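
    An aside, not taken from the post: because foo was copied to bar inside the same repository (the repository root itself did not change), a plain svn switch - rather than --relocate - is normally the tool for pointing a working copy at the new path, and it preserves local modifications. A sketch, assuming the half-rewritten .svn metadata is handled by falling back to a fresh checkout; paths are placeholders.

        # If the working-copy metadata were still intact (all "foo"), switching to the
        # new path inside the same repository would keep the local edits:
        svn switch http://example.com/bar .

        # With the metadata half-rewritten, one recovery route is a clean checkout of bar
        # plus copying the dirty tree over it, then reviewing what changed:
        svn checkout http://example.com/bar bar-clean
        rsync -a --exclude='.svn' old-working-copy/ bar-clean/
        (cd bar-clean && svn status)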

    Read the article

  • Complicated API issue with calling assemblies dynamically?

    - by Stefanos Tses
    I have an interesting challenge that I'm wondering if anyone here can give me some direction on. I'm writing a .NET Windows Forms application that runs on a network and uses a SQL Server to save and pull data. I want to offer a mini "plugin" API, where developers can build their own assemblies and implement a specific interface (IDataManipulate). These assemblies can then be used by my application to call the interface functions and do something. I can create assemblies using my API, copy the file to a folder on my local hard drive and configure my application to use Reflection to call a specific function from the implemented interface (IDataManipulate.Execute).

    The problem: since the application will be installed on multiple workstations in the network, it is impossible to copy the plugin DLLs the users will create to each machine.

    Solutions I tried:

    Solution 1: Copy the API dll to a network share. Problem: requires AllowPartiallyTrustedCallersAttribute, which requires .NET signing, which I can't force from my users.

    Solution 2 (preferred): Serialize the dll object, save it to the database, deserialize it and call IDataManipulate.Execute. Problem: after deserialization, I try to cast it to an IDataManipulate object but it returns an error looking for the actual dll file.

    Solution 3: Save the dll bytes as byte[] to the database and recreate the dll on the local PC every time the user starts my application. Problem: the dll may have dependencies, which I don't know if I can detect.

    Any suggestions will be greatly appreciated. Thanks
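
    A minimal sketch (not from the original post) of a variant of solutions 2 and 3: store the raw assembly bytes in the database and hand them to Assembly.Load(byte[]), so nothing has to be written to the workstation's disk. LoadPluginBytesFromDatabase is a made-up data-access call, and dependencies still have to be resolvable (for example via the AppDomain.CurrentDomain.AssemblyResolve event), which is the same caveat as solution 3.

        // Sketch: requires using System; using System.Reflection;
        byte[] rawAssembly = LoadPluginBytesFromDatabase();      // hypothetical helper
        Assembly pluginAssembly = Assembly.Load(rawAssembly);    // loads from memory, no file on disk

        foreach (Type type in pluginAssembly.GetTypes())
        {
            if (typeof(IDataManipulate).IsAssignableFrom(type) && !type.IsAbstract)
            {
                var plugin = (IDataManipulate)Activator.CreateInstance(type);
                plugin.Execute();   // assuming a parameterless Execute, as the post implies
            }
        }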

    Read the article

  • assign characters to key combinations in XP or Visual Studio .Net

    - by cpj
    I'm running Mac OS X on a MacBook Pro (UK keyboard). I run Windows XP under Parallels in a VM, and I run Visual Studio .NET 2003 and 2008 in XP in the VM when I need to. I have English United Kingdom and English United States keyboards set up in XP (they switch sometimes for no apparent reason). There is no hash "#" key on my Mac's keyboard. However, in OS X I can get a hash with an alt+3 key combination. But in Windows XP I cannot make a "#" character. I can go to the character map in Windows and copy a hash... switch into OS X and copy a hash... search in code and copy a hash... but I cannot make a hash in XP using my keyboard without typing U+0023: ... which you can imagine is annoying. Coding anything with hash symbols is becoming a chore. Anyone got any advice or key mapping tricks I can use to get hash characters working in XP using my Mac UK keyboard?

    Read the article

  • only default controller is loading for all requests - Critical

    - by Jayapal Chandran
    Hi, my CodeIgniter project is live. I have two copies of it: one in the root and another in a subfolder, and both are configured to work normally. The root copy is the one that was made after testing in a subfolder. While running from the subfolder everything worked well, but when copied to the root folder the default controller is loaded for all requests - whereas in subfolders and on other servers it works fine. It is like the following: a true copy in the root folder, like sitename.com, and another true copy in a subfolder, like sitename.com/abc. When requesting sitename.com/gallery, the default controller is loaded instead of the gallery controller. When I tried sitename.com/index.php/gallery/ it worked well... but sitename.com/gallery/ shows only the default controller, that is, the index page. Here is my htaccess:

        php_flag magic_quotes_gpc off
        php_flag short_open_tag on
        RewriteEngine on
        RewriteCond $1 !^(index\.php|images|css|static|font|xml|flash|galleryimages|htc|store|robots\.txt)
        RewriteRule ^(.*)$ index.php/$1 [L]

    The server is Linux barracuda.elinuxservers.com 2.6.27.18-21 #1 SMP Tue Aug 25 18:13:37 UTC 2009 i686, PHP Version 5.2.9
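
    An aside, not from the post: when clean URLs work through index.php but fall back to the default controller without it, the setting usually checked first is CodeIgniter's URI protocol in application/config/config.php, since some hosts only populate REQUEST_URI reliably. A sketch of the values commonly tried (exact behaviour depends on the CodeIgniter version and the server):

        // application/config/config.php
        $config['index_page']   = '';              // no index.php in generated URLs
        $config['uri_protocol'] = 'REQUEST_URI';   // alternatives: 'AUTO', 'PATH_INFO', 'QUERY_STRING'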

    Read the article

  • Twitter Bootstrap on page tabs: not hiding tab content

    - by user973424
    I'm trying to get the Twitter Bootstrap on-page tabbed content to work. I have the tabs working, with the active class switching around on the tabs. I've included jQuery and bootstrap-tabs.js, but with the following code I can't seem to get the tabbed content to hide / display as it should. Any help on what may be a simple fix would be appreciated.

        <div class="span8">
            <ul class="tabs" data-tabs="tabs">
                <li class="active"><a href="#2009">2009</a></li>
                <li><a href="#2010">2010</a></li>
                <li><a href="#2011">2011</a></li>
            </ul>
            <div class="pill-content">
                <div class="active" id="2009">
                    2009 copy
                </div>
                <div id="2010">
                    2010 copy
                </div>
                <div id="2011">
                    2011 copy
                </div>
            </div>
            <script>
                $(function () {
                    $('.tabs').tabs()
                })
            </script>
        </div><!-- end span 8 -->

    Read the article

  • How to retrieve data from a dialog box?

    - by Ralph
    Just trying to figure out an easy way to either pass or share some data between the main window and a dialog box. I've got a collection of variables in my main window that I want to pass to a dialog box so that they can be edited. The way I've done it now, I pass the list to the constructor of the dialog box:

        private void Button_Click(object sender, RoutedEventArgs e)
        {
            var window = new VariablesWindow(_templateVariables);
            window.Owner = this;
            window.ShowDialog();

            if(window.DialogResult == true)
                _templateVariables = new List<Variable>(window.Variables);
        }

    And then in there, I guess I need to deep-copy the list,

        public partial class VariablesWindow : Window
        {
            public ObservableCollection<Variable> Variables { get; set; }

            public VariablesWindow(IEnumerable<Variable> vars)
            {
                Variables = new ObservableCollection<Variable>(vars);
                // ...

    so that when they're edited, it doesn't get reflected back in the main window until the user actually hits "Save". Is that the correct approach? If so, is there an easy way to deep-copy an ObservableCollection? Because as it stands now, I think my Variables are being modified because it's only doing a shallow copy.
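
    A minimal sketch (not part of the question) of one way to deep-copy: clone each element rather than just the list, since new ObservableCollection<Variable>(vars) only copies references, which is why edits show through. The copy constructor Variable(Variable other) is an assumption; a Clone() method or a serialization round-trip would serve the same purpose.

        // Sketch: requires using System.Linq; assumes Variable(Variable other) duplicates all fields.
        public VariablesWindow(IEnumerable<Variable> vars)
        {
            Variables = new ObservableCollection<Variable>(vars.Select(v => new Variable(v)));
        }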

    Read the article

  • How to speed up a slow UPDATE query

    - by Mike Christensen
    I have the following UPDATE query:

        UPDATE Indexer.Pages SET LastError=NULL where LastError is not null;

    Right now, this query takes about 93 minutes to complete. I'd like to find ways to make this a bit faster. The Indexer.Pages table has around 506,000 rows, and about 490,000 of them contain a value for LastError, so I doubt I can take advantage of any indexes here. The table (when uncompressed) has about 46 gigs of data in it, however the majority of that data is in a text field called html. I believe simply loading and unloading that many pages is causing the slowdown. One idea would be to make a new table with just the Id and the html field, and keep Indexer.Pages as small as possible. However, testing this theory would be a decent amount of work since I actually don't have the hard disk space to create a copy of the table. I'd have to copy it over to another machine, drop the table, then copy the data back, which would probably take all evening. Ideas? I'm using Postgres 9.0.0.

    UPDATE: Here's the schema:

        CREATE TABLE indexer.pages
        (
            id uuid NOT NULL,
            url character varying(1024) NOT NULL,
            firstcrawled timestamp with time zone NOT NULL,
            lastcrawled timestamp with time zone NOT NULL,
            recipeid uuid,
            html text NOT NULL,
            lasterror character varying(1024),
            missingings smallint,
            CONSTRAINT pages_pkey PRIMARY KEY (id),
            CONSTRAINT indexer_pages_uniqueurl UNIQUE (url)
        );

    I also have two indexes:

        CREATE INDEX idx_indexer_pages_missingings
            ON indexer.pages
            USING btree (missingings)
            WHERE missingings > 0;

    and

        CREATE INDEX idx_indexer_pages_null
            ON indexer.pages
            USING btree (recipeid)
            WHERE NULL::boolean;

    There are no triggers on this table, and there is one other table that has a FK constraint on Pages.PageId.
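
    An illustrative aside, not from the post: because PostgreSQL writes a complete new row version (including the large html value) for every updated row, one commonly suggested mitigation is to run the update in smaller batches keyed off the primary key, so each transaction stays small and autovacuum can reclaim space between rounds. A sketch (the batch size is arbitrary; repeat until no rows are affected):

        -- Sketch: clear lasterror roughly 10,000 rows at a time.
        UPDATE indexer.pages
        SET    lasterror = NULL
        WHERE  id IN (
                 SELECT id
                 FROM   indexer.pages
                 WHERE  lasterror IS NOT NULL
                 LIMIT  10000
               );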

    Read the article

  • How to be a better software designer

    - by Bmw
    I feel as if I am a hack programmer. I do things over and over again until they finally work. I copy/paste code. I don't think about why something makes sense, if it works I go with it. I have my undergraduate in computer science, and I've done this the entire time. Somehow I made it through the degree by doing this and now I'm in my second year of programming professionally and I am still able to do this and get away with it. Here's the thing, I don't want to be this way anymore. I am not proud of the work I do, I feel like I'm just a copy/paste programmer. How do I become better? I want to be able to design something on my own without copying code. Have you ever been in this situation, if so how did you move beyond it to become a better programmer? To add some background about myself, I started as an asp.net c# programmer, now I'm doing vbscript (which actually makes my tendency to copy/paste/hack a lot worse I must say!) with classic asp…

    Read the article

  • How to transfer Eclipse workspace and project from Windows to Linux and Mac

    - by Li Ma
    We have a product that has been developed on Windows for years. The product is composed of one Eclipse workspace and about 20 projects. On Windows, we ask every developer to check out projects into the d:\dev\product folder, and copy a unified workspace to d:\dev\prod_workspace. This way, whenever a new machine is set up, we simply copy files to the same folders and we can start working immediately. Now we need to move our development environment to Linux and Mac. But there's no D:\ on Unix, and the home folder is usually /home/username on Linux and /Users/username on Mac. We found that Eclipse keeps absolute paths in the workspace when referring to projects, so simply copying the workspace over does not work anymore. Even when we manually create/configure a workspace on a Linux machine, it still cannot be copied over to another user, because the absolute path changes. I guess our goal is to allow easy setup of the development environment. Do you have any suggestion for moving an Eclipse workspace around? Thanks! Li

    Read the article

  • How to use CriticalSection - MFC?

    - by mapples
    I'm working on a small example and am a bit curious about using a critical section in it. What I'm doing is: I have a CStringArray (which has 10 elements added to it), and I want to copy these 10 elements (strings) to another CStringArray (I am doing this to understand threading and critical sections). I have created 2 threads: Thread1 will copy the first 5 elements to the other CStringArray and Thread2 will copy the rest. Two CStringArrays are being used, and I know only 1 thread can safely access one at a time. I wanted to know how this can be solved by using a critical section or any other method.

        void CThreadingEx4Dlg::OnBnClickedOk()
        {
            // TODO: Add your control notification handler code here
            thread1 = AfxBeginThread((AFX_THREADPROC)MyThreadFunction1,this);
            thread2 = AfxBeginThread((AFX_THREADPROC)MyThreadFunction2,this);
        }

        UINT MyThreadFunction1(LPARAM lparam)
        {
            CThreadingEx4Dlg* pthis = (CThreadingEx4Dlg*)lparam;
            pthis->MyFunction(0,5);
            return 0;
        }

        UINT MyThreadFunction2(LPARAM lparam)
        {
            CThreadingEx4Dlg* pthis = (CThreadingEx4Dlg*)lparam;
            pthis->MyFunction(6,10);
            return 0;
        }

        void CThreadingEx4Dlg::MyFunction(int minCount,int maxCount)
        {
            for(int i=minCount;i<=maxCount;i++)
            {
                CString temp;
                temp = myArray.GetAt(i);
                myShiftArray.Add(temp);
            }
        }
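
    A minimal sketch (not from the original post) of guarding the shared destination array with MFC's CCriticalSection and CSingleLock; a bare CRITICAL_SECTION with EnterCriticalSection/LeaveCriticalSection works the same way. m_lock is assumed to be added as a CCriticalSection member of CThreadingEx4Dlg. As an aside, MyFunction(0,5) and MyFunction(6,10) combined with i <= maxCount also touch index 10, one past the end of a 10-element array, so the sketch uses half-open ranges.

        // Sketch: requires #include <afxmt.h> for CCriticalSection / CSingleLock.
        void CThreadingEx4Dlg::MyFunction(int minCount, int maxCount)
        {
            for (int i = minCount; i < maxCount; i++)   // half-open: call with (0,5) and (5,10)
            {
                CSingleLock lock(&m_lock, TRUE);        // acquires the critical section
                CString temp = myArray.GetAt(i);
                myShiftArray.Add(temp);
            }                                           // lock is released when it goes out of scope
        }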

    Read the article

  • How can I use SCM on linked files in VS2008 projects?

    - by Tom Bushell
    Background: I'm using VisualSVN 1.7.5 with VS2008, and I'm fairly new to SVN. I have a Solution that uses source files that will be shared with other Solutions. I've put these files in a folder called "Shared", and added them to my Solution using "Add - Existing Item... - Add As Link", which works fine as far as VS2008 is concerned. But when I try to add the linked files to SVN using the "Add to Subversion" menu item on the file's context menu, I get a warning: "...not added to Subversion because it is out of working copy. Please setup working copy root using Visual SVN - Set Working Copy Root menu". I tried this, but it seems to change the root directory of the whole solution - not what I want to do. Googling and searching SO indicates that I may want to set up some SVN externals. I tried to follow the examples, using the command line for the first time with VisualSVN, but I just got a bunch of error messages I didn't understand. Questions: Are externals the way to go here? If so, can someone provide some detailed, step-by-step help on how to do this with VisualSVN?
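
    An illustrative aside, not from the post: an external is defined as an svn:externals property on the working-copy directory that should contain the shared folder, and VisualSVN simply honours whatever the property says. A command-line sketch using the SVN 1.5+ format (URL first, local name second); the repository-relative URL ^/Shared and the folder names are placeholders.

        # Run inside the checked-out solution directory that should contain Shared\
        svn propset svn:externals "^/Shared Shared" .
        svn update                       # pulls the external into .\Shared
        svn commit -m "Define svn:externals for the Shared folder"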

    Read the article

  • How to resolve "svn: Can't find a temporary directory: Internal error"?

    - by HorusKol
    I have already googled the message, and I have plenty of disk space available on the SVN server (it's about 4% usage of 150 GB). I have noticed that when I try echo $TMPDIR at the command prompt on the SVN server I get nothing. What is making this a little confusing is that I only get this message from one location when I do an svn diff (that I've tested so far) - this error is not coming up when I try from three other computers (one of which is testing against the exact same repository, the other two are different repositories on the same svn server). About the only difference I can see is that the broken working copy is connecting to the server by an IP address where all the others are using a server name (although this resolves over DNS to the same IP Address). I'm hoping that I don't have to scratch the broken working copy and checkout a new one - unfortunately, this is a legacy project and not all changes have been properly revisioned.
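
    A hedged aside, not from the post: with pre-1.7 working copies this message often points at a missing .svn/tmp directory inside the working copy itself rather than at $TMPDIR on the server, and one commonly cited remedy is simply to recreate those directories. A sketch, run from the root of the broken working copy (back it up first); the find form assumes GNU find, which substitutes {} inside a larger argument.

        find . -type d -name .svn -exec mkdir -p {}/tmp \;
        svn cleanup
        svn diff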

    Read the article

  • Microphone not working in Windows Virtual PC (on Windows 7)

    - by Clay Nichols
    I'm using Windows Virtual PC on Windows 7 (host) running Windows XP (as the guest OS), and I'm trying to get the microphone working.

    When I enable Integration Features:

        - The microphone does not work
        - When I run the Sound Recorder, the record button is disabled
        - In the Sound settings there are no options for the mic (it's all disabled, "grayed out")
        - Speakers work
        - Copy & paste works

    When I disable Integration Features:

        - Microphone and speakers work
        - Copy and paste does not (as expected)

    Drag'n'drop copying does not work in either situation.

    What I've tried: I verified that the Windows XP Mode Virtual PC guest has the same symptoms (the mic doesn't work, audio out via the speakers does work). I'm going to try (but have little hope) to uninstall and reinstall the Integration add-in for Virtual PC.

    Read the article

  • SBS 2008 Backup Drive Full - Error Code '2147942512'

    - by HK1
    We are using Windows Backup on SBS 2008 SP2 and backing up to 1TB external hard drives. Recently, after switching drives, our backup started failing because the backup drive is full and auto-delete isn't automatically deleting older backups/shadow copies. I'm trying to get more information to help me effectively prevent this problem from recurring in the future.

    How I can tell that the drive is getting full: in the event viewer under Windows Logs > Application, I'm seeing Event ID 517, but it fails to show an intelligible description. However, under Applications and Services Logs > Microsoft > Windows > Backup > Operational, I'm seeing an event with the ID of 5 and a description like this:

        Backup started at '10/4/2011 12:30:12 PM' failed with following error code '2147942512'.

    One of the most informative posts I've found on this error is on Microsoft's Technet forums. In that post, a Microsoft representative gives this hazy explanation:

        "auto-delete feature to ensure that at least some old backup copies are maintained on the disk -- does not automatically delete backups if space utilization by older copies is less than 1/8 of the disk size or in other words, 13% of the disk size. that means if the one full backup copy does not fit in the 7/8 of the disk size, backup may fail with disk full error. auto-delete will not automatically delete older versions to reclaim more older versions of backup."

    In the above explanation, I do not understand what is meant by "older copies", except that it appears that anything older than the very last shadow copy would be considered an "older copy". I'm going to make the assumption that this problem, where auto-delete will not work, will affect any hard drive that is large enough to make an effective backup drive - in other words, any hard drive that is large enough to hold more than one backup/shadow copy at once. The same MS representative proposes the solution of using a larger backup drive. I can't understand how this will help; it appears to me it will simply delay the problem until a later date.

    In order to resolve this problem for now, I did the following:

        1. Assign the backup drive a disk letter under Disk Management.
        2. Run the command line with administrative rights.
        3. diskshadow.exe [enter]
        4. delete shadows oldest x: [enter]  (where X: is the letter you assigned your backup drive)

    I manually ran the above command some 60 or 80 times to free up about 200 GB of space on my 1-terabyte external hard drive. However, I do not feel this is a satisfactory solution to prevent the problem from happening again in the future. Does anyone have a solution to prevent your Windows Server backup drive from getting full?
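
    An aside, not from the post: diskshadow can read its commands from a script file (diskshadow /s), so the manual repetition above can be wrapped in a small loop. A sketch, assuming the backup drive is X: and that deleting a fixed number of the oldest shadow copies per run is acceptable; the file name delete-oldest.txt is made up.

        rem delete-oldest.txt contains the single line:   delete shadows oldest X:
        rem In a batch file use %%i; when typing at a prompt use %i instead.
        for /L %%i in (1,1,50) do diskshadow /s delete-oldest.txt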

    Read the article

  • Migrate SBS 2008 with Exchange 2007 to new hardware

    - by MikeT505
    Hello, We have run out of hard disk space on our existing Small Business Server 2008 and simply wish to upgrade both of the hard drives (currently raid 1) - without too much hassle. My main concern is how to copy across the embedded version of Microsoft Exchange 2007. Is there a simple way to copy all the data across and upgrade? Or is it best to backup and do a clean install? - The difficulty is that it's the same server, so we can't replicate mailboxes for exchange. Any hints or tips welcomed!?

    Read the article

  • Microsoft Word 2007 opening all docs with field codes toggled off

    - by WilliamKF
    Recently, something changed with my Microsoft Word 2007 installation/preferences on Windows XP, such that whenever I open a word document, all the field codes are displayed raw instead of as their expanded value. For example, my header reads: My Name { TITLE \* MERGEFORMAT } Version { REVNUM \* MERGEFORMAT } But, if I copy and paste it here, it reads expanded: My Name My Doc Title Version 42 I expect to see the copy and paste version directly inside Word, I can work around this by right clicking on each such field and choosing toggle field codes, however, I never had to do that before, as previously, the document opened with all such field codes expanded. Another example is the Table of Contents which shows as: { TOC \o "1-3" \h \z \u } Instead of the full table of contents. I searched the word options dialog, but could not find anything that appeared relevant. Please suggest how to restore the old behavior.

    Read the article

  • Cannot access local resource (C drive) on remote desktop

    - by Robert Massa
    I've recently upgraded my client PC to Windows 7, and ever since I can't get local resource sharing for Remote Desktop to work. I'm connecting to a 2003 server which isn't in my current domain. All my optical and virtual drives are being shared, but the C drive stays hidden. I checked the options, and I do indicate that I want to share my C drive. Is there any permission I should change for this to work? The server is configured correctly, because when connecting from an XP client this problem doesn't occur. I've tried accessing the share directly by opening the \\tsclient\c path, but this doesn't work either; \\tsclient only shows the other drives. Also, copy 'n paste doesn't seem to work either (tried restarting rdpclip to no avail); I get "Cannot copy file File.dat, the device is not connected."

    Read the article

  • Remotely sync Time Machine drives

    - by Off Rhoden
    I have an Xserve that runs Time Machine to a local terabyte drive. I also connected my external terabyte drive for a time period and had Time Machine use it to establish the seed data. I plan to take my drive back home with me (out of state) and have the Xserve return to using its local drive for Time Machine. But when I get back home, is there a way to keep my external drive's copy of the Time Machine Backups folder in sync with the Backups folder back on the Xserve? I want a full copy of the history (it makes an awesome remote backup). I've thought of using the unix command rsync. In fact, that's how I had been doing it, but I was missing the compactness that Time Machine was able to achieve. Thanks.
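
    An aside, not from the post: if the rsync route is revisited, Backups.backupdb leans heavily on hard links and HFS+ metadata, so the flags matter. A sketch, assuming SSH access to the Xserve and Apple's bundled rsync (whose -E copies extended attributes; newer rsync builds use -X and -A instead); the host name and volume paths are placeholders.

        # -a archive, -H preserve hard links (essential for Backups.backupdb),
        # -E extended attributes on Apple's rsync, --delete to mirror removals.
        rsync -aHE --delete \
            admin@xserve.example.com:/Volumes/TMDrive/Backups.backupdb/ \
            /Volumes/ExternalTB/Backups.backupdb/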

    Read the article

  • Problem using psexec to remotely GAC a file

    - by Andrew Dunaway
    As part of a deployment process I am trying to GAC a series of files. The actual build process occurs on a build server, and I am trying to use psexec to GAC the files on whichever machine has requested the build. The current line I am trying to execute is:

        C:\PsTools\psexec.exe \\COMPUTER -u USER -p PASS gacutil.exe -i Assembly.dll -f

    The error that I am getting back is:

        Failure adding assembly to the cache: The system cannot find the file specified.

    So apparently the dll reference is resolved on the remote box, and unfortunately the dll is sitting on the build box. Is there any way to just do this with psexec somehow, or do I need to copy it to some temporary location on the \\COMPUTER? I know there are commands to copy the executable as part of the psexec process, but I can't seem to find anything similar for supporting files.
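
    An aside, not from the post: because gacutil resolves the path on the remote machine, one common pattern is to stage the DLL on the target's admin share first and point gacutil at that remote-local path. A sketch; the C$\Temp location and the credentials are placeholders, and the gacutil switches are the ones used in the question.

        copy /Y Assembly.dll \\COMPUTER\C$\Temp\Assembly.dll
        C:\PsTools\psexec.exe \\COMPUTER -u USER -p PASS gacutil.exe -i C:\Temp\Assembly.dll -f
        del \\COMPUTER\C$\Temp\Assembly.dll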

    Read the article

  • Robocopy permission denied

    - by Edoode
    Robocopy is preinstalled with Windows 7, and I've used it many times in the past. I tried to copy a folder to a remote share with:

        robocopy c:\source "\\server\share\path" /s /r:2 /w:2

    As a result I get "permission denied". Using Explorer I can copy files to this share. I've opened a command prompt with administrator permissions, with the same result. The share is read/write for public.

    EDIT: I've successfully mapped a drive letter to the share, but robocopy still fails.

    EDIT: I've added the /B switch without success. The exact error is:

        2009/09/26 20:43:14 ERROR 5 (0x00000005) Accessing Destination Directory \\drobo\Drobo\fotos__NEW\Ericsson\

    Read the article

  • How to abort robocopy on first error

    - by Yurik
    When using the robocopy Windows utility, what flags do I set so that robocopy aborts on the very first error it sees, similar to the xcopy /dry command? I need to mirror two dirs, and on occasion some files will be locked. I do not want robocopy to continue trying to copy files, or overwrite the files that are not locked - rather, the very first error should stop the whole copy process. UPDATE: I already have /R set to 0 - unfortunately it only applies to a single file, NOT to the whole copying process. Hence, the first file is ignored (instead of stopping the copying), but subsequent files are still copied.
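
    An aside, not from the post: robocopy has no built-in "stop on the first error" switch, but its exit code is a bit mask in which values of 8 and above mean something failed to copy, so a thin wrapper can at least stop the rest of a larger job. A sketch for a batch file (paths are placeholders):

        robocopy C:\source \\server\share\path /MIR /R:0 /W:0
        if %ERRORLEVEL% GEQ 8 (
            echo Robocopy reported failures - stopping the rest of the job.
            exit /b 1
        )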

    Read the article
