Search Results

Search found 17278 results on 692 pages for 'directory conventions'.


  • Best Practice for Uploading Many (2000+) Images to A Server

    - by bob
    I have a general question about this. When you have a gallery, people sometimes need to upload thousands of images at once, most likely as a .zip file. What is the best way to upload this sort of thing to a server? Servers often have timeouts and similar limits that need to be accounted for, so I am wondering what kinds of things I should be looking out for and what is the best way to handle a large number of images being uploaded. I'm guessing that you would allow a user to upload a zip file (assuming the timeout does not affect you), and that this zip file is uploaded to a specific directory; let's assume in this case a directory is created for each user in the system. You would then unzip it on the server and scan the user's folder for any directories containing .jpg, .png or .gif files (etc.) and import them into a table accordingly, I'm guessing labeled by folder name. What kind of server-side trouble could I run into? I'm aware that there may be many issues; even general ideas would be good so I can then research further. Thanks! Also, I would be programming in Ruby on Rails, but I think this question applies across any language.

    Read the article

  • No test coverage files generated for Unit Test bundle in Xcode

    - by John Gallagher
    The Problem: I've got a Cocoa project on the desktop and I'm using Xcode 3.2.1 on Snow Leopard 10.6.2. I want to generate code coverage files for my Unit Test target in Xcode.

    What I've Tried: As articles like this one suggest, I've adjusted the build settings as follows:

        “Generate Test Coverage Files” checked
        “Instrument Program Flow” checked
        “-lgcov” added to “Other Linker Flags”

    I've also set the Run Script section of the test target to the following:

        # Run the unit tests in this test bundle.
        "${SYSTEM_DEVELOPER_DIR}/Tools/RunUnitTests"

        # Run gcov on the framework getting tested
        if [ "${CONFIGURATION}" = 'Coverage' ]; then
            FRAMEWORK_NAME=LapsusInterpretationEngine
            FRAMEWORK_OBJ_DIR=${OBJROOT}/${FRAMEWORK_NAME}.build/${CONFIGURATION}/EngineTests.build/Objects-normal/${NATIVE_ARCH}
            mkdir -p coverage
            pushd coverage
            find ${OBJROOT} -name *.gcda -exec gcov -o ${FRAMEWORK_OBJ_DIR} {} \;
            popd
        fi

    Since my framework name is LapsusInterpretationEngine but my target is named EngineTests, I put this directly into FRAMEWORK_OBJ_DIR, but this didn't seem to help. I've tried cleaning before building, and I've made sure all the above build settings apply to both the Unit Test target and the Application target.

    What I Get: No .gcda or .gcno files anywhere in the build directory I'm using. I point CoverStory to the Objects-normal directory in my builds folder and it complains that there's nothing there for it to read. I must be doing something really obvious wrong. Anyone have any ideas? I have also tried the "EngineTests.build" directory being ${FRAMEWORK_NAME}, and this gives the same results.

    Read the article

  • Exclude string from wildcard in bash

    - by Peter O'Doherty
    I'm trying to adapt a bash script from "Sams' Teach Yourself Linux in 24 Hours" which is a safe delete command called rmv. The files are removed by calling rmv -d file1 file2 etc. In the original script a maximum of 4 files can be removed, using the variables $1 $2 $3 $4. I want to extend this to an unlimited number of files by using a wildcard. So I do:

        for i in $*
        do
          mv $i $HOME/.trash
        done

    The files are deleted okay, but the -d option of the rmv -d command is also treated as an argument, and bash objects that it cannot be found. Is there a better way to do this? Thanks, Peter

        #!/bin/bash
        # rmv - a safe delete program
        # uses a trash directory under your home directory
        mkdir $HOME/.trash 2>/dev/null
        # four internal script variables are defined
        cmdlnopts=false
        delete=false
        empty=false
        list=false
        # uses getopts command to look at command line for any options
        while getopts "dehl" cmdlnopts; do
          case "$cmdlnopts" in
            d ) /bin/echo "deleting: \c" $2 $3 $4 $5 ; delete=true ;;
            e ) /bin/echo "emptying the trash..." ; empty=true ;;
            h ) /bin/echo "safe file delete v1.0"
                /bin/echo "rmv -d[elete] -e[mpty] -h[elp] -l[ist] file1-4" ;;
            l ) /bin/echo "your .trash directory contains:" ; list=true ;;
          esac
        done

        if [ $delete = true ]
        then
          for i in $*
          do
            mv $i $HOME/.trash
          done
          /bin/echo "rmv finished."
        fi

        if [ $empty = true ]
        then
          /bin/echo "empty the trash? \c"
          read answer
          case "$answer" in
            y) rm -i $HOME/.trash/* ;;
            n) /bin/echo "trashcan delete aborted." ;;
          esac
        fi

        if [ $list = true ]
        then
          ls -l $HOME/.trash
        fi

    Read the article

  • .NET load assembly error metabase config

    - by peter
    Yesterday I found out that the mailroot directory had moved to a different location on the C drive. I don't know how this happened, as no configs were changed; I'm using IIS7 with the IIS6 SMTP service on Windows 2008 R2 Web edition. I wanted to restore it to the default c:\inetpub\mailroot location, so I opened system32/inetsrv/MetaBase.xml, and there was the problem: BadMailDirectory, DropDirectory etc. were all pointing to the wrong location. I changed them back to the default c:\inetpub\mailroot. Later I discovered an ASP.NET application was throwing this error:

        An error occurred in the Microsoft .NET Framework while trying to load assembly id 1. The server may be running out of resources, or the assembly may not be trusted with PERMISSION_SET = EXTERNAL_ACCESS or UNSAFE. Run the query again, or check documentation to see how to solve the assembly trust issues. For more information about this error: System.IO.FileNotFoundException: Could not load file or assembly 'microsoft.sqlserver.types, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified.

    I figured it had to do with my changes to the mailroot locations, so I gave ASP.NET access to the mailroot directory and it started working again. My question is: how is this possible? Why does ASP.NET require access to the mailroot directory to load assemblies? Thanks

    Read the article

  • Unable to create website error (NEW)

    - by salvationishere
    I copied my ClickOnce deployment to my C:/Inetpub/ folder on my web server and I deleted my virtual directory. I deleted the WpfApplication1 folder beneath wwwroot in Windows Explorer, then I turned on Web Sharing for this folder, and the new share name appeared under wwwroot in IIS Manager. So now under the Inetpub folder on my web server I have the directory C:\Inetpub\WpfApplication1\ with the following contents:

        Application Files
        publish.htm
        setup.exe
        WpfApplication1.application

    Next, I remapped both the publishing and installation URLs for the project to http://myserver/WpfApplication1/ and clicked Publish Now. But after I performed the Publish Now operation, I got the following error on my development server (D610-M):

        Error 1: Failed to connect to 'http://myserver/WpfApplication1/' with the following error: Unable to create the Web site 'http://myserver/WpfApplication1/'. The Web server does not appear to have any authentication methods enabled. It asked for user authentication, but did not send a WWW-Authenticate header.

    On my web server, when I click Browse in IIS Manager on the WpfApplication1 directory, it shows me the Install page. But after I click the Browse button, it returns an error which says: "The remote name could not be resolved: 'd610-m'" (D610-M is the name of my development server). How do I fix this?

    Read the article

  • Installing Mercurial on Mac OS X 10.6 Snow Leopard

    - by Matthew Rankin
    I installed Mercurial 1.3.1 on Mac OS X 10.6 Snow Leopard from source using the following:

        cd ~/src
        curl -O http://mercurial.selenic.com/release/mercurial-1.3.1.tar.gz
        tar xzvf mercurial-1.3.1.tar.gz
        cd mercurial-1.3.1
        make ALL
        sudo make install

    This installs the site-packages files for Mercurial in /usr/local/lib/python2.6/site-packages/. I know that installing Mercurial from the Mac disk image will install the files into /Library/Python/2.6/site-packages/, which is the site-packages directory for the Mac OS X default Python install. I have Python 2.6.2+ installed as a framework with its site-packages directory in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages. With Mercurial installed this way, I have to issue

        PYTHONPATH=/usr/local/lib/python2.6/site-packages:"${PYTHONPATH}"

    in order to get Mercurial to work.

    Questions:

        How can I install Mercurial from source with the site-packages in a different directory?
        Is there an advantage or disadvantage to having the site-packages in the current location?
        Would it be better in one of the Python site-package directories that already exist?
        Do I need to be concerned about virtualenv working correctly since I have modified PYTHONPATH (or any other conflicts, for that matter)?

    Reasons for Installing from Source: Dan Benjamin of Hivelogic provides the benefits of and instructions for installing Mercurial from source in his article Installing Mercurial on Snow Leopard.

    Read the article

  • datanucleus enhancer & javaw: "the parameter is incorrect"

    - by Riley
    I'm on Windows XP, using Eclipse and the DataNucleus enhancer for a GWT + GAE app. When I run the enhancer, I get an error:

        Error Thu Oct 21 16:33:57 CDT 2010
        Cannot run program "C:\Program Files\Java\jdk1.6.0_18\bin\javaw.exe" (in directory "C:\ag\dev"): CreateProcess error=87, The parameter is incorrect
        java.io.IOException: Cannot run program "C:\Program Files\Java\jdk1.6.0_18\bin\javaw.exe" (in directory "C:\ag\dev"): CreateProcess error=87, The parameter is incorrect
            at java.lang.ProcessBuilder.start(Unknown Source)
            at com.google.gdt.eclipse.core.ProcessUtilities.launchProcessAndActivateOnError(ProcessUtilities.java:213)
            at com.google.appengine.eclipse.core.orm.enhancement.EnhancerJob.runInWorkspace(EnhancerJob.java:154)
            at org.eclipse.core.internal.resources.InternalWorkspaceJob.run(InternalWorkspaceJob.java:38)
            at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
        Caused by: java.io.IOException: CreateProcess error=87, The parameter is incorrect
            at java.lang.ProcessImpl.create(Native Method)
            at java.lang.ProcessImpl.<init>(Unknown Source)
            at java.lang.ProcessImpl.start(Unknown Source)
            ... 5 more

    I've had this problem before, and it was due to a long classpath. I just spent an hour and a half shortening my classpath by moving libraries around and even moving my Eclipse install, but with no luck. Any ideas about where I should start looking for an answer? The error message doesn't include any information about what directory it's in or anything, which is kind of infuriating. Is it possible to make the output of javaw more verbose? Is it possible to get around this classpath-size bug?

    Read the article

  • Missing line numbers in stack trace even though the PDB files are included

    - by Farzad
    This is driving me nuts. I have a web service implemented with C# using VS 2008, and I publish it on IIS. I have modified the release build so the PDB files are copied along with the DLLs into the target directory under inetpub, and the web.config file has debug=true. Then I call a web service method that throws an exception, and the stack trace does not contain the line numbers. I have no idea what I am missing here; any ideas?

    Additional Info: If I run the web app using the VS built-in web server, it works and I get line numbers in the stack trace. But if I copy the same files (PDB and DLL) that the VS built-in web server is using to IIS, the line numbers are still missing from the stack trace. It seems that there is something related to IIS that ignores the PDB files!

    Update: When I publish to IIS, all the PDB files are published under the bin directory and everything looks fine. But when I go to "C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files", under the specific directory related to my project, I can see that the assembly (.dll) files are all there, but there are no PDB files. This does not happen if I run the project using the VS built-in web server. If I copy the PDB files manually to the temp folder, I can see the line numbers. Any idea why the PDB files are not copied to the temp folder? By the way, when I attach to the worker process it says "Symbols loaded".

    Read the article

  • Installing Silverstripe on 000webhost.com (free web host)

    - by benwad
    I'm trying to learn how Silverstripe works, so I extracted the tar file to my free hosting account. I then went to install.php and edited the permissions to meet the requirements set out in install.php, but I still get two warnings from the 'webserver configuration' section:

        I can't tell what webserver you are running. Without Apache I can't tell if mod_rewrite is enabled.
        I can't tell whether mod_rewrite is running. You may need to configure a rewriting rule yourself.

    I looked in phpinfo() and mod_rewrite appears to be installed. I contacted the web host and they said it was to do with virtual directory paths, and that I should add 'RewriteBase /' to the top of my .htaccess file in the public_html directory. However, I did this and still had the same problem. The install.php script says that I can install even with these warnings, but when I press 'install' it just refreshes the install.php page. It doesn't even overwrite the .htaccess file. 000webhost.com says they have successfully installed Silverstripe on their user accounts without much configuration, but I can't seem to find out how.

    EDIT: I managed to get to the next page, but now there is another warning which is stopping the install:

        Friendly URLs are not working. This is most likely because mod_rewrite isn't configured correctly on your site. Please check the following things in your Apache configuration; you may need to get your web host or server administrator to do this for you:
            * mod_rewrite is enabled
            * AllowOverride All is set for your directory

    I also get this error message from the server:

        Warning: unlink(mysite/_config.php) [function.unlink]: Permission denied in /home/a2716553/public_html/install.php on line 701

    Read the article

  • Problem building STLport NDK r5/ Android

    - by user558299
    I'm trying to build STLport for Android. I followed these steps, but they are not working:

        1 - Clone the STLport repository using: git clone git://stlport.git.sourceforge.net/gitroot/stlport/stlport
        2 - Configure the environment using: ./configure --target=arm-eabi --with-extra-cxxflags="-fshort-enums" --with-extra-cflags="-fshort-enums"
        3 - From the src directory, build it using: make SYSROOT="{MY NDK path}/platforms/android-5/arch-arm/" release-static

    But I got the following errors:

        In file included from ../stlport/stl/_alloc.h:45,
                         from ../stlport/memory:29,
                         from dll_main.cpp:41:
        ../stlport/stl/_new.h:45:24: error: new: No such file or directory
        In file included from ../stlport/stl/_limits.h:36,
                         from ../stlport/limits:29,
                         from dll_main.cpp:48:
        ../stlport/stl/_cwchar.h:26:30: error: cstddef: No such file or directory
        In file included from ../stlport/stl/_utility.h:35,
                         from ../stlport/utility:35,
                         from dll_main.cpp:40:
        ../stlport/type_traits:889: error: 'declval' was not declared in this scope
        ../stlport/type_traits:889: error: expected primary-expression before '>' token
        ../stlport/type_traits:889: error: expected primary-expression before ')' token
        ../stlport/type_traits:889: error: 'declval' was not declared in this scope
        ../stlport/type_traits:889: error: expected primary-expression before '>' token
        ../stlport/type_traits:889: error: expected primary-expression before ')' token
        ../stlport/type_traits:889: error: ISO C++ forbids declaration of 'decltype' with no type
        ../stlport/type_traits:889: error: ISO C++ forbids in-class initialization of non-const static member 'decltype'
        ../stlport/type_traits:889: error: template declaration of 'int std::tr1::detail::decltype'
        ../stlport/type_traits:942: error: ISO C++ forbids declaration of 'decltype' with no type
        ../stlport/type_traits:942: error: ISO C++ forbids in-class initialization of non-const static member 'decltype'
        ../stlport/type_traits:942: error: template declaration of 'int std::tr1::detail::decltype'
        make: *** [obj/arm-eabi-gcc/so/dll_main.o] Error 1

    Is there any include dir or configuration I'm missing? Thanks, Sergio

    Read the article

  • How do I implement repository pattern and unit of work when dealing with multiple data stores?

    - by Jason
    I have a unique situation where I am building a DDD-based system that needs to use both Active Directory and a SQL database for persistence. Initially this wasn't a problem because our design was set up with a unit of work that looked like this:

        public interface IUnitOfWork
        {
            void BeginTransaction();
            void Commit();
        }

    and our repositories looked like this:

        public interface IRepository<T>
        {
            T GetByID();
            void Save(T entity);
            void Delete(T entity);
        }

    In this setup our load and save would handle the mapping between both data stores because we wrote it ourselves. The unit of work would handle transactions and would contain the LINQ to SQL data context that the repositories would use for persistence. The Active Directory part was handled by a domain service implemented in infrastructure and consumed by the repositories in each Save() method. Save() was responsible for interacting with the data context to do all the database operations. Now we are trying to adapt it to Entity Framework and take advantage of POCO. Ideally we would not need the Save() method on the repository, because the domain objects are being tracked by the object context; we would just need a Save() method on the unit of work to have the object context save the changes, and a way to register new objects with the context. The new proposed design looks more like this:

        public interface IUnitOfWork
        {
            void BeginTransaction();
            void Save();
            void Commit();
        }

        public interface IRepository<T>
        {
            T GetByID();
            void Add(T entity);
            void Delete(T entity);
        }

    This solves the data access problem for Entity Framework, but does not solve the problem with our Active Directory integration. Before, it lived in the Save() method on the repository, but now it has no home; the unit of work knows nothing other than the Entity Framework data context. Where should this logic go? I would argue this design only works if you have a single data store using Entity Framework. Any ideas how to best approach this issue?
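
    One option worth sketching is to keep the Active Directory write behind the unit of work rather than inside each repository, so a single Save() still coordinates both stores. The code below is only an illustration of that idea; IDirectoryWriter, EfUnitOfWork and RegisterForDirectorySync are invented names for this sketch, not types from the question or from Entity Framework:

        // Hypothetical sketch: the unit of work owns both the EF object context and an
        // Active Directory writer, and flushes them together when Save() is called.
        using System.Collections.Generic;
        using System.Data.Objects;

        public interface IDirectoryWriter
        {
            // Pushes the AD-relevant changes for one entity; implemented in infrastructure.
            void Apply(object entity);
        }

        public class EfUnitOfWork : IUnitOfWork
        {
            private readonly ObjectContext _context;
            private readonly IDirectoryWriter _directoryWriter;
            private readonly List<object> _pendingDirectorySync = new List<object>();

            public EfUnitOfWork(ObjectContext context, IDirectoryWriter directoryWriter)
            {
                _context = context;
                _directoryWriter = directoryWriter;
            }

            // Repositories call this instead of talking to Active Directory themselves.
            public void RegisterForDirectorySync(object entity)
            {
                _pendingDirectorySync.Add(entity);
            }

            public void BeginTransaction() { /* open a TransactionScope, etc. */ }

            public void Save()
            {
                _context.SaveChanges();                  // flush the SQL side first
                foreach (var entity in _pendingDirectorySync)
                    _directoryWriter.Apply(entity);      // then replay the AD-relevant changes
                _pendingDirectorySync.Clear();
            }

            public void Commit() { /* complete the transaction */ }
        }

    With something like this, Add() on a repository registers the entity with the object context and, where relevant, calls RegisterForDirectorySync(), so the repositories stay ignorant of which store ultimately receives the change.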

    Read the article

  • Setting project for eclipse using maven

    - by egaga
    I'm trying to start modifying an existing application with Eclipse. Actually I had it working before, but I deleted the project, and now with "mvn eclipse:eclipse" I get the following:

        [INFO] Resource directory's path matches an existing source directory. Resources will be merged with the source directory src/main/resources
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Request to merge when 'filtering' is not identical. Original=resource src/main/resources: output=target/classes, include=[atlassian-plugin.xml], exclude=[**/*.java], test=false, filtering=true, merging with=resource src/main/resources: output=target/classes, include=[], exclude=[atlassian-plugin.xml|**/*.java], test=false, filtering=false
        [INFO] ------------------------------------------------------------------------
        [INFO] Trace
        org.apache.maven.lifecycle.LifecycleExecutionException: Request to merge when 'filtering' is not identical. Original=resource src/main/resources: output=target/classes, include=[atlassian-plugin.xml], exclude=[**/*.java], test=false, filtering=true, merging with=resource src/main/resources: output=target/classes, include=[], exclude=[atlassian-plugin.xml|**/*.java], test=false, filtering=false
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:583)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:512)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:482)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:330)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:291)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:142)
            at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:336)
            at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:129)
            at org.apache.maven.cli.MavenCli.main(MavenCli.java:287)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
            at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
            at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
            at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
        Caused by: org.apache.maven.plugin.MojoExecutionException: Request to merge when 'filtering' is not identical. Original=resource src/main/resources: output=target/classes, include=[atlassian-plugin.xml], exclude=[**/*.java], test=false, filtering=true, merging with=resource src/main/resources: output=target/classes, include=[], exclude=[atlassian-plugin.xml|**/*.java], test=false, filtering=false
            at org.apache.maven.plugin.eclipse.EclipseSourceDir.merge(EclipseSourceDir.java:302)
            at org.apache.maven.plugin.eclipse.EclipsePlugin.extractResourceDirs(EclipsePlugin.java:1605)
            at org.apache.maven.plugin.eclipse.EclipsePlugin.buildDirectoryList(EclipsePlugin.java:1490)
            at org.apache.maven.plugin.eclipse.EclipsePlugin.createEclipseWriterConfig(EclipsePlugin.java:1180)
            at org.apache.maven.plugin.eclipse.EclipsePlugin.writeConfiguration(EclipsePlugin.java:1043)
            at org.apache.maven.plugin.ide.AbstractIdeSupportMojo.execute(AbstractIdeSupportMojo.java:511)
            at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:451)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:558)
            ... 16 more

    Read the article

  • File-level filesystem change notification in Mac OS X

    - by Paul J. Lucas
    I want my code to be notified when any file under (either directly or indirectly) a given directory is modified. By "modified", I mean I want my code to be notified whenever a file's contents are altered, it is renamed, or it is deleted, or when a new file is added. For my application, there can be thousands of files. I looked at FSEvents, but its Technology Overview says, in part:

        The important point to take away is that the granularity of notifications is at a directory level. It tells you only that something in the directory has changed, but does not tell you what changed.

    It also says:

        The file system events API is also not designed for finding out when a particular file changes. For such purposes, the kqueues mechanism is more appropriate.

    However, in order to use kqueue on a given file, one has to open the file to obtain a file descriptor. It's impractical to manage thousands of file descriptors (and it would probably exceed the maximum allowable number of open file descriptors anyway). Curiously, under Windows I can use the ReadDirectoryChangesW() function, and it does precisely what I want. So how can one do what I want under Mac OS X? Or, asked another way: how would one go about writing the equivalent of ReadDirectoryChangesW() for Mac OS X in user space (and do so very efficiently)?
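
    For comparison, the behavior being asked for is what ReadDirectoryChangesW exposes on Windows; .NET's FileSystemWatcher is, to my knowledge, built on that same call, so a recursive watcher there makes the target behavior concrete (this is shown only as the Windows-side baseline, not as an answer for OS X):

        // Windows-side illustration of per-file change notification over a whole tree.
        using System;
        using System.IO;

        class WatchTree
        {
            static void Main()
            {
                var watcher = new FileSystemWatcher(@"C:\watched-root")   // path is just an example
                {
                    IncludeSubdirectories = true,                         // whole tree, not one directory
                    NotifyFilter = NotifyFilters.FileName
                                 | NotifyFilters.DirectoryName
                                 | NotifyFilters.LastWrite
                };

                watcher.Changed += (s, e) => Console.WriteLine("changed: " + e.FullPath);
                watcher.Created += (s, e) => Console.WriteLine("created: " + e.FullPath);
                watcher.Deleted += (s, e) => Console.WriteLine("deleted: " + e.FullPath);
                watcher.Renamed += (s, e) => Console.WriteLine("renamed to: " + e.FullPath);

                watcher.EnableRaisingEvents = true;
                Console.ReadLine();   // keep the process alive while events arrive
            }
        }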

    Read the article

  • How to install DBD::mysql on OS X Server 10.6?

    - by Zoran Simic
    Trying to install DBD::mysql on OS X Server 10.6 (Mac mini server), but I'm missing the mysql headers, apparently. Since mysql is already part of OS X Server 10.6, I would like to NOT install anything else (no Fink or DarwinPorts installs), just whatever's needed to get DBD::mysql installed and working. Do you know how I could do that? Do I have to install the headers somewhere? And if so, where? (Again: I don't want to install another version of mysql on the box; I want to use the version it came with.) Is there a way to install DBD::mysql without compiling any C files? This is the error I get (the actual error is much longer, but these are the most meaningful bits; this is the first error reported):

        Checking if your kit is complete...
        Looks good
        Unrecognized argument in LIBS ignored: '-pipe'
        Note (probably harmless): No library found for -lmysqlclient
        Multiple copies of Driver.xst found in:
          /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/
          /System/Library/Perl/Extras/5.10.0/darwin-thread-multi-2level/auto/DBI/ at Makefile.PL line 907
        Using DBI 1.611 (for perl 5.010000 on darwin-thread-multi-2level) installed in /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/
        Writing Makefile for DBD::mysql
        cp lib/DBD/mysql.pm blib/lib/DBD/mysql.pm
        cp lib/DBD/mysql/GetInfo.pm blib/lib/DBD/mysql/GetInfo.pm
        cp lib/DBD/mysql/INSTALL.pod blib/lib/DBD/mysql/INSTALL.pod
        cp lib/Bundle/DBD/mysql.pm blib/lib/Bundle/DBD/mysql.pm
        gcc-4.2 -c -I/Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI -I/usr/include -fno-omit-frame-pointer -pipe -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDBD_MYSQL_INSERT_ID_IS_GOOD -g -arch x86_64 -arch i386 -arch ppc -g -pipe -fno-common -DPERL_DARWIN -fno-strict-aliasing -I/usr/local/include -Os -DVERSION=\"4.014\" -DXS_VERSION=\"4.014\" "-I/System/Library/Perl/5.10.0/darwin-thread-multi-2level/CORE" dbdimp.c
        In file included from dbdimp.c:20:
        dbdimp.h:22:49: error: mysql.h: No such file or directory
        dbdimp.h:23:45: error: mysqld_error.h: No such file or directory
        dbdimp.h:25:49: error: errmsg.h: No such file or directory

    Read the article

  • Azure : The process cannot access the file "" because it is being used by another process.

    - by Shantanu
    I am trying to get a MATLAB-compiled exe running on the Azure cloud, and for that purpose I need to get a v78.zip onto the local storage of the cloud and unzip it before I can try to run the exe. The program works fine when executed locally, but on deployment gives an error at the line marked below in the code. The error is:

        The process cannot access the file 'C:\Resources\directory\cc0a20f5c1314f299ade4973ff1f4cad.WebRole.LocalStorage1\v78.zip' because it is being used by another process.
        Exception Details: System.IO.IOException: The process cannot access the file 'C:\Resources\directory\cc0a20f5c1314f299ade4973ff1f4cad.WebRole.LocalStorage1\v78.zip' because it is being used by another process.

    The code is given below:

        string localPath = RoleEnvironment.GetLocalResource("LocalStorage1").RootPath;
        Response.Write(localPath + " \n");
        Directory.SetCurrentDirectory(localPath);
        CloudBlob mblob = GetProgramContainer().GetBlobReference("v78.zip");
        CloudBlockBlob mbblob = mblob.ToBlockBlob;
        CloudBlob zipblob = GetProgramContainer().GetBlobReference("7z.exe");
        string zipPath = Path.Combine(localPath, "7z.exe");
        string matlabPath = Path.Combine(localPath, "v78.zip");
        IEnumerable<ListBlockItem> blocklist = mbblob.DownloadBlockList();
        BlobStream stream = mbblob.OpenRead();
        FileStream fs = File.Create(matlabPath);   // (Exception occurs here)

    It would be a great help if someone could tell me where I'm going wrong. Thanks! Shan
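
    One thing the snippet never does is close the BlobStream or the FileStream it opens, so if this code runs more than once on the same role instance (a second request, or a retry after a partial failure), an earlier handle can still be holding v78.zip open when File.Create is called again. Whether that is the actual cause here is only a guess, but a defensive sketch that disposes both streams, using the same System.IO and StorageClient types already referenced in the question, would look like this:

        // Sketch only: copy the blob to local storage and release both handles afterwards.
        string matlabPath = Path.Combine(localPath, "v78.zip");
        CloudBlob mblob = GetProgramContainer().GetBlobReference("v78.zip");

        using (Stream blobStream = mblob.OpenRead())
        using (FileStream fileStream = File.Create(matlabPath))
        {
            byte[] buffer = new byte[64 * 1024];
            int read;
            while ((read = blobStream.Read(buffer, 0, buffer.Length)) > 0)
            {
                fileStream.Write(buffer, 0, read);
            }
        }
        // Both streams are closed here, so unzipping (or a later retry) can open v78.zip freely.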

    Read the article

  • Node.js/ v8: How to make my own snapshot to accelerate startup

    - by Anand
    I have a node.js (v0.6.12) application that starts by evaluating a JavaScript file, startup.js. It takes a long time to evaluate startup.js, and I'd like to 'bake it in' to a custom build of Node if possible. The v8 source directory distributed with Node, node/deps/v8/src, contains a SConscript that can almost be used to do this. On line 302, we have:

        LIBRARY_FILES = '''
        runtime.js v8natives.js array.js string.js uri.js math.js
        messages.js apinatives.js date.js regexp.js json.js
        liveedit-debugger.js mirror-debugger.js debug-debugger.js
        '''.split()

    Those JavaScript files are present in the same directory. Something in the build process apparently evaluates them, takes a snapshot of state, and saves it as a byte string in node/out/Release/obj/release/snapshot.cc (on Mac OS). Some customization of the startup snapshot is possible by altering the SConscript. For example, I can change the definition of the builtin Date.toString by altering date.js. I can even add new global variables by adding startup.js to the list of library files, with contents global.test = 1. However, I can't put just any JavaScript code in startup.js. If it contains Date.toString = 1;, an error results even though the code is valid at the node REPL:

        Build failed: -> task failed (err #2): {task: libv8.a SConstruct -> libv8.a}
        make: *** [program] Error 1

    And it obviously can't make use of code that depends on libraries Node adds to v8; global.underscore = require('underscore'); causes the same error. I'd ideally like a tool, customSnapshot, where customSnapshot startup.js evaluates startup.js with node and then dumps a snapshot to a file, snapshot.cc, which I can put into the node source directory. I can then build node and tell it not to rebuild the snapshot.

    Read the article

  • Problem Linking Boost Filesystem Library in Microsoft Visual C++

    - by Scott
    I am having trouble getting my project to link to the Boost (version 1.37.0) Filesystem lib file in Microsoft Visual C++ 2008 Express Edition. The Filesystem library is not a header-only library. I have been following the Getting Started on Windows guide posted on the official Boost web page. Here are the steps I have taken:

        1. I used bjam to build the complete set of lib files using: bjam --build-dir="C:\Program Files\boost\build-boost" --toolset=msvc --build-type=complete
        2. I copied the /libs directory (located in C:\Program Files\boost\build-boost\boost\bin.v2) to C:\Program Files\boost\boost_1_37_0\libs.
        3. In Visual C++, under Project Properties > Additional Library Directories, I added these paths:
               C:\Program Files\boost\boost_1_37_0\libs
               C:\Program Files\boost\boost_1_37_0\libs\filesystem\build\msvc-9.0express\debug\link-static\threading-multi
           I added the second one out of desperation; it is the exact directory where libboost_system-vc90-mt-gd-1_37.lib resides.
        4. Under Configuration Properties > C/C++ > General > Additional Include Directories, I added the following path: C:\Program Files\boost\boost_1_37_0
        5. Then, to put the icing on the cake, under Tools > Options > VC++ Directories > Library files, I added the same directories mentioned in step 3.

    Despite all this, when I build my project I get the following error:

        fatal error LNK1104: cannot open file 'libboost_system-vc90-mt-gd-1_37.lib'

    Additionally, here is the code that I am attempting to compile, as well as a screen shot of the aforementioned directory where the (assumedly correct) lib file resides:

        #include "boost/filesystem.hpp" // includes all needed Boost.Filesystem declarations
        #include <iostream>             // for std::cout
        using boost::filesystem;        // for ease of tutorial presentation;
                                        // a namespace alias is preferred practice in real code
        using namespace std;

        int main()
        {
            cout << "Hello, world!" << endl;
            return 0;
        }

    Can anyone help me out? Let me know if you need to know anything else. As always, thanks in advance.

    Read the article

  • Maven String Replace of Text Web Resources

    - by Jaco van Niekerk
    I have a Maven web application with text files in src/main/webapp/textfilesdir. As I understand it, during the package phase this textfilesdir directory will be copied into the target/project-1.0-SNAPSHOT directory, which is then zipped up into target/project-1.0-SNAPSHOT.war.

    Problem: Now I need to do a string replacement on the contents of the text files in target/project-1.0-SNAPSHOT/textfilesdir. This must be done after textfilesdir is copied into target/project-1.0-SNAPSHOT, but prior to the target/project-1.0-SNAPSHOT.war file being created. I believe this is all done during the package phase. How can a plugin (potentially maven-antrun-plugin) plug into the package phase to do this? The text files don't contain properties like ${property-name} to filter on; string replacement is likely the only option.

    Options:

        1. Modify the text files after the copy into the target/project-1.0-SNAPSHOT directory, yet prior to the WAR creation.
        2. After packaging, extract the text files from the WAR, modify them, and add them back into the WAR.

    I'm thinking there is another option here I'm missing. Thoughts, anyone?

    Read the article

  • Clarification needed: How does .NET runtime resolve assembly references from parent folder?

    - by aoven
    I have the following output structure of executables in my solution:

        %ProgramFiles%
         |
         +-[MyAppName]
            |
            +-[Client]
            |  |
            |  +-(EXE & several DLL assemblies)
            |
            +-[Common]
            |  |
            |  +-[Schema Assemblies]
            |  |  |
            |  |  +-(several DLL assemblies)
            |  |
            |  +-(several DLL assemblies)
            |
            +-[Server]
               |
               +-(EXE & several DLL assemblies)

    Each project in the solution references different DLL assemblies, some of which are outputs from other projects in the solution, and others are plain 3rd-party assemblies. For example, the [Client] EXE might reference an assembly in [Common], which is in a different directory branch. All references have "Copy Local" set to false, to mirror the layout of the files in the final installed application. Now, if I take a look at reference properties in the Visual Studio IDE, I see that the "Path" of every reference is absolute and that it corresponds to the actual output location of the assembly. That's understandable and correct. As expected, the solution compiles and runs just fine. What I don't understand is why everything seems to work even when I close the IDE, rename the [MyAppName] directory and run the [Client] EXE manually. How does the runtime find the assemblies if the reference paths aren't the same as they were at the time of linking? To be clear, this is actually exactly what I'm after: a semi-dispersed set of application files that run fine regardless of where the [MyAppName] directory is located or even what it's named. I'd just like to know how and why this works without any specific path resolution on my part. I've read the answers to this similar question, but I still don't get it. Help much appreciated!
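
    For what it's worth, the reference "Path" shown in the IDE is only used at build time; the compiled assemblies record assembly names (name, version, culture, public key token), not paths, and at run time the CLR resolves those names on its own (GAC, then probing under the application base, plus any hooks the application installs). Whether this particular solution is relying on probing, the GAC, or an explicit hook cannot be told from the question alone, but a hand-rolled resolver for a sibling [Common] folder would look roughly like the hypothetical sketch below (the class and folder names are made up for illustration):

        // Hypothetical sketch of an AssemblyResolve hook that searches a sibling
        // "Common" directory next to the EXE's folder. Not taken from the question.
        using System;
        using System.IO;
        using System.Reflection;

        static class SiblingFolderResolver
        {
            public static void Install()
            {
                AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
                {
                    // args.Name is a full assembly name; keep only the simple name.
                    string simpleName = new AssemblyName(args.Name).Name;

                    string exeDir = AppDomain.CurrentDomain.BaseDirectory;     // e.g. ...\MyAppName\Client
                    string commonDir = Path.Combine(exeDir, @"..\Common");      // e.g. ...\MyAppName\Common
                    string candidate = Path.GetFullPath(
                        Path.Combine(commonDir, simpleName + ".dll"));

                    return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
                };
            }
        }

    Because the lookup is relative to the EXE's own base directory, a scheme like this (and likewise plain probing) keeps working no matter where the [MyAppName] tree is moved or what it is renamed to.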

    Read the article

  • nested include in php

    - by aeonsleo
    The directory structure:

        C:/wamp/www/application/model/data_access/data_object.php
        C:/wamp/www/application/model/users/user.class.php
        C:/wamp/www/application/controller/projects.php
        C:/wamp/www/application/controller/links/links.php

    I have two PHP files, data_object.php and user.class.php. user.class.php has an include statement for data_object.php which is relative to user.class.php. These two files are under different directory hierarchies. Now I have to include user.class.php in various files (like projects.php and links.php, which are themselves under different hierarchies) whenever I want to create a User() object. The problem is that the relative path for the inclusion of data_object.php works for, say, projects.php, but if I open links.php the error message says it could not open data_object.php in user.class.php. What I think is happening is that for the relative inclusion of data_object.php, PHP is considering the path of the file in which user.class.php is included. I am facing such problems in more than one scenario. I have to keep my directory structure the way it is, but I need to find a way to work with nested includes. I used the document root of the session, which gives the root path as C:/wamp/www/, and I appended it to the data_object.php include path, but this is not working. (Note: the forward slash is present after www.) I am currently running on WAMP server's localhost, but after completion I have to host the solution on a domain. Please help.

    Read the article

  • Security implications of writing files using PHP

    - by susmits
    I'm currently trying to create a CMS using PHP, purely in the interest of education. I want the administrators to be able to create content, which will be parsed and saved on the server storage in pure HTML form to avoid the overhead that executing PHP script would incur. Unfortunately, I could only think of a few ways of doing so:

        1. Setting write permission on every directory where the CMS should want to write a file. This sounds like quite a bad idea.
        2. Setting write permissions on a single cached directory. A PHP script could then include or fopen/fread/echo the content from a file in the cached directory at request time. This could perhaps be carried out in a Mediawiki-esque fashion: something like index.php?page=xyz could read and echo content from cached/xyz.html at runtime. However, I'll need to ensure the sanity of $_GET['page'] to prevent nasty variations like index.php?page=http://www.bad-site.org/malicious-script.js.

    I'm personally not too thrilled by the second idea, but the first one sounds very insecure. Could someone please suggest a good way of getting this done?

    Read the article

  • Trying to Use LoadMoreElement in Monotouch.Dialog

    - by user1487581
    I am using MonoTouch to write an iPad app. The app uses tables to browse down through a directory tree and then select a file. I have used MonoTouch.Dialog to browse the directories, and I set up the directory tables as the app starts. However, there are too many files to set up in a table as the app starts, so I want to set up the 'file table' when a file is selected from the lowest-level directory table. I am trying to use LoadMoreElement to do this, but I cannot make it work or find any examples online. I have coded the 'Elements API Walkthrough' in the Xamarin tutorial at http://docs.xamarin.com/ios/tutorials/MonoTouch.Dialog and then added a new section to the code:

        _addButton.Clicked += (sender, e) => {
            ++n;
            var task = new Task { Name = "task " + n, DueDate = DateTime.Now };
            var taskElement = new RootElement (task.Name) {
                new Section () {
                    new EntryElement (task.Name, "Enter task description", task.Description)
                },
                new Section () {
                    new DateElement ("Due Date", task.DueDate)
                },
                new Section () {
                    new LoadMoreElement ("Passive", "Active", delegate { MyAction(); })
                }
            };
            _rootElement [0].Add (taskElement);
        };

    where MyAction is:

        public void MyAction()
        {
            Console.WriteLine ("we have been actioned");
        }

    The problem is that MyAction is triggered and Console.WriteLine writes the message, but the table stays in the active state and never returns to passive. The documentation says: "Once your code in the NSAction is finished, the UIActivity indicator stops animating and the normal caption is displayed again." What am I missing? Ian
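
    As I recall the MonoTouch.Dialog API (treat this as an assumption rather than a documented fact), LoadMoreElement does not flip itself back when the callback returns; it exposes an Animating property that the callback is expected to reset once the work is done. A sketch of that pattern, reusing the names from the question, would be:

        // Sketch: keep a reference to the element so the callback can stop the spinner.
        LoadMoreElement loadMore = null;
        loadMore = new LoadMoreElement ("Passive", "Active", delegate {
            MyAction ();                 // do the real work (build the file table, etc.)
            loadMore.Animating = false;  // assumed property: returns the element to its passive caption
        });

        var section = new Section () { loadMore };

    If the work is long-running, the same idea applies, with the Animating reset marshalled back to the main thread after the background work completes.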

    Read the article

  • How can I display an ASP.NET MVC html part from one application in another

    - by Frank Sessions
    We have several ASP.NET MVC apps in the following setup:

        SecurityApp (root application - handles forms auth for SSO and has a profile edit page)
        Application1 (virtual directory)
        Application2 (virtual directory)
        Application3 (virtual directory)

    so that domain.com points to SecurityApp and domain.com/Application1 etc. point to their associated virtual directories. All of our single sign-on (SSO) is working properly using forms authentication. Based on the user's permissions when logging in, a menu that lists their available applications and a logout link is generated and saved in the cache. This menu displays fine whenever the user is in SecurityApp (editing their profile), but we cannot figure out how to get the applications in the virtual directories to display the same application menu. We have tried:

        1. Using JSONP to make a request that returns the HTML for the menu. The AJAX call returns the HTML; however, because User.IsAuthenticated is false, the menu comes back empty.
        2. Creating a user control and including it along with the DLLs for the SecurityApp project. This works; however, we don't want to have to include all the DLLs for the SecurityApp project in every application that we create (along with all the app settings in the web.config).

    We would like this to be as simple as possible to implement, so that anyone creating a new app can add the menu to their application in as few steps as possible. Any ideas?
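
    One direction that keeps the other applications free of SecurityApp's DLLs is to fetch the menu server-side instead of from the browser, forwarding the user's forms-authentication cookie so that SecurityApp sees the request as authenticated. The helper below is only a sketch of that idea; the /Menu/Render URL and the SharedMenu name are made up for illustration, and caching and error handling are omitted:

        // Hypothetical helper: each child app calls this from a controller or view,
        // passing the current request so the forms auth cookie can be forwarded.
        using System.IO;
        using System.Net;
        using System.Web;
        using System.Web.Security;

        public static class SharedMenu
        {
            public static string Fetch(HttpRequestBase request)
            {
                var menuRequest = (HttpWebRequest)WebRequest.Create("http://domain.com/Menu/Render");
                menuRequest.CookieContainer = new CookieContainer();

                HttpCookie authCookie = request.Cookies[FormsAuthentication.FormsCookieName];
                if (authCookie != null)
                {
                    // Forward the SSO ticket so SecurityApp treats the request as the same user.
                    menuRequest.CookieContainer.Add(
                        new Cookie(authCookie.Name, authCookie.Value, "/", "domain.com"));
                }

                using (var response = menuRequest.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    return reader.ReadToEnd();   // raw HTML fragment for the menu
                }
            }
        }

    The child applications then only need this one small helper (or an equivalent shared assembly containing just it), rather than the whole SecurityApp project and its configuration.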

    Read the article

  • What is the purpose of the garbage (files) that Qt Creator auto-generates and how can I tame them?

    - by Venemo
    I'm fairly new to Qt, and I'm using the new Nokia Qt SDK beta to develop a small application for my Nokia N900 in my free time. Fortunately, I was able to set up everything correctly and to run my app on the device. I learned C++ in school, so I thought it wouldn't be so difficult. I use Qt Creator as my IDE, because the SDK doesn't work with Visual Studio. I also wish to port my app to Symbian, so I have run the emulator a few times, and I also compile for Windows to debug the most evil bugs. (The debugger doesn't work correctly on the device.) I come from a .NET background, so there are some things that I don't understand. When I hit the build button, Qt Creator generates a bunch of files in my project directory:

        moc_*.cpp files - I don't know their purpose. Could someone tell me?
        *.o files - I assume these are the object code.
        *.rss files - I don't know their purpose, but they definitely don't have anything to do with RSS.
        Makefile and Makefile.Debug - I have no idea.
        AppName (without extension) - the executable for Maemo, and AppName.sis - the executable for Symbian, I guess?
        AppName.loc - I have no idea.
        AppName_installer.pkg and AppName_template.pkg - I have no idea.
        qrc_Resources.cpp - I guess this is for my Qt resources.

    (where AppName is the name of the application in question)

    I noticed that these files can be safely deleted; Qt Creator simply regenerates them. The problem is that they pollute my source directory, especially because I use version control, and if they can be regenerated, there is no point in uploading them to SVN. So, could someone please tell me what the exact purpose of these files is, and how I can ask Qt Creator to place them in another directory? Thank you in advance!

    Read the article

  • Can someone help me with my Django localization?

    - by alex
    I have a template which has text in it. It's located in /templates under my project directory. I'm trying to do Japanese now. I created a directory called "locale" in my project directory. Then, I set this up in my settings:

        gettext = lambda s: s

        LANGUAGES = (
            ('de', gettext('German')),
            ('en', gettext('English')),
            ('ja', gettext('Japanese')),
        )

    After that, I run this command:

        django-admin.py makemessages -l ja

    The only problem is, this doesn't work! Isn't it supposed to scan my templates with the .html extension and grab all the strings? In my locale/ja/LC_MESSAGES/django.po I only get:

        # SOME DESCRIPTIVE TITLE.
        # Copyright (C) YEAR THE PACKAGE'S COPYRIGHT HOLDER
        # This file is distributed under the same license as the PACKAGE package.
        # FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
        #
        #, fuzzy
        msgid ""
        msgstr ""
        "Project-Id-Version: PACKAGE VERSION\n"
        "Report-Msgid-Bugs-To: \n"
        "POT-Creation-Date: 2010-05-20 22:45+0000\n"
        "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
        "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
        "Language-Team: LANGUAGE <[email protected]>\n"
        "MIME-Version: 1.0\n"
        "Content-Type: text/plain; charset=UTF-8\n"
        "Content-Transfer-Encoding: 8bit\n"

        #: settings.py:101
        msgid "German"
        msgstr ""

        #: settings.py:102
        msgid "English"
        msgstr ""

        #: settings.py:103
        msgid "Japanese"
        msgstr ""

    Read the article
