Search Results

Search found 27143 results on 1086 pages for 'include path'.

Page 239/1086 | < Previous Page | 235 236 237 238 239 240 241 242 243 244 245 246  | Next Page >

  • MSBuild: Add additional files to compile without altering the project file

    - by Craig Norton
    After looking around I can't find a simple answer to this problem. I am trying to create an MSBuild file that lets me easily use SpecFlow and NUnit within Visual Studio 2010 Express. The file below is not complete; this is just a proof of concept and it needs to be made more generic.

        <Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <PropertyGroup>
            <BuildDependsOn>
              BuildSolution;
              SpecFlow;
              BuildProject;
              NUnit;
            </BuildDependsOn>
          </PropertyGroup>
          <PropertyGroup>
            <Solution>C:\Users\Craig\Documents\My Dropbox\Cells\Cells.sln</Solution>
            <CSProject>C:\Users\Craig\Documents\My Dropbox\Cells\Configuration\Configuration.csproj</CSProject>
            <DLL>C:\Users\Craig\Documents\My Dropbox\Cells\Configuration\bin\Debug\Configuration.dll</DLL>
          </PropertyGroup>
          <Target Name="Build" DependsOnTargets="$(BuildDependsOn)">
          </Target>
          <Target Name="BuildSolution">
            <MSBuild Projects="$(Solution)" Properties="Configuration=Debug" />
          </Target>
          <Target Name="SpecFlow">
            <Exec Command="SpecFlow generateall $(CSProject)" />
          </Target>
          <Target Name="BuildProject">
            <MSBuild Projects="$(CSProject)" Properties="Configuration=Debug" />
          </Target>
          <Target Name="NUnit">
            <Exec Command='NUnit /run "$(DLL)"' />
          </Target>
        </Project>

    The SpecFlow task looks in the .csproj file and creates a SpecFlowFeature1.feature.cs. I need to include this file when building the .csproj so that NUnit can use it. I know I could modify the .csproj file (either directly or on a copy) to include the generated file, but I'd prefer to avoid this. My question is: is there a way to use the MSBuild task to build the project file and tell it to include an additional file in the build? Thank you.

    Read the article

  • Howto - Running Redmine on Mongrel as a service on Windows

    - by Achilles
    I use Redmine on Mongrel as a project manager, and I use a batch file (start-redmine.bat) to start Redmine in Mongrel. There are two issues with my setup:

    1. I have a running IIS on the server that occupies the HTTP port (80).
    2. The start-redmine.bat must be periodically checked to see if it has stopped after a restart caused by the Windows Update service.

    For the first issue I have no choice but to run Mongrel on a port like 3000, and for the second issue I have to create a Windows service that runs automatically in the background when Windows starts; and here comes the trouble! There are at least three ways to run Redmine as a service that I'm aware of; none of them can satisfy a performance view on this subject. You can read about them at http://stackoverflow.com/questions/877943/how-to-configure-a-rails-app-redmine-to-run-as-a-service-on-windows - I tried them all. The easiest way to set up such a service is the mongrel_service approach; in three lines of command you're done, but the performance is significantly lower than running that batch file.

    Now I want to show you my approach. First, suppose we have Ruby installed into c:\ruby and we have issued the command "gem install mongrel" to get the mongrel gem installed into c:\ruby\bin. Also suppose we have installed Redmine into a folder like c:\redmine, and we have Ruby's path (i.e. c:\ruby\bin) in our PATH environment variable.

    1. Download and install the Windows NT Resource Kit Tools from the Microsoft website.
    2. Open the command-line tool that comes with the Resource Kit (from the Start menu).
    3. Use instsrv to install a dummy service called Redmine using the following command:

        "[path-to-instsrv.exe]\instsrv" Redmine "[path-to-srvany.exe]\srvany.exe"

       In my case (which is the default case) it was something like this:

        "C:\Program Files\Windows Resource Kits\Tools\instsrv" Redmine "C:\Program Files\Windows Resource Kits\Tools\srvany.exe"

    4. Now create the batch file. Open Notepad, paste these instructions into it, and save it as "c:\redmine\start-redmine.bat":

        @echo off
        cd c:\redmine\
        mongrel_rails start -a 0.0.0.0 -p 3000 -e production

    5. Now we need to configure the dummy service we created before. WATCH OUT WHAT YOU'RE DOING FROM HERE ON, OR YOU MAY CORRUPT YOUR WINDOWS. To configure the service, open the Windows registry editor (Start - Run - regedit) and navigate to this node:

        HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Redmine

    6. Right-click on the "Redmine" node and, using the context menu, create a new key called Parameters (New - Key).
    7. Right-click on "Parameters" and create a String Value property called Application. Do this again and create another String Value called AppParameters.
    8. Double-click on "Application" and put cmd.exe into the "Value data" section. Then double-click on "AppParameters" and put /C "C:\redmine\start-redmine.bat" into the "Value data" section.

    We're done! Issue this command to run Redmine on Mongrel as a service:

        net start Redmine

    Read the article

  • How do I configure Mercurial to use environment variables in mercurial.ini

    - by Coda
    How can I modify the mercurial.ini file to include an environment variable such as %userprofile%? Specific situation: I am learning to use Mercurial. I have modified the [ui] section of Mercurial.ini (in my home path) to include:

        ignore = c:\users\user\.hgignore

    where "user" is my username literal. The .hgignore file includes filters that ignore the filenames correctly at commit time. But how can I alter it from being a literal user name to an environment variable such as $user?

    Read the article

  • VS 2010 Coded UI Test - Launch Referenced Application

    - by Cory
    I'm using Visual Studio's Coded UI Tests to run automated UI tests on a WPF application every time a build runs on my TFS server. The problem I am running into is dynamically launching the executable based on the path where it was just built to, including the configuration (x86, x64). Is there any way to get the path to an executable in a referenced project so that I can launch the application dynamically from my test project?
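    One workaround, sketched below under stated assumptions: rather than resolving the referenced project's output path at run time, deploy the built executable alongside the test run and launch it from the deployment directory. "MyWpfApp" and the relative DeploymentItem path are placeholders, the Release segment would need to match whatever configuration the TFS build actually produces, and test deployment has to be enabled in the test settings.

        // Minimal Coded UI sketch (hypothetical names); the exe is deployed with the
        // test run, so its location no longer depends on the referenced project's path.
        using System.IO;
        using Microsoft.VisualStudio.TestTools.UITesting;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [CodedUITest]
        public class SmokeTests
        {
            public TestContext TestContext { get; set; }

            [TestMethod]
            [DeploymentItem(@"..\MyWpfApp\bin\Release\MyWpfApp.exe")]
            public void ApplicationLaunches()
            {
                // DeploymentDirectory is where MSTest copied the DeploymentItem to.
                string exePath = Path.Combine(TestContext.DeploymentDirectory, "MyWpfApp.exe");
                ApplicationUnderTest app = ApplicationUnderTest.Launch(exePath);
                Assert.IsNotNull(app);
            }
        }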

    Read the article

  • Mercurial client error 255 and HTTP error 404 when attempting to push large files to server

    - by coderunner
    Problem: When attempting to push a changeset that contains 6 large files (.exe, .dmg, etc.) to my remote server, my client (MacHG) reports the error: "Error During Push. Mercurial reported error number 255: abort: HTTP Error 404: Not Found". What does the error even mean?! The only thing unique (that I can tell) about this commit is the size, type, and filenames of the files. How can I determine which exact file within the changeset is failing? How can I delete the corrupt changeset from the repository? Someone reported using the "mq" extension, but it looks overly complicated for what I'm trying to achieve.

    Background: I can push and pull the following: source files, directories, .class files and a .jar file to and from the server, using both MacHG and TortoiseHG. I successfully committed to my local repository the addition, for the first time, of the 6 large .exe, .dmg, etc. installer files (about 130 MB total). In the following commit to my local repository, I removed ("untracked" / forget) the 6 files causing the problem; however, the previous (failing) changeset is still queued to be pushed to the server (i.e. my local host is trying to push the "add" and then the "remove" to the remote server, in keeping with the "keep everything in history" philosophy of the source control system). I can commit .txt and .java files etc. using TortoiseHG from Windows PCs. I haven't actually tested committing or pushing the same large files using TortoiseHG. Please help!

    Setup: Client applications = MacHG v0.9.7 (SCM 1.5.4) and TortoiseHG v1.0.4 (SCM 1.5.4). Server = HTTPS, IIS 7.5, Mercurial 1.5.4, Python 2.6.5, set up using these instructions: http://www.jeremyskinner.co.uk/mercurial-on-iis7/ In IIS 7.5 the CGI handler is configured to handle ALL verbs (not just GET, POST and HEAD).

    My hgweb.cgi file on the server is as follows:

        #!/usr/bin/env python
        #
        # An example hgweb CGI script, edit as necessary

        # Path to repo or hgweb config to serve (see 'hg help hgweb')
        #config = "/path/to/repo/or/config"

        # Uncomment and adjust if Mercurial is not installed system-wide:
        #import sys; sys.path.insert(0, "/path/to/python/lib")

        # Uncomment to send python tracebacks to the browser if an error occurs:
        #import cgitb; cgitb.enable()

        from mercurial import demandimport; demandimport.enable()
        from mercurial.hgweb import hgweb, wsgicgi
        application = hgweb('C:\inetpub\wwwroot\hg\hgweb.config')
        wsgicgi.launch(application)

    My hgweb.config file on the server is as follows:

        [collections]
        C:\Mercurial Repositories = C:\Mercurial Repositories

        [web]
        baseurl = /hg
        allow_push = usernamea
        allow_push = usernameb

    Read the article

  • Extracting pair member in lambda expressions and template typedef idiom

    - by Nazgob
    Hi, I have some complex types here, so I decided to use a nifty trick to have a typedef on templated types. Then I have a class some_container that has a container as a member. The container is a vector of pairs composed of an element and a vector. I want to write a std::find_if call with a lambda expression to find elements that have a certain value. To get the value I have to call first on the pair and then get value from the element. Below my std::find_if there is a normal loop that does the trick. My lambda fails to compile. How do I access value inside element, which is inside the pair? I use g++ 4.4+ and VS 2010 and I want to stick to Boost lambda for now.

        #include <vector>
        #include <algorithm>
        #include <boost\lambda\lambda.hpp>
        #include <boost\lambda\bind.hpp>

        template<typename T>
        class element
        {
        public:
            T value;
        };

        template<typename T>
        class element_vector_pair // idiom to have templated typedef
        {
        public:
            typedef std::pair<element<T>, std::vector<T> > type;
        };

        template<typename T>
        class vector_containter // idiom to have templated typedef
        {
        public:
            typedef std::vector<typename element_vector_pair<T>::type > type;
        };

        template<typename T>
        bool operator==(const typename element_vector_pair<T>::type & lhs,
                        const typename element_vector_pair<T>::type & rhs)
        {
            return lhs.first.value == rhs.first.value;
        }

        template<typename T>
        class some_container
        {
        public:
            element<T> get_element(const T& value) const
            {
                std::find_if(container.begin(), container.end(),
                    bind(&typename vector_containter<T>::type::value_type::first::value,
                         boost::lambda::_1) == value);
                /*for(size_t i = 0; i < container.size(); ++i)
                {
                    if(container.at(i).first.value == value)
                    {
                        return container.at(i);
                    }
                }*/
                return element<T>(); //whatever
            }
        protected:
            typename vector_containter<T>::type container;
        };

        int main()
        {
            some_container<int> s;
            s.get_element(5);
            return 0;
        }

    Read the article

  • Unable to retrieve information from HP-UX pst_status object

    - by bogertron
    I am attempting to get process information by using the HP-UX C/C++ library. After scouring the internet, I discovered that HP-UX has the pstat.h header file, which allows programmers to retrieve process information. After attempting to understand code from the HP website, I tried to create a little test sample to work out what the code does. I attempted to use example 3; however, I ran into several issues. The first issue came when I attempted to execute the following line of code:

        (void)printf("pid is %d, command is %s\n", pst[i].pst_pid, pst[i].pst_ucomm);

    When I attempted to print the string, I hit a memory fault. So I decided to try to see what the string is and came up with the following:

        #include <sys/param.h>
        #include <sys/pstat.h>
        #include <sys/unistd.h>
        #include <string.h>

        int main(int argc, char** argv)
        {
        #define BURST ((size_t)10)
            struct pst_status pst[BURST];
            int i, count;
            int idx = 0; /* index within the context */
            int index = 0;

            /* loop until count == 0, will occur all have been returned */
            while ((count = pstat_getproc(pst, sizeof(pst[0]), BURST, idx)) > 0)
            {
                index = 0;
                printf("index: %d", index);

                /* got count (max of BURST) this time. process them */
                while (pst[i].pst_ucomm[index] != '\0')
                {
                    printf("%c", pst[i].pst_ucomm[index]);
                    index++;
                }
                printf("\n");

                for (i = 0; i < count; i++)
                {
                    printf("pid is %d, command is \n", pst[i].pst_pid);
                }

                /*
                 * now go back and do it again, using the next index after
                 * the current 'burst'
                 */
                idx = pst[count-1].pst_idx + 1;
            }

            if (count == -1)
                perror("pstat_getproc()");
        #undef BURST
        }

    Unfortunately, what happens is that I get the first process printed, and then:

        pid is 2, command is
        pid is 2, command is
        pid is 2, command is ...

    I know that I must be doing something foolish since my C/C++ skills are not that great, but I cannot figure out what the issue is, since the code is largely copied from the HP website. So here are the questions, for clarity: 1. Why can't printf("%s", pst[i].pst_ucomm); handle the string? 2. Why can't I iterate over the processes in the system? Any help is greatly appreciated.

    Read the article

  • django custom command and cron

    - by Hellnar
    Hello, I want my custom Django management command to be executed every minute. However, python /path/to/project/myapp/manage.py mycommand doesn't seem to work, while running python manage.py mycommand from the project directory works perfectly. How can I achieve this? I use /etc/crontab with:

        * * * * * root python /path/to/project/myapp/manage.py mycommand

    Read the article

  • NSData's writeToFile method failing with server address

    - by Ricky
    Hi. I am trying to write an NSData object to a directory like so:

        [myData writeToFile:[NSString stringWithFormat:@"%@/%@.txt", path, filename] atomically:YES];

    I receive no errors or warnings, but I am assuming the write fails because the path variable has the format of afp://10.0.0.20/username/Desktop. I am connected to the networked share. Do I need to modify the string or take a different approach here? TIA, Ricky.

    Read the article

  • bitwise XOR a string in Bash

    - by ricky2002
    Hi. I am trying to accomplish a task in Bash scripting. I have a string which I want to XOR with my key.

        #!/bin/sh
        PATH=/bin:/usr/bin:/sbin:/usr/sbin
        export PATH

        teststring="abcdefghijklmnopqr"

    Now how do I XOR the value of teststring with the key and store it in a variable using Bash? Any help will be appreciated.

    Read the article

  • WPF binding only inside XAML (simple question)

    - by 0xDEAD BEEF
    Why does this work:

        <myToolTip:UserControl1>
            <TextBlock Text="{Binding Path=TestString, RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type myToolTip:UserControl1}}}"/>
        </myToolTip:UserControl1>

    but this does not:

        <myToolTip:UserControl1 x:Name="userControl">
            <TextBlock Text="{Binding Path=TestString, ElementName=userControl}"/>
        </myToolTip:UserControl1>

    And is there really no shorter (faster) way to access the UserControl's elements?! Sincerely, 0xDEAD BEEF

    Read the article

  • How to fix a Facebook circular redirect?

    - by user1057679
    I have a page that redirects to another page. When I try to test my URL at https://developers.facebook.com/tools/debug I get this error:

        Errors That Must Be Fixed:
        Circular Redirect: Circular redirect path detected (see Redirect Path section for details).

        Warnings That Should Be Fixed:
        The og:url property should be explicitly provided, even if a value can be inferred from other tags.

    How can I fix this problem? How can I detect Facebook, and if it is Facebook, not redirect?
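    A minimal sketch of the detection idea, assuming an ASP.NET code-behind page (the question doesn't say which stack is in use; the same User-Agent check applies on any platform). The Facebook scraper identifies itself with a User-Agent containing "facebookexternalhit", so skipping the redirect for that agent lets the debugger read the page's og: tags without following the loop. The redirect target below is a placeholder.

        // Hedged ASP.NET sketch: only redirect real visitors, not the Facebook scraper.
        using System;
        using System.Web.UI;

        public partial class LandingPage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                string userAgent = Request.UserAgent ?? string.Empty;
                bool isFacebookScraper =
                    userAgent.IndexOf("facebookexternalhit", StringComparison.OrdinalIgnoreCase) >= 0;

                if (!isFacebookScraper)
                {
                    Response.Redirect("/other-page", false); // placeholder target URL
                }
                // For the scraper, fall through and render this page with its og: tags.
            }
        }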

    Read the article

  • Possible reasons for tellg() failing?

    - by Andreas Bonini
    ifstream::tellg() is returning -13 for a certain file. Basically, I wrote a utility that analyzes some source code; I open all files alphabetically. I start with "Apple.cpp" and it works perfectly, but when it gets to "Conversion.cpp", always on the same file, after reading one line successfully tellg() returns -13. The code in question is:

        for (int i = 0; i < files.size(); ++i) { /* For each .cpp and .h file */
            TextIFile f(files[i]);

            while (!f.AtEof()) // When it gets to conversion.cpp (not on the others)
                               // first is always successful, second always fails
                lines.push_back(f.ReadLine());

    The code for AtEof is:

        bool AtEof()
        {
            if (mFile.tellg() < 0)
                FATAL(format("DEBUG - tellg(): %d") % mFile.tellg());

            if (mFile.tellg() >= GetSize())
                return true;
            return false;
        }

    After it reads the first line of Conversion.cpp successfully, it always crashes with DEBUG - tellg(): -13. This is the whole TextIFile class (written by me, so the error may be there):

        class TextIFile
        {
        public:
            TextIFile(const string& path) : mPath(path), mSize(0)
            {
                mFile.open(path.c_str(), std::ios::in);
                if (!mFile.is_open())
                    FATAL(format("Cannot open %s: %s") % path.c_str() % strerror(errno));
            }

            string GetPath() const { return mPath; }

            size_t GetSize()
            {
                if (mSize)
                    return mSize;
                const size_t current_position = mFile.tellg();
                mFile.seekg(0, std::ios::end);
                mSize = mFile.tellg();
                mFile.seekg(current_position);
                return mSize;
            }

            bool AtEof()
            {
                if (mFile.tellg() < 0)
                    FATAL(format("DEBUG - tellg(): %d") % mFile.tellg());

                if (mFile.tellg() >= GetSize())
                    return true;
                return false;
            }

            string ReadLine()
            {
                string ret;
                getline(mFile, ret);
                CheckErrors();
                return ret;
            }

            string ReadWhole()
            {
                string ret((std::istreambuf_iterator<char>(mFile)), std::istreambuf_iterator<char>());
                CheckErrors();
                return ret;
            }

        private:
            void CheckErrors()
            {
                if (!mFile.good())
                    FATAL(format("An error has occured while performing an I/O operation on %s") % mPath);
            }

            const string mPath;
            ifstream mFile;
            size_t mSize;
        };

    The platform is Visual Studio, 32-bit, Windows. Edit: it works on Linux. Edit: I found the cause: line endings. Conversion, Guid and some others had \n instead of \r\n. I saved them with \r\n instead and it worked. Still, this is not supposed to happen, is it?

    Read the article

  • Unable to rename file with C# FTP methods when current user directory is different from root

    - by Agata
    Hello everyone. Remark: due to a spam-prevention mechanism I was forced to replace the beginning of the URIs from ftp:// to ftp. I've got the following problem: I have to upload a file with the C# FTP methods and afterwards rename it. Easy, right? :) OK, let's say my FTP host is like this: ftp.contoso.com, and after logging in, the current directory is set to users/name. So, what I'm trying to achieve is to log in, upload the file to the current directory as file.ext.tmp and, after the upload is successful, rename the file to file.ext. The whole difficulty is, as I guess, to properly set the request URI for FtpWebRequest. MSDN states:

        The URI may be relative or absolute. If the URI is of the form "ftp://contoso.com/%2fpath" (%2f is an escaped '/'),
        then the URI is absolute, and the current directory is /path. If, however, the URI is of the form
        "ftp://contoso.com/path", first the .NET Framework logs into the FTP server (using the user name and password
        set by the Credentials property), then the current directory is set to UserLoginDirectory/path.

    OK, so I upload the file with the following URI: ftp.contoso.com/file.ext.tmp. Great, the file lands where I wanted it to be: in the directory users/name. Now I want to rename the file, so I create a web request with the following URI: ftp.contoso.com/file.ext.tmp and specify the RenameTo parameter as file.ext, and this gives me a 550 error: file not found, no permissions, etc. I traced this in Microsoft Network Monitor and it gave me:

        Command: RNFR, Rename from
        CommandParameter: /file.ext.tmp
        Ftp: Response to Port 53724, '550 File /file.ext.tmp not found'

    as if it was looking for the file in the root directory, not in the current directory. I renamed the file manually using Total Commander and the only difference was that CommandParameter was without the first slash:

        CommandParameter: file.ext.tmp

    I'm able to successfully rename the file by supplying the following absolute URI: ftp.contoso.com/%2fusers/%2fname/file.ext.tmp, but I don't like this approach, since I would have to know the name of the current user's directory. It can probably be done by using WebRequestMethods.Ftp.PrintWorkingDirectory, but it adds extra complexity (calling this method to retrieve the directory name, then combining the paths to form a proper URI). What I don't understand is why the URI ftp.contoso.com/file.ext.tmp is good for upload and not for rename? Am I missing something here? The project is set to .NET 4.0, coded in Visual Studio 2010.
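    A sketch of the PrintWorkingDirectory approach mentioned above, under stated assumptions: the 257 reply to PWD is parsed for the quoted directory (the exact reply text can vary by server), and the host name, credentials and file names are placeholders. The rename URI is then built in the escaped %2f form that already works, so the extra complexity stays inside one helper.

        // Hedged C# sketch: read the login directory once, then rename via an absolute,
        // escaped URI so the RNFR parameter no longer points at the root.
        using System;
        using System.Net;

        static class FtpRenameHelper
        {
            static string GetWorkingDirectory(string host, NetworkCredential credentials)
            {
                var request = (FtpWebRequest)WebRequest.Create("ftp://" + host + "/");
                request.Credentials = credentials;
                request.Method = WebRequestMethods.Ftp.PrintWorkingDirectory;
                using (var response = (FtpWebResponse)request.GetResponse())
                {
                    // Typical reply: 257 "/users/name" is the current directory (may vary by server).
                    string text = response.StatusDescription;
                    int start = text.IndexOf('"') + 1;
                    int end = text.IndexOf('"', start);
                    return text.Substring(start, end - start);           // e.g. /users/name
                }
            }

            public static void RenameInLoginDirectory(string host, NetworkCredential credentials,
                                                      string fromName, string toName)
            {
                string cwd = GetWorkingDirectory(host, credentials);      // /users/name
                string escaped = "/%2f" + cwd.TrimStart('/').Replace("/", "/%2f"); // /%2fusers/%2fname
                var request = (FtpWebRequest)WebRequest.Create("ftp://" + host + escaped + "/" + fromName);
                request.Credentials = credentials;
                request.Method = WebRequestMethods.Ftp.Rename;
                request.RenameTo = toName;                                // e.g. file.ext
                using (var response = (FtpWebResponse)request.GetResponse())
                {
                    Console.WriteLine(response.StatusDescription);
                }
            }
        }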

    Read the article

  • Traveling salesman problem, 2-opt algorithm C# implementation

    - by TAB
    Hello. Can someone give me a code sample of the 2-opt algorithm for the traveling salesman problem? For now I'm using nearest neighbour to find the path, but this method is far from perfect, and after some research I found the 2-opt algorithm, which would correct that path to an acceptable level. I found some sample apps but without source code.
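    A minimal 2-opt sketch in C#, under stated assumptions: cities are plain 2-D points with Euclidean distances, the tour is closed, and the City type and method names are placeholders rather than part of any existing library. A 2-opt move removes two edges and reconnects the tour by reversing the segment between them; the loop keeps applying improving moves until no swap shortens the tour. Recomputing the full tour length for every candidate keeps the sketch short but is the slow way to do it; a real implementation would compare only the lengths of the two removed and two added edges.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class City
        {
            public double X, Y;
            public City(double x, double y) { X = x; Y = y; }
        }

        public static class TwoOptSolver
        {
            static double Distance(City a, City b)
            {
                double dx = a.X - b.X, dy = a.Y - b.Y;
                return Math.Sqrt(dx * dx + dy * dy);
            }

            // Total length of a closed tour (the last city connects back to the first).
            static double TourLength(IList<City> tour)
            {
                double sum = 0;
                for (int i = 0; i < tour.Count; i++)
                    sum += Distance(tour[i], tour[(i + 1) % tour.Count]);
                return sum;
            }

            // 2-opt move: keep tour[0..i-1], reverse tour[i..k], keep tour[k+1..] unchanged.
            static List<City> TwoOptSwap(IList<City> tour, int i, int k)
            {
                var result = tour.Take(i).ToList();
                result.AddRange(tour.Skip(i).Take(k - i + 1).Reverse());
                result.AddRange(tour.Skip(k + 1));
                return result;
            }

            // Repeatedly apply improving 2-opt moves until no swap shortens the tour.
            public static List<City> Improve(List<City> tour)
            {
                bool improved = true;
                while (improved)
                {
                    improved = false;
                    for (int i = 1; i < tour.Count - 1; i++)
                    {
                        for (int k = i + 1; k < tour.Count; k++)
                        {
                            var candidate = TwoOptSwap(tour, i, k);
                            if (TourLength(candidate) < TourLength(tour))
                            {
                                tour = candidate;
                                improved = true;
                            }
                        }
                    }
                }
                return tour;
            }
        }

    Feeding the nearest-neighbour tour into TwoOptSolver.Improve and reading back the result would then be the only integration needed.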

    Read the article

  • How to deploy 2 SharePoint page layouts

    - by mickey
    I have a problem deploying two page layouts. I can deploy one page layout with no problem, but if I add another one, the first one gets the same layout as the newly added second page layout... Here is the XML I add to Elements.xml:

        <File Path="masterpage\CustomLayout.aspx" Url="CustomLayout.aspx" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE" >
          <Property Name="Title" Value="CustomLayout" />
          <Property Name="ContentType" Value="$Resources:cmscore,contenttype_pagelayout_name;" />
          <Property Name="PublishingPreviewImage" Value="~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;/Preview Images/CustomPageLayout.png, ~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;/Preview Images/CustomPageLayout.png" />
          <Property Name="PublishingAssociatedContentType" Value=";#$Resources:cmscore,contenttype_articlepage_name;;#0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF3900242457EFB8B24247815D688C526CD44D;#"/>
        </File>
        <File Path="masterpage\HomePageLayout.aspx" Url="HomePageLayout.aspx" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE" >
          <Property Name="Title" Value="HomePageLayout" />
          <Property Name="ContentType" Value="$Resources:cmscore,contenttype_pagelayout_name;" />
          <Property Name="PublishingPreviewImage" Value="~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;/Preview Images/CustomPageLayout.png, ~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;/Preview Images/CustomPageLayout.png" />
          <Property Name="PublishingAssociatedContentType" Value=";#$Resources:cmscore,contenttype_articlepage_name;;#0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF3900242457EFB8B24247815D688C526CD44D;#"/>
        </File>
        <File Path="masterpage\masterpage.master" Url="masterpage.master" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE" >
          <Property Name="Title" Value="My Custom masterpage" />
          <Property Name="ContentType" Value="$Resources:cmscore,contenttype_pagelayout_name;" />
          <Property Name="PublishingPreviewImage" Value="~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;/Preview Images/back.jpg, ~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;/Preview Images/back.jpg" />
          <Property Name="PublishingAssociatedContentType" Value=";#$Resources:cmscore,contenttype_articlepage_name;;#0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF3900242457EFB8B24247815D688C526CD44D;#"/>
        </File>
        <File Path="masterpage\masterpage2.master" Url="masterpage2.master" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE" >
          <Property Name="Title" Value="My Custom masterpage 2" />
          <Property Name="ContentType" Value="$Resources:cmscore,contenttype_pagelayout_name;" />
          <Property Name="PublishingPreviewImage" Value="~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;/Preview Images/back.jpg, ~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;/Preview Images/back.jpg" />
          <Property Name="PublishingAssociatedContentType" Value=";#$Resources:cmscore,contenttype_articlepage_name;;#0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF3900242457EFB8B24247815D688C526CD44D;#"/>
        </File>

    Read the article

  • Flash - Firefox issue

    - by Manish
    I realised that if I have a Flash object and do not include a 'wmode' attribute, I will not be able to overlay HTML, because the Flash will always play on top. But if I do include 'wmode:transparent', the Flash object completely disappears in Firefox, whereas if I use 'wmode:opaque' I get a white box in place of the Flash object. I've looked at many forums and many questions, but somehow everyone's problem gets resolved when they use one of the aforementioned attributes. So... HELP!!! Note: using either attribute works in IE.

    Read the article

  • SQL Server 2008: FileStream Insertion Failure w/ .NET 3.5SP1

    - by James Alexander
    I've configured a database with a FILESTREAM filegroup and have a table with a FILESTREAM column on it. When attempting to insert a streamed file, after I create the table row, my query to read out the file path and transaction context returns a null file path. I can't seem to figure out why. Here is the table creation script:

        /****** Object:  Table [dbo].[JobInstanceFile]    Script Date: 03/22/2010 18:05:36 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        SET ANSI_PADDING ON
        GO
        CREATE TABLE [dbo].[JobInstanceFile](
            [JobInstanceFileId] [int] IDENTITY(1,1) NOT NULL,
            [JobInstanceId] [int] NOT NULL,
            [File] [varbinary](max) FILESTREAM NULL,
            [FileId] [uniqueidentifier] ROWGUIDCOL NOT NULL,
            [Created] [datetime] NOT NULL,
            CONSTRAINT [PK_JobInstanceFile] PRIMARY KEY CLUSTERED
            (
                [JobInstanceFileId] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] FILESTREAM_ON [JobInstanceFilesGroup],
            UNIQUE NONCLUSTERED
            (
                [FileId] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY] FILESTREAM_ON [JobInstanceFilesGroup]
        GO
        SET ANSI_PADDING OFF
        GO
        ALTER TABLE [dbo].[JobInstanceFile] ADD DEFAULT (newid()) FOR [FileId]
        GO

    Here's the proc I call to create the row before streaming the file:

        /****** Object:  StoredProcedure [dbo].[JobInstanceFileCreate]    Script Date: 03/22/2010 18:06:23 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        create proc [dbo].[JobInstanceFileCreate]
            @JobInstanceId int,
            @Created datetime
        as
            insert into JobInstanceFile (JobInstanceId, FileId, Created)
            values (@JobInstanceId, newid(), @Created)

            select scope_identity()
        GO

    And lastly, here's the code I'm using:

        public int CreateJobInstanceFile(int jobInstanceId, string filePath)
        {
            using (var connection = new SqlConnection(ConfigurationManager.ConnectionStrings["ConsumerMarketingStoreFiles"].ConnectionString))
            using (var fileStream = new FileStream(filePath, FileMode.Open))
            {
                connection.Open();
                var tran = connection.BeginTransaction(IsolationLevel.ReadCommitted);
                try
                {
                    //create the JobInstanceFile instance
                    var command = new SqlCommand("JobInstanceFileCreate", connection) { Transaction = tran };
                    command.CommandType = CommandType.StoredProcedure;
                    command.Parameters.AddWithValue("@JobInstanceId", jobInstanceId);
                    command.Parameters.AddWithValue("@Created", DateTime.Now);
                    int jobInstanceFileId = Convert.ToInt32(command.ExecuteScalar());

                    //read out the filestream transaction context to stream the file for storage
                    command.CommandText = "select [File].PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() from JobInstanceFile where JobInstanceFileId = @JobInstanceFileId";
                    command.CommandType = CommandType.Text;
                    command.Parameters.AddWithValue("@JobInstanceFileId", jobInstanceFileId);

                    using (SqlDataReader dr = command.ExecuteReader())
                    {
                        dr.Read();

                        //get the file path we're writing out to
                        string writePath = dr.GetString(0);

                        using (var writeStream = new SqlFileStream(writePath, (byte[])dr.GetValue(1), FileAccess.ReadWrite))
                        {
                            //copy from one stream to another
                            byte[] bytes = new byte[65536];
                            int numBytes;
                            while ((numBytes = fileStream.Read(bytes, 0, 65536)) > 0)
                                writeStream.Write(bytes, 0, numBytes);
                        }
                    }

                    tran.Commit();
                    return jobInstanceFileId;
                }
                catch (Exception e)
                {
                    tran.Rollback();
                    throw e;
                }
            }
        }

    Can someone please let me know what I'm doing wrong? In the code, the following expression is returning null for the file path and shouldn't be:

        //get the file path we're writing out to
        string writePath = dr.GetString(0);

    The server is different from the computer the code is running on, but the necessary shares appear to be in order and I have also run the following:

        EXEC sp_configure filestream_access_level, 2

    Any help would be greatly appreciated. Thanks!
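    One possible cause, offered as a hedged guess rather than a confirmed diagnosis: PathName() returns NULL when the FILESTREAM column itself is NULL, and JobInstanceFileCreate inserts the row without ever touching [File]. Giving the new row a zero-length file value inside the same transaction, before the PathName()/GET_FILESTREAM_TRANSACTION_CONTEXT() select, gives SQL Server a file to hand back a path for. The helper below reuses the table and column names from the question; everything else is illustrative.

        // Hedged sketch: initialise the FILESTREAM column to an empty value so that the
        // subsequent PathName() query has a non-NULL path to return.
        using System.Data;
        using System.Data.SqlClient;

        static class JobInstanceFileHelper
        {
            public static void InitializeFileStreamValue(SqlConnection connection, SqlTransaction tran,
                                                         int jobInstanceFileId)
            {
                using (var command = new SqlCommand(
                    "update JobInstanceFile set [File] = CAST('' AS varbinary(max)) " +
                    "where JobInstanceFileId = @JobInstanceFileId", connection, tran))
                {
                    command.Parameters.AddWithValue("@JobInstanceFileId", jobInstanceFileId);
                    command.ExecuteNonQuery();
                }
            }
        }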

    Read the article

  • MongoMapper and migrations

    - by Clint Miller
    I'm building a Rails application using MongoDB as the back-end and MongoMapper as the ORM tool. Suppose in version 1 I define the following model:

        class SomeModel
          include MongoMapper::Document

          key :some_key, String
        end

    Later, in version 2, I realize that I need a new required key on the model. So, in version 2, SomeModel now looks like this:

        class SomeModel
          include MongoMapper::Document

          key :some_key, String
          key :some_new_key, String, :required => true
        end

    How do I migrate all my existing data to include some_new_key? Assume that I know how to set a reasonable default value for all the existing documents. Taking this a step further, suppose that in version 3 I realize that I really don't need some_key at all. So now the model looks like this:

        class SomeModel
          include MongoMapper::Document

          key :some_new_key, String, :required => true
        end

    But all the existing records in my database have values set for some_key, and it's just wasting space at this point. How do I reclaim that space? With ActiveRecord, I would have just created migrations to add the initial values of some_new_key (in the version 1 to version 2 migration) and to delete the values for some_key (in the version 2 to version 3 migration). What's the appropriate way to do this with MongoDB/MongoMapper? It seems to me that some method of tracking which migrations have been run is still necessary. Does such a thing exist? EDITED: I think people are missing the point of my question. There are times when you want to be able to run a script on a database to change or restructure the data in it. I gave two examples above: one where a new required key was added and one where a key can be removed and space can be reclaimed. How do you manage running these scripts? ActiveRecord migrations give you an easy way to run them and to determine which scripts have already been run and which have not. I can obviously write a Mongo script that does any update on the database, but what I'm looking for is a framework like migrations that lets me track which upgrade scripts have already been run.

    Read the article

  • No page number for divider pages in LaTeX

    - by joec
    I have a report in LaTeX, and I have used the following commands to create my appendix; however, my lecturer states that any divider pages should be unnumbered.

        \documentclass{report}
        \usepackage{appendix}

        \begin{document}
        \include{chap1}
        \include{appendix}
        \end{document}

    Then in appendix.tex:

        \appendix
        \pagestyle{empty}
        \appendixpage
        \noappendicestocpagenum
        \addappheadtotoc

    This creates the "Appendices" divider page, but still puts a page number on it in the footer. There is no page number in the TOC, as expected. How can I remove it from the footer? Thanks

    Read the article

  • How can I use django.core.files.File

    - by Jake
    The docs for django.core.files.File imply I can do this:

        print File(open(path)).url

    but the File object has no attribute 'url'. However, django.db.models.fields.files.FieldFile extends File and does have all the attributes described in the docs for File, but I can't create one without giving it a model field. All I want is something that does what the docs for django.core.files.File (link above) say it does: take a Python file object and give it attributes like 'url', 'path' and 'name'. Can anyone help? Cheers, Jake

    Read the article

  • My httpd.conf was accidentally deleted, can't start apache. How do I recreate httpd.conf in Godaddy'

    - by mdm414
    Hi, I don't know if it's appropriate to ask for this, but I really need to know the initial content of /var/www/vhosts/mydomain.com/conf/httpd.include on GoDaddy's VDS, because the conf directory was accidentally deleted by somebody and now I cannot start Apache. We didn't purchase an assisted service plan, so GoDaddy won't help me with this. So please, if you're using GoDaddy's VDS plan, please help me with this. ** I also need to include in httpd.conf the lines needed for Plesk to work. Thanks a lot

    Read the article

  • WCF streaming on asmx?

    - by phenevo
    Hi, I've got a WCF service set up for WCF streaming, and it works. But I must integrate it with our web service. Is there any way to have a web method like this:

        [WebMethod]
        public Stream GetStream(string path)
        {
            return Iservice.GetStream(path);
        }

    Iservice is a class which I copied from the WCF service to my asmx. And is there any way to integrate the App.config from WCF with web.config?
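    A hedged sketch of one way this is often handled, with placeholder names throughout (StreamServiceClient stands in for whatever proxy the WCF service generates): an asmx [WebMethod] return value goes through XmlSerializer, which has no mapping for System.IO.Stream, so the usual compromise is to buffer the WCF stream into a byte[] and return that, accepting that the streaming benefit is lost at the asmx hop. The WCF client reads its binding and endpoint from <system.serviceModel>, which would be copied from the service's App.config into the web application's web.config.

        // Hedged asmx sketch: buffer the streamed file and return it as a byte array.
        using System.IO;
        using System.Web.Services;

        public class FileService : WebService
        {
            [WebMethod]
            public byte[] GetFile(string path)
            {
                var client = new StreamServiceClient();   // placeholder generated WCF proxy
                using (Stream source = client.GetStream(path))
                using (var buffer = new MemoryStream())
                {
                    source.CopyTo(buffer);                 // buffers the whole file in memory
                    return buffer.ToArray();
                }
            }
        }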

    Read the article
