Search Results

Search found 16602 results on 665 pages for 'directory'.

Page 600/665 | < Previous Page | 596 597 598 599 600 601 602 603 604 605 606 607  | Next Page >

  • Git: Failed at pushing to remote server, 'REPOSITORY_PATH' is not a git command

    - by Judarkness
    I'm using Git with TortoiseGit on Windows XP, and I have a remote bare repository on a Windows Vista 64-bit machine. When I try to push my local files to the remote bare repository, I get the following error:

        git.exe push "origin" master:master
        git: 'C:/Git_Repository/.git' is not a git command. See 'git --help'.
        fatal: The remote end hung up unexpectedly

    The remote URL is:

        username@serverip:C:/Git_Repository/.git

    The same URL works just fine for clone/fetch/pull. Access to this bare repository from a local directory on the remote machine works without problems too, so I believe there is something wrong with my path. I can push/pull to GitHub correctly, but there I was using the URL provided by GitHub. Does anyone know what's wrong with my configuration?

    Here is my remote .git/config:

        [core]
            repositoryformatversion = 0
            filemode = false
            bare = true
            logallrefupdates = true
            ignorecase = true
            hideDotFiles = dotGitOnly

    Here is my local .git/config:

        [core]
            repositoryformatversion = 0
            filemode = false
            bare = false
            logallrefupdates = true
            symlinks = false
            ignorecase = true
            hideDotFiles = dotGitOnly
        [remote "origin"]
            fetch = +refs/heads
            url = username@serverip:C:/Git_Repository/.git
        [branch "master"]
            remote = origin
            merge = refs/heads/master

    Thanks!

    Read the article

  • How to organize integrity tests and code unit tests?

    - by karlthorwald
    I have several files with code testing code (using a "unittest" class). Later I found it would be nice to also test database integrity: things like keys having the correct format, parent and child nodes pointing to each other correctly, and so on. I put this into a separate directory tree and use the same unittest class for the integrity tests.

    Now I wonder whether it really makes sense to keep these separate. To test the integrity of data I often duplicate parts of the code that I use to test the code that handles the data. But it is not the same: the code tests use test databases (which get deleted after each test), while the integrity tests connect to the live data and analyze it. The integrity tests I want to call from cron, sending an alarm if something happens in the live database.

    How would you handle that? Are there standards for such a setup? What is your experience? My tendency is to put everything in the same file, which would result in the code tests also being executed by the cron job on the production environment.
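
    One way to keep the shared assertions in one place while still separating the suites (a sketch; every name in it is invented): both suites mix in the same checks, the code tests run against throwaway fixtures, and cron runs only the integrity test class against the live data.

        # checks.py - assertions shared by both suites, so nothing is duplicated
        import re
        import unittest

        KEY_RE = re.compile(r"^[a-z]+:\d+$")  # assumed key format, for illustration

        class KeyChecks(object):
            def check_key(self, key):
                self.assertTrue(KEY_RE.match(key), "malformed key: %r" % key)

        class CodeTests(KeyChecks, unittest.TestCase):
            """Runs in development against a throwaway test database."""
            def setUp(self):
                self.keys = ["user:1", "post:42"]   # stand-in for a test-DB fixture
            def test_key_format(self):
                for key in self.keys:
                    self.check_key(key)

        class IntegrityTests(KeyChecks, unittest.TestCase):
            """Run from cron against the live database; alarm on failure."""
            def setUp(self):
                self.keys = []                      # stand-in for a live-DB query
            def test_key_format(self):
                for key in self.keys:
                    self.check_key(key)

        if __name__ == "__main__":
            # cron: python checks.py IntegrityTests   (exit status drives the alarm)
            unittest.main()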

    Read the article

  • How to write a custom solution using a python package, modules etc

    - by morpheous
    I am writing a package foobar which consists of the modules alice, bob, charles and david. From my understanding of Python packages and modules, this means I will create a folder foobar with the following subdirectories and files (please correct me if I am wrong):

        foobar/
            __init__.py
            alice/alice.py
            bob/bob.py
            charles/charles.py
            david/david.py

    The package should be executable, so that in addition to making the modules alice, bob, etc. available as 'libraries', I should also be able to use foobar in a script like this:

        python foobar --args=someargs

    Question 1: Can a package be made executable and used in a script like I described above?

    Question 2: The various modules will use code that I want to refactor into a common library. Does that mean creating a new subdirectory 'foobar/common' and placing common.py in that folder?

    Question 3: How will the modules import the common module? Is it 'from foobar import common', or can I not use this since these modules are part of the package?

    Question 4: I want to add logic for when the foobar package is being used in a script (assuming this can be done - I have only seen it done for modules). The code used is something like:

        if __name__ == "__main__":
            dosomething()

    Where (in which file) would I put this logic?
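
    For what it's worth, a sketch of one layout that covers all four questions (greet() and the flag names are invented for illustration): a __main__.py makes the package runnable as `python -m foobar --args=someargs`, which is the reliable spelling since -m puts the package on the path; common.py as a plain module inside foobar is enough for the shared code, no 'common' subdirectory required; and sibling modules can reach it with the same absolute import (or `from . import common`).

        foobar/
            __init__.py
            __main__.py     # entry point for "python -m foobar ..."
            common.py       # shared helpers; a plain module is enough
            alice.py
            bob.py
            charles.py
            david.py

        # foobar/__main__.py - a sketch
        import argparse
        from foobar import common

        def main():
            parser = argparse.ArgumentParser(prog="foobar")
            parser.add_argument("--name", default="world")
            args = parser.parse_args()
            print(common.greet(args.name))

        if __name__ == "__main__":   # this is where the Question 4 logic lives
            main()

        # foobar/common.py
        def greet(name):
            return "hello, %s" % name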

    Read the article

  • Perl: remove relative path components?

    - by jnylen
    I need to get Perl to remove relative path components from a Linux path. I've found a couple of functions that almost do what I want, but:

    File::Spec->rel2abs does too little. It does not resolve ".." into a directory properly.

    Cwd::realpath does too much. It resolves all symbolic links in the path, which I do not want.

    Perhaps the best way to illustrate how I want this function to behave is to post a bash log, where FixPath is a hypothetical command that gives the desired output:

        '/tmp/test'$ mkdir -p a/b/c1 a/b/c2
        '/tmp/test'$ cd a
        '/tmp/test/a'$ ln -s b link
        '/tmp/test/a'$ ls
        b link
        '/tmp/test/a'$ cd b
        '/tmp/test/a/b'$ ls
        c1 c2
        '/tmp/test/a/b'$ FixPath .           # rel2abs works here
        ===> /tmp/test/a/b
        '/tmp/test/a/b'$ FixPath ..          # realpath works here
        ===> /tmp/test/a
        '/tmp/test/a/b'$ FixPath c1          # rel2abs works here
        ===> /tmp/test/a/b/c1
        '/tmp/test/a/b'$ FixPath ../b        # realpath works here
        ===> /tmp/test/a/b
        '/tmp/test/a/b'$ FixPath ../link/c1  # neither one works here
        ===> /tmp/test/a/link/c1
        '/tmp/test/a/b'$ FixPath missing     # should work for nonexistent files
        ===> /tmp/test/a/b/missing
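
    The behaviour FixPath wants is purely lexical: make the path absolute and collapse '.' and '..' without resolving symlinks and without requiring the file to exist. For comparison, Python's os.path.normpath has exactly those semantics; a sketch:

        # Lexical path cleanup: no symlink resolution, no filesystem access.
        import os

        def fix_path(path, base=None):
            base = base or os.getcwd()
            return os.path.normpath(os.path.join(base, path))

        # Mirroring the log above, with base "/tmp/test/a/b":
        #   fix_path("../link/c1", "/tmp/test/a/b")  ->  "/tmp/test/a/link/c1"
        #   fix_path("missing",    "/tmp/test/a/b")  ->  "/tmp/test/a/b/missing"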

    Read the article

  • MongoDB - proper use of collections?

    - by zmg
    In Mongo, my understanding is that you can have databases and collections. I'm working on a social-type app that will have blogs and comments (among other things) and had previously been using MySQL with pretty heavy partitioning in an attempt to limit possible concurrency issues. With MySQL I've stuffed all my user data into a _user database with several tables to further partition the data (blogs, pages, etc.).

    My immediate reaction with Mongo would be to create a 'users' database with one collection per user. In this way user 'zach's blog entries would go into the 'zach' collection, with associated comments and such becoming sub-objects in the same collection. Basically like dynamically creating one table per user in MySQL, but apparently without the complexity and limitations that might impose. Of course, since I haven't really used Mongo before, I'm having trouble gauging the (ahem...) quality of this idea and the potential problems it might cause down the road.

    I'd like user data to be treated a lot like a user's directory in a *nix environment, where user-created/non-shared (mostly) data gets put into one place (currently with MySQL that would be the appname_users database as mentioned above). Most of the user data will be specific to the user's page(s). Some of the user data which is queried across all site users (searchable user profiles) is currently kept in a separate database/table, and I expect things like this could be put into an appname_system database and be broken up into collections and/or application-specific databases (appname_profiles).

    Anyway, since the available documentation on this is currently a little thin and my experience is extremely limited, I thought I might find a little guidance from someone with a better working understanding of the system. On the plus side, I'd really already been attempting to treat MySQL as a schema-less document store, and doing this with Mongo seems much more intuitive/sane/rational, so I'm really looking forward to getting started.

    Thanks, Zach
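
    Purely to make the shape concrete, here is the collection-per-user idea sketched with pymongo (names invented; this is not an endorsement of the design). One caution worth verifying independently: older MongoDB releases capped the number of namespaces (collections plus indexes) per database via a fixed-size namespace file, which matters if every user gets a collection.

        # The layout described above, sketched with pymongo (pip install pymongo).
        # Collections are created lazily on first insert, so "one collection per
        # user" needs no setup step.
        from pymongo import MongoClient

        client = MongoClient("localhost", 27017)
        users_db = client["appname_users"]

        def add_blog_entry(username, title, body):
            users_db[username].insert_one({      # e.g. the "zach" collection
                "type": "blog",
                "title": title,
                "body": body,
                "comments": [],                  # comments ride along as sub-objects
            })

        add_blog_entry("zach", "first post", "hello world")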

    Read the article

  • In WMI, can I use a join (or something similar) to acquire the IisWebServer object for a site, given its physical path?

    - by Precipitous
    Given a server name and a physical path, I'd like to be able to hunt down the IISWebServer object and application pool. A website URL is also acceptable input. Our technologies are IIS 6, WMI, and access via C# or PowerShell 2. I'm certain this would be easier with IIS 7 and its managed API; we don't have that yet.

    Here's what I can do: get a list of IIS virtual directories from IISWebVirtualDirSetting and filter (offline) for the matching physical path:

        $theVirtualDir = gwmi -Namespace "root/MicrosoftIISv2" `
            -ComputerName $servername -authentication PacketPrivacy `
            -class "IISWebVirtualDirSetting" `
            | where-object {$_.Path -like $deployLocation}

    From the virtual directory object, I can get a name (like W3SVC/40565456/root). Given this name, I can get to other goodies, such as the IIS web server object:

        gwmi -Namespace "root/MicrosoftIISv2" `
            -ComputerName $servername `
            -authentication PacketPrivacy `
            -Query "SELECT * FROM IisWebServer WHERE Name='W3SVC/40589473'"

    The questions, restated:

    1) This is a query language. Can I join or subquery so that one WMI query statement gets web servers based on IISWebVirtualDir.Path? How?

    2) In solving 1, you'll have to explain how to query on the Path property. Why is this an invalid query?

        "SELECT * FROM IISWebVirtualDirSetting WHERE Path='D:\sites\globaldominator'"
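
    On question 2, one detail that can be stated with confidence: WQL treats the backslash in string literals as an escape character, so each literal backslash in the path must be doubled. A sketch of the same query from Python, using Tim Golden's third-party wmi package (assumed available; the server name is a placeholder):

        # WQL escapes backslashes, so D:\sites\... must be written D:\\sites\\...
        # Requires the third-party "wmi" package (pip install wmi); Windows only.
        import wmi

        conn = wmi.WMI(computer="servername", namespace="root/MicrosoftIISv2")

        path = r"D:\sites\globaldominator"
        wql = ("SELECT * FROM IISWebVirtualDirSetting WHERE Path='%s'"
               % path.replace("\\", "\\\\"))

        for vdir in conn.query(wql):
            print(vdir.Name)   # e.g. W3SVC/40589473/root, which names the web server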

    Read the article

  • HtmlAgilityPack - VS 2010 - C# ASP.NET - File not found

    - by Janosch Geigowskoskilu
    First, I've already searched the web and Stack Overflow for hours, and I did find a lot about troubleshooting HtmlAgilityPack and tried most of it, but nothing worked.

    The situation: I'm developing a C# ASP.NET WebPart in SharePoint Foundation. Everything works fine. Now I want to parse an HTML page to get all image paths and save the images to HD/Temp. To do that I downloaded HtmlAgilityPack (current version) and added a reference to the project. Everything looks OK, and IntelliSense works fine.

    The exception: when the section where HtmlAgilityPack is used should run, my browser shows me a FileNotFoundException; the file or assembly could not be found.

    What I tried: after the first searches I tried to include v1.4.0 of HtmlAgilityPack, because I read that the current version in some cases is not really stable. That works fine too, until the point where I want to use HtmlAgilityPack: the same exception. I also tried moving HtmlAgilityPack directly to the solution directory; nothing changed. I tried referencing HtmlAgilityPack via using, and I tried fully qualified calls, e.g. HtmlAgilityPack.HtmlDocument.

    Conclusion: when I compile, no error occurs, and the reference is set correctly. When I trace HtmlAgilityPack.dll with ProcMon, the path is shown correctly, but sometimes the result is 'File Locked with only Readers'; I don't know enough about ProcMon to know what this means or whether it is critical. It shouldn't be file permissions, because when I check the DLL, the permissions are all in order.

    Read the article

  • Cygwin Cruisecontrol cannot execute commands

    I'm having what I hope is a simple problem. However, it's had me stumped all day. I'm working with CruiseControl on Windows, set up through Cygwin. I have some CC experience on the Linux platform, and much of what I'm doing is very similar. However, almost any command I try to execute in the config.xml file's schedule section gives an error. Here's the exception:

        ExecBuilder - Could not execute command: /cygdrive/d/Program\ Files/Subversion/bin/svn
        net.sourceforge.cruisecontrol.CruiseControlException: Encountered an IO exception while attempting to execute 'net.sourceforge.cruisecontrol.builders.ExecScript@b80f1c'. CruiseControl cannot continue.
            at net.sourceforge.cruisecontrol.builders.ScriptRunner.runScript(ScriptRunner.java:133)

    Here are some examples of commands I've tried to run which give this type of error:

        <exec command="${CCLoc}/projects/${project.name}/IOSdllScript"/>

    This runs a script that I tested outside of cruisecontrol.bat, where it works; it includes #!/bin/sh as the first line.

        <exec command="${CCLoc}/projects/${project.name}/EmptyFile"/>

    This is essentially an empty text file, proving that the problem has nothing to do with my script.

        <exec command="/cygdrive/d/Program\ Files/Subversion/bin/svn" args="cleanup" workingdir="${svndir}"/>

    This tries svn cleanup on a directory. I double-checked the pathing and spelling. One command that I tested worked and didn't give this error. That command was touch:

        <exec command="touch" args="ABC.txt"/>

    I'm not sure why only touch seems to work and nothing else does. Thank you for your help.

    Read the article

  • Ruby FileUtils.cp_r Permission Denied when :preserve => true

    - by slawley
    Hello, I am trying to implement a poor man's backup/mirroring script and am having some trouble. I am on Windows XP, using Ruby's FileUtils module to recursively copy files. As long as I don't set the :preserve flag to true, everything works fine.

    Works:

        FileUtils.cp_r('Source_dir', 'Dest_dir', :verbose => true)

    Doesn't work:

        FileUtils.cp_r('Source_dir', 'Dest_dir', :verbose => true, :preserve => true)

    I have full permissions on Dest_dir, as it's on the desktop of my local machine and I just created it. I can copy and delete files and folders, but apparently changing or maintaining the file attributes with :preserve isn't working. I haven't had a chance to try this on a Mac or Linux box, but from reading around online the :preserve flag is a normal stumbling block to come up against in a Windows environment.

    In a similar line of questioning: what is the default behavior for FileUtils.cp_r when it encounters an existing file in the destination directory? Does it simply overwrite and replace everything in the destination with whatever is in the source, or can I skip a file with conflicts and just log it for resolution later? (If this should be a separate question, just let me know and I'll make it one.)

    Thanks, Spencer
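
    On the second question: as far as I can tell, FileUtils.cp_r simply overwrites existing destination files, so skip-and-log behaviour has to be hand-rolled. For comparison, a Python sketch of a recursive copy that preserves metadata, skips conflicts, and logs them (directory names taken from the question):

        # Recursive copy that preserves timestamps/permissions (shutil.copy2)
        # but skips files already present at the destination, logging each
        # conflict for later resolution. A sketch, not a library.
        import os
        import shutil

        def mirror(src, dst, conflicts):
            for root, dirs, files in os.walk(src):
                target_dir = os.path.join(dst, os.path.relpath(root, src))
                os.makedirs(target_dir, exist_ok=True)
                for name in files:
                    target = os.path.join(target_dir, name)
                    if os.path.exists(target):
                        conflicts.append(target)   # skip and log
                    else:
                        shutil.copy2(os.path.join(root, name), target)

        conflicts = []
        mirror("Source_dir", "Dest_dir", conflicts)
        print("skipped %d conflicting files" % len(conflicts))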

    Read the article

  • Rake task via cron: problem loading RubyGems

    - by Matenia Rossides
    I have managed to get a cron job to run a rake task by doing the following:

        cd /home/myusername/approotlocation/ && /usr/bin/rake sendnewsletter RAILS_ENV=development

    I have checked with which ruby and which rake to make sure the paths are correct (from bash). The job looks like it wants to run, as I get the following email from the cron daemon when it completes:

        Missing these required gems:
          chronic
          whenever
          searchlogic
          adzap-ar_mailer
          twitter
          gdata
          bitly
          ruby-recaptcha

        You're running:
          ruby 1.8.7.22 at /usr/bin/ruby
          rubygems 1.3.5 at /home/myusername/gems, /usr/lib/ruby/gems/1.8

        Run `rake gems:install` to install the missing gems.
        (in /home/myusername/approotlocation)

    My custom rake file within lib/tasks is as follows:

        task :sendnewsletter => :environment do
          require 'rubygems'
          require 'chronic'
          require 'whenever'
          require 'searchlogic'
          require 'adzap-ar_mailer'
          require 'twitter'
          require 'gdata'
          require 'bitly'
          require 'ruby-recaptcha'
          @recipients = Subscription.all(:conditions => {:active => true})
          for user in @recipients
            Email.send_later(:deliver_send_newsletter, user)
          end
        end

    With or without the require lines, it still gives me the same error. Can anyone shed some light on this? Or alternatively, advise me on how to make a custom file within the script directory that will run this function? (I already have a cron job working that runs and processes all my delayed_jobs.) Cheers!

    Read the article

  • use exec for dsadd

    - by Daryl Gill
    I'm programming on a Windows Server 2008 and I wish to have a web UI to interact with the domain's Active Directory. One of my main problems is that I'm calling dsadd from an HTML form, but it is not succeeding. I know my command is correct; I have tested it on the server's command line. My code is below:

        if (isset($_POST['Submit'])) {
            $DesiredUsername = $_POST['DesiredUsername'];
            $DesiredPassword = $_POST['DesiredPassword'];

            $DU = "{$DesiredUsername}";  // Desired username
            $OU = "PHPCreatedUsers";     // Domain OU
            $DC1 = "slayerserv";         // Domain part one
            $DC2 = "local";              // Domain part two
            $PWD = "{$DesiredPassword}"; // Password

            $ExecScript = 'dsadd user cn=$DesiredUsername,cn=PHPCreatedUsers,dc=slayerserv,dc=local -disabled no -pwd $DesiredPassword -mustchpwd yes';
            exec($ExecScript, $output);

            mysql_query("INSERT INTO addedusers (`ID`, `DU`, `OU`, `DC1`, `DC2`, `PWD`)
                VALUES ('', '$DU', '$OU', '$DC1', '$DC2', '$PWD')");

            echo "<br><br>";
            print_r($output);
            # echo "User: $DesiredUsername Has been Created";
        }

    When I print_r($output); it returns a blank array:

        Array ( )

    Could anyone provide me with a solution or point me in the right direction? Below is a working example of my usage of exec:

        $Script = 'ping 127.0.0.1 -n 1';
        exec($Script, $Output);
        print_r($Output);

    print_r($Output); gives:

        Array
        (
            [0] =>
            [1] => Pinging 127.0.0.1 with 32 bytes of data:
            [2] => Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
            [3] =>
            [4] => Ping statistics for 127.0.0.1:
            [5] =>     Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),
            [6] => Approximate round trip times in milli-seconds:
            [7] =>     Minimum = 0ms, Maximum = 0ms, Average = 0ms
        )
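
    One observation that may be relevant, offered without a test rig to confirm it as the full diagnosis: $ExecScript is built with single quotes, and single-quoted PHP strings do not interpolate variables, so dsadd literally receives the text '$DesiredUsername'. For comparison, building the same command (flags copied from the question) as an argument list, sketched with Python's subprocess, sidesteps interpolation and shell quoting entirely:

        # Passing dsadd its arguments as a list: no shell, no string interpolation
        # to get wrong. Windows only; dsadd must be on PATH. A sketch.
        import subprocess

        def add_user(username, password):
            dn = "cn=%s,cn=PHPCreatedUsers,dc=slayerserv,dc=local" % username
            result = subprocess.run(
                ["dsadd", "user", dn,
                 "-disabled", "no", "-pwd", password, "-mustchpwd", "yes"],
                capture_output=True, text=True)
            return result.returncode, result.stdout, result.stderr

        rc, out, err = add_user("jdoe", "S3cret!pass")
        print(rc, out, err)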

    Read the article

  • Monitoring all events in a class and sub-classes

    - by Basiclife
    Hi, I wonder if someone can help me. I've got a console app which I use to debug various components as I develop them. I'd like to be able to log to the console every time an event is fired, either in the object I've instantiated or in anything it instantiated (ad infinitum). I wouldn't see some of these events normally, due to them being consumed further down the chain. Ideally I would be able to log all public and private events, but if only public is possible, I can live with that.

    I've Googled, and all I can find is how to monitor a directory, so I'm not sure if this is not possible or simply has a name that I don't know. The sort of information I'm after is similar to what's found in an exception: target site, source, stack trace, etc. Could I perhaps do this through reflection somehow? If someone could tell me whether this is even possible and perhaps point me at some good resources, I'd be very grateful.

    Many thanks, Basic

    To give you an idea of the console app:

        Sub Main()
            Container = ContainerGenerate.GenerateContainer()
            Dim TemplateID As New Guid("5959b961-b347-46bc-b1b6-cba311304f43")
            Dim Templater = Container.Resolve(Of Interfaces.Mail.IMailGenerator)()
            Dim MyMessage = Templater.GenerateMail(TemplateID, Nothing, Nothing)
            Dim MySMTPClient = Container.Resolve(Of SmtpClient)()
            MySMTPClient.Send(MyMessage)
            Finish()
        End Sub

    Read the article

  • How do I stop the m2eclipse plugin interfering with command line mvn builds?

    - by locka
    I use the m2eclipse plugin in Eclipse so that I can import a Maven project. The plugin reads the pom.xml and sorts out the dependencies in the projects in an Eclipse-friendly way, so I'm not looking at a sea of broken references and errors.

    I use Eclipse for code development, but I usually build the projects from the command line, e.g. "mvn clean install". Unfortunately, when I do this, m2eclipse detects disk activity and attempts to rebuild the workspace. This interferes with the command line build and sometimes results in a race condition. For example, the command line build might be in its clean phase but fail because it tries to delete a file or directory which is locked during the workspace rebuild. Aside from that, workspace rebuilding is incredibly slow, and between failed builds and wasted CPU my build process is 2-3x longer than it should be.

    It isn't an option to not use Eclipse (e.g. to use NetBeans), or to disable m2eclipse; it is a useful plugin except for this behaviour. So my question is: how do I stop m2eclipse from rebuilding the workspace all the time? Can I invoke a manual refresh and otherwise disable this behaviour?

    Read the article

  • How can I setup Hudson to use the same repository for different projects and maintain separate changelogs?

    - by Allen
    I typically set up SVN to host one big project per repository, but a lot of our infrastructure has changed and we now have one main SVN server with a hierarchy like so:

        Branches
        Tags
        Trunk
            Project1 files & folders
            Project2 files & folders
            Project3 files & folders

    Projects 1, 2, and 3 do not share anything amongst themselves; they are independent projects, each with its own solution file to be built. I can set up projects in Hudson like so:

        Repository Url: http://server/svn/MainRepository
        Local module directory (optional): /Trunk/Project1

    That will maintain a separate workspace for each project, but every time you commit to Project 2 or Project 3, a build gets kicked off in Hudson for every project based in that repository. Also, any commit made anywhere in the repository is pulled down and inserted into the Hudson changelog for all of them.

    I know the easiest solution would be to simply separate every project into its own repository. However, if I couldn't do that for various reasons, is there a feasible way to achieve the functionality that having separate repositories gets me? I want commits to the subfolder of Project 1 to only affect Project 1. No other project's commits should cause Project 1 to build, and Project 1's changelog in Hudson should only have commit notes from Project 1.

    Read the article

  • C#+BDE+DBF problem

    - by Drabuna
    I have a huge problem: I have lots of .dbf files (~50,000) and I need to import them into an Oracle database. I open the connection like this:

        OleDbConnection oConn = new OleDbConnection();
        OleDbCommand oCmd = new OleDbCommand();
        oConn.ConnectionString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + directory +
            ";Extended Properties=dBASE IV;User ID=Admin;Password=";
        oCmd.Connection = oConn;
        oCmd.CommandText = @"SELECT * FROM " + tablename;
        try
        {
            oConn.Open();
            resultTable.Load(oCmd.ExecuteReader());
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.Message);
        }
        oConn.Close();
        oCmd.Dispose();
        oConn.Dispose();

    I read them in a loop and then insert into Oracle. Everything's fine. BUT: there are about 1000 files that I can't open; they raise the exception "not a table". So I Googled around and installed the Borland Database Engine. Now everything works fine... but no: now, when I'm reading files, on the 1024th file an exception is raised: "System resource exceeded". But I have lots of free resources. When I remove BDE, everything's fine again; no "System resource exceeded" error, but I can't read all the files. Help please.

    PS: I tried using ODBC, but nothing changes.

    Read the article

  • The following code to check if a file exists on a server does not work

    - by xplorer2k
    Hi everyone, I found the following code to check if a file exists on a server, but it is not working for me. It tells me that "test1.txt" does not exist even though the file exists and its size is 498 bytes. If I try with Ftp.ListDirectory, it tells me that the file does not exist. If I try with Ftp.GetFileSize, it does not provide any results, and the debugger's immediate window gives me the following message: "A first chance exception of type 'System.Net.WebException' occurred in System.dll". Using "request.UseBinary = true" does not make any difference. I have posted this same question at this link: http://social.msdn.microsoft.com/Forums/en-US/ncl/thread/89e05cf3-189f-48b7-ba28-f93b1a9d44ae

    Could someone help me fix it?

        private void button1_Click(object sender, EventArgs e)
        {
            string ftpServerIP = txtIPaddress.Text.Trim();
            string ftpUserID = txtUsername.Text.Trim();
            string ftpPassword = txtPassword.Text.Trim();
            try
            {
                FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://" + ftpServerIP + "//tmp/test1.txt");
                request.Method = WebRequestMethods.Ftp.ListDirectory;
                //request.Method = WebRequestMethods.Ftp.GetFileSize;
                request.Credentials = new NetworkCredential(ftpUserID, ftpPassword);
                //request.UseBinary = true;
                using (FtpWebResponse response = (FtpWebResponse)request.GetResponse())
                {
                    // Okay.
                    textBox1.AppendText(Environment.NewLine);
                    textBox1.AppendText("File exists");
                }
            }
            catch (WebException ex)
            {
                if (ex.Response != null)
                {
                    FtpWebResponse response = (FtpWebResponse)ex.Response;
                    if (response.StatusCode == FtpStatusCode.ActionNotTakenFileUnavailable)
                    {
                        // File not found.
                        textBox1.AppendText(Environment.NewLine);
                        textBox1.AppendText("File does not exist");
                    }
                }
            }
        }

    Thanks very much, xplorer2k
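
    For comparison, the same check in Python's standard-library ftplib: a SIZE request on a missing file raises error_perm, which corresponds to the 550-class reply the C# code is trying to detect. The host and credentials below are placeholders:

        # File-existence check over FTP with ftplib (stdlib). Some servers want
        # binary type set before SIZE, hence the voidcmd. A sketch.
        import ftplib

        def ftp_file_exists(host, user, password, path):
            with ftplib.FTP(host, user, password) as ftp:
                ftp.voidcmd("TYPE I")
                try:
                    ftp.size(path)
                    return True
                except ftplib.error_perm:
                    return False

        print(ftp_file_exists("ftp.example.com", "user", "password", "/tmp/test1.txt"))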

    Read the article

  • Minimum privileges to read SQL Jobs using SQL SMO

    - by Gustavo Cavalcanti
    I wrote an application that uses SQL SMO to find all SQL Servers, databases, jobs, and job outcomes. This application is executed through a scheduled task using a local service account. This service account is local to the application server only and is not present on any SQL Server to be inspected. I am having problems getting information on jobs and job outcomes when connecting to the servers using a user with dbReader rights on the system tables. If we make the user a sysadmin on the server, it all works fine.

    My question is: what are the minimum privileges a local SQL Server user needs in order to connect to the server and inspect jobs/job outcomes using the SQL SMO API?

    I connect to each SQL Server by doing the following:

        var conn = new ServerConnection
        {
            LoginSecure = false,
            ApplicationName = "SQL Inspector",
            ServerInstance = serverInstanceName,
            ConnectAsUser = false,
            Login = user,
            Password = password
        };
        var smoServer = new Server(conn);

    I read the jobs by reading smoServer.JobServer.Jobs and read the JobSteps property on each of these jobs. The variable smoServer is of type Microsoft.SqlServer.Management.Smo.Server. user/password are those of a user found on each SQL Server to be inspected. If "user" is a sysadmin on the SQL Server being inspected, all works OK; it also works if we set ConnectAsUser to true and execute the scheduled task using my own credentials, which grant me sysadmin privileges on SQL Server per my Active Directory membership. Thanks!

    Read the article

  • When is onBind or onCreate called in an android service browser plugin?

    - by anselm
    I have adapted the example plugin from the Android source, and the browser recognises the plugin without any problem. Here is an extract of AndroidManifest.xml:

        <application android:icon="@drawable/icon" android:label="@string/app_name"
                     android:debuggable="true">
            <service android:name="com.domain.plugin.PluginService">
                <intent-filter>
                    <action android:name="android.webkit.PLUGIN" />
                </intent-filter>
            </service>
        </application>
        <uses-sdk android:minSdkVersion="7" />
        <uses-permission android:name="android.webkit.permission.PLUGIN"></uses-permission>

    The actual Service class looks like so:

        public class PluginService extends Service {
            @Override
            public IBinder onBind(Intent arg0) {
                Log.d("PluginService", "onBind");
                return null;
            }

            @Override
            public void onCreate() {
                Log.d("PluginService", "onCreate");
                super.onCreate();
                AssetInstaller.getInstance(this).installAssets("/data/data/com.domain.plugin");
            }
        }

    The AssetInstaller code is supposed to extract some files required by the actual plugin into the /data/data/com.domain.plugin directory; however, neither onBind nor onCreate is ever called. I do get lots of debug trace from the actual libnpplugin.so file I'm using. So the puzzle is: when, and under what circumstances, is the Service bound or created in the case of a browser plugin? As things look, the service seems to be a dummy service. Having said that, is there another intent that can be executed at installation time, perhaps? The only solution I see right now is installing the needed files from the native plugin code instead. Any ideas? I know this is quite a tricky question ;)

    Read the article

  • Alternatives to using web.config to store settings (for complex solutions)

    - by Brian MacKay
    In our web applications, we separate our data access layers out into their own projects. This creates some problems related to settings. Because the DAL will eventually need to be consumed from perhaps more than one application, web.config does not seem like a good place to keep the connection strings and some of the other DAL-related settings.

    To solve this, on some of our recent projects we introduced a third project just for settings. We put the settings in a system of .settings files. With a simple wrapper, the ability to have different settings for various environments (dev, QA, staging, production, etc.) was easy to achieve. The only problem there is that the settings project (including the .Settings class) compiles into an assembly, so you can't change it without doing a build/deployment, and some of our customers want to be able to configure their projects without Visual Studio.

    So, is there a best practice for this? I have the sense that I'm reinventing the wheel. Some solutions, such as storing settings in a fixed directory on the server in, say, our own XML format, occurred to us. But again, I would rather avoid having to re-create encryption for sensitive values and so on. And I would rather keep the solution self-contained if possible.

    EDIT: The original question did not contain the really penetrating reason that we can't (I think) use web.config. That puts a few (very good) answers out of context; my bad.

    Read the article

  • Using the Microsoft Ajax Minifier with Web Setup project & Source Control

    - by Rob
    I've just started investigating the Microsoft Ajax Minifier 4.0 for use with a Visual Studio 2008 web application I work on. It's proven easy enough to hook it into the .csproj file so that it produces .min.js files for all scripts; however, I'm stumped as to how to integrate this with the Web Setup project and source control.

    Essentially what I want to do is have the resultant .min.js files included in the Web Setup project without having them included in source control, because:

    - Having to check them out prior to the build executing is a pain (the minifier cannot modify them if they're not checked out).
    - As they're created as a "build artifact", it just seems wrong to have them stored under source control.

    The only option I've managed to come across so far is to explicitly include the .min.js files in the Setup project by right-clicking the Web Setup project and choosing "Add File", and then having the relevant folder hierarchy duplicated in "File System on Target Machine" so that I can force each file to the correct location. This is neither elegant nor simple/robust, as:

    - It requires me to manually add every minified js file to the Web Setup project by hand.
    - It means maintaining a copy of the relevant directory structure in both the Web Application project and the Web Setup project.
    - I have to remember to add any new js file's minified version to the Web Setup project.

    Is there a better way of doing this?

    Read the article

  • How to organize live data integrity tests and code unit tests?

    - by karlthorwald
    I have several files with code testing code (using a "unittest" class). Later I found it would be nice to also test database integrity: things like keys having the correct format, parent and child nodes pointing to each other correctly, and so on. I put this into a separate directory tree and use the same unittest class for the integrity tests.

    Now I wonder whether it really makes sense to keep these separate. To test the integrity of data I often duplicate parts of code that I use to test the code that handles the data. But it is not the same: the code tests use test databases (which get deleted after each test), while the integrity tests connect to the live data and analyze it. The integrity tests I want to call from cron, sending an alarm if something happens in the live database.

    How would you handle that? Are there standards for such a setup? What is your experience? My tendency is to put everything in the same file, which would result in the code tests also being executed by the cron job on the production environment.

    Read the article

  • Semi-dynamic CDN

    - by dwi kristianto
    I'm developing a couple of websites using PHP (a directory script, etc.) and WordPress as a CMS. I need to improve their performance by using a CDN for static files (CSS, JS, images). The problem is that the CSS and JavaScript files are generated on the fly: I did that on the advice of Yahoo and some experts to combine the files into one file, and also to change the basic color of the CSS files. For the time being I use a couple of small VPSes, but it's still not fast enough. I already contacted MaxCDN, and the support guy said that they don't have such a service.

    What I need is a CDN that will serve the request from the user/visitor, and when there's no file on local disk, the CDN will redirect/fetch it from another domain/server. On a VPS this can be done easily using a combination of .htaccess and PHP, but NOT on a CDN; most CDNs only support purely static files. Is there any such CDN that will serve semi-dynamic files?

    Read the article

  • Making RDoc Ruby Gem Default on Mac OS X

    - by jkale
    Hey all, I've recently installed RDoc (version 2.4.3) through RubyGems to replace the one shipped with Mac OS X (version 1.0.1). Unfortunately, I still only get RDoc 1.0.1 when I run rdoc at the command line. rdoc -v returns:

        RDoc V1.0.1 - 20041108

    I tried amending the $PATH variable to point the first entry to the RDoc 2.4.3 folder, but no luck. I couldn't find anything about this online either, so I thought I'd ask here. Cheers!

    Update: Running "gem list -d --version 1.0.1 rdoc" returns:

        *** LOCAL GEMS ***

        rdoc (2.4.3)
            Authors: Eric Hodel, Dave Thomas, Phil Hagelberg, Tony Strauss
            Rubyforge: http://rubyforge.org/projects/rdoc
            Homepage: http://rdoc.rubyforge.org
            Installed at: /usr/local/lib/ruby/gems/1.8

            RDoc is an application that produces documentation for one or more Ruby source files

    Therefore, it's definitely the Mac OS X version of RDoc that's interfering with the Gems version.

    Update 2: I found out, using `bash --debugger rdoc`, that the old version of RDoc was in /opt/local/bin. I deleted it and added my gems directory to my $PATH:

        export PATH=/usr/local/lib/ruby/gems/1.8/gems/

    I now have a fresh working copy of the latest RDoc!

    Read the article

  • Using a local file path in a StreamWriter object in ASP.NET

    - by Nick LaMarca
    I am trying to create a CSV file of some data. I have written a function that successfully does this:

        Private Sub CreateCSVFile(ByVal dt As DataTable, ByVal strFilePath As String)
            Dim sw As New StreamWriter(strFilePath, False)

            ' First we will write the headers.
            Dim iColCount As Integer = dt.Columns.Count
            For i As Integer = 0 To iColCount - 1
                sw.Write(dt.Columns(i))
                If i < iColCount - 1 Then
                    sw.Write(",")
                End If
            Next
            sw.Write(sw.NewLine)

            ' Now write all the rows.
            For Each dr As DataRow In dt.Rows
                For i As Integer = 0 To iColCount - 1
                    If Not Convert.IsDBNull(dr(i)) Then
                        sw.Write(dr(i).ToString())
                    End If
                    If i < iColCount - 1 Then
                        sw.Write(",")
                    End If
                Next
                sw.Write(sw.NewLine)
            Next
            sw.Close()
        End Sub

    The problem is that I am not using the StreamWriter object correctly for what I am trying to accomplish. Since this is ASP.NET, I need the user to pick a local file path to put the file in. If I pass any path to this function, it's going to try to write the file to the directory specified on the server where the code runs. I would like this to pop up and let the user select a place on their local machine to put the file:

        Dim exData As Byte() = File.ReadAllBytes(Server.MapPath(eio))
        File.Delete(Server.MapPath(eio))
        Response.AddHeader("content-disposition", String.Format("attachment; filename={0}", fn))
        Response.ContentType = "application/x-msexcel"
        Response.BinaryWrite(exData)
        Response.Flush()
        Response.End()

    I am calling the first function in code like this:

        Dim emplTable As DataTable = SiteAccess.DownloadEmployee_H()
        CreateCSVFile(emplTable, "C:\EmplTable.csv")

    I don't want to specify the file location (because that would put the file on the server and not on a client machine) but rather let the user select the location on their client machine. Can someone help me put this together? Thanks in advance.
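
    The server cannot open a save dialog on the client; the usual pattern is the one in the second snippet above: build the CSV and stream it back with a content-disposition header, so the browser asks the user where to save it. Sketching just the in-memory CSV construction with Python's stdlib csv module (the columns and rows are invented stand-ins for the DataTable):

        # Build the CSV in memory rather than on the server's disk; the resulting
        # bytes can be streamed in an HTTP response with a content-disposition
        # header so the browser prompts the user for a save location.
        import csv
        import io

        def csv_bytes(columns, rows):
            buf = io.StringIO()
            writer = csv.writer(buf)
            writer.writerow(columns)
            writer.writerows(rows)
            return buf.getvalue().encode("utf-8")

        payload = csv_bytes(["id", "name"], [[1, "Ann"], [2, "Bob"]])
        # Response headers to pair with the payload (framework-specific):
        #   Content-Disposition: attachment; filename=EmplTable.csv
        #   Content-Type: text/csv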

    Read the article

  • Issue in creating Zip file using glob.glob

    - by infosyssec
    Hi, I am creating a zip file from a folder (and its subfolders). It works fine and creates a new .zip file, but I am having an issue with glob.glob: it reads all files from the desired (source) folder and writes them to the new zip file, but while it adds the subdirectories themselves, it does not add the files inside the subdirectories. I am giving the user an option to select the file name and path, as well as the file type (zip or tar). I don't get any problem while creating a .tar.gz file; the problem only appears when the user creates a .zip file. Here is my code:

        for name in (Source_Dir):
            for name in glob.glob("/path/to/source/dir/*"):
                myZip.write(name, os.path.basename(name), zipfile.ZIP_DEFLATED)
        myZip.close()

    Also, if I use the code below:

        for dirpath, dirnames, filenames in os.walk(Source_Dir):
            for filename in filenames:
                myZip.write(os.path.join(dirpath, filename), os.path.basename(filename))
        myZip.close()

    then this second version takes all the files, even those inside folders and subfolders, creates a new .zip file, and writes everything to it without any directory structure. It doesn't even keep the directory structure of the main folder; it simply writes all files from the main dir and subdirs flat into the .zip file. Can anyone please help me or make a suggestion? I would prefer glob.glob rather than the second option.

    Thanks in advance. Regards, Akash
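
    For what it's worth, a sketch that keeps the internal directory structure: glob.glob does not recurse into subdirectories, so walk the tree instead and give each member an archive name relative to the source root.

        # Zip a folder recursively, preserving its internal structure: the
        # arcname is each file's path relative to the folder being archived.
        import os
        import zipfile

        def zip_tree(source_dir, zip_path):
            with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
                for dirpath, dirnames, filenames in os.walk(source_dir):
                    for filename in filenames:
                        full = os.path.join(dirpath, filename)
                        zf.write(full, os.path.relpath(full, source_dir))

        zip_tree("Source_Dir", "backup.zip")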

    Read the article
