Search Results

Search found 49518 results on 1981 pages for 'configuration files'.

  • How to find all the file handles by a process programmatically?

    - by kumar
    I have a process "x" which uses the C "system" function to start the ntpd daemon. I observed that ntpd inherits the open file descriptors of "x", and it holds on to them even after the original files are deleted. For example, some log files used by "x" are rotated out after a while, but "ntpd" still has open handles for these deleted files. Will this cause any problem? Alternatively, I thought of setting the "FD_CLOEXEC" flag on all the file descriptors before calling "system". But since we run as an extension library loaded into process "x" ("x" loads our library based on some condition), there is no easy way to know all the file descriptors the process has opened. One way is to read /proc/<pid>/fd, set "FD_CLOEXEC" for each file handle, and reset it after "system" returns. I'm using Linux 2.16. Is there any other easy way to find all the file handles? Thanks,
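
    A rough sketch of the /proc-based idea described above (not from the original post; it assumes Linux, where /proc/self/fd lists the process's own descriptors, and it omits saving the old flags so they can be restored after "system" returns):

        #include <dirent.h>
        #include <fcntl.h>
        #include <stdlib.h>

        /* Set FD_CLOEXEC on every descriptor currently open in this process. */
        static void set_cloexec_on_all_fds(void)
        {
            DIR *dir = opendir("/proc/self/fd");
            if (dir == NULL)
                return;

            int dir_fd = dirfd(dir);
            struct dirent *entry;
            while ((entry = readdir(dir)) != NULL) {
                if (entry->d_name[0] == '.')
                    continue;                  /* skip "." and ".." */
                int fd = atoi(entry->d_name);
                if (fd == dir_fd)
                    continue;                  /* don't touch the directory stream itself */
                int flags = fcntl(fd, F_GETFD);
                if (flags != -1)
                    fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
            }
            closedir(dir);
        }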

  • Hide *.inc.php from website visitors

    - by Ghostrider
    I have a script, myscript.inc.php, which handles all URLs that look like /script-blah. I accomplish this with the following .htaccess: RewriteEngine On RewriteRule ^script-(.*)$ myscript.inc.php?s=$1 [QSA,L] However, users can also access it directly by typing /myscript.inc.php?s=blah, and I would like to prevent that. I tried <Files ~ "\.inc\.php$"> Order deny,allow Deny from all </Files> and RewriteCond %{REQUEST_URI} \.inc\.php RewriteRule .* - [F,L,NS] They both prevent users from viewing /myscript.inc.php?s=blah, but they also cause /script-blah to return 403... Is there a way to do this correctly?
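
    One pattern often suggested for this kind of setup (an untested sketch, not from the question) is to match on THE_REQUEST, which holds the client's original request line and is not changed by internal rewrites, so /script-blah keeps working while direct requests for the .inc.php file are refused:

        RewriteEngine On

        # Refuse any request whose original request line names an .inc.php file directly
        RewriteCond %{THE_REQUEST} \.inc\.php [NC]
        RewriteRule .* - [F,L]

        # Internal rewrite: THE_REQUEST still shows /script-blah, so the rule above stays quiet
        RewriteRule ^script-(.*)$ myscript.inc.php?s=$1 [QSA,L]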

  • how to organize javascripts using rails and jquery

    - by VP
    Hi, I'm working on a big, rich Rails web application that uses tons of JavaScript. I would like to know if anybody has a tip for organizing the JavaScript files. Today I generate a new file named controller.js and add it to my views using content_for. The problem is that some files are becoming big, and sometimes I need a function from one controller in another, so in the end I add products.js to a details controller just to keep things DRY. Is that a good solution? Any other tips? I think the same pattern could apply to CSS files as well?

  • How to convert raw_input() into a directory?

    - by Azeworai
    Hi everyone, I just picked up IronPython and I've been trying to get this script going, but I'm stuck at trying to turn the path from raw_input into a directory path. The first block of code is the broken one that I'm working on.

        import System
        from System import *
        from System.IO import *
        from System.Diagnostics import *

        inputDirectory = raw_input("Enter Input Directory's full path [eg. c:\\vid\\]: ")
        print ("In: "+inputDirectory)
        outputDirectory = inputDirectory +"ipod\\"
        print ("Out: "+outputDirectory)
        #create the default output directory
        for s in DirectoryInfo(inputDirectory).GetFiles("*.avi"):
            print s.FullName
            arg = String.Format('-i "{0}" -t 1 -c 1 -o "{1}" --preset="iPod"' , s.FullName, outputDirectory + s.Name.Replace(".avi", ".mp4"))
            print arg
            proc = Process.Start("C:\\Program Files\\Handbrake\\HandBrakeCLI.exe", arg) #path to handbrake goes here
            proc.WaitForExit()

    The following block is what I have working at the moment.

        import System
        from System import *
        from System.IO import *
        from System.Diagnostics import *

        for s in DirectoryInfo("F:\\Tomorrow\\").GetFiles("*.avi"):
            arg = String.Format('-i "{0}" -t 1 -c 1 -o "{1}" --preset="iPod"' , s.FullName, "F:\\Tomorrow\\ipod\\" + s.Name.Replace(".avi", ".mp4"))
            print arg
            proc = Process.Start("C:\\Program Files\\Handbrake\\HandBrakeCLI.exe", arg) #path to handbrake goes here
            proc.WaitForExit()

    PS: Credit for the above working code goes to Joseph at jcooney.net
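
    As an aside, one way to make the raw_input value safer to use as a directory before the loop (a sketch that assumes the plain os module is available to the script, as it normally is under IronPython; the variable names simply mirror the ones above):

        import os

        raw = raw_input("Enter Input Directory's full path [eg. c:\\vid\\]: ")

        # Trim whitespace and any quotes pasted in from Explorer, then normalize the slashes.
        inputDirectory = os.path.normpath(raw.strip().strip('"'))
        if not os.path.isdir(inputDirectory):
            raise SystemExit("Not a directory: " + inputDirectory)

        # Build the output path with os.path.join instead of string concatenation,
        # and create it if it does not exist yet.
        outputDirectory = os.path.join(inputDirectory, "ipod")
        if not os.path.isdir(outputDirectory):
            os.makedirs(outputDirectory)

        print ("In:  " + inputDirectory)
        print ("Out: " + outputDirectory)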

  • Recover from inadvertent skip during rebase

    - by Benjol
    I just tried to rebase a very old branch with a minor modification onto my master. There was a problem merging just one of the three files involved, so I did an unthinking --skip, thinking that it would just skip that file; but as it happened, it seems to have skipped all my changes and rolled forwards. So now the rebase is finished and my changes seem to have disappeared. I've seen the question about undoing a rebase, but it's all Greek to me: I see the reflog, but I don't know which commit the branch was attached to before the rebase. In any case, I don't really need to undo the rebase, I just want to be able to recover the changes in the two files. Is there any way to do this properly? (Failing that, I'll just have to restore yesterday's backup of my repository and pick the bits out by hand.)
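
    For reference, git usually keeps the pre-rebase tip reachable both through the branch's own reflog and through ORIG_HEAD, so the lost changes can often be pulled back file by file (a hedged sketch, not from the question; "mybranch" and the file paths are placeholders, and trying this on a copy of the repository first is cheap insurance):

        # Where did the branch point before the rebase moved it?
        git reflog show mybranch

        # ORIG_HEAD is normally set to the pre-rebase tip as well
        git show ORIG_HEAD

        # Recover just the two files from the branch's previous position
        git checkout mybranch@{1} -- path/to/first-file path/to/second-file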

  • Stack trace in website project, when debug = false

    - by chandmk
    We have a website project. We are logging unhandled exceptions via an AppDomain-level exception handler. When we set debug=true in web.config, the exception log shows the offending line numbers in the stack trace. But when we set debug=false in web.config, the log does not display the line numbers. We are not in a position to convert the website project into a web application project at this time. It's a legacy application and almost all the code is in aspx pages. We also need to leave the project in 'updatable' mode, i.e. we can't use the pre-compile option. We are generating pdb files. Is there any way to tell this kind of website project to generate the pdb files and show the line numbers in the stack trace?

  • Using rsync to take backup of folder

    - by Ali
    Hi, I have a Linux server with a NAS mounted as the folder "mount". My website is in the "public_html" folder. I want to back up the website into the mount folder automatically at certain intervals, e.g. every hour. I read that there is something called "rsync" which is used to keep two folders in sync, and that it doesn't copy all files every time; instead it checks whether a file has changed and only updates changed files. How do I use it to make automatic backups? I have root access to the server. Thanks
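
    A minimal sketch of that idea (the paths are placeholders; -a preserves permissions and timestamps and only transfers changed files, and the trailing slash on the source copies the folder's contents rather than the folder itself):

        # One-off: copy new/changed files from the site into the backup folder
        rsync -a /home/user/public_html/ /mount/backup/public_html/

        # Run it at the top of every hour via cron (crontab -e)
        0 * * * * rsync -a /home/user/public_html/ /mount/backup/public_html/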

  • jcuda library usage problem

    - by user513164
    Hi, I'm very new to Java and Linux. I have some code taken from the jcuda examples; the code is the following: import jcuda.CUDA; import jcuda.driver.CUdevprop; import jcuda.driver.types.CUdevice; public class EnumDevices { public static void main(String args[]) { //Init CUDA Driver CUDA cuda = new CUDA(true); int count = cuda.getDeviceCount(); System.out.println("Total number of devices: " + count); for (int i = 0; i < count; i++) { CUdevice dev = cuda.getDevice(i); String name = cuda.getDeviceName(dev); System.out.println("Name: " + name); int version[] = cuda.getDeviceComputeCapability(dev); System.out.println("Version: " + String.format("%d.%d", version[0], version[1])); CUdevprop prop = cuda.getDeviceProperties(dev); System.out.println("Clock rate: " + prop.clockRate + " MHz"); System.out.println("Threads per block: " + prop.maxThreadsPerBlock); } } } I'm using Ubuntu as my operating system. I compiled it with the following command: javac -cp /home/manish.yadav/Desktop/JCuda-All-0.3.2-bin-linux-x86_64 EnumDevices and got the following error: error: Class names, 'EnumDevices', are only accepted if annotation processing is explicitly requested 1 error I don't know what this error means or what I should do to compile the program. I then changed the compile command to javac -cp /home/manish.yadav/Desktop/JCuda-All-0.3.2-bin-linux-x86_64 EnumDevices.java and got the following errors: EnumDevices.java:36: clockRate is not public in jcuda.driver.CUdevprop; cannot be accessed from outside package System.out.println("Clock rate: " + prop.clockRate + " MHz"); ^ EnumDevices.java:37: maxThreadsPerBlock is not public in jcuda.driver.CUdevprop; cannot be accessed from outside package System.out.println("Threads per block: " + prop.maxThreadsPerBlock); ^ 2 errors Now I'm completely confused and don't know what to do. How do I compile this program? How do I install the jcuda package, or how do I use it? How do I use a package that has only jar files and .so files, where the jar files don't have a manifest file? Please help me.

  • FormsAuthentication redirecting to login page when visiting root of website

    - by Ryan Lattimer
    I wanted to use FormsAuthentication to secure my static files as well on my site, so I followed the instructions located here http://learn.iis.net/page.aspx/244/how-to-take-advantage-of-the-iis7-integrated-pipeline/ under the title "Enabling Forms Authentication for the Entire Application". Now, though, when I try to visit the site by going directly to http://www.mysite.com I get redirected to http://www.mysite.com/Login.aspx?ReturnUrl=%2f instead of it using the default document I have set. I can go to my default document by just visiting http://www.mysite.com/Home.aspx without any issues because it is set to allow anonymous access. Is there something I need to add to my web.config file to make IIS7 allow anonymous access to the root? I tried adding it with anonymous access, but no such luck. Any help would be much appreciated. Both Home and the Login form allow anonymous. <location path="Home.aspx"> <system.web> <authorization> <allow users="*" /> </authorization> </system.web> </location> <location path="Login.aspx"> <system.web> <authorization> <allow users="*" /> </authorization> </system.web> </location> The login form is set as the loginUrl: <authentication mode="Forms"> <forms protection="All" loginUrl="Login.aspx"> </forms> </authentication> The default document is set as Home.aspx: <defaultDocument> <files> <add value="Home.aspx" /> </files> </defaultDocument> I have not removed any of the IIS7 default documents; however, Home.aspx is first in the priority list.

  • How to configure database connection securely

    - by chiccodoro
    Similar but not the same: "How to securely store database connection details" and "Securely connecting to database within a application". Hi all, I have a C# WinForms application connecting to a database server. The database connection string, including a generic user/pass, is placed in an NHibernate configuration file, which lies in the same directory as the exe file. Now I have this issue: the user who runs the application should not get to know the username/password of the general database user, because I don't want him to rummage around in the database directly. Alternatively, I could hardcode the connection string, which is bad because the administrator must be able to change it if the database is moved or if he wants to switch between dev/test/prod environments. So far I've found three possibilities: The first referenced question was generally answered by making the file readable only by the user that runs the application. But that's not enough in my case (the user running the application is a person; the database user/pass are general and shouldn't even be accessible to that person). The first answer additionally proposed encrypting the connection data before writing it to the file. With this approach, the administrator is no longer able to configure the connection string because he cannot encrypt it by hand. The second referenced question provides an approach for this very scenario, but it seems very complicated. My questions to you: This is a very general issue, so isn't there any general "how-to-do-it" way, somehow a "design pattern"? Is there some support in .NET's config infrastructure? (Optional, maybe out of scope:) Can I combine that easily with the NHibernate configuration mechanism?
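
    On the "support in .NET's config infrastructure" question: if the connection string can live in the application's own App.config connectionStrings section (rather than only in the NHibernate file), protected configuration can encrypt that section with DPAPI, so the administrator edits it in clear text once and the application encrypts it on first run. A rough sketch, not a drop-in solution:

        using System.Configuration;

        static void ProtectConnectionStrings()
        {
            // Open this executable's configuration file (MyApp.exe.config)
            Configuration config =
                ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);

            ConfigurationSection section = config.GetSection("connectionStrings");
            if (section != null && !section.SectionInformation.IsProtected)
            {
                // Encrypt the section in place using the Windows DPAPI provider
                section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
                config.Save(ConfigurationSaveMode.Modified);
            }
        }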

  • Monolog conversations in SQL Service Broker 2008

    - by hemil
    Hi, I have a scenario in which I need to process (in SQL Server) messages being delivered as .xml files in a folder in real time. I started investigating SQL Service Broker for my queuing needs. Basically, I want the Service Broker to pick up my .xml files and place them in a queue as they arrive in the folder. But SQL Service Broker does not support "monolog" conversations, at least not in the current version; it supports only a dialog between an initiator and a target service. I can use MSMQ, but then I will have two things to maintain - the .NET code for file processing in MSMQ and the SQL Server T-SQL stored procs. What options do I have left? Thanks.

  • Moving .NET assemblies away from the application base directory?

    - by RasmusKL
    I have a WinForms application with a bunch of third-party references. This makes the output folder quite messy. I'd like to place the compiled / referenced DLLs into a common subdirectory of the output folder - bin / lib - whatever - and have just the executables (plus needed configs etc.) reside in the output folder. After some searching I ran into assembly probing (http://msdn.microsoft.com/en-us/library/4191fzwb.aspx) and verified that if I set this up and manually move the assemblies, my application will still work when they are stored in the designated subdirectory, like so: <configuration> <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <probing privatePath="bin" /> </assemblyBinding> </runtime> </configuration> However, this doesn't solve the build part - is there any way to specify where referenced assemblies and compiled library assemblies go? The only solutions I can think of off the top of my head are post-build actions or dropping the idea and using ILMerge or something. There has got to be a better way of defining the structure :-)

  • Showing renames in hg status?

    - by Ryan Thompson
    I know that Mercurial can track renames of files, but how do I get it to show me renames instead of adds/removes when I do hg status? For instance, instead of:

        A bin/extract-csv-column.pl
        A bin/find-mirna-binding.pl
        A bin/xls2csv-separate-sheets.pl
        A lib/Text/CSV/Euclid.pm
        R src/extract-csv-column.pl
        R src/find-mirna-binding.pl
        R src/modules/Text/CSV/Euclid.pm
        R src/xls2csv-separate-sheets.pl

    I want some indication that four files have been moved. I think I read somewhere that the output is like this to preserve backward compatibility with something-or-other, but I'm not worried about that.
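
    For what it's worth, a small sketch of the relevant flags (assuming the renames were recorded, or can be recorded after the fact):

        # -C / --copies prints the source file underneath each added file that is a copy or rename
        hg status -C

        # If the files were moved outside Mercurial, this records them as renames retroactively
        # (a similarity of 100 only pairs up files with identical contents)
        hg addremove --similarity 100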

  • Setting up Subsonic 3.0 on a ASP.NET web application on Visual Studio 2008

    - by Rob Paul
    I followed the steps in the tutorial video at the SubSonic website. Everything seems to be self-explanatory, but when I copy the .tt files into Visual Studio, nothing happens. I have read other questions relating to this problem on this website, but they don't seem to fix it. I also went into regedit to check the generator key for .tt files, and it is properly set. This is not Visual Studio Express or anything but the Professional edition; also, I am trying to build an ASP.NET web application, not an MVC application like the demo. Any help will be greatly appreciated. Thank you

  • Trouble with object injection in Spring.Net

    - by Abdel Olakara
    Hi all, I have an issue with my Spring.Net configuration where it's not injecting an object. I have a CommService into which an object named GeneralEmail is injected. Here is the configuration: <!-- GeneralMail Object --> <object id="GeneralMailObject" type="CommUtil.Email.GeneralEmail, CommUtil"> <constructor-arg name="host" value="xxxxx.com"/> <constructor-arg name="port" value="25"/> <constructor-arg name="user" value="[email protected]"/> <constructor-arg name="password" value="xxxxx"/> <constructor-arg name="template" value="xxxxx"/> </object> <!-- Communication Service --> <object id="CommServiceObject" type="TApp.Code.Services.CommService, TApp"> <property name="emailService" ref="GeneralMailObject" /> </object> The communication service object is in turn injected into many other aspx pages and services. In one scenario, I need to call the communication service from a static WebMethod. I tried doing: CommService cso = new CommService(); But when I try to get the emailService object, it's null! Why didn't Spring inject the GeneralMail object into my cso object? What am I doing wrong, and how do I access the object from the Spring container? Thanks in advance for the suggestions and solutions. Regards, Abdel Olakara
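
    A likely culprit is the "new CommService()" call itself: objects created with new are never seen by the container, so nothing gets injected into them. A sketch of asking the context for the configured object instead (the object id matches the configuration above; the namespaces are assumed to be the usual Spring.Context ones):

        using Spring.Context;
        using Spring.Context.Support;

        // Inside the static WebMethod: resolve the configured, fully injected object
        // from the container instead of constructing a bare instance with new.
        IApplicationContext ctx = ContextRegistry.GetContext();
        CommService cso = (CommService)ctx.GetObject("CommServiceObject");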

  • Recursive wildcards in GNU make?

    - by Roger Lipscombe
    It's been a while since I've used make, so bear with me... I've got a directory, flac, containing .FLAC files, and a corresponding directory, mp3, containing MP3 files. If a FLAC file is newer than the corresponding MP3 file (or the corresponding MP3 file doesn't exist), then I want to run a bunch of commands to convert the FLAC file to an MP3 file and copy the tags across. The kicker: I need to search the flac directory recursively and create corresponding subdirectories in the mp3 directory. And I want to use make to drive this.
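
    A sketch of one way to wire this up (the conversion commands are placeholders and the extension is assumed to be lowercase .flac; $(shell find ...) stands in for a recursive wildcard, since GNU make's $(wildcard) does not recurse on its own, and recipe lines must start with a real tab):

        FLAC_FILES := $(shell find flac -name '*.flac')
        MP3_FILES  := $(patsubst flac/%.flac,mp3/%.mp3,$(FLAC_FILES))

        all: $(MP3_FILES)

        # Rebuilt whenever the FLAC is newer or the MP3 is missing; the mirrored
        # subdirectory is created first. Tag copying would go in this recipe too.
        mp3/%.mp3: flac/%.flac
        	mkdir -p $(dir $@)
        	flac -dc '$<' | lame - '$@'   # placeholder: decode FLAC, encode MP3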

  • Exposing headers on iPhone static library

    - by leolobato
    Hello guys, I've followed this tutorial for setting up a static library with common classes from 3 projects we are working on. It's pretty simple: create a new static library project in Xcode, add the code there, and change some headers' role from project to public. The tutorial says I should add my library folder to the header search paths recursively. Is this the right way to go? I mean, in my library project I have files separated into folders like Global/, InfoScreen/, Additions/. I was trying to set up one LOKit.h file in the root folder and, inside that file, #import everything I need to expose, so that in my host project I don't need to add the folder recursively to the header search path and would just #import "LOKit.h". But I couldn't get this to work: the host project won't build, complaining about all the classes I didn't add to LOKit.h, even though the library project builds. So, my question is, what is the right way of exposing header files when I set up a Cocoa Touch Static Library project in Xcode?

  • Adding file of the same name into the same list and unable to update name of the SPFile

    - by BeraCim
    Hi all: I'm having difficulty adding a file with the same name to the same list, and subsequently updating/changing the name of an SPFile. Basically, this is what I'm trying to do: string fileName = "something"; // obtained from a loop -- loop omitted here. SPFile file = folder.Files.Add(fileName, otherFile.OpenBinary()); The Add method throws a runtime exception when it finds another file of the same name in the same list/folder. So I thought of changing the file name later in the process: string newGuid = Guid.NewGuid().ToString(); SPFile file = folder.Files.Add(newGuid, otherFile.OpenBinary()); // some other processing... afterwards, rename the file file.name = fileName; file.Item.Update(); A few minutes of googling indicated I need to either move the file around or update the name field using ["Name"] instead. I was wondering whether there are any other, better ways to get around this problem? Thanks.
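
    Two small variations on what's described above (sketches only; they assume the standard SPFileCollection/SPFile members and that overwriting an existing file is acceptable):

        // 1) Let the library overwrite an existing file of the same name instead of throwing
        SPFile file = folder.Files.Add(fileName, otherFile.OpenBinary(), true /* overwrite */);

        // 2) Rename after adding under a temporary name: SPFile.Name is read-only,
        //    but the list item's Name field can be written and then updated
        file.Item["Name"] = fileName;
        file.Item.Update();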

  • What to write in "Web location" field when starting an "New website" in Visual Studio 2010?

    - by Ivan
    I want a website to be deployed automatically to a local IIS (built into Windows XP Pro SP3), avoiding the Visual Studio built-in server if possible. I'd like the development source files to be stored in a project folder outside wwwroot (I wouldn't mind built files being copied to wwwroot each time I press F5). I don't store my projects inside the default directories under "My documents". What should I specify in the "Web location" field when starting a "New website" in Visual Studio 2010? A deployment path in wwwroot, a folder where I'd like to save my project, or something else? I want the website to be part of a complex solution in VS 2010, also including a class library project, a WinForms application project, a Windows Service project and a common Entity Framework data model.

  • Excluding standard directories from code coverage results with C++/CLI

    - by brickner
    I have a Visual Studio 2010 .NET 4 solution with C# projects and a C++/CLI project. I use Visual Studio's built-in unit tests and code coverage. Apart from the fact that the Visual Studio 2010 coverage tool for C++/CLI projects seems to be much weaker than the Visual Studio 2008 coverage tool, I get weird results. For example, I get uncovered code reported in this file: c:\program files (x86)\microsoft visual studio 10.0\vc\include\xstring and in some other files in that directory. I want to exclude this code from the coverage results. Is there a way to put some exclude attribute on that code? If not, is there a different automatic way to exclude that code from coverage? If not, is there a way to use the EXCLUDE option to exclude it? Can it be done automatically within Visual Studio, without running the coverage tool from the command prompt? Any other solutions?

  • Access MP3 audio data independently of ID3 tags?

    - by kyl191
    Hi, this is a two-part question. First off, is it possible to access the audio data in an MP3 independently of the ID3 tags, and secondly, is there any way to do so using available libraries? I recently consolidated my music collection from 3 computers and ended up with songs whose ID3 tags had changed but whose audio data was unmodified. Running a search for duplicate files failed because the file changed with the ID3 tag change, but I think it should be possible to identify duplicate files if I run a deduplication using the audio data for comparison. I know that it's possible to seek to a particular position past the ID3 header in the file and read the data directly, but I was wondering if there's a library that would expose the audio data so I could just extract it, run a checksum on it, store the computed result somewhere, and then look for identical checksums. (Also, I'd probably have to use some kind of library anyway once you take variable-length headers into account.)
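
    The seek-past-the-header idea can be sketched in a few lines (a rough sketch: it strips an ID3v2 block at the start and an ID3v1 block in the last 128 bytes, but ignores APE tags, ID3v2 footers and other oddities that a proper tagging library would handle):

        import hashlib

        def audio_fingerprint(path):
            with open(path, "rb") as f:
                data = f.read()

            start, end = 0, len(data)

            # ID3v2: "ID3" + version (2 bytes) + flags (1) + 4 "synchsafe" size bytes (7 bits each)
            if data[:3] == b"ID3" and len(data) >= 10:
                size = 0
                for b in data[6:10]:
                    size = (size << 7) | (b & 0x7F)
                start = 10 + size

            # ID3v1: fixed 128-byte block at the end of the file, starting with "TAG"
            if len(data) >= 128 and data[-128:-125] == b"TAG":
                end -= 128

            return hashlib.sha1(data[start:end]).hexdigest()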

  • Routinely sync a branch to master using git rebase

    - by m1755
    I have a Git repository with a branch that hardly ever changes (nobody else is contributing to it). It is basically the master branch with some code and files stripped out. Having this branch around makes it easy for me to package up a leaner version of my project without having to strip out the code and files manually every time. I have been using git rebase to keep this branch up to date with master, but I always get this warning when I try to push the branch after rebasing: "To prevent you from losing history, non-fast-forward updates were rejected. Merge the remote changes before pushing again. See the 'Note about fast-forwards' section of 'git push --help' for details." I then use git push --force and it works, but I feel like this is probably bad practice. I want to keep this branch "in sync" with master quickly and easily. Is there a better way of handling this task?
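
    For completeness, the usual way to avoid the forced push in a setup like this is to merge master into the stripped-down branch instead of rebasing it, so the branch only ever moves forward and the remote can fast-forward (a sketch; "lean" is a placeholder branch name):

        git checkout lean
        git merge master      # bring in the new work from master, resolving conflicts once
        git push origin lean  # fast-forward on the remote, no --force needed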

  • Problems with XAML WPF 4.0 Editor in VS2010

    - by RTPeat
    Wondering if anybody else has found some very odd behaviour with the XAML/WPF 4 editor in VS2010. This only occurs if the project is using .NET 4. Whenever I tried to open a XAML document for editing, the window would appear to open for a split second and then vanish, but VS2010 would still list the window as open. The fault was eventually traced to having the "Reuse current document window, if saved" option under "Documents" in the "Environment" options checked. Once this was unchecked, XAML 4 files opened as expected. As I said, this only appears to occur on projects targeting .NET Framework 4 - those targeting 3.5 worked without a problem, and the "Reuse current document window, if saved" option appears to work fine with other files.

  • Dependency Injection: How to maintain multiple configurations?

    - by Malax
    Hi StackOverflow, let's assume we've built a system with a DI framework and it is working quite well. This system currently uses JMS to "talk" to other systems not maintained by us. The majority of our customers like the JMS approach and use it according to our specification. The component that does all the messaging is injected with Spring into the rest of the application. Now we have the case that one customer cannot implement the JMS solution and wants to use another messaging technology. That's not a problem, because we can simply implement a messaging service using this technology and inject it into the rest of the application. But how are we supposed to handle the deployment and maintenance of the configuration? Since the application uses Spring, I could imagine checking in all the configurations I have for this application, and the system administrator could start the application passing the name of the DI XML file to specify which configuration should be loaded. But... it just doesn't feel right. Are there any solutions for such cases available? What are the best practices you use? I could even imagine more complex scenarios which do not contain only one service substitution... Thanks a lot!
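
    One low-tech variant of the "pass the name of the DI XML file" idea is to keep the shared wiring in one file and resolve only the messaging definition from a system property with a sensible default, so a customer-specific deployment differs in a single setting (a sketch; the property and file names are made up):

        import org.springframework.context.ApplicationContext;
        import org.springframework.context.support.ClassPathXmlApplicationContext;

        public final class AppBootstrap {
            public static ApplicationContext load() {
                // e.g. -Dmessaging.config=messaging-othermq.xml on the one customer's system;
                // everything else stays in the shared application-context.xml
                String messaging = System.getProperty("messaging.config", "messaging-jms.xml");
                return new ClassPathXmlApplicationContext("application-context.xml", messaging);
            }
        }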
