Search Results

Search found 39577 results on 1584 pages for 'temp files'.


  • how to merge stripped symbols with binary?

    - by Bernd Schmickler
    I compiled the Qt Framework with debugging enabled, but the build script stripped the debugging symbols from the libraries and saved them as *.debug files -- just like here. Sadly, I need these symbols inside the .so files so I can continue working with them, and there seems to be no way to teach my debugger to load external (non-PDB) symbols. So another way of solving my problem might be converting the .debug files to PDB format, which might also be a problem. Thank you very much!


  • Error reached after generating entity framework classes with the edmgen tool

    - by loviji
    Hello, First I read this question, but that knowledge did not help solve my problem. Initially I created an edmx file in Visual Studio, which generated files named:

      uqsModel.Designer.cs
      uqsModel.edmx

    These files are located in the App_Code folder, and my web app works normally. The connection string was generated automatically in Web.config:

      <add name="uqsEntities"
           connectionString="metadata=res://*/App_Code.uqsModel.csdl|res://*/App_Code.uqsModel.ssdl|res://*/App_Code.uqsModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=aemloviji\sqlexpress;Initial Catalog=uqs;Integrated Security=True;MultipleActiveResultSets=True&quot;"
           providerName="System.Data.EntityClient" />
      </connectionStrings>

    Then I generated classes with the edmgen tool (full generation mode). It generated new files named:

      uqsModel.cs
      uqsModel.csdl
      uqsModel.msl
      uqsModel.ssdl
      uqsViews.cs

    It saved the new classes to the folder where the edmx files were located before, and removed the existing edmx files. Now, when any web page is requested, the server-side code fails with: Unable to load the specified metadata resource. Any ideas, please?
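
    A likely culprit (my assumption, not confirmed by the poster) is that the metadata keyword in the connection string still names res://*/App_Code.uqsModel.* embedded resources, while edmgen writes loose .csdl/.ssdl/.msl files to disk, so those resources no longer exist. One way to see the mismatch is to build the string explicitly with EntityConnectionStringBuilder and point Metadata at the files edmgen produced; a minimal C# sketch, where the on-disk paths are placeholders:

      using System;
      using System.Data.EntityClient; // System.Data.Entity assembly

      class BuildConnectionString
      {
          static void Main()
          {
              var builder = new EntityConnectionStringBuilder
              {
                  Provider = "System.Data.SqlClient",
                  ProviderConnectionString =
                      @"Data Source=aemloviji\sqlexpress;Initial Catalog=uqs;" +
                      "Integrated Security=True;MultipleActiveResultSets=True",
                  // Assumed on-disk location of the edmgen output; adjust as needed.
                  Metadata = @"C:\inetpub\uqs\App_Code\uqsModel.csdl|" +
                             @"C:\inetpub\uqs\App_Code\uqsModel.ssdl|" +
                             @"C:\inetpub\uqs\App_Code\uqsModel.msl"
              };
              Console.WriteLine(builder.ToString()); // paste the result into Web.config
          }
      }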


  • vsftpd not allowing uploads. 550 response

    - by Josh
    I've set vsftpd up on a CentOS box. I keep trying to upload files, but I keep getting "550 Failed to change directory" and "550 Could not get file size." Here's my vsftpd.conf:

      # The default compiled in settings are fairly paranoid. This sample file
      # loosens things up a bit, to make the ftp daemon more usable.
      # Please see vsftpd.conf.5 for all compiled in defaults.
      #
      # READ THIS: This example file is NOT an exhaustive list of vsftpd options.
      # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's
      # capabilities.
      #
      # Allow anonymous FTP? (Beware - allowed by default if you comment this out).
      anonymous_enable=YES
      #
      # Uncomment this to allow local users to log in.
      local_enable=YES
      #
      # Uncomment this to enable any form of FTP write command.
      write_enable=YES
      #
      # Default umask for local users is 077. You may wish to change this to 022,
      # if your users expect that (022 is used by most other ftpd's)
      local_umask=022
      #
      # Uncomment this to allow the anonymous FTP user to upload files. This only
      # has an effect if the above global write enable is activated. Also, you will
      # obviously need to create a directory writable by the FTP user.
      anon_upload_enable=YES
      #
      # Uncomment this if you want the anonymous FTP user to be able to create
      # new directories.
      anon_mkdir_write_enable=YES
      anon_other_write_enable=YES
      #
      # Activate directory messages - messages given to remote users when they
      # go into a certain directory.
      dirmessage_enable=YES
      #
      # The target log file can be vsftpd_log_file or xferlog_file.
      # This depends on setting xferlog_std_format parameter
      xferlog_enable=YES
      #
      # Make sure PORT transfer connections originate from port 20 (ftp-data).
      connect_from_port_20=YES
      #
      # If you want, you can arrange for uploaded anonymous files to be owned by
      # a different user. Note! Using "root" for uploaded files is not
      # recommended!
      #chown_uploads=YES
      #chown_username=whoever
      #
      # The name of log file when xferlog_enable=YES and xferlog_std_format=YES
      # WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log
      #xferlog_file=/var/log/xferlog
      #
      # Switches between logging into vsftpd_log_file and xferlog_file files.
      # NO writes to vsftpd_log_file, YES to xferlog_file
      xferlog_std_format=NO
      #
      # You may change the default value for timing out an idle session.
      #idle_session_timeout=600
      #
      # You may change the default value for timing out a data connection.
      #data_connection_timeout=120
      #
      # It is recommended that you define on your system a unique user which the
      # ftp server can use as a totally isolated and unprivileged user.
      #nopriv_user=ftpsecure
      #
      # Enable this and the server will recognise asynchronous ABOR requests. Not
      # recommended for security (the code is non-trivial). Not enabling it,
      # however, may confuse older FTP clients.
      #async_abor_enable=YES
      #
      # By default the server will pretend to allow ASCII mode but in fact ignore
      # the request. Turn on the below options to have the server actually do ASCII
      # mangling on files when in ASCII mode.
      # Beware that on some FTP servers, ASCII support allows a denial of service
      # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd
      # predicted this attack and has always been safe, reporting the size of the
      # raw file.
      # ASCII mangling is a horrible feature of the protocol.
      #ascii_upload_enable=YES
      #ascii_download_enable=YES
      #
      # You may fully customise the login banner string:
      #ftpd_banner=Welcome to blah FTP service.
      #
      # You may specify a file of disallowed anonymous e-mail addresses. Apparently
      # useful for combatting certain DoS attacks.
      #deny_email_enable=YES
      # (default follows)
      #banned_email_file=/etc/vsftpd/banned_emails
      #
      # You may specify an explicit list of local users to chroot() to their home
      # directory. If chroot_local_user is YES, then this list becomes a list of
      # users to NOT chroot().
      #chroot_list_enable=YES
      # (default follows)
      #chroot_list_file=/etc/vsftpd/chroot_list
      #
      # You may activate the "-R" option to the builtin ls. This is disabled by
      # default to avoid remote users being able to cause excessive I/O on large
      # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume
      # the presence of the "-R" option, so there is a strong case for enabling it.
      #ls_recurse_enable=YES
      #
      # When "listen" directive is enabled, vsftpd runs in standalone mode and
      # listens on IPv4 sockets. This directive cannot be used in conjunction
      # with the listen_ipv6 directive.
      listen=YES
      #
      # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6
      # sockets, you must run two copies of vsftpd with two configuration files.
      # Make sure, that one of the listen options is commented !!
      #listen_ipv6=YES
      pam_service_name=vsftpd
      userlist_enable=YES
      tcp_wrappers=YES
      log_ftp_protocol=YES
      banner_file=/etc/vsftpd/issue
      local_root=/var/www
      guest_enable=YES
      guest_username=ftpusr
      ftp_username=nobody


  • Thread and two-dimensional array in Objective-C?

    - by mactonny
    Hey guys, I am just starting to wrap my head around Objective-C and I am doing a little project on iPhone. I just encountered a weird problem. I have to deal with images in my program, so I have a lot of local variables declared like temp[width][height]. If I am not using NSThread to perform the image processing, it all works fine. However, if I use NSThread, it keeps giving me EXC_BAD_ACCESS whenever I try to access a 2-D array declared like temp[width][height]. So I have to allocate memory from the heap in order to have a 2-D array. That solves the problem, but I still don't get it. My first thought was stack overflow, but it worked fine with one thread. I just don't get it.
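
    The usual explanation (my reading, not confirmed in the post) is exactly the stack-overflow suspicion: secondary threads get a much smaller default stack than the main thread, so a temp[width][height] local that fits on the main thread blows the worker's stack, and heap allocation side-steps it. The mechanics are not specific to Objective-C; purely as a cross-language illustration, here is a minimal C# sketch where an explicit worker stack size stands in for NSThread's setStackSize: (sizes are arbitrary; compile with /unsafe):

      using System;
      using System.Threading;

      class StackDemo
      {
          static unsafe void Work()
          {
              // stackalloc places the buffer on this thread's stack, like a C
              // array local; ~1 MB here, enough to overflow a small stack.
              int* temp = stackalloc int[512 * 512];
              temp[0] = 42;
              Console.WriteLine(temp[0]);
          }

          static void Main()
          {
              // Give the worker an explicit 16 MB stack (second argument),
              // so the large stack buffer above fits.
              var worker = new Thread(() => Work(), 16 * 1024 * 1024);
              worker.Start();
              worker.Join();
          }
      }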


  • Multiple Git repositories in one directory

    - by Jakob
    Hello, I would like to deploy a directory to multiple developers who have different permissions, which is one thing Git cannot do by itself. What about creating two repositories in one directory and assigning them different file lists, by excluding the files managed by the other repository via the .gitignore file? Example:

      /www/project/.git - for all files except those in /www/project/css
      /www/project/css/.git - only files in this directory

    Has anyone tried this solution? Or are there any better ways to handle this issue? Regards, Jakob


  • Run a shell command with arguments from a PowerShell script

    - by Mike Weerasinghe
    Hello, I need to extract and save some tables from a remote SQL database using bcp. I would like to write a PowerShell script to invoke bcp for each table and save the data. So far I have this script, which creates the necessary args for bcp. However, I cannot figure out how to pass the args to bcp; every time I run the script it just shows the bcp help instead. This must be something really easy that I am not getting.

      #commands
      bcp database.dbo.tablename out c:\temp\users.txt -N -t, -U uname -P pwd -S <servername>

      $bcp_path = "C:\Program Files\Microsoft SQL Server\90\Tools\Binn\bcp.exe"
      $serverinfo = @{}
      $serverinfo.add('table','database.dbo.tablename')
      $serverinfo.add('uid','uname')
      $serverinfo.add('pwd','pwd')
      $serverinfo.add('server','servername')
      $out_path = "c:\Temp\db\"
      $args = "$($serverinfo['table']) out $($out_path)test.dat -N -t, -U $($serverinfo['uid']) -P $($serverinfo['pwd']) -S $($serverinfo['server'])"
      #this is the part I can't figure out
      & $bcp_path $args
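
    What usually bites here (my diagnosis, not confirmed in the post) is that $args holds a single string, so the call operator hands bcp one giant quoted argument and bcp falls back to its usage screen; in PowerShell the fix is to build an array of separate arguments (and to avoid $args, which is an automatic variable). Purely as a cross-language illustration of separated arguments, a C# sketch using ProcessStartInfo.ArgumentList (available since .NET Core 2.1; values are placeholders):

      using System.Diagnostics;

      class RunBcp
      {
          static void Main()
          {
              var psi = new ProcessStartInfo
              {
                  FileName = @"C:\Program Files\Microsoft SQL Server\90\Tools\Binn\bcp.exe"
              };
              // Each element becomes its own token on bcp's command line,
              // instead of one long quoted string (the usual cause of the
              // help screen).
              foreach (var arg in new[] { "database.dbo.tablename", "out",
                                          @"c:\Temp\db\test.dat", "-N", "-t,",
                                          "-U", "uname", "-P", "pwd",
                                          "-S", "servername" })
              {
                  psi.ArgumentList.Add(arg);
              }

              using (var bcp = Process.Start(psi))
              {
                  bcp.WaitForExit();
              }
          }
      }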


  • REST on *just* IIS7 (without a web framework)

    - by noblethrasher
    Hi, I want to upload files directly to IIS7 (in this case I am using the WebRequest object in .NET). Thus I need IIS7 to accept the POST, PUT, and DELETE verbs, so that I can upload and delete files on the server directly. Is it possible to have IIS accept files without a web framework like ASP.NET? Essentially I want to be able to use IIS (HTTP) as an FTP server.
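
    For what it's worth, the client side the poster describes would look roughly like the C# sketch below; the open question remains the server side, where something (for example IIS7's WebDAV module, one way to get PUT/DELETE without ASP.NET) must be configured to accept the verbs. The URL and path are placeholders:

      using System;
      using System.IO;
      using System.Net;

      class Uploader
      {
          static void Main()
          {
              // PUT the file; DELETE works the same way with Method = "DELETE".
              var request = (HttpWebRequest)WebRequest.Create("http://server/uploads/file.bin");
              request.Method = "PUT";

              using (var body = request.GetRequestStream())
              using (var file = File.OpenRead(@"c:\temp\file.bin"))
              {
                  file.CopyTo(body); // stream straight from disk
              }

              using (var response = (HttpWebResponse)request.GetResponse())
              {
                  Console.WriteLine(response.StatusCode); // e.g. 201 Created from WebDAV
              }
          }
      }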


  • Linq query challenge

    - by vdh_ant
    My table structure is as follows:

      Person 1-M PersonAddress
      Person 1-M PersonPhone
      Person 1-M PersonEmail
      Person 1-M Contract
      Contract M-M Program
      Contract M-1 Organization

    At the end of this query I need a populated object graph where each person has their:

      PersonAddress's
      PersonPhone's
      PersonEmail's
      Contract's - and each of these has its respective Program's

    Now I had the following query and I thought that it was working great, but it has a couple of problems:

      from people in ctx.People.Include("PersonAddress")
                               .Include("PersonLandline")
                               .Include("PersonMobile")
                               .Include("PersonEmail")
                               .Include("Contract")
                               .Include("Contract.Program")
      where people.Contract.Any(
          contract => (param.OrganizationId == contract.OrganizationId)
              && contract.Program.Any(
                  contractProgram => (param.ProgramId == contractProgram.ProgramId)))
      select people;

    The problem is that it filters the people to the criteria, but not the Contracts or the Contracts' Programs. It brings back all the Contracts each person has, not just the ones with an OrganizationId of x, and the same goes for each of those Contracts' Programs respectively. What I want is only the people that have at least one contract with an OrgId of x, where that contract has a Program with the Id of y... and for the returned object graph to contain only the contracts that match, and only the matching programs within those contracts. I kinda understand why it's not working, but I don't know how to change it so it does... This is my attempt thus far:

      from people in ctx.People.Include("PersonAddress")
                               .Include("PersonLandline")
                               .Include("PersonMobile")
                               .Include("PersonEmail")
                               .Include("Contract")
                               .Include("Contract.Program")
      let currentContracts =
          from contract in people.Contract
          where (param.OrganizationId == contract.OrganizationId)
          select contract
      let currentContractPrograms =
          from contractProgram in currentContracts
          let temp =
              from x in contractProgram.Program
              where (param.ProgramId == contractProgram.ProgramId)
              select x
          where temp.Any()
          select temp
      where currentContracts.Any() && currentContractPrograms.Any()
      select new Person
      {
          PersonId = people.PersonId,
          FirstName = people.FirstName,
          ...,
          ....,
          MiddleName = people.MiddleName,
          Surname = people.Surname,
          ...,
          ....,
          Gender = people.Gender,
          DateOfBirth = people.DateOfBirth,
          ...,
          ....,
          Contract = currentContracts,
          ...
      }; //This doesn't work

    But this has several problems (where the Person type is an EF object):

      - I am left to do the mapping by myself, and in this case there is quite a lot to map
      - Whenever I try to map a list to a property (i.e. Scholarship = currentScholarships) it says I can't, because IEnumerable is trying to be cast to EntityCollection
      - Include doesn't work

    Hence, how do I get this to work? Keep in mind that I am trying to do this as a compiled query, so I think that means anonymous types are out.
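
    A pattern that often works around Include ignoring filters (offered as a sketch, not as the poster's confirmed solution) is to project the parent and the filtered children side by side into a named result type; a named type, unlike an anonymous one, stays usable with CompiledQuery. Person, Contract, and the context/holder names below come from the question or are mine:

      using System;
      using System.Collections.Generic;
      using System.Data.Objects; // CompiledQuery
      using System.Linq;

      // Named holder type: CompiledQuery needs a nameable result type,
      // which anonymous types can't provide.
      public class PersonWithContracts
      {
          public Person Person { get; set; }
          public IEnumerable<Contract> Contracts { get; set; }
      }

      public static class PeopleQueries
      {
          // MyEntities stands in for the poster's ObjectContext type.
          public static readonly Func<MyEntities, int, int, IQueryable<PersonWithContracts>>
              ByOrgAndProgram = CompiledQuery.Compile(
                  (MyEntities ctx, int orgId, int programId) =>
                      from person in ctx.People
                      let matching = person.Contract.Where(c =>
                          c.OrganizationId == orgId &&
                          c.Program.Any(p => p.ProgramId == programId))
                      where matching.Any()
                      select new PersonWithContracts
                      {
                          Person = person,
                          Contracts = matching // only the filtered contracts
                      });
      }

    Materializing a person and its filtered contracts in the same context also tends to populate person.Contract with just the loaded rows via EF's relationship fix-up, which is the other commonly cited route.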


  • Accessing SQL Server table is slow - very little data inside

    - by Joseph
    Dear all, I have a temp table; data keeps coming in and going out. Nowadays, even if there are very few records, a select takes very long. I can't put an index on this table because it's a temp table. The only fix I have found is to drop the table and recreate it, after which it works fine again. Any idea why this is happening? Is it some kind of fragmentation? If there were an index we could check its fragmentation, but with no index, what can we do? We are using SQL Server 2008 64-bit. Thanks, Joseph


  • Unable to cast lists with Reflection in C#

    - by DrLazer
    I am having a lot of trouble with Reflection in C# at the moment. The app I am writing allows the user to modify the attributes of certain objects using a config file. I want to be able to save the object model (the user's project) to XML. The function below is called in the middle of a foreach loop, looping through a list of objects that contain all the other objects in the project within them. The idea is that it works recursively to translate the object model into XML. Don't worry about the call to "Unreal"; that just modifies the names of the objects slightly if they contain certain words.

      private void ReflectToXML(object anObject, XmlElement parentElement)
      {
          Type aType = anObject.GetType();
          XmlElement anXmlElement = m_xml.CreateElement(Unreal(aType.Name));
          parentElement.AppendChild(anXmlElement);
          PropertyInfo[] pinfos = aType.GetProperties();
          //loop through this object's public attributes
          foreach (PropertyInfo aInfo in pinfos)
          {
              //if the attribute is a list
              Type propertyType = aInfo.PropertyType;
              if ((propertyType.IsGenericType) && (propertyType.GetGenericTypeDefinition() == typeof(List<>)))
              {
                  List<object> listObjects = (aInfo.GetValue(anObject, null) as List<object>);
                  foreach (object aListObject in listObjects)
                  {
                      ReflectToXML(aListObject, anXmlElement);
                  }
              }
              //attribute is not a list
              else
                  anXmlElement.SetAttribute(aInfo.Name, "");
          }
      }

    If an object's attributes are just strings, it should write them out as string attributes in the XML. If an object's attributes are lists, it should recursively call "ReflectToXML", passing itself in as a parameter, thereby creating the nested structure I require that nicely reflects the object model in memory. The problem I have is with the line

      List<object> listObjects = (aInfo.GetValue(anObject, null) as List<object>);

    The cast doesn't work and it just returns null. While debugging I changed the line to

      object temp = aInfo.GetValue(anObject, null);

    slapped a breakpoint on it, and looked at what "GetValue" was returning. It returns a generic list of objects. Surely I should be able to cast that? The annoying thing is that temp becomes a generic list of objects, but because I declared temp as a singular object, I can't loop through it because it has no enumerator. How can I loop through a list of objects when I only have it as a PropertyInfo of a class? I know at this point I will only be saving a list of empty strings out anyway, but that's fine. I would be happy to see the structure save out for now. Thanks in advance.
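
    The root cause, with reasonable confidence since this is standard C# behaviour, is that a List<Foo> is not a List<object>: generic classes are not covariant, so the as cast yields null. Every List<T> does, however, implement the non-generic System.Collections.IEnumerable, which is all the loop needs. A sketch of the changed branch:

      using System.Collections; // the non-generic IEnumerable

      // Inside ReflectToXML, replace the List<object> cast with:
      if (propertyType.IsGenericType &&
          propertyType.GetGenericTypeDefinition() == typeof(List<>))
      {
          // Any List<T>, whatever T is, can be walked through the
          // non-generic IEnumerable interface.
          var listObjects = aInfo.GetValue(anObject, null) as IEnumerable;
          if (listObjects != null)
          {
              foreach (object aListObject in listObjects)
              {
                  ReflectToXML(aListObject, anXmlElement);
              }
          }
      }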


  • Deprecated errors when installing pear

    - by JW
    I'm trying to install PEAR on OS X. I did this:

      curl http://pear.php.net/go-pear > go-pear.php
      php go-pear.php

    After answering some prompts, I start getting tons of errors like this:

      Deprecated: Assigning the return value of new by reference is deprecated in /Users/jacobweber/bin/pear/temp/PEAR.php on line 563
      PHP Deprecated: Assigning the return value of new by reference is deprecated in /Users/jacobweber/bin/pear/temp/PEAR.php on line 566

    Now, I understand what these errors mean. I just want to hide them. So in my /private/etc/php.ini file, I have the following:

      error_reporting = E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED

    This hides those same errors in my own code, but in PEAR it doesn't; they seem to be changing the error_reporting level. Is there a good way to fix this?


  • “Could not create the file”, “Access is denied” and “Unrecoverable build error” when building a setup project in VS 2008

    - by Mister Cook
    When building a setup project I get this message:

      Error with setup build: Error 27 Could not create the file 'C:\Users\MyName\AppData\Local\Temp\VSI1E1A.tmp' 'Access is denied.'

    I have tried the following (from http://support.microsoft.com/kb/329214/EN-US):

      regsvr32 "C:\Program Files (x86)\Common Files\Microsoft Shared\MSI Tools\mergemod.dll"

    The DLL registers, but this does not fix my problem. I also tried a clean build, deleting the temp folder, running VS2008 as administrator, and restarting my PC, but the error occurs every time. I have no anti-virus software running, and I'm on Windows 7 64-bit. This operation worked fine until recently. I have read of many other users seeing this but found no solution. The only half solution I found was to edit the setup properties and switch "Package files" to "As loose uncompressed files". This works but is not ideal, as I need a single full installer.


  • Workaround for GNU Make 3.80 eval bug

    - by bengineerd
    I'm trying to create a generic build template for my Makefiles, kind of like they discuss in the eval documentation. I've run into a known bug with GNU Make 3.80: when $(eval) evaluates a line that is over 193 characters, Make crashes with a "virtual memory exhausted" error. The code I have that causes the issue looks like this:

      SRC_DIR = ./src/
      PROG_NAME = test

      define PROGRAM_template
      $(1)_SRC_DIR = $$(SRC_DIR)$(1)/
      $(1)_SRC_FILES = $$(wildcard $$($(1)_SRC_DIR)*.c)
      $(1)_OBJ_FILES = $$($(1)_SRC_FILES):.c=.o)
      $$($(1)_OBJ_FILES) : $$($(1)_SRC_FILES) # This is the problem line
      endef

      $(eval $(call PROGRAM_template,$(PROG_NAME)))

    When I run this Makefile, I get:

      gmake: *** virtual memory exhausted. Stop.

    The expected output is that all .c files in ./src/test/ get compiled into .o files (via an implicit rule). The problem is that $$($(1)_SRC_FILES) and $$($(1)_OBJ_FILES) together expand to over 193 characters (if there are enough source files). I have tried running the makefile on a directory where there are only 2 .c files, and it works fine; it's only when there are many .c files in the SRC directory that I get the error. I know that GNU Make 3.81 fixes this bug. Unfortunately I do not have the authority or ability to install the newer version on the system I'm working on, so I'm stuck with 3.80. Is there some workaround? Maybe split $$($(1)_SRC_FILES) up and declare each dependency individually within the eval?



  • JSP or .ascx equivalent for Scala?

    - by Daniel Worthington
    I'm working on a small MVC "framework" (it's really very small) in Scala. I'd like to be able to write my view files as Scala code so I can get lots of help from the compiler. Pre-compiling is great, but what I really want is a way to have the servlet container automatically compile certain files (my view files) on request so I don't have to shut down Jetty and compile all my source files at once, then start it up again just to see small changes to my HTML. I do this a lot with .ascx files in .NET (the file will contain just one scriptlet tag with a bunch of C# code inside which writes out markup using an XmlWriter) and I love this workflow. You just make changes and then refresh your browser, but it's still getting compiled! I don't have a lot of experience with Java, but it seems possible to do this with JSP as well. I'm wondering if this sort of thing is possible in Scala. I have looked into building this myself (see more info here: http://www.nabble.com/Compiler-API-td12050645.html) but I would rather use something else if it's out there.


  • all rows plus min / max values using a single stored procedure

    - by Rajeev
    I have a custom data source which pulls data from a flat file. The flat file contains a timestamp, a source, and data. I can use sp_execute to run a select query against the flat file. I'm currently using two stored procedures:

      - one which runs a select * from flat_file into a temp table
      - the other which does a select min/max from flat_file, grouping by source, into another temp table

    I'm using the data retrieved by the stored procedures in an SSRS report. Is it possible, in a single stored procedure, to retrieve all the rows from the file within a date range and also identify the min/max values for each group retrieved?
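
    If the data ends up in SQL Server 2005 or later, window aggregates can do both jobs in one statement, e.g. MIN(timestamp) OVER (PARTITION BY source) alongside the detail rows. Since the poster's exact schema isn't given, here is the shape of that result illustrated with an in-memory C# LINQ sketch (types and sample values are mine):

      using System;
      using System.Linq;

      class Demo
      {
          record Row(DateTime Timestamp, string Source, string Data);

          static void Main()
          {
              var rows = new[]
              {
                  new Row(new DateTime(2010, 1, 1), "a", "x"),
                  new Row(new DateTime(2010, 1, 5), "a", "y"),
                  new Row(new DateTime(2010, 1, 3), "b", "z"),
              };
              var rangeStart = new DateTime(2010, 1, 1);
              var rangeEnd   = new DateTime(2010, 1, 31);

              // Every row in the range, carrying its group's min/max alongside;
              // the same shape MIN()/MAX() OVER (PARTITION BY source) returns.
              var result = rows
                  .Where(r => r.Timestamp >= rangeStart && r.Timestamp <= rangeEnd)
                  .GroupBy(r => r.Source)
                  .SelectMany(g => g.Select(r => new
                  {
                      r.Timestamp,
                      r.Source,
                      r.Data,
                      GroupMin = g.Min(x => x.Timestamp),
                      GroupMax = g.Max(x => x.Timestamp),
                  }));

              foreach (var r in result)
                  Console.WriteLine($"{r.Source} {r.Timestamp:d} min={r.GroupMin:d} max={r.GroupMax:d}");
          }
      }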


  • CentOS 6.3 vsftpd unable to upload file to Apache web server

    - by user148648
    I am new to CentOS; I worked with Sun Solaris and uploaded files to an Apache web server before. I created an end-user account and managed to ftp to the server from the command prompt, but I get the error message '226 Transfer Done (but failed to open directory)'. The content of my vsftpd.conf is below:

      # Example config file /etc/vsftpd/vsftpd.conf
      #
      # The default compiled in settings are fairly paranoid. This sample file
      # loosens things up a bit, to make the ftp daemon more usable.
      # Please see vsftpd.conf.5 for all compiled in defaults.
      #
      # READ THIS: This example file is NOT an exhaustive list of vsftpd options.
      # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's
      # capabilities.
      #
      # Allow anonymous FTP? (Beware - allowed by default if you comment this out).
      anonymous_enable=YES    # ** may need to comment it back
      #
      # Uncomment this to allow local users to log in.
      local_enable=YES
      #
      # Uncomment this to enable any form of FTP write command.
      write_enable=YES
      #
      # Default umask for local users is 077. You may wish to change this to 022,
      # if your users expect that (022 is used by most other ftpd's)
      #local_umask=022
      local_umask=077
      #
      # Uncomment this to allow the anonymous FTP user to upload files. This only
      # has an effect if the above global write enable is activated. Also, you will
      # obviously need to create a directory writable by the FTP user.
      anon_upload_enable=YES    # *** maybe to comment it back!!!
      #
      # Uncomment this if you want the anonymous FTP user to be able to create
      # new directories.
      anon_mkdir_write_enable=YES    # ** may need to comment it back!!!
      #
      # Activate directory messages - messages given to remote users when they
      # go into a certain directory.
      dirmessage_enable=YES
      #
      # The target log file can be vsftpd_log_file or xferlog_file.
      # This depends on setting xferlog_std_format parameter
      xferlog_enable=YES
      #
      # Make sure PORT transfer connections originate from port 20 (ftp-data).
      connect_from_port_20=YES
      #
      # If you want, you can arrange for uploaded anonymous files to be owned by
      # a different user. Note! Using "root" for uploaded files is not
      # recommended!
      #chown_uploads=YES
      #chown_username=whoever
      #
      # The name of log file when xferlog_enable=YES and xferlog_std_format=YES
      # WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log
      xferlog_file=/var/log/xferlog
      #
      # Switches between logging into vsftpd_log_file and xferlog_file files.
      # NO writes to vsftpd_log_file, YES to xferlog_file
      xferlog_std_format=YES
      #
      # You may change the default value for timing out an idle session.
      #idle_session_timeout=600
      #
      # You may change the default value for timing out a data connection.
      #data_connection_timeout=120
      #
      # It is recommended that you define on your system a unique user which the
      # ftp server can use as a totally isolated and unprivileged user.
      #nopriv_user=ftpsecure
      #
      # Enable this and the server will recognise asynchronous ABOR requests. Not
      # recommended for security (the code is non-trivial). Not enabling it,
      # however, may confuse older FTP clients.
      #async_abor_enable=YES
      #
      # By default the server will pretend to allow ASCII mode but in fact ignore
      # the request. Turn on the below options to have the server actually do ASCII
      # mangling on files when in ASCII mode.
      # Beware that on some FTP servers, ASCII support allows a denial of service
      # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd
      # predicted this attack and has always been safe, reporting the size of the
      # raw file.
      # ASCII mangling is a horrible feature of the protocol.
      ascii_upload_enable=YES
      ascii_download_enable=YES
      #
      # You may fully customise the login banner string:
      ftpd_banner=Warning, only for authorize login.
      #
      # You may specify a file of disallowed anonymous e-mail addresses. Apparently
      # useful for combatting certain DoS attacks.
      #deny_email_enable=YES
      # (default follows)
      #banned_email_file=/etc/vsftpd/banned_emails
      #
      # You may specify an explicit list of local users to chroot() to their home
      # directory. If chroot_local_user is YES, then this list becomes a list of
      # users to NOT chroot().
      chroot_local_user=YES
      chroot_list_enable=YES
      # (default follows)
      #chroot_list_file=/etc/vsftpd/chroot_list
      local_root=/var/www
      #
      # You may activate the "-R" option to the builtin ls. This is disabled by
      # default to avoid remote users being able to cause excessive I/O on large
      # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume
      # the presence of the "-R" option, so there is a strong case for enabling it.
      ls_recurse_enable=YES
      #
      # When "listen" directive is enabled, vsftpd runs in standalone mode and
      # listens on IPv4 sockets. This directive cannot be used in conjunction
      # with the listen_ipv6 directive.
      listen=YES
      #
      # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6
      # sockets, you must run two copies of vsftpd with two configuration files.
      # Make sure, that one of the listen options is commented !!
      #listen_ipv6=YES
      pam_service_name=vsftpd
      userlist_enable=YES
      tcp_wrappers=YES


  • How to upload a huge(GBs) file to a webserver

    - by Aman Jain
    Hi, I want to create a web interface that will allow users to upload huge files. These files are actually vmdk files (Virtual Machine Disks), and they can be multiple GB in size. I think this makes the situation different from normal file uploads (10-20 MB). So any advice on how to achieve this in an efficient and fault-tolerant way is appreciated. Thanks, Aman Jain
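
    Whatever the server side ends up being, the client has to stream rather than buffer, and fault tolerance then usually means uploading in restartable chunks. As an illustration only (not a full resumable protocol), a C# sketch that streams a multi-GB file over HTTP without holding it in memory; the URL and path are placeholders:

      using System;
      using System.IO;
      using System.Net;

      class BigUpload
      {
          static void Main()
          {
              var request = (HttpWebRequest)WebRequest.Create("http://server/upload/disk.vmdk");
              request.Method = "PUT";
              request.Timeout = System.Threading.Timeout.Infinite;
              request.AllowWriteStreamBuffering = false; // don't stage GBs in RAM
              request.SendChunked = true;                // no up-front Content-Length

              using (var file = File.OpenRead(@"c:\vms\disk.vmdk"))
              using (var body = request.GetRequestStream())
              {
                  var buffer = new byte[81920];
                  int read;
                  while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
                  {
                      body.Write(buffer, 0, read);
                      // A fault-tolerant design would checkpoint the offset here
                      // so a failed transfer can resume instead of restarting.
                  }
              }

              using (var response = (HttpWebResponse)request.GetResponse())
                  Console.WriteLine(response.StatusCode);
          }
      }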


  • Active Directory authentication for Ubuntu Linux login and CIFS mounting of home directories...

    - by Jamie
    I've configured my Ubuntu 10.04 Server LTS Beta 2, residing on a Windows network, to authenticate logins using Active Directory and then mount a Windows share to serve as the home directories. Here is what I did, starting from the initial installation of Ubuntu.

    Download and install Ubuntu Server 10.04 LTS Beta 2.

    Get updates:

      # sudo apt-get update && sudo apt-get upgrade

    Install an SSH server (sshd):

      # sudo apt-get install openssh-server

    Some would argue that you should "lock sshd down" by disabling root logins. I figure if you're smart enough to hack an ssh session for a root password, you're probably not going to be thwarted by the addition of PermitRootLogin no in the /etc/ssh/sshd_config file. If you're paranoid or simply not convinced, then edit the file or give the following a spin:

      # (grep PermitRootLogin /etc/ssh/sshd_config && sudo sed -ri 's/^(PermitRootLogin ).+/\1no/' /etc/ssh/sshd_config) || echo "PermitRootLogin not found. Add it manually."

    Install required packages:

      # sudo apt-get install winbind samba smbfs smbclient ntp krb5-user

    Do some basic networking housecleaning in preparation for the specific package configurations to come. Determine your Windows domain name, DNS server name, and the IP address of the Active Directory server (for samba). For convenience I set environment variables for the Windows domain and DNS server. For me it was (my AD IP address was 192.168.20.11):

      # WINDOMAIN=mydomain.local && WINDNS=srv1.$WINDOMAIN

    If you want to figure out what your domain and DNS server is (I was a contractor and didn't know the network), check out this helpful reference.

    The authentication and file sharing processes for the Windows and Linux boxes need to have their clocks agree. Do this with an NTP service; on the server version of Ubuntu the NTP service comes installed and preconfigured. The network I was joining had the DNS server serving up the NTP service too:

      # sudo sed -ri "s/^(server[ \t]).+/\1$WINDNS/" /etc/ntp.conf

    Restart the NTP daemon:

      # sudo /etc/init.d/ntp restart

    We need to christen the Linux box on the new network. This is done by editing the hosts file (replace the DNS entry with the FQDN of the Windows DNS):

      # sudo sed -ri "s/^(127\.0\.0\.1[ \t]).*/\1$(hostname).$WINDOMAIN localhost $(hostname)/" /etc/hosts

    Kerberos configuration. The instructions that follow here aren't to be taken literally: the values for MYDOMAIN.LOCAL and srv1.mydomain.local need to be replaced with what's appropriate for your network when you edit the files.

    Edit the (previously installed above) /etc/krb5.conf file. Find the [libdefaults] section and change (or add) the key value pair (and it is in UPPERCASE WHERE IT NEEDS TO BE):

      [libdefaults]
      default_realm = MYDOMAIN.LOCAL

    Add the following to the [realms] section of the file:

      MYDOMAIN.LOCAL = {
          kdc = srv1.mydomain.local
          admin_server = srv1.mydomain.local
          default_domain = MYDOMAIN.LOCAL
      }

    Add the following to the [domain_realm] section of the file:

      .mydomain.local = MYDOMAIN.LOCAL
      mydomain.local = MYDOMAIN.LOCAL

    Configure samba. When it's all said and done, I don't know where SAMBA fits in ... I used cifs to mount the windows shares ... regardless, my system works and this is how I did it.

    Replace /etc/samba/smb.conf (remember I was working from a clean distro of Ubuntu, so I wasn't worried about breaking anything):

      [global]
      security = ads
      realm = MYDOMAIN.LOCAL
      password server = 192.168.20.11
      workgroup = MYDOMAIN
      idmap uid = 10000-20000
      idmap gid = 10000-20000
      winbind enum users = yes
      winbind enum groups = yes
      template homedir = /home/%D/%U
      template shell = /bin/bash
      client use spnego = yes
      client ntlmv2 auth = yes
      encrypt passwords = yes
      winbind use default domain = yes
      restrict anonymous = 2

    Start and stop various services:

      # sudo /etc/init.d/winbind stop
      # sudo service smbd restart
      # sudo /etc/init.d/winbind start

    Set up the authentication. Edit /etc/nsswitch.conf. Here are the contents of mine:

      passwd:     compat winbind
      group:      compat winbind
      shadow:     compat winbind
      hosts:      files dns
      networks:   files
      protocols:  db files
      services:   db files
      ethers:     db files
      rpc:        db files

    Start and stop various services:

      # sudo /etc/init.d/winbind stop
      # sudo service smbd restart
      # sudo /etc/init.d/winbind start

    At this point I could log in; home directories didn't exist, but I could log in. Later I'll come back and add how I got the cifs automounting to work. Numerous resources were considered so I could figure this out. Here is a short list (a number of these links point to my own questions on the topic):

      Samba Kerberos Active Directory WinBind
      Mounting Linux user home directories on CIFS server
      Authenticating OpenBSD against Active Directory
      How to use Active Directory to authenticate linux users
      Mounting windows shares with Active Directory permissions
      Using Active Directory authentication with Samba on Ubuntu 9.10 server 64bit
      How practical is to authenticate a Linux server against AD?
      Auto-mounting a windows share on Linux AD login


  • Rational Application Developer (RAD) 7.5+ and WebSphere runtime will not pick up jars from projects

    - by Berlin Brown
    With RAD version 7.5.3 and Java 1.5. I have a couple of different projects. I needed to break out the java code and turn the *.class files into a jar. So basically the same *.class files: I just removed the source and jarred the class files into a jar. I included the jar in the project, and I also did an order/export on the jar so that other projects can see it. At this point, ideally, my project should not have changed, because I am using class files in a jar instead of the java code. But when I visit my web application in WebSphere, I get class-not-found errors on the classes that are now in the jar.

    Project structure:

      A. Project earApp -- will need the webapp
      B. Project webapp -- will need the project (no jar files or *.java files are found in this project)
      C. Project javasrc -- the java source and the NEW JAR file are found here.

    I don't think WebSphere is acknowledging the jar. Here is the error:

      java.lang.NoClassDefFoundError: com.MyApp
      at java.lang.ClassLoader.defineClassImpl(Native Method)
      at java.lang.ClassLoader.defineClass(ClassLoader.java:258)
      at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:151)
      at com.ibm.ws.classloader.CompoundClassLoader._defineClass(CompoundClassLoader.java:675)
      at com.ibm.ws.classloader.CompoundClassLoader.findClass(CompoundClassLoader.java:614)
      at com.ibm.ws.classloader.CompoundClassLoader.loadClass(CompoundClassLoader.java:431)
      at java.lang.ClassLoader.loadClass(ClassLoader.java:597)
      at java.lang.Class.getDeclaredMethodsImpl(Native Method)
      at java.lang.Class.getDeclaredMethods(Class.java:664)
      at com.ibm.ws.webcontainer.annotation.data.ScannedAnnotationData.collectMethodAnnotations(ScannedAnnotationData.java:130)
      at com.ibm.ws.webcontainer.annotation.data.ScannedAnnotationData.<init>(ScannedAnnotationData.java:47)
      at com.ibm.ws.webcontainer.annotation.AnnotationScanner.scanClass(AnnotationScanner.java:61)
      at com.ibm.ws.wswebcontainer.webapp.WebApp.processRuntimeAnnotationHelpers(WebApp.java:711)
      at com.ibm.ws.wswebcontainer.webapp.WebApp.populateJavaNameSpace(WebApp.java:624)
      at com.ibm.ws.wswebcontainer.webapp.WebApp.initialize(WebApp.java:289)
      at com.ibm.ws.wswebcontainer.webapp.WebGroup.addWebApplication(WebGroup.java:93)
      at com.ibm.ws.wswebcontainer.VirtualHost.addWebApplication(VirtualHost.java:162)
      at com.ibm.ws.wswebcontainer.WebContainer.addWebApp(WebContainer.java:671)
      at com.ibm.ws.wswebcontainer.WebContainer.addWebApplication(WebContainer.java:624)
      at com.ibm.ws.webcontainer.component.WebContainerImpl.install(WebContainerImpl.java:395)
      at com.ibm.ws.webcontainer.component.WebContainerImpl.start(WebContainerImpl.java:611)
      at com.ibm.ws.runtime.component.ApplicationMgrImpl.start(ApplicationMgrImpl.java:1274)
      at com.ibm.ws.runtime.component.DeployedApplicationImpl.fireDeployedObjectStart(DeployedApplicationImpl.java:1165)
      at com.ibm.ws.runtime.component.DeployedModuleImpl.start(DeployedModuleImpl.java:587)
      at com.ibm.ws.runtime.component.DeployedApplicationImpl.start(DeployedApplicationImpl.java:832)
      at com.ibm.ws.runtime.component.ApplicationMgrImpl.startApplication(ApplicationMgrImpl.java:921)
      at com.ibm.ws.runtime.component.ApplicationMgrImpl$AppInitializer.run(ApplicationMgrImpl.java:2124)
      at com.ibm.wsspi.runtime.component.WsComponentImpl$_AsynchInitializer.run(WsComponentImpl.java:342)
      at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1497)

    What do you think I need to do?


  • Fastest way to do mass update

    - by user356004
    Let’s say you have a table with about 5 million records and an nvarchar(max) column populated with large text data. You want to set this column to NULL if SomeOtherColumn = 1, in the fastest possible way. The brute-force UPDATE does not work very well here because it creates one large implicit transaction and takes forever. Doing the updates in small batches of 50K records at a time works, but it's still taking 47 hours to complete on a beefy 32-core/64GB server. Is there any way to do this update faster? Are there any magic query hints/table options that sacrifice something else (like concurrency) in exchange for speed? NOTE: Creating a temp table or temp column is not an option because this nvarchar(max) column involves lots of data and so consumes lots of space!
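
    For reference, the 50K-batch approach the poster describes is normally driven by a loop like the sketch below; this only restates the baseline in C#/ADO.NET (table, column, and connection string are placeholders), with no claim of beating 47 hours. The usual extra levers are an index (or filtered index) on SomeOtherColumn and keeping each transaction small so the log can truncate:

      using System;
      using System.Data.SqlClient;

      class BatchedNullOut
      {
          const string ConnectionString =
              "Server=.;Database=MyDb;Integrated Security=true"; // placeholder

          // UPDATE TOP keeps each transaction small; the IS NOT NULL predicate
          // stops already-finished rows from being picked up by later batches.
          const string Sql = @"
              UPDATE TOP (50000) dbo.BigTable
              SET BigText = NULL
              WHERE SomeOtherColumn = 1
                AND BigText IS NOT NULL;";

          static void Main()
          {
              using (var connection = new SqlConnection(ConnectionString))
              {
                  connection.Open();
                  int affected;
                  do
                  {
                      using (var command = new SqlCommand(Sql, connection))
                      {
                          command.CommandTimeout = 0; // let each batch finish
                          affected = command.ExecuteNonQuery();
                      }
                      Console.WriteLine($"batch touched {affected} rows");
                  } while (affected > 0); // done when a batch updates nothing
              }
          }
      }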


  • Security flaw in this code approach

    - by Alec Smart
    Hello, I am wondering if there is any security flaw in this approach. I am writing a piece of code which allows users to upload files and another set of users to download those files. These files can be anything. A user uploads a file (any file, including .php files); it is renamed to an MD5 hash (extension removed) and stored on the server, and a corresponding MySQL entry is made. A user trying to download the file uses, say, download.php, where the MD5-named file is sent (with its original name). Is there some way in which anyone can exploit the above scenario?
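
    Most of the attack surface in a scheme like this tends to sit in the download script: an unvalidated file-name parameter becomes a path-traversal vector, files stored inside the web root can make an uploaded .php executable after all, and the stored original name must be sanitized before it is echoed into a Content-Disposition header. The question's stack is PHP, so purely as an illustration of the checks involved, a C# sketch (names and paths are mine):

      using System;
      using System.IO;
      using System.Text.RegularExpressions;

      static class Download
      {
          // Keeping uploads outside the web root means an uploaded script
          // can never be executed by the web server, only streamed back.
          const string StorageDir = @"d:\storage\uploads";

          public static Stream Open(string fileId)
          {
              // Accept exactly 32 lowercase hex characters; anything else
              // (e.g. "..\..\web.config") is rejected before it touches disk.
              if (fileId == null || !Regex.IsMatch(fileId, "^[0-9a-f]{32}$"))
                  throw new ArgumentException("invalid file id");

              return File.OpenRead(Path.Combine(StorageDir, fileId));
          }
      }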


  • Help on using paperclip plugin

    - by Brian Roisentul
    I've just installed this plugin, created the migrations, and added everything I needed to make it work (I didn't install ImageMagick yet). The problem is that when I get the upload control's parameter in my controller in order to save it, I get something like this: #<File:C:\Users\Brian\AppData\Local\Temp\RackMultipart.2560.6677> instead of a simple string like C:\Users\Brian\AppData\Local\Temp\RackMultipart.2560.6677. And if I try to read it, I get the following exception: TypeError: backtrace must be Array of String. What am I doing wrong? How do I read it, or simply get rid of the # and < symbols?

