Search Results

Search found 75840 results on 3034 pages for 'file servers'.

  • Django - Moving database from development to production servers

    - by Garfonzo
    I am working on a Django project with a MySQL backend. What is the best way to update a production server's database to reflect the changes made on the development server's database? Right now, when I develop, I make some changes to a models.py file and then create a schema migration using South. Sometimes I do several migrations across several apps within the main project folder before it is ready for the production database, which means there are several migration files in the app/migrations/ folder created by South. So, on the production server, how does one update the database to reflect all the changes made in development, without any data loss?
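    A minimal sketch of the usual South workflow for this, assuming the app is called myapp and the migration files are committed to version control together with the code (both names and the use of git are assumptions):

        # On the development machine: generate and commit the migration
        python manage.py schemamigration myapp --auto
        git add myapp/migrations/
        git commit -m "Schema migration for model changes"

        # On the production server: pull the code, then apply outstanding migrations
        git pull
        python manage.py migrate myapp

        # If the production tables already exist but South has never run there,
        # fake the initial migration once so only the newer changes are applied
        python manage.py migrate myapp 0001 --fake

    South records what it has applied in its south_migrationhistory table, so migrate only runs the migrations production has not seen yet; existing rows are untouched unless a migration itself alters or drops data.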

  • PhpMyAdmin Hangs On MySQL Error

    - by user75228
    I'm currently running PhpMyAdmin 4.0.10 (the latest version supporting PHP 4.2.X) on my Amazon EC2 connecting to a MySQL database on RDS. Everything works perfectly fine except actions that return a mysql error message. Whether I perform "any" kind of action that will return a mysql error, Phpmyadmin will hang with the yellow "Loading" box forever without displaying anything. For example, if I perform the following command in MySQL CLI : select * from 123; It instantly returns the following error : ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '123' at line 1 which is completely normal because table 123 doesn't exist. However, if I execute the exact same command in the "SQL" box in Phpmyadmin, after I click "Go" it'll display "Loading" and stops there forever. Has anyone ever encountered this kind of issue with Phpmyadmin? Is this a bug or I have something wrong with my config.inc.php? Any help would be much appreciated. I also noticed these error messages in my apache error logs : /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open Below are my config.inc.php settings : <?php /* vim: set expandtab sw=4 ts=4 sts=4: */ /** * phpMyAdmin sample configuration, you can use it as base for * manual configuration. For easier setup you can use setup/ * * All directives are explained in documentation in the doc/ folder * or at <http://docs.phpmyadmin.net/>. * * @package PhpMyAdmin */ /* * This is needed for cookie based authentication to encrypt password in * cookie */ $cfg['blowfish_secret'] = 'something_random'; /* YOU MUST FILL IN THIS FOR COOKIE AUTH! */ /* * Servers configuration */ $i = 0; /* * First server */ $i++; /* Authentication type */ $cfg['Servers'][$i]['auth_type'] = 'cookie'; /* Server parameters */ $cfg['Servers'][$i]['host'] = '*.rds.amazonaws.com'; $cfg['Servers'][$i]['connect_type'] = 'tcp'; $cfg['Servers'][$i]['compress'] = true; /* Select mysql if your server does not have mysqli */ $cfg['Servers'][$i]['extension'] = 'mysqli'; $cfg['Servers'][$i]['AllowNoPassword'] = false; $cfg['LoginCookieValidity'] = '3600'; /* * phpMyAdmin configuration storage settings. 
*/ /* User used to manipulate with storage */ $cfg['Servers'][$i]['controlhost'] = '*.rds.amazonaws.com'; $cfg['Servers'][$i]['controluser'] = 'pma'; $cfg['Servers'][$i]['controlpass'] = 'password'; /* Storage database and tables */ $cfg['Servers'][$i]['pmadb'] = 'phpmyadmin'; $cfg['Servers'][$i]['bookmarktable'] = 'pma__bookmark'; $cfg['Servers'][$i]['relation'] = 'pma__relation'; $cfg['Servers'][$i]['table_info'] = 'pma__table_info'; $cfg['Servers'][$i]['table_coords'] = 'pma__table_coords'; $cfg['Servers'][$i]['pdf_pages'] = 'pma__pdf_pages'; $cfg['Servers'][$i]['column_info'] = 'pma__column_info'; $cfg['Servers'][$i]['history'] = 'pma__history'; $cfg['Servers'][$i]['table_uiprefs'] = 'pma__table_uiprefs'; $cfg['Servers'][$i]['tracking'] = 'pma__tracking'; $cfg['Servers'][$i]['designer_coords'] = 'pma__designer_coords'; $cfg['Servers'][$i]['userconfig'] = 'pma__userconfig'; $cfg['Servers'][$i]['recent'] = 'pma__recent'; /* Contrib / Swekey authentication */ // $cfg['Servers'][$i]['auth_swekey_config'] = '/etc/swekey-pma.conf'; /* * End of servers configuration */ /* * Directories for saving/loading files from server */ $cfg['UploadDir'] = ''; $cfg['SaveDir'] = ''; /** * Defines whether a user should be displayed a "show all (records)" * button in browse mode or not. * default = false */ //$cfg['ShowAll'] = true; /** * Number of rows displayed when browsing a result set. If the result * set contains more rows, "Previous" and "Next". * default = 30 */ $cfg['MaxRows'] = 50; /** * disallow editing of binary fields * valid values are: * false allow editing * 'blob' allow editing except for BLOB fields * 'noblob' disallow editing except for BLOB fields * 'all' disallow editing * default = blob */ //$cfg['ProtectBinary'] = 'false'; /** * Default language to use, if not browser-defined or user-defined * (you find all languages in the locale folder) * uncomment the desired line: * default = 'en' */ //$cfg['DefaultLang'] = 'en'; //$cfg['DefaultLang'] = 'de'; /** * default display direction (horizontal|vertical|horizontalflipped) */ //$cfg['DefaultDisplay'] = 'vertical'; /** * How many columns should be used for table display of a database? * (a value larger than 1 results in some information being hidden) * default = 1 */ //$cfg['PropertiesNumColumns'] = 2; /** * Set to true if you want DB-based query history.If false, this utilizes * JS-routines to display query history (lost by window close) * * This requires configuration storage enabled, see above. * default = false */ //$cfg['QueryHistoryDB'] = true; /** * When using DB-based query history, how many entries should be kept? * * default = 25 */ //$cfg['QueryHistoryMax'] = 100; /* * You can find more configuration options in the documentation * in the doc/ folder or at <http://docs.phpmyadmin.net/>. */ ?>

  • Strange File-Server I/O Spikes - What Is Causing This?

    - by CruftRemover
    I am currently having a problem with a small Linux server that is providing file-sharing services to four Windows 7 32-bit clients. The server is an AMD PhenomX3 with two Western Digital 10EADS (1TB) drives, attached to a Gigabyte GA-MA770T-UD3 mainboard and running Ubuntu Server 10.04.1 LTS. The client machines are taking an extremely long time to access/transfer data on the file server. Applications often become non-responsive while trying to open files located remotely, or one program attempting to open a file but having to wait will prevent other software from accessing network resources at all. Other examples include one image taking 20 seconds or more to open, and in one instance a user waited 110 seconds for Microsoft Word 2007 to save a document. I had initially thought the problem was network-related, but this appears not to be the case. All cables and switches have been tested (one cable was replaced) for verification. This was additionally confirmed when closing down all client machines and rebooting the server resulted in the hard-drive light staying on solid during the startup process. For the first 15 minutes during boot, logon and after logging on (with no client machines attached), the system displayed a load average of 4 or higher. Symptoms included waiting several minutes for the logon prompt to appear, and then several minutes for the password prompt to appear after typing in a user name. After logon, it also took upwards of 45 seconds for the 'smartctl' man page to appear after the command 'man smartctl' was issued. After 15 minutes of this behaviour, the load average dropped to around 0.02 and the machine behaved normally. I have also considered that the problem is hard-drive-related, however diagnostic programs reveal no drive problems. Western Digital DLG, Spinrite and SMARTUDM show no abnormal characteristics - the drives are in perfect health as far as the hardware is concerned. I have thus far been completely unable to track down the cause of this problem, so any help is greatly appreciated. Requested Information: Output of 'free' hxxp://pastebin.com/mfsJS8HS (stupid spam filter) The command 'hdparm -d /dev/sda1' reports: HDIO_GET_DMA failed: Inappropriate ioctl for device (the BIOS is set to AHCI - I probably should have mentioned that).
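    A hedged diagnostic sketch rather than a fix: commands like these (iostat and iotop come from the sysstat and iotop packages on Ubuntu 10.04) can help show whether a specific process or a struggling disk is behind the load spikes described above:

        sudo apt-get install sysstat iotop

        # Per-device utilisation and wait times; %util near 100 with a high await
        # points at the disks rather than the network
        iostat -x 5

        # Which processes are actually generating the I/O during a spike
        sudo iotop -o

        # Kernel messages about ATA link resets or read failures
        dmesg | grep -iE 'ata|error|reset' | tail -n 50

        # Full SMART detail for each drive (complements the earlier SMART checks)
        sudo smartctl -a /dev/sda
        sudo smartctl -a /dev/sdb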

  • How to deploy a single website to multiple physical servers?

    - by user70122
    I want to deploy my PHP and Ajax based dating website to multiple physical servers. Does anyone know a method to keep the files, settings and user input synchronised on all of the physical servers, so that the servers do the following jobs:
     1. Work together as a network, speeding things up and giving end users a more robust service.
     2. Store the data on every server's hard drive.
     3. Give every server an internet connection from a different provider, so that (a) if any one provider goes down, or (b) any hardware or power failure occurs, the website keeps running without interruption and without any data loss.
    Does anyone know of an online course or some comprehensive tutorials about this topic?
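    A minimal sketch of the file-synchronisation part only, assuming two hypothetical web servers (web1.example.com, web2.example.com) reachable over SSH; database replication and DNS/IP failover are separate problems that a file copy does not solve:

        #!/usr/bin/env bash
        # Push the site code from a staging machine to every web server.
        SERVERS="web1.example.com web2.example.com"   # hypothetical hostnames
        SITE_DIR=/var/www/datingsite                  # hypothetical document root

        for host in $SERVERS; do
            rsync -az --delete ./site/ "deploy@${host}:${SITE_DIR}/"
        done

    User input is normally not mirrored this way; it lives in a replicated database (for example MySQL master-slave or master-master replication) or on shared storage, while traffic is spread across the servers with round-robin DNS or a load balancer.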

  • How can I simulate the production servers in my home with Linux VMs [closed]

    - by user31
    I am thinking of building a small simulation of how big companies run their systems, in my home environment, to get a feel for it. I have a server with 8 GB of RAM and a quad-core processor. I am thinking of the following setup, if that is possible; I have not worked at bigger companies, so I want to know how I can do it. I am thinking of creating five virtual machines:
     VM1 will be the database server and will have all the databases: MySQL, PostgreSQL, SQLite, MongoDB and Oracle.
     VM2 will be the web server and will have Apache and Tomcat installed.
     VM3 will be the file server, where I will keep all the website files.
     VM4 I am thinking of as the main box where I can install Python, PHP and Java/J2EE sites, but I am not sure.
     VM5 will run Windows Server 2008 for C# .NET applications.
    My main idea is to be able to host sites in PHP, Python and Java/J2EE with Spring. Is my setup OK, or am I missing a few things? Please guide me towards the correct setup so that I can learn.

  • How to grep (or find) on cPanel?

    - by San
    How can I search for a specific string (a function name or a variable name) in my files, which sit in various directories under the cPanel file manager? I have a library directory, and the functions in that directory are used in various apps and pages. Now I need to change something in a library file, and I need to know the impact on the files which use that library's functions. How can I search / find / grep through the hosted files?
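    If the hosting plan allows SSH access (or cPanel's Terminal), a recursive grep is the quickest approach; a sketch, assuming the sites live under ~/public_html and the function is called my_helper (both are placeholders):

        # List every PHP file and line that mentions the function, with line numbers
        grep -rnw --include='*.php' 'my_helper' ~/public_html/

        # The same idea with find, useful for filtering by path or modification date
        find ~/public_html -name '*.php' -exec grep -l 'my_helper' {} +

    Without shell access, the File Manager's own search generally only matches file names, so the usual fallbacks are a small PHP script built on RecursiveDirectoryIterator, or downloading the account over FTP and grepping locally.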

  • How to backup/restore full-disk encryption ubuntu 11.10?

    - by ggc
    How do I back up and restore full-disk encryption on Ubuntu 11.10? I would like to take the raw encrypted file system and restore it on another computer. Encryption details: the encryption was set up via the Ubuntu alternate CD installer, and the only thing unencrypted is /boot. File system setup: boot - j, swap - swap, everything else - ext4. Any suggestions? I have considered backing up the file system stripped of encryption, but I would prefer to keep the OS encrypted while transferring. Thanks for any help!
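    One way that keeps everything encrypted in transit is a raw, block-level copy of the LUKS container taken from a live CD/USB; a sketch, assuming the encrypted partition created by the alternate installer is /dev/sda5 and /boot is /dev/sda1 (check the real names with sudo blkid, the device names here are assumptions), and that the target partitions are at least as large:

        # Back up the partition table, /boot and the (still locked) encrypted volume
        sudo sfdisk -d /dev/sda > /media/usbdisk/sda-partitions.txt
        sudo dd if=/dev/sda1 bs=4M | gzip -c > /media/usbdisk/boot.img.gz
        sudo dd if=/dev/sda5 bs=4M | gzip -c > /media/usbdisk/crypt-root.img.gz

        # Restore on the other computer (again from a live environment)
        sudo sfdisk /dev/sda < /media/usbdisk/sda-partitions.txt
        gunzip -c /media/usbdisk/boot.img.gz | sudo dd of=/dev/sda1 bs=4M
        gunzip -c /media/usbdisk/crypt-root.img.gz | sudo dd of=/dev/sda5 bs=4M

    Because the ciphertext is copied as-is it barely compresses, and GRUB may need to be reinstalled on the new disk afterwards, but the data never exists unencrypted outside the machine.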

  • File copying utility like rsync with error handling like ddrescue, for data recovery from a hard drive with bad sectors or hardware failure

    - by purefusion
    I have a hard drive with either bad blocks or sectors that are failing to read due to potential mechanical issues, such as a bad disk head, bad motor, or some other issue that is causing the hard drive to read data excruciatingly slowly and with lots of read errors. I'm seeing an average of 50 KB/sec, with some reads dropping below 10 KB/sec, and frequently it gets stuck on a file or sector altogether, usually for quite a long time—from 2-10 minutes or more (when using rsync, before it times out). Speed seems to vary wildly, and it gets stuck on files a lot, and when it finally gets "unstuck" it only seems to last for a short burst before it gets stuck again. The drive is also very quiet with only an occasional sound of files copying (usually when it gets stuck/unstuck for a brief time, before getting stuck again). Thus, there are none of those evil sounds that are normally associated with HDD death. Someone suggested that the problems sounded like they might be caused by a misaligned disk head, which requires a lot of re-reads before it finally reads data with success. Sounds plausible, but I digress... Anyway, the problem with rsync is that it seems to have no decent error handling support. Obviously, it wasn't meant for use in recovering data from failing hard drives, but all the so-called "data recovery" utilities out there that are meant for such use usually focus on recovery of deleted files or messed up partitions, rather than copying files off dying hard drives. Deleted file recovery is not what I need, obviously, so perhaps you can understand my disappointment in not being able to find what I'm after yet. Naturally, this is where you'd probably say "You should use ddrescue!" Well, that's all fine and dandy, but I've already got most of the data backed up, so I just want to recover certain files. I'm not concerned with trying to recover a full partition block-by-block as ddrescue does. I am only interested in rescuing just specific files and directories. Ideally, what I'd like is some sort of cross between rsync and ddrescue: something that lets me specify source and destination as directories of normal files like rsync (rather than two full partitions as ddrescue requires), with a way to skip files with errors in an initial run, and then allows me to attempt recovery of those files with errors in a later run (with a slightly altered command, of course), perhaps even offering an option to specify the number of retry attempts ...just like how ddrescue works with blocks, only I want a utility that works with specific files/directories like rsync does. So am I daydreaming here, or does something out there exist that can do this? Or, maybe even a way to make rsync or ddrescue work in such a way? I'm really open to whatever solutions might work, so long as they let me choose which files I want to "rescue", and can skip files with errors in the initial run, and try/retry those errors again later. So far I've tried rsync with the following options, but it often gets stuck on a file for longer than the timeout, and ideally I'd just like it to move on to the next file and come back later to the files it gets stuck on. I don't think that's possible though. Anyway, here's what I've been using up till now: rsync -avP --stats --block-size=512 --timeout=600 /path/to/source/* /path/to/destination/
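    One detail that helps here: GNU ddrescue is not limited to whole partitions, it accepts an ordinary file as input, so it can be wrapped in a loop to get roughly the rsync-like, per-file behaviour described above, with a map (log) file per file for later retries. A rough sketch, with the mount points as placeholders:

        #!/usr/bin/env bash
        SRC=/mnt/dying       # read-only mount of the failing drive (placeholder)
        DST=/mnt/rescue      # destination directory (placeholder)

        cd "$SRC" || exit 1
        find . -type f -print0 | while IFS= read -r -d '' f; do
            out="$DST/$f"
            mkdir -p "$(dirname "$out")"
            # First pass: -n skips the slow fine-grained retry phase, so unreadable
            # areas are recorded in the .map file and the loop moves on quickly.
            # Re-running the same command later (e.g. with -r3 added) retries only
            # the areas marked bad in the map.
            ddrescue -n "$f" "$out" "$out.map"
        done

    Unlike rsync, ddrescue does not stall indefinitely on a bad area: it gives up, notes the spot in the map, and carries on, which is essentially the "skip now, retry later" behaviour being asked for.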

  • File Server - Storage configuration: RAID vs LVM vs ZFS vs something else...?

    - by privatehuff
    We are a small company that does video editing, among other things, and need a place to keep backup copies of large media files and make it easy to share them. I've got a box set up with Ubuntu Server and 4 x 500 GB drives. They're currently set up with Samba as four shared folders that Mac/Windows workstations can see fine, but I want a better solution. There are two major reasons for this:
     1. 500 GB is not really big enough (some projects are larger).
     2. It is cumbersome to manage the current setup, because individual hard drives have different amounts of free space and duplicated data (for backup). It is confusing now and that will only get worse once there are multiple servers ("the project is on server2 in share4", etc.).
    So, I need a way to combine hard drives in such a way as to avoid complete data loss from the failure of a single drive, and so that users see only a single share on each server. I've done Linux software RAID5 and had a bad experience with it, but would try it again. LVM looks OK, but it seems like no one uses it. ZFS seems interesting, but it is relatively "new". What is the most efficient and least risky way to combine the HDDs that is convenient for my users?
    Edit: The goal here is basically to create servers that contain an arbitrary number of hard drives but limit complexity from an end-user perspective (i.e. they see one "folder" per server). Backing up data is not an issue here, but how each solution responds to hardware failure is a serious concern. That is why I lump RAID, LVM, ZFS and who-knows-what together. My prior experience with RAID5 was also on an Ubuntu Server box, and there was a tricky and unlikely set of circumstances that led to complete data loss. I could avoid that again, but was left with a feeling that I was adding an unnecessary additional point of failure to the system. I haven't used RAID10, but we are on commodity hardware and the number of data drives per box is pretty much fixed at 6. We've got a lot of 500 GB drives, and 1.5 TB is pretty small (still an option for at least one server, however). I have no experience with LVM and have read conflicting reports on how it handles drive failure. If a (non-striped) LVM setup could handle a single drive failing and only lose whichever files had a portion stored on that drive (and stored most files on a single drive only), we could even live with that. But as long as I have to learn something totally new, I may as well go all the way to ZFS. Unlike LVM, though, I would also have to change my operating system(?), so that increases the distance between where I am and where I want to be. I used a version of Solaris at uni and wouldn't mind it terribly, though. On the other end of the IT spectrum, I think I may also explore FreeNAS and/or Openfiler, but that doesn't really solve the how-to-combine-drives issue.
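    A minimal sketch of one common answer to the "combine the drives, show one share" question, using Linux md RAID plus a single ext4 filesystem; /dev/sdb through /dev/sde are placeholders for the four data drives, and RAID5 is shown only because it is one of the options discussed above:

        sudo apt-get install mdadm

        # RAID5 across four 500 GB drives: roughly 1.5 TB usable, survives one failure
        sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
            /dev/sdb /dev/sdc /dev/sdd /dev/sde

        # One filesystem on top, exported as a single Samba share
        sudo mkfs.ext4 /dev/md0
        sudo mkdir -p /srv/media
        sudo mount /dev/md0 /srv/media

        # Persist the array definition and the mount across reboots
        sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
        echo '/dev/md0  /srv/media  ext4  defaults  0  2' | sudo tee -a /etc/fstab

    RAID10 trades capacity (about 1 TB usable here) for simpler, faster rebuilds; ZFS (raidz) folds volume management and checksumming into one layer, but at the time of the question that meant zfs-fuse on Ubuntu or moving to an OpenSolaris/FreeBSD-based system such as FreeNAS.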

  • Bind can only work for the DNS server inside zone

    - by Bob
    I got a big problem when I added a new zone to my current BIND configuration.

    ===============/etc/named.conf===============

        include "/etc/rndc.key";
        controls {
            inet 127.0.0.1 port 953 allow { 127.0.0.1; } keys { "rndckey"; };
        };
        acl "trusted" {
            127.0.0.1;
            208.43.81.157;
            69.4.236.88;
        };
        options {
            directory "/var/named";
            allow-query { any; };
            recursion yes;
            allow-recursion { trusted; };
        };
        zone "." {
            type hint;
            file "root.hints";
        };
        zone "2comu.com" {
            type master;
            file "2comu.com.db";
            allow-update { none; };
        };
        zone "usa-diamond.com" {
            type master;
            file "usa-diamond.com.db";
            allow-update { none; };
        };

    ===============/var/named/2comu.com.db===============

        $TTL 86400
        @   IN  SOA ns1.2comu.com. root.2comu.com. (
                    2011011101 3600 300 3600000 3600 )
            IN  NS  ns1.2comu.com.
            IN  NS  ns2.2comu.com.
            IN  MX  10 email.2comu.com.
        ns1.2comu.com.   IN  A  208.43.81.157
        ns2.2comu.com.   IN  A  69.4.236.88
        www.2comu.com.   IN  A  208.43.81.157
        ftp.2comu.com.   IN  A  208.43.81.157
        email.2comu.com. IN  A  208.43.81.157

    ===============/var/named/usa-diamond.com===============

        $TTL 86400
        @   IN  SOA ns1.2comu.com. root.usa-diamond.com. (
                    2011011115 3600 300 3600000 3600 )
            IN  NS  ns1.2comu.com.
            IN  NS  ns2.2comu.com.
        www.usa-diamond.com. IN  A  208.43.81.157

    ================================================================

    All of the configuration inside the domain 2comu.com works well, but www.usa-diamond.com doesn't work at all. When I tried "dig +trace www.usa-diamond.com", I got the following output:

        ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_4.2 <<>> +trace usa-diamond.com
        ;; global options:  printcmd
        .                 517603  IN  NS  c.root-servers.net.
        .                 517603  IN  NS  d.root-servers.net.
        .                 517603  IN  NS  e.root-servers.net.
        .                 517603  IN  NS  f.root-servers.net.
        .                 517603  IN  NS  g.root-servers.net.
        .                 517603  IN  NS  h.root-servers.net.
        .                 517603  IN  NS  i.root-servers.net.
        .                 517603  IN  NS  j.root-servers.net.
        .                 517603  IN  NS  k.root-servers.net.
        .                 517603  IN  NS  l.root-servers.net.
        .                 517603  IN  NS  m.root-servers.net.
        .                 517603  IN  NS  a.root-servers.net.
        .                 517603  IN  NS  b.root-servers.net.
        ;; Received 500 bytes from 208.43.81.157#53(208.43.81.157) in 0 ms

        com.              172800  IN  NS  j.gtld-servers.net.
        com.              172800  IN  NS  d.gtld-servers.net.
        com.              172800  IN  NS  e.gtld-servers.net.
        com.              172800  IN  NS  i.gtld-servers.net.
        com.              172800  IN  NS  f.gtld-servers.net.
        com.              172800  IN  NS  m.gtld-servers.net.
        com.              172800  IN  NS  b.gtld-servers.net.
        com.              172800  IN  NS  k.gtld-servers.net.
        com.              172800  IN  NS  l.gtld-servers.net.
        com.              172800  IN  NS  c.gtld-servers.net.
        com.              172800  IN  NS  h.gtld-servers.net.
        com.              172800  IN  NS  a.gtld-servers.net.
        com.              172800  IN  NS  g.gtld-servers.net.
        ;; Received 505 bytes from 192.33.4.12#53(c.root-servers.net) in 3 ms

        usa-diamond.com.  172800  IN  NS  ns1.2comu.com.
        usa-diamond.com.  172800  IN  NS  ns2.2comu.com.
        ;; Received 107 bytes from 192.48.79.30#53(j.gtld-servers.net) in 177 ms

        ;; Received 33 bytes from 208.43.81.157#53(ns1.2comu.com) in 0 ms

    ================================================================

    It seems I can't get any answer from ns1.2comu.com. Can anyone give some suggestions? Thanks a lot. Bob
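    A sketch of the usual first checks on the master; if the new zone fails to load, named keeps running but returns an empty SERVFAIL-style reply for every name in it, which is consistent with the 33-byte response at the end of the trace above. Paths and names are taken from the configuration shown:

        # Syntax-check the main config and the new zone file
        named-checkconf /etc/named.conf
        named-checkzone usa-diamond.com /var/named/usa-diamond.com.db

        # Reload and watch the log for "loaded serial ..." versus a load error
        rndc reload
        grep named /var/log/messages | tail -n 50

        # Query the master directly, bypassing recursion and caches
        dig @208.43.81.157 www.usa-diamond.com A +norecurse

    One thing worth double-checking from the snippets above: named.conf points at the file "usa-diamond.com.db", while the zone file itself is headed /var/named/usa-diamond.com. If the file on disk really lacks the .db suffix, the zone will never load, and named-checkzone or the logs will say so immediately.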

  • Problem When Compressing File using vb.net

    - by Amr Elnashar
    I have a file like "Sample.bak", and when I compress it to "Sample.zip" I lose the file extension inside the zip file; when I open the compressed file I find "Sample" without any extension. I use this code:

        Dim name As String = Path.GetFileName(filePath).Replace(".Bak", "")
        Dim source() As Byte = System.IO.File.ReadAllBytes(filePath)
        Dim compressed() As Byte = ConvertToByteArray(source)
        System.IO.File.WriteAllBytes(destination & name & ".Bak" & ".zip", compressed)

    Or this code:

        Public Sub cmdCompressFile(ByVal FileName As String)
            'Stream object that reads file contents
            Dim streamObj As Stream = New StreamReader(FileName).BaseStream
            'Allocate space in buffer according to the length of the file read
            Dim buffer(streamObj.Length) As Byte
            'Fill buffer
            streamObj.Read(buffer, 0, buffer.Length)
            streamObj.Close()
            'File Stream object used to change the extension of a file
            Dim compFile As System.IO.FileStream = File.Create(Path.ChangeExtension(FileName, "zip"))
            'GZip object that compresses the file
            Dim zipStreamObj As New GZipStream(compFile, CompressionMode.Compress)
            'Write to the Stream object from the buffer
            zipStreamObj.Write(buffer, 0, buffer.Length)
            zipStreamObj.Close()
        End Sub

    Please, I need to compress the file without losing the file extension inside the compressed file. Thanks.

  • One Comcast Business Gateway, One Router, Two Web Servers

    - by Kevin Scheidt
    I have a Comcast business account with a router and a web server (info) attached. Behind the router there are multiple computers and a second web server (com), which also serves as a file server. (info) has two NICs in it: one direct to Comcast and one connected to the router. It needs to serve its websites to the world, but it also needs to be able to see all the internal computers and (com)'s served files. With just one NIC (the one connected to the router, not Comcast), (info) works fine but no one outside can see it. (com) serves port 80, and (info) needs to handle port 80 as well. I have two domain names registered and five static IPs from Comcast. Right now http://www.graceamazing.com, handled by (com), works fine, and http://www.graceamazing.com:1307, handled by (info), works fine. But as soon as I enable the second NIC in (info), http://www.graceamazing.info runs extremely slowly (horribly slow); however, http://www.graceamazing.com:1307 and .com still work fine. (com) has an IP address via the router, 70.89.233.41; (info) has an IP address of 70.89.233.46 via Comcast (the second NIC) and an internal IP of 192.168.x.100 assigned statically behind the router. Any suggestions or changes that will make http://www.graceamazing.info perform with the same speed it has when going through http://graceamazing.com:1307? Is there a setting I should check or could have missed?

  • How to make ssh connection between servers using public-key authentication

    - by Rafael
    I am setting up a continuous integration (CI) server and a test web server, and I would like the CI server to access the web server using public key authentication. On the web server I have created a user and generated the keys:

        sudo useradd -d /var/www/user -m user
        sudo passwd user
        sudo su user
        ssh-keygen -t rsa
        Generating public/private rsa key pair.
        Enter file in which to save the key (/var/www/user/.ssh/id_rsa):
        Created directory '/var/www/user/.ssh'.
        Enter passphrase (empty for no passphrase):
        Enter same passphrase again:
        Your identification has been saved in /var/www/user/.ssh/id_rsa.
        Your public key has been saved in /var/www/user/.ssh/id_rsa.pub.

    On the other side, however, the CI server copies the key to the host but is still asked for a password:

        ssh-copy-id -i ~/.ssh/id_rsa.pub user@webserver-address
        user@webserver-address's password:
        Now try logging into the machine, with "ssh 'user@webserver-address'", and check in:
          .ssh/authorized_keys
        to make sure we haven't added extra keys that you weren't expecting.

    I checked on the web server and the CI server's public key has been copied into authorized_keys, but when I connect it asks for a password:

        ssh 'user@webserver-address'
        user@webserver-address's password:

    If I use the root user rather than the user I created (both users have the copied public keys), it connects with the public key:

        ssh 'root@webserver-address'
        Welcome to Ubuntu 11.04 (GNU/Linux 2.6.18-274.7.1.el5.028stab095.1 x86_64)
         * Documentation:  https://help.ubuntu.com/
        Last login: Wed Apr 11 10:21:13 2012 from *******
        root@webserver-address:~#
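    sshd silently ignores authorized_keys when the ownership or permissions on the user's home directory, ~/.ssh or the key file are too open (the StrictModes check), and a home directory under /var/www is a likely candidate for that; a sketch of the usual checks on the web server:

        # Tighten ownership and permissions on the new user's key material
        sudo chown -R user:user /var/www/user/.ssh
        sudo chmod 755 /var/www/user              # home must not be group/world writable
        sudo chmod 700 /var/www/user/.ssh
        sudo chmod 600 /var/www/user/.ssh/authorized_keys

        # Watch the server-side log while the CI box tries to connect
        sudo tail -f /var/log/auth.log

        # From the CI server, verbose output shows which methods were offered/accepted
        ssh -v user@webserver-address

    When permissions are the cause, auth.log typically shows a line like "Authentication refused: bad ownership or modes for directory /var/www/user", and root keeps working because its own home directory already has the expected modes.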

  • FTP Server upload and filesystem questions

    - by Alex
    I'm a photographer who mainly does event photography. A while ago I bought myself a Nikon WT-4 wireless transmitter, a small device which connects via USB to my Nikon D700 DSLR and then establishes a WiFi connection to an existing WLAN. It can then upload any pictures I take via FTP to an FTP server somewhere on the network. On my laptop I have a piece of software which checks a given folder on the disk regularly; it is smart enough to look at the modified-file timestamp, and if that timestamp is less than 10 seconds old it skips the file in that iteration of the import scan.

    The problem I've discovered seems to be inherent to the FTP protocol, as I have the same problem with Windows 7's built-in IIS server as I do with FileZilla FTP Server. When the transmitter starts to upload a file, the FTP server creates a small 300-500 KB file with the correct filename on the disk, but then does nothing with it until it has completely received the file via FTP. So it seems to create this small dummy file, buffer the remainder of the upload until it has finished, and then dump the rest into the dummy file, making it the correct size. The problem is that these uploads take about 15-30 seconds depending on reception, but since the folder-watch tool will already try to import any file older than 10 seconds, it always tries to import the small dummy files, which obviously fails as they are not complete yet.

    Is there any way to "disable" this behaviour? Ideally I would like my file to show up only once it has been completely uploaded. Or perhaps someone knows another FTP server application (it has to run on Windows 7) which does not show this behaviour?
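    Since the watcher only looks at timestamps, one workaround is to gate the import on the file size being stable between two polls instead of on age. The watcher here runs on Windows, so the following is only a sketch of that logic in shell form, with the paths and extensions as placeholders; the same idea can be reimplemented in whatever the folder-watch tool supports:

        #!/usr/bin/env bash
        # Move a file to the import folder only once its size has stopped changing.
        WATCH_DIR=/srv/ftp/incoming     # placeholder path
        READY_DIR=/srv/ftp/ready        # folder the import tool actually watches

        for f in "$WATCH_DIR"/*.NEF "$WATCH_DIR"/*.JPG; do
            [ -e "$f" ] || continue
            size1=$(stat -c %s "$f")
            sleep 5
            size2=$(stat -c %s "$f")
            if [ "$size1" -gt 0 ] && [ "$size1" -eq "$size2" ]; then
                mv "$f" "$READY_DIR"/   # stable size: safe to hand to the importer
            fi
        done

    Some FTP daemons can also upload to a temporary name and rename on completion (ProFTPD's HiddenStores option, for instance), which solves this server-side; whether the Windows FTP servers used here offer an equivalent option is worth checking in their documentation.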

  • Open an excel file using COM and save it as .xml file

    - by chupinette
    Hi. I'm trying the following code:

        <?php
        $workbook = "D:\b2\\test.XLS";
        $sheet = "Sheet1";

        #Instantiate the spreadsheet component.
        $ex = new COM("Excel.sheet") or Die ("Did not connect");

        #Get the application name and version
        print "Application name:{$ex->Application->value}<BR>";
        print "Loaded version: {$ex->Application->version}<BR>";

        #Open the workbook that we want to use.
        $wkb = $ex->application->Workbooks->Open($workbook) or Die ("Did not open");

        #Create a copy of the workbook, so the original workbook will be preserved.
        $ex->Application->ActiveWorkbook->SaveAs("D:\b2\Ourtest.xml");
        #$ex->Application->Visible = 1; #Uncomment to make Excel visible.

        #Optionally, save the modified workbook
        $ex->Application->ActiveWorkbook->SaveAs("D:\Ourtest.xml");

        #Close all workbooks without questioning
        $ex->application->ActiveWorkbook->Close("False");
        unset ($ex);
        ?>

    This actually works and creates the Ourtest.xml file, but the file contains characters like ÐÏࡱá þÿ þÿÿÿ. I have also tried SaveAs("D:\Ourtest.pdf"), and it says the file has been corrupted or incorrectly decoded. Can anyone help me, please? Thanks.

  • Replace delimited block of text in file with the contents of another file

    - by rmarimon
    I need to write a simple script to replace a block of text in a configuration file with the contents of another file. Let's assume we have the following simplified files:

    server.xml:

        <?xml version='1.0' encoding='UTF-8'?>
        <Server port="8005" shutdown="SHUTDOWN">
          <Service name="Catalina">
            <Connector port="80" protocol="HTTP/1.1"/>
            <Engine name="Catalina" defaultHost="localhost">
              <!-- BEGIN realm -->
              <sometags/>
              <sometags/>
              <!-- END realm -->
              <Host name="localhost" appBase="webapps"/>
            </Engine>
          </Service>
        </Server>

    realm.xml:

        <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
               resourceName="UserDatabase"/>

    I want to run a script and have realm.xml replace the contents between the <!-- BEGIN realm --> and <!-- END realm --> lines. If realm.xml changes, then whenever the script is run again it will replace those lines again with the new contents of realm.xml. This is intended to be run from /etc/init.d/tomcat on startup of the service, on multiple installations where the realm is going to be different. I'm not sure how I can do this simply with awk or sed.
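    A minimal awk sketch of the marker-replacement idea (treat it as a starting point: it assumes the BEGIN/END comments each appear exactly once and writes to a temporary file before swapping it in):

        #!/bin/sh
        # Replace everything between the realm markers with the contents of realm.xml
        awk '
            /<!-- BEGIN realm -->/ {
                print                                   # keep the BEGIN marker
                while ((getline line < "realm.xml") > 0) print line
                close("realm.xml")
                skip = 1; next
            }
            /<!-- END realm -->/ { skip = 0 }           # keep the END marker, resume copying
            !skip
        ' server.xml > server.xml.new && mv server.xml.new server.xml

    Because the old block is discarded and re-inserted on every run, the script is idempotent, which suits calling it from /etc/init.d/tomcat each time the service starts.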

  • PHP: What is an efficient way to parse a text file containing very long lines?

    - by Shaun
    I'm working on a parser in php which is designed to extract MySQL records out of a text file. A particular line might begin with a string corresponding to which table the records (rows) need to be inserted into, followed by the records themselves. The records are delimited by a backslash and the fields (columns) are separated by commas. For the sake of simplicity, let's assume that we have a table representing people in our database, with fields being First Name, Last Name, and Occupation. Thus, one line of the file might be as follows [People] = "\Han,Solo,Smuggler\Luke,Skywalker,Jedi..." Where the ellipses (...) could be additional people. One straightforward approach might be to use fgets() to extract a line from the file, and use preg_match() to extract the table name, records, and fields from that line. However, let's suppose that we have an awful lot of Star Wars characters to track. So many, in fact, that this line ends up being 200,000+ characters/bytes long. In such a case, taking the above approach to extract the database information seems a bit inefficient. You have to first read hundreds of thousands of characters into memory, then read back over those same characters to find regex matches. Is there a way, similar to the Java String next(String pattern) method of the Scanner class constructed using a file, that allows you to match patterns in-line while scanning through the file? The idea is that you don't have to scan through the same text twice (to read it from the file into a string, and then to match patterns) or store the text redundantly in memory (in both the file line string and the matched patterns). Would this even yield a significant increase in performance? It's hard to tell exactly what PHP or Java are doing behind the scenes.

  • XP, how to apply security to files, now have simple file sharing and can't access some files from other machines ?

    - by Jules
    For a month or two now I have been using simple file sharing; for several months before that I didn't, and before that I had simple file sharing turned on. So at the moment I don't have a Security tab (on files or folders), nor any sharing permission settings there. As an example, from another machine I can access files from 2007, but not files from the summer of last year in the same folder. I can access all files on the local machine itself. So I think I just need to re-apply security or permissions somehow? What should I do?

  • How can I edit local security policy from a batch file?

    - by Stephen Jennings
    I am trying to write a utility as a batch file that, among other things, adds a user to the "Deny logon locally" local security policy. This batch file will be used on hundreds of independent computers (not on a domain and aren't even on the same network). I assumed one of the following were my options, but perhaps there's one I haven't thought of. A command line utility similar to net.exe which can modify local security policy. A VBScript sample to do the same. Write my own using some WMI or Win32 calls. I'd rather not do this one if I don't have to.

  • Increasing file descriptor limit on Debian does not work! Help!

    - by Aco
    I am running Debian 6 and I am trying to increase the file descriptor limit but it does not want to work. This is what I have done: I edited /etc/sysctl.conf by adding fs.file-max = 64000 at the end and applied the changes using sysctl -p. I then edited /etc/security/limits.conf and added the following lines: * soft nofile 64000 and * hard nofile 64000. Now when I execute ulimit -Hn and ulimit -Sn I still see 1024. I rebooted the server and I still get the same result. What have I failed to do?
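    On Debian the values in /etc/security/limits.conf only take effect if PAM loads pam_limits for the session, which is the piece most often missing in this situation; a sketch of the check and fix (fs.file-max is the system-wide ceiling, while nofile is the per-process limit that ulimit reports):

        # Make sure pam_limits is enabled for interactive logins
        grep pam_limits /etc/pam.d/common-session || \
            echo 'session required pam_limits.so' | sudo tee -a /etc/pam.d/common-session

        # Non-interactive paths (su, cron, init scripts) use a separate file on Debian 6
        grep pam_limits /etc/pam.d/common-session-noninteractive

        # Log out and back in (or reboot), then re-check
        ulimit -Sn
        ulimit -Hn
        cat /proc/sys/fs/file-max

    Daemons started from init scripts do not go through PAM at all; for those, the limit is usually raised with a "ulimit -n" line in the init script itself.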

  • Plesk file permissions - Apache/PHP conflicting with user accounts.

    - by hfidgen
    Hiya, I'm building a Drupal site which performs various automatic disk operations using the apache user (id=40). The problem is that the site was set up on a subdomain belonging to user ID 10001 (ie my main FTP account) so the filesystem belongs to that user ID. So I keep getting errors like this: warning: move_uploaded_file() [function.move-uploaded-file]: SAFE MODE Restriction in effect. The script whose uid is 10001 is not allowed to access /var/www/vhosts/domain.com/httpdocs/sites/default/files/images/user owned by uid 48 in /var/www/vhosts/domain.com/httpdocs/includes/file.inc on line 579. I've tried changing the apache group in httpd.conf to apache:psacln, psacln being the default group for all web users but that's not helped. The situation now is: ..../files/images/ = 777 and chown = ftplogin:psacln ..../files/images/user = 775 and chown = apache:psacln ..../files/tmp = 777 and chown = ftplogin:psacln So apparently uid 40 and 10001 both have permissions to write to any of the 3 directories involved, but still can't. Am i missing something here? Can anyone help? Thanks!
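    The message itself comes from PHP's safe_mode owner check (script owner uid 10001 versus file owner uid 48) rather than from filesystem permissions, so the usual options are to make the owners match or to switch safe_mode off for this vhost. A hedged sketch; the Plesk paths and the websrvmng command are assumptions based on a typical Plesk 8/9 layout and vary between versions:

        # Option 1: hand the upload directories back to the FTP user so the uids match
        chown -R ftplogin:psacln /var/www/vhosts/domain.com/httpdocs/sites/default/files

        # Option 2: disable safe_mode for this vhost only.
        # Create /var/www/vhosts/domain.com/conf/vhost.conf containing:
        #   <Directory /var/www/vhosts/domain.com/httpdocs>
        #       php_admin_flag safe_mode off
        #   </Directory>
        # then have Plesk regenerate the Apache configuration:
        /usr/local/psa/admin/sbin/websrvmng --reconfigure-vhost --vhost-name=domain.com

    safe_mode was deprecated in PHP 5.3 and removed in 5.4, so turning it off (and relying on correct ownership and permissions instead) is the long-term direction in any case.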

  • Puppet Directory and File ownership ignored

    - by Phil Sturgeon
    Puppet seems to be lying to me, which is not very nice. I am trying to set some files and directories included in /vagrant/src to be 666 and 777, and set the ownership group to the correct Apache user (using the PuppetLabs Apache module). Output from Puppet says yes:

        [default] Running provisioner: Vagrant::Provisioners::Puppet...
        [default] Running Puppet with /tmp/vagrant-puppet/manifests/default.pp...
        stdin: is not a tty
        No LSB modules are available.
        warning: require is a metaparam; this value will inherit to all contained resources
        warning: notify is a metaparam; this value will inherit to all contained resources
        notice: /Stage[main]//File[/vagrant/src/addons/]/owner: owner changed 'vagrant' to 'www-data'
        notice: /Stage[main]//File[/vagrant/src/addons/]/group: group changed 'vagrant' to 'www-data'
        notice: /Stage[main]//File[/vagrant/src/addons/]/mode: mode changed '0755' to '0777'
        notice: /Stage[main]//Package[curl]/ensure: ensure changed 'purged' to 'present'
        notice: /Stage[main]//File[/vagrant/src/system/cms/config/]/owner: owner changed 'vagrant' to 'www-data'
        notice: /Stage[main]//File[/vagrant/src/system/cms/config/]/group: group changed 'vagrant' to 'www-data'
        notice: /Stage[main]//File[/vagrant/src/system/cms/config/]/mode: mode changed '0755' to '0777'
        notice: /Stage[main]//File[/vagrant/src/system/cms/config/config.php]/owner: owner changed 'vagrant' to 'www-data'
        notice: /Stage[main]//File[/vagrant/src/system/cms/config/config.php]/group: group changed 'vagrant' to 'www-data'
        notice: /Stage[main]//File[/vagrant/src/system/cms/cache/]/owner: owner changed 'vagrant' to 'www-data'
        notice: /Stage[main]//File[/vagrant/src/system/cms/cache/]/group: group changed 'vagrant' to 'www-data'
        notice: /Stage[main]//File[/vagrant/src/system/cms/cache/]/mode: mode changed '0755' to '0777'
        notice: /Stage[main]//File[/vagrant/src/uploads/]/owner: owner changed 'vagrant' to 'www-data'
        notice: /Stage[main]//File[/vagrant/src/uploads/]/group: group changed 'vagrant' to 'www-data'
        notice: /Stage[main]//File[/vagrant/src/uploads/]/mode: mode changed '0755' to '0777'
        notice: /Stage[main]/Apache/Service[httpd]/ensure: ensure changed 'stopped' to 'running'
        notice: /Stage[main]//File[/vagrant/src/assets/cache/]/owner: owner changed 'vagrant' to 'www-data'
        notice: /Stage[main]//File[/vagrant/src/assets/cache/]/group: group changed 'vagrant' to 'www-data'
        notice: /Stage[main]//File[/vagrant/src/assets/cache/]/mode: mode changed '0755' to '0777'
        notice: Finished catalog run in 2.29 seconds

    Output from ls -lah says no:

        $ ls -lah /vagrant/src/
        total 36K
        drwxr-xr-x 1 vagrant vagrant  510 2012-07-03 00:11 .
        drwxr-xr-x 1 vagrant vagrant  340 2012-07-03 08:08 ..
        drwxr-xr-x 1 vagrant vagrant  136 2012-07-03 00:11 addons
        drwxr-xr-x 1 vagrant vagrant  102 2012-07-03 00:11 assets
        drwxr-xr-x 1 vagrant vagrant  510 2012-07-03 07:45 .git
        -rw-r--r-- 1 vagrant vagrant 1.3K 2012-07-03 00:11 .gitignore
        -rwxr-xr-x 1 vagrant vagrant 1.4K 2012-07-03 00:11 .htaccess
        -rwxr-xr-x 1 vagrant vagrant 8.8K 2012-07-03 00:11 index.php
        drwxr-xr-x 1 vagrant vagrant  442 2012-07-03 00:11 installer
        -rwxr-xr-x 1 vagrant vagrant 2.8K 2012-07-03 00:11 LICENSE
        -rw-r--r-- 1 vagrant vagrant 1.1K 2012-07-03 00:11 phpdoc.dist.xml
        -rw-r--r-- 1 vagrant vagrant 3.3K 2012-07-03 00:11 README.md
        drwxr-xr-x 1 vagrant vagrant  204 2012-07-03 00:11 system
        -rw-r--r-- 1 vagrant vagrant   42 2012-07-03 00:11 .travis.yml
        drwxr-xr-x 1 vagrant vagrant  102 2012-07-03 00:11 uploads

    What's up with that? My entire config can be found here.

  • How to use iTunes USB File Transfer to copy files from PC to Apple iPad, e.g. PDF files for viewer a

    - by Chris W. Rea
    I'm interested in reading PDF-format ebooks on my Apple iPad. I have half a gig of PDFs I want to transfer to it, from my PC. I'm familiar already with loading EPUB-format titles through iBooks – unfortunately, iBooks doesn't read PDFs so I am looking at using a third-party application. I know many such third-party media viewer applications for the iPad support download from web or email, but that's a hassle. I've heard iTunes 9.1 added support for USB File Transfer, specifically for iPad devices. How does USB File Transfer work in iTunes, for transferring files from my PC to my iPad? Please provide example steps. Moderators: Please remember the FAQ's "except insofar as they interface with your computer." ;-)

  • Windows 2003 Server on a domain, XP client PCs on a workgroup - file share without authentication?

    - by Zach
    I have a windows 2003 server on a domain and client PCs running XP on a workgroup. I have created a file share on the server that should be accessible by the client PCs. I even set the security and sharing to 'Everyone' just to test. When I try to access the file share from any of the XP machines, I get an authentication prompt that displays asking for credentials, even though 'Everyone' has full control currently (just for testing purposes). Why is it asking to authenticate? I need it to where it doesn't ask to authenticate. I also made sure passwords were set on all XP machines since I found this could be one possible issue and they all were. Any ideas? Thanks!

  • What is the simplest and fastest way to transfer large file through a Windows network?

    - by Sake
    I have a Windows 2000 Server machine running MS SQL Server that stores over 20 GB of data. The database is backed up every day to the second hard drive. I want to transfer those backup files to another computer, to build another test server and to practice recovery (the backup has never actually been restored in almost 5 years; don't tell my boss about that!). I have trouble transferring that huge file over the network. I've tried a plain network copy, an Apache download, and FTP. Every method I tried ended up failing when the amount of data transferred reached 2 GB. The last time I successfully transferred the file, it was via a USB-attached external hard drive, but I want to perform this task routinely and preferably automatically. I wonder what the most pragmatic approach is for this situation?
