Search Results

Search found 77259 results on 3091 pages for 'file writing'.


  • How can Standard User change file associations in Windows 2000?

    - by Gary M. Mugford
    One of my clients is still running a Win2K server with a host of Win2K workstations, and no net admin, due to the downturn of the economy over the years. I'm sort of helping out. Out of my depth, but I am a loyal foot soldier. A problem I encounter rather too often is a user double-clicks on a file in Explorer and then either gets no action, or the wrong program runs. It's a case of a missing or out-of-date file association. The current cure is to temporarily upgrade the user from Standard to Power, do the FA switch and then change back. As Winnie would whine, 'Oh, bother!' At any rate, I thought I'd ask here. Is there a method/program to run, without the rigamarole, FROM the Standard User's account on the workstation to edit/add a file association? I assume the program route would involve RunAs. I 'believe' most of the workstations run the RunAs service, but I could be wrong. I understand that's required, if there is to be a solution. Any help accepted with thanks. GM NOTE: Seems wassociate from http://www.xs4all.nl/~wstudios/Associate/index.html can resolve the issue.

    Read the article

  • Writing my first bash script (line 2: testscript: No such file or directory)

    - by skyeagle
    I am attempting to write my first shell script (I'm running Ubuntu 10.x). This is what my 'script' looks like:

      cd /path/to/directory/cronscripts
      . testscript

    I put the following debug statements (after the cd) in the script above to make sure everything was ok:

      pwd
      echo `ls -lhrt`

    and it displayed all the files in the directory. So I don't understand the error message I am getting. I have the following two questions: Why am I getting the above error message, and how do I fix it? My script seems to run fine without a shebang in the first line - in fact, when I try either of the following:

      #!/usr/bin/bash
      #!/usr/bin/bash env

    I get the following error message:

      /usr/bin/bash: bad interpreter: No such file or directory

    What is the purpose of the shebang statement, and why do I get the above error when I try to include it in my script / how do I fix it, so I can add the shebang? (Is it necessary? It seems most script examples have the shebang, but I'm not sure if it is a must, as the script runs without it.)
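    For reference, a minimal sketch of how such a script is usually set up on Ubuntu (the directory path is a placeholder, and it assumes bash lives at /bin/bash, which `which bash` can confirm):

      #!/bin/bash
      # Stop immediately if the cd fails instead of running the rest in the wrong place
      cd /path/to/directory/cronscripts || exit 1
      pwd          # confirm the working directory
      ls -lhrt     # long listing, sorted by modification time (reversed)

    Saved as e.g. myscript, made executable with chmod +x myscript, and run as ./myscript. The shebang only matters when the script is executed directly like that; the "bad interpreter" error appears because Ubuntu installs bash at /bin/bash, not /usr/bin/bash, so the kernel cannot find the interpreter named on the first line.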

    Read the article

  • Include all php files in one file and include that file in every page if we're using hiphop?

    - by Hasan Khan
    I understand that in normal PHP, when we include a file we're merging its source into the script, so the page would take longer to be parsed/processed. But if we're using HipHop, shouldn't it be OK to just create one single PHP file that includes every file (each containing some class), and have every page which needs those classes (each in its own file) include just that one PHP file? Would this be OK in the presence of HipHop?

    Read the article

  • Vantec NexStar NAS Enclosure - Writing large files

    - by peter
    Hi, I have one of these 'Vantec NexStar LX - NST-475LX-BK' drive enclosures. It is a NAS drive. When I write a file to the device using eSATA or an SMB share, I cannot write files over 2GB. I think this is because the drive is formatted with FAT32. But when I access the device using FTP it doesn't matter; I can write files of any size. E.g. I wrote one on there last night which was 30GB. Does this make any sense? Why? I guess the most important thing for me is data integrity.

    Read the article

  • Can see samba shares but not access them

    - by nitefrog
    For the life of me I cannot figure this one out. I have Samba installed and set up on the Ubuntu box, and on the Win7 box I CAN SEE all the shares I created. I created two users on Ubuntu that map to the users in Windows. On Ubuntu they are both admins; on Windows, user A is an admin and user B is a power user. User A can see both shares and access them, but user B can see everything yet can only access the homes directory; the other directory throws an error. I have two drives in Ubuntu and this is the smb.conf file (I am new to Samba):

      [global]
         workgroup = WORKGROUP
         server string = %h server (Samba, Ubuntu)
         wins support = no
         dns proxy = yes
         name resolve order = lmhosts host wins bcast
         log file = /var/log/samba/log.%m
         max log size = 1000
         syslog = 0
         panic action = /usr/share/samba/panic-action %d
         security = user
         encrypt passwords = true
         passdb backend = tdbsam
         obey pam restrictions = yes
         unix password sync = yes
         passwd program = /usr/bin/passwd %u
         passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
         pam password change = yes
         map to guest = bad user
      ;  usershare max shares = 100
         usershare allow guests = yes

    And here is the share section. Both user A & B can access this from Windows, no problems:

      [homes]
         comment = Home Directories
         browseable = no
         writable = yes

    Both user A & B can see this share, but only user A can access it; user B gets an error thrown:

      [stuff]
         comment = Unixmen File Server
         path = /media/data/appinstall/
         browseable = yes
      ;  writable = no
         read only = yes
         hosts allow =

    The permissions for /media/data/appinstall/ are as follows:

      appInstall properties:
         share name: stuff
         Allow others to create and delete files in this folder is checked
         Guest access (for people without a user account) is checked
      permissions:
         Owner: user A
            Folder Access: Create and delete files
            File Access: ---
         Group: user A
            Folder Access: Create and delete files
            File Access: ---
         Others
            Folder Access: Create and delete files
            File Access: ---

    I am at a loss and need to get this to work. Any ideas? The goal is to have a setup like this: 3 users on Windows machines. On the data drive each user will have their own personal folder that only they can access, plus another folder where 2 of the users have read-only access and one user has full access. I had this setup before on Windows, but after what happened I am NEVER going back to Windows, so Unix here I am to stay! I am really stuck. I am running Ubuntu 11. I could reformat again and put on version 10 if that would make life easier. I have been dealing with this since Wed. 3pm. Thanks.
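    A few checks that often narrow this kind of thing down (the user name below is a placeholder; testparm and pdbedit ship with Samba):

      testparm -s                                  # confirm smb.conf parses and show the effective share settings
      ls -ld /media/data /media/data/appinstall    # user B needs execute (x) on every parent directory in the path
      sudo -u userB ls /media/data/appinstall      # does the Unix side even allow that user in?
      pdbedit -L                                   # confirm both users exist in Samba's password database

    If the Unix permissions turn out to be the blocker, one possible way to open the share to a group (the 'users' group is just an example):

      sudo chgrp -R users /media/data/appinstall
      sudo chmod -R g+rx /media/data/appinstall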

    Read the article

  • File Server Resource Manager attempting to access quota.xml on System Reserved partition?

    - by pmellett
    I've got a new install of Server 2008 R2 that is designed to be our quota server for user home directories and shared areas. I installed FSRM and set up a few quotas to try out. They worked fine, but at some point over the weekend it stopped loading the FSRM console quota screen and gives the following error, with Event ID 8228: File Server Resource Manager was unable to access the following file or volume: '\\?\Volume{73649de6-7f04-11e1-a344-005056b10310}\System Volume Information\SRM\quota.xml'. This file or volume might be locked by another application right now, or you might need to give Local System access to it. I have removed and reinstalled the FSRM Role Service, cleared the \System Volume Information\SRM folder on each volume, and am on the verge of just starting again. I'd rather not, since then I have to go through and set up all my NTFS permissions again. Since it looks like the service is trying to access the System Reserved partition, which I assume won't have any files it could possibly need, how do I remove the System Reserved partition as a volume to be monitored for the quota service? (I am not aware of configuring that to be the case originally, though!)

    Read the article

  • Welcome to ubiquitous file sharing (December 08, 2009)

    - by user12612012
    The core of any file server is its file system, and ZFS provides the foundation on which we have built our ubiquitous file sharing and single access control model. ZFS has a rich, Windows and NFSv4 compatible, ACL implementation (ZFS only uses ACLs), it understands both UNIX IDs and Windows SIDs, and it is integrated with the identity mapping service; it knows when a UNIX/NIS user and a Windows user are equivalent, and similarly for groups. We have a single access control architecture, regardless of whether you are accessing the system via NFS or SMB/CIFS.

    The NFS and SMB protocol services are also integrated with the identity mapping service, and shares are not restricted to UNIX permissions or Windows permissions. All access control is performed by ZFS, the system can always share file systems simultaneously over both protocols, and our model is native access to any share from either protocol.

    Modal architectures have unnecessary restrictions, confusing rules, administrative overhead and weird deployments to try to make them work; they exist as a compromise, not because they offer a benefit. Having some shares that only support UNIX permissions, others that only support ACLs and some that support both in a quirky way really doesn't seem like the sort of thing you'd want in a multi-protocol file server. Perhaps because the server has been built on a file system that was designed for UNIX permissions, possibly with ACL support bolted on as an add-on afterthought, or because the protocol services are not truly integrated with the operating system, it may not be capable of supporting a single integrated model.

    With a single, integrated sharing and access control model:

    - If you connect from Windows or another SMB/CIFS client:
      - The system creates a credential containing both your Windows identity and your UNIX/NIS identity. The credential includes UNIX/NIS IDs and SIDs, and UNIX/NIS groups and Windows groups.
      - If your Windows identity is mapped to an ephemeral ID, files created by you will be owned by your Windows identity (ZFS understands both UNIX IDs and Windows SIDs).
      - If your Windows identity is mapped to a real UNIX/NIS UID, files created by you will be owned by your UNIX/NIS identity.
      - If you access a file that you previously created from UNIX, the system will map your UNIX identity to your Windows identity and recognize that you are the owner. Identity mapping also supports access checking if you are being assessed for access via the ACL.

    - If you connect via NFS (typically from a UNIX client):
      - The system creates a credential containing your UNIX/NIS identity (including groups).
      - Files you create will be owned by your UNIX/NIS identity.
      - If you access a file that you previously created from Windows and the file is owned by your UID, no mapping is required. Otherwise the system will map your Windows identity to your UNIX/NIS identity and recognize that you are the owner. Again, mapping is fully supported during ACL processing.

    The NFS, SMB/CIFS and ZFS services all work cooperatively to ensure that your UNIX identity and your Windows identity are equivalent when you access the system. This, along with the single ACL-based access control implementation, results in a system that provides that elusive ubiquitous file sharing experience.
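    To make the model concrete, a rough sketch of what this looks like from the Solaris command line (the pool, dataset and user names are made-up examples):

      # One dataset, shared over both protocols at once
      zfs create tank/projects
      zfs set sharesmb=on tank/projects
      zfs set sharenfs=on tank/projects

      # Tell the identity mapping service that a Windows user and a UNIX user
      # are the same person, so ownership looks right from both sides
      idmap add winuser:alice@example.com unixuser:alice
      idmap list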

    Read the article

  • Why do we need a format for binary executable files

    - by user3671483
    When binary files (i.e. executables) are saved they usually have a format (e.g. ELF or a.out) where we have a header containing pointers to where data or code is stored inside the file. But why don't we store binary files directly in the form of a sequence of machine instructions? Why do we need to store data separately from the code? Secondly, when the assembler creates a binary file, is the file in one of the above formats?
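    For a concrete look at what such a header actually carries, readelf (from binutils) will dump it from any ELF binary; /bin/ls is just a convenient example here:

      readelf -h /bin/ls      # ELF header: file type, target architecture, entry point address
      readelf -S /bin/ls      # section headers: where .text (code), .data and .bss live in the file
      readelf -l /bin/ls      # program headers: what the loader maps into memory, and with which permissions

    The loader needs exactly this kind of information (where to map things, where execution starts, which regions are writable), which a bare stream of machine instructions could not provide.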

    Read the article

  • Multiple users writing to one Samba mount point in OSX

    - by Sam
    I have an OSX box containing a script which writes a unique file to a Samba share. The first part of the script mounts the share. On the machine are 2 users - UserA and UserB. Each needs to run this script at any given time; however, only the user who mounted the share is able to write to it. I really need both users to have rwx access. Here is what I have tried: Mounting then chmod'ing the mountpoint (no effect - overruled by the Samba server?); chmod'ing the mountpoint then mounting (same as above); sudo mount_smbfs. Both users have admin privileges. Ideally a solution would be executable by one of the users (contained in the script) and not rely on mounting at machine boot time. Any ideas appreciated, thanks!
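    One thing worth trying as a sketch (the server, share and mount point names are placeholders): mount_smbfs has -f and -d options that set the file and directory modes the mount presents locally, which may let both local users write regardless of who performed the mount, provided the server-side account permits it:

      mkdir -p /Volumes/shared
      mount_smbfs -f 0775 -d 0775 //someuser@fileserver/share /Volumes/shared
      ls -ld /Volumes/shared    # check the modes the mount actually presents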

    Read the article

  • How often should saving to disk occur in an automatically saving text editor?

    - by lelandmiller
    I am developing a simple text editor and would like the application to save the text automatically. In other words, the user would never have to press a save button. I have seen other applications that do this, and was wondering how often it is safe to write files to disk. From a user experience standpoint, it seems that the more frequently this happens the better, but I am worried about performance and possible disk wear (especially on writes to SSDs). It seems like the operating system's disk caching might help avoid these problems, but I also don't know if it's safe to rely on that for an application like this. I was planning on writing the whole document to disk at each save, but this just seems terribly inefficient if the OS ends up writing it to disk too frequently, while relying on a write at program unload may lose data in the case of a crash. Does anyone have any experience dealing with this that might be able to help?

    Read the article

  • Strange File-Server I/O Spikes - What Is Causing This?

    - by CruftRemover
    I am currently having a problem with a small Linux server that is providing file-sharing services to four Windows 7 32-bit clients. The server is an AMD PhenomX3 with two Western Digital 10EADS (1TB) drives, attached to a Gigabyte GA-MA770T-UD3 mainboard and running Ubuntu Server 10.04.1 LTS. The client machines are taking an extremely long time to access/transfer data on the file server. Applications often become non-responsive while trying to open files located remotely, or one program attempting to open a file but having to wait will prevent other software from accessing network resources at all. Other examples include one image taking 20 seconds or more to open, and in one instance a user waited 110 seconds for Microsoft Word 2007 to save a document. I had initially thought the problem was network-related, but this appears not to be the case. All cables and switches have been tested (one cable was replaced) for verification. This was additionally confirmed when closing down all client machines and rebooting the server resulted in the hard-drive light staying on solid during the startup process. For the first 15 minutes during boot, logon and after logging on (with no client machines attached), the system displayed a load average of 4 or higher. Symptoms included waiting several minutes for the logon prompt to appear, and then several minutes for the password prompt to appear after typing in a user name. After logon, it also took upwards of 45 seconds for the 'smartctl' man page to appear after the command 'man smartctl' was issued. After 15 minutes of this behaviour, the load average dropped to around 0.02 and the machine behaved normally. I have also considered that the problem is hard-drive-related, however diagnostic programs reveal no drive problems. Western Digital DLG, Spinrite and SMARTUDM show no abnormal characteristics - the drives are in perfect health as far as the hardware is concerned. I have thus far been completely unable to track down the cause of this problem, so any help is greatly appreciated. Requested Information: Output of 'free' hxxp://pastebin.com/mfsJS8HS (stupid spam filter) The command 'hdparm -d /dev/sda1' reports: HDIO_GET_DMA failed: Inappropriate ioctl for device (the BIOS is set to AHCI - I probably should have mentioned that).
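    A few commands commonly used to tell whether spikes like this come from the disks themselves or from something churning in the background (device names follow the question; iostat needs the sysstat package and iotop its own package):

      iostat -x 5                      # per-device utilisation and await times, sampled every 5 seconds
      iotop -o                         # which processes are actually generating I/O right now
      smartctl -a /dev/sda             # full SMART attribute dump, not just the overall health flag
      dmesg | grep -iE 'ata|error'     # kernel-level link resets, timeouts or read errors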

    Read the article

  • Writing to network share failed

    - by Unreason
    I have Outlook files stored on a network share and accessed by clients directly. Outlook is version 2003, the clients are Windows XP and the server is 2003. The files are quite big, at around 3GB. One of the common problems that happens is that I get 'delayed write failed', and this happens only on these PST files. When this happens I have to run scanpst.exe to fix the PST file. I did not find any entries in the event logs that I could relate to the issue. What would you suggest changing to fix the issue, or where should I look to further diagnose it? EDIT: No loss on ping, and ping times within normal for LAN.

    Read the article

  • searchd under runit continues writing to runit's log

    - by Eugene
    searchd (Sphinx) run file:

      #!/bin/sh
      set -e
      APP_PATH=/srv/application
      TARGET_USER=user
      exec chpst -u $TARGET_USER /usr/bin/searchd --pidfile --nodetach --config $APP_PATH/current/config/production.sphinx.conf

    tail /var/log/sphinx/current gives:

      2014-06-07_18:13:56.87885 precached 9 indexes in 0.497 sec
      2014-06-07_18:13:57.13740 precached 9 indexes in 0.497 sec
      2014-06-07_18:13:57.88113 precached 9 indexes in 0.497 sec
      2014-06-07_18:13:57.89167 precached 9 indexes in 0.497 sec
      2014-06-07_18:13:59.75555 precached 9 indexes in 0.497 sec
      2014-06-07_18:13:59.81554 precached 9 indexes in 0.497 sec
      2014-06-07_18:14:00.33466 precached 9 indexes in 0.497 sec
      ... it continues to write the same line until sv stop sphinx ...

    Everything works fine: searchd starts and responds to queries. But how can I make the logs less repetitive? When I start Sphinx manually it prints the "precached 9 indexes" line just once.

    Read the article

  • How to grep (or find) on cPanel?

    - by San
    How can I search for a specific string (a function name or a variable name) in my files, which are in various directories under the cPanel file manager? I have been using a library directory, and functions from that directory are used in various apps and pages. Now I need to change something in the library file, and for that I need to know the impact on the files which use this library's functions. How do I search / find / grep through the hosted files?
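    If the account has SSH access (or a cPanel terminal), a recursive grep does this directly; the directory and function name below are placeholders:

      # every PHP file under public_html that mentions the function
      grep -rn --include='*.php' 'my_library_function' ~/public_html/

      # or restrict the search to specific directories with find
      find ~/public_html ~/lib -name '*.php' -exec grep -ln 'my_library_function' {} +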

    Read the article

  • How to backup/restore full-disk encryption ubuntu 11.10?

    - by ggc
    How do I backup/restore full-disk encryption on Ubuntu 11.10? I would like to take the RAW encrypted file system and restore it on another computer. Encryption details: the crypt setup was done via the Ubuntu alternate CD installer; the only thing unencrypted is /boot. File systems setup: boot - j, swap - swap, everything else - ext4. Any suggestions? I have considered backing up the file system stripped of encryption, but I would prefer to keep the OS encrypted while transferring. Thanks for any help!
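    One way to get a raw, still-encrypted copy is to image the encrypted partition block-for-block from a live CD/USB; the device names and backup path below are only examples, the destination partition must be at least as large as the source, and the unencrypted /boot partition plus the bootloader still have to be handled separately:

      # with the encrypted volume NOT unlocked or mounted
      sudo dd if=/dev/sda5 of=/mnt/backup/crypt-partition.img bs=4M conv=noerror,sync

      # on the other machine, write the image onto the target partition
      sudo dd if=/mnt/backup/crypt-partition.img of=/dev/sdb5 bs=4M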

    Read the article

  • Writing directory: permission denied even though dir seems to be chmodded correctly

    - by Aron Rotteveel
    I am having some trouble creating files in a directory on my Ubuntu machine: I added myself to the www-data group in order for me to easily edit stuff in my /var/www dir on my development machine. stat /var/www shows the following:

        File: ‘/var/www’
        Size: 4096        Blocks: 8          IO Block: 4096   map
      Device: 808h/2056d  Inode: 142853      Links: 3
      Access: (0775/drwxrwxr-x)  Uid: (   33/www-data)   Gid: (   33/www-data)
      Access: 2010-12-30 16:03:18.563998000 +0100
      Modify: 2010-12-30 16:02:52.663998000 +0100
      Change: 2010-12-30 16:03:13.111998001 +0100

    Still, it is impossible for me to create anything below /var/www (the only way to make it work is to chmod it to 777). What am I missing?
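    A detail that often explains exactly this symptom: group membership added with usermod/adduser only takes effect in sessions started after the change, so the shell (or desktop session) doing the writing may not actually carry the www-data group yet. A quick check, using the names from the question:

      id                          # does the *current* session list www-data among the groups?
      groups $USER                # what the account is configured with (visible after re-login)
      newgrp www-data             # start a subshell that picks up the group without logging out
      touch /var/www/test && rm /var/www/test    # quick write test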

    Read the article

  • File copying utility like rsync with error handling like ddrescue, for data recovery from a hard drive with bad sectors or hardware failure

    - by purefusion
    I have a hard drive with either bad blocks or sectors that are failing to read due to potential mechanical issues, such as a bad disk head, bad motor, or some other issue that is causing the hard drive to read data excruciatingly slowly and with lots of read errors. I'm seeing an average of 50 KB/sec, with some reads dropping below 10 KB/sec, and frequently it gets stuck on a file or sector altogether, usually for quite a long time—from 2-10 minutes or more (when using rsync, before it times out). Speed seems to vary wildly, and it gets stuck on files a lot, and when it finally gets "unstuck" it only seems to last for a short burst before it gets stuck again. The drive is also very quiet with only an occasional sound of files copying (usually when it gets stuck/unstuck for a brief time, before getting stuck again). Thus, there are none of those evil sounds that are normally associated with HDD death. Someone suggested that the problems sounded like they might be caused by a misaligned disk head, which requires a lot of re-reads before it finally reads data with success. Sounds plausible, but I digress... Anyway, the problem with rsync is that it seems to have no decent error handling support. Obviously, it wasn't meant for use in recovering data from failing hard drives, but all the so-called "data recovery" utilities out there that are meant for such use usually focus on recovery of deleted files or messed up partitions, rather than copying files off dying hard drives. Deleted file recovery is not what I need, obviously, so perhaps you can understand my disappointment in not being able to find what I'm after yet. Naturally, this is where you'd probably say "You should use ddrescue!" Well, that's all fine and dandy, but I've already got most of the data backed up, so I just want to recover certain files. I'm not concerned with trying to recover a full partition block-by-block as ddrescue does. I am only interested in rescuing just specific files and directories. Ideally, what I'd like is some sort of cross between rsync and ddrescue: something that lets me specify source and destination as directories of normal files like rsync (rather than two full partitions as ddrescue requires), with a way to skip files with errors in an initial run, and then allows me to attempt recovery of those files with errors in a later run (with a slightly altered command, of course), perhaps even offering an option to specify the number of retry attempts ...just like how ddrescue works with blocks, only I want a utility that works with specific files/directories like rsync does. So am I daydreaming here, or does something out there exist that can do this? Or, maybe even a way to make rsync or ddrescue work in such a way? I'm really open to whatever solutions might work, so long as they let me choose which files I want to "rescue", and can skip files with errors in the initial run, and try/retry those errors again later. So far I've tried rsync with the following options, but it often gets stuck on a file for longer than the timeout, and ideally I'd just like it to move on to the next file and come back later to the files it gets stuck on. I don't think that's possible though. Anyway, here's what I've been using up till now: rsync -avP --stats --block-size=512 --timeout=600 /path/to/source/* /path/to/destination/
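    For what it's worth, GNU ddrescue is not limited to whole partitions; it will copy an ordinary file and keep a map file of the unreadable regions, so one rough way to get rsync-like, per-file behaviour is to loop it over the wanted files. The paths, directory names and retry counts below are only examples:

      #!/bin/bash
      SRC=/mnt/dying-disk
      DST=/mnt/rescue

      # First pass: -n skips the slow scraping/retry phases so stuck files are abandoned quickly.
      find "$SRC/photos" "$SRC/documents" -type f | while read -r f; do
          rel=${f#$SRC/}
          mkdir -p "$DST/$(dirname "$rel")"
          ddrescue -n "$f" "$DST/$rel" "$DST/$rel.map"
          # Later pass: re-run with retries; the .map file makes ddrescue re-read only the bad areas:
          # ddrescue -r3 "$f" "$DST/$rel" "$DST/$rel.map"
      done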

    Read the article

  • Using PHP to store results of a post request

    - by Paul M
    I'm currently working with an API which requires we send our collection details in XML to their server using a POST request. Nothing major there, but it's not working, so I want to output the sent XML to a txt file so I can look at what's actually being sent. Instead of posting to the API I'm posting to a document called target, but the XML it records seems to be really wrong. Here is my target script; note that the posting script posts 3 items, so the file being written should have details of each post request one after the other.

      <?php
      error_reporting(E_ALL);
      ini_set('display_errors', 1);
      # get the request data...
      $payload = '';
      $fp = fopen('php://input', 'r');
      $output_file = fopen('output.txt', 'w');
      while (!feof($fp)) {
          $payload .= fgets($fp);
          fwrite($output_file, $payload);
      }
      fclose($fp);
      fclose($output_file);
      ?>

    I also tried the following, but this just recorded the last post request, so only 1 collection item was recorded in the txt file instead of all 3:

      $output_file = fopen('output.txt', 'w');
      while (!feof($fp)) {
          $payload .= fgets($fp);
      }
      fwrite($output_file, $payload);
      fclose($fp);
      fclose($output_file);

    I know I'm missing something really obvious, but I've been looking at this all morning!

    Read the article

  • Problem When Compressing File using vb.net

    - by Amr Elnashar
    I have a file like "Sample.bak", and when I compress it to "Sample.zip" I lose the file extension inside the zip file; I mean when I open the compressed file I find "Sample" without any extension. I use this code:

      Dim name As String = Path.GetFileName(filePath).Replace(".Bak", "")
      Dim source() As Byte = System.IO.File.ReadAllBytes(filePath)
      Dim compressed() As Byte = ConvertToByteArray(source)
      System.IO.File.WriteAllBytes(destination & name & ".Bak" & ".zip", compressed)

    Or using this code:

      Public Sub cmdCompressFile(ByVal FileName As String)
          'Stream object that reads file contents
          Dim streamObj As Stream = New StreamReader(FileName).BaseStream
          'Allocate space in buffer according to the length of the file read
          Dim buffer(streamObj.Length) As Byte
          'Fill buffer
          streamObj.Read(buffer, 0, buffer.Length)
          streamObj.Close()
          'File Stream object used to change the extension of a file
          Dim compFile As System.IO.FileStream = File.Create(Path.ChangeExtension(FileName, "zip"))
          'GZip object that compresses the file
          Dim zipStreamObj As New GZipStream(compFile, CompressionMode.Compress)
          'Write to the Stream object from the buffer
          zipStreamObj.Write(buffer, 0, buffer.Length)
          zipStreamObj.Close()
      End Sub

    Please, I need to compress the file without losing the file extension inside the compressed file. Thanks.

    Read the article

  • Webmin / Virtualmin running php as www-data, is locked out of viewing .htaccess and writing

    - by Kirill
    I've asked this on the Virtualmin forums, but haven't had any help from there. Recently, "something" happened and it seems that the Apache service has gone a bit weird. What it does: it runs all Apache traffic as www-data and sometimes spawns the php5-cgi process as www-data. This is a problem because all the domain users own their directories and the default permissions don't let www-data write to these folders (file uploads are dead) or read .htaccess (permalinks are broken in WordPress). I've googled this for about a week straight now, tried pretty much everything I could find and achieved nothing. The only thing that I think might actually be the cause of all this is this page: http:// - i.imgur.com/NYW3x.png (got shut down by the spam filter). So I figured if I set it to "default", this might magically start working again, but all it does is "crash" Apache (all websites time out). I figure it's something to do with the "mpm" module or something, but I can't find anything relevant in the settings to modify for it to work. Can someone please point me in the right direction? System info: Webmin version 1.580; Kernel and CPU: Linux 2.6.35.4-rscloud on x86_64; Virtualmin version 3.90.gpl GPL; Ubuntu 10.04 LTS (Lucid). A couple of screenshots of top: http://i.imgur.com/U2DTK.png http://i.imgur.com/sNPKs.png

    Read the article

  • FTP Server upload and filesystem questions

    - by Alex
    I'm a photographer who mainly does event photography. A while ago I bought myself a Nikon WT-4 wireless transmitter, a small device which connects via USB to my Nikon D700 DSLR and then establishes a WiFi connection to an existing WLAN. It can then upload any pictures I take via FTP to an FTP server somewhere in the network. On my laptop I then have a piece of software which checks a given folder on the disk regularly; this software is smart enough to look at the modified-file timestamp, and if that timestamp is less than 10 seconds old, it will not attempt to import it and will skip the file in this iteration of the import scan. The problem I've discovered seems to be inherent to the FTP protocol, as I have the same problem with Windows 7's built-in IIS server as I do with FileZilla FTP server. When the transmitter starts to upload a file, the FTP server will create a small 300-500 KB file with the correct filename on the disk, but then do nothing with the file until it has completely received it via FTP. So it seems to create this small dummy file, buffer the remainder of the FTP upload until it's finished, and then dump the rest of the file into the dummy file, making it the correct size. Problem is, these uploads take about 15-30 seconds depending on reception, but since the folder-watch tool will already try to import any file older than 10 seconds, it will always try to import the small dummy files, which obviously fails as they're not complete yet. Is there any way to 'disable' this behaviour? Ideally I would like my file only to show up once it's been completely uploaded. Or perhaps someone knows another FTP server application (it has to run on Win7) which does not show this behaviour?

    Read the article

  • Open an excel file using COM and save it as .xml file

    - by chupinette
    Hi, I'm trying the following code:

      <?php
      $workbook = "D:\b2\\test.XLS";
      $sheet = "Sheet1";
      #Instantiate the spreadsheet component.
      $ex = new COM("Excel.sheet") or Die("Did not connect");
      #Get the application name and version
      print "Application name: {$ex->Application->value}<BR>";
      print "Loaded version: {$ex->Application->version}<BR>";
      #Open the workbook that we want to use.
      $wkb = $ex->application->Workbooks->Open($workbook) or Die("Did not open");
      #Create a copy of the workbook, so the original workbook will be preserved.
      $ex->Application->ActiveWorkbook->SaveAs("D:\b2\Ourtest.xml");
      #$ex->Application->Visible = 1; #Uncomment to make Excel visible.
      #Optionally, save the modified workbook
      $ex->Application->ActiveWorkbook->SaveAs("D:\Ourtest.xml");
      #Close all workbooks without questioning
      $ex->application->ActiveWorkbook->Close("False");
      unset($ex);
      ?>

    This actually works and creates the Ourtest.xml file, but I'm getting characters like:

      ÐÏࡱá þÿ þÿÿÿ

    I have tried with SaveAs("D:\Ourtest.pdf") and it says the file has been corrupted or incorrectly decoded. Can anyone help me please? Thanks

    Read the article

  • Replace delimited block of text in file with the contents of another file

    - by rmarimon
    I need to write a simple script to replace a block of text in a configuration file with the contents of another file. Let's assume we have the following simplified files:

    server.xml

      <?xml version='1.0' encoding='UTF-8'?>
      <Server port="8005" shutdown="SHUTDOWN">
        <Service name="Catalina">
          <Connector port="80" protocol="HTTP/1.1"/>
          <Engine name="Catalina" defaultHost="localhost">
            <!-- BEGIN realm -->
            <sometags/>
            <sometags/>
            <!-- END realm -->
            <Host name="localhost" appBase="webapps"/>
          </Engine>
        </Service>
      </Server>

    realm.xml

      <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>

    I want to run a script and have realm.xml replace the contents between the <!-- BEGIN realm --> and <!-- END realm --> lines. If realm.xml changes, then whenever the script is run again it will replace the lines again with the new contents of realm.xml. This is intended to be run in /etc/init.d/tomcat on startup of the service, on multiple installations on which the realm is going to be different. I'm not so sure how I can do this simply with awk or sed.
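    One way this is commonly done with awk (a sketch; the file names match the question, and the marker comments are matched literally):

      #!/bin/sh
      # Print everything outside the markers unchanged; after the BEGIN marker,
      # splice in realm.xml and drop the old lines up to (but not including) END.
      awk '
          /<!-- BEGIN realm -->/ {
              print
              while ((getline line < "realm.xml") > 0) print line
              close("realm.xml")
              skip = 1
              next
          }
          /<!-- END realm -->/ { skip = 0 }
          !skip
      ' server.xml > server.xml.new && mv server.xml.new server.xml

    Because it rewrites to a temporary file and moves it into place, re-running the script simply re-splices whatever realm.xml currently contains between the markers.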

    Read the article

  • Accessing objects on one nib file from another nib file

    - by ASN
    I have two nib files, Main.nib and Preferences.nib. I have a class CalendarView.m that inherits from NSView. It has a method for drawing the calendar: - (void)drawCalendar;. In the Main.nib window I have an NSWindow (my main window) which has an NSView item on it for displaying the calendar. The main window has an NSPopUp button that shows a menu when the application runs. The menu has a 'Preferences' menu item which, on clicking, shows a preferences panel (NSPanel); that panel is in the Preferences.nib file. The panel has a color well item. When the application is running, clicking on the color well shows a color panel to choose a color, but I am unable to apply that color to my calendar. I have another class, PreferencesWindowController.m, that shows the preferences panel. It has a method changeColor that takes the selected color from the colors panel and makes changes to the user defaults. I have IBOutlet CalendarView *calView as a member in PreferencesWindowController.h. In the changeColor method I am writing:

      calView = [[CalendarView alloc] init];
      [calView drawCalendar];

    On debugging, the call goes to the drawCalendar method of CalendarView but skips some part of it and goes to the end of the function without redrawing. On restarting the application the color is applied, but I want it to happen while the application is executing, so that there is no need to rerun the application to view the changes.

    Read the article
