Search Results

Search found 3459 results on 139 pages for 'modified'.

  • Changing Network Path of Offline Files

    - by Adam
    Many of our users have their Home folder set as Available Offline. Their Windows 7 laptops will not be back on our network for a few weeks. In the meantime, we're setting up new servers and reorganizing our files, so the network path to the Home folder is going to be completely different.

    Based on some testing I did, when the users return, any files they've created or modified while offline will be gone, and the new Home folder will be there, not set to sync. The offline cache of the old Home folder is still accessible through the Sync Center, but users aren't going to want to dig through that and try to find what's missing. Avoiding this would mean keeping the old server around and moving everyone to the new location in person, so we know for sure they're synced first.

    Is there a less tedious way to avoid this, like a quick registry edit or something that will point the old offline cache to the new location?

  • How to create a large resumable download from a secured location in .NET

    - by Kelvin H
    I need to preface this: I'm not a .NET coder at all. To get partial functionality, I modified a TechNet chunkedfilefetch.aspx script that uses a chunked, streamed method of reading and writing file data, which got me half-way:

        iStream = New System.IO.FileStream(path, System.IO.FileMode.Open, _
            IO.FileAccess.Read, IO.FileShare.Read)
        dataToRead = iStream.Length
        Response.ContentType = "application/octet-stream"
        Response.AddHeader("Content-Length", file.Length.ToString())
        Response.AddHeader("Content-Disposition", "attachment; filename=" & filedownload)

        ' Read and send the file 16,000 bytes at a time.
        While dataToRead > 0
            If Response.IsClientConnected Then
                length = iStream.Read(buffer, 0, 16000)
                Response.OutputStream.Write(buffer, 0, length)
                Response.Flush()
                ReDim buffer(16000) ' Clear the buffer
                dataToRead = dataToRead - length
            Else
                dataToRead = -1 ' Prevent infinite loop if user disconnects
            End If
        End While

    This works great on files up to 2GB and is fully functioning now, but it has one problem: it doesn't allow resume. I took the original code, called it fetch.aspx, and pass an order number through the URL: fetch.aspx?ordernum=xxxxxxx. It then reads the filename/location from the database according to the order number, and chunks it out from a secured location NOT under the webroot.

    I need a way to make this resumable. By the nature of the internet and large files, people always get disconnected and would like to resume where they left off. But every resumable-download article I've read assumes the file is within the webroot, e.g. http://www.devx.com/dotnet/Article/22533/1954 (a great article that works well), but I need to stream from a secured location. I'm not a .NET coder at all; at best I can do a bit of ColdFusion. If anyone could help me modify a handler to do this, I would really appreciate it.

    Requirements:

    - I have a working fetch.aspx script that functions well and uses the above code snippet as a base for the streamed downloading.
    - Download files are large (600MB) and are stored in a secured location outside of the webroot.
    - Users click on fetch.aspx to start the download, and would therefore be clicking it again if it failed.
    - If the extension is .ASPX and the file being sent is an AVI, clicking the link would completely bypass an IHTTP handler mapped to the .AVI extension, so this confuses me.
    - From what I understand, the browser will match the ETag value and file modified date to determine that it is asking about the same file, and then a subsequent Accept-Ranges exchange happens between the browser and IIS. Since this dialog happens with IIS, we need a handler to intercept and respond accordingly, but clicking the link sends the request to an ASPX file, while the handler needs to be on an AVI file. Also confusing me.
    - If there were a way to receive the initial HTTP request headers containing the ETag and Range in the normal .ASPX file, I could read those values and, if they exist, start chunking at that byte value somehow. But I couldn't find a way to get at those request headers; they seem to get lost at the IIS level.
    - OrderNum, which is passed in the URL string, is unique and could be used as the ETag: Response.AddHeader("ETag", request("ordernum"))
    - Files need to be resumable and chunked out due to size.
    - File extensions are .AVI, so a handler could be written around that.
    - IIS 6.0 web server.

    Any help would really be appreciated. I've been reading and reading and downloading code, but none of the examples I've found cover streaming the original file from outside the webroot. Please help me get a handle on these HTTP handlers :)
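    For what it's worth, the resume logic itself doesn't care where the file lives: the client re-sends the request with a Range header, and the server seeks before chunking. A minimal sketch of that idea grafted onto the snippet above (the parsing is simplified and assumes a single "bytes=N-" range; these header lines would supersede the unconditional Content-Length set earlier):

        ' Before the chunking loop: honor a Range header if the client sent one
        Dim rangeHeader As String = Request.Headers("Range")
        Dim startPos As Long = 0

        If Not String.IsNullOrEmpty(rangeHeader) AndAlso rangeHeader.StartsWith("bytes=") Then
            ' e.g. "bytes=1048576-" -> resume at byte 1048576
            startPos = Long.Parse(rangeHeader.Substring(6).Split("-"c)(0))
        End If

        If startPos > 0 AndAlso startPos < iStream.Length Then
            iStream.Seek(startPos, IO.SeekOrigin.Begin)
            dataToRead = iStream.Length - startPos
            Response.StatusCode = 206 ' Partial Content
            Response.AddHeader("Content-Range", _
                String.Format("bytes {0}-{1}/{2}", startPos, iStream.Length - 1, iStream.Length))
        End If
        Response.AddHeader("Accept-Ranges", "bytes")
        Response.AddHeader("Content-Length", (iStream.Length - startPos).ToString())

    Whether the retry actually comes back through fetch.aspx with the same ordernum (and therefore hits this code at all) depends on the client: download managers generally re-request with a Range header, plain browser clicks may simply start over.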

  • Merging paragraphs in MS Word 2007

    - by Rajneesh Jain
    My name is Rajneesh Jain from New Delhi, India. I saw your code on merging and re-formatting paragraphs in MS Word 2007. I am facing a problem of text overflow. The code I used is:

        Sub FixParagraph()
        '
        ' FixParagraph Macro
        '
            Dim selectedText As String
            Dim textLength As Integer

            selectedText = Selection.Text

            ' If no text is selected, this prevents this subroutine from typing
            ' another copy of the character following the cursor into the document
            If Len(selectedText) <= 1 Then
                Exit Sub
            End If

            ' Replace all carriage returns and line feeds in the selected text with spaces
            selectedText = Replace(selectedText, vbCr, " ")
            selectedText = Replace(selectedText, vbLf, " ")

            ' Get rid of repeated spaces by collapsing pairs until the length stops changing
            Do
                textLength = Len(selectedText)
                selectedText = Replace(selectedText, "  ", " ")
            Loop While textLength <> Len(selectedText)

            ' Replace the selected text in the document with the modified text
            Selection.TypeText (selectedText)
        End Sub

  • Fixing the position of items in vim's statusline

    - by ldigas
    My statusline looks something like this:

        set statusline+=%m
        set statusline+=b%n: " set statusline+=%f
        set statusline+=%F
        set statusline+=%R
        set statusline+=%Y
        set statusline+=\ 
        set statusline+=[
        set statusline+=row\ %l/%L
        set statusline+=,\ " set statusline+=column\ %c\ (%v)
        set statusline+=column\ %v\ (%c)
        set statusline+=]

    which, on an average day, when there are no clouds, gives something like this:

        [-]b3:options.txt,RO,HELP [row 6291/7778, column 42 (29)]

    Now, when I go about splitting windows and opening different files, some of them modified, some of them not, the items in the statusline start to wiggle back and forth, and it annoys me to no end. I saw in vim's help (:help 'statusline') that one can set a fixed width for some items. How would you go about fixing the above items so that a missing item, or an item's varying width, doesn't shift the other ones? (That is, so I can always look at a known position and know what is there, not move my eyes left and right searching for the thing I need.)
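    For reference, each statusline item takes an optional minimum and maximum width in the form %-{minwid}.{maxwid}{item} (see :help 'statusline'). A sketch of pinning a few of the fields above to fixed columns (the widths here are arbitrary; the leading - left-aligns within the field):

        " modified/readonly flags and buffer number, number padded to 3 columns
        set statusline=%m%r%3.3n:
        " full file path in a fixed 30-column, left-aligned field, truncated if longer
        set statusline+=%-30.30F
        " row and column counters in fixed-width fields
        set statusline+=\ [row\ %6.6l/%-6.6L,\ column\ %4.4v\ (%4.4c)]

    With every field given both a minimum and a maximum width, a short filename pads out and a long one truncates, so the bracketed row/column block stays at the same screen position.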

  • Apache proxy is modifying the HTTP status code

    - by jarnbjo
    I am using Apache as a proxy frontend for a Java web application deployed on WebSphere. The web application uses custom status codes (55x) to signal specific errors to the clients. When accessing the web application directly through the WebSphere HTTP listener, everything works as expected, but when these requests are proxied through an Apache load balancer, the status codes are modified by Apache and replaced with a generic 500 error code (internal server error).

    In Apache's access.log, the correct status code is logged:

        <IP> - - [11/Nov/2011:17:24:53 +0100] "POST <URL> HTTP/1.1" 551 36

    But the actual response received by the client starts like this (logged with tcpdump):

        HTTP/1.1 500 Internal Server Error
        ...

    Followed by the real status code in the response content:

        ...
        Error 551: Berichteter Fehler: 551
        ...

    Is there an obvious reason for this behaviour, or does someone have a suggestion on how to modify the Apache configuration to forward the "real" status code instead of 500?
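    One Apache behavior worth ruling out (an assumption on my part, not something the post confirms): with mod_proxy's ProxyErrorOverride enabled, Apache discards the backend's error response body and substitutes its own error handling, and a nonstandard 55x code can come out the other side as a plain 500. A sketch of the relevant directive in the load balancer's virtual host (host name and port are placeholders; the directive itself is mod_proxy's):

        <VirtualHost *:80>
            ProxyPass        /app http://websphere-backend:9080/app
            ProxyPassReverse /app http://websphere-backend:9080/app
            # Pass backend error responses through untouched rather than
            # replacing them with Apache's own error documents
            ProxyErrorOverride Off
        </VirtualHost>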

  • Our server was rooted, but the exploit doesn't work?

    - by Salina Odelva
    Hi everyone. My friend's hosting server got rooted, and we have traced some of the attacker's commands. We found some exploits under the /tmp/.idc directory, disconnected the server, and are now testing some of the local kernel exploits the attacker tried on our server. Here is our kernel version:

        2.4.21-4.ELsmp #1 SMP

    We think he got root access via the modified uselib() local root exploit, but the exploit doesn't work:

        loki@danaria {/tmp}# ./mail -l ./lib
        [+] SLAB cleanup
            child 1 VMAs 32768

    The exploit hangs like this. I've waited over 5 minutes, but nothing has happened. I've also tried other exploits, but they didn't work either. Any ideas, or experience with this exploit? We need to find the issue and patch our kernel, but we can't understand how he used this exploit to get root. Thanks

  • Software to automatically download a file from FTP, then rename and replace an existing file

    - by pauska
    Hi. We pay a news agency to provide us MP3s of hourly news bulletins. They put the MP3s on an FTP server about 10 minutes before every hour, with files named after the date and time (for example, 02012010_1600.mp3 or similar). I need to find a solution that downloads only the latest modified file from the FTP server, renames it to news.mp3, and replaces the previous news.mp3. This should preferably run on a Windows 2008 Server, as a service if possible. Does anyone have a suggestion for software?
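    One scriptable route is WinSCP's .NET assembly from PowerShell, picking the newest file by its modification time; a sketch, assuming WinSCPnet.dll is installed and with the host, credentials, and paths as placeholders:

        # download-news.ps1 -- fetch the newest bulletin and overwrite news.mp3
        Add-Type -Path "C:\Program Files (x86)\WinSCP\WinSCPnet.dll"

        $options = New-Object WinSCP.SessionOptions -Property @{
            Protocol = [WinSCP.Protocol]::Ftp
            HostName = "ftp.example.com"
            UserName = "user"
            Password = "password"
        }

        $session = New-Object WinSCP.Session
        try {
            $session.Open($options)

            # Newest mp3 in the remote directory, by last-write time
            $latest = $session.ListDirectory("/bulletins").Files |
                Where-Object { -not $_.IsDirectory -and $_.Name -like "*.mp3" } |
                Sort-Object LastWriteTime -Descending |
                Select-Object -First 1

            # Download it over the previous news.mp3
            $session.GetFiles("/bulletins/" + $latest.Name, "C:\audio\news.mp3").Check()
        }
        finally {
            $session.Dispose()
        }

    Scheduled hourly with Task Scheduler, this gets close to the "run as a service" requirement without writing an actual service.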

  • Why might apache2 use 100% of CPU at startup?

    - by QuantumMechanic
    This is Apache 2.2.14 on SLES9. Out of nowhere (i.e. it had been working fine for ages), I am seeing apache2 suddenly use 100% of the CPU at startup and never complete startup. Nothing is getting written to /var/log/error_log (which it did back when things were OK). ps only shows the main httpd process and not any of the spawned threads; when things were OK, it would show the spawned threads. So it appears httpd is going into some sort of infinite loop right at startup and isn't even completing startup. It's not an issue of being overloaded by connections -- this happens even when nothing is trying to contact it. The config files haven't changed (or at least not in a way that changed their last-modified time). I've tried adding -e debug -E /var/log/apache2/startup_info to the command line, but nothing is put in the file. Any ideas what could be happening?
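    One way to see where it is spinning (a sketch; substitute the PID that ps reports for the looping process):

        # Watch the system calls of the spinning process
        strace -f -p <pid>

        # If strace shows no syscalls at all, the loop is in userspace;
        # grab a stack backtrace instead
        gdb -p <pid> -batch -ex "bt"

    A tight loop of the same few syscalls, or a stack stuck inside one particular module, usually points at the responsible module or at a corrupt file it reads during startup.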

  • Connect devices plugged into Raspberry Pi ethernet to WiFi network

    - by Tom
    I'm just starting out on a mission to learn more about networking, and I've followed a tutorial (http://raspberrypihq.com/how-to-turn-a-raspberry-pi-into-a-wifi-router/) to turn my Raspberry Pi into a wifi router. That worked really well, so I modified it slightly so that I can use a tethered iPhone for the internet connection: I switched all "eth0" references to "eth1" (the iPhone interface) and added a script to set everything up when the phone is plugged in. This setup has freed up the Pi's ethernet port, so I'd like to take this a step further and allow devices plugged into it to connect to the network; if possible, I'd like to try adding a switch so I can connect multiple devices. I've tried fiddling around with NAT and iptables with no luck, so my question is: how can I connect devices on eth0 to my network?
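    A minimal routing sketch for that last step, assuming eth1 is the tethered iPhone uplink and wired clients hang off eth0 (the subnet is an arbitrary choice, and the Pi would also need to hand out addresses, e.g. by extending the tutorial's dnsmasq config to cover eth0):

        # Give eth0 its own subnet for the wired clients
        sudo ifconfig eth0 192.168.3.1 netmask 255.255.255.0

        # Let the kernel forward packets between interfaces
        sudo sysctl -w net.ipv4.ip_forward=1

        # Masquerade wired traffic out through the iPhone uplink
        sudo iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
        sudo iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
        sudo iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT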

  • Fabric doesn't launch Nginx remotely

    - by endofu
    I want to be able to start and stop an nginx server on an Ubuntu EC2 instance with Fabric. I have these two scripts in my fabfile.py:

        def start_nginx():
            sudo('/etc/init.d/nginx start')
            # also tried this:
            # run('sudo /etc/init.d/nginx start')

        def stop_nginx():
            sudo('/etc/init.d/nginx stop')

    start_nginx() seemingly runs without errors (* Starting Nginx Server... / ...done.) but doesn't start the server (or it dies immediately). If I SSH into the instance, this starts nginx perfectly:

        sudo /etc/init.d/nginx start

    The stop_nginx() Fabric script stops the server remotely. I compiled nginx from source, using http://nginx.org/download/nginx-1.1.9.tar.gz and this init script in /etc/init.d: https://github.com/JasonGiedymin/nginx-init-ubuntu/blob/master/nginx. The only thing I modified is this line:

        DAEMON=/usr/local/sbin/nginx

    to

        DAEMON=/usr/sbin/nginx

    because that's the path I used when I ./configure-d my compile. Does anyone have any idea why the init script behaves differently when called from Fabric?
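    One Fabric-specific suspect (an assumption, since the post doesn't show Fabric's settings): Fabric 1.x runs remote commands on a pseudo-terminal by default, and a daemon still attached to that pty can be killed the moment the remote shell exits. A sketch of the usual workaround:

        from fabric.api import sudo

        def start_nginx():
            # pty=False detaches the command from Fabric's pseudo-terminal,
            # so the daemon survives after the remote shell exits
            sudo('nohup /etc/init.d/nginx start', pty=False)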

  • How can I reinstall the QoS Packet Scheduler if it was removed from the WinXP installation by nLite?

    - by Irwin1138
    I have a WinXP SP3 installation modified by nLite. This particular installation was stripped of the QoS Packet Scheduler; I was advised to remove QoS because of the overhead it supposedly produces. Now, I've read this Lifehacker post about Windows maintenance, and it says that, on the contrary, by doing so I may have done more harm than good:

        Disabling QoS in Windows XP: Rumor had it that Microsoft had permanently tied up 20 percent of your net bandwidth for Windows Update. They didn't, and those who disable QoS, or IPv6, in XP actually end up with some pretty harsh connectivity problems.

    I tend to believe this, and now I'm looking for a way to reinstall QoS. I tried installing it via network adapter properties - Install - Service, but there is no QoS entry there. I have the original, untouched WinXP SP3 CD. So, is there a way to bring QoS back into my WinXP installation, preferably without reinstalling Windows from scratch?

  • Hardlink files not the same

    - by SabreWolfy
    I created a hardlink of a file as follows:

        ln /path/to/source/file1 /path/to/target/file2

    Using md5sum, the two files were identical. After a while, the source file was modified by another program, but the target file did not get "updated": the md5sums are now different. The files are on the same partition, of course; otherwise I could not have created the link. What I'm trying to do is get a copy of the source file into the target folder (which is versioned), so that I have access to the source file elsewhere. I tried moving the source file to the target folder under a different name and then creating a symlink to it at the source, but the program expecting the file then (somehow) created a file of the name it wanted in the target folder. Ideas?
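    A quick way to confirm what happened is to compare inode numbers; many programs save by writing a temporary file and renaming it over the original, which replaces the inode and silently breaks a hard link:

        ls -li /path/to/source/file1 /path/to/target/file2
        # Same inode number in the first column: still one file with two names.
        # Different inodes: the program replaced file1 instead of rewriting it
        # in place, so the hard link no longer connects the two paths.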

  • Maintaining a WordPress website with Subversion

    - by Geries
    I want to set up a website using WordPress that we can modify locally and then, via Subversion, commit and make public. This covers installing new plugins, changing the content, testing WordPress updates to see if they work with the theme, etc. The idea is to control the development of the site, in case we need to keep track of changes or roll back because of unexpected bugs in the plugins, theme, and so on. I've read this article in the Codex, but I'm not sure how this is done when we want to include the content and the changes to the options of WordPress and the plugins (which live in MySQL). Thanks
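    Since Subversion only versions files, the MySQL side (options, content) has to be exported into the working copy before each commit; a minimal sketch, with the database name, credentials, and paths as placeholders:

        # Snapshot the database into the working copy so it travels with the code
        mysqldump --user=wp --password=secret wordpress > db/wordpress.sql
        svn add --force db/wordpress.sql
        svn commit -m "Site changes plus matching database snapshot"

    Rolling back then means checking out the older revision and loading db/wordpress.sql back into MySQL, which is coarser than file-level rollback but does capture plugin and option state.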

  • How to keep groups when pulling with git

    - by mimrock
    I have a staging site that is a working directory of a git repository. How do I set up git to let a developer pull a branch or release without changing the group of the modified files?

    An example. Let's say I have two developers, robin and david. They are both in the git-users group, so initially they both have write permissions on site.php:

        -rw-rw-r-- 1 robin git-users 46068 Nov 16 12:12 site.php
        drwxrwxr-x 8 robin git-users  4096 Nov 16 14:11 .git

    After robin pulls:

        robin-server1$ git pull origin master

        -rw-rw-r-- 1 robin robin     46068 Nov 16 12:35 site.php
        drwxrwxr-x 8 robin git-users  4096 Nov 16 14:11 .git

    Now david no longer has write permissions on site.php, because the group changed from git-users to robin. From now on, david will get a permission denied error when he tries to pull in this repository.
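    The usual combination for this is the setgid bit on the directories (so new files inherit the directory's group rather than the creator's primary group) plus git's shared-repository mode; a sketch, with the site path as a placeholder:

        # New files inherit the directory's group instead of the creator's
        chgrp -R git-users /var/www/staging
        find /var/www/staging -type d -exec chmod g+s {} \;

        # Have git itself keep the repository group-writable on its own operations
        cd /var/www/staging
        git config core.sharedRepository group

    core.sharedRepository mainly governs files under .git; the setgid bit on the working-tree directories is what keeps checked-out files like site.php in git-users.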

  • Why does a pdf file download result in varying bytes logged, all with sc-status 200

    - by Pat James
    I have a mojoPortal CMS installation on an IIS7 server where users are reporting problems downloading a PDF file. It always downloads fine for me and most others, either displaying in the browser or in Adobe Reader. Using logparser to query the IIS logs, all the responses have status 200 (OK) or 304 (Not Modified), but the bytes sent vary quite a bit: sometimes zero, some 211, some about half the full file size of 27059, and lots in between. Plenty show the full size of 27059. Do the entries with smaller byte counts represent errors of some kind, correlating with the problems reported? Is this likely to be a browser/client issue or a server-side problem? If there is any other info that would be helpful, let me know. This is a shared hosting server, though, so I am somewhat limited in what I can dig into on the server.
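    Short responses with sc-status 200 often just mean the client went away mid-transfer, and IIS records that in sc-win32-status (64, "the specified network name is no longer available", is the usual value for a dropped connection). A Log Parser sketch to correlate the two, with the log file mask as a placeholder:

        logparser "SELECT sc-status, sc-win32-status, sc-bytes, COUNT(*) AS hits FROM ex*.log WHERE cs-uri-stem LIKE '%.pdf' GROUP BY sc-status, sc-win32-status, sc-bytes ORDER BY hits DESC"

    If the small byte counts cluster with a nonzero sc-win32-status, the truncated transfers are client-side disconnects rather than server errors.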

  • How to check for duplicate files?

    - by miorel
    I have an external hard drive onto which I have backed up files several times. Some files were modified between backups, others were not, and some may have been renamed. Now I'm running out of space, and I'd like to clean up duplicate files. My idea was to md5sum every file on the drive, then look for duplicates and diff the relevant files (just in case, haha). Is this the best way to do it? What are some other methods of checking for duplicate files?
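    The md5sum plan from the post fits in one pipeline (a sketch; run it from the top of the drive, and note it reads every file once, so it takes a while on a full disk):

        # Hash everything, sort so identical hashes become adjacent, then print
        # only the groups whose first 32 characters (the md5) repeat
        find . -type f -exec md5sum {} + | sort | uniq -w32 --all-repeated=separate

    Dedicated tools such as fdupes do essentially the same thing but compare file sizes first, which avoids hashing files that cannot possibly match.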

  • Proxmox 3.1 + KVM routing + IP subnet + csf

    - by KeyJey
    We have a Proxmox 3.1 server at Hetzner with a routed network setup and a block of subnet IPs. We want to implement the csf firewall without interfering with the traffic of the KVM VMs; what would be the easiest way? We read that we should add these lines to /etc/csf/csfpost.sh:

        iptables -A FORWARD -d 144.76.223.155 -j ACCEPT
        iptables -A FORWARD -d 144.76.223.156 -j ACCEPT
        iptables -A FORWARD -d 144.76.223.157 -j ACCEPT
        iptables -A FORWARD -d 144.76.223.158 -j ACCEPT
        iptables -A FORWARD -d 144.76.223.159 -j ACCEPT
        iptables -A FORWARD -d 144.99.183.323 -j ACCEPT

    But when we enable csf, the ping breaks. This is the network config (IPs are modified):

        auto lo
        iface lo inet loopback

        # device: eth0
        auto eth0
        iface eth0 inet static
            address 144.76.166.100
            netmask 255.255.255.255
            pointopoint 144.76.183.97
            gateway 144.76.183.97

        # for single IPs
        auto vmbr0
        iface vmbr0 inet static
            address 144.76.166.100
            netmask 255.255.255.255
            bridge_ports none
            bridge_stp off
            bridge_fd 0
            up ip route add 144.99.183.323/32 dev vmbr0

        # for a subnet
        auto vmbr1
        iface vmbr1 inet static
            address 144.76.166.100
            netmask 255.255.255.248
            bridge_ports none
            bridge_stp off
            bridge_fd 0

    Thanks in advance! :)

  • Mimic NTFS "Modify" permissions on an ACL-enabled ext3 filesystem in Linux?

    - by bobinabottle
    I am migrating our file share from Windows Server to Samba on Linux, and the only hurdle I have at the moment is the ACLs. Currently we have a number of directories that use the "Modify" permission on NTFS, so users can write to a directory, but once a file is written it cannot be modified. On Linux, my idea was to set an ACL on the directory granting read/write access, but with a default ACL granting read-only access. Is this possible? I'm not quite sure how to set a default ACL that differs from the parent directory's own. Thanks!
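    Default ACLs are set independently of the directory's own access ACL, so the two can differ exactly as described; a sketch with setfacl, using a placeholder group and path (note it only approximates NTFS "Modify": a file's owner can still chmod their own file):

        # Access ACL on the directory itself: the group may enter and create files
        setfacl -m g:staff:rwx /srv/share/dropbox

        # Default ACL: entries inherited by newly created files -- read-only
        setfacl -d -m g:staff:r-- /srv/share/dropbox

        # Show both the access and the default entries to verify
        getfacl /srv/share/dropbox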

  • Mirror/Backup from SSH/SFTP to Windows

    - by Andrew Russell
    What I am trying to do is mirror a directory (recursively) from a server I can SSH/SFTP into, to a Windows machine. I want to do this as part of a script so it can be automated, and I only want to copy new or modified files; I don't want to download all the files every time the script runs. In other words, I'm trying to get the equivalent of RoboCopy /MIR that works with SFTP as the source. What would you recommend?
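    One scriptable option is WinSCP's synchronize command, driven from a plain script file (a sketch; host, key, and paths are placeholders, and the -delete switch is what removes local files that vanished remotely, as /MIR does):

        # mirror.txt -- run as: winscp.com /script=mirror.txt
        option batch abort
        option confirm off
        open sftp://user@server.example.com/ -privatekey=C:\keys\id.ppk
        # Copy only new and changed files; drop local files gone from the server
        synchronize local -delete C:\backup\site /var/www/site
        exit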

  • Boot Linux off the hard drive, then switch to running from a USB flash disk

    - by Jesse
    I have an older laptop that I want to use as a simple media server on my home network. I would like to avoid using the internal hard drive except for booting (the BIOS does NOT support booting from USB). My thought was to mirror the hard drive (currently a current install of Arch Linux) onto the flash drive and then, after booting, switch over to run everything from the flash drive. I read the following article about using a RAM disk (HOW-TO: Boot OS into RAM for speed and silence) but ran into a problem because the USB subsystem does not seem to be initialized soon enough (I created root and home partitions on the flash disk and modified fstab to point at those; it didn't work). Any thoughts?
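    The "not initialized soon enough" symptom usually means USB storage support has to be present in the initramfs before the real root is mounted, plus a delay for the bus to settle. A sketch for Arch's mkinitcpio (hook names and the preset vary by mkinitcpio version, and the device path is a placeholder):

        # /etc/mkinitcpio.conf -- pull USB storage support into early userspace
        HOOKS="base udev autodetect usb block filesystems"

        # rebuild the initramfs after editing the hooks
        mkinitcpio -p linux

        # kernel command line in the bootloader: root on the flash drive,
        # with a pause so slow USB devices have time to appear
        root=/dev/sdb1 rootdelay=10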

  • Signing the web server certificate with the CA key

    - by user1064786
    I have a problem running the command below with openssl-0.9.8e and Apache on Ubuntu 11.10; do you have any idea how to resolve it? First I was receiving this error:

        No such file or directory:bss_file.c:169:fopen('openssl.cnf','rb')

    Then I copied my modified openssl.cnf file into the /etc/ssl/ directory. Now I receive an error regarding the -in option:

        openssl ca -days 3650 –in server/requests/ciise.concordia.ca.csr –cert ./CA/ConcordiaCA.crt –keyfile ./CA/ConcordiaCA.key –out ./server/certificates/ciise.concordia.ca.crt -config openssl.cnf

        unknown option –in

    I also copied ciise.concordia.ca.csr into the upper directory, but the problem still persists. I would appreciate any help :)

  • Original sender is not correctly identified when spam is forwarded

    - by Stephan Burlot
    I have a forwarding rule with Postfix that forwards all messages to my main email address. When a spam message is sent to one of my addresses, it is forwarded, but the sender is shown as the forwarding domain, not the spammer's domain. A real example:

        mywebsite.com is hosted on Linode.
        [email protected] sends an email to [email protected]
        The mail is forwarded to [email protected]
        My email host (anotherwebsite.com) sees it's spam and sends a complaint to [email protected]
        Linode reports a TOS violation.

    I have modified my Postfix settings so I now use RBLs, but if a message gets through, it may happen again. How can I prevent this from happening again? Is there some setting to change in Postfix so the original sender is correctly identified?

    Thanks
    Stephan

    EDIT: The steps I took to prevent this from happening again were:

        - Added RBL checking to Postfix
        - Added postgrey to Postfix
        - Fixed the MX record, which was incorrect

    I checked with a test email on Spamcop.net and the original sender is now correctly identified.
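    For reference, the RBL checking mentioned in the edit is the kind of thing that goes into smtpd_recipient_restrictions in main.cf; a sketch (the restriction order has to be fitted to the rest of the configuration):

        # /etc/postfix/main.cf -- reject clients listed on a DNS blocklist
        smtpd_recipient_restrictions =
            permit_mynetworks,
            reject_unauth_destination,
            reject_rbl_client zen.spamhaus.org

    Rejecting at SMTP time matters here: mail that is never accepted is never forwarded, so no complaint can be traced back to the forwarding domain.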

  • Can a file change size when the transfer protocol changes?

    - by djechelon
    I am very curious about something I have just found happening on my computers. I set up SyncBackPro to synchronize a music folder from my home desktop to my laptop using a Windows network share (SMB), and files get synchronized regularly. Now I've tried switching to FTP, and I notice that NO FILE matches its counterpart, even files that have never been modified (I make sure the read-only flag is set and no application is allowed to retag the MP3s or anything else): SyncBack asks me which side should overwrite the other. The FTP copies are a little larger than the local files. I run the synchronization from the laptop. How can such a thing happen? The files are the same; the bytes should be the same. If I run the SMB sync again, it matches all the files again.

  • Default prolog for all ZSH scripts?

    - by Igor Spasic
    I have a file that contains several helper functions meant to be used only in other zsh scripts; I do not want them loaded with my profile. To make these functions available in a script, I would need to source this file. Is it somehow possible to have an automatic prolog script (or pre-script) loaded before all my zsh scripts? My current idea is to alias -s the zsh extension to a custom function that does all this for me:

        - concatenate the prolog file and the current script
        - call zsh with that modified input

    but for now I am somehow not able to do this (I haven't slept in almost a day). Please, does anyone have a working solution?
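    A sketch of that exact idea (file names are placeholders; alias -s is zsh's suffix-alias mechanism, so a command word ending in .zsh routes through the wrapper):

        # Prepend the helpers to the script, then run the combined text,
        # keeping $0 and the remaining arguments intact
        runzsh() {
            zsh -c "$(cat ~/lib/zsh-helpers.zsh "$1")" "$1" "${(@)argv[2,-1]}"
        }
        alias -s zsh=runzsh

    An alternative worth knowing: zsh sources ~/.zshenv for every invocation, scripts included, so the helpers could be sourced from there instead, at the cost of also loading them in interactive shells, which the post wants to avoid.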

  • Disable Mailman Reminders

    - by VxJasonxV
    We run a mail server on OS X Server, plus a few mailing lists. The password/subscription reminders used to come out at 8 AM (local), but in the past months that has moved to 5 AM, a nuisance to all involved. It would appear that Mailman has been modified by Apple, because there is no cron job entry I can find that controls when these reminder notices go out, and I haven't found any launch agent/daemon plists that would control this either. Nor have I found anything in the Mailman configuration web pages. So... where are they?! Since we use the lists only for specific announcements, the reminders are a fairly worthless message to originate, and they are a huge bother when they go out to support phones.
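    In stock Mailman the reminders are sent by the cron/mailpasswds script, so one way to hunt down whatever Apple schedules it with is to grep for that name (a sketch; where OS X Server actually keeps the schedule is the open question):

        # Look for mailpasswds in the usual cron locations
        sudo grep -r mailpasswds /etc/crontab /etc/cron.d /var/spool/cron 2>/dev/null

        # ...and in the launchd daemon directories
        sudo grep -rl mailpasswds /System/Library/LaunchDaemons /Library/LaunchDaemons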
