Search Results

Search found 4900 results on 196 pages for 'upload'.

  • Zero downtime uploads / Rollback in IIS

    - by NickatUship
    I'm not sure if this is the right way to ask this question, but here's basically what I'd like to do: 1) push a changeset to a site in IIS, 2) not interrupt the users, and 3) be able to roll back effortlessly. There are a few things that I know have to happen, and both are handled already: out-of-proc session state and an out-of-proc cache. So the questions that remain: 1) How do I keep from interrupting the users? If I just upload the files to bin, the app recycles and takes 10+ seconds to come back online. 2) How do I roll back effortlessly? I was thinking a possible solution would be to have two sites set up in IIS, one public and one private. Uploads go to the private site and get warmed up; after warmup, the sites are swapped. A rollback then only entails swapping back to the private site without an upload. This seems sound in theory, but I'm not sure of the mechanics. Any ideas?
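
    One way to realise the swap is to keep both IIS sites on distinct bindings and exchange those bindings with appcmd. This is only a rough sketch, under the assumption of two sites named "MySite-Blue" (currently public on port 80) and "MySite-Green" (staging on 8080) - all names, hosts, and ports here are hypothetical:

        REM Warm up the staging site first, e.g. by requesting it on its staging port.
        REM Then move the old public site out of the way...
        %windir%\system32\inetsrv\appcmd set site "MySite-Blue" /bindings:http/*:8080:
        REM ...and put the freshly deployed site on the public binding:
        %windir%\system32\inetsrv\appcmd set site "MySite-Green" /bindings:http/*:80:www.example.com

    The two commands are not atomic, so there is a sub-second window with no site on port 80; putting a reverse proxy (e.g. ARR) in front and repointing it instead avoids even that gap. Rollback is the same two commands with the site names exchanged.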

    Read the article

  • Git clone/push/pull - where does that username come from?

    - by Kuroki Kaze
    I've set up gitosis and am able to pull/push through SSH. Gitosis is installed on a Debian Lenny server; I'm using git from a Windows machine (msysgit). The strange thing is that if I enable loglevel = DEBUG in gitosis.conf, I see something like this when doing any action against the gitosis server:

        D:\Kaze\source\test-project>git pull origin master
        DEBUG:gitosis.serve.main:Got command "git-upload-pack 'test_project.git'"
        DEBUG:gitosis.access.haveAccess:Access check for '[email protected]' as 'writable' on 'test_project.git'...
        DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'test_project.git', new value 'test_project'
        DEBUG:gitosis.group.getMembership:found '[email protected]' in 'test'
        DEBUG:gitosis.access.haveAccess:Access ok for '[email protected]' as 'writable' on 'test_project'
        DEBUG:gitosis.access.haveAccess:Using prefix 'repositories' for 'test_project'
        DEBUG:gitosis.serve.main:Serving git-upload-pack 'repositories/test_project.git'
        From 192.168.175.128:test_project
         * branch            master     -> FETCH_HEAD
        Already up-to-date.

    The question is: why am I [email protected]? This email is in the global user.email config variable, too. Yesterday, when gitosis was installed, it saw me as kaze@KAZE; this is the name under which I was added to the gitosis-admin group (and it worked). But today git (or gitosis) started to see me as [email protected]. This is true for all repositories I push or clone. I had to add this address to gitosis.conf directly on the server to be able to edit the configs again (it worked). There are two public keys in keydir, [email protected] and [email protected]; their content is identical and they end with kaze@KAZE. The origin URL looks like git@lennyserver:test_project. Now, the question is: why did git (or gitosis) suddenly decide to call me by email instead of name@machinename? I changed a couple of things while trying to set up gitosis (updated git on the server to 1.6.0, for example), but maybe I broke something in my local git installation?
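
    For context on where gitosis gets that name: it ignores git's user.email entirely. The identity in those DEBUG lines is whatever argument gitosis wrote after gitosis-serve in ~git/.ssh/authorized_keys when it processed the keys in keydir - and that name is taken from the key's filename there. A sketch of what such a line looks like, with the key material elided:

        command="gitosis-serve [email protected]",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA... kaze@KAZE

    Since both keys in keydir contain identical key material, whichever authorized_keys line sshd matches first decides which name you act as - which would explain the silent switch from kaze@KAZE to [email protected] once the second key file was added.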

    Read the article

  • Managing Linux Directory Permissions & SFTP

    - by Dizzle
    Good morning; I have a RHEL 5.7 web server configured to allow SSH/SFTP only by specific groups. I'd like content managers to upload content to their respective directories and have that content inherit the user/group ownership of the directory, regardless of upload method or application.

    For example: John is in group "web" for SSH/SFTP rights and "finance" for directory permissions, and uploads to directory "webstuff" via SFTP. Directory "webstuff" has permissions of "2760" (rwxrws---) and ownership of "apache:finance". If John uploads an update to an existing file in "webstuff", the ownership of the file stays apache:finance. If John uploads a new file to "webstuff", the ownership of the file is john:finance.

    My desire is to have any file John uploads to "webstuff" change to the directory's owner. I've tried with setuid and setgid both set, but the user ownership didn't take. I've seen mentions on ServerFault of using ACLs, or a chrooted jail for SFTP, but I have yet to configure and test them, and I don't know if they're a viable solution (they could be; I just don't know, because I've never done either). Any thoughts and assistance would be greatly appreciated.
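
    For the group half of this, default POSIX ACLs may be worth a look before a chroot jail; Linux offers no way to reassign the user owner on upload (short of tricks like a bindfs overlay), but group access can be made to stick regardless of the uploader's umask. A rough sketch, assuming the filesystem is mounted with ACL support and using the directory from the example (/path/to/webstuff is a placeholder):

        # keep the setgid bit so new files land in group "finance"
        chmod 2770 /path/to/webstuff
        # grant the group rwx on the directory and, via the default ACL, on new files
        setfacl -m g:finance:rwx /path/to/webstuff
        setfacl -d -m g:finance:rwx /path/to/webstuff

    The setuid bit on directories is silently ignored on Linux, which is why the user-ownership half never took.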

    Read the article

  • How to edit source files and commit the changes to the new website?

    - by ajsie
    I've got Ubuntu installed with LAMP. I'm using WebDAV to upload/download files to/from the Ubuntu web server after I have edited the PHP source files in NetBeans. However, I wonder what best practice is for editing source files and committing these changes to the new website, because if we are 2-3 developers, I guess we have to use SVN. But I have never used it before, so I wonder how it works. Should I install it and then select /var/www (Apache's webroot) as the repository folder? Then, when I check in, will all the changes apply immediately? Could someone please explain the following steps: how to download and edit the source files, upload the files, and see the new changes on the website. I have only worked with a local Apache before, and it was only me. Now there will be some more programmers, so I have to set up a decent, central environment for this and have to know how NetBeans, SVN, WebDAV, and Apache work together. Thanks!
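
    To make the workflow concrete, a common pattern is to keep the repository outside the webroot and publish to /var/www from a post-commit hook, so the live site only changes on commit. A minimal sketch - the repository path and URL are hypothetical:

        # one-time, on the server:
        svnadmin create /srv/svn/mysite
        svn import /var/www file:///srv/svn/mysite -m "initial import"
        svn checkout file:///srv/svn/mysite /var/www --force   # make the webroot a working copy

        # each developer, on their own machine:
        svn checkout svn+ssh://server/srv/svn/mysite ~/work/mysite
        # ...edit in NetBeans, then:
        svn commit -m "edited index.php"

        # /srv/svn/mysite/hooks/post-commit (executable), publishes every commit:
        #!/bin/sh
        svn update /var/www --non-interactive

    The hook must run as a user with write access to /var/www; NetBeans also has built-in Subversion support, which would replace the WebDAV step entirely.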

    Read the article

  • Getting file not found error with pdebuild

    - by user35042
    I am attempting to build a Debian package using pdebuild on my main development server (running Debian wheezy). Here is the command I run:

        pdebuild --pbuilder cowbuilder --buildresult .. \
            --debbuildopts -i -- \
            --basepath /var/cache/pbuilder/base-wheezy.cow \
            --distribution wheezy --configfile /etc/pbuilder/wheezy

    This works on other servers, but on one server I get this output:

        I: using cowbuilder as pbuilder
        dpkg-buildpackage: source package libexample-orange-util-perl
        dpkg-buildpackage: source version 0.08
        dpkg-buildpackage: source changed by John User <[email protected]>
         dpkg-source -i --before-build libexample-orange-util-perl
         fakeroot debian/rules clean
        dh clean
           dh_testdir
           dh_auto_clean
           dh_clean
         dpkg-source -i -b libexample-orange-util-perl
        dpkg-source: info: using source format `3.0 (native)'
        dpkg-source: info: building libexample-orange-util-perl in libexample-orange-util-perl_0.08.tar.gz
        dpkg-source: info: building libexample-orange-util-perl in libexample-orange-util-perl_0.08.dsc
         dpkg-genchanges -S >../libexample-orange-util-perl_0.08_source.changes
        dpkg-genchanges: including full source code in upload
         dpkg-source -i --after-build libexample-orange-util-perl
        dpkg-buildpackage: source only upload: Debian-native package
        File not found: ../libexample-orange-util-perl_0.08.dsc

    There is no file ../libexample-orange-util-perl_0.08.dsc, but on other build servers no such file is needed (it gets created by the package build). What is causing this "file not found" error?

    Read the article

  • Permissions nightmare - tried all I know

    - by Ben
    Working on a new client's dev site, which is a WordPress install on a Plesk box. I have SSH root access, and FTP access through a separate account.

    What I've done so far: Initially I couldn't make any changes to any files at all. The permissions on all the template files looked a little screwy (644), so I figured I'd change them to allow group access and add myself to the group: chmod recursively on the theme folder to set everything to 664. Quickly realising I'd broken it, I set the folders to 755 and kept files at 664. Ownership on all files is a mixture of root:root and 500:500 (there is no user or group with the ID of 500 on the server). I added myself to the group 'root' so I could modify the files too.

    The problem: This worked OK in terms of being able to edit the existing files, so I began working. However, I can't upload to the directory, even having run chown -R root:root templatefolder/ and being in the root group. I feel like I must be missing something obvious, and it's doing my head in.

    Questions: Files in the install are owned by user 500 with group 500 - I've looked in /etc/group and /etc/passwd and there is no user or group with this ID. Is that left over from another developer's setup or the previous server (they moved recently)? Is being in the 'root' group enough, or do I need to own the theme folder as 'myftpuser' in order to upload and create new files? Like I say, I have edit access, so I got myself this far. I'm now questioning what to do next!
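
    One observation on the mechanics: creating files in a directory is governed by the directory's write+execute bits, and after chown -R root:root with directories at 755, the group has no write bit - which matches edit-but-not-upload exactly. A hedged sketch of one way through, assuming the FTP account is named myftpuser (hypothetical):

        chown -R myftpuser:root templatefolder/
        find templatefolder/ -type d -exec chmod 775 {} \;
        find templatefolder/ -type f -exec chmod 664 {} \;

    That gives the FTP user ownership for uploads while the root group keeps write access and Apache keeps read access via the world-read bits; the orphaned 500:500 entries are most plausibly UIDs carried over from the previous server's passwd file during the move.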

    Read the article

  • Move the uploads folder in WordPress

    - by Victor Hurdugaci
    Currently, my WordPress upload folder is located in \wp-content\uploads. Initially there was no structure, so all files were put straight in there. After a while it was changed to upload files into \wp-content\uploads\YEAR\MONTH. Now that folder contains a mix of files (those starting with + are folders) like:

        +wp-content
        | +2010
        | | +02
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +01
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +2009
        | | +12
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +11
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +..
        | | | ..
        | Unstructured-file-1
        | Unstructured-file-2
        | ...
        | Unstructured-file-n

    Based on the dates of the unstructured files, I would like to move them into a structured hierarchy (based on date, move each one to a folder \wp-content\uploads\YEAR\MONTH). Now, my questions are: Where do I write and execute a script to do the moving (I don't have full access to the server, just a cPanel and the WordPress admin page)? What must be updated so that the links in posts that reference the unstructured files point to the new location of those files? Not fully related to the previous description: is it alright to move the whole uploads folder to another location, like \uploads? PS: Moving the files/updating the database manually is not an option :)
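
    Given only cPanel and WP admin access, one practical route is a one-off PHP script uploaded next to wp-config.php via the cPanel file manager, run once in the browser, and then deleted. A rough sketch - the wp-load.php bootstrap and $wpdb are standard WordPress, but treat the rest as an untested outline and run it against a backup first:

        <?php
        // Move each loose file into uploads/YEAR/MONTH based on its mtime,
        // then rewrite references to it inside post content.
        require __DIR__ . '/wp-load.php';
        $base = __DIR__ . '/wp-content/uploads';

        foreach (glob($base . '/*') as $path) {
            if (is_dir($path)) continue;              // skip the existing YEAR folders
            $ym   = date('Y/m', filemtime($path));    // e.g. "2009/12"
            $name = basename($path);
            if (!is_dir("$base/$ym")) mkdir("$base/$ym", 0755, true);
            if (rename($path, "$base/$ym/$name")) {
                $wpdb->query($wpdb->prepare(
                    "UPDATE {$wpdb->posts} SET post_content = REPLACE(post_content, %s, %s)",
                    "wp-content/uploads/$name",
                    "wp-content/uploads/$ym/$name"
                ));
            }
        }

    If the files are proper media-library attachments, the _wp_attached_file entries in wp_postmeta need the same REPLACE treatment. As for relocating the whole uploads folder: WordPress supports it (the upload_path option), but every existing URL in the database has to be rewritten the same way, so it is rarely worth it.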

    Read the article

  • Unusually high dentry cache usage

    - by Wolfgang Stengel
    Problem

    A CentOS machine with kernel 2.6.32 and 128 GB physical RAM ran into trouble a few days ago. The responsible system administrator tells me that the PHP-FPM application was no longer responding to requests in a timely manner due to swapping, and, having seen in free that almost no memory was left, he chose to reboot the machine. I know that free memory can be a confusing concept on Linux, and a reboot perhaps was the wrong thing to do. However, the mentioned administrator blames the PHP application (which I am responsible for) and refuses to investigate further. What I could find out on my own is this:

    - Before the restart, the free memory (incl. buffers and cache) was only a couple of hundred MB.
    - Before the restart, /proc/meminfo reported a Slab memory usage of around 90 GB (yes, GB).
    - After the restart, the free memory was 119 GB, going down to around 100 GB within an hour as the PHP-FPM workers (about 600 of them) were coming back to life, each of them showing between 30 and 40 MB in the RES column in top (which has been this way for months and is perfectly reasonable given the nature of the PHP application). There is nothing else in the process list that consumes an unusual or noteworthy amount of RAM.
    - After the restart, Slab memory was around 300 MB.
    - I have been monitoring the system ever since, and most notably the Slab memory is increasing in a straight line at a rate of about 5 GB per day. Free memory as reported by free and /proc/meminfo decreases at the same rate. Slab is currently at 46 GB. According to slabtop, most of it is used for dentry entries.

    Free memory:

        free -m
                     total       used       free     shared    buffers     cached
        Mem:        129048      76435      52612          0        144       7675
        -/+ buffers/cache:      68615      60432
        Swap:         8191          0       8191

    Meminfo:

        cat /proc/meminfo
        MemTotal:       132145324 kB
        MemFree:         53620068 kB
        Buffers:           147760 kB
        Cached:           8239072 kB
        SwapCached:             0 kB
        Active:          20300940 kB
        Inactive:         6512716 kB
        Active(anon):    18408460 kB
        Inactive(anon):     24736 kB
        Active(file):     1892480 kB
        Inactive(file):   6487980 kB
        Unevictable:         8608 kB
        Mlocked:             8608 kB
        SwapTotal:        8388600 kB
        SwapFree:        8388600 kB
        Dirty:              11416 kB
        Writeback:              0 kB
        AnonPages:       18436224 kB
        Mapped:             94536 kB
        Shmem:               6364 kB
        Slab:            46240380 kB
        SReclaimable:    44561644 kB
        SUnreclaim:       1678736 kB
        KernelStack:         9336 kB
        PageTables:        457516 kB
        NFS_Unstable:           0 kB
        Bounce:                 0 kB
        WritebackTmp:           0 kB
        CommitLimit:     72364108 kB
        Committed_AS:    22305444 kB
        VmallocTotal:   34359738367 kB
        VmallocUsed:       480164 kB
        VmallocChunk:   34290830848 kB
        HardwareCorrupted:      0 kB
        AnonHugePages:   12216320 kB
        HugePages_Total:     2048
        HugePages_Free:      2048
        HugePages_Rsvd:         0
        HugePages_Surp:         0
        Hugepagesize:        2048 kB
        DirectMap4k:         5604 kB
        DirectMap2M:      2078720 kB
        DirectMap1G:    132120576 kB

    Slabtop:

        slabtop --once
        Active / Total Objects (% used)    : 225920064 / 226193412 (99.9%)
        Active / Total Slabs (% used)      : 11556364 / 11556415 (100.0%)
        Active / Total Caches (% used)     : 110 / 194 (56.7%)
        Active / Total Size (% used)       : 43278793.73K / 43315465.42K (99.9%)
        Minimum / Average / Maximum Object : 0.02K / 0.19K / 4096.00K

             OBJS    ACTIVE  USE OBJ SIZE    SLABS OBJ/SLAB CACHE SIZE NAME
        221416340 221416039   3%    0.19K 11070817       20  44283268K dentry
          1123443   1122739  99%    0.41K   124827        9    499308K fuse_request
          1122320   1122180  99%    0.75K   224464        5    897856K fuse_inode
           761539    754272  99%    0.20K    40081       19    160324K vm_area_struct
           437858    223259  50%    0.10K    11834       37     47336K buffer_head
           353353    347519  98%    0.05K     4589       77     18356K anon_vma_chain
           325090    324190  99%    0.06K     5510       59     22040K size-64
           146272    145422  99%    0.03K     1306      112      5224K size-32
           137625    137614  99%    1.02K    45875        3    183500K nfs_inode_cache
           128800    118407  91%    0.04K     1400       92      5600K anon_vma
            59101     46853  79%    0.55K     8443        7     33772K radix_tree_node
            52620     52009  98%    0.12K     1754       30      7016K size-128
            19359     19253  99%    0.14K      717       27      2868K sysfs_dir_cache
            10240      7746  75%    0.19K      512       20      2048K filp

    VFS cache pressure:

        cat /proc/sys/vm/vfs_cache_pressure
        125

    Swappiness:

        cat /proc/sys/vm/swappiness
        0

    I know that unused memory is wasted memory, so this should not necessarily be a bad thing (especially given that 44 GB are shown as SReclaimable). However, apparently the machine experienced problems nonetheless, and I'm afraid the same will happen again in a few days when Slab surpasses 90 GB.

    Questions

    - Am I correct in thinking that the Slab memory is always physical RAM, and that the number is already subtracted from the MemFree value?
    - Is such a high number of dentry entries normal? The PHP application has access to around 1.5 M files; however, most of them are archives and not accessed at all by regular web traffic.
    - What could explain the fact that the number of cached inodes is much lower than the number of cached dentries - should they not be related somehow?
    - If the system runs into memory trouble, should the kernel not free some of the dentries automatically? What could be a reason that this does not happen?
    - Is there any way to "look into" the dentry cache to see what all this memory is (i.e. what the paths are that are being cached)? Perhaps this points to some kind of memory leak, a symlink loop, or indeed to something the PHP application is doing wrong.
    - The PHP application code as well as all asset files are mounted via the GlusterFS network file system - could that have something to do with it?

    Please keep in mind that I can not investigate as root, only as a regular user, and that the administrator refuses to help. He won't even run the typical echo 2 > /proc/sys/vm/drop_caches test to see if the Slab memory is indeed reclaimable. Any insights into what could be going on and how I can investigate further would be greatly appreciated.

    Updates

    Some further diagnostic information. Mounts:

        cat /proc/self/mounts
        rootfs / rootfs rw 0 0
        proc /proc proc rw,relatime 0 0
        sysfs /sys sysfs rw,relatime 0 0
        devtmpfs /dev devtmpfs rw,relatime,size=66063000k,nr_inodes=16515750,mode=755 0 0
        devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0
        tmpfs /dev/shm tmpfs rw,relatime 0 0
        /dev/mapper/sysvg-lv_root / ext4 rw,relatime,barrier=1,data=ordered 0 0
        /proc/bus/usb /proc/bus/usb usbfs rw,relatime 0 0
        /dev/sda1 /boot ext4 rw,relatime,barrier=1,data=ordered 0 0
        tmpfs /phptmp tmpfs rw,noatime,size=1048576k,nr_inodes=15728640,mode=777 0 0
        tmpfs /wsdltmp tmpfs rw,noatime,size=1048576k,nr_inodes=15728640,mode=777 0 0
        none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
        cgroup /cgroup/cpuset cgroup rw,relatime,cpuset 0 0
        cgroup /cgroup/cpu cgroup rw,relatime,cpu 0 0
        cgroup /cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0
        cgroup /cgroup/memory cgroup rw,relatime,memory 0 0
        cgroup /cgroup/devices cgroup rw,relatime,devices 0 0
        cgroup /cgroup/freezer cgroup rw,relatime,freezer 0 0
        cgroup /cgroup/net_cls cgroup rw,relatime,net_cls 0 0
        cgroup /cgroup/blkio cgroup rw,relatime,blkio 0 0
        /etc/glusterfs/glusterfs-www.vol /var/www fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0
        /etc/glusterfs/glusterfs-upload.vol /var/upload fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0
        sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
        172.17.39.78:/www /data/www nfs rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=38467,timeo=600,retrans=2,sec=sys,mountaddr=172.17.39.78,mountvers=3,mountport=38465,mountproto=tcp,local_lock=none,addr=172.17.39.78 0 0

    Mount info:

        cat /proc/self/mountinfo
        16 21 0:3 / /proc rw,relatime - proc proc rw
        17 21 0:0 / /sys rw,relatime - sysfs sysfs rw
        18 21 0:5 / /dev rw,relatime - devtmpfs devtmpfs rw,size=66063000k,nr_inodes=16515750,mode=755
        19 18 0:11 / /dev/pts rw,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=000
        20 18 0:16 / /dev/shm rw,relatime - tmpfs tmpfs rw
        21 1 253:1 / / rw,relatime - ext4 /dev/mapper/sysvg-lv_root rw,barrier=1,data=ordered
        22 16 0:15 / /proc/bus/usb rw,relatime - usbfs /proc/bus/usb rw
        23 21 8:1 / /boot rw,relatime - ext4 /dev/sda1 rw,barrier=1,data=ordered
        24 21 0:17 / /phptmp rw,noatime - tmpfs tmpfs rw,size=1048576k,nr_inodes=15728640,mode=777
        25 21 0:18 / /wsdltmp rw,noatime - tmpfs tmpfs rw,size=1048576k,nr_inodes=15728640,mode=777
        26 16 0:19 / /proc/sys/fs/binfmt_misc rw,relatime - binfmt_misc none rw
        27 21 0:20 / /cgroup/cpuset rw,relatime - cgroup cgroup rw,cpuset
        28 21 0:21 / /cgroup/cpu rw,relatime - cgroup cgroup rw,cpu
        29 21 0:22 / /cgroup/cpuacct rw,relatime - cgroup cgroup rw,cpuacct
        30 21 0:23 / /cgroup/memory rw,relatime - cgroup cgroup rw,memory
        31 21 0:24 / /cgroup/devices rw,relatime - cgroup cgroup rw,devices
        32 21 0:25 / /cgroup/freezer rw,relatime - cgroup cgroup rw,freezer
        33 21 0:26 / /cgroup/net_cls rw,relatime - cgroup cgroup rw,net_cls
        34 21 0:27 / /cgroup/blkio rw,relatime - cgroup cgroup rw,blkio
        35 21 0:28 / /var/www rw,relatime - fuse.glusterfs /etc/glusterfs/glusterfs-www.vol rw,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072
        36 21 0:29 / /var/upload rw,relatime - fuse.glusterfs /etc/glusterfs/glusterfs-upload.vol rw,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072
        37 21 0:30 / /var/lib/nfs/rpc_pipefs rw,relatime - rpc_pipefs sunrpc rw
        39 21 0:31 / /data/www rw,relatime - nfs 172.17.39.78:/www rw,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=38467,timeo=600,retrans=2,sec=sys,mountaddr=172.17.39.78,mountvers=3,mountport=38465,mountproto=tcp,local_lock=none,addr=172.17.39.78

    GlusterFS config:

        cat /etc/glusterfs/glusterfs-www.vol
        volume remote1
            type protocol/client
            option transport-type tcp
            option remote-host 172.17.39.71
            option ping-timeout 10
            option transport.socket.nodelay on  # undocumented option for speed
                # http://gluster.org/pipermail/gluster-users/2009-September/003158.html
            option remote-subvolume /data/www
        end-volume
        volume remote2
            type protocol/client
            option transport-type tcp
            option remote-host 172.17.39.72
            option ping-timeout 10
            option transport.socket.nodelay on  # undocumented option for speed
                # http://gluster.org/pipermail/gluster-users/2009-September/003158.html
            option remote-subvolume /data/www
        end-volume
        volume remote3
            type protocol/client
            option transport-type tcp
            option remote-host 172.17.39.73
            option ping-timeout 10
            option transport.socket.nodelay on  # undocumented option for speed
                # http://gluster.org/pipermail/gluster-users/2009-September/003158.html
            option remote-subvolume /data/www
        end-volume
        volume remote4
            type protocol/client
            option transport-type tcp
            option remote-host 172.17.39.74
            option ping-timeout 10
            option transport.socket.nodelay on  # undocumented option for speed
                # http://gluster.org/pipermail/gluster-users/2009-September/003158.html
            option remote-subvolume /data/www
        end-volume
        volume replicate1
            type cluster/replicate
            option lookup-unhashed off  # off will reduce cpu usage, and network
            option local-volume-name 'hostname'
            subvolumes remote1 remote2
        end-volume
        volume replicate2
            type cluster/replicate
            option lookup-unhashed off  # off will reduce cpu usage, and network
            option local-volume-name 'hostname'
            subvolumes remote3 remote4
        end-volume
        volume distribute
            type cluster/distribute
            subvolumes replicate1 replicate2
        end-volume
        volume iocache
            type performance/io-cache
            option cache-size 8192MB  # default is 32MB
            subvolumes distribute
        end-volume
        volume writeback
            type performance/write-behind
            option cache-size 1024MB
            option window-size 1MB
            subvolumes iocache
        end-volume
        ### Add io-threads for parallel requisitions
        volume iothreads
            type performance/io-threads
            option thread-count 64  # default is 16
            subvolumes writeback
        end-volume
        volume ra
            type performance/read-ahead
            option page-size 2MB
            option page-count 16
            option force-atime-update no
            subvolumes iothreads
        end-volume
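
    Should the administrator ever cooperate, the reclaimability question can be settled in a minute without a reboot; these are all standard procfs/sysctl interfaces, shown here as a sketch of that hypothetical session (root required):

        sync
        echo 2 > /proc/sys/vm/drop_caches        # drop reclaimable dentries/inodes
        grep -E 'Slab|SReclaimable' /proc/meminfo

        # bias future reclaim towards dentries/inodes instead of page cache:
        sysctl -w vm.vfs_cache_pressure=10000

    vfs_cache_pressure values above 100 make the kernel prefer reclaiming dentry/inode caches. That wouldn't explain the linear growth itself (the FUSE/GlusterFS mounts are a plausible suspect there, given that fuse_inode and fuse_request also top the slabtop list), but it would at least confirm whether the 44 GB of SReclaimable really can be given back.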

    Read the article

  • Custom Extensions on Managed Chromebooks

    - by user417669
    I am a developer looking for the best way to set up different schools with their own custom, private extensions (i.e. School A should be the only one with access to Extension A). Theoretically, I am aware that there are a few ways to get a custom, private extension pushed out on a domain: 1. host the .crx on a server and click "Specify a Custom App" in the management console; 2. create a Domain App by uploading a zip to the Chrome Web Store; 3. upload the extension from my developer account to the Chrome Web Store and publish to a single "trusted tester," or make it unlisted.

    Option (1), hosting the .crx, has not been working. I am not sure why, but the extension is simply not pushing out. I link directly to the .crx file, which has the right ID and MIME type; still, no dice. If anyone has any tips or suggestions for getting this to work, I would love to hear them!

    Option (2), having the school create a domain app, seems a bit inefficient because it requires all schools to upload their own zip. So essentially I would have to email a zip file to the school and have them publish it. All updates to the extension would also require a similar process, so this doesn't seem ideal.

    I doubt that option (3) would work. If I published to the admin as a "trusted tester", I don't think that the other people in the domain would be able to access it. If it is unlisted, I do not know how an admin could find it in the Chrome Web Store dialog. Also, I would rather avoid security through obscurity.

    Has anyone had success with hosting the extension and using the Specify a Custom App feature? Any other suggestions for getting a custom extension pushed out by the management console? Thanks so much!
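
    Regarding option (1): the "Specify a Custom App" field generally has to point at an update manifest XML, not at the .crx itself - feeding it the bare .crx URL is a common way for the push to fail silently. A hedged sketch of such a manifest (appid, version, and URLs are placeholders):

        <?xml version='1.0' encoding='UTF-8'?>
        <gupdate xmlns='http://www.google.com/update2/response' protocol='2.0'>
          <app appid='aaaabbbbccccddddeeeeffffgggghhhh'>
            <updatecheck codebase='https://example-school.org/ext/school-a.crx'
                         version='1.0.0'/>
          </app>
        </gupdate>

    The appid must match the key the .crx was packaged with, and the .crx should be served as application/x-chrome-extension; updates then flow automatically whenever the version in this XML is bumped, which would also remove the re-zip-and-email cycle of option (2).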

    Read the article

  • Apache permission Problems

    - by swg1cor14
    OK, all my files and folders are set with an owner of vsftpd:nogroup. The FTP program can upload, create, and do everything. But when I use the PHP command mkdir, I get a Permission Denied even though the folder it's creating in is set to chmod 777. If I set the base folder to user www-data and group www-data, PHP's mkdir will work. However, I can't then use FTP to delete from or upload to that folder. /uploads is the base folder. I use PHP's mkdir to create a directory in there:

        if (!is_dir($_SERVER['DOCUMENT_ROOT'] . "/uploads/" . $_REQUEST['clientID'] . '/video/')) {
            @mkdir($_SERVER['DOCUMENT_ROOT'] . "/uploads/" . $_REQUEST['clientID'] . '/video/', 0777);
        }

    If /uploads is vsftpd:nogroup, then PHP's mkdir gives a Permission Denied error. If /uploads is www-data:www-data, then PHP's mkdir WILL work, but I can't FTP anything into the folder that was just created. If /uploads is vsftpd:www-data, then PHP's mkdir gives a Permission Denied error. How can I create a directory with PHP and still be able to access it via FTP?
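
    The underlying conflict is that two different users (the FTP account and Apache's www-data) both need write access to the same tree, which is what a shared group with the setgid bit is for. A rough sketch - the group name webshare is made up, and vsftpd here is the FTP account named in the question:

        groupadd webshare
        usermod -aG webshare www-data
        usermod -aG webshare vsftpd
        chown -R vsftpd:webshare /var/www/uploads
        # setgid on directories: new subfolders inherit the webshare group
        find /var/www/uploads -type d -exec chmod 2775 {} \;

    Note that PHP's mkdir mode is filtered through the process umask, so calling umask(0) first (or chmod() after the mkdir()) is what actually keeps the created video/ folders group-writable for FTP.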

    Read the article

  • How to make an EC2 AMI work on Xen on Debian

    - by user67103
    Hi. Here is the scenario. We are creating a cloud app. We have a Debian machine on which we create a customized image and upload it to Amazon EC2. After uploading it to the cloud, we have made some more customizations and are trying to rebundle it, and we are facing some issues in rebundling. We would like to know if we could do something like this: a. create an AMI image on Debian; b. load it onto a Xen hypervisor running on that Debian machine; c. customize the image; d. save the customized image; e. upload it to EC2. The issue is that I am unable to find a proper solution for how to install Xen on Debian, and whether an AMI running on Xen will work on EC2. Any suggestions/help would be greatly appreciated. Thanks. Krishna

    Read the article

  • This web site needs a different Google Maps API key. A new key can be generated at http://code.googl

    - by MJI
    Apologies in advance if this is the wrong place to post. I tried searching for this issue, and all that seemed to come up were questions posted by people who had this issue with their web pages. I couldn't find questions related to this issue from a layperson's perspective. I'm not a developer. I have no domain, nor do I wish to have one at this time. Rather, I'm just a regular person who likes to upload photos to some photo-related sites. My uploading process constantly gets interrupted by one of these annoying API errors. I get it at least two times: once when I click the page to upload, and again right after the photo has uploaded. It also pops up if I go to edit a photo or delete it. This interrupts my browsing experience until I click okay. I just want a fix for the annoyance without having to register for a key. I tried before, and it required a web domain. I'd rather not have to create a domain and go through such hoops just to fix this. Is there a solution to this problem that doesn't require registration? Another thing to note: I have used two computers. One has the message pop up and the other doesn't. What is different about the two computers?

    Read the article

  • Permission Denied for FTP User

    - by Alasdair
    I have an FTP user whose default directory is /root/ftpuser. This user can log in fine. The user is the owner of the directory, and the directory is even set to 777 permissions. But the user can't upload anything; the display is:

        Status:   Connecting to xx.xxx.xxx.xx:21...
        Status:   Connection established, waiting for welcome message...
        Response: 220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
        Response: 220-You are user number 2 of 50 allowed.
        Response: 220-Local time is now 05:12. Server port: 21.
        Response: 220-This is a private system - No anonymous login
        Response: 220-IPv6 connections are also welcome on this server.
        Response: 220 You will be disconnected after 15 minutes of inactivity.
        Command:  USER ftpuser
        Response: 331 User ftpuser OK. Password required
        Command:  PASS *********
        Response: 230 OK. Current restricted directory is /
        Command:  OPTS UTF8 ON
        Response: 200 OK, UTF-8 enabled
        Status:   Connected
        Status:   Starting upload of test.html
        Command:  CWD /
        Response: 550 Can't change directory to /: Permission denied
        Command:  MKD /
        Response: 550 Can't create directory: Permission denied
        Command:  CWD /
        Response: 550 Can't change directory to /: Permission denied
        Command:  SIZE /btn.png
        Response: 550 Can't check for file existence
        Command:  TYPE I
        Response: 200 TYPE is now 8-bit binary
        Command:  PASV
        Response: 227 Entering Passive Mode (66,232,106,33,52,218)
        Command:  STOR /test.html
        Response: 553 Can't open that file: Permission denied
        Error:    Critical file transfer error

    It's a Linux CentOS 6 server. Any ideas?
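
    One thing the transcript suggests: the login and chroot succeed, yet every operation inside the jail is denied - consistent with the home directory sitting under /root, which is mode 700 on CentOS, so the ftpuser account can never traverse into its own home. A hedged sketch of the check and the usual fix:

        # can ftpuser actually reach its home? /root is normally drwx------
        ls -ld /root /root/ftpuser

        # safer than loosening /root: relocate the home (-m moves the contents)
        usermod -d /home/ftpuser -m ftpuser

    After that, the "Current restricted directory is /" jail maps to /home/ftpuser, and the STOR should go through, assuming the ownership and 777 mode carried over.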

    Read the article

  • SFTP, ChrootDirectory and multiple users

    - by mdo
    I need a setup where I can put the contents of several user folders onto a DMZ server, from where external clients can download them; protocol SFTP, Linux, OpenSSH. To ease administration, we want to use one single user for the upload. What does work is to define ChrootDirectory /home/sftp/ in sshd_config, set the appropriate ownership and modes, and define a home dir in passwd so that the working directory of the user fits. This is my structure:

        /home/sftp/uploader/user1/file1.txt
        /home/sftp/uploader/user2/file2.txt

    The uploader user can write file1.txt and file2.txt to the corresponding folders, and by having the user folders (user1, user2) set to the users' primary group plus setting SETGID on the folders, the users are even able to delete the files (which is necessary). The only problem: because /home/sftp/ is the chroot base dir, the users can change up a directory and see other users' folders, though they are not able to change into them because of the access rights. Requirement: we want to prevent users from changing to /home/sftp/uploader/ and seeing other users' folders. My requirements are to use SFTP, have one upload user, and every user must have write access to his home dir. Obviously it's not an option to use something like ChrootDirectory %h, because every path component of the chroot path needs to have limited access rights, so as far as I understand this does not work.
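
    For the record, a per-user chroot can satisfy OpenSSH's root-ownership rule if each user is jailed one level above a writable subfolder: the chroot target itself stays root-owned, and only an inner directory is writable. A sketch under assumed names (group sftpusers and a files/ subfolder are hypothetical; the single uploader would then write into .../files through the real paths outside the jails):

        # sshd_config
        Subsystem sftp internal-sftp
        Match Group sftpusers
            ChrootDirectory /home/sftp/%u
            ForceCommand internal-sftp
            AllowTcpForwarding no

        # per-user layout
        mkdir -p /home/sftp/user1/files
        chown root:root /home/sftp /home/sftp/user1   # satisfies sshd's check
        chmod 755 /home/sftp /home/sftp/user1
        chown user1:sftpusers /home/sftp/user1/files  # the writable part

    Each user then lands in his own jail and simply cannot see sibling folders, while keeping write (and delete) access inside files/.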

    Read the article

  • Very slow internet with Linksys WRT54GL only in wireless mode (wired is OK)

    - by gojira
    I bought a new Cisco Linksys WRT54GL router to connect my laptop (running Windows 7) to the internet, and I installed the Tomato 1.28 firmware on the router. When I connect the laptop to the router via an ethernet cable, everything is fine and I get extremely fast up- and download speeds. When I connect wirelessly, however, websites load extremely slowly - it takes dozens of seconds to load a website! <-- This is my question: how can I fix the wireless speed issue? Gmail, for example, is unusable this way. I tried speedtest.net, but it always fails in the upload part of the test, so I can't even measure the bandwidth (could the fact that it fails in the upload part, not the download part, be an indication of what the problem is?). I have isolated the problem a bit; I am convinced it has to do either with the router itself, the router settings, or the settings of the wireless connection in Windows 7. Previously, I was using another router by Buffalo and had no problems whatsoever. I have tried to reproduce the settings from the Buffalo router as closely as possible on the Linksys router (same channel, same encryption, etc.). The speed problem only occurs with the Linksys router, and only in wireless mode! When I exchange the Linksys router with the Buffalo router I have here for testing, the wireless speed is up to normal again. Also, the same problem existed before I installed the Tomato firmware, so it has nothing to do with the firmware itself. Notes & things I already tried: Changing the channel does not seem to affect anything; I am also on the same channel (10) which I was previously on with the Buffalo router. QoS is off. Ping to the router itself is OK, ~1 ms. Some current settings of the Linksys router: WAN / Internet Type: DHCP. Wireless Mode: Access Point. B/G Mode: Mixed. Broadcast: check. Channel: 10 - 2.457 GHz. Security: WPA2 Personal. Encryption: AES.

    Read the article

  • Accessing a webpage folder with .htaccess in it via apache webdav?

    - by pingo
    I have set up WebDAV access in order to enable an external user to upload the content of his web page to his folder on my server, which Apache serves to the web; this way he can update his page via WebDAV. Now the problem is that the user requires a .htaccess file, and of course .htaccess breaks WebDAV, probably because it overrides settings (new files cannot be uploaded any more via WebDAV once the .htaccess specified below exists). I am running Apache 2.2.17 and this is my WebDAV config:

        Alias /folderDAV "d:/wamp/www/somewebsite/"
        <Location /folderDAV>
            Order Allow,Deny
            Allow from all
            Dav On
            AuthType Digest
            AuthName DAV-upload
            AuthUserFile "D:/wamp/passtore/user.passwd"
            AuthDigestProvider file
            require valid-user
        </Location>

    This config is part of my naive solution to the problem. The idea was to specify an alias to the web page folder where WebDAV would be enabled, and then set AllowOverride to None so that the .htaccess would have no effect. Of course, I then found out that the AllowOverride directive is not valid in <Location />. The .htaccess file looks like this:

        #opencart settings
        Options +FollowSymlinks
        Options -Indexes
        <FilesMatch "\.(tpl|ini)">
            Order deny,allow
            Deny from all
        </FilesMatch>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)\?*$ index.php?_route_=$1 [L,QSA]
        ErrorDocument 403 /403.html
        deny from 1.1.1.1/19
        allow from 2.2.2.2

    What would be the solution here? I would like to have the web page accessible from the web, but at the same time be able to access and modify it via Apache's WebDAV (with digest auth). How would I do that? Also, if possible, I would like a solution that permits the existence of the .htaccess, so that the user still keeps the power to set up access rules for his web page.
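
    One workaround that keeps the user's .htaccess in place is to let its rewrite rules step aside for WebDAV requests: the catch-all RewriteRule maps any URL that is not an existing file onto index.php, which is exactly what swallows a PUT for a file that does not exist yet. A hedged sketch of the extra passthrough, placed before the existing rule (the method list may need extending):

        RewriteEngine On
        RewriteBase /
        # let WebDAV verbs through untouched
        RewriteCond %{REQUEST_METHOD} ^(PUT|DELETE|MKCOL|COPY|MOVE|PROPFIND|PROPPATCH|LOCK|UNLOCK)$ [NC]
        RewriteRule .* - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)\?*$ index.php?_route_=$1 [L,QSA]

    The alternative is to reach the same content through a second filesystem path (junction/symlink) for the DAV alias so the .htaccess is never consulted for DAV traffic, but on a WAMP box the rewrite passthrough is usually the smaller change.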

    Read the article

  • File permissions on web server

    - by plua
    I have just read this useful article on file permissions, and I am about to implement an as-strict-as-possible file permissions policy on our web server. Our situation: we have a web server accessed through SFTP by different users from within our company, and we have the general public accessing Apache - sometimes uploading files through PHP. I distinguish folders and files by their use. So based on this reading, here is my plan.

    All people who need to upload files will have separate users, but all of those users will belong to two groups: uploaders, and webserver. Apache will belong to the group webserver.

    Directories - permission: 771, owner: user:uploaders. Explanation: to access files in the folder, everybody needs to have execute permission. Only uploaders will be adding/removing files, so they also get r+w permission.

    Files within the web root - permission: 664, owner: user:uploaders. Explanation: they will be uploaded and changed by different users, so both owner and group need to have w+r permissions. The webserver only needs to read files, so r permission only.

    Upload directories - permission: 771, owner: user:webserver. Explanation: when files need to be uploaded, Apache needs to be able to write to this directory. But I figure it is safer to change the group to webserver, thus giving Apache sufficient privileges (and all uploaders also belong to this group and will have the same permissions), while safeguarding against "others" writing to this folder.

    Uploaded files - permission: 664, owner: user:webserver. Explanation: after uploading, Apache might need to delete files, but this is no problem because it has w+r permission on the folder. So there is no need to make these files any more accessible than r access for the group.

    Being no expert on file permissions, my question is whether or not this is the best possible policy for our situation. Any suggestions welcome.
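
    As a sanity check, here is the plan expressed as commands, assuming a document root of /var/www/site, Apache running as www-data, and an example uploader called alice (all three names are assumptions):

        groupadd uploaders
        groupadd webserver
        usermod -aG uploaders,webserver alice
        usermod -aG webserver www-data

        # general content: uploaders own it
        chown -R alice:uploaders /var/www/site
        find /var/www/site -type d -exec chmod 771 {} \;
        find /var/www/site -type f -exec chmod 664 {} \;

        # PHP upload target: group webserver so Apache can write
        chown alice:webserver /var/www/site/uploads
        chmod 771 /var/www/site/uploads

    Worth noticing: with this layout Apache traverses directories via the execute-only "other" bit and reads ordinary files via their world-read bit, so it never needs write access outside the upload directories - which is the strictness being aimed for.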

    Read the article

  • Zabbix doesn't update value from file neither with log[] nor with vfs.file.regexp[] item

    - by tymik
    I am using Zabbix 2.2. I have a very specific environment, where I have to generate the desired data to a file via a script, then upload that file to FTP from the host and download it to the Zabbix server from FTP. After the file is downloaded, I check it with log[] and vfs.file.regexp[] items. I use these items as below:

        log[/path/to/file.txt,"C.*\s([0-9]+\.[0-9])$",Windows-1250,,"all",\1]
        vfs.file.regexp[/path/to/file.txt,"C.*\s([0-9]+\.[0-9])$",Windows-1250,,,\1]

    The line I am parsing looks like this:

        C: 8195Mb 5879Mb 2316Mb 28.2

    The value I want to extract is the 28.2 at the end of the line. The problem I am currently trying to solve is that when I update the file (upload from the host to FTP, then download from FTP to the Zabbix server), the value does not update. I was trying only log[] at the start, but I suspect that log[] treats the file as a real log file and doesn't re-check lines it has already seen (although, following the documentation, it should with the "all" value), so I added a vfs.file.regexp[] item too. The log[] item received a value in the past, but it doesn't update. The vfs.file.regexp[] item hasn't received any value so far. file.txt has been re-uploaded and re-downloaded several times and the situation doesn't change. It seems that log[] reads only new lines in the file; it doesn't re-check lines already caught if there are any changes. The zabbix_agentd.log file doesn't report any problem with access to the file, nor with the regexp construction (it did report "unsupported" for the log[] key when I had something set up wrong). I use debug logging level for the agent - I haven't found any interesting info about this problem. I have no idea what I might be doing wrong or what I do not know about how Zabbix performs these checks. I see two possible workarounds - adding more lines to the file instead of making a new one, or making new files and checking them with logrt[] - but neither satisfies my requirements. Any help is greatly appreciated. Of course I will provide additional information if requested; for now I don't know what else might be useful.
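
    One way to split the problem in half is to query the key directly from the agent that is supposed to read the file, bypassing the server's caching and scheduling; zabbix_get ships with Zabbix for exactly this purpose:

        zabbix_get -s 127.0.0.1 -k 'vfs.file.regexp[/path/to/file.txt,"C.*\s([0-9]+\.[0-9])$",Windows-1250,,,\1]'

    If this prints 28.2 while the item in the frontend stays empty or stale, the key and regexp are fine and the suspicion moves to item placement: the file lives on the Zabbix server, so the item must be assigned to (and checked by) the agent running on the server itself, not the agent of the host that originally generated the file.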

    Read the article

  • Cannot Install Windows 7 SP1 (64-bit)

    - by Clever Human
    I have tried every way I know to get Windows 7 SP1 to install, and it fails every time. Below is what looks like the relevant content of the CBS.log file. If there are further details that would help, or more information I can gather, I will get it.

        2011-08-15 10:32:52, Info CBS Startup: Package: Package_for_KB976902~31bf3856ad364e35~amd64~~6.1.1.17514 completed startup processing, new state: Installed, original: Installed, targeted: Installed. hr = 0x80070490
        2011-08-15 10:32:52, Info CBS WER: Generating failure report for package: Package_for_KB976932~31bf3856ad364e35~amd64~~6.1.1.17514, status: 0x80070490, failure source: CBS Other, start state: Partially Installed, target state: Installed, client id: SP Coordinater Engine
        2011-08-15 10:32:52, Info CBS Failed to query DisableWerReporting flag. Assuming not set... [HRESULT = 0x80070002 - ERROR_FILE_NOT_FOUND]
        2011-08-15 10:32:52, Info CBS Failed to add %windir%\winsxs\pending.xml to WER report because it is missing. Continuing without it...
        2011-08-15 10:32:52, Info CBS Failed to add %windir%\winsxs\pending.xml.bad to WER report because it is missing. Continuing without it...
        2011-08-15 10:32:52, Info CBS SQM: Reporting package change completion for package: Package_for_KB976932~31bf3856ad364e35~amd64~~6.1.1.17514, current: Partially Installed, original: Partially Installed, target: Installed, status: 0x80070490, failure source: CBS Other, failure details: "(null)", client id: SP Coordinater Engine, initiated offline: False, execution sequence: 517, first merged sequence: 517
        2011-08-15 10:32:52, Info CBS SQM: Upload requested for report: PackageChangeEnd_Package_for_KB976932~31bf3856ad364e35~amd64~~6.1.1.17514, session id: 101457924, sample type: Standard
        2011-08-15 10:32:52, Info CBS SQM: Ignoring upload request because the sample type is not enabled: Standard

    I have downloaded the service pack and run it from the EXE, I have installed it from Windows Update, and I have run every troubleshooting fix I could find. Nothing has worked so far. Any advice would be appreciated.

    Read the article

  • Windows 7 - "Magic" frequent folder

    - by TheAdamGaskins
    Every week, I export an MP3 file from Audacity into a folder named with that day's date (e.g. this past Sunday I exported the file to a folder named 20130609). Then I close everything, and that's it for a while. Then I come back a few hours later to upload the file to FTP. I usually have some folders open, so to open a new one, I right-click on the folder icon on the taskbar... to open a new folder window and browse to this folder I just created, right? Well, I look up a little bit and: [screenshot: the jump list already shows the newly created folder under "Frequent"]. So I click it and upload the file, and it actually saves me 30 seconds, which is really awesome... but what in the world? It happens every single week without fail. I create the folder inside the Audacity export window. The folder stays on the frequent list until I create a new folder the following week. This was definitely not an advertised feature of Windows 7, and it's extremely handy... but it really just seems like magic to me. How does it work?

    Read the article

  • Windows 7, network transmit (send) not working

    - by user326287
    My Windows 7 machine has worked for 2 years without problems. But now I can't transmit (send) large amounts of data over the LAN/Internet. I can: ping anything; browse the Internet and download files at full speed; send e-mails with very small attachments; and measure a stable, full download speed on Speedtest.net. I can't: test upload speed on Speedtest.net (the upload gets stuck), or save/send email messages with a big (128k) attachment, independent of the e-mail provider or mailbox. THIS IS NOT A HARDWARE/CABLE/CARD OR OTHER NETWORK DEVICE PROBLEM! When I boot from a Linux live CD, without ANY hardware change, all data sending and testing works correctly, at full speed. I have already tried the following in Windows 7: disabling the Windows/3rd-party firewall completely; resetting the IP stack parameters (netsh int ip reset c:\resetlog.txt); System Restore; and reinstalling the LAN driver. When I inspect the packets in Wireshark on Windows, I see lots of "TCP Retransmission" (maybe 60% of sent packets), and sometimes "TCP Dup Ack" or "TCP Out-of-Order". Linux doesn't do this. Thank you for the help.
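
    Since Linux on the same hardware sends at full speed, a Windows driver-level offload feature is a common culprit; broken checksum or large-send offload corrupts only outgoing segments, which fits the retransmission pattern. A hedged sketch of what one might try from an elevated prompt (these netsh options exist on stock Windows 7):

        netsh int tcp show global
        netsh int tcp set global autotuninglevel=disabled
        netsh int tcp set global chimney=disabled

    Large Send Offload itself is toggled in the NIC driver, not in netsh (Device Manager -> adapter -> Advanced -> "Large Send Offload"), and disabling it there is often the decisive test; if that fixes it, a driver update is the proper cure.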

    Read the article

  • Flash IO error while uploading photos with low internet upload speed

    - by Beck
    ActionScript:

        System.security.allowDomain("http://" + _root.tdomain + "/");
        import flash.net.FileReferenceList;
        import flash.net.FileReference;
        import flash.external.ExternalInterface;
        import flash.external.*;

        /* Main variables */
        var session_photos = _root.ph;
        var how_much_you_can_upload = 0;
        var selected_photos;        // container for selected photos
        var inside_photo_num = 0;   // for photo in_array selection
        var created_elements = _root.ph;
        var for_js_num = _root.ph;

        /* Functions & settings for javascript<->flash conversation */
        var methodName:String = "addtoflash";
        var instance:Object = null;
        var method:Function = addnewphotonumber;
        var wasSuccessful:Boolean = ExternalInterface.addCallback(methodName, instance, method);
        function addnewphotonumber() {
            session_photos--;
            created_elements--;
            for_js_num--;
        }

        /* Javascript hide and show flash button functions */
        function block()   { getURL("Javascript: blocking();"); }
        function unblock() { getURL("Javascript:unblocking();"); }

        /* Creating HTML platform function */
        var result = false;

        /* Uploading */
        function uploadthis(photos:Array) {
            if (!photos[inside_photo_num].upload("http://" + _root.tdomain + "/upload.php?PHPSESSID=" + _root.phpsessionid)) {
                getURL("Javascript:error_uploading();");
            }
        }

        /* Flash button (applet) options and bindings */
        var fileTypes:Array = new Array();
        var imageTypes:Object = new Object();
        imageTypes.description = "Images (*.jpg)";
        imageTypes.extension = "*.jpg;";
        fileTypes.push(imageTypes);
        var fileListener:Object = new Object();
        var btnListener:Object = new Object();
        btnListener.click = function(eventObj:Object) {
            var fileRef:FileReferenceList = new FileReferenceList();
            fileRef.addListener(fileListener);
            fileRef.browse(fileTypes);
        }
        uploadButton.addEventListener("click", btnListener);

        /* Listeners */
        fileListener.onSelect = function(fileRefList:FileReferenceList):Void {
            // resetting values
            inside_photo_num = 0;
            var list:Array = fileRefList.fileList;
            var item:FileReference;
            // PHP photo counter
            how_much_you_can_upload = 3 - session_photos;
            if (list.length > how_much_you_can_upload) {
                getURL("Javascript:howmuch=" + how_much_you_can_upload + ";list_length=" + list.length + ";limit_reached();");
                return;
            }
            // if session variable isn't yet refreshed, we check inner counter
            if (created_elements >= 3) {
                getURL("Javascript:limit_reached();");
                return;
            }
            selected_photos = list;
            for (var i:Number = 0; i < list.length; i++) {
                how_much_you_can_upload--;
                item = list[i];
                trace("name: " + item.name);
                trace(item.addListener(this));
                if ((item.size / 1024) > 5000) { getURL("Javascript:size_limit_reached();"); return; }
            }
            result = false;
            setTimeout(block, 500);
            /* Increment number for new HTML container and pass it to javascript;
               after javascript returns true we start uploading */
            for_js_num++;
            if (ExternalInterface.call("create_platform", for_js_num)) {
                uploadthis(selected_photos);
            }
        }
        fileListener.onProgress = function(file:FileReference, bytesLoaded:Number, bytesTotal:Number):Void {
            getURL("Javascript:files_process(" + bytesLoaded + "," + bytesTotal + "," + for_js_num + ");");
        }
        fileListener.onComplete = function(file:FileReference, bytesLoaded:Number, bytesTotal:Number):Void {
            inside_photo_num++;
            var sendvar_lv:LoadVars = new LoadVars();
            var loadvar_lv:LoadVars = new LoadVars();
            loadvar_lv.onLoad = function(success:Boolean) {
                if (loadvar_lv.failed == 1) {
                    getURL("Javascript:type_failed();");
                    return;
                }
                getURL("Javascript:filelinks='" + loadvar_lv.json + "';fullname='" + loadvar_lv.fullname + "';completed(" + for_js_num + ");");
                created_elements++;
                if ((inside_photo_num + 1) > selected_photos.length) { setTimeout(unblock, 1000); return; }
                // don't create empty containers anymore
                if (created_elements >= 3) { return; }
                result = false;
                /* Increment number for new HTML container and pass it to javascript;
                   after javascript returns true we start uploading */
                for_js_num++;
                if (ExternalInterface.call("create_platform", for_js_num)) {
                    uploadthis(selected_photos);
                }
            }
            sendvar_lv.getnum = true;
            sendvar_lv.PHPSESSID = _root.phpsessionid;
            sendvar_lv.sendAndLoad("http://" + _root.tdomain + "/upload.php", loadvar_lv, "POST");
        }
        fileListener.onCancel = function(file:FileReference):Void {
        }
        fileListener.onOpen = function(file:FileReference):Void {
        }
        fileListener.onHTTPError = function(file:FileReference, httpError:Number):Void {
            getURL("Javascript:http_error(" + httpError + ");");
        }
        fileListener.onSecurityError = function(file:FileReference, errorString:String):Void {
            getURL("Javascript:security_error(" + errorString + ");");
        }
        fileListener.onIOError = function(file:FileReference):Void {
            getURL("Javascript:io_error();");
            selected_photos[inside_photo_num].cancel();
            uploadthis(selected_photos);
        }

    The embedding HTML:

        <PARAM name="allowScriptAccess" value="always">
        <PARAM name="swliveconnect" value="true">
        <PARAM name="movie" value="http://www.localh.com/fileref.swf?ph=0&phpsessionid=8mirsjsd75v6vk583vkus50qbb2djsp6&tdomain=www.localh.com">
        <PARAM name="wmode" value="opaque">
        <PARAM name="quality" value="high">
        <PARAM name="bgcolor" value="#ffffff">
        <EMBED swliveconnect="true" wmode="opaque" src="http://www.localh.com/fileref.swf?ph=0&phpsessionid=8mirsjsd75v6vk583vkus50qbb2djsp6&tdomain=www.localh.com" quality="high" bgcolor="#ffffff" width="100" height="22" name="fileref" align="middle" allowScriptAccess="always" type="application/x-shockwave-flash" pluginspage="http://www.macromedia.com/go/getflashplayer"></EMBED>

    My upload speed is 40 KB/sec. I get a Flash error while uploading photos bigger than 500 KB, and no error while uploading photos of roughly 100-500 KB or less. My friend has an 8 Mbit upload speed and has no errors, even while uploading photos of 3.2 MB and more. How can I fix this problem? I have tried re-uploading on the IO error trigger, but it stops at the same place. Any solution regarding this error? By the way, I was watching the process via a debugging proxy and figured out that the response headers don't come at all on this IO error, and sometimes it shows a socket error. If needed, I will post the server-side PHP script as well, but it stops at if(isset($_FILES['Filedata'])) { so it won't help :) as all processing comes after this check.
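
    Given that the failure threshold scales with connection speed (roughly whatever fits through 40 KB/s before a server timer expires), a server-side time or size limit is the usual suspect rather than the ActionScript; the server dropping the connection mid-POST is exactly what surfaces as onIOError with no response headers. A hedged sketch of the limits worth checking (the directive names are stock PHP/Apache; the values are just examples):

        ; php.ini
        post_max_size = 16M
        upload_max_filesize = 16M
        max_input_time = 600       ; seconds PHP will wait for the request body
        max_execution_time = 300

        # Apache httpd.conf
        TimeOut 600
        LimitRequestBody 16777216

    If mod_security is loaded, its SecRequestBodyLimit imposes an additional cap. The retry in onIOError stalls at the same place because it restarts the same oversized/too-slow POST against the same limit.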

    Read the article

  • Android: OutOfMemoryError while uploading video...

    - by AP257
    Hi all, I have the same problem as described here, but I will supply a few more details. While trying to upload a video in Android, I'm reading it into memory, and if the video is large I get an OutOfMemoryError. Here's my code:

        // get bytestream to upload
        videoByteArray = getBytesFromFile(cR, fileUriString);

        public static byte[] getBytesFromFile(ContentResolver cR, String fileUriString) throws IOException {
            Uri tempuri = Uri.parse(fileUriString);
            InputStream is = cR.openInputStream(tempuri);
            byte[] b3 = readBytes(is);
            is.close();
            return b3;
        }

        public static byte[] readBytes(InputStream inputStream) throws IOException {
            ByteArrayOutputStream byteBuffer = new ByteArrayOutputStream();
            // this is storage overwritten on each iteration with bytes
            int bufferSize = 1024;
            byte[] buffer = new byte[bufferSize];
            int len = 0;
            while ((len = inputStream.read(buffer)) != -1) {
                byteBuffer.write(buffer, 0, len);
            }
            return byteBuffer.toByteArray();
        }

    And here's the traceback (the error is thrown on the byteBuffer.write(buffer, 0, len) line):

        04-08 11:56:20.456: ERROR/dalvikvm-heap(6088): Out of memory on a 16775184-byte allocation.
        04-08 11:56:20.456: INFO/dalvikvm(6088): "IntentService[UploadService]" prio=5 tid=17 RUNNABLE
        04-08 11:56:20.456: INFO/dalvikvm(6088): | group="main" sCount=0 dsCount=0 s=N obj=0x449a3cf0 self=0x38d410
        04-08 11:56:20.456: INFO/dalvikvm(6088): | sysTid=6119 nice=0 sched=0/0 cgrp=default handle=4010416
        04-08 11:56:20.456: INFO/dalvikvm(6088): at java.io.ByteArrayOutputStream.expand(ByteArrayOutputStream.java:~93)
        04-08 11:56:20.456: INFO/dalvikvm(6088): at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:218)
        04-08 11:56:20.456: INFO/dalvikvm(6088): at com.android.election2010.UploadService.readBytes(UploadService.java:199)
        04-08 11:56:20.456: INFO/dalvikvm(6088): at com.android.election2010.UploadService.getBytesFromFile(UploadService.java:182)
        04-08 11:56:20.456: INFO/dalvikvm(6088): at com.android.election2010.UploadService.doUploadinBackground(UploadService.java:118)
        04-08 11:56:20.456: INFO/dalvikvm(6088): at com.android.election2010.UploadService.onHandleIntent(UploadService.java:85)
        04-08 11:56:20.456: INFO/dalvikvm(6088): at android.app.IntentService$ServiceHandler.handleMessage(IntentService.java:30)
        04-08 11:56:20.456: INFO/dalvikvm(6088): at android.os.Handler.dispatchMessage(Handler.java:99)
        04-08 11:56:20.456: INFO/dalvikvm(6088): at android.os.Looper.loop(Looper.java:123)
        04-08 11:56:20.456: INFO/dalvikvm(6088): at android.os.HandlerThread.run(HandlerThread.java:60)
        04-08 11:56:20.467: WARN/dalvikvm(6088): threadid=17: thread exiting with uncaught exception (group=0x4001b180)
        04-08 11:56:20.467: ERROR/AndroidRuntime(6088): Uncaught handler: thread IntentService[UploadService] exiting due to uncaught exception
        04-08 11:56:20.467: ERROR/AndroidRuntime(6088): java.lang.OutOfMemoryError
        04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at java.io.ByteArrayOutputStream.expand(ByteArrayOutputStream.java:93)
        04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:218)
        04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at com.android.election2010.UploadService.readBytes(UploadService.java:199)
        04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at com.android.election2010.UploadService.getBytesFromFile(UploadService.java:182)
        04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at com.android.election2010.UploadService.doUploadinBackground(UploadService.java:118)
        04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at com.android.election2010.UploadService.onHandleIntent(UploadService.java:85)
        04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at android.app.IntentService$ServiceHandler.handleMessage(IntentService.java:30)
        04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at android.os.Handler.dispatchMessage(Handler.java:99)
        04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at android.os.Looper.loop(Looper.java:123)
        04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at android.os.HandlerThread.run(HandlerThread.java:60)
        04-08 11:56:20.496: INFO/Process(4657): Sending signal. PID: 6088 SIG: 3

    I guess that, as @DroidIn suggests, I need to upload it in chunks. But (newbie question alert) does that mean that I should make multiple PostMethod requests and glue the file together at the server end? Or can I load the bytestream into memory in chunks and glue it together in the Android code? If anyone could give me a clue as to the best approach, I would be very grateful.
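
    Neither, happily: the whole byte array can be avoided by streaming the InputStream straight into the HTTP request body, so memory use stays constant no matter the video size and nothing needs gluing on either end (chunked here refers to HTTP transfer encoding, not to multiple requests). A rough sketch with java.net.HttpURLConnection - the URL is a placeholder, and the HttpClient PostMethod used originally has an equivalent in InputStreamRequestEntity:

        // Stream the video in 8 KB pieces instead of buffering it all in RAM.
        InputStream in = cR.openInputStream(Uri.parse(fileUriString));
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://example.com/upload").openConnection();
        conn.setDoOutput(true);
        conn.setChunkedStreamingMode(8192);   // don't buffer the body internally
        OutputStream out = conn.getOutputStream();
        byte[] buf = new byte[8192];
        int len;
        while ((len = in.read(buf)) != -1) {
            out.write(buf, 0, len);           // buffer is reused each iteration
        }
        out.close();
        in.close();
        int status = conn.getResponseCode();  // completes the request

    The server then receives one ordinary POST body, exactly as if the whole array had been sent at once.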

    Read the article

  • PHP/GD - Cropping and Resizing Images

    - by Alix Axel
    I've coded a function that crops an image to a given aspect ratio and then resizes it and outputs it as JPEG:

        <?php

        function Image($image, $crop = null, $size = null) {
            $image = ImageCreateFromString(file_get_contents($image));

            if (is_resource($image) === true) {
                $x = 0;
                $y = 0;
                $width = imagesx($image);
                $height = imagesy($image);

                /* CROP (Aspect Ratio) Section */
                if (is_null($crop) === true) {
                    $crop = array($width, $height);
                } else {
                    $crop = array_filter(explode(':', $crop));

                    if (empty($crop) === true) {
                        $crop = array($width, $height);
                    } else {
                        if ((empty($crop[0]) === true) || (is_numeric($crop[0]) === false)) {
                            $crop[0] = $crop[1];
                        } else if ((empty($crop[1]) === true) || (is_numeric($crop[1]) === false)) {
                            $crop[1] = $crop[0];
                        }
                    }

                    $ratio = array(
                        0 => $width / $height,
                        1 => $crop[0] / $crop[1],
                    );

                    if ($ratio[0] > $ratio[1]) {
                        $width = $height * $ratio[1];
                        $x = (imagesx($image) - $width) / 2;
                    } else if ($ratio[0] < $ratio[1]) {
                        $height = $width / $ratio[1];
                        $y = (imagesy($image) - $height) / 2;
                    }

                    /* How can I skip (join) this operation with the one in the Resize Section? */
                    $result = ImageCreateTrueColor($width, $height);

                    if (is_resource($result) === true) {
                        ImageSaveAlpha($result, true);
                        ImageAlphaBlending($result, false);
                        ImageFill($result, 0, 0, ImageColorAllocateAlpha($result, 255, 255, 255, 127));
                        ImageCopyResampled($result, $image, 0, 0, $x, $y, $width, $height, $width, $height);

                        $image = $result;
                    }
                }

                /* Resize Section */
                if (is_null($size) === true) {
                    $size = array(imagesx($image), imagesy($image));
                } else {
                    $size = array_filter(explode('x', $size));

                    if (empty($size) === true) {
                        $size = array(imagesx($image), imagesy($image));
                    } else {
                        if ((empty($size[0]) === true) || (is_numeric($size[0]) === false)) {
                            $size[0] = round($size[1] * imagesx($image) / imagesy($image));
                        } else if ((empty($size[1]) === true) || (is_numeric($size[1]) === false)) {
                            $size[1] = round($size[0] * imagesy($image) / imagesx($image));
                        }
                    }
                }

                $result = ImageCreateTrueColor($size[0], $size[1]);

                if (is_resource($result) === true) {
                    ImageSaveAlpha($result, true);
                    ImageAlphaBlending($result, true);
                    ImageFill($result, 0, 0, ImageColorAllocate($result, 255, 255, 255));
                    ImageCopyResampled($result, $image, 0, 0, 0, 0, $size[0], $size[1], imagesx($image), imagesy($image));

                    header('Content-Type: image/jpeg');

                    ImageInterlace($result, true);
                    ImageJPEG($result, null, 90);
                }
            }

            return false;
        }

        ?>

    The function works as expected, but I'm creating one GD image resource more than necessary - how can I fix that? I've tried joining both calls, but I must be making some miscalculation.

        /* Usage Examples */
        Image('http://upload.wikimedia.org/wikipedia/commons/4/47/PNG_transparency_demonstration_1.png', '1:1', '600x');
        Image('http://upload.wikimedia.org/wikipedia/commons/4/47/PNG_transparency_demonstration_1.png', '2:1', '600x');
        Image('http://upload.wikimedia.org/wikipedia/commons/4/47/PNG_transparency_demonstration_1.png', '2:', '250x300');

    Any help is greatly appreciated, thanks.
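
    Since ImageCopyResampled can map an arbitrary source rectangle onto an arbitrary destination size in one call, the crop and resize stages can indeed be collapsed into a single pass, which removes the intermediate truecolor resource entirely. A sketch of the idea, reusing the variables from the function ($x, $y and the cropped $width/$height from the crop section, $size from the resize section):

        // one-pass crop + resize: source window -> final canvas
        $result = ImageCreateTrueColor($size[0], $size[1]);
        ImageFill($result, 0, 0, ImageColorAllocate($result, 255, 255, 255));
        ImageCopyResampled(
            $result, $image,
            0, 0,                 // destination origin
            $x, $y,               // crop offset in the source
            $size[0], $size[1],   // final output size
            $width, $height       // cropped source window
        );

    The likely "miscalculation" when joining the two: the auto-computed missing dimension in the resize section must then be derived from the cropped $width/$height, not from imagesx()/imagesy() of the original image, since no physical crop has happened yet at that point.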

    Read the article
