Search Results

Search found 975 results on 39 pages for 'uploads'.


  • codeigniter avoiding html divs

    - by rabidmachine9
    Hello there! Is there a proper syntax to avoid divs in CodeIgniter? I don't really like opening and closing tags all the time...

        <div class="theForm">
        <?php
        echo form_open('edit/links'); // this form uploads
        echo "Enter the Name: " . form_input('name','name');
        echo "Enter the Link: " . form_input('url','url');
        echo " " . form_submit('submit', 'Submit');
        echo form_close();
        if (isset($linksQuery) && count($linksQuery)) {
            foreach ($linksQuery as $link) {
                echo anchor($link['link'], $link['name'].".", array("class" => "links"));
                echo form_open('edit/links', array('class' => 'deleteForm'));
                echo form_hidden('name', $link['name']);
                echo " " . form_submit('delete','Delete');
                echo form_close();
                echo br(2);
            }
        }
        ?>
        </div>
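    If the goal is mainly to cut down on the echo-and-concatenate churn, PHP's alternative syntax with short echo tags gives the same view with far less noise. A minimal sketch of the first form in that style (the <?= tag assumes short_open_tag is enabled, or PHP 5.4+ where it always works):

        <div class="theForm">
        <?= form_open('edit/links') /* this form uploads */ ?>
            Enter the Name: <?= form_input('name', 'name') ?>
            Enter the Link: <?= form_input('url', 'url') ?>
            <?= form_submit('submit', 'Submit') ?>
        <?= form_close() ?>
        </div>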

    Read the article

  • Setting PHP session variables using Flash Actionscript

    - by Abs
    Hello all, I have a simple PHP upload script that is called from my Flash app. I am sure it makes the call because it actually uploads the file!

        session_start();
        $default_path = 'files/';
        $target_path = ($_POST['dir']) ? $_POST['dir'] : $default_path;
        if (!file_exists($target_path)) mkdir($target_path, 0777, true);
        $destination = $target_path . basename($_FILES['Filedata']['name']);
        $file_name = rand(1, 9999) . $_FILES['Filedata']['name'];
        if (move_uploaded_file($_FILES['Filedata']['tmp_name'], $destination)) {
            $_SESSION['path'] = 'flashuploader_online/upload/' . $destination;
        }

    However, when I try to use the session variable "path" in another script it gives me an empty value! Yes, I have made sure to use session_start. Am I missing something?

    Update: at least now I know what the problem is! But I am not sure how to solve it without it getting messy to pass session variables across. Any ideas?
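    A likely culprit: the Flash player does not reliably send the browser's session cookie with its upload request, so the upload script starts a brand-new session instead of joining the browser's one, which is why the variable looks empty elsewhere. A common workaround, sketched here as an assumption about your setup, is to have ActionScript append the session ID to the upload URL and have PHP adopt it before starting the session:

        <?php
        // upload.php — hypothetical sketch: the SWF appends ?PHPSESSID=... to the
        // upload URL because Flash may not send the browser's session cookie.
        if (isset($_REQUEST['PHPSESSID'])) {
            session_id($_REQUEST['PHPSESSID']); // adopt the browser's session...
        }
        session_start();                        // ...before the session starts
        // ...rest of the upload handling as above...
        ?>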

    Read the article

  • PHP site scheduling Java execution?

    - by obfuscation
    I'm trying to get started on combining my (slightly limited) PHP experience with my (better) Java experience, in a project where I need to allow uploads of Java source files to the server, which the server then compiles with javac. Then, at a set time (e.g. specified on upload) I need to run that once on the server, which will generate some database info for the PHP site to display.

    To describe my current programming abilities: I have made many desktop Java programs, and am confident in 'pure' Java, but so far have only undertaken a couple of PHP projects (including using the CodeIgniter framework). My motivation for using PHP as the frontend is that I know it is very fast, lightweight, and I will be able to display the results I need very easily with it (a simple DB readout). Ideally, the technology used should be able to be developed on a localhost (e.g. WAMP, Tomcat etc.).

    Is there any advice you could give on what technology to bridge this gap with, and what resources could help in using it? I have looked at a few, but have struggled to find documentation for achieving what I need.
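    One low-tech bridge, if it fits these constraints, is to have PHP shell out to the JDK tools and let cron handle the timed run — no servlet container needed. A rough sketch under the assumption that javac/java are on the server's PATH and the uploaded source has already been validated (compiling untrusted code without sandboxing is dangerous, so treat this purely as an illustration):

        <?php
        // compile.php — hypothetical sketch: compile an uploaded source file
        $src = escapeshellarg('/var/uploads/Job.java'); // assumed upload location
        $out = shell_exec("javac $src 2>&1");           // capture compiler output
        if ($out !== null && trim($out) !== '') {
            die('Compilation failed: ' . htmlspecialchars($out));
        }
        // A cron entry (crontab -e) then runs the compiled class at the set time
        // and lets it write its results into the database the PHP site reads:
        //   30 2 * * * cd /var/uploads && java Job >> /var/log/job.log 2>&1
        ?>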

    Read the article

  • Is there a way to sync (two-way) tables between a MySQL server and a local MS Access?

    - by Kailen
    Help me figure out a solution to a (not so unique) problem. My research group has GPS devices attached to migratory animals. Every once in a while, a research tech will be within range of an animal and will get the chance to download all the logged points. Each individual spits out a single .dbf and new locations are just appended to the end (so the file is just cumulative). These data need to be shared among the research group. Everyone else (besides me) wants to use Access, so they can make small edits, and they prefer that interface. They do not like using MySQL.

    The solution I came up with is: (a) the person who downloads the file goes to a web page, enters the animal ID into a form, chooses the .dbf file and uploads it to a MySQL database on the server (I still have to write PHP code to read the .dbf and write SQL insert statements from it; a sketch follows below); (b) everyone syncs from their local Access database to the server (this is natively possible from Access but very clunky).

    Is there a tool (preferably open source) that can compare an Access table to a MySQL table and sync the two (both ways)? Alternatively, does anyone have a more elegant solution? The ultimate goal is to allow everyone to have access to the most current data on their computers using their preferred database app.
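    For step (a), PHP's dbase extension (a PECL package) can read the .dbf directly, so no hand-rolled parser is needed. A rough sketch under that assumption — the table, column and DBF field names below are invented for illustration, and the INSERT IGNORE relies on a unique key so re-uploading the cumulative file does not duplicate rows:

        <?php
        // import_dbf.php — hypothetical sketch using the PECL dbase extension
        $animalId = $_POST['animal_id'];
        $db  = dbase_open('/tmp/upload.dbf', 0); // 0 = read-only
        $pdo = new PDO('mysql:host=localhost;dbname=tracking', 'user', 'pass');
        $ins = $pdo->prepare('INSERT IGNORE INTO gps_points
            (animal_id, lat, lon, fix_time) VALUES (?, ?, ?, ?)');
        for ($i = 1; $i <= dbase_numrecords($db); $i++) {   // records are 1-based
            $rec = dbase_get_record_with_names($db, $i);    // keys = DBF field names
            $ins->execute(array($animalId, $rec['LAT'], $rec['LON'], $rec['FIXTIME']));
        }
        dbase_close($db);
        ?>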

    Read the article

  • Detect some conflictive characters in a string with javascript

    - by FranQ
    Hello. I have a file input in a form that uploads an mp3 file, but I'd like to detect characters in the filename that conflict with my system, like ! or @ or any other. All the code I've found replaces these characters, but I just want to detect them to alert the user. I think it will be easy with regular expressions, but I don't know much about them. I'm using jQuery/JavaScript. Thanks in advance for your help.

    Edit, to improve my problem description: I'm working on a CodeIgniter application that allows users to upload mp3 files to the server. I use jQuery to manage client-side forms. The CI upload class converts spaces in the file name to underscores and everything works. But while testing the application I uploaded an mp3 file with a (!) in the name, and I got into trouble with it. I just want to insert a JavaScript conditional before the file is uploaded to evaluate whether the user's filename contains a (!) (or any other character I'd like to add later) and ask for the file to be renamed if it does.
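    Detection (rather than replacement) only needs RegExp.test() with a character class listing everything considered safe; anything outside it triggers the alert. A small sketch along those lines — the whitelist and the #mp3file selector are placeholders to adapt to the actual form:

        // Hypothetical whitelist: letters, digits, underscore, hyphen, dot, space.
        // Anything else (!, @, accented characters, ...) is flagged, not replaced.
        var unsafe = /[^A-Za-z0-9._\- ]/;
        $('#mp3file').change(function () {
            var filename = $(this).val().split(/[\/\\]/).pop(); // drop any path part
            if (unsafe.test(filename)) {
                alert('Please rename the file: its name contains characters that are not allowed.');
            }
        });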

    Read the article

  • Deploying website content via Subversion

    - by Johann
    We have recently set up a new development infrastructure and process for one of our clients. This involves the strict use of Subversion as a central source code repository. The svn repositories contain a separate branch for code on the live system (/branches/live/). The repositories are used for PHP content (mainly WordPress blogs), but in future they may hold other ASP code as well. Bonus points for a solution that works in more or less the same way with ASP code on Windows Server 2008 R2.

    We have two servers: one staging system and one live system. The staging system is updated regularly with the code of the trunk. The live system is updated manually. Each webroot on the servers is a working copy of either the trunk (staging system) or the live branch (live system). The current workflow is: developing on the dev's box - commit into the trunk - auto-deploy on staging system - testing on the staging system - merging into /branches/live/ - manual deployment on live system.

    This works very well for one-way changes; however, we have trouble with every WordPress (or plugin) update: the WP update process removes the directories and unpacks the archive of the new version. This removes the svn admin area as well, which produces a lot of errors. We could switch to SVN 1.7 with a single, global admin area, but this would only solve one part of the problem. In the end, we did the update via the WP GUI, restored the svn admin area, added/removed the files and committed the changes to the trunk. After testing, we had to do basically the same thing on the live server (except the commit; we just reverted the changes and merged the new files from the staging system to the live system).

    I'm currently thinking of the following: the htdocs of each website is an svn export; each website has an svn working copy beside the htdocs directory; and a script "replays" the changes in the wc from htdocs after an update in WP (rsync'ing the changed files to the working copy, rsync'ing new files and svn add'ing them, and finally svn delete'ing the deleted files), as sketched below. The script would have to exclude some files (like wp-config.php, uploads/temp directories, etc.). Are there better ways to do this? Unfortunately, a complete CI server is out of scope due to time and budget limitations.
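    The replay script from that last idea can be little more than an rsync pass plus svn's own status output. A rough sketch, assuming htdocs is a plain export with the working copy in a sibling wc directory — paths, excludes and the commit message are placeholders, and the .svn exclude matters so rsync --delete never touches the admin area:

        #!/bin/sh
        # Hypothetical replay: mirror a WordPress update from htdocs into the wc,
        # then let svn pick up the additions and deletions.
        HTDOCS=/var/www/site/htdocs
        WC=/var/www/site/wc
        rsync -a --delete --exclude '.svn' \
              --exclude 'wp-config.php' --exclude 'wp-content/uploads/' \
              "$HTDOCS/" "$WC/"
        cd "$WC" || exit 1
        svn status | awk '/^\?/ {print $2}' | xargs -r svn add     # new files
        svn status | awk '/^!/ {print $2}'  | xargs -r svn delete  # removed files
        svn commit -m "Replay WordPress update from htdocs"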

    Read the article

  • ISA 2004 - banned site rule causes slow internet

    - by Holian
    Hi Gods, we have Windows Server 2003 with ISA 2004. Our clients use the internet through the proxy. We have two ISA rules:

        order  name    action  protocols     from/listener                  to             condition
        1.     trafic  ALLOW   all outbound  all networks                   all networks   all users
        2.     FTP     ALLOW   FTP Server    EXTERNAL/INTERNAL/Local host   10.1.1.1

    We have to "ban" a few webpages (like facebook, youtube, etc.), so we made a new rule:

        0.     banned  DENY    HTTP          internal                       denied pages   all users

    In the denied pages we have the *.facebook.com domain set. After we enable this rule, the entire internet slows down. The banning rule itself works well and redirects to an internal site, but the other sites... If I open a page it normally takes 3-10 seconds to load; with this rule enabled it takes 2-4 minutes.

    In the monitor / logging menu we get a few FAILED CONNECTION ATTEMPT entries like:

        Log type: Web Proxy (Forward)
        Status: 304 Not Modified
        Rule: All local traffic
        Source: Internal ( 10.1.1.1:0 )
        Destination: External ( 172.24.28.22:3128 )
        Request: GET http://www.konyvelozona.hu/wp-content/uploads/nyugdijas-holgy-2.jpg
        Filter information: Req ID: 17270b72
        Protocol: http
        User: anonymous
        Additional information
        Client agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.3072...
        Object source: Verified Cache
        Processing time: 9047
        Cache info: 0x18801002
        MIME type: -

    In the event log we get a few entries like:

        Description: The Web Proxy filter failed to bind its socket to 10.1.1.1 port 80. This may have been caused by another service that is already using the same port or by a network adapter that is not functional. To resolve this issue, restart the Microsoft Firewall service. The error code specified in the data area of the event properties indicates the cause of the failure. The failure is due to error: 0x8007271d

        The Web Proxy filter failed to bind its socket to 127.0.0.1 port 80. This may have been caused by another service that is already using the same port or by a network adapter that is not functional. To resolve this issue, restart the Microsoft Firewall service. The error code specified in the data area of the event properties indicates the cause of the failure. The failure is due to error: 0x8007271d

    If I type netstat -o -n -a | findstr 0.0:80 then I get:

        tcp  0.0.0.0:80    0.0.0.0:0  LISTEN  4
        udp  0.0.0.0:8031  *.*        2780
        udp  0.0.0.0:8082  *.*        2780

    Some months ago we installed XAMPP, but now we only use MySQL and the Apache service is stopped. However, in the XAMPP port check menu I see: Service Apache (http), Port 80, Status "Process: System". Maybe this is the problem? I don't know what I should do now... Thank you folks.

    Read the article

  • nginx timeout despite ridiculous configuration

    - by Joa Ebert
    The scenario is an API server that should handle uploads. Posting to my.host.com/api/upload should do something with the body the client sends. However, the API server has been designed to block the whole request until it has fully processed the file, including some analysis which can take up to approx. 5 minutes (...!). This has to change, of course. In the meantime I wanted to set up nginx as a load balancer in front of the API servers. I quickly ran into a timeout issue, consulted Google, and came up with this ridiculous test configuration:

        user www-data;
        worker_processes 4;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;
        events {
            worker_connections 1024;
        }
        http {
            include /etc/nginx/mime.types;
            access_log off;
            sendfile on;
            send_timeout 3600;
            keepalive_timeout 3600 120;
            tcp_nopush on;
            tcp_nodelay on;
            gzip off;
            client_header_timeout 3600;
            client_body_timeout 3600;
            proxy_send_timeout 3600;
            proxy_read_timeout 3600;
            proxy_connect_timeout 1800;
            proxy_next_upstream error;
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    And:

        upstream test {
            server host1;
            server host2;
        }
        server {
            listen 80;
            server_name my.host.com;
            client_max_body_size 10m;
            location /api/ {
                proxy_pass http://test;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $host;
                proxy_redirect off;
            }
        }

    Still, when an upload happens, I get the following result in the error.log:

        2010/12/22 13:36:42 [error] 5256#0: *187359 upstream timed out (110: Connection timed out) while reading response header from upstream, client: xx.xx.xx.xx, server: my.host.com, request: "POST /api/upload HTTP/1.1", upstream: "http://apiserver:80/upload", host: "my.host.com"

    What else could I do? If I look at the log of the API server I still see that it is processing the request and analyzing the file. But I think 3600 seconds as a timeout should be more than enough. This happens even after a couple of seconds. And I did a reload and force-reload of the configuration as well, of course.
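    One thing worth ruling out before anything else: directives set at the http {} level can be overridden by anything the two include lines pull in, so it is safer to restate the proxy timeouts (and disable response buffering for this slow endpoint) directly in the location that fronts the API. A sketch of that, offered as a thing to try rather than a guaranteed fix:

        location /api/ {
            proxy_pass          http://test;
            proxy_read_timeout  3600;  # restated here so no included file overrides it
            proxy_send_timeout  3600;
            proxy_buffering     off;   # stream the long-running response
            proxy_set_header    X-Real-IP $remote_addr;
            proxy_set_header    Host $host;
        }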

    Read the article

  • Nginx + Wordpress Multisite 3.4.2 + subdirectories + static pages and permalinks

    - by UrkoM
    I am trying to setup Wordpress Multisite, using subdirectories, with Nginx, php5-fpm, APC, and Batcache. As many other people, I am getting stuck in the rewrite rules for permalinks. I have followed these two guides, which seem to be as official as you can get:

    http://evansolomon.me/notes/faster-wordpress-multisite-nginx-batcache/
    http://codex.wordpress.org/Nginx#WordPress_Multisite_Subdirectory_rules

    It is partially working: http://blog.ssis.edu.vn works, and http://blog.ssis.edu.vn/umasse/ works. But other permalinks, like these two to a post or to a static page, don't work:

    http://blog.ssis.edu.vn/umasse/2008/12/12/hello-world-2/
    http://blog.ssis.edu.vn/umasse/sample-page/

    They either take you to a 404 error, or to some other blog! Here is my configuration:

        server {
            listen 80 default_server;
            server_name blog.ssis.edu.vn;
            root /var/www;
            access_log /var/log/nginx/blog-access.log;
            error_log /var/log/nginx/blog-error.log;
            location / {
                index index.php;
                try_files $uri $uri/ /index.php?$args;
            }
            # Add trailing slash to */wp-admin requests.
            rewrite /wp-admin$ $scheme://$host$uri/ permanent;
            # Add trailing slash to */username requests
            rewrite ^/[_0-9a-zA-Z-]+$ $scheme://$host$uri/ permanent;
            # Directives to send expires headers and turn off 404 error logging.
            location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
                expires 24h;
                log_not_found off;
            }
            # this prevents hidden files (beginning with a period) from being served
            location ~ /\. {
                access_log off;
                log_not_found off;
                deny all;
            }
            # Pass uploaded files to wp-includes/ms-files.php.
            rewrite /files/$ /index.php last;
            if ($uri !~ wp-content/plugins) {
                rewrite /files/(.+)$ /wp-includes/ms-files.php?file=$1 last;
            }
            # Rewrite multisite '.../wp-.*' and '.../*.php'.
            if (!-e $request_filename) {
                rewrite ^/[_0-9a-zA-Z-]+(/wp-.*) $1 last;
                rewrite ^/[_0-9a-zA-Z-]+.*(/wp-admin/.*\.php)$ $1 last;
                rewrite ^/[_0-9a-zA-Z-]+(/.*\.php)$ $1 last;
            }
            location ~ \.php$ {
                # Forbid PHP on upload dirs
                if ($uri ~ "uploads") {
                    return 403;
                }
                client_max_body_size 25M;
                try_files $uri =404;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include /etc/nginx/fastcgi_params;
            }
        }

    Any ideas are welcome! Have I done something wrong? I have disabled Batcache to see if it makes any difference, but still no go.
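    When multisite rewrites half-work like this, nginx's own rewrite_log is the quickest way to see exactly which rule each failing permalink hits. It needs only two directives — the error log must be at notice level for the rewrite lines to appear:

        server {
            error_log /var/log/nginx/blog-error.log notice;
            rewrite_log on;  # logs every rewrite decision to the error log
            # ... rest of the configuration above ...
        }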

    Read the article

  • Suspected network performance issue on VirtualBox Ubuntu guest on Win7 host

    - by Adam
    I set up Ubuntu 12.04 in VirtualBox on the Win7 machine I was allocated on my new project. I am running Java, Eclipse, and Tomcat to develop a large data-intensive application, and I noticed that this application runs at half the speed of my colleague's identical machine, where he runs it all under Windows. I think I have narrowed down the performance issue to the network, after comparing and equalising all the Java VM settings with my colleague. Is there a ping test I can do, or some other network diagnostic test, to flag up any problems?

    To give some background, the network performance is confusing. Running a network speed test to my colleague's machine with iperf shows speeds of 6 Mb/s from my Ubuntu guest, and 90 Mb/s from the Win7 host. Large downloads, e.g. the Java SDK, come down at about 1.2 MB/s on both the guest and the host. Pings are sub-1ms on the host, but 1.5ms on the guest. I also did a broadband speed test, and got 10Mb/s download speed on both, but the host has an upload speed of 10Mb/s while the guest only uploads at 3Mb/s.

    I've been trying to diagnose any MTU problems with ping -M do to identify any kind of packet fragmentation problem, but progress is very slow because I don't have much experience in this area. From what I read of other people's networking issues with VB and Linux guests on Win7 hosts, I should be able to get the speed on the guest up to the same level as the host. I installed a fresh VM with Ubuntu again to see if I'd foobar'd it somehow, but I'm getting the same readings with iperf on the virgin installation. My setup is:

        Adapter 1: Intel PRO/1000 MT Desktop (NAT)
        Adapter 2: ditto (host-only adapter)

        eth0  Link encap:Ethernet  HWaddr 08:00:27:0b:76:bf
              inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
              inet6 addr: fe80::a00:27ff:fe0b:76bf/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:86236 errors:0 dropped:0 overruns:0 frame:0
              TX packets:49369 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:69163946 (69.1 MB)  TX bytes:3530535 (3.5 MB)

        eth2  Link encap:Ethernet  HWaddr 08:00:27:a3:26:b8
              inet addr:192.168.56.101  Bcast:192.168.56.255  Mask:255.255.255.0
              inet6 addr: fe80::a00:27ff:fea3:26b8/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:59 errors:0 dropped:0 overruns:0 frame:0
              TX packets:57 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:9148 (9.1 KB)  TX bytes:7648 (7.6 KB)

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:701 errors:0 dropped:0 overruns:0 frame:0
              TX packets:701 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:66321 (66.3 KB)  TX bytes:66321 (66.3 KB)
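    For the MTU angle specifically, a quick test is to ping with fragmentation forbidden at a full-size payload: 1472 data bytes plus 28 bytes of ICMP/IP headers equals the usual 1500-byte Ethernet MTU. A sketch of the checks from the guest, with colleague-host standing in for the other machine:

        # Does a full-size, unfragmentable packet get through? (1472 + 28 = 1500)
        ping -M do -s 1472 -c 4 colleague-host
        # If that fails, step the size down until it succeeds to find the real MTU
        ping -M do -s 1400 -c 4 colleague-host
        # Re-run throughput in both directions for comparison (iperf2 syntax):
        iperf -s                          # on the colleague's machine
        iperf -c colleague-host -t 30 -r  # on the guest; -r also tests the reverse path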

    Read the article

  • nginx + php fpm -> 404 php pages - file not found

    - by Mahesh
    The error in the log is:

        *2037 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream

    My configuration:

        server {
            listen 80; ## listen for ipv4; this line is default and implied
            #listen [::]:80 default ipv6only=on; ## listen for ipv6
            server_name .site.com;
            root /var/www/site;
            error_page 404 /404.php;
            access_log /var/log/nginx/site.access.log;
            index index.html index.php;
            if ($http_host != "www.site.com") {
                rewrite ^ http://www.site.com$request_uri permanent;
            }
            location ~* \.php$ {
                fastcgi_index index.php;
                fastcgi_pass 127.0.0.1:9000;
                #fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_buffer_size 128k;
                fastcgi_buffers 256 4k;
                fastcgi_busy_buffers_size 256k;
                fastcgi_temp_file_write_size 256k;
                fastcgi_read_timeout 240;
                include /etc/nginx/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param SCRIPT_NAME $fastcgi_script_name;
            }
            location ~ /\. {
                access_log off;
                log_not_found off;
                deny all;
            }
            location ~ /(libraries|setup/frames|setup/libs) {
                deny all;
                return 404;
            }
            location ~ ^/uploads/(\d+)/(\d+)/(\d+)/(\d+)/(.*)$ {
                alias /var/www/site/images/missing.gif;
                # I need to modify this to show only missing files; right now it shows "missing" for all files.
            }
            location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
                access_log off;
                expires 20d;
            }
            location /user_uploads/ {
                location ~ .*\.(php)?$ {
                    deny all;
                }
            }
            location ~ /\.ht {
                deny all;
            }
        }

    The php-fpm config is default and untouched. The problem is a little strange to me: error pages show "File not found" only when the request is for a .php file. Other missing files correctly call the 404.php page: site.com/test calls 404.php, but site.com/test.php gives "File not found". I keep searching and making changes, but it hasn't solved the problem.
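    The "File not found" text is PHP-FPM's own 404 body being relayed verbatim; nginx only substitutes your error_page for upstream responses when fastcgi_intercept_errors is on, and a request for a missing .php file currently reaches FPM at all because nothing checks the file first. A sketch of the two-line change to the PHP location, assuming /404.php itself exists on disk:

        location ~* \.php$ {
            try_files $uri =404;          # missing .php files 404 inside nginx...
            fastcgi_intercept_errors on;  # ...and upstream errors honour error_page
            fastcgi_pass 127.0.0.1:9000;
            # ...remaining fastcgi_* settings as above...
        }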

    Read the article

  • Help needed setting up nginx to serve static files.

    - by Catalina
    Hi Guys, I'm trying to setup nginx to serve static files. Basically all I need is to have http://mydomain.com/site_media/ point to /var/django/myproject/site_media. I have tried so many configurations and when I test it I always get a 404 error for static files. Can anyone please tell me what I'm doing wrong or how I should be setting this up? This is my current nginx configuration file:

        user www-data;
        worker_processes 1;
        #error_log /usr/local/nginx/logs/error.log;
        #pid /usr/local/nginx/logs/nginx.pid;
        events {
            worker_connections 1024;
            use epoll;
        }
        http {
            # Enumerate all the Tornado servers here
            upstream frontends {
                server 127.0.0.1:8000;
                server 127.0.0.1:8001;
                server 127.0.0.1:8002;
                server 127.0.0.1:8003;
            }
            include mime.types;
            default_type application/octet-stream;
            #access_log /usr/local/nginx/logs/access.log;
            keepalive_timeout 65;
            proxy_read_timeout 200;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            gzip on;
            gzip_min_length 1000;
            gzip_proxied any;
            gzip_types text/plain text/html text/css text/xml application/x-javascript application/xml application/atom+xml text/javascript;
            proxy_next_upstream error;
            server {
                listen 80;
                # Allow file uploads
                client_max_body_size 50M;
                location ^~ /site_media/ {
                    root /var/django/myproject/site_media;
                    if ($query_string) {
                        expires max;
                    }
                }
                location = /favicon.ico {
                    rewrite (.*) /site_media/favicon.ico;
                }
                location = /robots.txt {
                    rewrite (.*) /site_media/robots.txt;
                }
                location / {
                    proxy_pass_header Server;
                    proxy_set_header Host $http_host;
                    proxy_redirect off;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Scheme $scheme;
                    proxy_pass http://frontends;
                }
            }
            #include /usr/local/nginx/sites-enabled/*;
        }

    Thanks, Cata
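    The 404s look like the classic root-versus-alias mix-up: with root, nginx appends the full request URI to the path, so /site_media/foo.css is looked up at /var/django/myproject/site_media/site_media/foo.css. Either point root one level higher or switch to alias — a sketch of both (use one, not both):

        # Option 1: root is the parent; nginx appends /site_media/foo.css to it
        location ^~ /site_media/ {
            root /var/django/myproject;
        }
        # Option 2: alias replaces the matched prefix outright
        location ^~ /site_media/ {
            alias /var/django/myproject/site_media/;
        }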

    Read the article

  • turn off / disable the performance cache

    - by jessie
    OK, I run a streaming website and my CMS is giving me an error when uploading videos: "Failed To Find Flength File". I did some research, and the answer I got from the coder is below. I did all of that, but the one thing I could not do is turn off what he refers to as the performance cache, discussed in the last paragraph. I am on CentOS.

    "Assuming the script is set up properly, you are probably dealing with some kind of write-caching. Some servers perform write-caching which prevents writing out the flength file or the entire CGITemp file during the upload. The flength file or the CGITemp file do not actually hit the disk until the upload is complete, making it worthless for reporting on progress during the upload. This may be fixed using a .htaccess file, assuming your host supports them. Here is a link to an excellent tutorial on using .htaccess files. I strongly recommend giving it a quick read before attempting to install your own .htaccess file.

    1. A mod_security module for Apache. To fix it just create a file called .htaccess (that's a period followed by "htaccess") and put the following lines in that file. Upload the file into the directory where the Uber-Uploader CGI ".pl" scripts reside, or in some directory above it (like your server's DOCUMENT_ROOT, i.e. the top level of your webspace). htaccess files must be uploaded as ASCII mode, not BINARY. You may need to CHMOD the htaccess file to 644 (RW-R--R--).

        # Turn off mod_security filtering.
        SecFilterEngine Off
        # The below probably isn't needed,
        # but better safe than sorry.
        SecFilterScanPOST Off

    If the above method does not work, try putting the following lines into the file:

        SetEnvIfNoCase Content-Type \
        "^multipart/form-data;" "MODSEC_NOPOSTBUFFERING=Do not buffer file uploads"
        mod_gzip_on No

    2. "Performance Cache" enabled on OS X SERVER. If you're running OS X Server and the progress bar isn't working, it could be because of "performance caching." Apparently if ANY of your hosted sites are using performance caching, then by default all sites (domains) will attempt to. The fix then is to disable the performance cache on all hosted sites."

    Read the article

  • Nginx Rewrite Rule For File Within Folder Not Working

    - by user3620111
    Good evening everyone, or possibly early morning if you are in my neck of the woods. My problem seems trivial, but after several hours of testing, researching and fiddling I can't seem to get this simple nginx rewrite to work. There are several rewrites we need, some with multiple parameters, but I can't even get this simple one-parameter URL to change to the desired form.

        Current: website.com/public/viewpost.php?id=post-title
        Desired: website.com/public/post/post-title

    Can someone kindly point out what I have done wrong? I am baffled / very tired... For testing purposes before we launch we are just using a simple port on the server. Here is that section:

        # Listen on port 7774 for dev test
        server {
            listen 7774;
            server_name localhost;
            root /usr/share/nginx/html/paa;
            index index.php home.php index.html index.htm /public/index.php;
            location ~* /uploads/.*\.php$ {
                if ($request_uri ~* (^\/|\.jpg|\.png|\.gif)$ ) {
                    break;
                }
                return 444;
            }
            location ~ \.php$ {
                try_files $uri @rewrite =404;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_pass php5-fpm-sock;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_intercept_errors on;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
            location @rewrite {
                rewrite ^/viewpost.php$ /post/$arg_id? permanent;
            }
        }

    I have tried countless variations, such as the @rewrite above and simpler:

        location / {
            rewrite ^/post/(.*)$ /viewpost.php?id=$1 last;
        }
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_pass php5-fpm-sock;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_intercept_errors on;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

    I cannot seem to get anything to work at all; I have tried changing the location and tried multiple rules... Please tell me what I have done wrong. Pause for facepalm. [relocated from Stack Overflow as per mod suggestion]
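    For the single-parameter case the rewrite needs to run in the other direction — map the pretty URL onto the real script before the .php location fires. Note also that try_files $uri @rewrite =404 is not valid: only the last argument of try_files may be a named location or an =code. A minimal sketch of the forward mapping:

        # /public/post/some-title  ->  /public/viewpost.php?id=some-title (internal)
        location ^~ /public/post/ {
            rewrite ^/public/post/(.+)$ /public/viewpost.php?id=$1 last;
        }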

    Read the article

  • $_POST data returns empty when request is > POST_MAX_SIZE

    - by Jared
    Hi Hopefully someone here might have an answer to my question. I have a basic form that contains simple fields, like name, number, email address etc and 1 file upload field. I am trying to add some validation into my script that detects if the file is too large and then rejects the user back to the form to select/upload a smaller file. My problem is, if a user selects a file that is bigger than my validation file size rule and larger than php.ini POST_MAX_SIZE/UPLOAD_MAX_FILESIZE and pushes submit, then PHP seems to try process the form only to fail on the POST_MAX_SIZE settings and then clears the entire $_POST array and returns nothing back to the form. Is there a way around this? Surely if someone uploads something than the max size configured in the php.ini then you can still get the rest of the $_POST data??? Here is my code. <?php function validEmail($email) { $isValid = true; $atIndex = strrpos($email, "@"); if (is_bool($atIndex) && !$atIndex) { $isValid = false; } else { $domain = substr($email, $atIndex+1); $local = substr($email, 0, $atIndex); $localLen = strlen($local); $domainLen = strlen($domain); if ($localLen < 1 || $localLen > 64) { // local part length exceeded $isValid = false; } else if ($domainLen < 1 || $domainLen > 255) { // domain part length exceeded $isValid = false; } else if ($local[0] == '.' || $local[$localLen-1] == '.') { // local part starts or ends with '.' $isValid = false; } else if (preg_match('/\\.\\./', $local)) { // local part has two consecutive dots $isValid = false; } else if (!preg_match('/^[A-Za-z0-9\\-\\.]+$/', $domain)) { // character not valid in domain part $isValid = false; } else if (preg_match('/\\.\\./', $domain)) { // domain part has two consecutive dots $isValid = false; } else if (!preg_match('/^(\\\\.|[A-Za-z0-9!#%&`_=\\/$\'*+?^{}|~.-])+$/', str_replace("\\\\","",$local))) { // character not valid in local part unless // local part is quoted if (!preg_match('/^"(\\\\"|[^"])+"$/', str_replace("\\\\","",$local))) { $isValid = false; } } } return $isValid; } //setup post variables @$name = htmlspecialchars(trim($_REQUEST['name'])); @$emailCheck = htmlspecialchars(trim($_REQUEST['email'])); @$organisation = htmlspecialchars(trim($_REQUEST['organisation'])); @$title = htmlspecialchars(trim($_REQUEST['title'])); @$phone = htmlspecialchars(trim($_REQUEST['phone'])); @$location = htmlspecialchars(trim($_REQUEST['location'])); @$description = htmlspecialchars(trim($_REQUEST['description'])); @$fileError = 0; @$phoneError = ""; //setup file upload handler $target_path = 'uploads/'; $filename = basename( @$_FILES['uploadedfile']['name']); $max_size = 8000000; // maximum file size (8mb in bytes) NB: php.ini max filesize upload is 10MB on test environment. $allowed_filetypes = Array(".pdf", ".doc", ".zip", ".txt", ".xls", ".docx", ".csv", ".rtf"); //put extensions in here that should be uploaded only. $ext = substr($filename, strpos($filename,'.'), strlen($filename)-1); // Get the extension from the filename. if(!is_writable($target_path)) die('You cannot upload to the specified directory, please CHMOD it to 777.'); //Check if we can upload to the specified upload folder. //display form function function displayForm($name, $emailCheck, $organisation, $phone, $title, $location, $description, $phoneError, $allowed_filetypes, $ext, $filename, $fileError) { //make $emailCheck global so function can get value from global scope. 
global $emailCheck; global $max_size; echo '<form action="geodetic_form.php" method="post" name="contact" id="contact" enctype="multipart/form-data">'."\n". '<fieldset>'."\n".'<div>'."\n"; //name echo '<label for="name"><span class="mandatory">*</span>Your name:</label>'."\n". '<input type="text" name="name" id="name" class="inputText required" value="'. $name .'" />'."\n"; //check if name field is filled out if (isset($_REQUEST['submit']) && empty($name)) { echo '<label for="name" class="error">Please enter your name.</label>'."\n"; } echo '</div>'."\n". '<div>'."\n"; //Email echo '<label for="email"><span class="mandatory">*</span>Your email:</label>'."\n". '<input type="text" name="email" id="email" class="inputText required email" value="'. $emailCheck .'" />'."\n"; // check if email field is filled out and proper format if (isset($_REQUEST['submit']) && validEmail($emailCheck) == false) { echo '<label for="email" class="error">Invalid email address entered.</label>'."\n"; } echo '</div>'."\n". '<div>'."\n"; //organisation echo '<label for="phone">Organisation:</label>'."\n". '<input type="text" name="organisation" id="organisation" class="inputText" value="'. $organisation .'" />'."\n"; echo '</div>'."\n". '</fieldset>'."\n".'<fieldset>'. "\n" . '<div>'."\n"; //title echo '<label for="phone">Title:</label>'."\n". '<input type="text" name="title" id="title" class="inputText" value="'. $title .'" />'."\n"; echo '</div>'."\n". '</fieldset>'."\n".'<fieldset>'. "\n" . '<div>'."\n"; //phone echo '<label for="phone"><span class="mandatory">*</span>Phone <br /><span class="small">(include area code)</span>:</label>'."\n". '<input type="text" name="phone" id="phone" class="inputText required" value="'. $phone .'" />'."\n"; // check if phone field is filled out that it has numbers and not characters if (isset($_REQUEST['submit']) && $phoneError == "true" && empty($phone)) echo '<label for="email" class="error">Please enter a valid phone number.</label>'."\n"; echo '</div>'."\n". '</fieldset>'."\n".'<fieldset>'. "\n" . '<div>'."\n"; //Location echo '<label class="location" for="location"><span class="mandatory">*</span>Location:</label>'."\n". '<textarea name="location" id="location" class="required">'. $location .'</textarea>'."\n"; //check if message field is filled out if (isset($_REQUEST['submit']) && empty($_REQUEST['location'])) echo '<label for="location" class="error">This field is required.</label>'."\n"; echo '</div>'."\n". '</fieldset>'."\n".'<fieldset>'. "\n" . '<div>'."\n"; //description echo '<label class="description" for="description">Description:</label>'."\n". '<textarea name="description" id="queryComments">'. $description .'</textarea>'."\n"; echo '</div>'."\n". '</fieldset>'."\n".'<fieldset>'. "\n" . '<div>'."\n"; //file upload echo '<label class="uploadedfile" for="uploadedfile">File:</label>'."\n". '<input type="file" name="uploadedfile" id="uploadedfile" value="'. $filename .'" />'."\n"; // Check if the filetype is allowed, if not DIE and inform the user. switch ($fileError) { case "1": echo '<label for="uploadedfile" class="error">The file you attempted to upload is not allowed.</label>'; break; case "2": echo '<label for="uploadedfile" class="error">The file you attempted to upload is too large.</label>'; break; } echo '</div>'."\n". '</fieldset>'; //end of form echo '<div class="submit"><input type="submit" name="submit" value="Submit" id="submit" /></div>'. 
'<div class="clear"><p><br /></p></div>'; } //end function //setup error validations if (isset($_REQUEST['submit']) && !empty($_REQUEST['phone']) && !is_numeric($_REQUEST['phone'])) $phoneError = "true"; if (isset($_REQUEST['submit']) && $_FILES['uploadedfile']['error'] != 4 && !in_array($ext, $allowed_filetypes)) $fileError = 1; if (isset($_REQUEST['submit']) && $_FILES["uploadedfile"]["size"] > $max_size) $fileError = 2; echo "this condition " . $fileError; $POST_MAX_SIZE = ini_get('post_max_size'); $mul = substr($POST_MAX_SIZE, -1); $mul = ($mul == 'M' ? 1048576 : ($mul == 'K' ? 1024 : ($mul == 'G' ? 1073741824 : 1))); if ($_SERVER['CONTENT_LENGTH'] > $mul*(int)$POST_MAX_SIZE && $POST_MAX_SIZE) echo "too big!!"; echo $POST_MAX_SIZE; if(empty($name) || empty($phone) || empty($location) || validEmail($emailCheck) == false || $phoneError == "true" || $fileError != 0) { displayForm($name, $emailCheck, $organisation, $phone, $title, $location, $description, $phoneError, $allowed_filetypes, $ext, $filename, $fileError); echo $fileError; echo "max size is: " .$max_size; echo "and file size is: " . $_FILES["uploadedfile"]["size"]; exit; } else { //copy file from temp to upload directory $path_of_uploaded_file = $target_path . $filename; $tmp_path = $_FILES["uploadedfile"]["tmp_name"]; echo $tmp_path; echo "and file size is: " . filesize($_FILES["uploadedfile"]["tmp_name"]); exit; if(is_uploaded_file($tmp_path)) { if(!copy($tmp_path,$path_of_uploaded_file)) { echo 'error while copying the uploaded file'; } } //test debug stuff echo "sending email..."; exit; } ?> PHP is returning this error in the log: [29-Apr-2010 10:32:47] PHP Warning: POST Content-Length of 57885895 bytes exceeds the limit of 10485760 bytes in Unknown on line 0 Excuse all the debug stuff :) FTR, I am running PHP 5.1.2 on IIS. TIA Jared

    Read the article

  • XNA Notes 007

    - by George Clingerman
    Every week I keep wondering if there’s going to be enough activity in the community to keep doing these notes on a weekly basis and every week I’m reminded of just how awesome and active the XNA community is. There’s engines being made, tutorials being created, games being crafted. There’s information being shared, questions being answered and then there’s another whole community around the Xbox LIVE Indie Games themselves. It’s really incredibly to just watch all that’s going on and I’m glad I’m playing a small part in all of this. So here’s what I noticed happening in the XNA community last week. If there’s things I’m missing, always feel free to let me know. I love learning about new corners of the XNA community that I wasn’t aware of or just have been missing! XNA Developers: Uditha Bandara held an XNA Game Development Workshops at Singapore Universities http://uditha.wordpress.com/2011/02/18/xna-game-development-workshops-at-singapore-universities-event-update/ Binary Tweed gives his talks about Indie City and gives his opinion on the false promise of digital distribution http://www.develop-online.net/news/37053/OPINION-The-false-promise-of-digital-distribution Kris Steele posts his Trivia or Die postmortem http://www.krissteele.net/blogdetails.aspx?id=246 @MadNinjaSkills (James Johnston) posts his feelings on testing for XBLIG http://www.ezmuze.co.uk/101 Simon (@DDReaper) posts hints and tips for XNA developers to help get the size of their projects down http://twitter.com/#!/DDReaper/status/38279440924545024 http://xna-uk.net/blogs/darkgenesis/archive/2011/02/17/look-at-the-size-of-that-thing.aspx Michael B. McLaughlin proving why he should be an XNA MVP posts the list of commonly used value types in XNA games http://geekswithblogs.net/mikebmcl/archive/2011/02/17/list-of-commonly-used-value-types-in-xna-games.aspx http://twitter.com/#!/mikebmcl/status/38166541354811392 Paul Powell (@ITSligoPaul) posts about a common sprite batch as a game service http://itspaulsblog.blogspot.com/2011/02/xna-common-sprite-batch-as-game-service.html @SigilXNA (John Defenbaugh) posts his new level editor video for the sequel to Opac’s Journey http://twitter.com/SigilXNA/statuses/36548174373982209 http://twitter.com/#!/SigilXNA/status/36548174373982209 http://youtu.be/QHbmxB_2AW8 @jwatte updates kW Animation for XNA 4.0 http://www.enchantedage.com/xna-animation @DSebJ posts Blender to SunBurn http://twitter.com/#!/DSebJ/status/36564920224976896 http://dsebj.evolvingsoftware.com/?p=187 Ads and WP7 Games - @mechaghost shares his revenue data for his ad based games http://www.occasionalgamer.com/2011/02/09/ads-and-wp7-games/ Xbox LIVE Indie Games (XBLIG): Steven Hurdle posts day 100 of his quest to find a fantastic XBLIG purchase every day http://writingsofmassdeduction.com/2011/02/17/day-100-radiangames-ballistic/ Xbox 360 Indie Game Buying Guide - 12 games for $60 including several Xbox LIVE Indie games! (although if the XNA community was asked we could have recommended 60 games for $60...) http://www.indiegamemag.com/xbox360-indie-games-buying-guide/ The best selling Xbox LIVE Indie games of 2010 http://www.1up.com/news/xbox-live-most-popular-games I’d buy that for a dollar! - the California Literary Review points out a few gems on the XBLIG marketplace (and other places) where you can game on the cheap. 
http://calitreview.com/14125 Armless Octopus Episode 39 - The Indie Gem Octocast http://www.armlessoctopus.com/2011/02/17/armless-octocast-episode-39-the-indie-gem-octocast/ Ska Studios posts a plethora of updates http://www.ska-studios.com/2011/02/11/good-morning-gato-49/ http://www.ska-studios.com/2011/02/14/vampire-smile-valentines/ http://www.ska-studios.com/2011/02/16/the-dishwasher-vs-finds-a-home/ Kotaku posts about the Xbox LIVE Indie Game that makes you go Pew Pew Pew Pew Pew Pew http://kotaku.com/#!5760632/the-game-that-makes-you-go-pew-pew-pew-pew-pew-pew-pew GameMarx continues to be active and doing a ton for the XBLIG community reviews and Top 5 indie games of the week 2/4-2/10 http://www.gamemarx.com/video/the-show/22/ep-9-february-11-2010.aspx a new podcast Xbox Indie New Releases http://twitter.com/#!/gamemarx/status/36888849107910656 http://www.gamemarx.com/news/2011/02/13/a-new-podcast-xbox-indie-new-releases.aspx @MasterBlud uploads Indocalypse XBLIG Collections #2 http://www.youtube.com/watch?v=uzCZSv075mc&feature=youtu.be&a http://twitter.com/#!/MasterBlud/status/37100029697064960 Just Press Start interviews Michael Hicks from MichaelArts, 18 year old creator of Honor in Vengeance http://justpressstart.net/?p=465 Achievement Locked interviews Kris Steele of FunInfused Games http://xboxindies.wordpress.com/2011/02/11/interview-fun-infused-games/ XNA Game Development: XNA -UK launches their XAP test service to help the XNA community http://xna-uk.net/blogs/news/archive/2011/02/18/xna-uk-xap-test-service-now-live.aspx Transmute shows off a video of the standard character editor http://www.youtube.com/watch?v=qqH6gErG948&feature=youtu.be Microsoft Tech Student introduces their first tech student of the month.  Meet Daniel Van Tassel from the University of Utah and learn how he created an Xbox LIVE Indie Game using XNA Studio http://blogs.msdn.com/b/techstudent/archive/2010/12/22/introducing-our-first-tech-student-of-the-month-daniel-van-tassel.aspx XNA for Silverlight Developers Part 3 - Animation (transforms) http://www.silverlightshow.net/items/XNA-for-Silverlight-developers-Part-3-Animation-transforms.aspx XNA for Silverlight Developers Part 4 - Animation (frame based) http://www.silverlightshow.net/items/XNA-for-Silverlight-developers-Part-4-Animation-frame-based.aspx @suhinini tweets about an XNA Sprite Font generation tool http://twitter.com/#!/suhinini/status/36841370131890176 http://www.nubik.com/SpriteFont/ XNATouch 1.5 is out and in it’s words is faster, simpler, more reliable and has the XNA 4.0 API http://monogame.codeplex.com/releases/view/60815 IndieCity is hosting marketing workshops for Indie Developers (UK and US) http://forums.create.msdn.com/forums/p/75197/457654.aspx#457654 New York Students - Learn XNA and Silverlight for Xbox 360 and Windows Phone 7 http://forums.create.msdn.com/forums/p/72753/456964.aspx#456964 http://blogs.msdn.com/b/andrewparsons/archive/2011/01/13/learn-to-build-your-own-games-for-xbox-360-and-windows-phone-7.aspx http://blogs.msdn.com/b/andrewparsons/archive/2011/01/13/build-a-game-in-48-hours-win-a-kinect-or-windows-phone-7.aspx Extra Credits: Videogame Music http://www.escapistmagazine.com/videos/view/extra-credits/2019-Videogame-Music Steve Pavlina posts an article with useful information for all XNA/XBLIG developers http://www.stevepavlina.com/blog/2011/02/completion-vs-perfection/

    Read the article

  • No, iCloud Isn’t Backing Them All Up: How to Manage Photos on Your iPhone or iPad

    - by Chris Hoffman
    Are the photos you take with your iPhone or iPad backed up in case you lose your device? If you’re just relying on iCloud to manage your important memories, your photos may not be backed up at all. Apple’s iCloud has a photo-syncing feature in the form of “Photo Stream,” but Photo Stream doesn’t actually perform any long-term backups of your photos. iCloud’s Photo Backup Limitations Assuming you’ve set up iCloud on your iPhone or iPad, your device is using a feature called “Photo Stream” to automatically upload the photos you take to your iCloud storage and sync them across your devices. Unfortunately, there are some big limitations here. 1000 Photos: Photo Stream only backs up the latest 1000 photos. Do you have 1500 photos in your Camera Roll folder on your phone? If so, only the latest 1000 photos are stored in your iCloud account online. If you don’t have those photos backed up elsewhere, you’ll lose them when you lose your phone. If you have 1000 photos and take one more, the oldest photo will be removed from your iCloud Photo Stream. 30 Days: Apple also states that photos in your Photo Stream will be automatically deleted after 30 days “to give your devices plenty of time to connect and download them.” Some people report photos aren’t deleted after 30 days, but it’s clear you shouldn’t rely on iCloud for more than 30 days of storage. iCloud Storage Limits: Apple only gives you 5 GB of iCloud storage space for free, and this is shared between backups, documents, and all other iCloud data. This 5 GB can fill up pretty quickly. If your iCloud storage is full and you haven’t purchased any more storage more from Apple, your photos aren’t being backed up. Videos Aren’t Included: Photo Stream doesn’t include videos, so any videos you take aren’t automatically backed up. It’s clear that iCloud’s Photo Stream isn’t designed as a long-term way to store your photos, just a convenient way to access recent photos on all your devices before you back them up for real. iCloud’s Photo Stream is Designed for Desktop Backups If you have a Mac, you can launch iPhoto and enable the Automatic Import option under Photo Stream in its preferences pane. Assuming your Mac is on and connected to the Internet, iPhoto will automatically download photos from your photo stream and make local backups of them on your hard drive. You’ll then have to back up your photos manually so you don’t lose them if your Mac’s hard drive ever fails. If you have a Windows PC, you can install the iCloud Control Panel, which will create a Photo Stream folder on your PC. Your photos will be automatically downloaded to this folder and stored in it. You’ll want to back up your photos so you don’t lose them if your PC’s hard drive ever fails. Photo Stream is clearly designed to be used along with a desktop application. Photo Stream temporarily backs up your photos to iCloud so iPhoto or iCloud Control Panel can download them to your Mac or PC and make a local backup before they’re deleted. You could also use iTunes to sync your photos from your device to your PC or Mac, but we don’t really recommend it — you should never have to use iTunes. How to Actually Back Up All Your Photos Online So Photo Stream is actually pretty inconvenient — or, at least, it’s just a way to temporarily sync photos between your devices without storing them long-term. But what if you actually want to automatically back up your photos online without them being deleted automatically? 
The solution here is a third-party app that does this for you, offering the automatic photo uploads with long-term storage. There are several good services with apps in the App Store: Dropbox: Dropbox’s Camera Upload feature allows you to automatically upload the photos — and videos — you take to your Dropbox account. They’ll be easily accessible anywhere there’s a Dropbox app and you can get much more free Dropbox storage than you can iCloud storage. Dropbox will never automatically delete your old photos. Google+: Google+ offers photo and video backups with its Auto Upload feature, too. Photos will be stored in your Google+ Photos — formerly Picasa Web Albums — and will be marked as private by default so no one else can view them. Full-size photos will count against your free 15 GB of Google account storage space, but you can also choose to upload an unlimited amount of photos at a smaller resolution. Flickr: The Flickr app is no longer a mess. Flickr offers an Auto Upload feature for uploading full-size photos you take and free Flickr accounts offer a massive 1 TB of storage for you to store your photos. The massive amount of free storage alone makes Flickr worth a look. Use any of these services and you’ll get an online, automatic photo backup solution you can rely on. You’ll get a good chunk of free space, your photos will never be automatically deleted, and you can easily access them from any device. You won’t have to worry about storing local copies of your photos and backing them up manually. Apple should fix this mess and offer a better solution for long-term photo backup, especially considering the limitations aren’t immediately obvious to users. Until they do, third-party apps are ready to step in and take their place. You can also automatically back up your photos to the web on Android with Google+’s Auto Upload or Dropbox’s Camera Upload. Image Credit: Simon Yeo on Flickr     

    Read the article

  • Using LogParser - part 2

    - by fatherjack
    PersonAddress.csv SalesOrderDetail.tsv

    In part 1 of this series we downloaded and installed LogParser and used it to list data from a csv file. That was a good start, and in this article we are going to see the different ways we can stream data and choose whether a whole file is selected. We are also going to take a brief look at what file types we can interrogate.

    If we take the query from part 1 and add a value for the output parameter as -o:datagrid, so that the query becomes

        LOGPARSER "SELECT top 15 * FROM C:\LP\person_address.csv" -o:datagrid

    and run that, we get a different result: a pop-up dialog that lets us view the results in a resizable grid. Notice that because we didn't specify the columns we wanted returned by LogParser (we used SELECT *) it has added two columns to the recordset - filename and rownumber. This behaviour can be very useful, as we will see in future parts of this series. You can click Next 10 rows or All rows, or close the datagrid once you are finished reviewing the data.

    You may have noticed that the files I am working with are different file types - one is a csv (comma separated values) and the other is a tsv (tab separated values). If you want to convert a file from one to another then LogParser makes it incredibly simple. Rather than using 'datagrid' as the value for the output parameter, use 'csv':

        logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\Sales_SalesOrderDetail.csv FROM C:\Sales_SalesOrderDetail.tsv" -i:tsv -o:csv

    Those familiar with SQL will not have to make a very big leap of faith to adjust the above query to filter records in or out of the source file. Let's get all the records from the same file where the Order Quantity (OrderQty) is more than 25:

        logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\LP\Sales_SalesOrderDetailOver25.csv FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty > 25" -i:tsv -o:csv

    Or we could find all those records where the Order Quantity is equal to 25 and output it to an xml file:

        logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\LP\Sales_SalesOrderDetailEq25.xml FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty = 25" -i:tsv -o:xml

    All the standard comparison operators are to be found in LogParser: >, <, =, LIKE, BETWEEN, OR, NOT, AND.

    Input and Output file formats. LogParser has a pretty impressive list of file formats that it can parse and a good selection of output formats that will let you generate output in a format that is usable by whatever process or application you may be using.

    From any of these input formats:

        IISW3C: parses IIS log files in the W3C Extended Log File Format.
        IIS: parses IIS log files in the Microsoft IIS Log File Format.
        BIN: parses IIS log files in the Centralized Binary Log File Format.
        IISODBC: returns database records from the tables logged to by IIS when configured to log in the ODBC Log Format.
        HTTPERR: parses HTTP error log files generated by Http.sys.
        URLSCAN: parses log files generated by the URLScan IIS filter.
        CSV: parses comma-separated values text files.
        TSV: parses tab-separated and space-separated values text files.
        XML: parses XML text files.
        W3C: parses text files in the W3C Extended Log File Format.
        NCSA: parses web server log files in the NCSA Common, Combined, and Extended Log File Formats.
        TEXTLINE: returns lines from generic text files.
        TEXTWORD: returns words from generic text files.
        EVT: returns events from the Windows Event Log and from Event Log backup files (.evt files).
        FS: returns information on files and directories.
        REG: returns information on registry values.
        ADS: returns information on Active Directory objects.
        NETMON: parses network capture files created by NetMon.
        ETW: parses Enterprise Tracing for Windows trace log files and live sessions.
        COM: provides an interface to Custom Input Format COM Plugins.

    To any of these output formats:

        NAT: formats output records as readable tabulated columns.
        CSV: formats output records as comma-separated values text.
        TSV: formats output records as tab-separated or space-separated values text.
        XML: formats output records as XML documents.
        W3C: formats output records in the W3C Extended Log File Format.
        TPL: formats output records following user-defined templates.
        IIS: formats output records in the Microsoft IIS Log File Format.
        SQL: uploads output records to a table in a SQL database.
        SYSLOG: sends output records to a Syslog server.
        DATAGRID: displays output records in a graphical user interface.
        CHART: creates image files containing charts.

    So, you can query data from any of the input types and really easily get it into a format where it is ready for analysis by other tools. To a DBA or network administrator with an enquiring mind this is a treasure trove. In part 3 we will look at working with multiple sources and specifically outputting to SQL format. See you there!

    Read the article

  • can't load big files to server with php [closed]

    - by yozhik
    Hi all! I can't load big files to the server. The problem is that $_FILES["filename"]["tmp_name"] is empty if the file is a little bigger than 2 MB. I tried to change variables in php.ini:

        upload_max_filesize = 700M
        post_max_size = 16M

    but that did not work either. I also tried to add these variables to my .htaccess file, but a 500 error appears. The error code while uploading is 1: UPLOAD_ERR_INI_SIZE — the uploaded file exceeds the upload_max_filesize directive in php.ini. Here is my upload.php page; please answer, what am I doing wrong? Thanks!

        <?php
        if (strlen($_FILES["filename"]["name"])) {
            $folder = "uploads/";
            echo $folder;
            $error = "";
            if ($_FILES["filename"]["size"] > 1024*700*1024) {
                $error .= "<b><p class=ErrorMessage>?????? ????? ????????? 5Mb</p></b><br>";
                header("Location: upload.php?error=".$error, true, 303);
            }
            if (!file_exists($folder .= "hh/")) {
                if (!mkdir($folder, 0700))
                    $error .= "<b><p class=ErrorMessage>Folder not created</p></b><br>";
            }
            //echo "<br>".$_FILES["filename"]["tmp_name"]."<br>";
            echo $folder.$_FILES["filename"]["name"]."<br>";
            echo $_FILES["filename"]["error"]."<br>";
            if (move_uploaded_file($_FILES["filename"]["tmp_name"], $folder.$_FILES["filename"]["name"])) {
                echo("???? ??????? ???????? <br>");
                echo("?????????????? ?????: <br>");
                echo("??? ?????: ");
                echo($_FILES["filename"]["name"]);
                echo("<br>?????? ?????: ");
                echo($_FILES["filename"]["size"]);
                echo("<br>??????? ??? ????????: ");
                echo($folder .= $_FILES["filename"]["name"]);
                echo("<br>??? ?????: ");
                echo($_FILES["filename"]["type"]);
            } else {
                $error .= "<b><p class=ErrorMessage>?????? ???????? ?????</p></b><br>";
            }
        }
        ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>???????? ??? ????????</title>
        </head>
        <body>
        <?php
        if (isset($_REQUEST["error"])) {
            echo $_REQUEST["error"];
        }
        ?>
        <h2><p><b> ????? ??? ???????? ?????? </b></p></h2>
        <form action="upload.php" method="post" enctype="multipart/form-data">
        <input type="file" name="filename" READONLY><br>
        <input name="Upload" type="submit" value="Upload"><br>
        </form>
        </body>
        </html>
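    Note the mismatch in the php.ini values quoted above: post_max_size (16M) is far smaller than upload_max_filesize (700M), and PHP caps the whole POST body first, so uploads die around 16 MB no matter what upload_max_filesize says. post_max_size must be at least as large as upload_max_filesize (plus a little room for the other form fields). A sketch of consistent settings, with 710M chosen arbitrarily for illustration; note too that in .htaccess these only work as php_value lines under Apache with mod_php, which is why a bare "name = value" line there throws a 500 error:

        ; php.ini — post_max_size must not be smaller than upload_max_filesize
        upload_max_filesize = 700M
        post_max_size = 710M

        # .htaccess equivalent (Apache + mod_php only)
        php_value upload_max_filesize 700M
        php_value post_max_size 710M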

    Read the article

  • Books are Dead! Long Live the Books!

    - by smisner
    We live in interesting times with regard to the availability of technical material. We have lots of free written material online in the form of vendor documentation, forums, blogs, and Twitter. And we have written material that we can buy in the form of books, magazines, and training materials. Online videos and training - some free and some not - are also an option. All of these formats are useful for one need or another.

    As an author, I pay particular attention to the demand for books, and for now I see no reason to stop authoring them. I assure you that I don't get rich from the effort, and fortunately that is not my motivation. As someone who likes to refer to books frequently, I am still a big believer in books and have evidence from book sales that there are others like me. If I can do my part to help others learn about the technologies I work with, I will continue to produce content in a variety of formats, including books. (You can view a list of all of my books on the Publications page of my site and my online training videos at Pluralsight.)

    As a consumer of technical information, I prefer books because a book typically can get into a topic much more deeply than a blog post, and can provide more context than vendor documentation. It comes with a table of contents and a (hopefully accurate) index that helps me zero in on a topic of interest, and of course I can use the Search feature in digital form.

    Some people suggest that technology books are outdated as soon as they get published. I guess it depends on where you are with technology. Not everyone is able to upgrade to the latest and greatest version at release. I do assume, however, that the SQL Server 7.0 titles in my library have little value for me now, but I'm certain that the minute I discard a book, I'm going to want it for some reason! Meanwhile, as electronic books overtake physical books in sales, my husband is grateful that I can continue to build my collection digitally rather than physically, as the books have a way of taking over significant square footage in our house!

    Blog posts, on the other hand, are useful for describing the scenarios that come up in real-life implementations that wouldn't fit neatly into a book. As many years as I have been working with the Microsoft BI stack, I still run into new problems that require creative thinking. Likewise, people who work with BI and other technologies that I use share what they learn through their blogs. Internet search engines help us find information in blogs that simply isn't available anywhere else. Another great thing about blogs is the connection to community and the dialog that can ensue between people with common interests.

    With the trend towards electronic formats for books, I imagine that we'll see books continue to adapt to incorporate different forms of media and better ways to keep the information current. At the moment, I wish I had a better way to help readers with my last two Reporting Services books. In the case of the Microsoft® SQL Server™ 2005 Reporting Services Step by Step book, I have heard many cases of readers having problems with the sample database that shipped on CD - either the database was missing or it was corrupt. So I've provided a copy of the database on my site for download from http://datainspirations.com/uploads/rs2005sbsDW.zip.

    Then for the Microsoft® SQL Server™ 2008 Reporting Services Step by Step book, we decided to avoid the database problem by using the AdventureWorks2008 samples that Microsoft published on Codeplex (although code samples are still available on CD). We had this silly idea that the URL for the download would remain constant, but it seems that expectation was ill-founded. Currently, the sample database is found at http://msftdbprodsamples.codeplex.com/releases/view/37109 but I have no idea how long that will remain valid.

    My latest books (#9 and #10, which are milestones I never anticipated), Building Integrated Business Intelligence Solutions with SQL Server 2008 R2 and Office 2010 (McGraw Hill, 2011) and Business Intelligence in Microsoft SharePoint 2010 (Microsoft Press, 2011), will not ship with a CD, but will provide all code samples for download at a site maintained by the respective publishers. I expect that the URLs for the book downloads will remain valid, but there are lots of references to other sites that can change or disappear over time. Does that mean authors shouldn't make reference to such sites? Personally, I think the benefits to be gained from including links are greater than the risks of the links becoming invalid at some point.

    Do you think the time for technology books has come to an end? Is the delivery of books in electronic format enough to keep them alive? If technological barriers were no object, what would make a book more valuable to you than other formats through which you can obtain information?

    Read the article

  • Alternative way of developing for ASP.NET to WebForms - Any problems with this?

    - by John
    So I have been developing in ASP.NET WebForms for some time now but often get annoyed with all the overhead (like ViewState and all the JavaScript it generates), and the way WebForms takes over a lot of the HTML generation. Sometimes I just want full control over the markup and produce efficient HTML of my own, so I have been experimenting with what I like to call HtmlForms. Essentially this is using ASP.NET WebForms but without the form runat="server" tag. Without this tag, ASP.NET does not seem to add anything to the page at all. From some basic tests it seems that it runs well, and you still have the ability to use code-behind pages and many ASP.NET controls such as repeaters.

    Of course, without the form runat="server" many controls won't work. A post at Enterprise Software Development lists the controls that do require the tag. From that list you will see that all of the form elements like TextBoxes, DropDownLists, RadioButtons, etc. cannot be used. Instead you use normal HTML form controls.

    But how do you access these HTML controls from the code-behind? Retrieving values on post back is easy: you just use Request.QueryString or Request.Form. But passing data to the control could be a little messy. Do you use an ASP.NET Literal control in the value field, or do you use <%= value %> in the markup page? I found it best to add runat="server" to my HTML controls, and then you can access the control in your code-behind like this:

    ((HtmlInputText)txtName).Value = "blah";

    Here's an example that shows what you can do with a textbox and a drop down list:

    Default.aspx

    <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="NoForm.Default" %>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head runat="server">
        <title></title>
    </head>
    <body>
        <form action="" method="post">
            <label for="txtName">Name:</label>
            <input id="txtName" name="txtName" runat="server" /><br />
            <label for="ddlState">State:</label>
            <select id="ddlState" name="ddlState" runat="server">
                <option value=""></option>
            </select><br />
            <input type="submit" value="Submit" />
        </form>
    </body>
    </html>

    Default.aspx.cs

    using System;
    using System.Web.UI.HtmlControls;
    using System.Web.UI.WebControls;

    namespace NoForm
    {
        public partial class Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Default values
                string name = string.Empty;
                string state = string.Empty;

                if (Request.RequestType == "POST")
                {
                    // If form submitted (post back)
                    name = Request.Form["txtName"];
                    state = Request.Form["ddlState"];
                    // Server side form validation would go here,
                    // plus actions to process the form and redirect
                }

                ((HtmlInputText)txtName).Value = name;
                ((HtmlSelect)ddlState).Items.Add(new ListItem("ACT"));
                ((HtmlSelect)ddlState).Items.Add(new ListItem("NSW"));
                ((HtmlSelect)ddlState).Items.Add(new ListItem("NT"));
                ((HtmlSelect)ddlState).Items.Add(new ListItem("QLD"));
                ((HtmlSelect)ddlState).Items.Add(new ListItem("SA"));
                ((HtmlSelect)ddlState).Items.Add(new ListItem("TAS"));
                ((HtmlSelect)ddlState).Items.Add(new ListItem("VIC"));
                ((HtmlSelect)ddlState).Items.Add(new ListItem("WA"));
                if (((HtmlSelect)ddlState).Items.FindByValue(state) != null)
                    ((HtmlSelect)ddlState).Value = state;
            }
        }
    }

    As you can see, you have similar functionality to ASP.NET server controls but more control over the final markup, and less overhead like ViewState and all the JavaScript ASP.NET adds. Interestingly, you can also use HttpPostedFile to handle file uploads using your own input type="file" control (and the necessary form enctype="multipart/form-data"). So my question is: can you see any problems with this method, and any thoughts on its usefulness? I have further details and tests on my blog.

    Read the article

  • Dissertation about website and database security - in need of some pointers

    - by ClarkeyBoy
    Hi, I am working on my dissertation in my final year at university. One of the areas I need to research is security - for both websites and databases. I currently have sections on the following:

    Website:
    - Form security, such as data validation. This section is more about preventing errors made by legitimate users than about stopping hackers - for example, comparing a field to a regular expression and giving meaningful feedback on any errors that did occur, so as to stop them happening again.
    - Constraints. For example, if a value must be true or false then use a checkbox. If it is likely to be one of several values then use a dropdown or a set of radio buttons, and so on. If the value is unpredictable then use regular expressions to limit which characters may be entered, to restrict the length of the string, and sometimes to limit the format (such as for dates/times, post codes and so on).
    - Sometimes you can restrict permissions on the form. This is for the occasions when you know exactly who (whether individual people or a group, such as administrators or employees) will need access to the form. Restricting permissions stops members of the public from being able to access it.
    - Symbols or strings which could be used maliciously or cause the website to act incorrectly (such as the script tag) should be filtered out or HTML encoded.
    - Captcha images can be used to prevent automated systems from filling in and submitting the form.
    - There are some hacks for file uploads - such as using double extensions - which can allow hackers to upload malicious files.

    Databases (this is nowhere near done yet, but the sections I have planned are listed below):
    - SQL statements vs stored procedures.
    - Throwing an error when one of the variables contains particular characters or groups of characters (I can't remember exactly which characters, but I have seen a message thrown back at me when I tried to enter HTML or something similar into a text area).
    - SQL injection - and ways around it, with some examples (one is sketched after this question).

    Does anyone have any hints and tips on where I could go for some decent, reliable information, either about these areas or about other areas of security that I could cover? Thanks in advance. Regards, Richard. PS: I am a complete newbie when it comes to security, so please be patient with me. If any of the information I have put down is wrong or could be sub-sectioned, then please feel free to say so.
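    For the SQL injection section, the standard defence worth illustrating is a parameterized query. A minimal PHP sketch using PDO - the connection details, table and column names here are made up purely for illustration:

    <?php
    // Hypothetical connection - credentials and schema are illustrative only
    $pdo = new PDO('mysql:host=localhost;dbname=example', 'user', 'pass');

    // Vulnerable: user input concatenated straight into the SQL string
    // $sql = "SELECT id, name FROM users WHERE name = '" . $_POST['name'] . "'";

    // Safe: the value is bound as a parameter and never parsed as SQL
    $stmt = $pdo->prepare('SELECT id, name FROM users WHERE name = :name');
    $stmt->execute(array(':name' => $_POST['name']));
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
    ?>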

    Read the article

  • jQuery doesn't work in IE8?

    - by Wade D Ouellet
    Hi, I am working on a site here: mfm.treethink.net. All the jQuery works fine in Firefox, Chrome and Safari, but in IE8 it gives me errors: the banner at the top doesn't work (it uses the crossSlide jQuery plugin), and the image rollovers don't work with the colour change. IE8 tells me the errors are on lines 53, 134 and 149 in the source - all lines where the jQuery ready function is declared:

    $(document).ready(function(){

    I am running jQuery 1.4. Oddly enough, the other piece of jQuery I have on that page works: the artist browse/select menu on the right. Here are all the scripts I'm running.

    1 - the banner (doesn't work in IE8):

    <script type="text/javascript">
    $(function() {
        $('#banner').crossSlide({
            sleep: 5,
            fade: 1
        }, [
        <?php
        $pages = get_posts('numberposts=2000&post_type=artist&post_status=publish');
        $i = 1;
        foreach( $pages as $page ) {
            $content = $page->post_title;
            if( empty($content) ) continue;
            $content = apply_filters('the_content', $content);
        ?>
            { src: '/wp-content/uploads/<?php echo $page->post_name ?>.jpg' },
        <?php
            $i++;
        }
        ?>
        ]);
    });
    </script>

    2 - the image rollovers (doesn't work in IE8):

    <script type="text/javascript">
    $(function(){
        $("ul#artists li").hover(function() { /* On hover */
            var thumbOver = $(this).find("img").attr("src"); /* Find image source */
            /* Swap background */
            $(this).find("a.thumb").css({'background' : 'url(' + thumbOver + ') center bottom no-repeat'});
            $(this).find("span").stop().fadeTo('fast', 0, function() { $(this).hide() });
        }, function() {
            $(this).find("span").stop().fadeTo('fast', 1).show();
        });
    });
    </script>

    3 - the artist select (works in IE8):

    <script>
    $("#browse-select").change(function() {
        window.location.href = $(this).val();
    });
    </script>

    These scripts were written by referencing previously made scripts - like I said, I'm still new to jQuery. The second works in IE8 and the first one is the one that doesn't. I noticed the third one, the only one working, is written differently from the first two non-working ones, without a function declaration at the top. Could this have anything to do with it? Any help figuring out this problem would be much appreciated. Thanks a lot, Wade
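    One thing worth checking in the first script, offered as a guess rather than a confirmed diagnosis: the PHP loop emits { src: ... }, for every post, so the generated array literal ends with a trailing comma before the closing ]. IE8 handles trailing commas differently from Firefox, Chrome and Safari - in an array literal it adds an extra undefined element, and in an object literal it raises a syntax error - either of which can break a plugin that iterates over the array. A sketch of one way to emit the list without a trailing comma, using the same WordPress calls as the question:

    <?php
    // Collect the slide entries in a PHP array first, then join with commas,
    // so no trailing comma is emitted for IE8 to trip over.
    $slides = array();
    $pages = get_posts('numberposts=2000&post_type=artist&post_status=publish');
    foreach ($pages as $page) {
        if (empty($page->post_title)) continue;
        $slides[] = "{ src: '/wp-content/uploads/" . $page->post_name . ".jpg' }";
    }
    echo implode(",\n", $slides);
    ?>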

    Read the article

  • Django: What's an awesome plugin to maintain images in the admin?

    - by meder
    I have an articles Entry model with an excerpt and a description field. If a user wants to post an image, there is a separate ImageField which has the default standard file browser. I've tried using django-filebrowser, but I don't like that it requires django-grappelli, nor do I necessarily want a Flash upload utility. Can anyone recommend a tool for managing image uploads that basically replaces the file browser provided by Django with an image-picking browser? In the future I'd probably want it to handle image resizing and to specify default image sizes for certain article types.

    Edit: I'm trying out adminfiles now, but I'm having issues installing it. I grabbed it and added it to my Python path, added it to INSTALLED_APPS, created the database tables for it, and uploaded an image. I followed the instructions to modify my model to specify adminfiles_fields and registered it, but it's not applying in my admin. Here's my admin.py for articles:

    from django.contrib import admin
    from django import forms
    from articles.models import Category, Entry
    from tinymce.widgets import TinyMCE
    from adminfiles.admin import FilePickerAdmin

    class EntryForm(forms.ModelForm):
        class Media:
            js = ['/media/tinymce/tiny_mce.js', '/media/tinymce/load.js']  #, '/media/admin/filebrowser/js/TinyMCEAdmin.js']
        class Meta:
            model = Entry

    class CategoryAdmin(admin.ModelAdmin):
        prepopulated_fields = { 'slug': ['title'] }

    class EntryAdmin(FilePickerAdmin):
        adminfiles_fields = ('excerpt',)
        prepopulated_fields = { 'slug': ['title'] }
        form = EntryForm

    admin.site.register(Category, CategoryAdmin)
    admin.site.register(Entry, EntryAdmin)

    Here's my Entry model:

    class Entry(models.Model):
        LIVE_STATUS = 1
        DRAFT_STATUS = 2
        HIDDEN_STATUS = 3
        STATUS_CHOICES = (
            (LIVE_STATUS, 'Live'),
            (DRAFT_STATUS, 'Draft'),
            (HIDDEN_STATUS, 'Hidden'),
        )
        status = models.IntegerField(choices=STATUS_CHOICES, default=LIVE_STATUS)
        tags = TagField()
        categories = models.ManyToManyField(Category)
        title = models.CharField(max_length=250)
        excerpt = models.TextField(blank=True)
        excerpt_html = models.TextField(editable=False, blank=True)
        body_html = models.TextField(editable=False, blank=True)
        article_image = models.ImageField(blank=True, upload_to='upload')
        body = models.TextField()
        enable_comments = models.BooleanField(default=True)
        pub_date = models.DateTimeField(default=datetime.datetime.now)
        slug = models.SlugField(unique_for_date='pub_date')
        author = models.ForeignKey(User)
        featured = models.BooleanField(default=False)

        def save(self, force_insert=False, force_update=False):
            self.body_html = markdown(self.body)
            if self.excerpt:
                self.excerpt_html = markdown(self.excerpt)
            super(Entry, self).save(force_insert, force_update)

        class Meta:
            ordering = ['-pub_date']
            verbose_name_plural = "Entries"

        def __unicode__(self):
            return self.title

    Edit #2: To clarify, I did move the media files to my media path, and they are indeed rendering the image area. I can upload fine, and the <<<image>>> tag is inserted into my editable MarkItUp w/ Markdown area, but it isn't rendering in the MarkItUp preview - perhaps I just need to apply |upload_tags to that preview. I'll try adding it to the template which posts the article as well.

    Read the article

  • Some help needed with setting up the PERFECT workflow for web development with 2-3 guys using subver

    - by Roeland
    Hey guys! I run a small web development company alongside my brother and a friend. After doing extensive research, I have decided on using Subversion for version control. Here is how I currently plan on running typical development. Keep in mind there are three of us, each in a separate location.

    I set up an account with Springloops (springloops.com) subversion hosting. Each time I work on a new project, I create a repository for it. So let's say in this case I am working on site1. I want to have three versions of the site on the internet:

    1. Web development - the server the other developers and I publish to (site1.dev.bythepixel.com).
    2. Client preview - the server we update every few days with a good revision for the client to see (site1.bythepixel.com).
    3. Live site - the site I publish to when going live (site1.com).

    Each development machine (at each location) has a local copy of XAMPP running virtual hosts, to allow multiple websites to be worked on. The document root of each local site is the working copy of the Subversion repository, so we can make small tweaks and preview them immediately. When some work has been done, a commit is made to the repository for the site. The dev site is pushed automatically (an option in Springloops). Then, whenever I feel ready, I push to the client preview site.

    Now, I have a few concerns with this workflow:

    1. I am currently using CodeIgniter, and in the config file I generally set the root of the site, e.g. http://www.site1.com. So it looks like each time I publish to one of the internet servers, I will have to modify the config file? Is there any way to designate certain files for each server, so that publishing to client preview just uploads the config file for the client preview server? (One common approach is sketched after this question.)
    2. I don't want the live site, the client preview site and the dev site to share the same MySQL server, for a variety of reasons. Does this once again mean I have to adjust the DB server info each time I push to a different site?

    Does this workflow make sense? If you have any suggestions, please let me know. I plan for this to be the workflow I use for the next few years - I just need to put a system in place that allows for future expansion! Thanks a bunch!!
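    One common way to handle both concerns, sketched below with the hostnames from the question (treat the mapping as an assumption about the setup, not a drop-in answer): commit a single config file that branches on the current host at runtime, so the same code runs unchanged on dev, preview and live. In CodeIgniter this can live in application/config/config.php:

    <?php
    // Sketch: pick per-environment settings from the host the request arrived on
    switch ($_SERVER['HTTP_HOST']) {
        case 'site1.dev.bythepixel.com':   // web development
            $config['base_url'] = 'http://site1.dev.bythepixel.com/';
            break;
        case 'site1.bythepixel.com':       // client preview
            $config['base_url'] = 'http://site1.bythepixel.com/';
            break;
        default:                           // live
            $config['base_url'] = 'http://www.site1.com/';
            break;
    }

    The same switch in application/config/database.php can choose a different $db['default']['hostname'] (and credentials) per environment, so the three sites never share a MySQL server and nothing needs to be edited at publish time.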

    Read the article

< Previous Page | 31 32 33 34 35 36 37 38 39  | Next Page >