Search Results

Search found 216 results on 9 pages for 'xlsx'.

Page 7/9

  • Storing an arbitrary R object onto HDD?

    - by Harokitty
    I understand that we can export data matrices to csv or xlsx files. What about complex objects like lm? For example, in my work I might have a list of length 1000, each element holding a single lm() object. Each time I load R I have to wait a long time to repopulate that 1000-element list with a for loop or lapply. I would rather just save the list somewhere on my HDD at the end of a session and open it at the start of the next session.
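
    A minimal R sketch of the usual approach (the list name fits is hypothetical): saveRDS()/readRDS() serialise any R object, lm fits included.

        # Save the list of lm objects at the end of a session
        saveRDS(fits, file = "fits.rds")
        # Restore it at the start of the next session, skipping the slow loop
        fits <- readRDS("fits.rds")
        # save()/load() also work, and preserve the original variable name
        save(fits, file = "fits.RData")
        load("fits.RData")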

    Read the article

  • Can you use Javascript to detect a file download window created server side?

    - by Zacho
    I have a jQuery plugin I use to dynamically create and render a form on a default.aspx asp.net page, then submit it. The page it gets submitted to is a pdf.aspx page. The page builds a PDF then uses Response.Write to write the file (application/pdf) to the browser. I use the same method to render XLSX files to the browser as well. It works really great, but I need a callback or some event to tell the button when to stop spinning. This prevents the user from continuously clicking the Excel or PDF buttons. Does anyone know a way to detect the file dialog window when it was not created using javascript? I am also open to other methods of callback from the server side as well.
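
    One common workaround (a hedged sketch, not from the original thread): have the server attach a cookie to the file response, e.g. Response.AppendCookie(new HttpCookie("fileDownloadToken", token)) just before Response.Write, and poll for it on the client. The function below is TypeScript; all names are illustrative.

        // Poll for a cookie the server sets on the download response; once it
        // appears, the browser has received the file and the spinner can stop.
        function waitForDownload(token: string, onDone: () => void): void {
          const timer = window.setInterval(() => {
            if (document.cookie.indexOf("fileDownloadToken=" + token) !== -1) {
              window.clearInterval(timer);
              // Expire the cookie so the next download starts clean.
              document.cookie = "fileDownloadToken=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/";
              onDone(); // e.g. re-enable the Excel/PDF buttons
            }
          }, 500);
        }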

    Read the article

  • Configuring Full-Text Search for pdf and docx files

    - by Lukasz Kurylo
    Back in May, I think, I created a little filters module based on Full-Text Search. I configured my dev machine and did the same for two testing servers – one in our company for internal testing before we deployed to the client, and then the client's own testing server. Until last week this build was still on the testing server, and we finally got feedback that we can deploy it to the production one. I will only say that I lost half a day because I had not correctly remembered how I configured FTS on the previous servers and I had no notes for it. I foolishly believed in my memory. Lesson learned.   For future reference, a bunch of steps to configure FTS for searching in *.pdf and *.docx files (and, by the way, in other Office files like *.xlsx):   1. From the page (link) download and install the *.pdf IFilter for FTS. 2. To the PATH global system variable add the path to the catalog where you installed the plugin. The default for this version is: C:\Program Files\Adobe\Adobe PDF iFilter 9 for 64-bit platforms\bin 3. From the page (link) download FilterPackx64.exe and install it. 4. Now from SSMS execute the following procedures: -sp_fulltext_service 'load_os_resources',1 -sp_fulltext_service 'verify_signature', 0 5. Restart the server. 6. Now we must check if the plugins are visible: -select document_type, path from sys.fulltext_document_types where document_type = '.pdf' -select document_type, path from sys.fulltext_document_types where document_type = '.docx' 7. If we see a result, then we can assume that everything is OK*. 8. Right now we can create a catalog for FTS and indexes on the appropriate columns.     *I lost a lot of hours finding out why the plugin for *.pdf files wasn't indexing any file in the database although a row for this plugin was present in sys.fulltext_document_types. After deeper investigation I found that the *.pdf files actually were being indexed – at least the EOF sign was added to the index, and nothing more, for each file. In the end the problem was that I had forgotten to add the \bin part of the plugin path to the PATH variable.
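
    For step 8, a minimal T-SQL sketch (table, column and index names here are hypothetical; the type column must hold values like '.pdf' or '.docx'):

        CREATE FULLTEXT CATALOG DocumentsCatalog AS DEFAULT;

        CREATE FULLTEXT INDEX ON dbo.Documents
        (
            Content TYPE COLUMN Extension LANGUAGE 1033  -- IFilter chosen by extension
        )
        KEY INDEX PK_Documents ON DocumentsCatalog;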

    Read the article

  • Analysing SQLBits Feedback

    - by jamiet
    Earlier this week I received all the feedback that people offered on my session at SQLBits 7 in York – “SSIS Dataflow Performance Tuning” (the video is available online if you wish to see it). As you may have gathered from previous posts on this blog and my less-SQLy-focused Wordpress blog I am a big fan of collecting and tracking both personal and public data and session feedback lends itself very well to tracking because it is quantitative rather than qualitative; by that I mean attendees are invited to provide marks out of ten rather than (or, in the case of SQLBits, as well as) written comments. The SQLBits feedback is also useful because they use a consistent format – the same questions are asked each time – this means it is particularly easy to track whether the scores that people give are trending up or down. I suspect that somewhere the SQLBits organisers have a big Analysis Services cube (ok, perhaps it's an Excel pivot table) that allows them to analyse these scores per conference, speaker, track etc.… and there’s no reason that we as session speakers cannot do the same thing. To that end I have started to store my feedback in an Excel spreadsheet of my own which in the interests of transparency is available for public viewing (only a web browser required) on SkyDrive at http://cid-550f681dad532637.office.live.com/view.aspx/Public/Misc/Personal%20SQLBits%20Session%20Feedback.xlsx. I have used a pivot table to aggregate all that feedback and here is a screenshot: I am hereby making a public plea to the SQLBits organisers (on the off-chance that they are reading) to please continue to keep the feedback format consistent in the future and I encourage them to publish all of the feedback in an anonymised form. I would also encourage anyone doing conference speaking to track their conference feedback in the same way that I am doing so that you get an insight into whether or not you are improving over time. It is not difficult to set up, and maintaining it as you do more sessions takes very little effort. Storing feedback data like this leads me to wider thoughts about well-known conventions and data format standardisation. Let’s imagine a utopia where there were a standard set of questions for capturing session feedback that were leveraged at every conference regardless of subject matter, location or culture; that would give rise to immense cross-conference and cross-discipline analysis – the data analyst in me goes giddy at the thought of it. It is scenarios like this that drive my interest both in data formats such as iCalendar, microformats and RDF, and in emerging movements such as the semantic web and linked data, all things which I have written about in the past. I don’t know whether we will ever reach the stage where every piece of data has structured, descriptive metadata associated with it but I live in hope. @Jamiet

    Read the article

  • WCF Routing Service Filter Generator

    - by Michael Stephenson
    Recently I've been working with the WCF routing service and in our case we were simply routing based on the SOAP Action. This is a pretty good approach for a standard redirection of the message when all messages matching a SOAP Action will go to the same endpoint. Using the SOAP Action also lets you be specific about which methods you expose via the router. One of the things which was a pain was the number of routing rules I needed to create because we were routing for a lot of different methods. I could have explored the option of using a regular expression to match the message to its routing but I wanted to be very specific about what's routed and not risk exposing methods I shouldn't via the router. I decided to put together a little spreadsheet so that I can generate part of the configuration I would need to put in the configuration file rather than have to type this by hand. To show how this works download the spreadsheet from the following URL: https://s3.amazonaws.com/CSCBlogSamples/WCF+Routing+Generator.xlsx In the spreadsheet you will see that the squares in green are the ones which you need to amend. In the picture below you can see that you specify a prefix and suffix for the filter name, the core namespace of the web service you're generating routing rules for, and the WCF endpoint name which you want to route to. In column A you will see the green cells where you add the list of method names which you want to include routing rules for. The spreadsheet will work out what the full SOAP Action would be, and then the name you will use for that filter in your WCF routing filters. In column D the spreadsheet will have generated the XML snippet which you can add to the routing filters section in your configuration file. In column E the spreadsheet will have created the XML snippet which you can add to the routing table to send messages matching each filter to the appropriate WCF client endpoint to forward the message to the required destination. Hopefully you can see that with this spreadsheet it would be very easy to produce accurate XML for the WCF Routing configuration if you had a large number of routing rules. If you had additional methods in other services you can simply copy the worksheet and add multiple copies to the Excel workbook. One worksheet per service would work well.
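
    For readers who have not used the routing service before, the generated snippets drop into the standard WCF routing configuration sections; a hand-written sketch of the shape they take (all names here are hypothetical):

        <system.serviceModel>
          <routing>
            <filters>
              <!-- one Action filter per method; filterData is the full SOAP Action -->
              <filter name="Pre_GetOrder_Post" filterType="Action"
                      filterData="http://schemas.acme.com/orders/IOrderService/GetOrder" />
            </filters>
            <filterTables>
              <filterTable name="routingTable">
                <!-- route matching messages to the named client endpoint -->
                <add filterName="Pre_GetOrder_Post" endpointName="OrderServiceEndpoint" />
              </filterTable>
            </filterTables>
          </routing>
        </system.serviceModel>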

    Read the article

  • Windows file locks allowing multiple users to write to open file over network

    - by JPbuntu
    I have six Windows computers (XP, Vista, 7) that need to access a Samba share (Ubuntu 12.04). I am trying to make it so only one client can open a file at a given time. I thought this was pretty standard behavior of file locks, but I can't get it to work. The way it is right now, a file can be opened by two users, and changed and saved by either one of them. The last file saved overwrites whatever changes the other user made. At first I thought this was a Samba configuration problem, but I get this behavior even between two Windows machines. So far I have only tested Windows XP <-> Windows Vista and Windows XP <-> Samba <-> Windows Vista, and both give the same behavior. When I tested the Samba configuration, I had set strict locking = yes and got errors logged like this: close_remove_share_mode: Could not get share mode lock for file _prod/part_number_list_COPY.xlsx Eventually all of the files are going to be moved onto the Samba share, so that is the configuration I am most concerned about fixing. Any ideas? Thanks in advance. EDIT: I tested an Excel file again, and it is now working properly in both above-mentioned cases, and I am also no longer getting the above-mentioned error. I don't know what happened; perhaps a restart fixed it? (It also works with strict locking = no.) Although I still need to find a solution for the CAD/CAM files we use; the software is Vector and it does not seem to be using file locks. Is there any software that I can use to manage these files, so two people can't open/edit them at a time? Maybe a Windows application that forces file locks? Or a dirt simple version control system? (The only ones I have seen are too complicated for our needs.)
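
    For reference, a hedged smb.conf sketch of the share-level locking options mentioned above (share name and path are hypothetical); note this only helps for applications that actually request locks:

        [prod]
            path = /srv/samba/prod
            # enforce byte-range locks strictly (the setting mentioned above)
            strict locking = yes
            # disable oplocks so clients cannot cache a file locally and
            # clobber each other's changes on close
            oplocks = no
            level2 oplocks = no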

    Read the article

  • Create App_Data and register Excel application on ASP.NET deployment? (IIS7.5)

    - by Francesco
    I am deploying an ASP.NET MVC3 application in IIS7. I have already deployed other applications, but they never made use of the App_Data folder or any additional component such as the Interop library. I used the one-click deployment and I use the default application pool. When I launch the application I immediately get an error stating: [web access] Sorry, an error occurred while processing your request. [browse from IIS7] Could not find a part of the path 'D:\Data\Apps\OppUpdate\App_Data\Test.xlsx'. Then I manually added the App_Data folder inside the deployment directory and the application starts regularly. Then when it comes to the task that uses the Interop library, I get the following error: [web access] Sorry, an error occurred while processing your request. [browse from IIS7] Retrieving the COM class factory for component with CLSID {00024500-0000-0000-C000-000000000046} failed due to the following error: 80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)). Is there any way to automatically add the App_Data folder when using one-click deploy? How can I register the Interop services? Thank you, Francesco
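
    On the App_Data half of the question, one hedged workaround: Web Deploy only packages folders that contain project items, so adding a placeholder file to the project forces the folder to be created on the target (the file name is hypothetical):

        <!-- in the .csproj: ship an empty marker file so App_Data exists after deploy -->
        <ItemGroup>
          <Content Include="App_Data\placeholder.txt" />
        </ItemGroup>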

    Read the article

  • How to get nginx to pass HTTP_AUTHORIZATION header to Apache

    - by codeinthehole
    Am using Nginx as a reverse proxy to an Apache server that uses HTTP Auth. For some reason, I can't get the HTTP_AUTHORIZATION header through to Apache, it seems to get filtered out by Nginx. Hence, no requests can authenticate. Note that the Basic auth is dynamic so I don't want to hard-code it in my nginx config. My nginx config is: server { listen 80; server_name example.co.uk ; access_log /var/log/nginx/access.cdk-dev.tangentlabs.co.uk.log; gzip on; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_read_timeout 120; location / { proxy_pass http://localhost:81/; } location ~* \.(jpg|png|gif|jpeg|js|css|mp3|wav|swf|mov|doc|xls|ppt|docx|pptx|xlsx|swf)$ { if (!-f $request_filename) { break; proxy_pass http://localhost:81; } root /var/www/example; } } Anyone know why this is happening? Update - turns out the problem was something I had overlooked in my original question: mod_wsgi. The site in question here is a Django site, and it turns out that Apache does get the auth variables passed through, however mod_wsgi filters them out. The resolution is to use: WSGIPassAuthorization On See http://www.arnebrodowski.de/blog/508-Django,-mod_wsgi-and-HTTP-Authentication.html for more details
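
    For the general (non-mod_wsgi) case, a hedged sketch of forwarding the header explicitly in the proxy block:

        location / {
            proxy_pass http://localhost:81/;
            # pass the client's credentials through to Apache explicitly
            proxy_set_header Authorization $http_authorization;
        }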

    Read the article

  • How to turn off Excel "Header Row" without losing data in it?

    - by Ken
    I've been sent an Excel spreadsheet with a weird first row. Some of the cells say "Column1", "Column2", etc., but I can't delete their contents. If I select the cell and hit backspace, it goes blank, but when I press return, it goes right back to saying "Column1". I found another answer here that suggested this could be caused by "Cell validation", but the validation window says "Any value", and also "show alert" (and I'm not seeing an alert), so I don't think that's it. The first row is white text on a blue background, if that means anything. The spreadsheet was sent to me in XLSX format, but I tried resaving as XLS and opening that, and it seems to make no difference. This is with the "ribbon" version of Excel (they got rid of the Help menu so I don't know how to see what version number it is!). Thanks! Update: The Excel online help says to use ribbon Home tab - Cells - Delete - ... to delete cells. When I select anything on the first row, this pop-up menu is dimmed. So maybe Excel doesn't think row 1 consists of "cells"? Though I don't know what else it would call them. Update 2: I found it, kind of. If I click the "Design" tab in the ribbon, then uncheck "Header Row", then first row becomes a normal row of cells again. Unfortunately, the contents disappear entirely. I want to delete a few cells, not all 50+! And if I copy the first row before turning off "Header Row", it disappears from the clipboard when I uncheck that. So I kind of know what mode it's stuck in, but not a good way out of it.

    Read the article

  • Making libmagic/file detect .docx files

    - by Jonatan Littke
    As seen elsewhere, docx, xlsx and pptx are ZIPs. When uploading them to my web application, file (via libmagic and python-magic) detects them as being ZIP. I store the contents of the file as a blob in the database, but naturally I don't want to trust the user with what kind of file type this is. So I would like to trust file and automatically generate a filename during download. I know one can modify /etc/magic but the format (magic(5)) is way too complicated for me. I found a bug report on the issue at Debian bugs, but since it's from 2008 it doesn't seem likely to be fixed any time soon. I guess my only other alternative is to indeed trust the user (but still store the contents as a blob) and only check the file extension based on the file name. This way I can disallow some extensions and allow others. And when the user re-downloads his file, he can have it in whatever way he uploaded it. But this solution is insecure if the file is shared with others, since you can simply rename the file to allow uploading it. Any ideas? Lastly, I found a list of magic numbers for docx etc., but I'm unable to convert these into the magic(5) format.
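
    Since the OOXML formats are ZIP containers with well-known part names, one hedged alternative to patching /etc/magic is to peek inside the archive directly, for example in Python:

        import zipfile

        def detect_ooxml(path):
            """Return 'docx', 'xlsx', 'pptx' or None by inspecting the ZIP parts."""
            if not zipfile.is_zipfile(path):
                return None
            with zipfile.ZipFile(path) as z:
                names = set(z.namelist())
            if "word/document.xml" in names:
                return "docx"
            if "xl/workbook.xml" in names:
                return "xlsx"
            if "ppt/presentation.xml" in names:
                return "pptx"
            return None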

    Read the article

  • Five stars of open data - example and review

    - by Joe
    (There may be a more suited SE site for this question, so feel free to shift.) I have some data I'd like to make open to the public - it's a synthesis of some related data retrieved from freedom of information requests over the last year. The data itself is at http://www.cs.rhul.ac.uk/home/joseph/domesday/Domesday-Scotland.csv or, for fans of Excel, at http://www.cs.rhul.ac.uk/home/joseph/domesday/Domesday-Scotland.xlsx . It's no more than a table with about five columns. I'd like to make this properly open data, so I was looking at the 5-star deployment scheme for Open Data. Much of it is fine, but I'm confused towards the end and I could do with an explanation from people who know the answers. So to achieve the star levels I need to: "make your stuff available on the Web (whatever format) under an open license" - trivial; all I have to do is put the notes up on the page that will give the provenance of the data. "make it available as structured data (e.g., Excel instead of image scan of a table)"… done… "use non-proprietary formats (e.g., CSV instead of Excel)" - done… "use URIs to identify things, so that people can point at your stuff" - this is where I start to get a bit hazy - does this mean there should be a URI for every line in the table? "link your data to other data to provide context" - this isn't massively clear to me - does this mean to give the provenance of the data? One column of the data I've put out is a link to where the data came from - is that the sort of thing we're looking at? Any and all information and answers welcome… EDIT - or if anyone wants to recommend an SE or other place to ask the question - that would be cool...

    Read the article

  • Restricting access to one controller of an MVC app with Nginx

    - by kgb
    I have an MVC app where one controller needs to be accessible only from several IPs (this controller is an OAuth token callback trap - for Google/FB API tokens). My conf looks like this: geo $oauth { default 0; 87.240.156.0/24 1; 87.240.131.0/24 1; } server { listen 80; server_name some.server.name.tld default_server; root /home/user/path; index index.php; location /oauth { deny all; if ($oauth) { rewrite ^(.*)$ /index.php last; } } location / { if ($request_filename !~ "\.(phtml|html|htm|jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|xlsx)$") { rewrite ^(.*)$ /index.php last; break; } } location ~ \.php$ { fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; } } It works, but does not look right. The following seems logical to me: location /oauth { allow 87.240.156.0/24; deny all; rewrite ^(.*)$ /index.php last; } But this way rewrite happens all the time, allow and deny directives are ignored. I don't understand why...
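
    A hedged explanation and sketch (not from the original thread): nginx runs a location's rewrite-module directives before the access-module checks, so rewrite ... last re-routes the request to the \.php$ location before allow/deny are ever evaluated. Rewriting with break keeps the request inside the location, so the access check still applies:

        location /oauth {
            allow 87.240.156.0/24;
            allow 87.240.131.0/24;
            deny all;
            # 'break' keeps the rewritten request in this location,
            # so the allow/deny above still run before the handler
            rewrite ^ /index.php break;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            include fastcgi_params;
        }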

    Read the article

  • Prevent 'Run-time error '7' out of memory' error in Excel when using macro

    - by MasterJedi
    I keep getting this error whenever I run a macro in my excel file. Is there any way I can prevent this? My code is below. Debugging highlights the following line as the issue: ActiveSheet.Shapes.SelectAll My macro: Private Sub Save() Dim sh As Worksheet ActiveWorkbook.Sheets("Report").Copy 'Create new workbook with Sheets("Report"(2)) as only sheet. Set sh = ActiveWorkbook.Sheets(1) 'Set the new sheet to a variable. New workbook is now active workbook. sh.Name = sh.Range("B9") & "_" & Format(Date, "mmyyyy") 'Rename the new sheet to B9 value + date. With sh.UsedRange.Cells .Value = .Value 'eliminate all formulas .Validation.Delete 'remove all validation .FormatConditions.Delete 'remove all conditional formatting ActiveSheet.Buttons.Delete ActiveSheet.Shapes.SelectAll Selection.Delete lrow = Range("I" & Rows.Count).End(xlUp).Row 'select rows from bottom up to last containing data in column I Rows(lrow + 1 & ":" & Rows.Count).Delete 'delete rows with no data in column I Application.ScreenUpdating = False .Range("A410:XFD1048576").Delete Shift:=xlUp 'delete all cells outwith report range Application.ScreenUpdating = True Dim counter Dim nameCount nameCount = ActiveWorkbook.Names.Count counter = nameCount Do While counter > 0 ActiveWorkbook.Names(counter).Delete counter = counter - 1 Loop 'remove named ranges from workbook End With ActiveWorkbook.SaveAs "\\Marko\Report\" & sh.Name & ".xlsx" 'Save new workbook using same name as new sheet. ActiveWorkbook.Close False 'Close the new workbook. MsgBox ("Export complete. Choose the next ADP in cell B9 and click 'Calculate'.") 'Display message box to inform user that report has been saved. End Sub Not sure how to make this more efficient or to prevent this error.
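
    One hedged fix, not from the original thread: Shapes.SelectAll pulls every shape into a Selection object, which is where the memory error surfaces; deleting the shapes directly, without selecting them, avoids that and runs faster:

        ' Replace Buttons.Delete / Shapes.SelectAll / Selection.Delete with a
        ' backwards loop that deletes each shape without ever selecting it
        Dim i As Long
        For i = sh.Shapes.Count To 1 Step -1
            sh.Shapes(i).Delete
        Next i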

    Read the article

  • What changed between Excel 2007 and 2010 that is causing my copied worksheet save to fail?

    - by snorehorse
    When I do this in Excel 2010 this fails, but works in Excel 2007: Create a new workbook and insert an image onto a worksheet, or get a preexisting worksheet with an image. Copy the worksheet into a new workbook by clicking the worksheet tab and clicking Move Or Copy and then choosing (new workbook) as the destination. Close the source workbook. Attempt to save the new workbook. The message is: "Errors were detected while saving 'myfilepathhere.xlsx'. Microsoft Excel may be able to save the file by removing or repairing some features. To make the repairs in a new file, click Continue. To cancel saving the file, click Cancel". Clicking Continue brings up another file dialog window followed by more repair errors. It seems behind the scenes it is looking at the source workbook when it tries to save the image in the new destination workbook. No useful error message, of course, thanks Microsoft. But this problem never happened in Excel 2007. The reason why I am closing the source workbook before the save is because I don't need the end user to see it after I programmatically pull a coversheet (with the image) from it, in an interop app. Thanks for any help. Update: I don't encounter this problem if I open the source workbook as "Read Only" (I do this programmatically using Excel Interop).
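
    For reference, a hedged C# interop sketch of that read-only workaround (the paths are hypothetical):

        using Excel = Microsoft.Office.Interop.Excel;

        var app = new Excel.Application();
        // Open the source workbook read-only so the copied image keeps no live link back to it
        Excel.Workbook source = app.Workbooks.Open(@"C:\reports\source.xlsx", ReadOnly: true);
        ((Excel.Worksheet)source.Sheets[1]).Copy();   // copies the cover sheet into a new workbook
        Excel.Workbook copy = app.ActiveWorkbook;
        copy.SaveAs(@"C:\reports\cover.xlsx");
        source.Close(SaveChanges: false);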

    Read the article

  • Checking for valid document files

    - by sweb
    I need a simple way to check if my files are valid documents (pdf, doc, docx, ppt, pptx, xls, xlsx, odt, ods, odp, etc.). I can't use file because its magic detection does not work well at all. For example, for PDF files this is my output:

        sweb@sweb-laptop:/media/files/ebooks/PDF and CHM$ file --mime *.pdf
        PHP 5 for Dummies.pdf: application/pdf; charset=binary
        PHP 6 and MySQL 5 for Dynamic Web Sites.pdf: application/octet-stream; charset=binary
        PHP6 and MySQL Bible.pdf: application/pdf; charset=binary
        PHP6.pdf: application/octet-stream; charset=binary
        PHP and MySQL for Dummies SE.pdf: application/pdf; charset=binary

    For example, I use abiword – which is a good tool – but it converts any format; it doesn't check for valid documents: abiword --to=txt --to-name=output.txt audio.mp3. Is there any command available to check for valid documents then?
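
    One hedged low-tech check for the PDF case that sidesteps libmagic: a valid PDF must begin with the bytes %PDF-, so you can test the header directly (Python sketch):

        def looks_like_pdf(path):
            # a real PDF starts with the magic bytes "%PDF-"
            with open(path, "rb") as f:
                return f.read(5) == b"%PDF-"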

    Read the article

  • WordPress not resizing images with Nginx + php-fpm and other issues

    - by Julian Fernandes
    Recently i setup a Ubuntu 12.04 VPS with 512mb/1ghz CPU, Nginx + php-fpm + Varnish + APC + Percona's MySQL server + CloudFlare Pro for our Ubuntu LoCo Team's WordPress blog. The blog get about 3~4k daily hits, use about 180MB and 8~20% CPU. Everything seems to be working insanely fast... page load is really good and is about 16x faster than any of our competitors... but there is one problem. When we upload a image, WordPress don't resize it, so all we can do it insert the full image in the post. If the imagem have, let's say, 30kb, it resize fine... but if the image have 100kb+, it won't... In nginx error logs i see this: upstream timed out (110: Connection timed out) while reading response header from upstream, client: 150.162.216.64, server: www.ubuntubrsc.com, request: "POST /wp-admin/async-upload.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.ubuntubrsc.com", referrer: "http://www.ubuntubrsc.com/wp-admin/media-upload.php?post_id=2668&" It seems to be related with the issue, but i dunno. When that timeout happens, i started to get it when i'm trying to view a post too: upstream timed out (110: Connection timed out) while reading response header from upstream, client: 150.162.216.64, server: www.ubuntubrsc.com, request: "GET /tutoriais-gimp-6-adicionando-aplicando-novos-pinceis.html HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.ubuntubrsc.com", referrer: "http://www.ubuntubrsc.com/" And only a restart of php5-fpm fix it. I tryed increasing some timeouts and stuffs but it did not worked, so i guess it's some kind of limitation i did not figured yet. Could someone help me with it, please? /etc/nginx/nginx.conf: user www-data; worker_processes 1; pid /var/run/nginx.pid; events { worker_connections 1024; use epoll; multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay off; keepalive_timeout 15; keepalive_requests 2000; types_hash_max_size 2048; server_tokens off; server_name_in_redirect off; open_file_cache max=1000 inactive=300s; open_file_cache_valid 360s; open_file_cache_min_uses 2; open_file_cache_errors off; server_names_hash_bucket_size 64; # server_name_in_redirect off; client_body_buffer_size 128K; client_header_buffer_size 1k; client_max_body_size 2m; large_client_header_buffers 4 8k; client_body_timeout 10m; client_header_timeout 10m; send_timeout 10m; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## error_log /var/log/nginx/error.log; access_log off; ## # CloudFlare's IPs (uncomment when site goes live) ## set_real_ip_from 204.93.240.0/24; set_real_ip_from 204.93.177.0/24; set_real_ip_from 199.27.128.0/21; set_real_ip_from 173.245.48.0/20; set_real_ip_from 103.22.200.0/22; set_real_ip_from 141.101.64.0/18; set_real_ip_from 108.162.192.0/18; set_real_ip_from 190.93.240.0/20; real_ip_header CF-Connecting-IP; set_real_ip_from 127.0.0.1/32; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; gzip_vary on; gzip_proxied any; gzip_comp_level 9; gzip_min_length 1000; gzip_proxied expired no-cache no-store private auth; gzip_buffers 32 8k; # gzip_http_version 1.1; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## # 
Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } /etc/nginx/fastcgi_params: fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param HTTPS $https; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 256 4k; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; /etc/nginx/sites-avaiable/default: ## # DEFAULT HANDLER # ubuntubrsc.com ## server { listen 8080; # Make site available from main domain server_name www.ubuntubrsc.com; # Root directory root /var/www; index index.php index.html index.htm; include /var/www/nginx.conf; access_log off; location / { try_files $uri $uri/ /index.php?q=$uri&$args; } location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { allow all; log_not_found off; access_log off; } location ~ /\. { deny all; access_log off; log_not_found off; } location ~* ^/wp-content/uploads/.*.php$ { deny all; access_log off; log_not_found off; } rewrite /wp-admin$ $scheme://$host$uri/ permanent; error_page 404 = @wordpress; log_not_found off; location @wordpress { include /etc/nginx/fastcgi_params; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_param SCRIPT_NAME /index.php; fastcgi_param SCRIPT_FILENAME $document_root/index.php; } location ~ \.php$ { try_files $uri =404; include /etc/nginx/fastcgi_params; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; if (-f $request_filename) { fastcgi_pass unix:/var/run/php5-fpm.sock; } } } server { listen 8080; server_name ubuntubrsc.* www.ubuntubrsc.net www.ubuntubrsc.org www.ubuntubrsc.com.br www.ubuntubrsc.info www.ubuntubrsc.in; return 301 $scheme://www.ubuntubrsc.com$request_uri; } /var/www/nginx.conf: # BEGIN W3TC Minify cache location ~ /wp-content/w3tc/min.*\.js$ { types {} default_type application/x-javascript; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; } location ~ /wp-content/w3tc/min.*\.css$ { types {} default_type text/css; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; } location ~ /wp-content/w3tc/min.*js\.gzip$ { gzip off; types {} default_type application/x-javascript; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header Content-Encoding gzip; } location ~ 
/wp-content/w3tc/min.*css\.gzip$ { gzip off; types {} default_type text/css; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header Content-Encoding gzip; } # END W3TC Minify cache # BEGIN W3TC Browser Cache gzip on; gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon; location ~ \.(css|js|htc)$ { expires 31536000s; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; } location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ { expires 3600s; add_header Pragma "public"; add_header Cache-Control "max-age=3600, public, must-revalidate, proxy-revalidate"; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; try_files $uri $uri/ $uri.html /index.php?$args; } location ~ \.(asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$ { expires 31536000s; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; } # END W3TC Browser Cache # BEGIN W3TC Minify core rewrite ^/wp-content/w3tc/min/w3tc_rewrite_test$ /wp-content/w3tc/min/index.php?w3tc_rewrite_test=1 last; set $w3tc_enc ""; if ($http_accept_encoding ~ gzip) { set $w3tc_enc .gzip; } if (-f $request_filename$w3tc_enc) { rewrite (.*) $1$w3tc_enc break; } rewrite ^/wp-content/w3tc/min/(.+\.(css|js))$ /wp-content/w3tc/min/index.php?file=$1 last; # END W3TC Minify core # BEGIN W3TC Skip 404 error handling by WordPress for static files if (-f $request_filename) { break; } if (-d $request_filename) { break; } if ($request_uri ~ "(robots\.txt|sitemap(_index)?\.xml(\.gz)?|[a-z0-9_\-]+-sitemap([0-9]+)?\.xml(\.gz)?)") { break; } if ($request_uri ~* \.(css|js|htc|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml|asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$) { return 404; } # END W3TC Skip 404 error handling by WordPress for static files # BEGIN Better WP Security location ~ /\.ht { deny all; } location ~ wp-config.php { deny all; } location ~ readme.html { deny all; } location ~ readme.txt { deny all; } location ~ /install.php { deny all; } set $susquery 0; set $rule_2 0; set $rule_3 0; rewrite ^wp-includes/(.*).php /not_found last; rewrite ^/wp-admin/includes(.*)$ /not_found last; if ($request_method ~* "^(TRACE|DELETE|TRACK)"){ return 403; } set $rule_0 0; if ($request_method ~ "POST"){ set $rule_0 1; } if ($uri ~ "^(.*)wp-comments-post.php*"){ set $rule_0 2$rule_0; } if ($http_user_agent ~ "^$"){ set $rule_0 4$rule_0; } if ($rule_0 = "421"){ return 403; } if ($args ~* "\.\./") { set $susquery 1; } if ($args ~* "boot.ini") { set $susquery 1; } if ($args ~* "tag=") { set $susquery 1; } if ($args ~* "ftp:") { set $susquery 1; } if ($args ~* "http:") { set $susquery 1; } if ($args ~* "https:") { set $susquery 1; } if ($args ~* 
"(<|%3C).*script.*(>|%3E)") { set $susquery 1; } if ($args ~* "mosConfig_[a-zA-Z_]{1,21}(=|%3D)") { set $susquery 1; } if ($args ~* "base64_encode") { set $susquery 1; } if ($args ~* "(%24&x)") { set $susquery 1; } if ($args ~* "(\[|\]|\(|\)|<|>|ê|\"|;|\?|\*|=$)"){ set $susquery 1; } if ($args ~* "(&#x22;|&#x27;|&#x3C;|&#x3E;|&#x5C;|&#x7B;|&#x7C;|%24&x)"){ set $susquery 1; } if ($args ~* "(%0|%A|%B|%C|%D|%E|%F|127.0)") { set $susquery 1; } if ($args ~* "(globals|encode|localhost|loopback)") { set $susquery 1; } if ($args ~* "(request|select|insert|concat|union|declare)") { set $susquery 1; } if ($http_cookie !~* "wordpress_logged_in_" ) { set $susquery "${susquery}2"; set $rule_2 1; set $rule_3 1; } if ($susquery = 12) { return 403; } # END Better WP Security /etc/php5/fpm/php-fpm.conf: pid = /var/run/php5-fpm.pid error_log = /var/log/php5-fpm.log emergency_restart_threshold = 3 emergency_restart_interval = 1m process_control_timeout = 10s events.mechanism = epoll /etc/php5/fpm/php.ini (only options i changed): open_basedir ="/var/www/" disable_functions = pcntl_alarm,pcntl_fork,pcntl_waitpid,pcntl_wait,pcntl_wifexited,pcntl_wifstopped,pcntl_wifsignaled,pcntl_wexitstatus,pcntl_wtermsig,pcntl_wstopsig,pcntl_signal,pcntl_signal_dispatch,pcntl_get_last_error,pcntl_strerror,pcntl_sigprocmask,pcntl_sigwaitinfo,pcntl_sigtimedwait,pcntl_exec,pcntl_getpriority,pcntl_setpriority,dl,system,shell_exec,fsockopen,parse_ini_file,passthru,popen,proc_open,proc_close,shell_exec,show_source,symlink,proc_close,proc_get_status,proc_nice,proc_open,proc_terminate,shell_exec ,highlight_file,escapeshellcmd,define_syslog_variables,posix_uname,posix_getpwuid,apache_child_terminate,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,escapeshellarg,posix_uname,ftp_exec,ftp_connect,ftp_login,ftp_get,ftp_put,ftp_nb_fput,ftp_raw,ftp_rawlist,ini_alter,ini_restore,inject_code,syslog,openlog,define_syslog_variables,apache_setenv,mysql_pconnect,eval,phpAds_XmlRpc,phpA ds_remoteInfo,phpAds_xmlrpcEncode,phpAds_xmlrpcDecode,xmlrpc_entity_decode,fp,fput,virtual,show_source,pclose,readfile,wget expose_php = off max_execution_time = 30 max_input_time = 60 memory_limit = 128M display_errors = Off post_max_size = 2M allow_url_fopen = off default_socket_timeout = 60 APC settings: [APC] apc.enabled = 1 apc.shm_segments = 1 apc.shm_size = 64M apc.optimization = 0 apc.num_files_hint = 4096 apc.ttl = 60 apc.user_ttl = 7200 apc.gc_ttl = 0 apc.cache_by_default = 1 apc.filters = "" apc.mmap_file_mask = "/tmp/apc.XXXXXX" apc.slam_defense = 0 apc.file_update_protection = 2 apc.enable_cli = 0 apc.max_file_size = 10M apc.stat = 1 apc.write_lock = 1 apc.report_autofilter = 0 apc.include_once_override = 0 apc.localcache = 0 apc.localcache.size = 512 apc.coredump_unmap = 0 apc.stat_ctime = 0 /etc/php5/fpm/pool.d/www.conf user = www-data group = www-data listen = /var/run/php5-fpm.sock listen.owner = www-data listen.group = www-data listen.mode = 0666 pm = ondemand pm.max_children = 5 pm.process_idle_timeout = 3s; pm.max_requests = 50 I also started to get 404 errors in front page if i use W3 Total Cache's Page Cache (Disk Enhanced). It worked fine untill somedays ago, and then, out of nowhere, it started to happen. Tonight i will disable my mobile plugin and activate only W3 Total Cache to see if it's a conflict with them... And to finish all this, i have been getting this error: PHP Warning: apc_store(): Unable to allocate memory for pool. 
in /var/www/wp-content/plugins/w3-total-cache/lib/W3/Cache/Apc.php on line 41. I already modified my APC settings, but no success. So... could anyone help me with those issues, please? Ooohh... if it helps, I installed PHP like this: sudo apt-get install php5-fpm php5-suhosin php-apc php5-gd php5-imagick php5-curl And Nginx from the official PPA. Sorry for my bad English and thanks for your time people! (:
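
    One hedged direction for the upload timeout, given the configs above: the image-resize POST may simply be hitting PHP/FastCGI limits, so the usual first move is to raise the relevant knobs on both sides (values illustrative only):

        # nginx (http, server or location block)
        client_max_body_size 8m;      # currently 2m above
        fastcgi_read_timeout 300;     # give slow resizes time to finish

        ; php.ini
        upload_max_filesize = 8M
        post_max_size = 8M            ; currently 2M above
        max_execution_time = 120      ; currently 30
        memory_limit = 256M           ; GD resizing of large JPEGs is memory-hungry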

    Read the article

  • Reading Excel using ClosedXML

    I have used the ClosedXML API to read the Excel file. Here is how you do it. In my testing, this performs better than OpenXML. (LdtExcelData, MobjImportMapper and PobjBackgoundWorker are class members that are not shown in this excerpt.)

        public DataTable ReadDataFromExcelUsingClosedXML()
        {
            string filePath = @"c:\temp\example.xlsx";
            var LobjWorkbook = new XLWorkbook(filePath);
            var LobjWorksheet = LobjWorkbook.Worksheets.First();
            var LobjFullRange = LobjWorksheet.RangeUsed();
            var LobjUsedRange = LobjWorksheet.Range(
                MobjImportMapper.HeaderRowIndex + 1, 1,
                LobjFullRange.RangeAddress.LastAddress.RowNumber,
                LobjFullRange.RangeAddress.LastAddress.ColumnNumber);
            var LiNumberOfcolumnsInTheExcel = LobjUsedRange.ColumnCount();
            // for progress bar
            int LiAggregateRowCounter = MobjImportMapper.HeaderRowIndex;
            int LiTotalNumberOfRows = LobjWorksheet.RowCount() - LiAggregateRowCounter;
            int LiPercentage = 0;
            foreach (var LobjRow in LobjUsedRange.RowsUsed())
            {
                int LiTemp = 0;
                // first slot holds the worksheet row number, then one slot per cell
                object[] LobjrowData = new object[LiNumberOfcolumnsInTheExcel + 1];
                LobjrowData[LiTemp] = LobjRow.RangeAddress.FirstAddress.RowNumber;
                LiTemp++;
                LobjRow.Cells().ForEach(PobjCell => LobjrowData[LiTemp++] = PobjCell.Value);
                LdtExcelData.Rows.Add(LobjrowData);
                // for progress bar
                LiPercentage = ((100 * LiAggregateRowCounter / LiTotalNumberOfRows) / 4) * 3;
                if (LiPercentage > 5)
                    PobjBackgoundWorker.ReportProgress(LiPercentage);
                LiAggregateRowCounter++;
            }
            return LdtExcelData;
        }

    Read the article

  • Getting UPK data into Excel

    - by maria.cozzolino(at)oracle.com
    Did you ever want someone to review your UPK outline outside of the Developer? You can send your outline to an Excel report, which can be distributed through email. Depending on how much additional data you want with your outline, there are two ways you can do this task. Basic data: • You can print a listing of all the items in the outline. • With your outline open, choose File/Print... • Choose the "Save document as" command on the right, and choose Excel (or xlsx). • HINT: If you have not expanded your entire outline, it's faster to use the commands in Developer to expand the entire outline. However, you can expand specific sections by clicking on them in the print preview. • NOTE: If you have the Details view displayed rather than the Player view, you can print all the data that appears in that view. Advanced data: If you desire a more detailed report, you can use the HP Quality Center publishing style, which also creates an Excel file. This style contains a default set of fields for use with Quality Center, but any of the metadata fields can be added to the report, and it can be used for more than just importing into HP Quality Center. To add additional columns to the HP Quality Center publishing style: 1. Make a copy of the publishing style. This process ensures that you have a good copy to revert to if something goes wrong with your customizations, and also allows you to keep your modifications when the software is upgraded. 2. Open the copy of the columnspec.xml file in your favorite XML editor - I use notepad. (This file is located in a language-specific folder in the HP Quality Center publishing style.) 3. Scroll down the columnspec file until you find the column to include. All the metadata fields that can be added to the report are listed in the columnspec file - you just need to tell the system to include the columns. 4. You will see a series of sections like this: 5. Change the value for "col export" to "yes". This will include the column in the Excel file. 6. If desired, change the value for "Play_ModesColHeader" to be whatever name you wish to appear in the Excel column heading. 7. Save the columnspec file. 8. Save the publishing style package. Now, when you publish for HP Quality Center, you will see your newly added columns. You can refer to the section on Customizing HP Quality Center Output in the Content Deployment Guide for additional customization details. Happy customization! I'd be interested in hearing what other uses you have for Excel reporting. Wishing you and yours a happy and healthy New Year! ~~Maria Cozzolino, Manager of Software Requirements and UI

    Read the article

  • Olympics data available for all on Windows Azure SQL Database and Power View

    - by jamiet
    Are you looking around for some decent test data for your BI demos? Well, if so, Microsoft have provided some data about all medals won at the Olympics Games (1900 to 2008) at OlympicsData workbook - Excel, SSIS, Azure sample; it provides analysis over athletes, countries, medal type, sport, discipline and various other dimensions. The data has been provided in an Excel workbook along with instructions on how to load the data into a Windows Azure SQL Database using SQL Server Integration Services (SSIS). Frankly though, the rigmarole of standing up your own Windows Azure SQL Database (ok, SQL Azure database) is both costly (SQL Azure isn’t free) and time consuming (the provided instructions aren’t exactly an idiot’s guide and getting SSIS to work properly with Excel isn’t a barrel of laughs either). To ease the pain for all you BI folks out there that simply want to party on the data I have loaded it all into the SQL Azure database that I use for hosting AdventureWorks on Azure. You can read more about AdventureWorks on Azure below however I’ll summarise here by saying it is a SQL Azure database provided for the use of the SQL Server community and which is supported by voluntary donations. To view the data the credentials you need are: Server mhknbn2kdz.database.windows.net Database AdventureWorks2012 User sqlfamily Password sqlf@m1ly Type those into SSMS and away you go, the data is provided in four tables [olympics].[Sport], [olympics].[Discipline], [olympics].[Event] & [olympics].[Medalist]: I figured this would be a good candidate for a Power View report so I fired up Excel 2013 and built such a report to slice’n’dice through the data – here are some screenshots that should give you a flavour of what is available: A view of all the available data Where do all the gymnastics medals go? Which countries do top ten all-time medal winners come from? You get the idea. There is masses of information here and if you have Excel 2013 handy Power View provides a quick and easy way of surfing through it. To save you the bother of setting up the Power View report yourself you can have the one that I took these screenshots from, it is available on my SkyDrive at OlympicsAnalysis.xlsx so just hit the link and download to play to your heart’s content. Party on, people! As I said above the data is hosted on a SQL Azure database that I use for hosting “AdventureWorks on Azure” which I first announced in March 2013 at AdventureWorks2012 now available for all on SQL Azure. I’ll repeat the pertinent parts of that blog post here: I am pleased to announce that as of today … [AdventureWorks2012] now resides on SQL Azure and is available for anyone, absolutely anyone, to connect to and use for their own means. This database is free for you to use but SQL Azure is of course not free so before I give you the credentials please lend me your ears, er, eyes for a short while longer. AdventureWorks on Azure is being provided for the SQL Server community to use and so I am hoping that that same community will rally around to support this effort by making a voluntary donation to support the upkeep which, going on current pricing, is going to be $119.88 per year. If you would like to contribute to keep AdventureWorks on Azure up and running for that full year please donate via PayPal to [email protected] Any amount, no matter how small, will help. If those 50+ people that retweeted me beforehand all contributed $2 then that would just about be enough to keep this up for a year. 
If the community contributes more than we need then there are a number of additional things that could be done: Host additional databases (Northwind anyone??) Host in more datacentres (this first one is in Western Europe) Make a charitable donation That last one, a charitable donation, is something I would really like to do. The SQL Community have proved before that they can make a significant contribution to charitable organisations through purchasing the SQL Server MVP Deep Dives book and I harbour hopes that AdventureWorks on Azure can continue in that vein. So please, if you think AdventureWorks on Azure is something that is worth supporting please make a contribution. I’d like to emphasize that last point. If my hosting this Olympics data is useful to you please support this initiative by donating. Thanks in advance. @Jamiet
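
    Once connected with the credentials above, a quick sanity check against the four olympics tables (only the table names are taken from the post; everything else is a plain query):

        SELECT COUNT(*) AS MedalistRows FROM [olympics].[Medalist];
        SELECT TOP 10 * FROM [olympics].[Sport];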

    Read the article

  • How to display ppt file in Android views using Docx4j

    - by Ganesh
    I am working on Android and using docx4j to view docx, pptx and xlsx files in my application. I am unable to view the ppt files. I am getting a compile-time error at the SvgExporter class, which is not in the docx4j library. Can anyone help me get the SvgExporter class library so I can build my application and load the SVG HTML into a WebView for ppt files? My code is as follows: String inputfilepath = System.getProperty("user.dir") + "/sample-docs/pptx/pptx-basic.xml"; // Where to save images SvgExporter.setImageDirPath(System.getProperty("user.dir") + "/sample-docs/pptx/"); PresentationMLPackage presentationMLPackage = (PresentationMLPackage)PresentationMLPackage.load(new java.io.File(inputfilepath)); // TODO - render slides in document order! Iterator partIterator = presentationMLPackage.getParts().getParts().entrySet().iterator(); while (partIterator.hasNext()) { Map.Entry pairs = (Map.Entry)partIterator.next(); Part p = (Part)pairs.getValue(); if (p instanceof SlidePart) { System.out.println( SvgExporter.svg(presentationMLPackage, (SlidePart)p) ); } } // NB: file suffix must end with .xhtml in order to see the SVG in a browser }

    Read the article

  • C# StreamReader.ReadLine() - Need to pick up line terminators

    - by Tony Trozzo
    I wrote a C# program to read an Excel .xls/.xlsx file and output to CSV and Unicode text. I wrote a separate program to remove blank records. This is accomplished by reading each line with StreamReader.ReadLine(), and then going character by character through the string and not writing the line to output if it contains all commas (for the CSV) or all tabs (for the Unicode text). The problem occurs when the Excel file contains embedded newlines (\x0A) inside the cells. I changed my XLS to CSV converter to find these new lines (since it goes cell by cell) and write them as \x0A, and normal lines just use StreamWriter.WriteLine(). The problem occurs in the separate program to remove blank records. When I read in with StreamReader.ReadLine(), by definition it only returns the string with the line, not the terminator. Since the embedded newlines show up as two separate lines, I can't tell which is a full record and which is an embedded newline for when I write them to the final file. I'm not even sure I can read in the \x0A because everything on the input registers as '\n'. I could go character by character, but this destroys my logic to remove blank lines. Any ideas would be greatly appreciated.
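
    A hedged sketch of one way out: a replacement for ReadLine() that returns the terminator along with the line, so a lone \n (an embedded cell newline) stays distinguishable from the \r\n that ends a record:

        using System.IO;
        using System.Text;

        static string ReadLineWithTerminator(TextReader reader)
        {
            var sb = new StringBuilder();
            int c;
            while ((c = reader.Read()) != -1)
            {
                sb.Append((char)c);
                if (c == '\n') break;               // bare LF: embedded newline
                if (c == '\r')
                {
                    if (reader.Peek() == '\n')      // CRLF: real end of record
                        sb.Append((char)reader.Read());
                    break;
                }
            }
            return sb.Length > 0 ? sb.ToString() : null;
        }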

    Read the article

  • FileSystemWatcher.Changed fires immediately when Excel 2007 opens XLS file in compatibility mode

    - by Rick Mogstad
    We use a FileSystemWatcher to monitor documents opened from our Document Management system, and if the user saves the document, we ask if they would also like them updated in our system. We have a problem with XLS files in Excel 2007 (have not verified that the problem does not exist in 2003, but it only seems to be files that open in compatibility mode in 2007) where the Changed event fires immediately upon opening the file, and then once more upon closing the file, even if nothing has changed or the user chooses not to save upon closing. This same behavior does not exist when opening XLSX files. I wrote a test app to verify the behavior, which you can find at (http://www.just2guys.net/SOFiles/FSWExcel.zip). In the app, there is one FileSystemWatcher for each NotifyFilter type, so that it is apparent why the Changed event was fired. Any way you can think of to only prompt the user when the document is actually saved in some way by the user? I can start monitoring the file after Process.Start is called, which allows me to skip the message upon opening the document, but I still get one upon closing the document, even when nothing was changed.
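
    One hedged mitigation: in the Changed handler, hash the file and only prompt when the bytes actually differ from the last known state. Note the read can fail while Excel still holds the file open, so a retry may be needed; PromptUserToUpdate is a stand-in for your own prompt logic.

        using System;
        using System.IO;
        using System.Security.Cryptography;

        static string HashFile(string path)
        {
            using (var md5 = MD5.Create())
            using (var fs = File.OpenRead(path))   // may throw while Excel holds a lock
                return Convert.ToBase64String(md5.ComputeHash(fs));
        }

        // in the Changed handler, tracking lastHash per watched file:
        //   var h = HashFile(e.FullPath);
        //   if (h != lastHash) { lastHash = h; PromptUserToUpdate(); }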

    Read the article

  • Excel 2007 file writer in C# results in a corrupt file

    - by Martin
    Hi, I am using a BinaryReader to read an Excel 2007 file from an Exchange mailbox via OWA; the file is then written to disk using a BinaryWriter. My problem is that the two files don't match when the writer finishes. Worse still, Excel 2007 won't open the written file. Previously, Excel 2003 had no problem with the solution below. And Excel 2007 doesn't have an issue if the file is an Excel 2003 format file, only if the file format is Excel 2007 (*.xlsx). BinaryReader: using(System.IO.Stream stream = resource.GetInputStream(attachedFiles[k].Address)) { using(System.IO.BinaryReader br = new System.IO.BinaryReader(stream)) { attachment.Data = new byte[attachedFiles[k].Size]; int bufPosn=0, len=0; while ((len = br.Read( attachment.Data, bufPosn, attachment.Data.Length-bufPosn )) > 0) { bufPosn += len; } br.Close(); } } BinaryWriter: FileStream fs = new FileStream(fileName, FileMode.Create); BinaryWriter binWriter = new BinaryWriter(fs); binWriter.Write( content, 0, content.Length ); binWriter.Close(); fs.Close(); Suggestions gratefully received.
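
    A hedged simplification that sidesteps length mismatches entirely: stream straight from the Exchange input to the file, writing exactly the bytes that were read instead of trusting the reported attachment size:

        using (Stream stream = resource.GetInputStream(attachedFiles[k].Address))
        using (FileStream fs = new FileStream(fileName, FileMode.Create))
        {
            byte[] buffer = new byte[8192];
            int len;
            while ((len = stream.Read(buffer, 0, buffer.Length)) > 0)
                fs.Write(buffer, 0, len);   // write only what was actually read
        }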

    Read the article

  • Automation Error upon running VBA script in Excel

    - by brohjoe
    Hi guys, I'm getting an Automation error upon running VBA code in Excel 2007. I'm attempting to connect to a remote SQL Server DB and load data from Excel to SQL Server. The error I get is, "Run-time error '-2147217843(80040e4d)': Automation error". I checked out the MSDN site and it suggested that this may be due to a bug associated with the sqloledb provider and one way to mitigate this is to use ODBC. Well I changed the connection string to reflect the ODBC provider and associated parameters and I'm still getting the same error. Here is the code with ODBC as the provider: Dim cnt As ADODB.Connection Dim rst As ADODB.Recordset Dim strSQL As String Dim wbBook As Workbook Dim wsSheet As Worksheet Dim rnStart As Range Public Sub loadData() 'This was set up using Microsoft ActiveX Data Components version 6.0. 'Create ADODB connection object, open connection and construct the connection string object. Set cnt = New ADODB.Connection cnt.ConnectionString = _ "Driver={SQL Server}; Server=onlineSQLServer2010.foo.com; Database=fooDB;Uid=logonalready;Pwd='helpmeOB1';" cnt.Open On Error GoTo ErrorHandler 'Open Excel and run query to export data to SQL Server. strSQL = "SELECT * INTO SalesOrders FROM OPENDATASOURCE('Microsoft.ACE.OLEDB.12.0', " & _ "'Data Source=C:\Database.xlsx; Extended Properties=Excel 12.0')...[SalesOrders$]" cnt.Execute (strSQL) 'Error handling. ErrorExit: 'Reclaim memory from the connection objects Set rst = Nothing Set cnt = Nothing Exit Sub ErrorHandler: MsgBox Err.Description, vbCritical Resume ErrorExit 'clean up and reclaim memory resources. cnt.Close If CBool(cnt.State And adStateOpen) Then Set rst = Nothing Set cnt = Nothing End If End Sub

    Read the article

  • Zend file upload error

    - by jgnasser
    I am attempting to upload a file using Zend Framework 1.8 and I get some errors. Here is the code snippet: The form element: $element = new Zend_Form_Element_File('doc'); $element->setLabel('Upload an image:') ->setDestination('/path/to/my/upload/folder'); $element->addValidator('Count', false, 1); $element->addValidator('Size', false, 102400); $element->addValidator('Extension', false, 'jpg,png,gif,doc,docx,xls,xlsx,txt'); $this->addElement($element); The code for handling the upload: $adapter = new Zend_File_Transfer_Adapter_Http(); if (!$adapter->receive()) { $messages = $adapter->getMessages(); echo implode("\n", $messages); } This works fine and the file is uploaded, but I get the error "The file 'doc' was illegal uploaded, possible attack". I managed to get past this problem by not creating a new Zend_File_Transfer_Adapter_Http() but instead using: $adapter = $form->doc->getTransferAdapter(); With this modification the first error disappears, but now I have an error saying I have provided 2 files instead of one (probably it's reading the temp file), and when I adjust the validator to accept two files I then get an error saying "The file 'doc' was not found" and the upload now fails completely. Please help
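
    For reference, a hedged sketch of the usual ZF1 pattern, letting the form element own the transfer rather than constructing a second adapter:

        // Validate the form (this runs the file validators too), then let the
        // element move the upload to the destination set on the form
        if ($form->isValid($_POST)) {
            if (!$form->doc->receive()) {
                echo implode("\n", $form->doc->getMessages());
            } else {
                $path = $form->doc->getFileName(); // full path after receive()
            }
        }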

    Read the article
