Search Results

Search found 4705 results on 189 pages for 'export to csv'.

  • How to have a shell script available everywhere I SSH to

    - by aib
    I have a shell script which I simply cannot do without: bar from Theiling Online. I use SSH a lot, on a variety of *nix servers. However, I am not a system administrator and usually don't have the time or privileges to install it on every server I connect to. It is apparently a very portable sh script and has command line options to export itself as a shell function, which got me thinking: could I use one of OpenSSH's subjectively obscure features to export it everywhere I go? My first thought was to assign the source to an environment variable like BAR="cat -v" and then execute it on the other side as `$BAR`, but 1) I can't even get the cat example to work locally, 2) I don't know how to put the script's actual multiline source into an environment variable, and 3) I have yet to see a machine with PermitUserEnvironment enabled. I could even make do with an ssh option that writes a file called ~/bar at logon, but a more volatile solution would be better. Calling wget http://.../bar at logon would be unacceptable. Any ideas? P.S. PuTTY-specific solutions, though I doubt any exist, are also fine.
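    One direction worth exploring (a sketch, not a tested answer): instead of storing the script in the environment, feed it to the remote shell's standard input at connection time, so nothing ever lands on the server's disk. The host name here is a placeholder:

        import subprocess
        from pathlib import Path

        script = Path.home() / "bar"  # local copy of the script

        # Run the script remotely by piping it into a remote shell;
        # nothing is written to the remote filesystem.
        with open(script, "rb") as f:
            subprocess.run(["ssh", "user@example.com", "sh -s"], stdin=f, check=True)

    This runs the script once per invocation; making it available as a function inside an interactive login shell would still require re-sending it each session.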

  • How to migrate Fedora DS (389 DS) to a new machine?

    - by zengr
    Hello, I am trying to migrate a Fedora DS (1.2.2) to a new server (1.2.7.5). The process has been painful, to say the least. The old server (1.2.2) was itself an upgrade of an older Fedora DS setup, so it does not contain migrate-ds-admin.pl. I found a related question, but its URL no longer opens. I am aware that I need to use migrate-ds-admin.pl, but I am clueless about how. I assume it works like this:
    1. Copy migrate-ds-admin.pl from the server that has 1.2.7 to the 1.2.2 server.
    2. Run migrate-ds-admin.pl to export the schema+ldif from 1.2.2.
    3. Import the schema+ldif into 1.2.7 using migrate-ds-admin.pl.
    If the above is true, what parameters are needed for the export and the import? Note:
        ./ldif2db -n NetscapeRoot -i /root/NetscapeRoot.ldif
        ./ldif2db -n userRoot -i /root/userRoot.ldif
    The above two commands work like a charm, but since the (custom) schema is not migrated, I see a lot of errors during the import.

  • grep --color=auto with -i option disables the matching text color, why?

    - by emptyset
    I was messing around with grep and put this in my .zshenv:
        export GREP_OPTIONS="--color=auto"
        export GREP_COLORS='mt=1;34'
    I was bonking my head on the keyboard and changing GREP_COLORS around for a minute, trying to figure out why the folder colors were working but the matching text wasn't. I was doing this:
        $ grep -R -n -i -e "functionFoo\(" --include=*.cs --exclude-dir=Logs *
    The line numbers and file names were set with the default colors, but the matching text wasn't. After spending way too much time, I thought to do this:
        $ grep -R -n -e "functionFoo\(" --include=*.cs --exclude-dir=Logs *
    (I removed the -i option.) That's all it took to get the matching text to correctly show up in bold blue. This is a Cygwin-on-Vista setup, with rxvt running zsh. Any idea why grep's colors would break on specifying a case-insensitive match? Update: under Cygwin 1.7 it's a little better - the case-insensitive search works correctly, but it only highlights the word that matches the expression's case exactly. In other words, "FunctionFoo" highlights "FunctionFoo" but not "functionFoo", and vice versa. Probably a grep issue, so I'll be submitting it to that list.

  • Importing long numerical identifiers into Excel

    - by Niels Basjes
    I have some data in a database that uses IDs in the form of 16-digit numbers. In some situations I need to export the data in such a way that it can be manipulated in Excel, so I export it to a file and import that into Excel. I've tried several file formats and I'm stuck. The problem I'm facing is that when Excel reads a file containing a cell that looks like a number, it treats it as a number. The catch is that (as far as I can tell) all numerical values in Excel are double-precision floating point, which has a precision of less than 16 digits. So my IDs get changed: very often the last digit is changed to a 0. So far I've only been able to convince Excel to keep an ID unchanged by breaking it myself: by adding a letter or symbol to the ID. This however means that in order to use the value again it must be "unbroken". Is there a way to create a file where I can specify that Excel must treat the value as text without changing the value? Or is there a way to have Excel treat the value as a long (64-bit integer)?
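    One common workaround is to write the ID column as an Excel text formula, ="...", so that Excel imports the cell as a string instead of coercing it to a double. A minimal sketch (the rows here are made up):

        import csv

        rows = [("1234567890123456", "Alice"), ("9876543210987654", "Bob")]  # hypothetical data

        with open("export.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["id", "name"])
            for id_, name in rows:
                # ="..." is parsed by Excel as a formula returning a string,
                # so the 16-digit ID never becomes a float and no trailing
                # digits are rounded away.
                writer.writerow(['="{}"'.format(id_), name])

    The other route is Excel's text import wizard, which lets you mark the column as Text, but that only helps if every user imports the file rather than double-clicking it.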

  • Faking a Linux environment without chroot

    - by Pascal
    For a university project I want to test a C++11 program on a 32-core machine. Unfortunately the machine has Ubuntu 12.04 with GCC 4.6 installed (we need GCC 4.7 because of some C++11 threading features). In such an environment I would normally run a chroot with a custom Linux (say, a debootstrap of Ubuntu 12.10). Since we don't get root access on the machine, we can't use chroot. So far I have prepared a run-time environment for our code using debootstrap, compiled the code inside the debootstrap environment, then copied it onto the server (using rsync). In order to run our C++ code I set LD_LIBRARY_PATH:
        export LD_LIBRARY_PATH=~/debootstrap/usr/lib/:~/debootstrap/lib64/:~/debootstrap/usr/lib/x86_64-linux-gnu/:~/debootstrap/lib/x86_64-linux-gnu/:$LD_LIBRARY_PATH
    and so far our code seems to run. I'm stuck, however, with our Python code; it doesn't seem to be sufficient to set the paths manually:
        export PYTHONPATH=~/debootstrap/usr/lib/python2.7/dist-packages:~/debootstrap/usr/lib/python2.7:~/debootstrap/usr/lib/python2.7/plat-linux2:~/debootstrap/usr/lib/python2.7/lib-tk:~/debootstrap/usr/lib/python2.7/lib-dynload:~/debootstrap/usr/local/lib/python2.7/dist-packages:~/debootstrap/usr/lib/pymodules/python2.7:~/debootstrap/usr/lib/python2.7/dist-packages/PIL:~/debootstrap/usr/lib/python2.7/dist-packages/gtk-2.0:~/debootstrap/usr/lib/python2.7
    Executing our script results in:
        ImportError: No module named _path
    Is there an easier way to accomplish a "fake" chroot than overriding and creating environment variables? Note: I need Python since we created a custom C++/Python module in order to run our tests. Maybe I should create two questions from this.

  • MySQL slave server from dumps

    - by HTF
    I've created a slave server from a live machine, which is now acting as the master. I used the following procedure to create it:
        mysqldump --opt -Q -B --master-data=2 --all-databases > dump.sql
    I then imported this dump on the new machine and applied the "CHANGE MASTER TO..." directive with the log file/position from the dump. Please note that I have around 8000 databases and I didn't stop the master while the dump was running. The replication works fine, but is this a proper method for creating a slave server? I'm planning to promote this slave to a master (different location), so I would like to make sure that there is 100% data consistency between the servers. I've found an article where it says: "The naive approach is just to use mysqldump to export a copy of the master and load it on the slave server. This works if you only have one database. With multiple databases, you'll end up with inconsistent data. Mysqldump will dump data from each database on the server in a different transaction. That means that your export will have data from a different point in time for each database." Thank you
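    For InnoDB tables, the usual way to get a point-in-time-consistent dump of all databases without stopping the master is mysqldump's --single-transaction option. A sketch of the idea, wrapped in Python only for illustration (credentials are assumed to come from the default my.cnf):

        import subprocess

        # --single-transaction opens one consistent read snapshot for the
        # whole dump (InnoDB only), and --master-data=2 records the matching
        # binlog coordinates as a comment, so every database is captured at
        # the same point in time.
        with open("dump.sql", "wb") as out:
            subprocess.run(
                ["mysqldump", "--opt", "-Q", "--single-transaction",
                 "--master-data=2", "--all-databases"],
                stdout=out, check=True)

    This guarantee only covers transactional tables; any MyISAM tables can still change mid-dump.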

  • What does this example bash startup script do?

    - by Dimitri
    I am trying to set up GNU Octave on my computer (Mac OS X 10.7.4). I am a newbie at using the Terminal and I need help understanding what the following script actually does:
        if [ -f ~/.bashrc ]; then
            . ~/.bashrc
        fi
        PATH=$PATH:/usr/local/bin
        BASH_ENV=~/.bashrc
        export BASH_ENV PATH
        export GNUTERM=aqua
        alias octave="/Applications/Octave.app/Contents/Resources/bin/octave"
        alias gnuplot="/Applications/Gnuplot.app/Contents/Resources/bin/gnuplot"
    (taken from here: http://wikibox.stanford.edu/me112/index.php/Main/OctaveMatlabNotes) So this script begins with a simple conditional if statement. I don't understand the conditional expression - what are -f and .bashrc? What does the statement . ~/.bashrc actually do? Then two variables are defined, PATH and BASH_ENV. Why are they exported? Why is GNUTERM=aqua exported even though it isn't defined anywhere? All I need is a script that would allow me to run Octave by simply typing octave in the terminal. I don't need an alias for gnuplot. Thanks

  • Creating "save as" functionality to .eml file in AppleScript

    - by unieater
    I'm new to AppleScript and trying to figure out how to save a Mail.app message as an .eml file. Ideally, it would act like the Mail menu bar action, which saves the message and its attachments together. The workflow is that you have a selection in Mail, hit a hotkey, and the function creates a filename (newFile) for the email to be saved. I just need help with how to save the message to the path (theFolder) in .eml format.
        tell application "Mail"
            set msgs to selection
            if length of msgs is not 0 then
                display dialog "Export selected message(s)?"
                if the button returned of the result is "OK" then
                    set theFolder to choose folder with prompt "Save Exported Messages to..." without invisibles
                    repeat with msg in msgs
                        -- determine date received of msg and put into YYYYMMDD format
                        set msgDate to date received of msg
                        -- parse the date components using pad2() below
                        set {year:y, month:m, day:d, hours:h, minutes:min} to (msgDate)
                        set msgDate to ("" & y & my pad2(m as integer) & my pad2(d))
                        -- assign subject of msg
                        set msgSubject to (subject of msg)
                        -- create filename.eml to be used as the saved title
                        set newFile to (msgDate & "_" & msgSubject & ".eml") as Unicode text
                        -- copy mail message to the folder and prepend date-time to file name
                        -- THIS IS WHERE I AM COMPLETELY LOST: HOW TO SAVE THE EMAIL into theFolder
                    end repeat
                    beep 2
                    display dialog "Done exporting " & length of msgs & " messages."
                end if -- OK to export msgs
            end if -- msgs > 0
        end tell

        on pad2(n)
            return text -2 thru -1 of ("00" & n)
        end pad2

  • Ubuntu Launcher Items Don't Have Correct Environment Vars under NX

    - by ivarley
    I've got an environment variable issue I'm having trouble resolving. I'm running Ubuntu (Karmic, 9.10) and coming in via NX (NoMachine) from a Mac. I've added several environment variables in my .bashrc file, e.g.:
        export JAVA_HOME=$HOME/dev/tools/Linux/jdk/jdk1.6.0_16/
    Sitting at the machine, this environment variable is available on the command line, as well as for apps I launch from the Main Menu. Coming in over NX, however, the environment variable shows up correctly on the command line, but NOT when I launch things via the launcher. As an example, I created a simple shell script called testpath in my home folder:
        #!/bin/sh
        echo $PATH && sleep 5
    I gave it execute privileges:
        chmod +x testpath
    And then I created a launcher item in my Main Menu that simply runs:
        ./testpath
    When I'm sitting at the computer, this launcher runs and shows all the stuff I put into the $PATH variable in my .bashrc file (e.g. $JAVA_HOME, etc). But when I come in over NX, it shows a totally different value for the $PATH variable, despite the fact that if I launch a terminal window (still in NX) and type echo $PATH, it shows up correctly. I assume this has to do with which files get loaded by the windowing system over NX, and that it's some other file. But I have no idea how to fix it. For the record, I also have a .profile file with the following in it:
        # if running bash
        if [ -n "$BASH_VERSION" ]; then
            # include .bashrc if it exists
            if [ -f "$HOME/.bashrc" ]; then
                . "$HOME/.bashrc"
            fi
        fi

  • Extracting information from active directory

    - by Nop at NaDa
    I work in the IT support department of a branch of a huge company. I have to take care of a database with all the users, computers, etc. I'm trying to find a way to update the database automatically as much as possible, but the IT infrastructure guys don't give me enough privileges to use Active Directory to dump the users, nor do they have the time to give me the information that I need. Some days ago I found Active Directory Explorer from Sysinternals, which lets me browse through Active Directory, and I found all the information that I need there (username, real name, date when it was created, privileges, company, etc.). Unfortunately I'm unable to export the data to a human-readable format; I'm only able to take a snapshot of the whole database in a machine-readable format. Taking the snapshot takes hours, and I'm afraid the infrastructure guys won't like me doing entire snapshots on a regular basis. Do you know of any tool (command-line is preferable) that would allow me to retrieve the values of the keys or export them to XML, CSV, etc.?
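    Since AD Explorer can read the directory, the same account presumably has plain LDAP read access, and a scripted LDAP query is far cheaper than a full snapshot. A sketch using the third-party Python ldap3 library - the server name, base DN, and attribute list are all placeholders to adapt:

        import csv
        from ldap3 import Server, Connection, SUBTREE

        server = Server("dc01.example.com")  # hypothetical domain controller
        conn = Connection(server, user="EXAMPLE\\myuser", password="secret",
                          auto_bind=True)

        attrs = ["sAMAccountName", "displayName", "whenCreated", "company"]
        conn.search("dc=example,dc=com", "(objectClass=user)",
                    search_scope=SUBTREE, attributes=attrs)

        with open("users.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(attrs)
            for entry in conn.entries:
                writer.writerow([entry[a].value for a in attrs])

    Two caveats: AD returns at most 1000 entries per page by default, so a large domain needs a paged search (ldap3's paged_size argument), and on the Windows side the built-in csvde.exe can produce a similar CSV export if it is available to you.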

  • Weblogic WLST classpath

    - by user43736
    When I run the WLST setWLSEnv.sh script to set the env as follows, why can't I see the updated path when I do echo?
        [linbox2 bin]$ ./setWLSEnv.sh
        CLASSPATH=/directory/ols_wls/patch_wlss1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:
        /directory/ols_wls/patch_wls1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:
        /directory/ols_wls/patch_oepe1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:
        /directory/ols_wls/patch_ocm1031/profiles/default/sys_manifest_classpath/weblogic_patch.jar:
        /directory/ols_wls/jrockit_160_14_R27.6.5-32/lib/tools.jar:
        /directory/ols_wls/utils/config/10.3/config-launch.jar:
        /directory/ols_wls/wlserver_10.3/server/lib/weblogic_sp.jar:
        /directory/ols_wls/wlserver_10.3/server/lib/weblogic.jar:
        /directory/ols_wls/modules/features/weblogic.server.modules_10.3.2.0.jar:
        /directory/ols_wls/wlserver_10.3/server/lib/webservices.jar:
        /directory/ols_wls/modules/org.apache.ant_1.7.0/lib/ant-all.jar:
        /directory/ols_wls/modules/net.sf.antcontrib_1.0.0.0_1-0b2/lib/ant-contrib.jar:
        PATH=/directory/ols_wls/wlserver_10.3/server/bin:
        /directory/ols_wls/modules/org.apache.ant_1.7.0/bin:
        /directory/ols_wls/jrockit_160_14_R27.6.5-32/jre/bin:
        /directory/ols_wls/jrockit_160_14_R27.6.5-32/bin:
        /usr/kerberos/bin:
        /usr/local/bin:
        /bin:
        /usr/bin:
        /usr/X11R6/bin:
        /usr/java/j2sdk1.4.2_11/bin/bin:
        /home/oracle/bin:
        /directory/wls_olwcs/jdk160_14_R27.6.5-32/bin:
        /directory/ccanywhere81/bin:/directory/oracle/oracle/product/10.2.0/client_1/bin
        Your environment has been set.
        [linbox2 bin]$ export CLASSPATH
        [linbox2 bin]$ export PATH
        [linbox2 bin]$ echo $PATH
        /usr/kerberos/bin:
        /usr/local/bin:
        /bin:
        /usr/bin:
        /usr/X11R6/bin:
        /usr/java/j2sdk1.4.2_11/bin/bin:
        /home/oracle/bin:
        /directory/wls_olwcs/jdk160_14_R27.6.5-32/bin:
        /directory/ccanywhere81/bin:
        /directory/oracle/oracle/product/10.2.0/client_1/bin
        [linbox2 bin]$
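    The transcript pattern - running ./setWLSEnv.sh as a command, then export, then echo - suggests the script ran in a child shell, and a child process can never modify its parent's environment; sourcing it instead (. ./setWLSEnv.sh) is the usual fix. That diagnosis is an inference, but the underlying rule is easy to demonstrate:

        import os
        import subprocess

        # A child process sets and exports a variable, then exits.
        subprocess.run(["sh", "-c", "CLASSPATH=/tmp/demo.jar; export CLASSPATH"])

        # The parent is untouched: environment variables flow from parent
        # to child at process creation, never back up.
        print(os.environ.get("CLASSPATH"))  # prints None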

  • Windows 7 - "Magic" frequent folder

    - by TheAdamGaskins
    Every week, I export an mp3 file from Audacity into a folder named with that day's date (e.g. this past Sunday I exported the file to a folder named 20130609). Then I close everything, and that's it for a while. A few hours later, I come back to upload the file to FTP. I usually have some folders open, so to open a new one I right-click the folder icon on the taskbar... to open a new folder window and browse to the folder I just created, right? Well, I look up a little, and the folder I just created is already sitting in the taskbar's frequent-folders list (screenshot omitted). So I click it and upload the file, and it actually saves me 30 seconds, which is really awesome... but what in the world? It happens every single week without fail. I create the folder inside the Audacity export window. The folder stays on the frequent list until I create a new folder the following week. This was definitely not an advertised feature of Windows 7, and it's extremely handy... but it really just seems like magic to me. How does it work?

  • uploadify scriptData problem

    - by elpaso66
    Hi, I'm having problems with scriptData in Uploadify. I'm pretty sure the config syntax is fine, but whatever I do, scriptData is not passed to the upload script. I tested in both FF and Chrome with Shockwave Flash 9.0 r31. This is the config:
        $(document).ready(function() {
            $('#id_file').uploadify({
                'uploader'       : '/media/filebrowser/uploadify/uploadify.swf',
                'script'         : '/admin/filebrowser/upload_file/',
                'scriptData'     : {'session_key': 'e1b552afde044bdd188ad51af40cfa8e'},
                'checkScript'    : '/admin/filebrowser/check_file/',
                'cancelImg'      : '/media/filebrowser/uploadify/cancel.png',
                'auto'           : false,
                'folder'         : '',
                'multi'          : true,
                'fileDesc'       : '*.html;*.py;*.js;*.css;*.jpg;*.jpeg;*.gif;*.png;*.tif;*.tiff;*.mp3;*.mp4;*.wav;*.aiff;*.midi;*.m4p;*.mov;*.wmv;*.mpeg;*.mpg;*.avi;*.rm;*.pdf;*.doc;*.rtf;*.txt;*.xls;*.csv;',
                'fileExt'        : '*.html;*.py;*.js;*.css;*.jpg;*.jpeg;*.gif;*.png;*.tif;*.tiff;*.mp3;*.mp4;*.wav;*.aiff;*.midi;*.m4p;*.mov;*.wmv;*.mpeg;*.mpg;*.avi;*.rm;*.pdf;*.doc;*.rtf;*.txt;*.xls;*.csv;',
                'sizeLimit'      : 10485760,
                'scriptAccess'   : 'sameDomain',
                'queueSizeLimit' : 50,
                'simUploadLimit' : 1,
                'width'          : 300,
                'height'         : 30,
                'hideButton'     : false,
                'wmode'          : 'transparent',
                translations     : {
                    browseButton: 'BROWSE',
                    error: 'An Error occured',
                    completed: 'Completed',
                    replaceFile: 'Do you want to replace the file',
                    unitKb: 'KB',
                    unitMb: 'MB'
                }
            });
            $('input:submit').click(function(){
                $('#id_file').uploadifyUpload();
                return false;
            });
        });
    I checked that other values (the file name) are passed correctly, but session_key is not. This is the decorator code from django-filebrowser; you can see it checks request.POST.get('session_key') - the problem is that request.POST is empty.
        def flash_login_required(function):
            """
            Decorator to recognize a user by its session.
            Used for Flash-Uploading.
            """
            def decorator(request, *args, **kwargs):
                try:
                    engine = __import__(settings.SESSION_ENGINE, {}, {}, [''])
                except:
                    import django.contrib.sessions.backends.db
                    engine = django.contrib.sessions.backends.db
                print request.POST
                session_data = engine.SessionStore(request.POST.get('session_key'))
                user_id = session_data['_auth_user_id']
                # will return 404 if the session ID does not resolve to a valid user
                request.user = get_object_or_404(User, pk=user_id)
                return function(request, *args, **kwargs)
            return decorator

  • Function not working in R

    - by user3722828
    I've never programmed before and am trying to learn. I'm following the Coursera course on R programming offered by Johns Hopkins, which I've seen other people post about. Anyway, this was supposed to be my first function, yet it doesn't work - even though when I type out all the steps individually, they run just fine. Can anyone tell me why?
        > pollutantmean <- function(directory, pollutant, id = 1:332){
        +     x <- list.files("/Users/mike******/Desktop/directory", full.names=TRUE)
        +     y <- lapply(x, read.csv)
        +     z <- do.call(rbind.data.frame, y[id])
        +
        +     mean(z$pollutant, na.rm=TRUE)
        + }
        > pollutantmean(specdata,nitrate,1:10)
        [1] NA
        Warning message:
        In mean.default(z$pollutant, na.rm = TRUE) :
          argument is not numeric or logical: returning NA
    Typing the steps out individually:
        > x <- list.files("/Users/mike******/Desktop/specdata", full.names=TRUE)
        > y <- lapply(x, read.csv)
        > z <- do.call(rbind.data.frame, y[1:10])
        > mean(z$nitrate, na.rm=TRUE)
        [1] 0.7976266
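    A likely culprit (an inference from the code, not something stated in the post): $ in R takes a literal column name, so z$pollutant looks for a column actually named "pollutant" instead of using the argument's value - z[[pollutant]] would use the variable. The same literal-versus-variable distinction, sketched in Python/pandas with made-up data since it fits in a couple of lines:

        import pandas as pd

        df = pd.DataFrame({"nitrate": [0.5, 1.0, None]})
        pollutant = "nitrate"

        # Attribute-style access is literal: df.pollutant fails because no
        # column is literally named "pollutant". Indexing with the variable
        # uses its value, like R's z[[pollutant]].
        print(df[pollutant].mean())  # 0.75 (the missing value is skipped)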

  • Accessing Yahoo realtime stock quotes

    - by DVK
    There's a fairly easy way of retrieving 15-minute-delayed quotes off of the Yahoo! Finance web site (the "quotes.csv" API). However, so far I've been unable to find any info on how to access real-time quotes. The hang-ups with real-time quotes are:
    1. They're only available to logged-in users.
    2. There's no API.
    3. It's non-obvious how to scrape the info - I'm somewhat convinced they are placed on the page by some weird Ajax call.
    So I was wondering if anyone has managed to develop a publicly available solution to retrieve real-time quotes for a stock from Yahoo! Finance. Notes:
    - The implementation language/framework is flexible, but Perl or Excel is highly preferred.
    - Assume that security is not an issue - I'm willing to supply my Yahoo user ID and password, even in cleartext.
    - I'm not 100% hung up on Yahoo - they are merely the only provider of free real-time stock quotes I'm familiar with. If the same thing can be done with Google Finance, I'd be just as happy.
    - This is for a personal project, so scalability/fault tolerance/etc. are not important.
    - I'm looking for a "do the whole retrieval" library ideally, but if I'm pointed to partial solutions (e.g. how to retrieve info from Yahoo's logged-in pages, or how to scrape real-time quotes from Yahoo's page) I can fill in the blanks.
    I saw Finance::YahooQuote, but it does not seem to allow you to supply log-in information and appears to use the lagging quotes.csv API. Thanks!
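    For reference, the delayed quotes.csv interface mentioned above worked roughly like the sketch below. This is the 15-minute-delayed feed, not the real-time one being asked about, the service has long since been retired, and the format codes here are from memory, so treat it purely as illustration:

        import csv
        import io
        import urllib.request

        # s = symbol, l1 = last trade price, d1 = last trade date, t1 = last trade time
        url = "http://download.finance.yahoo.com/d/quotes.csv?s=AAPL+GOOG&f=sl1d1t1"

        with urllib.request.urlopen(url) as resp:
            body = resp.read().decode("ascii", errors="replace")

        for symbol, price, date, time in csv.reader(io.StringIO(body)):
            print(symbol, price, date, time)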

  • Implementing Naïve Bayes algorithm in Java - Need some guidance

    - by techventure
    Hello Stack Overflow people. As a school assignment I'm required to implement the Naïve Bayes algorithm, which I intend to do in Java. In trying to understand how it's done, I've read the book "Data Mining - Practical Machine Learning Tools and Techniques", which has a section on this topic, but I'm still unsure about some primary points that are blocking my progress. Since I'm seeking guidance, not a solution, I'll tell you what I'm thinking and what I think is the correct approach, and in return ask for correction/guidance, which will be very much appreciated. Please note that I'm an absolute beginner at the Naïve Bayes algorithm, data mining, and programming in general, so you might see stupid comments/calculations below. The training data set I'm given has 4 attributes/features that are numeric and normalized (in the range [0, 1]) using Weka (no missing values), and one nominal class (yes/no).
    1. The data coming from the CSV file is numeric, hence, given that the attributes are numeric, I use the PDF (probability density function) formula.
    2. To calculate the PDF in Java, I first separate the attribute values based on whether they're in class yes or class no and hold them in different arrays (one for class yes and one for class no).
    3. Then I calculate the mean (sum of the values / number of values) and the standard deviation for each of the 4 attributes (columns) of each class.
    4. Now, to find the PDF of a given value n, I do (n-mean)^2/(2*SD^2).
    5. Then, to find P(yes | E) and P(no | E), I multiply the PDF values of all 4 given attributes and compare which is larger, which indicates the class the instance belongs to.
    In terms of Java, I'm using ArrayLists of ArrayLists and Doubles to store the attribute values. Lastly, I'm unsure how to get new data: should I ask for an input file (like a CSV), or prompt for 4 values on the command line? I'll stop here for now (I do have more questions), but I'm worried this won't get any responses given how long it's gotten. I will really appreciate those who give their time to reading my problems and commenting.
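    One detail worth checking against the book: the full Gaussian density is f(n) = exp(-(n-mean)^2 / (2*sd^2)) / (sqrt(2*pi)*sd) - the (n-mean)^2/(2*SD^2) expression in point 4 is only the argument of the exponent, and on its own it grows for values far from the mean instead of shrinking. Also, each class's prior probability P(yes) or P(no) multiplies into the product in point 5. A small sketch of the whole per-class computation (in Python for brevity, though the assignment is in Java; all the statistics below are made up):

        import math

        def gaussian_pdf(x, mean, sd):
            # Gaussian probability density, used by naive Bayes for numeric attributes.
            return math.exp(-(x - mean) ** 2 / (2 * sd ** 2)) / (math.sqrt(2 * math.pi) * sd)

        # Hypothetical per-attribute (mean, sd) pairs learned from each class's rows.
        stats = {
            "yes": [(0.7, 0.1), (0.4, 0.2), (0.6, 0.1), (0.3, 0.2)],
            "no":  [(0.2, 0.1), (0.5, 0.2), (0.3, 0.1), (0.8, 0.2)],
        }
        priors = {"yes": 0.5, "no": 0.5}  # fraction of training rows in each class

        def classify(instance):
            # Multiply the class prior by each attribute's density;
            # the class with the larger product wins.
            scores = {}
            for cls, params in stats.items():
                score = priors[cls]
                for x, (mean, sd) in zip(instance, params):
                    score *= gaussian_pdf(x, mean, sd)
                scores[cls] = score
            return max(scores, key=scores.get)

        print(classify([0.65, 0.45, 0.55, 0.35]))  # -> "yes" with these made-up stats

    For new data, a file path argument pointing at another CSV in the same format is usually the easiest thing for a grader to run.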

  • using paperclip with secure and non-secure files

    - by crankharder
    First off, we have this namespaced/STI'd structure for our different types of 'Media':
        Media < AR::Base
        Media::Local < Media
        Media::Local::Image < Media::Local
        Media::Local::Csv < Media::Local
        etc.
    This is excellent, since a user can have many media, and how we display each piece of media is based on its class name and a corresponding partial. But what if we have some Csvs that need to be secure? That is, they can't reside inside public/. I really hate the idea of branching Media again and doing something like this:
        Media::Secure < Media
        Media::Secure::Image < Media::Secure
        Media::NotSecure < Media
        Media::NotSecure::Image < Media::NotSecure
    ...where Secure and NotSecure would have different params passed to has_attached_file. Now there are two classes that represent Image, which makes the view/helper system that much more complicated - not to mention it feels very obtuse. What I would really like is to be able to change where certain Paperclip::Attachment objects get saved before they are saved (e.g. anything uploaded through foo_secure_action), but I can't seem to make this work. Paperclip::Attachment has an @options hash with :path and :url, but changing those before the save doesn't have any effect on where the file actually ends up. Even if this were possible, I'm not sure whether it would have further consequences... I'm open to alternative ideas for structuring this data, but for the moment I like the idea of using STI for this situation.

  • Issues with cross-domain uploading

    - by meder
    I'm using a Django plugin called django-filebrowser, which utilizes Uploadify. The issue I'm having is that I'm hosting uploadify.swf on a remote static media server, whereas my admin area is on my Django server. At first, the browse button wouldn't invoke my browser's upload dialog; I fixed this by changing scriptAccess from sameDomain to always. Now the progress bar doesn't move at all. I probably have to enable some server setting for cross-domain file uploading - or, most likely, actually host a separate upload script on my media server. I thought I could solve this by adding a crossdomain.xml allowing any site at the root of both servers, but that doesn't seem to solve it.
        $(document).ready(function() {
            $('#id_file').uploadify({
                'uploader'       : 'http://media.site.com:8080/admin/filebrowser/uploadify/uploadify.swf',
                'script'         : '/admin/filebrowser/upload_file/',
                'scriptData'     : {'session_key': '...'},
                'checkScript'    : '/admin/filebrowser/check_file/',
                'cancelImg'      : 'http://media.site.com:8080/admin/filebrowser/uploadify/cancel.png',
                'auto'           : false,
                'folder'         : '',
                'multi'          : true,
                'fileDesc'       : '*.html;*.py;*.js;*.css;*.jpg;*.jpeg;*.gif;*.png;*.tif;*.tiff;*.mp3;*.mp4;*.wav;*.aiff;*.midi;*.m4p;*.mov;*.wmv;*.mpeg;*.mpg;*.avi;*.rm;*.pdf;*.doc;*.rtf;*.txt;*.xls;*.csv;',
                'fileExt'        : '*.html;*.py;*.js;*.css;*.jpg;*.jpeg;*.gif;*.png;*.tif;*.tiff;*.mp3;*.mp4;*.wav;*.aiff;*.midi;*.m4p;*.mov;*.wmv;*.mpeg;*.mpg;*.avi;*.rm;*.pdf;*.doc;*.rtf;*.txt;*.xls;*.csv;',
                'sizeLimit'      : 10485760,
                'scriptAccess'   : 'always',
                //'scriptAccess' : 'sameDomain',
                'queueSizeLimit' : 50,
                'simUploadLimit' : 1,
                'width'          : 300,
                'height'         : 30,
                'hideButton'     : false,
                'wmode'          : 'transparent',
                translations     : {
                    browseButton: 'BROWSE',
                    error: 'An Error occured',
                    completed: 'Completed',
                    replaceFile: 'Do you want to replace the file',
                    unitKb: 'KB',
                    unitMb: 'MB'
                }
            });
            $('input:submit').click(function(){
                $('#id_file').uploadifyUpload();
                return false;
            });
        });
    The page I'm viewing this on is http://site.com/admin/filebrowser/browse, on port 80.

  • Magento MAGMI: Product attributes (custom options) not showing up in import

    - by Rodgers and Hammertime
    When importing a CSV into Magento with the MAGMI importing tool, I am unable to import custom options (as in Size: Small/Medium/Large). The import manages to put in the basic products, but the custom options don't transfer across. By custom options I mean the fields found in the Custom Options menu: Title, Input Type, Is Required, and Sort Order for the option itself, and then Title, Price, Price Type, SKU, and Sort Order for each of its values, and so on. Even using the example CSV from the MAGMI SourceForge wiki:
        sku,name,description,price,Size:drop_down:1
        T-Shirt1,T-Shirt,A T-Shirt,5.00,Small|Medium|Large
        T-Shirt2,T-Shirt2,Another T-Shirt,6.00,XS|S|M|L|XL
    ...it fails to import the attributes. So I'm simply using MAGMI with the supplied example data from SourceForge on a blank Magento product list, and it doesn't transfer properly. Can anyone shed any light on what might be wrong? I am using Magento ver. 1.6.1.0, if that changes anything. Thanks.

  • FancyURLOpener failing since moving to python 3.1.2

    - by Andrew Shepherd
    I had an application that was downloading a .CSV file from a password-protected website and then processing it further. I was using FancyURLopener and simply hardcoding the username and password (obviously, security is not a high priority in this particular instance). Since downloading Python 3.1.2, this code has stopped working. Does anyone know of the changes that have happened to the implementation? Here is a cut-down version of the code:
        import urllib.request

        class TracOpener(urllib.request.FancyURLopener):
            def prompt_user_passwd(self, host, realm):
                return ('andrew_ee', '_my_unencrypted_password')

        csvUrl = 'http://mysite/report/19?format=csv@USER=fred_nukre'
        opener = TracOpener()
        f = opener.open(csvUrl)
        s = f.read()
        f.close()
        s
    For the sake of completeness, here's the entire call stack:
        Traceback (most recent call last):
          File "C:\reporting\download_csv_file.py", line 12, in <module>
            f = opener.open(csvUrl);
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1454, in open
            return getattr(self, name)(url)
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1628, in open_http
            return self._open_generic_http(http.client.HTTPConnection, url, data)
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1624, in _open_generic_http
            response.status, response.reason, response.msg, data)
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1640, in http_error
            result = method(url, fp, errcode, errmsg, headers)
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1878, in http_error_401
            return getattr(self,name)(url, realm)
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1950, in retry_http_basic_auth
            return self.open(newurl)
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1454, in open
            return getattr(self, name)(url)
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1628, in open_http
            return self._open_generic_http(http.client.HTTPConnection, url, data)
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1590, in _open_generic_http
            auth = base64.b64encode(user_passwd).strip()
          File "C:\Program Files\Python31\lib\base64.py", line 56, in b64encode
            raise TypeError("expected bytes, not %s" % s.__class__.__name__)
        TypeError: expected bytes, not str
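    The last two frames point at the cause: _open_generic_http hands the "user:password" string straight to base64.b64encode, which in Python 3 only accepts bytes, so this looks like a bytes/str bug in that code path of 3.1.2 rather than anything in the script itself. One sketch of a workaround is to sidestep FancyURLopener and use the handler-based API in the same module:

        import urllib.request

        csv_url = 'http://mysite/report/19?format=csv@USER=fred_nukre'

        # Register the credentials with a password manager and let the
        # basic-auth handler answer the 401 instead of FancyURLopener.
        mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
        mgr.add_password(None, csv_url, 'andrew_ee', '_my_unencrypted_password')
        opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))

        f = opener.open(csv_url)
        s = f.read()
        f.close()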

  • Zend Framework: Zend_translate and routing related issue

    - by Dan
    I have implemented Zend_Navigation and Zend_Translate in my application. The routing is set up in Bootstrap.php like this:
        $fc = Zend_Controller_Front::getInstance();
        $zl = new Zend_Locale();
        Zend_Registry::set('Zend_Locale', $zl);
        $lang = $zl->getLanguage() . '_' . $zl->getRegion();
        $router = $fc->getRouter();
        $route = new Zend_Controller_Router_Route(':lang/:module/:controller/:action/*', array(
            'lang' => $lang, 'module' => 'default', 'controller' => 'index', 'action' => 'index'
        ));
        $router->addRoute('default', $route);
        $fc->setRouter($router);
        $fc->registerPlugin(new Plugin_LanguageSetup());
    In the LanguageSetup plugin I have defined the dispatchLoopStartup method to check the language param:
        public function dispatchLoopStartup(Zend_Controller_Request_Abstract $request) {
            $this->createLangUrl($request);
            $this->_language = $request->getParam('lang');
            if ((!isset($this->_language)) || !in_array($this->_language, $this->_languagesArray)) {
                $this->_language = 'en_US';
                $request->setParam('lang', 'en_US');
            }
            $file = APPLICATION_PATH . $this->_directory . $this->_language . '.csv';
            $translate = new Zend_Translate('csv', $file, $this->_language);
            Zend_Registry::set('Zend_Translate', $translate);
            $zl = Zend_Registry::get('Zend_Locale');
            $zl->setLocale($this->_language);
            Zend_Registry::set('Zend_Locale', $zl);
            // $fc = Zend_Controller_Front::getInstance();
            // $router = $fc->getRouter();
            // $route = new Zend_Controller_Router_Route(':lang/:module/:controller/:action/*', array(
            //     'lang' => $this->_language, 'module' => 'default', 'controller' => 'index', 'action' => 'index'
            // ));
            // $router->addRoute('default', $route);
            // $fc->setRouter($router);
        }
    What happens is that the 'lang' param always has the default value; the value from the URL never overrides the route's default, even if I type it in the address bar manually, i.e. /en_US/module/controller/action/. It always gets reverted back to the default Zend_Locale(). The only way I can fix it is to set up the route again in the plugin and inject the correct language value as the default. Any idea why?

  • Using ServletOutputStream to write very large files in a Java servlet without memory issues

    - by Martin
    I am using IBM WebSphere Application Server v6 and Java 1.4, and am trying to write large CSV files to the ServletOutputStream for a user to download. Files range from 50 to 750MB at the moment. The smaller files aren't causing too much of a problem, but with the larger files it appears that the output is being written into the heap, which then causes an OutOfMemory error and brings down the entire server. These files can only be served to authenticated users over HTTPS, which is why I am serving them through a servlet instead of just sticking them in Apache. The code I am using is (some fluff removed):
        resp.setHeader("Content-length", "" + fileLength);
        resp.setContentType("application/vnd.ms-excel");
        resp.setHeader("Content-Disposition", "attachment; filename=\"export.csv\"");

        FileInputStream inputStream = null;
        try {
            inputStream = new FileInputStream(path);
            byte[] buffer = new byte[1024];
            int bytesRead = 0;
            do {
                bytesRead = inputStream.read(buffer, offset, buffer.length);
                resp.getOutputStream().write(buffer, 0, bytesRead);
            } while (bytesRead == buffer.length);
            resp.getOutputStream().flush();
        } finally {
            if (inputStream != null)
                inputStream.close();
        }
    The FileInputStream doesn't seem to be causing the problem: if I write to another file, or just remove the write completely, memory usage isn't an issue. What I am thinking is that the resp.getOutputStream().write is being stored in memory until the data can be sent through to the client. So the entire file might be read and stored in resp.getOutputStream(), causing my memory issues and the crash! I have tried buffering these streams and also tried using channels from java.nio, neither of which makes any difference to my memory issues. I have also flushed the output stream once per iteration of the loop and after the loop, which didn't help.

  • Using prepared statements with JDBCTemplate

    - by Bernhard V
    Hi. I'm using JdbcTemplate and want to read from the database using prepared statements. I iterate over many lines in a CSV file, and for every line I execute some SQL select queries with its values. I want to speed up my reading from the database, but I just can't get JdbcTemplate to work with prepared statements. Actually, I don't even know how to do it. There are the PreparedStatementCreator and the PreparedStatementSetter. As in this example, both of them are created with anonymous inner classes. But inside the PreparedStatementCreator class I don't have access to the values I want to set in the prepared statement. Since I'm iterating through a CSV file, I can't hardcode them as a String because I don't know the values in advance. I also can't pass them to the PreparedStatementCreator because there are no arguments for the constructor. I was used to the creation of prepared statements being fairly simple, something like this, as in the Java tutorial:
        PreparedStatement updateSales = con.prepareStatement(
            "UPDATE COFFEES SET SALES = ? WHERE COF_NAME LIKE ?");
        updateSales.setInt(1, 75);
        updateSales.setString(2, "Colombian");
        updateSales.executeUpdate();
    Your help would be very appreciated.
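    For what it's worth, classic JdbcTemplate also has query overloads that take the bind values directly - query(String sql, Object[] args, RowMapper rowMapper) and friends - so each CSV line's values can be passed as the args array without writing any anonymous PreparedStatementCreator. The underlying parameter-binding idea, sketched here with Python's DB-API purely because it fits in a few self-contained lines (the table is hypothetical; JDBC uses the same ? placeholders):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE coffees (cof_name TEXT, sales INTEGER)")
        conn.execute("INSERT INTO coffees VALUES ('Colombian', 0)")

        # The statement text stays fixed; only the bound values change per
        # row, which is what lets the database reuse the compiled statement.
        rows = [(75, "Colombian"), (100, "French_Roast")]
        for sales, name in rows:
            conn.execute("UPDATE coffees SET sales = ? WHERE cof_name LIKE ?",
                         (sales, name))
        conn.commit()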

  • Efficient alternative to merge() when building dataframe from json files with R?

    - by Bryan
    I have written the following code, which works but is painfully slow once I start executing it over thousands of records:
        require("RJSONIO")

        people_data <- data.frame(person_id = numeric(0))

        json_data <- fromJSON(json_file)
        n_people <- length(json_data)
        for (person in 1:n_people) {
            person_dataframe <- as.data.frame(t(unlist(json_data[[person]])))
            people_data <- merge(people_data, person_dataframe, all = TRUE)
        }

        output_file <- paste("people_data", ".csv")
        write.csv(people_data, file = output_file)
    I am attempting to build a unified data table from a series of JSON-formatted files. The fromJSON() function reads in the data as lists of lists. Each element of the list is a person, which then contains a list of the attributes for that person. For example:
        [[1]]
        structure(list(person_id = "Amy123", name = "Amy", gender = "F",
            hair_color = "brown"),
            .Names = c("person_id", "name", "gender", "hair_color"))

        [[2]]
        structure(list(person_id = "matt53", name = "Matt",
            location = structure(c(47231, "IN"), .Names = c("zip_code", "state")),
            gender = "M", height = 172),
            .Names = c("person_id", "name", "location", "gender", "height"))
    The end result of the code above is a matrix where the columns are every person-attribute that appears in the structures above, and the rows are the relevant values for each person. As you can see, though, some data is missing for some of the people, so I need to ensure those show up as NA and make sure things end up in the right columns. Further, location itself is a vector with two components, state and zip_code, meaning it needs to be flattened to location.state and location.zip_code before it can be merged with another person record; this is what I use unlist() for. I then keep the running master table in people_data. The above code works, but do you know of a more efficient way to accomplish what I'm trying to do? It appears the merge() is slowing this to a crawl... I have hundreds of files with hundreds of people in each file. Thanks! Bryan
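    The slowdown is the repeated merge(): every iteration rescans the whole accumulated table, so the build is quadratic in the number of people. The usual fix is to flatten each record, collect the pieces in a list, and bind them all once at the end - in R, something like plyr's rbind.fill over a list of per-person data frames. The same pattern is sketched below in Python/pandas (hypothetical file names), where json_normalize does the same dot-separated flattening as unlist():

        import json
        import pandas as pd

        files = ["people1.json", "people2.json"]  # hypothetical inputs

        frames = []
        for path in files:
            with open(path) as f:
                records = json.load(f)  # a list of nested dicts, one per person
            # Flatten nested fields into location.state-style columns.
            frames.append(pd.json_normalize(records))

        # One concatenation at the end: missing attributes become NaN,
        # like merge(..., all=TRUE), but without the per-record rescan.
        people = pd.concat(frames, ignore_index=True, sort=False)
        people.to_csv("people_data.csv", index=False)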
