Search Results

Search found 4705 results on 189 pages for 'export to csv'.

Page 63/189 | < Previous Page | 59 60 61 62 63 64 65 66 67 68 69 70  | Next Page >

  • Running a simple integration scenario using the Oracle Big Data Connectors on Hadoop/HDFS cluster

    - by hamsun
    Between the elephant (the traditional image of the Hadoop framework) and the Oracle Iron Man (Big Data), an English setter could be seen as the link to the right data. Data, data, data: we are living in a world where data technology, behind popular applications, search engines, web servers, rich SMS messages, email clients, weather forecasts and so on, plays a predominant role in our lives. More and more technologies are used to analyze and track our behavior, to detect patterns, and to propose us "the best/right user experience", from Google Ad services to telco companies and large consumer sites (like Amazon). The more we use all these technologies, the more data we generate, and thus there is a need for huge data marts and specific hardware/software servers (such as the Exadata servers) in order to process, analyze and understand the trends and offer new services to users.

    Some of these data feeds are raw, unstructured data and cannot be processed effectively by normal SQL queries. Large-scale distributed processing was an emerging infrastructure need, and the solution seemed to be the collocation of compute nodes with the data, which in turn led to the MapReduce parallel pattern and the development of the Hadoop framework, which is based on MapReduce and a distributed file system (HDFS) that runs on large clusters of rather inexpensive servers. Several Oracle products use the distribution/aggregation pattern for data calculation (Coherence, NoSQL Database, TimesTen), so once you are familiar with one of these technologies, let's say Coherence aggregators, you will find the whole Hadoop/MapReduce concept very similar.

    Oracle Big Data Appliance is based on the Cloudera Distribution (CDH), and the Oracle Big Data Connectors can be plugged into a Hadoop cluster running the CDH distribution or an equivalent Hadoop cluster. In this paper, a lab-style implementation of this concept is done on a single Linux x64 server, running an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 and a single-node Apache hadoop-1.2.1 HDFS cluster, using the SQL Connector for HDFS. The whole setup is fairly simple:

    1. Install Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 on a Linux x64 server (or a VirtualBox appliance).
    2. Get the Apache Hadoop distribution from http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-1.2.1.
    3. Get the Oracle Big Data Connectors from http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/index.html?ssSourceSiteId=ocomen.
    4. Check the Java version of your Linux server:

        java -version
        java version "1.7.0_40"
        Java(TM) SE Runtime Environment (build 1.7.0_40-b43)
        Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode)

    5. Decompress the hadoop-1.2.1.tar.gz file to /u01/hadoop-1.2.1 and modify your .bash_profile (also see my sample .bash_profile):

        export HADOOP_HOME=/u01/hadoop-1.2.1
        export HIVE_HOME=/u01/hive-0.11.0
        export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin

    6. Set up SSH trust for the Hadoop processes. This is a mandatory step; in our case we have to establish a "local trust", as we are using a single-node configuration: copy the new public keys to the list of authorized keys, then connect and test the SSH setup to your localhost.

    We will run a "pseudo Hadoop cluster" in what is called local standalone mode: all the Hadoop Java components run in one Java process, which is enough for our demo purposes.
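    As a concrete sketch of the SSH trust step above (the key file names and the empty passphrase are assumptions, not part of the original article; adapt to your own policy):

        # generate a key pair with an empty passphrase
        ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
        # add the new public key to the local account's authorized keys
        cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
        chmod 600 ~/.ssh/authorized_keys
        # test: this should log in without prompting for a password
        ssh localhost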
    We need to fine-tune some Hadoop configuration files: go to $HADOOP_HOME/conf and modify core-site.xml, hdfs-site.xml and mapred-site.xml, then check that the Hadoop binaries are referenced correctly from the command line by executing:

        hadoop version

    As Hadoop is managing our clustered HDFS file system, we have to create "the mount point" and format it; the mount point is declared in core-site.xml. The layout under /u01/hadoop-1.2.1/data will be created and used by the other Hadoop components (MapReduce uses the /mapred/... layout, HDFS uses the /dfs/... layout). Format the HDFS Hadoop file system, then start the Java components for the HDFS system. As an additional check, you can use the GUI Hadoop browsers to inspect the content of your HDFS configuration.

    Once our HDFS Hadoop setup is done, you can use the HDFS file system to store data (big data!) and move it back and forth to Oracle databases by means of the Big Data Connectors (which is the next configuration step). You could create and use a Hive database, but in our case we will make a simple integration of raw data through the creation of an external table against a local Oracle instance (on the same Linux box we run the single-node Hadoop HDFS cluster and one Oracle database).

    Download some public "big data"; I use the site http://france.meteofrance.com/france/observations, from which I can get *.csv files for my big data simulations. Download the Big Data Connector from OTN (oraosch-2.2.0.zip) and unzip it to your local file system. Modify your environment in order to access the connector libraries, and make the following test:

        [oracle@dg1 bin]$ ./hdfs_stream
        Usage: hdfs_stream locationFile
        [oracle@dg1 bin]$

    Load the data into the Hadoop HDFS file system:

        hadoop fs -mkdir bgtest_data
        hadoop fs -put obsFrance.txt bgtest_data/obsFrance.txt

        [oracle@dg1 bg-data-raw]$ hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt
        Found 1 items
        -rw-r--r--   1 oracle supergroup      54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt

        [oracle@dg1 bg-data-raw]$ hadoop fs -ls hdfs:///user/oracle/bgtest_data/obsFrance.txt
        Found 1 items
        -rw-r--r--   1 oracle supergroup      54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt

    Check the content of HDFS with the browser UI. Then start the Oracle database and run the following script in order to create the Oracle database user and the Oracle directories for the Oracle Big Data Connector (dg1 is my own database SID; replace it with yours):

        #!/bin/bash
        export ORAENV_ASK=NO
        export ORACLE_SID=dg1
        . oraenv
        sqlplus /nolog <<EOF
        CONNECT / AS sysdba;
        CREATE OR REPLACE DIRECTORY osch_bin_path AS '/u01/orahdfs-2.2.0/bin';
        CREATE USER BGUSER IDENTIFIED BY oracle;
        GRANT CREATE SESSION, CREATE TABLE TO BGUSER;
        GRANT EXECUTE ON sys.utl_file TO BGUSER;
        GRANT READ, EXECUTE ON DIRECTORY osch_bin_path TO BGUSER;
        CREATE OR REPLACE DIRECTORY BGT_LOG_DIR AS '/u01/BG_TEST/logs';
        GRANT READ, WRITE ON DIRECTORY BGT_LOG_DIR TO BGUSER;
        CREATE OR REPLACE DIRECTORY BGT_DATA_DIR AS '/u01/BG_TEST/data';
        GRANT READ, WRITE ON DIRECTORY BGT_DATA_DIR TO BGUSER;
        EOF

    Put the following in a file named t3.sh and make it executable:

        hadoop jar $OSCH_HOME/jlib/orahdfs.jar \
        oracle.hadoop.exttab.ExternalTable \
        -D oracle.hadoop.exttab.tableName=BGTEST_DP_XTAB \
        -D oracle.hadoop.exttab.defaultDirectory=BGT_DATA_DIR \
        -D oracle.hadoop.exttab.dataPaths="hdfs:///user/oracle/bgtest_data/obsFrance.txt" \
        -D oracle.hadoop.exttab.columnCount=7 \
        -D oracle.hadoop.connection.url=jdbc:oracle:thin:@//localhost:1521/dg1 \
        -D oracle.hadoop.connection.user=BGUSER \
        -D oracle.hadoop.exttab.printStackTrace=true \
        -createTable --noexecute

    Then test the creation of the external table with it:

        [oracle@dg1 samples]$ ./t3.sh
        ./t3.sh: line 2: /u01/orahdfs-2.2.0: Is a directory
        Oracle SQL Connector for HDFS Release 2.2.0 - Production
        Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved.
        Enter Database Password:
        The create table command was not executed.
        The following table would be created.
        CREATE TABLE "BGUSER"."BGTEST_DP_XTAB"
        (
          "C1" VARCHAR2(4000),
          "C2" VARCHAR2(4000),
          "C3" VARCHAR2(4000),
          "C4" VARCHAR2(4000),
          "C5" VARCHAR2(4000),
          "C6" VARCHAR2(4000),
          "C7" VARCHAR2(4000)
        )
        ORGANIZATION EXTERNAL
        (
          TYPE ORACLE_LOADER
          DEFAULT DIRECTORY "BGT_DATA_DIR"
          ACCESS PARAMETERS
          (
            RECORDS DELIMITED BY 0X'0A'
            CHARACTERSET AL32UTF8
            STRING SIZES ARE IN CHARACTERS
            PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
            FIELDS TERMINATED BY 0X'2C'
            MISSING FIELD VALUES ARE NULL
            (
              "C1" CHAR(4000),
              "C2" CHAR(4000),
              "C3" CHAR(4000),
              "C4" CHAR(4000),
              "C5" CHAR(4000),
              "C6" CHAR(4000),
              "C7" CHAR(4000)
            )
          )
          LOCATION ( 'osch-20131022081035-74-1' )
        ) PARALLEL REJECT LIMIT UNLIMITED;
        The following location files would be created.
        osch-20131022081035-74-1 contains 1 URI, 54103 bytes
              54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt

    Then remove the --noexecute flag, run t3.sh again to create the external Oracle table for the Hadoop data, and check the results:

        The create table command succeeded.
        (The CREATE TABLE statement is identical to the one shown above, except that LOCATION is 'osch-20131022081719-3239-1'.)
        The following location files were created.
        osch-20131022081719-3239-1 contains 1 URI, 54103 bytes
              54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt
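    For reference, the core-site.xml declaration mentioned earlier would look roughly like the following sketch. The values are inferred from this walkthrough (the hdfs://localhost:19000 URI visible in the location-file listings and the /u01/hadoop-1.2.1/data layout), so treat them as assumptions to adapt:

        <configuration>
          <property>
            <!-- the HDFS "mount point" URI; port inferred from the listings above -->
            <name>fs.default.name</name>
            <value>hdfs://localhost:19000</value>
          </property>
          <property>
            <!-- base directory under which the /dfs and /mapred layouts are created -->
            <name>hadoop.tmp.dir</name>
            <value>/u01/hadoop-1.2.1/data</value>
          </property>
        </configuration>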
    This is the view from SQL Developer. And finally, the number of lines in the Oracle table, imported from our Hadoop HDFS cluster:

        SQL> select count(*) from "BGUSER"."BGTEST_DP_XTAB";

          COUNT(*)
        ----------
              1151

    In a next post we will integrate data from a Hive database and try some ODI integrations with the ODI Big Data connector. Our simplistic approach is just a first step to show you how this unstructured-data world can be integrated with the Oracle infrastructure. Hadoop, Big Data and NoSQL are great technologies; they are widely used, and Oracle offers a large integration infrastructure based on these services.

    Oracle University presents a complete curriculum on all the related Oracle technologies:

    NoSQL:
    - Introduction to Oracle NoSQL Database
    - Using Oracle NoSQL Database

    Big Data:
    - Introduction to Big Data
    - Oracle Big Data Essentials
    - Oracle Big Data Overview

    Oracle Data Integrator:
    - Oracle Data Integrator 12c: New Features
    - Oracle Data Integrator 11g: Integration and Administration
    - Oracle Data Integrator: Administration and Development
    - Oracle Data Integrator 11g: Advanced Integration and Development

    Oracle Coherence 12c:
    - Oracle Coherence 12c: New Features
    - Oracle Coherence 12c: Share and Manage Data in Clusters

    Oracle GoldenGate 11g:
    - Oracle GoldenGate 11g: Fundamentals for Oracle
    - Oracle GoldenGate 11g: Fundamentals for SQL Server
    - Oracle GoldenGate 11g: Fundamentals for DB2
    - Oracle GoldenGate 11g: Fundamentals for Teradata
    - Oracle GoldenGate 11g: Fundamentals for HP NonStop
    - Oracle GoldenGate 11g: Management Pack Overview
    - Oracle GoldenGate 11g: Troubleshooting and Tuning
    - Oracle GoldenGate 11g: Advanced Configuration for Oracle

    Other resources: Apache Hadoop (http://hadoop.apache.org/) is the home page for these technologies, and "Hadoop: The Definitive Guide, 3rd Edition" by Tom White is a classic read for people who want to know more about Hadoop; some active googling will also give you more references.

    About the author: Eugene Simos is based in France and joined Oracle through the BEA/WebLogic acquisition, where he worked in Professional Services, Support and Education for major accounts across the EMEA region. He has worked in the banking sector and with AT&T and other telco companies, which gave him extensive experience with production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses on WebLogic/WebCenter, Content, BPM, SOA, Identity & Security, GoldenGate, Virtualization and Unified Communications Suite throughout the EMEA region.

    Read the article

  • How to configure SoapUI with client certificate authentication

    - by gvdmaaden
    SoapUI is one of the best free tools around for testing web services. Some time ago I was trying to send a SOAP message to an SSL web service that was set up for client certificate authentication. I pretty soon got stuck on the "javax.net.ssl.SSLException: HelloRequest followed by an unexpected handshake message" error, but after reading several posts on the internet I solved that issue. It's not really that complicated after all, but since I could not find a decent place on the internet that explains this scenario properly, here is the list of steps you need to follow to make it work. Note: the following steps are based on a Windows environment.

    Step one: Export your certificate (the one that you want to use as the client certificate) using the export wizard, with the private key and with all certificates in the certification path included. Give it a password (anything you want) and export it as a PFX file to a location somewhere on disk.

    Step two: Install the newest version of SoapUI (currently it is 3.6.1). Open the file C:\Program Files\eviware\soapUI-3.6.1\bin\soapUI-3.6.1.vmoptions and add this line at the bottom:

        -Dsun.security.ssl.allowUnsafeRenegotiation=true

    This is needed because of a Java security feature in the newest frameworks (for further reading about this issue, see http://www.soapui.org/forum/viewtopic.php?t=4089 and http://java.sun.com/javase/javaseforbusiness/docs/TLSReadme.html).

    Step three: Open SoapUI, go to Preferences > SSL Settings and configure your certificate in the keystore (use the same password as in step one). That should be it. Just create a new project, import the WSDL from the client-authenticated SSL web service, and now you should be able to send SOAP messages with client certificate authentication.

    The above steps worked for me, but please drop a note if they do not work for you.
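    If you want to sanity-check the exported PFX from the command line before configuring SoapUI, keytool (shipped with the JDK) can list its contents; this is just a verification sketch, with client.pfx standing in for whatever file name you chose in step one:

        keytool -list -v -storetype PKCS12 -keystore client.pfx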

    Read the article

  • apt-mirror - Changing source mirror creates new folder for downloads

    - by I Kazi
    I have a local Ubuntu mirror running on Ubuntu 10.04 in my office, which uses archive.ubuntu.com to download updates and releases. I have been running this mirror since Ubuntu's Hardy Heron release. It downloads everything under the /export/ubuntu-repo1/apt-mirror/mirror/archive.ubuntu.com/ folder. Recently I came to know that the mirror in India, in.archive.ubuntu.com, is a lot faster for me than http://archive.ubuntu.com, which is based in the UK. Therefore, to download the latest release, Quantal Quetzal, I configured the Indian mirror in /etc/apt/mirror.list. After making this change and leaving apt-mirror to run overnight, I found that it downloaded everything to a new folder called "in.archive.ubuntu.com", so now I have two folders where apt-mirror downloads updates:

        /export/ubuntu-repo1/apt-mirror/mirror/archive.ubuntu.com/
        /export/ubuntu-repo1/apt-mirror/mirror/in.archive.ubuntu.com/

    Now, since Apache does not have "in.archive.ubuntu.com" configured, Ubuntu clients are unable to access the Quantal Quetzal release and its updates. My question is: is there a way I could copy everything downloaded under "in.archive.ubuntu.com" to "archive.ubuntu.com", so all new updates of the latest release become accessible to Ubuntu clients? Secondly, can I configure apt-mirror to download everything to archive.ubuntu.com even when using the Indian mirror? Thanks a lot for your help in advance. I Kazi
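    One way to perform the copy described above, assuming both trees follow the same standard archive layout, is an rsync merge using the paths from the question (a sketch, not a tested recipe; verify with --dry-run first):

        rsync -av /export/ubuntu-repo1/apt-mirror/mirror/in.archive.ubuntu.com/ \
                  /export/ubuntu-repo1/apt-mirror/mirror/archive.ubuntu.com/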

    Read the article

  • How to keep Word document, HTML and PDF documentation aligned

    - by dendini
    Is there a way to write documentation in a WYSIWYG editor which can then be exported to HTML, Word and PDF, keeping the copies synchronized? This documentation consists mostly of technical notes and some contextual help for some software, so it must contain images and some styling; it is not programmer's documentation (API or function lists), for which a tool like Javadoc or Doxygen would probably be the best choice. For example, how do companies with hundreds of different software lines and thousands of programmers deal with this? I have several candidate solutions, but they all seem lacking in some respect:

    LaTeX/TeX: very good PDF and HTML export, but not very user-friendly, and no full-blown WYSIWYG editor available.

    LibreOffice/OpenOffice: a full-blown WYSIWYG editor, but the HTML export is not so good (the exported HTML needs manual editing and then has to be maintained separately).

    MediaWiki or any other wiki: documentation could be kept in wikitext format, so HTML is generated automatically, and PDF export is quite good with the many available plugins. However, it requires some training for the staff, and a server needs to be set up for it.

    Notice I'm not asking for software A vs. software B; I'm asking for general advice and big companies' procedures for documentation, and yes, some product names if available.

    Read the article

  • Screwed up terminal after modifying bashrc

    - by omgzor
    I ended up screwing up my terminal while setting up sbt for the Coursera Scala course. I can't summon gedit (or anything else) anymore. I get the following error:

        Command 'gedit' is available in '/usr/bin/gedit'
        The command could not be located because '/usr/bin' is not included in the PATH environment variable.

    Also, each new instance of Terminal writes these messages before any command is entered:

        -bash: :/home/antonio/jdk7/jdk1.7.0_07/bin: No such file or directory
        -bash: export: `/home/antonio/Desktop/Scala/install/sbt/bin:/home/antonio/jdk7/jdk1.7.0_07/bin': not a valid identifier

    I recently did a manual installation of JDK 7, which apparently works:

        java -version
        java version "1.7.0_07"
        Java(TM) SE Runtime Environment (build 1.7.0_07-b10)
        Java HotSpot(TM) 64-Bit Server VM (build 23.3-b01, mixed mode)

    While setting up sbt, I made the mistake of opening my .bashrc with gedit ~/.bashrc and writing the following lines at the end of the file that opened:

        export PATH=/PATH/TO/YOUR/jdk1.7.0-VERSION/bin:$PATH
        export PATH=/home/antonio/jdk7/jdk1.7.0_07/bin:$PATH

    What is wrong here? How can I access my .bashrc file and modify it again?
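    A sketch of one way out, using only absolute paths so the broken PATH does not matter (the exact lines to keep are an assumption based on the JDK path shown above): first restore a sane PATH for the current session, then reopen .bashrc and remove the offending lines.

        # restore a usable PATH for this shell session only
        export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
        # now the editor can be found again; remove the bad lines from .bashrc
        /usr/bin/gedit ~/.bashrc
        # keep only the real JDK line, without the placeholder text, e.g.:
        # export PATH=/home/antonio/jdk7/jdk1.7.0_07/bin:$PATH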

    Read the article

  • Packing jar files into library jar files

    - by Hillel
    Firstly, this question is not about packing a simple jar file (e.g. lwjgl) into a runnable jar file; I know how to do that using JarSplice. So if I have a game which uses JInput, I will pack my game jar and jinput.jar using JarSplice and enter the natives in the process. The problem arises when I want to create a custom library that uses JInput, and then pack that into my games. See, the whole idea of writing a game library is that I don't ever have to copy code like the wrapper I wrote for JInput's Controller, and I always have a definitive version inside a library jar. Basically what I want to do is create a jar file of my library, pack jinput.jar into it using JarSplice, possibly with the natives as well, and then when I want to export a jar of my game, I either export it automatically through Eclipse with the library jar, or, if that doesn't work, use JarSplice. I've tried several solutions and nothing works. When I try to pack the game jar and the library jar using JarSplice, I get an error saying that there's either a duplicate .project or .classpath. When I try to export my game through Eclipse with the library jar, it won't run (which is to be expected), but then, if I try to attach the natives with JarSplice, it doesn't give me any errors; the jar just doesn't run. I'm not expecting anyone to solve this, but if anyone has an idea, something that will allow me to never look at the gamepad code ever again, that would be awesome. I don't care if I have to package my library jar using JarSplice five times, and then do the same with the game jar, as long as it works. Otherwise I'll just have to copy the Gamepad class into every project alongside the library jar. :(

    Read the article

  • Including BLOB images in your PDF Reports

    - by thatjeffsmith
    Earlier this year we walked through how to work with BLOBs in Oracle SQL Developer, so you already know how to INSERT, UPDATE and view the BLOBs stored in your tables. But now I want to show you how to include those images in your PDF reports. You know how to work with SQL Developer reports, right? No? OK, let's take a quick trip down memory lane then:

    - How to build a bar chart
    - Child reports: click on a parent record for on-the-fly children records

    Alright. If you have a GRID report that contains a BLOB column, you have the option of including the BLOB contents when you create a PDF export: at design time, specify how you want the BLOB content to be treated when you export to PDF. Note that you must specify the treatment of the BLOBs in the report design; you won't be prompted when you launch the Export wizard dialog. When you open your PDF, there will be a link to the image. Click it, then confirm, and it will launch the default image viewer on your machine. I hope your pictures are more exciting than mine.

    Read the article

  • Efficient use of Bundling

    - by ACShorten
    One of the discussions I am having with customers and consulting people is about Bundling and its appropriate use. We introduced Bundling post-release in the V2.2 code line to allow partners and consultants to build solutions using the Configuration Tools objects, such as UI Maps, Service Scripts, Business Objects and Business Services, and then export and migrate them as solutions. Whilst that was the original intent, I have found a few teams using the facility for other data and then complaining about the efficiency or relevance of the tool. Here are a number of guidelines to help optimize the use of Bundling for your implementation:

    - Not all objects can be bundled. Only specific objects in the product can be bundled. These are targeted at Configuration Tools objects and a select group of other objects that are required for them: maintenance objects with the option "Eligible for Bundling" set to Y (and that also contain a Bundling Add BO).

    - Add objects to the bundle as you complete them. Bundling can have issues with sequencing objects. The best way of combating this is to add objects to the bundle as you complete them; this will help make sure you load the objects, as you build them, in the correct order. Remember, Bundling was designed for developers and partners to deliver solutions. If you leave adding objects to a bundle to the Bundle Export zones, you will have less control over the sequence in which they are applied, and this can cause timing issues.

    - Bundling takes the latest revision. If you combine Bundling with Revision Control, the bundle will take the latest revision of the object at the time of the export operation.

    - Bundling and version control products. If you use a version control tool to manage your Java code, you can also check in the bundle to associate a release between the code and a bundle.

    Bundling is quite a powerful feature of the Oracle Utilities Application Framework that allows sales, partners, consultants and customers to package and import their Configuration Tools-based solutions.

    Read the article

  • object of type 'closure' is not subsettable - contradiction?

    - by Alex
    I'm writing a function to produce time series plots of stock prices. However, I'm getting the following error: "Error in df[, 7] : object of type 'closure' is not subsettable". Here's an example of the function:

        plot.prices <- function(df) {
          require(ggplot2)
          g <- ggplot(df, aes(x = as.Date(Date, format = "%Y-%m-%d"), y = df[, 7])) +
            geom_point(size = 1)
          # ... code not shown...
          g
        }

    And example data:

        spy <- read.csv(file = 'http://ichart.finance.yahoo.com/table.csv?s=SPY&d=11&e=1&f=2012&g=d&a=0&b=29&c=1993&ignore=.csv', header = T)

        plot.prices(spy)  # produces error

        ggplot(spy, aes(x = as.Date(Date, format = "%Y-%m-%d"), y = spy[, 7])) +
          geom_point(size = 1)  ## does not produce error

    As you can see, the code is identical. I get an error if the call to ggplot() is INSIDE the function, but not if the call to ggplot() is OUTSIDE the function. Does anyone have any idea why this seeming contradiction occurs?
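    A common explanation for exactly this symptom is that aes() does not evaluate its expressions where you might expect: when the name df cannot be resolved against the data at plot time, it can end up resolving to stats::df (the F-distribution density, a function), hence "object of type 'closure' is not subsettable". A workaround that sidesteps the lookup entirely, sketched below under the assumption that the seventh column is the price to plot, is to materialize the column under a real name so aes() finds it in the data:

        plot.prices <- function(df) {
          require(ggplot2)
          df$price <- df[[7]]  # give the column a name aes() can find in the data itself
          g <- ggplot(df, aes(x = as.Date(Date, format = "%Y-%m-%d"), y = price)) +
            geom_point(size = 1)
          g
        }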

    Read the article

  • AJAX: how to get progress feedback in web apps, and to avoid timeouts on long requests?

    - by David Dombrowsky
    This is a general design question about how to make a web application that will receive a large amount of uploaded data, process it, and return a result, all without the dreaded spinning beach ball for five minutes or a possible HTTP timeout. Here are the requirements:

    - Make a web form where you can upload a CSV file containing a list of URLs.
    - When the user clicks "submit", the server fetches the file and checks each URL to see if it's alive, and what the title tag of the page is.
    - The result is a downloadable CSV file containing the URL and the resulting HTTP code.
    - The input CSV can be very large (100,000+ rows), so the fetch process might take 5-30 minutes.

    My solution so far is to have a spinning JavaScript loop on the client side which queries the server every second to determine the overall progress of the job. This seems kludgy to me, and I'm hesitant to accept it as the best solution. I'm using Perl, Template Toolkit and jQuery, but any solution using any web technology would be acceptable.
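    As a sketch of the polling approach described above (the endpoint names are made up for illustration): the upload kicks off a server-side job and returns a job id, and the client polls a lightweight status URL until the job finishes:

        // assumes jQuery; /job_status and /job_result are hypothetical endpoints
        function pollProgress(jobId) {
          $.getJSON('/job_status', { id: jobId }, function (data) {
            $('#progress').text(data.done + ' / ' + data.total + ' URLs checked');
            if (data.finished) {
              window.location = '/job_result?id=' + jobId;  // download the CSV
            } else {
              setTimeout(function () { pollProgress(jobId); }, 1000);
            }
          });
        }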

    Read the article

  • How do I open an already opened file with a .net StreamReader?

    - by Jon Cage
    I have some .csv files which I'm using as part of a test bench. I can open them and read them without any problems, unless I've already got the file open in Excel, in which case I get an IOException:

        System.IO.IOException : The process cannot access the file 'TestData.csv' because it is being used by another process.

    This is a snippet from the test bench:

        using (CsvReader csv = new CsvReader(
            new StreamReader(new FileStream(fullFilePath, FileMode.Open, FileAccess.Read)), false))
        {
            // Process the file
        }

    Is this a limitation of StreamReader? I can open the file in other applications (Notepad++ for example), so it can't be an O/S problem. Maybe I need to use some other class? If anyone knows how I can get round this (aside from closing Excel!) I'd be very grateful.
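    One common fix for this situation is to pass a FileShare argument to the FileStream constructor: FileShare.ReadWrite tells the OS you are willing to share the file with a process that already has it open for writing. A sketch of the adjusted snippet:

        using (CsvReader csv = new CsvReader(
            new StreamReader(
                // FileShare.ReadWrite lets us read even while Excel holds the file open
                new FileStream(fullFilePath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite)),
            false))
        {
            // Process the file
        }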

    Read the article

  • How to prevent form submission for a form using onchange to submit the form when certain values are selected

    - by Terrence Brannon
    I have a form I have built:

        <form class="myform" action="cgi.pl">
          <select name="export" onchange='this.form.submit()'>
            <option value="" selected="selected">Choose an export format</option>
            <option value="html">HTML</option>
            <option value="csv">CSV</option>
          </select>
        </form>

    Now, this form works fine if I pull down and select "HTML" or "CSV". But if I hit the back button and select "Choose an export format", the form is submitted, even though I don't want it to be. Is there any way to prevent form submission for that option?
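    One simple guard, shown here as a sketch: only submit when a non-empty value is chosen, so re-selecting the placeholder option (whose value is "") does nothing:

        <select name="export" onchange="if (this.value) this.form.submit();">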

    Read the article

  • Need an algorithm to group several parameters of a person under the persons name

    - by QuickMist
    Hi. I have a bunch of names in alphabetical order, with multiple instances of the same name, all in alphabetical order so that the names are all grouped together. Beside each name, after a comma, I have a role that has been assigned to it, one name-role pair per line, something like what's shown below:

        name1,role1
        name1,role2
        name1,role3
        name1,role8
        name2,role8
        name2,role2
        name2,role4
        name3,role1
        name4,role5
        name4,role1
        ...

    I am looking for an algorithm that takes the above .csv file as input and creates an output .csv file in the following format:

        name1,role1,role2,role3,role8
        name2,role8,role2,role4
        name3,role1
        name4,role5,role1
        ...

    So basically I want each name to appear only once, and then the roles to be printed in CSV format next to the names, for all names and roles in the input file. The algorithm should be language-independent. I would appreciate it if it does NOT use OOP principles :-) I am a newbie.
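    Here is one procedural rendering of that algorithm (Python is used only for concreteness; the file names are placeholders). Because the input is already sorted by name, each group can be flushed as soon as the name changes:

        import csv

        with open('input.csv') as src, open('output.csv', 'w') as dst:
            reader = csv.reader(src)
            current_name, roles = None, []
            for row in reader:
                if len(row) < 2:
                    continue  # skip blank or malformed lines
                name, role = row[0], row[1]
                if name != current_name:
                    if current_name is not None:
                        dst.write(','.join([current_name] + roles) + '\n')
                    current_name, roles = name, []
                roles.append(role)
            if current_name is not None:        # flush the final group
                dst.write(','.join([current_name] + roles) + '\n')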

    Read the article

  • Windows 7 Task Scheduler

    - by Btibert3
    Hi all, I'm very new to this and I have no idea where to start. I want to schedule a Python script using Task Scheduler in Windows 7. When I add a "New Action", I put the following command as the program/script:

        c:\python25\python.exe

    As the argument, I add the full path to the location of my Python script:

        path\script.py

    Here is my script:

        import datetime
        import csv
        import os

        now = datetime.datetime.now()
        print str(now)

        os.chdir('C:/Users/Brock/Desktop/')
        print os.getcwd()

        writer = csv.writer(open("test task.csv", "wb"))
        row = ('This is a test', str(now))
        writer.writerow(row)

    I got an error saying the script could not run. Any help you can provide to get me up and running will be very much appreciated! Thanks, Brock
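    Two hedged things worth checking with setups like this: quote the interpreter and script paths if they contain spaces, and make sure the working directory is right ("Start in" in the action, or the os.chdir the script already does). The same task can also be created from the command line with schtasks; the task name and time below are made up for illustration:

        schtasks /Create /SC DAILY /ST 09:00 /TN "RunPythonScript" ^
                 /TR "c:\python25\python.exe C:\full\path\to\script.py"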

    Read the article

  • Modifying generator.yml views in Symfony

    - by Alex Ciminian
    Hey! I'm currently working on a web app written in Symfony. I'm supposed to add an "export to CSV" feature to the backend/administration part of the app for some modules. In the list view, there should be an "Export" button which provides the user with a CSV file of the elements that are displayed (taking the filtering criteria into account). I've created a method in the actions class of the module that takes a comma-separated list of ids and generates the CSV, but I'm not really sure how to add the link to it in the view. The problem is that the view doesn't exist anywhere; it's generated on the fly from the data in the generator.yml configuration file. I've posted the relevant part of the file below:

        list:
          display: [=name, indemn, _status, _participants, _approved_, created_at]
          title: Lista actiuni
          object_actions:
            _edit: ~
            _delete: ~
          filters: [name, county_id, _status_filter, activity_id]
          fields:
            name:
              name: Nume Actiune
            indemn:
              name: Îndemn la actiune
            description:
              name: Descriere
            approved_:
              name: Operatiune
            created_at:
              name: Creata la
            status:
              name: Status Actiune

    I'm new to Symfony, so any help would be appreciated :). Thanks, Alex

    Read the article

  • How to query data from a password protected https website using C# .NET

    - by Addie
    I'd like my application to query a CSV file from a secure website. I have no experience with web programming, so I'd appreciate detailed instructions. Currently I have the user log in to the site, manually query the CSV, and have my application load the file locally. I'd like to automate this by having the user enter his login information, authenticating him on the website, and then querying the data. The application is written in C# .NET. The URL of the site is https://www2.emidas.com/default.asp. I've tested the following code already and am able to access the file once the user has already authenticated himself and created a manual query:

        System.Net.WebClient Client = new WebClient();
        Stream strm = Client.OpenRead("https://www3.emidas.com/users/<username>/file.csv");
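    A rough sketch of the automated flow, assuming the site uses ordinary forms authentication with a session cookie (the login URL and form field names below are placeholders you would need to take from the site's actual login form):

        // needs: using System.IO; using System.Net; using System.Text;
        var cookies = new CookieContainer();

        // 1) POST the user's credentials to the login form
        var login = (HttpWebRequest)WebRequest.Create("https://www2.emidas.com/login.asp"); // placeholder URL
        login.Method = "POST";
        login.ContentType = "application/x-www-form-urlencoded";
        login.CookieContainer = cookies;
        byte[] body = Encoding.UTF8.GetBytes("user=" + user + "&password=" + pass); // placeholder field names
        using (var s = login.GetRequestStream()) { s.Write(body, 0, body.Length); }
        login.GetResponse().Close(); // the session cookie is now stored in 'cookies'

        // 2) GET the CSV with the same cookie container, so the session is reused
        var query = (HttpWebRequest)WebRequest.Create("https://www3.emidas.com/users/" + user + "/file.csv");
        query.CookieContainer = cookies;
        using (var resp = query.GetResponse())
        using (var reader = new StreamReader(resp.GetResponseStream()))
        {
            string csv = reader.ReadToEnd();
        }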

    Read the article

  • How to debug MacRuby?

    - by Dan
    Hi, I've encountered an inconsistent bug with MacRuby and have no idea how to go about debugging it. If anyone could help, that would be great. I don't know if this is due to my own code or is a bug in the MacRuby framework. I have a feeling it's my own code: something about over-retaining a piece of memory, and hence the garbage collection failed. This is the error from Xcode:

        CSV Wizard(30245,0x7fff704f7ca0) malloc: resurrection error for object 0x20199da20 while assigning {conservative-block}[196608](0x302360060)[117616] = Array[64](0x20199da20)
        garbage pointer stored into reachable memory, break on auto_zone_resurrection_error to debug
        CSV Wizard(30245,0x103781000) malloc: garbage block 0x20199da20(Array[64]) was over-retained during finalization, refcount = 1
        This could be an unbalanced CFRetain(), or CFRetain() balanced with -release. Break on auto_zone_resurrection_error() to debug.
        CSV Wizard(30245,0x103781000) malloc: fatal resurrection error for garbage block 0x20199da20(Array[64]): over-retained during finalization, refcount = 1

    Thanks.
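    Following the runtime's own hint in that log, one way to catch the culprit is to break on the named symbol in the debugger (sketched here with gdb; the binary path is illustrative):

        $ gdb "/path/to/CSV Wizard.app/Contents/MacOS/CSV Wizard"
        (gdb) break auto_zone_resurrection_error
        (gdb) run
        # when it stops, 'bt' shows which code is resurrecting the collected Array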

    Read the article

  • Issue with plotting daily data using ggplot

    - by user1723765
    I tried to plot daily data for 9 variables in ggplot, but in the graph I get, the date variable is not handled properly: the x axis is unreadable, and it's impossible to read the plot. I'm guessing there's an issue with the handling of dates. Here's the data: https://dl.dropbox.com/u/22681355/su.csv. Here's the code I've been using:

        su = read.csv(file = "su.csv", head = TRUE)
        meltdf = melt(su)
        ggplot(meltdf, aes(x = Date, y = value, colour = variable, group = variable)) +
          geom_line()

    And here's the output: https://dl.dropbox.com/u/22681355/output.jpg. Here's the same plot done in Excel; why does it look completely different?
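    The usual culprit for an axis like this is that Date comes out of read.csv as a character/factor column, so ggplot treats every single day as a discrete category and crams all the labels onto the axis. A sketch of the fix (the date format is an assumption; adjust it to the file):

        library(reshape2)
        library(ggplot2)

        su <- read.csv("su.csv", header = TRUE)
        su$Date <- as.Date(su$Date, format = "%Y-%m-%d")  # make Date a real date, not a factor
        meltdf <- melt(su, id.vars = "Date")

        ggplot(meltdf, aes(x = Date, y = value, colour = variable, group = variable)) +
          geom_line()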

    Read the article

  • Detecting regional settings (List Separator) from web

    - by Toms Mikoss
    After the unpleasant surprise that Comma Separated Value (CSV) files are not necessarily comma-separated, I'm trying to find out if there is any way to detect, from an HTTP request, what the regional-settings list separator value is on the client machine.

    The scenario is as follows: a user can download some data in CSV format from a web site (RoR, if it matters). The CSV file is generated on the fly, sent to the user, and most of the time double-clicked and opened in MS Excel on a Windows machine at the destination. Now, if the user has ',' set as the list separator, the data is properly arranged in columns, but with any other separator (';' is widely used here) it all just gets thrown into a single column. So, is there any way to detect which separator is used on the client machine, and generate the file accordingly? I have a sinking feeling that there is not, but I'd like to be sure before I pass the 'can't be done, sorry' line to the customer :)
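    One pragmatic workaround, rather than detection: Excel honors a "sep=" hint on the very first line of the file, which overrides the regional list separator when the file is opened by double-click. It is Excel-specific (other consumers will see that line as a data row), so treat this sketch, with made-up sample rows, accordingly:

        sep=,
        name,city,amount
        "John Doe",Riga,12.50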

    Read the article

  • rails rollback updates when task fails

    - by ash34
    Hi, I have the following generate_report method being called from a rake task. It gets a hash as input, containing the reported hours spent by each user on a task, and outputs the data as a .csv report.

        desc "Task reporting"
        task :report, [:inp_dt] => [:environment] do |t, args|
          h = select_data(args.inp_dt)  # not shown here
          generate_report(h)
        end

        def generate_report(h)
          out_dir = File.dirname(__FILE__) + '/../../output'
          myfile = "#{out_dir}" + "/monthly_#{Date.today.strftime("%m%d%Y")}.csv"
          writer = CSV.open(myfile, 'w')
          h.each do |h, v|
            v.each do |key, val|
              writer << val
            end
          end
          writer.close
        end

    where

        h = {:BILL=>{:PROJA=>["CYR", "00876", "2", 24], :PROJB=>["EPR", "00876", "2", 16]},
             :JANE=>{:PROJA=>["TRB", "049576", "2", 16]}}

    I would like to set/update a 'processed' flag for each reported transaction, and only commit the updates when the file is written correctly, rolling them back when the task fails. How can I accomplish this? Thanks, ash
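    A sketch of one way to get that all-or-nothing behavior: wrap the flag updates and the report generation in a single ActiveRecord transaction, so an exception raised while writing the file rolls back the database updates (mark_processed is a hypothetical helper standing in for whatever sets the 'processed' flag; note the file itself is not transactional, so you may also want to delete a partial file in a rescue):

        task :report, [:inp_dt] => [:environment] do |t, args|
          h = select_data(args.inp_dt)
          ActiveRecord::Base.transaction do
            mark_processed(h)    # hypothetical: sets the 'processed' flag on each reported row
            generate_report(h)   # any exception here rolls back the flag updates
          end
        end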

    Read the article

  • Optimizing simple search script in PowerShell

    - by cc0
    I need to create a script to search through just below a million files of text, code, etc. to find matches, and then output all hits on a particular string pattern to a CSV file. So far I made this:

        $location = 'C:\Work*'
        $arr = "foo", "bar"   # where "foo" and "bar" are string patterns I want to search for (separately)

        for ($i = 0; $i -lt $arr.Length; $i++) {
            Get-ChildItem $location -Recurse |
                Select-String -Pattern $($arr[$i]) |
                Select-Object Path |
                Export-Csv "C:\Work\Results\$($arr[$i]).txt"
        }

    This returns a CSV file named "foo.txt" with a list of all files containing the word "foo", and a file named "bar.txt" with a list of all files containing the word "bar". Is there any way anyone can think of to optimize this script to make it work faster? Or ideas on how to make an entirely different, but equivalent, script that just works faster? All input appreciated!
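    The big win here is to walk the directory tree once instead of once per pattern, and to stop scanning each file at its first hit (Select-String's -List switch) since only the path is kept anyway. A sketch:

        $location = 'C:\Work*'
        $patterns = "foo", "bar"

        # enumerate the files once, then reuse the list for every pattern
        $files = Get-ChildItem $location -Recurse | Where-Object { -not $_.PSIsContainer }

        foreach ($p in $patterns) {
            $files | Select-String -Pattern $p -List |
                Select-Object Path |
                Export-Csv "C:\Work\Results\$p.csv" -NoTypeInformation
        }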

    Read the article

  • How do I plot a historical stock graph in an Android app?

    - by jer
    I want to plot a historical stock graph, based on Google Finance, in my Android app. The problem is that I can't find an API for just the stock chart alone, so I must find another way to do it. I thought of an approach but don't know whether it works; the steps are as follows (see the sketch after this list):

    1) Get the stock details as a CSV file.
    2) Read the CSV file.
    3) Plot the graph using the information from the CSV file (WHICH I DON'T KNOW HOW TO DO!).

    So if my steps above work, I would only want to know how to plot the graph.
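    A sketch of steps 1 and 2 in plain Java (the Date,Open,High,Low,Close,Volume column layout is the usual shape of such history CSVs, an assumption to verify against the actual source; step 3 still needs a charting component on top of the parsed series):

        BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(csvUrl).openStream()));
        List<Float> closes = new ArrayList<Float>();
        String line = in.readLine();                  // skip the header row
        while ((line = in.readLine()) != null) {
            String[] cols = line.split(",");
            closes.add(Float.parseFloat(cols[4]));    // closing price column
        }
        in.close();
        // 'closes' now holds the series to hand to whatever charting code you use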

    Read the article

  • Help with proper character encoding.

    - by mmattax
    I have an HTML form that is sometimes submitted with accented characters: à, è, ì, ò, ù. I have a PHP script that exports these form submissions into CSV format. When I look at the CSV in a text editor (vim or Notepad, for example) the characters look fine, but when it is opened with Open Office or Word, I get some funky results: ?????. I am also passing these submissions to Salesforce and am getting the error: "The entity "Atilde" was referenced, but not declared." What can I do to ensure the portability of my CSV file? What's the proper way to handle the encoding? My HTML page's Content-Type is set to:

        Content-Type: text/html; charset=utf-8

    The data is being stored in MySQL with the latin1_swedish_ci collation.
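    Two hedged pointers based on those symptoms: the funky characters in Word/Open Office usually mean the CSV is UTF-8 but the application assumes a legacy codepage, which a UTF-8 byte-order mark fixes for most Office versions; and the "Atilde" error means HTML named entities (such as &Atilde;) are being sent inside XML, where only the XML built-in entities exist. A sketch of both fixes in the PHP export:

        // the table is latin1, so make sure the exported text really is UTF-8
        $value = mb_convert_encoding($value, 'UTF-8', 'ISO-8859-1');

        // prepend a UTF-8 BOM so Office applications detect the encoding
        echo "\xEF\xBB\xBF";
        echo $csv_output;

        // for the Salesforce payload, decode named entities back to real characters
        $xml_safe = html_entity_decode($raw, ENT_QUOTES, 'UTF-8');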

    Read the article

  • Django admin proper urls inside listview

    - by hinnye
    Hi, my current goal is to give users the chance to download CSV files from the admin site of my application. I successfully managed to create an additional column in the model's list view this way:

        def doc_link(self):
            return '<a href="files/%s">%s</a>' % (self.output, self.output)
        doc_link.allow_tags = True

    This shows the file name and creates the link, but sadly, because it's rendered inside my 'searches' view, the link resolves to the URL my_site/my_app/searches/files/13.csv. This is my problem: I would like to have my files served from the admin media directory, like this: http://my_site/media/files/13.csv. Does somebody know how to build a URL which points "outside" the model's directory? Maybe somehow tell Django to use the ADMIN_MEDIA_PREFIX in the link? I'd really appreciate any help, thanks!
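    A sketch of the usual way out: build the href from settings.MEDIA_URL (or any absolute path) instead of a relative one, so the admin's current URL no longer leaks into the link. With MEDIA_URL = '/media/', this yields exactly /media/files/13.csv:

        from django.conf import settings

        def doc_link(self):
            # absolute URL: the browser no longer resolves it relative to .../searches/
            return '<a href="%sfiles/%s">%s</a>' % (settings.MEDIA_URL, self.output, self.output)
        doc_link.allow_tags = True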

    Read the article

  • Forcing a Postback in ASP.NET

    - by Nick LaMarca
    Please take a look at the following click event:

        Protected Sub btnDownloadEmpl_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btnDownloadEmpl.Click
            Dim emplTable As DataTable = SiteAccess.DownloadEmployee_H()
            Dim d As String = Format(Date.Now, "d")
            Dim ad() As String = d.Split("/")
            Dim fd As String = ad(0) & ad(1)
            Dim fn As String = "E_" & fd & ".csv"
            Response.ContentType = "text/csv"
            Response.AddHeader("Content-Disposition", "attachment; filename=" & fn)
            CreateCSVFile(emplTable, Response.Output)
            Response.Flush()
            Response.End()
            lblEmpl.Visible = True
        End Sub

    This code simply exports data from a DataTable to a CSV file. The problem here is that lblEmpl.Visible = True never takes effect, because the response is the file itself and the page is never re-rendered. Even if I put the lblEmpl.Visible = True line at the top of the click event, the line executes fine but the page is never updated. How can I fix this?
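    One pattern that sidesteps this, sketched below: let the click handler update the page normally and queue a tiny script that starts the download as a second request (ExportEmployees.ashx is a hypothetical HTTP handler that would hold the CSV-writing code shown above):

        Protected Sub btnDownloadEmpl_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btnDownloadEmpl.Click
            lblEmpl.Visible = True   ' takes effect, because this response renders the page normally
            ' the emitted script then requests the file in a separate round trip
            ClientScript.RegisterStartupScript(Me.GetType(), "dl", _
                "window.location = 'ExportEmployees.ashx';", True)
        End Sub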

    Read the article
