Search Results

Search found 33834 results on 1354 pages for 'site column'.


  • OAuth gives me 401 error

    - by Radek
    I am trying to get the access token but I cannot make it work. request_token.get_access_token gives me a 401 Unauthorized (OAuth::Unauthorized) error. I copy the authorize_url into my browser, allow the application, and receive a PIN from Twitter, but after hitting enter in my script I always get the 401 error. I did some searching and found that access_token = request_token.get_access_token(:oauth_verifier => params[:oauth_verifier]) helped others, but it gives me "undefined local variable or method `params' for main:Object (NameError)". The Ruby script looks like this (I was following this tutorial):

        gem 'oauth'
        require 'oauth/consumer'
        consumer_key = 'your key'
        consumer_secret ='your secret'
        consumer=OAuth::Consumer.new "consumer_key", "consumer_secret", {:site=>"http://twitter.com"} #{:site=>"https://agree2.com"}
        request_token = consumer.get_request_token
        puts request_token.token
        puts request_token.secret
        puts request_token.authorize_url
        puts "Hit enter when you have completed authorization."
        STDIN.gets
        access_token = request_token.get_access_token
        #access_token = request_token.get_access_token(:oauth_verifier => params[:oauth_verifier])
        puts access_token.token
        puts access_token.secret
        puts
        puts access_token.inspect
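    A minimal sketch of the PIN (out-of-band) flow for a standalone script, with no Rails params hash involved: read the PIN that Twitter displays after authorization and pass it as :oauth_verifier. Note that the real key and secret variables, not the string literals "consumer_key"/"consumer_secret", have to be passed to OAuth::Consumer.new; depending on how the application is registered, the PIN flow may also require requesting the token with :oauth_callback => 'oob'.

        gem 'oauth'
        require 'oauth/consumer'

        consumer = OAuth::Consumer.new(consumer_key, consumer_secret, :site => "http://twitter.com")
        request_token = consumer.get_request_token        # add :oauth_callback => 'oob' if required
        puts request_token.authorize_url
        print "Enter the PIN shown by Twitter: "
        pin = STDIN.gets.strip
        access_token = request_token.get_access_token(:oauth_verifier => pin)
        puts access_token.token
        puts access_token.secret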

    Read the article

  • Unix shell script with Iseries command

    - by user293058
    I am trying to FTP a file from Unix to an AS/400 and execute an iSeries command in the script. The FTP transfer itself works fine, but the SBMJOB command (submitted with the jobd parameter) fails:

        HOST=KCBNSXDD.svr.us.bank.net
        USER=test
        PASS=1234 #This is the password for the FTP user.
        ftp -env $HOST << EOF
        # Call 2. Here the login credentials are supplied by calling the variables.
        user $USER $PASS
        # Call 3. Here you will change to the directory where you want to put or get
        cd "\$QARCVBEN"
        # Call 4. Here you will tell FTP to put or get the file.
        #Ebcdic
        #Mode b
        quote site crtccsid *user
        quote site crtccsid *sysval
        put prod.txt
        quote rcmd sbmjob cmd(call pgm(pmtiprcc0) parm('prod' 'DEV')) job(\$pmtiprcc) jobd(orderbatch)

    The error returned is:

        550-Error occurred on command SBMJOB cmd(call pgm(pmtiprcc0)) job($pmtiprcc) jobd(orderbatch).
        550 Errors occurred on SBMJOB command..
        221 QUIT subcommand received.

    Read the article

  • Hadoop hdfs namenode is throwing an error

    - by KarmicDice
    Full list of error: hb@localhost:/etc/hadoop/conf$ sudo service hadoop-hdfs-namenode start * Starting Hadoop namenode: starting namenode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-namenode-localhost.out 12/09/10 14:41:09 INFO namenode.NameNode: STARTUP_MSG: /************************************************************ STARTUP_MSG: Starting NameNode STARTUP_MSG: host = localhost/127.0.0.1 STARTUP_MSG: args = [] STARTUP_MSG: version = 2.0.0-cdh4.0.1 STARTUP_MSG: classpath = /etc/hadoop/conf:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop/lib/kfs-0.3.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/commons-logging-api-1.1.jar:/usr/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/usr/lib/hadoop/lib/stax-api-1.0.1.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/jets3t-0.6.1.jar:/usr/lib/hadoop/lib/jersey-server-1.8.jar:/usr/lib/hadoop/lib/oro-2.0.8.jar:/usr/lib/hadoop/lib/aspectjrt-1.6.5.jar:/usr/lib/hadoop/lib/json-simple-1.1.jar:/usr/lib/hadoop/lib/snappy-java-1.0.3.2.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/lib/log4j-1.2.15.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop/lib/jersey-json-1.8.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/lib/commons-el-1.0.jar:/usr/lib/hadoop/lib/slf4j-api-1.6.1.jar:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/lib/jersey-core-1.8.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/jsr305-1.3.9.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26.cloudera.1.jar:/usr/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/commons-lang-2.5.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/commons-io-2.1.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/jetty-6.1.26.cloudera.1.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/zookeeper-3.4.3-cdh4.0.1.jar:/usr/lib/hadoop/lib/avro-1.5.4.jar:/usr/lib/hadoop/lib/core-3.1.1.jar:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.6.1.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/commons-math-2.1.jar:/usr/lib/hadoop/lib/jline-0.9.94.jar:/usr/lib/hadoop/.//hadoop-annotations.jar:/usr/lib/hadoop/.//hadoop-annotations-2.0.0-cdh4.0.1.jar:/usr/lib/hadoop/.//hadoop-common-2.0.0-cdh4.0.1.jar:/usr/lib/hadoop/.//hadoop-auth-2.0.0-cdh4.0.1.jar:/usr/lib/hadoop/.//hadoop-common.jar:/usr/lib/hadoop/.//hadoop-auth.jar:/usr/lib/hadoop/.//hadoop-common-2.0.0-cdh4.0.1-tests.jar:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop-hdfs/lib/snappy-java-1.0.3.2.jar:/usr/lib/hadoop-hdfs/lib/log4j-1.2.15.jar:/usr/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/slf4j-api-1.6.1.jar:/usr/lib/hadoop-hdfs/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/commons-daemon-1.0.3.jar:/usr/lib/hadoop-hdfs/lib/zookeeper-3.4.3-cdh4.0.1.jar:/usr/lib/hadoop-hdfs/lib/avro-1.5.4.jar:/usr/lib/h
adoop-hdfs/lib/paranamer-2.3.jar:/usr/lib/hadoop-hdfs/lib/jline-0.9.94.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.0.0-cdh4.0.1-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.0.0-cdh4.0.1.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop-yarn/lib/asm-3.2.jar:/usr/lib/hadoop-yarn/lib/netty-3.2.3.Final.jar:/usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-yarn/lib/jersey-server-1.8.jar:/usr/lib/hadoop-yarn/lib/jersey-guice-1.8.jar:/usr/lib/hadoop-yarn/lib/snappy-java-1.0.3.2.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.15.jar:/usr/lib/hadoop-yarn/lib/guice-3.0.jar:/usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/junit-4.8.2.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jersey-core-1.8.jar:/usr/lib/hadoop-yarn/lib/jdiff-1.0.9.jar:/usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/commons-io-2.1.jar:/usr/lib/hadoop-yarn/lib/avro-1.5.4.jar:/usr/lib/hadoop-yarn/lib/paranamer-2.3.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.0.0-cdh4.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.0.0-cdh4.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.0.0-cdh4.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common-2.0.0-cdh4.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.0.0-cdh4.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-site.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api-2.0.0-cdh4.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common-2.0.0-cdh4.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-site-2.0.0-cdh4.0.1.jar:/usr/lib/hadoop-mapreduce/.//* STARTUP_MSG: build = file:///var/lib/jenkins/workspace/generic-package-ubuntu64-12-04/CDH4.0.1-Packaging-Hadoop-2012-06-28_17-01-57/hadoop-2.0.0+91-1.cdh4.0.1.p0.1~precise/src/hadoop-common-project/hadoop-common -r 4d98eb718ec0cce78a00f292928c5ab6e1b84695; compiled by 'jenkins' on Thu Jun 28 17:39:19 PDT 2012 ************************************************************/ 12/09/10 14:41:10 WARN impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties hdfs-site.xml: hb@localhost:/etc/hadoop/conf$ cat hdfs-site.xml <?xml version="1.0" encoding="UTF-8"?> <!--Autogenerated by Cloudera CM on 2012-09-03T10:13:30.628Z--> <configuration> <property> <name>dfs.https.address</name> <value>localhost:50470</value> </property> <property> <name>dfs.https.port</name> <value>50470</value> </property> <property> <name>dfs.namenode.http-address</name> <value>localhost:50070</value> </property> <property> <name>dfs.replication</name> <value>1</value> </property> <property> <name>dfs.blocksize</name> <value>134217728</value> </property> <property> <name>dfs.client.use.datanode.hostname</name> <value>false</value> </property> </configuration>

    Read the article

  • Workflows for User Profile properties

    - by Nigel
    Hi Guys, I have a requirement from a client with regards to user profile properties. The requirement is as follows: We would like to setup custom user profile properties such that these values are editable by individuals via their my site. When a user edits this user profile property value, it should not be immediately visible to other users that happen to go to the individual's my site, but rather go through an approval workflow. If the value submitted is approved the user profile property value is updated. As far as I know this is not possible. I do have a work around for this but I want to ensure that I haven't missed anything before going down this path. Thanks in Advance

    Read the article

  • Determining your website's color scheme

    - by Steve Hayes
    One of the biggest issues I have, from a UI standpoint, when building a new website is figuring out what colors I will use and whether those colors actually work well together. I found a site that has really helped me out, so I figured I would share it with all of you and also get some responses back about similar sites or other ways you figure out your color schemes. Here is the site that I currently use: http://kuler.adobe.com/ With Adobe Kuler, you can choose a base color and it will suggest five colors, including your color, that go well with one another. You can, of course, modify the colors it chooses. Also, one of the main features I use is the image color matching: you can upload an image and it will determine a color scheme based on the colors of the image. So if you have a logo and want to use the colors of the logo, this works perfectly for you. Thank you and I look forward to your feedback!

    Read the article

  • Opening a local file in Eclipse from the web

    - by Victor Nicollet
    Right now, when I notice a problem on a page on my PHP web site, I have to look at the URL, mentally deduce what file is responsible for displaying that page, then navigate the Eclipse PDT file tree to open that file. This is annoying and uses brain power that could have been applied to solving the issue instead. I would like my PHP web site to display on every page a link that I could click to automatically open the correct file in Eclipse. I can easily compute the complete absolute path for the file I need to open (for example, open C:/xampp/htdocs/controllers/Foo/Bar.php when visiting /foo/bar), and I can make sure that Eclipse is currently open with the correct project loaded, but I'm stuck on how I can have Firefox/Chrome/IE tell Eclipse to open that specific file.
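    A rough sketch of the PHP side only, assuming a hypothetical editor:// protocol handler is registered on the development machine and wired to Eclipse there (the handler itself is the part that still needs solving); the document root and the /foo/bar to controllers/Foo/Bar.php convention are also assumptions:

        <?php
        // Build an "open in editor" link for the controller file behind the current request.
        define('DOCROOT', 'C:/xampp/htdocs');

        function editor_link() {
            $path  = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
            $parts = array_map('ucfirst', array_filter(explode('/', $path)));
            $file  = DOCROOT . '/controllers/' . implode('/', $parts) . '.php';
            return '<a href="editor://' . htmlspecialchars($file) . '">open in Eclipse</a>';
        }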

    Read the article

  • Unable to launch the ASP.NET Development server because port '1900' is in use.

    - by Shaul
    I don't know what has got into my computer today. I was developing just fine in VS 2008 and testing my ASP.NET web site on my development server. Then suddenly, out of the blue, I can't run my web site any more! As soon as I hit F5, the message appears: Unable to launch the ASP.NET Development server because port '1900' is in use. And it doesn't matter what port I change to, it's always in use! AAARRRGGGHH!!! I have tried: Changing the port number Restarting Visual Studio Rebooting my machine Installing IIS Clue: My IIS refuses to start. But I didn't have IIS installed when I was happily working earlier, so that is probably not the issue; it might just be highlighting something else... Thanks in advance... Update: after rebooting, IIS does start, but the problem here persists.
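    Whatever the root cause turns out to be, it helps to see which process is actually holding the port. From a command prompt (Windows, since this is the Visual Studio development server) something like the following shows it; 1234 is a placeholder for the PID that netstat reports:

        netstat -ano | findstr :1900
        REM note the owning PID in the last column, then look it up:
        tasklist /FI "PID eq 1234"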

    Read the article

  • Are there any B-tree programs or sites that show visually how a B-tree works

    - by Phenom
    I found this website that lets you insert and delete items from a B-tree and shows you visually what the B-tree looks like: java b-tree. I'm looking for another website or program similar to this. This site does not allow you to specify a B-tree of order 4 (4 pointers and 3 elements); it only lets you specify B-trees with an even number of elements. Also, if possible, I'd like to be able to insert letters instead of numbers. I think I actually found a different site a while ago, but I can't find it anymore.

    Read the article

  • .net activeX object

    - by Ali YILDIRIM
    Hi, I am trying to use my .NET image editor user control as an ActiveX object in a web form. After an internet search, I created an ASP.NET web site in VS2008 and added the following code:

        <object classid="res/ImageEditor.dll#ImageEditor.Editor" height="400" width="400" id="myControl1" name="myControl1" >
        </object>
        <INPUT id="Button1" type="button" value="Btn" name="Btn" onclick="return Button1_onclick()">
        </script>
        <script language=javascript>
        function Button1_onclick() {
            alert(document.getElementById("myControl1").WatermarkText);
        }
        </script>

    I have two problems: 1) When I first create the project I see the user control in the browser, but after rebuilding the user control and replacing the DLL file on the web site, the object no longer appears in the browser; instead I see something like an error image. 2) I cannot access public properties. The user control is marked as "make COM visible", and "Register for COM interop" is checked in the project properties.

    Read the article

  • SignalR cross domain does not work after updating to 0.5.1

    - by jlp
    My site uses SignalR to communicate cross-domain. It worked great, but I updated SignalR from 0.5.0 to 0.5.1 and my site broke. Here's my script:

        function start() {
            $.connection.hub.url = "http://otherdomain.com/signalr";
            $.connection.hub.start().done(function () {
                $.connection.myHub.join();
            });
        }

    When the script calls join, I see in Firebug that it is a POST (I believe it should be a GET (jsonp) since it is cross-domain) with no response. EDIT: I tried $.connection.hub.start({jsonp: true}). Now I can call the server from the client, but calls from the server don't execute on the client. I noticed that there are the following calls: negotiate and send, whereas locally (same domain) there are: negotiate, connect, send.

    Read the article

  • Data refresh and drill down problem with SSAS cube and excel services

    - by chaitanya
    I have an SSAS cube which I am using in an Excel document to prepare a report with drill-down etc., and I am publishing it to a SharePoint site. It gets published all right, but when I try to drill down it throws a "Data Refresh failed" error. The data source and the SharePoint site are on the same machine (running Windows Server 2008) and we have Windows authentication running. From what I have been able to find on the internet, there is a problem with passing the Windows authentication credentials through to the database, but I have not been able to find the exact way to sort out this problem. What is the solution for this?

    Read the article

  • Appengine Apps Vs Google bot web crawler

    - by sandeep koduri
    I built an App Engine web app, cricket.hover.in. The web app has about 15k URLs linked within it, but even a long time after launch, no pages are indexed on Google. Any link placed on my root site hover.in gets indexed within minutes, and I placed a link to this app on the root site's home page long ago, but it's of no use. Can anyone analyse whether there is an issue with cricket.hover.in, or whether Google's bots have a problem with Google App Engine? I tested the URL using the Labs feature of Google Webmaster Tools and there the response is fine and the HTML is clear, but when I test the same site (cricket.hover.in) at the following URLs they report failures: www.dnsqueries.com/en/googlebot_simulator.php and www.smart-it-consulting.com/internet/google/googlebot-spoofer/ . If I test some of my PHP or WordPress links at the same URLs, the results are good and fine. Please help me with this.

    Read the article

  • Unable to run winpdb

    - by hekevintran
    I tried to run winpdb.py, but I got an error saying that it could not find wxPython. This is strange to me because I know I have wxPython installed and included in my PYTHONPATH. I can import wx in the Python interpreter. Mac OS X 10.5.8, Python 2.6.

        PYTHONPATH=/sw/lib/python2.6/site-packages/:/usr/local/lib/wxPython-unicode-2.8.10.1/lib/python2.6/site-packages/wx-2.8-mac-unicode:

        $ python winpdb.py
        wxPython was not found. wxPython 2.6 or higher is required to run the winpdb GUI.
        wxPython is the graphical user interface toolkit used by Winpdb.
        You can find more information on wxPython at http://www.wxpython.org/
        The Unicode version of wxPython is recommended for Winpdb.
        To use the debugger without a GUI, run rpdb2.

    What could be causing this?
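    A quick diagnostic sketch (not part of winpdb itself) to confirm which interpreter the script is actually running under and whether that interpreter can see wxPython; a common cause is that winpdb is launched with a different Python binary than the one whose site-packages holds wx:

        from __future__ import print_function
        import sys

        print(sys.executable)          # which Python binary is running this script
        print("\n".join(sys.path))     # is the wxPython site-packages directory listed?
        try:
            import wx
            print("wx found:", wx.__version__)
        except ImportError as exc:
            print("wx not importable:", exc)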

    Read the article

  • Redirect folder to different server

    - by yuval
    I know you can redirect subdomains to a different server, but can you do the same with folders? Say I have example.com. I can redirect mysubdomain.example.com to a different server, but can I redirect example.com/mysubdomain to a different server? I'd like to host a rails app in that folder on a site that runs php while still maintaining good search engines ratings (by not creating a sub domain which in my experience in recognized as a different site). Any help? Thanks!
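    If the PHP site is served by Apache, one common way to do this is mod_proxy (with mod_proxy and mod_proxy_http enabled); a sketch, with the host name and port of the Rails server as placeholders:

        ProxyPass        /mysubdomain  http://rails-host.example.com:3000/
        ProxyPassReverse /mysubdomain  http://rails-host.example.com:3000/

    The URL stays under example.com, which is the point of avoiding a separate subdomain in the first place.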

    Read the article

  • Configuring asp.net web applications. Best practices

    - by Andrew Florko
    Hello everybody, There is a lot of configurable information for a web site:

    - UI messages
    - Number of records used in pagination & other UI parameters
    - Cache duration for web pages & timeouts
    - Route maps & site structure
    - ...

    There are also many places to store all this information:

    - AppSettings (web.config)
    - Custom sections (web.config)
    - External xml/text files referred from web.config
    - Internal static class(es) of constants
    - Database table(s)
    - ...

    What approaches do you usually choose for your tasks & what approaches do you find unsuitable? Thank you in advance!
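    As an illustration of the first two storage approaches (a sketch only; the key names, section name, and defaults are invented for the example):

        using System.Collections.Specialized;
        using System.Configuration;

        public static class SiteConfig
        {
            // appSettings entry in web.config:  <add key="PageSize" value="20" />
            public static int PageSize
            {
                get { return int.Parse(ConfigurationManager.AppSettings["PageSize"] ?? "20"); }
            }

            // custom section declared in <configSections> with
            // type="System.Configuration.NameValueSectionHandler"
            public static string UiMessage(string key)
            {
                var section = (NameValueCollection)ConfigurationManager.GetSection("uiMessages");
                return section == null ? null : section[key];
            }
        }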

    Read the article

  • jqmodal IE (7 or 8) flashes black before modal loaded

    - by brad
    This is killing me. In both IE7 and 8, using jqModal, the screen flashes black before the modal content is loaded. I've set up a test app to show you what's happening. I've taken jqModal EXACTLY from the site, no changes whatsoever, and there is no external CSS that could be affecting my app. It works perfectly in every other browser (including IE6): http://jqmtest.heroku.com/ The first two links are ajax calls, the last is straight-up inline HTML. (I originally thought it was the ajax that was affecting it, but that doesn't seem to be the case; I then thought it was slow-loading ajax, hence the two different ajax links.) What's crazy is that the jqModal site itself works perfectly in IE, with no flashing of black, but I can't see what I'm doing wrong. The code is straightforward HTML:

        <body>
          <div id="ajaxModal" class="jqmWindow"></div>
          <div id="inlineModal" class="jqmWindow">
            <div style="height:300px;position:relative;">
              <p>Here's some inline content</p>
              <a href="#" onclick='$("#inlineModal").jqmHide();return false;' style="position:absolute;bottom:10px;right:10px">Close</a>
            </div>
          </div>
          <div style="width:600px;height:400px;margin:auto;background:#eee;">
            <p><a href="/ajax/short" class="jqModal">Short loading modal</a></p>
            <br />
            <p><a href="/ajax/long" class="jqModal">Longer loading modal</a></p>
            <br />
            <p><a href="#" class="jqInline">inline modal</a></p>
          </div>
        </body>

    Javascript:

        <script type="text/javascript">
          $(function(){
            $("#ajaxModal").jqm({ajax:'@href', modal:true});
            $("#inlineModal").jqm({modal:true, trigger:'.jqInline'});
          });
        </script>

    The CSS is exactly the same as the one downloaded from jqModal's site so I'll omit it, but you can see it on my app. Has anyone experienced this? I don't get how his works and mine doesn't.

    Read the article

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table. Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult. What these users really need is a point and click approach that minimizes the learning curve for the data integration tool. This is what CSVexpress (www.CSVexpress.com) is all about! It is based on expressor Studio, a data integration tool I've been reviewing over the last several months. With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators. There's no need to learn the intricacies of the data integration tool or to write code. Let's look at an example. Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary.

        EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY
        102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000"
        101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000"
        103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000"
        304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000"
        333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000"
        100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000"
        334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000"
        400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000"

    Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping. In moving this data to a database table I want to express the dates using a format that includes the century since it's obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character. Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio. Directives for these modifications are included in the description of the incoming data.

    Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored. For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information. With expressor Studio, I use wizards to create these artifacts. First click New Connection > File Connection in the Home tab of expressor Studio's ribbon bar, which starts the File Connection wizard. In the first window, I enter the path to the directory that contains the input file. Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact.
To create the Database Connection artifact, I must know the location of, or instance name of, the target database and have the credentials of an account with sufficient privileges to write to the target table. To use expressor Studio's features to the fullest, this account should also have the authority to create a table. I click New Connection > Database Connection in the Home tab of expressor Studio's ribbon bar, which starts the Database Connection wizard. expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the "Supplied database drivers" drop down control. If my desired RDBMS isn't listed, I can optionally use an existing ODBC DSN by selecting the "Existing DSN" radio button. In the following window, I enter the connection details. With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials. After clicking Next, I enter a meaningful name for this connection artifact and clicking Finish closes the wizard and saves the artifact. Now I create a schema artifact, which describes the structure of the file data. When expressor reads a file, all data fields are typed as strings. In some use cases this may be exactly what is needed and there is no need to edit the schema artifact. But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and job ID's to integers, and convert the salary to a decimal value. Again a wizard is used to create the schema artifact. I click New Schema > Delimited Schema in the Home tab of expressor Studio's ribbon bar, which starts the Delimited Schema wizard. In the first window, I click Get Data from File, which then displays a listing of the file connections in the project. When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file's content and confirm that the appropriate delimiter characters are selected in the "Field Delimiter" and "Record Delimiter" drop down controls; then I click Next. Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking "Set All Names from Selected Row." Alternatively, I could enter a different identifier into the Field Details > Name text box. I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact. Now I open the schema artifact in the schema editor. When I first view the schema's content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file. To change an attribute's name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor's ribbon bar. This opens the Edit Attribute window; I can change the attribute name and select the desired type from the "Data type" drop down control. In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).
Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type, for the StartingDate and TerminationDate attributes, I select Datetime as the data type, and for the FinalSalary attribute, I select the Decimal type. But I can do much more in the schema editor. For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999. I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date). As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor's ribbon bar. This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file. Note the use of Y01 as the syntax for the year. This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century. As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes. And now to the Salary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the "Use currency" check box. This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed. And on the Grouping tab, I select the "Use grouping" checkbox and enter 3 into the "Group size" text box, a comma into the "Grouping character" text box, and a decimal point into the "Decimal separator" character text box. These entries allow the string to be properly converted into a decimal value. By making these entries into the schema that describes my input file, I've specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself. Assembling the data integration application is simple. Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator. Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio. For each property, I can select an appropriate entry from the corresponding drop down control. Clicking on the button to the right of the "File name" text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file. I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped. I then select the Write Table operator and in its Properties panel specify the database connection, normal for the "Mode," and the "Truncate" and "Create Missing Table" options.
If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator. The last task needed to complete the application is to create the schema artifact used by the Write Table operator. This is extremely easy as another wizard is capable of using the schema artifact assigned to the Read Table operator to create a schema artifact for the Write Table operator. In the Write Table Properties panel, I click the drop down control to the right of the "Schema" property and select "New Table Schema from Upstream Output..." from the drop down menu. The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist. The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection. The fourth screen gives me the opportunity to fine tune the table's description. In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the LastSalary column. I also provide the name for the table. This completes development of the application. The entire application was created through the use of wizards and the required data transformations specified through simple constraints and specifications rather than through coding. To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials. expressor Studio is as close to a point and click data integration tool as one could want and I urge you to try this product if you have a need to move data between files or from files to database tables. Check out CSVexpress in more detail. It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Caching NHibernate Named Queries

    - by TStewartDev
    I recently started a new job and one of my first tasks was to implement a "popular products" design. The parameters were that it be done with NHibernate and be cached for 24 hours at a time because the query will be pretty taxing and the results do not need to be constantly up to date. This ended up being tougher than it sounds. The database schema meant a minimum of four joins with filtering and ordering criteria. I decided to use a stored procedure rather than letting NHibernate create the SQL for me. Here is a summary of what I learned (even if I didn't ultimately use all of it): You can't, at the time of this writing, use Fluent NHibernate to configure SQL named queries or imports You can return persistent entities from a stored procedure and there are a couple ways to do that You can populate POCOs using the results of a stored procedure, but it isn't quite as obvious You can reuse your named query result mapping other places (avoid duplication) Caching your query results is not at all obvious Testing to see if your cache is working is a pain NHibernate does a lot of things right. Having unified, up-to-date, comprehensive, and easy-to-find documentation is not one of them. By the way, if you're new to this, I'll use the terms "named query" and "stored procedure" (from NHibernate's perspective) fairly interchangeably. Technically, a named query can execute any SQL, not just a stored procedure, and a stored procedure doesn't have to be executed from a named query, but for reusability, it seems to me like the best practice. If you're here, chances are good you're looking for answers to a similar problem. You don't want to read about the path, you just want the result. So, here's how to get this thing going. The Stored Procedure NHibernate has some guidelines when using stored procedures. For Microsoft SQL Server, you have to return a result set. The scalar value that the stored procedure returns is ignored as are any result sets after the first. Other than that, it's nothing special. CREATE PROCEDURE GetPopularProducts @StartDate DATETIME, @MaxResults INT AS BEGIN SELECT [ProductId], [ProductName], [ImageUrl] FROM SomeTableWithJoinsEtc END The Result Class - PopularProduct You have two options to transport your query results to your view (or wherever is the final destination): you can populate an existing mapped entity class in your model, or you can create a new entity class. If you go with the existing model, the advantage is that the query will act as a loader and you'll get full proxied access to the domain model. However, this can be a disadvantage if you require access to the related entities that aren't loaded by your results. For example, my PopularProduct has image references. Unless I tie them into the query (thus making it even more complicated and expensive to run), they'll have to be loaded on access, requiring more trips to the database. Since we're trying to avoid trips to the database by using a second-level cache, we should use the second option, which is to create a separate entity for results. This approach is (I believe) in the spirit of the Command-Query Separation principle, and it allows us to flatten our data and optimize our report-generation process from data source to view. public class PopularProduct { public virtual int ProductId { get; set; } public virtual string ProductName { get; set; } public virtual string ImageUrl { get; set; } } The NHibernate Mappings (hbm) Next up, we need to let NHibernate know about the query and where the results will go. 
Below is the markup for the PopularProduct class. Notice that I'm using the <resultset> element and that it has a name attribute. The name allows us to drop this into our query map and any others, giving us reusability. Also notice the <import> element which lets NHibernate know about our entity class. <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"> <import class="PopularProduct, Infrastructure.NHibernate, Version=1.0.0.0"/> <resultset name="PopularProductResultSet"> <return-scalar column="ProductId" type="System.Int32"/> <return-scalar column="ProductName" type="System.String"/> <return-scalar column="ImageUrl" type="System.String"/> </resultset> </hibernate-mapping>  And now the PopularProductsMap: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"> <sql-query name="GetPopularProducts" resultset-ref="PopularProductResultSet" cacheable="true" cache-mode="normal"> <query-param name="StartDate" type="System.DateTime" /> <query-param name="MaxResults" type="System.Int32" /> exec GetPopularProducts @StartDate = :StartDate, @MaxResults = :MaxResults </sql-query> </hibernate-mapping>  The two most important things to notice here are the resultset-ref attribute, which links in our resultset mapping, and the cacheable attribute. The Query Class – PopularProductsQuery So far, this has been fairly obvious if you're familiar with NHibernate. This next part, maybe not so much. You can implement your query however you want to; for me, I wanted a self-encapsulated Query class, so here's what it looks like: public class PopularProductsQuery : IPopularProductsQuery { private static readonly IResultTransformer ResultTransformer; private readonly ISessionBuilder _sessionBuilder;   static PopularProductsQuery() { ResultTransformer = Transformers.AliasToBean<PopularProduct>(); }   public PopularProductsQuery(ISessionBuilder sessionBuilder) { _sessionBuilder = sessionBuilder; }   public IList<PopularProduct> GetPopularProducts(DateTime startDate, int maxResults) { var session = _sessionBuilder.GetSession(); var popularProducts = session .GetNamedQuery("GetPopularProducts") .SetCacheable(true) .SetCacheRegion("PopularProductsCacheRegion") .SetCacheMode(CacheMode.Normal) .SetReadOnly(true) .SetResultTransformer(ResultTransformer) .SetParameter("StartDate", startDate.Date) .SetParameter("MaxResults", maxResults) .List<PopularProduct>();   return popularProducts; } }  Okay, so let's look at each line of the query execution. The first, GetNamedQuery, matches up with our NHibernate mapping for the sql-query. Next, we set it as cacheable (this is probably redundant since our mapping also specified it, but it can't hurt, right?). Then we set the cache region which we'll get to in the next section. Set the cache mode (optional, I believe), and my cache is read-only, so I set that as well. The result transformer is very important. This tells NHibernate how to transform your query results into a non-persistent entity. You can see I've defined ResultTransformer in the static constructor using the AliasToBean transformer. The name is obviously leftover from Java/Hibernate. Finally, set your parameters and then call a result method which will execute the query. Because this is set to cached, you execute this statement every time you run the query and NHibernate will know based on your parameters whether to use its cached version or a fresh version. 
The Configuration – hibernate.cfg.xml and Web.config You need to explicitly enable second-level caching in your hibernate configuration: <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2"> <session-factory> [...] <property name="dialect">NHibernate.Dialect.MsSql2005Dialect</property> <property name="cache.provider_class">NHibernate.Caches.SysCache.SysCacheProvider,NHibernate.Caches.SysCache</property> <property name="cache.use_query_cache">true</property> <property name="cache.use_second_level_cache">true</property> [...] </session-factory> </hibernate-configuration> Both properties "use_query_cache" and "use_second_level_cache" are necessary. As this is for a web deployement, we're using SysCache which relies on ASP.NET's caching. Be aware of this if you're not deploying to the web! You'll have to use a different cache provider. We also need to tell our cache provider (in this cache, SysCache) about our caching region: <syscache> <cache region="PopularProductsCacheRegion" expiration="86400" priority="5" /> </syscache> Here I've set the cache to be valid for 24 hours. This XML snippet goes in your Web.config (or in a separate file referenced by Web.config, which helps keep things tidy). The Payoff That should be it! At this point, your queries should run once against the database for a given set of parameters and then use the cache thereafter until it expires. You can, of course, adjust settings to work in your particular environment. Testing Testing your application to ensure it is using the cache is a pain, but if you're like me, you want to know that it's actually working. It's a bit involved, though, so I'll create a separate post for it if comments indicate there is interest.

    Read the article

  • Page.ClientScript.RegisterClientScriptBlock problem in chrome, not registering

    - by user270399
    Hello! I am using Page.ClientScript.RegisterClientScriptBlock to register a variable that tells the site to show something for the user (no need to get into what is shown). I am going to try RegisterStartupScript instead and see if I get a different result; I will get back on that. On my site I use jQuery to look for that variable and do something if it exists. This works very well in IE 6, 7, 8 & FF, but not in Chrome. In Chrome it works sometimes, but most of the time it does not: the variable does not get inserted on the page. The variable is being inserted in Page_PreRender. Has anyone got a workaround for this, or experienced the same problem? Thanks
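    For reference, the RegisterStartupScript variant mentioned above looks roughly like this (a sketch; the variable and key names are made up). Unlike RegisterClientScriptBlock, the startup script is emitted near the end of the form, so it runs after the page elements it refers to exist:

        // e.g. in Page_PreRender of the page or control
        const string key = "showWidget";
        if (!Page.ClientScript.IsStartupScriptRegistered(this.GetType(), key))
        {
            Page.ClientScript.RegisterStartupScript(
                this.GetType(), key, "var showWidget = true;", true /* addScriptTags */);
        }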

    Read the article

  • Visual Studio / Visual Source Safe / Integrated Security / IIS 7

    - by Jason
    Using Visual Source Safe with IIS integration (the working dir is the IIS site), Visual Studio, pointed to the IIS site, would load up the web project, and it would be under VSS control (you have to check out files, etc.). Recently, we had to switch to Integrated Security for our database connections from the web app. This means changing the impersonation of the IIS app pool (and anonymous authentication) to the impersonated account. Since I did this, my project loads in Visual Studio, but it acts as if I'm not me, and the files aren't under source control anymore. I'm going to assume it's something with the pass-through from IIS to VSS (as you'll remember, you had to add IIS_USERS to the VSS list of users). Even trying to add the impersonated account didn't work. Any ideas?

    Read the article

  • How to set access-control-allow-origin in webrick under rails?

    - by brad
    I have written a small rails app to serve up content to another site via xmlhttprequests that will be operating from another domain (it will not be possible to get them running on the same server). I understand I will need to set access-control-allow-origin on my rails server to allow the requesting web page to access this material. It seems fairly well documented how to do this with Apache and this is probably the server I will use once I deploy the site. While I am developing though I hope to just use webrick as I am used to doing with rails. Is there a way of configuring webrick to provide the appropriate http header within rails?
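    One way that behaves the same under WEBrick in development and Apache in production is to set the header inside the Rails app itself; a sketch, assuming Rails 2.x/early 3.x-style filters (and ideally locking the origin down to the real requesting domain rather than *):

        class ApplicationController < ActionController::Base
          after_filter :set_access_control_headers

          private

          def set_access_control_headers
            # allow the other site's XMLHttpRequests to read the response
            headers['Access-Control-Allow-Origin'] = 'http://the-other-site.example.com'
          end
        end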

    Read the article

  • How to track humans, but not spiders?

    - by johnnietheblack
    I am writing a nice little PHP/MySQL/JS ad system for my site - charging by impression. Therefore, my clients would like to be sure that they aren't paying for impressions caused by robots / spiders / etc. Is there a decently effective way to decipher between human and robot, without doing something obstructive like a captcha, or requesting "no index" or something? Basically, is there some sort of "name tag" that legit spiders have on so that my site knows their presence, therefore letting me react accordingly? And, if there isn't...how do ad companies like Google defend against this?
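    A rough first pass in PHP (a sketch only; record_impression and $ad_id are hypothetical pieces of the ad system, and real networks combine several signals) is to check the User-Agent for the tokens that well-behaved crawlers send before counting the impression:

        <?php
        function looks_like_bot() {
            $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
            // legitimate spiders identify themselves; an empty User-Agent is suspicious too
            return $ua === '' || preg_match('/bot|crawl|spider|slurp|mediapartners/i', $ua);
        }

        if (!looks_like_bot()) {
            record_impression($ad_id);
        }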

    Read the article

  • iPhone - Web Access Authentication

    - by Terry
    I am building a secure app for our exec's... here is my setup. It's a somewhat Macgyver approach, but bear with me :) There are only 10 users, I have a record of each uniqueIdentifier on my backend in a database table. (This is internal only for our users, so I don't believe I am breaking the public user registration rule mentioned in the API docs) Through adhoc distribution I install my app on all 10 devices My app is simply composed of a UIWebView. When the app starts it does a POST to our https site sending the uniqueIdentifier. (Thanks to this answer) The server page that recieves the POST, checks the uniqueIdentifier and if found sets a session cookie that automatically logs them into the site. This way the user doesn't have to enter in their credentials every time. So what do you think, is there a security hole with this? Thanks
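    The client side of step 4 can stay very small; a sketch in Objective-C, with the URL as a placeholder and webView assumed to be the app's UIWebView outlet. uniqueIdentifier was the UIDevice API available at the time (it has since been deprecated), and whether the POST goes through the UIWebView or through NSURLConnection is a design choice, since both share the same cookie storage:

        NSString *udid = [[UIDevice currentDevice] uniqueIdentifier];
        NSString *body = [NSString stringWithFormat:@"udid=%@", udid];

        NSMutableURLRequest *request =
            [NSMutableURLRequest requestWithURL:[NSURL URLWithString:@"https://example.com/device-login"]];
        [request setHTTPMethod:@"POST"];
        [request setHTTPBody:[body dataUsingEncoding:NSUTF8StringEncoding]];

        // the server checks the udid, sets the session cookie, and redirects into the site
        [webView loadRequest:request];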

    Read the article

  • Html index page and files in that directory

    - by Frank
    On my web site there is an index page, but if I take out that index page, users will see the files in that directory. For instance, my site is XYZ.com and I have a directory called "My_Dir". When a user types in "XYZ.com/My_Dir" he will see the index.html if there is one, but if it's not there, he will see a list of all my files inside "My_Dir". So is it safe to assume that with an index page in any of my sub-directories, I can hide all the files in those directories from users? In other words, if I have "123.txt, abc.html and index.html" in "My_Dir", users won't be able to see "123.txt, abc.html" because of the existence of "index.html" [unless of course I mention those two files inside index.html]? Frank
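    Two caveats worth noting: the index page only hides the listing, so anyone who already knows (or guesses) a file name such as XYZ.com/My_Dir/123.txt can still open it directly; and the automatic listing can also be switched off at the server level instead of adding empty index files. On Apache, for example, that is a one-line directive in the vhost or .htaccess:

        Options -Indexes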

    Read the article

  • ASP.NET - Missing #includes cause compilation errors: Failed to map the path '...'

    - by frankadelic
    I have an ASP.NET application which features some server-side includes. For example: <!--#include virtual="/scripts.inc" --> These files are not present in my ASP.NET website project because my website starts in a virtual directory: /path-to-my-application When I choose Build Web Site, I get this error: Failed to map the path '/scripts.inc' Visual Studio cannot resolve these include files that are defined at the root directory level. They are not visible in the website project. Aside from manually commenting out the #include references, is there any way I can get the website to build? Can I force Visual Studio to ignore those errors and compile the site? Once the website is pushed out to IIS, there is no problem, because all the #include files are in place. NOTE - Web Controls are not an option for this application. Please assume #include files are a requirement. Also, I cannot move the include files since they are used by other applications.

    Read the article
