Search Results

Search found 22893 results on 916 pages for 'client scripting'.

  • How to create custom omniauth provider (how to return data)

    - by user2803917
    I have searched all around the net for how to create a custom provider for OmniAuth, and I partly succeeded: I created a gem and it works, except that I can't understand how to return the gathered data to the sessions controller the way other providers do. Here is the code in the auth gem:

        require 'multi_json'
        require 'digest/md5'
        require 'rest-client'

        module OmniAuth
          module Strategies
            class Providername < OmniAuth::Strategies::OAuth
              attr_accessor :app_id, :api_key, :auth

              def initialize(app, app_id = nil, api_key = nil, options = {})
                super(app, :providername)
                @app_id = app_id
                @api_key = api_key
              end

              protected

              def request_phase
                redirect "http://valid_url"
              end

              def callback_phase
                if request.params['code'] && request.params['status'] == 'ok'
                  response = RestClient.get("http://valid_url2/?code=#{request.params['auth_code']}")
                  auth = MultiJson.decode(response.to_s)
                  unless auth['error']
                    @auth_data = auth
                    if @auth_data
                      @return_data = OmniAuth::Utils.deep_merge(super, {
                        'uid'      => @auth_data['uid'],
                        'nickname' => @auth_data['nick'],
                        'user_info' => {
                          'first_name' => @auth_data['name'],
                          'last_name'  => @auth_data['surname'],
                          'location'   => @auth_data['place'],
                        },
                        'credentials' => { 'apikey' => @auth_data['apikey'] },
                        'extra'       => { 'user_hash' => @auth_data }
                      })
                    end
                  end
                else
                  fail!(:invalid_request)
                end
              rescue Exception => e
                fail!(:invalid_response, e)
              end
            end
          end
        end

    And here is how I call it in my initializer:

        Rails.application.config.middleware.use OmniAuth::Builder do
          provider "providername", Settings.providers.providername.app_id,
                   Settings.providers.providername.app_secret
        end

    With this code everything works fine so far: the provider gets called, I get the info from the provider, and I build a hash (@auth_data) with the info. But how do I return it?
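
    For comparison, here is a minimal sketch of how I understand the newer OmniAuth strategy DSL exposes its data (this is an assumption on my part based on the OmniAuth 1.x docs; the provider and field names are placeholders, not my real provider):

        # Illustrative only: an OmniAuth 1.x strategy skeleton using the uid/info/extra DSL.
        # The auth hash built from these blocks is what the sessions controller sees
        # as request.env['omniauth.auth'] in the callback action.
        module OmniAuth
          module Strategies
            class Providername
              include OmniAuth::Strategy

              option :name, 'providername'

              uid { raw_info['uid'] }

              info do
                {
                  'nickname'   => raw_info['nick'],
                  'first_name' => raw_info['name'],
                  'last_name'  => raw_info['surname'],
                  'location'   => raw_info['place']
                }
              end

              extra do
                { 'user_hash' => raw_info }
              end

              def raw_info
                # fetch and memoize the provider's JSON response
                @raw_info ||= MultiJson.decode(
                  RestClient.get("http://valid_url2/?code=#{request.params['code']}").to_s
                )
              end
            end
          end
        end

    Is that the shape I should be aiming for, or is there a way to hand back @return_data from callback_phase in the older OAuth-style strategy above?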

  • GZipStream in a WebHttpResponse producing no data

    - by Pierre 303
    I want to compress my HTTP responses for clients that support it. Here is the code used to send a standard response:

        IHttpClientContext context = (IHttpClientContext)sender;
        IHttpRequest request = e.Request;

        string responseBody = "This is some random text";

        IHttpResponse response = request.CreateResponse(context);
        using (StreamWriter writer = new StreamWriter(response.Body))
        {
            writer.WriteLine(responseBody);
            writer.Flush();
            response.Send();
        }

    The code above works fine. Then I added gzip support below. When I test it with a browser that supports gzip, or with a custom method, it returns an empty string. I'm sure I'm missing something simple, but I just can't find it...

        IHttpClientContext context = (IHttpClientContext)sender;
        IHttpRequest request = e.Request;

        string acceptEncoding = request.Headers["Accept-Encoding"];
        string responseBody = "This is some random text";

        IHttpResponse response = request.CreateResponse(context);
        if (acceptEncoding != null && acceptEncoding.Contains("gzip"))
        {
            byte[] bytes = ASCIIEncoding.ASCII.GetBytes(responseBody);
            response.AddHeader("Content-Encoding", "gzip");
            using (GZipStream writer = new GZipStream(response.Body, CompressionMode.Compress))
            {
                writer.Write(bytes, 0, bytes.Length);
                writer.Flush();
                response.Send();
            }
        }
        else
        {
            using (StreamWriter writer = new StreamWriter(response.Body))
            {
                writer.WriteLine(responseBody);
                writer.Flush();
                response.Send();
            }
        }
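
    My current guess is that the gzip footer is only written when the GZipStream is closed, so calling Send() inside the using block pushes out a body before the compressed data is complete. Here is a sketch of what I'm considering instead - compressing into a buffer first and sending the finished bytes (an untested assumption on my part, not verified against the webserver library; it needs System.IO, System.IO.Compression and System.Text):

        // Compress the payload into a MemoryStream and dispose the GZipStream
        // *before* writing to the response, so the gzip trailer gets flushed.
        byte[] plain = Encoding.ASCII.GetBytes(responseBody);
        byte[] compressed;
        using (MemoryStream buffer = new MemoryStream())
        {
            using (GZipStream gzip = new GZipStream(buffer, CompressionMode.Compress))
            {
                gzip.Write(plain, 0, plain.Length);
            } // disposing here writes the final gzip block
            compressed = buffer.ToArray();
        }

        response.AddHeader("Content-Encoding", "gzip");
        response.Body.Write(compressed, 0, compressed.Length);
        response.Send();

    Is that reasoning correct, or is the problem elsewhere?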

  • FreeTDS runs out of memory from DBD::Sybase

    - by skiphoppy
    When I add client charset = UTF-8 to my freetds.conf file, my DBD::Sybase program emits: Out of memory! and terminates. This happens when I call execute() on an SQL query statement that returns any ntext fields. I can return numeric data, datetimes, and nvarchars just fine, but whenever one of the output fields is ntext, I get this error. All these queries work perfectly fine without the UTF-8 setting, but I do need to handle some characters that throw warnings under the default character set. (See related question.) The error message is not formatted the same way other DBD::Sybase error messages seem to be formatted. I do get a message that a rollback() is being issued, though. (My false AutoCommit flag is being honored.) I think I read somewhere that FreeTDS uses the iconv program to convert between character sets; is it possible that this message is being emitted from iconv? If I execute the same query with the same freetds.conf settings in tsql (FreeTDS's command-line SQL shell), I don't get the error. I'm connecting to SQL Server. What do I need to do to get these queries to return successfully?

  • Why does System.Net.Mail work in one part of my c#.net web app, but not in another?

    - by Marc
    I have a web application that is running on IIS within my company's domain, and is being accessed via intranet. I have this application sending out email based on some user actions. For example, it's a scheduling application in part, so if a task is completed, an email is sent out notifying other users of that. The problem is, the email works flawlessly in some cases, and not at all in others. I have a login.aspx page which sends out report emails when the page is loaded (it's loaded once a day via Windows Task Scheduler) - this always seems to work perfectly. I have an update page which is supposed to send email when text is entered and the "Update" button is clicked - this operation will fail most of the time. Both of these tasks use the same static overloaded method I wrote to send email using System.Net.Mail. I have tried using Gmail as my SMTP server (instead of our internal one), and get the same results. I investigated whether having the local SMTP service running makes any difference, and it doesn't seem to. Besides, since C# is server-side code, shouldn't it only matter what's running on the server, and not the client? Please help me figure out what's wrong! Where should I look? What can I try?
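
    For reference, my helper is essentially a thin wrapper over SmtpClient; a simplified sketch of that kind of wrapper, with the exception detail I'm thinking of logging on the failing path, looks like this (the host, from address and return style are placeholders, not my exact code):

        using System.Net.Mail;

        public static class Mailer
        {
            // Simplified sketch of the helper; server name and addresses are placeholders.
            public static bool TrySend(string to, string subject, string body, out string error)
            {
                error = null;
                try
                {
                    var message = new MailMessage("noreply@example.com", to, subject, body);
                    var client = new SmtpClient("smtp.example.com");
                    client.Send(message);
                    return true;
                }
                catch (SmtpException ex)
                {
                    // StatusCode often distinguishes relay, authentication and timeout problems
                    error = ex.StatusCode + ": " + ex.Message;
                    return false;
                }
            }
        }

    Would capturing the SmtpException status code like this be a sensible way to narrow down the intermittent failures?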

  • MS Word opens documents hosted on WebDav share read-only on Windows Vista and 7, but only if no other WebDAV connection is already open

    - by rjmunro
    We have a WebDAV server with some Word documents on it. (We are using PHP's HTTP_WebDAV_Server, but we get the same issue in tests with Apache mod_dav - both use digest authentication, since basic auth doesn't work on Vista or later.) We have a web page that opens the Word documents using JavaScript like:

        Doc = new ActiveXObject("Sharepoint.OpenDocuments.3");
        Doc.EditDocument(url, 'Word.Document');

    which causes Word to connect to the WebDAV server and open the document, bypassing IE and most of Windows' built-in WebDAV client. On Windows XP, this works perfectly, and (after prompting you to log in) allows you to edit the Word document and save it back to the server. On Windows 7 and Windows Vista, this usually opens the document read-only, but not in all cases. After quite a bit of trial and error, we found that it worked (i.e. opened read/write) if Explorer happened to be already connected to a WebDAV server. Note that this works with any WebDAV server, not necessarily the one with the document that you are trying to edit. So other than telling our users to change settings on their machines, is there anything we can do in the JavaScript SharePoint call, or on the WebDAV server, that will fix this issue? Ps. We have the same problem when launching Word from an HTA file version of our system, with JavaScript like:

        wordApp = new ActiveXObject("Word.application");
        wordApp.Visible = true;
        doc = wordApp.Documents.Open(url);

    Pps. Sorry if you think this question should be on Serverfault (or even SuperUser). I couldn't decide, but because we are programming the WebDAV server ourselves (in PHP) and I have more rep on this site than the others, I decided to post it here :-)

  • JqueryUI dialog box causes button to lose styling

    - by superexsl
    Hey everyone, I'm using jQuery UI dialog boxes and, while it works, it seems to remove any CSS styling on the button that opens the dialog. I have to manually refresh the page to get the styling back. An example button which launches the dialog (button_sub is my CSS theme and launch_popup is used to detect the button click in jQuery):

        <asp:Button ID="btnLaunch" CssClass="button_sub launch_popup" runat="server"
            Text="Dialog..." CausesValidation="false" OnClientClick="return false;" />

    My jQuery:

        $('.launch_popup').click(function() {
            $("#dialog-form").dialog("open");
        });

    My dialog-form:

        $("#dialog-form").dialog({
            autoOpen: false,
            height: 300,
            width: 300,
            modal: true,
            buttons: {
                'Close': function() {
                    $(this).dialog('close');
                }
            }
        });

    This happens to all the buttons that open dialogs. Is there a way to 'restyle' the button without refreshing the page? (It's in an UpdatePanel if that makes a difference, although this bit is client-side, so I'm not sure if that should affect it.) Thanks for any help.

  • How to make the value of one select box drive the options of a second select box

    - by Ben McCormack
    I want to make an HTML form with two select boxes. The selected option in the first select box should drive the options in the second select box. I would like to solve this dynamically on the client (using JavaScript or jQuery) rather than having to submit data to the server. For example, let's say I have the following menu categories and menu items:

        Sandwiches
            Turkey
            Ham
            Bacon
        Sides
            Mac 'n Cheese
            Mashed Potatoes
        Drinks
            Coca Cola
            Sprite
            Sweetwater 420

    I would have two select boxes, named Menu Category and Items, respectively. When the user selects Sandwiches in the Menu Category box, the options in the Items box will only show sandwich options. I'm stuck as to how I might approach this. Once I filter out the 2nd list one time, how do I "find" the list options once I change my menu category in the 1st list? Also, if I'm thinking in SQL, I would have a key in the 1st box that would be used to link to the data in the 2nd box. However, I can't see where I have room for a "key" element in the 2nd box. How could this problem be solved with a combination of jQuery or plain JavaScript?
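
    To make the question concrete, here is a minimal sketch of the kind of thing I imagine (plain jQuery; the menu data is hard-coded and the element ids are made up, just for illustration):

        // Illustrative sketch: the menu data lives in an object keyed by category,
        // and the second select is rebuilt whenever the first one changes.
        var menu = {
            'Sandwiches': ['Turkey', 'Ham', 'Bacon'],
            'Sides':      ["Mac 'n Cheese", 'Mashed Potatoes'],
            'Drinks':     ['Coca Cola', 'Sprite', 'Sweetwater 420']
        };

        $('#menu-category').change(function() {
            var items = menu[$(this).val()] || [];
            var $items = $('#items').empty();
            $.each(items, function(i, name) {
                $items.append($('<option/>').val(name).text(name));
            });
        }).change(); // populate once on page load

    Is keeping the full item list in a JavaScript object like this (and treating the category name as the "key") the usual way to do it, or is there a cleaner pattern?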

  • Python's asyncore to periodically send data using a variable timeout. Is there a better way?

    - by Nick Sonneveld
    I wanted to write a server that a client could connect to and receive periodic updates from, without having to poll. The problem I have experienced with asyncore is that if you do not return true when dispatcher.writable() is called, you have to wait until the asyncore.loop has timed out (the default is 30 s). The two ways I have tried to work around this are 1) reduce the timeout to a low value, or 2) query connections for when they will next update and generate an adequate timeout value. However, if you refer to 'Select Law' in 'man 2 select_tut', it states: "You should always try to use select() without a timeout." Is there a better way to do this? Twisted maybe? I wanted to try and avoid extra threads. I'll include the variable timeout example here:

        #!/usr/bin/python
        import time
        import socket
        import asyncore

        # in seconds
        UPDATE_PERIOD = 4.0

        class Channel(asyncore.dispatcher):
            def __init__(self, sock, sck_map):
                asyncore.dispatcher.__init__(self, sock=sock, map=sck_map)
                self.last_update = 0.0  # should update immediately
                self.send_buf = ''
                self.recv_buf = ''

            def writable(self):
                return len(self.send_buf) > 0

            def handle_write(self):
                nbytes = self.send(self.send_buf)
                self.send_buf = self.send_buf[nbytes:]

            def handle_read(self):
                print 'read'
                print 'recv:', self.recv(4096)

            def handle_close(self):
                print 'close'
                self.close()

            # added for variable timeout
            def update(self):
                if time.time() >= self.next_update():
                    self.send_buf += 'hello %f\n' % (time.time())
                    self.last_update = time.time()

            def next_update(self):
                return self.last_update + UPDATE_PERIOD

        class Server(asyncore.dispatcher):
            def __init__(self, port, sck_map):
                asyncore.dispatcher.__init__(self, map=sck_map)
                self.port = port
                self.sck_map = sck_map
                self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
                self.bind(("", port))
                self.listen(16)
                print "listening on port", self.port

            def handle_accept(self):
                (conn, addr) = self.accept()
                Channel(sock=conn, sck_map=self.sck_map)

            # added for variable timeout
            def update(self):
                pass

            def next_update(self):
                return None

        sck_map = {}
        server = Server(9090, sck_map)

        while True:
            next_update = time.time() + 30.0
            for c in sck_map.values():
                c.update()  # <-- fill write buffers
                n = c.next_update()
                #print 'n:', n
                if n is not None:
                    next_update = min(next_update, n)
            _timeout = max(0.1, next_update - time.time())
            asyncore.loop(timeout=_timeout, count=1, map=sck_map)

  • SQL Server: preventing dirty reads in a stored procedure

    - by pcampbell
    Consider a SQL Server database and its two stored procs:

    1. A proc that performs three important things in a transaction: create a customer, call a sproc to perform another insert, and conditionally insert a third record with the new identity.

        BEGIN TRAN
            INSERT INTO Customer(CustName) VALUES (@CustomerName)
            SELECT @NewID = SCOPE_IDENTITY()
            EXEC CreateNewCustomerAccount @NewID, @CustomerPhoneNumber
            IF @InvoiceTotal > 100000
                INSERT INTO PreferredCust(InvoiceTotal, CustID)
                VALUES (@InvoiceTotal, @NewID)
        COMMIT TRAN

    2. A stored proc which polls the Customer table for new entries that don't have a related PreferredCust entry. The client app performs the polling by calling this stored proc every 500 ms.

    A problem has arisen where the polling stored procedure found an entry in the Customer table and returned it as part of its results. The problem is that it picked up that record, I am assuming, as part of a dirty read. The record ended up having an entry in PreferredCust later, and ended up creating a problem downstream.

    Question: How can you explicitly prevent dirty reads by that second stored proc? The environment is SQL Server 2005 with the default out-of-the-box configuration. No other locking hints are given in either of these stored procedures.
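
    To make the question concrete, here is a sketch of the kind of polling query I mean, with the isolation level spelled out explicitly (table and column names are illustrative, not the real schema) - is this the right sort of thing, or is something stronger needed?

        -- Illustrative polling proc: the explicit isolation level and the
        -- READCOMMITTED table hints are the part I'm asking about.
        CREATE PROCEDURE GetCustomersAwaitingPreferredCheck
        AS
        BEGIN
            SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

            SELECT c.CustID, c.CustName
            FROM Customer AS c WITH (READCOMMITTED)
            LEFT JOIN PreferredCust AS p WITH (READCOMMITTED)
                ON p.CustID = c.CustID
            WHERE p.CustID IS NULL;
        END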

  • How to retrieve information from a deleted row

    - by JM
    How can I retrieve information from deleted rows? I delete some rows from a table in a DataSet, then I use the method GetChanges(DataRowState.Deleted) to get the deleted rows. I then try to delete the corresponding rows from the original table on the server side, but it fails with this error:

        System.Data.DeletedRowInaccessibleException: Deleted row information cannot be accessed through the row.

    What is the correct way? Here is my code, any advice? Thank you everybody.

        DataSet ds = // get dataset from client side

        // get changes
        DataTable delRows = ds.Tables[0].GetChanges(DataRowState.Deleted);

        // try to delete the rows from the table in the DB
        if (delRows != null)
        {
            string connStr = WebConfigurationManager.ConnectionStrings["Employees"].ConnectionString;
            conn = new SqlConnection(connStr);
            conn.Open();

            for (int i = 0; i < delRows.Rows.Count; i++)
            {
                string cmdText = string.Format("DELETE Tab1 WHERE Surname=@Surname");
                cmd = new SqlCommand() { Connection = conn, CommandText = cmdText };

                // here is the problem: I need to get surnames from the rows which were deleted
                var sqlParam = new SqlParameter(@"Surname", SqlDbType.VarChar)
                {
                    Value = delRows.Rows[i][1].ToString()
                };
                cmd.Parameters.Add(sqlParam);

                cmd.CommandText = cmdText;
                cmd.Connection = conn;
                cmd.ExecuteNonQuery();
            }
        }
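
    From what I have read, a deleted DataRow still carries its original values, so maybe the indexer overload that takes a DataRowVersion is what I need - an untested sketch of what I mean:

        // Sketch: read the pre-delete value through the Original row version
        // instead of the (now inaccessible) current version.
        string surname = delRows.Rows[i][1, DataRowVersion.Original].ToString();

        var sqlParam = new SqlParameter("@Surname", SqlDbType.VarChar) { Value = surname };
        cmd.Parameters.Add(sqlParam);

    Is that the correct way to get at the deleted values?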

  • Low-Hanging Fruit: Obfuscating non-critical values in JavaScript

    - by Piskvor
    I'm making an in-browser game of the type "guess what place/monument/etc. is in this satellite/aerial view", using Google Maps JS API v3. However, I need to protect against cheaters - you have to pass a google.maps.LatLng and a zoom level to the map constructor, which means a cheating user only needs to view source to get to this data. I am already unsetting every value I possibly can without breaking the map (such as center and the manipulation functions like setZoom()), and initializing the map in an anonymous function (so the object is not visible in global namespace). Now, this is of course in-browser, client-side, untrusted JavaScript; I've read much of the obfuscation tag and I'm not trying to make the script bullet-proof (it's just a game, after all). I only need to make the obfuscation reasonably hard against the 1337 Java5kryp7 haxz0rz - "kid sister encryption", as Bruce Schneier puts it. Anything harder than base64 encoding would deter most cheaters by eliminating the lowest-hanging fruit - if the cheater is smart and determined enough to use a JS debugger, he can bypass anything I can do (as I need to pass the value to Google Maps API in plaintext), but that's unlikely to happen on a mass scale (there will also be other, not-code-related ways to prevent cheating). I've tried various minimizers and obfuscators, but those will mostly deal with code - the values are still shown verbatim. TL;DR: I need to obfuscate three values in JavaScript. I'm not looking for bullet-proof armor, just a sneeze-guard. What should I use?
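
    The level I have in mind is something like this throwaway XOR-plus-base64 scrambling (purely illustrative; the key sits in the source anyway, which is fine for a sneeze-guard, and older IE would need a base64 shim in place of btoa/atob):

        // Toy obfuscation: XOR each character with a short key, then base64 it.
        // Deliberately weak -- it only has to survive a casual "view source".
        var KEY = 's4t3ll1t3';

        function scramble(text) {
            var out = '';
            for (var i = 0; i < text.length; i++) {
                out += String.fromCharCode(text.charCodeAt(i) ^ KEY.charCodeAt(i % KEY.length));
            }
            return btoa(out);
        }

        function unscramble(blob) {
            var text = atob(blob), out = '';
            for (var i = 0; i < text.length; i++) {
                out += String.fromCharCode(text.charCodeAt(i) ^ KEY.charCodeAt(i % KEY.length));
            }
            return out;
        }

        // e.g. store scramble('48.8583,2.2945,15') and rebuild the LatLng/zoom
        // from unscramble(storedValue).split(',') just before the map constructor runs.

    Is something of this sort good enough for the purpose, or is there a more idiomatic trick people use?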

  • [grails] setting cookies when render type is "contentType: text/json"

    - by Robin Jamieson
    Is it possible to set cookies on the response when the return render type is set as JSON? I can set cookies on the response object when returning with a standard render type and later get them back on the subsequent request. However, if I set the cookies while rendering the return values as JSON, I can't seem to get the cookie back on the next request object. What's happening here? These two actions work as expected, with 'basicForm' performing a regular form post to the action 'withRegularSubmit' when the user clicks submit.

        // first action sets the cookie and second action yields the originally set cookie
        def regularAction = {
            // using cookie plugin
            response.setCookie("username-regular", "regularCookieUser123", 604800);
            return render(view: "basicForm");
        }

        // called by form post
        def withRegularSubmit = {
            def myCookie = request.getCookie("username-regular"); // returns the value 'regularCookieUser123'
            return render(view: "resultView");
        }

    When I switch to setting the cookie just before returning the response as JSON, I don't get the cookie back with the post. The request starts by getting an HTML document that contains a form, and when the document load event is fired, the following request is invoked via JavaScript with jQuery like this:

        var someUrl = "http://localhost/jsonAction";
        $.get(someUrl, function(jsonData) { /* do some work with javascript */ });

    The controller work:

        // this action is called initially and returns an html doc with a form
        def loadJsonForm = {
            return render(view: "jsonForm");
        }

        // called via javascript when the document load event is fired
        def jsonAction = {
            response.setCookie("username-json", "jsonCookieUser456", 604800); // using cookie plugin
            return render(contentType: 'text/json') {
                'pair'('myKey': "someValue")
            };
        }

        // called by form post
        def withJsonSubmit = {
            def myCookie = request.getCookie("username-json"); // got null value, expecting: jsonCookieUser456
            return render(view: "resultView");
        }

    The data is returned to the server as a result of the user pressing the 'submit' button and not through a script. Prior to the submit of both 'withRegularSubmit' and 'withJsonSubmit', I see the cookies stored in the browser (Firefox), so I know they reached the client.
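
    One thing I am considering is bypassing the cookie plugin and going through the raw servlet API just before the JSON render, in case the plugin call happens too late in the response lifecycle - an untested sketch:

        // Untested sketch: set the cookie through javax.servlet.http.Cookie directly
        // before the JSON body is written.
        def jsonAction = {
            def cookie = new javax.servlet.http.Cookie("username-json", "jsonCookieUser456")
            cookie.maxAge = 604800   // one week, in seconds
            cookie.path = "/"
            response.addCookie(cookie)

            return render(contentType: 'text/json') {
                'pair'('myKey': "someValue")
            }
        }

    Would that behave any differently, or is the problem on the jQuery/XHR side rather than in how the cookie is set?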

  • Panel not displaying while downloading file

    - by James123
    I wrote code to download an Excel file. When I click the download button I need to show an ajax-loader image (the pnlPopup panel), but it is not displaying. I think it is because of some "Response" statements (see the code below). The download works fine, but I also want to show the loader panel at the same time.

        <asp:Panel ID="pnlPopup" runat="server" visible="false">
            <div align="center" style="margin-top: 13px;">
                <asp:Image runat="server" ID="imgDownload" src="Images/ajax-loader.gif" alt="" />
                <br />
                <span class="updateProgressMessage">downloading ...</span>
            </div>
        </asp:Panel>

        Protected Sub btnDownload_Click(ByVal sender As Object, ByVal e As EventArgs) 'Handles btnDownload.Click'
            Try
                pnlPopup.Visible = True

                Dim mSurvey As New Survey
                Dim mUser As New User
                Dim dtExcel As DataTable

                mUser = CType(Session("user"), User)
                dtExcel = mSurvey.CreateExcelWorkbook(mUser.UserID, mUser.Client.ID)

                Dim filename As String = "Download.xls"
                InitializeWorkbook()
                GenerateData(dtExcel)

                Response.ContentType = "application/vnd.ms-excel"
                Response.AddHeader("Content-Disposition", String.Format("attachment;filename={0}", filename))
                Response.Clear()
                Response.BinaryWrite(WriteToStream.GetBuffer)
                Response.End()
            Catch ex As Exception
            Finally
            End Try
        End Sub

  • Best way to make a WYSIWYG in Flex?

    - by hngry4powr
    Hello, I have a client that wants a WYSIWYG form that behaves like MS Word. It will have some section labels in it, with users expected to enter notes for each section, and it needs a placeholder for a chart. The trick is that the user needs to know where page breaks are going to fall when printed. The customer agrees to fixed-size paper with fixed-size margins. How should I approach this challenge? Thank you very much for your suggestions. Here is an example:

        Section 1: Explain Idea Here

        { As the user types in this section, the space between this section and the next section expands, but as the user I can't delete the lines that say Section 1 or 2, or the placeholder for the chart. However, if I type a lot of text, I need to display where the page breaks are with a dashed line or something, so the user can make sure the content appears on the desired page. If I type a lot here, for example, Section 2 will get pushed onto the next page }

        Section 2: State why this is a Good Idea

        [ PLACE HOLDER FOR CHART . . . ]

        Section 3: Additional Remarks:

  • XBAP Browser Control - Invoking Click event of the html Input type button

    - by maharaj
    Hi, here is what I have:

        1. XBAP application with a WPF browser control, hosted on Page1.xaml
        2. XBAP in Full Trust, certificate installed in the client browser
        3. Once the XBAP is loaded, the browser control is navigated to some third-party site
        4. We are using MVVM for the XAML stuff

    So, when a certain page is loaded, I attach a click event handler to the input button with id="submit" on the HTML page displayed in the browser control (I used code similar to what is in this URL: http://social.msdn.microsoft.com/Forums/en-US/wpf/thread/a4f0e4d0-78bf-44c5-a3fe-8faf2e7a0568/). It works just fine as long as I don't make a WCF web service call in my ViewModel before or after I attach this event handler. The idea is to invoke the click event for the HTML button and grab the data from the HTML page before calling the web service to save the data from the page. Here is the issue: when I make the WCF web service call (sync or async, it doesn't matter) the click event doesn't happen, but if I comment out the code for the WCF service call, the click event of the HTML input of type button gets invoked. Any help would be appreciated. Thanks, Salil

  • Calculating a date when a date has been chosen by the number of days

    - by Andy
    I have three selection drop-downs in a form comprising day, month and year. Pretty standard. I've omitted all the individual select options for the purposes of this question.

        <label for="start_date">Start Date<font class="required">*</font>:</label>
        <select name="place_booking[day_val]">
        <select name="place_booking[month_val]">
        <select name="place_booking[year_val]">

    Underneath this I have a selection for the number of days the client wishes to stay at the letting.

        <label for="number_of_days">Number of Days<font class="required">*</font>:</label>
        <select name="place_booking[number_of_days]">

    Underneath there is a space to display the departure date based on the two selections above.

        <label for="departure_date">Departure Date<font class="required">*</font>:</label>

    ? - this is the bit where I would like to display the calculated date after the above is selected. Any help would be greatly appreciated.
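
    To illustrate what I am after, something along these lines is roughly what I imagine (plain JavaScript sketch; the element ids are made up, since my selects currently only have names):

        // Illustrative sketch: read the three date parts and the stay length,
        // add the days via the Date object, and write the result into the page.
        function updateDepartureDate() {
            var day    = parseInt(document.getElementById('day_val').value, 10);
            var month  = parseInt(document.getElementById('month_val').value, 10);
            var year   = parseInt(document.getElementById('year_val').value, 10);
            var nights = parseInt(document.getElementById('number_of_days').value, 10);

            var departure = new Date(year, month - 1, day);   // months are 0-based
            departure.setDate(departure.getDate() + nights);  // Date rolls months/years over

            document.getElementById('departure_date').innerHTML =
                departure.getDate() + '/' + (departure.getMonth() + 1) + '/' + departure.getFullYear();
        }

    Is relying on Date to roll over month and year boundaries like this safe, or is there a better approach?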

  • TFS How does merging work?

    - by Johannes Rudolph
    I have a release branch (RB, starting at C5) and a changeset on trunk (C10) that I now want to merge onto RB. The file has changes at C3 (common to both), one in C7 on RB, and one each in C9 and C10 on trunk. So the history for my changed file looks like this:

        RB:    C5 -> C7
        Trunk: C3 -> C9 -> C10

    When I merge C10 from trunk to RB, I'd expect to see a merge window showing me C10 | C3 | C7, since C3 is the common ancestor revision and C10 and C7 are the tips of my two branches respectively. However, my merge tool shows me C10 | C9 | C7. My merge tool is configured to show %1 (OriginalFile) | %3 (BaseFile) | %2 (ModifiedFile), so this tells me TFS chose C9 as the base revision. This is totally unexpected and completely contrary to the way I'm used to merges working in Mercurial or Git. Did I get something wrong, or is TFS trying to drive me nuts with merging? Is this the default TFS merge behavior? If so, can you provide insight into why they chose to implement it this way? I'm using TFS 2008 with VS2010 as a client.

  • Rewrite a URL that's already been redirected?

    - by Jack
    Hi guys, I'm running an Apache2 web server with a dynamic IP address. I bought exampledomain.net, and I use no-ip.com's domain-update service to redirect any visitors to my current IP address (see endnote #1). For example, someone visits exampledomain.net and they get redirected to 73.181.57.34. It works like a charm. However, it isn't all that user-friendly. Can I rewrite the redirected, IP-address URL? I tried these rewrite rules in the root folder's .htaccess...

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^73\.181\.57\.34:88
        RewriteRule ^(.*)$ http://www.exampledomain.net/$1 [L,NC]
        # I simplified the RewriteCond. I would use regex in a real situation.

    Of course, this creates an infinite loop. The user visits www.exampledomain.net. They're redirected to 73.181.57.34:88 by no-ip. Apache redirects them to www.exampledomain.net, which redirects them back to 73.181.57.34:88... and so on and so forth. I'm a noob when it comes to rewriting, but is there a way to rewrite a URL without redirecting? I tried these rewrite rules too (a shot in the dark)...

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^73\.181\.57\.34:88
        RewriteRule ^(.*)$ my.exampledomain.net/$1 [L,NC]
        # I'd read that Apache replied with a redirect header when you include http

    Thanks!

    (1) No-IP works like this: you download and install their dynamic update client on your server. Every couple of minutes it polls your server for its current external IP address. If it has changed, it updates your server's IP address in no-ip's records.

  • How to stay DRY when using both Javascript and ERB templates (Rails)

    - by user94154
    I'm building a Rails app that uses Pusher to push updates over web sockets directly to the client. In JavaScript:

        channel.bind('tweet-create', function(tweet){
            // when a tweet is created, execute the following code:
            $('#timeline').append("<div class='tweet'><div class='tweeter'>" + tweet.username + "</div>" + tweet.status + "</div>");
        });

    This is nasty mixing of code and presentation. So the natural solution would be to use a JavaScript template, perhaps eco or mustache:

        // store this somewhere convenient, perhaps in the view folder:
        tweet_view = "<div class='tweet'><div class='tweeter'>{{tweet.username}}</div>{{tweet.status}}</div>";

        channel.bind('tweet-create', function(tweet){
            // when a tweet is created, execute the following code:
            $('#timeline').append(Mustache.to_html(tweet_view, tweet)); // much cleaner
        });

    This is good and all, except I'm repeating myself. The Mustache template is 99% identical to the ERB templates I have already written to render HTML from the server. The intended output/purpose of the Mustache and ERB templates is 100% the same: to turn a tweet object into tweet HTML. What is the best way to eliminate this repetition? UPDATE: Even though I answered my own question, I really want to see other ideas/solutions from other people - hence the bounty!
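
    The direction I am currently leaning is to keep a single .mustache file and render it from both sides - roughly like this sketch, assuming the mustache gem on the Ruby side and mustache.js on the client (the file path is just an example):

        # app/views/tweets/_tweet.mustache (shared template):
        #   <div class='tweet'><div class='tweeter'>{{username}}</div>{{status}}</div>

        # Server-side sketch: render the same template with the mustache gem
        template = File.read(Rails.root.join("app/views/tweets/_tweet.mustache"))
        html = Mustache.render(template, :username => tweet.username, :status => tweet.status)

    with the same template string served to the page (e.g. inside a script tag) and fed to Mustache.to_html on the client. Are there better-established ways to share the template?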

  • Problem importing Oracle .dmp file

    - by BitFiddler
    So I have looked at all the suggested ways of importing .dmp files and none of them seem to answer this question: where does the data go once you import it? Context: I created a user like so:

        SQL> create user IMPORTER identified by "12345";
        SQL> grant connect, unlimited tablespace, resource to IMPORTER;

    I then ran the 'imp' command as follows:

        C:\>imp system/password FROMUSER=OVIEDOE TOUSER=IMPORTER file=c:\database1.dmp

    Now there were 9 .dmp files; after each one it asked me for the next one, and then I received the message "Import terminated successfully with warnings." The warning was:

        Warning: the objects were exported by OVIEDOE, not by you
        import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
        export client uses WE8ISO8859P1 character set (possible charset conversion)
        IMP-00046: using FILESIZE value from export file of 2147483648

    Now, it says it terminated successfully, so my assumption (I am new to Oracle, so this may be wrong) is that the data was loaded. However, when I use SQL Developer to connect to the database and look under the 'tables' node under the IMPORTER user, there is nothing there. What is going on? Did the data load? If so, where can I find it?
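
    In case it helps frame the question, these are the sort of dictionary queries I was planning to run (as SYSTEM) to hunt for where the rows ended up - purely diagnostic:

        -- Which schemas own tables, and how many?
        SELECT owner, COUNT(*) AS table_count
        FROM   all_tables
        GROUP  BY owner
        ORDER  BY owner;

        -- Did anything land under the IMPORTER schema (or stay under OVIEDOE)?
        SELECT owner, table_name
        FROM   all_tables
        WHERE  owner IN ('IMPORTER', 'OVIEDOE')
        ORDER  BY owner, table_name;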

  • XMLHttpRequest() and Google Analytics Tracking

    - by sjw
    I have implemented an XMLHttpRequest() call to a standalone HTML page which simply has html, title and body tags, with the body containing the Google Analytics tracking code. I want to track when someone makes a request to display information (i.e. a phone number), to try and understand what portion of people look at my directory versus obtaining a phone number to make a call. It is very simple code:

        var xhReq = new XMLHttpRequest();
        xhReq.open("GET", "/registerPhoneClick.htm?id=" + id, false);
        xhReq.send(null);
        var serverResponse = xhReq.responseText;

    Yet I cannot see the "hit" in Analytics... Has anyone had this issue? All the analytics tracking code does is call:

        <script type="text/javascript">
            var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
            document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
        </script>
        <script type="text/javascript">
            try {
                var pageTracker = _gat._getTracker("UA-XXXXXXX");
                pageTracker._trackPageview();
            } catch(err) {}
        </script>

    So realistically, my XMLHttpRequest() calls an .htm file within which a script is executed to make an outbound call to Google Analytics. Is there any reason why an XMLHttpRequest() would not execute this? Does an XMLHttpRequest() still bring the code to the client before execution? Help please.
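
    An alternative I am considering, in case the scripts inside the XHR-fetched page never actually run, is to record the hit directly from the parent page with the tracker object I already have there - a sketch, assuming the standard ga.js pageTracker is in scope:

        // Sketch: log a virtual pageview (or an event) straight from the page
        // handling the click, instead of loading a tracking HTML page via XHR.
        function registerPhoneClick(id) {
            try {
                // virtual pageview, so it shows up alongside normal content reports
                pageTracker._trackPageview('/registerPhoneClick?id=' + id);
                // or, as an event instead:
                // pageTracker._trackEvent('Directory', 'PhoneClick', String(id));
            } catch (err) {}
        }

    Would that be the more reliable route, or should the XHR approach work?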

  • Some clarification needed about synchronous versus asynchronous asio operations

    - by Old newbie
    As far as I know, the main difference between synchronous and asynchronous operations (i.e. write() or read() vs async_write() and async_read()) is that the former don't return until the operation finishes - or errors - while the latter return immediately, the asynchronous operations being driven by an io_service.run() that does not finish until the controlled operations have completed. It seems to me that in sequential operations, such as those involved in TCP/IP connections with protocols like POP3, where the exchange is a sequence such as:

        C: <connect>
        S: Ok.
        C: User...
        S: Ok.
        C: Password
        S: Ok.
        C: Command
        S: answer
        C: Command
        S: answer
        ...
        C: bye
        S: <close>

    the difference between synchronous and asynchronous operations does not make much sense. Of course, in both kinds of operation there is always the risk that the program flow stops indefinitely due to some circumstance - hence the use of timers - but I would like to hear some more authoritative opinions on this matter. I must admit that the question is rather ill-defined, but I would like some advice about when to use one or the other, because I have had problems debugging asynchronous SSL operations with MS Visual Studio in a POP3 client I am working on now - about which I will surely write here soon - and sometimes I think that using asynchronous operations for this may be a bad idea. Not to mention that I am an absolute newbie with these libraries, and that in addition to the difficulty with the language and some obscure concepts in the STL, I must suffer the brevity of the asio documentation.

  • Multiple queries using same datacontext throws SqlException

    - by Raj
    I have a search control with which I'm trying to implement search-as-the-user-types. I'm using LINQ to SQL to fire queries against the database. Though it usually works fine, when the user types really fast some random SqlException is thrown. These are the two different error messages I stumbled across recently:

        A severe error occurred on the current command. The results, if any, should be discarded.
        Invalid attempt to call Read when reader is closed.

    Edit: included code. DataContextFactory class:

        public DataContextFactory(IConnectionStringFactory connectionStringFactory)
        {
            this.dataContext = new RegionDataContext(connectionStringFactory.ConnectionString);
        }

        public DataContext Context
        {
            get { return this.dataContext; }
        }

        public void SaveAll()
        {
            this.dataContext.SubmitChanges();
        }

    Registering IDataContextFactory with Unity:

        // Get connection string from Application.Current settings
        ConnectionInfo connectionInfo = Application.Current.Properties["ConnectionInfo"] as ConnectionInfo;

        // Register ConnectionStringFactory with Unity container as a singleton
        this.container.RegisterType<IConnectionStringFactory, ConnectionStringFactory>(
            new ContainerControlledLifetimeManager(),
            new InjectionConstructor(connectionInfo.ConnectionString));

        // Register DataContextFactory with Unity container
        this.container.RegisterType<IDataContextFactory, DataContextFactory>();

    Connection string:

        Data Source=.\SQLEXPRESS2008;User Instance=true;Integrated Security=true;AttachDbFilename=C:\client.mdf;MultipleActiveResultSets=true;

    Using the DataContext from a repository class:

        // IDataContextFactory dependency is injected by Unity
        public CompanyRepository(IDataContextFactory dataContextFactory)
        {
            this.dataContextFactory = dataContextFactory;
        }

        // return List<T> of companies
        var results = this.dataContextFactory.Context.GetTable<CompanyEntity>()
            .Join(this.dataContextFactory.Context.GetTable<RegionEntity>(),
                  c => c.regioncode,
                  r => r.regioncode,
                  (c, r) => new { c = c, r = r })
            .Where(t => t.c.summary_region != null)
            .Select(t => new { Id = t.c.compcode, Company = t.c.compname, Region = t.r.regionname })
            .ToList();

    What is the workaround?
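
    One workaround I am weighing is to stop sharing a single long-lived DataContext across overlapping searches and have the factory hand out a fresh, short-lived context per query - roughly this sketch (CompanySearchResult and the exact query shape are made up for illustration):

        // Sketch: a short-lived DataContext per search, so concurrent
        // keystroke-driven queries never share a connection or data reader.
        public List<CompanySearchResult> Search(string term)
        {
            using (var context = new RegionDataContext(this.connectionString))
            {
                return context.GetTable<CompanyEntity>()
                    .Where(c => c.summary_region != null && c.compname.StartsWith(term))
                    .Select(c => new CompanySearchResult
                    {
                        Id = c.compcode,
                        Company = c.compname
                    })
                    .ToList();
            }
        }

    Is that the standard guidance for LINQ to SQL here, or should MultipleActiveResultSets make the shared context safe?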

  • In .NET, how do I prevent, or handle, tampering with form data of disabled fields before submission?

    - by David
    Hi, if a disabled drop-down list is dynamically rendered to the page, it is still possible to use Firebug, or another tool, to tamper with the submitted value and to remove the "disabled" HTML attribute. This code:

        protected override void OnLoad(EventArgs e)
        {
            var ddlTest = new DropDownList() { ID = "ddlTest", Enabled = false };
            ddlTest.Items.AddRange(new []
            {
                new ListItem("Please select", ""),
                new ListItem("test 1", "1"),
                new ListItem("test 2", "2")
            });
            Controls.Add(ddlTest);
        }

    results in this HTML being rendered:

        <select disabled="disabled" id="Properties_ddlTest" name="Properties$ddlTest">
            <option value="" selected="selected">Please select</option>
            <option value="1">test 1</option>
            <option value="2">test 2</option>
        </select>

    The problem occurs when I use Firebug to remove the "disabled" attribute and to change the selected option. On submission of the form and re-creation of the field, the newly generated control has the correct value by the end of OnLoad, but by OnPreRender it has assumed the identity of the submitted control and has been given the submitted form value. .NET seems to have no way of detecting the fact that the field was originally created in a disabled state and that the submitted value was faked. This is understandable, as there could be legitimate client-side functionality that would allow the disabled attribute to be removed. Is there some way, other than a brute-force approach, of detecting that this field's value should not have been changed? I see the brute-force approach as being something crap, like saving the correct value somewhere while still in OnLoad, and restoring the value in OnPreRender. As some fields have dependencies on others, that would be unacceptable to me.

  • Simplest distributed persistent key/value store that supports primary key range queries

    - by StaxMan
    I am looking for a properly distributed (i.e. not just sharded) and persistent (not bounded by available memory on a single node, or cluster of nodes) key/value ("nosql") store that supports range queries by primary key. So far the closest such system is Cassandra, which does all of the above. However, it adds support for other features that are not essential for me. So while I like it (and will consider using it, of course), I am trying to figure out if there might be other mature projects that implement what I need. Specifically, for me the only aspect of the value I need is to access it as a blob. For the key, however, I need range queries (as in, access values in order, limited by start and/or end values). While values can have structure, there is no need to use that structure for anything on the server side (I can do client-side data binding, flexible value/content types, etc). For added bonus, Cassandra-style storage (journaled, all sequential writes) seems quite optimal for my use case. To help filter out answers, I have investigated some alternatives within the general domain, like Voldemort (key/value, but no ordering) and CouchDB (just sharded, more batch-oriented); and I am aware of systems that are not quite distributed while otherwise qualifying (bdb variants, Tokyo Cabinet itself (not sure if Tyrant might qualify), redis (in-memory store only)).
