Search Results

Search found 7294 results on 292 pages for 'out parameters'.


  • Help choosing authentication method

    - by Dima
    I need to choose an authentication method for an application installed and integrated in the customer's environment. There are two types of environments - Windows and Linux/Unix. The application is user-based, no web stuff, pure Java. The requirement is to authenticate the users of my application against a customer-provided user base. Meaning, the customer installs my app, but uses his own users to grant or deny access to my app. Typical, right? I have three options to consider, and I need to pick the one which would be a) the most flexible, covering the most common modern environments, and b) take the least effort while staying robust and standard.

    Option (1) - Authenticate locally, managing user credentials in some local storage, e.g. a file. The customer would then add his users to my application and it would check the passwords. Simple, clumsy, but it would work. Customers would have to punch in every user they want to grant access to my app using some UI we would have to provide. Lots of work for me, a headache for the customer.

    Option (2) - Use LDAP authentication. Customers would tell my app where to look for users, and I would walk their directory, resolving names into user names and trying to bind with the found password. This is a better approach IMO, but more fragile, because I would have to walk an unknown directory structure and who knows if this will be permitted everywhere. It would also be harder to test, since there are many LDAP implementations out there; the last thing I want is drowning in this voodoo.

    Option (3) - Use plain Kerberos authentication. Customers would tell my app which realm (domain) and which KDC (key distribution center) to use. In an ideal world these two parameters would be all I need to set, while customers could use their own administration tools to configure the domain and KDC. My application would simply delegate the user credentials to this third party (using JAAS or Spring Security) and consider it a success when the third party is happy with them. I personally prefer #3, but I am not sure what surprises I might face. Would this cover Windows and *nix systems entirely? Is there another option to consider?
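
    To make option (3) concrete, here is a minimal sketch of delegating a username/password to Kerberos via JAAS. The login-entry name, the jaas.conf contents, and the system properties below are assumptions for illustration, not part of the question:

        // hedged sketch of option (3). Assumed setup (not from the question):
        // a JAAS entry named "AppLogin" in jaas.conf:
        //   AppLogin { com.sun.security.auth.module.Krb5LoginModule required; };
        // plus -Djava.security.krb5.realm=... and -Djava.security.krb5.kdc=...
        import javax.security.auth.callback.*;
        import javax.security.auth.login.LoginContext;
        import javax.security.auth.login.LoginException;

        public class KerberosCheck {
            public static boolean authenticate(final String user, final char[] pass) {
                try {
                    LoginContext lc = new LoginContext("AppLogin", new CallbackHandler() {
                        public void handle(Callback[] cbs) throws UnsupportedCallbackException {
                            for (Callback cb : cbs) {
                                if (cb instanceof NameCallback) ((NameCallback) cb).setName(user);
                                else if (cb instanceof PasswordCallback) ((PasswordCallback) cb).setPassword(pass);
                                else throw new UnsupportedCallbackException(cb);
                            }
                        }
                    });
                    lc.login();   // throws if the KDC rejects the credentials
                    lc.logout();
                    return true;
                } catch (LoginException e) {
                    return false; // bad password, unknown principal, unreachable KDC, clock skew...
                }
            }
        }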

    Read the article

  • function parameter used to store value

    - by user248247
    Hi, I have to define an interface. The API in my homework is stated below: int generate_codes(char * ssn, char * student_id); The int denotes 0 or 1 for pass or fail. student_id is an output param and should return a 6-digit id. ssn is a 9-digit input param; the school program will take SSNs and use my code to generate the student id. Now, from an API perspective, should I not be using const char * for both parameters? Should the student_id not be passed in by reference rather than by pointer? Can someone tell me how I can easily use the pointer in my test app (which uses my API) so that it prints a std::string from a char *? My app code looks something like: const char * ssn = "987098765"; const char * studnt_id = new char [7]; int value = -1; value = generate_codes(ssn,studnt_id); std::string test(studnt_id); std::cout<<"student id= "<<test<<" Pass/fail= "<<value<<std::endl; delete [] studnt_id; return 0; I basically got an error about << not being compatible with the right-hand side of the operand. When I changed the code to std::cout<<"student id= "<<test.c_str()<<" Pass/fail= "<<value<<std::endl; it worked, but I get garbage for the value. I am not sure how to get the value from the pointer. The value inside the function prints just fine, but when I try to print it outside of the function it prints garbage. Inside the above function I set the studnt_id like so: std::string str_studnt_id = studnt_id; Should that make the address of str_studnt_id point to the address of studnt_id, and thus any changes I make to the value it is pointing to should reflect outside the function?
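
    A sketch of what is likely going wrong: std::string str_studnt_id = studnt_id; makes a copy, so nothing written to that string reaches the caller's buffer; the callee has to write the characters, plus a terminating '\0', into student_id itself (without the terminator, printing gives garbage). Including <string> also brings in operator<< for std::string, which fixes the original compile error. The "first 6 SSN digits" rule below is an assumption for illustration only:

        // hedged sketch: write into the caller's buffer and null-terminate it
        #include <cstring>
        #include <iostream>
        #include <string>   // declares operator<< for std::string

        int generate_codes(const char* ssn, char* student_id) {
            if (ssn == 0 || std::strlen(ssn) != 9) return 1;  // fail
            std::strncpy(student_id, ssn, 6);                 // toy id rule, 6 chars
            student_id[6] = '\0';                             // without this: garbage
            return 0;                                         // pass
        }

        int main() {
            const char* ssn = "987098765";
            char student_id[7];                               // 6 digits + '\0', no new/delete needed
            int value = generate_codes(ssn, student_id);
            std::cout << "student id= " << student_id << " Pass/fail= " << value << std::endl;
            return 0;
        }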

    Read the article

  • Is Java assert broken?

    - by BlairHippo
    While poking around the questions, I recently discovered the assert keyword in Java. At first, I was excited. Something useful I didn't already know! A more efficient way for me to check the validity of input parameters! Yay learning! But then I took a closer look, and my enthusiasm was not so much "tempered" as "snuffed-out completely" by one simple fact: you can turn assertions off.* This sounds like a nightmare. If I'm asserting that I don't want the code to keep going if the input listOfStuff is null, why on earth would I want that assertion ignored? It sounds like if I'm debugging a piece of production code and suspect that listOfStuff may have been erroneously passed a null but don't see any logfile evidence of that assertion being triggered, I can't trust that listOfStuff actually got sent a valid value; I also have to account for the possibility that assertions may have been turned off entirely. And this assumes that I'm the one debugging the code. Somebody unfamiliar with assertions might see that and assume (quite reasonably) that if the assertion message doesn't appear in the log, listOfStuff couldn't be the problem. If your first encounter with assert was in the wild, would it even occur to you that it could be turned-off entirely? It's not like there's a command-line option that lets you disable try/catch blocks, after all. All of which brings me to my question (and this is a question, not an excuse for a rant! I promise!): What am I missing? Is there some nuance that renders Java's implementation of assert far more useful than I'm giving it credit for? Is the ability to enable/disable it from the command line actually incredibly valuable in some contexts? Am I misconceptualizing it somehow when I envision using it in production code in lieu of statements like if (listOfStuff == null) barf();? I just feel like there's something important here that I'm not getting. *Okay, technically speaking, they're actually off by default; you have to go out of your way to turn them on. But still, you can knock them out entirely.
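
    For reference, a minimal sketch of the toggle being described: the same check written both ways, with the command-line switch that controls the assert. Running "java -ea MyApp" (-enableassertions) turns assertions on; a plain "java MyApp" leaves them off, which is exactly why they cannot replace real argument checks:

        import java.util.List;

        class Guard {
            static void process(List<String> listOfStuff) {
                assert listOfStuff != null : "listOfStuff is null";   // skipped unless -ea is given
                if (listOfStuff == null)                              // always enforced
                    throw new IllegalArgumentException("listOfStuff is null");
                // ... proceed ...
            }
        }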

    Read the article

  • How to send hashed data in SOAP request body?

    - by understack
    I want to imitate the following request using Zend_Soap_Client for this WSDL. <SOAP-ENV:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:clr="http://schemas.microsoft.com/soap/encoding/clr/1.0" SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"> <SOAP-ENV:Header> <h3:__MethodSignature xsi:type="SOAP-ENC:methodSignature" xmlns:h3="http://schemas.microsoft.com/clr/soap/messageProperties" SOAP-ENC:root="1" xmlns:a2="http://schemas.microsoft.com/clr/ns/System.Collections">xsd:string a2:Hashtable</h3:__MethodSignature> </SOAP-ENV:Header> <SOAP-ENV:Body> <i4:ReturnDataSet id="ref-1" xmlns:i4="http://schemas.microsoft.com/clr/nsassem/Interface.IRptSchedule/Interface"> <sProc id="ref-5">BU</sProc> <ht href="#ref-6"/> </i4:ReturnDataSet> <a2:Hashtable id="ref-6" xmlns:a2="http://schemas.microsoft.com/clr/ns/System.Collections"> <LoadFactor>0.72</LoadFactor> <Version>1</Version> <Comparer xsi:null="1"/> <HashCodeProvider xsi:null="1"/> <HashSize>11</HashSize> <Keys href="#ref-7"/> <Values href="#ref-8"/> </a2:Hashtable> <SOAP-ENC:Array id="ref-7" SOAP-ENC:arrayType="xsd:anyType[1]"> <item id="ref-9" xsi:type="SOAP-ENC:string">@AppName</item> </SOAP-ENC:Array> <SOAP-ENC:Array id="ref-8" SOAP-ENC:arrayType="xsd:anyType[1]"> <item id="ref-10" xsi:type="SOAP-ENC:string">AAGENT</item> </SOAP-ENC:Array> </SOAP-ENV:Body> </SOAP-ENV:Envelope> It seems that somehow I have to send the hashed "ref-7" and "ref-8" arrays embedded inside the body? How can I do this? The function ReturnDataSet takes two parameters; how can I send the additional "ref-7" and "ref-8" array data? $client = new SoapClient($wsdl_url, array('soap_version' => SOAP_1_1)); $result = $client->ReturnDataSet("BU", $ht); I don't know how to set $ht so that the hashed data is sent as a separate body entry. Thanks.
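
    One avenue worth trying (a sketch, not a confirmed solution): PHP's ext/soap, which Zend_Soap_Client wraps, lets you force a specific encoded type on an argument with SoapVar. The element and type names below are taken from the captured request; whether the .NET remoting endpoint accepts this exact shape is an assumption:

        // hedged sketch: shape the second argument as an a2:Hashtable
        $ns = 'http://schemas.microsoft.com/clr/ns/System.Collections';
        $ht = new SoapVar(
            array('Keys' => array('@AppName'), 'Values' => array('AAGENT')),
            SOAP_ENC_OBJECT,
            'Hashtable',   // type name from the request above
            $ns
        );
        $result = $client->ReturnDataSet(new SoapParam('BU', 'sProc'),
                                         new SoapParam($ht, 'ht'));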

    Read the article

  • What's the correct way to use Cakephp urls?

    - by Pichan
    Hello all, it's my first post here :) I'm having some difficulties with dealing with urls and parameters. I've gone through the Router class API documentation over and over again and found nothing useful. First of all, I'd like to know if there is any 'universal' format in CakePHP (1.3) for handling urls. I'm currently handling all my urls as simple arrays (in the format that Router::url and $html->link accept) and it's easy as long as I only need to pass them as arguments to Cake's own methods. It usually gets tricky if I need something else. Mainly I'm having problems with converting string urls to the basic array format. Let's say I want to convert $arrayUrl to a string and then again into a url: $arrayUrl=array('controller'=>'SomeController','action'=>'someAction','someValue'); $url=Router::url($arrayUrl); //$url is now '/path/to/site/someController/someAction/someValue' $url=Router::normalize($url); //remove '/path/to/site' $parsed=Router::parse($url); /*$parsed is now Array( [controller] => someController [action] => someAction [named] => Array() [pass] => Array([0] => someValue) [plugin] => ) */ That seems an awful lot of code to do something as simple as converting between 2 core formats. Also, note that $parsed is still not in the same format as $arrayUrl. Of course I could tweak $parsed manually, and actually I've done that a few times as a quick patch, but I'd like to get to the bottom of this. I also noticed that when using prefix routing, $this->params in the controller has the prefix embedded in the action (i.e. [action] = 'admin_edit') while the result of Router::parse() does not. Both of course have the prefix in its own key. To summarize, how do I convert a url between any of these 3 (or 4, if you include the prefix thing) mentioned formats the right way? Of course it would be easy to hack my way through this, but I'd still like to believe that Cake is being developed by a bunch of people who have a lot more experience and insight than me, so I'm guessing there's a good reason for this "perceived misbehavior". I've tried to present my problem as well as I can, but due to my rusty English skills I had to take a few detours :) I'll explain more if needed.
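
    Pending a built-in way, the tweak described above can at least live in one helper instead of being repeated. A sketch (function name made up) that massages Router::parse() output back into the array format Router::url() and $html->link() accept:

        // hedged helper: merge 'pass' back as numeric keys, keep named params
        function parsedToUrlArray($parsed) {
            $url = array(
                'controller' => $parsed['controller'],
                'action'     => $parsed['action'],
                'plugin'     => $parsed['plugin'],
            );
            foreach ((array)$parsed['pass'] as $value) {
                $url[] = $value;             // positional args become numeric keys
            }
            foreach ((array)$parsed['named'] as $key => $value) {
                $url[$key] = $value;         // named params keep their keys
            }
            return $url;
        }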

    Read the article

  • Logging in with WebFinger and OpenID

    - by Ryan
    I am messing around with WebFinger and trying to create a small Rails app that enables a user to log in using nothing but their WebFinger account. I can successfully finger myself, and I get back an XRD file with the following snippet: Link rel="http://specs.openid.net/auth/2.0/provider" href="http://www.google.com/profiles/{redacted}"/ Which, to me, reads, "I have an OpenID 2.0 login at the url: http://www.google.com/profiles/{redacted}". But when I try to use that URL to log in, I get the following error: OpenID::DiscoveryFailure (Failed to fetch identity URL http://www.google.com/profiles/{redacted} : Error encountered in redirect from http://www.google.com/profiles/{redacted}: Error fetching /profiles/{Redacted}: Connection refused - connect(2)): When I replace the profile URL with 'https://www.google.com/accounts/o8/id', the login works perfectly. Here is the code that I am using (I'm using RedFinger as a plugin, and JanRain's ruby-openid, installed without the gem): require "openid" require 'openid/store/filesystem.rb' class SessionsController < ApplicationController def new @session = Session.new #render a textbox requesting a webfinger address, and a submit button end def create ####################### # # Pay Attention to this section right here # ####################### #use given webfinger address to retrieve openid login finger = Redfinger.finger(params[:session][:webfinger_address]) openid_url = finger.open_id.first.to_s #openid_url is now: http://www.google.com/profiles/{redacted} #Get needed info about the acquired OpenID login file_store = OpenID::Store::Filesystem.new("./noncedir/") consumer = OpenID::Consumer.new(session,file_store) response = consumer.begin(openid_url) #ERROR HAPPENS HERE #send user to OpenID login for verification redirect_to response.redirect_url('http://localhost:3000/','http://localhost:3000/sessions/complete') end def complete #interpret return parameters file_store = OpenID::Store::Filesystem.new("./noncedir/") consumer = OpenID::Consumer.new(session,file_store) response = consumer.complete params case response.status when OpenID::SUCCESS session[:openid] = response.identity_url #redirect somewhere here end end end Is it possible for me to use the URL I received from my WebFinger to log in with OpenID?

    Read the article

  • Rails: AJAX Controller JS not firing...

    - by neezer
    I'm having an issue with one of my controller's AJAX functionality. Here's what I have: class PhotosController < ApplicationController # ... def create @photo = Photo.new(params[:photo]) @photo.image_content_type = MIME::Types.type_for(@photo.image_file_name).to_s @photo.image_width = Paperclip::Geometry.from_file(params[:photo][:image]).width.to_i @photo.image_height = Paperclip::Geometry.from_file(params[:photo][:image]).height.to_i @photo.save! respond_to do |format| format.js end end # ... end This is called through a POST request sent by this code: $(function() { // add photos link $('a.add-photos-link').colorbox({ overlayClose: false, onComplete: function() { wire_add_photo_modal(); } }); function wire_add_photo_modal() { <% session_key = ActionController::Base.session_options[:key] %> $('#upload_photo').uploadify({ uploader: '/swf/uploadify.swf', script: '/photos', cancelImg: '/images/buttons/cancel.png', buttonText: 'Upload Photo(s)', auto: true, queueID: 'queue', fileDataName: 'photo[image]', scriptData: { '<%= session_key %>': '<%= u cookies[session_key] %>', commit: 'Adding Photo', controller: 'photos', action: 'create', '_method': 'post', 'photo[gallery_id]': $('#gallery_id').val(), 'photo[user_id]': $('#user_id').val(), authenticity_token: encodeURIComponent('<%= u form_authenticity_token if protect_against_forgery? %>') }, multi: true }); } }); Finally, I have my response code in app/views/photos/create.js.erb: alert('photo added!'); My log file shows that the request was successful (the photo was successfully uploaded), and it even says that it rendered the create action, yet I never get the alert. My browser shows NO javascript errors. Here's the log AFTER a request from the above POST request is submitted: Processing PhotosController#create (for 127.0.0.1 at 2010-03-16 14:35:33) [POST] Parameters: {"Filename"=>"tumblr_kx74k06IuI1qzt6cxo1_400.jpg", "photo"=>{"user_id"=>"1", "image"=>#<File:/tmp/RackMultipart20100316-54303-7r2npu-0>}, "commit"=>"Adding Photo", "_edited_session"=>"edited", "folder"=>"/kakagiloon/", "authenticity_token"=>"edited", "action"=>"create", "_method"=>"post", "Upload"=>"Submit Query", "controller"=>"photos"} [paperclip] Saving attachments. [paperclip] saving /public/images/assets/kakagiloon/thumbnail/tumblr_kx74k06IuI1qzt6cxo1_400.jpg [paperclip] saving /public/images/assets/kakagiloon/profile/tumblr_kx74k06IuI1qzt6cxo1_400.jpg [paperclip] saving /public/images/assets/kakagiloon/original/tumblr_kx74k06IuI1qzt6cxo1_400.jpg Rendering photos/create Completed in 248ms (View: 1, DB: 6) | 200 OK [http://edited.local/photos] NOTE: I edited out all the SQL statements and I put "edited" in place of sensitive info. What gives? Why aren't I getting my alert();? Please let me know if you need anymore info to help me solve this issue! Thanks.
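
    One detail worth checking here: the upload is performed by the Flash movie, and Flash only hands the response text back to the page, it never executes it, so create.js.erb can render successfully without the alert ever running. A sketch of handling the response client-side instead (uploadify 2.x callback signature assumed):

        // hedged sketch: raise the alert from uploadify's onComplete handler
        $('#upload_photo').uploadify({
            // ... same options as in the question ...
            onComplete: function(event, queueID, fileObj, response, data) {
                eval(response);   // runs the JS rendered by create.js.erb
                return true;
            }
        });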

    Read the article

  • fileToString keeps on returning ""

    - by karikari
    I managed to get this code to compile without error. But somehow it does not return the strings that I wrote inside file1.txt and file.txt, whose paths I pass through str1 and str2. My objective is to use this open source library to measure the similarity between the strings contained inside 2 text files. Inside its Javadoc, it states that ... public static java.lang.StringBuffer fileToString(java.io.File f) private call to load a file and return its content as a string. Parameters: f - a file for which to load its content Returns: a string containing the files contents or "" if empty or not present Here is my modified code trying to use the FileLoader function, but it fails to return the strings inside the files. The end result keeps on returning me "". I do not know where my fault is: package uk.ac.shef.wit.simmetrics; import java.io.File; import uk.ac.shef.wit.simmetrics.similaritymetrics.*; import uk.ac.shef.wit.simmetrics.utils.*; public class SimpleExample { public static void main(final String[] args) { if(args.length != 2) { usage(); } else { String str1 = "arg[0]"; String str2 = "arg[1]"; File objFile1 = new File(str1); File objFile2 = new File(str2); FileLoader obj1 = new FileLoader(); FileLoader obj2 = new FileLoader(); str1 = obj1.fileToString(objFile1).toString(); str2 = obj2.fileToString(objFile2).toString(); System.out.println(str1); System.out.println(str2); AbstractStringMetric metric = new MongeElkan(); //this single line performs the similarity test float result = metric.getSimilarity(str1, str2); //outputs the results outputResult(result, metric, str1, str2); } } private static void outputResult(final float result, final AbstractStringMetric metric, final String str1, final String str2) { System.out.println("Using Metric " + metric.getShortDescriptionString() + " on strings \"" + str1 + "\" & \"" + str2 + "\" gives a similarity score of " + result); } private static void usage() { System.out.println("Performs a rudimentary string metric comparison from the arguments given.\n\tArgs:\n\t\t1) String1 to compare\n\t\t2)String2 to compare\n\n\tReturns:\n\t\tA standard output (command line of the similarity metric with the given test strings, for more details of this simple class please see the SimpleExample.java source file)"); } }
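
    The likely culprit is in the two String initializations: the quotes make the literal texts "arg[0]" and "arg[1]" the file names, so FileLoader opens files that do not exist and, per the Javadoc quoted above, returns "". A sketch of the fix:

        // hedged sketch: read the actual command-line arguments instead of the
        // literal strings "arg[0]" / "arg[1]" (which name non-existent files)
        String str1 = args[0];   // e.g. path to file1.txt
        String str2 = args[1];   // e.g. path to the second file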

    Read the article

  • How to best integrate generated code

    - by Arne
    I am evaluating the use of code generation for my flight simulation project. More specifically, there is a requirement to allow "the average engineer" (no offense, I am one myself) to define the differential equations that describe the dynamic system in a more natural syntax than C++ provides. The idea is to devise an abstract descriptor language that can be easily understood and edited, and to generate C++ code from it. This descriptor is supplied by the modeling engineer and used by the ones implementing and maintaining the simulation environment to generate code. I've got something like this in mind: model Aircraft has state x1, x2; state x3; input double : u; input bool : flag1, flag2; algebraic double : x1x2; model Engine : tw1, tw2; model Gear : gear; model ISA : isa; trim routine HorizontalFlight; trim routine OnGround, General; constant double : c1, c2; constant int : ci1; begin differential equations x1' = x1 + 2.*x2; x2' = x2 + x1x2; begin algebraic equations x1x2 = x1*x2 + x1'; end model It is important to retain the flexibility of the C language; thus the descriptor language is meant to only define certain parts of the definition and implementation of the model class. This way one engineer provides the model in the form of the descriptor language as exemplified above, and the maintenance engineer will add all the code to read parameters from files, start/stop/pause the execution of the simulation, and decide how a concrete object gets instantiated. My first thought is to generate two files from the descriptor file: one .h file containing declarations and one .cpp file containing the implementation of certain functions. These then need to be #included at appropriate places: [File Aircraft.h] class Aircraft { public: Aircraft(..); // hand-written constructor void ReadParameters(string &file_name); // hand-written private: /* more hand-written boiler-plate code */ /* generated declarations follow */ #include "Aircraft.generated.decl" }; [File Aircraft.cpp] Aircraft::Aircraft(..) { /* hand-written constructor implementation */ } /* more hand-written implementation code */ /* generated implementation code follows */ #include "Aircraft.generated.impl" Any thoughts or suggestions?

    Read the article

  • Exception opening TAdoDataset: Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another

    - by Dave Falkner
    I've been trying to debug the following problem for several weeks now - this method is called from several places within the same datamodule, but this exception (from the subject line of this post) only occurs when integers for a certain purpose (pickup orders vs. orders that we ship through a carrier) are used - and don't ask me how the application can tell the difference between one integer's purpose and another! Furthermore, I cannot duplicate this issue on my machine - the error occurs on a warehouse machine but not my own development machine, even when working with the same production database. I have suspected an MDAC version conflict between the two machines, but have run a version checker and confirmed that both machines are running 2.8, and additionally have confirmed this by logging the TAdoDataset's .Version property at runtime. function TdmESShip.SecondaryID(const PrimaryID : Integer ): String; begin try with qESPackage2 do begin if Active then Close; LogMessage('-----------------------------------'); LogMessage('Version: ' + FConnection.Version); LogMessage('DB Info: ' + FConnection.Properties['Initial Catalog'].Value + ' ' + FConnection.Properties['Data Source'].Value); LogMessage('Setting the parameter.'); Parameters.ParamByName('ParameterName').Value := PrimaryID; LogMessage('Done setting the parameter.'); Open; Ninety-nine times out of 100 this logging code logs a successful operation as follows: Version: 2.8 DB Info: (database name and instance) Setting the parameter. Done setting the parameter. Opened the dataset. But then whenever a "pickup" order is processed, this exception gets thrown whenever the dataset is opened: Version: 2.8 DB Info: (database name and instance) Setting the parameter. Done setting the parameter. GetESPackageID() threw an exception. Type: EOleException, Message: Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another Error: Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another for packageID 10813711 I've tried eliminating the parameter and have built the commandtext for this dataset programmatically, suspecting that some part of the TParameter's configuration might be out of whack, but the same error occurs under the same circumstances. I've tried every combination of TParameter properties that I can think of - this is the millionth TParameter I've created for my millionth dataset, and I've never encountered this error. I've even created a second dataset from scratch and removed all references to the original dataset in case some property of the original dataset in the .dfm might be corrupted, but the same error occurs under the same circumstances. The commandtext for this dataset is a simple select ValueA from TableName where ValueB = @ParameterB I'm about ready to do something extreme, such as writing a web service to look these values up - it feels right now as though I could destroy my machine, rebuild it, rewrite this entire application from scratch, and the application would still know to throw an exception whenever I try to look up a secondary value from a primary value, but only for pickup orders, and only from the one machine in the warehouse, but I'm probably missing something simple. So, any help anyone could provide would be greatly appreciated.

    Read the article

  • How to set a key binding to make Emacs as transparent/opaque as I want?

    - by Vivi
    I want to have a command in Emacs to make it as opaque/transparent as I want (refer to the fabulous question that pointed out that transparency is possible in Emacs, and the EmacsWiki page linked there which has the code I am using below). The EmacsWiki code sets "C-c t" to toggle the previously set transparency on and off: ;;(set-frame-parameter (selected-frame) 'alpha '(<active> [<inactive>])) (set-frame-parameter (selected-frame) 'alpha '(85 50)) (add-to-list 'default-frame-alist '(alpha 85 50)) (eval-when-compile (require 'cl)) (defun toggle-transparency () (interactive) (if (/= (cadr (find 'alpha (frame-parameters nil) :key #'car)) 100) (set-frame-parameter nil 'alpha '(100 100)) (set-frame-parameter nil 'alpha '(85 60)))) (global-set-key (kbd "C-c t") 'toggle-transparency) What I would like to do is to be able to choose the % transparency when I am in Emacs. If possible, I would like a command where I type for example "C-c t N" (where N is the % opaqueness) for the active frame, and then "M-c t N" for the inactive window. If that can't be done, then maybe a command where, if I type "C-c t", it asks me for the number which gives the opaqueness of the active window (and the same for the inactive window using "M-c t"). Thanks in advance for your time :) Below are just some comments that are not important for answering the question, if you are not interested: I really want this because when I told my supervisor I was learning Emacs he said TeXShop is much better and that I am using software from the 80's. I told him about the wonders of Emacs and he said TeXShop has all of it and more. I matched everything he showed me except for the transparency (though he couldn't match the preview inside Emacs from preview-latex). I found the transparency thing by chance, and now I want to show him Emacs rules! I imagine this will be a piece of cake for some of you, and even though I could get it done if I spent enough time trying to learn Lisp or reading around, I am not a programmer and I have only been using Emacs and a Mac for a week. I am lost already as it is! So thanks in advance for your time and help - I will learn Lisp eventually!
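
    Something close to this can be had with interactive's "N" code, which uses the numeric prefix argument when one is given and prompts otherwise, so C-u 75 C-c t sets 75% opaqueness directly and a bare C-c t asks for the number. A sketch (function names are made up; it rebinds C-c t away from the toggle, and uses C-c T for the inactive value since M-c is normally capitalize-word):

        (defun set-active-transparency (alpha)
          "Set the opaqueness ALPHA (0-100) of the active frame."
          (interactive "NActive frame opaqueness (0-100): ")
          (let* ((cur (frame-parameter nil 'alpha))
                 (inactive (if (consp cur) (cadr cur) (or cur 100))))
            (set-frame-parameter nil 'alpha (list alpha inactive))))

        (defun set-inactive-transparency (alpha)
          "Set the opaqueness ALPHA (0-100) of inactive frames."
          (interactive "NInactive frame opaqueness (0-100): ")
          (let* ((cur (frame-parameter nil 'alpha))
                 (active (if (consp cur) (car cur) (or cur 100))))
            (set-frame-parameter nil 'alpha (list active alpha))))

        (global-set-key (kbd "C-c t") 'set-active-transparency)
        (global-set-key (kbd "C-c T") 'set-inactive-transparency)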

    Read the article

  • Restoring multiple database backups in a transaction

    - by Raghu Dodda
    I wrote a stored procedure that restores as set of the database backups. It takes two parameters - a source directory and a restore directory. The procedure looks for all .bak files in the source directory (recursively) and restores all the databases. The stored procedure works as expected, but it has one issue - if I uncomment the try-catch statements, the procedure terminates with the following error: error_number = 3013 error_severity = 16 error_state = 1 error_message = DATABASE is terminating abnormally. The weird part is sometimes (it is not consistent) the restore is done even if the error occurs. The procedure: create proc usp_restore_databases ( @source_directory varchar(1000), @restore_directory varchar(1000) ) as begin declare @number_of_backup_files int -- begin transaction -- begin try -- step 0: Initial validation if(right(@source_directory, 1) <> '\') set @source_directory = @source_directory + '\' if(right(@restore_directory, 1) <> '\') set @restore_directory = @restore_directory + '\' -- step 1: Put all the backup files in the specified directory in a table -- declare @backup_files table ( file_path varchar(1000)) declare @dos_command varchar(1000) set @dos_command = 'dir ' + '"' + @source_directory + '*.bak" /s/b' /* DEBUG */ print @dos_command insert into @backup_files(file_path) exec xp_cmdshell @dos_command delete from @backup_files where file_path IS NULL select @number_of_backup_files = count(1) from @backup_files /* DEBUG */ select * from @backup_files /* DEBUG */ print @number_of_backup_files -- step 2: restore each backup file -- declare backup_file_cursor cursor for select file_path from @backup_files open backup_file_cursor declare @index int; set @index = 0 while(@index < @number_of_backup_files) begin declare @backup_file_path varchar(1000) fetch next from backup_file_cursor into @backup_file_path /* DEBUG */ print @backup_file_path -- step 2a: parse the full backup file name to get the DB file name. 
declare @db_name varchar(100) set @db_name = right(@backup_file_path, charindex('\', reverse(@backup_file_path)) -1) -- still has the .bak extension /* DEBUG */ print @db_name set @db_name = left(@db_name, charindex('.', @db_name) -1) /* DEBUG */ print @db_name set @db_name = lower(@db_name) /* DEBUG */ print @db_name -- step 2b: find out the logical names of the mdf and ldf files declare @mdf_logical_name varchar(100), @ldf_logical_name varchar(100) declare @backup_file_contents table ( LogicalName nvarchar(128), PhysicalName nvarchar(260), [Type] char(1), FileGroupName nvarchar(128), [Size] numeric(20,0), [MaxSize] numeric(20,0), FileID bigint, CreateLSN numeric(25,0), DropLSN numeric(25,0) NULL, UniqueID uniqueidentifier, ReadOnlyLSN numeric(25,0) NULL, ReadWriteLSN numeric(25,0) NULL, BackupSizeInBytes bigint, SourceBlockSize int, FileGroupID int, LogGroupGUID uniqueidentifier NULL, DifferentialBaseLSN numeric(25,0) NULL, DifferentialBaseGUID uniqueidentifier, IsReadOnly bit, IsPresent bit ) insert into @backup_file_contents exec ('restore filelistonly from disk=' + '''' + @backup_file_path + '''') select @mdf_logical_name = LogicalName from @backup_file_contents where [Type] = 'D' select @ldf_logical_name = LogicalName from @backup_file_contents where [Type] = 'L' /* DEBUG */ print @mdf_logical_name + ', ' + @ldf_logical_name -- step 2c: restore declare @mdf_file_name varchar(1000), @ldf_file_name varchar(1000) set @mdf_file_name = @restore_directory + @db_name + '.mdf' set @ldf_file_name = @restore_directory + @db_name + '.ldf' /* DEBUG */ print 'mdf_logical_name = ' + @mdf_logical_name + '|' + 'ldf_logical_name = ' + @ldf_logical_name + '|' + 'db_name = ' + @db_name + '|' + 'backup_file_path = ' + @backup_file_path + '|' + 'restore_directory = ' + @restore_directory + '|' + 'mdf_file_name = ' + @mdf_file_name + '|' + 'ldf_file_name = ' + @ldf_file_name restore database @db_name from disk = @backup_file_path with move @mdf_logical_name to @mdf_file_name, move @ldf_logical_name to @ldf_file_name -- step 2d: iterate set @index = @index + 1 end close backup_file_cursor deallocate backup_file_cursor -- end try -- begin catch -- print error_message() -- rollback transaction -- return -- end catch -- -- commit transaction end Does anybody have any ideas why this might be happening? Another question: is the transaction code useful ? i.e., if there are 2 databases to be restored, will SQL Server undo the restore of one database if the second restore fails?
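
    On the transaction questions: SQL Server does not allow RESTORE inside an explicit or implicit transaction at all, which matches the 3013 abort appearing only when the transaction lines are uncommented, and which also explains why the restore sometimes completes anyway. It likewise means there is no way to make two restores atomic as a pair; each RESTORE succeeds or fails on its own, with no cross-restore undo. A sketch of the error handling without the wrapping transaction:

        -- hedged sketch: TRY/CATCH per restore, no BEGIN/ROLLBACK TRANSACTION around it;
        -- a failed backup file is reported and the loop moves on to the next one
        begin try
            restore database @db_name
                from disk = @backup_file_path
                with move @mdf_logical_name to @mdf_file_name,
                     move @ldf_logical_name to @ldf_file_name
        end try
        begin catch
            print 'restore of ' + @db_name + ' failed: ' + error_message()
        end catch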

    Read the article

  • Ajax post request, an object that includes an array and other objects, can't be parsed correctly into parameters

    - by Waheedi
    What I want is to get a proper parameter hash; if you look at the parameters being logged below, you can tell there is something wrong. My JavaScript (the runMe function runs first): Ajax: function() { var xmlhttp, bComplete = false; try { xmlhttp = new ActiveXObject("Msxml2.XMLHTTP"); } catch (e) { try { xmlhttp = new ActiveXObject("Microsoft.XMLHTTP"); } catch (e) { try { xmlhttp = new XMLHttpRequest(); } catch (e) { xmlhttp = false; }}} if (!xmlhttp) return null; this.connect = function(sURL, sMethod, sVars, fnDone) { if (!xmlhttp) return false; bComplete = false; sMethod = sMethod.toUpperCase(); try { if (sMethod == "GET") { xmlhttp.open(sMethod, sURL+"?"+sVars, true); sVars = ""; } else { xmlhttp.open(sMethod, sURL); xmlhttp.setRequestHeader("Method", "POST "+sURL+" HTTP/1.1"); xmlhttp.setRequestHeader("Content-Type", "application/x-www-form-urlencoded"); xmlhttp.setRequestHeader("Content-length", sVars.length); } xmlhttp.onreadystatechange = function(){ if (xmlhttp.readyState == 4 && !bComplete) { bComplete = true; fnDone(xmlhttp); }}; xmlhttp.send(sVars); } catch(z) { return false; } return true; }; return this; }, tOrigin: function(origin){ this.origin = origin; }, tObject: function(origins,url,apik){ this.origins=origins; //this is an array this.url=url; this.apik=apik; this.host= "http://localhost:3000/";//window.location.hostname; } runMe: function(){ var t = new tObject(['this','word','word me please','and me please','word','word','okay','word','go','go'],window.location.href,"helloapik"); // console.log(t); ajax = new Ajax(); ajax.connect("http://localhost:3000/","POST",JSON.stringify(t), callBackFunc) } This is what I'm getting in my Rails server log: Parameters: {"{\"origins\":"={"{\"origin\":\"this\"},{\"origin\":\"word\"},{\"origin\":\"word me please\"},{\"origin\":\"and me please\"},{\"origin\":\"word\"},{\"origin\":\"word\"},{\"origin\":\"word\"},{\"origin\":\"okay\"},{\"origin\":\"word\"},{\"origin\":\"go\"},{\"origin\":\"go\"}"={",\"url\":\"file:///Users/waheed/Desktop/untitled.html\",\"apik\":\"helloapik\",\"host\":\"http://localhost:3000/\"}"=nil}}}
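
    The mangled keys come from sending a raw JSON string under a Content-Type of application/x-www-form-urlencoded: Rails splits the JSON text on = and & as if it were form data. A sketch of one way out (the parameter name "payload" is an assumption): URL-encode the JSON under a single field and decode it server-side:

        // hedged sketch: wrap the JSON in one urlencoded field so Rails leaves it intact
        var t = new tObject(['this', 'word'], window.location.href, "helloapik");
        var body = "payload=" + encodeURIComponent(JSON.stringify(t));
        ajax.connect("http://localhost:3000/", "POST", body, callBackFunc);
        // Rails side (assumed): ActiveSupport::JSON.decode(params[:payload])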

    Read the article

  • C++ type-checking at compile-time

    - by Masterofpsi
    Hi, all. I'm pretty new to C++, and I'm writing a small library (mostly for my own projects) in C++. In the process of designing a type hierarchy, I've run into the problem of defining the assignment operator. I've taken the basic approach that was eventually reached in this article, which is that for every class MyClass in a hierarchy derived from a class Base you define two assignment operators like so: class MyClass: public Base { public: MyClass& operator =(MyClass const& rhs); virtual MyClass& operator =(Base const& rhs); }; // automatically gets defined, so we make it call the virtual function below MyClass& MyClass::operator =(MyClass const& rhs); { return (*this = static_cast<Base const&>(rhs)); } MyClass& MyClass::operator =(Base const& rhs); { assert(typeid(rhs) == typeid(*this)); // assigning to different types is a logical error MyClass const& casted_rhs = dynamic_cast<MyClass const&>(rhs); try { // allocate new variables Base::operator =(rhs); } catch(...) { // delete the allocated variables throw; } // assign to member variables } The part I'm concerned with is the assertion for type equality. Since I'm writing a library, where assertions will presumably be compiled out of the final result, this has led me to go with a scheme that looks more like this: class MyClass: public Base { public: operator =(MyClass const& rhs); // etc virtual inline MyClass& operator =(Base const& rhs) { assert(typeid(rhs) == typeid(*this)); return this->set(static_cast<Base const&>(rhs)); } private: MyClass& set(Base const& rhs); // same basic thing }; But I've been wondering if I could check the types at compile-time. I looked into Boost.TypeTraits, and I came close by doing BOOST_MPL_ASSERT((boost::is_same<BOOST_TYPEOF(*this), BOOST_TYPEOF(rhs)>));, but since rhs is declared as a reference to the parent class and not the derived class, it choked. Now that I think about it, my reasoning seems silly -- I was hoping that since the function was inline, it would be able to check the actual parameters themselves, but of course the preprocessor always gets run before the compiler. But I was wondering if anyone knew of any other way I could enforce this kind of check at compile-time.
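
    As far as I can tell this check cannot move to compile time: through a Base const&, the dynamic type of rhs is runtime information, so no static assertion (BOOST_MPL_ASSERT, BOOST_TYPEOF, etc.) can see it, and the typeid comparison is already the right tool. What can be generated at compile time is the boilerplate itself, e.g. with a CRTP mixin so the check lives in one place. A sketch (names made up, and it incidentally avoids the stray semicolons before the function bodies in the snippet above):

        #include <cassert>
        #include <typeinfo>

        class Base {
        public:
            virtual ~Base() {}
            virtual Base& operator=(Base const&) { return *this; }
        };

        // CRTP mixin: writes the checked operator= once instead of per derived class
        template <class Derived>
        class Assignable : public Base {
        public:
            virtual Base& operator=(Base const& rhs) {
                assert(typeid(rhs) == typeid(Derived));   // still runtime, unavoidably
                static_cast<Derived*>(this)->assign(dynamic_cast<Derived const&>(rhs));
                return *this;
            }
        };

        class MyClass : public Assignable<MyClass> {
        public:
            MyClass& operator=(MyClass const& rhs) {      // usual same-type assignment
                assign(rhs);
                return *this;
            }
            void assign(MyClass const& rhs) { /* member-wise copy */ }
        };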

    Read the article

  • What to Return? Error String, Bool with Error String Out, or Void with Exception

    - by Ranger Pretzel
    I spend most of my time in C# and am trying to figure out which is the best practice for handling an exception and cleanly return an error message from a called method back to the calling method. For example, here is some ActiveDirectory authentication code. Please imagine this Method as part of a Class (and not just a standalone function.) bool IsUserAuthenticated(string domain, string user, string pass, out errStr) { bool authentic = false; try { // Instantiate Directory Entry object DirectoryEntry entry = new DirectoryEntry("LDAP://" + domain, user, pass); // Force connection over network to authenticate object nativeObject = entry.NativeObject; // No exception thrown? We must be good, then. authentic = true; } catch (Exception e) { errStr = e.Message().ToString(); } return authentic; } The advantages of doing it this way are a clear YES or NO that you can embed right in your If-Then-Else statement. The downside is that it also requires the person using the method to supply a string to get the Error back (if any.) I guess I could overload this method with the same parameters minus the "out errStr", but ignoring the error seems like a bad idea since there can be many reasons for such a failure... Alternatively, I could write a method that returns an Error String (instead of using "out errStr") in which a returned empty string means that the user authenticated fine. string AuthenticateUser(string domain, string user, string pass) { string errStr = ""; try { // Instantiate Directory Entry object DirectoryEntry entry = new DirectoryEntry("LDAP://" + domain, user, pass); // Force connection over network to authenticate object nativeObject = entry.NativeObject; } catch (Exception e) { errStr = e.Message().ToString(); } return errStr; } But this seems like a "weak" way of doing things. Or should I just make my method "void" and just not handle the exception so that it gets passed back to the calling function? void AuthenticateUser(string domain, string user, string pass) { // Instantiate Directory Entry object DirectoryEntry entry = new DirectoryEntry("LDAP://" + domain, user, pass); // Force connection over network to authenticate object nativeObject = entry.NativeObject; } This seems the most sane to me (for some reason). Yet at the same time, the only real advantage of wrapping those 2 lines over just typing those 2 lines everywhere I need to authenticate is that I don't need to include the "LDAP://" string. The downside with this way of doing it is that the user has to put this method in a try-catch block. Thoughts? Is there another way of doing this that I'm not thinking of?
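
    A common middle ground here is the Try-pattern used by int.TryParse and Dictionary.TryGetValue: bool plus an out error for expected failures, exceptions for truly exceptional ones. The sketch below also repairs two small slips in the snippets above: the out parameter needs a type (out string errStr), and Exception.Message is a property, not a method:

        // hedged sketch of the Try-pattern variant
        using System;
        using System.DirectoryServices;

        public class AdAuthenticator
        {
            public bool TryAuthenticate(string domain, string user, string pass, out string error)
            {
                error = null;
                try
                {
                    DirectoryEntry entry = new DirectoryEntry("LDAP://" + domain, user, pass);
                    object nativeObject = entry.NativeObject; // forces the bind over the network
                    return true;
                }
                catch (Exception e)
                {
                    error = e.Message;
                    return false;
                }
            }
        }

        // caller: if (!auth.TryAuthenticate(domain, user, pass, out err)) Log(err);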

    Read the article

  • What is `objc_msgSend_fixup`, exactly?

    - by Luis Antonio Botelho O. Leite
    I'm messing around with the Objective-C runtime, trying to compile objective-c code without linking it against libobjc, and I'm having some segmentation fault problems with a program, so I generated an assembly file from it. I think it's not necessary to show the whole assembly file. At some point of my main function, I've got the following line (which, by the way, is the line after which I get the seg fault): callq *l_objc_msgSend_fixup_alloc and here is the definition for l_objc_msgSend_fixup_alloc: .hidden l_objc_msgSend_fixup_alloc # @"\01l_objc_msgSend_fixup_alloc" .type l_objc_msgSend_fixup_alloc,@object .section "__DATA, __objc_msgrefs, coalesced","aw",@progbits .weak l_objc_msgSend_fixup_alloc .align 16 l_objc_msgSend_fixup_alloc: .quad objc_msgSend_fixup .quad L_OBJC_METH_VAR_NAME_ .size l_objc_msgSend_fixup_alloc, 16 I've reimplemented objc_msgSend_fixup as a function (id objc_msgSend_fixup(id self, SEL op, ...)) which returns nil (just to see what happens), but this function isn't even being called (the program crashes before calling it). So, my question is, what is callq *l_objc_msgSend_fixup_alloc supposed to do and what is objc_msgSend_fixup (after l_objc_msgSend_fixup_alloc:) supposed to be (a function or an object)? Edit To better explain, I'm not linking my source file against the objc library. What I'm trying to do is implement some parts of the libray, just to see how it works. Here is an approach of what I've done: #include <stdio.h> #include <objc/runtime.h> @interface MyClass { } +(id) alloc; @end @implementation MyClass +(id) alloc { // alloc the object return nil; } @end id objc_msgSend_fixup(id self, SEL op, ...) { printf("Calling objc_msgSend_fixup()...\n"); // looks for the method implementation for SEL in self's vtable return nil; // Since this is just a test, this function doesn't need to do that } int main(int argc, char *argv[]) { MyClass *m; m = [MyClass alloc]; // At this point, according to the assembly code generated // objc_msgSend_fixup should be called. So, the program should, at least, print // "Calling objc_msgSend_fixup()..." on the screen, but it crashes before // objc_msgSend_fixup() is called... return 0; } If the runtime needs to access the object's vtable to find the correct method to call, what is the function which actually does this? I think it is objc_msgSend_fixup, in this case. So, when objc_msgSend_fixup is called, it receives an object as one of its parameters, and, if this object hasn't been initialized, the function fails. So, I've implemented my own version of objc_msgSend_fixup. According to the assembly source above, it should be called. It doesn't matter if the function is actually looking for the implementation of the selector passed as parameter. I just want objc_msgSend_lookup to be called. But, it's not being called, that is, the function that looks for the object's data is not even being called, instead of being called and cause a fault (because it returns a nil (which, by the way, doesn't matter)). The program seg fails before objc_msgSend_lookup is called...
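
    For what it's worth, a sketch of what that symbol appears to be, based on the non-fragile Objective-C ABI (the struct name below is illustrative): l_objc_msgSend_fixup_alloc is not a function but a two-slot "message ref", and callq *label calls through its first quad. At such call sites the second argument is a pointer to the ref itself rather than a plain SEL, which a hand-rolled objc_msgSend_fixup has to account for:

        /* hedged sketch -- layout implied by the two .quad directives above */
        struct message_ref {
            void *imp;   /* initially objc_msgSend_fixup; the runtime patches it */
            void *sel;   /* L_OBJC_METH_VAR_NAME_, i.e. "alloc" */
        };
        /* the call site effectively does:
           ((id (*)(id, struct message_ref *))ref->imp)(receiver, ref);
           so a replacement fixup routine receives (id, struct message_ref *),
           not (id, SEL, ...) */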

    Read the article

  • Hibernate Communications Link Failure in Hibernate Based Java Servlet application powered by MySQL

    - by Vatsala
    Let me describe my question - I have a Java application with Hibernate as the DB interfacing layer over MySQL. I get the communications link failure error in my application. The occurrence of this error is a very specific case: I get it when I leave the MySQL server unattended for more than approximately 6 hours (i.e. when no queries are issued to MySQL for more than approximately 6 hours). I am pasting a top 'exception' level description below, and adding a pastebin link for a detailed stacktrace description. javax.persistence.PersistenceException: org.hibernate.exception.JDBCConnectionException: Cannot open connection - Caused by: org.hibernate.exception.JDBCConnectionException: Cannot open connection - Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure - The last packet successfully received from the server was 1,274,868,181,212 milliseconds ago. The last packet sent successfully to the server was 0 milliseconds ago. - Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure - The last packet successfully received from the server was 1,274,868,181,212 milliseconds ago. The last packet sent successfully to the server was 0 milliseconds ago. - Caused by: java.net.ConnectException: Connection refused: connect the link to the pastebin for further investigation - http://pastebin.com/4KujAmgD What I understand from these exception statements is that MySQL is refusing to take in any connections after a period of idle/nil activity. I have been reading up a bit about this via Google search, and came to know that one of the possible ways to overcome this is to set values for c3p0 properties, as c3p0 comes bundled with Hibernate. Specifically, I read from here http://www.mchange.com/projects/c3p0/index.html that setting two properties, idleConnectionTestPeriod and preferredTestQuery, will solve this for me. But these values don't seem to have had an effect. Is this the correct approach to fixing this? If not, what is the right way to get over this? The following are related Communications Link Failure questions at stackoverflow.com, but I've not found a satisfactory answer in them. http://stackoverflow.com/questions/2121829/java-db-communications-link-failure http://stackoverflow.com/questions/298988/how-to-handle-communication-link-failure Note 1 - I don't get this error when I am using my application continuously. Note 2 - I use JPA with Hibernate and hence my hibernate.dialect, etc. hibernate properties reside within the persistence.xml in the META-INF folder (does that prevent the c3p0 properties from working?) edit - Here are the c3p0 parameters I tried out: preferredTestQuery = "select 1;" and idleConnectionTestPeriod = 2
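
    On Note 2: with JPA, the c3p0 settings have to be passed as hibernate.c3p0.* properties in persistence.xml, and they only take effect when the c3p0 provider class is selected; otherwise Hibernate's built-in pool (which does no idle testing) is used while MySQL drops connections that stay idle longer than wait_timeout (8 hours by default, often lowered). A sketch with property names as in the Hibernate 3 docs and illustrative values:

        <!-- hedged sketch: inside the <persistence-unit> element -->
        <properties>
          <property name="hibernate.connection.provider_class"
                    value="org.hibernate.connection.C3P0ConnectionProvider"/>
          <property name="hibernate.c3p0.min_size" value="5"/>
          <property name="hibernate.c3p0.max_size" value="20"/>
          <property name="hibernate.c3p0.idle_test_period" value="300"/> <!-- seconds -->
          <property name="hibernate.c3p0.preferredTestQuery" value="SELECT 1"/>
          <property name="hibernate.c3p0.timeout" value="1800"/>
        </properties>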

    Read the article

  • How to combine designable components with dependency injection

    - by Wim Coenen
    When creating a designable .NET component, you are required to provide a default constructor. From the IComponent documentation: To be a component, a class must implement the IComponent interface and provide a basic constructor that requires no parameters or a single parameter of type IContainer. This makes it impossible to do dependency injection via constructor arguments. (Extra constructors could be provided, but the designer would ignore them.) Some alternatives we're considering: Service Locator Don't use dependency injection, instead use the service locator pattern to acquire dependencies. This seems to be what IComponent.Site.GetService is for. I guess we could create a reusable ISite implementation (ConfigurableServiceLocator?) which can be configured with the necessary dependencies. But how does this work in a designer context? Dependency Injection via properties Inject dependencies via properties. Provide default instances if they are necessary to show the component in a designer. Document which properties need to be injected. Inject dependencies with an Initialize method This is much like injection via properties but it keeps the list of dependencies that need to be injected in one place. This way the list of required dependencies is documented implicitly, and the compiler will assists you with errors when the list changes. Any idea what the best practice is here? How do you do it? edit: I have removed "(e.g. a WinForms UserControl)" since I intended the question to be about components in general. Components are all about inversion of control (see section 8.3.1 of the UMLv2 specification) so I don't think that "you shouldn't inject any services" is a good answer. edit 2: It took some playing with WPF and the MVVM pattern to finally "get" Mark's answer. I see now that visual controls are indeed a special case. As for using non-visual components on designer surfaces, I think the .NET component model is fundamentally incompatible with dependency injection. It appears to be designed around the service locator pattern instead. Maybe this will start to change with the infrastructure that was added in .NET 4.0 in the System.ComponentModel.Composition namespace.
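
    For the property-injection option, a sketch of the usual shape (IClock/SystemClock are made-up stand-ins for a real dependency): hide the property from the designer, keep a lazily created default so the component still works on a design surface, and let the container or a test overwrite it:

        // hedged sketch of "Dependency Injection via properties"
        using System.ComponentModel;

        public class MyComponent : Component
        {
            private IClock clock;

            [Browsable(false)]
            [DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
            public IClock Clock
            {
                get { return clock ?? (clock = new SystemClock()); } // designer-safe default
                set { clock = value; }                                // container/test injects here
            }
        }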

    Read the article

  • function which given a point and a value of the area of a square as input parameter returns four squares

    - by osabri
    in this code i don't understand why teacher used sometimes +value, - value; /******************************************/ // function void returnSquares(POINT point, int value) void returnSquares(POINT point, int value) { SQUARE tabSquares[4]; // table of squares that we are creating int i; // getting points of 4 squares // for first square input point is point C tabSquares[0].pointA.dimX = point.dimX - value; tabSquares[0].pointA.dimY = point.dimY + value; tabSquares[0].pointB.dimX = point.dimX; tabSquares[0].pointB.dimY = point.dimY + value; tabSquares[0].pointC.dimX = point.dimX; tabSquares[0].pointC.dimY = point.dimY; tabSquares[0].pointD.dimX = point.dimX - value; tabSquares[0].pointD.dimY = point.dimY; // for 2nd square input point is point D tabSquares[1].pointA.dimX = point.dimX; tabSquares[1].pointA.dimY = point.dimY + value; tabSquares[1].pointB.dimX = point.dimX + value; tabSquares[1].pointB.dimY = point.dimY + value; tabSquares[1].pointC.dimX = point.dimX + value; tabSquares[1].pointC.dimY = point.dimY; tabSquares[1].pointD.dimX = point.dimX; tabSquares[1].pointD.dimY = point.dimY; // for 3rd square input point is point A tabSquares[2].pointA.dimX = point.dimX; tabSquares[2].pointA.dimY = point.dimY; tabSquares[2].pointB.dimX = point.dimX + value; tabSquares[2].pointB.dimY = point.dimY; tabSquares[2].pointC.dimX = point.dimX + value; tabSquares[2].pointC.dimY = point.dimY - value; tabSquares[2].pointD.dimX = point.dimX; tabSquares[2].pointD.dimY = point.dimY - value; // for 4th square input point is point B tabSquares[3].pointA.dimX = point.dimX - value; tabSquares[3].pointA.dimY = point.dimY; tabSquares[3].pointB.dimX = point.dimX; tabSquares[3].pointB.dimY = point.dimY; tabSquares[3].pointC.dimX = point.dimX; tabSquares[3].pointC.dimY = point.dimY - value; tabSquares[3].pointD.dimX = point.dimX - value; tabSquares[3].pointD.dimY = point.dimY - value; for (i=0; i<4; i++) { printf("Square number %d\n",i); // now we print parameters of each point in current Square printf("point A x= %0.2f y= %0.2f\n",tabSquares[i].pointA.dimX,tabSquares[i].pointA.dimY); printf("point B x= %0.2f y= %0.2f\n",tabSquares[i].pointB.dimX,tabSquares[i].pointB.dimY); printf("point C x= %0.2f y= %0.2f\n",tabSquares[i].pointC.dimX,tabSquares[i].pointC.dimY); printf("point D x= %0.2f y= %0.2f\n",tabSquares[i].pointD.dimX,tabSquares[i].pointD.dimY); } }
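
    The pattern becomes clear once you notice that the input point is the one corner all four squares share: subtracting value moves one side length to the left (in x) or down (in y), adding it moves right or up, and each square picks one of the four direction pairs. The same result, table-driven (a sketch reusing the POINT/SQUARE types, and assuming their dims are float/double as the %0.2f printfs suggest):

        /* hedged sketch: dx/dy say whether square i extends left (-1) or right (+1)
           of the point, and below (-1) or above (+1) -- that is all the scattered
           +value / -value terms encode */
        static const int dx[4] = { -1, +1, +1, -1 };   /* squares 0..3 as in the original */
        static const int dy[4] = { +1, +1, -1, -1 };
        int i;
        for (i = 0; i < 4; i++) {
            double left   = point.dimX + (dx[i] < 0 ? -value : 0);
            double bottom = point.dimY + (dy[i] < 0 ? -value : 0);
            tabSquares[i].pointA.dimX = left;          tabSquares[i].pointA.dimY = bottom + value; /* top-left */
            tabSquares[i].pointB.dimX = left + value;  tabSquares[i].pointB.dimY = bottom + value; /* top-right */
            tabSquares[i].pointC.dimX = left + value;  tabSquares[i].pointC.dimY = bottom;         /* bottom-right */
            tabSquares[i].pointD.dimX = left;          tabSquares[i].pointD.dimY = bottom;         /* bottom-left */
        }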

    Read the article

  • Realtime Twitter Replies?

    - by ejunker
    I have created Twitter bots for many geographic locations. I want to allow users to @-reply to the Twitter bot with commands and then have the bot respond with the results. I would like to have the bot reply to the user as quickly as possible (realtime). Apparently, Twitter used to have an XMPP/Jabber interface that would provide this type of realtime feed of replies but it was shut down. As I see it my options are to use one of the following: REST API This would involve polling every X minutes for each bot. The problem with this is that it is not realtime and each Twitter account would have to be polled. Search API The search API does allow specifying a "-to" parameter in the search and replies to all bots could be aggregated in a search such as "-to bot1 OR -to bot2...". Though if you have hundreds of bots then the search string would get very long and probably exceed the maximum length of a GET request. Streaming API The streaming API looks very promising as it provides realtime results. The API allows you to specify a follow and track parameters. follow is not useful as the bot does not know who will be sending it commands. track allows you to specify keywords to track. This could possibly work by creating a daemon process that connects to the Streaming API and tracks all references to the bot's names. Once again since there are lots of bots to track the length and complexity of the query may be an issue. Another idea would be to track a special hashtag such as #botcommand and then a user could send a command using this syntax @bot1 weather #botcommand. Then by using the Streaming API to track all references to #botcommand would give you a realtime stream of all the commands. Further parsing could then be done to determine which bot to send the command to. Third-party service Are there any third-party companies that have access to the Twitter firehouse and offer realtime data? I haven't investigated these, but here are a few that I have found: Gnip Tweet.IM excla.im TwitterSpy - seems to use polling, not realtime I'm leaning towards using the Streaming API. Is there a better way to get near realtime @-replies for many (hundreds) of Twitter accounts?
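
    For the Streaming API option, a sketch of the track-a-hashtag idea with the 2010-era endpoint and basic auth (Python 2; the account name, password, and #botcommand marker are illustrative):

        # hedged sketch: track a marker keyword, route each status to the bot it mentions
        import base64, json, urllib2

        req = urllib2.Request('http://stream.twitter.com/1/statuses/filter.json',
                              data='track=%23botcommand')   # POST body; %23 is '#'
        req.add_header('Authorization',
                       'Basic ' + base64.b64encode('botmaster:password'))
        for line in urllib2.urlopen(req):
            line = line.strip()
            if not line:
                continue                        # keep-alive newlines
            status = json.loads(line)
            text = status.get('text', '')       # e.g. "@bot1 weather #botcommand"
            if text.startswith('@'):
                bot_name = text.split()[0][1:]  # dispatch the command to "bot1" here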

    Read the article

  • Seeking STL-aware c++filt

    - by Don Wakefield
    In my development environment, I'm compiling a code base using GNU C++ 3.4.6. Code is under development, and unfortunately crashes now and then. It's nice to be able to run the traceback through a demangler, and I use c++filt 3.4. The problem comes when functions have a number of STL parameters. Consider My_callback::operator()( Status&, std::set<std::string> const&, std::vector<My_parameter*> const&, My_attribute_set const&, std::vector<My_parameter_base*> const&, std::vector<My_parameter> const&, std::set<std::string> const& ) { // ... } When this function is in the traceback, the mangled output on my platform is: (_ZN30My_callbackclER11StatusRKSt3setISsSt4lessISsESaISsEERKSt6vectorIP13My_parameterSaISB_EERK17My_attribute_setRKS9_IP18My_parameter_baseSaISK_EERKS9_ISA_SaISA_EES8_+0x76a) [0x13ffdaa] c++filt kindly demangles it to (My_callback::operator()(Status&, std::set<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, std::vector<My_parameter*, std::allocator<My_parameter*> > const&, My_attribute_set const&, std::vector<My_parameter_base*, std::allocator<My_parameter_base*> > const&, std::vector<My_parameter, std::allocator<My_parameter> > const&, std::set<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&)+0x76a) [0x13ffdaa] This is the same problem as compiler errors encountered when using templates. However, the STL is a fairly regular and recognizable package of templates. So what I'm hoping is that someone out there has created an enhanced version of c++filt which would dump something closer to the original function signature. Any hints?
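
    Lacking an STL-aware c++filt, a small filter piped after it gets most of the way there. A sketch that collapses basic_string and the defaulted allocator/comparator arguments (the regexes are tuned to the traceback above, not to arbitrary C++; usage would be something like crash_trace | c++filt | python stlfilt.py):

        # hedged post-processing sketch for c++filt output
        import re, sys

        BASIC_STRING = ('std::basic_string<char, std::char_traits<char>, '
                        'std::allocator<char> >')

        def simplify(line):
            line = line.replace(BASIC_STRING, 'std::string')
            # drop default comparator/allocator arguments left after the replacement
            line = re.sub(r',\s*std::(?:allocator|less)<[^<>]*>\s*', '', line)
            return line.replace(' >', '>')   # tidy the leftover "> >" spacing

        for line in sys.stdin:
            sys.stdout.write(simplify(line))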

    Read the article

  • A SelfHosted WCF Service over Basic HTTP Binding doesn't support more than 1000 concurrent requests

    - by Krishnan
    I have self hosted a WCF Service over BasicHttpBinding consumed by an ASMX Client. I'm simulating a concurrent user load of 1200 users. The service method takes a string parameter and returns a string. The data exchanged is less than 10KB. The processing time for a request is fixed at 2 seconds by having a Thread.Sleep(2000) statement. Nothing additional. I have removed all the DB Hits / business logic. The same piece of code runs fine for 1000 concurrent users. I get the following error when I bump up the number to 1200 users. System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a receive. ---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags) at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) --- End of inner exception stack trace --- at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead) --- End of inner exception stack trace --- at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request) at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) at WCF.Throttling.Client.Service.Function2(String param) This exception is often reported on DataContract mismatch and large data exchange. But never when doing a load test. I have browsed enough and have tried most of the options which include, Enabled Trace & Message log on server side. But no errors logged. To overcome Port Exhaustion MaxUserPort is set to 65535, and TcpTimedWaitDelay 30 secs. MaxConcurrent Calls is set to 600, and MaxConcurrentInstances is set to 1200. The Open, Close, Send and Receive Timeouts are set to 10 Minutes. The HTTPWebRequest KeepAlive set to false. I have not been able to nail down the issue for the past two days. Any help would be appreciated. Thank you.
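
    Two more throttles are worth checking besides the ones already raised. Server side, the same limits can be set declaratively (a sketch with the standard serviceThrottling attributes; values illustrative). Client side, the ASMX proxy is subject to System.Net's two-connections-per-host default, so ServicePointManager.DefaultConnectionLimit (or connectionManagement/maxconnection in the client config) has to be raised on the load generator as well:

        <!-- hedged sketch: server-side behavior configuration -->
        <behaviors>
          <serviceBehaviors>
            <behavior name="HighThroughput">
              <serviceThrottling maxConcurrentCalls="1200"
                                 maxConcurrentSessions="1200"
                                 maxConcurrentInstances="1200"/>
            </behavior>
          </serviceBehaviors>
        </behaviors>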

    Read the article

  • How do I set libavcodec to use 4:2:2 chroma when encoding MPEG-2 4:2:2 profile?

    - by Mike Pollitt
    I have a project using libavcodec (ffmpeg). I'm using it to encode MPEG-2 video at 4:2:2 Profile, Main Level. I have the pixel format PIX_FMT_YUV422P selected in the AVCodecContext, however the video output I'm getting has all the colours wrong, and looks to me like the encoder is incorrectly reading the buffers as though it thinks it is 4:2:0 chroma rather than 4:2:2. Here's my codec setup: // // AVFormatContext* _avFormatContext previously defined as mpeg2video // // // Set up the video stream for output // AVVideoStream* _avVideoStream = av_new_stream(_avFormatContext, 0); if (!_avVideoStream) { err = ccErrWFFFmpegUnableToAllocateStream; goto bail; } _avCodecContext = _avVideoStream->codec; _avCodecContext->codec_id = CODEC_ID_MPEG2VIDEO; _avCodecContext->codec_type = CODEC_TYPE_VIDEO; // // Set up required parameters // _avCodecContext->rc_max_rate = _avCodecContext->rc_min_rate = _avCodecContext->bit_rate = src->_avCodecContext->bit_rate; _avCodecContext->flags = CODEC_FLAG_INTERLACED_DCT; _avCodecContext->flags2 = CODEC_FLAG2_INTRA_VLC | CODEC_FLAG2_NON_LINEAR_QUANT; _avCodecContext->qmin = 1; _avCodecContext->qmax = 1; _avCodecContext->rc_buffer_size = _avCodecContext->rc_initial_buffer_occupancy = 2000000; _avCodecContext->rc_buffer_aggressivity = 0.25; _avCodecContext->profile = 0; _avCodecContext->level = 5; _avCodecContext->width = f->GetWidth(); // f is a private Frame class with width, height properties etc. _avCodecContext->height = f->GetHeight(); _avCodecContext->time_base.den = 25; _avCodecContext->time_base.num = 1; _avCodecContext->gop_size = 12; _avCodecContext->max_b_frames = 2; _avCodecContext->pix_fmt = PIX_FMT_YUV422P; if (_avFormatContext->oformat->flags & AVFMT_GLOBALHEADER) { _avCodecContext->flags |= CODEC_FLAG_GLOBAL_HEADER; } if (av_set_parameters(_avFormatContext, NULL) < 0) { err = ccErrWFFFmpegUnableToSetParameters; goto bail; } // // Set up video codec for encoding // AVCodec* _avCodec = avcodec_find_encoder(_avCodecContext->codec_id); if (!_avCodec) { err = ccErrWFFFmpegUnableToFindCodecForOutput; goto bail; } if (avcodec_open(_avCodecContext, _avCodec) < 0) { err = ccErrWFFFmpegUnableToOpenCodecForOutput; goto bail; } A screengrab of the resulting video frame can be seen at http://ftp.limeboy.com/images/screen_grab.png (the input was standard colour bars). I've checked by outputting debug frames to TGA format at various points in the process, and I can confirm that it is all fine and dandy up until the point that libavcodec encodes the frame. Any assistance most appreciated! Cheers, Mike.
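
    Wrong colours with otherwise correct geometry are typical of chroma planes laid out as 4:2:0 being read as 4:2:2: in PIX_FMT_YUV422P the U and V planes are half width but full height, so every chroma pointer and linesize differs from the 4:2:0 case. A sketch of letting the library compute the plane layout (old avpicture API, as in the FFmpeg of that era):

        /* hedged sketch: fill the frame's data/linesize for 4:2:2 planar input */
        AVFrame* frame = avcodec_alloc_frame();
        int size = avpicture_get_size(PIX_FMT_YUV422P, width, height); /* = width*height*2 */
        uint8_t* buf = (uint8_t*)av_malloc(size);
        avpicture_fill((AVPicture*)frame, buf, PIX_FMT_YUV422P, width, height);
        /* frame->data[1] / data[2] now have linesize width/2 and height rows
           (not height/2 as in 4:2:0) -- copy the source pixels in accordingly */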

    Read the article

  • passing custom object as parameter to a webmethod of asp.net web service

    - by Jon
    Hi, I have a custom class declared as follows (in vb.net):

        <Serializable()> _
        Public Class NumInfo
            Public n As String
            Public f As Integer
            Public fc As Char()
            Public t As Integer
            Public tc As Char()
            Private validFlag As Boolean = True

            Public Sub New()
            End Sub

            'I also have public properties (read/write) for all the public variables
        End Class

    In my service.asmx codebehind class I have a webmethod as follows:

        <WebMethod()> _
        <XmlInclude(GetType(NumInfo))> _
        Public Function ConvertTo(ByVal info As NumInfo) As String
            Return mbc(info) 'mbc is another function defined in my service.asmx "service" class
        End Function

    The problem is that when I start debugging it to test it, the page that I get does not contain any fields where I could input the values for the public fields of NumInfo. How do I initialise the class? There is no "Invoke" button either. All I see are SOAP details as below:

        ConvertTo
        Test
        The test form is only available for methods with primitive types as parameters.

        SOAP 1.1
        The following is a sample SOAP 1.1 request and response. The placeholders shown need to be replaced with actual values.

        POST /Converter/BC.asmx HTTP/1.1
        Host: localhost
        Content-Type: text/xml; charset=utf-8
        Content-Length: length
        SOAPAction: "http://Services/ConvertTo"

        <?xml version="1.0" encoding="utf-8"?>
        <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <ConvertTo xmlns="http://Services/">
              <info>
                <n>string</n>
                <f>int</f>
                <fc>
                  <char>char</char>
                  <char>char</char>
                </fc>
                ..etc..

    What am I doing wrong? For the record, I tried replacing Char() with String to see if it was the array causing problems, but that didn't help either. I'm fairly new to web services. I tried replacing the custom object parameter with a primitive parameter just to check how things worked, and it rendered a page with an input field and an invoke button. I just can't seem to get it working with a custom object. Help!
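    A side note on the page itself: "The test form is only available for methods with primitive types as parameters" is by design -- the ASMX help page can only render HTML input fields for primitive parameters, so a complex type has to be exercised through a raw SOAP request or a generated client proxy. Here is a hedged sketch of the filled-in request, with made-up values based on the sample above (note that XmlSerializer normally represents Char as its numeric code point, so the <char> elements would carry numbers, not letters):

        POST /Converter/BC.asmx HTTP/1.1
        Host: localhost
        Content-Type: text/xml; charset=utf-8
        SOAPAction: "http://Services/ConvertTo"

        <?xml version="1.0" encoding="utf-8"?>
        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <ConvertTo xmlns="http://Services/">
              <info>
                <n>255</n>
                <f>10</f>
                <fc>
                  <char>48</char><!-- '0' -->
                  <char>49</char><!-- '1' -->
                </fc>
                <t>16</t>
                <tc>
                  <char>48</char>
                  <char>49</char>
                </tc>
              </info>
            </ConvertTo>
          </soap:Body>
        </soap:Envelope>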

    Read the article

  • How do I use accepts_nested_attributes_for? I cannot use the .build method (!)

    - by Angela
    Editing my question for conciseness and to update what I've done: how do I model having multiple Addresses for a Company, assign a single Address to a Contact, and be able to assign them when creating or editing a Contact? Here is my model for Contact:

        class Contact < ActiveRecord::Base
          attr_accessible :first_name, :last_name, :title, :phone, :fax, :email,
                          :company, :date_entered, :campaign_id, :company_name,
                          :address_id, :address_attributes

          belongs_to :company
          belongs_to :address
          accepts_nested_attributes_for :address
        end

    Here is my model for Address:

        class Address < ActiveRecord::Base
          attr_accessible :street1, :street2, :city, :state, :zip
          has_many :contacts
        end

    I would like, when creating a new Contact, to access all the Addresses that belong to the other Contacts that belong to the Company. So here is how I represent Company:

        class Company < ActiveRecord::Base
          attr_accessible :name, :phone, :addresses
          has_many :contacts
          has_many :addresses, :through => :contacts
        end

    Here is how I am trying to create a field in the Contact _form view so that, when someone creates a new Contact, it passes the address to the Address model and associates that address with the Contact:

        <% f.fields_for :address, @contact.address do |builder| %>
          <p>
            <%= builder.label :street1, "Street 1" %><br />
            <%= builder.text_field :street1 %>
          </p>
        <% end %>

    When I try to edit, the field for Street 1 is blank, and I don't know how to display the value from show.html.erb. At the bottom is my error console -- I can't seem to create values in the address table. My Contacts controller is as follows:

        def new
          @contact = Contact.new
          @contact.address.build # I GET AN ERROR HERE: says NilClass
          @contact.date_entered = Date.today
          @campaigns = Campaign.find(:all, :order => "name")
          if params[:campaign_id].blank?
          else
            @campaign = Campaign.find(params[:campaign_id])
            @contact.campaign_id = @campaign.id
          end
          if params[:company_id].blank?
          else
            @company = Company.find(params[:company_id])
            @contact.company_name = @company.name
          end
        end

        def create
          @contact = Contact.new(params[:contact])
          if @contact.save
            flash[:notice] = "Successfully created contact."
            redirect_to @contact
          else
            render :action => 'new'
          end
        end

        def edit
          @contact = Contact.find(params[:id])
          @campaigns = Campaign.find(:all, :order => "name")
        end

    Here is a snippet of my error console: the address attributes are POSTed, but the row created in the Address table is all NULLs.

        Processing ContactsController#create (for 127.0.0.1 at 2010-05-12 21:16:17) [POST]
          Parameters: {"commit"=>"Submit", "authenticity_token"=>"d8/gx0zy0Vgg6ghfcbAYL0YtGjYIUC2b1aG+dDKjuSs=", "contact"=>{"company_name"=>"Allyforce", "title"=>"", "campaign_id"=>"2", "address_attributes"=>{"street1"=>"abc"}, "fax"=>"", "phone"=>"", "last_name"=>"", "date_entered"=>"2010-05-12", "email"=>"", "first_name"=>"abc"}}
          Company Load (0.0ms)  SELECT * FROM "companies" WHERE ("companies"."name" = 'Allyforce') LIMIT 1
          Address Create (16.0ms)  INSERT INTO "addresses" ("city", "zip", "created_at", "street1", "updated_at", "street2", "state") VALUES(NULL, NULL, '2010-05-13 04:16:18', NULL, '2010-05-13 04:16:18', NULL, NULL)
          Contact Create (0.0ms)  INSERT INTO "contacts" ("company", "created_at", "title", "updated_at", "campaign_id", "address_id", "last_name", "phone", "fax", "company_id", "date_entered", "first_name", "email") VALUES(NULL, '2010-05-13 04:16:18', '', '2010-05-13 04:16:18', 2, 2, '', '', '', 5, '2010-05-12', 'abc', '')
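    On the error flagged in new (the "I GET AN ERROR HERE" comment above): for a belongs_to association, a brand-new Contact's address is nil, so @contact.address.build raises NoMethodError on NilClass. Rails generates build_<name> / create_<name> helpers for singular associations; the collection-style .build only exists on has_many. A minimal sketch of the relevant line under that assumption:

        def new
          @contact = Contact.new
          # belongs_to :address means @contact.address is nil here,
          # so use the singular-association builder instead of .build:
          @contact.build_address
          @contact.date_entered = Date.today
          # ... rest of the action as above ...
        end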

    Read the article
