Search Results

Search found 11142 results on 446 pages for 'incognito mode'.

  • The maximum message size quota for incoming messages (65536) has been exceeded.

    - by DaleyKD
    My WCF Service has an OperationContract that accepts, as a parameter, an array of objects. This can potentially be quite large. After looking for fixes for Bad Request: 400, I found the real reason: the maximum message size. I know this question has been asked before in MANY places. I've tried what everyone says: "Increase the sizes in the client and server config files." I have. It still doesn't work. My Service's web.config: <system.serviceModel> <services> <service name="myService"> <endpoint name="myEndpoint" address="" binding="basicHttpBinding" bindingConfiguration="myBinding" contract="Meisel.WCF.PDFDocs.IPDFDocsService" /> </service> </services> <bindings> <basicHttpBinding> <binding name="myBinding" closeTimeout="00:11:00" openTimeout="00:11:00" receiveTimeout="00:15:00" sendTimeout="00:15:00" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647" maxBufferPoolSize="2147483647" transferMode="Buffered" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true"> <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" /> <security mode="None" /> </binding> </basicHttpBinding> </bindings> <behaviors> <serviceBehaviors> <behavior> <serviceMetadata httpGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="true" /> <dataContractSerializer maxItemsInObjectGraph="2147483647" /> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> </system.serviceModel> My Client's app.config: <system.serviceModel> <bindings> <basicHttpBinding> <binding name="BasicHttpBinding_IPDFDocsService" closeTimeout="00:11:00" openTimeout="00:11:00" receiveTimeout="00:10:00" sendTimeout="00:11:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="2147483647" maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true"> <readerQuotas maxDepth="32" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" /> <security mode="None"> <transport clientCredentialType="None" proxyCredentialType="None" realm="" /> <message clientCredentialType="UserName" algorithmSuite="Default" /> </security> </binding> </basicHttpBinding> </bindings> <client> <endpoint address="http://localhost:8451/PDFDocsService.svc" behaviorConfiguration="MoreItemsInObjectGraph" binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_IPDFDocsService" contract="PDFDocsService.IPDFDocsService" name="BasicHttpBinding_IPDFDocsService" /> </client> <behaviors> <endpointBehaviors> <behavior name="MoreItemsInObjectGraph"> <dataContractSerializer maxItemsInObjectGraph="2147483647" /> </behavior> </endpointBehaviors> </behaviors> </system.serviceModel> What can I possibly be missing or doing wrong? It's as though the service is ignoring what I typed in the maxReceivedBufferSize. Thanks in advance, Kyle UPDATE Here are two other StackOverflow questions where they never received an answer, either: http://stackoverflow.com/questions/2880623/maxreceivedmessagesize-adjusted-but-still-getting-the-quotaexceedexception-with http://stackoverflow.com/questions/2569715/wcf-maxreceivedmessagesize-property-not-taking
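
    A note before touching more quotas (an observation, not a confirmed diagnosis): 65536 is WCF's built-in default for maxReceivedMessageSize, so an error quoting exactly that number usually means the named binding configuration is not being applied and a default endpoint is in play. In the posted web.config the service is registered as name="myService", but that attribute must be the fully qualified type name of the service class (presumably something like Meisel.WCF.PDFDocs.PDFDocsService, judging by the contract); otherwise the whole <service> element is silently ignored. One way to take config resolution out of the equation is to build the client binding in code. A minimal sketch, assuming the generated contract interface is named IPDFDocsService:

        using System;
        using System.ServiceModel;

        class LargeMessageClient
        {
            static void Main()
            {
                // Mirror the server's intended binding in code, so no config lookup can fail silently.
                var binding = new BasicHttpBinding
                {
                    MaxReceivedMessageSize = int.MaxValue,
                    MaxBufferSize = int.MaxValue,
                    SendTimeout = TimeSpan.FromMinutes(15)
                };
                binding.ReaderQuotas.MaxArrayLength = int.MaxValue;
                binding.ReaderQuotas.MaxStringContentLength = int.MaxValue;
                binding.Security.Mode = BasicHttpSecurityMode.None;

                // IPDFDocsService is the contract interface generated by svcutil / Add Service Reference.
                var factory = new ChannelFactory<IPDFDocsService>(
                    binding,
                    new EndpointAddress("http://localhost:8451/PDFDocsService.svc"));
                IPDFDocsService proxy = factory.CreateChannel();
                // ... send the large array here; success would point at config resolution ...
                ((IClientChannel)proxy).Close();
                factory.Close();
            }
        }

    If the programmatic client succeeds, fixing the <service name="..."> attribute on the server should make the config-file version honour maxReceivedMessageSize as well.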

  • Databinding to ObservableCollection in a different UserControl - how to preserve current selections?

    - by Dave
    Scope of question expanded on 2010-03-25 I ended up figuring out my problem, but here's a new problem that came up as a result of solving the original question, because I want to be able to award the bounty to someone!!! Once I figured out my problem, I soon found out that when the ObservableCollection updates, the databound ComboBox has its contents repopulated, but most of the selections have been blanked out. I assume that in this case, MVVM is going to make it difficult for me to remember the last selected item. I have an idea, but it seems a little nasty. I'll award the bounty to whomever comes up with a nice solution for this! Question re-written on 2010-03-24 I have two UserControls, where one is a dialog that has a TabControl, and the other is one that appears within said TabControl. I'll just call them CandyDialog and CandyNameViewer for simplicity's sake. There's also a data management class called Tracker that manages information storage, which for all intents and purposes just exposes a public property that is an ObservableCollection. I display the CandyNameViewer in CandyDialog via code behind, like this: private void CandyDialog_Loaded( object sender, RoutedEventArgs e) { _candyviewer = new CandyViewer(); _candyviewer.DataContext = _tracker; candy_tab.Content = _candyviewer; } The CandyViewer's XAML looks like this (edited for kaxaml): <Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"> <Page.Resources> <DataTemplate x:Key="CandyItemTemplate"> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="120"></ColumnDefinition> <ColumnDefinition Width="150"></ColumnDefinition> </Grid.ColumnDefinitions> <TextBox Grid.Column="0" Text="{Binding CandyName}" Margin="3"></TextBox> <!-- just binding to DataContext ends up using InventoryItem as parent, so we need to get to the UserControl --> <ComboBox Grid.Column="1" SelectedItem="{Binding SelectedCandy, Mode=TwoWay}" ItemsSource="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type UserControl}}, Path=DataContext.CandyNames}" Margin="3"></ComboBox> </Grid> </DataTemplate> </Page.Resources> <Grid> <ListBox DockPanel.Dock="Top" ItemsSource="{Binding CandyBoxContents, Mode=TwoWay}" ItemTemplate="{StaticResource CandyItemTemplate}" /> </Grid> </Page> Now everything works fine when the controls are loaded. As long as CandyNames is populated first, and then the consumer UserControl is displayed, all of the names are there. I obviously don't get any errors in the Output Window or anything like that. The issue I have is that when the ObservableCollection is modified from the model, those changes are not reflected in the consumer UserControl! I've never had this problem before; all of my previous uses of ObservableCollection updated fine, although in those cases I wasn't databinding across assemblies. Although I am currently only adding and removing candy names to/from the ObservableCollection, at a later date I will likely also allow renaming from the model side. Is there something I did wrong? Is there a good way to actually debug this? Reed Copsey indicates here that inter-UserControl databinding is possible. Unfortunately, my favorite Bea Stollnitz article on WPF databinding debugging doesn't suggest anything that I could use for this particular problem.
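
    One hedged way to attack the blanked-out selections (a sketch rather than a drop-in fix; CandyRow and RefreshCandyNames are names invented here): when CandyNames is cleared and repopulated, each ComboBox briefly has no items, so the TwoWay SelectedItem binding pushes null into SelectedCandy. Snapshotting the selections before the refresh and restoring the survivors afterwards avoids the "nasty" workaround:

        using System.Collections.Generic;
        using System.Collections.ObjectModel;
        using System.ComponentModel;
        using System.Linq;

        public class CandyRow : INotifyPropertyChanged
        {
            private string selectedCandy;
            public string CandyName { get; set; }

            // Bound TwoWay by the ComboBox in the DataTemplate.
            public string SelectedCandy
            {
                get { return selectedCandy; }
                set { selectedCandy = value; OnPropertyChanged("SelectedCandy"); }
            }

            public event PropertyChangedEventHandler PropertyChanged;
            private void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }

        public class CandyViewModel
        {
            public ObservableCollection<string> CandyNames { get; private set; }

            public CandyViewModel()
            {
                CandyNames = new ObservableCollection<string>();
            }

            public void RefreshCandyNames(IEnumerable<string> latest, IList<CandyRow> rows)
            {
                // Remember what each row had selected before the ComboBoxes see the change.
                var saved = rows.Select(r => r.SelectedCandy).ToList();

                // Mutate the existing collection rather than replacing it, so the bindings stay alive.
                CandyNames.Clear();
                foreach (var name in latest)
                    CandyNames.Add(name);

                // Restore every selection that survived the refresh; the TwoWay binding
                // picks the change up through INotifyPropertyChanged.
                for (int i = 0; i < rows.Count; i++)
                    if (saved[i] != null && CandyNames.Contains(saved[i]))
                        rows[i].SelectedCandy = saved[i];
            }
        }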

  • Django ImageField issue with JPEG's

    - by Kieran Lynn
    I am having a major issue with PIL (Python Imaging Library) in Django and have jumped through a lot of hoops, but thus far I have not been able to figure out the root of the issue. The problem essentially breaks down to not being able to upload JPEG images through the ImageField in the Django admin, but the issue is not as simple as installing libjpeg.

    First, I installed PIL (through Buildout) and realized once it was installed that I had not installed libjpeg, because JPEG support was not available. Having not set up the server myself, I just assumed that it was not installed, and I compiled libjpeg 8 from source. This ended up in my /usr/local/lib/ directory. I cleared out my Buildout files and rebuilt everything; this time when PIL compiled I had JPEG support. But when I went to the Django admin and tried to upload a JPEG through an ImageField, I had no luck: I got the "Upload a valid image. The file you uploaded was either not an image or a corrupted image" error.

    Just as a test I opened up the Django shell and ran the following:

        >>> import Image
        >>> i = Image.open("/absolute_path/file.jpg")
        >>> print i
        <JpegImagePlugin.JpegImageFile image mode=RGB size=940x375 at 0x7F908C529BD8>

    This runs with no errors and shows that PIL is able to open JPEGs. After doing some reading, I came across this thread: "Is it possible to control which libraries Apache uses?" It looks like PHP also uses libjpeg and loads before Django, therefore loading libjpeg 6.2 first. This shows up when using lsof:

        COMMAND PID  USER     FD  TYPE DEVICE SIZE/OFF NODE   NAME
        apache2 2561 www-data mem REG  202,1  146032   639276 /usr/lib/libjpeg.so.62.0.0

    So my thought was that I should be using libjpeg 6.2, and I removed the libjpeg located in my /usr/local/lib directory. After rereading the PIL installation instructions, I realized that I might not have the dev/header files for libjpeg that PIL needs, so I also uninstalled libjpeg using the aptitude uninstaller (sudo aptitude remove libjpeg62). Then, to ensure that I got the header files PIL needs, I installed libjpeg through apt (sudo apt-get install libjpeg62-dev). From there I cleaned out my Buildout directory and reran Buildout, which in turn reinstalled PIL. Once again I had JPEG support, now using libjpeg 6.2.

    So I went to test in the Django admin: still no JPEG support. I then wanted to test JPEG support in general, and see what kind of error an unhandled exception would throw, so in my homepage view I added the following code to open a JPEG image:

        import Image
        i = Image.open("/absolute_path/file.jpg")
        v = i.verify()

    Then I pass i to the HTML view just to easily see the output. I deployed these changes to the server, restarted, and was surprised not to see an error; instead I got the following output:

        {{ i }} - <JpegImagePlugin.JpegImageFile image mode=RGB size=940x375 at 0x7F908C529BD8>
        {{ v }} - None

    So at this point I am really confused: why can I successfully open a JPEG while the admin cannot? Am I missing something; is this not an issue with libjpeg? If it is not an issue with libjpeg, why can I upload a PNG with no issues? Any help would be much appreciated; I have been debugging this for two days with no luck.

    Setup:
        1. Rackspace Cloud Server
        2. Ubuntu 10.04
        3. Django 1.2.3 (installed through Buildout)
        4. PIL 1.1.7 (installed through Buildout)
        5. libjpeg 6.2 (installed through apt: sudo apt-get install libjpeg62-dev)

  • How can I get the output of a command terminated by an alarm() call in Perl?

    - by rockyurock
    Case 1 If I run below command i.e iperf in UL only, then i am able to capture the o/p in txt file @output = readpipe("iperf.exe -u -c 127.0.0.1 -p 5001 -b 3600k -t 10 -i 1"); open FILE, ">Misplay_DL.txt" or die $!; print FILE @output; close FILE; Case 2 When I run iperf in DL mode , as we know server will start listening in cont. mode like below even after getting data from client (Here i am using server and client on LAN) @output = system("iperf.exe -u -s -p 5001 -i 1"); on server side: D:\_IOT_SESSION_RELATED\SEEM_ELEMESNTS_AT_COMM_PORT_CONF\Tput_Related_Tools\AUTO MATION_APP_\AUTOMATION_UTILITYiperf.exe -u -s -p 5001 ------------------------------------------------------------ Server listening on UDP port 5001 Receiving 1470 byte datagrams UDP buffer size: 8.00 KByte (default) ------------------------------------------------------------ [1896] local 192.168.5.101 port 5001 connected with 192.168.5.101 port 4878 [ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams [1896] 0.0- 2.0 sec 881 KBytes 3.58 Mbits/sec 0.000 ms 0/ 614 (0%) command prompt does not appear , process is contd... on client side: D:\_IOT_SESSION_RELATED\SEEM_ELEMESNTS_AT_COMM_PORT_CONF\Tput_Related_Tools\AUTO MATION_APP_\AUTOMATION_UTILITYiperf.exe -u -c 192.168.5.101 -p 5001 -b 3600k -t 2 -i 1 ------------------------------------------------------------ Client connecting to 192.168.5.101, UDP port 5001 Sending 1470 byte datagrams UDP buffer size: 8.00 KByte (default) ------------------------------------------------------------ [1880] local 192.168.5.101 port 4878 connected with 192.168.5.101 port 5001 [ ID] Interval Transfer Bandwidth [1880] 0.0- 1.0 sec 441 KBytes 3.61 Mbits/sec [1880] 1.0- 2.0 sec 439 KBytes 3.60 Mbits/sec [1880] 0.0- 2.0 sec 881 KBytes 3.58 Mbits/sec [1880] Server Report: [1880] 0.0- 2.0 sec 881 KBytes 3.58 Mbits/sec 0.000 ms 0/ 614 (0%) [1880] Sent 614 datagrams D:\_IOT_SESSION_RELATED\SEEM_ELEMESNTS_AT_COMM_PORT_CONF\Tput_Related_Tools\AUTO MATION_APP_\AUTOMATION_UTILITY so with this as server is cont. listening and never terminates so can't take output of server side to a txt file as it is going to the next command itself to create a txt file so i adopted the alarm() function to terminate the server side (iperf.exe -u -s -p 5001) commands after it received all data from the client. could anybody suggest me the way.. Here is my code: #! /usr/bin/perl -w my $command = "iperf.exe -u -s -p 5001"; my @output; eval { local $SIG{ALRM} = sub { die "Timeout\n" }; alarm 20; #@output = `$command`; #my @output = readpipe("iperf.exe -u -s -p 5001"); #my @output = exec("iperf.exe -u -s -p 5001"); my @output = system("iperf.exe -u -s -p 5001"); alarm 0; }; if ($@) { warn "$command timed out.\n"; } else { print "$command successful. Output was:\n", @output; } open FILE, ">display.txt" or die $!; print FILE @output_1; close FILE; i know that with system command i cannot capture the o/p to a txt file but i tried with readpipe() and exec() calls also but in vain... could some one please take a look and let me know why the iperf.exe -u -s -p 5001 is not terminating even after the alarm call and to take the out put to a txt file

  • claimsResponse always returns null

    - by Chirag Pandya
    hello i have a following code in asp.net. i have used DotNetOpenAuth.dll for openID. the code is under protected void openidValidator_ServerValidate(object source, ServerValidateEventArgs args) { // This catches common typos that result in an invalid OpenID Identifier. args.IsValid = Identifier.IsValid(args.Value); } protected void loginButton_Click(object sender, EventArgs e) { if (!this.Page.IsValid) { return; // don't login if custom validation failed. } try { using (OpenIdRelyingParty openid = this.createRelyingParty()) { IAuthenticationRequest request = openid.CreateRequest(this.openIdBox.Text); // This is where you would add any OpenID extensions you wanted // to include in the authentication request. ClaimsRequest objClmRequest = new ClaimsRequest(); objClmRequest.Email = DemandLevel.Request; objClmRequest.Country = DemandLevel.Request; request.AddExtension(objClmRequest); // Send your visitor to their Provider for authentication. request.RedirectToProvider(); } } catch (ProtocolException ex) { this.openidValidator.Text = ex.Message; this.openidValidator.IsValid = false; } } protected void Page_Load(object sender, EventArgs e) { this.openIdBox.Focus(); if (Request.QueryString["clearAssociations"] == "1") { Application.Remove("DotNetOpenAuth.OpenId.RelyingParty.OpenIdRelyingParty.ApplicationStore"); UriBuilder builder = new UriBuilder(Request.Url); builder.Query = null; Response.Redirect(builder.Uri.AbsoluteUri); } OpenIdRelyingParty openid = this.createRelyingParty(); var response = openid.GetResponse(); if (response != null) { switch (response.Status) { case AuthenticationStatus.Authenticated: // This is where you would look for any OpenID extension responses included // in the authentication assertion. var claimsResponse = response.GetExtension<ClaimsResponse>(); State.ProfileFields = claimsResponse; // Store off the "friendly" username to display -- NOT for username lookup State.FriendlyLoginName = response.FriendlyIdentifierForDisplay; // Use FormsAuthentication to tell ASP.NET that the user is now logged in, // with the OpenID Claimed Identifier as their username. FormsAuthentication.RedirectFromLoginPage(response.ClaimedIdentifier, false); break; case AuthenticationStatus.Canceled: this.loginCanceledLabel.Visible = true; break; case AuthenticationStatus.Failed: this.loginFailedLabel.Visible = true; break; // We don't need to handle SetupRequired because we're not setting // IAuthenticationRequest.Mode to immediate mode. ////case AuthenticationStatus.SetupRequired: //// break; } } } private OpenIdRelyingParty createRelyingParty() { OpenIdRelyingParty openid = new OpenIdRelyingParty(); int minsha, maxsha, minversion; if (int.TryParse(Request.QueryString["minsha"], out minsha)) { openid.SecuritySettings.MinimumHashBitLength = minsha; } if (int.TryParse(Request.QueryString["maxsha"], out maxsha)) { openid.SecuritySettings.MaximumHashBitLength = maxsha; } if (int.TryParse(Request.QueryString["minversion"], out minversion)) { switch (minversion) { case 1: openid.SecuritySettings.MinimumRequiredOpenIdVersion = ProtocolVersion.V10; break; case 2: openid.SecuritySettings.MinimumRequiredOpenIdVersion = ProtocolVersion.V20; break; default: throw new ArgumentOutOfRangeException("minversion"); } } return openid; } for above code i am always getting var claimsResponse = response.GetExtension<ClaimsResponse>(); i am always getting claimsResponse= null. what is the reason why it happen. is there any requirement which is required for openid like domain validation for RelyingParty?? 
    Any help would be much appreciated.
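
    For what it's worth, a null sreg response is common even when the relying-party code is correct: many providers ignore the Simple Registration extension (ClaimsRequest) and only answer Attribute Exchange (AX) requests. A hedged sketch of asking for both with DotNetOpenAuth (verify the type and property names against the DLL version actually shipped):

        using DotNetOpenAuth.OpenId.Extensions.AttributeExchange;
        using DotNetOpenAuth.OpenId.Extensions.SimpleRegistration;
        using DotNetOpenAuth.OpenId.RelyingParty;

        public static class OpenIdExtensionsHelper
        {
            // Attach both sreg and AX so providers that ignore one may honour the other.
            public static void RequestEmailAndCountry(IAuthenticationRequest request)
            {
                var sreg = new ClaimsRequest { Email = DemandLevel.Request, Country = DemandLevel.Request };
                request.AddExtension(sreg);

                var ax = new FetchRequest();
                ax.Attributes.AddRequired(WellKnownAttributes.Contact.Email);
                ax.Attributes.AddRequired(WellKnownAttributes.Contact.HomeAddress.Country);
                request.AddExtension(ax);
            }

            // On the return leg, fall back from sreg to AX before concluding the data is missing.
            public static string GetEmail(IAuthenticationResponse response)
            {
                var sreg = response.GetExtension<ClaimsResponse>();
                if (sreg != null && !string.IsNullOrEmpty(sreg.Email))
                    return sreg.Email;

                var ax = response.GetExtension<FetchResponse>();
                return ax != null ? ax.GetAttributeValue(WellKnownAttributes.Contact.Email) : null;
            }
        }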

  • Clipplanes, vertex shaders and hardware vertex processing in Direct3D 9

    - by Igor
    Hi, I have an issue with clipplanes in my application that I can reproduce in a sample from DirectX SDK (February 2010). I added a clipplane to the HLSLwithoutEffects sample: ... D3DXPLANE g_Plane( 0.0f, 1.0f, 0.0f, 0.0f ); ... void SetupClipPlane(const D3DXMATRIXA16 & view, const D3DXMATRIXA16 & proj) { D3DXMATRIXA16 m = view * proj; D3DXMatrixInverse( &m, NULL, &m ); D3DXMatrixTranspose( &m, &m ); D3DXPLANE plane; D3DXPlaneNormalize( &plane, &g_Plane ); D3DXPLANE clipSpacePlane; D3DXPlaneTransform( &clipSpacePlane, &plane, &m ); DXUTGetD3D9Device()->SetClipPlane( 0, clipSpacePlane ); } void CALLBACK OnFrameMove( double fTime, float fElapsedTime, void* pUserContext ) { // Update the camera's position based on user input g_Camera.FrameMove( fElapsedTime ); // Set up the vertex shader constants D3DXMATRIXA16 mWorldViewProj; D3DXMATRIXA16 mWorld; D3DXMATRIXA16 mView; D3DXMATRIXA16 mProj; mWorld = *g_Camera.GetWorldMatrix(); mView = *g_Camera.GetViewMatrix(); mProj = *g_Camera.GetProjMatrix(); mWorldViewProj = mWorld * mView * mProj; g_pConstantTable->SetMatrix( DXUTGetD3D9Device(), "mWorldViewProj", &mWorldViewProj ); g_pConstantTable->SetFloat( DXUTGetD3D9Device(), "fTime", ( float )fTime ); SetupClipPlane( mView, mProj ); } void CALLBACK OnFrameRender( IDirect3DDevice9* pd3dDevice, double fTime, float fElapsedTime, void* pUserContext ) { // If the settings dialog is being shown, then // render it instead of rendering the app's scene if( g_SettingsDlg.IsActive() ) { g_SettingsDlg.OnRender( fElapsedTime ); return; } HRESULT hr; // Clear the render target and the zbuffer V( pd3dDevice->Clear( 0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, D3DCOLOR_ARGB( 0, 45, 50, 170 ), 1.0f, 0 ) ); // Render the scene if( SUCCEEDED( pd3dDevice->BeginScene() ) ) { pd3dDevice->SetVertexDeclaration( g_pVertexDeclaration ); pd3dDevice->SetVertexShader( g_pVertexShader ); pd3dDevice->SetStreamSource( 0, g_pVB, 0, sizeof( D3DXVECTOR2 ) ); pd3dDevice->SetIndices( g_pIB ); pd3dDevice->SetRenderState( D3DRS_CLIPPLANEENABLE, D3DCLIPPLANE0 ); V( pd3dDevice->DrawIndexedPrimitive( D3DPT_TRIANGLELIST, 0, 0, g_dwNumVertices, 0, g_dwNumIndices / 3 ) ); pd3dDevice->SetRenderState( D3DRS_CLIPPLANEENABLE, 0 ); RenderText(); V( g_HUD.OnRender( fElapsedTime ) ); V( pd3dDevice->EndScene() ); } } When I rotate the camera I have different visual results when using hardware and software vertex processing. In software vertex processing mode or when using the reference device the clipping plane works fine as expected. In hardware mode it seems to rotate with the camera. If I remove the call to RenderText(); from OnFrameRender then hardware rendering also works fine. Further debugging reveals that the problem is in ID3DXFont::DrawText. I have this issue in Windows Vista and Windows 7 but not in Windows XP. I tested the code with the latest NVidia and ATI drivers in all three OSes on different PCs. Is it a DirectX issue? Or incorrect usage of clipplanes? Thanks Igor

  • How should we load the MFMailComposeViewController in cocos2d?

    - by srikanth rongali
    I am writing an app in using cocos2d. This method I have written for the selector goToFirstScreen: . The view is in landscape mode. I need to send an email. So, I need to launch the MFMailComposeViewController. I need it in portrait mode. But, the control is not entering in to viewDidLoad of the mailMe class. The problem is in goToScreen: method. But, I do not get where I am wrong ? -(void)goToFirstScreen:(id)sender { NSLog(@"goToFirstScreen: "); CCScene *Scene = [CCScene node]; CCLayer *Layer = [mailME node]; [Scene addChild:Layer]; [[CCDirector sharedDirector] setAnimationInterval:1.0/60]; [[CCDirector sharedDirector] pushScene: Scene]; } This is my mailMe class to launch mail controller #import <UIKit/UIKit.h> #import <MessageUI/MessageUI.h> #import <MessageUI/MFMailComposeViewController.h> #import "cocos2d.h" @interface mailME : CCLayer <MFMailComposeViewControllerDelegate> { UIViewController *mailComposer; } -(void)displayComposerSheet; -(void)launchMailAppOnDevice; @end #import "mailME.h" @implementation mailME -(void)viewDidLoad { NSLog(@"Enetrd in to mail"); Class mailClass = (NSClassFromString(@"MFMailComposeViewController")); if (mailClass != nil) { if ([mailClass canSendMail]) { [self displayComposerSheet]; } else { [self launchMailAppOnDevice]; } } else { [self launchMailAppOnDevice]; } } -(void)displayComposerSheet { CCDirector *director = [CCDirector sharedDirector]; [director pause]; [director stopAnimation]; [director.openGLView setUserInteractionEnabled:NO]; mailComposer = [[UIViewController alloc] init]; [mailComposer setView:[[CCDirector sharedDirector]openGLView]]; [mailComposer setModalTransitionStyle:UIModalTransitionStyleCoverVertical]; MFMailComposeViewController *picker = [[MFMailComposeViewController alloc] init]; picker.mailComposeDelegate = self; [picker setSubject:@"Hello!"]; NSArray *toRecipients = [NSArray arrayWithObject:@"[email protected]"]; [picker setToRecipients:toRecipients]; NSString *emailBody = @"It is not working!"; [picker setMessageBody:emailBody isHTML:YES]; [mailComposer presentModalViewController:picker animated:NO]; [picker release]; } - (void)mailComposeController:(MFMailComposeViewController*)controller didFinishWithResult:(MFMailComposeResult)result error:(NSError*)error { switch (result) { case MFMailComposeResultCancelled: break; case MFMailComposeResultSaved: break; case MFMailComposeResultSent: break; case MFMailComposeResultFailed: break; default: break; } [mailComposer dismissModalViewControllerAnimated:NO]; [[UIApplication sharedApplication] setStatusBarOrientation:CCDeviceOrientationLandscapeLeft animated:NO]; CCDirector *director = [CCDirector sharedDirector]; [director.openGLView setUserInteractionEnabled:YES]; [director startAnimation]; [director resume]; [mailComposer.view.superview removeFromSuperview]; } -(void)launchMailAppOnDevice { NSString *recipients = @"mailto:[email protected]?&subject=Hello!"; NSString *body = @"&body=It is not working"; NSString *email = [NSString stringWithFormat:@"%@%@", recipients, body]; email = [email stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding]; [[UIApplication sharedApplication] openURL:[NSURL URLWithString:email]]; } - (void)dealloc { [super dealloc]; } @end

  • SharePoint: Problem with BaseFieldControl

    - by Anoop
    Hi All, In below code in a Gird First column is BaseFieldControl from a column of type Choice of SPList. Secound column is a text box control with textchange event. Both the controls are created at rowdatabound event of gridview. Now the problem is that when Steps: 1) select any of the value from BaseFieldControl(DropDownList) which is rendered from Choice Column of SPList 2) enter any thing in textbox in another column of grid. 3) textchanged event fires up and in textchange event rebound the grid. Problem: the selected value becomes the first item or the default value(if any). but if i do not rebound the grid at text changed event it works fine. Please suggest what to do. using System; using Microsoft.SharePoint; using Microsoft.SharePoint.WebControls; using System.Web.UI; using System.Web.UI.WebControls; using System.Data; namespace SharePointProjectTest.Layouts.SharePointProjectTest { public partial class TestBFC : LayoutsPageBase { GridView grid = null; protected void Page_Load(object sender, EventArgs e) { try { grid = new GridView(); grid.ShowFooter = true; grid.ShowHeader = true; grid.AutoGenerateColumns = true; grid.ID = "grdView"; grid.RowDataBound += new GridViewRowEventHandler(grid_RowDataBound); grid.Width = Unit.Pixel(900); MasterPage holder = (MasterPage)Page.Controls[0]; holder.FindControl("PlaceHolderMain").Controls.Add(grid); DataTable ds = new DataTable(); ds.Columns.Add("Choice"); //ds.Columns.Add("person"); ds.Columns.Add("Curr"); for (int i = 0; i < 3; i++) { DataRow dr = ds.NewRow(); ds.Rows.Add(dr); } grid.DataSource = ds; grid.DataBind(); } catch (Exception ex) { } } void tx_TextChanged(object sender, EventArgs e) { DataTable ds = new DataTable(); ds.Columns.Add("Choice"); ds.Columns.Add("Curr"); for (int i = 0; i < 3; i++) { DataRow dr = ds.NewRow(); ds.Rows.Add(dr); } grid.DataSource = ds; grid.DataBind(); } void grid_RowDataBound(object sender, GridViewRowEventArgs e) { if (e.Row.RowType == DataControlRowType.DataRow) { SPWeb web = SPContext.Current.Web; SPList list = web.Lists["Source for test"]; SPField field = list.Fields["Choice"]; SPListItem item=list.Items.Add(); BaseFieldControl control = (BaseFieldControl)GetSharePointControls(field, list, item, SPControlMode.New); if (control != null) { e.Row.Cells[0].Controls.Add(control); } TextBox tx = new TextBox(); tx.AutoPostBack = true; tx.ID = "Curr"; tx.TextChanged += new EventHandler(tx_TextChanged); e.Row.Cells[1].Controls.Add(tx); } } public static Control GetSharePointControls(SPField field, SPList list, SPListItem item, SPControlMode mode) { if (field == null || field.FieldRenderingControl == null || field.Hidden) return null; try { BaseFieldControl webControl = field.FieldRenderingControl; webControl.ListId = list.ID; webControl.ItemId = item.ID; webControl.FieldName = field.Title; webControl.ID = "id_" + field.InternalName; webControl.ControlMode = mode; webControl.EnableViewState = true; return webControl; } catch (Exception ex) { return null; } } } }
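
    A sketch of one workaround (untested here; pendingChoiceValues and RebindGrid are names invented for illustration): when tx_TextChanged rebinds the grid, grid_RowDataBound builds a brand-new BaseFieldControl against a fresh list item, so the value the user just posted is discarded and the control falls back to its default. Harvesting each control's posted Value before the rebind and pushing it back afterwards preserves the selection:

        // Holds the posted choice per row between the rebind and RowDataBound.
        private readonly Dictionary<int, object> pendingChoiceValues = new Dictionary<int, object>();

        void tx_TextChanged(object sender, EventArgs e)
        {
            // Capture the selections the user posted before DataBind() recreates the rows.
            foreach (GridViewRow row in grid.Rows)
            {
                if (row.RowType != DataControlRowType.DataRow) continue;
                foreach (Control c in row.Cells[0].Controls)
                {
                    var field = c as BaseFieldControl;
                    if (field != null) pendingChoiceValues[row.RowIndex] = field.Value;
                }
            }

            RebindGrid(); // build the DataTable and call grid.DataBind() exactly as before
        }

        void grid_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType != DataControlRowType.DataRow) return;

            // ... create 'control' via GetSharePointControls and add it to the cell, as in the question ...
            object saved;
            if (control != null && pendingChoiceValues.TryGetValue(e.Row.RowIndex, out saved))
                control.Value = saved; // re-apply the selection the rebind would otherwise lose

            // ... create and add the TextBox as before ...
        }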

  • C#, AES encryption check!

    - by Data-Base
    I have this code for AES encryption, can some one verify that this code is good and not wrong? it works fine, but I'm more concern about the implementation of the algorithm // Plaintext value to be encrypted. //Passphrase from which a pseudo-random password will be derived. //The derived password will be used to generate the encryption key. //Password can be any string. In this example we assume that this passphrase is an ASCII string. //Salt value used along with passphrase to generate password. //Salt can be any string. In this example we assume that salt is an ASCII string. //HashAlgorithm used to generate password. Allowed values are: "MD5" and "SHA1". //SHA1 hashes are a bit slower, but more secure than MD5 hashes. //PasswordIterations used to generate password. One or two iterations should be enough. //InitialVector (or IV). This value is required to encrypt the first block of plaintext data. //For RijndaelManaged class IV must be exactly 16 ASCII characters long. //KeySize. Allowed values are: 128, 192, and 256. //Longer keys are more secure than shorter keys. //Encrypted value formatted as a base64-encoded string. public static string Encrypt(string PlainText, string Password, string Salt, string HashAlgorithm, int PasswordIterations, string InitialVector, int KeySize) { byte[] InitialVectorBytes = Encoding.ASCII.GetBytes(InitialVector); byte[] SaltValueBytes = Encoding.ASCII.GetBytes(Salt); byte[] PlainTextBytes = Encoding.UTF8.GetBytes(PlainText); PasswordDeriveBytes DerivedPassword = new PasswordDeriveBytes(Password, SaltValueBytes, HashAlgorithm, PasswordIterations); byte[] KeyBytes = DerivedPassword.GetBytes(KeySize / 8); RijndaelManaged SymmetricKey = new RijndaelManaged(); SymmetricKey.Mode = CipherMode.CBC; ICryptoTransform Encryptor = SymmetricKey.CreateEncryptor(KeyBytes, InitialVectorBytes); MemoryStream MemStream = new MemoryStream(); CryptoStream CryptoStream = new CryptoStream(MemStream, Encryptor, CryptoStreamMode.Write); CryptoStream.Write(PlainTextBytes, 0, PlainTextBytes.Length); CryptoStream.FlushFinalBlock(); byte[] CipherTextBytes = MemStream.ToArray(); MemStream.Close(); CryptoStream.Close(); return Convert.ToBase64String(CipherTextBytes); } public static string Decrypt(string CipherText, string Password, string Salt, string HashAlgorithm, int PasswordIterations, string InitialVector, int KeySize) { byte[] InitialVectorBytes = Encoding.ASCII.GetBytes(InitialVector); byte[] SaltValueBytes = Encoding.ASCII.GetBytes(Salt); byte[] CipherTextBytes = Convert.FromBase64String(CipherText); PasswordDeriveBytes DerivedPassword = new PasswordDeriveBytes(Password, SaltValueBytes, HashAlgorithm, PasswordIterations); byte[] KeyBytes = DerivedPassword.GetBytes(KeySize / 8); RijndaelManaged SymmetricKey = new RijndaelManaged(); SymmetricKey.Mode = CipherMode.CBC; ICryptoTransform Decryptor = SymmetricKey.CreateDecryptor(KeyBytes, InitialVectorBytes); MemoryStream MemStream = new MemoryStream(CipherTextBytes); CryptoStream cryptoStream = new CryptoStream(MemStream, Decryptor, CryptoStreamMode.Read); byte[] PlainTextBytes = new byte[CipherTextBytes.Length]; int ByteCount = cryptoStream.Read(PlainTextBytes, 0, PlainTextBytes.Length); MemStream.Close(); cryptoStream.Close(); return Encoding.UTF8.GetString(PlainTextBytes, 0, ByteCount); } Thank you
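
    The code works as posted, but two points usually come up in review. PasswordDeriveBytes implements an extended PBKDF1 and is effectively obsolete; Rfc2898DeriveBytes (PBKDF2) is the standard replacement. And because the IV is fixed and caller-supplied, two identical plaintexts under the same password encrypt to identical ciphertexts. A sketch of the commonly recommended shape (not a drop-in replacement; prepending a random per-message IV to the ciphertext is a convention chosen here):

        using System;
        using System.IO;
        using System.Security.Cryptography;
        using System.Text;

        public static class AesSketch
        {
            public static string Encrypt(string plainText, string password, byte[] salt, int iterations)
            {
                using (var kdf = new Rfc2898DeriveBytes(password, salt, iterations)) // PBKDF2
                using (var aes = new RijndaelManaged { Mode = CipherMode.CBC, KeySize = 256 })
                {
                    aes.Key = kdf.GetBytes(aes.KeySize / 8);
                    aes.GenerateIV(); // fresh random IV for every message
                    using (var ms = new MemoryStream())
                    {
                        ms.Write(aes.IV, 0, aes.IV.Length); // prepend the IV so Decrypt can recover it
                        using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
                        {
                            byte[] data = Encoding.UTF8.GetBytes(plainText);
                            cs.Write(data, 0, data.Length);
                        } // disposing the CryptoStream flushes the final block
                        return Convert.ToBase64String(ms.ToArray());
                    }
                }
            }

            public static string Decrypt(string cipherText, string password, byte[] salt, int iterations)
            {
                byte[] blob = Convert.FromBase64String(cipherText);
                using (var kdf = new Rfc2898DeriveBytes(password, salt, iterations))
                using (var aes = new RijndaelManaged { Mode = CipherMode.CBC, KeySize = 256 })
                {
                    aes.Key = kdf.GetBytes(aes.KeySize / 8);
                    byte[] iv = new byte[aes.BlockSize / 8];
                    Array.Copy(blob, iv, iv.Length); // the first block is the IV
                    aes.IV = iv;
                    using (var ms = new MemoryStream(blob, iv.Length, blob.Length - iv.Length))
                    using (var cs = new CryptoStream(ms, aes.CreateDecryptor(), CryptoStreamMode.Read))
                    using (var reader = new StreamReader(cs, Encoding.UTF8))
                        return reader.ReadToEnd();
                }
            }
        }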

  • Print individual tables

    - by LiveEn
    I am getting values from a database and displaying in a table. Im trying to print the results as individual . Im using the below javascript <script type="text/javascript"> function print_parent(element) { element.parentNode.className = 'print'; window.print(); element.parentNode.className = ''; return false; } </script> The problem that i have is when i try to print all the results it works great.can some one please tell me how can i print each individual table in each result please? below is my php code $sql="select * from cisdb where pids LIKE '%$pids%'"; $result=mysql_query($sql) or die(mysql_error()); if (mysql_num_rows($result)==0) { echo '<b><center>There was no records !</center></b>'."<br>"; } while ($row=mysql_fetch_array($result)) { $cat=str_replace('+', ' ', $row['category']); print "<center>"; print "<table width='472' border='1' align='center' class='noprint'>"; print "<tr>"; print "<td width='150'><div align='center'><a href='#' onclick='return print_parent(this)'>Print</a> </div></td>"; print "<td width='150'><div align='center'><a href='process.php?mode=ed&id={$row['id']}'>Edit</div></td>"; print "<td width='150'><div align='center'><a href='process.php?mode=del&id={$row['id']}' onclick='return confirm('Are you sure you want to delete?')'>Delete</a></div></td>"; print "</tr>"; print "</table><br>"; print "<div id='divToPrint'>"; print"<table width=700 style=height:900 border=1 cellpadding=1 cellspacing=1 bordercolor=#D6D6D6 class=sss title={$row['title']}> <tr> <td height=25 colspan=2 align=left valign=top><strong>Customer:{$row['name']}</strong></td> <td width=183 align=left valign=top><strong>Sales ID:{$row['said']} </strong></td> <td width=100 align=left valign=top><strong>Phone Cord. ID:{$row['pcid']}</strong></td> <td align=left valign=top><strong>Type:{$row['classtype']}</strong></td> </tr> <tr> <td height=25 colspan=2 valign=top><strong>Contact Name: </strong></td> <td colspan=2 valign=top><strong>Email:</strong></td> <td width=154 valign=top><strong>Phone:</strong></td> </tr <tr> <td height=15 colspan=5 valign=top><strong>Remarks:</strong></td> </tr> <tr> <td height=15 colspan=2 valign=top><strong>Date Added: </strong></td> <td valign=top><strong>Date Edited : </strong></td> <td colspan=2 valign=top><strong>Printed : </strong></td> </tr> </table></div><br>"; print "</center>"; }


  • Error using Session in IIS7

    - by flashnik
    After deployment of my website to IIS I'm getting a following error message when trying to access session: Session state can only be used when enableSessionState is set to true, either in a configuration file or in the Page directive. Please also make sure that System.Web.SessionStateModule or a custom session state module is included in the \\ section in the application configuration. I access it in Page_Load or PreRender events (I tried both versions). With VS Dev Server it works without a problem. I tried both InProc an SessionState storage, 1 and multiple woker processes. I added a enableSessionState = "true" to my webpage explicitly. Here is part of web.config: <system.web> <globalization culture="ru-RU" uiCulture="ru-RU" /> <compilation debug="true" defaultLanguage="c#"> <assemblies> <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /> <add assembly="System.Data.DataSetExtensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /> <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /> <add assembly="System.Web.Extensions.Design, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Design, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B03F5F7F11D50A3A" /> <add assembly="System.Windows.Forms, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /> </assemblies> </compilation> <pages enableEventValidation="false" enableSessionState="true"> <controls> <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add tagPrefix="asp" namespace="System.Web.UI.WebControls" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </controls> </pages> <httpHandlers> <remove verb="*" path="*.asmx" /> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false" /> </httpHandlers> <httpModules> <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add name="SearchUrlRewriter" type="Synonymizer.SearchUrlRewriter, Synonymizer, Version=1.0.0.0, Culture=neutral" /> <add name="Session" type="System.Web.SessionStateModule" /> </httpModules> <sessionState cookieless="UseCookies" cookieName="My_SessionId" mode="InProc" stateNetworkTimeout="5" /> <customErrors mode="Off" /> </system.web> What else do I need to do to make it work?? UPD I tried to monitor if IIS accesses aspnet_client folder with ProcMon and didn't get any access.
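
    One IIS7-specific detail that matches these symptoms (a guess worth testing, not a certain diagnosis): in Integrated pipeline mode the <httpModules> section under <system.web> is ignored, so the Session entry added there has no effect. Managed modules have to be registered under <system.webServer><modules> instead, roughly like this:

        <system.webServer>
          <modules>
            <remove name="Session" />
            <add name="Session" type="System.Web.SessionStateModule" preCondition="" />
          </modules>
        </system.webServer>

    The empty preCondition makes the module run for every request. The SearchUrlRewriter registered in <httpModules> would need the same move for the same reason.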

  • Deploy ASP.NET MVC 2 Application to Windows 2008 R2

    - by user325320
    Hi, I have a ASP.Net MVC 2 web site, which can be visited by http://localhost/Admin/ContentMgr/ in ASP.Net Development Server from Visual Studio 2010(RTM Retail). When I try to deploy the site to Windows 2008 R2 , IIS 7.5 , the url always return 404. First, my application pool is running on .Net 4.0, and Integration mode. Second, my IIS do have "HTTP ERROR" and "HTTP Redirection" features on And this is my web.config. <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.web> <compilation debug="true" defaultLanguage="c#" targetFramework="4.0"> <assemblies> <add assembly="System.Web.Abstractions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Web.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </assemblies> </compilation> <!-- <authentication mode="Forms"> <forms loginUrl="~/Account/LogOn" timeout="2880" /> </authentication> --> <pages> <namespaces> <add namespace="System.Web.Mvc" /> <add namespace="System.Web.Mvc.Ajax" /> <add namespace="System.Web.Mvc.Html" /> <add namespace="System.Web.Routing" /> </namespaces> </pages> </system.web> <system.webServer> <validation validateIntegratedModeConfiguration="false" /> <modules runAllManagedModulesForAllRequests="true" > <remove name="UrlRoutingModule"/> <add name="UrlRoutingModule" type="System.Web.Routing.UrlRoutingModule, System.Web.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </modules> <handlers> <remove name="MvcHttpHandler" /> <add name="MvcHttpHandler" preCondition="integratedMode" verb="*" path="*.mvc" type="System.Web.Mvc.MvcHttpHandler" /> <add name="UrlRoutingHandler" preCondition="integratedMode" verb="*" path="UrlRouting.axd" type="System.Web.HttpForbiddenHandler, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" /> </handlers> <httpErrors errorMode="Detailed" /> </system.webServer> <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <dependentAssembly> <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" /> <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0" /> </dependentAssembly> </assemblyBinding> </runtime> </configuration>
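
    Since the pool already targets .NET 4.0 in Integrated mode, a 404 on every extensionless MVC route is often a sign that ASP.NET 4 was never registered with IIS on that machine (a hedged suggestion; the config itself looks plausible). Re-registering is a low-risk first test:

        %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -iru

    The -iru switch installs or updates the ASP.NET 4 registration without disturbing sites mapped to other versions; on a 32-bit server the path is Framework rather than Framework64. If that does not help, confirm that the application itself (not just the site) is assigned to the 4.0 Integrated pool.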

  • WCF service using duplex channel in different domains

    - by ds1
    I have a WCF service and a Windows client. They communicate via a Duplex WCF channel which when I run from within a single network domain runs fine, but when I put the server on a separate network domain I get the following message in the WCF server trace... The message with to 'net.tcp://abc:8731/ActiveAreaService/mex/mex' cannot be processed at the receiver, due to an AddressFilter mismatch at the EndpointDispatcher. Check that the sender and receiver's EndpointAddresses agree. So, it looks like the communication just work in one direction (from client to server) if the components are in two separate domains. The Network domains are fully trusted, so I'm a little confused as to what else could cause this? Server app.config <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <behaviors> <serviceBehaviors> <behavior name="JobController.ActiveAreaBehavior"> <serviceMetadata httpGetEnabled="false" /> <serviceDebug includeExceptionDetailInFaults="true" /> </behavior> </serviceBehaviors> </behaviors> <services> <service behaviorConfiguration="JobController.ActiveAreaBehavior" name="JobController.ActiveAreaServer"> <endpoint address="mex" binding="mexTcpBinding" bindingConfiguration="" contract="IMetadataExchange" /> <host> <baseAddresses> <add baseAddress="net.tcp://SERVER:8731/ActiveAreaService/" /> </baseAddresses> </host> </service> </services> </system.serviceModel> </configuration> but I also add an end point programmatically in Visual C++ host = gcnew ServiceHost(ActiveAreaServer::typeid); NetTcpBinding^ binding = gcnew NetTcpBinding(); binding->MaxBufferSize = Int32::MaxValue; binding->MaxReceivedMessageSize = Int32::MaxValue; binding->ReceiveTimeout = TimeSpan::MaxValue; binding->Security->Mode = SecurityMode::Transport; binding->Security->Transport->ClientCredentialType = TcpClientCredentialType::Windows; ServiceEndpoint^ ep = host->AddServiceEndpoint(IActiveAreaServer::typeid, binding, String::Empty); // Use the base address Client app.config <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <bindings> <netTcpBinding> <binding name="NetTcpBinding_IActiveAreaServer" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" transactionFlow="false" transferMode="Buffered" transactionProtocol="OleTransactions" hostNameComparisonMode="StrongWildcard" listenBacklog="10" maxBufferPoolSize="524288" maxBufferSize="65536" maxConnections="10" maxReceivedMessageSize="65536"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="Transport"> <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign" /> <message clientCredentialType="Windows" /> </security> </binding> </netTcpBinding> </bindings> <client> <endpoint address="net.tcp://SERVER:8731/ActiveAreaService/" binding="netTcpBinding" bindingConfiguration="NetTcpBinding_IActiveAreaServer" contract="ActiveArea.IActiveAreaServer" name="NetTcpBinding_IActiveAreaServer"> <identity> <userPrincipalName value="[email protected]" /> </identity> </endpoint> </client> </system.serviceModel> </configuration> Any help is appreciated! Cheers
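
    The trace message lines up with the client addressing the service by a name that no longer matches the URI the ServiceHost listens on once the machines sit in different domains (net.tcp://abc:8731/... versus net.tcp://SERVER:8731/...); the default endpoint dispatcher requires an exact match on the To header. A first experiment, hedged because it loosens a check rather than fixing name resolution, is to relax the filter on the service class:

        using System.ServiceModel;

        // Accept messages whose To address does not exactly match the endpoint's
        // listen URI; relevant when clients reach the host through an alias or a
        // different DNS name across domains.
        [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
        public class ActiveAreaServer : IActiveAreaServer
        {
            // ... existing implementation unchanged ...
        }

    If the duplex callbacks start flowing after this, the cleaner long-term fix is usually to publish an endpoint address (or listenUri pair) that matches the externally visible host name.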

  • WCF error: The client and service bindings may be mismatched?

    - by Rev
    Hi let see server config and client config. Then help me find difference between these configs!! Client config <system.serviceModel> <client> <endpoint address="http://localhost/admin2/AdminCentralService.svc" binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_Config" contract="TIR.ThreeTier.ICommandInvoker" name="AdminCentralServiceConfig" /> <endpoint binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_Config" contract="TIR.ThreeTier.ICommandInvoker" name="CommandInvokerConfig" /> </client> <bindings> <wsHttpBinding> <binding name="WSHttpBinding_Config" closeTimeout="00:10:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00" bypassProxyOnLocal="false" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" messageEncoding="Mtom" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="Message"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" /> <message clientCredentialType="Windows" negotiateServiceCredential="true" algorithmSuite="Default" establishSecurityContext="true" /> </security> </binding> </wsHttpBinding> </bindings> Server Config <system.serviceModel> <behaviors> <serviceBehaviors> <behavior name="AdminCentral.Business.Web.Service1Behavior"> <serviceMetadata httpGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="false" /> </behavior> </serviceBehaviors> </behaviors> <bindings> <wsHttpBinding> <binding name="WSHttpBinding_Config" closeTimeout="00:10:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00" bypassProxyOnLocal="false" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" messageEncoding="Mtom" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647"/> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false"/> <security mode="Message"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm=""/> <message clientCredentialType="Windows" negotiateServiceCredential="true" algorithmSuite="Default" establishSecurityContext="true"/> </security> </binding> </wsHttpBinding> </bindings> <services> <service behaviorConfiguration="AdminCentral.Business.Web.Service1Behavior" name="AdminCentral.Business.Web.AdminCentralService"> <endpoint address="" binding="wsHttpBinding" contract="AdminCentral.Business.Web.ICommandInvoker"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> </services>
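
    The two <binding> blocks look attribute-for-attribute identical, so it is worth ruling out config resolution itself; note the client file declares a second endpoint (CommandInvokerConfig) with no address, which is easy to pick up by accident. One sketch for eliminating drift entirely is to build the binding in code and share the helper between client and server (the class name here is invented):

        using System;
        using System.ServiceModel;
        using System.Xml;

        public static class SharedBinding
        {
            // Used by both the ServiceHost and the client proxy, so the encoding (Mtom),
            // Message security with Windows credentials, and the quotas cannot
            // silently differ between two config files.
            public static WSHttpBinding Create()
            {
                var binding = new WSHttpBinding(SecurityMode.Message)
                {
                    MessageEncoding = WSMessageEncoding.Mtom,
                    MaxReceivedMessageSize = int.MaxValue,
                    ReceiveTimeout = TimeSpan.FromMinutes(10),
                    SendTimeout = TimeSpan.FromMinutes(10)
                };
                binding.Security.Message.ClientCredentialType = MessageCredentialType.Windows;
                XmlDictionaryReaderQuotas.Max.CopyTo(binding.ReaderQuotas);
                return binding;
            }
        }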

  • Editing a Gridview row with drop-down lists gets too wide - how can I use popup panels instead?

    - by David
    I have a series of GridViews in a Tab Panel - databound to a generic List of Business Objects. The columns in the Gridview are all similar to the following: <asp:TemplateField HeaderText="Company" SortExpression="Company.ShortName"> <ItemTemplate> <asp:Label ID="lblCompany" runat="server" Text='<%# Bind("Company.ShortName") %>'></asp:Label> </ItemTemplate> <EditItemTemplate> <asp:DropDownList ID="ddlCompany" runat="server"></asp:DropDownList> </EditItemTemplate> </asp:TemplateField> The GridView generates the "Edit" link at the beginning of the row, all the events fire ok. The problem is that the data is getting long. When in 'display mode', it's fine because the GridView control is smart enough to break some text into multiple lines (in particular Project, Title and Worker names can get pretty long). The problem come in editing mode. Drop-down lists DON'T break entries into multiple lines (for obvious reasons). Going into Edit ode on a row in the Gridview can make the Griview expand horizontally to twice the screen size (blowing through the width limits in the Master page and CSS but that's only a related problem). What I need is something like the ModalPopup - but trying to tie it to an ID in an EditItemTemplate gives me errors when the page renders (because the 'ddlXXXX' doesn't exist at the time). In addition I don't know how to dynamically populate the panel so that I can get a response from it (like the ID of the Company they selected). I'm also trying to avoid javascript and would like this to be a 'pure' aspx/code-behind solution (for simplicity's sake among others). All the examples I find are of Modal Popups with the panels pre-defined. Even if it (the popup panel) were something like a list of checkboxes, it could be databound to the SortedList I have ready to go and an OK/Cancel button combination to accept or ignore things. I'm just not sure of what goes where. I'm open to suggestions. Thanks in advance. 
EDIT: Final solution looks as follows: <asp:TemplateField HeaderText="Company" SortExpression="Company.ShortName"> <ItemTemplate> <asp:Label ID="lblCompany" runat="server" Text='<%# Bind("Company.ShortName") %>'></asp:Label> </ItemTemplate> <EditItemTemplate> <asp:LinkButton ID="lnkCompany" runat="server" Text='<%# Bind("Company.ShortName") %>'></asp:LinkButton> <asp:Panel ID="pnlCompany" runat="server" style="display:none"> <div> <asp:DropDownList ID="ddlCompany" runat="server" ></asp:DropDownList> <br/> <asp:ImageButton ID="btnOKCo" runat="server" ImageUrl="~/Images/greencheck.gif" OnCommand="PopupButton_Command" CommandName="SelectCO" /> <asp:ImageButton ID="btnCxlCo" runat="server" ImageUrl="~/Images/RedX.gif" /> </div> </asp:Panel> <cc1:ModalPopupExtender ID="mpeCompany" runat="server" TargetControlID="lnkCompany" PopupControlID="pnlCompany" BackgroundCssClass="modalBackground" CancelControlID="btnCxlCo" DropShadow="true" PopupDragHandleControlID="pnlCompany" /> </EditItemTemplate> </asp:TemplateField> And in the code-behind, lstIDLabor is the generic List of data lines (of which Company is one of the properties that is also a business object) that is bound to the GridView: Sub PopupButton_Command(ByVal sender As Object, ByVal e As CommandEventArgs) Dim intRow As Integer Dim intVal As Integer RestoreFromSessionVariables() Select Case e.CommandName Case "SelectCO" intRow = grdIDCostLabor.EditIndex Dim ddlCo As DropDownList = CType(grdIDCost.Rows(intRow).FindControl("ddlCompany"), DropDownList) intVal = ddlCo.SelectedValue lstIDLabor(intRow).CompanyID = intVal lstIDLabor(intRow).Company = Company.Read(intVal) Case Else ' End Select MakeSessionVariables() BindGrids() End Sub

  • How to expose MEX when I need the service to have NTLM authentication

    - by Ram Amos
    I'm developing a WCF service that is RESTful and SOAP, now both of them needs to be with NTLM authentication. I also want to expose a MEX endpoint so that others can easily reference the service and work with it. Now when I set IIS to require windows authentication I can use the REST service and make calls to the service succesfully, but when I want to reference the service with SVCUTIL it throws an error that it requires to be anonymous. Here's my web.config: <system.serviceModel> <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true"/> <bindings> <basicHttpBinding> <binding name="basicHttpBinding" maxReceivedMessageSize="214748563" maxBufferSize="214748563" maxBufferPoolSize="214748563"> <security mode="TransportCredentialOnly"> <transport clientCredentialType="Ntlm"> </transport> </security> </binding> </basicHttpBinding> <webHttpBinding> <binding name="webHttpBinding" maxReceivedMessageSize="214748563" maxBufferSize="214748563" maxBufferPoolSize="214748563"> <security mode="TransportCredentialOnly"> <transport clientCredentialType="Ntlm"> </transport> </security> </binding> </webHttpBinding> <mexHttpBinding> <binding name="mexHttpBinding"></binding> </mexHttpBinding> </bindings> <standardEndpoints> <webHttpEndpoint> <standardEndpoint name="" automaticFormatSelectionEnabled="true" helpEnabled="True"> </standardEndpoint> </webHttpEndpoint> </standardEndpoints> <services> <service name="Intel.ResourceScheduler.Service" behaviorConfiguration="Meta"> <clear /> <endpoint address="soap" name="SOAP" binding="basicHttpBinding" contract="Intel.ResourceScheduler.Service.IResourceSchedulerService" listenUriMode="Explicit" /> <endpoint address="" name="rest" binding="webHttpBinding" behaviorConfiguration="REST" contract="Intel.ResourceScheduler.Service.IResourceSchedulerService" /> <endpoint address="mex" name="mex" binding="mexHttpBinding" behaviorConfiguration="" contract="IMetadataExchange" /> </service> </services> <behaviors> <endpointBehaviors> <behavior name="REST"> <webHttp /> </behavior> <behavior name="WCFBehavior"> <dataContractSerializer maxItemsInObjectGraph="2147483647" /> </behavior> </endpointBehaviors> <serviceBehaviors> <behavior name="Meta"> <serviceMetadata httpGetEnabled="true"/> </behavior> <behavior name="REST"> <dataContractSerializer maxItemsInObjectGraph="2147483647" /> </behavior> <behavior name="WCFBehavior"> <serviceMetadata httpGetEnabled="true"/> <dataContractSerializer maxItemsInObjectGraph="2147483647" /> </behavior> <behavior name=""> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="true" /> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="false" /> </behavior> </serviceBehaviors> </behaviors> Any help will be appreciated.
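
    The stock mexHttpBinding is hard-wired to anonymous access, which is why svcutil fails once IIS demands Windows authentication while the REST and SOAP endpoints keep working. The usual workaround is a customBinding for the MEX endpoint whose HTTP transport requires NTLM, along these lines (the binding name is invented; double-check the attributes against the installed framework):

        <bindings>
          <customBinding>
            <binding name="mexNtlm">
              <textMessageEncoding messageVersion="Soap12WSAddressing10" />
              <httpTransport authenticationScheme="Ntlm" />
            </binding>
          </customBinding>
        </bindings>

        <!-- and on the service: -->
        <endpoint address="mex" name="mex" binding="customBinding"
                  bindingConfiguration="mexNtlm" contract="IMetadataExchange" />

    Clients generating proxies then authenticate with their current Windows identity; the serviceMetadata httpGetEnabled page is subject to the same IIS authentication settings.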

  • tkinter frame does not show on startup

    - by Jzz
    this is my first question on SO, so correct me please if I make a fool of myself. I have this fairly complicated python / Tkinter application (python 2.7). On startup, the __init__ loads several frames, and loads a database. When that is finished, I want to set the application to a default state (there are 2 program states, 'calculate' and 'config'). Setting the state of the application means that the appropriate frame is displayed (using grid). When the program is running, the user can select a program state in the menu. Problem is, the frame is not displayed on startup. I get an empty application (menu bar and status bar are displayed). When I select a program state in the menu, the frame displays as it should. Question: What am I doing wrong? Should I update idletasks? I tried, but no result. Anything else? Background: I use the following to switch program states: def set_program_state(self, state): '''sets the program state''' #try cleaning all the frames: try: self.config_frame.grid_forget() except: pass try: self.tidal_calculations_frame.grid_forget() except: pass try: self.tidal_grapth_frame.grid_forget() except: pass if state == "calculate": print "Switching to calculation mode" self.tidal_calculations_frame.grid() #frame is preloaded self.tidal_calculations_frame.fill_data(routes=self.routing_data.routes, deviations=self.misc_data.deviations, ship_types=self.misc_data.ship_types) self.tidal_grapth_frame.grid() self.program_state = "calculate" elif state == "config": print "Switching to config mode" self.config_frame = GUI_helper.config_screen_frame(self, self.user) #load frame first (contents depend on type of user) self.config_frame.grid() self.program_state = "config" I understand that this is kind of messy to read, so I simplified things for testing, using this: def set_program_state(self, state): '''sets the program state''' #try cleaning all the frames: try: self.testlabel_1.grid_forget() except: pass try: self.testlabel_2.grid_forget() except: pass if state == "calculate": print "switching to test1" self.testlabel_1 = tk.Label(self, text="calculate", borderwidth=1, relief=tk.RAISED) self.testlabel_1.grid(row=0, sticky=tk.W+tk.E) elif state == "config": print "switching to test1" self.testlabel_2 = tk.Label(self, text="config", borderwidth=1, relief=tk.RAISED) self.testlabel_2.grid(row=0, sticky=tk.W+tk.E) But the result is the same. The frame (or label in this test) is not displayed at startup, but when the user selects the state (calling the same function) the frame is displayed. UPDATE the sample code in the comments (thanks for that!) pointed me in another direction. Further testing revealed (what I think) the cause of the problem. Disabling the display of the status bar made the program work as expected. Turns out, I used pack to display the statusbar and grid to display the frames. And they are in the same container, so problems arise. I fixed that by using only pack inside the main container. But the same problem is still there. This is what I use for the statusbar: self.status = GUI_helper.StatusBar(self.parent) self.status.pack(side=tk.BOTTOM, fill=tk.X) And if I comment out the last line (pack), the config frame loads on startup, as per this line: self.set_program_state("config") But if I let the status bar pack inside the main window, the config frame does not show. Where it does show when the user asks for it (with the same command as above).

    Read the article

  • FMOD surround sound openframeworks

    - by user1449425
    Ok, I hope I don't mess this up; I have had a look for some answers but can't find anything. I am trying to make a simple sampler in openFrameworks using the FMOD sound player in 3D mode. I can make a single instance work fine (recording a new file using libsndfilerecorder and then playing it back and moving it in surround). However, I want to have 8 layers of looping audio that I can record and replace one layer at a time in a live show. I get a lot of problems as soon as I have more than 1 layer.

    The first part of my question relates to the FMOD 3D modes: it is listener relative, so I have to define the position of my listener for every sound (I would prefer head relative mode, but I cannot make that work at all). This works fine when I am using a single player, but with multiple players only the last listener I update actually works.

    The main problem I have is that when I use multiple players I get distortion, and often a mix of other currently playing sounds (even when the microphone cannot hear them) in my new recordings. Is there an incompatibility between libsndfilerecorder and FMOD?

    Here I initialise the players:

        for (int i = 0; i < CHANNEL_COUNT; i++) {
            lvelocity[i].set(1, 1, 1);
            lup[i].set(0, 1, 0);
            lforward[i].set(0, 0, 1);
            lposition[i].set(0, 0, 0);
            sposition[i].set(3, 3, 2);
            svelocity[i].set(1, 1, 1);
            //player[1].initializeFmod();
            //player[i].loadSound( "1.wav" );
            player[i].setVolume(0.75);
            player[i].setMultiPlay(true);
            player[i].play();
            setupHold[i] = false;
            recording[i] = false;
            channelHasFile[i] = false;
            settingOsc[i] = false;
        }

    When I am recording, I unload the file and make sure the positions of the player that is not loaded are not updating:

        void fmodApp::recordingStart( int recordingId ){
            if (recording[recordingId] == false) {
                setupHold[recordingId] = true;  // this stops the position updating
                cout << "Start recording Channel " + ofToString(recordingId + 1) + " setup hold is true \n";
                pt = getDateName() + ".wav";
                player[recordingId].stop();
                player[recordingId].unloadSound();
                audioRecorder.setup(pt);
                audioRecorder.setFormat(SF_FORMAT_WAV | SF_FORMAT_PCM_16);
                recording[recordingId] = true;  // this starts the libSndFileRecorder
            } else {
                cout << "Channel " + ofToString(recordingId + 1) + " is already recording \n";
            }
        }

    And I stop the recording like this:

        void fmodApp::recordingEnd( int recordingId ){
            if (recording[recordingId] == true) {
                recording[recordingId] = false;
                cout << "Stop recording " + ofToString(recordingId + 1) + " \n";
                audioRecorder.finalize();
                audioRecorder.close();
                player[recordingId].loadSound(pt);
                setupHold[recordingId] = false;
                channelHasFile[recordingId] = true;
                cout << "File recorded channel " + ofToString(recordingId + 1) + " file is called " + pt + "\n";
            } else {
                cout << "Sorry track " + ofToString(recordingId + 1) + " is not recording";
            }
        }

    I am careful not to interrupt the updating process, but I cannot see where I am going wrong. Many thanks.
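    On the listener question, a sketch against the raw FMOD Ex C++ API may help (assumptions: "fmodSystem" and "channels" are hypothetical handles obtained from wherever the players are initialised, and this bypasses the ofSoundPlayer wrapper):

        #include "fmod.hpp"

        void updateSpatialisation(FMOD::System* fmodSystem, FMOD::Channel* channels[], int channelCount) {
            // FMOD keeps ONE listener per system by default (index 0), so updating
            // "a listener per sound" just overwrites the same listener repeatedly,
            // which matches the symptom that only the last update takes effect.
            // Fix the listener once and move the sound sources instead.
            FMOD_VECTOR listenerPos = { 0.0f, 0.0f, 0.0f };
            FMOD_VECTOR zero        = { 0.0f, 0.0f, 0.0f };
            FMOD_VECTOR forward     = { 0.0f, 0.0f, 1.0f };
            FMOD_VECTOR up          = { 0.0f, 1.0f, 0.0f };
            fmodSystem->set3DListenerAttributes(0, &listenerPos, &zero, &forward, &up);

            for (int i = 0; i < channelCount; i++) {
                FMOD_VECTOR soundPos = { 3.0f, 3.0f, 2.0f };  // per-layer position
                channels[i]->set3DAttributes(&soundPos, &zero);
            }
        }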

    Read the article

  • About global.asax and the events there

    - by eski
    So what I'm trying to understand is the whole set of global.asax events. I'm writing a simple counter that records website visits, using MSSQL. Basically I have two ints:

        totalNumberOfUsers - the total number of visits since the beginning.
        currentNumberOfUsers - the number of users viewing the site at the moment.

    The way I understand the global.asax events, every time someone comes to the site, Session_Start is fired once (so once per user), while Application_Start is fired only once, the first time someone comes to the site. Going by this, I have my global.asax file here:

        <script runat="server">
            string connectionstring = ConfigurationManager.ConnectionStrings["ConnectionString1"].ConnectionString;

            void Application_Start(object sender, EventArgs e)
            {
                // Code that runs on application startup
                Application.Lock();
                Application["currentNumberOfUsers"] = 0;
                Application.UnLock();
                string sql = "Select c_hit from v_counter where (id=1)";
                SqlConnection connect = new SqlConnection(connectionstring);
                SqlCommand cmd = new SqlCommand(sql, connect);
                cmd.Connection.Open();
                SqlDataReader reader = cmd.ExecuteReader();
                while (reader.Read())
                {
                    Application.Lock();
                    Application["totalNumberOfUsers"] = reader.GetInt32(0);
                    Application.UnLock();
                }
                reader.Close();
                cmd.Connection.Close();
            }

            void Application_End(object sender, EventArgs e)
            {
                // Code that runs on application shutdown
            }

            void Application_Error(object sender, EventArgs e)
            {
                // Code that runs when an unhandled error occurs
            }

            void Session_Start(object sender, EventArgs e)
            {
                // Code that runs when a new session is started
                Application.Lock();
                Application["totalNumberOfUsers"] = (int)Application["totalNumberOfUsers"] + 1;
                Application["currentNumberOfUsers"] = (int)Application["currentNumberOfUsers"] + 1;
                Application.UnLock();
                string sql = "UPDATE v_counter SET c_hit = @hit WHERE c_type = 'totalNumberOfUsers'";
                SqlConnection connect = new SqlConnection(connectionstring);
                SqlCommand cmd = new SqlCommand(sql, connect);
                SqlParameter hit = new SqlParameter("@hit", SqlDbType.Int);
                hit.Value = Application["totalNumberOfUsers"];
                cmd.Parameters.Add(hit);
                cmd.Connection.Open();
                cmd.ExecuteNonQuery();
                cmd.Connection.Close();
            }

            void Session_End(object sender, EventArgs e)
            {
                // Code that runs when a session ends.
                // Note: The Session_End event is raised only when the sessionstate mode
                // is set to InProc in the Web.config file. If session mode is set to StateServer
                // or SQLServer, the event is not raised.
                Application.Lock();
                Application["currentNumberOfUsers"] = (int)Application["currentNumberOfUsers"] - 1;
                Application.UnLock();
            }
        </script>

    In the Page_Load I have this:

        protected void Page_Load(object sender, EventArgs e)
        {
            l_current.Text = Application["currentNumberOfUsers"].ToString();
            l_total.Text = Application["totalNumberOfUsers"].ToString();
        }

    So if I understand this correctly, every time someone comes to the site, both currentNumberOfUsers and totalNumberOfUsers are incremented by 1, and when the session is over, currentNumberOfUsers is decremented by 1. If I visit the site with 3 types of browsers on the same computer, I should get 3 hits on both counters; doing this again hours later, I should have 3 in current and 6 in total, right? The way it's working right now, current goes up to 2, and total is incremented on every postback in IE and Chrome, but not in Firefox. And one last thing: is this the same thing?

        Application["value"] = 0;
        value = Application["value"];
        // OR
        Application.Set("Value", 0);
        Value = Application.Get("Value");
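    One detail that often explains the "incremented on every postback" symptom (an assumption worth testing here, not a certain diagnosis): with cookie-based session state, ASP.NET issues a fresh session ID, and can therefore fire Session_Start again, on every request until something is actually stored in the Session object. A one-line sketch of the check inside the existing handler:

        void Session_Start(object sender, EventArgs e)
        {
            // Touch the session so ASP.NET pins the session ID to its cookie;
            // until Session holds data, each request may get a new ID.
            Session["init"] = 0;

            // ...then the counter logic from the question, unchanged.
        }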

    Read the article

  • Error using Session in IIS

    - by flashnik
    After deploying my website to IIS, I get the following error message when trying to access session state: "Session state can only be used when enableSessionState is set to true, either in a configuration file or in the Page directive. Please also make sure that System.Web.SessionStateModule or a custom session state module is included in the <httpModules> section in the application configuration." I access the session in the Page_Load or PreRender events (I tried both). With the VS development server it works without a problem. I tried both InProc and out-of-process session-state storage, and both one and multiple worker processes. I also added enableSessionState="true" to my page explicitly. Here is part of the web.config:

        <system.web>
          <globalization culture="ru-RU" uiCulture="ru-RU" />
          <compilation debug="true" defaultLanguage="c#">
            <assemblies>
              <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
              <add assembly="System.Data.DataSetExtensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
              <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
              <add assembly="System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
              <add assembly="System.Web.Extensions.Design, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
              <add assembly="System.Design, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B03F5F7F11D50A3A" />
              <add assembly="System.Windows.Forms, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
            </assemblies>
          </compilation>
          <pages enableEventValidation="false" enableSessionState="true">
            <controls>
              <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
              <add tagPrefix="asp" namespace="System.Web.UI.WebControls" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
            </controls>
          </pages>
          <httpHandlers>
            <remove verb="*" path="*.asmx" />
            <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
            <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
            <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false" />
          </httpHandlers>
          <httpModules>
            <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
            <add name="SearchUrlRewriter" type="Synonymizer.SearchUrlRewriter, Synonymizer, Version=1.0.0.0, Culture=neutral" />
            <add name="Session" type="System.Web.SessionStateModule" />
          </httpModules>
          <sessionState cookieless="UseCookies" cookieName="My_SessionId" mode="InProc" stateNetworkTimeout="5" />
          <customErrors mode="Off" />
        </system.web>

    What else do I need to do to make it work?
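    If the application pool runs under the IIS 7 integrated pipeline (an assumption, since the IIS version isn't stated), modules are read from system.webServer rather than from system.web/httpModules, so a sketch of the equivalent registration would be:

        <system.webServer>
          <modules>
            <remove name="Session" />
            <add name="Session" type="System.Web.SessionStateModule" />
          </modules>
        </system.webServer>

    Under classic mode, the existing httpModules entry is the one that counts.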

    Read the article

  • Strange performance behaviour for 64 bit modulo operation

    - by codymanix
    The last three of these method calls take approximately double the time of the first four. The only difference is that their arguments no longer fit in an int. But should that matter? The parameter is declared as long, so the calculation should use longs either way. Does the modulo operation use a different algorithm for numbers larger than int.MaxValue? I am using an AMD Athlon64 3200+, WinXP SP3 and VS2008.

        Stopwatch sw = new Stopwatch();
        TestLong(sw, int.MaxValue - 3l);
        TestLong(sw, int.MaxValue - 2l);
        TestLong(sw, int.MaxValue - 1l);
        TestLong(sw, int.MaxValue);
        TestLong(sw, int.MaxValue + 1l);
        TestLong(sw, int.MaxValue + 2l);
        TestLong(sw, int.MaxValue + 3l);
        Console.ReadLine();

        static void TestLong(Stopwatch sw, long num)
        {
            long n = 0;
            sw.Reset();
            sw.Start();
            for (long i = 3; i < 20000000; i++)
            {
                n += num % i;
            }
            sw.Stop();
            Console.WriteLine(sw.Elapsed);
        }

    EDIT: I have now tried the same in C, and the issue does not occur there: all modulo operations take the same time, in release and in debug mode, with and without optimizations turned on.

        #include "stdafx.h"
        #include "time.h"
        #include "limits.h"

        static void TestLong(long long num)
        {
            long long n = 0;
            clock_t t = clock();
            for (long long i = 3; i < 20000000LL * 100; i++)
            {
                n += num % i;
            }
            printf("%d - %lld\n", (int)(clock() - t), n);
        }

        int main()
        {
            printf("%i %i %i %i\n\n", (int)sizeof(int), (int)sizeof(long), (int)sizeof(long long), (int)sizeof(void*));
            TestLong(3);
            TestLong(10);
            TestLong(131);
            TestLong(INT_MAX - 1L);
            TestLong(UINT_MAX + 1LL);
            TestLong(INT_MAX + 1LL);
            TestLong(LLONG_MAX - 1LL);
            getchar();
            return 0;
        }

    EDIT 2: Thanks for the great suggestions. I found that neither .NET nor C (in debug as well as in release mode) uses a single CPU instruction to calculate the remainder; instead they call a helper function. In the C program I could get its name, _allrem. The file also carried full source comments, from which I learned that this algorithm special-cases 32-bit divisors, whereas the .NET version special-cases 32-bit dividends. I also found that the performance of the C program really depends only on the value of the divisor, not the dividend, while another test showed that the remainder function in the .NET program depends on both. BTW: even simple additions of long long values are compiled to consecutive add and adc instructions, so even if my processor calls itself 64-bit, it really isn't :(

    EDIT 3: I have now run the C app on a Windows 7 x64 edition, compiled with Visual Studio 2010. Funnily enough, the performance behaviour stays the same, although now (I checked the assembly source) true 64-bit instructions are used.
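    To separate the two effects described in EDIT 2, a small C# probe in the spirit of the original test can fix the dividend range and sweep the divisor across the 32-bit boundary (the loop bound and sample divisors here are arbitrary choices, not taken from the original measurements):

        using System;
        using System.Diagnostics;

        static class RemainderProbe
        {
            static void Main()
            {
                // Sweep the divisor; the dividends stay in the same fixed range,
                // so any timing difference tracks the divisor alone.
                long[] divisors = { 131, int.MaxValue - 1L, int.MaxValue + 1L };
                foreach (long divisor in divisors)
                {
                    Stopwatch sw = Stopwatch.StartNew();
                    long n = 0;
                    for (long dividend = 1; dividend < 20000000; dividend++)
                    {
                        n += dividend % divisor;
                    }
                    sw.Stop();
                    // Printing n keeps the loop from being optimised away.
                    Console.WriteLine("divisor {0}: {1} ({2})", divisor, sw.Elapsed, n);
                }
            }
        }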

    Read the article

  • How to reduce virtual memory by optimising my PHP code?

    - by iCeR
    My current code (see below) uses 147 MB of virtual memory! My provider has allocated 100 MB by default, and the process is killed once run, causing an internal error. The code uses curl_multi and must be able to loop through more than 150 iterations while keeping virtual memory to a minimum. The code below is set to only 150 iterations and still causes the internal server error; at 90 iterations the issue does not occur. How can I adjust my code to lower the resource use / virtual memory? Thanks!

        <?php
        function udate($format, $utimestamp = null) {
            if ($utimestamp === null)
                $utimestamp = microtime(true);
            $timestamp = floor($utimestamp);
            $milliseconds = round(($utimestamp - $timestamp) * 1000);
            return date(preg_replace('`(?<!\\\\)u`', $milliseconds, $format), $timestamp);
        }

        $url = 'https://www.testdomain.com/';
        $curl_arr = array();
        $master = curl_multi_init();

        for ($i = 0; $i < 150; $i++) {
            $curl_arr[$i] = curl_init();
            curl_setopt($curl_arr[$i], CURLOPT_URL, $url);
            curl_setopt($curl_arr[$i], CURLOPT_RETURNTRANSFER, 1);
            curl_setopt($curl_arr[$i], CURLOPT_SSL_VERIFYHOST, FALSE);
            curl_setopt($curl_arr[$i], CURLOPT_SSL_VERIFYPEER, FALSE);
            curl_multi_add_handle($master, $curl_arr[$i]);
        }

        do {
            curl_multi_exec($master, $running);
        } while ($running > 0);

        for ($i = 0; $i < 150; $i++) {
            $results = curl_multi_getcontent($curl_arr[$i]);
            $results = explode("<br>", $results);
            echo $results[0];
            echo "<br>";
            echo $results[1];
            echo "<br>";
            echo udate('H:i:s:u');
            echo "<br><br>";
            usleep(100000);
        }
        ?>

    Server details:

        Processors: 8, all identical (Vendor GenuineIntel, Intel(R) Xeon(R) CPU E5405 @ 2.00GHz, Speed 1995.120 MHz, Cache 6144 KB)

        Memory: Memory for crash kernel (0x0 to 0x0) not within permissible range
                Memory: 8302344k/9175040k available (2176k kernel code, 80272k reserved, 901k data, 228k init, 7466304k highmem)

        System: Linux server3.server.com 2.6.18-194.17.1.el5PAE #1 SMP Wed Sep 29 13:31:51 EDT 2010 i686 i686 i386 GNU/Linux

        Physical disks:
            SCSI device sda: 1952448512 512-byte hdwr sectors (999654 MB); Write Protect is off; Mode Sense: 03 00 00 08; drive cache: write back
            sd 0:1:0:0: Attached scsi disk sda; sd 4:0:0:0: Attached scsi removable disk sdb
            sd 0:1:0:0: Attached scsi generic sg4 type 0; sd 4:0:0:0: Attached scsi generic sg7 type 0

        Current memory usage:
                      total       used       free     shared    buffers     cached
            Mem:    8306672    7847384     459288          0     487912    6444548
            -/+ buffers/cache: 914924    7391748
            Swap:   4095992        496    4095496
            Total: 12402664    7847880    4554784

        Current disk usage:
            Filesystem                       Size  Used Avail Use% Mounted on
            /dev/mapper/VolGroup00-LogVol00  898G  307G  546G  36% /
            /dev/sda1                         99M   19M   76M  20% /boot
            none                             4.0G     0  4.0G   0% /dev/shm
            /var/tmpMnt                      4.0G  1.8G  2.0G  48% /tmp
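    A batching rework is one plausible way to cap the footprint (a sketch under stated assumptions: $url is the placeholder from the question, the batch size of 30 is arbitrary, and the explode/echo post-processing is omitted for brevity). Each handle is removed and closed as soon as its body has been read, so curl's per-handle buffers are released instead of accumulating across all 150 requests:

        <?php
        $url = 'https://www.testdomain.com/';
        $total = 150;
        $batch = 30;

        for ($offset = 0; $offset < $total; $offset += $batch) {
            $master = curl_multi_init();
            $handles = array();
            for ($i = 0; $i < min($batch, $total - $offset); $i++) {
                $ch = curl_init($url);
                curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
                curl_multi_add_handle($master, $ch);
                $handles[] = $ch;
            }
            do {
                curl_multi_exec($master, $running);
                curl_multi_select($master); // block until activity instead of spinning
            } while ($running > 0);
            foreach ($handles as $ch) {
                echo curl_multi_getcontent($ch), "<br>";
                curl_multi_remove_handle($master, $ch); // release this handle's buffers
                curl_close($ch);
            }
            curl_multi_close($master);
        }
        ?>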

    Read the article

  • WCF - Define multiple services in a single APP.Config file?

    - by Goober
    Scenario: I have a Windows Forms application, and I want to use two different WCF services that are in no way connected. However, I'm not sure how to go about defining both services in my app.config file. From what I have read, it is possible to do what I have done below, but I cannot be sure that the syntax is correct or that all the necessary tags are present, so I need some clarification.

    Question: Is the below the correct way to set up two services in A SINGLE APP.CONFIG FILE? I.e.:

        <configuration>
          <system.serviceModel>
            <services>
              <service>
                <!-- SERVICE ONE -->
                <endpoint>
                </endpoint>
                <binding>
                </binding>
              </service>
              <service>
                <!-- SERVICE TWO -->
                <endpoint>
                </endpoint>
                <binding>
                </binding>
              </service>
            </services>
          </system.serviceModel>
        </configuration>

    CODE:

        <configuration>
          <system.serviceModel>
            <services>
              <!-- SERVICE ONE -->
              <service>
                <endpoint address="" binding="netTcpBinding" bindingConfiguration="tcpServiceEndPoint"
                          contract="ListenerService.IListenerService" name="tcpServiceEndPoint" />
                <binding name="tcpServiceEndPoint" closeTimeout="00:01:00" openTimeout="00:01:00"
                         receiveTimeout="00:10:00" sendTimeout="00:01:00" transactionFlow="false"
                         transferMode="Buffered" transactionProtocol="OleTransactions"
                         hostNameComparisonMode="StrongWildcard" listenBacklog="10"
                         maxBufferPoolSize="524288" maxBufferSize="65536" maxConnections="10"
                         maxReceivedMessageSize="65536">
                  <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                                maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                  <reliableSession ordered="true" inactivityTimeout="00:05:00" enabled="true" />
                  <security mode="None">
                    <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign" />
                    <message clientCredentialType="Windows" />
                  </security>
                </binding>
              </service>
              <!-- SERVICE TWO -->
              <service>
                <endpoint address="" binding="netTcpBinding" contract="UploadObjects.IResponseService"
                          bindingConfiguration="TransactedBinding" name="UploadObjects.ResponseService" />
                <binding name="TransactedBinding">
                  <security mode="None" />
                </binding>
              </service>
            </services>
          </system.serviceModel>
        </configuration>

    EDIT: What do the behaviours represent? How do they relate to the service definitions?
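    For what it's worth, here is a sketch of how such a configuration is usually laid out (attribute lists trimmed; the contract names are copied from the question, the service names are guesses at intent): <binding> elements live under <bindings>, not inside <service>, and each endpoint refers to its binding by name via bindingConfiguration:

        <configuration>
          <system.serviceModel>
            <bindings>
              <netTcpBinding>
                <binding name="tcpServiceEndPoint" maxReceivedMessageSize="65536">
                  <security mode="None" />
                </binding>
                <binding name="TransactedBinding">
                  <security mode="None" />
                </binding>
              </netTcpBinding>
            </bindings>
            <services>
              <service name="ListenerService.ListenerService">
                <endpoint address="" binding="netTcpBinding" bindingConfiguration="tcpServiceEndPoint"
                          contract="ListenerService.IListenerService" />
              </service>
              <service name="UploadObjects.ResponseService">
                <endpoint address="" binding="netTcpBinding" bindingConfiguration="TransactedBinding"
                          contract="UploadObjects.IResponseService" />
              </service>
            </services>
          </system.serviceModel>
        </configuration>

    As for behaviours: they are named bundles of service- or endpoint-level settings (metadata publishing, serializer quotas, and so on) that a service or endpoint opts into via its behaviorConfiguration attribute, in the same way an endpoint references a binding.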

    Read the article

  • Break in Class Module vs. Break on Unhandled Errors (VB6 Error Trapping, Options Setting in IDE)

    - by Erx_VB.NExT.Coder
    Basically, I'm trying to understand the difference between the "Break in Class Module" and "Break on Unhandled Errors" options that appear in the Visual Basic 6.0 IDE under the following path: Tools --> Options --> General --> Error Trapping. The three options are:

        1. Break on All Errors
        2. Break in Class Module
        3. Break on Unhandled Errors

    Now, apparently, according to MSDN, the second option (Break in Class Module) really just means "Break on Unhandled Errors in Class Modules". This option also appears to be set by default (i.e. it is the out-of-the-box setting). What I am trying to figure out is: if I have the second option selected, do I get the third option (Break on Unhandled Errors) for free? That is, does it come included by default for all scenarios outside of class modules? To advise, I don't have any class modules in my currently active project, though I do have .bas modules. Also, is it possible that by "Class Modules" they may be referring to normal .bas modules as well? (This is my second sub-question.)

    Basically, I just want the setting that ensures there won't be any surprises once the exe is released. I want as many errors as possible to be displayed while I am developing, and none to be displayed in release mode. Normally, I have two types of On Error Resume Next on my forms where there isn't explicit error handling; they are as follows:

        On Error Resume Next ' REQUIRED
        On Error Resume Next ' NOT REQUIRED

    The required ones are for things like checking whether an array has any length: if a call to its UBound errors out, it has no length; if it returns a value of 0 or more, it does have length (and therefore exists). These error statements need to remain active even while I am developing. The NOT REQUIRED ones, however, shouldn't remain active while I am developing, so I keep them all commented out to ensure I catch all the errors that exist. Once I am ready to release the exe, I do a CTRL+H to find all occurrences of:

        'On Error Resume Next ' NOT REQUIRED

    (you may have noticed they are commented out) and replace them with the uncommented version:

        On Error Resume Next ' NOT REQUIRED

    so that in release mode, any leftover errors are not shown to users. For more on the MSDN description of the three options (which I've read twice and still don't find adequate), you can visit the following link:

        http://webcache.googleusercontent.com/search?q=cache:yUQZZK2n2IYJ:support.microsoft.com/kb/129876&hl=en&lr=lang_en%7Clang_tr&gl=au&tbs=lr:lang_1en%7Clang_1tr&prmd=imvns&strip=1

    I'm also interested in hearing your thoughts if you feel like volunteering them (this would be my tentative, totally optional third sub-question: your thoughts on fall-back error handling techniques). Just to summarize, the first two questions were: do we get option 3 included in all non-class scenarios if we choose option 2? And is it possible that when they use the term "Class Module" they may be referring to .bas modules as well? (Since a .bas module is really just a class module that is pre-instantiated in the background during start-up.) Thank you.
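    On the find-and-replace workflow: VB6's conditional compilation can switch the fall-back handlers per build instead (a sketch; DEBUGMODE is a made-up constant you would set under Project > Properties > Make > Conditional Compilation Arguments):

        #If DEBUGMODE = 0 Then
            On Error Resume Next ' NOT REQUIRED: only compiled into release builds
        #End If

    With DEBUGMODE = 1 during development the line is compiled away, so every stray error still surfaces; setting it to 0 for the release build swallows them without touching the source.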

    Read the article
