Search Results

Search found 6967 results on 279 pages for 'multi channel'.


  • Arrow ECS Oracle OpenWorld update

    - by mseika
    A date to mark in your diaries now! On Tuesday 23rd October we will be holding our post-Oracle OpenWorld event. Covering all the important announcements from Oracle OpenWorld, this must-attend event will be held in the Royal Exchange, London. The full agenda and speaker line-up is being finalised, but we will cover all the major strategy and product announcements from Oracle OpenWorld, the FY13 Channel Strategy and Partnering with Arrow ECS.

    - Oracle OpenWorld: a Channel Perspective, David Tweddle, Head of UK Alliance and Channel
    - Hardware Announcements, Christopher Lindsay, Oracle Hardware EMEA
    - Software Announcements, speaker to be confirmed
    - Arrow ECS Focus and Strategy, Simon Rushbrook, Business Development Manager, Arrow ECS
    - Summary and close, Nick Tinsley, Sales Director

    Who should attend? The event is subject to the Oracle NDA and is aimed at sales and pre-sales personnel within your organisation. The event will commence at 12.30 with lunch; presentations will start at 1.30 and close at 4.30, followed by drinks and networking. Full details to follow shortly, but save the date and register now.

    Date & Time: Tuesday 23rd October 2012, 12.30pm - 4.30pm. Post-event drinks will be served from 4.30pm.
    Location: London Office, Entrance 2, Fourth Floor, The Royal Exchange, London, EC3V 3LN.


  • IRC News from #netbeans on FreeNode

    - by Geertjan
    I joined the #netbeans channel on FreeNode last week and the discussions there are really great. It's so cool to not have the endless back and forth of an e-mail exchange. Instead, you can hammer out a complete solution to a problem while chatting live in the channel. A case in point was yesterday, when someone named 'charmeleon' wanted to create a NetBeans Platform based application that includes the "image" module from the NetBeans IDE sources. That way, he'd have a starting point for his own image-oriented application, since he'd not only have the NetBeans Platform, but also the sources of the "image" module. Had we been communicating via e-mail, it would have taken weeks, at least, to come to a solution. Instead, we hashed it out together live, including some very specific problems that would have been hard to communicate about via e-mail. In the end, I made a movie showing exactly the scenario that charmeleon was interested in. And, right now, in the #netbeans channel, charmeleon said: "NetBeans RCP feels like cheating once you start getting over the hump." I'm sure the fact that the hump was handled within a few hours of chatting on IRC is a big contributor to that impression.


  • YouTube custom thumbnails feature availability

    - by skat
    I've been trying to figure this out on my own for weeks, but now I give up. The 'custom thumbnails' feature on YouTube is such a controversial one, and it has changed so much... so much that even the YouTube FAQ doesn't fully describe its behaviour (as far as I can see). I have a YouTube channel for one of my websites. This YouTube channel is the main marketing force for my website - it brings all the boys to my yard (I mean, website). So I have to use all the hacky-tricky stuff to increase my visibility on YouTube. And damn, those custom thumbnails are giving me a hard time... As far as I understand, this is the current state of the 'custom thumbnail' feature: "If your account is in good standing, you may have the ability to upload custom thumbnails for your video uploads." (c) https://support.google.com/youtube/answer/138008 My channel is in good standing and has more than 50000 views. So why the hell is my account still not eligible for this feature? Anyone have any idea?


  • Issues with MIPS interrupt for tv remote simulator

    - by pred2040
    Hello, I am writing a program for class to simulate a TV remote in a MIPS/SPIM environment. The functions of the program itself are unimportant, as they worked fine before the interrupt, so I have left them all out. The goal is basically to get input from the keyboard by means of an interrupt, store it in $s7 and process it. The interrupt is causing my program to repeatedly spam the errors:

    Exception occurred at PC=0x00400068
    Bad address in data/stack read: 0x00000004
    Exception occurred at PC=0x00400358
    Bad address in data/stack read: 0x00000000

    # program starts here
    .data
    msg_tvworking: .asciiz "tv is working\n"
    msg_sec:       .asciiz "sec -- "
    msg_on:        .asciiz "Power On"
    msg_off:       .asciiz "Power Off"
    msg_channel:   .asciiz " Channel "
    msg_volume:    .asciiz " Volume "
    msg_sleep:     .asciiz " Sleep Timer: "
    msg_dash:      .asciiz "-\n"
    msg_newline:   .asciiz "\n"
    msg_comma:     .asciiz ", "
    array1: .space 400    # 400 bytes of storage for 100 channels
    array2: .space 400    # copy of above for sorting
    var1: .word 0         # 1 if 0-9 is pressed, 0 if not
    var2: .word 0         # stores number of channel (ex. 2-)
    var3: .word 0         # channel timer
    var4: .word 0         # 1 if s pressed once, 2 if twice, 0 if not
    var5: .word 0         # sleep wait timer
    var6: .word 0         # program timer
    var9: .float 0.01     # for channel timings

    .kdata
    var7: .word 10
    var8: .word 11

    .text
    .globl main
    main:
        li $s0, 300
        li $s1, 0      # channel
        li $s2, 50     # volume
        li $s3, 1      # power - 1:on 0:off
        li $s4, 0      # sleep timer - 0:off
        li $s5, 0      # temporary
        li $s6, 0      # length of sleep period
        li $s7, 10000  # current key press
        li $t2, 0      # temp value not needed across calls
        li $t4, 0

        # interrupt setup here
        mfc0 $a0, $12
        ori  $a0, 0xff11
        mtc0 $a0, $12
        lui  $t0, 0xFFFF
        ori  $a0, $0, 2
        sw   $a0, 0($t0)

    mainloop:
        # 1. get external input, and process it
        # input from interrupt is taken from $a2 and placed in $s7 for processing
        beq $a2, $0, next
        lw  $s7, 4($a2)
        li  $a2, 0
        # call the process_input function here
        # jal process_input
    next:
        # 2. check sleep timer
    mainloopnext1:
        # 3. delay for 10ms
        jal delay_10ms
        jal check_timers
        jal channel_time
        # 4. print status
        lw   $s5, var6
        addi $s5, $s5, 1
        sw   $s5, var6
        addi $s0, $s0, -1
        bne  $s0, $0, mainloopnext4
        li   $s0, 300
        jal  status_print
    mainloopnext4:
        j mainloop

        li $v0, 10    # exit
        syscall

    # --------------------------------------------------
    status_print:
    seconds_stat:
    power_stat:
    on_stat:
    off_stat:
    channel_stat:
    volume_stat:
    sleep_stat:
        j $ra
    # --------------------------------------------------
    delay_10ms:
        li $t0, 6000
    delay_10ms_loop:
        addi $t0, $t0, -1
        bne  $t0, $0, delay_10ms_loop
        jr   $ra
    # --------------------------------------------------
    check_timers:
    channel_press:
    sleep_press:
    go_back_press:
    channel_check:
    channel_ignore:
    sleep_check:
    sleep_ignore:
        j $ra
    # ------------------------------------------------
    process_input:
        beq $s7, 112, power
        beq $s7, 117, channel_up
        beq $s7, 100, channel_down
        beq $s7, 108, volume_up
        beq $s7, 107, volume_down
        beq $s7, 115, sleep_init
        beq $s7, 118, history
        bgt $s7, 47, end_range
        jr  $ra
    end_range:
    power:
    on:
    off:
    channel_up:
    over:
    channel_down:
    under:
    channel_message:
    channel_time:
    volume_up:
    volume_down:
    volume_message:
    sleep_init:
    sleep_incr:
    sleep:
    sleep_reset:
    history:
    digit_pad_init:
    digit_pad:
        jr $ra
    # --------------------------------------------
    # interrupt handler here, followed closely from class
    .ktext 0x80000180
        .set noat
        move $k1, $at
        .set at
        sw   $v0, var7
        sw   $a0, var8
        mfc0 $k0, $13
        srl  $a0, $k0, 2
        andi $a0, $a0, 0x1f
        bne  $a0, $zero, no_io
        lui  $v0, 0xFFFF
        lw   $a2, 4($v0)    # keyboard data placed in $a2
    no_io:
        mtc0 $0, $13
        mfc0 $k0, $12
        andi $k0, 0xfffd
        ori  $k0, 0x11
        mtc0 $k0, $12
        lw   $v0, var7
        lw   $a0, var8
        .set noat
        move $at, $k1
        .set at
        eret

    Thanks in advance.


  • oauth_verifier is not passed using DotNetOpenAuth's Webconsumer

    - by BozoJoe
    I receive back a good oauth_verifier value from the server, but it is not being passed on via the ProcessUserAuthorization call to the access_token endpoint. I'm using DotNetOpenAuth 3.3.1 and the WebConsumer implementation. The server I'm working with is using OAuth 1.0a, not 1.0.1. Do I need to force DotNetOpenAuth to use 1.0a?

    2010-01-16 13:19:44,343 [5] DEBUG DotNetOpenAuth.Messaging.Channel [(null)] <(null)> - After binding element processing, the received UserAuthorizationResponse (1.0.1) message is:
        oauth_verifier: dEz9lE9AA1gcdr6oCbmD
        oauth_token: vauHNVOCITlbGCuqycWn
    2010-01-16 13:19:44,346 [5] DEBUG DotNetOpenAuth.Messaging.Channel [(null)] <(null)> - Preparing to send AuthorizedTokenRequest (1.0) message.
    2010-01-16 13:19:44,346 [5] DEBUG DotNetOpenAuth.Messaging.Bindings [(null)] <(null)> - Binding element DotNetOpenAuth.OAuth.ChannelElements.OAuthHttpMethodBindingElement applied to message.
    2010-01-16 13:19:44,346 [5] DEBUG DotNetOpenAuth.Messaging.Bindings [(null)] <(null)> - Binding element DotNetOpenAuth.Messaging.Bindings.StandardReplayProtectionBindingElement applied to message.
    2010-01-16 13:19:44,346 [5] DEBUG DotNetOpenAuth.Messaging.Bindings [(null)] <(null)> - Binding element DotNetOpenAuth.Messaging.Bindings.StandardExpirationBindingElement applied to message.
    2010-01-16 13:19:44,346 [5] DEBUG DotNetOpenAuth.Messaging.Channel [(null)] <(null)> - Applying secrets to message to prepare for signing or signature verification.
    2010-01-16 13:19:44,348 [5] DEBUG DotNetOpenAuth.Messaging.Bindings [(null)] <(null)> - Signing AuthorizedTokenRequest message using HMAC-SHA1.
    2010-01-16 13:19:44,349 [5] DEBUG DotNetOpenAuth.Messaging.Bindings [(null)] <(null)> - Constructed signature base string: GET&http%3A%2F%2Fx-staging.indivo.org%3A8000%2Foauth%2Faccess_token&oauth_consumer_key%3Doak%26oauth_nonce%3DgPersiZV%26oauth_signature_method%3DHMAC-SHA1%26oauth_timestamp%3D1263676784%26oauth_token%3DvauHNVOCITlbGCuqycWn%26oauth_version%3D1.0
    2010-01-16 13:19:44,349 [5] DEBUG DotNetOpenAuth.Messaging.Bindings [(null)] <(null)> - Binding element DotNetOpenAuth.OAuth.ChannelElements.SigningBindingElementChain applied to message.
    2010-01-16 13:19:44,351 [5] INFO DotNetOpenAuth.Messaging.Channel [(null)] <(null)> - Prepared outgoing AuthorizedTokenRequest (1.0) message for http://x-staging.indivo.org:8000/oauth/access_token:
        oauth_token: vauHNVOCITlbGCuqycWn
        oauth_consumer_key: XXXXXXmyComsumerKeyXXXXXX
        oauth_nonce: gPersiZV
        oauth_signature_method: HMAC-SHA1
        oauth_signature: xNynvr2oFlqtdoOKOl2ETiiTLGY=
        oauth_version: 1.0
        oauth_timestamp: 1263676784
    2010-01-16 13:19:44,351 [5] DEBUG DotNetOpenAuth.Messaging.Channel [(null)] <(null)> - Sending AuthorizedTokenRequest request.
    2010-01-16 13:19:44,351 [5] DEBUG DotNetOpenAuth.Http [(null)] <(null)> - HTTP GET http://x-staging.indivo.org:8000/oauth/access_token
    2010-01-16 13:20:34,657 [5] ERROR DotNetOpenAuth.Http [(null)] <(null)> - WebException from http://x-staging.indivo.org:8000/oauth/access_token: <h4>Internal Server Error</h4>

    A pastebin link to the log4net log
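    One avenue the question itself raises, explicitly pinning the consumer to 1.0a, can be sketched as follows. This is a minimal, unverified sketch: it assumes the ProtocolVersion property on ServiceProviderDescription in DotNetOpenAuth 3.3-era APIs, the request_token and authorize endpoint URIs are placeholders, and only the access_token URI appears in the log above.

    // Hypothetical sketch: pin the consumer to OAuth 1.0a via the service description.
    // ProtocolVersion and the first two endpoint URIs are assumptions, not taken from the log.
    using DotNetOpenAuth.Messaging;
    using DotNetOpenAuth.OAuth;
    using DotNetOpenAuth.OAuth.ChannelElements;

    static class ConsumerSetup
    {
        public static WebConsumer CreateConsumer(IConsumerTokenManager tokenManager)
        {
            var description = new ServiceProviderDescription
            {
                ProtocolVersion = ProtocolVersion.V10a, // assumed property: request 1.0a handling
                RequestTokenEndpoint = new MessageReceivingEndpoint(
                    "http://x-staging.indivo.org:8000/oauth/request_token",  // placeholder
                    HttpDeliveryMethods.PostRequest),
                UserAuthorizationEndpoint = new MessageReceivingEndpoint(
                    "http://x-staging.indivo.org:8000/oauth/authorize",      // placeholder
                    HttpDeliveryMethods.GetRequest),
                AccessTokenEndpoint = new MessageReceivingEndpoint(
                    "http://x-staging.indivo.org:8000/oauth/access_token",   // from the log above
                    HttpDeliveryMethods.GetRequest),
                TamperProtectionElements = new ITamperProtectionChannelBindingElement[]
                {
                    new HmacSha1SigningBindingElement() // matches the HMAC-SHA1 signing in the log
                },
            };
            return new WebConsumer(description, tokenManager);
        }
    }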


  • WCF Timeout issue - should there even be a socket connection?

    - by stiank81
    I have a .NET application which is split into a client and a server side. The communication between them is handled using WCF. I'm not using the automagic service references; instead I've built the connection manually, as described in the screencast by Miguel Castro. Summarized, this means that I create a console application on the server side that holds ServiceHost objects for the different services:

    var myServiceHost = new System.ServiceModel.ServiceHost(typeof(MyService), new Uri("net.tcp://localhost:8002"));
    myServiceHost.Open();

    And on the client side I have service proxies creating channels using the ChannelFactory:

    IMyService proxy = new ChannelFactory<IMyService>("MyServiceEndpoint").CreateChannel();

    The client and server side share the service contract defined in the interface IMyService. Another advantage is that I get minimal App.config files, without all the autogenerated stuff created through the Service References. An example from the client side:

    <?xml version="1.0"?>
    <configuration>
      <system.serviceModel>
        <client>
          <endpoint address="net.tcp://localhost:8002/MyEndpoint"
                    binding="netTcpBinding"
                    contract="IMyService"
                    name="MyServiceEndpoint"/>
        </client>
      </system.serviceModel>
    </configuration>

    So, to my problem. I create the proxy once, and it holds a channel all the way through the application. However, if I leave the application unused for a few minutes, the channel times out, and I get the following exception:

    The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:00:59.9979998'.

    How do I prevent this? I'm assuming I need to specify a higher timeout in my configuration? But I don't want it to ever time out. On the other hand, I don't want a socket connection! Do I need one? I thought I could go connectionless with WCF... What's the permanent solution and best practice for solving this?

    - Set the timeout to "never"?
    - Create a new channel for each request? I'm assuming there is some overhead in creating the channel... (see the sketch after this list)
    - Increase the timeout to e.g. 5 minutes and create a new channel if the connection did time out?
    - Make it connectionless somehow? (Without the overhead of creating channels...)
    - Something else?
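    A minimal sketch of the "recreate the channel when it has faulted" option from the list above. The ChannelFactory/CreateChannel calls are standard WCF; the wrapper class itself is hypothetical and only illustrative:

    // Hypothetical wrapper: keep the (expensive) factory for the app's lifetime,
    // recreate the (cheap) channel whenever the previous one has faulted or closed.
    using System;
    using System.ServiceModel;

    public class ReconnectingProxy<T> where T : class
    {
        private readonly ChannelFactory<T> _factory; // expensive: build once
        private T _channel;                          // cheap: recreate on fault
        private readonly object _sync = new object();

        public ReconnectingProxy(string endpointConfigurationName)
        {
            _factory = new ChannelFactory<T>(endpointConfigurationName);
        }

        public T Channel
        {
            get
            {
                lock (_sync)
                {
                    var comm = _channel as ICommunicationObject;
                    if (comm != null &&
                        (comm.State == CommunicationState.Faulted ||
                         comm.State == CommunicationState.Closed))
                    {
                        comm.Abort();   // discard the dead channel without waiting
                        _channel = null;
                    }
                    if (_channel == null)
                    {
                        _channel = _factory.CreateChannel();
                    }
                    return _channel;
                }
            }
        }
    }

    Usage would look like: var proxy = new ReconnectingProxy<IMyService>("MyServiceEndpoint"); then proxy.Channel for each call. The factory is the costly object to construct; channels created from an existing factory are comparatively cheap, which is why caching the factory and recreating channels on fault is a common compromise between "never time out" and "new channel per request".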


  • SimpleXML not pulling correct link

    - by kylex
    My code does not pull the proper link for the item. Instead it's getting the previous item's link. Any suggestions? Thanks!

    <?php
    // Load the XML data from the specified file name.
    // The second argument (NULL) allows us to specify additional libxml parameters,
    // we don't need this so we'll leave it as NULL. The third argument however is
    // important as it informs simplexml to handle the first parameter as a file name
    // rather than a XML formatted string.
    $pFile = new SimpleXMLElement("http://example.blogspot.com/feeds/posts/default?alt=rss", null, true);

    // Now that we've loaded our XML file we can begin to parse it.
    // We know that a channel element should be available within the
    // document so we begin by looping through each channel
    foreach ($pFile->channel as $pChild)
    {
        // Now we want to loop through the items inside this channel
        foreach ($pFile->channel->item as $pItem)
        {
            // If this item has child nodes as it should,
            // loop through them and print out the data
            foreach ($pItem->children() as $pChild)
            {
                // We can check the name of this node using the getName() method.
                // We can then use this information, to, for example, embolden
                // the title or format a link
                switch ($pChild->getName())
                {
                    case 'pubDate':
                        $date = date('l, F d, Y', strtotime($pChild));
                        echo "<p class=\"blog_time\">$date</p>";
                        break;
                    case 'link':
                        $link = $pChild;
                        break;
                    case 'title':
                        echo "<p class=\"blog_title\"><a href=\"$link\">$pChild</a></p>";
                        break;
                    case 'description':
                        // echo substr(strip_tags($pChild), 0 , 270) . "...";
                        break;
                    case 'author':
                        echo "";
                        break;
                    case 'category':
                        echo "";
                        break;
                    case 'guid':
                        echo "";
                        break;
                    default:
                        // echo strip_tags($pChild) . "<br />\n";
                        break;
                }
            }
        }
    }
    ?>


  • AudioConverterConvertBuffer problem with insz error

    - by Samuel
    Hi Codegurus, I have a problem with the function AudioConverterConvertBuffer. Basically I want to convert from this format:

    _streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked | 0;
    _streamFormat.mBitsPerChannel = 16;
    _streamFormat.mChannelsPerFrame = 2;
    _streamFormat.mBytesPerPacket = 4;
    _streamFormat.mBytesPerFrame = 4;
    _streamFormat.mFramesPerPacket = 1;
    _streamFormat.mSampleRate = 44100;
    _streamFormat.mReserved = 0;

    to this format:

    _streamFormatOutput.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked | 0; //| kAudioFormatFlagIsNonInterleaved | 0;
    _streamFormatOutput.mBitsPerChannel = 16;
    _streamFormatOutput.mChannelsPerFrame = 1;
    _streamFormatOutput.mBytesPerPacket = 2;
    _streamFormatOutput.mBytesPerFrame = 2;
    _streamFormatOutput.mFramesPerPacket = 1;
    _streamFormatOutput.mSampleRate = 44100;
    _streamFormatOutput.mReserved = 0;

    What I want to do is extract an audio channel (left channel or right channel) from an LPCM buffer based on the input format, to make it mono in the output format. Some of the logic code to convert is as follows. This sets the channel map for the PCM output file:

    SInt32 channelMap[1] = {0};
    status = AudioConverterSetProperty(converter, kAudioConverterChannelMap, sizeof(channelMap), channelMap);

    and this converts the buffer in a while loop:

    AudioBufferList audioBufferList;
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
    for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
        AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
        //frames = audioBuffer.mData;
        NSLog(@"the number of channel for buffer number %d is %d", y, audioBuffer.mNumberChannels);
        NSLog(@"The buffer size is %d", audioBuffer.mDataByteSize);
        numBytesIO = audioBuffer.mDataByteSize;
        convertedBuf = malloc(sizeof(char) * numBytesIO);
        status = AudioConverterConvertBuffer(converter, audioBuffer.mDataByteSize, audioBuffer.mData, &numBytesIO, convertedBuf);
        char errchar[10];
        NSLog(@"status audio converter convert %d", status);
        if (status != 0) {
            NSLog(@"Fail conversion");
            assert(0);
        }
        NSLog(@"Bytes converted %d", numBytesIO);
        status = AudioFileWriteBytes(mRecordFile, YES, countByteBuf, &numBytesIO, convertedBuf);
        NSLog(@"status for writebyte %d, bytes written %d", status, numBytesIO);
        free(convertedBuf);
        if (numBytesIO != audioBuffer.mDataByteSize) {
            NSLog(@"Something wrong in writing");
            assert(0);
        }
        countByteBuf = countByteBuf + numBytesIO;
    }

    But the insz problem is there, so it can't convert. I would appreciate any input. Thanks in advance.


  • Syncing Data with a Server using Silverlight and HTTP Polling Duplex

    - by dwahlin
    Many applications have the need to stay in sync with data provided by a service. Although web applications typically rely on standard polling techniques to check if data has changed, Silverlight provides several interesting options for keeping an application in sync that rely on server "push" technologies. A few years back I wrote several blog posts covering different "push" technologies available in Silverlight that rely on sockets or HTTP Polling Duplex. We recently had a project that looked like it could benefit from pushing data from a server to one or more clients, so I thought I'd revisit the subject and provide some updates to the original code posted.

    If you've worked with AJAX before in web applications, then you know that until browsers fully support web sockets or other duplex (bi-directional communication) technologies, it's difficult to keep applications in sync with a server without relying on polling. The problem with polling is that you have to check for changes on the server on a timed basis, which can often be wasteful and take up unnecessary resources. With server "push" technologies, data can be pushed from the server to the client as it changes. Once the data is received, the client can update the user interface as appropriate. Using "push" technologies allows the client to listen for changes from the server but stay 100% focused on client activities, as opposed to worrying about polling and asking the server if anything has changed.

    Silverlight provides several options for pushing data from a server to a client, including sockets, TCP bindings and HTTP Polling Duplex. Each has its own strengths and weaknesses as far as performance and setup work go, with HTTP Polling Duplex arguably being the easiest to set up and get going. In this article I'll demonstrate how HTTP Polling Duplex can be used in Silverlight 4 applications to push data, and show how you can create a WCF server that provides an HTTP Polling Duplex binding that a Silverlight client can consume.

    What is HTTP Polling Duplex?

    Technologies that allow data to be pushed from a server to a client rely on duplex functionality. Duplex (or bi-directional) communication allows data to be passed in both directions. A client can call a service and the server can call the client. HTTP Polling Duplex (as its name implies) allows a server to communicate with a client without forcing the client to constantly poll the server. It has the benefit of being able to run on port 80, making setup a breeze compared to the other options, which require specific ports to be used and cross-domain policy files to be exposed on port 943 (as with sockets and TCP bindings). Having said that, if you're looking for the best speed possible then sockets and TCP bindings are the way to go. But they're not the only game in town when it comes to duplex communication.

    The first time I heard about HTTP Polling Duplex (initially available in Silverlight 2) I wasn't exactly sure how it was any better than standard polling used in AJAX applications. I read the Silverlight SDK, looked at various resources and generally found the following definition unhelpful as far as understanding the actual benefits that HTTP Polling Duplex provides:

    "The Silverlight client periodically polls the service on the network layer, and checks for any new messages that the service wants to send on the callback channel. The service queues all messages sent on the client callback channel and delivers them to the client when the client polls the service."
    Although the previous definition explains the overall process, it sounds as if standard polling is used. Fortunately, Microsoft's Scott Guthrie provided me with a clearer definition several years back that explains the benefits provided by HTTP Polling Duplex quite well (used with his permission):

    "The [HTTP Polling Duplex] duplex support does use polling in the background to implement notifications – although the way it does it is different than manual polling. It initiates a network request, and then the request is effectively "put to sleep" waiting for the server to respond (it doesn't come back immediately). The server then keeps the connection open but not active until it has something to send back (or the connection times out after 90 seconds – at which point the duplex client will connect again and wait). This way you are avoiding hitting the server repeatedly – but still get an immediate response when there is data to send."

    After hearing Scott's definition the light bulb went on and it all made sense. A client makes a request to a server to check for changes, but instead of the request returning immediately, it parks itself on the server and waits for data. It's kind of like waiting to pick up a pizza at the store. Instead of calling the store over and over to check the status, you sit in the store and wait until the pizza (the request data) is ready. Once it's ready you take it back home (to the client). This technique provides a lot of efficiency gains over standard polling techniques, even though it does use some polling of its own, as a request is initially made from a client to a server.

    So how do you implement HTTP Polling Duplex in your Silverlight applications? Let's take a look at the process, starting with the server.

    Creating an HTTP Polling Duplex WCF Service

    Creating a WCF service that exposes an HTTP Polling Duplex binding is straightforward as far as coding goes. Add some one-way operations into an interface, create a client callback interface and you're ready to go. The most challenging part comes into play when configuring the service to properly support the necessary binding, and that's more of a cut-and-paste operation once you know the configuration code to use.

    To create an HTTP Polling Duplex service you'll need to expose server-side and client-side interfaces and reference the System.ServiceModel.PollingDuplex assembly (located at C:\Program Files (x86)\Microsoft SDKs\Silverlight\v4.0\Libraries\Server on my machine) in the server project. For the demo application I upgraded a basketball simulation service to support the latest polling duplex assemblies. The service simulates a simple basketball game using a Game class and pushes information about the game such as score, fouls, shots and more to the client as the game changes over time.

    Before jumping too far into the game push service, it's important to discuss two interfaces used by the service to communicate in a bi-directional manner. The first is called IGameStreamService and defines the methods/operations that the client can call on the server (see Listing 1). The second is IGameStreamClient, which defines the callback methods that a server can use to communicate with a client (see Listing 2).

    [ServiceContract(Namespace = "Silverlight", CallbackContract = typeof(IGameStreamClient))]
    public interface IGameStreamService
    {
        [OperationContract(IsOneWay = true)]
        void GetTeamData();
    }

    Listing 1. The IGameStreamService interface defines operations that can be called on the server.
    [ServiceContract]
    public interface IGameStreamClient
    {
        [OperationContract(IsOneWay = true)]
        void ReceiveTeamData(List<Team> teamData);

        [OperationContract(IsOneWay = true, AsyncPattern = true)]
        IAsyncResult BeginReceiveGameData(GameData gameData, AsyncCallback callback, object state);

        void EndReceiveGameData(IAsyncResult result);
    }

    Listing 2. The IGameStreamClient interface defines client operations that a server can call.

    The IGameStreamService interface is decorated with the standard ServiceContract attribute but also contains a value for the CallbackContract property. This property is used to define the interface that the client will expose (IGameStreamClient in this example) and use to receive data pushed from the service. Notice that each OperationContract attribute in both interfaces sets the IsOneWay property to true. This means that the operation can be called and passed data as appropriate; however, no data will be passed back. Instead, data will be pushed back to the client as it's available. Looking through the IGameStreamService interface you can see that the client can request team data, whereas the IGameStreamClient interface allows team and game data to be received by the client.

    One interesting point about the IGameStreamClient interface is the inclusion of the AsyncPattern property on the BeginReceiveGameData operation. I initially created this operation as a standard one-way operation and it worked most of the time. However, as I disconnected clients and reconnected new ones, game data wasn't being passed properly. After researching the problem more I realized that because the service could take up to 7 seconds to return game data, things were getting hung up. By setting the AsyncPattern property to true on the BeginReceiveGameData operation and providing a corresponding EndReceiveGameData operation I was able to get around this problem and get everything running properly. I'll provide more details on the implementation of these two methods later in this post.

    Once the interfaces were created I moved on to the game service class. The first order of business was to create a class that implemented the IGameStreamService interface. Since the service can be used by multiple clients wanting game data, I added the ServiceBehavior attribute to the class definition so that I could set its InstanceContextMode to InstanceContextMode.Single (in effect creating a singleton service object). Listing 3 shows the game service class as well as its fields and constructor.

    [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple, InstanceContextMode = InstanceContextMode.Single)]
    public class GameStreamService : IGameStreamService
    {
        object _Key = new object();
        Game _Game = null;
        Timer _Timer = null;
        Random _Random = null;
        Dictionary<string, IGameStreamClient> _ClientCallbacks = new Dictionary<string, IGameStreamClient>();
        static AsyncCallback _ReceiveGameDataCompleted = new AsyncCallback(ReceiveGameDataCompleted);

        public GameStreamService()
        {
            _Game = new Game();
            _Timer = new Timer
            {
                Enabled = false,
                Interval = 2000,
                AutoReset = true
            };
            _Timer.Elapsed += new ElapsedEventHandler(_Timer_Elapsed);
            _Timer.Start();
            _Random = new Random();
        }
    }

    Listing 3. The GameStreamService implements the IGameStreamService interface, which defines a callback contract that allows the service class to push data back to the client.
    By implementing the IGameStreamService interface, GameStreamService must supply a GetTeamData() method, which is responsible for supplying information about the teams that are playing as well as individual players. GetTeamData() also acts as a client subscription method that tracks clients wanting to receive game data. Listing 4 shows the GetTeamData() method.

    public void GetTeamData()
    {
        //Get client callback channel
        var context = OperationContext.Current;
        var sessionID = context.SessionId;
        var currClient = context.GetCallbackChannel<IGameStreamClient>();
        context.Channel.Faulted += Disconnect;
        context.Channel.Closed += Disconnect;

        IGameStreamClient client;
        if (!_ClientCallbacks.TryGetValue(sessionID, out client))
        {
            lock (_Key)
            {
                _ClientCallbacks[sessionID] = currClient;
            }
        }
        currClient.ReceiveTeamData(_Game.GetTeamData());

        //Start timer which when fired sends updated score information to client
        if (!_Timer.Enabled)
        {
            _Timer.Enabled = true;
        }
    }

    Listing 4. The GetTeamData() method subscribes a given client to the game service and returns.

    The key line of code in the GetTeamData() method is the call to GetCallbackChannel<IGameStreamClient>(). This method is responsible for accessing the calling client's callback channel. The callback channel is defined by the IGameStreamClient interface shown earlier in Listing 2 and is used by the server to communicate with the client. Before passing team data back to the client, GetTeamData() grabs the client's session ID and checks if it already exists in the _ClientCallbacks dictionary object used to track clients wanting callbacks from the server. If the client doesn't exist, it adds it into the collection. It then pushes team data from the Game class back to the client by calling ReceiveTeamData(). Since the service simulates a basketball game, a timer is then started if it's not already enabled, which is then used to randomly send data to the client. When the timer fires, game data is pushed down to the client. Listing 5 shows the _Timer_Elapsed() method that is called when the timer fires, as well as the SendGameData() method used to send data to the client.

    void _Timer_Elapsed(object sender, ElapsedEventArgs e)
    {
        int interval = _Random.Next(3000, 7000);
        lock (_Key)
        {
            _Timer.Interval = interval;
            _Timer.Enabled = false;
        }
        SendGameData(_Game.GetGameData());
    }

    private void SendGameData(GameData gameData)
    {
        var cbs = _ClientCallbacks.Where(cb => ((IContextChannel)cb.Value).State == CommunicationState.Opened);
        for (int i = 0; i < cbs.Count(); i++)
        {
            var cb = cbs.ElementAt(i).Value;
            try
            {
                cb.BeginReceiveGameData(gameData, _ReceiveGameDataCompleted, cb);
            }
            catch (TimeoutException texp)
            {
                //Log timeout error
            }
            catch (CommunicationException cexp)
            {
                //Log communication error
            }
        }
        lock (_Key) _Timer.Enabled = true;
    }

    private static void ReceiveGameDataCompleted(IAsyncResult result)
    {
        try
        {
            ((IGameStreamClient)(result.AsyncState)).EndReceiveGameData(result);
        }
        catch (CommunicationException)
        {
            // empty
        }
        catch (TimeoutException)
        {
            // empty
        }
    }

    Listing 5. _Timer_Elapsed() is used to simulate time in a basketball game.

    When _Timer_Elapsed() fires, the SendGameData() method is called, which iterates through the clients wanting to be notified of changes. As each client is identified, its respective BeginReceiveGameData() method is called, which ultimately pushes game data down to the client. Recall that this method was defined in the client callback interface named IGameStreamClient shown earlier in Listing 2.
    Notice that BeginReceiveGameData() accepts _ReceiveGameDataCompleted as its second parameter (an AsyncCallback delegate defined in the service class) and passes the client callback as the third parameter. The initial version of the sample application had a standard ReceiveGameData() method in the client callback interface. However, sometimes the client callbacks would work properly and sometimes they wouldn't, which was a little baffling at first glance. After some investigation I realized that I needed to implement an asynchronous pattern for client callbacks to work properly, since 3 to 7 second delays occur as a result of the timer. Once I added the BeginReceiveGameData() and ReceiveGameDataCompleted() methods everything worked properly, since each call was handled in an asynchronous manner.

    The final task that had to be completed to get the server working properly with HTTP Polling Duplex was adding configuration code into web.config. In the interest of brevity I won't post all of the code here, since the sample application includes everything you need. However, Listing 6 shows the key configuration code to handle creating a custom binding named pollingDuplexBinding and associating it with the service's endpoint.

    <bindings>
      <customBinding>
        <binding name="pollingDuplexBinding">
          <binaryMessageEncoding />
          <pollingDuplex maxPendingSessions="2147483647"
                         maxPendingMessagesPerSession="2147483647"
                         inactivityTimeout="02:00:00"
                         serverPollTimeout="00:05:00"/>
          <httpTransport />
        </binding>
      </customBinding>
    </bindings>
    <services>
      <service name="GameService.GameStreamService" behaviorConfiguration="GameStreamServiceBehavior">
        <endpoint address=""
                  binding="customBinding"
                  bindingConfiguration="pollingDuplexBinding"
                  contract="GameService.IGameStreamService"/>
        <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
      </service>
    </services>

    Listing 6. Configuring an HTTP Polling Duplex binding in web.config and associating an endpoint with it.

    Calling the Service and Receiving "Pushed" Data

    Calling the service and handling data that is pushed from the server is a simple and straightforward process in Silverlight. Since the service is configured with a MEX endpoint and exposes a WSDL file, you can right-click on the Silverlight project and select the standard Add Service Reference item. After the web service proxy is created you may notice that the ServiceReferences.ClientConfig file only contains an empty configuration element instead of the normal configuration elements created when creating a standard WCF proxy. You can certainly update the file if you want to read from it at runtime, but for the sample application I fed the service URI directly to the service proxy as shown next:

    var address = new EndpointAddress("http://localhost.:5661/GameStreamService.svc");
    var binding = new PollingDuplexHttpBinding();
    _Proxy = new GameStreamServiceClient(binding, address);
    _Proxy.ReceiveTeamDataReceived += _Proxy_ReceiveTeamDataReceived;
    _Proxy.ReceiveGameDataReceived += _Proxy_ReceiveGameDataReceived;
    _Proxy.GetTeamDataAsync();

    This code creates the proxy and passes the endpoint address and binding to use to its constructor. It then wires the different receive events to callback methods and calls GetTeamDataAsync(). Calling GetTeamDataAsync() causes the server to store the client in the server-side dictionary collection mentioned earlier so that it can receive data that is pushed.
    As the server-side timer fires and game data is pushed to the client, the user interface is updated as shown in Listing 7. Listing 8 shows the _Proxy_ReceiveGameDataReceived() method responsible for handling the data and calling UpdateGameData() to process it.

    Listing 7. The Silverlight interface. Game data is pushed from the server to the client using HTTP Polling Duplex.

    void _Proxy_ReceiveGameDataReceived(object sender, ReceiveGameDataReceivedEventArgs e)
    {
        UpdateGameData(e.gameData);
    }

    private void UpdateGameData(GameData gameData)
    {
        //Update score
        this.tbTeam1Score.Text = gameData.Team1Score.ToString();
        this.tbTeam2Score.Text = gameData.Team2Score.ToString();

        //Update ball visibility
        if (gameData.Action != ActionsEnum.Foul)
        {
            if (tbTeam1.Text == gameData.TeamOnOffense)
            {
                AnimateBall(this.BB1, this.BB2);
            }
            else //Team 2
            {
                AnimateBall(this.BB2, this.BB1);
            }
        }

        if (this.lbActions.Items.Count > 9)
            this.lbActions.Items.Clear();
        this.lbActions.Items.Add(gameData.LastAction);
        if (this.lbActions.Visibility == Visibility.Collapsed)
            this.lbActions.Visibility = Visibility.Visible;
    }

    private void AnimateBall(Image onBall, Image offBall)
    {
        this.FadeIn.Stop();
        Storyboard.SetTarget(this.FadeInAnimation, onBall);
        Storyboard.SetTarget(this.FadeOutAnimation, offBall);
        this.FadeIn.Begin();
    }

    Listing 8. As the server pushes game data, the client's _Proxy_ReceiveGameDataReceived() method is called to process the data.

    In a real-life application I'd go with a ViewModel class to handle retrieving team data, set up data bindings and handle data that is pushed from the server. However, for the sample application I wanted to focus on HTTP Polling Duplex and keep things as simple as possible.

    Summary

    Silverlight supports three options when duplex communication is required in an application: TCP bindings, sockets and HTTP Polling Duplex. In this post you've seen how HTTP Polling Duplex interfaces can be created and implemented on the server, as well as how they can be consumed by a Silverlight client. HTTP Polling Duplex provides a nice way to "push" data from a server while still allowing the data to flow over port 80 or another port of your choice.

    Sample Application Download


  • MySQL Cluster 7.2: Over 8x Higher Performance than Cluster 7.1

    - by Mat Keep
    Summary

    The scalability enhancements delivered by extensions to multi-threaded data nodes enable MySQL Cluster 7.2 to deliver over 8x higher performance than the previous MySQL Cluster 7.1 release on a recent benchmark.

    What's New in MySQL Cluster 7.2

    MySQL Cluster 7.2 was released as GA (Generally Available) in February 2012, delivering many enhancements to performance on complex queries, a new NoSQL Key/Value API, cross-data center replication and ease-of-use. These enhancements are summarized in the figure below, and detailed in the MySQL Cluster New Features whitepaper.

    Figure 1: Next Generation Web Services, Cross Data Center Replication and Ease-of-Use

    One of the key enhancements delivered in MySQL Cluster 7.2 is the set of extensions made to the multi-threading processes of the data nodes.

    Multi-Threaded Data Node Extensions

    The MySQL Cluster 7.2 data node is now functionally divided into seven thread types:

    1) Local Data Manager threads (ldm). Note – these are sometimes also called LQH threads.
    2) Transaction Coordinator threads (tc)
    3) Asynchronous Replication threads (rep)
    4) Schema Management threads (main)
    5) Network receiver threads (recv)
    6) Network send threads (send)
    7) IO threads

    Each of these thread types is discussed in more detail below.

    MySQL Cluster 7.2 increases the maximum number of LDM threads from 4 to 16. The LDM contains the actual data, which means that when using 16 threads the data is more heavily partitioned (this is automatic in MySQL Cluster). Each LDM thread maintains its own set of data partitions, index partitions and REDO log. The number of LDM partitions per data node is not dynamically configurable; it is possible, however, to map more than one partition onto each LDM thread, providing flexibility in modifying the number of LDM threads.

    The TC domain stores the state of in-flight transactions. This means that every new transaction can easily be assigned to a new TC thread. Testing has shown that in most cases 1 TC thread per 2 LDM threads is sufficient, and in many cases even 1 TC thread per 4 LDM threads is acceptable. Testing also demonstrated that in some instances where the workload needed to sustain very high update loads it is necessary to configure 3 to 4 TC threads per 4 LDM threads. In the previous MySQL Cluster 7.1 release, only one TC thread was available. This limit has been increased to 16 TC threads in MySQL Cluster 7.2. The TC domain also manages the Adaptive Query Localization functionality introduced in MySQL Cluster 7.2 that significantly enhances complex query performance by pushing JOIN operations down to the data nodes.

    Asynchronous Replication was separated into its own thread with the release of MySQL Cluster 7.1, and has not been modified in the latest 7.2 release.

    To scale the number of TC threads, it was necessary to separate the Schema Management domain from the TC domain.
    The Schema Management thread has little load, so it is implemented as a single thread. The Network receiver domain was bound to 1 thread in MySQL Cluster 7.1. With the increase of threads in MySQL Cluster 7.2 it is also necessary to increase the number of recv threads to 8. This enables each receive thread to service one or more sockets used to communicate with other nodes in the Cluster. The Network send thread is a new thread type introduced in MySQL Cluster 7.2. Previously, other threads handled the sending operations themselves, which can provide for lower latency. To achieve the highest throughput, however, it has been necessary to create dedicated send threads, of which 8 can be configured. It is still possible to configure MySQL Cluster 7.2 in a legacy mode that does not use any of the send threads – useful for those workloads that are most sensitive to latency. The IO thread is the final thread type, and there have been no changes to this domain in MySQL Cluster 7.2. Multiple IO threads were already available, which could be configured to either one thread per open file, or to a fixed number of IO threads that handle the IO traffic. Except when using compression on disk, the IO threads typically have a very light load.

    Benchmarking the Scalability Enhancements

    The scalability enhancements discussed above have made it possible to scale the CPU usage of each data node to more than 5x of that possible in MySQL Cluster 7.1. In addition, a number of bottlenecks have been removed, making it possible to scale data node performance by even more than 5x.

    Figure 2: MySQL Cluster 7.2 Delivers 8.4x Higher Performance than 7.1

    The flexAsynch benchmark was used to compare MySQL Cluster 7.2 performance to 7.1 across an 8-node Intel Xeon x5670-based cluster of dual socket commodity servers (6 cores each). As the results demonstrate, MySQL Cluster 7.2 delivers over 8x higher performance per data node than MySQL Cluster 7.1. More details of this and other benchmarks will be published in a new whitepaper – coming soon, so stay tuned!

    In a following blog post, I'll provide recommendations on optimum thread configurations for different types of server processor. You can also learn more from the Best Practices Guide to Optimizing Performance of MySQL Cluster.

    Conclusion

    MySQL Cluster has achieved a range of impressive benchmark results, and set in context with the previous 7.1 release, is able to deliver over 8x higher performance per node. As a result, the multi-threaded data node extensions not only serve to increase the performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-threaded processor designs.
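    As a concrete footnote to the thread types described above, the split maps onto the data node's ThreadConfig parameter in config.ini. The following is a hypothetical fragment (assuming the ThreadConfig syntax available in MySQL Cluster 7.2); the counts mirror the ratios discussed above and would need to be sized to the cores actually available:

    # Hypothetical [ndbd default] fragment: 16 LDM threads, 1 TC per 2 LDM,
    # 8 send and 8 recv threads, single main and rep threads.
    [ndbd default]
    ThreadConfig=main={count=1},ldm={count=16},tc={count=8},send={count=8},recv={count=8},rep={count=1}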


  • BI Applications overview

    - by sv744
    Welcome to the Oracle BI applications blog! This blog will talk about various features, the general roadmap, descriptions of functionality and implementation steps related to Oracle BI applications. In this first post we start with an overview of the BI apps and will delve deeper into some of the topics below in the upcoming weeks and months. If there are other topics you would like us to talk about, please feel free to provide feedback.

    The Oracle BI applications are a set of pre-built applications that enable pervasive BI by providing role-based insight for each functional area, including sales, service, marketing, contact center, finance, supplier/supply chain, HR/workforce, and executive management. For example, Sales Analytics includes role-based applications for sales executives, sales management, as well as front-line sales reps, each of whom have different needs. The applications integrate and transform data from a range of enterprise sources, including Siebel, Oracle, PeopleSoft, SAP, and others, into actionable intelligence for each business function and user role. This blog starts with the key benefits and characteristics of Oracle BI applications. In a series of subsequent blogs, each of these points will be explained in detail.

    Why BI apps?
    - Demonstrate the value of BI to a business user: show reports / dashboards / models that can answer their business questions as part of the sales cycle.
    - Demonstrate the technical feasibility of a BI project, significantly lower risk and improve success.
    - Build vs. buy benefit: you don't have to start with a blank sheet of paper.
    - Help consolidate disparate systems; data integration in M&A situations.
    - Insulate BI consumers from changes in the OLTP.
    - Present OLTP data and highlight issues of poor or missing data, and improve data quality and accuracy.

    Prebuilt Integrations
    - BI apps support prebuilt integrations against leading ERP sources: Fusion Applications, E-Business Suite, PeopleSoft, JD Edwards, Siebel, SAP.
    - Co-developed with inputs from functional experts in the BI and Applications teams.
    - Out-of-the-box dimensional model to source model mappings.
    - Multi-source and multi-instance support.

    Rich Data Model
    BI apps have a very rich dimensional data model, built over 10 years, that incorporates best practices from a BI modeling perspective as well as reflecting the source system complexities.
    - Conformed dimensional model across all business subject areas allows cross-functional reporting, e.g. customer / supplier 360.
    - Over 360 fact tables across 7 product areas: CRM – 145, SCM – 47, Financials – 28, Procurement – 20, HCM – 27, Projects – 18, Campus Solutions – 21, PLM – 56.
    - Supported by 300 physical dimensions.
    - Support for extensive calendars: Gregorian, enterprise and ledger based.
    - Conformed data model and metrics for real-time vs. warehouse-based reporting.
    - Multi-tenant enabled.

    Extensive BI-related transformations
    BI apps ETL and data integration support various transformations required for dimensional models and reporting requirements. All of these have been distilled into common patterns and abstracted logic which can be readily reused across different modules:
    - Slowly Changing Dimension support
    - Hierarchy flattening support
    - Row / column hybrid hierarchy flattening
    - As Is vs. As Was hierarchy support
    - Currency conversion: support for 3 corporate, CRM, ledger and transaction currencies
    - UOM conversion
    - Internationalization / localization
    - Dynamic data translations
    - Code standardization (Domains)
    - Historical snapshots
    - Cycle and process lifecycle computations
    - Balance facts
    - Equalization of GL accounting chartfields/segments
    - Standardized values for categorizing GL accounts
    - Reconciliation between GL and subledgers to track accounted/transferred/posted transactions to GL
    - Materialization of data only available through costly and complex APIs, e.g. Fusion Payroll, EBS / Fusion Accruals
    - Complex event interpretation of source data, e.g.:
      - what constitutes a transfer
      - deriving supervisors via the position hierarchy
      - deriving the primary assignment in PSFT
      - categorizing and transposing Payroll Balances to specific metrics to support side-by-side comparison of measures such as Fixed Salary, Variable Salary, Tax, Bonus and Overtime Payments
      - counting of events, e.g. converting events to fact counters so that, for example, the number of hires can easily be added up and compared alongside the total transfers and terminations
    - Multi-pass processing of multiple sources, e.g. headcount, salary, promotion and performance, to allow side-by-side comparison
    - Adding value to data to aid analysis through banding, additional domain classifications and groupings to allow higher-level analytical reporting and data discovery
    - Calculation of complex measures, for example:
      - COGS, DSO, DPO, inventory turns, etc.
      - transfers within a hierarchy, or out of / into a hierarchy, relative to the viewpoint in the hierarchy

    Configurability and extensibility support
    BI apps offer support for extensibility for various entities, either as automated extensibility or as part of the extension methodology:
    - Key Flexfield and Descriptive Flexfield support
    - Extensible attribute support (JDE)
    - Conformed Domains

    ETL architecture
    BI apps offer a modular adapter architecture which allows support of multiple product lines in a single conformed model:
    - Multi-source and multi-technology support
    - Orchestration: creates a load plan taking into account task dependencies and the customer's deployment to generate a plan of multiple complex ETL tasks
    - Plan optimization allowing parallel ETL tasks
    - Oracle: bitmap indexes and partition management
    - High availability support
    - Follow-the-sun support

    TCO
    BI apps support several utilities / capabilities that help with the overall total cost of ownership and ensure a rapid implementation:
    - Improved cost of ownership: lower cost to deploy
    - Ongoing support for new versions of the source application
    - Task-based setup flows
    - Data lineage
    - Functional setup performed in a web UI by a functional person
    - Configuration Test-to-Production support

    Security
    BI apps support both data and object security, enabling implementations to quickly configure the application as per the reporting security needs:
    - Fine-grained object security at report / dashboard and presentation catalog level
    - Data security integration with source systems
    - Extensible to support external data security rules

    Extensive set of KPIs
    - Over 7000 base and derived metrics across all modules
    - Time series calculations (YoY, % growth, etc.)
    - Common currency and UOM reporting
    - Cross-subject-area KPIs (analyzing HR vs. GL data, drilling from GL to AP/AR, etc.)

    Prebuilt reports and dashboards
    - 3000+ prebuilt reports supporting a large number of industries
    - Hundreds of role-based dashboards
    - Dynamic currency conversion at dashboard level

    Highly tuned performance
    The BI apps have been tuned over the years for both very performant ETL and dashboard performance. The applications use best practices and advanced database features to enable the best possible performance:
    - Optimized data model for BI and analytic queries
    - Prebuilt aggregates, and the ability for customers to easily create their own aggregates on warehouse facts, allow for scalable end-user performance
    - Incremental extracts and loads
    - Incremental aggregate builds
    - Automatic table index and statistics management
    - Parallel ETL loads
    - Source system delete handling
    - Low-latency extract with GoldenGate
    - Micro-ETL support
    - Bitmap indexes
    - Partitioning support
    - Modularized deployment: start small and add other subject areas seamlessly

    Source-specific staging and real-time schema
    - Support for a source-specific operational reporting schema for EBS, PSFT, Siebel and JDE

    Application integrations
    The BI apps also allow for integration with source systems as well as other applications that provide added value through BI and enable BI consumption during operational decision making:
    - Embedded dashboards for Fusion, EBS and Siebel applications
    - Action Link support
    - Marketing Segmentation
    - Sales Predictor Dashboard
    - Territory Management

    External integrations
    The BI apps data integration choices include support for loading external data:
    - External data enrichment choices: UNSPSC, Item class, etc.
    - Extensible Spend Classification

    Broad deployment choices
    - Exalytics support
    - Databases: Oracle, Exadata, Teradata, DB2, MSSQL
    - ETL tool of choice: ODI (coming), Informatica

    Extensible and customizable
    - Extensible architecture and methodology to add custom and external content
    - Upgradable across releases

    Thanks for reading a long post, and be on the lookout for future posts. We look forward to your valuable feedback on these topics, as well as suggestions on what other topics you would like us to cover.


  • Significance of SEO Submissions

    Search Engine Optimization is an important strategy for making your company's web presence cost-effective and fruitful. Raising the interest of your target audience in your website, pulling them towards your online identity and getting them to browse through your products and services is an important step in making your business successful in all fields. This multi-faceted, multi-beneficial task can bear the sweet fruit of success when it is applied in the best way.


  • Codeway 5: Embarcadero presents the new features of RAD Studio XE2, its rapid development suite, in a free online event

    Codeway 5: Embarcadero presents the new features of RAD Studio XE2, its rapid, cross-platform development suite, during a free online event. During the week of 21 to 25 November, Embarcadero is organising Codeway 5, an online event in French presenting the new features of RAD Studio XE2, the latest evolution of its rapid, multi-language, multi-platform development suite. After stopping in the main French cities with the CodeWay Tour 2011, Embarcadero clearly wants to reach a larger audience without requiring anyone to travel. "An internet connection is enough" to take full part...


  • Why do C++ people love multithreading when it comes to performance?

    - by user1849534
    I have a question about why programmers seem to love concurrency and multi-threaded programs in general. I'm considering two main approaches here:

    - an async approach, basically based on signals, or just an async approach as described by many papers and languages like the new C# 5.0, with a "companion thread" that manages the policy of your pipeline
    - a concurrent, or multi-threading, approach

    I will just say that I'm thinking about the hardware here and the worst case scenario, and I have tested these two paradigms myself. The async paradigm is a winner, to the point that I don't get why people talk about concurrency 90% of the time when they want to speed things up or make good use of their resources.

    I have tested multi-threaded programs and an async program on an old machine with an Intel quad-core that doesn't offer a memory controller inside the CPU; the memory is managed entirely by the motherboard. In this case performance is horrible with a multi-threaded application: even a relatively low number of threads like 3-4-5 can be a problem, and the application is unresponsive, slow and unpleasant. A good async approach is, on the other hand, probably not faster, but it's not worse either; my application just waits for the result and doesn't hang, it's responsive and it scales much better. I have also discovered that a context switch in the threading world is not that cheap in a real world scenario; it's in fact quite expensive, especially when you have more than 2 threads that need to cycle and swap among each other to be computed.

    On modern CPUs the situation is not really that different: the memory controller is integrated, but my point is that an x86 CPU is basically a serial machine and the memory controller works the same way as on the old machine with an external memory controller on the motherboard. The context switch is still a relevant cost in my application, and the fact that the memory controller is integrated, or that the newer CPUs have more than 2 cores, is no bargain for me.

    From what I have experienced, the concurrent approach is good in theory but not that good in practice; with the memory model imposed by the hardware it's hard to make good use of this paradigm, and it introduces a lot of issues, ranging from the use of my data structures to the joining of multiple threads. Also, neither paradigm offers any guarantee about when the task or the job will be done at a certain point in time, making them really similar from a functional point of view.

    According to the x86 memory model, why do the majority of people suggest using concurrency with C++ and not just an async approach? Also, why not consider the worst case scenario of a computer where the context switch is probably more expensive than the computation itself?
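    Since the question cites C# 5.0 as an example of the async approach, here is a minimal sketch of that pattern for an I/O-bound task. The class, method names and URL are illustrative only; the async/await machinery and HttpClient are the standard .NET 4.5-era APIs:

    // Minimal sketch of the async approach the question describes: the awaited
    // operation does not park a dedicated thread while the I/O is in flight.
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class AsyncVsThreads
    {
        // I/O-bound work: no worker thread sits blocked during the download.
        static async Task<int> FetchLengthAsync(string url)
        {
            using (var client = new HttpClient())
            {
                string body = await client.GetStringAsync(url); // yields instead of blocking
                return body.Length;
            }
        }

        static void Main()
        {
            Task<int> task = FetchLengthAsync("http://example.com/");
            // The calling thread stays free here; there is no pool of blocked
            // threads contending for context switches.
            Console.WriteLine("Length: {0}", task.Result); // block only at the very end
        }
    }

    The point of the sketch is the property the question contrasts with multithreading: during the await, no thread is consumed by the pending operation, so there is nothing for the scheduler to context-switch among.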

    Read the article

  • Examples of bad variable names and reasons [on hold]

    - by user470184
    I'll start with a class from the JDK:

        public final class Sdp {

    should be:

        public final class SocketsDirectProtocol {

    The abbreviation Sdp is ambiguous as a class name. Likewise:

        Class<?> cl = Class.forName("java.net.SdpSocketImpl", true, null);

    should be:

        Class<?> clazz = Class.forName("java.net.SdpSocketImpl", true, null);

    cl is ambiguous. And:

        private static void setAccessible(final AccessibleObject o) {

    should be:

        private static void setAccessible(final AccessibleObject accessibleObject) {

    There are various other examples in this class. Do you have similar and/or differing examples of variables that were named badly? For reference, the class in question:

        package com.oracle.net;

        public final class Sdp {
            private Sdp() { }

            /** The package-private ServerSocket(SocketImpl) constructor. */
            private static final Constructor<ServerSocket> serverSocketCtor;
            static {
                try {
                    serverSocketCtor = (Constructor<ServerSocket>)
                        ServerSocket.class.getDeclaredConstructor(SocketImpl.class);
                    setAccessible(serverSocketCtor);
                } catch (NoSuchMethodException e) {
                    throw new AssertionError(e);
                }
            }

            /** The package-private SdpSocketImpl() constructor. */
            private static final Constructor<SocketImpl> socketImplCtor;
            static {
                try {
                    Class<?> cl = Class.forName("java.net.SdpSocketImpl", true, null);
                    socketImplCtor = (Constructor<SocketImpl>) cl.getDeclaredConstructor();
                    setAccessible(socketImplCtor);
                } catch (ClassNotFoundException e) {
                    throw new AssertionError(e);
                } catch (NoSuchMethodException e) {
                    throw new AssertionError(e);
                }
            }

            private static void setAccessible(final AccessibleObject o) {
                AccessController.doPrivileged(new PrivilegedAction<Void>() {
                    public Void run() {
                        o.setAccessible(true);
                        return null;
                    }
                });
            }

            /** SDP enabled Socket. */
            private static class SdpSocket extends Socket {
                SdpSocket(SocketImpl impl) throws SocketException {
                    super(impl);
                }
            }

            /** Creates a SDP enabled SocketImpl. */
            private static SocketImpl createSocketImpl() {
                try {
                    return socketImplCtor.newInstance();
                } catch (InstantiationException x) {
                    throw new AssertionError(x);
                } catch (IllegalAccessException x) {
                    throw new AssertionError(x);
                } catch (InvocationTargetException x) {
                    throw new AssertionError(x);
                }
            }

            /**
             * Creates an unconnected and unbound SDP socket. The {@code Socket} is
             * associated with a {@link java.net.SocketImpl} of the system-default type.
             *
             * @return a new Socket
             * @throws UnsupportedOperationException if SDP is not supported
             * @throws IOException if an I/O error occurs
             */
            public static Socket openSocket() throws IOException {
                SocketImpl impl = createSocketImpl();
                return new SdpSocket(impl);
            }

            /**
             * Creates an unbound SDP server socket. The {@code ServerSocket} is
             * associated with a {@link java.net.SocketImpl} of the system-default type.
             *
             * @return a new ServerSocket
             * @throws UnsupportedOperationException if SDP is not supported
             * @throws IOException if an I/O error occurs
             */
            public static ServerSocket openServerSocket() throws IOException {
                // create ServerSocket via package-private constructor
                SocketImpl impl = createSocketImpl();
                try {
                    return serverSocketCtor.newInstance(impl);
                } catch (IllegalAccessException x) {
                    throw new AssertionError(x);
                } catch (InstantiationException x) {
                    throw new AssertionError(x);
                } catch (InvocationTargetException x) {
                    Throwable cause = x.getCause();
                    if (cause instanceof IOException)
                        throw (IOException) cause;
                    if (cause instanceof RuntimeException)
                        throw (RuntimeException) cause;
                    throw new RuntimeException(x);
                }
            }

            /**
             * Opens a socket channel to a SDP socket.
             *
             * <p> The channel will be associated with the system-wide default
             * {@link java.nio.channels.spi.SelectorProvider SelectorProvider}.
             *
             * @return a new SocketChannel
             * @throws UnsupportedOperationException if SDP is not supported or not
             *         supported by the default selector provider
             * @throws IOException if an I/O error occurs
             */
            public static SocketChannel openSocketChannel() throws IOException {
                FileDescriptor fd = SdpSupport.createSocket();
                return sun.nio.ch.Secrets.newSocketChannel(fd);
            }

            /**
             * Opens a server-socket channel to a SDP socket.
             *
             * <p> The channel will be associated with the system-wide default
             * {@link java.nio.channels.spi.SelectorProvider SelectorProvider}.
             *
             * @return a new ServerSocketChannel
             * @throws UnsupportedOperationException if SDP is not supported or not
             *         supported by the default selector provider
             * @throws IOException if an I/O error occurs
             */
            public static ServerSocketChannel openServerSocketChannel() throws IOException {
                FileDescriptor fd = SdpSupport.createSocket();
                return sun.nio.ch.Secrets.newServerSocketChannel(fd);
            }
        }

    Read the article

  • SSH / SFTP connection issue using Tamir.SharpSsh

    - by jinsungy
    This is my code to connect and send a file to a remote SFTP server:

        public static void SendDocument(string fileName, string host, string remoteFile, string user, string password)
        {
            Scp scp = new Scp();
            scp.OnConnecting += new FileTansferEvent(scp_OnConnecting);
            scp.OnStart += new FileTansferEvent(scp_OnProgress);
            scp.OnEnd += new FileTansferEvent(scp_OnEnd);
            scp.OnProgress += new FileTansferEvent(scp_OnProgress);
            try
            {
                scp.To(fileName, host, remoteFile, user, password);
            }
            catch (Exception e)
            {
                throw e;
            }
        }

    I can successfully connect, send and receive files using CoreFTP, so the issue is not with the server. When I run the above code, the process seems to stop at the scp.To method; it just hangs indefinitely. Does anyone know what my problem might be? Maybe it has something to do with adding the key to an SSH cache? If so, how would I go about that?

    EDIT: I inspected the packets using Wireshark and discovered that my computer is not executing the Diffie-Hellman Key Exchange Init. This must be the issue.

    EDIT: I ended up using the following code. Note that StrictHostKeyChecking was turned off to make things easier.

        JSch jsch = new JSch();
        jsch.setKnownHosts(host);
        Session session = jsch.getSession(user, host, 22);
        session.setPassword(password);
        System.Collections.Hashtable hashConfig = new System.Collections.Hashtable();
        hashConfig.Add("StrictHostKeyChecking", "no");
        session.setConfig(hashConfig);
        try
        {
            session.connect();
            Channel channel = session.openChannel("sftp");
            channel.connect();
            ChannelSftp c = (ChannelSftp)channel;
            c.put(fileName, remoteFile);
            c.exit();
        }
        catch (Exception e)
        {
            throw e;
        }

    Thanks.
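
    For comparison, the same upload can be sketched with Python's paramiko library (host, credentials and paths are placeholders; AutoAddPolicy mirrors the StrictHostKeyChecking=no shortcut above and is just as unsafe for production use):

        import paramiko

        def send_document(filename, host, remote_file, user, password):
            client = paramiko.SSHClient()
            # Equivalent of StrictHostKeyChecking=no: accept unknown host keys.
            client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            client.connect(host, port=22, username=user, password=password)
            try:
                sftp = client.open_sftp()
                sftp.put(filename, remote_file)  # upload local file to remote path
                sftp.close()
            finally:
                client.close()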

    Read the article

  • DotNetOpenAuth: Message signature was incorrect

    - by Shawn Miller
    I'm getting a "Message signature was incorrect" exception when trying to authenticate with MyOpenID and Yahoo. I'm using pretty much the ASP.NET MVC sample code that came with DotNetOpenAuth 3.4.2 public ActionResult Authenticate(string openid) { var openIdRelyingParty = new OpenIdRelyingParty(); var authenticationResponse = openIdRelyingParty.GetResponse(); if (authenticationResponse == null) { // Stage 2: User submitting identifier Identifier identifier; if (Identifier.TryParse(openid, out identifier)) { var realm = new Realm(Request.Url.Root() + "openid"); var authenticationRequest = openIdRelyingParty.CreateRequest(openid, realm); authenticationRequest.RedirectToProvider(); } else { return RedirectToAction("login", "home"); } } else { // Stage 3: OpenID provider sending assertion response switch (authenticationResponse.Status) { case AuthenticationStatus.Authenticated: { // TODO } case AuthenticationStatus.Failed: { throw authenticationResponse.Exception; } } } return new EmptyResult(); } Working fine with Google, AOL and others. However, Yahoo and MyOpenID fall into the AuthenticationStatus.Failed case with the following exception: DotNetOpenAuth.Messaging.Bindings.InvalidSignatureException: Message signature was incorrect. at DotNetOpenAuth.OpenId.ChannelElements.SigningBindingElement.ProcessIncomingMessage(IProtocolMessage message) in c:\Users\andarno\git\dotnetopenid\src\DotNetOpenAuth\OpenId\ChannelElements\SigningBindingElement.cs:line 139 at DotNetOpenAuth.Messaging.Channel.ProcessIncomingMessage(IProtocolMessage message) in c:\Users\andarno\git\dotnetopenid\src\DotNetOpenAuth\Messaging\Channel.cs:line 992 at DotNetOpenAuth.OpenId.ChannelElements.OpenIdChannel.ProcessIncomingMessage(IProtocolMessage message) in c:\Users\andarno\git\dotnetopenid\src\DotNetOpenAuth\OpenId\ChannelElements\OpenIdChannel.cs:line 172 at DotNetOpenAuth.Messaging.Channel.ReadFromRequest(HttpRequestInfo httpRequest) in c:\Users\andarno\git\dotnetopenid\src\DotNetOpenAuth\Messaging\Channel.cs:line 386 at DotNetOpenAuth.OpenId.RelyingParty.OpenIdRelyingParty.GetResponse(HttpRequestInfo httpRequestInfo) in c:\Users\andarno\git\dotnetopenid\src\DotNetOpenAuth\OpenId\RelyingParty\OpenIdRelyingParty.cs:line 540 Appears that others are having the same problem: http://trac.dotnetopenauth.net:8000/ticket/172 Does anyone have a workaround?

    Read the article

  • Invalid message signature when running OpenId Provider on Cluster

    - by Garth
    Introduction

    We have an OpenID Provider which we created using the DotNetOpenAuth component. Everything works great when we run the provider on a single node, but when we move the provider to a load-balanced cluster, where multiple servers handle requests for each session, we get issues with the message signing: the DotNetOpenAuth component seems to be using something unique to each cluster node to create the signature.

    Exception

        DotNetOpenAuth.Messaging.Bindings.InvalidSignatureException: Message signature was incorrect.
           at DotNetOpenAuth.OpenId.ChannelElements.SigningBindingElement.ProcessIncomingMessage(IProtocolMessage message) in c:\BuildAgent\work\7ab20c0d948e028f\src\DotNetOpenAuth\OpenId\ChannelElements\SigningBindingElement.cs:line 139
           at DotNetOpenAuth.Messaging.Channel.ProcessIncomingMessage(IProtocolMessage message) in c:\BuildAgent\work\7ab20c0d948e028f\src\DotNetOpenAuth\Messaging\Channel.cs:line 940
           at DotNetOpenAuth.OpenId.ChannelElements.OpenIdChannel.ProcessIncomingMessage(IProtocolMessage message) in c:\BuildAgent\work\7ab20c0d948e028f\src\DotNetOpenAuth\OpenId\ChannelElements\OpenIdChannel.cs:line 172
           at DotNetOpenAuth.Messaging.Channel.ReadFromRequest(HttpRequestInfo httpRequest) in c:\BuildAgent\work\7ab20c0d948e028f\src\DotNetOpenAuth\Messaging\Channel.cs:line 378
           at DotNetOpenAuth.OpenId.RelyingParty.OpenIdRelyingParty.GetResponse(HttpRequestInfo httpRequestInfo) in c:\BuildAgent\work\7ab20c0d948e028f\src\DotNetOpenAuth\OpenId\RelyingParty\OpenIdRelyingParty.cs:line 493

    Setup

    We have the machine config set up to use the same machine key on all cluster nodes, and we have set up an out-of-process session with SQL Server.

    Question

    How do we configure the key used by DotNetOpenAuth to sign its messages so that the client will trust responses from all servers in the cluster during the same session?
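
    For intuition: OpenID assertions are signed with an HMAC keyed by the association secret, so every node that may answer a verification request must read that secret from the same place. DotNetOpenAuth keeps associations behind an application-store abstraction (an IProviderApplicationStore in the 3.x line, if memory serves; treat that name as an assumption), and backing it with shared storage rather than per-node memory is the usual fix. A simplified sketch of the signing mechanics in Python, with invented field names and key:

        import hashlib
        import hmac

        # Hypothetical association secret; OpenID negotiates one per association
        # handle, and every cluster node must load the *same* secret for a handle.
        ASSOC_SECRET = b"secret-from-shared-association-store"

        def sign(fields):
            # OpenID signs a key-value rendering of the signed fields (simplified).
            payload = "".join("%s:%s\n" % (k, v) for k, v in sorted(fields.items()))
            return hmac.new(ASSOC_SECRET, payload.encode(), hashlib.sha256).hexdigest()

        node_a = sign({"mode": "id_res", "op_endpoint": "https://op.example/"})
        node_b = sign({"mode": "id_res", "op_endpoint": "https://op.example/"})
        assert node_a == node_b  # holds only because both nodes share the secret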

    Read the article

  • Posting messages in two RabbitMQ queues, instead of one (using py-amqp)

    - by Khelben
    I've got a strange problem using py-amqp and the Flopsy module. I have written a publisher that sends messages to a RabbitMQ server, and I wanted to be able to send them to a specified queue. That is not possible with the Flopsy module, so I tweaked it, adding a parameter and a line to declare the queue in the __init__ method of the Publisher object:

        def __init__(self, routing_key=DEFAULT_ROUTING_KEY, exchange=DEFAULT_EXCHANGE,
                     connection=None, delivery_mode=DEFAULT_DELIVERY_MODE,
                     queue=DEFAULT_QUEUE):
            self.connection = connection or Connection()
            self.channel = self.connection.connection.channel()
            self.channel.queue_declare(queue)  # ADDED TO SET UP QUEUE
            self.exchange = exchange
            self.routing_key = routing_key
            self.delivery_mode = delivery_mode

    The channel object is part of the py-amqplib library. The problem I've got is that, even though it sends the messages to the specified queue, it ALSO sends them to the default queue. As we expect this system to send quite a lot of messages, we don't want to stress it with useless duplicates. I've tried to debug the code and step into the py-amqplib library, but I'm not able to figure out any error or missing step. I'm also unable to find any documentation for py-amqplib outside the code. Any ideas on why this is happening and how to correct it?
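
    One thing worth checking: queue_declare only creates the queue; it does not bind it to the exchange you publish to, and the broker routes by the exchange's bindings plus the routing key, not by which queues the publisher happened to declare. A sketch of the usual declare/bind/publish sequence, using the modern pika client rather than Flopsy/py-amqplib (queue, exchange and key names are made up):

        import pika

        conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        ch = conn.channel()

        # Create the exchange and the queue, then bind them explicitly:
        # without the bind, messages published to the exchange never reach the queue.
        ch.exchange_declare(exchange="my_exchange", exchange_type="direct")
        ch.queue_declare(queue="my_queue")
        ch.queue_bind(queue="my_queue", exchange="my_exchange", routing_key="my_key")

        # Publish to the exchange; the broker routes by binding + routing key.
        ch.basic_publish(exchange="my_exchange", routing_key="my_key", body=b"hello")
        conn.close()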

    Read the article

  • AMQP gem specifying a dead letter exchange

    - by JP.
    I've specified a queue on the RabbitMQ server called MyQueue. It is durable and has x-dead-letter-exchange set to MyQueue.DLX. (I also have an exchange called MyExchange bound to that queue, and another exchange called MyQueue.DLX, but I don't believe this is important to the question.)

    If I use Ruby's amqp gem to subscribe to those messages, I would do it like this:

        # Doing this up front and in a new thread has to do with how my code is
        # structured; shown here in case it has a bearing on the question.
        Thread.new do
          AMQP.start('amqp://guest:guest@localhost:5672')
        end

        EventMachine.next_tick do
          channel = AMQP::Channel.new(AMQP.connection)
          queue = channel.queue("MyQueue", :durable => true,
                                :'x-dead-letter-exchange' => "MyQueue.DLX")
          queue.subscribe(:ack => true) do |metadata, payload|
            p metadata
            p payload
          end
        end

    If I execute this code with the queues and exchanges already created and bound (as they need to be in my setup), RabbitMQ throws the following error in its logs:

        =ERROR REPORT==== 19-Aug-2013::14:25:53 ===
        connection <0.19654.2>, channel 2 - soft error:
        {amqp_error,precondition_failed,
         "inequivalent arg 'x-dead-letter-exchange' for queue 'MyQueue' in vhost '/': received none but current is the value 'MyQueue.DLX' of type 'longstr'",
         'queue.declare'}

    This seems to say that I haven't specified the same dead letter exchange as the pre-existing queue, but I believe I have, with the queue = ... line. Any ideas?
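
    The broker compares every optional argument of a queue.declare against the existing queue, and in most clients those arguments must travel in an arguments table rather than as top-level options; for the amqp gem that is presumably :arguments => { "x-dead-letter-exchange" => "MyQueue.DLX" } (treat the exact keyword as an assumption). Here is the matching redeclaration sketched with Python's pika client, using the names from the question:

        import pika

        conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        ch = conn.channel()

        # Redeclare with *identical* durability and arguments, otherwise the broker
        # rejects the declare with PRECONDITION_FAILED (406), as in the log above.
        ch.queue_declare(
            queue="MyQueue",
            durable=True,
            arguments={"x-dead-letter-exchange": "MyQueue.DLX"},
        )

        def on_message(channel, method, properties, body):
            print(body)
            channel.basic_ack(delivery_tag=method.delivery_tag)

        ch.basic_consume(queue="MyQueue", on_message_callback=on_message)
        ch.start_consuming()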

    Read the article

  • Idiomatic way to build a custom structure from XML zipper in Clojure

    - by Checkers
    Say I'm parsing an RSS feed and want to extract a subset of information from it:

        (def feed (-> "http://..." clojure.xml/parse clojure.zip/xml-zip))

    I can get links and titles separately:

        (xml-> feed :channel :item :link text)
        (xml-> feed :channel :item :title text)

    However, I can't figure out a way to extract them at the same time without traversing the zipper more than once, e.g.

        (let [feed (-> "http://..." clojure.xml/parse clojure.zip/xml-zip)]
          (zipmap (xml-> feed :channel :item :link text)
                  (xml-> feed :channel :item :title text)))

    ...or a variation thereof, involving mapping multiple sequences to a function that incrementally builds a map with, say, assoc. Not only do I have to traverse the sequence multiple times, the sequences also have separate states, so elements must be "aligned", so to speak. That is, in a more complex case than RSS, a sub-element may be missing in a particular element, making one of the sequences shorter by one (there are no gaps), so the result may actually be incorrect. Is there a better way, or is this in fact the way you do it in Clojure?
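
    The underlying idea, independent of Clojure: walk the items once and pull both fields from each item, so a missing sub-element cannot shift one sequence relative to the other. A sketch in Python with a placeholder feed URL:

        import urllib.request
        import xml.etree.ElementTree as ET

        with urllib.request.urlopen("http://example.com/feed.rss") as resp:
            root = ET.parse(resp).getroot()

        # One traversal: each (title, link) pair comes from the same <item>,
        # so a missing <link> yields None for that item instead of shifting pairs.
        entries = {
            item.findtext("title"): item.findtext("link")
            for item in root.iterfind("channel/item")
        }
        print(entries)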

    Read the article

  • Problem while fetching XML from an RSS feed

    - by Tariq- iPHONE Programmer
    I'm fetching XML from an RSS feed. I am unable to extract items in depth the way I have easily extracted "channel - description":

        NSString *resultValue = [[responseDictionary valueForKeyPath:@"rss.channel.description"] textContent];

    Above result: YouTube RSS Feed

    My question is how I can parse "item - description", i.e. (Music video by Andrews \U00a9 1982 MJJ Productions Inc.). I get nil if I fetch it with valueForKeyPath:@"rss.channel.item.description". The parsed structure looks like this:

        Key: rss
        Value: {
            "_text" = "\n";
            channel = {
                "_text" = "\n";
                description = {
                    "_text" = "YouTube RSS Feed";
                };
                item = (
                    {
                        "_text" = "\n\t";
                        description = {
                            "_text" = "Music video by Andrews \U00a9 1982 MJJ Productions Inc.";
                        };
                        enclosure = {
                            length = 294;
                            type = "application/x-shockwave-flash";
                            url = "http://youtube.com/v/Zi_XLOBDo_Y.swf";
                        };
                        link = {
                            "_text" = "http://youtube.com/?v=Zi_XLOBDo_Y";
                        };
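
    The likely snag is that item is an array under channel, so a scalar key path such as rss.channel.item.description cannot dot through it the way rss.channel.description can; you generally have to iterate (or index) the collection first. The same shape sketched in Python, with a placeholder file name:

        import xml.etree.ElementTree as ET

        rss = ET.fromstring(open("feed.xml").read())  # placeholder file name

        # channel/description is a single node, so a direct path works...
        print(rss.findtext("channel/description"))    # -> "YouTube RSS Feed"

        # ...but <item> repeats, so you iterate rather than dot through it.
        for item in rss.findall("channel/item"):
            print(item.findtext("description"))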

    Read the article

  • C++ socket concurrent server

    - by gregpi
    Hi all. I'm writing a concurrent server that's supposed to have a communication channel and a data channel. The client initially connects to the communication channel to authenticate; upon successful authentication, the client is then connected to the data channel to access data. My program is already doing that, and I'm using threads. My only issue is that if I try to connect a second client, I get a "cannot bind: address already in use" error. I have it set up this way:

    PART A. A client connects to port 4567 and enters its login info. A thread is spawned to handle the client (repeated for each client that connects). In that thread, a function (let's call it FUNC_A) checks the client's login info (don't worry about how the check is done); if successful, the thread starts the data server listening on 8976, then sends an OK to the client. Once that is received, the client attempts to connect to the data server.

    PART B. Once a client connects to the data server, the client is accepted from inside FUNC_A and another thread is spawned to handle the client's connection to the data server. (Hopefully everything is clear.)

    Now, all of that works fine. However, if I try to connect with a second client, when it gets to PART B I get a "cannot bind: address already in use" error. I've tried so many different ways; I've even tried spawning a thread to start the data server and accept the client, and then starting another thread to handle that connection. Still no luck. Please give me a suggestion as to what I'm doing wrong, how to go about this, or what the best way to implement it is. Thank you.
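
    From the description, the likely culprit is that FUNC_A binds and listens on the data port (8976) once per client thread; a port can only be bound once per process, so the second client's bind fails. The usual shape, sketched in Python with placeholder handlers, binds each port exactly once up front and only accept()s per client:

        import socket
        import threading

        def handle_data(conn):
            # Placeholder: real data-channel logic goes here.
            conn.close()

        def serve(port, handler):
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            # Lets the listener rebind promptly after a restart (TIME_WAIT);
            # it does NOT make binding the same port twice in one process legal.
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("", port))  # bind/listen exactly once per port
            srv.listen(5)
            while True:
                conn, addr = srv.accept()
                threading.Thread(target=handler, args=(conn,)).start()

        # Data channel on 8976, bound a single time; authenticated clients are
        # handed to it by accept(), never by another bind().
        threading.Thread(target=serve, args=(8976, handle_data), daemon=True).start()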

    Read the article

  • Display Flex XML with a dash

    - by user217582
    In Flex, how do I display "midi-channel" in an s:Label? I tried songTitle.text = mainXML.child("movement-title").text(); and it worked, but I'm unsure how to do the same for the midi-channel part.

        <score-partwise>
          <movement-title>zsdf</movement-title>
          <part-list>
            <score-part id="P1">
              <part-name>Piano</part-name>
              <score-instrument id="P1-I1">
                <instrument-name>Acoustic Grand Piano</instrument-name>
              </score-instrument>
              <midi-instrument id="P1-I1">
                <midi-channel>1</midi-channel>
                <midi-program>1</midi-program>
              </midi-instrument>
            </score-part>
          </part-list>
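
    Hyphens aren't valid in E4X dot paths, which is why the child("movement-title") form was needed in the first place; reaching midi-channel is usually done the same way, for instance with descendants("midi-channel") or a bracket/child() access at each hyphenated step (hedged: the exact call depends on the E4X flavor). The shape of the lookup, sketched in Python with an abbreviated copy of the document:

        import xml.etree.ElementTree as ET

        SCORE_XML = """
        <score-partwise>
          <part-list>
            <score-part id="P1">
              <midi-instrument id="P1-I1">
                <midi-channel>1</midi-channel>
              </midi-instrument>
            </score-part>
          </part-list>
        </score-partwise>
        """

        # Hyphenated tag names are plain strings here, so a descendant search works.
        score = ET.fromstring(SCORE_XML)
        print(score.findtext(".//midi-channel"))  # -> 1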

    Read the article

< Previous Page | 78 79 80 81 82 83 84 85 86 87 88 89  | Next Page >