Search Results

Search found 2000 results on 80 pages for 'stackover flow'.

Page 15/80

  • What to do when the programming activity becomes a problem?

    - by gablin
    I once saw a program (can't remember which) that talked about people "experiencing flow" when they are doing something they are passionate about. When "in flow", they tend to lose track of time and their surroundings, concentrating only on the activity at hand. This happens a lot for me when I program, particularly when I face a problem. I refuse to give up until it's solved. This usually leads to hours just rushing by: I forget to eat lunch, dinner gets pushed far into the evening, and when I finally look at the clock, it's way into the wee hours of the night and I will only get a few hours of sleep before having to rise early in the morning. (This is not to say that I'm in flow only when facing a problem - but I find it particularly hard to stop programming and step back when there's something I can't solve immediately.) I love programming, but I hate it when it disrupts my normal routines (most importantly eating and sleeping patterns). And sitting still for so many hours, staring at a screen, is not healthy. Please, any ideas on how I can get my rampant programming activity under control?

    Read the article

  • Tools for game script / storyboard

    - by Pietro Polsinelli
    I am searching for a tool that will help in writing a game script. By "script" I mean the text core of a storyboard - without the drawing drafts, which may or may not be there (yet). What I'm thinking of will let me write a piece of the script's text, define a simplified workflow from that step, and then define the text of the next steps, and so on. Searching online, I found Inform http://inform7.com/ ("A Design System for Interactive Fiction Based on Natural Language"), which in theory is exactly what I am searching for, but when I tried it I found it is built around the model of a space (a dungeon, a library) where you pick up objects and explore them. In my case I am designing something more like a Sims-style game, where the flow is entirely different. As for non-specific software, mind-mapping tools miss the linearity of the process. What I am writing is a directed graph - simply a workflow, but I want to design it in a way that is more text-based than workflow-based. So what I'm doing now is using a text editor, whose output I'll transform directly into code. Any suggestions?
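    For illustration only, here is a minimal sketch of the kind of text-based, directed-graph script structure described above. The class and field names (ScriptNode, choices, and the example step ids) are hypothetical and not taken from any particular tool; it just shows how a branching, text-first storyboard can live as plain code.

        import java.util.ArrayList;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;

        // A script step: a piece of text plus named transitions to the next steps.
        // Together the nodes form a directed graph (a branching storyboard without drawings).
        class ScriptNode {
            final String id;
            final String text;
            final Map<String, ScriptNode> choices = new LinkedHashMap<>(); // label -> next step

            ScriptNode(String id, String text) {
                this.id = id;
                this.text = text;
            }

            ScriptNode addChoice(String label, ScriptNode next) {
                choices.put(label, next);
                return this;
            }
        }

        public class ScriptGraphDemo {
            public static void main(String[] args) {
                ScriptNode wakeUp = new ScriptNode("wake_up", "Your sim wakes up, already late for work.");
                ScriptNode skip   = new ScriptNode("skip_breakfast", "Hungry all morning; mood drops.");
                ScriptNode eat    = new ScriptNode("eat_breakfast", "Arrives late, but in a good mood.");
                wakeUp.addChoice("skip breakfast", skip)
                      .addChoice("eat breakfast", eat);

                // Walk every edge of the graph and print it as a flat outline,
                // which is roughly what a text-editor draft of the script looks like.
                List<ScriptNode> stack = new ArrayList<>(List.of(wakeUp));
                while (!stack.isEmpty()) {
                    ScriptNode node = stack.remove(stack.size() - 1);
                    System.out.println("[" + node.id + "] " + node.text);
                    node.choices.forEach((label, next) -> {
                        System.out.println("    -> (" + label + ") " + next.id);
                        stack.add(next);
                    });
                }
            }
        }

    A structure like this keeps the linearity the asker wants (each node reads as text) while still being a directed graph that can later be transformed into game code.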

    Read the article

  • Which opcodes are faster at the CPU level?

    - by Geotarget
    In every programming language there are sets of opcodes that are recommended over others. I've tried to list them here, in order of speed: bitwise operations; integer addition/subtraction; integer multiplication/division; comparison; control flow; float addition/subtraction; float multiplication/division. Where you need high-performance code, C++ can be hand-optimized in assembly to use SIMD instructions or more efficient control flow, data types, etc. So I'm trying to understand whether the data type (int32 / float32 / float64) or the operation used (*, +, &) affects performance at the CPU level. Is a single multiply slower on the CPU than an addition? In MCU theory you learn that the speed of an opcode is determined by the number of CPU cycles it takes to execute. So does that mean that a multiply takes 4 cycles and an add takes 2? Exactly what are the speed characteristics of the basic math and control flow opcodes? If two opcodes take the same number of cycles to execute, can they be used interchangeably without any performance gain or loss? Any other technical details you can share regarding x86 CPU performance are appreciated.
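    As a rough way to observe the question empirically, here is a small timing sketch that runs a long dependent chain of integer adds versus multiplies, so per-operation latency cannot be hidden by out-of-order execution. It is not a rigorous benchmark (the JIT may partially optimize the additive loop, and there is no harness like JMH behind it); the loop count and method names are arbitrary choices for illustration.

        public class OpLatencySketch {
            static final int N = 200_000_000;

            // Dependent chain of additions: each result feeds the next iteration.
            static long addChain() {
                long x = 1;
                for (int i = 0; i < N; i++) x = x + 3;
                return x;
            }

            // Same idea with multiplications (overflow simply wraps).
            static long mulChain() {
                long x = 1;
                for (int i = 0; i < N; i++) x = x * 3;
                return x;
            }

            static void time(String label, java.util.function.LongSupplier op) {
                long start = System.nanoTime();
                long result = op.getAsLong();              // keep the result so the loop is not dropped
                long ms = (System.nanoTime() - start) / 1_000_000;
                System.out.println(label + ": " + ms + " ms (result " + result + ")");
            }

            public static void main(String[] args) {
                for (int round = 0; round < 3; round++) {  // repeat so the JIT can warm up
                    time("add", OpLatencySketch::addChain);
                    time("mul", OpLatencySketch::mulChain);
                }
            }
        }

    On typical modern x86 hardware the two chains differ by a small constant factor (integer multiply has a latency of a few cycles versus one for add), which is the kind of difference the question is asking about; division is usually much slower than either.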

    Read the article

  • Analytics - Where do my drop offs go?

    - by BadCash
    I have a website set up with Google Analytics (through the WordPress plugin "Google Analytics for WordPress" by Joost de Valk). When I check out the Visitors Flow in Google Analytics, it shows something like this: (home) - 43% drop-offs; /page-2/ - 10% drop-offs; ... etc. I have also set up events for external links. The main "goal" of the website is to drive traffic to my Android app on Google Play, so I have a couple of different links to it that are all set up as events. Everything seems to be working; my events show up when I go to Content > Events in Google Analytics. However, it seems to me that some percentage of the users who are reported as "drop-offs" have in fact clicked on one of the external links. But there's no info about the reason for those drop-offs in the Visitors Flow chart. I can of course check out each specific event category and event action and set "other" to Content/Page, which (I guess) shows the number of visitors who triggered a specific event on a specific page. It just seems like such a complicated way of going about this! So, is there a way to get a more detailed picture, including events, in the Visitors Flow chart? Something like: (home) - 43% drop-offs; Event Action: "Google Play" = 50%, "Youtube" = 10%, (not set) = 40%

    Read the article

  • Towards Ultra-Reusability for ADF - Adaptive Bindings

    - by Duncan Mills
    The task flow mechanism embodies one of the key value propositions of the ADF Framework, it's primary contribution being the componentization of your applications and implicitly the introduction of a re-use culture, particularly in large applications. However, what if we could do more? How could we make task flows even more re-usable than they are today? Well one great technique is to take advantage of a feature that is already present in the framework, a feature which I will call, for want of a better name, "adaptive bindings". What's an adaptive binding? well consider a simple use case.  I have several screens within my application which display tabular data which are all essentially identical, the only difference is that they happen to be based on different data collections (View Objects, Bean collections, whatever) , and have a different set of columns. Apart from that, however, they happen to be identical; same toolbar, same key functions and so on. So wouldn't it be nice if I could have a single parametrized task flow to represent that type of UI and reuse it? Hold on you say, great idea, however, to do that we'd run into problems. Each different collection that I want to display needs different entries in the pageDef file and: I want to continue to use the ADF Bindings mechanism rather than dropping back to passing the whole collection into the taskflow   If I do use bindings, there is no way I want to have to declare iterators and tree bindings for every possible collection that I might want the flow to handle  Ah, joy! I reply, no need to panic, you can just use adaptive bindings. Defining an Adaptive Binding  It's easiest to explain with a simple before and after use case.  Here's a basic pageDef definition for our familiar Departments table.  <executables> <iterator Binds="DepartmentsView1" DataControl="HRAppModuleDataControl" RangeSize="25"             id="DepartmentsView1Iterator"/> </executables> <bindings> <tree IterBinding="DepartmentsView1Iterator" id="DepartmentsView1">   <nodeDefinition DefName="oracle.demo.model.vo.DepartmentsView" Name="DepartmentsView10">     <AttrNames>       <Item Value="DepartmentId"/>         <Item Value="DepartmentName"/>         <Item Value="ManagerId"/>         <Item Value="LocationId"/>       </AttrNames>     </nodeDefinition> </tree> </bindings>  Here's the adaptive version: <executables> <iterator Binds="${pageFlowScope.voName}" DataControl="HRAppModuleDataControl" RangeSize="25"             id="TableSourceIterator"/> </executables> <bindings> <tree IterBinding="TableSourceIterator" id="GenericView"> <nodeDefinition Name="GenericViewNode"/> </tree> </bindings>  You'll notice three changes here.   Most importantly, you'll see that the hard-coded View Object name  that formally populated the iterator Binds attribute is gone and has been replaced by an expression (${pageFlowScope.voName}). This of course, is key, you can see that we can pass a parameter to the task flow, telling it exactly what VO to instantiate to populate this table! I've changed the IDs of the iterator and the tree binding, simply to reflect that they are now re-usable The tree binding itself has simplified and the node definition is now empty.  Now what this effectively means is that the #{node} map exposed through the tree binding will expose every attribute of the underlying iterator's collection - neat! 
(kudos to Eugene Fedorenko at this point, who reminded me that this was even possible in his excellent "deep dive" session at OpenWorld this year) Using the adaptive binding in the UI Now that we have a parametrized binding, we have to make changes in the UI as well, first of all to reflect the new ID that we've assigned to the binding (of course) but also to change the column list from being a fixed, known list to a generic, metadata-driven set: <af:table value="#{bindings.GenericView.collectionModel}" rows="#{bindings.GenericView.rangeSize}"         fetchSize="#{bindings.GenericView.rangeSize}"           emptyText="#{bindings.GenericView.viewable ? 'No data to display.' : 'Access Denied.'}"           var="row" rowBandingInterval="0"           selectedRowKeys="#{bindings.GenericView.collectionModel.selectedRow}"           selectionListener="#{bindings.GenericView.collectionModel.makeCurrent}"           rowSelection="single" id="t1"> <af:forEach items="#{bindings.GenericView.attributeDefs}" var="def">   <af:column headerText="#{bindings.GenericView.labels[def.name]}" sortable="true"            sortProperty="#{def.name}" id="c1">     <af:outputText value="#{row[def.name]}" id="ot1"/>     </af:column>   </af:forEach> </af:table> Of course you are not constrained to a simple read-only table here.  It's a normal tree binding and iterator that you are using behind the scenes so you can do all the usual things, but you can see the value of using ADFBC as the back end model as you have the rich pantheon of UI hints to use to derive things like labels (and validators and converters...)  One Final Twist  To finish on a high note I wanted to point out that you can take this even further and achieve the ultra-reusability I promised. Here's the new version of the pageDef iterator - see if you can notice the subtle change: <iterator Binds="${pageFlowScope.voName}"  DataControl="${pageFlowScope.dataControlName}" RangeSize="25"           id="TableSourceIterator"/>  Yes, as well as parametrizing the collection (VO) name, we can also parametrize the name of the data control. So your task flow can graduate from being re-usable within an application to being truly generic. So if you have some really common patterns within your app you can wrap them up and reuse them across multiple developments without having to dictate data control names, or connection names. This also demonstrates the importance of interacting with data only via the binding layer APIs. If you keep any code in the task flow generic in that way you can deal with data from multiple types of data controls, not just one flavour. Enjoy!
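    To round the example out, here is a hedged sketch of how a caller might populate those pageFlowScope values from a managed bean before entering such a reusable flow. The bean, the navigation outcome, and the exact wiring (direct scope access versus declared task flow input parameters) are assumptions of mine, not part of the post's recipe; only the parameter names voName and dataControlName come from the pageDef shown above.

        import java.util.Map;
        import oracle.adf.view.rich.context.AdfFacesContext;

        public class GenericTableLauncher {
            // Hypothetical action handler: stores the collection and data control names
            // that the adaptive pageDef expressions ${pageFlowScope.voName} and
            // ${pageFlowScope.dataControlName} resolve at runtime.
            public String launchDepartmentsTable() {
                Map<String, Object> pfs = AdfFacesContext.getCurrentInstance().getPageFlowScope();
                pfs.put("voName", "DepartmentsView1");
                pfs.put("dataControlName", "HRAppModuleDataControl");
                return "showGenericTable"; // assumed navigation outcome to the reusable task flow
            }
        }

    In a real application you would more likely declare voName and dataControlName as input parameters on the bounded task flow and map them in the task flow call, which keeps the contract explicit.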

    Read the article

  • JMS Step 5 - How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue

    - by John-Brown.Evans
    Welcome to another post in the series of blogs which demonstrates how to use JMS queues in a SOA context. The previous posts were: JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue Today we will create a BPEL process which will read (dequeue) the message from the JMS queue, which we enqueued in the last example. The JMS adapter will dequeue the full XML payload from the queue. 1. Recap and Prerequisites In the previous examples, we created a JMS Queue, a Connection Factory and a Connection Pool in the WebLogic Server Console. Then we designed and deployed a BPEL composite, which took a simple XML payload and enqueued it to the JMS queue. In this example, we will read that same message from the queue, using a JMS adapter and a BPEL process.
As many of the configuration steps required to read from that queue were done in the previous samples, this one will concentrate on the new steps. A summary of the required objects is listed below. To find out how to create them please see the previous samples. They also include instructions on how to verify the objects are set up correctly. WebLogic Server Objects Object Name Type JNDI Name TestConnectionFactory Connection Factory jms/TestConnectionFactory TestJMSQueue JMS Queue jms/TestJMSQueue eis/wls/TestQueue Connection Pool eis/wls/TestQueue Schema XSD File The following XSD file is used for the message format. It was created in the previous example and will be copied to the new process. stringPayload.xsd <?xml version="1.0" encoding="windows-1252" ?> <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"                 xmlns="http://www.example.org"                 targetNamespace="http://www.example.org"                 elementFormDefault="qualified">   <xsd:element name="exampleElement" type="xsd:string">   </xsd:element> </xsd:schema> JMS Message After executing the previous samples, the following XML message should be in the JMS queue located at jms/TestJMSQueue: <?xml version="1.0" encoding="UTF-8" ?><exampleElement xmlns="http://www.example.org">Test Message</exampleElement> JDeveloper Connection You will need a valid Application Server Connection in JDeveloper pointing to the SOA server which the process will be deployed to. 2. Create a BPEL Composite with a JMS Adapter Partner Link In the previous example, we created a composite in JDeveloper called JmsAdapterWriteSchema. In this one, we will create a new composite called JmsAdapterReadSchema. There are probably many ways of incorporating a JMS adapter into a SOA composite for incoming messages. One way is design the process in such a way that the adapter polls for new messages and when it dequeues one, initiates a SOA or BPEL instance. This is possibly the most common use case. Other use cases include mid-flow adapters, which are activated from within the BPEL process. In this example we will use a polling adapter, because it is the most simple to set up and demonstrate. But it has one disadvantage as a demonstrative model. When a polling adapter is active, it will dequeue all messages as soon as they reach the queue. This makes it difficult to monitor messages we are writing to the queue, because they will disappear from the queue as soon as they have been enqueued. To work around this, we will shut down the composite after deploying it and restart it as required. (Another solution for this would be to pause the consumption for the queue and resume consumption again if needed. This can be done in the WLS console JMS-Modules -> queue -> Control -> Consumption -> Pause/Resume.) We will model the composite as a one-way incoming process. Usually, a BPEL process will do something useful with the message after receiving it, such as passing it to a database or file adapter, a human workflow or external web service. But we only want to demonstrate how to dequeue a JMS message using BPEL and a JMS adapter, so we won’t complicate the design with further activities. However, we do want to be able to verify that we have read the message correctly, so the BPEL process will include a small piece of embedded java code, which will print the message to standard output, so we can view it in the SOA server’s log file. Alternatively, you can view the instance in the Enterprise Manager and verify the message. 
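    As an optional aside, before building the composite it can be reassuring to check that the message sitting in the queue really does conform to stringPayload.xsd. A small standalone sketch using the standard javax.xml.validation API is shown below; the schema file path is a placeholder, and this step is not part of the JDeveloper procedure described in the post.

        import java.io.StringReader;
        import javax.xml.XMLConstants;
        import javax.xml.transform.stream.StreamSource;
        import javax.xml.validation.Schema;
        import javax.xml.validation.SchemaFactory;
        import javax.xml.validation.Validator;

        public class PayloadCheck {
            public static void main(String[] args) throws Exception {
                // The message enqueued by JmsAdapterWriteSchema in the previous example.
                String payload = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
                        + "<exampleElement xmlns=\"http://www.example.org\">Test Message</exampleElement>";

                // Path to the schema file copied into the project (placeholder path).
                SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
                Schema schema = factory.newSchema(new StreamSource("xsd/stringPayload.xsd"));

                Validator validator = schema.newValidator();
                validator.validate(new StreamSource(new StringReader(payload)));
                System.out.println("Payload is valid against stringPayload.xsd");
            }
        }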
The following steps are all executed in JDeveloper. Create the project in the same JDeveloper application used for the previous examples or create a new one. Create a SOA Project Create a new project and choose SOA Tier > SOA Project as its type. Name it JmsAdapterReadSchema. When prompted for the composite type, choose Empty Composite. Create a JMS Adapter Partner Link In the composite editor, drag a JMS adapter over from the Component Palette to the left-hand swim lane, under Exposed Services. This will start the JMS Adapter Configuration Wizard. Use the following entries: Service Name: JmsAdapterRead Oracle Enterprise Messaging Service (OEMS): Oracle WebLogic JMS AppServer Connection: Use an application server connection pointing to the WebLogic server on which the JMS queue and connection factory mentioned under Prerequisites above are located. Adapter Interface > Interface: Define from operation and schema (specified later) Operation Type: Consume Message Operation Name: Consume_message Consume Operation Parameters Destination Name: Press the Browse button, select Destination Type: Queues, then press Search. Wait for the list to populate, then select the entry for TestJMSQueue , which is the queue created in a previous example. JNDI Name: The JNDI name to use for the JMS connection. As in the previous example, this is probably the most common source of error. This is the JNDI name of the JMS adapter’s connection pool created in the WebLogic Server and which points to the connection factory. JDeveloper does not verify the value entered here. If you enter a wrong value, the JMS adapter won’t find the queue and you will get an error message at runtime, which is very difficult to trace. In our example, this is the value eis/wls/TestQueue . (See the earlier step on how to create a JMS Adapter Connection Pool in WebLogic Server for details.) Messages/Message SchemaURL: We will use the XSD file created during the previous example, in the JmsAdapterWriteSchema project to define the format for the incoming message payload and, at the same time, demonstrate how to import an existing XSD file into a JDeveloper project. Press the magnifying glass icon to search for schema files. In the Type Chooser, press the Import Schema File button. Select the magnifying glass next to URL to search for schema files. Navigate to the location of the JmsAdapterWriteSchema project > xsd and select the stringPayload.xsd file. Check the “Copy to Project” checkbox, press OK and confirm the following Localize Files popup. Now that the XSD file has been copied to the local project, it can be selected from the project’s schema files. Expand Project Schema Files > stringPayload.xsd and select exampleElement: string . Press Next and Finish, which will complete the JMS Adapter configuration.Save the project. Create a BPEL Component Drag a BPEL Process from the Component Palette (Service Components) to the Components section of the composite designer. Name it JmsAdapterReadSchema and select Template: Define Service Later and press OK. Wire the JMS Adapter to the BPEL Component Now wire the JMS adapter to the BPEL process, by dragging the arrow from the adapter to the BPEL process. A Transaction Properties popup will be displayed. Set the delivery mode to async.persist. This completes the steps at the composite level. 3 . 
Complete the BPEL Process Design Invoke the BPEL Flow via the JMS Adapter Open the BPEL component by double-clicking it in the design view of the composite.xml, or open it from the project navigator by selecting the JmsAdapterReadSchema.bpel file. This will display the BPEL process in the design view. You should see the JmsAdapterRead partner link in the left-hand swim lane. Drag a Receive activity onto the BPEL flow diagram, then drag a wire (left-hand yellow arrow) from it to the JMS adapter. This will open the Receive activity editor. Auto-generate the variable by pressing the green “+” button and check the “Create Instance” checkbox. This will result in a BPEL instance being created when a new JMS message is received. At this point it would actually be OK to compile and deploy the composite and it would pick up any messages from the JMS queue. In fact, you can do that to test it, if you like. But it is very rudimentary and would not be doing anything useful with the message. Also, you could only verify the actual message payload by looking at the instance’s flow in the Enterprise Manager. There are various other possibilities; we could pass the message to another web service, write it to a file using a file adapter or to a database via a database adapter etc. But these will all introduce unnecessary complications to our sample. So, to keep it simple, we will add a small piece of Java code to the BPEL process which will write the payload to standard output. This will be written to the server’s log file, which will be easy to monitor. Add a Java Embedding Activity First get the full name of the process’s input variable, as this will be needed for the Java code. Go to the Structure pane and expand Variables > Process > Variables. Then expand the input variable, for example, "Receive1_Consume_Message_InputVariable > body > ns2:exampleElement”, and note variable’s name and path, if they are different from this one. Drag a Java Embedding activity from the Component Palette (Oracle Extensions) to the BPEL flow, after the Receive activity, then open it to edit. Delete the example code and replace it with the following, replacing the variable parts with those in your sample, if necessary.: System.out.println("JmsAdapterReadSchema process picked up a message"); oracle.xml.parser.v2.XMLElement inputPayload =    (oracle.xml.parser.v2.XMLElement)getVariableData(                           "Receive1_Consume_Message_InputVariable",                           "body",                           "/ns2:exampleElement");   String inputString = inputPayload.getFirstChild().getNodeValue(); System.out.println("Input String is " + inputPayload.getFirstChild().getNodeValue()); Tip. If you are not sure of the exact syntax of the input variable, create an Assign activity in the BPEL process and copy the variable to another, temporary one. Then check the syntax created by the BPEL designer. This completes the BPEL process design in JDeveloper. Save, compile and deploy the process to the SOA server. 3. Test the Composite Shut Down the JmsAdapterReadSchema Composite After deploying the JmsAdapterReadSchema composite to the SOA server it is automatically activated. If there are already any messages in the queue, the adapter will begin polling them. To ease the testing process, we will deactivate the process first Log in to the Enterprise Manager (Fusion Middleware Control) and navigate to SOA > soa-infra (soa_server1) > default (or wherever you deployed your composite to) and click on JmsAdapterReadSchema [1.0] . 
Press the Shut Down button to disable the composite and confirm the following popup. Monitor Messages in the JMS Queue In a separate browser window, log in to the WebLogic Server Console and navigate to Services > Messaging > JMS Modules > TestJMSModule > TestJMSQueue > Monitoring. This is the location of the JMS queue we created in an earlier sample (see the prerequisites section of this sample). Check whether there are any messages already in the queue. If so, you can dequeue them using the QueueReceive Java program created in an earlier sample. This will ensure that the queue is empty and doesn’t contain any messages in the wrong format, which would cause the JmsAdapterReadSchema to fail. Send a Test Message In the Enterprise Manager, navigate to the JmsAdapterWriteSchema created earlier, press Test and send a test message, for example “Message from JmsAdapterWriteSchema”. Confirm that the message was written correctly to the queue by verifying it via the queue monitor in the WLS Console. Monitor the SOA Server’s Output A program deployed on the SOA server will write its standard output to the terminal window in which the server was started, unless this has been redirected to somewhere else, for example to a file. If it has not been redirected, go to the terminal session in which the server was started, otherwise open and monitor the file to which it was redirected. Re-Enable the JmsAdapterReadSchema Composite In the Enterprise Manager, navigate to the JmsAdapterReadSchema composite again and press Start Up to re-enable it. This should cause the JMS adapter to dequeue the test message and the following output should be written to the server’s standard output: JmsAdapterReadSchema process picked up a message. Input String is Message from JmsAdapterWriteSchema Note that you can also monitor the payload received by the process, by navigating to the the JmsAdapterReadSchema’s Instances tab in the Enterprise Manager. Then select the latest instance and view the flow of the BPEL component. The Receive activity will contain and display the dequeued message too. 4 . Troubleshooting This sample demonstrates how to dequeue an XML JMS message using a BPEL process and no additional functionality. For example, it doesn’t contain any error handling. Therefore, any errors in the payload will result in exceptions being written to the log file or standard output. If you get any errors related to the payload, such as Message handle error ... ORABPEL-09500 ... XPath expression failed to execute. An error occurs while processing the XPath expression; the expression is /ns2:exampleElement. ... etc. check that the variable used in the Java embedding part of the process was entered correctly. Possibly follow the tip mentioned in previous section. If this doesn’t help, you can delete the Java embedding part and simply verify the message via the flow diagram in the Enterprise Manager. Or use a different method, such as writing it to a file via a file adapter. This concludes this example. In the next post, we will begin with an AQ JMS example, which uses JMS to write to an Advanced Queue stored in the database. Best regards John-Brown Evans Oracle Technology Proactive Support Delivery
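    For the queue-monitoring step above, a stripped-down, QueueReceive-style Java client that drains and prints whatever is currently in jms/TestJMSQueue can also be handy. This is only a sketch along the lines of the QueueReceive.java sample from the earlier posts, not a copy of it; the t3:// URL is a placeholder for your own WebLogic host and port, while the two JNDI names come from the prerequisites table above.

        import java.util.Hashtable;
        import javax.jms.*;
        import javax.naming.Context;
        import javax.naming.InitialContext;

        public class DrainTestQueue {
            public static void main(String[] args) throws Exception {
                Hashtable<String, String> env = new Hashtable<>();
                env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
                env.put(Context.PROVIDER_URL, "t3://localhost:7001"); // placeholder: your WLS host:port

                Context ctx = new InitialContext(env);
                QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("jms/TestConnectionFactory");
                Queue queue = (Queue) ctx.lookup("jms/TestJMSQueue");

                QueueConnection con = qcf.createQueueConnection();
                QueueSession session = con.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                QueueReceiver receiver = session.createReceiver(queue);
                con.start();

                // Keep receiving until the queue is empty (2 second poll timeout).
                Message msg;
                while ((msg = receiver.receive(2000)) != null) {
                    if (msg instanceof TextMessage) {
                        System.out.println("Dequeued: " + ((TextMessage) msg).getText());
                    } else {
                        System.out.println("Dequeued non-text message: " + msg);
                    }
                }

                receiver.close();
                session.close();
                con.close();
                ctx.close();
            }
        }

    Remember that draining the queue this way removes the messages, so only run it before sending the test message you want the composite to pick up.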

    Read the article

  • Asynchrony in C# 5 (Part I)

    - by javarg
    I’ve been playing around with the new Async CTP preview available for download from Microsoft. It’s amazing how language trends are influencing the evolution of Microsoft’s development platform. More effort is being put into the language level today than in previous versions of .NET. In this post series I’ll review some major features contained in this release: asynchronous functions, TPL Dataflow, and the Task-based Asynchronous Pattern. Part I: Asynchronous Functions These are a means of expressing asynchronous operations. Such functions must return void or Task/Task<> (functions returning void let us implement Fire & Forget asynchronous operations). The two new keywords introduced are async and await. async: marks a function as asynchronous, indicating that some part of its execution may take place some time later (after the method call has returned). Thus, all async functions must include some kind of asynchronous operation. This keyword on its own does not make a function asynchronous though; its nature depends on its implementation. await: allows us to define operations inside a function that will be awaited for continuation (more on this later). Async function sample: Async/Await Sample async void ShowDateTimeAsync() {     while (true)     {         var client = new ServiceReference1.Service1Client();         var dt = await client.GetDateTimeTaskAsync();         Console.WriteLine("Current DateTime is: {0}", dt);         await TaskEx.Delay(1000);     } } The previous sample is a typical usage scenario for these new features. Suppose we query some external Web Service to get data (in this case the current DateTime) and we do so at regular intervals in order to refresh the user’s UI. Note the async and await keywords working together. The ShowDateTimeAsync method indicates its asynchronous nature to the caller using the keyword async (i.e. that it may complete after returning control to its caller). The await keyword indicates that the flow control of the method will continue executing asynchronously after client.GetDateTimeTaskAsync returns. The latter is the most important thing to understand about the behavior of this method and how this actually works. The flow control of the method will be reconstructed after any asynchronous operation completes (specified with the keyword await). This reconstruction of flow control is the real magic behind the scenes, and it is done by the C#/VB compilers. Note how we didn’t use any of the regular existing async patterns and we’ve defined the method very much like a synchronous one. Now, compare the following code snippet with the previous async/await version: Traditional UI Async void ComplicatedShowDateTime() {     var client = new ServiceReference1.Service1Client();     client.GetDateTimeCompleted += (s, e) =>     {         Console.WriteLine("Current DateTime is: {0}", e.Result);         client.GetDateTimeAsync();     };     client.GetDateTimeAsync(); } The previous implementation is somewhat similar to the first one shown, but more complicated. Note how the while loop is implemented as a chained callback to the same method (client.GetDateTimeAsync) inside the event handler (please, do not do this in your own application, this is just an example). How does it work? Using a state workflow (or jump table, actually), the compiler expands our code and creates the necessary steps to execute it, resuming pending operations after any asynchronous one. The intention of the new Async/Await pattern is to let us think and code as we normally do when designing an algorithm.
It also allows us to preserve the logical flow control of the program (without using any tricky coding patterns to accomplish this). The compiler will then create the necessary workflow to execute operations as they happen in time.

    Read the article

  • Subterranean IL: Exception handling 2

    - by Simon Cooper
    Control flow in and around exception handlers is tightly controlled, due to the various ways the handler blocks can be executed. To start off with, I'll describe what SEH does when an exception is thrown. Handling exceptions When an exception is thrown, the CLR stops program execution at the throw statement and searches up the call stack looking for an appropriate handler; catch clauses are analyzed, and filter blocks are executed (I'll be looking at filter blocks in a later post). Then, when an appropriate catch or filter handler is found, the stack is unwound to that handler, executing successive finally and fault handlers in their own stack contexts along the way, and program execution continues at the start of the catch handler. Because catch, fault, finally and filter blocks can be executed essentially out of the blue by the SEH mechanism, without any reference to preceding instructions, you can't use arbitrary branches in and out of exception handler blocks. Instead, you need to use specific instructions for control flow out of handler blocks: leave, endfinally/endfault, and endfilter. Exception handler control flow try blocks You cannot branch into or out of a try block or its handler using normal control flow instructions. The only way of entering a try block is by either falling through from preceding instructions, or by branching to the first instruction in the block. Once you are inside a try block, you can only leave it by throwing an exception or using the leave <label> instruction to jump to somewhere outside the block and its handler. The leave instruction signals the CLR to execute any finally handlers around the block. Most importantly, you cannot fall out of the block, and you cannot use a ret to return from the containing method (unlike in C#); you have to use leave to branch to a ret elsewhere in the method. As a side effect, leave empties the stack. catch blocks The only way of entering a catch block is if it is run by the SEH. At the start of the block execution, the thrown exception will be the only thing on the stack. The only way of leaving a catch block is to use throw, rethrow, or leave, in a similar way to try blocks. However, one thing you can do is use a leave to branch back to an arbitrary place in the handler's try block! In other words, you can do this: .try { // ... newobj instance void [mscorlib]System.Exception::.ctor() throw MidTry: // ... leave.s RestOfMethod } catch [mscorlib]System.Exception { // ... leave.s MidTry } RestOfMethod: // ... As far as I know, this mechanism is not exposed in C# or VB. finally/fault blocks The only way of entering a finally or fault block is via the SEH, either as the result of a leave instruction in the corresponding try block, or as part of handling an exception. The only way to leave a finally or fault block is to use endfinally or endfault (both compile to the same binary representation), which continues execution after the finally/fault block, or, if the block was executed as part of handling an exception, signals that the SEH can continue walking the stack. filter blocks I'll be covering filters in a separate blog post. They're quite different to the others, and have their own special semantics. Phew! Complicated stuff, but it's important to know if you're writing or outputting exception handlers in IL. Dealing with the C# compiler is probably best saved for the next post.

    Read the article

  • Program Structure Design Tools? (Top Down Design)

    - by Lee Olayvar
    I have been looking to expand my methodologies to better involve unit testing, and I stumbled upon Behavior-Driven Development (namely Cucumber, and a few others). I am quite intrigued by the concept, as I have never been able to properly design top-down, only because keeping track of the design gets lost without a decent way to record it. So on that note, in a mostly language-agnostic way, are there any useful tools out there I am (probably) unaware of? E.g., I have often been tempted to try building flow charts for my programs, but I am not sure how much that will help, and it seems a bit confusing to me how I could make a complex enough flow chart to handle the logic of a full program and all its features. I.e., it just seems like flow charts would be limiting in the design scheme, or would possibly grow to an unmaintainable scale. BDD methods are nice, but with a system that is so tied to structure, tying into the language and unit testing seems like a must (for it to be worth it), and it seems to be hard to find something that works well with both Python and Java (my two main languages). So anyway, on that note, any comments are much appreciated. I have searched around on here and it seems like top-down design is a well-discussed topic, but I haven't seen too much reference to the tools themselves, e.g., flow chart programs, etc. I am on Linux, if it matters (in the case of programs).
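    Since Cucumber is mentioned above, here is a minimal sketch of what a Cucumber-JVM step definition class looks like on the Java side. It assumes the io.cucumber.java.en annotations (package names vary between Cucumber versions) and a hypothetical Calculator class, purely to show how the plain-text scenario ties the top-down design back into unit-testable code.

        // Matches a feature file like:
        //   Scenario: Adding two numbers
        //     Given a calculator
        //     When I add 2 and 3
        //     Then the result is 5
        import io.cucumber.java.en.Given;
        import io.cucumber.java.en.When;
        import io.cucumber.java.en.Then;

        public class CalculatorSteps {
            // Hypothetical production class under test.
            static class Calculator {
                int add(int a, int b) { return a + b; }
            }

            private Calculator calculator;
            private int result;

            @Given("a calculator")
            public void a_calculator() {
                calculator = new Calculator();
            }

            @When("I add {int} and {int}")
            public void i_add(int a, int b) {
                result = calculator.add(a, b);
            }

            @Then("the result is {int}")
            public void the_result_is(int expected) {
                if (result != expected) {
                    throw new AssertionError("expected " + expected + " but got " + result);
                }
            }
        }

    The feature files act as the linear, text-based design record the asker is after, while the step definitions keep each design statement executable; Cucumber also has a Python counterpart (behave/cucumber-style tools), so the same feature text can drive both languages.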

    Read the article

  • The blocking nature of aggregates

    - by Rob Farley
    I wrote a post recently about how query tuning isn’t just about how quickly the query runs – that if you have something (such as SSIS) that is consuming your data (and probably introducing a bottleneck), then it might be more important to have a query which focuses on getting the first bit of data out. You can read that post here.  In particular, we looked at two operators that could be used to ensure that a query returns only Distinct rows. and The Sort operator pulls in all the data, sorts it (discarding duplicates), and then pushes out the remaining rows. The Hash Match operator performs a Hashing function on each row as it comes in, and then looks to see if it’s created a Hash it’s seen before. If not, it pushes the row out. The Sort method is quicker, but has to wait until it’s gathered all the data before it can do the sort, and therefore blocks the data flow. But that was my last post. This one’s a bit different. This post is going to look at how Aggregate functions work, which ties nicely into this month’s T-SQL Tuesday. I’ve frequently explained about the fact that DISTINCT and GROUP BY are essentially the same function, although DISTINCT is the poorer cousin because you have less control over it, and you can’t apply aggregate functions. Just like the operators used for Distinct, there are different flavours of Aggregate operators – coming in blocking and non-blocking varieties. The example I like to use to explain this is a pile of playing cards. If I’m handed a pile of cards and asked to count how many cards there are in each suit, it’s going to help if the cards are already ordered. Suppose I’m playing a game of Bridge, I can easily glance at my hand and count how many there are in each suit, because I keep the pile of cards in order. Moving from left to right, I could tell you I have four Hearts in my hand, even before I’ve got to the end. By telling you that I have four Hearts as soon as I know, I demonstrate the principle of a non-blocking operation. This is known as a Stream Aggregate operation. It requires input which is sorted by whichever columns the grouping is on, and it will release a row as soon as the group changes – when I encounter a Spade, I know I don’t have any more Hearts in my hand. Alternatively, if the pile of cards are not sorted, I won’t know how many Hearts I have until I’ve looked through all the cards. In fact, to count them, I basically need to put them into little piles, and when I’ve finished making all those piles, I can count how many there are in each. Because I don’t know any of the final numbers until I’ve seen all the cards, this is blocking. This performs the aggregate function using a Hash Match. Observant readers will remember this from my Distinct example. You might remember that my earlier Hash Match operation – used for Distinct Flow – wasn’t blocking. But this one is. They’re essentially doing a similar operation, applying a Hash function to some data and seeing if the set of values have been seen before, but before, it needs more information than the mere existence of a new set of values, it needs to consider how many of them there are. A lot is dependent here on whether the data coming out of the source is sorted or not, and this is largely determined by the indexes that are being used. If you look in the Properties of an Index Scan, you’ll be able to see whether the order of the data is required by the plan. A property called Ordered will demonstrate this. 
In this particular example, the second plan is significantly faster, but is dependent on having ordered data. In fact, if I force a Stream Aggregate on unordered data (which I’m doing by telling it to use a different index), a Sort operation is needed, which makes my plan a lot slower. This is all very straight-forward stuff, and information that most people are fully aware of. I’m sure you’ve all read my good friend Paul White (@sql_kiwi)’s post on how the Query Optimizer chooses which type of aggregate function to apply. But let’s take a look at SQL Server Integration Services. SSIS gives us a Aggregate transformation for use in Data Flow Tasks, but it’s described as Blocking. The definitive article on Performance Tuning SSIS uses Sort and Aggregate as examples of Blocking Transformations. I’ve just shown you that Aggregate operations used by the Query Optimizer are not always blocking, but that the SSIS Aggregate component is an example of a blocking transformation. But is it always the case? After all, there are plenty of SSIS Performance Tuning talks out there that describe the value of sorted data in Data Flow Tasks, describing the IsSorted property that can be set through the Advanced Editor of your Source component. And so I set about testing the Aggregate transformation in SSIS, to prove for sure whether providing Sorted data would let the Aggregate transform behave like a Stream Aggregate. (Of course, I knew the answer already, but it helps to be able to demonstrate these things). A query that will produce a million rows in order was in order. Let me rephrase. I used a query which produced the numbers from 1 to 1000000, in a single field, ordered. The IsSorted flag was set on the source output, with the only column as SortKey 1. Performing an Aggregate function over this (counting the number of rows per distinct number) should produce an additional column with 1 in it. If this were being done in T-SQL, the ordered data would allow a Stream Aggregate to be used. In fact, if the Query Optimizer saw that the field had a Unique Index on it, it would be able to skip the Aggregate function completely, and just insert the value 1. This is a shortcut I wouldn’t be expecting from SSIS, but certainly the Stream behaviour would be nice. Unfortunately, it’s not the case. As you can see from the screenshots above, the data is pouring into the Aggregate function, and not being released until all million rows have been seen. It’s not doing a Stream Aggregate at all. This is expected behaviour. (I put that in bold, because I want you to realise this.) An SSIS transformation is a piece of code that runs. It’s a physical operation. When you write T-SQL and ask for an aggregation to be done, it’s a logical operation. The physical operation is either a Stream Aggregate or a Hash Match. In SSIS, you’re telling the system that you want a generic Aggregation, that will have to work with whatever data is passed in. I’m not saying that it wouldn’t be possible to make a sometimes-blocking aggregation component in SSIS. A Custom Component could be created which could detect whether the SortKeys columns of the input matched the Grouping columns of the Aggregation, and either call the blocking code or the non-blocking code as appropriate. One day I’ll make one of those, and publish it on my blog. I’ve done it before with a Script Component, but as Script components are single-use, I was able to handle the data knowing everything about my data flow already. 
As per my previous post – there are a lot of aspects in which tuning SSIS and tuning execution plans use similar concepts. In both situations, it really helps to have a feel for what’s going on behind the scenes. Considering whether an operation is blocking or not is extremely relevant to performance, and that it’s not always obvious from the surface. In a future post, I’ll show the impact of blocking v non-blocking and synchronous v asynchronous components in SSIS, using some of LobsterPot’s Script Components and Custom Components as examples. When I get that sorted, I’ll make a Stream Aggregate component available for download.
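    The stream-versus-hash distinction the post describes is easy to see outside SQL Server too. The sketch below is plain Java and has nothing to do with SSIS or SQL Server internals; it simply counts "cards per suit" both ways: the streaming version emits each count as soon as the group changes but requires sorted input, while the hash version accepts any order but cannot emit anything until it has seen every row.

        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;

        public class AggregateStyles {
            // Non-blocking: input must be sorted by suit; a count is emitted
            // the moment the suit changes (like a Stream Aggregate).
            static void streamCount(List<String> sortedSuits) {
                String current = null;
                int count = 0;
                for (String suit : sortedSuits) {
                    if (!suit.equals(current)) {
                        if (current != null) System.out.println(current + ": " + count);
                        current = suit;
                        count = 0;
                    }
                    count++;
                }
                if (current != null) System.out.println(current + ": " + count);
            }

            // Blocking: works on unsorted input, but nothing can be emitted
            // until every row has been bucketed (like a Hash Match aggregate).
            static void hashCount(List<String> suits) {
                Map<String, Integer> buckets = new LinkedHashMap<>();
                for (String suit : suits) buckets.merge(suit, 1, Integer::sum);
                buckets.forEach((suit, count) -> System.out.println(suit + ": " + count));
            }

            public static void main(String[] args) {
                streamCount(List.of("Clubs", "Clubs", "Hearts", "Hearts", "Hearts", "Spades"));
                hashCount(List.of("Hearts", "Clubs", "Spades", "Hearts", "Clubs", "Hearts"));
            }
        }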

    Read the article

  • SQL SERVER – SSIS Look Up Component – Cache Mode – Notes from the Field #028

    - by Pinal Dave
    [Notes from Pinal]: Lots of people think that SSIS is all about arranging various operations together in one logical flow. Well, the understanding is absolutely correct, but the implementation of the same is not as easy as it seems. Similarly most of the people think lookup component is just component which does look up for additional information and does not pay much attention to it. Due to the same reason they do not pay attention to the same and eventually get very bad performance. Linchpin People are database coaches and wellness experts for a data driven world. In this 28th episode of the Notes from the Fields series database expert Tim Mitchell (partner at Linchpin People) shares very interesting conversation related to how to write a good lookup component with Cache Mode. In SQL Server Integration Services, the lookup component is one of the most frequently used tools for data validation and completion.  The lookup component is provided as a means to virtually join one set of data to another to validate and/or retrieve missing values.  Properly configured, it is reliable and reasonably fast. Among the many settings available on the lookup component, one of the most critical is the cache mode.  This selection will determine whether and how the distinct lookup values are cached during package execution.  It is critical to know how cache modes affect the result of the lookup and the performance of the package, as choosing the wrong setting can lead to poorly performing packages, and in some cases, incorrect results. Full Cache The full cache mode setting is the default cache mode selection in the SSIS lookup transformation.  Like the name implies, full cache mode will cause the lookup transformation to retrieve and store in SSIS cache the entire set of data from the specified lookup location.  As a result, the data flow in which the lookup transformation resides will not start processing any data buffers until all of the rows from the lookup query have been cached in SSIS. The most commonly used cache mode is the full cache setting, and for good reason.  The full cache setting has the most practical applications, and should be considered the go-to cache setting when dealing with an untested set of data. With a moderately sized set of reference data, a lookup transformation using full cache mode usually performs well.  Full cache mode does not require multiple round trips to the database, since the entire reference result set is cached prior to data flow execution. There are a few potential gotchas to be aware of when using full cache mode.  First, you can see some performance issues – memory pressure in particular – when using full cache mode against large sets of reference data.  If the table you use for the lookup is very large (either deep or wide, or perhaps both), there’s going to be a performance cost associated with retrieving and caching all of that data.  Also, keep in mind that when doing a lookup on character data, full cache mode will always do a case-sensitive (and in some cases, space-sensitive) string comparison even if your database is set to a case-insensitive collation.  This is because the in-memory lookup uses a .NET string comparison (which is case- and space-sensitive) as opposed to a database string comparison (which may be case sensitive, depending on collation).  
There’s a relatively easy workaround in which you can use the UPPER() or LOWER() function in the pipeline data and the reference data to ensure that case differences do not impact the success of your lookup operation.  Again, neither of these present a reason to avoid full cache mode, but should be used to determine whether full cache mode should be used in a given situation. Full cache mode is ideally useful when one or all of the following conditions exist: The size of the reference data set is small to moderately sized The size of the pipeline data set (the data you are comparing to the lookup table) is large, is unknown at design time, or is unpredictable Each distinct key value(s) in the pipeline data set is expected to be found multiple times in that set of data Partial Cache When using the partial cache setting, lookup values will still be cached, but only as each distinct value is encountered in the data flow.  Initially, each distinct value will be retrieved individually from the specified source, and then cached.  To be clear, this is a row-by-row lookup for each distinct key value(s). This is a less frequently used cache setting because it addresses a narrower set of scenarios.  Because each distinct key value(s) combination requires a relational round trip to the lookup source, performance can be an issue, especially with a large pipeline data set to be compared to the lookup data set.  If you have, for example, a million records from your pipeline data source, you have the potential for doing a million lookup queries against your lookup data source (depending on the number of distinct values in the key column(s)).  Therefore, one has to be keenly aware of the expected row count and value distribution of the pipeline data to safely use partial cache mode. Using partial cache mode is ideally suited for the conditions below: The size of the data in the pipeline (more specifically, the number of distinct key column) is relatively small The size of the lookup data is too large to effectively store in cache The lookup source is well indexed to allow for fast retrieval of row-by-row values No Cache As you might guess, selecting no cache mode will not add any values to the lookup cache in SSIS.  As a result, every single row in the pipeline data set will require a query against the lookup source.  Since no data is cached, it is possible to save a small amount of overhead in SSIS memory in cases where key values are not reused.  In the real world, I don’t see a lot of use of the no cache setting, but I can imagine some edge cases where it might be useful. As such, it’s critical to know your data before choosing this option.  Obviously, performance will be an issue with anything other than small sets of data, as the no cache setting requires row-by-row processing of all of the data in the pipeline. I would recommend considering the no cache mode only when all of the below conditions are true: The reference data set is too large to reasonably be loaded into SSIS memory The pipeline data set is small and is not expected to grow There are expected to be very few or no duplicates of the key values(s) in the pipeline data set (i.e., there would be no benefit from caching these values) Conclusion The cache mode, an often-overlooked setting on the SSIS lookup component, represents an important design decision in your SSIS data flow.  Choosing the right lookup cache mode directly impacts the fidelity of your results and the performance of package execution.  
    Conclusion

    The cache mode, an often-overlooked setting on the SSIS lookup component, represents an important design decision in your SSIS data flow. Choosing the right lookup cache mode directly impacts the fidelity of your results and the performance of package execution. Know how this selection impacts your ETL loads, and you'll end up with more reliable, faster packages.

    If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SSIS

    Read the article

  • What happens if I pierce a TFT monitor?

    - by sharptooth
    What happens if I pierce a TFT monitor screen with something sharp (say, a nail)? Will only the pierced region malfunction, or the whole screen? There's an opinion that in this case the entire screen will "flow out" (more specifically, "the liquid crystals will flow out") and stop working completely. Is that true, or is it an urban legend?

    Read the article

  • tc rules block traffic from some hosts at network

    - by user139430
    I have a problem I cannot solve. The script, which sets the rules for traffic shaping, is blocking traffic from some hosts. If I remove all the rules, then it works. I cannot understand why. Here is my script:

    #!/bin/sh
    cmdTC=/sbin/tc

    rateLANDl="60mbit"
    ceilLANDl="60mbit"
    rateLANUl="40mbit"
    ceilLANUl="40mbit"
    quantLAN="1514"

    # Nowadays the bandwidth limit is set to 100mbit.
    # We divide it into a 60mbit download band and a 40mbit upload band.
    rateHiDl="30mbit"
    ceilHiDl="60mbit"
    rateHiUl="20mbit"
    ceilHiUl="40mbit"
    quantHi="1514"

    rateLoDl="30mbit"
    ceilLoDl="60mbit"
    rateLoUl="20mbit"
    ceilLoUl="40mbit"
    quantLo="1514"

    devNIF=eth0
    devFIF=ifb0

    modprobe ifb
    ip link set $devFIF up 2>/dev/null
    #exit 0

    ############################################################
    # Remove disciplines from network and fake interfaces
    ############################################################
    $cmdTC qdisc del dev $devNIF root 2>/dev/null
    $cmdTC qdisc del dev $devFIF root 2>/dev/null
    $cmdTC qdisc del dev $devNIF ingress 2>/dev/null

    if [ "$1" = "down" ]; then
        exit 0
    fi

    ############################################################
    # Create disciplines for network interface
    ############################################################
    $cmdTC qdisc add dev $devNIF root handle 1:0 htb default 12

    # Create classes for network interface
    $cmdTC class add dev $devNIF parent 1:0 classid 1:1 htb rate ${rateLANDl} ceil ${ceilLANDl} quantum ${quantLAN}
    $cmdTC class add dev $devNIF parent 1:1 classid 1:11 htb rate ${rateHiDl} ceil ${ceilHiDl} quantum ${quantHi}
    $cmdTC class add dev $devNIF parent 1:1 classid 1:12 htb rate ${rateLoDl} ceil ${ceilLoDl} quantum ${quantLo}
    $cmdTC qdisc add dev $devNIF parent 1:11 handle 111: sfq perturb 10
    $cmdTC qdisc add dev $devNIF parent 1:12 handle 112: sfq perturb 10

    # Create filters for network interface
    $cmdTC filter add dev $devNIF protocol all parent 1:0 u32 match ip dst 10.252.2.0/24 flowid 1:11
    $cmdTC filter add dev $devNIF protocol all parent 111: handle 111 flow hash keys dst divisor 1024 baseclass 1:11
    $cmdTC filter add dev $devNIF protocol all parent 112: handle 112 flow hash keys dst divisor 1024 baseclass 1:12

    ############################################################
    # Create disciplines for fake interface
    ############################################################
    $cmdTC qdisc add dev $devFIF root handle 1:0 htb default 12

    # Create classes for fake interface
    $cmdTC class add dev $devFIF parent 1:0 classid 1:1 htb rate ${rateLANUl} ceil ${ceilLANUl} quantum ${quantLAN}
    $cmdTC class add dev $devFIF parent 1:1 classid 1:11 htb rate ${rateHiUl} ceil ${ceilHiUl} quantum ${quantHi}
    $cmdTC class add dev $devFIF parent 1:1 classid 1:12 htb rate ${rateLoUl} ceil ${ceilLoUl} quantum ${quantLo}
    $cmdTC qdisc add dev $devFIF parent 1:11 handle 111: sfq perturb 10
    $cmdTC qdisc add dev $devFIF parent 1:12 handle 112: sfq perturb 10

    # Create filters for fake interface
    $cmdTC filter add dev $devFIF protocol all parent 1:0 u32 match ip src 10.252.2.0/24 flowid 1:11
    $cmdTC filter add dev $devFIF protocol all parent 111: handle 111 flow hash keys src divisor 1024 baseclass 1:11
    $cmdTC filter add dev $devFIF protocol all parent 112: handle 112 flow hash keys src divisor 1024 baseclass 1:12

    ############################################################
    # Create redirect disciplines from network to fake interface
    ############################################################
    $cmdTC qdisc add dev $devNIF handle ffff:0 ingress
    $cmdTC filter add dev $devNIF parent ffff:0 protocol all u32 match u32 0 0 action mirred egress redirect dev $devFIF

    Here is my /etc/modules:

    loop
    ifb
    ppp_mppe
    nf_conntrack_pptp
    nt_conntrack_proto_gre
    nf_nat_pptp
    nf_nat_proto_gre

    The system is Linux wall 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux

    Read the article

  • How can I login (send text) with minicom?

    - by Travis
    I am attempting to log in from a Linux client to a set-top box running Linux, via a USB-to-serial cable. When I power on the device, I can see the init messages scroll past, and I get to the login prompt, like this: (none) login: but I cannot log in. The cursor stops flashing as if it is receiving input, but there is no response. My serial port setup is:
    Device: /dev/ttyUSB0
    Bps: 115200 8N1
    Hardware Flow Control: Yes
    Software Flow Control: No
    Any help would be greatly appreciated!

    Read the article

  • Spring Webflow in Grails keeping plenty of hibernate sessions open

    - by Pavel P
    Hi, I have an Internet app running on Grails 1.1.2 and it integrates the Spring WebFlow mechanism. The problem is that some bots ignore robots.txt and enter the flow quite often. Because the second step of the flow needs some human intelligence, the bot leaves the flow open after the first step. This causes a lot of open flows, which leads to a lot of abandoned open Hibernate sessions. Do you know of a common clean-up mechanism for this kind of unattended flow (plus Hibernate sessions) in Grails + Spring WebFlow? Thanks, Pavel

    Read the article

  • SSIS DTSX File Repair Tool

    - by Eric Ness
    I'm working with an SSIS 2005 file that crashes Visual Studio 2005 on my workstation. This happens when I open the data flow diagram and Visual Studio attempts to validate the package. I can open it successfully on another computer though. The package itself is fairly simple and only has two control flow tasks and maybe ten tasks in the data flow. I'm wondering if there is a tool that goes through the XML in the dtsx file and repairs any issues or if this is even necessary. The dtsx file is about 171 kB and it seems like there's a lot in it considering what a simple package it is.

    Read the article

  • Grails MySql processList

    - by Masiar Ighani
    Hello, I have a Grails application with a webflow. I store the inner flow objects of interest in the conversation scope. After entering and leaving the flow a few times, I see that the single user connected to the DB (MySql) generates a lot of threads on the MySql server which are not released. The processlist in MySql shows me the threads in sleeping mode, and a netstat on the client shows me established connections to the MySql server. I assume the connections are held active and not released. But why is that? What exactly does Grails do when entering and leaving a flow? Why are so many connections opened and not closed? Any help would be appreciated. Regards, masiar

    Read the article

  • Finding the order of method calls in Eclipse

    - by Chathuranga Chandrasekara
    Suppose I have a big program that consists of hundreds of methods, and the program flow changes depending on the nature of the input. Suppose I want to make a change to the original flow, and it is a big hassle to find the call hierarchy/references and understand the flow. Is there any solution for this within Eclipse? Or a plugin? As an example, I just need a log of method names in the order they are called in time. Then I don't need to worry about the methods that are not relevant to my "given input". Update: Using debug mode in Eclipse or adding print messages is not feasible. The program is sooooo big. :)

    Read the article

  • WindowsForms difference to simple Console App

    - by daemonfire300
    I recently started to "port" my console projects to WinForms, but it seems I am failing badly at it. I am simply used to a console structure: my classes interact with each other depending on the input coming from the console. A simple flow: Input -> ProcessInput -> Execute -> Output -> wait for input. Now I have this big Form1.cs (etc.) and the "Application.Run(Form1);", but I really have no clue how my classes can interact with the form and create a flow like the one described above. I mean, I just have these "...._Click(object sender....)" handlers for each "item" inside the form. Now I do not know where to place/start my flow/loop, or how my classes can interact with the form.
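    For what it's worth, here is a minimal C# sketch (all class and control names are invented) of how that console-style flow typically maps onto WinForms: the form only wires up events, and the existing processing logic lives in its own class and is called from the Click handler:

        using System;
        using System.Windows.Forms;

        // Hypothetical sketch: the event handler replaces the "wait for input" loop.
        public class MainForm : Form
        {
            private readonly TextBox inputBox = new TextBox();
            private readonly Button runButton = new Button { Text = "Run" };
            private readonly Label outputLabel = new Label { AutoSize = true };
            private readonly CommandProcessor processor = new CommandProcessor();

            public MainForm()
            {
                runButton.Click += RunButton_Click;      // the form only wires events
                inputBox.Top = 0; runButton.Top = 30; outputLabel.Top = 60;
                Controls.AddRange(new Control[] { inputBox, runButton, outputLabel });
            }

            private void RunButton_Click(object sender, EventArgs e)
            {
                // Input -> ProcessInput -> Execute -> Output, then simply wait
                // for the next event instead of looping for console input.
                string result = processor.Execute(inputBox.Text);
                outputLabel.Text = result;
            }

            [STAThread]
            static void Main()
            {
                Application.Run(new MainForm());
            }
        }

        // Stand-in for the classes that used to read from the console.
        public class CommandProcessor
        {
            public string Execute(string input) => "Processed: " + input;
        }

    The point of the sketch is that the event handler takes the place of the input loop; everything else can stay in the classes you already have.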

    Read the article

  • Finding the actual runtime call tree of a Java Program

    - by Chathuranga Chandrasekara
    Suppose I have a big program that consists of hundreds of methods, and the program flow changes depending on the nature of the input. Suppose I want to make a change to the original flow, and it is a big hassle to find the call hierarchy/references and understand the flow. Is there any solution for this within Eclipse? Or a plugin? As an example, I just need a log of method names in the order they are called in time. Then I don't need to worry about the methods that are not relevant to my "given input". Update: Using debug mode in Eclipse or adding print messages is not feasible. The program is sooooo big. :)

    Read the article

  • C#, WindowsForms difference to simple Console App

    - by daemonfire300
    Hi there, I recently started to "port" my console projects to WinForms, but it seems I am failing badly at it. I am simply used to a console structure: my classes interact with each other depending on the input coming from the console. A simple flow: Input -> ProcessInput -> Execute -> Output -> wait for input. Now I have this big Form1.cs (etc.) and the "Application.Run(Form1);", but I really have no clue how my classes can interact with the form and create a flow like the one described above. I mean, I just have these "...._Click(object sender....)" handlers for each "item" inside the form. Now I do not know where to place/start my flow/loop, or how my classes can interact with the form. Thanks and regards, daemonfire

    Read the article

  • What is the best way in Silverlight to make areas of a large graphic clickable?

    - by Edward Tanguay
    In a Silverlight application I have large images with flow charts on them. I need to handle clicks on specific hotspots of the image where the flow chart boxes are. Since the flow charts will always be different, the information about where the hotspots are has to be dynamic, e.g. a list of coordinates. I've found articles like this one, but I don't need that level of detail (e.g. the outlines of countries), just simple rectangle and circle areas. I've also found articles that talk about overlaying an HTML image map over the Silverlight application, but it has to be easier than that. What is the best way to handle clicks on specific areas of an image in Silverlight?
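    One possible approach, sketched below with invented names and coordinates (rectangles only; a circle hotspot would use a centre-and-radius distance test instead of Rect.Contains): keep the hotspot geometry in a list and hit-test the click position relative to the image:

        using System.Collections.Generic;
        using System.Windows;
        using System.Windows.Controls;

        // Hypothetical sketch: the hotspot list would normally be built from the
        // dynamic flow-chart data; the names and coordinates here are made up.
        public class FlowChartHotspots
        {
            private class Hotspot
            {
                public Rect Area;
                public string BoxId;
            }

            private readonly List<Hotspot> hotspots = new List<Hotspot>
            {
                new Hotspot { Area = new Rect(40, 60, 120, 50), BoxId = "start" },
                new Hotspot { Area = new Rect(40, 160, 120, 50), BoxId = "decision1" },
            };

            public void Attach(Image flowChartImage)
            {
                flowChartImage.MouseLeftButtonUp += (sender, e) =>
                {
                    // Click position relative to the image control.
                    Point clickPoint = e.GetPosition(flowChartImage);
                    foreach (Hotspot hotspot in hotspots)
                    {
                        if (hotspot.Area.Contains(clickPoint))
                        {
                            OnBoxClicked(hotspot.BoxId);
                            break;
                        }
                    }
                };
            }

            private void OnBoxClicked(string boxId)
            {
                MessageBox.Show("Clicked flow chart box: " + boxId);
            }
        }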

    Read the article

  • standard debugging way for javascript/jquery

    - by ZX12R
    This is my usual way to debug JavaScript: include alert(0); to break the flow and find out what is happening. Sometimes when I need multiple checkpoints I do alert('the flow is now in function 1'); alert('the flow is now in function 2'); or sometimes just alert('success'); I would like to know if there is a standard way of debugging, as I am finding my current method very intrusive. Thanks in advance. :)

    Read the article

  • SSIS - Limiting Concurrent Connections

    - by Bigtoe
    Hi folks, I am using SSIS to connect to a legacy mainframe database which allows only 5 concurrent connections at a time. I have a data flow task with many tables to transfer, and it fails because of this limitation. I have split up the data flow task into separate data flows and this is working for the moment, but it is not optimal, as they need to be sequenced and one large transfer in a flow holds up subsequent transfers. Does anyone have any idea how to limit the number of connections in a single data flow? I had a look at using the Engine Threads setting, but this did not make any difference. Any help much appreciated.

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >