Search Results

Search found 1784 results on 72 pages for 'depth'.


  • VirtualBox 3.2 is released! A Red Letter Day?

    - by Fat Bloke
    Big news today! A new release of VirtualBox packed full of innovation and improvements. Over the next few weeks we'll take a closer look at some of these new features in a lot more depth, but today we'll whet your appetite with the headline descriptions. To start with, we should point out that this is the first Oracle-branded version, which makes today a real Red-letter day ;-)

    Oracle VM VirtualBox 3.2

    Version 3.2 moves VirtualBox forward in 3 main areas (handily, all beginning with "P"): performance, power and supported guest operating system platforms. Let's take a look:

    Performance

    - New: Latest Intel hardware support - Harnessing the latest in chip-level support for virtualization, VirtualBox 3.2 supports the new Intel Core i5 and i7 processors and the Intel Xeon processor 5600 Series, with support for Unrestricted Guest Execution bringing faster boot times for everything from Windows to Solaris guests;
    - New: Large Page support - Reducing the size and overhead of key system resources, Large Page support delivers increased performance by enabling faster lookups and shorter table creation times;
    - New: In-hypervisor Networking - Significant optimization of the networking subsystem has reduced context switching between guests and host, increasing network throughput by up to 25%;
    - New: Storage I/O subsystem - VirtualBox 3.2 offers a completely re-worked virtual disk subsystem which utilizes asynchronous I/O to achieve high performance whilst maintaining high data integrity;
    - New: Remote Video Acceleration - The unique built-in VirtualBox Remote Display Protocol (VRDP), which is primarily used in virtual desktop infrastructure deployments, has been enhanced to deliver video acceleration. This delivers a rich user experience coupled with reduced computational expense, which is vital when servers are running hundreds of virtual machines;

    Power

    - New: Page Fusion - Traditional Page Sharing techniques have suffered from long and expensive cache construction as pages are scrutinized as candidates for de-duplication. Taking a smarter approach, VirtualBox Page Fusion uses intelligence in the guest virtual machine to determine much more rapidly and accurately those pages which can be eliminated, thereby increasing the capacity or VM density of the system;
    - New: Memory Ballooning - Ballooning provides another method to increase VM density by allowing the memory of one guest to be recouped and made available to others;
    - New: Multiple Virtual Monitors - VirtualBox 3.2 now supports multi-headed virtual machines with up to 8 virtual monitors attached to a guest. Each virtual monitor can be a host window, or be mapped to the host's physical monitors;
    - New: Hot-plug CPUs - Modern operating systems such as Windows Server 2008 x64 Data Center Edition or the latest Linux server platforms allow CPUs to be dynamically inserted into a system to provide incremental computing power while the system is running. Version 3.2 introduces support for hot-plug vCPUs, allowing VirtualBox virtual machines to be given more power, with zero downtime of the guest;
    - New: Virtual SAS Controller - VirtualBox 3.2 now offers a virtual SAS controller, enabling it to run the most demanding of high-end guests;
    - New: Online Snapshot Merging - Snapshots are powerful but can eat up disk space and need to be pruned from time to time. Historically, machines have needed to be turned off to delete or merge snapshots, but with VirtualBox 3.2 this operation can be done whilst the machines are running. This allows sophisticated system management with minimal interruption of operations;
    - New: OVF Enhancements - VirtualBox has supported the OVF standard for virtual machine portability for some time. Now with 3.2, VirtualBox-specific configuration data is also stored in the standard, allowing richer virtual machine definitions without compromising portability;
    - New: Guest Automation - The Guest Automation APIs allow host-based logic to drive operations in the guest;

    Platforms

    - New: USB Keyboard and Mouse - Support for more guests that require USB input devices;
    - New: Oracle Enterprise Linux 5.5 - Support for the latest version of Oracle's flagship Linux platform;
    - New: Ubuntu 10.04 ("Lucid Lynx") - Support for both the desktop and server versions of the popular Ubuntu Linux distribution;

    And as a man once said, "just one more thing" ...

    - New: Mac OS X (experimental) - On Apple hardware only, support for creating virtual machines running Mac OS X.

    All in all this is a pretty powerful release packed full of innovation and speedups. So what are you waiting for?

    -FB

    Read the article

  • XNA RenderTarget2D Sample

    - by Michael B. McLaughlin
    I remember being scared of render targets when I first started with XNA. They seemed like weird magic and I didn't understand them at all. There's nothing to be frightened of, though, and they are pretty easy to learn how to use.

    The first thing you need to know is that when you're drawing in XNA, you aren't actually drawing to the screen. Instead you're drawing to this thing called the "back buffer". Internally, XNA maintains two sections of graphics memory. Each one is exactly the same size as the other and has all the same properties (such as surface format, whether there's a depth buffer and/or a stencil buffer, and so on). XNA flips between these two sections of memory every update-draw cycle. So while you are drawing to one, it's busy drawing the other one on the screen. When the current update-draw cycle ends, it flips, and the section you were just drawing to gets drawn to the screen, while the one that was being drawn to the screen before is now the one you'll be drawing on. This is what's meant by "double buffering". If you drew directly to the screen, the player would see all of those draws taking place as they happened, and that would look odd and not very good at all.

    Those two sections of graphics memory are render targets. All a render target is, is a section of graphics memory to which things can be drawn. In addition to the two that XNA maintains automatically, you can also create and set your own using RenderTarget2D and GraphicsDevice.SetRenderTarget. Using render targets lets you do all sorts of neat post-processing effects (like bloom) to make your game look cooler. It also lets you do things like motion blur, and lets you create mirrors in 3D games. There are quite a lot of things that render targets let you do.

    To go along with this post, I wrote up a simple sample for how to create and use a RenderTarget2D. It's available under the terms of the Microsoft Public License and is available for download on my website here: http://www.bobtacoindustries.com/developers/utils/RenderTarget2DSample.zip . Other than the 'using' statements, every line is commented in detail so that it should (hopefully) be easy to follow along with and understand. If you have any questions, leave a comment here or drop me a line on Twitter.

    One last note. While creating the sample I came across an interesting quirk. If you start by creating a Windows Game, and then make a copy for Windows Phone 7, the drop-down that lets you choose between drawing to a WP7 device and the WP7 emulator stays grayed out. To resolve this, you need to right-click on the Windows Phone 7 version in the Solution Explorer, and choose "Set as StartUp Project". The bar will then become active, letting you change the target to which you wish to deploy. If you want another version to be the one that starts up when you press F5 to start debugging, just go and right-click on that version and choose "Set as StartUp Project" for it once you've set the WP7 target (device or emulator) that you want.
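    As a quick taste of what the sample walks through, here is a minimal sketch of the create/set/draw flow using the XNA 4.0-style API (the field and variable names here are illustrative, not taken from the sample):

        // In LoadContent: create a render target the size of the back buffer.
        RenderTarget2D sceneTarget = new RenderTarget2D(
            GraphicsDevice,
            GraphicsDevice.PresentationParameters.BackBufferWidth,
            GraphicsDevice.PresentationParameters.BackBufferHeight);

        // In Draw: redirect all drawing to our own render target...
        GraphicsDevice.SetRenderTarget(sceneTarget);
        GraphicsDevice.Clear(Color.CornflowerBlue);
        // ... draw the scene here with spriteBatch ...

        // ...then switch back to the back buffer and draw the render target
        // onto it like any other texture (in XNA 4.0, RenderTarget2D derives
        // from Texture2D). This is where post-processing effects would hook in.
        GraphicsDevice.SetRenderTarget(null);
        GraphicsDevice.Clear(Color.Black);
        spriteBatch.Begin();
        spriteBatch.Draw(sceneTarget, Vector2.Zero, Color.White);
        spriteBatch.End();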

    Read the article

  • Information on upgrading Kinect Applications to MS SDK Beta 2.

    - by mbcrump
    Introduction

    Microsoft recently released the Kinect for Windows SDK Beta 2. It contains many enhancements and fixes, which can be found here. The only problem with it is that a lot of current demo applications no longer function properly. Today, I'm going to walk you through a typical scenario of upgrading a Kinect application built with Beta 1 to Beta 2. Note: This tutorial covers WPF, but you can use the same techniques for WinForms.

    1) Fix the references

    Let's start with a fairly popular Kinect demo called Kinect User Interface Demo. This project uses the Beta 1 version of Microsoft.Research.Kinect.dll and version 1.0.0.0 of Coding4Fun's Kinect library. After you download the source code and extract the zip you will see the following references in Visual Studio 2010. Pay attention to the following references, as these are the .dlls that you will have to update:

    - Coding4Fun.Kinect.Wpf
    - Microsoft.Research.Kinect

    If you click on the Coding4Fun.Kinect.Wpf file you will see the following version information (v1.0.0.0). This needs to be upgraded to the Coding4Fun Kinect library built against Beta 2. So head over to http://c4fkinect.codeplex.com/ and hit download and you will have the following files. Go ahead and hit the delete key on your keyboard to remove the Coding4Fun.Kinect.Wpf.dll file from your project. Select "Add Reference" and navigate out to the folder where you extracted the files and select Coding4Fun.Kinect.Wpf.dll. If you click on the Coding4Fun.Kinect.Wpf.dll file and check properties, it should be listed as 1.1.0.0.

    Fix Microsoft.Research.Kinect.dll

    The official SDK Beta 2 released a new .dll that you will need to reference in your application. Go ahead and select Microsoft.Research.Kinect.dll in your application and hit the Delete key on your keyboard. Go ahead and select Add Reference again and select Microsoft.Research.Kinect.dll from the .NET tab. Double-check and make sure the version number is 1.0.0.45 as shown below.

    References fixed - the Runtime needs to be updated

    So we have fixed the references in a typical Kinect application that uses Microsoft's SDK and the C4F Kinect libraries. Now we will need to update the runtime. All Beta 1 Kinect applications will instantiate the Runtime with the following code:

        readonly Runtime _runtime = new Runtime();

    Can you see that it is now marked with [Deprecated]? That means we need to update it before Microsoft decides to remove it from future versions of the SDK. We can fix this very easily by replacing that line with:

        Microsoft.Research.Kinect.Nui.Runtime _nui;

    and adding similar code to our Loaded event as shown below:

        public MainWindow()
        {
            InitializeComponent();
            Loaded += new RoutedEventHandler(MainWindow_Loaded);
        }

        void MainWindow_Loaded(object sender, RoutedEventArgs e)
        {
            if (Runtime.Kinects.Count == 0)
            {
                txtInfo.Text = "Missing Kinect";
            }
            else
            {
                _nui = Runtime.Kinects[0];
                _nui.Initialize(RuntimeOptions.UseColor);

                // Video Frame Ready Event can happen now!!!
                //_nui.VideoFrameReady += new EventHandler<ImageFrameReadyEventArgs>(_nui_VideoFrameReady);

                _nui.VideoStream.Open(ImageStreamType.Video, 2,
                    ImageResolution.Resolution640x480, ImageType.Color);
            }
        }

    In this sample, I am testing to see if a Kinect is detected, and if it is, then I initialize the runtime with my first Kinect by using Runtime.Kinects[0]. You can also specify other Kinect devices here. The rest of the code is standard code that you simply modify however you wish (i.e. skeletal, depth, etc.) depending on what type of video feed you want.
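    For example, here is one way the commented-out VideoFrameReady handler might look when filled in, based on the Beta-era NUI API (treat the member names here, and the imgCamera Image control, as assumptions to verify against your own SDK install):

        void _nui_VideoFrameReady(object sender, ImageFrameReadyEventArgs e)
        {
            // Grab the raw color frame and wrap it in a WPF BitmapSource
            // so it can be shown in an Image control named imgCamera.
            PlanarImage image = e.ImageFrame.Image;
            imgCamera.Source = BitmapSource.Create(
                image.Width, image.Height, 96, 96,
                PixelFormats.Bgr32, null,
                image.Bits, image.Width * image.BytesPerPixel);
        }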
    Conclusion

    As you can see, it really wasn't that painful to upgrade your project to Beta 2. I would recommend that you go ahead and upgrade to Beta 2, as future versions of the SDK will use these methods. Thanks for reading. Subscribe to my feed.

    Read the article

  • Programmatically disclosing a node in af:tree and af:treeTable

    - by Frank Nimphius
    A common developer requirement when working with af:tree or af:treeTable components is to programmatically disclose (expand) a specific node in the tree. If the node to disclose is not a top-level node, like a location in a LocationsView -> DepartmentsView -> EmployeesView hierarchy, you need to also disclose the node's parent node hierarchy for application users to see the fully expanded tree node structure.

    Working on ADF Code Corner sample #101, I wrote the following code lines that show a generic option for disclosing a tree node, starting from a handle to the node to disclose. The use case in ADF Code Corner sample #101 is a drag and drop operation from a table component to a tree to relocate employees to a new department. The tree node that receives the drop is a department node contained in a location. In theory the location could be part of a country, and so on, to indicate the depth the tree may have. Based on this structure, the code below provides a generic solution to parse the current node's parent nodes and its child nodes.

    The drop event provides a rowKey for the tree node that received the drop. As in af:table, the tree row key is not of type oracle.jbo.domain.Key but an implementation of java.util.List that contains the row keys. The JUCtrlHierBinding class in the ADF Binding layer, which represents the ADF tree binding at runtime, provides a method named findNodeByKeyPath that allows you to get a handle to the JUCtrlHierNodeBinding instance that represents a tree node in the binding layer.

        CollectionModel model = (CollectionModel) your_af_tree_reference.getValue();
        JUCtrlHierBinding treeBinding = (JUCtrlHierBinding) model.getWrappedData();
        JUCtrlHierNodeBinding treeDropNode = treeBinding.findNodeByKeyPath(dropRowKey);

    To disclose the tree node, you need to create a RowKeySet, which you do using the RowKeySetImpl class. Because the RowKeySet replaces any existing row key set in the tree, all other nodes are automatically closed.

        RowKeySetImpl rksImpl = new RowKeySetImpl();
        //the first key to add is the node that received the drop
        //operation (departments)
        rksImpl.add(dropRowKey);

    Similarly, from the tree binding, the root node can be obtained. The root node is the end of all parent node iteration and therefore important.

        JUCtrlHierNodeBinding rootNode = treeBinding.getRootNodeBinding();

    The following code obtains a reference to the hierarchy of parent nodes until the root node is found.

        JUCtrlHierNodeBinding dropNodeParent = treeDropNode.getParent();
        //walk up the tree to expand all parent nodes
        while (dropNodeParent != null && dropNodeParent != rootNode) {
            //add the node's keyPath (remember it's a List) to the row key set
            rksImpl.add(dropNodeParent.getKeyPath());
            dropNodeParent = dropNodeParent.getParent();
        }

    Next, you disclose the drop node's immediate child nodes, as otherwise all you see is the department node. It's not quite "dinner for one", but the procedure is very similar to the one handling the parent node keys.

        ArrayList<JUCtrlHierNodeBinding> childList =
            (ArrayList<JUCtrlHierNodeBinding>) treeDropNode.getChildren();
        for (JUCtrlHierNodeBinding nb : childList) {
            rksImpl.add(nb.getKeyPath());
        }

    Next, the row key set is defined as the disclosed row keys on the tree, so that when you refresh (PPR) the tree, the new disclosed state shows.

        tree.setDisclosedRowKeys(rksImpl);
        AdfFacesContext.getCurrentInstance().addPartialTarget(tree.getParent());

    The refresh in my use case is on the tree parent component (a layout container), which usually shows the best effect for refreshing the tree component.

    Read the article

  • Cloud Apps News @#OOW12

    - by Natalia Rachelson
    All eyes were on Oracle this past week and the news cycle was in full swing. What better time to make some key announcements that were guaranteed to create buzz ... and so we did. The name of the game was Cloud! Here are the key Cloud announcements for Apps, which included Fusion Tap that enables mobility across all Cloud Apps, HCM customer momentum in the Cloud, and our very first ERP Cloud Services customer. Oracle Unveils Oracle Fusion Tap for the iPadOracle Fusion Tap - Productivity Amplified Anywhere, Anytime "Both the enterprise and technology providers must recognize the need to innovate and adapt for the increasing mobility of the workforce - not just for sales teams, but across the organization," said Carter Lusher, Research Fellow and Chief Analyst of Enterprise Applications Ecosystem, Ovum. "A mobile application that quickly and powerfully allows employees to make connections, analyze data, and complete activities at any time and wherever they may be located drives new levels of business value and enhances efficiency. Frankly, mobile access is no longer a 'nice to have' but a 'must have.'"  "The mobile workforce is a business reality, and Oracle Fusion Tap is an example of how Oracle delivers mobile and cloud innovations that fundamentally improve productivity and how we work," said Chris Leone, Senior Vice President of Application Development, Oracle. "With Oracle Fusion Tap users will have an all-in-one, easily extensible app that puts mission-critical data and colleague connection at their fingertips." The entire release is available here http://www.oracle.com/us/corporate/press/1855392 Customers Live on Oracle Fusion Human Capital ManagementOracle HCM Cloud Service Helps Power HR's Contribution to the Business "More than 25 of the 100-plus customers who have selected Oracle Fusion Human Capital Management (HCM) are already live. Ardent Leisure, Peach Aviation, Toshiba Medical Systems and Zillow have deployed Oracle HCM Cloud Service and are using it to transform their HR operations. They join companies such as Principal Financial Group and Elizabeth Arden, who are already using Oracle HCM Cloud Service to help manage international growth and deliver pervasive, role-based, configurable solutions to their employees. With these recent go-lives, Oracle takes a leading position in successfully bringing live HCM customers in the cloud."  "As a technology company, Zillow looked to a partner who could scale with us. Zillow has gone live on Oracle HCM Cloud Service, which will give us the ability automate and streamline HR operations for our employees in the near future," said Sarah Bilton, Senior Director HR, Zillow. Read the entire release here http://www.oracle.com/us/corporate/press/1859573 Lending Club Selects Oracle ERP Cloud Service to Help Increase Insight and EfficienciesOracle ERP Cloud Service Provides an Open Architecture, Best-of-Breed Decision-Making, and Scalability in the Cloud "Lending Club, the leading platform for investing in and obtaining personal loans, has selected Oracle ERP Cloud Service to help improve decision-making and workflow, implement robust reporting, and take advantage of the inherent scalability and cost savings provided by the cloud. With more than 76,000 borrowers and 90,000 investors Lending Club utilizes technology and innovation to reduce the cost of traditional banking and offer borrowers better rates and investors better returns.  
After an extensive search, Lending Club selected Oracle ERP Cloud Service due to the breadth and depth of capabilities and ongoing innovation of Oracle ERP Cloud Service, as well as Oracle's open architecture, industry leadership and commitment to partners." "Lending Club is an innovative, data-intensive, high-growth company and we needed a solution and partner that could match us," said Carrie Dolan, CFO, Lending Club. "We conducted a thorough review of our options, and Oracle ERP Cloud Service was the clear winner in terms of capabilities and business value as well as commitment to us as a customer." Read the entire release here http://www.oracle.com/us/corporate/press/1859020

    Read the article

  • Octree subdivision problem

    - by ChaosDev
    I'm creating an octree manually and want a function to effectively divide all nodes and their subnodes - for example, I press a button and the subnodes divide; press again and all subnodes divide again. It must go like 1 - 8 - 64. The problem is, I don't understand how to organize recursive loops for that. OctreeNode in my unoptimized implementation contains pointers to subnodes (childs), the parent, an extra vector (containing duplicates of the childs), generation info and lots of information for drawing.

        class gOctreeNode {
            //necessary fields
            gOctreeNode* FrontBottomLeftNode;
            gOctreeNode* FrontBottomRightNode;
            gOctreeNode* FrontTopLeftNode;
            gOctreeNode* FrontTopRightNode;
            gOctreeNode* BackBottomLeftNode;
            gOctreeNode* BackBottomRightNode;
            gOctreeNode* BackTopLeftNode;
            gOctreeNode* BackTopRightNode;
            gOctreeNode* mParentNode;
            std::vector<gOctreeNode*> m_ChildsVector;
            UINT mGeneration;
            bool mSplitted;
            bool isSplitted() { return mSplitted; }
            .... //unnecessary fields
        };

    DivideNode of the Octree class fills these fields, sets mSplitted to true, and prepares for correct drawing. The Octree contains basic nodes (m_nodes). A basic node can be divided, but now I want to recursively divide an already divided basic node with 8 subnodes. So I wrote this function:

        void DivideAllChildCells(int ix, int ih, int id)
        {
            std::vector<gOctreeNode*> nlist;
            std::vector<gOctreeNode*> dlist;
            //get index of specified node
            int index = (ix * m_Height * m_Depth) + (ih * m_Depth) + (id * 1);
            gOctreeNode* baseNode = m_nodes[index].get();
            nlist.push_back(baseNode->FrontTopLeftNode);
            nlist.push_back(baseNode->FrontTopRightNode);
            nlist.push_back(baseNode->FrontBottomLeftNode);
            nlist.push_back(baseNode->FrontBottomRightNode);
            nlist.push_back(baseNode->BackBottomLeftNode);
            nlist.push_back(baseNode->BackBottomRightNode);
            nlist.push_back(baseNode->BackTopLeftNode);
            nlist.push_back(baseNode->BackTopRightNode);
            bool cont = true;
            UINT d = 0; //additional recursive loop param (?)
            UINT g = 0; //additional recursive loop param (?)
            LoopNodes(d, g, nlist, dlist);
            //Divide resulting nodes
            for (UINT i = 0; i < dlist.size(); i++)
            {
                DivideNode(dlist[i]);
            }
        }

    And now, back to the main question, I present LoopNodes, which must do all the work of gathering the dlist nodes for splitting:

        void LoopNodes(UINT& od, UINT& og,
                       std::vector<gOctreeNode*>& nlist,
                       std::vector<gOctreeNode*>& dnodes)
        {
            //od++; //recursion depth
            bool f = false;
            //pass through childs
            for (UINT i = 0; i < 8; i++)
            {
                if (nlist[i]->isSplitted()) //if node splitted and has childs
                {
                    //pass forward through tree
                    for (UINT j = 0; j < 8; j++)
                    {
                        nlist[j] = nlist[j]->m_ChildsVector[j]; //set pointers to these childs
                    }
                    LoopNodes(od, og, nlist, dnodes);
                }
                else //if no childs
                {
                    //add to split vector
                    dnodes.push_back(nlist[i]);
                }
            }
        }

    This version of LoopNodes works correctly for 2 (or 1?) generations; after that it will not divide neighbour nodes, only some corners. I need a correct algorithm. Screenshot. All I need is a correct version of LoopNodes which can add all nodes for DivideNode.

    Read the article

  • MPI Cluster Debugger launch integration in VS2010

    Let's assume that you have all the HPC bits installed and that you have existing MPI code (or you created a "Hello World" project using the MPI project template). Of course, you create a single MPI application and at runtime it will correspond to multiple processes (of the same app) launched on multiple nodes (i.e. machines) on the cluster. So how do you debug such a situation by simply hitting the familiar "F5" keystroke (i.e. Debug - Start Debugging)?

    WATCH IT INSTEAD OF READING ABOUT IT

    If you can't bear to read through all the details below, just watch this 19-minute screencast explaining this VS2010 feature. Alternatively, or even additionally, keep on reading.

    REQUIREMENT

    When you debug an MPI application, you would want the copying of resources from your client machine (where Visual Studio is installed) to each compute node (where Windows HPC Server is installed) to take place automatically for you. 'Resources' in the previous sentence includes your application binary, plus any binary or data dependencies it may have, plus PDBs if needed, plus the debug CRT of the correct bitness, plus msvsmon for remote debugging to work. You would also want, after copying is complete, to have your app and msvsmon launched and attached so that you can hit breakpoints back in Visual Studio on your client machine. All these things that you would want are delivered in VS2010.

    STEPS TO F5

    1. In your MPI project where you have placed a breakpoint, go to Project Properties - Configuration Properties - Debugging. Ensure the "Debugger to launch" combo box value is set to MPI Cluster Debugger.

    2. There are a whole bunch of properties here and typically you can ignore all of them except one: Run Environment. By default it is set to run 1 process on your local machine, and if you change the number after that to, for example, 4, it will launch 4 processes of your app on your local machine. You want this to run on your cluster though, so go to the dropdown arrow at the end of the Run Environment cell and open it to expose the "Edit Hpc node" menu, which opens the Node Selector dialog. In this dialog you can enter (or pick from a list) the cluster head node name and then the number of processes you want to execute on the cluster, and then hit OK and… you are done.

    3. Press F5 and watch your breakpoint get hit (after giving it some time for copying, remote execution, attachment and symbol resolution to take place).

    GOING DEEPER

    In the MPI Cluster Debugger project properties above, you can see many additional properties to the Run Environment. They are all optional, but you may want to understand them in order to fine-tune your cluster debugging. Read all about each one of these on the MSDN page Configuration Properties for the MPI Cluster Debugger. In the Node Selector dialog above you can see more options than just the Head Node name and Number of Processes to run. They should be self-explanatory, but I also cover them in depth in my screencast, showing you an example of why you would choose to schedule processes per core versus per node. You can also read about these options on MSDN as part of the page How to: Configure and Launch the MPI Cluster Debugger. To read through an example that touches on MPI project creation, project properties, node selector, and also usage of MPI with OpenMP plus MPI with PPL, read the MSDN page Walkthrough: Launching the MPI Cluster Debugger in Visual Studio 2010.

    Happy MPI debugging! Comments about this post welcome at the original blog.

    Read the article

  • Observing flow control idle time in TCP

    - by user12820842
    Previously I described how to observe congestion control strategies during transmission, and here I talked about TCP's sliding window approach for handling flow control on the receive side. A neat trick would now be to put the pieces together and ask the following question - how often is TCP transmission blocked by congestion control (send-side flow control) versus a zero-sized send window (which is the receiver saying it cannot process any more data)? So in effect we are asking whether the size of the receive window of the peer or the congestion control strategy may be sub-optimal. The result of such a problem would be that we have TCP data that we could be transmitting but we are not, potentially affecting throughput.

    So flow control is in effect:

    - when the congestion window is less than or equal to the amount of bytes outstanding on the connection. We can derive this from args[3]->tcps_snxt - args[3]->tcps_suna, i.e. the difference between the next sequence number to send and the lowest unacknowledged sequence number; and
    - when the window in the TCP segment received is advertised as 0.

    We time from these events until we send new data (i.e. until args[4]->tcp_seq reaches the snxt value recorded when the window closed). Here's the script:

        #!/usr/sbin/dtrace -s

        #pragma D option quiet

        tcp:::send
        / (args[3]->tcps_snxt - args[3]->tcps_suna) >= args[3]->tcps_cwnd /
        {
                cwndclosed[args[1]->cs_cid] = timestamp;
                cwndsnxt[args[1]->cs_cid] = args[3]->tcps_snxt;
                @numclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = count();
        }

        tcp:::send
        / cwndclosed[args[1]->cs_cid] && args[4]->tcp_seq >= cwndsnxt[args[1]->cs_cid] /
        {
                @meantimeclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] =
                    avg(timestamp - cwndclosed[args[1]->cs_cid]);
                @stddevtimeclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] =
                    stddev(timestamp - cwndclosed[args[1]->cs_cid]);
                @numclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = count();
                cwndclosed[args[1]->cs_cid] = 0;
                cwndsnxt[args[1]->cs_cid] = 0;
        }

        tcp:::receive
        / args[4]->tcp_window == 0 &&
          (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
        {
                swndclosed[args[1]->cs_cid] = timestamp;
                swndsnxt[args[1]->cs_cid] = args[3]->tcps_snxt;
                @numclosed["swnd", args[2]->ip_saddr, args[4]->tcp_dport] = count();
        }

        tcp:::send
        / swndclosed[args[1]->cs_cid] && args[4]->tcp_seq >= swndsnxt[args[1]->cs_cid] /
        {
                @meantimeclosed["swnd", args[2]->ip_daddr, args[4]->tcp_sport] =
                    avg(timestamp - swndclosed[args[1]->cs_cid]);
                @stddevtimeclosed["swnd", args[2]->ip_daddr, args[4]->tcp_sport] =
                    stddev(timestamp - swndclosed[args[1]->cs_cid]);
                swndclosed[args[1]->cs_cid] = 0;
                swndsnxt[args[1]->cs_cid] = 0;
        }

        END
        {
                printf("%-6s %-20s %-8s %-25s %-8s %-8s\n", "Window",
                    "Remote host", "Port", "TCP Avg WndClosed(ns)", "StdDev", "Num");
                printa("%-6s %-20s %-8d %@-25d %@-8d %@-8d\n",
                    @meantimeclosed, @stddevtimeclosed, @numclosed);
        }

    So this script will show us whether the peer's receive window size is preventing flow ("swnd" events) or whether congestion control is limiting flow ("cwnd" events). As an example, I traced on a server with a large file transfer in progress via a webserver and with an active ssh connection running "find / -depth -print". Here is the output:

        ^C
        Window Remote host          Port     TCP Avg WndClosed(ns)     StdDev   Num
        cwnd   10.175.96.92         80       86064329                  77311705 125
        cwnd   10.175.96.92         22       122068522                 151039669 81

    So we see in this case that the congestion window closes 125 times for port 80 connections and 81 times for ssh. The average time the window is closed is 0.086sec for port 80 and 0.12sec for port 22.

    So if you wish to change the congestion control algorithm in Oracle Solaris 11, a useful step may be to see if congestion really is an issue on your network. Scripts like the one posted above can help assess this, but it's worth reiterating that if congestion control is occurring, that's not necessarily a problem that needs fixing. Recall that congestion control is about controlling flow to prevent large-scale drops, so looking at congestion events in isolation doesn't tell us the whole story. For example, are we seeing more congestion events with one control algorithm, but more drops/retransmissions with another? As always, it's best to start with measures of throughput and latency before arriving at a specific hypothesis such as "my congestion control algorithm is sub-optimal".

    Read the article

  • Call for Papers for both Devoxx UK and France now open!

    - by Yolande
    The two conferences are taking place the last week of March 2013, with London on March 26th and 27th and Paris on March 28th and 29th. Oracle fully supports "Devoxx UK" and "Devoxx France" as a European Platinum Partner. Submit proposals and participate in both conferences, since they are a two-hour train ride away from one another. The Devoxx conferences are designed "for developers by developers." The conference committees are looking for speakers who are passionate developers unafraid to share their knowledge of Java, mobile, web and beyond. The sessions are about frameworks, tools and development, with in-depth conference sessions, short practical quickies, and birds-of-a-feather discussions. Those different formats allow speakers to choose the best way to present their topics, and the preferred format can be mentioned during the submission process. Devoxx has proven its success under Stephan Janssen, organizer of Devoxx in Belgium for the past 11 years. Devoxx has been the biggest Java conference in Europe for many years. To organize these local conferences, Stephan has enrolled the top community leaders in the UK and France. Ben Evans and Martijn Verburg are the leaders of the London Java User Group (JUG) and are also known internationally for starting the Adopt-a-JSR program. Antonio Goncalves is the leader of the Paris JUG. He organized last year's Devoxx France, which was a big success at twice the size first expected. The organizers made sure to add local character to the conferences. "The community energy has to feel right," said Ben Evans, and for that he picked an "old Victoria hall" for the venue. Those leaders are part of very dynamic Java communities in France and in the UK. France has 22 JUGs; the Paris JUG alone has 2,000 members. The UK has over 50,000 developers working in London and its surroundings; a lot of them are Java developers working in the financial industry. The conference fee is kept as low as possible to encourage those developers to attend. Devoxx promises to be crowded and sold out in advance. Make sure to submit your talks to both Devoxx UK and France before January 31st, 2013.

    Read the article

  • Comparing Apples and Pairs

    - by Tony Davis
    A recent study, High Costs and Negative Value of Pair Programming, by Capers Jones, pulls no punches in its assessment of the costs-to-benefits ratio of pair programming: two programmers working together, at a single computer, rather than separately. He implies that pair programming is a method rushed into production on a wave of enthusiasm for Agile or Extreme Programming, without any real regard for its effectiveness. Despite admitting that his data represented a far from complete study of the economics of pair programming, his conclusions were stark: it was 2.5 times more expensive, resulted in a 15% drop in productivity, and offered no significant quality benefits.

    The author provides a more scientific analysis than Jon Evans' Pair Programming Considered Harmful, but the theme is the same. In terms of up-front coding costs, pair programming is surely more expensive. The claim of productivity loss is dubious and contested by other studies. The third claim, though, did surprise me. The author's data suggests that if both the pair and the individual programmers employ static code analysis and testing, then there is no measurable difference in the resulting code quality, in terms of defects per function point. In other words, pair programming incurs a massive extra cost for no tangible return on investment.

    There were, inevitably, many criticisms of his data and his conclusions, a few of which are persuasive. Firstly, that the driver/observer model of pair programming, on which the study bases its findings, is far from the most effective. For example, many find Ping-Pong pairing, based on use of test-driven development, far more productive. Secondly, that it doesn't distinguish between "expert" and "novice" pair programmers - that is, independently of other programming skills, how skilled an individual was at pair programming. Thirdly, that his measure of quality is too narrow. This point rings true, certainly at Red Gate, where developers don't pair program all the time, but use the method in short bursts, while tackling a tricky problem and needing a fresh perspective on the best approach, or more in-depth knowledge in a particular domain. All of them argue that pair programming, and collective code ownership, offer significant rewards, if not in terms of immediate "bug reduction", then in removing the likelihood of single points of failure, and improving the overall quality and longer-term adaptability/maintainability of the design. There is also a massive learning benefit for both participants. One developer told me how he once worked in the same team over consecutive summers, the first time with no pair programming and the second time pair programming two-thirds of the time, and described the increased rate of learning the second time as "phenomenal".

    There are a great many theories on how we should develop software (Scrum, XP, Lean, etc.), but woefully little scientific research into their effectiveness. For a group that spends so much time crunching other people's data, I wonder if developers spend enough time crunching data about themselves. Capers Jones' data may be incomplete, but it should cause a pause for thought, especially for any large IT departments, supporting commerce and industry, who are considering pair programming. It certainly shouldn't discourage teams from exploring new ways of developing software, as long as they also think about how to gather hard data to gauge their effectiveness.

    Read the article

  • Free Advanced ADF eCourse, part II

    - by Frank Nimphius
    A second part of the advanced Oracle ADF online training is available for free. Lynn Munsinger and her team worked hard to deliver this second part of the training as close to the release of the first installment as possible. This two-part advanced ADF training provides you with a wealth of advanced ADF know-how and insight for you to adopt in a self-paced manner.

    Part 1 (though I am sure you have this bookmarked): http://tinyurl.com/advadf-part1
    Part 2 (the new material): http://tinyurl.com/advadf-part2

    Quoting Lynn: "This second installment provides you with all you need to know about regions, including best practices for implementing contextual events and other region communication patterns. It also covers the nitty-gritty details of building great looking user interfaces, such as how to work with (not against!) the ADF Faces layout components, how to build page templates and declarative components, and how to skin the application to your organization's needs. It wraps up with an in-depth look at layout components, and a second helping of additional region considerations if you just can't get enough. Like the first installment, the content for this course comes from Product Management. This 2nd eCourse compilation is a bit of a "Swan Song" for Patrice Daux, a long-time JDeveloper and ADF curriculum developer, who is retiring the end of this month. Thanks for your efforts, Patrice, and bon voyage!"

    Indeed: Great job Patrice! (and all others involved)

    Read the article

  • Framework for Everything - Where to begin? [Longer post]

    - by SquaredSoft
    Back story of this question; feel free to skip down for the specific question.

    Hello, I've been very interested in the idea of abstract programming for the last few years. I've made about 30 attempts at creating a piece of software that is capable of almost anything you throw at it. Some of these attempts have taken upwards of a year, and while getting close, I never released anything beyond my compiler. This is something I've always tried wrapping my head around, and something is always missing. With the title, I'm sure you're assuming, "Yes of course you noob! You can't account for everything!" To which I have to reply, "Why not?"

    To give you some background into what I'm talking about, this all started with doing maybe a shade-of-gray-hat SEO software. I found myself constantly having to create similar, but slightly different sets of code. I've gone through as many iterations of ways to communicate over HTTP as the universe has particles. "How many times am I going to have to write this multi-threaded class?" is something I found myself asking a lot. Sure, I could create a class library and just work with that, but I always felt I could optimize what I had, which often was a large undertaking and typically involved frequent use of the CTRL+A keyboard shortcut, mixed with the delete button.

    It dawned on me that it was time to invest in a plugin system. This would allow me to simply add snippets of code as time went on, and I could subversion stuff out and distribute small chunks of code, rather than something that encompasses only a specific function or design. This comes with its own complexity, of course, and by the time I had finished the software scope for this addition, it hit me that I would want to add to everything in the software, not just a new HTTP method or automation code for a specific website. Great, we're getting more abstract. However, the software that I have in my mind comes down to quite a few questions regarding its execution. I have to have some parameters for what I am going to do. After writing down what the perfect software would do in my mind, I came up with this list of requirements:

    - Should be able to use networking
    - A "macro" or "expression" system which would allow people to do something like: =First(=ParseToList(=GetUrl("http://www.google.com?q=helloworld!"), Template.Google))
    - Multithreaded
    - Able to add UI elements through some type of XML -- people can make their own addons, etc.
    - Can use third-party APIs through the plugins, such as Microsoft CRM, Exchange, etc.

    This would allow the software to essentially be used for everything - really, any task you wish to automate, in a simple way. Making the UI was also extremely hard. How do you do all of this? It's very difficult.

    So my question: With so many attempts at this, I'm out of ideas how to successfully complete it. I have a very specific idea in my mind, but I keep failing to execute it. I'm a self-taught programmer. I've been doing it for years, and work professionally in it, but I've never encountered something that would be as complex and in-depth as a system which essentially does everything. Where would you start? What are the best practices for design? How can I avoid constantly having to go back and optimize my software? What can I do to generalize this and draw everything out to completion? These are things I struggle with. P.S., I'm using C# as my main language. I feel like in this example I might be hitting the outer limit of the language, although I don't know if that is the case, or if I'm just a bad programmer. Thanks for your time.
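    Edit: to make the plugin-system idea concrete, here is a minimal sketch of the kind of seam I am imagining (purely illustrative; all names are hypothetical):

        // A tiny plugin contract plus reflection-based discovery.
        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Reflection;

        public interface IPlugin
        {
            string Name { get; }
            // Each plugin contributes named operations the host can invoke,
            // which is also where an expression like =GetUrl(...) would bind.
            object Execute(string operation, IDictionary<string, object> args);
        }

        public static class PluginLoader
        {
            // Scan a folder for assemblies and instantiate every IPlugin found.
            public static IEnumerable<IPlugin> Load(string folder)
            {
                foreach (string file in Directory.GetFiles(folder, "*.dll"))
                {
                    Assembly asm = Assembly.LoadFrom(file);
                    foreach (Type t in asm.GetTypes())
                    {
                        if (typeof(IPlugin).IsAssignableFrom(t) && !t.IsAbstract)
                            yield return (IPlugin)Activator.CreateInstance(t);
                    }
                }
            }
        }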

    Read the article

  • Is Your Company Social on the Inside?

    - by Mike Stiles
    As we talk about the extension of social from an outbound-facing marketing tool to a platform that will reach across the entire enterprise, servicing multiple functions of that enterprise, it might be time to take a look at how social can be effectively employed for internal communications.

    Remember the printed company newsletter? Yeah, nobody reads it. Remember the emailed company newsletter? Yeah, nobody reads it. Why not? Shouldn't your employees care about the company more than anything else in life and be voraciously hungry for any information related to it? The more realistic prospect is that a company's employees don't behave much differently at work where information is concerned than they do in their personal lives. They "tune in" to information that's immediately relevant to them, that piques their interest, and/or that's presented in a visually engaging way. That currently makes an internal social platform the most ideal way to communicate within the organization. It not only facilitates more immediate, more targeted (and thus more relevant) messaging from the company out to employees, it sets a stage for employees to communicate with each other and efficiently get answers to questions from peers. It's a collaboration tool on steroids.

    If you build such an internal social portal and you do it right, will employees use it? Considering social media has officially been declared more addictive than cigarettes, booze and sex… probably. But what does it mean to do an internal social platform "right"? The bar has been set pretty high. Your employees are used to Twitter and Facebook, and would roll their eyes at anything less simple or harder to navigate than those. All the Facebook best practices would apply to your internal social as well, including the importance of managing posting frequency, using photos and video, moderation & response, etc.

    And don't worry, you won't be the first to jump in. WPP's global digital agency Possible has its own social network called Colab. Nestle has "The Nest." Red Robin's got one. I myself got an in-depth look at McGraw-Hill's internal social platform at Blogwell NYC. Some of these companies are building their own platforms, others are buying them off the shelf or customizing readymade solutions. But you won't be the last either. Prescient Digital Media and the IABC learned 39% of companies don't offer employees any social tools. Not a social network, not discussion forums, not even IM. And a great many continue to ban the use of Facebook and Twitter on the premises. That's pretty astonishing, since social has become as essential a modern-day communications tool as the telephone. But such holdouts will pay a big price for being mired in fear while competitors exploit social connections unchallenged.

    Fish where the fish are. If social has become the way people communicate and take in information, let that be the way communication is trafficked in the organization.

    Read the article

  • Commit in sharpSVN

    - by Pedro
    Hello! I have a problem doing a commit with SharpSvn. Right now I add all the files of my working copy (if the file is already added, it throws an exception), and after that I do the commit. It works, but it throws exceptions. Is there some way to get the status of the repository before doing Add(), and only add the new files or the files that have changed? And if I delete one file or folder in my working copy, how can I delete those files or folders in the repository? Code:

        String[] folders;
        folders = Directory.GetDirectories(direccionLocal, "*.*", SearchOption.AllDirectories);
        foreach (String folder in folders)
        {
            String[] files;
            files = Directory.GetFiles(folder);
            foreach (String file in files)
            {
                if (file.IndexOf("\\.svn") == -1)
                {
                    Add(file, workingcopy);
                }
            }
        }
        Commit(workingcopy, "change");

    Add:

        public bool Add(string path, string direccionlocal)
        {
            using (SvnClient client = new SvnClient())
            {
                SvnAddArgs args = new SvnAddArgs();
                args.Depth = SvnDepth.Empty;
                Console.Out.WriteLine(path);
                args.AddParents = true;
                try
                {
                    return client.Add(path, args);
                }
                catch (Exception ex)
                {
                    return false;
                }
            }
        }

    Commit:

        public bool Commit(string path, string message)
        {
            using (SvnClient client = new SvnClient())
            {
                SvnCommitArgs args = new SvnCommitArgs();
                args.LogMessage = message;
                args.ThrowOnError = true;
                args.ThrowOnCancel = true;
                try
                {
                    return client.Commit(path, args);
                }
                catch (Exception e)
                {
                    if (e.InnerException != null)
                    {
                        throw new Exception(e.InnerException.Message, e);
                    }
                    throw e;
                }
            }
        }
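    For reference, one way to approach the status question is SvnClient.GetStatus, which reports unversioned and missing items so you can add or delete selectively. This is a hedged sketch - check the exact enum member names against your SharpSvn version:

        using (SvnClient client = new SvnClient())
        {
            System.Collections.ObjectModel.Collection<SvnStatusEventArgs> statuses;
            client.GetStatus(workingcopy, out statuses);
            foreach (SvnStatusEventArgs status in statuses)
            {
                if (status.LocalContentStatus == SvnStatus.NotVersioned)
                    client.Add(status.FullPath);      // new on disk, not yet in SVN
                else if (status.LocalContentStatus == SvnStatus.Missing)
                    client.Delete(status.FullPath);   // deleted on disk, still in SVN
            }
            client.Commit(workingcopy, new SvnCommitArgs { LogMessage = "change" });
        }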

    Read the article

  • How to build google breakpad

    - by Steve
    Hi, I'm totally lost on how to build Google's Breakpad. There is a .sln file, but it depends on a library that doesn't seem to have an associated .sln. It seems to use something called gyp that I haven't figured out how to get working. I tried:

        python gyp ..\breakpad\src\client\windows\breakpad_client.gyp

    and that just gives the following errors:

        Traceback (most recent call last):
          File "gyp", line 18, in <module>
            sys.exit(gyp.main(sys.argv[1:]))
          File "pylib\gyp\__init__.py", line 445, in main
            options.circular_check)
          File "pylib\gyp\__init__.py", line 84, in Load
            depth, generator_input_info, check, circular_check)
          File "pylib\gyp\input.py", line 2165, in Load
            VerifyNoGYPFileCircularDependencies(targets)
          File "pylib\gyp\input.py", line 1429, in VerifyNoGYPFileCircularDependencies
            ' '.join(bad_files)
        gyp.input.CircularException: Some files not reachable, cycle in .gyp file
        dependency graph detected involving some or all of:
        ..\breakpad\src\client\windows\sender\crash_report_sender.gyp
        ..\breakpad\src\client\windows\handler\exception_handler.gyp
        ..\breakpad\src\client\windows\breakpad_client.gyp
        ..\breakpad\src\client\windows\unittests\client_tests.gyp
        ..\breakpad\src\client\windows\crash_generation\crash_generation.gyp

    Which I can't make any sense out of. I also can't seem to find any documentation. Any help would be appreciated.

    Read the article

  • "Message":"Invalid JSON primitive: RecordId."

    - by Radhi
    Getting an error in an ajax call from jQuery. Here is my jQuery function:

        function AddAlbumToMyProfile(AlbumId, AlbumName, AlbumTypeName) {
            var obj = { AlbumId: AlbumId, AlbumName: AlbumName, AlbumTypeName: AlbumTypeName };
            // Following is the ASP.NET AJAX serialize function to convert
            // the object into JSON.
            var json = Sys.Serialization.JavaScriptSerializer.serialize(obj);
            $.ajax({
                type: "POST",
                url: "Gallary.aspx/AddAlbumToMyProfile",
                data: json,
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                async: true,
                cache: false,
                success: function(msg) {
                    if (msg.d == '') {
                        alert("Album Added to your profile");
                    } else {
                        alert(msg.d);
                    }
                },
                error: function(XMLHttpRequest, textStatus, errorThrown) {
                }
            });
        }

    And this is my WebMethod:

        [WebMethod]
        public static string DeleteRecord(Int64 RecordId, Int64 UserId, Int64 UserProfileId, string ItemType, string FileName)
        {
            try
            {
                string FilePath = HttpContext.Current.Server.MapPath(FileName);
                XDocument xmldoc = XDocument.Load(FilePath);
                XElement Xelm = xmldoc.Element("UserProfile");
                XElement parentElement = Xelm.XPathSelectElement(ItemType + "/Fields");
                (from BO in parentElement.Descendants("Record")
                 where BO.Element("Id").Attribute("value").Value == RecordId.ToString()
                 select BO).Remove();
                XDocument xdoc = XDocument.Parse(Xelm.ToString(), LoadOptions.PreserveWhitespace);
                xdoc.Save(FilePath);
                UserInfoHandler obj = new UserInfoHandler();
                return obj.GetHTML(UserId, UserProfileId, FileName, ItemType, RecordId, Xelm).ToString();
            }
            catch (Exception ex)
            {
                HandleException.LogError(ex, "EditUserProfile.aspx", "DeleteRecord");
            }
            return "success";
        }

    Can anybody please tell me what's wrong with my code? I am getting this error:

        {"Message":"Invalid JSON primitive: RecordId.","StackTrace":" at System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializePrimitiveObject()\r\n at System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializeInternal(Int32 depth)\r\n at System.Web.Script.Serialization.JavaScriptObjectDeserializer.BasicDeserialize(String input, Int32 depthLimit, JavaScriptSerializer serializer)\r\n at System.Web.Script.Serialization.JavaScriptSerializer.Deserialize(JavaScriptSerializer serializer, String input, Type type, Int32 depthLimit)\r\n at System.Web.Script.Serialization.JavaScriptSerializer.Deserialize[T](String input)\r\n at System.Web.Script.Services.RestHandler.GetRawParamsFromPostRequest(HttpContext context, JavaScriptSerializer serializer)\r\n at System.Web.Script.Services.RestHandler.GetRawParams(WebServiceMethodData methodData, HttpContext context)\r\n at System.Web.Script.Services.RestHandler.ExecuteWebServiceCall(HttpContext context, WebServiceMethodData methodData)","ExceptionType":"System.ArgumentException"}

    Read the article

  • Add file using SharpSVN

    - by jan
    Hi, I would like to add all unversioned files under a directory to SVN using SharpSvn. I tried regular svn commands on the command line first:

        C:\temp\CheckoutDir> svn status -v

    I see all subdirs, all the files that are already checked in, a few new files labeled "?", and nothing with the "L" lock indication.

        C:\temp\CheckoutDir> svn add . --force

    This results in all new files in the subdirs, which are already under version control themselves, being added. I'd like to do the same using SharpSvn. I copy a few extra files into the same directory and run this code:

        using (SharpSvn.SvnClient svn = new SvnClient())
        {
            SvnAddArgs saa = new SvnAddArgs();
            saa.Force = true;
            saa.Depth = SvnDepth.Infinity;
            try
            {
                svn.Add(@"C:\temp\CheckoutDir\.", saa);
            }
            catch (SvnException exc)
            {
                Log(@"SVN Exception: " + exc.Message + " - " + exc.File);
            }
        }

    But an SvnException is raised:

        SvnException.Message: Working copy 'C:\temp\CheckoutDir' locked
        SvnException.File: ..\..\..\subversion\libsvn_wc\lock.c

    No other SvnClient instance is running in my code. I also tried calling svn.CleanUp() right before the Add, but to no avail. Since the documentation is rather vague ;), I was wondering if anyone here knew the answer. Thanks in advance! Jan

    Read the article

  • Rails : soap4r - Error while running wsdl2ruby.rb

    - by Mathieu
    When I execute

        Mathieu$ /Users/Mathieu/.gem/ruby/1.8/bin/wsdl2ruby.rb path --wsdl https://www.arello.com/webservice/verify.cfc?wsdl --type client --force

    I get:

        at depth 0 - 20: unable to get local issuer certificate
        F, [2010-05-06T10:41:11.040288 #35933] FATAL -- app: Detected an exception. Stopping ...
        SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed (OpenSSL::SSL::SSLError)
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient/session.rb:247:in `connect'
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient/session.rb:247:in `ssl_connect'
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient/session.rb:639:in `connect'
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient/timeout.rb:128:in `timeout'
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient/session.rb:631:in `connect'
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient/session.rb:522:in `query'
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient/session.rb:147:in `query'
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient.rb:953:in `do_get_block'
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient.rb:765:in `do_request'
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient.rb:848:in `protect_keep_alive_disconnected'
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient.rb:764:in `do_request'
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient.rb:833:in `follow_redirect'
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient.rb:519:in `get_content'
        /Users/Mathieu/.gem/ruby/1.8/gems/soap4r-1.5.8/lib/wsdl/xmlSchema/importer.rb:73:in `fetch'
        /Users/Mathieu/.gem/ruby/1.8/gems/soap4r-1.5.8/lib/wsdl/xmlSchema/importer.rb:36:in `import'
        /Users/Mathieu/.gem/ruby/1.8/gems/soap4r-1.5.8/lib/wsdl/importer.rb:18:in `import'
        /Users/Mathieu/.gem/ruby/1.8/gems/soap4r-1.5.8/lib/wsdl/soap/wsdl2ruby.rb:206:in `import'
        /Users/Mathieu/.gem/ruby/1.8/gems/soap4r-1.5.8/lib/wsdl/soap/wsdl2ruby.rb:36:in `run'
        /Users/Mathieu/.gem/ruby/1.8/gems/soap4r-1.5.8/bin/wsdl2ruby.rb:46:in `run'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/logger.rb:659:in `start'
        /Users/Mathieu/.gem/ruby/1.8/gems/soap4r-1.5.8/bin/wsdl2ruby.rb:137
        /Users/Mathieu/.gem/ruby/1.8/bin/wsdl2ruby.rb:19:in `load'
        /Users/Mathieu/.gem/ruby/1.8/bin/wsdl2ruby.rb:19
        I, [2010-05-06T10:41:11.040855 #35933] INFO -- app: End of app. (status: -1)

    Read the article

  • [Gray Hat Python] Simple debugger, won't work?

    - by Rami Jarrar
    Hi, I'm reading Gray Hat Python, and I've reached this:

        class debugger():
            def __init__(self):
                self.h_process = None
                self.pid = None
                self.debugger_active = False

            def load(self, path_to_exe):
                creation_flags = DEBUG_PROCESS
                startupinfo = STARTUPINFO()
                process_information = PROCESS_INFORMATION()
                startupinfo.dwFlags = 0x1
                startupinfo.wShowWindows = 0x0
                startupinfo.cb = sizeof(startupinfo)
                if kernel32.CreateProcessA(path_to_exe, None, None, None, None,
                                           creation_flags, None, None,
                                           byref(startupinfo), byref(process_information)):
                    print "[*] We have successfully launched the process!"
                    print "[*] PID: %d" % (process_information.dwProcessId)
                    self.h_process = self.open_process(process_information.dwProcessId)
                else:
                    print "[*] Error: 0x%08x." % (kernel32.GetLastError())

            def open_process(self, pid):
                h_process = self.open_process(pid)
                if kernel32.DebugActiveProcess(pid):
                    self.debugger_active = True
                    self.pid = int(pid)
                    self.run()
                else:
                    print "[*] Unable to attach to the process."

            def run(self):
                while self.debugger_active == True:
                    self.get_debug_event()

            def get_debug_event(self):
                debug_event = DEBUG_EVENT()
                continue_status = DBG_CONTINUE
                if kernel32.WaitForDebugEvent(byref(debug_event), INFINITE):
                    raw_input("Press a Key to continue...")
                    self.debugger_active = False
                    kernel32.ContinueDebugEvent(
                        debug_event.dwProcessId,
                        debug_event.dwThreadId,
                        continue_status)

            def detach(self):
                if kernel32.DebugActiveProcessStop(self.pid):
                    print "[*] Finished debugging. Exiting..."
                    return True
                else:
                    print "There was an error"
                    return False

    When I run my_test.py:

        import my_dbg

        debugger = my_dbg.debugger()
        pid = raw_input('Enter the PID of the process to attach to: ')
        debugger.open_process(int(pid))
        debugger.detach()

    I get this error:

        Traceback (most recent call last):
          File "C:/Python26/dbgpy/my_test.py", line 5, in <module>
            debugger.attach(int(pid))
          File "C:/Python26/dbgpy\my_dbg.py", line 37, in attach
            h_process = self.attach(pid)
          ...
          File "C:/Python26/dbgpy\my_dbg.py", line 37, in attach
            h_process = self.attach(pid)
          File "C:/Python26/dbgpy\my_dbg.py", line 37, in attach
            h_process = self.attach(pid)
        RuntimeError: maximum recursion depth exceeded

    It's because of the loop and something else, but what is it? I'm running on Windows using Python 2.6.4. :)

    Update: I removed h_process = self.open_process(pid), but I get the same error for the next instruction, if kernel32.DebugActiveProcess(pid). So the problem, I think, is in the while loop - but what is it?

    Read the article

  • Ideas for an open-source CMS in ASP.NET MVC

    - by rajesh pillai
    I am collecting ideas for building an open-source CMS on the ASP.NET framework. I have chosen ASP.NET MVC with jQuery as the tools to develop this, and I have made this a community wiki.

    Background: Most of the good CMSes available are built on PHP, though of late CMSes built on the ASP.NET framework seem to be cropping up. I would like to collect ideas, suggestions, and expectations for an open-source CMS on the ASP.NET platform: features you wish you could find in a modern CMS, and any other thoughts or ideas that come to mind. Your input would be a great help in this direction. Meanwhile, I am also reviewing many open-source CMS systems built on ASP.NET, as well as MS Office SharePoint, to gather ideas, and I will update my findings here for your reference.

    The following are some of the open-source CMSes/blog engines that I am in the process of reviewing:

    - Oxite (ASP.NET MVC): the new kid on the block
    - WordPress
    - BlogEngine.NET
    - Umbraco

    Some of the features I can think of are noted below:

    - Simplified content creation
    - Support for multiple content authors
    - Metadata features
    - Workflow engine
    - Simplified deployment
    - List-based contents (SharePoint-like)
    - Customizable URLs
    - Content caching
    - Roles (content author, content publisher, etc.)
    - Support for different content types (HTML, text, documents, images, videos)
    - Skinnable (support for extensible themes)
    - Localization & globalization
    - Unlimited nesting of categories
    - Ready-made templates for blogs, forums, and surveys
    - Good documentation

    You can add your own points or add some depth to any of the features above.

    Read the article

  • Improving long-polling Ajax performance

    - by Bears will eat you
    I'm writing a webapp (Firefox-compatible only) which uses long polling (via jQuery's ajax abilities) to send more-or-less constant updates from the server to the client. I'm concerned about the effects of leaving this running for long periods of time, say, all day or overnight. The basic code skeleton is this:

        function processResults(xml) {
            // do stuff with the xml from the server
        }

        function fetch() {
            setTimeout(function () {
                $.ajax({
                    type: 'GET',
                    url: 'foo/bar/baz',
                    dataType: 'xml',
                    success: function (xml) {
                        processResults(xml);
                        fetch();
                    },
                    error: function (xhr, type, exception) {
                        if (xhr.status === 0) {
                            console.log('XMLHttpRequest cancelled');
                        } else {
                            console.debug(xhr);
                            fetch();
                        }
                    }
                });
            }, 500);
        }

    (The half-second "sleep" is so that the client doesn't hammer the server if the updates are coming back to the client quickly - which they usually are.) After leaving this running overnight, it tends to make Firefox crawl. I'd been thinking that this could be partially caused by a large stack depth, since I've basically written an infinitely recursive function. However, if I use Firebug and throw a breakpoint into fetch, it looks like this is not the case: the stack that Firebug shows me is only about 4 or 5 frames deep, even after an hour.

    One of the solutions I'm considering is changing my recursive function to an iterative one, but I can't figure out how I would insert the delay in between Ajax requests without spinning. I've looked at the JS 1.7 "yield" keyword, but I can't quite wrap my head around it well enough to figure out if it's what I need here.

    Is the best solution just to do a hard refresh on the page periodically, say, once every hour? Is there a better/leaner long-polling design pattern that won't put a hurt on the browser even after running for 8 or 12 hours? Or should I just skip the long polling altogether and use a different "constant update" pattern, since I usually know how frequently the server will have a response for me?
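    Editor's note, a hedged observation with a sketch: the pattern above is not truly recursive, because fetch() returns as soon as setTimeout schedules the callback, so each cycle starts from a fresh stack. That matches the 4-5 frames Firebug reports. If anything accumulates, it is more likely the closures and option objects rebuilt on every cycle. One way to make the iteration explicit and reuse a single options object (illustrative names, same jQuery API as the original):

        var pollOptions = {
            type: 'GET',
            url: 'foo/bar/baz',
            dataType: 'xml',
            success: function (xml) {
                processResults(xml);
                schedulePoll();
            },
            error: function (xhr, type, exception) {
                if (xhr.status === 0) {
                    console.log('XMLHttpRequest cancelled');
                } else {
                    schedulePoll();
                }
            }
        };

        // Each cycle schedules exactly one new request; nothing recurses, and no
        // new option objects or callback closures are created per poll.
        function schedulePoll() {
            setTimeout(function () { $.ajax(pollOptions); }, 500);
        }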

    Read the article

  • How do I get folder size with Exchange Web Services 2010 Managed API?

    - by Adam Tuttle
    I'm attempting to use the EWS 2010 Managed API to get the total size of a user's mailbox. I haven't found a web service method that returns this data, so I figured I would try to calculate it. I found one seemingly-applicable question on another site about finding mailbox sizes with EWS 2007, but either I'm not understanding what it's asking me to do, or that method just doesn't work with EWS 2010. Noodling around in the code insight, I wrote what I thought was a method that would traverse the folder structure recursively and produce a combined total for all folders inside the Inbox:

        private int traverseChildFoldersForSize(Folder f)
        {
            int folderSizeSum = 0;
            if (f.ChildFolderCount > 0)
            {
                foreach (Folder c in f.FindFolders(new FolderView(10000)))
                {
                    folderSizeSum += traverseChildFoldersForSize(c);
                }
            }
            folderSizeSum += (int)f.ManagedFolderInformation.FolderSize;
            return folderSizeSum;
        }

    (This assumes there aren't more than 10,000 folders inside a given folder. Figure that's a safe bet...) Unfortunately, this doesn't work. I'm initiating the recursion with this code:

        Folder root = Folder.Bind(svc, WellKnownFolderName.Inbox);
        int totalSize = traverseChildFoldersForSize(root);

    But a NullReferenceException is thrown, essentially saying that [folder].ManagedFolderInformation is a null object reference. For clarity, I also attempted to just get the size of the root folder:

        Console.Write(root.ManagedFolderInformation.FolderSize.ToString());

    That throws the same exception, so I know it's not just that ManagedFolderInformation stops existing once you get to a certain depth in the directory tree. Any ideas on how to get the total size of the user's mailbox? Am I barking up the wrong tree?
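    Editor's note, a hedged sketch of the usual alternative: ManagedFolderInformation is only populated for managed folders, so ordinary mailbox folders return null for it. The commonly cited approach is to request the MAPI property PR_MESSAGE_SIZE_EXTENDED (property tag 0x0E08, a 64-bit integer) as an extended property on a deep folder traversal. The tag value comes from the MAPI documentation; the variable names and page size are illustrative:

        // Folder size lives in the MAPI property PR_MESSAGE_SIZE_EXTENDED (0x0E08, PT_I8)
        ExtendedPropertyDefinition folderSizeProp =
            new ExtendedPropertyDefinition(0x0E08, MapiPropertyType.Long);

        // A deep traversal visits every folder under the root in one FindFolders call
        FolderView view = new FolderView(int.MaxValue);
        view.Traversal = FolderTraversal.Deep;
        view.PropertySet = new PropertySet(BasePropertySet.IdOnly, folderSizeProp);

        long totalSize = 0;
        foreach (Folder folder in svc.FindFolders(WellKnownFolderName.MsgFolderRoot, view))
        {
            long size;
            if (folder.TryGetProperty(folderSizeProp, out size))
            {
                totalSize += size;
            }
        }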

    Read the article

  • Using Free Pascal/Lazarus JSON Libraries

    - by Gizmo_the_Great
    I'm hoping for a bit of a "simpletons" demo/explanation of JSON parsing with Lazarus/Free Pascal. I've asked a question here before, but all the replies were "read this", and none of them really helped me get a grasp, because the examples are a bit too in-depth; I'm seeking a very simple example to help me understand how it works.

    In brief, my program reads an untyped binary file in chunks of 4096 bytes. The raw data then gets converted to ASCII and stored in a string. It then goes through the variable looking for certain patterns, which, it turned out, are JSON data structures. I've currently coded the parsing the hard way using Pos and ExtractANSIString, etc. But I've since learnt that there are JSON libraries for Lazarus & FPC, namely fcl-json, fpjson, jsonparser, jsonscanner, etc.

    https://bitbucket.org/reiniero/fpctwit/src
    http://fossies.org/unix/misc/fpcbuild-2.6.0.tar.gz:a/fpcbuild-2.6.0/fpcsrc/packages/fcl-json/src/
    http://svn.freepascal.org/cgi-bin/viewvc.cgi/trunk/packages/fcl-json/examples/

    However, I still can't quite work out HOW to read my string variable, parse it for JSON data, and then access those JSON structures. Can anyone give me a very simple example to help get me going? My code so far (without JSON) is something like this:

        try
          SourceFile.Position := 0;
          while TotalBytesRead < SourceFile.Size do
          begin
            BytesRead := SourceFile.Read(Buffer, SizeOf(Buffer));
            Inc(TotalBytesRead, BytesRead);
            // A custom function to strip out binary garbage, leaving just ASCII-readable text
            StringContent := StripNonAsciiExceptCRLF(Buffer);
            if Pos('MySearchValue', StringContent) > 0 then
            begin
              // Do the parsing. This is where I need to do the JSON stuff
              ...
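    Editor's note, a minimal hedged sketch using FPC's fcl-json: TJSONParser (unit jsonparser) turns text into a TJSONData tree, and FindPath then walks into it. The sketch assumes JSONFragment already holds just the {...} portion cut out of StringContent, and the key name 'name' is purely illustrative:

        uses
          fpjson, jsonparser;

        procedure ParseFragment(const JSONFragment: string);
        var
          Parser: TJSONParser;
          Data: TJSONData;
          Node: TJSONData;
        begin
          Parser := TJSONParser.Create(JSONFragment);
          try
            Data := Parser.Parse;  // builds the TJSONData tree
            try
              // FindPath walks dotted paths ('a.b.c'); returns nil if the key is absent
              Node := Data.FindPath('name');
              if Assigned(Node) then
                WriteLn('name = ', Node.AsString);
            finally
              Data.Free;
            end;
          finally
            Parser.Free;
          end;
        end;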

    Read the article

  • ASP.NET Convert to Web App question

    - by mattgcon
    The following web control will not convert (add designer and .cs pages) for some reason; I am getting a "page directive is missing" error. What is wrong with the code that is causing it not to convert?

        <%@ Control Language="C#" AutoEventWireup="true" %>
        <%@ Register TagPrefix="ipam" TagName="tnavbar" src="~/controls/tnavbar.ascx" %>
        <script language="C#" runat="server">
            string strCurrent = "";
            string strDepth = "";

            public string Current
            {
                get { return strCurrent; }
                set { strCurrent = value; }
            }

            public string Depth
            {
                get { return strDepth; }
                set { strDepth = value; }
            }

            void Page_Load(Object sender, EventArgs e)
            {
                idTnavbar.Current = strCurrent;
                idTnavbar.Item1Link = strDepth + idTnavbar.Item1Link;
                idTnavbar.Item2Link = strDepth + idTnavbar.Item2Link;
                idTnavbar.Item3Link = strDepth + idTnavbar.Item3Link;
                idTnavbar.Item4Link = strDepth + idTnavbar.Item4Link;
                idTnavbar.Item5Link = strDepth + idTnavbar.Item5Link;
                idTnavbar.Item6Link = strDepth + idTnavbar.Item6Link;
                idTnavbar.Item7Link = strDepth + idTnavbar.Item7Link;
            }
        </script>
        <ipam:tnavbar id="idTnavbar"
            Item1="2000 -- 2001" Item1Link="2000_--_2001.aspx"
            Item2="2001 -- 2002" Item2Link="2001_--_2002.aspx"
            Item3="2002 -- 2003" Item3Link="2002_--_2003.aspx"
            Item4="2003 -- 2004" Item4Link="2003_--_2004.aspx"
            Item5="2004 -- 2005" Item5Link="2004_--_2005.aspx"
            Item6="2005 -- 2006" Item6Link="2005_--_2006.aspx"
            Item7="2006 -- 2007" Item7Link="2006_--_2007.aspx"
            runat="server" />

    Please help; if I can solve this issue, many more pages will be fixed too.
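    Editor's note, a hedged suggestion: "Convert to Web Application" generates designer and .cs files for pages and controls that use code-behind, so a control whose logic lives entirely in an inline <script runat="server"> block gives the converter nothing to work from. One direction worth trying (a sketch under that assumption; the file and class names are illustrative, not from the original) is to move the inline code into a code-behind class first:

        <%@ Control Language="C#" AutoEventWireup="true"
            CodeFile="YearNav.ascx.cs" Inherits="YearNav" %>
        <%@ Register TagPrefix="ipam" TagName="tnavbar" src="~/controls/tnavbar.ascx" %>
        <ipam:tnavbar id="idTnavbar" runat="server"
            Item1="2000 -- 2001" Item1Link="2000_--_2001.aspx" />
        <%-- ...Item2 through Item7 as in the original markup... --%>

        // YearNav.ascx.cs (illustrative code-behind)
        using System;

        public partial class YearNav : System.Web.UI.UserControl
        {
            public string Current { get; set; }
            public string Depth { get; set; }

            protected void Page_Load(object sender, EventArgs e)
            {
                idTnavbar.Current = Current;
                idTnavbar.Item1Link = Depth + idTnavbar.Item1Link;
                // ...and likewise for Item2Link through Item7Link
            }
        }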

    Read the article

  • Integrate Google Maps API into an iPhone app

    - by Corey Floyd
    Update: iPhone SDK 3.0 now addresses the question here, but the NDA prevents any in-depth discussion. Log in to the iPhone Dev Center if you need more info.

    OK, I have to admit I'm a little lost here. I am fairly comfortable with Cocoa, but am having trouble picking up the bit of JavaScript needed to solve this problem. I am trying to send a request to Google for a reverse geocode. I have looked over the Google documentation here:

    http://code.google.com/apis/maps/documentation/index.html
    http://code.google.com/apis/maps/documentation/geocoding/

    Even after a rough reading, I am missing a basic concept: how do I talk to Google? In some examples they show a URL being sent to Google (which seems easy enough), but in others they show JavaScript. It seems that for reverse geocoding the request might be harder than sending a URL with some parameters (but I hope I am wrong). Can someone point me to the correct way to make a request? (In Objective-C, so I can wrap my head around it.)
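    Editor's note, a hedged sketch: the geocoding documentation linked above describes a plain HTTP interface, so no JavaScript is required; you build a URL with the coordinates and fetch it. The endpoint and parameters below follow the HTTP geocoding service as documented in that era and should be re-checked against the current docs; the coordinates and key placeholder are illustrative:

        // Build the reverse-geocode URL described in the HTTP geocoding docs
        // (q = "lat,lng", output = json; an API key was required at the time).
        NSString *urlString = [NSString stringWithFormat:
            @"http://maps.google.com/maps/geo?q=%f,%f&output=json&key=%@",
            40.714224, -73.961452, @"YOUR_API_KEY"];

        NSURL *url = [NSURL URLWithString:urlString];
        NSURLRequest *request = [NSURLRequest requestWithURL:url];

        // Fire the request asynchronously; the delegate's connection:didReceiveData:
        // and connectionDidFinishLoading: callbacks collect the JSON response.
        NSURLConnection *connection =
            [[NSURLConnection alloc] initWithRequest:request delegate:self];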

    Read the article

< Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >