Search Results

Search found 11380 results on 456 pages for 'cpu speed'.


  • Python networkx DFS or BFS missing?

    - by sadawd
    Dear everyone, I am interested in finding a path (not necessarily the shortest) in a short amount of time. Dijkstra and A* in networkx are taking too long. Why is there no DFS or BFS in networkx? I plan to write my own DFS and BFS search (I am leaning more towards BFS because my graph is pretty deep). Is there anything in networkx's library that I can use to speed me up? Thanks
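
    For reference, depending on the networkx version in use, traversal helpers such as nx.bfs_edges may already be available; if not, a hand-rolled BFS is only a few lines. The sketch below is illustrative (the function name bfs_path is not part of networkx; only Graph.neighbors() is assumed from the networkx API):

        # Minimal BFS path finder over a networkx graph G.
        # Returns some path from source to target, or None if unreachable.
        from collections import deque

        def bfs_path(G, source, target):
            if source == target:
                return [source]
            parents = {source: None}
            queue = deque([source])
            while queue:
                node = queue.popleft()
                for nbr in G.neighbors(node):
                    if nbr in parents:
                        continue          # already visited
                    parents[nbr] = node
                    if nbr == target:     # walk parent pointers back to source
                        path = [nbr]
                        while parents[path[-1]] is not None:
                            path.append(parents[path[-1]])
                        return list(reversed(path))
                    queue.append(nbr)
            return None

        # Usage (hypothetical graph):
        # import networkx as nx
        # G = nx.erdos_renyi_graph(1000, 0.01)
        # print(bfs_path(G, 0, 42))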

    Read the article

  • Implement custom JTA XAResource for using with hibernate

    - by jstingo
    I have two levels of access to the database: the first through Hibernate, the second through JDBC. The JDBC level works with non-transactional tables (I use MyISAM for speed). I want to make both levels work within a single transaction. I have read about JTA, which can manage distributed transactions, but there is little information on the internet about how to implement and use a custom resource. Does anyone have experience with implementing custom XAResources?

    Read the article

  • Flex barChart and XML Data

    - by theband
        <Projectlist>
          <Project>
            <ProjectName>Alcoswitch - ToggleSwitches</ProjectName>
            <ProjectStatusname>Planning</ProjectStatusname>
          </Project>
          <Project>
            <ProjectName>Transverse Wedge</ProjectName>
            <ProjectStatusname>Canceled</ProjectStatusname>
          </Project>
          <Project>
            <ProjectName>High Speed Pluggable I/O</ProjectName>
            <ProjectStatusname>In-Progress</ProjectStatusname>
          </Project>
          <Project>
            <ProjectName>"High Speed Pluggable I/O - Product Breakouts:</ProjectName>
            <ProjectStatusname>In-Progress</ProjectStatusname>
          </Project>
          <Project>
            <ProjectName>Circular Plastic Connector (CPC)</ProjectName>
            <ProjectStatusname>In-Progress</ProjectStatusname>
          </Project>
        </Projectlist>

    This is the XML data I am receiving. How can I show this in a bar chart? Here is my current chart:

        <mx:BarChart id="barChart" showDataTips="true" dataProvider="{ProjectStateInfo}" width="100%" height="100%">
          <mx:horizontalAxis>
            <mx:CategoryAxis categoryField="ProjectStatusname"/>
          </mx:horizontalAxis>
          <mx:verticalAxis>
            <mx:CategoryAxis categoryField="ProjectName"/>
          </mx:verticalAxis>
          <mx:series>
            <mx:BarSeries id="barSeries" visible="true" yField="ProjectName" xField="ProjectStatusname" displayName="ProjectStatusname"/>
          </mx:series>
        </mx:BarChart>

    My x-axis shows multiple values of In-Progress, but I just need one. Is it possible to represent such a relationship using BarChart? Any other Flex chart suggestion is welcome.

    Read the article

  • How to play YUV video in Qt4?

    - by shingle
    I want to play a YUV video sequence using Qt. Right now I am drawing onto a QPixmap pixel by pixel with DrawPixel, but it cannot play the video in real time. What can I do to improve the speed?
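
    The usual remedy for this kind of bottleneck is to convert each whole frame to RGB in one pass and blit it as a single image, rather than drawing individual pixels. As a sketch of the conversion step only, here is a vectorized NumPy version (the function name, the planar I420 layout, and the BT.601 coefficients are assumptions, not taken from the question; the resulting RGB buffer could then be wrapped in an image object such as a QImage for display):

        import numpy as np

        def i420_to_rgb(frame_bytes, width, height):
            # Split one planar I420 (YUV 4:2:0) frame into its Y, U, V planes.
            y_size = width * height
            uv_size = y_size // 4
            y = np.frombuffer(frame_bytes, np.uint8, y_size).reshape(height, width).astype(np.float32)
            u = np.frombuffer(frame_bytes, np.uint8, uv_size, y_size).reshape(height // 2, width // 2)
            v = np.frombuffer(frame_bytes, np.uint8, uv_size, y_size + uv_size).reshape(height // 2, width // 2)
            # Upsample chroma to full resolution and center it on zero.
            u = u.repeat(2, axis=0).repeat(2, axis=1).astype(np.float32) - 128.0
            v = v.repeat(2, axis=0).repeat(2, axis=1).astype(np.float32) - 128.0
            # BT.601 conversion, applied to the whole frame at once.
            r = y + 1.402 * v
            g = y - 0.344 * u - 0.714 * v
            b = y + 1.772 * u
            return np.clip(np.dstack([r, g, b]), 0, 255).astype(np.uint8)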

    Read the article

  • Any other kinds of "Task Queue" APIs ?

    - by sork
    I'm curious whether it's common practice outside of the GAE platform to defer tasks to background workers via webhooks. I find it particularly useful for speeding up the front end of web apps by delegating any long-running process to background tasks. I'd like to hear about open-source software that implements a TaskQueue-like API, preferably with webhooks, if anyone has experience in this area. Thanks!
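
    The pattern itself is small enough to sketch: the request handler enqueues a payload and returns immediately, and a background worker later delivers it to a webhook URL. The sketch below is illustrative only (the queue, worker, and URL handling are assumptions, not any particular product's API); dedicated open-source brokers and libraries such as Celery add the persistence, retries, and scheduling this toy version lacks.

        import json
        import queue
        import threading
        import urllib.request

        task_queue = queue.Queue()

        def enqueue(webhook_url, payload):
            # Called from the web front end; returns without doing the work.
            task_queue.put((webhook_url, payload))

        def worker():
            # Runs in the background and POSTs each payload to its webhook.
            while True:
                webhook_url, payload = task_queue.get()
                data = json.dumps(payload).encode("utf-8")
                req = urllib.request.Request(
                    webhook_url, data=data,
                    headers={"Content-Type": "application/json"})
                try:
                    urllib.request.urlopen(req, timeout=10)
                finally:
                    task_queue.task_done()

        threading.Thread(target=worker, daemon=True).start()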

    Read the article

  • What is a columnar database?

    - by Raj More
    I have been working with warehousing for a while now. I am intrigued by columnar databases and the speed they offer for data retrieval. I have a multi-part question: How do columnar databases work? How do they differ from row-oriented relational databases? Is there a trial version of a columnar database I can install to play around with? (I am on Windows 7.)
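
    The core idea is easy to show in a toy example (this illustrates the storage layouts only, not how any particular product is implemented): a row store keeps each record's fields together, while a column store keeps each column in its own contiguous array, so an aggregate over one column touches far less data.

        # Row-oriented layout: one record per entry, all fields together.
        rows = [
            {"id": 1, "region": "EU", "sales": 100.0},
            {"id": 2, "region": "US", "sales": 250.0},
            {"id": 3, "region": "EU", "sales": 75.0},
        ]

        # Column-oriented layout: one contiguous array per column.
        columns = {
            "id": [1, 2, 3],
            "region": ["EU", "US", "EU"],
            "sales": [100.0, 250.0, 75.0],
        }

        # SELECT SUM(sales): the row store walks every field of every row,
        # while the column store reads only the 'sales' array (which also
        # compresses well, since a column holds values of a single type).
        assert sum(r["sales"] for r in rows) == sum(columns["sales"])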

    Read the article

  • Debugging a performance issue on ListBoxDragDropTarget (Silverlight Toolkit)?

    - by carlmon
    I have a complex project using the Silverlight Toolkit's ListBoxDragDropTarget for drag-and-drop operations, and it is maxing out the CPU. I tried to reproduce the issue in a small sample project, but there it works fine. The problem persists when I remove our custom styles and all other controls from the page, but the page is hosted in another page's ScrollView. "EnableRedrawRegions" shows that the screen gets redrawn on every frame. My question is this: how can I track down the cause of this constant redrawing?

    Read the article

  • Advice on logic circuits and serial communications

    - by Spencer Ruport
    As far as I understand the serial port so far, data is transferred over pin 3. There are two things that make me uncomfortable about this. The first is that it seems to imply that the two connected devices must agree on a signal speed, and the second is that even if they are configured to run at the same speed you run into possible synchronization issues... right? Such things can be handled, I suppose, but it seems like there must be a simpler method.

    What seems like a better approach to me would be to have one of the serial port pins send a pulse that indicates that the next bit is ready to be stored. So if we're hooking these pins up to a shift register we basically have: (some pulse pin)-clk, tx-d. Is this a common practice? Is there some reason not to do this?

    EDIT: Mike shouldn't have deleted his answer. This I2C (2-pin serial) approach seems fairly close to what I did. The serial port doesn't have a clock, you're right nobugz, but that's basically what I've done. See here:

        private void SendBytes(byte[] data)
        {
            int baudRate = 0;
            int byteToSend = 0;
            int bitToSend = 0;
            byte bitmask = 0;
            byte[] trigger = new byte[1];
            trigger[0] = 0;

            SerialPort p;
            try
            {
                p = new SerialPort(cmbPorts.Text);
            }
            catch
            {
                return;
            }

            if (!int.TryParse(txtBaudRate.Text, out baudRate)) return;
            if (baudRate < 100) return;
            p.BaudRate = baudRate;

            for (int index = 0; index < data.Length * 8; index++)
            {
                byteToSend = (int)(index / 8);
                bitToSend = index - (byteToSend * 8);
                bitmask = (byte)System.Math.Pow(2, bitToSend);

                p.Open();
                p.Parity = Parity.Space;
                p.RtsEnable = (byte)(data[byteToSend] & bitmask) > 0;
                s = p.BaseStream;
                s.WriteByte(trigger[0]);
                p.Close();
            }
        }

    Before anyone tells me how ugly this is or how I'm destroying my transfer speeds, my quick answer is that I don't care about that. My point is that this seems much, much simpler than the method you described in your answer, nobugz. And it wouldn't be as ugly if the .NET SerialPort class gave me more control over the pin signals. Are there other serial port APIs that do?

    Read the article

  • Floating point vs integer calculations on modern hardware

    - by maxpenguin
    I am doing some performance-critical work in C++, and we are currently using integer calculations for problems that are inherently floating point because "it's faster". This causes a whole lot of annoying problems and adds a lot of annoying code.

    Now, I remember reading about how floating point calculations were so slow circa the 386 days, where I believe (IIRC) there was an optional co-processor. But surely nowadays, with exponentially more complex and powerful CPUs, it makes no difference in "speed" whether you do floating point or integer calculation? Especially since the actual calculation time is tiny compared to something like causing a pipeline stall or fetching something from main memory?

    I know the correct answer is to benchmark on the target hardware. What would be a good way to test this? I wrote two tiny C++ programs and compared their run time with "time" on Linux, but the actual run time is too variable (it doesn't help that I am running on a virtual server). Short of spending my entire day running hundreds of benchmarks, making graphs, etc., is there something I can do to get a reasonable test of the relative speed? Any ideas or thoughts? Am I completely wrong?

    The programs I used are as follows; they are not identical by any means:

        #include <iostream>
        #include <cmath>
        #include <cstdlib>
        #include <time.h>

        int main( int argc, char** argv )
        {
            int accum = 0;

            srand( time( NULL ) );

            for( unsigned int i = 0; i < 100000000; ++i )
            {
                accum += rand( ) % 365;
            }
            std::cout << accum << std::endl;

            return 0;
        }

    Program 2:

        #include <iostream>
        #include <cmath>
        #include <cstdlib>
        #include <time.h>

        int main( int argc, char** argv )
        {
            float accum = 0;

            srand( time( NULL ) );

            for( unsigned int i = 0; i < 100000000; ++i )
            {
                accum += (float)( rand( ) % 365 );
            }
            std::cout << accum << std::endl;

            return 0;
        }

    Thanks in advance!

    Read the article

  • iPhone Frameworks

    - by Kevin
    What is a strong iPhone framework to start out developing with, besides the SDK from Apple? Are there any that exist to speed up development time?

    Read the article

  • silverlight 4 net tcp binding security

    - by SLfan
    This document talks about how to send a username and password from an SL4 app to a web service. It assumes that HTTPS will be used for the transport. However, I want to use net.tcp because of its speed. Is that possible? Another article says net.tcp in SL4 does not provide transport-level security. If that is incorrect, how do I convert the HTTPS implementation to net.tcp?

    Read the article

  • Application Freezing after some idle time

    - by Rakib Hasan
    Hello, I am developing software using C# 2.0 that uses about 200 MB of memory and occasionally high CPU. The problem is, when I leave my machine idle for about 20-30 minutes with the application running, then come back and try to use the application, it freezes for about 2 minutes before becoming interactive again. Why does this happen? Is there any way to avoid it? Thank you all. Regards, -Rakib

    Read the article

  • Is this slow WPF TextBlock performance expected?

    - by Ben Schoepke
    Hi, I am doing some benchmarking to determine if I can use WPF for a new product. However, early performance results are disappointing. I made a quick app that uses data binding to display a bunch of random text inside of a list box every 100 ms, and it was eating up ~15% CPU. So I made another quick app that skipped the data binding/data template scheme and does nothing but update 10 TextBlocks that are inside of a ListBox every 100 ms (the actual product wouldn't require 100 ms updates, more like 500 ms max, but this is a stress test). I'm still seeing ~10-15% CPU usage. Why is this so high? Is it because of all the garbage strings?

    Here's the XAML:

        <Grid>
            <ListBox x:Name="numericsListBox">
                <ListBox.Resources>
                    <Style TargetType="TextBlock">
                        <Setter Property="FontSize" Value="48"/>
                        <Setter Property="Width" Value="300"/>
                    </Style>
                </ListBox.Resources>
                <TextBlock/>
                <TextBlock/>
                <TextBlock/>
                <TextBlock/>
                <TextBlock/>
                <TextBlock/>
                <TextBlock/>
                <TextBlock/>
                <TextBlock/>
                <TextBlock/>
            </ListBox>
        </Grid>

    Here's the code behind:

        public partial class Window1 : Window
        {
            private int _count = 0;

            public Window1()
            {
                InitializeComponent();
            }

            private void OnLoad(object sender, RoutedEventArgs e)
            {
                var t = new DispatcherTimer(TimeSpan.FromSeconds(0.1),
                                            DispatcherPriority.Normal,
                                            UpdateNumerics, Dispatcher);
                t.Start();
            }

            private void UpdateNumerics(object sender, EventArgs e)
            {
                ++_count;
                foreach (object textBlock in numericsListBox.Items)
                {
                    var t = textBlock as TextBlock;
                    if (t != null) t.Text = _count.ToString();
                }
            }
        }

    Any ideas for a better way to quickly render text? My computer: XP SP3, 2.26 GHz Core 2 Duo, 4 GB RAM, Intel 4500 HD integrated graphics. And that is an order of magnitude beefier than the hardware I'd need to develop for in the real product.

    Read the article

  • one high-end server with one Application Server or multiple Application Servers?

    - by elgcom
    If I have a high-end server, for example with 1 TB of memory and 8 quad-core CPUs, will it bring more performance if I run multiple app servers (each on its own JVM) rather than just one? On the app server I will run some services (EARs with message-driven beans) that exchange messages with each other. By the way, does 64-bit Java no longer have any memory limitation? http://java.sun.com/products/hotspot/whitepaper.html#64

    Read the article

  • using Multi Probe LSH with LSHKIT

    - by Yijinsei
    Hi guys, I have read through the source code for mplsh, but I am still unsure how to use the indexes generated by lshkit to speed up comparing feature vectors by Euclidean distance. Does anyone have experience with this?

    Read the article

  • How to configure a Firebird Database to run in memory

    - by Robert
    I'm running software called Fishbowl Inventory, which runs on a Firebird database (Windows Server 2003). At the moment the Fishbowl software runs extremely slowly when more than one user accesses it. I'm thinking I may be able to speed up the application by forcing the database to run in memory; however, I cannot find documentation on how to do this. Any help would be greatly appreciated. Thank you in advance. Robert

    Read the article

  • jquery effects (show)

    - by matthewsteiner
    Is there a way to just have something "show"? I know there's the effect called show, but I mean something with no animation. I know I could make the speed very fast, or I could change the CSS from hidden, but does someone know of a built-in method that does that? Same with "hide".

    Read the article

  • Why is numpy's einsum faster than numpy's built-in functions?

    - by Ophion
    Let's start with three arrays of dtype=np.double. Timings are performed on an Intel CPU using numpy 1.7.1 compiled with icc and linked to Intel's MKL. An AMD CPU with numpy 1.6.1 compiled with gcc without MKL was also used to verify the timings. Please note the timings scale nearly linearly with system size and are not due to the small overhead incurred in the numpy functions' if statements; those differences would show up in microseconds, not milliseconds:

        arr_1D=np.arange(500,dtype=np.double)
        large_arr_1D=np.arange(100000,dtype=np.double)
        arr_2D=np.arange(500**2,dtype=np.double).reshape(500,500)
        arr_3D=np.arange(500**3,dtype=np.double).reshape(500,500,500)

    First let's look at the np.sum function:

        np.all(np.sum(arr_3D)==np.einsum('ijk->',arr_3D))
        True

        %timeit np.sum(arr_3D)
        10 loops, best of 3: 142 ms per loop

        %timeit np.einsum('ijk->', arr_3D)
        10 loops, best of 3: 70.2 ms per loop

    Powers:

        np.allclose(arr_3D*arr_3D*arr_3D,np.einsum('ijk,ijk,ijk->ijk',arr_3D,arr_3D,arr_3D))
        True

        %timeit arr_3D*arr_3D*arr_3D
        1 loops, best of 3: 1.32 s per loop

        %timeit np.einsum('ijk,ijk,ijk->ijk', arr_3D, arr_3D, arr_3D)
        1 loops, best of 3: 694 ms per loop

    Outer product:

        np.all(np.outer(arr_1D,arr_1D)==np.einsum('i,k->ik',arr_1D,arr_1D))
        True

        %timeit np.outer(arr_1D, arr_1D)
        1000 loops, best of 3: 411 us per loop

        %timeit np.einsum('i,k->ik', arr_1D, arr_1D)
        1000 loops, best of 3: 245 us per loop

    All of the above are twice as fast with np.einsum. These should be apples-to-apples comparisons, as everything is specifically of dtype=np.double. I would expect the speed-up in an operation like this:

        np.allclose(np.sum(arr_2D*arr_3D),np.einsum('ij,oij->',arr_2D,arr_3D))
        True

        %timeit np.sum(arr_2D*arr_3D)
        1 loops, best of 3: 813 ms per loop

        %timeit np.einsum('ij,oij->', arr_2D, arr_3D)
        10 loops, best of 3: 85.1 ms per loop

    Einsum seems to be at least twice as fast for np.inner, np.outer, np.kron, and np.sum regardless of axis selection. The primary exception is np.dot, as it calls DGEMM from a BLAS library. So why is np.einsum faster than other numpy functions that are equivalent? The DGEMM case, for completeness:

        np.allclose(np.dot(arr_2D,arr_2D),np.einsum('ij,jk',arr_2D,arr_2D))
        True

        %timeit np.einsum('ij,jk',arr_2D,arr_2D)
        10 loops, best of 3: 56.1 ms per loop

        %timeit np.dot(arr_2D,arr_2D)
        100 loops, best of 3: 5.17 ms per loop

    The leading theory is from @seberg's comment that np.einsum can make use of SSE2, while numpy's ufuncs will not until numpy 1.8 (see the change log). I believe this is the correct answer, but I have not been able to confirm it. Some limited proof can be found by changing the dtype of the input arrays and observing the speed difference, and by the fact that not everyone observes the same trends in timings.
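
    One structural difference worth noting for the mixed 'ij,oij->' case (an illustration, not a confirmed explanation of the timing gap above): the ufunc route materializes the full elementwise product as a temporary array before summing it, whereas einsum reduces on the fly without allocating that temporary. A small sketch with deliberately modest array sizes:

        import numpy as np

        a = np.arange(100 * 100, dtype=np.double).reshape(100, 100)
        b = np.arange(50 * 100 * 100, dtype=np.double).reshape(50, 100, 100)

        tmp = a * b                         # materializes a full 50x100x100 temporary
        slow = tmp.sum()
        fast = np.einsum('ij,oij->', a, b)  # reduces on the fly, no large temporary
        assert np.allclose(slow, fast)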

    Read the article
