Search Results

Search found 13608 results on 545 pages for 'performance dashboard'.

Page 430/545 | < Previous Page | 426 427 428 429 430 431 432 433 434 435 436 437  | Next Page >

  • Is loading a video in a browser multithreaded?

    - by mwilcox
    It's hard to know what is multithreaded in a browser and what isn't. It seems that while a video streams or progressively downloads, it doesn't affect page performance, so my guess is that it is. Note that I'm using Flash video, but the question is really about video in general. Any other tips on what else is multithreaded (image loads?) would also be helpful. I know JavaScript is not, and I thought Flash wasn't either, but I've heard it may be (or could be made to be), though the source may not have been well informed.

    Read the article

  • How would MVVM be for games?

    - by Benny Jobigan
    Particularly for 2D games, and particularly Silverlight/WPF games. If you think about it, you can divide a game object into its view (the graphic on the screen) and a view-model/model (the state, AI and other data for the object). In Silverlight, it seems common to make each object a user control, putting the model and view into a single object. I suppose the advantage of this is simplicity. But perhaps it's less clean, or has some disadvantages in terms of the underlying "game engine". What are your thoughts on this matter? What are some advantages and disadvantages of using the MVVM pattern for game development? How about performance? All thoughts are welcome.
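
    As a rough illustration, a minimal sketch of the split described above (all names are hypothetical, not from any real engine): the view-model owns the state and the per-frame update logic, and a Silverlight/WPF view would data-bind to its properties rather than the user control owning the state itself.

        using System.ComponentModel;

        // Hypothetical game-object view-model: holds position state and the
        // update logic; the view binds to X and Y and redraws when they change.
        public class EnemyViewModel : INotifyPropertyChanged
        {
            private double _x, _y;

            public double X
            {
                get { return _x; }
                set { _x = value; OnPropertyChanged("X"); }
            }

            public double Y
            {
                get { return _y; }
                set { _y = value; OnPropertyChanged("Y"); }
            }

            // Game logic lives here, not in the user control.
            public void Update(double dt) { X += 50 * dt; }

            public event PropertyChangedEventHandler PropertyChanged;

            private void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }

    One performance caveat worth noting: raising PropertyChanged every frame for every object puts the binding engine on the hot path, which is one of the usual arguments against full MVVM for fast-moving 2D games.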

    Read the article

  • How to organise a PHP-based website

    - by bsandrabr
    I am putting my PHP/MySQL website up, and this is my scenario: the users are grouped into sites, each site with its own unique database, and there will be about 40 users per site. The two options I'm trying to decide between are: (1) have a central website running the PHP and directing the users off to their own database; or (2) use subdomains for each site, each with its own PHP in htdocs. I don't even know if option 2 is possible (or just stupid), but if it were, would it make any difference to performance, as they're all being run by the same server? Any other ideas/advice much appreciated, as I want to organise it the best way from the start.

    Read the article

  • Speed up the loop operation in R

    - by Kay
    Hi, I have a big performance problem in R. I wrote a function that iterates over a data.frame object. It simply adds a new column to the data.frame and accumulates a value (a simple operation). The data.frame has roughly 850,000 rows. My PC has been working on it for about 10 hours now and I have no idea about the runtime.

        dayloop2 <- function(temp){
          for (i in 1:nrow(temp)){
            temp[i,10] <- i
            if (i > 1) {
              if ((temp[i,6] == temp[i-1,6]) & (temp[i,3] == temp[i-1,3])) {
                temp[i,10] <- temp[i,9] + temp[i-1,10]
              } else {
                temp[i,10] <- temp[i,9]
              }
            } else {
              temp[i,10] <- temp[i,9]
            }
          }
          names(temp)[names(temp) == "V10"] <- "Kumm."
          return(temp)
        }

    Any ideas how to speed up this operation?

    Read the article

  • Interpreters: How much simplification?

    - by Ray
    In my interpreter, code like the following

        x=(y+4)*z
        echo x

    parses and "optimizes" down to four single operations performed by the interpreter, pretty much assembly-like:

        add 4 to y
        multiply <last operation result> with z
        set x to <last operation result>
        echo x

    In modern interpreters (for example CPython, Ruby, PHP), how simplified are the "opcodes" that are ultimately executed by the interpreter? Could I achieve better performance by keeping the structures and commands for the interpreter more complex and high-level? That would surely be a lot harder, though, wouldn't it?
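
    For illustration, a minimal sketch (invented for this question, not taken from any real interpreter) of the kind of fine-grained, stack-based dispatch loop described above:

        using System;
        using System.Collections.Generic;

        enum Op { Push, Load, Add, Mul, Store, Echo }

        class TinyVm
        {
            private readonly Dictionary<string, long> _vars = new Dictionary<string, long>();
            private readonly Stack<long> _stack = new Stack<long>();

            // x=(y+4)*z; echo x  compiles to:
            //   Load "y", Push 4L, Add, Load "z", Mul, Store "x", Echo "x"
            public void Run(IEnumerable<(Op op, object arg)> program)
            {
                foreach (var (op, arg) in program)
                {
                    switch (op)
                    {
                        case Op.Push:  _stack.Push((long)arg); break;           // push a constant
                        case Op.Load:  _stack.Push(_vars[(string)arg]); break;  // push a variable's value
                        case Op.Add:   _stack.Push(_stack.Pop() + _stack.Pop()); break;
                        case Op.Mul:   _stack.Push(_stack.Pop() * _stack.Pop()); break;
                        case Op.Store: _vars[(string)arg] = _stack.Pop(); break;
                        case Op.Echo:  Console.WriteLine(_vars[(string)arg]); break;
                    }
                }
            }
        }

    Each iteration of that loop is cheap but does very little work, which is exactly the trade-off the question is asking about.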

    Read the article

  • How many parameters in a C# method are acceptable?

    - by Valentin Heinitz
    I am new to C# and have to maintain a C# application. Now I've found a method having 32 parameters (not auto-generated code). From C/C++ I remember the rule of thumb of 4 parameters. It may be an old-fashioned rule rooted in old x86 compilers, where 4 parameters could be accommodated in registers (fast) or on the stack otherwise. I am not concerned about performance, but I do have a feeling that 32 parameters per method are not easy to maintain, even in C#. Or am I completely out of date? What is the rule of thumb for C#? Thank you for any hint!
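
    For what it's worth, a hedged sketch of the usual refactoring for this situation, grouping related parameters into a parameter object; all names here are hypothetical, not from the actual code base:

        // Before: void PlaceOrder(int customerId, string productCode, int quantity,
        //                         decimal discount, ...32 parameters in all)

        // The parameters become properties of a single request object.
        public class OrderRequest
        {
            public int CustomerId { get; set; }
            public string ProductCode { get; set; }
            public int Quantity { get; set; }
            public decimal Discount { get; set; }
            // ...the remaining parameters move here as well
        }

        public class OrderService
        {
            // After: one parameter, and call sites document themselves
            // through the property names used to build the request.
            public void PlaceOrder(OrderRequest request)
            {
                // body reads request.CustomerId, request.Quantity, ...
            }
        }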

    Read the article

  • "Out of Memory" error in Lotus Notes automation from VBA

    - by PowerUser
    This VBA function sporadically fails with the Notes automation error "Run-Time Error '7' Out of Memory". Naturally, when I try to manually reproduce it, everything runs fine.

        Function ToGMT(ByVal X As Date) As Date
            Static NtSession As NotesSession
            If NtSession Is Nothing Then
                Set NtSession = New NotesSession
                NtSession.Initialize
            End If
            ' (do stuff)
        End Function

    To put this in context, this VBA function is being called by an Access query, 3-4 times per record, over 20,000 records. For performance reasons, the NotesSession has been made static. Any ideas why it sporadically gives an out-of-memory error? (Also, I'm initiating the NotesSession just so I can convert a datetime to GMT using Lotus's rules. If you know a better way, I'm listening.)

    Read the article

  • P/Invoke or C++/CLI for wrapping a C library

    - by Ian G
    I have a moderately sized (40-odd function) C API that needs to be called from a C# project. The functions logically break up to form a few classes that will be the API presented to the rest of the project. Are there any objective reasons to prefer P/Invoke or C++/CLI for the interoperability underneath that API, in terms of robustness, maintainability, deployment, ...? The issues I could think of that might be problematic, but aren't, are: C++/CLI will require a separate assembly, whereas the P/Invoke classes can be in the main assembly (we've already got multiple assemblies and there'll be the C DLLs anyway, so not a major issue); and performance doesn't seem to differ noticeably between the two methods. Issues that I'm not sure about: my feeling is that C++/CLI will be easier to debug if there's an interop problem; is this true? And language familiarity: enough people know C# and C++, but knowledge of the details of C++/CLI is rarer here. Anything else?
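
    As a point of comparison, a minimal P/Invoke sketch for one function of such an API; the library name and signatures are hypothetical placeholders, not the real C API:

        using System;
        using System.Runtime.InteropServices;

        internal static class NativeMethods
        {
            // Hypothetical C functions: int mylib_open(const char*, void**)
            // and void mylib_close(void*).
            [DllImport("mylib.dll", CallingConvention = CallingConvention.Cdecl)]
            internal static extern int mylib_open(string path, out IntPtr handle);

            [DllImport("mylib.dll", CallingConvention = CallingConvention.Cdecl)]
            internal static extern void mylib_close(IntPtr handle);
        }

        // One of the C# classes forming the API: it hides the raw handle
        // so the rest of the project never sees an IntPtr.
        public sealed class MyLibSession : IDisposable
        {
            private IntPtr _handle;

            public MyLibSession(string path)
            {
                if (NativeMethods.mylib_open(path, out _handle) != 0)
                    throw new InvalidOperationException("mylib_open failed");
            }

            public void Dispose()
            {
                if (_handle != IntPtr.Zero)
                {
                    NativeMethods.mylib_close(_handle);
                    _handle = IntPtr.Zero;
                }
            }
        }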

    Read the article

  • How to scale an image (in data URI format) in JavaScript (real scaling, not using styling)

    - by 103067513055141045393
    We are capturing a visible tab in a Chrome browser (by using the extensions API chrome.tabs.captureVisibleTab) and receiving a snapshot in the data URI scheme (a Base64-encoded string). Is there a JavaScript library that can be used to scale an image down to a certain size? Currently we are scaling via CSS, but we pay performance penalties, as the pictures are mostly 100 times bigger than required. An additional concern is the load on localStorage, which we use to save our snapshots. Does anyone know of a way to process these data-URI-formatted pictures and reduce their size by scaling them down? References: the data URI scheme at http://en.wikipedia.org/wiki/Data_URI_scheme, the Chrome Extensions API at http://code.google.com/chrome/extensions/tabs.html, and the "Recently Closed Tabs" Chrome extension at http://code.google.com/p/recently-closed-tabs

    Read the article

  • How to monitor MySQL query errors, timeouts and logon attempts?

    - by Abel
    While setting up a third-party closed-source CMS (Sitefinity), the setup didn't create all the tables and procedures necessary to run it. The software lacks a logging system itself, and it made me wonder: could I trace and monitor failing SQL statements from MySQL? This serves more than only the purpose of solving my issue with Sitefinity: more often I wonder what's being sent to the MySQL server without wanting to dive into the software products or set up a debugging environment, etc. I tried JetProfiler (performance only) and looked through a few others, but although they monitor a lot, they don't monitor query failures, timeouts or logon attempts. Does anyone know of a profiler, tracer or monitoring tool, commercial or free, that can show me this information?

    Read the article

  • Copying data from STDOUT to a remote machine using SFTP

    - by freddie
    In order to back up large database partitions to a remote machine using SFTP, I'd like to use the database's dump command and send the output directly over SFTP to a remote location. This is useful when you need to dump large data sets and don't have enough local disk space to create the backup file first and then copy it to a remote location. I've tried using Python + Paramiko, which provides this functionality, but the performance is much worse than using the native OpenSSH/SFTP binary to transfer files. Does anyone have any idea how to do this, either with the native sftp client on Linux or with some library like Paramiko (but one that performs close to the native sftp client)?

    Read the article

  • Google Web Optimizer -- How long until winning combination?

    - by Django Reinhardt
    I've had an A/B test running in Google Web Optimizer for six weeks now, and there's still no end in sight. Google is still saying: "We have not gathered enough data yet to show any significant results. When we collect more data we should be able to show you a winning combination." Is there any way of telling how close Google is to making up its mind? (Does anyone know what algorithm it uses to decide whether there have been any "high confidence winners"?) According to the Google help documentation: "Sometimes we simply need more data to be able to reach a level of high confidence. A tested combination typically needs around 200 conversions for us to judge its performance with certainty." But all of our combinations have over 200 conversions at the moment:

        230 / 4061 (Original)
        223 / 3937 (Variation 1)
        205 / 3984 (Variation 2)
        205 / 4007 (Variation 3)

    How much longer is it going to have to run? Thanks for any help.

    Read the article

  • Thread.sleep vs Monitor.Wait vs RegisteredWaitHandle?

    - by Royi Namir
    (The following items have different goals, but I'm interested in knowing how they are "paused".) Questions: Thread.Sleep - does it impact performance on a system? Does it tie up a thread while it waits? What about Monitor.Wait - what is the difference in the way they wait? Do they tie up a thread while waiting? And what about RegisteredWaitHandle? This method accepts a delegate that is executed when a wait handle is signaled, and while it's waiting, it doesn't tie up a thread. So some threads are paused and can be woken by a delegate, while others just wait? Spin? Can someone please make things clearer? Edit: http://www.albahari.com/threading/part2.aspx
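
    For the RegisteredWaitHandle case specifically, a minimal runnable sketch of the behaviour described above: no thread is blocked while waiting, and the delegate runs on a thread-pool thread once the handle is signaled (or the timeout elapses).

        using System;
        using System.Threading;

        class Program
        {
            static void Main()
            {
                var signal = new AutoResetEvent(false);

                RegisteredWaitHandle registration = ThreadPool.RegisterWaitForSingleObject(
                    signal,
                    (state, timedOut) => Console.WriteLine(timedOut ? "timed out" : "signaled"),
                    null,                  // state handed to the callback
                    5000,                  // timeout in milliseconds
                    executeOnlyOnce: true);

                signal.Set();              // wakes the callback; no thread was parked waiting
                Thread.Sleep(100);         // demo only: give the pool thread time to run
                registration.Unregister(null);
            }
        }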

    Read the article

  • Java 2D clip area to shape

    - by user2923880
    I'm quite new to graphics in Java and I'm trying to create a shape that clips to the bottom of another shape. What I'm trying to achieve is a white line at the base of a rounded shape, clipped within the round edges. The current way I am doing this is like so:

        g2.setColor(gray);
        Shape shape = getShape(); // round rectangle
        g2.fill(shape);
        Rectangle rect = new Rectangle(shape.getBounds().x, shape.getBounds().y,
                                       width, height - 3);
        Area area = new Area(shape);
        area.subtract(new Area(rect));
        g2.setColor(white);
        g2.fill(area);

    I'm still experimenting with the clip methods but I can't seem to get it right. Is this current method OK (performance-wise, since the component repaints quite often), or is there a more efficient way? Thanks in advance.

    Read the article

  • Trying to reduce the speed overhead of an almost-but-not-quite-int number class

    - by Fumiyo Eda
    I have implemented a C++ class which behaves very similarly to the standard int type. The difference is that it has an additional concept of "epsilon", which represents some tiny value that is much less than 1 but greater than 0. One way to think of it is as a very wide fixed-point number with 32 MSBs (the integer parts), 32 LSBs (the epsilon parts) and a huge sea of zeros in between. The following class works, but introduces a ~2x speed penalty in the overall program. (The program includes code that has nothing to do with this class, so the actual speed penalty of this class is probably much greater than 2x.) I can't paste the code that is using this class, but I can say the following: +, -, +=, <, > and >= are the only heavily used operators; use of setEpsilon() and getInt() is extremely rare; and * is also rare, and does not even need to consider the epsilon values at all. Here is the class:

        #include <limits>

        struct int32Uepsilon {
          typedef int32Uepsilon Self;

          int32Uepsilon ()             { _value = 0; _eps = 0; }
          int32Uepsilon (const int &i) { _value = i; _eps = 0; }

          void setEpsilon() { _eps = 1; }

          Self operator+(const Self &rhs) const {
            Self result = *this;
            result._value += rhs._value;
            result._eps   += rhs._eps;
            return result;
          }

          Self operator-(const Self &rhs) const {
            Self result = *this;
            result._value -= rhs._value;
            result._eps   -= rhs._eps;
            return result;
          }

          Self operator-() const {
            Self result = *this;
            result._value = -result._value;
            result._eps   = -result._eps;
            return result;
          }

          Self operator*(const Self &rhs) const {
            return this->getInt() * rhs.getInt();  // XXX: discards epsilon
          }

          bool operator<(const Self &rhs) const {
            return (_value < rhs._value) ||
                   (_value == rhs._value && _eps < rhs._eps);
          }

          bool operator>(const Self &rhs) const {
            return (_value > rhs._value) ||
                   (_value == rhs._value && _eps > rhs._eps);
          }

          bool operator>=(const Self &rhs) const {
            return (_value >= rhs._value) ||
                   (_value == rhs._value && _eps >= rhs._eps);
          }

          Self &operator+=(const Self &rhs) {
            this->_value += rhs._value;
            this->_eps   += rhs._eps;
            return *this;
          }

          Self &operator-=(const Self &rhs) {
            this->_value -= rhs._value;
            this->_eps   -= rhs._eps;
            return *this;
          }

          int getInt() const { return _value; }

        private:
          int _value;
          int _eps;
        };

        namespace std {
          template<>
          struct numeric_limits<int32Uepsilon> {
            static const bool is_signed = true;
            static int max() { return 2147483647; }
          };
        }

    The code above works, but it is quite slow. Does anyone have any ideas on how to improve performance? There are a few hints/details I can give that might be helpful: 32 bits are definitely insufficient to hold both _value and _eps; in practice, up to 24-28 bits of _value are used, and up to 20 bits of _eps. I could not measure a significant performance difference between using int32_t and int64_t, so memory overhead itself is probably not the problem here. Saturating addition/subtraction on _eps would be cool, but isn't really necessary. Note that the signs of _value and _eps are not necessarily the same! This broke my first attempt at speeding this class up. Inline assembly is no problem, so long as it works with GCC on a Core i7 system running Linux!

    Read the article

  • Foreign key constraints on primary key columns - issues?

    - by zzzeek
    What are the pros/cons, from a performance/indexing/data management perspective, of creating a one-to-one relationship between tables using the primary key on the child as the foreign key, versus a pure surrogate primary key on the child? The first approach seems to reduce redundancy and nicely constrains the one-to-one implicitly, while the second approach seems to be favored by DBAs, even though it creates a second index. Primary key as foreign key:

        create table parent (
            id integer primary key,
            data varchar(50)
        )

        create table child (
            id integer primary key references parent(id),
            data varchar(50)
        )

    Pure surrogate key:

        create table parent (
            id integer primary key,
            data varchar(50)
        )

        create table child (
            id integer primary key,
            parent_id integer unique references parent(id),
            data varchar(50)
        )

    The platforms of interest here are PostgreSQL and Microsoft SQL Server.

    Read the article

  • Will GTK's Pango and Cairo work well in Cocoa and MFC applications?

    - by Lothar
    I'm writing a GUI program and decided to go native on all platforms. But for all the stuff I need to draw myself, I would like to use the same drawing routines everywhere, because font and Unicode handling is so difficult and complex. Do you see any negative points in using Pango/Cairo? So far I haven't succeeded in installing Pango/Cairo on Mac OS X, which looks like a bad omen. I would also like to hear about the performance penalty. The first time I looked at Pango I thought: yes, that's the reason why software is still getting slower despite better hardware.

    Read the article

  • ASP.NET MVC ViewModel: should it be a class or a struct?

    - by Jonas Everest
    Hey guys, I have just been thinking about the concept of the view model objects we create in ASP.NET MVC. Our purpose is to instantiate one and pass it from the controller to the view, and the view reads it and displays the data. Those view models are usually instantiated through a constructor. We won't need to initialize the members later, we may not need to redefine/override a parameterless constructor, and we don't need inheritance there. So why don't we use a struct type for our view models instead of a class? It would enhance performance.
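
    As a hedged sketch of what that would look like (assuming classic ASP.NET MVC; all names are hypothetical), together with the main catch, that structs are copied by value and get boxed when handed to the view:

        using System.Web.Mvc;

        // Hypothetical struct view model: no inheritance, no constructor
        // overrides needed; members are set once and then only read.
        public struct ProductViewModel
        {
            public int Id;
            public string Name;
            public decimal Price;
        }

        public class ProductController : Controller
        {
            public ActionResult Index()
            {
                var vm = new ProductViewModel { Id = 1, Name = "Widget", Price = 9.99m };
                return View(vm); // boxed to object here, which undercuts the struct's savings
            }
        }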

    Read the article

  • NHibernate parent-child save: redundant SQL UPDATE executed

    - by Broken Pipe
    I'm trying to save (insert) a parent object with a collection of child objects; all the objects are new. I prefer to manually specify what to save/update and when, so I do not use any cascade saves in the mappings and I flush sessions myself. So basically I save this object graph like:

        session.Save(Parent);
        foreach (var child in Parent.Childs)
        {
            session.Save(child);
        }
        session.Flush();

    I expect this code to insert the parent row, then each child row. However, NHibernate executes this SQL:

        INSERT INTO PARENT ....
        INSERT INTO CHILD ....
        UPDATE CHILD SET ParentId=@1 WHERE Id=@2

    This UPDATE statement is absolutely unnecessary; ParentId was already set correctly in the INSERT. How do I get rid of it? Performance is very important to me.
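
    For what it's worth, this kind of extra UPDATE is usually the sign of a parent collection that is not mapped as inverse, so the parent also tries to write the foreign key. A hedged sketch of the relevant mapping, assuming Fluent NHibernate and hypothetical Parent/Child classes (with XML mappings the equivalent is inverse="true" on the collection):

        using System.Collections.Generic;
        using FluentNHibernate.Mapping;

        public class Parent
        {
            public virtual int Id { get; set; }
            public virtual IList<Child> Childs { get; set; }
        }

        public class Child
        {
            public virtual int Id { get; set; }
            public virtual Parent Parent { get; set; }
        }

        public class ParentMap : ClassMap<Parent>
        {
            public ParentMap()
            {
                Id(x => x.Id);
                HasMany(x => x.Childs)
                    .Inverse()       // the child side owns the association,
                                     // so no second UPDATE of ParentId
                    .Cascade.None(); // keep saves fully manual, as in the question
            }
        }

        public class ChildMap : ClassMap<Child>
        {
            public ChildMap()
            {
                Id(x => x.Id);
                References(x => x.Parent).Column("ParentId");
            }
        }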

    Read the article

  • Hibernate criteria with projection not performing query for @OneToMany mapping

    - by Josh
    I have a domain object, Expense, that has a field called initialFields. It's annotated like so:

        @OneToMany(fetch = FetchType.EAGER, cascade = { CascadeType.ALL }, orphanRemoval = true)
        @JoinTable(/* blah blah */)
        private final List<Field> initialFields;

    Now I'm trying to use Projections in order to pull only certain fields, for performance reasons, but when I do so the initialFields field is always null. It's the only @OneToMany field and the only field I am trying to retrieve with the projection that behaves this way. If I use a regular HQL query, initialFields is populated appropriately, but of course then I can't limit the fields. Has anyone ever seen anything like this?

    Read the article

  • JavaScript filter vs map problem

    - by graham.reeds
    As a continuation of my min/max across an array of objects question, I was wondering about the performance comparison of filter vs map. So I put together a test on the values in my code, as I was going to look at the results in Firebug. This is the code:

        var _vec = this.vec;
        min_x = Math.min.apply(Math, _vec.filter(function(el){ return el["x"]; }));
        min_y = Math.min.apply(Math, _vec.map(function(el){ return el["x"]; }));

    The mapped version returns the correct result. However, the filtered version returns NaN. Breaking it out, stepping through and finally inspecting the results, it would appear that the inner function returns the x property of _vec, but the actual array returned from filter is the unfiltered _vec. I believe my usage of filter is correct - can anyone else see my problem?

    Read the article

  • Is ReaderWriterLockSlim.EnterUpgradeableReadLock() essentially the same as Monitor.Enter()?

    - by Neil Barnwell
    So I have a situation where I may have many, many reads and only the occasional write to a resource shared between multiple threads. A long time ago I read about ReaderWriterLock, and have read about ReaderWriterGate, which attempts to mitigate the issue where many incoming writes trump reads and hurt performance. However, now I've become aware of ReaderWriterLockSlim... From the docs, I believe that there can only be one thread in "upgradeable mode" at any one time. In a situation where the only access I'm using is EnterUpgradeableReadLock() (which is appropriate for my scenario), is there much difference from just sticking with lock(){}? Here's the excerpt: "A thread that tries to enter upgradeable mode blocks if there is already a thread in upgradeable mode, if there are threads waiting to enter write mode, or if there is a single thread in write mode." Or does the recursion policy make any difference to this?
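
    For reference, the usual shape of the upgradeable-read pattern (a sketch with hypothetical names): the point is that plain readers using EnterReadLock can still proceed alongside the single upgradeable thread, which is what lock(){} cannot give you.

        using System.Threading;

        class Cache
        {
            private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
            private int _value;

            public void EnsureValue(int candidate)
            {
                _lock.EnterUpgradeableReadLock();
                try
                {
                    if (_value != candidate)       // cheap check under read access
                    {
                        _lock.EnterWriteLock();    // upgrade only when a write is needed
                        try { _value = candidate; }
                        finally { _lock.ExitWriteLock(); }
                    }
                }
                finally
                {
                    _lock.ExitUpgradeableReadLock();
                }
            }
        }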

    Read the article

  • Morfik - suitability for medium-scale web enterprise applications

    - by MaikB
    I'm investigating technologies with which to develop a medium-scale (up to 100 or 200 simultaneous users) database-driven web application, and someone suggested Morfik. However, outside of the Morfik company I can find practically zero community support - no active blogs, no tutorials, no videos, no books - and this is of some concern (especially when compared to the support for C#/ASP.NET/NHibernate etc.). Deciding between Morfik (untried, and not used widely AFAIK) and the other technologies I mentioned (tried, tested, used widely) is becoming a critical issue for my company. Has anyone had success using Morfik in this kind of circumstance? What kind of performance did you achieve?

    Read the article

  • How to reliably measure available memory in Linux?

    - by Alex B
    Linux's /proc/meminfo shows a number of memory usage statistics:

        MemTotal:     4040732 kB
        MemFree:        23160 kB
        Buffers:       163340 kB
        Cached:       3707080 kB
        SwapCached:         0 kB
        Active:       1129324 kB
        Inactive:     2762912 kB

    There is quite a bit of overlap between them. For example, as far as I understand, there can be active page cache (belonging to both "Cached" and "Active") and inactive page cache ("Inactive" + "Cached"). What I want to do is measure "free" memory, but in a way that includes used pages that are likely to be dropped without a significant impact on the overall system's performance. At first I was inclined to use "free" + "inactive", but Linux's free utility uses "free" + "cached" in its "buffer-adjusted" display, so I am curious what the better approach is. When the kernel runs out of memory, what is the priority order in which pages are dropped, and what is the more appropriate metric for measuring available memory?

    Read the article

  • Optimize C# Code Fragment

    - by Eric J.
    I'm profiling some C# code. The method below is one of the most expensive ones. For the purpose of this question, assume that micro-optimization is the right thing to do. Is there an approach to improve the performance of this method? Changing the input parameter p to ulong[] would create a macro-inefficiency.

        static ulong Fetch64(byte[] p, int ofs = 0)
        {
            unchecked
            {
                ulong result = p[0 + ofs] +
                    ((ulong)p[1 + ofs] << 8) +
                    ((ulong)p[2 + ofs] << 16) +
                    ((ulong)p[3 + ofs] << 24) +
                    ((ulong)p[4 + ofs] << 32) +
                    ((ulong)p[5 + ofs] << 40) +
                    ((ulong)p[6 + ofs] << 48) +
                    ((ulong)p[7 + ofs] << 56);
                return result;
            }
        }
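
    As an aside, a hedged alternative worth benchmarking against: BitConverter.ToUInt64 performs the same eight-byte read in a single call. It matches Fetch64's byte order only on little-endian hardware, so the equivalence should be verified by a test rather than assumed.

        // Assumes a little-endian platform; verify against Fetch64 first.
        static ulong Fetch64Fast(byte[] p, int ofs = 0)
        {
            return BitConverter.ToUInt64(p, ofs);
        }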

    Read the article
