Search Results

Search found 10789 results on 432 pages for 'cpu upgrade'.


  • Thoughts on GoGrid vs EC2

    - by Jason
    I am currently hosting my SaaS application at GoGrid (Microsoft stack). Here's what I have:

    - Database server - physical box, 12 GB RAM, 2 x quad-core CPU (2.13 GHz Xeon E5506)
    - 2 web / app servers - cloud servers, 2 GB RAM, 2 VCPUs
    - 300 GB monthly bandwidth

    I am paying around $900 / month for this. My web / app servers are bursting at the seams and need to be upgraded to 4 GB of RAM. I also need a firewall, and GoGrid just added this service for an additional $200. After the upgrade, I will be paying around $1,400.

    I started looking at Amazon EC2, specifically this configuration:

    - Database server - "High-Memory Double Extra Large Instance" - 34 GB RAM, 13 EC2 compute units
    - 2 web / app servers - "Large Instance" - 7.5 GB RAM, 4 EC2 compute units

    If I go with 1-year reserved instances, my upfront cost would be $4,500 and my monthly cost would be $700. This comes to $1,075 / month when amortized. Amazon also includes a firewall for free.

    Here are my questions:

    - Do any of you have experience running a database (especially SQL Server) on an EC2 instance? How did it perform compared to a dedicated machine? One of my major concerns is disk I/O.
    - Amazon's description of a compute unit is fairly vague. Any ideas on how the CPU performance of the database servers would compare?

    I am hoping that the Amazon solution will provide significantly better performance than my current, or even an improved, GoGrid setup. Having a virtual database server would also be nice in terms of availability; right now I would be in serious trouble if I had any hardware issues. Thanks for any insight...
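
    For reference, the quoted amortized figure is just the one-off reservation fee spread over the 12-month term plus the monthly charge:

        \[ \frac{\$4{,}500}{12} + \$700 = \$375 + \$700 = \$1{,}075 \text{ per month} \]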


  • How to know if a device can be disabled or not?

    - by user326498
    I use the following code to enable or disable a device installed on my computer:

        SP_PROPCHANGE_PARAMS params;
        memset(&params, 0, sizeof(params));
        params.ClassInstallHeader.cbSize = sizeof(params.ClassInstallHeader);
        params.ClassInstallHeader.InstallFunction = DIF_PROPERTYCHANGE;
        params.Scope = DICS_FLAG_GLOBAL;
        params.StateChange = DICS_DISABLE;
        params.HwProfile = 0; // current profile

        if (!SetupDiSetClassInstallParams(m_hDev, &m_hDevInfo, &params.ClassInstallHeader, sizeof(SP_PROPCHANGE_PARAMS)))
        {
            dwErr = GetLastError();
            return FALSE;
        }
        if (!SetupDiCallClassInstaller(DIF_PROPERTYCHANGE, m_hDev, &m_hDevInfo))
        {
            dwErr = GetLastError();
            return FALSE;
        }
        return TRUE;

    This code works perfectly, but only for devices that can also be disabled through Windows Device Manager; it fails for devices that cannot be disabled, such as my CPU device: Intel(R) Pentium(R) Dual CPU E2160 @ 1.80GHz. So the problem is: how can I determine programmatically whether a device can be disabled? Is there an API for this? Thank you!
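
    One avenue worth checking (a hedged sketch, not part of the original question): the Configuration Manager API reports per-devnode status bits, and cfg.h defines a DN_DISABLEABLE flag among them. Assuming that bit reflects what Device Manager allows, the check could look like this C# P/Invoke sketch, where devInst comes from SP_DEVINFO_DATA.DevInst on the native side:

        using System;
        using System.Runtime.InteropServices;

        static class DeviceCapability
        {
            const int CR_SUCCESS = 0;
            const uint DN_DISABLEABLE = 0x00002000; // status bit from cfg.h

            [DllImport("cfgmgr32.dll")]
            static extern int CM_Get_DevNode_Status(
                out uint status, out uint problemNumber, uint devInst, uint flags);

            // devInst is the SP_DEVINFO_DATA.DevInst of the device being examined.
            public static bool CanDisable(uint devInst)
            {
                uint status, problem;
                if (CM_Get_DevNode_Status(out status, out problem, devInst, 0) != CR_SUCCESS)
                    return false; // assumption: treat lookup failure as "cannot disable"
                return (status & DN_DISABLEABLE) != 0;
            }
        }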


  • Reading same file from multiple threads in C#

    - by Gustavo Rubio
    Hi. I was googling for advice about this and found some links; the most obvious was this one, but in the end what I'm wondering is how well my code is implemented.

    I have basically two classes: one is the Converter and the other is ConverterThread.

    I create an instance of this Converter class, which has a property ThreadNumber that tells me how many threads should run at the same time (this is read from the user). This application will be used on multi-CPU systems (physically, like 8 CPUs), so the idea is that this will speed up the import.

    The Converter instance reads a file that can range from 100 MB to 800 MB, and each line of this file is a tab-delimited value record that is imported into another destination, like a database.

    The ConverterThread class simply runs inside the thread (new Thread(ConverterThread.StartThread)) and has event notification, so when its work is done it can notify the Converter class. I can then sum up the progress for all these threads and notify the user (in the GUI, for example) about how many of these records have been imported and how many bytes have been read.

    It seems, however, that I'm having some trouble, because I get random errors about the file not being readable, or the sum of the progress (percentage) goes above 100%, which is not possible. I think this happens because the threads are not being well managed, and probably the information returned by the event is malformed (since it "travels" from one thread to another).

    Do you have any advice on better practices for implementing threads so I can accomplish this? Thanks in advance.
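
    A hedged sketch of one way to restructure this (the file name, thread count, and chunk policy are illustrative assumptions, and real code would additionally need to align chunk boundaries to record boundaries): give each worker its own read-only stream opened with FileShare.Read, and publish progress with Interlocked so the total can never double-count.

        using System;
        using System.IO;
        using System.Threading;

        class ParallelImporter
        {
            static long totalBytesRead; // shared progress counter, updated atomically

            static void Main()
            {
                const string path = "import.tsv"; // hypothetical input file
                long length = new FileInfo(path).Length;
                const int threadCount = 4;
                var threads = new Thread[threadCount];

                for (int i = 0; i < threadCount; i++)
                {
                    long start = length * i / threadCount;
                    long end = length * (i + 1) / threadCount;
                    threads[i] = new Thread(() => ProcessChunk(path, start, end));
                    threads[i].Start();
                }
                foreach (Thread t in threads) t.Join();

                Console.WriteLine("Read {0} of {1} bytes",
                    Interlocked.Read(ref totalBytesRead), length);
            }

            static void ProcessChunk(string path, long start, long end)
            {
                // Each thread opens its OWN stream; FileShare.Read lets them coexist.
                using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read))
                {
                    fs.Seek(start, SeekOrigin.Begin);
                    var buffer = new byte[64 * 1024];
                    long remaining = end - start;
                    while (remaining > 0)
                    {
                        int read = fs.Read(buffer, 0, (int)Math.Min(buffer.Length, remaining));
                        if (read == 0) break;
                        remaining -= read;
                        // Atomic update is what prevents the ">100%" progress race.
                        Interlocked.Add(ref totalBytesRead, read);
                    }
                }
            }
        }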


  • Parsing data with Clojure, interval problem.

    - by Andrea Di Persio
    Hello! I'm writing a little parser in Clojure for learning purposes. Basically it is a TSV file parser whose records need to be put in a database, but I added a complication: the same file contains more than one interval. The file looks like this:

        ###andreadipersio 2010-03-19 16:10:00###
        USER    COMM               PID   PPID  %CPU  %MEM  TIME
        root    launchd            1     0     0.0   0.0   2:46.97
        root    DirectoryService   11    1     0.0   0.2   0:34.59
        root    notifyd            12    1     0.0   0.0   0:20.83
        root    diskarbitrationd   13    1     0.0   0.0   0:02.84
        ....
        ###andreadipersio 2010-03-19 16:20:00###
        USER    COMM               PID   PPID  %CPU  %MEM  TIME
        root    launchd            1     0     0.0   0.0   2:46.97
        root    DirectoryService   11    1     0.0   0.2   0:34.59
        root    notifyd            12    1     0.0   0.0   0:20.83
        root    diskarbitrationd   13    1     0.0   0.0   0:02.84

    I ended up with this code:

        (defn is-header?
          "Return true if a line is a header"
          [line]
          (> (count (re-find #"^\#{3}" line)) 0))

        (defn extract-fields
          "Return regex matches"
          [line pattern]
          (rest (re-find pattern line)))

        (defn process-lines [lines]
          (map process-line lines))

        (defn process-line [line]
          (if (is-header? line)
            (extract-fields line header-pattern))
          (extract-fields line data-pattern))

    My idea is that in process-line the interval needs to be merged with the data, so that for every row until the next interval I have something like this:

        ("andreadipersio", "2010-03-19", "16:10:00", "root", "launchd", 1, 0, 0.0, 0.0, "2:46.97")

    but I can't figure out how to make this happen. I tried something like this:

        (def process-line [line]
          (if is-header? line)
            (def header-data (extract-fields line header-pattern)))
          (cons header-data (extract-fields line data-pattern)))

    but this doesn't work as expected. Any hints? Thanks!


  • Am I a discoverer of a bug in the WPF engine?

    - by bitbonk
    We have an MFC 8 application compiled with /clr that contains a large number of Windows Forms UserControls, which in turn contain WPF user controls hosted via ElementHost. Due to the architecture of our software we cannot use HwndHost directly.

    We have observed an extremely strange behavior here that we cannot make any sense of: when the CPU load is very high during startup of the application and there are a lot of live ElementHost instances, the whole property engine completely stops working. For example, animations that usually just work fine never update the values of the bound properties; they just stay at some random value after startup. When I set a property that is not bound to anything, the value is correctly stored in the dependency property (calling the getter returns the new value) but the visual representation never reflects that: I set the background to red, but the background color does not change.

    We tested this on a lot of different machines, all running Windows XP SP2, and it is pretty reproducible.

    The funny thing is that there is in fact one situation where the bound properties actually pick up a new value from the animation and the visual gets updated based on the property values: when I resize the ElementHost, or when I hide and re-show the parent native control. As soon as I do this, properties that are bound to an animation pick up a new value and the visuals re-render based on the new property values - but just once; if I want to see another update, I have to resize the ElementHost again.

    Do you have any explanation of what could be happening here, or how I could approach this problem to figure it out? What can I do to debug this? Is there a way I can get more information about what WPF actually does, or where WPF might have crashed? To me it currently seems like a bug in WPF itself, since it only happens under high CPU load at startup.


  • What issues might I have in opening .NET 2.0 Projects in Visual Studio 2010?

    - by Ben McCormack
    The small software team I work on recently got approval to upgrade to Visual Studio 2010 (we're currently using VS 2005). We have several ASP.NET 2.0 and WinForms (.NET 2.0) projects in production. I've been tasked with downloading VS 2010 and seeing how well it plays with our current projects. What issues should I be aware of when targeting older applications in VS 2010?

    - If I open a VS 2005 project in VS 2010, will it still play nicely when my teammate goes back to open the project in VS 2005?
    - Will we have to upgrade projects to work in VS 2010 (assuming the projects themselves aren't upgraded to .NET 4)?
    - Can I use VS 2010 to edit legacy VB6 apps (just kidding)?

    I'm excited to work with the newest software, but we're concerned about running into development snags on production applications that are already working just fine.

    NOTE: I started a bounty in hopes of getting a more detailed answer to this question. Perhaps the answer really is as simple as those already provided, but I'm interested in more feedback regarding our options to transition from VS 2005 to VS 2010.
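
    For reference, the usual round-tripping obstacle is the solution file header, which VS 2010's conversion wizard rewrites. A VS 2005 solution begins with the first pair of lines below, and a converted one with the second (shown as a sketch of typical .sln files, not your exact projects):

        Microsoft Visual Studio Solution File, Format Version 9.00
        # Visual Studio 2005

        Microsoft Visual Studio Solution File, Format Version 11.00
        # Visual Studio 2010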


  • AnyCPU/x86/x64 for C# application and its C++/CLI dependency

    - by Soonts
    I'm a Windows developer using Microsoft Visual Studio 2008 SP1. My developer machine is 64-bit. The software I'm currently working on is a managed .exe written in C#. Unfortunately, I was unable to solve the whole problem solely in C#, so I also developed a small managed DLL in C++/CLI. Both projects are in the same solution.

    My C# .exe build target is "Any CPU". When my C++ DLL build target is "x86", the DLL is not loaded. As far as I understood from googling, the reason is that C++/CLI, unlike other .NET languages, compiles to native code, not managed code. I switched the C++ DLL build target to x64, and everything works now. However, AFAIK everything will stop working as soon as a client installs my product on a 32-bit OS. I have to support Windows Vista and 7, in both the 32-bit and 64-bit versions of each.

    I don't want to fall back to 32 bits. The 250 lines of C++ code in my DLL are only 2% of my codebase, and that DLL is only used in several places, so in the typical usage scenario it's not even loaded. My DLL implements two COM objects with ATL, so I can't use the "/clr:safe" project setting.

    Is there a way to configure the solution and the projects so that the C# project builds an "Any CPU" version, the C++ project builds both 32-bit and 64-bit versions, and then at runtime, when the managed .exe is starting up, it uses either the 32-bit DLL or the 64-bit DLL depending on the OS? Or maybe there's some better solution I'm not aware of? Thanks in advance!
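
    One commonly suggested pattern (a hedged sketch; the helper assembly name, folder layout, and RunApplication entry point are assumptions): build the C++/CLI project twice, ship both copies in x86/ and x64/ subfolders, and hook AppDomain.AssemblyResolve so the "Any CPU" executable picks the matching one at startup. The handler must be attached before any method that touches the helper's types is JIT-compiled:

        using System;
        using System.IO;
        using System.Reflection;

        static class Program
        {
            [STAThread]
            static void Main()
            {
                AppDomain.CurrentDomain.AssemblyResolve += delegate(object sender, ResolveEventArgs e)
                {
                    if (!e.Name.StartsWith("MyNativeHelper")) // hypothetical assembly name
                        return null;
                    string subDir = IntPtr.Size == 8 ? "x64" : "x86";
                    string path = Path.Combine(Path.Combine(
                        AppDomain.CurrentDomain.BaseDirectory, subDir), "MyNativeHelper.dll");
                    return File.Exists(path) ? Assembly.LoadFile(path) : null;
                };
                RunApplication(); // kept out of Main so the resolver is installed first
            }

            static void RunApplication()
            {
                // Application startup that may reference MyNativeHelper types goes here.
            }
        }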


  • Upgraded to Xcode 4 -- Endless stream of duplicate symbol errors causing build errors

    - by D-Nice
    Everything was working perfectly fine in Xcode 3 yesterday before I upgraded. So I completed the upgrade, restarted my computer, and opened my old project. I had to reconfigure a few settings, like the header paths, so that I could begin to compile. I'm using AdWhirl for ad mediation, and at this point my errors begin to read something like:

        duplicate symbol _OBJC_METACLASS_$_SBJSON in
        /Users/Admin/Desktop/TMapLiteAdwhirl/AdWhirl/MMSDK/libMMSDK.a(SBJSON.o) and
        /Users/Admin/Library/Developer/Xcode/DerivedData/TruxMapLite-bgpylibztethnlhkfkdumpvrjvgy/Build/Intermediates/TruxMapLite.build/Debug-iphoneos/TruxMapLite.build/Objects-normal/armv6/SBJSON.o
        for architecture armv6

    The library it refers to is the SDK for one of the ad networks I'm including in AdWhirl. Both of the 'duplicate symbols' refer to the SAME FILE, but they use different paths. If I still had Xcode 3, I would simply try excluding these libraries from the build path, but I have no idea how that can be done in Xcode 4. I've tried everything, all the way down to deleting the library and all associated files from my project, but when I do this I simply get the same type of error for a different library in the AdWhirl directory.

    This is incredibly frustrating because before my upgrade everything was working smoothly and I was prepared to submit my binary. If anyone has any advice, I'd be more than happy to give it a try. Thanks!


  • Patterns / Solutions to complicated Feature Management

    - by yclian
    Hi all,

    My company develops a CDN / web-hosting solution. We have a middleware layer that serves as the business logic layer and exposes a web service for the front-end.

    I would like to find a clean solution to feature management. There are uncertainties and ugly workarounds in the software where the devs would say "when it happens or is broken, we will fix it". For example, here are features that a web publisher can have:

    - Sites limit
    - Bandwidth limit
    - SSL feature + SSL configuration per site

    If we downgrade a web publisher from 10 sites down to 5 sites, we can choose not to suspend the remaining 5 sites, or we can prompt for suspension before the downgrade. For the bandwidth limit, the downgrade is easy: when the bandwidth check happens, if the publisher has exceeded the limit, we suspend his account. For the SSL feature, every SSL configuration is tied to a site; what should happen to these configuration objects when the SSL feature is downgraded from enabled to disabled?

    As you can see, there are many different situations and different ways of handling them. I can make a system that examines the impacts and prompts the user to make changes before the downgrade/upgrade. Or a system that ignores the impacts and just upgrades/downgrades - bad. Or a system designed in such a way that the client code needs to be aware of the complex feature matrix (or I can expose a helper to the client code to check whether a feature is DEFUNCT; see the sketch below).

    There can be many other ways that I am still thinking about but remain puzzled by. I am wondering how you would tackle this issue, and whether there are any recommended patterns, books, or software that you think I can refer to? Appreciate your help.
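
    A minimal sketch of that last "helper" option (all names here are hypothetical, not from the original design): model each feature's lifecycle state explicitly and let any plan change be previewed before it is applied, so the caller decides whether to prompt.

        using System.Collections.Generic;

        public enum FeatureState { Active, Suspended, Defunct }

        public class Plan { /* hypothetical target plan: limits, enabled features, ... */ }

        // Every feature can report the impact of a plan change before it happens.
        public interface IFeature
        {
            string Name { get; }
            FeatureState State { get; }

            // e.g. "5 sites will be suspended", "3 SSL configurations become defunct"
            IList<string> PreviewChange(Plan targetPlan);
            void ApplyChange(Plan targetPlan);
        }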


  • Powershell finding services using a cmdlet dll

    - by bartonm
    I need to upgrade a DLL assembly, written in C#, in our installation. Before I replace the DLL file, I want to check whether the file is locked and, if so, display a message. How do I implement this in PowerShell? I was thinking of iterating through Get-Process, checking module dependencies.

    Solved - I iterated through the list looking for a file path match:

        function IsCaradigmPowershellDLLFree()
        {
            # The list of DLLs to check for locks by running processes.
            $DllsToCheckForLocks =
                "$env:ProgramFiles\Caradigm Platform\System 3.0\Platform\PowerShell\Caradigm.Platform.Powershell.dll",
                "$env:ProgramFiles\Caradigm Platform\System 3.0\Platform\PowerShell\Caradigm.Platform.Powershell.InternalPlatformSetup.dll";

            # Assume true, then check all process dependencies.
            $result = $true;

            # Iterate through each process and check module dependencies.
            foreach ($p in Get-Process)
            {
                # Iterate through each dll used in a given process.
                foreach ($m in Get-Process -Name $p.ProcessName -Module -ErrorAction SilentlyContinue)
                {
                    # Check if the dll dependency matches any DLLs in the list.
                    foreach ($dll in $DllsToCheckForLocks)
                    {
                        # Compare the fully-qualified file paths;
                        # if there's a match then a lock exists.
                        if ( ($m.FileName.CompareTo($dll) -eq 0) )
                        {
                            $pName = $p.ProcessName.ToString()
                            Write-Error "$dll is locked by $pName. This dll must have zero locks prior to upgrade. Stop this service to release the lock on $dll."
                            $result = $false;
                        }
                    }
                }
            }

            return $result;
        }


  • Custom Controls Properties - C#, Forms - :(

    - by user353600
    Hi. I'm adding a custom control to my FlowLayoutPanel. It shows a sort of forex data that refreshes every second, so on each timer tick I create a control, set the control's button text, then add it to the FlowLayoutPanel. I'm doing this on every 100 ms timer tick, and it's taking too much CPU. Here is my custom control:

        public partial class UserControl1 : UserControl
        {
            public UserControl1()
            {
                InitializeComponent();
            }

            private void UserControl1_Load(object sender, EventArgs e)
            {
            }

            public void displaydata(string name, string back3price, string back3,
                                    string back2price, string back2,
                                    string back1price, string back1,
                                    string lay3price, string lay3,
                                    string lay2price, string lay2,
                                    string lay1price, string lay1)
            {
                lblrunnerName.Text = name.ToString();
                btnback3.Text = back3.ToString() + "\n" + back3price.ToString();
                btnback2.Text = back2.ToString() + "\n" + back2price.ToString();
                btnback1.Text = back1.ToString() + "\n" + back1price.ToString();
                btnlay1.Text = lay1.ToString() + "\n" + lay1price.ToString();
                btnlay2.Text = lay2.ToString() + "\n" + lay2price.ToString();
                btnlay3.Text = lay3.ToString() + "\n" + lay3price.ToString();
            }
        }

    And here is how I'm adding the controls:

        private void timer1_Tick(object sender, EventArgs e)
        {
            localhost.marketData[] md;
            md = ser.getM1();
            flowLayoutPanel1.Controls.Clear();
            foreach (localhost.marketData item in md)
            {
                UserControl1 ur = new UserControl1();
                ur.Name = item.runnerName + item.runnerID;
                ur.displaydata(item.runnerName, item.back3price, item.back3,
                               item.back2price, item.back2, item.back1price, item.back1,
                               item.lay3price, item.lay3, item.lay2price, item.lay2,
                               item.lay1price, item.lay1);
                flowLayoutPanel1.SuspendLayout();
                flowLayoutPanel1.Controls.Add(ur);
                flowLayoutPanel1.ResumeLayout();
            }
        }

    Right now this happens about 10 times on each send, taking 60% of my Core2Duo CPU. Is there another way, where I can add the controls just the first time and then only change the text of the custom controls' buttons at runtime on each refresh or timer tick? I'm using C# .NET.
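
    A sketch of the "add once, update afterwards" approach the question asks about, reusing the names from the code above (the Dictionary cache and key choice are my assumptions): controls are created and added on the first tick only; later ticks just call displaydata, so there is no Clear/Add churn and far less repainting.

        // Field on the form: cache of controls keyed by runner.
        private readonly Dictionary<string, UserControl1> controlCache =
            new Dictionary<string, UserControl1>();

        private void timer1_Tick(object sender, EventArgs e)
        {
            localhost.marketData[] md = ser.getM1();
            flowLayoutPanel1.SuspendLayout();
            foreach (localhost.marketData item in md)
            {
                string key = item.runnerName + item.runnerID;
                UserControl1 ur;
                if (!controlCache.TryGetValue(key, out ur))
                {
                    ur = new UserControl1();
                    ur.Name = key;
                    controlCache[key] = ur;
                    flowLayoutPanel1.Controls.Add(ur); // created and added only once
                }
                ur.displaydata(item.runnerName, item.back3price, item.back3,
                               item.back2price, item.back2, item.back1price, item.back1,
                               item.lay3price, item.lay3, item.lay2price, item.lay2,
                               item.lay1price, item.lay1);
            }
            flowLayoutPanel1.ResumeLayout();
        }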


  • Is there a reason why SSIS significantly slows down after a few minutes?

    - by Mark
    I'm running a fairly substantial SSIS package against SQL 2008, and I'm getting the same results both in my dev environment (Win7 x64 + SQL x64 Developer) and the production environment (Server 2008 x64 + SQL Std x64).

    The symptom is that initial data loading screams along at between 50K and 500K records per second, but after a few minutes the speed drops off dramatically and eventually crawls embarrassingly slowly. The database is in the Simple recovery model, the target tables are empty, and all of the prerequisites for minimally logged bulk inserts are being met. The data flow is a simple load from a RAW input file to a schema-matched table (i.e. no complex transforms of data, no sorting, no lookups, no SCDs, etc.).

    The problem has the following qualities and resiliences:

    - The problem persists no matter what the target table is.
    - RAM usage is lowish (45%) - there's plenty of spare RAM available for SSIS buffers or SQL Server to use.
    - Perfmon shows buffers are not spooling, disk response times are normal, and disk availability is high.
    - CPU usage is low (hovers around 25%, shared between sqlserver.exe and DtsDebugHost.exe).
    - Disk activity is primarily on TempDB.mdf, but I/O is very low (< 600 KB/s).
    - The OLE DB destination and the SQL Server destination both exhibit this problem.

    To sum it up, I would expect disk, CPU, or RAM to be exhausted before the package slows down, but instead it's as if the SSIS package is taking an afternoon nap. SQL Server remains responsive to other queries, and I can't find any performance counters or logged events that betray the cause of the problem. I'll gratefully reward any reasonable answers / suggestions.


  • Do new Apple SDKs patch previous releases?

    - by Francisco Garcia
    A new iPhone will soon be out there, along with a new iOS release. Sooner or later there will also be an Xcode upgrade with the SDK for iOS 6.

    Does Apple do any type of bugfix on previous SDKs, or are bugfixes only shipped in new releases? As an example: Core Data with iCloud still has some issues, but it is getting better over time. Let's say I have an app that really depends on that combo. I would require iOS 6; however, not all users upgrade their handsets. Ideally, an app compiled with a newer Xcode release could patch some error in previous SDKs if the target is set to an older iOS release.

    Should I expect a project compiled with future SDK releases to work better on devices running older iOS versions? Will some SDK bugfixes be backported? I understand that there are some bugs that cannot be fixed without an iOS update on the client, and that it is a lot of work (and unlikely) to backport bugfixes. I am just wondering what Apple's usual release policy is.


  • NHibernate will not insert a record

    - by Brian Beckham
    I have an application that is now 4+ years old and is exhibiting some odd behavior on our latest deployment. The application uses NHibernate for all inserts, updates, selects, etc. We are currently using .NET 2.0 and NHibernate 1.2 (I know, we need to upgrade).

    This deployment is on Windows Server 2008 x64 with IIS 7.5. What I have seen so far is that the application runs but is unable to insert or update records in the DB - reads seem fine so far, but writes are a problem. SOME writes actually work (inserts into some small tables), but most never even make it to the DB. Using SQL Profiler, the inserts / updates never make it to the server, and with log4net turned up to DEBUG and show_sql true, the select statements appear but the insert / update statements never make it into the log at all and never show up at the server.

    What's even more odd is that the application seems to be oblivious to this: the command-and-close runs without exception (open session in view with an HttpModule), the domain objects come back with UUIDs generated, etc., but they never get persisted.

    Certainly an upgrade is due, but I would hate to try it during a deployment and without time to accurately test the app. Any ideas?
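
    One classic culprit in open-session-in-view setups (offered as a hedged guess, not a confirmed diagnosis for this case): writes are queued in the session but nothing ever flushes them, because no transaction is committed on that code path. Making the unit of work explicit rules this out; sessionFactory and entity below are placeholders:

        using NHibernate;

        public static class UnitOfWork
        {
            public static void Save(ISessionFactory sessionFactory, object entity)
            {
                using (ISession session = sessionFactory.OpenSession())
                using (ITransaction tx = session.BeginTransaction())
                {
                    session.SaveOrUpdate(entity);
                    tx.Commit(); // flushes pending SQL; without it inserts can sit in the session
                }
            }
        }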


  • What OpenGL functions are not GPU accelerated?

    - by Xavier Ho
    I was shocked when I read this (from the OpenGL wiki):

        glTranslate, glRotate, glScale

        Are these hardware accelerated?

        No, there are no known GPUs that execute this. The driver computes the
        matrix on the CPU and uploads it to the GPU. All the other matrix
        operations are done on the CPU as well: glPushMatrix, glPopMatrix,
        glLoadIdentity, glFrustum, glOrtho. This is the reason why these
        functions are considered deprecated in GL 3.0. You should have your own
        math library, build your own matrix, upload your matrix to the shader.

    For a very, very long time I thought most of the OpenGL functions use the GPU to do computation. I'm not sure if this is a common misconception, but after a while of thinking, this makes sense. Old OpenGL functions (2.x and older) are really not suitable for real-world applications, due to too many state switches. This makes me realise that, possibly, many OpenGL functions do not use the GPU at all.

    So, the question is: which OpenGL function calls don't use the GPU?

    I believe knowing the answer to the above question would help me become a better programmer with OpenGL. Please do share some of your insights.


  • Boost threading/mutexes, why does this work?

    - by Flamewires
    Code:

        #include <iostream>
        #include "stdafx.h"
        #include <boost/thread.hpp>
        #include <boost/thread/mutex.hpp>

        using namespace std;

        boost::mutex mut;
        double results[10];

        void doubler(int x)
        {
            //boost::mutex::scoped_lock lck(mut);
            results[x] = x*2;
        }

        int _tmain(int argc, _TCHAR* argv[])
        {
            boost::thread_group thds;
            for (int x = 10; x > 0; x--)
            {
                boost::thread *Thread = new boost::thread(&doubler, x);
                thds.add_thread(Thread);
            }
            thds.join_all();

            for (int x = 0; x < 10; x++)
            {
                cout << results[x] << endl;
            }
            return 0;
        }

    Output:

        0
        2
        4
        6
        8
        10
        12
        14
        16
        18
        Press any key to continue . . .

    So... my question is why does this work (as far as I can tell; I ran it about 20 times), producing the above output, even with the locking commented out? I thought the general idea was, in each thread:

    - calculate 2*x
    - copy the result to CPU register(s)
    - store the calculation in the correct part of the array
    - copy the result back to main (shared) memory

    I would think that under all but perfect conditions this would result in some part of the results array having 0 values. Is each thread only copying the required double of the array to a CPU register? Or is the calculation just too short to get preempted before it writes the result back to RAM? Thanks.


  • What about parallelism across network using multiple PCs?

    - by MainMa
    Parallel computing is used more and more, and new framework features and shortcuts make it easier to use (for example the Parallel Extensions that are directly available in .NET 4). Now what about parallelism across a network? I mean an abstraction of everything related to communication, creation of processes on remote machines, etc. Something like this, in C#:

        NetworkParallel.ForEach(myEnumerable, () =>
        {
            // Computing and/or access to web resource or local network database here
        });

    I understand that it is very different from multi-core parallelism. The two most obvious differences would probably be:

    - Such a parallel task would be limited to computing, without being able, for example, to use files stored locally (but why not a database?), or even to use local variables, because it would rather be two distinct applications than two threads of the same application.
    - The very specific implementation, requiring not just a separate thread (which is quite easy), but spawning a process on different machines and then communicating with them over the local network.

    Despite those differences, such parallelism is quite possible, even without speaking about distributed architecture. Do you think it will be implemented in a few years? Do you agree that it would enable developers to easily develop extremely powerful stuff with much less pain?

    Example: think about a business application which extracts data from the database, transforms it, and displays statistics. Let's say this application takes ten seconds to load data, twenty seconds to transform data, and ten seconds to build charts on a single machine in a company, using all the CPU, whereas ten other machines are used at 5% CPU most of the time. In such a case, every action might be done in parallel, resulting in probably six to ten seconds for the overall process instead of forty.


  • Recommendations for IPC between parent and child processes in .NET?

    - by Jeremy
    My .NET program needs to run an algorithm that makes heavy use of 3rd-party libraries (32-bit), most of which are unmanaged code. I want to drive the CPU as hard as I can, so the code runs several threads in parallel to divide up the work.

    I find that running all these threads simultaneously results in temporary memory spikes, causing the process's virtual memory size to approach the 2 GB limit. This memory is released back pretty quickly, but occasionally, if enough threads enter the wrong sections of code at once, the process crosses the "red line" and either the unmanaged code or the .NET code encounters an out-of-memory error. I can throttle back the number of threads, but then my CPU usage is not as high as I would like.

    I am thinking of creating worker processes rather than worker threads to help avoid the out-of-memory errors, since doing so would give each thread of execution its own 2 GB of virtual address space (my box has lots of RAM). I am wondering what the best/easiest methods are to communicate the input and output between the processes in .NET. The file system is an obvious choice. I am used to shared memory, named pipes, and such from my UNIX background. Is there a Windows- or .NET-specific mechanism I should use?
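
    Anonymous pipes fit the parent/child case well; System.IO.Pipes ships with .NET 3.5 and later. A hedged sketch (Worker.exe and the single string message are placeholders for your real worker and serialized inputs):

        using System;
        using System.Diagnostics;
        using System.IO;
        using System.IO.Pipes;

        class Parent
        {
            static void Main()
            {
                using (var pipe = new AnonymousPipeServerStream(
                    PipeDirection.Out, HandleInheritability.Inheritable))
                {
                    var child = new Process();
                    child.StartInfo.FileName = "Worker.exe"; // hypothetical 32-bit worker
                    // Hand the child its end of the pipe on the command line; the child
                    // wraps it with new AnonymousPipeClientStream(PipeDirection.In, args[0]).
                    child.StartInfo.Arguments = pipe.GetClientHandleAsString();
                    child.StartInfo.UseShellExecute = false;
                    child.Start();
                    pipe.DisposeLocalCopyOfClientHandle();

                    using (var writer = new StreamWriter(pipe))
                    {
                        writer.AutoFlush = true;
                        writer.WriteLine("work-item-1"); // serialize real inputs/outputs as needed
                    }
                    child.WaitForExit();
                }
            }
        }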


  • What's the most trivial function that would benefit from being computed on a GPU?

    - by hanDerPeder
    Hi. I'm just starting out learning OpenCL. I'm trying to get a feel for what performance gains to expect when moving functions/algorithms to the GPU. The most basic kernel given in most tutorials is one that takes two arrays of numbers, sums the values at the corresponding indexes, and writes them to a third array, like so:

        __kernel void add(__global float *a, __global float *b, __global float *answer)
        {
            int gid = get_global_id(0);
            answer[gid] = a[gid] + b[gid];
        }

        __kernel void sub(__global float *n, __global float *answer)
        {
            int gid = get_global_id(0);
            answer[gid] = n[gid] - 2;
        }

        __kernel void ranksort(__global const float *a, __global float *answer)
        {
            int gid = get_global_id(0);
            int gSize = get_global_size(0);
            int x = 0;
            for (int i = 0; i < gSize; i++)
            {
                if (a[gid] > a[i])
                    x++;
            }
            answer[x] = a[gid];
        }

    I am assuming that you could never justify computing this on the GPU; the memory transfer would outweigh the time it takes to compute this on the CPU by orders of magnitude (I might be wrong about this, hence this question). What I am wondering is: what would be the most trivial example where you would expect a significant speedup when using an OpenCL kernel instead of the CPU?


  • Why can't I open a JBoss vfs:/ URL?

    - by skiphoppy
    We are upgrading our application from JBoss 4 to JBoss 6. A couple of pieces of our application get delivered to the client in an unusual way: jars are looked up inside of our application and sent to the client from a servlet, where the client extracts them in order to run certain support functions.

    In JBoss 4 we would look these jars up with the classloader and find a jar:// URL, which would be used to read the jar and send its contents to the client. In JBoss 6, when we perform the lookup, we get a vfs:/ URL. I understand that this comes from the org.jboss.vfs package. Unfortunately, when I call openStream() on this URL and read from the stream, I immediately get an EOF (read() returns -1).

    What gives? Why can't I read the resource this URL refers to?

    I've tried accessing the underlying VFS packages to open the file through the JBoss VFS API, but most of the API appears to be private, and I couldn't find a routine to translate from a vfs:/ URL to a VFS VirtualFile object, so I couldn't get anywhere. I could try to find the file on disk within JBoss, but that approach sounds very failure-prone across upgrades.

    Our old approach was to use Java Web Start to distribute the jars to the client and then look them up within Java Web Start's cache to extract them, but that broke on every minor upgrade of Java because the layout of the cache changed.


  • Team City + Gallio runs tests, but results are not shown

    - by Twindagger
    We recently updated to Visual Studio 2010, and as part of our upgrade we started using Gallio 3.2 prerelease builds. Everything runs fine in Visual Studio (through ReSharper), but I'm having problems with TeamCity integration. The tests seem to run during TeamCity builds just fine (our build takes long enough to run all our tests), but the tests are not showing up in TeamCity's test area.

    Here is the test target from our NAnt build file (this hasn't changed in our upgrade at all). Is there a trick to getting the tests to show up in TeamCity, or is this something that's broken in the latest builds of Gallio?

        <target name="runTests">
            <gallio result-property="exitCode" failonerror="false">
                <runner-extension value="TeamCityExtension,Gallio.TeamCityIntegration" />
                <assemblies>
                    <include name="..\Source\Tests\${testProject}\bin\Debug\${testProject}.dll" />
                </assemblies>
            </gallio>
        </target>


  • issue with keychain and updates from app store

    - by xonegirlz
    I am having a very frustrating error. I tested an app upgrade by first installing the previous version (1.0.1) and then running version 1.0.2, and everything worked fine. I submitted the app, and now people are getting crashes when they upgrade. I tried doing the same thing - installing 1.0.1 and then installing the binary from the App Store - and it crashed. I looked at the console and crash logs and I get this:

        Jul 7 08:07:45 unknown MyApp[1429] <Warning>: KeychainUtils keychainValueForKey: - Error finding keychain value for key. Status code = -25300
        Jul 7 08:07:45 unknown MyApp[1429] <Warning>: AccountSession readUserDataFromDisk - Error finding keychain value for key /var/mobile/Applications/997B32E7-6FFC-4696-9CAA-129BADE2FE64/Documents/instagram_json
        Jul 7 08:07:45 unknown MyApp[1429] <Warning>: UISegmentedControlStyleBezeled is deprecated. Please use a different style.
        Jul 7 08:07:45 unknown MyApp[1429] <Error>: *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[__NSCFDictionary setObject:forKey:]: attempt to insert nil value (key: username)'
        *** First throw call stack:
        (0x33ee688f 0x367e7259 0x33ee6789 0x33ee67ab 0x33e5368b 0x14fd99 0x152319 0x1530bb 0x170299 0x3270ec59 0x32711817 0x354e7dfb 0x354e7cd0)
        Jul 7 08:07:45 unknown UIKitApplication:com.firesnakelabs.pinstagram[0x14e4][1429] <Notice>: terminate called throwing an exception


  • How to speed up drawing of scaled image? Audio playback chokes during window resize.

    - by Paperflyer
    I am writing an audio player for OS X. One view is a custom view that displays a waveform. The waveform is stored as an instance variable of type NSImage with an NSBitmapImageRep. The view also displays a progress indicator (a thick red line) and is therefore updated/redrawn every 30 milliseconds.

    Since it takes a rather long time to recalculate the image, I do that in a background thread after every window resize and update the displayed image once the new image is ready. In the meantime, the original image is scaled to fit the view like this:

        // The drawing rectangle is slightly smaller than the view, defined by
        // the two margins.
        NSRect drawingRect;
        drawingRect.origin = NSMakePoint(sideEdgeMarginWidth, topEdgeMarginHeight);
        drawingRect.size = NSMakeSize([self bounds].size.width - 2*sideEdgeMarginWidth,
                                      [self bounds].size.height - 2*topEdgeMarginHeight);
        [waveform drawInRect:drawingRect
                    fromRect:NSZeroRect
                   operation:NSCompositeSourceOver
                    fraction:1];

    The view makes up the biggest part of the window. During live resize, audio starts choking. Selecting the "big" graphics card on my MacBook Pro makes it less bad, but not by much. CPU utilization is somewhere around 20-40% during live resizes. Instruments suggests that rescaling/redrawing of the image is the problem. Once I stop resizing the window, CPU utilization goes down and audio stops glitching.

    I already tried to disable image interpolation to speed up the drawing, like this:

        [[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationNone];

    That helps, but audio still chokes during live resizes. Do you have an idea how to improve this? The main thing is to prevent the audio from choking.


  • Database tables with dynamic information

    - by Tim Fennis
    I've googled this and found that it's almost impossible to create a database with dynamic columns. I'll explain my problem first.

    I am making a webshop for a customer. It has multiple computer products for sale: CPUs, HDDs, RAM, etc. All these products have different properties: a CPU has an FSB, RAM has a CAS latency, and so on. This is very inconvenient, because my orders table would need foreign keys to different tables, which is impossible.

    Another option is to store all the product-specific information in a varchar or blob field and let PHP figure it out. The problem with this solution is that the website needs a PC builder: a step-by-step guide to building your PC. So, for instance, if a customer decides he wants a new "i7 920" or whatever, I want to be able to select all motherboards for socket 1366, which is impossible because all the data is stored in one field. I know it's possible to select all motherboards from the DB and let PHP figure out which ones are for socket 1366, but I was wondering: is there a better solution?

