Search Results

Search found 20904 results on 837 pages for 'disk performance'.


  • .NET Code Evolution

    - by Alois Kraus
    Originally posted on: http://geekswithblogs.net/akraus1/archive/2013/07/24/153504.aspx

    At my day job I look at a lot of code written by other people. Most of the code is quite good and some is even a masterpiece. And there is also code which makes you think WTF… oh, it was written by me. Hm, not so bad after all. There are many excuses (or rather, reasons) for bad code. Most often it is time pressure, followed by not enough ambition (who cares) or insufficient training. Normally I care about code quality quite a lot, which makes me a (perceived) slow worker who writes many tests and refines the code quite a lot because of its design deficiencies. Most of the deficiencies I find by putting my design under stress while checking for invariants. It also helps a lot to step into the code with a debugger (sometimes also Windbg). I do this much more often when my tests are red. That way I get a much better understanding of what my code really does, and not of what I think it should be doing.

    This time I want to show you how code can evolve over the years with different .NET Framework versions. Once there was a time when .NET 1.1 was new and many C++ programmers switched over to get rid of uninitialized pointers and memory leaks. There were also nice new data structures available, such as the Hashtable, a fast lookup table with O(1) time complexity. All was good and much code was written since then. In 2005 a new version of the .NET Framework arrived which brought many new things like generics and new data structures. The "old-fashioned" Hashtable was coming to an end and everyone used the new Dictionary<xx,xx> type instead, which was type safe and faster because the object-to-type conversion (aka boxing) was no longer necessary. I think 95% of all Hashtables and dictionaries use string as key. Often it is convenient to ignore casing to make it easy to look up values which the user entered. An often followed route is to convert the string to upper case before putting it into the Hashtable.

        Hashtable Table = new Hashtable();

        void Add(string key, string value)
        {
            Table.Add(key.ToUpper(), value);
        }

    This is valid and working code, but it has problems. First, we can pass the Hashtable a custom IEqualityComparer to do the string matching case-insensitively. Second, we can switch over to the (by now also old) Dictionary type to become a little faster, and we can keep the original keys (not upper cased) in the dictionary.

        Dictionary<string, string> DictTable = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

        void AddDict(string key, string value)
        {
            DictTable.Add(key, value);
        }

    Many people do not use the other ctors of Dictionary because they shy away from the overhead of writing their own comparer. They do not know that .NET already has predefined comparers for strings at hand which you can use directly.

    Today, in the many-core era, we use threads all over the place. Sometimes things break in subtle ways, but most of the time it is sufficient to place a lock around the offender. Threading has become so mainstream that it may sound weird that in the year 2000 some guy got a huge incentive for the idea to reduce the time to process calibration data from 12 hours to 6 hours by using two threads on a dual-core machine. Threading makes it easy to become faster at the expense of correctness. Correct and scalable multithreading can be arbitrarily hard to achieve, depending on the problem you are trying to solve.
    Let's suppose we want to process millions of items with two threads and count the items processed by all threads. A typical beginner's code might look like this:

        int Counter;

        void IJustLearnedToUseThreads()
        {
            var t1 = new Thread(ThreadWorkMethod);
            t1.Start();
            var t2 = new Thread(ThreadWorkMethod);
            t2.Start();

            t1.Join();
            t2.Join();

            if (Counter != 2 * Increments)
                throw new Exception("Hmm " + Counter + " != " + 2 * Increments);
        }

        const int Increments = 10 * 1000 * 1000;

        void ThreadWorkMethod()
        {
            for (int i = 0; i < Increments; i++)
            {
                Counter++;
            }
        }

    It throws an exception with a message like "Hmm 10.222.287 != 20.000.000" and never finishes with the correct result. The code fails because the assumption that Counter++ is an atomic operation is wrong. The ++ operator is just a shortcut for

        Counter = Counter + 1

    This involves reading the counter from a memory location into the CPU, incrementing the value on the CPU, and writing the new value back to the memory location. When we look at the generated assembly code we will see only

        inc dword ptr [ecx+10h]

    which is only one instruction. Yes, it is one instruction, but it is not atomic. All modern CPUs have several layers of caches (L1, L2, L3) which try to hide how slow actual main memory accesses are. Since cache is just another word for redundant copy, it can happen that one CPU reads a value from main memory into the cache, modifies it, and writes it back to main memory. The problem is that at least the L1 cache is not shared between CPUs, so it can happen that one CPU makes changes to values which have changed in the meantime in main memory. From the exception you can see that we did increment the value 20 million times, but half of the changes were lost because we overwrote the value already changed by the other thread. This is a very common case, and people learn to protect their data with proper locking.

        void Intermediate()
        {
            var time = Stopwatch.StartNew();
            Action acc = ThreadWorkMethod_Intermediate;
            var ar1 = acc.BeginInvoke(null, null);
            var ar2 = acc.BeginInvoke(null, null);
            ar1.AsyncWaitHandle.WaitOne();
            ar2.AsyncWaitHandle.WaitOne();

            if (Counter != 2 * Increments)
                throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

            Console.WriteLine("Intermediate did take: {0:F1}s", time.Elapsed.TotalSeconds);
        }

        void ThreadWorkMethod_Intermediate()
        {
            for (int i = 0; i < Increments; i++)
            {
                lock (this)
                {
                    Counter++;
                }
            }
        }

    This is better and uses the .NET thread pool to get rid of manual thread management. It gives the expected result, but it can result in deadlocks because you lock on this. This is in general a bad idea, since it can lead to deadlocks when other threads use your class instance as a lock object. It is therefore recommended to create a private object as lock object to ensure that nobody else can lock your lock object (a small sketch of this pattern follows below). When you read more about threading you will read about lock-free algorithms. They are nice and can improve performance quite a lot, but you need to pay close attention to the CLR memory model. It makes quite weak guarantees in general, but code can still work because your CPU architecture gives you more invariants than the CLR memory model. For a simple counter there is an easy lock-free alternative present with the Interlocked class in .NET. As a general rule you should not try to write lock-free algos, since most likely you will fail to get it right on all CPU architectures.
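    Before moving on to the Interlocked version, here is a minimal sketch (not part of the original article) of the private-lock-object recommendation above, applied to the same counter:

        // Sketch of the recommended pattern: lock on a private object instead of "this".
        private readonly object _counterLock = new object();

        void ThreadWorkMethod_PrivateLock()
        {
            for (int i = 0; i < Increments; i++)
            {
                lock (_counterLock) // no code outside this class can ever take this lock
                {
                    Counter++;
                }
            }
        }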
        void Experienced()
        {
            var time = Stopwatch.StartNew();
            Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
            Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
            t1.Wait();
            t2.Wait();

            if (Counter != 2 * Increments)
                throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

            Console.WriteLine("Experienced did take: {0:F1}s", time.Elapsed.TotalSeconds);
        }

        void ThreadWorkMethod_Experienced()
        {
            for (int i = 0; i < Increments; i++)
            {
                Interlocked.Increment(ref Counter);
            }
        }

    Since time moves forward, we do not use threads explicitly anymore but the much nicer Task abstraction, which was introduced with .NET 4 in 2010. It is educational to look at the generated assembly code. The Interlocked.Increment method must be called, which does wondrous things, right? Let's see:

        lock inc dword ptr [eax]

    The first thing to note is that there is no method call at all. Why? Because the JIT compiler knows very well about CPU intrinsic functions. Atomic operations which lock the memory bus to prevent other processors from reading stale values are such things. Second: this is the same increment call prefixed with a lock instruction. The only reason for the existence of the Interlocked class is that the JIT compiler can compile it to the matching CPU intrinsic functions, which can not only increment by one but can also do an add, an exchange, and a combined compare-and-exchange operation. But be warned that the correct usage of its methods can be tricky. If you try to be clever, look at the generated IL code and try to reason about its efficiency, you will fail. Only the generated machine code counts.

    Is this the best code we can write? Perhaps. It is nice and clean. But can we make it any faster? Let's see how well we are doing currently.

        Level                      Time in s
        IJustLearnedToUseThreads   Flawed Code
        Intermediate               1,5 (lock)
        Experienced                0,3 (Interlocked.Increment)
        Master                     0,1 (1,0 for int[2])

    That lock-free thing is really nice. But if you read more about CPU caches, cache coherency and false sharing, you can do even better.

        int[] Counters = new int[12]; // Cache line size is 64 bytes on my machine with an 8 way associative cache - try for yourself, e.g. 64 on more modern CPUs

        void Master()
        {
            var time = Stopwatch.StartNew();
            Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Master, 0);
            Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Master, Counters.Length - 1);
            t1.Wait();
            t2.Wait();

            Counter = Counters[0] + Counters[Counters.Length - 1];

            if (Counter != 2 * Increments)
                throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

            Console.WriteLine("Master did take: {0:F1}s", time.Elapsed.TotalSeconds);
        }

        void ThreadWorkMethod_Master(object number)
        {
            int index = (int) number;
            for (int i = 0; i < Increments; i++)
            {
                Counters[index]++;
            }
        }

    The key insight here is to give each core its own value. But if you simply use an integer array of two items, one for each core, and add the items at the end, you will be much slower than the lock-free version (factor 3). Each CPU core has its own cache line size, which is something in the range of 16-256 bytes. When you access a value from one location the CPU does not fetch only that one value from main memory but a complete cache line (e.g. 16 bytes). This means that you do not pay for the next 15 bytes when you access them. This can lead to dramatic performance improvements and to non-obvious code which is faster although it has many more memory reads than another algorithm. So what have we done here?
    We started with correct code, but it lacked knowledge of how to use the .NET Base Class Libraries optimally. Then we tried to get fancy, used threads for the first time, and failed. Our next try was better, but it still had non-obvious issues (the lock object was exposed to the outside). Knowledge increased further and we found a lock-free version of our counter, which is nice, clean, and a perfectly valid solution. The last example is only here to show you how you can get the most out of threading by paying close attention to your data structures and to CPU cache coherency. Although we are working in a virtual execution environment, in a high-level language with automatic memory management, it pays off to know the details down to the assembly level. Only if you continue to learn and to dig deeper can you come up with solutions no one else was even considering. I have studied particle physics, which helps with the digging-deeper part. Have you ever tried to solve Quantum Chromodynamics equations? Compared to that, the rest must be easy ;-). Although I am no longer working in the science field, I take pride in discovering non-obvious things. This can be a very hard-to-find bug or a new way to restructure data to make something 10 times faster. Now I need to get some sleep ….


  • Increase Max Pool Size ERROR when using SYBASE ASE ADO.NET data provider

    - by Brani
    I have made a program in VB.net (visual studio 2003) that connects to a SYBASE ASE database using the ADO.NET data provider. Recently, after a hard disk failure, I restored the program's code from a (rather old) backup. But now the connection fails with a message that does not remind me of anything that I have seen before. Here is the code and the error message: Dim cn As New AseConnection("Data Source='my_server';Port='5000';UID='sa';PWD='my_pwd';Database='my_db';") cn.Open() Error message: Sybase.Data.AseClient.AseException - Cannot allocate more connections. Connection pool is at maximum. Increase Max Pool Size Can anybody help me?
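    A hedged note rather than a verified fix: the usual cause of "Connection pool is at maximum" is connections that are opened but never closed or disposed, so every call leaks one pooled connection until the pool fills up. The sketch below (in C#, although the question uses VB.NET) shows the pattern; the "Max Pool Size" keyword is assumed from the provider's own error text and should only be raised after the leaks are fixed.

        using Sybase.Data.AseClient;

        class AseDemo
        {
            const string ConnStr =
                "Data Source=my_server;Port=5000;UID=sa;PWD=my_pwd;Database=my_db;" +
                "Max Pool Size=100;";   // optional; fix connection leaks first

            static void RunQuery()
            {
                // Dispose (the end of the using block) returns the connection to the pool.
                using (AseConnection cn = new AseConnection(ConnStr))
                {
                    cn.Open();
                    // ... execute commands here ...
                }
            }
        }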


  • How to set MinWorkingSet and MaxWorkingSet in a 64-bit .NET process?

    - by Gravitas
    How do I set MinWorkingSet and MaxWorkingSet for a 64-bit .NET process? p.s. I can set the MinWorkingSet and MaxWorkingSet for a 32-bit process, as follows: [DllImport("KERNEL32.DLL", EntryPoint = "SetProcessWorkingSetSize", SetLastError = true, CallingConvention = CallingConvention.StdCall)] internal static extern bool SetProcessWorkingSetSize(IntPtr pProcess, int dwMinimumWorkingSetSize, int dwMaximumWorkingSetSize); [DllImport("KERNEL32.DLL", EntryPoint = "GetCurrentProcess", SetLastError = true, CallingConvention = CallingConvention.StdCall)] internal static extern IntPtr MyGetCurrentProcess(); // In main(): SetProcessWorkingSetSize(Process.GetCurrentProcess().Handle, int.MaxValue, int.MaxValue); Update: Unfortunately, even if we do make this call, garbage collection trims the working set down anyway, bypassing MinWorkingSet (see "Automatic GC.Collect()" in the diagram below). Question: Is there a way to lock the WorkingSet (the green line) to 1GB, to avoid the spike in page faults (the red lines) that occurs when allocating new memory in the process? p.s. Every time a page fault occurs, it blocks the thread for 250us, which hurts application performance badly.
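    For what it's worth, a hedged sketch of the 64-bit declaration: the native parameters are SIZE_T, so declaring them as IntPtr keeps the P/Invoke signature correct for both 32-bit and 64-bit processes. Whether this prevents the GC-related trimming described above is a separate question.

        using System;
        using System.Diagnostics;
        using System.Runtime.InteropServices;

        static class WorkingSet
        {
            [DllImport("kernel32.dll", SetLastError = true)]
            static extern bool SetProcessWorkingSetSize(IntPtr hProcess,
                                                        IntPtr dwMinimumWorkingSetSize,
                                                        IntPtr dwMaximumWorkingSetSize);

            public static void Set(long minBytes, long maxBytes)
            {
                IntPtr handle = Process.GetCurrentProcess().Handle;
                if (!SetProcessWorkingSetSize(handle, new IntPtr(minBytes), new IntPtr(maxBytes)))
                    throw new System.ComponentModel.Win32Exception(); // surfaces GetLastError
            }
        }

        // Usage: WorkingSet.Set(1L << 30, 1L << 30); // ask for a working set around 1 GB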


  • Problem running iPhone application on iPhone from Xcode (and in Instruments)

    - by Ben
    Hi, I have a problem running one application on the iPhone from Xcode (or Instruments). When I try to run the app I get the error message "Failed to upload XXX.app" in the bottom left corner of Xcode. The strange thing is it actually uploaded the app to the iPhone, but it doesn't start it (after this I can start the app by hand on the iPhone). So without being able to start the app from Xcode or Instruments I have no chance of debugging/performance testing. Any advice on what might be going wrong here? The iPhone console shows me this: Thu Oct 1 14:25:18 unknown mobile_installationd[1976] <Error>: 00808e00 install_embedded_profile: Skipping the installation of the embedded profile Thu Oct 1 14:25:23 unknown SpringBoard[25] <Warning>: Reloading and rendering all application icons. Other applications work fine. I've tried this on two iPhones (both 3.1) with the same result. I am running Xcode 3.2 on Snow Leopard. Regards


  • How to reduce iOS AVPlayer start delay

    - by Bernt Habermeier
    Note, for the question below: all assets are local on the device -- no network streaming is taking place. The videos contain audio tracks. I'm working on an iOS application that requires playing video files with minimum delay to start the video clip in question. Unfortunately we do not know which specific video clip is next until we actually need to start it up. Specifically: when one video clip is playing, we will know what the next set of (roughly) 10 video clips is, but we don't know which one exactly until it comes time to 'immediately' play the next clip.

    What I've done to look at actual start delays is to call addBoundaryTimeObserverForTimes on the video player, with a time period of one millisecond, to see when the video actually started to play, and I take the difference of that time stamp with the first place in the code that indicates which asset to start playing. From what I've seen thus far, I have found that using the combination of AVAsset loading, then creating an AVPlayerItem from that once it's ready, and then waiting for AVPlayerStatusReadyToPlay before I call play, tends to take between 1 and 3 seconds to start the clip. I've since switched to what I think is roughly equivalent: calling [AVPlayerItem playerItemWithURL:] and waiting for AVPlayerItemStatusReadyToPlay to play. Roughly the same performance.

    One thing I'm observing is that the first AVPlayer item load is slower than the rest. It seems that pre-flighting the AVPlayer with a short / empty asset before trying to play the first video might be good general practice. [http://stackoverflow.com/questions/900461/slow-start-for-avaudioplayer-the-first-time-a-sound-is-played]

    I'd love to get the video start times down as much as possible, and have some ideas of things to experiment with, but would like some guidance from anyone that might be able to help.

    Update: idea 7, below, as implemented yields switching times of around 500 ms. This is an improvement, but it'd be nice to get this even faster.

    Idea 1: Use N AVPlayers (won't work)
    Use ~10 AVPlayer objects and start-and-pause all ~10 clips, and once we know which one we really need, switch to and un-pause the correct AVPlayer, and start all over again for the next cycle. I don't think this works, because I've read there is roughly a limit of 4 active AVPlayers in iOS. There was someone asking about this on StackOverflow who found out about the 4 AVPlayer limit: fast-switching-between-videos-using-avfoundation

    Idea 2: Use AVQueuePlayer (won't work)
    I don't believe that shoving 10 AVPlayerItems into an AVQueuePlayer would pre-load them all for a seamless start. AVQueuePlayer is a queue, and I think it really only makes the next video in the queue ready for immediate playback. I don't know which one out of ~10 videos we do want to play back until it's time to start that one. ios-avplayer-video-preloading

    Idea 3: Load, Play, and retain AVPlayerItems in background (not 100% sure yet -- but not looking good)
    I'm looking at whether there is any benefit to loading and playing the first second of each video clip in the background (suppressing video and audio output), keeping a reference to each AVPlayerItem, and when we know which item needs to be played for real, swapping that one in and swapping the background AVPlayer with the active one. Rinse and repeat. The theory would be that recently played AVPlayer/AVPlayerItems may still hold some prepared resources which would make subsequent playback faster.
    So far, I have not seen benefits from this, but I might not have the AVPlayerLayer set up correctly for the background. I doubt this will really improve things, from what I've seen.

    Idea 4: Use a different file format -- maybe one that is faster to load?
    I'm currently using .m4v (video-MPEG4), H.264 format. I have not played around with other formats, but it may well be that some formats are faster to decode / get ready than others. Possibly still video-MPEG4 but with a different codec, or maybe QuickTime? Maybe a lossless video format where decoding / setup is faster?

    Idea 5: Combination of lossless video format + AVQueuePlayer
    If there is a video format that is fast to load, but maybe where the file size is insane, one idea might be to pre-prepare the first 10 seconds of each video clip with a version that is bloated but faster to load, and back that up with an asset that is encoded in H.264. Use an AVQueuePlayer, add the first 10 seconds in the uncompressed file format, and follow that up with one that is in H.264, which gets up to 10 seconds of prepare/preload time. So I'd get 'the best' of both worlds: fast start times, but also the benefits of a more compact format.

    Idea 6: Use a non-standard AVPlayer / write my own / use someone else's
    Given my needs, maybe I can't use AVPlayer, but have to resort to AVAssetReader, decode the first few seconds (possibly writing a raw file to disk), and when it comes to playback, make use of the raw format to play it back fast. Seems like a huge project to me, and if I go about it in a naive way, it's unclear / unlikely to even work better. Each decoded and uncompressed video frame is 2.25 MB. Naively speaking -- if we go with ~30 fps for the video, I'd end up with a ~60 MB/s read-from-disk requirement, which is probably impossible / pushing it. Obviously we'd have to do some level of image compression (perhaps native OpenGL ES compression formats via PVRTC)... but that's kind of crazy. Maybe there is a library out there that I can use?

    Idea 7: Combine everything into a single movie asset, and seekToTime
    One idea that might be easier than some of the above is to combine everything into a single movie and use seekToTime. The thing is that we'd be jumping all around the place. Essentially random access into the movie. I think this may actually work out okay: avplayer-movie-playing-lag-in-ios5

    Which approach do you think would be best? So far, I've not made that much progress in terms of reducing the lag.


  • ASP.Net Authentication with MVC2--how to integrate with DB?

    - by alchemical
    I'm trying to understand the authentication section sample project that opens in a new MVC2 project in VS2010. It essentially lets you register, login, etc. I looked through the code that implements this briefly, it looked fairly complicated. (10 tables, 40 sprocs, 10 views, 4 models, 1 model, 1 controller, etc.) Is it best to utilize this provided framework for authentication? If so, how would I integrate this with my own database models (which has user and role tables, etc.). Also, if I use their framework, are there any performance issues at higher traffic volumes (like SO for example), do I need to become responsible for maintaining the authentication DB as well in this case?
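    One hedged option, rather than adopting the template's full membership schema: validate credentials against your own user and role tables and issue the forms-authentication cookie directly, bypassing the provider. The sketch below assumes a hypothetical UserRepository over your existing schema.

        using System.Web.Mvc;
        using System.Web.Security;

        public class AccountController : Controller
        {
            readonly UserRepository _users = new UserRepository(); // hypothetical data access for your own tables

            [HttpPost]
            public ActionResult LogOn(string userName, string password, string returnUrl)
            {
                // Check the credentials against your own schema instead of the aspnet_* tables.
                if (_users.ValidateCredentials(userName, password))
                {
                    FormsAuthentication.SetAuthCookie(userName, false /* not persistent */);
                    return Redirect(returnUrl ?? FormsAuthentication.DefaultUrl);
                }
                ModelState.AddModelError("", "Invalid user name or password.");
                return View();
            }
        }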


  • Help with Design for Vacation Tracking System (C#/.NET/Access/WebServices/SOA/Excel) [closed]

    - by Aaronaught
    I have been tasked with developing a system for tracking our company's paid time-off (vacation, sick days, etc.) At the moment we are using an Excel spreadsheet on a shared network drive, and it works pretty well, but we are concerned that we won't be able to "trust" employees forever and sometimes we run into locking issues when two people try to open the spreadsheet at once. So we are trying to build something a little more robust. I would like some input on this design in terms of maintainability, scalability, extensibility, etc. It's a pretty simple workflow we need to represent right now: I started with a basic MS Access schema like this: Employees (EmpID int, EmpName varchar(50), AllowedDays int) Vacations (VacationID int, EmpID int, BeginDate datetime, EndDate datetime) But we don't want to spend a lot of time building a schema and database like this and have to change it later, so I think I am going to go with something that will be easier to expand through configuration. Right now the vacation table has this schema: Vacations (VacationID int, PropName varchar(50), PropValue varchar(50)) And the table will be populated with data like this: VacationID | PropName | PropValue -----------+--------------+------------------ 1 | EmpID | 4 1 | EmpName | James Jones 1 | Reason | Vacation 1 | BeginDate | 2/24/2010 1 | EndDate | 2/30/2010 1 | Destination | Spectate Swamp 2 | ... | ... I think this is a pretty good, extensible design, we can easily add new properties to the vacation like the destination or maybe approval status, etc. I wasn't too sure how to go about managing the database of valid properties, I thought of putting them in a separate PropNames table but it gets complicated to manage all the different data types and people say that you shouldn't put CLR type names into a SQL database, so I decided to use XML instead, here is the schema: <VacationProperties> <PropertyNames>EmpID,EmpName,Reason,BeginDate,EndDate,Destination</PropertyNames> <PropertyTypes>System.Int32,System.String,System.String,System.DateTime,System.DateTime,System.String</PropertyTypes> <PropertiesRequired>true,true,false,true,true,false</PropertiesRequired> </VacationProperties> I might need more fields than that, I'm not completely sure. 
I'm parsing the XML like this (would like some feedback on the parsing code): string xml = File.ReadAllText("properties.xml"); Match m = Regex.Match(xml, "<(PropertyNames)>(.*?)</PropertyNames>"; string[] pn = m.Value.Split(','); // do the same for PropertyTypes, PropertiesRequired Then I use the following code to persist configuration changes to the database: string sql = "DROP TABLE VacationProperties"; sql = sql + " CREATE TABLE VacationProperties "; sql = sql + "(PropertyName varchar(100), PropertyType varchar(100) "; sql = sql + "IsRequired varchar(100))"; for (int i = 0; i < pn.Length; i++) { sql = sql + " INSERT VacationProperties VALUES (" + pn[i] + "," + pt[i] + "," + pv[i] + ")"; } // GlobalConnection is a singleton new SqlCommand(sql, GlobalConnection.Instance).ExecuteReader(); So far so good, but after a few days of this I then realized that a lot of this was just a more specific kind of a generic workflow which could be further abstracted, and instead of writing all of this boilerplate plumbing code I could just come up with a workflow and plug it into a workflow engine like Windows Workflow Foundation and have the users configure it: In order to support routing these configurations throw the workflow system, it seemed natural to implement generic XML Web Services for this instead of just using an XML file as above. I've used this code to implement the Web Services: public class VacationConfigurationService : WebService { [WebMethod] public void UpdateConfiguration(string xml) { // Above code goes here } } Which was pretty easy, although I'm still working on a way to validate that XML against some kind of schema as there's no error-checking yet. I also created a few different services for other operations like VacationSubmissionService, VacationReportService, VacationDataService, VacationAuthenticationService, etc. The whole Service Oriented Architecture looks like this: And because the workflow itself might change, I have been working on a way to integrate the WF workflow system with MS Visio, which everybody at the office already knows how to use so they could make changes pretty easily. We have a diagram that looks like the following (it's kind of hard to read but the main items are Activities, Authenticators, Validators, Transformers, Processors, and Data Connections, they're all analogous to the services in the SOA diagram above). The requirements for this system are: (Note - I don't control these, they were given to me by management) Main workflow must interface with Excel spreadsheet, probably through VBA macros (to ease the transition to the new system) Alerts should integrate with MS Outlook, Lotus Notes, and SMS (text messages). We also want to interface it with the company Voice Mail system but that is not a "hard" requirement. Performance requirements: Must handle 250,000 Transactions Per Second Should be able to handle up to 20,000 employees (right now we have 3) 99.99% uptime ("four nines") expected Must be secure against outside hacking, but users cannot be required to enter a username/password. Platforms: Must support Windows XP/Vista/7, Linux, iPhone, Blackberry, DOS 2.0, VAX, IRIX, PDP-11, Apple IIc. Time to complete: 6 to 8 weeks. My questions are: Is this a good design for the system so far? Am I using all of the recommended best practices for these technologies? How do I integrate the Visio diagram above with the Windows Workflow Foundation to call the ConfigurationService and persist workflow changes? Am I missing any important components? 
    Will this be extensible enough to support any scenario via end-user configuration? Will the system scale to the above performance requirements? Will we need any expensive hardware to run it? Are there any "gotchas" I should know about with respect to cross-platform compatibility? For example, would it be difficult to convert this to an iPhone app? How long would you expect this to take? (We've dedicated 1 week for testing so I'm thinking maybe 5 weeks?) Many thanks for your advice, Aaron


  • how can I code a recursive query in an Entity Framework model?

    - by Greg
    Hi, I have a model which includes NODES and RELATIONSHIPS (that tie the nodes together, via a parent_node / child_node arrangement). Q1 - Is there any way in EF / Linq-to-entities to perform a query on nodes (e.g. context.Nodes..) to find say "all parents" or "all children" in the graph? Q2 - If there isn't in Linq-to-entities, is there any other way to do this other than writing a method that manually goes through and does it? Q3 - If manual is the only way to do it, should I be concerned about the number of database hits that will be going out to the database as the method keeps recursing through the data? Or, more specifically, is there any EF caching-type feature that might assist here in ensuring the method performs well from a "number of database hits" point of view? Thanks
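    If it does come down to the manual option from Q2/Q3, one hedged way to keep database traffic down is to load the relationship rows once and recurse over them in memory, so the recursion itself never goes back to the database. The entity and property names below (MyEntities, Relationships, ParentNodeId, ChildNodeId) are placeholders and must be adapted to the actual model.

        using System.Collections.Generic;
        using System.Linq;

        static class NodeGraph
        {
            // Returns the ids of all descendants of rootNodeId with a single database round trip.
            public static List<int> GetDescendantIds(MyEntities context, int rootNodeId)
            {
                var edges = context.Relationships
                                   .Select(r => new { r.ParentNodeId, r.ChildNodeId })
                                   .ToList()                                             // the one query
                                   .Select(e => new KeyValuePair<int, int>(e.ParentNodeId, e.ChildNodeId))
                                   .ToList();

                var result = new List<int>();
                Collect(edges, rootNodeId, result);
                return result;
            }

            static void Collect(List<KeyValuePair<int, int>> edges, int parentId, List<int> result)
            {
                foreach (var edge in edges.Where(e => e.Key == parentId))
                {
                    result.Add(edge.Value);
                    Collect(edges, edge.Value, result); // recurse over the in-memory list
                }
            }
        }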


  • virt-install virtual machine hangs on Probing EDD

    - by user2256235
    I'm creating a virtual machine with virt-install, and my command is:

        virt-install --name=gust --vcpus=4 --ram=8192 --network bridge:br0 --cdrom=/opt/rhel-server-6.2-x86_64-dvd.iso --disk path=/opt/as1/as1.img,size=50 --accelerate

    After running the command, it hangs on probing EDD:

        - Press the <ENTER> key to begin the installation process.
        +----------------------------------------------------------+
        | Welcome to Red Hat Enterprise Linux 6.2!                 |
        |----------------------------------------------------------|
        | Install or upgrade an existing system                    |
        | Install system with basic video driver                   |
        | Rescue installed system                                  |
        | Boot from local drive                                    |
        | Memory test                                              |
        +----------------------------------------------------------+
        Press [Tab] to edit options
        Automatic boot in 57 seconds...
        Loading vmlinuz...... Loading initrd.img...............................ready.
        Probing EDD (edd=off to disable)... ok

    I waited a long time, but it did not seem to make any progress, so I pressed Ctrl+] and stopped it. I can see it is running using virsh list, but I cannot connect to its console using virsh console gust. What is the problem, and what should I do? Many thanks


  • Software Engineering undergraduate project ideas

    - by Nasser Hajloo
    There were similar posts at:
    << Computer science undergraduate project ideas
    << Ideas for Software Engineering Thesis Project
    << Senior computer engineering project ideas?
    << Final Year Project (Software Engineering) Idea
    I read all of them, but none of the answers fit my situation. Actually I'm looking for ideas which:
    1 - Help me extend the functionality of open source software (like creating a useful add-in)
    2 - Let me create a scientific paper (ideas to publish a scientific paper)
    3 - Or create a unique and useful application from scratch (like a performance tool, profiler, analyzer or other similar tools)
    I know C#, ASP.NET and SQL. So with all these conditions, what do you think is better to do? Let me know your ideas, whatever they are. Any idea is appreciated.


  • ASP.NET MVC 2 disable cache for browser back button in partial views

    - by brainnovative
    I am using Html.RenderAction<CartController>(c => c.Show()); on my master page to display the cart for all pages. The problem is when I add an item to the cart and then hit the browser back button: it shows the old cart (from cache) until I hit the refresh button or navigate to another page. I've tried this and it works perfectly, but it disables the cache globally for the whole page and for all pages in my site (since this action method is used on the master page). I need to enable caching for several other partial views (action methods) for performance reasons. I wouldn't like to use client-side script with AJAX to refresh the cart (and login view) on page load - but that's the only solution I can think of right now. Does anyone know better?
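    A hedged alternative to disabling caching globally: an action filter that turns off browser caching only for the cart action, leaving the other cached partials alone. This is only a sketch; it controls the HTTP caching headers, not any server-side output caching you may have configured.

        using System;
        using System.Web;
        using System.Web.Mvc;

        public class NoBrowserCacheAttribute : ActionFilterAttribute
        {
            public override void OnResultExecuting(ResultExecutingContext filterContext)
            {
                HttpCachePolicyBase cache = filterContext.HttpContext.Response.Cache;
                cache.SetCacheability(HttpCacheability.NoCache); // Cache-Control: no-cache
                cache.SetNoStore();
                cache.SetExpires(DateTime.UtcNow.AddYears(-1));  // already expired
                base.OnResultExecuting(filterContext);
            }
        }

        // Applied only to the cart action:
        // [NoBrowserCache]
        // public ActionResult Show() { ... }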


  • Problem setting up Master-Master Replication in MySQL

    - by Andrew
    I am attempting to setup Master-Master Replication on two MySQL database servers. I have followed the steps in this guide, but it fails in the middle of Step 4 with SHOW MASTER STATUS; It simply returns an empty set. I get the same 3 errors in both servers' logs. MySQL errors on SQL1: [ERROR] Failed to open the relay log './sql1-relay-bin.000001' (relay_log_pos 4) [ERROR] Could not find target log during relay log initialization [ERROR] Failed to initialize the master info structure MySQL Errors on SQL2: [ERROR] Failed to open the relay log './sql2-relay-bin.000001' (relay_log_pos 4) [ERROR] Could not find target log during relay log initialization [ERROR] Failed to initialize the master info structure The errors make no sense because I'm not referencing those files in any of my configurations. I'm using Ubuntu Server 10.04 x64 and my configuration files are copied below. I don't know where to go from here or how to troubleshoot this. Please help. Thanks. /etc/mysql/my.cnf on SQL1: # # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. bind-address = <SQL1's IP> # # * Fine Tuning # key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #max_connections = 100 #table_cache = 64 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M query_cache_size = 16M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! #general_log_file = /var/log/mysql/mysql.log #general_log = 1 log_error = /var/log/mysql/error.log # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. 
server-id = 1 replicate-same-server-id = 0 auto-increment-increment = 2 auto-increment-offset = 1 master-host = <SQL2's IP> master-user = slave_user master-password = "slave_password" master-connect-retry = 60 replicate-do-db = db1 log-bin= /var/log/mysql/mysql-bin.log binlog-do-db = db1 binlog-ignore-db = mysql relay-log = /var/lib/mysql/slave-relay.log relay-log-index = /var/lib/mysql/slave-relay-log.index expire_logs_days = 10 max_binlog_size = 500M # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/ /etc/mysql/my.cnf on SQL2: # # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. bind-address = <SQL2's IP> # # * Fine Tuning # key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #max_connections = 100 #table_cache = 64 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M query_cache_size = 16M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! 
#general_log_file = /var/log/mysql/mysql.log #general_log = 1 log_error = /var/log/mysql/error.log # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. server-id = 2 replicate-same-server-id = 0 auto-increment-increment = 2 auto-increment-offset = 2 master-host = <SQL1's IP> master-user = slave_user master-password = "slave_password" master-connect-retry = 60 replicate-do-db = db1 log-bin= /var/log/mysql/mysql-bin.log binlog-do-db = db1 binlog-ignore-db = mysql relay-log = /var/lib/mysql/slave-relay.log relay-log-index = /var/lib/mysql/slave-relay-log.index expire_logs_days = 10 max_binlog_size = 500M # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/


  • CUDA Driver API vs. CUDA runtime

    - by Morten Christiansen
    When writing CUDA applications, you can either work at the driver level or at the runtime level, as illustrated on this image (the libraries are CUFFT and CUBLAS for advanced math): I assume the tradeoff between the two is increased performance for the low-level API, but at the cost of increased complexity of code. What are the concrete differences, and are there any significant things which you cannot do with the high-level API? I am using CUDA.net for interop with C# and it is built as a copy of the driver API. This encourages writing a lot of rather complex code in C#, while the C++ equivalent would be simpler using the runtime API. Is there anything to win by doing it this way? The one benefit I can see is that it is easier to integrate intelligent error handling with the rest of the C# code.


  • Is SiteCore slow and buggy?

    - by Larsenal
    I've seen plenty of negative comments regarding performance and general bugginess. However, to be fair, most of these look like they were within the v5.3 timeframe. Have they fixed all of those issues in v6.0? Is it an excellent product? Some examples of the complaints: Maybe it’s just a case of user error, but this guy says, “some pages take as much as 20s to render…” (Source) Here, in the comments, one fellow remarks, “Sitecore backend is incredible slow. Sitecore developement is really pain, it takes from 2 minutes to start sitecore and many many seconds to do small backend operations. They claim to have a quick client, but that is a BIG LIE. All developers in my company really hate sitecore for being so slow.” (Source) Another search yielded, “Sitecore’s users listed three issues as number one: licensing, the server as a resource hog, and the site’s slow responsiveness.” (Source)


  • ASP.NET AJAX ToolkitScriptManager Issue With Combining Scripts

    - by Kumar
    I have an ASP.NET 3.5 web application in which I am using the ToolkitScriptManager as below:

        <ajaxToolkit:ToolkitScriptManager ID="ToolkitScriptManager1" EnablePageMethods="true" ScriptMode="Release"
            LoadScriptsBeforeUI="false" runat="server" CombineScripts="false">
            <CompositeScript>
                <Scripts>
                    <asp:ScriptReference Path="~/JavaScript/jquery-1.4.1.min.js" />
                    <asp:ScriptReference Path="~/JavaScript/Validators.js" />
                </Scripts>
            </CompositeScript>
        </ajaxToolkit:ToolkitScriptManager>

    This works fine, but from a performance standpoint it is not good, as the pages make a lot of requests to the webresources.axd and scriptresource.axd files. When I changed the CombineScripts property to true, my ASP.NET AJAX control extenders stopped working. What is the reason for this weird behavior, and is there a fix for it?


  • JSR-299 CDI / Weld vs. Google Guice

    - by deamon
    Weld, the JSR-299 Contexts and Dependency Injection reference implementation, considers itself as a kind of successor of Spring and Guice. CDI was influenced by a number of existing Java frameworks, including Seam, Guice and Spring. However, CDI has its own, very distinct, character: more typesafe than Seam, more stateful and less XML-centric than Spring, more web and enterprise-application capable than Guice. But it couldn't have been any of these without inspiration from the frameworks mentioned and lots of collaboration and hard work by the JSR-299 Expert Group (EG). http://docs.jboss.org/weld/reference/latest/en-US/html/1.html What makes Weld more capable for enterprise application compared to Guice? Are there any advantages or disadvantages compared to Guice? What do you think about Guice AOP compared to Weld interceptors? What about performance?


  • Compiled Linq & String.Contains

    - by sharru
    I'm using LINQ to SQL, and I'm using compiled LINQ queries for better performance. I have a users table with an INT field called "LookingFor" that can have the following values: 1, 2, 3, 12, 123, 13, 23. I wrote a query to return users based on the "LookingFor" column; I want to return all users whose "LookingFor" value contains the query parameter (not only those equal to it). For example, if user.LookingFor = 12 and the query parameter is 1, this user should be selected.

        private static Func<NeDataContext, int, IQueryable<int>> MainSearchQuery =
            CompiledQuery.Compile((NeDataContext db, int lookingFor) =>
                (from u in db.Users
                 where (lookingFor == -1 ? true : u.LookingFor.ToString().Contains(lookingFor))
                 select u.username));

    This WORKS with non-compiled LINQ but throws an error when using a compiled query. How do I fix it to work with compiled LINQ? I get this error: Only arguments that can be evaluated on the client are supported for the String.Contains method.
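    A hedged sketch of one possible workaround, not a verified fix: pass the search value as a string and let LINQ to SQL translate the whole comparison on the server, for example with SqlMethods.Like (which maps to SQL LIKE). The NeDataContext, Users, LookingFor and username names are taken from the question.

        using System;
        using System.Data.Linq;
        using System.Data.Linq.SqlClient;
        using System.Linq;

        static class UserQueries
        {
            public static readonly Func<NeDataContext, string, IQueryable<string>> MainSearchQuery =
                CompiledQuery.Compile((NeDataContext db, string lookingFor) =>
                    from u in db.Users
                    where lookingFor == "-1"
                          || SqlMethods.Like(u.LookingFor.ToString(), "%" + lookingFor + "%")
                    select u.username);
        }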


  • Search algorithm (with a sort algorithm already implemented)

    - by msr
    Hello, I'm working on a Java application and I have some doubts concerning performance. I have a PriorityQueue which guarantees that the element removed is the one with the greatest priority. That PriorityQueue holds instances of class Event (which implements the Comparable interface). Each Event is associated with an Entity. The size of that PriorityQueue could be huge, and very frequently I will have to remove events associated with an entity. Right now I'm using an iterator to run over the whole PriorityQueue. However, I'm finding it heavy, and I wonder if there are better alternatives to search for and remove events associated with an entity "xpto". Any suggestions? Thanks!


  • SharePoint - Unable to complete this operation. Please contact your administrator

    - by Linda
    When I try and save something to my list in SharePoint I get the following error: Unable to complete this operation. Please contact your administrator. at Microsoft.SharePoint.Library.SPRequestInternalClass.PutFile(String bstrUrl, String bstrWebRelativeUrl, Object varFile, PutFileOpt PutFileOpt, String bstrCreatedBy, String bstrModifiedBy, Int32 iCreatedByID, Int32 iModifiedByID, Object varTimeCreated, Object varTimeLastModified, Object varProperties, String bstrCheckinComment, UInt32& pdwVirusCheckStatus, String& pVirusCheckMessage) at Microsoft.SharePoint.Library.SPRequest.PutFile(String bstrUrl, String bstrWebRelativeUrl, Object varFile, PutFileOpt PutFileOpt, String bstrCreatedBy, String bstrModifiedBy, Int32 iCreatedByID, Int32 iModifiedByID, Object varTimeCreated, Object varTimeLastModified, Object varProperties, String bstrCheckinComment, UInt32& pdwVirusCheckStatus, String& pVirusCheckMessage) A quick google says it may be a problem with disk space on the Database. I have checked my server and the smallest amount of space left on any of the drives is ~4GB. The file size is 1MB. I have checked the database and the data file is on unrestricted growth. Any ideas?


  • ListView items not clickable with HorizontalScrollView inside

    - by Felix
    I have quite a complicated ListView. Each item looks something like this:

        > LinearLayout (vertical)
        >   LinearLayout (horizontal)
        >     include (horizontal LinearLayout with two TextViews)
        >     include (ditto)
        >     include (ditto)
        >   TextView
        >   HorizontalScrollView (this guy is my problem)
        >     LinearLayout (horizontal)

    In my activity, when an item is created (getView() is called) I add dynamic TextViews to the LinearLayout inside the HorizontalScrollView (besides filling the other, simpler stuff out). Amazingly, performance is pretty good. My problem is that when I added the HorizontalScrollView, my list items became unclickable. They don't get the orange background when clicked and they don't fire the OnItemClickedListener I have set up (to do a simple Log.d call). How can I make my list items clickable again?


  • Silverlight 4 MediaElement play sound

    - by NealWalters
    I converted a local sound file to a resource, which built this in my XAML: <UserControl.Resources> <my:Uri x:Key="SoundFiles">file:///c:/Audio/HebrewDemo/Shalom.wav</my:Uri> </UserControl.Resources> I did this by pasting a local disk mp3 filename into source, then clicked on the "dot" by source and chose "Extract Value to Resource". When I run, it tells me that "Uri" is not valid, and sure enough, in the Intellisense, I see other elements that start with "uri" but not just URI by itself. In the real world I want to specify a dynamic mp3 file name. For example, I might have a database of foreign language words used for flashcards, I want to play a sound file on a URL. But I thought I would try to walk before running...
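    For the dynamic case, a hedged sketch: skip the XAML resource and assign the MediaElement's Source in code-behind from whatever URL the database returns. The element name and URL below are placeholders.

        // XAML on the page: <MediaElement x:Name="Player" AutoPlay="False" />
        private void PlayWord(string audioUrl)
        {
            // e.g. audioUrl = "http://example.com/audio/shalom.mp3"
            Player.Source = new Uri(audioUrl, UriKind.Absolute);
            Player.Play();
        }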


  • Removing offline/defunct files in SQL server 2008

    - by philox
    How to remove traces of files marked as OFFLINE or DEFUNCT in Microsoft SQL server 2008? I have been playing around with a setup where I create a database with 3 file-groups which are: Primary, FileGroupData and FileGroupIndex. The clustered index is using FileGroupData and a non-clustered index is set to use FileGroupIndex. To simulate a disk failure I've shut down SQL server and manually deleted the files in index file-group. To start the database I'll mark the files 'OFFLINE', but after that I can't delete the index files, which are now offline. I don't have backup of the files as they are merely indices, but that has the implication that I can't restore the files and have their status as "ONLINE". How would you recommend removing the files and the file-group as they still show up in management studio under files/file-groups. Management studio is not able to delete them. As far as I can tell this is different from the question posted in : http://stackoverflow.com/questions/462637/how-do-i-remove-offline-files-from-a-sql-server-2005-database /Philip


  • C++ Profiling: KiFastSystemCallRet

    - by John
    I searched for this after seeing it's the top rated item when profiling using Very Sleepy, and it seems everyone gets the answer "it's a system function, ignore it". But Sleepy's hint for the function says: Hint: KiFastSystemCallRet often means the thread was waiting for something else to finish. Possible causes might be disk I/O, waiting for an event, or maybe just calling Sleep(). Now, my app is absolutely thrashing the CPU and so it's a bit weird 33% of the time is spent waiting for something to happen. Do I really just ignore it? EDIT: apparently, 77% of the calls to this come from QueryOglResource (?) which is in module nvd3dnum. I think that might be nvidia Direct3D stuff, i.e rendering.


  • Method 'SingleOrDefault' not supported by Linq to Entities

    - by user300992
    I read other posts about similar problems using SingleOrDefault with LINQ to Entities; some suggested using "First()" and others suggested using an extension method to implement Single(). This code throws an exception: Movie movie = (from a in movies where a.MovieID == '12345' select a).SingleOrDefault(); If I convert the object query to a List using .ToList(), "SingleOrDefault()" actually works perfectly without throwing any error. My question is: Is it bad to convert to a List? Is it going to be a performance issue for more complicated queries? What does it get translated to in SQL? Movie movie = (from a in movies.ToList() where a.MovieID == '12345' select a).SingleOrDefault();
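    A hedged note: in the first Entity Framework release Single/SingleOrDefault are not supported by LINQ to Entities, but FirstOrDefault is, and unlike ToList() it keeps the filter in SQL (roughly a TOP (1) query) instead of materializing every row on the client. A sketch, assuming MovieID is a string:

        Movie movie = (from a in movies
                       where a.MovieID == "12345"   // use == 12345 if MovieID is numeric
                       select a).FirstOrDefault();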


  • C# - Recursive / Reflection Property Values

    - by tyndall
    What is the best way to go about this in C#? string propPath = "ShippingInfo.Address.Street"; I'll have a property path like the one above read from a mapping file. I need to be able to ask the Order object what the value of the code below will be. this.ShippingInfo.Address.Street Balancing performance with elegance. All object graph relationships should be one-to-one. Part 2: how hard would it be to add the capability for it to grab the first one if it's a List<> or something like it?
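    A hedged sketch of the reflection walk (with a nod to Part 2 by stepping into the first element of any intermediate collection); caching the PropertyInfo objects per path would help on the performance side but is omitted here for brevity.

        using System;
        using System.Collections;
        using System.Reflection;

        static class PropertyPath
        {
            public static object GetValue(object root, string path)
            {
                object current = root;
                foreach (string name in path.Split('.'))
                {
                    if (current == null) return null;

                    // Part 2: if we hit a collection, continue from its first item.
                    if (!(current is string) && current is IEnumerable)
                    {
                        IEnumerator e = ((IEnumerable)current).GetEnumerator();
                        if (!e.MoveNext()) return null;
                        current = e.Current;
                        if (current == null) return null;
                    }

                    PropertyInfo prop = current.GetType().GetProperty(name);
                    if (prop == null) throw new ArgumentException("Unknown property: " + name);
                    current = prop.GetValue(current, null);
                }
                return current;
            }
        }

        // Usage: object street = PropertyPath.GetValue(order, "ShippingInfo.Address.Street");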

