Search Results

Search found 10076 results on 404 pages for 'high volume'.


  • High fan speed with no reason

    - by Klaus
    For a few weeks, the fans of my Lenovo B590 laptop, running Xubuntu 14, have been switching to high speed a few minutes after the machine is turned on. The fans won't slow down until I turn the computer off. This is quite strange, since this didn't happen before, and the temperatures are quite low (aren't they?):

        $ sensors
        Adapter: Virtual device
        temp1:         +36.0°C  (crit = +88.0°C)
        temp2:         +30.0°C  (crit = +126.0°C)
        coretemp-isa-0000
        Adapter: ISA adapter
        Physical id 0: +37.0°C  (high = +72.0°C, crit = +90.0°C)
        Core 0:        +34.0°C  (high = +72.0°C, crit = +90.0°C)
        Core 1:        +31.0°C  (high = +72.0°C, crit = +90.0°C)
        thinkpad-isa-0000
        Adapter: ISA adapter
        fan1:           0 RPM
        pkg-temp-0-virtual-0
        Adapter: Virtual device
        temp1:         +37.0°C

        $ sudo hddtemp /dev/sda
        /dev/sda: ST500LT012-9WS142: 33°C

    The computer is under low load:

        top - 08:30:15 up 16 min, 2 users, load average: 0.28, 0.23, 0.23
        Tasks: 197 total, 1 running, 196 sleeping, 0 stopped, 0 zombie
        %Cpu(s): 0.8 us, 0.5 sy, 0.0 ni, 98.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
        KiB Mem:  3607944 total, 1973956 used, 1633988 free, 99660 buffers
        KiB Swap: 3744764 total, 0 used, 3744764 free. 789936 cached Mem

    The BIOS is up to date (and there are no fan settings in it), and the fan is clean and dust-free. Why would the BIOS turn the fans to high speed when there seems to be no reason for it? It seems the fan cannot be controlled manually on this model, so I guess the only solution is to understand why this happens.
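    As a diagnostic aid, a small polling script can show whether any sensor reading jumps just before the fans spin up. This is a rough sketch, not part of the original question; it assumes lm-sensors (the `sensors` command above) is installed and Python 3.7+.

        import subprocess
        import time

        def snapshot():
            # Run `sensors` and keep only the lines that carry a reading.
            out = subprocess.run(["sensors"], capture_output=True, text=True).stdout
            return [l.strip() for l in out.splitlines()
                    if "°C" in l or "RPM" in l]

        while True:
            print(time.strftime("%H:%M:%S"))
            for line in snapshot():
                print("    " + line)
            time.sleep(5)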

    Read the article

  • Removing information about deleted volume

    - by Pravin
    In order to increase space on my C: drive, I had to delete all my volumes and recreate them, allocating more space to C:, after which my G: drive no longer existed. Before this, I used to install all my software in G:. Now that the drive does not exist, I want to remove all information about the software I had installed in G:, as it was deleted along with the volume. I also want to install Cilk++, but it gives me an error: invalid drive g:. If I insert a pen drive so that a volume named G: appears, the Cilk++ installer runs, but it says it will be integrated with Visual Studio 2008, which I previously had on the G: drive (but which no longer exists), and it doesn't show the Visual Studio 2010 that I recently installed on the C: drive. How do I fix this? Please help.
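    One way to see where the stale G: references live is to scan the Uninstall key in the registry. The sketch below is purely illustrative (value names such as InstallLocation vary by installer, and installers may record paths elsewhere too); it only prints matches and changes nothing.

        import winreg

        UNINSTALL = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"

        def find_stale_g_entries():
            stale = []
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL) as root:
                count = winreg.QueryInfoKey(root)[0]  # number of subkeys
                for i in range(count):
                    name = winreg.EnumKey(root, i)
                    with winreg.OpenKey(root, name) as sub:
                        try:
                            loc, _ = winreg.QueryValueEx(sub, "InstallLocation")
                        except FileNotFoundError:
                            continue  # this entry records no install path
                        if isinstance(loc, str) and loc.upper().startswith("G:"):
                            stale.append((name, loc))
            return stale

        if __name__ == "__main__":
            for name, loc in find_stale_g_entries():
                print(name, "->", loc)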

    Read the article

  • growing EBS RAID volume

    - by Ryan Fernandes
    I've created a RAID0 configuration with two 1GB EBS volumes, mounted at /dev/md0 using mdadm and formatted with XFS. Next, I copied some files over to fill the volume to around 30% of its capacity (of 2GB). I then created snapshots of the volumes using ec2-consistent-snapshot and created new volumes from those snapshots, but specified the volume size as 2GB (effectively doubling the capacity of each disk). I then spun up a new instance, assembled the RAID0 configuration on /dev/md0 from the two volumes mentioned above, and mounted it at /vol. df -hT showed /vol as 2GB (as expected). Now I ran sudo xfs_growfs -d /vol. The command completed normally but reported blocks changed from 523776 to 524160 (only!), and df -hT still showed /vol as 2GB (instead of the expected 4GB). I rebooted, remounted, and reassembled the RAID, but it still reports the old size. EDIT: trying to grow the RAID using mdadm --grow yields: mdadm: raid0 array /dev/md0 cannot be reshaped. Is there any other way I can grow a RAID0 array?

    Read the article

  • Mysterious "media" volume mounted on desktop Mac OS X

    - by Allen
    I have a mysterious volume mounted on my desktop that I can't seem to forcibly unmount. I've tried using umount and also diskutil, but it seems to automatically remount itself. I've copied my HDD with Time Machine onto a new computer, and it also has the drive mounted. It's not pointing to anything and I can't open it, nor can I forcibly remove it by hand with rm -Rf. Any ideas? I noticed this problem after I upgraded from Lion to Mountain Lion. It causes problems because when I try to select a file using the built-in Finder dialog box, it freezes for a few minutes while it tries to cache or read the mounted "media" volume.

    Read the article

  • HP Pavilion dv3 volume control display driver on Windows 7

    - by Farinha
    I've recently bought an HP Pavilion dv3-2150ep and I'm having a hard time getting the volume control display to work as expected. The control is a back-lit, touch-sensitive bar above the keyboard. The buttons to turn the volume up and down actually work, but the lighting does not change at all. The mute button does change color when toggled. I'm not sure if I'm missing a driver (I've installed everything on the HP support page that seems to have something to do with sound and/or display) or if I have to activate this somewhere. Any ideas?

    Read the article

  • Where does truecrypt store the backup volume header?

    - by happygolucky
    When using WDE, where (if anywhere) does TrueCrypt store a backup volume header? I know there is always a backup header for regular TrueCrypt volumes, but I am not sure whether this applies when system encryption is used. If I damage the volume header in track 0, my password won't boot my system anymore. So is there no backup header on the drive? I read somewhere on a forum that TrueCrypt might keep a backup header at some position relative to the END of the HDD, but this doesn't make sense, as it could easily be overwritten by programs running in Windows. And how would TrueCrypt know where this backup is anyway?

    Read the article

  • Encrypted TrueCrypt volume in file with decoy content?

    - by penyuan
    I've been reading about the encryption features of TrueCrypt, including the ability to place an encrypted volume inside a file with any name. For example, I can create an encrypted volume inside a file named music.mp3. However, the file won't actually play when I open it in a music player. Is there a way to add "decoy" content to music.mp3, so that someone who doesn't know it contains encrypted content can double-click on it and music will play? Obviously it doesn't have to be music; it could also be images, a decoy text document, etc.

    Read the article

  • Windows shows incorrect free space on Raid 10 volume

    - by Adenverd
    I have four 1TB hard drives in a RAID 1+0 configuration. Theoretically, I should have ~2TB of available space. Windows says the drive has a total size of 1.81 TB, which I'm fine with. As for files on the volume, I used WinDirStat to determine that I have 552.8GB of files on it. This means I should have at least around 1.3TB of free space. Yet Windows shows the drive as having only 492GB free. Are there hidden files somewhere that I can't see (I have show hidden files/folders turned on)? Does Windows not recognize that old files have been deleted? Is there any way to correct this problem?

    Read the article

  • How to recover logical volume deleted with lvremove

    - by John P
    I am on CentOS 5.5 and am running Xen. I have a large volume group on which I create logical volumes using lvcreate. Today I had a customer cancel her account, then change her mind about an hour later. Unfortunately, I had already removed the logical volume her Xen image resided on (just using a standard lvremove). There has been no other LVM activity on this disk since then (nothing else added or deleted). Is it possible to "undo" an lvremove, or recover the logical volume? If so, how would I go about it?
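    For what it's worth, the usual route is LVM's own metadata archive: lvremove deletes only metadata, and /etc/lvm/archive keeps a copy of the volume group layout from just before each change, which vgcfgrestore can put back. A hedged sketch follows; the volume group name "vg0" and the LV name are placeholders, and the read-only listing step should always come first.

        import subprocess

        VG = "vg0"  # placeholder: substitute your volume group name

        # 1. List the archived metadata versions for this VG (read-only).
        subprocess.run(["vgcfgrestore", "--list", VG], check=True)

        # 2. After identifying the archive written just *before* the lvremove,
        #    e.g. /etc/lvm/archive/vg0_00042-1234567890.vg, restore it:
        # subprocess.run(["vgcfgrestore", "-f",
        #                 "/etc/lvm/archive/vg0_00042-1234567890.vg", VG], check=True)

        # 3. Reactivate the recovered logical volume (name is a placeholder):
        # subprocess.run(["lvchange", "-ay", f"{VG}/customer_lv"], check=True)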

    Read the article

  • Teaching high school kids ASP.NET programming

    - by dotneteer
    During the 2011 Microsoft MVP Global Summit, I have been talking to people about teaching kids ASP.NET programming. I want to work with volunteer organizations to give kids volunteer opportunities while they learn technical skills that can be applied elsewhere. The goal is to teach motivated kids enough skill to be productive with no more than 6 hours of instruction. Based on my prior experience teaching college extension courses and my involvement with high school math and science competitions, I think this is quite doable with classic ASP but a challenge with ASP.NET. I don't want to use ASP because it does not provide a good path into the future. After some consideration, I think this is possible with ASP.NET, and here are my thoughts:

    · Create a framework within ASP.NET for kids programming.
    · Use an existing editor. No extra compiler or IntelliSense work needed.
    · Use a subset of C# like a scripting language. Teach data types, expressions, statements, if/for/while/switch blocks and functions. Use existing classes, but no class creation and no OOP.
    · Linear rendering model. No complicated life cycle.
    · Bare-metal HTML with some MVC-style helpers for widget creation; ASP.NET controls are optional. I want to teach kids to understand something and avoid black boxes as much as possible.
    · Use SQL for CRUD with a helper class. Again, I want to teach understanding rather than black boxes.
    · Provide a template to encourage clean separation of concerns.
    · Provide a conversion utility to convert code that uses the template to ASP.NET MVC. This will allow kids with AP Computer Science knowledge to step up to ASP.NET MVC.

    Let me know if you have thoughts or can help.

    Read the article

  • XNA Easy Storage XBOX 360 High Scores

    - by user1003211
    To follow up on a previous query: I need some help with the implementation of EasyStorage high scores, which is raising errors on the Xbox. I get the prompt screen, a SaveDevice is selected, and a file is created. However, the file remains empty (I've tried pre-populating it, but I still get errors). The full portions of the scoring code can be found here: http://pastebin.com/74v897Yt

    The current issue in particular is in LoadHighScores(): "There is an error in XML document (0, 0)." on the line data = (HighScoreData)serializer.Deserialize(stream); I'm not sure whether this line is correct either: HighScoreData data = new HighScoreData();

        public static HighScoreData LoadHighScores(string container, string filename)
        {
            HighScoreData data = new HighScoreData();
            if (Global.SaveDevice.FileExists(container, filename))
            {
                Global.SaveDevice.Load(container, filename, stream =>
                {
                    // Note: this call opens an unrelated options file and
                    // discards the handle; it never touches `stream`.
                    File.Open(Global.fileName_options, FileMode.OpenOrCreate, FileAccess.Read);
                    try
                    {
                        // Read the data from the file
                        XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData));
                        data = (HighScoreData)serializer.Deserialize(stream);
                    }
                    finally
                    {
                        // Close the file
                        stream.Close();
                        // stream.Dispose();
                    }
                });
            }
            return (data);
        }

    I call PromptMe() when the Start button is pressed at the beginning. I call entries = LoadHighScores(HighScoresContainer, HighScoresFilename) (guarded by if (Global.SaveDevice.IsReady)) during the menu screen to try to display the high-score screen. I call SaveHighScore() when the game ends. I've tried altering the struct to a class, but still no luck. Any help greatly appreciated.

    Read the article

  • High-Level Application Architecture Question

    - by Jesse Bunch
    So I'm really wanting to improve how I architect the software I code. I want to focus on maintainability and clean code. As you might guess, I've been reading a lot of resources on this topic, and all it's doing is making it harder for me to settle on an architecture, because I can never tell if my design is the one a more experienced programmer would've chosen. So I have these requirements:

    · I should connect to one vendor and download form submissions from their API. We'll call them CompanyA.
    · I should then map those submissions to a schema fit for submitting to another vendor for integration with the email service provider. We'll call them CompanyB.
    · I should then submit those responses to the ESP (CompanyB) and then instruct the ESP to send the submitter an email.

    So basically, I'm copying data from one web service to another and then performing an action at the latter web service. I've identified a couple of high-level services:

    · The service that downloads data from CompanyA. I called this the CompanyAIntegrator.
    · The service that submits the data to CompanyB. I called this the CompanyBIntegrator.

    So my questions are these:

    · Is this a good design? I've tried to separate the concerns and am planning to use the facade pattern to make the integrators interchangeable if the vendors change in the future.
    · Are my naming conventions accurate and meaningful to you (who know nothing specific of the project)?
    · Now that I have these services, where should I do the work of taking output from the CompanyAIntegrator and getting it into the format for input to the CompanyBIntegrator? Is this OK to be done in main()?
    · Do you have any general pointers on how you'd code something like this? I imagine this scenario is common to us engineers, especially those working in agencies.

    Thanks for any help you can give. Learning how to architect well is really mind-cluttering.
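    A rough Python sketch of the shape being described, with each integrator behind a small interface so the vendors stay swappable (the facade idea above). Everything except the CompanyAIntegrator/CompanyBIntegrator names is illustrative, not a definitive design.

        from abc import ABC, abstractmethod

        class SubmissionSource(ABC):
            @abstractmethod
            def download_submissions(self) -> list[dict]: ...

        class SubmissionSink(ABC):
            @abstractmethod
            def submit(self, record: dict) -> None: ...
            @abstractmethod
            def send_email(self, record: dict) -> None: ...

        class CompanyAIntegrator(SubmissionSource):
            def download_submissions(self) -> list[dict]:
                # Call CompanyA's API here; stubbed with one fake record.
                return [{"name": "Jane", "email": "jane@example.com"}]

        class CompanyBIntegrator(SubmissionSink):
            def submit(self, record: dict) -> None:
                pass  # push the mapped record to CompanyB's API

            def send_email(self, record: dict) -> None:
                pass  # instruct the ESP to mail the submitter

        def map_to_companyb(record: dict) -> dict:
            # One option: keep the translation in its own function rather
            # than inline in main(), so the mapping is testable on its own.
            return {"full_name": record["name"], "address": record["email"]}

        def main():
            source, sink = CompanyAIntegrator(), CompanyBIntegrator()
            for submission in source.download_submissions():
                mapped = map_to_companyb(submission)
                sink.submit(mapped)
                sink.send_email(mapped)

        if __name__ == "__main__":
            main()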

    Read the article

  • check what process was causing the problem of high cpu load

    - by linuxk
    I'm running an nginx WordPress server in KVM, using Ubuntu 12.04 Server x86. It ran very well for about 4 months, until 2 hours ago I found that my website was down and there was no ping response. Virt-manager logged a high CPU load (please see the picture below) before the unexpected shutdown. I want to know what process caused the unexpected shutdown. The following log files make me think my server was attacked. Any suggestions and help would be appreciated.

    kern.log and syslog showed me the same output:

        Nov 11 03:54:11 www kernel: [1344541.156239] [UFW BLOCK] IN=eth0 OUT= MAC= SRC=0.0.0.0 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2
        Nov 11 03:54:11 www kernel: [1344541.156315] [UFW BLOCK] IN=eth0 OUT= MAC= SRC=0101:080a:2334:c900:0100:0000:0000:0000 DST=ff02:0000:0000:0000:0000:0000:0000:0001 LEN=72 TC=0 HOPLIMIT=1 FLOWLBL=0 PROTO=ICMPv6 TYPE=130 CODE=0

    /nginx/access.log showed me:

        119.235.237.17 - - [11/Nov/2012:03:45:29 +0900] "GET /blog HTTP/1.1" 200 30493 "-" "Yeti/1.0 (NHN Corp.; http://help.naver.com/robots/)"
        my-server-ip - - [11/Nov/2012:11:05:30 +0900] "POST /wp-cron.php?doing_wp_cron=13 HTTP/1.0" 499 0 "-" "WordPress/3.4.2; http://mywebsite.com"

    The server was turned back on here.

        119.235.237.16 - - [11/Nov/2012:11:05:30 +0900] "GET /blog HTTP/1.1" 200 32935 "-" "Yeti/1.0 (NHN Corp.; http://help.naver.com/robots/)"
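    As an aside, a lightweight way to answer "which process was spinning?" after the fact is to keep a rolling log of the top CPU consumers, so the next incident leaves a trail. A hedged sketch using the third-party psutil package (pip install psutil); note the first cpu_percent sample per process is always 0.0, so treat the output as trend data.

        import time
        import psutil  # third-party: pip install psutil

        while True:
            # Sort all processes by their CPU share since the last sample.
            procs = sorted(psutil.process_iter(["name", "cpu_percent"]),
                           key=lambda p: p.info["cpu_percent"] or 0.0,
                           reverse=True)
            stamp = time.strftime("%Y-%m-%d %H:%M:%S")
            for p in procs[:5]:  # the five busiest processes
                print(stamp, p.pid, p.info["name"], p.info["cpu_percent"])
            time.sleep(60)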

    Read the article

  • Integration error in high velocity

    - by Elektito
    I've implemented a simple simulation of two planets (really just 2D disks) in which the only force is gravity, and there is also collision detection/response (collisions are perfectly elastic). I can launch one planet into orbit around the other just fine. The collision detection code, though, does not work so well. I noticed that when one planet hits the other in free fall, it bounces back and goes much higher than its original position. Some poking around convinced me that the simplistic Euler integration is causing the error. Consider this case: one object has a mass of 1 kg and the other has a mass equal to Earth's. Say the object is 10 meters above the ground, and assume that our dt (delta t) is 1 second. The object falls to a height of 9 meters at the end of the first iteration, 7 at the end of the second, 4 at the end of the third, and 0 at the end of the fourth. At this point it hits the ground and bounces back with a speed of 10 meters per second. The problem is that with dt=1, on the first iteration it bounces back to a height of 10, and it takes several more steps for the object to change course. So my question is: what integration method can I use to fix this problem? Should I split dt into smaller pieces when velocity is high? Or should I use another method altogether? What method do you suggest? EDIT: You can see the source code on GitHub: https://github.com/elektito/diskworld/
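    To make the failure concrete, here is a minimal sketch of the scenario from the question: a 10 m drop integrated with explicit (forward) Euler and a perfectly elastic bounce. With dt = 1 s the rebound climbs far above the drop height, while subdividing the step makes the artifact shrink away; switching to semi-implicit (symplectic) Euler or velocity Verlet helps for the same reason. g is rounded to 10 m/s² to match the example; the exact numbers are illustrative.

        G = 10.0  # m/s^2, rounded as in the question's example

        def rebound_peak(dt, t_end=6.0):
            # Drop from 10 m with explicit Euler; return the highest point
            # reached after the first bounce. Large dt pumps in energy.
            y, v, peak, bounced = 10.0, 0.0, 0.0, False
            for _ in range(int(t_end / dt)):
                y, v = y + v * dt, v - G * dt   # explicit (forward) Euler
                if y <= 0.0:
                    y, v = -y, -v               # perfectly elastic bounce
                    bounced = True
                if bounced:
                    peak = max(peak, y)
            return peak

        print(rebound_peak(dt=1.0))    # far above the 10 m drop height
        print(rebound_peak(dt=0.001))  # close to 10 m: substeps fix it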

    Read the article

  • High Performance SQL Views Using WITH(NOLOCK)

    - by gt0084e1
    Every now and then you find a simple way to make everything much faster. We often find customers creating data warehouses or OLAP cubes even though they have a relatively small amount of data (a few gigs) compared to their server memory. If you have more server memory than the size of your database or working set, nearly any aggregate query should run in a second or less. In some situations there may be high traffic from the transactional application, and SQL Server may wait for several other queries to run before giving you your results. The purpose of this is to make sure you don't get two versions of the truth. In an ATM system, you want to give the bank balance after the withdrawal, not before, or you may get a very unhappy customer. So by default, databases are rightly very conservative about this kind of thing.

    Unfortunately, this split-second precision comes at a cost. The performance of the query may not be acceptable by today's standards, because the database has to maintain locks on the server. Fortunately, SQL Server gives you a simple way to ask for the current version of the data without the pending transactions. To better facilitate reporting, you can create a view that includes these directives:

        CREATE VIEW CategoriesAndProducts AS
        SELECT *
        FROM dbo.Categories WITH(NOLOCK)
        INNER JOIN dbo.Products WITH(NOLOCK)
            ON dbo.Categories.CategoryID = dbo.Products.CategoryID

    In some cases queries that were taking minutes end up taking seconds. This is much easier than moving the data to a separate database, and it's still pretty much real time, give or take a few milliseconds. You've been warned not to use this for bank balances, though. More from Data Stream

    Read the article

  • High Power Consumption and Wakeups on my Asus X54H with 12.04

    - by Marogian
    So I've been using powertop to try to reduce the power consumption on my laptop, as I only seem to get about 3 hours of battery. From reading other threads on here, it seems my power consumption and wakeups are strangely high. Here's a summary:

        The battery reports a discharge rate of 10.2 W
        Summary: 651.8 wakeups/second, 0.0 GPU ops/second and 0.0 VFS ops/sec

    The things that stand out as odd:

        1.31 W    4.0 ms/s    166.7    Interrupt    PS/2 Touchpad / Keyboard / Mouse

    So more than 10% of my battery is being consumed by my touchpad/keyboard? That doesn't seem right.

        548 mW    34.3 ms/s   45.9     Process      compiz

    Another 5% from Compiz. Is this correct?

        376 mW    1.8 ms/s    47.5     Interrupt    [51] i915
        298 mW    1.4 ms/s    37.7     Timer        tick_sched_timer

    Another few percent from these things; I'm not quite sure what they are. For reference, I've installed Laptop Mode Tools and Jupiter (on power save), the CPU governor is definitely on powersave, and brightness is at minimum. What else can I do? Any ideas? I've seen other posts on here reporting laptop battery lives of ~8 hours and power consumption of 4 W rather than my 10 W... Thanks!

    Read the article

  • High resolution graphical representation of the Earth's surface

    - by Simon
    I've got a library, which I inherited, that presents a zoomable representation of the Earth. It's a Mercator projection and is constructed from triangles, the properties of which are stored in binary files. For any given viewport, the surface is built up by drawing these triangles in an overlapping fashion to produce the image. Each triangle is defined by the lat/long of its vertices. It looks OK at low zoom levels but looks progressively more ragged as the user zooms in. The viewports are primarily referenced through a rectangle of lat/long co-ordinates. I'd like to replace it with a better-quality approach. The problem is, I don't know where to begin researching the options, as I am familiar neither with the projections needed nor with the graphics techniques used to render them. For example, I imagine that I could acquire high-resolution images (say, Mercator projections, although I'm open to anything), break them into tiles, and somehow wrap them onto a graphical representation of a sphere. I'm not asking "how do I"; it's more "where should I begin to understand what might be involved and the techniques I will need to learn". I am most grateful for any "Earth rendering 101" pointers folks might have.
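    As a starting point for the tiled-image idea, the arithmetic is small: the standard Web Mercator transform plus the power-of-two tile indexing used by "slippy map" tile pyramids. A sketch follows; nothing here is specific to the inherited library, and the 2^z-by-2^z square tiling is an assumption of that scheme.

        import math

        def mercator(lat_deg, lon_deg):
            # Project lat/long (degrees) to x, y in [0,1]^2, Web Mercator style.
            x = (lon_deg + 180.0) / 360.0
            lat = math.radians(lat_deg)
            y = (1.0 - math.log(math.tan(lat) + 1.0 / math.cos(lat)) / math.pi) / 2.0
            return x, y

        def tile_for(lat_deg, lon_deg, zoom):
            # Which tile in a 2^zoom x 2^zoom pyramid covers this point.
            x, y = mercator(lat_deg, lon_deg)
            n = 2 ** zoom
            return int(x * n), int(y * n)

        print(tile_for(51.5074, -0.1278, 12))  # central London at zoom 12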

    Read the article

  • High Availability

    - by mattjgilbert
    Udi Dahan presented at the UK Connected Systems User Group last night. He discussed high availability and pointed out that people often think this is purely an infrastructure challenge. However, the implications of system crashes, errors and resulting data loss need to be considered and managed by software developers. In addition, a system should remain both highly reliable (backwardly compatible) and available during deployments and upgrades. The argument is that you cannot be considered highly available if your system is down every time you upgrade. For our recent BizTalk 2009 upgrade, we made use of our Business Continuity servers (note the name, rather than calling them Disaster Recovery servers) to ensure our clients could continue to operate while we upgraded the production BizTalk servers. Then we failed back to the newly built 2009 environment and rebuilt the BC servers. Of course, in the event of an actual disaster there was a window where one or the other set was not available to take over; however, our staging machines were already primed to switch to production settings, having been used for testing the upgrade in the first place. While not perfect (the failover between environments was not automatic and not without some minimal outage), planning the upgrade in this way meant BizTalk was online during the rebuild and upgrade project, we didn't have to rush things to get back online, and we were ready to be as available as we could be in the event of an actual disaster.

    Read the article

  • When you’re on a high, start something big

    - by BuckWoody
    Most days are pretty average – we have some highs, some lows, and just regular old work to do. But some days the sun is shining, your co-workers are especially nice, and everything just falls into place. You really *enjoy* what you do. Don't let that moment pass. All of us have "big" projects that we need to tackle. Things that are going to take a long time, and a lot of money. Those kinds of data projects take a LOT of planning, and many times we put that off just to get to the day's work. I've found that the "high" moments are the perfect time to take on these big projects. I'm more focused, and more importantly, more positive. And as the quote goes, "whether you think you can or you think you can't, you're probably right." You'll find a way to make it happen if you're in a positive mood. Now – having those "great days" is actually something you can influence, but I'll save that topic for a future post. I have a project to work on. :)

    Read the article

  • The Oracle Retail Week Awards - most exciting awards yet?

    - by sarah.taylor(at)oracle.com
    Last night's annual Oracle Retail Week Awards saw the UK's top retailers come together to celebrate the very best of our industry over the last year. The Grosvenor House Hotel on Park Lane in London was the setting for an exciting ceremony which this year marked several significant milestones in British - and global - retail. Check out our videos about the event at our Oracle Retail YouTube channel, and see if you were snapped by our photographer on our Oracle Retail Facebook page.

    There were some extremely hot contests for many of this year's awards - and all very deserving winners. The entries have demonstrated beyond doubt that retailers have striven to push their standards up yet again in all areas over the past year. The judging panel includes some of the most prestigious names in the retail industry - to impress the panel enough to win an award is a substantial achievement. This year the panel included the likes of Andy Clarke - Chief Executive of ASDA Group; Mark Newton Jones - CEO of Shop Direct Group; Richard Pennycook - the finance director at Morrisons; Rob Templeman - Chief Executive of Debenhams; and Stephen Sunnucks - the president of Gap Europe. These are retail veterans who have each helped to shape the British High Street over the last decade. It was great to chat with many of them in the Oracle VIP area last night.

    For me, last night's highlight was honouring both Sir Stuart Rose and Sir Terry Leahy for their contributions to the retail industry. Both have set the standards in retailing over the last twenty years and taken their respective businesses from strength to strength, demonstrating that there is always a need for innovation even in larger businesses, and that a business has to adapt quickly to new technology in order to stay competitive. Sir Terry Leahy's retirement this year marks the end of an era of global expansion for the Tesco group and a milestone in the progression of British retail. Sir Terry has helped steer Tesco through nearly 20 years of change, with 14 years as Chief Executive. During this time he led the drive for international expansion and an aggressive campaign to increase market share. He has led the way for High Street retailers in adapting to the rise of internet retailing and nurtured a very successful home delivery service. More recently he has pioneered the notion of cross-channel retailing with the introduction of Tesco apps for the iPhone and Android mobile phones, allowing customers to scan barcodes of items to add to a shopping list which they can then either refer to in store or order for delivery.

    John Lewis Partnership was a very deserving winner of The Oracle Retailer of the Year award for their overall dedication to excellent retailing practices. The business was also named winner of the American Express Marketing/Advertising Campaign of the Year award for their memorable 'Never Knowingly Undersold' advert series, which included a very successful viral video and radio campaign with Fyfe Dangerfield's cover of Billy Joel's 'She's Always a Woman' used for the adverts.

    Store Design of the Year was another exciting category, with Topshop taking the accolade for its flagship Oxford Street store in London, which combines boutique concession-style stalls with high fashion displays and exclusive collections from leading designers. The store even has its own hairdressers and food hall, making it a truly all-inclusive fashion retail experience and a global landmark for any self-respecting international fashion shopper.

    Over the next few weeks we'll be exploring some of the winning entries in more detail here on the blog, so keep an eye out for some unique insights into how the winning retailers have made such remarkable achievements.

    Read the article

  • How and where to store user uploaded files in high traffic web farm scenario website?

    - by Inam Jameel
    I am working on a website that will be deployed on a web farm to serve high traffic. Where should I store user-uploaded files? Is it wise to store uploaded files in the file system of the website itself and synchronize those files across all web servers in the farm? Or should I use a separate server to store all uploaded files in a central location? If a separate file server is the better choice, how can I pass files from the web servers to that file server efficiently? Or should I upload files directly to the file server?
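    A minimal sketch of the central-location option: every web node writes uploads to one shared store (a mounted share below; S3 or any blob store is the same idea), so nothing needs synchronizing across the farm. The path and the hash-based layout are illustrative assumptions, not a prescribed design.

        import hashlib
        import os

        SHARED_ROOT = "/mnt/shared-uploads"  # placeholder: an NFS/SMB mount

        def save_upload(filename: str, data: bytes) -> str:
            # Hash-based fan-out keeps any one directory from growing huge.
            digest = hashlib.sha256(data).hexdigest()
            subdir = os.path.join(SHARED_ROOT, digest[:2], digest[2:4])
            os.makedirs(subdir, exist_ok=True)
            path = os.path.join(subdir, digest + "_" + os.path.basename(filename))
            with open(path, "wb") as f:
                f.write(data)
            return path  # store this path/URL in the database, not the blob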

    Read the article

  • NServiceBus Generic Host and mqsvc.exe high CPU

    - by Michael Stephenson
    We have been doing some work with NServiceBus recently and observed some unusual behaviour which was caused by our mistake and seemed worthy of a small post.

    The Scenario

    In our solution we were doing some standard NServiceBus stuff by pushing a message to a queue using NServiceBus. We had a direct send/receive scenario rather than a publish/subscribe one. The background process which was meant to collect the message and then process it was a normal NServiceBus message handler. We would run NServiceBus.Host.exe, which would find the handler and then do the usual NServiceBus magic.

    The Problem

    In this solution we were creating some automated tests around this module of the integration process to ensure that it would work well. We had two tests.

    Test 1: This test would start NServiceBus.Host.exe using the Process object, then seed a message to the queue via our web service façade sitting above the queue, which wrapped NServiceBus. The background process would then process the message, and the test would check the message had been processed fine. If all was well, the NServiceBus.Host.exe process was stopped.

    Test 2: In this test we would do a very similar thing, except that instead of starting the process directly, the test would install NServiceBus.Host.exe as a Windows service and start the service before the test; once the test was executed it would stop the service.

    The Results of the Tests

    Test 1 worked really well; however, test 2 didn't really work at all. Instead of the background process doing its work, we found that between mqsvc.exe and NServiceBus.Host.exe the CPU on the machine was maxed and nothing was really happening.

    The Solution

    After trying a few things we found the permissions on the queue were not set correctly. Once this was resolved it all worked fine, CPU was not excessive, and it ran just like the console application. I think the couple of takeaways from this are:

    · Make sure you set the Windows service for the NServiceBus generic host to the right credentials. When you install the generic host as a Windows service, by default it will use the default Windows credentials. For any production-like scenario you should be using a domain account to run the process via the Windows service.

    · Make sure you have the queue set with the right permissions. For the credentials you have used to configure the generic host as a Windows service, you should ensure that this user has the appropriate permissions for any queues it will interact with.

    · Make sure you turn on the right logging configuration in NServiceBus. When this wasn't working correctly we didn't know there was an issue; we were just experiencing the high-CPU condition. I am a little surprised that there wasn't something logged and that the process didn't crash. I guess this could be by design, bearing in mind that the process could be monitoring many queues. My point is just that originally we didn't have all of the log4net logging which is available from NServiceBus turned on. It's probably a good idea to have this turned on and configured until you are happy your solution is working fine.

    Thanks to Ahmed Hashmi on my team who got this working in the end.

    Read the article

  • Oracle Enterprise Manager Ops Center 12c : Enterprise Controller High Availability (EC HA)

    - by Anand Akela
    Contributed by Mahesh Sharma, Oracle Enterprise Manager Ops Center team

    In Oracle Enterprise Manager Ops Center 12c we introduced a new feature to make the Enterprise Controller highly available. With EC HA, if the hardware crashes, or if the Enterprise Controller services and/or the remote database stop responding, the enterprise services are immediately restarted on the standby Enterprise Controller without administrative intervention. In today's post, I'll briefly describe EC HA, look at some of the prerequisites, and then show some screenshots of how the Enterprise Controller is represented in the BUI. In my next post, I'll show you how to install the EC in an HA environment and some of the new commands.

    What is EC HA?

    Enterprise Controller High Availability (EC HA) provides an active/standby fail-over solution for two or more Ops Center Enterprise Controllers, all within an Oracle Clusterware framework. This allows EC resources to relocate to a standby if the hardware crashes, or if certain services fail. It is also possible to manually relocate the services if maintenance on the active EC is required. When the EC services are relocated to the standby, EC services are interrupted only for the period it takes for the EC services to stop on the active node and start back up on a standby node.

    What are the prerequisites?

    To install EC in an HA framework, an understanding of the prerequisites is required. There are many possibilities for how these prerequisites can be installed and configured; we will not discuss these in this post. However, best practices should be applied when installing and configuring, and I would suggest that you get expert help if you are not familiar with them. Let's briefly look at each of these prerequisites in turn:

    · Hardware: Servers are required to host the active and standby node(s). As the nodes will be in a clustered environment, they need to be the same model and configured identically. The nodes should have the same processor class, number of cores, memory, and network cards, for example.

    · Operating System: We can use Solaris 10 9/10 or higher, Solaris 11, or OEL 5.5 or higher, on x86 or SPARC.

    · Network: There are a number of requirements for network cards in Clusterware, and cables should be networked identically on all the nodes. We must also consider IP allocation for public/private and virtual IPs (VIPs).

    · Storage: Shared storage will be required for the cluster voting disks, Oracle Cluster Registry (OCR) and the EC's libraries.

    · Clusterware: Oracle Clusterware version 11.2.0.3 or later is required. This can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html

    · Remote Database: Oracle RDBMS 11.1.0.x or later is required. This can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html

    For detailed information on how to install EC HA, please read: http://docs.oracle.com/cd/E27363_01/doc.121/e25140/install_config-shared.htm#OPCSO242
    For detailed instructions on installing Oracle Clusterware, please read: http://docs.oracle.com/cd/E11882_01/install.112/e17214/chklist.htm#BHACBGII
    For detailed instructions on installing the remote Oracle database, have a read of: http://www.oracle.com/technetwork/database/enterprise-edition/documentation/index.html

    The schematic diagram below gives a visual view of how the prerequisites are connected. When a fail-over occurs, the Enterprise Controller resources and the VIP are relocated to one of the standby nodes. The standby node then becomes active and all Ops Center services are resumed.

    Connecting to the Enterprise Controller from your favourite browser

    Let's presume we have installed and configured all the prerequisites, and installed Ops Center on the active and standby nodes. We can now connect to the active node from a browser, i.e. http://<active_node1>/; this will redirect us to the virtual IP address (VIP). The VIP is the IP address that moves with the Enterprise Controller resource. Once you log on and view the assets, you will see some new symbols; these indicate that the nodes are cluster members, with one being an active member and the other a standby member in this case. If you connect to the standby node, the browser will redirect you to a splash page indicating that you have connected to the standby node.

    Hope you find this topic interesting. Next time I will post about how to install the Enterprise Controller in the HA framework.

    Read the article
