Search Results

Search found 18424 results on 737 pages for 'load balance'.

  • What can cause peaks in PageTables in /proc/meminfo?

    - by Fuzzy76
    I have a gameserver running Debian Lenny on a VPS host. Even under fairly low load, the players start experiencing major lag (ping times rise from 50 ms to 150-500 ms) in bursts of 3-10 seconds. I have installed Munin server monitoring, but looking at the graphs, the server seems to have plenty of CPU, RAM and bandwidth available. The only weird thing I noticed is some peaks in the memory graph attributed to "page_tables", which maps to PageTables in /proc/meminfo, but I can't find any good information on what this might mean. Any ideas what might be causing this? If you need any more graphs, just let me know. The interrupts/second count is roughly 400-600 during this period (nearly all from eth0). The drop in committed memory was caused by me trying to lower the allocated memory for the server from 512MB to 256MB, but that didn't seem to help.
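
    One way to see whether the lag bursts actually line up with the PageTables spikes is to sample /proc/meminfo at a finer grain than Munin's five-minute interval. A minimal sketch (the PageTables field name is exactly as it appears in /proc/meminfo):

        #!/usr/bin/env python
        # Print a timestamped PageTables reading once a second, so spikes can
        # be matched against the lag bursts the players report.
        import time

        def read_pagetables_kb():
            with open("/proc/meminfo") as f:
                for line in f:
                    if line.startswith("PageTables:"):
                        return int(line.split()[1])   # value is reported in kB
            return None

        while True:
            print("%s PageTables: %s kB" % (time.strftime("%H:%M:%S"),
                                            read_pagetables_kb()))
            time.sleep(1)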

  • Please recommend tools for PC, browser, and home-network performance problems

    - by mobibob
    My client has been experiencing some odd response behavior in their browser for the past few days. Classic "nothing has changed," so I am starting at ground zero. Browsing a website will time out or take a ridiculous time to load - other times, the same site and query is immediately responsive. Once a connection is established, video streams are uninterrupted. The home network hosts a website, but it is not showing any activity in Apache's access.log. I am using speedtest.net to check whether the ISP connection to the internet is OK - results look typical (average +/-). I have to suspect the home network is beaconing or something very abnormal, but I don't know where to start.

  • Building a Redundant / Distributed Application

    - by MattW
    This is more of a "point me in the right direction" question. My team of three and I have built a hosted web app that queues and routes customer chat requests to available customer service agents (it does other things as well, but this is enough background to illustrate the issue). The basic dev architecture today is:
    - a single-page Ajax web UI (ASP.NET MVC) with floating chat windows (think Gmail)
    - a backend Windows service to queue and route the chat requests; this service also logs the chats, calculates service levels, etc.
    - a Comet server product that routes data between the web frontend and the backend Windows service; this also helps us detect which agents are still connected (online)
    And our hardware architecture today is:
    - 2 servers to host the web UI portion of the application
    - a load balancer to route requests to the 2 different web app servers
    - a third server to host the SQL Server DB and the backend Windows service responsible for queuing / delivering chats
    So as it stands today, one of the web app servers could go down and we would be OK. However, if something happened to the SQL Server / Windows service server, we would be boned. My question: how can I make this backend Windows service logic able to be spread across multiple machines (distributed)? The Windows service is written to accept requests from the Comet server, check for available agents, and route the chat to those agents. How can I make this more distributed, so that the work of the backend Windows service is spread across multiple machines for redundancy and uptime purposes? Will I need to re-write it with distributed computing in mind? I should also note that I am hosting all of this on Rackspace Cloud instances - so maybe it is something I should be less concerned about? Thanks in advance for any help!
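
    A common way to spread a queue-and-route service like this across machines is the competing-consumers pattern: the pending chats live in shared storage (the SQL Server DB already in place, or a message broker), and each service instance atomically claims one queued chat at a time, so any single instance can die without taking the queue with it. A toy sketch of the claim step - SQLite here is purely a stand-in for the shared database, and the chats(id, state, owner) schema is invented for illustration:

        import sqlite3

        # Assumed toy schema: chats(id INTEGER PRIMARY KEY, state TEXT, owner TEXT)
        def claim_next_chat(conn, worker_id):
            # The UPDATE is the lock: the database serializes writers, so only
            # one service instance can flip a given row out of 'queued', and a
            # competing instance's subquery simply picks the next row.
            conn.execute(
                "UPDATE chats SET owner = ?, state = 'routing' "
                "WHERE id = (SELECT id FROM chats WHERE state = 'queued' "
                "ORDER BY id LIMIT 1)",
                (worker_id,))
            conn.commit()
            return conn.execute(
                "SELECT id FROM chats WHERE owner = ? AND state = 'routing'",
                (worker_id,)).fetchone()  # None means the queue was empty

    On SQL Server the same idea is usually written with UPDLOCK/READPAST hints, and a message-queue product gives you the pattern for free; the point is that once the queue is shared and claims are atomic, scaling out is just adding consumers.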

  • Synaptics drivers are not loading on Kubuntu 13.10 on Dell Vostro 2420

    - by Alok Singh Mahor
    I freshly installed Kubuntu 13.10 (32-bit version) on a newly purchased Dell Vostro 2420. Everything is working fine except scrolling and the multitouch features of the touchpad. I am able to move the cursor with the touchpad and able to tap (single click and double click), but scrolling is not working. I tried to find a solution by searching on Google but could not find a proper way to load the Synaptics drivers. I am listing some details:
    Laptop: Dell Vostro 2420
    Linux kernel version and distribution: 3.11.0-12-generic, Kubuntu 13.10
    Output of xinput list:
        Virtual core pointer                      id=2   [master pointer  (3)]
          Virtual core XTEST pointer              id=4   [slave  pointer  (2)]
          PS/2 Generic Mouse                      id=12  [slave  pointer  (2)]
        Virtual core keyboard                     id=3   [master keyboard (2)]
          Virtual core XTEST keyboard             id=5   [slave  keyboard (3)]
          Power Button                            id=6   [slave  keyboard (3)]
          Video Bus                               id=7   [slave  keyboard (3)]
          Power Button                            id=8   [slave  keyboard (3)]
          Sleep Button                            id=9   [slave  keyboard (3)]
          Laptop_Integrated_Webcam_HD             id=10  [slave  keyboard (3)]
          AT Translated Set 2 keyboard            id=11  [slave  keyboard (3)]
          Dell WMI hotkeys                        id=13  [slave  keyboard (3)]
    Output of synclient -l: Couldn't find synaptics properties. No synaptics driver loaded?
    Output of lshw is at http://paste.ubuntu.com/6645687/
    The X server log and dmesg don't have any trace of synaptics. Kindly tell me how to troubleshoot this problem.

  • How to change the theme in Windows 7 with a PowerShell script?

    - by Greg McGuffey
    I would like to have a script that would change the current theme of Windows 7. I found the registry entry where this is stored, but I apparently need to take some further action to get Windows to load the theme. Any ideas? Here is the script that I'm trying to use, but it isn't working (the registry is updated, but the theme is not changed):
        ######################################
        # Change theme by updating registry. #
        ######################################
        # Define argument which defines which theme to apply.
        param (
            [string] $theme = $(Read-Host -prompt "Theme")
        )
        # Define the themes we know about.
        $knownThemes = @{
            "myTheme" = "mytheme.theme";
            "alien"   = "oem.theme"
        }
        # Identify path to user themes.
        $userThemes = "C:\Users\yoda\AppData\Local\Microsoft\Windows\"
        # Get name of theme file, based on theme provided.
        $themeFile = $knownThemes["$theme"]
        # Build path to theme and set registry.
        $newThemePath = "$userThemes$themeFile"
        $regPath = "HKCU:\Software\Microsoft\Windows\CurrentVersion\Themes\"
        Set-ItemProperty -path $regPath -name CurrentTheme -value $newThemePath
        # Update system with this info...this isn't working!
        rundll32.exe user32.dll, UpdatePerUserSystemParameters
    Thanks!
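
    For what it's worth, that rundll32 call is widely reported to be unreliable for theme switches. A commonly suggested workaround - untested here - is to let Explorer apply the .theme file itself; this applies the theme but leaves the Personalization window open, to be closed by hand or by a follow-up script:

        # Ask the shell to open (and thereby apply) the theme file directly.
        Invoke-Item $newThemePath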

  • ASP.NET Session Management - which SQL Server option?

    - by frumious
    We're developing some custom web parts for our WSS 3 intranet and have just run into something we'd like to use ASP.NET sessions for. This isn't currently enabled on the development server. We'd like to use SQL Server as the storage mechanism, because the production environment is a web farm with very simple load balancing. There are three options for setting up SQL Server session storage: tempdb, the default separate DB, or a named DB. Both the tempdb and default-separate-DB options create a new database to store certain information in; the tempdb option stores the actual session info in tempdb, which doesn't survive a reboot, while the default separate DB stores everything in the new database. Since you've got to create the new DB either way, my question is this: why would you ever choose to store the session info in tempdb? The only thing I can think of is if you'd like the ability to wipe the sessions by rebooting the server, but that seems quite apocalyptic!
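
    For reference, all three options are configured with the aspnet_regsql.exe tool that ships with the .NET Framework; the server name below is a placeholder:

        rem -sstype t keeps session data in tempdb, -sstype p persists it in the
        rem ASPState database, and -sstype c uses a custom named DB (add -d <name>).
        aspnet_regsql.exe -S MySqlServer -E -ssadd -sstype p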

  • Rule of thumb for cost vs. savings for code re-use

    - by Styler
    Is it a good rule of thumb to always write code with the intent of re-using it somewhere down the road? Or, depending on the size of the component you are writing, is it better practice to design it for re-use only when it makes sense with regard to the time spent on it? What is a good rule of thumb for spending extra time on analysis and design of project components that have "some probability" of being needed later down the road for other things that may or may not need this part? For example, if I have the need for project X to do things A and B: A definitely needs to be written for re-use, because it just makes sense to do so. B is very project-specific at the moment, and I can hack it all together in a couple days to finish the project on time and give everyone kudos for being a great team, etc. Or we could say, let's spend a whole friggin' 2 weeks figuring out what project Y/Z might need this thing for, and spend a load of extra time on part B because someday we might need to use it on project Y/Z (where the savings will be realized). I'd imagine a perfect-world situation would be a nicely crafted combination of project-specific vs. re-use-architected components, given the project. However some code shops might feel it would be a great idea to write everything with the intention of using it at some point down the road.

  • Ubuntu 12.10: Installing proprietary Nvidia driver causes freeze at boot

    - by Greg
    Ok, so I just installed Ubuntu on my laptop, and I immediately encountered an issue: the HDMI audio output won't work. Yes, I know about the sound settings thing where you have to select the HDMI option, but even when it's selected I get no sound out of the TV I'm hooking it up to. This is a dealbreaker for me, because my laptop speakers are terrible; it's one of the big reasons I use my TV monitor. So I decided to work on solving the problem by upgrading my Nvidia drivers. I switched to one of the proprietary drivers offered in the software updating utility that comes with the OS, the one option that said (tested). Voila, sound over HDMI is now working. Unfortunately, this brings me to my next problem: when I reboot Ubuntu with this or any other proprietary driver installed, it freezes when it tries to load my desktop. As in, I can see my wallpaper, but no icons or options of any kind. The system is totally frozen, and gives me one of those "we've experienced an error, do you want to report it" messages. So there's my bind: I need HDMI audio out, that's a total dealbreaker for me, but installing the drivers that give me that capability crashes the system. Does anyone have any idea what's causing this?

  • Determining the required depth and specifications for a server cabinet

    - by Bingu Bingme
    I'm trying to understand the considerations ("why") that go into determining the specifications ("what") for a rackmount server cabinet, in order to determine what sort of rack I should purchase for my home use. Since this is for home use, I won't be following certain best practices (eg. hot/cold aisle, not even air conditioning) and may be willing to sacrifice in various areas in order to reduce cost and footprint - but please advise if there are safety concerns or other considerations to note. The most basic specs for a server cabinet are the dimensions (external width x external depth x usable height).
    Width: commonly 600mm or 800mm (if the use case requires extra clearance around the sides, such as if there is lots of cabling). In my case and most common cases, I'm going to stick with 600mm.
    Height: Select a sufficiently tall rack to fit my equipment. But how much may I stuff into it? Eg, if there is a 15U rack, can I really populate it with 15U of servers, or should I leave 1U at top and bottom for air circulation?
    Depth: Racks commonly have external depth of 600mm (network equipment), 800mm, 1000mm, or even longer. I'm trying to see how to fit into the 800mm depth. With reference to http://www.server-racks.com/rack-mount-depth.html, I'm hoping to have the front and rear posts mounted ~ 28.5" (72cm) apart, which would leave only 8cm for front space and rear space. How much rear space (from rear posts to back of rack) do I really need? I won't use cable management arms, so can I mount a 72cm depth server, since the power, KVM, and network cables won't take up much depth? My most important equipment are all < 60cm depth (4U chassis) and should comfortably fit within the 800mm cabinet. The rest of the equipment are very old 1U servers that range from 65-72cm depth. I might still want to make further use of them, or I might discard them since they are so old. Even if the 72cm servers cannot be powered on in an 800mm rack, I should be able to use them as 1U shelves. But, what server depth can I expect to be able to operate? Or am I forced to upgrade to 1000mm depth racks in order to use any servers deeper than 60cm?
    With reference to best practices for HP racks, some other specs and installation considerations:
    - There aren't any minimum recommendations for clearance on the sides of the rack.
    - It is recommended to leave 48" front clearance. The 48" front clearance is based on 32" chassis depth, 13" to extend the rack rails and mate the inner/outer rails, and 3" for movement. If I don't use such rails (eg, use shelves instead), it should be sufficient to leave front clearance of chassis depth + 3".
    - It is recommended to leave 30" rear clearance "to provide space for servicing the rack". I'm planning to back the rack into a corner of the room, and wheel it slightly out when I need to access the rear. If the wheeling plan is OK, I still need to know how much rear clearance is required for air circulation and ventilation purposes.
    - Castor wheels and stabilising feet: since I'm backing the rack into a corner of the room, I'll only be able to set the stabilising feet on the front corners. Thoughts on safety?
    The rack that I'm considering has front glass doors with side ventilation slits and fully perforated rear doors. I'm hoping this will be a good balance between temperature and noise (only ventilation slits facing out the front, while the rear is facing the walls). Or is the sound of high-rpm fans going to escape through the front slits anyway and destroy my sanity?
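
    The HP guidance above reduces to a simple rule: front clearance = chassis depth + 3" of working room, plus 13" only if slide rails have to be extended and mated. A throwaway calculator for checking a chassis against that rule (numbers taken straight from the guidance quoted above):

        # Front-clearance rule of thumb from the HP figures quoted above.
        def front_clearance_inches(chassis_depth_in, uses_slide_rails=False):
            clearance = chassis_depth_in + 3       # 3" for movement
            if uses_slide_rails:
                clearance += 13                    # extending and mating the rails
            return clearance

        print(front_clearance_inches(32, True))    # 48 - HP's own example
        print(front_clearance_inches(24))          # ~27" for a 60cm chassis on shelves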

  • How to treat your genius team-mate [closed]

    - by Shiplu
    I am soon going to be on a team where a very talented programmer works. Everyone in the company likes him, as he knows a lot of things and does a lot of programming. The PM and the CEO like him a lot. I am his fan as a programmer. But as a team mate? I always try to avoid him. The reason is that in the very early days of our company, our CEO used to put both of us on the same team, so we worked together, and I had many terrible experiences. Most of the time he is doing others' work: when the team leader breaks up the workload and distributes it, he used to work more than a full workday every day and also do my own work. The result was duplicate code. And he is not doing my work after finishing his own; he is doing it in the middle. How do you treat such team-mates?

  • Apache connection intermittently times out on LAN

    - by jonescb
    I'm using Fedora 12 with Apache 2.2.14, and I was having this error on 2.2.13 as well. Even when I connect to my server over the LAN, Firefox will occasionally time out while connecting. I can't figure out what is causing this. The error log isn't showing anything. I even cleaned out the error log file, so that if something happened, it'd be a little easier to spot. But I'm still getting timeouts, and nothing in the error log. Can anybody help me find what the problem is? Here is my httpd.conf: Pastebin It's the default Fedora configuration; I've only changed the ServerName, if I remember correctly. I'm pretty sure it's not the Timeout setting, because on the LAN it should never time out. I don't believe it's a load issue either; I'm the only one connecting to it. I'm not an Apache expert, so if more information is needed, I'll need some instructions on how to get that data.

  • Internet Explorer 9 automatically "feeling lucky" for Gmail Bing search

    - by Gareth Jones
    When I'm at school using the school computers, I have to use IE9. When I want to access my Gmail, I type "gmail" in the URL bar, and thus IE9 does a Bing search. The page half loads (as in, it loads just about everything but the search results) and then opens my Gmail, kinda like Google's "I'm Feeling Lucky". My question is this: why? IE9 doesn't have the URL of Gmail, as I can watch the Bing search load and then the URL change to Gmail, and it only happens for Gmail; I've tried searching for Google and Facebook in the same way. The computer is running Windows 7 with Windows Aero disabled and limited account privileges. While it's a cool thing, I would like to know what causes it to happen. Thanks

  • WiFi routers and concurrent devices

    - by Joelio
    We have a Linksys WRT54G WiFi router in our office, which was working great when we had 5-6 folks. Now on peak days we have 10-15 people, each with a computer, smartphone, etc., plus an Ooma VOIP device. On average 1-2 times a day I need to go hard-reboot the router, and sometimes the border router (a Cox-supplied device). I assume this is just because the router can't handle this many concurrent users. So my questions are: can these consumer routers handle this kind of load? If not, would adding more devices solve the problem, and how close together can I put 2 routers without having interference problems (our office area is not that big physically)?

  • De-index URL parameters

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have certain parameters appended. I have a website example.com with language translations. There used to be many translations, but I deleted them all so that only English (default) and French options remain. When one selects a language option, a parameter is added to the URL. For example, the home page:
    https://example.com (default)
    https://example.com/main?l=fr_FR (French)
    I added a robots.txt to stop Google from crawling any of the language translations:
        # robots.txt generated at http://www.mcanerin.com
        User-agent: *
        Disallow:
        Disallow: /cgi-bin/
        Disallow: /*?l=
    So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool. It works. But under HTML improvements, the previously crawled language-translation URLs remain indexed. The internet says to add a 404 to the header of the removed URLs so Google knows to de-index them. I checked to see what my CMS would throw up if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs): https://example.com/reports/view/884?l=vi_VN&l=hy_AM This URL should not exist - I removed the language translations. The page loads when it should not! I played around. I typed example.com?whatever123 It seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads - it's a parameter that needs to be de-indexed.
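
    One subtlety worth noting: as long as robots.txt blocks /*?l=, Google is not allowed to re-crawl those URLs, so it may never see the 404/410 that would get them dropped; the robots.txt rule can actually keep stale URLs in the index. A common approach is to remove that Disallow line and instead answer the parameterised URLs with "410 Gone". A sketch for Apache's mod_rewrite, assuming the site runs on Apache and allows .htaccess overrides:

        RewriteEngine On
        # Any request whose query string carries an l= language parameter
        # is answered with 410 Gone, telling crawlers to de-index it.
        RewriteCond %{QUERY_STRING} (^|&)l= [NC]
        RewriteRule ^ - [G]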

  • Installing mod_xsendfile on MAMP

    - by mail4alberto
    Hello, I'm having trouble installing mod_xsendfile on MAMP. I've used some sources to try to help me install it:
    http://iprog.com/posting/2008/04/compiling_mod_xsendfile_for_mac_os_x
    http://groups.google.com/group/phusion-passenger/browse_thread/thread/e6dac9d5ea0de9c1
    I ended up installing apache20 via MacPorts and used the apxs command to create the module, and then copied it into MAMP's modules folder. The module at least seems to load, but then I get this error in my Apache logs:
        [Thu May 27 19:08:28 2010] [notice] child pid 68606 exit signal Bus error (10)
        [Thu May 27 19:08:41 2010] [notice] child pid 68607 exit signal Bus error (10)
    Can anyone help me out? :S
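
    A child dying with a bus error on startup usually points to a binary mismatch: the module was compiled against the MacPorts apache20 headers but loaded by MAMP's bundled Apache. Rebuilding with the apxs that matches the Apache actually loading the module should fix it; the path below is an assumption (older MAMP releases did not ship apxs at all, which is what pushes people to MacPorts in the first place):

        # Compile, install and activate against MAMP's own Apache (path assumed).
        /Applications/MAMP/Library/bin/apxs -cia mod_xsendfile.c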

  • Simplifying Human Capital Management with Mobile Applications

    - by HCM-Oracle
    By Aaron Green
    If you're starting to think 'mobility' is a recurring theme in your reading, you'd be right. For those who haven't started to build organisational capabilities to leverage it, it's fair to say you're late to the party. The good news: better late than never. Research firm eMarketer says the worldwide smartphone audience will total 1.75 billion this year, while communications technology and services provider Ericsson suggests smartphones will triple to 5.6 billion globally by 2019. It should be no surprise: smartphone adoption is reaching the farthest corners of the globe, and the subsequent impact of enterprise applications enabled by these devices is driving business performance improvement and will continue to do so.
    Companies using advanced workforce analytics can add significantly to the bottom line, while impacting customer satisfaction, quality and productivity. It's a statement that makes most business leaders sit forward in their chairs. Achieving these three standards is like sipping The Golden Elixir for the business world. No-one would argue their importance. So what are 'advanced workforce analytics'? Simply, they're unprecedented access to workforce trends and performance markers. Many are made possible by a mobile world and the enterprise applications that come with it on smart devices. Some refer to it as 'the consumerisation of IT'. As this phenomenon has matured and become more widely appreciated, it has impacted the spectrum of functional units within an enterprise differently, but powerfully. Whether it's sales, HR, marketing, IT, or operations, all have benefited from a more mobile approach. It has been the catalyst for improvement in, and management of, the employee experience, the net result of which is happier customers.
    The obvious benefits, but the lesser realised impact
    Most people understand that mobility allows for greater efficiency and productivity, collaboration and flexibility, but how that translates into business outcomes within the various functional groups is lesser known. In actuality mobility has helped galvanise partnerships between cross-functional groups within the enterprise. Where in some quarters it was once feared mobility could fragment a workforce, its rallying cry of support is coming from what you might describe as an unlikely source - HR. As the bedrock of an enterprise, it is conceivable HR might contemplate the possible negative impact of a mobile workforce that no longer sits in an office, at the same desks every day. After all, who would know what they were doing or saying? How would they collaborate? It's reasonable to see why HR might have a legitimate claim to try and retain as much 'perceived control' as possible. The reality, however, is that mobility has emancipated human capital and its management. Mobility and enterprise applications are expediting decision making. Google calls it the Zero Moment of Truth, or ZMOT. It enables smoother operation and can contribute to faster growth. From a collaborative perspective, with the growing use of enterprise social media, which in many cases is being driven by HR, workforce planning and the tangible impact of change is much easier to map. This in turn provides a platform from which individuals and teams can thrive. With more agility and ability to anticipate, staff satisfaction and retention is higher, and real-time feedback constant.
    The management team can save time, energy and costs with more accurate data, which is then intelligently applied across the workforce to truly engage with staff, customers and partners. From a human capital management (HCM) perspective, mobility can help you close the loop on true talent management. It can enhance what managers can offer and what employees can provide in return. It can create nested relationships and powerful partnerships.
    IT and HR - partners and stewards of mobility
    One effect of enterprise mobility is an evolution in the nature of the relationship between HR and IT, from one of service provision to partnership. The reason for the dynamic shift is largely due to the 'bring your own device' (BYOD) movement, which is transitioning to a 'bring your own application' (BYOA) scenario. As enterprise technology has in some ways reverse-engineered its solutions to help manage this situation, the partnership between IT (the functional owner) and HR (the strategic enabler) is deeply entrenched. And it has to be. The CIO and the HR leader are faced with compliance and regulatory issues and concerns around information security and personal privacy on a daily basis, complicated by global reach and varied domestic legislation. There are tens of thousands of new mobile apps entering the market each month and, unlike many consumer applications which get downloaded but are often never opened again after initial perusal, enterprise applications are being relied upon by functional groups, not least by HR to enhance people management. It requires a systematic approach across all applications in use within the enterprise in order to ensure they're used to best effect.
    No turning back, and no desire to
    With real-time analytics on performance and the ability for immediate feedback, there is no turning back for managers. In my experience with Oracle, our customers' operational efficiency is at record levels. It's clear that, as a result of the combination of individual KPIs and organisational goals, CIOs have been able to give HR leaders the ability to build predictive models that feed into an enterprise organisation's evolving strategy. It also helps them ensure regulatory compliance much more easily. Once an arduous task, with mobile-enabled automation and quality data, compliance is simpler. Their world has changed for the better. For the CIO, mobility also assists them to optimise performance. While it doesn't come without challenges, mobile-enabled applications and the native experience users have with them mean employees don't need high-level technical expertise to be trained on them. It reduces the training and engagement required from the IT team so they can focus on other things that deliver value to the bottom line, all the while lowering the cost of assets and related maintenance work by simplifying processes.
    Rewards of a mobile enterprise outweigh risks
    With mobile tools allowing us to increasingly integrate our personal and professional lives, terms like "office hours" are becoming irrelevant, so work/life balance is a cultural must. Enterprises are expected to offer tools that enable workers to access information from anywhere, at any time, from any device. Employees want simplicity and convenience, but it doesn't stop at private enterprise. This is a societal shift. Governments, which traditionally have been known to be slower to adopt newer technology, are also offering support for local businesses to go mobile. Several state government websites have advice on how to create mobile apps and more. And as recently as last week, the Victorian Minister for Technology Gordon Rich-Phillips unveiled his state government's ICT roadmap for the next two years, which details an increased use of the public cloud, as well as mobile communications, and improved access to online data-sets. Tech giants are investing significantly in solutions designed to simplify mobile deployment and enablement. The mobility trend is creating a wave of change in the industry and driving transformation in the enterprise. If you're not on that wave, the business risk continues to rise as your competitiveness drops.
    Aaron is the Vice President of HCM Strategy at Oracle Corporation, where he is responsible for researching and identifying emerging trends in the practice of Human Resources and works to deliver industry-leading technology solutions. Other responsibilities include ownership of Oracle's innovative HCM solutions across JAPAC and enabling organisations to transform and modernise their workforce tools. Follow him on Twitter @aaronjgreen

  • APC module causing strange error

    - by clifgriffin
    When I run php -v I get:
        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/apc.so' - /usr/lib64/php/modules/apc.so: undefined symbol: php_pcre_exec in Unknown on line 0
    This isn't my first rodeo; I've set up APC multiple times. This is a MediaTemple Dedicated Virtual 4.0 with Plesk 11. Plesk 11 is the only thing essentially different from the other servers I've set this up on. I've verified that pcre-devel is installed. I've compiled APC from source as well as used pecl to install it. No difference. I also tried downgrading to APC 3.0.19, with no love.

  • IIS not listening over external network, all other traffic working

    - by Beuy
    Hello there, I have a very odd situation. I have a server (let's call it X) running 2008 R2 with two NICs in it: one is connected to the work domain and has a subnet of 192.168.10.0/24; the other is connected to an ADSL connection and has a subnet of 192.168.1.0/24. The server has IIS installed. On the ADSL connection I have set up dynamic DNS and port forwarding to allow external HTTP, HTTPS, FTP and RDP connections. FTP and RDP are working fine; however, neither HTTP nor HTTPS is working at all. I can browse the websites by going to localhost on the machine, but the HTTP and HTTPS ports appear as "Filtered" when I try to scan them using PortQueryUI, and browsers respond with a "Server took too long to load or was not responding" error. This was working fine just a few days ago. Windows Firewall is disabled, and I don't have any other software firewall on it. And I'm really lost. Any help would be great.

  • Unable to use Maya animation with scripts when imported to Unity

    - by keshk
    I am testing importing Maya animation into Unity. I set up a simple cylinder with 2 bones and an IK handle, and made a simple animation where the cylinder bends and goes back to a straight position over 24 frames. Following that, I selected everything and baked all the bones, the IK (selecting all the animation in the Graph Editor), and even the cylinder. I saved the scene and then selected all and exported as FBX with animation and bake checked. When I imported it into Unity, I was able to see the animation in the preview. When I load the model into the scene and play (after assigning the controller), I can see the animation too. But now when I try to script it and control the animation, nothing happens. Even as a test, I tried the following in the Update method:
        if(animation.isPlaying)
            Debug.Log("Animation Works");
        else
            Debug.Log("Animation not working");
    Neither message is ever logged. My animation is called "bend", so just to try, I did the following, and nothing happens:
        animation.Play("bend");
    Can you please advise based on my steps: am I missing something? Do I need to add the controller, or is that an unnecessary step? Did I screw up on the Maya part or the Unity part? Thanks for the help.
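
    One likely culprit: animation.Play and animation.isPlaying belong to the legacy Animation component, so the imported clip must be marked Legacy in the FBX import settings (Rig tab, Animation Type: Legacy), or the Animation component will see no clips at all. A minimal sanity check, assuming the clip is named "bend" as above:

        using UnityEngine;

        public class BendCheck : MonoBehaviour {
            void Start() {
                // 'animation' is the legacy Animation component (Unity 4.x-era API).
                if (animation == null)
                    Debug.LogError("No Animation (legacy) component on this object");
                else if (animation.GetClip("bend") == null)
                    Debug.LogError("No clip named 'bend' - check the FBX Rig settings");
                else
                    animation.Play("bend");
            }
        }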

  • Slow Starting DHCP Client Service - HP Thin Clients

    - by Ryan
    We have recently begun adding XPe thin clients to our domain in preparation for a new Citrix environment. One thing that has been picked up on in testing is that they appear slow to boot. The issue manifests itself as the classic "Applying Computer Settings..." screen we are all used to seeing. After digging into the issue, it appears the DHCP Client service is taking some time to load on boot; this varies, but I would estimate it can take around 1 minute in some cases. I've eliminated the classic issues: DHCP is responding correctly and in quick time, DNS is not the cause, and GPOs are applying promptly. A simple workaround is to assign the client a static IP, which works great, so the TCP/IP services are obviously firing up quickly - just not the DHCP Client. Does anyone have any ideas on how I may be able to improve the service start time? Keen to find a better solution before I get my arm twisted into setting up 250 thin clients with static addressing!

  • Issues with Cinnamon?

    - by Corrodie
    I just recently switched my system over to Ubuntu 12.10 and decided on Cinnamon as my environment - it all worked fine, at first. But I was poorly educated and started using Compiz and Emerald along with it, setting both as replacements in the startup processes. I now know that's a big, big mistake. Now when loading Cinnamon, I am greeted by my background image, and only that. My only option seems to be to open a terminal. I was advised to attempt muffin --replace and mutter --replace, neither to any avail; the terminal closes, and I cannot open another one unless I completely reload. I went back to Unity, purged and autoremoved Cinnamon, Emerald, and compizconfig, and attempted to reinstall Cinnamon, thinking that would solve the problem - no, it came back just as broken as before. So I reinstalled Ubuntu, then Cinnamon - still broken. I'm assuming I must find a way to remove the replace commands, but as I have no menu, I'm not positive I can do that. Is there any way I can access the startup processes via terminal? I'd think, though, that if I completely removed Cinnamon, all configurations would be gone too, so it's just not making much sense. Is there some kind of reset I could possibly do? I've been browsing forums and questions here, all leading to things I'd already done, so it can't hurt to ask for myself. I apologize if you would rather I have posted this over at Mint. Next time, I will definitely check compatibility instead of assuming something just has to work. Any help is greatly appreciated, thanks!
    EDIT: It seems that although it didn't allow me to do it before, I was now able to access the settings and startup processes for Cinnamon via Unity, and after quickly removing the aforementioned processes I'm up and running again.

  • How does one check whether the OS X "disabled" flag for launchd services is set?

    - by Charles Duffy
    According to the man page for launchctl (emphasis mine):    -w   Overrides the Disabled key and sets it to false. In previous versions, this option would modify the configuration file. Now the state of the Disabled key is stored elsewhere on-disk. Because the current state of the disabled flag is no longer set in the .plist file itself, checking for the Disabled key is no longer an accurate way to tell if the service will run on next boot. Where is this "elsewhere on-disk"? More to the point (and more importantly), how does one check whether this flag is set? Also, is it possible to set a service to run on next boot without forcing it to start immediately (as with launchctl load -w /Library/LaunchDaemons/my-service.plist)?
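
    On the OS X releases of that era (10.6 through 10.9), the "elsewhere on-disk" is an overrides plist, so the disabled state can be inspected directly; the paths below are the commonly documented ones and are version-dependent:

        # System-wide daemons:
        sudo defaults read /var/db/launchd.db/com.apple.launchd/overrides.plist
        # Per-user agents (substitute the numeric UID):
        defaults read /var/db/launchd.db/com.apple.launchd.peruser.501/overrides.plist

    A job label listed there with Disabled = 1 will not run at boot; a label that is absent falls back to the Disabled key in its own .plist. As for enabling for the next boot without starting now: launchctl load honours RunAtLoad immediately, so the workaround people report is editing the overrides plist directly (at your own risk) and letting the next boot pick it up.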

  • How do you manage large web farms?

    - by Andrew Katz
    I have a quickly growing web farm running IIS 7 (30+ servers). All servers are identical copies of each other, and all servers are physical. We update the software about once a month, and in the current process we follow these steps:
    1. Disable the server in the pool on the F5 load balancer.
    2. Disable HTTP keep-alives in IIS so connections drop quickly.
    3. Change the default directory of the website to the new folder containing the new binaries.
    4. Test the server.
    5. Re-enable HTTP keep-alives.
    6. Re-enable the server in the F5 pool.
    7. Move on to server 2.
    Microsoft used to have Application Center, which was abandoned a while ago. They have made a second attempt with the Web Farm Framework, but this adds as much QA time testing the release package as it saves in deployment. Has anyone seen a commercial off-the-shelf application that is tailored for managing and deploying to large web farms? Thanks!
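
    Short of a packaged product, the steps above are mechanical enough to script per server. A rough PowerShell skeleton - every helper below is a stub with an invented name, to be replaced with your F5 API calls (e.g. iControl) and IIS administration commands:

        # All helpers are placeholders; replace the bodies with real F5/IIS calls.
        function Disable-F5PoolMember($s)  { Write-Host "LB: disable $s" }
        function Enable-F5PoolMember($s)   { Write-Host "LB: enable $s" }
        function Disable-KeepAlives($s)    { Write-Host "IIS: keep-alives off on $s" }
        function Enable-KeepAlives($s)     { Write-Host "IIS: keep-alives on on $s" }
        function Set-SiteHomeDirectory($s, $path) { Write-Host "IIS: $s -> $path" }
        function Test-Server($s)           { $true }   # smoke test goes here

        $servers = "web01", "web02"   # ... or Get-Content .\servers.txt
        foreach ($server in $servers) {
            Disable-F5PoolMember $server
            Disable-KeepAlives $server
            Set-SiteHomeDirectory $server "D:\releases\current"
            if (-not (Test-Server $server)) { throw "Smoke test failed on $server" }
            Enable-KeepAlives $server
            Enable-F5PoolMember $server
        }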

  • How to automatically resume php-fpm?

    - by alfish
    I am using nginx + php-fpm on Debian Squeeze for a busy server and have had great difficulty dealing with the maximum number of connections being reached. The problem is that PHP processes sometimes just die randomly under high load and leave the server with no PHP process, and then I need to manually restart the php5-fpm service to bring the server back to life. I am wondering how to prevent this from happening, or at least treat the symptoms by restarting php5-fpm automatically whenever there is no PHP process left to listen for incoming requests. My relevant configs are:
        pm = dynamic
        pm.max_children = 1400
        pm.start_servers = 10
        pm.max_spare_servers = 20
        pm.process_idle_timeout = 1s; # not sure it will be useful when pm = dynamic
        pm.max_requests = 100000
        request_terminate_timeout = 30
    I appreciate your suggestions for coping with this nasty problem.
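
    Two things may help before reaching for an external watchdog. First, php-fpm's global (not per-pool) options emergency_restart_threshold and emergency_restart_interval tell the master to restart itself when children die abnormally in a burst, which is exactly this failure mode. Failing that, a blunt cron watchdog - process and init-script names assumed to match a stock Debian php5-fpm install:

        # /etc/cron.d/php-fpm-watchdog: restart php5-fpm if no worker is left.
        * * * * * root pgrep -x php5-fpm > /dev/null || /etc/init.d/php5-fpm restart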

  • What are the requirements to test a website using jQuery .get()? [migrated]

    - by Frankie
    I am working on a simple website. It has to search quite a few text files in different sub-folders. The rest of the page uses jQuery, so I would like to use it for this also. The function I am looking at is .get() for downloading the files. So my main question is: can I test this on my local computer (Ubuntu Linux), or do I have to have it uploaded to a server? Also, if there's a better way to go about this, that would be nice to know. However, I'm more worried about getting it working. Thanks, Frankie
    PS: Here's the JS/jQuery code for downloading the files to an array:
        g_lists = new Array();
        $(":checkbox").each(function(i){
            if ($(this).attr("name") != "0") {
                var path = "../" + $(this).attr("name") + ".txt";
                $("#bot").append("<br />" + path); // debug
                $.get(path, function(data){
                    g_lists[i] = data;
                    $("#bot").html(data);
                });
            } else {
                g_lists[i] = "";
            }
        });
    Edit: Just a note about the path variable. I think it's correct, but I'm not 100% sure; I'm new to web development. Here are some examples it produces, and the directory tree of the site. Maybe it will help; can't hurt.
        .
        +-- include
        |   +-- jquery.js
        |   +-- load.js
        +-- index.xhtml
        +-- style.css
        +-- txt
            +-- Scripting_Tools
                +-- Editors.txt
                +-- Other.txt
    Examples of path:
        ../txt/Scripting_Tools/Editors.txt
        ../txt/Scripting_Tools/Other.txt
    Well, I'm a new user, so I can't "answer" my own question; I'll just post it here. After asking for help on an IRC chat channel specific to jQuery, I was told I could use this on a local host. To do this I installed the Apache web server and copied my site into its directory. More information on setting it up can be found here: http://www.howtoforge.com/ubuntu_debian_lamp_server Then to run the site I navigated my browser to "localhost", and everything works.
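
    As a lighter-weight alternative to installing Apache, anything that serves the folder over HTTP sidesteps the file:// restrictions that break $.get - for example Python's built-in server, run from the site root:

        cd /path/to/site
        python -m SimpleHTTPServer 8000   # Python 2; on Python 3: python3 -m http.server 8000
        # then browse to http://localhost:8000/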
