Search Results

Search found 20785 results on 832 pages for 'idea'.


  • Firefox: Clear History Is SUPER EFFECTIVE?

    - by acidzombie24
    I'm seeing a performance problem on certain sites (like Gmail) that clearing the history should not affect. Is this a website problem or a Firefox problem, and what can I do to fix it without clearing my history? As a web developer I'm also interested in how to make this happen (or not happen). I'm using Firefox 8, and I confirmed the problem by copying my profile to Firefox 11 (portable). To reproduce: go to gmail.com and sign in, with your task manager open. Once you click sign in or hit enter, Gmail will bring up your emails. Keep your eye on the CPU usage. I checked, and right now on this machine it uses all my CPU for 22 seconds! Yes, 22 seconds. Once I cleared my "Browsing & Download History", it's under 6 seconds. I have no idea why the size of the history and the CPU usage when loading Gmail are correlated. I have Firefox set up so it never clears the history, but 22 seconds is a disaster. Can someone explain why this is happening, or suggest a fix that isn't clearing my history? I tried visiting a few websites, and only Gmail eats up that much CPU; most websites take under 5 seconds of max CPU. So maybe this is a Gmail problem? Or a Firefox problem that Gmail happens to hit? I still don't understand why it happens. Edit: I forgot to mention that places.sqlite is 90 MB. I don't think that matters; I have a 400 MB SQLite file that is pretty much two large tables, and it has no performance issues.
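    If you want to see what is actually in the history before experimenting, you can query a copy of places.sqlite with the sqlite3 command-line tool (a sketch; the profile path is a placeholder and varies per machine, and you should work on a copy, since Firefox locks the live database):

      # Copy the profile database first (profile folder name varies)
      cp ~/.mozilla/firefox/xxxxxxxx.default/places.sqlite /tmp/places.sqlite
      # Number of distinct pages in history
      sqlite3 /tmp/places.sqlite "SELECT COUNT(*) FROM moz_places;"
      # Number of individual recorded visits
      sqlite3 /tmp/places.sqlite "SELECT COUNT(*) FROM moz_historyvisits;"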

    Read the article

  • GroupWise 6.5 IMAP Service Shuts Down-Needs Constant Restarting

    - by Jeffrey Majette
    I have an issue with my aging (and soon to be replaced) GroupWise 6.5 system. The IMAP service in the GWIA tends to shut down periodically, requiring that I restart the service. When IMAP is down, my end users who use BlackBerry BIS cannot send or receive email on their devices. It's a royal pain to have to restart the IMAP service several times a day, and the GWIA logs do not seem to indicate a problem. I thought I was on to something yesterday: I discovered that the gwia.cfg file located in SYS:SYSTEM was actually from GW 6.0, while the gwia.cfg in DOMAIN\WPGATE\GWIA was titled for GW 6.5 when I opened it for editing. I changed the generic placeholder info in the file to match my environment and restarted the GWIA. However, it made no difference; the IMAP service still shuts down about 30 minutes after the last restart. I know this version of GW is antiquated, and we are in the process of migrating to Google Apps. But if anyone has an idea that could fix this issue, I and my end user community would be forever grateful!

    Read the article

  • Lenovo X1 Carbon Windows 8 frequent WiFi disconnect issue

    - by hIpPy
    I'm having frequent WiFi disconnects on my Lenovo X1 Carbon Touch laptop. I got this laptop 2 months ago and it has been happening ever since, about 3-5 times a day and 10 times a week on average. I have Frontier FiOS internet. Whether the power is connected or not does not matter. Once I get disconnected, I try the following to reconnect, in this order: turn Airplane mode on and off; run the Windows network troubleshooter; restart the laptop. I'd find that the WiFi adapter would get disabled; sometimes the Windows troubleshooter would help, but more often than not I'd end up restarting the laptop. A week ago I upgraded my WiFi network adapter drivers (now Intel, version 15.5.6.48, 10/3/2012). I still get disconnected frequently, but turning Airplane mode on and off gets me connected again, so the driver update did help. Windows 8 is up to date. None of the other devices (Nexus and iPhone phones, Nexus 7 and iPad tablets) have WiFi issues when my laptop gets disconnected.

    Config:

      Intel(R) Centrino(R) Advanced-N 6205 (WiFi network adapter)
      Microsoft Windows 8 Pro
      Microsoft Windows [Version 6.2.9200], x64-based PC
      LENOVO System Model: 3443CTO (X1 Carbon Touch)

    I recently noticed this log message in Event Viewer when I got disconnected: "Your computer was not assigned an address from the network (by the DHCP Server) for the Network Card with network address 0x[XXXXXXXXXXXX]. The following error occurred: 0x79. Your computer will continue to try and obtain an address on its own from the network address (DHCP) server." Any idea?
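    As a quicker workaround than Airplane mode or a reboot, you can bounce just the wireless adapter from an elevated command prompt (a sketch; "Wi-Fi" is the usual Windows 8 interface name, but check yours first):

      rem List interface names to confirm what the adapter is called
      netsh interface show interface
      rem Disable and re-enable the wireless adapter
      netsh interface set interface "Wi-Fi" admin=disable
      netsh interface set interface "Wi-Fi" admin=enable
      rem Force a fresh DHCP lease, since the 0x79 error is DHCP-related
      ipconfig /release && ipconfig /renew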

    Read the article

  • Playback random section from multiple videos changing every 5 minutes

    - by brian
    I'm setting up the A/V for a holiday party this weekend and have an idea that I'm not sure how to implement. I have a bunch of Christmas movies (Christmas Vacation, A Christmas Story, It's a Wonderful Life, etc.) that I'd like to have running continuously at the party. The setup is a MacBook connected to a projector. I want this to be a background ambiance thing, so it's going to be running silently. Since all of the movies are in the 1+ hour range, instead of just playing them through from beginning to end, I'd like to show small samples randomly and continuously. Ideally, every five minutes it would choose a random 5-minute sample from a random movie in the playlist/directory and display it. Once that clip is over, it should choose another clip and do some sort of fade/wipe to the chosen clip. I'm not sure where to start on this, beginning with which player I should use. Can VLC do something like this? MPlayer? If there's a player with an extensive scripting language (support for rand(), video length discovery, random video access), I can probably RTFM and get it working; it's just that I don't have time to backtrack from dead ends, so I'd like to start off on the right track. Any suggestions?
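    One low-tech way to sketch this is a shell loop rather than in-player scripting: pick a random file, ask ffprobe for its duration, pick a random start offset, and play a fixed-length clip. This assumes mpv and FFmpeg are installed (both available via Homebrew on a Mac, along with GNU coreutils for sort -R), and it does a hard cut between clips rather than a fade/wipe:

      #!/usr/bin/env bash
      # Endlessly play random 5-minute clips from a directory of movies.
      DIR="$HOME/Movies/christmas"   # adjust to your movie folder
      CLIP=300                       # clip length in seconds

      while true; do
          # Pick one file at random (assumes GNU sort for -R)
          file=$(ls "$DIR"/* | sort -R | head -n 1)
          # Total duration in whole seconds, as reported by ffprobe
          dur=$(ffprobe -v error -show_entries format=duration \
                -of default=noprint_wrappers=1:nokey=1 "$file" | cut -d. -f1)
          # Random start point that leaves room for a full clip
          start=$(( RANDOM % (dur - CLIP) ))
          # Play silently, fullscreen, for CLIP seconds
          mpv --really-quiet --no-audio --fs --start="$start" --length="$CLIP" "$file"
      done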

    Read the article

  • What user information is exposed via a browser?

    - by ipso
    Is there a function or website that can collect and display ALL of the user information that can be obtained via a browser? Background: this of course does not account for the significant cross-referencing abilities of large corporations to collate multiple sources and signals from users across various properties, but it's a first step. Ghostery is a great idea: it shows people all of the surreptitious scripts that run on any given website. But what information is available, what is the total set of values stored, that those scripts can collect from? If you log in to a search engine and stay logged in but leave their tab, is that company still collecting your page views and activity from other tabs? Can past or future inputs to pages be captured, say comments on another website? What types of activities are stored as variables in the browser that can be collected? This is surely a highly complex question, given the countless user scenarios, but my whole point is to cut through all that and just show the total set of data available at any given point in time. Then you can A/B test and see what is available in a fresh session with one tab open vs. the same webpage with 12 tabs open and a full day of history to boot. (Latest Firefox & Chrome, on Win7, Win8 or Mint 13, although I'd like to think that won't make too much of a difference. Make assumptions. Simple is better.)

    Read the article

  • Not able to connect to ports other than 22 - OpenVPN

    - by t8h7gu
    I have an OpenVPN network with 5 clients:

    - A computer running Arch Linux, which hosts the OpenVPN server. It also hosts a CentOS virtual machine that is connected to the OpenVPN subnet.
    - A Windows 8 machine, which hosts a CentOS virtual machine; both are connected to OpenVPN.
    - A CentOS virtual machine hosted by an Ubuntu 14 computer (the Ubuntu host itself is not connected).

    All physical computers are on different networks. The problem is that when I use nmap to scan the Windows machine and its guest virtual machine, it says the host seems down. When I force nmap to scan a specific port, it shows a filtered state:

      nmap -Pn -p 50010 n3

      Starting Nmap 6.46 ( http://nmap.org ) at 2014-06-07 19:49 CEST
      Nmap scan report for n3 (10.8.0.3)
      Host is up (0.11s latency).
      rDNS record for 10.8.0.3: node3.com
      PORT      STATE    SERVICE
      50010/tcp filtered unknown

    Telnet also cannot connect to this port:

      telnet n3 50010
      Trying 10.8.0.3...
      telnet: Unable to connect to remote host: No route to host

    But ss on that host shows the port in the proper state:

      ss -anp | grep 50010
      LISTEN 0 50 10.8.0.3:50010 *:* users:(("java",12310,271))

    What might be the reason for this, and how do I fix it?

    EDIT: I've found that I am able to connect via telnet to the SSH port:

      telnet n3 22
      Trying 10.8.0.3...
      Connected to n3.
      Escape character is '^]'.
      SSH-2.0-OpenSSH_5.3

    So it seems it's not a problem with the Windows firewall, but I have no idea what it might be. Also, the nmap result for the first thousand ports:

      nmap -Pn -p 1-1000 n3

      Starting Nmap 6.46 ( http://nmap.org ) at 2014-06-07 20:08 CEST
      Nmap scan report for n3 (10.8.0.3)
      Host is up (0.49s latency).
      rDNS record for 10.8.0.3: node3.com
      Not shown: 999 filtered ports
      PORT   STATE SERVICE
      22/tcp open  ssh

      Nmap done: 1 IP address (1 host up) scanned in 77.87 seconds
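    "No route to host" plus "999 filtered ports" with only 22 open is the classic signature of the CentOS guest's own firewall: the stock CentOS 6 iptables ruleset accepts SSH and rejects everything else with icmp-host-prohibited. A sketch of checking and opening the port on the guest (assumes iptables rather than firewalld):

      # On the CentOS guest (n3): see what the INPUT chain currently accepts
      iptables -L INPUT -n --line-numbers
      # Insert an ACCEPT for the service port ahead of the final REJECT rule
      iptables -I INPUT -p tcp --dport 50010 -j ACCEPT
      # Make it survive a reboot
      service iptables save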

    Read the article

  • What are the replacement options for an IDE HD in a DOS-based system?

    - by dummzeuch
    I have a few "embedded" systems running MS-DOS 6.2 which boot from, and store data to, IDE hard disks. Since these drives are nearing their end of life, the question arises how we can replace them. The requirements:

    - DOS must be able to install on and boot from these drives.
    - They must be able to sustain heavy (mostly write) access.
    - If possible, they should survive moderate vibration (not too demanding, since the current HDs have survived several years of it).

    I have considered the following options so far:

    - Other IDE hard drives: unfortunately, modern IDE drives are too large, so DOS cannot boot from them even if I create small partitions. Older IDE drives are just that: old, so they are probably not the most reliable ones any more.
    - SSDs: there are a few SSDs with an IDE interface available. I have not yet tried them. Does anybody have any experience with them? They look like the ideal replacement, provided that DOS can boot from them and that writing speed does not deteriorate too much (the old HDs are no race cars either).
    - CompactFlash: there are adapters for using CF with IDE controllers, and they work fine. DOS can boot from them, and they have no problems at all with vibration. What I am not sure about is their durability: DOS uses FAT, so the same few sectors are written every time the medium is written to.
    - IDE-to-SATA converters: I have no idea whether they are any good. Has anybody tried them? It might be an option to use one of these to connect a SATA SSD to the system.

    Are there any alternatives that I have missed? (We are working on replacing these systems, but it will still take a few years.)

    Read the article

  • Rescue a system running TFS that BSODs, into VMware ESXi

    - by 3molo
    Hi! After moving to new facilities, one of our old Dell servers running Windows Server 2003 R2 on PowerEdge 2650 hardware BSODs with 0x8e. The server runs Team Foundation Server, so we have a few guys dependent on it. No one here knows TFS, so we have no idea how difficult it would be to set up from scratch. We have the MSSQL database(s) backed up, a recent and fresh copy. I tried removing and refitting the memory modules, with no success. The system boots into safe mode but hangs occasionally. I booted a Linux live CD and did a dd of both C: and D:, so I have all the data in compressed images on a VMware machine. For the guest, I created a 38 GB (actually it became 40 GB) partition to act as C:, and booted a live CD. I then uncompressed the disk image of C: and dd'd it to the new C: using 'gunzip -dc c.img.gz | dd of=/dev/sda1 bs=1M'. The operation ran for about 1000 seconds and completed successfully. I assumed it would at least try to boot Windows (most likely BSODing due to not having the correct drivers), but the VMware ESXi guest does not seem to recognize it as a bootable disk. We don't have the VMware enterprise license, so cold cloning with VMware Converter is not an option. Did I do something wrong in my dd's etc. with the images, or why would it not even try to boot? Am I wasting my time? What other approach is there? I will continue trying to remove services and drivers to make the physical machine at least work reasonably well in safe mode. What do you suggest?

    1. Continue to get the dd'd images onto the virtual disk and get it to boot.
    2. Install a new Windows server, install Team Foundation Server and restore from backup.
    3. Focus on the old problematic hardware.

    Any help appreciated.
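    One likely reason nothing even tries to boot: the image was taken from /dev/sda1, a partition, so restoring it to /dev/sda1 of a brand-new virtual disk leaves that disk without an MBR or partition table, and the BIOS has nothing to boot. A sketch of imaging the whole disk instead (same tools, just the raw device):

      # On the physical machine's live CD: whole disk, boot sector included
      dd if=/dev/sda bs=1M | gzip -c > /mnt/backup/disk.img.gz

      # On the guest's live CD: restore to the whole virtual disk
      gunzip -dc disk.img.gz | dd of=/dev/sda bs=1M

    Even then, expect a 0x7B (inaccessible boot device) blue screen until the guest is given a storage controller that Windows 2003 has a driver for, so this is a sketch of the next step, not a guaranteed fix.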

    Read the article

  • pdns-recursor allocates resources to non-existing queries

    - by azzid
    I've got a lab server running pdns-recursor. I set it up to experiment with rate limiting, so it has been resolving requests openly from the whole internet for weeks. My idea was that sooner or later it would get abused, giving me a real use case to experiment with. To keep track of the usage, I set up Nagios to monitor the number of concurrent queries to the server. Today I got notice from Nagios that my specified limit had been reached. I logged in to start trimming away the malicious queries I was expecting; however, when I started looking, I couldn't see the expected traffic. What I found is that even though I have over 20 concurrent queries registered by the server, I see no requests in the logs. The following commands describe the situation well:

      $ sudo rec_control get concurrent-queries; sudo rec_control top-remotes
      22
      Over last 0 queries:

    How can there be 22 concurrent queries when the server has 0 queries registered?

    EDIT: Figured it out! To get top-remotes working I needed to set:

      # remotes-ringbuffer-entries: maximum number of packets to store statistics for
      remotes-ringbuffer-entries=100000

    It defaults to 0, storing no information on which to base top-remotes statistics.

    Read the article

  • Local user vs. domain user? What is the right way here?

    - by ebeeb
    I'm a software developer in a company with 6 employees. Everyone has a machine to him- or herself, so none of the machines is shared. I'm currently setting up my machine with Windows 8 and was experimenting a bit with domain and local user accounts. Please correct me if I'm wrong, but I think the idea behind it is that domain users generally should not be able to modify the configuration of a machine (like installing software), since they are able to log in on every single machine in the domain, while the local user (usually just one local administrator per machine) is the one who takes care of the machine's configuration. But in my case the login to the domain is just for being able to access directories and servers in the domain (I don't really know the details; all I know is that logging in to the domain user account is necessary). So overall I've got a local admin account and a domain account used on my machine. While working, I'm logged in to my domain user account. But it annoys me that I always need to enter the credentials of my local user account when I'm about to update or install something, which happens quite often as a software developer. I fixed this by adding the domain account to the user accounts in my control panel and putting it into the Administrators group. The only thing I wanted to know: is there something REALLY bad about doing this? Or is there maybe a more common way to be able to act like a local admin while logged in as a domain user? PS: I'm sorry about the tags, but I don't know the proper ones. I'd be glad if some of the superuser experts could fix this :-)
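    For reference, what the control panel did can also be done from an elevated command prompt, and it is a common setup for developer machines (a sketch; MYDOMAIN\myuser is a placeholder for your own domain account):

      net localgroup Administrators MYDOMAIN\myuser /add
      rem Verify the membership afterwards
      net localgroup Administrators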

    Read the article

  • SSD not detected on boot-up running Windows 7, with blank HDD installed

    - by Matt. G
    I recently built a PC for a friend. The original build included a 60 GB primary SSD and a secondary 1 TB HDD, and I kept getting blue screens of death and kernel power errors; investigation revealed that a faulty power cable and insufficient thermal paste under the included heatsink were the cause. That resolved the problem, but after 3 months I received a phone call saying that the PC was failing at the point of loading the operating system, with an NTLDR error. I had an idea of the cause: after the user removed the HDD, the computer started up with no issues. I then asked him to power off and reattach the HDD, and this completely resolved the issue; beforehand, even restarting would not fix it. He does not have a surge protector, and I thought that maybe some registry corruption had occurred due to a power surge, but that might be a stupid guess. Any ideas as to what happened with the machine would be most appreciated. No other issues have been found since the initial fault. The PC runs Windows 7 Home Premium installed on the SSD.

    Read the article

  • TCP dies on a Linux laptop

    - by Roman Cheplyaka
    Once every several days I have the following problem. My laptop (Debian GNU/Linux testing) suddenly becomes unable to make TCP connections to the internet. The following things continue to work fine:

    - UDP (DNS) and ICMP (ping): I get an instant response
    - TCP connections to other machines on the local network (e.g. I can ssh to a neighbouring laptop)
    - everything works fine for the other machines on my LAN

    But when I try TCP connections from my laptop, they time out (no response to SYN packets). Here's a typical curl output:

      % curl -v google.com
      * About to connect() to google.com port 80 (#0)
      *   Trying 173.194.39.105... * Connection timed out
      *   Trying 173.194.39.110... * Connection timed out
      *   Trying 173.194.39.97... * Connection timed out
      *   Trying 173.194.39.102... * Timeout
      *   Trying 173.194.39.98... * Timeout
      *   Trying 173.194.39.96... * Timeout
      *   Trying 173.194.39.103... * Timeout
      *   Trying 173.194.39.99... * Timeout
      *   Trying 173.194.39.101... * Timeout
      *   Trying 173.194.39.104... * Timeout
      *   Trying 173.194.39.100... * Timeout
      *   Trying 2a00:1450:400d:803::1009...
      * Failed to connect to 2a00:1450:400d:803::1009: Network is unreachable
      * Success
      * couldn't connect to host
      * Closing connection #0
      curl: (7) Failed to connect to 2a00:1450:400d:803::1009: Network is unreachable

    Restarting the connection and/or reloading the network card's kernel module doesn't help. The only thing that helps is a reboot. Clearly something is wrong with my system (everything else works fine), but I have no idea what exactly. I don't know how to reproduce this, but as I said, it happens every several days. My setup is a wireless router connected to the ISP via PPPoE. Any advice?
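    Next time it happens, it would help to know whether the SYNs actually leave the laptop and whether any replies come back; a packet capture while reproducing the failure shows that directly (a sketch, assuming the wireless interface is wlan0 and one of the failing IPs from above):

      # Watch the handshake attempt while curl runs in another terminal
      tcpdump -ni wlan0 host 173.194.39.105 and tcp port 80
      # A quick look at socket and TCP counters for anything exhausted
      ss -s
      netstat -st | less

    If the SYNs go out and nothing returns, suspicion shifts to the router or the PPPoE session (e.g. a full NAT/connection-tracking table on the router) rather than the laptop itself.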

    Read the article

  • Solutions on how to use an OS X calendar as a more perfect time tracking solution for 5-10 users in a small agency?

    - by jnthnclrk
    I really like OS X's iCal. Entering events is easy with the mouse, and it also gives you a very real visual sense of how long tasks take to complete. We often work remotely in our organisation, so we use a few shared calendars between key individuals to provide us with an overview of hours worked, availability & schedule conflicts without too much disruption to our various, hectic workflows. It really is a neat solution, especially on shared tasks. How many times have you tasked a remote colleague and then lost the thread on whether that task was completed or not? With shared calendars you get a much clearer idea of what your people are working on without having to pick up the phone or compose a chat. However, there are a few areas where this approach fails:

    - iCloud syncing often needs to be re-jiggered
    - The "view only" option on shared calendars does not seem to work, which leaves all shared calendars editable by others
    - There is no decent reporting with this workflow
    - There is no task categorisation or tagging
    - Things get very busy in iCal when working with more than 2 shared calendars

    I've looked at a few task management apps like Basecamp and Harvest, but nothing appears to let me edit my calendar natively and then sync with a 3rd party. I'm interested in solutions to improve the above workflow and enable us to elegantly increase the number of users.

    Read the article

  • Gateway laptop module bay light repeating 12 flashes - what error is that?

    - by Simurr
    I have a Gateway M465-E laptop currently running fine with a T2300E Core Duo installed. I wanted to upgrade it to a Core 2 Duo; my brother has the same model laptop, and it took a Core 2 Duo (T7200) just fine. I picked up a T7200 on eBay and installed it. Normally when booting, all the indicator lights flash once and the fan spins up before the machine actually starts to POST. With the T7200 installed, all the lights flash and the fan spins up, but the module bay activity light flashes 12 times repeatedly. I'm assuming this is an error code, but I can find no information about it. There are no beep codes. I've removed the RAM, HD and bay module, with no change. I switched back to the T2300E and everything works fine. Does anyone know what that error code is? The motherboard was actually manufactured by Foxconn, if that helps.

    Update 1: I returned the CPU as defective. I tested it in 3 M465-Es, and all of them did exactly the same thing. I still have no idea what the error code is, and I'd still like to know for future reference. Perhaps I should try removing the CPU from one of them and see what happens.

    Read the article

  • Troubleshooting my internet connection

    - by Simon Verbeke
    While I was out of the house, my father rearranged the network cables a bit. I don't know what he did exactly; he says nothing more than pulling and untangling. When I came back home, my connection's IP had changed from 192.168.0.205 to 169.254.197.233, the link speed had changed from 1 Gbps to 10 Mbps (it has also sat at 100 Mbps for a while), my subnet mask had changed from 255.255.255.0 to 255.255.0.0, and my standard gateway had changed from 192.168.0.1 to no standard gateway. My DNS servers remain the same. I have checked the lights of the UTP ports, and it looks like the port is only sending a heartbeat every few seconds. A sketch of the relevant part of the network:

      My PC ----- extender ----- modem

    Both links are wired; the "extender" is just a coupler that connects two cables to each other. All the cabling is gigabit, my network card is a Realtek RTL8168C(P)/8111(P) Family PCI-E Gigabit Ethernet NIC (NDIS 6.20), and the modem is a CBN SVG6540E. I have no idea what is going on here, and I don't know how to find out either. Any help is welcome! If you need any more info, please ask.
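    Two of those symptoms point in one direction: 169.254.x.x is Windows' automatic fallback address (APIPA), handed out locally when no DHCP server answers, and a gigabit NIC negotiating down to 10 Mbps is typical of a cable with a damaged pair or a loose plug. So the first suspects are the cables and the coupler. After reseating or swapping them, you can force a fresh DHCP attempt without rebooting (a sketch, from an elevated command prompt):

      ipconfig /release
      ipconfig /renew
      rem Confirm you are back on 192.168.0.x with 192.168.0.1 as gateway
      ipconfig /all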

    Read the article

  • Synchronize folder on network, preserving hard links

    - by Waleed Hamra
    I have a few computers using Windows XP Pro. I want to synchronize/back up a folder from one machine to another. So far that's a simple problem, and I've used FreeFileSync for such operations with very satisfactory results. But this all changes when hard links come into play. Today's folder contains lots of hard links; such backup programs treat hard links as multiple files and copy them as such, greatly increasing the folder size on the destination and defeating the purpose of using all these hard links in the first place. It gets more complicated when we take into consideration the fact that network shares on Windows DON'T expose hard-linking facilities, meaning that running a hard-link-aware tool like rsync with --hard-links over a share will be of no use. So my question: how can I back up my folder to the other computer while preserving hard links? I don't mind installing 3rd-party tools to do it, as obviously the standard Windows shares approach won't work... I am guessing there might be some tool that can be installed on both machines and works in a server/client mode? Does anyone have any idea how to do this?
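    Since the blocker is the share itself, one approach is exactly the server/client mode you guessed at: run rsync as a daemon on the destination (cwRsync packages it for Windows XP), so --hard-links is applied on the remote filesystem rather than over SMB. A sketch, where "backup" is a hypothetical module name and the paths are placeholders:

      # rsyncd.conf on the destination machine
      [backup]
          path = /cygdrive/d/backup
          read only = no

      # On the source machine: -a archive mode, -H preserves hard links
      rsync -aH --delete /cygdrive/c/data/ rsync://destination-pc/backup/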

    Read the article

  • VMware Player: change DHCP server settings

    - by Tathagata
    I have Windows Server 2003 running in VMware Player on a Windows 7 box. The idea is to test Windows Deployment Services in the virtual network. Is it possible to configure the VMware DHCP server with the WDS-related options (66 and 67)? I found a few references where people were using vnetlib.exe to start and stop the DHCP server, change the subnet mask, etc., but there's no information on how to set the DHCP server options.

    DHCP config from the Virtual Network Editor: I do have Workstation, but without a license for it. In the Virtual Network Editor, the DHCP settings for the network I'm using only allow me to set the subnet mask, IP ranges and things like that, but not the DHCP options.

    DHCP server on the WDS server: authorizing the DHCP server in the guest WDS server fails. VMware Player can run its own DHCP server for the virtual network without any authorization from Active Directory; can I do the same with the Windows DHCP server in the guest Windows Server? The question "Can I authorize W2K8 DHCP server for private network, even when prohibited in enterprise network?" says we have to run a third-party DHCP server... :/.
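    One avenue worth trying: on a Windows host, the VMware DHCP server reads an ISC-dhcpd-style config file (vmnetdhcp.conf, typically under C:\ProgramData\VMware), and options 66/67 have named equivalents in that syntax. A sketch of what might be added to the subnet section for your vmnet; the WDS server IP and the boot file path are assumed values here:

      # In the subnet declaration for the vmnet the guests use:
      next-server 192.168.137.10;                # option 66: boot/TFTP server
      option tftp-server-name "192.168.137.10";  # option 66, name form
      filename "boot\\x64\\wdsnbp.com";          # option 67: boot file name

    Then restart the service from an admin prompt ("net stop vmnetdhcp" followed by "net start vmnetdhcp"). Be aware the Virtual Network Editor may rewrite this file if the network is reconfigured later.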

    Read the article

  • Porting a script to use CentOS init.d functions

    - by jcalfee314
    What are good equivalent CentOS commands, using the functions in /etc/init.d/functions such as daemon, to perform the following tasks?

      STARTCMD='start-stop-daemon --start --exec /usr/sbin/swapspace --quiet --pidfile /var/run/swapspace.pid -- -d -p'
      STOPCMD='start-stop-daemon --stop --oknodo --quiet --pidfile /var/run/swapspace.pid'

    It looks like daemon will work for the start command, and killproc is used for the stop command:

      . /etc/init.d/functions
      pushd /usr/sbin
      daemon --pidfile /var/run/swapspace.pid /usr/sbin/swapspace

      . /etc/init.d/functions
      killproc -p $(cat /var/run/swapspace.pid)

    Would --oknodo be needed in the CentOS environment (the swap file is really only managed at boot time)? ("oknodo - Return exit status 0 instead of 1 if no actions are (would be) taken.") I don't see an equivalent of --quiet in daemon or killproc; I can't imagine that it would matter, though. The original start-stop-daemon invocation for swapspace seems to have both -p and --pidfile (the same option); that must be an error. Did I miss anything? Any idea why daemon does not create the pid file?
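    Pulling that together, a minimal /etc/init.d/swapspace sketch in the usual CentOS style (untested; note that daemon() does not write pid files itself, which answers the last question: here the -p flag tells swapspace to write its own, matching the original start-stop-daemon invocation, and killproc's -p takes the pid file path, not the pid):

      #!/bin/sh
      # chkconfig: 2345 90 10
      # description: dynamic swap space daemon
      . /etc/rc.d/init.d/functions

      PIDFILE=/var/run/swapspace.pid

      start() {
          # -d daemonizes, -p makes swapspace write $PIDFILE itself
          daemon --pidfile $PIDFILE /usr/sbin/swapspace -d -p
      }

      stop() {
          # Stop the daemon recorded in the pid file
          killproc -p $PIDFILE /usr/sbin/swapspace
      }

      case "$1" in
          start) start ;;
          stop)  stop ;;
          *) echo "Usage: $0 {start|stop}"; exit 2 ;;
      esac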

    Read the article

  • Client cannot access my IIS7 web server

    - by Soccerwiz
    I have a Windows 2008 web server running IIS 7 with about 25 websites. One of those sites is a SaaS application that is accessed constantly throughout the day. However, one particular client keeps getting blocked from my server. They will be using the service, and then all of a sudden they cannot access the application, or any other site on the server. The entire office of 4 users is blocked from accessing anything on the web server. A traceroute reveals they get all the way to the server before they are blocked. However, they can access a Linux server that is a different VM with a different IP on the same physical server. Also, when they are blocked from their office, they can still access the site from their mobile phones or the local Starbucks. They can also occasionally reset their router and regain access to the web server, as they are on a dynamic IP address. I checked IIS, and all IPs are allowed to access the server. There is nothing in the logs that says anything about a user being banned. I really have no idea what is causing this. Could it be a virus on their end? I have even moved the SaaS to a completely new server in a different location; they worked fine for about a month, and then the problem started occurring again. Are there any hidden blacklists in IIS? Or is it a routing issue on their end?

    Read the article

  • WAN Optimization for Small Office/Home Office

    - by TiernanO
    I have been reading up on WAN optimization for a while, mostly out of interest in speeding up my own internet connections, but also to speed up the office internet connection. At home I have 2 cable modems plugged into a RouterBoard RB750, which load-balances the connections. In the office we have a single connection into a NetGear router. Most of the WAN optimization products I have seen seem prohibitively expensive, and also seem to be built around the idea of having multiple branches around the world. What I am looking for, ideally, is as follows:

    - Software install: I am "guessing" I need to install it in 2 places: one in the office or house, and one in "the cloud".
    - Any connections going to, say, the US (we are in Europe, but our backups currently live in the US, which would be something important to speed up) would be "tunnelled" through the optimizer.
    - If downloading or uploading large files, open multiple connections between "the cloud" and the optimizer; this is where a lot of speed could be gained.
    - Finally, items not already compressed would be compressed on the cloud side, and items the optimizer has already seen would not be sent again, kind of like rsync or proxy servers.

    So, is there something that can do this? Is it available using off-the-shelf components (some magic script with SSH, Squid, Linux and duct tape), or is it something that needs to be purchased? Or even an open source project that does 90% of what I am asking? A rough sketch of the home-grown version is below.
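    For the "duct tape" version, stock OpenSSH already gives you a compressed tunnel through a cloud VM (a sketch; cloud-vm and backup-host.example.com are placeholders, and -C only helps with data that isn't already compressed, so it won't speed up encrypted or zipped backup streams):

      # Compressed SOCKS tunnel through the cloud VM
      ssh -C -N -D 1080 user@cloud-vm

      # Point individual tools at the tunnel, e.g.:
      curl --socks5-hostname localhost:1080 http://backup-host.example.com/file

    The multi-connection and "don't resend what the far side has seen" parts are what the commercial appliances add; on the open source side, a caching proxy like Squid on the cloud VM covers roughly that ground for HTTP traffic.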

    Read the article

  • How to show what will be updated by the next pull?

    - by ???
    In SVN, doing svn update will show a list of full paths with a status prefix:

      $ svn update
      M    foo/bar
      U    another/bar
      Revision 123

    I need this update list to do some post-processing work. After transferring the SVN repository to Git, I can't find a way to get the update list:

      $ git pull
      Updating 9607ca4..61584c3
      Fast-forward
       .gitignore                                         |  1 +
       uni/.gitignore                                     |  1 +
       uni/package/cooldeb/.gitignore                     |  1 +
       uni/package/cooldeb/Makefile.am                    |  2 +-
       uni/package/cooldeb/VERSION.av                     | 10 +-
       uni/package/cooldeb/cideb                          | 10 +-
       uni/package/cooldeb/cooldeb.sh                     |  2 +-
       uni/package/cooldeb/newdeb                         | 53 +++-
       ...update-and-deb-redist => update-and-deb-redist} |  5 +-
       uni/utils/2tree/{list2tree => 2tree}               | 12 +-
       uni/utils/2tree/ChangeLog                          |  4 +-
       uni/utils/2tree/Makefile.am                        |  2 +-

    I can translate Git's pull status list into SVN's format:

      M .gitignore
      M uni/.gitignore
      M uni/package/cooldeb/.gitignore
      M uni/package/cooldeb/Makefile.am
      M uni/package/cooldeb/VERSION.av
      M uni/package/cooldeb/cideb
      M uni/package/cooldeb/cooldeb.sh
      M uni/package/cooldeb/newdeb
      M ...update-and-deb-redist => update-and-deb-redist}
      M uni/utils/2tree/{list2tree => 2tree}
      M uni/utils/2tree/ChangeLog
      M uni/utils/2tree/Makefile.am

    However, some entries with long path names are abbreviated; for example, uni/package/cooldeb/update-and-deb-redist is abbreviated to ...update-and-deb-redist. I assume I can do this with Git directly, maybe by configuring git pull's output format. Any idea?
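    A sketch of one way to get an un-abbreviated list: split the pull into its two halves, inspect between them, and use git diff --name-status, which prints full paths with SVN-like letter prefixes (assumes the remote branch is origin/master):

      git fetch origin
      # Full paths, one per line, with A/M/D status letters
      git diff --name-status HEAD..origin/master
      git merge origin/master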

    Read the article

  • RabbitMQ Management console not working

    - by rrejc
    I have started with RabbitMQ. On one (Windows) machine, I installed two RabbitMQ nodes as services; I chose the node name, port and service name for each of them. The services are running normally (I see them listening in netstat -a). I also enabled the management plugin with "rabbitmq-plugins enable rabbitmq_management" and restarted both services. But the plugin isn't running: I don't see it listening in netstat, and I can't connect to the management console via a browser. Any idea what could be wrong? Is there any log that shows what is going on?

    Update: when I run rabbitmq-plugins list, I get:

      c:\RabbitMq\sbin>rabbitmq-plugins list
      [e] amqp_client                       3.0.1
      [ ] cowboy                            0.5.0-rmq3.0.1-git4b93c2d
      [ ] eldap                             3.0.1-gite309de4
      [e] mochiweb                          2.3.1-rmq3.0.1-gitd541e9a
      [ ] rabbitmq_auth_backend_ldap        3.0.1
      [ ] rabbitmq_auth_mechanism_ssl       3.0.1
      [ ] rabbitmq_consistent_hash_exchange 3.0.1
      [ ] rabbitmq_federation               3.0.1
      [ ] rabbitmq_federation_management    3.0.1
      [ ] rabbitmq_jsonrpc                  3.0.1
      [ ] rabbitmq_jsonrpc_channel          3.0.1
      [ ] rabbitmq_jsonrpc_channel_examples 3.0.1
      [E] rabbitmq_management               3.0.1
      [e] rabbitmq_management_agent         3.0.1
      [ ] rabbitmq_management_visualiser    3.0.1
      [e] rabbitmq_mochiweb                 3.0.1
      [ ] rabbitmq_mqtt                     3.0.1
      [ ] rabbitmq_old_federation           3.0.1
      [ ] rabbitmq_shovel                   3.0.1
      [ ] rabbitmq_shovel_management        3.0.1
      [ ] rabbitmq_stomp                    3.0.1
      [ ] rabbitmq_tracing                  3.0.1
      [ ] rabbitmq_web_stomp                3.0.1
      [ ] rabbitmq_web_stomp_examples       3.0.1
      [ ] rfc4627_jsonrpc                   3.0.1-git7ab174b
      [ ] sockjs                            0.3.3-rmq3.0.1-git92d4ba4
      [e] webmachine                        1.9.1-rmq3.0.1-git52e62bc
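    The [E] means the management plugin is explicitly enabled, so the plugin list itself looks right. On Windows, a RabbitMQ service often doesn't pick up newly enabled plugins until it is reinstalled rather than merely restarted, so one thing to try in each node's sbin directory (a sketch, using the standard service wrapper):

      rabbitmq-service.bat stop
      rabbitmq-service.bat remove
      rabbitmq-service.bat install
      rabbitmq-service.bat start

    The node logs (by default under %APPDATA%\RabbitMQ\log) list which plugins actually started. Also note that with two nodes on one machine, both will try to bind the management listener on its default port, so the second node needs a different management port configured or its console will fail to start.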

    Read the article

  • cset shield --kthread on: should I use this?

    - by lori
    I'm reading up on CPU shielding using Alex Tsariounov's cset utility here: https://rt.wiki.kernel.org/index.php/Cpuset_Management_Utility/tutorial

    In the tutorial, I'm finding the wording around migrating kernel threads from having access to all CPUs to running only in a certain cpuset a bit ambiguous. The tutorial says the following:

    "Some kernel threads can be moved into the unshielded system cpuset as well. These are the threads that are not bound to specific CPUs. If a kernel thread is bound to a specific CPU, then it is generally not a good idea to move that thread to the system set because at worst it may hang the system and at best it will slow the system down significantly. These threads are usually the IRQ threads on a real time Linux kernel, for example, and you may want to not move these kernel threads into system. If you leave them in the root cpuset, then they will have access to all CPUs."

    The tutorial then goes on to say:

    "However, if your application demands an even "quieter" shield, then you can move all movable kernel threads into the unshielded system set with the following command."

      [zuul:cpuset-trunk]# cset shield -k on
      cset: --> activating kthread shielding
      cset: kthread shield activated, moving 70 tasks into system cpuset...
      [==================================================]%
      cset: done

    I am confused by this final sentence. By using the word "however", it seems to suggest that you typically should not move the movable kernel threads into the unshielded system set. Is this the case, or is it safe to move kernel threads which can be moved into a cpuset, thereby preventing them from running on some CPUs?

    Read the article

  • Nagios DNX plugins

    - by danneh3826
    I'm toying with the idea of multiple Nagios instances set up to monitor our infrastructure. I've looked at the various methods of distributing Nagios checks, and I think DNX comes out the closest. DNX handles failure of worker nodes, which is fine. But what happens if the main DNX server fails? Is there a way to replicate the server too? I'm using AWS EC2 primarily, so I can utilise Elastic Load Balancing for the web UI, but I need to be able to handle failure of the AZ where the monitoring server lives, and essentially have a second instance pick up the checking load (active/passive or active/active, so long as it doesn't fail completely). The other thing I'm trying to solve is an issue with routing. What I'd like is to have multiple nodes report a fault before Nagios confirms it as critical. Not for the NRPE checks, as they're pretty self-explanatory, but for things more like check_ping. I often have routing issues out of AWS to certain datacenters, so Nagios can report bad/no ping or a timeout as a critical issue even though the machine in question is working fine. Would it be possible to have a setup where one worker reports a service check as critical, and a second worker node (positioned in another datacenter/AZ) must also report the service as critical before the Nagios central server issues a critical alert? I realise I might be asking a bit much (how far down the line do you go setting up failover systems before it starts to get ridiculous?), but surely someone must have thought of this scenario when developing DNX?

    Read the article

  • Proxy pass a domain FROM default Apache port 80 TO nginx on another port

    - by user10580
    I'm still learning server administration, so I hope the title is descriptive enough. Basically, I have sub.domain.com that I want nginx to serve on port 8090. I want to leave Apache alone and have it catch all default traffic on port 80. So I am trying a named virtual host that proxy-passes to sub.domain.com:8090. Nothing is working yet, and I have no idea what the right syntax could be. Any ideas? Most of what I found was about passing TO Apache FROM nginx, but I want to do the opposite.

      LoadModule proxy_module modules/mod_proxy.so
      LoadModule proxy_http_module modules/mod_proxy_http.so

      <VirtualHost sub.domain.com:80>
          ProxyPreserveHost On
          ProxyRequests Off
          ServerName sub.domain.com
          ServerAlias sub.domain.com
          DocumentRoot /home/app/public
          ProxyPass / http://appname:8090/
          ProxyPassReverse / http://appname:8090/
      </VirtualHost>

    (For the ProxyPass target I also tried localhost and sub.domain.com.) When I do this, I get:

      [warn] module proxy_module is already loaded, skipping
      [warn] module proxy_http_module is already loaded, skipping
      [error] (EAI 2) Name or service not known: Could not resolve host name sub.domain.com -- ignoring!

    And yes, the app is working (I have it running on port 80 with another subdomain), and it does work at sub.domain.com:8090.
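    The startup error comes from Apache trying to resolve sub.domain.com in the <VirtualHost> directive itself. The usual pattern is to match on *:80 and let ServerName select the vhost, and to proxy to the loopback address and port that nginx actually listens on. A sketch (the LoadModule lines can be dropped, since the warnings say the modules are already loaded):

      <VirtualHost *:80>
          ServerName sub.domain.com
          ProxyPreserveHost On
          ProxyRequests Off
          ProxyPass        / http://127.0.0.1:8090/
          ProxyPassReverse / http://127.0.0.1:8090/
      </VirtualHost>

    This also assumes nginx is reachable on 127.0.0.1:8090 and that sub.domain.com's DNS points at the server, so Apache receives the request in the first place.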

    Read the article
