Search Results


  • Battery is drained too quickly

    - by LucaB
    I'm getting really poor battery life under Ubuntu, not even close to Windows. I tried powertop and saw that my laptop is drawing nearly 20 watts at idle (a bit more, actually). I tried installing laptop-mode-tools and toggling the "Bad" tunables to "Good" in powertop, but nothing changes. I see that the HD audio output device is running at 100% all the time. Could this be the problem? This is the report from powertop:

        The battery reports a discharge rate of 22.8 W
        The estimated remaining time is 33 minutes
        Summary: 381.8 wakeups/second, 0.0 GPU ops/second and 0.0 VFS ops/sec

        Usage        Events/s   Category    Description
        3.2 ms/s     182.7      Timer       tick_sched_timer
        100.0%                  Device      Audio codec hwC0D3: Intel
        7.9 ms/s     25.1       Process     /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch -background no
        1.9 ms/s     24.2       Interrupt   [6] tasklet(softirq)
        2.9 ms/s     23.2       Process     /usr/lib/chromium-browser/chromium-browser --type=zygote
        8.1 ms/s     20.3       Process     /usr/lib/unity/unity-panel-service
        0.7 ms/s     17.4       Timer       hrtimer_wakeup
        4.2 ms/s     12.6       Process     unity-2d-panel
        604.4 µs/s   9.7        Process     syndaemon -i 2.0 -K -R -t
        149.7 µs/s   9.7        kWork       ieee80211_iface_work
        0.8 ms/s     8.7        Process     metacity
        19.5 ms/s    1.0        Process     powertop
        3.0 ms/s     6.8        Process     //bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
        699.0 µs/s   6.8        Process     /usr/lib/thunderbird/thunderbird
        4.3 ms/s     4.8        Process     gnome-terminal
        658.9 µs/s   2.9        Interrupt   [1] timer(softirq)
        75.1 µs/s    2.9        kWork       iwl_bg_run_time_calib_work
        163.8 µs/s   1.9        Process     /usr/lib/accountsservice/accounts-daemon
        70.6 µs/s    1.9        Process     [ksoftirqd/2]
        25.8 µs/s    1.9        Process     [ksoftirqd/0]
        1.0 ms/s     1.0        Process     /usr/bin/python /usr/sbin/powernapd
        408.2 µs/s   1.0        Process     unity-2d-shell
        189.8 µs/s   1.0        Process     /usr/lib/chromium-browser/chromium-browser
        124.4 µs/s   1.0        Process     /usr/lib/unity-lens-applications/unity-applications-daemon
        113.3 µs/s   1.0        Process     /usr/lib/gnome-settings-daemon/gnome-settings-daemon
        112.0 µs/s   1.0        Process     nautilus -n
        104.9 µs/s   1.0        Process     /usr/lib/gvfs/gvfsd-trash --spawner :1.2 /org/gtk/gvfs/exec_spaw/0
        77.5 µs/s    1.0        Process     /usr/lib/x86_64-linux-gnu/colord/colord
        75.6 µs/s    1.0        Process     /usr/lib/gvfs/gvfs-gdu-volume-monitor
        75.0 µs/s    1.0        Interrupt   [53] i915
        74.9 µs/s    1.0        Process     /usr/lib/gvfs/gvfs-afc-volume-monitor

    What should I do to lower the battery consumption?
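    Since powertop pins the blame on the audio codec sitting at 100%, one commonly suggested tweak is to enable the HDA driver's runtime power saving. A minimal sketch, assuming the snd_hda_intel driver (sysfs paths may vary by kernel version):

        # enable runtime power saving for the Intel HDA codec
        echo 1 | sudo tee /sys/module/snd_hda_intel/parameters/power_save
        echo Y | sudo tee /sys/module/snd_hda_intel/parameters/power_save_controller

        # make the setting persist across reboots
        echo "options snd_hda_intel power_save=1 power_save_controller=Y" | \
            sudo tee /etc/modprobe.d/51-audio-powersave.conf

    If the codec then drops out of the powertop list and the discharge rate falls, the audio device was indeed the culprit.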


  • FTP Sites vs Sites in IIS 7.0

    - by NealWalters
    We have one FTP site set up (and working), basically following the instructions here: http://www.iis.net/learn/publish/using-the-ftp-service/creating-a-new-ftp-site-in-iis-7 It shows up under "Sites", and then the name of our FTP site. However, above "Sites" (in the left navigation tree view) we see a node called "FTP Sites". When we click on it, it says "FTP Management is provided by IIS 6.0". Can someone give me the big picture of why this node appears and why IIS 6 is involved? Is it some backward-compatibility feature? I didn't build these machines, so I don't know the reasoning behind what was done before I arrived on the scene. Also, is the tree view icon the same for websites and FTP sites?


  • Function Folding in #PowerQuery

    - by Darren Gosbell
    Originally posted on: http://geekswithblogs.net/darrengosbell/archive/2014/05/16/function-folding-in-powerquery.aspx

    Looking at a typical Power Query query, you will notice that it's made up of a number of small steps. As an example, take a look at the query I did in my previous post about joining a fact table to a slowly changing dimension. It was roughly built up of the following steps:

    1. Get all records from the fact table.
    2. Get all records from the dimension table.
    3. Do an outer join between these two tables on the business key (resulting in an increase in the row count, as there are multiple records in the dimension table for each business key).
    4. Filter out the excess rows introduced in step 3.
    5. Remove extra columns that are not required in the final result set.

    If Power Query were to execute a query like this literally, following the same steps in the same order, it would not be overly efficient, particularly if your two source tables were quite large. However, Power Query has a feature called function folding where it can take a number of these small steps and push them down to the data source. The degree of function folding that can be performed depends on the data source. As you might expect, relational data sources like SQL Server, Oracle and Teradata support folding, but so do some of the other sources like OData, Exchange and Active Directory.

    To explore how this works, I took the data from my previous post and loaded it into a SQL database, then converted my Power Query expression to source its data from that database. Below is the resulting Power Query, which I edited by hand so that the whole thing can be shown in a single expression:

        let
            SqlSource = Sql.Database("localhost", "PowerQueryTest"),
            BU = SqlSource{[Schema="dbo",Item="BU"]}[Data],
            Fact = SqlSource{[Schema="dbo",Item="fact"]}[Data],
            Source = Table.NestedJoin(Fact,{"BU_Code"},BU,{"BU_Code"},"NewColumn"),
            LeftJoin = Table.ExpandTableColumn(Source, "NewColumn"
                                          , {"BU_Key", "StartDate", "EndDate"}
                                          , {"BU_Key", "StartDate", "EndDate"}),
            BetweenFilter = Table.SelectRows(LeftJoin, each (([Date] >= [StartDate]) and ([Date] <= [EndDate])) ),
            RemovedColumns = Table.RemoveColumns(BetweenFilter,{"StartDate", "EndDate"})
        in
            RemovedColumns

    If the above query were run step by step in a literal fashion, you would expect it to run two queries against the SQL database, doing "SELECT * ..." from both tables. However, a profiler trace shows just the following single SQL query:

        select [_].[BU_Code],
            [_].[Date],
            [_].[Amount],
            [_].[BU_Key]
        from
        (
            select [$Outer].[BU_Code],
                [$Outer].[Date],
                [$Outer].[Amount],
                [$Inner].[BU_Key],
                [$Inner].[StartDate],
                [$Inner].[EndDate]
            from [dbo].[fact] as [$Outer]
            left outer join
            (
                select [_].[BU_Key] as [BU_Key],
                    [_].[BU_Code] as [BU_Code2],
                    [_].[BU_Name] as [BU_Name],
                    [_].[StartDate] as [StartDate],
                    [_].[EndDate] as [EndDate]
                from [dbo].[BU] as [_]
            ) as [$Inner] on ([$Outer].[BU_Code] = [$Inner].[BU_Code2] or [$Outer].[BU_Code] is null and [$Inner].[BU_Code2] is null)
        ) as [_]
        where [_].[Date] >= [_].[StartDate] and [_].[Date] <= [_].[EndDate]

    The resulting query is a little strange; you can probably tell that it was generated programmatically. But if you look closely, you'll notice that every single part of the Power Query formula has been pushed down to SQL Server. Power Query itself ends up just constructing the query and passing the results back to Excel; it does not do any of the data transformation steps itself. So now you can feel a bit more comfortable showing Power Query to your less technical colleagues, knowing that the tool will do its best to fold all the small steps in Power Query down to the most efficient query that it can against the source systems.


  • First Experience with Web Services

    When I first started programming with Microsoft .NET (1.0 Framework) I had a strong desire to learn how search engines indexed web sites. At that time I was working as a search engine spammer, creating web pages to generate traffic for specific themes for various clients. One way I attempted to better understand .NET was to build a web spider that analyzed web pages on demand. An example of the spider is hosted at AddLinkz.com. After my spider was built I had no real idea what I could or should do with it, until I found the MSN Search API. I used this web service to compare its results with my spider's. Additionally, I used the API to feed my .NET web spider new URLs based on specific search terms. MSN's search API was very easy to use: I just had to request information by calling a web URL with parameters via a GET request, and the results were returned in XML. At that time all requests were limited to XML responses and a maximum of 1,000 results per query.

    Since then the entire API has gone through several reconstructions, rebrandings and new search services. Microsoft's new Bing API replaced the older MSN Search API and added several new search capabilities. These new features allow search data to be returned for web searches, image searches, news searches, and related search terms, to name a few.

    Bing API Version 2.0 SourceTypes:

        Web            Searches for web content                              (example: sushi)
        Image          Searches for images on the web                        (example: sushi)
        News           Searches news stories                                 (example: sushi)
        InstantAnswer  Searches Encarta online                               (examples: what is sushi, convert 5 feet to meters, x*5=7, 2 plus 2)
        Spell          Searches Encarta dictionary for spelling suggestions
        Phonebook      Searches phonebook entries                            (example: sushi in Los Angeles)
        RelatedSearch  Returns the query strings most similar to yours
        Ad             Returns advertisements to incorporate with results

    I currently plan to start using the web search feature from the new Bing 2.0 API in an open source project related to exception management. Currently, it is still in the conception phase.
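    For reference, a request in the style described above looked roughly like this. A sketch only: the endpoint and parameter names are recalled from the Bing 2.0 documentation of the time, and the service has long since been retired. YOUR_APP_ID stands in for the key Microsoft issued per application:

        curl "http://api.bing.net/xml.aspx?Appid=YOUR_APP_ID&Sources=Web&Query=sushi"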


  • Rant - Why is Windows Azure not available in Africa?

    - by Allan Rwakatungu
    Yesterday at the .NET user group meeting in Kampala, Uganda I gave a talk on cloud computing with Windows Azure (details will be in my next blog post). The guys were excited: without owning their own infrastructure, and at low cost, they can build scalable, highly available applications. Not quite. Azure accounts are only available to people in particular countries, none of them in Africa. I attended PDC in 2008 when Microsoft unleashed Windows Azure. One of the case studies to show the benefits of cloud computing was a project in Africa for an education service in Ethiopia. The point they were making was that the cloud was perfect for scenarios where computing infrastructure is not sophisticated, like Ethiopia. Perfect, I thought. So I got my beta account from PDC and started playing around in the cloud. Then Azure goes live, my beta account does not work any more, and I can't pay because I'm from Uganda. Microsoft, this sucks. I don't know the reasons for Microsoft doing this, but I'm sure we can work something out. We in Africa need the cloud more than anybody else in the world. Setting up data centers that are highly scalable and available for our startups is not an option we have. But we also can't pay for cloud computing with Microsoft. Microsoft, we know we are a tiny, insignificant market for a company your size, but excluding us only continues to widen the digital divide. Microsoft, how about a reseller model for cloud computing? Instead of trying to deal directly with each client, you have local partners who help you sell and bill your cloud services. I think that would lead to Windows Azure being available in Africa. I can help you resell in Uganda.


  • Which Bliki (Blog+Wiki) solution can you recommend?

    - by asmaier
    I'm searching for a good Bliki solution, meaning a combination of blog and wiki that I can install on my own web space. I would like to be able to write articles in wiki style, much like with MediaWiki. So I want to use a wiki markup language, have a revision history, comments, internal links to other pages (maybe in other languages), and be able to collaboratively edit the articles. On the other hand, I would like to have a blog-like view of my articles, showing new articles (and changes to existing articles) in a time-ordered fashion. It would be nice if it were possible to search through the articles and also tag them, so one could generate a tag cloud for the articles. A nice feature would also be the ability to order the articles according to views, or even a voting system for the articles. Also good would be a permission system to keep certain articles private, showing them only to people logged in to the platform. Apart from these nice-to-have features, an absolute must-have for the Bliki platform I'm searching for is the ability to handle math equations (written in LaTeX syntax) and display them either as pictures, like MediaWiki, or even better using MathJax. At the moment I'm using a web service called Wikidot which offers some of the mentioned features; however, the free version shows too many advertisements, the blog feature is not mature, the design is quite ugly, and loading of the page is often slow. So I want to install a Bliki solution on my own web space. Can you recommend any solution for that?


  • SQL Server Agent cannot start

    - by Keith Phạm
    http://imageshack.us/photo/my-images/819/sqlserveragent3.png/ http://imageshack.us/photo/my-images/341/sqlserveragent.png/ I still cannot start SQL Server Agent. The service starts and then stops immediately, and I don't know what's happening in between. I have already set the service to log on with the Network Service account in the configuration tools and assigned that account a member role in the msdb database for SQL Server Agent. Please give me some advice. I use Windows 7 Ultimate and SQL Server 2008:

        Microsoft SQL Server Management Studio    10.0.1600.22 ((SQL_PreRelease).080709-1414)
        Microsoft Data Access Components (MDAC)   6.1.7600.16385 (win7_rtm.090713-1255)
        Microsoft MSXML                           3.0 4.0 5.0 6.0
        Microsoft Internet Explorer               9.0.8112.16421
        Microsoft .NET Framework                  2.0.50727.4971
        Operating System                          6.1.7600

    Thanks
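    A useful first diagnostic is the Agent's own error log, which usually records why the service stopped. A sketch; the path assumes a default SQL Server 2008 instance, so adjust MSSQL10.MSSQLSERVER for a named instance:

        rem check the service state, then read the Agent log for the failure reason
        sc query SQLSERVERAGENT
        type "C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\SQLAGENT.OUT"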


  • Windows XP problems

    - by anca
    I had problems with the web virus "Cake 3.00" and I downloaded Malwarebytes Anti-Malware. I got rid of that virus, but it left other problems. When I start my PC, a Skype error appears: "Exception EOleSysError in module Skype.exe at 0013CE7D. The RPC server is unavailable", and another one: "Error in C:\WINDOWS\system32\NvCpl.dll. Missing entry: NvStartup". When I install or uninstall a program it says: "The Windows Installer service could not be accessed. This can occur if you are running Windows in safe mode, or if the Windows Installer is not correctly installed. Contact your support personnel for assistance." I have tried many "solutions" but nothing worked. Please HELP! It's Windows XP SP2.
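    For the Windows Installer message specifically, a commonly suggested repair on XP is to re-register the installer service. A sketch, not a guaranteed fix; run both commands from Start > Run or a command prompt, then reboot:

        msiexec /unregister
        msiexec /regserver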


  • Reliable procedure/tool for removing print drivers in Windows 7 (domain environment)

    - by ultrasawblade
    One of the troubleshooting steps in resolving printer-related issues with any version of Windows is to remove the installed print drivers and then reinstall them. This is a domain environment and drivers are pulled from a print server. I've had occasion to need to do this on a user's system running Windows 7 Enterprise 64-bit. These procedures don't work:

    - Removing the printer from Devices and Printers (doesn't remove the driver, obviously)
    - Doing the above, going into Server Properties, and attempting to remove the driver (fails with a "driver in use" error)
    - Opening an empty MMC, adding the Print Management snap-in, and attempting to do the above
    - Doing sc stop spooler and sc start spooler before doing both of the above

    Now I know it's possible to remove drivers with the spooler service stopped, by going into the spool directory and deleting registry entries. I'm asking if a tool exists to do this where I can just select the driver in question and have it be removed.
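    One built-in candidate is the printui helper, which exposes the same "delete driver" operation from the command line. A sketch; the driver name is a placeholder for whatever Print Management shows, and some systems also want /h and /v to select architecture and version:

        sc stop spooler
        sc start spooler
        rem delete the named print driver quietly (/q suppresses prompts)
        rundll32 printui.dll,PrintUIEntry /dd /m "Example Driver Name" /q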


  • Can I configure mod_proxy to use different parameters based on HTTP Method?

    - by Graham Lea
    I'm using mod_proxy as a failover proxy with two balance members. While mod_proxy marks dead nodes as dead, it still routes one request per minute to each dead node and, if it's still dead, will either return 503 to the client (if maxattempts=0) or retry on another node (if it's greater than 0). The backends are serving a REST web service. Currently I have set maxattempts=0 because I don't want to retry POSTs and DELETEs. This means that when one node is dead, each minute a random client will receive a 503. Unfortunately, most of our clients are interpreting codes like 503 as "everything is dead" rather than "that didn't work, but please try it again". In order to program some kind of automatic retry for safe requests at the proxy layer, I'd like to configure mod_proxy to use maxattempts=1 for GET and HEAD requests and maxattempts=0 for all other HTTP methods. Is this possible? (And how?)
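    One untested idea, sketched on the assumption that maxattempts can be set per balancer via ProxySet: define two balancers over the same members and route to them by method with mod_rewrite (balancer names and member URLs are placeholders):

        <Proxy balancer://withretry>
            BalancerMember http://backend1:8080
            BalancerMember http://backend2:8080
            ProxySet maxattempts=1
        </Proxy>

        <Proxy balancer://noretry>
            BalancerMember http://backend1:8080
            BalancerMember http://backend2:8080
            ProxySet maxattempts=0
        </Proxy>

        RewriteEngine On
        # GET and HEAD are safe to retry; everything else takes the no-retry path
        RewriteCond %{REQUEST_METHOD} ^(GET|HEAD)$
        RewriteRule ^/(.*)$ balancer://withretry/$1 [P,L]
        RewriteRule ^/(.*)$ balancer://noretry/$1 [P,L]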


  • How do I connect to Ubuntu One after changing the password?

    - by rumtscho
    I changed my password for Ubuntu One using the web interface, and added a new computer. Since then, the old computer does not synchronize with Ubuntu One. It doesn't show any error messages or such, but files uploaded from the web interface or changed on the newly added computer don't appear or change on the old computer. I guess that it can't connect because it is still using the old password. The problem is that I can't find an interface to change the password the client is using to connect to the service. The "manage account" option opens the web interface. I looked into the keyring and found the key for Ubuntu One, but there I only see an encrypted version of the password, so I can't change it there. So what is the correct way to tell my client that my account password has changed?

    Edit: This is what I see when I open Preferences -> Ubuntu One. Is there something wrong with it? It also stubbornly insists that it has successfully synchronized, but the files I have added from other computers are not in my Ubuntu One folder.
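    One thing worth trying is forcing the client to re-authorize. A sketch, assuming the ubuntuone-client package's u1sdtool command (details vary by release):

        u1sdtool --quit       # stop the sync daemon
        # delete the "Ubuntu One" token in Passwords and Keys (seahorse), then:
        u1sdtool --connect    # reconnect; a fresh authorization prompt should appear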


  • WSUS setup on ADS Domain Controller VM

    - by NickC
    How should I set up sharing/directory permissions to allow the following to work:

    - VMHost: Hyper-V partition (not part of the domain)
    - ADServer: Active Directory domain controller running as a guest VM on VMHost
    - \\VMHost\Updates: disk share for WSUS to put the updates and other local files

    The problem is what permissions I need to give to \\VMHost\Updates to allow WSUS to work with this directory and avoid the "The WSUS content directory is not accessible" error which I am currently seeing. As far as I can tell, WSUS runs as the "domain\Network Service" account. The question is, without adding VMHost to the domain, how do I give that user appropriate permissions to this directory? Is there a way that VMHost can be told to trust ADServer and then be able to use user accounts from there? Sort of relates to my other post here: Win 2012 Domain controller VM, should the Hyper-V host be part of the domain? Thanks, Nick


  • Remote Debugging

    - by Pemdas
    Obviously, the easiest way to solve a bug is to be able to reproduce it in house. However, sometimes that is not practical. For starters, users are often not very good at providing you with useful information:

        Customer Service: "What seems to be the issue?"
        User: "It crashed!"

    To further compound that, sometimes the bug only occurs under certain environmental conditions that cannot be adequately replicated in house. With that in mind, it is important to build some sort of diagnostic framework into your product. What types of solutions have you seen or used in your experience? Logging seems to be the predominant method, which makes sense. We have a fairly sophisticated logging framework in place with different levels of verbosity and the ability to filter on specific modules (actually we can filter down to the granularity of a single file). Error logs are placed strategically to manufacture a pretty good representation of a stack trace when an error occurs. We don't have the luxury of 10 million terabytes of disk space, since I work on embedded platforms, so we have two ways of getting the logs off the system: a serial port and a syslog server. However, an issue we run into sometimes is actually getting the user to turn the logs on. Our current framework often requires some user interaction.


  • Deprioritize BitTorrent traffic

    - by Steven Xu
    I'm sure this question has been asked before, but I can't seem to find it for myself; my Google-fu eludes me. My router, the Linksys E2000, does a decent job of prioritizing some sorts of traffic above BitTorrent traffic (there isn't too much interruption to port 80, 443, or 22 traffic, the ones I use most often). But other ports get pummeled. For instance, 3000 (which I use for local Rails testing) becomes almost entirely non-functional. Xbox Live traffic (not sure about the ports, but they are in the 1000 range) doesn't do well either. So I'm wondering how to ensure that XBL and local Rails testing maintain strong service while BitTorrent is going. Is it enough to turn up the QoS on their associated ports to high? It doesn't seem to be as effective as when BitTorrent isn't running at all (I don't know if there's a way to deprioritize BitTorrent traffic).


  • How to structure git repositories for a project?

    - by littledynamo
    I'm working on a content synchronisation module for Drupal. There is a server module, which sits on one website and exposes content via a web service. There is also a client module, which sits on a different site and fetches and imports the content at regular intervals. The server is created on Drupal 6. The client is created on Drupal 7. There is going to be a need for a Drupal 7 version of the server, and then there will be a need for a Drupal 8 version of both the client and the server once it is released next year. I'm fairly new to git and source control, so I was wondering what is the best way to set up the git repositories. Would it be a case of having a separate repository for each instance, i.e.:

        Drupal 6 server = 1 repository
        Drupal 6 client = 1 repository
        Drupal 7 server = 1 repository
        Drupal 7 client = 1 repository
        etc.

    Or would it make more sense to have one repository for the server and another for the client, then create branches for each Drupal version? Currently I have 2 repositories: one for the client and another for the server.
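    For what it's worth, the branch-per-version layout of the second option is only a few commands (the repository URL and branch names here are just examples, similar in spirit to the 6.x/7.x release branches drupal.org itself uses for contrib modules):

        git clone git@example.com:drupal-sync-server.git
        cd drupal-sync-server
        git branch 6.x        # Drupal 6 line of development
        git branch 7.x        # Drupal 7 line of development
        git checkout 7.x      # work on the Drupal 7 server

    A fix that applies to both versions can then be cherry-picked from one branch to the other, instead of being copied by hand between repositories.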


  • How to pass the domain name to the backend with Pound

    - by FurtiveFelon
    I am using Pound to terminate SSL in front of the backend, but the bulk of the work is done in Varnish (including the virtual host handling). As a result, I need Pound to forward all other traffic to Varnish verbatim, but it doesn't seem to do that. I am using the default configuration:

        ListenHTTP
            Address 1.2.3.4
            Port    8080
            ## allow PUT and DELETE also (by default only GET, POST and HEAD):
            xHTTP 0

            Service
                BackEnd
                    Address 127.0.0.1
                    Port    80
                End
            End
        End

    So whenever I hit example.com:8080, it always ends up at the default backend in Varnish, which I assume is because the domain (Host) header isn't sent along. Does anyone know what could be wrong? Thanks a lot! Jason
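    One thing to check before blaming Pound: if clients are told to use port 8080, the Host header arrives as "example.com:8080", and a Varnish vhost rule matching the bare "example.com" will miss it and fall through to the default backend. A sketch of a normalizing rule in Varnish (VCL syntax of that era; untested):

        sub vcl_recv {
            # strip any :port suffix so virtual host matching sees the bare domain
            set req.http.host = regsub(req.http.host, ":[0-9]+$", "");
        }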


  • [SOLVED] VMware problems - networking - no packet response

    - by jack
    XP is my host. Ubuntu is my guest in VMware. When I run the following commands I should get SMTP responses, but now I get no response. I used Wireshark to analyze it; Wireshark also shows nothing.

        root@vmware:~# netcat 192.168.1.2 25
        220 762462a8c4d Microsoft ESMTP MAIL Service, Version: 6.0.2600.5949 ready at Fri, 12 May 2010 18:04:20 +0800
        EHLO SAYHELLO
        VRFY TEST@LOCALHOST
        test \ sdfsafsd

    How can I fix it?

    UPDATE: I came to know that this is no VMware problem; this is a netcat problem. For this, you might have to type Ctrl+M {ENTER} {ENTER}.
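    The Ctrl+M trick points at line endings: SMTP expects CRLF, while plain netcat sends a bare LF, so the server ignores the commands. A sketch of the same test with explicit CRLF, assuming ncat from the nmap package is available (its -C/--crlf switch does the conversion):

        ncat -C 192.168.1.2 25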


  • Backup Exec backup-to-disk folder creation - Access denied

    - by ewwhite
    I'm having a difficult time creating a backup-to-disk folder in Symantec Backup Exec 12.5 and Backup Exec 2010. The backend storage is a Nexenta/ZFS-based NAS filer sharing the volume via CIFS; I've also seen the issue on other *nix-based NAS devices. I've attempted mapping the drive, providing the full path to the folder, etc. I can browse to the share just fine from within Windows, but Backup Exec fails to create the B2D folder with different variants of an "Unable to create new backup folder. Access denied" error. I've attempted creating service accounts in Backup Exec to handle the authentication, but nothing seems to work. What's the key to making this work?


  • PowerShell, Task Scheduler or loop and sleep

    - by Paddy Carroll
    I have a job that needs to go off every minute or so. It loads a DLL I have written in C# that retrieves the state of a SQL Server mirror (primary, mirror and witness) for a number of databases; it allows us to poke DNS to show where the primary instances are. Please don't mention clustering - we're not doing that. I can't be arsed to write a service, and there simply isn't enough time. So do I:

    1. Task Scheduler, every minute: invoke a PowerShell script that loads the DLL and does the business, or
    2. Task Scheduler, at startup: invoke a similar PowerShell script that loads the DLL once but then loops and sleeps, refreshing the object that the DLL exposes?

    Pros and cons?
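    A sketch of option 2, the loop-and-sleep variant (the DLL path, type name and method are placeholders for whatever the C# assembly actually exposes):

        # load the assembly once, then refresh the mirror state forever
        Add-Type -Path 'C:\Tools\MirrorState.dll'
        while ($true) {
            $state = [MirrorState.Monitor]::GetState()   # hypothetical API
            # ... poke DNS based on $state here ...
            Start-Sleep -Seconds 60
        }

    One practical trade-off: the loop pays the assembly-load cost only once, but if the script ever crashes, polling stops until the next boot, whereas the every-minute task gets restarted by the scheduler on each run.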


  • SharePoint 2010 in the cloud a.k.a. SharePoint Online

    - by Sahil Malik
    There are 3 ways to run SharePoint:

    1. On premises: you buy the servers, and you run the servers.
    2. Hosted servers: you don't run the servers, but you let a hosting company run dedicated servers.
    3. Multi-tenant, like SharePoint Online, also known as SaaS (Software as a Service): this is what I am talking about in this blog post.

    The advantages of a cloud solution are undeniable:

    - Availability (SharePoint Online offers a 99.9% uptime SLA)
    - Reliability
    - Cost, due to economies of scale and no need to hire specialized dedicated staff
    - Scalability
    - Security
    - Flexibility: grow or shrink as you need to

    If you are seriously considering SharePoint 2010 in the cloud, there are some things you need to know about SharePoint Online.

    What will work: OOTB customization, collaboration features, etc. SharePoint Designer 2010 is supported, so no-code workflows will work. Visual Studio sandbox solutions and the client object model will work.

    What won't work: the SharePoint 2010 Online cloud environment supports only sandbox solutions. BCS (Business Connectivity Services) is not supported in SharePoint Online; what you can do, however, is host your services in Azure and call them using Silverlight. Custom timer jobs will not work.

    Long story short, get used to sandbox solutions and the new way of programming. Sandbox solutions are pretty damn good. Most of the complaints I have heard about sandbox solutions being too restrictive are uninformed mechanisms of doing things mired in the ways of 2002... or you could just live in 2002 too.


  • Can admins monitor my activity locally even when I use a VPN?

    - by Arjun Create
    My school has one of those super-overreacting web blockers (specifically Fortisnet) that blocks things that should be accessible to a high-school senior trying to research projects. Despite many students' complaints, the administration's hands are tied due to parents' complaints. I have set up a VPN account from http://www.vpnreactor.com. With this I am able to bypass the blocker. I know this service hides my IP from websites and servers on the web. I also know that the school pays an IT guy just to monitor sites and network traffic that the students are using. Basically, will he be able to see my network traffic? More importantly, will he be able to trace it to my computer or its MAC address? I am connecting over Wi-Fi, not Ethernet.


  • Make puppet agent restart itself

    - by SamKrieg
    I've got a file that notifies the puppet agent. In the network module, the proxy settings are included in the .gemrc file like this:

        file { "/root/.gemrc":
            content => "http_proxy: $http_proxy\n",
            notify  => Service['puppet'],
        }

    The problem is that puppet stops and does not restart:

        Aug 31 12:05:13 snch7log01 puppet-agent[1117]: (/Stage[main]/Network/File[/root/.gemrc]/content) content changed '{md5}2b00042f7481c7b056c4b410d28f33cf' to '{md5}60b725f10c9c85c70d97880dfe8191b3'
        Aug 31 12:05:13 snch7log01 puppet-agent[1117]: Caught TERM; calling stop

    I assume the code does something like /etc/init.d/puppet stop && /etc/init.d/puppet start. Since puppet is not running, it cannot start itself; it kind of makes sense. How do I make puppet restart itself when this file changes? Note that this file may not exist as well.
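    One workaround, sketched and untested (it assumes the at daemon is installed): notify an exec that schedules the restart out-of-band, so the agent run can finish before the service goes down:

        file { "/root/.gemrc":
            content => "http_proxy: $http_proxy\n",
            notify  => Exec['restart-puppet-later'],
        }

        exec { 'restart-puppet-later':
            command     => '/bin/echo "/etc/init.d/puppet restart" | /usr/bin/at now + 1 minute',
            refreshonly => true,
        }

    Because the restart is executed by at rather than by the agent process itself, the TERM signal no longer kills the thing doing the restarting.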


  • Is there value in having new developers (graduates) start as testers / bug-fixers?

    - by Nico Huysamen
    Hi Programmers Community. What are your thoughts on the following: is there value in having new developers (graduates) start as testers / bug-fixers? There are two schools of thought here that I have come across. Having new developers (graduates) start as testers / bug-fixers / doing SLA (Service Level Agreement) work gets them familiar with the code base. It also gives them the opportunity to learn how to read other people's code. Furthermore, by fixing bugs they will learn certain good and bad practices, which could help them in the future. The other way of thinking, though, is that if you immediately start new developers on something like testing / bug-fixing / SLA work, their appetite for the development world might go away, and/or they might leave the company, and you potentially lose out on a great future resource. Is there a balance that should be kept between these two? Currently where I work there is no clear-cut definition of what new starters do: some go directly onto client work, while some fall into the SLA world. Should companies have such a policy? Or should it be handled on a case-by-case or opportunity basis? Hope to hear from some of you that have experience in this field. Thanks!


  • Remote location loses connectivity

    - by user24559
    I have a remote location that has constant internet access using DSL. The remote location only has three Linksys cameras that use the internet access through a router; there is no computer on the network. The router is configured to use the DYN.org service to keep the IP address updated if the internet provider assigns a new one. The problem I am having is that after a period of time the router stops responding to incoming ping requests, and the cameras show as offline. However, if I keep one of the cameras constantly streaming data, then the connection to all the cameras works just fine all the time. How can I keep the router from appearing to sleep and not responding to incoming requests? The router is a Linksys WRT300N.

