Search Results

Search found 2088 results on 84 pages for 'jobs'.


  • Bacula configuration for clients that are turned on and off randomly

    - by Rastloser
    I'm evaluating Bacula as a centralized backup tool for a small network where users turn machines on and off unpredictably. Some of the headless Linux boxes I need to back up are meant to be switched off by pressing the power button on the case, with no way of telling the user to wait for a backup job to finish. So we don't know when backup jobs may run (anacron might help with this, right?), and we don't know whether they'll be allowed to finish. Is Bacula a reasonable choice for such an environment?
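
    A hedged sketch of how Bacula itself can cope with interrupted jobs: the Job resource supports reschedule directives, so a job killed by a client powering off can be retried automatically once the machine is back. The resource names and intervals below are illustrative, not a recommendation:

        # bacula-dir.conf (sketch; names and intervals are illustrative)
        Job {
          Name = "BackupHeadlessBox"
          JobDefs = "DefaultJob"
          Client = "headless-box-fd"
          Reschedule On Error = yes      # retry a job that dies mid-run
          Reschedule Interval = 1 hour   # wait this long before retrying
          Reschedule Times = 10          # give up after 10 attempts
        }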

  • Best practice for administering a (hadoop) cluster

    - by Alex
    Dear all, I've recently been playing with Hadoop. I have a six-node cluster up and running with HDFS, and I have run a number of MapReduce jobs. So far, so good. However, I'm now looking to do this more systematically and with a larger number of nodes. Our base system is Ubuntu, and the current setup has been administered using apt (to install the correct Java runtime) and ssh/scp (to propagate the various conf files). This is clearly not scalable over time. Does anyone have experience with good systems for automatically administering (possibly slightly heterogeneous: different disk sizes, different numbers of CPUs per node) Hadoop clusters? I would consider diskless boot, but I imagine that with a large cluster, getting it up and running might be bottlenecked on the machine serving the OS. Or some form of distributed Debian apt to keep the machines' native environments synchronised? And how do people successfully manage the conf files across a number of (potentially heterogeneous) machines? Thanks very much in advance, Alex
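
    For pushing conf files specifically, a minimal sketch follows (hostnames, paths and the service name are illustrative assumptions); real deployments usually graduate to a configuration-management tool such as Puppet, Chef or cfengine, which also handles per-node differences like disk sizes:

        #!/bin/sh
        # Push the Hadoop conf directory to every node, then restart the
        # relevant daemon. Purely illustrative; no error handling.
        for host in node01 node02 node03 node04 node05 node06; do
            rsync -a /etc/hadoop/conf/ "$host":/etc/hadoop/conf/
            ssh "$host" 'service hadoop-datanode restart'   # service name is an assumption
        done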

  • Do best-practices say to restrict the usage of /var to sudoers?

    - by NewAlexandria
    I wrote a package and would like to use /var to persist some data. The data I'm storing could even be thought of as an addition to /var/db. The pattern I observe is that files in /var/db and its surroundings are owned by root. The primary (intended) use of the package is filtering cron jobs, meaning you would need permissions to edit the crontab. Should I presume a sudo install of the package? Should I have the package gracefully degrade to a subdirectory of /usr, and if so, which one? If I 'opinionate' that any non-sudo install requires a configrc (with paths), where should the package look for that config file (presuming a shared-host environment)? Incidentally, this package is a Ruby gem, and you can find it here.
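
    A minimal sketch of the graceful-degradation idea, assuming Filesystem Hierarchy Standard conventions (the package name and paths are illustrative; the XDG fallback is one common convention for per-user data, not something the FHS mandates):

        #!/bin/sh
        # Prefer /var/db when writable (a root/sudo install), else fall
        # back to a per-user data directory.
        if [ -w /var/db ]; then
            DATA_DIR=/var/db/mypackage
        else
            DATA_DIR="${XDG_DATA_HOME:-$HOME/.local/share}/mypackage"
        fi
        mkdir -p "$DATA_DIR"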

  • How to include error messages in backup reports for SQL Server 2008 R2?

    - by avs099
    Right now I have daily (differential) and weekly (full) backups set up on my SQL Server 2008 R2 instance as SQL Server Agent jobs, with email notifications if a job fails. I do get emails like this:

        JOB RUN: 'Daily backup.Diff backup' was run on 4/11/2012 at 3:00:00 AM
        DURATION: 0 hours, 0 minutes, 28 seconds
        STATUS: Failed
        MESSAGES: The job failed. The Job was invoked by Schedule 9 (Daily backup.Diff backup). The last step to run was step 1 (Diff backup).

    But that often happens because we delete and create databases, and then the diff backup fails. The only way for me to see the actual reason is to go to the Log Viewer's Maintenance Plans logs. Is it possible to include the "Error Message" field from those logs in the notification emails? And, more generally, is it possible to change the notification email templates somehow? Thank you.
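
    One hedged workaround, rather than the canned Agent notification: add a final job step (or a separate alert job) that pulls the failing step's message text out of msdb and mails it yourself. A sketch of the query, run here via sqlcmd; in msdb.dbo.sysjobhistory, run_status = 0 means the step failed:

        sqlcmd -S localhost -E -Q "SELECT TOP 1 message FROM msdb.dbo.sysjobhistory WHERE run_status = 0 ORDER BY instance_id DESC"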

  • What's the best way to work out if a virtual server is overloaded?

    - by zemaj
    I have a series of virtual servers. I'm running a command to log in to each one and look at the load averages using uptime. What's the best way to work out whether those load values represent overloading? I'm running on the Rackspace cloud, so the servers have burst capability and can all be different sizes. I'm a little stumped on how to come up with a consistent way of figuring out when I need to spin up new servers. I can do things like estimate the jobs running on each one, but I'd like a system that runs a little closer to the real resource use available on each instance, as it obviously varies quite a bit! Help greatly appreciated!
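
    A common rule of thumb (only a sketch, and not Rackspace-specific): normalise the load average by the core count, since a load of 4 means very different things on a 1-core and an 8-core instance:

        #!/bin/sh
        # Flag the box as overloaded when the 5-minute load average exceeds
        # the number of cores; the threshold is a rule of thumb, tune to taste.
        cores=$(nproc)
        load=$(awk '{print $2}' /proc/loadavg)
        awk -v l="$load" -v c="$cores" 'BEGIN { exit !(l > c) }' &&
            echo "overloaded: load $load on $cores cores"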

  • Restart an in-use NFS server without interruption (within timeout)

    - by zebediah49
    I have a bunch of compute clients working on jobs and saving output data to a NAS machine. All machines run CentOS 6.2. The clients mount it via automount NFS, with a timeout of 1200 (the default config). The NAS machine needs to be restarted. If I can restart it within that 1200 s (20 minute) window, will the clients just block on I/O until it comes back up? A minor interruption (pause) in service is OK, as long as it doesn't cause the running processes to error out. If necessary I could loop through and SIGSTOP all job processes, restart, and resume them; I just don't want to break the open file handles. How can I run a restart like this without killing processes with open files?
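
    Whether clients block or error out depends largely on the mount type: with NFS "hard" mounts (the default), clients retry indefinitely and processes simply hang in I/O until the server returns, while "soft" mounts can return errors to the application. A quick way to check what the automounter actually used:

        # Show effective NFS mount options on a client; look for hard/soft
        # and the retransmission settings.
        nfsstat -m
        grep nfs /proc/mounts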

  • VPS stops responding every now and again

    - by Or W
    I have a Linode VPS that I use to host some of my websites. It's Ubuntu-based and up to date in terms of packages. I don't have any cron jobs scheduled or any automatic processes. I host a few (up-to-date) WordPress blogs there that have very little traffic altogether. Every day (at a different time) the server stops responding: I can't SSH to it, web access times out, and it just dies until I reboot it through the Linode manager. On the Linode dashboard I can see that the CPU is not very high (2-3%), incoming/outgoing traffic is at 0, and the IO count spikes just before the server stops responding (swap IO is at 2k and the IO rate is at 5k). When I reboot the server everything is fine. I'm trying to figure out a way to analyze what's going on at these random times when the server freezes up. How can I determine the problem?
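
    The swap-IO spike just before the hang suggests memory exhaustion, and one hedged way to confirm it is to leave a lightweight collector running and read its history back after the next freeze. A sketch using the sysstat package on Ubuntu:

        # Install sysstat and enable its periodic collector
        sudo apt-get install sysstat
        sudo sed -i 's/ENABLED="false"/ENABLED="true"/' /etc/default/sysstat
        # After the next freeze, replay the history from just before the hang:
        sar -r    # memory and swap utilisation
        sar -B    # paging activity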

  • Forensics on Virtual Private Servers [closed]

    - by intiha
    So these days, with all the talk about hacked machines being used for malware spreading and botnet C&C, the one issue that is not clear to me is this: what do law enforcement agencies do once they have identified a server as a source or controller of an attack/APT, and that server is a VPS in my cluster/datacenter? Do they take away the entire machine? That option seems to carry a lot of collateral damage, so I am not sure what actually happens, and what the best practices are for system admins to help law enforcement do its job while keeping our jobs!

  • rsync (or robocopy) from multiple computers - what happens?

    - by TheCleaner
    If a Linux server has two different nightly rsync jobs for the same folder to two different destinations, do both destinations end up with the same final set of files? Or does the first job run and set something on the source folder/files that would cause the second rsync job not to notice the daily changes/updates to the source? The same question applies to a Windows environment using something like robocopy, or even a "differential" backup using BUE or similar. Does each "sync" compare the destination to the source and update the destination regardless of whether it is synced multiple times to different destinations?
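
    For rsync the answer should be yes: it keeps no state on the source and decides what to transfer by comparing source and destination afresh on every run, so independent jobs like the sketch below (hosts and paths are illustrative) don't interfere. The caveat is tools that rely on the Windows archive attribute, such as robocopy /M or archive-bit-based "differential" backups: those reset a flag on the source files, so the first job can hide changes from the second.

        # Each run compares /srv/data against its own destination; neither
        # job leaves a marker on the source that affects the other.
        rsync -a /srv/data/ backup1.example.com:/backups/data/
        rsync -a /srv/data/ backup2.example.com:/backups/data/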

  • Apache spawning one process a day, what causes it?

    - by xtrimsky
    I have a PHP server running, with cron jobs etc., and once a day (between midnight and 3 am) Apache spawns one process that never ends. The server is a virtual server, so in a couple of days this eats up the whole memory. Is there a way to figure out what is hanging? Which PHP script isn't finishing, or which URL via Apache triggered it? I have looked in the access logs and error logs but didn't find anything unusual. Thank you
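
    One hedged way to catch the culprit after the fact: find the old worker by its elapsed time, then look at what it is blocked on (Apache's mod_status page with ExtendedStatus On also shows the request each worker is serving):

        # List Apache workers with their age; a day-old one stands out.
        ps -eo pid,etime,args | grep '[a]pache2'
        # Attach to the stuck PID (12345 is illustrative) to see what it's doing:
        sudo strace -p 12345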

  • Can I rent exclusive time on a powerful server running linux? [closed]

    - by Mark Borgerding
    My company is involved in a proposal that requires speed estimates of our software on a server with the latest and greatest processors. This is not the first time we've been in this situation. The servers themselves are too expensive to buy a new one every time, so we end up extrapolating from what we have. There are so many variables (processor generation and speed, memory speed, memory channels, cache configurations) that extrapolation is difficult and error-prone. Is there a business that rents time on the newest servers? At least part of the time we'd need exclusive access to an otherwise quiescent system, either via ssh shell access or unattended batch jobs. I am not looking for general cloud computing services. I don't need much time on the server, but it needs to be exclusive, and the server needs to be pretty cutting-edge for a solid basis of estimate.

  • Share an iTunes library from a NAS across several clients - how in Windows?

    - by Mych
    I found this article http://gigaom.com/apple/one-itunes-library-on-multiple-computers/ which describes sharing a single library with multiple clients. Unfortunately the article is for Macs and I use Windows. It mentions three jobs that need to be completed: A) pointing all clients to the location of the library - this is understandable and I can replicate it on Windows clients; B) universal library set-up - this mentions holding the Option key and double-clicking the iTunes icon, so that you can create the index at a specific location; C) pointing clients to the index - again this mentions Option-double-click. What is the Windows equivalent of Option-double-clicking iTunes?

  • Checking that tasks are executed

    - by homer5439
    I'm not sure how to explain this. Once one starts having dozens or hundreds of servers, each running some sort of periodic jobs (mostly from cron), there is a problem of making sure (or as sure as possible) that these tasks actually ran. I mean, I get an email if a job fails, and no mail if it succeeds, but also no mail if it doesn't run for whatever reason. Sure, I could change the jobs to send a "successfully ran" email, only to be flooded by mail that most of the time I don't want to see. Basically, I want to be notified only if: a task ran and failed, or a task didn't run at the expected time. Is there a way to do this?
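
    The classic pattern for the "didn't run at all" case is a dead-man's switch: each job touches a stamp file on success, and a separate watchdog alerts only when the stamp goes stale, which covers both failure and silence. A sketch (paths, thresholds and addresses are illustrative):

        # In the job's crontab: record success with a stamp file.
        0 2 * * * /usr/local/bin/nightly-task && touch /var/run/stamps/nightly-task

        # Watchdog, run hourly from cron: alert if the stamp is older than 25h.
        if [ -n "$(find /var/run/stamps -name nightly-task -mmin +1500)" ]; then
            echo "nightly-task has not succeeded in over 25 hours" |
                mail -s "ALERT: nightly-task stale" ops@example.com
        fi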

  • Slurm: How to find out how much memory is not allocated on a given node

    - by PlagTag
    I am new to SLURM. I am searching for a convenient way to see how much memory on a node (or node list) is available for my srun allocation. I have already played around with sinfo, scontrol and sstat, but none of them gives me the information I need in one convenient overview. I had the idea of writing a shell script that fetches the relevant fields of all jobs from scontrol and sums them up, but there must be an easier way. Would be great if anyone has a hint or an idea!
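
    Two hedged one-liners that usually come close (the %e field and the memory attributes shown by scontrol both depend on the Slurm version in use):

        # Per node: hostname, total memory and OS-reported free memory, in MB
        sinfo -N -o "%N %m %e"
        # Allocation view: what Slurm itself has handed out per node
        scontrol show nodes | grep -E 'NodeName|RealMemory'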

  • Can I use @reboot in cron.d files?

    - by fschwiet
    I want to run a job with cron on reboot, as a particular user. I have been able to do this successfully by using crontab to write to /var/spool/cron/crontabs/username with something like:

        @reboot ./run.sh >>~/tracefile 2>&1

    However, I want to use /etc/cron.d/filename. Cron jobs in this file require an extra column to indicate which user they run as, so I use:

        @reboot wwwuser ./run.sh >>~/tracefile 2>&1

    This doesn't seem to work. Should I be able to use @reboot with a username in a cron.d file?
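
    @reboot is accepted in /etc/cron.d files (with the extra user column, as above); the usual trap is the relative path. cron does not necessarily start the job in that user's home directory, so ./run.sh may simply never be found. A hedged rewrite using absolute paths (locations are illustrative):

        # /etc/cron.d/runsh
        @reboot wwwuser /home/wwwuser/run.sh >>/home/wwwuser/tracefile 2>&1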

  • Best server for mailing application [closed]

    - by Cyber Junkie
    My application is similar to a reminder service: it reminds users of events that they scheduled. I'm sending emails to users through a PHP script. I'm not sending one email to multiple recipients; each recipient receives a different message. I plan to use cron jobs every minute and expect the application to send roughly 200 individual emails per hour (for a small user base that may grow). I don't have hosting experience with this type of application. I plan to start on a shared host and move up in the future to a VPS or dedicated server. Most shared hosts that I looked into allow 50-100 emails per hour, with delays between mailings. Please kindly inform me what I should look for in web hosts for this kind of application.
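
    Whatever the host, the cron side of this is simple; the part worth designing up front is capping how many mails each run sends so a host's hourly limit is never exceeded. A sketch (the script path and its --limit flag are hypothetical):

        # Fire the PHP sender every minute, sending at most 5 mails per run
        # (a 5 x 60 = 300/hour ceiling; tune to the host's actual limit).
        * * * * * php /home/user/app/send_reminders.php --limit=5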

  • Automated monitoring of a remote system that sends email alerts.

    - by user23105
    I need to monitor a remote system where the only access I have is a subscription to email alerts for completed/failed jobs. I would like a system that can monitor these emails and send an SMS or other alert when: an email indicates failure; a process that was expected to complete by a given time has not; or a process that was expected to complete N minutes after the completion of another process has not completed. Are there any existing tools that allow this? I'd consider any option (SaaS, open source, COTS) as long as I don't have to write it myself! Cheers, Blake

  • Port forwarding stopped working on my Linksys WRT54G2 router

    - by user23490
    How do I get it working again? I had simply forwarded the needed ports (e.g. for Counter-Strike, FTP, HTTP, etc.), but now, with the same system, same OS, same router and same settings, it's not working. I tried resetting the router to factory defaults and doing everything again, but still no success. Everything else works, though: the router connects to my DSL ISP and I can access the Internet easily. PS: I tried on both Windows and Ubuntu; on Windows I use it for Counter-Strike and other things (e.g. hosting my local FTP server).

  • Rails routing to XML/JSON without views gone mad

    - by John Schulze
    I have a mystifying problem. In a very simple Ruby app I have three classes: Drivers, Jobs and Vehicles. All three classes consist only of Id and Name. All three classes have the same #index and #show methods and only render JSON or XML (this is in fact true for all their CRUD methods; they are identical in everything but name). There are no views. For example:

        def index
          @drivers = Driver.all
          respond_to do |format|
            format.js { render :json => @drivers }
            format.xml { render :xml => @drivers }
          end
        end

        def show
          @driver = Driver.find(params[:id])
          respond_to do |format|
            format.js { render :json => @driver }
            format.xml { render :xml => @driver }
          end
        end

    The models are similarly minimalistic and only contain:

        class Driver < ActiveRecord::Base
          validates_presence_of :name
        end

    In routes.rb I have:

        map.resources :drivers
        map.resources :jobs
        map.resources :vehicles
        map.connect ':controller/:action/:id'
        map.connect ':controller/:action/:id.:format'

    I can perform POST/create, GET/index and PUT/update on all three classes, and GET/read used to work as well, until I installed the "has many polymorphs" ActiveRecord plugin and added to environment.rb:

        require File.join(File.dirname(__FILE__), 'boot')
        require 'has_many_polymorphs'
        require 'active_support'

    Now, for two of the three classes, I cannot do a read any more. If I go to localhost:3000/drivers they all list nicely in XML, but if I go to localhost:3000/drivers/3 I get an error:

        Processing DriversController#show (for 127.0.0.1 at 2009-06-11 20:34:03) [GET]
          Parameters: {"id"=>"3"}
        Driver Load (0.0ms)  SELECT * FROM "drivers" WHERE ("drivers"."id" = 3)
        ActionView::MissingTemplate (Missing template drivers/show.erb in view path app/views):
          app/controllers/drivers_controller.rb:14:in `show'
        ...etc

    This is followed by another unexpected error:

        Processing ApplicationController#show (for 127.0.0.1 at 2009-06-11 21:35:52) [GET]
          Parameters: {"id"=>"3"}
        NameError (uninitialized constant ApplicationController::AreaAccessDenied):
        ...etc

    What is going on here? Why does the same code work for one class but not the other two? Why is it trying to do a #view on the ApplicationController? I found that if I create a simple HTML view for each of the three classes, these work fine. To each class I add:

        format.html # show.html.erb

    With this in place, going to localhost:3000/drivers/3 renders the item in HTML and I get no errors in the log. But if I attach .xml to the URL it again fails for two of the classes (with the same error message as before), while one outputs XML as expected. Even stranger, on the two failing classes, when adding .js to the URL (to trigger JSON rendering) I get the HTML output instead! Is it possible this has something to do with the "has many polymorphs" plugin? I have heard of people having routing issues after installing it. Removing "has many polymorphs" and "active_support" from environment.rb (and restarting the server) seems to make no difference whatsoever, yet my problems started after it was installed. I've spent a number of hours on this problem now and am starting to get a little desperate; Google turns up virtually no information, which makes me suspect I must have missed something elementary. Any enlightenment or hint gratefully received! JS

  • Run unit tests in Jenkins / Hudson in automated fashion from dev to build server

    - by Kevin Donde
    We are currently running a Jenkins (Hudson) CI server to build and package our .NET web projects and database projects. Everything is working great, but I want to start writing unit tests and then only pass the build if the unit tests pass. We are using the built-in MSBuild task to build the web project, with the following arguments:

        MsBuild Version: .NET 4.0
        MsBuild Build File: ./WebProjectFolder/WebProject.csproj
        Command Line Arguments: /target:Rebuild /p:Configuration=Release;DeployOnBuild=True;PackageLocation=".\obj\Release\WebProject.zip";PackageAsSingleFile=True

    We need to run automated tests over our code that run automatically when we build on our own machines (a post-build event, possibly) but also run when Jenkins does a build for that project. If you run it like this, it doesn't build the unit test project, because the web project doesn't reference the test project. The test project would reference the web project, but I'm pretty sure that would be butchering our automated builds, as they exist primarily to build and package our deployments. Running these tests should be a step in that automated build-and-package process. Options:

    1. Create two Jenkins jobs: one to run the tests; if the tests pass, another build is triggered which builds and packages the web project. Put the post-build event on the test project.
    2. Build the solution instead of the project (make sure the solution contains the required tests) and put post-build events on any test projects that run the NUnit console to execute the tests. Then use the command line to copy all the required files from each of the bin and content directories into a package.
    3. Just build the test project in Jenkins instead of the web project. The test project would reference the web project (depending on what you're testing) and build it.

    Problems:

    1. There are two jobs, not one: two things to debug, one to see whether the tests passed and one to build and compile the web project. The tests could pass but the build could fail if the breakage is in something your tests don't exercise.
    2. This requires us to know exactly what goes into the build; right now MSBuild does it all for us. If you have multiple teams working on a project, every time an extra folder is created you have to worry about possibly brittle command-line statements.
    3. This seems like a corruption of our main purpose here: the tests should be a step in this process, not the overriding, most important thing in it. I'm also not 100% sure that a triggered build is the same as a normal build: does it do all the same things, move all the correct files into the same directories, etc.?

    The initial problem: we want to run our tests whenever our main project is built, but adding a post-build event to the web project that runs against the test project doesn't work, because the web project doesn't reference the test project and won't trigger a build of it. I could go on, but that's enough. We've spent about a week trying to make this work nicely but haven't succeeded. Feel free to edit this if you feel you can get a better response.
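
    If the solution-build route (option 2) is taken, the test step itself can be a single batch command in the Jenkins job: nunit-console returns a non-zero exit code when any test fails, which Jenkins already treats as a failed build. A sketch (the NUnit install path and assembly name are illustrative):

        REM Run the unit tests after the solution build; a non-zero exit
        REM code from nunit-console marks the Jenkins build as failed.
        "C:\Program Files\NUnit\bin\nunit-console.exe" WebProject.Tests\bin\Release\WebProject.Tests.dll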

  • What's wrong with my producer-consumer queue design?

    - by toasteroven
    I'm starting with the C# code example here. I'm trying to adapt it for a couple of reasons: 1) in my scenario, all tasks will be put in the queue up front, before consumers start, and 2) I wanted to abstract the worker into a separate class instead of having raw Thread members within the WorkQueue class. My queue doesn't seem to dispose of itself, though; it just hangs, and when I break in Visual Studio it's stuck on the _th.Join() line for WorkerThread #1. Also, is there a better way to organize this? Something about exposing the WaitOne() and Join() methods seems wrong, but I couldn't think of an appropriate way to let the WorkerThread interact with the queue. Also, an aside: if I call q.Start(#) at the top of the using block, only some of the threads ever kick in (e.g. threads 1, 2, and 8 process every task). Why is this? Is it a race condition of some sort, or am I doing something wrong?

        using System;
        using System.Collections.Generic;
        using System.Text;
        using System.Messaging;
        using System.Threading;
        using System.Linq;

        namespace QueueTest
        {
            class Program
            {
                static void Main(string[] args)
                {
                    using (WorkQueue q = new WorkQueue())
                    {
                        q.Finished += new Action(delegate { Console.WriteLine("All jobs finished"); });
                        Random r = new Random();
                        foreach (int i in Enumerable.Range(1, 10))
                            q.Enqueue(r.Next(100, 500));
                        Console.WriteLine("All jobs queued");
                        q.Start(8);
                    }
                }
            }

            class WorkQueue : IDisposable
            {
                private Queue<int> _jobs = new Queue<int>();
                private int _job_count;
                private EventWaitHandle _wh = new AutoResetEvent(false);
                private object _lock = new object();
                private List<WorkerThread> _th;

                public event Action Finished;

                public WorkQueue() { }

                public void Start(int num_threads)
                {
                    _job_count = _jobs.Count;
                    _th = new List<WorkerThread>(num_threads);
                    foreach (int i in Enumerable.Range(1, num_threads))
                    {
                        _th.Add(new WorkerThread(i, this));
                        _th[_th.Count - 1].JobFinished += new Action<int>(WorkQueue_JobFinished);
                    }
                }

                void WorkQueue_JobFinished(int obj)
                {
                    lock (_lock)
                    {
                        _job_count--;
                        if (_job_count == 0 && Finished != null)
                            Finished();
                    }
                }

                public void Enqueue(int job)
                {
                    lock (_lock)
                        _jobs.Enqueue(job);
                    _wh.Set();
                }

                public void Dispose()
                {
                    Enqueue(Int32.MinValue);
                    _th.ForEach(th => th.Join());
                    _wh.Close();
                }

                public int GetNextJob()
                {
                    lock (_lock)
                    {
                        if (_jobs.Count > 0)
                            return _jobs.Dequeue();
                        else
                            return Int32.MinValue;
                    }
                }

                public void WaitOne()
                {
                    _wh.WaitOne();
                }
            }

            class WorkerThread
            {
                private Thread _th;
                private WorkQueue _q;
                private int _i;

                public event Action<int> JobFinished;

                public WorkerThread(int i, WorkQueue q)
                {
                    _i = i;
                    _q = q;
                    _th = new Thread(DoWork);
                    _th.Start();
                }

                public void Join()
                {
                    _th.Join();
                }

                private void DoWork()
                {
                    while (true)
                    {
                        int job = _q.GetNextJob();
                        if (job != Int32.MinValue)
                        {
                            Console.WriteLine("Thread {0} Got job {1}", _i, job);
                            Thread.Sleep(job * 10); // in reality would do actual work here
                            if (JobFinished != null)
                                JobFinished(job);
                        }
                        else
                        {
                            Console.WriteLine("Thread {0} no job available", _i);
                            _q.WaitOne();
                        }
                    }
                }
            }
        }

  • jQuery accordion keeps hiding parent element?

    - by PopRocks4344
    Well, I fixed my original question by including empty <div></div> elements after the closing h3 tags, but now those links won't open. Basically, in the HTML below, I'm trying to make it so that only the Services section slides open, but I still want the h3 links to open their pages:

        <div id="sp-accordion">
          <h3><a href="/?page_id=3">Home</a></h3><div></div>
          <h3><a href="/?page_id=2">About Us</a></h3><div></div>
          <h3><a href="#"> Services</a></h3>
          <div>
            <p><a href="/?page_id=16">S1</a></p>
            <p><a href="/?page_id=14">S2</a></p>
            <p><a href="/?page_id=20">S3</a></p>
          </div>
          <h3><a href="/?page_id=9">Contact Us</a></h3><div></div>
          <h3><a href="/?page_id=5">Tips</a></h3><div></div>
          <h3><a href="/?page_id=108">Jobs</a></h3><div></div>
          <h3><a href="/?page_id=131">Newsletter</a></h3><div></div>
        </div>

    The accordion works, in that when you click on an h3 tag the container slides open; however, when it slides open, it hides the h3 before it. So, in the HTML below, when I click on "Services" the div beneath it slides open, but the "About Us" h3 disappears. This is the original HTML:

        <div id="sp-accordion">
          <h3><a href="/?page_id=3">Home</a></h3>
          <h3><a href="/?page_id=2">About Us</a></h3>
          <h3><a href="#"> Services</a></h3>
          <div>
            <p><a href="/?page_id=16">S1</a></p>
            <p><a href="/?page_id=14">S2</a></p>
            <p><a href="/?page_id=20">S3</a></p>
          </div>
          <h3><a href="/?page_id=9">Contact Us</a></h3>
          <h3><a href="/?page_id=5">Tips</a></h3>
          <h3><a href="/?page_id=108">Jobs</a></h3>
          <h3><a href="/?page_id=131">Newsletter</a></h3>
        </div>

    I'm using jQuery UI, so the jQuery is just this:

        $(document).ready(function() {
            $("#sp-accordion").accordion({autoHeight:false});
        });
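
    jQuery UI's accordion captures clicks on the headers to drive the open/close behaviour, which is why the header links stop navigating. One common workaround (a sketch, not the only option) is to bind a click handler on the anchors themselves and follow the link manually:

        $(document).ready(function() {
            $("#sp-accordion").accordion({autoHeight:false});
            // Let header links navigate instead of only toggling the panel:
            $("#sp-accordion h3 a").click(function() {
                window.location.href = $(this).attr("href");
            });
        });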

  • Unable to get my master & details GridView to work

    - by Javier
    I'm unable to get this to work. I'm very new at programming and would appreciate any help on this.

        <%@ Page Language="C#" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <script runat="server">
            protected void Page_Load(object sender, EventArgs e)
            {
            }
            protected void DataGridSqlDataSource_Selecting(object sender, SqlDataSourceSelectingEventArgs e)
            {
            }
        </script>
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title>Untitled Page</title>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <asp:SqlDataSource ID="DataGrid2SqlDataSource" runat="server"
                    ConnectionString="<%$ ConnectionStrings:JobPostings1ConnectionString %>"
                    SelectCommand="SELECT [Jobs_PK], [Position_Title], [Educ_Level], [Grade], [JP_Description], [Job_Status], [Position_ID] FROM [Jobs]"
                    FilterExpression="Jobs_PK='@Jobs_PK'">
                    <FilterParameters>
                        <asp:ControlParameter Name="Jobs_PK" ControlId="GridView1" PropertyName="SelectedValue" />
                    </FilterParameters>
                </asp:SqlDataSource>
                <asp:SqlDataSource ID="DataGridSqlDataSource" runat="server"
                    ConnectionString="<%$ ConnectionStrings:JobPostings1ConnectionString %>"
                    SelectCommand="SELECT [Position_Title], [Jobs_PK] FROM [Jobs]"
                    onselecting="DataGridSqlDataSource_Selecting">
                </asp:SqlDataSource>
                <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" DataKeyNames="Jobs_PK"
                    DataSourceID="DataGridSqlDataSource" AllowPaging="True" AutoGenerateSelectButton="True"
                    SelectedIndex="0" Width="100px">
                    <Columns>
                        <asp:BoundField DataField="Position_Title" HeaderText="Position_Title" SortExpression="Position_Title" />
                        <asp:BoundField DataField="Jobs_PK" HeaderText="Jobs_PK" InsertVisible="False" ReadOnly="True" SortExpression="Jobs_PK" />
                    </Columns>
                </asp:GridView>
                <br />
                <asp:DetailsView ID="DetailsView1" runat="server" AutoGenerateRows="False" DataKeyNames="Jobs_PK"
                    DataSourceID="DataGrid2SqlDataSource" Height="50px" Width="125px">
                    <Fields>
                        <asp:BoundField DataField="Jobs_PK" HeaderText="Jobs_PK" InsertVisible="False" ReadOnly="True" SortExpression="Jobs_PK" />
                        <asp:BoundField DataField="Position_Title" HeaderText="Position_Title" SortExpression="Position_Title" />
                        <asp:BoundField DataField="Educ_Level" HeaderText="Educ_Level" SortExpression="Educ_Level" />
                        <asp:BoundField DataField="Grade" HeaderText="Grade" SortExpression="Grade" />
                        <asp:BoundField DataField="JP_Description" HeaderText="JP_Description" SortExpression="JP_Description" />
                        <asp:BoundField DataField="Job_Status" HeaderText="Job_Status" SortExpression="Job_Status" />
                        <asp:BoundField DataField="Position_ID" HeaderText="Position_ID" SortExpression="Position_ID" />
                    </Fields>
                </asp:DetailsView>
            </div>
            </form>
        </body>

    Error message:

        Cannot perform '=' operation on System.Int32 and System.String.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.Data.EvaluateException: Cannot perform '=' operation on System.Int32 and System.String.
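
    The exception text points at the filter: FilterExpression="Jobs_PK='@Jobs_PK'" compares the integer Jobs_PK column with the literal string '@Jobs_PK'. FilterExpression does not use named @ parameters; it uses format placeholders filled from the FilterParameters collection in order, and leaving the placeholder unquoted keeps the comparison numeric. A hedged correction of just that attribute:

        FilterExpression="Jobs_PK = {0}"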

  • Error while installing VMware Tools v8.8.2 in Ubuntu 12.04 beta

    - by Dipen Patel
    I just upgraded to Ubuntu 12.04 from 11.10 using the update manager. I use it as a virtual machine on VMware Player 4.x. As usual, I installed VMware Tools to enable full-screen mode and the shared-folder functionality, but I got an error while building the modules for the shared-folder and fast-networking utilities. The error is:

        /tmp/vmware-root/modules/vmhgfs-only/fsutil.c: In function ‘HgfsChangeFileAttributes’:
        /tmp/vmware-root/modules/vmhgfs-only/fsutil.c:610:4: error: assignment of read-only member ‘i_nlink’
        make[2]: *** [/tmp/vmware-root/modules/vmhgfs-only/fsutil.o] Error 1
        make[2]: *** Waiting for unfinished jobs....
        /tmp/vmware-root/modules/vmhgfs-only/file.c:128:4: warning: initialization from incompatible pointer type [enabled by default]
        /tmp/vmware-root/modules/vmhgfs-only/file.c:128:4: warning: (near initialization for ‘HgfsFileFileOperations.fsync’) [enabled by default]
        /tmp/vmware-root/modules/vmhgfs-only/tcp.c:53:30: error: expected ‘)’ before numeric constant
        /tmp/vmware-root/modules/vmhgfs-only/tcp.c:56:25: error: expected ‘)’ before ‘int’
        /tmp/vmware-root/modules/vmhgfs-only/tcp.c:59:33: error: expected ‘)’ before ‘int’
        make[2]: *** [/tmp/vmware-root/modules/vmhgfs-only/tcp.o] Error 1
        make[1]: *** [_module_/tmp/vmware-root/modules/vmhgfs-only] Error 2
        make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-22-generic'
        make: *** [vmhgfs.ko] Error 2
        make: Leaving directory `/tmp/vmware-root/modules/vmhgfs-only'

    The filesystem driver (vmhgfs module) is used only for the shared-folder feature; the rest of the software provided by VMware Tools is designed to work independently of it. Let me know if anyone has encountered and solved this problem. Regards, Dipen Patel
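
    For context: the "assignment of read-only member i_nlink" error is characteristic of module sources that predate Linux 3.2 (the kernel shipped with 12.04), where direct assignment to i_nlink was disallowed. One hedged alternative, if the package is available in your release's repositories, is the distribution-maintained tools instead of the tarball installer:

        sudo apt-get install open-vm-tools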

  • Minimize Windows Live Mail to the System Tray in Windows 7

    - by Asian Angel
    Are you frustrated that you cannot minimize Windows Live Mail to the system tray in Windows 7? With just a few tweaks you can make Live Mail minimize to the system tray just like in earlier versions of Windows.

    Windows Live Mail in Windows Vista: in Windows Vista you could minimize Windows Live Mail to the system tray, if desired, using the context menu.

    Windows Live Mail in Windows 7: in Windows 7 you can minimize the app window but not hide it in the system tray. The "Hide window when minimized" entry is missing from the context menu, and all you have is the window icon taking up space in your taskbar.

    How to add the context menu entry back: right-click on the program shortcut(s) and select Properties. When the properties window opens, click on the Compatibility tab and enable the "Run this program in compatibility mode for" setting. Choose "Windows Vista (Service Pack 2)" from the drop-down menu and click OK. Once you have restarted Windows Live Mail you will have access to the "Hide window when minimized" menu entry again, and just like that your taskbar is clear when Windows Live Mail is minimized. If you have wanted the ability to minimize Windows Live Mail to the system tray in Windows 7, this little tweak fixes the problem.
