Search Results

Search found 20388 results on 816 pages for 'nvidia current'.

Page 389/816 | < Previous Page | 385 386 387 388 389 390 391 392 393 394 395 396  | Next Page >

  • sudo dhclient eth0 | sudo: unable to resolve host ubuntu

    - by Merianos Nikos
    I have a computer that belongs to a friend of mine. It runs Ubuntu (I don't know which version, given the current state of the system), and while he was updating the kernel he rebooted the machine (yes, that can happen!). Currently I am trying to recover the system using a live USB with Ubuntu installed on it. What I am doing is the following: Update Failure. The problem is that when I try to execute the fifth step, I get an error because I do not have Internet access. The computer is properly wired to my router, and I have Internet access everywhere except in the shell (this message, for example, is being sent from the live USB). In the shell I run: sudo dhclient eth0 but the result of this command is the following message: sudo: unable to resolve host ubuntu My hosts file has the following content:

    127.0.0.1 localhost
    127.0.1.1 ubuntu
    # The following lines are desirable for IPv6 capable hosts
    ::1 ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    ff02::3 ip6-allhosts

    How can I get connected to the Internet in order to download the appropriate updates?
    UPDATE 1: I just noticed that when I execute ifconfig I get the following warning: Warning: cannot open /proc/net/dev (No such file or directory). Limited output.
    UPDATE 2: I just found this, and it looks like it solves the problem with the dhclient eth0 command, but I still cannot ping Google.
    UPDATE 3: Now sudo dhclient eth0 returns the following message: RTNETLINK answers: File exists
    UPDATE 4: I just pinged my router and I get a response, so it looks like I cannot ping anything beyond the router (i.e. Google). Kind regards ...
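
    A minimal recovery sketch for a situation like this, assuming the live session's hostname simply isn't listed in /etc/hosts and that the router sits at 192.168.1.1 (both assumptions; adjust to the actual network):

    # sudo stops complaining once the current hostname appears in /etc/hosts
    echo "127.0.1.1 $(hostname)" | sudo tee -a /etc/hosts

    # bring the interface up and request a lease again
    sudo ip link set eth0 up
    sudo dhclient -v eth0

    # if only the router answers, check the default route and DNS
    ip route show                                        # expect: default via 192.168.1.1 (assumed router IP)
    sudo ip route add default via 192.168.1.1            # only if the default route is missing
    echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf  # temporary DNS for the live session
    ping -c 3 google.com

    "RTNETLINK answers: File exists" usually just means the route is already there, so in that case only the DNS line should matter.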

    Read the article

  • Team Foundation Server 2012 Build Global List Problems

    - by Bob Hardister
    My experience with the upgrade and use of TFS 2012 has been very positive. I did come across a couple of issues recently that tripped things up for a while.

    ISSUE 1
    The first issue is that 2012, prior to Update 1, published an invalid build list item value to the collection global list. In 2010, the build global list item value syntax uses an underscore between the build definition and the build number. In the 2012 RTM this underscore was replaced with a backslash, which is invalid. Specifically, an upload of the global list fails when the backslash is followed at some point by a period. The error when using the API is:

    <detail ExceptionMessage="TF26204: The account you entered is not recognized. Contact your Team Foundation Server administrator to add your account." BaseExceptionName="Microsoft.TeamFoundation.WorkItemTracking.Server.ValidationException"><details id="600019" xmlns="http://schemas.microsoft.com/TeamFoundation/2005/06/WorkItemTracking/faultdetail/03" /></detail>

    A similar error (screenshot not reproduced here) appears when uploading the global list via the process editor. This issue is corrected in Update 1, where the backslash is changed to a forward slash.

    ISSUE 2
    The second issue is that when upgrading from 2010 to 2012, the builds from 2010 are not published to the 2012 global list. After the upgrade the 2012 global list doesn't have any builds, and only builds run in 2012 are published to the global list. This was reported to the MSDN forums and Connect. To correct this I wrote a utility to pull all the builds and recreate the builds global list for each project in each collection. It is a console application with a Program.cs, a GlobalLists.cs and an app.config (not published here). The utility connects to TFS 2012 and loops through the collections, or a target collection as specified in the app.config. It then loops through the projects, the build definitions, and the builds. It creates a global list for each project if that project has at least one build, and then imports the new list to TFS. Here's the code for the Program and GlobalLists classes.

    Program.cs

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using Microsoft.TeamFoundation.Framework.Client;
    using Microsoft.TeamFoundation.Framework.Common;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.Server;
    using System.IO;
    using System.Xml;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;
    using System.Diagnostics;
    using Utilities;
    using System.Configuration;

    namespace TFSProjectUpdater_CLC {
      class Program {
        static void Main(string[] args) {
          DateTime temp_d = System.DateTime.Now;
          string logName = temp_d.ToShortDateString();
          logName = logName.Replace("/", "_");
          logName = logName + "_" + temp_d.TimeOfDay;
          logName = logName.Replace(":", ".");
          logName = "TFSGlobalListBuildsUpdater_" + logName + ".log";
          Trace.Listeners.Add(new TextWriterTraceListener(Path.Combine(ConfigurationManager.AppSettings["logLocation"], logName)));
          Trace.AutoFlush = true;
          Trace.WriteLine("Start:" + DateTime.Now.ToString());
          Console.WriteLine("Start:" + DateTime.Now.ToString());
          string tfsServer = ConfigurationManager.AppSettings["TargetTFS"].ToString();
          GlobalLists gl = new GlobalLists();
          //replace this with the URL to your TFS instance.
          Uri tfsUri = new Uri("https://" + tfsServer + "/tfs");
          //bool foundLite = false;
          TfsConfigurationServer config = new TfsConfigurationServer(tfsUri, new UICredentialsProvider());
          config.EnsureAuthenticated();
          ITeamProjectCollectionService collectionService = config.GetService<ITeamProjectCollectionService>();
          IList<TeamProjectCollection> collections = collectionService.GetCollections().OrderBy(collection => collection.Name.ToString()).ToList();
          //target Collection
          string targetCollection = ConfigurationManager.AppSettings["targetCollection"];
          foreach (TeamProjectCollection coll in collections) {
            if (targetCollection.Equals(string.Empty)) {
              if (!coll.Name.Equals("TFS Archive") && !coll.Name.Equals("DefaultCol") && !coll.Name.Equals("Team Project Template Gallery")) {
                doWork(coll, tfsServer);
              }
            } else {
              if (coll.Name.Equals(targetCollection)) {
                doWork(coll, tfsServer);
              }
            }
          }
          Trace.WriteLine("Finished:" + DateTime.Now.ToString());
          Console.WriteLine("Finished:" + DateTime.Now.ToString());
          if (System.Diagnostics.Debugger.IsAttached) {
            Console.WriteLine("\nHit any key to exit...");
            Console.ReadKey();
          }
          Trace.Close();
        }

        static void doWork(TeamProjectCollection coll, string tfsServer) {
          GlobalLists gl = new GlobalLists();
          //target Collection
          string targetProject = ConfigurationManager.AppSettings["targetProject"];
          Trace.WriteLine("Collection: " + coll.Name);
          Uri u = new Uri("https://" + tfsServer + "/tfs/" + coll.Name.ToString());
          TfsTeamProjectCollection c = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(u);
          ICommonStructureService icss = c.GetService<ICommonStructureService>();
          try {
            Trace.WriteLine("\tChecking Collection Global Lists.");
            gl.RebuildBuildGlobalLists(c);
          } catch (Exception ex) {
            Console.WriteLine("Exception! :" + coll.Name);
          }
        }
      }
    }

    GlobalLists.cs

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.Framework.Client;
    using Microsoft.TeamFoundation.Framework.Common;
    using Microsoft.TeamFoundation.Server;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;
    using Microsoft.TeamFoundation.Build.Client;
    using System.Configuration;
    using System.Xml;
    using System.Xml.Linq;
    using System.Diagnostics;

    namespace Utilities {
      public class GlobalLists {
        string GL_NewList = @"<gl:GLOBALLISTS xmlns:gl=""http://schemas.microsoft.com/VisualStudio/2005/workitemtracking/globallists""> <GLOBALLIST> </GLOBALLIST> </gl:GLOBALLISTS>";

        public void RebuildBuildGlobalLists(TfsTeamProjectCollection _tfs) {
          WorkItemStore wis = new WorkItemStore(_tfs);
          //export the current global lists file for the collection to save as a backup
          XmlDocument globalListsFile = wis.ExportGlobalLists();
          globalListsFile.Save(@"c:\temp\" + _tfs.Name.Replace("\\", "_") + "_backupGlobalList.xml");
          LogExportCurrentCollectionGlobalListsAsBackup(_tfs);
          //Build a new global build list from each build definition within each team project
          IBuildServer buildServer = _tfs.GetService<IBuildServer>();
          foreach (Project p in wis.Projects) {
            XmlDocument newProjectGlobalList = new XmlDocument();
            newProjectGlobalList.LoadXml(GL_NewList);
            LogInstanciateNewProjectBuildGlobalList(_tfs, p);
            BuildNewProjectBuildGlobalList(_tfs, wis, newProjectGlobalList, buildServer, p);
            LogEndOfProject(_tfs, p);
          }
        }

        // Private Methods
        private static void BuildNewProjectBuildGlobalList(TfsTeamProjectCollection _tfs, WorkItemStore wis, XmlDocument newProjectGlobalList, IBuildServer buildServer, Project p) {
          //locate the template node
          XmlNamespaceManager nsmgr = new XmlNamespaceManager(newProjectGlobalList.NameTable);
          nsmgr.AddNamespace("gl", "http://schemas.microsoft.com/VisualStudio/2005/workitemtracking/globallists");
          XmlNode node = newProjectGlobalList.SelectSingleNode("//gl:GLOBALLISTS/GLOBALLIST", nsmgr);
          LogLocatedGlobalListNode(_tfs, p);
          //add the name attribute for the project build global list
          XmlElement buildListNode = (XmlElement)node;
          buildListNode.SetAttribute("name", "Builds - " + p.Name);
          LogAddedBuildNodeName(_tfs, p);
          //add new builds to the team project build global list
          bool buildsExist = false;
          if (AddNewBuilds(_tfs, newProjectGlobalList, buildServer, p, node, buildsExist)) {
            //import the new build global list for each project that has builds
            //write out temp copy of the global list file to be imported
            newProjectGlobalList.Save(@"c:\temp\" + _tfs.Name.Replace("\\", "_") + "_" + p.Name + "_" + "newGlobalList.xml");
            LogImportReady(_tfs, p);
            wis.ImportGlobalLists(newProjectGlobalList.InnerXml);
            LogImportComplete(_tfs, p);
          }
        }

        private static bool AddNewBuilds(TfsTeamProjectCollection _tfs, XmlDocument newProjectGlobalList, IBuildServer buildServer, Project p, XmlNode node, bool buildsExist) {
          var buildDefinitions = buildServer.QueryBuildDefinitions(p.Name);
          foreach (var buildDefinition in buildDefinitions) {
            var builds = buildDefinition.QueryBuilds();
            foreach (var build in builds) {
              //insert the builds into the current build list node in the correct 2012 format
              buildsExist = true;
              XmlElement listItem = newProjectGlobalList.CreateElement("LISTITEM");
              listItem.SetAttribute("value", buildDefinition.Name + "/" + build.BuildNumber.ToString().Replace(buildDefinition.Name + "_", ""));
              node.AppendChild(listItem);
            }
          }
          if (buildsExist) LogBuildListCreated(_tfs, p); else LogNoBuildsInProject(_tfs, p);
          return buildsExist;
        }

        // Logging Methods
        private static void LogExportCurrentCollectionGlobalListsAsBackup(TfsTeamProjectCollection _tfs) {
          Trace.WriteLine("\tExported Global List for " + _tfs.Name + " collection.");
          Console.WriteLine("\tExported Global List for " + _tfs.Name + " collection.");
        }

        private void LogInstanciateNewProjectBuildGlobalList(TfsTeamProjectCollection _tfs, Project p) {
          Trace.WriteLine("\t\tInstanciated the new build global list for project " + p.Name + " in the " + _tfs.Name + " collection.");
          Console.WriteLine("\t\tInstanciated the new build global list for project \n\t\t\t" + p.Name + " in the \n\t\t\t" + _tfs.Name + " collection.");
        }

        private static void LogLocatedGlobalListNode(TfsTeamProjectCollection _tfs, Project p) {
          Trace.WriteLine("\t\tLocated the build global list node for project " + p.Name + " in the " + _tfs.Name + " collection.");
          Console.WriteLine("\t\tLocated the build global list node for project \n\t\t\t" + p.Name + " in the \n\t\t\t" + _tfs.Name + " collection.");
        }

        private static void LogAddedBuildNodeName(TfsTeamProjectCollection _tfs, Project p) {
          Trace.WriteLine("\t\tAdded the name attribute to the build global list for project " + p.Name + " in the " + _tfs.Name + " collection.");
          Console.WriteLine("\t\tAdded the name attribute to the build global list for project \n\t\t\t" + p.Name + " in the \n\t\t\t" + _tfs.Name + " collection.");
        }

        private static void LogBuildListCreated(TfsTeamProjectCollection _tfs, Project p) {
          Trace.WriteLine("\t\tAdded all builds into the " + "Builds - " + p.Name + " list in the " + _tfs.Name + " collection.");
          Console.WriteLine("\t\tAdded all builds into the " + "Builds - \n\t\t\t" + p.Name + " list in the \n\t\t\t" + _tfs.Name + " collection.");
        }

        private static void LogNoBuildsInProject(TfsTeamProjectCollection _tfs, Project p) {
          Trace.WriteLine("\t\tNo builds found for project " + p.Name + " in the " + _tfs.Name + " collection.");
          Console.WriteLine("\t\tNo builds found for project " + p.Name + " \n\t\t\tin the " + _tfs.Name + " collection.");
        }

        private void LogEndOfProject(TfsTeamProjectCollection _tfs, Project p) {
          Trace.WriteLine("\t\tEND OF PROJECT " + p.Name);
          Trace.WriteLine(" ");
          Console.WriteLine("\t\tEND OF PROJECT " + p.Name);
          Console.WriteLine();
        }

        private static void LogImportReady(TfsTeamProjectCollection _tfs, Project p) {
          Trace.WriteLine("\t\tReady to import the build global list for project " + p.Name + " to the " + _tfs.Name + " collection.");
          Console.WriteLine("\t\tReady to import the build global list for project \n\t\t\t" + p.Name + " to the \n\t\t\t" + _tfs.Name + " collection.");
        }

        private static void LogImportComplete(TfsTeamProjectCollection _tfs, Project p) {
          Trace.WriteLine("\t\tImport of the build global list for project " + p.Name + " to the " + _tfs.Name + " collection completed.");
          Console.WriteLine("\t\tImport of the build global list for project \n\t\t\t" + p.Name + " to the \n\t\t\t" + _tfs.Name + " collection completed.");
        }
      }
    }

    Read the article

  • Mirror using apt-mirror and exclude certain sections/categories

    - by Onitlikesonic
    I'm currently using apt-mirror to create a local mirror of the Ubuntu repositories. As the mirrored repositories will be used only by machines destined to be headless servers, and as an effort to reduce the current mirror size (around 75GB), categories like games and possibly others will never be needed. How can I go about specifying (in mirror.list perhaps?) which sections/categories I want excluded from the mirroring? Maybe a bit subjective, but apart from games, what other sections/categories could be "safely" ignored for my environment's purposes? My mirror.list looks as below, since all the machines are using precise:

    # MAIN
    deb-amd64 http://archive.ubuntu.com/ubuntu precise main restricted universe multiverse
    deb-i386 http://archive.ubuntu.com/ubuntu precise main restricted universe multiverse
    # SECURITY
    deb-amd64 http://archive.ubuntu.com/ubuntu precise-security main restricted universe multiverse
    deb-i386 http://archive.ubuntu.com/ubuntu precise-security main restricted universe multiverse

    Also, what others would you recommend adding to the list to be mirrored for a relatively stable environment? Again, I understand this is subjective; I'm just looking for some pointers. Much appreciated in advance.
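
    For what it's worth, apt-mirror works at the level of the suites, components and architectures listed on each deb line, so as far as I know there is no per-package-section exclude; trimming happens by dropping whole components. A sketch of a slimmer mirror.list, under the assumption that headless servers only need main and restricted and that everything is amd64 (verify before relying on it):

    # trimmed mirror.list sketch -- drop universe/multiverse and the i386 lines
    deb-amd64 http://archive.ubuntu.com/ubuntu precise main restricted
    deb-amd64 http://archive.ubuntu.com/ubuntu precise-updates main restricted
    deb-amd64 http://archive.ubuntu.com/ubuntu precise-security main restricted
    clean http://archive.ubuntu.com/ubuntu

    Adding precise-updates alongside precise-security is the usual choice for a stable environment; the clean line lets apt-mirror reclaim space from packages that drop out of the lists.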

    Read the article

  • What should I quote for a project I hope to get a job at the end of?

    - by thesunneversets
    Long story short: I applied for a (CakePHP, MySQL, etc) development job in London, UK. I grew up in Britain but am currently based quite a few thousand miles away in Canada, so I wasn't really expecting success. But quite a few emails and phone interviews later it seems that they really like me. At least to a point. Because such a major relocation would be a horrible thing to go wrong, they've sensibly suggested a trial run of getting me to build a website at a distance. I have the spec for this and it's quite a substantial amount of work. My problem is that I now need to suggest both a fee and a timescale for the job, and I haven't got any significant experience of working as a contractor. Looking at the spec, which is 1500 words of many concisely stated features, some fairly trivial and some moderately involved, I can easily imagine there being 2 weeks of intensive work there. (If everything went really well it might be closer to one week, but even though I want to impress, I definitely don't want to fall into the inexperienced-contractor trap of massively underestimating the amount of time a project will run to.) As an extra complication, there is no expectation that I should give up my day job to get this trial project done, so the hours will have to be clawed from evenings and weekends. I don't want to overcommit to a quick delivery date, only to find myself swiftly burning out due to an unrealistic workload. So, any advice for me? My main question is, what is a realistic hourly figure to demand of a stable but not excessively wealthy London-based company in the current market, bearing in mind that I'd like them to hire me afterwards? But any more general recommendations based on my circumstances above would be much appreciated too. Many thanks!

    Read the article

  • iWork '09 Keynote: is there a straightforward way to 'dim' and 'highlight' each item in a bullet list?

    - by doug
    I have a bullet list on a slide: first item, second item, third item. What I want to do is show those three bulleted items in a sequence like this: the first bullet appears, then the first bullet dims; the second bullet appears, then the second bullet dims; the third bullet appears, then the third bullet dims. In other words, only one bullet is shown at a time (the one I am currently discussing) to keep the audience from being distracted by what comes next or what I just finished discussing (the prior bullet). This is such a common thing to do that there has got to be a simple, reliable way to do it. The only way I know of is to configure the items individually (using "Build In" and "Action" on each bullet item), which is not only slow but doesn't work well. Another way I've found, which again is very slow, is to create my bullet list not by selecting a bullet list but by building the list manually with text boxes (one bullet item per text box) and then lining them up as a list. That way it's easier to manipulate them independently, but again, it takes way too long to do one slide this way.

    Read the article

  • Industrialized SOA – topic of Business Technology Magazine

    - by JuergenKress
    Although it has become quieter around SOA, the concept is not buried at all. On the contrary, over the years it has reached a new level of maturity. Hypes such as Cloud Computing and Big Data have pushed SOA out of the headlines; however, "the new hypes have not replaced service orientation, but built on it." The authors of this edition rank among the SOA pioneers in Germany. They have gathered their collective knowledge for this issue and created a unique picture of the current state of SOA. According to them, SOA has developed evolutionarily towards industrialization, towards a holistic platform, and thus towards a new Industrialized SOA. Issue 3.12 of the BT magazine (in German!) is available as an iPad app (http://it-republik.de/business-technology/bt-magazin-ipad-app), via mail (http://it-republik.de/business-technology/bt-magazin-ausgaben/Industrialized-SOA-000516.html) or at the kiosk! The magazine is published by: Berthold Maier, Jürgen Kress, Hajo Normann, Danilo Schmiedel, Guido Schmutz, Bernd Trops, Clemens Utschig-Utschig, Torsten Winterberg. For more information see www.bt-magazin.de. SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: Industrial SOA,Industrialized SOA,Berthold Maier,Hajo Normann,Danilo Schmiedel,Guido Schmutz,Bernd Trops,Clemens Utschig-Utschig,Torsten Winterberg,SOA Spezial II,Business Technology Magazin,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • -bash: ls: command not found at Terminal on Mac OS X

    - by art.mania
    I need to start using Git for my projects from now on, and I need to use some UNIX commands, but no matter what I do I always receive a "command not found" error. I installed MacPorts, but I still can't run any UNIX command. When I try ls I get the error below; the same happens for sudo or any other command:

    -bash: ls: command not found

    And when I type $PATH at the prompt, I get the lines below:

    hakan-yilmaz-MacBook-Pro:~ hakanyilmaz$ $PATH
    -bash: /opt/local/bin:/opt/local/sbin:/opt/local/bin:/opt/local/sbin:/opt/local/bin:/opt/local/sbin:/Library/Frameworks/Python.framework/Versions/2.6/bin:/Library/Frameworks/Python.framework/Versions/Current/bin:/opt/subversion/bin/:PATH: No such file or directory

    I'm on Mac OS X 10.6.6. I spent 2-3 days and kept googling and trying everything I found on forums, but with no success. SOLUTION: I opened .bash_profile with TextWrangler and removed everything other than export PATH=/opt/local/bin:/opt/local/sbin:$PATH Then I rebooted the Mac, and it is WORKING!!!!
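
    For reference, a minimal sketch of a .bash_profile PATH line that keeps the standard system directories alongside the MacPorts ones, so core commands like ls and sudo stay reachable even if one prefix is broken; the exact ordering is an assumption, adjust to taste:

    # ~/.bash_profile -- keep the stock Mac OS X paths, then the MacPorts ones
    export PATH="/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/local/bin:/opt/local/sbin"

    # reload without logging out, then sanity-check
    source ~/.bash_profile
    which ls sudo git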

    Read the article

  • Linux: Combine two partitions?

    - by Jakobud
    This workstation is running Fedora 11. It has 4 HDDs raided into 4 partitions:

    / (31 GB)
    /boot (134 MB)
    /data (140 GB)
    /FC12 (31 GB)

    The previous employee who used my current workstation set it up this way. He apparently created the FC12 partition to test a Fedora 12 installation. I don't need Fedora 12, so I wiped that partition, and now I'm wondering if it is possible for me to combine the /FC12 partition into the / partition, so that the / partition will be 62 GB. Is this possible? If so, how? Can it be done without reinstalling the OS? I've toyed with Fedora's LVM admin interface, but it seems very basic and there doesn't seem to be anything about combining partitions. I've also messed with the other HDD utilities that come with Fedora (Palimpsest Disk Utility), but all they seem to be able to do is mount and unmount partitions.
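
    If those filesystems really are LVM logical volumes in one volume group (the default Fedora layout, but an assumption here), the usual approach is to remove the unused volume and grow the root one into the freed space. A sketch with made-up volume names; substitute whatever lvs reports:

    lvs                                           # list logical volumes and their volume group
    umount /FC12                                  # make sure nothing is using the old volume
    lvremove /dev/vg_workstation/lv_fc12          # hypothetical name -- frees ~31 GB in the volume group
    lvextend -l +100%FREE /dev/vg_workstation/lv_root
    resize2fs /dev/vg_workstation/lv_root         # grow the ext3/ext4 filesystem into the new space
    # finally, delete the /FC12 line from /etc/fstab

    If /FC12 turns out to be a plain partition rather than a logical volume, this won't apply and a partition-level tool (and a backup) would be needed instead.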

    Read the article

  • ffmpeg opens webcam using YUYV but I want MJPEG

    - by Pavel
    I need ffmpeg to open the webcam (a Logitech C910) in MJPEG mode, because the webcam can deliver ~24 fps using the MJPEG "protocol" and only ~10 fps using YUYV. Can I choose between them on the ffmpeg command line?

    xx@(none) ~ $ v4l2-ctl --list-formats
    ioctl: VIDIOC_ENUM_FMT
    Index       : 0
    Type        : Video Capture
    Pixel Format: 'YUYV'
    Name        : YUV 4:2:2 (YUYV)

    Index       : 1
    Type        : Video Capture
    Pixel Format: 'MJPG' (compressed)
    Name        : MJPEG

    My current command line:

    ffmpeg -y -f alsa -i hw:3,0 -f video4linux2 -r 20 -s 1280x720 -i /dev/video0 -acodec libfaac -ab 128k -vcodec libx264 /tmp/web.avi

    ffmpeg produces a corrupted h264 stream when I record from the webcam, but a normal h264 stream when I record from x11grab. Other codecs (mjpeg, mpeg4) work well with the webcam... but that is another story.
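
    A sketch of the usual way to request the compressed format from the v4l2 input: -input_format on the video4linux2 demuxer (worth checking against your ffmpeg version); everything after the input is just the original encoding settings.

    ffmpeg -y -f alsa -i hw:3,0 \
           -f video4linux2 -input_format mjpeg -r 20 -s 1280x720 -i /dev/video0 \
           -acodec libfaac -ab 128k -vcodec libx264 /tmp/web.avi

    # alternatively, force the pixel format on the device first, then record as before
    v4l2-ctl -d /dev/video0 --set-fmt-video=width=1280,height=720,pixelformat=MJPG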

    Read the article

  • How much processor speed and how many cores do I need for these tasks?

    - by ajay
    I am planning to buy a new laptop, as I find my current one very slow. My question here is specifically about RAM size and CPU power. I will mostly be doing development (not many games). I would be dabbling in distributed computing, multithreaded programming, and data-intensive parallelizable tasks on multiple cores. For example, I would want to be able to do concurrent programming in Scala/Java/Clojure etc. and be able to see the parallelization. Furthermore, I would want the RAM to be enough. From a developer-machine standpoint, do you think 4GB of RAM and a 2.53GHz dual-core processor would be enough? I'm basically looking at this model: http://store.apple.com/us/configure/MC118LL/A?mco=MTM3NDcyODk (link dead)

    Read the article

  • Running scripts from another directory

    - by Desmond Hume
    Quite often, the script I want to execute is not located in my current working directory, and I don't really want to leave it. Is it good practice to run scripts (Bash, Perl etc.) from another directory? Will they usually find all the stuff they need to run properly? If so, what is the best way to run a "distant" script? Is it . /path/to/script or sh /path/to/script, and how do I use sudo in such cases? This, for example, doesn't work: sudo . /path/to/script
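
    A short sketch of how this usually plays out. Whether a script finds its data files depends on whether it builds paths from its own location or from the caller's working directory, so the last lines show the common pattern for making a script location-independent (the file names are hypothetical):

    /path/to/script.sh            # runs in a child process, your cwd stays put (script must be executable)
    bash /path/to/script.sh       # same, but works without the executable bit
    sudo bash /path/to/script.sh  # sudo works with a real interpreter...
    sudo . /path/to/script.sh     # ...but not with ".", because "." (source) is a shell builtin, not a program

    # inside a script: resolve files relative to the script itself rather than the caller's cwd
    script_dir="$(cd "$(dirname "$0")" && pwd)"
    cat "$script_dir/config.txt"  # hypothetical data file shipped next to the script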

    Read the article

  • How to change from own Internal/External DNS to use an outsourced service like DNS Made Easy?

    - by Joakim
    Our current setup is a co-located Linux box with an OpenVZ kernel and a handful of virtual containers for www, mail etc., plus one container running BIND 9 with a split-views configuration serving external and internal DNS. The hardware node runs a Shorewall firewall and all containers use private IPs. The box (and DNS) basically handles web and mail for a handful of domains, and it works well, but we still think it would be a good idea to outsource the public DNS, which brings me to my question... Although I am fairly comfortable with the server stuff and DNS, I'm far from a pro, and I basically need confirmation that I am thinking in the right direction: I just move the content of our external view (with its zone files) to the external service, keep the internal view (or actually remove the views), update the registrar with the new provider's name servers, and wait for propagation. Or have I missed something? Maybe someone else here already runs something similar and can share some experiences? I found this question, which at least confirms it can be done.
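
    A minimal pre/post-cutover check sketch, assuming the zones are plain BIND zone files; example.com and ns1.dnsmadeeasy.com are placeholders standing in for the real domain and the provider's actual name servers:

    named-checkzone example.com /etc/bind/zones/db.example.com   # validate the zone file before copying its records
    dig +short example.com SOA @localhost                        # note the current serial for comparison

    # after importing the records and switching NS at the registrar
    dig +short NS example.com                                    # should list the provider's servers once propagated
    dig example.com A @ns1.dnsmadeeasy.com                       # confirm the provider answers authoritatively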

    Read the article

  • UPK and the Oracle Unified Method can be used to deploy Oracle-Based Business Solutions

    - by Emily Chorba
    Originally developed to support Oracle's acquisition strategy, the Oracle Unified Method (OUM) defines a common implementation language across all of Oracle's products and technologies. OUM is a flexible, scalable, and evolving body of knowledge that combines existing best practices and field experience with an industry standard framework that includes the latest thinking around agile implementation and cloud computing. Strong, proven methods are essential to ensuring successful enterprise IT projects, both within Oracle and for our customers and partners. OUM provides a collection of repeatable processes that are the basis for agile implementations of Oracle enterprise business solutions. OUM also provides a structure for tracking progress and managing cost and risks. OUM is applicable to any size or type of IT project. While OUM is a plan-based method, including overview material, task and artifact descriptions, and templates, the method is intended to be tailored to support the appropriate level of ceremony (or agility) required for each project. Guidance is provided for identifying the minimum subset of tasks, tailoring the approach, executing iterative and incremental planning, and applying agile techniques, including support for managing projects using Scrum. Supplemental guidance provides specific support for Oracle products, such as UPK. OUM is available to Oracle employees, partners, and customers. Internal Use at Oracle: Employees can download OUM from MyDesktop. OUM Partner Program: OUM is available free of charge to Oracle PartnerNetwork (OPN) Diamond, Platinum, and Gold partners as a benefit of membership. These partners may download OUM from the Oracle Unified Method Knowledge Zone on OPN. OUM Customer Program: The OUM Customer Program allows customers to obtain copies of the method for their internal use by contracting with Oracle for a services engagement of two weeks or longer. Customers who have a signed contract with Oracle and meet the engagement qualification criteria as published on the Customer tab of the OUM Website are permitted to download the current release of OUM for their perpetual use. They may obtain subsequent releases published during a renewable, three-year access period. To learn more about OUM, visit OUM Blog, OUM on LinkedIn, OUM on Twitter. Emily Chorba, Principal Product Manager, Oracle User Productivity Kit

    Read the article

  • How to stop an infinitely running process (ztail) started by an ssh session after that session is closed

    - by Sanath Adiga
    I have a peculiar problem. My server supports multiple simultaneous ssh sessions, so that multiple admins can manage it at the same time. We have a command which calls ztail to show the compressed log files, and when the current ssh session is closed (without pressing Ctrl+C to stop the tail), the command should ideally stop running. But what I observed when I started a new ssh session is that the ztail process is still running in the background and consuming CPU, even though the previous session was closed. How can I determine when a session is closed, so that I can use that variable/flag to close/stop any commands initiated by the previously closed session?
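
    Two hedged sketches. The first makes an interactive login shell pass SIGHUP on to its jobs when it exits, which usually takes the stray tail down with the session; the second sweeps up ztail processes that have already lost their controlling terminal (shown as "?" in ps). Both assume the leftover process is literally named ztail, and if ztail ignores SIGHUP only the sweep will help:

    # in each admin's ~/.bash_profile: pass SIGHUP on to running jobs when the login shell exits
    shopt -s huponexit

    # cleanup sweep (e.g. from cron): kill ztail processes whose terminal is gone
    ps -eo pid,tty,comm | awk '$2 == "?" && $3 == "ztail" { print $1 }' | xargs -r kill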

    Read the article

  • How to maintain PCI compliance on a LAMP server when repositories don't keep up with versions

    - by Jared Green
    We run Ubuntu Lucid 10.04 as the foundation of our LAMP environment. We are trying to become PCI compliant so that we can pass credit card info through our server. We have run some third-party scans on our servers to begin the certification process and have run into errors regarding the PHP 5 and Apache versions. The latest PHP version hosted in the official Lucid repository is about 10 releases behind what PCI compliance requires. How do we upgrade to stay current with the PCI compliance requirements? We need to get from PHP 5.3.2 to PHP 5.3.15, as well as up to Apache 2.2.23. I've searched far and wide and haven't come up with a realistic answer. Some recommend compiling manually, which sounds like a nightmare, and others recommend a PPA, which sounds insecure. What should we do?
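
    If a well-maintained PPA turns out to be acceptable, the sequence looks roughly like the sketch below; ppa:ondrej/php5 is offered only as an example of a widely used PHP packaging PPA, so treat the exact archive name (and whether it publishes builds for Lucid at all) as an assumption to verify before trusting it for a PCI system:

    sudo apt-get install python-software-properties   # provides add-apt-repository on Lucid
    sudo add-apt-repository ppa:ondrej/php5            # example PPA -- check maintainer, signing key and Lucid support
    sudo apt-get update
    sudo apt-get install php5 apache2
    php -v && apache2 -v                               # confirm the versions the PCI scan will see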

    Read the article

  • Structuring database for multi-object "activity" and "following" functionalities

    - by romaninsh
    I am working on a web application which operates with different types of objects, such as users, profiles, pages etc. All objects have a unique object_id. When objects interact they may produce "activity", such as a user posting on a page or profile. An activity may be related to multiple objects through their object_ids. Users may also follow "objects", and they need to be able to see a stream of relevant activity. Could you provide me with some data-structure suggestions that would be efficient and scalable? My goal is to show activity limited to the objects which the user is following. I am not limited to relational databases. Update: As I'm getting advice on ORMs and how to index things, I'd like to stress my question again. According to my current design, the database structure looks like this (diagram omitted). As you can see, it's quite easy to implement a database like that. The Activity and Follower tables contain a much larger number of records than the upper level, but that's tolerable. The problem comes when I need to create a "timeline" table: for every user I would need to reference all the activities of the objects he follows, and in terms of records that quickly gets out of control. Please suggest how to change this structure to avoid creating the timeline table while still being able to quickly retrieve activity for any given user. Thanks.

    Read the article

  • Setting up Google Analytics for multiple subdomains

    - by Andrew G. Johnson
    So first, here's a snippet of my current Analytics JavaScript:

    var _gaq = _gaq || [];
    _gaq.push(['_setAccount', 'UA-30490730-1']);
    _gaq.push(['_setDomainName', '.apartmentjunkie.com']);
    _gaq.push(['_setSiteSpeedSampleRate', 100]);
    _gaq.push(['_trackPageview']);

    (function() {
      var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
      ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
      var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
    })();

    If you want to have a quick peek at the site, the URL is ApartmentJunkie.com. Keep in mind the site is pretty bare-bones, but you'll get the idea: basically it's very similar to craigslist in the sense that it's in the local space, so people pick a city and then get sent to a subdomain that is specific to that city, e.g. winnipeg.mb.apartmentjunkie.com. I put that up late last night, then had a look at the analytics and found that I am seeing only the request URI portion of the URLs in Analytics, as I would with any other site. With this site, though, it's a problem, since winnipeg.mb.apartmentjunkie.com/map/ and brandon.mb.apartmentjunkie.com/map/ are two different pages and shouldn't be lumped together as /map/. I know the knee-jerk response is likely going to be "hey, just set up a different Google Analytics profile for each subdomain", but there will eventually be a lot of subdomains, so Google's cap of 50 is going to be too limiting, and, even more important, I want to see the data in aggregate for the most part. I am thinking of making a change to the JavaScript, to something like:

    _gaq.push(['_trackPageview', String(document.domain) + String(document.location)]);

    But I am unsure if this is the best way, and figured someone else on wm.se would have had a similar situation that they could talk a bit about.

    Read the article

  • Finding out whether files are added, changed or deleted on a FTP server

    - by futureelite7
    I've recently been given the task of migrating about 200GB of data from one dedicated server to another. As this will take a week or more, I've been taking a snapshot of the current files on the FTP server using wget's mirror feature. However, since other users will probably be uploading and changing things in the meantime, the snapshot that I have made will not include the most recent changes. Since I only have FTP access to this server, I'm planning to write a script that will recursively do an FTP stat on all files in the FTP folder and compare the directory listing against the snapshot I have locally. If there are differences in the number of files, then I know files have been added or deleted. If the modification dates have changed, then I know the files have been changed, and I should re-download those files specifically. Am I missing anything in my approach, or are there any possible improvements to this approach?
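
    Rather than hand-rolling the comparison, an incremental re-mirror can usually do the added/changed/deleted detection for you. A hedged sketch with lftp; the host, credentials and paths are placeholders, and the flags are worth double-checking against your lftp version:

    # dry run first: show what would be fetched or removed, without touching the local copy
    lftp -u user,pass ftp.example.com -e "mirror --only-newer --delete --dry-run /remote/dir /local/snapshot; quit"

    # then run it for real near cutover to pick up late uploads, changes and deletions
    lftp -u user,pass ftp.example.com -e "mirror --only-newer --delete /remote/dir /local/snapshot; quit"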

    Read the article

  • How to track which program causes my hard drive to spin up

    - by Andreas
    I have a strange problem: every half hour one of my hard disks gets powered on again. I recognize this by the sound of a hard disk spinning up. So far I have not been able to track down which program could be causing this. I ran Process Monitor to see whether there is an I/O peak coinciding with the spin-up, and I checked the Windows Event Viewer for a matching event at the same time. Any ideas other than the usual disabling of services/programs etc. (which would be my next investigation step)? Also, it would be helpful to have a program that shows the current power status of all my drives, if there is one. Hard Disk Sentinel unfortunately cannot do the job because it powers on all drives upon start and prevents their going into sleep mode. Thanks in advance. :)

    Read the article

  • Which is better for running Ubuntu and other Linux OSes, a Chromebook or a Windows laptop, and why? [on hold]

    - by Serge
    I'm learning programming and I would like to switch to a Linux OS, perhaps Ubuntu, to continue doing so. The current machine is getting pretty old and slow, Windows is the least favorite option for production, and I can manage to get something new right around the price range of the nicest Chromebook on the market right now. However, I have compared the specs of the HP Chromebook 14 with those of regular PC laptops that cost roughly the same, and the latter consistently have approximately the same, and sometimes higher (like the processor speed), specs. Yet usage of Chromebooks for this purpose is pretty widespread nowadays. Is this because they were initially built for a Linux OS, and is it really THAT crucial, or are there other major factors at play here?

    Read the article

  • How to disallow use of disable_functions in?

    - by Gaia
    I'm obviously not the first one to have this problem, but I cannot find an answer to this situation. I want to lock down PHP a bit, more specifically the use of disable_functions. The environment is CentOS 6.2 / PHP 5.3.3 fcgid / Apache 2.2.15.

    1. What's the proper Apache config (AllowOverride, etc.) to prevent any PHP setting from being changed via .htaccess? All other overrides are OK (the current setting is AllowOverride All).
    2. What's the proper config to forbid effective use of disable_functions in anything but the master php.ini (i.e. forbid the use of disable_functions in /home/myvhost/etc/php5/php.ini or in any directory within that vhost's public_html; another way to say this: the only effective disable_functions comes from the master php.ini)?
    3. If #2 is not possible, at least what's the proper config to disallow a vhost owner from effectively using any php.ini but the vhost's main one (/home/myvhost/etc/php5/php.ini)?

    Thanks
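
    One common approach under mod_fcgid, sketched under assumptions: since the wrapper script decides which php.ini gets loaded, pointing PHPRC at a directory only root controls means the per-vhost php.ini is never consulted. The paths and wrapper location are made up, and it's worth confirming that every vhost actually goes through such a wrapper. Also note that disable_functions is a PHP_INI_SYSTEM setting, so .htaccess cannot change it in any case; the wrapper trick only addresses the per-vhost php.ini files.

    #!/bin/bash
    # /var/www/cgi-bin/php.fcgi -- hypothetical fcgid wrapper shared by the vhosts
    export PHPRC=/etc/php-master          # directory holding the authoritative php.ini (assumed location)
    export PHP_FCGI_MAX_REQUESTS=1000
    exec /usr/bin/php-cgi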

    Read the article

  • Aspect Ratio on Nero 9 for burning DVD

    - by user27720
    I am currently attempting to burn a screen-capture file to DVD. I will admit that I know very little about the process and the terminology, and I am at a loss as to how to find this information. I am using Nero 9 and am very displeased that the manuals available to me online explain very little. My current problem is that when I burn to DVD, my beautiful screen capture ends up being cropped. Through endless amounts of googling I am under the impression that this is due to the aspect ratio. However, as Windows will not tell me the resolution for me to determine the correct aspect ratio, I do not know how to proceed. Is there a way, using Nero 9, for me to burn my screen capture to DVD? Any advice or suggestions are appreciated.

    Read the article

  • VirtualBox 3.2 Release

    - by [email protected]
    The latest version of VirtualBox is out: version 3.2. It is the first release as Oracle VirtualBox and there are a lot of new features. Many of these I see directly impacting the Oracle VDI solution in upcoming releases (just my guess, of course), and I am updating my notebook as I write this. Er... OK, done! There are enough features that they warrant you taking a look at this two-page VirtualBox Community Bulletin.pdf. No point in me restating them. If you and your organization haven't tried VirtualBox, or you haven't looked at it in a while, you owe it to yourself to give this a run. This is small, simple, powerful software that allows you to do way more than most people would ever need when hosting a virtual machine on your local machine. I routinely do a demo on a two-year-old MacBook running OS X locally, plus a Solaris 10 VM running the Sun Ray server and a Windows XP VM, and hang a couple of Sun Rays off of it, and the performance is stellar. You can subscribe to the mailing lists and get access to the beta releases as they come out as well, if you are into the 'bleeding edge'. 40,000 downloads a day was the rate before this new release, but it will jump for sure now. Might as well join in!

    Read the article

  • Supporting HR Transformation with HelpDesk for Human Resources

    - by Robert Story
    Upcoming Webcast. Title: Supporting HR Transformation with HelpDesk for Human Resources. Date: May 13, 2010. Time: 9:00 am PDT, 10:00 am MDT, 17:00 GMT. Product Family: PeopleSoft HCM & EBS HRMS. Summary: HR transformation is a strategic initiative at many companies where world-class employee HR service delivery and a reduction of HR operating costs are top priorities. Having a centralized service delivery model and providing employees with tools to better help themselves can be key to this initiative. This session shares how Oracle's PeopleSoft HelpDesk for Human Resources provides the technology foundation and best practices for this transformation. HelpDesk for Human Resources now integrates with both PeopleSoft HCM and E-Business Suite HRMS. This one-hour session is recommended for technical and functional users who want to understand what is new in PeopleSoft HelpDesk for Human Resources 9.1 and how it benefits both PeopleSoft HCM and E-Business Suite HRMS customers. Topics will include: understanding the latest features and functionality, gaining insight into future product direction, and planning for the implementation or upgrade of this module in your current system. A short live demonstration (only if applicable) and a question and answer period will be included. Click here to register for this session. The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Enable ASP.NET connection pool monitoring with Performance Monitor

    - by BlackHawkDesign
    If this question is in the wrong forum, feel free to tell me. I'm a C# developer, but I'm running into a system-management issue here. Intro: I suspect that an ASP.NET application is having some issues with the connection pool and that the pool is flooding from time to time. So, to make sure, I want to monitor the connection pool. After some searching I found this article: http://blog.idera.com/sql-server/performance-and-monitoring/ensure-proper-sql-server-connection-pooling-2/ Basically it explains connection pools and how you can monitor them with Performance Monitor. The problem: I logged in to the ASP.NET server which hosts the website (the SQL database is hosted on a different server) and started Performance Monitor. But when I want to select 'Current # pooled and nonpooled connections', I have no instance to select, and therefore I can't add it. Question: How can I create/supply an instance so I can monitor the connection pool? Thanks in advance, BHD
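
    For what it's worth, a quick way to check whether the counter has any instances at all, sketched with the built-in typeperf tool: the counter lives under the .NET CLR Data object, and per-process instances only appear once a .NET process on that machine has actually opened a SqlConnection. The counter and instance names below are from memory, so verify them in perfmon's Add Counters dialog:

    rem list the counters/instances Performance Monitor would offer for this object
    typeperf -q ".NET CLR Data" | findstr /i "pooled"

    rem sample the aggregate instance every 5 seconds
    typeperf "\.NET CLR Data(_Global_)\SqlClient: Current # pooled and nonpooled connections" -si 5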

    Read the article
