Search Results

Search found 35201 results on 1409 pages for 'custom build step'.


  • Making custom syntax highlighting in TextMate

    - by Andrei
    Hi, I am trying to highlight a custom language in TextMate. However, the following grammar definition does not highlight PHP insertions:

        { scopeName = 'source.serpent';
          fileTypes = ( 'serpent' );
          patterns = (
            { begin = '<\?';
              end = '\?>';
              patterns = ( { include = 'source.php'; } );
            },
          );
        }

    What could be the reason?
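
    For comparison, a minimal variant of the same grammar; this is a sketch only, assuming the PHP bundle (which supplies the source.php grammar) is installed, and the contentName scope is an illustrative addition rather than something from the question:

        { scopeName = 'source.serpent';
          fileTypes = ( 'serpent' );
          patterns = (
            { begin = '<\?';
              end = '\?>';
              // assumption: mark the span so themes style it as embedded PHP
              contentName = 'source.php.embedded';
              patterns = ( { include = 'source.php'; } );
            },
          );
        }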

    Read the article

  • How can I create a link to a custom form in Outlook 2003

    - by Mulmoth
    If I have created a custom form (including script) in Outlook 2003 and published it to the Personal Forms Library, the only way I know to create an item from this form is "File - New - Choose Form... - Personal Forms Library - select my form - OK". Is there a faster way? For example, a link from the desktop or from the Outlook favorites folder?
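
    For what it's worth, a desktop shortcut can point at a small script that creates the item programmatically. A minimal VBScript sketch; "IPM.Note.MyForm" is a placeholder for the form's actual message class:

        ' Launch a published Outlook custom form from a .vbs file.
        ' "IPM.Note.MyForm" is hypothetical -- use your form's message class.
        Dim olApp, olNs, folder, item
        Set olApp = CreateObject("Outlook.Application")
        Set olNs = olApp.GetNamespace("MAPI")
        Set folder = olNs.GetDefaultFolder(6)   ' 6 = olFolderInbox
        Set item = folder.Items.Add("IPM.Note.MyForm")
        item.Display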

    Read the article

  • CPU temperatures high on new build after gaming

    - by Reznor
    My friend had a problem with his computer a while back: his games were crashing, even within the menus. He was stumped as to what the problem was, so I posted on here requesting help. He found out a day later, when his computer would start up but wouldn't display anything on the screen: his video card must have arrived defective. So, he got a replacement. Now there's a new problem. His temperatures, which were acceptable before, are now insanely high. His GPU runs at 70-80C, which is understandable considering he's running his games maxed out, but the real problem here is his processor and motherboard temperatures. All four of his cores are running at 88-90C after coming out of a game, and his motherboard temperature was 70C at one point. In terms of cooling, his case should definitely be adequate: he has an Antec Twelve Hundred with the stock fans, and the cable management in his case is better than average. He's using the stock heatsink with the processor too, but note that it was fine before the replacement, so it isn't as if there's some inherent problem. He has checked the case too. Everything's fine! No cables in the way, and the heatsink is seated properly. He turned his case fans up to high as well, but the temperatures persist. Could the processor be overheating due to running games maxed out? Any ideas?

    Read the article

  • Custom URL rewrite stops working after 20 seconds?

    - by 101224863727594634919
    Hi all, I have a simple question about using the custom URL-rewrite module described here: http://weblogs.asp.net/scottgu/archive/2007/02/26/tip-trick-url-rewriting-with-asp-net.aspx. After a period of time, the redirects stop working. When I trace the non-working requests, I find:

        URL_CACHE_ACCESS_END
            PhysicalPath
            URLInfoFromCache     true
            URLInfoAddedToCache  false
            ErrorCode            0 (The operation completed successfully. (0x0))

    Is there any option to disable URL cache access for a website? Thanks in advance.
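
    If this is IIS 7, one thing worth trying is disabling output and kernel caching for the site in web.config; a sketch, on the assumption that the URL_CACHE_ACCESS_END entry above really does point at the cache:

        <!-- web.config: turn off IIS output and kernel caching (IIS 7+) -->
        <configuration>
          <system.webServer>
            <caching enabled="false" enableKernelCache="false" />
          </system.webServer>
        </configuration>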

    Read the article

  • Workstation 7 build 203739 - Capture Movie Mouse Pointer Not Visible

    - by BMIVM
    Hi, I have noticed that whenever I create a Capture Movie, the movie is fine but the mouse pointer is not visible at all. Clicks on buttons and file menus are executed, but it is hard for a viewer of the captured session to follow the recording smoothly. This was not a problem in previous versions of Workstation. Is this a bug, or is there a setting that makes the mouse pointer visible? Note: VMware Tools are installed, the host is Vista Ultimate Edition SP1 x64, and the guest is Windows XP SP2. Thanks in advance for your help.

    Read the article

  • How to exclude directories from Mozy custom backup sets using Spotlight queries

    - by bromfiets
    I would like to create custom backup sets for Mozy which exclude certain directories. For example, I would like to back up my iTunes folder but exclude all podcasts. I have created a backup set which searches in /Users/me/Music and used the query kMDItemPath == "*Podcasts*"wc to exclude all matching files. However, nothing matches. Queries which use the kMDItemFSName Spotlight attribute work fine, but any query using kMDItemPath doesn't seem to work at all. What am I doing wrong?
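
    One way to sanity-check a raw Spotlight query outside of Mozy is mdfind; a quick sketch using the paths from the question:

        # Test the exclusion query from the shell before handing it to Mozy.
        # If mdfind prints nothing here, the query itself is the problem.
        mdfind -onlyin ~/Music 'kMDItemPath == "*Podcasts*"wc'

        # Same pattern against an attribute known to be indexed, for contrast:
        mdfind -onlyin ~/Music 'kMDItemFSName == "*Podcasts*"wc'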

    Read the article

  • Software/hardware to build a video streaming server?

    - by Sasha Yanovets
    I am looking for a video streaming server solution, something like an online TV server, with the ability to make live broadcasts on the internet. What software could you recommend for that? What kind of hardware should it run on, and is there anything special to look for? I am looking for a solution that can scale to at least 1000 simultaneous online users with good video resolution. A general answer on what direction to choose would be helpful, but here are more details on my specific case: I am looking for a solution almost from scratch. We have some video content that we've produced, but it is not delivered over the internet yet. We are not tied to any particular vendor for now. We want to run 24 hours of streaming as three 8-hour blocks, with the content changing every day, and we want the ability to make regular live broadcasts. I guess we will need several streaming-quality options (low ~56 kb/s, mid ~273 kb/s). Some terms are just foreign to me (like "play-truncation rate"), so if you could point out which parameters we should be aware of, that would be great. The uplink to the internet is to be determined; we plan to start with something and scale up along the way. If you already have some kind of media streaming server, please describe its configuration here (hardware, OS, software) and the peak number of concurrent users it serves. I think that could help people approaching this task.
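
    As one concrete starting point for the two quality tiers mentioned above, here is a sketch using ffmpeg to push a single source to a streaming server at both bitrates; the rtmp:// URLs, frame sizes, and audio settings are assumptions:

        # Hedged sketch: encode one source into the two tiers from the
        # question and push each to an RTMP ingest point (URLs are made up).
        ffmpeg -i source.mp4 -c:v flv -b:v 56k  -s 320x240 \
               -c:a libmp3lame -b:a 32k -f flv rtmp://streamhost/live/low
        ffmpeg -i source.mp4 -c:v flv -b:v 273k -s 640x480 \
               -c:a libmp3lame -b:a 64k -f flv rtmp://streamhost/live/mid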

    Read the article

  • Setting up a subdomain SSL with custom port

    - by Webnet
    I'm setting up a subdomain on a dedicated server that I'm going to use for SVN services. The SVN server is up and running; I just need to set up the subdomain. HTTPS has been switched to a custom port because there's a conflict with a port forward pointing to another server. Should I do this through GoDaddy or Apache?
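
    On the Apache side, a subdomain on a non-standard HTTPS port is a Listen directive plus a virtual host (the DNS record for the subdomain would still be added at the registrar). A minimal sketch, with svn.example.com, port 8443, the certificate paths, and the mod_dav_svn location all as assumptions:

        # Sketch: HTTPS vhost for the SVN subdomain on a custom port.
        Listen 8443
        <VirtualHost *:8443>
            ServerName svn.example.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/svn.example.com.crt
            SSLCertificateKeyFile /etc/ssl/private/svn.example.com.key

            # Assumption: SVN is served through mod_dav_svn at /svn
            <Location /svn>
                DAV svn
                SVNParentPath /var/svn
            </Location>
        </VirtualHost>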

    Read the article

  • Is there any way to shut up my ATI HD 5770?

    - by slpsys
    So to preface, I basically built Jeff's machine; I already had some of the components, including (scarily enough) the exact same case [1]. I've been buying bits and pieces over the past few months, which coincided perfectly with his recent post about three monitors, though since I'm not a gamer outright, I opted for the second-from-the-bottom option. After finally plopping all the pieces lovingly into the case this evening, I turned it on... and it sounds like four professional-grade hair dryers. Some quick regression analysis determined that with the video card out, the running machine sounded no louder than our house's vents. Basically, my last desktop build included a $45-at-the-time graphics card, and it's been MacBook Pros and workstations since then, so I have zero idea whether I'll be able to tune the fan speed later on. Will I be able to get this thing to quiet down every time I'm not playing Modern Warfare 2 at maximum framerate, or should I just send it back now and get the quietest card in my price range?

    [1] One thing of note is that I do not have noise-absorbing foam in the case, as is pictured in the article. I mention that because I suspect it could drop the overall output a few decibels, but obviously not that many.

    Read the article

  • Is there a chroot build script somewhere?

    - by Nils
    I am about to develop a little script to gather information for a chroot jail. In my case this looks (at first glance) pretty simple: the application has a clean rpm install and put almost all of its files into a subdirectory of /opt. My idea is:

    1. Find all binaries
    2. Check their library dependencies
    3. Record the results in a list
    4. Rsync that list into the chroot target directory before starting the application

    Now I wonder: is there any script around that already does such a job (Perl/bash/Python)? So far I have found only specialized solutions for single applications (like sftp-chroot).

    Update: I see three close-votes for the reason "off topic". This is a question that arose because I have to install that ancient piece of software on a server at work. So if you still feel this is off-topic, leave a comment...
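
    For reference, the four steps above fit in a few lines of shell; a minimal sketch, with APP_DIR and CHROOT as assumptions:

        #!/bin/bash
        # Sketch of the four steps above. APP_DIR and CHROOT are
        # assumptions; adjust for the actual install.
        APP_DIR=/opt/myapp
        CHROOT=/srv/chroot/myapp
        LIST=$(mktemp)

        # Steps 1+2: find executables, then resolve shared-library
        # dependencies with ldd.
        find "$APP_DIR" -type f -perm /111 > "$LIST"
        while read -r bin; do
            ldd "$bin" 2>/dev/null | awk '/=> \//{print $3} /^[[:space:]]*\//{print $1}'
        done < "$LIST" | sort -u > "$LIST.deps"

        # Steps 3+4: record everything in one list and rsync it into the
        # chroot, preserving absolute paths.
        cat "$LIST.deps" >> "$LIST"
        rsync -a --files-from="$LIST" / "$CHROOT"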

    Read the article

  • VMware Workstation 7 build 203739 - Capture Movie Mouse Pointer Not Visible

    - by BMIVM
    Hi, I have noticed that whenever I create a Capture Movie, the movie is fine but the mouse pointer is not visible at all. The clicks are on buttons file menus are executed but is hard for a viewer of the capturre movie session to follow the recording smoothly. This was not a problem in previous versions of WorkStation. Is this a bug or is there a setting I can set to see the mouse pointer? Note: VM Tools are installed, Host is Vista Ultimate Edtion SP1, x64 and the guest is a Win XP SP2. Thanks in advance for your help.

    Read the article

  • How to build a network across buildings?

    - by Omie
    Hi! I need some help building a network between hundreds of computers spread across multiple buildings of my college. Yes, I'll be doing this as part of my college project. Please see this image; it will give you a good idea of what I'm trying to achieve: http://i.imgur.com/rOohx.png. All the computers in all buildings should be able to reach the server. Once the network is up, there will be a set of services over the intranet, and network use will be moderate; say there will be an email server and an HTTP server. My point is, I cannot afford much performance loss. It feels easy to connect the computers inside one building to each other; however, I'm clueless about how to connect all of them to the server. I mean, just one cable won't be enough to connect one building to the server, right? How should I go about it? I am not expecting a detailed configuration; just a heads-up will do :) Thanks

    Read the article

  • Trying to build a history of popular laptop models

    - by John
    A requirement on a software project is that it should run on typical business laptops up to X years old. However, while given a specific model number I can normally find out when it was sold, I can't find data to do the reverse: for a given year, I want to see which model numbers were released or discontinued. We're talking big-name, popular models like Dell Latitude/Precision/Vostro, ThinkPads, HP, etc. The data for any one model is out there, but getting a timeline is proving hard. Sites like Dell's are (unsurprisingly) geared around current products, and even Wikipedia isn't proving very reliable. You'd think this data must have been collated by manufacturers or enthusiasts, surely?

    Read the article

  • WPF ICommand over a button

    - by toni
    Hi, I have implemented a custom ICommand class for one of my buttons. The button is placed in a page, MyPage.xaml, but its custom ICommand class is in another class, not in the MyPage code-behind. From XAML I want to bind the button to its custom command class, so in MyPage.xaml I do:

        <Page ...>
            <Page.CommandBindings>
                <CommandBinding Command="RemoveAllCommand" CanExecute="CanExecute" Executed="Execute" />
            </Page.CommandBindings>
            <Page.InputBindings>
                <MouseBinding Command="RemoveAllCommand" MouseAction="LeftClick" />
            </Page.InputBindings>
            <...>
            <Button x:Name="MyButton" Command="RemoveAllCommand" ... />
            <...>
        </Page>

    and the custom command class:

        // Here I derive from the MyPage class because I want to access some
        // of its objects from the Execute method
        public class RemoveAllCommand : MyPage, ICommand
        {
            public void Execute(Object parameter) { <...> }

            public bool CanExecute(Object parameter) { <...> }

            public event EventHandler CanExecuteChanged
            {
                add { CommandManager.RequerySuggested += value; }
                remove { CommandManager.RequerySuggested -= value; }
            }
        }

    My problem is how to tell MyPage.xaml that the Execute and CanExecute methods for the button are in another class (RemoveAllCommand) rather than in the code-behind of the page that hosts the button. Also, I want to fire this command when a mouse click occurs on the button, so I added an input binding; is that correct? Thanks
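
    For comparison, a common way to point a button at a command object defined outside the code-behind is to expose the command as a static member and reference it with x:Static. A hedged sketch, assuming RemoveAllCommand is reworked as a plain ICommand rather than deriving from MyPage; the class and namespace names are made up:

        // Commands.cs (hypothetical names throughout)
        using System.Windows.Input;

        namespace MyApp
        {
            public static class AppCommands
            {
                // Single shared instance the XAML can reference
                public static readonly ICommand RemoveAll = new RemoveAllCommand();
            }
        }

        <!-- MyPage.xaml: bind the button straight to the static command -->
        <Page xmlns:local="clr-namespace:MyApp" ...>
            <Button x:Name="MyButton" Command="{x:Static local:AppCommands.RemoveAll}" />
        </Page>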

    Read the article

  • validation control unable to find its control to validate

    - by nat
    I have a repeater that is bound to a number of custom data items/types. In the ItemDataBound event for the repeater, the code calls a RenderEdit function that, depending on the custom data type, renders a custom control. If a validation flag is set, it will also render a validation control for the corresponding edit control. The edit control overrides the CreateChildControls() method of the custom control, adding a number of LiteralControls, thus:

        protected override void CreateChildControls()
        {
            // other bits removed -- but it is this 'hidden' control I am trying to validate
            this.Controls.Add(new LiteralControl(string.Format(
                "<input type=\"text\" name=\"{0}\" id=\"{0}\" value=\"{1}\" style=\"display:none;\" >"
                , this.UniqueID
                , this.MediaId.ToString())
            ));
            // some other bits removed
        }

    The validation control is rendered like this, where the passed-in editControl is the control instance whose CreateChildControls method appears above:

        public override Control RenderValidationControl(Control editControl)
        {
            Control ctrl = new PlaceHolder();
            RequiredFieldValidator req = new RequiredFieldValidator();
            req.ID = editControl.ClientID + "_validator";
            req.ControlToValidate = editControl.UniqueID;
            req.Display = ValidatorDisplay.Dynamic;
            req.InitialValue = "0";
            req.ErrorMessage = this.Caption + " cannot be blank";
            ctrl.Controls.Add(req);
            return ctrl;
        }

    The problem is, although the validation control's .ControlToValidate property is set to the UniqueID of the edit control, when I hit the page I get the following error:

        Unable to find control id 'FieldRepeater$ctl01$ctl00' referenced by the 'ControlToValidate' property of 'FieldRepeater_ctl01_ctl00_validator'.

    I have tried changing the literal in CreateChildControls to a new TextBox() and setting the ID on it, but I get a similar problem. Can anyone enlighten me? Is this because of the order the controls are rendered in, i.e. the validation control is written before the edit control? Any help much appreciated. Thanks, nat

    Read the article

  • Build problems when adding `__str__` method to Boost Python C++ class

    - by Rickard
    I have started to play around with Boost.Python a bit and ran into a problem. Exposing a C++ class to Python posed no problems, but I can't seem to implement the __str__ functionality for the class without getting build errors I don't understand. I'm using boost 1_42 prebuilt by BoostPro, and I build the library using CMake and the VS2010 compiler. I have a very simple setup. The header file (tutorial.h) looks like the following:

        #include <iostream>

        namespace TestBoostPython {
            class TestClass {
            private:
                double m_x;
            public:
                TestClass(double x);
                double Get_x() const;
                void Set_x(double x);
            };
            std::ostream &operator<<(std::ostream &ostr, const TestClass &ts);
        };

    and the corresponding cpp file looks like:

        #include <boost/python.hpp>
        #include "tutorial.h"

        using namespace TestBoostPython;

        TestClass::TestClass(double x) { m_x = x; }
        double TestClass::Get_x() const { return m_x; }
        void TestClass::Set_x(double x) { m_x = x; }

        std::ostream &operator<<(std::ostream &ostr, TestClass &ts)
        {
            ostr << ts.Get_x() << "\n";
            return ostr;
        }

        BOOST_PYTHON_MODULE(testme)
        {
            using namespace boost::python;
            class_<TestClass>("TestClass", init<double>())
                .add_property("x", &TestClass::Get_x, &TestClass::Set_x)
                .def(str(self))
                ;
        }

    The CMakeLists.txt looks like the following:

        CMAKE_MINIMUM_REQUIRED(VERSION 2.8)
        project (testme)
        FIND_PACKAGE( Boost REQUIRED )
        FIND_PACKAGE( Boost COMPONENTS python REQUIRED )
        FIND_PACKAGE( PythonLibs REQUIRED )
        set(Boost_USE_STATIC_LIBS OFF)
        set(Boost_USE_MULTITHREAD ON)
        INCLUDE_DIRECTORIES(${Boost_INCLUDE_DIRS})
        INCLUDE_DIRECTORIES(${PYTHON_INCLUDE_PATH})
        add_library(testme SHARED tutorial.cpp)
        target_link_libraries(testme ${Boost_PYTHON_LIBRARY})
        target_link_libraries(testme ${PYTHON_LIBRARY})

    The build error I get is the following:

        Compiling... tutorial.cpp
        C:\Program Files (x86)\boost\boost_1_42\boost/python/def_visitor.hpp(31) : error C2780: 'void boost::python::api::object_operators<U>::visit(ClassT &,const char *,const boost::python::detail::def_helper &) const' : expects 3 arguments - 1 provided
            with [ U=boost::python::api::object ]
        C:\Program Files (x86)\boost\boost_1_42\boost/python/object_core.hpp(203) : see declaration of 'boost::python::api::object_operators<U>::visit'
            with [ U=boost::python::api::object ]
        C:\Program Files (x86)\boost\boost_1_42\boost/python/def_visitor.hpp(67) : see reference to function template instantiation 'void boost::python::def_visitor_access::visit<DerivedVisitor,classT>(const V &,classT &)' being compiled
            with [ DerivedVisitor=boost::python::api::object, classT=boost::python::class_<TestBoostPython::TestClass>, V=boost::python::def_visitor<boost::python::api::object> ]
        C:\Program Files (x86)\boost\boost_1_42\boost/python/class.hpp(225) : see reference to function template instantiation 'void boost::python::def_visitor<DerivedVisitor>::visit<W>(classT &) const' being compiled
            with [ DerivedVisitor=boost::python::api::object, W=TestBoostPython::TestClass, classT=boost::python::class_<TestBoostPython::TestClass> ]
        .\tutorial.cpp(29) : see reference to function template instantiation 'boost::python::class_<W> &boost::python::class_<W>::def(const boost::python::def_visitor<DerivedVisitor> &)' being compiled
            with [ W=TestBoostPython::TestClass, U=boost::python::api::object, DerivedVisitor=boost::python::api::object ]

    Does anyone have any idea what went wrong? If I remove the .def(str(self)) part from the wrapper code, everything compiles fine and the class is usable from Python. I'd be very grateful for assistance. Thank you, Rickard
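
    A commonly cited workaround for this exact error pattern is to qualify str so the compiler picks Boost.Python's overload rather than one found through argument-dependent lookup. A sketch of just the module block (note, too, that the header declares operator<< with const TestClass & while the cpp defines it with TestClass &; making those match is worth checking regardless):

        BOOST_PYTHON_MODULE(testme)
        {
            using namespace boost::python;
            class_<TestClass>("TestClass", init<double>())
                .add_property("x", &TestClass::Get_x, &TestClass::Set_x)
                // Qualifying with self_ns steers overload resolution to
                // Boost.Python's str() rather than an unrelated one.
                .def(self_ns::str(self))
                ;
        }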

    Read the article

  • How to manage maintenance/bug-fix branches in Subversion when setup projects need to be built?

    - by Mike Spross
    We have a suite of related products written in VB6, with some C# and VB.NET projects, and all the source is kept in a single Subversion repository. We haven't been using branches in Subversion (although we do tag releases now), and simply do all development in trunk, creating new releases when the trunk is stable enough. This causes no end of grief when we release a new version, issues are found with it, and we have already begun working on new features or major changes to the trunk. In the past, we would address this in one of two ways, depending on the severity of the issues and how stable we thought the trunk was:

    1. Hurry to stabilize the trunk, fix the issues, and then release a maintenance update based on the HEAD revision. This had the side effect of releases that fixed the bugs but introduced new issues because of half-finished features or bugfixes that were in trunk.
    2. Make customers wait until the next official release, which is usually a few months away.

    We want to change our policies to better deal with this situation. I was considering creating a "maintenance branch" in Subversion whenever I tag an official release. Then, new development would continue in trunk, and I could periodically merge specific fixes from trunk into the maintenance branch, creating a maintenance release when enough fixes have accumulated, while we continue to work on the next major update in parallel. I know we could also have a more stable trunk and create a branch for new updates instead, but keeping current development in trunk seems simpler to me.

    The major problem is that while we can easily branch the source code from a release tag and recompile it to get the binaries for that release, I'm not sure how to handle the setup and installer projects. We use QSetup to create all of our setup programs, and right now when we need to modify a setup project, we just edit the project file in place. (All the setup projects, and any dependencies that we don't compile ourselves, are stored on a separate server, and we make sure to always compile the setup projects on that machine only.) However, since we may add or remove files from the setup as our code changes, there is no guarantee that today's setup projects will work with yesterday's source code.

    I was going to put all the QSetup projects in Subversion to deal with this, but I see some problems with that approach. I want the creation of setup programs to be as automated as possible. At the very least, I want a separate build machine where I can build the release that I want (grabbing the code from Subversion first), grab the setup project for that release from Subversion, recompile the setup, and then copy the setup to another place on the network for QA testing and eventual release to customers. However, when someone needs to change a setup project (to add a new dependency that trunk now requires, or to make other changes), there is a problem. If they treat it like a source file and check it out on their own machine to edit it, they won't be able to add files to the project unless they first copy the files they need to add to the build machine (so they are available to other developers), then copy all the other dependencies from the build machine to their machine, making sure to match the folder structure exactly; the issue here is that QSetup uses absolute paths for any files added to a setup project. However, this means installing a bunch of setup dependencies onto development machines, which seems messy (and which could destabilize the development environment if someone accidentally runs the setup project on their machine).

    Also, how do we manage third-party dependencies? For example, if the current maintenance branch used MSXML 3.0 and the trunk now requires MSXML 4.0, we can't go back and create a maintenance release if we have already replaced the MSXML library on the build machine with the latest version (assuming both versions have the same filename). The only solution I can think of is to either put all the third-party dependencies in Subversion along with the source code, or to make sure we put different library versions in separate folders (i.e. C:\Setup\Dependencies\MSXML\v3.0 and C:\Setup\Dependencies\MSXML\v4.0). Is one way "better" or more common than the other? Are there any best practices for dealing with this situation?

    Basically, if we release v2.0 of our software, we want to be able to release v2.0.1, v2.0.2, and v2.0.3 while we work on v2.1, but the whole setup/installation project and setup-dependency issue is making this more complicated than the typical "just create a branch in Subversion and recompile as needed" answer.
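
    For the branching half of this, the Subversion mechanics are cheap; a minimal sketch, with the repository URL and revision number as placeholders:

        # Create a maintenance branch from the v2.0 release tag
        # (URL and revision are hypothetical).
        svn copy https://svn.example.com/repo/tags/v2.0 \
                 https://svn.example.com/repo/branches/v2.0-maint \
                 -m "Create maintenance branch for the 2.0.x line"

        # Later: cherry-pick a specific trunk fix into the branch.
        svn checkout https://svn.example.com/repo/branches/v2.0-maint wc
        cd wc
        svn merge -c 12345 https://svn.example.com/repo/trunk .
        svn commit -m "Merge r12345 (bugfix) from trunk into 2.0-maint"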

    Read the article

  • "SignTool error: Access is denied" in TFS 2010 build process

    - by user351352
    I'm getting "SignTool Error: Access is Denied" when I attempt to sign a file. When I use an administrator cmd, all works fine. However, this process is going to be used in a TFS 2010 build process and using the InvokeProcess task with signtool gives the same access denied message as a non-administrator command prompt. More info: On a Win2008 R2 enterprise machine. User is machine admin and on the domain. The TFS Build service is also set to run as this user. Using a self signed certificate created using these instructions: How do I create a self-signed certificate for code signing on Windows? After following these instructions I have the following files: MyCA.cer MyCA.pvk MySPC.cer MySPC.pvk MySPC.pfx MyCA is in my Trusted Root Certification Authorities I imported MySPC.pfx into personal certificates, following the advice here: SignTool error: Access is denied To do the signing I'm using the thumbprint of the MySPC.pfx that was imported into the Personal section so my signtool command looks like: sign /sha1 1e9d7b5ad98552d9c58944e3f3903e6b929f4819 /t http://timestamp.verisign.com/scripts/timestamp.dll "FileName" Once again this works in Admin mode. This also works when running cmd as administrator: sign /f "C:\Code Signing Non-Release\MySPC.pfx" /t http://timestamp.verisign.com/scripts/timestamp.dll "FileName" New to code signing in general, so any help is welcome.

    Read the article

  • How to set a custom interval on the horizontal axis in Flex Charts

    - by Ali Syed
    Hello folks, I am trying to set a custom step (interval) on my LineChart's horizontal axis. The chart gets its data from a grid. The grid has a lot of data, and it is displayed accurately, but because there are so many data points the horizontal axis is cluttered. I want to set a step on the horizontal axis so that you get an idea of the values when you look at the graph, without hovering the mouse over a data point. Thanks for any help! -Ali
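
    For reference, when the horizontal axis is numeric, the label step can be set declaratively. A minimal MXML sketch; the dataProvider and field names are made up:

        <!-- Sketch: a LineChart whose horizontal axis labels every 10th unit.
             "dp", "time", and "value" are hypothetical names. -->
        <mx:LineChart dataProvider="{dp}">
            <mx:horizontalAxis>
                <mx:LinearAxis interval="10"/>
            </mx:horizontalAxis>
            <mx:series>
                <mx:LineSeries xField="time" yField="value"/>
            </mx:series>
        </mx:LineChart>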

    Read the article

  • Custom MVC2 EditorTemplates

    - by tschreck
    I would like to create a library of custom EditorTemplates and DisplayTemplates. How do I "load" these into my MVC application? I want to be able to reuse my custom template library across a variety of MVC apps.
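
    One approach people use for this is a view engine whose partial-view search paths also cover the shared template folder; a hedged sketch against MVC2's WebFormViewEngine, with "~/SharedTemplates" as a hypothetical location:

        // A view engine that also searches a shared template folder.
        // Templated helpers ask for partial views named
        // "EditorTemplates/<name>" or "DisplayTemplates/<name>", so a
        // single {0} format covers both kinds.
        using System.Linq;
        using System.Web.Mvc;

        public class SharedTemplateViewEngine : WebFormViewEngine
        {
            public SharedTemplateViewEngine()
            {
                PartialViewLocationFormats =
                    new[] { "~/SharedTemplates/{0}.ascx" }
                    .Concat(PartialViewLocationFormats)
                    .ToArray();
            }
        }

        // In Global.asax.cs, Application_Start():
        //     ViewEngines.Engines.Insert(0, new SharedTemplateViewEngine());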

    Read the article

  • Is it possible to scale make -j with distcc more than 5 times?

    - by holmes
    Since distcc cannot keep state, and can only ship each job with its headers and have the servers preprocess and compile using just the data sent, I think the latest distcc has a scalability problem. In my local build environment, which has approx. 10,000 C/C++ files to build, I could only build 2 times faster than without distcc (but still using make -j) when using 20 build servers. What do you think the problem is? If anyone has achieved scalability of more than 10-20 times using make -j and distcc, please let me know. The following product claims that it is impossible to scale make -j and distcc faster than 5 times: http://www.electric-cloud.com/products/electricaccelerator.php. I think this could be improved by:

    1. Letting the distccd server maintain sessions
    2. Tied to those sessions, having the servers cache their own header directories
    3. Doing preprocessing on demand on the distccd server
    4. Doing this through an LD_PRELOADed library, libdistcc.so, which replaces the stat/open syscalls and fetches the header files over the network
    ...
    Has anyone done this kind of thing?
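
    For a baseline, this is roughly how such a farm is usually driven (host names and per-host limits below are placeholders). It may also be worth noting that newer distcc releases ship a "pump" mode that moves preprocessing to the servers, along the lines proposed above:

        # Hedged sketch: drive the 20 build servers from make.
        export DISTCC_HOSTS="localhost/2 server01/8 server02/8 server03/8"  # ...through server20

        # Rule of thumb: -j near the sum of the per-host slot limits, so
        # remote slots stay busy while local preprocessing keeps up.
        make -j160 CC="distcc gcc" CXX="distcc g++"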

    Read the article
