Search Results

  • Windows 8/Surface Launch Event Summary

    - by Tim Murphy
    Today was a big day for Microsoft with two separate launch events. The first was for Windows 8 and all of its hardware partners. The second was specifically to introduce the Microsoft Windows 8 Surface tablet. Below are some of the take-aways I got from the webcasts.

    Windows 8 Launch

    The three general areas that Microsoft focused on were the release of the OS itself, the public unveiling of the Windows Store and the new devices available from its hardware partners.

    The release of the OS focused on the fact that it will be available at midnight tonight for both new PCs and for upgrades. I can't say that this interested me that much since it was already known to most people. I think what they did show well was how easy the OS really is to use.

    The Windows Store is also not a new feature to those of us who have been running the pre-release versions of Windows 8 or have owned a Windows Phone 7 for the past two years. What was interesting is that the Windows Store launches with more apps available than any other platform's store at its respective launch. I think this says a lot about how Microsoft focuses on the ability of developers to create software and make it available. They of course were sure to emphasize that the Windows Store has better monetary terms for developers than its competitors.

    They also showed off the fact that Xbox Music streaming is available to all Windows 8 users for free. Couple this with the Bing suite of apps that give you news, weather, sports and finance right out of the box and I think most people will find the environment a joy to use.

    I think the hardware demo, while quick and furious, really showed where Windows shines: CHOICE! They made a statement that over 1000 devices have been certified for Windows 8. They showed tablets, laptops, desktops, all-in-ones and convertibles. Since these devices have industry-standard connectors, they allow a much wider variety of accessories and devices that you can use with them.

    Steve Ballmer then came on stage and tried to see how many times he could use the word "magical". He focused on how the Windows 8 OS is designed to integrate with SkyDrive, Skype and Outlook.com. He also reinforced that they think Windows 8 is the best choice for the enterprise when it comes to protecting data and integrating across devices, including Windows Phone 8.

    With that we were left to wait for the second event of the day.

    Surface Launch

    The second event of the day started with kids with magnets. Ok, they were adults, but who doesn't like playing with magnets? Steven Sinofsky detached and reattached the Surface keyboard repeatedly, clearly enjoying himself. It turns out that there are 4 magnets in the cover, 2 for alignment and 2 as connectors.

    They then went on to give us the details on the display. The 10.6" display is optically bonded to the case and is optimized to reduce glare. I think this came through very well in the demonstrations.

    The properties of the case were also a great selling point. The VaporMg casing allowed them to drop the device on stage, on purpose, and continue working. Of course they had to bring out the skateboards made from Surface devices.

    "It just has to feel right" was the reason they gave for many of their design decisions, from the weight and size of the device to the way the kickstand and camera work together. While this gave you the feeling that the whole process was trial and error, you could tell that a lot of science went into the specs. This included making sure that the magnets were strong enough to hold the cover on and still let a 3-year-old remove the cover without effort.

    I am glad that they also decided that a USB port would be part of the spec, since it gives so many options. They made the point that this allows Surface to leverage over 420 million existing devices. That works for me.

    The last feature that I really thought was important was the microSD port. Being stuck with the onboard memory has been an aggravation of mine with many of the devices in the market today.

    I think they did a good job of really getting the audience to understand why you want this platform and this particular device. They used personal examples like creating a video of a birthday party and being in it, or the fact that the device was being used to live blog the event and control the lights and presentation. They showed very well that it was not only fun but very capable of getting real work done. Handing out tablets to the crowd didn't hurt either. In the end I really wanted a Surface even though I really have no need for one on a daily basis. Great job Microsoft!

    del.icio.us Tags: Windows 8,Win8,Windows 8 Launch

  • Data Source Security Part 2

    - by Steve Felts
    In Part 1, I introduced the default security behavior and listed the various options available to change that behavior. One of the key topics to understand is the difference between directly using database user and password values versus mapping from WLS user and password to the associated database values. The direct use of database credentials is relatively new to WLS, based on customer feedback. Some of the trade-offs are covered in this article.

    Credential Mapping vs. Database Credentials

    Each WLS data source has a credential map that is a mechanism used to map a key, in this case a WLS user, to security credentials (user and password). By default, when a user and password are specified when getting a connection, they are treated as credentials for a WLS user, validated, and are converted to a database user and password using a credential map associated with the data source. If a matching entry is not found in the credential map for the data source, then the user and password associated with the data source definition are used. Because of this defaulting mechanism, you should be careful what permissions are granted to the default user. Alternatively, you can define an invalid default user to ensure that no one can accidentally get through (in this case, you would need to set the initial capacity for the pool to zero so that the pool is populated only by valid users).

    To create an entry in the credential map:

    1) First create a WLS user. In the administration console, go to Security realms, select your realm (e.g., myrealm), select Users, and select New.

    2) Second, create the mapping. In the administration console, go to Services, select Data sources, select your data source name, select Security, select Credentials, and select New. See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/ConfigureCredentialMappingForADataSource.html for more information.

    The advantages of using the credential mapping are that:

    1) You don't hard-code the database user/password into a program or need to prompt for it in addition to the WLS user/password, and

    2) It provides a layer of abstraction between WLS security and database settings such that many WLS identities can be mapped to a smaller set of DB identities, thereby only requiring middle-tier configuration updates when WLS users are added/removed.

    You can cut down the number of users that have access to a data source to reduce the user maintenance overhead. For example, suppose that a servlet has one pre-defined, special WLS user/password for data source access, hard-wired in its code in a getConnection(user, password) call. Every WebLogic user can reap the specific DBMS access coded into the servlet, but none has to have general access to the data source. For instance, there may be a 'Sales' DBMS which needs to be protected from unauthorized eyes, but it contains some day-to-day data that everyone needs. The Sales data source is configured with restricted access and a servlet is built that hard-wires the specific data source access credentials in its connection request. It uses that connection to deliver only the generally needed day-to-day information to any caller. The servlet cannot reveal any other data, and no WebLogic user can get any other access to the data source. This is the approach that many large applications take and is the reasoning behind the default mapping behavior in WLS.
    The disadvantages of using the credential map are that:

    1) It is difficult to manage (create, update, delete) with a large number of users; it is possible to use WLST scripts or a custom JMX client utility to manage credential map entries.

    2) You can't share a credential map between data sources, so they must be duplicated.

    Some applications prefer not to use the credential map. Instead, the credentials passed to getConnection(user, password) should be treated as database credentials and used to authenticate with the database for the connection, avoiding going through the credential map. This is enabled by setting "use-database-credentials" to true. See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/ConfigureOracleParameters.html "Configure Oracle parameters" in Oracle WebLogic Server Administration Console Help.

    Use Database Credentials is not currently supported for Multi Data Source configurations. When enabled, it turns off credential mapping on Generic and Active GridLink data sources for the following attributes:

    1. identity-based-connection-pooling-enabled (this interaction is available by patch in 10.3.6.0).

    2. oracle-proxy-session (this interaction is first available in 10.3.6.0).

    3. set client identifier (this interaction is available by patch in 10.3.6.0). Note that in the data source schema, the set client identifier feature is poorly named "credential-mapping-enabled". The documentation and the console refer to it as Set Client Identifier.

    To review the behavior of credential mapping and using database credentials:

    - If using the credential map, there needs to be a mapping for each WLS user to database user for those users that will have access to the database; otherwise the default user for the data source will be used. If you always specify a user/password when getting a connection, you only need credential map entries for those specific users.

    - If using database credentials without specifying a user/password, the default user and password in the data source descriptor are always used. If you specify a user/password when getting a connection, that user will be used for the credentials. WLS users are not involved at all in the data source connection process.
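
    To make the distinction concrete, here is a minimal sketch of the calling code. The JNDI name and credentials are placeholders of mine, not values from this article; whether "scott"/"tiger" is validated as a WLS user (and run through the credential map) or passed straight to the database depends entirely on the use-database-credentials setting described above.

    import java.sql.Connection;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class ConnectionDemo {
        public static Connection open() throws Exception {
            InitialContext ctx = new InitialContext();
            // "jdbc/SalesDS" is a hypothetical data source JNDI name.
            DataSource ds = (DataSource) ctx.lookup("jdbc/SalesDS");
            // With default settings, "scott" is treated as a WLS user and
            // mapped to a DB user via the credential map; with
            // use-database-credentials=true it is sent to the database as-is.
            return ds.getConnection("scott", "tiger");
        }
    }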

  • If I were in a Silverlight focus group, here are ten things I would say.

    - by mbcrump
    Silverlight is a great product right off the shelf. I use it, love it and spend a lot of time helping the community understand it. This, however, doesn't mean that I don't think it can get better. If I were invited to a Microsoft focus group about Silverlight, here are 10 things I would say:

    1. We need more navigation templates. I've found (4) templates that Microsoft has released (Cosmo, Windows 7, Accent and JetPack). This number needs to be around 16. In order to get more people developing for Silverlight, we need to give them a variety of templates to get them off the ground quickly.

    2. Silverlight needs to ship with the next version of Windows. At least version 4 needs to be pre-installed on Windows going forward. It's small, it runs in its own sandbox and I cannot find a reason for it not to be included.

    3. Silverlight needs to run on more platforms. iOS and Android are the key here. I think Microsoft should shoot for Android first since I believe Android will take the lead in the mobile market (at least for the short-term). It would also be great to see Microsoft use Silverlight as the focus on their new tablets / "AppleTV". I would even invest in getting it working with Kinect.

    4. When creating a new project in Silverlight, we should have the option to create a Unit Test. Most Silverlight developers are not unit testing. If this is surprising to you then you need to get out and talk to more developers. I partially blame this on Microsoft. When you create a new ASP.NET MVC application, you simply tick a checkbox to create a Unit Test project. We need the same thing for Silverlight. We should steer the developer in the right direction.

    5. Design patterns such as MVVM need to be easier to implement in Silverlight solutions. I'd go so far as to say that MVVM Light should ship with Visual Studio. With the project / item templates and code snippets, Laurent points you in the right direction. This is the way that it should have been: easy for the 9-5 developer to grasp. I believe the majority of developers use code-behind because that's what is in all the demos provided by Microsoft. They are not trying to write sucky code; it's that they simply don't know a better way.

    6. The XAP files should be obfuscated, and unused references deleted, by default when in "Release" mode. A better Silverlight experience starts with a smaller XAP file. The less a user has to download, the better, even with the majority of people on broadband. I would also recommend built-in obfuscation by Microsoft. People are paranoid that anyone can rename the .xap to .zip and run it through Reflector.

    7. Get rid of the boring install experiences. Here is a great write-up on what I'm talking about. The default "Install Silverlight" and "Loading" screens suck. They suck bad. We need a choice of templates that a professional designer has created.

    8. Silverlight needs to support more image formats. For example, it would be great to use .gifs without converting them to .pngs.

    9. Switching between Blend 4 and VS2010 to develop a Silverlight application is a pain. This is probably one of the biggest issues, and one that I can't think of a good solution for. It would be nice if VS2012 had the best of both worlds and you never had to leave VS.

    10. We need reporting controls, with SSRS support, included with the Silverlight Toolkit. I can't think of another control that we need built into the toolkit more. It would also be helpful to have export to .xls, .pdf and .doc included with the control.

    I hope that this post will at least get a few people talking. Who knows, Microsoft could be working on these things right now. Thanks for reading!

  • Adaptive Connections For ADFBC

    - by Duncan Mills
    Some time ago I wrote an article on Adaptive Bindings showing how the pageDef for an ADF UI does not have to be wedded to a fixed data control or collection / View Object. This article has proved pretty popular, so as a follow-up I wanted to cover another "adaptive" feature of your ADF applications: the ability to make multiple different connections from an Application Module, at runtime.

    Now, I'm sure you'll be aware that if you define your application to use a data source rather than a hard-coded JDBC connection string, then you have the ability to change the target of that data source after deployment to point to a different database. So that's great, but the reality is that this single connection is effectively fixed within the application, right? Well, no; this, it turns out, is a common misconception. To be clear, a single instance of an ADF Application Module is associated with a single connection, but there is nothing to stop you from creating multiple instances of the same Application Module within the application, all pointing at different connections.

    In fact this has been possible for a long time using a custom extension point with code that extends oracle.jbo.http.HttpSessionCookieFactory. This approach, however, involves writing code, and no one likes to write any more code than they need to. So, is there an easier way? Yes indeed. It is in fact a little-publicized feature that's available in all versions of 11g: the ELEnvInfoProvider.

    What Does it Do?

    The ELEnvInfoProvider is a pre-existing class (the full path is oracle.jbo.client.ELEnvInfoProvider) which you can plug into your Application Module configuration using the jbo.envinfoprovider property. Visually you can set this in the editor, or you can also set it directly in the bc4j.xcfg (see below for an example).

    Once you have plugged in this envinfoprovider, here's the fun bit: rather than defining the hard-coded name of a data source, you can instead plug in an EL expression for the connection to use. So what's the benefit of that? Well, it allows you to defer the selection of a connection until the point in time that you instantiate the AM.

    To define the expression itself you'll need to do a couple of things:

    First of all you'll need a managed bean of some sort – e.g. a sessionScoped bean defined in your ViewController project. This will need a getter method that returns the name of the connection. Now this connection itself needs to be defined in your Application Server, and can be managed through Enterprise Manager, WLST or through MBeans. (You may need to read the documentation [http://docs.oracle.com/cd/E28280_01/web.1111/b31974/deployment_topics.htm#CHDJGBDD] here on how to configure connections at runtime if you're not familiar with this.)

    The EL expression (e.g. ${connectionManager.connection}) is then defined in the configuration by editing the bc4j.xcfg file (there is a hyperlink directly to this file on the configuration editing screen in the Application Module editor). You simply replace the hardcoded JDBCName value with the expression.
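
    As a rough sketch of the managed bean side (the bean class, connection names and selection logic here are my own placeholders, not from this article, and it assumes the standard oracle.adf.share.ADFContext API), the session-scoped bean behind ${connectionManager.connection} might look like:

    import oracle.adf.share.ADFContext;

    // Registered as a session-scoped managed bean named "connectionManager".
    public class ConnectionManager {
        // Returns the name of a connection defined on the application server.
        public String getConnection() {
            String user = ADFContext.getCurrent()
                                    .getSecurityContext().getUserName();
            // "SalesDB" and "DefaultDB" are hypothetical connection names.
            return (user != null && user.startsWith("sales"))
                    ? "SalesDB" : "DefaultDB";
        }
    }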
    So your cfg file would end up looking something like this (notice the reference to the ELEnvInfoProvider that I talked about earlier, and note that the Database element needs its closing tag):

    <BC4JConfig version="11.1" xmlns="http://xmlns.oracle.com/bc4j/configuration">
      <AppModuleConfigBag ApplicationName="oracle.demo.model.TargetAppModule">
        <AppModuleConfig DeployPlatform="LOCAL"
                         JDBCName="${connectionManager.connection}"
                         jbo.project="oracle.demo.model.Model"
                         name="TargetAppModuleLocal"
                         ApplicationName="oracle.demo.model.TargetAppModule">
          <AM-Pooling jbo.doconnectionpooling="true"/>
          <Database jbo.locking.mode="optimistic">
            <Security AppModuleJndiName="oracle.demo.model.TargetAppModule"/>
            <Custom jbo.envinfoprovider="oracle.jbo.client.ELEnvInfoProvider"/>
          </Database>
        </AppModuleConfig>
      </AppModuleConfigBag>
    </BC4JConfig>

    Still Don't Quite Get It?

    So far you might be thinking: well, that's fine, but what difference does it make if the connection is resolved "just in time" rather than up front and changed as required through Enterprise Manager? Well, a trivial example would be where you have a single application deployed to your application server, but for different users you want to connect to different databases. Because the evaluation of the connection is deferred until you first reference the AM, you have a decision point that can take the user identity into account.

    However, think about it for a second. Under what circumstances does a new AM get instantiated? At the first reference of the AM within the application, yes, but also whenever a Task Flow is entered – if the data control scope for the Task Flow is ISOLATED. So the reality is that on a single screen you can embed multiple Task Flows, all of which are pointing at different database connections concurrently.

    Hopefully you'll find this feature useful; let me know...

  • Very different IO performance in C/C++

    - by Roberto Tirabassi
    Hi all, I'm a new user and my English is not so good, so I hope to be clear. We're facing a performance problem using large files (1GB or more), especially (as it seems) when you try to grow them in size. Anyway, to verify our sensations we tried the following (on Win 7 64-bit, 4 cores, 8GB RAM, 32-bit code compiled with VC2008):

    a) Open a non-existing file. Write it from the beginning up to 1GB in 1MB slots. Now you have a 1GB file. Now randomize 10000 positions within that file, seek to each position and write 50 bytes there, no matter what you write. Close the file and look at the results. Time to create the file is quite fast (about 0.3"), and the time to write 10000 times is fast all the same (about 0.03"). Very good, this is the beginning. Now try something else...

    b) Open a non-existing file, seek to 1GB-1 byte and write just 1 byte. Now you have another 1GB file. Follow the next steps exactly the same way as case 'a', close the file and look at the results. Time to create the file is the fastest you can imagine (about 0.00009"), but the write time is something you can't believe: about 90"!!!!!

    b.1) Open a non-existing file, don't write any byte. Act as before: randomizing, seeking and writing, close the file and look at the result. Time to write is long all the same: about 90"!!!!!

    Ok... this is quite amazing. But there's more!

    c) Open again the file you created in case 'a', don't truncate it... randomize again 10000 positions and act as before. You're as fast as before, about 0.03" to write 10000 times. This sounds OK... try another step.

    d) Now open the file you created in case 'b', don't truncate it... randomize again 10000 positions and act as before. You're slow again and again, but the time is reduced to... 45"!! Maybe, trying again, the time will reduce further. I actually wonder why... Any idea?

    The following is part of the code I used to test what I described in the previous cases (you'll have to change something in order to get a clean compilation; I just cut & pasted from some source code, sorry). The sample can read and write in random, ordered or reverse-ordered mode, but writing only in random order is the clearest test. We tried using std::fstream but also using CreateFile(), WriteFile() and so on directly; the results are the same (even if std::fstream is actually a little slower).

    Parameters for case 'a' = -f_tempdir_\casea.dat -n10000 -t -p -w
    Parameters for case 'b' = -f_tempdir_\caseb.dat -n10000 -t -v -w
    Parameters for case 'b.1' = -f_tempdir_\caseb.dat -n10000 -t -w
    Parameters for case 'c' = -f_tempdir_\casea.dat -n10000 -w
    Parameters for case 'd' = -f_tempdir_\caseb.dat -n10000 -w

    Run the test (and even others) and see...

    // iotest.cpp : Defines the entry point for the console application.
    //
    #include <windows.h>
    #include <iostream>
    #include <set>
    #include <vector>
    // Added for a clean standalone compile (string, fstream, std::min, memset, rand):
    #include <string>
    #include <fstream>
    #include <algorithm>
    #include <cstring>
    #include <cstdlib>
    #include "stdafx.h"

    double RealTime_Microsecs()
    {
        LARGE_INTEGER fr = {0, 0};
        LARGE_INTEGER ti = {0, 0};
        double time = 0.0;
        QueryPerformanceCounter(&ti);
        QueryPerformanceFrequency(&fr);
        time = (double) ti.QuadPart / (double) fr.QuadPart;
        return time;
    }

    int main(int argc, char* argv[])
    {
        std::string sFileName;
        size_t stSize, stTimes, stBytes;
        int retval = 0;
        char *p = NULL;
        char *pPattern = NULL;
        char *pReadBuf = NULL;
        try {
            // Defaults
            stSize = 1<<30; // 1Gb
            stTimes = 1000;
            stBytes = 50;
            bool bTruncate = false;
            bool bPre = false;
            bool bPreFast = false;
            bool bOrdered = false;
            bool bReverse = false;
            bool bWriteOnly = false;
            // Consume the parameters
            for (int index = 1; index < argc; ++index) {
                if ('-' != argv[index][0]) throw;
                switch (argv[index][1]) {
                    case 'f': sFileName = argv[index]+2; break;
                    case 's': stSize = xw::str::strtol(argv[index]+2); break;
                    case 'n': stTimes = xw::str::strtol(argv[index]+2); break;
                    case 'b': stBytes = xw::str::strtol(argv[index]+2); break;
                    case 't': bTruncate = true; break;
                    case 'p': bPre = true, bPreFast = false; break;
                    case 'v': bPreFast = true, bPre = false; break;
                    case 'o': bOrdered = true, bReverse = false; break;
                    case 'r': bReverse = true, bOrdered = false; break;
                    case 'w': bWriteOnly = true; break;
                    default: throw; break;
                }
            }
            if (sFileName.empty()) {
                std::cout << "Usage: -f<File Name> -s<File Size> -n<Number of Reads and Writes> -b<Bytes per Read and Write> -t -p -v -o -r -w" << std::endl;
                std::cout << "-t truncates the file, -p pre load the file, -v pre load 'veloce', -o writes in order mode, -r write in reverse order mode, -w Write Only" << std::endl;
                std::cout << "Default: 1Gb, 1000 times, 50 bytes" << std::endl;
                throw;
            }
            if (!stSize || !stTimes || !stBytes) {
                std::cout << "Invalid Parameters" << std::endl;
                return -1;
            }
            size_t stBestSize = 0x00100000;
            std::fstream fFile;
            // openmode(0) keeps the no-truncate case a valid openmode value
            fFile.open(sFileName.c_str(), std::ios_base::binary|std::ios_base::out|std::ios_base::in|(bTruncate ? std::ios_base::trunc : std::ios_base::openmode(0)));
            p = new char[stBestSize];
            pPattern = new char[stBytes];
            pReadBuf = new char[stBytes];
            memset(p, 0, stBestSize);
            memset(pPattern, (int)(stBytes&0x000000ff), stBytes);
            double dTime = RealTime_Microsecs();
            size_t stCopySize, stSizeToCopy = stSize;
            if (bPre) {
                // Case 'a': materialize the whole file with sequential writes
                do {
                    stCopySize = std::min(stSizeToCopy, stBestSize);
                    fFile.write(p, stCopySize);
                    stSizeToCopy -= stCopySize;
                } while (stSizeToCopy);
                std::cout << "Creating time is: " << xw::str::itoa(RealTime_Microsecs()-dTime, 5, 'f') << std::endl;
            } else if (bPreFast) {
                // Case 'b': create the file by writing one byte at the end
                fFile.seekp(stSize-1);
                fFile.write(p, 1);
                std::cout << "Creating Fast time is: " << xw::str::itoa(RealTime_Microsecs()-dTime, 5, 'f') << std::endl;
            }
            size_t stPos;
            ::srand((unsigned int)dTime);
            double dReadTime, dWriteTime;
            stCopySize = stTimes;
            std::vector<size_t> inVect;
            std::vector<size_t> outVect;
            std::set<size_t> outSet;
            std::set<size_t> inSet;
            // Prepare vector and set
            do {
                stPos = (size_t)(::rand()<<16) % stSize;
                outVect.push_back(stPos);
                outSet.insert(stPos);
                stPos = (size_t)(::rand()<<16) % stSize;
                inVect.push_back(stPos);
                inSet.insert(stPos);
            } while (--stCopySize);
            // Write & read using vectors (random order)
            if (!bReverse && !bOrdered) {
                std::vector<size_t>::iterator outI, inI;
                outI = outVect.begin();
                inI = inVect.begin();
                stCopySize = stTimes;
                dReadTime = 0.0;
                dWriteTime = 0.0;
                do {
                    dTime = RealTime_Microsecs();
                    fFile.seekp(*outI);
                    fFile.write(pPattern, stBytes);
                    dWriteTime += RealTime_Microsecs() - dTime;
                    ++outI;
                    if (!bWriteOnly) {
                        dTime = RealTime_Microsecs();
                        fFile.seekg(*inI);
                        fFile.read(pReadBuf, stBytes);
                        dReadTime += RealTime_Microsecs() - dTime;
                        ++inI;
                    }
                } while (--stCopySize);
                std::cout << "Write time is " << xw::str::itoa(dWriteTime, 5, 'f') << " (Ave: " << xw::str::itoa(dWriteTime/stTimes, 10, 'f') << ")" << std::endl;
                if (!bWriteOnly) {
                    std::cout << "Read time is " << xw::str::itoa(dReadTime, 5, 'f') << " (Ave: " << xw::str::itoa(dReadTime/stTimes, 10, 'f') << ")" << std::endl;
                }
            } // End
            // Write in order
            if (bOrdered) {
                std::set<size_t>::iterator i = outSet.begin();
                dWriteTime = 0.0;
                stCopySize = 0;
                for (; i != outSet.end(); ++i) {
                    stPos = *i;
                    dTime = RealTime_Microsecs();
                    fFile.seekp(stPos);
                    fFile.write(pPattern, stBytes);
                    dWriteTime += RealTime_Microsecs() - dTime;
                    ++stCopySize;
                }
                std::cout << "Ordered Write time is " << xw::str::itoa(dWriteTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dWriteTime/stCopySize, 10, 'f') << ")" << std::endl;
                if (!bWriteOnly) {
                    i = inSet.begin();
                    dReadTime = 0.0;
                    stCopySize = 0;
                    for (; i != inSet.end(); ++i) {
                        stPos = *i;
                        dTime = RealTime_Microsecs();
                        fFile.seekg(stPos);
                        fFile.read(pReadBuf, stBytes);
                        dReadTime += RealTime_Microsecs() - dTime;
                        ++stCopySize;
                    }
                    std::cout << "Ordered Read time is " << xw::str::itoa(dReadTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dReadTime/stCopySize, 10, 'f') << ")" << std::endl;
                }
            } // End
            // Write in reverse order
            if (bReverse) {
                std::set<size_t>::reverse_iterator i = outSet.rbegin();
                dWriteTime = 0.0;
                stCopySize = 0;
                for (; i != outSet.rend(); ++i) {
                    stPos = *i;
                    dTime = RealTime_Microsecs();
                    fFile.seekp(stPos);
                    fFile.write(pPattern, stBytes);
                    dWriteTime += RealTime_Microsecs() - dTime;
                    ++stCopySize;
                }
                std::cout << "Reverse ordered Write time is " << xw::str::itoa(dWriteTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dWriteTime/stCopySize, 10, 'f') << ")" << std::endl;
                if (!bWriteOnly) {
                    i = inSet.rbegin();
                    dReadTime = 0.0;
                    stCopySize = 0;
                    for (; i != inSet.rend(); ++i) {
                        stPos = *i;
                        dTime = RealTime_Microsecs();
                        fFile.seekg(stPos);
                        fFile.read(pReadBuf, stBytes);
                        dReadTime += RealTime_Microsecs() - dTime;
                        ++stCopySize;
                    }
                    std::cout << "Reverse ordered Read time is " << xw::str::itoa(dReadTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dReadTime/stCopySize, 10, 'f') << ")" << std::endl;
                }
            } // End
            dTime = RealTime_Microsecs();
            fFile.close();
            std::cout << "Flush/Close Time is " << xw::str::itoa(RealTime_Microsecs()-dTime, 5, 'f') << std::endl;
            std::cout << "Program Terminated" << std::endl;
        } catch (...) {
            std::cout << "Something wrong or wrong parameters" << std::endl;
            retval = -1;
        }
        if (p) delete []p;
        if (pPattern) delete []pPattern;
        if (pReadBuf) delete []pReadBuf;
        return retval;
    }

  • Tweaking Hudson memory usage

    - by rovarghe
    Hudson 3.1 has some performance optimizations that greatly reduce its memory footprint. Prior to this, Hudson always held the entire data model (all jobs and all builds) in memory, which affected scalability. Some installations configured heap sizes in excess of 1GB to counteract this. Hudson 3.1.x maintains an MRU cache and only loads jobs and builds as they are required. Because of the inability to change existing APIs and remain backward compatible with plugins, there were limits to how far we could go with this approach.

    Memory optimizations almost always come with a related cost; in this case it's the additional I/O that has to be performed to load data on request. On a small site with frequent traffic, this is usually not noticeable since the MRU cache will usually hold on to all the data. A large site with infrequent traffic might experience some delays when the first request hits the server after a long gap. If you have a large heap and are able to allocate more memory, the cache settings can be adjusted to take advantage of this and even go back to pre-3.1 behavior.

    All the cache settings can be passed as options to the JVM container (Tomcat or the default Jetty container) using the -D option. There are two caches, independent of each other: one for jobs and the other for builds.

    For the jobs cache:

    hudson.jobs.cache.evict_in_seconds (default=60)
    Seconds from last access (could be because of a servlet request or a background cron thread) after which a job should be purged from the cache. Set this to 0 to never purge based on time.

    hudson.jobs.cache.initial_capacity (default=1024)
    Initial number of jobs the cache can accommodate. Setting this to the number of jobs you typically display on your Hudson landing page or home page will speed up consecutive access to that page. If the default is too large, you may consider downsizing and using that memory for the builds cache instead.

    hudson.jobs.cache.max_entries (default=1024)
    Maximum number of jobs in the cache. The default is large enough for most installations, but if you see I/O activity whenever you access the Hudson home page you might consider increasing this. First verify whether the I/O is caused by frequent eviction (see above) rather than by the cache not being large enough.

    For the builds cache:

    The builds cache is used to store Build objects as they are read from storage. Typically this happens when a user drills down into the details of a particular job from the Hudson home page. The cache is shared among builds for different jobs since in most installations all jobs are not accessed with the same frequency, so a per-job builds cache would be a waste of memory.

    hudson.job.builds.cache.evict_in_seconds (default=60)
    Same as the equivalent jobs cache setting, applied to builds.

    hudson.job.builds.cache.initial_capacity (default=512)
    Same as the equivalent jobs cache setting. Note the smaller initial size. If your site stores a large number of builds and has frequent access to many of them, you might consider bumping this up.

    hudson.job.builds.cache.max_entries (default=10240)
    The default max is large enough for most installations. The builds cache holds bigger objects, so be careful about increasing the upper limit on this. See the section on monitoring below.
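
    Since these settings arrive as plain JVM system properties, they are consumed roughly like the following sketch (illustrative only; this is not Hudson's actual code, and the class name is mine):

    public class CacheSettings {
        public static void main(String[] args) {
            // Each -D option surfaces as a system property; Integer.getInteger
            // returns the supplied default when the property is not set.
            int evictSeconds    = Integer.getInteger("hudson.jobs.cache.evict_in_seconds", 60);
            int initialCapacity = Integer.getInteger("hudson.jobs.cache.initial_capacity", 1024);
            int maxEntries      = Integer.getInteger("hudson.jobs.cache.max_entries", 1024);
            System.out.println(evictSeconds + " " + initialCapacity + " " + maxEntries);
        }
    }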
    Sample usage:

    java -jar hudson-war-3.1.2-SNAPSHOT.war -Dhudson.jobs.cache.evict_in_seconds=300 \
         -Dhudson.job.builds.cache.evict_in_seconds=300

    Monitoring cache usage

    The 'jmap' tool that comes with the JDK can be used to monitor cache performance in an indirect way by looking at the number of Job and Build objects in each cache. Find the PID of the Hudson instance and run:

    $ jmap -histo:live <pid> | grep 'hudson.model.*Lazy.*Key$'

    Here's a sample output:

     num     #instances    #bytes  class name
     523:            28       896  hudson.model.RunMap$LazyRunValue$Key
    1200:             3        96  hudson.model.LazyTopLevelItem$Key

    These are the keys to the jobs (LazyTopLevelItem$Key) and builds (RunMap$LazyRunValue$Key) in the caches, so counting the number of keys is a good indicator of the number of items in the cache at any given moment. The size in bytes can be ignored; that is just the size of the keys, not the actual sizes of the objects they hold. Those sizes can only be obtained with a profiler.

    With the output above we can conclude that there are 3 jobs and 28 builds in memory. The 28 builds could all be from 1 job or spread across all 3 jobs. Over time on an idle system, these should get evicted and the caches should be empty. In practice, because of background cron threads and triggers, jobs rarely fall to zero. Access of a job or a build by a cron thread resets the eviction timer.

  • Configuring Cisco 877W router from scratch for DHCP, WiFi, ADSL2+, NAT

    - by David M Williams
    Hi all, I apologise if this is a BIG question but I am quite lost with the Cisco IOS. I know what I want to achieve, just not how to do it :( I have a Cisco 877W router with 4 FastEthernet interfaces, 1 ATM interface and 1 802.11 radio. I want to set it up for a small network and am trying to construct a configuration below. I was using Google to try and flesh it out but I think I need help and guidance from actual experts! If it helps, output from show ver says:

    Cisco IOS software, C870 software (C870-ADVSECURITYK9-M), version 12.4(4)T7, release software (fc1)
    ROM: System bootstrap, version 12.3(8r)YI4, release software

    Here's what I have so far, which hopefully outlines clearly enough what I am wanting to do. The bits in angle brackets are placeholders (eg the secret password).

    !
    ! Set router hostname
    !
    hostname Shazam
    !
    ! Set usernames and passwords
    !
    username david privilege 15 secret 0 <PASSWORD>
    enable secret <SECRETPASSWORD>
    !
    ! Configure SSH and telnet access
    !
    line vty 0 4
     privilege level 15
     login local
     transport input telnet ssh
    !
    ! Local logging
    !
    logging buffered 51200 warning
    !
    ! Set date and time for NSW, Australia (GMT +10h)
    !
    !
    ! Set router IP address to 192.168.1.1 on FastEthernet0 port
    !
    interface FastEthernet0
     ip address 192.168.1.1 255.255.255.0
     no shut
     ip nat inside
    !
    ! Forward any unknown DNS requests to Google
    !
    ip dns server
    ip name-server 8.8.8.8
    ip name-server 8.8.4.4
    !
    ! Set up DHCP
    ! DHCP pool covers 192.168.1.100 - .199
    ! Set gateway and DNS server to be the router, ie 192.168.1.1
    !
    service dhcp
    ip routing
    ip dhcp excluded-address 192.168.1.1 192.168.1.99
    ip dhcp excluded-address 192.168.1.200 192.168.1.255
    ip dhcp pool <DHCPPOOLNAME>
     network 192.168.1.0 255.255.255.0
     default-router 192.168.1.1
     dns-server 192.168.1.1
     lease 7
    !
    ! DHCP reservations
    !
    ! Assign IP address 192.168.1.105 to MAC address 00-21-5D-2F-58-04
    !
    ! Configure ADSL2 connection details
    !
    interface atm
     dsl operating-mode adsl2+
    !
    ! Set up NAT rules
    !
    ! Forward port 35394 to 192.168.1.105
    !
    ! Set up WiFi
    !
    ! SSID visible, WPA2 security, Pre-shared key

    I'm hoping most of this is boiler-plate stuff to you guys. I'm keen to not just get a working script but to actually understand it also. Unfortunately, I'm finding the Cisco reference material online very complex. Thank you!

  • Rails requires RubyGems but I have the gems

    - by fogonthedowns
    Update: I notice that 'which ruby' and 'whereis ruby' report different locations:

    $ which ruby
    /opt/local/bin/ruby
    $ whereis ruby
    /usr/bin/ruby

    I recently upgraded Ruby to ruby 1.8.7 (2009-06-12 patchlevel 174) [i686-darwin10] and I think I broke Rails. When I attempt to load Rails, I get an odd message. Please help!

    $ ruby script/server
    Rails requires RubyGems >= 1.3.2. Please install RubyGems and try again: http://rubygems.rubyforge.org
    $ rails -v
    Rails 3.0.0.beta
    $ gem -v
    1.3.6
    $ which gem
    /usr/bin/gem
    $ whereis gem
    /usr/bin/gem
    $ which rails
    /usr/bin/rails
    $ whereis rails
    /usr/bin/rails
    $ /usr/bin/gem -v
    1.3.6
    $ /usr/bin/rails -v
    Rails 3.0.0.beta
    $ ruby script/console
    Rails requires RubyGems >= 1.3.2. Please install RubyGems and try again: http://rubygems.rubyforge.org
    $ gem list rails

    *** LOCAL GEMS ***

    rails (3.0.0.beta, 2.3.5, 2.2.2, 1.2.6)

    $ gem list

    *** LOCAL GEMS ***

    abstract (1.0.0) actionmailer (3.0.0.beta, 2.3.5, 2.2.2, 1.3.6) actionpack (3.0.0.beta, 2.3.5, 2.2.2, 1.13.6) actionwebservice (1.2.6) activemerchant (1.4.1) activemodel (3.0.0.beta) activerecord (3.0.0.beta, 2.3.5, 2.2.2, 1.15.6) activerecord-tableless (0.1.0) activeresource (3.0.0.beta, 2.3.5, 2.2.2) activesupport (3.0.0.beta, 2.3.5, 2.2.2, 1.4.4) acts_as_ferret (0.4.3) arel (0.2.pre) authlogic (2.1.3) builder (2.1.2) bundler (0.9.3) calendar_date_select (1.15) capistrano (2.5.2) cgi_multipart_eof_fix (2.5.0) chronic (0.2.3) columnize (0.3.1) compass (0.8.17) daemons (1.0.10) dnssd (0.6.0) erubis (2.6.5) fastercsv (1.5.0) fastthread (1.0.1) fcgi (0.8.7) ferret (0.11.6) flay (1.4.0) flog (2.4.0) gbarcode (0.98.16) gem_plugin (0.2.3) git (1.2.5) haml (2.2.15) haml-edge (2.3.100) highline (1.5.0) hoe (2.4.0) hpricot (0.6.164) i18n (0.3.3) javan-whenever (0.3.7) jeweler (1.4.0) jscruggs-metric_fu (1.1.5) json_pure (1.2.0) libxml-ruby (1.1.2) linecache (0.43) mail (2.1.2) mechanize (0.9.3) memcache-client (1.7.8) mime-types (1.16) mislav-will_paginate (2.3.11) mocha (0.9.7) mojombo-chronic (0.3.0) mongrel (1.1.5) needle (1.3.0) net-scp (1.0.1) net-sftp (2.0.1, 1.1.1) net-ssh (2.0.4, 1.1.4) net-ssh-gateway (1.0.0) nifty-generators (0.3.0) nokogiri (1.4.0) openrain-action_mailer_tls (1.1.3) passenger (2.2.5) polyglot (0.2.9) prawn (0.6.3) prawn-core (0.6.3) prawn-format (0.2.3) prawn-layout (0.3.2) prawn-security (0.1.1) rack (1.1.0, 1.0.1) rack-mount (0.4.5) rack-test (0.5.3) rails (3.0.0.beta, 2.3.5, 2.2.2, 1.2.6) railties (3.0.0.beta) rake (0.8.7, 0.8.3) rake-compiler (0.6.0) RedCloth (4.1.1) reek (1.2.6) relevance-rcov (0.9.2.1) rmagick (2.12.2) roodi (2.1.0) rsl-stringex (1.0.3) rspec (1.2.9) rspec-rails (1.2.9) ruby-debug (0.10.3) ruby-debug-base (0.10.3) ruby-openid (2.1.2) ruby-yadis (0.3.4) ruby2ruby (1.2.4) ruby_parser (2.0.4) rubyforge (2.0.3) rubygems-update (1.3.6, 1.3.5) rubynode (0.1.5) searchlogic (2.3.9) sexp_processor (3.0.3) spree (0.9.4) sqlite3-ruby (1.2.5, 1.2.4) termios (0.9.4) test-unit (2.0.5) text-format (1.0.0) text-hyphen (1.0.0) thor (0.13.0) tlsmail (0.0.1) topfunky-gruff (0.3.5) treetop (1.4.3) tzinfo (0.3.16) xmpp4r (0.4)

  • How do I install PyYAML into a local install of Python?

    - by Daryl Spitzer
    I've installed Python 2.6.4 into (a subdirectory in) my home directory on a Linux machine with Python 2.3.4 pre-installed, because I need to run some code that I've decided would require too much work to make it run on 2.3.4. (I'm not on the sudoers list for that machine.)

    I was hoping I could run ~/Python-2.6.4/python setup.py install (from the PyYAML directory in my home directory, where I untarred the PyYAML sources) and it would be smart enough to install it into my local Python 2.6.4 install. But it's not. (See the P.S.)

    Is it possible to install PyYAML into my local Python install, so that "import yaml" will work when I invoke that Python? If so, how do I do that?

    P.S. Here's the output when I ran ~/Python-2.6.4/python setup.py install:

    running install
    running build
    running build_py
    creating build/lib.linux-ppc64-2.6
    creating build/lib.linux-ppc64-2.6/yaml
    copying lib/yaml/composer.py -> build/lib.linux-ppc64-2.6/yaml
    copying lib/yaml/nodes.py -> build/lib.linux-ppc64-2.6/yaml
    copying lib/yaml/dumper.py -> build/lib.linux-ppc64-2.6/yaml
    copying lib/yaml/resolver.py -> build/lib.linux-ppc64-2.6/yaml
    copying lib/yaml/events.py -> build/lib.linux-ppc64-2.6/yaml
    copying lib/yaml/emitter.py -> build/lib.linux-ppc64-2.6/yaml
    copying lib/yaml/error.py -> build/lib.linux-ppc64-2.6/yaml
    copying lib/yaml/loader.py -> build/lib.linux-ppc64-2.6/yaml
    copying lib/yaml/cyaml.py -> build/lib.linux-ppc64-2.6/yaml
    copying lib/yaml/scanner.py -> build/lib.linux-ppc64-2.6/yaml
    copying lib/yaml/__init__.py -> build/lib.linux-ppc64-2.6/yaml
    copying lib/yaml/serializer.py -> build/lib.linux-ppc64-2.6/yaml
    copying lib/yaml/reader.py -> build/lib.linux-ppc64-2.6/yaml
    copying lib/yaml/representer.py -> build/lib.linux-ppc64-2.6/yaml
    copying lib/yaml/constructor.py -> build/lib.linux-ppc64-2.6/yaml
    copying lib/yaml/tokens.py -> build/lib.linux-ppc64-2.6/yaml
    copying lib/yaml/parser.py -> build/lib.linux-ppc64-2.6/yaml
    running build_ext
    creating build/temp.linux-ppc64-2.6
    checking if libyaml is compilable
    gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/dspitzer/Python-2.6.4/Include -I/home/dspitzer/Python-2.6.4 -c build/temp.linux-ppc64-2.6/check_libyaml.c -o build/temp.linux-ppc64-2.6/check_libyaml.o
    build/temp.linux-ppc64-2.6/check_libyaml.c:2:18: yaml.h: No such file or directory
    build/temp.linux-ppc64-2.6/check_libyaml.c: In function `main':
    build/temp.linux-ppc64-2.6/check_libyaml.c:5: error: `yaml_parser_t' undeclared (first use in this function)
    build/temp.linux-ppc64-2.6/check_libyaml.c:5: error: (Each undeclared identifier is reported only once
    build/temp.linux-ppc64-2.6/check_libyaml.c:5: error: for each function it appears in.)
    build/temp.linux-ppc64-2.6/check_libyaml.c:5: error: syntax error before "parser"
    build/temp.linux-ppc64-2.6/check_libyaml.c:6: error: `yaml_emitter_t' undeclared (first use in this function)
    build/temp.linux-ppc64-2.6/check_libyaml.c:8: warning: implicit declaration of function `yaml_parser_initialize'
    build/temp.linux-ppc64-2.6/check_libyaml.c:8: error: `parser' undeclared (first use in this function)
    build/temp.linux-ppc64-2.6/check_libyaml.c:9: warning: implicit declaration of function `yaml_parser_delete'
    build/temp.linux-ppc64-2.6/check_libyaml.c:11: warning: implicit declaration of function `yaml_emitter_initialize'
    build/temp.linux-ppc64-2.6/check_libyaml.c:11: error: `emitter' undeclared (first use in this function)
    build/temp.linux-ppc64-2.6/check_libyaml.c:12: warning: implicit declaration of function `yaml_emitter_delete'
    libyaml is not found or a compiler error: forcing --without-libyaml (if libyaml is installed correctly, you may need to specify the option --include-dirs or uncomment and modify the parameter include_dirs in setup.cfg)
    running install_lib
    creating /usr/local/lib/python2.6
    error: could not create '/usr/local/lib/python2.6': Permission denied

  • Error when installing AppFabric 1.1 on Server 2012 64bit

    - by no9
    I am trying to install AppFabric 1.1 on 64-bit Windows Server 2012 R2:

    - All updates have been installed and updates are turned ON
    - .NET Framework 4.0 is installed
    - .NET Framework 3.5 is installed
    - IIS is installed
    - Windows PowerShell 3.0 should already be included in Server 2012

    I am getting the following error:

    2014-03-21 11:02:34, Information Setup ===== Logging started: 2014-03-21 11:02:34+01:00 =====
    2014-03-21 11:02:34, Information Setup File: c:\6c4006b0b3f6dee1bf616f1967\setup.exe
    2014-03-21 11:02:34, Information Setup InternalName: Setup.exe
    2014-03-21 11:02:34, Information Setup OriginalFilename: Setup.exe
    2014-03-21 11:02:34, Information Setup FileVersion: 1.1.2106.32
    2014-03-21 11:02:34, Information Setup FileDescription: Setup.exe
    2014-03-21 11:02:34, Information Setup Product: Microsoft(R) Windows(R) Server AppFabric
    2014-03-21 11:02:34, Information Setup ProductVersion: 1.1.2106.32
    2014-03-21 11:02:34, Information Setup Debug: False
    2014-03-21 11:02:34, Information Setup Patched: False
    2014-03-21 11:02:34, Information Setup PreRelease: False
    2014-03-21 11:02:34, Information Setup PrivateBuild: False
    2014-03-21 11:02:34, Information Setup SpecialBuild: False
    2014-03-21 11:02:34, Information Setup Language: Language Neutral
    2014-03-21 11:02:34, Information Setup
    2014-03-21 11:02:34, Information Setup OS Name: Windows Server 2012 R2 Standard
    2014-03-21 11:02:34, Information Setup OS Edition: ServerStandard
    2014-03-21 11:02:34, Information Setup OSVersion: Microsoft Windows NT 6.2.9200.0
    2014-03-21 11:02:34, Information Setup CurrentCulture: sl-SI
    2014-03-21 11:02:34, Information Setup Processor Architecture: AMD64
    2014-03-21 11:02:34, Information Setup Event Registration Source : AppFabric_Setup
    2014-03-21 11:02:34, Information Setup
    2014-03-21 11:02:34, Information Setup Microsoft.ApplicationServer.Setup.Upgrade.V1UpgradeSetupModule : Initiating V1.0 Upgrade module.
    2014-03-21 11:02:34, Information Setup Microsoft.ApplicationServer.Setup.Upgrade.V1UpgradeSetupModule : V1.0 is not installed.
    2014-03-21 11:02:54, Information Setup Microsoft.ApplicationServer.Setup.Upgrade.V1UpgradeSetupModule : Initiating V1 Upgrade pre-install.
    2014-03-21 11:02:54, Information Setup Microsoft.ApplicationServer.Setup.Upgrade.V1UpgradeSetupModule : V1.0 is not installed, not taking backup.
    2014-03-21 11:02:55, Information Setup Executing C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe with commandline -iru.
    2014-03-21 11:02:55, Information Setup Return code from aspnet_regiis.exe is 0
    2014-03-21 11:02:55, Information Setup Process.Start: C:\Windows\system32\msiexec.exe /quiet /norestart /i "c:\6c4006b0b3f6dee1bf616f1967\Microsoft CCR and DSS Runtime 2008 R3.msi" /l*vx "C:\Users\Administrator\AppData\Local\Temp\AppServerSetup1_1(2014-03-21 11-02-55).log"
    2014-03-21 11:02:57, Information Setup Process.ExitCode: 0x00000000
    2014-03-21 11:02:57, Information Setup Windows features successfully enabled.
    2014-03-21 11:02:57, Information Setup Process.Start: C:\Windows\system32\msiexec.exe /quiet /norestart /i "c:\6c4006b0b3f6dee1bf616f1967\Packages\AppFabric-1.1-for-Windows-Server-64.msi" ADDDEFAULT=Worker,WorkerAdmin,CacheService,CacheAdmin,Setup /l*vx "C:\Users\Administrator\AppData\Local\Temp\AppServerSetup1_1(2014-03-21 11-02-57).log" LOGFILE="C:\Users\Administrator\AppData\Local\Temp\AppServerSetup1_1_CustomActions(2014-03-21 11-02-57).log" INSTALLDIR="C:\Program Files\AppFabric 1.1 for Windows Server" LANGID=en-US
    2014-03-21 11:03:45, Information Setup Process.ExitCode: 0x00000643
    2014-03-21 11:03:45, Error Setup AppFabric installation failed because installer MSI returned with error code : 1603
    2014-03-21 11:03:45, Error Setup
    2014-03-21 11:03:45, Error Setup AppFabric installation failed because installer MSI returned with error code : 1603
    2014-03-21 11:03:45, Error Setup
    2014-03-21 11:03:45, Information Setup Microsoft.ApplicationServer.Setup.Core.SetupException: AppFabric installation failed because installer MSI returned with error code : 1603
    2014-03-21 11:03:45, Information Setup at Microsoft.ApplicationServer.Setup.Installer.WindowsInstallerProxy.GenerateAndThrowSetupException(Int32 exitCode, LogEventSource logEventSource)
    2014-03-21 11:03:45, Information Setup at Microsoft.ApplicationServer.Setup.Installer.WindowsInstallerProxy.Invoke(LogEventSource logEventSource, InstallMode installMode, String packageIdentity, List`1 updateList, List`1 customArguments)
    2014-03-21 11:03:45, Information Setup at Microsoft.ApplicationServer.Setup.Installer.MsiInstaller.InstallSelectedFeatures()
    2014-03-21 11:03:45, Information Setup at Microsoft.ApplicationServer.Setup.Installer.MsiInstaller.Install()
    2014-03-21 11:03:45, Information Setup at Microsoft.ApplicationServer.Setup.Client.ProgressPage.StartAction()
    2014-03-21 11:03:45, Information Setup
    2014-03-21 11:03:45, Information Setup === Summary of Actions ===
    2014-03-21 11:03:45, Information Setup Required Windows components : Completed Successfully
    2014-03-21 11:03:45, Information Setup IIS Management Console : Completed Successfully
    2014-03-21 11:03:45, Information Setup Microsoft CCR and DSS Runtime 2008 R3 : Completed Successfully
    2014-03-21 11:03:45, Information Setup AppFabric 1.1 for Windows Server : Failed
    2014-03-21 11:03:45, Information Setup Hosting Services : Failed
    2014-03-21 11:03:45, Information Setup Caching Services : Failed
    2014-03-21 11:03:45, Information Setup Hosting Administration : Failed
    2014-03-21 11:03:45, Information Setup Cache Administration : Failed
    2014-03-21 11:03:45, Information Setup Microsoft Update : Skipped
    2014-03-21 11:03:45, Information Setup Microsoft Update : Skipped
    2014-03-21 11:03:45, Information Setup
    2014-03-21 11:03:45, Information Setup ===== Logging stopped: 2014-03-21 11:03:45+01:00 =====

    I have tried this solution but no success: http://stackoverflow.com/questions/11205927/appfabric-installation-failed-because-installer-msi-returned-with-error-code-1

    My system environment variable PSModulesPath has this value: C:\Windows\System32\WindowsPowerShell\v1.0\Modules

    I have also followed this link with no success: http://jefferytay.wordpress.com/2013/12/11/installing-appfabric-on-windows-server-2012/

    Any help would be greatly appreciated!

  • Delaying NIS & NFS startup till after network interface is fully ready on Fedora 17

    - by obmarg
    I've recently set up a Fedora 17 server for our network, and I've been having trouble getting the NIS service to work on startup. Here are some logs from the system:

    Aug 21 12:57:12 cairnwell ypbind-pre-setdomain[718]: Setting NIS domain: 'indigo-nis' (environment variable)
    Aug 21 12:57:13 cairnwell ypbind: Binding NIS service
    Aug 21 12:57:13 cairnwell rpc.statd[730]: Unable to prune capability 0 from bounding set: Operation not permitted
    Aug 21 12:57:13 cairnwell systemd[1]: nfs-lock.service: control process exited, code=exited status=1
    Aug 21 12:57:13 cairnwell systemd[1]: Unit nfs-lock.service entered failed state.
    Aug 21 12:57:14 cairnwell setroubleshoot: SELinux is preventing /usr/sbin/rpc.statd from using the setpcap capability. For complete SELinux messages. run sealert -l 024bba8a-b0ef-43dc-b195-5c9a2d4c4d41
    Aug 21 12:57:15 cairnwell kernel: [ 18.822282] bnx2 0000:02:00.0: em1: NIC Copper Link is Up, 1000 Mbps full duplex
    Aug 21 12:57:15 cairnwell kernel: [ 18.822925] ADDRCONF(NETDEV_CHANGE): em1: link becomes ready
    Aug 21 12:57:15 cairnwell NetworkManager[621]: <info> (em1): carrier now ON (device state 20)
    Aug 21 12:57:15 cairnwell NetworkManager[621]: <info> (em1): device state change: unavailable -> disconnected (reason 'carrier-changed') [20 30 40]
    Aug 21 12:57:15 cairnwell NetworkManager[621]: <info> Auto-activating connection 'System em1'.
    Aug 21 12:57:15 cairnwell NetworkManager[621]: <info> Activation (em1) starting connection 'System em1'
    Aug 21 12:57:15 cairnwell NetworkManager[621]: <info> (em1): device state change: disconnected -> prepare (reason 'none') [30 40 0]
    .......
    Aug 21 12:57:19 cairnwell sendmail[790]: YPBINDPROC_DOMAIN: Domain not bound
    Aug 21 12:57:26 cairnwell sendmail[790]: YPBINDPROC_DOMAIN: Domain not bound
    Aug 21 12:57:31 cairnwell sendmail[790]: YPBINDPROC_DOMAIN: Domain not bound
    Aug 21 12:57:35 cairnwell sendmail[790]: YPBINDPROC_DOMAIN: Domain not bound
    Aug 21 12:58:00 cairnwell ypbind: Binding took 47 seconds
    Aug 21 12:58:00 cairnwell ypbind: NIS server for domain indigo-nis is not responding.
    Aug 21 12:58:01 cairnwell ypbind: Killing ypbind with PID 733.
    Aug 21 12:58:01 cairnwell ypbind-post-waitbind[734]: /usr/lib/ypbind/ypbind-post-waitbind: line 51: kill: SIGTERM: invalid signal specification
    Aug 21 12:58:01 cairnwell systemd[1]: ypbind.service: control process exited, code=exited status=1
    Aug 21 12:58:01 cairnwell systemd[1]: Unit ypbind.service entered failed state.

    By the looks of these logs, the ypbind service is starting up at 12:57:12 but the network interface isn't coming up till 12:57:15. My guess is that this is causing ypbind to time out when trying to connect. As a knock-on effect, the NIS failure is causing problems with NFS, which is no longer able to map UIDs properly. This problem doesn't seem to be fixed by actually starting ypbind etc., so I've had to set all my NFS shares to noauto.

    I have tried manually adding NETWORKDELAY and NETWORKWAIT in /etc/sysconfig/network and also running systemctl enable NetworkManager-wait-online.service as I've seen suggested in some places, but neither of these has had any effect.

    It is relatively easy to fix manually by restarting ypbind & mounting NFS shares after the network has started up, but it's less than ideal to have to do this every time the server has been rebooted. Does anyone know of an easy (and preferably hack-free) way of delaying the ypbind startup till after the network interface is fully ready?

  • Ubuntu wifi disconnection & frustratingly connects to unavailable wifi

    - by ashishsony
    Hi, I have already posted this here: here.

    This has happened before with the Ubuntu 9.10 Beta 2 build too: my wifi disconnects if I'm idle for 5 minutes, so I can't leave my laptop to download anything; I have to keep on continuously using it. As soon as I leave it idle for about 5 minutes, the wifi disconnects and the pop-up asking for the wifi password appears, with the password already filled in. I just click on connect and it connects again. So what's the use of asking for the password if the pre-filled password works correctly? And this is happening on Ubuntu 10.04 Beta 2 too. The workaround is to open any menu, like the Applications menu in the taskbar, and keep it open; in this state the Ubuntu idle detection never activates and so the wifi never gets disconnected. I have confirmed this many times. This seems to keep repeating and I don't know why.

    The second thing I want to report is that there is no clear way to report this bug to Ubuntu. launchpad.net talks of going through a bug reporting process which is done against a definite package, but how does a user know which package is causing this error? There should be a clearer process for reporting such bugs to the Ubuntu team.

    Thirdly, the apport utility that reports crashing apps is totally useless on 10.04 Beta 2, as it collects information and then reports that I can't submit the report because I don't have 100 other packages, without updating which I can't submit the report. Surely on a beta build packages are continuously being updated, so no system would ever be reported as fully updated, and so no practical apport reporting is possible? Please address these issues; it's really frustrating. I'm a big fan of Ubuntu but these things really bug me.

    And just to add, fourthly: the suspend/hibernate feature has never ever worked on my Toshiba M70-113 laptop, on any Ubuntu version. I always have to hard reboot after putting it into suspend/hibernate mode. On Windows this has never been the case; why can't Ubuntu beat Windows in such cases too? I would really like to see this soon.

    Most importantly, when the router switches off and the wifi signal goes away, why does Ubuntu keep trying to connect to that very network, and when it can't connect, show the prompt to manually connect, with the wifi key already filled in? What's the use of saving the key when it still has to ask me whether to connect or not? And if the network isn't available, it should just wait until it is available; I only have the option to cancel, and if I cancel it won't auto-connect! One can see in the image that it says "authentication required by wireless network" when there isn't any, as the router has gone down!

  • Squid w/ SquidGuard fails w/ "Too few redirector processes are running"

    - by DKNUCKLES
    I'm trying to implement a Squid proxy in a quick and easy fashion and I'm receiving some errors I have been unable to resolve. The box is a pre-made appliance, however it seems to fail on launch.The following is the cache.log file when I attempt to launch the squid service. 2012/11/18 22:14:29| Starting Squid Cache version 3.0.STABLE20-20091201 for i686 -pc-linux-gnu... 2012/11/18 22:14:29| Process ID 12647 2012/11/18 22:14:29| With 1024 file descriptors available 2012/11/18 22:14:29| Performing DNS Tests... 2012/11/18 22:14:29| Successful DNS name lookup tests... 2012/11/18 22:14:29| DNS Socket created at 0.0.0.0, port 40513, FD 8 2012/11/18 22:14:29| Adding nameserver 192.168.0.78 from /etc/resolv.conf 2012/11/18 22:14:29| Adding nameserver 8.8.8.8 from /etc/resolv.conf 2012/11/18 22:14:29| helperOpenServers: Starting 5/5 'bin' processes 2012/11/18 22:14:29| ipcCreate: /opt/squidguard/bin: (13) Permission denied 2012/11/18 22:14:29| ipcCreate: /opt/squidguard/bin: (13) Permission denied 2012/11/18 22:14:29| ipcCreate: /opt/squidguard/bin: (13) Permission denied 2012/11/18 22:14:29| ipcCreate: /opt/squidguard/bin: (13) Permission denied 2012/11/18 22:14:29| ipcCreate: /opt/squidguard/bin: (13) Permission denied 2012/11/18 22:14:29| helperOpenServers: Starting 5/5 'squid-auth.pl' processes 2012/11/18 22:14:29| User-Agent logging is disabled. 2012/11/18 22:14:29| Referer logging is disabled. 2012/11/18 22:14:29| Unlinkd pipe opened on FD 23 2012/11/18 22:14:29| Swap maxSize 10240000 + 8192 KB, estimated 788322 objects 2012/11/18 22:14:29| Target number of buckets: 39416 2012/11/18 22:14:29| Using 65536 Store buckets 2012/11/18 22:14:29| Max Mem size: 8192 KB 2012/11/18 22:14:29| Max Swap size: 10240000 KB 2012/11/18 22:14:29| Version 1 of swap file with LFS support detected... 2012/11/18 22:14:29| Rebuilding storage in /opt/squid3/var/cache (DIRTY) 2012/11/18 22:14:29| Using Least Load store dir selection 2012/11/18 22:14:29| Set Current Directory to /opt/squid3/var/cache 2012/11/18 22:14:29| Loaded Icons. 2012/11/18 22:14:29| Accepting HTTP connections at 10.0.0.6, port 3128, FD 25. 2012/11/18 22:14:29| Accepting ICP messages at 0.0.0.0, port 3130, FD 26. 2012/11/18 22:14:29| HTCP Disabled. 2012/11/18 22:14:29| Ready to serve requests. 2012/11/18 22:14:29| Done reading /opt/squid3/var/cache swaplog (0 entries) 2012/11/18 22:14:29| Finished rebuilding storage from disk. 2012/11/18 22:14:29| 0 Entries scanned 2012/11/18 22:14:29| 0 Invalid entries. 2012/11/18 22:14:29| 0 With invalid flags. 2012/11/18 22:14:29| 0 Objects loaded. 2012/11/18 22:14:29| 0 Objects expired. 2012/11/18 22:14:29| 0 Objects cancelled. 2012/11/18 22:14:29| 0 Duplicate URLs purged. 2012/11/18 22:14:29| 0 Swapfile clashes avoided. 2012/11/18 22:14:29| Took 0.02 seconds ( 0.00 objects/sec). 2012/11/18 22:14:29| Beginning Validation Procedure 2012/11/18 22:14:29| WARNING: redirector #1 (FD 9) exited 2012/11/18 22:14:29| WARNING: redirector #2 (FD 10) exited 2012/11/18 22:14:29| WARNING: redirector #3 (FD 11) exited 2012/11/18 22:14:29| WARNING: redirector #4 (FD 12) exited 2012/11/18 22:14:29| Too few redirector processes are running FATAL: The redirector helpers are crashing too rapidly, need help! Squid Cache (Version 3.0.STABLE20-20091201): Terminated abnormally. 
CPU Usage: 0.112 seconds = 0.032 user + 0.080 sys Maximum Resident Size: 0 KB Page faults with physical i/o: 0 Memory usage for squid via mallinfo(): total space in arena: 2944 KB Ordinary blocks: 2857 KB 6 blks Small blocks: 0 KB 0 blks Holding blocks: 1772 KB 8 blks Free Small blocks: 0 KB Free Ordinary blocks: 86 KB Total in use: 4629 KB 157% Total free: 86 KB 3% The "permission denied" area is where I have been focusing my attention with no luck. The following is what I've tried. Chmod'ing the /opt/squidguard/bin folder to 777 Changing the user that squidguard runs under to root / nobody / www-data / squid3 Tried changing ownership of the /opt/squidguard/bin folder to all names listed above after assigning that user to run with squid. Any help with this would be greatly appreciated.
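    Two things stand out in that log. First, Squid reports starting 5/5 'bin' processes, which suggests the redirector directive may point at the directory /opt/squidguard/bin rather than at the squidGuard binary inside it; trying to exec a directory fails with exactly this (13) Permission denied. Second, the helper can be run by hand, outside Squid, so its own error output becomes visible. A rough sketch, where the binary path, config path, and the squid run-as user are assumptions to be adjusted to the appliance's layout:

        # feed squidGuard one redirector-format request on stdin; -d logs to stderr
        echo "http://www.example.com/ 10.0.0.1/- - GET" | \
            sudo -u squid /opt/squidguard/bin/squidGuard -d -c /opt/squidguard/squidGuard.conf
        # if SELinux is enforcing, chmod/chown alone will not help; check contexts too
        sestatus && ls -Z /opt/squidguard/bin

    An empty line back means "no rewrite"; anything on stderr (often an unreadable blacklist database rather than the binary itself) points at the real failure.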

    Read the article

  • RHEL - NFS4: Mounted/Exported as rw, user write permission denied

    - by brendanmac
    Hello, I have nfs4 configured between a RHEL 5.3 server (charlie) and a RHEL 5.4 client (simcom1). The machines are configured to authenticate users via kerberos by a Windows Server 2008 active directory machine called "alpha." Alpha also serves as a dns and dhcp machine for the local network. I notice that when a user logs in to a RHEL machine for the first time they are issued a unique uid to that machine; The first user to log on gets 10001. So, what I see is that users between simcom1 and charlie have different UIDs. When a user does an 'ls -la' command from within an nfs4 mount I would have thought that the usernames in the owner column would indicate 'nobody' or at least the wrong user name - since UIDs are different between the machines for each user, and not all users have logged into each machine. However, the simcom1 is able to resolve usernames in an 'ls -la' executed on files residing on charlie via nfs4 correctly. Most troubling is that users are unable to write to files across the nfs mount. The server, charlie, has the root directory exported as rw. The client, simcom1, mounts the export as rw. My configurations are shown below. My question is, how do I configure the RHEL machines to allow users to write files across nfs4 that is already mounted as read/write? [root@charlie ~]# more /etc/exports / 10.100.0.0/16(rw,no_root_squash,fsid=0) [root@charlie ~]#cat /etc/sysconfig/nfs # # Define which protocol versions mountd # will advertise. The values are "no" or "yes" # with yes being the default #MOUNTD_NFS_V1="no" #MOUNTD_NFS_V2="no" #MOUNTD_NFS_V3="no" # # # Path to remote quota server. See rquotad(8) #RQUOTAD="/usr/sbin/rpc.rquotad" # Port rquotad should listen on. #RQUOTAD_PORT=875 # Optinal options passed to rquotad #RPCRQUOTADOPTS="" # # # TCP port rpc.lockd should listen on. #LOCKD_TCPPORT=32803 # UDP port rpc.lockd should listen on. #LOCKD_UDPPORT=32769 # # # Optional arguments passed to rpc.nfsd. See rpc.nfsd(8) # Turn off v2 and v3 protocol support #RPCNFSDARGS="-N 2 -N 3" # Turn off v4 protocol support #RPCNFSDARGS="-N 4" # Number of nfs server processes to be started. # The default is 8. RPCNFSDCOUNT=8 # Stop the nfsd module from being pre-loaded #NFSD_MODULE="noload" # # # Optional arguments passed to rpc.mountd. See rpc.mountd(8) #STATDARG="" #RPCMOUNTDOPTS="" # Port rpc.mountd should listen on. #MOUNTD_PORT=892 # # # Optional arguments passed to rpc.statd. See rpc.statd(8) #RPCIDMAPDARGS="" # # Set to turn on Secure NFS mounts. SECURE_NFS="no" # Optional arguments passed to rpc.gssd. See rpc.gssd(8) #RPCGSSDARGS="-vvv" # Optional arguments passed to rpc.svcgssd. See rpc.svcgssd(8) #RPCSVCGSSDARGS="-vvv" # Don't load security modules in to the kernel #SECURE_NFS_MODS="noload" # # Don't load sunrpc module. #RPCMTAB="noload" # [root@simcom1 ~]# cat /etc/fstab --start snip-- charlie:/home /usr/local/dev/charlie nfs4 rw,nosuid, 0 0 --end snip-- [brendanmac@simcom1 /usr/local/dev/charlie/brendanmac]# touch file touch: cannot touch 'file': Permission denied [brendanmac@simcom1 /usr/local/dev/charlie/brendanmac]# su Password: [root@simcom1 /usr/local/dev/charlie/brendanmac]# touch file [root@simcom1 /usr/local/dev/charlie/brendanmac]# ls -la file -rw------- 1 root root 0 May 26 10:43 file Thank you for your assistance, Brendan
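    Since NFSv4 maps owners by name through rpc.idmapd, ls -la can resolve usernames correctly even though the numeric UIDs differ between the machines; write access, however, is still checked against the UID the server stores on disk, which is why the mismatched UIDs matter. A minimal sketch of the first checks to run on both hosts (the username is a placeholder):

        # the UIDs must agree between client and server for permission checks to pass
        ssh charlie id brendanmac
        ssh simcom1 id brendanmac
        # both hosts must also agree on the idmap domain, or owners fall back to nobody
        grep -i '^Domain' /etc/idmapd.conf
        service rpcidmapd status

    Making the UIDs match (ideally by sourcing them centrally from the directory) is usually the actual fix; the rw export only controls what the mount allows, not per-file ownership.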

    Read the article

  • WSUS 3.0 SP2 installation fails at "configuring database" step.

    - by flashkube
    Attempting to install WSUS 3.0 SP2 on a Windows Server 2003 Enterprise system. I'm asking the setup to create a new database on one of our existing SQL Server 2005 systems. When the setup gets to the "configuring database" step it stops and throws "There is a problem with this Windows Installer package. A program run as part of the setup did not finish as expected. Contact your support personnel or package vendor." The two logs it suggests I look at are below. I'm not seeing any errors that mean anything to me. Any direction you can give will be greatly appreciated. WSUSSetup.log: 2009-12-04 15:26:21 Success MWUSSetup Validating pre-requisites... 2009-12-04 15:26:22 Error MWUSSetup Failed to determine if an higher version of WSUS is installed. Assuming it is not... (Error 0x80070002: The system cannot find the file specified.) 2009-12-04 15:26:28 Success MWUSSetup No SQL instances found 2009-12-04 15:26:42 Success MWUSSetup Initializing installation details 2009-12-04 15:26:42 Success MWUSSetup Installing ASP.Net 2009-12-04 15:27:24 Success MWUSSetup ASP.Net is installed successfully 2009-12-04 15:27:24 Success MWUSSetup Installing WSUS... 2009-12-04 15:27:28 Success CustomActions.Dll Unable to get INSTALL_LANGUAGE property, calculating it... 2009-12-04 15:27:28 Success CustomActions.Dll Successfully set propery of WSUS admin groups' full names 2009-12-04 15:27:29 Success CustomActions.Dll .Net framework path: C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727 2009-12-04 15:27:33 Success CustomActions.Dll Creating user group: WSUS Reporters with Description: WSUS Administrators who can only run reports on the Windows Server Update Services server. 2009-12-04 15:27:33 Success CustomActions.Dll Creating WSUS Reporters user group 2009-12-04 15:27:33 Success CustomActions.Dll WSUS Reporters user group already exists 2009-12-04 15:27:33 Success CustomActions.Dll Successfully created WSUS Reporters user group 2009-12-04 15:27:33 Success CustomActions.Dll Creating user group: WSUS Administrators with Description: WSUS Administrators can administer the Windows Server Update Services server. 2009-12-04 15:27:33 Success CustomActions.Dll Creating WSUS Administrators user group 2009-12-04 15:27:33 Success CustomActions.Dll WSUS Administrators user group already exists 2009-12-04 15:27:33 Success CustomActions.Dll Successfully created WSUS Administrators user group 2009-12-04 15:27:33 Success CustomActions.Dll Successfully created WSUS user groups 2009-12-04 15:27:33 Success CustomActions.Dll Succesfully set binary SID property 2009-12-04 15:27:33 Success CustomActions.Dll Succesfully set binary SID property 2009-12-04 15:27:33 Success CustomActions.Dll Successfully set binary SID properties 2009-12-04 15:28:50 Error MWUSSetup InstallWsus: MWUS Installation Failed (Error 0x80070643: Fatal error during installation.) 2009-12-04 15:28:50 Error MWUSSetup CInstallDriver::PerformSetup: WSUS installation failed (Error 0x80070643: Fatal error during installation.) 2009-12-04 15:28:50 Error MWUSSetup CSetupDriver::LaunchSetup: Setup failed (Error 0x80070643: Fatal error during installation.) From the end of WSUSSetupmsi_091204_1527.log MSI (s) (58:7C) [15:28:49:860]: Note: 1: 1708 MSI (s) (58:7C) [15:28:49:860]: Product: Windows Server Update Services 3.0 SP2 -- Installation failed. MSI (s) (58:7C) [15:28:49:875]: Cleaning up uninstalled install packages, if any exist MSI (s) (58:7C) [15:28:49:875]: MainEngineThread is returning 1603 MSI (s) (58:78) [15:28:49:985]: Destroying RemoteAPI object. 
MSI (s) (58:90) [15:28:49:985]: Custom Action Manager thread ending. === Logging stopped: 12/4/2009 15:28:49 === MSI (c) (30:54) [15:28:50:016]: Decrementing counter to disable shutdown. If counter = 0, shutdown will be denied. Counter after decrement: -1 MSI (c) (30:54) [15:28:50:016]: MainEngineThread is returning 1603 === Verbose logging stopped: 12/4/2009 15:28:50 ===

    Read the article

  • Users suddenly missing write permissions to the root drive c within an active directory domain

    - by Kevin
    I'm managing an Active Directory single-domain environment on some Windows Server 2008, Windows Server 2008 R2 and Windows Server 2012 machines. A few weeks ago I ran into a strange issue. Some users (not all!) report that they can no longer save, copy or write files to the root drive C:, neither on their clients (Vista, Win 7) nor via a remote desktop connection to a Windows Server 2008 machine. Since then, even running programs that require direct write permission to the root drive fails when they are started without administrator permissions. The affected users have local administrator permissions. The question I'm facing now is: what caused this change in system behavior? Why did this happen? I haven't found out yet. What was the last thing I did before it happened? The last action was the rollout of a GPO containing network drive mappings for the users, depending on their security group membership. All network drives are located on a Linux server with Samba enabled. We did not change any UAC settings, and they have always been activated. However, I can't imagine that rolling out this GPO caused the problem. Has anybody faced an issue like this? Just in case: I know that there is a specific reason why a user without administrative privileges is prevented from writing to the root drive, ever since Windows Vista and the introduction of UAC. I don't think those users should be able to write to drive C:, but I am trying to figure out why this is happening now when it still worked a few weeks ago. I also know that a user who is a member of the local Administrators group does not execute anything with administrator permissions by default, unless he or she runs a program with those permissions. What have I done so far? I checked the permissions of the affected programs and the affected clients/servers; I didn't find anything special. I checked ALL of our GPOs for any restrictions that could prevent the affected users from writing to the root drive; I did not find any such settings. I checked the UAC settings of the affected users and compared them to those of users who can still write to the root drive; everything is similar. I searched the internet for someone who had a similar problem and did not find one. Has anybody an idea? Thank you very much. Edit: The GPO that was rolled out does the following (please excuse if the settings are not named exactly like this; I translated them into English): **Windows Settings -- Network Drive Mappings -- Drive N: -- General:** Action: Replace **Properties:** Letter: N Location: \\path-to-drive\drivename Re-Establish connection: deactivated Label as: Name_of_the_Share Use first available Option: deactivated **Windows Settings -- Network Drive Mappings -- Drive N: -- Public: Options:** On error don't process any further elements for this extension: no Run as the logged in user: no remove element if it is not applied anymore: no Only apply once: no **Securitygroup:** Attribute -- Value bool -- AND not -- 0 name -- domain\groupname sid -- sid-of-the-group userContext -- 1 primaryGroup -- 0 localGroup -- 0 **Securitygroup:** Attribute -- Value bool -- OR not -- 0 name -- domain\another-groupname sid -- sid-of-the-group userContext -- 1 primaryGroup -- 0 localGroup -- 0 Edit: The error message an affected user gets says the following: Due to an unexpected error you can't copy the file. Error code 0x80070522: The client is missing a required permission. 
    The command icacls C: shows the following: NT-AUTORITY\SYSTEM:(OI)(CI)(F) PRE-DEFINED\Administrators:(OI)(CI)(F) computername\username:(OI)(CI)(F) A colleague also just told me that the primary domain controller (PDC) was changed from Windows Server 2008 to Windows Server 2012. That may also be a reason. Any suggestions?

    Read the article

  • Windows 7 BSOD - ntoskrnl?

    - by Ken Mason
    2 new HP Pavilion notebooks with 7 Home Premium pre-loaded with Norton. My first act was to use the Norton Removal Tool and load ZoneAlarm free and AVG Free. Frequent random BSOD's ever since...I found my way into Debug and have had various reports regarding ntoskrnl, depending on the status of symbols. It's been many years since I played with (DOS 3.x) debug, so this has been a considerable fumble. Excerpts follow and any insights would be greatly appreciated, as I am not a developer: ADDITIONAL_DEBUG_TEXT: Use '!findthebuild' command to search for the target build information. If the build information is available, run '!findthebuild -s ; .reload' to set symbol path and load symbols. MODULE_NAME: nt FAULTING_MODULE: fffff8000305d000 nt DEBUG_FLR_IMAGE_TIMESTAMP: 4b88cfeb BUGCHECK_STR: 0x7f_8 CUSTOMER_CRASH_COUNT: 1 DEFAULT_BUCKET_ID: VISTA_DRIVER_FAULT CURRENT_IRQL: 0 LAST_CONTROL_TRANSFER: from fffff800030ccb69 to fffff800030cd600 STACK_TEXT: fffff80004d6fd28 fffff800030ccb69 : 000000000000007f 0000000000000008 0000000080050033 00000000000006f8 : nt+0x70600 fffff80004d6fd30 000000000000007f : 0000000000000008 0000000080050033 00000000000006f8 fffff80003095e58 : nt+0x6fb69 fffff80004d6fd38 0000000000000008 : 0000000080050033 00000000000006f8 fffff80003095e58 0000000000000000 : 0x7f fffff80004d6fd40 0000000080050033 : 00000000000006f8 fffff80003095e58 0000000000000000 0000000000000000 : 0x8 fffff80004d6fd48 00000000000006f8 : fffff80003095e58 0000000000000000 0000000000000000 0000000000000000 : 0x80050033 fffff80004d6fd50 fffff80003095e58 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : 0x6f8 fffff80004d6fd58 0000000000000000 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt+0x38e58 STACK_COMMAND: kb FOLLOWUP_IP: nt+70600 fffff800`030cd600 48894c2408 mov qword ptr [rsp+8],rcx SYMBOL_STACK_INDEX: 0 SYMBOL_NAME: nt+70600 FOLLOWUP_NAME: MachineOwner IMAGE_NAME: ntoskrnl.exe BUCKET_ID: WRONG_SYMBOLS Followup: MachineOwner ...................................................................... 0: kd !lmi nt Loaded Module Info: [nt] Module: ntkrnlmp Base Address: fffff8000305d000 Image Name: ntkrnlmp.exe Machine Type: 34404 (X64) Time Stamp: 4b88cfeb Sat Feb 27 00:55:23 2010 Size: 5dc000 CheckSum: 545094 Characteristics: 22 perf Debug Data Dirs: Type Size VA Pointer CODEVIEW 25, 19c65c, 19bc5c RSDS - GUID: {7E9A3CAB-6268-45DE-8E10-816E3080A3B7} Age: 2, Pdb: ntkrnlmp.pdb CLSID 4, 19c658, 19bc58 [Data not mapped] Image Type: FILE - Image read successfully from debugger. ntkrnlmp.exe Symbol Type: PDB - Symbols loaded successfully from symbol server. d:\debugsymbols\ntkrnlmp.pdb\7E9A3CAB626845DE8E10816E3080A3B72\ntkrnlmp.pdb Load Report: public symbols , not source indexed d:\debugsymbols\ntkrnlmp.pdb\7E9A3CAB626845DE8E10816E3080A3B72\ntkrnlmp.pdb 0: kd !analyze -v * Bugcheck Analysis * * UNEXPECTED_KERNEL_MODE_TRAP (7f) This means a trap occurred in kernel mode, and it's a trap of a kind that the kernel isn't allowed to have/catch (bound trap) or that is always instant death (double fault). The first number in the bugcheck params is the number of the trap (8 = double fault, etc) Consult an Intel x86 family manual to learn more about what these traps are. Here is a portion of those codes: If kv shows a taskGate use .tss on the part before the colon, then kv. 
Else if kv shows a trapframe use .trap on that value Else .trap on the appropriate frame will show where the trap was taken (on x86, this will be the ebp that goes with the procedure KiTrap) Endif kb will then show the corrected stack. Arguments: Arg1: 0000000000000008, EXCEPTION_DOUBLE_FAULT Arg2: 0000000080050033 Arg3: 00000000000006f8 Arg4: fffff80003095e58 Debugging Details: BUGCHECK_STR: 0x7f_8 CUSTOMER_CRASH_COUNT: 1 DEFAULT_BUCKET_ID: VISTA_DRIVER_FAULT PROCESS_NAME: System CURRENT_IRQL: 2 LAST_CONTROL_TRANSFER: from fffff800030ccb69 to fffff800030cd600 STACK_TEXT: fffff80004d6fd28 fffff800030ccb69 : 000000000000007f 0000000000000008 0000000080050033 00000000000006f8 : nt!KeBugCheckEx fffff80004d6fd30 fffff800030cb032 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt!KiBugCheckDispatch+0x69 fffff80004d6fe70 fffff80003095e58 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt!KiDoubleFaultAbort+0xb2 fffff880089efc60 0000000000000000 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt!SeAccessCheckFromState+0x58 STACK_COMMAND: kb FOLLOWUP_IP: nt!KiDoubleFaultAbort+b2 fffff800`030cb032 90 nop SYMBOL_STACK_INDEX: 2 SYMBOL_NAME: nt!KiDoubleFaultAbort+b2 FOLLOWUP_NAME: MachineOwner MODULE_NAME: nt IMAGE_NAME: ntkrnlmp.exe DEBUG_FLR_IMAGE_TIMESTAMP: 4b88cfeb FAILURE_BUCKET_ID: X64_0x7f_8_nt!KiDoubleFaultAbort+b2 BUCKET_ID: X64_0x7f_8_nt!KiDoubleFaultAbort+b2 Followup: MachineOwner I tried running Rootkit Revealer but I don't think it works on x64 systems. Similarly Blacklight seems to have aged off. I'm running Sophos Anti-Rootkit now. So far so good...

    Read the article

  • How to export computers from Active Directory to XML using Powershell?

    - by CoDeRs
    I am trying to create a PowerShell script for Remote Desktop Connection Manager using the Active Directory module. My first thought was to get a list of computers in AD and parse them out into XML format similar to the OU structure in AD. I have no problem with that; the code below works, just not how I wanted. E.g.: # here is the array $OUs Americas/Canada/Canada Computers/Desktops Americas/Canada/Canada Computers/Laptops Americas/Canada/Canada Computers/Virtual Computers Americas/USA/USA Computers/Laptops Computers Disabled Accounts Domain Controllers EMEA/UK/UK Computers/Desktops EMEA/UK/UK Computers/Laptops Outside Sales and Service/Laptops Servers I wanted to have the basic XML structured like this Americas Canada Canada Computers Desktops Laptops Virtual Computers USA USA Computers Laptops Computers Disabled Accounts Domain Controllers EMEA UK UK Computers Desktops Laptops Outside Sales and Service Laptops Servers However, if you run the code below it does not nest the next string in the array; it only restarts from the beginning, duplicating the parent groups: Americas Canada Canada Computers Desktops Americas Canada Canada Computers Laptops Americas Canada Canada Computers Virtual Computers Americas USA USA Computers Laptops RDCMGenerator.ps1 #Importing Microsoft`s PowerShell-module for administering ActiveDirectory Import-Module ActiveDirectory #Initial variables $OUs = @() $RDCMVer = "2.2" $userName = "domain\username" $password = "Hashed Password+" $Path = "$env:temp\test.xml" $allComputers = Get-ADComputer -LDAPFilter "(OperatingSystem=*)" -Properties Name,Description,CanonicalName | Sort-Object CanonicalName | select Name,Description,CanonicalName $allOUObjects = $allComputers | Foreach {"$($_.CanonicalName)"} Function Initialize-XML{ ##<RDCMan schemaVersion="1"> $xmlWriter.WriteStartElement('RDCMan') $XmlWriter.WriteAttributeString('schemaVersion', '1') $xmlWriter.WriteElementString('version',$RDCMVer) $xmlWriter.WriteStartElement('file') $xmlWriter.WriteStartElement('properties') $xmlWriter.WriteElementString('name',$env:userdomain) $xmlWriter.WriteElementString('expanded','true') $xmlWriter.WriteElementString('comment','') $xmlWriter.WriteStartElement('logonCredentials') $XmlWriter.WriteAttributeString('inherit', 'None') $xmlWriter.WriteElementString('userName',$userName) $xmlWriter.WriteElementString('domain',$env:userdomain) $xmlWriter.WriteStartElement('password') $XmlWriter.WriteAttributeString('storeAsClearText', 'false') $XmlWriter.WriteRaw($password) $xmlWriter.WriteEndElement() $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('connectionSettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('gatewaySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('remoteDesktop') $XmlWriter.WriteAttributeString('inherit', 'None') $xmlWriter.WriteElementString('size','1024 x 768') $xmlWriter.WriteElementString('sameSizeAsClientArea','True') $xmlWriter.WriteElementString('fullScreen','False') $xmlWriter.WriteElementString('colorDepth','32') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('localResources') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('securitySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('displaySettings') $XmlWriter.WriteAttributeString('inherit', 
'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteEndElement() } Function Create-Group ($groupName){ #Start Group $xmlWriter.WriteStartElement('properties') $xmlWriter.WriteElementString('name',$groupName) $xmlWriter.WriteElementString('expanded','true') $xmlWriter.WriteElementString('comment','') $xmlWriter.WriteStartElement('logonCredentials') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('connectionSettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('gatewaySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('remoteDesktop') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('localResources') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('securitySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('displaySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteEndElement() } Function Create-Server ($computerName, $computerDescription) { #Start Server $xmlWriter.WriteStartElement('server') $xmlWriter.WriteElementString('name',$computerName) $xmlWriter.WriteElementString('displayName',$computerDescription) $xmlWriter.WriteElementString('comment','') $xmlWriter.WriteStartElement('logonCredentials') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('connectionSettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('gatewaySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('remoteDesktop') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('localResources') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('securitySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('displaySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteEndElement() #Stop Server } Function Close-XML { $xmlWriter.WriteEndElement() $xmlWriter.WriteEndElement() # finalize the document: $xmlWriter.Flush() $xmlWriter.Close() notepad $path } #Strip out Domain and Computer Name from CanonicalName foreach($OU in $allOUObjects){ $newSplit = $OU.split("/") $rebildOU = "" for($i=1; $i -le ($newSplit.count - 2); $i++){ $rebildOU += $newSplit[$i] + "/" } $OUs += $rebildOU.substring(0,($rebildOU.length - 1)) } #Remove Duplicate OU's $OUs = $OUs | select -uniq #$OUs # get an XMLTextWriter to create the XML $XmlWriter = New-Object System.XMl.XmlTextWriter($Path,$UTF8) # choose a pretty formatting: $xmlWriter.Formatting = 'Indented' $xmlWriter.Indentation = 1 $XmlWriter.IndentChar = "`t" # write the header $xmlWriter.WriteStartDocument() # # 'encoding', 'utf-8' How? 
# # set XSL statements #Initialize Pre-Defined XML Initialize-XML ######################################################### # Start Loop for each OU-Path that has a computer in it ######################################################### foreach ($OU in $OUs){ $totalGroupName = "" #Create / Reset Total OU-Path Completed $OU.split("/") | foreach { #Split the OU-Path into individual OU's $groupName = "$_" #Current OU $totalGroupName += $groupName + "/" #Total OU-Path Completed $xmlWriter.WriteStartElement('group') #Start new XML Group Create-Group $groupName #Call function to create XML Group ################################################ # Start Loop for each Computer in $allComputers ################################################ foreach($computer in $allComputers){ $computerOU = $computer.CanonicalName #Set the computers OU-Path $OUSplit = $computerOU.split("/") #Create the Split for the OU-Path $rebiltOU = "" #Create / Reset the stripped OU-Path for($i=1; $i -le ($OUSplit.count - 2); $i++){ #Start Loop for OU-Path to strip out the Domain and Computer Name $rebiltOU += $OUSplit[$i] + "/" #Rebuild the stripped OU-Path } if ($rebiltOU -eq $totalGroupName){ #Compare the Current OU-Path with the computers stripped OU-Path $computerName = $computer.Name #Set the computer name $computerDescription = $computerName + " - " + $computer.Description #Set the computer Description Create-Server $computerName $computerDescription #Call function to create XML Server } } } ################################################### # Start Loop to close out XML Groups created above ################################################### $totalGroupName.split("/") | foreach { #Split the if ($_ -ne "" ){ $xmlWriter.WriteEndElement() #End Group } } } Close-XML

    Read the article

  • How clean is deleting a computer object?

    - by Kevin
    Though quite skilled at software development, I'm a novice when it comes to Active Directory. I've noticed that AD seems to have a lot of stuff buried in the directory and schema which does not appear superficially when using simplified tools such as Active Directory Users and Computers. It kind of feels like the Windows registry, where COM classes have all kinds of intertwined references, many of which are purely by GUID, such that it's not enough to just search for anything referencing "GadgetXyz" by name in order to cleanly remove GadgetXyz. This occasionally leads to the uneasy feeling that I may have useless garbage building up in there which I have no idea how to weed out. For instance, I made the mistake a while back of trying to rename a DC, figuring I could just do it in the usual manner from Control Panel. I found references to the old name buried all over the place which made it impossible to reuse that name without considerable manual cleanup. Even long after I got it all working, I've stumbled upon the old name hidden away in LDAP. (There were no other DCs left in the picture at that time so I don't think it was a tombstone issue.) More specifically, I'm worried about the case of just outright deleting a computer from AD. I understand the cleanest way to do it is to log into the computer itself and tell it to leave the domain. (As an aside, doing this in Windows 8 seems to only disable the computer object and not delete it outright!) My concern is cases where this is not possible, for instance because it was on an already-deleted VM image. I can simply go into Active Directory Users and Computers, find the computer object, click it, and press Delete, and it seems to go away. My question is, is it totally, totally gone, or could this leave hanging references in any Active Directory nook or cranny I won't know to look in? (Excluding of course the expected tombstone records which expire after a set time.) If so, is there any good way to clean up the mess? Thank you for any insight! Kevin ps., It was over a year ago so I don't remember the exact details, but here's the gist of the DC renaming issue. I started with a single 2008 DC named ABC in a physical machine and wanted to end up instead with a DC of the same name running in a vSphere VM. Not wanting to mess with imaging the physical machine, my plan instead was: Rename ABC to XYZ. Fresh install 2008 on a VM, name it ABC, and join it to the domain. (I may have done the latter in the same step as promoting to DC; I don't recall.) dcpromo the new ABC as a 2nd DC, including GC. Make sure the new ABC replicated correctly from XYZ and then transfer the FSMO roles from XYZ to it. Once everything was confirmed to work with the new ABC alone, demote XYZ, remove the AD role, and remove it from the domain. Eventually I managed to do this but it was a much bumpier ride than expected. In particular, I got errors trying to join the new ABC to the domain. These included "The pre-windows 2000 name is already in use" and "No mapping between account names and security IDs was done." I eventually found that the computer object for XYZ had attributes that still referred to it as ABC. Among these were servicePrincipalName, msDS-AdditionalDnsHostName, and msDS-AdditionalSamAccountName. The latter I could not edit via Attribute Editor and instead had to run this against XYZ: NETDOM computername <simple-name> /add:<FQDN> There were some other hitches I don't remember exactly.
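    For auditing what a deletion actually leaves behind, one low-level option is to query the directory from any LDAP client for surviving references; a computer's SAM account name is its short name with a trailing $, and SPN attributes are a common place for old names to linger, as the rename episode above showed. A rough sketch, where the DC hostname, bind account, and base DN are placeholders:

        # look for objects that still carry the old machine name anywhere obvious
        ldapsearch -H ldap://dc01.example.com -D 'EXAMPLE\admin' -W \
            -b 'DC=example,DC=com' \
            '(|(sAMAccountName=OLDPC$)(servicePrincipalName=*OLDPC*))' dn

    An empty result is good evidence the object is gone apart from its tombstone, which AD garbage-collects on its own after the tombstone lifetime.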

    Read the article

  • Windows 7: How to place SuperFetch cache on an SSD?

    - by Ian Boyd
    I'm thinking of adding a solid state drive (SSD) to my existing Windows 7 installation. I know I can (and should) move my paging file to the SSD: Should the pagefile be placed on SSDs? Yes. Most pagefile operations are small random reads or larger sequential writes, both of which are types of operations that SSDs handle well. In looking at telemetry data from thousands of traces and focusing on pagefile reads and writes, we find that Pagefile.sys reads outnumber pagefile.sys writes by about 40 to 1, Pagefile.sys read sizes are typically quite small, with 67% less than or equal to 4 KB, and 88% less than 16 KB. Pagefile.sys writes are relatively large, with 62% greater than or equal to 128 KB and 45% being exactly 1 MB in size. In fact, given typical pagefile reference patterns and the favorable performance characteristics SSDs have on those patterns, there are few files better than the pagefile to place on an SSD. What I don't know is if I even can put a SuperFetch cache (i.e. ReadyBoost cache) on the solid state drive. I want to get the benefit of Windows being able to cache gigabytes of frequently accessed data on a relatively small (e.g. 30GB) solid state drive. This is exactly what SuperFetch+ReadyBoost (or SuperFetch+ReadyDrive) was designed for. Will Windows offer (or let) me place a ReadyBoost cache on a solid state flash drive connected via SATA? A problem with the ReadyBoost cache over the ReadyDrive cache is that the ReadyBoost cache does not survive between reboots. The cache is encrypted with a per-session key, making its existing contents unusable during boot and SuperFetch pre-fetching during login. Update One I know that Windows Vista limited you to only one ReadyBoost.sfcache file (I do not know if Windows 7 removed that limitation): Q: Can you use multiple devices for EMDs? A: Nope. We've limited Vista to one ReadyBoost per machine Q: Why just one device? A: Time and quality. Since this is the first revision of the feature, we decided to focus on making the single device exceptional, without the difficulties of managing multiple caches. We like the idea, though, and it's under consideration for future versions. I also know that the 4GB limit on the cache file was a limitation of the FAT filesystem used on most USB sticks - an SSD drive would be formatted with NTFS: Q: What's the largest amount of flash that I can use for ReadyBoost? A: You can use up to 4GB of flash for ReadyBoost (which turns out to be 8GB of cache w/ the compression) Q: Why can't I use more than 4GB of flash? A: The FAT32 filesystem limits our ReadyBoost.sfcache file to 4GB Can a ReadyBoost cache on an NTFS volume be larger than 4GB? Update Two The ReadyBoost cache is encrypted with a per-boot session key. This means that the cache has to be re-built after each boot, and cannot be used to help speed boot times, or latency from login to usable. Windows ReadyDrive technology takes advantage of non-volatile (NV) memory (i.e. flash) that is incorporated with some hybrid hard drives. This flash cache can be used to help Windows boot, or resume from hibernate faster. Will Windows 7 use an internal SSD drive as a ReadyBoost/ReadyDrive/SuperFetch cache? Is it possible to make Windows store a SuperFetch cache (i.e. ReadyBoost) on a non-removable SSD? Is it possible to not encrypt the ReadyBoost cache, and if so will Windows 7 use the cache at boot time? See also SuperUser.com: ReadyBoost + SSD = ? Windows 7 - ReadyBoost & SSD drives? 
Support and Q&A for Solid-State Drives Using SDD as a cache for HDD, is there a solution? Performance increase using SSD for paging/fetch/cache or ReadyBoost? (Win7) Windows 7 To Boost SSD Performance How to Disable Nonvolatile Caching

    Read the article

  • overriding new ubuntu installation

    - by tkoomzaaskz
    I've got an Ubuntu 11.10 which lost its support in May 2013; now I'd like to reinstall to the most up-to-date LTS, which is 12.04. My question is regarding my current partitions and doing backups. Is there a safe way to back up my data to some local partitions instead of copying files to DVDs/external drives (this is very uncomfortable in my situation)? The following system commands show my disk: $ lsblk NAME MAJ:MIN RM SIZE RO MOUNTPOINT sda 8:0 0 232,9G 0 +-sda1 8:1 0 48,8G 0 +-sda2 8:2 0 63G 0 +-sda3 8:3 0 1K 0 +-sda4 8:4 0 53,7G 0 / +-sda5 8:5 0 18,6G 0 +-sda6 8:6 0 25,5G 0 +-sda7 8:7 0 23,3G 0 [SWAP] sr0 11:0 1 1024M 0 and $ sudo fdisk -l [sudo] password for xyz: Disk /dev/sda: 250.1 GB, 250059350016 bytes 255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xc3ffc3ff Device Boot Start End Blocks Id System /dev/sda1 * 2048 102402047 51200000 7 HPFS/NTFS/exFAT /dev/sda2 215044096 347080703 66018304 7 HPFS/NTFS/exFAT /dev/sda3 347082750 488392064 70654657+ 5 Extended /dev/sda4 102402048 215042047 56320000 83 Linux /dev/sda5 395905923 434975939 19535008+ 83 Linux /dev/sda6 434976003 488392064 26708031 83 Linux /dev/sda7 347082752 395905023 24411136 82 Linux swap / Solaris In the beginning I had Windows Vista pre-installed on the machine when it was bought (damn!) and I later installed Linux (the one I have now). The Windows program in the master boot record has been overridden by GRUB, and now I can boot both Windows and Linux. This is the list of mounted devices: $ mount /dev/sda4 on / type ext4 (rw,errors=remount-ro,commit=0) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type sysfs (rw,noexec,nosuid,nodev) fusectl on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755) devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755) none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880) none on /run/shm type tmpfs (rw,nosuid,nodev) binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev) gvfs-fuse-daemon on /home/tomasz/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=tomasz) It's strange (I don't remember such a thing) that my current Linux uses only one partition (/dev/sda4). But anyway, that seems to be the case. My final question is: can I use one of the existing Linux partitions for a backup and install Ubuntu 12.04 without removing either Windows or Ubuntu 11.10? I mean, will GRUB automatically accept both the old Windows Vista and the two Linuxes (the old 11.10 and the "new" 12.04)? Is there any hidden operation done during installation that could harm my custom backup partition? My fstab file: proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sda4 during installation UUID=d44e89f5-9da2-48eb-83b3-887652ec95d2 / ext4 errors=remount-ro 0 1 # swap was on /dev/sda7 during installation UUID=bbe50535-ba57-434a-9272-211d859f0e00 none swap sw 0 0 sda5 and sda6 are leftover partitions created during an unsuccessful Linux installation (before my current installation); I didn't delete these partitions, and I have access to them (so I can use them as backup partitions). 
    Edit: A second question: why does lsblk show /dev/sda as 232.9G while fdisk says it has 250.1 GB? Where does the difference come from?
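    On the size question: fdisk reports decimal gigabytes while lsblk reports binary gibibytes, so both tools are describing the same disk. A quick check using the byte count fdisk printed:

        # 250059350016 bytes expressed in GiB; lsblk rounds this to 232.9G
        echo 'scale=1; 250059350016 / 1024 / 1024 / 1024' | bc

    As for the backup itself, a rough sketch of reusing one of the leftover partitions (assuming /dev/sda6 holds nothing you need; mkfs erases it):

        sudo mkfs.ext4 /dev/sda6
        sudo mount /dev/sda6 /mnt
        sudo rsync -a /home /mnt/

    Bear in mind a backup on the same physical disk only protects against a failed install if the installer is told to leave that partition alone; a mistaken "use entire disk" choice would take the backup with it.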

    Read the article

  • Setting up VPN client: L2TP with IPsec

    - by zachar
    I've got to connect to a VPN server. It works on Windows, but not in Ubuntu 10.04. The number of options is confusing to me. This is the input that I have: IP address of the VPN Pre-shared key to authenticate Information that MS-CHAPv2 is used Login and password for the VPN I tried to achieve this with Network Manager and with L2TP IPsec VPN Manager 1.0.9, but both failed. Here is some logged information from L2TP IPsec VPN Manager 1.0.9: Nov 09 15:21:46.854 ipsec_setup: Stopping Openswan IPsec... Nov 09 15:21:48.088 Stopping xl2tpd: xl2tpd. Nov 09 15:21:48.132 ipsec_setup: Starting Openswan IPsec U2.6.23/K2.6.32-49-generic... Nov 09 15:21:48.308 ipsec__plutorun: Starting Pluto subsystem... Nov 09 15:21:48.318 ipsec__plutorun: adjusting ipsec.d to /etc/ipsec.d Nov 09 15:21:48.338 ipsec__plutorun: 002 added connection description "my_vpn_name" Nov 09 15:21:48.348 ipsec__plutorun: 003 NAT-Traversal: Trying new style NAT-T Nov 09 15:21:48.348 ipsec__plutorun: 003 NAT-Traversal: ESPINUDP(1) setup failed for new style NAT-T family IPv4 (errno=19) Nov 09 15:21:48.349 ipsec__plutorun: 003 NAT-Traversal: Trying old style NAT-T Nov 09 15:21:48.994 104 "my_vpn_name" #1: STATE_MAIN_I1: initiate Nov 09 15:21:48.994 003 "my_vpn_name" #1: received Vendor ID payload [RFC 3947] method set to=109 Nov 09 15:21:48.994 003 "my_vpn_name" #1: received Vendor ID payload [Dead Peer Detection] Nov 09 15:21:48.994 106 "my_vpn_name" #1: STATE_MAIN_I2: sent MI2, expecting MR2 Nov 09 15:21:48.994 003 "my_vpn_name" #1: NAT-Traversal: Result using RFC 3947 (NAT-Traversal): i am NATed Nov 09 15:21:48.994 108 "my_vpn_name" #1: STATE_MAIN_I3: sent MI3, expecting MR3 Nov 09 15:21:48.994 004 "my_vpn_name" #1: STATE_MAIN_I4: ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=oakley_3des_cbc_192 prf=oakley_sha group=modp1024} Nov 09 15:21:48.995 117 "my_vpn_name" #2: STATE_QUICK_I1: initiate Nov 09 15:21:48.995 004 "my_vpn_name" #2: STATE_QUICK_I2: sent QI2, IPsec SA established transport mode {ESP=>0x0c96795d <0x483e1a42 xfrm=AES_128-HMAC_SHA1 NATOA=none NATD=none DPD=none} Nov 09 15:21:49.996 [ERROR 210] Failed to open l2tp control file 'c my_vpn_name' and from syslog: Nov 9 15:21:46 o99 L2tpIPsecVpnControlDaemon: Opening client connection Nov 9 15:21:46 o99 L2tpIPsecVpnControlDaemon: Executing command ipsec setup stop Nov 9 15:21:46 o99 ipsec_setup: Stopping Openswan IPsec... Nov 9 15:21:48 o99 kernel: [ 4350.245171] NET: Unregistered protocol family 15 Nov 9 15:21:48 o99 ipsec_setup: ...Openswan IPsec stopped Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Command ipsec setup stop finished with exit code 0 Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Executing command invoke-rc.d xl2tpd stop Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Command invoke-rc.d xl2tpd stop finished with exit code 0 Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Opening client connection Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Closing client connection Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Executing command ipsec setup start Nov 9 15:21:48 o99 kernel: [ 4350.312483] NET: Registered protocol family 15 Nov 9 15:21:48 o99 ipsec_setup: Starting Openswan IPsec U2.6.23/K2.6.32-49-generic... Nov 9 15:21:48 o99 ipsec_setup: Using NETKEY(XFRM) stack Nov 9 15:21:48 o99 kernel: [ 4350.410774] Initializing XFRM netlink socket Nov 9 15:21:48 o99 kernel: [ 4350.413601] padlock: VIA PadLock not detected. Nov 9 15:21:48 o99 kernel: [ 4350.427311] padlock: VIA PadLock Hash Engine not detected. 
Nov 9 15:21:48 o99 kernel: [ 4350.441533] padlock: VIA PadLock not detected. Nov 9 15:21:48 o99 ipsec_setup: ...Openswan IPsec started Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Command ipsec setup start finished with exit code 0 Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Executing command invoke-rc.d xl2tpd start Nov 9 15:21:48 o99 ipsec__plutorun: adjusting ipsec.d to /etc/ipsec.d Nov 9 15:21:48 o99 pluto: adjusting ipsec.d to /etc/ipsec.d Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Command invoke-rc.d xl2tpd start finished with exit code 0 Nov 9 15:21:48 o99 ipsec__plutorun: 002 added connection description "my_vpn_name" Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Executing command ipsec auto --ready Nov 9 15:21:48 o99 ipsec__plutorun: 003 NAT-Traversal: Trying new style NAT-T Nov 9 15:21:48 o99 ipsec__plutorun: 003 NAT-Traversal: ESPINUDP(1) setup failed for new style NAT-T family IPv4 (errno=19) Nov 9 15:21:48 o99 ipsec__plutorun: 003 NAT-Traversal: Trying old style NAT-T Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Command ipsec auto --ready finished with exit code 0 Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Executing command ipsec auto --up my_vpn_name Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Command ipsec auto --up my_vpn_name finished with exit code 0 Nov 9 15:21:49 o99 L2tpIPsecVpnControlDaemon: Closing client connection Can anyone tell me something more about that? Where is the mistake?
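    The last line of the manager log is the telling one: "c my_vpn_name" is the connect command the GUI tries to write into xl2tpd's control FIFO, so "Failed to open l2tp control file" means that FIFO could not be opened, usually because xl2tpd is not actually running or exposes the pipe at a different path. A minimal sketch of the checks, assuming the usual Debian/Ubuntu FIFO path:

        # is the daemon up, and does its control pipe exist?
        pgrep -l xl2tpd
        ls -l /var/run/xl2tpd/l2tp-control
        # if not, restart it and watch syslog while retrying the connection
        sudo service xl2tpd restart
        tail -f /var/log/syslog | grep -i 'xl2tpd\|pppd'

    The IPsec half already reaches "IPsec SA established", so the remaining problem is purely on the L2TP/PPP side.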

    Read the article

  • vagrant fails to bring up additional adapter for centos vm using virtual box provider

    - by Anadi Misra
    This is in continuation of the question asked here about the host-only adapter on DHCP. I upgraded to Vagrant 1.6.3 and updated the Vagrantfile to the following setting for multiple adapters: # add additional adapter for inter machine networking dev.vm.network :private_network, :type => "dhcp", :adapter => "2", :netmask => "255.255.255.0" It goes through creating the adapters but then fails to bring up the NIC on the VM: Anadis-MacBook-Pro:full-stack-env anadi$ vagrant up Bringing machine 'full-stack-env' up with 'virtualbox' provider... ==> full-stack-env: Clearing any previously set forwarded ports... ==> full-stack-env: Clearing any previously set network interfaces... ==> full-stack-env: Preparing network interfaces based on configuration... full-stack-env: Adapter 1: nat full-stack-env: Adapter 2: hostonly ==> full-stack-env: Forwarding ports... full-stack-env: 22 => 4223 (adapter 1) full-stack-env: 8080 => 8090 (adapter 1) ==> full-stack-env: Running 'pre-boot' VM customizations... ==> full-stack-env: Booting VM... ==> full-stack-env: Waiting for machine to boot. This may take a few minutes... full-stack-env: SSH address: 127.0.0.1:4223 full-stack-env: SSH username: vagrant full-stack-env: SSH auth method: private key full-stack-env: Warning: Connection timeout. Retrying... full-stack-env: Warning: Connection timeout. Retrying... full-stack-env: Warning: Remote connection disconnect. Retrying... ==> full-stack-env: Machine booted and ready! ==> full-stack-env: Checking for guest additions in VM... ==> full-stack-env: Setting hostname... ==> full-stack-env: Configuring and enabling network interfaces... The following SSH command responded with a non-zero exit status. Vagrant assumes that this means the command failed! ARPCHECK=no /sbin/ifup eth 2> /dev/null Stdout from the command: Device eth does not seem to be present, delaying initialization. Stderr from the command: However, when I log in to the environment I see two network interfaces, as expected: Anadis-MacBook-Pro:full-stack-env anadi$ vagrant ssh Last login: Wed Jun 4 12:54:47 2014 from 10.0.2.2 [vagrant@full-stack-env ~]$ ifconfig eth0 Link encap:Ethernet HWaddr 08:00:27:BD:39:57 inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:febd:3957/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:511 errors:0 dropped:0 overruns:0 frame:0 TX packets:360 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:54574 (53.2 KiB) TX bytes:46675 (45.5 KiB) eth1 Link encap:Ethernet HWaddr 08:00:27:A3:86:C9 inet addr:172.28.128.3 Bcast:172.28.128.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fea3:86c9/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:5 errors:0 dropped:0 overruns:0 frame:0 TX packets:9 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1360 (1.3 KiB) TX bytes:894 (894.0 b) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) I am a bit confused here about why it is trying to add another NIC (eth2). In the VM I used for creating this Vagrant box, I had already added two NICs.
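    The "Device eth does not seem to be present" message comes from inside the guest: the CentOS network scripts are being asked to bring up an interface whose name does not match what the kernel created. That is the classic symptom of a base box packaged with leftover udev NIC bindings, which push a newly added adapter to eth2 while the config script targets a lower name. A rough sketch of what to inspect inside the guest, assuming the CentOS 6 default paths:

        # stale MAC-to-name bindings inherited from the box-building VM
        cat /etc/udev/rules.d/70-persistent-net.rules
        # there should be one ifcfg file per interface the box really has
        ls /etc/sysconfig/network-scripts/ifcfg-eth*
        # clearing the rules and rebooting lets udev renumber from eth0 again
        sudo rm /etc/udev/rules.d/70-persistent-net.rules
        sudo reboot

    Repackaging the box after removing the extra NIC from the source VM avoids repeating this on every vagrant destroy/up cycle.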

    Read the article

  • Permission Mystery - apt-get and other system utilities have 000 permissions

    - by emteh
    I've been trying to track down this strange behavior for years now. Always after installing software updates, the permissions of a lot of system tools are broken, as you can see below. I am reasonably convinced that the machine is not owned by someone else: regular security updates + grsecurity kernel + PaX + daily rkhunter runs. Besides that, there is no incentive for an attacker to fiddle with the system in such obvious ways. I installed Bastille Linux (http://bastille-linux.sourceforge.net/) and tried to uninstall it later, so the problems could be related to that. However, I don't see how this can happen in a regular way after updates. System: Ubuntu 10.04, recently upgraded to Ubuntu 12.04, but the problem persists. The apt configuration in /etc/apt/ looks sane to me. But nevertheless, could the source of the trouble be here? DPkg::Pre-Install-Pkgs {"/usr/sbin/dpkg-preconfigure --apt || true";}; DPkg::Post-Invoke { "if [ -x /usr/bin/debsums ]; then /usr/bin/debsums --generate=nocheck -sp /var/cache/apt/archives; fi"; }; // Makes sure that rkhunter file properties database is updated // after each remove or install only APT_AUTOGEN is enabled DPkg::Post-Invoke { "if [ -x /usr/bin/rkhunter ] && grep -qiE '^APT_AUTOGEN=.? (true|yes)' /etc/default/rkhunter; then /usr/share/rkhunter/scripts/rkhupd.sh; fi" } DPkg::Post-Invoke {"if [ -d /var/lib/update-notifier ]; then touch /var/lib/update-notifier/dpkg-run-stamp; fi; if [ -e /var/lib/update-notifier/updates-available ]; then echo > /var/lib/update-notifier/updates-available; fi "; }; Where do these chmod 000 come from? I'm feeling really uneasy with this problem. root@besen:~# find /usr/bin/ -perm 0 -ls 14721496 196 ---------- 1 root root 192592 Oct 15 11:58 /usr/bin/apt-get 14721144 68 ---------- 1 root root 63848 Sep 13 00:29 /usr/bin/gpasswd root@besen:~# find /usr/sbin/ -perm 0 -ls 1727732 92 ---------- 1 root root 86984 Sep 13 00:29 /usr/sbin/usermod 1727727 64 ---------- 1 root root 57640 Sep 13 00:29 /usr/sbin/userdel 1727719 64 ---------- 1 root root 57680 Sep 13 00:29 /usr/sbin/newusers 1727718 40 ---------- 1 root root 38632 Sep 13 00:29 /usr/sbin/grpunconv 1727728 48 ---------- 1 root root 47088 Sep 13 00:29 /usr/sbin/groupadd 1727724 32 ---------- 1 root root 29584 Sep 13 00:29 /usr/sbin/pwunconv 19031620 84 ---------- 1 root root 81880 Jan 3 2012 /usr/sbin/edquota 14877113 48 ---------- 1 root root 46880 Sep 13 00:29 /usr/sbin/grpck 1727722 40 ---------- 1 root root 38632 Sep 13 00:29 /usr/sbin/pwck 1727730 96 ---------- 1 root root 91464 Sep 13 00:29 /usr/sbin/useradd 19031619 16 ---------- 1 root root 14600 Jan 3 2012 /usr/sbin/quotastats 1727720 44 ---------- 1 root root 42760 Sep 13 00:29 /usr/sbin/groupdel 1727733 36 ---------- 1 root root 34504 Sep 13 00:29 /usr/sbin/pwconv 19031621 80 ---------- 1 root root 77632 Jan 3 2012 /usr/sbin/rpc.rquotad 19030041 76 ---------- 1 root root 73600 Jan 3 2012 /usr/sbin/repquota 1727731 40 ---------- 1 root root 38624 Sep 13 00:29 /usr/sbin/grpconv 1727725 56 ---------- 1 root root 49472 Sep 13 00:29 /usr/sbin/vipw 1727723 64 ---------- 1 root root 57672 Sep 13 00:29 /usr/sbin/groupmod root@besen:~# find /sbin/ -perm 0 -ls 16760927 76 ---------- 1 root root 73464 Jan 3 2012 /sbin/quotaon Any tips? I really can't pinpoint the problem in more detail. It happens after installing updates, but I can find no hooks in the dpkg/apt system.
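    Two places worth checking before anything else: dpkg's permission overrides (Bastille-style lockdowns are commonly registered there, and dpkg re-applies an override whenever the owning package's files are unpacked, i.e. on every update) and any leftover hooks that run around each apt invocation. A minimal sketch; note that several of these tools are normally setuid, so the hand-set mode on apt-get is only a stopgap until the packages are reinstalled:

        # restore apt-get by hand so the package system is usable again
        sudo chmod 755 /usr/bin/apt-get
        # list registered overrides; any entry covering the 000 binaries is the smoking gun
        dpkg-statoverride --list
        # look for hooks beyond the ones already quoted above
        grep -r 'Invoke' /etc/apt/apt.conf.d/
        # reinstalling the owning packages resets modes to the packaged defaults
        sudo apt-get install --reinstall apt passwd quota

    If dpkg-statoverride lists the affected paths, removing those entries (dpkg-statoverride --remove <path>) should stop the permissions from reverting on the next update.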

    Read the article
