Search Results


  • .Net Custom Components "disappear" after file save

    - by EatATaco
    I might have a hard time explaining this because I am at a total loss for what is happening, so I am just looking for some guidance. I might be a bit wordy because I don't know exactly what the relevant information is.

    I am developing a GUI for a project that I am working on using .Net (C#). Part of the interface mimics, exactly, what we do in another product. For consistency reasons, my boss wants me to make it look the same way, so I got the other software and basically copied and pasted the components into my new GUI. This required me to introduce a component library (the now defunct Graphics Server GSNet, so I can't go to them for help) so I could implement some simple graphs and temperature/pressure "widgets."

    The components show up fine, and when I compile, everything seems to work fine. However, at some point during my programming it just breaks. Sometimes the tab that these components are on starts throwing exceptions when I view the designer page (a missing method exception) so it won't display. Sometimes JUST those components from the GSNet library don't show up. Sometimes, if I try to run it, I get a not-instantiated exception on one of their lines of code in the designer code file. Sometimes I can't view the designer at all. No matter what I do I can't reverse it: even if I undo what I just did, it won't fix it. If it happens, I have to revert to a backup and start over again.

    So I started to back up at pretty much every step. I compile it and it works. I comment out a line of code, save it, and then uncomment that same line of code (so I am working with the same exact code) and the components all disappear. It doesn't matter which line of code I actually comment out, as long as it is in the same project where these components are used. I pretty much have to use the components, so does anyone have any suggestion on where I can look to debug this?


  • Collision Detection problem (intersection with plane)

    - by Demi
    I'm doing a scene using openGL (a house). I want to do some collision detection, mainly with the walls in the house. I have tried the following code:

        // a plane is represented with a normal and a position in space
        Vector planeNor(0,0,1);
        Vector position(0,0,-10);
        Plane p(planeNor,position);

        Vector vel(0,0,-1);
        double lamda;    // this is the intersection point
        Vector pNormal;  // the normal of the intersection
        // this method is from Nehe's Lesson 30
        coll = p.TestIntersionPlane(vel,Z,lamda,pNormal);

        glPushMatrix();
        glBegin(GL_QUADS);
        if(coll)
            glColor3f(1,0,0);
        else
            glColor3f(1,1,1);
        glVertex3d(0,0,-10);
        glVertex3d(3,0,-10);
        glVertex3d(3,3,-10);
        glVertex3d(0,3,-10);
        glEnd();
        glPopMatrix();

    Nehe's method:

        #define EPSILON 1.0e-8
        #define ZERO EPSILON

        bool Plane::TestIntersionPlane(const Vector3 & position, const Vector3 & direction, double& lamda, Vector3 & pNormal)
        {
            double DotProduct = direction.scalarProduct(normal); // Dot Product Between Plane Normal And Ray Direction
            double l2;

            // Determine If Ray Parallel To Plane
            if ((DotProduct < ZERO) && (DotProduct > -ZERO))
                return false;

            l2 = (normal.scalarProduct(position)) / DotProduct; // Find Distance To Collision Point

            if (l2 < -ZERO) // Test If Collision Behind Start
                return false;

            pNormal = normal;
            lamda = l2;
            return true;
        }

    Z is initially (0,0,0) and every time I move the camera towards the plane, I reduce its z component by 0.1 (i.e. Z.z -= 0.1). I know that the problem is with the vel vector, but I can't figure out what the right value should be. Can anyone please help me?
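    For reference, here is a minimal, language-agnostic sketch (in Python) of the same ray-plane test, written on the assumption that the ray origin is the camera position (Z above) and the direction is the per-frame movement vector (vel). The helper names are made up for illustration; this is not the questioner's code.

        # Ray-plane intersection sketch: origin + t*direction hits the plane at t.
        def dot(a, b):
            return sum(x * y for x, y in zip(a, b))

        def sub(a, b):
            return tuple(x - y for x, y in zip(a, b))

        def intersect_plane(origin, direction, plane_point, plane_normal, eps=1.0e-8):
            """Return t such that origin + t*direction lies on the plane, or None."""
            denom = dot(plane_normal, direction)
            if abs(denom) < eps:          # ray parallel to the plane
                return None
            t = dot(plane_normal, sub(plane_point, origin)) / denom
            if t < 0:                     # plane is behind the ray origin
                return None
            return t

        # Camera at Z moving by vel each frame; a hit this frame means 0 <= t <= 1.
        Z = (0.0, 0.0, 0.0)
        vel = (0.0, 0.0, -0.1)
        t = intersect_plane(Z, vel, plane_point=(0, 0, -10), plane_normal=(0, 0, 1))
        collides_this_frame = t is not None and t <= 1.0

    The point of the sketch is that the test needs both the camera position (as the ray origin) and the movement vector (as the direction), not the velocity alone.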


  • What is the best way to reduce code and loop through a hierarchical commission script?

    - by JM4
    I have a script which currently "works" but is nearly 3600 lines of code and makes well over 50 database calls within a single script. From my experience, there is no way to really "loop" the script and minimize it because each call to the database is a subquery of the ones before, based on referral ids. Perhaps I can give a very simple example of what I am trying to accomplish and see if anybody has experience with something similar. In my example, there are three tables:

    Table 1 - Sellers

        ID | Comm_level | Parent
        ------------------------
        1  | 4          | NULL
        2  | 3          | 1
        3  | 2          | 1
        4  | 2          | 2
        5  | 2          | 2
        6  | 1          | 3

    ID is the id of one of our sales agents, comm_level determines what his commission percentage is for each product he sells, and parent is the ID of the agent who recruited that particular agent. In the example above, 1 is the top agent; he recruited two agents, 2 and 3. 2 recruited two agents, 4 and 5. 3 recruited one agent, 6. NOTE: An agent can NEVER recruit anybody equal to or higher than their own level.

    Table 2 - Commissions

        Level | Item 1 | Item 2 | Item 3
        --------------------------------
        4     | .5     | .4     | .3
        3     | .45    | .35    | .25
        2     | .4     | .3     | .2
        1     | .35    | .25    | .15

    This table lays out the commission percentages for each agent based on their actual comm_level (if an agent is at level 4, he will receive 50% on every item 1 sold, 40% on every item 2, 30% on every item 3, and so on).

    Table 3 - Items Sold

        ID | Item
        ----------
        4  | item_1
        4  | item_2
        1  | item_1
        2  | item_3
        6  | item_2
        1  | item_3

    This table pairs the actual item sold with the seller who sold the item. When generating the commission report, calculating individual values is very simple. Calculating their commission based on their sub-sellers, however, is very difficult. In this example, seller ID 1 gets a piece of every single item sold. The commission percentages indicate individual sales or the height of their commission. For example, when seller ID 6 sold one of item_2 above, the commission tree looks like the following:

        ID 6 - 25% of cost(item_2)
        ID 3 -  5% of cost(item_2)  (30% is his commission - 25% commission of seller ID 6)
        ID 1 - 10% of cost(item_2)  (40% is his commission - 30% of seller ID 3)

    This must be calculated for every agent in the system from the top down (hence the DB calls within while loops throughout my enormous script). Anybody have a good suggestion or samples they may have used in the past?
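    If the seller hierarchy and rate table fit in memory, the whole report can be computed with one query per table and an in-memory walk up each referral chain, rather than nested per-agent queries. The sketch below (Python, purely illustrative; the dictionaries simply mirror the example tables above) shows the differential calculation:

        # Differential ("roll-up") commission sketch: each ancestor keeps the gap
        # between his rate and the rate already paid out below him.
        sellers = {            # id -> (comm_level, parent)
            1: (4, None), 2: (3, 1), 3: (2, 1), 4: (2, 2), 5: (2, 2), 6: (1, 3),
        }
        rates = {              # comm_level -> {item: rate}
            4: {"item_1": .50, "item_2": .40, "item_3": .30},
            3: {"item_1": .45, "item_2": .35, "item_3": .25},
            2: {"item_1": .40, "item_2": .30, "item_3": .20},
            1: {"item_1": .35, "item_2": .25, "item_3": .15},
        }
        sales = [(4, "item_1"), (4, "item_2"), (1, "item_1"),
                 (2, "item_3"), (6, "item_2"), (1, "item_3")]

        def commissions(seller_id, item):
            """Yield (agent_id, share) pairs for one sale, walking up the referral chain."""
            level, parent = sellers[seller_id]
            share_so_far = rates[level][item]
            yield seller_id, share_so_far          # the seller keeps his own rate
            while parent is not None:
                level, next_parent = sellers[parent]
                rate = rates[level][item]
                yield parent, rate - share_so_far  # each ancestor keeps the difference
                share_so_far = rate
                parent = next_parent

        totals = {}
        for seller_id, item in sales:
            for agent, share in commissions(seller_id, item):
                totals[agent] = totals.get(agent, 0.0) + share

    For the sale of item_2 by seller 6 this yields 25% to seller 6, 5% to seller 3 and 10% to seller 1, matching the example tree above.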


  • No change in the SelectedIndex of a DropDownList in a custom control

    - by mahdiahmadirad
    Hi, I have created a custom control which is a DropDownList with specified items. I exposed AutoPostback and SelectedCategoryId as properties and SelectedIndexChanged as an event on my custom control. Here is the code-behind of my ASCX file:

        private int _selectedCategoryId;
        private bool _autoPostback = false;
        public event EventHandler SelectedIndexChanged;

        public void BindData()
        {
            //Some Code...
        }

        protected void Page_Load(object sender, EventArgs e)
        {
            BindData();
            DropDownList1.AutoPostBack = this._autoPostback;
        }

        public int SelectedCategoryId
        {
            get { return int.Parse(this.DropDownList1.SelectedItem.Value); }
            set { this._selectedCategoryId = value; }
        }

        public string AutoPostback
        {
            get { return this.DropDownList1.AutoPostBack.ToString(); }
            set { this._autoPostback = Convert.ToBoolean(value); }
        }

        protected void DropDownList1_SelectedIndexChanged(object sender, EventArgs e)
        {
            if (SelectedIndexChanged != null)
                SelectedIndexChanged(this, EventArgs.Empty);
        }

    I want to use an UpdatePanel to update textbox fields according to the drop-down list's selected index. This is my code in the ASPX page:

        <asp:Panel ID="PanelCategory" runat="server">
            <p>
                Select Product Category:&nbsp;
                <myCtrl:CategoryDDL ID="CategoryDDL1" AutoPostback="true"
                    OnSelectedIndexChanged="CategoryIndexChanged"
                    SelectedCategoryId="0" runat="server" />
            </p>
            <hr />
        </asp:Panel>
        <asp:UpdatePanel ID="UpdatePanelEdit" runat="server">
            <ContentTemplate>
                <%--Some TextBoxes and Other Controls--%>
            </ContentTemplate>
            <Triggers>
                <asp:PostBackTrigger ControlID="CategoryDDL1" />
            </Triggers>
        </asp:UpdatePanel>

    But the selected index of CategoryDDL1 is always 0 (the default), which means only a zero value is passed to the event that updates the textbox data. What is wrong with my code? Why does the selected index never change? Help?


  • Looping class for a template-engine kind of thing

    - by tarnfeld
    Hey, I am updating my class Nesty so it supports infinite nesting, but I'm having a little trouble. Here is the class:

        <?php
        class Nesty
        {
            // Class variables
            private $text;
            private $data = array();
            private $loops = 0;
            private $maxLoops = 0;

            public function __construct($text, $data = array(), $maxLoops = 5)
            {
                // Set the class vars
                $this->text = $text;
                $this->data = $data;
                $this->maxLoops = $maxLoops;
            }

            // Loop function
            private function loopThrough($data)
            {
                if (($this->loops + 1) > $this->maxLoops) {
                    die("ERROR: Too many loops!");
                } else {
                    $keys = array_keys($data);
                    for ($x = 0; $x < count($keys); $x++) {
                        if (is_array($data[$keys[$x]])) {
                            $this->loopThrough($data[$keys[$x]]);
                        } else {
                            return $data[$keys[$x]];
                        }
                    }
                }
            }

            // Templater method
            public function template()
            {
                echo $this->loopThrough($this->data);
            }
        }
        ?>

    Here is the code you would use to create an instance of the class:

        <?php
        // The nested array
        $data = array(
            "person" => array(
                "name" => "Tom Arnfeld",
                "age" => 15
            ),
            "product" => array(
                "name" => "Cakes",
                "price" => array(
                    "single" => 59,
                    "double" => 99
                )
            ),
            "other" => "string"
        );

        // Retrieve the template text
        $file = "TestData.tpl";
        $fp = fopen($file, "r");
        $text = fread($fp, filesize($file));

        // Create the Nesty object
        require_once('Nesty.php');
        $nesty = new Nesty($text, $data);

        // Save the newly templated text to a variable $message
        $message = $nesty->template();

        // Print out $message on the page
        echo("<pre>" . $message . "</pre>");
        ?>

    Any ideas?
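    For comparison, here is a small sketch (in Python, purely illustrative, not the author's PHP) of a recursive walk over the same kind of nested data. The point it tries to make is that the recursion has to accumulate results, for example as dotted keys a template can be substituted from, rather than return on the first scalar it meets, which is one way of thinking about what loopThrough needs to do:

        # Flatten nested data into dotted keys, the kind of lookup table a simple
        # template engine can substitute from. Mirrors the $data array above.
        def flatten(data, prefix=""):
            """Return {"person.name": "Tom Arnfeld", ...} for arbitrarily deep dicts."""
            flat = {}
            for key, value in data.items():
                path = f"{prefix}.{key}" if prefix else key
                if isinstance(value, dict):
                    flat.update(flatten(value, path))  # recurse and merge, don't return early
                else:
                    flat[path] = value
            return flat

        data = {
            "person": {"name": "Tom Arnfeld", "age": 15},
            "product": {"name": "Cakes", "price": {"single": 59, "double": 99}},
            "other": "string",
        }

        lookup = flatten(data)
        template = "Hello {person.name}, {product.name} cost {product.price.single}p each."
        message = template
        for key, value in lookup.items():
            message = message.replace("{" + key + "}", str(value))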


  • A versioning workflow for multiple similar (but not identical) deployments

    - by rs77
    I'm currently employed at a small non-tech organisation and have been given the role of coding the organisation's website. While I have enjoyed the task and have learnt much about web dev, I've encountered a few issues that I'm hoping someone will be able to help me with, or at least point me in the right direction on.

    A little background: The site I work on has subdomains that each have their own separate WordPress installation, as this has been the easiest "backend" admin panel for the type of user who will be responsible for updating content (etc). Within the organisation I work under the Marketing Manager (MM) and I code according to his style guide and wireframes. While we have been working with only one subdomain since the beginning of the year, the project has been relatively simple and straightforward. However, lately the workflow is becoming a little more complicated as our original subdomain has been copied over to the other subdomains. Each of the new subdomains receives minor edits to its stylesheets (eg. different pictures for background, slightly different colours here and there, etc).

    The issue: At the moment managing all the different subdomains has been "bearable", but the straw that's breaking the camel's back at the moment has been the slight reversions the MM has required now that the CEO has seen the final product. The problem I'm having with reversions in stylesheets is that the CEO will one week state that he likes change "X" and then, as the MM and I continue to modify the site (to now "Z"), will another week state that he wants us to change "X" to "W" but keeping most of the changes made in "Y". What I'm looking for is something that allows for:

    - tracking file changes
    - reverting changes made (or reverting back to 'a' from 'e' but including changes 'b' & 'c')
    - easily uploading the necessary files to their respective WP-theme installation

    Does anything out there come close to addressing these issues? If so, what? Thanks for any help!

    PS - I'm learning Git at the moment and it seems to do the "tracking file changes" part quite nicely. I haven't learnt about the reverting-changes bit yet, though. Maybe for my final point I'm thinking of creating a shell script to automatically upload the files to their folders. Does Git do this too though?

    Addendum (alexbbrown): I had a similar problem: I ran a custom version of MediaWiki where I installed various extensions in the versioned core (with svn). Each of the extensions required a section in the config file, but the config file also needed local configuration for each of several deployments. I could have implemented it using includes, but they would not be versioned; and rebasing branches each time is a chore. +50 experience points for a good answer in git.
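    On the last point above (getting each subdomain's slightly different stylesheet into its own WP-theme folder), here is a rough deployment sketch in Python rather than shell, assuming a layout of one shared theme directory plus a small per-site override directory. All paths, hostnames and directory names here are hypothetical placeholders:

        # Hypothetical deployment sketch: copy a shared theme into each subdomain's
        # wp-content/themes directory, then overlay that site's local overrides
        # (e.g. its tweaked style.css and images). Paths below are made up.
        import shutil
        from pathlib import Path

        SHARED_THEME = Path("themes/shared")          # files common to every subdomain
        OVERRIDES = Path("themes/overrides")          # one subdirectory per subdomain
        SITES = {
            "alpha": Path("/var/www/alpha/wp-content/themes/mytheme"),
            "beta":  Path("/var/www/beta/wp-content/themes/mytheme"),
        }

        for site, target in SITES.items():
            if target.exists():
                shutil.rmtree(target)                 # start from a clean copy
            shutil.copytree(SHARED_THEME, target)     # shared files first
            override_dir = OVERRIDES / site
            if override_dir.exists():
                # per-site files win over the shared ones (needs Python 3.8+)
                shutil.copytree(override_dir, target, dirs_exist_ok=True)

    Git itself handles the tracking and reverting; a small script along these lines, run by hand or from a hook, can cover the upload step.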


  • Advice needed: stay with Java team or move to C++ team?

    - by user68759
    Some background - I have been programming in Java as a professional for the last few years. This is mainly using Java SE. I have also touched bits and pieces of other various Java technologies and have some basic knowledge about them. I consider my self as an intermediate Java programmer. I like Java very much. I think it is only going to get bigger. Recently, my manager asked my opinion on whether I would like to be transferred to another team within the company that is developing a product in C++. This is mainly because my current Java team simply didn't make enough money due to poor sales and the economic downturn. Now, I have never had any experience with C++ nor have I ever coded a single line of code in C++. I have always wanted to learn it and now is my chance. But I really want to make sure I get benefit out of it in the future, in the sense that I will have the skills that will still be on-demand in the future. So, what do you experts think? Is C++ still the language to learn these days to secure yourself for the future? What will I learn more in C++ but not in Java? And are they worthy to learn considering the current and possible future demands in IT industry? (Apart from the obvious more control over memory management and something along that line.) What is a good excuse to refuse the offer in order to stay with the Java team? I don't want to blatantly refuse it because you can never predict the future and I could possibly come back to my manager in the future and ask him to transfer me to the C++ team. How do I say it nicely that I am taking the offer but I would like to still be involved with Java one way or another, such as when there is a new Java project I would like to be considered. I have to admit that I am kind of 50-50 at the moment. I want to learn C++ for the sake of improving my skills and also helping my company to reduce the fund required for the Java team. But it is also hard for me to leave Java because I know Java is going to get bigger, so I am afraid of getting behind when I start concentrating on C++. I could, of course, decide to just join the C++ team, and then spend my free time reading about Java to keep in touch with it, but I thought I would ask anyway in case some people can point out the strong points of either over the other given the current and possibly future circumstances.


  • CSS3 special rounded corners in borders

    - by Ruben
    How can I realise this special corner in my border with CSS3 This is what I got now: http://jsfiddle.net/hashie5/nDv5k/ <aside> <h2>Product in de kijker</h2> <h3>Mobiele telefonie</h3> <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.</p> <a href="#" class="btn btn-runa">Ga verder</a> aside { border: 1px solid #CCCC00; border-radius: 10px; padding: 5px 20px 20px 20px; width: 290px; margin: 50px; } body { font-family:"Trebuchet MS"; color: #333333; background-color: #FFFFFF; font-size: 14px; line-height: 150%; } h1 { font-size: 30px; color: #1F668C; line-height: 120%; font-weight: normal; } h2 { font-size: 22px; color: #CCCC00; line-height: 120%; font-weight: normal; } h3 { font-size: 22px; color: #1E678E; line-height: 120%; font-weight: normal; } h4 { font-size: 20px; color: #1E668C; line-height: 120%; font-weight: normal; } h5 { font-size: 14px; color: #CCCC00; line-height: 120%; font-weight: bold; } .btn-runa { background: none; background-color: #CCCC00; color: #FFFFFF; text-shadow: none; }


  • How to list the submitted data on the same page where the form was submitted?

    - by OM The Eternity
    I have a form with 3 text values and one image.. I want to save these values such that i can display these records in the list below.. how can i do that... I am using osCommerce For example: <form method="post" id="fm-form" action ="" enctype="multipart/form-data"> <label>Name:</label> <input type="text" id="fm-name" name="fm-name" value="" /> <label>Email:</label> <input type="text" id="fm-email" name="fm-email" value="" /> <label>Birthdate:</label> <input type="text" id="fm-birthdate" name="fm-birthdate" value="" /> <input type="file" id="fm-image" name="fm-image"/> <input type="submit" id="fm-submit" value="Save it"> </form> <table width="100%" border="0" cellspacing="0" cellpadding="0"> <tr > <td align="center" class="productListing-heading">Product(s)</td> <td align="center" class="productListing-heading">Edit</td> <td align="center" class="productListing-heading">Delete</td> </tr> <?php for($i=0;$i<$count_image;$i++){?> <tr> <td align="left" class="productListing-data1"> <?php echo tep_image(DIR_WS_IMAGES . $file_realname, $save_image[$i], '110', '110');?> </td> <td align="center" class="productListing-data1">Edit</td> <td align="center" class="productListing-data1">Delete</td> </tr> <tr><td>&nbsp;</td></tr> <?php }?> </table> In the above format as the form is submitted the image has to be stored in a count_image array variable... and the on its count, the list below the form is displayed.. but i cannot get it worked.. could u pls help in doing this...


  • SSL certificates and types for securing your websites and applications

    - by Mit Naik
    Need to share few information regarding SSL certificates and there types, which SSL certificates are widely used etc. There are several SSL certificates available in the market today inorder to secure your domains, multiple subdomains, your applications and code too. Few of the details are mentioned below. CheapSSL certificates available today are Standard Rapidssl certificate, Thwate SSL 123 etc certificates which are basic level certificates. Most of these cheap SSL certificates are domain-validated only and don't provide the greatest trust for your customers. This means you shouldn't use cheap SSL certificates on e-commerce stores or other public-facing sites that require people to trust the site. EV certificates I found Geotrust Truebusinessid with EV certificate which is one of the cheapest certificate available in market today, you can also find Thwate, Versign EV version of certificates. Its designed to prevent phishing attacks better than normal SSL certificates. What makes an EV Certificate so special? An SSL Certificate Provider has to do some extensive validation to give you one including: Verifying that your organization is legally registered and active, Verifying the address and phone number of your organization, Verifying that your organization has exclusive right to use the domain specified in the EV Certificate, Verifying that the person ordering the certificate has been authorized by the organization, Verifying that your organization is not on any government blacklists. SSL WILDCARD CERTIFICATES, SSL Wildcard Certificates are big money-savers. An SSL Wildcard Certificate allows you to secure an unlimited number of first-level sub-domains on a single domain name. For example, if you need to secure the following websites: * www.yourdomain.com * secure.yourdomain.com * product.yourdomain.com * info.yourdomain.com * download.yourdomain.com * anything.yourdomain.com and all of these websites are hosted on the multiple server box, you can purchase and install one Wildcard certificate issued to *.yourdomain.com to secure all these sites. SAN CERTIFICATES, are interesting certificates and are helpfull if you want to secure multiple domains by generating single CSR and can install the same certificate on your additional sites without generating new CSRs for all the additional domains. CODE SIGNING CERTIFICATES, A code signing certificate is a file containing a digital signature that can be used to sign executables and scripts in order to verify your identity and ensure that your code has not been tampered with since it was signed. This helps your users to determine whether your software can be trusted. Scroll to the chart below to compare cheap code signing certificates. A code signing certificate allows you to sign code using a private and public key system similar to how an SSL certificate secures a website. When you request a code signing certificate, a public/private key pair is generated. The certificate authority will then issue a code signing certificate that contains the public key. A certificate for code signing needs to be signed by a trusted certificate authority so that the operating system knows that your identity has been validated. You could still use the code signing certificate to sign and distribute malicious software but you will be held legally accountable for it. You can sign many different types of code. 
The most common types include Windows applications such as .exe, .cab, .dll, .ocx, and .xpi files (using an Authenticode certificate), Apple applications (using an Apple code signing certificate), Microsoft Office VBA objects and macros (using a VBA code signing certificate), .jar files (using a Java code signing certificate), .air or .airi files (using an Adobe AIR certificate), and Windows Vista drivers and other kernel-mode software (using a Vista code certificate). In reality, a code signing certificate can sign almost all types of code as long as you convert the certificate to the correct format first. Also I found the below URL which provides you good suggestion regarding purchasing best SSL certificates for securing your site, as per the Financial institution, Bank, Hosting providers, ISP, Retail Merchants etc. Please vote and provide comments or any additional suggestions regarding SSL certificates.
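    As a quick illustration of the wildcard rule described above (a wildcard certificate name covers first-level subdomains only), here is a small Python sketch. It is a deliberate simplification of real certificate hostname matching, not an implementation of the full rules browsers apply:

        # Simplified illustration of wildcard coverage: "*.yourdomain.com" matches
        # exactly one extra label. Not a full RFC-style hostname matcher.
        def covered_by_wildcard(hostname, wildcard="*.yourdomain.com"):
            base = wildcard.split(".", 1)[1]          # "yourdomain.com"
            host_labels = hostname.split(".")
            return hostname.endswith("." + base) and len(host_labels) == len(base.split(".")) + 1

        assert covered_by_wildcard("secure.yourdomain.com")        # covered
        assert covered_by_wildcard("download.yourdomain.com")      # covered
        assert not covered_by_wildcard("yourdomain.com")           # bare domain: not covered
        assert not covered_by_wildcard("a.b.yourdomain.com")       # second-level: not covered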


  • What free space thresholds/limits are advisable for 640 GB and 2 TB hard disk drives with ZEVO ZFS on OS X?

    - by Graham Perrin
    Assuming that free space advice for ZEVO will not differ from advice for other modern implementations of ZFS … Question Please, what percentages or amounts of free space are advisable for hard disk drives of the following sizes? 640 GB 2 TB Thoughts A standard answer for modern implementations of ZFS might be "no more than 96 percent full". However if apply that to (say) a single-disk 640 GB dataset where some of the files most commonly used (by VirtualBox) are larger than 15 GB each, then I guess that blocks for those files will become sub optimally spread across the platters with around 26 GB free. I read that in most cases, fragmentation and defragmentation should not be a concern with ZFS. Sill, I like the mental picture of most fragments of a large .vdi in reasonably close proximity to each other. (Do features of ZFS make that wish for proximity too old-fashioned?) Side note: there might arise the question of how to optimise performance after a threshold is 'broken'. If it arises, I'll keep it separate. Background On a 640 GB StoreJet Transcend (product ID 0x2329) in the past I probably went beyond an advisable threshold. Currently the largest file is around 17 GB –  – and I doubt that any .vdi or other file on this disk will grow beyond 40 GB. (Ignore the purple masses, those are bundles of 8 MB band files.) Without HFS Plus: the thresholds of twenty, ten and five percent that I associate with Mobile Time Machine file system need not apply. I currently use ZEVO Community Edition 1.1.1 with Mountain Lion, OS X 10.8.2, but I'd like answers to be not too version-specific. References, chronological order ZFS Block Allocation (Jeff Bonwick's Blog) (2006-11-04) Space Maps (Jeff Bonwick's Blog) (2007-09-13) Doubling Exchange Performance (Bizarre ! Vous avez dit Bizarre ?) (2010-03-11) … So to solve this problem, what went in 2010/Q1 software release is multifold. The most important thing is: we increased the threshold at which we switched from 'first fit' (go fast) to 'best fit' (pack tight) from 70% full to 96% full. With TB drives, each slab is at least 5GB and 4% is still 200MB plenty of space and no need to do anything radical before that. This gave us the biggest bang. Second, instead of trying to reuse the same primary slabs until it failed an allocation we decided to stop giving the primary slab this preferential threatment as soon as the biggest allocation that could be satisfied by a slab was down to 128K (metaslab_df_alloc_threshold). At that point we were ready to switch to another slab that had more free space. We also decided to reduce the SMO bonus. Before, a slab that was 50% empty was preferred over slabs that had never been used. In order to foster more write aggregation, we reduced the threshold to 33% empty. This means that a random write workload now spread to more slabs where each one will have larger amount of free space leading to more write aggregation. Finally we also saw that slab loading was contributing to lower performance and implemented a slab prefetch mechanism to reduce down time associated with that operation. The conjunction of all these changes lead to 50% improved OLTP and 70% reduced variability from run to run … OLTP Improvements in Sun Storage 7000 2010.Q1 (Performance Profiles) (2010-03-11) Alasdair on Everything » ZFS runs really slowly when free disk usage goes above 80% (2010-07-18) where commentary includes: … OpenSolaris has changed this in onnv revision 11146 … [CFT] Improved ZFS metaslab code (faster write speed) (2010-08-22)
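    To put the numbers above into perspective, here is a trivial calculation (Python, just arithmetic) of how much space various "keep free" thresholds leave on the two drive sizes in the question; the 4% figure corresponds to the 96%-full point at which the allocator described above switches from first fit to best fit:

        # How much space various free-space thresholds leave on the two disks in the
        # question. Pure arithmetic; "GB" is used loosely here.
        disks = {"640 GB": 640, "2 TB": 2000}
        thresholds = [0.20, 0.10, 0.05, 0.04]   # fraction kept free; 4% ~ the 96%-full switch

        for name, size_gb in disks.items():
            for frac in thresholds:
                print(f"{name}: keep {frac:>4.0%} free -> {size_gb * frac:7.1f} GB free")

        # e.g. 4% of 640 GB is 25.6 GB, close to the ~26 GB mentioned above,
        # while 4% of 2 TB is 80 GB.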


  • Error when installing AppFabric 1.1 on Server 2012 64bit

    - by no9
    I am trying to install AppFabric 1.1 on 64bit Windows Server 2012 R2. All updates have been installed and updates are turned ON .NET Framework 4.0 is installed .NET Framework 3.5 is installed IIS is installed Windows Powershell 3.0 should already be included in Server 2012 I am getting the following error: 2014-03-21 11:02:34, Information Setup ===== Logging started: 2014-03-21 11:02:34+01:00 ===== 2014-03-21 11:02:34, Information Setup File: c:\6c4006b0b3f6dee1bf616f1967\setup.exe 2014-03-21 11:02:34, Information Setup InternalName: Setup.exe 2014-03-21 11:02:34, Information Setup OriginalFilename: Setup.exe 2014-03-21 11:02:34, Information Setup FileVersion: 1.1.2106.32 2014-03-21 11:02:34, Information Setup FileDescription: Setup.exe 2014-03-21 11:02:34, Information Setup Product: Microsoft(R) Windows(R) Server AppFabric 2014-03-21 11:02:34, Information Setup ProductVersion: 1.1.2106.32 2014-03-21 11:02:34, Information Setup Debug: False 2014-03-21 11:02:34, Information Setup Patched: False 2014-03-21 11:02:34, Information Setup PreRelease: False 2014-03-21 11:02:34, Information Setup PrivateBuild: False 2014-03-21 11:02:34, Information Setup SpecialBuild: False 2014-03-21 11:02:34, Information Setup Language: Language Neutral 2014-03-21 11:02:34, Information Setup 2014-03-21 11:02:34, Information Setup OS Name: Windows Server 2012 R2 Standard 2014-03-21 11:02:34, Information Setup OS Edition: ServerStandard 2014-03-21 11:02:34, Information Setup OSVersion: Microsoft Windows NT 6.2.9200.0 2014-03-21 11:02:34, Information Setup CurrentCulture: sl-SI 2014-03-21 11:02:34, Information Setup Processor Architecture: AMD64 2014-03-21 11:02:34, Information Setup Event Registration Source : AppFabric_Setup 2014-03-21 11:02:34, Information Setup 2014-03-21 11:02:34, Information Setup Microsoft.ApplicationServer.Setup.Upgrade.V1UpgradeSetupModule : Initiating V1.0 Upgrade module. 2014-03-21 11:02:34, Information Setup Microsoft.ApplicationServer.Setup.Upgrade.V1UpgradeSetupModule : V1.0 is not installed. 2014-03-21 11:02:54, Information Setup Microsoft.ApplicationServer.Setup.Upgrade.V1UpgradeSetupModule : Initiating V1 Upgrade pre-install. 2014-03-21 11:02:54, Information Setup Microsoft.ApplicationServer.Setup.Upgrade.V1UpgradeSetupModule : V1.0 is not installed, not taking backup. 2014-03-21 11:02:55, Information Setup Executing C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe with commandline -iru. 2014-03-21 11:02:55, Information Setup Return code from aspnet_regiis.exe is 0 2014-03-21 11:02:55, Information Setup Process.Start: C:\Windows\system32\msiexec.exe /quiet /norestart /i "c:\6c4006b0b3f6dee1bf616f1967\Microsoft CCR and DSS Runtime 2008 R3.msi" /l*vx "C:\Users\Administrator\AppData\Local\Temp\AppServerSetup1_1(2014-03-21 11-02-55).log" 2014-03-21 11:02:57, Information Setup Process.ExitCode: 0x00000000 2014-03-21 11:02:57, Information Setup Windows features successfully enabled. 
2014-03-21 11:02:57, Information Setup Process.Start: C:\Windows\system32\msiexec.exe /quiet /norestart /i "c:\6c4006b0b3f6dee1bf616f1967\Packages\AppFabric-1.1-for-Windows-Server-64.msi" ADDDEFAULT=Worker,WorkerAdmin,CacheService,CacheAdmin,Setup /l*vx "C:\Users\Administrator\AppData\Local\Temp\AppServerSetup1_1(2014-03-21 11-02-57).log" LOGFILE="C:\Users\Administrator\AppData\Local\Temp\AppServerSetup1_1_CustomActions(2014-03-21 11-02-57).log" INSTALLDIR="C:\Program Files\AppFabric 1.1 for Windows Server" LANGID=en-US 2014-03-21 11:03:45, Information Setup Process.ExitCode: 0x00000643 2014-03-21 11:03:45, Error Setup AppFabric installation failed because installer MSI returned with error code : 1603 2014-03-21 11:03:45, Error Setup 2014-03-21 11:03:45, Error Setup AppFabric installation failed because installer MSI returned with error code : 1603 2014-03-21 11:03:45, Error Setup 2014-03-21 11:03:45, Information Setup Microsoft.ApplicationServer.Setup.Core.SetupException: AppFabric installation failed because installer MSI returned with error code : 1603 2014-03-21 11:03:45, Information Setup at Microsoft.ApplicationServer.Setup.Installer.WindowsInstallerProxy.GenerateAndThrowSetupException(Int32 exitCode, LogEventSource logEventSource) 2014-03-21 11:03:45, Information Setup at Microsoft.ApplicationServer.Setup.Installer.WindowsInstallerProxy.Invoke(LogEventSource logEventSource, InstallMode installMode, String packageIdentity, List`1 updateList, List`1 customArguments) 2014-03-21 11:03:45, Information Setup at Microsoft.ApplicationServer.Setup.Installer.MsiInstaller.InstallSelectedFeatures() 2014-03-21 11:03:45, Information Setup at Microsoft.ApplicationServer.Setup.Installer.MsiInstaller.Install() 2014-03-21 11:03:45, Information Setup at Microsoft.ApplicationServer.Setup.Client.ProgressPage.StartAction() 2014-03-21 11:03:45, Information Setup 2014-03-21 11:03:45, Information Setup === Summary of Actions === 2014-03-21 11:03:45, Information Setup Required Windows components : Completed Successfully 2014-03-21 11:03:45, Information Setup IIS Management Console : Completed Successfully 2014-03-21 11:03:45, Information Setup Microsoft CCR and DSS Runtime 2008 R3 : Completed Successfully 2014-03-21 11:03:45, Information Setup AppFabric 1.1 for Windows Server : Failed 2014-03-21 11:03:45, Information Setup Hosting Services : Failed 2014-03-21 11:03:45, Information Setup Caching Services : Failed 2014-03-21 11:03:45, Information Setup Hosting Administration : Failed 2014-03-21 11:03:45, Information Setup Cache Administration : Failed 2014-03-21 11:03:45, Information Setup Microsoft Update : Skipped 2014-03-21 11:03:45, Information Setup Microsoft Update : Skipped 2014-03-21 11:03:45, Information Setup 2014-03-21 11:03:45, Information Setup ===== Logging stopped: 2014-03-21 11:03:45+01:00 ===== I have tried this solution but no success: http://stackoverflow.com/questions/11205927/appfabric-installation-failed-because-installer-msi-returned-with-error-code-1 My system enviroment variable PSModulesPath has this value: C:\Windows\System32\WindowsPowerShell\v1.0\Modules I have also followed this link with no success: http://jefferytay.wordpress.com/2013/12/11/installing-appfabric-on-windows-server-2012/ Any help would be greatly appreciated !


  • Integrating HP Systems Insight Manager into an existing environment

    - by ewwhite
    I'm working with an environment that spans multiple data centers/sites and consists primarily of HP ProLiant servers (G5-G7) running Linux. The mix is 30% RHEL/CentOS, the rest are Gentoo :(. I also have a few dozen virtual machines running back-office and Windows servers on VMWare ESX hosts. I run OpenNMS to pull SNMP data from the various server nodes and networking devices. While OpenNMS works wonderfully for up/down, thresholds and notifications, it's native handling of traps is a little rough and the graphs are not particularly pretty. I use Orca/RRD graphs for performance trending and nice graphs. I'm tasked with inventorying the environment and wanted to come up with a clean way to organize server information. Since my environment is mostly HP, I've been playing with HP Systems Insight Manager as a way to extract server data and to deploy HP health/monitoring packages and firmware. The Gentoo systems eventually have to be converted to CentOS, so getting a quick assessment of what hardware is where would be great. Although I've read through a few hundred pages of HP manuals, I'm having a difficult time understanding how to get HP SIM to do what I want, though. My main problems are: I have about 40 subnets to deal with; 98% connected with private lines to facilities across the globe. I don't want to initiate an HP SIM discovery only to pull back every piece of intermediate networking hardware and equipment from all of the locations. I'd like this to focus on the servers. I have OpenNMS configured to accept traps. I don't want HP SIM to duplicate that effort. It seems like the built-in software deployment tool wants to overwrite the trapsink parameters for the systems it encounters during discovery. I have about 10 administrative username/password combinations in use across this infrastructure. Is there a more efficient way to get HP SIM to do the discovery or break discovery into manageable chunks? In terms of general workflow, do people typically install the HP Management Agents during the initial OS deployment (e.g. kickstart post script) or afterwards from HP SIM? Is HP SIM too thick/fat to be an inventory tool? I can't tell if it's meant to be used standalone or alongside other monitoring products. Since the majority of the systems I'm trying to track are those running Gentoo (in order to plan the move to CentOS), is there any way for HP SIM to extract system model information from them ( like dmidecode)? I have systems here where I may have an SSH key established, but not direct user or login access. Is there a way for me to import an SSH private/public key pair into HP SIM to reach out to the servers that can't accept standard credentials? There are a handful of sites where I have inconsistent access or have a double-NAT situation. I may be able to poke a server, but it may not be able to find its way back to the management system. Is there a workaround for this? The certificate configuration for HP SIM seems complicated. What is the preferred setup for trust between systems? I'd also appreciate any notes or recommendations to using this product. Or if there's a better way to do this, I'd like to know.
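    On the question of pulling model information from the Gentoo hosts independently of HP SIM, one interim option is a small inventory sweep over SSH using dmidecode's string keywords. A rough sketch (Python, illustrative only; the host list, user and key path are placeholders, and the remote account must be allowed to run dmidecode, which normally requires root):

        # Illustrative inventory sweep: ask each host for manufacturer, model and
        # serial number via dmidecode over SSH. Hostnames and key path are made up.
        import subprocess

        HOSTS = ["web01.example.com", "db01.example.com"]   # placeholder host list
        FIELDS = ["system-manufacturer", "system-product-name", "system-serial-number"]

        for host in HOSTS:
            values = []
            for field in FIELDS:
                result = subprocess.run(
                    ["ssh", "-i", "/root/.ssh/inventory_key", f"root@{host}",
                     "dmidecode", "-s", field],
                    capture_output=True, text=True, timeout=30,
                )
                values.append(result.stdout.strip() or "unknown")
            print(host, "|", " | ".join(values))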


  • Cannot ping host: stale ARP cache?

    - by gkchicago
    I am having a strange issue with a Debian (Lenny/Linux 2.6.26-2-amd64) that has been driving me nuts. On some machines within my network I can ping the host in question just fine, other times I have to manually hard-code the ARP ethernet address for the IP in order to establish connectivity. I've finally worked it down to somehow involving ARP. I just found how to fix it in a way that made it work but I'm looking for help explaining this issue and also I don't trust my fix to be permanent.. My thought process has been the following but I just can't make any sense out of it: Could it be the card? (Intel 82555 rev 4) Could it be because there are two network cards? (Default route is eth0) Could it be because of the network aliases? Lenny? AMD x86_64? Argh.. Thank you for any insight you might have // Ping doesn't go thru [gordon@ubuntu ~]$ ping 192.168.135.101 PING 192.168.135.101 (192.168.135.101) 56(84) bytes of data. --- 192.168.135.101 ping statistics --- 4 packets transmitted, 0 received, 100% packet loss, time 3014ms // Here's the ARP Table, sometimes the .151 address is good, sometimes it // also matches the Gateways MAC like .101 is doing right here. [gordon@ubuntu ~]$ cat /proc/net/arp IP address HW type Flags HW address Mask Device 192.168.135.15 0x1 0x2 00:0B:DB:2B:24:89 * eth0 192.168.135.151 0x1 0x2 00:0B:6A:3A:30:A6 * eth0 192.168.135.1 0x1 0x2 00:1A:A2:2D:2A:04 * eth0 192.168.135.101 0x1 0x2 00:1A:A2:2D:2A:04 * eth0 // Drop the bad arp table listing and set it manually based on /sbin/ifconfig [gordon@ubuntu ~]$ sudo arp -d 192.168.135.101 [gordon@ubuntu ~]$ sudo arp -s 192.168.135.101 00:0B:6A:3A:30:A6 // Ping starts going thru..?!? [gordon@ubuntu ~]$ ping 192.168.135.101 PING 192.168.135.101 (192.168.135.101) 56(84) bytes of data. 64 bytes from 192.168.135.101: icmp_seq=1 ttl=64 time=15.8 ms 64 bytes from 192.168.135.101: icmp_seq=2 ttl=64 time=15.9 ms 64 bytes from 192.168.135.101: icmp_seq=3 ttl=64 time=16.0 ms 64 bytes from 192.168.135.101: icmp_seq=4 ttl=64 time=15.9 ms --- 192.168.135.101 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3012ms rtt min/avg/max/mdev = 15.836/15.943/16.064/0.121 ms The following is my network config on this. 
gordon@db01:~$ /sbin/ifconfig eth0 Link encap:Ethernet HWaddr 00:0b:6a:3a:30:a6 inet addr:192.168.135.151 Bcast:192.168.135.255 Mask:255.255.255.0 inet6 addr: fe80::20b:6aff:fe3a:30a6/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:15476725 errors:0 dropped:0 overruns:0 frame:0 TX packets:10030036 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:18565307359 (17.2 GiB) TX bytes:3412098075 (3.1 GiB) eth0:0 Link encap:Ethernet HWaddr 00:0b:6a:3a:30:a6 inet addr:192.168.135.150 Bcast:192.168.135.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 eth0:1 Link encap:Ethernet HWaddr 00:0b:6a:3a:30:a6 inet addr:192.168.135.101 Bcast:192.168.135.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 eth1 Link encap:Ethernet HWaddr 00:e0:81:2a:6e:d0 inet addr:10.10.62.1 Bcast:10.10.62.255 Mask:255.255.255.0 inet6 addr: fe80::2e0:81ff:fe2a:6ed0/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:10233315 errors:0 dropped:0 overruns:0 frame:0 TX packets:19400286 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1112500658 (1.0 GiB) TX bytes:27952809020 (26.0 GiB) Interrupt:24 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:387 errors:0 dropped:0 overruns:0 frame:0 TX packets:387 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:41314 (40.3 KiB) TX bytes:41314 (40.3 KiB) gordon@db01:~$ sudo mii-tool -v eth0 eth0: negotiated 100baseTx-FD, link ok product info: Intel 82555 rev 4 basic mode: autonegotiation enabled basic status: autonegotiation complete, link ok capabilities: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD advertising: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD flow-control link partner: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD gordon@db01:~$ sudo route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface localnet * 255.255.255.0 U 0 0 0 eth0 10.10.62.0 * 255.255.255.0 U 0 0 0 eth1 default 192.168.135.1 0.0.0.0 UG 0 0 0 eth0
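    Since the symptom above is an entry in /proc/net/arp picking up the gateway's MAC address, a small check can flag that condition as soon as it recurs. A sketch (Python; the gateway IP is taken from the question and the parsing assumes the /proc/net/arp layout shown above):

        # Flag ARP-cache entries that share a MAC address with the default gateway,
        # i.e. the situation above where 192.168.135.101 resolves to the gateway's
        # hardware address. Assumes the /proc/net/arp column layout shown above.
        GATEWAY_IP = "192.168.135.1"

        entries = {}
        with open("/proc/net/arp") as f:
            next(f)                                   # skip the header line
            for line in f:
                fields = line.split()
                ip_addr, hw_addr = fields[0], fields[3]
                entries[ip_addr] = hw_addr

        gateway_mac = entries.get(GATEWAY_IP)
        for ip_addr, hw_addr in entries.items():
            if ip_addr != GATEWAY_IP and hw_addr == gateway_mac:
                print(f"stale/poisoned entry: {ip_addr} has the gateway's MAC {hw_addr}")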


  • WSUS 3.0 SP2 installation fails at "configuring database" step.

    - by flashkube
    Attempting to install WSUS 3.0 SP2 on a Windows Server 2003 Enterprise system. I'm asking the setup to create a new database on one of our existing SQL Server 2005 systems. When the setup gets to the "configuring database" step it stops and throws "There is a problem with this Windows Installer package. A program run as part of the setup did not finish as expected. Contact your support personnel or package vendor." The two logs it suggests I look at are below. I'm not seeing any errors that mean anything to me. Any direction you can give will be greatly appreciated. WSUSSetup.log: 2009-12-04 15:26:21 Success MWUSSetup Validating pre-requisites... 2009-12-04 15:26:22 Error MWUSSetup Failed to determine if an higher version of WSUS is installed. Assuming it is not... (Error 0x80070002: The system cannot find the file specified.) 2009-12-04 15:26:28 Success MWUSSetup No SQL instances found 2009-12-04 15:26:42 Success MWUSSetup Initializing installation details 2009-12-04 15:26:42 Success MWUSSetup Installing ASP.Net 2009-12-04 15:27:24 Success MWUSSetup ASP.Net is installed successfully 2009-12-04 15:27:24 Success MWUSSetup Installing WSUS... 2009-12-04 15:27:28 Success CustomActions.Dll Unable to get INSTALL_LANGUAGE property, calculating it... 2009-12-04 15:27:28 Success CustomActions.Dll Successfully set propery of WSUS admin groups' full names 2009-12-04 15:27:29 Success CustomActions.Dll .Net framework path: C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727 2009-12-04 15:27:33 Success CustomActions.Dll Creating user group: WSUS Reporters with Description: WSUS Administrators who can only run reports on the Windows Server Update Services server. 2009-12-04 15:27:33 Success CustomActions.Dll Creating WSUS Reporters user group 2009-12-04 15:27:33 Success CustomActions.Dll WSUS Reporters user group already exists 2009-12-04 15:27:33 Success CustomActions.Dll Successfully created WSUS Reporters user group 2009-12-04 15:27:33 Success CustomActions.Dll Creating user group: WSUS Administrators with Description: WSUS Administrators can administer the Windows Server Update Services server. 2009-12-04 15:27:33 Success CustomActions.Dll Creating WSUS Administrators user group 2009-12-04 15:27:33 Success CustomActions.Dll WSUS Administrators user group already exists 2009-12-04 15:27:33 Success CustomActions.Dll Successfully created WSUS Administrators user group 2009-12-04 15:27:33 Success CustomActions.Dll Successfully created WSUS user groups 2009-12-04 15:27:33 Success CustomActions.Dll Succesfully set binary SID property 2009-12-04 15:27:33 Success CustomActions.Dll Succesfully set binary SID property 2009-12-04 15:27:33 Success CustomActions.Dll Successfully set binary SID properties 2009-12-04 15:28:50 Error MWUSSetup InstallWsus: MWUS Installation Failed (Error 0x80070643: Fatal error during installation.) 2009-12-04 15:28:50 Error MWUSSetup CInstallDriver::PerformSetup: WSUS installation failed (Error 0x80070643: Fatal error during installation.) 2009-12-04 15:28:50 Error MWUSSetup CSetupDriver::LaunchSetup: Setup failed (Error 0x80070643: Fatal error during installation.) From the end of WSUSSetupmsi_091204_1527.log MSI (s) (58:7C) [15:28:49:860]: Note: 1: 1708 MSI (s) (58:7C) [15:28:49:860]: Product: Windows Server Update Services 3.0 SP2 -- Installation failed. MSI (s) (58:7C) [15:28:49:875]: Cleaning up uninstalled install packages, if any exist MSI (s) (58:7C) [15:28:49:875]: MainEngineThread is returning 1603 MSI (s) (58:78) [15:28:49:985]: Destroying RemoteAPI object. 
MSI (s) (58:90) [15:28:49:985]: Custom Action Manager thread ending. === Logging stopped: 12/4/2009 15:28:49 === MSI (c) (30:54) [15:28:50:016]: Decrementing counter to disable shutdown. If counter = 0, shutdown will be denied. Counter after decrement: -1 MSI (c) (30:54) [15:28:50:016]: MainEngineThread is returning 1603 === Verbose logging stopped: 12/4/2009 15:28:50 ===


  • BSOD & system failure after trying to install new RAM

    - by Praveen Kumar
    I have updated the question with sections, so that people won't find it difficult to read. Basic System Information Let me give a basic introduction on my system. I have a system of following configuration: Processor: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz 3.40GHz RAM: Corsair Vengeance - 4GB Single Module DDR3 Memory Kit (CMZ4GX3M1A1600C9) x 2 OS: Windows 7 Ultimate, SP1 Build 7601 HDD: 1 TB Seagate 7200 RPM The Problem It was working fine for about an year. Yesterday I planned to increase my RAM to 16 GB by putting another set of two Corsair Vengeance - 4GB Single Module DDR3 Memory Kit (CMZ4GX3M1A1600C9). I got it from an authorized reseller and also, the RAM was fitted by a service engineer only. After the RAM was fit (all the four), the system failed to start, with an error code of 0x000000f4. The complete information of it is: Problem signature: Problem Event Name: BlueScreen OS Version: 6.1.7601.2.1.0.256.1 Locale ID: 16393 Additional information about the problem: BCCode: f4 BCP1: 0000000000000003 BCP2: FFFFFA8008A39060 BCP3: FFFFFA8008A39340 BCP4: FFFFF800037C8510 OS Version: 6_1_7601 Service Pack: 1_0 Product: 256_1 Files that help describe the problem: C:\Windows\Minidump\093012-13041-01.dmp C:\Users\Praveen Kumar\AppData\Local\Temp\WER-30716-0.sysdata.xml Read our privacy statement online: http://go.microsoft.com/fwlink/?linkid=104288&clcid=0x0409 If the online privacy statement is not available, please read our privacy statement offline: C:\Windows\system32\en-US\erofflps.txt Another Problem We first thought that it was the RAM, which caused the issue. So I returned the RAMs and now my computer configuration is exactly how it was the previous day. But, following the removal of the RAM, I also had several crashes after that. One suspicious thing was with an error code c0000134: STOP: c0000135 The program can’t start because %hs is missing from your computer . Try resintalling the program to fix this problem. After reading contents from this, this and this, which were never my case, they didn't help me. But I didn't receive any more STOP c0000134 messages. But this 0x000000f4 keeps on coming. I am writing from the same system and it allows me to work for say, half an hour max. Then I hear a device disconnect sound, the one you hear in Windows 7, when a USB Mass Storage Device is plugged out. Immediately following that, my screen goes blank and I get 0x000000f4 blue screen. Okay, now I am really concerned about my Hard Disk data, but I have no clue if there is a problem with the HDD. My Question What all files do I need to submit for your reference? Can this issue be fixed? I am getting more time if I remove my RAM, clean it and then put it back. Weird! Hope I have given the necessary information to help you guys. Thanks in advance. Minidumps I have uploaded all the Minidump DMP files from C:\Windows\Minidump folder here: http://www.praveen-kumar.com/Minidumps.zip Let me know if you face any issues in accessing it. Will be able to share elsewhere. Updates 30-Sep-2012 10:15 AM IST: When I keep the system cover opened, pressed the HDD Cable well, it is allowing me to be on for about half an hour, I guess? Also, I feel that the CPU fan speed is kind of slow. It rotates at around 900 RPM, but the CPU Temperature is not more than 70° C. 30-Sep-2012 10:30 AM IST: My Modem (Beetel 220BX ADSL2+ Router) failed. I have no idea how it is related to this issue, but I thought that I need to document this too. I really have a bad day here. 
30-Sep-2012 11:00 AM IST: System still running fine, with the cabinet cover open, now for about an hour. 30-Sep-2012 12:00 PM IST: I shut down the system and closed the cabinet. Started the system, and it hung after giving the password. After a few minutes, got the same 0x000000f4 error. So, while it is in the upright position, fixed the Hard Disk cable and now it is booting fine. Waiting for more observations and answers.


  • What was scientifically shown to support productivity when organizing/accessing files and folders?

    - by Tom Wijsman
    I have gathered terabytes of data but it has became a habit to store files and folders to the same folder, that folder could be kind of seen as a Inbox where most files (non-installations) enter my system. This way I end up with a big collections of files that are hard to organize properly, I mostly end up making folders that match their file type but then I still have several gigabytes of data per folder which doesn't make it efficient such that I can productively use the folder. I'd rather do a few clicks than having to search through the files, whether that's by some software product or by looking through the folder. Often the file names themselves are not proper so it would be easier to recognize them if there were few in a folder, rather than thousands of them. Scaling in the structure of directory trees in a computer cluster summarizes this problem as following: The processes of storing and retrieving information are rapidly gaining importance in science as well as society as a whole [1, 2, 3, 4]. A considerable effort is being undertaken, firstly to characterize and describe how publicly available information, for example in the world wide web, is actually organized, and secondly, to design efficient methods to access this information. [1] R. M. Shiffrin and K. B¨orner, Proc. Natl. Acad. Sci. USA 101, 5183 (2004). [2] S. Lawrence, C.L. Giles, Nature 400, 107–109 (1999). [3] R.F.I. Cancho and R.V. Sol, Proc. R. Soc. London, Ser. B 268, 2261 (2001). [4] M. Sigman and G. A. Cecchi, Proc. Natl. Acad. Sci. USA 99, 1742 (2002). It goes further on explaining how the data is usually organized by taking general looks at it, but by looking at the abstract and conclusion it doesn't come with a conclusion or approach which results in a productive organization of a directory hierarchy. So, in essence, this is a problem for which I haven't found a solution yet; and I would love to see a scientific solution to this problem. Upon searching further, I don't seem to find anything useful or free papers that approach this problem so it might be that I'm looking in the wrong place. I've also noted that there are different ways to term this problem, which leads out to different results of papers. Perhaps a paper is out there, but I'm not just using the same terms as that paper uses? They often use more scientific terms. I've once heard a story about an advocate with a laptop which has simply outperformed an advocate with had tons of papers, which shows how proper organization leads to productivity; but that story didn't share details on how the advocate used the laptop or how he had organized his data. But in any case, it was way more useful than how most of us organize our data these days... Advice me how I should organize my data, I'm not looking for suggestions here. I would love to see statistics or scientific measurement approaches that help me confirm that it does help me reach my goal.


  • IP address reuse on macvlan devices

    - by Alex Bubnoff
    I'm trying to create easy to use and possibly simple testing environment for some product and got some strange behaviour of macvlan's. What I'm trying to achieve: make a toolset for one-line start/stop of lxc containers(via docker) bound to external ip(I have enough of it on host machine). So, I'm doing something like this: docker run -d -name=container_name container_image pipework eth1 container_name ip/prefix_len@gateway and pipework here does this: GUEST_IFNAME=ph$NSPID$eth1 ip link add link eth1 dev $GUEST_IFNAME type macvlan mode bridge ip link set eth1 up ip link set $GUEST_IFNAME netns $NSPID ip netns exec $NSPID ip link set $GUEST_IFNAME name eth1 ip netns exec $NSPID ip addr add $IPADDR dev eth1 ip netns exec $NSPID ip route delete default ip netns exec $NSPID ip link set eth1 up ip netns exec $NSPID ip route replace default via $GATEWAY ip netns exec $NSPID arping -c 1 -A -I eth1 $IPADDR And it works for first time per IP. But for second time and later packets for containers IP isn't getting into container, while all configuration seem fine. So it looks like this: External machine ? ping 212.76.131.212 ....silence.... Host machine root@ubuntu:~# ip link show eth1 2: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:15:17:c9:e1:c9 brd ff:ff:ff:ff:ff:ff root@ubuntu:~# ip addr show eth1 2: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:15:17:c9:e1:c9 brd ff:ff:ff:ff:ff:ff root@ubuntu:~# tcpdump -v -i eth1 icmp tcpdump: WARNING: eth1: no IPv4 address assigned tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes 00:00:46.542042 IP (tos 0x0, ttl 60, id 9623, offset 0, flags [DF], proto ICMP (1), length 84) 5.134.221.98 212.76.131.212: ICMP echo request, id 6718, seq 2345, length 64 00:00:47.549969 IP (tos 0x0, ttl 60, id 9624, offset 0, flags [DF], proto ICMP (1), length 84) 5.134.221.98 212.76.131.212: ICMP echo request, id 6718, seq 2346, length 64 00:00:48.558143 IP (tos 0x0, ttl 60, id 9625, offset 0, flags [DF], proto ICMP (1), length 84) 5.134.221.98 212.76.131.212: ICMP echo request, id 6718, seq 2347, length 64 00:00:49.566319 IP (tos 0x0, ttl 60, id 9626, offset 0, flags [DF], proto ICMP (1), length 84) 5.134.221.98 212.76.131.212: ICMP echo request, id 6718, seq 2348, length 64 00:00:50.573999 IP (tos 0x0, ttl 60, id 9627, offset 0, flags [DF], proto ICMP (1), length 84) 5.134.221.98 212.76.131.212: ICMP echo request, id 6718, seq 2349, length 64 ^C 5 packets captured 5 packets received by filter 0 packets dropped by kernel 1 packet dropped by interface Host machine, netns of container root@ubuntu:~# ip netns exec 32053 ip link show eth1 48: eth1@if2: mtu 1500 qdisc noqueue state UNKNOWN link/ether b2:12:f7:cc:a1:9d brd ff:ff:ff:ff:ff:ff root@ubuntu:~# ip netns exec 32053 ip addr show eth1 48: eth1@if2: mtu 1500 qdisc noqueue state UNKNOWN link/ether b2:12:f7:cc:a1:9d brd ff:ff:ff:ff:ff:ff inet 212.76.131.212/29 scope global eth1 inet6 fe80::b012:f7ff:fecc:a19d/64 scope link valid_lft forever preferred_lft forever root@ubuntu:~# ip netns exec 32053 tcpdump -v -i eth1 icmp tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes ....silence.... ^C 0 packets captured 0 packets received by filter 0 packets dropped by kernel So, can anyone say, what can it be? Can this be caused by not a bug in macvlan implementation? Is there any tools I can use to debug that configuration?


  • Slow disk transfer rate

    - by Nooklez
    I have problem with slow disk transfer rate. It's static files server for our website. I was making backup of data and noticed that tar is very slow. So I did hdparm -t and... hdparm -t /dev/sda3 /dev/sda3: Timing buffered disk reads: 6 MB in 4.70 seconds = 1.28 MB/sec It's low traffic hour now on our site, so huge I/O traffic is not a reason (iotop show less than 1 MB/s). It's RAID10 setup (2x2 SATA drives). Unit UnitType Status %RCmpl %V/I/M Stripe Size(GB) Cache AVrfy ------------------------------------------------------------------------------ u0 RAID-10 OK - - 64K 1396.96 W ON VPort Status Unit Size Type Phy Encl-Slot Model ------------------------------------------------------------------------------ p0 OK u0 698.63 GB SATA 0 - WDC WD7500AADS-00M2 p1 OK u0 698.63 GB SATA 1 - WDC WD7500AADS-00M2 p2 OK u0 698.63 GB SATA 2 - WDC WD7500AADS-00M2 p3 OK u0 698.63 GB SATA 3 - WDC WD7500AADS-00M2 We have recently changed almost all components of server (excluding 3ware controller + disks). And I think problems started since then. Can it be configuration problem or hardware? EDIT: I found something like that in dmesg [166843.625843] irq 16: nobody cared (try booting with the "irqpoll" option) [166843.625846] Pid: 0, comm: swapper Not tainted 3.1.5-gentoo #3 [166843.625847] Call Trace: [166843.625848] <IRQ> [<ffffffff810859d5>] __report_bad_irq+0x35/0xc1 [166843.625856] [<ffffffff81085cec>] note_interrupt+0x165/0x1e1 [166843.625859] [<ffffffff8108445f>] handle_irq_event_percpu+0x16f/0x187 [166843.625861] [<ffffffff810844a9>] handle_irq_event+0x32/0x51 [166843.625863] [<ffffffff8108640b>] handle_fasteoi_irq+0x75/0x99 [166843.625866] [<ffffffff810039d7>] handle_irq+0x83/0x8b [166843.625868] [<ffffffff810036ad>] do_IRQ+0x48/0xa0 [166843.625871] [<ffffffff8155082b>] common_interrupt+0x6b/0x6b [166843.625872] <EOI> [<ffffffff812981e8>] ? acpi_safe_halt+0x22/0x35 [166843.625877] [<ffffffff812981e2>] ? acpi_safe_halt+0x1c/0x35 [166843.625879] [<ffffffff81298216>] acpi_idle_do_entry+0x1b/0x2b [166843.625881] [<ffffffff81298276>] acpi_idle_enter_c1+0x50/0x99 [166843.625884] [<ffffffff813b792a>] cpuidle_idle_call+0xed/0x171 [166843.625886] [<ffffffff81001257>] cpu_idle+0x55/0x81 [166843.625888] [<ffffffff81532a69>] rest_init+0x6d/0x6f [166843.625891] [<ffffffff81aa1aca>] start_kernel+0x329/0x334 [166843.625893] [<ffffffff81aa12a6>] x86_64_start_reservations+0xb6/0xba [166843.625894] [<ffffffff81aa139c>] x86_64_start_kernel+0xf2/0xf9 [166843.625896] handlers: [166843.625898] [<ffffffff812dc8de>] twl_interrupt [166843.625900] Disabling IRQ #16 It's related to problem? EDIT #2: Based on feedback in comments, here is more informations. cat /proc/interrupts 16: 390813 0 0 0 IO-APIC-fasteoi 3w-sas Controller model: [ 1.095350] 3ware Storage Controller device driver for Linux v1.26.02.003. [ 1.095467] 3ware 9000 Storage Controller device driver for Linux v2.26.02.014. [ 1.095641] LSI 3ware SAS/SATA-RAID Controller device driver for Linux v3.26.02.000. [ 1.095787] 3w-sas 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 1.095881] 3w-sas 0000:01:00.0: setting latency timer to 64 [ 1.910801] 3w-sas: scsi0: Found an LSI 3ware 9750-4i Controller at 0xfe560000, IRQ: 16. [ 2.216537] 3w-sas: scsi0: Firmware FH9X 5.08.00.008, BIOS BE9X 5.07.00.011, Phys: 8. [ 2.216836] scsi 0:0:0:0: Direct-Access LSI 9750-4i DISK 5.08 PQ: 0 ANSI: 5 And motherboard: description: Motherboard product: P8H67-M vendor: ASUSTeK Computer INC.

    Read the article

  • hostapd running on Ubuntu Server 13.04 only allows single station to connect when using wpa

    - by user450688
    Problem
    Only a single station can connect to hostapd at a time. Any single station can connect (W8, OSX, iOS, Nexus), but when two or more hosts are connected at the same time the first client loses its connectivity. However, there are no connectivity issues when WPA is not used.
    Setup
    Linux (Ubuntu server 13.04) wireless router, with separate networks for wired WAN, wired LAN, and wireless LAN.
    iptables-save output:
    *nat
    :PREROUTING ACCEPT [0:0]
    :INPUT ACCEPT [0:0]
    :OUTPUT ACCEPT [0:0]
    :POSTROUTING ACCEPT [0:0]
    -A POSTROUTING -s 10.0.0.0/24 -o p4p1 -j MASQUERADE
    -A POSTROUTING -s 10.0.1.0/24 -o p4p1 -j MASQUERADE
    COMMIT
    *mangle
    :PREROUTING ACCEPT [13:916]
    :INPUT ACCEPT [9:708]
    :FORWARD ACCEPT [4:208]
    :OUTPUT ACCEPT [9:3492]
    :POSTROUTING ACCEPT [13:3700]
    COMMIT
    *filter
    :INPUT DROP [0:0]
    :FORWARD DROP [0:0]
    :OUTPUT ACCEPT [9:3492]
    -A INPUT -i p4p1 -m state --state RELATED,ESTABLISHED -j ACCEPT
    -A INPUT -i p4p1 -p tcp -m tcp --dport 22 -m state --state NEW -j ACCEPT
    -A INPUT -i eth0 -j ACCEPT
    -A INPUT -i wlan0 -j ACCEPT
    -A INPUT -i lo -j ACCEPT
    -A FORWARD -i p4p1 -m state --state RELATED,ESTABLISHED -j ACCEPT
    -A FORWARD -i eth0 -j ACCEPT
    -A FORWARD -i wlan0 -j ACCEPT
    -A FORWARD -i lo -j ACCEPT
    COMMIT
    /etc/hostapd/hostapd.conf:
    #Wireless Interface
    interface=wlan0
    driver=nl80211
    ssid=<removed>
    hw_mode=g
    channel=6
    max_num_sta=15
    auth_algs=3
    ieee80211n=1
    wmm_enabled=1
    wme_enabled=1
    #Configure Hardware Capabilities of Interface
    ht_capab=[HT40+][SMPS-STATIC][GF][SHORT-GI-20][SHORT-GI-40][RX-STBC12]
    #Accept all MAC address
    macaddr_acl=0
    #Shared Key Authentication
    wpa=1
    wpa_passphrase=<removed>
    wpa_key_mgmt=WPA-PSK
    wpa_pairwise=CCMP
    rsn_pairwise=CCMP
    ###IPad Connectivity Repair
    ieee8021x=0
    eap_server=0
    Wireless Card
    #lshw output
    product: RT2790 Wireless 802.11n 1T/2R PCIe
    vendor: Ralink corp.
physical id: 0 bus info: pci@0000:03:00.0 logical name: mon.wlan0 version: 00 serial: <removed> width: 32 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list logical wireless ethernet physical configuration: broadcast=yes driver=rt2800pci driverversion=3.8.0-25-generic firmware=0.34 ip=10.0.1.254 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn #iw list output Band 1: Capabilities: 0x272 HT20/HT40 Static SM Power Save RX Greenfield RX HT20 SGI RX HT40 SGI RX STBC 2-streams Max AMSDU length: 3839 bytes No DSSS/CCK HT40 Maximum RX AMPDU length 65535 bytes (exponent: 0x003) Minimum RX AMPDU time spacing: 2 usec (0x04) HT RX MCS rate indexes supported: 0-15, 32 TX unequal modulation not supported HT TX Max spatial streams: 1 HT TX MCS rate indexes supported may differ Frequencies: * 2412 MHz [1] (27.0 dBm) * 2417 MHz [2] (27.0 dBm) * 2422 MHz [3] (27.0 dBm) * 2427 MHz [4] (27.0 dBm) * 2432 MHz [5] (27.0 dBm) * 2437 MHz [6] (27.0 dBm) * 2442 MHz [7] (27.0 dBm) * 2447 MHz [8] (27.0 dBm) * 2452 MHz [9] (27.0 dBm) * 2457 MHz [10] (27.0 dBm) * 2462 MHz [11] (27.0 dBm) * 2467 MHz [12] (disabled) * 2472 MHz [13] (disabled) * 2484 MHz [14] (disabled) Bitrates (non-HT): * 1.0 Mbps * 2.0 Mbps (short preamble supported) * 5.5 Mbps (short preamble supported) * 11.0 Mbps (short preamble supported) * 6.0 Mbps * 9.0 Mbps * 12.0 Mbps * 18.0 Mbps * 24.0 Mbps * 36.0 Mbps * 48.0 Mbps * 54.0 Mbps max # scan SSIDs: 4 max scan IEs length: 2257 bytes Coverage class: 0 (up to 0m) Supported Ciphers: * WEP40 (00-0f-ac:1) * WEP104 (00-0f-ac:5) * TKIP (00-0f-ac:2) * CCMP (00-0f-ac:4) Available Antennas: TX 0 RX 0 Supported interface modes: * IBSS * managed * AP * AP/VLAN * WDS * monitor * mesh point software interface modes (can always be added): * AP/VLAN * monitor valid interface combinations: * #{ AP } <= 8, total <= 8, #channels <= 1 Supported commands: * new_interface * set_interface * new_key * new_beacon * new_station * new_mpath * set_mesh_params * set_bss * authenticate * associate * deauthenticate * disassociate * join_ibss * join_mesh * set_tx_bitrate_mask * set_tx_bitrate_mask * action * frame_wait_cancel * set_wiphy_netns * set_channel * set_wds_peer * Unknown command (84) * Unknown command (87) * Unknown command (85) * Unknown command (89) * Unknown command (92) * testmode * connect * disconnect Supported TX frame types: * IBSS: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * managed: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * AP: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * AP/VLAN: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * mesh point: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * P2P-client: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * P2P-GO: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * Unknown mode (10): 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 Supported RX frame types: * IBSS: 0x40 0xb0 0xc0 0xd0 * managed: 0x40 0xd0 * AP: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0 * AP/VLAN: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0 * mesh point: 0xb0 0xc0 0xd0 * P2P-client: 0x40 0xd0 * P2P-GO: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0 * Unknown mode (10): 0x40 0xd0 Device supports RSN-IBSS. 
HT Capability overrides: * MCS: ff ff ff ff ff ff ff ff ff ff * maximum A-MSDU length * supported channel width * short GI for 40 MHz * max A-MPDU length exponent * min MPDU start spacing Device supports TX status socket option. Device supports HT-IBSS.
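    A quick way to gather more evidence is to run hostapd in the foreground with debug output and capture the key handshake while the second station associates. This is only a diagnostic sketch, assuming the config path used above:
    # Stop the background hostapd first, then run it by hand with verbose debugging
    hostapd -dd /etc/hostapd/hostapd.conf 2>&1 | tee /tmp/hostapd-debug.log
    # In a second terminal, watch EAPOL (4-way handshake) frames on the AP interface
    tcpdump -i wlan0 -e -s 256 'ether proto 0x888e'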

    Read the article

  • SQL2008R2 install issues on windows 7 - unable to install setup support files?

    - by Liam
    I am trying to install the above, but I am getting the following errors when it attempts to install the setup support files.
    This is the first error that occurs during installation of the setup support files:
    TITLE: Microsoft SQL Server 2008 R2 Setup
    ------------------------------
    The following error has occurred:
    The installer has encountered an unexpected error. The error code is 2337. Could not close file: Microsoft.SqlServer.GridControl.dll GetLastError: 0.
    Click 'Retry' to retry the failed action, or click 'Cancel' to cancel this action and continue setup.
    For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=10.50.1600.1&EvtType=0xDF039760%25401201%25401
    This is the second error, which occurs after clicking continue in the installer once the first error is generated:
    TITLE: Microsoft SQL Server 2008 R2 Setup
    ------------------------------
    The following error has occurred:
    SQL Server Setup has encountered an error when running a Windows Installer file. Windows Installer error message: The Windows Installer Service could not be accessed. This can occur if the Windows Installer is not correctly installed. Contact your support personnel for assistance.
    Windows Installer file: C:\Users\watto_uk\Desktop\In-Digital\Software\Microsoft\SQL Server 2008 R2\1033_ENU_LP\x64\setup\sqlsupport_msi\SqlSupport.msi
    Windows Installer log file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20110713_205508\SqlSupport_Cpu64_1_ComponentUpdate.log
    Click 'Retry' to retry the failed action, or click 'Cancel' to cancel this action and continue setup.
    For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=10.50.1600.1&EvtType=0xDC80C325
    These errors are generated from an ISO package downloaded from Microsoft. I have also tried using the Web Platform Installer to install the Express version instead, but the SQL Server installation fails with that as well. The Management Studio installs fine, but not the server. I have checked to make sure that the Windows Installer service is started, and it is. I can't seem to find an answer for this anywhere, as all previously reported issues appear to be related to XP. I did have the Express edition installed on the machine previously but uninstalled it to upgrade to the full version; I wish I hadn't now. Can anyone kindly offer any advice or point me in the right direction to stop me going insane with this? Any advice will be appreciated.
    Update =======================
    After digging a bit deeper I've located details of the error in the setup log file (I can also upload the log file if required):
    MSI (s) (E8:28) [23:35:18:705]: Assembly Error:The module '%1' was expected to contain an assembly manifest.
    MSI (s) (E8:28) [23:35:18:705]: Note: 1: 1935 2: 3: 0x80131018 4: IStream 5: Commit 6:
    MSI (s) (E8:28) [23:35:18:705]: Note: 1: 2337 2: 0 3: Microsoft.SqlServer.GridControl.dll
    MSI (s) (E8:28) [23:35:22:869]: Product: Microsoft SQL Server 2008 R2 Setup (English) -- Error 2337. The installer has encountered an unexpected error. The error code is 2337. Could not close file: Microsoft.SqlServer.GridControl.dll GetLastError: 0.
    MSI (s) (E8:28) [23:35:22:916]: Internal Exception during install operation: 0xc0000005 at 0x000007FEE908A23E.
    MSI (s) (E8:28) [23:35:22:916]: WER report disabled for silent install.
    MSI (s) (E8:28) [23:35:22:932]: Internal MSI error. Installer terminated prematurely.
    Error 2337. The installer has encountered an unexpected error. The error code is 2337. Could not close file: Microsoft.SqlServer.GridControl.dll GetLastError: 0.
    MSI (s) (E8:28) [23:35:22:932]: MainEngineThread is returning 1603
    MSI (s) (E8:58) [23:35:22:932]: RESTART MANAGER: Session closed.
    Installer stopped prematurely.
    MSI (c) (0C:14) [23:35:22:947]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1
    MSI (c) (0C:14) [23:35:22:947]: MainEngineThread is returning 1601
    === Verbose logging stopped: 13/07/2011 23:35:22 ===
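    Error 2337 against a file inside the setup media is frequently reported alongside corrupt or badly extracted installation media, so a cheap first check is to verify the downloaded ISO before anything else. A hedged sketch: the ISO path below is an example, and the expected hash has to come from the Microsoft download page.
    REM Compute the ISO's SHA-1 with certutil (built into Windows 7 / Server 2008 R2)
    REM and compare it against the value published for the download.
    certutil -hashfile "C:\Users\watto_uk\Desktop\SQLSvr2008R2.iso" SHA1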

    Read the article

  • How to Eliminate Tape Backup and Off-site Storage Service?

    - by Daniel Lucas
    PLEASE READ UPDATE AT THE BOTTOM. THANKS! ;)
    Environment Info (all Windows):
    2 sites
    30 servers in site #1 (3TB of backup data)
    5 servers in site #2 (1TB of backup data)
    MPLS backbone tunnel connecting site #1 and site #2
    Current Backup Process:
    Online Backup (disk-to-disk): Site #1 has a server running Symantec Backup Exec 12.5 with four 1TB USB 2.0 disks. BE jobs for full backups run nightly on all servers in site #1 to these disks. Site #2 backs up to a central file server there using software they already had when we purchased them. A BE job pulls that data nightly to site #1 and stores it on said disks.
    Off-site Backup (tape): Connected to our backup server is a tape drive. BE backs up the external disks to tape once a week, which gets picked up by our off-site storage company. Obviously we rotate two tape libraries: one is always here and one is always there.
    Requirements:
    Eliminate the need for tape and the off-site storage service by doing disk-to-disk at each site and replicating site #1 to site #2 and vice versa.
    Software-based solution, as hardware options have been too pricey (i.e., SonicWall, Arkeia).
    Agents for Exchange, SharePoint, and SQL.
    Some Ideas So Far:
    Storage: DroboPro at each site with an initial 8TB of storage (these are expandable up to 16TB at present). I like these because they are rackmountable, allow disparate drives, and have iSCSI interfaces. They are relatively cheap too.
    Software: Symantec Backup Exec 12.5 already has all the agents and licenses we need. I'd like to keep using it unless there is a better solution, similarly priced, that does everything BE does plus deduplication and replication.
    Server: Because there is no more need for a SCSI adapter (for the tape drive), we are going to virtualize our backup server, as it is currently the only physical machine save for the SQL boxes.
    Problems:
    When replicating between sites we want as little data as possible to go across the pipe. There is no deduplication or compression in what I have laid out here so far. The files being replicated are BE's virtual tape libraries from our disk-to-disk backup. Because of this, each of those huge files will go across the wire every week, because they change every day.
    And Finally, the Question:
    Is there any software out there that does deduplication, or at least compression, to handle just our site-to-site replication? Or, looking at our setup, is there any other solution that I am missing that might be cheaper, faster, better? Thanks. Sorry so long.
    UPDATE 2: I've set a bounty on this question to get it more attention. I'm looking for software that will handle replication of data between two sites using the least amount of data possible (either compression, deduplication, or some other method). Something similar to rsync would work, but it needs to be native to Windows and not a port involving shenanigans to get up and running. Prefer a GUI-based product, and I don't mind shelling out a few bones if it works. Please, answers that meet the above criteria only. If you don't think one exists or if you think I'm being too restrictive, keep it to yourself. If after seven days there is no answer at all, so be it. Thanks again everyone.
    UPDATE 2: I really appreciate everyone coming forward with suggestions. There is no way for me to try all of these before the bounty expires. For now I'm going to let this bounty run out and whoever has the most votes will get the 100 rep points. Thanks again!
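    To illustrate the mechanism being asked for (sending only the changed blocks of a large, mostly-unchanged file rather than re-sending the whole file), this is what a block-delta transfer looks like with rsync. The asker has ruled out rsync ports on Windows, so treat this purely as a sketch of the requirement with hypothetical paths and host names, not as a recommendation:
    # --inplace updates the existing destination file block by block instead of
    # rewriting it, and -z compresses whatever does go over the wire
    rsync -avz --inplace --partial /backups/vtl/ site2-backup:/backups/vtl/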

    Read the article

  • Installation of Access Database Engine 32-bit Fails

    - by Rayzor78
    I am trying to install Access Database Engine 2007 32-bit. The splash screen comes up, you click "Next", then it fails with the error:
    Installation ended prematurely because of an error
    You click "OK" and another error window says:
    The installation of the package failed.
    The exact same situation happens when I try this with Access Database Engine 2010 32-bit. This production server is running Windows Server 2008 R2 SP1 64-bit. Before I tried installing Access Database Engine 32-bit, I first needed to install Microsoft Office 2010 Pro (Excel and Office Tools only). I tried the 32-bit version on the production server, since that is how I set it up in our Dev environment. No luck: the 32-bit version would not install. I did NOT get the error "You have 64-bit components of Office installed"; I simply received the exact same two errors listed above. Since 32-bit/64-bit did not really matter for the Office install for my project, I installed the 64-bit version of Office Pro 2010 (Excel and Office Tools only) with no problems.
    I have a requirement to have the 32-bit version of the Access Database Engine installed. 2007 or 2010, doesn't matter. I cannot use the 64-bit version of Access Database Engine 2010 because my SSIS package will not work with it; I require the 32-bit version. I've tried several steps to get it installed. I seriously think that the production server has some aversion to installing 32-bit applications. Here's what I've tried:
    Installing via command line with the "/passive" switch... no luck.
    Numerous iterations of copying the install file to the server (downloaded a fresh copy directly to the server, downloaded a fresh copy to my local machine then copied it over, copied it over zipped up) (http://social.msdn.microsoft.com/Forums/en-US/sqldataaccess/thread/efd3c1f0-07cd-45ca-a626-2dd0c7ac3e9f).
    Method 1 from this link. Could not try Method 2 because it requires a server reboot, and in my environment that requires a long change-management process.
    Verified that I am a local administrator on the server. (Evidence: I am able to install other applications, e.g. the 64-bit Office install above.)
    Verified that there are no other Office products that should be blocking the installation. The aforementioned install of Excel 2010 64-bit was the first Office product installed on the server.
    VERY ODD: To test my theory that the production server does not like 32-bit applications, I installed something lightweight. I installed 7-Zip 32-bit on the production server with no problems whatsoever.
    Here are some things that I have not tried (I will follow up once I do):
    Method 2 (as mentioned above); requires a server reboot.
    Verifying that the Dev and Production environments are 100% identical. I've done a cursory check and on the surface they appear to be the same (same OS and SP version), but I need to do a deeper dive to be 100% certain. I had no problems in my Dev environment: in Dev, I installed Office 2010 Pro 64-bit (Excel & Office Tools only) and then, via command line with the "/passive" switch, installed Access Database Engine 2010 32-bit.
    I don't know what else to try. Any suggestions or comments?
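    Since the failure gives almost no detail, one low-impact next step is to capture a verbose Windows Installer log for the failed run and read the last actions before the rollback. A hedged sketch: the registry policy value is the documented way to enable machine-wide MSI logging, and the resulting logs land in %TEMP% as MSI*.log.
    REM Turn on verbose Windows Installer logging for all installs (remove the value afterwards)
    reg add "HKLM\Software\Policies\Microsoft\Windows\Installer" /v Logging /t REG_SZ /d voicewarmup /f
    REM Re-run the failing installer (the /passive switch is the one already tried above),
    REM then inspect the newest MSI*.log in %TEMP%
    AccessDatabaseEngine.exe /passive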

    Read the article

  • Symantec Protection Suite and System Recovery 2011 Desktop Edition

    - by rihatum
    I am re-posting this, as my previous question was being treated as if I were "shopping or seeking product recommendations", even though I was NOT. BTW, they deleted my comments too, which were not offensive in nature. Anyway, I have re-phrased some parts of my question and I hope the SF admins do not modify or edit this one; I will be most grateful for that. I have a lot of respect for the people who visit this site and help others!
    Just to clarify, and to go by SF rules: I am not asking someone to design this solution for me. I am simply seeking real-world examples, experiences, technical expert opinions and suggestions, any tips or tricks people may have, or any problems they may have faced while doing something similar with these products. I am also not asking for storage capacity planning; we have done some research and I am seeking expert assurance and suggestions.
    We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000 - 4000 workstations (Windows 7, 32- and 64-bit) plus a few hundred on Windows XP 32/64-bit. I have read the implementation guide for SEP and have read the tech notes for Desktop Recovery 2011. Our team has planned to deploy this as follows:
    1 x dedicated SQL 2008 R2 for Symantec Endpoint Protection (instead of using the embedded database)
    1 x dedicated SQL 2008 R2 for Symantec Desktop Recovery 2011 (instead of using the embedded database)
    1 x dedicated W2K8 R2 box for the SEPM (Symantec Endpoint Protection Manager - the management application)
    1 x dedicated W2K8 R2 box for the Symantec Desktop Recovery 2011 management application
    Agent deployment: as per the Symantec documentation for both of the above, an agent can be pushed via the management application (provided no firewalls are blocking the required ports, etc. - we have the Windows firewall disabled already).
    Server hardware:
    Per SQL server: 16GB RAM + SAS disks + dual Xeon, RAID-10 for the SQL DB, or I can always mount a LUN from our existing Hitachi or EMC SAN.
    SEPM server: 16GB RAM + SAS disks + dual Xeon.
    System Recovery management server: 16GB RAM + SAS disks + dual Xeon.
    The above is the initial plan we have for 3000 - 4000 client workstations (Windows). Now my questions :-)
    a) If we had these users distributed amongst two sites, with an AD DC / GC in each site, how would I restrict SEPM and the Desktop Recovery management solution to only check for users in their respective site?
    b) At present all users are in one building, but we are going to move some departments to a new location (with dedicated connectivity). How would we control which SEPM / management server is responsible for which site?
    c) We have NetBackup in our environment backing up other servers. I am planning to protect these four boxes (2 x SQL, 1 x SEPM, 1 x System Recovery management server) via NetBackup, or I can use System Recovery 2011 Server Edition on all four of these boxes as well. (Licensing is not an issue, as we have the complete Symantec portfolio included in our license.)
    d) Now - saving desktop backups - what strategies have you implemented? Any best-practice recommendations for a large user base? I was thinking of either mounting a LUN from our Hitachi SAN on the Symantec Recovery server itself, or backing up to the user's hard drive locally and then copying it over to a network location. Suggestions welcome :-)
    If you have anything to add or correct, that will be really helpful before diving into the actual implementation phase. I will be most grateful for your suggestions, recommendations and corrections with the above - many thanks!

    Read the article

  • How can I make subversion reset the stored passwords/users and remember my authentication credential

    - by NicDumZ
    Hello folks! Background: I used to have everything working just fine on my fresh install:
    $ svn co https://domain:443/ test1
    Error validating server certificate for 'https://domain:443':
    - The certificate is not issued by a trusted authority. Use the fingerprint to validate the certificate manually!
    Certificate information:
    - Hostname: **REMOVED**
    - Valid: **REMOVED**
    - Issuer: **REMOVED**
    - Fingerprint: **checked with issuer and REMOVED**
    (R)eject, accept (t)emporarily or accept (p)ermanently? p
    Authentication realm: <https://domain:443> Subversion repository
    Password for 'nicdumz-machine-hostname':
    Authentication realm: <https://domain:443> Subversion repository
    Username: nicdumz
    Password for 'nicdumz':
    # proceeds to checkout correctly
    $ svn co https://domain:443/ test2
    # checks out nicely, without asking for my password.
    At some point I needed to commit stuff using a different account, so I did that:
    $ svn ci --username other.user
    Authentication realm: <https://domain:443> Subversion repository
    Password for 'other.user':
    # works fine
    But since then, every time I want to commit as 'nicdumz' (the default user; all repos have been checked out with that user), it prompts me for my password:
    $ svn ci
    Authentication realm: <https://domain:443> Subversion repository
    Password for 'nicdumz':
    Hey come on, why? :)
    The same happens if I want a fresh checkout, since read access is also protected. So I tried fixing the issue by myself. I read around that ~/.subversion/auth was storing credentials, so I moved it out of the way:
    $ cd ~/.subversion
    $ mv auth oldauth
    $ mkdir auth
    It seemed to work at first, because svn had forgotten about certificate validation:
    $ svn co https://domain:443/ test3
    Error validating server certificate for 'https://domain:443':
    - The certificate is not issued by a trusted authority. Use the fingerprint to validate the certificate manually!
    Certificate information:
    - Hostname: **REMOVED**
    - Valid: **REMOVED**
    - Issuer: **REMOVED**
    - Fingerprint: **checked with issuer and REMOVED**
    (R)eject, accept (t)emporarily or accept (p)ermanently? p
    Authentication realm: <https://domain:443> Subversion repository
    Password for 'nicdumz-machine-hostname':
    Authentication realm: <https://domain:443> Subversion repository
    Username: nicdumz
    Password for 'nicdumz':
    # proceeds to checkout correctly
    $ svn up
    Authentication realm: <https://domain:443> Subversion repository
    Password for 'nicdumz':
    What? How is this happening? If you have suggestions for investigating this behaviour further, I am very interested. If I'm correct, there is no way to do a verbose svn up or anything of the like, so I'm not sure how I should go about investigating.
    Oh, and for what it's worth:
    $ svn --version
    svn, version 1.6.6 (r40053)
    compiled Oct 26 2009, 06:19:08
    Copyright (C) 2000-2009 CollabNet.
    Subversion is open source software, see http://subversion.tigris.org/
    This product includes software developed by CollabNet (http://www.Collab.Net/).
    The following repository access (RA) modules are available:
    * ra_neon : Module for accessing a repository via WebDAV protocol using Neon.
      - handles 'http' scheme
      - handles 'https' scheme
    * ra_svn : Module for accessing a repository using the svn network protocol.
      - with Cyrus SASL authentication
      - handles 'svn' scheme
    * ra_local : Module for accessing a repository on local disk.
      - handles 'file' scheme
    * ra_serf : Module for accessing a repository via WebDAV protocol using serf.
      - handles 'http' scheme
      - handles 'https' scheme
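    For context, the cached HTTP credentials live under ~/.subversion/auth/svn.simple, one file per authentication realm, with file names that are hashes of the realm string. A small sketch (hedged: the grep is just one way to locate the relevant entry, and the file name placeholder is hypothetical) for removing only the entry that stores the unwanted username and then re-seeding the cache:
    # Which cached credential file mentions the unwanted username?
    grep -l "other.user" ~/.subversion/auth/svn.simple/*
    # Remove just that file (substitute the hash-named file the grep printed)...
    rm ~/.subversion/auth/svn.simple/<hash-from-grep>
    # ...then run any command once with the desired username so it gets cached again
    svn up --username nicdumz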

    Read the article

< Previous Page | 338 339 340 341 342 343 344 345 346 347 348 349  | Next Page >