Search Results

Search found 13749 results on 550 pages for 'reason'.

  • How to increase wifi speed for laptops

    - by sagar
    Let me explain the situation. I have a query regarding a Wi-Fi network. I have a PC and a laptop. I asked my Wi-Fi provider for a connection for my PC, so the provider set up an antenna on my building's terrace and ran a cable (I think with an RJ45 connector) from that antenna to the PC. The reason: my PC does not have a built-in Wi-Fi adapter, whereas almost all laptops do. On the terrace the Wi-Fi signal is superb, but in my flat it is weak. So whenever I use the internet on my PC it has great speed, but my laptop works at low speed. The reason: the PC is catching the Wi-Fi from the terrace, while the laptop is catching it from its own place. Now, my question is this: can we place an antenna or something like that and connect it to the laptop for better Wi-Fi speed? (I am not a technical person; please add a comment if you downvote, or if my problem needs more explanation.) Thanks in advance for sharing your knowledge. Sagar

  • VLAN with trunk to avoid broadcast storm in a network with redundant paths

    - by liv2hak
    I have six Juniper EX 2200 switches connected to each other as shown in the network topology, and two PCs:

    PC1 - used for configuring the six switches via minicom
    PC2 - used to monitor traffic between the switches via the ports marked with arrows in the diagram

    Step 1: I create a new VLAN on switch 3 (SW3) that includes port 12 and port 22. I also assign an L3 interface to the VLAN (vlan_2) with IP address 192.168.1.7. Now I plug port 0 of SW3 into PC2 and try pinging 192.168.1.7 from PC2 (192.168.1.10). I want to know what will happen. My postulation is that I will not be able to ping SW3 from PC2, because SW3's ports 12 and 22 are part of vlan_2, vlan_2 logically breaks up broadcast domains, and so 192.168.1.7 will not be reachable from 192.168.1.10.

    Now I have an L3 interface on SW1 with IP 192.168.1.1 using the default VLAN (vlan-id 0). Similarly, I have enabled IPs on the default VLAN: SW2 - 192.168.1.2, SW3 - 192.168.1.3, SW4 - 192.168.1.4, SW5 - 192.168.1.5, SW6 - 192.168.1.6.

    I create VLAN2 with the following membership: SW3 - ports 12 and 22; SW6 - port 14. I create VLAN3 with: SW3 - port 0; SW6 - port 0. I also configure a VLAN trunk between SW3 and SW6 using the following commands (see the sketch below for the full context):

        edit interfaces ge-0/0/12
        set unit 0 family ethernet-switching port-mode trunk
        edit interfaces ge-0/0/12
        set unit 0 family ethernet-switching vlan members all

    There is a redundant path in the network, since the loop between SW3 and SW6 is closed, yet there is no broadcast storm. What is the reason for this? I have not enabled STP or RSTP, and still there is no broadcast storm. (Please ignore the Cisco symbol on the switches in the diagram; all switches are Juniper EX 2200.)
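
    For reference, a minimal sketch of what the VLAN and trunk configuration described above might look like on SW3 in Junos. The VLAN ID and the choice of ge-0/0/22 as the access port are assumptions for illustration; the question does not state them:

        # assumed vlan-id 2; the question's L3 interface with 192.168.1.7
        set vlans vlan_2 vlan-id 2
        set vlans vlan_2 l3-interface vlan.2
        set interfaces vlan unit 2 family inet address 192.168.1.7/24
        # port 22 as an access port in vlan_2 (port number assumed)
        set interfaces ge-0/0/22 unit 0 family ethernet-switching vlan members vlan_2
        # port 12 as the trunk towards SW6, carrying all VLANs
        set interfaces ge-0/0/12 unit 0 family ethernet-switching port-mode trunk
        set interfaces ge-0/0/12 unit 0 family ethernet-switching vlan members all
        commit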

  • Max. Bandwidth Cannot be Reached

    - by Poyraz Sagtekin
    I have a question regarding bandwidth usage. I don't think it's a very technical question, but I really want to know the reason behind the problem. I study at one of my university's campuses. There is 200 Mbps of bandwidth available for approximately 2,100 students. Our main campus has 25,000 students and 2 Gbps of bandwidth, and there is no problem with connection speed there. However, on my campus we are having difficulties with the internet connection speed. I went to the IT department of my university and they showed me some graphs of bandwidth usage: usage has never reached 200 Mbps and generally sits around 130-140 Mbps. So the users never saturate the maximum bandwidth, yet the browsing speed is not very convincing. What could the problem be? Maybe this is a silly question, but I really want to know the reason. They told me that they have reached 180 Mbps before.

  • DFS replication and the SYSTEM user (NTFS permissions)

    - by HopelessN00b
    A question for which I'm having trouble finding an answer on Google or TechNet: does granting the SYSTEM user permissions to DFS-shared files and folders have any effect on DFS replication? (And while we're at it, is there any good reason not to let SYSTEM have permissions to DFS-shared files?)

    It comes up because I have a collection of DFS namespaces and folders that I'm not able to make someone else's problem, and while troubleshooting a problem where one DFS replica just wasn't replicating with another for no discernible reason, I observed that the SYSTEM account didn't have any permissions granted on any of the files or folders in the folder in question. So I set SYSTEM to have full control and propagated it down, and our DFS health diagnostic reports went from showing a backlog of ~80 files to a backlog of ~100,000... and things started replicating, including a number of files that had been missing on one server or the other (so more than just the permissions changes started replicating).

    Naturally, this made me curious as to whether DFS needs the SYSTEM account to have permissions to do its work, or if perhaps it was just any change to the folder tree in question that prompted DFS to jump into action. If it matters, our DFS namespaces were set up under 2000/2003, and I have just recently finished upgrading all the servers to 2008 R2 or 2012 (with UAC enabled, blech), but have not yet gotten around to raising the DFS namespace functional levels to Server 2008. (And bonus points if anyone has an official Microsoft article on NTFS file permissions and the SYSTEM account as it pertains to DFS or network files.)
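
    For concreteness, the "full control, propagated down" change described above can be reproduced from an elevated command prompt; a sketch with a hypothetical replicated-folder path:

        :: hypothetical path; (OI)(CI) makes the grant inherit to files and
        :: subfolders, F is full control, /T also rewrites existing children
        icacls "D:\DFSRoots\ReplicatedFolder" /grant "SYSTEM:(OI)(CI)F" /T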

  • To update or to not update?

    - by Massimo
    Since I started working where I work now, I've been in an endless struggle with my boss and coworkers in regard to updating systems. I of course totally agree that no update (be it firmware, O.S. or application) should be applied carelessly as soon as it comes out, but I also firmly believe that there is usually at least some reason the vendor released it; and the most common reason is fixing some bug... which maybe you're not experiencing now, but you could be experiencing soon if you don't keep up with updates. This is especially true for security fixes; as an example, had people simply applied a patch that had already been available for months, the infamous SQL Slammer worm would have been harmless. I'm all for testing and evaluating updates before deploying them; but I strongly disagree with the "if it's not broken then don't touch it" approach to systems management, and it genuinely hurts me when I find production Windows 2003 SP1 or ESX 3.5 Update 2 systems, and the only answer I can get is "it's working, we don't want to break it". What do you think about this? What is your policy? And what is your company policy, if it doesn't match your own?

  • Alternate way to connect a vpn through a MIFI

    - by questor
    This has become a major problem at our company, and depending on who I ask, the problem either does not really exist (manufacturer and vendor) or is insoluble (according to most users, including techs who know how to prove their point). The problem involves getting a normal Windows 7 system to connect to a normal Server 2008 R2 server over a cellular router (usually called a MiFi). A very few brands/models appear to work, but the majority cannot make the connection. Since it is a cellular device, there are many variables that come into play, and I wondered if anyone had ever found a consistent way to either make one work or else prove to the providers that their equipment is at fault. They all specifically state "VPN use" in the sales brochures, but few if any work, and those that do are not reliable. From a standpoint of pure knowledge, I just wondered if anyone knew the real reason why they fail? PPTP, L2TP, IPsec - it doesn't matter. I have not tried Shrew or OpenVPN and am using strictly MS Windows protocols. Plenty of Google searches back up my complaints, but none seem to be any closer to knowing "why" they fail, just that they do. This is a "quest for knowledge" question. I don't expect a solution, just a reason for the problem, if anyone has any ideas.

  • Mac Mail sent messages not in sent folder

    - by Elizabeth Akers
    Hi. My PowerBook G4 has been lobbying for a furlough recently, and its latest prank is to not show sent messages in my Sent mailbox. I have always had a copy of every sent message, and now, for some unknown reason, I can only see sent messages from May 7 and earlier. When I did a test and sent myself an email, it showed up fine in my inbox, but there's nothing in the Sent folder. Weird. It also shows the little arrow next to messages that I replied to in the inbox. I guess I'm going to have to cc myself on everything until I get this fixed. Also, it's got another new trick, which I'm assuming is unrelated, but I mention it here just in case: the battery says it's "good", has a cycle count of 0 and a full charge capacity of 2139, but for some reason it's all of a sudden not charging when plugged in. The machine works fine when connected to the wall, but there is no battery life at all. Zip. Any thoughts? Is it just time for a new 'Book? This one just got back from Apple, having its hard drive replaced for the fourth time (for free, but seriously, this thing is a lemon). Thanks so much, Elizabeth

  • howto configure proxy.conf for mod_proxy, apache2, jetty

    - by Kaustubh P
    Hello, this is how I have set up my environment at the moment: an apache2 instance on port 80, and a Jetty instance on the same server, on port 8090.

    Use case: when I visit foo.com, I should see the webapp, which is hosted on Jetty, port 8090. If I go to foo.com/blog, I should see the WordPress blog, which is hosted on apache. (I read howtos on the web and installed it using AMP.)

    Below are my various configuration files.

    /etc/apache2/mods-enabled/proxy.conf:

        # << this is the jetty server
        ProxyPass / http://foo.com:8090/
        ProxyPass /blog http://foo.com/blog
        ProxyRequests On
        ProxyVia On
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPreserveHost On
        ProxyStatus On

    /etc/apache2/httpd.conf:

        LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
        LoadModule proxy_balancer_module /usr/lib/apache2/modules/mod_proxy_balancer.so
        LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so
        LoadModule proxy_ajp_module /usr/lib/apache2/modules/mod_proxy_ajp.so

    I have not created any other files in sites-available or sites-enabled.

    Current situation: if I go to foo.com, I see the webapp. If I go to foo.com/blog, I see:

        HTTP ERROR 404
        Problem accessing /errors/404.html. Reason: NOT_FOUND
        powered by jetty://

    If I comment out the first ProxyPass line, then on foo.com I only see the homepage without CSS applied, i.e. only text, and going to foo.com/blog gives me this error:

        The proxy server received an invalid response from an upstream server.
        The proxy server could not handle the request GET /blog.
        Reason: Error reading from remote server

    I also cannot access /phpmyadmin; it gives the same 404 NOT_FOUND error as above. I am running Debian squeeze on an Amazon EC2 instance.

    Question: where am I going wrong? What changes should I make in proxy.conf (or other conf files) to be able to visit the blog?
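
    A sketch of the usual shape of such a proxy.conf, assuming the blog and phpMyAdmin really are served locally by this same apache: mod_proxy evaluates ProxyPass directives in configuration order, so locally-served paths should be excluded with ! before the catch-all rule, rather than proxied back to apache itself:

        # sketch: exclude locally-served paths first, then send the rest to Jetty
        ProxyRequests Off          # a reverse proxy should not be an open forward proxy
        ProxyPass /blog !
        ProxyPass /phpmyadmin !
        ProxyPass / http://localhost:8090/
        ProxyPassReverse / http://localhost:8090/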

  • Test whether svn REPO changes are reflected in Working Copy

    - by user492160
    Requirement: changes will be made to the REPO directory, and these should get propagated to the WC (working copy), as opposed to the normal direction (WC to REPO).

    Scenario: my svn repo is /var/www/svn/drupal, and my checkout dir / working copy is /var/www/html/drupalsite. So I have done the following:

    1. Edited the post-commit hook to contain "/usr/bin/svn update /var/www/html/drupalsite" (a sketch of the hook follows below).
    2. I won't make any change to the svn WC; I'll make changes to the svn REPO, /var/www/svn/drupal.
    3. After changes are made to the svn repo, run "svn commit /var/www/html/drupalsite". This will trigger the post-commit hook, which in turn will run "svn update /var/www/html/drupalsite", and thus my WC will get updated with the changes in the REPO.

    Query: (a) Would steps 1-3 above help achieve my requirement? (b) I'd need advice on how to test whether the above setup works. I'm at a loss about the success of steps 1-3, which is the reason query (a) exists; this is a bit more of a concern for me.

    NB: I'm new to Subversion. Whatever I've configured till now has been done by reading articles online. The reason for query (b) is that I'm not into development: it seems to be a PHP Drupal website and I happen to be setting it up, so I'm not aware of how to make a "proper" change in the REPO so that it gets reflected in the WC. If it is reflected, my configs are right and the team can start on development. I manually put a random file/folder into the REPO dir to see a change in the WC and ran steps 1-3, but to no avail, and later learned that that is NOT the way to make a change to a REPO. Please advise. Thanks
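
    A minimal sketch of the post-commit hook described in step 1, using the paths from the question (the log-file path is an assumption; the hook lives in the repository's hooks directory, must be executable, and runs as the user who performed the commit, e.g. the apache user for HTTP commits):

        #!/bin/sh
        # /var/www/svn/drupal/hooks/post-commit
        # Subversion passes the repository path and the new revision number
        REPOS="$1"
        REV="$2"
        # bring the working copy up to date after every commit
        /usr/bin/svn update /var/www/html/drupalsite --non-interactive \
            >> /var/log/svn-update.log 2>&1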

  • SQL Server Instance login issue

    - by reallyJim
    I've just brought up a new installation of SQL Server 2008. I installed the default instance as well as one named instance. I'm having a problem connecting to the named instance from anywhere besides the server itself, with any user besides 'sa'. I am running in mixed mode. I have a login/user with a known username. Using that login, I can connect properly when directly on the server. When I attempt to log in from anywhere else, I receive a "Login failed for user ''", with error 18456. In the log file on the server, I see a reason that doesn't seem to help: "Reason: Could not find a login matching the name provided." However, that login DOES exist, as I can use it locally. There are no further details about the error. Where can I start looking for something to help me with this? I've tried deleting and recreating the user, as well as creating a new one from scratch; same result: locally fine, remotely an error.

    EDIT: Partially resolved. I'm now past the base issue: the clients were trying to connect via the default instance. I don't know why. Once the proper ports were opened in the firewall and a static port was assigned to the named instance, I can now connect, BUT ONLY if I specify the connection as Server,Port. SQLBrowser is apparently not helping/working in this case. I've verified it IS running, and have done a stop/restart after my config changes, but no difference yet.
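
    For reference, a sketch of the two connection forms mentioned in the edit (server, instance, port and credentials are hypothetical). Name resolution of a named instance goes through the SQL Browser service on UDP 1434; if that port is blocked, only the explicit Server,Port form can work, which matches the behaviour described:

        :: via SQL Browser - requires UDP 1434 open to the server
        sqlcmd -S MYSERVER\MYINSTANCE -U myuser -P mypassword

        :: bypassing SQL Browser - straight to the instance's static TCP port
        sqlcmd -S MYSERVER,14330 -U myuser -P mypassword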

  • Safely removing my linux dual boot

    - by de1337ed
    Currently, I dual boot Win7 and openSUSE 12.1. Is it okay if I first restore the MBR for Win7 and then format the Linux drives? Or would it be better to first format the Linux drives and then restore the MBR? The reason I ask is that I sometimes get nasty errors when I try to boot into my Win7 CD with the latter method. Is it possible to restore the MBR without having to boot into the Win7 CD? That is, can I remove Linux using the Disk Management utility in Win7 and then fix the MBR while I'm still in Win7, or do I have to boot into the Win7 CD? If I can do this, how do I go about doing so? Thank you.

    EDIT: The spamming of F8 didn't work (it just made loud beeping noises), so I decided to simply boot into my Windows disc and use the command

        bootsect /nt60 SYS /mbr

    Note: I haven't formatted my Linux partition yet. After I did that, I restarted my computer, and nothing happened. Basically, GRUB is still the MBR and I'm still able to access openSUSE. I think the reason this is happening is that my GRUB is on a separate partition from the Linux OS. Here is a picture of my Disk Management view in Win7. openSUSE did all the partitioning for me. All I know is that the 40 GB partition is where openSUSE is installed, but I have no clue what's on the 6.05 GB and the 14.75 GB partitions. Can anyone help me find which partition GRUB is on, and then remove it so I can restore the Windows MBR? Thank you.
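
    In case it helps, a sketch of the usual alternative when bootsect alone doesn't take: the Win7 DVD's System Recovery Options command prompt offers bootrec, which rewrites the MBR of the disk the BIOS boots from. (Assumption: if GRUB actually lives in the MBR of a different physical disk, these must be run with the BIOS pointed at that disk.)

        :: from the System Recovery Options command prompt on the Win7 DVD
        bootrec /fixmbr
        bootrec /fixboot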

  • What speed are Wi-Fi management and control frames sent at?

    - by Bryce Thomas
    There are a bunch of different 802.11 Wi-Fi standards, e.g. 802.11a, 802.11b, 802.11g, 802.11n etc., that all support different speeds. Wi-Fi frames are generally categorised as one of the following:

    Data frames - carry the actual application data
    Control frames - coordinate when it's safe to send, to reduce collisions
    Management frames - handle connection discovery/setup/teardown (e.g. AP discovery, association, disassociation)

    My question is about whether all these frames, and specifically management frames, are transmitted at the fastest supported speed available, or whether certain classes of frames are transmitted at some lowest-common-denominator speed. I have noticed that when I put an 802.11b/g-only device into monitor mode and capture traffic over the air, I still see management frames (e.g. association/disassociation) being transmitted between my phone and AP, which are both 802.11n, even though 802.11n has a higher transfer rate. So I am imagining one of two possibilities:

    1. My 802.11n phone/AP had to negotiate a slower speed for some reason, and that's why I can see their frames on my 802.11b/g monitoring device.
    2. Management frames (and perhaps control frames also?) are sent at a lower speed, and it's only data frames that are transmitted faster with newer 802.11 standards.

    The reason I would like to know which of these two possibilities (or perhaps a third) is the case is that I want to capture management frames, and need to know whether using an 802.11b/g card is going to lead to me missing some frames sent at higher speeds than the monitoring card can observe. If management frames are indeed sent at a slower rate, then it's all good. If I just happen to be seeing the management frames because my phone/AP have negotiated a slower rate, though, then I need to reconsider what card I use for packet capture.
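
    A sketch of one way to capture only management frames on a Linux box with a monitor-mode-capable card (the interface name is an assumption; "type mgt" is the libpcap capture-filter class for 802.11 management frames and only works on a monitor-mode interface):

        # put the card into monitor mode (interface name assumed)
        sudo ip link set wlan0 down
        sudo iw dev wlan0 set type monitor
        sudo ip link set wlan0 up

        # capture management frames only; subtypes such as assoc-req or
        # disassoc can narrow it further, e.g. -f "subtype disassoc"
        sudo tshark -i wlan0 -f "type mgt"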

  • Windows Service with a Logon user set

    - by David.Chu.ca
    I have a service running on a Windows XP box and on a Server 2008 box. The service is configured in automatic mode with a logon user/password set. The logon user is a local user. The service requires this user setting in order to run. The issue I have right now is that the box intermittently reboots itself. I am going to investigate what is causing the reboot (hardware or application). Regardless of the reason, what I need is for the service to recover itself into the running state after the reboot. I think the configuration should achieve this, since the user/password have been set and the mode is automatic. Do I need to log in as that user to bring the service back? (Sometimes the reboot happens in the middle of the night.) I am not sure if there is any difference between Windows XP and Windows Server 2008. The only thing I've noticed is that after an unexpected reboot, Windows Server prompts a dialog asking for an explanation of the previous shutdown. Will this prevent the service from running automatically, or will the service only run once the reason has been set?
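
    A service set to Automatic with stored credentials starts at boot with no interactive logon needed; that can be verified, and a safety net added, from an elevated prompt. A sketch with a hypothetical service name:

        :: show the start type and the account the service runs as
        sc qc MyService

        :: restart automatically (after 60 s) if the service fails,
        :: up to three times; the failure counter resets daily
        sc failure MyService reset= 86400 actions= restart/60000/restart/60000/restart/60000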

  • How long will a USB key with an OS installed on it last?

    - by Xananax
    I've heard numerous times that installing an OS on a USB key is a bad thing to do, as USB keys typically survive only a certain number of writes before dying, and installing an OS on one will wear it out (unless it's used sporadically for rescue purposes). Nonetheless, I am very tempted to install some flavour of Linux (Ubuntu or Arch, I haven't decided yet) on a small, transportable USB key. My problem is, although you read a lot that it's "bad", you are never told how bad. How long would it last (provided, say, a PC that is on 24/7)? A month? A year? Five years? Are there recipes to make it last longer? Is there any reason besides wear that should prevent me from attempting this? I mean, if it can be calculated, then I could theoretically shield myself by doing regular backups onto another key when the deadline gets close (for example).

    Notes:

    - I am not talking about using a USB key as a live CD, but actually installing the OS on it.
    - When I say "USB key", I refer to the little keys with flash memory, not an external USB hard drive.
    - For the curious, my reason is that I work in a lot of different places, on different PCs, and I have a very customized session: my own WM, my own key bindings, my own scripts, a selection of plugins for Firefox and Chrome, etc. Currently I am synchronizing all this through a mix of Dropbox, git, and transporting files on USB keys, and it's becoming a chore. It would be much simpler for me to just plug in the USB key, mount the hard disk of the PC I am using, and use its processing power without actually needing to install any OS on it.
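
    On the "recipes to make it last longer" point, a sketch of the usual write-reduction tweaks for a flash-based root filesystem; device names and mount points are illustrative, and keeping /var/log on tmpfs trades log persistence across reboots for fewer writes:

        # /etc/fstab fragment (illustrative): noatime avoids a metadata write on
        # every file read; tmpfs keeps short-lived files in RAM instead of flash
        /dev/sdb1  /         ext4   noatime,errors=remount-ro  0  1
        tmpfs      /tmp      tmpfs  defaults,noatime           0  0
        tmpfs      /var/log  tmpfs  defaults,noatime           0  0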

  • Is it possible to install all packages from an APT repository?

    - by Kristoffer Hagen
    Is it possible to install all packages from an APT repository? I know it is possible to do it manually, but then you would need to know all the package names, and I don't. Any suggestions? Thanks.

    Update: Well, you guys are going to kill me for this, but the reason for my madness is that I want to install all the packages from BackTrack into my Ubuntu installation. I really don't like the idea of having it in a VM, and having a separate partition for it is even more out of the question. I know that the folks at BackTrack don't like it when people leech their repositories, but that's what you get for releasing open source software. Stupid? Maybe. A valid reason? Probably not. Do I still want it? Yes.

    Another edit: I have now given up on this, as it seems impossible to get it to work even by manually installing packages.
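
    For what it's worth, a sketch of the brute-force version of "manually", assuming the repository is already in sources.list and apt has fetched its package index (the index filename under /var/lib/apt/lists/ is a guess; check what apt actually downloaded, and expect heavy conflicts when mixing foreign packages into stock Ubuntu):

        # pull every package name out of that one repo's index and install the lot
        awk '/^Package:/ {print $2}' /var/lib/apt/lists/*backtrack*_Packages \
            | sort -u | xargs sudo apt-get install -y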

  • Outlook, Word, and normal.dot (2003 Edition)

    - by mosiac
    I have one user who for some reason has been having macro issues with her normal.dot file. At first the fix was just to remove the file, because she doesn't actually need to save anything in it, but this was really a temp fix. We found out that for some reason, every time she opened Word it was trying to modify normal.dot without asking. I set it up to ask, so at least we could control the changes going on to normal.dot. There was one file disabled in Word that we enabled, because it was a document she never used anymore, which made us think that maybe that was the issue. We have automatic antivirus updates and scans, so there is little chance of a virus. The issue has stopped as far as using Word by itself goes: she can open, close, edit, save, etc. and never get the dialog. In Outlook, however, if she clicks reply or forward on an e-mail but decides not to send it and just closes it, she gets the pop-up to save changes to normal.dot. This leads me to believe something about how she is set up to use Word as an e-mail editor is causing the problem. Am I even on the right track here? Short form: Word works fine with normal.dot; as an Outlook mail editor it wants to change normal.dot. No idea what to do.

  • network policy + WPA enterprise (tkip) Windows 2008 R2

    - by Aceth
    Hi, I've attempted the following guide and am in a bit of a pickle: http://techblog.mirabito.net.au/?p=87

    My main goal is to have username/password-based wireless authentication with Active Directory integration. I keep getting this error:

        Network Policy Server denied access to a user.
        Contact the Network Policy Server administrator for more information.

        User:
            Security ID:                  domain\rhysbeta
            Account Name:                 rhysbeta
            Account Domain:               domain
            Fully Qualified Account Name: domain\rhysbeta
        Client Machine:
            Security ID:                  NULL SID
            Account Name:                 -
            Fully Qualified Account Name: -
            OS-Version:                   -
            Called Station Identifier:    00-12-BF-00-71-3C:wirelessname
            Calling Station Identifier:   00-23-76-5D-1E-31
        NAS:
            NAS IPv4 Address:             0.0.0.0
            NAS IPv6 Address:             -
            NAS Identifier:               -
            NAS Port-Type:                Wireless - IEEE 802.11
            NAS Port:                     2
        RADIUS Client:
            Client Friendly Name:         Belkin54g
            Client IP Address:            x.x.x.10
        Authentication Details:
            Connection Request Policy Name: Secure Wireless Connections
            Network Policy Name:          Secure Wireless Connections
            Authentication Provider:      Windows
            Authentication Server:        srvr.example.com
            Authentication Type:          EAP
            EAP Type:                     -
            Account Session Identifier:   -
        Logging Results: Accounting information was written to the local log file.
        Reason Code: 22
        Reason: The client could not be authenticated because the Extensible
        Authentication Protocol (EAP) Type cannot be processed by the server.

    I would love to have it so that non domain devices

  • virtual disk image - file or partition

    - by tylerl
    I'm looking at the differences between using a file versus a partition to store a virtual disk image in VM use. The common knowledge is that partition-based images are faster than file-based images because of decreased overhead. It makes sense, but I've never seen any actual numbers.

    My own testing bears out a different result. When I benchmark a direct-to-partition virtual disk, then format that same partition with ext4, create a virtual disk image stored on that ext4 filesystem, and benchmark that, I see no speedup at all for the direct-to-partition virtual disk. Instead, on some systems the file-based image is even faster (possibly due to host OS caching or something like that). This test was repeated many times on many systems, with fairly consistent results.

    So, perhaps throwing out the performance justification: is it still considered better to use a partition rather than a virtual disk image? Is there some other reason why direct partition access is better than image files? Or perhaps is there some reason to go the other way around? Perhaps there is an advantage in one of the virtual disk file formats that you don't get with raw partition images?
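
    For anyone reproducing the comparison, a sketch of the kind of benchmark described, using fio with illustrative device and file names; --direct=1 bypasses the host page cache, which is exactly the effect suspected above of flattering the file-backed image:

        # raw partition (destructive: randrw writes to the partition!)
        fio --name=raw  --filename=/dev/sdb2 --rw=randrw --bs=4k \
            --direct=1 --size=4G --runtime=60 --ioengine=libaio --iodepth=32

        # file-backed image on ext4, same workload
        fio --name=file --filename=/mnt/img/disk.img --rw=randrw --bs=4k \
            --direct=1 --size=4G --runtime=60 --ioengine=libaio --iodepth=32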

  • "Safely remove hardware"...doesn't.

    - by Kev
    I have an external USB hard disk that I have scripted to safely shut down after a backup, so the backup operator can unplug it, and knows not to if the lights are still on for some reason. It has always worked fine using the DevEject command-line utility. This week it failed for some reason:

        DevEject 1.0
        2003 c't/Matthias Withopf
        Ejecting 'USB Mass Storage Device' [USB\VID_0411&PID_002A\00000704C8D2]...FAILED (23,5)
        Error ejecting device USB Mass Storage Device, vetoed (15,5)!

    Worse yet, using the SRH (Safely Remove Hardware) tray icon, I click Stop, click OK, it pauses about 5 seconds with OK and Cancel greyed out, closes the sub-window, and then the main window with the Stop button still shows the device, and Stop is still available. I can keep doing that and it never gets rid of the device. I can still access it in Explorer. LockHunter reports that nothing is locking the drive. I've made no changes to the backup configuration or anything to do with the drive this week. Why the sudden flake-out? Short of a restart, which I can't do today before the backup operator goes home, how do I fix it?

  • Java web app deployment and ControlTier adoption

    - by Ran
    I've been searching for a configuration and deployment manager tool for my Java/Linux-based web service and have been looking mainly at ControlTier (http://controltier.org). We operate at a medium scale (100s of hosts, multi-DC, dozens of services). There seem to be plenty of lower-level sysadmin tools such as Chef, Puppet, Cfengine, Bcfg2 and more; my understanding, and the reason I'm calling them "low level", is that they are great for system-level administration tasks, such as setting up a mount, file permissions, users etc., but they aren't designed, for example, for Java deployments, which usually come with a build process and special Java semantics. In many cases any tool can be used to do anything, but if it was not designed for the task, it can get uncomfortable. OTOH, ControlTier seems to have been designed for exactly that, Java application deployments; at least, that's what all the tutorials on their site demonstrate. But here's the problem: the wiki at http://controltier.org/wiki/ is pretty good and stuffed with examples, and the company behind the open source CT product is very responsive (pushy...), yet I have yet to see any material from third-party users on the net. No success stories, no detailed blog posts, no best practices, no cheat sheets, not even hate letters; nothing. This plays badly for DTO Solutions, CT's sponsor, for two reasons: one, it makes me suspicious about the reason for the poor adoption; and two, what do I do if I get stuck, there's no help page on CT's wiki, and the mailing list is too slow to answer? I'd be stuck with a "free" product that a consultancy company is pushing. So my question here: I'd be interested in hearing whether anyone has had real-world experience with CT for Java-based web app deployments, and whether you'd give the product a thumbs up. Any other comments that may enlighten me are welcome, of course...

  • Why can't I boot in to Windows Recovery Environment to fix my HDD or salvage my data?

    - by Kevin
    I've been trying to get into WindowsRE to salvage the files on my Sony Vaio laptop after it failed to load Vista (it finally, consistently displays "Error loading operating system" after months of such intermittent failures, usually rectified via restarts or by utilizing Startup Repair or CHKDSK from WindowsRE). The problem is, after successfully accessing it once after this failure (and many times before over the course of the laptop's life), I can no longer get it to load. During the last successful access (right after the failure), I ran Startup Repair, which itself failed and notified me that the boot sector was corrupt. I attempted to head into Sony's proprietary recovery tools menu, which is accessible from WindowsRE when it is loaded from the recovery partition or recovery disk; however, it hung. I have since been unable to access the recovery environment after restarting, using any of these methods:

    - Access via the recovery partition (pressing F10 on boot)
    - Access via recovery DVD (created using the same computer when it was healthy)
    - Access via a Windows Vista installation DVD

    All three methods produce the same results:

    - The computer acknowledges the boot attempt.
    - The computer successfully gets past the "Windows is loading files" screen.
    - The computer successfully gets past the Windows loading screen.
    - The computer then stalls at a black screen while showing HDD activity (via the indicator light). After a few minutes the HDD activity ceases, and after a few more minutes the oversized cursor that WindowsRE uses appears on the black screen. The actual recovery environment, however, never appears, even after leaving the computer in that state overnight.

    What is frustrating is that other bootable utilities, such as SeaTools for DOS and MemTest, boot up and run fine. While running perfectly normally, MemTest was able to produce a plethora of errors in my RAM. I'm inclined to believe the RAM's faultiness may be causing the WindowsRE boot to fail. Would this be a valid assumption? If I'm not mistaken, booting from external media utilizes the RAM, so such a reason is plausible, assuming my knowledge of bootloading is correct. Other than that, I can't figure out any reason why all the bootable utilities except WindowsRE run fine. Does anyone know what the problem is, or could be? Any solutions?

  • Managing multiple IMAP accounts in Thunderbird

    - by baritoneuk
    I've been using Thunderbird for years without issues, with 20+ POP3 accounts. I'm moving over to IMAP, which will enable me to keep copies of the emails locally and on the server whilst keeping everything synchronised. However, I'm looking for the best way to manage multiple IMAP accounts in Thunderbird. Currently I have a filter that copies all the emails into a central inbox and into separate local folders. The reason for this is that I go through my inbox daily and delete all emails that don't require any action. I move any emails that require action to my "action" IMAP account folder. This way I can synchronise all the emails that require action across multiple computers (and mobile devices). This technique is my implementation of the GTD, or Getting Things Done, philosophy. I also copy each email into a separate local folder, just in case any emails on the IMAP accounts get deleted, or something drastic happens on the server and I lose all the emails. My business partner has access to some of these emails and still uses POP3 (with "leave copy on server" checked), but I know Thunderbird can still sometimes delete emails off the server. The problem with the above is that Thunderbird gives me the dreaded error dialogue saying that the emails cannot be filtered due to another process. I also find the folder list in Thunderbird hard to manage; here is a screenshot of part of my folder list, and as you can see, it's a bit of a complicated list and not easy to manage. What would be the best way of managing multiple IMAP accounts whilst allowing me to have copies put in a central folder and in local folders? It would be useful to hear whether people think this is even necessary, as perhaps there is a better way. How do people manage multiple IMAP accounts in a way that allows them to keep on top of actionable emails? I'd be interested in how others manage this. I've never used the Thunderbird-based client "Postbox"; does it handle multiple IMAP accounts better?

  • SQL 2008 Backups to UNC Share Failing 0xC002F210

    - by Matty Brown
    This problem is driving me NUTS!! We take backups of all of our production databases to a network share, which is then backed up to tape nightly:

    - 8pm Mon-Fri: full backup, followed by a log backup
    - 7am-7pm Mon-Fri, at half-hour intervals: log backup

    Our backups have worked in this manner since we migrated from SQL Server 2000 Standard to 2008, three years ago. Recently, the first log backup on Mondays has been failing. Not every time, but almost every time! The rest of the week we've had no problems. I guess the issue may have something to do with the size of the log backup attempted after a weekend of no backups.

    Now on to the issue I need a fix for: all this week, every full backup of our two biggest databases has failed (both backups < 1 GB compressed). There's plenty of disk space on the source and destination servers. I'm guessing the issue has to do with the amount of time it takes to complete the backups of these databases, and/or the size of the backup files required. Changing the backup destination to local storage works fine (and very, very fast in comparison). From the job history, I can find a few hints as to what the problem could be:

        Code: 0xC002F210 (always this code, but a mix of the following descriptions)

        The operating system returned the error '64(failed to retrieve text for this
        error. Reason: 1815)' while attempting 'SetEndOfFile' on
        '\\drserver\SQLBackups\Database.bak'. BACKUP DATABASE is terminating abnormally.

        The operating system returned the error '64(failed to retrieve text for this
        error. Reason: 1815)' while attempting 'FlushFileBuffers' on
        '\\drserver\SQLBackups\Database.bak'. BACKUP DATABASE is terminating abnormally.

    Please help save my hair and sanity!!
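
    A sketch of the workaround hinted at above (paths and database name are hypothetical): OS error 64 is ERROR_NETNAME_DELETED, "The specified network name is no longer available", so backing up to fast local disk first keeps the long-running BACKUP off the network, and a dropped SMB session can then only affect the subsequent copy, which is cheap to retry:

        -- back up to local disk first (hypothetical paths)
        BACKUP DATABASE [BigDatabase]
        TO DISK = N'D:\LocalBackups\BigDatabase.bak'
        WITH COMPRESSION, INIT, CHECKSUM;

        -- then copy to the share in a separate, retryable job step, e.g.:
        -- robocopy D:\LocalBackups \\drserver\SQLBackups BigDatabase.bak /R:3 /W:30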

  • Access denied error 3221225578 with file sharing to Windows server

    - by Ian Boyd
    I'm trying to access the shares on a server. The credential box appears, I enter a correct username and password, and I get access denied. The silly thing is that I can Remote Desktop to the server (using the same credentials), and I can check the Security event log for the access denied errors:

        Event Type:     Failure Audit
        Event Source:   Security
        Event Category: Account Logon
        Event ID:       681
        Date:           3/19/2011
        Time:           11:54:39 PM
        User:           NT AUTHORITY\SYSTEM
        Computer:       STALWART
        Description:
        The logon to account: Administrator
        by: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0
        from workstation: HARPAX
        failed. The error code was: 3221225578

    and

        Event Type:     Failure Audit
        Event Source:   Security
        Event Category: Logon/Logoff
        Event ID:       529
        Date:           3/19/2011
        Time:           11:54:39 PM
        User:           NT AUTHORITY\SYSTEM
        Computer:       STALWART
        Description:
        Logon Failure:
            Reason:                 Unknown user name or bad password
            User Name:              Administrator
            Domain:                 stalwart
            Logon Type:             3
            Logon Process:          NtLmSsp
            Authentication Package: NTLM
            Workstation Name:       HARPAX

    Looking up the error code (3221225578), I found a TechNet article, "Audit Account Logon Events" by Randy Franklin Smith, whose Table 1 ("Error Codes for Event ID 681") lists error code 3221225578 as "The username is correct, but the password is wrong." That would seem to indicate that the username is correct, but the password is wrong. I've tried the password many times: uppercase, lowercase, on different user accounts, with and without prefixing the username with servername\username. What gives, that I cannot access the server over file sharing, but I can access it over RDP?
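
    One way to reproduce this outside the GUI and see the exact failure (the share name is hypothetical) is to clear any cached SMB sessions and then map the share with the credentials spelled out explicitly:

        :: clear cached SMB sessions that might interfere
        net use * /delete

        :: hypothetical share name; the trailing * prompts for the password,
        :: forcing a fresh session with exactly these credentials
        net use \\stalwart\share /user:stalwart\Administrator *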
