Search Results

Search found 6818 results on 273 pages for 'ulrike schwinn (dba community)'.

  • Slow network file transfer (under 20KB/s) on newly built x64 Win7

    - by Mangoshake
    I am getting <20KB/s for local network file transfers. If I transfer a very small file (less than 100KB) it starts quickly, then slows down to <20KB/s; all subsequent network file transfers are then slow, and a reboot is needed to reset this. If I transfer a large file it is stuck on calculating for a long time and then starts at <20KB/s immediately. This is a newly built desktop running Windows 7 x64 SP1. Realtek gigabit LAN from the motherboard (ASRock Extreme3 Gen3). The problematic speed is observed on the private LAN, both over Ethernet and WiFi. The router is a D-Link DIR-655. Remote Differential Compression is off. Drivers are up to date from ASRock's website. I have tested network file transfers to and from another Windows 7 laptop and a MacBook Pro, so I am fairly certain it is the desktop's problem. The slow speed also only happens in one direction, outbound from the desktop, regardless of whether I initiate the transfer from the origin or the destination. Inbound network file transfers and internet speeds are fine, so I don't think this is a hardware issue. I am getting 74.8MB/s internet upload speed from speedtest.net (http://www.speedtest.net/result/1852752479.png). On inbound network file transfers I get around 10-15MB/s. I am hoping this community has some insight to help me troubleshoot this. I don't see anything obviously related in the Event Viewer, and beyond that I just don't know where else to look. Any suggestions are greatly appreciated, thank you in advance.

    Read the article

  • Could I have damaged a CPU?

    - by Pascal
    Hey guys, this is a question I asked on the Anandtech forums, but I just discovered this community and think the question would fit right in. Here goes: I built a PC around a Q9550 about 1 1/2 years ago (the MoBo is an Asus P5Q Deluxe). Specs give 2.83GHz; I OC'ed it to 3.40GHz without any problem (or so I thought) until 2 months ago. Cooling is provided by the stock Intel fan. 2 months ago I began to see random crashes, with the BIOS reporting a CPU overheat error. The PC would still reboot at the OC'ed speed without any problem. Since last Saturday and a few more crashes, the PC won't boot at 3.40GHz, and even at stock speed (2.83GHz) I get core temperatures of 60 C idle and 95 C under load on the first two cores. These are the four core temperatures I am talking about, not the T-CPU, which obviously is lower. The fan is running at a steady 2000 RPM. Questions for you: 1. Is 2000 RPM the normal speed of the Intel fan, or is my fan somehow broken (which could explain the overheating)? In that case, any recommendation for a good fan for OCing? 2. The hypothesis I fear is the right one: can the CPU have been slowly damaged over time by this OCing, meaning there is nothing much to do except wait for it to die? (As a side note, I am surprised that the Q9550 is still around $300 CDN here... I thought it would have been cheaper with all those i3/i5/i7 around.) Any help or advice would be more than welcome.

    Read the article

  • Error in Apache: /var/run/apache2 not found

    - by Julen
    This is more of a self-answered question, but since it drove me crazy I would like to share it with the community, and maybe someone can tell me why it happened or what caused it. The thing is, I wanted to install a CGI app on my Ubuntu 10.04 machine, one of the samples that come with the gSOAP toolkit. My intention was to access it from an ASP.NET machine. Regular Ubuntu does not come with Apache, so I installed it from Synaptic. Pretty easy. I followed this How to Install Apache2 webserver with PHP, CGI and Perl Support in Ubuntu Server. Instead of apache.conf I tweaked httpd.conf, since a colleague here used that file instead of the former to get his Apache running. Besides, I was able to access his CGI from my ASP.NET, but mysteriously I could not access mine; I was always getting "The request failed with HTTP status 503: Service Temporarily Unavailable". Checking Apache's error.log I found these messages: No such file or directory: unable to connect to cgi daemon after multiple tries: /home/julen/htdocs/cgi-bin/calcserver. And looking more carefully, whenever I restarted Apache I got this other message: No such file or directory: Couldn't bind unix domain socket /var/run/apache2/cgisock. cgid daemon failed to initialize. I am pretty new to Ubuntu and I could not believe that Apache and Synaptic had made a mistake in the installation process of the server, but it is true that /var/run/apache2 was missing, whereas on my colleague's computer it was not. I tried to find an "elegant" solution, but I only found a post from 2006 that had a slight reference to it. Finally I decided to create the folder myself (as root) and then everything worked fine. Hope this helps others if they encounter a similar problem. Still, I have the doubt why the folder was not created in the first place. Best, Julen.
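
    For anyone who lands on the same error, a minimal sketch of the workaround described above, assuming a stock Ubuntu/Debian Apache 2 package (paths may differ):

        # Recreate the missing runtime directory that mod_cgid needs for its
        # socket, then restart Apache (run as root).
        mkdir -p /var/run/apache2
        /etc/init.d/apache2 restart
        # Note: /var/run is often a tmpfs, so a hand-made directory can vanish
        # on reboot; normally the packaging/init scripts recreate it at start-up.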

    Read the article

  • Linux firewall question

    - by bcrawl
    I have a few generic questions about firewalls and I thought the community up here could help me out. 1) I recently installed Ubuntu Server barebones. I checked for open ports and none were open, which was great. Is that because a firewall was installed, or because there were no applications installed? 2) I installed some applications (Apache, Postgres, SSH, a Java app and a few others). Between these, I ended up opening a few ports (~10). Now I have a list of all the ports I need open. So, how do I go about protecting them? [Is this the right question to ask? Does the process go like this: install a firewall, allow the needed ports, deny the rest using iptables rules?] This is going to be open to the internet, hosting low-traffic ecommerce sites. 3) What do you think is the easiest way for me to quasi-secure the server [low maintenance overhead/simplicity; any open source software which can make my life easier]? 4) Finally, of the said open ports [2], I have 2 ports I need to close because they are telnet ports. Can I close these ports without installing a "firewall"? Thanks all for the help and Merry Christmas!!!
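
    A minimal iptables sketch of the usual "allow the needed ports, deny the rest" approach (the port list here is just an example; substitute the ~10 ports actually needed):

        #!/bin/sh
        # Sketch of a default-deny inbound policy: allow loopback, established
        # traffic and a short list of service ports, drop everything else.
        # (Add the ACCEPT rules before flipping the policy so an active SSH
        # session is not cut off; rules are lost on reboot unless saved, e.g.
        # with iptables-save or the iptables-persistent package.)
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT    # ssh
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT    # http
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT   # https
        iptables -P INPUT DROP

    On Ubuntu, ufw wraps the same mechanism and is probably the lowest-maintenance way to express a short list of allowed ports.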

    Read the article

  • Need info on the hidden SET switch "/S" and how to implement it

    - by ttyl
    I am having some problems doing a proper search for "SET/S" or "SET /S" on Google and other search providers. The difficulty arises with the slash "/"; it is commonly used in search engines to add a "nearness" to the search parameter. I have found no way to escape the slash when searching for a slash. For those in this community, try searching this domain with the two search terms listed above. It just doesn't work; it ends up looking for SET S instead. But I digress. So I'm asking the uber-gurus on this board to help me find the documentation for /S and how to implement SET /S in a batch file. SET is an internal DOS/cmd command and allows many things including prompting the user, integer math and writing environment strings. Looking at this link: http://www.robvanderwoude.com/os2set.php it appears that the /S is only for OS/2, but I'm thinking that this might not be the case, due to this: http://www.dostips.com/forum/viewtopic.php?t=2704, where it is apparently used with substrings and macros. Any help is much appreciated.

    Read the article

  • Microsoft (Hotmail, Live, MSN, Outlook): unable to send emails, and no support from Microsoft in the 3 months we have been asking

    - by bombastic
    OK, this is something unbelievable. We have a website where users sign up and receive links to confirm they signed up, BUT: 1 - Microsoft blocked our IP (no one with a Microsoft email account can receive our emails). 2 - We tried contacting Microsoft, submitting the detailed form about our problem. 3 - We posted 3 times in their community about our problem. 4 - We tweeted them about our problem. 5 - We tried finding a telephone support number (the few there are aren't helping at all). Do you think we solved it? The answer is NO :/ We are still unable to send emails from our IP to Microsoft email accounts, and have been for 3 months. Our emails are fine; we checked all the email headers following Microsoft's guidelines, but it seems that is not enough. Checking our IP reputation, everything seems OK; indeed we can send email easily to any other provider: Gmail, Yahoo, etc. Do you know any other way to try to get help? FULL STACK ERROR FROM MICROSOFT: host mx1.hotmail.com[65.55.37.120] said: 550 SC-001 (COL0-MC4-F28) Unfortunately, messages from 94.23.***** weren't sent. Please contact your Internet service provider since part of their network is on our block list. You can also refer your provider to http://mail.live.com/mail/troubleshooting.aspx#errors. (in reply to MAIL FROM command) We are running a Virtual Private Server, so no hosting site, and using NGINX too.
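
    This doesn't solve the support silence, but as a sketch of the usual self-checks for a blocked sending IP (the IP and domain below are placeholders, since the real ones are masked above):

        # Reverse DNS for the sending IP should resolve to a real hostname:
        dig -x 94.23.0.1 +short
        # The sending domain's SPF record should include that IP:
        dig TXT example.com +short
        # And the MTA log shows the exact codes Hotmail returns per delivery attempt:
        tail -f /var/log/mail.log    # path assumes a Postfix/Exim-style syslog setup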

    Read the article

  • Is StoreJet Transcend (0x2329) an Advanced Format drive?

    - by Graham Perrin
    I use a 640 GB StoreJet Transcend (0x2329) with ZEVO Community Edition 1.1.1 on OS X 10.8.2. Question: Is this drive Advanced Format? Background: I submitted a request for technical support to Transcend, but the first response was gibberish, so I don't expect a reasonable follow-up. Models at http://www.transcend-info.com/Products/CatList.asp?LangNo=0&ModNo=293 are similar but different sizes (not 640 GB). Mine is probably the 25M2 (TS640GSJ25M2). Unless I'm missing something, nothing currently in the Transcend support area tells me whether the drive is Advanced Format. From System Information in OS X 10.8.2:

        StoreJet Transcend:
          Capacity: 640.14 GB (640,135,028,736 bytes)
          Removable Media: Yes
          Detachable Drive: Yes
          BSD Name: disk3
          Product ID: 0x2329
          Vendor ID: 0x152d (JMicron Technology Corp.)
          Version: 0.00
          Serial Number: 322549FBA004
          Speed: Up to 480 Mb/sec
          Manufacturer: JMicron

    History for the ZFS pool shows creation in March 2012:

        macbookpro08-centrim:~ gjp22$ zpool history zhandy | grep create
        2012-03-14.17:29:37 zpool create -f -O compression=off -O copies=1 -O casesensitivity=insensitive -O snapdir=visible zhandy /dev/dsk/GPTE_1928482A-7FE4-482D-B692-3EC6B03159BA
        2012-06-22.15:51:16 zfs create zhandy/Pocket Time Machine

    At that time I almost certainly used ZEVO Setup Assistant to create the pool.

        macbookpro08-centrim:~ gjp22$ zpool get ashift zhandy
        NAME    PROPERTY  VALUE  SOURCE
        zhandy  ashift    0      default

    If I discover that the drive is Advanced Format, a different ashift value will be appropriate.
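
    A small sketch of how one might probe this from OS X itself; whether ZEVO honours an explicit ashift at pool creation is an assumption on my part, so treat the last line as illustrative only:

        # Ask the OS what block size(s) the device reports (a 4096-byte physical
        # block size would indicate Advanced Format; USB bridges sometimes hide it):
        diskutil info disk3 | grep -i "block size"
        # ashift cannot be changed on an existing pool; a 4K-aligned pool would be
        # created with something like (assumption, ZEVO syntax may differ):
        # zpool create -o ashift=12 zhandy /dev/dsk/GPTE_...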

    Read the article

  • The Server Fault Wiki of recommended practices [migrated]

    - by Avery Payne
    So I've noticed that there are several recommendations on basic practices on Server Fault, but there doesn't seem to be a cohesive view as to how those recommendations would all fit together. So I thought I would lump these together as a kind of mental exercise to see what the "Server Fault Community IT Department" would look like if it were implemented. This would give a few things: it would make a reasonable wiki (in the true wiki spirit of many contributions), it would provide several links to well-vetted practices, and it would be kind of fun to see what the amalgamation would look like. And who knows, it may even point out some interesting issues between different forms of "best practices", although I would be stunned if there were a conflict hidden in there someplace... Add your favorites from Server Fault as answers, and I'll re-edit this section with the results. Here are a few categories to collect different ideas together:

        Hardware Configuration(s)
          - Server room configuration
          - Server room temperature
          - Firmware Updates and Scheduling
        Storage Configuration(s)
          - Selecting a NAS box
          - Linux: Dealing with /tmp
          - Linux: Install apps in /var or /opt?
        Network Configuration(s)
          - Checking DNS health and compliance
        Security Practice(s)
          - Password (General) Best Practices
          - Password sharing methods
        Windows Update
          - Updating Windows Servers that are hosts for VMs
        Network Service(s)
        User Service(s)
          - User Naming & Deletion
        Upgrade Process(es)
        Disaster Recovery
          - Checking Backups
          - Documenting an outage for a post-mortem review

    Last Edit: 2010-02-17

    Read the article

  • Different approaches to share files over local network & playlists "collaboration"

    - by exTyn
    I know that I can use Google to find methods to share files over a local network [1]. But I have never shared files over a local network, and I want to do this in a good, professional way. Also, this could be a good community wiki, I think. Well, what I am asking for is: what are the pros and cons of different methods of sharing files over a local network? In my case, I need to share files between Linux & Win 7, and I want it to be secure (= without access for anyone else but me & the people in my room). Another question (connected with the above topic) is about playing music over the local network. Let's say I live with 2 other guys in a room, one of us has speakers, and we want to collaborate in creating playlists (e.g. everyone chooses 3 songs to be played). Is it possible? How to do this? I am asking this question on SuperUser because it is connected with hardware & software (networking, connecting computers, software for managing playlists over the network, etc.). I think it is the most appropriate place for such a question (I have considered SO and SF). [1] And I have already done this! But I do not have experience in this field (sharing files over a local network), so I am asking about pros and cons.

    Read the article

  • KVM and JBoss Java Application Server

    - by Jason
    We have a large Xen deployment running on both RHEL and CentOS and have recently started looking at KVM, since this looks like where the future of VMs on Linux is. We can load the server and get everything running without an issue. However, when we load up a new one with JBoss (4.2 Community Edition, Sun JDK 6) and load a large EAR, the server goes a little crazy. The %sy will jump to 80-99% and just hang for large periods of time, and we see a similar jump in %us on the host machine. We thought at first this might be I/O, as it seems to happen at the start of JBoss, but it would then "cool down" after everything got loaded. We did some tests by extracting some large tar.gz files and using jar -xvf on the EAR but could not re-create it. Then we started thinking this might be some type of memory access issue. We loaded a C program that would generate a lot of memory activity and sure enough we saw the spikes again. Not as high, mind you, but we did see them. We then wrote a small Java program to do the same thing and sure enough we saw the spikes again. Any thoughts on what might be causing this? Is this just the way KVM works? As a side note, we do NOT see this behavior on any other setup: Xen, VMware and/or native iron. The system does seem a bit slower than our Xen / VMware ones.

    Read the article

  • Processing-time billing in Amazon EC2

    - by Rafael Almeida
    Hi all! I think my question is fairly basic, but I would like a clarification: in the pricing part of AWS we can see that Amazon charges around $0.10 per 'instance computing hour'. I've seen in a blog post somewhere (can't remember where exactly, and even if I did I think it was in Portuguese anyway) that this way your minimum monthly payment would be $72 (= $0.10/hour x 24 hours x 30 days). Is this correct? (I don't think it is!) My understanding is that this 'virtual computing time' is only used when your machine is actually doing something (serving pages, serving the admin via ssh, whatever), so real billable usage would be less than 720 hours/month in most webserver scenarios. Is my view correct? If it is, then it leads me to another question: is it economically interesting to buy access to one of these instances for testing? I mean, would I have the 'freedom' to 'forget' about it for a month and receive a very-close-to-zero (as in, a few cents) bill? Do you do it / know of anybody who does? Any thoughts on the matter (as in, "yes, it's a good idea", or "yes, but there's this 'gotcha': ...", or "no, nobody does it because of...")? PS: sorry for the long question text. I highlighted the main questions for easy viewing. Also, I'm not sure if this question is actually more than one and if it's desirable for the community, so sorry if it is too! Thanks in advance!

    Read the article

  • Ubuntu Postfix email account with forward

    - by Mika
    I have an Ubuntu 12.04 server with Postfix installed. For the Postfix installation I used this guide: https://help.ubuntu.com/community/Postfix. I didn't go through all of it, just the sudo dpkg-reconfigure postfix part. I have created user accounts on my server, and the users' home directories contain a .forward file which has only one row: the email address to forward to. I have defined DNS A records for the names www.mydomain.com and mydomain.com. But if I send an email to [email protected] it doesn't get forwarded. Actually, I can't see any sign of any email ever reaching my server. My firewall is defined to allow incoming traffic on ports 80, 443 and 22. For outgoing traffic it allows ports 587 and 22. The exact definitions are below. Should I also allow outgoing HTTP (port 80)? Or maybe port 25?

        # Allow ssh in
        iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT

        # Allow incoming HTTP
        iptables -A INPUT -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT

        # Allow incoming HTTPS
        iptables -A INPUT -i eth0 -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT

        # Allow outgoing SSH
        iptables -A OUTPUT -o eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT

        # Allow outgoing emails
        iptables -A OUTPUT -o eth0 -p tcp --dport 587 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -p tcp --sport 587 -m state --state ESTABLISHED -j ACCEPT

    Edit: I found lines in my syslog telling me that there was incoming traffic on port 25 which was blocked. The sender IPs for those packets were trustworthy, so I also opened port 25. Now I can see some Postfix logging in my syslog. It looks like it is at least trying to forward emails. I haven't yet received any forwarded emails in my Gmail mailbox.
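
    A small sketch of how the forwarding path can be exercised once inbound SMTP is allowed (the user and address below are placeholders):

        # Incoming mail arrives on port 25, so it must be allowed in:
        iptables -A INPUT -i eth0 -p tcp --dport 25 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 25 -m state --state ESTABLISHED -j ACCEPT

        # .forward is a single line containing the destination address:
        echo "someone@example.com" > /home/someuser/.forward

        # Send a local test message and watch Postfix handle it:
        echo "test body" | mail -s "forward test" someuser@localhost   # needs mailutils
        tail -f /var/log/mail.log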

    Read the article

  • Is UPS worthwhile for home equipment?

    - by Jon Skeet
    Over the years, I've had to throw away quite a few bits of computing equipment (and the like):

        - Several ADSL routers with odd symptoms (losing wireless connections, losing wired connections, DHCP failures, DNS symptoms etc.)
        - Two PVRs spontaneously rebooting and corrupting themselves (despite the best efforts of the community to diagnose and help)
        - One external hard disk still claiming to function, but corrupting data
        - One hard disk as part of a NAS RAID array "going bad" (as far as the NAS was concerned)

    (This is in addition to various laptops and printers dying in ways unrelated to this question.) Obviously it'll be impossible to tell for sure from such a small amount of information, but might these be related to power issues? I don't currently have a UPS for any of this equipment. Everything is on surge-protected gang sockets, but there's nothing to smooth out a power cut. Is a home UPS really viable and useful? I know there are some reasonably cheap UPSes on the market, but I don't know how useful they really are. I'm not interested in keeping my home network actually running during a power cut, but I'd like it to power down a bit more gracefully if the current situation is putting my hardware in jeopardy.

    Read the article

  • Hiera datatypes won't load in Puppet

    - by Cole Shores
    I have spent a couple of days on this, followed the instructions on http://downloads.puppetlabs.com/docs/puppetmanual.pdf and even the Puppet Training Advanced Puppet manual. When I run a test against it, the results always come back as 'nil' and I'm not sure why. I am running Puppet 3.6.1 Community Edition, with Hiera 1.2.1, on SLES 11. My puppet.conf file at /etc/puppet/puppet.conf consists of:

        [main]
        # The Puppet log directory.
        # The default value is '$vardir/log'.
        logdir = /var/log/puppet

        # Where Puppet PID files are kept.
        # The default value is '$vardir/run'.
        rundir = /var/run/puppet

        # Where SSL certificates are kept.
        # The default value is '$confdir/ssl'.
        ssldir = $vardir/ssl
        certificate_revocation = false

        [master]
        hiera_config=/etc/puppet/hiera.yaml
        reporturl = http://puppet2.vvmedia.com/reports/upload
        ssl_client_header = SSL_CLIENT_S_DN
        ssl_client_verify_header = SSL_CLIENT_VERIFY
        # certname = dev-puppetmaster2.vvmedia.com
        # ca_name = 'dev-puppetmaster2.vvmedia.com'
        # facts_terminus = rest
        # inventory_server = localhost
        # ca = false

        [agent]
        # The file in which puppetd stores a list of the classes
        # associated with the retrieved configuratiion. Can be loaded in
        # the separate ``puppet`` executable using the ``--loadclasses``
        # option.
        # The default value is '$confdir/classes.txt'.
        classfile = $vardir/classes.txt

        # Where puppetd caches the local configuration. An
        # extension indicating the cache format is added automatically.
        # The default value is '$confdir/localconfig'.
        localconfig = $vardir/localconfig

    My /etc/puppet/hiera.yaml consists of:

        :backends: yaml
        :yaml:
          :datadir: /etc/puppet/hieradata
        :hierarchy:
          - common
          - database

    I have a directory created at /etc/puppet/hieradata and it contains /etc/puppet/hieradata/common.yaml:

        :nameserver: ["dnsserverfoo1", "dnsserverfoo2"]
        :smtp_server: relay.internalfoo.com
        :syslog_server: syslogfoo.com
        :logstash_shipper: logstashfoo.com
        :syslog_backup_nfs: nfsfoo:/vol/logs
        :auth_method: ldap
        :manage_root: true

    and /etc/puppet/hieradata/database.yaml:

        :enable_graphital: true
        :mysql_server_package: MySQL-server
        :mysql_client_package: MySQL-client
        :allowed_groups_login: extranet_users

    Does anyone have any idea what could be causing Hiera to not load the requested values? I have even tried restarting the Master. Thanks in advance, Cole
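
    One way to narrow this down is to query Hiera directly with its command-line tool, outside of Puppet; a sketch (note that the data files above use keys written with a leading colon, which YAML treats as symbol keys rather than plain strings, so it is worth testing both spellings):

        # Look the key up with the same config the master uses:
        hiera -c /etc/puppet/hiera.yaml nameserver
        # If neither form returns data, the leading colons in common.yaml and
        # database.yaml are a likely culprit for the nil results:
        hiera -c /etc/puppet/hiera.yaml :nameserver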

    Read the article

  • libsasl2 change paths

    - by mk_89
    I have been following the tutorial https://help.ubuntu.com/community/Postfix for installing Postfix on Ubuntu. I'm stuck at the Authentication section of the tutorial, where you change paths to live in the false root. If you look at the link above, I have a file (/etc/default/saslauthd) which is pretty much the same as the one from the tutorial:

        # This needs to be uncommented before saslauthd will be run automatically
        START=yes

        PWDIR="/var/spool/postfix/var/run/saslauthd"
        PARAMS="-m ${PWDIR}"
        PIDFILE="${PWDIR}/saslauthd.pid"

        # You must specify the authentication mechanisms you wish to use.
        # This defaults to "pam" for PAM support, but may also include
        # "shadow" or "sasldb", like this:
        # MECHANISMS="pam shadow"
        MECHANISMS="pam"

        # Other options (default: -c)
        # See the saslauthd man page for information about these options.
        #
        # Example for postfix users: "-c -m /var/spool/postfix/var/run/saslauthd"
        # Note: See /usr/share/doc/sasl2-bin/README.Debian
        #OPTIONS="-c"
        #make sure you set the options here otherwise it ignores params above and will not work
        OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd"

    When I run the following command in Ubuntu:

        dpkg-statoverride --force --update --add root sasl 755 /var/spool/postfix/var/run/saslauthd

    I get the following error:

        dpkg-statoverride: warning: An override for '/var/spool/postfix/var/run/saslauthd' already exists, but --force specified so will be ignored.
        dpkg-statoverride: warning: --update given but /var/spool/postfix/var/run/saslauthd does not exist

    I don't know why this is happening. I literally followed the tutorial step by step and have installed all the packages necessary. What could be the problem? Do I have to manually create the directory myself?
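
    For what it's worth, a sketch of the sequence that usually gets past that warning: the directory has to exist before dpkg-statoverride can apply the override, and the stale override from the earlier attempt may need removing first (run as root):

        mkdir -p /var/spool/postfix/var/run/saslauthd
        # Drop the override left behind by the earlier --force run, then re-add it:
        dpkg-statoverride --remove /var/spool/postfix/var/run/saslauthd
        dpkg-statoverride --update --add root sasl 755 /var/spool/postfix/var/run/saslauthd
        /etc/init.d/saslauthd restart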

    Read the article

  • Which HDD brand do you trust?

    - by Shiki
    Okay, it says it's 'subjective', but I believe it's not. Basically I want to ask the community about your preference. Not really 'preference', but actual experience. For example, if you never had a problem with Western Digital, then write that in an answer, or if there is already an answer for WD, just vote it up. And so on. (I've heard so many stories and experiences. I have only had Samsung, Maxtor, WD and Seagate HDDs. The Samsung died with bad blocks and had anomalies. The Maxtor died so fast I couldn't even really try it, and it ran really hot and loud. The Seagate is as loud as a jet plane and moderately hot. My WD (Green) is quiet, really cool and somewhat fast. That's all I have in terms of experience, so I would say Western Digital in an answer (OR Hitachi. I've never had one yet, but every expert I know says I should get one, since they have even had problems with WD but Hitachi seems to be OK. My laptop comes with a Hitachi HDD but I don't think that's really relevant.)) Basically I mean desktop 7200 RPM HDDs here. Well... notebook HDDs are OK also, but no Raptor/SCSI/server ones. Hope you get what I meant and it won't get closed.

    Read the article

  • Does MySQL have some kind of DoS protection or per-user query limit?

    - by Ghostrider
    I'm a bit at a loss. I'm running a MySQL database that's roughly 1 GB of data and indices combined, on a dedicated Linux server. The DB version is '5.0.89-community'. Configuration is controlled via cPanel. PHP actually runs elsewhere on shared hosting. IP addresses are static and don't change. Access from the remote IP address is properly configured. The website gets around 10K hits per day, with each hit generating a database query. Some of these queries are expensive (~1 sec execution time). All is fine and well until at some point the DB server starts refusing connections from the client, claiming that the specific user can't access the server from that IP. Restarting the server will always fix the problem for a day or two, and then the same thing happens. There are some other DBs on that server, some of which are hit pretty hard on occasion, but constantly. One of the apps maintains several persistent connections since it does a couple of updates per minute, though I don't think that's related. What's driving me mad is that I can't figure out why the server would start refusing connections. There is nothing in the logs. This is a hosted dedicated server, so the hosting company created the OS image and I didn't write or go over every line of configuration. I'd do it, but I'm at a loss as to where to start looking. Any advice is appreciated.
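
    A hedged place to start looking, since the symptom (one account suddenly refused from one IP until a restart) is at least reminiscent of MySQL's host-block cache and per-account limits:

        # Host blocking: after max_connect_errors aborted connects from one host,
        # MySQL refuses that host until the cache is flushed.
        mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connect_errors';"
        mysqladmin -u root -p flush-hosts

        # Per-account resource limits (0 means unlimited) can also lock a user out:
        mysql -u root -p -e "SELECT user, host, max_questions, max_connections, max_user_connections FROM mysql.user;"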

    Read the article

  • Solid State Drive Occasionally Freezes For A Minute While OS Is "Beach Balling"

    - by Boris_yo
    Almost a month ago I bought an Intel 330 128GB solid state drive, migrated my data with the Intel-branded limited-feature Acronis from the old HDD to the new SSD, optimized it with the Intel Toolbox and started using it. Occasionally I get close to 1-minute freezes where I see the operating system "beach ball"; animations still work and I can interact and click on things, but nothing responds and nothing loads. Recently a couple of such freezes occurred within a shorter time of each other. I have noticed that if I stop interacting with the laptop, the freeze lasts less time than if I keep interacting with it. But the bigger problem is when the freeze just does not end and the computer stays stuck for more than half an hour, until I run out of patience waiting and feel I need to restart the system because I am not getting anywhere. Such a freeze happened while the laptop was cold booting into Windows 7. This is when the freeze hang occurred and I had to restart, only to be greeted later by a Windows recovery screen stating something about a boot sector failure and asking me to insert a Windows repair CD. But after I restarted, Windows booted successfully and all was well. I have filmed a video of the freeze hang occurring during a cold boot, which you can see here (on the video page look below for the description): http://www.youtube.com/watch?v=8b7MQlcDTUs As I mentioned in the beginning, the SSD is less than a month old, but here are the S.M.A.R.T. statistics just in case (TRIM is enabled, by the way, according to CrystalDiskInfo). I want to emphasize that this SSD is the only drive I have, yet it is working in RAID mode (it was enabled initially in the BIOS by the laptop's previous owner) on Intel Rapid Storage drivers. I am contemplating switching to AHCI mode but want to be sure this won't cause data loss. Additionally, the stock firmware is the only firmware currently available, yet Intel does not respond to my posts on their community board. If anyone here has this SSD model or generally has experience with SSD drives, I would love to know your thoughts.

    Read the article

  • SharePoint extranet security concerns, am I right to be worried?

    - by LukeR
    We are currently running MOSS 2007 internally, and have been doing so for about 12 months with no major issues. There has now been a request from management to provide access from the internet for small groups (initially) which are made up of members from other community organisations like ours, committees and the like. My first reaction was not joy when presented with this request; however, I'd like to make sure the apprehension is warranted. I have read a few docs on TechNet about security hardening with regard to SharePoint, but I'm interested to know what others have done. I've spoken with another organisation who has already implemented something similar, and they have essentially port-forwarded from the internet to their internal production MOSS server. I don't really like the sound of this. Is it advisable/necessary to run a DMZ-type configuration, with a separate web front-end on a contained network segment? Does that even offer me any greater security than their setup? Some of the configurations from the TechNet docs aren't really feasible, given our current network budget. I've already made my concerns known to management, but it appears it will go ahead in some form or another. I'm tempted to run a completely isolated, separate install just for these types of users. Should I even be concerned about it? Any thoughts or comments would be most welcome at this point.

    Read the article

  • Security implications of adding www-data to /etc/sudoers to run php-cgi as a different user

    - by BMiner
    What I really want to do is allow the 'www-data' user to have the ability to launch php-cgi as another user. I just want to make sure that I fully understand the security implications. The server should support a shared hosting environment where various (possibly untrusted) users have chroot'ed FTP access to the server to store their HTML and PHP files. Then, since PHP scripts can be malicious and read/write other users' files, I'd like to ensure that each user's PHP scripts run with that user's permissions (instead of running as www-data). Long story short, I have added the following line to my /etc/sudoers file, and I wanted to run it past the community as a sanity check:

        www-data ALL = (%www-data) NOPASSWD: /usr/bin/php-cgi

    This line should only allow www-data to run a command like this (without a password prompt):

        sudo -u some_user /usr/bin/php-cgi

    ...where some_user is a user in the group www-data. What are the security implications of this? This would then allow me to modify my Lighttpd configuration like this:

        fastcgi.server += ( ".php" =>
          ((
            "bin-path" => "sudo -u some_user /usr/bin/php-cgi",
            "socket" => "/tmp/php.socket",
            "max-procs" => 1,
            "bin-environment" => (
              "PHP_FCGI_CHILDREN" => "4",
              "PHP_FCGI_MAX_REQUESTS" => "10000"
            ),
            "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
            "broken-scriptfilename" => "enable"
          ))
        )

    ...allowing me to spawn new FastCGI server instances for each user.
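
    Before wiring it into Lighttpd, a quick sanity check of the sudoers line itself might look like this (some_user is the placeholder from above; run as root):

        # Should print the PHP version without any password prompt if the rule
        # behaves as intended:
        sudo -u www-data sudo -u some_user /usr/bin/php-cgi -v
        # Listing www-data's sudo privileges is another useful check (may still
        # prompt for a password depending on the sudoers defaults):
        sudo -u www-data sudo -l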

    Read the article

  • is ksplice production ready?

    - by faultyserver
    I would be interested to hear the Server Fault community's experiences with Ksplice in production. A quick blurb from Wikipedia: Ksplice is a free and open source extension of the Linux kernel which allows system administrators to apply security patches to a running kernel without having to reboot the operating system. And: Ksplice can, without restarting the kernel, apply any source code patch that only needs to modify the kernel code. Unlike other hot update systems, Ksplice takes as input only a unified diff and the original kernel source code, and it updates the running kernel correctly, with no further human assistance required. Additionally, taking advantage of Ksplice does not require any preparation before the system is originally booted (the running kernel does not need to have been specially compiled, for example). In order to generate an update, Ksplice must determine what code within the kernel has been changed by the source code patch. So a few questions: How has the stability been? Any odd issues that you have encountered with its 'rebootless live patching' of the kernel? Kernel panics or horror stories? I have been running it on a few test systems and so far it's been working as advertised, but I am interested in what other sysadmins' experiences have been with Ksplice before going 'all in' and deploying this on our production servers. So, anybody using Ksplice in production? Update: hmm, not seeing any real activity on this question after a couple of hours (besides some kind upvotes and favs). Maybe to spark some activity I'll also ask a few more questions and see if we can get this discussion going... "If you are aware of Ksplice, is there a reason you are not using it?" "Do you feel it's still too bleeding edge, unproven or untested?" "Does Ksplice not fit well within your current patch-management system?" "Do you hate having systems that have long (and secure) uptimes?" ;-)

    Read the article

  • LG LW20 Express won't boot after hdd replace

    - by Mika
    My old laptop (LG LW20 Express) had an HDD failure and I replaced the HDD. Now the laptop won't boot from CD or USB. I'm trying to install Ubuntu on it. When I turn the laptop on it shows me the startup screen, but when it should be loading the operating system it just gives a black screen and starts over. This loop continues until I shut down the laptop. I created the USB boot drive following this guide: https://help.ubuntu.com/community/Installation/FromUSBStick/ I used my boot CD to install Ubuntu on the machine I'm using right now, so at least the CD should work. In the BIOS I can see that my newly installed HDD is recognized and set as the secondary master. Also, the CD and removable media are in the boot list before the HDD. The laptop runs pretty hot; the fan is at full speed pretty soon after the laptop is turned on. Earlier I suspected that it was the almost-broken HDD producing that heat, but there obviously is something else as well. Any ideas what to check?

    Read the article

  • h264 inside FLV container vs. MP4 container?

    - by Gotys
    I am developing a tube site and am currently having issues with the h264 format. Looking at YouTube, I noticed they put their hi-def videos into an MP4 container, so logically I did the same. Next, I installed mod_h264_streaming for lighttpd to make streaming and timeline scrubbing work. The problem is that large files (500MB+ at somewhat high resolution) take forever to even start buffering (I read that Flowplayer and other Flash players need to download the metadata first). I moved the moov atom to the front of the file with MP4Box (I tried qt-faststart too), and the problem didn't go away. Next I read online that I need to interleave the audio tracks, so I did that too. No change in slowness. So I tried putting the same exact h264 movie into an FLV container, and playback buffering starts almost instantly, with no slowness. So what am I missing here? Why would I choose an MP4 container with the mod_h264_streaming module, which seems super slow, over a regular FLV container with lighttpd's built-in mod_flv_streaming? Obviously many websites pick the MP4 container, but I fail to understand why. And as a side question: I tried using HTML5's VIDEO tag to play the same h264 MP4 movie, and the scrubbing is lightning fast! I looked into lighttpd's log file and noticed that Flash players append video.mp4?start=234 each time the timeline is scrubbed, whereas HTML5's video tag does no such thing. Is this some sort of limitation of Flash? Why can't Flash streaming be as fast as HTML5 streaming? Thanks to ALL who can help. I very much appreciate this community.
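
    For reference, a sketch of the two moov-atom fixes mentioned above (file names are placeholders; exact flags may vary between tool versions):

        # Rewrite the file with the moov atom at the front so progressive
        # download can start before the whole file arrives:
        qt-faststart input.mp4 output-faststart.mp4
        # Or re-interleave in 500 ms chunks with MP4Box, which also rewrites
        # the file with the metadata up front:
        MP4Box -inter 500 input.mp4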

    Read the article

  • Can a folder on a NAS be made available as a physical drive in VMWare?

    - by asbjornu
    We are currently in the process of moving from a single web server to two load-balanced web servers and are facing some challenges we don't quite know how to fix. One of these is that the current single server hosts applications that write stuff to disk. The applications running on the server expect that when something is written to disk it will in fact exist later, so it's important that this premise is fulfilled with the dual-server architecture as well. The dual-server setup is a couple of VMware instances with Windows Server 2008 R2 as the guest operating system. Out of the box, these instances do not share any kind of file system, so just moving the applications over would make them break, since one instance would write something to the file system that doesn't exist on the other. Thus we need to share a file system between the two virtual servers. Our host has proposed to create a network share on a SAN and map this share individually on each virtual machine. This doesn't work too well due to NTFS permissions, etc., because the share needs to be accessed by several independent web applications that won't even be in the same application pool. The only solution that kind of works is to hard-code an "identity" for each web application into its web.config file, but this means passwords in clear text, which doesn't sit well with me. Since the servers are virtual, I'm thinking: wouldn't it be possible to make a NAS area available as a physical disk in the guest operating system somehow? Since VMware has full control of the virtual hardware, you'd think it would be able to "fake" a local hard drive in the virtual machine that in reality is a folder on a NAS, but so far I haven't found anything that states how and if this is possible. So I have to ask the wonderful Server Fault community: can a folder on a NAS be made available as a physical drive (typically D:) in both of the virtual machines?

    Read the article
