Search Results

Search found 21356 results on 855 pages for 'check digit'.


  • How to use Ajax with JSON in Ruby on Rails

    - by rafik860
    I am implementing a Facebook application in Rails using the Facebooker plugin, so this architecture is very important: if I want to update multiple DOM elements on my page, and my code works in a regular Rails application, it should work in my Facebook application. I am trying to use Ajax to let the user know that the comment was sent, and to update the comments block.

    Migration:

        class CreateComments < ActiveRecord::Migration
          def self.up
            create_table :comments do |t|
              t.string :body
              t.timestamps
            end
          end

          def self.down
            drop_table :comments
          end
        end

    Controller:

        class CommentsController < ApplicationController
          def index
            @comments = Comment.all
          end

          def create
            @comment = Comment.create(params[:comment])
            if request.xhr?
              @comments = Comment.all
              render :json => { :ids_to_update => [:all_comments, :form_message],
                                :all_comments  => render_to_string(:partial => "comments"),
                                :form_message  => "Your comment has been added." }
            else
              redirect_to comments_url
            end
          end
        end

    View:

        <script>
        function update_count(str, message_id) {
          len = str.length;
          if (len < 200) {
            $(message_id).innerHTML = "<span style='color: green'>" + (200 - len) + " remaining</span>";
          } else {
            $(message_id).innerHTML = "<span style='color: red'>" + "Comment too long. Only 200 characters allowed.</span>";
          }
        }

        function update_multiple(json) {
          for (var i = 0; i < json["ids_to_update"].length; i++) {
            id = json["ids_to_update"][i];
            $(id).innerHTML = json[id];
          }
        }
        </script>

        <div id="all_comments">
          <%= render :partial => "comments/comments" %>
        </div>
        Talk some trash: <br />
        <% remote_form_for Comment.new, :url => comments_url, :success => "update_multiple(request)" do |f| %>
          <%= f.text_area :body, :onchange => "update_count(this.getValue(),'remaining');",
                                 :onkeyup  => "update_count(this.getValue(),'remaining');" %>
          <br />
          <%= f.submit 'Post' %>
        <% end %>
        <p id="remaining">&nbsp;</p>
        <p id="form_message">&nbsp;</p>

    If I call alert(json) in the first line of update_multiple, I get [object Object]. If I instead call alert(json["ids_to_update"][0]) there, no dialog box is displayed. The comment does get saved, but nothing on the page is updated; it seems the object sent by Rails is nil or cannot be parsed by JSON.parse(json).

    Questions:
    1. How can JavaScript and Rails know that I am dealing with JSON objects? Does Rails send an object or plain text, and how can I check that the JSON object was actually sent?
    2. How can I see the returned JSON? Do I have to parse it, and how?
    3. How can I debug this problem?
    4. How can I get it to work?
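
    (An editorial sketch, not a confirmed diagnosis: with the Prototype-era remote_form_for, the :success callback receives the raw XMLHttpRequest object, so its responseText still has to be parsed before it can be indexed. The on_success wrapper below is hypothetical:)

        // Hypothetical wrapper: parse the XHR body before handing it on.
        function on_success(request) {
          // request is the XMLHttpRequest; its body is a JSON *string*, not an object
          var json = JSON.parse(request.responseText);
          update_multiple(json);
        }
        // and in the form options: :success => "on_success(request)"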


  • Amazon EC2 Instance - m1.medium Ubuntu 12.04 - Started to crash three days ago

    - by Joy
    The environment:

    - Amazon EC2 instance - m1.medium
    - Ubuntu 12.04
    - Apache 2.2.22 - running a Drupal site
    - MySQL DB server

    RAM info:

        ~$ free -gt
                     total       used       free     shared    buffers     cached
        Mem:             3          1          2          0          0          0
        -/+ buffers/cache:          0          2
        Swap:            0          0          0
        Total:           3          1          2

    Hard drive info:

        Filesystem      Size  Used Avail Use% Mounted on
        /dev/xvda1      7.9G  4.7G  2.9G  62% /
        udev            1.9G  8.0K  1.9G   1% /dev
        tmpfs           751M  180K  750M   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            1.9G     0  1.9G   0% /run/shm
        /dev/xvdb       394G  199M  374G   1% /mnt

    The problem: about two days ago the site started failing because the MySQL server was shut down by Apache with the following message:

        kernel: [2963685.664359] [31716]   106 31716   226946    22748   0   0   0 mysqld
        kernel: [2963685.664730] Out of memory: Kill process 31716 (mysqld) score 23 or sacrifice child
        kernel: [2963685.664764] Killed process 31716 (mysqld) total-vm:907784kB, anon-rss:90992kB, file-rss:0kB
        kernel: [2963686.153608] init: mysql main process (31716) killed by KILL signal
        kernel: [2963686.169294] init: mysql main process ended, respawning

    That states the process was occupying 0.9 GB, but my RAM had 2 GB free, so 1 GB was still available. I understand that on Linux applications can allocate more memory than is physically available; I don't know if this is the problem, but it's the first time it has happened. The MySQL server then tries to restart, but apparently there is no memory for it and it won't come back up. Here is its error log:

        Plugin 'FEDERATED' is disabled.
        The InnoDB memory heap is disabled
        Mutexes and rw_locks use GCC atomic builtins
        Compressed tables use zlib 1.2.3.4
        Initializing buffer pool, size = 128.0M
        InnoDB: mmap(137363456 bytes) failed; errno 12
        Completed initialization of buffer pool
        Fatal error: cannot allocate memory for the buffer pool
        Plugin 'InnoDB' init function returned error.
        Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        Unknown/unsupported storage engine: InnoDB
        [ERROR] Aborting
        [Note] /usr/sbin/mysqld: Shutdown complete

    I simply restarted the MySQL service. About two hours later it happened again, and I restarted it. Then it happened again 9 hours later. So I thought of the MaxClients parameter in apache.conf and went to check it. It was set at 150; I decided to drop it to 60, like so:

        <IfModule mpm_prefork_module>
        ...
        MaxClients 60
        </IfModule>
        <IfModule mpm_worker_module>
        ...
        MaxClients 60
        </IfModule>
        <IfModule mpm_event_module>
        ...
        MaxClients 60
        </IfModule>

    Once I did that, I restarted the apache2 service and all went smoothly for about three quarters of a day. But during the night the MySQL service shut down once again; this time it wasn't killed by the Apache2 service but invoked the OOM killer itself, with the following message:

        kernel: [3104680.005312] mysqld invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
        kernel: [3104680.005351] [<ffffffff81119795>] oom_kill_process+0x85/0xb0
        kernel: [3104680.548860] init: mysql main process (30821) killed by KILL signal

    Now I'm out of ideas. Some articles state that the ideal thing to do is change the kernel behaviour with the following (added to /etc/sysctl.conf):

        vm.overcommit_memory = 2
        vm.overcommit_ratio = 80

    so that no overcommits take place. I'm wondering if this is the way to go? Keep in mind I'm no server administrator and have only basic knowledge. Thanks a bunch in advance.
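
    (Editorial sketch, not a confirmed fix for this box: the free output above shows no swap at all, so a single memory spike leaves the kernel nothing to reclaim and mysqld gets killed. Adding a modest swap file is a common first mitigation; the path and size below are illustrative.)

        # Create and enable a 2 GB swap file (path and size illustrative)
        sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
        sudo chmod 600 /swapfile
        sudo mkswap /swapfile
        sudo swapon /swapfile
        # Persist across reboots
        echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab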


  • Installation of Access Database Engine 32-bit Fails

    - by Rayzor78
    I am trying to install Access Database Engine 2007 32-bit. The splash screen comes up, you click "Next", then it fails with the error:

        Installation ended prematurely because of an error

    You click "OK" and another error window says:

        The installation of the package failed.

    The exact same thing happens when I try this with Access Database Engine 2010 32-bit. This production server is running Windows Server 2008 R2 SP1 64-bit.

    Before installing Access Database Engine 32-bit, I first needed to install Microsoft Office 2010 Pro (Excel and Office Tools only). I tried the 32-bit version on the production server, since that is how I set it up in our dev environment. No luck: the 32-bit version would not install. I did NOT get the error "You have 64-bit components of Office installed"; I simply received the exact same two errors listed above. Since 32-bit vs. 64-bit did not really matter for the Office install in my project, I installed the 64-bit version of Office Pro 2010 (Excel and Office Tools only) with no problems.

    I have a requirement to have the 32-bit version of the Access Database Engine installed - 2007 or 2010, it doesn't matter. I cannot use the 64-bit version of Access Database Engine 2010 because my SSIS package will not work with it; I require the 32-bit version. I've tried several steps to get it installed, and I seriously think the production server has some aversion to installing 32-bit applications. Here's what I've tried:

    - Installing via command line with the "/passive" switch... no luck.
    - Numerous ways of copying the install file to the server: downloaded a fresh copy directly to the server, downloaded a fresh copy to my local machine and copied it over, copied it over zipped up (http://social.msdn.microsoft.com/Forums/en-US/sqldataaccess/thread/efd3c1f0-07cd-45ca-a626-2dd0c7ac3e9f).
    - Method 1 from that link. I could not try Method 2 because it requires a server reboot, and in my environment that means a long change-management process.
    - Verified that I am a local administrator on the server (evidence: I am able to install other applications, such as the 64-bit Office above).
    - Verified that there are no other Office products that could be blocking the installation. The aforementioned install of Excel 2010 64-bit was the first Office product installed on the server.

    VERY ODD: to test my theory that the production server does not like 32-bit applications, I installed something lightweight - 7-Zip 32-bit - on the production server with no problems whatsoever.

    Here are some things I have not tried yet (I will follow up once I do):

    - Method 2 (as mentioned above), which requires a server reboot.
    - Verifying that the dev and production environments are 100% identical. I've done a cursory check and on the surface they appear to be the same (same OS and SP version), but I need to do a deeper dive to be 100% certain. I had no problems in my dev environment: there, I installed Office 2010 Pro 64-bit (Excel & Office Tools only) and then, via command line with the "/passive" switch, installed Access Database Engine 2010 32-bit.

    I don't know what else to try. Any suggestions or comments?
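
    (One diagnostic worth adding to the list, sketched with assumed names: these redistributables are Windows Installer bootstrappers, so extracting the MSI and running it with verbose logging usually pins down which check fails. The /extract switch and the AceRedist.msi filename are what these packages have typically used - verify against your download.)

        REM Extract the embedded MSI, then install with a verbose log
        AccessDatabaseEngine.exe /extract:C:\Temp\ADE
        msiexec /i C:\Temp\ADE\AceRedist.msi /passive /l*v C:\Temp\ade_install.log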


  • Symantec Protection Suite and System Recovery 2011 Desktop Edition

    - by rihatum
    I am re-posting this because my previous question was treated as if I were shopping or seeking product recommendations, even though I was not - and my comments, which were not offensive in nature, were deleted too. I have re-phrased some parts of my question and I hope the SF admins do not modify or edit this one; I will be most grateful for that. I have a lot of respect for the people who visit this site and help others!

    Just to clarify, and to go by SF rules: I am not asking someone to design this solution, and I am not asking for storage capacity planning. We have done some research, and I am simply seeking real-world examples, experiences, expert technical opinions and suggestions, any tips or tricks, or any problems people have faced while doing something similar with these products.

    We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000-4000 workstations (Windows 7, 32- and 64-bit), plus a few hundred with Windows XP 32/64-bit. I have read the implementation guide for SEP and the tech notes for Desktop Recovery 2011. Our team plans to deploy this as follows:

    - 1 x dedicated SQL 2008 R2 for Symantec Endpoint Protection (instead of using the embedded database)
    - 1 x dedicated SQL 2008 R2 for Symantec Desktop Recovery 2011 (instead of using the embedded database)
    - 1 x dedicated W2K8 R2 box for the SEPM (Symantec Endpoint Protection Manager - management app)
    - 1 x dedicated W2K8 R2 box for the Symantec Desktop Recovery 2011 management application

    Agent deployment: per the Symantec documentation for both products, an agent can be pushed via the management application (provided no firewalls are blocking the required ports, etc. - we already have the Windows firewall disabled).

    Server hardware:

    - Per SQL server: 16 GB RAM + SAS disks + dual Xeon, RAID-10 for the SQL DB, or I can always mount a LUN from our existing Hitachi or EMC SAN.
    - SEPM server: 16 GB RAM + SAS disks + dual Xeon.
    - System Recovery management server: 16 GB RAM + SAS disks + dual Xeon.

    The above is our initial plan for 3000-4000 Windows client workstations. Now my questions :-)

    a) If we had these users distributed between two sites, each with an AD DC/GC, how would I restrict the SEPM and desktop management solutions to only check for users in their respective site?

    b) At present all users are in one building, but we are going to move some departments to a new location (with dedicated connectivity). How would we control which SEPM/management server is responsible for which site?

    c) We have NetBackup in our environment backing up other servers. I am planning to protect these four boxes (2 x SQL, 1 x SEPM, 1 x System Recovery management server) via NetBackup, or I could use System Recovery 2011 Server Edition on all four of them as well. (Licensing is not an issue, as we have the complete Symantec portfolio included in our license.)

    d) Saving desktop backups: what strategies have you implemented? Any best-practice recommendations for a large user base? I was thinking of either mounting a LUN from our Hitachi SAN on the Symantec Recovery server itself, or backing up to each user's hard drive locally and then copying it over to a network location. Suggestions welcome :-)

    If you have anything to add or correct, that will be really helpful before we dive into the actual implementation phase. I will be most grateful for your suggestions, recommendations and corrections - many thanks!


  • How do I resolve "conflicting accounts" in google apps without breaking links to online photos on picasa?

    - by lee
    I have been using Google Apps for some time, and only recently learned that I have what Google calls "conflicting accounts," which is creating a problem I haven't been able to resolve. It turns out that the Apps account really only covers email, Google Docs, and the calendar, and not other features like Picasa, Blogger, YouTube etc., and at some point they gave me a non-Apps Google account, with my same (proprietary, non-Gmail) email address, for the additional apps. This is the "conflicting account." I had noticed that I sometimes had to come in through another door when I went back and forth between Docs, Picasa, and Mail, say, but never understood why, since it was the same username and password, and I didn't get any communication about it at the time.

    Google is now in the process of giving Google Apps users access to the additional apps and providing instructions for consolidating the two accounts. But if I want to move my Picasa site into the new Apps structure, I have to download my albums and re-sync them. This would be disastrous for me, as I have hundreds of photos embedded in my websites, and new web addresses would break all the connections. The alternative seems to be to rename my "personal" (non-Apps) account as described at http://www.google.com/support/a/bin/answer.py?answer=185186:

        Users with conflicting Google Accounts can easily resolve their conflicts by renaming their
        personal Google Accounts, and the data in their personal accounts will remain safe and
        accessible to them. Here's how a user can rename their personal Google Account:

        * Step 1: Visit www.google.com/accounts and sign in with your personal Google Account
        * Step 2: Click 'Change email' under 'Personal Settings'
        * Step 3: Enter a different email address where you can receive mail, enter your password,
          and click 'Save email address'
        * Step 4: Check your other email

        If your users don't have different email addresses where they can receive mail, they can
        resolve the conflict by renaming their personal Google Accounts to @gmail.com addresses instead.

    Sounds easy enough, right? I gave them a Gmail address. The wizard said "sorry, you can't use a Gmail account for this" - which contradicts the last paragraph above, but OK, I switched to a new email address I had just created on one of my domains. I can send email back and forth between this account and my Google Apps account with no problem. But when I try to use it as a replacement on the "personal" side, I always get "The password you gave is incorrect." I have tried it over and over and know the password is correct.

    Since I like to get all my email through one web interface, I initially had the new address set up as an add-on to my Google Apps email account, but noting that the instructions said the "personal account" email could not be associated with any other Gmail account, I took it off and went back to accessing it via Horde so there would be no conflict there. That seemed to make no difference. I can't figure out why it won't accept the password. Does anyone have any thoughts about that, or suggestions for another way to resolve my Picasa problem? Any help at all is greatly appreciated. Lee


  • AWS Load balancer connection reset

    - by joshmmo
    I have an ELB set up with two instances. The issue is that when I do not add www. to the hostname, the ELB just hangs. This is what I get when I spider with wget:

        Spider mode enabled. Check if remote file exists.
        --2013-06-20 13:40:54--  http://learning.example.com/
        Resolving learning.example.com... 54.xxx.x.x53, 50.xx.xxx.x71
        Connecting to learning.example.com|54.xxx.x.x53|:80... connected.
        HTTP request sent, awaiting response... No data received.
        Retrying.

    When I add www. it works great. I have a GoDaddy SSL cert, added in the listener section, that covers three domains: www.learning.example.com, files.learning.example.com and learning.example.com. These are my listener settings (load balancer side → instance side):

        HTTP 80   →  HTTPS 443   N/A      N/A
        SSL 443   →  SSL 443     Change   canvasNew (Change)

    My EC2 instances are running Apache2 on Ubuntu 12.04. I will be happy to post my vhosts file if needed. When I ran the site with the domains pointing to just one EC2 instance, things worked fine. How can I fix this issue for learning.example.com? Why does www work just fine? A second question: what is the difference between instance protocol and load balancer protocol?

    EDIT: Here are the dig results for learning.example.com from yesterday. I changed the DNS entry to point to one instance to make sure it was the ELB; when I switch it back I will do the same for www.learning.example.com.

        ; <<>> DiG 9.9.1-P2 <<>> learning.example.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20210
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;learning.example.com.          IN      A

        ;; ANSWER SECTION:
        learning.example.com.  2559  IN  CNAME  canvas-22222222222.us-west-1.elb.amazonaws.com.
        canvas-22222222222.us-west-1.elb.amazonaws.com.  60  IN  A  54.xxx.x.x53
        canvas-22222222222.us-west-1.elb.amazonaws.com.  60  IN  A  50.xx.xxx.x71

        ;; Query time: 83 msec
        ;; SERVER: 10.x.xx.20#53(10.x.xx.20)
        ;; WHEN: Thu Jun 20 13:40:47 2013
        ;; MSG SIZE  rcvd: 137

    EDIT 2: Here is some more info that might be helpful.

        Port Configuration:
        80 (HTTP) forwarding to 443 (HTTPS)
            Backend Authentication: Disabled
            Stickiness: Disabled (edit)
        443 (SSL, Certificate: canvasNew) forwarding to 443 (SSL)
            Backend Authentication: Disabled

    So I switched everything to one EC2 IP address to bypass the ELB and make sure things are working, and it runs great: both the www and the non-www URL work perfectly. It's only when I switch things to the ELB that learning.example.com hangs while www.learning.example.com works. Hopefully you can get some ideas flowing.
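
    (A hedged place to look, since the dig output shows the bare name already resolving to the ELB: with the SSL listener passing traffic through to port 443 on the instances, it is the backend Apache vhosts that must answer for the bare hostname. A sketch of what to verify in the vhost - the ServerAlias line is the assumption to check, not a confirmed fix:)

        <VirtualHost *:443>
            ServerName  www.learning.example.com
            ServerAlias learning.example.com files.learning.example.com
            # ... existing SSL and DocumentRoot directives ...
        </VirtualHost>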


  • How to change the language of driver interface for Canon Pixma printers?

    - by Sammy
    Is there a way to change the language of the driver interface for Canon Pixma printers? Which language is used seems to be determined by the language of the OS or the Windows localization settings. I really don't want that; I want to be able to set the language manually to my own liking, either during the driver installation or afterwards.

    I have found a workaround for the Pixma iP2770 where you edit the setup.ini file, replacing the language names and the DLL search paths with <SELECT> under the LANGUAGES section. So instead of...

        0000=<SELECT>
        0001=Arabic,RES\STRING\IJInstAR.ini,RES\DLL\IJInstAR.dll
        0804=Simplified Chinese,RES\STRING\IJInstCN.ini,RES\DLL\IJInstCN.dll
        0404=Traditional Chinese,RES\STRING\IJInstTW.ini,RES\DLL\IJInstTW.dll
        0005=Czech,RES\STRING\IJInstCZ.ini,RES\DLL\IJInstCZ.dll
        0006=Danish,RES\STRING\IJInstDK.ini,RES\DLL\IJInstDK.dll
        0007=German,RES\STRING\IJInstDE.ini,RES\DLL\IJInstDE.dll
        0008=Greek,RES\STRING\IJInstGR.ini,RES\DLL\IJInstGR.dll
        0009=English,RES\STRING\IJInstUS.ini,RES\DLL\IJInstUS.dll
        000A=Spanish,RES\STRING\IJInstES.ini,RES\DLL\IJInstES.dll
        000B=Finnish,RES\STRING\IJInstFI.ini,RES\DLL\IJInstFI.dll
        000C=French,RES\STRING\IJInstFR.ini,RES\DLL\IJInstFR.dll
        000E=Hungarian,RES\STRING\IJInstHU.ini,RES\DLL\IJInstHU.dll
        0010=Italian,RES\STRING\IJInstIT.ini,RES\DLL\IJInstIT.dll
        0011=Japanese,RES\STRING\IJInstJP.ini,RES\DLL\IJInstJP.dll
        0012=Korean,RES\STRING\IJInstKR.ini,RES\DLL\IJInstKR.dll
        0013=Dutch,RES\STRING\IJInstNL.ini,RES\DLL\IJInstNL.dll
        0014=Norwegian,RES\STRING\IJInstNO.ini,RES\DLL\IJInstNO.dll
        0015=Polish,RES\STRING\IJInstPL.ini,RES\DLL\IJInstPL.dll
        0016=Portuguese,RES\STRING\IJInstPT.ini,RES\DLL\IJInstPT.dll
        0019=Russian,RES\STRING\IJInstRU.ini,RES\DLL\IJInstRU.dll
        001D=Swedish,RES\STRING\IJInstSE.ini,RES\DLL\IJInstSE.dll
        001E=Thai,RES\STRING\IJInstTH.ini,RES\DLL\IJInstTH.dll
        001F=Turkish,RES\STRING\IJInstTR.ini,RES\DLL\IJInstTR.dll
        0021=Indonesian,RES\STRING\IJInstID.ini,RES\DLL\IJInstID.dll

    ...you get...

        0000=<SELECT>
        0001=<SELECT>
        0804=<SELECT>
        0404=<SELECT>
        0005=<SELECT>
        0006=<SELECT>
        0007=<SELECT>
        0008=<SELECT>
        0009=English,RES\STRING\IJInstUS.ini,RES\DLL\IJInstUS.dll
        000A=<SELECT>
        000B=<SELECT>
        000C=<SELECT>
        000E=<SELECT>
        0010=<SELECT>
        0011=<SELECT>
        0012=<SELECT>
        0013=<SELECT>
        0014=<SELECT>
        0015=<SELECT>
        0016=<SELECT>
        0019=<SELECT>
        001D=<SELECT>
        001E=<SELECT>
        001F=<SELECT>
        0021=<SELECT>

    ...in case English is the preferred language. It's a way to force the installation program to install only English language support. The iP2770 is a model for the Asian market, so if you want to check this out you need to go to the Canon India download page (for instance) to get the driver.

    Unfortunately this method is not possible with my iP4000: there is no driver available to download for it for Windows Vista. But is there really no way of changing the language of the UI in any normal way - without having to hack it? Besides, the driver for my printer ships with Windows Vista, so I never have to install any drivers, and therefore never get the chance to set the language, since the installation never happens. Any ideas?


  • Using different SSD types (not only SATA-based) as a system drive

    - by Hubert Kario
    Currently I have a ThinkPad X61s and want to make it both a bit faster and a bit more power efficient, so I thought adding an SSD would make the most sense. Unfortunately, for financial reasons, buying an SSD over 200 GB is out of reach for me (not only would it cost more than the rest of the laptop, but I also currently have a 500 GB drive in it, so such a drive would even be a kind of downgrade). In preliminary testing with a cheap Transcend 4 GB Class 6 card (14 MiB/s streaming, 9 MiB/s random read), boot times were cut in half, so putting the OS alone on flash would already be an improvement. Unfortunately, my system is now about 11 GiB in size, so anything less than 16 GB would be constraining.

    In this laptop I can connect additional drives in at least 5 different ways:

    - using a SATA-ATA converter caddy in the X6 UltraBase
    - using the internal mini PCIe slot
    - using the integrated SDHC slot
    - using the CardBus (a.k.a. PCMCIA or PC Card) slot
    - using USB

    Thankfully, because I only run Linux on this PC, bootability is irrelevant: I can put the /boot partition on the internal HDD and / on any of the above-mentioned flash memories (as I already did for the SDHC test). From what I was able to research, and from my own experience, those options come with rather big downsides or other problems:

    SATA-ATA caddy. Three downsides: I have to carry the UltraBase with me at all times (not really inconvenient, but the grams do add up) and couldn't disconnect it when I want to disconnect the battery; it makes the bay unusable for the optical drive and for occasional quick access to other hard drives; and the only caddies I could buy have rather flaky controllers, so putting my OS on one would hamper its stability.

    Internal mini PCIe slot. This would be an ideal solution, if only I could find real PCIe SSDs, not just devices that talk SATA or ATA over the PCIe mechanical connection (the ones used in the Dell Mini or Asus EEE). Theoretically Samsung did release such devices, but I couldn't find them at retail anywhere.

    Integrated SDHC slot. A nice solution with a single drawback: the fastest 16 GB SDHC card on the market can only do around 35 MiB/s read and 15 MiB/s write, while costing as much as a normal 40 GB SATA SSD that's 10 times faster. Not really cost-effective.

    CardBus (a.k.a. PCMCIA or PC Card) slot. These cards are much faster than the SDHC option (there are ones that do well over 50 MiB/s read in benchmarks), and from what I could find, the PCMCIA controller in my laptop supports UDMA, so it should be able to deliver comparable speeds. They still cost about the same as SD cards, but at least they provide streaming performance comparable to my current HDD.

    USB. The worst option. Not only is it limited to 20-30 MiB/s by the interface itself, the drive would stick out of the laptop, so it's a big no-no.

    The question. As such, I think going the "CF in a CardBus adapter" route will be the best option. My question is: did anyone try using CF cards in CardBus adapters as system drives with Linux on ThinkPad laptops? On laptops in general? What was the real-world performance? I don't have any CF cards, so I can't check how well suspend/resume works, or how easy it is to make it work in the initramfs (I'm using Arch Linux, and the SD card was trivial: add 3 modules in a single config line and rebuild the initramfs), so any tips/gotchas on this are welcome as well.
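
    (A small sketch for the benchmarking part of the question - once a candidate card shows up as a block device, two quick sanity checks; the device name is illustrative:)

        # Streaming read speed, bypassing the page cache
        sudo hdparm -t /dev/sdb
        # Rough feel for small-block reads
        sudo dd if=/dev/sdb of=/dev/null bs=4k count=10000 iflag=direct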


  • RAID controller dropping the wrong drive

    - by bramp
    I've been having an issue with a 3ware 9500S-8 RAID 10, and I have contacted their tech support, but I wanted to hear the serverfault community's recommendations. Firstly, all my data is backed up and secure, so I don't mind blowing my RAID away if I have to. But let me describe the problem I've been seeing.

    A month ago, disk 6 dropped out of the RAID. It is mirrored with disk 7, so I wasn't that bothered. I went to the data centre and replaced it. When I got back to the office, I noticed that disk 6 was still not in the RAID, and in fact the controller was showing the name of the old drive. A week later I went back and replaced the drive again, thinking I might have swapped in a bad drive. Still the same problem. I decided to reboot the machine to see if that would "force" the controller into seeing the new drive. It did, and a rebuild started (from disk 7). Eventually both drives were showing as good.

    A week later, MySQL flagged the database as corrupt and was unable to repair it. I don't know what went wrong, but I suspected this 6-7 pair. At this point I noticed that the RAID had been verifying itself constantly, over and over. Regardless, I began to rebuild the database, which took about 19 hours - it's a big database. Near the end of the repair, the RAID controller told me it had dropped disk 7, and that some data was most likely corrupted.

    I contacted LSI tech support, and they very promptly started to help me. I mentioned that drive 7 had been dropped; they suspect that drive 7 was always at fault and drive 6 had always been good. I want to know how often a RAID controller drops the wrong drive (in this case dropping drive 6 a month ago, instead of 7). I foolishly didn't run smartctl on the drives before I started swapping them out; I just assumed the RAID controller knew what it was talking about.

    I think my plan of action is to replace drive 7, rebuild the array from scratch, double-check smartctl on ALL the disks, and then start restoring my data again. I would appreciate anyone's input on the correct procedure for swapping drives, and on how often failures like this happen. If anyone would like more information, I'd be happy to provide it. Thanks in advance.

    Some more information: I'm running CentOS 5.3 with two RAID arrays, a simple RAID 1 for the OS and a RAID 10 for the database. Both arrays are on different controllers. The RAID 10 was made of 10 identical ST3640323AS drives, until I swapped in a SAMSUNG HD103SJ last month.
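
    (For the smartctl part of the plan, a sketch of how member disks behind a 3ware card are usually queried - the controller device node and port numbers below are illustrative and depend on the driver in use:)

        # 9000-series 3ware cards typically appear as /dev/twa0
        # (older 6/7/8000-series cards as /dev/twe0); query ports 6 and 7:
        sudo smartctl -a -d 3ware,6 /dev/twa0
        sudo smartctl -a -d 3ware,7 /dev/twa0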


  • How to diagnose computer lockups and freezes?

    - by Scott Mitchell
    I built a desktop computer a couple years back with the following specs:

    - CPU: Intel Core 2 Quad Q9300 Yorkfield 2.5GHz, 6 MB L2 cache, LGA 775, 95W quad-core processor (BX80580Q9300)
    - Motherboard: EVGA 122-CK-NF68-T1 LGA 775 NVIDIA nForce 680i SLI ATX Intel motherboard
    - Video cards: two EVGA 256-P2-N758-TR GeForce 8600GT SCC 256 MB 128-bit GDDR3 PCI Express x16 SLI-supported video cards
    - PSU: SeaSonic S12 Energy Plus SS-550HT 550W ATX12V V2.3 / EPS12V V2.91, SLI Certified, CrossFire Ready, 80 PLUS Certified, Active PFC power supply
    - Memory: two G.SKILL 4 GB (2 x 2 GB) 240-pin DDR2 SDRAM DDR2 800 (PC2 6400) dual-channel kits, model F2-6400CL5D-4GBPQ

    Since its inception, the machine has periodically locked up, the regularity varying over the years from once a day to once a month; typically, lockups happen once every few days. By "lockup" I mean my computer just freezes: the screen locks up, I can't move the mouse, and hitting keys that normally toggle LEDs on the keyboard (such as Caps Lock) no longer does so. If music was playing at the time of the lockup, noise keeps coming out of the speakers, but it's just the current frequency/note playing indefinitely. There is no BSOD. When such a lockup occurs, I have to do a hard reboot, either turning off the computer or hitting the reset button.

    I have the most recent NVIDIA hardware drivers and update them semi-regularly, but that hasn't seemed to help. I am currently using Windows 7 x64, but was previously using Windows Server 2003 x64 with the same lockup issues. My guess is that it's somehow video driver or motherboard related, but I don't know how to go about diagnosing the problem to narrow down which of the two is the culprit.

    Additional information re: cooling. I've not installed any after-market cooling aside from two regular fans I scavenged from an older computer; the fan atop the CPU is the one that shipped with it. One of the scavenged fans is located at the bottom corner of the tower, in an attempt to create some front-to-back airflow; the second is pointed directly at the two video cards.

    SpeedFan installation and readings. Per studiohack's suggestion, I installed SpeedFan, which provided the following temperature readings:

        GPU:    63C
        GPU:    65C
        System: 76C
        CPU:    64C
        AUX:    36C
        Core 0: 78C
        Core 1: 76C
        Core 2: 79C
        Core 3: 79C

    Update #3: Another lockup :-( Well, I had another lockup last night. SpeedFan reported the CPU temp at 38 C when it happened, and there was no spike in temperature leading up to the freeze. One thing I notice is that the freeze seems more likely to happen when I am watching a video; in fact, 4 of the last 5 freezes over the past month have been while watching a video on Flickr - not necessarily the same video, but a video nevertheless. I don't know if this is just coincidence or if it means anything. (As an aside, each night before bedtime my 2-year-old daughter sits on my lap and watches some home videos on Flickr and, in the last month, has learned the phrase, "Uh oh, computer broke.")

    Update #4: MemTest86 and 3DMark06 test results. Per suggestions in the comments, I ran MemTest86 overnight and it cycled through the 8 GB of memory 5 times without error. I also ran the 3DMark06 test without a problem (see my scores at http://3dmark.com/3dm06/15163549). So... what now? :-) Any further suggestions on what to check? Is there some way to get a stack trace or something when the computer locks up like that?

    Resolution: I never did figure out the particular problem, but based on the suggestions here and elsewhere, I'm presuming it was a motherboard issue. In any event, I recently upgraded my system, buying a new motherboard, PSU, CPU, and RAM, and that new rig has been working splendidly the past several weeks. I am using the same graphics cards as in the old setup, so I think it's safe to reason that they weren't the cause of the problem.
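
    (On the stack-trace question above, a hedged sketch: Windows can be configured to raise a bugcheck - and write a memory dump - from the keyboard, which sometimes still works during a hard freeze. The registry key below is for PS/2 keyboards via i8042prt; USB keyboards use the kbdhid service key instead.)

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters" /v CrashOnCtrlScroll /t REG_DWORD /d 1
        REM After a reboot: hold the right Ctrl key and press Scroll Lock twice
        REM to force a crash dump that WinDbg can analyze.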


  • How to use a different Ethernet connection

    - by SteveC
    I'm running a virtual machine at home which has a VPN connection to our main office, but I also want to connect to a share on another machine at home. When I check with IPCONFIG I can see two Ethernet connections. My work VPN:

        Ethernet adapter Local Area Connection* 11:
           Connection-specific DNS Suffix  . :
           Link-local IPv6 Address . . . . . : xxxx::xxxx:xxxx:xxxx:xxxxxxx
           IPv4 Address. . . . . . . . . . . : XXX.XXX.XXX.XXX
           Subnet Mask . . . . . . . . . . . : 255.255.254.0
           Default Gateway . . . . . . . . . : XXX.XXX.XXX.XXX

    and home local:

        Ethernet adapter Local Area Connection:
           Connection-specific DNS Suffix  . : lan
           Link-local IPv6 Address . . . . . : xxxx::xxxx:xxxx:xxxx:xxxxxxx
           IPv4 Address. . . . . . . . . . . : 192.168.1.70
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Default Gateway . . . . . . . . . :

    What's weird is that when I worked before with a plugged-in Ethernet cable, I never had any problem getting to the share. I can PING the other machine, but I can't access the share at \\othermachine\c$. I tried TRACERT, but it disappears off to the work network and eventually gets back to the local machine after a few time-outs. Is there any way to "force" the connection to stay local?

    UPDATE: the VPN is an AEP SSL tunnel.
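
    (A hedged sketch of one way to pin the home subnet to the local adapter so the VPN's routes don't capture it - run from an elevated prompt; the addresses come from the ipconfig output above, and using the adapter's own IP as the gateway marks the route as on-link:)

        route add 192.168.1.0 mask 255.255.255.0 192.168.1.70
        REM then check which path traffic to the share host actually takes:
        route print 192.168.1.*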


  • .AVI Files randomly cease to open, other strange errors too

    - by Ben Franchuk
    I recently (a couple weeks ago) downloaded the complete series of Seinfeld, in varying file types. I watched the episodes in sequence by season and airing date, and all was well: every file played fine with my media player of choice (BS Player). Once I had finished, I went on to watch some other TV I had previously downloaded (the U.S. series of The Office), then some films and some music over the following weeks (keep in mind all of these files are on the same hard drive).

    More recently, I went back to watching Seinfeld. The episodes played as well as they did before - with the exception of a few in season 7. I have not tested every episode in the season, but upon inspection the majority of them are experiencing this problem, the problem being simply that they don't open! BS Player says that the files are either damaged or that the codecs to play them are not on my computer; however, I am certain that I do have the codecs, and I am pretty sure the files are NOT damaged either. I have played the files with other players (VLC, Media Player Classic, and Windows Media Player) too, with the same result: they don't open. Seemingly the only way I can differentiate between a damaged and a non-damaged file is by the way its icon and details appear in Windows Explorer (the screenshots of a non-damaged and a damaged file are not reproduced here).

    The most disturbing and confusing part, though, is the last episode in the season. It opens, but not as a video - as a 1 hour, 16 minute, 35 second audio file! The file plays a song for the first 4 or so minutes, then is pretty much silent (except for some extremely quiet noise) until the last minute or so, when a random array of chopped-up sounds and beeping noises plays. I do not recognise the song at the beginning, but by the sound of it, it is by the artist "Mr. Oizo," whose complete works I downloaded a couple of weeks before now - and a bit before then I had finished downloading season 9 of Seinfeld (not affected by these problems). I'd also like to note that this file reads as the same size as the other files in the season (around 175 MB) and also opens as a video clip.

    I have NEVER experienced any of these problems in the past, and they seem to be affecting only the one season of my downloaded TV - or at least as far as I have noticed: the problems have not arisen with any of the other files on my hard drive, or any of the files downloaded around or after the time I downloaded season 7 of Seinfeld. I use the hard drive these files are on almost every day, so could that be the cause of these problems? Is this a sign that my HDD is soon going to die? If it helps, the HDD is a Western Digital MyBook 1.5 TB 7500 RPM, connected to the computer via USB 2.0.

    EDIT: I noticed that this problem is now occurring with season 9 of Seinfeld - and, presumably, other files on the drive I have yet to check. Please, if you have ANY IDEA AT ALL what may be causing this or how to fix it, do tell me!
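
    (A hedged first diagnostic, given that the drive itself is suspect: have Windows scan the volume for filesystem damage and bad sectors, and query the drive's own failure prediction. The drive letter below is illustrative.)

        REM Scan the external drive for filesystem errors and bad sectors
        chkdsk E: /f /r
        REM Query each disk's SMART-based status via WMI
        wmic diskdrive get model,status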


  • Shinken/Nagios: differentiating between warning alerts and critical alerts

    - by SWdream
    I am using Shinken for my monitoring system, and now I have a problem configuring Shinken notifications. My goal is to differentiate between notifications for the warning state and the critical state of a service check:

    - warning state: send alerts from 08:00 to 18:00 every day, via email and SMS, with a notification_interval of 60 minutes (re-notify about service problems every hour)
    - critical state: send alerts at all times (24x7), via email and SMS, with a notification_interval of 30 minutes

    Please show me how to solve my problem! I have tried the following. Contact templates:

        define contact{
            name                            warning-contact   ; The name of this contact template
            register                        0                 ; DONT REGISTER THIS DEFINITION - ITS NOT A REAL CONTACT, JUST A TEMPLATE!
            host_notifications_enabled      1
            service_notifications_enabled   1
            email                           shinken@localhost
            can_submit_commands             1
            notificationways                email_warning, sms_warning
        }

        define contact{
            name                            critical-contact  ; The name of this contact template
            register                        0                 ; DONT REGISTER THIS DEFINITION - ITS NOT A REAL CONTACT, JUST A TEMPLATE!
            host_notifications_enabled      1
            service_notifications_enabled   1
            email                           shinken@localhost
            can_submit_commands             1
            notificationways                email_critical, sms_critical
        }

    Time period templates:

        define timeperiod{
            timeperiod_name warning
            alias           Normal Work Hours
            monday          08:00-18:00
            tuesday         08:00-18:00
            wednesday       08:00-18:00
            thursday        08:00-18:00
            friday          08:00-18:00
            saturday        08:00-18:00
            sunday          08:00-18:00
            #exclude        24x7
        }

        define timeperiod{
            timeperiod_name 24x7
            alias           24_Hours_A_Day,_7_Days_A_Week
            sunday          00:00-24:00
            monday          00:00-24:00
            tuesday         00:00-24:00
            wednesday       00:00-24:00
            thursday        00:00-24:00
            friday          00:00-24:00
            saturday        00:00-24:00
            #exclude        workhours
        }

    Notification way templates:

        define notificationway{
            notificationway_name            email_warning
            service_notification_period     warning
            host_notification_period        warning
            service_notification_options    w
            host_notification_options       d,u,r,f,s
            notification_interval           60   ; Resend notifications every 60 minutes
            service_notification_commands   notify-service-by-email   ; send service notifications via email
            host_notification_commands      notify-host-by-email      ; send host notifications via email
        }

        define notificationway{
            notificationway_name            email_critical
            service_notification_period     24x7
            host_notification_period        24x7
            service_notification_options    c,r
            host_notification_options       d,u,r,f,s
            notification_interval           30   ; Resend notifications every 30 minutes
            service_notification_commands   notify-service-by-email   ; send service notifications via email
            host_notification_commands      notify-host-by-email      ; send host notifications via email
        }

        define notificationway{
            notificationway_name            sms_warning
            service_notification_period     warning
            host_notification_period        warning
            service_notification_options    w
            host_notification_options       d,u,r,f,s
            notification_interval           60   ; Resend notifications every 60 minutes
            service_notification_commands   notify-service-by-sms   ; send service notifications via sms
            host_notification_commands      notify-host-by-sms      ; send host notifications via sms
        }

        define notificationway{
            notificationway_name            sms_critical
            service_notification_period     24x7
            host_notification_period        24x7
            service_notification_options    c,r
            host_notification_options       d,u,r,f,s
            notification_interval           30   ; Resend notifications every 30 minutes
            service_notification_commands   notify-service-by-sms   ; send service notifications via sms
            host_notification_commands      notify-host-by-sms      ; send host notifications via sms
        }

    My contacts:

        define contact{
            use           warning-contact
            contact_name  thanhwarn
            email         xxxx
            pager         xxxx        ; contact phone number
        }

        define contact{
            use           critical-contact
            contact_name  thanhcritical
            email         xxxxx
            pager         01689xxxx   ; contact phone number
        }

    And the service definition:

        define service{
            use                  generic-service
            service_description  check_ram
            host_name            graphite
            contacts             thanhcritical, thanhwarn
            check_command        check_nrpe!check_ram
        }

    But my Shinken system doesn't send alerts, and I don't understand why. Please show me where I went wrong. Thanks all!


  • Rosewill RSV-S5 and its transfer speeds

    - by DoomStone
    I have just bought a Rosewill RSV-S5 and installed 5 x 1.5 TB Western Digital Green disks in it. I then created a RAID 5 across all of them with the software that came with the hardware. The RAID itself works fine, but it is SLOW: I can obtain a maximum of only 25 MB/s, and if SABnzbd+ is downloading at 5 MB/s it has a hard time streaming a normal DivX (700 MB) movie. Is this normal, or is there something wrong?

    Edit: the link should be able to handle 3 Gbps = 384 megabytes/second.

    Edit 2: As you can see, I'm downloading at only 3.76 MB/s while trying to watch V s02e08 (720p), and it is completely unwatchable: I can see 30 seconds, then it buffers for 20 seconds.

    Other information that might be required: I'm running Windows Server 2008 R2, optimized for program performance, with Windows installed on a 60 GB SSD. I have a 50 Mb/s internet connection and a 1 Gb/s LAN, all connected with Cat6 Ethernet cables. The MCE uses a Gigabyte EP35C-DS3R motherboard with 2 GB DDR2 RAM.

    Edit 3: I used a chunk size of 128 KB.

    Edit 4: I found this on Newegg:

        Pros: Enclosure for 5x2TB hard drive is fine. This is basically a rebranded San Digital
        TR5M-B product. For support Rosewill tells you to contact San Digital. No direct support
        from Silicon Image for the computer raid card.

        Cons: Includes computer Silicon Image 3132 raid card, extremely slow raid 5 write (our
        tests ~10MB/s). Compare to regular internal local drive write 30-60MB/s. We basically
        dumped the Sil3132 card and replaced with High Point RocketRaid 622 card for extra $69.99.
        Note for RR622, turn off ECRC (end to end CRC check) for card to work on IBM xserver.
        What took 12hrs to copy now took 2-3hrs. San Digital realized the problem and has the
        newer model TR5M-BP TowerRaid Plus that comes with High Point RocketRaid 622 card.
        Rosewill should discontinue this product and go with TR5M-BP. Could not get Silicon Image
        raid management software to work with complicated 2008R2 server with 10 NICs, application
        doesn't know how to talk to localhost port with all those NICs. No updates from Silicon
        Image and support from San Digital ignored. Gave up on Sil3132 card. Save yourself from a
        lot of headaches, get the RR622 card too if you are going to buy this product.

        Other Thoughts: The newer model is TR5M-BP TowerRaid Plus, comes with High Point
        RocketRaid 622 raid card for the PC instead of Silicon Image Sil3132. According to San
        Digital, raid 5 performance for Sil3132 read 80MB/s write 19MB/s, and RR622 read 154MB/s
        write 149MB/s. Our RR622 tests gave (8TB raid 5) write ~80-110MB/s copying 40GB file took
        8mins.

    So I have now ordered a HighPoint RocketRAID 622 2P ext SATA III and hope it will solve my problems.


  • Capistrano + Nginx + Passenger = 403

    - by slimchrisp
    I asked this over at Stack Overflow as well, but still haven't received any answers that helped me solve this problem. I have spent almost a week on it at this point, and I'm just not making any headway. The issue seems to be pretty common, but none of the solutions I found online work for me. A buddy of mine is creating the same setup, and he is having the same issue. After a few days stuck with the 403 error I started over using this tutorial: http://blog.ninjahideout.com/posts/a-guide-to-a-nginx-passenger-and-rvm-server. I had hoped starting from scratch with it would work, but no dice. Either way, if you view the tutorial you can see what steps I have taken. Here is essentially what I have going on:

    - A VPS account on linode.com; the server OS is Ubuntu 10.04.
    - The local OS used to deploy with Capistrano (it shouldn't matter, but just so you know) is Snow Leopard 10.6.6.
    - RVM on the server, version 1.2.2.
    - I was previously on ruby-1.9.2-p0 [ i386 ], but per the tutorial above I switched to ree-1.8.7-2010.02 [ i386 ]. Running 'which ruby' confirms that I am using 1.8.7, with the output /usr/local/rvm/rubies/ree-1.8.7-2010.02/bin/ruby.
    - 'passenger -v' prints: Phusion Passenger version 3.0.2.
    - Running 'nginx -v' gives a message that the command nginx could not be found. The server is definitely there and running, as I can use nginx to serve static files, but this could have something to do with my problem.
    - Two users involved in the install: root, which I used to install everything, and deployer, a user I created specifically for deploying my applications.
    - My web app directory is in the deployer user's home directory: /home/deployer/webapps/mysite.com/public.
    - Per the Capistrano default deploy, a symbolic link called current is created in the public folder, pointing to /home/deployer/webapps/mysite.com/public/releases/most_current_release.
    - I have chmodded the deployer directory recursively to 777.
    - /opt/nginx permissions: rwxr-xr-x.
    - /usr/local/rvm/gems/ree-1.8.7-2010.02/gems/passenger-3.0.2 permissions: rwxrwsrwx.

    My nginx config file has gone through just short of an eternity of variations, but currently looks like this:

        worker_processes 1;

        events {
            worker_connections 1024;
        }

        http {
            passenger_root /usr/local/rvm/gems/ree-1.8.7-2010.02/gems/passenger-3.0.2;
            passenger_ruby /usr/local/rvm/bin/passenger_ruby;
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;

            server {
                # listen *:80;
                server_name mysite.com www.mysite.com;
                root /home/deployer/webapps/mysite.com/public/current;
                passenger_enabled on;
                passenger_friendly_error_pages on;
                access_log logs/mysite.com/server.log;
                error_log logs/mysite.com/error.log info;
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root html;
                }
            }
        }

    I bounce nginx, hit the site, and boom: 403, and the logs say "directory index of /home/deployer... is forbidden". As others with a similar problem have said, you can drop an index.html into public/releases/current_release and it will render - but Rails no worky.

    That's basically it. At this point I have just about exhausted every possible solution attempt I can think of. I am a programmer and definitely not a sysadmin, so I am 99% sure this has something to do with permissions that I have hosed, but for the life of me I just can't figure out where. If anyone can help I would really, really appreciate it. If there are any specific permission things you want me to check (i.e. groups/permissions), please include the commands to do so as well; hopefully this will help others who read this post in the future. Let me know if there is any other information I can provide, and thanks in advance!
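
    (An editorial sketch of one thing worth comparing, not a confirmed diagnosis: in the conventional Capistrano layout the current symlink lives at the application root, not inside public/, and nginx's root then points at the public directory inside current. If Passenger finds no config/environment.rb or config.ru one level above root, it falls back to serving the directory, which matches the "directory index forbidden" symptom. The paths below mirror that convention:)

        # conventional layout (directory names are Capistrano defaults):
        #   /home/deployer/webapps/mysite.com/releases/<timestamp>/
        #   /home/deployer/webapps/mysite.com/current -> releases/<timestamp>
        # nginx then uses:
        root /home/deployer/webapps/mysite.com/current/public;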


  • Is my SMTP server being used to send spam?

    - by Milos
    I have a server running Postfix. For several days now I have noticed a lot of processes running there, and when I checked, a lot of emails were being sent. Here is an example from the mail log:

        Aug 18 11:54:56 mem postfix/smtpd[9963]: connect from dslb-188-096-082-167.188.096.pools.vodafone-ip.de[188.96.82.167]
        Aug 18 11:54:56 mem postfix/smtpd[9301]: connect from unknown[186.113.45.4]
        Aug 18 11:54:56 mem postfix/smtpd[9963]: 525E7114012D: client=dslb-188-096-082-167.188.096.pools.vodafone-ip.de[188.96.82.167]
        Aug 18 11:54:56 mem postfix/cleanup[9970]: 525E7114012D: message-id=<B55835C9027BFA9D16CCBB556DB2F48BB82DF004000480BA-db0c3ce8aa74446411898d0d2feb3001@email.filmforthoughtinc.com>
        Aug 18 11:54:56 mem postfix/qmgr[2581]: 525E7114012D: from=<[email protected]>, size=10702, nrcpt=1 (queue active)
        Aug 18 11:54:56 mem postfix/smtpd[9301]: EC52711401DC: client=unknown[186.113.45.4]
        Aug 18 11:54:57 mem postfix/smtpd[9963]: disconnect from dslb-188-096-082-167.188.096.pools.vodafone-ip.de[188.96.82.167]
        Aug 18 11:54:57 mem postfix/cleanup[8597]: EC52711401DC: message-id=<4C905D97606B436FE50C6F738DE014D9D84F2185BA815D81-1a4dbe6fc2bfcc8183f5faf901cfa15e@email.manguerasespecializadas.com>
        Aug 18 11:54:57 mem postfix/smtp[9971]: 525E7114012D: to=<[email protected]>, relay=mail.mdpi.com[209.237.236.228]:25, delay=1.2, delays=0.55/0/0.45/0.16, dsn=5.1.1, status=bounced (host mail.mdpi.com[209.237.236.228] said: 550 5.1.1 <[email protected]>: Recipient address rejected: mdpi.com (in reply to RCPT TO command))
        Aug 18 11:54:57 mem postfix/cleanup[10067]: 8B1E11140268: message-id=<[email protected]>
        Aug 18 11:54:57 mem postfix/bounce[10001]: 525E7114012D: sender non-delivery notification: 8B1E11140268
        Aug 18 11:54:57 mem postfix/qmgr[2581]: 8B1E11140268: from=<>, size=12693, nrcpt=1 (queue active)
        Aug 18 11:54:57 mem postfix/qmgr[2581]: 525E7114012D: removed
        Aug 18 11:54:57 mem postfix/qmgr[2581]: EC52711401DC: from=<[email protected]>, size=10978, nrcpt=1 (queue active)
        Aug 18 11:54:57 mem postfix/smtp[10013]: connect to aspmx.l.google.com[2607:f8b0:400d:c03::1b]:25: Network is unreachable
        Aug 18 11:54:57 mem postfix/smtpd[9301]: disconnect from unknown[186.113.45.4]
        Aug 18 11:54:58 mem postfix/smtp[10013]: 8B1E11140268: to=<[email protected]>, relay=aspmx.l.google.com[74.125.22.26]:25, delay=0.5, delays=0.06/0/0.28/0.16, dsn=5.1.1, status=bounced (host aspmx.l.google.com[74.125.22.26] said: 550-5.1.1 The email account that you tried to reach does not exist. Please try 550-5.1.1 double-checking the recipient's email address for typos or 550-5.1.1 unnecessary spaces. Learn more at 550 5.1.1 http://support.google.com/mail/bin/answer.py?answer=6596 l7si24621420qad.26 - gsmtp (in reply to RCPT TO command))
        Aug 18 11:54:58 mem postfix/qmgr[2581]: 8B1E11140268: removed
        Aug 18 11:54:58 mem postfix/smtp[9971]: EC52711401DC: to=<[email protected]>, relay=mail.mdpi.com[209.237.236.228]:25, delay=1.2, delays=0.66/0/0.44/0.12, dsn=5.1.1, status=bounced (host mail.mdpi.com[209.237.236.228] said: 550 5.1.1 <[email protected]>: Recipient address rejected: mdpi.com (in reply to RCPT TO command))
        Aug 18 11:54:58 mem postfix/cleanup[9970]: 414361140254: message-id=<[email protected]>
        Aug 18 11:54:58 mem postfix/bounce[10001]: EC52711401DC: sender non-delivery notification: 414361140254
        Aug 18 11:54:58 mem postfix/qmgr[2581]: 414361140254: from=<>, size=13057, nrcpt=1 (queue active)
        Aug 18 11:54:58 mem postfix/qmgr[2581]: EC52711401DC: removed
        Aug 18 11:55:01 mem postfix/smtp[10002]: 414361140254: to=<[email protected]>, relay=manguerasespecializadas.com[99.198.96.210]:25, delay=2.9, delays=0.04/0/2.1/0.84, dsn=2.0.0, status=sent (250 OK id=1XJPGs-0007BE-OI)
        Aug 18 11:55:01 mem postfix/qmgr[2581]: 414361140254: removed

    Is my server being attacked or used for spam? How can I check? Thank you.
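
    (A few stock checks, sketched for a typical Postfix install using only the standard Postfix utilities: see what is queued, whether the server relays for the world, and whether an authenticated account or local script is injecting the mail.)

        mailq | tail -n 5                       # how much is waiting in the queue
        postconf mynetworks                     # should not contain foreign ranges
        postconf smtpd_recipient_restrictions   # should reject unauthenticated relaying
        grep sasl_username /var/log/mail.log | head   # spot a compromised auth account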


  • How to free up space on RHEL6 /boot safely?

    - by ams
    I am trying to do yum update on an RHEL 6 box and I am getting this error message:

        Transaction Check Error:
          installing package kernel-2.6.32-279.9.1.el6.x86_64 needs 10MB on the /boot filesystem
          installing package grub-1:0.97-77.el6.x86_64 needs 10MB on the /boot filesystem

        Error Summary
        -------------
        Disk Requirements:
          At least 10MB more space needed on the /boot filesystem.

    My /boot contains the following:

        # ls -lah /boot
        total 74M
        dr-xr-xr-x.  5 root root 2.0K Jun 10 08:05 .
        drwxr-xr-x. 23 root root 4.0K Aug 27 03:08 ..
        -rw-r--r--   1 root root  99K Apr 26 12:53 config-2.6.32-220.17.1.el6.x86_64
        -rw-r--r--   1 root root  99K Feb 10  2012 config-2.6.32-220.7.1.el6.x86_64
        -rw-r--r--.  1 root root  99K Nov  9  2011 config-2.6.32-220.el6.x86_64
        drwxr-xr-x.  3 root root 1.0K Mar 29  2012 efi
        drwxr-xr-x.  2 root root 1.0K Jun 10 07:53 grub
        -rw-r--r--   1 root root  15M Jun 10 07:53 initramfs-2.6.32-220.17.1.el6.x86_64.img
        -rw-r--r--   1 root root  15M Mar 29  2012 initramfs-2.6.32-220.7.1.el6.x86_64.img
        -rw-r--r--.  1 root root  15M Mar 29  2012 initramfs-2.6.32-220.el6.x86_64.img
        -rw-------   1 root root 3.4M Jun 10 08:06 initrd-2.6.32-220.17.1.el6.x86_64kdump.img
        -rw-------   1 root root 3.5M Jun 10 07:53 initrd-2.6.32-220.7.1.el6.x86_64kdump.img
        -rw-------   1 root root 3.4M Mar 29  2012 initrd-2.6.32-220.el6.x86_64kdump.img
        drwx------.  2 root root  12K Mar 29  2012 lost+found
        -rw-r--r--   1 root root 168K Apr 26 12:55 symvers-2.6.32-220.17.1.el6.x86_64.gz
        -rw-r--r--   1 root root 168K Feb 10  2012 symvers-2.6.32-220.7.1.el6.x86_64.gz
        -rw-r--r--.  1 root root 168K Nov  9  2011 symvers-2.6.32-220.el6.x86_64.gz
        -rw-r--r--   1 root root 2.3M Apr 26 12:53 System.map-2.6.32-220.17.1.el6.x86_64
        -rw-r--r--   1 root root 2.3M Feb 10  2012 System.map-2.6.32-220.7.1.el6.x86_64
        -rw-r--r--.  1 root root 2.3M Nov  9  2011 System.map-2.6.32-220.el6.x86_64
        -rwxr-xr-x   1 root root 3.8M Apr 26 12:53 vmlinuz-2.6.32-220.17.1.el6.x86_64
        -rw-r--r--   1 root root  171 Apr 26 12:53 .vmlinuz-2.6.32-220.17.1.el6.x86_64.hmac
        -rwxr-xr-x   1 root root 3.8M Feb 10  2012 vmlinuz-2.6.32-220.7.1.el6.x86_64
        -rw-r--r--   1 root root  170 Feb 10  2012 .vmlinuz-2.6.32-220.7.1.el6.x86_64.hmac
        -rwxr-xr-x.  1 root root 3.8M Nov  9  2011 vmlinuz-2.6.32-220.el6.x86_64
        -rw-r--r--.  1 root root  166 Nov  9  2011 .vmlinuz-2.6.32-220.el6.x86_64.hmac

    And here is the disk usage on /boot:

        # du -h
        13K     ./lost+found
        282K    ./grub
        247K    ./efi/EFI/redhat
        249K    ./efi/EFI
        251K    ./efi
        75M     .

    The problem is that when I got this server from my ISP, I used their default image for RHEL 6, which allocates only 100 MB for /boot - clearly not enough. How can I get around this problem? Is it safe to delete any of the above files, some of which seem to be on the disk more than once? Is there some way to expand /boot without re-imaging the machine?
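
    (A sketch of the usual safe route on RHEL 6: removing the packages for old kernels rather than deleting files by hand, since each installed kernel owns a matching vmlinuz/initramfs/System.map set in /boot. package-cleanup comes from yum-utils; --count is how many kernels to keep, including the running one.)

        # yum install yum-utils
        # package-cleanup --oldkernels --count=2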


  • Installing .NET application on IIS 7.5 issues

    - by Juw
    I really need some help here; I am at a loss. I am trying to install a web service that someone else wrote in .NET, and I have only a basic understanding of IIS. The web service works just fine on my dev server, but when I try to move it to a production server, bad things happen. It was located in C:\inetpub\wwwroot\ on the dev server, but on the production server it is to live in D:\services\.

    I have managed to install the application on the production server and everything seems fine and dandy, except that when I click "Test Settings" in the initial setup, I get an "Invalid application path" error. I can close that down and still install it, but when I try to access the web service with http://myserver.com/webservice/GetData, nothing happens: just a blank page, and when I check the response headers, a 500 error. I don't know what is going on here or where the problem is.

    UPDATE: I noticed that my dev server runs its application pool on .NET 4 in "classic" mode, while on the production server it was .NET 4 in "integrated" mode, so I changed it to "classic". I still get a blank page, but checking the log gives this:

        2012-10-03 14:57:00 ip removed GET /boo/GetData - 80 - ip removed Mozilla/5.0+(Windows+NT+6.1;+WOW64;+rv:15.0)+Gecko/20100101+Firefox/15.0.1 404 2 1260 203

    I am posting the config file here (it is from my dev server; I just copied it to the production server, but that obviously didn't work) in the hope that someone notices something odd. Thanks in advance!

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.web>
            <identity impersonate="true" /> <!-- Impersonate NT AUTHORITY/IUSR -->
            <compilation targetFramework="4.0">
              <assemblies>
                <add assembly="System.Data.Entity, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b7735c561131e089" />
              </assemblies>
            </compilation>
            <pages controlRenderingCompatibilityVersion="3.5" clientIDMode="AutoID" />
          </system.web>
          <system.webServer>
            <modules runAllManagedModulesForAllRequests="true" />
            <httpErrors existingResponse="PassThrough" />
            <httpProtocol>
              <customHeaders>
                <add name="Access-Control-Allow-Origin" value="*" />
              </customHeaders>
            </httpProtocol>
            <directoryBrowse enabled="false" />
          </system.webServer>
          <system.serviceModel>
            <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
            <standardEndpoints>
              <webHttpEndpoint>
                <!-- Configure the WCF REST service base address via the global.asax.cs file and the
                     default endpoint via the attributes on the <standardEndpoint> element below -->
                <standardEndpoint name="" helpEnabled="true" automaticFormatSelectionEnabled="true" />
              </webHttpEndpoint>
            </standardEndpoints>
          </system.serviceModel>
          <connectionStrings>
            <add name="Entities" connectionString="metadata=res://*/DataModel.csdl|res://*/DataModel.ssdl|res://*/DataModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=someip;initial catalog=db_90;User ID=user1;Password=access2;multipleactiveresultsets=True;App=EntityFramework&quot;" providerName="System.Data.EntityClient" />
          </connectionStrings>
        </configuration>
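
    (A hedged sketch of a cheap first check on the production box: the 404 in the log suggests the request may not be reaching the intended application at all, so confirming how IIS has the app and its pool registered is worth doing before digging into the config. The path below is the stock appcmd location.)

        %windir%\system32\inetsrv\appcmd list app
        %windir%\system32\inetsrv\appcmd list apppool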

    Read the article

  • Configuring nginx server to handle requests from multiple domains

    - by KillABug
Use case: I am working on a web application which lets users create HTML templates and publish them on Amazon S3. To publish the websites I use nginx as a proxy server. When a user enters a website URL, I want to check whether the request comes from my application, i.e. app.mysite.com (this won't change), and route it to Apache for regular access; if it comes from some other domain, like a regular URL www.mysite.com (this needs to be handled dynamically and can be random), it should go to the S3 bucket that hosts the template.

My current configuration is:

user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    charset utf-8;
    keepalive_timeout 65;
    server_tokens off;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay off;

    # Default server block to catch undefined host names
    server {
        listen 80;
        server_name app.mysite.com;
        access_log off;
        error_log off;
        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_redirect off;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_buffer_size 4k;
            proxy_buffers 4 32k;
            proxy_busy_buffers_size 64k;
        }
    }
}

# Load all the sites
include /etc/nginx/conf.d/*.conf;

Update, as I was not clear enough: my question is how I can handle both domains in the config file. nginx is a proxy server on port 80 on an EC2 instance. The same instance also hosts my application, which runs on Apache on a different port. So any request for my application comes from the domain app.mysite.com, and I also want to proxy the templates hosted on S3, which sit inside a bucket, e.g. sites.mysite.com/coolsite.com/index.html. So if someone hits coolsite.com, I want to proxy it to the folder sites.mysite.com/coolsite.com/index.html and not to app.syartee.com. Hope I am clear.

The other server block:

# Server for S3
server {
    # Listen on port 80 for all IPs associated with your machine
    listen 80;
    # Catch all other server names (I want it to handle domains other than app.mysite.com)
    server_name _;

    # This code gets the host without www. in front and places it inside
    # the $host_without_www variable.
    # If someone requests www.coolsite.com, then $host_without_www will have the value coolsite.com
    set $host_without_www $host;
    if ($host ~* www\.(.*)) {
        set $host_without_www $1;
    }

    location / {
        # This code rewrites the original request and adds the host without www in front.
        # E.g. if someone requests
        #   /directory/file.ext?param=value
        # from the coolsite.com site, the request is rewritten to
        #   /coolsite.com/directory/file.ext?param=value
        set $foo 'http://sites.mysite.com';
        # echo "$foo";
        rewrite ^(.*)$ $foo/$host_without_www$1 break;
        # The rewritten request is passed to S3
        proxy_pass http://sites.mysite.com;
        include /etc/nginx/proxy_params;
    }
}

Also, I understand I will have to make DNS changes in the CNAME of the domain. I guess I will have to add app.mysite.com under the CNAME of the template domain name? Please correct me if I'm wrong. Thank you for your time.
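One detail worth flagging in a sketch: when proxy_pass is built from variables such as $host_without_www, nginx evaluates the upstream at request time and needs a resolver directive (and a rewrite whose replacement starts with http:// would otherwise trigger a redirect instead of proxying). A minimal hedged version of the catch-all block follows; the resolver address and bucket layout are my assumptions, not anything confirmed in the question:

# Sketch only -- resolver IP and bucket path layout are assumptions.
server {
    listen 80 default_server;
    server_name _;
    resolver 8.8.8.8;   # any reachable DNS resolver; required for variable proxy_pass

    location / {
        set $host_without_www $host;
        if ($host ~* ^www\.(.*)$) {
            set $host_without_www $1;
        }
        # Map coolsite.com/anything -> sites.mysite.com/coolsite.com/anything
        proxy_set_header Host sites.mysite.com;
        proxy_pass http://sites.mysite.com/$host_without_www$request_uri;
    }
}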

    Read the article

  • Vista 64-bit, DISK BOOT FAILURE

    - by weka
So I have this Acer Aspire AX3200-U3600A with Windows Vista (64-bit). Every night I turn it off, and I turn it back on in the morning. Around three weeks ago I did a fresh factory reimage: good as new. Then, around two days ago, when I turned it on I noticed it was running extremely slow; it would often freeze up while I had multiple applications open, when it usually never froze. So I decided to restart my computer. Big mistake. My computer froze right after I clicked shut down. I waited a while. Nothing. Waited some minutes. Nope. I decided to shut it down by pressing the power button.

Here is where the problems begin. When I turned it back on, I saw the Windows logo and loading bar, and then it loaded to black. I turned it off again forcefully by power button, and then once more... then I got:

AMD Data Change... Update New Data to DMI!

Then the screen clears and I get:

AHCI Option ROM BIOS Revision: 01.05.92 Date: 02-19-2008
Copyright (c) 2006-2008 Phoenix Technologies, LTD
Port 01: Reset Port Error!!
Port 02:

Then the screen clears again, but this time this loads from the bottom:

Nvidia Boot Agent 249.0542 (copyright stuff... blah blah)
PXE-E61: Media test failure, check cable.
PXE-M0F: Exiting Nvidia Boot Agent
DISK BOOT FAILURE, INSERT SYSTEM DISK AND PRESS ENTER.

So I try to go into Safe Mode. First of all, it doesn't load as fast. After it loads disk.sys from windows/drivers, it waits a while (2-3 mins), THEN loads. However, it loads the Acer eRecovery Management tool. I have three options: reset computer to factory default, restore computer from user's backup, or exit. However, the top two options are gray and disabled, whereas Exit is in blue and definitely clickable. So obviously Safe Mode is not there...

A strong thing to note: in the beginning, when all of this started, I chose "Boot Windows Normally" after pressing F8, and I got to my desktop! It logged me in. I could see the icons on my files. However, my desktop was extremely slow: when I clicked the Start menu it would wait a while, then load up the menu with JUST the gradient, no text or icons... so as you can see, it saw my HDD?

Also, before anyone asks, I have NO USB drive plugged in. My mouse and keyboard are not USB inputs, I assure you. This machine came without a recovery CD, AND when I went into the BIOS to change the boot order, I did NOT see a CD-ROM option. When I tried pressing ALT+F10 to get into Acer eRecovery Management, the top two options were disabled as well. But sometimes on start-up I get:

Windows has encountered a problem communicating with a device connected to your computer. This error can be caused by unplugging a removable storage device such as an external USB drive while the device is in use, or by faulty hardware such as a hard drive or CD-ROM drive that is failing. Make sure any removable storage is properly connected and then restart your computer. If you continue to receive this error message, contact the hardware manufacturer.
Status: 0xc00000e9
Info: An unexpected I/O error has occurred.

Then I tried Last Known Good Configuration; that gives me a BSOD. What should I do?
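For reference, since the machine has no recovery partition access and no optical drive in the boot list, a bootable USB build of the Vista install media or Windows Recovery Environment is the usual route to a repair console. A hedged sketch of the standard commands from its command prompt; none of this comes from the question itself, and a drive throwing 0xc00000e9 plus AHCI port reset errors may simply be failing or badly cabled:

rem From a Vista install DVD/USB: Repair your computer -> Command Prompt.
chkdsk C: /r          rem locate bad sectors and recover readable data
bootrec /fixmbr       rem rewrite the master boot record
bootrec /fixboot      rem rewrite the partition boot sector
bootrec /rebuildbcd   rem rescan disks and rebuild the boot store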

    Read the article

  • MySQL reserves too much RAM

    - by Buddy
I have a cheap VPS with 128MB RAM and 256MB burst. MySQL starts and reserves about 110MB but uses no more than 20MB of it. My VPS control panel shows that I use 127MB (I am also running nginx and Sphinx). I know that it shows reserved RAM, but when I go over 128MB, my VPS reboots automatically every 4 hours. So I want to force MySQL to reserve less RAM. How can I do that? I did some tweaks in my.cnf, but they didn't help much.

top output:

  PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
    1 root     15   0  2156  668  572 S  0.0  0.3 0:00.03 init
11311 root     15   0 11212  356  228 S  0.0  0.1 0:00.00 vzctl
11312 root     18   0  3712 1484 1248 S  0.0  0.6 0:00.01 bash
11347 root     18   0  2284  916  732 R  0.0  0.3 0:00.00 top
13978 root     17  -4  2248  552  344 S  0.0  0.2 0:00.00 udevd
14262 root     15   0  1812  564  472 S  0.0  0.2 0:00.03 syslogd
14293 sphinx   15   0 11816 1172  672 S  0.0  0.4 0:00.07 searchd
14305 root     25   0  7192 1036  636 S  0.0  0.4 0:00.00 sshd
14321 root     25   0  2832  836  668 S  0.0  0.3 0:00.00 xinetd
15389 root     18   0  3708 1300 1132 S  0.0  0.5 0:00.00 mysqld_safe
15441 mysql    15   0  113m  16m 4440 S  0.0  6.4 0:00.15 mysqld
15489 root     21   0 13056 1456  340 S  0.0  0.6 0:00.00 nginx
15490 nginx    18   0 13328 2388  992 S  0.0  0.9 0:00.06 nginx
15507 nginx    25   0 19520 5888 4244 S  0.0  2.2 0:00.00 php-cgi
15508 nginx    18   0 19636 4876 2748 S  0.0  1.9 0:00.12 php-cgi
15509 nginx    15   0 19668 4872 2716 S  0.0  1.9 0:00.11 php-cgi
15518 root     18   0  4492 1116  568 S  0.0  0.4 0:00.01 crond

MySQLTuner output:

>> MySQLTuner 1.0.1 - Major Hayden <[email protected]>
>> Bug reports, feature requests, and downloads at http://mysqltuner.com/
>> Run with '--help' for additional options and output filtering
Please enter your MySQL administrative login: root
Please enter your MySQL administrative password:

-------- General Statistics --------------------------------------------------
[--] Skipped version check for MySQLTuner script
[OK] Currently running supported MySQL version 5.0.77
[OK] Operating on 32-bit architecture with less than 2GB RAM

-------- Storage Engine Statistics -------------------------------------------
[--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
[--] Data in InnoDB tables: 1M (Tables: 1)
[OK] Total fragmented tables: 0

-------- Performance Metrics -------------------------------------------------
[--] Up for: 38m 43s (37 q [0.016 qps], 20 conn, TX: 4M, RX: 3K)
[--] Reads / Writes: 100% / 0%
[--] Total buffers: 28.1M global + 832.0K per thread (100 max threads)
[OK] Maximum possible memory usage: 109.4M (42% of installed RAM)
[OK] Slow queries: 0% (0/37)
[OK] Highest usage of available connections: 1% (1/100)
[OK] Key buffer size / total MyISAM indexes: 128.0K/64.0K
[OK] Query cache efficiency: 42.1% (8 cached / 19 selects)
[OK] Query cache prunes per day: 0
[!!] Temporary tables created on disk: 27% (3 on disk / 11 total)
[!!] Thread cache is disabled
[OK] Table cache hit rate: 57% (8 open / 14 opened)
[OK] Open file limit used: 1% (12/1K)
[OK] Table locks acquired immediately: 100% (22 immediate / 22 locks)
[!!] Connections aborted: 10%
[OK] InnoDB data size / buffer pool: 1.5M/8.0M

-------- Recommendations -----------------------------------------------------
General recommendations:
    MySQL started within last 24 hours - recommendations may be inaccurate
    Enable the slow query log to troubleshoot bad queries
    When making adjustments, make tmp_table_size/max_heap_table_size equal
    Reduce your SELECT DISTINCT queries without LIMIT clauses
    Set thread_cache_size to 4 as a starting value
    Your applications are not closing MySQL connections properly
Variables to adjust:
    tmp_table_size (> 32M)
    max_heap_table_size (> 16M)
    thread_cache_size (start at 4)

I think if I do what MySQLTuner says, MySQL will use even more RAM.
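The "reserved" 110MB figure is dominated by the per-thread buffers multiplied across max_connections (100 threads x 832K per the tuner output) plus the InnoDB buffer pool, which is why the resident set stays small until connections actually arrive. A hedged my.cnf sketch for a 128MB container follows; the exact values are my assumptions based on the tuner figures above, not settings from the question:

[mysqld]
max_connections         = 20    # 100 x 832K per-thread buffers is most of the 109.4M
innodb_buffer_pool_size = 4M    # tuner shows only 1.5M of InnoDB data
key_buffer              = 1M    # MyISAM indexes total just 64K
thread_cache_size       = 4     # the tuner's own suggestion
query_cache_size        = 4M
sort_buffer_size        = 256K
read_buffer_size        = 128K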

    Read the article

  • Apache SSL reverse proxy to a Embed Tomcat

    - by ggarcia24
I'm trying to put a reverse proxy in place for an application that is running an embedded Tomcat server over SSL. The application needs to run over SSL on port 9002, so I have no way of "disabling SSL" for it. The current setup looks like this:

[192.168.0.10:443 - Apache with mod_proxy] --> [192.168.0.10:9002 - Tomcat app]

After googling how to make such a setup (and testing), I came across https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/861137, which led me to my current configuration (trying to emulate the --secure-protocol=sslv3 option of wget).

/etc/apache2/sites-enabled/default-ssl:

<VirtualHost _default_:443>
    SSLEngine On
    SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
    SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    SSLProxyEngine On
    SSLProxyProtocol SSLv3
    SSLProxyCipherSuite SSLv3
    ProxyPass /test/ https://192.168.0.10:9002/
    ProxyPassReverse /test/ https://192.168.0.10:9002/
    LogLevel debug
    ErrorLog /var/log/apache2/error-ssl.log
    CustomLog /var/log/apache2/access-ssl.log combined
</VirtualHost>

The thing is that the error log is showing:

error:14077102:SSL routines:SSL23_GET_SERVER_HELLO:unsupported protocol

Complete request log:

[Wed Mar 13 20:05:57 2013] [debug] mod_proxy.c(1020): Running scheme https handler (attempt 0)
[Wed Mar 13 20:05:57 2013] [debug] mod_proxy_http.c(1973): proxy: HTTP: serving URL https://192.168.0.10:9002/
[Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2011): proxy: HTTPS: has acquired connection for (192.168.0.10)
[Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2067): proxy: connecting https://192.168.0.10:9002/ to 192.168.0.10:9002
[Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2193): proxy: connected / to 192.168.0.10:9002
[Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2444): proxy: HTTPS: fam 2 socket created to connect to 192.168.0.10
[Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2576): proxy: HTTPS: connection complete to 192.168.0.10:9002 (192.168.0.10)
[Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] Connection to child 0 established (server demo1agrubu01.demo.lab:443)
[Wed Mar 13 20:05:57 2013] [info] Seeding PRNG with 656 bytes of entropy
[Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1866): OpenSSL: Handshake: start
[Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: before/connect initialization
[Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: unknown state
[Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1897): OpenSSL: read 7/7 bytes from BIO#7f122800a100 [mem: 7f1230018f60] (BIO dump follows)
[Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1830): +-------------------------------------------------------------------------+
[Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1869): | 0000: 15 03 01 00 02 02 50 ......P |
[Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1875): +-------------------------------------------------------------------------+
[Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1903): OpenSSL: Exit: error in unknown state
[Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] SSL Proxy connect failed
[Wed Mar 13 20:05:57 2013] [info] SSL Library Error: 336032002 error:14077102:SSL routines:SSL23_GET_SERVER_HELLO:unsupported protocol
[Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] Connection closed to child 0 with abortive shutdown (server example1.domain.tld:443)
[Wed Mar 13 20:05:57 2013] [error] (502)Unknown error 502: proxy: pass request body failed to 172.31.4.13:9002 (192.168.0.10)
[Wed Mar 13 20:05:57 2013] [error] [client 192.168.0.10] proxy: Error during SSL Handshake with remote server returned by /dsfe/
[Wed Mar 13 20:05:57 2013] [error] proxy: pass request body failed to 192.168.0.10:9002 (172.31.4.13) from 172.31.4.13 ()
[Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2029): proxy: HTTPS: has released connection for (172.31.4.13)
[Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1884): OpenSSL: Write: SSL negotiation finished successfully
[Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] Connection closed to child 6 with standard shutdown (server example1.domain.tld:443)

If I do wget --secure-protocol=sslv3 --no-check-certificate https://192.168.0.10:9002/ it works perfectly, but from Apache it is not working.

I'm on an Ubuntu server with the latest updates, running apache2 with mod_proxy and mod_ssl enabled:

~$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.04
DISTRIB_CODENAME=precise
DISTRIB_DESCRIPTION="Ubuntu 12.04.2 LTS"
~# dpkg -s apache2
... Version: 2.2.22-1ubuntu1.2 ...
~# dpkg -s openssl
... Version: 1.0.1-4ubuntu5.7 ...

Hope someone can help.
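A quick way to see what the embedded Tomcat will actually negotiate is to probe it one protocol at a time with openssl s_client. These are standard OpenSSL invocations, not commands from the question; the first one mirrors what SSLProxyProtocol SSLv3 forces mod_proxy to send:

# Probe the backend with each protocol separately:
openssl s_client -connect 192.168.0.10:9002 -ssl3   # what Apache is currently forced to
openssl s_client -connect 192.168.0.10:9002 -tls1   # what the backend may insist on

# The failing handshake prints an SSL alert/handshake failure; the working
# one shows the certificate chain and negotiated cipher. Then set
# SSLProxyProtocol (e.g. "SSLProxyProtocol TLSv1") to match the probe
# that succeeded, and retest through the proxy.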

    Read the article

  • "Service Unavailable" when browsing to static HTML page in non-application IIS website on Windows 2003 (possibly SharePoint WSS 2.0 related?)

    - by Jordan Rieger
Background: My client has an old Pentium III Windows 2003 server whose 16/36GB disks are dying. On it he has a database-driven web site and email application that needs further customization by a developer (me). First we need to get it working on the new server. The original developer is no longer available to provide a system setup guide, so my client got a tech who imaged the old drives over to the new server and managed to get it booting. But the IIS-driven site no longer works; in fact, it seems that IIS itself does not work.

Problem: "Service Unavailable" when attempting to browse, from the server itself, to the URL for a local web site called "test" which I set up in IIS to serve a single static index.htm file. I did this to isolate the problem and eliminate the client's application from the equation. The site is set up on port 80 with the host header "test.myclientsdomain.com", and I used the etc\hosts file to point that host at the local IP. I know the host entry took effect because I can ping it.

When doing an iisreset, I get:

Attempting start...
Restart attempt failed.
IIS Admin Service or a service dependent on IIS Admin is not active. It most likely failed to start, which may mean that it's disabled.

Despite this message, the services all stay in the Started state. The only relevant System event logs I found are:

Event Type: Error
Event Source: W3SVC
Event Category: None
Event ID: 1002
Date: 11/4/2012
Time: 11:04:47 PM
User: N/A
Computer: ALPHA1
Description: Application pool 'DefaultAppPool' is being automatically disabled due to a series of failures in the process(es) serving that application pool.

Event Type: Error
Event Source: W3SVC
Event Category: None
Event ID: 1039
Date: 11/4/2012
Time: 11:13:12 PM
User: N/A
Computer: ALPHA1
Description: A process serving application pool 'DefaultAppPool' reported a failure. The process id was '5636'. The data field contains the error number.
Data: 0000: 7e 00 07 80 ~..

And one Application event log:

Event Type: Error
Event Source: Windows SharePoint Services 2.0
Event Category: None
Event ID: 1000
Date: 11/4/2012
Time: 11:34:04 PM
User: N/A
Computer: ALPHA1
Description: #50070: Unable to connect to the database STS_Config on ALPHA2\SharePoint. Check the database connection information and make sure that the database server is running.

That last log tells me that the tech may have initially tried to run both the old and the new server at once, by renaming the new server from ALPHA1 to ALPHA2. Perhaps SharePoint grabbed onto that change and now can't tell that the machine name has been switched back to ALPHA1. But why would SharePoint interfere with a static IIS web site serving a single HTML file? The test site is not even within an application pool (I clicked the Remove button).

What I have tried/eliminated:

- No relevant services seem to be disabled: IIS Admin, WWW Publishing, SharePoint Timer.
- Giving Full Control to All Users/Everyone on the c:\inetpub\test folder serving my test site.
- I can connect to and query the local SharePoint config database (ALPHA1\SHAREPOINT\STS_CONFIG) from SSMS. But when I try to do stsadm -o setconfigdb -connect -databaseserver ALPHA1\SHAREPOINT, it tells me "The SharePoint administration port does not exist. Please use stsadm.exe to create it." And when I do that, using the port 9487 specified in the IIS SharePoint Admin site config, it tells me the port is already in use. Needless to say, simply browsing to the admin site gives me a similar error about being unable to reach the config database.

I didn't want to go further down the SharePoint path, as it may be completely unrelated to my IIS issue, and I don't even know yet if SharePoint is required for this application to work. The app itself is ASP.NET/C#/Silverlight with a little MS Word integration (maybe that's where the SharePoint stuff comes in).
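For reference, a hedged sketch of the stsadm invocations usually involved in repointing a WSS 2.0 config database after a machine rename. The install path is the WSS 2.0 default and the port comes from the question's own IIS config, but treat every value here as an assumption to verify before running:

rem WSS 2.0 keeps stsadm under the "60" hive of the shared web server extensions.
cd /d "%CommonProgramFiles%\Microsoft Shared\web server extensions\60\BIN"

rem Recreate the admin port binding first (9487 per the IIS SharePoint Admin site):
stsadm -o setadminport -port 9487

rem Then repoint the farm at the config database under the restored server name:
stsadm -o setconfigdb -connect -databaseserver ALPHA1\SHAREPOINT -databasename STS_Config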

    Read the article

  • Static file download from browser breaking in varnish but works fine in Apache

    - by Ron
I would first like to thank everyone at Server Fault for this great website; I often arrive here from Google searches for various server issues and setups. I have an issue today as well, so I am posting here in the hope that the seniors can help me out.

I set up a website on a dedicated server a few days ago, using Varnish 3 as the frontend to Apache 2 on a Debian Lenny server, as the traffic was a bit high. The website offers several static file downloads of around 10-20MB each. The site looked fine over the first few days: I was checking from a 5mbps+ broadband connection, and the file downloads completed in seconds and worked fine.

But today I realized that on a slow internet connection the file downloads were breaking off. When I tried to download the files from the website using a browser, the download broke off after a minute or so, again and again, so it had nothing to do with the internet connection. The connection was around 512kbps, so it was not dial-up-level speed either; not that fast, but files should still download easily, if not quickly.

Then I thought of trying the Apache backend port directly, to see if the problem still occurred. On adding the Apache port to the static file download URL, the files downloaded easily and did not break even once. I tried several times to make sure it was not a coincidence, but every time I used the Apache port in the download URL it downloaded fine, while it broke each time through the normal link, which is routed through Varnish.

So it seems Varnish has somehow caused the broken file downloads. Could anyone give any idea as to why this is happening and how to fix it? For clarification, take this example: Apache backend set on port 8008, Varnish frontend set on port 80. When I download

http://mywebsite.com/directory/filename.extension

the download breaks off after a minute or so. I cannot be sure whether it is due to time or size; I am just assuming, and it may be some other reason. But when I download

http://mywebsite.com:8008/directory/filename.extension

the download does not break at all and completes fine. So it seems that Varnish is somehow breaking the file download, not Apache. Does anybody have any idea why this is happening and how it can be fixed? Any help would be highly appreciated.

My Varnish default.vcl is:

backend apache {
    .host = "127.0.0.1";
    .port = "8008";
}

sub vcl_deliver {
    remove resp.http.X-Varnish;
    remove resp.http.Via;
    remove resp.http.Age;
    remove resp.http.Server;
    remove resp.http.X-Powered-By;
}
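Two things commonly fix exactly this symptom on Varnish 3, sketched below with assumed values: raising the send_timeout runtime parameter, which closes client connections that take too long to drain, and handing big static files straight through to Apache instead of serving them from Varnish's buffers. The file extensions and timeout number are my assumptions, not details from the question:

# 1) Raise the client send timeout, either at startup:
#      varnishd ... -p send_timeout=3600
#    or live, via the admin CLI:
#      varnishadm param.set send_timeout 3600

# 2) In default.vcl, pipe large downloads directly to the backend:
sub vcl_recv {
    if (req.url ~ "\.(zip|rar|iso|exe|pdf)$") {
        return (pipe);   # bypass caching/buffering for these files
    }
}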

    Read the article

  • Windows 8.1 Update 1 Disk Usage 100%

    - by Gookjin Jeong
Background information / computer specs

I have a 14-inch Samsung Series 5 Ultra: Core i5 CPU, 750GB HDD, 8GB RAM, Intel HD Graphics 4000. I've had the computer for about 1.5 years with no major problems.

Problem

The issue appeared at the beginning of April this year, when I updated the OS to Windows 8.1 Update 1 (not from 8 to 8.1). After being on continually for about 48 hours (except at night, when I put it in sleep mode), the disk usage as seen by Task Manager hits 100%. When this happens, everything from opening/closing applications to typing, and even bringing up the Start screen by pressing the Windows key, becomes extremely slow. The only way to make the disk usage decrease is to restart the computer. Then the problem repeats.

I've used my current laptop (as well as my previous laptops) this way for a long time: sleep mode at night, restarting only when Windows needs to install updates. So I know the 100% disk usage is not due to the way I use the computer.

The process causing the spike varies. Sometimes it's System, sometimes one of the applications I installed (e.g. Chrome, Evernote, Spotify, Wunderlist, iTunes, etc.), and sometimes Antimalware Service Executable, etc.

Tried solutions

I think I tried almost every solution out there for this problem:

- Running the check disk command (chkdsk /b /f /v /scan c:) from an admin Command Prompt
- Running Windows Memory Diagnostic
- Disabling Superfetch and Windows Search from services.msc
- Running "Fix problems with Windows Update" from Control Panel -- Troubleshooting
- Updating and rolling back the graphics driver (Intel HD 4000)
- Disabling "Use hardware acceleration when available" in Chrome settings
- Disabling Intel Rapid Storage Technology
- Running the SFC /SCANNOW command as recommended here
- Running a quick scan and a full scan with Windows Defender (no threats found)
- Taking the hard drive out and putting it back
- Refreshing the computer, from the Update and recovery -- Recovery option in Windows settings

NONE of the above worked for me. I was about to give up, but then noticed that one of the main culprits of the disk usage spike, as shown in the Disk Activity section of Resource Monitor, was C:\System (pagefile.sys). I googled around and found that one of the recommended solutions was to disable the pagefile. I then went to Control Panel -- System and Security -- System -- Advanced system settings -- Advanced tab -- Performance settings -- Advanced tab -- "Change" under Virtual memory, and discovered that the number for "Currently allocated" at the bottom was 1280MB, although the number for "Recommended" was 4533MB. I immediately changed it to 4533MB and checked my family members' computers to see what their numbers looked like. All of theirs had a currently allocated space only slightly smaller than the recommended space. See screenshot below:

This might fix the problem; I'll have to wait a couple more days. But if it doesn't, what in the world should I do next? I'm guessing the hard drive isn't failing because:

1. This computer is less than 2 years old; and
2. Speccy says that the status of the HDD is good.

Update 5/27/2014

The "4533MB" solution did not work. I had to reboot the computer about 30 minutes ago because the disk usage again hit 100%. When I opened Resource Monitor, C:\System (pagefile.sys) was again shown to be the culprit. I have now disabled the pagefile entirely via the same window shown above in the screenshot; the number for "currently allocated" is now 0MB.

Will update again in a couple of days, or if the problem occurs again, whichever comes sooner.
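If disabling the pagefile doesn't hold, the next cheap check is the drive's own SMART verdict and the System log's disk errors, since age and Speccy's summary don't rule out a failing HDD. These are stock Windows 8.1 commands rather than anything from the post above; run them from an elevated Command Prompt:

:: SMART pass/fail as the drive firmware reports it:
wmic diskdrive get model,status

:: Recent low-level disk/NTFS errors (this one from an elevated PowerShell):
::   Get-EventLog -LogName System -Source disk,ntfs -Newest 25

:: Full surface scan, scheduled for the next reboot:
chkdsk C: /f /r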

    Read the article

< Previous Page | 798 799 800 801 802 803 804 805 806 807 808 809  | Next Page >