Search Results

Search found 4447 results on 178 pages for 'visible'.


  • Disable a control inside a gridview

    - by saeed talaee
     Hi, I want to make a LinkButton control inside a GridView invisible based on a condition. For example, if the count for a row is 0, the LinkButton for that row should be invisible. What should I do, and where should I write the code? Here is the code I wrote in the GridView's RowCommand handler, but it only runs when I actually click the LinkButton. I want this logic applied to my page before it loads. Please guide me.

         int idx = Convert.ToInt32(e.CommandArgument);
         idx = idx - (GridView1.PageSize * GridView1.PageIndex);
         int ID = (int)GridView1.DataKeys[idx].Value;
         string connStr = ConfigurationManager.ConnectionStrings["dbconn"].ConnectionString;
         SqlConnection sqlconn = new SqlConnection(connStr);
         SqlCommand sqlcmd = new SqlCommand("SELECT count(ID) FROM ReviwerArticle where ArticleID=@ArticleID", sqlconn);
         sqlcmd.Parameters.AddWithValue("@ArticleID", ID);
         sqlconn.Open();
         int count = (int)sqlcmd.ExecuteScalar();
         sqlconn.Close();
         if (count == 0)
         {
             ((LinkButton)GridView1.Rows[idx].Cells[0].FindControl("LinkButton4")).Visible = false;
         }

    Read the article

  • SCVMM 2012 R2 - Installing Virtual Switch Fails with Error 2916

    - by Brian M.
    So I've been attempting to teach myself SCVMM 2012 and Hyper-V Server 2012 R2, and I seem to have hit a snag. I've connected my Hyper-V Host to SCVMM 2012 successfully, and created a logical network, logical switch, and uplink port profile (which I essentially blew through with the default settings). However when I attempt to create a virtual switch on my Hyper-V host, I run into an issue. The job will use my logical network settings I created to configure the virtual switch, but when it tries to apply it to the host, it stalls and eventually fails with the following error: Error (2916) VMM is unable to complete the request. The connection to the agent vmhost1.test.loc was lost. WinRM: URL: [h**p://vmhost1.test.loc:5985], Verb: [GET], Resource: [h**p://schemas.microsoft.com/wbem/wsman/1/wmi/root/virtualization/v2/Msvm_ConcreteJob?InstanceID=2F401A71-14A2-4636-9B3E-10C0EE942D33] Unknown error (0x80338126) Recommended Action Ensure that the Windows Remote Management (WinRM) service and the VMM agent are installed and running and that a firewall is not blocking HTTP/HTTPS traffic. Ensure that VMM server is able to communicate with econ-hyperv2.econ.loc over WinRM by successfully running the following command: winrm id –r:vmhost1.test.loc This problem can also be caused by a Windows Management Instrumentation (WMI) service crash. If the server is running Windows Server 2008 R2, ensure that KB 982293 (h**p://support.microsoft.com/kb/982293) is installed on it. If the error persists, restart vmhost1.test.loc and then try the operation again. Refer to h**p://support.microsoft.com/kb/2742275 for more details. I restarted the server, and upon booting am greeted with a message stating "No active network adapters found." I load up powershell and run "Get-NetAdapter -IncludeHidden" to see what's going on, and get the following: Name InterfaceDescription ifIndex Status ---- -------------------- ------- ----- Local Area Connection* 5 WAN Miniport (PPPOE) 6 Di... Ethernet Microsoft Hyper-V Network Switch Def... 10 Local Area Connection* 1 WAN Miniport (L2TP) 2 Di... Local Area Connection* 8 WAN Miniport (Network Monitor) 9 Up Local Area Connection* 4 WAN Miniport (PPTP) 5 Di... Ethernet 2 Broadcom NetXtreme Gigabit Ethernet 13 Up Local Area Connection* 7 WAN Miniport (IPv6) 8 Up Local Area Connection* 9 Microsoft Kernel Debug Network Adapter 11 No... Local Area Connection* 3 WAN Miniport (IKEv2) 4 Di... Local Area Connection* 2 WAN Miniport (SSTP) 3 Di... vSwitch (TEST Test Swi... Hyper-V Virtual Switch Extension Ada... 17 Up Local Area Connection* 6 WAN Miniport (IP) 7 Up Now the machine is no longer visible on the network, and I don't have the slightest idea what went wrong, and more importantly how to undo the damage I caused in order to get back to where I was (save for re-installing Hyper-V Server, but I really would rather know what's going on and how to fix it)! Does anybody have any ideas? Much appreciated!

    Read the article

  • Real server, Multiple IP Addresses, HyperV Virtual Server, How to partition IPs across real and Virtual NICs

    - by Steven_W
    This is a slightly difficult problem to explain without same basic background information - I'll try and refine the question later as necessary Originally, I have a single hosted server (Win 2008R2) with the following range of 8 IP addresses. - Single NIC - IP: x.x.128.72 -> x.x.128.79 - Subnet: x.x.255.192 - GW: x.x.128.65 After installing Hyper-V and setting up a single virtual server on the same box, I then wanted to assign one of the IP addresses to the virtual server, leaving everything else running normally. -- Firstly, I tried using the "External" network, but (even after setting IPs on the "Virtual Adapter" similar to Here but struggled to get networking running at all. I needed to keep the server running (otherwise I would have spent more time pursuing this approach) Q1 ... Was this a sensible thing to do ? Should I have carried on down this route ? -- I then decided to try different approach - Set the HyperV network to "Internal" (visible to Management OS) - Physical NIC - IP: x.x.128.72 -> x.x.128.75 - Subnet: x.x.255.192 - GW: x.x.128.65 - Virtual NIC - IP: x.x.128.78 - Subnet: x.x.255.252 - GW: x.x.128.72 ... { The same as the IP of the physical NIC ) - Virtual OS-NIC - IP: x.x.128.77 - Subnet: x.x.255.252 - GW: x.x.128.78 ... { The same as the IP of the host virtual-NIC ) -- Surprisingly enough, this approach actually worked, and I was able to connect from all the following: - Internet to/from physical NIC (x.x.128.72) - physical NIC (x.x.128.72) to virtual-OS-NIC (x.x.128.77) e.g. testing via ping + FTP - Internet to/from virtual-OS-NIC (x.x.128.72) -- The problem I have is that this approach seems to only last for a short while (a few hours). After this time, it seems that I lose the ability to connect from Virtual-OS-NIC to/from the internet (but I can still connect from the host-OS to the virtual-OS and from the host-OS to the internet) I have re-tested this a couple of times with the same results ... I leave the server on for a few hours (e.g. overnight), and when I come back in the morning, the Virtual-OS loses the ability to route to the internet -- I'm not quite sure what to look at next (or whether I'm going about this completely the wrong way ) One "possible relevant item" is that the host-OS is also running RRAS (Routing and Remote Access), but this is only to run a simple VPN -- Q2 - Wheat should I be looking at next ? (Any good references / recommendations of what to try) Would appreciate any thoughts or comments (even if you tell me I'm going about this the wrong way)

    Read the article

  • All the Gear and No Idea: Suggestions for re-designing my home/office/entertainment network

    - by 5arx
    Help/ Advice/ Suggestions please: I have a load of kit that I love but which currently operate in disconnected, sometimes counter-productive way. Because I never really had a masterplan I just added these things one after another and connected them up in ad hoc ways. Since I bought my Macbook I've found I spend much less time on the MacPro that was until then my main machine. Perversely, as my job involves writing .Net software, I spend a lot of Mac time actually inside a Windows 7 VM. I stream media from the HP box to the PS3 and thus to the TV, but its not without its limitations/annoyances. We listen to each other's iTunes libraries but the music files are all over the place and it would be good to know they were all safely in one location (and fully backed up). I need to come up with a strategy that will allow me to use all the kit for work, play (recording live music, making tunes, iMovie work), pushing/streaming media to the TV and sharing files with my other half (she uses a Windows laptop and her iPod touch). Ideally I'd like to be able to work on any of the machines and have a shared homedrive that was visible to all machines so all my current files were synced up wherever i was. It would be great if I could access everything securely and quickly over the web. I'd also like to be able to set up a background backup process. The kit list thus far: Apple MacPro 8GB/3x250GB RAID0 + 1TB Apple MacBook Pro 13" 8GB/250GB - I spend a lot of my work time on a Windows 7 VM on this. Crappy Acer laptop (for children's use - iPlayer, watching movies/tv files) HP Proliant Server 4GB/80GB+160GB+300GB Sun Ultra 10 2 x 80GB (old, but in top-notch condition) PS3 160GB iPod Classic 2 x 8GB iPod Touch Observations: Part of the problem is our dual use of Windows and OS X - we can't go for a pure NT style roaming profile. Because the server is also used for hosting test/beta applications and a SQL Server db, it can't be dedicated to file serving. The two Macs really could do with sharing a roaming profile or similar. I'd love to be able to do something useful with the Ultra 10. My other half has been trying to throw it away for over five years now and regularly ask what function it serves in my study :-( I've got no shortage of 500GB external USB hard drives iMovie files are very large and ideally would be processed on a RAID system. Apple's TimeMachine isn't so great. If anyone could suggest all or part of a setup that would fulfil some of my requirements I'd be very grateful. I am willing to consider purchasing one or two more bits of kit (an Apple TV and a Squeezebox have been moted by friends) if they will help make efficiencies rather than add to the chaos and confusion. Thanks for looking.

    Read the article

  • How can I centralise MySQL data between 3 or more geographically separate servers?

    - by Andy Castles
    To explain the background to the question: We have a home-grown PHP application (for running online language-learning courses) running on a Linux server and using MySQL on localhost for saving user data (e.g. results of tests taken, marks of submitted work, time spent on different pages in the courses, etc). As we have students from different geographic locations we currently have 3 virtual servers hosted close to those locations (Spain, UK and Hong Kong) and users are added to the server closest to them (they access via different URLs, e.g. europe.domain.com, uk.domain.com and asia.domain.com). This works but is an administrative nightmare as we have to remember which server a particular user is on, and users can only connect to one server. We would like to somehow centralise the information so that all users are visible on any of the servers and users could connect to any of the 3 servers. The question is, what method should we use to implement this. It must be an issue that that lots of people have encountered but I haven't found anything conclusive after a fair bit of Googling around. The closest I have seen to solutions are: something like master-master replication, but I have read so many posts suggesting that this is not a good idea as things like auto_increment fields can break. circular replication, this sounded perfect but to quote from O'Reilly's High Performance MySQL, "In general, rings are brittle and best avoided" We're not against rewriting code in the application to make it work with whatever solution is required but I am not sure if replication is the correct thing to use. Thanks, Andy P.S. I should add that we experimented with writes to a central database and then using reads from a local database but the response time between the different servers for writing was pretty bad and it's also important that written data is available immediately for reading so if replication is too slow this could cause out-of-date data to be returned. Edit: I have been thinking about writing my own rudimentary replication script which would involve something like having each user given a server ID to say which is his "home server", e.g. users in asia would be marked as having the Hong Kong server as their own server. Then the replication scripts (which would be a PHP script set to run as a cron job reasonably frequently, e.g. every 15 minutes or so) would run independently on each of the servers in the system. They would go through the database and distribute any information about users with the "home server" set to the server that the script is running on to all of the other databases in the system. They would also need to suck new information which has been added to any of the other databases on the system where the "home server" flag is the server where the script is running. I would need to work out the details and build in the logic to deal with conflicts but I think it would be possible, however I wanted to make sure that there is not a correct solution for this already out there as it seems like it must be a problem that many people have already come across.

    Read the article

  • Fedora 13 post security update boot problem

    - by Alex
     Hello. About a month ago I installed a security update that brought a new kernel (2.6.34.x, up from 2.6.33.x), and that is when the problem occurred for the first time. After the install my computer would not boot at all: a black screen without any visible hard drive activity (I gave it a good 30 minutes on the black screen before taking action). I popped in the installation DVD and went into rescue mode to change the boot option back to the old kernel (just a guess at where the problem was). After the restart the computer loaded just fine; it took a long time to start because an SELinux targeted policy relabel was required, and relabeling can take a very long time depending on file sizes. I assumed that the update got messed up somehow and continued working with the modified boot option. A couple of days ago there was another kernel update. I installed it and got the same problem as before, which rules out the corrupted-update theory: a black screen right after the BIOS screen, before the OS gets loaded. I had to rescue the system again. Below is a copy of my grub.conf file. I am fairly new to Linux (a couple of years of experience), mostly development and basic config... nothing crazy.

         # grub.conf generated by anaconda
         #
         # Note that you do not have to rerun grub after making changes to this file
         # NOTICE: You have a /boot partition. This means that
         #         all kernel and initrd paths are relative to /boot/, eg.
         #         root (hd0,0)
         #         kernel /vmlinuz-version ro root=/dev/mapper/vg_obalyuk-lv_root
         #         initrd /initrd-[generic-]version.img
         #boot=/dev/sda
         default=2
         timeout=0
         splashimage=(hd0,0)/grub/splash.xpm.gz
         hiddenmenu
         title Fedora (2.6.34.6-54.fc13.i686.PAE)
             root (hd0,0)
             kernel /vmlinuz-2.6.34.6-54.fc13.i686.PAE ro root=/dev/mapper/vg_obalyuk-lv_root rd_LVM_LV=vg_obalyuk/lv_root rd_LVM_LV=vg_obalyuk/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us rhgb quiet
             initrd /initramfs-2.6.34.6-54.fc13.i686.PAE.img
         title Fedora (2.6.34.6-47.fc13.i686.PAE)
             root (hd0,0)
             kernel /vmlinuz-2.6.34.6-47.fc13.i686.PAE ro root=/dev/mapper/vg_obalyuk-lv_root rd_LVM_LV=vg_obalyuk/lv_root rd_LVM_LV=vg_obalyuk/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us rhgb quiet
             initrd /initramfs-2.6.34.6-47.fc13.i686.PAE.img
         title Fedora (2.6.33.8-149.fc13.i686.PAE)
             root (hd0,0)
             kernel /vmlinuz-2.6.33.8-149.fc13.i686.PAE ro root=/dev/mapper/vg_obalyuk-lv_root rd_LVM_LV=vg_obalyuk/lv_root rd_LVM_LV=vg_obalyuk/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us rhgb quiet
             initrd /initramfs-2.6.33.8-149.fc13.i686.PAE.img

     I like my system to be up to date... Let me know if I can post any other files that might help. Has anyone else had this problem? Does anyone have any ideas how to fix it? P.S. Anything helps, you people are great! Thanks for your time.
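
     A minimal troubleshooting sketch, assuming Fedora 13 uses dracut for its initramfs and taking the kernel version strings from the grub.conf above: rebuild the newest kernel's initramfs in case it was generated badly, and pin the default entry to the known-good 2.6.33 kernel until the new one boots.

         # list the installed PAE kernels
         rpm -q kernel-PAE

         # regenerate the initramfs for the newest kernel in case it was built incorrectly
         dracut --force /boot/initramfs-2.6.34.6-54.fc13.i686.PAE.img 2.6.34.6-54.fc13.i686.PAE

         # keep booting the known-good 2.6.33 kernel (the third title stanza, counted from 0)
         sed -i 's/^default=.*/default=2/' /boot/grub/grub.conf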

    Read the article

  • Windows XP machine not seeing external FAT32 partitions correctly

    - by Rob_before_edits
    About 8 months ago my Windows XP machine stopped being able to see FAT32 external drives when I plug them in... mostly. I will explain... It happens with all my FAT32 drives, whether they be unpowered external hard drives, powered external hard drives, SDHC cards plugged directly into the machine's card reader, or SDHC cards plugged in via a separate USB card reader. All of these drives/cards used to work fine on this machine. They all stopped working at about the same time. NTFS volumes are not affected. If I plug in NTFS external drives they are recognized right away. I even have one external drive with two partitions on it, one is NTFS which is recognized, the other is FAT32, which is not recognized. If I attach a FAT32 drive, then reboot, then the drive almost always becomes visible to the machine after the reboot. Sometimes I can plug in a FAT32 drive and it works right away. Not often though. I'd say I get lucky more often with SDHC cards than hard drives. I'm developing a theory that I only get lucky with hard drives if I'm running Acronis Disk Director when I plug them in, though that usually doesn't work either - I need more data here, this may be a red herring. Getting lucky with a hard drive is really rare, usually I have to reboot. When a FAT32 is recognized, either because I got lucky or because I rebooted, I can almost never safely disconnect it. It tells me "The device 'Generic volume' cannot be stopped right now. Try stopping the device again later". I can't seem to get around this. IIRC, I've tried closing every open window, and still no luck. Since I care about my data usually the only way to disconnect a FAT32 drive is to shut down the machine. As you can imagine, two reboots just to read a drive is getting pretty old... When the machine fails to see a FAT32 drive it usually comes up with the appropriate drive letter and the words "Local Disk" in Windows Explorer instead of the correct partition name. If I click on it I get "J:\ is not accessible. The parameter is incorrect." Before this problem arose I always clicked the "safely remove" button for everything, including SDHC cards where I think it's not necessary. I've known for a long time that this is the correct procedure for hard drives, so I don't think failing to do this was the cause of this problem (before someone asks :) Any answers or suggestions most welcome.

    Read the article

  • Win XP error 0x80041003 using GetObject/winmgmts

    - by John Lewis
     My computer is called "neil" and I want to set some values using WMI in VBScript. I adapted the script below from one supplied by Microsoft. When I run it in my browser I get: Error Type: (0x80041003) /dressage/30/pdf2.asp, line 8. I suspect it is some registry/security setting. Any advice? John Lewis

     FULL SCRIPT

         call Print_HTML_Page("http://neil/dressage/ascii.asp", "ascii")

         Sub SetPDFFile(strPDFFile)
             Const HKEY_LOCAL_MACHINE = &H80000002
             strKeyPath = "SOFTWARE\Dane Prairie Systems\Win2PDF"
             strComputer = "."
             Set objReg = GetObject( _
                 "winmgmts:{impersonationLevel=impersonate}!\\" & _
                 strComputer & "\root\default:StdRegProv")
             strValueName = "PDFFileName"
             objReg.SetExpandedStringValue HKEY_LOCAL_MACHINE, _
                 strKeyPath, strValueName, strPDFFile
         End Sub

         Sub Print_HTML_Page(strPathToPage, strPDFFile)
             SetPDFFile(strPDFFile)
             Set objIE = CreateObject("InternetExplorer.Application")
             'From http://www.tek-tips.com/viewthread.cfm?qid=1092473&page=5
             On Error Resume Next
             strPrintStatus = objIE.QueryStatusWB(6)
             If Err.Number <> 0 Then
                 MsgBox "Cannot find a printer. Operation aborted."
                 objIE.Quit
                 Set objIE = Nothing
                 Exit Sub
             End If
             With objIE
                 .visible = 0
                 .left = 200
                 .top = 200
                 .height = 400
                 .width = 400
                 .menubar = 0
                 .toolbar = 1
                 .statusBar = 0
                 .navigate strPathToPage
             End With
             'Wait until IE has finished loading
             Do While objIE.busy
                 WScript.Sleep 100
             Loop
             On Error Goto 0
             objIE.ExecWB 6, 2
             'Wait until IE has finished printing
             WScript.Sleep 2000
             objIE.Quit
             Set objIE = Nothing
         End Sub

     Update: Thanks for your reply. The line breaks seem to have been mangled in the process of pasting into this form. Well spotted - I was using a PDF file name of "ascii". I added a .pdf extension but still get the error. I suspect you're right that it's to do with admin rights. Here's more about the setup and what I'm trying to achieve. Win2PDF is a product for writing PDFs that works by simulating a Windows printer. You "print" the page, select Win2PDF in the print dialog, and it then asks for a file name. I have it installed on my PC (called Neil) and it works fine in this conventional way. My aim is to write an HTML page to a PDF file using Win2PDF - but via ASP/VBScript/JavaScript rather than with manual intervention. The script for doing this was provided by Win2PDF's tech support, but when it did not work, that was the limit of their understanding. In the sample script the file ascii.asp just produces a table of ASCII codes/characters. The URL given is on my own PC, which has IIS set up to run scripts, and it does that fine. The error I get occurs on about the fourth line executed. I am logged in with full admin rights - I think! But I'm no expert. I hope this helps to give some more specific suggestions about how to check/fix the admin rights.

    Read the article

  • Remove DRM from *.pdb e-book that I own - while maintaining footnotes, etc.?

    - by ziesemer
    Background: I've already reviewed Remove DRM from ePub Files? and How can I remove DRM from Kindle books? - the answers to which have already brought me partial process. The challenge is that I have a few purchased *.pdb e-books that I purchased in years past, e.g. 2006. In particular, they were purchased from the palm eBook Store (ebooks.palm.com - now defunct, possibly part of http://www.ereader.com / Barnes & Noble?) - originally for use on a Palm Treo that has since died. Of particular note is that I have a revision / publication of a book that is no longer published, and not available as an e-book from anywhere else that I've been able to find. (I feel fortunate to have even found the *.pdb files on backup.) I have a copy of the electronic invoice for it - which includes the details necessary for unlocking - the "Purchaser's Name" and the "Unlock Code" - which is the digits of my credit card # that I had used to purchase it. Given the above information, I was surprised to be able to open the book using the Windows eReader software and unlock it. Here I am able to view the complete contents and functionality of the book as I had done on the Palm Treo - including viewing of linked annotations / footnotes, etc. Following the full spirit of Remove DRM from ePub Files?, I want to ensure that I can access this on any device of my choosing - especially now and in the future, and as new technologies arrive and disappear. Ideally, I'm just looking to accomplish the minimum necessary to allow import into calibre. Outstanding Issue: I've found a few solutions that have given me "90%" success - all based on various versions of some Python scripts - including versions 0.21 and 0.11 of "erdr2pml.py" (based on "ereader2html"). Unfortunately, unless I'm missing something, these programs are attempting to also "convert" - instead of just "decrypting". As such, the outputs are missing embedded images and/or footnotes. I.E., there is a linked, underlined, and super-scripted "a" after some text - but the content of the footnote no longer exists. I can validate this by inspecting the generated *.pmlz file, and nowhere does it contain the original footnotes that are still visible in the original *.pdb file. I'm hoping to find a process that focuses on the decryption only, instead of attempting any type of a content conversion - or if a content conversion is required / involved, that it maintains all of the features and content of the original. (Again, I'm confident that if/once a version is obtained that calibre can import, I'll be able to fulfill the rest of my requirements.)

    Read the article

  • How Do I Properly Run OfflineIMAP in a Crontab

    - by alharaka
    Installed Fedora. # cat /etc/redhat_release | awk ' { print F "> " $0; print ""; }' Fedora release 14 (Laughlin) Installed offlineimap from yum, cuz I'm lazy these days. # yum info offlineimap | awk ' { print F "> " $0; print ""; }' Loaded plugins: langpacks, presto, refresh-packagekit Adding en_US to language list Installed Packages Name : offlineimap Arch : noarch Version : 6.2.0 Release : 2.fc14 Size : 611 k Repo : installed From repo : fedora Summary : Powerful IMAP/Maildir synchronization and reader support URL : http://software.complete.org/offlineimap/ License : GPLv2+ Description : OfflineIMAP is a tool to simplify your e-mail reading. With : OfflineIMAP, you can read the same mailbox from multiple : computers. You get a current copy of your messages on each : computer, and changes you make one place will be visible on all : other systems. For instance, you can delete a message on your home : computer, and it will appear deleted on your work computer as : well. OfflineIMAP is also useful if you want to use a mail reader : that does not have IMAP support, has poor IMAP support, or does : not provide disconnected operation. And, lo and behold, every time I run offlineimap and try to redirect output in a crontab, it does not work. Below is my .offlineimaprc. [general] ui = TTY.TTYUI accounts = Personal, Work maxsyncaccounts = 3 [Account Personal] localrepository = Local.Personal remoterepository = Remote.Personal [Account Work] localrepository = Local.Work remoterepository = Remote.Work [Repository Local.Personal] type = Maildir localfolders = ~/mail/gmail [Repository Local.Work] type = Maildir localfolders = ~/mail/companymail [Repository Remote.Personal] type = IMAP remotehost = imap.gmail.com remoteuser = [email protected] remotepass = password ssl = yes maxconnections = 4 # Otherwise "deleting" a message will just remove any labels and # retain the message in the All Mail folder. realdelete = no [Repository Remote.Work] type = IMAP remotehost = server.company.tld remoteuser = username remotepass = password ssl = yes maxconnections = 4 I have tried TTY.TTYUI, NonInteractive.Quiet and NonInteractive.Basic with different variations. With or without redirection, the crontab entries I try cause problems. $ crontab -l | awk ' { print F "> " $0; print ""; }' */5 * * * * offlineimap >> ~/mail/logs/offlineimap.log 2>&1 */5 * * * * offlineimap I always get the same damn error ERROR: No UIs were found usable!. What am I doing wrong!?
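
     The "No UIs were found usable!" error is what OfflineIMAP prints when the TTY.TTYUI frontend configured in .offlineimaprc has no terminal to attach to, which is exactly the situation under cron. A hedged crontab sketch that forces a non-interactive UI on the command line (the binary path and log location are assumptions; the UI name is the one already mentioned in the question):

         # run a single sync pass with a UI that works without a terminal
         */5 * * * * /usr/bin/offlineimap -o -u NonInteractive.Basic >> $HOME/mail/logs/offlineimap.log 2>&1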

    Read the article

  • Macintosh computers cannot connect to router unless we re-start the modem and router

    - by dwwilson66
    We have a small office network with DSL and a Netgear WNR-2000 wireless router acting as a DHCP server. There are nine devices connected to the router, wirelessly and wired. Whenever a Mac computer tries to connect, it's unsuccessful until we restart the router. Each of the possible devices that can connect to the network is listed in a table to assign certain IP addresses to certain MAC addresses. I am running WPA-PSK security. I can view the router status and see that the Mac's MAC address is visible to the router, but with a 169.* IP address, even though I'm assigning its MAC address to an IP address within my subnet. All non-Mac devices attached to the network connect properly, and can access the network properly even AFTER the Mac has not successfully connected. The network includes Windows devices, Roku boxes, printers and internet ready TVs. This to me, would point to a DHCP issue with how Mac communicates with my network. One interesting thing to note is that if a Mac connects and is prevented from sleeping, it will stay connected indefinitely; reissuing the security cert from the router works fine. I'm not sure if that's supposed to sever & re-establish a connection with the updated credentials or not, but I do stay connected. If the Mac sleeps and is awakened while the security cert is still valid, it connects fine. If the security certificate expires while the Mac is asleep, we need to restart the router. Restarting the router will ALWAYS assigns the proper IP addresses to the Mac equipment. I have heard anecdotally that Mac doesn't play well with 802.11n; I have not tested any other Wireless protocols. There's a couple issues here: First, I found this on Stack, Mac laptop crashing wireless router, but it's not rally applicable since the router isn't crashing. But, it does give some clues about Mac's accessing the network. I did change my encryption from WEP to WPA-PSK, but after about a week, we're still experiencing the issue. I'm not really sure if there's anything else useful in that question. Second, I'm considering getting a 802.11c router and hooking it up to the wireless N router. the 802.11c router would handle all the Mac traffic, and would be set up as a Mac-only subnet. Everything else would remain as is. However, I'm not sure if this is doable on a technology level...do I need a bridge or is this some way to do this with regular consumer gear?

    Read the article

  • Firefox: non-Vimperator way to do mouseless browsing?

    - by Peter Mortensen
    Is it possible to do efficient browsing with Firefox using only the keyboard (like in Opera)? By efficient I mean something faster than using TAB - this takes far too long. The arrow keys should be for navigation (in Opera it is Shift + arrow key). It can done with the Vimperator add-on, but isn't there a simpler way? Update 2: the closest to Opera's way is to enable caret navigation (F7 toggles this mode). It doesn't jump between links so it is a little bit slower, but the normal navigation (arrow keys, page up, page down, etc.) works and the focus/caret/cursor follows (in contrast to a text editor for page up/down). And text can be selected and copied like in a text editor. The biggest drawback is that in practice it is necessary to switch in and out of caret mode. And there is no indication of which mode is currently active. Update 1: a work-around (proposed by several but is not really what I am looking for) can be used if 3 settings are changed (to make it practical). After these changes the first few letters of a link text can be typed and that link will selected so pressing Enter will open it. Using the work-around the screen will jump around if it is a long page as it does not restrict itself to the current visible page, but it is usable. First settings change: menu Tools/Options/Advanced/tab General/Accessibility/Search for text when I start typing Turn this option on. Second settings change: set option to only go to links; in address bar enter: about:config followed by Enter. Then: press "I'll be careful, I promise", find the line accessibility.typeaheadfind.linksonly, select it and change the value to True by either hitting Enter or Shift+F10/Toggle (accessibility.typeaheadfind.linksonly is on line 11 when I tried). Third settings change: turn off case-sensitivity. Set accessibility.typeaheadfind.casesensitive to 0 (same procedure as for accessibility.typeaheadfind.linksonly, see above. When Enter is pressed a dialog box will appear with the current value. Type 0 and press Enter). To use: type some part of the link. If there are several possibilities use Ctrl+G (or F3) to jump between them. Use Ctrl+Enter to open in a new tab. Platform: Firefox 3.0.6, Windows XP 64 bit SP2.

    Read the article

  • dd-wrt router firmware QoS troubleshooting

    - by Jeff Atwood
    I've been using the dd-wrt firmware on my router and I like it a lot! But -- I'm not sure the quality of service (QoS) is working on it. I have it set up as follows: http, port 80 -- Premium bittorrent, port 6969 -- Bulk https, port 443 -- Premium dns, port 53 -- Premium Per the QoS documentation, these levels are: bandwidth is allocated based on the following percentages of uplink and downlink values for each class: Exempt: 100mbps - ignores global limits. Premium: 75% - 100% Express: 15% - 100% Standard: 10% - 100% Bulk: 1.5% - 100% This doesn't entirely seem to work, though -- with busy torrents going I get major pauses in my web browsing which sucks! The QoS documentation gives some steps to check the QoS ... What you'll be interested to look at will be the first set of source and destination IP, including the port numbers. Next the presence of l7proto and the "mark" field. The entries indicate the current live connection QoS priority applied on them based on the "mark" field. The "mark" values correspond to the following Exempt: 100 Premium: 10 Express: 20 Standard: 30 Bulk: 40 (no QoS matched): 0 You may see "mark=0" for some l7proto service even though they are in configured in the list of QoS rules. This may mean that the layer 7 pattern matching system didn't match a new or changed header for that protocol. Custom service on port matches will usually take care of these. On port 6969 (bittorrent) I see a weird mixture of stuff with mark=0 and mark=40 like so cat /proc/net/ip_conntrack udp 17 105 src=98.162.182.42 dst=1.2.3.4 sport=64512 dport=6969 packets=3 bytes=290 src=10.0.0.2 dst=98.162.182.42 sport=6969 dport=64512 packets=4 bytes=202 [ASSURED] mark=0 secmark=0 use=1 tcp 6 117 TIME_WAIT src=98.248.173.174 dst=1.2.3.4 sport=51114 dport=6969 packets=12 bytes=704 src=10.0.0.2 dst=98.248.173.174 sport=6969 dport=51114 packets=10 bytes=440 [ASSURED] mark=40 secmark=0 use=1 tcp 6 598 ESTABLISHED src=165.132.128.201 dst=1.2.3.4 sport=57218 dport=6969 packets=8024 bytes=9919881 src=10.0.0.2 dst=165.132.128.201 sport=6969 dport=57218 packets=4211 bytes=239607 [ASSURED] mark=0 secmark=0 use=1 tcp 6 586 ESTABLISHED src=68.46.9.24 dst=1.2.3.4 sport=64688 dport=6969 packets=6 bytes=490 src=10.0.0.2 dst=68.46.9.24 sport=6969 dport=64688 packets=8 bytes=944 [ASSURED] mark=40 secmark=0 use=1 udp 17 45 src=222.254.228.38 dst=1.2.3.4 sport=25438 dport=6969 packets=5 bytes=454 src=10.0.0.2 dst=222.254.228.38 sport=6969 dport=25438 packets=3 bytes=154 [ASSURED] mark=0 secmark=0 use=1 ( full file visible at http://pastebin.com/AZE6EtWm ) I've been playing around with this log for a little while and I can't see any patterns! Why is some port 6969 bittorrent traffic tagged mark=0 (not matched) by dd-wrt's QoS while others are tagged mark=40 (Bulk) .. any ideas?
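
     Rather than eyeballing individual conntrack lines, the mark values for port 6969 can be tallied in one shell pipeline; a small sketch (the leading space in the second pattern keeps "secmark=0" from being counted as "mark=0"):

         # count how many tracked port-6969 connections carry each QoS mark
         grep 'dport=6969 ' /proc/net/ip_conntrack | grep -o ' mark=[0-9]*' | sort | uniq -c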

    Read the article

  • Is Samba Server what I'm looking for, and if so, what do I need? (currently on DD-WRT Micro)

    - by Anthony
    I am really confused as to what Samba actually does and how it works. Here's what I'm hoping it does: I set up a Samba server on my LAN, and everyone will be able to see each other's shared files and swap them. But some of the documentation makes it sound like it will just allow Mac/Linux computers to see Windows computers. Other bits of the documentation make it sound more like a local server, where a Linux machine would install Samba and they would see everyone and be visible to everyone, but that won't change if anybody else can see each other. While still other things I've read make it seem more like a file-server, where everyone sees each other but file transfers are not peer-to-peer but instead need a host disk for files to act as go between. So, assuming I'm even in the right ballpark of what Samba does in terms of my goal of total cross-visibility on the network, I am left with needing to know what I'd need to set up the server and whether it can be done and is worth it... DD-WRT's article on Samba is a bit ambiguous. One second it sounds as if I can run the server on micro as long as it's set up on a usb drive, but then it also sounds like micro can't run it at all, etc. If I can run it from a usb-connected drive, I still need to know if the files are actually stored on that drive. The dd-wrt article mentions: You can run a Samba server on your main computer and run a client on your router (thus gaining writable storage for the router) or you can use Samba to share a drive connected (typically by USB) to the router among all the computers connected to your network. That one part "to share a drive...among all the computers" makes it sound like the only benefit I get from Samba is a share drive that any OS on the network can see, but they still won't see each other. But I'm very hopeful I'm misreading this. If the computers can see each other but still need the disk, how much space is generally a good idea? I'm basing this on the idea that the drive is a temporary store point. Obviously I'd have to get a drive big enough to store everything people wanted to share if the drive is a full-on file server. If I do have this all wrong, is there any software that achieves what I have in mind? Something that connects to the main router to bridge all clients?
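
     For the "drive attached to the router, shared with every computer" reading of Samba, the server side is essentially one share block pointing at the USB disk. A minimal sketch, assuming a generic Linux-style smb.conf path and a drive mounted at /mnt/usbdrive (paths and the guest policy are assumptions, not dd-wrt specifics):

         # append a simple read/write guest share to the Samba configuration
         cat >> /etc/samba/smb.conf <<'EOF'
         [shared]
             path = /mnt/usbdrive/shared
             read only = no
             guest ok = yes
         EOF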

    Read the article

  • Transient network dropout for Xen DomU's

    - by Stephen C
    We've got a CentOS server running a cluster of virtuals. Occasionally the cluster's internal network drops out for a minute or so ... and then comes back. The problem is somehow related to the actual network traffic, but it is not a simple load issue. (The system is generally lightly loaded, and the problem occurs irrespective of actual load.) The setup: CentOS 5.6 on Dom0, various CentOS on the DomU's Hardware - a Dell R710 with a BroadCom NextXpress 2 NIC (sigh) using the latest drivers for the NIC from BroadCom Xen configured to use network-bridge and vif-bridge Some iptable tweaks to route an unrelated port to one of the virtuals. The system has one externally visible IP address, and Dom0 runs an Apache httpd configured with a number of virtual hosts each of which reverse proxies to web servers running on the virtuals. (The virtuals have to be NAT'ed, primarily because we don't have enough allocated public IP addresses.) The symptoms: Works fine most of the time. When someone tries to UPLOAD a large file to one virtuals, the internal network drops out ... for all virtuals: The Dom0 httpd sees a network timeout talking to the backend server on the virtual and reports a 502. A previously established ssh connection from Dom0 to any of the DomU's freezes. Our monitoring shows ping failures for traffic between virtuals. The Xen consoles to the DomU's do not freeze. No log messages in any log files that I can see, on either Dom0 or the DomU's ... apart from the Dom0 httpd logs. After a minute or so, the problem clears by itself. This is 100% reproducible. What we've tried: Downloading, building and installing the latest BNX2 driver on Dom0 Turning off MSI on the NIC - adding "options bnx2 disable_msi=1" to /etc/modprobe.conf Turning off tcp segmentation offload - "ethtool -K eth0 tso off". Sacrificing a black rooster at midnight. I've exhausted all my options apart from switching to KVM ... or slaughtering more roosters. Any suggestions?
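
     Beyond TSO, the bnx2-plus-bridge combination is often tripped up by the other offload features as well, so it may be worth switching them off on both the bridged physical NIC and the bridge-facing device. A hedged sketch - the interface names are assumptions for the standard network-bridge layout, and older ethtool builds may reject the gro option:

         # disable segmentation/receive offloads that are known to misbehave with bnx2 under a bridge
         for dev in peth0 eth0; do
             ethtool -K "$dev" tso off
             ethtool -K "$dev" gso off
             ethtool -K "$dev" gro off   # may not be supported by older ethtool versions
         done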

    Read the article

  • Broadcom HT1100 SATA controller not working properly with 1TB drives

    - by Jeff C
    I've been using RHEL distro's for several years and always managed to find the answers until now. I know this is more of a hardware issue, but I've been working on this for over a week and trust Linux and the IT community to help more then HP. I have CentOS 6.3 installed on an HP ProLiant DL145 G3 server with the BroadCom HT1100 IO controller and ServerWorks SATA Controller MMIO BIOS v3.0.0015.6 Firmware. This controller does not support large drives fully. Here's what I've tried and the results; Stock setup - Freezes on the ServerWorks POST screen. Can't even enter CMOS without disconnecting the drives. If I simply disconnect the SATA cables before it gets to the ServerWorks screen and reconnect afterwards I can boot from a CD, USB, PXE fine. However fiddling with cables at ever boot isn't practical. If I enter the BIOS config I can set it to not try booting the drives but leave the controller enabled. This lets me boot normally but the drives are not visible in the OS (live CDs or USB installed). I used method #2 to install and update CentOS. I have the /boot partition on a USB drive (everything else is on the SATA drives in software RAID1) hoping that would work around the issue but I get this Kernel panic - not syncing:Attempted to kill init! Pid: 1, comm: init Not tainted 2.6.32-279.9.1.el6.x86_6 #1 Call Trace: [<ffffffff814fd6ba>] ? panic+0xa0/0x168 [<ffffffff81070c22>] ? do_exit+0x862/0x870 [<ffffffff8117cdb5>] ? fput+0x25/0x30 [<ffffffff81070c88>] ? do_group_exit+0x58/0xd0 [<ffffffff81070d17>] ? sys_exit_group+0x17/0x20 [<ffffffff8100b0f2>] ? system_call_fastpath+0x16/0x1b panic occured, switching back to text console I'm sure it should be possible to talk to the drives without the BIOS boot check since the BIOS doesn't see them in method #2 either, their disconnected when it checks, but Linux sees them fine. If anyone could help figure out how I would greatly appreciate it! The other possible option I've come across is a complex firmware update. Tyan has a few boards on their website with the HT1100 and a ServerWorks v3.0.0015.7 update which says "adds support for TB drives" in the release notes. If someone could help me get the Tyan SATA firmware into the HP ROM file so I could just reflash that would also be very much appreciated. Thanks for any help you guys can offer!

    Read the article

  • Linux Software RAID1 Rebuild Completes, but after reboot, its degraded again

    - by zimmy6996
    I have been beating my head with an issue here, and I'm now turning to the internet for help. I have a system running Mandrake Linux, with the following configuration: /dev/hda - This is a IDE drive. Has some partitions on it that boot the system and make up most of the file system. /dev/sda - This is drive 1 of 2 for a software raid /dev/md0 /dev/sdb - This is drive 2 of 2 for a software raid /dev/md0 md0 gets mounted but fstab as /data-storage, so it is not critical to the systems ability to boot. We can comment it out of fstab, and the system works just fine either way. The problem is, we have a failed sdb drive. So I shut the box down, and have pulled the failed disk and installed a new disk. When the system boots up, /proc/mdstat shows only sda as part of the raid. I then run the various command to rebuild the RAID to /dev/sdb. Everything rebuilds correctly, and upon completion, you look at /proc/mdstat and it shows 2 drives sda1(0) and sdb1(1). Everything looks great. Then you reboot the box ... UGH!!! Once rebooted, sdb is missing again from the RAID. It is like the rebuild never happened. I can walk through the commands to rebuild it again, and it will work, but again, after reboot, the box seems to make sdb just vanish! The real odd thing is, if after reboot, I pull sda out of the box, and try to get the system to load with the rebuilt sdb drive in the system, and when I do, the system actually throws and error just after grub, and says something about drive error, and the system has to shut down. Thoughts??? I'm starting to wonder if grub has something to do with this mess. That the drive isn't being setup within grub to be visible at boot? This RAID array isn't necessary for the system to boot, but when the replacement drive is in there, without SDA it won't boot system, so it makes me believe there is something to that. On top of that, there just seems to be something wonky here the drive falling off of RAID after reboot. I've hit the point of pounding my head on the keyboard. Any help would be greatly appreciated!!!
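
     For reference, a hedged sketch of a full replacement sequence with the device names from the question: the usual reasons an array "forgets" a rebuilt member after reboot are that the new disk was added without a matching partition table, or that mdadm.conf and the boot loader were never updated to know about it.

         # clone the partition layout from the surviving disk to the replacement
         sfdisk -d /dev/sda | sfdisk /dev/sdb

         # add the new partition to the array and let it resync
         mdadm --manage /dev/md0 --add /dev/sdb1

         # record the array so it is assembled the same way at boot
         mdadm --detail --scan >> /etc/mdadm.conf

         # put a boot loader on the second disk so the box can start without sda
         grub-install /dev/sdb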

    Read the article

  • How to setup GIT repo on server with need for working dir (non- bare)

    - by OrangeTux
     I want to configure a Git repo for a website. Multiple users will have a clone of the repo on their local machines, and at the end of each day they push their work to the server. I can set up a bare repo, but I want a working dir/non-bare repository: the idea is that the working dir of the repository will be the root folder of the website, so at the end of each day all changes will be visible directly. But I can't find a way to do this. Initializing the server repo with git init gives the following error when a client tries to push some files:

         git push origin master
         [email protected]'s password:
         Counting objects: 3, done.
         Writing objects: 100% (3/3), 227 bytes, done.
         Total 3 (delta 0), reused 0 (delta 0)
         remote: error: refusing to update checked out branch: refs/heads/master
         remote: error: By default, updating the current branch in a non-bare repository
         remote: error: is denied, because it will make the index and work tree inconsistent
         remote: error: with what you pushed, and will require 'git reset --hard' to match
         remote: error: the work tree to HEAD.
         remote: error:
         remote: error: You can set 'receive.denyCurrentBranch' configuration variable to
         remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into
         remote: error: its current branch; however, this is not recommended unless you
         remote: error: arranged to update its work tree to match what you pushed in some
         remote: error: other way.
         remote: error:
         remote: error: To squelch this message and still keep the default behaviour, set
         remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
         To ssh://[email protected]/home/orangetux/www/
          ! [remote rejected] master -> master (branch is currently checked out)
         error: failed to push some refs to 'ssh://[email protected]/home/orangetux/www/'

     So I'm wondering: is this the right way to set up a Git repo for a website? If so, how do I do it? If not, what is a better way to set up a Git repo for the development of a website?

     EDIT: "you can't push to a non-bare repository" - OK, clear. But what is the way to solve my problem? Create a bare repository on the server and have a clone of it in the htdocs folder on the same server? That looks a bit clumsy to me: to see the result of a commit I'd have to clone the repository each time.
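
     One common pattern that avoids both the push error and re-cloning after every commit is a bare repository plus a post-receive hook that checks the pushed branch out into the web root. A minimal sketch, assuming the web root from the question and a repository name (site.git) chosen here for illustration:

         # on the server: a bare repository that all developers push to
         git init --bare /home/orangetux/site.git

         # a post-receive hook that deploys the pushed master branch into the web root
         cat > /home/orangetux/site.git/hooks/post-receive <<'EOF'
         #!/bin/sh
         GIT_WORK_TREE=/home/orangetux/www git checkout -f master
         EOF
         chmod +x /home/orangetux/site.git/hooks/post-receive

     Developers would then push to site.git instead of the www directory, and the checkout into /home/orangetux/www happens automatically on every push.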

    Read the article

  • Subdomains, folders, internationalization, and hosting solutions

    - by justinbach
    I'm a web developer and I recently landed a gig to develop the US / international version of a site for a company that's big in Europe but hasn't done much expansion into the US yet. They've got an existing site at company.com, which should remain visible to European customers after the new site goes up, and an existing (not great) site at company.us, which I'm going to be redeveloping (the .us site will be taken down when my version goes up--keep reading for details). My solution needs to take into account the fact that there are going to be new, localized versions of the site in the fairly near future, so the framework I'm writing needs to be able to handle localizations fairly easily (dynamically load language packs, etc). The tricky thing is the European branch of the company manages the .com site hosting (IIS-based) and the DNS, while I'll be managing the US hosting (and future localizations), which will likely be apache-based. I've never been a big fan of the ".us" TLD--I think most US users are accustomed to visiting the .com--so the thought is that the European branch will detect the IP of inbound traffic and redirect all US-based addresses to us.example.com (or whatever the appropriate localized subdomain might be), which would point to the IP address of my host. I'd then serve the appropriate locale-specific content by pulling the subdomain from the $_SERVER superglobal (assuming PHP). I couldn't find any examples of international organizations that take a subdomain-based approach for localization, but I'm not sure I have any other options as a result of the unique hosting structure here (in that there's not a unified hosting solution for the European and US sites). In my experience, the US version of an international site would live at domain.com/us, not at us.domain.com, and I'd imagine that this has to do with SEO (subdomains are treated as separate sites, so improved rankings for the US site wouldn't help the Canadian version if subdomains are used to differentiate between them). My question is: is there a better approach to solving this problem than the one I'm taking? Ideally, I'd like to use a folder-based approach (see adidas.com as an example of what I'm talking about), but I'm not sure that's a possibility given that the US site (and other localizations) will not be hosted on the same server as the rest of the .com. Can you, in IIS, map a folder (e.g. domain.com/us) to a different IP address? What would you recommend? Thanks for your consideration.

    Read the article

  • How do you backup 40+ Centos5.5 servers?

    - by John Little
     We are embarrassed to ask this question. Apologies for our lack of UNIX expertise. We have inherited 40+ CentOS 5.5 servers and don't know how to back them up. We need low-level, clone-type images so that we could restore the servers from scratch if we had to replace the HDs etc. We have used the "dd" command, but we assume this only works if you want to back up one local disk to another, not 40 servers to one server with an external USB HD attached. All 40 servers have a pair of mirrored disks (we don't know if it's HW or SW RAID). Most only have 100MB used. Servers are running Apache, Zend, Tomcat, MySQL etc. Ideally we don't want to have to shut them down to back up (but we could). We assume that standard UNIX commands like tar, cpio, rsync, scp etc. are of no use as they only copy files, not partitions, attributes, groups etc. - i.e. they do not produce a result which can simply be re-imaged onto a new HD to get the server back from the dead. We have a large SAN, a spare Windows box and spare UNIX boxes, but these are only visible to one layer in the network. We have an unused Dell DL2000 monster tape unit, but no software or documentation for it. We have a copy of Symantec Backup Exec, but we have no budget for UNIX client licenses. (The company has negative amounts of money.) We need to be able to initiate the backup remotely, as we can only access the servers in person in an emergency (i.e. to restore). Googling returns some applications to do this, e.g. Clonezilla - which looks difficult to install and invasive - and Mondo, which only seems to support backup if you are local to the machine. Amanda might be an option, but looks like days/weeks of work to learn and set up. Is there anything built into CentOS, or do we have to go the route of installing, learning and configuring a set of backup software? Any ideas? This must be a pretty standard problem, but googling doesn't give an obvious answer.
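
     As a stop-gap using only stock tools, dd can be pushed over ssh from each server to the machine holding the external USB disk. A rough sketch, assuming root ssh access and a servers.txt list of hostnames; note that imaging a mounted filesystem while it is running is only crash-consistent, so quiescing or stopping MySQL first is safer, and this images whole disks so the USB drive needs enough space for the compressed images.

         # pull a compressed raw image of each server's first disk onto the USB drive
         while read host; do
             ssh -n root@"$host" 'dd if=/dev/sda bs=4M conv=sync,noerror | gzip -c' \
                 > "/mnt/usbdisk/${host}-sda.img.gz"
         done < servers.txt

     The -n flag stops ssh from swallowing the rest of servers.txt from the loop's stdin.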

    Read the article

  • Email client wont connect to SMTP Authentication server

    - by Jason
     I'm having trouble setting up SMTP AUTH for my Ubuntu email server. I have followed Ubuntu's own guide for SMTP AUTH (https://help.ubuntu.com/14.04/serverguide/postfix.html), but my email client, Thunderbird, is giving this error: "lost connection to SMTP-client 127.0.0.1." I can't add new users in Thunderbird either because of this connection problem. Do I have to alter some setting in Thunderbird, perhaps? I did try to make Thunderbird use SSL for IMAP as well, but that doesn't work either. I restarted postfix and dovecot to look for errors, but both run just fine. Prior to the SMTP AUTH changes, Thunderbird could connect to my server and send mail without problems. This is my main.cf file in postfix; it looks just like the one in the Ubuntu guide above.

         readme_directory = no

         # TLS parameters
         #smtpd_use_tls=yes
         smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
         smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

         myhostname = mail.mysite.com
         mydomain = mysite.com
         alias_maps = hash:/etc/aliases
         alias_database = hash:/etc/aliases
         myorigin = $mydomain
         mydestination = mysite.com
         #relayhost = smtp.192.168.10.1.com
         mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.10.0/24
         mailbox_size_limit = 0
         recipient_delimiter = +
         inet_interfaces = all
         home_mailbox = Maildir/
         mailbox_command =

         #SMTP AUTH
         smtpd_sasl_type = dovecot
         smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
         smtpd_sasl_local_domain =
         smtpd_sasl_auth_enable = yes
         smtpd_sasl_security_options = noanonymous
         broken_sasl_auth_clients = yes
         smtpd_tls_auth_only = no
         smtp_tls_security_level = may
         smtpd_tls_security_level = may
         smtp_tls_note_starttls_offer = yes
         smtpd_tls_key_file = /etc/ssl/private/smtpd.key
         smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt
         smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem
         smtpd_tls_loglevel = 1
         smtpd_tls_received_header = yes

     This is my dovecot configuration in 10-master.conf:

         service imap-login {
             inet_listener imap {
                 #port = 143
             }
             inet_listener imaps {
                 #port = 993
                 #ssl = yes
             }
             # Number of connections to handle before starting a new process. Typically
             # the only useful values are 0 (unlimited) or 1. 1 is more secure, but 0
             # is faster. <doc/wiki/LoginProcess.txt>
             #service_count = 1
             # Number of processes to always keep waiting for more connections.
             #process_min_avail = 0
             # If you set service_count=0, you probably need to grow this.
             #vsz_limit = $default_vsz_limit
         }

         service pop3-login {
             inet_listener pop3 {
                 #port = 110
             }
             inet_listener pop3s {
                 #port = 995
                 #ssl = yes
             }
         }

         service lmtp {
             unix_listener lmtp {
                 #mode = 0666
             }
             # Create inet listener only if you can't use the above UNIX socket
             #inet_listener lmtp {
                 # Avoid making LMTP visible for the entire internet
                 #address =
                 #port =
             #}
         }

         service imap {
             # Most of the memory goes to mmap()ing files. You may need to increase this
             # limit if you have huge mailboxes.
             #vsz_limit = $default_vsz_limit

             # Max. number of IMAP processes (connections)
             #process_limit = 1024
         }

         service pop3 {
             # Max. number of POP3 processes (connections)
             #process_limit = 1024
         }

         service auth {
             unix_listener auth-userdb {
                 #mode = 0600
                 #user =
                 #group =
             }

             # Postfix smtp-auth
             unix_listener /var/spool/postfix/private/auth {
                 mode = 0660
                 user = postfix
             }
         }

         service dict {
             # If dict proxy is used, mail processes should have access to its socket.
             # For example: mode=0660, group=vmail and global mail_access_groups=vmail
             unix_listener dict {
                 #mode = 0600
                 #user =
                 #group =
             }
         }

     I did add auth_mechanisms = plain login to 10-auth.conf as well.
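
     Before changing anything else in Thunderbird, it is worth checking from the server itself whether Postfix actually advertises AUTH after STARTTLS. A quick hedged test, assuming port 25 since no submission service is shown in the configuration above:

         # open a STARTTLS-protected SMTP session and inspect the server's capabilities
         openssl s_client -connect 127.0.0.1:25 -starttls smtp -crlf -quiet
         # then type:  EHLO test
         # and look for a "250-AUTH PLAIN LOGIN" line in the response

     If no AUTH line appears, the problem is on the Postfix/Dovecot side (for example the dovecot auth socket); if it does appear, the Thunderbird account settings (port, STARTTLS, "normal password") are the next thing to check.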

    Read the article

  • Apache's htcacheclean doesn't scale: How to tame a huge Apache disk_cache?

    - by flight
    We have an Apache setup with a huge disk_cache (500.000 entries, 50 GB disk space used). The cache grows by 16 GB every day. My problem is that the cache seems to be growing nearly as fast as it's possible to remove files and directories from the cache filesystem! The cache partition is an ext3 filesystem (100GB, "-t news") on an iSCSI storage. The Apache server (which acts as a caching proxy) is a VM. The disk_cache is configured with CacheDirLevels=2 and CacheDirLength=1, and includes variants. A typical file path is "/htcache/B/x/i_iGfmmHhxJRheg8NHcQ.header.vary/A/W/oGX3MAV3q0bWl30YmA_A.header". When I try to call htcacheclean to tame the cache (non-daemon mode, "htcacheclean-t -p/htcache -l15G"), IOwait is going through the roof for several hours. Without any visible action. Only after hours, htcacheclean starts to delete files from the cache partition, which takes a couple more hours. (A similar problem was brought up in the Apache mailing list in 2009, without a solution: http://www.mail-archive.com/[email protected]/msg42683.html) The high IOwait leads to problems with the stability of the web server (the bridge to the Tomcat backend server sometimes stalls). I came up with my own prune script, which removes files and directories from random subdirectories of the cache. Only to find that the deletion rate of the script is just slightly higher than the cache growth rate. The script takes ~10 seconds to read the a subdirectory (e.g. /htcache/B/x) and frees some 5 MB of disk space. In this 10 seconds, the cache has grown by another 2 MB. As with htcacheclean, IOwait goes up to 25% when running the prune script continuously. Any idea? Is this a problem specific to the (rather slow) iSCSI storage? Should I choose a different file system for a huge disk_cache? ext2? ext4? Are there any kernel parameter optimizations for this kind of scenario? (I already tried the deadline scheduler and a smaller read_ahead_kb, without effect).
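
     One thing that sometimes helps more than a one-shot run is letting htcacheclean prune continuously in daemon mode, so it never has to walk the whole 500,000-entry tree in a single pass. A hedged sketch of the invocation - the interval and the use of the nice/intelligent flags are arbitrary starting points, not tuned values:

         # prune the cache every 30 minutes, backing off when the disk is busy,
         # and remove empty directories as it goes
         htcacheclean -d30 -n -t -i -p/htcache -l15G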

    Read the article

  • Tips for maximizing Nginx requests/sec?

    - by linkedlinked
    I'm building an analytics package, and project requirements state that I need to support 1 billion hits per day. Yep, "billion". In other words, no less than 12,000 hits per second sustained, and preferably some room to burst. I know I'll need multiple servers for this, but I'm trying to get maximum performance out of each node before "throwing more hardware at it". Right now, I have the hits-tracking portion completed, and well optimized. I pretty much just save the requests straight into Redis (for later processing with Hadoop). The application is Python/Django with a gunicorn for the gateway. My 2GB Ubuntu 10.04 Rackspace server (not a production machine) can serve about 1200 static files per second (benchmarked using Apache AB against a single static asset). To compare, if I swap out the static file link with my tracking link, I still get about 600 requests per second -- I think this means my tracker is well optimized, because it's only a factor of 2 slower than serving static assets. However, when I benchmark with millions of hits, I notice a few things -- No disk usage -- this is expected, because I've turned off all Nginx logs, and my custom code doesn't do anything but save the request details into Redis. Non-constant memory usage -- Presumably due to Redis' memory managing, my memory usage will gradually climb up and then drop back down, but it's never once been my bottleneck. System load hovers around 2-4, the system is still responsive during even my heaviest benchmarks, and I can still manually view http://mysite.com/tracking/pixel with little visible delay while my (other) server performs 600 requests per second. If I run a short test, say 50,000 hits (takes about 2m), I get a steady, reliable 600 requests per second. If I run a longer test (tried up to 3.5m so far), my r/s degrades to about 250. My questions -- a. Does it look like I'm maxing out this server yet? Is 1,200/s static files nginx performance comparable to what others have experienced? b. Are there common nginx tunings for such high-volume applications? I have worker threads set to 64, and gunicorn worker threads set to 8, but tweaking these values doesn't seem to help or harm me much. c. Are there any linux-level settings that could be limiting my incoming connections? d. What could cause my performance to degrade to 250r/s on long-running tests? Again, the memory is not maxing out during these tests, and HDD use is nil. Thanks in advance, all :)
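
     On question (c), the kernel defaults often throttle sustained connection rates long before nginx does, and exhaustion of ephemeral ports/TIME_WAIT sockets is a classic cause of the kind of degradation seen in long benchmarks. A hedged sketch of the usual sysctl suspects - the values are starting points, not tuned recommendations:

         # raise connection backlogs, widen the ephemeral port range,
         # and let TIME_WAIT sockets be reused for new outbound connections
         sysctl -w net.core.somaxconn=4096
         sysctl -w net.core.netdev_max_backlog=4096
         sysctl -w net.ipv4.ip_local_port_range="1024 65535"
         sysctl -w net.ipv4.tcp_tw_reuse=1
         sysctl -w net.ipv4.tcp_fin_timeout=15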

    Read the article

  • Cloud storage provider lost my data. How to back up next time?

    - by tomcam
    What do you do when cloud storage fails you? First, some background. A popular cloud storage provider (rhymes with Booger Link) damaged a bunch of my data. Getting it back was an uphill battle, with all the usual accusations that it was my fault, etc. Finally I got the data back. Yes, I can back this up with evidence. Idiotically, I stayed with them, so I totally get that the rest of this is on me.

    The problem had been with a shared folder that works with all 12 computers my business and family use with the service. We'll call that folder the Tragic Briefcase. It is a sort of global folder that's publicly visible to all computers on the service. It's our main repository.

    Today I decided to deal with some residual effects of the Crash of '11. Part of the damage was that, on just one of my computers (my primary, of course), all the documents in the Tragic Briefcase were duplicated in the Windows My Documents folder. I finally started deleting them. But guess what: though they appeared to be duplicated in the file system, removing them from My Documents on the primary PC caused them to disappear from the Tragic Briefcase too. They efficiently disappeared from all the other computers' Tragic Briefcases as well. So now 21 GB of files are gone, and of course I don't know which ones.

    I want to avoid this in the future. Apart from using a different storage provider, the bigger picture is this: how do I back up my cloud data? A complete backup every week or so from the web to local storage would put me over my ISP's bandwidth cap. Do I need to back up each of my 12 PCs locally? I do use Backupify for my primary Google Docs, but I have been storing taxes, confidential documents, Photoshop source, video source files, and so on using the web service. So it's a lot of data, but I need to keep it safe. Backing up locally would also mean two backup drives or some kind of RAID per PC, right, because you can't trust a single point of failure? Assuming I move to DropBox or something of its ilk, what is the best way to make sure that I can restore if the next cloud storage provider messes up?

    Read the article

  • JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g

    - by John-Brown.Evans
    This example shows the steps to create a simple JMS queue in WebLogic Server 11g for testing purposes -- for example, for use with the two sample programs QueueSend.java and QueueReceive.java, which will be shown in later examples. Additional, detailed information on JMS can be found in the following Oracle documentation: Oracle® Fusion Middleware Configuring and Managing JMS for Oracle WebLogic Server 11g Release 1 (10.3.6), Part Number E13738-06, http://docs.oracle.com/cd/E23943_01/web.1111/e13738/toc.htm

    1. Introduction and Definitions

    A JMS queue in WebLogic Server is associated with a number of additional resources:

    JMS Server: A JMS server acts as a management container for resources within JMS modules. Some of its responsibilities include the maintenance of persistence and the state of messages and subscribers. A JMS server is required in order to create a JMS module.

    JMS Module: A JMS module is a definition which contains JMS resources such as queues and topics. A JMS module is required in order to create a JMS queue.

    Subdeployment: JMS modules are targeted to one or more WLS instances or a cluster. Resources within a JMS module, such as queues and topics, are also targeted to a JMS server or WLS server instances. A subdeployment is a grouping of targets. It is also known as advanced targeting.

    Connection Factory: A connection factory is a resource that enables JMS clients to create connections to JMS destinations.

    JMS Queue: A JMS queue (as opposed to a JMS topic) is a point-to-point destination type. A message is written to a specific queue or received from a specific queue.
    The objects used in this example are:

    Object Name            Type                 JNDI Name
    TestJMSServer          JMS Server           -
    TestJMSModule          JMS Module           -
    TestSubDeployment      Subdeployment        -
    TestConnectionFactory  Connection Factory   jms/TestConnectionFactory
    TestJMSQueue           JMS Queue            jms/TestJMSQueue

    2. Configuration Steps

    The following steps are done in the WebLogic Server Console, beginning with the left-hand navigation menu.

    2.1 Create a JMS Server

    Services > Messaging > JMS Servers
    Select New
    Name: TestJMSServer
    Persistent Store: (none)
    Target: soa_server1 (or choose an available server)
    Finish

    The JMS server should now be visible in the list with Health OK.

    2.2 Create a JMS Module

    Services > Messaging > JMS Modules
    Select New
    Name: TestJMSModule
    Leave the other options empty
    Targets: soa_server1 (or choose the same one as the JMS server)
    Press Next
    Leave "Would you like to add resources to this JMS system module" unchecked and press Finish.

    2.3 Create a SubDeployment

    A subdeployment is not necessary for the JMS queue to work, but it allows you to easily target subcomponents of the JMS module to a single target or group of targets. We will use the subdeployment in this example to target the following connection factory and JMS queue to the JMS server we created earlier.

    Services > Messaging > JMS Modules
    Select TestJMSModule
    Select the Subdeployments tab and New
    Subdeployment Name: TestSubdeployment
    Press Next
    Here you can select the target(s) for the subdeployment. You can choose either Servers (i.e. WebLogic managed servers, such as soa_server1) or JMS Servers such as the JMS server created earlier. As the purpose of our subdeployment in this example is to target a specific JMS server, we will choose the JMS Server option.
    Select the TestJMSServer created earlier
    Press Finish

    2.4 Create a Connection Factory

    Services > Messaging > JMS Modules
    Select TestJMSModule and press New
    Select Connection Factory and Next
    Name: TestConnectionFactory
    JNDI Name: jms/TestConnectionFactory
    Leave the other values at default
    On the Targets page, select the Advanced Targeting button and select TestSubdeployment
    Press Finish

    The connection factory should be listed on the following page with TestSubdeployment and TestJMSServer as the target.

    2.5 Create a JMS Queue

    Services > Messaging > JMS Modules
    Select TestJMSModule and press New
    Select Queue and Next
    Name: TestJMSQueue
    JNDI Name: jms/TestJMSQueue
    Template: None
    Press Next
    Subdeployments: TestSubdeployment
    Finish

    The TestJMSQueue should be listed on the following page with TestSubdeployment and TestJMSServer.

    Confirm the resources for the TestJMSModule: using the Domain Structure tree, navigate to soa_domain > Services > Messaging > JMS Modules, then select TestJMSModule. You should see the resources created in the steps above.

    The JMS queue is now complete and can be accessed using the JNDI names jms/TestConnectionFactory and jms/TestJMSQueue. In the following blog post in this series, I will show you how to write a message to this queue, using the WebLogic sample Java program QueueSend.java.
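    As a preview of what that follow-up post covers, here is a minimal sketch of a standalone JMS client that sends a text message to the queue created above. It is not the official QueueSend.java sample: the provider URL, the class name, and the absence of credentials are assumptions to adapt to your own domain, and a WebLogic client JAR (e.g. wlfullclient.jar) is assumed to be on the classpath.

        import java.util.Hashtable;
        import javax.jms.Queue;
        import javax.jms.QueueConnection;
        import javax.jms.QueueConnectionFactory;
        import javax.jms.QueueSender;
        import javax.jms.QueueSession;
        import javax.jms.Session;
        import javax.jms.TextMessage;
        import javax.naming.Context;
        import javax.naming.InitialContext;

        public class SimpleQueueSendSketch {
            public static void main(String[] args) throws Exception {
                // Assumption: adjust the t3 URL to the host and port of the server
                // (e.g. soa_server1) that the JMS resources are targeted to.
                Hashtable<String, String> env = new Hashtable<>();
                env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
                env.put(Context.PROVIDER_URL, "t3://localhost:7001");

                Context ctx = new InitialContext(env);

                // Look up the JNDI names configured in sections 2.4 and 2.5.
                QueueConnectionFactory cf = (QueueConnectionFactory) ctx.lookup("jms/TestConnectionFactory");
                Queue queue = (Queue) ctx.lookup("jms/TestJMSQueue");

                QueueConnection connection = cf.createQueueConnection();
                try {
                    QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                    QueueSender sender = session.createSender(queue);
                    connection.start();

                    TextMessage message = session.createTextMessage("Hello TestJMSQueue");
                    sender.send(message);
                    System.out.println("Sent: " + message.getText());
                } finally {
                    connection.close();
                    ctx.close();
                }
            }
        }

    A QueueReceive counterpart would do the mirror image: look up the same connection factory and queue, create a QueueReceiver on the session, and call receive() to read the message back.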

    Read the article
