Search Results

Search found 22224 results on 889 pages for 'point of sale'.


  • Windows desktop virtualization instead of replacing workstations

    - by Chris Marisic
    I'm head of the IT department at the small business I work for; however, I am primarily a software architect, and all of my system administration experience and knowledge is ancillary to software development. At some point this year or next we will be looking at upgrading our workstation environment to a uniform Windows 7 / Office 2010 environment, as opposed to the hodgepodge of various OEM-licensed editions of software on each different machine. It occurred to me that it is probably possible to forgo upgrading each workstation and instead have each one act as a dumb terminal that accesses a virtualization server, with the entire virtual workstation hosted on the server.

    Now, I know basically anything is possible, but is this a feasible solution for a small business (25-50 workstations)? Assuming it is feasible, what rough guidelines exist for calculating the required server resources? How do solutions handle a user accessing their VM: do users log on normally to their physical workstation and then use Remote Desktop to access the VM, or is this usually negotiated by a client piece of software? What software is available for administering and monitoring these VMs, and can this functionality be achieved out of the box with Microsoft Server 2008?

    I'm mostly interested in these questions as they relate to Server 2008 with Hyper-V, but feel free to offer insight on VMware's product lineup, especially if there are any compelling reasons to choose it over Hyper-V in a Microsoft shop.

    Edit: To add some more information: the implementation goal would be to upgrade our platform from a Win2k3 / XP environment to a full Windows 2008 / Win7 platform without having to perform the associated work on each of our differently configured workstations. Also, could anyone offer realistic guidelines for how much hardware is needed to support 25-50 virtual workstations? The majority of the workstations do nothing except Office, Outlook and the web; the only high-demand workstations are the development workstations, which would keep everything local.
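    As a very rough starting point for the sizing question, here is an illustrative back-of-envelope sketch in shell; the per-VM figures are assumptions for light Office/web users, not vendor guidance:

        # Illustrative VDI sizing sketch - all figures are assumptions:
        # ~2 GB RAM per light office VM (guest + overhead) and an
        # 8:1 vCPU-to-physical-core overcommit for this class of workload.
        VMS=50
        RAM_PER_VM_GB=2
        HOST_OS_GB=4
        echo "RAM:   $(( VMS * RAM_PER_VM_GB + HOST_OS_GB )) GB"
        echo "Cores: $(( (VMS + 7) / 8 )) physical cores"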

    Read the article

  • Toshiba Satellite P755D USB 3.0 Drivers Missing - Windows 7 Professional

    - by nicorellius
    I bought a Toshiba Satellite P755D recently and installed Windows 7 Professional on the machine. It runs great, but I noticed the exclamation point in the yellow triangle icon in Device Manager next to the Universal Serial Bus (USB) controller (I'm assuming this is the USB 3.0 controller, because mine doesn't recognize devices).

    Normally, when this kind of thing happens I go to the manufacturer's website, download the appropriate drivers, and call it a day. But not this time... I browsed to my model and found no driver for the USB 3.0 controller. I tried other hardware and utility drivers, thinking they would be bundled. No luck. I tried looking up the motherboard in my machine: generic name, no luck. I then called Toshiba technical support and they tried basic troubleshooting, e.g., uninstalling the device and rebooting for auto-installation; no luck. I popped the Windows 7 disc back in and tried to get information that way; no luck.

    Finally, the technical support guy said he would look into the engineers' system to see if there was a specific driver available, and that's where I'm at. The technician told me that these USB 3.0 drivers come within the native driver pack in Windows, but that doesn't seem to be the case. Any ideas?

    EDIT - See attached screenshots.
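    If a driver package (.inf) can be sourced from the controller vendor (often Renesas/NEC for USB 3.0 chips of that era - an assumption, since the exact chipset isn't known), it can be staged from an elevated command prompt using the legacy pnputil syntax; the .inf path below is hypothetical:

        rem List third-party driver packages already in the driver store
        pnputil -e

        rem Add and install a vendor USB 3.0 driver package (path is hypothetical)
        pnputil -i -a C:\Drivers\USB3\nusb3xhc.inf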

    Read the article

  • Looking for a small, portable, port-mirroring Ethernet switch

    - by user37244
    I recently had a Mac go haywire, taking half a minute or more to get www.google.com loaded. Getting its owner to give up the machine for repair was like pulling teeth - they insisted it must be something to do with the network, since so much had changed with the local configuration at about the same time their box went haywire.

    I eventually set up a port mirror to a box that I could remote to, so I could show that the Mac was only irregularly getting packets onto the network. Demonstrating this faced an additional challenge: the latency of the remote desktop software I was using meant that I had to point to timestamps, instead of just the moment the packet flashed up on the screen, as my evidence.

    This particular user was the reason it was so challenging this time around, but I would like to have a box that I can cart from desk to desk so I can use Wireshark on my laptop at any station where I need it. 3com, Cisco, Netgear, etc. (ad nauseam) all make switches that can be configured for port mirroring, but in my case, the smaller, the better. For the sake of my sanity, I'll probably end up running it off a battery anyway. If my laptop had two Ethernet ports, this would be easy.

    So, whaddya recommend for a device that requires zero configuration at each power-up (though I'm fine with poking at it for a while to set it up initially), and is small, light, and cheap enough to get past purchasing? Thanks,
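    Once a mirror port is in place, captures don't have to be watched live over a laggy remote session; a hedged sketch of capturing to a file for offline, timestamped analysis (interface name and host address are illustrative):

        # Capture traffic from the mirrored port for later analysis
        sudo tcpdump -i eth0 -w mirror-capture.pcap host 192.168.1.50

        # Replay later with full timestamps, or open the .pcap in Wireshark
        tcpdump -tttt -r mirror-capture.pcap | head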

    Read the article

  • Did I miss anything when checking for passwords? [migrated]

    - by Keltari
    There's a bit of a story to this, so bear with me... I am looking for a new job and came across a posting for a computer forensics position. It's not really my field, but I thought I would apply anyway, just for fun. To make a longer story shorter, they want you to uncover as many passwords as you can find.

    I downloaded an image and dd'd it to a thumb drive. The only thing visible was a text file, which contained a password. I knew there had to be more, so I used an undelete utility and found two deleted files. First there was another text file with a password - easy. The other was a .pst file, which I mounted in Outlook. There were some emails with passwords, as well as an email with an image. Another email had a link to a steganography site. Obviously, there was a file hidden in the image, so I went to the website and downloaded the steganography decoder. I had to try some of the passwords I had found to get the file to decrypt, and sure enough, there was another text file with a password.

    I called it a day at that point. Did I miss any other methods?
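    For what it's worth, the early steps described above map to a couple of stock commands, and one cheap extra pass is to grep the raw image for printable strings, which often surfaces passwords sitting in slack space (file and device names are illustrative):

        # Write the challenge image to a thumb drive
        dd if=challenge.img of=/dev/sdb bs=4M

        # Search the raw image for likely password strings
        strings challenge.img | grep -i pass | less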

    Read the article

  • A few tables are still out of sync after running mk-table-sync

    - by smusumeche
    I have 1 master and 2 slaves, all running MySQL 5.1.42. I am attempting to use mk-table-checksum to verify that their data is in sync, but I am getting unexpected results on one of the slaves. First, I generate the checksums on the master like this:

        mk-table-checksum h=localhost --databases MYDB --tables {$table_list} --replicate=MYDB.mk_checksum --chunk-size=10M

    My understanding is that this runs the checksum queries on the master, which then propagate via normal replication to the slaves. So no locking is needed, because the slaves will be at the same logical point in time when they run the checksum queries on themselves. Is this correct? Next, to verify that the checksums match, I run this on the master:

        mk-table-checksum --databases MYDB --replicate=IRC.mk_checksum --replicate-check 1 h=localhost,u=maatkit,p=xxxx

    If there are any differences, I repair the slaves like this:

        mk-table-sync --execute --verbose --replicate IRC.mk_checksum h=localhost,u=maatkit,p=xxxx

    After doing all of this, I repaired both slaves with mk-table-sync. However, every time I run this sequence (after everything has already been repaired), one slave is perfectly in sync but the other always has a few tables out of sync. I am 99.999% sure that the data on the slaves matches, since I repaired everything and the tables were not even updated on the master between runs of the checksum script. What would cause a few tables to always show as out of sync on only one of the slaves? I am stuck. Here is the output:

        Differences on h=x.x.x.x,p=...,u=maatkit
        DB   TBL                CHUNK  CNT_DIFF  CRC_DIFF  BOUNDARIES
        IRC  product            10     0         1         product_id >= 147377 AND product_id < 162085
        IRC  post_order_survey  0      0         1         1=1
        IRC  mk_heartbeat       0      0         1         1=1
        IRC  mailing_list       0      0         1         1=1
        IRC  honey_pot_log      0      0         1         1=1
        IRC  product            12     0         1         product_id >= 176793 AND product_id < 191501
        IRC  product            18     0         1         product_id >= 265041
        IRC  orders             26     0         1         order_id >= 694472
        IRC  orders_product     6      0         1         op_id >= 935375

    Read the article

  • Does my Oracle DBA need root access?

    - by Dr I
    I'm currently in a discussion with my Oracle DBA colleague, who is requesting root access on our production servers. I'm not keen to hand out root on production. He argues that he needs it to perform some operations, like restarting the server, along with some other obscure arguments.

    I don't agree with him, because I've set up an Oracle user for him and a dba group that the Oracle user belongs to, and everything has been running smoothly without any root permissions so far. I also think that administrative tasks like scheduled server restarts should be performed by the proper administrator (the systems administrator, in our case) to avoid issues caused by a misunderstanding of how the infrastructure components interact.

    So I need the help of both sysadmins and Oracle DBAs to point me in the correct direction. If my colleague really needs these rights I'll grant them, but I'm basically quite afraid of doing so because of security and integrity concerns. I know that my colleague is really good as an Oracle DBA and knows his work very well, but I've also seen very few cases where a piece of software and its admin really need root access. Once again, I'm not looking for pros and cons, but rather advice on how I should deal with this situation.
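    One common middle ground (an illustration, not a recommendation specific to this setup) is to delegate only the named commands via sudo instead of handing out root; a minimal /etc/sudoers sketch, assuming the DBA logs in as the user oracle:

        # Edit with visudo. Lets the oracle user restart the host and the
        # Oracle service (the service name is hypothetical) - and nothing else.
        oracle ALL=(root) /sbin/reboot, /sbin/service oracle restart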

    Read the article

  • Execute encrypted files but don't let anybody read them

    - by Stebi
    I want to provide a virtual machine image with an installed web application. The user should be able to boot the VM (not log in, just boot) and a web server should start automatically. The point is that I want to hide the (Ruby) source code of the web application from everyone, as there is no obfuscator for Ruby.

    I thought I could use file system encryption to encrypt the directory with the source code (or even a whole partition), but the web server user must be able to read it automatically after booting. Nobody is allowed to log in as the web server user (or any other user), so no one else can read the contents.

    My questions are: Is this possible? Because I give away the whole VM, anybody could mount its virtual discs and read them (except the encrypted one). Is it then possible to find the key the web server user needs to decrypt the files, and decrypt them manually? Or is it safe to give such a VM away? The problem is that everything needed for decryption must be included somewhere in the VM, or else the web server cannot start automatically. Maybe I'm completely wrong and you have another tip for me on securing the source code.
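    For reference, the mount-at-boot mechanics being described would look roughly like this with dm-crypt/LUKS (device, mapping and key-file names are all illustrative). Note that the key file shipped inside the VM for unattended boot is exactly what any recipient can also extract:

        # Illustrative unattended unlock at boot - names are hypothetical
        cryptsetup luksOpen /dev/sdb1 appsrc --key-file /root/app.key
        mount /dev/mapper/appsrc /var/www/app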

    Read the article

  • Apache inflate application/ with mod_filter

    - by BGT
    I need to prevent PDF objects from being gzipped. Really, this only needs to take place if the request is from the Mozilla browser (but since I can't get something as seemingly simple as no-gzip for application/pdf working, I figure it's wiser to start there). From looking at the Apache documentation on mod_filter, I've got the following:

        <Location />
            FilterDeclare gzipDeflate CONTENT_SET
            FilterDeclare gzipInflate CONTENT_SET
            FilterProvider gzipDeflate deflate req=User-Agent $Mozilla/
            FilterProvider gzipInflate inflate resp=Content-Type $application/
            FilterChain +gzipDeflate +gzipInflate
        </Location>

    From my testing, the gzipDeflate filter is doing its job, and all the pages whose Content-Type does not start with application are being gzipped. But the gzipInflate doesn't seem to be working at all. I've inspected the response in Firebug and verified that the Content-Type being sent down is application/pdf.

    I'll go ahead and ask a potentially stupid question, though: the response's Content-Type header in its entirety reads "application/pdf; charset=Windows-1252". Does that make any sort of difference, or is $application/ presumably enough to catch that?

    Any help is greatly appreciated. One other point: the URL that returns the PDF object does not have the .pdf extension. The PDF itself is stored in an Oracle database as a blob and appended to the page when appropriate (all the URLs in the system use the same baseline). This was part of an original inquiry by a helpful member at Stack Overflow, who pointed me towards mod_filter and suggested I post the question here.

    Read the article

  • Why does Windows 7 have three system partitions?

    - by Ben
    I am using Windows 7, and I wanted to make a system image (using Windows 7's built-in tool), but Windows 7 checked three partitions as System: the 100 MB partition + C (the install partition) + D (my partition for my files; all programs are installed on C). I don't want to back up my D partition, but that is not really the point: I don't want Windows messing with my other partitions and marking them as system. Is there a way to limit Windows 7 to just partition C (the install partition)? If there is no way to stop Windows from making other partitions system, can I at least delete the files that make partition D system?

    PS: All three partitions are on one physical disk; partitions on other disks aren't treated as System.

    FACTS: This is a desktop PC with no OEM partitions, and I personally have installed Windows 7 (many times) on the C partition. Why is my D partition checked as a System partition when I try to create a system image (using the Windows 7 Ultimate built-in tool), even though Windows (and all the software) are installed on the C partition? Is there a way to make D a "normal", non-system partition? Here is a picture of how it looks when I try to create a system image. Once again, why is D also a system partition?

    Read the article

  • How do I restore to a delta file (disk) on VMware ESXi

    - by Oscar
    Using VMware ESXi (the free version), I have a virtual machine (a Windows 2003 R2 server). When I first provisioned it, I took a snapshot of it. I recently tried to clone the primary drive using my standard hardware-based method for growing a Windows disk: using Knoppix, clone the drive to a new drive, make it bootable, and then extend the partition via diskpart from within Windows.

    This process failed. I tried setting the cloned drive (via the VMware GUI) to replace the original drive, boot, and be done. This didn't work out so well: the machine never booted. I checked the boot order, the disk location and all the basics I usually do. As a failsafe, I then tried changing all the settings back so the machine would boot from the original drive and I could figure out (as I eventually did) a better way of growing the disk.

    However, when I powered on the machine with the original drive, it reverted back to that initial snapshot I created; it lost all the changes made since. I looked in the file system and found a few files. I think the key file here is one named "delta", and I'm assuming that's the disk I want, but I can't find a way to have the virtual machine actually use that drive/file. It isn't available to add when I go to add an existing drive. Do I need to somehow commit that delta to the original drive and then boot from it again? Can you point me in the right direction?

    I've since discovered the proper way of growing drives using "vmkfstools", but I need to get back to the original state of the machine to try this out. Any help would be greatly appreciated.
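    For reference, the vmkfstools route mentioned at the end looks roughly like this from the ESXi console; the datastore paths and the size are illustrative, and a backup first is strongly advised:

        # Grow a virtual disk, then extend the partition with diskpart in Windows
        vmkfstools -X 40G /vmfs/volumes/datastore1/vm/vm.vmdk

        # Or clone a disk to a new one
        vmkfstools -i /vmfs/volumes/datastore1/vm/vm.vmdk /vmfs/volumes/datastore1/vm/vm-clone.vmdk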

    Read the article

  • Cookieless domain redirect in WHM/cPanel

    - by Patrick Lanfranco
    I am currently trying to get my head around how to set up a "cookieless" domain using WHM/cPanel - unfortunately without any success so far. I have a Magento store and I would like to use cookieless domains for my media, skin (template) and JS files. Magento has a nice feature to define the URLs for those folders. My current setup is as follows:

        www.mydomain.com   <- main store
        media.mydomain.com <- subdomain pointing to the media folder (mydomain.com/media/)
        skin.mydomain.com  <- subdomain pointing to the skin folder (mydomain.com/skin/)
        js.mydomain.com    <- subdomain pointing to the js folder (mydomain.com/js/)

    I think it's pointless to use these as cookieless domains, since my Magento installation uses .mydomain.com as the cookie domain, so what I would like to achieve is to register a new, additional domain and have it point via WHM/cPanel to those specific locations. I have tried to change the A and CNAME records, although without any success: they were just redirecting from one page to another in the browser (newdomain.com jumps to old.com). What kind of records do I have to set to have this working properly? Some advice would be highly appreciated.
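    In broad strokes (a sketch, not a cPanel-specific recipe), the new asset domain needs plain A records pointing at the same server, and the document roots are then mapped in cPanel as addon or parked domains rather than with redirects; illustrative zone entries, with a placeholder IP:

        ; the IP below is a documentation placeholder - use the server's address
        media.newdomain.com.  IN  A  203.0.113.10
        skin.newdomain.com.   IN  A  203.0.113.10
        js.newdomain.com.     IN  A  203.0.113.10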

    Read the article

  • Certificate enrollment request chain not trusted

    - by makerofthings7
    I am working on a Microsoft lab for DirectAccess and need to create a web certificate. The instructions ask me to do the following:

        1. On EDGE1, click Start, type mmc, and then press ENTER. Click Yes at the User Account Control prompt.
        2. Click File, and then click Add/Remove Snap-ins.
        3. Click Certificates, click Add, click Computer account, click Next, select Local computer, click Finish, and then click OK.
        4. In the console tree of the Certificates snap-in, open Certificates (Local Computer)\Personal\Certificates.
        5. Right-click Certificates, point to All Tasks, and then click Request New Certificate.
        6. Click Next twice.
        7. On the Request Certificates page, click Web Server, and then click "More information is required to enroll for this certificate".
        8. On the Subject tab of the Certificate Properties dialog box, in Subject name, for Type, select Common Name.
        9. In Value, type edge1.contoso.com, and then click Add. Click OK, click Enroll, and then click Finish.
        10. In the details pane of the Certificates snap-in, verify that a new certificate with the name edge1.contoso.com was enrolled with Intended Purposes of Server Authentication.
        11. Right-click the certificate, and then click Properties. In Friendly Name, type IP-HTTPS Certificate, and then click OK.
        12. Close the console window. If you are prompted to save settings, click No.

    In production, our company has overridden the Web Server template, and it doesn't seem to be issuing certificates with the full CA chain. When I look at the issued certificate's properties, both tiers of the two-tier CA hierarchy are missing. How can I fix this? I'm not sure where to look outside the GUI.
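    As a starting point for looking outside the GUI, the chain of the issued certificate can be inspected from an elevated command prompt; the exported file name below is illustrative:

        rem Verify the chain (and fetch AIA/CDP URLs) of the issued certificate
        certutil -verify -urlfetch edge1.cer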

    Read the article

  • VNC Address Book - Window Invisible? Working as intended?

    - by user1445967
    I have VNC Server and VNC Viewer on the same PC with Windows 7 Ultimate x64. Is VNC Address Book supposed to have a main application window? I can't tell if this is bugged or working as intended.

    When I start VNC Address Book, it creates an item in the task list and also an icon in the notification area. If I click the task list item once, nothing happens. If I click again, it disappears from the task list but remains in the notification area.** If I right-click the icon in the notification area, there is an option 'open address book' which, if chosen, creates the item in the task list again, with the exact same behavior. If I left- or right-click on the address book icon, I have ways to connect to servers in the address book. However, I have no way to add/remove/edit servers in the address book.

    ** Note that this behavior is identical to that of an application with a main window that has an option to 'minimize to system tray / notification area'. Only here, no main window is ever visible.

    If fixing this problem is beyond the scope of this site, that's fine, but at this point I have no real way to tell if the program is malfunctioning on my PC or if it is far more minimalist than I expected. There appears to be no forum on realvnc.com where I can ask about this; perhaps this is to prevent me from getting any help unless I pay to register?

    Read the article

  • Queries passed to SQL Server are getting corrupted

    - by adrianbanks
    We are experiencing a bizarre error with our application at a customer site. We have managed to narrow it down to the point where we can replicate the behaviour using just Management Studio and SQL Server. We have two machines, A and B:

        +------------+                +--------------------+
        |    [A]     |                |        [B]         |
        | Management | -------------- | SQL Server 2008 R2 |
        |   Studio   |                |  Enterprise x64    |
        +------------+                +--------------------+

    We are running a SQL script in Management Studio on machine A against the SQL Server instance on machine B. We are not actually executing the script, just parsing it. Most of the time, the parse operation works fine. Occasionally (seemingly randomly), the parse operation fails with a syntax error. The error message shows the part of the script with the error, which appears as some SQL from the original script that has been truncated and has random characters appended to it. An example. The original SQL:

        SELECT DISTINCT ST.TABLE_NAME as TableName
        FROM INFORMATION_SCHEMA.TABLES AS ST
        INNER JOIN INFORMATION_SCHEMA.COLUMNS AS SC ON SC.TABLE_NAME = ST.TABLE_NAME
        WHERE ST.TABLE_TYPE = 'BASE TABLE'
          AND SC.COLUMN_NAME = 'Identity'
          AND ST.TABLE_NAME != 'dtproperties'
        ORDER BY ST.TABLE_NAME

    The SQL that is in error (as reported by SQL Server):

        SELECT DISTINCT ST.TABLE_NAME as TableName
        FROM INFORMATION_SCHEMA.TABLES AS ST
        INNER JOIN INFORMATION_SCHEMA.COLUMNS AS SC ON SC.TABLE_NAME = Sa?

    The above example shows how the query is being corrupted. It doesn't always happen, and it is not always the same bit of SQL that causes the error. Parsing this script against another SQL Server instance produces no errors, showing that the script is fine. It appears that something is corrupting the SQL that is being received by the server. This leads me to think that the problem lies either at the client end or in the transmission of the SQL from the client to the server.

    I have a SQL trace from the period where an error occurs, which shows the SQL has already been corrupted when SQL Server receives it. We have been unable to track down any possible cause of this behaviour, and so cannot find a fix. Because the errors occur seemingly randomly, it is also very hard to generate reproduction steps to submit a bug report. Any ideas?

    Read the article

  • Importing long numerical identifiers into Excel

    - by Niels Basjes
    I have some data in a database that uses IDs in the form of 16-digit numbers. In some situations I need to export the data in such a way that it can be manipulated in Excel, so I export the data to a file and import it into Excel. I've tried several file formats and I'm stuck.

    The problem I'm facing is that when Excel reads a file and a cell looks like a number, Excel treats it as a number. The catch is that (as far as I can tell) all numerical values in Excel are double-precision floating point, which has a precision of less than 16 decimal digits. So my IDs are changed: very often the last digit is changed to a 0.

    So far I've only been able to convince Excel to keep the ID unchanged by breaking it myself: by adding a letter or symbol to the ID. This, however, means that in order to use the value again it must be "unbroken".

    Is there a way to create a file where I can specify that Excel must treat the value as text, without changing the value? Or is there a way to let Excel treat the value as a long (64-bit integer)?
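    The rounding is inherent to IEEE 754 doubles, which represent integers exactly only up to 2^53. This is easy to reproduce outside Excel with awk, which also stores numbers as doubles:

        # 2^53 + 1 = 9007199254740993 has no exact double representation,
        # so the last digit shifts - the same thing Excel does to 16-digit IDs
        awk 'BEGIN { printf "%.0f\n", 9007199254740993 }'
        # prints: 9007199254740992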

    Read the article

  • Why is this name-based virtual host setting not working?

    - by Kave
    I have two domains (siteA.com & siteB.com) that point to the same web server, and I would like to serve a different web page for each. The steps I have taken so far are to copy the default site (siteA) to siteB:

        sudo cp /etc/apache2/sites-available/default /etc/apache2/sites-available/siteB
        sudo vim /etc/apache2/sites-available/siteB

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/siteB
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/siteB>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride FileInfo Indexes
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    Then I created /var/www/siteB and put a sample index.html in there. However, when I load my domain siteB.com I still get directed to /var/www/siteA. Why is that? Do I have to rename /etc/apache2/sites-available/default to /etc/apache2/sites-available/siteA as well?

    UPDATE: Thanks to the answer below, it seems that besides enabling the site I had also forgotten another entry:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName siteB.com
            ServerAlias www.siteB.com
        </VirtualHost>

    In order to include all subdomains as well, do:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName siteB.com
            ServerAlias *.siteB.com
        </VirtualHost>

    The same goes for siteA.
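    For completeness, on Debian/Ubuntu the new site file also has to be enabled and Apache reloaded, roughly:

        # Enable the siteB vhost and reload Apache
        sudo a2ensite siteB
        sudo /etc/init.d/apache2 reload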

    Read the article

  • ESXi 5 VM Putty session hangs, vSphere client timing out

    - by user192702
    First of all, I believe this is an ESXi issue, but let me know if you have seen this elsewhere. It started about a year ago: occasionally, when I connect via SSH (PuTTY) to my VM guests, if I do anything that makes the session display a lot of output at once, the session hangs and I have to start a new one, quite often only to find the same behaviour. By "display a lot of output" I mean any of the following:

        1) tail -f filename
        2) Pasting a long command
        3) less filename

    If I type one character at a time, this won't happen. I tried searching online, and it always points me to flow control settings, but the various suggestions I've tried have never resolved the issue.

    Since last week, I've noticed I'm not able to connect to my POP3 server from Outlook (it times out from Outlook's perspective). Today I tried to connect to the ESXi host via the vSphere client and it also gives me a timeout. The exact behaviour and error I saw are similar to the ones posted at the following URL, but the suggested technique also failed to resolve the issue:

        http://davidcocke.blogspot.hk/2012/02/unable-to-login-with-vsphere-client.html

    Has anyone experienced this before? Any suggestions on how to troubleshoot this?
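    A session that stalls only when a large burst of output arrives is a classic symptom of an MTU/fragmentation black hole somewhere in the path, so that is one hedged thing to rule out first (the host name below is illustrative):

        # Send a 1472-byte payload with "don't fragment" set: 1472 + 28 bytes
        # of headers = 1500, the usual Ethernet MTU. If this fails while
        # smaller sizes work, something in the path drops full-size frames.
        ping -M do -s 1472 esxi-host.example.com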

    Read the article

  • Strange RDP / Remote Desktop problem

    - by John Landheer
    I'll try to be as specific as I can:

    - Server is running SBS 2008 R2 (with all updates)
    - Server is connected to the internet
    - Server has 2 NICs; one is disabled
    - Server is running the RDP service (accessible directly from the internet - I know, not as secure as it should be)
    - Computers A and B are on the same local net
    - Computers A and B are both Windows 7
    - Users X and Y are both admins on the server
    - Computer A can connect as user X to the server with mstsc
    - Computer A can connect as user Y to the server with mstsc
    - Computer B can connect as user X to the server with mstsc
    - Computer B CANNOT connect as user Y to the server with mstsc! The error says the username/password is incorrect.

    The last point is the problem: I get an authentication error. This worked flawlessly for the last year. The server and desktops have been rebooted.

    EDIT: I tried:

    - prefixing the domain to the username
    - prefixing the server computer name to the username
    - changing the password
    - copy/pasting the password from Notepad to make sure it was correct

    I find it very strange...

    EDIT: The computers are not on the same subnet as the server. The server is at my hosting provider. All computers, as all users, can reach the web app that is running on the server.
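    One hedged thing worth ruling out on computer B is a stale saved credential for user Y, since Windows 7 will silently substitute stored credentials for an RDP target (the server name below is illustrative):

        rem On computer B: list stored credentials, then remove any stale
        rem entry for the server
        cmdkey /list
        cmdkey /delete:TERMSRV/server.example.com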

    Read the article

  • How should a small company administer their web server?

    - by John Isaacks
    We currently have our website hosted by a small company that is actually a reseller for Rackspace. They act as our server administrators: they configured the servers and handle the backups, and if there is a problem, we call them and they fix it. We are growing and want to move away from our shared server to either a cloud or a dedicated server. I am leaning toward cloud myself, but I am open to either. The current company doesn't seem to want to offer us anything more than a shared hosting plan.

    I looked into cloud solutions at vps.net; with them I would have to be the server administrator myself. I am the website programmer, but administering the server is outside my comfort zone. vps.net does have a $99/month plan for Pro-Active Managed Support, but I am not sure if this is the equivalent of a server admin who is there when you need them. We could hire someone in-house, but I think that would be overkill for our needs.

    I am not exactly sure what we need. I do know we need as close to 100% uptime as we possibly can get, and we need the ability to add/remove/change the server configuration/software/etc. when needed (though changes shouldn't be very frequent once everything is set up right). Can someone point me in the right direction? What do other companies do?

    Read the article

  • Joining new DC to AD - DNS name does not exist

    - by Andrew Connell
    I had a DC fail on me recently, and while trying to add a new one to my domain I'm sensing I might have other issues. I'm a dev at heart and know just enough about AD to be dangerous, so I'm looking for some assistance.

    My working DC is RIVERCITY-DC12. I'm trying to promote RIVERCITY-DC14 as a DC in the RIVERCITY domain, but when I run DCPROMO, at the NETWORK CREDENTIALS step where I point to the name of the domain (rivercity.local), I get "An AD DC for the domain rivercity.local cannot be contacted", and in the details I see "The error was: DNS name does not exist".

    Looking at RIVERCITY-DC12, I can see DNS is working: I've been able to query it from other machines in my domain, and no errors are reported in the DNS category within the Event Viewer. When I checked the FSMO roles, RIVERCITY-DC12 holds all the listed roles. Not sure what I should do next or how to troubleshoot/investigate after searching around for a solution... ideas?

    Environment:
        Domain: rivercity (rivercity.local)
        Forest functional level: Windows 2000 (I'm more than happy to raise this)
        Windows Server 2008
        All servers are Windows Server 2008 R2 SP1 (fully patched)
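    Since the error is literally a DNS lookup failure, a hedged first check from RIVERCITY-DC14 is whether it can resolve the domain's AD locator records at all (its NIC should point at RIVERCITY-DC12 for DNS):

        rem From the new server: does the AD SRV locator record resolve?
        nslookup -type=SRV _ldap._tcp.dc._msdcs.rivercity.local

        rem And does the working DC itself resolve?
        nslookup rivercity-dc12.rivercity.local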

    Read the article

  • Site Goes Offline Every Day At Midnight - No One Knows Why

    - by HollerTrain
    0 down vote favorite Seems today a website I manage has been going online and offline between 12a and 12:25a. I have no idea what is causing the issue so I am seeking guidance on where to start. It is a Wordpress based site. So here is what I DO know: I have a pingdom account which alerts me when the site goes offline so we can see every day, like clockwork, the site goes on/off. At the time of the ups/downs I see a lot of strain on the memory usage. Look at the load average when the site is going online/offline (http://screencast.com/t/BRlfXkqrbJII). Then I ran this command to restart http (http://screencast.com/t/usVtYWZ2Qi) and the memory usage then goes down to this (http://screencast.com/t/VdTIy3bgZiQB). An hour after I restarted http, the site then went offline/online so restarting the http didn't do much help. When the site is going offline/online, I ran the top command and get this (http://screencast.com/t/zEwr7YQj3). Here is a top command when the site is at it's lowest (http://screencast.com/t/eaMfha9lbT - so this would be dubbged "normal"). Here is a bandwidth report (http://screencast.com/t/AS0h2CH1Gypq). The traffic doesn't seem to be that much (http://screencast.com/t/s7hrWNNic1K), but looking at my times the site is going up/down this may be one of the reasons? I have the dvp Nitro package at Media Temple (http://mediatemple.net/webhosting/nitro/). So at this point I would request some help in trying to figure out what the cause of this is, and how I can go about pinpointing this issue. ANY HELP is greatly appreciated.

    Read the article

  • Sparc Solaris 2.6 will not boot

    - by joshxdr
    I have a very old Sparc Solaris network that was working fine last week, but after a power outage none of the workstations will boot. The network looks like this:

        Host A: Solaris 2.6; shares /export/home to the network by NFS
        Host B: Solaris 8; runs the NIS server; mounts /export/home by NFS
        Host C: RHEL5; shares /share to the network by NFS; mounts /export/home by NFS

    I figured that the main problem was host A, since you need the home directories available for the other workstations to boot(?). Host A does not mount anything by NFS as far as I know. However, this workstation will NOT boot. The OBP boot-up sequence looks like this:

        Boot device <blah>
        configuring network interface le0
        Hostname <hostname>
        check file system <everything ok>
        check ufs filesystem <everything ok>
        NIS domainname is <name>
        starting router discovery
        starting rpc services: rpcbind keyserv ypbind done
        setting default interface for multicast: add net 224.0.0.0: gateway <hostname>
        <HANGS at this point>

    Is there some kind of debug mode so that I can get more detail as to why the workstation won't boot? Is my network structure inherently susceptible to power outages? Is there a way I can boot up to a command line so I can at least turn off the NFS mounting?
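    For the "debug mode" and "boot to a command line" questions, both exist at the OpenBoot ok prompt on Solaris of that vintage:

        ok boot -v    (verbose boot: prints detail about each startup step)
        ok boot -s    (single-user mode: root shell before NFS/NIS services start)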

    Read the article

  • Migrating to AWS Cloud with auto-scaling - where to put Redis and ElasticSearch?

    - by RobMasters
    I've been trying to research this topic but haven't found anywhere that recommends where to install services such as Redis and ElasticSearch when migrating to a cloud framework. I'm currently running a Symfony2 application on 2 static servers: one is running MySQL, and the other is the public-facing web server, which also has Redis and ElasticSearch running on it. Both of these servers are virtualised, but they're static in the sense that they can't replicate at present (various aspects are still dependent on the local filesystem).

    The goal is to migrate to AWS and use auto-scaling to be able to spin up and kill web servers as required, but I'm not clear on what I should put on each EC2 instance. Should they be single-responsibility only? I.e., set up individual instances for the web server(s), Redis, and ElasticSearch, most likely an RDS instance for MySQL, and only set up auto-scaling on the web server(s)?

    I don't foresee having to scale the ElasticSearch server anytime soon, as it's only driving the search functionality, but it's possible that Redis may need to be replicated at some point - should this be done manually? I'm not sure how it could be done automatically, as each instance needs to be configured to know about its master/slave(s) as far as I know.

    I'd appreciate advice on this. One more quick question while I'm here: how would I be able to deploy code changes when there are X web servers currently active? I'm using a Capifony deployment script (the Symfony2 version of Capistrano), which I think can handle multiple servers easily enough by specifying an array of :domain addresses... but how should this be handled when the number of web servers can vary?
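    For the deployment question, one hedged approach is to ask AWS for the instances currently in the auto-scaling group at deploy time and feed their addresses to Capifony; a sketch using the AWS CLI (the group name, region, and use of the CLI itself are assumptions about the setup):

        # List instance IDs currently in the web auto-scaling group
        # (group name and region are illustrative)
        aws autoscaling describe-auto-scaling-groups \
            --auto-scaling-group-names web-asg --region us-east-1 \
            --query 'AutoScalingGroups[0].Instances[].InstanceId' --output text

    The returned IDs can then be resolved to addresses with aws ec2 describe-instances and interpolated into the deployment script's server array.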

    Read the article

  • Apache can't be reached from outside my LAN

    - by Javier Martinez
    I fixed it in the PORTS TRIGGER menu of my router. Thank you anyway.

    I have a weird problem related to (I think) my cable router and my configured vhosts in Apache2. The point is, I can't access any of my configured vhosts from outside my LAN if I set the Apache HTTP port to 80 and add a NAT rule for it. Otherwise, if I set my Apache port to 81 (or anything else) with its respective NAT rule on my router, it works. My router is an ARRIS TG952S and I am using Apache/2.2.22 (Debian).

    ports.conf:

        NameVirtualHost *:80
        Listen 80

    vhost1.mydomain.net.conf:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName vhost1.mydomain.net
            ServerAlias vhost1.mydomain.net www.vhost1.mydomain.net

    vhost2.mydomain.net.conf:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName vhost2.mydomain.net
            ServerAlias vhost2.mydomain.net www.vhost2.mydomain.net

    DNS records (using FreeDNS) are:

        mydomain.net        --> pointing to another server
        vhost1.mydomain.net --> pointing to my server
        vhost2.mydomain.net --> pointing to my server

    iptables -L -n:

        Chain INPUT (policy ACCEPT)
        target                    prot opt source     destination
        fail2ban-apache-noscript  tcp  --  0.0.0.0/0  0.0.0.0/0    multiport dports 80,443
        fail2ban-apache           tcp  --  0.0.0.0/0  0.0.0.0/0    multiport dports 80,443
        fail2ban-ssh              tcp  --  0.0.0.0/0  0.0.0.0/0    multiport dports 22

        Chain FORWARD (policy ACCEPT)
        target     prot opt source     destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source     destination

        Chain fail2ban-apache (1 references)
        target     prot opt source     destination
        RETURN     all  --  0.0.0.0/0  0.0.0.0/0

        Chain fail2ban-apache-noscript (1 references)
        target     prot opt source     destination
        RETURN     all  --  0.0.0.0/0  0.0.0.0/0

        Chain fail2ban-ssh (1 references)
        target     prot opt source     destination
        RETURN     all  --  0.0.0.0/0  0.0.0.0/0

    Thank you.

    Read the article

  • How to avoid sshfs freezing?

    - by Andreas Hagen
    So the issue is this: I've installed sshfs on Ubuntu 12.04 and I'm trying to connect to a couple of remote servers. Initially the mount seems successful; sometimes GNOME even picks it up and displays the "new device found" box at the bottom of the screen. But from here on, not much works - or at least not any more. The first couple of times I connected it seemed to work fine, and I was able to transfer some files. Then I disconnected using:

        fusermount -u <folder>

    and after reconnecting a little later, the trouble started. Now, after executing:

        sshfs -o ServerAliveInterval=15 -o reconnect -C -o workaround=all -o idmap=user root@<host>:/ <folder>

    when I change directory into the mount point, the shell just freezes. Strangely, ls -al <folder> works when listing just the root of the remote system, but nothing more. Also, every file explorer I've tried freezes just like cd <folder>.

    To me it seemed like there was some kind of zombie thread or something hanging around my system, due to the fact that it did work the first time, so I have tried rebooting - but no luck. sshfs -V gives this:

        SSHFS version 2.3
        FUSE library version: 2.8.6
        fusermount version: 2.8.6
        using FUSE kernel interface version 7.12

    So yeah, any ideas?
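    To see where it stalls, sshfs can be run in the foreground with debugging enabled (these options exist in sshfs 2.x):

        # Run sshfs in the foreground with FUSE and sftp-level debug output,
        # then trigger the freeze (e.g. cd into the mount) from another shell
        sshfs -f -d -o sshfs_debug root@<host>:/ <folder>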

    Read the article
