Search Results

Search found 30217 results on 1209 pages for 'website performance'.

Page 166 of 1209

  • Android emulator performance on linux

    - by Rado
    I installed the Android SDK and Eclipse plugin on my laptop, but I was surprised to find that the emulator eats up 100% of one of my CPU cores. I have exactly the same setup on a desktop machine that does not have this issue. Both computers are running Arch Linux and both were updated yesterday. Granted, the desktop has better hardware than the laptop, but I was expecting to get closer to 50% CPU usage than 100% on the laptop. Both Android virtual devices have the same specs:

        CPU: ARM
        Target: Android 2.3.3 - API Level 10
        Skin: WVGA800
        SD Card: 512M
        hw.lcd.density: 240
        vm.heapSize: 24
        hw.ramSize: 256

    The laptop host has an Intel Core 2 T7200 @ 2 GHz CPU with 2 GB RAM. The desktop host has an AMD Phenom II X4 940 @ 3 GHz CPU with 8 GB RAM. The Android emulator uses only one core. Here are the CPU usage results.

    Laptop:

        Cpu0 : 22.8%us, 76.5%sy, 0.0%ni, 0.3%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
        Cpu1 : 11.2%us, 2.4%sy, 0.0%ni, 86.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 2055484k total, 1860304k used, 195180k free, 5276k buffers
        Swap: 2000088k total, 106872k used, 1893216k free, 350780k cached

        PID  USER PR NI VIRT RES  SHR  S %CPU %MEM TIME+   COMMAND
        2026 xyz  20 0  396m 207m 7192 R 100  10.3 4:11.58 emulator-arm

    Desktop:

        Cpu0 : 0.7%us, 0.0%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu1 : 1.3%us, 0.0%sy, 0.0%ni, 98.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu2 : 5.0%us, 1.3%sy, 0.0%ni, 91.9%id, 1.7%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu3 : 0.3%us, 0.3%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 7666324k total, 6506808k used, 1159516k free, 1650960k buffers
        Swap: 8988348k total, 0k used, 8988348k free, 2867300k cached

        PID  USER PR NI VIRT RES  SHR  S %CPU %MEM TIME+   COMMAND
        2811 xyz  20 0  392m 220m 6276 S 8    2.9  0:33.58 emulator-arm

    Is there any way I can improve the emulator performance on the laptop?

    [UPDATE] I ran the emulator with the same settings, on the same laptop, under Win7, and after starting up it didn't use 100% of a CPU core, unlike under Linux. Also, when I run the emulator from a terminal under Linux I get the following message, which I don't get on the desktop Linux host:

        Could not configure '/dev/hpet' to have a 1024Hz timer. This is not a fatal error, but for better emulation accuracy type:
        'echo 1024 > /proc/sys/dev/hpet/max-user-freq' as root.

    I'm not really familiar with rtc or hpet, but the max-user-freq setting doesn't seem to do anything; I still get the same warning.
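
    A minimal sketch of applying the fix the warning itself suggests (the echo command comes straight from the emulator's message; note the '>' redirect that web formatting often strips):

        # as root: raise the HPET max user frequency for the current boot
        echo 1024 > /proc/sys/dev/hpet/max-user-freq
        # verify the new value
        cat /proc/sys/dev/hpet/max-user-freq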

    Read the article

  • How to correctly partition usb flash drive and which filesystem to choose considering wear leveling?

    - by random1
    Two problems. First one: how do I partition the flash drive? I shouldn't need to do this, but I'm no longer sure my partition is properly aligned, since I was forced to delete and create a new partition table after GParted complained when I tried to reformat the drive from FAT to ext4. The naive answer would be "just use the defaults and everything will be alright". However, if you read the following links you'll know things are not that simple: https://lwn.net/Articles/428584/ and http://linux-howto-guide.blogspot.com/2009/10/increase-usb-flash-drive-write-speed.html

    Then there is also the issue of cylinders, heads and sectors. Currently I get this:

        $ sfdisk -l -uM /dev/sdd
        Disk /dev/sdd: 30147 cylinders, 64 heads, 32 sectors/track
        Warning: The partition table looks like it was made
        for C/H/S=*/255/63 (instead of 30147/64/32).
        For this listing I'll assume that geometry.
        Units = mebibytes of 1048576 bytes, blocks of 1024 bytes, counting from 0

        Device Boot Start End   MiB   #blocks  Id System
        /dev/sdd1   1     30146 30146 30869504 83 Linux

        $ fdisk -l /dev/sdd
        Disk /dev/sdd: 31.6 GB, 31611420672 bytes
        255 heads, 63 sectors/track, 3843 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00010c28

    So from my current understanding I should align partitions at 4 MiB (currently the partition starts at 1 MiB). But I still don't know how to set the heads and sectors properly for my device.

    Second problem: the file system. From the benchmarks I saw, ext4 provides the best performance, but there is the issue of wear leveling. How can I know whether my Transcend JetFlash 700's microcontroller provides wear leveling, or will I just be killing my drive faster? I've seen a lot of posts on the web saying "don't worry, the newer drives already take care of that", but I've never seen a single piece of evidence backing that up, and at some point people start mixing up SSD and USB flash drive technology. The safe option would be to go for ext2, but a series of tests that I performed showed horrible performance! These values are from a real scenario, not some synthetic test (42 files, 3,429,415,284 bytes copied to the flash drive):

        original FAT32:                 15.1 MiB/s
        ext4 after new partition table: 10.2 MiB/s
        ext2 after new partition table:  1.9 MiB/s

    Please read the links that I posted above before answering. I would also be interested in answers backed up with references, because a lot is said and re-said, but facts are lacking. Thank you for the help.
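
    A minimal sketch of creating a partition aligned at 4 MiB with parted (the device name and ext4 choice come from the question; the 4 MiB figure is the asker's own target - this wipes the drive):

        # recreate the partition table with the partition starting at 4 MiB
        parted /dev/sdd mklabel msdos
        parted -a optimal /dev/sdd mkpart primary ext4 4MiB 100%
        mkfs.ext4 /dev/sdd1
        # confirm the start sector is a multiple of 8192 (4 MiB / 512 B sectors)
        parted /dev/sdd unit s print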

    Read the article

  • How Could My Website Be Hacked

    - by Kiewic
    Hi! I wonder how this could happen. Someone deleted the index.php files from all my domains and put up his own index.php files with the following message:

        Hacked by Z4i0n - Fatal Error - 2009 [Fatal Error Group Br]
        Site desfigurado por Z4i0n
        Somos: Elemento_pcx - s4r4d0 - Z4i0n - Belive
        Gr33tz: W4n73d - M4v3rick - Observing - MLK - l3nd4 - Soul_Fly
        2009

    (The Portuguese lines mean "site defaced by Z4i0n" and "we are:".) My domain has many subdomains, but only the subdomains that can be accessed with a specific user were hacked; the rest weren't affected. I assumed that someone got in through SSH, because some of these subdomains are empty and Google doesn't know about them. But I checked the access log using the 'last' command, and it didn't show any activity over SSH or FTP on the day of the attack or in the seven days before. Does anybody have an idea? I have already changed my passwords. What do you recommend I do?

    UPDATE: My website is hosted at DreamHost. I assume they have the latest patches installed. But while I was investigating how they got into my server, I found weird things. In one of my subdomains there were many scripts for executing commands on the server, uploading files, sending mass emails and displaying compromising information. Those files had been there since last December! I have deleted them and I'm looking for more malicious files. Maybe the security hole is an old and forgotten PHP application. That application has a file upload form protected by a session-based password system. One of the malicious scripts was in its uploads directory. This doesn't seem like an SQL injection attack. Thanks for your help.
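
    A minimal sketch of the kind of sweep that can turn up further planted scripts (the date window and grep patterns are illustrative assumptions, not a complete web-shell signature list):

        # list PHP files changed in the last 180 days, newest first
        find ~ -name '*.php' -mtime -180 -printf '%T@ %p\n' | sort -rn
        # flag files using functions common in PHP backdoors
        grep -rlE 'base64_decode|eval *\(|passthru|shell_exec' ~ --include='*.php'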

    Read the article

  • Is there any way to know if your supposedly fully dedicated server is really a virtually resource-shared one?

    - by siran
    Hi, sometimes I feel my server isn't responding as smoothly as I would expect (it has an Intel(R) Xeon(TM) 2.80GHz quad-core CPU), given that, for example, the 'top' command reports a low load (< 0.5) and the CPUs are almost completely idle. I may have internet connectivity issues, so I don't really know if it's me or the server itself. Is there any kind of benchmarking script (or something analogous) I could run to see the actual performance of the server?
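
    A minimal sketch of two quick checks: signs of a hypervisor (the /proc/cpuinfo flag only appears on reasonably recent kernels), and a repeatable CPU timing to compare across runs and against known-dedicated hardware (the loop count is arbitrary):

        # a 'hypervisor' flag or virtual vendor strings suggest a VM
        grep -c hypervisor /proc/cpuinfo
        dmesg | grep -iE 'xen|kvm|vmware|qemu|virtual'
        # crude repeatable CPU benchmark; rerun at different times of day
        time bash -c 'i=0; while [ $i -lt 1000000 ]; do i=$((i+1)); done'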

    Read the article

  • SAS Array with or without expander

    - by tegbains
    Is it better to use a SAS expander backplane for 12 drives via one SAS connection, or a SAS backplane with 3 SAS connections? This is in terms of performance rather than expansion. The array will be set up with ZFS on OpenSolaris via an LSI SAS controller, serving as an iSCSI target. The two products being considered are the SuperMicro SuperChassis 826A-R1200LPB and the SuperChassis 826E2-R800LPB.
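
    For reference, a minimal sketch of the ZFS side of such a setup (the disk names and raidz2 layout are assumptions; the backplane choice doesn't change these commands):

        # pool of two 6-disk raidz2 vdevs across the 12 bays
        zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
                          raidz2 c0t6d0 c0t7d0 c0t8d0 c0t9d0 c0t10d0 c0t11d0
        # a zvol to export over iSCSI
        zfs create -V 1T tank/iscsi0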

    Read the article

  • managing a high traffic media sharing website

    - by Jordan Westerman
    I'm in the process of developing a website that I predict will generate a lot of traffic. The site will be similar to many other sites offering free media streaming: MP3s. We are going to start with a fairly minimal amount of media to share, but the basic idea is that artists will set up a profile page with music they have made available, and consumers will visit the page and listen to the music. We are starting with just a handful of artists, but I think this project will generate more and more artist pages. Eventually I'd like to set it up so consumers can create personalized playlists.

    How can I best prepare server space and bandwidth capabilities? I have a small team of web designers and programmers working on the site, as I am fairly illiterate when it comes to site management. As the ringleader of this organization, I am more or less looking for financial requirements and monthly burn rate estimates. I don't have a ton of capital to start with; I'm putting together a business plan and seeking investments. I have a game plan to grow fast enough to be successful, and slow enough to manage the financial growth requirements.

    Any questions I may have failed to ask myself? Is it realistic to start this project on a shared server and upgrade later? Any financial advice you think I can use? I really appreciate any advice given, as this is my first business venture. Thank you all in advance.

    Jordan Westerman, D.B.A. Badfish Productions, LLC

    Read the article

  • Hiding a Website from Search Engine Bots and Viewers by Disabling Default VirtualHost

    - by Basel Shishani
    When staging a website on a remote VPS, we would like it to be accessible to team members only, and we would also like to keep search engine bots off until the site is finalized. Access control by host, whether in iptables or Apache, is not desirable, because the accessing hosts can vary. After some reading of the Apache config docs and other SF postings, I settled on the following design, which relies on restricting access to requests that arrive with specific domain names. The default virtual host is disabled in the Apache config as follows, relying on Apache's behavior of using the first virtual host as the site default:

        <VirtualHost *:80>
            # Anything matching this should be silently ignored.
        </VirtualHost>
        <VirtualHost *:80>
            ServerName secretsiteone.com
            DocumentRoot /var/www/secretsiteone.com
        </VirtualHost>
        <VirtualHost *:80>
            ServerName secretsitetwo.com
            ...
        </VirtualHost>

    Then each team member can add the domain names to their local /etc/hosts:

        xx.xx.xx.xx secretsiteone.com

    My question is: is the above technique good enough to achieve the stated goals, especially restricting search engine bots, or could bots work around it?

    Note: I understand that mod_rewrite rules can be used to achieve a similar effect, as discussed here: How to disable default VirtualHost in apache2? - so the same question applies to that technique too. Also please note: the content is not highly secret; the idea is not to devise something hack-proof, so we are not concerned about traffic interception or the like. The idea is to keep competitors and casual surfers from viewing the content before it's released, and to prevent search engine bots from indexing it.
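
    A minimal sketch of verifying the behavior from a team member's machine (the placeholder IP is from the question; the expected responses are assumptions about the config above):

        # map the staging name to the VPS for this machine only
        echo 'xx.xx.xx.xx secretsiteone.com' | sudo tee -a /etc/hosts
        # a named request should hit the real vhost...
        curl -s -o /dev/null -w '%{http_code}\n' http://secretsiteone.com/
        # ...while a bare-IP request should land in the empty default vhost
        curl -s -o /dev/null -w '%{http_code}\n' http://xx.xx.xx.xx/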

    Read the article

  • Speed of TrueCrypt whole disk encryption

    - by Gareth
    I'm getting a new development laptop soon, and I'm thinking of using TrueCrypt to encrypt the whole disk. What kind of performance drop can I expect? 10%? 30%? more? Also, assuming the workload has an effect, would compiling/using Visual Studio be affected much? I cannot seem to find anything like this on the web.

    Read the article

  • Any reason not to disable Windows kernel paging?

    - by Nathaniel
    So I'm planning on eventually going from 1 GB to 2 GB RAM (the mobo max), and once I do I want to disable kernel paging, because I've heard it can give a performance boost (which I believe). Any reason not to do it, or any general thoughts about it? Edit: for clarification, this is not disabling RAM paging in general. This is disabling the paging of kernel memory (or at least parts of it, as Charlls noted).
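
    On Windows this is commonly done through the DisablePagingExecutive registry value; a minimal sketch (setting it to 1 keeps the pageable parts of the kernel in RAM; a reboot is required for it to take effect):

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" ^
            /v DisablePagingExecutive /t REG_DWORD /d 1 /f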

    Read the article

  • How to run a website domain without redirecting if IP is already used for another website? [duplicate]

    - by SSpoke
    This question already has an answer here: Hosting multiple distinct folders for distinct domains (1 answer)

    I bought a VPS host that gave me only one IP address, which I used for my first domain name, and that works without any problems. For my second domain name I can't use the same IP address, as it already points to the first domain. So I figured my only option was to use a GoDaddy-hosted iframe redirection, which redirects to a subfolder on my first domain; that worked so far. Now I'm trying to load PayPal from <?php header() ?> and I get a permission error because of that iframe:

        Refused to display 'https://www.paypal.com/cgi-bin/webscr?notify_url=&cmd=_cart&upload=1&business=removed&address_override=1' in a frame because it set 'X-Frame-Options' to 'SAMEORIGIN'.

    How do I avoid the iframe solution for my second domain without messing up my first domain? Somebody once told me it doesn't matter if you have one IP address, you can host multiple websites on it. How is that possible? The DNS doesn't seem to work off ports, as far as I know. Yes, I could host multiple websites in different folders, but that's not what I call hosting a real website; it has to be pointed at by a domain name, so that this iframe issue doesn't happen. My server configuration is httpd (Apache) on the CentOS 6 (Linux) operating system.
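
    Name-based virtual hosting is the standard way to serve many domains from one IP; a minimal sketch for the Apache 2.2 that ships with CentOS 6 (the domain names and paths are placeholders):

        # append to /etc/httpd/conf/httpd.conf, then restart Apache
        cat >> /etc/httpd/conf/httpd.conf <<'EOF'
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName firstdomain.com
            DocumentRoot /var/www/firstdomain
        </VirtualHost>
        <VirtualHost *:80>
            ServerName seconddomain.com
            DocumentRoot /var/www/seconddomain
        </VirtualHost>
        EOF
        service httpd restart

    Both domains' A records then point at the same IP; Apache routes each request by the Host header the browser sends, which is why one IP can serve many real domains.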

    Read the article

  • How to monitor the total number of SQL Server logins

    - by Shiraz Bhaiji
    We have a SQL Server 2005 that is the backend of a web application. The application is partly SharePoint and partly web services accessing the database via Entity Framework. In the performance monitor I am seeing an average of ca. 60 SQL logins per second (max 170), but the average logouts is less than 1. Where can I see the total number of SQL Server logins? Does anyone have an idea what could be causing this?
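
    A minimal sketch of reading the same counters from inside SQL Server (these are the standard 'General Statistics' counter names; despite the '/sec' suffix, the stored value is a running total that PerfMon differentiates, which is one way to get at the total number of logins):

        SELECT object_name, counter_name, cntr_value
        FROM sys.dm_os_performance_counters
        WHERE RTRIM(counter_name) IN ('Logins/sec', 'Logouts/sec', 'User Connections');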

    Read the article

  • Why are there many processes listed under the same title in htop?

    - by javanix
    Can anyone explain why there are sometimes 10 or 15 processes with the same title and "stats" listed in htop? I'm guessing these are multiple threads - but that many of them obviously couldn't all be running concurrently. Is there any sort of performance hit for a process using, say, 15 non-concurrent threads vs. 10 non-concurrent threads?
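
    A minimal sketch of confirming that those rows are threads of a single process (the process name is a placeholder, and this assumes only one instance is running):

        # htop shows userland threads as separate rows by default;
        # pressing 'H' inside htop toggles them off
        ps -o nlwp= -p $(pidof someprocess)           # thread count for the process
        ls /proc/$(pidof someprocess)/task | wc -l    # the same, via /proc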

    Read the article

  • Scanning a website for vulnerabilities

    - by Kristen
    I have found that the local school's website has a Perl calendar installed - this was years ago, it has not been used for ages, but Google has it indexed (which is how I found it) and it is full of Viagra links and the like... The program was by Matt Kruse; here are the details of the exploit: http://www.securiteam.com/exploits/5IP040A1QI.html

    I've got the school to remove that, but I think they also have MySQL installed, and I'm aware that out of the box there have been some exploits of admin tools / logins in old versions. For all I know they also have phpBB and the like installed... The school is just using some cheap, shared hosting; the HTTP response header I get is:

        Apache/1.3.29 (Unix) (Red-Hat/Linux) Chili!Soft-ASP/3.6.2 mod_ssl/2.8.14 OpenSSL/0.9.6b PHP/4.4.9 FrontPage/5.0.2.2510

    I'm looking for some means of checking whether they have other junk installed (quite possibly from way back, and now unused) that might put the site at risk. I'm more interested in something that can scan for things like the MySQL admin exploit than in open ports etc. My guess is that they have little control over the hosting space that they have - but I'm a Windows dev, so this *nix stuff is all Greek to me.

    I found http://www.beyondsecurity.com/ which looks like it might do what I want (within their evaluation :) ), but I worry about how to find out whether they are well known / honest - otherwise I will be tipping someone a wink about a domain name that may be at risk! Many thanks.
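
    A minimal sketch using nikto, a widely used open-source web scanner that checks for thousands of known-vulnerable scripts and outdated server versions (the hostname is a placeholder; scan only with the school's permission):

        nikto -h www.example-school.org
        # restrict the scan to a directory where old apps might live
        nikto -h www.example-school.org -root /cgi-bin/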

    Read the article

  • Why is my new Phenom II 965 BE not significantly faster than my old Athlon 64 X2 4600+?

    - by Software Monkey
    I recently rebuilt my 5-year-old computer. I upgraded all core components, in particular from an Athlon 64 X2 4600+ at 2.4 GHz with DDR2 800 to a Phenom II 965 BE (quad core) at 3.6 GHz with DDR3 1333 (actually 1600, but testing consistently detected memory errors at 1600). The motherboard is also much newer and better. The HDDs (x3), DVD writer and card reader are the same. The BIOS memory config is auto-everything except the base timing, which I overrode to 1T instead of 2T. The BIOS CPU multiplier is slightly overclocked to 3.6 GHz from the stock 3.4 GHz.

    I noticed compiling Java is slower than I expected. As it happens, I have some (single-threaded) Java pattern-matching code which is CPU- and memory-bound and for which I have performance numbers recorded on a number of hardware platforms, including my old system. So I did a test run on the new equipment and was stunned to find that the numbers are only slightly better than on my old system, about 25%.

    The data set it operates on is a 148,975-character array, which should easily fit in caches, but in any event the new CPU has larger caches all around. The system was, of course, otherwise idle for the test, and the test run is timed at 10 seconds to eliminate scheduling anomalies. A long while ago, when I upgraded only the memory from DDR2 667 to DDR2 800, there was no change in the performance of this test, which subjectively supports the idea that the test cycle does not need to (significantly) access main memory - though yes, it is creating and garbage-collecting a large number of objects in the process (low millions of matches are found for the pattern set).

    I am about 99.999% certain the code hasn't changed since I last ran it on 2009-03-17 - but I can't easily retest the old hardware, because it is currently in pieces on my work-bench waiting to be built into a new computer for my kids. Note that Windows (XP) reports a CPU speed of 795 MHz unless I have something running. With stuff running it seems to jump all over the place each time I use ALT-Pause to display the system properties - everywhere from 795 MHz to 3.4 GHz.

    So why might my shiny new hardware be under-performing so badly?

    EDIT: The old memory was Mushkin DDR2 800 with timings set to auto, which should have been 5-5-5-12. The new memory is Corsair DDR3 1600, running at 1333 with timings also on auto, which are 9-9-9-21. In both cases they are a paired set of dual-channel DIMMs. I was waiting to ensure my system was stable before tweaking memory timings.

    Read the article

  • No architecture vs architecture-specific binaries

    - by Aaron
    From what I understand, the noarch suffix means that a package is architecture-independent and should work universally. If this is the case, why should I install architecture-specific packages at all? Why not just go straight for noarch? Are there optimizations in the x86 or x64 binaries that aren't found in the noarch binaries? What's best for high-performance applications? Folding@Home does this with their controller.
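
    Arch-specific packages exist because they contain compiled machine code; a package can only be noarch when there is nothing to compile (scripts, data, docs). A minimal sketch of inspecting this (the package names are common examples and an assumption about what's installed):

        # compiled packages report a real architecture; pure-data ones report noarch
        rpm -q --qf '%{NAME} %{ARCH}\n' bash crontabs
        # typical output:  bash x86_64   /   crontabs noarch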

    Read the article

  • Website does not resolve in browser but traceroute is successful

    - by Colum
    I am trying to figure out an issue. My internet is working fine, but this one website is not resolving. It works via a proxy, and traceroute works:

         1  192.168.1.1 (192.168.1.1)  4.205 ms  0.568 ms  0.510 ms
         2  * * *
         3  67.59.255.13 (67.59.255.13)  10.583 ms  7.949 ms  7.557 ms
         4  67.59.255.61 (67.59.255.61)  10.256 ms  9.576 ms  13.083 ms
         5  64.15.8.126 (64.15.8.126)  9.943 ms  11.929 ms  11.452 ms
         6  64.15.0.217 (64.15.0.217)  14.655 ms  14.092 ms  13.771 ms
         7  64.15.0.118 (64.15.0.118)  33.201 ms  34.875 ms  36.544 ms
         8  xe-6-0-3.ar1.ord1.us.nlayer.net (69.31.111.169)  34.027 ms  34.957 ms  34.231 ms
         9  ae1-30g.cr1.ord1.us.nlayer.net (69.31.111.133)  82.683 ms  35.138 ms  37.592 ms
        10  xe-3-0-0.cr2.iad1.us.nlayer.net (69.22.142.26)  41.657 ms  34.063 ms  34.519 ms
        11  ae2-30g.ar2.iad1.us.nlayer.net (69.31.31.186)  35.780 ms  36.361 ms  33.968 ms
        12  as33597.xe-3-0-7.ar2.iad1.us.nlayer.net (69.31.30.230)  35.086 ms  as33597.xe-3-0-7.ar2.iad1.us.nlayer.net (69.31.30.234)  38.031 ms  as33597.xe-3-0-7.ar2.iad1.us.nlayer.net (69.31.30.230)  36.833 ms
        13  cr1.iad2.inforelay.net (66.231.176.246)  32.595 ms  cr2.iad1.inforelay.net (66.231.176.10)  31.771 ms  cr1.iad2.inforelay.net (66.231.176.246)  32.622 ms
        14  cr1.iad2.inforelay.net (66.231.176.246)  32.956 ms  33.625 ms !X  41.058 ms
        15  * cr1.iad2.inforelay.net (66.231.176.246)  35.312 ms !X *
        16  * cr1.iad2.inforelay.net (66.231.176.246)  32.814 ms !X *
        17  cr1.iad2.inforelay.net (66.231.176.246)  35.459 ms !X  *  53.137 ms !X

    Ping returns this:

        Request timeout for icmp_seq 0
        Request timeout for icmp_seq 1
        Request timeout for icmp_seq 2
        Request timeout for icmp_seq 3
        Request timeout for icmp_seq 4
        Request timeout for icmp_seq 5
        Request timeout for icmp_seq 6

    But what I cannot figure out is why my browsers (Firefox, Safari, Opera) cannot resolve the domain. I am on a WiFi connection. What could be the problem? BTW I am on a Mac (10.6.5).
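
    A minimal sketch separating name resolution from reachability (example.com stands in for the failing domain; 66.231.176.246 is the last responding hop in the traceroute above):

        dig example.com +short              # does DNS return an address at all?
        curl -v http://example.com/         # does an HTTP request get any response?
        curl -v http://66.231.176.246/      # try the final hop's IP directly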

    Read the article

  • SQL Server Reporting Services - website blank, builder works

    - by Keith
    We have a few reports in SQL Server Reporting Services. For some reason, when we run the report from the website, it doesn't return any data. When I run the same report from Report Builder, it returns data. I looked in the logs and the only errors I could find are:

        ReportingServicesService!library!8!6/15/2012-08:12:33:: i INFO: Current DB Version Unknown, Instance Version C.0.8.54.
        ReportingServicesService!library!8!6/15/2012-08:12:33:: e ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.InvalidReportServerDatabaseException: The version of the report server database is either in a format that is not valid, or it cannot be read. The found version is 'Unknown'. The expected version is 'C.0.8.54'. To continue, update the version of the report server database and verify access rights., ;Info: Microsoft.ReportingServices.Diagnostics.Utilities.InvalidReportServerDatabaseException: The version of the report server database is either in a format that is not valid, or it cannot be read. The found version is 'Unknown'. The expected version is 'C.0.8.54'. To continue, update the version of the report server database and verify access rights.
        ReportingServicesService!library!8!6/15/2012-08:12:33:: e ERROR: Exception caught while starting service. Error: Microsoft.ReportingServices.Diagnostics.Utilities.InvalidReportServerDatabaseException: The version of the report server database is either in a format that is not valid, or it cannot be read. The found version is 'Unknown'. The expected version is 'C.0.8.54'. To continue, update the version of the report server database and verify access rights.

    I'm not really sure why it would be a different version. It's all SQL Server 2008 R2 and I haven't made any changes to it since it's been running.

    Read the article

  • Any simple tools for re-loading a specified web page periodically (Windows)?

    - by Luke
    I've got a website that exhibits slow performance on first load, and I would like to load it every 5 minutes or so to keep the cache fresh. Are there any simple tools to accomplish this? Scheduled Tasks doesn't have quite the time resolution I need. The tricky thing is that this site uses Windows authentication, so a wget script won't work. I'm also worried about instantiating a bunch of copies of Internet Explorer or blindly killing iexplore.exe tasks.
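
    A minimal sketch using curl, whose --ntlm option speaks NTLM (the common case for "Windows authentication"); the URL and credentials are placeholders, and this assumes curl is installed on the box:

        :: keep-warm loop for cmd.exe; Ctrl+C to stop
        :loop
        curl --ntlm -u DOMAIN\user:password -s -o NUL http://intranet/slow-page
        timeout /t 300
        goto loop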

    Read the article

  • Copy a website and preserve the file & folder structure

    - by DrStalker
    I have an old web site running on an ancient version of Oracle Portal that we need to convert to a flat HTML structure. Due to damage to the server we are not able to access the administrative interface, and even if we could, there is no export functionality that works with modern software versions.

    It would be enough to crawl the website and have all the pages and images saved to a folder, but the file structure must be preserved; that is, if a page is located at http://www.oldserver.com/foo/bar/baz/mypage.html then it needs to be saved to /foo/bar/baz/mypage.html so that the various JavaScript bits will continue to function.

    None of the web crawlers I've found have been able to do this; they all want to rename the pages (page01.html, page02.html, etc.) and break the folder structure. Is there any crawler out there that will recreate the site structure as it appears to a user accessing the site? It doesn't need to redo any of the content of the pages; once rehosted, the pages will all have the same names they did originally, so links will continue to work.
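
    A minimal sketch with wget, whose recursive mode mirrors pages under their original paths by default (the URL comes from the question; --page-requisites also pulls images and scripts):

        wget --recursive --level=inf --page-requisites --no-parent \
             --directory-prefix=mirror http://www.oldserver.com/
        # pages land under mirror/www.oldserver.com/foo/bar/baz/mypage.html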

    Read the article

  • How to clear Windows disk read cache?

    - by Sebastiaan Megens
    For performance testing I need to clear Windows' disk read cache. I tried googling but I couldn't find anything other than rebooting or other manual stuff. Before I give in and do that, I'd like to know if anyone knows of a way to clear Windows disk read cache. I'm testing on Windows 7, but I'm also interested in Windows XP solutions.

    Read the article

  • Can anybody recommend a Windows system monitoring tool similar to iPulse for the Mac?

    - by John MacIntyre
    Occasionally, my PC grinds to a halt, and by the time I get any monitoring tools open (don't forget my PC is slow at this point), performance has picked up a bit. A friend recently told me he uses iPulse, which is awesome since it's always running, and you can just glance at it when there's an issue to see what is happening. Unfortunately it's only for the Mac. Does anybody know of a good Windows system monitoring tool similar to iPulse for the Mac?

    Read the article

  • Can I enable discards on a LUKS-encrypted ssd drive in RHEL6 (and do I need to)?

    - by Dan Nestor
    I have a RHEL 6.4 workstation running on a LUKS-encrypted LV that resides on an SSD. I found Red Hat documentation stating that dm_crypt does not currently support TRIM passthrough, but I have also found other sources stating the opposite (albeit for other distributions), and even that discards are not needed for recent SSD drives, which use some sort of automatic garbage collection. So: 1) Can I enable TRIM/discards with my setup? 2) Do I need to, for optimal disk performance? Thanks for your thoughts.
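
    For reference, a minimal sketch of how discards are enabled on dm-crypt where the kernel supports it (upstream kernel 3.1+ and cryptsetup 1.4+; whether RHEL 6.4's kernel carries a backport is exactly the open question here, and the device name is a placeholder):

        # open the LUKS device with discards allowed
        cryptsetup luksOpen --allow-discards /dev/sda2 luks-ssd
        # periodic TRIM as an alternative to the 'discard' mount option
        fstrim -v /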

    Read the article

  • New Static Website with Hosted DNS alternating 502, 503 and Page Does Not Exist Errors

    - by Dave
    This has become an increasingly frustrating ordeal. I'm mostly a web developer, so forgive me if I am using improper terminology here. I have a client who had purchased a domain at JustHost. We built him a website and have it on our own server space.

    Now, I'm mostly used to dealing with GoDaddy, where it is simple enough to manage DNS records and point the A record to our server IP, where Apache on our end handles the domains via name-based virtual hosts. But for some reason, having set this up with JustHost, when attempting to go to the domain name I either get a 502 or 503 error or "webpage does not exist". I know the basic functionality of the webpage must be working, because I can access the index etc. straight through my server's www data (i.e. [server-ip]/website_folder).

    I was on the phone with JustHost technical support for over three hours yesterday, and the best I could get was "That's really weird...". I've checked my logs and nothing seems to be coming through to my end. Does anybody have an idea of what's going on here? I would love for it to be a problem on my end, because JustHost doesn't seem capable of helping further. Any help is greatly appreciated, thanks. I forgot to mention that we have several other sites up and running and completely accessible.
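
    A minimal sketch of checks that localize this kind of failure (the domain and IP are placeholders):

        dig +short clientdomain.com A     # does it resolve to your server's IP?
        dig +short clientdomain.com NS    # or is it still on JustHost's nameservers?
        # test your vhost directly, bypassing DNS entirely
        curl -s -D - -o /dev/null -H 'Host: clientdomain.com' http://YOUR.SERVER.IP/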

    Read the article

  • Are there any reasons to duplicate a table in the same database?

    - by bob
    Let's say we have several MySQL servers, one master and some slaves, and a member table which contains more than 5,000,000 people. Are there any reasons (performance, atomicity, etc.) to use duplicate tables like member_1, member_2, member_3 and then switch between them randomly when operating on the data (especially for SELECT queries)?
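
    For concreteness, a minimal sketch of the duplication scheme the question describes (the table and column names are assumptions):

        -- create identical copies of the table
        CREATE TABLE member_1 LIKE member;
        INSERT INTO member_1 SELECT * FROM member;
        -- the application then picks a copy at random for reads:
        SELECT * FROM member_2 WHERE member_id = 12345;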

    Read the article
