Search Results

Search found 8284 results on 332 pages for 'trusted sites'.

Page 216/332

  • Server Requirements and Cost for an Android Application [duplicate]

    - by CagkanToptas
    This question already has answers here: "How do you do load testing and capacity planning for web sites?" (3 answers) and "Can you help me with my capacity planning?" (2 answers). I am working on a project that is an Android application. For my project proposal, I need to calculate the server requirements to handle the traffic described below, and if possible I would also like to know the approximate cost of such a server. These are the maximum expected values for the calculation: the database will be MySQL (average DB service time is 100-110 ms on my computer [i5, 4 GB RAM]); each request transfers about 150 KB of data on average; total user count: 1M; active user count: 50k; estimated requests/sec per active user: 0.06; total expected requests/second to the server: ~5000. I am expecting this traffic between 20:00 and 1:00 every day, and for the rest of the day the values drop to about 1/10 of that. Is there any solution to this, e.g. increasing server capacity during a specific time period every day to reduce cost?
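    A rough back-of-the-envelope check of the figures above may be useful (a sketch in Python; all numbers are the poster's estimates, "150Kb" is read as kilobytes, and the request rate derived from the per-user figure comes out lower than the stated ~5000 req/s, so both are shown):

        # Peak load estimate from the values given in the question.
        active_users = 50_000            # poster's estimate
        req_per_user_per_sec = 0.06      # poster's estimate
        payload_kb = 150                 # average data per request, read as kilobytes

        derived_rps = active_users * req_per_user_per_sec   # 3,000 req/s from these inputs
        stated_rps = 5_000                                   # the total quoted in the question

        for label, rps in (("derived", derived_rps), ("stated", stated_rps)):
            bandwidth_mbps = rps * payload_kb * 8 / 1_000    # KB/s -> Mbit/s (decimal units)
            print(f"{label}: {rps:,.0f} req/s, roughly {bandwidth_mbps:,.0f} Mbit/s outbound at peak")

    Either way the outbound bandwidth, rather than the database, is likely to dominate the cost, which supports the idea of scaling capacity up only for the 20:00-1:00 window on a provider that bills by the hour.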

    Read the article

  • Exchange 07 to 07 mailbox migration using local continuous replication

    - by tacos_tacos_tacos
    I have an existing Exchange server ex0 and a fresh Exchange server ex1, both 2007 SP3. The servers are in different sites, so users cannot access mailboxes on ex1 because, from my understanding, a standalone CAS is required for this. I am thinking of doing the following: enable local continuous replication of the storage group on ex0 to a mapped drive that points to the corresponding storage group folder on ex1. At some point when the replication is done (the number of users and the volume of mail are small), say late at night on a weekend, disable the CAS on ex0 (or otherwise redirect requests server-side from ex0 to ex1) AND change the public DNS name of the CAS so that it points to ex1. Will my plan work? If not, please explain what I can do to fix it.
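    As a small aside, the DNS part of the cutover is easy to verify from a client once the record has been changed; a minimal sketch with hypothetical names and addresses:

        import socket

        cas_name = "mail.example.com"   # hypothetical public CAS hostname
        ex1_addr = "203.0.113.20"       # hypothetical address of ex1

        resolved = socket.gethostbyname(cas_name)
        print(f"{cas_name} resolves to {resolved}")
        print("cutover looks complete" if resolved == ex1_addr else "still pointing at the old server")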

    Read the article

  • Tool or website or process to display previews of website templates residing in archive files?

    - by Tony_Henrich
    I have hundreds of website templates in rar or zip files. To view any of them, I have to extract the archive to a temporary folder and then view the template there. Doing this manually for each template is time-consuming. Is there a tool that lets me quickly preview the templates in the archives? OR (if I extract each template into a separate folder off a master folder) a web app that enables previewing of each template by automatically creating a link, or a preview image of its home page (similar to template sites)? OR any method to preview the templates in the fastest, most convenient way possible?
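    For the "extract into folders off a master folder and link to each one" route, a short script is enough to build a clickable index; a minimal sketch that handles .zip only (.rar would need the third-party rarfile module plus an unrar binary), with hypothetical folder names:

        import os
        import zipfile

        ARCHIVE_DIR = "templates_zipped"     # hypothetical folder of downloaded archives
        MASTER_DIR = "templates_extracted"   # hypothetical master folder for previews

        os.makedirs(MASTER_DIR, exist_ok=True)
        links = []

        for name in sorted(os.listdir(ARCHIVE_DIR)):
            if not name.lower().endswith(".zip"):
                continue
            target = os.path.join(MASTER_DIR, os.path.splitext(name)[0])
            with zipfile.ZipFile(os.path.join(ARCHIVE_DIR, name)) as zf:
                zf.extractall(target)
            # Link to the first index.html found inside the extracted template, if any.
            for root, _dirs, files in os.walk(target):
                if "index.html" in files:
                    rel = os.path.relpath(os.path.join(root, "index.html"), MASTER_DIR)
                    links.append(f'<li><a href="{rel}">{name}</a></li>')
                    break

        with open(os.path.join(MASTER_DIR, "index.html"), "w") as out:
            out.write("<ul>\n" + "\n".join(links) + "\n</ul>\n")

    Serving the master folder with any static web server then gives a one-page list of previews.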

    Read the article

  • How to maintain a VPS server?

    - by clorz
    Assuming I have no experience running them, what would be a good maintenance routine for a VPS running a mail server and LAMP with a couple of sites? I've had one for quite a while now, but I've been doing what felt right without any guidance. It's Ubuntu Server, and the only thing I do is ssh in once a month and run apt-get update, apt-get upgrade. Last year it suggested upgrading the distro, which I did. I waded through a bunch of diffs, broke the mail server in the process, and fixed it later, so it turned out fine. Was that the right thing to do, or should I have stuck with the old version and just updated the packages? Would the routine be any different if it were Fedora?

    Read the article

  • Can Apache be configured to specify more than one docroot per virtualhost?

    - by syn4k
    I have a vhost which specifies:

        <VirtualHost *:80>
            DocumentRoot "/private/var/www/html/cms/sites/"
            ServerName localhost.com
        </VirtualHost>

    I would like to know if localhost.com can also point to /private/var/www/html/wordpress/. This seems like a no-brainer, but Apache is like black magic; these things are always possible. Anyway, I already know that I could specify a new ServerName entry and set a new docroot. The problem is that both directories need to be available as roots. If I need to provide more info, I will gladly do so.

    Read the article

  • Trying to build a history of popular laptop models

    - by John
    A requirement on a software project is that it should run on typical business laptops up to X years old. However, while I can normally find out when a given model number was sold, I can't find data to do the reverse... for a given year, I want to see which model numbers were released or discontinued. We're talking big-name, popular models like Dell Latitude/Precision/Vostro, ThinkPads, HP, etc. The data for any single model is out there, but getting a timeline is proving hard. Sites like Dell's are (unsurprisingly) geared around current products, and even Wikipedia isn't proving very reliable. You'd think this data must have been collated by manufacturers or enthusiasts, surely?

    Read the article

  • Why do I need to add my application pool identity to the IIS_IUSRS group?

    - by smcolligan
    I'm setting up a .NET v4.0 web application on a Windows 2008 R2/IIS 7.5 server that uses a domain account for the application pool identity. When I access the site, I get the following error: The current identity () does not have write access to 'C:\Windows\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files'. According to this: http://learn.iis.net/page.aspx/140/understanding-built-in-user-and-group-accounts-in-iis/ the identity of the worker process is added to the IIS_IUSRS group when the process starts. This seems to work fine for the existing .NET v2.0 applications I have running on the same server (I have not had to add their domain-account application pool identities to the IIS_IUSRS group), but it does not seem to be the case for the first .NET v4.0 web application I'm setting up. Once I add the identity to the group, everything works fine. I suspect something is not configured correctly and that this is what forces me to do it. I would like to understand this before rolling out more sites/servers. Thanks in advance for your help...

    Read the article

  • Commercial template (theme) sellers that grant a license to resell the template?

    - by Tony_Henrich
    I once came across a website template seller that granted buyers the right to resell the template, but I forgot to bookmark it. All the commercial template vendors I know of (Template Monster, BoxedArt, Dreamtemplate, etc.) don't allow this. Does anyone know of any sellers that do? Some of them allow you to resell the template if it's packaged for different host software; for example, if I buy a website template, I am allowed to sell it as a WordPress or Joomla or Drupal theme. I am aware of public-domain and free template sites, but I am not looking for those. I am talking about templates which are being sold.

    Read the article

  • IIS7 - multiple ports for websites, some working, some not.

    - by glasnt
    I have multiple IIS7 websites hanging off one IP, using different ports. All three sites use Z.A.B.C:XX, where XX is {100, 200, 300}.* There are no web.config settings that would stop :300 from working, and the bindings are set correctly. I can even change the ports so that 200 becomes 300, but the original :300 site still doesn't work. They are all addressed by IP, so it's not DNS. There are no SSL setting differences between them, and I can't see anything in metabase.xml that would make one behave differently from another. Are there any other settings in IIS7 that I might not be finding that would fix the issue? (* not the real values)
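    A plain TCP connect test separates "nothing is listening on that port" from "the site is bound but misbehaving"; a minimal sketch with placeholder values:

        import socket

        SERVER_IP = "203.0.113.10"   # placeholder for the real address
        PORTS = [100, 200, 300]      # placeholder ports from the question

        for port in PORTS:
            try:
                with socket.create_connection((SERVER_IP, port), timeout=5):
                    print(f"port {port}: TCP connection accepted (something is listening)")
            except OSError as exc:
                print(f"port {port}: connection failed ({exc})")

    If :300 accepts the connection but never returns a page, the problem is in the site or its binding; if the connection itself fails, look at the firewall or at whether the binding ever took effect.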

    Read the article

  • Wifi and eth behavior

    - by r00ster
    I have a wireless router, a 150M Wireless Lite N Router, model no. TL-WR740N / TL-WR740ND. Normally, when I'm connected to the local network using eth0, I can ping other machines by issuing ping name. When I'm connected through wifi I have to issue ping name.domain.com. The machine is only visible on the intranet. How can I achieve the same behavior over wifi? The second problem is that I cannot connect to some external sites through wifi, while through eth everything is OK. I guess that is related to some port forwarding, but I'm not sure. How can I resolve this issue? EDIT: I'm using Linux Mint.
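    The first symptom usually comes down to the wireless connection not getting the DNS search domain, so short names never expand to the FQDN. A quick check of how the two forms resolve, a minimal sketch with hypothetical names:

        import socket

        short_name = "name"          # hypothetical short hostname from the question
        fqdn = "name.domain.com"     # the fully qualified form that works over wifi

        for host in (short_name, fqdn):
            try:
                print(f"{host} -> {socket.gethostbyname(host)}")
            except socket.gaierror as exc:
                print(f"{host} -> resolution failed ({exc})")

    If only the FQDN resolves while on wifi, adding the search domain to the wireless connection (or checking what the router hands out over DHCP) is the place to start; the external-sites issue is likely a separate matter.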

    Read the article

  • Migration of a physical server to a virtual solution: what do I have to do?

    - by bibarse
    Hello, I'm new to this forum, so please forgive my inexperience and my limited English. I have been a trainee at a company for one month, and my mission is to migrate 3 physical servers to a virtualization technology. The company produces e-learning software, so there is a lot of data such as videos, Flash files, and compressed (zip) archives. Here is a rough inventory of the servers: OS: one Debian and two Red Hat; Apache, PHP/MySQL, Sendmail/Dovecot, and Webmin with Virtualmin templates to create the web sites dynamically, because there is no sysadmin ... The future provider will be responsible for securing, updating, and creating the virtual machines (outsourcing), on a Red Hat OS. So I would like your help in choosing a virtualization technology (I prefer KVM or Red Hat RHEV; VMware is expensive), in evaluating the hardware needs (planning for 4 or 5 years of growth), and in putting together a good plan so that nothing is forgotten. Thank you for your responses.

    Read the article

  • How to report abuse to website hosting company (GoDaddy) [closed]

    - by lgratian
    I'm not sure if this is the right place to ask such a question... Let's say that a website posted a picture of me, without my consent, and I want it removed (it's something private and could compromise my career if seen by someone who shouldn't see it). I sent them an email asking nicely that they remove it, but they didn't respond and the picture is still there. Using 'whois' I found that the website is hosted by GoDaddy. Is there a way (an email address, for example) to report to GoDaddy that one of the sites they're hosting is doing something illegal and to get them to remove the photo? I searched their site and found nothing about such a thing. Thanks in advance!

    Read the article

  • IP recognition is different

    - by Cougar
    Some months ago I bought a dedicated server from a US datacenter (http://www.dacentec.com/). My IPs look like this: 162.248.243.blo blo blo. When I check my IP on this site: http://whatismyipaddress.com/ it shows me: ISP: Dacentec, Services: None Detected, Country: United States. Why "Services: None Detected", and what did they do with this IP block? Also, when I open some sites like Google, Yahoo, etc., they show me India or China as my country. What is the problem with these IPs, and why don't I have a stable location for them?
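    Third-party geolocation databases often lag behind reassignments of an IP block, which is the usual cause of this kind of mismatch. The registration data itself can be checked straight from the regional registry over the whois protocol (TCP port 43); a minimal sketch querying ARIN for the network address of the block mentioned above:

        import socket

        WHOIS_SERVER = "whois.arin.net"   # registry for North American address space
        query = "162.248.243.0"           # network address of the block in the question

        with socket.create_connection((WHOIS_SERVER, 43), timeout=10) as sock:
            sock.sendall((query + "\r\n").encode("ascii"))
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)

        print(b"".join(chunks).decode("utf-8", errors="replace"))

    If the registry record shows the right organization and country while the lookup sites do not, the remedy is asking those geolocation providers (or the hosting company) to submit a correction, not anything on the server itself.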

    Read the article

  • Shockwave not responding in Google Chrome

    - by Nithish Inpursuit Ofhappiness
    When browsing in Google Chrome, I get an error message: "The exception unknown software exception (0x40000015) occurred in the application at location 0x0025bf18." Later I get a pop-up saying that the Shockwave plugin is not responding. I've tried uninstalling and reinstalling Chrome (even clearing my browsing data while uninstalling) and the error still persists. There are two instances of Shockwave listed when I type chrome://plugins, and I've tried disabling each of them in turn as well. This prevents me from browsing a lot of sites, including FB and Stack Overflow, through my favorite browser. I don't have any problem when I use Internet Explorer. Please help!

    Read the article

  • USB flash drive showing empty but half of the capacity is in use

    - by tamakisquare
    Not sure if I should post my question on Super User, but it looks like the most appropriate place among all the StackExchange sites. I have a 16GB Kingston DataTraveler USB drive. When I tried to use it this morning, it showed nothing on it, yet its details showed that half of the capacity was in use. I tried it with OS X, Ubuntu, and Windows 7 and the results were the same. I tried to create a new folder and that worked, so apparently the drive is working but somehow not showing my previously stored data. Note that I was still using the drive last night and there weren't any problems. Following @rob's suggestion, du -h gave me:

        16K     ./.Trashes
        960K    ./.Spotlight-V100/Store-V1/Stores/2620683B-A38B-42F4-A247-45CAF4826ADE
        976K    ./.Spotlight-V100/Store-V1/Stores
        1008K   ./.Spotlight-V100/Store-V1
        1.0M    ./.Spotlight-V100
        1.1M

    And df -h gave me:

        /dev/sdb1        15G  7.9G  7.1G  53% /media/KINGSTON

    confirming what I reported. Anyone got a clue/answer to this issue? Thanks.
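    Since df reports 7.9G used while du over the visible entries finds only about a megabyte, one quick check is to total everything under the mount point, dotfiles included, and see whether hidden entries account for the space. A minimal sketch, assuming the Linux mount point shown in the df output:

        import os

        MOUNT = "/media/KINGSTON"   # mount point from the df -h output above

        total = 0
        by_top = {}
        for root, _dirs, files in os.walk(MOUNT):
            for name in files:
                path = os.path.join(root, name)
                try:
                    size = os.path.getsize(path)
                except OSError:
                    continue  # skip unreadable entries
                total += size
                top = os.path.relpath(path, MOUNT).split(os.sep)[0]
                by_top[top] = by_top.get(top, 0) + size

        print(f"total (including hidden files): {total / 1024**2:.1f} MiB")
        for top, size in sorted(by_top.items(), key=lambda kv: -kv[1]):
            print(f"  {top}: {size / 1024**2:.1f} MiB")

    If this also only finds a megabyte or so, the used space is not held by any reachable file, and a filesystem check (fsck.vfat on Linux, chkdsk on Windows) is the more likely next step.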

    Read the article

  • Windows 2008 Server and Red Hat with only one IP address: can Windows route the traffic?

    - by paulcap1
    I have two home server VMs set up: Windows Server 2008 on port 80 and CentOS/Red Hat on port 8080. Each has a separate GoDaddy domain with an A record pointing to it, but I can't simply point both domains at the same IP (I only have one WAN IP address at home), so one of my domains is forwarded to my IP:8080. My question: is it possible for my Windows server to redirect a certain domain name to my Linux server on port 8080? That is, I would have mysite1.com going to Windows and mysite2.com also going to the Windows server, but Windows would redirect mysite2.com traffic to the Linux IP address:8080. I want to access both sites from work, and my work firewall is strict and will not allow domain forwarding from GoDaddy.
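    What is being described is routing by the HTTP Host header. On the Windows side this is normally done with the IIS URL Rewrite and Application Request Routing extensions rather than custom code; the sketch below is only a minimal Python illustration of the idea (the hostnames, LAN address, and listening port are assumptions), not a production proxy:

        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.request import Request, urlopen

        # Hypothetical mapping: requests whose Host header is mysite2.com are forwarded
        # to the Linux VM's LAN address on port 8080.
        BACKENDS = {"mysite2.com": "http://192.168.1.20:8080"}

        class HostRoutingProxy(BaseHTTPRequestHandler):
            def do_GET(self):
                host = self.headers.get("Host", "").split(":")[0]
                backend = BACKENDS.get(host)
                if backend is None:
                    # Not a forwarded host; a real setup would serve the local site here.
                    self.send_error(404, "no backend configured for this host")
                    return
                upstream = urlopen(Request(backend + self.path, headers={"Host": host}), timeout=10)
                body = upstream.read()
                self.send_response(upstream.status)
                for key, value in upstream.getheaders():
                    if key.lower() not in ("transfer-encoding", "connection", "content-length"):
                        self.send_header(key, value)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("0.0.0.0", 8081), HostRoutingProxy).serve_forever()

    The key point is that both domains can resolve to the same WAN IP and port; the receiving server tells them apart by the Host header and proxies one of them onward.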

    Read the article

  • Is it safe/wise to run Drupal alongside bespoke business web apps in production?

    - by Vaze
    I'm interested to know the general community feeling about the safety of running Drupal alongside bespoke, business-critical ASP.NET MVC apps on a production server. Previously my employer's Drupal-based 'visitor website' was hosted as a managed service with a third party, while the LoB sites were hosted in-house. That third party is no longer available, so I'm considering my options: bring Drupal in-house, or find another third party. My concern is that I have little experience with Drupal administration (and no experience securing it) and that adding PHP to my IIS server poses a security risk. Is there a best practice that I can follow in this situation?

    Read the article

  • NGINX Document Location

    - by GLaDOS
    I want to be able to access a given URL, example.com/str. The problem is that the PHP file I want to reach is in the directory /str/public/. In my nginx logs, I see that it is trying to open /str/public/str/index.php. Is there any way to remove that last 'str' from the document request? Below is my location directive in sites-available/default:

        location /str {
            root /usr/share/nginx/html/str/public/;
            index index.php index.html index.htm;
            location ~ ^/str/(.+\.php)$ {
                try_files $uri = 404;
                root /usr/share/nginx/html/str/public/;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    Thank you all so much in advance.

    Read the article

  • Understanding IUSR_<machine> account

    - by liho1eye
    Namely, how is setting read/write permissions for this account different from granting read/write access in IIS (Windows 2003, so it should be IIS 6 if I am not mistaken)? Here is the issue: it looks like we had a security sweep, and as part of it the IUSR account lost write access everywhere. A whole bunch of legacy ASP sites didn't like that at all... My very superficial understanding is that denying write access in the IIS console is enough to protect a website from someone just dropping random files into it, and that IUSR permissions only affect the application scripts running server-side, so write access can safely be given back. Edit: the applications in question obviously require write access to their own web folders, otherwise this wouldn't be an issue at all. The question is how to configure IIS and the applications to both satisfy security and make them work. My first instinct was to change the account used to run the app pool, but that is already set to NETWORK_SERVICE, and that account already has full access to the folders in question.

    Read the article

  • Learn as much as possible about the setup behind a website.

    - by carrier
    Let's say I'm in the process of planning the setup of a website, and I study similar sites that offer similar services or might receive a similar traffic pattern. Is there a way to determine something about the kind of setup, software, and/or hardware behind them? Some things are obvious: if I see .php or .jsp then I already know a bit. But any ideas on how to decipher more? Maybe where the site is hosted, the hardware, the platforms...
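    Beyond file extensions, the HTTP response headers (Server, X-Powered-By, and similar) often name the web server and platform, and a reverse DNS lookup of the site's address frequently hints at the hosting provider. A minimal sketch with a placeholder URL:

        import socket
        from urllib.parse import urlparse
        from urllib.request import urlopen

        url = "http://example.com/"   # placeholder for the site being studied

        with urlopen(url, timeout=10) as resp:
            for header in ("Server", "X-Powered-By", "X-AspNet-Version", "Via"):
                value = resp.headers.get(header)
                if value:
                    print(f"{header}: {value}")

        host = urlparse(url).hostname
        addr = socket.gethostbyname(host)
        try:
            print(f"{host} -> {addr} ({socket.gethostbyaddr(addr)[0]})")   # reverse DNS hint
        except socket.herror:
            print(f"{host} -> {addr} (no reverse DNS)")

    Keep in mind that these headers are easy to strip or fake, so they are hints about the stack, not proof.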

    Read the article

  • Website occasionally does not load on first click

    - by tfe
    Today I noticed that my website, hosted on a virtual server, occasionally does not load on the first click. I click some link, the browser starts loading the page, but nothing loads and no error message appears (like "connection reset by peer", etc.). Nothing. When I click the same link again, the page loads immediately. The same thing happens on two computers in different browsers. It doesn't happen every time, maybe on every 20th or 30th click. Sites on other servers load without this problem. Any ideas what could cause this?
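    It can help to establish how often it happens and whether the failures are timeouts or resets before digging into the server. A minimal probe, a sketch with a placeholder URL:

        import time
        from urllib.error import URLError
        from urllib.request import urlopen

        URL = "http://example.com/"   # placeholder for the affected page
        ATTEMPTS = 200

        failures = 0
        for i in range(ATTEMPTS):
            start = time.time()
            try:
                with urlopen(URL, timeout=15) as resp:
                    resp.read()
            except (URLError, OSError) as exc:
                failures += 1
                print(f"attempt {i}: failed after {time.time() - start:.1f}s ({exc})")
            time.sleep(1)

        print(f"{failures}/{ATTEMPTS} requests failed")

    The pattern of the failures (timeouts versus resets, and roughly how often) narrows down where to look next.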

    Read the article

  • Varnish with multiple hosts/subdomains

    - by jerhinesmith
    I'm new to Varnish, and I'm hoping it already does this "out of the box", but I'd like to clarify before I consider using it in production. Here's my setup: I have multiple sites running off the same machine that vary by subdomain (i.e. user1.example.com, user2.example.com, etc.), and each "site" has a profile picture with the same name (i.e. user1.example.com/profile.png, user2.example.com/profile.png). Will Varnish recognize these as separate resources and cache them accordingly? Or will I need to change something in the VCL to tell it to include the full host in the cache lookup?
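    For reference, the builtin vcl_hash in Varnish hashes the request URL together with the Host header (falling back to the server IP when there is no Host), so identical paths on different subdomains become separate cache objects by default. A tiny conceptual illustration of why keying on (host, url) keeps them apart (plain Python, not VCL):

        # A cache keyed on (host, url) keeps per-subdomain resources separate
        # even when the path is identical.
        cache = {}

        def fetch(host: str, url: str) -> str:
            key = (host, url)
            if key not in cache:
                cache[key] = f"backend response for {host}{url}"   # simulate a backend fetch
            return cache[key]

        print(fetch("user1.example.com", "/profile.png"))
        print(fetch("user2.example.com", "/profile.png"))
        print(f"{len(cache)} distinct cache entries")   # prints 2, not 1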

    Read the article

  • Logins with only HTTP - are they as insecure as I'm thinking?

    - by JoeCool1986
    Recently I was thinking about how websites like Gmail and Amazon use HTTPS during the login process when you access your account. This makes sense, obviously, since you're typing in your account username and password and you want that to be secure. However, on Facebook, among countless other websites, logins are done over plain HTTP. Doesn't that mean that my login name and password are completely unencrypted? Which, even worse, means that all those people who log in to their Facebook accounts (or similar sites) at a public wifi hotspot are exposed to anyone grabbing their credentials with a simple packet sniffer (or something similar)? Is it really that easy? Or am I misunderstanding internet security? I'm a software engineer working on some web-related stuff, and although I'm not too involved with the security aspect of our software at the moment, I know I should probably know the answer to this question, since it's fundamental to website security. Thanks!
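    To make the concern concrete, this is roughly what a plain-HTTP form login looks like on the wire; anyone who can capture packets on the path sees exactly these bytes. A sketch with made-up field names, credentials, and host:

        from urllib.parse import urlencode

        # Hypothetical form fields and host; over plain HTTP this is the literal
        # payload that crosses the network, readable by any packet sniffer.
        body = urlencode({"email": "alice@example.com", "pass": "hunter2"})
        request = (
            "POST /login HTTP/1.1\r\n"
            "Host: www.example.com\r\n"
            "Content-Type: application/x-www-form-urlencoded\r\n"
            f"Content-Length: {len(body)}\r\n"
            "\r\n"
            f"{body}"
        )
        print(request)   # the credentials appear in cleartext in the request body

    HTTPS wraps that same exchange in TLS, which is why login pages (and ideally the whole session, to protect the cookies as well) should be served over it.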

    Read the article

  • Windows NLB + IIS - Stops serving pages

    - by Ye Ol Developer
    We are currently running Windows NLB and IIS7 load-balanced across two servers. What happens is that, randomly and sporadically, the servers stop serving web pages. We have noticed that if we run the sites on a dedicated IP on either of the servers, these issues do not exist; as soon as we switch back to the load-balanced IP, everything goes awry. When the servers stop serving pages, we can still TS into the server and surf the sites internally without issues, or switch to the dedicated IP, yet the internal network cannot even access the files from the load-balanced IP. We are running out of ideas here. Has anyone had a similar problem?

    Read the article

  • How to create a Service Connection Point for Exchange (Manually)

    - by Ionoxx
    I'm being cautious here: before I remove anything, I want to be able to put it back. I'm having issues with a domain-joined computer that is using SCP to get Exchange Autodiscover information. It is getting information for the now-unused internal Exchange through SCP even though the profile is using Office 365 on another domain. According to this conversation, I can simply remove the object from Active Directory Sites and Services, but I want to know how to add it back in should this create more problems, or if we reinstate the Exchange server. Right-clicking the parent "autodiscover" node doesn't allow me to create a Service Connection Point. Will simply running the cmdlet "Set-ClientAccessServer -identity servername -AutodiscoverServiceInternalUri url" be enough to recreate the object? Thank you!

    Read the article
