Search Results

Search found 11362 results on 455 pages for 'big o analysis'.

Page 352/455 | < Previous Page | 348 349 350 351 352 353 354 355 356 357 358 359  | Next Page >

  • How can I report a website that uses the webmail APIs to send spam?

    - by Igoru
    I've signed up for a cool job website that, unfortunately, asks you if you want to "invite your friends", and if you say yes, you can give it access to your Gmail contacts to send the invite. However, contrary to what everyone would expect, they don't give you a list to choose who you want to invite; instead, they directly send spam to your entire contact list, like the old-fashioned Outlook viruses. When you complain about this, they simply say "we will check the application and see if there is anything that might be confusing for the users". For me and some other friends (who fell for the same prank), this is a clear breach of web best practices and a big show of disrespect for users' trust. So, I would like to know what we can do to stop the website from using the Gmail/Yahoo/Outlook APIs to send spam this way. P.S.: I wonder what would have happened if I had given this website access to post on my Facebook timeline as well. I've got a couple of calls from relatives asking about the email, and I wonder how many unrelated people got this spam, like HR addresses from my past and whatnot.

    Read the article

  • Tunneling a public IP to a remote machine

    - by Jim Paris
    I have a Linux server A with a block of 5 public IP addresses, 8.8.8.122/29. Currently, 8.8.8.122 is assigned to eth0, and 8.8.8.123 is assigned to eth0:1. I have another Linux machine B in a remote location, behind NAT. I would like to set up a tunnel between the two so that B can use the IP address 8.8.8.123 as its primary IP address. OpenVPN is probably the answer, but I can't quite figure out how to set things up (topology subnet or topology p2p might be appropriate, or should I be using Ethernet bridging?). Security and encryption are not a big concern at this point, so GRE would be fine too -- machine B will be coming from a known IP address and can be authenticated based on that. How can I do this? Can anyone suggest an OpenVPN config, or some other approach, that could work in this situation? Ideally, it would also be able to handle multiple clients (e.g. share all four of the spare IPs with other machines), without letting those clients use IPs to which they are not entitled.
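
    A minimal GRE sketch (not the asker's existing setup, just one hedged way to do it), assuming the NAT device in front of B passes GRE (IP protocol 47); if it doesn't, OpenVPN with "topology p2p" over UDP is the safer route. B's addresses below are placeholders.

      # on A (holds 8.8.8.122 on eth0; remove 8.8.8.123 from eth0:1 first)
      ip tunnel add gre1 mode gre remote <B-public-NAT-IP> local 8.8.8.122 ttl 255
      ip link set gre1 up
      ip route add 8.8.8.123/32 dev gre1
      echo 1 > /proc/sys/net/ipv4/ip_forward
      echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp   # so A keeps answering ARP for .123 upstream

      # on B (behind NAT, private address assumed to be 10.0.0.2, gateway 10.0.0.1)
      ip tunnel add gre1 mode gre remote 8.8.8.122 local 10.0.0.2 ttl 255
      ip link set gre1 up
      ip addr add 8.8.8.123/32 dev gre1
      ip route add 8.8.8.122/32 via 10.0.0.1   # keep the tunnel endpoint reachable via the old gateway
      ip route replace default dev gre1        # then send everything else through the tunnel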

    Read the article

  • How To Replace Laptop HDD Without Losing Data?

    - by Ishan
    Hello, I recently went to the Dell service center and they told me the HDD is faulty and needs to be replaced. I have a Studio 1457 laptop with a 500 GB HDD and don't want to lose the data (purchased in May 2010, still under warranty). I have searched a bit and I think it may be best to use disk imaging software for this task. However, I don't know of a good program. I have the following steps in mind: get a 1 TB external HDD; make an image of the existing 500 GB HDD and store it on the external disk; install the new HDD, install a brand new Windows copy and then install the imaging software on it; then, using the same software I used to make the image, restore the old HDD image onto the new drive. However, I have some questions. First, is this possible? Second, I live in a country where piracy is a big issue and I am sure the support executive who comes to change the HDD will bring a pirated copy. But I have genuine Windows 7 Pro and don't want to lose it. Dell does not supply any OS disks, so I can't install it on the new HDD! If I follow the above steps, which version of Windows 7 will be retained? The one in the image (authentic) or the one on the new HDD (pirated)? I am ready to purchase good software for this task and my budget is $50-60. Since the laptop is under warranty, the new HDD will be free. One last thing: I have created a Windows Migration file whose size is 70 GB. Can it be used to move from Windows 7 Pro to Windows 7 Pro? (In case I get a genuine copy of Windows 7!) Any other method to save all the data? Thanks in advance.
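
    One hedged, no-cost way to do the imaging step is to boot a Linux live USB and clone the failing drive with GNU ddrescue. Because the image is a sector-for-sector copy of the whole disk, restoring it onto the new drive carries over the existing (genuine) Windows 7 install, so the copy in the image is what you end up with. The device names and mount points below are assumptions; check yours before running anything.

      # first pass: copy everything readable, skip damaged areas, keep a map file
      ddrescue -n /dev/sda /mnt/usb/old-disk.img /mnt/usb/rescue.map
      # second pass: retry the bad areas a few times
      ddrescue -r3 /dev/sda /mnt/usb/old-disk.img /mnt/usb/rescue.map
      # after the new drive is installed, write the image back (new drive assumed to appear as /dev/sda again)
      ddrescue -f /mnt/usb/old-disk.img /dev/sda /mnt/usb/restore.map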

    Read the article

  • How to share text, pictures and video with a restricted set of users

    - by joaoc
    I want to share pictures and videos of my kid growing up, but I don't want them to be open to the public. I just want me and my wife to be able to add material and then share it with grandparents and friends, who would be given a username/password or some other solution that authenticates them. This content would preferably be available remotely over the internet, or downloaded periodically to the allowed viewers' computers. The second option most likely means they would have to install a client, which I am not too fond of (I would have to ask them to install it, help configure it for different platforms, ...). I'm just starting out, but I would like software that also allows export of the data (in case one day I want to migrate to a different solution).
    Folder sharing: I don't know if Windows folder sharing is reliable, robust and safe enough to use over the internet. Dropbox could be a software solution, but it's limited to 2 GB and requires receivers to also install and configure software. A folder is also just a collection of files, so it would be harder to keep them organized with text describing the pictures/videos.
    Email: I could email the content every now and then, but videos are almost always too big to go as attachments, and this would not provide a history of updates to anyone joining in later.
    Are there other solutions to achieve this? NOTE: I also thought that software to run a local or remote blog could be an option, but we aren't allowed to ask those questions on Super User because one admin doesn't like cloud-computing questions. If you do think it's the best approach and can suggest a solution, do state it in the answers anyway!

    Read the article

  • Are there compact external USB audio interfaces which are better than on-board sound?

    - by rumtscho
    I am asking this for a friend. He loves his voice recognition software and dictates a lot of text using a headset. Now he has a new laptop, which only has a combined mic/headphone jack, and he wanted to buy an adapter. I told him to get an external USB sound interface instead, as the better sound quality will probably increase the hit rate of the voice recognition. He agreed, but when he saw a picture of the SoundBlaster X-Fi, he said it is way too big, because he wants to carry the thing everywhere. He'd rather have one of those small things the size of a flash memory stick, with only one mic input and one headphone output, period. Now I am not sure whether these mini interfaces produce sound any better than onboard audio. They all seem to come not from established audio interface manufacturers, but from electronics accessories makers like Speedlink, or just no-name brands. Is there a compact audio interface with good A/D quality (it is OK if the price is comparable to that of the bigger interfaces, even if there is no additional functionality like RCA in-/outputs etc.)? And if there isn't, will the no-name sound-card sticks offer any advantage over a simple adapter for the onboard sound?

    Read the article

  • Barriers to IPv6 deployment: addressing

    - by sysadmin1138
    There are several things that are keeping IPv6 deployment from being a topic of active discussion here at my work. There are the usual technical issues, but one non-technical one appears to be a major stumbling block on the path to actually getting a deployment project going. Addresses, memorizing of. Specifically, IPv4 addresses are comprehensible, and IPv6 addresses just look like a big long string of hex. The human mind has real trouble memorizing lists of more than 7-8 items, and an IPv4 address (192.168.231.148) has four items in it which makes it easy for us to memorize. A fully populated IPv6 address has not only 8 sections, but each section has 4 hex digits in it. IPv6 addresses were not designed for memorization. To the technician who knows that the DNS server is at 192.168.42.42 (or more likely "42.42", since the company prefix is likely memorized), the idea of memorizing an IPv6 address fills them with dread. Which in turn makes them much less enthusiastic about participating in an IPv6 deployment project. Because of how our network works we're not fully dynamic in terms of v4 addressing. We have several to many subnets that are entirely statically assigned for a variety of reasons, chief among them being that the overhead of static DHCP assignments is perceived as being too great. Also, some devices still aren't smart enough to pull DNS addresses out of DHCP while also having a static assignment, and therefore require manually configured DNS settings. Therefore, some v6 address memorization will have to be done. We're not under any mandate to get v6 out the door, so we don't have pressure from the top. However, it is time to start prepping our infrastructure to handle IPv6 even if we don't convert wholesale. For those of you who have been in IPv6-land for a while, what short-cut methods do you use to discuss or keep track of subnets and specific/critical IP addresses? If I can help reduce some of the dread surrounding IPv6 we might get the project going.
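
    For what it's worth, one hedged convention (sketched with the 2001:db8::/32 documentation prefix standing in for a real allocation) is to carry the already-memorized v4 numbers into the v6 plan: use the v4 subnet number as the v6 subnet ID and the v4 host number as the interface ID, so "the DNS server at 42.42" stays "::42 on subnet 42".

      # v4: DNS at 192.168.42.42   ->   v6: subnet 42, host 42
      ip -6 addr add 2001:db8:0:42::42/64 dev eth0
      # a matching AAAA record means day-to-day work uses the name, not the full address:
      #   ns1.example.com.   AAAA   2001:db8:0:42::42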

    Read the article

  • MongoDB on a 128 MB 32-bit VPS (plus Tornado and Redis)

    - by apito
    I am curious how MongoDB will perform on a limited VPS. Specifically, I'll deploy this configuration on a 32-bit Ubuntu 9.04 server with 128 MB of memory (UPDATE: now I'm considering 360 MB too):
    - nginx and redis
    - three instances of Tornado apps (one is for the mobile site, a limited app, not my primary audience); the main one is a social webapp for my community with around 8 collections
    - mongodb
    Everything besides MongoDB seems to have a small footprint. Memory-mapping-wise, I don't know how MongoDB will behave. I know it's a bit of a stretch to use this kind of config on a tiny VPS, but that's what I can afford for now. I expect to have... hmm... maybe ~50 users, 15 rps. I did my homework with a lot of frontend optimization and YSlow gives me grade A, 91 (ruleset V2) :-) Anyone willing to share experiences? E.g. how big the data set was when Mongo hit the ceiling, performance when Mongo does a lot of disk I/O, etc. Thanks. UPDATE: this is my pet project. I'll get back to you when I next have spare time to run some httperf tests in a VirtualBox VM with the exact same spec. Suggestions on how to do the stress testing are welcome; I'm new to this kind of stuff.
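
    Worth keeping in mind: 32-bit MongoDB builds are capped at roughly 2 GB of data because storage is memory-mapped, and on 128 MB of RAM the working set will live or die by the OS page cache. A hedged sketch of MMAPv1-era startup flags that shrink the footprint on a tiny VPS (verify that your mongod version still accepts them):

      # smaller data files, no preallocation, journaling off -- trades durability for RAM/disk headroom
      mongod --dbpath /var/lib/mongodb --smallfiles --noprealloc --nojournal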

    Read the article

  • Scaling databases with cheap SSD hard drives

    - by Dennis Kashkin
    Hey guys! I hope that many of you are working with high-traffic database-driven websites, and chances are that your main scalability issues are in the database. I noticed a couple of things lately: most large databases require a team of DBAs in order to scale. They constantly struggle with the limitations of hard drives and end up with very expensive solutions (SANs or large RAIDs, frequent maintenance windows for defragging and repartitioning, etc.). The actual annual cost of maintaining such databases is in the $100K-$1M range, which is too steep for me :) Meanwhile, several companies like Intel, Samsung, FusionIO, etc. have just started selling extremely fast yet affordable SSDs based on SLC flash technology. These drives are 100 times faster in random reads/writes than the best spinning hard drives on the market (up to 50,000 random writes per second). Their seek time is pretty much zero, so the cost of random I/O is the same as sequential I/O, which is awesome for databases. These SSDs cost around $10-$20 per gigabyte, and they are relatively small (64 GB). So there seems to be an opportunity to avoid the HUGE costs of scaling databases the traditional way by simply building a big enough RAID 5 array of SSDs (which would cost only a few thousand dollars). Then we don't care if the database file is fragmented, and we can afford 100 times more disk writes per second without having to spread the database across 100 spindles. Is anybody else interested in this? I've been testing a few SSDs and can share my results. If anybody on this site has already solved their I/O bottleneck with SSDs, I would love to hear your war stories! PS. I know that there are plenty of expensive solutions out there that help with scalability, for example the time-proven RAM-based SANs. I want to be clear that even $50K is too expensive for my project. I have to find a solution that costs no more than $10K and does not take much time to implement.
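
    If it helps, a minimal fio sketch for verifying that a candidate drive really delivers the advertised random-write IOPS before building the array (the device path is a placeholder, and writing to it destroys whatever is on that drive):

      # 4 KB random writes, queue depth 32, direct I/O so the page cache doesn't flatter the numbers
      fio --name=randwrite --filename=/dev/sdX --direct=1 --ioengine=libaio \
          --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 --runtime=60 \
          --time_based --group_reporting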

    Read the article

  • Copy a large LVM volume (14 TB) from one server to another

    - by bruce
    Recently I have had to copy a very large LVM volume (14 TB) from server A to server B. Below are the filesystems of server A and server B:

      server A
      [root@AVDVD-Filer ~]# df -h
      Filesystem                         Size  Used Avail Use% Mounted on
      /dev/mapper/vg_avdvdfiler-lv_root   16T   14T  1.5T  91% /
      tmpfs                              3.0G     0  3.0G   0% /dev/shm
      /dev/cciss/c0d0p1                  194M   23M  162M  13% /boot
      /dev/mapper/vg_avdvdfiler-test     2.3T  201M  2.1T   1% /test
      /dev/sr0                           3.3G  3.3G     0 100% /mnt

      server B
      [root@localhost ~]# df -h
      Filesystem                         Size  Used Avail Use% Mounted on
      /dev/mapper/VolGroup-LogVol00       20G  2.5G   16G  14% /
      tmpfs                              3.0G     0  3.0G   0% /dev/shm
      /dev/cciss/c0d0p1                  194M   23M  162M  13% /boot
      /dev/mapper/VolGroup00-LogVol00     16T  133M   15T   1% /xiangao/lv1
      /dev/mapper/VolGroup00-LogVol01    4.7T  190M  4.5T   1% /xiangao/lv2

    I want to copy the LVM volume /dev/mapper/vg_avdvdfiler-lv_root on server A to the LVM volume /dev/mapper/VolGroup00-LogVol00 on server B. Server A and server B are in the same IP segment. The LVM volume on server A holds mostly ~500 MB avi/wmv/mp4 files. I tried mounting /dev/mapper/vg_avdvdfiler-lv_root from server A on server B via NFS and then copying with cp, but it clearly failed -- the volume is just too big. I don't have a good idea of how to proceed and am hoping for a good solution here. (I'm Chinese and my English is poor -- sorry, and thanks everyone!)
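
    A hedged sketch of a file-level copy that is restartable and doesn't need NFS (run from server A; the source path and server B's hostname are placeholders, the target directory is taken from the df output above):

      # push the tree to server B over ssh; safe to re-run, it resumes where it left off
      rsync -a --partial --progress /path/to/media/ root@serverB:/xiangao/lv1/
      # on a trusted LAN where ssh encryption becomes the bottleneck, tar piped through ssh
      # is a common alternative:
      #   tar -C /path/to/media -cf - . | ssh root@serverB 'tar -C /xiangao/lv1 -xpf -'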

    Read the article

  • Find slow network nodes between two data centers

    - by 2called-chaos
    I've got a problem syncing a big amount of data between two data centers. Both machines have a gigabit connection and are not fully utilized, but the fastest I am able to get is something between 6 and 10 Mbit => not acceptable! Yesterday I ran some traceroutes, which indicated huge load on a LEVEL3 router, but the problem has existed for weeks now and the high response time is gone (20 ms instead of 300 ms). How can I trace this to find the actual slow node? I thought about a traceroute with bigger packets, but will this work? In addition, this problem might not be related to one of our servers, as there are much higher transmission rates to other servers or clients. Actually office => server is faster than server <=> server! Any idea is appreciated ;) Update: We actually use rsync over ssh to copy the files. As encryption tends to have more bottlenecks, I tried an HTTP request, but unfortunately it is just as slow. We have an SLA with one of the data centers. They said they have already tried to change the routing, because they say the problem is related to a cheap network the traffic gets routed through. It is true that it routes through a "cheapnet", but only the other way around. Our direction goes through LEVEL3 and the other way goes through Lambdanet (which they say is not a good network). If I got it right (my networking knowledge is intermediate), they simulated a longer path to force routing through LEVEL3, and they announce LEVEL3 in the AS path. I basically want to know if they're right or if they're just trying to shirk their responsibility. The thing is that the problem exists in both directions (over different routes), so I think it is the responsibility of our hoster. And honestly, I don't believe there is a DC-to-DC connection that can only handle 600 KB/s - 1.5 MB/s for weeks! The question is how to detect WHERE this bottleneck is.
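
    A hedged sketch for splitting the problem in two -- per-hop packet loss versus raw end-to-end throughput (hostnames are placeholders; run the tests from both ends, since the two directions take different routes):

      # per-hop loss and latency averaged over many probes, far more telling than a single traceroute
      mtr --report --report-cycles 200 dc2.example.com
      # raw TCP throughput with no rsync/ssh/HTTP in the way
      iperf -s                              # on the receiving server
      iperf -c dc2.example.com -t 60 -P 4   # on the sending server: 60 s, 4 parallel streams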

    Read the article

  • Backup solution to back up terabytes and lots of static files on a Linux server?

    - by user28679
    Which backup tool or solution would you use to back up terabytes of data and lots of files on a production Linux server? Note that the files are all different and almost never modified, and usage is mostly adding files, so the data volume today is 3 TB, growing all the time at around +15 GB/day. Please do not reply "rsync". Basic Unix tools are not enough: rsync does not keep history, and rdiff-backup miserably fails from time to time and screws up the history. Moreover these are all file-based backups, which generate a lot of IO wait just browsing directories and querying stat(). But I guess, except for R1Soft CDP, there is no way around that. We tried R1Soft CDP backup, which is block-level backup, and it proved good and efficient for all our other servers, but it systematically fails on the server with 3 terabytes and gazillions of files. The engineers at R1Soft and the datacenter have been playing a hot-potato game for more than 2 months now... and there is still no backup except regular rsync. We never tried the big commercial solutions, except R1Soft CDP, since it was provided as an optional service by the datacenter hosting our servers.

    Read the article

  • Someone from China wants to kill my bandwidth?

    - by yes123
    Hi guys. Someone from China with two different IPs is downloading the same big file from my server over and over. Their IPs are 122.89.45.210 and 60.210.7.62. They request and download this file more than 20 times per minute. What can I do to prevent this? (I am on Gentoo with root access.) And WHY are they doing this to a site that has nothing to do with China?
    ADD1: Other IPs: 221.8.60.131, 124.67.47.56, 119.249.179.139, 60.9.0.176.
    ADD2: The stupid thing is they are requesting only one single file, lol. Either they want that file removed (though I don't see why), or they are pretty stupid.
    ADD3: The situation is getting worse. The IPs are now spreading to other countries too (USA and Korea, if www.geobytes.com/iplocator.htm is right), and now they are requesting another file.
    ADD4: It seems that after they realized I removed that file, they stopped attacking me. I will monitor the situation. They started again after a pause of 3-4 minutes, with the same file (lucky me). Hard to say why this is happening.
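
    A hedged sketch of cutting them off at the firewall (the addresses are the ones listed above; the rate-limit variant assumes the file is served over plain HTTP on port 80):

      # outright drop for the known offenders
      iptables -A INPUT -s 122.89.45.210 -j DROP
      iptables -A INPUT -s 60.210.7.62   -j DROP
      # or rate-limit instead: drop any source opening more than 20 new
      # connections to port 80 within 60 seconds
      iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set
      iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
               -m recent --update --seconds 60 --hitcount 20 -j DROP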

    Read the article

  • Organizing my music and my iTunes library

    - by Cawas
    What can we do to organize our music? I've got over 20k items in my iTunes library, at least 5k with ratings and play counts, and apparently just 12k actual music files, and I can't understand how this question has not been properly answered yet. Maybe there is no answer. I have too many duplicates, broken links, bad music, corrupted files... well, a big mess with no tags! Probably there's no single piece of software capable of just organizing everything, though I'd love one. Hopefully some time in the near future we will all be able to just sync the cloud of our automagically selected music to a newly created offline copy. But meanwhile... please do consider that I've at least given a shot (even if not a full test drive) to every single answer linked here already, plus a few more. I'm fine with using other software (Mac too, please) to organize, but I'd need it to sync (retrieve and put back) at least the iTunes ratings, because of my iPhone and smart playlists. I'm not looking for an iTunes replacement. I'm hoping to hear what you hardcore music organizers out there are using as your own solutions! :) I myself am using way too many tools, getting way too little done, and end up going song by song.

    Read the article

  • Windows 2003 DC to Windows 2008 R2 DC with same name and same IP

    - by TheCleaner
    Environment = Windows 2003 native domain with 8 DCs. I've got an old domain controller that is running 2003, the Enterprise CA role, DHCP, DNS, a few GPO scripts that point to shares on it, and some other minor functions. All our servers point to it as their primary DNS, and there are lots of references to its IP or name throughout the domain at this point (8+ years later). I really don't feel like manually changing all of this; it would be a pretty massive undertaking. I want to follow this guide: http://msmvps.com/blogs/acefekay/archive/2010/10/09/remove-an-old-dc-and-introduce-a-new-dc-with-the-same-name-and-ip-address.aspx to hopefully end up with what is basically an "in-place upgrade", so to speak. I considered just doing a P2V of the box, but we don't really want to keep it around running 2003, to be honest. I also considered using a CNAME and adding a 2nd IP (the old one), but again, it seemed like it would be cleaner using the attached link. My actual question: any gotchas or big caution signs when doing what the link suggests? Has anyone gone down this road and have advice on how to proceed?

    Read the article

  • Apache server rewrite rules: how to avoid "implicitly forcing redirect (rc=302)"?

    - by Olivier Pons
    Hi! I've got a very annoying problem: our webserver handles 2 domains (more actually, but let's say 2 for a simpler example): pretassur.fr and pretassuragentimmobilier.fr. Here's what I want to do: change (whatever1).pretassuragentimmobilier.fr(/whatever2) to (whatever1).pretassur.fr(/whatever2)?theme=agentimmobilier. So here's my rewrite rule:

      RewriteCond %{SERVER_NAME} (([a-z]+\.)*)pretassuragentimmobilier.(fr|com)
      RewriteRule ^(.+) http://%1pretassur.fr$1 [E=THEME:pretassur_agent,QSA]
      # if THEME not empty, set it:
      RewriteCond %{ENV:THEME} ^(.+)$
      RewriteRule (.*) $1?IDP=%{ENV:THEME} [QSA]

    The big (huge) problem is, let's have a look at the rewrite logs:

      [pretassurmandataireimmo.com] (5) setting env variable 'THEME' to 'pretassur_mandataire'
      [pretassurmandataireimmo.com] (2) implicitly forcing redirect (rc=302) with http://pretassur.fr/

    Aaaaaaaaarg! "Implicitly forcing redirect" = I don't want that! I want to rewrite internally to pretassur.fr, not make a real redirect! Right now, if you type http://pretassurmandataireimmo.com it is redirected to http://pretassur.fr/?IDP=pretassur_mandataire (try it). I don't want that! I want to display that page, http://pretassur.fr/?IDP=pretassur_mandataire, but without touching the original host! Any idea? Thanks a lot!
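
    For context: mod_rewrite always turns the substitution into an external 302 when it carries a scheme and a hostname different from the current request's, so a purely internal rewrite to another hostname needs proxying. A hedged sketch of the proxy variant, assuming mod_proxy and mod_proxy_http are enabled (the other option is to let one vhost answer for both names via ServerAlias and rewrite only the query string, which the second rule above already does):

      # hand the request to the other hostname server-side instead of redirecting the browser
      RewriteCond %{HTTP_HOST} (([a-z]+\.)*)pretassuragentimmobilier\.(fr|com) [NC]
      RewriteRule ^(.*)$ http://%1pretassur.fr$1?IDP=pretassur_agent [P,QSA]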

    Read the article

  • Windows desktop virtualization instead of replacing workstations

    - by Chris Marisic
    I'm head of the IT department at the small business I work for; however, I am primarily a software architect, and all of my system administration experience and knowledge is ancillary to software development. At some point this year or next we will be looking at upgrading our workstation environment to a uniform Windows 7 / Office 2010 environment, as opposed to the hodgepodge collection of various OEM-licensed editions of software that are on each different machine. It occurred to me that it is probably possible to forgo upgrading each workstation and instead have each one be a dumb terminal that accesses a virtualization server, with the entire virtual workstation hosted on the server. Now, I know basically anything is possible, but is this a feasible solution for a small business (25-50 workstations)? Assuming that it is feasible, what rough guidelines exist for calculating the server resources required for this? How exactly do solutions handle a user accessing their VM: do they log on normally to their physical workstation and then use Remote Desktop to access their VM, or is it usually done with a client piece of software that negotiates this? What types of software are available for administering and monitoring these VMs, and can this functionality be achieved out of the box with Microsoft Server 2008? I'm mostly interested in these questions as they relate to Server 2008 with Hyper-V, but feel free to offer insight into VMware's product line-up, especially if there are any compelling reasons to choose it over Hyper-V in a Microsoft shop. Edit: Just to add some more information: the implementation goal would be to upgrade our platform from a Win2k3 / XP environment to a full Windows 2008 / Win7 platform without having to do all the associated work on each differently configured workstation. Also, could anyone offer realistic guidelines for how much hardware is needed to support 25-50 workstations virtually? The majority of the workstations do nothing except Office, Outlook and the web. The only high-demand workstations are the development workstations, which would keep everything local.

    Read the article

  • Making it Easier for Older Users to Login to Multiple Accounts

    - by Mike Hagstrom
    I currently do consulting for a small business that has multiple applications they need to log in to. I'm trying to get them to start using Basecamp and Zendesk to make all of our lives easier when it comes to collaboration on big projects and quick helpdesk ticket items. However, I have recently been informed that it is difficult for them to remember all of these websites to log in to, even though the login information is the same. Right now they have to log in to: Windows, Gmail. I want them to additionally log in to: Basecamp, Zendesk. This is just a generation-or-two gap between myself and them, so I'm wondering what others do to solve these problems. Is there some way we could configure USB thumb drives with LastPass or something similar so that, when plugged into the computer, they automatically log the user into their Windows account, and then, when they visit say their Basecamp account, automatically log them into that as well? I think the security risk (of a lost thumb drive) is well worth the ability to use these extra applications. Unless anyone else has other ways of making it easier for users to log in to multiple sites?

    Read the article

  • Deploying a Django application in a virtual Ubuntu Server

    - by mfsaint
    I have a VirtualBox machine running Ubuntu Server 10.04 LTS. My intention is for this machine to work like a VPS; this way I can learn and prepare for when I get a real VPS service. Apache + mod_wsgi for deploying the Django app seems the right choice to me. I have the domain (marianofalcon.com.ar) but nothing else, no DNS. The problem is that I'm pretty lost with all the deployment stuff. I know how to configure mod_wsgi (with the django.wsgi file) and Apache (creating a VirtualHost). Something is missing and I don't know what it is. I think I lack networking skills, and that's the big problem. Trying to host the app in a VirtualBox VM adds some difficulty because I don't know exactly which IP to use. This is what I've got, a file placed at /etc/apache2/sites-available:

      NameVirtualHost *:80
      <VirtualHost *:80>
          ServerAdmin [email protected]
          ServerName www.my-domain.com
          ServerAlias my-domain.com
          Alias /media /path/to/my/project/media
          DocumentRoot /path/to/my/project
          WSGIScriptAlias / /path/to/your/project/apache/django.wsgi
          ErrorLog /var/log/apache2/error.log
          LogLevel warn
          CustomLog /var/log/apache2/access.log combined
      </VirtualHost>

    and the django.wsgi file:

      import os, sys
      wsgi_dir = os.path.abspath(os.path.dirname(__file__))
      project_dir = os.path.dirname(wsgi_dir)
      sys.path.append(project_dir)
      project_settings = os.path.join(project_dir, 'settings')
      os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'
      import django.core.handlers.wsgi
      application = django.core.handlers.wsgi.WSGIHandler()
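
    On the networking side, a hedged sketch of two ways to reach the guest (the VM name and ports below are assumptions): either switch the VM's adapter to bridged mode so it gets its own address on your LAN, or keep NAT and forward a host port to the guest's port 80.

      # NAT port forwarding: host port 8080 -> guest port 80 (run on the host, VM powered off)
      VBoxManage modifyvm "ubuntu-server" --natpf1 "http,tcp,,8080,,80"
      # then test with http://<host-ip>:8080/ ; to use the real domain you would later point
      # marianofalcon.com.ar's A record at the host's public IP and forward port 80 instead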

    Read the article

  • Reduce "Metafile" memory usage?

    - by Jay Conrod
    My work computer (Windows 7 64-bit) spends a lot of time swapping memory when I switch between programs. This surprises me since I have 4 GB of RAM, and the programs I use aren't particularly RAM hungry (Outlook, Emacs, p4win, Firefox, various build tools). I downloaded RAMMap, and it shows over a gigabyte of memory used by "Metafile". From the Sysinternals blog: Metafile is part of the system cache and consists of NTFS metadata. NTFS metadata includes the MFT as well as the other various NTFS metadata files. ... In the MFT each file attribute record takes 1k and each file has at least one attribute record. Add to this the other NTFS metadata files and you can see why the Metafile category can grow quite large on servers with lots of files. So I understand what the "Metafile" data is... I work on large builds comprising hundreds of thousands of files (none are that big, but they add up to several gigabytes). My question is how can I reduce the amount of memory used by "Metafile"? I'm not actively using all those files at once, so why does Windows need to keep info in RAM? Restarting my machine every time I sync a new build is really annoying.

    Read the article

  • Finding matching columns in Excel

    - by fakaff
    I've never used Excel before, so I need the simplest solution available, and this is a work assignment due this week, so I didn't have time to read much of the documentation. Basically, I have two tables, A and B, and they are both thousands of rows long. Description of my task: right now (since I don't know better) I'm doing this manually:
    1. Go to row i in table B.
    2. Select the entries in columns B(a, b, c) of that row.
    3. Look for a row in table A where column A(b) matches column B(a).
    4. Paste the entries from row i of table B at the end of the row found in the last step.
    5. Repeat for row i + 1.
    Example: row B(cat, dog, mouse) matches A(mammal, cat, Mr. Whiskers), so I would paste B after A and get A(mammal, cat, Mr. Whiskers, cat, dog, mouse). Note: I am not joining tables. I am merely extending table A by pasting row B wherever column A(b) matches column B(a). Also, sometimes entries are spelled slightly differently, so using wildcards to search for candidates would help. As the description should let on, this task is very tedious and inefficient if I don't know how to automate some of the operations (there are thousands of entries). Any quick tips as to how to be more productive would be a big help.
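
    A hedged formula sketch, assuming table A sits on Sheet1 with its "b" values in column B and its first free column being D, and table B sits on Sheet2 in columns A:C (all of these positions are assumptions). Put the formula in Sheet1!D2, then fill right to F and fill down:

      =IFERROR(VLOOKUP($B2, Sheet2!$A:$C, COLUMNS($D:D), FALSE), "")

    COLUMNS($D:D) evaluates to 1, 2, 3 as the formula is filled right, pulling B(a), B(b), B(c) next to each matching row of A, and IFERROR blanks out rows with no match. For the loosely spelled entries, VLOOKUP accepts wildcards in the lookup value when the last argument is FALSE, e.g. $B2 & "*" to match anything that merely starts with A's value.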

    Read the article

  • Struggling with the proper way to set up permissions on a Linux/Apache web server

    - by Dr. DOT
    Your expert experience and assistance is great, greatly appreciated here. I have been running a LAMP server for a long time, yet I still struggle with the best way to set file & directory permissions for FTP and WWW protocol activity. My Control panel is WHM/cPanel (not that it makes a difference), and out-of-the box: files are owned by the user account setup in WHM (eg, "abc") files have a group setting of "abc" as well file permissions are created with 644 directories are owned by "abc" directories have a group setting of "abc" directories permissions are created with 0755 Again, these are the default permission settings. Now everything is fine with FTP activity, but please advise me if any of these file/directory settings create issues, especially with security. Here's where my struggle comes into play. I have PHP apps that allow a visitor to create, edit, rename, delete, etc. sub-directories and files in certain selected directories. PHP runs as "nobody" on my server. So in order to get my PHP/Web apps to work, I have had to: chown nobody * chgrp nobody * chmod 0777 * to everything in these certain & selected sub-directories. I know this is probably a huge security whole (so don't ask me for any links :) but how should I set all the permissions to allow my FTP user to do his thing while allowing the PHP apps to do their thing will also "minimizing" any security risks and exposures? I know that big CMS systems like Drupal, Joomla, WordPress and so on, handle this. Thanks ahead of time for reading through this and offering your expert advice!

    Read the article

  • i7 Windows 7 laptop or MacBook Pro 15" (for making 3D models and animation)

    - by sppdhs
    Hi everyone, I'm thinking of buying a new laptop, as my current one will be 5 years old next year and it currently gives me a blue screen every time I run heavy software for making 3D models and animation. The question is: should I buy a MacBook Pro or a Windows 7 laptop? I've tried to research this; some say that Windows would be better, especially because you can buy a very high-spec laptop for a cheaper price, while others say the MacBook Pro is the better choice as it can run Windows 7 via Boot Camp at 100% performance in every piece of software, even though it's a Mac. Is this true? Which one is actually better? By the way, the software I usually use is 3ds Max, Maya, and ZBrush. I should also note that I have never used a Mac before. I've checked around some stores and, budget-wise, I'd need around AU$3000+ for a MacBook Pro and AU$2000+ for a Windows laptop - quite a big difference in price range. Thanks in advance for your help.

    Read the article

  • Failure to connect to admin share pops up dialog

    - by Jan
    I'm having an issue with a curious error message when accessing the administrative share on a remote machine. Specifically, the client is logged in as the domain administrator on machine A and runs some code that tries to access the admin share on B (a domain member). The access is done in .NET, along these lines (though I am not sure whether the method of access makes a difference):

      string path = @"\\B\admin$";
      if (Directory.Exists(path))
      {
          try
          {
              path += @"\temp\";
              if (!Directory.Exists(path))
              {
                  Directory.CreateDirectory(path);
              }
              path += "myfile_remote";
              File.Copy("myfile", path);
          }
          catch (IOException)
          {
              // the original snippet was cut off here; a fallback to a local copy happens instead
          }
      }

    Now, on some machines this fails. That is not a big problem, as we have a fallback. I'd like to know why it fails, but that is not the real issue. The problem is that running this piece of code causes a dialog box to pop up for the logged-in user on B, saying "network error trying to access \\B\admin$\temp\myfile_remote. Contact the network administrator and ask for the correct permissions". Unfortunately, it is a foreign-language Windows, so I'll spare you all a screenshot; it is skinned like a standard Windows dialog box. Why exactly is that dialog box popping up for the user, and is there anything I can do about it? Edit to add: B is a Windows 7 Enterprise installation. The client is not aware of any GPO policies being installed. There is AV from Trend Micro installed.

    Read the article

  • Improving performance by using an additional static file server

    - by Max
    Hello there, I'm planning a large website that includes many static assets (JS, CSS, images and thumbnails) in the generated pages. The website will use TYPO3 as its CMS (it is a customer requirement). I guess I could seriously improve performance / page load times by using a two-server setup: one server where the main application (PHP) runs, and another one where the static files sit, served by a trimmed-down version of Apache or something like lighttpd. Including e.g. JS or CSS files from the file server is of course no big deal: just use an absolute URL like http://static.example.com/js/main.js and be done with it. But that website will have pages with MANY thumbnails of e.g. product images on them, so I see two problems when the main application tries to create a thumbnail of some image: the original image, like products/some.jpg, is uploaded to the static file server and is therefore not on the same server as the PHP application that tries to create the thumbnail; and TYPO3 writes created thumbnails to a temp directory, which is expected to be on the same server. Therefore, hundreds of thumbnails would be written to and served from that temp directory on the same server as the main application -- the static file server would in that case be basically useless, since all thumbnails would be requested from the main application's server. So, my question is: how do I overcome these shortcomings? Is it possible to "symlink" some directories to another server? So that, for example, if PHP tries to open the original product image for thumbnail creation with imagecreate("products/some.jpg"), the products folder actually "points" to the products folder on the static image server? I know something like this can be done with .htaccess, but is it possible at the file-system level?
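
    At the file-system level this is what network mounts are for: a directory exported by the static server can be mounted into the application server's tree, so PHP opens "products/some.jpg" as if it were local. A hedged sketch with sshfs and, alternatively, NFS (hostnames and paths are placeholders):

      # mount the static server's products directory into the app server's docroot via sshfs
      sshfs www-data@static.example.com:/var/www/static/products /var/www/app/products \
            -o allow_other,reconnect
      # or the classic NFS route, via a line in /etc/fstab on the app server:
      #   static.example.com:/var/www/static/products  /var/www/app/products  nfs  defaults  0  0

    Per-file-open network latency is the trade-off, so it may be worth mounting only the source images this way and syncing or rewriting the generated thumbnails to the static host separately.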

    Read the article

  • Suddenly getting lock timeouts with MySQL

    - by Marc Hughes
    We've got a web app hosted on Amazon Web Services. Our database is a multi-AZ RDS MySQL server running 5.1.57, and 3-4 app servers talk to it. Today, we started seeing a lot of errors along the lines of "Lock wait timeout exceeded; try restarting transaction" - almost 1% of POST requests are seeing this. There have been no modifications to the code running on the site. There have been no schema changes. We haven't had a big spike in traffic. I've been looking at the processes running, and none seem out of control. I tried scaling our RDS instance from a small to a large, with no effect. Two days ago, Amazon had some outages. As part of the recovery from that, our RDS server and our app servers ended up in different availability zones, but all within the same region. But yesterday everything was fine, so I'm not convinced that's related. The lock timeouts occur in different types of requests and in different InnoDB tables. I have noticed that the number of open connections jumped when we started seeing problems, but that may be a symptom and not a cause. What are my next steps in debugging this?
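
    A hedged first-look sketch using the mysql client against the RDS endpoint (hostname and credentials are placeholders); the goal is to see which transactions are holding locks and whether something is leaving transactions open:

      # latest detected lock waits/deadlocks plus the transactions involved
      mysql -h mydb.xxxx.rds.amazonaws.com -u admin -p -e "SHOW ENGINE INNODB STATUS\G"
      # long-running or idle-in-transaction connections that may be pinning row locks
      mysql -h mydb.xxxx.rds.amazonaws.com -u admin -p -e "SHOW FULL PROCESSLIST"
      # the timeout itself (default 50 s); raising it hides the symptom rather than curing it
      mysql -h mydb.xxxx.rds.amazonaws.com -u admin -p -e "SHOW VARIABLES LIKE 'innodb_lock_wait_timeout'"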

    Read the article
