Search Results

Search found 10693 results on 428 pages for 'stay updated'.


  • How to change mount to grant user write permissions?

    - by nals
    I am on TomatoUSB and using its NAS feature. The only way I can write to the Samba share is if I force root:

        [global]
        interfaces = 127.0.0.1, 192.168.1.1/24
        bind interfaces only = no
        workgroup = WORKGROUP
        netbios name = TOMATO
        security = share
        wins support = yes
        name resolve order = wins lmhosts hosts bcast
        guest account = nobody

        [Public]
        path = /mnt/sda2
        read only = no
        public = yes
        only guest = yes
        guest ok = yes
        browseable = yes
        comment = Network share
        force user = root
        writeable = yes

    I don't really like the idea of having to use root to allow write access to my share. I already have a Samba account named nobody to allow access to the share, but every time I try to write I get an access denied error. fstab:

        /dev/sda2 /mnt/sda2 vfat defaults 0 0

    Furthermore, every time I try to chmod 777 /tmp/mnt/sda2 the permissions are not changed and no error is produced; they stay 755:

        drwxr-xr-x 2 root root 4096 Jun 4 01:49 sda2

    Basically: how can I give the user nobody write permissions to my mount? Device name: /dev/sda2, mount point: /tmp/mnt/sda2.
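    A likely explanation is that the partition is FAT (vfat), which has no Unix ownership or permission bits at all: chmod is silently ignored, and the apparent root/755 ownership is fixed at mount time. A hedged sketch of the usual fix is to set ownership through mount options instead and then drop force user = root; the uid/gid values and the fstab location below are assumptions for a stock TomatoUSB build.

        # fstab-style entry (or the equivalent options in Tomato's USB mount settings);
        # 65534 is commonly the uid/gid of "nobody"/"nogroup" - check /etc/passwd on the router
        /dev/sda2  /mnt/sda2  vfat  uid=65534,gid=65534,umask=0000  0  0

        # smb.conf: map guest access to the same account instead of root
        #   force user = nobody

    If the firmware manages the mount itself rather than reading fstab, the same uid/gid/umask options can usually be added in the GUI's USB mount options field.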

  • Mesh-networked servers via VPN

    - by microspino
    I have a design idea and I would like some advice from SF about it. I have 5 customers with small real-estate databases. I've built a desktop app for them, and now they would like to merge their databases to share their data. I don't want to centralize everything in one place, nor do I want to do server maintenance. They also told me that all of them have small servers and maintenance guys available in their offices.

    Although everything seems suitable for a web application, I had the idea to experiment with something new: each customer's small server would be connected to the others in a sort of mesh network, without a single point of failure, through VPNs. If one of the servers went down, its customers could still connect to their databases on one of the other mesh-networked servers instead of the local one that is down. During normal operations all the servers sync the database with the others through the VPNs. I can accept a half-day window of non-synced data; in other words, since I don't need real-time synchronization, the servers don't always have to stay in sync. I can migrate my data over to non-SQL technologies like CouchDB or Redis or whatever you suggest.

    As you can see I don't have a lot of constraints, and although I could go with a web application, I would like to delegate and decentralize support, data privacy and management to my customers' offices as much as I can. Is that a crazy idea? Do you know if something similar exists? Which technology would you suggest?
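    Multi-master replication is essentially the model CouchDB was built around, so a minimal sketch of what "sync through VPNs" could look like is just a pair of continuous replications per link; the host names, database name and port below are assumptions.

        # run on office 1, pushing to and pulling from office 2 over the VPN
        curl -X POST http://localhost:5984/_replicate \
             -H 'Content-Type: application/json' \
             -d '{"source":"realestate","target":"http://office2.vpn:5984/realestate","continuous":true}'
        curl -X POST http://localhost:5984/_replicate \
             -H 'Content-Type: application/json' \
             -d '{"source":"http://office2.vpn:5984/realestate","target":"realestate","continuous":true}'

    Repeat the pair for each peer an office should sync with; with a half-day tolerance you could also drop "continuous" and run the same calls from cron a few times a day.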

  • What is going on when I can't access an SMB server share (not accessible error) until I run cmdkey to delete the credential?

    - by Warren P
    I have a network share connection issue. The first connection works, and seems to stay connected for at least a few hours. However, after each time my Windows 7 PC reboots, it can no longer connect to or browse the shared folder until I not only unmap and remap the mapped drive, but also use cmdkey to delete the stored credentials, like this:

        cmdkey /delete:Domain:target=HOSTNAME

    My work PC is on a domain and I am not the IT administrator, but I'm curious whether there is anything I can do to investigate this: any settings in the registry or group policy I could examine to see why the first connection works, but each subsequent attempt (once a stored credential exists) to browse or use the connection fails with an error saying the share is "not accessible". I do not even get an error until several minutes go by; the first thing I see is a frozen, empty window, and then the error appears.

    This has happened when connecting to a share on a DROBO device, and to a share which is not on the domain but is on a Microsoft Home Server. I wonder if something is broken in Windows 7 Professional with regard to connecting to non-domain shares when an Active Directory domain controller exists and the workstation is joined to the domain. The problem only occurs if I click "remember credentials". It is not fixed by any amount of working with net use; using cmdkey to delete all stored credentials for the host is the only way to get back in, and it affects all non-domain shared folders.

    Update: I'm hoping there are some registry locations I could check that might be misconfigured in a way that explains why stored SMB/CIFS credentials for non-domain systems seem to be auto-invalidated in this weird way. Knowing how whacko Microsoft Windows domain and security handling is sometimes, this could be some kind of stupid "feature".
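    For poking at the stored credentials themselves, cmdkey can also list and re-add entries, which at least makes the symptom reproducible on demand. This is only a diagnostic sketch, and the user/host names are placeholders; some people work around exactly this symptom by storing the credential explicitly in the form the server expects (HOSTNAME\user rather than DOMAIN\user), so that is worth trying once.

        :: show everything currently in Credential Manager
        cmdkey /list

        :: remove the entry for the problem server
        cmdkey /delete:Domain:target=HOSTNAME

        :: re-add it explicitly with the server-local account
        cmdkey /add:HOSTNAME /user:HOSTNAME\username /pass:PASSWORD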

  • Combining AD permissions with FTP

    - by user64204
    We're using Windows Server 2008 with Active Directory controlling access to a network share. We've set up FTP so that people can access that share from outside (we used to use the PPTP VPN, but for various reasons we need to switch to FTP). So far, here is what we've managed to implement on the FTP side:

    - The network share is used as the FTP root (defined as a UNC path), and that is working fine.
    - AD authentication is working fine (wrong password and you stay out, good password and you're in, password management in AD correctly synced with the FTP).
    - AD permissions are failing: the AD permissions on the content of the FTP root are ignored. A user either has read or write access, but it applies to the whole FTP root, which obviously isn't suitable since that FTP root is our network share and files/folders have different AD permissions depending on people's groups. Whether we set the permissions through the share or the FTP management interface, AD permissions are never enforced.

    Q1: Is that normal?
    Q2: If so, what solutions exist to combine AD permissions with FTP on Windows Server 2008?
    Q3: If not, where should I look to fix the configuration?
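    Assuming this is the IIS 7.x FTP service, FTP authorization rules, share permissions and NTFS ACLs are all evaluated and the most restrictive wins; per-user and per-group differences therefore have to live in the NTFS ACLs on the folders themselves, with the FTP rule just granting the broad group access. A hedged check/repair sketch, with the path and group names as placeholders:

        :: show the effective NTFS ACL on a folder under the FTP root
        icacls "\\fileserver\share\SomeFolder"

        :: grant modify rights to one AD group on that folder only (inherited by children)
        icacls "\\fileserver\share\SomeFolder" /grant "DOMAIN\SalesTeam":(OI)(CI)M

    Also worth checking is whether the FTP site's physical path is configured to connect with a single fixed "connect as" account; in that case every session gets that account's rights instead of the logged-on user's, which would produce exactly the all-or-nothing behaviour described.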

  • How do I count the times each number appears in columns of numbers?

    - by Andy C.
    I am sure this must be easy, but I am inexperienced. The best way to think of my problem is as trying to sort and then count lottery numbers. To stay simple, let's do a Pick 3 game and look at 10 drawings, with each drawn number split into a separate column:

        DATE   BALL#1  BALL#2  BALL#3
        3/1      1       3       5
        3/2      3       7       8
        3/3      2       2       1
        3/4      5       7       6
        3/5      2       3       1
        3/6      0       5       9
        3/7      3       7       0
        3/8      6       8       4
        3/9      2       4       3
        3/10     7       1       2

    I would like to build formulas into cells that tell me how many times each number appeared overall, and how many times it appeared in each position, like this:

        Number  Overall Count  Ball#1 Count  Ball#2 Count  Ball#3 Count
        0            2              1             0             1
        1            4              1             1             2

    (That is, the number zero appears twice overall: once as the first number drawn, zero times as the middle ball, and once as the third ball. Likewise, the number 1 was drawn four times in our 10-day period: it was the first ball once, the second ball once and the third ball twice.) And so on.

    All help appreciated. I have access to Excel and Microsoft Works, or a Google Docs approach would also work. Thanks for any help.
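    A minimal COUNTIF sketch, assuming the layout above with the dates in A2:A11, the drawn balls in B2:D11, and the digits 0-9 listed in F2:F11 (these cell references are assumptions about where you put things):

        G2:  =COUNTIF($B$2:$D$11, $F2)    overall count for the digit in F2
        H2:  =COUNTIF(B$2:B$11, $F2)      count for Ball#1 only

    Fill H2 across to I2 and J2 (the column reference shifts to C and D for Ball#2 and Ball#3), then fill G2:J2 down to row 11. The same formulas work in Google Sheets.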

  • Am I able to forward traffic from an external subdomain to a specific local host?

    - by George Bowman
    I apologise in advance if the question doesn't make sense; please let me know. I've got a small LAN (~10 virtual servers) using Windows Server 2008 as a DNS server, behind a Smoothwall Express 3.0 firewall with ports forwarded for specific services. I have a domain (123-reg) whose nameservers point at afraid.org (dynamic DNS), with subdomains pointed to my (dynamic) IP address, e.g. subdomain1.example.com -> 123.456.789.101. I think that adequately explains my setup.

    My question is: can I have subdomains, e.g. subdomain1.example.com, point only to a specific local host? Like so:

        subdomain1.example.com:80 -> firewall (external facing) -> server1.example.com:80
        subdomain2.example.com:80 -> firewall (external facing) -> server2.example.com:80

    I don't necessarily want to use port 80 (otherwise I would just use VirtualHosts on Apache); it is just an example port. Currently I can use either subdomain1.example.com or subdomain2.example.com and they will both point to server1.example.com:80.

    I do not have to stay with Windows Server 2008 for DNS; I am more than happy to move over to BIND if need be, it was just easier to use the built-in DNS. I do not know if this is even possible (I have a feeling it isn't, as I've only got one external IP address), but any information is useful!
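    With a single external IP, the firewall's port forward can only key on ports, not hostnames, so the usual answer is to forward the port to one internal box that does name-based routing for you. For HTTP that is just a reverse proxy; a hedged Apache sketch (mod_proxy enabled, internal hostnames taken from your example) on whichever server receives the forwarded port 80:

        <VirtualHost *:80>
            ServerName subdomain1.example.com
            ProxyPass        / http://server1.example.com/
            ProxyPassReverse / http://server1.example.com/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName subdomain2.example.com
            ProxyPass        / http://server2.example.com/
            ProxyPassReverse / http://server2.example.com/
        </VirtualHost>

    For protocols that don't carry the hostname in the request (plain RDP, SSH, etc.) this trick doesn't apply; there you are back to using distinct external ports per internal host.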

  • Ignore the non-Unicode programs language when installing software

    - by mitya
    This has been driving me nuts for a while and I haven't been able to find a solution anywhere. I am running Windows 7 and my "Language for non-Unicode programs" setting is set to Russian; I need it for some non-Unicode software that has a Russian UI. However, for most of my software I prefer the English UI.

    A lot of software out there is multilingual and too smart for my liking: when installing, it switches the UI to Russian, and the software stays in Russian after installation with no option to change it, other than setting the non-Unicode language to English (it switches back to Russian once I revert the setting and reboot). Most of the time it is driver software, e.g. Intel, HP, etc.

    How can I force the installation to run in English and stay that way after install, ignoring the "Language for non-Unicode programs" setting? I understand this might be specific to the installer (MSI, InstallShield, etc.), but any solution will be good, even if I have to apply it for every software installation. Thanks in advance for any helpful information!
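    For MSI-based setups there is sometimes a way in from the command line: multilingual packages often ship their translations as embedded transforms named after the Windows LCID, and you can ask msiexec to apply a specific one instead of letting it pick from the system locale. This is a hedged sketch; whether it works depends entirely on the package actually containing a 1033 (en-US) transform.

        :: install with the embedded English (LCID 1033) language transform, if present
        msiexec /i setup.msi TRANSFORMS=":1033"

    For InstallShield and other EXE wrappers there is no universal switch; the vendor's silent-install documentation is the place to look for a language parameter.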

  • How do I keep a table in sync across 4 DBs to be used in SQL replication filtering?

    - by Refracted Paladin
    I have a WinForms data entry application that uses 4 separate databases. It is an occasionally connected app that uses merge replication (SQL 2005) to stay in sync, and that part is working just fine.

    The next hurdle I am trying to tackle is adding filters to my publications. Right now we are replicating 70 MB, compressed, to each of our 150 subscribers when, truthfully, they only need a tiny fraction of that. Using filters I am able to accomplish this (see the code below), but I had to create a mapping table to do so. This mapping table has 3 columns: PrimaryID (guid), WorkerName (varchar) and ClientID (int). The problem is that I need this table present in all FOUR databases in order to use it in the filter since, to my knowledge, views or cross-database queries are not allowed in a filter statement.

    What are my options? It seems like I would maintain it in one database and then use triggers to keep it updated in the other 3. Since it has to be part of the filter, I also have to include that table in the replication set, so how do I flag it appropriately? Is there a better way altogether?

        SELECT <published_columns> FROM [dbo].[tblPlan]
        WHERE [ClientID] IN
            (SELECT ClientID FROM [dbo].[tblWorkerOwnership]
             WHERE WorkerID = SUSER_SNAME())

    Filters can also be chained together; this next one sits below the first one, so it only pulls from the first's filtered set:

        SELECT <published_columns> FROM [dbo].[tblPlan]
        INNER JOIN [dbo].[tblHealthAssessmentReview]
            ON [tblPlan].[PlanID] = [tblHealthAssessmentReview].[PlanID]

    P.S. - I know how illogical the DB structure sounds. I didn't make it; I inherited it and was then told to make it a "disconnected app".
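    If the other three databases live on the same instance, a trigger that simply re-copies the small mapping table is often good enough; a hedged T-SQL sketch, where the database names DB_B/DB_C/DB_D are placeholders and the master copy is assumed to live in the database the trigger is created in:

        CREATE TRIGGER dbo.trg_WorkerOwnership_Sync
        ON dbo.tblWorkerOwnership
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;
            -- brute-force refresh: wipe and re-copy the whole (small) table in each peer DB
            DELETE FROM DB_B.dbo.tblWorkerOwnership;
            INSERT INTO DB_B.dbo.tblWorkerOwnership (PrimaryID, WorkerName, ClientID)
                SELECT PrimaryID, WorkerName, ClientID FROM dbo.tblWorkerOwnership;
            -- repeat the two statements for DB_C and DB_D
        END;

    As for flagging it for replication, the mapping table just becomes one more article in each publication (it is small, so replicating it to every subscriber is cheap), and the copies in the other databases then stay consistent through the trigger rather than through cross-database references in the filter itself.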

  • Getting Started in SuSE as an Ubuntu User

    - by Subhamoy Sengupta
    I am not a Linux newbie, but I haven't touched SuSE in a very long time (last time I tried it, it was SuSE 7!). Now I finally felt like giving it a try, and many things seem strange or unnecessarily complex. I have a series of questions.

    How do I ensure that my packages are up to date? It sounds silly, but I have already tried the obvious methods. I disabled the default repositories that show up when you do zypper lr, and added the Tumbleweed and Packman repositories (Essentials, Multimedia, Extra). Then I did sudo zypper ref --force and sudo zypper dup, and it tells me many dependencies are not met. I have already added solver.allowVendorChange = true to /etc/zypp/zypp.conf, so it should not care which repository the latest versions are in and should just upgrade to them. Even when I chose to skip the packages with unmet dependencies, and quite a bit seemed to happen in the background, I opened Firefox afterwards and the version was 7! I am guessing things did not go as expected. But of course this is not a problem with SuSE; I am just not understanding the system right. How do I do it right?

    When I start typing the arguments of a command, for example sudo zypper install, and I type sudo zypper ins and keep hitting TAB, nothing happens! That always worked in Ubuntu and I feel very uneasy without it. Is this how SuSE is supposed to be? Similarly, when I try to install something and start typing its name, even though the package exists and I am sure of it, hitting TAB does not autocomplete it. This is also quite inconvenient. Why is that?

    There are many things in SuSE that are really great, and I think I will stay with it and not go back to Ubuntu once I settle these rudimentary issues, but right now they are giving me a lot of grief. Please help!
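    On the completion point, argument and package-name completion for zypper comes from the bash-completion package rather than from zypper itself, so a minimal install may simply not have it. A hedged sketch (the profile.d path is the usual one on openSUSE, but it may differ between releases):

        sudo zypper install bash-completion
        # either open a new login shell, or source it into the current one:
        source /etc/profile.d/bash_completion.sh

    After that, subcommands like "ins" should expand with TAB and, depending on the completion script shipped with your release, package names may complete as well.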

  • Family server setup

    - by Manny
    Hi all, I really hope some of you can give me some direction. I have set up a Linux server at home, and through Samba I can access files from different computers in my home. I would like to use this server as a file server for my family (brothers, sisters and parents who all live in their own homes). I really like the way it is set up right now with user and permission controls, but I've read that it is a bad idea to open up the Samba port to the world.

    The requirements are simple:
    1) It should be easy to access, using a standard web browser or by mounting the drive (no VPN setup, PuTTY, etc.).
    2) It should be somewhat secure. We just want to share family pictures instead of putting them on Facebook, Picasa or another web service; nothing top secret.

    Here is what I've looked into:
    1) WebDAV. It seems decent, but Windows 7 doesn't seem to like it very much, even with digest-mode authentication, and the user controls and permissions are not as flexible as Samba's (or at least to my knowledge). I really like the user and group permissions in Samba, but I could live with WebDAV if it worked seamlessly with Windows; it should just work, shouldn't it?
    2) I read somewhere to stay away from FTP, as it is outdated and there are newer and better internet file-server setups. Was that a reference to WebDAV?

    I am so confused, please help... Manny
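    If WebDAV stays on the table, the usual recipe for making Windows 7 cooperate is to serve it over HTTPS with basic authentication, since the built-in client refuses basic auth over plain HTTP by default. A minimal Apache sketch, assuming mod_dav and mod_ssl are enabled and a password file created with htpasswd; the paths and names are placeholders:

        Alias /family /srv/familyshare
        <Location /family>
            DAV On
            SSLRequireSSL
            AuthType Basic
            AuthName "Family pictures"
            AuthUserFile /etc/apache2/webdav.passwd
            Require valid-user
        </Location>

        # create accounts with:
        #   htpasswd -c /etc/apache2/webdav.passwd mum

    Group-level control is possible too (Require user/group against a group file), just coarser than Samba's permissions. A small gallery application over HTTPS would cover the browser-only relatives, and SFTP covers the "mount a drive" case for anyone willing to install a client.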

  • To what extent is size a factor in SSD performance?

    - by artif
    To what extent is the size of an SSD a factor in its performance? In my mind (correct me if I'm wrong) a bigger SSD should be, everything else being equal, faster than a smaller one. A bigger SSD would have more erase blocks and thus more leeway for the FTL (flash translation layer) to do garbage-collection optimization, and there would be more time before TRIM became necessary. Wikipedia remarks that "The performance of the SSD can scale with the number of parallel NAND flash chips used in the device", so it seems throughput also increases significantly. Also, many SSDs contain internal caches of some sort, and presumably those caches are larger on correspondingly large SSDs.

    But supposing this effect exists, I would like a quantitative analysis. Does throughput increase linearly? How much is garbage collection impacted, if at all? Does latency stay the same? And so on. Would the performance of an 8 GB SSD be significantly different from, for example, an 80 GB SSD, assuming both used high-quality chips, controllers, etc.?

    Are there any resources (web pages, research papers, presentations, books, etc.) that discuss correlations between SSD performance (4 KB random write speed, latency, maximum sequential throughput, etc.) and size? I realize this does not really sound like a programming question, but it is relevant to what I'm working on (using flash for caching hard drive data), which does involve programming. If there is a better place to ask this question, e.g. a more hardware-oriented site, what would that be? Something like the equivalent of Stack Overflow (or perhaps a forum) for in-depth questions on hardware interfaces, internals, etc. would be appreciated.

  • GIT Website Deployment

    - by Brian
    I am attempting to set up Git to deploy my project to different locations based on the branch (I think this is what I want to do, anyway). My current setup is this:

    - Local dev machine running NetBeans to make changes.
    - Remote server hosting the Git projects (the same server runs Apache); two subsites exist, test.FQDN.com and live.FQDN.com.

    What I would like to do is have one Git project (MyProject) and create a new feature branch. Any commits to the feature branch would be deployed to test.FQDN.com. Once the features have been tested and merged into the master branch, the result would be deployed to live.FQDN.com.

    I have looked at Git's post-receive hooks and was able to use git checkout -f to deploy to the test.FQDN.com site, however that only pulls the master branch and not the new feature branch. I do not have any funding to use a third party to make this work, and would prefer to stay within Git, but I have full root access to the web server if there is a package to install which would help control this. Any suggestions would be great!
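    A single post-receive hook can branch on the ref that was pushed and check each one out into a different document root; a minimal sketch, assuming a bare repository on the server and that the two docroots below are writable by the user you push as (the paths and the feature-branch naming convention are placeholders):

        #!/bin/sh
        # MyProject.git/hooks/post-receive  (make it executable)
        while read oldrev newrev ref; do
            branch=${ref#refs/heads/}
            case "$branch" in
                master)
                    GIT_WORK_TREE=/var/www/live.FQDN.com git checkout -f master
                    ;;
                feature/*)
                    GIT_WORK_TREE=/var/www/test.FQDN.com git checkout -f "$branch"
                    ;;
            esac
        done

    The hook runs inside the bare repo, so GIT_WORK_TREE is the only thing that changes per target; if you prefer, a single hard-coded test branch name can replace the feature/* pattern.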

  • Wake a Mac display from sleep via SSH

    - by MaxGabriel
    I'm using Jenkins as a CI server, where I'm SSHing into an iMac running OS X Mountain Lion (10.8.4) to run some UIAutomation integration tests on an iOS app. The iMac actually sits 10 ft from me (but across a table), so I'm able to see the screen. However, the tests don't wake up the display, so I often can't see them. Is there a way to wake up the display from the terminal once Jenkins has SSHed in?

    So far I have tried using AppleScript to press an arrow key, and using the Wake Assist application. I also tried setting the wake schedule to the current date. Finally, I tried the caffeinate command: caffeinate -t 300 &. The computer's "Wake for Wi-Fi access" checkbox is enabled.

    So far my best workaround is to set the iMac to stay awake for at least 3 hours. However, it would be nice to keep normal sleep behavior, as I suspect the screen waking from sleep would alert me visually that the integration tests are running. It's also significantly cooler :)
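    One detail worth trying: caffeinate only prevents sleep unless you also pass -u, which asserts user activity and (per its man page) turns the display on if it is off. A minimal sketch for the Jenkins job, run over the SSH session before the tests start:

        # wake the display and hold the assertion for 5 minutes while the tests spin up
        caffeinate -u -t 300 &

    If the tests run longer than that, wrapping the test command itself (caffeinate -u -i <test command>) may be the tidier option, since the assertion then lasts for the duration of the run.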

  • Why is my cron daemon being killed every few minutes?

    - by user113215
    As of about a week ago, my cron daemon refuses to stay running. I'm using Debian 6 x64 on an OpenVZ virtual machine. Running something like pgrep cron shows that the daemon isn't running. I start the service with service cron start or /etc/init.d/cron start and it launches, but it disappears from the running process list after a few minutes (anywhere between 1 and 30 minutes before the process is killed again). Using strace -f service cron start, I can see that the process is being killed for some reason:

        nanosleep({60, 0}, <unfinished ...>
        +++ killed by SIGKILL +++

    There's nothing relevant in /var/log/syslog, /var/log/messages, /var/log/auth.log or /var/log/kern.log to explain why the process is dying. The system has at least 800 MB of free memory, and cat /proc/loadavg returns 0.22 0.13 0.04, so resources shouldn't be the issue. With cron running, free -m reports:

                         total       used       free     shared    buffers     cached
        Mem:              1024        211        812          0          0          0
        -/+ buffers/cache:            211        812
        Swap:                0          0          0

    I also tried removing and reinstalling the cron package using apt-get.

    Update: I initially thought the problem was a resource issue, so I erased my entire VPS and started from a fresh Debian image. There is now nothing else running on the system, but even from a clean install my cron daemon is still being killed at random. What else should I check? How do I find out what's killing my crond?
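    Two angles may be worth checking. On OpenVZ the killer often sits outside the container entirely (the host's resource limits), and the per-beancounter failcnt column shows whether a limit is being hit; inside the container, the audit subsystem can at least identify a local sender of SIGKILL. A hedged sketch, assuming auditd is installable in the container (it sometimes isn't on OpenVZ):

        # any non-zero failcnt here points at the host's resource limits
        cat /proc/user_beancounters

        # log every kill() that delivers signal 9, then wait for cron to die
        apt-get install auditd
        auditctl -a exit,always -F arch=b64 -S kill -F a1=9 -k cron_kill
        ausearch -k cron_kill -i

    If auditd cannot run in the container and the beancounters are clean, the next suspect is the host node itself, which only the VPS provider can inspect.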

  • 100% uptime for a web application

    - by Chris Lively
    We received an interesting "requirement" from a client today. They want 100% uptime with off-site failover on a web application.

    From our web application's viewpoint, this isn't an issue; it was designed to scale out across multiple database servers, etc. However, from a networking standpoint I just can't seem to figure out how to make it work. In a nutshell, the application will live on servers within the client's network. It is accessed by both internal and external people. They want us to maintain an off-site copy of the system that, in the event of a serious failure at their premises, would immediately pick up and take over.

    Now, we know there is absolutely no way to resolve it for internal people (carrier pigeon?), but they want the external users to not even notice. Quite frankly, I haven't the foggiest idea how this might be possible. It seems that if they lose Internet connectivity then we would have to do a DNS change to forward traffic to the external machines... which, of course, takes time. Ideas?

    Update: I had a discussion with the client today and they clarified the issue. They stuck by the 100% number, saying the application should stay active even in the event of a flood. However, that requirement only kicks in if we host it for them; they said they would handle the uptime requirement if the application lives entirely on their servers. You can guess my response.

  • No sound out of headphone port

    - by Thanatos
    I cannot get sound out of the headphone port: headphones are plugged in, but sound still comes out of the internal speakers. Windows behaves normally (sound switches to the headphones when they are inserted). It did work in Linux at one point, but something changed and we're not sure what. Rebooting doesn't fix it, and it happens whether or not PulseAudio is running.

    Things I've tried:
    - Rebooting. No effect.
    - Booting into Windows. It works properly, so probably not a hardware issue.
    - All of alsamixer. My only controls are:
        Master: volume bar and mute switch, unmuted; controls the volume.
        PCM: volume bar only, at 100%.
        S/PDIF: mute switch only, currently muted; has no effect.
        S/PDIF Default PCM: mute switch only, currently unmuted; has no effect.
    - Killing PulseAudio. No effect. (It also won't stay dead! Something keeps restarting it and I can't tell what, which is annoying as fuck.)
    - alsactl init 0: no effect.
    - sudo rm -f /var/lib/alsa/asound.state: no effect.

    General system info:
    - Ubuntu 10.04 LTS
    - Toshiba Satellite T135D-S1324
    - lspci says I have:
        00:14.2 Audio device: ATI Technologies Inc SBx00 Azalia (Intel HDA)
        01:05.1 Audio device: ATI Technologies Inc RS780 Azalia controller
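    When Windows routes the jack correctly but ALSA does not, a common workaround on hardware of this era is to force a codec "model" quirk for snd-hda-intel. This is only a hedged sketch: the right model string depends on your codec, the kernel's HD-Audio-Models.txt lists the candidates, and the value below is a placeholder to replace.

        # tell the HDA driver to use a specific jack-switching model (placeholder value)
        echo "options snd-hda-intel model=MODEL_FROM_HDA_MODELS_LIST" | \
            sudo tee /etc/modprobe.d/headphone-fix.conf
        # reload ALSA (or reboot) for the option to take effect
        sudo alsa force-reload

    If no model helps, capturing the codec state with alsa-info.sh and comparing the headphone pin's state before and after plugging in is the usual next diagnostic step.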

  • How to remotely connect using perfmon?

    - by user36914
    I'm surprised there is not a ton of information on Google when I search for this, but there isn't: lots of people asking the question, but no good answers.

    I have a remote computer running Hyper-V ("server") with a Windows 7 x64 guest ("guest"). Occasionally I won't be able to remote desktop to the guest. I will then remote to the server and see that the guest instance is constantly using about 25% of the CPU. When I try to connect directly from the server I get the login screen, but as soon as I type the password it just stays at the Windows 7 login screen; the account names disappear and it never logs in. It still responds to pings, though. I don't know how else to diagnose this other than trying to run perfmon remotely. It only happens about every 3 weeks and the guest runs 24/7.

    So I'm trying to run perfmon remotely. I tested this against a local VM I have running under VMware. When I try to connect to my local VM using perfmon, I get this error: "When attempting to connect to the remote computer, the following system error occurred: The network path was not found." I found in another post that I should start the Remote Registry service, but when I start that service I get this error: "No such interface supported."

    Anyway, how do I remotely connect to another machine with perfmon? Or, if anyone has a better idea of how to diagnose the problem above, let me know.
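    For the "network path was not found" error specifically, remote perfmon needs the Remote Registry service running on the target plus the right firewall rule groups open. A hedged sketch to run in an elevated prompt on the guest; the group names are the Windows 7 defaults and may differ on localized installs:

        :: start Remote Registry now and on every boot
        sc config RemoteRegistry start= auto
        sc start RemoteRegistry

        :: open the firewall groups remote perfmon relies on
        netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=Yes
        netsh advfirewall firewall set rule group="Performance Logs and Alerts" new enable=Yes

    That said, if the guest is wedged at the login screen it may not answer SMB at all at that point, in which case a local Data Collector Set on the guest, logging counters continuously so you can read them after the fact, is the more reliable approach.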

  • How can I inject clips [ads] in a playlist every x minutes (even if previous clip is not over)

    - by azera
    Sorry for the title; I find it quite hard to describe what I want to do in a single sentence.

    We have a few TVs here linked to our computer, which we use to display clips about what we do in our sport center, what you should do to stay in good form, etc. It runs non-stop, currently through VLC with a playlist. The main clips are 1-2 hours long and we have about 20 of them looping randomly all day.

    We would like to inject ads for some of our sponsored products into the playlist every now and then, say one ad every 15 minutes. Does anyone know how we can do that while keeping the main clips in random order? I thought about encoding the whole thing as a single movie with the ads inserted, but then it's not random. We could put the ads in the playlist itself, but the clips are several hours long and we want the ads more often than that. Cutting the main clips into several pieces would sort of work, but that kind of sucks as new clips are made every month. If anyone has an idea, thanks a lot.
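    One way to sketch this is to drive VLC from outside through its remote-control interface and have cron push the ad in every 15 minutes. The port, paths and the nc invocation below are assumptions, and there is a real limitation: the "add" command interrupts the current clip and the playlist then moves on rather than resuming the interrupted clip where it left off.

        # start VLC with the RC interface listening on a local TCP port
        vlc --extraintf rc --rc-host 127.0.0.1:4212 --loop --random /path/to/clips/*.mp4

        # crontab entry: every 15 minutes, play the ad immediately
        */15 * * * * echo "add /path/to/ads/sponsor1.mp4" | nc -q 1 127.0.0.1 4212

    Using "enqueue" instead of "add" is gentler (the ad plays after the current clip ends), which may be an acceptable compromise if interrupting mid-clip looks too jarring on the TVs.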

  • Which is the best smart automatic file replication solution for cloud-storage-based systems?

    - by TORr0t
    I am looking for a solution for a project I am working on. We are developing a web system where people can upload their files and other people can download them (similar to the rapidshare.com model). The problem is that some files are in much higher demand than others.

    The scenario is this: I upload my birthday video to myproject.com and share it with all of my friends; it gets stored on one of the storage nodes, which has a 100 Mbit connection. Once all of my friends want to download the file, they can't, because the bottleneck is that 100 Mbit link, which is about 15 MB per second; with 1000 friends downloading, each only gets about 15 KB per second. (I am not even taking into account that the same disk is serving the file.)

    My network infrastructure is as follows: a 1 Gbit client-facing server connected to 4 storage nodes that each have a 100 Mbit connection. The 1 Gbit server can handle the traffic of 1000 users if a storage node can stream more than 15 MB per second to it, with visitors streaming directly from the client-facing server instead of from the storage nodes. I can do that by replicating the file onto 2 nodes, but I don't want to replicate every file uploaded to my network, since that would cost much more.

    So I need a cloud-style system that automatically pushes a file onto replicated nodes when demand for it is high, and deletes the extra copies when demand drops so it stays on only 1 node. I have looked at Gluster and asked in their IRC channel; Gluster can't do such a thing - it can only replicate all of the files or none of them - but I need the cluster software to do it automatically. Any solutions? (Other than recommending Amazon S3.)

  • TCP Keepalive and firewall killing idle sessions

    - by Carlos A. Ibarra
    In a customer site, the network team added a firewall between the client and the server. This is causing idle connections to get disconnected after about 40 minutes of idle time. The network people say that the firewall doesn't have any idle connection timeout, but the fact is that the idle connections get broken.

    In order to get around this, we first configured the server (a Linux machine) with TCP keepalives turned on, with tcp_keepalive_time=300, tcp_keepalive_intvl=300 and tcp_keepalive_probes=30000. This works, and the connections stay viable for days or more. However, we would also like the server to detect dead clients and kill the connection, so we changed the settings to time=300, intvl=180, probes=10, thinking that if the client was indeed alive, the server would probe every 300s (5 minutes), the client would respond with an ACK, and that would keep the firewall from seeing this as an idle connection and killing it. If the client was dead, after 10 probes the server would abort the connection.

    To our surprise, the idle but alive connections get killed after about 40 minutes, as before. Wireshark running on the client side shows no keepalives at all between the server and client, even when keepalives are enabled on the server. What could be happening here? If the keepalive settings on the server are time=300, intvl=180, probes=10, I would expect that if the client is alive but idle, the server would send keepalive probes every 300 seconds and leave the connection alone, and if the client is dead, it would send one after 300 seconds, then 9 more probes every 180 seconds before killing the connection. Am I right?

    One possibility is that the firewall is somehow intercepting the keepalive probes from the server and failing to pass them on to the client, and the fact that it got a probe makes it think that the connection is active. Is this common behavior for a firewall? We don't know what kind of firewall is involved.

    The server is a Teradata node and the connection is from a Teradata client utility to the database server, port 1025 on the server side, but we have seen the same problem with an SSH connection, so we think it affects all TCP connections.
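    A first check that narrows things down: the kernel only sends keepalive probes on sockets where the application has enabled SO_KEEPALIVE; the tcp_keepalive_* sysctls merely set the timing for those sockets. Seeing no probes at all in Wireshark is consistent with the option not being set on the connections being watched, so it is worth confirming what the kernel is actually doing. A hedged sketch on the server (ss filter syntax assumed available; older boxes can use netstat -to instead):

        # connections on port 1025 with their timers; a "timer:(keepalive,...)" entry
        # means SO_KEEPALIVE is actually enabled on that socket
        ss -tno state established '( sport = :1025 )'

        # confirm the kernel defaults actually in force
        sysctl net.ipv4.tcp_keepalive_time net.ipv4.tcp_keepalive_intvl net.ipv4.tcp_keepalive_probes

    The SSH case is easier to pin down separately, since sshd's TCPKeepAlive option controls the socket-level behaviour explicitly, and ServerAliveInterval/ClientAliveInterval generate application-level traffic that passes through a firewall like any ordinary data.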

  • Shoretel Upgrade Path

    - by Brian
    I currently have a ShoreTel server running Server 2003 x32 as a virtual machine, paired with a ShoreGear 90 switch and another unused switch of the same model reserved for manual failover. I am getting the software mailed to me from my partner, but with limited support, since the server is in a relatively remote area.

    I am tempted to upgrade the OS at the same time as performing the upgrade, but want to know if there are any horror stories or advice I should hear before diving in. I'm upgrading from the ShoreTel 9.2 build; I will be upgrading first to version 10.1 and finally to 11.1. The system has been bulletproof since it was installed, and we are upgrading mainly to get a client that is a little more modern.

    My question boils down to: should I even bother with an OS upgrade, or possibly do a fresh install of Windows with ShoreTel 11.1 and just transfer the configuration? Or should I just stay with Server 2003, since it is supported in my target version of ShoreTel and the upgrade will be more than enough to keep me busy as a novice?

  • .htaccess hacked - I've deleted the code and file - what next?

    - by user1762595
    My website was hacked recently. I think I've found the code that was added to the .htaccess file, deleted it, and then added a rule to prevent the .htaccess file from being accessed again. I've also deleted the PHP file that the hacked code refers to (common.php). What do I need to do next? I'm not a programmer or website developer, but I really wanted to see if I could fix the problem myself; I've spent quite a few hours trying and don't give up easily.

    Here is the hacked code that I deleted:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{HTTP_USER_AGENT} (google|yahoo) [OR]
        RewriteCond %{HTTP_REFERER} (google|yahoo)
        RewriteCond %{REQUEST_URI} /$ [OR]
        RewriteCond %{REQUEST_FILENAME} (shtml|html|htm|php|xml|phtml|asp|aspx)$ [NC]
        RewriteCond %{REQUEST_FILENAME} !common.php
        RewriteCond /home/httpd/vhosts/bluestardive.com/httpdocs/common.php -f
        RewriteRule ^.*$ /common.php [L]
        </IfModule>

    The following code has to stay in the .htaccess file because it redirects my URLs to SEO-friendly ones (the website errors without it), but has this code been hacked as well?

        # Apache search queries statistic module
        RewriteEngine On
        AddHandler php5-fastcgi .php .php5
        # <contrexx>
        # <core_modules__alias>
        RewriteRule ^about-us$ /index.php?page=883 [L,NC]
        RewriteRule ^ausfluge-und-aktivitaten$ /index.php?page=800 [L,NC]
        RewriteRule ^bluestardive-news$ /index.php?page=919 [L,NC]
        RewriteRule ^bookings$ /index.php?page=911 [L,NC]
        RewriteRule ^diveresort$ /index.php?page=879 [L,NC]
        RewriteRule ^diving$ /index.php?page=880 [L,NC]
        RewriteRule ^excursions-and-activities$ /index.php?page=881 [L,NC]
        RewriteRule ^galerie$ /index.php?section=gallery [L,NC]
        RewriteRule ^oceannight$ http://www.bluestardive.com/index.php?page=906 [L,NC]
        RewriteRule ^philosophy$ /index.php?page=846 [L,NC]
        RewriteRule ^reservation$ /index.php?page=917 [L,NC]
        RewriteRule ^reservierung$ /index.php?page=918 [L,NC]
        RewriteRule ^resort$ /index.php?page=798 [L,NC]
        # </core_modules__alias>
        # </contrexx>

    Many thanks for any help. Claire
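    A hedged sketch of sensible next steps, not a complete clean-up procedure; the path comes from the rewrite rule above, and the Deny syntax assumes Apache 2.2:

        # look for other recently changed PHP files the attacker may have dropped
        find /home/httpd/vhosts/bluestardive.com/httpdocs -name '*.php' -mtime -30 -ls

        # block direct HTTP access to .htaccess/.htpasswd (add to the vhost or .htaccess):
        #   <FilesMatch "^\.ht">
        #       Order allow,deny
        #       Deny from all
        #   </FilesMatch>

    Beyond that, change the FTP and hosting control panel passwords, since stolen FTP credentials are one of the most common ways this kind of .htaccess injection gets in, and check for any CMS updates, as the comments suggest the site runs Contrexx.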

  • ADSL2+ - High sync-rate, good line attenuation, but low noise margin and slow speeds

    - by Mark Pim
    I've been with my ISP (IdNet) for a few months and have been getting some good speeds, but in the last week the speed has dramatically decreased (from 15 Mbps+ to around 0.2 Mbps). This happens at all times of day, not just peak periods. Obviously I've done all I can to isolate problems at my end: only one PC is connected to the router (via ethernet cable), no background programs are using the network, and so on. I've raised the issue with the ISP and they've suggested trying a new ADSL filter to see if that is causing the problem, but I thought I would also get the opinion of Super User on possible causes and other troubleshooting I can do.

    Here are the juicy stats. My router (Netgear DGN1000) reports:

                           Downstream    Upstream
        Connection Speed   17602 kbps    1062 kbps
        Line Attenuation   17.9 dB       8.6 dB
        Noise Margin       6.0 dB        6.1 dB

    I used RouterStats and those figures seem to stay fairly consistent all the time. I ran the BT speed test and it reported:

        Download speed: 164 kbps, out of a maximum achievable 21000 kbps
        Upload speed:   859 kbps, out of 1048 kbps
        DSL connection rate: 17719 kbps down, 1048 kbps up
        IP profile: 15000 kbps

    Is there any more troubleshooting I can do? Does this look like a problem with my equipment/wiring or with BT's line? Any advice would be great :)

  • What user information is exposed via a browser?

    - by ipso
    Is there a function or website that can collect and display ALL of the user information that can be obtained via a browser?

    Background: this of course does not account for the significant cross-referencing abilities of large corporations to collate multiple sources and signals from users across various properties, but it's a first step. Ghostery is a great idea: it shows people all of the surreptitious scripts that run on any given website. But what information is available, what is the total set of values stored, that those scripts can collect from? If you log in to a search engine and stay logged in but leave its tab, is that company still collecting your page views and activity from other tabs? Can past or future inputs to pages be captured, say comments on another website? What types of activities are stored as variables in the browser that can be collected?

    This is surely a highly complex question, given the countless user scenarios, but my whole point is to cut through all that and just show the total set of data available at any given point in time. Then you can A/B test and see what is available in a fresh session with one tab open versus the same webpage with 12 tabs open and a full day of history to boot.

    (Latest Firefox and Chrome, on Windows 7, Windows 8 or Mint 13, although I'd like to think that won't make too much of a difference. Make assumptions. Simple is better.)
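    As a concrete starting point, anything a page's scripts can read synchronously through the standard DOM APIs is fair game; a small sketch (by no means exhaustive, and fingerprinting sites such as the EFF's Panopticlick combine many more signals than this):

        // run in any page's developer console
        console.log(navigator.userAgent, navigator.language, navigator.platform);
        console.log(screen.width, screen.height, screen.colorDepth);
        console.log(new Date().getTimezoneOffset(), navigator.cookieEnabled);
        console.log(navigator.plugins.length, document.referrer, history.length);

    What a script cannot see directly is your activity in other tabs or on other sites; that kind of cross-site tracking relies on the same third-party script or cookie being embedded on both sites, which is exactly what Ghostery enumerates.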

  • IP Masquerade and forwarding

    - by poelinca
    Hi all, I have a dedicated server running Ubuntu Server 10.10 with 3 IP addresses on the same ethernet card (for example eth0 = 192.168.0.1, eth0:0 = 188.78.45.0, eth0:1 = ...) and 3 virtual machines running (the virtualization technology used is LXC, but I don't think that matters much).

    Now I need to redirect all open ports (I use ufw to open/close ports) from the IP on eth0:0 to a virtual machine's IP (let's say 192.168.2.3), and all requests made by that virtual machine should go back out through the same address. Let's say the second VM has the IP 192.168.2.4; I then need to redirect all open ports on eth0:1 to that IP and vice versa, and so on. What are the iptables/ufw rules to get this done, and where do I save them (which config file) so they stay the same after a reboot?

    In a few words: redirect all requests coming to/from eth0:0 to a certain IP, all requests coming to/from eth0:1 to another IP, and so on. Remember, I say all open ports because they might change dynamically.
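    A hedged iptables sketch of the usual 1:1 NAT pattern; the public address below is a placeholder for whatever actually sits on eth0:0, and the same pair of rules is repeated for eth0:1 and the second container:

        # make sure the host forwards packets at all
        sysctl -w net.ipv4.ip_forward=1

        # everything arriving for the first public IP goes to the first container
        iptables -t nat -A PREROUTING  -d 188.78.45.10 -j DNAT --to-destination 192.168.2.3
        # and the container's outbound traffic leaves stamped with that same public IP
        iptables -t nat -A POSTROUTING -s 192.168.2.3  -j SNAT --to-source 188.78.45.10

        # persist across reboots
        iptables-save > /etc/iptables.rules
        # then restore them at boot, e.g. from /etc/rc.local or an if-up script:
        #   iptables-restore < /etc/iptables.rules

    Since every port is forwarded wholesale, ufw on the host still decides which ports are actually reachable; alternatively, the iptables-persistent package (if available for your release) can handle the save/restore step for you.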
