Search Results

Search found 18749 results on 750 pages for 'komodo edit'.


  • RewriteRule causes POST data to get dumped before I can access it

    - by MatthewMcGovern
    I'm currently setting up my own 'webserver' (an Ubuntu Server on some old hardware) so I can have a mess around with PHP and get some experience managing a server. I'm using my own little MVC framework and I've hit a snag... In order for all requests to make it through the dispatcher, I am using:

        <Directory /var/www/>
            RewriteEngine On
            RewriteCond %{REQUEST_URI} !\.(png|jpg|jpeg|bmp|gif|css|js)$ [NC]
            RewriteRule . HomeProjects/index.php [L]
        </Directory>

    which works great. I read on Stack Overflow to change the [L] to [P] to preserve POST data. However, this causes every page to return:

        Not Found
        The requested URL <url> was not found on this server.

    So after some more searching, I found, "Note that you need to enable the proxy module, and the proxy_http_module in the config files for this to work." The problem is, I have no idea how to do this, and everything I google has people using examples with virtual hosts, and I don't know how to 'translate' that into something useful for my setup. I'm accessing my webserver via my public IP and forwarding traffic on port 80 to the web server (like I'm pretending I have a domain/server). How can I get this enabled and get POST data working again?

    EDIT: When I use the following, the server never responds and the page loads indefinitely:

        LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
        LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so

        <Directory /var/www/>
            RewriteEngine On
            RewriteCond %{HTTP_REFERER} !^http://(.+\.)?82\.6\.150\.51/ [NC]
            RewriteRule .*\.(jpe?g|gif|bmp|png|jpg)$ /no-hotlink.png [L]
            RewriteCond %{REQUEST_URI} !\.(png|jpg|jpeg|bmp|gif|css|js)$ [NC]
            RewriteRule . HomeProjects/index.php [P]
        </Directory>
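    For reference, on Debian/Ubuntu Apache installations the two modules are normally enabled with a2enmod rather than hand-written LoadModule lines; a minimal sketch, assuming the stock apache2 packaging:

        sudo a2enmod proxy proxy_http    # creates the symlinks under mods-enabled/
        sudo service apache2 restart     # reload so the [P] flag has a proxy backend to hand off to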

    Read the article

  • Concurrent users with Quickbooks?

    - by airietis
    I work in a company with 3 people who regularly use the same QuickBooks file. However, they work remotely on different networks. I need to implement a solution that allows all three of us to access QuickBooks at the same time remotely (and each make changes at the same time). We have a spare desktop PC that can be utilized as a server. So, my question is: what is the cheapest and most hassle-free solution to this problem? I've considered using application cloud hosting, however, it is very expensive ($40 per user per month) and we are on a tight budget. Is it possible to install QuickBooks on my own server, and have them connect to it remotely? If so, what is the best way to accomplish this? Remote Desktop Protocol? Or is there a built-in feature for this with QuickBooks Premier 2013? EDIT: As MDMarra mentioned, I am looking for a solution that offers true simultaneous access. Will using a dedicated server and having users connect to a VPN be a viable solution?

    Read the article

  • Multi-partition USB stick

    - by nightcracker
    In my freelance job as "the dude that fixes your computer" I have an extremely handy tool, a bootable USB stick with an Ubuntu LiveCD that allows me to recover and investigate in a known, working environment. Now, I want to reformat this USB stick and reinstall with casper-rw persistence. I did this a few times before with a FAT-formatted USB stick. It was a horror. The USB drive corrupted constantly: people accidentally removing the USB stick, the computer not shutting down properly, etc. Now I want to create a multi-partition USB stick so I can put Ubuntu on an ext partition, but still be able to store some Windows stuff on it, by having a secondary FAT partition. However, I read somewhere that Windows will only check the first partition on USB sticks, which is a problem when the first partition is the bootable Linux one. Is this possible in some way? EDIT Perhaps it wasn't clear what the problem is. The problem is that I read somewhere that Windows will only recognize the first partition on a USB stick. But I want two partitions, an ext partition and a FAT partition. No issues so far, but in order to be bootable the ext partition must be the first one!
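    For reference, a minimal parted sketch of the layout described above (ext first so it can boot, FAT second); /dev/sdX and the 4 GiB split are placeholders, and whether Windows will actually show the second partition is exactly the open question:

        sudo parted /dev/sdX --script mklabel msdos \
            mkpart primary ext4 1MiB 4GiB \
            mkpart primary fat32 4GiB 100%
        sudo mkfs.ext4 /dev/sdX1          # Ubuntu system plus casper-rw persistence
        sudo mkfs.vfat -F 32 /dev/sdX2    # FAT partition for the Windows-side files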

    Read the article

  • Our client's site is redirecting to a scammy pill site [closed]

    - by Alex Demchak
    Possible Duplicate: My server's been hacked EMERGENCY
    We usually host our clients' sites, but we aren't hosting this one. The website itself (weddle-funeral.com) works just fine. If you load Google and search for "weddle funeral stayton oregon" and click that link, the site links to a scammy pill site. I went through the site and there were some PHP files in the WordPress plugins that got quarantined by my antivirus. I removed ALL non-essential files and uploaded fresh versions of all the plugins, but it's STILL redirecting from Google. I tried logging in to the cPanel (on a virtual private server), and the cPanel flashed a red warning screen:

        The site's security certificate is not trusted!
        You attempted to reach XXXXX.com, but the server presented a certificate issued by an entity
        that is not trusted by your computer's operating system. This may mean that the server has
        generated its own security credentials, which Google Chrome cannot rely on for identity
        information, or an attacker may be trying to intercept your communications. You should not
        proceed, especially if you have never seen this warning before for this site.

    (Keep in mind, that's for the HOSTING account's cPanel.) Is there something on the SERVER, probably, that's causing the redirect?

    EDIT: .htaccess file contents:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress
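    One hedged way to hunt for the injected code (referrer-conditional redirects like this usually live in PHP rather than .htaccess); the path is a placeholder and the patterns are only the usual suspects:

        grep -rn --include='*.php' -E 'eval\(|base64_decode\(|gzinflate\(' /home/account/public_html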

    Read the article

  • Internet compression proxy for low speed broadband?

    - by user23150
    I live in a rural location, using high-latency wireless off a local ISP's tower. My speed tests vary day to day, but I can get around 1Mb up/down. The problem is, I work with large files, uploading and downloading (HD videos, development software, etc.). It can be painful to wait sometimes. Plus I do some side contract game development, and it can be very difficult to playtest with other developers (200ms ping is a good day for me). Now, obviously it's not going to be easy to solve the latency problem without different wireless hardware. But speed-wise, I am wondering if I can use some kind of compression technology on a proxy. For instance, my work computer has full access to a 26Mb down, 10Mb up connection that is totally unused at night and on weekends. If I could run some kind of compression technology on our server, and use it as a proxy to route to my home computer, I could stand to gain some major speed. I realize that by bogging down a system with compression, I could potentially lose whatever speed gain I had. But the proxy server is a quad-core Xeon, and the receiving computer is a pretty decent i7, so that shouldn't be a concern. I found http://toonel.net/ but it seems more geared toward very slow narrowband users, like dial-up. Plus, I would prefer to just be able to point my browser at a proxy server, rather than install software on my client machine. EDIT I thought about my question a little more, and realize I am going to need to install software on my client in order to decompress, and possibly compress (for uploading). That's not a huge deal.
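    One hedged sketch of the point-the-browser-at-a-proxy idea, assuming SSH access to the fast work machine (the host name is a placeholder): an SSH dynamic forward gives a compressed SOCKS proxy, so the only client-side software needed is ssh itself. Note that the compression only helps with compressible data; already-compressed video won't shrink much.

        ssh -C -D 1080 user@work-server    # -C compresses the tunnel, -D 1080 opens a local SOCKS proxy
        # then point the browser's SOCKS proxy setting at localhost:1080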

    Read the article

  • What's the safest way to kick off a root-level process via cgi on an Apache server?

    - by MartyMacGyver
    The problem: I have a script that runs periodically via a cron job as root, but I want to give people a way to kick it off asynchronously too, via a webpage. (The script will be written to ensure it doesn't run overlapping instances or such.) I don't need the users to log in or have an account; they simply click a button and if the script is ready to be run it'll run. The users may select arguments for the script (heavily filtered as inputs), but for simplicity we'll say they just have the button to press. As a simple test, I've created a Python script in cgi-bin. chown-ing it to root:root and then applying "chmod ug+" to it didn't have the desired results: it still thinks it has the effective group of the web server account... from what I can tell this isn't allowed. I read that wrapping it with a compiled cgi program would do the job, so I created a C wrapper that calls my script (its permissions restored to normal) and gave the executable root ownership and the setuid bit. That worked... the script ran as if root ran it. My main question is: is this normal (the need for the binary wrapper to get the job done), and is this the secure way to do this? It's not world-facing, but still, I'd like to learn best practices. More broadly, I often wonder why a compiled binary is more "trusted" than a script in practice? I'd think you'd trust a file that was human-readable over a cryptic binary. If an attacker can edit a file then you're already in trouble, more so if it's one you can't easily examine. In short, I'd expect it to be the other way 'round on that basis. Your thoughts?
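    A hedged alternative to the setuid wrapper (not from the question, and the user and path names are assumptions): let the web server account run just that one script via sudo, so nothing needs the setuid bit and the allowed command stays visible in one place.

        # /etc/sudoers.d/run-periodic  (edit with: visudo -f /etc/sudoers.d/run-periodic)
        www-data ALL=(root) NOPASSWD: /usr/local/sbin/periodic-task.sh
        # the CGI handler then invokes: sudo /usr/local/sbin/periodic-task.sh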

    Read the article

  • Many clients on a wireless AP for UDP broadcast packets

    - by distorteddisco
    I asked this question on Stack Overflow and was directed over here, so I'd appreciate any advice. I'm deploying a smartphone application as part of a live music performance that depends on receiving UDP broadcast packets from a wireless access point. I'm guessing that between 20 and 50 clients will be connected at any one time. I'm aware that a maximum of 20 clients per access point is advised, but as the UDP packets are broadcast across the LAN, how would I be able to link multiple APs together? I'm looking for recommendations on a suitable AP for this. The actual data transmission rates are very low - only a few kB/s - as I'm just sending small messages to the smartphone apps, and there will be no WAN internet connection. I tried it with a few connected peers on an ad-hoc wireless connection without any problems, but ran into dropped-packet issues on an old WRT54G running DD-WRT, though it's in pretty rough shape. What's the best way to do this? I suppose I could limit concurrent wireless connections to 20 clients... but more would be nice. EDIT: I should also say that it's purely one-way communication; the smartphone application is only receiving broadcast packets, not sending anything.

    Read the article

  • Linux as a gateway (no NAT)

    - by Hugo
    I'm trying to configure a Linux server as a gateway/router, but I can't get it to work, and all the information I've managed to find is NAT-related. I have a public IP block for the gateway and the devices behind it, so I want the gateway to simply route packets to the internet - again: no NATing! I've managed to get the gateway to access the internet successfully (that was just a matter of configuring the IP and GW), and the computers behind it can communicate with it.

    [EDIT: more info] This is actually an IPv6 block (2800:40:403::0/48) (but I've found that most utilities and instructions can be easily adapted from IPv4 to IPv6 with little hassle). The server has two ports:

        wan: 2800:40:403::1/48
        lan: 2800:40:403::3/48

    One of the computers behind it is connected to it via a switch: 2800:40:403::7/48. The wan interface on the server can ping6 www.google.com without issues. The lan interface on the server and the client can mutually ping each other without issues (as well as SSH, etc.). I've tried setting the server as the default gateway for the client, with no luck:

        client # route -A inet6 add default gw 2800:40:403::3 dev eth1
        server # cat /proc/sys/net/ipv6/conf/all/forwarding
        1

    I don't want any filtering/firewalling/etc., just plain routing. Thanks.
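    For reference, a minimal sketch of the same pieces with iproute2 plus a persistent forwarding setting; the interface name is a placeholder and nothing here goes beyond what the question already shows.

        # on the client: iproute2 equivalent of the route command above
        ip -6 route add default via 2800:40:403::3 dev eth1
        # on the server: keep forwarding enabled across reboots
        echo 'net.ipv6.conf.all.forwarding = 1' >> /etc/sysctl.conf
        sysctl -p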

    Read the article

  • PassEnv does not find ENV variables

    - by quodlibetor
    I've got this in /etc/profile.d/myfile.sh:

        export MYVAR=myval

    I also have a PassEnv MYVAR line in a <VirtualHost> section of an Apache conf dir. That lets me do things like:

        $ echo $MYVAR
        myval
        $ python
        >>> import os; os.getenv('MYVAR')
        'myval'
        $ sudo echo $MYVAR
        myval
        $ sudo -i
        root# echo $MYVAR
        myval

    But then, despite that being the case, I get:

        root# /sbin/service httpd restart
        Stopping httpd:                                            [  OK  ]
        Starting httpd: [Mon Oct 22 14:44:02 2012] [warn] PassEnv variable MYVAR was undefined
                                                                   [  OK  ]

    And all of my attempts to access MYVAR from within my WSGI scripts just don't work. Thoughts? Am I doing something obviously wrong?

    EDIT for more detail: I've got a swarm of computers/VMs and a swarm of developers working on a swarm of projects. I need a simple central place to keep environment information, the most common being the "environment" (dev/stage/prod). The scheme that we've got (modifying *.wsgi programmatically) is turning out to be more fragile than we'd like. The main options that I see are: put things in the shell environment, or put things in other config files. Getting things into the shell environment is best, because we won't need to write yet more duplicated "what is my environment" code.
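    A hedged note on where the service reads its environment (this is an assumption about the cause, not something from the question): the init script never sources /etc/profile.d, so on RHEL-style systems the variable can be exported from /etc/sysconfig/httpd instead, or set in the Apache config itself; note that SetEnv lands in the per-request environment rather than the process environment.

        # /etc/sysconfig/httpd  (sourced by the RHEL init script before httpd starts)
        export MYVAR=myval

        # or, inside the <VirtualHost>, skipping PassEnv entirely:
        SetEnv MYVAR myval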

    Read the article

  • How do I connect to and factory reset a Catalyst 3560 Switch?

    - by Josh
    My company just bought another company. In their server room they had some older hardware, which I would like to repurpose. One of these is a Cisco switch: C3560G-48TS-S. I found some instructions about this switch here, but this is not a guide for a beginner. I have no idea how to connect to this thing to begin running the commands. It says "Configure the PC terminal emulation software for 9600 baud, 8 data bits, no parity, 1 stop bit, and no flow control," but I can't find anything on how to do this (assuming with telnet?) or even what program to use. I also don't know how to find the IP address of the device to connect to it. My research also says that once I get in there, I need to run clear config all. Is this the right command? Also, what if I can't get the username and password for these devices? Is there some way to factory reset? (My only experience is with devices that have a hardware reset button.) EDIT: I should note that when I push the button on the front, the three lights blink, which according to the documentation indicates the switch is configured and "not available for express setup".
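    A hedged sketch of both halves: the 9600-8-N-1 settings describe a serial console session over the switch's console cable (not telnet), and the wipe itself is usually a couple of IOS commands from privileged mode. The device path and exact commands are assumptions based on common Catalyst practice, not taken from the linked guide.

        # serial console from a Linux box (PuTTY on Windows works with the same 9600 8-N-1 settings)
        minicom -D /dev/ttyUSB0 -b 9600

        ! then, at the switch prompt, from privileged (enable) mode:
        erase startup-config
        delete flash:vlan.dat
        reload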

    Read the article

  • Server redirect

    - by Tyy
    I have an ASP.NET app, XYZ, which requires SSL. I am supposed to work with an IIS instance that has only one Web Site - DefaultWebSite - containing multiple apps and virtual directories (3rd party). So, my app is located at domain.com/XYZ. I have to meet these conditions: 1.) requesting DefaultWebSite (domain.com) will run my app. It could redirect to ../XYZ, but I would rather it didn't. If it has to be done this way, requesting DefaultWebSite or my app over both HTTP and HTTPS should always end up redirecting to https://domain.com/XYZ. 2.) I can't touch any other apps or virtual directories, can't create an additional Web Site, can't set DefaultWebSite to require SSL, ... EDIT: 3.) transmitted data (GET or POST) must be preserved. I tried to: set the Web Site root directory to my app, but this caused other apps to crash because of my Web.config (not sure why); set up an HTTP redirect on DefaultWebSite to https://domain.com/XYZ, which seems to work correctly, but not if a user requests my app directly (redirected to domain.com/XYZ/XYZ, or a redirect loop); set up a Default Document, but this seems to work only if it is located in the Web Site root directory. I know I could write a simple .aspx with Response.Redirect, but... is there any better solution? Am I missing something?
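    A minimal sketch of the Default.aspx fallback mentioned at the end, assuming nothing beyond what the question describes; appending Request.Url.Query carries GET parameters across the hop, though POST bodies would not survive any redirect, whichever mechanism issues it.

        <%@ Page Language="C#" %>
        <script runat="server">
            protected void Page_Load(object sender, EventArgs e)
            {
                // send anything that lands on the Default Web Site root to the app, over HTTPS
                Response.Redirect("https://domain.com/XYZ/" + Request.Url.Query);
            }
        </script>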

    Read the article

  • How can I optimize my ajax calls to deliver at 60ms?

    - by Quintin Par
    I am building autocomplete functionality for my site, and Google's instant results are my benchmark. When I look at Google, the 50-60 ms response times baffle me. They look insane. In comparison, here's how mine looks. To give you an idea, my results are cached on the load balancer and served from a machine that has httpd slow start and initcwnd fixed. My site is also behind CloudFlare. From a server-side perspective I don't think I can do anything more. Can someone help me take this 500 ms response time to 60 ms? What more should I be doing to achieve Google-level performance? Edit: People, you seemed to be angry that I did a comparison to Google and that the question is very generic. Sorry about that. To rephrase: how can I bring the response time down from 500 ms to 60 ms, provided my server response time is just a fraction of a millisecond? Assume the results are served from Nginx - Varnish with a cache hit. Here are some answers I would like to give myself, assuming the response sizes remain more or less the same: ensure results are HTTP compressed; ensure SPDY if you are on HTTPS; ensure you have initcwnd set to 10 and disable slow start on Linux machines; etc. I don't think I'll end up at 60 ms at Google's level, but your collective expertise can help easily shave off 100 ms, and that's a big win.
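    A hedged sketch of the initcwnd/slow-start items from the list above (the gateway and interface are placeholders, and the congestion-window change needs a reasonably recent kernel):

        # raise the initial congestion window on the default route
        ip route change default via 10.0.0.1 dev eth0 initcwnd 10
        # stop the kernel from collapsing the window on idle keep-alive connections
        sysctl -w net.ipv4.tcp_slow_start_after_idle=0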

    Read the article

  • Why is my rsync so slow?

    - by iblue
    My laptop and my workstation are both connected to a Gigabit switch. Both are running Linux. But when I copy files with rsync, it performs badly. I get about 22 MB/s. Shouldn't I theoretically get about 125 MB/s? What is the limiting factor here?

    EDIT: I conducted some experiments.

    Write performance on the laptop: the laptop has an XFS filesystem with full disk encryption. It uses aes-cbc-essiv:sha256 cipher mode with a 256-bit key length. Disk write performance is 58.8 MB/s.

        iblue@nerdpol:~$ LANG=C dd if=/dev/zero of=test.img bs=1M count=1024
        1073741824 Bytes (1.1 GB) copied, 18.2735 s, 58.8 MB/s

    Read performance on the workstation: the files I copied are on a software RAID-5 over 5 HDDs. On top of the RAID is LVM. The volume itself is encrypted with the same cipher. The workstation has an FX-8150 CPU with the native AES-NI instruction set, which speeds up encryption. Disk read performance is 256 MB/s (cache was cold).

        iblue@raven:/mnt/bytemachine/imgs$ dd if=backup-1333796266.tar.bz2 of=/dev/null bs=1M
        10213172008 bytes (10 GB) copied, 39.8882 s, 256 MB/s

    Network performance: I ran iperf between the two clients. Network performance is 939 Mbit/s.

        iblue@raven $ iperf -c 94.135.XXX
        ------------------------------------------------------------
        Client connecting to 94.135.XXX, TCP port 5001
        TCP window size: 23.2 KByte (default)
        ------------------------------------------------------------
        [  3] local 94.135.XXX port 59385 connected with 94.135.YYY port 5001
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0-10.0 sec  1.09 GBytes   939 Mbits/sec
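    A hedged way to check whether the SSH cipher, rather than disk or wire, is the limiting factor; the host and file names are placeholders, and aes128-ctr is just a commonly cheaper cipher to compare against the default:

        rsync -a --progress -e 'ssh -c aes128-ctr' bigfile.img user@workstation:/tmp/
        # or take ssh out of the picture entirely for a one-off test
        rsync -a --progress bigfile.img rsync://workstation/tmp/   # needs rsyncd with a matching module on the other side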

    Read the article

  • How do you permanently disable the 'This Connection is Untrusted' page on Firefox

    - by TheIronChef9
    I'm going insane. Can someone please help me to COMPLETELY DISABLE the 'This Connection is Untrusted' page in Firefox. Facts: I am running Firefox 23.0 on an Ubuntu machine (downloaded and installed Ubuntu today). It is a work computer and I have to use my employer's proxy. Visiting webpages/webapps like Gmail or Google brings up the 'This Connection is Untrusted' page and I have to go through the whole tedious task of selecting 'I Understand the Risks', adding exceptions, etc. The fact is, I don't care about the risks. I would rather this computer melt into the ground than have to see that page ever again. I want to dance naked in untrusted pages and not give a damn about the consequences. I just never want to see that page again. Ever. For some sites (e.g. Wikipedia), the CSS doesn't load and I end up seeing them in plain text. As a result these sites are completely useless. Wasted hours trying to solve this for stackoverflow.com. These issues happen in Firefox on my Windows XP machine as well (also using the same proxy). I don't want to export/import certificates or create exceptions for every site that shows this bloody page. I just want this page gone. I don't want Firefox to tell me what's safe and what's not. Also, my system time and date are correct. I've also tried the lies on this page, with no good results. Edit: I've also tried going into the Advanced > Certificates > Validation setup page and unchecking the 'Use the Online Certificate Status Protocol (OCSP) to confirm the current validity of certificates' checkbox. Nothing happened, even after restarting Firefox or rebooting. I need help. Thanks.
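    One hedged avenue, and an assumption rather than something the question tried: if the employer's proxy intercepts SSL, the warnings usually come from its self-signed signing certificate, and importing that CA certificate into Firefox's certificate store makes them stop without per-site exceptions. certutil comes from libnss3-tools on Ubuntu; the certificate file and profile directory below are placeholders.

        certutil -A -n "Employer proxy CA" -t "C,," -i proxy-ca.pem -d ~/.mozilla/firefox/<profile-dir>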

    Read the article

  • Almost All Logical Volumes Disappeared - Recovery?

    - by Alex
    We had a hard disk crash of one of two hard disks in a software RAID with LVM on top. The server is running Citrix XenServer. On the hard disk which is still intact, the volume group gets detected fine, but only one LV is left. (Some hashes replaced by "x".)

        # lvdisplay
          --- Logical volume ---
          LV Name                /dev/VG_XenStorage-x-x-x-x-408b91acdcae/MGT
          VG Name                VG_XenStorage-x-x-x-x-408b91acdcae
          LV UUID                x-x-x-x-x-x-vQmZ6C
          LV Write Access        read/write
          LV Status              available
          # open                 0
          LV Size                4.00 MiB
          Current LE             1
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     256
          Block device           253:0

        root@rescue ~ # vgdisplay
          --- Volume group ---
          VG Name               VG_XenStorage-x-x-x-x-408b91acdcae
          System ID
          Format                lvm2
          Metadata Areas        1
          Metadata Sequence No  4
          VG Access             read/write
          VG Status             resizable
          MAX LV                0
          Cur LV                1
          Open LV               0
          Max PV                0
          Cur PV                1
          Act PV                1
          VG Size               698.62 GiB
          PE Size               4.00 MiB
          Total PE              178848
          Alloc PE / Size       1 / 4.00 MiB
          Free  PE / Size       178847 / 698.62 GiB
          VG UUID               x-x-x-x-x-x-53w0kL

    I could understand if a full physical volume were lost - but why only the logical volumes? Is there any explanation for this? Is there any way to recover the logical volumes?

    EDIT: We are in a rescue system here. The problem is that the whole server does not boot (GRUB error 22). What we are trying to do is access the root filesystem. But everything was in the LVM. We have only this:

        (parted) print
        Model: ATA SAMSUNG HD753LJ (scsi)
        Disk /dev/sdb: 750GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End    Size   Type     File system  Flags
         1      32.3kB  750GB  750GB  primary               boot, lvm

    And this 750GB LVM volume is exactly what we see on top.
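    One hedged avenue (an assumption, not something the question has tried): LVM keeps plain-text backups of the volume group metadata, and if the copies under /etc/lvm/archive survived on the intact disk, vgcfgrestore can replay a version that still described the missing LVs.

        vgcfgrestore --list VG_XenStorage-x-x-x-x-408b91acdcae      # list the archived metadata versions
        vgcfgrestore -f /etc/lvm/archive/<chosen-file>.vg VG_XenStorage-x-x-x-x-408b91acdcae
        lvchange -ay VG_XenStorage-x-x-x-x-408b91acdcae             # activate whatever comes back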

    Read the article

  • How is it possible to list all folders that a particular user/group has permissions on?

    - by Lord Torgamus
    Is it possible to list all folders/files that a given group has explicit permissions on, for a machine running Windows Server 2003? If so, how? It would be nice to see inherited permissions as well, but I could do with just explicit permissions. A little background: I'm trying to update groups/permissions on a test server. One of the groups, Devs, wasn't implemented correctly when it was created, and my goal is to remove it from the system. It has been replaced by LeadDevelopers, which has permissions on many — but naturally not all — of the same folders. I want to make sure that I don't accidentally orphan any folders or cause any other issues when I remove Devs. It did have some admin-level permissions. EDIT: The answers so far — at least *cacls and AccessEnum — provide a way to find out which groups/users have permissions on known directories/files. I actually want the reverse of this behavior: I know the group, and I'm looking for the directories/files for which the group has permissions. Also, as I noted in a comment, the Devs group is not itself a member of any other group.
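    A hedged sketch of that reverse lookup using Sysinternals AccessChk, which takes an account name and walks a tree reporting where that account has access; the paths are placeholders and the switches are from memory, so treat them as assumptions.

        accesschk.exe -s -d "MYDOMAIN\Devs" C:\              # directories only, recursive
        accesschk.exe -s "MYDOMAIN\Devs" D:\Shares > devs-access.txt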

    Read the article

  • disable specific PCI device at boot

    - by Rhymoid
    I've just reinstalled Debian on my Sony VAIO laptop, and my dmesg and virtual consoles all get spammed with the same messages over and over again.

        [   59.662381] hub 1-1:1.0: unable to enumerate USB device on port 2
        [   59.901732] usb 1-1.2: new high-speed USB device number 91 using ehci_hcd
        [   59.917940] hub 1-1:1.0: unable to enumerate USB device on port 2
        [   60.157256] usb 1-1.2: new high-speed USB device number 92 using ehci_hcd

    I believe these messages are coming from an internally connected USB device, most likely the webcam (since that's the only thing that doesn't work). The only way I can seem to shut it up (without killing my actually useful USB ports) is to disable one of the USB host controllers:

        # echo "0000:00:1a.0" > /sys/bus/pci/drivers/ehci_hcd/unbind

    This also takes down my Bluetooth interface, but I'm fine with that. I would like this setting to persist, so that I can painlessly use my virtual console again in case I need it. I want my operating system (Debian amd64) to never wake it up, but I don't know how to do this. I've tried to blacklist the module alias for the PCI device, but it seems to be ignored:

        $ cat /sys/bus/pci/devices/0000\:00\:1a.0/modalias
        pci:v00008086d00003B3Csv0000104Dsd00009071bc0Csc03i20
        $ cat /etc/modprobe.d/blacklist
        blacklist pci:v00008086d00003B3Csv0000104Dsd00009071bc0Csc03i20

    How do I ensure that this specific PCI device is never automatically activated, without disabling its driver altogether?

    EDIT: The module was renamed recently; now the following works from userland:

        echo "0000:00:1a.0" > /sys/bus/pci/drivers/ehci-pci/unbind

    Still, I'm looking for a way to stop the kernel from binding that device in the first place.
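    A hedged way to make the unbind persistent (an assumption about approach, not an established fix for this machine): a udev rule keyed on the PCI address can run the same echo automatically at every boot, which stops short of preventing the initial bind but removes the manual step.

        # /etc/udev/rules.d/99-unbind-ehci.rules
        ACTION=="add", SUBSYSTEM=="pci", KERNEL=="0000:00:1a.0", RUN+="/bin/sh -c 'echo 0000:00:1a.0 > /sys/bus/pci/drivers/ehci-pci/unbind'"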

    Read the article

  • How to fix UNMOUNTABLE_BOOT_VOLUME (0x000000ED) on my Windows XP DELL laptop?

    - by Neil
    I have a Dell Latitude D410 running Windows XP. I am receiving the STOP: 0x000000ED (0X899CF030,0XC0000185,0X00000000,0X00000000) blue screen. Initially, I tried everything specified in the Microsoft KB articles. At that time, I was able to boot into general safe mode. I pulled the hard drive and was able to run chkdsk on it; it noted that it had fixed some errors, but I was still unable to boot. I put a brand new hard drive in the laptop. Windows XP installation worked up until the reboot, at which time the exact same error message came back up. What I have tried (all since the new hard drive was installed): chkdsk /R; all suggested solutions in Microsoft KB articles; reseating the RAM; opening the laptop, reseating all connectors, and looking for signs of damage (saw none); resetting BIOS options to default; running the basic Dell diagnostics. I have looked at the existing entry "How can I boot XP after receiving stop error 0x000000ED" and am currently in the process of downloading the Ultimate Boot CD to use as a test, but I am not holding out a lot of hope as I really doubt this brand new hard drive is bad. Can anyone think of other areas I am missing? Ran MEMTEST86+ V4.10 for 15 passes (overnight): 0 errors.

    Read the article

  • Windows 7: Dual Monitor Issue

    - by sdoca
    I have a Dell laptop with a docking station and two external monitors hooked up. I'm running Windows 7 64-bit. Originally, the two monitors were the same - Dell 1908FP. I have replaced one of them with an HP LA2405. I have set the HP 24" (1920 x 1200, connected via the DVI port) as monitor 1 and extended to use the Dell 19" (1280 x 1024, connected via the VGA port) as monitor 2. I have set my power plan so that when the laptop is plugged in, it will turn the monitors off after 10 minutes, but it will not put my laptop to sleep. However, now that I have the HP monitor, when I unlock my laptop and the monitors come back on, all my windows are resized and shifted to the top left corner of the HP monitor as if they had been resized for display on the smaller Dell monitor. A co-worker has the exact same laptop and monitor configuration but doesn't experience this issue, so I figure there's some configuration that's different, but we can't find it. I haven't been able to find any mention of a similar problem doing an internet search, but I'm not really sure what terms to use in my search. Anybody have any suggestions as to what may be causing the issue? OS setting? Monitor setting? EDIT: My laptop is using the Intel graphics and media control panel/drivers.

    Read the article

  • Relevance and Necessity of SNMP

    - by Adam Tannon
    Edit: I am in the process of designing a Java-based monitoring tool that will send back periodic "health checks" of a Java app deployed to a cluster of GlassFish servers. I am trying to figure out the best protocol for this monitoring tool to send information back to the monitoring server on. After an initial research effort on my part, it seems like SNMP is just a protocol for monitor-type applications to communicate the "health status" of something (a part of a network, a server, a cluster, an application, etc.) to the rest of the network. If the above is incorrect, please correct me!!! Assuming the generalization is more or less accurate, my next question is: why is this a protocol!?!? In the age of REST/SOAP/TCP protocols, why is there the need for a standardized protocol that only fits one type of application (monitoring)? In other words, if I'm a developer assigned to building a new monitoring tool that periodically polls a server and reports on its CPU and available memory, what advantages does SNMP give me over just POSTing to a RESTful API via plain 'ole HTTP? I'm sure I'm missing something here - I just need someone to help connect the dots! Thanks in advance!
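    For a concrete feel of the comparison being drawn, here is a hedged side-by-side; the host, community string, endpoint and payload are all placeholders, and the OID is UCD-SNMP's ssCpuUser as best I recall, so treat it as an assumption.

        # SNMP: the monitoring server polls a standardized OID exposed by an agent on the host
        snmpget -v2c -c public glassfish-node1 1.3.6.1.4.1.2021.11.9.0
        # Ad-hoc HTTP: the agent pushes whatever JSON the two sides agreed on
        curl -X POST https://monitor.example.com/health -d '{"cpu": 12, "freeMemMb": 2048}'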

    Read the article

  • Creating a private wiki

    - by Hand-E-Food
    I want to create a simple, private wiki, but am really struggling to find what I need. I require the following features: Private wiki. Only I will read or write it. Some formatting capability: headings, bold, italic, bullets, block quotes. Wiki Viewer for Windows 7. If it comes with an editor, I need to be able to hide it. Page Editor for Windows 7. Page Editor for iPhone. Synchronize by cloud but available offline in Windows. So far, my research has led me to the Markdown language. I can easily edit this as plain text using Notepad++ for Windows and Elements for iPhone. I can sync these files through Dropbox and have them available offline. What I can't find is a suitable viewer for Windows. I'd prefer to steer away from using HTML due to its verbose formatting codes. Can anyone recommend a solution for me? If need be, I'd be happy to make a small one-off payment for software.

    Read the article

  • Run 3 monitors on two different video cards?

    - by hullot
    Can I run 3 monitors on two different video cards? I have an ATI and an Nvidia card. The ATI has 2 HDMI connections. They both work. Both cards are also picked up in Windows, one being the ATI and the other the Nvidia, though the latter is listed as a VGA controller even though the card only takes 2 DVI. So, one DVI cable goes into that Nvidia card. Of the 3 monitors, only the 2 HDMI ones on the ATI are picked up, not the third one, which is connected to the Nvidia via DVI. How can I run three monitors then? I suppose I can't install both drivers, so I'm unsure what to do. Is this possible? I just want the Nvidia card to power the third screen, no gaming on it, nothing. Also, the ATI is picked up as the primary card, so no hurdle there. EDIT: Hm, just installed the Nvidia drivers and it picked up the third screen no problem. Hope there aren't any major conflicts. Will post this as the correct answer when I'm able; can't as a new user.

    Read the article

  • Transfer iptables rules to another server (almost) real time

    - by MrShunz
    I'm running 2 cPanel servers with the ConfigServer Security & Firewall plugin. One of the functions of the plugin is to block via iptables (temporarily and/or permanently) IPs which fail various authentications (POP3/IMAP, SMTP, FTP, webmail, mod_security and such). Now, I'd like to push those IP blocks to the border router to drop packets as soon as possible (and in doing so protect the other machines on the network). Keep in mind that after N failed logins an IP is blocked for 5 minutes, then re-allowed. If multiple bans occur in an hour, the IP is blocked permanently and should be unlocked "by hand". So I need a near-realtime solution. What I'm looking for is a better way than firing some cronjobs both on the cPanel boxes and the border router to: (1) dump the rules to a file, (2) transfer the file to the border router (via scp/sftp), and (3) load the rules from the file on the border router (a rough sketch of these steps follows below). I'm aware that I will need some scripts to parse and modify the rules, as the cPanel boxes have one ethernet interface and some aliases while the border router has two ethernet interfaces and some loopbacks. All machines involved use Linux. EDIT as per @pjmorse's comment: the plugin consists of a bunch of Perl and config files. The part I'm interested in is a process which scans logfiles (lfd) and installs iptables rules (and sends an alert email). Fact is, it upgrades quite often (one or two times a week) and is itself 7000 lines of Perl, so I'm not comfortable tampering with it.
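    A minimal sketch of those three steps, assuming SSH key access to the border router and iptables on both ends; host names and paths are placeholders, and the interface rewriting mentioned above would still have to happen between the dump and the restore.

        iptables-save > /tmp/csf-rules.v4                                 # 1. dump on the cPanel box
        scp /tmp/csf-rules.v4 root@border-router:/tmp/                    # 2. ship it over
        ssh root@border-router 'iptables-restore < /tmp/csf-rules.v4'     # 3. load on the router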

    Read the article

  • Windows 7 Laptop Randomly Sleeping

    - by Josh Arenberg
    I'm running Windows 7 64-bit on a Lenovo laptop, and I have a problem wherein after I've put it to sleep once, it will, seemingly without any reason or warning, go back to sleep every 10 to 15 minutes after that. Each time it wakes back up just fine, and you can happily continue on. However, 10 or 15 minutes later, it just drops out into sleep mode. I don't see any errors in the system logs. I don't think it's related to heat (since it's not rebooting). When I check the system event log, I can see where the machine goes to sleep, but it simply says "reason: application API", and doesn't indicate which app (brilliant). I don't see any errors from hardware or anything relating to sleep in the application log that would point to what is going on either. How can I find out which app is triggering this? EDIT: I confirmed that temperature isn't the issue. I think it's important to keep in mind that this condition only happens after the first sleep. If I reboot, the problem goes away until I put it to sleep, after which it sleeps on its own every few minutes. Is there no way to capture which app is calling the sleep routine?

    Read the article

  • How to set up multi users on dev server with git and github

    - by Derek Organ
    I'm working on a LAMP application. We have 2 servers (Debian), Live and Dev. I constantly work on dev to add new features and fix bugs. When I'm happy all works well, I scp the relevant code to the Live system. The database (MySQL) is local to each machine. Now this is a pretty basic setup really, and I want to improve the workflow a bit. I use git and GitHub for version control. Admittedly, I've only really used one branch. There can be 3 different developers who work on the code at different times. We all use the same Linux username to connect to the dev server and edit the code directly when needed. I usually then commit and push the code at the end of the day to GitHub. One thing to bear in mind is it isn't easy to run this code on a local machine, as there are many Apache and subdomain configurations that wouldn't work locally, so it is important to work on the dev server, not locally. I need to create a new process because we need to have a main trunk now and a branch with a big code re-write. What is the best way to do this? Should I create different Unix logins for each developer and set up different working areas on the dev server for their changes? e.g. /var/www/mysite_derek, /var/www/mysite_paul, /var/www/mysite_mike. My thinking is they can do a pull from the main branch, then create their own branch and merge it back in. I'm not sure how this will work though with git locally and with GitHub. Will I need to create different GitHub user accounts as well? I'd like to do this the 'right' way and future-proof it for having lots of potential developers, but I also don't want to over-complicate it. A simple and elegant solution is preferred. Any recommendations or suggestions?
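    A hedged sketch of the per-developer flow being described, assuming each developer gets their own checkout under their own Unix login (the repository and branch names are placeholders). Separate GitHub accounts aren't strictly required for this, since several SSH keys can be attached to one account, though per-developer accounts make the history easier to read.

        git clone git@github.com:yourorg/mysite.git /var/www/mysite_derek
        cd /var/www/mysite_derek
        git checkout -b big-rewrite            # long-lived branch for the code re-write
        # ...edit, test against the dev Apache config, commit...
        git commit -am "rework routing"
        git push -u origin big-rewrite
        # when the rewrite is ready, fold it back into the main trunk
        git checkout master
        git merge big-rewrite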

    Read the article
