Search Results

Search found 12183 results on 488 pages for 'general news'.


  • Serving MP3s to mobile devices is flooding nginx with partial requests

    - by drumfire
    I am serving MP3s with a minimalistic nginx server. What I see in my log files is that there are a lot of requests, in particular from AppleCoreMedia and sometimes Android user agents, that flood the server with short requests. Sometimes they keep requesting the same partial content for a very long time, sometimes more than an hour. For example:

        "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)"
        "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)"
        "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)"
        [...]

    I also get a lot, though not as many, of these:

        "-" 400 0 "-"
        "-" 400 0 "-"

    The IP addresses are always from clients that start downloading shortly after that request, and they usually have roughly the same user agent as in the first example. I have enabled server throttling and connection limits in nginx to at least somewhat limit the huge number of log entries from identical IPs. There was a performance issue when I saw the same behaviour on the previous server, which ran Apache: when Apache could no longer handle the connections from the growing number of clients, that server was effectively DDoSed. I installed nginx on a better server and then moved the site. There was no bandwidth issue with already-connected clients, and I don't know whether the already-connected clients were using more than one connection at a time. Please tell me: Are clients that appear to get stuck on a download a Bad Thing™? I have heard people say their mobile bandwidth use was much higher than they could account for; I'm thinking this type of client behaviour can account for that, and it costs us more bandwidth too. Which up-to-date alternatives exist that can handle serving this type of data better than plain HTTP? Useful general insights for someone who just came into this field straight out of the late '90s are welcome. :-)
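
    To put a number on the "flood" before changing anything, here is a quick sketch that counts 206 (Partial Content) responses per client address in the access log; it assumes the default combined log format, where the status code is the ninth whitespace-separated field:

        # top clients by number of partial-content responses
        awk '$9 == 206 {print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head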

    Read the article

  • TFS2010 Hangs “Waiting for Build Agent”

    - by Qpirate
    I have asked this question over on SO (the link to the question is here), but I am hoping this is a better place to ask it. I have 3 VMs, each running the TFS Build Host Service: one has 1 controller and 1 agent, and the other two have 2 build agents each. Most of the time (7 out of 10 builds) it comes back with the following error message:

        TF215097: An error occurred while initializing a build for build definition BUILD_DEFINITION: There was no endpoint listening at http://MACHINE1:9191/Build/v3.0/Services/Controller/14 that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details.

    No errors are logged when I do get this message. The following is the config file that I have created:

        <configuration>
          <appSettings>
            <add key="traceWriter" value="true"/>
          </appSettings>
          <system.diagnostics>
            <switches>
              <add name="BuildServiceTraceLevel" value="4"/>
              <add name="API" value="4"/>
              <add name="Authentication" value="4"/>
              <add name="Authorization" value="4"/>
              <add name="Database" value="4"/>
              <add name="General" value="4"/>
              <add name="traceLevel" value="4"/>
            </switches>
            <trace autoflush="true" indentsize="4">
              <listeners>
                <add name="myListener" type="Microsoft.TeamFoundation.TeamFoundationTextWriterTraceListener,Microsoft.TeamFoundation.Common, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" initializeData="c:\logs\TFSBuildServiceHost.exe.log" />
                <remove name="Default" />
              </listeners>
            </trace>
          </system.diagnostics>
        </configuration>

    I do have my own custom activities in my build process, but this does not seem to be the problem, as sometimes the build actually does go through. I have tried refreshing the template as some sites suggest. Has anyone come across a solution for this problem, or can anyone tell me how to catch these errors when they happen?

    Read the article

  • What part of SMF is likely broken by a hard power down?

    - by David Mackintosh
    At one of my customer sites, the local guy shut down their local Solaris 10 x86 server, pulled the power inputs, moved it, and now it won't start properly. It boots and then presents a prompt which lets you log in. This appears to be the single-user milestone (or equivalent). Digging into it, I think that SMF isn't permitting the system to go multi-user. SMF was generating a ton of errors on autofs; after some fooling with it I got it to generate errors on inetd and nfs/client instead. This all tells me that the problem is in some SMF state file or database that needs to be fixed/deleted/recreated or something, but I don't know what the actual issue is.

    By "generate errors", I mean that every second I get a message on the console saying "Method or service exit timed out. Killing contract <#." This makes interacting with the computer difficult. Running svcs -xv shows the service as "enabled", in state "disabled", reason "Start method is running". Fooling with svcadm on the service does nothing, except confirm that the service is not in a maintenance state. Logs in /lib/svc/log/$SERVICE just tell you that this loop has been happening once per second. Logs in /etc/svc/volatile/$SERVICE confirm that at boot the service is attempted to start and is immediately stopped, with no further entries. Note that system-log isn't starting because system-log depends on autofs, so I have no syslog or dmesg.

    Googling all these terms ends up telling me how to debug/fix either autofs or nfs/client or inetd or rpc/gss (which was the dependency that SMF was using as an excuse to prevent nfs/client from "starting"; it was claiming that rpc/gss was "undefined", which is incorrect since this all used to work. I re-enabled it with inetadm, but inetd still won't start properly). But I think that the problem is SMF in general, not the individual services. Doing a restore_repository to the "manifest_import" point does nothing to improve, or even detectably change, the situation. I didn't use a boot backup because the last boot(s) were not useful.

    I have told the customer that since the valuable data directories are on a separate file system (which fscks as clean, so it is intact) we could just re-install Solaris 10 on the / partition. But that seems like an awfully Windows-like solution to inflict on this problem. So: any ideas what piece is broken and how I might fix it?
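
    For reference, a rough diagnostic sketch from the single-user shell (the FMRIs are examples taken from the services named above; this only inspects the services and quiets the restart loop, it is not itself a repair):

        svcs -xv                                                   # which services are wedged, and on which dependency
        svcprop -p restarter/state svc:/system/filesystem/autofs:default
        svcadm disable -t svc:/system/filesystem/autofs:default    # -t: temporary, only until next boot
        svcadm clear svc:/network/inetd:default                    # clears maintenance state, if set
        /lib/svc/bin/restore_repository                             # offers the boot-time repository backups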

    Read the article

  • Nginx performance degrades over time

    - by Rylea Stark
    So here's the situation: I run a small cluster with a dedicated box for MySQL and a dedicated PHP-FPM/nginx box, and nginx talks to php-fpm via a socket. As far as I can tell the problem does not lie in php-fpm; it lies somewhere in my configuration. What happens is that the site loads instantly for a few moments after starting, then slowly degrades to load times of greater than 2 seconds, eventually taking 12 seconds to complete a load. PHP is configured to close a child after 175 requests, spawn 20 children at start, and allow a max of 60. I'm not really sure where the bottleneck is; most of my code is optimized and works flawlessly, but these issues with nginx will most likely force me to switch back over to Apache, and I really don't want to do that. nginx.conf configuration below.

        user www-data;
        worker_processes 4;
        worker_cpu_affinity 0001 0010 0100 1000;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 512;
            multi_accept on;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;
            resolver_timeout 5s;
            satisfy all;

            ## Size Limits
            limit_zone brainbug $binary_remote_addr 5m;
            client_body_buffer_size 8k;
            client_header_buffer_size 75M;
            client_max_body_size 1k;
            large_client_header_buffers 2 1k;

            ## Timeouts
            client_body_timeout 60;
            client_header_timeout 60;
            keepalive_timeout 60;
            send_timeout 60;

            ## General Options
            ignore_invalid_headers on;
            recursive_error_pages on;
            sendfile on;
            server_name_in_redirect off;
            server_tokens off;

            ## TCP options
            tcp_nodelay on;
            #tcp_nopush on;
            output_buffers 128 512k;

            gzip on;
            gzip_http_version 1.0;
            gzip_comp_level 7;
            gzip_proxied any;
            gzip_min_length 0;
            gzip_buffers 32 32k;
            gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/jpeg image/png image/gif;
            ## Disable GZIP for MSIE 1-6
            gzip_disable "MSIE [1-6].(?!.*SV1)";
            ## Set a vary header so downstream proxies don't send cached gzipped content to IE6
            gzip_vary on;

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    Read the article

  • How to set up a file server in a restricted corporate environment

    - by Emilio M Bumachar
    I work in a big corporation, and the disk space my team gets on the corporate file server is so low that I am considering turning my work PC into a file server. I ask this community for links to tutorials, software suggestions, and advice in general about how to set it up. My machine is an Intel Core2Duo E7500 @ 3GHz with 3 GB of RAM, running Windows XP Service Pack 3. Upgrading, formatting or installing another OS is out of the question, but I do have Administrator privileges on the PC, and I can install programs (at least for now). A lot of security software I don't even know about is, and must remain, installed. I only need communication within the corporate network, which is not restricted. People have usernames (logins) on the corporate network, and I need to use them to restrict access. Simply put, I have a list of logins of team members, and only people on the list should access the files. I have about 150 GB of free disk space and I'm thinking of allocating 100 GB to the team's shared files. I plan monthly backups on machines of co-workers with the same configuration, but automation of backups is a nice, unnecessary feature: it's totally acceptable for me to manually copy the contents to a different machine once a month. Uptime is important, as everyone would use these files in their daily work. I have experience as a Python and C programmer, but no experience whatsoever as a sysadmin, and almost nothing of my programming experience is network programming. I'm a complete beginner in this. Thanks in advance for any help.

    EDIT: I honestly appreciate all the warnings, I really do, but what I plan to make available is mostly stuff that now is solely on DVDs just for space reasons. It's 'daily work' to read them, but 'daily work write' files will remain on the corporate server. As for the importance of uptime, I think I overstated it: a few outages are OK, and it's already an improvement over getting the DVDs. As for policy, my manager is kind of on my side, and I will confirm that before making my move. As for getting more space through the proper channels, well, that was Plan A, and it's still on the table... but I don't have much hope. I'm not as "core business" as I'd like.

    Read the article

  • Automating silent software deployments on Solaris 10

    - by datSilencer
    Hello everyone. Essentially, the question I'd like to ask is related to the automation of software package deployments on Solaris 10. Specifically, I have a set of software components in tar files that run as daemon processes after being extracted and configured in the host environment. Pretty much like any server-side software package out there, I need to ensure that a list of prerequisites is met before extracting and running the software. For example:

    Checking that certain users exist, and that they are associated with one or many user groups. If not, then create them and their group associations.
    Checking that target application folders exist and, if not, creating them with preconfigured path values defined when the package was assembled.
    Checking that such folders have the appropriate access control level and ownership for a certain user. If not, then set them.
    Checking that a set of environment variables are defined in /etc/profile, pointed to predefined path locations, added to the general $PATH environment variable, and finally exported into the user's environment. Other files involved include /etc/services and /etc/system.

    Obviously, doing this for many boxes (the goal in question) by hand can be slow and error-prone. I believe a better alternative is to somehow automate this process. So far I have thought about the following options, and discarded them for one reason or another.

    1) Traditional shell scripts. I've only troubleshooted these before, and I don't really have much experience with them. These would be my last resort.
    2) Python scripts using the pexpect library for analyzing system command output. This was my initial choice since the target Solaris environments have it installed. However, I want to make sure that I'm not reinventing the wheel again :P.
    3) Ant or Gradle scripts. They may be an option since the boxes also have Java 1.5 enabled, and the fileset abstractions can be very useful. However, they may fall short when dealing with user and folder permission checking/setting.

    It seems obvious to me that I'm not the first person in this situation, but I don't seem to find a utility framework geared towards this purpose. Please let me know if there's a better way to accomplish this. I thank you for your time and help.
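
    Since option 1 is on the table, here is a minimal sketch of those prerequisite checks as a plain Bourne shell script; the user, group, path and variable names are illustrative, not taken from the real packages:

        #!/bin/sh
        APP_USER=appuser
        APP_GROUP=appgrp
        APP_HOME=/opt/myapp

        # user and group exist, or get created
        getent group "$APP_GROUP" >/dev/null || /usr/sbin/groupadd "$APP_GROUP"
        id "$APP_USER" >/dev/null 2>&1 || /usr/sbin/useradd -g "$APP_GROUP" -d "$APP_HOME" "$APP_USER"

        # target folder exists with the right owner and mode
        [ -d "$APP_HOME" ] || mkdir -p "$APP_HOME"
        chown "$APP_USER:$APP_GROUP" "$APP_HOME"
        chmod 750 "$APP_HOME"

        # environment variable exported from /etc/profile exactly once
        grep '^MYAPP_HOME=' /etc/profile >/dev/null || \
            echo "MYAPP_HOME=$APP_HOME; export MYAPP_HOME" >> /etc/profile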

    Read the article

  • Keep source IP after NAT

    - by John Miller
    Until today I used a cheap router so I could share my internet connection and keep a webserver online too, while using NAT. Users' IPs ($_SERVER['REMOTE_ADDR']) were fine; I was seeing the users' real IPs. But as traffic grew every day, I had to install a Linux server (Debian) to share my internet connection, because my old router couldn't keep up with the traffic anymore. I shared the internet via iptables using NAT, but now, after forwarding port 80 to my webserver, instead of seeing real users' IPs I see my gateway IP (the Linux box's internal IP) as every user's IP address. How do I solve this issue? I have edited my post so I can paste the rules I'm currently using.

        #!/bin/sh
        # I made a script to set the rules

        # I flush everything here.
        iptables --flush
        iptables --table nat --flush
        iptables --delete-chain
        iptables --table nat --delete-chain
        iptables -F
        iptables -X

        # I drop everything as a general rule, but this is disabled under testing
        # iptables -P INPUT DROP
        # iptables -P OUTPUT DROP

        # these are the loopback rules
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT

        # here I set the SSH port rules, so I can connect to my server
        iptables -A INPUT -p tcp --sport 513:65535 --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp --sport 22 --dport 513:65535 -m state --state ESTABLISHED -j ACCEPT

        # These are the forwards for port 80
        iptables -t nat -A PREROUTING -p tcp -s 0/0 -d xx.xx.xx.xx --dport 80 -j DNAT --to 192.168.42.3:80
        iptables -t nat -A POSTROUTING -o eth0 -d xx.xx.xx.xx -j SNAT --to-source 192.168.42.3
        iptables -A FORWARD -p tcp -s 192.168.42.3 --sport 80 -j ACCEPT

        # These are the forwards for bind/dns
        iptables -t nat -A PREROUTING -p udp -s 0/0 -d xx.xx.xx.xx --dport 53 -j DNAT --to 192.168.42.3:53
        iptables -t nat -A POSTROUTING -o eth0 -d xx.xx.xx.xx -j SNAT --to-source 192.168.42.3
        iptables -A FORWARD -p udp -s 192.168.42.3 --sport 53 -j ACCEPT

        # And these are the rules so I can share my internet connection
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        iptables -A FORWARD -i eth0:1 -j ACCEPT

    If I delete the MASQUERADE part, I see my real IP while echoing it with PHP, but I don't have internet. How can I have internet access and still see the real client IP while ports are forwarded too? (xx.xx.xx.xx is my public IP; I hid it for security reasons.)
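
    One way this is usually laid out, sketched here under the assumption that eth0 is the public interface, the LAN is 192.168.42.0/24, and the webserver's default gateway is this Debian box: DNAT the inbound port, do not SNAT those forwarded packets, and only masquerade traffic that actually originates in the LAN, so the webserver keeps seeing the original client address:

        iptables -t nat -A PREROUTING -i eth0 -p tcp -d xx.xx.xx.xx --dport 80 -j DNAT --to 192.168.42.3:80
        iptables -A FORWARD -i eth0 -p tcp -d 192.168.42.3 --dport 80 -j ACCEPT
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        # masquerade only what comes from the LAN, not the DNAT'ed traffic arriving from outside
        iptables -t nat -A POSTROUTING -s 192.168.42.0/24 -o eth0 -j MASQUERADE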

    Read the article

  • Tool or script to detect moved or renamed files on Linux prior to a backup

    - by Pharaun
    Basically I am searching to see if there exists a tool or script that can detect moved or renamed files, so that I can get a list of renamed/moved files and apply the same operations on the other end of the network to conserve bandwidth. Basically, disk storage is cheap but bandwidth isn't, and the problem is that the files often get reorganized or moved around into a better directory structure. Thus, when you use rsync to do the backup, rsync won't notice that it's a renamed or moved file and will re-transmit it over the network all over again, despite having the same file on the other end. So I am wondering if there exists a script or tool that can record where all the files are and their names, then, just prior to a backup, rescan and detect moved or renamed files, so I can take that list and re-apply the move/rename operations on the other side. Here's a list of the "general" features of the files:

    Large, unchanging files
    They can be renamed or moved around

    [Edit:] These all are good answers, and what I ended up doing in the end was looking at all of the answers; I will be writing some code to deal with this. Basically what I am thinking/working on now is:

    Using something like AIDE for the "initial" scan, which also lets me keep checksums on the files; because they are supposed to never change, this would aid in detecting corruption.
    Creating an inotify daemon that would monitor these files/directories and record any changes relating to renames and moving the files around to a log file.
    There are some edge cases where inotify might fail to record that something happened to the file system, so there is a final step of using find to search the file system for files that have a change time later than the last backup.

    This has several benefits:

    Checksums/etc. from AIDE, to be able to check/make sure that some media did not get corrupted
    inotify keeps resource usage low, and there is no need to re-scan the filesystem over and over
    No need to patch rsync; if I have to patch things I can, but I would prefer to avoid patching to keep the burden lower (i.e. no need to re-patch every time there is an update)

    I've used Unison before and it's really nice, however I could've sworn that Unison keeps copies around on the filesystem and that its "archive" files can grow to be rather large?
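
    The inotify part of that plan could be as small as the following sketch, using inotifywait from the inotify-tools package (assumed installed; the watched path and log file are examples). Each rename or move is logged as a MOVED_FROM/MOVED_TO pair that can later be replayed on the backup side before running rsync:

        inotifywait -m -r -e moved_from -e moved_to \
            --timefmt '%F %T' --format '%T %e %w%f' \
            /data >> /var/log/file-moves.log &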

    Read the article

  • Recognizing Dell EqualLogic with Nagios

    - by user3677595
    EDIT: All firmware and models are compatible; that is why nothing is posted about it. Okay, there will be a lot here, so please bear with me. I've been working on this for a few hours now (reading manuals and such), so I'm not just coming here right out of the blue. I am working on a PRE-EXISTING Nagios server where several other existing plugins and checks are running and working. Now I want to add another server to check, so I made the following modifications. First and foremost, I added a file to /usr/local/nagios/libexec named check_equallogic.sh. The permissions are 755, the same as all the others. I have chowned it to nagios:nagios, and in the listing it shows the owner as nagios. I then added a command to the commands.cfg file in /usr/local/nagios/etc/objects that shows the following:

        # 'check_equallogic' command definition
        define command{
            command_name    check_equallogic
            command_line    $USER1$/check_equallogic -H $HOSTADDRESS$ -C $ARG1$ -t $ARG2$ $ARG3$
        }

    Following this, I created a file named equallogic.cfg in the objects directory, and it contains (more or less):

        define host{
            use             linux-server    ; Inherit default values from a template
            host_name       172.16.50.11    ; The name we're giving to this device
            alias           EqualLogic      ; A longer name associated with the device
            address         172.16.50.11    ; IP address of the device
            contact_groups  admins
        }

    Check Equallogic Information:

        define service{
            use                 generic-service
            host_name           172.16.50.11
            service_description General Information
            check_command       check_equallogic!public!info
        }

    After ensuring that permissions are okay for all files, I restart the Nagios service with no errors. When I go into the web GUI, I get the following error AFTER the check runs:

        (Return code of 127 is out of bounds - plugin may be missing)

    Extra, probably unrelated problem: when I log into the EqualLogic server, under Audit logs I get the following error:

        Level: AUDIT   Time: 26/05/2014 3:59:13 PM   Member: ps4100-1
        Subsystem: agent   Event ID: 22.7.1
        SNMP packet validation failed, request received from 172.16.10.11

    An snmpwalk receives a timeout, whereas others succeed. I will work on importing the MIBs tomorrow. The reason I mention it is that I want to make sure it is only a MIB issue for SNMP. If it is, then ignore this part. I am entirely unsure of what to do here.
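
    Return code 127 is the shell's "command not found", so a first step is to run the plugin by hand as the nagios user and confirm that everything the command definition and the script call actually exists (note that the command definition above calls $USER1$/check_equallogic while the file added to libexec is named check_equallogic.sh). A quick sketch:

        sudo -u nagios /usr/local/nagios/libexec/check_equallogic.sh -H 172.16.50.11 -C public -t info
        head -1 /usr/local/nagios/libexec/check_equallogic.sh   # does the shebang point at an installed shell?
        which snmpget snmpwalk                                   # are the Net-SNMP client tools on this host?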

    Read the article

  • Postfix issues sending mail to addresses under domain located on server

    - by iamthewit
    I recently installed Virtualmin on my nice shiny new Rackspace cloud server. Everything went seamlessly, but I've been having some issues getting emails to send properly. The problem seems to be that the server cannot send mail to email addresses whose domain is hosted by my server. For example, on my server I run multiple virtual domains; let's call this one test.com. When I run the mail command from the shell (mail [email protected]) I get the following back from my maillog:

        Oct 6 14:55:18 test postfix/pickup[8737]: DC1131612CC: uid=0 from=
        Oct 6 14:55:18 test postfix/cleanup[8769]: DC1131612CC: [email protected]
        Oct 6 14:55:18 test postfix/qmgr[8738]: DC1131612CC: [email protected], size=353, nrcpt=1 (queue active)
        Oct 6 14:55:18 test postfix/error[8771]: DC1131612CC: [email protected], relay=none, delay=0, delays=0/0/0/0, dsn=5.0.0, status=bounced (User unknown in virtual alias table)
        Oct 6 14:55:18 test postfix/cleanup[8769]: DD07D1612D1: [email protected]
        Oct 6 14:55:18 test postfix/bounce[8772]: DC1131612CC: sender non-delivery notification: DD07D1612D1
        Oct 6 14:55:18 test postfix/qmgr[8738]: DD07D1612D1: from=<, size=2268, nrcpt=1 (queue active)
        Oct 6 14:55:18 test postfix/qmgr[8738]: DC1131612CC: removed
        Oct 6 14:55:18 test postfix/local[8773]: DD07D1612D1: [email protected], relay=local, delay=0.03, delays=0/0/0/0.03, dsn=2.0.0, status=sent (delivered to command: /usr/bin/procmail-wrapper -o -a $DOMAIN -d $LOGNAME)
        Oct 6 14:55:18 test postfix/qmgr[8738]: DD07D1612D1: removed

    When I run mail [email protected] the message is sent and received perfectly fine. I'm a bit of a noob when it comes to servers, but I pick things up fairly quickly, so please excuse any incorrect terminology and my general noobiness. Any help would be greatly appreciated; I've been googling for quite a while but haven't found a solution yet. I'll add a copy of my main.cf file in a response below. Cheers guys. Here is the reformatted postconf output; do you want the reformatted main.cf file too, or is this enough?

        alias_database = hash:/etc/postfix/aliases
        alias_maps = hash:/etc/postfix/aliases
        broken_sasl_auth_clients = yes
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        debug_peer_level = 2
        home_mailbox = Maildir/
        html_directory = no
        mailbox_command = /usr/bin/procmail-wrapper -o -a $DOMAIN -d $LOGNAME
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        myhostname = server.test.com
        newaliases_path = /usr/bin/newaliases.postfix
        readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES
        sample_directory = /usr/share/doc/postfix-2.3.3/samples
        sender_bcc_maps = hash:/etc/postfix/bcc
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtpd_recipient_restrictions = permit_mynetworks permit_sasl_authenticated reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_security_options = noanonymous
        unknown_local_recipient_reject_code = 550
        virtual_alias_maps = hash:/etc/postfix/virtual
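
    The "User unknown in virtual alias table" status suggests asking Postfix directly how it resolves the failing recipient. A small sketch, using user@test.com as a stand-in for the real address and assuming the table is the hash:/etc/postfix/virtual listed above:

        postconf virtual_alias_domains virtual_alias_maps      # which domains and tables are consulted
        postmap -q user@test.com hash:/etc/postfix/virtual     # empty output here produces "user unknown"
        postmap /etc/postfix/virtual && postfix reload         # rebuild the .db after editing the table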

    Read the article

  • Diff 2 files while ignoring parts of lines

    - by Millianz
    I would like to diff a file system. Currently my bash script prints out the file system recursively into a file (ls -l -R) and diffs it with an expected output. An example of a line in this file would be:

        drw---- 100000f3 00000400 0 ./foo/

    My current diff command is

        diff "$TEMP_LOG" "$DIFF_FILE_OUT" --strip-trailing-cr --changed-group-format='%' --unchanged-group-format='' "$SubLog"

    As you can see, I ignore additional lines in the current output file; I only care about lines that match the master output. I now have the problem, though, that some files may differ in size, or a folder might even have a different name, but due to its location I know what access rights it should have. For example:

        Output: ------- 00000000 00000000 528 ./foo/bar.txt
        Master: ------- 00000000 00000000 200 ./foo/bar.txt

    Only the size differs here, and it doesn't matter. I would like to just ignore certain parts of the diff, kind of like an ANSI C comment:

        Master: ------- 00000000 00000000 /*200*/ ./foo/bar.txt

        -- OR --

        Master: d------ 00000000 00000000 /*10*/ ./foo//*123123*///*76456546*//bar.txt
        Output: d------ 00000000 00000000 0 ./foo/asd/sdf/bar.txt

    and still have it diff correctly. Is this even possible with diff, or will I have to write a custom script for it? Since I'm fairly new to Cygwin, I might be using the completely wrong tool altogether; I'm happy for any suggestions.

    Update: Taking a step back, here is the general task at hand that I want to achieve. I want to write a script that checks the file system to see if the read/write permissions are set up correctly. The structure of the file system is under my control, so I don't have to worry about it changing too much. Sometimes folders/files might not be present, but if they are, their permissions must be checked. For example, assume that the following is a snapshot of the current file system structure:

        drw ./foo
        drw ./foo/bar
        -rw ./foow/bar/bar.txt
        drw ./foo/baz
        -rw ./foo/baz/baz.txt

    And this is what the file system structure might dictate, i.e. if these folders/files are present, the permissions must match:

        drw ./foo
        drw ./foo/bar
        -rw ./foo/bar/bar.txt
        --- ./foo/bar/foobar.txt
        drw ./foo/baz
        -rw ./foo/baz/foobaz.txt

    In this case the file system checked out OK, since all files present match their expected values. The situation becomes more complicated as soon as certain folders might have any arbitrary name; only due to their location do I know what their permissions should be. Assume that the directory ./foo/bar in the above example might be such a case, i.e. instead of bar the folder could have any name, but must still match the -rw permissions. This seems like a very complicated situation, and I'm not even sure if I can solve it with bash scripting alone. I might have to write an actual application.
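
    For the simple size-only case, one approach is to blank out the size column on both listings before diffing; a sketch (the file names are examples, and field 4 is the size column in the format shown above):

        awk '{ $4 = ""; print }' master.txt > /tmp/master.nosize
        awk '{ $4 = ""; print }' output.txt > /tmp/output.nosize
        diff --strip-trailing-cr /tmp/master.nosize /tmp/output.nosize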

    Read the article

  • Can you set up a gaming LAN using OpenVPN installed in a VMware guest OS and be playing the game on the host OS?

    - by Coder
    I would like to setup a gaming VPN. Ie. I have some games that work over LAN and would like to play them with people that are not on my LAN. I know I can do this with OpenVPN. My ultimate goal would be to run OpenVPN portably on my host OS and not even need any virtualization. As such i don't want to install it on my host, but i'm fine with running it portably. I'm even fine with temporarily adding registry keys, and then running a .reg file to remove these entries once i'm done. To this effect i have installed OpenVPN on a virtual machine and diffed the registry. I then manually (using a .reg file) added all the keys that seem important on my host OS and copied the installation folder of OpenVPN onto my host machine. Then i try to run openVPN GUI 1.0.3 as a test and it says "Error opening registy for reading (HKLM\SOFTWARE\OpenVPN). OpenVPN is probably not installed". I verified that that key is indeed in the registry with all subkeys and it looks correct. I have tried running the GUI as an administrator and in compatibility mode with no success. I am running Windows 7. If this fails then i would be happy with installing OpenVPN on a virtual machine in VMWare but they key is that i will be running the game installed on my host machine. The first question for this option is if this is even possible. The second is, that I can't get the VM to have internet access if I use bridging but i can if i use NAT. Is it possible to do this game VPN setup with VMWare guest OS running using NAT? Summary of questions: -Is it possible to run openVPN portably and if so what did i miss above? -If it's not possible to run it portably, then can setup a gaming LAN by installing OpenVPN in a guest OS with NAT and how can i do this? -If the above is not possible then can i install OpenVPN in a guest using bridging and if so how can i set this up with a Windows 7 host and Windows XP guest as currently i can't get the guest to be able to access the internet in bridging mode, but it working in NAT mode. -In general is there any good documentation on setting up a gaming LAN with OpenVPN (i am using 2.1.4) as i have never set up a VPN of any sort before so any help would be much appreciated. Thanks!

    Read the article

  • Google analytics and multiple independent subdomains

    - by MTilsted
    I need some help setting up Google Analytics correctly. Here is my setup: we host sites for multiple customers, and each customer has their own subdomain on our site, so we have customerA.oursite.com and customerB.oursite.com. As we add more customers we get more subdomains. We do want to track all data for each customer independently, but I don't want to create a new Google tracking code for each new customer. So my plan is to track all visits with "oursite.com", and then create a filter in Google Analytics to get data for each specific customer (all visits for a specific subdomain). Is this (one tracking code, and a subdomain filter) the right way to do it? To create a subdomain filter I add a new profile for each customer, and then add a custom filter saying include "Request URI" and fill in "CustomerDomain.oursite.com". Is this the correct way to do it? And a general question about filters: is it really impossible to create a new filter by applying it to data in an existing profile? I would really like to just collect all the data in one "main" profile and then create subdomain filters as we need them. But it seems that Google only applies filters to new incoming data, not existing data. Is this really true? The following is my tracking code. Is '_setDomainName', 'none' the right thing to do?

        <script type="text/javascript">
        /* Tracking code for qrtown.com */
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-11584298-10']);
        _gaq.push(['_setDomainName', 'none']);
        _gaq.push(['_trackPageview']);

        (function() {
          var ga = document.createElement('script');
          ga.type = 'text/javascript';
          ga.async = true;
          ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
          var s = document.getElementsByTagName('script')[0];
          s.parentNode.insertBefore(ga, s);
        })();
        </script>

    Read the article

  • Managing hosts and iptables in scalable architecture

    - by hakunin
    Let's say I have a load balancer in front of 3 app servers. Let's say I also have these services available at certain IPs:

    Postgres server
    Redis server
    ElasticSearch server
    Memcached server 1
    Memcached server 2
    Memcached server 3

    So that's 6 nodes at 6 different IP addresses. Naturally, every one of my 3 app servers needs to talk to these 6 servers above. Then, to make it a bit funkier, I also have 3 worker servers. Each worker also talks to the above 6 servers, but thankfully workers and apps never need to talk to each other. Now's the kicker: everything is on Digital Ocean VPS. What that means is: you have no private network, no private IPs. You only have a separate, random IP address on each machine. You can't mask them or anything. So in order to build a secure environment I would have to configure some iptables. For example:

    Open the app servers to be accessed by the load balancer server
    Open the Redis, ES, PG, and each memcached server to be accessed by each app's IP and each worker's IP

    This means that every time I add an app or worker I also have to reconfigure iptables on those 6 servers above to welcome the new app or worker. Is there a way to simplify this type of setup? I was thinking: what if there was a gateway machine between the apps/workers and the above 6 machines? This way all the interaction would always happen via the gateway server, and when I add a new app or worker I wouldn't need to teach the 6 servers to let it in. If I went this route, I'd hope a small 512 MB server could handle it, and there wouldn't be almost any overhead. Or would there? Please help with the best way to handle this situation. I would appreciate an answer as concrete as possible. I don't think this is too specific, because this general architecture is very common, and Digital Ocean is becoming increasingly popular. A concrete solution here would be much appreciated by many.
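
    One lighter-weight alternative to a gateway box, sketched here with ipset (assuming the ipset package is available on the droplets): each of the 6 backend servers keeps one named set of allowed frontend addresses, so adding an app or worker means a single ipset add per backend instead of editing several numbered rules. The port and address below are examples for the Postgres box:

        ipset -exist create frontends hash:ip
        ipset -exist add frontends 203.0.113.10           # IP of the new app or worker
        iptables -A INPUT -p tcp --dport 5432 -m set --match-set frontends src -j ACCEPT
        iptables -A INPUT -p tcp --dport 5432 -j DROP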

    Read the article

  • Linux Best Practices

    - by Zac
    I'm a life-long Windows developer switching over to Linux for the first time, and I'm starting off with Ubuntu to ease the learning curve. My new laptop will primarily be a development machine: 6GB RAM, 320 GB HD. I'd like there to be 2 non-root users: (a) Development, which will always be me, and (b) Guest, for anyone else. I assume the root user is added by default, like System Administrator in Windows. (1) I'd like to mount /home to its own partition, but how does this work if I have two user accounts (Development and Guest)? Are there 2 separate /home directories, or do they get shared? Is it possible to allocate more space for Development and only a tiny bit of space for Guest in GRUB2? How?!?! (2) I'm assuming that its okay that all of my development tools (Eclipse & plugins, SVN, JUnit, ant, etc.) and Java will end up getting installed in non-/home directories such as /usr and /opt, but that my Eclipse/SVN workspace will live under my /home directory on a separate partition... any problems, issues, concerns with that? (3) As far as partitioning schemes, nothing too complicated, but not plain Jane either: Boot Partition, 512 MB, in case I want to install other OSes Ubuntu & non-/home file system, 187.5 GB Swap Partition, 12 GB = RAM x 2 /home Partition, 120 GB I don't have any bulky media data (I don't have music or video libraries, this is a lean and mean dev machine) so having 320 GB is like winning the lottery and not knowing what to do with all this space. I figured I'd give a little extra space to the OS/FS partition since I'll be running JEE containers locally and doing a lot of file IO, logging and other memory-instensive operations. Any issues, problems, concerns, suggestions? (4) I was thinking about using ext4; seems to have good filestamping without any space ceiling for me to hit. Any other suggestions for a dev machine? (5) I read somewhere that you need to be careful when you install software as the root user, but I can't remember why. What general caveats do I need to be aware of when doing things (installing packages, making system configurations, etc.) as root vs "Development" user? Thanks!

    Read the article

  • SMBfs mounting OK, listing OK, Read KO, smbclient OK

    - by Kwaio
    I've tried to make the title as meaningful as I could, but it still looks ugly. The premises: we are using RHEL3-U8 as the OS on most servers here (don't ask me why or suggest an upgrade, it's not on today's schedule), which means the kernel is 2.4.21. I have no access to the remote server, but I know it is a NetApp NAS rack.

        $> smbclient --version
        Version 3.0.9-1.3E.9

    Here is the /etc/fstab line:

        //NASHOSTNAME/share /mnt/mydir smbfs ro,uid=123,gid=123,workgroup=XXXX,credentials=/somefile 0 0

    Here is the corresponding mount output line:

        //NASHOSTNAME/share on /mnt/mydir type smbfs (0)

    The symptoms: I can list the share without problems, even cd into it. The issue appears if I try to read any file:

        $> cat /mnt/mydir/fileX.txt
        cat: /mnt/mydir/fileX.txt: Input/output error

    In the system logs (/var/log/kernel for example) the following errors appear:

        Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2
        Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2
        Jul 30 15:40:02 hostname kernel: smb_open: fileX.txt open failed, result=-5
        Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2
        Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2
        Jul 30 15:40:02 hostname kernel: smb_open: fileX.txt open failed, result=-5
        Jul 30 15:40:02 hostname kernel: smb_readpage_sync: fileX.txt open failed, error=-5

    The ERRHRD code 0x001F error is "general hardware failure", although it seems Samba sometimes uses it for a different purpose; see http://www.ubiqx.org/cifs/SMB.html [Strange behaviour alert]. Additional information: there is another SMB mountpoint on the system pointing to a (Linux) host using Samba, and that one works. What I have tried: I added debug=4 to the mount options and remounted the share, and the logs still look the same. I have tried to mount the share with smbclient, and I am able to fetch files with the get command. Both targets are in the same subnet, so a network problem should be ruled out, even though the LAN goes through a VPN with optimizers; the MTU has already been decreased to 1450. I can also mount the share through NFS, but then the files are all root.root 700 and I need to read them as another user...

    Read the article

  • T60 screen/LCD goes black after a few minutes with a high-pitched sound rising and fading

    - by edelwater
    Just now my T60 screen got "black" (so no display). On my second monitor: no problems so the VGA output works. Symptom: Screen blanks / no display, but it works on the second monitor Steps to reproduce: - boot - wait (it does not matter what you do you do not have to login or anything) - (now the monitor of the laptop slowly begins to make a ssssssssHHHHHHHHHHHHHHHHHWOEOEssssssss noise of about 10 seconds) - right after the sounds ends, the monitor gets black. Sometimes it seems to be the same each time. Software: Installed no new software before/after, running ZoneAlarm and antivirus. Other: It does not feel hot in any place, there don't seem to be running processes with strange behaviour. Warranty: Out of warranty What was I doing: Typing text on a website and doing some PHP coding in a text editor. What can I do here other than buy a new laptop? Does it sound familiar to known cases? Update 1: Exactly the same problem: http://forums.lenovo.com/t5/T61-and-prior-T-series-ThinkPad/T60-Screen-Blackout/m-p/288772 and the second poster (garyj), http://forums.lenovo.com/t5/T61-and-prior-T-series-ThinkPad/Black-Screen-on-T60/m-p/235053#M48627 And here: "I have that same problem. I replaced the CCRL on mine and it works fine when the screen is not screwed in. Once the frame of the LCD screen (metal portion) touches the metal on the laptop which holds the screen the screen goes black. If the metal is touching the screen when you boot up it boots up with it being very dimmly lit. " from http://forums.lenovo.com/t5/T61-and-prior-T-series-ThinkPad/T60-screen-problems/m-p/205047#M44995 (it seems replacing the LCD display is no use, he tried it three times). Same problem: http://forums.lenovo.com/t5/T61-and-prior-T-series-ThinkPad/T60-black-screen/m-p/80604#M25914 Hmmm... not handy 3 or 4 months ago I ordered and installed a new fan. Now the LCD. Which does not seem the core issue but some electric issue so it seems replacing the LCD is not the thing to do here. If it is not the LCD that needs to be replaced (see other threads), which parts can I order to fix this? Is there any information which could lead me to identify the issue? I have read replacing the "inverter" AND the "backlightning" would that make sense? Update 2: I replaced the inverter with another inverter, but IO have the same problem. I DID notice that the inverter is the component that makes the sssssssssssssHHHHHHHHHH sound AND it becomes very hot in a few seconds. (So both the old and the test one) The problem is hmmm wat is then the thing that makes the inverter hot by (assumption) after which it shuts itself down. Is it either the input or the output? The output seems to me not, because the screen seems to function so it must be the electricity coming in. But what causes it to become so hot would it be the VGA card outputting some unusual high voltage seems unlikely? I am looking for the component to order / replace update 3: Great news. Ewendish gave me the hint to look in the BIOS. While I was in the BIOS I noticed that the screen did not switch off and there was not a high pitched sound. So I lowered some settings in the BIOS. I then noticed that with brightness turned to 0 (via FN End), it does not make a high pitched sound and does not turn off, with brightness turned up just three "stripes" it starts making the sound. So I could from now on work under lowest brightness modus or... see where the problem lies. 
So as stated below with either power management or display drivers / ATI Catalyst settings / Windows display settings. I'm trying to see where it lies, but I will google some first. Update 4: I wiped clean the Windows XP installation and installed Windows 7 on it. Unfortunately the problem remains: as soon as the brightness goes up the screen starts hissing. This means... back to original thought: it probably IS a hardware problem. Although ... again... if it is NOT the inverter, what is it? Could it be the backlightning component? I could try to switch that with a another T60... but this is quite tricky.

    Read the article

  • Why would a process monitoring script use exit 1; on finding no problems?

    - by user568458
    General question: On a Linux (CentOS) server, if a process monitoring script run by cron is set to close with exit 1; rather than exit 0; on finding that everything is okay and that no action is needed, is that a mistake? Or are there legitimate reasons for calling exit 1; instead of exit 0; on the "everything's fine, no action needed" condition? exit 0; on finding no problems seems to me to be more appropriate. But maybe there's something I'm not aware of. For example, maybe there's something specific to cron? Or maybe there's a convention in process monitoring scripts that 'failure' means 'this script did not need to fix a problem' (rather than what I would expect, which is that exit 1; would mean 'the process being monitored has failed')?

    My specific case: I'm looking at a process monitoring script written by my web hosting company. By process monitoring script, I mean a script executed by cron on a regular basis that checks if an important system process is running and, if it isn't running, takes actions such as mailing an administrator or restarting the process. Here's the (generalised) structure of their script, for a service running on port 8080 (in this case, Apache Tomcat):

        SERVICE=$(/usr/sbin/lsof -i tcp:8080 | wc -l);
        if [ $SERVICE != 0 ]; then
            exit 1;
        else
            #take action
        fi

    Seems simple enough even for someone with limited knowledge like me, except the exit 1; part seems odd. As I understand it, exit 0; closes a program and signifies to the parent that executed the program that everything is fine, while exit n; where n > 0 and n < 127 signifies that there has been some kind of error or problem. Here, their script seems to go against that rule: it calls exit 1; in the condition where everything is fine, and doesn't exit after taking remedial action in the problem condition. To me, this looks like a mistake, but my experience in this area is limited. Are there cases where calling exit 1; in the "everything's fine, no action needed" condition is more appropriate than calling exit 0;? Or is it a mistake? The wider context is pretty simple. It's a CentOS VPS, running Plesk. The script is being called by cron via Plesk's "Scheduled tasks" cron manager. There's no custom layer between cron and this script that would respond in an unusual way to the exit call. It's a fairly average, almost out-of-the-box Plesk-managed CentOS VPS (in so far as there is such a thing). The process being monitored by this script is Apache Tomcat.
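
    For comparison, a sketch of the more conventional arrangement (the restart command and notification address are examples, not taken from the original script): exit 0 when the listener is found, act and exit non-zero when it is not:

        SERVICE=$(/usr/sbin/lsof -i tcp:8080 | wc -l)
        if [ "$SERVICE" -ne 0 ]; then
            exit 0                                      # listener present: nothing to do
        else
            /sbin/service tomcat restart                # remedial action (example service name)
            echo "tomcat restarted" | mail -s "monitor" admin@example.com
            exit 1                                      # signal that intervention was needed
        fi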

    Read the article

  • What router hardware or software should be used when multiple public IPs are routed into the same LAN?

    - by lcbrevard
    I am looking for recommendations to replace a set of consumer-grade (Linksys, Netgear, Belkin) routers with something that can handle more traffic while routing more than one static public IP into the same LAN address space. We have a block of static public IPs, 5 usable, with Comcast Business. Currently four of them are in use for:

    General office access
    Web server
    Mail and DNS servers
    Download and backup web server for a separate business

    All systems (a mixture of physical and virtual) are in the same LAN address space (10.x.y.0/24) to enable easy access between them inside the office. There are 30 or more systems in use depending on which virtual machines are currently active. We have a mixture of Windows, Linux, FreeBSD, and Solaris. Currently a separate consumer-grade router is used for each of the four static addresses, with its WAN address set to the specific static address and a different gateway address for each:

    1. uses 10.x.y.1 - various ports are forwarded to various LAN IPs on systems with gateway 10.x.y.1
    2. uses 10.x.y.254 - port 80 is forwarded to a server with gateway 10.x.y.254
    3. uses 10.x.y.253 - ports for mail and DNS are forwarded to a server with gateway 10.x.y.253
    4. uses 10.x.y.252 - ports as needed are forwarded to a server with gateway 10.x.y.252

    Only router 1 is allowed to serve DHCP, and address reservation based on MAC is used for most of the internal "server" IP addresses so they stay at fixed values. [Some are set static due to limitations in the address reservation capabilities of router 1.] And, yes, this really does work! But... I am looking for: better DHCP with more capable address reservation, and higher capacity so I don't have to periodically power-cycle the routers. One obvious improvement would be to have a real DHCP server and not use a consumer-grade router for that purpose. I am torn between buying a "professional" router such as Cisco, Juniper, or SonicWall versus learning to configure some spare hardware to perform this function. The price goes up extremely rapidly with capabilities for commercial routers! Worse, some routers require licensing based on the number of clients - a disaster in our environment with so many virtual machines. Sorry for such a long posting, but I am getting tired of having to power-cycle routers and deal with shifting IP addresses afterwards!

    Read the article

  • Reviews Cheyney Group Marketing: What accounting softwares are available in the market for small businesses?

    - by user225556
    Accounting is the language of business, and good accounting software can save you hundreds of hours at the business equivalent of Berlitz. There's no substitute for an accounting pro who knows the ins and outs of tax law, but today's desktop packages can help you with everything from routine bookkeeping to payroll, taxes, and planning. Each package also produces files that you can hand off to an accountant as needed. Small-business managers have more accounting software options than ever, including subscription Web-based options that don't require their users to install or update software. Many businesses, however--including those that need to track large inventories or client databases, and those that prefer not to entrust their data to the cloud--may be happier with a desktop tool. We looked at three general-purpose, small-business accounting packages: Acclivity AccountEdgePro 2012 (both the product and the company were previously called MYOB), Intuit QuickBooks Premier 2012, and Sage's Sage 50 Complete 2013 (the successor to Peachtree Complete). All three packages offer a solid array of tools for tracking income and expenses, invoicing, managing payroll, and creating reports. These full-featured and highly mature programs don't come cheap. Acclivity AccountEdge Pro, at $299, is the least expensive; and prices climb if you opt to use common time-saving add-ons such as payroll services, or if you add licenses for multiple user accounts. All three are solid on the basics, but they have distinct differences in style and focus. The more you know about your accounting requirements, the more closely you'll want to look at the software you're thinking of buying. Sage 50 Complete should appeal most to people who understand the fine points of accounting and can use the product's many customization features (especially for businesses that manage inventory). QuickBooks works hard to appeal to newbies who need only the basics and might be intimidated by the level of detail and technical language exposed in the other two packages. At the same time, it also has a slew of third-party add-ons that meet specific needs and greatly expand its capabilities. AccountEdge Pro balances accessibility with a strong feature set at an affordable price. It's especially suitable for businesses that need to provide simultaneous access to multiple users.

    Read the article

  • Cooling Server Rack with Water? Sensible? Reuse energy for small installation?

    - by TomTom
    First: this is not a shopping question; it is not so much about concrete prices but about general feasibility. It makes no sense to go looking for a manufacturer if the approach is bad. I am moving my company to new offices in September, and among other things we will expand and consolidate our number-crunching cluster, which has so far lived in a data center. I have a nice room in the basement prepared now, and I am thinking about cooling. We will likely run up a power usage of around 10 kW by the end of the year. That is a LOT of stuff, and cooling will be expensive. I am located in southern Poland, close to the German border. This is an area where water is available relatively cheaply; "wasting water" is not a concern here. My situation is thus a lot different than, for example, in Spain ;) Physics tells me that to heat 1 liter of water by 1 degree I use 1 kcal, and 1 kWh is (assuming 100% efficiency; water heaters are pretty efficient) roughly 860 kcal. That means 1 kWh can raise roughly 860 liters of water by 1 degree. 10 kW and a 20-degree temperature rise would mean that per hour I need about 430 liters. That is just over 7 liters per minute, and not THAT much ;) We are talking roughly 310 cubic meters a month. Even in summer, the long underground pipes really cool the water down a LOT more ;)

    Question: is such an approach feasible? Has anyone done that? We are talking about a 10 kW installation for now. Is it feasible to reuse that heat? The alternative is a decent cooling system that WILL use around 2.5 kWh for running. Dumping the heat into water would basically (a) give me a quite cold input compared to the outside air, even in summer (i.e. a lower-temperature medium to drop the heat into), and (b) replace the need to actually have the outside cooling (which may be problematic: if the air is 22 degrees, that is a LOT to fight off, but on the other hand the water will be quite cold). I would also possibly save the investment for the outside part of the cooling circuit. Now, the second question: is there a feasible way to heat a house with that? ;) After all, brutally speaking, there is a LOT of energy in that water ;) If it is a bad idea, I stop here; if it is not, I start looking for suppliers. Maybe my math is wrong?

    Read the article

  • [CentOS 4.8] nslookup resolves domains to IPs, but I can't get a response to pings to external servers

    - by Beco
    I have a fresh install of CentOS 4.8 running on an internal development server. I haven't done anything to it besides setting up sudoers and SSH. I can SSH into the server and from there resolve domains to IPs and ping internal servers, but for some reason I don't get any response from pinging external servers. The software firewall is disabled, and the problem is present with both static and DHCP-assigned network configurations. The network domain controller is a Windows Server 2003 box.

        $> nslookup google.com
        Server:         10.254.2.5
        Address:        10.254.2.5#53

        Non-authoritative answer:
        Name:   google.com
        Address: 74.125.47.147
        Name:   google.com
        Address: 74.125.47.99
        <etc...>

    10.254.2.5 is the Win2K3 server.

        $> ping google.com
        PING google.com (74.125.47.106) 56(84) bytes of data.

    It just hangs here indefinitely.

        $> cat /etc/resolv.conf
        ; generated by /sbin/dhclient-script
        search <...snip...>.local
        nameserver 10.254.2.5
        nameserver 10.254.2.124

    10.254.2.124 is the backup DC server, which is currently off and tombstoned by this point. The snipped section is our company name.

        # ifconfig
        eth0      Link encap:Ethernet  HWaddr <snip>
                  inet addr:10.254.2.101  Bcast:10.254.2.255  Mask:255.255.255.0
                  inet6 addr: <snip>/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:80066 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:4421 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:7810133 (7.4 MiB)  TX bytes:590550 (576.7 KiB)
                  Interrupt:225 Base address:0xc000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:32 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:32 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:8104 (7.9 KiB)  TX bytes:8104 (7.9 KiB)

        # route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        10.254.2.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0
        0.0.0.0         10.254.2.5      0.0.0.0         UG    0      0        0 eth0

    And, for good measure, a snapshot of the current ethernet config via the system-config-network GUI. Edit: I don't yet have enough rep to post images, so here's a link. Sorry! system-config-network snapshot

    I'm pretty green when it comes to setting up *nix dev servers and network configuration in general, so please let me know if I've left out critical information, or posted information I shouldn't have posted. Thanks!
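
    A small diagnostic sketch to narrow this down (the external IP is one of the Google addresses resolved above; curl and traceroute are assumed installed). It separates "the gateway is unreachable" from "ICMP is filtered somewhere" from "nothing routes out at all":

        ping -c 3 10.254.2.5                       # is the default gateway itself answering?
        traceroute -n 74.125.47.99                 # where along the path do replies stop?
        curl -sI http://www.google.com | head -1   # does outbound TCP/80 work even if ICMP is dropped?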

    Read the article

  • How to do inclusive range queries when only half-open range is supported (ala SortedMap.subMap)

    - by polygenelubricants
    On SortedMap.subMap
    This is the API for SortedMap<K,V>.subMap:
    SortedMap<K,V> subMap(K fromKey, K toKey): Returns a view of the portion of this map whose keys range from fromKey, inclusive, to toKey, exclusive.
    This inclusive lower bound, exclusive upper bound combo ("half-open range") is prevalent in Java, and while it has its benefits, it also has its quirks, as we shall soon see. The following snippet illustrates a simple usage of subMap:
    static <K,V> SortedMap<K,V> someSortOfSortedMap() {
        return Collections.synchronizedSortedMap(new TreeMap<K,V>());
    }
    //...
    SortedMap<Integer,String> map = someSortOfSortedMap();
    map.put(1, "One");
    map.put(3, "Three");
    map.put(5, "Five");
    map.put(7, "Seven");
    map.put(9, "Nine");
    System.out.println(map.subMap(0, 4)); // prints "{1=One, 3=Three}"
    System.out.println(map.subMap(3, 7)); // prints "{3=Three, 5=Five}"
    The last line is important: 7=Seven is excluded, due to the exclusive upper bound of subMap. Now suppose we actually need an inclusive upper bound; we could try to write a utility method like this:
    static <V> SortedMap<Integer,V> subMapInclusive(SortedMap<Integer,V> map, int from, int to) {
        return (to == Integer.MAX_VALUE) ? map.tailMap(from) : map.subMap(from, to + 1);
    }
    Then, continuing with the above snippet, we get:
    System.out.println(subMapInclusive(map, 3, 7)); // prints "{3=Three, 5=Five, 7=Seven}"
    map.put(Integer.MAX_VALUE, "Infinity");
    System.out.println(subMapInclusive(map, 5, Integer.MAX_VALUE)); // {5=Five, 7=Seven, 9=Nine, 2147483647=Infinity}
    A couple of key observations need to be made:
    - The good news is that we don't care about the type of the values, but...
    - subMapInclusive assumes Integer keys for to + 1 to work. A generic version that also takes e.g. Long keys is not possible (see related questions).
    - Not to mention that for Long, we need to compare against Long.MAX_VALUE instead.
    - Overloads for the numeric primitive boxed types Byte, Character, etc. as keys must all be written individually.
    - A special check needs to be made for toInclusive == Integer.MAX_VALUE, because +1 would overflow and subMap would throw IllegalArgumentException: fromKey > toKey.
    - This, generally speaking, is an overly ugly and overly specific solution.
    - What about String keys? Or some unknown type that may not even be Comparable<?>?
    So the question is: is it possible to write a general subMapInclusive method that takes a SortedMap<K,V>, a K fromKey, and a K toKey, and performs an inclusive-range subMap query?
    Related questions:
    - Are upper bounds of indexed ranges always assumed to be exclusive?
    - Is it possible to write a generic +1 method for numeric box types in Java?
    On NavigableMap
    It should be mentioned that there's a NavigableMap.subMap overload that takes two additional boolean arguments to signify whether the bounds are inclusive or exclusive. Had this been made available in SortedMap, none of the above would even have been asked. Working with a NavigableMap<K,V> for inclusive range queries would have been ideal, but while Collections provides utility methods for SortedMap (among other things), we aren't afforded the same luxury with NavigableMap.
    Related question:
    - Writing a synchronized thread-safety wrapper for NavigableMap
    On an API providing only exclusive upper bound range queries
    Does this highlight a problem with exclusive upper bound range queries? How were inclusive range queries done in the past, when an exclusive upper bound was the only available functionality?
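    One way to answer the closing question, sketched below as a minimal, hypothetical helper (it is not part of the original post): when the map also implements NavigableMap, the inclusive bounds can be passed straight through and the result remains a live view; for a plain SortedMap the best a fully generic helper can do is copy the half-open range and add the upper endpoint back if it is present, which yields a snapshot rather than a view. The class and method names here are illustrative only.

    import java.util.Collections;
    import java.util.NavigableMap;
    import java.util.SortedMap;
    import java.util.TreeMap;

    public class InclusiveRanges {

        // Inclusive-range query over a SortedMap without assuming a numeric key type.
        static <K, V> SortedMap<K, V> subMapInclusive(SortedMap<K, V> map, K from, K to) {
            if (map instanceof NavigableMap) {
                // NavigableMap.subMap takes explicit inclusive/exclusive flags
                // and, like SortedMap.subMap, returns a live view of the map.
                return ((NavigableMap<K, V>) map).subMap(from, true, to, true);
            }
            // Fallback for a plain SortedMap (for example, a synchronized wrapper,
            // which hides the NavigableMap interface): copy the half-open range,
            // then add the upper endpoint back if the map contains it.
            // Note that this is a snapshot, not a view backed by the original map.
            TreeMap<K, V> copy = new TreeMap<K, V>(map.comparator());
            copy.putAll(map.subMap(from, to));
            if (map.containsKey(to)) {
                copy.put(to, map.get(to));
            }
            return copy;
        }

        public static void main(String[] args) {
            SortedMap<Integer, String> map =
                    Collections.synchronizedSortedMap(new TreeMap<Integer, String>());
            map.put(1, "One");
            map.put(3, "Three");
            map.put(5, "Five");
            map.put(7, "Seven");
            map.put(9, "Nine");
            // The synchronized wrapper is not a NavigableMap, so this call takes
            // the copying fallback; 7=Seven is included either way.
            System.out.println(subMapInclusive(map, 3, 7)); // {3=Three, 5=Five, 7=Seven}
        }
    }

    The instanceof branch covers the common case where the underlying map is a TreeMap or ConcurrentSkipListMap; the copying fallback trades the view semantics of subMap for generality, which may or may not be acceptable depending on how the result is used.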

    Read the article

  • LINQ to SQL NullReferenceExceptions: A random needle in a haystack!

    - by Shane
    I'm getting NullReferenceExceptions at seemingly random times in my application and can't track down what could be causing the error. I'll do my best to describe the scenario and setup; any and all suggestions are greatly appreciated! It's a C# .NET 3.5 Web Forms application, but I use the WebFormRouting library built by Phil Haack (http://haacked.com/archive/2008/03/11/using-routing-with-webforms.aspx) to leverage the routing libraries of .NET (usually used in conjunction with MVC) instead of using URL rewriting for my URLs. My database has 60 tables, all normalized. It's just a massive application (SQL Server 2008). All queries are built with LINQ to SQL in code (no stored procedures). Each time, a new instance of my data context is created. I use only one data context, with all relationships defined in 4 relationship diagrams in SQL Server. The data context gets created a lot. I let the closing of the data context be handled automatically. I've heard arguments on both sides about whether you should leave it to be closed automatically or do it yourself; in this case I do it myself. It doesn't seem to matter whether I'm creating a lot of instances of the data context or just one. For example, I've got a vote-up button with the following code, and it errors probably 1 in 10-20 times.
    protected void VoteUpLinkButton_Click(object sender, EventArgs e)
    {
        DatabaseDataContext db = new DatabaseDataContext();
        StoryVote storyVote = new StoryVote();
        storyVote.StoryId = storyId;
        storyVote.UserId = Utility.GetUserId(Context);
        storyVote.IPAddress = Utility.GetUserIPAddress();
        storyVote.CreatedDate = DateTime.Now;
        storyVote.IsDeleted = false;
        db.StoryVotes.InsertOnSubmit(storyVote);
        db.SubmitChanges();

        // If this story is not yet published, check to see if we should publish it. Make sure that
        // it is already approved.
        if (story.PublishedDate == null && story.ApprovedDate != null)
        {
            Utility.MakeUpcommingNewsPopular(storyId);
        }

        // Refresh our page.
        Response.Redirect("/news/" + category.UniqueName + "/" + RouteData.Values["year"].ToString() + "/" +
            RouteData.Values["month"].ToString() + "/" + RouteData.Values["day"].ToString() + "/" +
            RouteData.Values["uniquename"].ToString());
    }
    The last thing I tried was the "Auto Close" flag setting on SQL Server. This was set to true and I changed it to false. It doesn't seem to have done the trick, although it has had a good overall effect. Here's a detailed exception that wasn't caught; I also get slightly different errors when they are caught by my try/catch blocks.
    System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. --- System.NullReferenceException: Object reference not set to an instance of an object.
    at System.Web.Util.StringUtil.GetStringHashCode(String s)
    at System.Web.UI.ClientScriptManager.EnsureEventValidationFieldLoaded()
    at System.Web.UI.ClientScriptManager.ValidateEvent(String uniqueId, String argument)
    at System.Web.UI.WebControls.TextBox.LoadPostData(String postDataKey, NameValueCollection postCollection)
    at System.Web.UI.Page.ProcessPostData(NameValueCollection postData, Boolean fBeforeLoad)
    at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
    --- End of inner exception stack trace ---
    at System.Web.UI.Page.HandleError(Exception e)
    at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
    at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
    at System.Web.UI.Page.ProcessRequest()
    at System.Web.UI.Page.ProcessRequest(HttpContext context)
    at ASP.forms_news_detail_aspx.ProcessRequest(HttpContext context)
    at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
    at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
    HELP!!!

    Read the article

< Previous Page | 439 440 441 442 443 444 445 446 447 448 449 450  | Next Page >