Search Results

Search found 11735 results on 470 pages for 'global variables'.

Page 349/470 | < Previous Page | 345 346 347 348 349 350 351 352 353 354 355 356  | Next Page >

  • Cherrypy web application won't communicate outside localhost via VPN

    - by Geoffrey Shea
    I'm trying to run a Python 2.7/CherryPy web server on Win 7 which is connected to a VPN to establish a dedicated IP address. (If I run the exact same application on Win XP connected to the VPN it works fine.) On Win 7 I tried configuring it to use port 8080, 8005, or 80 with no improvement. I turned off Windows Firewall altogether to test and there was no improvement. If I run Apache on the Win 7 machine on port 80 it works fine, so I'm pretty sure it's not the VPN service or router. If I go to WhatIsMyIP.com it shows that I have the IP address being provided by the VPN. Here is the Python code, but I suspect the problem is the network configuration:

        import cherrypy

        class HelloWorld:
            def index(self):
                return "Hello world!3"
            index.exposed = True

        cherrypy.root = HelloWorld()
        cherrypy.config.update({"global": {
            "server.environment": "production",
            "server.socketPort": 8005
        }})
        cherrypy.server.start()

    This will return a web page if I go to localhost:8005, but not if I go to the VPN IP address:8005 from another machine. As I said, if I run Apache on the Win 7 machine on port 80 I can see it at localhost:80 AND at the VPN IP address:80 from another machine. Thanks for any light you can shed! Geoffrey
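
    A quick way to separate a binding problem from a routing problem (a sketch; 203.0.113.10 stands in for the real VPN address):

        # from another machine on the VPN, see whether anything answers on the advertised address
        curl -v http://203.0.113.10:8005/

        # on the Windows 7 box itself (cmd.exe, not bash), check which address the server is listening on:
        #   netstat -an | findstr 8005
        # 127.0.0.1:8005 means the server only bound to loopback; 0.0.0.0:8005 means it bound to all interfaces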

    Read the article

  • Apple: Bind a key to a commandline command?

    - by Stefan Lasiewski
    I have a Mac Powerbook running Leopard (10.5.8). Does Leopard provide an easy way to bind keys to commands which are typically run on the commandline? For example, I can open up Terminal.app and run the command /System/Library/Frameworks/ScreenSaver.framework/Resources/ScreenSaverEngine.app/Contents/MacOS/ScreenSaverEngine which will activate the screensaver and lock my screen. What if I want to bind 'Apple-key L' to this command and execute this globally, regardless of which application is in use at the moment? Can I do this, or can I only run ScreenSaverEngine from a Terminal window? I tried to set up global keyboard shortcuts, but it seems that this won't allow me to bind a key to an arbitrary shell command: Note: You can create keyboard shortcuts only for existing menu commands. You cannot define keyboard shortcuts for general purpose tasks such as opening an application or switching between applications. I tried to set up a application keyboard shortcut, but commands like ScreenSaverEngine don't seem to be an application. Note that this Screensaver/Lock screen is just one example. I have come across other nifty commands which I might want to bind to a key-combination as well. I can do this in Gnome and Windows (with varying success). How about with Leopard?

    Read the article

  • Outlook 2010 not resolving SMTP address to Display Name

    - by Ben
    I have a weird problem where a user (a director, naturally) has an odd problem with the way the To field shows in his Outlook. We are using Outlook 2010 (with RPC/HTTP) and Exchange 2003 at the back end. Most of his mail shows in his mailbox with the To: field as normal (e.g. "Fred Bloggs" in the To field). However, some mails come in showing the To: field as [email protected]. (Apparently this is an issue!) For example, most show the display name, but a few come in with the raw address (screenshots omitted). There doesn't appear to be a pattern to this. I have tried to replicate by:

    - sending from any specific senders to see if it recurs (it doesn't)
    - typing his full name in my Outlook and sending (it resolves as normal)
    - sending programmatically (e.g. from script) - it still resolves OK
    - forcing a "[email protected]" entry in my Outlook - it resolves as soon as I hit Enter
    - sending in cached mode
    - sending disconnected

    Has anyone got any ideas how I can either replicate the problem or fix it? I can't tell at the moment whether it is a problem at his end or the sender's.

    EDIT: This seems to be a global issue, following some more digging. Most people seem to have a few emails in their inbox addressed to their SMTP address rather than their display name.

    Read the article

  • Reliable custom Windows shortcut keys?

    - by Peter Baer
    I have global Windows shortcut keys assigned to several different cmd.exe instances. I do this by creating shortcuts to cmd.exe on my desktop, and assigning each one a unique shortcut key (for example, CTRL + SHIFT + U). Pretty basic stuff. I'm using Win2K8 (R1 and R2). This works just fine... most of the time. But with infuriating regularity, sometimes it doesn't. Or it will work with a long delay (many seconds). It doesn't matter what app currently has focus (it can even be one of the command prompts). It doesn't matter what keys I assign (I've tried a few variations of WIN, CTRL and SHIFT). I did notice that this is often, but not always, correlated with explorer.exe struggling in some way or another (say, an explorer window opened to a file share that's unavailable, or an app being unresponsive, or whatever). In other words the shortcut key handling appears to be very sensitive to unrelated system activity. Note that whenever I have this problem I can always successfully ALT + TAB to the window I want to get to, but that's tedious. I use the shortcuts to these command windows hundreds of times a day so even a 1% failure rate becomes really annoying. Is there a way to fix this, or is there some third-party utility out there that will RELIABLY intercept custom key combinations to bring focus to whatever apps I want, in a way that is independent of other system activity? ADDENDUM: There is a property of the Windows shortcuts that I would not want to lose if switching to a third-party hotkey tool: Windows shortcuts are idempotent. Once you've launched a shortcut to some app, pressing the shortcut key combo again takes you to the already launched process - it does not launch a new process.

    Read the article

  • ffmpeg cutting video duration

    - by Steve Spence
    When using ffmpeg on Linux, my 4.3 GB, 2:21 video is being chopped down to 1:56 duration. I'm trying to reduce file size, but not lose frames.

        steve@steve-OptiPlex-170L:~/Desktop$ ffmpeg -i microbe.avi microbe.mp4
        ffmpeg version 0.8.3-4:0.8.3-0ubuntu0.12.04.1, Copyright (c) 2000-2012 the Libav developers
          built on Jun 12 2012 16:37:58 with gcc 4.6.3
        * THIS PROGRAM IS DEPRECATED *
        This program is only provided for compatibility and will be removed in a future release. Please use avconv instead.
        Input #0, avi, from 'microbe.avi':
          Duration: 00:02:21.80, start: 0.000000, bitrate: 242311 kb/s
            Stream #0.0: Video: rawvideo, bgr24, 1280x960, 10 tbr, 10 tbn, 10 tbc
        Incompatible pixel format 'bgr24' for codec 'mpeg4', auto-selecting format 'yuv420p'
        [buffer @ 0x9f861e0] w:1280 h:960 pixfmt:bgr24
        [avsink @ 0x9f86440] auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'out'
        [scale @ 0x9f7d800] w:1280 h:960 fmt:bgr24 -> w:1280 h:960 fmt:yuv420p flags:0x4
        Output #0, mp4, to 'microbe.mp4':
          Metadata:
            encoder         : Lavf53.21.0
            Stream #0.0: Video: mpeg4, yuv420p, 1280x960, q=2-31, 200 kb/s, 10 tbn, 10 tbc
        Stream mapping:
          Stream #0.0 -> #0.0
        Press ctrl-c to stop encoding
        frame= 1164 fps=  6 q=31.0 Lsize=    3775kB time=116.40 bitrate= 265.7kbits/s
        video:3765kB audio:0kB global headers:0kB muxing overhead 0.272870%
        steve@steve-OptiPlex-170L:~/Desktop$
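
    One way to check whether frames are actually being dropped is to compare frame counts before and after the conversion (a sketch; assumes a reasonably recent build with ffprobe available):

        # count decoded video frames in the source and in the output
        ffprobe -v error -count_frames -select_streams v:0 \
                -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 microbe.avi
        ffprobe -v error -count_frames -select_streams v:0 \
                -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 microbe.mp4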

    Read the article

  • How to enable hotlink protection without hardcoding my domain in the Apache config file?

    - by Jeff
    Been surfing around for a solution for a couple days now. How do I enable Apache hotlink protection without hardcoding my domain in the config file, so I can port the code to my other domains without having to update the config file every time? This is what I have so far:

        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^http://www\.example\.com [NC]
        RewriteRule \.(gif|ico|jpe|jpeg|jpg|png)$ - [NC,F,L]

    ... and this is what Apache suggests:

        SetEnvIf Referer example\.com localreferer
        <FilesMatch \.(jpg|png|gif)$>
            Order deny,allow
            Deny from all
            Allow from env=localreferer
        </FilesMatch>

    ... both of which hardcode the domain in their rules. The closest I came to finding any info that covers this is right here on ServerFault, but the conclusion was that it cannot be done. Based on my research, that appears to be true, but I didn't find any questions or commentary dedicated solely to this question. If anyone's curious, here is the link to the Apache 2 docs that cover this topic. Note that Apache variables (e.g. %{HTTP_REFERER}) can only be used in the RewriteCond test string and the RewriteRule substitution arguments.
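
    Whichever rule set ends up in place, the Referer header can be faked from the command line to test it (a sketch; hostnames and file names are placeholders):

        # a foreign referer should be refused (expect 403) ...
        curl -s -o /dev/null -w "%{http_code}\n" -e "http://other-site.example.org/" http://www.example.com/logo.png
        # ... while a local referer should be served (expect 200)
        curl -s -o /dev/null -w "%{http_code}\n" -e "http://www.example.com/page.html" http://www.example.com/logo.png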

    Read the article

  • Mac Mavericks, ngircd localhost works, private IP doesn't

    - by user221945
    I have configured ngircd to listen on my private IP address. It doesn't. Localhost works fine. Configuration test:

        ngIRCd 21-IDENT+IPv6+IRCPLUS+SSL+SYSLOG+TCPWRAP+ZLIB-x86_64/apple/darwin13.2.0
        Copyright (c)2001-2013 Alexander Barton () and Contributors.
        Homepage: http://ngircd.barton.de/
        This is free software; see the source for copying conditions. There is NO
        warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
        Reading configuration from "/opt/local/etc/ngircd.conf" ...
        OK, press enter to see a dump of your server configuration ...

        [GLOBAL]
          Name = irc.bellbookandpistol.com
          AdminInfo1 = Jaedreth
          AdminInfo2 = San Diego County CA, US
          AdminEMail = [email protected]
          HelpFile = /opt/local/share/doc/ngircd/Commands.txt
          Info = Server Info Text
          Listen = 10.0.1.5,127.0.0.1
          MotdFile =
          MotdPhrase = "Welcome to irc.bellbookandpistol.com"
          Password =
          PidFile =
          Ports = 6667
          ServerGID = wheel
          ServerUID = root
        [LIMITS]
          ConnectRetry = 60
          IdleTimeout = 0
          MaxConnections = 0
          MaxConnectionsIP = 6
          MaxJoins = -1
          MaxNickLength = 9
          MaxListSize = 0
          PingTimeout = 120
          PongTimeout = 20
        [OPTIONS]
          AllowedChannelTypes = #&+
          AllowRemoteOper = no
          ChrootDir =
          CloakHost =
          CloakHostModeX =
          CloakHostSalt = kBih5mu\kVI!DC6eifT(hd4m/0'zb/=:
          CloakUserToNick = no
          ConnectIPv4 = yes
          ConnectIPv6 = no
          DefaultUserModes =
          DNS = yes
          IncludeDir = /opt/local/etc/ngircd.conf.d
          MorePrivacy = no
          NoticeAuth = no
          OperCanUseMode = no
          OperChanPAutoOp = yes
          OperServerMode = no
          RequireAuthPing = no
          ScrubCTCP = no
          SyslogFacility = local5
          WebircPassword =
        [SSL]
          CertFile =
          CipherList = HIGH:!aNULL:@STRENGTH
          DHFile =
          KeyFile =
          KeyFilePassword =
          Ports =
        [OPERATOR]
          Name = [REDACTED]
          Password = [REDACTED]
          Mask =
        [CHANNEL]
          Name = #BBP
          Modes = tnk
          Key =
          MaxUsers = 0
          Topic = Welcome to the Bell, Book and Pistol IRC Server!
          KeyFile =

    As you can see, it should be listening on 10.0.1.5, but it isn't. After turning on Apache manually, port 80 works on 10.0.1.5, but port 6667 doesn't. It only works on localhost. Is there some terminal command I could use or some config file I could edit to get this to work?
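
    To see which addresses the daemon actually bound to, the listening sockets can be inspected directly (a sketch, assuming ngircd is already running):

        # list all listeners on the IRC port; the NAME column shows the bound address
        sudo lsof -nP -iTCP:6667 -sTCP:LISTEN
        # or, with netstat on OS X:
        netstat -an | grep 6667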

    Read the article

  • Faking a Linux environment without chroot

    - by Pascal
    For a university project I want to test a C++11 program on a 32-core machine. Unfortunately the machine has Ubuntu 12.04 with GCC 4.6 installed (we need GCC 4.7 because of some C++11 threading features). In such an environment I would normally run a chroot with a custom Linux (say a debootstrap with Ubuntu 12.10). Since we don't get root access on the machine we can't use chroot. So far I have prepared a run-time environment using debootstrap for our code and compiled it in the debootstrap environment, then copied it onto the server (using rsync). In order to run our C++ code I set the LD_LIBRARY_PATH:

        export LD_LIBRARY_PATH=~/debootstrap/usr/lib/:~/debootstrap/lib64/:~/debootstrap/usr/lib/x86_64-linux-gnu/:~/debootstrap/lib/x86_64-linux-gnu/:$LD_LIBRARY_PATH

    and so far our code seems to run. I'm however stuck with our Python code. It doesn't seem to be sufficient to set the paths manually:

        export PYTHONPATH=~/debootstrap/usr/lib/python2.7/dist-packages:~/debootstrap/usr/lib/python2.7:~/debootstrap/usr/lib/python2.7/plat-linux2:~/debootstrap/usr/lib/python2.7/lib-tk:~/debootstrap/usr/lib/python2.7/lib-dynload:~/debootstrap/usr/local/lib/python2.7/dist-packages:~/debootstrap/usr/lib/pymodules/python2.7:~/debootstrap/usr/lib/python2.7/dist-packages/PIL:~/debootstrap/usr/lib/python2.7/dist-packages/gtk-2.0:~/debootstrap/usr/lib/python2.7

    Executing our script results in:

        ImportError: No module named _path

    Is there an easier way to accomplish a "fake" chroot than just overriding and creating environment variables? Note: I need Python since we created a custom C++/Python module in order to run our tests. Maybe I should create two questions from this.
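
    One way to narrow down where the import is failing is to let the interpreter report its search path and every location it tries for the missing module (a sketch; _path is the module name from the error above):

        # show the effective module search path
        python -c 'import sys; print("\n".join(sys.path))'
        # trace every file the interpreter tries while importing the failing module
        python -vv -c 'import _path' 2>&1 | grep -i _path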

    Read the article

  • ssh-add insists on passphrase

    - by Sam Walton
    I have a new ssh key problem. I have successfully used keys for years with Heroku, Git and other servers so I can log in without having to issue a passphrase. A few weeks ago, I was unable to push a git repository on my machine to Heroku, and it responded with "Permission denied (publickey)". Hmm. Everything else but this Heroku function still works. So I ran ssh-keygen -t rsa -C "newHeroku" with no passphrase (hit return so it would be empty). Then I entered:

        sudo chmod 600 ~/.ssh/newHeroku*

    followed by:

        ssh-add ~/.ssh/newHeroku.pub

    Hitting return for the passphrase prompt, it exits without error. The next step is:

        ssh-add /Users/sam/.ssh/newHeroku.pub

    To verify that it's "live" I enter:

        ssh-add -l

    to which the output is still "The agent has no identities." Okay, to eliminate variables, I repeated the key generation process but entered a passphrase for the new key. I ssh-add the new key and get the "Enter passphrase" prompt as expected. Now this is why I'm posting here and not on a Heroku blog: ssh-add fails because the passphrase I used keeps getting rejected. It appears, even though I have no problem with my keys elsewhere, that something is wrong with the passphrase, because I get errors on the key that expects one. One question: should I expect the passphrase request from ssh-add when I have not set a passphrase? It's been suggested that this is a clue, so I offer it. Or maybe I have a poor understanding of what ssh-add is doing. Wouldn't be the first time I asked a stupid question. Also, I'm on Lion and have installed no system updates in the few weeks of this period except application updates.
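
    For what it's worth, ssh-add normally expects the private key file rather than the .pub file, so a minimal sanity check might look like this (a sketch; key names taken from the question):

        # add the private half of the key pair and confirm the agent now lists it
        ssh-add ~/.ssh/newHeroku
        ssh-add -l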

    Read the article

  • Lenovo System Update Breaks Windows Live

    - by wolfvilleian
    Hey everyone, I've been racking my brain (and fingers from typing) trying to solve this issue, to no avail. I have a Lenovo computer and I installed their System Update tool to install all my missing drivers. However, after this tool is installed, Windows Live 2011 breaks: it will no longer sign in, giving error number 8e5e0247, and all the solutions online haven't helped. It appears that a language setting somewhere gets set to en_ms, and I'm en_ca. My computer is running Windows 7 x64. When I try to sign on to Messenger it gives an error that (with some research) means my locale or language is not supported; I've searched my computer for any reference to en_ms but find none. A few other things seem to have broken as well: when a UAC box comes up it is no longer able to identify the publisher of anything, and the indexing service does not work (I'm not sure if the indexing issue is related, but the UAC issue happened right after installation). I had this issue before but I don't remember how I fixed it; I believe it had something to do with environment variables. When it goes to sign in it gets as far as "Loading contacts", then stops and goes back to the sign-in screen. Has anyone seen this before? Thanks

    Read the article

  • Mailgun Is Not Detecting My New MX Records

    - by Tyler Crompton
    When I issue a dig command to verify my MX records, I get the following output:

        $ dig example.com MX

        ; <<>> DiG 9.9.5-3-Ubuntu <<>> example.com MX
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47700
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 5, ADDITIONAL: 5

        ;; OPT PSEUDOSECTION:
        ; EDNS: version: 0, flags:; udp: 4096
        ;; QUESTION SECTION:
        ;example.com.                   IN      MX

        ;; ANSWER SECTION:
        example.com.            85468   IN      MX      10 mxa.mailgun.org.
        example.com.            85468   IN      MX      10 mxb.mailgun.org.

        ;; REMAINDER OF OUTPUT REMOVED FOR BREVITY

    However, when I click "Check DNS Records Now" on Mailgun, it verifies the changes to the TXT and CNAME records but says that my MX records have not been changed:

        Type | Priority | Enter This Value | Current Value
        -----+----------+------------------+--------------------
        MX   | 10       | mxa.mailgun.org  | 10 mail.example.com
        MX   | 10       | mxb.mailgun.org  | 10 mail.example.com

    I updated these records three to four hours ago. I know it said to wait up to twenty-four to forty-eight hours, but I feel that if it detected the other DNS changes, then it should detect the MX record changes. Am I being impatient, or is this a legitimate concern? What do you suggest I do? Note: I'd create a Mailgun tag for this, as I feel it'd be appropriate, but I don't have enough reputation to do so.
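
    Local dig results can come from a cached answer, so one way to rule caching out is to query one of the domain's authoritative name servers directly (a sketch; example.com and the name server stand in for the real ones):

        # find the authoritative name servers, then ask one of them for the MX records
        dig +short NS example.com
        dig +short MX example.com @ns1.nameserver.example.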

    Read the article

  • What does this example bash startup script do?

    - by Dimitri
    I am trying to set up GNU Octave on my computer (Mac OS X 10.7.4). I am a newbie in using Terminal and I need help to understand what the following script actually does:

        if [ -f ~/.bashrc ]; then
            . ~/.bashrc
        fi
        PATH=$PATH:/usr/local/bin
        BASH_ENV=~/.bashrc
        export BASH_ENV PATH
        export GNUTERM=aqua
        alias octave="/Applications/Octave.app/Contents/Resources/bin/octave"
        alias gnuplot="/Applications/Gnuplot.app/Contents/Resources/bin/gnuplot"

    (taken from here: http://wikibox.stanford.edu/me112/index.php/Main/OctaveMatlabNotes)

    So this script begins with a simple conditional if statement. I don't understand the conditional expression - what are -f and .bashrc? What does the statement ". ~/.bashrc" actually do? Then two variables are defined, PATH and BASH_ENV. Why are they exported? Why is GNUTERM=aqua exported even though it's not defined anywhere? All I need is a script that would allow me to run Octave by simply typing octave in the terminal. I don't need an alias for gnuplot. Thanks
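
    If the only goal is the octave command, a much smaller startup file would do; a minimal sketch of a ~/.bash_profile using the path from the script above:

        # make "octave" on the command line start the Octave.app binary
        alias octave="/Applications/Octave.app/Contents/Resources/bin/octave"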

    Read the article

  • Switching from Onboard intel to Nvidia Dedicated GPU

    - by Anarkie
    How can I switch from Intel onboard graphics to the Nvidia dedicated GPU? When I go to the Windows screen resolution dialog I see Intel, and I can't change it. In Device Manager I see both adapters are there and the Nvidia is recognized. I saw no option to set one as primary, so I disabled the Intel adapter - black screen! I rebooted and re-enabled Intel. I right-clicked on the desktop, chose "Nvidia Control Panel", and in the 3D options chose the desired game I want to play with the high-performance Nvidia processor, but it didn't switch when I started the game. Then I set the preferred GPU in the global settings to the high-performance Nvidia for everything, and it still didn't change. I understand there is a switching option between the two to save the battery etc., but I don't see this switch when it is necessary. Can't I also switch manually? Is there a manual switch Fn key? I looked but couldn't find one. Why I want to do this: 1) Better game performance. 2) I want to play an old game from 2002 (Diablo 2 LOD); when I start the game there are black bars on the sides, so the screen becomes just smaller, which I dislike! I heard this is Intel's way of centering the display, but instead I would like to scale or expand it to fit the widescreen (fullscreen), which should be possible with Nvidia. My notebook specs: Fujitsu Lifebook AH531, Win7, 64-bit, i5, Intel HD Graphics onboard, Nvidia GT 525. I didn't install the Nvidia card later; it was always installed and ready from the moment I turned on the computer the first time. How I determined that the cards weren't switched when I am playing the game: with the Windows key I exited from the game, then looked at the screen resolution menu and still saw Intel, and the game still had black bars. I know the Intel GPU should be enough for Diablo 2, but I am interested in this answer for further games - I don't always play Diablo. What if I install an up-to-date game, for example? Then the Intel will not be sufficient. I would like to learn the switch option.

    Read the article

  • How to configure traffic from a specific IP hardcoded to an IP to forward to another IP:PORT using iptables

    - by cclark
    Unfortunately we have a client who has hardcoded a device to point at a specific IP and port. We'd like to redirect traffic from their IP to our load balancer, which will send the HTTP POSTs to a pool of servers able to handle that request. I would like existing traffic from all other IPs to be unaffected. I believe iptables is the best way to accomplish this and I think this command should work:

        /sbin/iptables -t nat -A PREROUTING -s $CUST_IP -j DNAT -p tcp --dport 8080 -d $CURR_SERVER_IP --to-destination $NEW_SERVER_IP:8080

    Unfortunately it isn't working as expected. I'm not sure if I need to add another rule, potentially in the POSTROUTING chain? Below I've substituted the variables above with real IPs and tried to replicate the layout in my test environment in incremental steps.

        $CURR_SERVER_IP = 192.168.2.11
        $NEW_SERVER_IP  = 192.168.2.12
        $CUST_IP        = 192.168.0.50

    Port forward on the same IP:

        /sbin/iptables -t nat -A PREROUTING -p tcp -d 192.168.2.11 --dport 16000 -j DNAT --to-destination 192.168.2.11:8080

    Works exactly as expected. IP and port forward to a different machine:

        /sbin/iptables -t nat -A PREROUTING -p tcp -d 192.168.2.11 --dport 16000 -j DNAT --to-destination 192.168.2.12:8080

    Connections seem to time out. Restrict the IP and port forward so it only applies to requests from a specific IP:

        /sbin/iptables -t nat -A PREROUTING -p tcp -s 192.168.0.50 -d 192.168.2.11 --dport 16000 -j DNAT --to-destination 192.168.2.12:8080

    Times out as well, probably for the same reason as the previous entry. Does anyone have any insights or suggestions? thanks,
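
    When the DNAT target is a different machine, the redirecting host also has to forward the packets and make sure the replies come back through it; a sketch of the two pieces that are often missing (IPs as in the test setup above):

        # allow the kernel to forward packets at all
        sysctl -w net.ipv4.ip_forward=1

        # rewrite the source of the forwarded packets so 192.168.2.12 replies to
        # 192.168.2.11 instead of straight back to the client
        /sbin/iptables -t nat -A POSTROUTING -p tcp -d 192.168.2.12 --dport 8080 -j MASQUERADE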

    Read the article

  • Slow browsing through IE on Windows Server 2012

    - by Volodymyr
    We've run into a strange issue on freshly installed servers. H/W: IBM server X3550 M4 7914; OS: Windows Server 2012 Std. When we try to browse on the servers through IE, not all sites open, or it takes too long to open the page; i.e. very few of them can be opened. Local firewalls are disabled. The servers are in a new subnet and traffic is allowed for it. The VLAN is configured properly. Another Windows Server 2012 host is running OK and Internet access works fine, but it is a VM running on Hyper-V 2012. No proxy is used on the network. At the same time, if one tries to establish a telnet session to any site on ports 80/443 - it does work. Google works as well. I've tried configuring a single QLogic adapter to check if the issue remains - it does. Teaming is configured with the means of QLogic, not by the built-in functionality. IE Enhanced Security is disabled. IE settings were reset, more than once. Why certain sites work while others don't - I don't know. I also tried to disable ECN capability and restart the server - no luck:

        netsh int tcp set global ecncapability=disabled

    Any thoughts?

    UPD1: VMQ is disabled. The servers are not running Hyper-V.

    UPD2: The servers were rebuilt from scratch (got a mail a few minutes ago). The issue still remains. Teaming is now configured with the means of Windows Server 2012.

    Read the article

  • Isolating Apache virtualhosts from the rest of the system

    - by JesperB
    I am setting up a web server that will host a number of different web sites as Apache VirtualHosts, and each of these will have the possibility to run scripts (primarily PHP, possibly others). My question is how to isolate each of these VirtualHosts from each other and from the rest of the system (see the sketch after this entry). I don't want e.g. website X to read the configuration of website Y or any of the server's "private" files. At the moment I have set up the VirtualHosts with FastCGI, PHP and suEXEC as described here (http://x10hosting.com/forums/vps-tutorials/148894-debian-apache-2-2-fastcgi-php-5-suexec-easy-way.html), but suEXEC only prevents users from editing/executing files other than their own - the users can still read sensitive information such as config files. I have thought about removing the UNIX global read permission for all files on the server, as this would fix the above problem, but I'm not sure if I can safely do this without disrupting the server's function. I also looked into using chroot, but it seems that this can only be done on a per-server basis, and not on a per-virtual-host basis. I'm looking for any suggestions that will isolate my VirtualHosts from the rest of the system. PS: I'm running Ubuntu 12.04 server.
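
    As an illustration of the per-site variant of the "drop global read" idea, each site can get its own user/group and a document root that other site users cannot read (a sketch; user, group and path names are examples):

        # one system user per site, owning only its own tree
        adduser --system --group sitex
        chown -R sitex:sitex /var/www/sitex
        chmod -R o-rwx /var/www/sitex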

    Read the article

  • How to change mount to grant user write permissions?

    - by nals
    I am on TomatoUSB, and using the feature to have a NAS. The only way I can write to the Samba share is if I force root:

        [global]
        interfaces = 127.0.0.1, 192.168.1.1/24
        bind interfaces only = no
        workgroup = WORKGROUP
        netbios name = TOMATO
        security = share
        wins support = yes
        name resolve order = wins lmhosts hosts bcast
        guest account = nobody

        [Public]
        path = /mnt/sda2
        read only = no
        public = yes
        only guest = yes
        guest ok = yes
        browseable = yes
        comment = Network share
        force user = root
        writeable = yes

    I don't really like the idea of having to use root to allow write access to my share. I already have a Samba account named nobody created to allow access to the share. However, every time I try to write I get an access denied error. fstab:

        /dev/sda2 /mnt/sda2 vfat defaults 0 0

    Furthermore, every time I try to chmod 777 /tmp/mnt/sda2 the permissions are not changed, and no error is produced. They stay 755:

        drwxr-xr-x    2 root     root          4096 Jun  4 01:49 sda2

    Basically: how can I give the user nobody write permissions to my mount?

        dev name:  /dev/sda2
        dev mount: /tmp/mnt/sda2
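
    For what it's worth, FAT filesystems have no Unix permission bits, so chmod on the mount point is a no-op; ownership and mode are set with mount options instead. A minimal sketch using the device and mount point from the question:

        # remount the drive so every user (including Samba's guest account) may write to it
        umount /mnt/sda2
        mount -t vfat -o umask=0000 /dev/sda2 /mnt/sda2
        # fstab equivalent:
        #   /dev/sda2  /mnt/sda2  vfat  defaults,umask=0000  0 0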

    Read the article

  • Distributed storage and computing

    - by Tim van Elteren
    Dear Serverfault community, After researching a number of distributed file systems for deployment in a production environment, with the main purpose of performing both batch and real-time distributed computing, I've identified the following list of potential candidates, mainly on maturity, license and support:

    - Ceph
    - Lustre
    - GlusterFS
    - HDFS
    - FhGFS
    - MooseFS
    - XtreemFS

    The key properties that our system should exhibit: an open source, liberally licensed, yet production-ready, e.g. a mature, reliable, community- and commercially-supported solution; ability to run on commodity hardware, preferably be designed for it; provide high availability of the data with the most focus on reads; high scalability, so operation over multiple data centres, possibly on a global scale; removal of single points of failure with the use of replication and distribution of (meta-)data, e.g. provide fault tolerance. The sensitivity points that were identified, and resulted in the following questions, are: transparency to the processing layer / application with respect to data locality, e.g. knowing where data is physically located on a server level, mainly for resource allocation and fast processing, high performance - how can this be accomplished? Do you know from experience which solutions provide this transparency and to what extent? POSIX compliance, or conformance, is mentioned on the wiki pages of most of the above listed solutions. The question here mainly is: how relevant is support for the POSIX standard? Hadoop, for example, isn't POSIX compliant by design - what are the pros and cons? And what about the difference between synchronous and asynchronous operation of a distributed file system? Though a synchronous distributed file system has the preference because of reliability, it also imposes certain limitations with respect to scalability. What would be, from your expertise, the way to go on this? I'm looking forward to your replies. Thanks in advance! :) With kind regards, Tim van Elteren

    Read the article

  • jdbc4 CommunicationsException

    - by letronje
    I have a machine running a Java app talking to a MySQL instance running on the same machine. The app uses the JDBC4 drivers from MySQL. I keep getting com.mysql.jdbc.exceptions.jdbc4.CommunicationsException at random times. Here is the whole message:

        Could not open JDBC Connection for transaction; nested exception is
        com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully
        received from the server was 25899 milliseconds ago. The last packet sent successfully
        to the server was 25899 milliseconds ago, which is longer than the server configured
        value of 'wait_timeout'. You should consider either expiring and/or testing connection
        validity before use in your application, increasing the server configured values for
        client timeouts, or using the Connector/J connection property 'autoReconnect=true' to
        avoid this problem.

    For MySQL, the value of the global 'wait_timeout' and 'interactive_timeout' is set to 3600 seconds and 'connect_timeout' is set to 60 secs. The wait_timeout value is much higher than the 26 secs (25899 msecs) mentioned in the exception trace. I use DBCP for connection pooling and here is the Spring bean config for the datasource:

        <bean id="dataSource" destroy-method="close" class="org.apache.commons.dbcp.BasicDataSource">
            <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
            <property name="url" value="jdbc:mysql://localhost:3306/db"/>
            <property name="username" value="xxx"/>
            <property name="password" value="xxx"/>
            <property name="poolPreparedStatements" value="false"/>
            <property name="maxActive" value="3"/>
            <property name="maxIdle" value="3"/>
        </bean>

    Any idea why this could be happening? Will using c3p0 solve the problem?

    Read the article

  • Samba share will not connect (was working yesterday)

    - by David Gard
    I have a CentOS webserver with a Samba share set up (\\webserver\websites). I was connected to this share just yesterday without issue, but today my Windows 8 PC will not connect to it. I've also tried making a connection from Windows 7 and Windows XP, all without success. I initially tried restarting my computer, but that did not work. I then tried restarting the Samba service on the webserver (service smb restart), and when that failed I restarted the webserver. All of that was to no avail, and I still cannot connect to the share. The webserver is contactable from my PC (and the others I tried), as the websites it hosts work fine and I'm able to Putty to the server. When connected to the webserver, I can see that Samba is running by using service smb status:

        service smb status
        smbd (pid 4685) is running...
        nmbd (pid 4688) is running...

    Can anyone please help me to get this share working? Here is my full Samba config (/etc/samba/smb.conf):

        [global]
        workgroup = MYGROUP
        server string = Samba Server %v
        log file = /var/log/samba/log.%m
        max log size = 50
        security = user
        encrypt passwords = yes
        socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
        local master = no

        [websites]
        comment = Websites
        browseable = yes
        writable = yes
        path = /var/www/html/
        valid users = dgard
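
    A couple of server-side checks that can narrow this down (a sketch; run on the webserver itself):

        # validate the config file and show the shares Samba thinks it is serving
        testparm -s
        # try the share locally, bypassing the Windows clients and the network path
        smbclient -L //localhost -U dgard
        smbclient //localhost/websites -U dgard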

    Read the article

  • sysbench memory test on ec2 small instance

    - by caribio
    I'm seeing a problem with the sysbench memory test (the default version that's compiled in). This is on Ubuntu Maverick, sysbench installed via apt-get install sysbench. Running the same thing on Ubuntu @ Rackspace worked just as expected. While the CPU and I/O tests worked fine on the EC2 servers, the memory test just runs without doing anything (notice the 0M in the test results). The instance used was the publicly available 'stock' Ubuntu image with no changes to it:

        ./ec2-run-instances ami-ccf405a5 --instance-type m1.small --region us-east-1 --key mykey

    Supplying more arguments (such as: --memory-block-size=1K --memory-total-size=102400M) didn't help. What am I doing wrong? Thanks.

        sysbench --num-threads=4 --test=memory run
        sysbench 0.4.12:  multi-threaded system evaluation benchmark

        Running the test with following options:
        Number of threads: 4

        Doing memory operations speed test
        Memory block size: 1K

        Memory transfer size: 0M

        Memory operations type: write
        Memory scope type: global
        Threads started!
        Done.

        Operations performed: 0 (    0.00 ops/sec)

        0.00 MB transferred (0.00 MB/sec)

        Test execution summary:
            total time:                          0.0003s
            total number of events:              0
            total time taken by event execution: 0.0000
            per-request statistics:
                 min:                      18446744073709.55ms
                 avg:                                  0.00ms
                 max:                                  0.00ms

        Threads fairness:
            events (avg/stddev):           0.0000/0.00
            execution time (avg/stddev):   0.0000/0.00

    Read the article

  • Error in mysql "max_allowed_packet" VPS godaddy

    - by focusmantra
    I am getting the error "Serious session error detected. Please notify administrator, this problem is most probably caused by small value in max_allowed_packet MySQL setting." This error generally comes up every 20-25 minutes, and when it does it logs the user out; they log in again, start over, and after some time the same issue occurs again. I tried changing the max_allowed_packet setting but got the error "access denied; You need SUPER privilege for this operation". I even tried SET SESSION, but got the error "SESSION variable 'max_allowed_packet' is read-only. Use SET GLOBAL to assign the value". I have the website hosted on a GoDaddy VPS running CentOS and access it via PuTTY or cPanel. The website is built with Moodle 2.0.3, i.e. PHP. My developers used to fix this, but warned that it would recur when the server restarts. The GoDaddy people say to move to a dedicated server, which I would do, but I have no money for it at present. I am trying to find out how the developers used to apply the temporary fix, that is, the one that lasts until the server restarts.
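
    For reference, the usual "until the next restart" change is made from a MySQL root session, while the permanent one lives in my.cnf; a sketch, assuming root access on the VPS:

        # temporary: takes effect immediately, lost on mysqld restart
        mysql -u root -p -e "SET GLOBAL max_allowed_packet = 64*1024*1024;"

        # permanent: append to /etc/my.cnf and restart the server
        cat >> /etc/my.cnf <<'EOF'
        [mysqld]
        max_allowed_packet = 64M
        EOF
        service mysqld restart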

    Read the article

  • VPS goes slow at more than 20 users online at the same time

    - by hachiari
    I have a 512 MB VPS (burstable to 1 GB). Somehow, the site goes slow when there are about 10 users, and becomes impossible to load with 20 users online at the same time. I wonder what could be the problem. The bandwidth connection of the VPS is 1 Gbps. Here are some settings in my VPS:

        KeepAlive Off

        <IfModule prefork.c>
        StartServers         7
        MinSpareServers      7
        MaxSpareServers     10
        ServerLimit         64
        MaxClients          64
        MaxRequestsPerChild  0
        </IfModule>

    my.cnf settings - calculated Max Memory 300 MB.

    Output from UnixBench:

        INDEX VALUES
        TEST                                        BASELINE     RESULT      INDEX

        Dhrystone 2 using register variables        376783.7 13429727.4      356.4
        Double-Precision Whetstone                      83.1     1137.5      136.9
        Execl Throughput                               188.3     1637.4       87.0
        File Copy 1024 bufsize 2000 maxblocks         2672.0   148868.0      557.1
        File Copy 256 bufsize 500 maxblocks           1077.0    79430.0      737.5
        File Read 4096 bufsize 8000 maxblocks        15382.0  1410009.0      916.7
        Pipe Throughput                             111814.6  4419722.0      395.3
        Pipe-based Context Switching                 15448.6   561505.1      363.5
        Process Creation                               569.3    10272.7      180.4
        Shell Scripts (8 concurrent)                    44.8      514.3      114.8
        System Call Overhead                        114433.5  3537373.8      309.1
                                                                         =========
        FINAL SCORE                                                          295.0

    I am afraid that the VPS company limits the number of connections to the VPS... is that possible? The server is in Japan, but the site has global traffic (some of the traffic is from countries with low-speed connections). Could this be the problem? This is a serious problem :( my site just can't grow if this keeps happening... please tell me if you have any idea. Thank You, Bryant
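
    Since MaxClients 64 plus ~300 MB reserved for MySQL is easy to overrun on 512 MB, one quick check under load is how much memory the Apache children actually use (a sketch; the process name may be apache2 instead of httpd depending on the distribution):

        # number of Apache workers and their combined resident memory, in MB
        ps -C httpd -o rss= | awk '{sum+=$1; n+=1} END {printf "%d procs, %.0f MB\n", n, sum/1024}'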

    Read the article

  • Samba access works with IP address only

    - by Sebastian Rittau
    I added a Debian etch host (hostname: webserver, IP address: 192.168.101.2) running Samba to a Windows network with a Windows 2003 PDC (IP address 192.168.101.3). The Samba server exports a public guest share, called "Intranet". The server shows up fine in the network, but trying to click on it produces an error dialog stating that I don't have the necessary permissions. So does entering \\webserver manually, and using \\webserver\intranet states that the path does not exist. Interestingly, accessing the share by IP address (\\192.168.101.2 or \\192.168.101.2\intranet) works fine. DNS is configured correctly, and "smbclient //webserver/intranet" on another Linux client works fine. One complicating issue is that the webserver is only a VMware virtual machine running on the PDC server. Here is our smb.conf:

        [global]
        workgroup = Foobar
        server string = Webserver
        wins support = yes               ; commenting out these
        wins server = 192.168.101.3      ; two lines has no effect
        dns proxy = no
        guest account = nobody
        [... snipped some unrelated bits, like logging ...]
        security = share
        [... snipped some password-related things ...]
        domain master = no

        [intranet]
        comment = Intranet
        path = /srv/webserver/contents
        browseable = yes
        guest ok = yes
        guest only = yes
        read only = yes
        create mask = 0775
        directory mask = 0775
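
    Since access by IP works while access by name fails, the name resolution path (NetBIOS/WINS vs. DNS) is worth checking from both sides (a sketch, using the hosts from the question):

        # on the Linux client: resolve the server's NetBIOS name and list what it registered
        nmblookup webserver
        nmblookup -A 192.168.101.2
        # on a Windows client (cmd.exe, for comparison):
        #   nbtstat -A 192.168.101.2
        #   ping webserver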

    Read the article

  • why is Mac OSX Lion losing login/network credentials?

    - by Larry Kyrala
    (moved from Stack Overflow...)

    Symptoms: At work we have OS X 10.7.3 installed and every once in a while I will see the following behaviors: 1) if the screen is locked, then multiple tries of the same user/pass are not accepted; 2) if the screen is unlocked, then opening a new bash term may yield prompts such as:

        I have no name$

    or

        lkyrala$ ssh lkyrala@ah-lkyrala2u
        You don't exist, go away!

    Even when our Macs are working normally, everyone here has to log in twice. The first time after boot always fails, but the second time (with the same password, not changing anything, just pressing enter again) succeeds. Weird?

    Workarounds: There are some workarounds that resolve the immediate problem, but don't prevent it from happening again: a) wait (maybe an hour or two) and the problems sometimes go away by themselves; b) kill 'opendirectoryd' and let it restart (from https://discussions.apple.com/thread/3663559); c) hold the power button to reset the computer.

    Discussion: Now, the evidence above points me to something screwy with Open Directory and login credentials. Some other people report having these login problems, but it's hard to determine where the actual problem is (the Mac, or the network environment?). I should add that most of the network is Windows machines, but we have quite a few Macs and Linux machines as well. I'm not sure of the details of how the network auth is mapped from various domains to others... all I know is that our network credentials work in Windows domains as well as Mac and Linux logins - so something is connecting the separate systems, or using the same global auth system.

    Read the article

< Previous Page | 345 346 347 348 349 350 351 352 353 354 355 356  | Next Page >