Search Results

Search found 4931 results on 198 pages for 'burnt hand'.


  • Define a positive number to be isolated if none of the digits in its square are in its cube [closed]

    - by proglaxmi
    Define a positive number to be isolated if none of the digits in its square are in its cube. For example, 163 is an isolated number because 163*163 = 26569 and 163*163*163 = 4330747, and the square does not contain any of the digits 0, 3, 4 and 7, which are the digits used in the cube. On the other hand, 162 is not an isolated number because 162*162 = 26244 and 162*162*162 = 4251528, and the digits 2 and 4, which appear in the square, are also in the cube. Write a function named isIsolated that returns 1 if its argument is an isolated number, 0 if it is not an isolated number, and -1 if it cannot determine whether it is isolated or not (see the note below). The function signature is: int isIsolated(long n) Note that the type of the input parameter is long. The maximum positive number that can be represented as a long is 63 bits long. This allows us to test numbers up to 2,097,151 because the cube of 2,097,151 can be represented as a long. However, the cube of 2,097,152 requires more than 63 bits to represent it and hence cannot be computed without extra effort. Therefore, your function should test if n is larger than 2,097,151 and return -1 if it is. If n is less than 1 your function should also return -1. Hint: n % 10 is the rightmost digit of n, and n = n/10 shifts the digits of n one place to the right. The first 10 isolated numbers are:
        n     n*n     n*n*n
        2     4       8
        3     9       27
        8     64      512
        9     81      729
        14    196     2744
        24    576     13824
        28    784     21952
        34    1156    39304
        58    3364    195112
        63    3969    250047
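    A minimal Java sketch of the approach the hint suggests (extract digits with % 10 and / 10, mark the cube's digits, then check the square's digits); the range guards come straight from the problem statement, and the class name is just for the example:

        public class IsolatedNumber {
            // Returns 1 if n is isolated, 0 if not, -1 if n < 1 or n > 2,097,151.
            public static int isIsolated(long n) {
                if (n < 1 || n > 2097151L) {
                    return -1; // the cube would overflow a signed 64-bit long
                }
                boolean[] digitInCube = new boolean[10];
                long cube = n * n * n;
                while (cube > 0) {
                    digitInCube[(int) (cube % 10)] = true;
                    cube /= 10;
                }
                long square = n * n;
                while (square > 0) {
                    if (digitInCube[(int) (square % 10)]) {
                        return 0; // a digit of the square also appears in the cube
                    }
                    square /= 10;
                }
                return 1;
            }

            public static void main(String[] args) {
                System.out.println(isIsolated(163)); // 1, per the example above
                System.out.println(isIsolated(162)); // 0, per the example above
            }
        }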

    Read the article

  • Cryptography for P2P card game

    - by zephyr
    I'm considering writing a computer adaptation of a semi-popular card game. I'd like to make it function without a central server, and I'm trying to come up with a scheme that will make cheating impossible without having to trust the client. The basic problem as I see it is that each player has several piles of cards (draw deck, current hand and discard deck). It must be impossible for either player to alter the composition of these piles except when allowed by the game rules (i.e. drawing or discarding cards), nor should players be able to know what is in their own or their opponent's piles. I feel like there should be some way to use something like public-key cryptography to accomplish this, but I keep finding holes in my schemes. Can anyone suggest a protocol or point me to some resources on this topic? [Edit] Ok, so I've been thinking about this a bit more, and here's an idea I've come up with. If you can poke any holes in it please let me know. At shuffle time, a player has a stack of cards whose value is known to them. They take these values, concatenate a random salt to each, then hash them. They record the salts, and pass the hashes to their opponent. The opponent concatenates a salt of their own, hashes again, then shuffles the hashes and passes the deck back to the original player. I believe at this point, the deck has been randomized and neither player can have any knowledge of the values. However, when a card is drawn, the opponent can reveal their salt, allowing the first player to determine what the original value is, and when the card is played the player reveals their own salt, allowing the opponent to verify the card value.
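    As an aside, a minimal sketch of the salted-hash commitment step described above, using SHA-256 from java.security; the class and method names are invented for the example, and this covers only the commit/reveal piece, not the double-blind shuffle:

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.security.SecureRandom;

        public class CardCommitment { // hypothetical helper, not from the original post
            private static final SecureRandom RNG = new SecureRandom();

            // Generate a random salt for one card.
            public static byte[] newSalt() {
                byte[] salt = new byte[16];
                RNG.nextBytes(salt);
                return salt;
            }

            // Commit to a card value: hash(value || salt). The salt stays secret
            // until the card is revealed, as in the protocol sketched above.
            public static byte[] commit(String cardValue, byte[] salt) throws Exception {
                MessageDigest digest = MessageDigest.getInstance("SHA-256");
                digest.update(cardValue.getBytes(StandardCharsets.UTF_8));
                digest.update(salt);
                return digest.digest();
            }

            // Verification once a salt is revealed: recompute and compare.
            public static boolean verify(String claimedValue, byte[] salt, byte[] commitment) throws Exception {
                return MessageDigest.isEqual(commit(claimedValue, salt), commitment);
            }
        }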

    Read the article

  • Projecting an object into a scene based on world coordinates only

    - by user354862
    I want to place a 3D image into a scene based on world/global coordinates. I have an image of a scene. The image was captured at some global coordinate (x1, y1, z1). I am given an object that needs to be placed into this scene based on its global coordinate (x2, y2, z2). This object needs to be projected into the scene accurately, similar to a perspective projection. An example may help to make this clear. Imagine there is a parking lot with some set of global coordinates. A picture is taken of a portion of the parking lot. The coordinates of the spot where the image was taken are recorded. The goal is to place a virtual vehicle into this image using the global coordinates for that vehicle. Because the global coordinates for the vehicle may not be in the fov of the global coordinates for the image, I am assuming that I will need the image coordinates, angle and possibly fov. 3D graphics is not my area so I have been looking at http://en.wikipedia.org/wiki/Perspective_projection#Perspective_projection. I have also been looking at Matrix3DProjection which seems to possibly be what I am looking for but it only works in Silverlight and I am trying to do this in WPF. In my mind it appears I need to determine the (X,Y,Z) coordinates that are in the fov of the image, determine the world coordinate to pixel conversion and then accurately project the vehicle into the image, giving it the correct perspective such that it looks 3D, i.e. smaller the further away, bigger the closer. Is there a function within WPF that can help with this or will I need to re-learn matrices and do this by hand?
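    As a purely illustrative sketch of the perspective math involved (not WPF-specific), here is the pinhole projection written out in Java. Everything in it is an assumption for the example: the camera orientation is reduced to a single yaw angle about the world up axis, the field of view is symmetric, and there is no lens distortion:

        public class WorldToPixel { // hypothetical helper for illustration only
            // Projects a world point onto the image plane of a camera at (camX, camY, camZ)
            // rotated by yawRadians about the world Y (up) axis, looking down its local +Z axis.
            // Returns {u, v} pixel coordinates, or null if the point is behind the camera.
            public static double[] project(double px, double py, double pz,
                                           double camX, double camY, double camZ,
                                           double yawRadians, double verticalFovRadians,
                                           int imageWidth, int imageHeight) {
                // Translate into the camera frame.
                double dx = px - camX, dy = py - camY, dz = pz - camZ;
                // Rotate by -yaw so the camera's view axis becomes +Z.
                double cos = Math.cos(-yawRadians), sin = Math.sin(-yawRadians);
                double cx = cos * dx + sin * dz;
                double cz = -sin * dx + cos * dz;
                double cy = dy;
                if (cz <= 0) {
                    return null; // behind the camera
                }
                // Focal length in pixels derived from the vertical field of view.
                double f = (imageHeight / 2.0) / Math.tan(verticalFovRadians / 2.0);
                double u = imageWidth / 2.0 + f * cx / cz;  // farther points move toward the center
                double v = imageHeight / 2.0 - f * cy / cz; // image rows grow downward
                return new double[] { u, v };
            }
        }

    The division by cz is what produces the "smaller the further away" effect described above.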

    Read the article

  • How do I unbind another jQuery function on .click()?

    - by Mike Barwick
    I have this script that runs to fix my menu bar to the browser on scroll. Nothing really needs to change here (works as it should). However, you may need it... var div = $('#wizMenuWrap'); var editor = $('#main_wrapper'); var start = $(div).offset().top; $(function fixedPackage(){ $.event.add(window, "scroll", function() { var p = $(window).scrollTop(); $(div).css('position',((p)>start) ? 'fixed' : 'static'); $(div).css('top',((p)>start) ? '0px' : ''); //Adds TOP margin to #main_wrapper (required) $(editor).css('position',((p)>start) ? 'relative' : 'static'); $(editor).css('top',((p)>start) ? '88px' : ''); }); }); Now for the issue at hand. I have another script function that calls a modal pop-up (which again works as it should). However, it's not slick from a UI perspective when I scroll the page while the modal is open. So I want to disable the script above when the modal script below is called. In other words, when I click to open the modal pop-up, the script above shouldn't work. $(function () { var setUp = $('.setupButton'); // SHOWS SPECIFIED VIEW $(setUp).click(function () { $('#setupPanel').modal('show'); //PREVENTS PACKAGE SELECT FIXED POSITION ON SCROLL $(setUp).unbind('click',fixedPackage); }); }) As you can see above, I tried to unbind the scroll function (the first code snippet), but this is not correct. These two scripts are in two separate js libraries.

    Read the article

  • BlackBerry Field class extension will not paint.

    - by jlindenbaum
    Using JRE 5.0.0, simulator device is an 8520. On a screen I am using a FlowFieldManager(Manager.VERTICAL_SCROLL) and adding Fields to it to show data. When I do this.flowManager = new FlowFieldManager(Manager.VERTICAL_SCROLL); Field field = new Field() { protected void paint(Graphics graphics) { graphics.drawText("Test", 0, 0); } protected void layout(int width, int height) { this.setExtent(300, 300); // just testing } }; this.flowManager.add(field); The screen renders correctly and 'Test' appears on the screen. If, on the other hand, I try and abstract this into a class called CustomField with the same properties and add it to the flow manager the render will not happen. Debugging shows that the device enters into the Object, into the layout function, but not the paint function. I can't figure out why the paint function is not called when I extend Field. The 4.5 API says that layout and paint are the only functions that I really need to extend. (getPreferredWidth and getPreferredHeight will be used to calculate screen sizes etc.) Thanks in advance.

    Read the article

  • OpenGL multiple threads, variable handling [closed]

    - by toeplitz
    I have written an OpenGL program which runs in the following way: Main: - Initialize SDL - Create thread which has the OpenGL context: - Renderloop - Set camera (view) matrix with glUniform. - glDrawElements() .... etc. - Swapbuffers(); - Main SDL loop handling input events and such. - Update camera matrix of type glm::mat4. This is how I pass my camera object to the class that handles opengl. Camera *cam = new Camera(); gl.setCam(cam); where void setCam(Camera *camera) { this->camera = camera; } For rendering in the opengl context thread, this happens: glm::mat4 modelView = camera->view * model; glUniformMatrix4fv(shader->bindUniform("modelView"), 1, GL_FALSE, glm::value_ptr(modelView)); In the main program where my SDL and other things are handled I then recompute the view matrix. This is working fine without me using any mutex locks. Is this correct? On the other hand, I add objects to my scene by an "upload queue" and in this case I have to mutex lock my upload queue vector (vector class type) when adding items to it or else the program crashes. In summary: I recompute my matrix in a different thread and then use it in the opengl thread without any mutex lock. Why is this working? Edit: I think my question is similar to what was asked here: Should I lock a variable in one thread if I only need its value in other threads, and why does it work if I don't?, only in my case it is even more simple with only one matrix being changed.

    Read the article

  • Get the package of a Java source file

    - by Oak
    My goal is to find the package (as string) of a Java source file, given as plaintext and not already sorted in folders. I can't just locate the first instance of the keyword package in the file, because it may appear inside a comment. So I was thinking about two alternatives: Scan the file word-by-word, maintaining an "inside-a-comment" flag for the scanner. The first time the package keyword is encountered while not inside a comment, stop the scanning and report the result. Use a regex - should be theoretically possible because block comments do not nest in Java, but I tried making such a regex and it turned out to be quite complicated - for me, at least. Another difference between the two approaches is that when scanning manually I can stop the scan when I can be certain the package keyword can no longer appear, saving some time... and I'm not sure I can do something similar with regexes. On the other hand, the decision "when it can no longer appear" is not necessarily simple, though I could use some heuristic for that. I would like to hear any input on this problem, and would welcome any help with the regex. My solution is written in Java as well.
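    A rough Java sketch of the first alternative (strip comments with an inside-a-comment flag, then look for the package declaration); it deliberately ignores string literals and the early-stop heuristic, so treat it as a starting point rather than a complete solution:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class PackageFinder { // illustrative sketch, simplified on purpose
            private static final Pattern PACKAGE_DECL =
                    Pattern.compile("\\bpackage\\s+([\\w.]+)\\s*;");

            // Returns the declared package of the given Java source text,
            // or "" if the file is in the default package.
            public static String findPackage(String source) {
                StringBuilder code = new StringBuilder();
                boolean inLineComment = false, inBlockComment = false;
                for (int i = 0; i < source.length(); i++) {
                    char c = source.charAt(i);
                    char next = (i + 1 < source.length()) ? source.charAt(i + 1) : '\0';
                    if (inLineComment) {
                        if (c == '\n') { inLineComment = false; code.append(c); }
                    } else if (inBlockComment) {
                        if (c == '*' && next == '/') { inBlockComment = false; i++; }
                    } else if (c == '/' && next == '/') {
                        inLineComment = true; i++;
                    } else if (c == '/' && next == '*') {
                        inBlockComment = true; i++;
                    } else {
                        code.append(c);
                    }
                }
                Matcher m = PACKAGE_DECL.matcher(code);
                return m.find() ? m.group(1) : "";
            }
        }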

    Read the article

  • Book Recommendations: WPF/Silverlight for Business (smallish, internal) Environment

    - by Refracted Paladin
    Yes, I know there are an insane number of book posts here (SO) but none I believe for my specific need. If there is and I missed it I apologize. I am the only developer at a non-profit organization (~200 employees) where we are a M$ shop and 90% of the things I develop are specific to our company and are internal only. I am given a lot of latitude on how I accomplish my goals so using new technologies is in my best interest. So far I have developed all winform & asp.net applications but I am an expert by NO MEANS. I would now like to focus on XAML driven development (WPF & Silverlight) but I have no idea where to start. I am subscribed to numerous Silverlight blogs and I have gone through a few good tutorials, however I would really appreciate a GOOD SOLID book in my hands going forward. I prefer learning books versus reference books and I REALLY would like one from a Business standpoint as well. Shameless self-promotion is welcome if you happen to be an author or reviewer for one that meets my criteria. I would, however, prefer that recommendations were based on first-hand experience (no, 'my friend has this awesome book he told me about', please). If more info is needed to provide accurate recommendations please let me know. Thanks

    Read the article

  • Best way to make sure I get all available options array/loop

    - by jaz872
    OK, here is my problem. I'm looking for the best way to loop through a bunch of options to make sure that I hit all available options. Some detail. I've created a feature that allows a client to build images that are basically other images layered on top of each other. These other images are split up into different groups. They have links on the side of the image that they can click to scroll through all the different images to view them. I'm now making an automated process that is going to run the function that changes the image when a user clicks one of the links. I need to make sure that every possible combo of the different images is hit during this process. So I have an array with the number of options for each group. The current array is [3, 9, 3, 3] My question is what is the best way to loop through this to make sure that all possible options will be shown? I apologize if this seems simple for someone out there, but I'm just having trouble wrapping my head around it. Hopefully if that person is out there, they can give a helping hand :)
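    One way to guarantee every combination is hit is to treat the group sizes as a mixed-radix counter (an odometer): with counts [3, 9, 3, 3] there are 3*9*3*3 = 243 combinations in total. A hypothetical Java sketch of just the enumeration (the method names are made up; hooking it to the actual click/scroll function is left out):

        public class OptionCombinations {
            // Visits every combination of option indexes for the given group sizes.
            // For sizes {3, 9, 3, 3} this produces 3 * 9 * 3 * 3 = 243 combinations.
            public static void visitAll(int[] groupSizes) {
                int[] current = new int[groupSizes.length]; // all zeros = first combination
                while (true) {
                    showCombination(current); // stand-in for "switch the layered images to this set"
                    // Advance like an odometer: bump the last group, carry into earlier ones.
                    int pos = groupSizes.length - 1;
                    while (pos >= 0 && ++current[pos] == groupSizes[pos]) {
                        current[pos] = 0;
                        pos--;
                    }
                    if (pos < 0) {
                        return; // wrapped all the way around: every combination was visited
                    }
                }
            }

            private static void showCombination(int[] indexes) {
                System.out.println(java.util.Arrays.toString(indexes));
            }

            public static void main(String[] args) {
                visitAll(new int[] {3, 9, 3, 3});
            }
        }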

    Read the article

  • Runtime of optimized Primehunter

    - by Setton
    Ok so I need some serious runtime help here! This method should take in an int value, check its primality, and return true if the number is indeed a prime. I understand why the loop only needs to run while i * i <= n, and I understand that the worst case scenario is the case in which the number is prime (or a multiple of a prime). But I don't understand how to quantify the actual runtime. I have done the loop myself by hand to try to understand the pattern or correlation between the number (n) and how many loops occur, but I literally feel like I keep falling into the same trap every time. I need a new way of thinking about this! I have a hint: "Think about the SIZE of the integer" which makes me want to quantify the literal number of digits in a number in relation to how many iterations it does in the for loop (floor(log(n)) + 1). BUT IT'S NOT WORKIIIING?! I KNOW it isn't square root n, obviously. I'm asking for Big O notation. public class PrimeHunter { public static boolean isPrime(int n) { boolean answer = (n > 1) ? true : false; //runtime = linear runtime for (int i = 2; i * i <= n; i++) //runtime = ????? { if (n % i == 0) //doesn't occur if it is a prime { answer = false; break; } } return answer; //runtime = linear runtime } }
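    For what it's worth, the usual way to read this loop: the condition i * i <= n means the body runs at most about sqrt(n) - 1 times, so the worst case (n prime) is O(sqrt(n)) trial divisions; expressed in the size of the integer, a b-bit n gives on the order of 2^(b/2) iterations. A small, throwaway Java check of that correlation (the sample primes are arbitrary):

        public class PrimeHunterTiming {
            // Counts how many times the loop body in isPrime would run for a given n.
            static long trialDivisions(int n) {
                long count = 0;
                for (int i = 2; (long) i * i <= n; i++) {
                    count++;
                    if (n % i == 0) {
                        break; // mirrors the early exit in the original isPrime
                    }
                }
                return count;
            }

            public static void main(String[] args) {
                // For primes (the worst case) the iteration count tracks sqrt(n).
                int[] primes = {101, 10007, 1000003, 100000007};
                for (int p : primes) {
                    System.out.printf("n=%d  iterations=%d  sqrt(n)=%.0f%n",
                            p, trialDivisions(p), Math.sqrt(p));
                }
            }
        }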

    Read the article

  • How to create a datastore.Text object out of an array of dynamically created Strings?

    - by Adrogans
    I am creating a Google App Engine server for a project where I receive a large quantity of data via an HTTP POST request. The data is separated into lines, with 200 characters per line. The number of lines can go into the hundreds, so tens of thousands of characters total. What I want to do is concatenate all of those lines into a single Text object, since Strings have a maximum length of 500 characters but the Text object can be as large as 1MB. Here is what I thought of so far: public void doPost(HttpServletRequest req, HttpServletResponse resp) { ... String[] audioSampleData = new String[numberOfLines]; for (int i = 0; i < numberOfLines; i++) { audioSampleData[i] = req.getReader().readLine(); } com.google.appengine.api.datastore.Text textAudioSampleData = new Text(audioSampleData[0] + audioSampleData[1] + ...); ... } But as you can see, I don't know how to do this without knowing the number of lines beforehand. Is there a way for me to iterate through the String indexes within the Text constructor? I can't seem to find anything on that. Of note is that the Text object can't be modified after being created, and it must have a String as parameter for the constructor. (Documentation here) Is there any way to do this? I need all of the data in the String array in one Text object. Many Thanks!
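    One way around not knowing the line count up front is to skip the array entirely: accumulate lines in a StringBuilder until readLine() returns null, then build the Text once at the end. A sketch along those lines (the servlet class name is invented for the example, and the datastore write is omitted):

        import java.io.BufferedReader;
        import java.io.IOException;

        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        import com.google.appengine.api.datastore.Text;

        public class AudioUploadServlet extends javax.servlet.http.HttpServlet { // name is hypothetical
            @Override
            public void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                BufferedReader reader = req.getReader();
                StringBuilder allLines = new StringBuilder();
                String line;
                while ((line = reader.readLine()) != null) { // stops when the request body is exhausted
                    allLines.append(line);
                }
                // Text accepts a single String of up to ~1 MB, built here in one shot.
                Text textAudioSampleData = new Text(allLines.toString());
                // ... persist textAudioSampleData to the datastore as needed ...
            }
        }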

    Read the article

  • How to arrange HTML5 web page elements?

    - by Argus9
    I'm trying to make a sample web page to get acquainted with HTML5, and I'd like to try replicating Facebook's page layout; that is, the header that spans the entire width of the screen, a small footer at the bottom, and a three-column main body, consisting of a list of links on the left, the main content in the middle, and an optional section on the right (for ads, frames, etc.). It's neat and displays well in multiple window sizes. So far, I've tried to accomplish this with a <header>, <footer> and a <nav> and <section> block, respectively. There's a few anomalies with the page, however. The footer (which contains a simple text block with copyright info) appears at the top-right of the page below the header when the window is maximized. On the other hand, when there isn't enough space to display everything in the window, it places the main body text below the section. In other words, it keeps moving elements around to fit the window. Could someone please tell me how I'd achieve the look I'm going for? I've tried playing around with a few CSS attributes I read about through Google, but I'm pretty sure I don't know what I'm doing, and could really use some guidance. Thank you!

    Read the article

  • How specific do I get in BDD scenarios?

    - by CodeSpelunker
    Take two different ways of stating the same behavior. Option A: Given a customer has 50 items in their shopping cart When they check out Then they will receive a 10% discount on their order Option B: Given a customer has a high volume of items in their shopping cart When they check out Then they will receive a high volume discount on their order The former is far more specific. If someone has some question about exactly when a customer gets a high volume discount or how much to give them, reading this scenario makes it very clear. Serving the purposes of documenting the behavior, it's about as specific as it can be, although any change in those values will require changing the scenario. The second is more generalized and doesn't have the clarity of the first. Automating it would require incorporating the values "50" and "10" in the step implementations. On the other hand, the scenario captures the core business need: a high volume customer gets a discount. If we later decide to use "40" and "15", the scenario doesn't have to change because the core business need hasn't really changed (though the step implementation would). Also, the term "high volume customer" communicates something about why we're giving them the discount. So, which is better? Rather, under what circumstances should I favor the former or the latter?

    Read the article

  • Installing WindowsAuthentication breaks authentication / web.config?

    - by Ian Quigley
    I have a clean Windows 2008 R2 box (on a VM) and have installed IIS 7.5 with default options. I then copied a website to it (from Windows 7, IIS 7) and after a little tweaking the website is working fine. The website is currently using and working with Anonymous Authentication. I have gone back to the Windows Components/Server Manager, Roles - Security and ticked and installed Windows Authentication. When I check my server in IIS (top level above sites) - Authentication, I see Anonymous Authentication (enabled) ASP.NET Impersonation (disabled) Forms Authentication (disabled) Windows Authentication (enabled) When I check my default website - Authentication, I see as above but "Retrieving status" and an error dialog saying There was an error while performing this operation. Details: Filename c:\inetpub\wwwroot\screwturnwiki\web.config Line number: 96 Error: This configuration section cannot be used in this path. This happens when the section is being locked at the parent level. Locking is either by default (overrideModeDefault="Deny"), or set explicitly by a location tag with overrideMode="Deny" or the legacy allowOverride="False". I have tried hand editing the web.config with no success. (How to use locking in IIS7 Configuration) Uninstalling Windows Authentication happily returns my site to working with Anonymous Authentication, and allows me to enable/disable these three options. FYI. I am using ScrewTurnWiki with the Active Directory plug in. It all works fine under Windows 7 IIS 7 locally (has been for months) Web.Config <system.webServer> (edit) <handlers> ( deleted removes/adds ) </handlers> <security> <authentication> 96: <windowsAuthentication enabled="true" useKernelMode="true"> <extendedProtection tokenChecking="Allow" /> <providers> <clear /> <add value="NTLM" /> <add value="Negotiate" /> </providers> </windowsAuthentication> </authentication> </security>

    Read the article

  • Setting up a transparent SSL proxy

    - by badunk
    I've got a linux box set up with 2 network cards to inspect traffic going through port 80. One card is used to go out to the internet, the other one is hooked up to a networking switch. The point is to be able to inspect all HTTP and HTTPS traffic on devices hooked up to that switch for debugging purposes. I've written the following rules for iptables: nat -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.2.1:1337 -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 1337 -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE On 192.168.2.1:1337, I've got a transparent http proxy using Charles (http://www.charlesproxy.com/) for recording. Everything's fine for port 80, but when I add similar rules for port 443 (SSL) pointing to port 1337, I get an error about invalid message through Charles. I've used SSL proxying on the same computer before with Charles (http://www.charlesproxy.com/documentation/proxying/ssl-proxying/), but have been unsuccessful with doing it transparently for some reason. Some resources I've googled say it's not possible - I'm willing to accept that as an answer if someone can explain why. As a note, I have full access to the described set up including all the clients hooked up to the subnet - so I can accept self-signed certs by Charles. The solution doesn't have to be Charles-specific since in theory, any transparent proxy will do. Thanks! Edit: After playing with it a little, I was able to get it working for a specific host. When I modify my iptables to the following (and open 1338 in charles for reverse proxy): nat -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.2.1:1337 -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 1337 -A PREROUTING -i eth1 -p tcp -m tcp --dport 443 -j DNAT --to-destination 192.168.2.1:1338 -A PREROUTING -i eth1 -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 1338 -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE I am able to get a response, but with no destination host. In the reverse proxy, if I just specify that everything from 1338 goes to a specific host that I wanted to hit, it performs the handshake properly and I can turn on SSL proxying to inspect the communication. The setup is less than ideal because I don't want to assume everything from 1338 goes to that host - any idea why the destination host is being stripped? Thanks again

    Read the article

  • Laptop LCD sometimes stops working on reboot. Please help.

    - by J Ringle
    I have a Gateway P-6831FX Laptop with Vista Ultimate. The Laptop LCD will sometimes not come on after I reboot the computer. I don't even close the lid and it happens. It isn't dim, it doesn't come on at all. No posting of CMOS (BIOS), nothing. Please note... this happens sometimes, not every time. Frustrating! When plugged into an external monitor, which works fine, Vista display properties can't even "sense" the laptop LCD. I try to enable the laptop LCD for dual display, turning on the laptop LCD, and it does nothing. It's like the laptop LCD is not even there. Manually taking a magnet in my hand to the laptop lid sensing switch (the sensor that turns off display/sleep mode when you close the lid) sometimes causes the LCD backlight to "turn on" but not display any images. By "turn on" I mean I can see the screen backlight turn on to a 'dark gray' screen instead of pitch black. On a subsequent reboot the laptop display is not working again! Here are the facts: Only happens at random and only after a reboot. Waking from Sleep mode isn't a problem. Pressing the F4 function key for dual display does nothing when this happens. Closing the lid doesn't seem to be related (unless it is only after reboot). Using an external magnet on the laptop screen sensor sometimes triggers the backlight to turn on, but after a reboot it's back to square one with no LCD display. An external display always works fine. I have taken apart the LCD, checked all wires and ribbons for loose connections or damage. I have replaced the inverter. It doesn't seem to be heat related as I can put it in sleep mode and resume fine when very hot. (The external monitor works fine too.) Sometimes the screen works fine as if there is not a problem at all. Even after a reboot... This is random. Any ideas out there? If it is a bad part... which one? The LCD seems to be fine. What are the odds of 2 bad inverters? The backlight is fine. The LCD wires/ribbons seem to be fine. I am at a loss. No warranty left and Gateway tech support is clueless. Thanks for any feedback that might help.

    Read the article

  • which package i should choose, if i want to install virtualenv for python?

    - by hugemeow
    pip search just returns so many matches, i am confused about which package i should choose to install .. should i only install virtualenv? or i'd better also install virtualenv-commands and virtualenv-commands, etc, but i really don't know exactly what virtualenv-commands is ... mirror0@lab:~$ pip search virtualenv virtualenvwrapper - Enhancements to virtualenv virtualenv - Virtual Python Environment builder veh - virtualenv for hg pyutilib.virtualenv - PyUtilib utility for building custom virtualenv bootstrap scripts. envbuilder - A package for automatic generation of virtualenvs virtstrap-core - A bootstrapping mechanism for virtualenv+pip and shell scripts tox - virtualenv-based automation of test activities virtualenvwrapper-win - Port of Doug Hellmann's virtualenvwrapper to Windows batch scripts everyapp.bootstrap - Enhanced virtualenv bootstrap script creation. orb - pip/virtualenv shell script wrapper monupco-virtualenv-python - monupco.com registration agent for stand-alone Python virtualenv applications virtualenvwrapper-powershell - Enhancements to virtualenv (for Windows). A clone of Doug Hellmann's virtualenvwrapper RVirtualEnv - relocatable python virtual environment virtualenv-clone - script to clone virtualenvs. virtualenvcontext - switch virtualenvs with a python context manager lessrb - Wrapper for ruby less so that it's in a virtualenv. carton - make self-extracting virtualenvs virtualenv5 - Virtual Python 3 Environment builder clever-alexis - Clever redhead girl that builds and packs Python project with Virtualenv into rpm, deb, etc. kforgeinstall - Virtualenv bootstrap script for KForge pypyenv - Install PyPy in virtualenv virtualenv-distribute - Virtual Python Environment builder virtualenvwrapper.project - virtualenvwrapper plugin to manage a project work directory virtualenv-commands - Additional commands for virtualenv. rjm.recipe.venv - zc.buildout recipe to turn the entire buildout tree into a virtualenv virtualenvwrapper.bitbucket - virtualenvwrapper plugin to manage a project work directory based on a BitBucket repository tg_bootstrap - Bootstrap a TurboGears app in a VirtualEnv django-env - Automaticly manages virtualenv for django project virtual-node - Install node.js into your virtualenv django-environment - A plugin for virtualenvwrapper that makes setting up and creating new Django environments easier. vip - vip is a simple library that makes your python aware of existing virtualenv underneath. virtualenvwrapper.django - virtualenvwrapper plugin to create a Django project work directory terrarium - Package and ship relocatable python virtualenvs venv_dependencies - Easy to install any dependencies in a virtualenviroment(without making symlinks by hand and etc...) virtualenv-sh - Convenient shell interface to virtualenv virtualenvwrapper.github - Plugin for virtualenvwrapper to automatically create projects based on github repositories. virtualenvwrapper.configvar - Plugin for virtualenvwrapper to automatically export config vars found in your project level .env file. virtualenvwrapper-emacs-desktop - virtualenvwrapper plugin to control emacs desktop mode bootstrapper - Bootstrap Python projects with virtualenv and pip. 
virtualenv3 - Obsolete fork of virtualenv isotoma.depends.zope2_13_8 - Running zope in a virtualenv virtual-less - Install lessc into your virtualenv virtualenvwrapper.tmpenv - Temporary virtualenvs are automatically deleted when deactivated isotoma.plone.heroku - Tooling for running Plone on heroku in a virtualenv gae-virtualenv - Using virtualenv with zipimport on Google App Engine pinvenv - VirtualEnv plugins for pin isotoma.depends.plone4_1 - Running plone in a virtualenv virtualenv-tools - A set of tools for virtualenv virtualenvwrapper.npm - Plugin for virtualenvwrapper to automatically encapsulate inside the virtual environment any npm installed globaly when the venv is activated d51.django.virtualenv.test_runner - Simple package for running isolated Django tests from within virtualenv difio-virtualenv-python - Difio registration agent for stand-alone Python virtualenv applications VirtualEnvManager - A package to manage various virtual environments. virtualenvwrapper.gem - Plugin for virtualenvwrapper to automatically encapsulate inside the virtual environment any gems installed when the venv is activated

    Read the article

  • VMWare Network bug in multiple VMWare Workstation versions if using a hardcoded IP address

    - by onyxruby
    I'm having a very tricky problem with some of my VM sessions being unable to reach the Internet or even ping the gateway. I have just set up a new VM Workstation (7) on a W2K8 64bit server (I'll be converting to ESXi 4 once I can find a decent book on it, so in the meantime I use Workstation). I have imported a number of VMs and set up some new ones on the server. In short, the problem with some of the VMs being unable to reach the Internet is that they can't reach the gateway. I've looked at a number of things and can pretty safely rule out the following: Switch, Router, DHCP Server, DNS, Client IP configuration, Routes and typos. The problem is that some of the new clients cannot reach the gateway if their IP address is hardcoded, they can't even ping it by IP address. That rules out DNS and DHCP. Now, if I allow them to get their IP address by DHCP they can reach the gateway and Internet without issue. The interesting thing is that this behavior occurs even if I leave the DNS information hardcoded under TCP/IP settings. It doesn't work unless the IP and gateway are handed out by DHCP, even though the same IP information is being used by the host. Fundamentally, from the standpoint of the clients, they are trying to reach the exact same gateway using the exact same IP information regardless of whether they are hardcoded or assigned by DHCP. Here's an example of one client. IP Address 192.168.7.66 - Subnet Mask 255.255.255.0 - Gateway 192.168.7.254 - DNS1 192.168.7.44 - DNS2 192.168.7.254. The issue occurs across six different Microsoft operating systems; Windows 7 and Windows 2008 variants all have the issue. My W2K3, XP, Vista and W98 clients all work without issue with hardcoded IP addresses. I have tried things like rearranging the DNS order, flushing DNS and so on. It's not a routing or switch issue as the clients can work just fine if they get their IP by DHCP. It's not a parameter issue as the exact same parameters are handed out by DHCP as I plug in by hand. It's not a DNS issue as clients can't reach other clients even with IP addresses only. I have run a tracert to the gateway by IP address and it times out on the very first hop before failing on hop 3 with destination host unreachable. If I get the IP address by DHCP the tracert finds the gateway (and Internet) without issue. I have read a few other posts online in forums talking about this problem randomly occurring over the years in other VM versions as well, so I suspect some kind of long-standing bug. Does anyone have any ideas on this? Is it possibly a bug with Windows 7 and W2K clients under VM?

    Read the article

  • Automating the choice between JPEG and PNG with a script

    - by MHC
    Choosing the right format to save your images in is crucial for preserving image quality and reducing artifacts. Different formats follow different compression methods and come with their own set of advantages and disadvantages. JPG, for instance, is suited for real-life photographs that are rich in color gradients. The lossless PNG, on the other hand, is far superior when it comes to schematic figures. Picking the right format can be a chore when working with a large number of files. That's why I would love to find a way to automate it. A little bit of background on my particular use case: I am working on a number of handouts for a series of lectures at my university. The handouts are rich in figures, which I have to extract from PDF-formatted slides. Extracting these images gives me lossless PNGs, which are needlessly large at times. Converting these particular files to JPEG can reduce their size to less than 20% of the original, while maintaining the same quality. This is important as working with hundreds of large images in word processors is pretty crash-prone. Batch converting all extracted PNGs to JPEGs is not an option I am willing to follow, as many if not most images are better suited to be formatted as PNGs. Converting these would result in insignificant size reductions and sometimes even increases in file size - that's at least what my test runs showed. What we can take from this is that file size after compression can serve as an indicator of what format is suited best for a particular image. It's not a particularly accurate predictor, but it works well enough. So why not use it in the form of a script? I included inotifywait because I would prefer for the script to be executed automatically as soon as I drag an extracted image into a folder. This is a simpler version of the script that I've been using for the last couple of weeks:
    #!/bin/bash
    inotifywait -m --format "%w%f" --exclude '.jpg' -r -e create -e moved_to --fromfile '/home/MHC/.scripts/Workflow/Conversion/include_inotifywait' | while read file; do
        mogrify -format jpg -quality 92 "$file"
    done
    The advanced version of the script would have to be able to handle spaces in file names and directory names, preserve the original file names, flatten PNG images if an alpha value is set, compare the file size between the temporary converted image and its original, determine if the difference is greater than a given percentage, and act accordingly. The actual conversion could be done with imagemagick tools: convert -quality 92 -flatten -background white file.png file.jpg Unfortunately, my bash skills aren't even close to advanced enough to convert the scheme above into an actual script, but I am sure many of you can. My reputation points on here are pretty low, but I will gladly award the most helpful answer with the highest bounty I can set. References: http://www.formortals.com/introducing-cnb-imageguide/, http://www.turnkeylinux.org/blog/png-vs-jpg Edit: Also see my comments below for some more information on why I think this script would be the best solution to the problem I am facing.

    Read the article

  • multiple webapps in tomcat -- what is the optimal architecture?

    - by rvdb
    I am maintaining a growing base of mainly Cocoon-2.1-based web applications [http://cocoon.apache.org/2.1/], deployed in a Tomcat servlet container [http://tomcat.apache.org/], and proxied with an Apache http server [http://httpd.apache.org/docs/2.2/]. I am conceptually struggling with the best way to deploy multiple web applications in Tomcat. Since I'm not a Java programmer and we don't have any sysadmin staff, I have to figure out myself what is the most sensible way to do this. My setup has evolved through 2 scenarios and I'm considering a third for maximal separation of the distinct webapps.
    [1] 1 Tomcat instance, 1 Cocoon instance, multiple webapps
    -tomcat
     |_ webapps
        |_ webapp1
        |_ webapp2
        |_ webapp[n]
        |_ WEB-INF (with Cocoon libs)
    This was my first approach: just drop all web applications inside a single Cocoon webapps folder inside a single Tomcat container. This seemed to run fine; I did not encounter any memory issues. However, this poses a maintainability drawback, as some Cocoon components are subject to updates, which often affect the webapp coding. Hence, updating Cocoon becomes unwieldy: since all webapps share the same pool of Cocoon components, updating one of them would require the code in all web applications to be updated simultaneously. In order to isolate the web applications, I moved to the second scenario.
    [2] 1 Tomcat instance, each webapp in its dedicated Cocoon environment
    -tomcat
     |_ webapps
        |_ webapp1
        |  |_ WEB-INF (with Cocoon libs)
        |_ webapp2
        |  |_ WEB-INF (with Cocoon libs)
        |_ webapp[n]
           |_ WEB-INF (with Cocoon libs)
    This approach separates all webapps into their own Cocoon environment, run inside a single Tomcat container. In theory, this works fine: all webapps can be updated independently. However, this soon results in PermGenSpace errors. It seemed that I could manage the problem by increasing memory allocation for Tomcat, but I realise this isn't a structural solution, and that overloading a single Tomcat in this way is prone to future memory errors. This set me thinking about the third scenario.
    [3] multiple Tomcat instances, each with a single webapp in its dedicated Cocoon environment
    -tomcat
     |_ webapps
        |_ webapp1
           |_ WEB-INF (with Cocoon libs)
    -tomcat
     |_ webapps
        |_ webapp2
           |_ WEB-INF (with Cocoon libs)
    -tomcat
     |_ webapps
        |_ webapp[n]
           |_ WEB-INF (with Cocoon libs)
    I haven't tried this approach, but am thinking of the $CATALINA_BASE variable. A single Tomcat distribution can be multiply instantiated with different $CATALINA_BASE environments, each pointing to a Cocoon instance with its own webapp. I wonder whether such an approach could avoid the structural memory-related problems of approach [2], or will the same issues apply? On the other hand, this approach would complicate management of the Apache http frontend, as it will require the AJP connectors of the different Tomcat instances to be listening on different ports. Hence, Apache's worker configuration has to be updated and reloaded whenever a new webapp (in its own Tomcat instance) is added. And there seems no way to reload worker.properties without restarting the entire Apache http server. Is there perhaps another / more dynamic way of 'modularizing' multiple Tomcat-served webapps, or can one of these scenarios be refined? Any thoughts, suggestions, advice much appreciated. Ron

    Read the article

  • Indirect Postfix bounces create new user directories

    - by hheimbuerger
    I'm running Postfix on my personal server in a data centre. I am not a professional mail hoster and not a Postfix expert, it is just used for a few domains served from that server. IIRC, I mostly followed this howto when setting up Postfix. Mails addressed to one of the domains the server manages are delivered locally (/srv/mail) to be fetched with Dovecot. Mails to other domains require usage of SMTPS. The mailbox configuration is stored in MySQL. The problem I have is that I suddenly found new mailboxes being created on the disk. Let's say I have the domain 'example.com'. Then I would have lots of new directories, e.g. /srv/mail/example.com/abenaackart /srv/mail/example.com/abenaacton etc. There are no entries for these addresses in my database, neither as a mailbox nor as an alias. It's clearly spam from auto-generated names. Most of them start with 'a', a few with 'b' and a couple of random ones with other letters. At first I was afraid of an attack, but all security restrictions seem to work. If I try to send mail to these addresses, I get an "Recipient address rejected: User unknown in virtual mailbox table" during the 'RCPT TO' stage. So I looked into the mails stored in these mailboxes. Turns out that all of them are bounces. It seems like all of them were sent from a randomly generated name to an alias that really exists on my system, but pointed to an invalid destination address on another host. So Postfix accepted it, then tried to redirect it to another mail server, which rejected it. This bounced back to my Postfix server, which now took the bounce and stored it locally -- because it seemed to be originating from one of the addresses it manages. Example: My Postfix server handles the example.com domain. [email protected] is configured to redirect to [email protected]. [email protected] has since been deleted from the Hotmail servers. Spammer sends mail with FROM:[email protected] and TO:[email protected]. My Postfix server accepts the mail and tries to hand it off to hotmail.com. hotmail.com sends a bounce back. My Postfix server accepts the bounce and delivers it to /srv/mail/example.com/bob. The last step is what I don't want. I'm not quite sure what it should do instead, but creating hundreds of new mailboxes on my disk is not what I want... Any ideas how to get rid of this behaviour? I'll happily post parts of my configuration, but I'm not really sure where to start debugging the problem at this point.

    Read the article

  • Best Practice: DNS and VPN (with private network IPs)

    - by ribx
    I am trying to find the best solution for my DNS problem. We are running several services in our company that you can reach only over VPN. Other services, that are reachable through the internet, got the domain ... At the moment all services inside the VPN network go by .local... These have a VPN IP in the private network 192.168.252.0/24. Clients range from Linux over OSX to Windows. I can think of 4 possibilities to implement a DNS infrastructure: 1. Most common: an internal DNS server that is pushed by the VPN. But this has several drawbacks: your DNS responses are limited to the speed of the VPN connection and your own DNS server. Because of very complex websites, this can increase the time for a page to load quite a lot. Also: we have several VPNs that are not connected to each other, and all of them have their own DNS server. 2. Several DNS servers locally. These have to be configured by hand, and you have to use some third-party tool like dnsmasq. If you start a DNS request, you ask your locally running DNS server, which decides which server to ask for which domain name. One colleague of mine uses such a solution with an OSX application (I am sorry, I don't remember the name of the application). 3. You use your domain hoster. Most of them have APIs available to manipulate your DNS entries, so you could pull your private network information to your domain hoster. I am not sure whether they all accept private network IPs. But I guess there will be some problems in the same way as in number 4. 4. The one we currently use, because it's for us the most logical choice: we forward the sub domain *.local.. to our own public DNS server. This works quite well for some public DNS servers like Google, but most ISPs do not forward the answers, or don't do it consistently. For example, my ISP sends me a positive result for a DNS request of a *.local.. domain only every 10th time I make an nslookup. (Can someone explain this?) Here is the real question: Is there another solution we were not thinking about? Or: Which of these methods do you use?

    Read the article

  • chkdsk, SeaTools, and "does not have enough space to replace bad clusters"

    - by Zian Choy
    When I tried to do a Windows Vista Complete PC Backup, I received an error message that blathered about bad sectors. Then, when I ran chkdsk /r on the destination drive, this is what I got: C:\Windows\system32>chkdsk /R E: The type of the file system is NTFS. Volume label is Desktop Backup. CHKDSK is verifying files (stage 1 of 5)... 822016 file records processed. File verification completed. 1 large file records processed. 0 bad file records processed. 0 EA records processed. 0 reparse records processed. CHKDSK is verifying indexes (stage 2 of 5)... 848938 index entries processed. Index verification completed. 0 unindexed files processed. CHKDSK is verifying security descriptors (stage 3 of 5)... 822016 security descriptors processed. Security descriptor verification completed. 13461 data files processed. CHKDSK is verifying file data (stage 4 of 5)... The disk does not have enough space to replace bad clusters detected in file 239649 of name . The disk does not have enough space to replace bad clusters detected in file 239650 of name . The disk does not have enough space to replace bad clusters detected in file 239651 of name . An unspecified error occurred. (822000 files processed) Yet, when I ran the SeaTools short & long generic tests on the Seagate disk, I didn't receive any errors. I know that I could reformat the disk and try running chkdsk /r again but I'd prefer to avoid waiting 4 hours in the hope that the problem was magically fixed. On the other hand, if I RMA the drive to Seagate, I have no SeaTools error number to use and they may claim that the drive is just fine. What should I try to do next? Side frustration: There is plenty of free hard drive space. The E: partition has 182 GB free.

    Read the article

  • How to change mouse pointer icon in Xfce Debian 7 Wheezy?

    - by kadaj
    I copied the cursor theme (oxy-neon or Oxygen Neon) to /usr/share/icons and from Applications Menu - Settings - Mouse, I am able to see the new theme. I clicked on it and the pointer doesn't change. However the text typing icon ('I'), busy icon, hand icon, and resize window icons got changed. The pointer icon remains the same, the black Adwaita. I removed the Adwaita folder from the icons folder, and still the mouse pointer doesn't change. Is the pointer theme specified elsewhere? I have no setting under home directory. I tried logging out, restart, restarting xfwm4, but nothing works. I just found that the icon pointer changes when the pointer is inside Firefox, but it's not consistent. It keeps changing when I click menu items. Very weird. Any idea how to fix this? This is the output of running: gsettings list-recursively org.gnome.desktop.interface : ~$ gsettings list-recursively org.gnome.desktop.interface org.gnome.desktop.interface automatic-mnemonics true org.gnome.desktop.interface buttons-have-icons false org.gnome.desktop.interface can-change-accels false org.gnome.desktop.interface clock-format '24h' org.gnome.desktop.interface clock-show-date false org.gnome.desktop.interface clock-show-seconds false org.gnome.desktop.interface cursor-blink true org.gnome.desktop.interface cursor-blink-time 1200 org.gnome.desktop.interface cursor-blink-timeout 10 org.gnome.desktop.interface cursor-size 24 org.gnome.desktop.interface cursor-theme 'Adwaita' org.gnome.desktop.interface document-font-name 'Sans 11' org.gnome.desktop.interface enable-animations true org.gnome.desktop.interface font-name 'Cantarell 11' org.gnome.desktop.interface gtk-color-palette 'black:white:gray50:red:purple:blue:light blue:green:yellow:orange:lavender:brown:goldenrod4:dodger blue:pink:light green:gray10:gray30:gray75:gray90' org.gnome.desktop.interface gtk-color-scheme '' org.gnome.desktop.interface gtk-im-module '' org.gnome.desktop.interface gtk-im-preedit-style 'callback' org.gnome.desktop.interface gtk-im-status-style 'callback' org.gnome.desktop.interface gtk-key-theme 'Default' org.gnome.desktop.interface gtk-theme 'Adwaita' org.gnome.desktop.interface gtk-timeout-initial 200 org.gnome.desktop.interface gtk-timeout-repeat 20 org.gnome.desktop.interface icon-theme 'gnome' org.gnome.desktop.interface menubar-accel 'F10' org.gnome.desktop.interface menubar-detachable false org.gnome.desktop.interface menus-have-icons false org.gnome.desktop.interface menus-have-tearoff false org.gnome.desktop.interface monospace-font-name 'Monospace 11' org.gnome.desktop.interface show-input-method-menu true org.gnome.desktop.interface show-unicode-menu true org.gnome.desktop.interface text-scaling-factor 1.0 org.gnome.desktop.interface toolbar-detachable false org.gnome.desktop.interface toolbar-icons-size 'large' org.gnome.desktop.interface toolbar-style 'both-horiz' org.gnome.desktop.interface toolkit-accessibility false ~$

    Read the article

  • Static IPv6 address in Windows unused for outgoing connections

    - by Luc
    I'm running a Windows server and trying to get it to use a static IPv6 address for outgoing connections to other IPv6 hosts (such as Gmail). I need this because Gmail requires a PTR record, and I can't set one for random addresses. The static address is configured on the host, but it also has a temporary privacy address as well as a random address from the router it seems. By default Windows uses the privacy address; it seems this is the expected behavior (and it makes perfect sense for people/users that did not set a static address, but I did!). I've tried disabling the privacy address with: netsh int ipv6 set privacy disabled This indeed gets rid of the privacy address, but I still have the random address that the router assigned. To disable this, it was said I needed to disable "router discovery" using this command: netsh interface ipv6 set interface 14 routerdiscovery=disabled Upon doing this, all IPv6 connectivity is lost. If I do this while pinging Gmail, it will report "Destination host unreachable" as soon as I enter the command. In the static IPv6 configuration, I did configure the default gateway and prefix length, so I don't see why it's unable to connect. Probably has something to do with the lack of ARP in IPv6 and somehow being unable to resolve the router's MAC, but I wouldn't know how to fix this. Finally I've tried disabling the DHCPv6 lease with these commands: netsh interface ipv6 set interface "IDMZ Team" managedaddress=disabled netsh interface ipv6 set interface "IDMZ Team" otherstateful=disabled Which was to no avail; the host continues to obtain and use the router-assigned IPv6 address. The router is a FritzBox 7340, which shows me all the IPv4 and IPv6 addresses that the host (identified by MAC) utilizes, but I'm unable to change the assigned address. Maybe this could be done over the telnet interface of the router somehow, but again, I wouldn't know how to do this even if it's the way to go. In short, any of the following would probably solve my problem: change Windows' source address selection behavior; have Windows not get an address from the router and not generate a privacy address; have the router hand out a static address and make Windows use that as the source address; or recover connectivity after disabling router discovery on Windows. Alternatively I might use some (batch, perl, ...) script to throw away all IPv6 addresses except the desired one, but this feels rather hacky. If it's the only way (or less hacky than another hacky solution), it might be an option though. Thanks!

    Read the article
