Search Results

Search found 35278 results on 1412 pages for 'browser based'.


  • Port forward based on external IP (for VPS hosting)

    - by Ben Alter
    What I want to do is host a VPS. First, I'd like to set up a static IP address that forwards to my home IP address (so I can have more than one IP coming into my house). How can I do this without contacting my ISP, and is it even possible? (I don't mind paying for a service that does this.) Once I have the extra external IP address, how can I forward it to my VPS? How is my router supposed to differentiate between two separate external IP addresses?

    Read the article

  • Existing laravel 4 project gives 404 in browser

    - by Richard A
    I'm trying to set up a development environment on a virtual machine running Ubuntu 14.04 LTS using Nginx and HHVM. To do this, I followed the tutorial here. This goes well with a new installation of Laravel, but when I import an existing Laravel 4 project and try to open it on my actual machine (which will serve as the client, running Windows 7), I get a 404 File Not Found error on the screen while connecting to http://sav.savrichard.dev. I did add this to the hosts file with the correct IP address. The virtual machine is receiving the request and responds with a 404 error. How do I solve this error? I'm pretty new to Ubuntu, so I'm not exactly sure what's wrong. The project is located at /var/www/sav.savrichard.net and the server configuration is as follows:

        server {
            listen 80 default_server;

            root /var/www/sav.savrichard.net/public;
            index index.html index.htm index.php;

            server_name sav.savrichard.dev;

            access_log /var/log/nginx/localhost.sav.savrichard.dev-access.log;
            error_log /var/log/nginx/localhost.sav.savrichard.dev-error.log error;

            charset utf-8;

            location / {
                try_files \$uri \$uri/ /index.php?\$query_string;
            }

            location = /favicon.ico { log_not_found off; access_log off; }
            location = /robots.txt { log_not_found off; access_log off; }

            error_page 404 /index.php;

            include hhvm.conf;

            # Deny .htaccess file access
            location ~ /\.ht {
                deny all;
            }
        }

    And the hhvm.conf file is:

        location ~ \.(hh|php)$ {
            fastcgi_keep_conn on;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    Read the article

  • First steps on setting up a query based server [closed]

    - by asghar ashgari
    I have a physical server at home, and I want to do the following silly project to learn the concepts behind server-backend development before doing a real project later on. The idea: turn the server into a calculator. I want anyone to be able to publicly send a query to the server (e.g., 2+2) from a terminal and have the server return the result. So the question is basically where to start, what sort of software I need to install, and what sort of program I need to write.
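    As a concrete starting point, the whole "query-based server" can be a few dozen lines of socket code: listen on a TCP port, read a line such as 2+2, evaluate it, and write the answer back. Below is a minimal Java sketch; the class name, port, and the deliberately trivial '+'-only parser are illustrative assumptions, not a recommended design, and any language with a socket library would do.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.io.PrintWriter;
        import java.net.ServerSocket;
        import java.net.Socket;

        // Minimal "calculator server" sketch: reads lines like "2+2" and replies with the sum.
        // Only single '+' expressions are handled, to keep the example short; a real service
        // would need proper parsing, input validation, and error handling.
        public class CalculatorServer {
            public static void main(String[] args) throws Exception {
                try (ServerSocket server = new ServerSocket(4000)) {        // any free port
                    while (true) {
                        try (Socket client = server.accept();
                             BufferedReader in = new BufferedReader(
                                     new InputStreamReader(client.getInputStream()));
                             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                            String query = in.readLine();                   // e.g. "2+2"
                            if (query != null && query.contains("+")) {
                                String[] parts = query.split("\\+");
                                out.println(Long.parseLong(parts[0].trim())
                                        + Long.parseLong(parts[1].trim()));
                            } else {
                                out.println("unsupported query");
                            }
                        }
                    }
                }
            }
        }

    Anyone could then test it from a terminal with a client such as telnet or netcat pointed at the server's public IP address and port, which matches the "send a query from the terminal" workflow described above; the home router or firewall would also need to forward that port to the server.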

    Read the article

  • XUbuntu: Open file browser via "run command" menu

    - by mbelow
    In older Xubuntu versions, if I entered a path to a directory in the "run command" dialog, a Thunar window opened showing that directory. I found that very handy: if I wanted to open e.g. "/tmp", I just pressed WindowsKey+R, entered /tmp and pressed Enter, and there I was. This was also great for URLs (ftp, http, etc.). Unfortunately, since 12.04 this doesn't work anymore. It seems as if the "run command" dialog is now integrated with the program finder. It still works when I type "thunar ftp://...." or "thunar /tmp", but it's a bit tedious now. Is there a way to restore the old behavior? Note: I'm running a German localization of Xubuntu - I hope I translated "run command" and "program finder" correctly...

    Read the article

  • Week in Geek: Google Chrome Rises to the Top of the Browser Heap, Becomes #1

    - by Asian Angel
    Our last edition of WIG for May is filled with news link goodness covering topics such as a smartphone hijacking vulnerability that affects AT&T and 47 other carriers, a possible problem with Windows 8 booting too quickly, a study finding that half of PC users are pirates, and more. Also featured:
    How To Customize Your Wallpaper with Google Image Searches, RSS Feeds, and More
    47 Keyboard Shortcuts That Work in All Web Browsers
    How To Hide Passwords in an Encrypted Drive Even the FBI Can’t Get Into

    Read the article

  • Looking for the better way to combine deep architecture refactoring with feature based development

    - by voroninp
    Problem statement. Given: TFS as source control; a heavy desktop client application with tons of legacy code and bad or almost absent architecture design; clients who constantly require new features with sound quality and fast delivery, and constantly complain about the user-unfriendly UI. Problem: the application undoubtedly requires deep refactoring, and this process inevitably makes the application unstable, so a dedicated stabilization phase is needed. We've tried: refactoring in master with periodic merges from master (MB) to the feature branches (FB) (my mistake). Result: many unstable branches. What we are advised: create an additional branch for refactoring (RB), periodically synchronizing it with MB via merges from MB to RB. After RB is stabilized, we substitute master with RB and create a new branch for further refactoring. This is the plan, but here I expect the real hell of merging MB into RB after merging any FB into MB. The main advantage: a stable master most of the time. Are there any better alternatives to this process?

    Read the article

  • Efficient way to render tile-based map in Java

    - by Lucius
    Some time ago I posted here because I was having some memory issues with a game I'm working on. That has been pretty much solved thanks to some suggestions here, so I decided to come back with another problem I'm having. Basically, I feel that too much of the CPU is being used when rendering the map. I have a Core i5-2500 processor, and when running the game the CPU usage is about 35% - and I can't accept that that's just how it has to be. This is how I'm going about rendering the map: I have the X and Y coordinates of the player, so I'm not drawing the whole map, just the visible portion of it; the number of visible tiles on screen varies according to the resolution chosen by the player (the CPU usage is 35% when playing at a resolution of 1440x900); if a tile is "empty", I just skip drawing it (this didn't visibly lower the CPU usage, but reduced the drawing time by about 20 ms); the map is composed of 5 layers; the tiles are 32x32 pixels. And just to be on the safe side, I'll post the code for drawing the game here, although it's as messy and unreadable as it can be T_T (I'll try to make it a little readable):

        private void drawGame(Graphics2D g2d) {
            //Width and Height of the visible portion of the map (not of the screen)
            int visionWidht = visibleCols * TILE_SIZE;
            int visionHeight = visibleRows * TILE_SIZE;

            //Since the map can be smaller than the screen, I center it just to be sure
            int xAdjust = (getWidth() - visionWidht) / 2;
            int yAdjust = (getHeight() - visionHeight) / 2;

            //This "deducedX" thing is to move the map a few pixels horizontally, since the player moves by pixels and not full tiles
            int playerDrawX = listOfCharacters.get(0).getX();
            int deducedX = 0;
            if (listOfCharacters.get(0).currentCol() - visibleCols / 2 >= 0) {
                playerDrawX = visibleCols / 2 * TILE_SIZE;
                map_draw_col = listOfCharacters.get(0).currentCol() - visibleCols / 2;
                deducedX = listOfCharacters.get(0).getXCol();
            }

            //"deducedY" is the same deal as "deducedX", but vertically
            int playerDrawY = listOfCharacters.get(0).getY();
            int deducedY = 0;
            if (listOfCharacters.get(0).currentRow() - visibleRows / 2 >= 0) {
                playerDrawY = visibleRows / 2 * TILE_SIZE;
                map_draw_row = listOfCharacters.get(0).currentRow() - visibleRows / 2;
                deducedY = listOfCharacters.get(0).getYRow();
            }

            int max_cols = visibleCols + map_draw_col;
            if (max_cols >= map.getCols()) {
                max_cols = map.getCols() - 1;
                deducedX = 0;
                map_draw_col = max_cols - visibleCols + 1;
                playerDrawX = listOfCharacters.get(0).getX() - map_draw_col * TILE_SIZE;
            }

            int max_rows = visibleRows + map_draw_row;
            if (max_rows >= map.getRows()) {
                max_rows = map.getRows() - 1;
                deducedY = 0;
                map_draw_row = max_rows - visibleRows + 1;
                playerDrawY = listOfCharacters.get(0).getY() - map_draw_row * TILE_SIZE;
            }

            //map_draw_row and map_draw_col represent the coordinates of the upper left tile on the screen
            //iterate through all the tiles on screen and draw them - this is what consumes most of the CPU
            for (int col = map_draw_col; col <= max_cols; col++) {
                for (int row = map_draw_row; row <= max_rows; row++) {
                    Tile[] tiles = map.getTiles(col, row);
                    for (int layer = 0; layer < tiles.length; layer++) {
                        Tile currentTile = tiles[layer];
                        boolean shouldDraw = true;
                        //I only draw the tile if it exists and is not empty (id=-1)
                        if (currentTile != null && currentTile.getId() >= 0) {
                            //The layers above 1 can be drawn behind or in front of the player according to where it's standing
                            if (layer > 1 && currentTile.getId() >= 0) {
                                if (playerBehind(col, row, layer, listOfCharacters.get(0))) {
                                    behinds.get(0).add(new int[]{col, row}); //the tiles that are in front of the player won't be drawn right now
                                    shouldDraw = false;
                                }
                            }
                            if (shouldDraw) {
                                g2d.drawImage(
                                        tiles[layer].getImage(),
                                        (col - map_draw_col) * TILE_SIZE - deducedX + xAdjust,
                                        (row - map_draw_row) * TILE_SIZE - deducedY + yAdjust,
                                        null);
                            }
                        }
                    }
                }
            }
        }

    There's some more code in this method, but nothing relevant to this question. Basically, the biggest problem is that I iterate over around 5000 tiles (at this specific resolution) 60 times each second. I thought about rendering the visible portion of the map once, storing it in a BufferedImage and, when the player moves, shifting the whole image the same amount in the opposite direction and then drawing only the tiles that newly appear on the screen - but if I do it like that, I won't be able to have animated tiles (at least I think). That being said, any suggestions?
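    For what it's worth, the caching idea in the last paragraph does not have to exclude animated tiles: the static layers can be pre-rendered into a BufferedImage that is rebuilt only when the camera moves to a different tile, while animated tiles (and the player) are drawn on top every frame. A rough sketch of that approach, reusing the fields from the code above and assuming a hypothetical Tile.isAnimated() flag; it also assumes the column/row range has already been clamped to the map bounds, as in drawGame:

        // Needs: import java.awt.Graphics2D; import java.awt.image.BufferedImage;
        // Fields added to the same class that owns drawGame().
        private BufferedImage staticCache;
        private int cachedCol = -1, cachedRow = -1;

        // Draws the static layers from a cache; rebuilds it only when the top-left visible tile changes.
        private void drawStaticLayers(Graphics2D g2d, int firstCol, int firstRow,
                                      int cols, int rows, int xOffset, int yOffset) {
            if (staticCache == null || firstCol != cachedCol || firstRow != cachedRow) {
                staticCache = new BufferedImage(cols * TILE_SIZE, rows * TILE_SIZE,
                                                BufferedImage.TYPE_INT_ARGB);
                Graphics2D cg = staticCache.createGraphics();
                for (int col = firstCol; col < firstCol + cols; col++) {
                    for (int row = firstRow; row < firstRow + rows; row++) {
                        for (Tile tile : map.getTiles(col, row)) {
                            if (tile != null && tile.getId() >= 0 && !tile.isAnimated()) {
                                cg.drawImage(tile.getImage(),
                                             (col - firstCol) * TILE_SIZE,
                                             (row - firstRow) * TILE_SIZE, null);
                            }
                        }
                    }
                }
                cg.dispose();
                cachedCol = firstCol;
                cachedRow = firstRow;
            }
            // One big blit instead of thousands of per-tile draw calls.
            g2d.drawImage(staticCache, xOffset, yOffset, null);
            // ...then draw animated tiles, the player, and the "behind" tiles on top as before.
        }

    Whether the rebuild-on-scroll cost is acceptable depends on how often the camera crosses a tile boundary; caching an extra row and column of margin on each side would let the cache survive small scrolls and be rebuilt less often.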

    Read the article

  • Grid based collision - How many cells?

    - by Fibericon
    The game I'm creating is a bullet hell game, so there can be quite a few objects on the screen at any given time. It probably maxes out at about 40 enemies and 200 or so bullets. That being said, I'm splitting up the playing field into a grid for my collision checking. Right now, it's only 8 cells. How many would be optimal? I'm worried that if I use too many, I'll be wasting CPU power. My main concern is processing power, to make the game run smoothly. RAM is not a big concern for me.
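    For a sense of scale: the per-frame cost of a uniform grid is mostly clearing and refilling the cells, so the usual heuristic is to size cells close to the largest object (or a small multiple of it) rather than to target a fixed cell count; with only ~240 objects, even a few hundred small cells is a trivial amount of work and memory, while 8 cells leaves dozens of objects per cell. A minimal grid sketch in Java (the question doesn't name a language, and all names here are illustrative):

        import java.util.ArrayList;
        import java.util.List;

        // Minimal uniform-grid sketch: clear() and re-insert() everything each frame,
        // then call nearby() per object to get candidate collision partners.
        class CollisionGrid<T> {
            private final int cols, rows, cellSize;
            private final List<List<T>> cells;

            CollisionGrid(int worldWidth, int worldHeight, int cellSize) {
                this.cellSize = cellSize;
                this.cols = (worldWidth + cellSize - 1) / cellSize;
                this.rows = (worldHeight + cellSize - 1) / cellSize;
                this.cells = new ArrayList<>(cols * rows);
                for (int i = 0; i < cols * rows; i++) cells.add(new ArrayList<>());
            }

            void clear() {
                for (List<T> cell : cells) cell.clear();   // refilled every frame
            }

            void insert(T obj, float x, float y) {
                int cx = Math.min(cols - 1, Math.max(0, (int) (x / cellSize)));
                int cy = Math.min(rows - 1, Math.max(0, (int) (y / cellSize)));
                cells.get(cy * cols + cx).add(obj);
            }

            // Candidates near (x, y): the containing cell plus its 8 neighbours.
            List<T> nearby(float x, float y) {
                List<T> result = new ArrayList<>();
                int cx = (int) (x / cellSize), cy = (int) (y / cellSize);
                for (int dx = -1; dx <= 1; dx++) {
                    for (int dy = -1; dy <= 1; dy++) {
                        int nx = cx + dx, ny = cy + dy;
                        if (nx >= 0 && nx < cols && ny >= 0 && ny < rows) {
                            result.addAll(cells.get(ny * cols + nx));
                        }
                    }
                }
                return result;
            }
        }

    With 40 enemies and 200 bullets, rebuilding such a grid every frame is a tiny fraction of the frame budget on a Core i5, so it is reasonable to pick the cell size for collision accuracy first and only revisit it if profiling shows the grid as a hotspot.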

    Read the article

  • Search Engine Optimization - The Best Way to Market Your Online Based Business

    There are many things to consider when you start your own online marketing campaign via search engine optimization services. If you're an entrepreneur, you won't have time for all of this, meaning you would need to hire someone to do all the work related to the optimization and functioning of the website. Fortunately, most of the companies that offer search engine optimization (SEO) services also provide web design, content writing, web development, social bookmarking, and other related services.

    Read the article

  • Week in Geek: SkyDrive Bug Blocks Opera Browser Users from the Service

    - by Asian Angel
    Our latest edition of WIG is filled with news link coverage on topics such as how the FBI and CIA can read your e-mail, Blizzard admitting to wrongfully banning a Diablo 3 Linux user and refunding his money, the increase in e-mailed malware disguised as group coupon offers, and more. Chainlink clipart courtesy of For Web Designer. Also featured:
    How To Delete, Move, or Rename Locked Files in Windows
    HTG Explains: Why Screen Savers Are No Longer Necessary
    6 Ways Windows 8 Is More Secure Than Windows 7

    Read the article

  • Using Ops Center to Provision Solaris using a Card-Based NIC

    - by Larry Wake
    Scott Dickson writes:  "Here's what I want to do:  I have a Sun Fire T2000 server with a Quad-GbE nxge card installed.  The only network is connected to port 2 on that card rather than the built-in network interfaces.  I want to install Solaris on it across the network, either Solaris 10 or Solaris 11." See what he did, using Oracle Enterprise Manager Ops Center 12c. [Read More]

    Read the article

  • Setting up a GUI-based libvirt VM

    - by LibertasMens
    I am attempting to create a virtual machine using libvirt that will have a GNOME GUI and be remotely accessible. I have successfully set up the VM to run, but I am unable to access it remotely. The command issued to build the VM:

        sudo vmbuilder kvm ubuntu --suite=precise --flavour=generic --arch=amd64 \
            --mirror=http://de.archive.ubuntu.com/ubuntu -o --libvirt=qemu:///system \
            --part=/usr/xxxx/vmbuilder.partition --templates=/usr/xxxx/mytemplates \
            --addpkg=nano --addpkg=unattended-upgrades --addpkg=acpid \
            --firstboot=/usr/xxxx/boot.sh --cpus=2 --mem=4096 --user=xxxx \
            --name=xxxx --pass=xxxx --hostname=xxxx --bridge=br0

    My intent is to have a virtual machine that my client can remotely access with a GUI.

    Read the article

  • Displaying a grid based map using C++ and sdl

    - by user15386
    I am trying to create a roguelike game using C++ and SDL. However, I am having trouble getting it to display the map, which is represented by a 2D array of a tile class. Currently, my code is this:

        for (int y = 0; y != MAPHEIGHT; y++) {
            for (int x = 0; x != MAPWIDTH 1; x++) {
                apply_surface(x * TILEWIDTH, y * TILEHEIGHT, mymap[x][y].image, screen);
            }
        }

    However, running this code causes it to both dither for a while before opening the SDL window, and (usually) tell me there is an access violation. How can I display my map?

    Read the article

  • Is browser and bot whitelisting a practical approach?

    - by Sn3akyP3t3
    With blacklisting, it takes plenty of time to monitor events to uncover undesirable behavior and then take corrective action. I would like to avoid that daily drudgery if possible. I'm thinking whitelisting would be the answer, but I'm unsure if that is a wise approach given its deny-all, allow-only-a-few nature. My fear is that eventually someone out there will be blocked unintentionally. Even so, whitelisting would also block plenty of undesired traffic to pay-per-use items such as the Google Custom Search API, as well as preserve bandwidth and my sanity. I'm not running Apache, but I'm assuming the idea would be the same. I would essentially be depending on the User-Agent identifier to determine who is allowed to visit. I've tried to take accessibility into account, because some web browsers are more geared toward those with disabilities, although I'm not aware of any specific ones at the moment. I fully understand the need not to depend on whitelisting alone to keep the site away from harm; other means to protect the site still need to be in place. I intend to have a honeypot, a checkbox CAPTCHA, use of OWASP ESAPI, and blacklisting of previously known bad IP addresses.
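    To make the mechanism concrete (independent of Apache or any particular web server), user-agent whitelisting boils down to matching the User-Agent header against a small set of allowed patterns and rejecting everything else. Here is a minimal sketch as a Java servlet filter, using the older javax.servlet API; the patterns are placeholders, not a recommended list, and as noted above the header is trivially spoofed, so this is only one layer among the other measures listed:

        import java.io.IOException;
        import java.util.List;
        import java.util.regex.Pattern;
        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Allows requests only when the User-Agent matches a whitelisted pattern.
        // The patterns below are placeholders for illustration only.
        public class UserAgentWhitelistFilter implements Filter {
            private static final List<Pattern> ALLOWED = List.of(
                    Pattern.compile("Firefox/\\d+", Pattern.CASE_INSENSITIVE),
                    Pattern.compile("Chrome/\\d+", Pattern.CASE_INSENSITIVE),
                    Pattern.compile("Googlebot", Pattern.CASE_INSENSITIVE));

            @Override
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                String ua = ((HttpServletRequest) req).getHeader("User-Agent");
                boolean allowed = ua != null
                        && ALLOWED.stream().anyMatch(p -> p.matcher(ua).find());
                if (allowed) {
                    chain.doFilter(req, res);          // whitelisted: continue normally
                } else {
                    ((HttpServletResponse) res).sendError(HttpServletResponse.SC_FORBIDDEN);
                }
            }

            @Override public void init(FilterConfig cfg) { }
            @Override public void destroy() { }
        }

    The same matching logic could equally live in a reverse proxy or in whatever server is actually in use; the sketch is only meant to show how small the mechanism is and where accessibility-oriented browsers would have to be added to the allowed list.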

    Read the article

  • Google Chrome on Ubuntu 12.04 not rendering webpages correctly

    - by sumit_gt
    I am facing some serious web page rendering issues with Chrome. They are more prominent during JavaScript-based animations and the like on websites such as YouTube. I have tried removing Chrome using (sudo apt-get purge google-chrome-stable) and then reinstalling it, but the problems still persist. The same web pages work correctly in Firefox on Ubuntu and in Chrome on Windows; the problem only shows up when I use Chrome on Ubuntu. I think the issue started after I updated to the latest version of Chrome - I have used Chrome previously on this machine without any problems. I have attached an image that demonstrates the issue. What could possibly be the problem? PS: here's the output of lshw -c video:

        *-display
             description: VGA compatible controller
             product: Madison [Radeon HD 5000M Series]
             vendor: Hynix Semiconductor (Hyundai Electronics)
             physical id: 0
             bus info: pci@0000:01:00.0
             version: 00
             width: 64 bits
             clock: 33MHz
             capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
             configuration: driver=fglrx_pci latency=0
             resources: irq:46 memory:e0000000-efffffff memory:f0020000-f003ffff ioport:d000(size=256) memory:f0000000-f001ffff

    The output of lspci -nn is linked in the original post.

    Read the article

  • How to implement Scrum in a company with three similar web-based products

    - by user1909034
    I am somewhat familiar with the concepts and benefits of Scrum. With that in mind, I am trying to improve the failing Scrum product management structure of a company I'm now working for, which has three separate B2C products catering to the same demographic and accessible on the same website. Each product has a product owner and a unique development team (5-9 people each) behind it. Given that the target audiences are similar (not sure if that should matter) and the three web products are similar in nature, what are the potential benefits and risks of merging the teams and having just one product owner, scrum master, and dev team? Some questions that come to mind: does it make sense to have three product owners and three distinct backlogs if your website has three distinct products? Also, if you only have one product owner, what is the best criterion for choosing who that will be?

    Read the article

  • Google is displaying "Translate this page" based on a previously registered domain inbound links

    - by crnm
    I recently started a new project on a newly registered generic TLD domain. As soon as Google started indexing the page, it displayed a "Translate this page" link in the SERPs, offering to translate the page into the language of a small Eastern European country from the language the site actually uses. I tried everything to prevent this: language meta headers and attributes, localisation through Google Webmaster Tools... all to no avail - nothing helped. After a couple of weeks I spotted dozens of inbound links popping up in Google Webmaster Tools, all coming from that small Eastern European country, from sub-pages that are no longer active (they either send out 404s or 301s to the main page) and that had been written in that other language. So the domain had been registered before, and by the looks of it, it picked up a lot of possibly spammy links in that language. I can't even ask the sites where those links supposedly live to remove them, as the pages are no longer physically active; they only show up in Google Webmaster Tools and/or internal data... Now I'm at a loss about what to do. As my site is pretty new, it does not have many links pointing towards it in my targeted language, so those are probably not enough to convince Google to attach the right language to it, and Google ignores all other signals about the page language. I'm also unsure whether I should use the "disavow" tool, or a reconsideration request... or what else to do about this miserable state. I have never used these tools before, so I don't have any experience with them. Somehow I have to convince Google both about the right language of the page and to not count all those historical links from the previous owner. (The domain had been deleted without any traces in Google before I registered it.) Has anyone here ever dealt with a similar "Translate this page" problem? (I've also looked at this thread: How can I prevent Google mistakenly offering to translate a page? but didn't find a solution there.)

    Read the article
