Search Results

Search found 21853 results on 875 pages for 'point'.


  • Run single php code using multiple domains

    - by Acharya
    Hi all, I have a PHP site at xyz.com. Now I want to run the same site using multiple domains, meaning that when somebody opens domain1.com, domain2.com, domain4.com, and so on, it should run the code/site that is at xyz.com. I know one way to do this: I can host all these domains on the server where xyz.com is hosted, so all the domains will point to the same piece of code/site. In the above solution I need to host the domains manually. Is there any other way to do this, as I want to add domains dynamically? Thanks in advance!

  • Installing a new ASP.NET 4.0 site on a Windows 2008 server.

    - by TATWORTH
    I have been specifically requested to blog about getting an ASP.NET 4.0 site working on a Windows 2008 server that has never run a 4.0 web site before. Make sure the 4.0 framework is installed on the server! Patch it until ALL the security patches have been applied (for a live server, make sure that you have tested the patches on your development server first). You will find the HTTP log status codes at http://support.microsoft.com/kb/943891 - they are very important in understanding the IIS logs. After installing, turn on 4.0 by doing the following: Start the Internet Information Services (IIS) Manager. Select the server node in the Connections pane (this is the node above Application Pools, FTP Sites and Server Farms). Double-click the ISAPI and CGI Restrictions item in the centre pane. You should see 1 or 2 ASP.NET v4.0.30319 entries; select Enable in the Actions pane for all of them. ASP.NET 4.0 should now run! Remember, after creating your new 4.0 ASP.NET site, select the Sites node and find out its Id. By default, the IIS logs are at C:\inetpub\logs\LogFiles, and if your site Id is, say, 21, then the logs will be created in the W3SVC21 sub-directory. The key point about using these logs is that in the event of an error when trying to start the site for the first time, the log will contain the status code and the sub-code. By having the full code and sub-code, set-up issues can be resolved in minutes instead of hours.

  • ADF Logging In Deployed Apps

    - by Duncan Mills
    Harking back to my series on using the ADF logger and the related ADF Insider Video, I've had a couple of queries this week about using the logger from Enterprise Manager (EM). I've alluded in those previous materials to how EM can be used, but it's evident that folks need a little help. So in this article, I'll quickly look at how you can switch logging on from the EM console for an application and how you can view the output. Before we start I'm assuming that you have EM up and running; in my case I have a small test install of Fusion Middleware Patchset 5 with an ADF application deployed to a managed server. Step 1 - Select your Application In the EM navigator select the app you're interested in: At this point you can actually bring up the context (right mouse click) menu to jump to the logging, but let's do it another way. Step 2 - Open the Application Deployment Menu At the top of the screen, underneath the application name, you'll find a drop down menu which will take you to the options to view log messages and configure logging, thus: Step 3 - Set your Logging Levels Just like the log configuration within JDeveloper, we can set up transient or permanent (not recommended!) loggers here. In this case I've filtered the class list down to just oracle.demo, and set the log level to config. You can now go away and do stuff in the app to generate log entries. Step 4 - View the Output Again from the Application Deployment menu we can jump to the log viewer screen and, as I have here, start to filter down the logging output to the stuff you're interested in. In this case I've filtered by module name. You'll notice here that you can again look at related log messages. Importantly, you'll also see the name of the log file that holds this message, so if you'd rather analyse the log in more detail offline, through the ODL log analyser in JDeveloper, then you can see which log to download.
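
    For completeness, here is a minimal sketch of the kind of application-side logging the article assumes, using the ADFLogger class that the ADF logging series is built around; treat the exact class names and level as an illustration only:

        import java.util.logging.Level;
        import oracle.adf.share.logging.ADFLogger;

        public class DemoBean {
            // Logger named after the class, so it appears under the oracle.demo package in EM
            private static final ADFLogger LOG = ADFLogger.createADFLogger(DemoBean.class);

            public void doSomething() {
                if (LOG.isLoggable(Level.CONFIG)) {
                    LOG.config("doSomething() called - visible once CONFIG is enabled from EM");
                }
            }
        }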

  • How to manage SOAP requests to a pool of VM each listening on a HTTP port with a priority value in these requests?

    - by sputnick
    I have a front SOAP web server under Linux. It will have to communicate with Windows Server VMs, each listening on an HTTP port, via an HTTP POST request. The chosen VM should return a report of the task to the SOAP client. In the SOAP requests there is a special variable: the priority of the request (a kind of SLA), and my question is coming right now: I am thinking of using HA software (nginx, HAProxy, Heartbeat...) that can manage priority from this point of view. Is it relevant, or do you think I need to implement a queue by myself with some specific development? Ex: if I have SOAP requests with low priority in the pipe, the weight/priority for those VMs should be decreased when I have high-priority SOAP requests at the same time. Any clue will be really appreciated.
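
    If it helps frame the "implement a queue myself" option, here is a rough Java sketch of a priority-aware dispatcher; the Request wrapper and the way a request is forwarded to a VM are invented purely for illustration:

        import java.util.Comparator;
        import java.util.concurrent.PriorityBlockingQueue;

        public class PriorityDispatcher {

            // Hypothetical wrapper: the priority comes from the SLA field of the SOAP request
            static class Request {
                final int priority;
                final Runnable forwardToVm;
                Request(int priority, Runnable forwardToVm) {
                    this.priority = priority;
                    this.forwardToVm = forwardToVm;
                }
            }

            // Requests with the highest priority value are taken first
            private final PriorityBlockingQueue<Request> queue =
                    new PriorityBlockingQueue<>(64,
                            Comparator.comparingInt((Request r) -> r.priority).reversed());

            public void submit(Request r) {
                queue.put(r);
            }

            // One worker loop per backend VM; take() blocks until a request is queued
            public void workerLoop() throws InterruptedException {
                while (!Thread.currentThread().isInterrupted()) {
                    queue.take().forwardToVm.run();
                }
            }
        }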

  • Download videos from youtube as I see it

    - by Sab
    This may seem a somewhat strange requirement: I want to download YouTube videos as I see them. I know that I would have to capture the packets using a program like Wireshark, and I do know that this is possible. So let's say I have 3 computers on my network and 1 smartphone, and let's say I view a YouTube video on my phone. I now want this video to be recorded on any one of the computers so that I can see it later (record in the sense of capturing the packets so that I don't have to download it again and waste my bandwidth). Are there any programs which will do this for me? The reason I want this is that I use IMediaShare to view YouTube videos on my TV. Now, once I see a video, if I want to see it at a later point in time I have to download the entire video again.

  • Touchpad not working after login in Ubuntu

    - by Maria Mateescu
    At some point, my touchpad stopped working after login on my Lenovo x220 under Ubuntu 11.10. I have found two possible solutions for this online, but neither of them works. First, gconftool-2 --set --type boolean /desktop/gnome/peripherals/touchpad/touchpad_enabled true and a second one, xinput set-int-prop "SynPS/2 Synaptics TouchPad" "Synaptics Off" 8 0 After looking more carefully into xinput, I have realized that xinput list-props "SynPS/2 Synaptics TouchPad" outputs: Device Enabled (132): 0 This field seems to be stuck at zero, because trying to set it back to 1 with: xinput set-int-prop "SynPS/2 Synaptics TouchPad" "Device Enabled" 8 1 doesn't seem to have any effect, i.e. I still have: Device Enabled (132): 0 Any ideas? Thank you!

  • How to correctly write an installation or setup document

    - by UmNyobe
    I just joined a small start-up as a software engineer after graduation. The start-up is 4 years old, and I am working with the CEO and the COO, even though there are some people abroad. Basically they both used to do almost everything. I am currently in some kind of training phase. I have at my disposal internal architecture, setup and installation documentation. The architecture documentation is like a bible and should contain complete information. The rest are used to give directions in different processes. The issue is that these documents are more or less dated, as they just didn't have the time to update them. I will be in charge of training the next hires, and updating these documents is part of my training. In some of them there is a lot of hard-coded information like: Install this_module_which_still_exists cd this_dir_name_changed cp this_file_name_changed other_dir_name_changed ./config_script.sh ./execute_script.sh The issues I have faced: either the module installation is completely different (for instance, now there is an RPM, or a different OS), or names have changed and I need to replace old names with new names; the description of the purpose of the current step is missing; information about a whole topic is missing. Fortunately these guys are around and I get all the information I want and all the explanations I need. I want to bring a design to the next documents so that in the future people don't feel like they are completely rewriting a document each time they update it. Do you have suggestions? If there is a lightweight design methodology available online that you can point me to, that's nice too. One thing I will do for sure is set up a versioning repository for the documents alone. There is already one for the source code, so I don't know why internal documents deserve a different treatment.

  • IIS6 can't find site on local network

    - by chezy525
    I have a Windows 2003 server with dual NICs running IIS6. I can access everything remotely, but the internal network can't seem to find the site, regardless of which IP address I try to go to. There are really several weird things happening here, but I'm going to limit this question to what I'm guessing to be the simplest problem (the solution to which I'm hoping solves other things as well): from the server itself, I can access the webpage using the primary IP address (i.e. http://192.168.1.2/index.htm), but not using the secondary IP address (i.e. http://10.10.10.2/index.htm). Pinging both IP addresses from the server itself works, and the "Web site identification" in IIS has the IP address set to "(All Unassigned)"... which I believe should bind both IP addresses to this site. I apologize if I'm not providing enough details about my setup, but at this point I don't even know what's relevant...

  • Why do Google search results include pages disallowed in robots.txt?

    - by Ilmari Karonen
    I have some pages on my site that I want to keep search engines away from, so I disallowed them in my robots.txt file like this: User-Agent: * Disallow: /email Yet I recently noticed that Google still sometimes returns links to those pages in their search results. Why does this happen, and how can I stop it? Background: Several years ago, I made a simple web site for a club a relative of mine was involved in. They wanted to have e-mail links on their pages, so, to try and keep those e-mail addresses from ending up on too many spam lists, instead of using direct mailto: links I made those links point to a simple redirector / address harvester trap script running on my own site. This script would return either a 301 redirect to the actual mailto: URL, or, if it detected a suspicious access pattern, a page containing lots of random fake e-mail addresses and links to more such pages. To keep legitimate search bots away from the trap, I set up the robots.txt rule shown above, disallowing the entire space of both legit redirector links and trap pages. Just recently, however, one of the people in the club searched Google for their own name and was quite surprised when one of the results on the first page was a link to the redirector script, with a title consisting of their e-mail address followed by my name. Of course, they immediately e-mailed me and wanted to know how to get their address out of Google's index. I was quite surprised too, since I had no idea that Google would index such URLs at all, seemingly in violation of my robots.txt rule. I did manage to submit a removal request to Google, and it seems to have worked, but I'd like to know why and how Google is circumventing my robots.txt like that and how to make sure that none of the disallowed pages will show up in their search results. Ps. I actually found out a possible explanation and solution, which I'll post below, while preparing this question, but I thought I'd ask it anyway in case someone else might have the same problem. Please do feel free to post your own answers. I'd also be interested in knowing if other search engines do this too, and whether the same solutions work for them also.
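
    Purely to illustrate the redirector behaviour described above (this is not the author's actual script, and the path and helper methods are placeholders), the idea in servlet form would look roughly like this:

        import java.io.IOException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Mapped somewhere under the /email path that robots.txt disallows
        public class EmailRedirectServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                if (looksLikeHarvester(req)) {
                    // Suspicious access pattern: serve a trap page of fake addresses and links
                    resp.setContentType("text/html");
                    resp.getWriter().println("<!-- randomly generated fake addresses would go here -->");
                    return;
                }
                // Legitimate visitor: permanent redirect to the real mailto: URL
                resp.setStatus(HttpServletResponse.SC_MOVED_PERMANENTLY);
                resp.setHeader("Location", "mailto:" + lookUpAddress(req.getParameter("id")));
            }

            private boolean looksLikeHarvester(HttpServletRequest req) { return false; } // placeholder
            private String lookUpAddress(String id) { return "someone@example.org"; }    // placeholder
        }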

  • Are scheduled job servers the right choice for a time sensitive game engine?

    - by maple_shaft
    I am currently architecting and designing an exciting new web application that will be entering into some areas that I have very little experience in: game development. The application is not necessarily a game, but there are some very time-sensitive tasks and scheduled jobs that a server will need to run to perform game-related activities (e.g. a new match-up starts at noon every day for a 12-day tournament, scoreboards are updated at 5pm every day, etc.). In the past I have typically used cron jobs with the Quartz Scheduler running within a web application server, but I know that this isn't likely a scalable solution for the truly massive userbase that management is telling me to expect (granted, they are management and are probably highly optimistic about this), and also given how important these tasks are in this web application. The other important thing I want to consider is that I want to avoid a SPOF (Single Point Of Failure). If the primary job server goes down, another job server should be able to successfully run the job in its place. I suppose this can be done with appropriate record locking and database transactions. My question is whether scheduled jobs like cron running on a web application server are a wise design choice given the time-sensitive game tasks of this application, or whether there is something more appropriate for running a scalable game engine in parallel with the web application servers.
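
    For reference, a minimal sketch of the Quartz approach mentioned above (Quartz 2.x-style API); clustering against a shared JDBC job store, which is one way to address the SPOF concern, is configured separately in quartz.properties and is not shown here:

        import org.quartz.CronScheduleBuilder;
        import org.quartz.Job;
        import org.quartz.JobBuilder;
        import org.quartz.JobExecutionContext;
        import org.quartz.Scheduler;
        import org.quartz.TriggerBuilder;
        import org.quartz.impl.StdSchedulerFactory;

        public class StartMatchupJob implements Job {

            @Override
            public void execute(JobExecutionContext ctx) {
                // open the day's match-up, update scoreboards, etc.
            }

            public static void schedule() throws Exception {
                Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
                scheduler.scheduleJob(
                        JobBuilder.newJob(StartMatchupJob.class).withIdentity("startMatchup").build(),
                        TriggerBuilder.newTrigger()
                                .withSchedule(CronScheduleBuilder.cronSchedule("0 0 12 * * ?")) // noon daily
                                .build());
                scheduler.start();
            }
        }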

  • How to use a D-Link usb network adapter on debian

    - by Barranka
    I have a Debian (Squeeze) desktop, and I need to use a D-Link 150 USB Wireless Network Adapter. So far I've done this: $ lsusb ... Bus 001 Device 006: ID 2001:3c18 D-Link Corp. ... After looking for a solution on Google, I found that I needed to install the following package: firmware-ralink_0.28+squeeze1_all.deb I've installed it, but Debian doesn't want to find the adapter. When I run lsmod, I can't find what I'm supposed to find: rt2870sta Can you point me in the right direction?

  • Running Subversion on Windows sans Apache?

    - by DA
    We're stuck with a Windows IIS 6 server at the moment. We'd really like to get a decent open-source version control system set up for it. Can Subversion run without Apache? Most documentation that I've come across states that Apache is required, but I have seen a few mentions that Subversion can run as a stand-alone server by itself alongside IIS. Alas, I can't find details on that particular configuration. Is that true and, if so, can anyone point me to some documentation on that?

  • CentOS 6 init script doesn't work properly

    - by user711643
    I'm setting up my Ruby production server based on CentOS 6. I need a process called god (which is a process monitoring tool) to start at boot. I'm using an init script that I found here. Just as stated in the guide I ran: chkconfig --add god and then chkconfig --level 345 god on After this, if I run "service god start|restart", everything works. It loads the available configurations and brings up the related processes (if they are not running). The problem is that it doesn't work at boot. If I reboot the system and then do "ps -aux | grep god", "god" is running, but apparently it didn't load the configuration files. If I then run service god restart, it loads everything without problems. What am I doing wrong?

  • SQL Server 2012 LocalDB

    - by user3061846
    I'm a newbie, so please be patient! I developed an app using C# and SQL Server Express 2012 with a local database; my connection string is "Data Source=localhost ; Initial Catalog = scalnet ; Integrated Security=SSPI; Trusted_Connection=Yes"; Everything worked OK until the time I made a setup and tried to install my app on another computer. My first question is: what version of SQL Server should I install on this machine? It should be as light as possible. I tried to install SQL Express 2012 but it gives me an error when I execute my app: "A network related or instance specific error occurred while establishing a ..... (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server." This is probably a problem with the server configuration, but I have no idea how to solve it... Can anyone point me in the right direction? Thanks

  • Default documentroot apache does not work

    - by James Wise
    I have Apache version 2.2 and PHP 5.3.15 on a single server. I configured virtual hosting and a default vhost. 0_default_.conf - goes to /var/www/default sub.domain.com.conf - goes to /var/www/sub.domain.com My question is, how could I set the default documentroot to sub.domain.com permanently? That means all requests should be redirected to sub.domain.com. I tried to remove 0_default_.conf, but when viewing the page it displays the PHP source code of sub.domain.com. Here are my configurations -- http://pastebin.com/4e3awUJ4 I could create an index.php in /var/www/default that permanently redirects to the sub.domain.com site, but that's not a viable solution for me, because if I didn't point the IP address of sub.domain.com to the server, users could not view that subdomain. I would appreciate it if anyone could share their knowledge and wisdom. Thanks. JamesW

  • .htaccess redirect root directory and subpages with parameters

    - by wali
    I am having difficulty trying to redirect a root directory while at the same time redirecting pages in a sub-directory to a different URL. For example: http://test.example.com/olddir/sub/page.php?v=one to http://test.example.com/new/one while also redirecting any request to the root of the olddir folder. I have tried RewriteCond %{QUERY_STRING} v=one RewriteRule ^/olddir/sub/page.php /new/? [R=301] and RedirectMatch /oldir "test.example.com" RedirectMatch /olddir/sub/page.php?v=one "test.example.com/new/one" Any help at this point will be extremely appreciated... Thanks!

  • Is there a proven concept to website reverse certificate authentication?

    - by Tom
    We're looking at exposing some of our internal application data externally via a website. The actual details of the website aren't that interesting; it'll be built using ASP.NET/IIS etc., which might be relevant. With this, I'm essentially looking for a mechanism to authenticate users viewing my website. This sounds trivial - a username/password is typically fine - but I want more. Now, I've read plenty about SSL/X.509 to realise that the CA determines that we're alright, and that the user can trust us. But I want to trust the user; I want the user to be rejected if they don't have the correct credentials. I've seen a system for online banking whereby the bank issues a certificate which gets installed on the user's computer (it was actually smartcard based). If the website can't discover/utilise the key pair then you are immediately rejected! This is brutal, but necessary. Is there a mechanism where I can do the following: generate a certificate for a user; issue the certificate for them to install (it can be installed on 1 machine); if their certificate is not accessible, they are denied all access; a standard username/password scheme is then used after that; SSL is employed using their certificate once they're "in". This really must already exist - please point me in the right direction! Thanks for your help :)

  • Advice for outdoor wifi hardware and topology

    - by Robot
    I haven't set up any wifi networks other than an access point or two at any single location, so I'd like advice on how to set up an outdoor/weatherproof network in an area approximately 150 feet by 200 feet. The interesting thing is that there are a pair of pools in the middle of the coverage area. Here is a picture: blue is pool, green is coverage area, yellow is building with wired access. Can anyone advise me on weatherproof APs, antennas and placement for best coverage of the pool deck? I've looked at the Meraki stuff, but I'm thinking it's overkill.

  • Sysprep.exe completely missing on both of my Windows 7 64bit machines. How should I find a workaround?

    - by Zoltán Tamási
    The sysprep.exe file is simply missing on my Windows 7 64-bit machine. I tried to find it on another computer, but it wasn't found there either. I can't understand it, because on a lot of forums and even in the official articles there are a lot of references to this tool. I've checked the system, system32 and SysWOW64 folders, and even did a full search with Total Commander. I only found a sysprep folder in the system32 folder, but inside there was only an en-US subfolder, which was empty. Then I thought I would give my Windows PE boot disk a try, which I created a while ago. No result; only the same empty en-US folder is present there as well. If anyone knows what's happening, please point me in the right direction. I need to clone my system and I'm stuck right at the first step...

  • Can't connect nonlocally after 12.10 upgrade

    - by user101815
    I've just upgraded one of my systems from 12.04 to 12.10. Now I can't connect on that system beyond my local network. Connections within the local network seem to work fine, and I can make nonlocal connections from other machines (like the one I'm asking this question from). I suspect that some routing information has been messed up, but I don't know where to look for it. It's not a nameserver problem -- pinging outside sites by their IP addresses doesn't work either. I have another laptop next to this one, also running Kubuntu 12.10. On the one that can't connect, arp produces no output. On the other one, it produces 192.168.0.1 ether 00:23:69:fa:ce:ae C wlan0 On the working machine, the output of netstat starts with some tcp entries. On the nonworking one, those entries are absent. I asked this question on the Ubuntu forum but haven't gotten any answers there. One further complication: since the troublesome machine has no outside connection, it's extremely difficult to download anything to it. For what it's worth, "ping 8.8.8.8" produces "connect: Network is unreachable". Update: after a lot of fiddling, I have my external world back. I don't know what the key action was, but the first indication of progress was that "ping 8.8.8.8" worked. At that point I still didn't have a working nameserver, so external URLs didn't work. But I did this (based on an online post, of course): sudo dpkg-reconfigure resolvconf and answered Yes to all prompts. That did the trick!! Apparently my problem was unique, or close to it, since I couldn't find any online references to it: local net working, remote net not working, including explicit IP addresses. So I suppose that if no one else has this problem, no one cares about the solution!!

  • Seeing DNS changes takes too long on my PC, can it be my router misconfiguration?

    - by Borek
    I administer a few sites and need to update their DNS entries from time to time, e.g., adding an A record pointing a certain subdomain to a certain IP. When I check sites like http://www.opendns.com/support/cache/, I can clearly see the DNS change taking effect throughout the world - so is it just my PC that can't see this change? (ping newsubdomain.example.org says it cannot resolve the host name.) The network "map" is like this: My PC -> my router -> my ISP's router -> internet On my PC, the DNS is set automatically, which means that if I run ipconfig /all, my router is returned as the DNS server (192.168.1.1). On my router, the DNS is set to what my ISP provided me with. Is this correct? What can I do to see new hostnames resolved more quickly?

  • Network card dies when trying to wake up laptop from sleep

    - by Bugmaster
    I have a Dell XPS 1640 laptop (aka Dell Studio XPS 16) running Vista. Whenever the laptop wakes up from sleep, my Broadcom LAN card immediately dies. Any attempt to access it completely locks up whichever program I use to access it. There appears to be no way to re-awaken the card other than rebooting; note that Vista does not even shut down properly if some program is waiting for the card. So far, I have tried the following: Disabled the "allow your computer to shut down this device to save power" option in the device properties. This had no effect. Attempted to disable and then enable the card in the device properties. No effect. Tried ipconfig /renew: this locks up ipconfig to the point where I can't even kill its process. Force-installed updated drivers from Broadcom: no effect. Does anyone have any other ideas?

  • How to recover from locked down XP Home computer

    - by Chris
    We've got a kiosk machine provided to us by a manufacturer. The video card is flaky, so I want to replace it with another card we have on hand, rather than shipping the machine across the country. The problem is that they have policies in place that lock the system down to the point where only the manufacturer's demo works on the computer, so I can't install drivers for the newer card. I know pretty much nothing about Windows policies or the policy editor. Am I fighting a losing battle trying to replace this card?

  • Use only external monitor at screen's native resolution

    - by joaoc
    My laptop's screen lamp just died (I can see content on the screen if I point a light at it), and I was using it with an external monitor. I can switch from extended desktop to mirrored mode but, and here is where I need help, the resolutions don't match. The laptop's resolution is 1600x1200 and the external monitor's is 1680x1050. I am OK with just using one screen at the moment, but I would like it to at least use the native resolution of the external monitor. This is Windows XP, and under Monitor settings I only get the resolutions for the original monitor in mirrored mode. How can I force the screen into a resolution not supported by the laptop screen but that is the native resolution for the external monitor?

  • How to avoid the GameManager god object?

    - by lorancou
    I just read an answer to a question about structuring game code. It made me wonder about the ubiquitous GameManager class, and how it often becomes an issue in a production environment. Let me describe this. First, there's prototyping. Nobody cares about writing great code, we just try to get something running to see if the gameplay adds up. Then there's a greenlight, and in an effort to clean things up, somebody writes a GameManager. Probably to hold a bunch of GameStates, maybe to store a few GameObjects, nothing big, really. A cute, little manager. In the peaceful realm of pre-production, the game is shaping up nicely. Coders have proper nights of sleep and plenty of ideas for architecting the thing with Great Design Patterns. Then production starts and soon, of course, there is crunch time. The balanced diet is long gone, the bug tracker is cracking with issues, people are stressed and the game has to be released yesterday. At that point, usually, the GameManager is a real big mess (to stay polite). The reason for that is simple. After all, when writing a game, well... all the source code is actually there to manage the game. It's easy to just add this little extra feature or bugfix to the GameManager, where everything else is already stored anyway. When time becomes an issue, there's no way to write a separate class, or to split this giant manager into sub-managers. Of course this is a classical anti-pattern: the god object. It's a bad thing, a pain to merge, a pain to maintain, a pain to understand, a pain to transform. What would you suggest to prevent this from happening?
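
    Not a prescription, but one common shape for keeping the manager from swallowing everything, sketched very roughly in Java (all names invented for illustration): the top-level object is reduced to a composition root that wires together narrowly scoped systems and ticks them, so each new feature gets its own class instead of landing in the grab-bag manager.

        import java.util.List;

        // Each system owns exactly one concern and exposes a small interface
        interface GameSystem {
            void update(float dt);
        }

        class ScoreSystem implements GameSystem {
            public void update(float dt) { /* scoring rules only */ }
        }

        class SpawnSystem implements GameSystem {
            public void update(float dt) { /* spawning rules only */ }
        }

        // The "manager" shrinks to a composition root: it wires systems and ticks them,
        // but holds no game rules of its own
        class Game {
            private final List<GameSystem> systems = List.of(new ScoreSystem(), new SpawnSystem());

            void update(float dt) {
                for (GameSystem s : systems) {
                    s.update(dt);
                }
            }
        }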
