Search Results

Search found 4618 results on 185 pages for 'websites'.

Page 149 of 185

  • IIS Multiple Domain Redirect

    - by bingles
    I have 2 domain names that I want to set up different websites for. I am currently using some free hosting that works well for my needs but doesn't give me any way to point "mydomain.com" to the actual site; instead I have to give users a longer, harder-to-remember URL. My proposed solution is to point my domains to my home IP and host a small ASP.NET app through IIS consisting of a redirect page that simply forwards to the appropriate site. Is there a way in ASP.NET to recognize which domain was requested, so I know where to redirect to?
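
    One hedged sketch of that redirect app (ASP.NET Web Forms and the target URLs are assumed here): Request.Url.Host exposes the domain the visitor actually used, so a Global.asax handler can branch on it.

        using System;
        using System.Web;

        public class Global : HttpApplication
        {
            protected void Application_BeginRequest(object sender, EventArgs e)
            {
                // Request.Url.Host is the domain name the visitor requested
                string host = Request.Url.Host.ToLowerInvariant();

                if (host.EndsWith("mydomain.com"))
                    Response.Redirect("http://freehost.example.com/site-one/", true);
                else if (host.EndsWith("myotherdomain.com"))
                    Response.Redirect("http://freehost.example.com/site-two/", true);
            }
        }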

    Read the article

  • What is the best way to read GetResponseStream()?

    - by Dev Dona
    What is the best way to read an HTTP response from GetResponseStream? Currently I'm using the following approach:

        Using SReader As StreamReader = New StreamReader(HttpRes.GetResponseStream)
            SourceCode = SReader.ReadToEnd()
        End Using

    I'm not quite sure this is the most efficient way to read an HTTP response. I need the output as a string. I've seen an article ( http://www.informit.com/guides/content.aspx?g=dotnet&seqNum=583 ) with a different approach, but I'm not quite sure it's a good one, and in my tests that code had encoding issues on different websites. How do you read web responses?
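
    For the encoding problems specifically, a hedged variant of the same approach (variable names as in the question): take the reader's encoding from the charset the server declared instead of StreamReader's default.

        Dim enc As System.Text.Encoding = System.Text.Encoding.UTF8
        If Not String.IsNullOrEmpty(HttpRes.CharacterSet) Then
            ' honor the charset declared in the Content-Type header
            enc = System.Text.Encoding.GetEncoding(HttpRes.CharacterSet)
        End If
        Using SReader As New StreamReader(HttpRes.GetResponseStream(), enc)
            SourceCode = SReader.ReadToEnd()
        End Using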

    Read the article

  • Open Source PHP search engine

    - by Ravi Gupta
    I am looking for an open source search engine plugin written in PHP for my website (eCommerce). Before anybody answers, I have a doubt regarding the search engine. Usually a search engine crawls web pages, creates indexes and then uses them when looking for content. But will the same model work for eCommerce websites? Yes, it can crawl product pages and index them, but don't you think it would be better if it indexed the products stored in the database directly? Then when a user searches for a product, it would simply return the rows of the table which match the user's query. Maybe what I am asking is a stupid question, but I am new to web development, so kindly help me understand the concept. I have looked at a search engine called Sphider but didn't understand what I have to do to make it work with an eCommerce website.
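
    For what it's worth, indexing the database directly is essentially what MySQL's full-text search offers; a minimal sketch, with the table, columns and credentials all assumed:

        // one-time setup: ALTER TABLE products ADD FULLTEXT (name, description);
        $pdo  = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');
        $stmt = $pdo->prepare(
            'SELECT id, name, description
               FROM products
              WHERE MATCH(name, description) AGAINST (:q IN NATURAL LANGUAGE MODE)'
        );
        $stmt->execute(array(':q' => $userQuery)); // $userQuery: the shopper's search terms
        $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);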

    Read the article

  • Practice Questions for Templates, Functors, Callback Functions in C++?

    - by Eternal Learner
    Hi, I have been reading about templates, functors and callback functions for the past week and have referred to some good books and articles. I feel, however, that unless I get good practice writing templates and using functors and callbacks, there is no way I can really understand all the concepts or use them fluently while coding. Could anyone suggest some articles, books or websites where there is a definition of a problem and also a solution to it? I could write code for the problem and check later whether my solution is good enough. I am also aware that some of our Stack Overflow members are experts in templates and callback functions. It would be great if they could design a problem and also post a solution, so that a lot of template beginners like me could benefit.
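
    For a flavor of the kind of exercise being asked about, a minimal sketch that combines all three concepts (every name here is invented):

        #include <iostream>

        // a functor: an object that can be called like a function
        struct Doubler {
            int operator()(int x) const { return 2 * x; }
        };

        // a function template that accepts any callable as a callback
        template <typename Callback>
        int apply(int value, Callback cb) {
            return cb(value);
        }

        int main() {
            std::cout << apply(21, Doubler()) << "\n";           // prints 42
            std::cout << apply(21, [](int x) { return x + 1; }); // prints 22
        }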

    Read the article

  • PHP cURL works fine from localhost but not from the server

    - by Joby Joseph
    I have 2 servers, srv1 and srv2. All client sites are stored on srv2 and all authentication details are stored on srv1. Each time a client site is loaded, the site on srv2 sends a curl request to srv1 to validate. I always get bool(false) when I print the curl response using var_dump, but if I request validation from my local WAMP installation (localhost), it returns the response perfectly. My understanding is that srv1 is blocking srv2's IP, or something like that. Any help or suggestions will be greatly appreciated. The solution is really urgent, as all my clients are stuck with an invalid-authorization message. Edit: curl on srv2 itself seems to work fine, because I am able to fetch other websites without any trouble; I am just not able to fetch data from srv1. I can view a page from srv1 through a browser URL but not through curl.
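
    One hedged first step (the URL is assumed): when curl_exec() returns false, curl_error() names the cause -- timeout, refused connection, DNS, SSL -- which should show whether srv1 is dropping the request.

        $ch = curl_init('https://srv1.example.com/validate.php');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);
        $response = curl_exec($ch);
        if ($response === false) {
            // the error string pinpoints why the request failed
            error_log('curl failed: ' . curl_error($ch) . ' (errno ' . curl_errno($ch) . ')');
        }
        curl_close($ch);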

    Read the article

  • How to ensure an HTTP request is coming from the right place?

    - by seatoskyhk
    I have learned that HTTP_REFERER, like any HTTP request header, can be faked and is not reliable; REMOTE_ADDR is reliable, though. So how can I ensure an incoming HTTP request is coming from a website that I white-list? For example, I have JS code that sends data from the client site to my server (something like a sniper, cross-platform). However, I only allow this to happen from several websites, not others, so that even if other people copy the code and put it on their website, it won't work.
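
    A hedged sketch of one common approach (whitelist values assumed): browsers send an Origin header on cross-site XHR/fetch requests that page scripts cannot forge, so the server can compare it against the white-list; non-browser clients can still fake it, so pair this with a per-site key if stronger assurance is needed.

        $whitelist = array('https://partner-one.example', 'https://partner-two.example');
        $origin    = isset($_SERVER['HTTP_ORIGIN']) ? $_SERVER['HTTP_ORIGIN'] : '';

        if (!in_array($origin, $whitelist, true)) {
            header('HTTP/1.1 403 Forbidden');
            exit;
        }
        // echo back only the single validated origin
        header('Access-Control-Allow-Origin: ' . $origin);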

    Read the article

  • What would be my best MySQL Synchronization method?

    - by Kerry
    We're moving a social media service to separate data centers with global load balancing, as our other hosting provider's entire data center went down. Twice. This means that both websites need to be synchronized in some sense -- I'm less worried about the code of the pages, that's easy enough to sync, but they need to have the same database data. From my research on SO, it seems MySQL replication is a good option, but the MySQL manual, for scaling out, says it works best when there are far more reads than writes/updates: http://dev.mysql.com/doc/refman/5.0/en/replication-solutions-scaleout.html In our case, it's about equal. We're getting around 200-300 thousand requests a day right now, and we can grow rapidly. Every request is both a read and a write request. What would be the best method or tool to handle this?
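
    For reference, a hedged sketch of the usual two-way (master-master) replication settings, with the server IDs and offsets assumed; each server also replicates from the other, and the auto-increment pair keeps generated keys from colliding:

        # my.cnf on server A; server B mirrors this with
        # server-id = 2 and auto_increment_offset = 2
        [mysqld]
        server-id                = 1
        log-bin                  = mysql-bin
        auto_increment_increment = 2
        auto_increment_offset    = 1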

    Read the article

  • MySQL Query, Date Range From "Blacklist"

    - by erbaker
    I have 2 tables. One is properties and the other is dates. In dates I store a land_id and a date (in YYYYMMDD format), which means that date is not available. I need to formulate a query where a user specifies a start and end date and then chooses a property for which all dates in that range are available (i.e. not in the dates table). How do airline and hotel websites do this kind of logic? I was thinking about taking the date range, picking all the days in between, and doing a query where the dates do not match, ordered by number of results, but I can see how that could easily turn into an intense query.

        CREATE TABLE IF NOT EXISTS `dates` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `land_id` int(11) NOT NULL,
          `date` varchar(255) NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=44 ;

        -- Dumping data for table `dates`
        INSERT INTO `dates` (`id`, `land_id`, `date`) VALUES
        (43, 1, '20100526'),
        (39, 1, '20100522'),
        (40, 1, '20100523'),
        (41, 1, '20100521'),
        (42, 1, '20100525');
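
    A hedged sketch of how such sites typically phrase it (a properties table with an id column is assumed): ask for properties that have no blacklist row inside the requested range; the YYYYMMDD format compares correctly even as a string, and the range never has to be expanded into individual days.

        SELECT p.id
          FROM properties p
         WHERE NOT EXISTS (
                 SELECT 1
                   FROM dates d
                  WHERE d.land_id = p.id
                    AND d.date BETWEEN '20100520' AND '20100527'
               );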

    Read the article

  • Where do you find images and graphic designers for your software?

    - by ereOn
    Hi, As a programmer, I'm sure some of you have experienced the same problem: you create good software (free, open-source, or for friends-only distribution, whatever) relying on good code and good ideas, but since you're a programmer and not an image designer, your program just looks bad. While it seems pretty easy to find motivated developers to join an open-source project for free, it seems quite hard to find a single free graphic designer. What free and good resources do you usually use for your programs/websites? Do you have any cool tips that you're willing to share? Do you know any place to find people involved in graphic design who are willing to participate in open-source projects?

    Read the article

  • Setting up scripts in Amazon EC2 Cloud

    - by racket99
    Hello, I am currently running a few Perl and Python scripts on a Windows PC and would like to port them over to the Amazon EC2 servers running 64-bit Linux. The scripts are basic web scrapers that go to a variety of websites, get data and then save it daily as CSV files. I would like to install these in the cloud and get them running in an automated way, so that they will run without my intervention. Also, given that I don't want to lose all the data if the instance crashes, I should upload the CSV files to Amazon S3 as well. Any idea how I can do this? I am not terribly versed in Linux, nor do I know Perl/Python well. What is the best way for me to tackle this?
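
    A hedged sketch of the usual wiring (paths and bucket name assumed, and the s3cmd tool assumed installed): cron runs the scrapers daily, then a second job copies the CSVs to S3.

        # crontab -e on the instance: scrape at 02:00, upload at 02:30
        0  2 * * * /usr/bin/python /home/ec2-user/scrapers/scrape_all.py
        30 2 * * * s3cmd put /home/ec2-user/data/*.csv s3://my-bucket/daily/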

    Read the article

  • Debugging ASP.NET in VS

    - by negligible
    A lot of what I'm doing at the moment is figuring out other people's code and adding or adapting functions, so currently I am debugging more than I am writing code of my own. I'm still new to this (Junior Developer), and I am always finding new ways to improve what I am doing. For example, I recently found This Guide, which had some excellent tips, such as overriding the ToString() method in your classes so child objects are readable from their parents in the debugger. So I am looking for any other tips or tricks to make my debugging more efficient, as I recognise it as a big part of programming that you more experienced programmers may have picked up along the way. Anything is appreciated; I can read websites just fine, so no need to explain it yourself if you have a good link!
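
    In the same spirit as the ToString() tip, a hedged C# sketch (the type is invented) of the DebuggerDisplay attribute, which changes what the watch and locals windows show without touching ToString():

        using System.Collections.Generic;
        using System.Diagnostics;

        // the debugger shows "Order #1042: 3 items" instead of the type name
        [DebuggerDisplay("Order #{Id}: {Items.Count} items")]
        public class Order
        {
            public int Id { get; set; }
            public List<string> Items { get; set; } = new List<string>();
        }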

    Read the article

  • asp.net: moving from session variables to cookies

    - by P a u l
    My forms are losing session variables on shared hosting very quickly (webhost4life), and I think I want to replace them with cookies. Does the following look reasonable for tracking an ID from form to form?

        if (Request.Cookies["currentForm"] == null)
            return;
        projectID = new Guid(Request.Cookies["currentForm"]["selectedProjectID"]);
        Response.Cookies["currentForm"]["selectedProjectID"] =
            Request.Cookies["currentForm"]["selectedProjectID"];

    Note that I am setting the Response cookie in all the forms after I read the Request cookie. Is this necessary? Do the Request cookies copy to the Response automatically? I'm setting no properties on the cookies and create them this way:

        Response.Cookies["currentForm"]["selectedProjectID"] = someGuid.ToString();

    The intention is that these are temporary session cookies, not persisted on the client any longer than the browser session. I ask this since I don't often write websites.
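
    For reference, a hedged sketch of the same write done explicitly: a cookie created with no Expires value is a session cookie, and the browser re-sends it with every request on its own, so copying it back onto each Response should not be necessary.

        // someGuid as in the question; no Expires set => session-only cookie
        var cookie = new HttpCookie("currentForm");
        cookie["selectedProjectID"] = someGuid.ToString();
        Response.Cookies.Add(cookie);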

    Read the article

  • Image links work but show "broken image" in IE.

    - by Path
    I have a problem. I have made some image files for a menu. They work fine in Firefox, but IE (8, haven't tested with others) and Chrome show a broken-image icon on top, even though the image links themselves work. The page is here: http://www.silkeborgmuseum.dk/udvikling/index.php This is a very old page of mine, but I need to make it work. I have tried searching Google and Stack Overflow, but so far I have not been able to find anyone else having this problem, or what is causing it. Can anyone help? As a parting comment, I will say that I have only been developing websites for a few months, but wow, I already hate IE with a fiery passion.

    Read the article

  • Error because virtual directory is not configured as an application in IIS

    - by Cipher
    Hi, I was trying to install a CMS in a folder on my website. After the installation, on trying to run it, it shows this error:

        Error 14: It is an error to use a section registered as
        allowDefinition='MachineToApplication' beyond application level.
        This error can be caused by a virtual directory not being configured
        as an application in IIS.
        (E:\Users\Sarin\Documents\Visual Studio 2010\WebSites\WebAssist\blog\web.config, line 61)

    I added the website as a virtual directory and also converted it to an application. On trying to browse this application, the following error occurs, as shown in the screenshot: http://i.imgur.com/jcRJe.jpg How do I make this work?
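
    A hedged sketch of the command-line equivalent (the site name is assumed; the path comes from the error message), using IIS 7's appcmd to mark the folder as an application:

        %windir%\system32\inetsrv\appcmd add app /site.name:"Default Web Site" ^
            /path:/blog ^
            /physicalPath:"E:\Users\Sarin\Documents\Visual Studio 2010\WebSites\WebAssist\blog"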

    Read the article

  • How to load jQuery if it's not already loaded?

    - by David
    Hi, I'm working on a weather widget, and in order to optimize it I want to check whether jQuery has already been loaded on the page; if not, the widget will load it from my website, because not all websites use jQuery. How do I do that? Example of how my widget is embedded:

        <html>
        blah blah blah .................
        <script src="http://www.xxx.com/weather.js"></script>
        </body>
        </html>

    Thank you
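
    A hedged sketch of what weather.js could start with (the CDN URL is assumed): load jQuery only when the host page hasn't already done so.

        (function () {
            if (window.jQuery) { init(); return; }  // already on the page
            var s = document.createElement('script');
            s.src = 'https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js';
            s.onload = init;                        // run once jQuery is ready
            document.getElementsByTagName('head')[0].appendChild(s);

            function init() {
                jQuery(function ($) { /* widget code goes here */ });
            }
        })();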

    Read the article

  • Test massive website

    - by Ant
    My company has just migrated all the code for our website to 3 identical servers in an off-site location. Now it is our job to test them. However, the number of websites and amount of functionality that we have to test is exorbitant -- and multiply that by 3! Checking every single link and every single function is a daunting task, and we are in the process of doing that manually right now. My question to you guys/girls is this: is there a way to automate the testing so we don't have to waste our time clicking, waiting, and checking the response, times 3? ;-) Let me know if you need any other info. Thanks!
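
    As one hedged starting point before full UI automation (the hostname is invented), a recursive wget spider pass flags broken links across a whole server:

        # failures (404s etc.) are summarized at the bottom of crawl.log
        wget --spider --recursive --level=5 --no-verbose \
             --output-file=crawl.log http://server1.example.com/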

    Read the article

  • When configuring daily backups, which files should I include to be sure I have the MySQL DBs?

    - by user575599
    I have a dedicated LAMP server with cPanel hosting 100 websites (some of them have MySQL DBs). I am currently using the Jungle Disk Server Edition to back up our files from the LAMP server to Amazon S3. Once a week we are backing up the entire cPanel, which is an enormous strain on resources, but that is a separate issue. Now, what I want to do is set up a daily job to back up just the HTML files and the MySQL DBs. If I just back up the "public_html" folder, will my MySQL database info be stored in that directory? Would backing up the public_html folder be enough to recover the DBs? I can find plenty of resources online about how to manually back up MySQL DBs, but with 100 sites I need it automated. I'm hoping for an easy solution where I can just grab a folder to back up each day.
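
    The MySQL data itself normally lives outside public_html (typically under /var/lib/mysql), so backing up that folder alone is not enough. A hedged sketch of the usual nightly automation (paths assumed; credentials assumed to be in ~/.my.cnf):

        #!/bin/sh
        # dump every database to its own dated, compressed file, then let
        # the existing Jungle Disk / S3 job pick up /backups
        BACKUP_DIR=/backups/$(date +%F)
        mkdir -p "$BACKUP_DIR"
        for DB in $(mysql -N -e 'SHOW DATABASES' | grep -Ev '^(information_schema|mysql)$'); do
            mysqldump --single-transaction "$DB" | gzip > "$BACKUP_DIR/$DB.sql.gz"
        done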

    Read the article

  • Any way to improve my gzip PHP method?

    - by Joe
    I gzip my pages currently like so:

        <?php
        ob_start("ob_gzhandler");
        // my page content
        ob_flush();
        ?>

    However, I read a comment somewhere earlier on that this method uses a lot of memory, and I know that my website has been using a lot of memory on my virtual private server, so I thought it would be nice to know a way to reduce memory usage. I tested my site with an online gzip tester, which says my websites are sending gzipped pages, so my gzip method works; the main thing is, I'm looking for a less memory-intensive option, if any. I appreciate all suggestions. :) Oh, and merry Christmas ;P
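
    A hedged alternative: PHP's zlib output compression streams the page in small chunks instead of buffering everything the way an output-buffer handler can, so memory use stays flat. It is enabled in php.ini (or per-directory under mod_php):

        ; php.ini -- compress output in chunks as it is generated
        zlib.output_compression = On
        zlib.output_compression_level = 6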

    Read the article

  • One codebase on dev server and production server - how to deal with links?

    - by Yegor
    I have a copy of the code running on the prod server, and I use my local machine(s) running XAMPP as a dev server. I have several websites that I actively develop, so I'm forced to use http://localhost/sitename All my URLs are relative to the domain (/file.php). They work fine on the prod server, but on the local server they all point to localhost, when I want them to work relative to the site folder they are in. Is there anything I could do other than what I do now, which is this:

        if ($_SERVER['SERVER_NAME'] == "localhost") {
            $path_to       = "http://" . $_SERVER['SERVER_NAME'] . "/folder";
            $path_to_files = $_SERVER['DOCUMENT_ROOT'] . "/folder";
        } else {
            $path_to       = "http://" . $_SERVER['SERVER_NAME'];
            $path_to_files = $_SERVER['DOCUMENT_ROOT'];
        }

    and simply putting $path_to before each link on the site.
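
    One hedged alternative to branching in PHP: give each project its own local hostname with an Apache virtual host in XAMPP (names assumed), plus a matching hosts-file entry, so domain-relative URLs resolve the same way on both machines.

        # httpd-vhosts.conf in XAMPP, paired with the hosts-file line
        # "127.0.0.1 sitename.local"
        <VirtualHost *:80>
            ServerName sitename.local
            DocumentRoot "C:/xampp/htdocs/sitename"
        </VirtualHost>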

    Read the article

  • Icon fonts vs images

    - by Miss A
    My manager tells me not to use icon fonts on our websites, as a font is another HTTP request plus extra kilobytes to download. Also, because I would have to use the CSS content property on :before to place the font icons (I can't change the HTML), he prefers background images, so it works in IE7. Personally I love the little things -- so nice and crisp and resizeable! I get it if we only use a couple of icons on a website, but if I were to use, say, 5 icons on a site, what do you guys think? Is it worth using an icon font, or is he right in thinking that it is not? I am just a sucker for anything new and exciting, and this year it is the retina display.
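
    For context, a hedged sketch of the :before technique in question (the class, font name and glyph codepoint are all assumed); IE7's lack of :before support is exactly the manager's objection:

        /* an icon-font glyph injected via a pseudo-element */
        .icon-search:before {
            font-family: "MyIconFont";
            content: "\e001";
        }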

    Read the article

  • PHP CORS validation

    - by Brian Putt
    I have an endpoint that takes GET requests to collect data from any source that wants to send it. Is there a way to run some validation that the data is in fact coming from the sources we allowed? They enter the website URL that they will be sending the data from, and we generate an API key. The data is sent via a JavaScript file that they install on their website. I have Access-Control-Allow-Origin set to *, since adding hundreds or more websites to that header doesn't really scale, and it would itself be a security risk by showing anyone who looks at the headers who uses the script. Currently I am thinking of using the HTTP Origin / Referer headers, but obviously that doesn't prove too much.
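
    A hedged sketch of one scheme (storage and key format assumed): look up the API key, compare the request's Origin header against the URL registered for that key, and echo back only that one origin instead of *.

        // $registry maps api_key => the site origin registered for it
        $registry = array('abc123' => 'https://customer-site.example');

        $key    = isset($_GET['api_key']) ? $_GET['api_key'] : '';
        $origin = isset($_SERVER['HTTP_ORIGIN']) ? $_SERVER['HTTP_ORIGIN'] : '';

        if (!isset($registry[$key]) || $registry[$key] !== $origin) {
            header('HTTP/1.1 403 Forbidden');
            exit;
        }
        // the wildcard is never exposed; each caller sees only itself
        header('Access-Control-Allow-Origin: ' . $origin);
        header('Vary: Origin');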

    Read the article

  • returning values from a function or method multiple times by only calling the class once

    - by Sokhrat Sikar
    I have a members.php file that shows my website's members. I echo the members' names using a foreach loop: a method of the Members class returns an array, and then I loop over it in the members.php file to echo the members. I am trying to avoid writing PHP code in my members.php file. Is there a way to avoid using foreach inside the members.php file? For example, is it possible to return values from a method multiple times by only calling the object once, just like how we normally call functions? This question may not quite make sense, but I am just trying to see if there is a way around this issue.
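
    A hedged sketch of one way to keep members.php loop-free (class internals and markup assumed): the loop moves into the class, which returns finished markup for the template to echo.

        class Members
        {
            private $names = array('Alice', 'Bob', 'Carol'); // assumed data source

            // build the list inside the class so templates stay loop-free
            public function renderList()
            {
                $items = '';
                foreach ($this->names as $n) {
                    $items .= '<li>' . htmlspecialchars($n) . '</li>';
                }
                return '<ul>' . $items . '</ul>';
            }
        }

        // members.php then reduces to:
        $members = new Members();
        echo $members->renderList();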

    Read the article

  • retrieving 'nulls' from website using Java URL input stream

    - by Roio
    Hi all, I'm trying to read the text from a website using the Java URL input stream as follows:

        URL u = new URL(str);
        br3 = new BufferedReader(new InputStreamReader(u.openStream()));
        while (true)
            System.out.println(br3.readLine());

    This seems to work fine for most websites, but for some URL-shortening services like Linkbee the object draws a blank, e.g. http://linkbee.com/FUAKF. I can view the source code in a browser; however, I repeatedly get nulls when I use the above code.
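
    A hedged sketch of a common fix (the User-Agent string is an assumption): URL shorteners often answer Java's default client with a redirect or an empty body, so use HttpURLConnection, follow redirects, send a browser-like User-Agent, and stop reading at end of stream instead of looping forever on null.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class Fetch {
            public static void main(String[] args) throws Exception {
                HttpURLConnection conn = (HttpURLConnection)
                        new URL("http://linkbee.com/FUAKF").openConnection();
                conn.setInstanceFollowRedirects(true);                // follow 301/302 hops
                conn.setRequestProperty("User-Agent", "Mozilla/5.0"); // look like a browser
                try (BufferedReader br = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()))) {
                    String line;
                    while ((line = br.readLine()) != null) {          // null marks end of stream
                        System.out.println(line);
                    }
                }
            }
        }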

    Read the article

  • UI Design - design pattern for city/country drop down? (ASP.NET MVC)

    - by JK
    What is the best way to do a city/country dropdown pair in ASP.NET MVC? I see lots of places with country above city, but that's unnatural: in real life we write city/country. I've used city first, then country, but the problem is that the user then has to go backwards after changing the country. The other problem is: what do you do about cities/countries not in your list? If city and country are both dropdowns, the user can't type their own city if it is missing. But if you have a dropdown and a textbox, that makes it unwieldy (you end up with 4 controls to enter 2 pieces of data). Are there any example websites where the city/country dropdown pair is done in a very usable and clear manner?
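
    A hedged sketch of one common compromise (IDs and markup invented): a single dropdown per field whose final "Other..." option reveals a free-text input, keeping it at two visible controls.

        <select id="city" onchange="toggleOther(this)">
            <option>London</option>
            <option>Paris</option>
            <option value="__other">Other...</option>
        </select>
        <input id="city-other" type="text" style="display:none" placeholder="Your city">
        <script>
        function toggleOther(sel) {
            // show the text box only when "Other..." is picked
            document.getElementById('city-other').style.display =
                sel.value === '__other' ? '' : 'none';
        }
        </script>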

    Read the article
