Search Results

Search found 18210 results on 729 pages for 'website promotion'.


  • Uploading an image file from a website on a Yahoo server causes a 410 Gone error when trying to access it

    - by creocare
    I have a form that lets users upload a photo of themselves. The upload itself seems to work: the photo exists on the server afterwards. But when I try to access the file I get "403 Forbidden - you don't have permission to access this URL on this server", along with "Additionally, a 410 Gone error was encountered while trying to use an ErrorDocument to handle the request." This is the code I have for uploading the image:

        $target_path = "images/";
        $target_path = $target_path . basename($_FILES['uploadpic']['name']);
        if (move_uploaded_file($_FILES['uploadpic']['tmp_name'], $target_path)) {
            echo "The file " . basename($_FILES['uploadpic']['name']) . " has been uploaded";
        } else {
            echo "There was an error uploading the file, please try again!";
        }
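
    A guess at a fix, offered as an assumption rather than a diagnosis: a 403 on a file that demonstrably exists is often a permissions problem, since PHP's upload temp files are frequently created with a restrictive mode (e.g. 0600). A minimal sketch of the same upload with an explicit chmod added (the 0644 mode is an assumption):

        $target_path = "images/" . basename($_FILES['uploadpic']['name']);
        if (move_uploaded_file($_FILES['uploadpic']['tmp_name'], $target_path)) {
            // Assumption: the moved file kept the temp file's restrictive
            // mode; make it world-readable so the web server can serve it.
            chmod($target_path, 0644);
            echo "The file " . basename($_FILES['uploadpic']['name']) . " has been uploaded";
        } else {
            echo "There was an error uploading the file, please try again!";
        }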


  • How to avoid loading a LINQ to SQL object twice when editing it on a website

    - by emzero
    I know you're all tired of these LINQ to SQL questions, but I'm just starting to use it (I've never used an ORM before) and I've already found some "ugly" things. I'm used to old-school ASP.NET WebForms development, but I want to leave that behind and learn the new stuff (I've just started reading an ASP.NET MVC book and a .NET 3.5/4.0 one). So here is one thing I didn't like and couldn't find a good alternative to. In most examples of editing a LINQ object I've seen, the object is loaded (hitting the database) first, to fill in the current values on the form page. Then the user modifies some fields, and when the "Save" button is clicked, the object is loaded a second time and then updated. Here's a simplified example from ScottGu's NerdDinner site:

        // GET: /Dinners/Edit/5
        [Authorize]
        public ActionResult Edit(int id)
        {
            Dinner dinner = dinnerRepository.GetDinner(id);
            return View(new DinnerFormViewModel(dinner));
        }

        // POST: /Dinners/Edit/5
        [AcceptVerbs(HttpVerbs.Post), Authorize]
        public ActionResult Edit(int id, FormCollection collection)
        {
            Dinner dinner = dinnerRepository.GetDinner(id);
            UpdateModel(dinner);
            dinnerRepository.Save();
            return RedirectToAction("Details", new { id = dinner.DinnerID });
        }

    As you can see, the dinner object is loaded twice for every modification. Unless I'm missing something about LINQ to SQL caching the last queried objects, I don't like fetching it twice when it should be retrieved once, modified, and committed back to the database. So, am I really missing something, or is it really hitting the database twice? (In the example above it won't hurt, but there could be cases where getting an object or set of objects is heavy.) If so, what do you think is the best alternative to avoid double-loading the object? Thanks!
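
    For what it's worth, LINQ to SQL can skip the second SELECT by attaching a freshly constructed entity as modified. A minimal sketch, assuming the Dinner table has a timestamp/rowversion column (Attach with asModified = true requires one, or UpdateCheck.Never on the members) and assuming direct access to a dataContext rather than the repository:

        // POST: /Dinners/Edit/5 -- update without re-fetching the row
        [AcceptVerbs(HttpVerbs.Post), Authorize]
        public ActionResult Edit(int id, FormCollection collection)
        {
            var dinner = new Dinner { DinnerID = id }; // no SELECT issued
            UpdateModel(dinner);                       // copy the form values on
            dataContext.Dinners.Attach(dinner, true);  // attach as modified
            dataContext.SubmitChanges();               // issues a single UPDATE
            return RedirectToAction("Details", new { id = dinner.DinnerID });
        }

    The trade-off is losing optimistic-concurrency checks against the original values, which is part of why the load-then-update pattern is the default in most examples.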


  • VB.Net: How to fill in a form on a website, then click a button and download the file using the WebBrowser control

    - by BasisBit
    I am using a WebBrowser control to fill in a web form and then click a button. This currently results in the standard file-download dialog (the one you get when downloading a file with Internet Explorer), but instead I need to catch the file and save it automatically, under a name I define, to a specific folder. I am writing a small application in VB.NET that downloads the export file from my WordPress blog, and I want this to happen completely without user interaction. Currently everything works except the downloading of the file. I tried to catch it in the System.Windows.Controls.WebBrowser.Navigating event (ByVal sender As Object, ByVal e As System.Windows.Navigation.NavigatingCancelEventArgs), but I don't see where to download the file from. :( I hope you guys can help me.
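
    One way around the dialog is to skip the button click for the download step and fetch the export URL directly, reusing the browser session's cookies for authentication. A minimal sketch, assuming a WinForms System.Windows.Forms.WebBrowser named WebBrowser1 that is already logged in (the WPF control doesn't expose cookies this way), and a made-up export URL:

        ' The URL below is hypothetical; substitute your blog's real export link.
        Dim client As New System.Net.WebClient()
        client.Headers(System.Net.HttpRequestHeader.Cookie) = WebBrowser1.Document.Cookie
        client.DownloadFile("http://myblog.example.com/wp-admin/export.php?download=true", _
                            "C:\backups\wordpress-export.xml")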


  • How do I set the Execute Permissions for an IIS6 website with PowerShell using WMI?

    - by DarkwingDuck
    In inetmgr you can set the property I'm after by going to Home Directory > Application Settings > Execute Permissions and setting the dropdown to "Scripts Only". I'm trying to replicate this behavior in PowerShell. The target OS is Windows Server 2003 running IIS6. Currently I have this simple code to get the site:

        $Site = Get-WmiObject -Namespace root\MicrosoftIISv2 `
            -Query 'select * from IISWebServerSetting where ServerComment="mySite"'

    There are lots of properties it might be, but nothing really leaps out. I've tried changing the setting in inetmgr and dumping the properties out before and after, but I see no differences (it could be a child property, though). Any ideas? Thanks in advance.
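
    In case it helps: in the IIS6 WMI provider, the Execute Permissions dropdown doesn't live on IISWebServerSetting at all but on the virtual-directory settings class, as the AccessFlags bitmask, which would explain why diffing the site settings showed nothing. A sketch, assuming the target is the first site (W3SVC/1) and using the metabase access-permission bits (Read = 1, Script = 512, Execute = 4):

        # "Scripts Only" = Script bit on, Execute bit off.
        $vdir = Get-WmiObject -Namespace root\MicrosoftIISv2 `
            -Query "SELECT * FROM IIsWebVirtualDirSetting WHERE Name = 'W3SVC/1/ROOT'"
        $vdir.AccessFlags = ($vdir.AccessFlags -bor 512) -band (-bnot 4)
        $vdir.Put()   # write the change back to the metabase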


  • How do you stop scripters from slamming your website hundreds of times a second?

    - by davebug
    [Update] I've accepted an answer, as lc deserves the bounty for the well-thought-out answer, but sadly, I believe we're stuck with our original worst-case scenario: CAPTCHA everyone on purchase attempts of the crap. Short explanation: caching / web farms make it impossible for us to actually track hits, and any workaround (sending a non-cached web beacon, writing to a unified table, etc.) slows the site down worse than the bots would. There is likely some pricey bit of hardware from Cisco or the like that can help at a high level, but it's hard to justify the cost if CAPTCHAing everyone is an alternative. I'll attempt a fuller explanation in here later, as well as cleaning this up for future searchers (though others are welcome to try, as it's community wiki).

    I've added a bounty to this question and attempted to explain why the current answers don't fit our needs. First, though, thanks to all of you who have thought about this; it's amazing to have this collective intelligence helping to work through seemingly impossible problems.

    I'll be a little more clear than I was before: this is about the bag o' crap sales on woot.com. I'm the president of Woot Workshop, the subsidiary of Woot that does the design, writes the product descriptions, podcasts, and blog posts, and moderates the forums. I work in the CSS/HTML world and am only barely familiar with the rest of the developer world. I work closely with the developers and have talked through all of the answers here (and many other ideas we've had). Usability of the site is a massive part of my job, and making the site exciting and fun is most of the rest of it. That's where the three goals below derive from. CAPTCHA harms usability, and bots steal the fun and excitement from our crap sales.

    To set up the scenario a little more: bots are slamming our front page tens of times a second, screen-scraping (and/or scanning our RSS) for the Random Crap sale. The moment they see it, a second stage of the program triggers that logs in, clicks "I Want One", fills out the form, and buys the crap.

    In the current (2/6/2009) order of votes:

    lc: On Stack Overflow and other sites that use this method, they're almost always dealing with authenticated (logged-in) users, because the task being attempted requires that. On Woot, anonymous (non-logged-in) users can view our home page. In other words, the slamming bots can be non-authenticated (and essentially non-trackable except by IP address). So we're back to scanning for IPs, which a) is fairly useless in this age of cloud networking and spambot zombies, and b) catches too many innocents, given the number of businesses that come from one IP address (not to mention the issues with non-static-IP ISPs and potential performance hits from trying to track this). Oh, and having people call us would be the worst possible scenario. Can we have them call you?

    BradC: Ned Batchelder's methods look pretty cool, but they're pretty firmly designed to defeat bots built for a network of sites. Our problem is bots built specifically to defeat our site. Some of these methods could likely work for a short time, until the scripters evolved their bots to ignore the honeypot, screen-scrape for nearby label names instead of form ids, and use a JavaScript-capable browser control.

    lc again: "Unless, of course, the hype is part of you


  • How do I check if a user is a fan of my Facebook page on my website?

    - by Tony
    I want to check if my users are fans of my Facebook page. I think something like this should do it:

        <script type="text/javascript" src="http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php/en_US"></script>
        <script type="text/javascript">FB.init("my api key", "xd_receiver.htm");</script>
        <script type="text/javascript">
        //<![CDATA[
        FB_RequireFeatures(["Api"], function () {
            var api = FB.Facebook.apiClient;
            api.pages_isFan(PAGE_ID, gigyaUser.FACEBOOK_USER_ID, function (response) {
                alert(response);
            });
        });
        //]]>
        </script>

    However, for some reason I keep getting null even though the user is in fact a fan of the page. Am I missing something?
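
    A long shot, offered as an assumption rather than a known fix: pages_isFan expects the numeric Facebook uid, so if gigyaUser.FACEBOOK_USER_ID is a Gigya-formatted identifier rather than the raw uid, a null response would be expected. One way to cross-check is an FQL query against the page_fan table; the fql_query wrapper below is assumed to exist in this version of the old JavaScript client library, so verify it before relying on it:

        FB_RequireFeatures(["Api"], function () {
            var api = FB.Facebook.apiClient;
            // page_fan rows have (uid, page_id); a non-empty result means "is a fan".
            api.fql_query(
                "SELECT uid FROM page_fan WHERE page_id=" + PAGE_ID +
                " AND uid=" + gigyaUser.FACEBOOK_USER_ID,
                function (rows) {
                    alert(rows && rows.length > 0);
                });
        });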


  • Force an ASP.NET 3.5 WebSite to use version 1.0.61025.0 of System.Web.Extensions

    - by Greg
    I just upgraded my Web Site project from 2.0 to 3.5 to take advantage of the TimeZoneInfo class. When I did this, I started getting an ambiguous assembly error (see below). The problem is, I'm not using ScriptManager; an old version of Syncfusion is. I can't upgrade Syncfusion right now, so I need to tell ASP.NET to use version 1.0.61025.0 of the assembly. I ripped out all of the 3.5 script stuff from the web.config and added bindingRedirects to it, but it didn't work.

        <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <dependentAssembly>
              <assemblyIdentity name="System.Web.Extensions" publicKeyToken="31bf3856ad364e35" />
              <bindingRedirect oldVersion="3.5.0.0" newVersion="1.0.61025.0" />
            </dependentAssembly>
            <dependentAssembly>
              <assemblyIdentity name="System.Web.Extensions.Design" publicKeyToken="31bf3856ad364e35" />
              <bindingRedirect oldVersion="3.5.0.0" newVersion="1.0.61025.0" />
            </dependentAssembly>
          </assemblyBinding>
        </runtime>

    The error:

        The type 'System.Web.UI.ScriptManager' is ambiguous: it could come from assembly
        'C:\inetpub\wwwroot\xxx\bin\System.Web.Extensions.DLL' or from assembly
        'C:\WINDOWS\assembly\GAC_MSIL\System.Web.Extensions\3.5.0.0__31bf3856ad364e35\System.Web.Extensions.dll'.
        Please specify the assembly explicitly in the type name.
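
    One more avenue, offered as an assumption: a bindingRedirect only applies when an assembly is loaded by identity, while the page compiler picks up the bin copy by path, so the redirect alone may never come into play. If the 3.5 upgrade registered the asp: tag mapping in web.config, pinning it back to the 1.0 assembly explicitly might resolve the ambiguity; a sketch:

        <system.web>
          <pages>
            <controls>
              <!-- Pin ScriptManager and friends to the 1.0 assembly. -->
              <add tagPrefix="asp" namespace="System.Web.UI"
                   assembly="System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
            </controls>
          </pages>
        </system.web>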


  • What's the easiest/fastest way to get my website up and running on the web?

    - by ggfan
    This is probably a really, really beginner question, but I would like to know the fastest way to get my site on the web so that people can start using it. I'm learning everything about programming out of books and at home, so I don't have much experience.

    - Before I go to a site like godaddy.com to get a domain name, are there any free sites that would let me upload my site so users can use it? I have HTML, CSS, PHP, MySQL, and JavaScript in my scripts, so I don't think many sites allow free uploads with such languages.

    - If I can't find a free site, is there any good place to get a domain name and web hosting that supports most languages at a low price? (It doesn't have to be professional hosting, because I am still a beginner.)

    - If I go to, say, godaddy.com and get their web hosting and a domain name, would I be allowed to run PHP, MySQL, Python, and Java on it? (I looked at some hosting sites and most only allow PHP/MySQL.)


  • How can I download all files of a specific type from a website using PHP?

    - by CheeseConQueso
    I want to get all MIDI (*.mid) files from a site that's set up pretty simply in terms of directory tree structure. I wish we had wget installed here, but that's another party... The site is VGMusic.com, and the path containing all of the MIDI files is:

        http://www.vgmusic.com/music/console/nintendo/nes/

    I tried glob'ing it out, but I suppose glob only works locally? Here is what I wrote to try to make it happen (it doesn't work, obviously):

        <?php
        echo 'not a blizzard<br>';
        foreach (glob('http://www.vgmusic.com/music/console/nintendo/nes/*.mid') as $filename) {
            echo $filename . '<br>';
            //$newfile = 'http://www.mydomain.com/nes/' . $filename;
            //copy($filename, $newfile);
        }
        ?>

    I also tried it without the http:// in there, with no luck.
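
    The short answer on the glob question: glob() only walks the local filesystem, so the wildcard never reaches the remote server. A minimal sketch of doing the listing step by hand instead, assuming allow_url_fopen is enabled and that the directory's index page links to the .mid files directly:

        <?php
        $base = 'http://www.vgmusic.com/music/console/nintendo/nes/';
        $html = file_get_contents($base);                      // fetch the listing page
        preg_match_all('/href="([^"\/]+\.mid)"/i', $html, $m); // pull out the .mid links
        foreach (array_unique($m[1]) as $file) {
            copy($base . $file, $file);                        // stream each file down
        }
        ?>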


  • How do you detect a website visitor's country (specifically, US or not)?

    - by BigDave
    I need to show different links to US and non-US visitors to my site. This is for convenience only, so I am not looking for a super-high degree of accuracy, and security and spoofing are not concerns. I know there are geotargeting services and lists, but this seems like overkill, since I only need to determine (roughly) whether the person is in the US or not. I was thinking about using JavaScript to get the user's timezone, but this appears to give only the offset, so users in Canada, Mexico, and South America would have the same value as people in the US. Are there any other bits of information available, either in JavaScript or PHP, short of grabbing the IP address and doing a lookup, to determine this?
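
    If an IP lookup does turn out to be the least-bad option, it can stay a one-liner on the PHP side. A sketch, assuming the PECL geoip extension and its country database are installed on the host:

        <?php
        // geoip_country_code_by_name() accepts a hostname or an IP address.
        $country = geoip_country_code_by_name($_SERVER['REMOTE_ADDR']);
        $isUS = ($country === 'US');
        ?>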


  • Running BlogEngine.NET with ASP.NET MVC under the same website?

    - by Raj Aththanayake
    Can anyone please help me with this? I have a Windows 2008 server, and an MVC 2.0 site is hosted under the IIS 7.0 root directory. The site works fine. I want to use BlogEngine.NET with my site. For example, if my site is http://mysite.com (the root of IIS), the blog should be at http://mysite.com/blog/Default.aspx. Is this possible? Can I create a sub virtual directory within my root (where the MVC 2 app is hosted) and run BlogEngine.NET in it? Any ideas appreciated.
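
    This is generally workable by making /blog an IIS application (or virtual directory) of its own and telling MVC routing to leave those URLs alone. A sketch of the routing side, assuming the standard Global.asax RegisterRoutes method:

        public static void RegisterRoutes(RouteCollection routes)
        {
            // Let ASP.NET WebForms (BlogEngine.NET) handle everything under /blog.
            routes.IgnoreRoute("blog/{*pathInfo}");
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            routes.MapRoute(
                "Default",
                "{controller}/{action}/{id}",
                new { controller = "Home", action = "Index", id = UrlParameter.Optional });
        }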


  • What is an example of a website/service which _isn't_ REST?

    - by montooner
    So I just started digging into web tech, and I'm stuck on the concept of REST. Could someone clarify REST by giving me an example of what isn't REST? As far as I can tell, REST requires the server and client to both be in the same state at the end of every request-response HTTP transfer. Does that sound right? My understanding is that if a client stores state information locally (which the server does not know about), that service is NOT REST. Thanks in advance.
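
    A concrete non-example may help (with one correction: REST's statelessness constraint is about the server, which must not keep per-client session state between requests; clients holding state locally is fine and expected). A sketch of an ASP.NET handler that is NOT RESTful, because two identical GETs return different results depending on hidden server-side session state:

        using System.Web;
        using System.Web.SessionState;

        // NOT REST: the response to GET /PageCounter.ashx depends on
        // session state the server accumulates behind the client's back.
        public class PageCounter : IHttpHandler, IRequiresSessionState
        {
            public void ProcessRequest(HttpContext context)
            {
                int visits = (context.Session["visits"] as int?) ?? 0;
                context.Session["visits"] = ++visits;
                context.Response.Write("Visit number " + visits);
            }

            public bool IsReusable { get { return false; } }
        }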


  • How to change the SharePoint look and feel to that of a professional website?

    - by pointlesspolitics
    I am working on a MOSS 2007 site and looking to do some customisation so it looks like a professional site. By professional I mean that at first glance nobody can tell it is a typical SharePoint site. Example: https://www.twynhamschool.com/. I know how to add the header icons and images in the master page with some effort, but I'm still not sure how to approach completely changing the site's face like a professional site, using CSS, master pages, and page layouts. By the way, I am using SharePoint Designer and am very much confused by the programmatic approach of installing master pages as features/solutions. Any good tips and tricks are most welcome on this issue. If someone knows a list of good sites and articles that give step-by-step instructions with examples, please let me know. Thanks


  • Running a Comet server implementation on a hosted website?

    - by Shishya
    Is it possible to use any of the many Comet implementations (StreamHub, etc.) with a hosted web account from providers like GoDaddy, i.e. just a domain and web hosting account from them? I want to host an iPhone web application on GoDaddy, but I need Comet, i.e. data/notifications pushed to my application. Any alternative would also be helpful.


  • Which web-tier framework for a public commercial website with heavy load?

    - by Maxime ARNSTAMM
    Hello everyone,

    As part of an enterprise architecture exercise, I need to find a Java-based web framework meeting these constraints:

    1. Heavy (I think) load: 5000 concurrent connections.
    2. Widely known: it can't be too exotic, or the contractors would be too highly priced.
    3. Relatively easy to use: development time must be reasonable.
    4. Must be as compliant as possible with the CSS/HTML layout produced by a designer.
    5. Must look "web 2.0" from the marketing point of view.

    What I've learned from my limited experience is:

    - JSF: 1, don't know. 2, 3 OK. 4 not OK (at least not without huge effort).
    - Wicket: 1, not really. 2, 3, and 4 OK.
    - GWT: 1, don't know. 2, 3 OK. 4 not OK (but more OK than JSF).
    - Others: not really "web 2.0", or not really well known.

    I'm really junior, so my ideas about these frameworks are probably wrong; that's why I come to you, stackoverflowees. Thanks for helping :)


  • Why am I unable to debug my ASP.NET website in Visual Studio?

    - by willem
    I used to be able to attach to my w3wp process and debug my web application, but this is not working anymore. I have no idea what changed to break this. My breakpoints simply show "The breakpoint will not currently be hit. The source code is different from the original version." What I have tried:

    - Did a solution Clean.
    - Did a solution Rebuild.
    - Deleted the bin folder.
    - Restarted Visual Studio.
    - Restarted IIS.
    - Restarted my computer.
    - Added a simple Response.Write to ensure that the latest DLL is being used. It is.
    - Made sure that Debug ASP.NET is checked in my project properties. It is.
    - Made sure that all my projects are compiled in my build configuration. They are.

    But none of these help. I attach to w3wp, but my breakpoints never get hit. Any ideas?


  • How can you start a process from ASP.NET without interfering with the website?

    - by Sem Dendoncker
    We have an ASP.NET application that is able to create .air files. To do this we use the following code:

        System.Diagnostics.Process process = new System.Diagnostics.Process();
        //process.StartInfo.FileName = strBatchFile;
        if (File.Exists(@"C:\Program Files\Java\jre6\bin\java.exe"))
        {
            process.StartInfo.FileName = @"C:\Program Files\Java\jre6\bin\java.exe";
        }
        else
        {
            process.StartInfo.FileName = @"C:\Program Files (x86)\Java\jre6\bin\java.exe";
        }
        process.StartInfo.Arguments = GetArguments();
        process.StartInfo.RedirectStandardOutput = true;
        process.StartInfo.RedirectStandardError = true;
        process.StartInfo.UseShellExecute = false;
        process.PriorityClass = ProcessPriorityClass.Idle;
        process.Start();
        string strOutput = process.StandardOutput.ReadToEnd();
        string strError = process.StandardError.ReadToEnd();
        HttpContext.Current.Response.Write(strOutput + "<p>" + strError + "</p>");
        process.WaitForExit();

    The problem is that sometimes the server's CPU reaches 100%, causing the application to run very slowly and even lose sessions (we think this is the cause). Is there any other solution for generating .air files, or for running an external process without interfering with the ASP.NET application? Cheers, M.
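
    Two code-level issues are worth flagging before blaming the server. First, calling ReadToEnd on both redirected streams in sequence can deadlock once either pipe's buffer fills. Second, Process.PriorityClass throws if it is set before Start(), so the Idle priority (the one thing that would directly ease the CPU pressure) likely never takes effect. A sketch of the process-handling portion using asynchronous reads, with everything else assumed to be as in the original:

        var output = new System.Text.StringBuilder();
        var error = new System.Text.StringBuilder();
        process.OutputDataReceived += (s, e) => { if (e.Data != null) output.AppendLine(e.Data); };
        process.ErrorDataReceived += (s, e) => { if (e.Data != null) error.AppendLine(e.Data); };

        process.Start();
        process.PriorityClass = ProcessPriorityClass.Idle; // only valid on a started process
        process.BeginOutputReadLine();  // drain stdout without blocking
        process.BeginErrorReadLine();   // drain stderr without blocking
        process.WaitForExit();

        HttpContext.Current.Response.Write(output.ToString() + "<p>" + error.ToString() + "</p>");

    Beyond that, the usual way to keep heavy work from starving the site is to move it out of the worker process entirely, e.g. queue the job and let a Windows service do the packaging.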


  • Which web crawler to use to save news articles from a website into .txt files?

    - by brokencoding
    I am currently in dire need of news articles to test an LSI implementation (it's for a foreign language, so the usual ready-made packs of files aren't available). So I need a crawler that, given a starting URL, say http://news.bbc.co.uk/, follows all the contained links and saves their content into .txt files; if I could specify the format to be UTF-8, I would be in heaven. I have zero expertise in this area, so I beg you for some suggestions on which crawler to use for this task.
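
    In case no ready-made tool surfaces, a crawler this simple is a short script. A minimal sketch in Python (an assumption, since the post names no language), standard library only: breadth-first within one host, crude tag stripping, UTF-8 .txt output:

        import re
        import urllib.request
        from collections import deque
        from urllib.parse import urljoin, urlparse

        start = 'http://news.bbc.co.uk/'
        host = urlparse(start).netloc
        seen, queue = {start}, deque([start])

        for i in range(200):                      # cap the crawl size
            if not queue:
                break
            url = queue.popleft()
            try:
                html = urllib.request.urlopen(url, timeout=10).read().decode('utf-8', 'replace')
            except Exception:
                continue
            text = re.sub(r'<[^>]+>', ' ', html)  # crude: strip tags, keep the text
            with open('article%05d.txt' % i, 'w', encoding='utf-8') as f:
                f.write(text)
            for link in re.findall(r'href="([^"#]+)"', html):
                full = urljoin(url, link)
                if urlparse(full).netloc == host and full not in seen:
                    seen.add(full)
                    queue.append(full)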

