Search Results

Search found 27592 results on 1104 pages for 'google sites'.


  • php: fopen() of a URL breaks for domain names, not for numerical addresses

    - by b0fh
    After hours of trying to debug a third-party application having trouble with fopen(), I finally discovered that php -r 'echo(file_get_contents("http://www.google.com/robots.txt"));' fails, but php -r 'echo(file_get_contents("http://173.194.32.81/robots.txt"));' succeeds. Note that as the webserver user, I can ping www.google.com and it resolves just fine. I straced both executions of PHP, and they diverge like this:
    For the numerical v4 URL: socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 3 fcntl(3, F_GETFL) = 0x2 (flags O_RDWR) fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(3, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("173.194 poll([{fd=3, events=POLLOUT}], 1, 0) = 0 (Timeout) ...[bunch of poll/select/recvfrom]... close(3) = 0
    For the domain name: socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 3 close(3) = 0
    PHP didn't even try to do anything with that socket, it seems, or even resolve the domain, for that matter. Recompiling PHP with or without IPv6 support did not seem to matter. Disabling IPv6 on this system is not desirable. Gentoo Linux, PHP 5.3.14; currently giving PHP 5.4 a try to see if it helps. Does anyone have an idea? EDIT: php -r 'echo gethostbyname("www.google.com");' works and yields an IPv4 address, while php -r 'echo(file_get_contents("http://[2a00:1450:4007:803::1011]/"));' seems to return a blank result. EDIT 2: I didn't even notice the first time that the v6 socket opened when the name is used is a SOCK_DGRAM.
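    A minimal diagnostic sketch, assuming the cURL extension is available: it separates name resolution from the HTTP fetch and tries forcing IPv4 as a workaround. The hostnames come from the question; the IPv4-only fallback is an assumption, not a known fix.

    ```php
    <?php
    // What does PHP itself resolve, independent of file_get_contents()?
    var_dump(gethostbyname('www.google.com'));             // A-record lookup via the resolver
    var_dump(dns_get_record('www.google.com', DNS_A));     // full A records
    var_dump(dns_get_record('www.google.com', DNS_AAAA));  // are AAAA records involved?

    // Fetch the same URL, but force IPv4 so the AAAA/IPv6 path is skipped entirely.
    $ch = curl_init('http://www.google.com/robots.txt');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V4);
    $body = curl_exec($ch);
    var_dump(curl_error($ch), strlen((string) $body));
    curl_close($ch);
    ```

    If the dns_get_record() calls succeed while file_get_contents() still fails, the problem sits in PHP's stream layer rather than in system DNS, which matches the strace output above.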

    Read the article

  • Why doesn't jQuery's .load() load a text file from an external website?

    - by Edward Tanguay
    In the example below, when I click the button, it says "Load was performed" but no text is shown. I have a clientaccesspolicy.xml in the root directory and am able to asynchronously load the same file from silverlight. So I would think I should be able to access from AJAX as well. What do I have to change so that the text of the file http://www.tanguay.info/knowsite/data.txt is properly displayed in the #content element? <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <script type="text/javascript" src="http://www.google.com/jsapi"></script> <script type="text/javascript"> google.load("jquery", "1.3.2"); google.setOnLoadCallback(function() { $('#loadButton').click(loadDataFromExernalWebsite); }); function loadDataFromExernalWebsite() { $('#content').load('http://www.tanguay.info/knowsite/data.txt', function() { alert('Load was performed.'); }); } </script> </head> <body> <p>Click the button to load content:</p> <p id="content"></p> <input id="loadButton" type="button" value="load content"/> </body> </html>
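    The likely cause is the browser's same-origin policy: clientaccesspolicy.xml is honoured by Silverlight, not by the browser's XMLHttpRequest, so a cross-domain .load() silently gets nothing back. A common workaround of that era is a small same-origin proxy, sketched below under the assumption of a hypothetical proxy.php on the same host as the page (the later entry about debugging .load() in Firebug already uses exactly this kind of getdata proxy).

    ```php
    <?php
    // proxy.php - same-origin relay for a whitelisted remote file (hypothetical filename).
    $allowed = array('http://www.tanguay.info/knowsite/data.txt');

    $url = isset($_GET['url']) ? $_GET['url'] : '';
    if (!in_array($url, $allowed, true)) {   // never relay arbitrary URLs
        header('HTTP/1.1 403 Forbidden');
        exit('URL not allowed');
    }

    header('Content-Type: text/plain; charset=utf-8');
    echo file_get_contents($url);
    ```

    The page would then call $('#content').load('proxy.php?url=http://www.tanguay.info/knowsite/data.txt', ...) instead of loading the remote URL directly.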

    Read the article

  • Ruby on Rails website hosting

    - by sfactor
    I want to start a website. It'll be a small community-based website. I've learned a fair bit of Ruby on Rails and am planning to use it; however, I have never deployed a production website before, I've just practiced on my local computer. I wanted to know what things I need in order to deploy the website on the internet. What is the best place to get a domain name and web hosting, especially for Ruby on Rails sites? How are cloud-based services like Amazon EC2 different from a traditional web host, and which is the better choice? What else might I need to do to deploy a website? Also, I may happen to have a fair number of users in the future, so how do I go about planning for scalability issues? How do sites like Twitter, fmylife.com, etc. go about these things?

    Read the article

  • Developing a rich internet application

    - by Serge
    Hello, I have been a desktop developer for a few years, mostly doing object-oriented stuff. I am trying to branch out into web development, and as a hobby project I am trying to put a web application together. I have been reading quite a lot of information, but I still can't seem to decide on the path to take and would really like some advice. Basically, I want to build something like this: http://mon.itor.us/ I have found this as well: http://www.trilancer.com/jpolite/#t1 But so far it is of little use, as I am still trying to grasp JavaScript. I have been using Visual Studio for that; is that a good IDE for this type of thing, or should I try Expression Blend? Jpolite seems to do everything with JavaScript, which seems kind of cool, but if I want to make a chart inside a widget that connects to a database, do I need something more? Is this where ASP.NET comes in? I am familiar with .NET, but if I use ASP.NET for my website, do I have to host it on IIS and Windows Server as opposed to Apache, since Mono is still being ironed out? Because that would cost more, so would PHP be a better choice? Also, for charting, these guys as well as Google seem to use Flex: http://www.google.com/finance I have found this: http://www.reynoldsftw.com/2009/03/javascript-chart-alternatives/ Would that be sufficient to implement something like Google Finance purely in JavaScript, or is there a good reason they use Flex? Sorry for the long post, but I was trying to be as detailed as possible. Thanks.

    Read the article

  • Basic steps in reading Excel files into MATLAB

    - by user3693727
    >> [NUM,TXT,RAW] = xlsread('C:\Users\Lincoln Wachn\Google Drive\Summer time\Book1')
    ??? Error using ==> xlsread at 219: XLSREAD unable to open file C:\Users\Lincoln Wachn\Google Drive\Summer time\Book1. File C:\Users\Lincoln Wachn\Google Drive\Summer time\Book1.xls not found.
    This is the error I receive when I try to read a simple Excel file into MATLAB. (A snapshot of the spreadsheet I would like to load was attached to the original question.) Could you guide me through the basics of extracting these data? I have looked through the other questions about reading Excel files into MATLAB, but I am still very confused. I ultimately wish to extract a second file for my project using the same method; the second image showed the data I have to extract, which I could not do. Its file type seems to be different: it is a comma-separated values file, not .xls. Hence, I am also confused about whether the different file type prevents extraction of the data. Thank you for helping.

    Read the article

  • Problem parsing an atom feed using simplexml_load_file(), can't get an attribute.

    - by Craig Ward
    Hi, I am trying to create a social timeline. I pull in feeds from certain places so I have a timeline of things I have done. The problem I am having is with Google Reader shared items. I want to get the time at which I shared the item, which is contained in <entry gr:crawl-timestamp-msec="1269088723811">. Trying to get it using $date = $xml->entry[$i]->link->attributes()->gr:crawl-timestamp-msec; fails because of the : after gr, which causes a PHP error. I couldn't figure out how to get the attribute, so I thought I would rename it using the code below, but it throws the following error: Warning: simplexml_load_file() [function.simplexml-load-file]: I/O warning : failed to load external entity "<?xml version="1.0"?><feed xmlns:idx="urn:atom-extension:indexing" xmlns:media="http://search.yahoo.com/mrss/" xmlns <?php $get_feed = file_get_contents('http://www.google.com/reader/public/atom/user/03120403612393553979/state/com.google/broadcast'); $old = "gr:crawl-timestamp-msec"; $new = "timestamp"; $xml_file = str_replace($old, $new, $get_feed); $xml = simplexml_load_file($xml_file); $i = 0; foreach ($xml->entry as $value) { $id = $xml->entry[$i]->id; $date = date('Y-m-d H:i:s', strtotime($xml->entry[$i]->attributes()->timestamp )); $text = $xml->entry[$i]->title; $link = $xml->entry[$i]->link->attributes()->href; $source = "googleshared"; echo "date = $date<br />"; $sql="INSERT IGNORE INTO timeline (id,date,text,link, source) VALUES ('$id', '$date', '$text', '$link', '$source')"; mysql_query($sql); $i++; } Could someone point me in the right direction please? Cheers, Craig
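    A sketch of the namespace-aware way to read that attribute, assuming the gr: prefix is declared in the feed. Note that the value is milliseconds since the epoch, so it is divided by 1000 rather than run through strtotime(), and simplexml_load_string() is used because the XML is already in a string.

    ```php
    <?php
    $feed = file_get_contents('http://www.google.com/reader/public/atom/user/03120403612393553979/state/com.google/broadcast');
    $xml  = simplexml_load_string($feed);

    foreach ($xml->entry as $entry) {
        // attributes() accepts a namespace prefix when the second argument is true
        $gr   = $entry->attributes('gr', true);
        $msec = (string) $gr['crawl-timestamp-msec'];
        $date = date('Y-m-d H:i:s', (int) ($msec / 1000));
        echo "date = $date<br />";
    }
    ```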

    Read the article

  • IIS Strategies for Accessing Secured Network Resources

    - by Emtucifor
    Problem: A user connects to a service on a machine, such as an IIS web site or a SQL Server database. The site or the database need to gain access to network resources such as file shares (the most common) or a database on a different server. Permission is denied. This is because the user the service is running as doesn't have network permissions in the first place, or if it does, it doesn't have rights to access the remote resource. I keep running into this problem over and over again and am tired of not having a really solid way of handling it. Here are some workarounds I'm aware of: Run IIS as a custom-created domain user who is granted high permissions If permissions are granted one file share at a time, then every time I want to read from a new share, I would have to ask a network admin to add it for me. Eventually, with many web sites reading from many shares, it is going to get really complicated. If permissions are just opened up wide for the user to access any file shares in our domain, then this seems like an unnecessary security surface area to present. This also applies to all the sites running on IIS, rather than just the selected site or virtual directory that needs the access, a further surface area problem. Still use the IUSR account but give it network permissions and set up the same user name on the remote resource (not a domain user, a local user) This also has its problems. For example, there's a file share I am using that I have full rights to for sharing, but I can't log in to the machine. So I have to find the right admin and ask him to do it for me. Any time something has to change, it's another request to an admin. Allow IIS users to connect as anonymous, but set the account used for anonymous access to a high-privilege one This is even worse than giving the IIS IUSR full privileges, because it means my web site can't use any kind of security in the first place. Connect using Kerberos, then delegate This sounds good in principle but has all sorts of problems. First of all, if you're using virtual web sites where the domain name you connect to the site with is not the base machine name (as we do frequently), then you have to set up a Service Principal Name on the webserver using Microsoft's SetSPN utility. It's complicated and apparently prone to errors. Also, you have to ask your network/domain admin to change security policy for the web server so it is "trusted for delegation." If you don't get everything perfectly right, suddenly your intended Kerberos authentication is NTLM instead, and you can only impersonate rather than delegate, and thus no reaching out over the network as the user. Also, this method can be problematic because sometimes you need the web site or database to have permissions that the connecting user doesn't have. Create a service or COM+ application that fetches the resource for the web site Services and COM+ packages are run with their own set of credentials. Running as a high-privilege user is okay since they can do their own security and deny requests that are not legitimate, putting control in the hands of the application developer instead of the network admin. Problems: I am using a COM+ package that does exactly this on Windows Server 2000 to deliver highly sensitive images to a secured web application. I tried moving the web site to Windows Server 2003 and was suddenly denied permission to instantiate the COM+ object, very likely registry permissions. 
I trolled around quite a bit and did not solve the problem, partly because I was reluctant to give the IUSR account full registry permissions. That seems like the same bad practice as just running IIS as a high-privilege user. Note: This is actually really simple. In a programming language of your choice, you create a class with a function that returns an instance of the object you want (an ADODB.Connection, for example), and build a dll, which you register as a COM+ object. In your web server-side code, you create an instance of the class and use the function, and since it is running under a different security context, calls to network resources work. Map drive letters to shares This could theoretically work, but in my mind it's not really a good long-term strategy. Even though mappings can be created with specific credentials, and this can be done by others than a network admin, this also is going to mean that there are either way too many shared drives (small granularity) or too much permission is granted to entire file servers (large granularity). Also, I haven't figured out how to map a drive so that the IUSR gets the drives. Mapping a drive is for the current user, I don't know the IUSR account password to log in as it and create the mappings. Move the resources local to the web server/database There are times when I've done this, especially with Access databases. Does the database have to live out on the file share? Sometimes, it was just easiest to move the database to the web server or to the SQL database server (so the linked server to it would work). But I don't think this is a great all-around solution, either. And it won't work when the resource is a service rather than a file. Move the service to the final web server/database I suppose I could run a web server on my SQL Server database, so the web site can connect to it using impersonation and make me happy. But do we really want random extra web servers on our database servers just so this is possible? No. Virtual directories in IIS I know that virtual directories can help make remote resources look as though they are local, and this supports using custom credentials for each virtual directory. I haven't been able to come up with, yet, how this would solve the problem for system calls. Users could reach file shares directly, but this won't help, say, classic ASP code access resources. I could use a URL instead of a file path to read remote data files in a web page, but this isn't going to help me make a connection to an Access database, a SQL server database, or any other resource that uses a connection library rather than being able to just read all the bytes and work with them. I wish there was some kind of "service tunnel" that I could create. Think about how a VPN makes remote resources look like they are local. With a richer aliasing mechanism, perhaps code-based, why couldn't even database connections occur under a defined security context? Why not a special Windows component that lets you specify, per user, what resources are available and what alternate credentials are used for the connection? File shares, databases, web sites, you name it. I guess I'm almost talking about a specialized local proxy server. Anyway, so there's my list. I may update it if I think of more. Does anyone have any ideas for me? My current problem today is, yet again, I need a web site to connect to an Access database on a file share. Here we go again...

    Read the article

  • Ideas for applications using face detection and recognition

    - by Omry
    Full disclosure: I work at face.com. Face.com just launched a free (up to an hourly limit) face detection and recognition REST API. We have a very handy API sandbox that developers can use to play with the API and to see what it can and can't do. Besides the obvious point of letting you guys know about the API, I wanted to hear from you what kind of applications you think could be developed with it. Some pretty obvious ideas: face-based login (not entirely secure, but still fun); automatic face cropping for sites that let users upload photos (dating sites etc.); some kind of integration into augmented reality games. There are no right or wrong answers here, use your imagination :).

    Read the article

  • Follow a tab's URL in jQuery UI tabs

    - by Aakash Chakravarthy
    Hello, I have jQuery UI tabs like <ul id="tabsList"> <li><a href="#tab-1">Name 1</a></li> <li><a href="#tab-2">Name 2</a></li> <li><a href="http://www.google.com/">Name 3</a></li> </ul> <div id="tab-1">content 1</div> <div id="tab-2">content 2</div> The first two tabs load the respective divs, but the third one should go to google.com; instead it does nothing. It just adds http://example.com/index.html#ui-tabs-[object Object] to the URL. I am developing a WordPress plugin and the admin page needs a tab interface. I tested this on a local server and it is not working. Update: I don't want to load google.com inside the page. It should open the webpage in a new tab/window like ordinary links do.

    Read the article

  • SPWeb.Webs, Site vs SubSite

    - by noob.spt
    Hi, I have a very basic question here. I am confused between SPSite (site collection) and SPWeb. My understanding (or what I could find researching this) is that http://My_server is the top-level site or SPWebApplication, and http://My_server/My_site is a site collection or SPSite. Now, a site under an SPSite will be referenced through SPWeb. So what are we getting when using SPWeb.Webs? What is a subsite? Please let me know if I need to rephrase the question or if more info is needed. Thanks. SPWeb mySite = SPContext.Current.Web; SPWebCollection sites = mySite.Webs; foreach (SPWeb subSite in sites) { Response.Write(SPEncode.HtmlEncode(subSite.Title) + "<BR>"); }

    Read the article

  • Pylons on a production server (Fedora 8)

    - by stormdrain
    I'm interested in learning some Python, and thought Pylons would be a good starting point (after spending two days trying to get Django working, to no avail). I have an Amazon EC2 instance with Fedora 8 on it. It is a bare-bones install. I am halfway through my second day of trying to get it to work. I have mod_wsgi installed. I have Apache (though that's a later task to tackle). I have easy_install, paster is working fine; basically all of the prerequisites mentioned throughout the Pylons docs. I can't for the life of me get the thing to work, and I can't seem to find a coherent walkthrough anywhere that lists all the steps necessary. There is tons of info out there, but it is all scattered. WSGI this, Python that. Google, google, google... "47 million results found for 'socket.error:(lol, 'Yous a goofs')". So, this is my latest attempt: apachectl -k stop; cd /home/; paster create -t pylons test [blah blah.. ok]; cd test; nano development.ini [hmm, last time I changed the host from 127.0.0.1 to my domain name or URL, it threw an error like socket.error: (99, 'Cannot assign requested address')... I'll just leave it]; [open port 5000 on firewall]; paster serve development.ini; [firefox-url:5000] Firefox can't establish a connection to the server. Doing these steps locally works as expected. This is just a test to see if I can get it to work at all, which I can't. If I get it to work, then comes the task of getting it to work with Apache. My madness is that I'd like to play around a little developing and deploying before diving into a full-fledged project. So far: self, I am dissapoint.
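    One likely culprit, offered as an assumption rather than a diagnosis: the development.ini that paster generates binds the server to 127.0.0.1, so nothing outside the instance can reach port 5000 even with the firewall and EC2 security group open. Binding to all interfaces avoids that, and also sidesteps the earlier socket.error (99), which comes from binding to a name that does not resolve to a local interface. A sketch of the change (only the host line differs from the stock file):

    ```ini
    # development.ini (excerpt) - bind to all interfaces instead of loopback only
    [server:main]
    use = egg:Paste#http
    host = 0.0.0.0
    port = 5000
    ```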

    Read the article

  • Do we need to disconnect from SharePoint? If yes, how? (using web services: C#)

    - by Pari
    Hi, I am using web services to access SharePoint lists, sites and documents, like List.asmx, Site.asmx, etc. My question is: do we need to disconnect from SharePoint when using the above services? And if yes, then how? Example:
    GetSiteCollection(String login, String password, String url) { Webs ws = new Webs(); try { ws.Credentials = new NetworkCredential(login, password); } catch (Exception ex) { MessageBox.Show(ex.Message); } ws.Url = url + @"/_vti_bin/webs.asmx"; ws.PreAuthenticate = true; XmlNode websiteNode = ws.GetWebCollection(); XmlNodeList nodes = websiteNode.SelectNodes("*"); // getting the set of sites
    // Now, after this, is there any way to disconnect from the server?
    }

    Read the article

  • Get Eclipse to recognize the Maps API

    - by NickTFried
    Hi I'm developing an Android app and trying to incorporate maps into one of my sub-activities. Having followed all of the instructions from Android, my java file will not recognize the "MapActivity" or the import statements to include the needed api. Here is my XML manifest and my class file. <?xml version="1.0" encoding="utf-8"?> <uses-permission android:name="android.permissions.INTERNET"/> <uses-permission android:name="android.permissions.ACCESS_FINE_LOCATION"/> <application android:icon="@drawable/icon" android:label="@string/app_name"> <uses-library android:name="com.google.android.maps" /> <activity android:name=".CadetCommand" android:label="@string/app_name"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <activity android:name="RedLight"></activity> <activity android:name="PTCalculator"></activity> </application> <uses-sdk android:minSdkVersion="7"/> here is my java file: package edu.elon.cs.mobile; import com.google.android.maps.MapActivity; import com.google.android.maps.MapView; import android.os.Bundle; public class LandNav extends MapActivity{ } Any suggestion would help.

    Read the article

  • Controlling VirtualBox internet access?

    - by HandyGandy
    I am finally going through the process of moving my XP into a VirtualBox VM (host: Linux). The thing is that I am migrating a virtually clean install, so aside from the occasional antivirus scan, I want to make sure that my XP is not silently sending out malware data (keystroke logs, spam, etc.) after having picked up some virus. To that end I want to limit XP to contacting my LAN and a few internet sites (mainly sites that require proprietary Windows-only software to access, AV sites and Windows Update). I want XP to only access preapproved addresses. If it is trying to contact a non-approved address, I want it somehow logged and access restricted until I allow it. I also don't want to have to decide whether to allow access to a site at my leisure. To keep things clear, let me give an example: I start my VirtualBox/XP VM (which I call MYXP) running on my Linux box (called MYLINUX, connecting to the net through a Linksys WRT54G) and it connects via Samba to my LAN (since my LAN seems to be possessed of every evil thing, its address is 192.168.666). At the moment my configuration is set so that I allow MYXP to access 192.168.666, www.MYANTIVIRUS_UPDATES.com and www.MS_UPDATES.com. Then on the VM I start a program which tries to make a connection to www.playmygame.com. www.playmygame.com is on my preapproved list, so the connection goes through. Later I check attempted accesses and discover that it also tried to connect to www.mygame_high_scores.com. I figure this is OK, so I add www.mygame_high_scores.com to my approved list. Later, I again check addresses and discover that my VM/XP tried to access www.mygame_steals_your_identity.com. I do some checking and discover the address is registered to someone in Kiev, Nigeria. Since this doesn't sound kosher to me, I replace the MYXP VM with one that was backed up before I installed mygame, and I remove www.playmygame.com and www.mygame_high_scores.com from my access list for MYXP. The setup should accomplish this with little overhead. When I am not running the VM, ideally it should not have any overhead at all. Suggestions?

    Read the article

  • Fuzzy implementation for capturing specific strings

    - by kasun-456
    I am going to develop a web crawler using Java to capture hotel room prices from hotel websites. In this case I want to capture the room price together with the room type and the meal type, so my algorithm should be intelligent about that. As an example: Room type: Deluxe, Meal type: HalfBoard, Price: $20.00. The main problem is that room prices can be presented in different ways on different hotel sites, so my algorithm should be independent of any particular hotel site. I plan to use the above room types and meal types as fuzzy sets and compare the words on a web page against them using a suitable membership function. Has anyone experienced this, or does anyone have an idea for my problem? See the sketch below for one illustration of the idea.
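    The asker plans to use Java, but as a language-agnostic illustration (shown here in PHP, like the other sketches on this page) of a simple edit-distance membership function; the room and meal vocabularies below are assumptions:

    ```php
    <?php
    // Candidate fuzzy sets (assumed vocabularies).
    $roomTypes = array('single', 'double', 'twin', 'deluxe', 'suite');
    $mealTypes = array('room only', 'bed and breakfast', 'half board', 'full board', 'all inclusive');

    // Membership in [0, 1]: 1.0 is an exact match, 0.0 is completely different.
    function membership($word, $term) {
        $word = strtolower(trim($word));
        $term = strtolower($term);
        $maxLen = max(strlen($word), strlen($term));
        if ($maxLen === 0) {
            return 0.0;
        }
        return 1.0 - levenshtein($word, $term) / $maxLen;
    }

    // Pick the best-matching set member above a cut-off.
    function classify($word, array $set, $cutoff = 0.7) {
        $best = null;
        $bestScore = 0.0;
        foreach ($set as $term) {
            $score = membership($word, $term);
            if ($score > $bestScore) {
                $best = $term;
                $bestScore = $score;
            }
        }
        return $bestScore >= $cutoff ? $best : null;
    }

    var_dump(classify('Delux', $roomTypes));     // "deluxe" (edit distance 1 over length 6)
    var_dump(classify('HalfBoad', $mealTypes));  // "half board" (edit distance 2 over length 10)
    ```

    In a crawler the same scoring would run over the tokens near each price cell, and only classifications above the cut-off would be trusted.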

    Read the article

  • Geographical deployment vs. geo load balancing for SharePoint 2010

    - by vrajaraman
    We have company-wide SharePoint portals planned for a few thousand users. Since the users and their applications (hosted in SharePoint) are distributed among different countries, we would like to compare geo deployment with geo load balancing. Please share your input. We are aware of the following: a geo SharePoint cluster means farms at the central and other sites, with databases split regionally into two database clusters that sync using log shipping, SAN replication, or SQL Server 2008 features such as database mirroring; geo load balancing by URL through a third-party product keeps all farms, sites and databases centralised. The benefits we expect are: 1. high availability; 2. disaster recovery management; 3. easier maintenance. I may have missed some of the points that should be covered.

    Read the article

  • How to run an application as root without asking for an admin password?

    - by kvaruni
    I am writing a program in Objective-C (XCode 3.2, on Snow Leopard) that is capable of either selectively blocking certain sites for a duration or only allow certain sites (and thus block all others) for a duration. The reasoning behind this program is rather simple. I tend to get distracted when I have full internet access, but I do need internet access during my working hours to get to a number of work-related websites. Clearly, this is not a permanent block, but only helps me to focus whenever I find myself wandering a bit too much. At the moment, I am using a Unix script that is called via AppleScript to obtain Administrator permissions. It then activates a number of ipfw rules and clears those after a specific duration to restore full internet access. Simple and effective, but since I am running as a standard user, it gets cumbersome to enter my administrator password each and every time I want to go "offline". Furthermore, this is a great opportunity to learn to work with XCode and Objective-C. At the moment, everything works as expected, minus the actual blocking. I can add a number of sites in a list, specify whether or not I want to block or allow these websites and I can "start" the blocking by specifying a time until which I want to stay "offline". However, I find it hard to obtain clear information on how I can run a privileged Unix command from Objective-C. Ideally, I would like to be able to store information with respect to the Administrator account into the Keychain to use these later on, so that I can simply move into "offline" mode with the convenience of clicking a button. Even more ideally, there might be some class in Objective-C with which I can block access to some/all websites for this particular user without needing to rely on privileged Unix commands. A third possibility is in starting this program with root permissions and the reducing the permissions until I need them, but since this is a GUI application that is nested in the menu bar of OS X, the results are rather awkward and getting it to run each and every time with root permission is no easy task. Anyone who can offer me some pointers or advice? Please, no security-warnings, I am fully aware that what I want to do is a potential security threat.

    Read the article

  • Cross-domain cookie tracking

    - by Jon
    Hi, the company I work for has four domains and I'm trying to set up cookies so that one cookie can be generated and tracked across all the domains. From reading various posts on here, I thought it was possible. I've set up a subdomain on one site to serve a cookie and a 1x1 pixel image to all four sites, but I can't get this working on the other sites. Can anyone clarify: is it possible? Am I missing something obvious, and is there a link to a good example? I'm trying to do this server-side with PHP. Thanks
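    A minimal sketch of the pixel endpoint, under the assumption that a single tracking subdomain (the hostname below is made up) sets and reads its own cookie while being embedded as an image on all four sites. Note that browsers treat this as a third-party cookie, so privacy settings (and, for old IE, the absence of a P3P header) can block it.

    ```php
    <?php
    // pixel.php, served from e.g. track.example-tracker.com (hypothetical host)
    $name = 'visitor_id';
    $id = isset($_COOKIE[$name])
        ? $_COOKIE[$name]
        : md5(uniqid(mt_rand(), true));   // new visitor: generate an id

    // The cookie lives on the tracking domain, so every embedding site sees the same id here.
    setcookie($name, $id, time() + 60 * 60 * 24 * 365, '/', '.example-tracker.com');

    // Optionally record the hit together with the referring site (hypothetical helper).
    // log_hit($id, isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '');

    // Return a 1x1 transparent GIF so the <img> tag renders harmlessly.
    header('Content-Type: image/gif');
    echo base64_decode('R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7');
    ```

    Each of the four sites would then embed <img src="http://track.example-tracker.com/pixel.php" width="1" height="1" alt=""> so every visit touches the shared cookie.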

    Read the article

  • How to get Firebug to tell me what error jQuery's .load() is returning?

    - by Edward Tanguay
    I'm trying to find out what data/error jquery's .load() method is returning in the following code (the #content element is blank so I assume there is some kind of error). Where do I find in Firebug what content or error .load() is returning? How can I use console.log to find out at least what content is being returned? <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <script type="text/javascript" src="http://www.google.com/jsapi"></script> <script type="text/javascript"> google.load("jquery", "1.3.2"); google.setOnLoadCallback(function() { $('#loadButton').click(loadDataFromExernalWebsite); }); function loadDataFromExernalWebsite() { console.log("test"); $('#content').load('http://www.tanguay.info/web/getdata/index.php?url=http://www.tanguay.info/knowsite/data.txt', function() { alert('Load was performed.'); }); } </script> </head> <body> <p>Click the button to load content:</p> <p id="content"></p> <input id="loadButton" type="button" value="load content"/> </body> </html>

    Read the article

  • replace string in preg_replace

    - by zahir hussain
    <?php $a="php.net s earch for in the all php.net sites this mirror only function list online documentation bug database Site News Archive All Changelogs just pear.php.net just pecl.php.net just talks.php.net general mailing list developer mailing list documentation mailing list What is PHP? PHP is a widely-used..."; ?> I want to highlight specific words. For example php, net and func: php.net s earch for in the all **php**.**net** sites this mirror only **func**tion list online documentation bug database Site News Archive All Changelogs just pear.**php**.**net** just pecl.**php**.**net** just talks.php.net general mailing list developer mailing list documentation mailing list What is **PHP**? **PHP** is a widely-used... Thanks advance.

    Read the article

  • Website creation for non-web programmers?

    - by Tim
    I thought I would find decent questions and answers for this, but none really caught my eye... I am a C++ developer and I own a few domains. I'd like to start off with simple web sites for each, with a minimum of time, fuss and learning. I have too many projects going and don't have the time to learn how to build websites. One is for a company that currently has only a single product, with custom development as well. I hacked together some really bad HTML with PayPal links on it. It is just one simple product. I want to add UserVoice to it and maybe some other stuff like an FAQ, forums, etc. Right now I just link to a Google Group I created. Another is a startup in the development phase, but we want to provide simple content like whitepapers and press releases and a section for investors - mostly an "about us" type of thing. We will also be providing details about our product. Then there is a blog site - currently using GoDaddy's Quick Blogcast. Not a bad start, but I suspect I want to move to something else. The question is - is there a framework that I can use that will make decent, if not outstanding, sites? Again, I have my hands full with three projects in addition to my day job and don't have time to learn web programming. I also don't want to just pay a web person and then be out in the cold for upgrades, changes, etc. I have been burned before. I am happy with a web-based app or a desktop app that builds HTML or whatever, which I can then FTP up to the hosting servers. To summarize:
    - simple to get started
    - low time to get a web page going
    - ability to integrate with a few hand-done pages
    - PayPal integration
    - UserVoice integration
    - ability to put it under my SVN would be nice too
    EDIT: Thanks to the responders. I understand now why my original searches failed: I was not searching for "CMS". I'll go back and do that. I would expect that this is a many-times-duplicate... EDIT: I am considering using WordPress and Drupal - one for each of the sites. I did one Drupal site quickly just so I could qualify for one of the Microsoft programs for discounted dev tools - anyway, it was a quick and dirty homepage and I am still on the learning curve. I look forward to playing with it. So far it has been OK. I am not sure about doing a taste-test between the two - it might be a waste of time where I could just become that much better at Drupal faster than spending time on WordPress... Will keep this updated. EDIT: Selecting the Drupal answer by slim for now. That is what I am going with. I don't have time to check them all out. WordPress sounds like a good option too, but such limited time... Results: I have tried WordPress and Drupal so far. WordPress is great for blogging or for a site that you want to run ads from, but I disagree that it is ready for a corporate site, unless you want to spend lots of time making your own theme, etc. But if you spend that time, why not work with Drupal? Drupal was a little intimidating at first, but spending about 4 hours reading the overview and step-by-step guide online was a HUGE step. I got a simple site up and running easily after that. Trying to make a website just by going to the admin panel without reading anything is a waste of time. You really need to read the docs. The site is great. Start here: http://drupal.org/getting-started I'd suggest Drupal to anyone. It has amazing capabilities, lots of support and lots of users. Just doing blogging? WordPress is really great for that.
    So now I've got two sites running with a lot of the functionality I wanted, and they look good. ONE MORE EDIT: Well, I have switched back to WordPress after buying a theme and then getting help from web developers. I guess either one will work - it is just a matter of getting comfortable with the basics, using the right tools and trying things out.

    Read the article

  • File version maintenance via PHP FTP?

    - by Michael
    Currently I'm working on a "plugin" that will be installed on many different sites and I was wondering on the best way for me to maintain the file version of this "plugin". Here's what I was thinking. Have a "master copy" of the plugin on a server, then connect via FTP to the target sites and upload the copy to their site overwriting whatever files they may have. I was wondering the best way to go about this. The "plugin" will have many different folders and files so transferring one file at a time will be too tedious. Is there a way to copy an entire folder over at a time? Or even better, is there a way to recurse through the folders and checking for file difference before uploading the new file? This is to make sure we are uploading a new file and not just the same one.

    Read the article

  • Repeated properties design pattern

    - by Mark
    I have a DownloadManager class that manages multiple DownloadItem objects. Each DownloadItem has events like ProgressChanged and DownloadCompleted. Usually you want to use the same event handler for all download items, so it's a bit annoying to have to set the event handlers over and over again for each DownloadItem. Thus, I need to decide which pattern to use: Use one DownloadItem as a template and clone it as necessary var dm = DownloadManager(); var di = DownloadItem(); di.ProgressChanged += new DownloadProgressChangedEventHandler(di_ProgressChanged); di.DownloadCompleted += new DownloadProgressChangedEventHandler(di_DownloadCompleted); DownloadItem newDi; newDi = di.Clone(); newDi.Uri = "http://google.com"; dm.Enqueue(newDi); newDi = di.Clone(); newDi.Uri = "http://yahoo.com"; dm.Enqueue(newDi); Set the event handlers on the DownloadManager instead and have it copy the events over to each DownloadItem that is enqeued. var dm = DownloadManager(); dm.ProgressChanged += new DownloadProgressChangedEventHandler(di_ProgressChanged); dm.DownloadCompleted += new DownloadProgressChangedEventHandler(di_DownloadCompleted); dm.Enqueue(new DownloadItem("http://google.com")); dm.Enqueue(new DownloadItem("http://yahoo.com")); Or use some kind of factory var dm = DownloadManager(); var dif = DownloadItemFactory(); dif.ProgressChanged += new DownloadProgressChangedEventHandler(di_ProgressChanged); dif.DownloadCompleted += new DownloadProgressChangedEventHandler(di_DownloadCompleted); dm.Enqueue(dif.Create("http://google.com")); dm.Enqueue(dif.Create("http://yahoo.com")); What would you recommend?

    Read the article

  • setTimeout() and clearTimeout() to stop freezing of IE8 and the dialog about scripts overrunning

    - by igl00
    I have some third-party software in which I can open sites and run JavaScript. Because some sites give me a stack overflow, I used the registry trick of modifying the Styles DWORD to FFFFFF. Still, some sites may cause a stack overflow due to the DOM. I thought that at the start of running each site I would do javascript: setTimeout("window.status='one';",10000); and then at the end I would like to clear it. My question is: how do I do that if the timeout doesn't have any actual ID? Will the usual clearTimeout() called without any argument do it fine?

    Read the article

  • Multiple outliers for two-variable linear regression

    - by Dave Jarvis
    Problem: Building on my previous question, the "extreme" outliers in the graph (not reproduced here) are somewhat obvious.
    Question: Given
      T - set of all temperatures
      Y - set of all years
      ST - sum of temperatures
      SY - sum of years
      N - number of elements
      T(n) - temperature of the nth element in the temperature set
    how would you implement an efficient MySQL stored procedure or user-defined function (UDF) to determine if T(n) is an outlier? (If such an implementation already exists, that would be good to know as well.)
    Related sites I am slowly working through to get a better understanding of the problem: Multiple Outliers Detection Procedures in Linear Regression; M-estimator; Measure of Surprise for Outlier Detection; Ordinary Least Squares Linear Regression. Many thanks!
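    One common criterion, offered as a sketch rather than the definitive method: fit an ordinary least-squares line of temperature against year from the sums the question defines (plus two extra sums introduced here), then flag points whose residual exceeds k standard errors, with k = 2 or 3 as an assumed threshold.

    ```latex
    % Extra aggregates (assumptions, not in the question): S_{YY} = sum of Y(n)^2, S_{YT} = sum of Y(n) T(n)
    \[
    b = \frac{N\,S_{YT} - S_Y\,S_T}{N\,S_{YY} - S_Y^{2}},
    \qquad
    a = \frac{S_T - b\,S_Y}{N}
    \]
    % Residual of the n-th point and its standard error (N - 2 degrees of freedom)
    \[
    e_n = T(n) - \bigl(a + b\,Y(n)\bigr),
    \qquad
    s = \sqrt{\frac{\sum_{n=1}^{N} e_n^{2}}{N-2}}
    \]
    % Flag T(n) as an outlier when the residual is large relative to s
    \[
    \lvert e_n \rvert > k\,s, \qquad k \in \{2, 3\}
    \]
    ```

    All of these sums can be accumulated in a single pass over the table, so the rule fits naturally into a stored procedure that first computes the aggregates and then selects the rows violating the inequality.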

    Read the article
