Search Results

Search found 18210 results on 729 pages for 'website promotion'.

  • CSS column height different on Opera/IE to FF

    - by Infiniti Fizz
    Hi all, thanks to everyone who helped with my last question, but I've got a new browser-specific problem: for some reason, the image navigator (not yet functioning) on a website I'm working on is not displaying in the correct place in Firefox. It appears in the right place in IE8 and Opera, but Firefox seems to have a problem with it. In the first image below, the imageContainer div (the image and the left/right arrows) sits on top of the footer; this is how it should look, i.e. how it looks in IE8 and Opera. In the second image, the imageContainer div cuts into the footer div, and I don't know why. imageContainer has margin-top: 110px; to place it at the bottom of its column. There are two columns: the left one holds the paragraphs and imageContainer, the right one holds the calendar and contact details. The footer div also has clear: both;. Also, it's not just the image that falls into the footer, it's the arrows as well, only they are the same colour as the footer so this isn't immediately apparent. Any ideas why it isn't displaying correctly? Is there a better way of aligning the imageContainer to the bottom of its column (to keep the box shape of the website) than using margin-top to position it? Thanks in advance, infinitifizz

    Read the article

  • Updating a local sqlite db that is used for local metadata & caching from a service?

    - by Pharaun
    I've searched through the site and haven't found a question/answer that quite answers my question; the closest one I found was: Syncing objects between two disparate systems best approach. Anyway, to begin: because there are no RSS feeds available, I'm screen scraping a webpage. The script fetches the page, scrapes out all of the information I'm interested in and dumps it into a sqlite database, so that I can query the information at my leisure without repeatedly fetching from the website. However, I'm also storing various metadata on the data itself in the sqlite db, such as: have I looked at the data, is the data new/old, a bookmark to a chunk of data (think of it as a collection of unrelated data, and the bookmark is just a pointer to where I am in processing/reading the said data).

    So right now my problem is trying to figure out how to update the local sqlite database with new and/or changed data from the website in a manner that is effective and straightforward. Here's my current idea:

    1. Download the page itself.
    2. Create a temporary table for the parsed data to go into.
    3. Do a comparison between the official and the temporary table, and copy updates and/or new information to the official table.

    This process seems kind of complicated because I would have to figure out how to determine if the data in the temporary table is new, updated, or unchanged. So I am wondering if there isn't a better approach, or if anyone has any suggestion on how to architect/structure such a system?
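    A minimal sketch of that staging-table merge, written in Python with sqlite3; the items table, its columns, and the idea of a natural key are assumptions for illustration, not taken from the question:

        import sqlite3

        def sync_scraped_rows(db_path, scraped_rows):
            # scraped_rows is a list of (key, payload) tuples parsed from the page.
            # Assumes a main table items(key TEXT PRIMARY KEY, payload TEXT, seen INTEGER)
            # where key is some natural identifier recoverable from the scraped page.
            con = sqlite3.connect(db_path)
            try:
                con.execute("CREATE TEMP TABLE staging (key TEXT PRIMARY KEY, payload TEXT)")
                con.executemany("INSERT INTO staging (key, payload) VALUES (?, ?)", scraped_rows)

                # New rows: present in staging but not yet in items; metadata starts at defaults.
                con.execute("""
                    INSERT INTO items (key, payload, seen)
                    SELECT s.key, s.payload, 0
                    FROM staging s
                    WHERE NOT EXISTS (SELECT 1 FROM items i WHERE i.key = s.key)
                """)

                # Changed rows: same key, different payload. Only the payload is replaced,
                # so local metadata (seen flags, bookmarks) survives the update.
                con.execute("""
                    UPDATE items
                    SET payload = (SELECT payload FROM staging WHERE staging.key = items.key)
                    WHERE key IN (SELECT key FROM staging)
                      AND payload <> (SELECT payload FROM staging WHERE staging.key = items.key)
                """)
                con.commit()
            finally:
                con.close()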

    Read the article

  • Problems compiling libjingle/gtk+-2.0 for Mac OS X

    - by mindthief
    Hi all, I'm trying to compile libjingle on Mac OS X Snow Leopard. The INSTALL file says to run './configure', 'make' and 'make install', as usual, but make fails for me. Initially it gave some messages indicating that I didn't have pkg-config installed (I guess OS X doesn't ship with it?), so I downloaded pkg-config from http://pkgconfig.freedesktop.org/releases/ Now I get this message:

        Package gtk+-2.0 was not found in the pkg-config search path.
        Perhaps you should add the directory containing `gtk+-2.0.pc'
        to the PKG_CONFIG_PATH environment variable
        No package 'gtk+-2.0' found

    I tried to install GTK using the script at SourceForge: http://sourceforge.net/projects/gtk-osx/ (this is the site pointed to by the GTK website). Running the script didn't really seem to do anything; here is the output:

        $ ./gtk-osx-build-setup.sh
        Checking out jhbuild (2.27.3) from git...
        From git://git.gnome.org/jhbuild
         * tag 2.27.3 -> FETCH_HEAD
        Installing jhbuild...
        Installing jhbuild configuration...
        Installing gtk-osx moduleset files...
        Done.
        $

    And I still get the "Package gtk+-2.0 was not found" error while running make for libjingle. Help will be appreciated, thanks!

    Read the article

  • Is this the right way to organize my database tables?

    - by Moss
    So I'm making a website that allows users to build contact lists. So there are users, the users have lists, and the lists have contacts. It seems to me that I need 3 tables for this, but I just want to make sure. There would be a User table of course, and then a "List of Lists" table that has the username and listname as primary key, along with whatever other info we want to attach to the lists as a whole. Finally, for lack of a better word, the List table, which would again have the username/listname p.k., then the contact ID and the notes and such that the user attaches to that contact on that specific list. I hope that is a clear explanation. For some reason I feel unsure about this arrangement. For one thing, if the website becomes popular the List table could swell to billions of rows. And it also feels a little weird that everybody's list info is all jumbled up in the same table. I suppose I could create separate tables for each user, and even for each list, but that seems like a bad idea for other reasons. My db explanation assumes I can use foreign keys on my tables, which at the moment isn't actually an option. If I can't get InnoDB tables enabled I will probably use IDs for the lists instead of depending on a compound key. Maybe I should do this anyway?
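    A sketch of that three-table layout using surrogate integer keys instead of the compound (username, listname) key. It is written against SQLite via Python's sqlite3 only so it stays self-contained and runnable; in MySQL the equivalent DDL would use AUTO_INCREMENT and ENGINE=InnoDB, and the table/column names here are purely illustrative:

        import sqlite3

        SCHEMA = """
        CREATE TABLE users (
            user_id   INTEGER PRIMARY KEY,
            username  TEXT NOT NULL UNIQUE
        );

        CREATE TABLE lists (
            list_id   INTEGER PRIMARY KEY,
            user_id   INTEGER NOT NULL REFERENCES users(user_id),
            name      TEXT NOT NULL,
            UNIQUE (user_id, name)            -- no two lists with the same name per user
        );

        CREATE TABLE list_contacts (
            list_id    INTEGER NOT NULL REFERENCES lists(list_id),
            contact_id INTEGER NOT NULL,
            notes      TEXT,
            PRIMARY KEY (list_id, contact_id) -- one row per contact per list
        );
        """

        con = sqlite3.connect("contacts.db")
        con.executescript(SCHEMA)
        con.commit()
        con.close()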

    Read the article

  • Identify machines behind a router uniquely based on IP address

    - by Amith George
    Some background first: I have a .NET client agent installed on each of the machines in the LAN. They interact with my central server [website], also on the same LAN. It is important for my website to figure out which of the machines can talk to each other. For example, machines in one subnet cannot directly talk to machines in another subnet without configuring the routers and such, but machines in the same subnet should be able to talk to each other directly. The problem I am facing is when the LAN setup is like in Figure 1. Because Comp1, Comp2 and Comp3 are behind a router, they have the IP addresses 192.168.1.2 through 192.168.1.4, and my client agent on these machines reports those same IP addresses back to the server. However, machines Comp4 and Comp5 also have the same IP addresses. Thus, as far as my server is concerned, there are two machines with the same IP address. Not just that: because the subnet mask is 255.255.255.0 for all machines, my server is fooled into thinking that Comp1 can directly talk to Comp5, which is not possible. So, how do I solve this? What do I need to change in my client or in my server so that I can support this scenario? These two are the only things in my control.
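    One common way out is to have each agent report a stable machine identifier (and, ideally, the address the server actually sees the traffic arrive from) rather than relying on the private IP alone. A language-agnostic sketch of that reporting step, written in Python for brevity; the server URL and JSON field names are hypothetical, and the real agent would do the equivalent in .NET:

        import json
        import socket
        import urllib.request
        import uuid

        def report_to_server(server_url):
            payload = {
                "machine_id": f"{uuid.getnode():012x}",  # MAC-derived, stable per machine/NIC
                "hostname": socket.gethostname(),
                "local_ip": socket.gethostbyname(socket.gethostname()),
            }
            req = urllib.request.Request(
                server_url,
                data=json.dumps(payload).encode("utf-8"),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:   # the server can pair machine_id with the
                return resp.status                      # source address it sees (the router's)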

    Read the article

  • How to deploy a number of disparate project types?

    - by niteice
    This question is similar to http://stackoverflow.com/questions/1900269/whats-the-best-way-to-deploy-an-executable-process-on-a-web-server. The situation is this: I'm developing a product that needs to be deployed to a web server. It consists of 4 website projects, a background service, a couple of command-line tools, and two assemblies shared by all of these components. Now, I also happen to administer the server that this product will be deployed on, so I'm familiar with everything that may need to be done to perform an update:

    - Copy website files
    - Replace the service binary
    - Install updated components in the GAC
    - Configure IIS
    - Update database schema

    After some research it seems that, to reduce deployment time and to be able to let the other sysadmins handle deployment, I want to deploy all of these as an MSI, except that I don't know a thing about installers. I know VS can generate web deployment projects, but where do I go from there? Being able to simply click Next a few times on an installer is my goal for deploying updates. It would also be nice to modularize it, so for example, I could distribute the four websites among multiple servers and have everything appear as individual components in the installer, and as one entity in Add/Remove Programs. Is all of this too much to ask in a single package?

    Read the article

  • How to store multiple cookies through PHP Curl

    - by Ahmad
    SOUP.IO does not provide any API, so I am trying to use PHP cURL to log in and submit data through PHP. I am able to log in to the website successfully (through cURL), but when I try to submit data through cURL it gives me an 'invalid user' error. When I analysed the code and the website, I came to know that cURL is getting the values of only 1-2 cookies, whereas when I open the same page in Firefox it shows me 6-7 cookies related to SOUP.IO. Can someone guide me on how to get all of these cookie values?

    Cookie obtainable by cURL: soup_session_id

    Cookies shown in Firefox (not through cURL): __qca, __utma, __utmb, __utmc, __utmz

    Following is my cURL code:

        $cookie_file_path = getcwd()."/cookie/cookie.txt";
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, 'http://www.soup.io');
        curl_setopt($ch, CURLOPT_VERBOSE, 1);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, FALSE);
        curl_setopt($ch, CURLOPT_HEADER, TRUE);
        curl_setopt($ch, CURLOPT_ENCODING, 'gzip,deflate');
        curl_setopt($ch, CURLOPT_COOKIEJAR, $cookie_file_path);
        curl_setopt($ch, CURLOPT_COOKIEFILE, $cookie_file_path);
        curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729) FirePHP/0.4');
        curl_setopt($ch, CURLOPT_MAXREDIRS, 10);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
        $result = curl_exec($ch);
        curl_close($ch);
        print_r($result);

    Can someone guide me in this regard? Thanks in advance.
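    As a point of comparison, here is a minimal sketch of the same "one cookie jar across login and submit" idea using Python's requests library; the URLs and form field names are hypothetical placeholders. Note that an HTTP client only ever sees cookies delivered in Set-Cookie response headers, so cookies created by in-page JavaScript will not appear in any client-side jar:

        import requests

        with requests.Session() as s:                      # one jar for the whole session
            s.headers.update({"User-Agent": "Mozilla/5.0"})
            s.post("http://example.com/login",
                   data={"user": "me", "password": "secret"})
            print(s.cookies.get_dict())                    # cookies the *server* set so far
            s.post("http://example.com/submit", data={"body": "hello"})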

    Read the article

  • Facebook new js api and cross-domain file

    - by vondip
    Hi all, I am building a simple Facebook iframe application. Since the code is separate from Facebook anyway, I've decided to also create a Connect website. In my Connect website I'm trying to figure out the following: I am using Facebook's new API and I am calling the init function, but I can't seem to figure out where I wire in my cross-domain file. There's no mention of it in their documentation either: http://developers.facebook.com/docs/reference/javascript/FB.init

    I am referring to these lines of code:

        <div id="fb-root"></div>
        <script>
          window.fbAsyncInit = function() {
            FB.init({appId: 'your app id', status: true, cookie: true, xfbml: true});
          };
          (function() {
            var e = document.createElement('script');
            e.async = true;
            e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js';
            document.getElementById('fb-root').appendChild(e);
          }());
        </script>

    Read the article

  • Ubuntu server or Debian server (to run C++ apps developed on Ubuntu)

    - by skyeagle
    I have written a number of C++ server-side daemons for my website, using my Ubuntu 9.10 dev machine. I am now about to venture out to look for a hosting provider etc. This is my problem: I have read in many posts (admittedly old posts) that Debian server is much more robust than Ubuntu server - is this still the case? In particular, I am constantly "raising elephants" with my Ubuntu 9.10; this is "ok" for home use, but on a website server I would not be so forgiving. Also, there seems to be a new "patch" every few weeks, which I would not like on a server (I want to leave the server well alone and let it get on with its business of serving pages). So in this respect Debian looks a more attractive proposition. On the other hand, I am worried that the C++ apps I have developed on Ubuntu may not be binary compatible with Debian (or I may need to install additional libraries/packages etc to get things to work), and I have zero experience with Debian. Additionally, I don't want to be grappling with the learning curve of a new OS whilst trying to launch a new web site (I am assuming the Debian UI is quite different from Ubuntu's). In this case the maxim "the devil you know is better than the one you don't" seems appropriate, and I find Ubuntu a more attractive proposition (at least I know my apps will run without any problems etc). Can anyone provide some rational advice (based on actual experience) to help me decide which route to take, given the two conflicting trains of thought outlined above?

    Read the article

  • ASP MVC - Routing Required?

    - by evo_9
    I've been reading up on MVC2, which came with VS2010, and it sounds pretty interesting. I'm actually in the middle of a large multi-tenant application project and have just started coding the UI, so I'm considering changing to MVC as I'm not that far along at this point. I have some questions about the routing capabilities, namely: is routing required to use MVC, or can I more or less ignore it? Or do I have to set up a default routing record that will make things work like standard ASPX (as far as routing alone is concerned)? The reason I don't want to use routing is that I've already defined a custom URL 'rewrite' mechanism of my own (which fires on session_start). In addition, I'm using jQuery and open standards for the entire UI, and MVC's aspx-overhead-free approach seems like a better fit for how I've already started to build the application (I am not using viewstate at all, for example). I guess my big concern is whether the routing can be ignored, or if I will have to re-implement my custom URL rewriting to work with MVC, and if that's the case, how would I do that? As a new routing routine, or stick with session_start (if that's even possible)? Lastly, I don't want to use anything even remotely 'intelligent/readable' for the URL - for a site like StackOverflow, the readability of the URL is a positive, but the opposite is true if it's not a public website like this one. In fact, it would seem to me that the friendlier MVC routing URLs (which indirectly show method names) could pose a security risk on a private, non-public website app like the one I'm developing. For all these reasons I would love to use the lightweight aspects of MVC but skip the routing entirely - is this possible?

    Read the article

  • .Net 2.0 ServiceController.GetServices()

    - by Miles
    I've got a website that has Windows authentication enabled on it. From a page in the website, users have the ability to start a service that does some stuff with the database. It works fine for me to start the service because I'm a local admin on the server, but I just had a user test it and they can't get the service started. My question is: does anyone know of a way to get a list of services on a specified computer by name, using a different Windows account than the one they are currently logged in with? I really don't want to add all the users that need to start the service into a Windows group and set them all up as local admins on my IIS server. Here's some of the code I've got:

        public static ServiceControllerStatus FindService()
        {
            ServiceControllerStatus status = ServiceControllerStatus.Stopped;
            try
            {
                string machineName = ConfigurationManager.AppSettings["ServiceMachineName"];
                ServiceController[] services = ServiceController.GetServices(machineName);
                string serviceName = ConfigurationManager.AppSettings["ServiceName"].ToLower();
                foreach (ServiceController service in services)
                {
                    if (service.ServiceName.ToLower() == serviceName)
                    {
                        status = service.Status;
                        break;
                    }
                }
            }
            catch (Exception ex)
            {
                status = ServiceControllerStatus.Stopped;
                SaveError(ex, "Utilities - FindService()");
            }
            return status;
        }

    My exception comes from the second line in the try block. Here's the error:

        System.InvalidOperationException: Cannot open Service Control Manager on computer 'server.domain.com'.
        This operation might require other privileges. ---> System.ComponentModel.Win32Exception: Access is denied
           --- End of inner exception stack trace ---
           at System.ServiceProcess.ServiceController.GetDataBaseHandleWithAccess(String machineName, Int32 serviceControlManagerAccess)
           at System.ServiceProcess.ServiceController.GetServicesOfType(String machineName, Int32 serviceType)
           at TelemarketingWebSite.Utilities.StartService()

    Thanks for the help/info.

    Read the article

  • Secure web module for paid subscription

    - by DarkJaff
    Hello everyone, I'm building a website (a community site like Digg), and we will soon release a new feature that people will need to pay for. Right now our website is pure C# on .NET, very simple pages with some AJAX. When a member logs in there is no HTTPS; everything is checked with sessions and the internal validation that I do. What we need is that when people are logged in, they can click a link and proceed to a payment (PayPal, credit card, etc). After the payment is done, the "billing module" will return a value to my site confirming that the payment went through, so the account can be flagged as a "paying member". I'm guessing this is the way to do it; maybe I'm wrong! So my questions are:

    - What is the name of this kind of billing module? (I will do some research on that.)
    - Do you know any ready-to-go module that does this kind of thing?
    - (Pushing my luck) Do you know any FREE module that does this kind of thing?

    If something is not clear, don't hesitate to ask questions :) Thanks a lot! DarkJaff
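    For what it's worth, the server-side half of such a flow usually boils down to a notification/callback handler: the payment provider calls a URL on your site after checkout, you verify the message genuinely came from the provider, then flag the account. A generic sketch in Python; the shared secret, the field names, and the mark_paying_member helper are all hypothetical, and every real gateway documents its own verification scheme (PayPal's IPN, for example), so treat this only as the shape of the idea:

        import hashlib
        import hmac

        GATEWAY_SECRET = b"shared-secret-from-the-gateway-dashboard"   # hypothetical

        def handle_payment_notification(form):
            # form: the POSTed key/value pairs from the gateway's callback request.
            expected = hmac.new(
                GATEWAY_SECRET,
                (form["order_id"] + form["amount"] + form["status"]).encode("utf-8"),
                hashlib.sha256,
            ).hexdigest()
            if not hmac.compare_digest(expected, form["signature"]):
                return "ignored"                       # spoofed or corrupted notification
            if form["status"] == "completed":
                mark_paying_member(form["order_id"])   # hypothetical DB update
            return "ok"

        def mark_paying_member(order_id):
            print("flagging the account behind order %s as a paying member" % order_id)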

    Read the article

  • How can I prevent/make it hard to download my flash video?

    - by Billy
    I want to at least prevent normal users from downloading my flash video. What's the best way to do it? Create an HTTP handler, add a token (e.g. a time-based ID), and set the cache control to no-cache, so that only users with the correct token can view the video - is that feasible? The requirement from the client is that the video should not be downloadable by users and should be watchable only on that particular website. I want to know if this works: http://www.somesite.com/video.swf?time=1248319067 - the server would generate a token (time in the above example) so that the user can only make one request with that link. If the user wants to watch the video again, he needs to go back to our website to get a fresh token. Is this okay for deterring novices from downloading? I can't download this flash video with the DownloadHelper Firefox plugin: http://news.bbc.co.uk/2/hi/americas/8164177.stm

    Updated (13:49 2009/07/23): The above file can be downloaded using some video download software. The video files of the following Chinese site are well protected (I can't download them using many video download tools): http://programme.tvb.com/drama/abrideforaride/video/ Do you know how it is done?
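    A small sketch of the expiring, signed token idea described above, in Python; the secret, the 5-minute window, and the URL shape are illustrative choices, and this only deters casual downloading rather than preventing it:

        import hashlib
        import hmac
        import time

        SECRET = b"server-side-secret"      # hypothetical; never sent to the client

        def make_token(video_id):
            ts = str(int(time.time()))
            sig = hmac.new(SECRET, ("%s:%s" % (video_id, ts)).encode(), hashlib.sha256).hexdigest()
            return "%s:%s" % (ts, sig)

        def token_is_valid(video_id, token, max_age=300):
            try:
                ts, sig = token.split(":", 1)
            except ValueError:
                return False
            expected = hmac.new(SECRET, ("%s:%s" % (video_id, ts)).encode(), hashlib.sha256).hexdigest()
            return hmac.compare_digest(sig, expected) and time.time() - int(ts) < max_age

        # The handler would serve /video.swf?id=intro&token=... only while this returns True.
        print(token_is_valid("intro", make_token("intro")))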

    Read the article

  • Unit test project doesn't recognize the classes it was generated from

    - by DougLeary
    I have a fairly simple file-system website consisting of one aspx page and several classes in separate .cs files. Everything is on my own HD. The web app itself builds and runs fine. Out of curiosity I decided to try out Visual Studio's nifty, easy-to-use unit test feature. So I opened each class file and clicked Create Unit Tests. VS generated a test project containing a set of test classes and some other files. Easy! But when I try to build or run the test project it throws a series of build errors, one for every class: The type or namespace name 'class-name' could not be found (are you missing a using directive or an assembly reference?). Somebody asked if my test project has a reference to the original project. Well no, because the original project is a file-system website. It has no bin folder and no DLL, so there's nothing to reference as far as I can tell. I would think that since VS generated these unit tests it would generate whatever references it needs, but apparently not. Is generating unit tests for file-system web apps an undocumented no-no, or is there a magic trick to getting it to work?

    Read the article

  • Simple way of getting the Last.fm artist image for recently listened songs?

    - by animuson
    On the Last.fm website, your recently listened tracks include a 34x34 (or whatever size) image at the left of each song. However, in the RSS feed that they give you, no image URLs are provided for the songs. I was wondering if there is a good way of figuring out the ID of the image that should be used for an artist and displaying it, based on the data we're given. I know it is possible to load the artist page from their website and then grab the image values from the JavaScript, but that seems overly complicated and would probably take quite some time. What we're given:

        <item>
          <title>Owl City – Rainbow Veins</title>
          <link>http://www.last.fm/music/Owl+City/_/Rainbow+Veins</link>
          <pubDate>Thu, 20 May 2010 18:15:29 +0000</pubDate>
          <guid>http://www.last.fm/user/animuson#1274379329</guid>
          <description>http://www.last.fm/music/Owl+City</description>
        </item>

    and the 34x34 image for this song would be here (ID# 37056785). Does anything like this exist? I've considered storing the ID number in a cache of some sort once it has been checked once, but what if the image changes?
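    A sketch of one way to resolve the artist image without scraping, by asking the Last.fm 2.0 web service's artist.getInfo method; the endpoint and the JSON shape are written from memory of the public API docs and should be double-checked, and the API key is a placeholder:

        import json
        import urllib.parse
        import urllib.request

        API_KEY = "your-lastfm-api-key"      # placeholder

        def artist_image_url(artist, size="small"):
            params = urllib.parse.urlencode({
                "method": "artist.getinfo",
                "artist": artist,
                "api_key": API_KEY,
                "format": "json",
            })
            url = "http://ws.audioscrobbler.com/2.0/?" + params
            with urllib.request.urlopen(url) as resp:
                data = json.load(resp)
            # The response (as I recall it) carries a list of image URLs tagged by size.
            for image in data.get("artist", {}).get("image", []):
                if image.get("size") == size:
                    return image.get("#text")
            return None

        print(artist_image_url("Owl City"))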

    Read the article

  • JSoup - Select only one list object

    - by Zyril
    I'm trying to extract some data from a website using JSoup and Java, and so far I've been successful in what I'm trying to achieve.

        <ul class="beverageFacts">
        <li><span>Årgång</span><strong>**2009**&nbsp;</strong></li>

    I want to extract what is inside the ** in the HTML above. I can do this with the following JSoup call:

        doc.select("ul.beverageFacts li:lt(1) strong");

    I'm using lt(1) because several more list items follow that I want to omit. Now to my problem: there's an optional information tab on the site I'm extracting data from, and it also has a class called "beverageFacts", so my code will at the moment extract that data too, which I don't want. That markup appears further down in the source of the website, and I've tried to use the :lt(1) indexer there as well, but it won't work.

        <div id="beverageMoreFacts" style="display: block">
        <ul class="beverageFacts"><li class="half">
        <span> Färg</span><strong> Ljusgul färg.</strong>

    The overall result is that I extract "2009 Ljusgul färg." instead of only "2009". How can I write my code so it will only extract the first part, which it successfully does, and omit the rest?

    EDIT: I get the same result using:

        doc.select("ul.beverageFacts li:eq(0) strong");

    Thanks, Z

    Read the article

  • Trying to write up a C daemon, but don't know enough C to continue

    - by JamesM-SiteGen
    Okay, so I want this daemon to run in the background with little to no interaction. I plan to have it work with Apache, lighttpd, etc., which will pass it the session & request information so that the C code can generate a website from an object DB. Saving will have to be optional, so you can start the daemon with an existing DB but it will not save back to it unless you log in to the admin area, enable saving, and restart the daemon.

    Summary of the daemon:

    - Load a database from file.
    - Have a function to restart the daemon.
    - Allow Apache, lighttpd, etc. to get the necessary data about the request and session.
    - A variable to allow the database to be saved back to the file; otherwise it is only stored in RAM. If it is set to save back to the file, keep only the necessary data in RAM.
    - Use SQLite for the database file.
    - Build a webpage from some template files, with $(myVar) for getting variables (see the sketch below).
    - Get templates from a directory, e.g. ./templates/01-test/{index.html,template.css,template.js}

    Live version of the code and more information: http://typewith.me/YbGB1h1g1p

    I am also working on a website CMS in PHP, but I am trying to switch to C as it is faster than PHP. (PHP is quite fast, but making a few MySQL requests for every webpage is quite inefficient and I'm sure it can be far better, so an object store that we can recall data from in C would have to be faster.)

    P.S. I am using Arch Linux, not MS Windows, with the package group base-devel for the common developer tools such as make and makepkg.

    Edit: Oops, forgot the question ;) Okay, so the question is: how can I turn this basic C daemon into a base for what I am attempting to do here?
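    As a side note on the "$(myVar)" templating step in the list above, here is a language-agnostic sketch of that substitution, written in Python purely for brevity (the real daemon would do the equivalent in C); the template text and variable names are made up:

        import re

        def render(template_text, variables):
            # Replace every $(name) occurrence with variables["name"], or '' if missing.
            return re.sub(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)",
                          lambda m: str(variables.get(m.group(1), "")),
                          template_text)

        page = render("<h1>$(title)</h1><p>Hello $(user)!</p>",
                      {"title": "Home", "user": "James"})
        print(page)   # <h1>Home</h1><p>Hello James!</p>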

    Read the article

  • How to parse XML with special characters?

    - by Snooze
    Whenever I try to parse XML containing special characters such as ō or Japanese text, I get an error. The XML document claims to use UTF-8 encoding, but that does not seem to be the case. Here is what the troublesome text looks like when I view the XML in Firefox:

        Bleach: The Diamond Dust Rebellion - MÅ? Hitotsu no HyÅ?rinmaru; Bleach - The DiamondDust Rebellion - Mou Hitotsu no Hyourinmaru

    On the actual website, Å? is actually the character ō.

        <br /> One day, Doraemon and his friends meet Professor Mangetsu (æº?æ??å??ç??, Professor Mangetsu?), who studies magic and magical beings such as goblins, and his daughter Miyoko (ç¾?å¤?å­?, Miyoko?), and are warned of the dangerous approximation of the &quot;star of the Underworld&quot; to the Earth&#039;s orbit.<br /> <br />

    And once again, on the actual website those garbled sequences appear as the proper Japanese characters. The actual XML file is formatted properly other than these special characters, which certainly do not appear to be using UTF-8 encoding. Is there a way to get NSXML to parse these XML files?
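    The symptom described (ō showing up as "Å?", kanji turning into sequences like "æº?...") is the classic double-encoding pattern: UTF-8 bytes that were decoded somewhere upstream as Latin-1/CP1252. The asker's parser is NSXML on the Mac, but as a language-neutral illustration of the repair, here is a small Python sketch of undoing that particular mangling:

        def fix_mojibake(text):
            # Re-encode the wrongly decoded string back to its original bytes,
            # then decode those bytes as the UTF-8 they really were.
            try:
                return text.encode("latin-1").decode("utf-8")
            except (UnicodeEncodeError, UnicodeDecodeError):
                return text   # not the latin-1-as-utf-8 pattern; leave it alone

        print(fix_mojibake("Hy\u00c5\u008drinmaru"))   # -> Hyōrinmaru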

    Read the article

  • Is CakePHP 'standards compliant' when generating HTML, Forms, etc?

    - by dtj
    So I've been reading a lot of "Designing with Web Standards" and really enjoying it. I'm a big CakePHP user, and as I look at the source for the various form elements that Cake creates with its FormHelper, I see all sorts of extraneous markup. In the book, he promotes semantic HTML and writing your markup as simply and generically as possible. So my question is, am I better off writing my own HTML in these situations? I really want to work in compliance with the XHTML and CSS standards, and it seems I'd spend just as much time (if not more) cleaning up Cake's HTML as I would just writing my own. Thoughts?

    P.S. Here's an example of an out-of-the-box form that CakePHP generates using the FormHelper:

        <form id="CompanyAddForm" method="post" action="/omni_cake/companies/add" accept-charset="utf-8">
          <div style="display:none;"><input type="hidden" name="_method" value="POST" /></div>
          <div class="input text required">
            <label for="CompanyName">Name</label>
            <input name="data[Company][name]" type="text" maxlength="50" id="CompanyName" />
          </div>
          <div class="input text required">
            <label for="CompanyWebsite">Website</label>
            <input name="data[Company][website]" type="text" maxlength="50" id="CompanyWebsite" />
          </div>
          <div class="input textarea">
            <label for="CompanyNotes">Notes</label>
            <textarea name="data[Company][notes]" cols="30" rows="6" id="CompanyNotes"></textarea>
          </div>
          <div class="submit"><input type="submit" value="Submit" /></div>
        </form>

    Read the article

  • Why does my CGI script keep redirecting links to localhost?

    - by Noah Brainey
    Visit this page http://online-file-sharing.net/tos.html and click one of the bottom footer links: it redirects you to your localhost in the address bar. I have no idea why it does this. This is in the main script that my entire website revolves around, upload.cgi:

        $ENV{PATH} = '/bin:/usr/bin';
        delete @ENV{'IFS', 'CDPATH', 'ENV', 'BASH_ENV'};
        ($ENV{DOCUMENT_ROOT}) = ($ENV{DOCUMENT_ROOT} =~ /(.*)/); # untaint.
        #$ENV{SCRIPT_NAME} = '/cgi-bin/upload.cgi';
        use lib './perlmodules';
        #use Time::HiRes 'gettimeofday';
        #my $hires_start = gettimeofday();
        my (%PREF,%TEXT) = ();

    No file is displayed when someone visits the root directory, although I have a .htaccess file saying to open my upload.cgi file, which is located in my root directory. When I point my browser directly at the CGI file it works, but it sends me to my localhost again. I'm hosting this website on my own server (this computer) using XAMPP, if that information helps. I'm also using DynDNS as my nameservers. I hope you can give me some insight.

    Read the article

  • Is there a simple way to "roll your own forms" for mysql in php, for example in jquery?

    - by talkingnews
    I've been googling around for a really simple way of making what is, in effect, nothing more than an enhanced phpMySql. In a MySQL database I have: name, address, phone, website etc., plus 2 or 3 custom fields. This data is pulled out to make a website. All I want is to be able to make a freeform form, a bit like Access but for the web, and the only thing I want to do over and above normal field editing is keep a list of when I contacted each person and what was said, and perhaps get a reminder when the next action is due. I've looked at so many CRMs my mind is boggling, and they all do WAY more than I need. I don't have leads or accounts; all I need is to update a person's details and have that data live in the same DB my site is generated from. I'm happy to learn if I can get pointed in the right direction, and I have a feeling that something like what I want might lie in the direction of jQuery. It's just that there's so much good jQuery stuff about, I can't see the wood for the trees! Thanks.

    Read the article

  • Am I making the right choice in choosing Yii as my PHP Framework?

    - by Bara
    I am about to begin development of a new website and have been doing research on PHP frameworks. I'm not an advanced PHP developer, but I have been developing web sites and apps (in ASP.NET) for a few years now. My website will be primarily AJAX-based (using jQuery) and will make lots of calls to web services. After some research, here's what I came up with:

    - CakePHP: originally started developing in this, but found it too complex. The fact that it forces you to use and learn all this new stuff just to use it was a bit daunting, so I put it aside for the time being.
    - Zend: the performance of the framework leaves me a bit skeptical, but I hear it has great support for creating web services. I've also heard it is a bit complex.
    - CodeIgniter: no real reason for not using this one. Based on what I've read, CodeIgniter and Yii are very similar, but Yii is a bit faster and doesn't carry un-needed code for PHP4 (and I plan on developing exclusively in PHP5).

    As for Yii, the only things that scare me about it are that it is newer than the other frameworks, so it has a smaller community, and it doesn't seem to have a ton of web-service support (only SOAP, from my understanding) as opposed to Zend. So my questions come down to:

    - Should these things worry me? (not as big of a community, poor web-service support)
    - Is there anything else I should look into?
    - Is my choice of Yii over the other frameworks OK for a primarily AJAX-based web app?

    Bara

    Read the article

  • What is the equivalent to Master Views from ASP.NET in PHP?

    - by KingNestor
    I'm used to working in ASP.NET / ASP.NET MVC, and now for class I have to make a PHP website. What is the equivalent to Master Views from ASP.NET in the PHP world? Ideally I would like to be able to define a page layout with something like:

    Master.php

        <html>
          <head>
            <title>My WebSite</title>
            <?php headcontent?>
          </head>
          <body>
            <?php bodycontent?>
          </body>
        </html>

    and then have my other PHP pages inherit from Master, so I can insert into those predefined places. Is this possible in PHP? Right now I have the top half of my page defined as "Header.html" and the bottom half as "footer.html", and I include_once both of them on each page I create. However, this isn't ideal when I want to be able to insert into multiple places on my master page, such as being able to insert content into the head. Can someone skilled in PHP point me in the right direction?

    Read the article

  • Moving front page (cursebird or foursquare)

    - by Dan Samuels
    Ok, so I'll be honest: I have a good amount of experience with PHP/MySQL, I've just started learning jQuery, and I've done very little (but some) work with Ajax, so using the terms ajax/jquery interchangeably is a bit confusing to me. Anyway, as the title suggests, I have a website showing 5 items - the 5 most recent rows in a database table - and I want them to move: if a more recent one is entered, remove the last item and put the new one on top. I've coded a jQuery test so it fades out the last one, the whole list moves down to make room at the top, and a new one fades in. However, it's only a test and has zero interaction with the database; the one that fades in is just in a hidden div. So the jQuery part is taken care of. I'm unsure how to go about the rest. I was thinking of having Ajax check an off-page URL that returns those 5 items in raw format and, if they change, refresh the list? Not looking for a "plz code 4 me" answer, just the concept of how it would work, or some links to get off to the right start.

    edit - Also, the 5 items are ranked, so if I click item 3 I need it to move above item 2 without a page refresh, which I assume causes a whole other issue.

    Read the article

  • Adding a div element inside a panel?

    - by Bar Mako
    I'm working with GWT and I'm trying to add Google Maps to my website. Since I want to use Google Maps v3, I'm using JSNI. In order to display the map in my website I need to create a div element with id="map" and fetch it in the map's initialization function. I did so, and it worked out fine, but its location on the webpage is funny and I want it to be attached to a panel I'm creating in my code. So my question is: how can I do this? Can I somehow create a div with GWT inside a panel? I've tried to create a new HTMLPanel like this:

        runsPanel.add(new HTMLPanel("<div id=\"map\"></div>"));

    where runsPanel is the panel I want the map attached to. Yet it fails to retrieve the div when I use the following initialization function:

        private native JavaScriptObject initializeMap() /*-{
            var latLng = new $wnd.google.maps.LatLng(31.974, 34.813); // around Rishon-LeTsiyon
            var mapOptions = {
                zoom : 14,
                center : latLng,
                mapTypeId : $wnd.google.maps.MapTypeId.ROADMAP
            };
            var mapDiv = $doc.getElementById('map');
            if (mapDiv == null) {
                alert("MapDiv is null!");
            }
            var map = new $wnd.google.maps.Map(mapDiv, mapOptions);
            return map;
        }-*/;

    (It pops the alert - "MapDiv is null!") Any ideas? Thanks

    Read the article
