Search Results

Search found 27592 results on 1104 pages for 'google sites'.

Page 435 of 1104

  • MySQL INTO OUTFILE: override an existing file?

    - by Derek Organ
    I've written a big SQL script that creates a CSV file. I want a cron job to run every night, generate a fresh CSV file, and make it available on the website. Say, for example, I'm storing the file in '/home/sites/example.com/www/files/backup.csv' and my SQL is:

        SELECT *
        INTO OUTFILE '/home/sites/example.com/www/files/backup.csv'
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\n'
        FROM ( ....

    MySQL gives me an error when the file already exists:

        File '/home/sites/example.com/www/files/backup.csv' already exists

    Is there a way to make MySQL overwrite the file? I could have PHP detect whether the file exists and delete it before creating it again, but it would be more succinct if I could do it directly in MySQL.
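
    MySQL itself refuses to overwrite an existing OUTFILE target, so the usual workaround is exactly the PHP step mentioned above: delete (or rename) last night's export before re-running the query. A minimal sketch of such a nightly cron script, assuming a PDO connection; the DSN, credentials and table name are placeholders, not taken from the question:

        <?php
        // nightly_export.php - intended to be run from cron
        $csv = '/home/sites/example.com/www/files/backup.csv';

        // SELECT ... INTO OUTFILE will not clobber an existing file,
        // so remove the previous export first (PHP needs write access
        // to the directory for this to work).
        if (file_exists($csv)) {
            unlink($csv);
        }

        // Placeholder connection details; the MySQL account also needs the
        // FILE privilege, and the file is written by the mysqld process.
        $pdo = new PDO('mysql:host=localhost;dbname=example', 'dbuser', 'dbpass');
        $pdo->exec("
            SELECT *
            INTO OUTFILE '$csv'
            FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
            LINES TERMINATED BY '\\n'
            FROM some_table
        ");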

    Read the article

  • Don't want to JSON serialize the whole list of classes

    - by mjmcloug
    Hey, I've got an IList of Sites in my application, and Site has a large number of properties. I want to convert this list to JSON to be used in a dropdown list, similar to this:

        var sites = SiteRepository.FindAllSites();
        return new JsonResult()
        {
            Data = sites,
            JsonRequestBehavior = JsonRequestBehavior.AllowGet
        };

    The problem is that I only want to use the id and name properties of the Site class. I was thinking a way round this would be an 'adaptor' class that exposes only those two properties, and I would then serialize that. The catch is that I want to make the class generic so that it can handle any list of objects. Has anybody come across a similar situation and solved it?

    Read the article

  • What's the easiest/fastest way to get my website up and running on the web?

    - by ggfan
    This is probably a really, really beginner's question, but I would like to know the fastest way to get my site on the web so that people can start using it. I'm learning everything about programming out of books and at home, so I don't have much experience.

    -- Before I go to a site like godaddy.com to get a domain name, are there any free sites that would let me upload my site so users can use it? My scripts use HTML, CSS, PHP, MySQL and JavaScript, so I don't think many free hosts support all of those.

    -- If I can't find a free host, are there any good places to get a domain name and web hosting that support most languages at a low price? (It doesn't have to be professional hosting, because I am still a beginner.)

    -- If I go to, say, godaddy.com and get their web hosting and a domain name, would I be allowed to run PHP, MySQL, Python and Java on it? (I looked at some hosting sites and most only allow PHP/MySQL.)

    Read the article

  • Insert data from one table into another

    - by Lee_McIntosh
    I have a table that lists the number of comments from a particular site, like the following:

        Date                     Site  Comments  Total
        -----------------------  ----  --------  -----
        2010-04-01 00:00:00.000     1         5      5
        2010-04-01 00:00:00.000     2         8     13
        2010-04-01 00:00:00.000     4         2      7
        2010-04-01 00:00:00.000     7        13     13
        2010-04-01 00:00:00.000     9         1      2

    I have another table that lists ALL sites, for example from 1 to 10:

        Site
        ----
        1
        2
        ...
        9
        10

    Using the following code I can find out which sites are missing entries for the previous month:

        SELECT s.Site FROM tbl_Sites s
        EXCEPT
        SELECT c.Site FROM tbl_Comments c
        WHERE c.[Date] = DATEADD(mm, DATEDIFF(mm, 0, GetDate()) - 1, 0)

    producing:

        Site
        ----
        3
        5
        6
        8
        10

    I would like to insert the missing sites listed by that query into the comments table with some default values, i.e. '0's:

        Date                     Site  Comments  Total
        -----------------------  ----  --------  -----
        2010-04-01 00:00:00.000     3         0      0
        2010-04-01 00:00:00.000     5         0      0
        2010-04-01 00:00:00.000     6         0      0
        2010-04-01 00:00:00.000     8         0      0
        2010-04-01 00:00:00.000    10         0      0

    The question is: how do I update/insert the table/values? Cheers, Lee
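
    Lee doesn't say what he is calling this from; purely as an illustration, one way is to turn the "missing sites" query into the source of a single INSERT ... SELECT. The sketch below runs it from PHP with the pdo_sqlsrv driver (connection details are placeholders, and the embedded statement can equally be run on its own in T-SQL):

        <?php
        // Placeholder connection details; assumes the pdo_sqlsrv driver is installed.
        $pdo = new PDO('sqlsrv:Server=localhost;Database=Comments', 'dbuser', 'dbpass');

        // Insert a zeroed row, dated the first of the previous month, for every
        // site that has no comments row for that month yet.
        $pdo->exec("
            INSERT INTO tbl_Comments ([Date], Site, Comments, Total)
            SELECT DATEADD(mm, DATEDIFF(mm, 0, GETDATE()) - 1, 0), s.Site, 0, 0
            FROM tbl_Sites s
            WHERE s.Site NOT IN (
                SELECT c.Site
                FROM tbl_Comments c
                WHERE c.[Date] = DATEADD(mm, DATEDIFF(mm, 0, GETDATE()) - 1, 0)
            )
        ");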

    Read the article

  • Writing a search engine

    - by wvd
    Hello all, the title might be a bit misleading, but I couldn't figure out a better one. I'm writing a simple search engine that will search several sites for a specific domain. To be concrete: I'm writing a search engine for hardstyle livesets/aftermovies/tracks, and to do this I will query the sites that provide livesets, tracks, and such. The problem here is speed: I need to pass the search query to 5-7 sites, get the results, and then use my own algorithm to display the results in sorted order. I could just "multithread" it, but that's easier said than done, so I have a few questions. What would be the best solution to this problem? Should I just multithread/multiprocess this application so I get a bit of a speed-up? Are there any other solutions, or am I doing something really wrong? Thanks, William van Doorn
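
    The post doesn't say which language the engine is written in; purely as an illustration of the core idea (fire the 5-7 upstream queries concurrently instead of one after another), here is a sketch in PHP using curl_multi. The endpoint URLs are hypothetical placeholders:

        <?php
        $query = 'hardstyle liveset';

        // Hypothetical upstream search endpoints - placeholders only.
        $urls = [
            'https://site-one.example/search?q=' . urlencode($query),
            'https://site-two.example/search?q=' . urlencode($query),
            'https://site-three.example/search?q=' . urlencode($query),
        ];

        $multi = curl_multi_init();
        $handles = [];
        foreach ($urls as $url) {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_TIMEOUT, 5); // don't let one slow site block the rest
            curl_multi_add_handle($multi, $ch);
            $handles[] = $ch;
        }

        // Drive all requests in parallel until every transfer has finished.
        do {
            curl_multi_exec($multi, $running);
            curl_multi_select($multi);
        } while ($running > 0);

        $responses = [];
        foreach ($handles as $ch) {
            $responses[] = curl_multi_getcontent($ch);
            curl_multi_remove_handle($multi, $ch);
            curl_close($ch);
        }
        curl_multi_close($multi);

        // $responses can now be parsed, merged and ranked by your own algorithm.

    The wall-clock time is then roughly that of the slowest site rather than the sum of all of them, which is usually the easier win before reaching for threads or worker processes.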

    Read the article

  • Most popular J2EE-based websites

    - by krishna
    J2EE, as I understand it, is used to build enterprise applications. My question is: what are the most popular public-facing (internet) sites using the J2EE stack? The ones that I know of are linkedIn.com, evite.com, and sun, ibm and oracle (obviously). Eclipse.org uses PHP; I wonder why? If you have worked on or know any other sites, can you share your experience and also the technologies used (if that's not an issue)? EDIT: It doesn't have to use the full stack. EDIT: There are quite a few e-commerce websites like bestbuy.com. I know this because I worked with the ATG (atg.com) e-commerce suite and their website lists their clients. I am looking for those kinds of examples and also your experience working on them. Please limit it to internet sites only.

    Read the article

  • Switching to a VPS

    - by Damian
    Well, I know absolutely nothing about the subject, so I really need help. I currently have a website running on Google App Engine (Java) and I can't get it to do what I want because of App Engine's limitations (no full-text search, mainly). The traffic is low; it has never reached 15% of the free quota (around 1500 daily pageviews). I also have 3 sites in Drupal hosted on a shared hosting service, and this is giving me problems because the server speed is awful. The sites are VERY low traffic, but load times are bad, and I might need to add more sites for some clients, so this will only get worse. So I'm planning to move all of that to a VPS. The question is: can I have 2 HTTP servers running on the same VPS? I will need an Apache-PHP-Drupal server and a Java server (Tomcat?). I really have no idea about this, so any tip will be very helpful to me. Thanks!

    Read the article

  • Dot Net Nuke app_offline randomly being generated

    - by chelfers
    We have had multiple DNN sites running for quite a few months now without any issues. Twice in the last 3 days our sites have gone offline because an app_offline.htm file appeared in the root directory. There is only one developer with access to the sites at a coding / directory-viewing level, and the file is generated at odd times when he is NOT accessing our network. We are not publishing anything to the server (and have not published any .NET code in days), upgrading, changing code, or even modifying content. Has anyone run into this issue?

    Read the article

  • [CakePHP] What is the purpose of declaring $name in each controller?

    - by kwokwai
    Hi all, I am learning CakePHP. I noticed that a variable $name is declared in each controller. What is its purpose? Is it referring to the name of the Sites table?

        <?php
        class SitesController extends AppController {
            var $name = 'Sites';
            ...
        }
        ?>

    If yes, can users refer to more than one table like this?

        var $name = 'Sites', 'Sites2', 'Sites3';
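
    For reference, in CakePHP 1.x $name holds the controller's own name as a single string (it exists largely for PHP 4 compatibility, where the properly cased class name could not be recovered at runtime); it is not a list of tables, and additional models are normally pulled in through the $uses property instead. A sketch with hypothetical model names, not a definitive answer to the question:

        <?php
        class SitesController extends AppController {
            // The controller's own name - a single string, never a list.
            var $name = 'Sites';

            // To work with more than one model/table in this controller,
            // list the model names here (hypothetical names shown).
            var $uses = array('Site', 'SiteCategory', 'SiteComment');

            function index() {
                // Each model in $uses becomes available as a property:
                $sites = $this->Site->find('all');
                $categories = $this->SiteCategory->find('all');
                $this->set(compact('sites', 'categories'));
            }
        }
        ?>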

    Read the article

  • How to include a custom list from one SharePoint site in another as a web part?

    - by Rakhitha
    I have two SharePoint web sites. One is a child site of the other: for example, if my first site is myweb1, the other one is myweb1/myweb2. I have a custom list created in myweb1, and I want to include it as a web part in a number of pages in both the myweb1 and myweb1/myweb2 sites. Including the web part in the same site that contains the custom list is not a problem, but how do I include it in the other site? The web part does not show up in the list. I don't want to copy the content of the custom list; I want pages in both sites that include this list as a web part to be updated whenever the list content changes. Any ideas?

    Read the article

  • Install CakePHP on Mac OS X: Apache problems

    - by ed209
    First time Cake user and I'm having real Apache problems. For some reason the .htaccess is trying to find:

        File does not exist: /Library/WebServer/Documents/Users

    but there is no such directory as Users. I have tried setting up the following also:

    /etc/apache2/extra/httpd-vhosts.conf:

        <VirtualHost *:80>
            DocumentRoot "/Users/username/Sites/mysite/app/webroot"
            ServerName mysite.dev
            ServerAlias www.mysite.dev mysite.dev *.mysite.dev
            <Directory "/Users/username/Sites/mysite/app/webroot">
                Options Indexes FollowSymLinks
                AllowOverride All
            </Directory>
        </VirtualHost>

    /etc/hosts:

        127.0.0.1 mysite.dev

    /etc/apache2/users/username.conf:

        <Directory "/Users/username/Sites/">
            Options Indexes MultiViews FollowSymlinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    That also hasn't worked, but with a different error:

        Failed opening required 'cake/libs/cache/file.php'

    Although I'd rather not use virtual hosts, and just run it off localhost.

    Read the article

  • How to write my own download manager using Objective C for iOS devices

    - by Saurabh
    I am writing a download manager for iPhone using Objective-C. I am using the ASIHTTP framework and it's working great, but my problem is that I am not able to download from file-sharing sites like FileSonic, RapidShare, Hotfile, etc. I want to know how I can get the actual download URL from these sites, or at least how these sites are hiding this info (and where), so I can get it somehow. Is there any open-source library or framework to help me with this? How do Firefox or other desktop browsers get this link? Any help will be much appreciated! Update 1: I don't want to bypass their advertising and revenue streams. Almost all file-sharing companies also provide free downloads at low bandwidth, and I only want to use that service. There are many download managers available now for iPhone, like "Downloads Lite"; I just want to build similar functionality.

    Read the article

  • Denormalize for Simplicity: Ungood idea?

    - by yar
    After reading this question, I've learned that denormalization is not a solution for simplicity. What about this case? I have news articles, each of which has a list of sites-the-article-will-be-published-to. The latter can be expressed in normalized fashion by a table and a many-to-many relationship (via a cross-table, I think). But the simple solution is to just throw in a bunch of booleans for the sites-the-article-will-be-published-to. Assuming the sites:

    - are small in number,
    - will not change over time,
    - have no fields themselves, except a name,

    is this still a terrible idea? The many-to-many relationship seems somewhat cumbersome, but I've done it before in cases like this (and it seemed cumbersome). Note: I'm doing this in Rails, where it's not that painful.

    Read the article

  • Question about finding third party website APIs.

    - by Abe Miessler
    I have a client that works with several well established 3rd party sites to sell his product (in this case hotel rooms, but the question is more general). Every time he gets a reservation he has to go to each site and manually make changes. I've done a quick search to see if there are APIs or any developer resources for these sites but have come up with nothing. I've emailed his contacts for each site but have yet to receive a response. I don't have a lot of experience working with third party sites like this but in the past when I've looked for APIs I usually come up empty handed. Are these types of APIs rare or am I just looking in the wrong places? It seems like an obvious thing to have for a company that has an affiliate program. Any suggestions on where to look for APIs or am I at the mercy of my "contact"?

    Read the article

  • VirtualHost configuration

    - by Hari
    Hi, I need to configure two name-based virtual hosts on my Ubuntu PC. If I type the address "http://mypage1" in the browser, it should display my first customized HTML page, and if I type "http://mypage2", it should display my second customized HTML page. I tried the following:

    1. Installed Apache.
    2. Created a file mypage1 inside sites-available with the following contents:

        <VirtualHost *:80>
            ServerName mypage1
            ServerAlias http://mypage1
            DocumentRoot /var/www/mypage1/html
        </VirtualHost>

    3. Created a similar file mypage2 inside sites-available.
    4. Ran the commands "a2ensite mypage1" and "a2ensite mypage2" to generate soft links inside sites-enabled.
    5. Restarted Apache using "sudo /etc/init.d/apache2 restart".

    After doing the above steps, when I type mypage1 in Firefox, I get a dns_unresolved_hostname error. Kindly help me resolve this problem.

    Read the article

  • Sharing session (or cookie) using Grails acegi plugin

    - by firnnauriel
    Is it possible for two different Grails projects, on different domains, to share a session/cookie? Let's say I have 2 sites: www.mycompany.com and www.othercompany.com. Assume that both sites have the same domain classes, and the same database and records too. What I want to know is whether this code: authenticateService.userDomain(), or even authenticateService.isLoggedIn(), will behave and return exactly the same object/result whether it is called from either site. Basically, what we need is a solution for sharing/identifying a logged-in user between two different sites. I need more details on how to implement this using acegi 0.5.2 and Grails 1.2.1. Hoping for any leads on this. Thank you.

    Read the article

  • What value to use for include_path in .htaccess

    - by bateman_ap
    Currently in my .htaccess file I am setting include_path like this:

        php_value include_path /mnt/webs/mysite/includes:/usr/share/pear

    However, this isn't great if I need to move my sites to a new server, as I then have to go through a whole load of sites updating each .htaccess. Basically I would like a way to say "use the folder called includes in the site root" (I run a variety of different sites off one server, so each .htaccess file and include path will be different). In a related question on here someone suggested:

        include(dirname(__FILE__)."/inc2.php");

    But this is in code and might be a bit annoying to do every time. Should I just use that method and scrap the idea of using .htaccess?
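
    One pattern that avoids hard-coding the server path anywhere (a sketch only; the bootstrap file name is hypothetical): set the include path once, in a file that lives in the site root, relative to that file's own location, and keep the setting out of .htaccess entirely.

        <?php
        // bootstrap.php - lives in the site root and is required by every entry script.
        // dirname(__FILE__) is the directory this file sits in, so the include path
        // follows the site wherever it is deployed.
        set_include_path(
            dirname(__FILE__) . '/includes' . PATH_SEPARATOR . get_include_path()
        );

        // From here on, plain include/require finds files in <site root>/includes
        // as well as the existing paths such as /usr/share/pear.
        require 'inc2.php';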

    Read the article

  • Advantage Database Replication

    - by Jon
    I have a client who wants two sites to be able to sync databases, so information at Site A can be synced with Site B and the two sites can look at the same data. I'm not even sure of the infrastructure required. Would a VPN be required to connect the two databases, or would an internet-based database work, i.e. Site A and Site B each connect to the internet database, copy data to it periodically, the internet database syncs it, and the sites can then pull data down? My other thought was something like Dropbox: if Site A and Site B use a Dropbox account to sync the ADT files etc., can the database at each site then sync with those ADT files? Thanks

    Read the article

  • Search Server 2008 Express working with WSS 3.0 (error when crawling a second web application's sites)

    - by tberube
    My Search Server 2008 Express instance crawls 3 SharePoint servers and 1 Windows file server. The file server and 2 of the SharePoint servers crawl all content, master and sub sites, just fine. On the 3rd SharePoint server the default site crawls just fine. The second web application site (content database) crawls the top site and all sub sites, BUT the sub sites get the error below in the crawl log:

        Deleted by the gatherer (The start address or content source that contained this item was deleted and hence this item was deleted.)

    I have checked the security rights (OK), and I have checked the setting to crawl a SharePoint site (if that were wrong the top site would not work). Last part: I can crawl 3 other SharePoint sites (web applications / other content databases) on the same server. It seems to be just this one site.

    Read the article

  • SharePoint lockout

    - by user301751
    Recently a guy from our 3rd-line team thought it would be funny to delete my account from AD. It has now been re-added, and everything is back to normal apart from my access to SharePoint sites: I am getting "The file exists. (Exception from HRESULT: 0x80070050)" errors on all sites. After some googling I came across someone with the same issue, where the problem was the SID of the re-created account differing from the old one. Since then I have deleted my account from Site Administrators and re-added it, which should refresh the SID with the new one. I also checked on the content database that the site ID matched, using the following queries, and the SIDs match:

        select s.Id, w.FullUrl from Sites s inner join Webs w on s.RootWebId = w.Id

        select * from UserInfo where tp_Login='domain\username' and tp_SiteID=''

    I am now a bit clueless.

    Read the article

  • django - change content inside an iframe

    - by fabriciols
    Hello guys, I have a site running in Django and want to make it available on third-party sites. On these sites my content is made available through an <iframe> tag. When my site is requested from within that iframe I would like to modify the content (remove the header, filter data, etc.). What would be the best way to do this? Is there any way of knowing that the request is being made from an iframe? Multiple sites will request the same URL; can I change the content based on the requesting site? Thanks! PS: Sorry for my bad English :/

    Read the article

  • variable $base_path is not working

    - by Nidhi Prasad
    I am trying to get the value of base_path in PHP (on a LAMP server). I have kept the code inside a beta_test directory directly inside www, i.e. the base_path() function should return "/beta_test/", but it is returning just a single slash ("/"). The code that I tried is:

        <script type="text/javascript" src="<?php print base_path(); ?>sites/all/themes/people10/slider/call.js"></script>

    The expected output is:

        <script type="text/javascript" src="/beta_test/sites/all/themes/people10/slider/call.js"></script>

    but it's giving:

        <script type="text/javascript" src="/sites/all/themes/people10/slider/call.js"></script>

    I am using PHP version 5.3.3. Can anyone please help me solve this issue? I am a newbie to PHP and Drupal.
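
    In Drupal, base_path() is derived from the optional $base_url setting in sites/default/settings.php, or auto-detected from the request when that setting is left commented out. If the auto-detection is being thrown off (for example by an Apache alias or rewrite rule), one thing worth trying, shown here as a sketch assuming Drupal 6/7 and the /beta_test subdirectory mentioned above, is declaring the base URL explicitly:

        <?php
        // sites/default/settings.php
        // No trailing slash; base_path() should then return "/beta_test/".
        $base_url = 'http://localhost/beta_test';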

    Read the article

  • No, iCloud Isn’t Backing Them All Up: How to Manage Photos on Your iPhone or iPad

    - by Chris Hoffman
    Are the photos you take with your iPhone or iPad backed up in case you lose your device? If you're just relying on iCloud to manage your important memories, your photos may not be backed up at all. Apple's iCloud has a photo-syncing feature in the form of "Photo Stream," but Photo Stream doesn't actually perform any long-term backups of your photos.

    iCloud's Photo Backup Limitations

    Assuming you've set up iCloud on your iPhone or iPad, your device is using a feature called "Photo Stream" to automatically upload the photos you take to your iCloud storage and sync them across your devices. Unfortunately, there are some big limitations here.

    1000 Photos: Photo Stream only backs up the latest 1000 photos. Do you have 1500 photos in your Camera Roll folder on your phone? If so, only the latest 1000 photos are stored in your iCloud account online. If you don't have those photos backed up elsewhere, you'll lose them when you lose your phone. If you have 1000 photos and take one more, the oldest photo will be removed from your iCloud Photo Stream.

    30 Days: Apple also states that photos in your Photo Stream will be automatically deleted after 30 days "to give your devices plenty of time to connect and download them." Some people report photos aren't deleted after 30 days, but it's clear you shouldn't rely on iCloud for more than 30 days of storage.

    iCloud Storage Limits: Apple only gives you 5 GB of iCloud storage space for free, and this is shared between backups, documents, and all other iCloud data. This 5 GB can fill up pretty quickly. If your iCloud storage is full and you haven't purchased any more storage from Apple, your photos aren't being backed up.

    Videos Aren't Included: Photo Stream doesn't include videos, so any videos you take aren't automatically backed up.

    It's clear that iCloud's Photo Stream isn't designed as a long-term way to store your photos, just a convenient way to access recent photos on all your devices before you back them up for real.

    iCloud's Photo Stream is Designed for Desktop Backups

    If you have a Mac, you can launch iPhoto and enable the Automatic Import option under Photo Stream in its preferences pane. Assuming your Mac is on and connected to the Internet, iPhoto will automatically download photos from your photo stream and make local backups of them on your hard drive. You'll then have to back up your photos manually so you don't lose them if your Mac's hard drive ever fails.

    If you have a Windows PC, you can install the iCloud Control Panel, which will create a Photo Stream folder on your PC. Your photos will be automatically downloaded to this folder and stored in it. You'll want to back up your photos so you don't lose them if your PC's hard drive ever fails.

    Photo Stream is clearly designed to be used along with a desktop application. Photo Stream temporarily backs up your photos to iCloud so iPhoto or iCloud Control Panel can download them to your Mac or PC and make a local backup before they're deleted. You could also use iTunes to sync your photos from your device to your PC or Mac, but we don't really recommend it: you should never have to use iTunes.

    How to Actually Back Up All Your Photos Online

    So Photo Stream is actually pretty inconvenient, or, at least, it's just a way to temporarily sync photos between your devices without storing them long-term. But what if you actually want to automatically back up your photos online without them being deleted automatically?

    The solution here is a third-party app that does this for you, offering automatic photo uploads with long-term storage. There are several good services with apps in the App Store:

    Dropbox: Dropbox's Camera Upload feature allows you to automatically upload the photos, and videos, you take to your Dropbox account. They'll be easily accessible anywhere there's a Dropbox app, and you can get much more free Dropbox storage than you can iCloud storage. Dropbox will never automatically delete your old photos.

    Google+: Google+ offers photo and video backups with its Auto Upload feature, too. Photos will be stored in your Google+ Photos (formerly Picasa Web Albums) and will be marked as private by default so no one else can view them. Full-size photos will count against your free 15 GB of Google account storage space, but you can also choose to upload an unlimited number of photos at a smaller resolution.

    Flickr: The Flickr app is no longer a mess. Flickr offers an Auto Upload feature for uploading the full-size photos you take, and free Flickr accounts offer a massive 1 TB of storage for your photos. The massive amount of free storage alone makes Flickr worth a look.

    Use any of these services and you'll get an online, automatic photo backup solution you can rely on. You'll get a good chunk of free space, your photos will never be automatically deleted, and you can easily access them from any device. You won't have to worry about storing local copies of your photos and backing them up manually.

    Apple should fix this mess and offer a better solution for long-term photo backup, especially considering the limitations aren't immediately obvious to users. Until they do, third-party apps are ready to step in and take their place. You can also automatically back up your photos to the web on Android with Google+'s Auto Upload or Dropbox's Camera Upload.

    Image Credit: Simon Yeo on Flickr

    Read the article

  • Access Control Service v2: Registering Web Identities in your Applications [concepts]

    - by Your DisplayName here!
    ACS v2 supports two fundamental types of client identities. I like to call them "enterprise identities" (WS-*) and "web identities" (Google, LiveID, OpenID in general...). I also see two different mind sets when it comes to application design using the above identity types:

    - Enterprise identities: often the fact that a client can present a token from a trusted identity provider means he is a legitimate user of the application. Trust relationships and authorization details have been negotiated out of band (often on paper).
    - Web identities: the fact that a user can authenticate with Google et al. does not necessarily mean he is a legitimate (or registered) user of an application. Typically additional steps are necessary (like filling out a form, email confirmation, etc.).

    Sometimes a mixture of both approaches exists; for the sake of this post, I will focus on the web identity case. I got a number of questions about how to implement the web identity scenario, and after some conversations it turns out it is the old authentication vs. authorization problem that gets in the way.

    Many people use the IsAuthenticated property on IIdentity to make security decisions in their applications (or deny user="?" in ASP.NET terms). That's a very natural thing to do, because authentication was done inside the application and we knew exactly when the IsAuthenticated condition was true. Been there, done that. Guilty ;)

    The fundamental difference between these "old style" apps and federation is that authentication is not done by the application anymore. It is done by a third-party service, and in the case of web identity providers, by services that are not under our control (nor do we have a formal business relationship with these providers). Now the issue is: when you switch to ACS and someone with a Google account authenticates, IsAuthenticated is indeed true, because that's what he is! This does not mean that he is also authorized to use the application. It just proves he was able to authenticate with Google. Now this obviously leads to confusion. How can we solve that? Easy answer: we have to deal with authentication and authorization separately. Job done ;)

    For many application types I see this general approach:

    - The application uses ACS for authentication (maybe both enterprise and web identities; we focus on web identities here, but you could easily have a dual approach).
    - The application offers to authenticate (or sign in) via web identity accounts like LiveID, Google, Facebook etc.
    - The application also maintains a database of its "own" users, typically storing additional information about each user.

    In such an application type it is important to have a unique identifier for your users (think the primary key of your user database). What would that be? Most web identity providers (and all the standard ACS v2 supported ones) emit a NameIdentifier claim. This is a stable ID for the client (scoped to the relying party; more on that later). Furthermore, ACS emits a claim identifying the identity provider (like the original issuer concept in WIF). When you combine these two values, you can be sure to have a unique identifier for the user, e.g.:

        Facebook-134952459903700\799880347

    You can now check on incoming calls whether the user is already registered, and if so, swap the ACS claims for claims coming from your user database. One claim might be a role like "Registered User", which can then easily be used to do authorization checks in the application.
    The WIF claims authentication manager is a perfect place to do the claims transformation. If the user is not registered, show a registration form. Maybe you can use some claims from the identity provider to pre-fill form fields (see here where I show how to use the Facebook API to fetch additional user properties). After successful registration (which may include other mechanisms like a confirmation email), flip the bit in your database to make the web identity a registered user. This is all very theoretical. In the next post I will show some code and provide a download link for the complete sample.

    More on NameIdentifier

    Identity providers "guarantee" that the name identifier for a given user in your application will always be the same. But different applications (in the case of ACS, different ACS namespaces) will see different name identifiers. This is by design, to protect the privacy of users, because identical name identifiers could be used to create "profiles" of some sort for that user. In technical terms, they create the name identifier approximately like this:

        name identifier = Hash((Provider Internal User ID) + (Relying Party Address))

    Why is this important to know? Well, when you change the name of your ACS namespace, the name identifiers will change as well and you will lose your "connection" to your existing users. Oh and btw: never use any other claims (like email address or name) to form a unique ID; these can often be changed by users.

    Read the article
