Search Results

Search found 14539 results on 582 pages for 'date conversion'.

Page 442/582 | < Previous Page | 438 439 440 441 442 443 444 445 446 447 448 449  | Next Page >

  • What is the best way to clear the CSS style "float"?

    - by Sam Saffron
    I'm pretty accustomed to clearing my floats by using <br style="clear:both"/>, but things keep changing and I am not sure if this is still the best practice. There is a CSS hack (from positioneverything) that lets you achieve the same result without the clearing div. But... they claim that hack is a little out of date and that you should perhaps look at this hack instead. But... after reading through 700 pages of comments :) it seems there may be some places where the latter hack does not work. I would like to avoid any JavaScript hacks because I would like my clearing to work regardless of whether JavaScript is enabled. What is the current best practice for clearing divs in a browser-independent way?

    Read the article

  • MySQL easy question CURDATE()

    - by Tristan
    I want to compare two results: one is stored in the first query, and the other is exactly the same as the first, but I want only to receive data < today. "SELECT s.GSP_nom as nom, timestamp, COUNT(s.GSP_nom) as nb_votes, AVG(v.vote+v.prix+v.serviceClient+v.interface+v.interface+v.services)/6 as moy FROM votes_serveur AS v INNER JOIN serveur AS s ON v.idServ = s.idServ WHERE s.valide = 1 AND v.date < CURDATE() GROUP BY s.GSP_nom HAVING nb_votes > 9 ORDER BY moy DESC LIMIT 0,15"; Is that correct? Thank you

    Read the article

  • How to match this with a regex?

    - by andrei miko
    I just want to use a regex to match something from my products file. The lines look like this: "Something here","a link here","website here","date here (e.g. 11/12/2012)","description1 here","**description2 here**","some other text here","here also", and so on... I want to match only description2 with a regex. I tried this: (?<=[0-9][0-9][0-9][0-9]).*(?=",") but it wasn't good, because it was getting me description1, description2 and some quotes as well. Thanks in advance.
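
    One way to isolate just the second description (a sketch in Python, assuming the lines really are comma-separated quoted fields with no embedded quotes): treat each line as CSV and take the sixth field by position, or anchor a regex on the five quoted fields that precede it rather than on the date alone.

        import csv
        import re

        line = ('"Something here","a link here","website here","11/12/2012",'
                '"description1 here","description2 here","some other text here","here also",')

        # Option 1: parse the line as CSV and take the sixth field (index 5).
        fields = next(csv.reader([line]))
        print(fields[5])           # description2 here

        # Option 2: skip the first five quoted fields, then capture the sixth.
        match = re.search(r'(?:"[^"]*",){5}"([^"]*)"', line)
        if match:
            print(match.group(1))  # description2 here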

    Read the article

  • Using new Image().src for click tracking

    - by razass
    I am attempting to figure out why this click tracker isn't working. The code was written by another developer, so I am not entirely sure if it ever worked. function trackSponsor(o, p) { (new Image()).src = PATH_BASE + 'click/' + p + '/' + o + "?_cache=" + (+(new Date())); return false; } From what I can gather, when this function is called it 'creates a new image' to fire a PHP script asynchronously. According to Firebug, the request is made, however it is 'aborted' ~30 ms in. The odd thing is that it will 'sometimes' work, as in 1 in every 10+ times, regardless of the browser. I would much rather fix this so that it works instead of rewriting it as an Ajax request. Any help is appreciated. Thanks in advance.

    Read the article

  • How to automate login to Google API to get OAuth 2.0 token to access known user account

    - by keyser_sozay
    Ok, so this question has been asked before here. In the response/answer to the question, the user is told to store the token in the application (session and not DB, although it doesn't matter where you store it). After going through the documentation on Google, it seems that the token has an expiration date after which it is no longer valid. Now, we could obviously refresh the token automatically at a fixed interval, thereby prolonging the lifespan of the token, but for some reason this manual process feels like a hack. My question is: Is this the most effective (/generally accepted) way to access Google Calendar/app data for a known user account, by manually logging in and persisting the token in the application? Or is there another mechanism that allows us to programmatically log in to this user account and go through the OAuth steps?
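
    For what it's worth, the refresh step itself doesn't need a manual login: OAuth 2.0 defines a refresh-token grant, so once a refresh token has been stored, the application can exchange it for a new access token programmatically whenever the old one expires. Below is a minimal sketch in Python using the requests library; the credential values are placeholders, and the endpoint and parameter names follow the standard Google OAuth 2.0 refresh flow, so verify them against the current documentation.

        import requests

        # Hypothetical values: the client credentials and the refresh token that was
        # stored when the known user account authorized the app the first time.
        CLIENT_ID = "my-client-id.apps.googleusercontent.com"
        CLIENT_SECRET = "my-client-secret"
        STORED_REFRESH_TOKEN = "refresh-token-saved-at-first-authorization"

        def refresh_access_token():
            """Exchange the stored refresh token for a fresh access token."""
            response = requests.post(
                "https://accounts.google.com/o/oauth2/token",   # OAuth 2.0 token endpoint
                data={
                    "grant_type": "refresh_token",
                    "refresh_token": STORED_REFRESH_TOKEN,
                    "client_id": CLIENT_ID,
                    "client_secret": CLIENT_SECRET,
                },
            )
            response.raise_for_status()
            return response.json()["access_token"]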

    Read the article

  • A good approach to db planning for a reporting service

    - by Itay Moav
    The scenario: a big system (~200 tables), 60,000 users, and complex reports that will require me to run multiple queries per report, and even those will be complex queries with inner queries all over the place + some processing in PHP. I have seen an approach which I am not sure about: having one centralized, de-normalized table that registers any reportable activity in the system. This table will hold mostly foreign keys, so it should be fairly compact and fast. So, for example (my system is a virtual learning management system), when a user enrolls in a course, the table stores the user id, date, course id, organization id, and activity type (enrollment). Of course I also store this data in a normalized DB, which the actual application uses. Pros I see: easy, maintainable queries and code to process data, and fast retrieval. Cons: there is a danger of the de-normalized table getting out of sync with the real DB. Is this approach worth considering, or (preferably from experience) is it total $#%#%t?

    Read the article

  • javascript table - update on data request

    - by flyingcrab
    Hi, I am trying to update a table based on a JSON request. The first update/draw works fine, but any subsequent changes to the variables (the start and end date) do not show up, even though the JSON pulled from the server seems to be correct (according to Firebug). AFAIK the code below should re-initialize everything; not sure what is going on (I'm using the Google Visualization API)? function handleQueryResponse(response) { if (response.isError()) { //alert('Error in query: ' + response.getMessage() + ' ' + response.getDetailedMessage()); return; } visualization = new google.visualization.Table(document.getElementById('visualization')); visualization.draw(response.getDataTable(), null); } One more thing: I'm working on a page that displays text-based tables and am currently trying to decide between the Google table (viz API) and a jQuery alternative I came across, jqGrid. Any good ones I am missing?

    Read the article

  • Selecting keys based on metadata, possible with Amazon S3?

    - by nbv4
    I'm sending files to my S3 bucket that are basically gzipped database dumps. The keys are a human-readable date ("2010-05-04.dump"), and along with that, I'm setting a metadata field to the UNIX time of the dump. I want to write a script that retrieves the latest dump from the bucket. That is to say, I want the key with the largest UNIX time metadata value. Is this possible with Amazon S3, or is this not how S3 is meant to work? I'm using both the command-line tool aws and the Python library boto.
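
    For what it's worth, here is a minimal sketch with boto 2 (the library the question mentions); the bucket name and the metadata field name "unixtime" are assumptions. S3 listings do not return user metadata, so each key needs a separate HEAD request via get_key(), which is also why, with key names like "2010-05-04.dump", simply taking the lexicographically largest key name is usually the cheaper way to find the latest dump.

        import boto

        conn = boto.connect_s3()                      # credentials from environment/boto config
        bucket = conn.get_bucket('my-dump-bucket')    # hypothetical bucket name

        latest_key, latest_time = None, -1
        for listed in bucket.list():
            key = bucket.get_key(listed.name)         # HEAD request; populates user metadata
            stamp = int(key.get_metadata('unixtime') or 0)
            if stamp > latest_time:
                latest_key, latest_time = key, stamp

        if latest_key is not None:
            latest_key.get_contents_to_filename(latest_key.name)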

    Read the article

  • Data access strategy for a site like SO - sorted SQL queries and simultaneous updates that affect th

    - by Kaleb Brasee
    I'm working on a Grails web app that would be similar in access patterns to StackOverflow or MyLifeIsAverage - users can vote on entries, and their votes are used to sort a list of entries based on the number of votes. Votes can be placed while the sorted select queries are being performed. Since the selects would lock a large portion of the table, it seems that normal transaction locking would cause updates to take forever (given enough traffic). Has anyone worked on an app with a data access pattern such as this, and if so, did you find a way to allow these updates and selects to happen more or less concurrently? Does anyone know how sites like SO approach this? My thought was to make the sorted selects dirty reads, since it is acceptable if they're not completely up to date all of the time. This is my only idea for possibly improving performance of these selects and updates, but I thought someone might know a better way.

    Read the article

  • select rows with unidentical column values using R

    - by Bazon
    Hi guys, I need to create a new data frame that excludes rows where the same dam appears in both the "dam1" and "dam2" columns on the same fosdate (fostering date). I tried df <- df[df$dam1 != df$dam2,] but it did not work. dam1 and dam2 are factors holding the IDs of mothers. My df:
        fosdate     dam1   dam2
        8/09/2009   2Z523  2Z523
        30/10/2009  1W509  5C080
        30/10/2009  1W509  5C640
        30/10/2009  1W509  1W509
        1/10/2009   1W311  63927
    The new data frame that I need to get is dfnew:
        fosdate     dam1   dam2
        30/10/2009  1W509  5C080
        30/10/2009  1W509  5C640
        1/10/2009   1W311  63927
    Would appreciate any help! Bazon

    Read the article

  • Request attributes in jsf / icefaces behaves strange (survive request end)

    - by hubertg
    I have the following code in a listener method: FacesContext.getCurrentInstance().getExternalContext().getRequestMap().put("time", new Date()); When a button is clicked, the following code is executed: System.out.println(FacesContext.getCurrentInstance().getExternalContext().getRequestMap().get("time")); One would expect "time" to be null when the listener was not executed while processing the current request, but it seems like the "time" object survives request processing. So once "time" has been set sometime in the past, it stays there... Can anybody explain this? Thanks.

    Read the article

  • When saving to a model, created and modified aren't automatically populated by CakePHP. Using SQL Se

    - by bakerjr
    Hi, when saving to a model, my created and modified fields aren't automatically populated by CakePHP. They were automatically populated when I was using MySQL, but now they aren't. I wasn't using NOW() back when I was still using MySQL either. Why is that? Also, when a field's value is not set, 'NULL' (with quotes) is inserted, causing errors because SQL Server says I can't insert a string into a field of type smallint/date etc. How do I fix this? Thanks in advance!

    Read the article

  • How to enjoy DVD on Apple iPad

    - by user44251
    I believe many people spent a sleepless night yesterday waiting for the new Apple tablet to arrive. Just a few days ago, or perhaps longer, I noticed fierce debate about it: its name, size, capacity, processor, main features, price, etc. Now they can take a long breath, with the new Apple tablet, named iPad, officially released on 28 January 2010 (Beijing time). But I know a new battle is just beginning. iPad sounds somewhat like iPod, and it really shares some similarities in terms of shape: smart, light and portable. It has a 9.7-inch, LED-backlit IPS display with a remarkably precise Multi-Touch screen. And yet, at just 1.5 lbs and 0.5 inches thin, it's easy to carry and use everywhere. It can greatly facilitate your experience with the web, email, photos and videos. Right now it can run almost 140,000 of the apps on the Apple store, and it can even run the apps you have downloaded for your iPhone or iPod touch. But so far I haven't seen any way for it to work with DVDs; there is probably no built-in DVD-ROM or DVD player that can play a DVD directly. As Apple states, the supported video formats are MPEG-4 (MP4, M4V), H.264, MOV etc., and the accepted audio formats are AAC, Protected AAC, MP3, AIFF and WAV etc.; those are formats that are commonly used with the iMac. This could really be a hard nut to crack if you want to watch your favourite DVDs on this magic Apple iPad. But don't worry, there is still a way out: you just need a few steps to rip and import DVD movies to the Apple iPad with a simple application, a DVD to iPad converter. What's in DVD to iPad Converter for Mac: DVD to iPad Converter for Mac is a powerful and professional application designed for the newly released Apple iPad which can rip and convert your DVD contents to Apple iPad compatible MPEG-4 (MP4, M4V), H.264, MOV etc.; other popular file formats like AVI, WMV, MPG, MKV, VOB, 3GP, FLV etc. can also be converted so that you can put them on portable devices like the iPod, iPhone, iRiver, BlackBerry etc. Besides, it can also extract audio from DVD videos and save it as MP3, AIFF, AAC, WAV etc. The Mac DVD to iPad converter has also been enhanced so that it can run on both PowerPC and Intel (Snow Leopard included). It offers versatile editing features which allow you to make your own DVD videos. For example, you can cut your DVD to whatever length you like with Trim, crop off unwanted parts from DVD clips with Crop, and add special effects like Gray, Emboss and Old Film to make your videos more artistic. Besides, its built-in merging feature and batch mode allow you to join several DVD clips into a single one and do batch conversion. And more features can be expected if you spare a few minutes to try it.

    Read the article

  • XML file creation Using XDocument

    - by Pramodh
    I've a list (List<string>) "sampleList" which contains Data1 Data2 Data3... How do I create an XML file using XDocument by iterating over the items in the list in C#? The file structure is like <file> <name="samplee"/> <date=" "/> <info> <data value="Data1"/> <data value="Data2"/> <data value="Data3"/> </info> </file> Please help me to do this.

    Read the article

  • How to keep Hibernate mapping use under control as requirements grow

    - by David Plumpton
    I've worked on a number of Java web apps where persistence is via Hibernate, and we start off with some central class (e.g. an insurance application) without any time being spent considering how to break things up into manageable chunks. Over time, as features are added, we add more mappings (rates, clients, addresses, etc.), and the amount of time spent saving and loading an insurance object and everything it connects to grows. In particular, you get close to a go-live date and performance testing with larger amounts of data in each table starts to demonstrate that it's all too slow. Obviously there are a number of ways we could attempt to partition things up, e.g. map only the client classes for the client CRUD screens, etc., which would have been better to get in place earlier rather than trying to work it in at the end of the dev cycle. I'm just wondering if there are recommendations about ways to handle/mitigate this.

    Read the article

  • Comparing an id to id of different tables rows mysql

    - by jett
    So I am trying to retrieve all the interests of a person and be able to list them. This works with the following query: SELECT *,( SELECT GROUP_CONCAT(interest_id SEPARATOR ",") FROM people_interests WHERE person_id = people.id ) AS interests FROM people WHERE id IN ( SELECT person_id FROM people_interests WHERE interest_id = '.$site->db->clean($_POST['showinterest_id']).' ) ORDER BY lastname, firstname In this one, which I am having trouble with, I want to select only those who happen to have their id in the table named volleyballplayers. That table just has id, person_id, team_id, and date fields. SELECT *,( SELECT GROUP_CONCAT(interest_id SEPARATOR ",") FROM people_interests WHERE person_id = people.id ) AS interests FROM people WHERE id IN ( SELECT person_id FROM people_interests WHERE volleyballplayers.person_id = person_id ) ORDER BY lastname, firstname I just want to make sure that only the people who are in the volleyballplayers table show up, but I am getting an error saying Unknown column 'volleyballplayers.person_id' in 'where clause', although I am quite sure of the name of the table and I know the column is named person_id.

    Read the article

  • Converting datetime.ctime() values to Unicode

    - by Malcolm
    I would like to convert datetime.ctime() values to Unicode. Using Python 2.6.4 running under Windows I can set my locale to Spanish like below: import locale locale.setlocale(locale.LC_ALL, 'esp' ) Then I can pass %a, %A, %b, and %B to ctime() to get day and month names and abbreviations. import datetime dateValue = datetime.date( 2010, 5, 15 ) dayName = dateValue.strftime( '%A' ) dayName 's\xe1bado' How do I convert the 's\xe1bado' value to Unicode? Specifically what encoding do I use? I'm thinking I might do something like the following, but I'm not sure this is the right approach. codePage = locale.getdefaultlocale()[ 1 ] dayNameUnicode = unicode( dayName, codePage ) dayNameUnicode u's\xe1bado' Malcolm
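
    A sketch of one way to do the conversion in Python 2, assuming strftime() emits bytes in the ANSI code page that locale.getpreferredencoding() reports (the usual case on Windows); decoding with that encoding gives the Unicode value directly.

        import datetime
        import locale

        locale.setlocale(locale.LC_ALL, 'esp')        # Spanish, as in the question (Windows locale name)
        encoding = locale.getpreferredencoding()      # e.g. 'cp1252'

        day_name = datetime.date(2010, 5, 15).strftime('%A')
        day_name_unicode = day_name.decode(encoding)  # same as unicode(day_name, encoding)
        print repr(day_name_unicode)                  # u's\xe1bado'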

    Read the article

  • What Web design tool would make a good CityDesk replacement?

    - by Joshua Fox
    I am looking for a tool for building static template-based web sites, your typical brochure-ware for a non-profit or a personal site. I have used CityDesk, but that is out of date, unsupported, and has certain problems. Of course there are lots of tools out there, but I cannot find anything similar to CityDesk:
        - WYSIWYG as well as HTML coding
        - a templating system
        - not overdesigned like, say, Dreamweaver
        - built for developers who understand HTML/JS/CSS, but easier to use than hand-coding PHP, Ruby, or other templates in a text editor
        - supporting the editing of pages by non-developers
        - preferably free
    I'd also like it to be CSS-aware and to have lots of free templates available. Or alternatively, static template-based sites are often developed nowadays on the Web using a CMS like Django; is that the way to go? Edit: Namo, DreamWeaver, NetObjects Fusion, Coffee Cup, Evrsoft First Page, and Microsoft Expression might be candidates. I'll appreciate comments on these based on the criteria above.

    Read the article

  • How to model dependency injection in UML ?

    - by hjo1620
    I have a Contract class. The contract is valid 1 Jan 2010 - 31 Dec 2010. It can be in state Active or Passive, depending on the date on which I ask the instance for its state, e.g. if I ask on 4 July 2010 it's in state Active, but if I ask on 1 Jan 2011 it's in state Passive. Instances are created using constructor dependency injection, i.e. they are either Active or Passive already when created; null is not allowed as a parameter for the internal state member. One initial/created vertex is drawn in UML. I have two arrows leading out from the initial vertex, one leading to state Active and the other to state Passive. Is this a correct representation of dependency injection in UML? This is related to http://stackoverflow.com/questions/2779922/how-model-statemachine-when-state-is-dependent-on-a-function which initiated the question of how to model DI in general in UML.

    Read the article

  • In Ruby Compare 2 lines in a log file which BOTH contain the SAME "WORD" but ONLY print out the line

    - by kamal
    Here are sample lines:
        Apr 9 11:53:26 skip [2244]: [2244] ab-cd-ef:cc [INFO] A recoverable error has occurred
        some other log lines .. ....
        Apr 9 12:53:26 skip [2244]: [2244] ab-cd-ef:cc [INFO] A recoverable error has occurred
    Now the LATEST line would have to be the one with the latest date string, and THAT is the one that needs to be printed. Also, the NEXT time the parser runs on the log file, the previous LATEST line somehow has to be compared with the existing latest one: it CAN be the case that NOTHING changed and the OLD line is STILL the latest one, OR there is a NEW line, but ONLY a NEW log line should be printed, and nothing should be printed if there is NO NEW log entry.

    Read the article

  • What to Learn: Rails 1.2.4 -> Rails 3

    - by Saterus
    I've recently convinced my management that our outdated version of Rails is slowing us down enough to warrant an upgrade. The approach we're taking is to start a fresh project with current technology rather than a painful upgrade. Our requirements for the project have changed and this will be much easier. The biggest problem is actually that my knowledge of Rails is out of date. I've dealt only with Rails 1.2.4 while the rest of the world has moved on long ago. What topics have I missed by being buried in my work instead of keeping up with the current Rails fashion? I'm hesitant to dig through blogs at random because I'm not sure how much has changed between the intervening versions of Rails. It's no use to learn Rails 2.1-2.3 specific stuff that is no longer useful for Rails 3.

    Read the article

  • Excel isn't reading sql exported csv properly

    - by mhopkins321
    I have a batch file that calls sqlcmd to run a command and then export the data as a CSV. When viewed in a cell, the transacted date, for example, shows 35:30.0, but if you click on it the formula bar shows 1/1/1900 2:45:00 PM. I need the full timestamp to show in the cell. Any ideas? The batch file is the following: sqlcmd -S server -U username -P password -d database -i "D:\path\sqlScript.sql" -s "," > D:\path\report.csv -I -W -k 1 The script is the following. (I currently have the dates cast as varchars, but that's simply because I've tried to change things a bit. Varchar doesn't work either.) SET NOCOUNT ON; select top(10)BO.Status, cast(tradeDate AS varchar) AS Trade_Date, CAST(closingTime AS varchar) AS Closing_Time, CAST(openingTime AS varchar) AS openingTime FROM GIANT COMPLICATED JOINS OF ALL SORTS OF TABLES

    Read the article

  • Group MySQL Data into Arbitrarily Sized Time Buckets

    - by Eric J.
    How do I count the number of records in a MySQL table, based on a timestamp column, per unit of time where the unit of time is arbitrary? Specifically, I want to count how many records' timestamps fell into 15-minute buckets during a given interval. I understand how to do this in buckets of 1 second, 1 minute, 1 hour, 1 day, etc. using the MySQL date functions, e.g. SELECT YEAR(datefield) Y, MONTH(datefield) M, DAY(datefield) D, COUNT(*) Cnt FROM mytable GROUP BY YEAR(datefield), MONTH(datefield), DAY(datefield) but how can I group by 15-minute buckets?
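
    The underlying arithmetic is just integer division of the UNIX timestamp by the bucket width (900 seconds for 15 minutes), so in MySQL grouping on something like FLOOR(UNIX_TIMESTAMP(datefield) / 900) should do it. Here is a sketch of the same bucketing done client-side in Python (the sample timestamps are made up); it only illustrates the rounding-down step that such a GROUP BY expression relies on.

        from collections import Counter
        from datetime import datetime

        BUCKET_SECONDS = 15 * 60  # 900 seconds per 15-minute bucket

        def bucket_start(ts):
            """Round a datetime down to the start of its 15-minute bucket."""
            epoch = int(ts.timestamp())                    # Python 3.3+
            return datetime.fromtimestamp(epoch - epoch % BUCKET_SECONDS)

        # Made-up sample timestamps standing in for the datefield column.
        rows = [datetime(2010, 5, 4, 12, 7), datetime(2010, 5, 4, 12, 14),
                datetime(2010, 5, 4, 12, 31), datetime(2010, 5, 4, 12, 44)]

        counts = Counter(bucket_start(ts) for ts in rows)
        for start in sorted(counts):
            print(start, counts[start])                    # 12:00 bucket -> 2, 12:30 bucket -> 2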

    Read the article

  • how to fetch a range of files from an FTP server using C#

    - by user260076
    Hello all, I'm stuck at a point where I am using a wildcard parameter with the FtpWebRequest object, as such: FtpWebRequest reqFTP = (FtpWebRequest)FtpWebRequest.Create(new Uri("ftp://" + ftpServerIP + "/" + WildCard)); Now this works fine; however, I now want to fetch a specific range of files. Say the file naming structure is *YYYYMMDD.* and I need to fetch all the files prior to today's date. I've been searching for a wildcard pattern for that with no good results, one that will work in a simple file listing, and it doesn't look like I can use regex here. Any thoughts?

    Read the article

  • git, how to I go back to origin master after pulling a branch

    - by fishtoprecords
    This has to be a FAQ, but I can't find it by googling. Another person created a branch, committed to it, and pushed it to GitHub using git push origin newbranch I successfully pulled it down using git pull origin newbranch Now, I want to go back to the origin master version. Nothing I do seems to cause the files in the origin master to replace those in the newbranch. git checkout master git checkout origin master git pull git pull origin HEAD etc. git pull origin master returns: * branch master -> FETCH_HEAD Already up-to-date. This can't be hard, but I sure can't figure it out. 'git branch' returns * master and 'git branch -r' returns origin/HEAD origin/experimental origin/master

    Read the article
