Search Results

Search found 1100 results on 44 pages for 'jan ackerman'.

Page 40/44 | < Previous Page | 36 37 38 39 40 41 42 43 44  | Next Page >

  • Ajax return string links not working

    - by Andreas Lympouras
    I have this ajax function:

        function getSearchResults(e) {
            var searchString = e.value;
            /*var x = e.keyCode;
            var searchString = String.fromCharCode(x);*/
            if (searchString == "") {
                document.getElementById("results").innerHTML = "";
                return;
            }
            var xmlhttp;
            if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari
                xmlhttp = new XMLHttpRequest();
            } else { // code for IE6, IE5
                xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
            }
            xmlhttp.onreadystatechange = function () {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                    document.getElementById("results").innerHTML = xmlhttp.responseText;
                }
            }
            xmlhttp.open("GET", "<%=Page.ResolveUrl("~/template/searchHelper.aspx?searchString=")%>" + searchString, true);
            xmlhttp.setRequestHeader("If-Modified-Since", "Sat, 1 Jan 2000 00:00:00 GMT"); // solve any AJAX caching problem (not refreshing)
            xmlhttp.send();
        }

    and here is where I call it:

        <input class="text-input" value="SEARCH" id="searchbox" onkeyup="javascript:getSearchResults(this);" maxlength="50" runat="server" />

    All the links in the searchHelper.aspx file (which returns them as a string) look like this:

        <a class="item" href='../src/actors.aspx?id=77&name=dasdassss&type=a'><span class="item">dasdassss</span></a>

    When I click this link I want my website to go to ../src/actors.aspx?id=77&name=dasdassss&type=a, but nothing happens. When I hover over the link, it also shows me where the link is about to redirect. Any help?

    Read the article

  • How can I make "month" columns in Sql?

    - by Beska
    I've got a set of data that looks something like this (VERY simplified):

        productId  Qty  dateOrdered
        ---------  ---  -----------
        1          2    10/10/2008
        1          1    11/10/2008
        1          2    10/10/2009
        2          3    10/12/2009
        1          1    10/15/2009
        2          2    11/15/2009

    Out of this, we're trying to create a query to get something like:

        productId  Year  Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Dec
        ---------  ----  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---
        1          2008  0    0    0    0    0    0    0    0    0    2    1    0
        1          2009  0    0    0    0    0    0    0    0    0    3    0    0
        2          2009  0    0    0    0    0    0    0    0    0    3    2    0

    The way I'm doing this now, I'm doing 12 selects, one for each month, and putting those in temp tables. I then do a giant join. Everything works, but this guy is dog slow. I know this isn't much to go on, but knowing that I barely qualify as a tyro in the db world, I'm wondering if there is a better high-level approach to this that I might try. (I'm guessing there is.) (I'm using MS Sql Server, so answers that are specific to that DB are fine.) (I'm just starting to look at "PIVOT" as a possible help, but I don't know anything about it yet, so if someone wants to comment about that, that might be helpful as well.)
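
    Since the question mentions PIVOT, below is a minimal sketch of what a PIVOT-based version might look like for data of this shape. It is only an illustration: the source table name (Orders here) is not given in the question, PIVOT requires SQL Server 2005 or later, and the date-to-month/year handling would need to match the real schema.

        SELECT productId, [Year],
               ISNULL([1], 0)  AS [Jan], ISNULL([2], 0)  AS [Feb], ISNULL([3], 0)  AS [Mar],
               ISNULL([4], 0)  AS [Apr], ISNULL([5], 0)  AS [May], ISNULL([6], 0)  AS [Jun],
               ISNULL([7], 0)  AS [Jul], ISNULL([8], 0)  AS [Aug], ISNULL([9], 0)  AS [Sep],
               ISNULL([10], 0) AS [Oct], ISNULL([11], 0) AS [Nov], ISNULL([12], 0) AS [Dec]
        FROM (
            -- Orders is a stand-in for whatever table holds productId, Qty, dateOrdered
            SELECT productId,
                   YEAR(dateOrdered)  AS [Year],
                   MONTH(dateOrdered) AS [Mon],
                   Qty
            FROM Orders
        ) AS src
        PIVOT (
            SUM(Qty) FOR [Mon] IN ([1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12])
        ) AS p;

    A single pass over the table replaces the twelve per-month selects and the final join, which is usually where the speed-up comes from.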

    Read the article

  • .Net SQL Parameter for String List Problem

    - by JK
    I am writing a C# program in which I send a query to SQL Server to be processed, and a dataset returns. I am using parameters to pass information to the query before it is sent to SQL Server. This works fine except in the situation below. The query looks like this:

        reportQuery = " Select * From tableName Where Account_Number in (@AccountNum); and Account_Date = @AccountDate ";

    The AccountDate parameter works fine, but not the AccountNum parameter. I need the final query to execute like this:

        Select * From tableName Where Account_Number in ('AX3456','YZYL123','ZZZ123'); and Account_Date = '1-Jan-2010'

    The problem is that I have these account numbers (actually text) in a C# string list. To feed it to the parameter, I have been declaring the parameter as a string. I turn the list into one string and feed it to the parameter. I think the problem is that I am feeding the parameter this: "'AX3456','YZYL123','ZZZ123'" when it wants this: 'AX3456','YZYL123','ZZZ123'. How do I get the string list into the query using a parameter and have it execute as shown above? This is how I am declaring and assigning the parameter:

        SqlParameter AccountNumsParam = new SqlParameter();
        AccountNumsParam.ParameterName = "@AccountNums";
        AccountNumsParam.SqlDbType = SqlDbType.NVarChar;
        AccountNumsParam.Value = AccountNumsString;

    FYI, AccountNumString == "'AX3456','YZYL123','ZZZ123'"
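
    For what it's worth, a single string parameter is always sent as one literal value, so IN (@AccountNum) compares Account_Number against the whole string "'AX3456','YZYL123','ZZZ123'" rather than against a list. One way to pass a real list, assuming SQL Server 2008 or later, is a table-valued parameter; the SQL-side half might look roughly like this (the type, procedure and column names are illustrative, not from the question):

        -- Hypothetical table type and procedure for passing a list of account numbers
        CREATE TYPE dbo.AccountNumberList AS TABLE (Account_Number nvarchar(50) NOT NULL);
        GO
        CREATE PROCEDURE dbo.GetAccountRows
            @AccountNums dbo.AccountNumberList READONLY,
            @AccountDate date
        AS
            SELECT t.*
            FROM tableName AS t
            WHERE t.Account_Number IN (SELECT Account_Number FROM @AccountNums)
              AND t.Account_Date = @AccountDate;
        GO

    On the C# side the values would then be supplied as a DataTable with the parameter's SqlDbType set to Structured, rather than as one concatenated string.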

    Read the article

  • Java giving incorrect year values

    - by whistler
    Something very, very strange is occurring in my program, and I'm wondering if anyone out there has seen this occur before. And, if so, how to fix it. Basically, I am parsing a csv file... no problem there. One column contains a date and I am taking it in as a String and changing it to a Date object. Again, no problem there. The code is as follows:

        SimpleDateFormat dateFormat = new SimpleDateFormat("MM/dd/yy hh:mm");
        Date initialDate = new Date();
        try {
            initialDate = dateFormat.parse(rows.get(0)[8]);
            System.out.println(initialDate);
        } catch (ParseException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }

    Of course, I'm parsing other columns as well (and those are working fine). So, when I run my program for a small csv file (2.8 MB), the dates come out (i.e. are parsed) perfectly. However, when I run the program for a large csv file (25 MB), the dates are a hot mess. For example, take a look at the year values I am getting (the following is just a tiny portion of the println output from the code above):

        1000264 at Sun Nov 05 15:30:00 EST 2186
        1000320 at Sat Mar 04 17:30:00 EST 2169
        1000347 at Sat Apr 01 09:45:00 EDT 2169
        1000413 at Tue Jul 09 13:00:00 EDT 2182
        1000638 at Fri Dec 11 13:45:00 EST 2167
        1000667 at Wed Dec 10 10:00:00 EST 2188
        1000690 at Mon Jan 02 13:00:00 EST 2169
        1000843 at Thu Feb 11 13:30:00 EST 2196

    In actuality, the years are in the realm of 1990-2006 or so. Again, this does not happen with the small csv file. Does anyone know what's going on here and how I can fix it? I need to process the large csv file (the small one was just for testing purposes). By request, here are the actual dates in the csv file, and after that the values given by the code above:

        5/20/03 15:30
        5/20/03 15:30
        8/30/04 9:00
        8/30/04 9:00
        12/20/04 10:30
        12/20/04 10:30

        Sun Nov 05 15:30:00 EST 2186
        Sun Nov 05 15:30:00 EST 2186
        Sun Nov 05 15:30:00 EST 2186
        Thu Dec 08 09:00:00 EST 2196
        Tue Dec 12 10:30:00 EST 2186
        Tue Dec 12 10:30:00 EST 2186

    Read the article

  • How to add "missing" columns in a column group in reporting services?

    - by Gimly
    Hello, I'm trying to create a report that displays, for each month of the year, the quantity of goods sold. I have a query that returns the list of goods sold for each month; it looks something like this:

        SELECT Seller.FirstName, Seller.LastName, SellingHistory.Month, SUM(SellingHistory.QuantitySold)
        FROM SellingHistory
        JOIN Seller ON SellingHistory.SellerId = Seller.SellerId
        WHERE SellingHistory.Year = @Year
        GROUP BY Seller.FirstName, Seller.LastName, SellingHistory.Month

    What I want to do is display a report that has a column for each month, plus a total column that will display for each seller the quantity sold in the selected month:

        Seller Name | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec | Total

    What I managed to do, using a matrix and a column group (grouped on Month), is display the columns for existing data: if I have data from January to March, it will display the first 3 columns and the total. What I would like to do is always display all the columns. I thought about doing that by adding the missing months in the SQL request, but I find that a bit weird and I'm sure there must be some "cleaner" solution, as this is something that must be quite frequent. Thanks. PS: I'm using SQL Server Express 2008
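
    If the padding does end up being done on the SQL side after all, one common pattern is to generate the twelve months and outer-join the real data onto them, so every month appears even with zero sales. A rough sketch against the tables named in the question (untested; it assumes SellingHistory.Month holds a month number, and the month list could equally come from a calendar or numbers table):

        WITH Months AS (
            SELECT 1 AS MonthNo UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
            UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8
            UNION ALL SELECT 9 UNION ALL SELECT 10 UNION ALL SELECT 11 UNION ALL SELECT 12
        )
        SELECT s.FirstName, s.LastName, m.MonthNo,
               SUM(COALESCE(sh.QuantitySold, 0)) AS QuantitySold
        FROM Seller AS s
        CROSS JOIN Months AS m
        LEFT JOIN SellingHistory AS sh
               ON sh.SellerId = s.SellerId
              AND sh.[Month]  = m.MonthNo
              AND sh.[Year]   = @Year
        GROUP BY s.FirstName, s.LastName, m.MonthNo;

    With all twelve month values present in the dataset, the matrix column group then has a column to render for every month.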

    Read the article

  • Getting the current item number or index when using will_paginate in rails app

    - by Rich
    I have a rails app that stores movies watched, books read, etc. The index page for each type lists paged collections of all its items, using will_paginate to bring back 50 items per page. When I output the items I want to display a number indicating which item in the total collection it is. The numbering should be reversed, as the collection is displayed with most recent first. This might not relate to will_paginate but rather to some other method of calculation. I will be using the same ordering in multiple types, so it will need to be reusable. As an example, say I have 51 movies. The first item of the first page should display:

        Fight Club - Watched: 30th Dec 2010

    whilst the last item on the page should display:

        The Matrix - Watched: 3rd Jan 2010

    The paged collection is available as an instance variable, e.g. @movies, and @movies.count will display the number of items in the paged collection. So if we're on page 1, @movies.count == 50, whilst on page 2 @movies.count == 1. Using Movie.count would give 51. If the page number and page size can be accessed, the number could be calculated, so how can they be returned? Though I'm hopeful there is something that already exists to handle this calculation!
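
    Assuming the usual will_paginate setup (a known current page number and a fixed per-page size), the reverse-ordered number is presumably just arithmetic: for the i-th item on the current page (counting from zero), number = total_count - (current_page - 1) * per_page - i. With the 51-movie example that gives 51 - 0 - 0 = 51 for the first item on page 1, 51 - 0 - 49 = 2 for the last item on page 1, and 51 - 50 - 0 = 1 for the single item on page 2, which matches the ordering described above.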

    Read the article

  • jQuery ajax request response is empty in Internet Explorer

    - by Aprilia1982
    Hi, I'm doing the following ajax call:

        // exif loader
        function LoadExif(cImage) {
            $.ajax({
                type: "POST",
                url: "http://localhost:62414/Default1.aspx/GetImageExif",
                data: "{iCurrentImage:" + cImage + "}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                error: ajaxFailed,
                success: function (data, status) {
                    var sStr = '';
                    for (var count in data.d) {
                        sStr = sStr + data.d[count];
                    };
                    alert(sStr);
                }
            });
        };

    In Firefox the request works really fine. When I try to run the code in Internet Explorer, the response is empty. Here is the webmethod which is called:

        <WebMethod()> _
        <ScriptMethod(ResponseFormat:=ResponseFormat.Json)> _
        Public Shared Function GetImageExif(ByVal iCurrentImage As Integer) As String
            Dim sbTable As New StringBuilder
            sbTable.AppendLine("<table>")
            sbTable.AppendLine("<tr>")
            sbTable.AppendLine("<td>Name</td><td>" & gGallery.Images(iCurrentImage).File.Name & "</td>")
            sbTable.AppendLine("</tr>")
            sbTable.AppendLine("</table>")
            Return sbTable.ToString
        End Function

    Any ideas? Jan

    Read the article

  • Hibernate not loading associated object

    - by Noor
    Hi, I am trying to load a hibernate object ForumMessage, but it contains another object Users and the Users object is not being loaded. My ForumMessage mapping file:

        <?xml version="1.0"?>
        <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
        <!-- Generated Jan 4, 2011 10:10:29 AM by Hibernate Tools 3.4.0.Beta1 -->
        <hibernate-mapping>
            <class name="com.BiddingSystem.Models.ForumMessage" table="FORUMMESSAGE">
                <id name="ForumMessageId" type="long">
                    <column name="FORUMMESSAGEID" />
                    <generator class="native" />
                </id>
                <property name="ForumMessage" type="java.lang.String">
                    <column name="FORUMMESSAGE" />
                </property>
                <many-to-one name="User" class="com.BiddingSystem.Models.Users" fetch="join">
                    <column name="UserId" />
                </many-to-one>
                <property name="DatePosted" type="java.util.Date">
                    <column name="DATEPOSTED" />
                </property>
                <many-to-one name="Topic" class="com.BiddingSystem.Models.ForumTopic" fetch="join">
                    <column name="TopicId" />
                </many-to-one>
            </class>
        </hibernate-mapping>

    and I am using the following code:

        Session session = gileadHibernateUtil.getSessionFactory().openSession();
        SQL = "from ForumMessage";
        System.out.println(SQL);
        Query query = session.createQuery(SQL);
        System.out.println(query.list().size());
        return new LinkedList<ForumMessage>(query.list());

    Read the article

  • Python: Parsing a colon delimited file with various counts of fields

    - by Mark
    I'm trying to parse a few files with the following format in 'clientname'.txt:

        hostname:comp1
        time: Fri Jan 28 20:00:02 GMT 2011
        ip:xxx.xxx.xx.xx
        fs:good:45
        memory:bad:78
        swap:good:34
        Mail:good

    Each section is delimited by a ':', but where lines 0, 2 and 6 have 2 fields, lines 1 and 3-5 have 3 or more fields. (A big issue I've had trouble with is the time: line, since 20:00:02 is really a time and not 3 separate fields.) I have several files like this that I need to parse. There are many more lines in some of these files with multiple fields.

        ...
        for i in clients:
            if os.path.isfile(rpt_path + i + rpt_ext):
                # if the rpt exists then do this
                rpt = rpt_path + i + rpt_ext
                l_count = 0
                for line in open(rpt, "r"):
                    s_line = line.rstrip()
                    part = s_line.split(':')
                    print part
                    l_count = l_count + 1
            else:
                # else break
                break

    First I'm checking if the file exists; if it does, then I open the file and parse it (eventually). As of now I'm just printing the output (print part) to make sure it's parsing right. Honestly, the only trouble I'm having at this point is the time: field. How can I treat that line specifically differently from all the others? The time field is ALWAYS the 2nd line in all of my report files.

    Read the article

  • need help understanding a function.

    - by Adam McC
    I had previously asked for help writing/improving a function that I need to calculate a premium based on differing values for each month. The premium is split into 12 months and earned at a percentage for each month, so if the policy starts in March and we are in Jan we will have earned 10 months' worth. So I need to add up the monthly earnings to give us the total earned. Each company will have differing earnings values for each month. My original code is here. It's ghastly and slow, hence the request for help, and I was presented with the following code. The code works but returns stupendously large figures.

        begin
            set @begin=datepart(month,@outdate)
            set @end=datepart(month,@experiencedate)

            ;with a as
            (
                select *,
                    case calmonth
                        when 'january' then 1
                        when 'february' then 2
                        when 'march' then 3
                        when 'april' then 4
                        when 'may' then 5
                        when 'june' then 6
                        when 'july' then 7
                        when 'august' then 8
                        when 'september' then 9
                        when 'october' then 10
                        when 'november' then 11
                        when 'december' then 12
                    end as Mnth
                from tblearningpatterns
                where clientname=@client and earningpattern=@pattern
            )
            , b as
            (
                select earningvalue, Mnth, earningvalue as Ttl
                from a
                where Mnth=@begin

                union all

                select a.earningvalue, a.Mnth, cast(b.Ttl*a.earningvalue as decimal(15,3)) as Ttl
                from a
                inner join b on a.Mnth=b.Mnth+1
                where a.Mnth<=@end
            )
            select @earningvalue= Ttl
            from b
            inner join
            (
                select max(Mnth) as Mnth from b
            ) c on b.Mnth=c.Mnth
            option(maxrecursion 12)

            SET @earnedpremium = @earningvalue*@premium
        end

    Can someone please help me out?
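
    The question states that the monthly earnings need to be added up to give the total earned. If that is literally the requirement - a sum over the months from @begin to @end rather than a compounded product - then a plain aggregate may be all that's needed. A sketch reusing the table, columns and variables from the posted code (assuming @begin <= @end and that earningvalue is the per-month earned amount):

        begin
            set @begin = datepart(month, @outdate)
            set @end   = datepart(month, @experiencedate)

            -- sum the per-month values for the months that have been earned
            select @earningvalue = sum(earningvalue)
            from tblearningpatterns
            where clientname = @client
              and earningpattern = @pattern
              and case calmonth
                    when 'january'  then 1  when 'february' then 2  when 'march'     then 3
                    when 'april'    then 4  when 'may'      then 5  when 'june'      then 6
                    when 'july'     then 7  when 'august'   then 8  when 'september' then 9
                    when 'october'  then 10 when 'november' then 11 when 'december'  then 12
                  end between @begin and @end

            set @earnedpremium = @earningvalue * @premium
        end

    If the per-month values are greater than 1, multiplying the running total by each month's value (as the recursive CTE does) compounds them instead of summing them, which could explain figures that come out far too large.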

    Read the article

  • ffmpeg hangs when creating a video

    - by FearUs
    I am trying to insert an audio channel with a video. First of all I extract the audio from the original video for processing:

        ffmpeg -i lotr.mp4 lotr.wav

    I then extract all frames for later processing too:

        ffmpeg -i lotr.mp4 -f image2 %d.jpg

    When done processing audio and video streams, I try to create the video:

        ffmpeg -f image2 -r 15 -i %d.jpg new.mp4

    then merge with the audio:

        ffmpeg -i new.mp4 -i lotr.wav -map 0:0 -map 1:0 new_w_audio.mp4

    Result: CPU activity = 100%, the process hangs and never returns. PS: I even tried it without modifying the images or the audio (so just trying to unpack then repack the video) but still the same output.

        FFmpeg version SVN-r26400, Copyright (c) 2000-2011 the FFmpeg developers
          built on Jan 18 2011 04:07:05 with gcc 4.4.2
          configuration: --enable-gpl --enable-version3 --enable-libgsm --enable-libvorbis --enable-libtheora --enable-libspeex --enable-libmp3lame --enable-libopenjpeg --enable-libschroedinger --enable-libopencore_amrwb --enable-libopencore_amrnb --enable-libvpx --disable-decoder=libvpx --arch=x86 --enable-runtime-cpudetect --enable-libxvid --enable-libx264 --enable-librtmp --extra-libs='-lrtmp -lpolarssl -lws2_32 -lwinmm' --target-os=mingw32 --enable-avisynth --enable-w32threads --cross-prefix=i686-mingw32- --cc='ccache i686-mingw32-gcc' --enable-memalign-hack
          libavutil     50.36. 0 / 50.36. 0
          libavcore      0.16. 1 /  0.16. 1
          libavcodec    52.108. 0 / 52.108. 0
          libavformat   52.93. 0 / 52.93. 0
          libavdevice   52. 2. 3 / 52. 2. 3
          libavfilter    1.74. 0 /  1.74. 0
          libswscale     0.12. 0 /  0.12. 0
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'new.mp4':
          Metadata:
            major_brand     : isom
            minor_version   : 512
            compatible_brands: isomiso2mp41
            creation_time   : 1970-01-01 00:00:00
            encoder         : Lavf52.93.0
          Duration: 00:00:29.66, start: 0.000000, bitrate: 193 kb/s
            Stream #0.0(und): Video: mpeg4, yuv420p, 200x134 [PAR 1:1 DAR 100:67], 192 kb/s, 15 fps, 15 tbr, 15 tbn, 15 tbc
            Metadata:
              creation_time   : 1970-01-01 00:00:00
        [wav @ 01fed010] max_analyze_duration reached
        Input #1, wav, from 'lotr.wav':
          Duration: 00:00:29.90, bitrate: 176 kb/s
            Stream #1.0: Audio: pcm_s16le, 11025 Hz, 1 channels, s16, 176 kb/s
        File 'new_w_audio.mp4' already exists. Overwrite ? [y/N] y
        [buffer @ 01b03820] w:200 h:134 pixfmt:yuv420p
        Output #0, mp4, to 'new_w_audio.mp4':
          Metadata:
            major_brand     : isom
            minor_version   : 512
            compatible_brands: isomiso2mp41
            creation_time   : 1970-01-01 00:00:00
            encoder         : Lavf52.93.0
            Stream #0.0(und): Video: mpeg4, yuv420p, 200x134 [PAR 1:1 DAR 100:67], q=2-31, 200 kb/s, 15 tbn, 15 tbc
            Metadata:
              creation_time   : 1970-01-01 00:00:00
            Stream #0.1: Audio: aac, 11025 Hz, 1 channels, s16, 64 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #1.0 -> #0.1
        Press [q] to stop encoding

    Read the article

  • DNSBL listed at zen.spamhaus.org - can't get outgoing mail working? Am I interpreting the response correctly?

    - by Joe Hopfgartner
    I have a problem with a mailserver and there is something I kind of don't understand. I can connect, authenticate, specify the sender address - but when specifying the receiver I get an error 550 which looks like this:

        RCPT TO:[email protected]
        550-DNSBL listed at zen.spamhaus.org
        550 http://www.spamhaus.org/query/bl?ip=62.178.15.161

    Now the strange thing is that 62.178.15.161 is my local client address, not the server's IP address. Also the error code 550 seems to be defined as:

        550 Requested action not taken: mailbox unavailable

    To me that makes totally no sense. Why this error code with this spamhaus message? Why the local IP address and not the server's? There is exim running and there is nothing turning up in the logs mail.err, mail.info, mail.log, mail.warn in /var/log. I looked up both the server's and the client's IP address on blacklists. The client's IP address is listed on some (as expected), but the server is totally clean. Here is the complete telnet log from when I reproduced the error. Mail clients like Evolution and Thunderbird give me the same spamhaus error message.

        joe@joe-desktop:~$ telnet mail.hunsynth.org 25
        Trying 193.164.132.42...
        Connected to mail.hunsynth.org.
        Escape character is '^]'.
        220 hunsynth.org ESMTP Exim 4.69 Sat, 01 Jan 2011 17:52:45 +0100
        HELP
        214-Commands supported:
        214 AUTH STARTTLS HELO EHLO MAIL RCPT DATA NOOP QUIT RSET HELP
        EHLO AUTH
        250-hunsynth.org Hello chello062178015161.6.11.univie.teleweb.at [62.178.15.161]
        250-SIZE 52428800
        250-PIPELINING
        250-AUTH PLAIN LOGIN CRAM-MD5
        250-STARTTLS
        250 HELP
        AUTH LOGIN
        334 VXNlcm5hbWU6
        dGVzdEBodW5zeW50aC5vcmc=
        334 UGFzc3dvcmQ6
        *****
        235 Authentication succeeded
        MAIL FROM:[email protected]
        250 OK
        RCPT TO:[email protected]
        550-DNSBL listed at zen.spamhaus.org
        550 http://www.spamhaus.org/query/bl?ip=62.178.15.161
        quit
        221 hunsynth.org closing connection
        Connection closed by foreign host.
        joe@joe-desktop:~$

    Update: I tried the same thing from my other server and could successfully send an email. So it really looks like the server checks whether the IP which establishes the connection is in some blacklist. This is theoretically a good thing - but shouldn't the authentication on the server prevent that? I just think it would be absurd if I couldn't send email over my SMTP server from my dynamic ISP connection because the dynamic IP is listed, although I have a clean server with login.

    Read the article

  • Bind9 virtual subdomains

    - by Steffan
    I am trying to set up virtual subdomains using Bind9, following this tutorial: http://groups.drupal.org/node/16862 - which I've completed. Basically, setting up the zone and modifying the resolv.conf file and the named.conf.local file. I've gotten everything to work, and from my server I am able to ping mydomain.com and test.mydomain.com, and when I do a dig I get the following:

        ; <<>> DiG 9.7.0-P1 <<>> test.mydomain.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 32606
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1

        ;; QUESTION SECTION:
        ;test.mydomain.com.            IN    A

        ;; ANSWER SECTION:
        test.mydomain.com.     86400   IN    A    174.###.###.#

        ;; AUTHORITY SECTION:
        mydomain.com.          86400   IN    NS   mydomain.com.

        ;; ADDITIONAL SECTION:
        mydomain.com.          86400   IN    A    174.###.###.#

        ;; Query time: 0 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Wed Jan 19 21:06:01 2011
        ;; MSG SIZE  rcvd: 86

    So it looks like everything is working. However, when I try test.mydomain.com in the browser, expecting it (for now) to default to mydomain.com, it does not work and I get a "server not found" page in Firefox. I did read elsewhere that you also need to set up a *.mydomain.com alias in your virtual hosts file, but that didn't fix anything. Any other information that I could provide to help troubleshoot, or any troubleshooting suggestions? I am using Ubuntu 10.4, with a typical LAMP setup. The only other things installed on the server are Bind9 and an ftp client.

    Read the article

  • broken apache .htaccess (mod_rewrite)

    - by Tim
    Hey there, I'm running into an apache mod_rewrite configuration issue on one of our machines. Has anyone encountered / overcome any one of these issues? URL1 ( http://www.uppereast.com ) is not being redirected to URL2 ( http://www.nyclocalliving.com ). This definitely worked in my test environment where a localhost address was rewritten to URL2 ( RewriteRule ^http://upe.localhost$ http://www.nyclocalliving.com ). I'm trying to get all of the redirect rules working ( 2200+ ), but the 'http://www.nyclocalliving.com' site encounters a server error if I use 1000 or more rules.

    A) .htaccess file - I've tried the simplest approach, which worked in a local environment:

          75 # Various rewrite rules.
          76 <IfModule mod_rewrite.c>
          77   RewriteEngine on
          78
          79   # BEGIN new URL Mapping rules
          80   #RewriteRule ^http://www.uppereast.com/$ http://www.nyclocalliving.com
         ...
        2307   #RewriteRule ^http://www.uppereast.com/zipcodechange.html$ http://www.nyclocalliving.com/zip-code-change

        fig. 1

    B) /var/log/httpd/error_log file - there are these seg fault errors when I enable the first rule ( line 80 ). No error logs otherwise.

        1893 [Fri Sep 25 17:53:46 2009] [notice] Digest: generating secret for digest authentication ...
        1894 [Fri Sep 25 17:53:46 2009] [notice] Digest: done
        1895 [Fri Sep 25 17:53:46 2009] [notice] Apache/2.2.3 (CentOS) configured -- resuming normal operations
        1896 [Fri Sep 25 17:53:47 2009] [notice] child pid 29774 exit signal Segmentation fault (11)
        1897 [Fri Sep 25 17:53:47 2009] [notice] child pid 29775 exit signal Segmentation fault (11)
        1898 [Fri Sep 25 17:53:47 2009] [notice] child pid 29776 exit signal Segmentation fault (11)
        1899 [Fri Sep 25 17:53:47 2009] [notice] child pid 29777 exit signal Segmentation fault (11)
        1900 [Fri Sep 25 17:53:47 2009] [notice] child pid 29778 exit signal Segmentation fault (11)
        1901 [Fri Sep 25 17:53:47 2009] [notice] child pid 29779 exit signal Segmentation fault (11)

        fig. 2

    C) Some more debug information from the shell; mod_rewrite is turned on, and this is the machine architecture:

        1 # apachectl -t -D DUMP_MODULES | more
        2 Loaded Modules:
        3  core_module (static)
        4  ...
        5  rewrite_module (shared)

        1 # uname -a
        2 Linux RegionalWeb 2.6.24-23-xen #1 SMP Mon Jan 26 03:09:12 UTC 2009 x86_64 x86_64 x86_64 GNU/Linux

        fig. 3

    I looked into some previous posts (http://serverfault.com/questions/18744/htaccess-not-working-modrewrite), but didn't find a solution for this. I'm sure there's a small switch somewhere that I'm missing. Thanks in advance, Tim

    Read the article

  • Are file access times not properly maintained in Mac OS X?

    - by Ether
    I'm trying to determine how file access times are maintained by default in Mac OS X, as I'm trying to diagnose some odd behaviour I'm seeing in a new MBP Unibody (running Snow Leopard, 10.6.2). The symptoms (drilling down to the specific behaviour that seems to be causing the issue):

    - mutt is unable to switch to mailboxes which have recently received new mail
    - mail is delivered by procmail, which updates the mtime of the mbox folder it is updating, but does not alter the atime (this is how new mail detection works: by comparing atime to mtime)
    - however, both the mtime and atime of the mbox file are getting updated

    Through testing, it does not appear that atimes can be set separately in the filesystem:

        : [ether@tequila ~]$; touch test
        : [ether@tequila ~]$; touch -m -t 200801010000 test2
        : [ether@tequila ~]$; touch -a -t 200801010000 test3
        : [ether@tequila ~]$; ls -l test*
        -rw------- 1 ether staff 0 Dec 30 11:42 test
        -rw------- 1 ether staff 0 Jan  1  2008 test2
        -rw------- 1 ether staff 0 Dec 30 11:43 test3
        : [ether@tequila ~]$; ls -lu test*
        -rw------- 1 ether staff 0 Dec 30 11:42 test
        -rw------- 1 ether staff 0 Dec 30 11:43 test2
        -rw------- 1 ether staff 0 Dec 30 11:43 test3

    The test2 file is created with an old mtime, and the atime is set to now (as it is a new file), which is correct. However, test3 is created with an old atime, but it is not set properly on the file. To be sure this is not just behaviour seen with new files, let's modify an old file:

        : [ether@tequila ~]$; touch -a -t 200801010000 test
        : [ether@tequila ~]$; ls -l test
        -rw------- 1 ether staff 0 Dec 30 11:42 test
        : [ether@tequila ~]$; ls -lu test
        -rw------- 1 ether staff 0 Dec 30 11:45 test

    So it would seem that atimes cannot be set explicitly (it is always reset to "now" when either mtime or atime modifications are submitted). Is this something inherent to the filesystem itself, is it something that can be changed, or am I totally crazy and looking in the wrong place?

    PS. The output of mount is:

        : [ether@tequila ~]$; mount
        /dev/disk0s2 on / (hfs, local, journaled)
        devfs on /dev (devfs, local, nobrowse)
        map -hosts on /net (autofs, nosuid, automounted, nobrowse)
        map auto_home on /home (autofs, automounted, nobrowse)

    ...and Disk Utility says that the drive is of type "Mac OS Extended (Journaled)".

    Read the article

  • 403 forbidden while submitting a POST request with image data via iPhone application

    - by binnyb
    I am creating an iOS application which allows users to send image/text data to my webserver via a POST request. I am successfully sending POSTs to the server when image data is not included in the request. Any time I POST with image data, the server spits back a 403 Forbidden. I have tried adding the following to the .htaccess file in the directory of the script, with no luck:

        Options +Indexes FollowSymLinks +ExecCGI
        Order allow,deny
        Allow from all

    Web browsers and Android devices can successfully POST with image data to the script; the only device which cannot is the iPhone. POSTing with data to other hosting providers works as expected - it is just this host (ipowerweb.com). I noticed that when I try to POST to ANY script on the server with data, it returns a 403 Forbidden. Another note: I can successfully POST to another server that is hosted by ipowerweb, but mine can't seem to handle it. My host has tried to resolve the issue but cannot, and they have marked it on their end as "resolved", so no more help from them. I wish to keep this host, as moving would be a pain - I will change hosts as a last resort, so please help me! Why am I getting this 403 Forbidden error only when I submit data via my iPhone application? How can I resolve the issue so I can successfully POST data? Any advice on what I can do would be greatly appreciated.

    Edit: as requested, here are the response headers:

        {
            Connection = close;
            "Content-Length" = 217;
            "Content-Type" = "text/html; charset=iso-8859-1";
            Date = "Wed, 12 Jan 2011 19:11:19 GMT";
            Server = "Apache/2";
        }

    Edit: as requested, here are the request headers (oops):

        {
            "Accept-Encoding" = gzip;
            "Content-Length" = 5781;
            "Content-Type" = "multipart/form-data; charset=utf-8; boundary=0xKhTmLbOuNdArY";
            "User-Agent" = "YeahIAteThat 1.0 (iPhone; iPhone OS 4.2.1; en_US)";
        }

    Read the article

  • Mongodb: why is my mongo server using two PID's?

    - by Lucas
    I started my mongo with the following command:

        [lucas@ecoinstance]~/node/nodetest2$ sudo mongod --dbpath /home/lucas/node/nodetest2/data
        2014-06-07T08:46:30.507+0000 [initandlisten] MongoDB starting : pid=6409 port=27017 dbpath=/home/lucas/node/nodetest2/data 64-bit host=ecoinstance
        2014-06-07T08:46:30.508+0000 [initandlisten] db version v2.6.1
        2014-06-07T08:46:30.508+0000 [initandlisten] git version: 4b95b086d2374bdcfcdf2249272fb552c9c726e8
        2014-06-07T08:46:30.508+0000 [initandlisten] build info: Linux build14.nj1.10gen.cc 2.6.32-431.3.1.el6.x86_64 #1 SMP Fri Jan 3 21:39:27 UTC 2014 x86_64 BOOST_LIB_VERSION=1_49
        2014-06-07T08:46:30.509+0000 [initandlisten] allocator: tcmalloc
        2014-06-07T08:46:30.509+0000 [initandlisten] options: { storage: { dbPath: "/home/lucas/node/nodetest2/data" } }
        2014-06-07T08:46:30.520+0000 [initandlisten] journal dir=/home/lucas/node/nodetest2/data/journal
        2014-06-07T08:46:30.520+0000 [initandlisten] recover : no journal files present, no recovery needed
        2014-06-07T08:46:30.527+0000 [initandlisten] waiting for connections on port 27017

    It appears to be working, as I can execute mongo and access the server. However, here are the processes running mongo:

        [lucas@ecoinstance]~/node/testSite$ ps aux | grep mongo
        root      6540  0.0  0.2  33424  1664 pts/3   S+   08:52   0:00 sudo mongod --dbpath /home/lucas/node/nodetest2/data
        root      6541  0.6  8.6 522140 52512 pts/3   Sl+  08:52   0:00 mongod --dbpath /home/lucas/node/nodetest2/data
        lucas     6554  0.0  0.1   7836   876 pts/4   S+   08:52   0:00 grep mongo

    As you can see, there are two PIDs for mongo. Before I ran sudo mongod --dbpath /home/lucas/node/nodetest2/data, there were none (besides the grep of course). How did my command spawn two PIDs, and should I be concerned? Any suggestions or tips would be great.

    Additional Info: In addition, I may have other issues that might suggest a cause. I tried running mongo with --fork --logpath /home/lucas..., but it did not work. More information below:

        [lucas@ecoinstance]~/node/nodetest2$ sudo mongod --dbpath /home/lucas/node/nodetest2/data --fork --logpath /home/lucas/node/nodetest2/data/
        about to fork child process, waiting until server is ready for connections.
        forked process: 6578
        ERROR: child process failed, exited with error number 1

        [lucas@ecoinstance]~/node/nodetest2$ ls -l data/
        total 163852
        drwxr-xr-x 2 mongodb nogroup     4096 Jun  7 08:54 journal
        -rw------- 1 mongodb nogroup 67108864 Jun  7 08:52 local.0
        -rw------- 1 mongodb nogroup 16777216 Jun  7 08:52 local.ns
        -rwxr-xr-x 1 mongodb nogroup        0 Jun  7 08:54 mongod.lock
        -rw------- 1 mongodb nogroup 67108864 Jun  7 02:08 nodetest1.0
        -rw------- 1 mongodb nogroup 16777216 Jun  7 02:08 nodetest1.ns

    Also, my db path folder is not the original location. It was originally created under the default /var/lib/mongodb/ and moved to my local data folder. This was done after shutting down the server via /etc/init.d/mongod stop. I have a Debian Wheezy server, if it matters.

    Read the article

  • Kernel upgrade CentOS 5.3 mount: could not find filesystem '/dev/root'

    - by matt
    We have a CentOS 5.3 x64 server that by default runs kernel version 2.6.18-164.11.1, and we are attempting to upgrade the box to 2.6.31.12. The drive is LVM + ext3, and the problem I'm having is that when I upgrade the kernel and attempt to boot from it, no matter what version of the kernel I use, I get /dev/root not found towards the end of the boot process, and the kernel panics and then reboots. I'm installing the kernel exactly as it says in this doc. I've tried it "the CentOS way" using make rpm and then installing that. I've updated my mkinitrd. The most interesting part of this problem is that it has been so frustrating that I decided to try a clean install of CentOS on an identical machine without LVM, and the result is EXACTLY the same: after upgrading the kernel, I get /dev/root not found. Does anyone know how to fix this, or what information would be relevant to remedy it? I'm open to trying anything at this point. One more interesting thing about this problem is that in the new version of the kernel, during boot it complains that dm-mapper is started twice, then panics right after that. I've tried this with other kernel versions, and the result is the same. What am I missing here? If you need any more files, please just ask.

        Linux cg 2.6.18-164.11.1.el5 #1 SMP Wed Jan 20 07:32:21 EST 2010 x86_64 x86_64 x86_64 GNU/Linux

        /dev/VolGroup00/LogVol00  /          ext3    defaults        1 1
        LABEL=/boot               /boot      ext3    defaults        1 2
        tmpfs                     /dev/shm   tmpfs   defaults        0 0
        devpts                    /dev/pts   devpts  gid=5,mode=620  0 0
        sysfs                     /sys       sysfs   defaults        0 0
        proc                      /proc      proc    defaults        0 0
        /dev/VolGroup00/LogVol01  swap       swap    defaults        0 0

        default=1
        timeout=5
        splashimage=(hd0,0)/grub/splash.xpm.gz
        hiddenmenu
        title CentOS (2.6.31.12-rt20)    //NOT WORKING!!!!
                root (hd0,0)
                kernel /vmlinuz-2.6.31.12-rt20 ro root=/dev/VolGroup00/LogVol00 isolcpus=8,9,10,11,12,13,14,15 panic=10
                initrd /initrd-2.6.31.12-rt20.img
        title CentOS (2.6.18-164.11.1.el5)    //WORKING!!
                root (hd0,0)
                kernel /vmlinuz-2.6.18-164.11.1.el5 ro root=/dev/VolGroup00/LogVol00 isolcpus=8,9,10,11,12,13,14,15 panic=10
                initrd /initrd-2.6.18-164.11.1.el5.img

    Read the article

  • How to get same cookie to control two different folders on same site.

    - by Incandescent
    I am using the cookie javascript below to run a background color changer on my site. I want to also use it for the background color of my forum, which is in a separate folder (http://lightbulbchoice.com/forum). I currently have it working on both the site and the forum, but you have to set each separately, i.e., each is setting its own cookie. How do I get the forum to locate the main site cookie and not set its own?

        // Cookie Functions - Second Helping (21-Jan-96)
        // Written by: Bill Dortch, hIdaho Design
        // The following functions were released to the public domain by him.
        function getCookieVal (offset) {
            var endstr = document.cookie.indexOf (";", offset);
            if (endstr == -1)
                endstr = document.cookie.length;
            return unescape(document.cookie.substring(offset, endstr));
        }

        function GetCookie (name) {
            var arg = name + "=";
            var alen = arg.length;
            var clen = document.cookie.length;
            var i = 0;
            while (i < clen) {
                var j = i + alen;
                if (document.cookie.substring(i, j) == arg)
                    return getCookieVal (j);
                i = document.cookie.indexOf(" ", i) + 1;
                if (i == 0) break;
            }
            return null;
        }

        function SetCookie (name, value) {
            var argv = SetCookie.arguments;
            var argc = SetCookie.arguments.length;
            var expires = (argc > 2) ? argv[2] : null;
            var path = (argc > 3) ? argv[3] : null;
            var domain = (argc > 4) ? argv[4] : null;
            var secure = (argc > 5) ? argv[5] : false;
            document.cookie = name + "=" + escape (value) +
                ((expires == null) ? "" : ("; expires=" + expires.toGMTString())) +
                ((path == null) ? "" : ("; path=" + path)) +
                ((domain == null) ? "" : ("; domain=" + domain)) +
                ((secure == true) ? "; secure" : "");
        }
        // -->
        </script>

    Read the article

  • Using IIS7 why are my PNGs being cached by the browser, but my JS and CSS files not?

    - by Craig Shearer
    I am trying to sort out caching in IIS for my site. Basically, I want nothing cached except for .png, .js, and .css files. At my site level, I opened the HTTP Response Headers feature and used "Set Common Headers..." to set content to expire immediately. I have no Output Caching profiles set at any level in IIS. I clear my browser cache, then try accessing my site. When my site requests a PNG file, I see responses like:

        Accept-Ranges   bytes
        Age             0
        Connection      Keep-Alive
        Content-Type    image/png
        Date            Thu, 12 Apr 2012 21:55:15 GMT
        Etag            "83b7322de318cd1:0"
        Last-Modified   Thu, 12 Apr 2012 19:33:45 GMT
        Server          Microsoft-IIS/7.5
        X-Powered-By    ASP.NET

    For JS and CSS files, I see responses like:

        Accept-Ranges     bytes
        Cache-Control     no-cache
        Connection        Keep-Alive
        Content-Encoding  gzip
        Content-Length    597
        Content-Type      text/css
        Date              Thu, 12 Apr 2012 21:55:15 GMT
        Etag              "06e45ede15bca1:0"
        Last-Modified     Mon, 02 Nov 2009 17:28:44 GMT
        Server            Microsoft-IIS/7.5
        Vary              Accept-Encoding
        X-Powered-By      ASP.NET

        Accept-Ranges     bytes
        Cache-Control     no-cache
        Connection        Keep-Alive
        Content-Encoding  gzip
        Content-Length    42060
        Content-Type      application/x-javascript
        Date              Thu, 12 Apr 2012 21:55:14 GMT
        Etag              "2356302de318cd1:0"
        Last-Modified     Thu, 12 Apr 2012 19:33:45 GMT
        Server            Microsoft-IIS/7.5
        Vary              Accept-Encoding
        X-Powered-By      ASP.NET

    So, why are my PNGs able to be cached, but JS and CSS files not? Then, I go into the Output Caching feature in IIS and set up profiles for .png, .css, and .js files. This updates the web.config file as follows:

        <caching>
          <profiles>
            <add extension=".png" policy="CacheUntilChange" kernelCachePolicy="DontCache" />
            <add extension=".css" policy="CacheUntilChange" kernelCachePolicy="DontCache" />
            <add extension=".js"  policy="CacheUntilChange" kernelCachePolicy="DontCache" />
          </profiles>
        </caching>

    I do a "precautionary" IISReset, then try accessing my site again. For PNG files, I see the following response:

        Accept-Ranges   bytes
        Age             0
        Connection      Keep-Alive
        Content-Length  3833
        Content-Type    image/png
        Date            Thu, 12 Apr 2012 22:02:30 GMT
        Etag            "0548c9e2c5dc81:0"
        Last-Modified   Tue, 22 Jan 2008 19:26:00 GMT
        Server          Microsoft-IIS/7.5
        X-Powered-By    ASP.NET

    For CSS and JS files, I see the following responses:

        Accept-Ranges     bytes
        Cache-Control     no-cache,no-cache
        Connection        Keep-Alive
        Content-Encoding  gzip
        Content-Length    2680
        Content-Type      application/x-javascript
        Date              Thu, 12 Apr 2012 22:02:29 GMT
        Etag              "0f743af9015c81:0"
        Last-Modified     Tue, 23 Oct 2007 16:20:54 GMT
        Server            Microsoft-IIS/7.5
        Vary              Accept-Encoding
        X-Powered-By      ASP.NET

        Accept-Ranges     bytes
        Cache-Control     no-cache,no-cache
        Connection        Keep-Alive
        Content-Encoding  gzip
        Content-Length    3831
        Content-Type      text/css
        Date              Thu, 12 Apr 2012 22:02:29 GMT
        Etag              "c3f42d2de318cd1:0"
        Last-Modified     Thu, 12 Apr 2012 19:33:45 GMT
        Server            Microsoft-IIS/7.5
        Vary              Accept-Encoding
        X-Powered-By      ASP.NET

    What am I doing wrong? Have I completely misunderstood the features of IIS, or is there a bug? Most importantly, how do I achieve what I want - that is, get the browser to cache only PNG, JS, and CSS files?

    Read the article

  • Troubleshooting an NFS server hanging after authenticated mount request

    - by Christoph
    I need some advice on troubleshooting an NFS server problem on Scientific Linux (RHEL) 6.1. The log on the server shows that an authenticated mount request was made:

        Jan 13 16:30:02 ??? rpc.mountd[3996]: authenticated mount request from ????:784 for /shared-storage/cm/shared (/shared-storage/cm/shared)

    But after that, it does not continue. On the client, it is also hanging. The interesting thing is that I have two NFS servers, which should be identical; one is working perfectly, but the other exhibits the above-mentioned behaviour. The problem is also not completely persistent, i.e. sometimes the mount request succeeds. I assume that the problem must be related to the server rather than to the client, because it is working perfectly on the other server. My question is where I should search for the problem. I have already re-created the exports using exportfs -r, I have restarted the NFS server, and I have compared the rpcinfo outputs of both servers - no success. The problem even survives a reboot. Any other ideas are appreciated. As an answer to Tim's question: I sporadically have the following in dmesg, but do not know whether it is related:

        e1000e 0000:0c:00.0: eth4: Detected Hardware Unit Hang:
          TDH                  <24>
          TDT                  <25>
          next_to_use          <25>
          next_to_clean        <24>
        buffer_info[next_to_clean]:
          time_stamp           <1c3d12940>
          next_to_watch        <24>
          jiffies              <1c3d12940>
          next_to_watch.status <0>
        MAC Status             <80383>
        PHY Status             <792d>
        PHY 1000BASE-T Status  <7800>
        PHY Extended Status    <3000>
        PCI Status             <10>

    Further edit: The problem above does not occur on the machine that is working, so it is probably related. Again an edit: The error is not on the (software) device that is used for NFS, but on another one. The NFS mount also does not trigger the message.

    Read the article

  • How to get an inactive RAID device working again?

    - by Jonik
    After booting, my RAID1 device (/dev/md_d0 *) sometimes goes into some funny state and I cannot mount it. (* Originally I created /dev/md0 but it has somehow changed itself into /dev/md_d0.)

        # mount /opt
        mount: wrong fs type, bad option, bad superblock on /dev/md_d0,
               missing codepage or helper program, or other error
               (could this be the IDE device where you in fact use
               ide-scsi so that sr0 or sda or so is needed?)
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    The RAID device appears to be inactive somehow:

        # cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md_d0 : inactive sda4[0](S)
              241095104 blocks

        # mdadm --detail /dev/md_d0
        mdadm: md device /dev/md_d0 does not appear to be active.

    Question is, how do I make the device active again (using mdadm, I presume)? (Other times it's alright (active) after boot, and I can mount it manually without problems. But it still won't mount automatically even though I have it in /etc/fstab:

        /dev/md_d0        /opt           ext4    defaults        0       0

    So a bonus question: what should I do to make the RAID device automatically mount at /opt at boot time?) This is an Ubuntu 9.10 workstation. Background info about my RAID setup in this question.

    Edit: My /etc/mdadm/mdadm.conf looks like this. I've never touched this file, at least by hand.

        # by default, scan all partitions (/proc/partitions) for MD superblocks.
        # alternatively, specify devices to scan, using wildcards if desired.
        DEVICE partitions

        # auto-create devices with Debian standard permissions
        CREATE owner=root group=disk mode=0660 auto=yes

        # automatically tag new arrays as belonging to the local system
        HOMEHOST <system>

        # instruct the monitoring daemon where to send mail alerts
        MAILADDR <my mail address>

        # definitions of existing MD arrays

        # This file was auto-generated on Wed, 27 Jan 2010 17:14:36 +0200

    In /proc/partitions the last entry is md_d0, at least now, after reboot, when the device happens to be active again. (I'm not sure if it would be the same when it's inactive.)

    Resolution: as Jimmy Hedman suggested, I took the output of mdadm --examine --scan:

        ARRAY /dev/md0 level=raid1 num-devices=2 UUID=de8fbd92[...]

    and added it to /etc/mdadm/mdadm.conf, which seems to have fixed the main problem. After changing /etc/fstab to use /dev/md0 again (instead of /dev/md_d0), the RAID device also gets automatically mounted!

    Read the article

  • dig @my-server-ip mydomain.com works from inside, not from outside?

    - by x4954
    My server has 2 IPs: x.x.x.73 and x.x.x.248. I can access my site via these IPs using a web browser. Now, from a CentOS machine (not my server), using the terminal, if I run:

        dig @x.x.x.73 mydomain.com
        dig @x.x.x.248 mydomain.com

    I get the result "Connection timed out; no server could be reached." Could somebody please tell me how to fix it? Thank you. More information: If I log in to my server using ssh and do:

        dig @x.x.x.73 mydomain.com
        dig @x.x.x.248 mydomain.com

    I can see my zone shown as expected:

        ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-16.P1.el5_7.1 <<>> @x.x.x.73 mydomain.com
        ; (1 server found)
        ;; global options:  printcmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12757
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 2, ADDITIONAL: 2

        ;; QUESTION SECTION:
        ;mydomain.com.                 IN    A

        ;; ANSWER SECTION:
        mydomain.com.          38400   IN    A    x.x.x.73
        mydomain.com.          38400   IN    A    x.x.x.248

        ;; AUTHORITY SECTION:
        mydomain.com.          38400   IN    NS   ns2.mydomain.com.
        mydomain.com.          38400   IN    NS   ns1.mydomain.com.

        ;; ADDITIONAL SECTION:
        ns1.mydomain.com.      38400   IN    A    x.x.x.73
        ns2.mydomain.com.      38400   IN    A    x.x.x.248

        ;; Query time: 20 msec
        ;; SERVER: x.x.x.73#53(x.x.x.73)
        ;; WHEN: Sun Jan 15 11:46:30 2012
        ;; MSG SIZE  rcvd: 129

    BIND version 9.3.6, CentOS 5. Logging in to my server using ssh and doing a "dig google.com" also shows expected results.

    Read the article

  • Why won't IE let users login to a website unless in In Private mode?

    - by Richard Fawcett
    I'm not entirely sure this belongs on SuperUser.com. I also considered ServerFault.com and StackOverflow.com, but on balance, I think it should belong here. We host a website which has the same code responding to multiple domain names. On 28th December (without any changes deployed to the website) a percentage of users suddenly could not login, and the blank login page was just rendered again even when the correct credentials were entered. The issue is still ongoing. After remote controlling an affected user's PC, we've found the following:

    - The issue affects Internet Explorer 9.
    - The user can login from the same machine on Chrome.
    - The user can login from an In Private browser session using IE9.
    - The user can login if the website is added to the Trusted Sites security zone.
    - The user can NOT login from an IE session in safe mode (started with iexplore -extoff).
    - Only one hostname that the website responds to prevents login; the same user account on the other hostname works fine (note that this is identical code and database running server side), even though that site is not in the Trusted Sites zone.

    Series of HTTP requests in the failure case:

    1. GET request to protected page, returns a 302 FOUND response to login page.
    2. GET request to login page.
    3. POST to login page, containing credentials, returns redirect to protected page.
    4. GET request to protected page... for some reason auth fails and the browser is redirected to the login page, as in step 1.

    Other information: the operating system is Windows 7 Ultimate Edition; the AV system is AVG Internet Security 2012. I can think of lots of things that could be going wrong, but in every case, one of the findings above is incompatible with the theory. Any ideas what is causing login to fail?

    Update 06-Jan-2012: Enhanced logging has shown that the .ASPXAUTH cookie is being set in step 3. Its expiry date is 28 days in the future, its path is /, the domain is mysite.com, and its value is an encrypted forms ticket, as expected. However, the cookie is not being received by the web server during step 4. Other cookies are being presented to the server during step 4; it's just this one that is missing. I've seen that cookies are usually set with a domain starting with a period, but mine isn't. Should it be .mysite.com instead of mysite.com? However, if this was wrong, it would presumably affect all users?

    Read the article

  • Excel data range - to sum series within date range

    - by Mark
    I have a set of data that I would like to manipulate, but my problem is not straightforward. In this data I have date ranges that include multiple entries of the same date on some days and not on others. What I need to accomplish is to manage a trading account so that no more than 1% of the account is put at risk on any given day (retrospectively). To do this, when a series of trades falls on the same day, I need to total the risk associated with each of those trades so that I can limit the total risk of the combined trades by limiting the position size I take in each. Here is a sample set of the data I am working with. As you can see, there are 5 trades on Jan 3. Each of these trades comes with a risk value. I need to add the risk values of these 5 trades so that I can compare the total to an account value and then determine if I should take more than 1 position in each trade. As you can see, there are different numbers of trades that occur on the 4th, 5th, 6th and 9th. I need the values returned in each row so that I can further manipulate them in the spreadsheet. I am not new to Excel, but cannot come up with a solution here - your input is much appreciated. Forgive the presentation below - I cannot upload a pic (new user) and the format does not carry across from Excel. I have aligned the first several lines manually. Thx.

        Date ............. Pair ....... L/S ...... Initial Risk .......Win ......Loss ....BE. ....Avg Gain Avg Loss pips/swing
        1/3/2012 ....EUR/USD ....S .............15 ................1 ..................................10 ..........................15
        1/3/2012 ....USD/CHF .....L ............15 ..........................................1 ..........0
        1/3/2012 ....AUD/USD ....S .............15 ................1 .................................16 ...........................18
        1/3/2012 ....NZD/USD ....S .............15 ................1 ...................................7 .............................8
        1/3/2012 ....AUD/JPY .... S .............10 ................1 .................................25 ............................20
        1/4/2012 ....EUR/USD ....L .............20 ................1 .................................19 ...........................19
        1/4/2012 ....USD/CHF ....S ............ 15 ................1 .................................17 ...........................20
        1/4/2012 EUR/JPY L 20 1 0
        1/5/2012 EUR/USD L 15 1 10 20
        1/5/2012 GBP/USD L 20 1 15 20
        1/5/2012 USD/CHF S 15 1 0
        1/5/2012 USD/JPY S 10 1 7 10
        1/5/2012 USD/CAD S 15 1 28 36
        1/5/2012 AUD/USD L 15 1 20 20
        1/6/2012 USD/CAD S 15 1 5 -10
        1/6/2012 EUR/JPY L 15 1 7 7
        1/9/2012 AUD/USD S 15 1 22 30
        1/9/2012 NZD/USD S 15 1 10 15

    Read the article

< Previous Page | 36 37 38 39 40 41 42 43 44  | Next Page >