Search Results


  • IE8 window.opener problems

    - by fire
    Having problems with IE8... I have a button whose onclick fires the showImageBrowser() function:

        function showImageBrowser(params) {
            var open = window.open('http://localhost/admin/browse?' + params, 'newwin',
                'toolbar=0,location=0,directories=0,status=1,menubar=0,scrollbars=1,resizable=1,width=950,height=500');
            if (!open) {
                alert('Could not open the image browser, please disable your popup blocker.');
            }
        }

    In the image browser, clicking on an image calls this function:

        function selectFile(url, el) {
            window.opener.replaceImage('Test_Image', url);
            window.close();
        }

    which calls the replaceImage() function in the parent window, as expected. This is the code:

        function replaceImage(el, url) {
            $('#' + el).html('<a href="' + url + '" target="_blank" class="image">' + basename(url) + '</a>');
            $("input[name='" + el + "']").val(url);
        }

    Now if you click the original showImageBrowser() button a second time, IE brings up the window, but this time it freezes for a few seconds and then you get the alert "Could not open the image browser, please disable your popup blocker." This works fine in Firefox (obviously) but not in IE. I haven't even tried IE7/6, because if it doesn't work in 8 then I know I'm going to have problems. Any advice?

  • FancyURLopener failing since moving to Python 3.1.2

    - by Andrew Shepherd
    I had an application that was downloading a .CSV file from a password-protected website and then processing it further. I was using FancyURLopener and simply hardcoding the username and password. (Obviously, security is not a high priority in this particular instance.) Since downloading Python 3.1.2, this code has stopped working. Does anyone know of the changes that have happened to the implementation? Here is a cut-down version of the code:

        import urllib.request;

        class TracOpener (urllib.request.FancyURLopener) :
            def prompt_user_passwd(self, host, realm) :
                return ('andrew_ee', '_my_unenctryped_password')

        csvUrl='http://mysite/report/19?format=csv@USER=fred_nukre'
        opener = TracOpener();
        f = opener.open(csvUrl);
        s = f.read();
        f.close();
        s;

    For the sake of completeness, here's the entire call stack:

        Traceback (most recent call last):
          File "C:\reporting\download_csv_file.py", line 12, in <module>
            f = opener.open(csvUrl);
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1454, in open
            return getattr(self, name)(url)
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1628, in open_http
            return self._open_generic_http(http.client.HTTPConnection, url, data)
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1624, in _open_generic_http
            response.status, response.reason, response.msg, data)
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1640, in http_error
            result = method(url, fp, errcode, errmsg, headers)
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1878, in http_error_401
            return getattr(self,name)(url, realm)
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1950, in retry_http_basic_auth
            return self.open(newurl)
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1454, in open
            return getattr(self, name)(url)
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1628, in open_http
            return self._open_generic_http(http.client.HTTPConnection, url, data)
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1590, in _open_generic_http
            auth = base64.b64encode(user_passwd).strip()
          File "C:\Program Files\Python31\lib\base64.py", line 56, in b64encode
            raise TypeError("expected bytes, not %s" % s.__class__.__name__)
        TypeError: expected bytes, not str
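
    The traceback shows retry_http_basic_auth handing the 'user:password' string to base64.b64encode, which in Python 3 expects bytes, so it is the hard-coded credential path inside FancyURLopener that breaks. One way to sidestep it is to drop FancyURLopener and use urllib.request's auth handlers instead. A minimal, untested sketch along those lines, reusing the URL and credentials from the question and assuming the site uses ordinary HTTP basic auth:

        import urllib.request

        csvUrl = 'http://mysite/report/19?format=csv@USER=fred_nukre'

        password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
        # None as the realm means "use this login for any realm on that host"
        password_mgr.add_password(None, 'http://mysite', 'andrew_ee', '_my_unenctryped_password')

        opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(password_mgr))
        f = opener.open(csvUrl)
        s = f.read()        # bytes in Python 3; decode if text is needed
        f.close()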

  • Asynchronous web crawling in F#, something wrong?

    - by jlezard
    Not quite sure if it is ok to do this but, my question is: Is there something wrong with my code ? It doesn't go as fast as I would like, and since I am using lots of async workflows maybe I am doing something wrong. The goal here is to build something that can crawl 20 000 pages in less than an hour. open System open System.Text open System.Net open System.IO open System.Text.RegularExpressions open System.Collections.Generic open System.ComponentModel open Microsoft.FSharp open System.Threading //This is the Parallel.Fs file type ComparableUri ( uri: string ) = inherit System.Uri( uri ) let elts (uri:System.Uri) = uri.Scheme, uri.Host, uri.Port, uri.Segments interface System.IComparable with member this.CompareTo( uri2 ) = compare (elts this) (elts(uri2 :?> ComparableUri)) override this.Equals(uri2) = compare this (uri2 :?> ComparableUri ) = 0 override this.GetHashCode() = 0 ///////////////////////////////////////////////Funtions to retreive html string////////////////////////////// let mutable error = Set.empty<ComparableUri> let mutable visited = Set.empty<ComparableUri> let getHtmlPrimitiveAsyncDelay (delay:int) (uri : ComparableUri) = async{ try let req = (WebRequest.Create(uri)) :?> HttpWebRequest // 'use' is equivalent to ‘using’ in C# for an IDisposable req.UserAgent<-"Mozilla" //Console.WriteLine("Waiting") do! Async.Sleep(delay * 250) let! resp = (req.AsyncGetResponse()) Console.WriteLine(uri.AbsoluteUri+" got response after delay "+string delay) use stream = resp.GetResponseStream() use reader = new StreamReader(stream) let html = reader.ReadToEnd() return html with | _ as ex -> Console.WriteLine( ex.ToString() ) lock error (fun () -> error<- error.Add uri ) lock visited (fun () -> visited<-visited.Add uri ) return "BadUri" } ///////////////////////////////////////////////Active Pattern Matching to retreive href////////////////////////////// let (|Matches|_|) (pat:string) (inp:string) = let m = Regex.Matches(inp, pat) // Note the List.tl, since the first group is always the entirety of the matched string. if m.Count > 0 then Some (List.tail [ for g in m -> g.Value ]) else None let (|Match|_|) (pat:string) (inp:string) = let m = Regex.Match(inp, pat) // Note the List.tl, since the first group is always the entirety of the matched string. 
if m.Success then Some (List.tail [ for g in m.Groups -> g.Value ]) else None ///////////////////////////////////////////////Find Bad href////////////////////////////// let isEmail (link:string) = link.Contains("@") let isMailto (link:string) = if Seq.length link >=6 then link.[0..5] = "mailto" else false let isJavascript (link:string) = if Seq.length link >=10 then link.[0..9] = "javascript" else false let isBadUri (link:string) = link="BadUri" let isEmptyHttp (link:string) = link="http://" let isFile (link:string)= if Seq.length link >=6 then link.[0..5] = "file:/" else false let containsPipe (link:string) = link.Contains("|") let isAdLink (link:string) = if Seq.length link >=6 then link.[0..5] = "adlink" elif Seq.length link >=9 then link.[0..8] = "http://adLink" else false ///////////////////////////////////////////////Find Bad href////////////////////////////// let getHref (htmlString:string) = let urlPat = "href=\"([^\"]+)" match htmlString with | Matches urlPat urls -> urls |> List.map( fun href -> match href with | Match (urlPat) (link::[]) -> link | _ -> failwith "The href was not in correct format, there was more than one match" ) | _ -> Console.WriteLine( "No links for this page" );[] |> List.filter( fun link -> not(isEmail link) ) |> List.filter( fun link -> not(isMailto link) ) |> List.filter( fun link -> not(isJavascript link) ) |> List.filter( fun link -> not(isBadUri link) ) |> List.filter( fun link -> not(isEmptyHttp link) ) |> List.filter( fun link -> not(isFile link) ) |> List.filter( fun link -> not(containsPipe link) ) |> List.filter( fun link -> not(isAdLink link) ) let treatAjax (href:System.Uri) = let link = href.ToString() let firstPart = (link.Split([|"#"|],System.StringSplitOptions.None)).[0] new Uri(firstPart) //only follow pages with certain extnsion or ones with no exensions let followHref (href:System.Uri) = let valid2 = set[".py"] let valid3 = set[".php";".htm";".asp"] let valid4 = set[".php3";".php4";".php5";".html";".aspx"] let arrLength = href.Segments |> Array.length let lastExtension = (href.Segments).[arrLength-1] let lengthLastExtension = Seq.length lastExtension if (lengthLastExtension <= 3) then not( lastExtension.Contains(".") ) else //test for the 2 case let last4 = lastExtension.[(lengthLastExtension-1)-3..(lengthLastExtension-1)] let isValid2 = valid2|>Seq.exists(fun validEnd -> last4.EndsWith( validEnd) ) if isValid2 then true else if lengthLastExtension <= 4 then not( last4.Contains(".") ) else let last5 = lastExtension.[(lengthLastExtension-1)-4..(lengthLastExtension-1)] let isValid3 = valid3|>Seq.exists(fun validEnd -> last5.EndsWith( validEnd) ) if isValid3 then true else if lengthLastExtension <= 5 then not( last5.Contains(".") ) else let last6 = lastExtension.[(lengthLastExtension-1)-5..(lengthLastExtension-1)] let isValid4 = valid4|>Seq.exists(fun validEnd -> last6.EndsWith( validEnd) ) if isValid4 then true else not( last6.Contains(".") ) && not(lastExtension.[0..5] = "mailto") //Create the correct links / -> add the homepage , make them a comparabel Uri let hrefLinksToUri ( uri:ComparableUri ) (hrefLinks:string list) = hrefLinks |> List.map( fun link -> try if Seq.length link <4 then Some(new Uri( uri, link )) else if link.[0..3] = "http" then Some(new Uri(link)) else Some(new Uri( uri, link )) with | _ as ex -> Console.WriteLine(link); lock error (fun () ->error<-error.Add uri) None ) |> List.filter( fun link -> link.IsSome ) |> List.map( fun o -> o.Value) |> List.map( fun uri -> new ComparableUri( string uri ) ) //Treat uri , 
removing ajax last part , and only following links specified b Benoit let linksToFollow (hrefUris:ComparableUri list) = hrefUris |>List.map( treatAjax ) |>List.filter( fun link -> followHref link ) |>List.map( fun uri -> new ComparableUri( string uri ) ) |>Set.ofList let needToVisit uri = ( lock visited (fun () -> not( visited.Contains uri) ) ) && (lock error (fun () -> not( error.Contains uri) )) let getLinksToFollowAsyncDelay (delay:int) ( uri: ComparableUri ) = async{ let! links = getHtmlPrimitiveAsyncDelay delay uri lock visited (fun () ->visited<-visited.Add uri) let linksToFollow = getHref links |> hrefLinksToUri uri |> linksToFollow |> Set.filter( needToVisit ) |> Set.map( fun link -> if uri.Authority=link.Authority then link else link ) return linksToFollow } //Add delays if visitng same authority let getDelay(uri:ComparableUri) (authorityDelay:Dictionary<string,int>) = let uriAuthority = uri.Authority let hasAuthority,delay = authorityDelay.TryGetValue(uriAuthority) if hasAuthority then authorityDelay.[uriAuthority] <-delay+1 delay else authorityDelay.Add(uriAuthority,1) 0 let rec getLinksToFollowFromSetAsync maxIteration ( uris: seq<ComparableUri> ) = let authorityDelay = Dictionary<string,int>() if maxIteration = 100 then Console.WriteLine("Finished") else //Unite by authority add delay for those we same authority others ignore let stopwatch= System.Diagnostics.Stopwatch() stopwatch.Start() let newLinks = uris |> Seq.map( fun uri -> let delay = lock authorityDelay (fun () -> getDelay uri authorityDelay ) getLinksToFollowAsyncDelay delay uri ) |> Async.Parallel |> Async.RunSynchronously |> Seq.concat stopwatch.Stop() Console.WriteLine("\n\n\n\n\n\n\nTimeElapse : "+string stopwatch.Elapsed+"\n\n\n\n\n\n\n\n\n") getLinksToFollowFromSetAsync (maxIteration+1) newLinks getLinksToFollowFromSetAsync 0 (seq[ComparableUri( "http://twitter.com/" )]) Console.WriteLine("Finished") Some feedBack would be great ! Thank you (note this is just something I am doing for fun)
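
    For comparison, the core scheduling idea above (fetch many pages in parallel while spacing out requests that hit the same host) can be sketched with the Python standard library alone. This is only an illustration of the throttling logic, not a port or a review of the F# code; the 0.25 s per-host spacing mirrors the delay * 250 ms used above, and the seed URL is the same.

        import threading
        import time
        import urllib.request
        from collections import defaultdict
        from concurrent.futures import ThreadPoolExecutor
        from urllib.parse import urlparse

        PER_HOST_SPACING = 0.25          # seconds between two requests to the same host
        next_slot = defaultdict(float)   # host -> earliest time the next request may start
        slot_lock = threading.Lock()

        def fetch(url):
            host = urlparse(url).netloc
            with slot_lock:              # reserve a polite time slot for this host
                slot = max(next_slot[host], time.monotonic())
                next_slot[host] = slot + PER_HOST_SPACING
            time.sleep(max(0.0, slot - time.monotonic()))
            try:
                with urllib.request.urlopen(url, timeout=30) as resp:
                    return url, resp.read()
            except Exception:
                return url, None         # comparable to the "BadUri" case above

        urls = ["http://twitter.com/"]   # seed list; a real crawler keeps extending this
        with ThreadPoolExecutor(max_workers=50) as pool:
            for url, body in pool.map(fetch, urls):
                print(url, "failed" if body is None else "%d bytes" % len(body))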

  • Minimalistic tools for developer documentation

    - by Pekka
    I am currently working on a large PHP CMS / framework and documenting it extensively as I go along. In addition to phpdoc-style inline comments, I need to document XML structures, details on concepts and practices, write HOWTOs and so on. At the moment I am using simple OpenOffice documents for that, but I'm unhappy with it and looking for a "real" documentation system. So, I am looking for recommendations for robust, minimalistic, easy-to-use documentation software.

    I have tried a number of wikis, most prominently DokuWiki. I like the open-minded approach, the freedom in editing, and the simplicity, but they provide little support for structuring a multi-chapter documentation and make basic reorganisation tasks very difficult (e.g. moving pages to a different namespace). Working with the plugins is cumbersome, and they are not really easy to use. Open source would be a plus but is not a requirement.

    Thanks for all the suggestions. I have not had time to look into each one in detail. I will be trying Sphinx, especially because it provides so much support for a good structure. I may update this post later when I'm done and report how it worked out. The suggestions:

    - Trac's built-in wiki, which is great but for my taste provides too little support for keeping a structure - it's perfect, though, for "normal", smaller-size project documentation.
    - Markdown, my current favourite because of its minimalism; however, I'm not sure yet whether maintaining a structure will be easy enough. A Markdown-based system would of course be very easy to extend, e.g. to look up cross references from the project's code base. Of course it would be great to find something that already has that out of the box.
    - The DocBook format and, to edit it, the commercial Oxygen XML Editor - a great standard for building documentation, no doubt. Maybe too "technical" for my purposes, as I need something to open quickly, write into and go on coding. Still always worth a mention.
    - Sphinx, an open-source, Python-based documentation generator, promising structured documentation and extensive cross-referencing. Interesting, and I will take a look.
    - Confluence, a commercial but very affordable wiki.
    - XWiki, an open-source wiki playing in Confluence's league, with numerous extensions and connectors to Eclipse and Microsoft Office.
    - TiddlyWiki, an open-source wiki.

  • Program won't write to a file, and I do not know if it is reading it

    - by user320950
    This program is supposed to read files and write them. I took the file open checks out because they kept causing errors. The problem is that the files open like they are supposed to and the names are correct but nothing is on any of the text screens. Do you know what is wrong? #include<iostream> #include<fstream> #include<cstdlib> #include<iomanip> using namespace std; int main() { ifstream in_stream; // reads itemlist.txt ofstream out_stream1; // writes in items.txt ifstream in_stream2; // reads pricelist.txt ofstream out_stream3;// writes in plist.txt ifstream in_stream4;// read recipt.txt ofstream out_stream5;// write display.txt float price=' ',curr_total=0.0; int itemnum=' ', wrong=0; char next; in_stream.open("ITEMLIST.txt", ios::in); // list of avaliable items out_stream1.open("listWititems.txt", ios::out); // list of avaliable items in_stream2.open("PRICELIST.txt", ios::in); out_stream3.open("listWitdollars.txt", ios::out); in_stream4.open("display.txt", ios::in); out_stream5.open("showitems.txt", ios::out); in_stream.close(); // closing files. out_stream1.close(); in_stream2.close(); out_stream3.close(); in_stream4.close(); out_stream5.close(); system("pause"); in_stream.setf(ios::fixed); while(in_stream.eof()) { in_stream >> itemnum; cin.clear(); cin >> next; } out_stream1.setf(ios::fixed); while (out_stream1.eof()) { out_stream1 << itemnum; cin.clear(); cin >> next; } in_stream2.setf(ios::fixed); in_stream2.setf(ios::showpoint); in_stream2.precision(2); while((price== (price*1.00)) && (itemnum == (itemnum*1))) { while (in_stream2 >> itemnum >> price) // gets itemnum and price { while (in_stream2.eof()) // reads file to end of file { in_stream2 >> itemnum; in_stream2 >> price; price++; curr_total= price++; in_stream2 >> curr_total; cin.clear(); // allows more reading cin >> next; } } } out_stream3.setf(ios::fixed); out_stream3.setf(ios::showpoint); out_stream3.precision(2); while((price== (price*1.00)) && (itemnum == (itemnum*1))) { while (out_stream3 << itemnum << price) { while (out_stream3.eof()) // reads file to end of file { out_stream3 << itemnum; out_stream3 << price; price++; curr_total= price++; out_stream3 << curr_total; cin.clear(); // allows more reading cin >> next; } return itemnum, price; } } in_stream4.setf(ios::fixed); in_stream4.setf(ios::showpoint); in_stream4.precision(2); while ( in_stream4.eof()) { in_stream4 >> itemnum >> price >> curr_total; cin.clear(); cin >> next; } out_stream5.setf(ios::fixed); out_stream5.setf(ios::showpoint); out_stream5.precision(2); out_stream5 <<setw(5)<< " itemnum " <<setw(5)<<" price "<<setw(5)<<" curr_total " <<endl; // sends items and prices to receipt.txt out_stream5 << setw(5) << itemnum << setw(5) <<price << setw(5)<< curr_total; // sends items and prices to receipt.txt out_stream5 << " You have a total of " << wrong++ << " errors " << endl; }

  • File extensions

    - by Sten
    I am trying to open a file with the file extension '.lib' (an object file library) and one with '.dll', but I don't know what application to open them with. Any advice? Thanks

  • TreeView Root Node Style

    - by ScG
    I have a tree view:

        <asp:TreeView ID="TreeView1" runat="server" DataSourceID="SiteMapDataSource1" ShowExpandCollapse="False">
        </asp:TreeView>

    As you can see, the root node is hidden. However, when the tree gets rendered there is whitespace to the left of each leaf node. This is probably because the root node is hidden but is still taking up space. How do I remove that whitespace?

  • How to read values from a file (tokenizer)

    - by user69514
    I have a file in which each line contains two numbers. The problem is that the two numbers are separated by a space, but it can be any number of blank spaces: one, two, or more. I want to read each line and store each of the numbers in a variable, but I'm not sure how to tokenize it. For example:

        1   5
        3   2
        5   6
        3   4
        83  54
        23  23
        32  88
        8   203
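
    The question doesn't say which language is being used, but the key point is the same everywhere: split on runs of whitespace rather than on a single space character. A minimal sketch in Python, with numbers.txt as a placeholder name for the input file:

        with open("numbers.txt") as f:
            for line in f:
                fields = line.split()        # split() with no argument collapses any run of spaces
                if len(fields) != 2:
                    continue                 # skip blank or malformed lines
                a, b = int(fields[0]), int(fields[1])
                print(a, b)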

  • "str is not callable" error in Python

    - by mekasperasky
    import sys import md5 from TOSSIM import * from RadioCountMsg import * t = Tossim([]) #The Tossim object is defined here m = t.mac()#The mac layer is defined here , in which the communication takes place r = t.radio()#The radio communication link object is defined here , as the communication needs Rf frequency to transfer t.addChannel("RadioCountToLedsC", sys.stdout)# The various channels through which communication will take place t.addChannel("LedsC", sys.stdout) #The no of nodes that would be required in the simulation has to be entered here print("enter the no of nodes you want ") n=input() for i in range(0, n): m = t.getNode(i) m.bootAtTime((31 + t.ticksPerSecond() / 10) * i + 1) #The booting time is defined so that the time at which the node would be booted is given f = open("topo.txt", "r") #The topography is defined in topo.txt so that the RF frequencies of the transmission between nodes are are set lines = f.readlines() for line in lines: s = line.split() if (len(s) > 0): if (s[0] == "gain"): r.add(int(s[1]), int(s[2]), float(s[3])) #The topogrography is added to the radio object noise = open("meyer-heavy.txt", "r") #The noise model is defined for the nodes lines = noise.readlines() for line in lines: str = line.strip() if (str != ""): val = int(str) for i in range(0, 4): t.getNode(i).addNoiseTraceReading(val) for i in range (0, n): t.getNode(i).createNoiseModel() #The noise model is created for each node for i in range(0,n): t.runNextEvent() fk=open("key.txt","w") for i in range(0,n): if i ==0 : key=raw_input() fk.write(key) ak=key key=md5.new() key.update(str(ak)) ak=key.digest() fk.write(ak) fk.close() fk=open("key.txt","w") plaint=open("pt.txt") for i in range(0,n): msg = RadioCountMsg() msg.set_counter(7) pkt = t.newPacket()#A packet is defined according to a certain format print("enter message to be transported") ms=raw_input()#The message to be transported is taken as input #The RC5 encryption has to be done here plaint.write(ms) pkt.setData(msg.data) pkt.setType(msg.get_amType()) pkt.setDestination(i+1)#The destination to which the packet will be sent is set print "Delivering " + " to" ,i+1 pkt.deliver(i+1, t.time() + 3) fk.close() print "the key to be displayed" ki=raw_input() fk=open("key.txt") for i in range(0,n): if i==ki: ms=fk.readline() for i in range(0,n): msg=RadioCountMsg() msg.set_counter(7) pkt=t.newPacket() msg.data=ms pkt.setData(msg.data) pkt.setType(msg.get_amType()) pkt.setDestination(i+1) pkt.deliver(i+1,t.time()+3) #The key has to be broadcasted here so that the decryption can take place for i in range(0, n): t.runNextEvent(); this code gives me error here key.update(str(ak)) . when i run a similar code on the python terminal there is no such error but this code pops up an error . why so?
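
    The error is consistent with the built-in name str being shadowed earlier in the script: the noise-file loop rebinds it with str = line.strip(), so the later key.update(str(ak)) ends up calling a plain string object instead of the str type. Running similar lines in a fresh interpreter works because nothing has rebound str there. A minimal sketch of the same effect and the fix, independent of TOSSIM:

        line = "  gain 1 2 -54.0  "
        str = line.strip()              # rebinds the built-in name str to a plain string object
        # Any later call such as str(123) now raises:
        #   TypeError: 'str' object is not callable

        del str                         # removing the shadowing name restores the built-in

        val = line.strip()              # fix: just use a different variable name
        key_material = str(val)         # str() is callable again
        print(key_material)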

  • Connection.State Issue

    - by Mostafa
    Hi, I've published my new website. On my computer it works fine with no problems, but on the server it crashes when several users connect at the same time. I found out there is an error in this method:

        public static DbDataReader ExecuteReader(DbCommand dbCommand, CommandBehavior commandBehavior)
        {
            if (dbConnection.State != ConnectionState.Open)
                dbConnection.Open();
            return dbCommand.ExecuteReader(commandBehavior);
        }

    When I trace it, it says the ConnectionState is not Open but is in the middle of opening. Here are my questions:

    1. Is it a problem if, while the ConnectionState is still opening, we call Open on the connection again?
    2. What have I missed that makes me receive this error?

  • Displaying Image On SmallBASIC

    - by Nathan Campos
    I want to display an image using SmallBASIC. I started by searching the reference and found the entry for IMAGE, which looks like this:

        IMAGE #handle, index, x, y [,sx,sy [,w,h]]

    Then I found another one for opening files (OPEN):

        OPEN file [FOR {INPUT|OUTPUT|APPEND}] AS #fileN

    But I want to know a couple of things: What image types can this function display? Is there any real example of using IMAGE?

  • Selenium RC: how to capture/handle error?

    - by KenBurnsFan1
    Hi, my test uses Selenium to loop through a CSV list of URLs via an HTTP proxy (working script below). As I watch the script run, I can see about 10% of the calls produce "Proxy error: 502" ("Bad_Gateway"); however, the errors are not captured by my catch-all "except Exception" clause -- i.e. instead of writing 'error' in the appropriate row of "output.csv", they get passed to the else clause and produce a short piece of HTML that starts: "Proxy error: 502 Read from server failed: Unknown error." Also, if I collect all the URLs which returned 502s and re-run the script, they all pass, which leads me to believe that this is a sporadic network path issue.

    Question: Can the script be made to recognize the 502 errors, sleep a minute, and then retry the URL instead of moving on to the next URL in the list? The only alternative that I can think of is to apply re.search("Proxy error: 502") after "get_html_source" as a way to catch the bad calls. Then, if the RE matches, put the script to sleep for a minute and then retry sel.open(row[0]) on the URL which produced the 502. Any advice would be much appreciated. Thanks!

        #python 2.6
        from selenium import selenium
        import unittest, time, re, csv, logging

        class Untitled(unittest.TestCase):
            def setUp(self):
                self.verificationErrors = []
                self.selenium = selenium("localhost", 4444, "*firefox", "http://baseDomain.com")
                self.selenium.start()
                self.selenium.set_timeout("60000")

            def test_untitled(self):
                sel = self.selenium
                spamReader = csv.reader(open('ListOfSubDomains.csv', 'rb'))
                for row in spamReader:
                    try:
                        sel.open(row[0])
                    except Exception:
                        ofile = open('output.csv', 'ab')
                        ofile.write("error" + '\n')
                        ofile.close()
                    else:
                        time.sleep(5)
                        html = sel.get_html_source()
                        ofile = open('output.csv', 'ab')
                        ofile.write(html.encode('utf-8') + '\n')
                        ofile.close()

            def tearDown(self):
                self.selenium.stop()
                self.assertEqual([], self.verificationErrors)

        if __name__ == "__main__":
            unittest.main()
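
    The retry idea described above - spot the proxy error in the page source, sleep, and try the same URL again - could be wrapped up roughly as below. It reuses the sel and row names from the script and caps the retries so a URL that keeps returning 502 still ends up recorded as an error; treat it as a sketch, not a drop-in patch:

        import re, time

        MAX_RETRIES = 3

        def open_with_retry(sel, url):
            """Return the page source, or None if the URL keeps failing."""
            for attempt in range(MAX_RETRIES):
                try:
                    sel.open(url)
                except Exception:
                    return None                      # hard failure, same as the except branch above
                time.sleep(5)
                html = sel.get_html_source()
                if not re.search(r"Proxy error: 502", html):
                    return html                      # good response
                time.sleep(60)                       # sporadic 502: wait a minute and retry
            return None

        # inside the CSV loop:
        #     html = open_with_retry(sel, row[0])
        #     ofile.write((html.encode('utf-8') if html else "error") + '\n')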

  • MS SQL Error: Primary file group is full

    - by aximili
    I have a very large table in my database and I am starting to get this error:

        Could not allocate a new page for database 'mydatabase' because of insufficient disk space in filegroup 'PRIMARY'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

    How do you fix this error? I don't understand the suggestions there.

  • Having a problem with C++ file handling

    - by caramel1991
    Our lecturer has given us a task,I've attempted it and try every single effort I can,but I still struggle with one of the problem in it,here goes the question: The company you work at receives a monthly report in a text format. The report contains the following information. • Department Name • Head of Department Name • Month • Minimum spending of the month • Maximum spending of the month Your program is to obtain the name of the input file from the user. Implement a structure to represent the data: Once the file has been read into your program, print out the following statistics for the user: • List which department has the minimum spending per month by month • List which department has the minimum spending by month by month Write the information into a file called “MaxMin.txt” Then do a processing so that the Department Name, Head of Department Name, Minimum spending and Maximum spending are written to separate files based on the month, eg Jan, Feb, March and so on. and of course our lecturer does send us a text file with the content: Engineering Bill Jan 2000 15000 IT Jack Jan 300 20000 HR Jill Jan 1500 10000 Engineering Bill Feb 5000 45000 IT Jack Feb 4500 7000 HR Jill Feb 5600 60000 Engineering Bill Mar 5000 45000 IT Jack Mar 4500 7000 HR Jill Mar 5600 60000 Engineering Bill Apr 5000 45000 IT Jack Apr 4500 7000 HR Jill Apr 5600 60000 Engineering Bill May 2000 15000 IT Jack May 300 20000 HR Jill May 1500 10000 Engineering Bill Jun 2000 15000 IT Jack Jun 300 20000 HR Jill Jun 1500 10000 and here's the c++ code I've written ue#include include include using namespace std; struct Record { string depName; string head; string month; float max; float min; string name; }myRecord[19]; int main () { string line; ofstream minmax,jan,feb,mar,apr,may,jun; char a[50]; char b[50]; int i = 0,j,k; float temp; //float maxjan=myRecord[0].max,maxfeb=myRecord[0].max,maxmar=myRecord[0].max,maxapr=myRecord[0].max,maxmay=myRecord[0].max,maxjune=myRecord[0].max; float minjan=myRecord[1].min,minfeb=myRecord[1].min,minmar=myRecord[1].min,minapr=myRecord[1].min,minmay=myRecord[1].min,minjune=myRecord[1].min; float maxjan=0,maxfeb=0,maxmar=0,maxapr=0,maxmay=0,maxjune=0; //float minjan=0,minfeb=0,minmar=0,minapr=0,minmay=0,minjune=0; string maxjanDep,maxfebDep,maxmarDep,maxaprDep,maxmayDep,maxjunDep; string minjanDep,minfebDep,minmarDep,minaprDep,minmayDep,minjunDep; cout<<"Enter file name: "; cina; ifstream myfile (a); //minmax.open ("MaxMin.txt"); if (myfile.is_open()){ while (! myfile.eof()){ myfilemyRecord[i].depNamemyRecord[i].headmyRecord[i].monthmyRecord[i].minmyRecord[i].max; cout << myRecord[i].depName<<"\t"< cout<<"Enter file name: "; cinb; ifstream myfile1 (b); minmax.open ("MaxMin.txt"); jan.open ("Jan.txt"); feb.open ("Feb.txt"); mar.open ("March.txt"); apr.open ("April.txt"); may.open ("May.txt"); jun.open ("Jun.txt"); if (myfile1.is_open()){ while (! 
myfile1.eof()){ myfile1myRecord[i].depNamemyRecord[i].headmyRecord[i].monthmyRecord[i].minmyRecord[i].max; if (myRecord[i].month == "Jan"){ jan<< myRecord[i].depName<<"\t"< if (maxjan< myRecord[i].max){ maxjan=myRecord[i].max; maxjanDep=myRecord[i].depName;} //if (minjan myRecord[i].min){ // minjan=myRecord[i].min; //minjanDep=myRecord[i].depName; //} for (k=1;k<=3;k++){ for (j=0;j<2;j++){ if (myRecord[j].minmyRecord[j+1].min){ temp=myRecord[j].min; myRecord[j].min=myRecord[j+1].min; myRecord[j+1].min=temp; minjanDep=myRecord[j].depName; }}} } if (myRecord[i].month == "Feb"){ feb<< myRecord[i].depName<<"\t"< //if (minfebmyRecord[i].min){ //minfeb=myRecord[i].min; //minfebDep=myRecord[i].depName; //} for (k=1;k<=3;k++){ for (j=0;j<2;j++){ if (myRecord[j].minmyRecord[j+1].min){ temp=myRecord[j].min; myRecord[j].min=myRecord[j+1].min; myRecord[j+1].min=temp; minfebDep=myRecord[j+1].depName; }}} } if (myRecord[i].month == "Mar"){ mar<< myRecord[i].depName<<"\t"< if (myRecord[i].month == "Apr"){ apr<< myRecord[i].depName<<"\t"< if (minaprmyRecord[i].min){ minapr=myRecord[i].min; minaprDep=myRecord[i].min;} } if (myRecord[i].month == "May"){ may< if (minmaymyRecord[i].min){ minmay=myRecord[i].min; minmayDep=myRecord[i].depName;} } if (myRecord[i].month == "Jun"){ jun<< myRecord[i].depName<<"\t"< if (minjunemyRecord[i].min){ minjune=myRecord[i].min; minjunDep=myRecord[i].depName;} } i++; myfile.close(); } minmax<<"department that has maximum spending at jan "< else{ cout << "Unable to open file"< } sorry inside that code ue#include should has iostream along with another two #include fstream and string,but at here it was treated as html tag,so i can't type it. my problem here is,I can't seems to get the minimum spending,I've try all I can but I'm still lingering on it,any idea??THANK YOU!
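
    Leaving the C++ stream handling aside, the selection step the assignment asks for is just: group the records by month, then take the record with the smallest minimum (and the largest maximum) spending in each group. A rough sketch of that logic in Python, assuming each report line looks like the sample data above (department, head, month, minimum, maximum); the file name is a placeholder:

        records = []
        with open("report.txt") as f:                      # placeholder name for the monthly report
            for line in f:
                fields = line.split()
                if len(fields) != 5:
                    continue                               # skip blank or malformed lines
                dep, head, month, lo, hi = fields
                records.append((dep, head, month, float(lo), float(hi)))

        months = []                                        # months in the order they first appear
        for rec in records:
            if rec[2] not in months:
                months.append(rec[2])

        for month in months:
            in_month = [r for r in records if r[2] == month]
            cheapest = min(in_month, key=lambda r: r[3])   # smallest minimum spending
            priciest = max(in_month, key=lambda r: r[4])   # largest maximum spending
            print(month, "min spender:", cheapest[0], "| max spender:", priciest[0])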

  • [JS] XMLHttpRequest problem

    - by mcco
    I am trying to do a "POST" with XMLHttpRequest in a Firefox extension, and after that I try to get the "Location" header and open it in a new tab. For some reason, the XMLHttpRequest response doesn't contain a Location header. My code:

        function web(url, request) {
            var http = new XMLHttpRequest();
            http.open('POST', url, true);
            http.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
            http.onreadystatechange = function() {
                if (http.readyState == 2) {
                    alert(http.getResponseHeader("Location"));
                }
            }
            http.send(request);
        }

    Also, if I change the alert to getAllResponseHeaders() to see all headers, I just don't see the Location header there. If I try to spy on the request of the original site with Firebug, it does show me the Location header in the response. Please help me resolve my issue. Thanks :)

    P.S. I'm also unable to open a link in a new tab using window.open(url, this.window.title);, but as that's not directly related to the rest of this I'll post a separate question about it.

  • How to write mmap input memory to O_DIRECT output file?

    - by Friedrich
    Why doesn't the following pseudo-code work (O_DIRECT results in EFAULT)?

        in_fd = open("/dev/mem");
        in_mmap = mmap(in_fd);
        out_fd = open("/tmp/file", O_DIRECT);
        write(out_fd, in_mmap, PAGE_SIZE);

    while the following does (no O_DIRECT):

        in_fd = open("/dev/mem");
        in_mmap = mmap(in_fd);
        out_fd = open("/tmp/file");
        write(out_fd, in_mmap, PAGE_SIZE);

    I guess it's something about kernel virtual pages versus user virtual pages that cannot be translated in the write call? Best regards, Friedrich

  • Dev environment - Cubicles or pods?

    - by jon
    We're reorganizing our workspaces at work, and are individually being given the choice of working in a more open space with a few other developers, or a more closed off space by ourselves. Which should I choose?

  • Batch file that returns folder size

    - by Paul Wall
    Hi, I'm having space issues on my Vista machine and need to figure out what's taking up so much space. I would like to write a simple batch file that returns all folders under C: and the size of each folder. The dir command doesn't appear to return folder size. Unfortunately we don't have admin rights and can't install a third party application and we have other users in our group that also need this information. Thanks...
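
    If a scripting runtime happens to be available (the sketch below assumes Python, which is an assumption given the no-install constraint; plain batch with dir /s per folder is the fallback), the per-folder totals can be computed by walking each top-level directory and summing file sizes:

        import os

        root = "C:\\"
        for entry in os.listdir(root):
            path = os.path.join(root, entry)
            if not os.path.isdir(path):
                continue
            total = 0
            for dirpath, dirnames, filenames in os.walk(path, onerror=lambda e: None):
                for name in filenames:
                    try:
                        total += os.path.getsize(os.path.join(dirpath, name))
                    except OSError:
                        pass                       # skip files we cannot stat
            print("%12d MB  %s" % (total // (1024 * 1024), path))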

  • Looking for an algorithm that will show how to fit the most boxes in a container.

    - by Abe Miessler
    I've been interested in writing an application that will show how to fit boxes (of random dimensions) in a container so there is as little space as possible left. A real life example would be something that would tell you how to use the most space in a UPS truck. Does anyone know of a good place to start for something like this? Is there an existing algorithm that does something similar to what I'm talking about?
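
    Exact 3-D container packing is NP-hard, so practical packers use heuristics; "bin packing" and "knapsack" are the terms to search for. The one-dimensional cousin of the idea, first-fit decreasing, is easy to sketch and shows the general shape (sort items large to small, drop each into the first bin that still has room). This is only an illustration, not a 3-D solver:

        def first_fit_decreasing(sizes, capacity):
            """Heuristically pack 1-D item sizes into bins of a fixed capacity."""
            bins = []                                  # each bin: [remaining_space, [item sizes]]
            for size in sorted(sizes, reverse=True):   # biggest items first
                for b in bins:
                    if b[0] >= size:                   # first bin with enough room left
                        b[0] -= size
                        b[1].append(size)
                        break
                else:                                  # no existing bin fits: open a new one
                    bins.append([capacity - size, [size]])
            return bins

        # e.g. first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10)
        # -> [[0, [8, 2]], [0, [4, 4, 1, 1]]]  (two bins)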

  • Opening Blob Data stored in Sqlite as a File with Tcl/Tk

    - by dmullins
    I am using the following Tcl code to store a file from my desktop into a SQLite database as blob data ($fileText is a path to a text file):

        sqlite3 db Docs.db
        set fileID [open $fileText RDONLY]
        fconfigure $fileID -translation binary
        set content [read $fileID]
        close $fileID
        db eval {insert into Document (Doc) VALUES ($content)}
        db close

    I have found many resources on how to open the blob data to read and write to it, but I cannot find any resources on opening the blob data as a file. For example, if $fileText was a PDF, how would I open it, from SQLite, as a PDF?
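
    Whatever the language, the blob is just the original file's bytes, so "opening it as a PDF" comes down to writing those bytes to a file with the right extension and handing that file to the OS default viewer. For illustration, here is that flow sketched with Python's built-in sqlite3 module against the same Docs.db/Document/Doc names used above; the row id is a placeholder:

        import os, sqlite3, subprocess, sys, tempfile

        conn = sqlite3.connect("Docs.db")
        # rowid = 1 is a placeholder; pick whichever document you want to view
        blob = conn.execute("SELECT Doc FROM Document WHERE rowid = ?", (1,)).fetchone()[0]
        conn.close()

        # Write the bytes back out with the extension the viewer expects (.pdf here)
        fd, path = tempfile.mkstemp(suffix=".pdf")
        with os.fdopen(fd, "wb") as out:
            out.write(blob)

        # Hand the file to the platform's default application
        if sys.platform.startswith("win"):
            os.startfile(path)
        elif sys.platform == "darwin":
            subprocess.call(["open", path])
        else:
            subprocess.call(["xdg-open", path])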

  • c# Remove special chars from a File

    - by jmpena
    Hello, I have a problem. I'm trying to open a text file and remove all the special chars (ñ Ñ ' á í etc.). The file is a layout that the clients send to me, and I parse it to send the file to an AS400 server, but I have to remove all the special chars first.

    THE PROBLEM IS: in some files with special chars, when I open them in C#, a special char is read as two different chars, which moves the entire line one space to the right, so the information that has to be at a given position won't be OK. If I take the same file and open it in Notepad, the file looks OK, but when I open it in WordPad it looks like 2 chars (for just 1 special char).

    Example: in the file I have:

        0001 0003JUAN PEÑA33441JPENATEST

    But in C# it shows:

        0001 0003JUAN PEï¦A33441JPENATEST

    I'm using encoding 1251. Any help?
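
    The symptom - one Ñ becoming two characters and shifting the fixed-width line one position to the right - is typical of reading a file with the wrong code page (for instance a UTF-8 file decoded as a single-byte encoding such as 1251/1252), because the special character then occupies two bytes. The safer order is: decode with the file's real encoding first, then strip or transliterate anything outside ASCII before building the AS400 layout. A sketch of that idea in Python; the 'utf-8' input encoding and the file names are assumptions:

        import unicodedata

        with open("layout.txt", "rb") as f:             # placeholder input file name
            raw = f.read()

        text = raw.decode("utf-8")                      # assumption: adjust to the file's real encoding

        # Replace accented characters by their base letter (Ñ -> N, á -> a), drop the rest
        ascii_text = (unicodedata.normalize("NFKD", text)
                      .encode("ascii", "ignore")
                      .decode("ascii"))

        with open("layout_ascii.txt", "w") as f:        # placeholder output file name
            f.write(ascii_text)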

  • C# - How do I find a string within a string even if it spans across whitespace?

    - by James Heaney
    I want to be able to find and highlight a string within a string, BUT I do not want to remove the space. So if my original string is:

        There are 12 monkeys

    I want to find '12 mon' and highlight those characters, ending up with:

        There are <font color='red'>12 mon</font>keys

    BUT I also want the same result if I search for '12mon' (no space this time). This has really bent my mind! I'm sure it can be done with Regex.
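
    One regex approach is to escape each non-space character of the search term and join them with \s*, so any amount of whitespace (including none) is allowed between consecutive characters; the replacement then wraps whatever actually matched, space included. Sketched in Python for brevity - the generated pattern string itself would work the same way with .NET's Regex class:

        import re

        def highlight(text, term, colour="red"):
            # Allow any run of whitespace between the characters of the search term
            pattern = r"\s*".join(re.escape(ch) for ch in term if not ch.isspace())
            return re.sub("(" + pattern + ")",
                          r'<font color="%s">\1</font>' % colour,
                          text, count=1, flags=re.IGNORECASE)

        print(highlight("There are 12 monkeys", "12 mon"))   # There are <font color="red">12 mon</font>keys
        print(highlight("There are 12 monkeys", "12mon"))    # same result without the space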
