Search Results

Search found 6055 results on 243 pages for 'ryan max'.

  • how to retrieve the PK using Spring Security

    - by aditya
    i implement this method of the UserDetailService interface, public UserDetails loadUserByUsername(final String username) throws UsernameNotFoundException, DataAccessException { final EmailCredential userDetails = persistentEmailCredential .getUniqueEmailCredential(username); if (userDetails == null) { throw new UsernameNotFoundException(username + "is not registered"); } final HashSet<GrantedAuthority> authorities = new HashSet<GrantedAuthority>(); authorities.add(new GrantedAuthorityImpl("ROLE_USER")); for (UserRole role:userDetails.getAccount().getRoles()) { authorities.add(new GrantedAuthorityImpl(role.getRole())); } return new User(userDetails.getEmailAddress(), userDetails .getPassword(), true, true, true, true, authorities); } in the security context i do some thing like this <!-- Login Info --> <form-login default-target-url='/dashboard.htm' login-page="/login.htm" authentication-failure-url="/login.htm?authfailed=true" always-use-default-target='false' /> <logout logout-success-url="/login.htm" invalidate-session="true" /> <remember-me user-service-ref="emailAccountService" key="fuellingsport" /> <session-management> <concurrency-control max-sessions="1" /> </session-management> </http> now i want to pop out the Pk of the logged in user, how can i show it in my jsp pages, any idea thanks in advance

  • FTP FileWatcher

    - by Meiscooldude
    So, I am in this little predicament where I am stuck watching a few ftp folders to see if they have new files added to them. If they do, it needs to throw an event with the file name. Thereby telling something else to download that file. This is a pretty simple object to make, I was just curious if anyone knew how expensive this operation would be? I plan on using the command NLIST because I don't need file size information, and there will be no sub-directories in the folder. Each file in the folder will have exactly 25 characters in its name. There could be anywhere from 10 to 'maybe' a couple thousand (max around 2000) files per folder (usually on the lower end, 100-300, but currently growing). The files are anywhere from 250kb to a very VERY unlikely 10mb (usually within the 250kb to 4mb range). There possibly could be up to a few hundred folders (in which case I could change the watch frequency depending on number of folders), but currently there are only a few (6-10ish). There also would be multiple logins for the ftp server, different logins would have access to different folders. I am not asking for an implementation, just if anyone has some first or second hand knowledge about FTP, how could this affect my network. I am not opposed to putting in file retention times or change the frequency in which I check for new files.
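
    A minimal polling sketch of the idea in Python's ftplib (not the asker's .NET watcher; the host, credentials and folder below are hypothetical): diff successive NLST listings and treat anything new as an event.

        import time
        from ftplib import FTP

        def watch(host, user, password, folder, interval=60):
            """Yield newly appeared file names by diffing successive NLST listings."""
            seen = set()
            while True:
                ftp = FTP(host)
                ftp.login(user, password)
                current = set(ftp.nlst(folder))   # names only, no sizes, so the reply stays small
                ftp.quit()
                for name in sorted(current - seen):
                    yield name                    # "new file" event for the downloader
                seen = current
                time.sleep(interval)

    Cost-wise, each poll is one control connection and one NLST reply, roughly a few kilobytes for a few hundred 25-character names, so even a few hundred folders at a modest interval is light on the network.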

  • SQL query - how to apply a LIMIT within GROUP BY

    - by Raj
    hey guys assuming i have a table named t1 with following fields: ROWID, CID, PID, Score, SortKey it has the following data: 1, C1, P1, 10, 1 2, C1, P2, 20, 2 3, C1, P3, 30, 3 4, C2, P4, 20, 3 5, C2, P5, 30, 2 6, C3, P6, 10, 1 7, C3, P7, 20, 2 what query do i write so that it applies group by on CID, but instead of returning me 1 single result per group, it returns me a max of 2 results per group. also where condition is score = 20 and i want the results ordered by CID and SortKey. If I had to run my query on above data, i would expect the following result: RESULTS FOR C1 - note: ROWID 1 is not considered as its score < 20 C1, P2, 20, 2 C1, P3, 30, 3 RESULTS FOR C2 - note: ROWID 5 appears before ROWID 4 as ROWID 5 has lesser value SortKey C2, P5, 30, 2 C2, P4, 20, 3 RESULTS FOR C3 - note: ROWID 6 does not appear as its score is less than 20 so only 1 record returned here C3, P7, 20, 2 IN SHORT, I WANT A LIMIT WITHIN A GROUP BY. I want the simplest solution and want to avoid temp tables. sub queries are fine. also note i am using sqlite for this
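
    One SQLite-friendly pattern is a correlated subquery that keeps a row only when fewer than two qualifying rows of the same CID sort ahead of it. Here is a sketch run through Python's sqlite3 with the question's data (table and column names taken from the post, the filter read as Score >= 20); it is one option, not the only one.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE t1 (CID TEXT, PID TEXT, Score INTEGER, SortKey INTEGER)")
        conn.executemany("INSERT INTO t1 (CID, PID, Score, SortKey) VALUES (?, ?, ?, ?)", [
            ("C1", "P1", 10, 1), ("C1", "P2", 20, 2), ("C1", "P3", 30, 3),
            ("C2", "P4", 20, 3), ("C2", "P5", 30, 2),
            ("C3", "P6", 10, 1), ("C3", "P7", 20, 2),
        ])

        # Keep a row only if fewer than 2 qualifying rows of the same CID sort ahead of it
        # (SortKey first, SQLite's implicit rowid as the tie-breaker).
        query = """
            SELECT a.CID, a.PID, a.Score, a.SortKey
            FROM t1 AS a
            WHERE a.Score >= 20
              AND (SELECT COUNT(*) FROM t1 AS b
                   WHERE b.CID = a.CID AND b.Score >= 20
                     AND (b.SortKey < a.SortKey
                          OR (b.SortKey = a.SortKey AND b.rowid < a.rowid))) < 2
            ORDER BY a.CID, a.SortKey
        """
        for row in conn.execute(query):
            print(row)   # C1/P2, C1/P3, C2/P5, C2/P4, C3/P7, matching the expected output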

  • categorize a set of phrases into a set of similar phrases

    - by Dingo
    I have a few apps that generate textual tracing information (logs) to log files. The tracing information is the typical printf() style - i.e. there are a lot of log entries that are similar (same format argument to printf), but differ where the format string had parameters. What would be an algorithm (url, books, articles, ...) that will allow me to analyze the log entries and categorize them into several bins/containers, where each bin has one associated format? Essentially, what I would like is to transform the raw log entries into (formatA, arg0 ... argN) instances, where formatA is shared among many log entries. The formatA does not have to be the exact format used to generate the entry (even more so if this makes the algo simpler). Most of the literature and web-info I found deals with exact matching, a max substring matching, or a k-difference (with k known/fixed ahead of time). Also, it focuses on matching a pair of (long) strings, or a single bin output (one match among all input). My case is somewhat different, since I have to discover what represents a (good-enough) match (generally a sequence of discontinuous strings), and then categorize each input entries to one of the discovered matches. Lastly, I'm not looking for a perfect algorithm, but something simple/easy to maintain. Thanks!
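
    Before reaching for k-difference matching, a crude baseline often gets most of the way there: mask tokens that look like parameters (numbers, hex, quoted strings) and bucket entries by the remaining skeleton. A rough Python sketch of that heuristic, with a made-up log sample:

        import re
        from collections import defaultdict

        # Tokens that look like printf() arguments are replaced with a wildcard;
        # whatever skeleton is left acts as the bin key (the discovered "format").
        PARAM = re.compile(r'(0x[0-9a-fA-F]+|\d+(\.\d+)*|"[^"]*")')

        def template(entry):
            return PARAM.sub("<*>", entry)

        def categorize(entries):
            bins = defaultdict(list)
            for entry in entries:
                bins[template(entry)].append(entry)
            return bins

        sample = [
            'connect from 10.0.0.1 port 5123',
            'connect from 10.0.0.9 port 6612',
            'wrote 2048 bytes to "cache.dat"',
        ]
        for fmt, members in categorize(sample).items():
            print(fmt, len(members))   # two bins: the connect format and the wrote format

    The masking rules are the weak point; they have to be tuned to whatever actually varies in these logs, which is why the literature leans on fuzzier matching.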

  • I'm making a simulated TV

    - by Jam
    I need to make a tv that shows the user the channel and the volume, and shows whether or not the television is on. I have the majority of the code made, but for some reason the channels won't switch. I'm fairly unfamiliar with how properties work, and I think that's what my problem here is. Help please. class Television(object): def __init__(self, __channel=1, volume=1, is_on=0): self.__channel=__channel self.volume=volume self.is_on=is_on def __str__(self): if self.is_on==1: print "The tv is on" print self.__channel print self.volume else: print "The television is off." def toggle_power(self): if self.is_on==1: self.is_on=0 return self.is_on if self.is_on==0: self.is_on=1 return self.is_on def get_channel(self): return channel def set_channel(self, choice): if self.is_on==1: if choice>=0 and choice<=499: channel=self.__channel else: print "Invalid channel!" else: print "The television isn't on!" channel=property(get_channel, set_channel) def raise_volume(self, up=1): if self.is_on==1: self.volume+=up if self.volume>=10: self.volume=10 print "Max volume!" else: print "The television isn't on!" def lower_volume(self, down=1): if self.is_on==1: self.volume-=down if self.volume<=0: self.volume=0 print "Muted!" else: print "The television isn't on!" def main(): tv=Television() choice=None while choice!="0": print \ """ Television 0 - Exit 1 - Toggle Power 2 - Change Channel 3 - Raise Volume 4 - Lower Volume """ choice=raw_input("Choice: ") print if choice=="0": print "Good-bye." elif choice=="1": tv.toggle_power() tv.__str__() elif choice=="2": change=raw_input("What would you like to change the channel to?") tv.set_channel(change) tv.__str__() elif choice=="3": tv.raise_volume() tv.__str__() elif choice=="4": tv.lower_volume() tv.__str__() else: print "\nSorry, but", choice, "isn't a valid choice." main() raw_input("Press enter to exit.")
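
    The property is where this breaks: get_channel returns a bare name "channel" that does not exist, and set_channel copies in the wrong direction (channel=self.__channel) without ever storing "choice", which also arrives as a string from raw_input. A minimal corrected sketch of just that part, keeping the original structure:

        class Television(object):
            def __init__(self, channel=1, volume=1, is_on=0):
                self.__channel = channel
                self.volume = volume
                self.is_on = is_on

            def get_channel(self):
                return self.__channel            # return the stored attribute, not a bare name

            def set_channel(self, choice):
                choice = int(choice)             # raw_input() hands back a string, so convert
                if self.is_on == 1:
                    if 0 <= choice <= 499:
                        self.__channel = choice  # assign *to* the attribute, not from it
                    else:
                        print("Invalid channel!")
                else:
                    print("The television isn't on!")

            channel = property(get_channel, set_channel)

    With that in place, either tv.set_channel(change) or tv.channel = change updates the stored channel, and the existing __str__ will show the new value.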

  • Choosing circle radius to fully fill a rectangle

    - by Andy
    Hi, the pixman image library can draw radial color gradients between two circles. I'd like the radial gradient to fill a rectangular area defined by "width" and "height" completely. Now my question, how should I choose the radius of the outer circle? My current parameters are the following: A) inner circle (start of gradient) center pointer of inner circle: (width*0.5|height*0.5) radius of inner circle: 1 color: black B) outer circle (end of gradient) center pointer of outer circle: (width*0.5|height*0.5) radius of outer circle: ??? color: white How should I choose the radius of the outer circle to make sure that the outer circle will entirely fill my bounding rectangle defined by width*height. There shall be no empty areas in the corners, the area shall be completely covered by the circle. In other words, the bounding rectangle width,height must fit entirely into the outer circle. Choosing outer_radius = max(width, height) * 0.5 as the radius for the outer circle is obviously not enough. It must be bigger, but how much bigger? Thanks!
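
    The gradient covers the w-by-h rectangle exactly when the outer circle reaches the farthest point from the center, which is a corner, so the radius is half the diagonal:

        r_{\text{outer}} \;=\; \sqrt{\left(\tfrac{w}{2}\right)^{2} + \left(\tfrac{h}{2}\right)^{2}} \;=\; \tfrac{1}{2}\sqrt{w^{2} + h^{2}}

    For a square (w = h) this is about 0.707*w, a factor of sqrt(2) larger than the max(width, height)*0.5 guess; any radius at or above this value leaves no uncovered corners.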

  • Unable to access Java-created file -- sometimes

    - by BlairHippo
    In Java, I'm working with code running under WinXP that creates a file like this: public synchronized void store(Properties props, byte[] data) { try { File file = filenameBasedOnProperties(props); if ( file.exists() ) { return; } File temp = File.createTempFile("tempfile", null); FileOutputStream out = new FileOutputStream(temp); out.write(data); out.flush(); out.close(); file.getParentFile().mkdirs(); temp.renameTo(file); } catch (IOException ex) { // Complain and whine and stuff } } Sometimes, when a file is created this way, it's just about totally inaccessible from outside the code (though the code responsible for opening and reading the file has no problem), even when the application isn't running. When accessed via Windows Explorer, I can't move, rename, delete, or even open the file. Under Cygwin, I get the following when I ls -l the directory: ls: cannot access [big-honkin-filename] total 0 ?????????? ? ? ? ? ? [big-honkin-filename] As implied, the filenames are big, but under the 260-character max for XP (though they are slightly over 200 characters). To further add to the sense the my computer just wants me to feel stupid, sometimes the files created by this code are perfectly normal. The only pattern I've spotted is that once one file in the directory "locks", the rest are screwed. Anybody ever run into something like this before, or have any insights into what's going on here?

  • Configuring Displays for Different Mobile Devices

    - by Mike
    Does anyone know a way to have specific CSS style sheets based on the type of Mobile Device? I have been researching it a few days now and haven't found anything except this snippet of code for iPhones. <link media="only screen and (max-device-width: 480px)" href="iPhone.css" type="text/css" rel="stylesheet" /> This works great for iPhones, but on all other mobile devices (android, blackberry, Nokia), it's still displaying the same as my site. I tried: <link media="handheld" href="iPhone.css" type="text/css" rel="stylesheet" /> but that didn't seem to have any effect on the other mobile devices. So I'm not sure how to reach the blackberry's/androids/nokia's without effect the code of my actual site. I'm building my site using the PHP framework CodeIgniter and I looked into this code which is suppose to be able to tell if it is being looked at through a mobile device or browser. if ($this->agent->is_browser()) { $agent = $this->agent->browser().' '.$this->agent->version(); } elseif ($this->agent->is_mobile()) { $agent = $this->agent->mobile(); } else { $agent = 'Unidentified User Agent'; } The only problem is that the newer phones we are building on render the site as a browser and not as a mobile (I think, I've only tested the iphone because it's all I have at the moment). So does anyone have any work arounds for the other phone platforms?

  • curl: downloading from a dynamic URL

    - by adam n
    I'm trying to download an html file with curl in bash. Like this site: http://www.registrar.ucla.edu/schedule/detselect.aspx?termsel=10S&subareasel=PHYSICS&idxcrs=0001B+++ When I download it manually, it works fine. However, when i try and run my script through crontab, the output html file is very small and just says "Object moved to here." with a broken link. Does this have something to do with the sparse environment the crontab commands run it? I found this question: http://stackoverflow.com/questions/1279340/php-ssl-curl-object-moved-error but i'm using bash, not php. What are the equivalent command line options or variables to set to fix this problem in bash? (I want to do this with curl, not wget) Edit: well, sometimes downloading the file manually (via interactive shell) works, but sometimes it doesn't (I still get the "Object moved here" message). So it may not be a a specifically be a problem with cron's environment, but with curl itself. the cron entry: * * * * * ~/.class/test.sh >> ~/.class/test_out 2>&1 test.sh: #! /bin/bash PATH=/usr/local/bin:/usr/bin:/bin:/sbin cd ~/.class course="physics 1b" url="http://www.registrar.ucla.edu/schedule/detselect.aspx?termsel=10S<URL>subareasel=PHYSICS<URL>idxcrs=0001B+++" curl "$url" -sLo "$course".html --max-redirs 5 As I was searching around on google, someone suggested that the problem might happen because there are parameters in the url. (Because it is a dynamic url?)

  • Different Linux message queues have the same id?

    - by Halo
    I open a mesage queue in a .c file, and upon success it says the message queue id is 3. While that program is still running, in another terminal I start another program (of another .c file), that creates a new message queue with a different mqd_t. But its id also appears as 3. Is this a problem? server file goes like this: void server(char* req_mq) { struct mq_attr attr; mqd_t mqdes; struct request* msgptr; int n; char *bufptr; int buflen; pid_t apid; //attr.mq_maxmsg = 300; //attr.mq_msgsize = 1024; mqdes = mq_open(req_mq, O_RDWR | O_CREAT, 0666, NULL); if (mqdes == -1) { perror("can not create msg queue\n"); exit(1); } printf("server mq created, mq id = %d\n", (int) mqdes); and the client goes like: void client(char* req_mq, int min, int max, char* dir_path_name, char* outfile) { pid_t pid; /* get the process id */ if ((pid = getpid()) < 0) { perror("unable to get client pid"); } mqd_t mqd, dq; char pfx[50] = DQ_PRFX; char suffix[50]; // sprintf(suffix, "%d", pid); strcat(pfx, suffix); dq = mq_open(pfx, O_RDWR | O_CREAT, 0666, NULL); if (dq == -1) { perror("can not open data queue\n"); exit(1); } printf("data queue created, mq id = %d\n", (int) dq); mqd = mq_open(req_mq, O_RDWR); if (mqd == -1) { perror("can not open msg queue\n"); exit(1); } mqdes and dq seem to share the same id 3.

  • How to get height for NSAttributedString at a fixed width

    - by bonaldi
    I want to do some drawing of NSAttributedStrings in fixed-width boxes, but am having trouble calculating the right height they'll take up when drawn. So far, I've tried: Calling - (NSSize) size, but the results are useless (for this purpose), as they'll give whatever width the string desires. Calling - (void)drawWithRect:(NSRect)rect options:(NSStringDrawingOptions)options with a rect shaped to the width I want and NSStringDrawingUsesLineFragmentOrigin in the options, exactly as I'm using in my drawing. The results are ... difficult to understand; certainly not what I'm looking for. (As is pointed out in a number of places, including this Cocoa-Dev thread). Creating a temporary NSTextView and doing: [[tmpView textStorage] setAttributedString:aString]; [tmpView setHorizontallyResizable:NO]; [tmpView sizeToFit]; When I query the frame of tmpView, the width is still as desired, and the height is often correct ... until I get to longer strings, when it's often half the size that's required. (There doesn't seem to be a max size being hit: one frame will be 273.0 high (about 300 too short), the other will be 478.0 (only 60-ish too short)). I'd appreciate any pointers, if anyone else has managed this.

  • Why are illegal cookies sent by browsers and received by web servers (RFC 2109, 2965)?

    - by Artyom
    Hello, According to RFC 2109, 2965 cookie's value can be either HTTP token or quoted string, and token can't include non-ASCII characters. Cookie's RFC 2109 and RFC2965 HTTP's RFC 2068 token definition: http://tools.ietf.org/html/rfc2068#page-16 However I had found that Firefox browser (3.0.6) sends cookies with utf-8 string as-is and three web servers I tested (apache2, lighttpd, nginx) pass this string as-is to the application. For example, raw request from browser: $ nc -l -p 8080 GET /hello HTTP/1.1 Host: localhost:8080 User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.0.9) Gecko/2009050519 Firefox/2.0.0.13 (Debian-3.0.6-1) Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: windows-1255,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive Cookie: wikipp=1234; wikipp_username=?????? Cache-Control: max-age=0 And raw response of apache, nginx and lighttpd HTTP_COOKIE CGI variable: wikipp=1234; wikipp_username=?????? What do I miss? Can somebody explain me?
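
    In practice browsers and servers are lenient and pass the Cookie header through as raw bytes, so staying inside the RFC's token/quoted-string grammar is left to the application. A small Python sketch of the usual workaround, percent-encoding the value on write and decoding on read (the username below is a made-up example):

        from urllib.parse import quote, unquote

        username = "артём"                                        # non-ASCII value a browser would otherwise send as-is
        cookie_value = quote(username.encode("utf-8"), safe="")   # "%D0%B0..." is plain ASCII, a legal token
        header = "wikipp_username=" + cookie_value

        # On the way back in, reverse it:
        assert unquote(header.split("=", 1)[1]) == username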

  • Dynamic memory inside a struct

    - by Maximilien
    Hello, I'm editing a piece of code, that is part of a big project, that uses "const's" to initialize a bunch of arrays. Because I want to parametrize these const's I have to adapt the code to use "malloc" in order to allocate the memory. Unfortunately there is a problem with structs: I'm not able to allocate dynamic memory in the struct itself. Doing it outside would cause to much modification of the original code. Here's a small example: int globalx,globaly; struct bigStruct{ struct subStruct{ double info1; double info2; bool valid; }; double data; //subStruct bar[globalx][globaly]; subStruct ** bar=(subStruct**)malloc(globalx*sizeof(subStruct*)); for(int i=0;i<globalx;i++) bar[i]=(*subStruct)malloc(globaly*sizeof(subStruct)); }; int main(){ globalx=2; globaly=3; bigStruct foo; for(int i=0;i<globalx;i++) for(int j=0;j<globaly;j++){ foo.bar[i][j].info1=i+j; foo.bar[i][j].info2=i*j; foo.bar[i][j].valid=(i==j); } return 0; } Note: in the program code I'm editing globalx and globaly were const's in a specified namespace. Now I removed the "const" so they can act as parameters that are set exactly once. Summarized: How can I properly allocate memory for the substruct inside the struct? Thank you very much! Max

  • How to cache an HTTP POST response?

    - by KARASZI István
    I would like to create a cacheable HTTP response for a POST request. My actual implementation responses the following for the POST request: HTTP/1.1 201 Created Expires: Sat, 03 Oct 2020 15:33:00 GMT Cache-Control: private,max-age=315360000,no-transform Content-Type: application/x-www-form-urlencoded; charset=UTF-8 Content-Length: 9 ETag: 2120507660800737950 Last-Modified: Wed, 06 Oct 2010 15:33:00 GMT ......... But it looks like that the browsers (Safari, Firefox tested) are not cacheing the response. In the HTTP RFC the corresponding part says: Responses to this method are not cacheable, unless the response includes appropriate Cache-Control or Expires header fields. However, the 303 (See Other) response can be used to direct the user agent to retrieve a cacheable resource. So I think it should be cached. I know I could set a session variable and set a cookie and do a 303 redirect, but I want to cache the response of the POST request. Is there any way to do this? P.S.: I've started with a simple 200 OK, so it does not work. Thanks,
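
    Most likely the browsers are treating the RFC's "unless" as permission rather than obligation: in practice they will not reuse a cached entry to satisfy a later POST, so the cacheable thing has to end up behind a GET. A minimal WSGI sketch of the Post/Redirect/Get fallback the question already mentions (paths and header values are illustrative, not the original app):

        RESULTS = {}

        def app(environ, start_response):
            path, method = environ["PATH_INFO"], environ["REQUEST_METHOD"]
            if method == "POST" and path == "/things":
                body = environ["wsgi.input"].read(int(environ.get("CONTENT_LENGTH") or 0))
                new_id = str(len(RESULTS) + 1)
                RESULTS[new_id] = body
                # Answer the POST with 303; the browser turns around and GETs a cacheable URL.
                start_response("303 See Other", [("Location", "/things/" + new_id)])
                return [b""]
            if method == "GET" and path.startswith("/things/"):
                data = RESULTS.get(path.rsplit("/", 1)[1], b"not found")
                start_response("200 OK", [("Cache-Control", "private, max-age=315360000"),
                                          ("Content-Type", "application/x-www-form-urlencoded")])
                return [data]
            start_response("404 Not Found", [("Content-Type", "text/plain")])
            return [b"not found"]

        # Serve locally with: wsgiref.simple_server.make_server("", 8000, app).serve_forever()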

  • Splitting the build across the network?

    - by Dandikas
    Is there a known solution for splitting build process cross the network machines? Use case: We are an average software development company. We own around 50 development workstations (Quad Core 2.66Ghz, 4 GB ram, 200 GB raid). No need to tell that at any single moment not every machine is loaded to the max. There are 5 to 15 projects running simultaneously at any single moment. Obviously all of them are continuously build on server, than deployed to proper environment. Single project build is taking from 3 to 15 minutes. The problem: Whenever we build 5 projects in a row the last project is going to be ready after around 25 - 50 minutes. Building in parallel does not solve the problem (build is only a part of the game, than you need to deploy, run tests etc.) YES the correct solution is to add another build server, but "That involves buying new Expensive hardware, and we already spent a lot!". Yea, right(damn them)! Anyway. What about splitting build among developers workstation? Lets say whenever we need to build project "A" we check 5 workstations and start build on all that are not overloaded. The build can be canceled by a developer if he really needs all the power of his machine as long as there is at least 1 machine that is still building. After build is finished deployment can be performed to a proper environment (hosted on some server, not on workstation :) ). The bigger the company the more this makes sense to me. Anyone tried something like this? Are there any good practices? Any helpful software? (90% of the projects are .net C#)

  • Rails: Need a helping hand to finish this jQuery/Ajax problem.

    - by DJTripleThreat
    Here's my problem: I have a combo box that when its index changes I want a div tag with the id="services" to repopulate with checkboxes based on that comboboxes value. I want this to be done using ajax. This is my first time working with ajax for rails so I need a helping hand. Here is what I have so far: My application.js file. Something that Ryan uses in one of his railscasts. This is supposed to be a helper method for handling ajax requests. Is this useful? Should I be using this?: //<![CDATA[ $.ajaxSetup({ 'beforeSend': function(xhr) {xhr.setRequestHeader("Accept","text/javascript")} }); // This function doesn't return any results. How might I change that? Or // should I have another function to do that? $.fn.submitWithAjax = function() { this.submit(function() { $.post($(this).attr("action"), $(this).serialize(), null, "script"); return true; }); }; //]]> An external javascript file for this template (/public/javascripts/combo_box.js): //<![CDATA[ $(document).ready(function(){ $('#event_service_time_allotment').change(function () { // maybe I should be using submitWithAjax(); ?? $(this).parent().submit(); }); }); //]]> My ???.js.erb file. I'm not sure where this file should go. Should I make an ajax controller?? Someone help me out with that part please. I can write this code no problem, I just need to know where it should go and what the file name should be called (best practices etc): // new.js.erb: dynamic choices... expecting a time_allotment alert('test'); // TODO: Return a json object or something with a result set of services // I should be expecting something like: // params[:event_service][:time_allotment] i think which I should use // to return a json object (??) to be parsed or rendered html // for the div#services. Here is my controller's new action. Am I supposed to respond to javascript here? Should I make an ajax controller instead? What's the best way to do this?: # /app/controllers/event_services_controller.rb def new @event_service = EventService.new respond_to do |format| format.html # new.html.erb format.xml { render :xml => @event_service } format.js # should I have a javascript handler here? i'm lost! end end My /app/views/event_service/new.html.erb. My ajax call I think should be a different action then the form: <% content_for :head do %> <%= javascript_include_tag '/javascripts/combo_box.js' %> <% end %> <% form_for @event_service, :url => admin_events_path, :html => {:method => :post} do |f| %> <!-- TimeAllotment is a tabless model which is why this is done like so... --> <!-- This select produces an id of: "event_service_time_allotment" and a name of: "event_service[time_allotment]" --> <%= select("event_service", "time_allotment", TimeAllotment.all.collect {|ta| [ta.title, ta.value]}, {:prompt => true}) %> Services: <!-- this div right here needs to be repopulated when the above select changes. --> <div id="services"> <% for service_type in ServiceType.all %> <div> <%= check_box_tag "event_service[service_type_ids][]", service_type.id, false %> <%=h service_type.title %> </div> <% end %> </div> <% end %> ok so right now ALL of the services are there to be chosen from. I want them to change based on what is selected in the combobox event_service_time_allotment. Thanks, I know this is super complicated so any helpful answers will get an upvote.

  • Data mining - parsing a log file in Java

    - by nuvio
    Hello I am carrying on a Java project for the university, where I should analyse poker hands. I found some poker hands in a txt log file. They would typically look like this: PokerStars Zoom Hand #86981279921: Hold'em No Limit ($0.10/$0.25 USD) - 2012/09/30 23:49:51 ET Table 'Whirlpool Zoom 40-100 bb' 9-max Seat #1 is the button Seat 1: lgwong ($30.99 in chips) Seat 2: hastyboots ($28.61 in chips) Seat 3: seula i ($25.31 in chips) Seat 4: fr_kevin01 ($31.81 in chips) Seat 5: limey05 ($27.45 in chips) Seat 6: sanlu ($24.65 in chips) Seat 7: Masterfrank ($25.35 in chips) Seat 8: Refu$e2Lose ($33.23 in chips) Seat 9: 1pepepe0114 ($37.62 in chips) hastyboots: posts small blind $0.10 seula i: posts big blind $0.25 *** HOLE CARDS *** fr_kevin01: folds limey05: folds sanlu: folds Masterfrank: folds Refu$e2Lose: folds 1pepepe0114: folds lgwong: folds hastyboots: folds Uncalled bet ($0.15) returned to seula i seula i collected $0.20 from pot seula i: doesn't show hand *** SUMMARY *** Total pot $0.20 | Rake $0 Seat 1: lgwong (button) folded before Flop (didn't bet) Seat 2: hastyboots (small blind) folded before Flop Seat 3: seula i (big blind) collected ($0.20) Seat 4: fr_kevin01 folded before Flop (didn't bet) Seat 5: limey05 folded before Flop (didn't bet) Seat 6: sanlu folded before Flop (didn't bet) Seat 7: Masterfrank folded before Flop (didn't bet) Seat 8: Refu$e2Lose folded before Flop (didn't bet) Seat 9: 1pepepe0114 folded before Flop (didn't bet) My problem is that I am not sure about how to proceed to parse the log file: the only knowledge I have is "manually" scanning line by line for a particular character or symbol, but I am afraid it would need exhaustive error handling. So I was wandering if there is any other techniques or better way to parse these poker hands? Many thanks for your help
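
    The project is Java, but the shape of the parse is language-neutral: segment the log on the hand-header line, then pull the structured bits out of each hand with a couple of anchored regular expressions. A Python sketch of that approach (the two patterns are fitted only to the sample shown above):

        import re

        HAND_HEADER = re.compile(r"^PokerStars .*Hand #(\d+):")
        SEAT_LINE   = re.compile(r"^Seat (\d+): (.+?) \(\$([\d.]+) in chips\)")

        def split_hands(lines):
            """Group a hand-history log into one list of lines per hand."""
            hands, current = [], []
            for line in lines:
                if HAND_HEADER.match(line) and current:
                    hands.append(current)
                    current = []
                if line.strip():
                    current.append(line.rstrip("\n"))
            if current:
                hands.append(current)
            return hands

        def seats(hand):
            """Seat number -> (player, starting stack) for one hand."""
            return {int(m.group(1)): (m.group(2), float(m.group(3)))
                    for m in map(SEAT_LINE.match, hand) if m}

    The same two-level idea (a header regex to segment, per-line regexes to extract) carries over to java.util.regex and is usually easier to maintain than scanning for characters by hand.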

  • A question about the DOM parser used with Python

    - by fixxxer
    I'm using the following python code to search for a node in an XML file and changing the value of an attribute of one of it's children.Changes are happening correctly when the node is displayed using toxml().But, when it is written to a file, the attributes rearrange themselves(as seen in the Source and the Final XML below). Could anyone explain how and why this happen? Python code: #!/usr/bin/env python import xml from xml.dom.minidom import parse dom=parse("max.xml") #print "Please enter the store name:" for sku in dom.getElementsByTagName("node"): if sku.getAttribute("name") == "store": sku.childNodes[1].childNodes[5].setAttribute("value","Delhi,India") print sku.toxml() xml.dom.ext.PrettyPrint(dom, open("new.xml", "w")) a part of the Source XML: <node name='store' node_id='515' module='mpx.lib.node.simple_value.SimpleValue' config_builder='' inherant='false' description='Configurable Value'> <match> <property name='1' value='point'/> <property name='2' value='0'/> <property name='val' value='Store# 09204 Staten Island, NY'/> <property name='3' value='str'/> </match> </node> Final XML : <node config_builder="" description="Configurable Value" inherant="false" module="mpx.lib.node.simple_value.SimpleValue" name="store" node_id="515"> <match> <property name="1" value="point"/> <property name="2" value="0"/> <property name="val" value="Delhi,India"/> <property name="3" value="str"/> </match> </node>
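
    Nothing is being lost here: minidom rewrites attribute order when it serializes (alphabetically on the Python 2 versions this was asked about; recent Python 3 releases preserve document order), and attribute order is not significant in XML anyway, so the two files are equivalent. A quick way to see that it is purely a serialization effect:

        from xml.dom.minidom import parseString

        doc = parseString('<node name="store" node_id="515" config_builder=""/>')
        node = doc.getElementsByTagName("node")[0]
        node.setAttribute("node_id", "999")

        # The values round-trip correctly; only the attribute ordering is minidom's choice.
        print(doc.documentElement.toxml())
        print(node.getAttribute("node_id"))   # "999"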

  • Traceroute comparison and statistics

    - by ben-casey
    I have a number of traceroutes that i need to compare against each other but i dont know the best way to do it, ive been told that hash maps are a good technique but i dont know how to implement them on my code. so far i have: FileInputStream fstream = new FileInputStream("traceroute.log"); // Get the object of DataInputStream DataInputStream in = new DataInputStream(fstream); BufferedReader br = new BufferedReader(new InputStreamReader(in)); String strLine; // reads lines in while ((strLine = br.readLine()) != null) { System.out.println(strLine); } and the output looks like this: Wed Mar 31 01:00:03 BST 2010 traceroute to www.bbc.co.uk (212.58.251.195), 30 hops max, 40 byte packets 1 139.222.0.1 (139.222.0.1) 0.873 ms 1.074 ms 1.162 ms 2 core-from-cmp.uea.ac.uk (10.0.0.1) 0.312 ms 0.350 ms 0.463 ms 3 ueaha1btm-from-uea1 (172.16.0.34) 0.791 ms 0.772 ms 1.238 ms 4 bound-from-ueahatop.uea.ac.uk (193.62.92.71) 5.094 ms 4.451 ms 4.441 ms 5 gi0-3.norw-rbr1.eastnet.ja.net (193.60.0.21) 4.426 ms 5.014 ms 4.389 ms 6 gi3-0-2.chel-rbr1.eastnet.ja.net (193.63.107.114) 6.055 ms 6.039 ms * 7 lond-sbr1.ja.net (146.97.40.45) 6.994 ms 7.493 ms 7.457 ms 8 so-6-0-0.lond-sbr4.ja.net (146.97.33.154) 8.206 ms 8.187 ms 8.234 ms 9 po1.lond-ban4.ja.net (146.97.35.110) 8.673 ms 6.294 ms 7.668 ms 10 bbc.lond-sbr4.ja.net (193.62.157.178) 6.303 ms 8.118 ms 8.107 ms 11 212.58.238.153 (212.58.238.153) 6.245 ms 8.066 ms 6.541 ms 12 212.58.239.62 (212.58.239.62) 7.023 ms 8.419 ms 7.068 ms what i need to do is compare this trace against another one just like it and look for the changes and time differences etc, then print a stats page.
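
    The hash-map suggestion boils down to: parse each trace into hop number -> (host, ip, round-trip times), then walk the union of hop numbers and diff the two maps. The question's code is Java, but the idea is quicker to show in Python (the regex below is fitted to the output format above):

        import re

        HOP = re.compile(r"^\s*(\d+)\s+(\S+)\s+\(([\d.]+)\)((?:\s+[\d.]+ ms)*)")

        def parse(lines):
            """Map hop number -> (host, ip, [round-trip times in ms])."""
            hops = {}
            for line in lines:
                m = HOP.match(line)
                if m:
                    times = [float(t) for t in re.findall(r"([\d.]+) ms", m.group(4))]
                    hops[int(m.group(1))] = (m.group(2), m.group(3), times)
            return hops

        def compare(a, b):
            avg = lambda ts: sum(ts) / len(ts) if ts else float("nan")
            for hop in sorted(set(a) | set(b)):
                host_a, _, times_a = a.get(hop, ("-", "-", []))
                host_b, _, times_b = b.get(hop, ("-", "-", []))
                print(hop, host_a, host_b, round(avg(times_a) - avg(times_b), 3))

    In Java the equivalent container is a HashMap<Integer, ...> (or a TreeMap to keep hops ordered), filled by the same per-line regex while reading the file.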

  • Does the Lucene search function work on large documents?

    - by shaon-fan
    Hi,there I have a problem when do search with lucene. First, in lucene indexing function, it works well to huge size document. such as .pst file, the outlook mail storage. It can build indexing file include all the information of .pst. The only problem is to large sometimes, include very much words. So when i search using lucene, it only can process the front part of this indexing file, if one word come out the back part of the indexing file, it couldn't find this word and no hits in result. But when i separate this indexing file to several parts in stupid way when debugging, and searching every parts, it can work well. So i want to know how to separate indexing file, how much size should be the limit of searching? cheers and wait 4 reply. ++++++++++++++++++++++++++++++++++++++++++++++++++ hi,there, follow Coady siad, i set the length to max 2^31-1. But the search result still can't include what i want. simply, i convert the doc word to string array[] to analyze, one doc word has 79680 words include the space and any symbol. when i search certain word, it just return 300 count, actually it has more than 300 results. The same reason, when i search a word in back part of the doc, it also couldn't find. //////////////set the length idexwriter.SetMaxFieldLength(2147483647); ////////////////////search IndexSearcher searcher = new ndexSearcher(Program.Parameters["INDEX_LOCATION"].ToString()); Hits hits = searcher.Search(query); This is my code, as others same. I found that problem when i need to count every word hits in a doc. So i also found it couldn't search word in back part of doc. pls help me to find, is there any set searcher length somewhere? how u meet this problem.

  • Accelerated C++, problem 5-6 (copying values from inside a vector to the front)

    - by Darel
    Hello, I'm working through the exercises in Accelerated C++ and I'm stuck on question 5-6. Here's the problem description: (somewhat abbreviated, I've removed extraneous info.) 5-6. Write the extract_fails function so that it copies the records for the passing students to the beginning of students, and then uses the resize function to remove the extra elements from the end of students. (students is a vector of student structures. student structures contain an individual student's name and grades.) More specifically, I'm having trouble getting the vector.insert function to properly copy the passing student structures to the start of the vector students. Here's the extract_fails function as I have it so far (note it doesn't resize the vector yet, as directed by the problem description; that should be trivial once I get past my current issue.) // Extract the students who failed from the "students" vector. void extract_fails(vector<Student_info>& students) { typedef vector<Student_info>::size_type str_sz; typedef vector<Student_info>::iterator iter; iter it = students.begin(); str_sz i = 0, count = 0; while (it != students.end()) { // fgrade tests wether or not the student failed if (!fgrade(*it)) { // if student passed, copy to front of vector students.insert(students.begin(), it, it); // tracks of the number of passing students(so we can properly resize the array) count++; } cout << it->name << endl; // output to verify that each student is iterated to it++; } } The code compiles and runs, but the students vector isn't adding any student structures to its front. My program's output displays that the students vector is unchanged. Here's my complete source code, followed by a sample input file (I redirect input from the console by typing " < grades" after the compiled program name at the command prompt.) #include <iostream> #include <string> #include <algorithm> // to get the declaration of `sort' #include <stdexcept> // to get the declaration of `domain_error' #include <vector> // to get the declaration of `vector' //driver program for grade partitioning examples using std::cin; using std::cout; using std::endl; using std::string; using std::domain_error; using std::sort; using std::vector; using std::max; using std::istream; struct Student_info { std::string name; double midterm, final; std::vector<double> homework; }; bool compare(const Student_info&, const Student_info&); std::istream& read(std::istream&, Student_info&); std::istream& read_hw(std::istream&, std::vector<double>&); double median(std::vector<double>); double grade(double, double, double); double grade(double, double, const std::vector<double>&); double grade(const Student_info&); bool fgrade(const Student_info&); void extract_fails(vector<Student_info>& v); int main() { vector<Student_info> vs; Student_info s; string::size_type maxlen = 0; while (read(cin, s)) { maxlen = max(maxlen, s.name.size()); vs.push_back(s); } sort(vs.begin(), vs.end(), compare); extract_fails(vs); // display the new, modified vector - it should be larger than // the input vector, due to some student structures being // added to the front of the vector. cout << "count: " << vs.size() << endl << endl; vector<Student_info>::iterator it = vs.begin(); while (it != vs.end()) cout << it++->name << endl; return 0; } // Extract the students who failed from the "students" vector. 
void extract_fails(vector<Student_info>& students) { typedef vector<Student_info>::size_type str_sz; typedef vector<Student_info>::iterator iter; iter it = students.begin(); str_sz i = 0, count = 0; while (it != students.end()) { // fgrade tests wether or not the student failed if (!fgrade(*it)) { // if student passed, copy to front of vector students.insert(students.begin(), it, it); // tracks of the number of passing students(so we can properly resize the array) count++; } cout << it->name << endl; // output to verify that each student is iterated to it++; } } bool compare(const Student_info& x, const Student_info& y) { return x.name < y.name; } istream& read(istream& is, Student_info& s) { // read and store the student's name and midterm and final exam grades is >> s.name >> s.midterm >> s.final; read_hw(is, s.homework); // read and store all the student's homework grades return is; } // read homework grades from an input stream into a `vector<double>' istream& read_hw(istream& in, vector<double>& hw) { if (in) { // get rid of previous contents hw.clear(); // read homework grades double x; while (in >> x) hw.push_back(x); // clear the stream so that input will work for the next student in.clear(); } return in; } // compute the median of a `vector<double>' // note that calling this function copies the entire argument `vector' double median(vector<double> vec) { typedef vector<double>::size_type vec_sz; vec_sz size = vec.size(); if (size == 0) throw domain_error("median of an empty vector"); sort(vec.begin(), vec.end()); vec_sz mid = size/2; return size % 2 == 0 ? (vec[mid] + vec[mid-1]) / 2 : vec[mid]; } // compute a student's overall grade from midterm and final exam grades and homework grade double grade(double midterm, double final, double homework) { return 0.2 * midterm + 0.4 * final + 0.4 * homework; } // compute a student's overall grade from midterm and final exam grades // and vector of homework grades. // this function does not copy its argument, because `median' does so for us. double grade(double midterm, double final, const vector<double>& hw) { if (hw.size() == 0) throw domain_error("student has done no homework"); return grade(midterm, final, median(hw)); } double grade(const Student_info& s) { return grade(s.midterm, s.final, s.homework); } // predicate to determine whether a student failed bool fgrade(const Student_info& s) { return grade(s) < 60; } Sample input file: Moo 100 100 100 100 100 100 100 100 Fail1 45 55 65 80 90 70 65 60 Moore 75 85 77 59 0 85 75 89 Norman 57 78 73 66 78 70 88 89 Olson 89 86 70 90 55 73 80 84 Peerson 47 70 82 73 50 87 73 71 Baker 67 72 73 40 0 78 55 70 Davis 77 70 82 65 70 77 83 81 Edwards 77 72 73 80 90 93 75 90 Fail2 55 55 65 50 55 60 65 60 Thanks to anyone who takes the time to look at this!

  • acts_as_xapian jobs table

    - by Grnbeagle
    Hi, Can someone explain to me the inner workings of acts_as_xapian_jobs table? I ran into an issue with the acts_as_xapian plugin recently, where I kept getting the following error when it creates an object with xapian indexed fields: Mysql::Error: Duplicate entry 'String-2147483647' for key 2: INSERT INTO `acts_as_xapian_jobs` (`action`, `model`, `model_id`) VALUES ('update', 'String', 23730251831560) It turns out the model_id exceeded the max int value of 2147483647. The workaround was to update model_id to use bigint. Why would the model_id be so huge? By looking at content of acts_as_xapian_jobs, it seems it creates a row for every field that is being indexed.. Understanding how a job gets created in the table would help a great deal. Here's a sampling of the table: mysql> select * from acts_as_xapian_jobs limit 5\G *************************** 1. row *************************** id: 19 model: String model_id: 23804037900560 action: update *************************** 2. row *************************** id: 49 model: String model_id: 23804037191200 action: update *************************** 3. row *************************** id: 79 model: String model_id: 23804037932180 action: update *************************** 4. row *************************** id: 109 model: String model_id: 23804037101700 action: update *************************** 5. row *************************** id: 139 model: String model_id: 23804037722160 action: update Thanks in advance, Amie

  • Serial: write() throttling?

    - by damian
    Hi everyone, I'm working on a project sending serial data to control animation of LED lights, which need to stay in sync with a sound engine. There seems to be a large serial write buffer (OSX (POSIX) + FTDI chipset usb serial device), so without manually restricting the transmission rate, the animation system can get several seconds ahead of the serial transmission. Currently I'm manually restricting the serial write speed to the baudrate (8N1 = 10 bytes serial frame per 8 bytes data, 19200 bps serial - 1920 bytes per second max), but I am having a problem with the sound drifting out of sync over time - it starts fine, but after 10 minutes there's a noticeable (100ms+) lag between the sound and the lights. This is the code that's restricting the serial write speed (called once per animation frame, 'elapsed' is the duration of the current frame, 'baudrate' is the bps (19200)): void BufferedSerial::update( float elapsed ) { baud_timer += elapsed; if ( bytes_written > 1024 ) { // maintain baudrate float time_should_have_taken = (float(bytes_written)*10)/float(baudrate); float time_actually_took = baud_timer; // sleep if we have > 20ms lag between serial transmit and our write calls if ( time_should_have_taken-time_actually_took > 0.02f ) { float sleep_time = time_should_have_taken - time_actually_took; int sleep_time_us = sleep_time*1000.0f*1000.0f; //printf("BufferedSerial::update sleeping %i ms\n", sleep_time_us/1000 ); delayUs( sleep_time_us ); // subtract 128 bytes bytes_written -= 128; // subtract the time it should have taken to write 128 bytes baud_timer -= (float(128)*10)/float(baudrate); } } } Clearly there's something wrong, somewhere. A much better approach would be to be able to determine the number of bytes currently in the transmit queue, and try and keep that below a fixed threshold. Any advice appreciated.
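
    Subtracting 128 bytes and the matching fraction of a second on every pass lets rounding errors accumulate, which looks exactly like a slow drift. Pacing against a single absolute start time cannot drift, because every sleep is computed from the total bytes sent so far. The original is C++, but the scheme is easiest to show in a short Python sketch:

        import time

        class PacedWriter(object):
            """Throttle writes against an absolute timeline so rounding cannot accumulate."""
            def __init__(self, baud=19200, bits_per_byte=10):
                self.bytes_per_sec = baud / float(bits_per_byte)   # 1920 bytes/s for 8N1 at 19200
                self.t0 = time.monotonic()
                self.total = 0

            def wait_turn(self, nbytes):
                self.total += nbytes
                due = self.t0 + self.total / self.bytes_per_sec    # when this much data *should* be done
                delay = due - time.monotonic()
                if delay > 0:
                    time.sleep(delay)

    The same trick applies on the animation side: if the sound engine exposes its own clock, schedule frames from that clock instead of the wall clock, otherwise the two time bases will still wander apart slowly.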

  • Gzip and subprocess' stdout in Python

    - by pythonic metaphor
    I'm using python 2.6.4 and discovered that I can't use gzip with subprocess the way I might hope. This illustrates the problem: May 17 18:05:36> python Python 2.6.4 (r264:75706, Mar 10 2010, 14:41:19) [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import gzip >>> import subprocess >>> fh = gzip.open("tmp","wb") >>> subprocess.Popen("echo HI", shell=True, stdout=fh).wait() 0 >>> fh.close() >>> [2]+ Stopped python May 17 18:17:49> file tmp tmp: data May 17 18:17:53> less tmp "tmp" may be a binary file. See it anyway? May 17 18:17:58> zcat tmp zcat: tmp: not in gzip format Here's what it looks like inside less HI ^_<8B>^H^Hh<C0><F1>K^B<FF>tmp^@^C^@^@^@^@^@^@^@^@^@ which looks like it put in the stdout as text and then put in an empty gzip file. Indeed, if I remove the "Hi\n", then I get this: May 17 18:22:34> file tmp tmp: gzip compressed data, was "tmp", last modified: Mon May 17 18:17:12 2010, max compression What is going on here?
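
    The child process inherits the raw file descriptor that sits underneath the GzipFile, so its output bypasses the compression layer entirely; that is why the file starts with a literal "HI" followed by an empty gzip member. A common fix, sketched below, is to pipe the child's stdout back into Python and write it through the gzip object:

        import gzip
        import shutil
        import subprocess

        proc = subprocess.Popen("echo HI", shell=True, stdout=subprocess.PIPE)
        with gzip.open("tmp", "wb") as fh:
            shutil.copyfileobj(proc.stdout, fh)   # bytes are compressed as they are written
        proc.wait()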

  • Sending JPEGs by TCP socket... sometimes incomplete.

    - by Guy
    Vb.net Hi I've been working on a project for months now (vb 2008 express). There is one final problem which I can't solve. I need to send images to a client from a 'server'(listener). The code below works most of the time but sometimes the image is incomplete. I believe this might be something to do with the tcp packet sizes varying, maybe limited by how busy it is out there on the net. I have seen examples of code that splits the image into chunks and sends them out, but I can't get them to work maybe because I'm using a different vb version. The pictures to be sent are small 20k max. Any working code examples would be wonderful. I have been experimenting and failing with this final hurdle for weeks. Thanks in anticipation. Client----- Sub GetPic() '------- Connect to Server ClientSocket = New Socket(AddressFamily.InterNetwork, SocketType.Stream, _ ProtocolType.Tcp) ClientSocket.Connect(Epoint) '------- Send Picture Request Dim Bytes() As Byte = System.Text.ASCIIEncoding.ASCII.GetBytes("Send Picture") ClientSocket.Send(Bytes, Bytes.Length, SocketFlags.None) '------- Receive Response Dim RecvBuffer(20000) As Byte Dim Numbytes As Integer Numbytes = ClientSocket.Receive(RecvBuffer) Dim Darray(Numbytes) As Byte Buffer.BlockCopy(RecvBuffer, 0, Darray, 0, Numbytes) '------- Close Connection ClientSocket.Shutdown(SocketShutdown.Both) ClientSocket.Close() '------- Dim MStrm = New MemoryStream(Darray) Picture = Image.FromStream(MStrm) End Sub Listener----- 'Threaded from a listener Sub ClientThread(ByVal Client As TcpClient) Dim MStrm As New MemoryStream Dim Rbuffer(1024) As Byte Dim Tbyte As Byte() Dim NStrm As NetworkStream = Client.GetStream() Dim I As Integer = NStrm.Read(Rbuffer, 0, Rbuffer.Length) Dim Incoming As String = System.Text.Encoding.ASCII.GetString(Rbuffer, 0, I) If Incoming = "Send Picture" then Picture Save(MStrm, Picture.RawFormat) Tbyte = MStrm.ToArray NStrm.Write(Tbyte, 0, Tbyte.Length) End if Client.Close() End Sub
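
    A single Receive call is allowed to return only part of what was sent, so reading once into a 20000-byte buffer works only while the image happens to arrive in one chunk. The usual cure is framing: send the payload length first, then loop on the receive side until exactly that many bytes are in. The project is VB.NET, but the framing idea fits in a few lines of Python:

        import socket
        import struct

        def send_image(sock, data):
            sock.sendall(struct.pack("!I", len(data)) + data)   # 4-byte big-endian length, then payload

        def recv_exact(sock, n):
            buf = b""
            while len(buf) < n:                  # one recv() may deliver only part of the message
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("peer closed mid-message")
                buf += chunk
            return buf

        def recv_image(sock):
            (length,) = struct.unpack("!I", recv_exact(sock, 4))
            return recv_exact(sock, length)

    The same pattern in .NET is a loop over NetworkStream.Read that keeps reading until the announced length has been consumed.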
