Search Results

Search found 3760 results on 151 pages for 'mutlple entries'.


  • Use multiple WSGI mount points in Apache with an Nginx reverse proxy

    - by Thomas
    I am trying to set up multiple virtual hosts on the same server with Nginx and Apache and have run into a curious configuration issue. I have nginx configured with a generic upstream to Apache. upstream backend { server 1.1.1.1:8080; } I'm trying to set up multiple subdomains in nginx that hit different mount points in Apache. Each would act like the following examples. server { listen 80; server_name foo.yoursite.com; location / { proxy_pass http://backend/bar/; include /etc/nginx/proxy.conf; } ... } server { listen 80; server_name delta.yoursite.com; location / { proxy_pass http://backend/gamma/; include /etc/nginx/proxy.conf; } ... } These mount points are pointed at Django projects; however, each of the URL entries comes back prepended with the Apache mount point path. So, if I call the Django URL entry for foo.yoursite.com/wiki/biz/, Django appears to return foo.yoursite.com/bar/wiki/biz/. Similarly, if I call the URL entry for delta.yoursite.com/wiki/biz/, I get delta.yoursite.com/gamma/wiki/biz/. Is there any way to get rid of the prefix being returned on the URL entries by Django and Apache?

    Read the article

  • JPA merge fails due to duplicate key

    - by wobblycogs
    I have a simple entity, Code, that I need to persist to a MySQL database. public class Code implements Serializable { @Id private String key; private String description; ...getters and setters... } The user supplies a file full of key, description pairs which I read, convert to Code objects and then insert in a single transaction using em.merge(code). The file will generally have duplicate entries which I deal with by first adding them to a map keyed on the key field as I read them in. A problem arises though when keys differ only by case (for example: XYZ and XyZ). My map will, of course, contain both entries but during the merge process MySQL sees the two keys as being the same and the call to merge fails with a MySQLIntegrityConstraintViolationException. I could easily fix this by uppercasing the keys as I read them in but I'd like to understand exactly what is going wrong. The conclusion I have come to is that JPA considers XYZ and XyZ to be different keys but MySQL considers them to be the same. As such, when JPA checks its list of known keys (or does whatever it does to determine whether it needs to perform an insert or update) it fails to find the previous insert and issues another, which then fails. Is this correct? Is there any way around this other than better filtering of the client data? I haven't defined .equals or .hashCode on the Code class, so perhaps this is the problem.
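
    (A hedged sketch, not from the original question: one way to act on the "uppercasing the keys" idea before anything reaches JPA, assuming the MySQL column uses a case-insensitive collation such as the default one. Class and method names below are made up for illustration.)

        import java.util.LinkedHashMap;
        import java.util.Map;

        public class CodeImportSketch {

            // Minimal stand-in for the @Id key / description entity described in the question.
            static class Code {
                final String key;
                final String description;
                Code(String key, String description) {
                    this.key = key;
                    this.description = description;
                }
            }

            // Collapse rows whose keys differ only by case, mirroring how a
            // case-insensitive unique index will compare them on insert/update.
            public static Map<String, Code> dedupe(Iterable<String[]> rows) {
                Map<String, Code> byKey = new LinkedHashMap<>();
                for (String[] row : rows) {
                    String key = row[0].trim().toUpperCase();
                    byKey.put(key, new Code(key, row[1])); // later duplicates overwrite earlier ones
                }
                return byKey;
            }
        }

    If both spellings genuinely need to survive as separate rows, the alternative is a case-sensitive (binary) collation on the key column, so that MySQL's notion of key equality matches JPA's.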

    Read the article

  • Tokenizing a string with an unequal number of spaces between fields

    - by gatechgrad
    I am trying to tokenize entries from a file. However, I am not able to use the line.split("") option because of the unequal number of spaces between fields. I am copying a few lines from my file below: "08-09-2010 21:21:46 00:22:7f:a6:9b:69 -79" "08-09-2010 21:21:46 04:4f:aa:b4:49:49 -79" "08-09-2010 21:21:46 04:4f:aa:31:4e:59 tikona 18002090044 -83" "08-09-2010 21:21:46 00:22:7f:26:9b:69 tikona 18002090044 -74" "08-09-2010 21:21:46 04:4f:aa:34:0d:c9 tikona 18002090044 -82" "08-09-2010 21:21:46 04:4f:aa:71:4e:59 -85" "08-09-2010 21:21:46 04:4f:aa:34:21:89 tikona 18002090044 -75" "08-09-2010 21:21:46 04:4f:aa:34:49:49 tikona 18002090044 -77" "08-09-2010 21:21:46 04:4f:aa:74:0d:c9 -85" "08-09-2010 21:22:47 18 APs were seen " I need to access the first column (which is a datetime object), the second column (00:22...), and the last column (-79 etc.). I have no trouble accessing the first and second columns, but not the last column. When I do info=line.split(""), since the third column might or might not have entries, I am not able to determine the token number. How do I access the 4th column? Is there a way I can use info[i].contains(" -")?
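
    (The question doesn't say which language is in use; purely as an illustration, here is a minimal Java sketch that splits on runs of whitespace and reads the last field positionally, so the varying gap widths and the optional middle columns stop mattering. The class name is made up.)

        import java.util.Arrays;

        public class SignalLineTokenizer {
            public static void main(String[] args) {
                // One sample line from the file above; the optional SSID/number columns may be absent.
                String line = "08-09-2010 21:21:46 04:4f:aa:31:4e:59 tikona 18002090044 -83";

                // Split on any run of spaces/tabs, so unequal spacing no longer matters.
                String[] tokens = line.trim().split("\\s+");

                String date = tokens[0];                 // first column
                String time = tokens[1];
                String mac  = tokens[2];                 // second column of interest
                String rssi = tokens[tokens.length - 1]; // last column, however many fields precede it

                System.out.println(Arrays.asList(date, time, mac, rssi));
            }
        }

    A summary line such as "18 APs were seen" would need a guard first (for example, checking tokens.length or that the last token starts with "-") before the positional reads.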

    Read the article

  • MySQL Unique hash insertion

    - by Jesse
    So, imagine a mysql table with a few simple columns, an auto increment, and a hash (varchar, UNIQUE). Is it possible to give mysql a query that will add a row and generate a unique hash without multiple queries? Currently, the only way I can think of to achieve this is with a while loop, which I worry would become more and more processor intensive the more entries there were in the db. Here's some pseudo-php, obviously untested, but it gets the general idea across: while(!query("INSERT INTO table (hash) VALUES (".generate_hash().");")){ //found conflict, try again. } In the above example, the hash column would be UNIQUE, and so the query would fail. The problem is, say there are 500,000 entries in the db and I'm working off of a base36 hash generator, with 4 characters. The likelihood of a conflict would be almost 1 in 3, and I definitely can't be running 160,000 queries. In fact, any more than 5 I would consider unacceptable. So, can I do this with pure SQL? I would need to generate a base62, 6 char string (like: "j8Du7X", chars a-z, A-Z, and 0-9), and either update the last_insert_id with it, or even better, generate it during the insert. I can handle basic CRUD with MySQL, but even JOINs are a little outside of my MySQL comfort zone, so excuse my ignorance if this is cake. Any ideas? I'd prefer to use either pure MySQL or PHP & MySQL, but hell, if another language can get this done cleanly, I'd build a script and AJAX it too. Thanks!
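
    (Not a pure-SQL answer, but a sketch of the key-generation side, written in Java purely for illustration; the class name is made up. With 6 base62 characters there are 62^6, roughly 56.8 billion, possible keys, so against 500,000 existing rows the per-attempt collision chance is about 0.0009%, and a retry-on-duplicate loop like the pseudo-PHP above would almost never fire more than once.)

        import java.security.SecureRandom;

        public class Base62Key {
            private static final char[] ALPHABET =
                    "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ".toCharArray();
            private static final SecureRandom RANDOM = new SecureRandom();

            // Build a random key of the requested length from the 62-character alphabet.
            public static String next(int length) {
                StringBuilder sb = new StringBuilder(length);
                for (int i = 0; i < length; i++) {
                    sb.append(ALPHABET[RANDOM.nextInt(ALPHABET.length)]);
                }
                return sb.toString();
            }

            public static void main(String[] args) {
                System.out.println(next(6)); // e.g. a "j8Du7X"-style key to pass to the INSERT
            }
        }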

    Read the article

  • Zend_Feed_Reader Not supported Schema

    - by LookUp Webmaster
    Hello, I'm using Zend FW and wanted to make a feed reader. I did the following: $feed = Zend_Feed_Reader::import('feed://blog.lookup.cl/?feed=rss2'); $data = array( 'title' => $feed->getTitle(), 'link' => $feed->getLink(), 'dateModified' => $feed->getDateModified(), 'description' => $feed->getDescription(), 'language' => $feed->getLanguage(), 'entries' => array(), ); foreach ($feed as $entry) { $edata = array( 'title' => $entry->getTitle(), 'description' => $entry->getDescription(), 'dateModified' => $entry->getDateModified(), 'authors' => $entry->getAuthors(), 'link' => $entry->getLink(), 'content' => $entry->getContent() ); $data['entries'][] = $edata; } And it throws the following exception: Scheme "feed" is not supported The blog was made using Wordpress. What's wrong? If the "feed" scheme is not supported, how can I change the type of feed that Wordpress produces? Thanks in advance, Take care,

    Read the article

  • Best full text search for mysql?

    - by ConroyP
    We're currently running MySQL on a LAMP stack and have been looking at implementing a more thorough, full-text search on our site. We've looked at MySQL's own freetext search, but it doesn't seem to cope well with large databases, which makes it far too slow for our needs. Our main requirements are: speed in returning results, and simple updating of the index. In addition to the above, our "nice to have"s are: ideally not something that requires adding a module to MySQL, and something that plays nicely with PHP (the majority of our dev work is done using PHP). There seem to be quite a few healthy open-source projects that add fast, reliable full-text search to MySQL, so I'm basically looking for recommendations/suggestions on what you've found to be the most useful product out there, easiest to set up, etc. So far, the list of ones we've started to play around with is: Sphinx (C++ based, used by craigslist, thepiratebay); Lucene (Java-based Apache project, powers zeoh.com and zoomf.com); Solr (Java-based offshoot of Lucene, used to power searches on Digg, CNet & AOL Channels). Are there any better ones out there that we haven't come across yet? Can you recommend / suggest against any of the options we've gathered so far? Thanks for your help! Update: @Cletus suggested Google's Custom Search Engine. We recently trialled this on a couple of projects, and it's an almost-perfect fit for our needs. The problem is that entries on our site are updated quite regularly, and unfortunately the speed at which entries go in/get updated in Google's index was just too slow and erratic for us to rely on, even with the addition of sitemaps and requested crawl rate changes.

    Read the article

  • ifconfig networking telnet

    - by jhon
    Hi guys, I'm a newbie at networking and I have a question. What I want is to telnet to a specific IP/server, let us say 192.168.128.1. So I try $telnet 192.168.128.1 Trying... and that's all; I never get connected. One of my friends made a script that "fixes" it; after running it I was able to connect to the server using $telnet 192.168.128.1 $ user: Unfortunately I lost that script, so I'm here requesting your help. Reading my old notes, I remember that the script performed some modification to the entries listed by ifconfig -a. I also have the ifconfig output (copy & paste): $ ifconfig -a adapter0: flags=5e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),PSEG,LARGESEND,CHAIN> inet 192.168.128.150 netmask 0xffffff00 broadcast 192.168.128.255 tcp_sendspace 131072 tcp_recvspace 65536 rfc1323 0 adapter1: flags=5e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),PSEG,LARGESEND,CHAIN> inet 192.168.251.150 netmask 0xffffff00 broadcast 192.168.251.255 tcp_sendspace 131072 tcp_recvspace 65536 rfc1323 0 adapter2: flags=5e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),PSEG,LARGESEND,CHAIN> inet 192.168.250.150 netmask 0xffffff00 broadcast 192.168.250.255 tcp_sendspace 131072 tcp_recvspace 65536 rfc1323 0 lo0: flags=e08084b<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT> inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255 inet6 ::1/0 tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1 More than commands, I'm looking for an explanation of why adding/changing those entries enables me to connect to the server. I do not see the server IP (i.e. 192.168.128.1) listed above. Thanks.

    Read the article

  • Efficient Context-Free Grammar parser, preferably Python-friendly

    - by Max Shawabkeh
    I am in need of parsing a small subset of English for one of my projects, described as a context-free grammar with (1-level) feature structures (example), and I need to do it efficiently. Right now I'm using NLTK's parser, which produces the right output but is very slow. For my grammar of ~450 fairly ambiguous non-lexicon rules and half a million lexical entries, parsing simple sentences can take anywhere from 2 to 30 seconds, depending, it seems, on the number of resulting trees. Lexical entries have little to no effect on performance. Another problem is that loading the (25MB) grammar+lexicon at the beginning can take up to a minute. From what I can find in the literature, the running time of the algorithm used to parse such a grammar (Earley or CKY) should be linear in the size of the grammar and cubic in the length of the input token list. My experience with NLTK indicates that ambiguity is what hurts the performance most, not the absolute size of the grammar. So now I'm looking for a CFG parser to replace NLTK. I've been considering PLY, but I can't tell whether it supports feature structures in CFGs, which are required in my case, and the examples I've seen seem to be doing a lot of procedural parsing rather than just specifying a grammar. Can anybody show me an example of PLY both supporting feature structs and using a declarative grammar? I'm also fine with any other parser that can do what I need efficiently. A Python interface is preferable but not absolutely necessary.

    Read the article

  • How do I read a secure rss feed into a SyndicationFeed without providing credentials?

    - by John Kaster
    For whatever reason, IBM uses https (without requiring credentials) for their RSS feeds. I'm trying to consume https://www.ibm.com/developerworks/mydeveloperworks/blogs/roller-ui/rendering/feed/gradybooch/entries/rss?lang=en with a .NET 4 SyndicationFeed. I can open this feed in a browser and it loads just fine. Here's the code: using (XmlReader xml = XmlReader.Create("https://www.ibm.com/developerworks/mydeveloperworks/blogs/roller-ui/rendering/feed/gradybooch/entries/rss?lang=en")) { var items = from item in SyndicationFeed.Load(xml).Items select item; } Here's the exception: System.Net.WebException was unhandled by user code Message=The remote server returned an error: (500) Internal Server Error. Source=System StackTrace: at System.Net.HttpWebRequest.GetResponse() at System.Xml.XmlDownloadManager.GetNonFileStream(Uri uri, ICredentials credentials, IWebProxy proxy, RequestCachePolicy cachePolicy) at System.Xml.XmlDownloadManager.GetStream(Uri uri, ICredentials credentials, IWebProxy proxy, RequestCachePolicy cachePolicy) at System.Xml.XmlUrlResolver.GetEntity(Uri absoluteUri, String role, Type ofObjectToReturn) at System.Xml.XmlReaderSettings.CreateReader(String inputUri, XmlParserContext inputContext) at System.Xml.XmlReader.Create(String inputUri, XmlReaderSettings settings, XmlParserContext inputContext) at System.Xml.XmlReader.Create(String inputUri) at EDN.Util.Test.FeedAggTest.LoadFeedInfoTest() in D:\cdn\trunk\CDN\Dev\Shared\net\EDN.Util\EDN.Util.Test\FeedAggTest.cs:line 126 How do I configure the reader to work with an https feed?

    Read the article

  • How to call a C# DLL from InstallerClass overrides?

    - by simpleton
    I have a setup and deployment project. In it I have an Installer.cs class, created from the basic Installer Class template. In the Install override, I have this code taken from http://msdn.microsoft.com/en-us/library/d9k65z2d.aspx. public override void Install(IDictionary stateSaver) { base.Install(stateSaver); System.Diagnostics.Process.Start("http://www.microsoft.com"); } Everything works when I install the application, but the problem arises when I try to call anything from an external C# class library that I also wrote. Example: public override void Install(IDictionary stateSaver) { base.Install(stateSaver); System.Diagnostics.Process.Start("http://www.microsoft.com"); MyLibrary.SomeMethod(); } The problem is, I don't believe the library call is actually happening. I tagged the entire process line by line with EventLog.WriteEvent()s, and each line in the Install override is seemingly 'hit', putting entries in the event log, but I do not receive any exceptions for the library call SomeMethod(), yet the EventLog entries inside SomeMethod() are never created, nor are any of its other actions taken. Any ideas?

    Read the article

  • Bibliography behaves strangely in LyX.

    - by Orjanp
    Hi! I have created a Bibliography section in my document written in lyx. It uses a book layout. For some reason it did start over again when I added some more entries. The new entries was made some time later than the first ones. I just went down to key-27 and hit enter. Then it started on key-1 again. Does anyone know why it behaves like this? The lyx code is below. \begin{thebibliography}{34} \bibitem{key-6}Lego mindstorms, http://mindstorms.lego.com/en-us/default.aspx \bibitem{key-7}C.A.R. Hoare. Communicating sequential processes. Communications of the ACM, 21(8):666-677, pages 666\textendash{}677, August 1978. \bibitem{key-8}C.A.R. Hoare. Communicating sequential processes. Prentice-Hall, 1985. \bibitem{key-9}CSPBuilder, http://code.google.com/p/cspbuilder/ \bibitem{key-10}Rune Møllegård Friborg and Brian Vinter. CSPBuilder - CSP baset Scientific Workflow Modelling, 2008. \bibitem{key-11}Labview, http://www.ni.com/labview \bibitem{key-12}Robolab, http://www.lego.com/eng/education/mindstorms/home.asp?pagename=robolab \bibitem{key-13}http://code.google.com/p/pycsp/ \bibitem{key-14}Paparazzi, http://paparazzi.enac.fr \bibitem{key-15}Debian, http://www.debian.org \bibitem{key-16}Ubuntu, http://www.ubuntu.com \bibitem{key-17}GNU, http://www.gnu.org \bibitem{key-18}IVY, http://www2.tls.cena.fr/products/ivy/ \bibitem{key-19}Tkinter, http://wiki.python.org/moin/TkInter \bibitem{key-20}pyGKT, http://www.pygtk.org/ \bibitem{key-21}pyQT4, http://wiki.python.org/moin/PyQt4 \bibitem{key-22}wxWidgets, http://www.wxwidgets.org/ \bibitem{key-23}wxPython GUI toolkit, http://www.wxPython.org \bibitem{key-24}Python programming language, http://www.python.org \bibitem{key-25}wxGlade, http://wxglade.sourceforge.net/ \bibitem{key-26}http://numpy.scipy.org/ \bibitem{key-27}http://www.w3.org/XML/ \bibitem{key-1}IVY software bus, http://www2.tls.cena.fr/products/ivy/ \bibitem{key-2}sdas \bibitem{key-3}sad \bibitem{key-4}sad \bibitem{key-5}fsa \bibitem{key-6}sad \bibitem{key-7} \end{thebibliography}

    Read the article

  • Adding a guideline to the editor in Visual Studio

    - by xsl
    Introduction I've always been searching for a way to make Visual Studio draw a line after a certain number of characters: Below is a guide to enable these so-called guidelines for various versions of Visual Studio. Visual Studio 2010 Install Paul Harrington's Editor Guidelines extension. Open the registry at HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\10.0\Text Editor and add a new string called Guides with the value RGB(100,100,100), 80. The first part specifies the color, while the other one (80) is the column at which the line will be displayed. Or install the Guidelines UI extension, which will add entries to the editor's context menu for adding/removing the entries without needing to edit the registry directly. The current disadvantage of this method is that you can't specify the column directly. Visual Studio 2008 and Other Versions If you are using Visual Studio 2008, open the registry at HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\Text Editor and add a new string called Guides with the value RGB(100,100,100), 80. The first part specifies the color, while the other one (80) is the column at which the line will be displayed. The vertical line will appear when you restart Visual Studio. This trick also works for various other versions of Visual Studio, as long as you use the correct path: 2003: HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\7.1\Text Editor 2005: HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\8.0\Text Editor 2008: HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\Text Editor 2008 Express: HKEY_CURRENT_USER\Software\Microsoft\VCExpress\9.0\Text Editor This also works in SQL Server 2005 and probably other versions.

    Read the article

  • Filter a date property between a begin and end Dates with JDOQL

    - by Sergio del Amo
    I want to code a function to get a list of Entry objects whose date field is between a beginPeriod and an endPeriod. I post below a code snippet which only works with a HACK: I have to subtract a day from the begin period date. It seems the greater-or-equal condition does not work. Any idea why I have this issue? public static List<Entry> getEntries(Date beginPeriod, Date endPeriod) { /* TODO * The great or equal condition does not seem to work in the filter below * Substract a day and it seems to work */ Calendar calendar = Calendar.getInstance(); calendar.set(beginPeriod.getYear(), beginPeriod.getMonth(), beginPeriod.getDate() - 1); beginPeriod = calendar.getTime(); PersistenceManager pm = JdoUtil.getPm(); Query q = pm.newQuery(Entry.class); q.setFilter("this.date >= beginPeriodParam && this.date <= endPeriodParam"); q.declareParameters("java.util.Date beginPeriodParam, java.util.Date endPeriodParam"); List<Entry> entries = (List<Entry>) q.execute(beginPeriod,endPeriod); return entries; }
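
    (A hedged guess, not a confirmed diagnosis: if beginPeriod carries a time-of-day component, entries stamped earlier on that same day fall outside ">= beginPeriod", which would look like the greater-or-equal condition "not working". Truncating the boundaries to whole days avoids the subtract-a-day hack. Also worth noting: Date.getYear() returns the year minus 1900, so the Calendar in the workaround above is actually being set to roughly year 110, which makes the lower bound match everything rather than fixing the comparison. The class below is illustrative only.)

        import java.util.Calendar;
        import java.util.Date;

        public final class DateBounds {

            // Midnight at the start of the given day.
            public static Date startOfDay(Date d) {
                Calendar c = Calendar.getInstance();
                c.setTime(d);
                c.set(Calendar.HOUR_OF_DAY, 0);
                c.set(Calendar.MINUTE, 0);
                c.set(Calendar.SECOND, 0);
                c.set(Calendar.MILLISECOND, 0);
                return c.getTime();
            }

            // Last millisecond of the given day.
            public static Date endOfDay(Date d) {
                Calendar c = Calendar.getInstance();
                c.setTime(startOfDay(d));
                c.add(Calendar.DAY_OF_MONTH, 1);
                c.add(Calendar.MILLISECOND, -1);
                return c.getTime();
            }
        }

    The JDOQL query itself would stay exactly as in the question, just called with startOfDay(beginPeriod) and endOfDay(endPeriod).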

    Read the article

  • XML Comparison?

    - by CrazyNick
    I got a books.xml file from http://msdn.microsoft.com/en-us/library/ms762271(VS.85).aspx and just saved it with two different names, removed a few lines from the second file, and tried to compare the files using the below powershell code: clear-host $xml = New-Object XML $xml = [xml](Get-Content D:\SharePoint\Powershell\Comparator\Comparator_Config.xml) $xml.config.compare | ForEach-Object { $strFile1 = get-Content $_.source; $strFile2 = get-Content $_.destination; $diff = Compare-Object $strFile1 $strFile2; $result1 = $diff | where {$_.SideIndicator -eq "<=" } | select InputObject; $result2 = $diff | where {$_.SideIndicator -eq "=>" } | select InputObject; Write-host "`nEntries that differ in First web.config file`n"; $result1 | ForEach-Object {write-host $_.InputObject}; Write-host "`nEntries that differ in Second web.config file`n" ;$result2 | ForEach-Object {write-host $_.InputObject}; $_.parameters | ForEach-Object { write-host $_.parameter } } and it is working perfectly and giving me the below results: Entries that differ in First web.config file <price>36.95</price> <description>Microsoft Visual Studio 7 is explored in depth, looking at how Visual Basic, Visual C++, C#, and ASP+ are integrated into a comprehensive development environment.</description> Entries that differ in Second web.config file <price></price> <description></description> However, I want to know the root node of the above-mentioned nodes; is that possible to find? If so, how?

    Read the article

  • Querying a container with LINQ + group by?

    - by Prix
    public class ItemList { public int GuID { get; set; } public int ItemID { get; set; } public string Name { get; set; } public entityType Status { get; set; } public class Waypoint { public int Zone { get; set; } public int SubID { get; set; } public int Heading { get; set; } public float PosX { get; set; } public float PosY { get; set; } public float PosZ { get; set; } } public List<Waypoint> Routes = new List<Waypoint>(); } I have a list of items using the above class, and now I need to group it by ItemID and join the first entry of Routes for each equal ItemID. So for example, let's say in my list I have:
    GUID ItemID ListOfRoutes
    1 23 first entry only
    2 23 first entry only
    3 23 first entry only
    4 23 first entry only
    5 23 first entry only
    6 23 first entry only
    7 23 first entry only
    This means I have to group entries 1 to 7 as 1 Item with all the Routes entries. So I would have one ItemID 23 with 7 Routes on it, where those routes are the first element of each given GUID's Routes list. My question is whether it is possible using LINQ to make a statement to do something like this: var query = from ItemList entry in myList where status.Contains(entry.Status) group entry by entry.ItemID into result select new { items = new { ID = entry.ItemID, Name = entry.Name }, routes = from ItemList m in entry group m.Routes.FirstOrDefault() by n.NpcID into m2 }; So basically I would have a list of unique ID information with an inner list of the first entry of each GUID route that had the same ItemID.

    Read the article

  • Need some advice on Core Data modeling strategy

    - by Andy
    I'm working on an iPhone app and need a little advice on modeling the Core Data schema. My idea is a utility that allows the user to speed-dial their contacts using user-created rules based on the time of day. In other words, I would tell the app that my wife is commuting from 6am to 7am, at work from 7am to 4pm, commuting from 4pm to 5pm, and home from 5pm to 6am, Monday through Friday. Then, when I tap her name in my app, it would select the number to dial based on the current day and time. I have the user interface nearly complete (thanks in no small part to help I've received here), but now I've got some questions regarding the persistent store. The user can select start- and stop-times in 5-minute increments. This means there are 2,016 possible "time slots" in week (7 days * 24 hours * 12 5-minute intervals per hour). I see a few options for setting this up. Option #1: One array of time slots, with 2,016 entries. Each entry would be a dictionary containing a contact identifier and an associated phone number to dial. I think this means I'd need a "Contact" entity to store the contact information, and a "TimeSlot" entity for each of the 2,016 possible time slots. Option #2: Each Contact has its own array of time slots, each with 2,016 entries. Each array entry would simply be a string indicating which phone number to dial. Option #3: Each Contact has a dictionary of time slots. An entry would only be added to the dictionary for time slots with an active rule. If a search for, say, time slot 1,299 (Friday 12:15pm) didn't find a key @"1299" in the dictionary, then a default number would be dialed instead. I'm not sure any of these is the "right" way or the "best" way. I'm not even sure I need to use Core Data to manage it; maybe just saving arrays would be simpler. Any input you can offer would be appreciated.

    Read the article

  • Full Text Index type column is empty

    - by RemotecUk
    I am trying to create an index on a VarBinary(max) field in my SQL Server 2008 database. The steps I am taking are as follows: Table: dbo.Records Right click on table and select "Full Text Index" Then select "Define Index..." I choose the primary key which is the PK of my table (field name Id, type UniqueIndentifier). I then get the screen with the options Available Columns, Language for Word Breaker and Type Column I select my VarBinary(max) field called Chart as the Available Column by ticking the box. I select "English" as the Language for Word Breaker field. Then... I try to select the Type Column but there are no entries in here. I cannot proceed by clicking "Next" until this column is populated. Why are there no entries in this column for selection and what should be in there? Note 1: The VarBinary(max) field is linked to a file group if that makes any difference. Note 2: Also noticed that in the table designer I cannot set the full text option on that same field to "Yes" - its permanently stuck on "No". Thanks.

    Read the article

  • Slowdowns when reading from an urlconnection's inputstream (even with byte[] and buffers)

    - by user342677
    Ok so after spending two days trying to figure out the problem, and reading about dizillion articles, i finally decided to man up and ask to for some advice(my first time here). Now to the issue at hand - I am writing a program which will parse api data from a game, namely battle logs. There will be A LOT of entries in the database(20+ million) and so the parsing speed for each battle log page matters quite a bit. The pages to be parsed look like this: http://api.erepublik.com/v1/feeds/battle_logs/10000/0. (see source code if using chrome, it doesnt display the page right). It has 1000 hit entries, followed by a little battle info(lastpage will have <1000 obviously). On average, a page contains 175000 characters, UTF-8 encoding, xml format(v 1.0). Program will run locally on a good PC, memory is virtually unlimited(so that creating byte[250000] is quite ok). The format never changes, which is quite convenient. Now, I started off as usual: //global vars,class declaration skipped public WebObject(String url_string, int connection_timeout, int read_timeout, boolean redirects_allowed, String user_agent) throws java.net.MalformedURLException, java.io.IOException { // Open a URL connection java.net.URL url = new java.net.URL(url_string); java.net.URLConnection uconn = url.openConnection(); if (!(uconn instanceof java.net.HttpURLConnection)) { throw new java.lang.IllegalArgumentException("URL protocol must be HTTP"); } conn = (java.net.HttpURLConnection) uconn; conn.setConnectTimeout(connection_timeout); conn.setReadTimeout(read_timeout); conn.setInstanceFollowRedirects(redirects_allowed); conn.setRequestProperty("User-agent", user_agent); } public void executeConnection() throws IOException { try { is = conn.getInputStream(); //global var l = conn.getContentLength(); //global var } catch (Exception e) { //handling code skipped } } //getContentStream and getLength methods which just return'is' and 'l' are skipped Here is where the fun part began. I ran some profiling (using System.currentTimeMillis()) to find out what takes long ,and what doesnt. The call to this method takes only 200ms on avg public InputStream getWebPageAsStream(int battle_id, int page) throws Exception { String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page; WebObject wobj = new WebObject(url, 10000, 10000, true, "Mozilla/5.0 " + "(Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 ( .NET CLR 3.5.30729)"); wobj.executeConnection(); l = wobj.getContentLength(); // global variable return wobj.getContentStream(); //returns 'is' stream } 200ms is quite expected from a network operation, and i am fine with it. BUT when i parse the inputStream in any way(read it into string/use java XML parser/read it into another ByteArrayStream) the process takes over 1000ms! 
for example, this code takes 1000ms IF i pass the stream i got('is') above from getContentStream() directly to this method: public static Document convertToXML(InputStream is) throws ParserConfigurationException, IOException, SAXException { DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); DocumentBuilder db = dbf.newDocumentBuilder(); Document doc = db.parse(is); doc.getDocumentElement().normalize(); return doc; } this code too, takes around 920ms IF the initial InputStream 'is' is passed in(dont read into the code itself - it just extracts the data i need by directly counting the characters, which can be done thanks to the rigid api feed format): public static parsedBattlePage convertBattleToXMLWithoutDOM(InputStream is) throws IOException { // Point A BufferedReader br = new BufferedReader(new InputStreamReader(is)); LinkedList ll = new LinkedList(); String str = br.readLine(); while (str != null) { ll.add(str); str = br.readLine(); } if (((String) ll.get(1)).indexOf("error") != -1) { return new parsedBattlePage(null, null, true, -1); } //Point B Iterator it = ll.iterator(); it.next(); it.next(); it.next(); it.next(); String[][] hits_arr = new String[1000][4]; String t_str = (String) it.next(); String tmp = null; int j = 0; for (int i = 0; t_str.indexOf("time") != -1; i++) { hits_arr[i][0] = t_str.substring(12, t_str.length() - 11); tmp = (String) it.next(); hits_arr[i][1] = tmp.substring(14, tmp.length() - 9); tmp = (String) it.next(); hits_arr[i][2] = tmp.substring(15, tmp.length() - 10); tmp = (String) it.next(); hits_arr[i][3] = tmp.substring(18, tmp.length() - 13); it.next(); it.next(); t_str = (String) it.next(); j++; } String[] b_info_arr = new String[9]; int[] space_nums = {13, 10, 13, 11, 11, 12, 5, 10, 13}; for (int i = 0; i < space_nums.length; i++) { tmp = (String) it.next(); b_info_arr[i] = tmp.substring(space_nums[i] + 4, tmp.length() - space_nums[i] - 1); } //Point C return new parsedBattlePage(hits_arr, b_info_arr, false, j); } I have tried replacing the default BufferedReader with BufferedReader br = new BufferedReader(new InputStreamReader(is), 250000); This didnt change much. My second try was to replace the code between A and B with: Iterator it = IOUtils.lineIterator(is, "UTF-8"); Same result, except this time A-B was 0ms, and B-C was 1000ms, so then every call to it.next() must have been consuming some significant time.(IOUtils is from apache-commons-io library). And here is the culprit - the time taken to parse the stream to string, be it by an iterator or BufferedReader in ALL cases was about 1000ms, while the rest of the code took 0ms(e.g. irrelevant). This means that parsing the stream to LinkedList, or iterating over it, for some reason was eating up a lot of my system resources. question was - why? Is it just the way java is made...no...thats just stupid, so I did another experiment. In my main method I added after the getWebPageAsStream(): //Point A ba = new byte[l]; // 'l' comes from wobj.getContentLength above bytesRead = is.read(ba); //'is' is our URLConnection original InputStream offset = bytesRead; while (bytesRead != -1) { bytesRead = is.read(ba, offset - 1, l - offset); offset += bytesRead; } //Point B InputStream is2 = new ByteArrayInputStream(ba); //Now just working with 'is2' - the "copied" stream The InputStream-byte[] conversion took again 1000ms - this is the way many ppl suggested to read an InputStream, and stil it is slow. 
    And guess what - the 2 parser methods above (convertToXML() and convertBattleToXMLWithoutDOM()), when passed 'is2' instead of 'is', took, in all 4 cases, under 50ms to complete. I read a suggestion that the stream waits for the connection to close before unblocking, so I tried using HttpComponentsClient 4.0 (http://hc.apache.org/httpcomponents-client/index.html) instead, but the initial InputStream took just as long to parse. e.g. this code: public InputStream getWebPageAsStream2(int battle_id, int page) throws Exception { String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page; HttpClient httpclient = new DefaultHttpClient(); HttpGet httpget = new HttpGet(url); HttpParams p = new BasicHttpParams(); HttpConnectionParams.setSocketBufferSize(p, 250000); HttpConnectionParams.setStaleCheckingEnabled(p, false); HttpConnectionParams.setConnectionTimeout(p, 5000); httpget.setParams(p); HttpResponse response = httpclient.execute(httpget); HttpEntity entity = response.getEntity(); l = (int) entity.getContentLength(); return entity.getContent(); } took even longer to process (50ms more for just the network), and the stream parsing times remained the same. Obviously it could be instantiated so as to not create the HttpClient and properties every time (faster network time), but the stream issue won't be affected by that. So we come to the central problem - why does the initial URLConnection InputStream (or HttpClient InputStream) take so long to process, while any stream of the same size and content created locally is orders of magnitude faster? I mean, the initial response is already somewhere in RAM, and I can't see any good reason why it is processed so slowly compared to when the same stream is just created from a byte[]. Considering I have to parse millions of entries and thousands of pages like that, a total processing time of almost 1.5s/page seems WAY WAY too long. Any ideas? P.S. Please ask if any more code is required - the only thing I do after parsing is make a PreparedStatement and put the entries into JavaDB in packs of 1000+, and the performance is ok, ~ 200ms/1000 entries; it probably could be optimized with more caching but I didn't look into it much.
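
    (A small diagnostic sketch, added here as a suggestion rather than taken from the post. The working assumption, which may be wrong, is that the ~1000ms is mostly network transfer rather than parsing: the response body is not already in RAM when getInputStream() returns, reads block while the remaining bytes arrive, so whichever consumer touches the stream first also pays the download time. Draining the stream into a byte[] first separates the two costs.)

        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.net.URL;
        import java.net.URLConnection;

        public class StreamTiming {
            public static void main(String[] args) throws IOException {
                URLConnection conn = new URL(
                        "http://api.erepublik.com/v1/feeds/battle_logs/10000/0").openConnection();

                long t0 = System.currentTimeMillis();
                byte[] body = drain(conn.getInputStream());       // network-bound part
                long t1 = System.currentTimeMillis();
                // ...parse new ByteArrayInputStream(body) here... // CPU-bound part
                long t2 = System.currentTimeMillis();

                System.out.println("download: " + (t1 - t0) + " ms, parse: " + (t2 - t1) + " ms");
            }

            // Read the whole stream with a plain fixed-size buffer and return the bytes.
            private static byte[] drain(InputStream in) throws IOException {
                ByteArrayOutputStream out = new ByteArrayOutputStream(256 * 1024);
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
                in.close();
                return out.toByteArray();
            }
        }

    If the download step accounts for nearly all of the ~1000ms, the parser is not the problem, and the remaining options are about reducing transfer time (compression, keep-alive, fetching pages in parallel) rather than changing the parsing code.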

    Read the article

  • Using fopen on an uploaded file with PHP

    - by Patrick
    I'm uploading a file, and then attempting to use fgetcsv to do things to it. This is my script below. Everything was working fine before I added the file upload portions. Now it's not showing any errors; it's just not displaying the uploaded data. I'm getting my "file uploaded" notice, and the file is saving properly, but the counter for the number of entries is showing 0. if(isset($_FILES[csvgo])){ $tmp = $_FILES[csvgo][tmp_name]; $name = "temp.csv"; $tempdir = "csvtemp/"; if(move_uploaded_file($tmp, $tempdir.$name)){ echo "file uploaded<br>"; } $csvfile = $tempdir.$name; $fh = fopen($csvfile, 'r'); $headers = fgetcsv($fh); $data = array(); while (! feof($fh)) { $row = fgetcsv($fh); if (!empty($row)) { $obj = new stdClass; foreach ($row as $i => $value) { $key = $headers[$i]; $obj->$key = $value; } $data[] = $obj; } } fclose($fh); $c=0; foreach($data AS $key){ $c++; $phone = $key->PHONE; $fax = $key->Fax; $email = $key->Email; // do things with the data here } echo "$c entries <br>"; }

    Read the article

  • Design pattern to keep track UITableView rows correspondance to underlying data in constant time.

    - by DenNukem
    When my model changes I want to animate the changes in UITableView by inserting/deleting rows. For that I need to know the ordinal of the given row (so I can construct an NSIndexPath), which I find hard to do in better-than-linear time. For example, consider that I have a list of addressbook entries which are manually sorted by the user, i.e. there is no ordering "key" that represents the sort order. There is also a corresponding UITableView that shows one row per addressbook entry. When UITableView queries the datasource, I query the NSMutableArray populated with my entries and return the required data in constant time for each row. However, if there is a change in the underlying model I get a notification "Joe Smith, id#123 has been removed". Now I have a dilemma. A naive approach would be to scan the array, determine the index at which Joe Smith is, and then ask UITableView to remove that precise row from the view, also removing it from the array. However, the scan will take linear time to finish. Now I could have an NSDictionary which allows me to find Joe Smith in constant time, but that doesn't do me a lot of good because I still need to find his ordinal index within the array in order to instruct UITableView to remove that row, which is again a linear search. I could further decide to store each object's ordinal inside the object itself to make lookup constant, but it will become outdated after the first such update, as all subsequent index values will have changed due to the removal of an object. So what is the correct design pattern to accurately reflect model changes in the UITableView in constant (or at least logarithmic) time?

    Read the article

  • Remove undesired indexed keywords from Sql Server FTS Index

    - by Scott
    Could anyone tell me if SQL Server 2008 has a way to prevent keywords from being indexed that aren't really relevant to the types of searches that will be performed? For example, we have the IFilters for PDF and Word hooked in and our documents are being indexed properly as far as I can tell. These documents, however, have lots of numeric values in them that people won't really be searching for or bring back meaningful results. These are still being indexed and creating lots of entries in the full text catalog. Basically we are trying to optimize our search engine in any way we can and assumed all these unnecessary entries couldn't be helping performance. I want my catalog to consist of alphabetic keywords only. The current iFilters work better than I would be able to write in the time I have but it just has more than I need. This is an example of some of the terms from sys.dm_fts_index_keywords_by_document that I want out: $1,000, $100, $250, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 129, 13.1, 14, 14.12, 145, 15, 16.2, 16.4, 18, 18.1, 18.2, 18.3, 18.4, 18.5 These are some examples from the same management view that I think are desirable for keeping and searching on: above, accordingly, accounts, add, addition, additional, additive Any help would be greatly appreciated!

    Read the article

  • Sql Server Maintenance Plan Tasks & Completion

    - by Ben
    Hi All, I have a maintenance plan that looks like this... Client 1 Import Data (Success) -> Process Data (Success) -> Post Process (Completion) -> Next Client Client 2 Import Data (Success) -> Process Data (Success) -> Post Process (Completion) -> Next Client Client N ... Import Data and Process Data are calling jobs and Post Process is an Execute Sql task. If Import Data or Process Data Fail, it goes to the next client Import Data... Both Import Data and Process Data are jobs that contain SSIS packages that are using the built-in SQL logging provider. My expectation with the configuration as it stands is: Client 1 Import Data Runs: Failure - Client 2 Import Data | Success Process Data Process Data Runs: Failure - Client 2 Import Data | Success Post Process Post Process Runs: Completion - Success or Failure - Next Client Import Data This isn't what I'm seeing in my logs though... I see several Client Import Data SSIS log entries, then several Post Process log entries, then back to Client Import Data! Arg!! What am I doing wrong? I didn't think the "success" piece of Client 1 Import Data would kick off until it... well... succeeded aka finished! The logs seem to indicate otherwise though... I really need these tasks to be consecutive not concurrent. Is this possible? Thanks!

    Read the article

  • Rails preventing duplicates in polymorphic has_many :through associations

    - by seaneshbaugh
    Is there an easy or at least elegant way to prevent duplicate entries in polymorphic has_many through associations? I've got two models, stories and links that can be tagged. I'm making a conscious decision to not use a plugin here. I want to actually understand everything that's going on and not be dependent on someone else's code that I don't fully grasp. To see what my question is getting at, if I run the following in the console (assuming the story and tag objects exist in the database already) s = Story.find_by_id(1) t = Tag.find_by_id(1) s.tags << t s.tags << t My taggings join table will have two entries added to it, each with the same exact data (tag_id = 1, taggable_id = 1, taggable_type = "Story"). That just doesn't seem very proper to me. So in an attempt to prevent this from happening I added the following to my Tagging model: before_validation :validate_uniqueness def validate_uniqueness taggings = Tagging.find(:all, :conditions => { :tag_id => self.tag_id, :taggable_id => self.taggable_id, :taggable_type => self.taggable_type }) if !taggings.empty? return false end return true end And it works almost as intended, but if I attempt to add a duplicate tag to a story or link I get an ActiveRecord::RecordInvalid: Validation failed exception. It seems that when you add an association to a list it calls the save! (rather than save sans !) method which raises exceptions if something goes wrong rather than just returning false. That isn't quite what I want to happen. I suppose I can surround any attempts to add new tags with a try/catch but that goes against the idea that you shouldn't expect your exceptions and this is something I fully expect to happen. Is there a better way of doing this that won't raise exceptions when all I want to do is just silently not save the object to the database because a duplicate exists?

    Read the article

  • Which SCM/VCS cope well with moving text between files?

    - by pfctdayelise
    We are having havoc with our project at work, because our VCS is doing some awful merging when we move information across files. The scenario is thus: You have lots of files that, say, contain information about terms from a dictionary, so you have a file for each letter of the alphabet. Users entering terms blindly follow the dictionary order, so they will put an entry like "kick the bucket" under B if that is where the dictionary happened to list it (or it might have been listed under both B, bucket and K, kick). Later, other users move the terms to their correct files. Lots of work is being done on the dictionary terms all the time. e.g. User A may have taken the B file and elaborated on the "kick the bucket" entry. User B took the B and K files, and moved the "kick the bucket" entry to the K file. Whichever order they end up getting committed in, the VCS will probably lose entries and not "figure out" that an entry has been moved. (These entries are later automatically converted to an SQL database. But they are kept in a "human friendly" form for working on them, with lots of comments, examples etc. So it is not acceptable to say "make your users enter SQL directly".) It is so bad that we have taken to almost manually merging these kinds of files now, because we can't trust our VCS. :( So what is the solution? I would love to hear that there is a VCS that could cope with this. Or a better merge algorithm? Or otherwise, maybe someone can suggest a better workflow or file arrangement to try and avoid this problem?

    Read the article

  • Core data setReturnsDistinctResult not working

    - by Moze
    So i'm building a small application, it uses core data database of ~25mb size with 4 entities. It's for bus timetables. In one table named "Stop" there are have ~1300 entries of bus stops with atributes "name", "id", "longitude", "latitude" and couple relationships. Now there are many stops with same name but different coordinates and id. Search is implemented using NSPredicate. So I want to show all distinct stop names in table view, i'm using setReturnsDistinctResults with NSDictionaryResultType and setPropertiesToFetch. But setReturnsDistinctResult is not working and I'm still getting all entries. Heres code: - (NSFetchRequest *)fetchRequest { if (fetchRequest == nil) { fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"Stop" inManagedObjectContext:managedObjectContext]; [fetchRequest setEntity:entity]; NSArray *sortDescriptors = [NSArray arrayWithObject:[[[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES] autorelease]]; [fetchRequest setSortDescriptors:sortDescriptors]; [fetchRequest setResultType:NSDictionaryResultType]; [fetchRequest setPropertiesToFetch:[NSArray arrayWithObject:[[entity propertiesByName] objectForKey:@"name"]]]; [fetchRequest setReturnsDistinctResults:YES]; DebugLog(@"fetchRequest initialized"); } return fetchRequest; } ///////////////////// - (NSFetchedResultsController *)fetchedResultsController { if (self.predicateString != nil) { self.predicate = [NSPredicate predicateWithFormat:@"name CONTAINS[cd] %@", self.predicateString]; [self.fetchRequest setPredicate:predicate]; } else { self.predicate = nil; [self.fetchRequest setPredicate:predicate]; } fetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:self.fetchRequest managedObjectContext:managedObjectContext sectionNameKeyPath:sectionNameKeyPath cacheName:nil]; return fetchedResultsController; } ////////////// - (UITableViewCell *)tableView:(UITableView *)table cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; } cell.textLabel.text = [[fetchedResultsController objectAtIndexPath:indexPath] valueForKey:@"name"]; return cell; }

    Read the article
