Search Results

Search found 3618 results on 145 pages for 'huge'.

  • Delete a line from a file in java

    - by dalton conley
    OK, so I'm trying to delete lines from a text file with Java. Currently I keep track of a line number and take an index as input; the index is the line I want deleted. Each time I read a new line of data I increment the line count, and when the line count matches the index, I don't write that line to the temporary file. This works, but what if, for example, I'm working with huge files and have to worry about memory constraints? Could I do this with file markers instead, e.g. place a file marker on the line I want to delete, then delete that line? Or is that just too much work?
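
    A minimal streaming sketch of the temp-file approach (class and file names are made up): because it reads and writes one line at a time through buffered streams, memory use stays constant no matter how large the file is, so the index-counting technique above already copes with huge files.

        import java.io.*;
        import java.nio.file.*;

        public class DeleteLine {
            // Copy src to a temp file, skipping the target line (0-based),
            // then replace the original with the temp file.
            public static void deleteLine(File src, int target) throws IOException {
                File tmp = File.createTempFile("dellines", ".tmp", src.getParentFile());
                try (BufferedReader in = new BufferedReader(new FileReader(src));
                     BufferedWriter out = new BufferedWriter(new FileWriter(tmp))) {
                    String line;
                    int n = 0;
                    while ((line = in.readLine()) != null) {
                        if (n++ != target) {
                            out.write(line);
                            out.newLine();
                        }
                    }
                }
                Files.move(tmp.toPath(), src.toPath(), StandardCopyOption.REPLACE_EXISTING);
            }
        }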

    Read the article

  • Is it better to have client (Javascript) processing HTML rather than C# processing HTML?

    - by Raja
    We are in the process of building a huge site. We are debating whether to do the processing of HTML on the server side (ASP.NET) or on the client side. For example, we have HTML files which act as templates for the generation of tabs. Is it better for the server to get hold of the content section (div) of the HTML, load the appropriate values, and send the updated HTML to the browser, or is it better to pass a chunk of data to the client and have JavaScript do the work? Any justification for either approach would be helpful. Thanks.

    Read the article

  • Strange behaviour of code inside TransactionScope?

    - by Krishna
    We are facing a very complex issue in our production application. We have a WCF method which creates a complex entity in the database along with all its relations:

        public void InsertEntity(Entity entity)
        {
            using (TransactionScope scope = new TransactionScope())
            {
                EntityDao.Create(entity);
            }
        }

    The EntityDao.Create(entity) method is very complex and contains huge pieces of logic. During the creation process it creates several child entities and also runs several queries against the database. For the duration of the WCF request, the connection is usually kept in a ThreadStatic variable and reused by the DAOs, although some of the queries in the DAO use a new connection and close it after use. Overall, we have seen that the behaviour of the above process is erratic. Some of the queries in the inner DAO do not even return actual data from the database, yet the same query, run against the actual data store, gives the correct result. What can be the possible reason for this behaviour?
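
    For reference, a sketch of the usual TransactionScope pattern (not the application's actual code): without scope.Complete() the scope rolls back on Dispose, and any connection opened inside the scope enlists in the ambient transaction, while a second connection reading the same rows may block or see pre-transaction data depending on the isolation level.

        public void InsertEntity(Entity entity)
        {
            // Serializable is TransactionScope's default isolation level;
            // ReadCommitted is shown here purely for illustration.
            var options = new TransactionOptions
            {
                IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted
            };
            using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
            {
                EntityDao.Create(entity);
                scope.Complete();   // omit this and Dispose() rolls everything back
            }
        }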

    Read the article

  • PHP Special Characters Test

    - by pws5068
    What's an efficient way of checking whether a username contains any of a number of special characters that I define? Examples: % # ^ . ! @ & ( ) + / " ? ` ~ < { } [ ] | = - ; I need to detect them and return a boolean, not just strip them out. Probably a super easy question, but I need a better way of doing this than a huge list of conditionals or a sloppy loop.
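
    A sketch of the usual regex approach (the function name is made up): a single preg_match with a character class replaces the conditionals. Inside [...] most metacharacters lose their special meaning, but ], [ and the pattern delimiter still need escaping.

        <?php
        // True if $username contains any of the listed special characters.
        function has_special_chars($username) {
            return (bool) preg_match('/[%#^.!@&()+\/"?`~<{}\[\]|=;-]/', $username);
        }

        var_dump(has_special_chars('john_doe'));   // bool(false)
        var_dump(has_special_chars('john#doe'));   // bool(true)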

    Read the article

  • dbpedia auto-suggest labels

    - by Sid
    Wikipedia has an auto-suggest feature on its search field. If you type in "mars", for instance, it lists a few items including Mars, Marseille, and Marsh. I am looking to implement something similar working off the latest DBpedia export (Wikipedia in database form). If I search all labels in the labels_en.nt file that DBpedia offers for ones beginning with "mars", then even after removing entries that redirect to others already listed, I end up with a huge list. In trying to understand how Wikipedia does this, I noticed that I'm actually querying this URL, which returns a JSON string. Now my problem is that I don't know how Wikipedia narrows the list down. Perhaps it does so based on page popularity: the more views/edits a page has, the higher it goes in the suggestion box. If so, does DBpedia export this information?

    Read the article

  • Ruby Net::HTTP - Read only x number of bytes of the body

    - by bvanderw
    It seems like the methods of Ruby's Net::HTTP are all or nothing when it comes to reading the body of a web page. How can I read, say, just the first 100 bytes of the body? I am trying to read from a content server that returns a short error message in the body of the response if the requested file isn't available. I need to read enough of the body to determine whether the file is there. The files are huge, so I don't want to fetch the whole body just to check whether the file is available.
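
    One approach (a sketch; the URL is a placeholder): pass a block to the request so Net::HTTP streams the body in chunks, and break once enough has arrived. A Range header (req['Range'] = 'bytes=0-99') is an alternative, though not every server honors it.

        require 'net/http'
        require 'uri'

        uri = URI.parse('http://example.com/bigfile.bin')
        head = ''
        Net::HTTP.start(uri.host, uri.port) do |http|
          http.request(Net::HTTP::Get.new(uri.request_uri)) do |response|
            response.read_body do |chunk|
              head << chunk
              # Stop streaming; the connection is closed when the
              # start block exits, abandoning the rest of the body.
              break if head.bytesize >= 100
            end
          end
        end
        puts head[0, 100]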

    Read the article

  • Decrease DB requests number from Django templates

    - by Andrew
    I publish discount offers for my city. Offer models are passed to the template (~15 offers per page). Every offer has a lot of items (every item has an FK to its offer), thus I end up making a huge number of DB requests from the template:

        {% for item in offer.1 %}
        {{item.descr}} {{item.start_date}} {{item.price|floatformat}}
        {%if not item.tax_included %}{%trans "Without taxes"%}{%endif%}
        <a href="{{item.offer.wwwlink}}" >{%trans "Buy now!"%}</a>
        </div>
        <div class="clear"></div>
        {% endfor %}

    So there are ~200-400 DB requests per page, which is abnormal, I expect. In Django code it is possible to use select_related to prepopulate needed values; how can I decrease the number of requests made from the template?
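
    A sketch of the usual fix in the view (the model names and the reverse accessor item_set are assumptions): fetch the items together with the offers so the template loop hits memory instead of the database. select_related follows forward foreign keys; for the reverse offer-to-items direction, Django 1.4+ provides prefetch_related, which issues a single extra query for all items of all offers.

        # views.py -- two queries total instead of one per offer
        from django.shortcuts import render
        from myapp.models import Offer

        def offer_list(request):
            offers = Offer.objects.prefetch_related('item_set')[:15]
            return render(request, 'offers.html', {'offers': offers})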

    Read the article

  • Search for short words with SOLR

    - by Carsten Gehling
    I am using SOLR along with NGramTokenizerFactory to help create search tokens for substrings of words. NGramTokenizer is configured with a minimum word length of 3, which means that I can search for e.g. "unb" and match the word "unbelievable". However, I have a problem with short words like "I" and "in". These are not indexed by SOLR (I suspect because of NGramTokenizer), and therefore I cannot search for them. I don't want to reduce the minimum word length to 1 or 2, since this creates a huge search index. But I would like SOLR to also index whole words whose length is already below this minimum. How can I do that? /Carsten
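
    A common workaround (a sketch; the field and type names are made up): index the text twice, once through the n-gram analyzer and once through a plain word-level analyzer that keeps short tokens, then search both fields.

        <!-- schema.xml: copy the same source field into both analyses -->
        <field name="title"       type="string"       indexed="false" stored="true"/>
        <field name="title_ngram" type="text_ngram"   indexed="true"  stored="false"/>
        <field name="title_word"  type="text_general" indexed="true"  stored="false"/>
        <copyField source="title" dest="title_ngram"/>
        <copyField source="title" dest="title_word"/>

    With the dismax query handler, qf="title_ngram title_word" then matches "I" and "in" through title_word, while substrings like "unb" still match through title_ngram.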

    Read the article

  • On-the-fly lossless image compression

    - by geschema
    I have an embedded application where an image scanner sends out a stream of 16-bit pixels that are later assembled to a grayscale image. As I need to both save this data locally and forward it to a network interface, I'd like to compress the data stream to reduce the required storage space and network bandwidth. Is there a simple algorithm that I can use to losslessly compress the pixel data? I first thought of computing the difference between two consecutive pixels and then encoding this difference with a Huffman code. Unfortunately, the pixels are unsigned 16-bit quantities so the difference can be anywhere in the range -65535 .. +65535 which leads to potentially huge codeword lengths. If a few really long codewords occur in a row, I'll run into buffer overflow problems.
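
    One standard way around that (a sketch, not drop-in firmware code): zigzag-map each signed difference to an unsigned value and encode it with a Rice/Golomb code, whose length grows gradually with magnitude instead of requiring a code table over the full -65535..+65535 range. Production codecs additionally cap the unary part with an escape code, so a rare large delta cannot produce an extreme codeword.

        #include <stddef.h>
        #include <stdint.h>

        /* Emit one bit; replace with a real buffered bit-writer. */
        extern void put_bit(int bit);

        /* Zigzag: map 0, -1, 1, -2, 2, ... to 0, 1, 2, 3, 4, ...
           so small magnitudes of either sign get short codes. */
        static uint32_t zigzag(int32_t d) {
            return ((uint32_t)d << 1) ^ (uint32_t)(d >> 31);
        }

        /* Rice code, parameter k: unary quotient, then k remainder bits. */
        static void rice_encode(uint32_t v, unsigned k) {
            uint32_t q = v >> k;
            while (q--) put_bit(1);
            put_bit(0);
            for (int i = (int)k - 1; i >= 0; i--) put_bit((v >> i) & 1);
        }

        /* Encode one scanline of 16-bit pixels as coded deltas. */
        static void encode_scanline(const uint16_t *px, size_t n, unsigned k) {
            uint16_t prev = 0;
            for (size_t i = 0; i < n; i++) {
                rice_encode(zigzag((int32_t)px[i] - (int32_t)prev), k);
                prev = px[i];
            }
        }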

    Read the article

  • Is there a way to automatically make a makefile from a template toolkit template?

    - by Smack my batch up
    My static web pages are built from a huge bunch of templates which are inter-included using Template Toolkit's "import" and "include", so page.html looks like this:

        [% INCLUDE top %]
        [% IMPORT middle %]

    Then top might include even more files. I have very many of these files, and they all have to be processed to create the web pages in various languages (English, French, etc., not computer languages). This is a very complicated process, and when one file is updated I would like to be able to automatically remake only the necessary files, using a makefile or something similar. Are there any tools which can parse Template Toolkit templates and create a dependency list for use in a makefile? Or are there better ways to automate this process?
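
    Failing a ready-made tool, a small scanner can emit the dependency list; a rough Perl sketch (it assumes the INCLUDE/PROCESS/IMPORT arguments are literal file names, not variables or expressions):

        #!/usr/bin/perl
        # Print "template: dep1 dep2 ..." lines, in makefile dependency
        # syntax, for each template given on the command line.
        use strict;
        use warnings;

        for my $file (@ARGV) {
            open my $fh, '<', $file or die "$file: $!";
            my %deps;
            while (<$fh>) {
                $deps{$1} = 1 while /\[%-?\s*(?:INCLUDE|PROCESS|IMPORT)\s+([\w.\/]+)/g;
            }
            close $fh;
            print "$file: ", join(' ', sort keys %deps), "\n" if %deps;
        }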

    Read the article

  • Facebook Connect auto-signon?

    - by mattbasta
    So with all these fancy new APIs that Facebook is making available, I've noticed that on the partner sites (Pandora, docs.com, etc.) there is no login: Facebook automatically signs you in. You don't even need to press a button to connect if you already have a FB session established. Is this a feature of the new API, or is it a Facebook partners-only feature? I haven't seen any information on whether this is possible for cool guys who don't run huge companies. Thanks in advance.

    Read the article

  • Using sed to introduce a newline after each > in a 1+ gigabyte one-line text file

    - by wasatz
    I have a giant text file (about 1.5 gigabytes) with XML data in it. All text in the file is on a single line, and attempting to open it in any text editor (even the ones mentioned in http://stackoverflow.com/questions/159521/text-editor-to-open-big-giant-huge-large-text-files) either fails horribly or is totally unusable because the editor hangs when you try to scroll. I was hoping to introduce newlines into the file with the following sed command:

        sed 's/>/>\n/g' data.xml > data_with_newlines.xml

    Sadly, this caused sed to give me a segmentation fault. From what I understand, sed reads the file line by line, which in this case means it attempts to read the entire 1.5 GB file as one line, which would certainly explain the segfault. Still, the problem remains: how do I introduce a newline after each > in the XML file? Do I have to resort to writing a small program that does this for me by reading the file character by character?
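
    One way that avoids buffering the whole file (a sketch using perl): make ">" the input record separator, so the file is read as many short records instead of one 1.5 GB line, and each record gets its newline appended as it streams through.

        # $/ is perl's input record separator; with $/ = ">" each "line"
        # ends at a ">", so memory use stays small.
        perl -pe 'BEGIN { $/ = ">" } s/>/>\n/' data.xml > data_with_newlines.xml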

    Read the article

  • Visual Studio Web Application edit source while running like in Tomcat\Eclipse\Java

    - by Bryan Migliorisi
    In an ASP.NET Web Site project, I've always been able to make changes to the underlying C# code and simply refresh the page in the browser and my changes would be there instantly. I can do the same thing when working with Java and Eclipse - edit my Java source and refresh the page and my changes are there. I cannot do this in ASP.NET MVC though and it is a real downer - I have to stop the running process and make my changes, and then restart debugging. This is a huge waste of time. Am I doing it wrong? What is the best approach to ASP.NET MVC development?

    Read the article

  • saving information in file in php

    - by Mac Taylor
    Hey guys, I want to write a tracking system, and right now I can save to my MySQL database. But saving information about every IP that visits is a huge amount of work for MySQL, so I figured that if I could save the information to a file instead, the database and its problems would be out of the picture. To begin with, though, I really don't know how to save to a file in a way that lets me read it back without problems and show the details. This is what I used to insert into my database:

        sql_query("insert into tracking (date_time, ip_address, hostname, referer, page, page_title)
                   values ('".sql_quote($dt)."', '".sql_quote($ipaddr)."', '".sql_quote($hostnm)."',
                           '$referer', '".sql_quote($pg)."', '".sql_quote($pagetitle)."')", $dbi);

    I need to show information about all IPs in rows. After saving to a file, what should I do to save and display the data in row order (a table)? PHP/MySQL
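
    A sketch of the file-based equivalent (the file name is made up; the variables match the query above): append one CSV row per visit with fputcsv, and read the rows back with fgetcsv to render the table. flock keeps concurrent requests from interleaving their writes.

        <?php
        // Append one row per visit.
        $fh = fopen('tracking.csv', 'a');
        if ($fh && flock($fh, LOCK_EX)) {
            fputcsv($fh, array($dt, $ipaddr, $hostnm, $referer, $pg, $pagetitle));
            flock($fh, LOCK_UN);
        }
        fclose($fh);

        // Later, read the rows back for display.
        $fh = fopen('tracking.csv', 'r');
        while (($row = fgetcsv($fh)) !== false) {
            // $row[0] = date/time, $row[1] = IP, ... render as a table row
        }
        fclose($fh);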

    Read the article

  • Use php to zip large files

    - by Joseph
    Hi, I have a PHP form with a bunch of checkboxes that all contain links to files. Once a user ticks the checkboxes for the files they want, the script zips up those files and forces a download. I got a simple PHP zip-and-force-download to work, but when one of the files is huge, or when someone, let's say, selects the whole list to zip up and download, my server errors out. I understand that I can raise the server's limits, but are there any other ways? Thanks!
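
    One approach (a sketch; $selectedFiles stands for the checked file paths): build the archive on disk with ZipArchive rather than in memory, then stream it out in small chunks, so neither the zip nor any single file has to fit inside PHP's memory limit. Long-running downloads may also need set_time_limit(0).

        <?php
        $zipPath = tempnam(sys_get_temp_dir(), 'dl');
        $zip = new ZipArchive();
        $zip->open($zipPath, ZipArchive::OVERWRITE);
        foreach ($selectedFiles as $file) {
            $zip->addFile($file, basename($file));   // referenced on disk, not loaded into memory
        }
        $zip->close();

        header('Content-Type: application/zip');
        header('Content-Disposition: attachment; filename="download.zip"');
        header('Content-Length: ' . filesize($zipPath));
        $fh = fopen($zipPath, 'rb');
        while (!feof($fh)) {
            echo fread($fh, 8192);   // 8 KB at a time keeps memory flat
            flush();
        }
        fclose($fh);
        unlink($zipPath);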

    Read the article

  • How do I convert an hdb file? ... believed to be from an ACT! source

    - by Wardy
    Any ideas? I think the original source was a Goldmine database; looking around, it appears that the file was likely built with an application called ACT!, which I gather is a huge product that I don't really want to deploy for a one-off file less than 5 MB in total size. So... does anyone know of a simple tool that I can run this file through to convert it to a standard CSV or something? It does appear (when looking at it in Notepad and Excel) to be in some sort of CSV-type format, but it's as if the data is encrypted somehow.

    Read the article

  • Apple has a bigger market capitalization than Microsoft

    - by Paja
    Should I worry that my Microsoft programming skills (C#, .NET, WinAPI) will be obsolete in a few years because everybody will be using Mac OS X, iPhone OS, or some other Apple-originated OS? According to Reuters, Apple now has a bigger market capitalization than Microsoft. I know there is probably a huge speculative bubble around Apple's shares, but just look at the graph (taken from Business Insider). I also know Windows currently has 90% of the OS market share, so we (Win devs) should be OK for a few years, but who knows: maybe in 10 years it will be just 40%, and 0% of the mobile phone OS market. Personally, I'm not too worried, because I think Microsoft will fight hard for their desktop OS market share, but I'm interested in your opinions.

    Read the article

  • Rake tasks in other files

    - by Arcath
    I'm trying to use rake in a project, and if I put everything into the Rakefile it will be huge and hard to read and find things in, so I tried to put each namespace in its own file under lib/rake. I added this to the top of my Rakefile:

        Dir['#{File.dirname(__FILE__)}/lib/rake/*.rake'].map { |f| require f }

    It loads without complaint, but the tasks aren't there. I only have one .rake file as a test for now, called "servers.rake", and it looks like this:

        namespace :server do
          task :test do
            puts "test"
          end
        end

    So when I run rake server:test I'd expect to see one line saying "test"; instead I get:

        rake aborted!
        Don't know how to build task 'server:test'

    At first I thought my code was wrong, but if I copy the contents of lib/rake/servers.rake into the Rakefile it works fine. How do I get rake tasks to work when they live in another file?
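
    Two details in that one-liner are the likely culprits (a sketch of the usual fix): single-quoted Ruby strings don't interpolate #{...}, so the glob never matches anything, and Rake's own import (or a plain load) handles .rake files, where require expects .rb.

        # Rakefile -- double quotes so the path interpolates,
        # and Rake's import to pull in the .rake task files.
        Dir.glob("#{File.dirname(__FILE__)}/lib/rake/*.rake").each { |f| import f }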

    Read the article

  • Interspire to Magento migration

    - by patrikas
    Hello, I recently started with Magento and decided to migrate an Interspire shopping cart I made a while ago over to it. At first glance Magento seems a very huge beast: lots of options, and perhaps a lack of simplicity resulting in some performance loss. I've got the user guide, from which I'm not getting much benefit, since it just describes very ordinary tasks that I could easily discover myself by poking around the frontend/backend. So my first tasks are category and product export. Interspire seems to export ONLY products, in three available formats: Default, MYOB, and Peachtree accounting. I did some searching on Magento's product importing and found a blog post which says that I should create a few sample products with all the necessary attributes myself and then start the import. But what should I do with categories? Is it possible to import them, or to instruct Magento to automatically create a category when the imported product file mentions an unknown one? Thanks

    Read the article

  • SharePoint's CAML query the "Created By" field with username

    - by yellowblood
    Hey, I have a form for administrators where they insert a user name ("domain\name") and the code gets and sets some information based on it. It's a huge project; some of the lists store the username as a string ("domain\name"), but some lists rely only on the auto-created "Created By" column. I want to know the fastest way to query these lists using the username string. I tried the same query I use for the first kind of list, and it obviously didn't work:

        <Where><Eq><FieldRef Name='UserName'/><Value Type='Text'>domain\username</Value></Eq></Where>

    Thank you.
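
    A sketch of one common approach (the ID value 42 is a placeholder): "Created By" is the built-in Author field, and matching it by user ID via LookupId is more robust than matching display-name text. The ID can be resolved from the login name first, e.g. with SPWeb.EnsureUser("domain\name").ID, and then used in the query:

        <Where>
          <Eq>
            <FieldRef Name='Author' LookupId='TRUE'/>
            <Value Type='Integer'>42</Value>
          </Eq>
        </Where>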

    Read the article

  • How to save the view count of a question in memory?

    - by Freewind
    My website is like Stack Overflow: there are many questions. I want to record how many times a question has been visited, and I have a column called "view_count" in the question table to store it. If a user visits a question many times, view_count should still only increase by 1, so I have to record which user has visited which question, and I think it is too expensive to save this information in the database because there would be a huge number of records. So I would like to keep the information in memory and only persist the number to the database every 10 minutes. I have searched for "cache" in Rails, but I haven't found an example. I would like a simple sample of how to do this; thanks for the help~
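
    A rough sketch with Rails.cache (the key names and Question model are assumptions, and a cache store shared across processes, such as memcached, is required): one key marks "this user saw this question", another accumulates the pending count, and a job run every 10 minutes flushes the counts to the database.

        # Somewhere in the controller:
        def record_view(user_id, question_id)
          seen = "viewed:#{question_id}:#{user_id}"
          return if Rails.cache.exist?(seen)
          Rails.cache.write(seen, true, :expires_in => 1.day)
          # Note: read-then-write is not atomic; stores like memcached
          # also offer Rails.cache.increment for this.
          pending = "pending:#{question_id}"
          Rails.cache.write(pending, Rails.cache.read(pending).to_i + 1)
        end

        # Run every 10 minutes via cron, a background job, etc.:
        def flush_view_counts
          Question.find_each do |q|
            n = Rails.cache.read("pending:#{q.id}").to_i
            next if n.zero?
            q.increment!(:view_count, n)
            Rails.cache.delete("pending:#{q.id}")
          end
        end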

    Read the article

  • LNK1106 with big binary resource

    - by E Dominique
    I have a rather huge .dat-file (896MB) included as a BIN resource in my project. Now I get a LNK1106 link error ("fatal error LNK1106: invalid file or disk full: cannot seek to 0x382A3920".) I use Visual Studio 2005 under Windows XP, and have tried on a 4GB RAM machine with high Virtual Memory settings and lots of disk space. I have tried a number of different optimization flags, but to no avail. Does anyone have a clue? EDIT: I have narrowed it down to a specific size of the compiled resource. If the .res file is 544078588 bytes (about 518.9MB) or larger, the error occurs. If it is smaller it works just fine. Still no solution, though...

    Read the article

  • setting source classpath in eclipse with stupid project structure

    - by lisak
    What do you guys do for building a classpath from source files when you have a huge project, built with ant for instance, where the source folders sit right below the project root? Putting the entire project in as a source folder is nonsense, and separate folders can't be made source folders if they are part of the package hierarchy. The only thing I could think of is to copy the source folders into a separate folder and add that as a source folder, which is weird, but I don't know how else to do it: duplicating sources just because of the Eclipse way of building a classpath, and because of somebody's stupid project structure.
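
    One thing worth trying before copying anything (a sketch; the filter patterns are made up): Eclipse does allow the project root itself to be a source folder, as long as inclusion/exclusion filters carve out the real source trees. In the .classpath file that looks roughly like:

        <classpathentry kind="src" path=""
                        including="src-a/**|src-b/**"
                        excluding="build/**|dist/**"/>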

    Read the article

  • Running Awk command on a cluster

    - by alex
    How do you execute a Unix shell command (an awk script, a pipe, etc.) on a cluster in parallel (step 1) and collect the results back to a central node (step 2)?

    - Hadoop seems a huge overkill with its 600k LOC, and its performance is terrible (it takes minutes just to initialize a job).
    - I don't need shared memory or anything like MPI/OpenMP, as there is nothing to synchronize or share, and no need for a distributed VM or anything as complex.
    - Google's Sawzall seems to work only with Google's proprietary MapReduce API.
    - Some distributed shell packages I found failed to compile, but there must be a simple way to run a data-centric batch job on a cluster, something as close as possible to the native OS, maybe using Unix RPC calls.
    - I liked rsync's simplicity, but it seems to update remote nodes sequentially, and as far as I know you can't use it for executing scripts.

    I'm looking for a simple, distributed way to run awk scripts or similar, as close as possible to the data, with minimal initialization overhead, in a nothing-shared, nothing-synchronized fashion. Thanks, Alex
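
    A minimal sketch with nothing but ssh and xargs (hostnames, paths, and the per-node data layout are all assumptions): fan the same command out to the nodes in parallel, let each node run awk over its local data, and merge whatever comes back on stdout.

        # hosts.txt: one hostname per line; each node holds /data/chunk.txt.
        # -P 8 runs up to 8 ssh sessions at once; -n stops ssh from
        # swallowing the host list on stdin.
        xargs -P 8 -I HOST ssh -n HOST 'awk -f /opt/job.awk /data/chunk.txt' \
            < hosts.txt > results.txt

    GNU parallel's --sshloginfile option covers the same ground with less ceremony.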

    Read the article
