Search Results

Search found 48586 results on 1944 pages for 'page performance'.


  • How do you share pre-calculated data between calls to a Rails web service?

    - by Nigel Thorne
    I have a Rails app that allows users to build up a network structure and then ask questions about how to navigate around it. When nodes and connections are added, they are simply saved to the database. At the point you query the network I calculate the shortest path from any node to any other node. Constructing this in memory takes a while (something I need to fix), but once it is there, you can instantly get the answer to any of these path questions. The question is: how do I share this network between calls to the website, so that each request doesn't regenerate the paths network? Note: I am hosting this on an Apache server using Passenger (mod ruby). Thoughts?

    Read the article

  • Cuboid inside generic polyhedron

    - by DOFHandler
    I am searching for an efficient algorithm to determine whether a cuboid is completely inside, completely outside, or neither inside nor outside (straddling the boundary of) a generic (concave or convex) polyhedron. The polyhedron is defined by a list of 3D points and a list of facets. Each facet is defined by a subset of those points, ordered so that the right-hand-rule normal points outward from the solid. Any suggestion? Thank you
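    One workable approach, not taken from the post, is to classify the cuboid's eight corners against the polyhedron with a ray-casting parity test and then additionally check that no facet cuts through the box. Below is a minimal Python sketch of the corner-classification part only, assuming `vertices` is a list of 3D points and `facets` is a list of vertex-index lists as described; a complete answer still needs the facet/box overlap test.

```python
import numpy as np

def ray_hits_triangle(origin, direction, a, b, c, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection test (counts hits with t > 0 only)."""
    e1, e2 = b - a, c - a
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                        # ray is parallel to the triangle plane
        return False
    inv_det = 1.0 / det
    t_vec = origin - a
    u = np.dot(t_vec, p) * inv_det
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(t_vec, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return False
    return np.dot(e2, q) * inv_det > eps      # distance along the ray must be positive

def point_inside(point, vertices, facets):
    """Ray-casting parity test: an odd number of facet crossings means 'inside'."""
    direction = np.array([0.1234, 0.5678, 0.9012])  # arbitrary direction, unlikely to graze an edge
    point = np.asarray(point, dtype=float)
    crossings = 0
    for facet in facets:                            # facet = ordered list of vertex indices
        pts = [np.asarray(vertices[i], dtype=float) for i in facet]
        for k in range(1, len(pts) - 1):            # fan-triangulate the facet (assumes convex facets)
            if ray_hits_triangle(point, direction, pts[0], pts[k], pts[k + 1]):
                crossings += 1
    return crossings % 2 == 1

def classify_cuboid(corners, vertices, facets):
    """Returns 'inside', 'outside', or 'straddling' from the 8 corners alone.
    A complete test must also reject facets that pierce the box between corners."""
    flags = [point_inside(c, vertices, facets) for c in corners]
    if all(flags):
        return "inside"
    if not any(flags):
        return "outside"
    return "straddling"
```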

    Read the article

  • Python: how to run several scripts (or functions) at the same time under windows 7 multicore processor 64bit

    - by Gianni
    Sorry for this question, because there are several examples on Stack Overflow already; I am writing to clarify some doubts since I am quite new to Python. I wrote a function: def clipmyfile(inFile,poly,outFile): ... # doing something with inFile and poly and return outFile Normally I do this: clipmyfile(inFile="File1.txt",poly="poly1.shp",outFile="res1.txt") clipmyfile(inFile="File2.txt",poly="poly2.shp",outFile="res2.txt") clipmyfile(inFile="File3.txt",poly="poly3.shp",outFile="res3.txt") ...... clipmyfile(inFile="File21.txt",poly="poly21.shp",outFile="res21.txt") I read in this example, Run several python programs at the same time, that I can use (but I am probably wrong) from multiprocessing import Pool p = Pool(21) # like in your example, running 21 separate processes to run the function at the same time and speed up my analysis. To be honest, I didn't understand the next step. Thanks in advance for help and suggestions. Gianni
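    For reference, a minimal sketch of the missing step, assuming clipmyfile lives in an importable module; the module name, worker count, and file-name pattern below are illustrative, not from the post.

```python
from multiprocessing import Pool

from myclipping import clipmyfile  # hypothetical module containing the function above


def run_one(task):
    """Unpack one (inFile, poly, outFile) triple and run the clip."""
    in_file, poly, out_file = task
    return clipmyfile(inFile=in_file, poly=poly, outFile=out_file)


if __name__ == "__main__":   # required on Windows so worker processes can re-import this script
    tasks = [("File%d.txt" % i, "poly%d.shp" % i, "res%d.txt" % i)
             for i in range(1, 22)]
    pool = Pool(processes=4)            # e.g. one worker per core; Pool(21) also works
    results = pool.map(run_one, tasks)  # blocks until all 21 clips have finished
    pool.close()
    pool.join()
```

    On Windows the worker function has to be importable, which is why run_one is a top-level function rather than a lambda defined inside the main block.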

    Read the article

  • How To Find the Control or Page a User Control is Embedded In

    - by 5arx
    I've written a web user control which I want to be able to drop into the markup for either aspx pages or other web user controls. I need my user control to be able to easily and efficiently work out if it's inside another user control or an aspx page. My initial idea is to do it recursively with checks on the Parent property - continue looking up the nesting hierarchy until I find either a web form or a user control - but I'm not sure this is the best way of going about it. Can you suggest an easier way? thanks 5arx

    Read the article

  • Single-Page Web Apps: Client-side datastores & server persistence

    - by fig-gnuton
    How should client-side datastores & persistence be handled in a single-page web application? Global vars vs. DI/IoC: Should datastores be assigned to global variables so any part of the application can access them? Or should they be dependency injected where required? Server persistence: Assuming a datastore's data needn't always be persisted to the server immediately, should the datastore itself handle persistence? If not, then what class should handle persistence and how should the persistence class fit into the client-side architecture overall? Is the datastore considered the model in MVC, or is it something else since it just stores raw data?

    Read the article

  • How to delete duplicate/aggregate rows faster in a file using Java (no DB)

    - by S. Singh
    I have a 2 GB text file with 5 tab-delimited columns. A row counts as a duplicate if 4 out of its 5 columns match another row. Right now I am de-duplicating by first loading each column into a separate List, then iterating through the lists, deleting duplicate rows as they are encountered, and aggregating. The problem: it is taking more than 20 hours to process one file, and I have 25 such files to process. Can anyone share how they would go about such de-duplication? This will be throw-away code, so I was looking for a quick and dirty solution to get the job done as soon as possible. Here is my (rough) pseudo code: Iterate over the rows i=current_row_no. Iterate over row no. i+1 to last_row if(col1 matches //find duplicate && col2 matches && col3 matches && col4 matches) { col5List.set(i,get col5); //aggregate } Duplicate example: A and B are duplicates, given A=(1,1,1,1,1), B=(1,1,1,1,2), C=(2,1,1,1,1), and the output would be A=(1,1,1,1,1+2) C=(2,1,1,1,1) [notice that B has been kicked out]
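    The pairwise inner loop above is quadratic, which is what makes a 2 GB file take 20+ hours; keying each row on the four matching columns turns the job into a single pass. The question targets Java, where a HashMap<String, StringBuilder> does the same thing, but the idea is quickest to show as a short Python sketch; the file paths are placeholders and the '+' aggregation follows the example in the post.

```python
from collections import OrderedDict

def dedupe(in_path, out_path):
    """Single pass: rows sharing the first four columns are merged and their
    fifth columns are aggregated with '+', keeping first-seen order."""
    merged = OrderedDict()
    with open(in_path) as src:
        for line in src:
            cols = line.rstrip("\n").split("\t")
            key = tuple(cols[:4])             # the four columns that define a duplicate
            if key in merged:
                merged[key] += "+" + cols[4]  # aggregate column 5 onto the first row seen
            else:
                merged[key] = cols[4]
    with open(out_path, "w") as dst:
        for key, col5 in merged.items():
            dst.write("\t".join(key + (col5,)) + "\n")
```

    For 2 GB of input the map of four-column keys will usually fit in memory; if it does not, an external sort on the first four columns followed by a merge of adjacent rows gives the same result.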

    Read the article

  • If a table has two xml columns, will inserting records be a lot slower?

    - by Lieven Cardoen
    Is it a bad thing to have two xml columns in one table? + How much slower are these xml columns in terms of updating/inserting/reading data? In profiler this kind of insert normally takes 0 ms, but sometimes it goes up to 160ms: declare @p8 xml set @p8=convert(xml,N'<interactions><interaction correct="false" score="0" id="0" gapid="0" x="61" y="225"><feedback/><element id="0" position="0" elementtype="1"><asset/></element></interaction><interaction correct="false" score="0" id="1" gapid="1" x="64" y="250"><feedback/><element id="0" position="0" elementtype="1"><asset/></element></interaction><interaction correct="false" score="0" id="2" gapid="2" x="131" y="250"><feedback/><element id="0" position="0" elementtype="1"><asset/></element></interaction></interactions>') declare @p14 xml set @p14=convert(xml,N'<contentinteractions/>') exec sp_executesql N'INSERT INTO [dbo].[PackageSessionNodes]([dbo].[PackageSessionNodes].[PackageSessionId], [dbo].[PackageSessionNodes].[TreeNodeId],[dbo].[PackageSessionNodes].[Duration], [dbo].[PackageSessionNodes].[Score],[dbo].[PackageSessionNodes].[ScoreMax], [dbo].[PackageSessionNodes].[Interactions],[dbo].[PackageSessionNodes].[BrainTeaser], [dbo].[PackageSessionNodes].[DateCreated], [dbo].[PackageSessionNodes].[CompletionStatus], [dbo].[PackageSessionNodes].[ReducedScore], [dbo].[PackageSessionNodes].[ReducedScoreMax], [dbo].[PackageSessionNodes].[ContentInteractions]) VALUES (@ins_dboPackageSessionNodesPackageSessionId, @ins_dboPackageSessionNodesTreeNodeId, @ins_dboPackageSessionNodesDuration, @ins_dboPackageSessionNodesScore, @ins_dboPackageSessionNodesScoreMax, @ins_dboPackageSessionNodesInteractions, @ins_dboPackageSessionNodesBrainTeaser, @ins_dboPackageSessionNodesDateCreated, @ins_dboPackageSessionNodesCompletionStatus, @ins_dboPackageSessionNodesReducedScore, @ins_dboPackageSessionNodesReducedScoreMax, @ins_dboPackageSessionNodesContentInteractions) ; SELECT SCOPE_IDENTITY() as new_id This is the table: CREATE TABLE [dbo].[PackageSessionNodes]( [PackageSessionNodeId] [int] IDENTITY(1,1) NOT NULL, [PackageSessionId] [int] NOT NULL, [TreeNodeId] [int] NOT NULL, [Duration] [int] NULL, [Score] [float] NOT NULL, [ScoreMax] [float] NOT NULL, [Interactions] [xml] NOT NULL, [BrainTeaser] [bit] NOT NULL, [DateCreated] [datetime] NULL, [CompletionStatus] [int] NOT NULL, [ReducedScore] [float] NOT NULL, [ReducedScoreMax] [float] NOT NULL, [ContentInteractions] [xml] NOT NULL, CONSTRAINT [PK_PackageSessionNodes] PRIMARY KEY CLUSTERED ( [PackageSessionNodeId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO ALTER TABLE [dbo].[PackageSessionNodes] WITH CHECK ADD CONSTRAINT [FK_PackageSessionNodes_PackageSessions] FOREIGN KEY([PackageSessionId]) REFERENCES [dbo].[PackageSessions] ([PackageSessionId]) ON UPDATE CASCADE ON DELETE CASCADE GO ALTER TABLE [dbo].[PackageSessionNodes] CHECK CONSTRAINT [FK_PackageSessionNodes_PackageSessions] GO ALTER TABLE [dbo].[PackageSessionNodes] WITH CHECK ADD CONSTRAINT [FK_PackageSessionNodes_TreeNodes] FOREIGN KEY([TreeNodeId]) REFERENCES [dbo].[TreeNodes] ([TreeNodeId]) GO ALTER TABLE [dbo].[PackageSessionNodes] CHECK CONSTRAINT [FK_PackageSessionNodes_TreeNodes] GO ALTER TABLE [dbo].[PackageSessionNodes] ADD CONSTRAINT [DF_PackageSessionNodes_Score] DEFAULT ((-1)) FOR [Score] GO ALTER TABLE [dbo].[PackageSessionNodes] ADD CONSTRAINT [DF_PackageSessionNodes_ScoreMax] DEFAULT ((-1)) FOR [ScoreMax] GO ALTER TABLE 
[dbo].[PackageSessionNodes] ADD CONSTRAINT [DF_PackageSessionNodes_DateCreated] DEFAULT (getdate()) FOR [DateCreated] GO ALTER TABLE [dbo].[PackageSessionNodes] ADD CONSTRAINT [DF_PackageSessionNodes_ReducedScore] DEFAULT ((-1)) FOR [ReducedScore] GO ALTER TABLE [dbo].[PackageSessionNodes] ADD CONSTRAINT [DF_PackageSessionNodes_ReducedScoreMax] DEFAULT ((-1)) FOR [ReducedScoreMax] GO

    Read the article

  • Python re module becomes 20 times slower when called on greater than 101 different regex

    - by Wiil
    My problem is about parsing log files and removing the variable parts on each line so that lines can be grouped. For instance: s = re.sub(r'(?i)User [_0-9A-z]+ is ', r"User .. is ", s) s = re.sub(r'(?i)Message rejected because : (.*?) \(.+\)', r'Message rejected because : \1 (...)', s) I have about 120+ matching rules like those above. I found no performance issues while applying 100 different regexes in succession, but a huge slowdown appears when applying 101. Exactly the same behaviour happens when I replace my rule set with for a in range(100): s = re.sub(r'(?i)caught here'+str(a)+':.+', r'( ... )', s) It got 20 times slower with range(101) instead: # range(100) % ./dashlog.py file.bz2 == Took 2.1 seconds. == # range(101) % ./dashlog.py file.bz2 == Took 47.6 seconds. == Why is this happening, and is there any known workaround? (Happens on Python 2.6.6/2.7.2 on Linux/Windows.)
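    For context, the re module keeps an internal cache of compiled patterns (100 entries in Python 2.6/2.7) and flushes the whole cache once it overflows, so the 101st distinct pattern forces every rule to be recompiled on every call. Precompiling the rules once with re.compile bypasses that cache entirely; a minimal sketch along the lines of the rules shown (the rule list here is only illustrative):

```python
import re

# Compile each (pattern, replacement) rule exactly once, up front, instead of
# handing raw strings to re.sub per line; compiled objects skip re's 100-entry cache.
RULES = [
    (re.compile(r'(?i)User [_0-9A-z]+ is '), r'User .. is '),
    (re.compile(r'(?i)Message rejected because : (.*?) \(.+\)'),
     r'Message rejected because : \1 (...)'),
    # ... the remaining ~120 rules ...
]

def normalize(line):
    """Apply every rule to one log line and return the normalized result."""
    for pattern, replacement in RULES:
        line = pattern.sub(replacement, line)
    return line
```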

    Read the article

  • How to 'insert if not exists' in MySQL?

    - by warren
    I started by googling and found this article, which talks about mutex tables. I have a table with ~14 million records. If I want to add more data in the same format, is there a way to ensure the record I want to insert does not already exist without using a pair of queries (i.e., one query to check and one to insert if the result set is empty)? Does a unique constraint on a field guarantee the insert will fail if the value is already there? It seems that with merely a constraint, when I issue the insert via PHP, the script croaks.
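    A unique constraint does make the duplicate insert fail; the usual way to avoid both the pre-check query and the error is MySQL's INSERT IGNORE or INSERT ... ON DUPLICATE KEY UPDATE. A minimal sketch using Python's MySQL Connector, assuming a table with a unique index on email; the table, column names, and credentials are placeholders.

```python
import mysql.connector  # assumes the MySQL Connector/Python package is installed

cnx = mysql.connector.connect(user="root", password="", database="mydb")
cur = cnx.cursor()

# With a UNIQUE index on `email`, a duplicate row is silently skipped instead of
# raising an error, so no separate existence-check query is needed.
cur.execute(
    "INSERT IGNORE INTO people (email, name) VALUES (%s, %s)",
    ("alice@example.com", "Alice"),
)
cnx.commit()
print(cur.rowcount)  # 1 if the row was inserted, 0 if the unique key already existed
cur.close()
cnx.close()
```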

    Read the article

  • Conditional subquery

    - by Tobias Schulte
    I have a table foo and a table bar, where each foo might have a bar (and a bar might belong to multiple foos). Now I need to select all foos with a bar. My sql looks like this SELECT * FROM foo f WHERE [...] AND ($param IS NULL OR (SELECT ((COUNT(*))>0) FROM bar b WHERE f.bar = b.id)) with $param being replaced at runtime. The question is: Will the subquery be executed even if param is null, or will the dbms optimize the subquery out?

    Read the article

  • Fastest Way to generate 1,000,000+ random numbers in python

    - by Sandro
    I am currently writing an app in Python that needs to generate a large amount of random numbers, FAST. Currently I have a scheme that uses numpy to generate all of the numbers in a giant batch (about ~500,000 at a time). While this seems to be faster than Python's built-in implementation, I still need it to go faster. Any ideas? I'm open to writing it in C and embedding it in the program, or doing whatever it takes. Constraints on the random numbers: A set of 7 numbers that can all have different bounds, e.g. [0-X1, 0-X2, 0-X3, 0-X4, 0-X5, 0-X6, 0-X7]; currently I am generating a list of 7 numbers with random values from [0-1) then multiplying by [X1..X7]. A set of 13 numbers that all add up to 1; currently I am just generating 13 numbers then dividing by their sum. Any ideas? Would pre-calculating these numbers and storing them in a file make this faster? Thanks!
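    A vectorised sketch of the two batches described above, keeping the asker's divide-by-sum scheme for the 13-number set; the bounds and batch size are placeholders.

```python
import numpy as np

# Upper bounds X1..X7 for the independently-bounded set (illustrative values).
BOUNDS = np.array([10.0, 5.0, 3.0, 8.0, 2.0, 7.0, 4.0])

def generate_batch(n=500000):
    """Produce n draws of both sets with a couple of vectorised numpy calls each."""
    # n rows of 7 numbers, each column scaled into its own [0, X_i) range.
    bounded = np.random.random((n, 7)) * BOUNDS
    # n rows of 13 non-negative numbers, each row normalised so it sums to 1.
    raw = np.random.random((n, 13))
    normalised = raw / raw.sum(axis=1, keepdims=True)
    return bounded, normalised

bounded, normalised = generate_batch()
```

    Generating whole (n, 7) and (n, 13) arrays per batch avoids building hundreds of thousands of small Python lists, which is usually where the time goes.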

    Read the article

  • CSS not displayed depending on page

    - by Kanjiroushi
    I have a friend who has a really strange issue with my website. When he clicks on http://www.copeo.fr/ the page displays fine, but when he clicks on a link like www.copeo.fr/user/ the CSS is not applied, even after a refresh. The raw HTML does display. I asked him to open the CSS that is hosted on Amazon S3, hcopeoressources.s3.amazonaws.com/style/futurvert/style.css, and it displays fine. The HTML validates on the W3C validator and so does the CSS. I am lost as to the origin of the issue. Could it be his company's cache, or the configuration of IE7 on his machine? If this has happened to someone else who could explain the issue to me, I am all ears. Thanks

    Read the article

  • Is there a better way to count the messages in an Message Queue (MSMQ)?

    - by Damovisa
    I'm currently doing it like this: MessageQueue queue = new MessageQueue(".\Private$\myqueue"); MessageEnumerator messageEnumerator = queue.GetMessageEnumerator2(); int i = 0; while (messageEnumerator.MoveNext()) { i++; } return i; But for obvious reasons, it just feels wrong - I shouldn't have to iterate through every message just to get a count, should I? Is there a better way?

    Read the article

  • Will an algorithm written in OCaml and compiled to C be faster than the same algorithm written in pure C?

    - by Ole Jak
    So I have a cool image processing algorithm. I have written it in OCaml and it performs well. I know I can compile it to C code with a command such as ocamlc -output-obj -o foo.c foo.ml (I am in a situation where I am not allowed to use the OCaml compiler to build my program for my architecture; I can only use a specially modified gcc, so I will compile that program with something like gcc -L/usr/lib/ocaml foo.c -lcamlrun -lm -lncurses and it'll run on my architecture.) I want to know: in the general case, will my OCaml code compiled into C run faster than the algorithm implemented in pure C?

    Read the article

  • Get a list of events owned by a facebook page

    - by Tom Wright
    Does anyone know how I can get a list of events owned (created) by a Facebook page? I seem to be able to use the "graph api" to generate a list of the events an entity is attending. I also looked at FQL, but it seems to require that the 'where' clause uses an indexable field (and, naturally, the id is the only indexable field). For bonus points, we'd be able to do this without any authentication. (Though I'm resigned to the fact that I'm likely going to need at least a permanent access_token.) If anyone knows how to do this I'd be eternally grateful.

    Read the article

  • output all data from db to a php page

    - by Sarit
    Hi, I'm a real beginner with PHP & MySQL. Just for studying, and for a simple example at school, I would like to get this simple query working and output all the rows (or maybe even one) to a very basic PHP page: <?php $user= "root"; $host="localhost"; $password=""; $database = "PetCatalog"; $cxn = mysqli_connect($host,$user,$password,$database) or die ("couldn’t connect to server"); $query = "SELECT * FROM Pet"; $result = mysql_query($cxn,$query) or die ("Couldn’t execute query."); $row = mysqli_fetch_assoc($result); echo "$result"; ?> This is the error message I've been getting: Warning: mysql_query() expects parameter 1 to be string, object given in C:\xampplite\htdocs\myblog\query.php on line 18 Couldn’t execute query. What should I do? Thanks!

    Read the article

  • mysql varchar innodb page size limit 8100 bytes

    - by David19801
    Hi, regarding InnoDB, someone recently told me: "the varchar content beyond 768 bytes is stored in supplemental 16K pages". This is very interesting. If each varchar is latin1, which I believe stores 1 byte per letter, would a single varchar(500) (<768 bytes) require an extra I/O the way a varchar(1000) (>768 bytes) would? (This question is to find out whether all varchars, or just big varchars, are split into a separate page.) Is the 768 limit per varchar, or for all varchars in the row added together? For example, does varchar(300), varchar(300), varchar(300) get optimized, where each individual varchar column is below 768 but together they are above 768 characters? I am confused about whether the 768 limit relates to each individual varchar or to all varchars in the row totaled (as in the question). Any clarification? EDIT: Removed the part about CHARs after finding out about their limit of 255 max.

    Read the article

  • how to display a array value in view page

    - by udaya
    Hi, I am using CodeIgniter. This is my function phpcalview(): function phpcalview() { $year = $this->input->post('yearvv'); $data['year'] = $this->adminmodel->selectyear(); $data['date'] = $this->adminmodel->selectmonth(); print_r($data['date'] ); $this->load->view('phpcal',$data); } When I print the values with print_r($data['date'] ); I get values like Array ( [0] => Array ( [dbdatenum] => 1 ) [1] => Array ( [dbdatenum] => 2 ) [2] => Array ( [dbdatenum] => 3 ) [3] => Array ( [dbdatenum] => 4 )) I want to display the array elements separately, as array[0], array[1], etc., in my view page. How can I do that?

    Read the article

  • General question: Filesystem or database?

    - by poeschlorn
    Hey guys, I want to create a small document management system. There are several users who store their files. Each uploaded file carries information about which user uploaded it, plus the document content itself. A view displays all files of ONE specific user, ordered by date. What would be better: 1) giving the documents a name or metadata (XML) containing the date and user (and iterating through them to get the metadata), or 2) giving the files a random/unique name and storing the metadata in a DB, something like this: date | user | filename? What would you say, and why? The programming language used is Java and the DB is MySQL.

    Read the article

  • Syncing Data to Remote Services, Best Practices for Caching?

    - by viatropos
    I want to be able to publish events to Eventbrite, Eventful, and Google Calendar for my Google Apps. Each service has slightly different properties for events... I will be syncing many other things too, such as users with Google Contacts and MailChimp, Documents with Google Docs and some other services, etc... So I'm wondering, what is the recommended way of retrieving the data for the end user so that it's reasonably maintainable and optimized? Here are the things I'm thinking that I'm having trouble with: My App keeps a central database of all the models (Event, Document, User, Form, etc.), and whenever Admin creates an object (e.g. create through Eventbrite or through our Admin panel), we sync them and store a copy in our local database. When User goes to the site /events, App retrieves the events from the database. Read Events from a target feed, such as the Eventbrite or Eventful feed, and scrap the local database. Basically, I'm wondering, if we're storing all of the data on a remote service, do we really need to have a local database copy of the data? When would we need to have a local database, when wouldn't we?

    Read the article

  • Apache alias and .htaccess: trying to understand the configuration

    - by sushil bharwani
    On our local dev environment we had just one server, and to add far-future Expires and Cache-Control headers to static images we kept a .htaccess file in the root of the application; things worked fine. But on prod we have multiple Apache servers with aliases pointing to a code base on a different server. In this case I am not sure where to keep the .htaccess file. Should I keep it in the code base or on the individual Apache servers? And how can I write the same rules that I have in the .htaccess file into the httpd.conf file?

    Read the article
