Search Results

Search found 1650 results on 66 pages for 'indexes'.


  • How to override PHP configuration when running in CGI mode

    - by Fitrah M
    There are some tutorials out there telling me how to override the PHP configuration when it is running in CGI mode, but I'm still confused because most of them assume that the server is running on Linux, while I need to do it on Windows as well. My hosting is indeed using Linux, but my local development computer is using Windows XP with XAMPP 1.7.3. So I need to do it on my local computer first, then I want to change the configuration on the hosting server. The PHP on my hosting server already runs as CGI, while on my local computer it still runs as an Apache module. At this point, the steps that I understand are:
    1. Change PHP to work in CGI mode. I did this by commenting out these two lines in "httpd-xampp.conf":
        # LoadFile "C:/xampp/php/php5ts.dll"
        # LoadModule php5_module modules/php5apache2_2.dll
    2. Create a "cgi-bin" directory in the DocumentRoot. My DocumentRoot is "D:\www\" (I'm using Apache with a virtual host), so it is now "D:\www\cgi-bin".
    3. Change the default "cgi-bin" directory settings from "C:/xampp/cgi-bin/" to "D:\www\cgi-bin":
        ScriptAlias /cgi-bin/ "D:/www/cgi-bin/"
        <Directory "D:\www\cgi-bin">
            Options MultiViews Indexes SymLinksIfOwnerMatch Includes ExecCGI
            AllowOverride All
            Allow from All
        </Directory>
    At this point, my PHP is running as CGI. I checked this with phpinfo(); it tells me that the Server API is now CGI/FastCGI. Now I want to override the PHP configuration:
    4. I copied the 'php.ini' file to "D:\www\cgi-bin" and modified the upload_max_filesize setting from 128M to 10M.
    5. Create a 'php.cgi' file in "D:\www\cgi-bin" and put this code inside the file:
        #!/bin/sh
        /usr/local/cpanel/cgi-sys/php5 -c /home/user/public_html/cgi-bin/
    That's it. I'm stuck at this point. All of the tutorials tell me to create a 'php.cgi' file and put shell code inside it. How do I do this last step on Windows? I know the next step is to create a handler in the .htaccess file to load that 'php.cgi'. And also, because I will need to change the PHP configuration on my hosting server (Linux) too, is the step above right? Some tutorials say to insert these lines instead:
        #!/bin/sh
        export PHPRC=/site/ini/1
        exec /cgi-bin/php5.cgi
    I'm sorry if my question is not clear. I'm a new member and this is my first question on this site. Thank you.

    Read the article

  • Programmatically specifying Django model attributes

    - by mojbro
    Hi! I would like to add attributes to a Django models programmatically, at run time. For instance, lets say I have a Car model class and want to add one price attribute (database column) per currency, given a list of currencies. What is the best way to do this? I had an approach that I thought would work, but it didn't exactly. This is how I tried doing it, using the car example above: from django.db import models class Car(models.Model): name = models.CharField(max_length=50) currencies = ['EUR', 'USD'] for currency in currencies: Car.add_to_class('price_%s' % currency.lower(), models.IntegerField()) This does seem to work pretty well at first sight: $ ./manage.py syncdb Creating table shop_car $ ./manage.py dbshell shop=# \d shop_car Table "public.shop_car" Column | Type | Modifiers -----------+-----------------------+------------------------------------------------------- id | integer | not null default nextval('shop_car_id_seq'::regclass) name | character varying(50) | not null price_eur | integer | not null price_usd | integer | not null Indexes: "shop_car_pkey" PRIMARY KEY, btree (id) But when I try to create a new Car, it doesn't really work anymore: >>> from shop.models import Car >>> mycar = Car(name='VW Jetta', price_eur=100, price_usd=130) >>> mycar <Car: Car object> >>> mycar.save() Traceback (most recent call last): File "<console>", line 1, in <module> File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/django/db/models/base.py", line 410, in save self.save_base(force_insert=force_insert, force_update=force_update) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/django/db/models/base.py", line 495, in save_base result = manager._insert(values, return_id=update_pk) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/django/db/models/manager.py", line 177, in _insert return insert_query(self.model, values, **kwargs) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/django/db/models/query.py", line 1087, in insert_query return query.execute_sql(return_id) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/django/db/models/sql/subqueries.py", line 320, in execute_sql cursor = super(InsertQuery, self).execute_sql(None) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/django/db/models/sql/query.py", line 2369, in execute_sql cursor.execute(sql, params) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/django/db/backends/util.py", line 19, in execute return self.cursor.execute(sql, params) ProgrammingError: column "price_eur" specified more than once LINE 1: ...NTO "shop_car" ("name", "price_eur", "price_usd", "price_eur... ^
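
    A minimal sketch of the add_to_class approach described above, with a guard so the loop cannot register the same column twice if the module ends up being imported more than once (one common cause of the "column specified more than once" error). The model and currency names are taken from the question; the guard itself is an assumption about where the duplication comes from, not a confirmed fix.

        from django.db import models

        CURRENCIES = ['EUR', 'USD']

        class Car(models.Model):
            name = models.CharField(max_length=50)

        # Add one price column per currency, but only if a field with that
        # name has not already been registered on the model.
        for currency in CURRENCIES:
            field_name = 'price_%s' % currency.lower()
            if not any(f.name == field_name for f in Car._meta.fields):
                Car.add_to_class(field_name, models.IntegerField())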

    Read the article

  • html widget communicating with server

    - by Nikita Rybak
    I'm making an HTML widget for websites. Let's say it will display current stock indexes. In short, an arbitrary website owner takes a code snippet from me and includes it on his webpage http://website.com/index.html. When an arbitrary user opens http://website.com/index.html, my code sends a request to my server (provider.com), which performs the necessary operations and returns information to the user's browser. When the response has arrived, the user will see the relevant stock value on http://website.com/index.html. In index.html the service could be called like this:
        <script type="text/javascript" src="provider.com/service.js"> </script>
        <div id="target_area"></div>
        <script type="text/javascript"> service.show("target_area", options); </script>
    Now, the problem is the same origin policy: I can't just send an ajax request from website.com to provider.com and return HTML to embed in the client's webpage. I see several solutions, which I list below, but none quite satisfies me. I wonder if you could suggest something, especially if you have some relevant experience.
    1) iframe, plain and simple. Disadvantage: must have fixed dimensions, plus stupid scroll bars appearing in some browsers. Can be fixed with javascript, but all this browser-specific tinkering doesn't sound good to me.
    2) JSONP. Problem: can't return a whole chunk of HTML, must return only data. Then, on the browser side, I'll have to use javascript to embed the data into an HTML snippet placed statically in index.html. Doesn't sound nice, because the data format is not very simple and may even change later.
    3) Use a hidden iframe to do the ajax requests. A bit tricky, but sounds like a way to go.
    Well, those are my thoughts on the subject. Are there any better ways? BTW, I tried to check some existing widgets too, but didn't find much useful information. All domain names used in this text are fictional and any resemblance is purely coincidental :)
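
    For illustration, a minimal sketch of what the provider.com side of option 2 (JSONP) could look like. The Flask framework, the route name, and the stock-lookup stub are assumptions for the sketch, not something taken from the question; the point is only that the payload gets wrapped in a caller-supplied callback so it can be loaded cross-domain via a script tag.

        import json
        from flask import Flask, request

        app = Flask(__name__)

        def current_index_value(symbol):
            # Placeholder for whatever lookup provider.com actually performs.
            return {"symbol": symbol, "value": 1234.5}

        @app.route("/service")
        def service():
            callback = request.args.get("callback", "callback")
            payload = json.dumps(current_index_value(request.args.get("symbol", "DJI")))
            # JSONP response: callback({...});
            return app.response_class("%s(%s);" % (callback, payload),
                                      mimetype="application/javascript")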

    Read the article

  • TVirtualStringTree - resetting non-visual nodes and memory consumption

    - by Remy Lebeau - TeamB
    I have an app that loads records from a binary log file and displays them in a virtual TListView. There are potentially millions of records in a file, and the display can be filtered by the user, so I do not load all of the records in memory at one time, and the ListView item indexes are not in a 1-to-1 relation with the file record offsets (list item 1 may be file record 100, for instance). I use the ListView's OnDataHint event to load records for just the items the ListView is actually interested in. As the user scrolls around, the range specified by OnDataHint changes, allowing me to free records that are not in the new range and allocate new records as needed. This works fine, speed is tolerable, and the memory footprint is very low.
    I am currently evaluating TVirtualStringTree as a replacement for the TListView, mainly because I want to add the ability to expand/collapse records that span multiple lines (I can fudge it with the TListView by incrementing/decrementing the item count dynamically, but this is not as straightforward as using a real tree). For the most part, I have been able to port the TListView logic and have everything work as I need. I notice that TVirtualStringTree's virtual paradigm is vastly different, though. It does not have the same kind of OnDataHint functionality that TListView does (I can use the OnScroll event to fake it, which allows my memory buffer logic to continue working), and I can use the OnInitializeNode event to associate nodes with records that are allocated. However, once a tree node is initialized, it seems that it remains initialized for the lifetime of the tree. That is not good for me. As the user scrolls around and I remove records from memory, I need to reset those non-visual nodes without removing them from the tree completely, or losing their expand/collapse states. When the user scrolls them back into view, I can re-allocate the records and re-initialize the nodes. Basically, I want to make TVirtualStringTree act as much like TListView as possible, as far as its virtualization is concerned.
    I have seen that TVirtualStringTree has a ResetNode() method, but I encounter various errors whenever I try to use it, so I must be using it wrong. I also thought of just storing a data pointer inside each node to my record buffers, and then updating those pointers as I allocate and free memory, but the end effect does not work so well, either. Worse, my largest test log file has ~5 million records in it. If I initialize the TVirtualStringTree with that many nodes at one time (when the log display is unfiltered), the tree's internal overhead for its nodes takes up a whopping 260MB of memory (without any records being allocated yet), whereas with the TListView, loading the same log file and all the memory logic behind it, I can get away with using just a few MBs. Any ideas?

    Read the article

  • Adding rows to a data-bound DataGridView [Winforms]

    - by Mishko
    I want to bind a table from a database to a DataGridView, but I want to also add one more row with a sum of the values in the columns with indexes 3,4,7,8,9... How can I do that? Thanks! DataTable table1 = new DataTable(); double brutoUkupno1 = 0; double porezUkupno1 = 0; double doprinosUkupno1 = 0; double netoUkupno1 = 0; double doprinosTeretUkupno1 = 0; double topliObrokUkupno1 = 0; double regresUkupno1 = 0; Connection con = new Connection(); table1 = con.boundTable(month, Convert.ToInt32(year)); //This is method which returns DataTable table1.Rows.Add(null, null, null, null, null, null, null, null, null, null, null, null, null, null); table1.Rows.Add(null, null, null, null, null, null, null, null, null, null, null, null, null, null); dgv2.Visible = true; dgv2.DataSource = table1; for (int i = 0; i < dgv2.RowCount - 2; i++) { topliObrokUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[7].Value); regresUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[8].Value); brutoUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[9].Value); porezUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[10].Value); doprinosUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[11].Value); netoUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[12].Value); doprinosTeretUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[13].Value); //Now I am having problems with this below, putting things above to dgv2 : } dgv2.Rows[dgv2.Rows.Count - 1].Cells[0].Value = "Ukupno"; dgv2.Rows[dgv2.Rows.Count - 1].Cells[3].Value = month.ToString(); dgv2.Rows[dgv2.Rows.Count - 1].Cells[4].Value = year.ToString(); dgv2.Rows[dgv2.Rows.Count - 1].Cells[7].Value = topliObrokUkupno1.ToString(); dgv2.Rows[dgv2.Rows.Count - 1].Cells[8].Value = regresUkupno1.ToString(); dgv2.Rows[dgv2.Rows.Count - 1].Cells[9].Value = brutoUkupno1.ToString(); dgv2.Rows[dgv2.Rows.Count - 1].Cells[10].Value = porezUkupno1.ToString(); dgv2.Rows[dgv2.Rows.Count - 1].Cells[11].Value = doprinosUkupno1.ToString(); dgv2.Rows[dgv2.Rows.Count - 1].Cells[12].Value = netoUkupno1.ToString(); dgv2.Rows[dgv2.Rows.Count - 1].Cells[13].Value = doprinosTeretUkupno1.ToString(); dgv2.Rows[dgv2.RowCount - 2].Height = 3; dgv2.Rows[dgv2.RowCount - 2].DefaultCellStyle.BackColor = Color.Black;

    Read the article

  • Convert JSON flattened for forms back to an object

    - by George Jempty
    I am required (please therefore no nit-picking the requirement, I've already nit-picked it, and this is the req) to convert certain form fields that have "object nesting" embedded in the field names, back to the object(s) themselves. Below are some typical form field names: phones_0_patientPhoneTypeId phones_0_phone phones_1_patientPhoneTypeId phones_1_phone The form fields above were derived from an object such as the one toward the bottom (see "Data"), and that is the format of the object I need to reassemble. It can be assumed that any form field with a name that contains the underscore _ character needs to undergo this conversion. Also that the segment of the form field between underscores, if numeric, signifies a Javascript array, otherwise an object. I found it easy to devise a (somewhat naive) implementation for the "flattening" of the original object for use by the form, but am struggling going in the other direction; below the object/data below I'm pasting my current attempt. One problem (perhaps the only one?) with it is that it does not currently properly account for array indexes, but this might be tricky because the object will subsequently be encoded as JSON, which will not account for sparse arrays. So if "phones_1" exists, but "phones_0" does not, I would nevertheless like to ensure that a slot exists for phones[0] even if that value is null. Implementations that tweak what I have begun, or are entirely different, encouraged. If interested let me know if you'd like to see my code for the "flattening" part that is working. Thanks in advance Data: var obj = { phones: [{ "patientPhoneTypeId": 4, "phone": "8005551212" }, { "patientPhoneTypeId": 2, "phone": "8885551212" }]}; Code to date: var unflattened = {}; for (var prop in values) { if (prop.indexOf('_') > -1) { var lastUnderbarPos = prop.lastIndexOf('_'); var nestedProp = prop.substr(lastUnderbarPos + 1); var nesting = prop.substr(0, lastUnderbarPos).split("_"); var nestedRef, isArray, isObject; for (var i=0, n=nesting.length; i<n; i++) { if (i===0) { nestedRef = unflattened; } if (i < (n-1)) { // not last if (/^\d+$/.test(nesting[i+1])) { isArray = true; isObject = false; } else { isArray = true; isObject = false; } var currProp = nesting[i]; if (!nestedRef[currProp]) { if (isArray) { nestedRef[currProp] = []; } else if (isObject) { nestedRef[currProp] = {}; } } nestedRef = nestedRef[currProp]; } else { nestedRef[nestedProp] = values[prop]; } } }
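
    The unflattening itself is language-agnostic, so here is a small sketch of the same idea in Python rather than the asker's JavaScript: numeric segments become list indexes, non-numeric segments become dict keys, and sparse list slots are padded with None as the question requires. The function name and the padding choice are illustrative assumptions.

        def unflatten(form):
            """Rebuild nested dicts/lists from keys like 'phones_0_phone'."""
            root = {}
            for key, value in form.items():
                if '_' not in key:
                    root[key] = value
                    continue
                parts = key.split('_')
                node = root
                for i, part in enumerate(parts):
                    is_last = i == len(parts) - 1
                    next_is_index = not is_last and parts[i + 1].isdigit()
                    step = int(part) if part.isdigit() else part
                    if is_last:
                        node[step] = value
                        break
                    if isinstance(node, list):
                        while len(node) <= step:          # pad sparse slots with None
                            node.append(None)
                        if node[step] is None:
                            node[step] = [] if next_is_index else {}
                        node = node[step]
                    else:
                        if step not in node:
                            node[step] = [] if next_is_index else {}
                        node = node[step]
            return root

        print(unflatten({"phones_1_phone": "8885551212"}))
        # {'phones': [None, {'phone': '8885551212'}]}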

    Read the article

  • Mac OS - Built SVN from source, now Apache2 not loading sites

    - by Geuis
    This relates to another question I asked earlier today. I built SVN 1.6.2 from source. In the process, it has completely screwed up my dev environment. After I built SVN, Apache wasn't loading. It was giving me this error: Syntax error on line 117 of /private/etc/apache2/httpd.conf: Cannot load /usr/libexec /apache2/mod_dav_svn.so into server: dlopen(/usr/libexec/apache2/mod_dav_svn.so, 10): no suitable image found. Did find:\n\t/usr/libexec/apache2/mod_dav_svn.so: mach-o, but wrong architecture It appears that SVN over-wrote the old mod_dav_svn.so and I am not able to get it to build as FAT, and I can't recover whatever was originally there. I resolved this(temporarily?) by commenting out the line that was loading the mod_dav_svn.so and got Apache to start at this point. However, even though Apache is running I am now getting this error when trying to access my dev sites: Directory index forbidden by Options directive: /usr/share/tomcat6/webapps/ROOT/ I have Apache2 sitting in front of Tomcat6. I access my local dev site using the internal name "http://localthesite". I have had virtual directories set up that have worked until this SVN debacle. Tomcat is installed at /usr/local/apache-tomcat, and webapps is /usr/local/apache-tomcat/webapps. Our production servers deploy tomcat to /usr/share/tomcat6, so I have symlinks setup on my system to replicate this as well. These point back to the actual installation path. This has all been working fine as well. None of our configurations for Apache2, Tomcat, or .htaccess have changed. Over the weekend, I performed a "Repair Disk Permissions" on the system. This was before I discovered the mod_dav_svn.so problem. I have been reading up on this all morning and the most common answer is that there is an Options -Indexes set. We have this in a config file, but it was there before and when I removed it during testing, I still got the same errors from Apache. At this point, I'm assuming I either totally borked the native Apache2 installation on this Mac, or that there is a permissions error somewhere that I'm missing. The permissions error could be from the SVN installation, or from my repair process. Does anyone have any idea what could be the problem? I'm totally blocked right now and have no idea where to check next.

    Read the article

  • NoSQL DB for .Net document-based database (ECM)

    - by Dane
    I'm halfway through coding a basic multi-tenant SaaS ECM solution. Each client has it's own instance of the database / datastore, but the .Net app is single instance. The documents are pretty much read only (i.e. an image archive of tiffs or PDFs) I've used MSSQL so far, but then started thinking this might be viable in a NoSQL DB (e.g. MongoDB, CouchDB). The basic premise is that it stores documents, each with their own particular indexes. Each tenant can have multiple document types. e.g. One tenant might have an invoice type, which has Customer ID, Invoice Number and Invoice Date. Another tenant might have an application form, which has Member Number, Application Number, Member Name, and Application Date. So far I've used the old method which Sharepoint (used?) to use, and created a document table which has int_field_1, int_field_2, date_field_1, date_field_2, etc. Then, I've got a "mapping" table which stores the customer specific index name, and the database field that will map to. I've avoided the key-value pair model in the DB due to volume of documents. This way, we can support multiple document types in the one table, and get reasonably high performance out of it, and allow for custom document type searches (i.e. user selects a document type, then they're presented with a list of search fields). However, a NoSQL DB might make this a lot simpler, as I don't need to worry about denormalizing the document. However, I've just got concerns about the rest of the data around a document. We store an "action history" against the document. This tracks views, whether someone emails the document from within the system, and other "future" functionality (e.g. faxing). We have control over the document load process, so we can manipulate the data however it needs to be to get it in the document store (e.g. assign unique IDs). Users will not be adding in their own documents, so we shouldn't need to worry about ACID compliance, as the documents are relatively static. So, my questions I guess : Is a NoSQL DB a good fit Is MongoDB the best for Asp.Net (I saw Raven and Velocity, but they're still kinda beta) Can I store a key for each document, and then store the action history in a MSSQL DB with this key? I don't need to do joins, it would be if a person clicks "View History" against a document. How would performance compare between the two (NoSQL DB vs denormalized "document" table) Volumes would be up to 200,000 new documents per month for a single tenant. My current scaling plan with the SQL DB involves moving the SQL DB into a cluster when certain thresholds are reached, and then reviewing partitioning and indexing structures.
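
    As a point of comparison, a small sketch of how the per-tenant, per-document-type storage could look in MongoDB. It is written with pymongo purely for illustration (the question targets .NET, where the official driver exposes equivalent operations), and every database, collection, and field name below is made up rather than taken from the question.

        from pymongo import MongoClient, ASCENDING

        client = MongoClient()
        tenant_db = client["tenant_acme"]      # one database per tenant
        docs = tenant_db["documents"]

        # Each document type carries only the index fields that type defines;
        # the image itself stays in the existing blob/file store.
        docs.insert_one({
            "doc_type": "invoice",
            "customer_id": 42,
            "invoice_number": "INV-0001",
            "invoice_date": "2010-06-01",
            "blob_key": "archive/tiffs/abc.tif",
        })

        # Secondary indexes can be created per tenant for the fields that
        # tenant actually searches on.
        docs.create_index([("doc_type", ASCENDING), ("invoice_number", ASCENDING)])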

    Read the article

  • How to optimize my PostgreSQL DB for prefix search?

    - by asmaier
    I have a table called "nodes" with roughly 1.7 million rows in my PostgreSQL db =#\d nodes Table "public.nodes" Column | Type | Modifiers --------+------------------------+----------- id | integer | not null title | character varying(256) | score | double precision | Indexes: "nodes_pkey" PRIMARY KEY, btree (id) I want to use information from that table for autocompletion of a search field, showing the user a list of the ten titles having the highest score fitting to his input. So I used this query (here searching for all titles starting with "s") =# explain analyze select title,score from nodes where title ilike 's%' order by score desc; QUERY PLAN ----------------------------------------------------------------------------------------------------------------------- Sort (cost=64177.92..64581.38 rows=161385 width=25) (actual time=4930.334..5047.321 rows=161264 loops=1) Sort Key: score Sort Method: external merge Disk: 5712kB -> Seq Scan on nodes (cost=0.00..46630.50 rows=161385 width=25) (actual time=0.611..4464.413 rows=161264 loops=1) Filter: ((title)::text ~~* 's%'::text) Total runtime: 5260.791 ms (6 rows) This was much to slow for using it with autocomplete. With some information from Using PostgreSQL in Web 2.0 Applications I was able to improve that with a special index =# create index title_idx on nodes using btree(lower(title) text_pattern_ops); =# explain analyze select title,score from nodes where lower(title) like lower('s%') order by score desc limit 10; QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------ Limit (cost=18122.41..18122.43 rows=10 width=25) (actual time=1324.703..1324.708 rows=10 loops=1) -> Sort (cost=18122.41..18144.60 rows=8876 width=25) (actual time=1324.700..1324.702 rows=10 loops=1) Sort Key: score Sort Method: top-N heapsort Memory: 17kB -> Bitmap Heap Scan on nodes (cost=243.53..17930.60 rows=8876 width=25) (actual time=96.124..1227.203 rows=161264 loops=1) Filter: (lower((title)::text) ~~ 's%'::text) -> Bitmap Index Scan on title_idx (cost=0.00..241.31 rows=8876 width=0) (actual time=90.059..90.059 rows=161264 loops=1) Index Cond: ((lower((title)::text) ~>=~ 's'::text) AND (lower((title)::text) ~<~ 't'::text)) Total runtime: 1325.085 ms (9 rows) So this gave me a speedup of factor 4. But can this be further improved? What if I want to use '%s%' instead of 's%'? Do I have any chance of getting a decent performance with PostgreSQL in that case, too? Or should I better try a different solution (Lucene?, Sphinx?) for implementing my autocomplete feature?
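
    For completeness, the index and query from the question wrapped in a small psycopg2 script, as a sketch only: the connection string is a placeholder, and this covers just the 's%' prefix case discussed above, not the '%s%' substring case asked about at the end.

        import psycopg2

        conn = psycopg2.connect("dbname=mydb")   # placeholder connection string
        cur = conn.cursor()

        # Index on lower(title) with text_pattern_ops, as in the question, so
        # lower(title) LIKE 'prefix%' can be answered with a btree range scan.
        cur.execute("CREATE INDEX title_idx ON nodes "
                    "USING btree (lower(title) text_pattern_ops)")
        conn.commit()

        def autocomplete(prefix, limit=10):
            cur.execute("SELECT title, score FROM nodes "
                        "WHERE lower(title) LIKE lower(%s) "
                        "ORDER BY score DESC LIMIT %s",
                        (prefix + '%', limit))
            return cur.fetchall()

        print(autocomplete('s'))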

    Read the article

  • Any way to speed up this hierarchical query?

    - by RenderIn
    I've got a serious performance problem with a hierarchical query that I can't seem to fix. I am modeling several organization charts in my database, each representing a virtual organization within our company. For example, we have several temporary committees that are created from time to time and there may be a Committee Organizer role at the top of this virtual hierarchy, with several people assigned to the Committee Member role beneath the organizer. Some of our virtual organizations have many levels and several branches at each level. I have a single table in which I represent all the role assignments. i.e. a ROLE_ID column and a PARENT_ROLE_ID column which is a foreign key to the ROLE_ID column. For each assignment we also store as a column the location in the company where this person has the assignment. For example, the Committee Organizer would have a company-level/ CEO assignment, while the committee members would have department-level assignments such as ACCOUNTING, MARKETING, etc. So to model the organizer/member relationship for two individuals we would have: ROLE_ID = 4 PARENT_ROLE_ID = NULL EMPLOYEE_NUMBER = 213423 COMPANY_LOCATION = CEO ROLE_ID = 5 PARENT_ROLE_ID = 4 EMPLOYEE_NUMBER = 838221 COMPANY_LOCATION = ACCOUNTING Here's where things get tricky. I have an application that every person in the organization can log in to. When they log in they should be able to view all the virtual organizations in our company. e.g. the committee members should be able to see the committee organizer and vice-versa. However, only the committee organizer should be able to edit the committee members. The difficulty is in determining whether an individual (who can have multiple role assignments) has edit access for each other assignment. While this seems simple in the example, consider a virtual organization in which we have President at the top, 5 departments directly beneath him, 2 subdepartments below each department. We only want people in the Accounting department to be able to edit individuals in the subdepartments belonging to the Accounting department. They should not have edit access to anybody in the Marketing department or its subdepartments. To determine edit access when a user views a virtual organization in our company I run a query that executes two inline views: A) Hierarchically query for all assignments in this virtual organization and using SYS_CONNECT_BY_PATH to store the entire path to each user/role/company_location and B) Hierarchically retrieve all the assignments the individual logged in has and using the SYS_CONNECT_BY_PATH to store the entire path to each of these assignments. The result of the query is all the records from A) plus a boolean determined by joining with B) which flags whether the logged in user has edit access for each record. Indexes don't seem to be helping... it simply appears that there is too much processing going on to separate all the records and then determine edit access. One issue is that I can't store the SYS_CONNECT_BY_PATH and index it... determining whether an individual record has edit access consists of comparing if: test_record_sys_path LIKE individual_record_sys_path || '%' Is a materialized view the answer?
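
    The edit-access test described above boils down to a materialized-path prefix comparison; a tiny Python illustration of the rule (not Oracle code) just to make it concrete:

        def can_edit(editor_path, record_path):
            """The SQL test  record_path LIKE editor_path || '%'  as a string check:
            an editor may edit any record whose assignment path starts with one of
            the editor's own assignment paths."""
            return record_path.startswith(editor_path)

        # Paths in the shape SYS_CONNECT_BY_PATH builds: /role_id/role_id/...
        print(can_edit("/4/", "/4/5/"))   # organizer over committee member -> True
        print(can_edit("/4/5/", "/4/"))   # committee member over organizer -> False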

    Read the article

  • Apache: How can I see my localhost on 192.168.1.101 from 192.168.1.102?

    - by takpar
    Hi, I'm running Apache on Ubuntu. My IP address is 192.168.1.101 While http://localhost and http://192.168.1.101 work fine in my PC, I cannot access it from within my laptop using http://192.168.1.102 It's strange. I can ping 192.168.1.101 but I got "The connection has timed out." in browser. I'm using default apache config. so this is what my sites-available/default looks like: NameVirtualHost *:80 <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /home/www/public_html <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /home/www/public_html> Options Indexes FollowSymLinks MultiViews #AllowOverride None AllowOverride all Order allow,deny allow from all </Directory> /etc/apache2/posrts.conf NameVirtualHost *:80 Listen 80 <IfModule mod_ssl.c> # If you add NameVirtualHost *:443 here, you will also have to change # the VirtualHost statement in /etc/apache2/sites-available/default-ssl # to <VirtualHost *:443> # Server Name Indication for SSL named virtual hosts is currently not # supported by MSIE on Windows XP. Listen 443 </IfModule> <IfModule mod_gnutls.c> Listen 443 </IfModule> my laptop runs Ubuntu as well. so I don't think this is a firewall issue. commands executed in Laptop (192.168.1.102): adp@adp-laptop:~$ ping 192.168.1.101 PING 192.168.1.101 (192.168.1.101) 56(84) bytes of data. 64 bytes from 192.168.1.101: icmp_seq=1 ttl=64 time=32.1 ms 64 bytes from 192.168.1.101: icmp_seq=2 ttl=64 time=54.8 ms 64 bytes from 192.168.1.101: icmp_seq=3 ttl=64 time=77.0 ms 64 bytes from 192.168.1.101: icmp_seq=4 ttl=64 time=100 ms ^C --- 192.168.1.101 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3003ms rtt min/avg/max/mdev = 32.193/66.193/100.717/25.463 ms adp@adp-laptop:~$ telnet 192.168.1.101 80 Trying 192.168.1.101... telnet: Unable to connect to remote host: Connection timed out commands executed in PC (192.168.1.101): adp@adp-desktop:~$ ps afx | grep http 12672 pts/4 S+ 0:00 | \_ grep --color=auto http adp@adp-desktop:~$ ping 192.168.1.102 PING 192.168.1.102 (192.168.1.102) 56(84) bytes of data. 64 bytes from 192.168.1.102: icmp_seq=1 ttl=64 time=32.1 ms 64 bytes from 192.168.1.102: icmp_seq=2 ttl=64 time=54.8 ms 64 bytes from 192.168.1.102: icmp_seq=3 ttl=64 time=77.0 ms 64 bytes from 192.168.1.102: icmp_seq=4 ttl=64 time=100 ms ^C --- 192.168.1.102 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3003ms rtt min/avg/max/mdev = 32.193/66.193/100.717/25.463 ms adp@adp-desktop:~$ telnet 192.168.1.102 80 Trying 192.168.1.102... telnet: Unable to connect to remote host: Connection refused adp@adp-desktop:~$ telnet 192.168.1.102 Trying 192.168.1.102... telnet: Unable to connect to remote host: Connection refused What should i do?

    Read the article

  • What are the types and inner workings of a query optimizer?

    - by Frank Developer
    As I understand it, most query optimizers are cost-based. Some can be influenced by hints like FIRST_ROWS(). Others are tailored for OLAP. Is it possible to know more detailed logic about how Informix IDS and SE's optimizers decide what's the best route for processing a query, other than SET EXPLAIN? Is there any documentation which illustrates the ranking of SELECT statements? I would imagine that "SELECT col FROM table WHERE ROWID = n" ranks 1st. What are the rest of them? If I'm not mistaken, Informix's ROWID is a SERIAL(INT) which allows for a max. of 2GB nrows, or maybe it uses INT9 for TB's nrows? However, I think Oracle uses HEX values for ROWID. Too bad ROWID can't be used often, since a row's ROWID can change. So maybe ROWID is used by the optimizer as a counter? Perhaps it could be used for implementing the query progress idea I mentioned in my "Begin viewing query results before query completes" question? For some reason, I feel it wouldn't be that difficult to report a query's progress while it is being processed, perhaps at the expense of some slight overhead, but it would be nice to know ahead of time: a "Google-like" estimate of how many rows meet a query's criteria, display its progress every 100, 200, 500 or 1,000 rows, give users the ability to cancel it at any time, and start displaying the qualifying rows as they are being put into the current list while it continues searching? This is just one example; perhaps we could think of other neat/useful features, the ingredients are more or less there. Perhaps we could fine-tune each query with more granularity than currently available? OLTP queries tend to be mostly static and pre-defined. The "what-ifs" are more OLAP, so let's try to add more control and intelligence to them? So, therefore, being able to more precisely control a query, not just "hint-influence" it, is what's needed, and therefore it would be necessary to know how the optimizer's logic is programmed. We can then have dynamic SELECT and other statements for specific situations! Maybe even tell IDS to read blocks of index nodes at a time instead of one by one, etc.

    Read the article

  • Optimize date query for large child tables: GiST or GIN?

    - by Dave Jarvis
    Problem 72 child tables, each having a year index and a station index, are defined as follows: CREATE TABLE climate.measurement_12_013 ( -- Inherited from table climate.measurement_12_013: id bigint NOT NULL DEFAULT nextval('climate.measurement_id_seq'::regclass), -- Inherited from table climate.measurement_12_013: station_id integer NOT NULL, -- Inherited from table climate.measurement_12_013: taken date NOT NULL, -- Inherited from table climate.measurement_12_013: amount numeric(8,2) NOT NULL, -- Inherited from table climate.measurement_12_013: category_id smallint NOT NULL, -- Inherited from table climate.measurement_12_013: flag character varying(1) NOT NULL DEFAULT ' '::character varying, CONSTRAINT measurement_12_013_category_id_check CHECK (category_id = 7), CONSTRAINT measurement_12_013_taken_check CHECK (date_part('month'::text, taken)::integer = 12) ) INHERITS (climate.measurement) CREATE INDEX measurement_12_013_s_idx ON climate.measurement_12_013 USING btree (station_id); CREATE INDEX measurement_12_013_y_idx ON climate.measurement_12_013 USING btree (date_part('year'::text, taken)); (Foreign key constraints to be added later.) The following query runs abysmally slow due to a full table scan: SELECT count(1) AS measurements, avg(m.amount) AS amount FROM climate.measurement m WHERE m.station_id IN ( SELECT s.id FROM climate.station s, climate.city c WHERE -- For one city ... -- c.id = 5182 AND -- Where stations are within an elevation range ... -- s.elevation BETWEEN 0 AND 3000 AND 6371.009 * SQRT( POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) + (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) * POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2)) ) <= 50 ) AND -- -- Begin extracting the data from the database. -- -- The data before 1900 is shaky; insufficient after 2009. -- extract( YEAR FROM m.taken ) BETWEEN 1900 AND 2009 AND -- Whittled down by category ... -- m.category_id = 1 AND m.taken BETWEEN -- Start date. (extract( YEAR FROM m.taken )||'-01-01')::date AND -- End date. Calculated by checking to see if the end date wraps -- into the next year. If it does, then add 1 to the current year. -- (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign( (extract( YEAR FROM m.taken )||'-12-31')::date - (extract( YEAR FROM m.taken )||'-01-01')::date ), 0 ) AS text)||'-12-31')::date GROUP BY extract( YEAR FROM m.taken ) The sluggishness comes from this part of the query: m.taken BETWEEN /* Start date. */ (extract( YEAR FROM m.taken )||'-01-01')::date AND /* End date. Calculated by checking to see if the end date wraps into the next year. If it does, then add 1 to the current year. */ (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign( (extract( YEAR FROM m.taken )||'-12-31')::date - (extract( YEAR FROM m.taken )||'-01-01')::date ), 0 ) AS text)||'-12-31')::date The HashAggregate from the plan shows a cost of 10006220141.11, which is, I suspect, on the astronomically huge side. There is a full table scan on the measurement table (itself having neither data nor indexes) being performed. The table aggregates 237 million rows from its child tables. Question What is the proper way to index the dates to avoid full table scans? Options I have considered: GIN GiST Rewrite the WHERE clause Separate year_taken, month_taken, and day_taken columns to the tables What are your thoughts? Thank you!

    Read the article

  • Using multiple aggregate functions in an algebraic expression in (ANSI) SQL statement

    - by morpheous
    I have the following aggregate functions (AGG FUNCs): foo(), foobar(), fredstats(), barneystats(). I want to know if I can use multiple AGG FUNCs in an algebraic expression. This may seem a strange/simplistic question for seasoned SQL developers - however, the but the reason I ask is that so far, all AGG FUNCs examples I have seen are of the simplistic variety e.g. max(salary) < 100, rather than using the AGG FUNCs in an expression which involves using multiple AGG FUNCs in an expression (like agg_func1() agg_func2()). The information below should help clarify further. Given tables with the following schemas: CREATE TABLE item (id int, length float, weight float); CREATE TABLE item_info (item_id, name varchar(32)); # Is it legal (ANSI) SQL to write queries of this format ? SELECT id, name, foo, foobar, fredstats FROM A, B (SELECT id, foo(123) as foo, foobar('red') as foobar, fredstats('weight') as fredstats FROM item GROUP BY id HAVING [ALGEBRAIC EXPRESSION] ORDER BY id AS A), item_info AS B WHERE item.id = B.id Where: ALGEBRAIC EXPRESSION is the type of expression that can be used in a WHERE clause - for example: ((foo(x) < foobar(y)) AND foobar(y) IN (1,2,3)) OR (fredstats(x) <> 0)) I am using PostgreSQL as the db, but I would prefer to use ANSI SQL wherever possible. Assuming it is legal to include AGG FUNCS in the way I have done above, I'd like to know: Is there a more efficient way to write the above query ? Is there any way I can speed up the query in terms of a judicious choice of indexes on the tables item and item_info ? Is there a performance hit of using AGG FUNCs in an algebraic expression like I am (i.e. an expression involving the output of aggregate functions rather than constants? Can the expression also include 'scaled' AGG FUNC? (for example: 2*foo(123) < -3*foobar(456) ) - will scaling (i.e. multiplying an AGG FUNC by a number have an effect on performance?) How can I write the query above using INNER JOINS instead?

    Read the article

  • Why is zIndex not working from IE/Javascript?

    - by Vilx-
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <meta http-equiv="X-UA-Compatible" content="IE=7" /> <title>Problem demo</title> </head> <body> <div style="background:red; position:relative;" id='div1'>1. <div style="background:lime; position: absolute; width: 300px;height: 300px; top: 3px; left: 30px" id="div2">3.</div> </div> <div style="background:blue;position:relative;color: white" id="div3">2.</div> <script type="text/javascript">/*<![CDATA[*/ window.onload= function() { // The container of the absolute DIV document.getElementById('div1').style.zIndex = 800; // The lowest DIV of all which obscures the absolute DIV document.getElementById('div2').style.zIndex = 1; // The absolute DIV document.getElementById('div3').style.zIndex = 1000; } /*]]>*/</script> </body> </html> In a nutshell, this script has two DIV elements with position:relative and the first of them has a third DIV with position:absolute in it. It's all set to run on IE-7 standards mode (I'm targeting IE7 and above). I know about the separate z-stacks of IE, so by default the third DIV should be beneath the second DIV. To fix this problem there is some Javascript which sets the z-orders of first and third DIV to 1000, and the z-order of the second DIV to 999. Unfortunately this does not help. If the z-indexes were set in markup, this would work, but why not from JS? Note: This problem does not exist in IE8 standards mode, but I'm targetting IE7, so I can't rely on that. Also, if you save this to your hard drive and then open it up, at first IE complains something about ActiveX and stuff. After you wave it away, everything works as expected. But if you refresh the page, the problem is there again.

    Read the article

  • Dynamic data-entry value store

    - by simendsjo
    I'm creating a data-entry application where users are allowed to create the entry schema. My first version of this just created a single table per entry schema with each entry spanning a single or multiple columns (for complex types) with the appropriate data type. This allowed for "fast" querying (on small datasets as I didn't index all columns) and simple synchronization where the data-entry was distributed on several databases. I'm not quite happy with this solution though; the only positive thing is the simplicity... I can only store a fixed number of columns. I need to create indexes on all columns. I need to recreate the table on schema changes. Some of my key design criterias are: Very fast querying (Using a simple domain specific query language) Writes doesn't have to be fast Many concurrent users Schemas will change often Schemas might contain many thousand columns The data-entries might be distributed and needs syncronization. Preferable MySQL and SQLite - Databases like DB2 and Oracle is out of the question. Using .Net/Mono I've been thinking of a couple of possible designs, but none of them seems like a good choice. Solution 1: Union like table containing a Type column and one nullable column per type. This avoids joins, but will definitly use a lot of space. Solution 2: Key/value store. All values are stored as string and converted when needed. Also use a lot of space, and of course, I hate having to convert everything to string. Solution 3: Use an xml database or store values as xml. Without any experience I would think this is quite slow (at least for the relational model unless there is some very good xpath support). I also would like to avoid an xml database as other parts of the application fits better as a relational model, and being able to join the data is helpful. I cannot help to think that someone has solved (some of) this already, but I'm unable to find anything. Not quite sure what to search for either... I know market research is doing something like this for their questionnaires, but there are few open source implementations, and the ones I've found doesn't quite fit the bill. PSPP has much of the logic I'm thinking of; primitive column types, many columns, many rows, fast querying and merging. Too bad it doesn't work against a database.. And of course... I don't need 99% of the provided functionality, but a lot of stuff not included. I'm not sure this is the right place to ask such a design related question, but I hope someone here has some tips, know of any existing work, or can point me to a better place to ask such a question. Thanks in advance!
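
    For what it's worth, a compact sketch of "solution 1" from the list above (a type column plus one nullable column per primitive type), using SQLite purely for illustration; the table and column names are invented.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE field_def (
            id         INTEGER PRIMARY KEY,
            schema_id  INTEGER NOT NULL,
            name       TEXT NOT NULL,
            value_type TEXT NOT NULL            -- 'int', 'real', 'text', 'date'
        );
        CREATE TABLE entry_value (
            entry_id   INTEGER NOT NULL,
            field_id   INTEGER NOT NULL REFERENCES field_def(id),
            int_value  INTEGER,                 -- exactly one of these is non-NULL,
            real_value REAL,                    -- chosen by field_def.value_type
            text_value TEXT,
            date_value TEXT
        );
        CREATE INDEX ev_int  ON entry_value(field_id, int_value);
        CREATE INDEX ev_text ON entry_value(field_id, text_value);
        """)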

    Read the article

  • Optimising movement on hex grid

    - by Mloren
    I am making a turn-based hex-grid game. The player selects units and moves them across the hex grid. Each tile in the grid is of a particular terrain type (e.g. desert, hills, mountains, etc.) and each unit type has different abilities when it comes to moving over the terrain (e.g. some can move over mountains easily, some with difficulty and some not at all). Each unit has a movement value and each tile takes a certain amount of movement based on its terrain type and the unit type. E.g. it costs a tank 1 to move over desert, 4 over swamp, and it can't move at all over mountains, whereas a flying unit moves over everything at a cost of 1.
    The issue I have is that when a unit is selected, I want to highlight an area around it showing where it can move. This means working out all the possible paths through the surrounding hexes, how much movement each path will take, and lighting up the tiles based on that information. I got this working with a recursive function and found it took too long to calculate. I moved the function into a thread so that it didn't block the game, but it still takes around 2 seconds for the thread to calculate the movable area for a unit with a move of 8. It's over a million recursions, which obviously is problematic.
    I'm wondering if anyone has any clever ideas on how I can optimize this problem. Here's the recursive function I'm currently using (it's C# btw):
        private void CalcMoveGridRecursive(int nCenterIndex, int nMoveRemaining)
        {
            // List of the 6 tiles adjacent to the center tile
            int[] anAdjacentTiles = m_ThreadData.m_aHexData[nCenterIndex].m_anAdjacentTiles;
            foreach (int tileIndex in anAdjacentTiles)
            {
                // Make sure this adjacent tile exists
                if (tileIndex == -1)
                    continue;
                // How much would it cost the unit to move onto this adjacent tile
                int nMoveCost = m_ThreadData.m_anTerrainMoveCost[(int)m_ThreadData.m_aHexData[tileIndex].m_eTileType];
                if (nMoveCost != -1 && nMoveCost <= nMoveRemaining)
                {
                    // Make sure the adjacent tile isn't already in our list.
                    if (!m_ThreadData.m_lPassableTiles.Contains(tileIndex))
                        m_ThreadData.m_lPassableTiles.Add(tileIndex);
                    // Now check the 6 tiles surrounding the adjacent tile we just checked (it becomes the new center).
                    CalcMoveGridRecursive(tileIndex, nMoveRemaining - nMoveCost);
                }
            }
        }
    At the end of the recursion, m_lPassableTiles contains a list of the indexes of all the tiles that the unit can possibly reach, and they are made to glow. This all works, it just takes too long. Does anyone know a better approach to this?
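
    One commonly suggested alternative to the exhaustive recursion above is a uniform-cost (Dijkstra-style) search that remembers the cheapest known cost per tile, so a tile is only re-expanded when a cheaper path to it is found rather than once per possible path. The sketch below is in Python rather than the question's C#, purely to show the shape of the algorithm; neighbors() and move_cost() stand in for the m_ThreadData lookups.

        import heapq

        def reachable_tiles(start, move_points, neighbors, move_cost):
            """Return {tile_index: cheapest_cost} for every tile reachable within
            move_points.  neighbors(t) yields adjacent tile indexes; move_cost(t)
            returns the cost to enter tile t, or None if it is impassable."""
            best = {start: 0}
            frontier = [(0, start)]
            while frontier:
                cost, tile = heapq.heappop(frontier)
                if cost > best.get(tile, float("inf")):
                    continue                      # stale queue entry
                for nxt in neighbors(tile):
                    step = move_cost(nxt)
                    if step is None:
                        continue
                    new_cost = cost + step
                    if new_cost <= move_points and new_cost < best.get(nxt, float("inf")):
                        best[nxt] = new_cost
                        heapq.heappush(frontier, (new_cost, nxt))
            return best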

    Read the article

  • Pruning data for better viewing on loglog graph - Matlab

    - by Geodesic
    Hi guys, just wondering if anyone has any ideas about an issue I'm having. I have a fair amount of data that needs to be displayed on one graph. Two theoretical lines that are bold and solid are displayed on top, then 10 experimental data sets that converge to these lines are graphed, each using a different identifier (e.g. the + or o or a square etc). These graphs are on a log scale that goes up to 1e6. The first few decades of the graph (< 1e3) look fine, but as all the data sets converge (> 1e3) it's really difficult to see which data is which. There are over 1000 data points per decade, which I can prune linearly to an extent, but if I do this too much the lower end of the graph will suffer in resolution.
    What I'd like to do is prune logarithmically, strongest at the high end, working back to 0. My question is: how can I get a logarithmically scaled index vector rather than a linear one? My initial assumption was that as my data is linear I could just use a linear index to prune, which led to something like this (but for all decades):
        % grab indices per decade
        ind12 = find(y >= 1e1 & y <= 1e2);
        indlow = find(y < 1e2);
        indhigh = find(y > 1e4);
        ind23 = find(y >= 1e2 & y <= 1e3);
        ind34 = find(y >= 1e3 & y <= 1e4);
        % We want ind12 indexes in this decade, find spacing
        tot23 = round(length(ind23)/length(ind12));
        tot34 = round(length(ind34)/length(ind12));
        % grab ones to keep
        ind23keep = ind23(1):tot23:ind23(end);
        ind34keep = ind34(1):tot34:ind34(end);
        indnew = [indlow' ind23keep ind34keep indhigh'];
        loglog(x(indnew), y(indnew));
    But this causes the prune to behave in a jumpy fashion, obviously. Each decade has the number of points that I'd like, but as it's a linear distribution, the points tend to be clumped at the high end of the decade on the log scale. Any ideas on how I can do this?
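
    One way to get a logarithmically scaled index vector is to generate the candidate positions with logspace and deduplicate them after rounding, so each decade keeps roughly the same number of samples. The sketch below is Python/NumPy for illustration; the same idea in MATLAB is unique(round(logspace(0, log10(N), k))).

        import numpy as np

        def log_prune_indices(n_points, n_keep):
            """Roughly n_keep indexes into a length-n_points array, spaced so that
            each decade keeps a similar number of samples."""
            raw = np.logspace(0, np.log10(n_points), num=n_keep)   # values 1 .. n_points
            return np.unique(np.round(raw).astype(int)) - 1        # to 0-based indexes

        y = np.cumsum(np.random.rand(100000))      # stand-in for one converging data set
        x = np.arange(1, len(y) + 1)
        keep = log_prune_indices(len(y), 500)
        # equivalent of loglog(x(indnew), y(indnew)): plot x[keep] against y[keep]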

    Read the article

  • Optimising speeds in HDF5 using Pytables

    - by Sree Aurovindh
    The problem is with respect to the writing speed of the computers (10 * 32-bit machines) and the PostgreSQL query performance. I will explain the scenario in detail. I have about 80 GB of data (along with appropriate database indexes in place). I am trying to read it from a PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in one HDF5 file. The implementation of HDF5 is not multithreaded or enabled for symmetric multiprocessing. I have rented about 10 computers for a day and am trying to write the files on them in order to speed up my data handling.
    As far as the PostgreSQL table is concerned, the overall record count is 140 million and I have 5 tables referred to by primary/foreign keys. I am not using joins as they do not scale, so for a single lookup I do 6 lookups without joins and write the results into HDF5 format. For each lookup I do 6 inserts, one into the table and one into each of its corresponding arrays. The queries are really simple:
        select * from x.train where tr_id=1    (primary key & indexed)
        select q_t from x.qt where q_id=2      (non-primary key but indexed)
    (similarly, five queries). Each computer writes two HDF5 files, hence the total count comes to around 20 files.
    Some calculations and statistics:
        Total number of records: 14,37,00,000
        Total number of records per file: 143700000/20 = 71,85,000
        Total number of records in each file: 71,85,000 * 5 = 3,59,25,000
    Current PostgreSQL database config: my current machine has 8GB RAM with an i7 2nd generation processor. I made the following changes to the PostgreSQL configuration file: shared_buffers: 2 GB, effective_cache_size: 4 GB.
    Note on current performance: I have run it for about ten hours and the total number of records written for each file is about 6,21,000 * 5 = 31,05,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and at this speed it will take about 11 days, which is too long for my experiments. Please suggest how I can improve this.
    Questions: 1. Should I use symmetric multiprocessing on those desktops (each has 2 cores with about 2 GB of RAM)? In that case, what is suggested or preferable? 2. If I change my PostgreSQL configuration file and increase the RAM, will it improve the process? 3. Should I use multithreading? In that case, any links or pointers would be of great help. Thanks, Sree aurovindh V
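
    A sketch of the kind of batching that usually dominates here: stream rows out of PostgreSQL with a server-side (named) cursor in large chunks and append them to the PyTables table in blocks rather than one row at a time. The DSN, table layout, and dtype below are placeholders, not details taken from the question, and the PyTables calls assume the 3.x naming (open_file/create_table).

        import numpy as np
        import psycopg2
        import tables

        conn = psycopg2.connect("dbname=traindb")          # placeholder DSN
        cur = conn.cursor(name="train_stream")             # server-side cursor
        cur.itersize = 50000
        cur.execute("SELECT tr_id, q_id FROM x.train ORDER BY tr_id")

        dtype = np.dtype([("tr_id", np.int64), ("q_id", np.int64)])  # assumed layout
        h5 = tables.open_file("train.h5", mode="w")
        out = h5.create_table("/", "train", dtype, expectedrows=140000000)

        while True:
            rows = cur.fetchmany(50000)
            if not rows:
                break
            out.append(np.array(rows, dtype=dtype))        # one append per chunk

        out.flush()
        h5.close()
        conn.close()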

    Read the article

  • asp.net custom templated datalist - throws argument out of range (index) on button press

    - by MrTortoise
    I have a class BaseTemplate public abstract class BaseTemplate : ITemplate This adds the controls, and provides abstract methods to implement in the inheriting class. The inheriting class then adds its html according to its data source and manages the data binding. this all works fine - I get the control appearing with properly parsed html. the problem is that the base class adds controls into the template that have their own CommandName arguments ... the idea is that the class that implements the custom templated dataList will provide the logic of setting the Selected and Edit Indexes. This class also manages the data binding etc. It sets all of the templates ont he datalist in the Init method (which was another cause of this exception). the exception gets throw when i hit one of these buttons .. but after the ItemCommand event is being processed. The stack trace does not include any references to my methods or objects which is why i am so stuck. The Exception Details Exception Details: System.ArgumentOutOfRangeException: Specified argument was out of the range of valid values. Parameter name: index The Stack Trace: [ArgumentOutOfRangeException: Specified argument was out of the range of valid values. Parameter name: index] System.Web.UI.ControlCollection.get_Item(Int32 index) +8665582 System.Web.UI.WebControls.DataList.GetItem(ListItemType itemType, Int32 repeatIndex) +8667655 System.Web.UI.WebControls.DataList.System.Web.UI.WebControls.IRepeatInfoUser.GetItemStyle(ListItemType itemType, Int32 repeatIndex) +11 System.Web.UI.WebControls.RepeatInfo.RenderVerticalRepeater(HtmlTextWriter writer, IRepeatInfoUser user, Style controlStyle, WebControl baseControl) +8640873 System.Web.UI.WebControls.RepeatInfo.RenderRepeater(HtmlTextWriter writer, IRepeatInfoUser user, Style controlStyle, WebControl baseControl) +27 System.Web.UI.WebControls.DataList.RenderContents(HtmlTextWriter writer) +208 System.Web.UI.WebControls.BaseDataList.Render(HtmlTextWriter writer) +30 System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +27 System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +99 System.Web.UI.Control.RenderControl(HtmlTextWriter writer) +25 System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +134 System.Web.UI.Control.RenderChildren(HtmlTextWriter writer) +19 System.Web.UI.HtmlControls.HtmlForm.RenderChildren(HtmlTextWriter writer) +163 System.Web.UI.HtmlControls.HtmlContainerControl.Render(HtmlTextWriter writer) +32 System.Web.UI.HtmlControls.HtmlForm.Render(HtmlTextWriter output) +51 System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +27 System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +99 System.Web.UI.HtmlControls.HtmlForm.RenderControl(HtmlTextWriter writer) +40 System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +134 System.Web.UI.Control.RenderChildren(HtmlTextWriter writer) +19 System.Web.UI.Page.Render(HtmlTextWriter writer) +29 System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +27 System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +99 System.Web.UI.Control.RenderControl(HtmlTextWriter writer) +25 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1266 This is driving me absolutley stark raving bonkers ... im talking cthulu style.

    Read the article

  • Slow MySQL query... only sometimes

    - by Shane N
    I have a query that's used in a reporting system of ours that sometimes runs quicker than a second, and other times takes 1 to 10 minutes to run. Here's the entry from the slow query log: # Query_time: 543 Lock_time: 0 Rows_sent: 0 Rows_examined: 124948974 use statsdb; SELECT count(distinct Visits.visitorid) as 'uniques' FROM Visits,Visitors WHERE Visits.visitorid=Visitors.visitorid and candidateid in (32) and visittime>=1275721200 and visittime<=1275807599 and (omit=0 or omit>=1275807599) AND Visitors.segmentid=9 AND Visits.visitorid NOT IN (SELECT Visits.visitorid FROM Visits,Visitors WHERE Visits.visitorid=Visitors.visitorid and candidateid in (32) and visittime<1275721200 and (omit=0 or omit>=1275807599) AND Visitors.segmentid=9); It's basically counting unique visitors, and it's doing that by counting the visitors for today and then substracting those that have been here before. If you know of a better way to do this, let me know. I just don't understand why sometimes it can be so quick, and other times takes so long - even with the same exact query under the same server load. Here's the EXPLAIN on this query. As you can see it's using the indexes I've set up: id select_type table type possible_keys key key_len ref rows Extra 1 PRIMARY Visits range visittime_visitorid,visitorid visittime_visitorid 4 NULL 82500 Using where; Using index 1 PRIMARY Visitors eq_ref PRIMARY,cand_visitor_omit PRIMARY 8 statsdb.Visits.visitorid 1 Using where 2 DEPENDENT SUBQUERY Visits ref visittime_visitorid,visitorid visitorid 8 func 1 Using where 2 DEPENDENT SUBQUERY Visitors eq_ref PRIMARY,cand_visitor_omit PRIMARY 8 statsdb.Visits.visitorid 1 Using where I tried to optimize the query a few weeks ago and came up with a variation that consistently took about 2 seconds, but in practice it ended up taking more time since 90% of the time the old query returned much quicker. Two seconds per query is too long because we are calling the query up to 50 times per page load, with different time periods. Could the quick behavior be due to the query being saved in the query cache? I tried running 'RESET QUERY CACHE' and 'FLUSH TABLES' between my benchmark tests and I was still getting quick results most of the time. Note: last night while running the query I got an error: Unable to save result set. My initial research shows that may be due to a corrupt table that needs repair. Could this be the reason for the behavior I'm seeing? In case you want server info: Accessing via PHP 4.4.4 MySQL 4.1.22 All tables are InnoDB We run optimize table on all tables weekly The sum of both the tables used in the query is 500 MB MySQL config: key_buffer = 350M max_allowed_packet = 16M thread_stack = 128K sort_buffer = 14M read_buffer = 1M bulk_insert_buffer_size = 400M set-variable = max_connections=150 query_cache_limit = 1048576 query_cache_size = 50777216 query_cache_type = 1 tmp_table_size = 203554432 table_cache = 120 thread_cache_size = 4 wait_timeout = 28800 skip-external-locking innodb_file_per_table innodb_buffer_pool_size = 3512M innodb_log_file_size=100M innodb_log_buffer_size=4M

    Read the article

  • How to replace all id attributes of a child collection of complex types using jQuery in ASP.net MVC

    - by TJB
    Here's my situation: I'm writing an ASP.NET MVC 1 website and I have a create/edit form that uses the default model binding to parse the form into a strongly typed complex object. The object I'm posting has a child collection of another complex type, and the way I format my ids for the model binder is as follows:

    <div class="childContainer" >
        <!-- There's one of these for each property for each child collection item -->
        <%= Html.TextBox("ChildCollectionName[0].ChildPropertyName", /* blah blah */ ) %>
        <%= Html.TextBox("ChildCollectionName[0].OtherChildPropertyName", /* blah blah */ ) %>
        <!-- ... -->
    </div>

    This gets rendered as:

    <div class="childContainer" >
        <input id="ChildCollectionName[0]_ChildPropertyName" ... />
        <input id="ChildCollectionName[0]_OtherChildPropertyName" ... />
        ...
    </div>
    <div class="childContainer" >
        <input id="ChildCollectionName[1]_ChildPropertyName" ... />
        <input id="ChildCollectionName[1]_OtherChildPropertyName" ... />
        ...
    </div>

    for each entry in the child collection. This collection is created dynamically in the form using jQuery, so entries can be added, removed, etc., and whenever there's an operation on the collection I need to update the indexes so that it binds correctly on the server side.

    What's the best way to replace all the HTML input ids when I'm updating the index within the child collection - i.e. replace every [*] with [N], where N is the correct index - using jQuery / JavaScript? I have something coded now, but it's buggy and I think there is a simpler solution. Also, if you have an easier way to identify the child collection, I'll take any advice on that as well. Thanx!
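
    One straightforward approach (a sketch, not the poster's existing code, and it assumes a single [n] segment per name) is to renumber every input inside each .childContainer after any add or remove, so the indexes always run 0..N-1 in document order:

    // Renumber name and id attributes of all inputs in each child container.
    function reindexChildren() {
        $('.childContainer').each(function (newIndex) {
            $(this).find(':input').each(function () {
                // name uses "Collection[0].Property", id uses "Collection[0]_Property"
                var name = $(this).attr('name');
                if (name) {
                    $(this).attr('name', name.replace(/\[\d+\]/, '[' + newIndex + ']'));
                }
                var id = $(this).attr('id');
                if (id) {
                    $(this).attr('id', id.replace(/\[\d+\]/, '[' + newIndex + ']'));
                }
            });
        });
    }

    Calling reindexChildren() at the end of every add/remove handler keeps the names in the sequential form the default model binder expects.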

    Read the article

  • MATLAB image corner coordinates & referencing to cell arrays

    - by James
    Hi, I am having some problems comparing the elements in different cell arrays. The context of this problem is that I am using the bwboundaries function in MATLAB to trace the outline of an image. The image is of a structural cross section, and I am trying to find whether there is continuity throughout the section (i.e. there is only one outline produced by the bwboundaries command). Having done this and found where there is more than one section traced (i.e. it is not continuous), I have used the cornermetric command to find the corners of each section. The code I have is:

    %% Define the structural section as a binary matrix (image is an I-section with the web broken)
    bw(20:40,50:150) = 1;
    bw(160:180,50:150) = 1;
    bw(20:60,95:105) = 1;
    bw(140:180,95:105) = 1;
    Trace = bw;
    [B] = bwboundaries(Trace,'noholes'); % Traces the outer boundary of each section
    L = length(B); % Finds number of boundaries
    if L > 1
        disp('Multiple boundaries') % States whether more than one boundary was found
    end

    %% Obtain perimeter coordinates
    for k=1:length(B) % For all the boundaries
        perim = B{k}; % Obtains perimeter coordinates (as a 2D matrix) from the cell array
    end

    %% Find the corner positions
    C = cornermetric(bw);
    Areacorners = find(C == max(max(C))) % Finds the corner coordinates of each boundary
    [rowindexcorners,colindexcorners] = ind2sub(size(Newgeometry),Areacorners) % Convert corner coordinate indexes into subscripts, to give x & y coordinates (i.e. the same format as B gives)

    %% Put these corner coordinates into a cell array
    Cornerscellarray = cell(length(rowindexcorners),1); % Initialises cell array of zeros
    for i = 1:numel(rowindexcorners)
        Cornerscellarray(i) = {[rowindexcorners(i) colindexcorners(i)]}; % Assigns the corner indices into the cell array
        % This is done so the cell arrays can be compared
    end

    for k=1:length(B) % For all the boundaries found
        perim = B{k}; % Obtains coordinates for each perimeter
        Z = perim; % Initialise the matrix containing the perimeter corners
        Sectioncellmatrix = cell(length(rowindexcorners),1);
        for i = 1:length(perim)
            Sectioncellmatrix(i) = {[perim(i,1) perim(i,2)]};
        end
        for i = 1:length(perim)
            if Sectioncellmatrix(i) ~= Cornerscellarray
                Sectioncellmatrix(i) = []; % Gets rid of the elements that are not corners, but keeps them associated with the relevant section
            end
        end
    end

    This creates an error in the last for loop. Is there a way I can check whether each cell of the array (containing an x and y coordinate) is equal to any pair of coordinates in Cornerscellarray? I know it is possible with matrices to compare whether a certain element matches any of the elements in another matrix. I want to be able to do the same here, but for the pair of coordinates within the cell array. The reason I don't just use the Cornerscellarray cell array itself is that it lists all the corner coordinates and does not associate them with a specific traced boundary.
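
    One way to avoid comparing cell arrays element by element (a sketch, under the assumption that plain numeric coordinate matrices are acceptable here) is to keep the coordinates as N-by-2 matrices and use ismember with the 'rows' option, which checks each perimeter point against the full list of corner points in one call:

    % Corner coordinates as an N-by-2 matrix of [row col] pairs.
    corners = [rowindexcorners(:), colindexcorners(:)];

    sectionCorners = cell(length(B),1);    % corners belonging to each boundary
    for k = 1:length(B)
        perim = B{k};                                  % M-by-2 perimeter coordinates
        isCorner = ismember(perim, corners, 'rows');   % logical M-by-1 match per point
        sectionCorners{k} = perim(isCorner, :);        % keep only this boundary's corner points
    end

    This keeps the corner points grouped by boundary (sectionCorners{k}) without deleting elements inside a loop.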

    Read the article

  • highlight query string in more than one field using solr search feature

    - by Romi
    I am using Solr indexes to show my search results, and I parse the JSON data received from Solr to display them. I am able to highlight a query string in the search results, but only in a single field. For this I set hl=true and hl.fl="field1", and I did it as follows:

    $.getJSON("http://192.168.1.9:8983/solr/db/select/?wt=json&&start=0&rows=100&q="+lowerCaseQuery+"&hl=true&hl.fl=description,name&hl.usePhraseHighlighter=true&sort=price asc&json.wrf=?", function(result){
        var n = result.response.numFound;
        var highlight = new Array(n);
        $.each(result.highlighting, function(i, hitem){
            var match = hitem.text[0].match(/<em>(.*?)<\/em>/);
            highlight[i] = match[1];
        });
        $.each(newresult.response.docs, function(i, item){
            var word = highlight[item["UID_PK"]];
            var result = item.text[0].replace(new RegExp(word,'g'), '<em>' + word + '</em>');
        });
    });

    For this the JSON object is:

    {
      "responseHeader": { "status": 0, "QTime": 32 },
      "response": {
        "numFound": 21,
        "start": 0,
        "docs": [
          {
            "description": "The matte finish waves on this wedding band contrast with the high polish borders. This sharp and elegant design was finely crafted in Japan.",
            "UID_PK": "8252"
          },
          {
            "description": "This elegant ring has an Akoya cultured pearl with a band of bezel-set round diamonds making it perfect for her to wear to work or the night out.",
            "UID_PK": "8142"
          }
        ]
      },
      "highlighting": {
        "8252": {
          "description": [" and <em>elegant</em> design was finely crafted in Japan."]
        },
        "8142": {
          "description": ["This <em>elegant</em> ring has an Akoya cultured pearl with a band of bezel-set round diamonds making"]
        }
      }
    }

    Now, if I want to highlight the query string in two fields, I set hl=true and hl.fl=description,name, and my JSON is:

    {
      "responseHeader": { "status": 0, "QTime": 16 },
      "response": {
        "numFound": 1904,
        "start": 0,
        "docs": [
          {
            "description": "",
            "UID_PK": "7780",
            "name": ["Diamond bracelet with Milgrain Bezel1"]
          },
          {
            "description": "This pendant is sure to win hearts. Round diamonds form a simple and graceful line.",
            "UID_PK": "8121",
            "name": ["Heartline Diamond Pendant"]
          }
        ]
      },
      "highlighting": {
        "7780": {
          "name": ["<em>Diamond</em> bracelet with Milgrain Bezel1"]
        },
        "8121": {
          "description": ["This pendant is sure to win hearts. Round <em>diamonds</em> form a simple and graceful line."],
          "name": ["Heartline <em>Diamond</em> Pendant"]
        }
      }
    }

    Now, how should I parse this to get the result? Please suggest a general technique, so that if I want to highlight the query in more fields I can do so. Thanks
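
    A general way to parse the response (a sketch, not tied to any particular field list) is to iterate over result.highlighting itself: each document id maps to an object whose keys are exactly the fields that were highlighted, so nothing needs to be hard-coded:

    // Collect highlighted terms keyed by document id and field name.
    var highlights = {};   // highlights[docId][fieldName] = [terms...]
    $.each(result.highlighting, function (docId, fields) {
        highlights[docId] = {};
        $.each(fields, function (fieldName, snippets) {
            highlights[docId][fieldName] = [];
            $.each(snippets, function (i, snippet) {
                var match = snippet.match(/<em>(.*?)<\/em>/);
                if (match) {
                    highlights[docId][fieldName].push(match[1]);
                }
            });
        });
    });

    When rendering each doc, highlights[doc.UID_PK] then tells you which of its fields contain matches and which terms to wrap in <em> tags, however many fields are listed in hl.fl.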

    Read the article

< Previous Page | 56 57 58 59 60 61 62 63 64 65 66  | Next Page >